Goldenratio Using DIFF EQ

8/7/2019

Math 130A HW3 Selected Solutions

    3 Chapter 3

    3.1 Notes

In this chapter we completely solve and classify all autonomous 2-dimensional linear systems of ODEs,

namely equations of the form

    X′ = [ x ]′ = A [ x ] = AX
         [ y ]      [ y ]

    How do we do this? First, we reduce to some simple cases, called canonical cases, in which the

    solution is more or less apparent. Then we reduce the more general cases to these by a coordinate

transformation. It turns out that all real 2 × 2 matrices can be reduced to one of exactly 3 canonical forms (and if we consider complex-valued solutions, they really can be reduced to one of exactly 2). The solution to the canonical form equation enables us to see the phase portrait of our solution.

    The phase portrait of a differential equation is simply the graph of several different solution

curves in the plane (a solution curve for the differential equation, recall, is given simply by considering a solution (x(t), y(t)) as a parametric curve in the plane). The reason why we care is that the solution

    curves all behave in a certain way which is immediately apparent from looking at the phase portrait.

    This also allows us to classify the stability of the equilibrium points. For 2D linear systems, the point

    (0, 0) is always an equilibrium point, and there are more equilibrium points precisely when det A = 0.

    This is simply because the set of equilibrium points is the solution set to AX = 0, which is a linear

    equation. This means the set of all equilibrium points is either only (0, 0), or a whole line of solutions,

    or the whole plane (but the only time this last case occurs is if A is the zero matrix, which makes for

a rather boring differential equation). This is where all that linear algebra stuff comes in.

When (0, 0) is the only equilibrium point, all the interesting stuff happens in a neighborhood of it.

This is what exploring phase portraits is all about. Like in 1D equations, solutions may go into (0, 0)

    (a sink), or out of (0, 0) (a source). But solutions can do much more interesting things near (0, 0) in 2

dimensions, because there's a lot more room to move around in.

It turns out that the nature of the equilibrium solutions and the phase portraits are completely determined by the nature of the eigenvalues of the 2 × 2 matrix. This is because the canonical forms of the matrices are completely determined by the eigenvalues. By applying the appropriate coordinate transformation, the phase portrait of a general 2D linear system is simply some combination of rescaling, rotation, and shearing of the phase portrait of its corresponding canonical solution, which does not change any features we are interested in.

    3.2 Recap of the Three Canonical Forms

Let's recap the three (and only three) real canonical forms of real matrices.

    3.2.1 Real and Distinct Eigenvalues

If A is a 2 × 2 matrix with real, distinct eigenvalues (or a scalar matrix [ λ 0 ; 0 λ ], the only time eigenvalues can be repeated without being the bad case below), then the canonical form associated to this is the


diagonal matrix

    D = [ λ₁  0  ]
        [ 0   λ₂ ]

Actually, this canonical form is not unique! For example,

    [ λ₂  0  ]
    [ 0   λ₁ ]

is also a canonical form. The

transformation T which changes A into the diagonal matrix is the matrix given by sticking the corresponding eigenvectors side by side: the first entry on the diagonal should be the eigenvalue associated with the eigenvector in the first column of T and the second should be associated with the eigenvector in the second column: if Aν₁ = λ₁ν₁ and Aν₂ = λ₂ν₂, then T = [ν₁ ν₂] transforms A into D, that is, D = T⁻¹AT. If we decided to use the other canonical form, the corresponding transformation would be [ν₂ ν₁] instead. But the upshot is, however you find D, it will always be true that A = TDT⁻¹, and the choice of ordering is simply choosing whether you want the x-axis or the y-axis to correspond to the axis defined by the first eigenvector. However, the choice does affect how you draw the phase portrait for the canonical solution. The canonical solution (i.e. general solution to the canonical equation Y′ = DY) is

    Y(t) = c₁e^{λ₁t} (1, 0) + c₂e^{λ₂t} (0, 1)

which means, as we know from the last homework, that the general solution to the original problem X′ = AX is

    X(t) = T Y(t) = c₁e^{λ₁t} ν₁ + c₂e^{λ₂t} ν₂

The reason is simple: T(1, 0) is the first column of T, which, as we just explicitly constructed, is the first eigenvector ν₁. The rest of that c₁, c₂ and e stuff are just scalars. T(0, 1) is the second column of T, and hence the second eigenvector ν₂.

The matrix T is not unique, either, and in fact, much more non-unique than D. Since any scalar multiple of an eigenvector is an eigenvector, if you rescale the columns of T by (possibly different) nonzero factors, D still remains the same. Rescaling the columns of T is equivalent to multiplying T on the right by a diagonal matrix K. Then (TK)⁻¹A(TK) = K⁻¹T⁻¹ATK = K⁻¹DK = D, since diagonal matrices always commute (because they just multiply componentwise).
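The recipe above is easy to check numerically. Here is a quick sketch with NumPy (the example matrix is my own, not from the text): it verifies D = T⁻¹AT and that rescaling the columns of T leaves D unchanged.

```python
import numpy as np

# Example matrix with real, distinct eigenvalues 2 and -1 (my own choice).
A = np.array([[2.0, 1.0],
              [0.0, -1.0]])

lams, T = np.linalg.eig(A)            # columns of T are eigenvectors
D = np.linalg.inv(T) @ A @ T          # should be diag(lams)
print(np.allclose(D, np.diag(lams)))  # True

# Rescaling the columns of T (right-multiplying by a diagonal K)
# leaves D unchanged, since diagonal matrices commute.
K = np.diag([3.0, -0.5])
D2 = np.linalg.inv(T @ K) @ A @ (T @ K)
print(np.allclose(D, D2))             # True
```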

    3.2.2 Complex Eigenvalues

The second canonical form is for the case of complex eigenvalues. For A a 2 × 2 matrix with real entries (the only ones we consider in this class), if the eigenvalues are complex, they must occur in conjugate pairs. Namely, if λ₁ = α + iβ then λ₂ = α − iβ. In some sense, this means that one eigenvalue already contains all the information, or in essence, the two-dimensionality of the system has shifted from the two entries of a vector into a single complex number. If ν is the complex eigenvector corresponding to α + iβ, then we can say, in essence, that ν also contains all the information of the system. Thus we take its real and imaginary parts, ν₁ = Re(ν) and ν₂ = Im(ν). These two vectors, which are not eigenvectors, a point which bears repeating:

ν₁ and ν₂ are not eigenvectors of A!!!

form a transformation T = [ν₁ ν₂] which takes A into the canonical form

    C = T⁻¹AT = [ α   β ]
                [ −β  α ]


(note that this matrix is not diagonal!). To be honest, sometimes they do call ν₁ and ν₂ generalized eigenvectors, but I give the big warning because applying A to one of these vectors is not going to be λ times the vector. If we did want the diagonal matrix, we would in fact use both complex eigenvectors, and the diagonal matrix would consist of the two complex eigenvalues α + iβ and α − iβ. However, the problem is, the fully complex solution is, by its nature, 4-dimensional: two for the individual complex entries, and two for the components of the vector. It is reasonable to want a real solution to a real system of differential equations, so this less-simple canonical form appropriately extracts the real solutions we want.1 Here's another way of thinking about it: the diagonal matrices form a 2-dimensional subspace (a plane inside the 4-space) of all 2 × 2 matrices, and so do matrices which look like C; but matrices that look like C are like a skewed or rotated version of this plane.

The canonical solution is

    Y(t) = c₁e^{αt} ( cos(βt), −sin(βt) ) + c₂e^{αt} ( sin(βt), cos(βt) )

         = [ cos(βt)   sin(βt) ]  ( c₁e^{αt}(1, 0) + c₂e^{αt}(0, 1) ).
           [ −sin(βt)  cos(βt) ]

We will see why more clearly when we talk about the exponential of a matrix (that's a couple of homeworks away from now). But for now, you can simply check for yourself that it works by calculating Y′ and seeing that it really does equal CY. The geometric interpretation of the matrix of sines and cosines is a clockwise rotation by βt. In other words, the solution rotates about the origin clockwise at an angular velocity of β (if β < 0 then it is counterclockwise at angular velocity |β|). The e^{αt} terms look exactly like the solutions for real and distinct eigenvalues, except the eigenvalues aren't really distinct; they are the single value α, as in the scalar matrix case. This controls how the solution grows and shrinks as it rotates around the origin.

The solution to the original problem is then

    X(t) = T Y(t) = T [ cos(βt)   sin(βt) ]  ( c₁e^{αt}(1, 0) + c₂e^{αt}(0, 1) )
                      [ −sin(βt)  cos(βt) ]

                  = T e^{αt} [ cos(βt)   sin(βt) ]  ( c₁, c₂ ).
                             [ −sin(βt)  cos(βt) ]

This does look pretty complicated, so let's break it down: there are only 3 pieces of real data. First is the transformation matrix (change of coordinates); second is the time-dependent part of the solution (one can think of e^{αt} times the rotation matrix as being ONE thing); and finally the constants from which we form our general solution (the initial condition is X(0) = T(c₁, c₂)).

Again, the transformation matrix T is not unique; one may rescale it by any nonzero real number (however the individual columns cannot be scaled by different factors, unlike the case for real and distinct eigenvalues. Why?)
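A sketch of the same construction in NumPy (the matrix is a made-up example, not from the homework): build T = [Re ν, Im ν] from a complex eigenvector ν and check that T⁻¹AT has the canonical form [ α β ; −β α ].

```python
import numpy as np

A = np.array([[1.0, 2.0],
              [-2.0, 1.0]])              # eigenvalues 1 ± 2i (my example)

lams, vecs = np.linalg.eig(A)
i = int(np.argmax(lams.imag))            # pick the eigenvalue with beta > 0
lam, v = lams[i], vecs[:, i]
alpha, beta = lam.real, lam.imag

T = np.column_stack([v.real, v.imag])    # columns are NOT eigenvectors of A
C = np.linalg.inv(T) @ A @ T
expected = np.array([[alpha, beta],
                     [-beta, alpha]])
print(np.allclose(C, expected))          # True
```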

    3.2.3 Real Repeated Eigenvalues

Finally we come to the exceptional case, if A has a repeated real eigenvalue λ. Again, unless we started off with a diagonal matrix consisting of the same number on the diagonal, the corresponding canonical form must be

    C = [ λ  1 ]
        [ 0  λ ]

The transformation matrix is a little tricky to find. The first column should be an eigenvector ν corresponding to λ. In this exceptional case, you will only have a one-dimensional space of eigenvectors

1 It turns out that matrices that look like C, namely same real numbers on the main diagonal, and opposite real numbers along the other diagonal, behave very much like complex numbers. That these are associated with complex solutions is not a coincidence.


to choose from. The second column is going to be a vector μ which will make {ν, μ} linearly independent. This vector could actually be anything that is not a multiple of ν. Unlike true eigenvectors, you cannot arbitrarily choose any scale you want. This is because, remember, a 1 in the upper right corner contributes a shear to A as a linear mapping; finding this scalar, then, corresponds to compensating for this shear. The shear is a very distinct kind of linear transformation in the plane, as compared to the more geometrically obvious rescaling (analogous to what real eigenvalues do) and rotation (analogous to what purely imaginary eigenvalues do). So what we do is sort of a guess and check (pull a rabbit out of a hat). We pick some independent vector w (which in practice should always be either (1, 0) or (0, 1), since those are the easiest to work with) and we see what happens when we apply A. What we want is for

    Aμ = λμ + ν,

because that's mimicking the behavior of the action of C on the standard basis vectors (1, 0) and (0, 1). In other words, instead of being merely stretched, which would make μ an eigenvector, we want μ to be appropriately sheared. The procedure is: first choose w arbitrary. Then we find Aw − λw = (A − λI)w. This is going to be a scalar multiple of ν, say by κ. Then μ = w/κ works. Note again that μ is also not an eigenvector, and I trust you to keep this in mind even though I don't have it in big font. Finally, T = [ν μ] is the matrix such that C = T⁻¹AT. Let's summarize step-by-step.

Step 1. Let λ be the single eigenvalue, and ν be the single eigenvector. This is your first column of T.

Step 2. Choose w to be linearly independent of ν. You can almost always pick w = (1, 0). The only time you cannot is when ν is a multiple of (1, 0); in that case, pick (0, 1) instead.

Step 3. Compute (A − λI)w. If w = (1, 0) this is just the first column of A minus λw.

Step 4. This must be a multiple of ν. We divide to get this scalar, call it κ.

Step 5. μ = w/κ is the second column of T. So T = [ν μ] = [ν w/κ] is it.
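Here is the step-by-step guide as a NumPy sketch, on a made-up matrix with repeated eigenvalue 2 (the eigenvector ν is hard-coded so each step stays visible):

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [-1.0, 1.0]])           # repeated eigenvalue 2 (my example)
lam = np.trace(A) / 2.0               # the repeated root of the char. poly.

nu = np.array([1.0, -1.0])            # Step 1: an eigenvector for lam
w = np.array([1.0, 0.0])              # Step 2: w independent of nu
r = (A - lam * np.eye(2)) @ w         # Step 3: (A - lam*I) w = kappa * nu
kappa = r[0] / nu[0]                  # Step 4: extract the scalar kappa
mu = w / kappa                        # Step 5: second column of T
T = np.column_stack([nu, mu])

C = np.linalg.inv(T) @ A @ T
print(np.round(C, 10))                # [[2, 1], [0, 2]]
```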

The canonical solution Y, satisfying, as before, Y′ = CY, is

    Y(t) = c₁e^{λt} (1, 0) + c₂e^{λt} (t, 1) = c₁e^{λt} (1, 0) + c₂e^{λt} ( (0, 1) + t(1, 0) )

and the general solution X satisfying X′ = AX is

    X(t) = T Y(t) = c₁e^{λt} ν + c₂e^{λt} (tν + μ)

The transformation T is very non-unique; for simplicity and definiteness, we always pick the choice w = (1, 0), but in reality, any vector outside the span of ν will work.

    Here is, as promised, the rabbit pulled out of the hat:

    Figure 1: The rabbit pulled out of the hat.


    3.3 Drawing Phase Portraits and Classifying Solutions

    The book does not make clear how exactly to draw phase portraits. It is true that when studying

qualitative properties of solutions, the exact way one draws the graphs is not really that important, so long as you capture some essential, general features, like general direction of travel, straight-line

    solutions, which axis the solutions hug, etc. But it is helpful to get some exact formulas which you

    can graph for yourself, first. Once we have the phase portrait, we can then classify the nature of the

    solution (i.e. classify the equilibrium point at (0, 0)).

    What we could do is draw the parametric curves directly. This is easy to do in various mathemat-

    ical programs, but hard to do by hand. To do it by hand, what we need to do is eliminate the time

parameter and draw the y curves as functions of x, or x as a function of y. First let's do the real and

    distinct eigenvalues case. The solution to the canonical form equation is

    Y(t) = c₁e^{λ₁t} (1, 0) + c₂e^{λ₂t} (0, 1)

or x(t) = c₁e^{λ₁t} and y(t) = c₂e^{λ₂t}. However, we can force or shoehorn e^{λ₁t} into looking like e^{λ₂t} by raising it to the λ₂/λ₁ power (provided λ₁ ≠ 0). In other words, e^{λ₂t} = (x/c₁)^{λ₂/λ₁}. So, we have, when λ₁ ≠ 0,

    y = c₂ (x/c₁)^{λ₂/λ₁} = B|x|^{λ₂/λ₁},

where B is just a real constant. In other words, the curves look like scaled power and root functions and their reciprocals. If λ₂ > λ₁ > 0, or λ₂ < λ₁ < 0, then y = B|x|ᵃ for some a > 1, which means they are all parabola-like things that open upward or downward (they are genuine parabolas if a = 2, more flattened or squarish for a > 2, and very pointy for a close to 1). In this case, the curves hug the x-axis, since all the parabola-thingys are tangent to the x-axis at 0. If λ₁ > λ₂ > 0 or λ₁ < λ₂ < 0, then they are root curves: y = B|x|ᵇ for some 0 < b < 1, which means x = C|y|^{1/b}, i.e. the previous case except turned on its side. For a scalar matrix, λ₁ = λ₂, so y = B|x|, a bunch of Vs. But in all cases, if both eigenvalues are positive, (0, 0) is a source, and solutions always travel away from (0, 0); and both eigenvalues negative means (0, 0) is a sink: solutions travel toward it. You should draw arrows on the curves accordingly. Also there are two straight-line solutions, corresponding to the degenerate cases.
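The elimination of t can be checked numerically: along a canonical solution, the quantity y/|x|^{λ₂/λ₁} should be the same constant B at every time (a sketch with eigenvalues of my own choosing):

```python
import numpy as np

l1, l2 = -1.0, 2.0                    # example eigenvalues (mine)
c1, c2 = 0.7, -1.3
t = np.linspace(0.0, 3.0, 50)
x = c1 * np.exp(l1 * t)
y = c2 * np.exp(l2 * t)
B = y / np.abs(x) ** (l2 / l1)        # should be constant along the curve
print(np.allclose(B, B[0]))           # True
```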

For eigenvalues of differing signs, we instead have hyperbola-like things (graphs of y = 1/x, except possibly more flattened). This corresponds to (0, 0) being a saddle, because all these hyperbola-thingys miss the origin. To see which way to draw the arrows, we find the two straight-line solutions; anything traveling near the axis corresponding to the negative eigenvalue points in the direction of the origin, then rounds a corner and starts traveling away from the origin, following the axis corresponding to the positive eigenvalue.

For complex eigenvalues, things are actually easier. The part of the solution which looks like a real solution is the e^{αt} term, which corresponds to the Vs described before, or the rays from the origin. What the matrix does is apply an appropriate twist, corresponding to the rotation that all the solutions undergo with time, turning a bunch of rays into, instead, a bunch of (exponentially growing) spirals. The equilibrium point is a (spiral) sink if α < 0 and a source if α > 0; if α = 0 the point is instead a center, which, along with a saddle, is a genuinely new kind of equilibrium point that occurs.

Finally, the repeated eigenvalues yield very unique and interesting solutions. One can think of an analogy as follows. Recall that integration of a power function xᵃ almost always leads to x^{a+1}/(a+1). This is true for all real a . . . except a = −1. Its integral is not something times x⁰, which is constant, but rather the genuinely new function ln |x|; in the same way, repeated eigenvalues produce the genuinely new factor t in the term te^{λt}.

3.4 Solutions

    Problem 3.1. Match the following to their phase portraits.

    Solution. Obviously, all of them must have complex eigenvalues, because there is some kind of rota-

tional motion in all of them. Curves that close up on one another (i.e. forming ellipses as in plots 2 and 3) must have pure imaginary eigenvalues. Inward spiraling indicates a sink and outward spiraling

    indicates a source.

Problem 3.2. For each of the following systems X′ = AX do the following:

(a) Find the eigenvalues and eigenvectors of A.

(b) Find the matrix T that puts A into canonical form.

(c) Find the general solution of both X′ = AX and Y′ = T⁻¹ATY.

(d) Sketch the phase portraits of both systems.

    Solution.

i. A = [ 0 1 ; 1 0 ]. The characteristic polynomial is λ² − 1 = 0, or eigenvalues λ₁ = 1 and λ₂ = −1. (1, 1) is an eigenvector corresponding to 1 and (1, −1) is an eigenvector for −1. This is part (a). So writing them in order we find:

    T = [ 1   1 ]
        [ 1  −1 ]

and the canonical form matrix (you don't actually have to carry this calculation out, because the theory shows that you will get the diagonal matrix consisting of the eigenvalues. You could do it as a check, though. . . )

    D = T⁻¹AT = [ 1   0 ]
                [ 0  −1 ]

(this is part (b)). This enables us to write down the general solutions to Y′ = DY and X′ = AX:

    Y(t) = c₁eᵗ (1, 0) + c₂e⁻ᵗ (0, 1)

    X(t) = T Y(t) = c₁eᵗ (1, 1) + c₂e⁻ᵗ (1, −1).

And that is part (c). Consulting our guide above, we see that the solution curves for Y give us the family y = C|x|^{λ₂/λ₁} = C|x|⁻¹, which gives hyperbolas which open along y = ±x, and the equilibrium point (0, 0) is a saddle. Since the canonical x-solution is the one with the positive eigenvalue, canonical solutions flow outward to infinity along the x-axis. For the original problem, they flow outward along the x-eigenvector, (1, 1).

ii. A = [ 1 1 ; 1 0 ]. The characteristic polynomial is λ² − λ − 1, leading to roots λ = (1 ± √5)/2. We take λ to be the solution with the + sign; λ = (1 + √5)/2 is the golden ratio, for anyone curious. The second eigenvalue we'll call μ. This leads to another saddle, since the eigenvalues are real, distinct, and


Figure 3: Part (d), phase portraits for matrix (i). (a) Phase Portrait for Canonical Solution; (b) Actual Phase Portrait.

with opposite signs. The eigenvectors are then (1, ½(−1 + √5)) and (1, ½(−1 − √5)). This gives us T and D:

    T = [ 1           1          ]        D = [ (1+√5)/2   0        ]
        [ (−1+√5)/2   (−1−√5)/2 ]            [ 0          (1−√5)/2 ]

So therefore

    Y(t) = c₁e^{λt} (1, 0) + c₂e^{μt} (0, 1)

    X(t) = T Y(t) = c₁e^{λt} (1, ½(−1 + √5)) + c₂e^{μt} (1, ½(−1 − √5)).

We note that y = C|x|^{μ/λ}, where the exponent μ/λ is negative and of magnitude less than 1. So the decay is slower as x → ∞, and thus the saddle hugs the y-axis, or the axis corresponding to the eigenvector (1, ½(−1 − √5)).
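Since this problem is the document's namesake, it is worth a numerical check that the larger eigenvalue of this matrix really is the golden ratio; as a well-known aside, powers of this same matrix generate the Fibonacci numbers (my own verification sketch, not part of the original solution):

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [1.0, 0.0]])
lams = np.linalg.eigvals(A)           # real, since A is symmetric
phi = (1.0 + np.sqrt(5.0)) / 2.0      # the golden ratio
print(np.isclose(lams.max(), phi))        # True
print(np.isclose(lams.min(), 1.0 - phi))  # the other root is 1 - phi

# A^5 = [[F6, F5], [F5, F4]] = [[8, 5], [5, 3]]
print(np.allclose(np.linalg.matrix_power(A, 5), [[8, 5], [5, 3]]))
```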

iii. A = [ 1 1 ; −1 0 ]. The char. poly. is now λ² − λ + 1, so that λ = (1 ± i√3)/2, complex roots. We let α = 1/2 and β = √3/2, so that λ = α + iβ is the eigenvalue with a + sign. λ is a sixth root of unity. The eigenvector is then ν = (1, λ − 1) = (1, ½(−1 + i√3)) (the second component is a cube root of unity). So this gives us the transformation vectors ν₁ = Re(ν) = (1, −½) and ν₂ = Im(ν) = (0, √3/2), or

    T = [ 1     0    ]        C = [ 1/2     √3/2 ]
        [ −1/2  √3/2 ]            [ −√3/2   1/2  ]

This leads to the canonical solution

    Y(t) = c₁e^{t/2} ( cos(√3t/2), −sin(√3t/2) ) + c₂e^{t/2} ( sin(√3t/2), cos(√3t/2) ).


Figure 4: Part (d), phase portraits for matrix (ii). (a) Phase Portrait for Canonical Solution; (b) Actual Phase Portrait.

By our choice of β, and the fact that α > 0, it is a spiral source, and so spirals outward clockwise. The solution to the original problem is

    X(t) = c₁e^{t/2} ( cos(√3t/2),  −½cos(√3t/2) − (√3/2)sin(√3t/2) )
         + c₂e^{t/2} ( sin(√3t/2),  −½sin(√3t/2) + (√3/2)cos(√3t/2) ).

By our nifty shortcut in problem 3.9, we can determine the direction of the spiralling for the solution by looking at the sign of the second entry of A. It is 1 > 0, so the transformation T preserves orientation, and the solution spirals outward clockwise as well.
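A numerical sanity check of this part (my own sketch, not in the original): the eigenvalues of A should be (1 ± i√3)/2, and det T > 0 confirms T is orientation-preserving.

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [-1.0, 0.0]])
lams = np.linalg.eigvals(A)
print(np.allclose(lams.real, [0.5, 0.5]))                  # alpha = 1/2
print(np.allclose(np.abs(lams.imag), np.sqrt(3.0) / 2.0))  # beta = sqrt(3)/2

T = np.array([[1.0, 0.0],
              [-0.5, np.sqrt(3.0) / 2.0]])
print(np.linalg.det(T) > 0)            # True: T preserves orientation
```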

Figure 5: Part (d), phase portraits for matrix (iii). (a) Phase Portrait for Canonical Solution; (b) Actual Phase Portrait.


iv. A = [ 1 1 ; −1 3 ]. The characteristic polynomial is λ² − 4λ + 4, which leads to the repeated root λ = 2. Since the matrix is not already diagonal, we know its canonical form must be C = [ 2 1 ; 0 2 ], that is, it must have a 1 in the upper right corner. Our one eigenvector is ν = (1, 2 − 1) = (1, 1). Now let's take w = (1, 0). Then Aw = (1, −1) and 2w = (2, 0). So subtracting the two, we get (−1, −1). This is κν, and so κ = −1 in the above step-by-step guide. Therefore we should take μ = w/(−1) = (−1, 0). All in all, we have

    T = [ 1  −1 ]        C = [ 2  1 ]
        [ 1   0 ]            [ 0  2 ]

The canonical solution to Y′ = CY is

    Y(t) = c₁e^{2t} (1, 0) + c₂e^{2t} (t, 1)

Now the solution to the original problem is:

    X(t) = T Y(t) = c₁e^{2t} (1, 1) + c₂e^{2t} (t − 1, t)

Since λ = 2 > 0, (0, 0) is a source and solutions tend away from the origin.

Figure 6: Part (d), phase portraits for matrix (iv). (a) Phase Portrait for Canonical Solution; (b) Actual Phase Portrait.

v. A = [ 1 1 ; −1 −3 ]. The characteristic polynomial is λ² + 2λ − 2, which gives λ = (−2 ± √(4 + 8))/2 = −1 ± √3. This will give a saddle (darn it, I was hoping to get two real eigenvalues of the same sign).

vi. A = [ 1 1 ; 1 −1 ]. The characteristic polynomial is λ² − 2 = 0, or λ = ±√2. Great. Another saddle. Come to office hours if you really want to see these.


    Problem 3.3. Solve the following harmonic oscillators. See solution to problem 4, which will be in

    the HW 4 solution.

Problem 3.4. Consider the harmonic oscillator, x″ + bx′ + kx = 0, which, as a system, looks like:

    X′ = [ x ]′ = [  0    1 ] X
         [ y ]    [ −k   −b ]

(This is a tried and true problem, and you've seen it many times already. A complete solution will be given in the HW4 solution, because I want you to see it in the context of exploring parameter spaces.)
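A quick sketch checking the conversion to a system (with b = 0 and k = 1 chosen by me): x(t) = cos t solves x″ + x = 0, so the pair (x, x′) should satisfy X′ = AX.

```python
import numpy as np

b, k = 0.0, 1.0                          # example parameters (mine)
A = np.array([[0.0, 1.0],
              [-k, -b]])

t = np.linspace(0.0, 2.0 * np.pi, 2001)
X = np.vstack([np.cos(t), -np.sin(t)])   # rows: x = cos t and y = x' = -sin t
dX = np.gradient(X, t, axis=1)           # numerical derivative of each row
print(np.allclose(dX[:, 1:-1], (A @ X)[:, 1:-1], atol=1e-3))  # True
```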

    Problem 3.5. Sketch the phase portrait for a certain family of DiffEqs (I will solve this problem in

    the HW4 solution).

Problem 3.7. Show that the canonical system with repeated eigenvalues, Y′ = [ λ 1 ; 0 λ ] Y, has solutions that are tangent to (1, 0) (i.e. tangent to the x-axis) at the origin.

Solution. This is a very good problem, though it may seem intimidating. First, one should note that no solution to this ever actually reaches (0, 0) (unless we start at (0, 0), and thus stay there forever). So it is not a matter of plugging in actual values. Instead, the solution either approaches (0, 0) as t → ∞ (if λ < 0) or as t → −∞ (if λ > 0). Now tangent to the x-axis means simply that dy/dx = 0 at (0, 0).

What is dy/dx? Well, if you remember the old trick of regarding derivatives as fractions, it is (dy/dt)/(dx/dt) = y′/x′. But, setting (x(t), y(t)) to our canonical solution Y(t), we find that x′(t) = λc₁e^{λt} + λc₂te^{λt} + c₂e^{λt} while y′(t) = λc₂e^{λt}. Finding y′/x′, we have that the e^{λt}'s cancel out, so we have

    dy/dx = λc₂ / (λc₁ + λtc₂ + c₂)

which quite plainly goes to 0 when taking t → ±∞ (precisely which limit to take, +∞ or −∞, is still important even though the limits happen to be the same, because it affects what point dy/dx is being taken at; we want it to be taken at the origin, so we must choose the limit at which the solution actually goes to the origin).
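Plugging large |t| into the slope formula shows the tangency numerically (the values of λ, c₁, c₂ below are my own, and the helper name slope is invented here):

```python
# Slope formula from the solution above: dy/dx = lam*c2 / (lam*c1 + lam*c2*t + c2)
lam, c1, c2 = -1.0, 0.4, 2.0

def slope(t):
    return lam * c2 / (lam * c1 + lam * c2 * t + c2)

for t in (1e3, 1e6, -1e6):
    print(abs(slope(t)) < 1e-2)       # True: the slope tends to 0
```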

    Problem 3.9. Determine a computable condition for when a solution with complex eigenvalues spirals

    counterclockwise or clockwise.

Solution. This is actually a very interesting problem. A very, very interesting problem. Let A be our matrix, with eigenvalues λ = α ± iβ and canonical form

    C = [ α   β ]
        [ −β  α ]

We may as well assume that β > 0, because we can simply reorder our eigenvalues if not. In practice, β occurs as the term with the square root in the solution to the quadratic, so taking the eigenvalue to be the one with positive imaginary part (i.e. taking the plus sign in the quadratic formula), this will give us a positive β anyway.

By pure inspection, or whatever, the canonical solution, with positive β, always rotates clockwise. This is because the rotation matrix that occurs in the canonical solution denotes clockwise rotation at a rate of β radians per unit time. But how do we get the original solution from the canonical solution? The transformation matrix T, of course. If T is orientation-preserving, that is, det T > 0, then the spirals will go the same way the canonical solution goes (which, with our convention β > 0, is clockwise). On the


    other hand, if the transformation is orientation-reversing, i.e. det T < 0, then the behavior is opposite

    from that of the canonical solution. How do we determine the nature of T from A?

Remember our Totally Awesome Shortcut. If (a b) is the first row of A, then an eigenvector associated to λ = α + iβ is ν = (b, λ − a) = (b, (α − a) + iβ). Separating into real and imaginary parts, we get ν₁ = (b, α − a) and ν₂ = (0, β). So we have

    T = [ b      0 ]
        [ α − a  β ]

So det T = bβ which, since β > 0, is positive exactly when b > 0 and negative when b < 0. Therefore, if the upper-right entry is positive, the solutions rotate clockwise, and if it is negative, they rotate counterclockwise. This is the so-called computable condition: look at the sign of the second entry!
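One way to sanity-check the rule (my own sketch, with the helper name rotation_sense invented here): the z-component of X × X′, namely xy′ − yx′, is positive where the motion is counterclockwise and negative where it is clockwise, and at the test point (0, 1) it works out to exactly −b.

```python
import numpy as np

def rotation_sense(A):
    """Sign of x*y' - y*x' at the test point (0, 1); this equals -b there."""
    x, y = 0.0, 1.0
    xp, yp = A @ np.array([x, y])
    return x * yp - y * xp

A_cw = np.array([[1.0, 1.0], [-1.0, 0.0]])    # b = 1 > 0
A_ccw = np.array([[0.0, -2.0], [2.0, 0.0]])   # b = -2 < 0
print(rotation_sense(A_cw) < 0)      # True: clockwise
print(rotation_sense(A_ccw) > 0)     # True: counterclockwise
```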

Problem 3.10. Consider the system X′ = AX where the trace of A is not zero, but det A = 0. Find the general solution of this system and sketch the phase portrait.

Solution. The characteristic polynomial is λ² − (tr A)λ = 0, or λ₁ = 0 and λ₂ = tr A. The eigenvalues are therefore always real and distinct. Let A = [ a b ; c d ]. Then the 0-eigenvector is (b, −a), and the one corresponding to a + d is (b, (a + d) − a) = (b, d). Thus the transformation matrix and canonical form are

    T = [ b   b ]        D = [ 0   0     ]
        [ −a  d ]            [ 0   a + d ]

The canonical solution is Y(t) = c₁ (1, 0) + c₂e^{(a+d)t} (0, 1) = (c₁, c₂e^{(a+d)t}). The phase portrait is a bunch of straight, vertical lines; the solutions move away from the x-axis (the line of equilibrium points) if the trace is positive, and toward it if the trace is negative. The solution to the original problem is

    X(t) = c₁ (b, −a) + c₂e^{(a+d)t} (b, d)

Let's actually ask ourselves what the canonical system means. Let τ = a + d. Then we have that the differential equation is x′ = 0 and y′ = τy, with τ a constant. This says y = c₀e^{τt}. In a very real sense, the coordinate x is redundant. It is FAPP2 a 1D equation. One could regard x as a parameter, but the solution to this canonical form doesn't even change with the parameter x. The phase portrait is in fact the bifurcation diagram, where the line y = 0 is the line of equilibrium points. They are all sources if τ > 0 and sinks if τ < 0.

To really drive the point home, consider the family of ODEs y′ = cy + a where a is now the parameter. Then setting x = a we have

    [ x ]′ = [ 0  0 ] [ x ]
    [ y ]    [ 1  c ] [ y ]

Then our canonical transformation is

    T = [ c   0 ]
        [ −1  c ]

2 For all practical purposes.


and we have

    X(t) = c₁ (c, −1) + c₂e^{ct} (0, c)

The phase portrait is still a collection of vertical lines, with the equilibrium points on the line x + cy = 0, and they are sources or sinks depending on whether c > 0 or c < 0. Consider the initial condition X(0) = (x, y₀) (x is always constant). This says that T(c₁, c₂) = (x, y₀), or (c₁, c₂) = T⁻¹(x, y₀) = (cx, x + cy₀)/c² = (x/c, x/c² + y₀/c).

The family of solutions is then y(t) = −a/c + (a/c + y₀)e^{ct}, and the phase portrait for the 2D system is indeed the bifurcation diagram of this family.
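A sketch checking the 1D family's solution formula against the ODE y′ = cy + a, using a finite-difference derivative (the parameter values are my own):

```python
import numpy as np

a, c, y0 = 2.0, -0.5, 1.0             # example parameters (mine)
t = np.linspace(0.0, 4.0, 2001)
y = -a / c + (a / c + y0) * np.exp(c * t)
dy = np.gradient(y, t)                # numerical derivative

print(np.isclose(y[0], y0))                                # True
print(np.allclose(dy[1:-1], c * y[1:-1] + a, atol=1e-3))   # True
```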

Problem 3.14. Let A be a 2 × 2 matrix. Show that A satisfies its own characteristic equation: A² − (tr A)A + (det A)I = 0. Use this to show that if A has real, repeated eigenvalues, then for any v in R², v is either an eigenvector for A or (A − λI)v is an eigenvector for A.

Solution. The fact that A satisfies its own characteristic polynomial is known as the Cayley-Hamilton

Theorem. We have that if A = [ a b ; c d ], then

    A² = [ a² + bc    b(a + d) ]
         [ c(a + d)   d² + bc  ]

Now

    A² − (a + d)A = [ a² + bc − a(a + d)   0                   ] = [ bc − ad   0       ]
                    [ 0                    d² + bc − d(a + d) ]    [ 0         bc − ad ]

which clearly is −(det A)I. Now if A has repeated eigenvalues, let λ be the eigenvalue for A. The characteristic polynomial must be p(x) = (x − λ)². Therefore p(A) = (A − λI)² = 0. If v is not an eigenvector for A, then (A − λI)v ≠ 0. This means

    0 = (A − λI)²v = A(A − λI)v − λ(A − λI)v.

Therefore A[(A − λI)v] = λ[(A − λI)v], which, along with the fact that (A − λI)v ≠ 0, shows that (A − λI)v is an eigenvector for A.

Problem 3.16. Consider the nonlinear system

    x′ = |y|
    y′ = −x

    Sketch its phase portrait.

    Solution. First off, let us not forget a very general principle. Differential equations are local in nature.

    In other words, at a particular point in the plane, the solutions depend only on what is happening in

a small neighborhood about that point. Therefore, in particular, at any point (x₀, y₀) away from the x-axis (i.e. y = 0), we have that the differential equation, in a small enough neighborhood of the point (x₀, y₀), is either x′ = y, y′ = −x (if y₀ > 0) or x′ = −y, y′ = −x, if y₀ < 0. What does this mean? It means that we simply solve one differential equation in the upper half-plane, namely

    X′ = [  0   1 ] X
         [ −1   0 ]

and in the lower half-plane we solve

    X′ = [  0  −1 ] X.
         [ −1   0 ]


and literally just stick the results together at the x-axis. The former system is a system with purely imaginary eigenvalues and therefore has a phase portrait that looks like a bunch of semicircles in the upper half-plane (they would be full circles if we were solving the equation in the whole plane). On the other hand, the latter system has real and distinct eigenvalues, ±1. It is a bunch of hyperbolas. The eigenvectors are (1, 1) and (1, −1), and the transformation it describes is a rotation by π/4, a reflection, and a scaling by a factor of √2.

Figure 7: Phase portrait for this system.