Page 1

Chapter 7: Systems of Linear Differential Equations

Philip Gressman

University of Pennsylvania


Page 2

The Beginning

Definition

Every vector function (with values in $\mathbb{R}^n$) which is $k$-times continuously differentiable on an interval $I$ is said to belong to $C^k(I, \mathbb{R}^n)$.

Basic Fact

The space $C^k(I, \mathbb{R}^n)$ is a vector space over the reals under pointwise addition and scalar multiplication. This vector space is infinite-dimensional.

Important Transformations

Differentiation maps $C^k(I, \mathbb{R}^n)$ to $C^{k-1}(I, \mathbb{R}^n)$ for $k > 0$ and maps $C^\infty(I, \mathbb{R}^n)$ to itself. If $A(t)$ is a $k$-times differentiable matrix-valued function, then multiplication by $A$ on the left also maps $C^k(I, \mathbb{R}^n)$ to itself.


Page 3

Linear Independence

Definition

Vector functions $x_1, \dots, x_\ell$ are, as always, called linearly independent when the only constants $c_1, \dots, c_\ell$ for which

$$c_1 x_1 + \cdots + c_\ell x_\ell = 0$$

are $c_1 = \cdots = c_\ell = 0$.

IMPORTANT: Remember that when we say that a vector function equals zero, that means it equals the old-fashioned zero vector at every single point.

Linear DEPENDENCE is hard: if $x_1, \dots, x_\ell$ are linearly independent at even a single point, then as vector functions they are linearly independent. The converse is not true: they might even be linearly dependent at every point and still be linearly independent as vector functions.


Page 4

Linear Independence

Linear Independence

We say that time-dependent vectors $\vec X_1, \dots, \vec X_n$ are linearly independent on an interval $I$ when the only constants $c_1, \dots, c_n$ such that

$$c_1\vec X_1(t) + c_2\vec X_2(t) + \cdots + c_n\vec X_n(t) \equiv 0$$

on the entire interval are all zeros.

Ind: $\begin{bmatrix} 1 \\ t \end{bmatrix}$, $\begin{bmatrix} 0 \\ 1 \end{bmatrix}$, $\begin{bmatrix} \sin t \\ \cos t \end{bmatrix}$  Dep: $\begin{bmatrix} \sin^2 t \\ 1 \end{bmatrix}$, $\begin{bmatrix} 1 \\ 1 + t \end{bmatrix}$, $\begin{bmatrix} \cos^2 t \\ t \end{bmatrix}$

Ind: $\begin{bmatrix} 0 \\ t \end{bmatrix}$, $\begin{bmatrix} 0 \\ t^2 \end{bmatrix}$.

(The middle triple is dependent because the first vector plus the third minus the second is identically zero. The final pair is independent as vector functions even though the two vectors are parallel at every fixed $t$.)


Page 5

The Wronskian

Given time-dependent column vectors $\vec X_1, \dots, \vec X_n$, each of length $n$, we form the Wronskian as the determinant of the matrix whose columns are exactly $\vec X_1, \dots, \vec X_n$, i.e.,

$$W(\vec X_1, \dots, \vec X_n) = \det(\vec X_1, \dots, \vec X_n).$$

FACT: If the Wronskian is nonzero even at a single point, then $\vec X_1, \dots, \vec X_n$ must be linearly independent. In fact, they might even be linearly independent when the Wronskian is always zero, but for solutions to first-order systems of ODEs this pathology does not happen.
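As a quick check of this fact (a sketch, not from the slides; sympy and the particular pairs are my own choices), here are the Wronskians of two pairs from the examples on the previous page:

```python
import sympy as sp

t = sp.symbols('t')

# Wronskian of [1, t] and [0, 1]: nonzero, so the pair is independent.
W1 = sp.Matrix([[1, 0], [t, 1]]).det()
print(W1)  # 1 -> independent

# Wronskian of [0, t] and [0, t^2]: identically zero, yet the pair is
# linearly independent as vector functions (neither is a constant
# multiple of the other).  This is exactly the pathology mentioned
# above; it cannot occur for solutions of a first-order linear system.
W2 = sp.Matrix([[0, 0], [t, t**2]]).det()
print(sp.simplify(W2))  # 0
```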


Page 6

Linear ODE Systems: §7.1–7.2

First-Order Linear Systems

A first-order linear system may be written in the form

$$\frac{d}{dt}\vec X = A(t)\vec X + \vec G(t).$$

Here $A$ is an $n \times n$ matrix whose entries may or may not depend on $t$, $\vec G(t)$ is a column vector of length $n$ which is fully described in the problem itself, and $\vec X(t)$ is an unknown column vector of length $n$ whose entries may depend on $t$.

General Solutions

The general solution is a complete listing of all solution vectors.

Initial Value Problem

This is the problem of finding the specific solution for which $\vec X(0)$ is prescribed.


Page 7

Important General Facts about First-Order Systems

• Higher-order systems of ODEs can always be recast as a system of first-order ODEs with more unknown functions (a small example follows this list).

• Systems of ODEs can always be solved by elimination; this is, however, a labor-intensive way to do it, since the unknown constants will be related and you'll have to do a lot of linear equation solving.
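As a small illustration of the first bullet (my example, not from the slides): the second-order equation $y'' + p(t)y' + q(t)y = 0$ becomes a first-order system by introducing $x_1 = y$ and $x_2 = y'$:

$$\frac{d}{dt}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix} = \begin{bmatrix} 0 & 1 \\ -q(t) & -p(t) \end{bmatrix}\begin{bmatrix} x_1 \\ x_2 \end{bmatrix}.$$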


Page 8

Homogeneous Equations

Definition

The equation $\frac{d}{dt}\vec X = A(t)\vec X$ is called homogeneous. If your equation is given as $\frac{d}{dt}\vec X = A(t)\vec X + \vec G(t)$ for some nonzero $\vec G(t)$, then the equation $\frac{d}{dt}\vec X = A(t)\vec X$ is called the associated homogeneous first-order system.

Superposition Principle

If $\vec X_1(t)$ and $\vec X_2(t)$ are solutions of the homogeneous ODE $\frac{d}{dt}\vec X = A(t)\vec X$, then $c_1\vec X_1 + c_2\vec X_2$ will also be a solution. The same can be said for any linear combination of any number of solutions (i.e., more than two solutions).


Page 9

§7.3: Theory of First-Order Systems

Theorem: Existence and Uniqueness

The IVP $x(t_0) = x_0$, $x' = A(t)x(t) + b(t)$, for $x, b$ vector-valued functions of time and $A$ a matrix-valued function of time, has a unique $C^1$ solution on any interval $I$ containing $t_0$ on which $A$ and $b$ are continuous.

Consequences

Theorem: When $x(t)$ is a time-dependent vector in $\mathbb{R}^n$, the set of solutions of $x'(t) = A(t)x(t)$ on any interval is an $n$-dimensional vector space.

Theorem: Solutions $x_1, \dots, x_n$ are linearly independent if and only if the Wronskian is never zero.

Theorem: If $x_p$ is any solution to $x' = Ax + b$, then the general solution to this ODE is given by $x = x_c + x_p$, where $x_c$ ranges over all solutions of the associated homogeneous equation.


Page 10

Inhomogeneous Systems

Finding the general solution of an inhomogeneous system is only slightly more difficult than solving a homogeneous one.

1. You must first find some solution $\vec X_p$. It is called a particular solution.

2. The general solution of the inhomogeneous system will always be of the form

$$\vec X = c_1\vec X_1 + \cdots + c_n\vec X_n + \vec X_p,$$

where $\vec X_1, \dots, \vec X_n$ are a fundamental set of solutions (i.e., a complete set) for the associated homogeneous system.


Page 11

Homogeneous Linear Systems: §7.4

We consider a system of ODEs with the form

$$\frac{d}{dt}\vec X = A\vec X$$

where $\vec X$ is a column vector of length $n$ and $A$ is an $n \times n$ matrix with constant entries. We begin by looking for very simple solutions, then use the superposition principle to describe the more complicated ones.

The Simplest Case

An example of a very simple solution is one whose direction does not change (only the magnitude). It would be expressible in the form

$$\vec X(t) = f(t)\vec E$$

where $\vec E$ is a constant vector and $f$ is some unknown function of $t$.


Page 12

When you assume that a solution has some special form, the assumption is known as an ansatz. Making one is completely reasonable and mathematically rigorous, because at worst you learn that no solutions of that form exist. For us, we plug our ansatz

Our Ansatz

$$\vec X(t) = f(t)\vec E$$

into the equation and get

$$f'(t)\vec E = f(t)A\vec E \quad\Rightarrow\quad A\vec E = \frac{f'(t)}{f(t)}\vec E.$$

If the equation must be true at all times, then $f'(t)/f(t)$ must be constant. Call the constant $\lambda$; then $f(t) = Ce^{\lambda t}$, and we arrive at the eigenvector equation $A\vec E = \lambda\vec E$.


Page 13

The Conclusion

If $\vec E$ is an eigenvector of $A$ with eigenvalue $\lambda$, then

$$\vec X(t) = Ce^{\lambda t}\vec E$$

solves the first-order system

$$\frac{d}{dt}\vec X = A\vec X.$$

Moreover, linearly independent eigenvectors give linearly independent solutions of the system.

If $A$ is $n \times n$ and has $n$ linearly independent eigenvectors $\vec E_1, \dots, \vec E_n$ with eigenvalues $\lambda_1, \dots, \lambda_n$, then the general solution of the system will be

$$\vec X(t) = C_1 e^{\lambda_1 t}\vec E_1 + \cdots + C_n e^{\lambda_n t}\vec E_n.$$
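A minimal numerical sketch of this recipe (the matrix and initial condition are invented for illustration, not from the slides): `numpy.linalg.eig` supplies the eigenpairs, and solving a linear system at $t = 0$ fixes the constants.

```python
import numpy as np

# Invented example: X' = A X with a diagonalizable A.
A = np.array([[1.0, 2.0],
              [2.0, 1.0]])          # eigenvalues 3 and -1
lam, E = np.linalg.eig(A)           # columns of E are eigenvectors

# General solution: X(t) = C1 e^{lam1 t} E1 + C2 e^{lam2 t} E2.
# Fit the constants to an initial condition X(0) = V:
V = np.array([1.0, 0.0])
C = np.linalg.solve(E, V)           # since X(0) = E @ C

def X(t):
    return E @ (C * np.exp(lam * t))

print(X(0.0))   # ~ [1, 0], reproducing the initial condition
```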


Page 14

Complex Eigenvalues

If $A$ is a real matrix with complex eigenvalue $\lambda = \alpha + i\beta$ and eigenvector $\vec E = \vec E_{\mathrm{re}} + i\vec E_{\mathrm{im}}$, then

$$\vec X(t) = e^{\alpha t + i\beta t}(\vec E_{\mathrm{re}} + i\vec E_{\mathrm{im}})$$

will be a solution. Since $A$ is real, this can only happen if the real part and the imaginary part are each solutions by themselves. Expanding with Euler's formula $e^{i\beta t} = \cos\beta t + i\sin\beta t$, we conclude that

$$\vec X_{\mathrm{re}}(t) = e^{\alpha t}(\cos\beta t)\vec E_{\mathrm{re}} - e^{\alpha t}(\sin\beta t)\vec E_{\mathrm{im}}$$

$$\vec X_{\mathrm{im}}(t) = e^{\alpha t}(\sin\beta t)\vec E_{\mathrm{re}} + e^{\alpha t}(\cos\beta t)\vec E_{\mathrm{im}}$$

are linearly independent real solutions of the system of ODEs.
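A small numerical check of this (the matrix is invented, and the finite-difference test is just a sanity check, not part of the slides):

```python
import numpy as np

# Invented example with complex eigenvalues 0.1 +/- 1i.
A = np.array([[0.1, -1.0],
              [1.0,  0.1]])
lam, E = np.linalg.eig(A)
l, e = lam[0], E[:, 0]                 # one complex eigenpair suffices

def X_re(t):
    return np.real(np.exp(l * t) * e)  # e^{at}(cos(bt) E_re - sin(bt) E_im)

def X_im(t):
    return np.imag(np.exp(l * t) * e)  # e^{at}(sin(bt) E_re + cos(bt) E_im)

# Both are real solutions; check X' ~ A X by a finite difference:
h, t0 = 1e-6, 0.7
print((X_re(t0 + h) - X_re(t0)) / h)   # nearly equal to the next line
print(A @ X_re(t0))
```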


Page 15

“Missing” Eigenvectors

If $A$ does not have $n$ linearly independent eigenvectors, the ansatz gives only a partial answer and we end up missing some solutions. We fix this by making a better ansatz (with increasing complexity depending on how bad the situation is). For example:

New Ansatz

$$\vec X(t) = e^{\lambda t}\vec E_2 + te^{\lambda t}\vec E.$$

Plug it into $\frac{d}{dt}\vec X = A\vec X$, and we get

$$e^{\lambda t}\left(\lambda\vec E_2 + (1 + \lambda t)\vec E\right) = e^{\lambda t}\left(A\vec E_2 + tA\vec E\right).$$

We must have $A\vec E = \lambda\vec E$ and $(A - \lambda I)\vec E_2 = \vec E$.

We must take $\vec E$ to be an eigenvector, but $\vec E_2$ satisfies a different equation and is called a generalized eigenvector.
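Here is a symbolic sketch of this construction for an invented defective $2 \times 2$ matrix (the eigenvector and generalized eigenvector are read off by hand, then the ansatz is verified with sympy):

```python
import sympy as sp

t = sp.symbols('t')

# Invented defective example: eigenvalue 2 with only one eigenvector.
A   = sp.Matrix([[2, 1],
                 [0, 2]])
lam = 2
E   = sp.Matrix([1, 0])   # eigenvector:             (A - 2I) E  = 0
E2  = sp.Matrix([0, 1])   # generalized eigenvector: (A - 2I) E2 = E

# The two solutions from the ansatz:
X1 = sp.exp(lam * t) * E                          # plain eigenvector solution
X2 = sp.exp(lam * t) * E2 + t * sp.exp(lam * t) * E

# Verify X2' = A X2 symbolically:
print(sp.simplify(X2.diff(t) - A * X2))           # zero vector
```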


Page 16

“Missing” Eigenvectors in General

General Ansatz

$$\vec X(t) = e^{\lambda t}\left[\frac{t^n}{n!}\vec E_n + \cdots + t\vec E_1 + \vec E_0\right]$$

Generalized Eigenvectors

The general ansatz will solve the system when

$$(A - \lambda I)\vec E_n = 0, \quad (A - \lambda I)\vec E_{n-1} = \vec E_n, \quad \dots, \quad (A - \lambda I)\vec E_0 = \vec E_1.$$


Page 17

§7.8: Solution by diagonalization

Given the first-order system

$$\frac{d}{dt}\vec X = A\vec X,$$

one useful technique you should be able to use is solution by diagonalization. Here the idea is like substitution: you assume $\vec X = P\vec Y$ for some constant invertible matrix $P$ and then try to solve for $\vec Y$ instead of $\vec X$:

$$\frac{d}{dt}(P\vec Y) = A(P\vec Y) \quad\Rightarrow\quad \frac{d}{dt}\vec Y = \left(P^{-1}AP\right)\vec Y.$$

So if A is diagonalizable, you can do the following:

1. Solve the system $\frac{d}{dt}\vec Y = D\vec Y$, where $D$ is the diagonalization of $A$ (so the columns of $P$ are the corresponding eigenvectors of $A$).

2. To solve the original system, simply set $\vec X = P\vec Y$ (a short symbolic sketch of both steps follows).
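The sketch below uses an invented matrix; `diagonalize` is sympy's built-in factorization $A = PDP^{-1}$:

```python
import sympy as sp

t = sp.symbols('t')
C1, C2 = sp.symbols('C1 C2')

# Invented diagonalizable example.
A = sp.Matrix([[1, 2],
               [2, 1]])
P, D = A.diagonalize()              # A = P D P^{-1}

# Step 1: solve Y' = D Y componentwise (D is diagonal).
Y = sp.Matrix([C1 * sp.exp(D[0, 0] * t),
               C2 * sp.exp(D[1, 1] * t)])

# Step 2: X = P Y solves the original system X' = A X.
X = P * Y
print(sp.simplify(X.diff(t) - A * X))   # zero vector
```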


Page 18

§7.8: Solution by exponentiation

Matrix exponentiation

$$e^{At} := I + tA + \frac{t^2}{2!}A^2 + \frac{t^3}{3!}A^3 + \cdots$$

Solution of the IVP

There is exactly one solution to the IVP

$$\frac{d}{dt}\vec X(t) = A\vec X(t), \qquad \vec X(0) = \vec V,$$

and it equals $\vec X(t) = e^{At}\vec V$.

There are several tricks that you might use to carry out the infinite sum and write down a simple formula that equals $e^{At}$.
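A minimal numerical sketch (the matrix and initial vector are invented; `scipy.linalg.expm` computes the matrix exponential numerically rather than via the series):

```python
import numpy as np
from scipy.linalg import expm

# Invented example IVP: X' = A X, X(0) = V.
A = np.array([[ 0.0,  1.0],
              [-2.0, -3.0]])       # eigenvalues -1 and -2
V = np.array([1.0, 0.0])

def X(t):
    return expm(A * t) @ V          # the unique solution e^{At} V

print(X(0.0))                       # [1, 0]: matches the initial condition
```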


Page 19

§7.8: Solution by exponentiation

1. Use diagonalization to find a pattern for the powers $A, A^2, A^3, A^4, \dots$. The exponential of a diagonal matrix is simply the exponential of each of the diagonal entries.

2. Solve it like a system of equations: you can write $e^{At} = b_0(t)I + b_1(t)A + \cdots + b_{n-1}(t)A^{n-1}$ for unknown functions $b_0, \dots, b_{n-1}$. Often you can solve for these functions using the fact that

$$e^{\lambda t} = b_0(t) + b_1(t)\lambda + \cdots + b_{n-1}(t)\lambda^{n-1}$$

for each of the eigenvalues $\lambda$ (note that you will be able to solve when you have $n$ distinct eigenvalues).

3. For $2 \times 2$: if there is only one eigenvalue and only one eigenvector, then the matrix exponential will take the form

$$e^{At} = e^{\lambda t}\left[I + t(A - \lambda I)\right]$$

(a quick symbolic check of this formula follows the list).
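The check below uses an invented defective matrix; `Matrix.exp()` is sympy's exact matrix exponential:

```python
import sympy as sp

t = sp.symbols('t')

# Invented defective 2x2: single eigenvalue 1, single eigenvector.
A   = sp.Matrix([[1, 1],
                 [0, 1]])
lam = 1

lhs = (A * t).exp()                                   # sympy's matrix exponential
rhs = sp.exp(lam * t) * (sp.eye(2) + t * (A - lam * sp.eye(2)))
print(sp.simplify(lhs - rhs))                         # zero matrix
```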


Page 20

Phase Portraits

A phase portrait is a simultaneous plotting of several solutions of an ODE. The axes are the coordinates of the vector and the time variable is suppressed.

[Figures: phase portraits for two eigenvalues < 0, two eigenvalues > 0, and mixed signs. Pictures from Paul's Online Math Notes.]
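A minimal plotting sketch (the saddle matrix is an invented example; any of the cases above can be drawn the same way by changing `A`):

```python
import numpy as np
import matplotlib.pyplot as plt

# Invented saddle example: mixed-sign eigenvalues 1 and -1.
A = np.array([[1.0,  0.0],
              [0.0, -1.0]])

x1, x2 = np.meshgrid(np.linspace(-2, 2, 25), np.linspace(-2, 2, 25))
u = A[0, 0] * x1 + A[0, 1] * x2     # first component of A X
v = A[1, 0] * x1 + A[1, 1] * x2     # second component of A X

plt.streamplot(x1, x2, u, v)        # trajectories with time suppressed
plt.xlabel('x1'); plt.ylabel('x2')
plt.title('Saddle: eigenvalues 1 and -1')
plt.show()
```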


Page 21

Phase Portraits for Complex Eigenvalues

When the eigenvalue is $\lambda = \alpha + i\beta$:

[Figures: phase portraits for α < 0, α = 0, and α > 0. Pictures from Paul's Online Math Notes.]


Page 22

Phase Portraits for “Missing” Eigenvectors

[Figures: phase portraits for λ < 0 and λ > 0. Pictures from Paul's Online Math Notes.]


Page 23

Inhomogeneous Systems/Undetermined Coefficients

Just like for single inhomogeneous ODEs, one can often make an educated guess about the form of the particular solution:

$$\frac{d}{dt}\vec X = \begin{bmatrix} 2 & -1 \\ 0 & 3 \end{bmatrix}\vec X + \begin{bmatrix} 1 + e^{-t} \\ 2 \end{bmatrix} \quad\Rightarrow\quad \vec X_p = \vec V_1 + \vec V_2 e^{-t}$$

The structure of the method is still the same:

1. Expand the inhomogeneous terms to look like vectors times constants, exponentials, powers of $t$, and/or sines and cosines.

2. Use the tables from undetermined coefficients and/or your intuition to guess the form of the particular solution.

3. Instead of multiplying the terms in $\vec X_p$ by unknown constants, multiply by unknown vectors.

4. Try to solve for the unknown vectors. If it doesn't work, try including more terms in your guess with higher powers of $t$ attached. (A worked sketch of this computation follows the list.)
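Here is a symbolic sketch of these steps for the example above (the names `a`, `b`, `c`, `d` are my own placeholders for the entries of $\vec V_1$ and $\vec V_2$):

```python
import sympy as sp

t = sp.symbols('t')
a, b, c, d = sp.symbols('a b c d')    # unknown entries of V1 and V2

A  = sp.Matrix([[2, -1], [0, 3]])
G  = sp.Matrix([1 + sp.exp(-t), 2])
V1 = sp.Matrix([a, b])
V2 = sp.Matrix([c, d])

Xp  = V1 + V2 * sp.exp(-t)            # the guessed form from the slide
res = Xp.diff(t) - (A * Xp + G)       # must vanish identically in t

# Match the coefficients of 1 and e^{-t} in each component:
eqs = []
for r in res:
    r = sp.expand(r)
    eqs.append(r.coeff(sp.exp(-t), 1))
    eqs.append(r.coeff(sp.exp(-t), 0))

print(sp.solve(eqs, [a, b, c, d]))
# {a: -5/6, b: -2/3, c: -1/3, d: 0}
```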
