
Presession, 2014: It is a very easy but important course

Péter Medvegyev

2014

Medvegyev (CEU) Presession 2014 1 / 444

Outline

We will jump back and forth, but we will cover:

1 Algebra, mainly linear algebra.
2 Analysis, one-variable calculus, differentiation, integration.
3 Differential equations, difference equations.
4 Functions of several variables, convexity.
5 Optimization, Lagrange and Kuhn–Tucker theorems.
6 Dynamic optimization, discrete and continuous time.

Medvegyev (CEU) Presession 2014 2 / 444

Reader, sources to read

Books (there are a lot, but basically they are all the same):

Carl P. Simon and Lawrence Blume: Mathematics for Economists, W.W. Norton and Company.

Alpha C. Chiang: Fundamental Methods of Mathematical Economics, McGraw-Hill Book Company.

Peter Hammond and Knut Sydsæter: Mathematics for Economic Analysis, Prentice Hall. (There is a Hungarian version.)

Peter Hammond, Knut Sydsæter, Atle Seierstad and Arne Strøm: Further Mathematics for Economic Analysis, Prentice Hall.

Peter Hammond, Knut Sydsæter and Arne Strøm: Essential Mathematics for Economic Analysis, Prentice Hall.

Medvegyev (CEU) Presession 2014 3 / 444

Reader, sources to read

Internet sources

http://academicearth.org
http://wikipedia.org
http://www.wolframalpha.com/

Medvegyev (CEU) Presession 2014 4 / 444

Axioms of real numbers

1 Algebraic properties.
2 Ordering properties.
3 Completeness, topology, metric properties.

Medvegyev (CEU) Presession 2014 5 / 444

Algebraic properties

There are two operations: addition and multiplication.

1 commutativity: x + y = y + x, xy = yx;
2 associativity: x + (y + z) = (x + y) + z, x(yz) = (xy)z;
3 there is a neutral element, that is x + 0 = x, x · 1 = x;
4 there is an inverse element: x + (−x) = 0, x·x⁻¹ = 1 for x ≠ 0;
5 distributivity: (x + y)z = xz + yz.

Addition and multiplication (the latter on the nonzero elements) each form an abelian group. The real numbers form a field.

Medvegyev (CEU) Presession 2014 6 / 444

Ordering properties

1 a ≤ b or b ≤ a (totality);
2 if a ≤ b and b ≤ a then a = b (antisymmetry);
3 if a ≤ b and b ≤ c then a ≤ c (transitivity);
4 if x ≤ y then x + z ≤ y + z;
5 if x ≥ 0 and y ≥ 0 then xy ≥ 0.

The real numbers form a totally ordered field.

Medvegyev (CEU) Presession 2014 7 / 444

Topology: Completeness and the Archimedean property

1 If [a_{n−1}, b_{n−1}] ⊇ [a_n, b_n] ⊇ [a_{n+1}, b_{n+1}] for every n, then ∩_n [a_n, b_n] ≠ ∅ (Cantor's axiom).

2 For any ε > 0 and for any x ≥ 0 there is a natural number n such that

nε := ε + ε + ... + ε > x

(axiom of Archimedes).

Definition. The real numbers form a complete Archimedean ordered field.

Medvegyev (CEU) Presession 2014 8 / 444

Supremum and infimum

Definition. A set A ⊆ R has an upper bound if there is a k ∈ R such that a ≤ k for every a ∈ A. The smallest upper bound, the least upper bound, of a set A is called the supremum of A. The supremum of A is denoted by sup A. If A is empty then by definition sup A = −∞. If A does not have an upper bound then by definition sup A = +∞.

Medvegyev (CEU) Presession 2014 9 / 444

Supremum and infimum

The same definition works with the largest, greatest lower bound, called the infimum and denoted by inf A.

Example

If A = (a, b) then sup A = b and inf A = a. Obviously there is no max A and no min A.

Medvegyev (CEU) Presession 2014 10 / 444

Supremum and infimum, main existence theorem

Theorem. Every non-empty subset of R that is bounded from above has a finite supremum.

Medvegyev (CEU) Presession 2014 11 / 444

Supremum and infimum, proof

As A is not empty there is an a0 ∈ A. As A has an upper bound there is a b0 which is an upper bound of A. This means that

1 [a0, ∞) ∩ A ≠ ∅, that is, no upper bound of A is smaller than a0, and
2 A ⊆ (−∞, b0], that is, b0 is an upper bound.

Now let c := (a0 + b0)/2.

If c is an upper bound then let a1 := a0 and b1 := c. If c is not an upper bound then let a1 := c and b1 := b0.

Trivially 1. and 2. remain true for a1 and b1. Continue the procedure for every n = 1, 2, ....

Medvegyev (CEU) Presession 2014 12 / 444

Supremum and infimum, proof

By Cantor's axiom there is an x in the intersection ∩_n [a_n, b_n].

Medvegyev (CEU) Presession 2014 13 / 444

Supremum and infimum, proof

From the axiom of Archimedes one can show that x is unique.

1 To show this, suppose that x1 and x2, with x1 ≠ x2, are both in the intersection. Then there is an n0 such that for every n ≥ n0

0 < |x1 − x2| ≤ b_n − a_n = 2⁻ⁿ (b0 − a0) < ε := |x1 − x2|,

which is impossible.
2 We prove n ≤ 2ⁿ for every n. It is true if n = 1. Then by induction

n + 1 ≤ n + n = 2n ≤ 2 · 2ⁿ = 2ⁿ⁺¹.

3 By the axiom of Archimedes and by 2., for some n

2ⁿ ε ≥ nε > b0 − a0,

hence 2⁻ⁿ (b0 − a0) < ε.

Medvegyev (CEU) Presession 2014 14 / 444

Supremum and infimum, proof

1 x is an upper bound: if not, then there is an a ∈ A with x < a, and since every b_n is an upper bound,

a_n ≤ x < a ≤ b_n,

so a also lies in the intersection, contradicting that the point of the intersection is unique.
2 x is the smallest upper bound: if not, then there is an upper bound k with k < x, and since no upper bound is smaller than a_n,

a_n ≤ k < x ≤ b_n,

so k also lies in the intersection, contradicting the uniqueness again.

Medvegyev (CEU) Presession 2014 15 / 444
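A minimal numerical sketch of the bisection argument above, assuming the example set A = (0, 1), represented only through an "is this an upper bound?" test (names and the example are mine, not from the slides):

def approximate_sup(is_upper_bound, a0, b0, steps=50):
    """Halve [a_n, b_n] as in the proof: the right end stays an upper bound,
    the left end stays smaller than some element of A."""
    a, b = a0, b0
    for _ in range(steps):
        c = (a + b) / 2
        if is_upper_bound(c):
            b = c          # c is an upper bound: shrink from the right
        else:
            a = c          # c is not an upper bound: shrink from the left
    return (a + b) / 2

# For A = (0, 1) a number k is an upper bound exactly when k >= 1.
print(approximate_sup(lambda k: k >= 1, a0=0.5, b0=2.0))   # ~1.0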

Monotone convergence

Corollary

If a sequence (a_n) is increasing, that is a_n ≤ a_{n+1}, and bounded from above, that is, there is a number K such that a_n ≤ K for every n, then lim_{n→∞} a_n = A exists; that is, for every ε > 0 there is an N such that

|a_n − A| < ε,  n ≥ N.

Indeed, one can take A = sup_n a_n.

Medvegyev (CEU) Presession 2014 16 / 444

Monotone convergence

Example. The limit

lim_{n→∞} (1 + 1/n)^n = e

exists.

The main tool: if a_k ≥ 0 then

(a1 a2 ... a_n)^(1/n) ≤ (a1 + a2 + ... + a_n)/n.

Medvegyev (CEU) Presession 2014 17 / 444

Monotone convergence

First we prove that the sequence a_n = (1 + 1/n)^n is increasing. Apply the geometric–arithmetic mean inequality to the n + 1 numbers 1, (1 + 1/n), ..., (1 + 1/n):

a_n = (1 + 1/n)^n = 1 · (1 + 1/n)^n ≤ ((1 + n(1 + 1/n))/(n + 1))^(n+1) = ((n + 2)/(n + 1))^(n+1) = (1 + 1/(n + 1))^(n+1) = a_{n+1}.

Medvegyev (CEU) Presession 2014 18 / 444

Monotone convergence

The sequence is bounded from above. Apply the geometric–arithmetic mean inequality to the n + 2 numbers 1/2, 1/2, (1 + 1/n), ..., (1 + 1/n):

(1/4) a_n = (1/4)(1 + 1/n)^n = (1/2)(1/2)(1 + 1/n)^n ≤ ((1/2 + 1/2 + n(1 + 1/n))/(n + 2))^(n+2) = ((n + 2)/(n + 2))^(n+2) = 1,

hence a_n ≤ 4.
Medvegyev (CEU) Presession 2014 19 / 444
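A quick numerical check of the two claims above (illustrative only, not part of the proof):

a = [(1 + 1/n) ** n for n in range(1, 2001)]
assert all(x < y for x, y in zip(a, a[1:]))   # the sequence is increasing
assert all(x < 4 for x in a)                  # it stays below 4
print(a[-1])                                  # approaches e ~ 2.71828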

Convexity principle

There are many proofs of the inequality between the geometric and arithmetic means,

(a1 a2 ... a_n)^(1/n) ≤ (a1 + a2 + ... + a_n)/n.

Most of them are elementary. But the "real" one is based on the concavity of the logarithmic function, or the convexity of the exponential function:

ln (a1 a2 ... a_n)^(1/n) = (1/n) ln(a1 a2 ... a_n) = (1/n) ∑ ln a_k ≤ ln((a1 + a2 + ... + a_n)/n).

Medvegyev (CEU) Presession 2014 20 / 444

Complex numbers

Definition. The complex numbers form a field on the pairs of real numbers, denoted by (a, b) := a + bi, with the following operations:

1 (a + bi) + (c + di) := (a, b) + (c, d) = (a + c, b + d) = (a + c) + (b + d)i;
2 (a + bi)(c + di) = ac + adi + bci + bd·i² := ac + adi + bci + bd·(−1) = ac − bd + (ad + bc)i := (ac − bd, ad + bc).

The complex numbers form a field. (In Matlab the multiplication is denoted by *.)

Medvegyev (CEU) Presession 2014 21 / 444

Complex numbers, multiplication

Example. If z1 = 1 + 2i, z2 = 2 + 3i then

z1 z2 = (1 + 2i)(2 + 3i) = 2 + 3i + 4i + 6i² = −4 + 7i.

Medvegyev (CEU) Presession 2014 22 / 444
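The same product in Python, which has built-in complex arithmetic (in Matlab the multiplication is simply *, as noted above):

z1, z2 = 1 + 2j, 2 + 3j
print(z1 * z2)   # (-4+7j)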

Complex numbers, roots

Definition

1 If z is a complex number then ⁿ√z is the set of complex numbers such that if α is an element of this set then αⁿ = z.
2 If z = 1 then the elements of the set ⁿ√1 are called the roots of unity of order n, or the n-th roots of unity.

Medvegyev (CEU) Presession 2014 23 / 444

Complex numbers, roots

Example. ³√1 is the set of solutions of the equation z³ = 1. They are the roots of unity of order three:

1,  −1/2 − i√3/2,  −1/2 + i√3/2.

For example,

(−1/2 − i√3/2)³ = (−1/2)³ + 3(−1/2)²(−i√3/2) + 3(−1/2)(−i√3/2)² + (−i√3/2)³ =
= −1/8 − i·3√3/8 + 9/8 + i·3√3/8 = 1.

Medvegyev (CEU) Presession 2014 24 / 444

Complex numbers, roots

Example. ⁴√1 is the set of solutions of the equation z⁴ = 1. They are the roots of unity of order four:

1, −1, i, −i.

Medvegyev (CEU) Presession 2014 25 / 444
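A small sketch computing all n-th roots of a complex number from the polar form, matching the definition above (the function name is mine, not from the slides):

import cmath

def nth_roots(z, n):
    """All n-th roots of z, computed from the polar form (r, phi)."""
    r, phi = abs(z), cmath.phase(z)
    return [r ** (1 / n) * cmath.exp(1j * (phi + 2 * cmath.pi * k) / n)
            for k in range(n)]

print(nth_roots(1, 3))   # the third roots of unity
print(nth_roots(1, 4))   # 1, i, -1, -i (up to rounding)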

Complex numbers, roots

Theorem. Over the complex numbers every polynomial of order n has n roots, where some roots may have to be counted with multiplicity.

Definition

A root z0 of a polynomial p has multiplicity k if p(z0) = p′(z0) = ... = p^(k−1)(z0) = 0.

Medvegyev (CEU) Presession 2014 26 / 444

Complex numbers, roots

Example

The roots of z²(z − 1) = z³ − z² are 0, 0 and 1. Observe that the roots of the derivative 3z² − 2z are z = 0 and z = 2/3.

Example

The roots of z²(z − 1)² = z² − 2z³ + z⁴ are 0, 0, 1, 1. Observe that the roots of the derivative 2z − 6z² + 4z³ = 2z(2z − 1)(z − 1) are z = 0, z = 1 and z = 1/2.

Medvegyev (CEU) Presession 2014 27 / 444

Complex numbers, polar form

Definition. Every complex number has the representation

z = r(cos φ + i sin φ).

The length of the vector, r, is unique, but the angle φ is not. This is called the polar representation of z. Obviously φ + 2kπ is also a valid angle for every k = 0, ±1, ±2, ....

Medvegyev (CEU) Presession 2014 28 / 444

Complex numbers, polar form

If z_k = r_k (cos φ_k + i sin φ_k) with k = 1, 2 then, using elementary trigonometric identities,

z1 z2 = r1 (cos φ1 + i sin φ1) · r2 (cos φ2 + i sin φ2) =
= r1 r2 ((cos φ1 cos φ2 − sin φ1 sin φ2) + i (cos φ1 sin φ2 + cos φ2 sin φ1)) =
= r1 r2 (cos(φ1 + φ2) + i sin(φ1 + φ2)).

Theorem. The multiplication of complex numbers amounts to

1 the multiplication of the lengths of the vectors,
2 the addition of the angles of the vectors.

Medvegyev (CEU) Presession 2014 29 / 444

Complex numbers, polar form

Example

The roots √(−1) are the numbers r(cos φ + i sin φ) such that

r²(cos 2φ + i sin 2φ) = 1·(cos π + i sin π) = −1.

From this r = 1 and 2φ = π + 2kπ, hence φ = π/2 and φ = 3π/2. That is,

√(−1) = {z1 = i, z2 = −i}.

Medvegyev (CEU) Presession 2014 30 / 444

Complex numbers, polar form

Example

The roots ³√(−1) are the numbers r(cos φ + i sin φ) such that

r³(cos 3φ + i sin 3φ) = 1·(cos π + i sin π) = −1.

From this r = 1 and 3φ = π + 2kπ. Hence φ = π/3, φ = 3π/3 and φ = 5π/3. That is, ³√(−1) = {z1, z2, z3} with

z1 = cos(π/3) + i sin(π/3) = 1/2 + i√3/2,
z2 = −1,
z3 = cos(5π/3) + i sin(5π/3) = 1/2 − i√3/2.

Medvegyev (CEU) Presession 2014 31 / 444

Complex numbers, polar form

Example

Calculate ⁶√2!

z1 = ⁶√2 (cos 0 + i sin 0) = ⁶√2,
z2 = ⁶√2 (cos(π/3) + i sin(π/3)) = ⁶√2 (1/2 + i√3/2),
z3 = ⁶√2 (cos(2π/3) + i sin(2π/3)) = ⁶√2 (−1/2 + i√3/2),
z4 = ⁶√2 (cos π + i sin π) = −⁶√2,
z5 = ⁶√2 (cos(4π/3) + i sin(4π/3)) = ⁶√2 (−1/2 − i√3/2),
z6 = ⁶√2 (cos(5π/3) + i sin(5π/3)) = ⁶√2 (1/2 − i√3/2).

Medvegyev (CEU) Presession 2014 32 / 444

Complex numbers, polar form

Example

Calculate √i! The polar representation of i is

i = cos(π/2) + i sin(π/2).

The roots are

z1 = cos(π/4) + i sin(π/4) = √2/2 + i√2/2,
z2 = cos(5π/4) + i sin(5π/4) = −(√2/2 + i√2/2).

Medvegyev (CEU) Presession 2014 33 / 444

Complex numbers, division

Definition. If z = a + bi is a complex number then z̄ := a − bi is called the conjugate number.

z z̄ = (a + bi)(a − bi) = a² − (bi)² = a² + b²,

which is a real number. If z ≠ 0,

1/z = z̄/(z z̄) = (a − bi)/(a² + b²).

Medvegyev (CEU) Presession 2014 34 / 444

Complex numbers, division

Example. 1/i = −i/1 = −i. To check this,

i·(−i) = −i² = −(−1) = 1.

Medvegyev (CEU) Presession 2014 35 / 444

Complex numbers, division

Example

Calculate 1/(1 + i).

1/(1 + i) = (1 − i)/((1 + i)(1 − i)) = (1 − i)/(1 − i²) = (1 − i)/2.

To check this,

(1 + i)·(1 − i)/2 = (1/2)(1 + i)(1 − i) = (1/2)·2 = 1.

Medvegyev (CEU) Presession 2014 36 / 444

Complex numbers, division

Example

Calculate i/(1 + i)!

i/(1 + i) = i · 1/(1 + i) = i (1 − i)/2 = (1 + i)/2.

To check this,

((1 + i)/2)(1 + i) = (1/2)(1 + i)² =
= (1/2)(1² + 2·1·i + i²) =
= (1/2)(1² + 2·1·i − 1) = i.

Medvegyev (CEU) Presession 2014 37 / 444

Complex numbers, division

z⁻¹ := 1/z is, by definition, the complex number such that z⁻¹ z = 1. Let z := a + bi and let z⁻¹ := x + yi. So

z z⁻¹ = (a + bi)(x + yi) = (ax − by) + i(ay + bx),

hence

ax − by = 1,  bx + ay = 0.

Multiplying the two equations by b and a respectively,

abx − b²y = b,  abx + a²y = 0,
(a² + b²) y = −b,  y = −b/(a² + b²),
bx − ab/(a² + b²) = 0,  x = a/(a² + b²).

Medvegyev (CEU) Presession 2014 38 / 444
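A quick sanity check of the formula 1/z = (a − bi)/(a² + b²) against Python's own complex division (not part of the slides):

def inverse(z):
    a, b = z.real, z.imag
    return complex(a, -b) / (a * a + b * b)

for z in (1j, 1 + 1j, 3 - 4j):
    print(inverse(z), 1 / z)   # the two columns agree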

Homework

1 Calculate the argument and the size of the complex numbers 1 + i, −1 + i, −1 − i.

2 How much is (1 + 2i)^20, (3 − i)^200, (1 − i)^50, (7 + 2i)^16?

3 Calculate √(1 − i√3), ⁴√(1 + i), ³√(3 − 4i), ⁸√i, ¹²√i!

4 What is wrong? −1 = i² = √(−1)·√(−1) = √((−1)(−1)) = √1 = 1.

5 Let ε ≠ 1 be a root of unity of order n. Show that ∑_{j=0}^{n−1} ε^j = 0.

6 Show that if z1 + z2 + z3 = 0 and |z1| = |z2| = |z3| = 1 then z1, z2 and z3 form an equilateral triangle.

Medvegyev (CEU) Presession 2014 39 / 444

Topology of the complex plane

Definition. If z ∈ C then z ∈ R². For every n, if x ∈ R^n one can define the length of x by

‖x‖ := √(∑_{k=1}^n x_k²).

If z ∈ C then the notation |z| is used. The mapping x ↦ ‖x‖ is called the norm of the vector x. In the case of complex numbers |z| is called the absolute value of z.

Definition. d(x, y) := ‖x − y‖.

Medvegyev (CEU) Presession 2014 40 / 444

Topology of the complex plane

Theorem. The norm function has the following properties:

1 ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0.
2 ‖λx‖ = |λ| ‖x‖ for any scalar λ. In the case of complex numbers |z1 z2| = |z1| |z2|.
3 ‖x + y‖ ≤ ‖x‖ + ‖y‖. (This is the only one which is not trivial.)

Medvegyev (CEU) Presession 2014 41 / 444

Topology of the complex plane

Corollary. The metric d(x, y) = ‖x − y‖ has the properties

1 d(x, y) ≥ 0 and d(x, y) = 0 if and only if x = y.
2 d(x, y) = d(y, x).
3 d(x, y) ≤ d(x, z) + d(z, y).

Medvegyev (CEU) Presession 2014 42 / 444

Topology of the complex plane

We will discuss this later, but one cannot start the discussion of completeness early enough.

Definition. A sequence (x_n) in a metric space is a Cauchy sequence if for every ε > 0 there is an N such that if n, m ≥ N then d(x_n, x_m) < ε.

Theorem. The complex plane, or R^n, is complete, which means that every Cauchy sequence is convergent. (Whatever this means. The point is that as the complex numbers and R^n are not ordered, one should change the definition.)

Medvegyev (CEU) Presession 2014 43 / 444

Topology of the complex plane, Weierstrass criterion

Corollary. If for some complex numbers ∑_{n=1}^∞ |z_n| < ∞ then

∑_{n=1}^∞ z_n := lim_{N→∞} ∑_{n=1}^N z_n

exists. (The word "exists" is a very great word. One should use it very carefully.) Observe that the sequence ∑_{n=1}^N |z_n| is increasing and by assumption it has an upper bound. So with this tool we transfer the supremum tool from the real numbers to the complex numbers.

Medvegyev (CEU) Presession 2014 44 / 444

Topology of the complex plane, Weierstrass criterion

Definition. If ∑_{n=1}^∞ |z_n| < ∞ then we say that the series ∑_n z_n is absolutely convergent.

Medvegyev (CEU) Presession 2014 45 / 444

Topology of the complex plane, Weierstrass criterion

Corollary. If ∑_{n=1}^∞ ‖x_n‖ < ∞ for some sequence of vectors in R^n then

∑_{n=1}^∞ x_n := lim_{N→∞} ∑_{n=1}^N x_n

exists. One can also use that ‖x_n‖ ≤ b_n and ∑_{n=1}^∞ b_n < ∞.

Medvegyev (CEU) Presession 2014 46 / 444

Topology of the complex plane, Weierstrass criterion

Definition. If ∑_{n=1}^∞ ‖x_n‖ < ∞ then we say that the series ∑_n x_n is absolutely convergent.

Medvegyev (CEU) Presession 2014 47 / 444

Topology of the complex plane

Hence it is very important to develop tools to decide when series with non-negative elements have a finite upper bound.

1 The only simple one is the geometric series

∑_{n=0}^∞ q^n = lim_{N→∞} (1 − q^N)/(1 − q),  q ≥ 0,

which is bounded if and only if q < 1.
2 The next one: 0 ≤ a_n ≤ A q^n with q < 1.
3 a_{n+1}/a_n ≤ q < 1, which implies 0 ≤ a_{n+1} ≤ a_n q ≤ a_{n−1} q² ≤ ... ≤ a0 q^(n+1). (Ratio test.)
4 a_n^(1/n) ≤ q < 1, which implies that a_n ≤ q^n. (Root test.)

Medvegyev (CEU) Presession 2014 48 / 444

Power series, radius of convergence

Definition. A (complex) function

f(z) = ∑_{k=0}^∞ a_k z^k = a0 + a1 z + a2 z² + ...

is called a power series. More generally,

f(z) = ∑_{k=0}^∞ a_k (z − z0)^k = a0 + a1 (z − z0) + a2 (z − z0)² + ....

Medvegyev (CEU) Presession 2014 49 / 444

Power series, radius of convergence

Theorem (Cauchy–Hadamard test). For every power series there is an R, called the radius of convergence, such that if |z| < R then the series is convergent, if |z| > R then it is divergent, and on |z| = R anything can happen.

R = 1 / limsup_{n→∞} |a_n|^(1/n) = liminf_{n→∞} 1/|a_n|^(1/n).

Medvegyev (CEU) Presession 2014 50 / 444

Power series, radius of convergence

Theorem (Ratio test). If

lim_{n→∞} |a_{n+1}/a_n|

exists (finite or infinite) then

R = 1 / lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} |a_n/a_{n+1}|.

Medvegyev (CEU) Presession 2014 51 / 444
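An illustration of the ratio-test formula R = lim |a_n/a_{n+1}|, using the coefficients a_n = 1/n! of the exponential series (my choice of example, not from the slides):

from math import factorial

for n in (5, 10, 20, 40):
    a_n, a_next = 1 / factorial(n), 1 / factorial(n + 1)
    print(n, a_n / a_next)   # equals n + 1, so the ratio grows without bound and R = infinity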

Power series, radius of convergence

Theorem. Inside the open circle of convergence the convergence is absolute, and it is uniform on any compact subset of the open circle of convergence.

This practically means that inside the open circle of convergence one can calculate with power series as with polynomials. One can differentiate and integrate them term by term:

f′(z) = ∑_{n=0}^∞ a_n n z^(n−1) = ∑_{n=1}^∞ a_n n z^(n−1),
∫ f(z) dz = ∑_{n=0}^∞ a_n ∫ z^n dz = ∑_{n=0}^∞ a_n z^(n+1)/(n + 1) + C.

Medvegyev (CEU) Presession 2014 52 / 444

Power series, radius of convergence

Example. Calculate the expected value of the geometric distribution.

A random variable with P(ξ = k) = p q^(k−1) (k = 1, 2, ...) is said to have a geometric distribution. As p q^(k−1) ≥ 0 and

∑_{k=1}^∞ p q^(k−1) = p ∑_{k=1}^∞ q^(k−1) = p (1 + q + ...) = p · 1/(1 − q) = 1,

it is a distribution. Its expected value is

∑_k k P(ξ = k) = ∑_{k=1}^∞ k p q^(k−1) = p ∑_{k=1}^∞ (q^k)′ = p (∑_{k=0}^∞ q^k)′ =
= p (1/(1 − q))′ = p · 1/(1 − q)² = 1/p.

Medvegyev (CEU) Presession 2014 53 / 444
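A numerical sanity check of E(ξ) = 1/p for the geometric distribution above, truncating the infinite sum (illustration only):

p = 0.3
q = 1 - p
expectation = sum(k * p * q ** (k - 1) for k in range(1, 10000))
print(expectation, 1 / p)   # both ~3.3333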

Homeworks

Calculate the radius of convergence of the series ∑_{k=0}^∞ z^k.

Calculate the radius of convergence of the series ∑_{k=1}^∞ z^k/k.

Calculate the radius of convergence of the series ∑_{k=0}^∞ k z^k.

Prove that lim_{k→∞} k^(1/k) = 1.

Calculate the variance of the geometric distribution.

Calculate the cumulative distribution function of the geometric distribution.

Medvegyev (CEU) Presession 2014 54 / 444

Homework

For which x are the following series convergent?

∑_{n=1}^∞ x^(2n),
∑_{n=1}^∞ (1/(1 + x))^n,
∑_{n=1}^∞ n^k x^n,
∑_{n=1}^∞ (sin x)^n,
∑_{n=1}^∞ (exp(x))^n.

Medvegyev (CEU) Presession 2014 55 / 444

Complex exponential function

Definition. For every z ∈ C define

exp(z) := ∑_{k=0}^∞ z^k/k!.

Medvegyev (CEU) Presession 2014 56 / 444

Complex exponential function

Theorem. The function exp(z) exists for every z; the sum converges absolutely for every z and uniformly on any bounded subset of C.

It is obvious from the ratio test:

lim_{n→∞} |a_{n+1} z^(n+1) / (a_n z^n)| = lim_{n→∞} |n! z^(n+1) / ((n + 1)! z^n)| = lim_{n→∞} |z|/(n + 1) = 0 < 1,

or, in terms of the radius of convergence,

lim_{n→∞} |a_{n+1}/a_n| = lim_{n→∞} 1/(n + 1) = 0,
R = 1 / lim_{n→∞} |a_{n+1}/a_n| = 1 / lim_{n→∞} 1/(n + 1) = ∞.

Medvegyev (CEU) Presession 2014 57 / 444

Complex exponential function

Theorem. For any complex numbers z1 and z2,

exp(z1 + z2) = exp(z1) exp(z2).

Theorem. The exponential function is everywhere differentiable in the complex sense, that is,

lim_{h→0} (exp(z + h) − exp(z))/h = lim_{h→0} ((exp(h) − 1)/h) exp(z) = 1 · exp(z).

Medvegyev (CEU) Presession 2014 58 / 444

Complex exponential function

By the absolute convergence of the series one can reorder the sum, and using the commutativity of the product,

exp(z1) exp(z2) = ∑_{k=0}^∞ ∑_{l=0}^∞ (z1^k/k!)(z2^l/l!) = ∑_{n=0}^∞ ∑_{i=0}^n (z1^i/i!)(z2^(n−i)/(n − i)!) =
= ∑_{n=0}^∞ (1/n!) ∑_{i=0}^n (n choose i) z1^i z2^(n−i) = ∑_{n=0}^∞ (z1 + z2)^n/n! =: exp(z1 + z2).

By the uniform convergence of the derivatives we can differentiate under the infinite sum:

exp(az)′ = ∑_{k=0}^∞ k a (az)^(k−1)/k! = a ∑_{k=1}^∞ (az)^(k−1)/(k − 1)! = a exp(az).

Medvegyev (CEU) Presession 2014 59 / 444

Complex exponential function

Theorem. If t is real then

exp(it) = cos t + i sin t.

For every z = a + bi,

exp(z) = e^a (cos b + i sin b).

It is obvious, as

exp(it) = ∑_{k=0}^∞ (it)^k/k! = ∑_{n even} (it)^n/n! + ∑_{n odd} (it)^n/n! =
= (1 − t²/2! + t⁴/4! − ...) + i (t − t³/3! + t⁵/5! − ...) =
= cos t + i sin t.

Medvegyev (CEU) Presession 2014 60 / 444
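A numerical illustration of Euler's formula exp(it) = cos t + i sin t stated above:

import cmath, math

t = 0.7
print(cmath.exp(1j * t))
print(complex(math.cos(t), math.sin(t)))   # the same number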

Close your eyes: Banach algebra, a detour

One can define the exponential function over any abstract structure X where one has

1 a linear structure, that is, X is a linear space, so that αx + βy is well-defined;
2 a multiplication xy, so that x^n is well defined for any n = 0, 1, 2, ..., and therefore one can define the polynomials ∑_{k=0}^n x^k/k!;
3 a topology on the space, so that one can define the limit ∑_{k=0}^∞ x^k/k!.

Medvegyev (CEU) Presession 2014 61 / 444

Close your eyes: Banach algebra, a detour

To prove the existence of the limit one should define a norm ‖x‖ with the properties

1 ‖x‖ ≥ 0, and ‖x‖ = 0 if and only if x = 0,
2 ‖αx + βy‖ ≤ |α| ‖x‖ + |β| ‖y‖,
3 ‖xy‖ ≤ ‖x‖ ‖y‖.

Medvegyev (CEU) Presession 2014 62 / 444

Close your eyes: Banach algebra, a detour

One can further generalize the Weierstrass criterion: if (s_n) is the sequence of partial sums and m > n, then

‖s_n − s_m‖ = ‖∑_{k=n+1}^m a_k‖ ≤ ∑_{k=n+1}^m ‖a_k‖ ≤ ∑_{k=n+1}^m β_k < ε

if ∑_k β_k < ∞ and ‖a_k‖ ≤ β_k. This applies to the exponential series, since

‖∑_{k=0}^n x^k/k!‖ ≤ ∑_{k=0}^n ‖x^k‖/k! ≤ ∑_{k=0}^n ‖x‖^k/k!.

Hence to apply the Weierstrass criterion one needs the completeness of the metric structure.

Medvegyev (CEU) Presession 2014 63 / 444

Exponential operator

Definition. If A is an operator, that is A : X → X, then A^n(x) = A(A(... A(x))). If

f(x) := ∑_k a_k x^k = a0 + a1 x + a2 x² + ...

is a power series then one can define

f(A) := a0 I + a1 A + a2 A² + ...,

provided everything is well defined. In particular,

exp(A) := I + A/1! + A²/2! + ....

Medvegyev (CEU) Presession 2014 64 / 444

Close your eyes: Banach algebra, a detour

Example. The exponential of the differential operator A = d/dx.

If f(x) = x² then

(Af)(x) = 2x,  (A²f)(x) = 2,  (A³f)(x) = 0.

exp(A) = I + A/1! + A²/2! + A³/3! + ...,

assuming that the limit is meaningful. In our case

(exp(A) f)(x) = x² + 2x + 2/2 + 0 + ... = x² + 2x + 1 = (x + 1)².

Medvegyev (CEU) Presession 2014 65 / 444
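A symbolic sketch of the example above: for f(x) = x² the series for exp(A) with A = d/dx terminates, and the result is f(x + 1). (SymPy is assumed to be available; this is my illustration, not part of the slides.)

import sympy as sp

x = sp.symbols('x')
f = x**2
# A^k f = k-th derivative of f; the series stops because A^3 f = 0.
result = sum(sp.diff(f, x, k) / sp.factorial(k) for k in range(3))
print(sp.expand(result))        # x**2 + 2*x + 1
print(sp.expand((x + 1)**2))    # the same polynomial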

Close your eyes: Banach algebra, a detour

If f(x) = x^n then

(Af)(x) = n x^(n−1),  (A²f)(x) = n(n − 1) x^(n−2), ....

exp(A) := I + A/1! + A²/2! + A³/3! + ...,

assuming that the limit is meaningful. In our case

(exp(A) f)(x) = x^n + n x^(n−1) + (n(n − 1)/2!) x^(n−2) + (n(n − 1)(n − 2)/3!) x^(n−3) + ... =
= ∑_{k=0}^n (n choose k) x^(n−k) = (x + 1)^n.

Medvegyev (CEU) Presession 2014 66 / 444

Close your eyes: Banach algebra, a detour

If f(x) = exp(x) then

(Af)(x) = exp(x),

that is Af = f. Hence A^n f = f, and

(exp(A) f)(x) = exp(x) + exp(x)/1! + exp(x)/2! + ... = exp(x)(1 + 1/1! + 1/2! + ...) = exp(x) · e.

Observe that f(x) = exp(x) is an eigenvector of A with eigenvalue λ = 1. Hence λ = e is an eigenvalue of exp(A).

Medvegyev (CEU) Presession 2014 67 / 444

Close your eyes: Banach algebra, a detour

As

exp(A) exp(−A) = exp(A − A) = exp(0) = I,

we have exp(A)⁻¹ = exp(−A). If f(x) = (x + 1)² then

exp(−A) = I − A/1! + A²/2! − A³/3! + ...,

(exp(−A) f)(x) = (x + 1)² − 2(x + 1) + 2/2 + 0 + ... = x².

Medvegyev (CEU) Presession 2014 68 / 444

Close your eyes: Banach algebra, a detour

If f(x) = exp(x) then

(exp(−A)(exp(A) f))(x) = exp(x) e − exp(x) e + exp(x) e/2! − ... =
= exp(x) e (1 − 1 + 1/2! − ...) = exp(x) e · e⁻¹ = exp(x) = f(x).

Medvegyev (CEU) Presession 2014 69 / 444

Close your eyes: Banach algebra, a detour

If f(x) = exp(ax) then

(exp(A) f)(x) = exp(ax) + a exp(ax)/1! + a² exp(ax)/2! + ... = exp(ax) exp(a) = exp(a) f(x),

and similarly

(exp(−A) f)(x) = exp(ax) − a exp(ax)/1! + a² exp(ax)/2! − ... = exp(ax)(1 − a + a²/2! − ...) = exp(ax) exp(−a).

exp(ax) is an eigenvector of A with eigenvalue a. Hence it is an eigenvector of exp(A) with eigenvalue exp(a).

Medvegyev (CEU) Presession 2014 70 / 444

Close your eyes: Banach algebra, a detour

If f(x) = sin x then

(exp(A) f)(x) = sin x + cos x/1! − sin x/2! − cos x/3! + sin x/4! + ... =
= sin x (1 − 1/2! + 1/4! − ...) + cos x (1 − 1/3! + 1/5! − ...) =
= sin x cos 1 + cos x sin 1 = sin(x + 1).

Medvegyev (CEU) Presession 2014 71 / 444

Close your eyes: Banach algebra, a detour

If f(x) = exp(ix) then

(exp(A) f)(x) = exp(ix) + i exp(ix)/1! + i² exp(ix)/2! + ... =
= exp(ix)(1 + i/1! + i²/2! + ...) = exp(ix) exp(i) = exp(i(x + 1)).

Medvegyev (CEU) Presession 2014 72 / 444

Close your eyes: Banach algebra, a detour

Example

Let A be the operator of shifting: if (a_n) is a sequence then let A(a_n) = (a_{n+1}).

A((1, 2, 3, 4, ...)) = (2, 3, 4, ...).

(exp(A))(1, 2, ...) = (1, 2, 3, ...) + (2, 3, 4, ...)/1! + (3, 4, 5, ...)/2! + ...

Medvegyev (CEU) Presession 2014 73 / 444

Close your eyes: Banach algebra, a detour

The first element in the resulting sequence is

1 + 2/1! + 3/2! + 4/3! + 5/4! + ... = (x exp(x))′ at x = 1,

and (x exp(x))′ = 1·exp(x) + x exp(x) = 2e at x = 1.

The second element is

2 + 3/1! + 4/2! + 5/3! + 6/4! + ... = (x² exp(x))′ at x = 1,

and (x² exp(x))′ = 2x exp(x) + x² exp(x) = 3e at x = 1.

Medvegyev (CEU) Presession 2014 74 / 444

Close your eyes: Banach algebra, a detour

In general

(x^n exp(x))′ = (x^n ∑_{k=0}^∞ x^k/k!)′ = (∑_{k=0}^∞ x^(k+n)/k!)′ = ∑_{k=0}^∞ (k + n) x^(k+n−1)/k! =
= n x^(n−1) exp(x) + x^n exp(x).

If x = 1 then

∑_{k=0}^∞ (k + n)/k! = (n + 1) e.

Medvegyev (CEU) Presession 2014 75 / 444

Close your eyes: Banach algebra, a detour

(exp(A))(1, 2, 3, ...) = e·(2, 3, 4, ...).

Medvegyev (CEU) Presession 2014 76 / 444

Homework

Let A be the shift operator (a1, a2, a3, ...) ↦ (a2, a3, a4, ...). Try to define (I − A)⁻¹.

Let A be the shift operator (a1, a2, a3, ...) ↦ (0, a1, a2, a3, a4, ...). Try to define (I − A)⁻¹.

Let A be the shift operator (a1, a2, a3, ...) ↦ (a2, a3, a4, ...). Try to define sin A.

Let A be the multiplication operator, that is f(x) ↦ x·f(x). Try to define exp(A) and (I − A)⁻¹.

Medvegyev (CEU) Presession 2014 77 / 444

Open your eyes: Matrix exponential

Example. The simplest nontrivial Banach algebra is the set of square matrices. Hence if A is any square matrix then

X(t) := exp(tA) = ∑_{k=0}^∞ (tA)^k/k! = I + tA + (tA)²/2! + ...

is well defined for any t. (Let us recall: existence is a very great word!) Observe that there is no t before the first term.

Medvegyev (CEU) Presession 2014 78 / 444

Open your eyes: Matrix exponential

In the same way as in the complex plane one can show:

Theorem. X(t + s) = X(t) X(s), that is, exp((t + s)A) = exp(tA) exp(sA).

Theorem. If AB = BA then exp(A + B) = exp(A) exp(B).

As matrix multiplication is not commutative,

exp(A + B) = exp(A) exp(B)

is not always true.

Medvegyev (CEU) Presession 2014 79 / 444

Open your eyes: Matrix exponentials

Theorem. X(t) := exp(tA) is the solution of the differential equation X′(t) = X(t)A (= AX(t)).

dX(t)/dt := lim_{h→0} (X(t + h) − X(t))/h = lim_{h→0} (X(t)X(h) − X(t))/h =
= X(t) lim_{h→0} (X(h) − I)/h = X(t) lim_{h→0} (A + ∑_{k=2}^∞ A^k h^(k−1)/k!) =
= X(t) lim_{h→0} (A + hB(h)) = X(t)A.

Medvegyev (CEU) Presession 2014 80 / 444

Open your eyes: Matrix exponentials

The only nontrivial question is how one can effectively calculate exp(tA) for a matrix A.

See:
http://en.wikipedia.org/wiki/Matrix_exponential
http://www.cs.cornell.edu/cv/researchpdf/19ways+.pdf

Medvegyev (CEU) Presession 2014 81 / 444

Open your eyes: Matrix exponentials

Example

Calculate exp([a b; 0 a]).

As

[a 0; 0 a][0 b; 0 0] = [0 ab; 0 0] = [0 b; 0 0][a 0; 0 a],

the two matrices commute, so one can use the "exponential formula":

exp([a b; 0 a]) = exp([a 0; 0 a] + [0 b; 0 0]) = exp([a 0; 0 a]) exp([0 b; 0 0]).

Medvegyev (CEU) Presession 2014 82 / 444

As [a 0; 0 a] is diagonal,

exp([a 0; 0 a]) = [1 0; 0 1] + [a/1! 0; 0 a/1!] + [a²/2! 0; 0 a²/2!] + ... = [exp(a) 0; 0 exp(a)].

Medvegyev (CEU) Presession 2014 83 / 444

Open your eyes: Matrix exponentials

As [0 b; 0 0] is nilpotent, that is [0 b; 0 0]² = [0 0; 0 0],

exp([0 b; 0 0]) = [1 0; 0 1] + [0 b; 0 0] + 0 = [1 b; 0 1].

Medvegyev (CEU) Presession 2014 84 / 444

Open your eyes: Matrix exponential

exp([a b; 0 a]) = [exp(a) 0; 0 exp(a)] [1 b; 0 1] = [exp(a) b·exp(a); 0 exp(a)].

Medvegyev (CEU) Presession 2014 85 / 444
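A numerical check of exp([a b; 0 a]) = e^a [1 b; 0 1] using SciPy's expm (assuming SciPy is installed; the numbers a, b are my choice):

import numpy as np
from scipy.linalg import expm

a, b = 0.5, 2.0
M = np.array([[a, b], [0.0, a]])
print(expm(M))
print(np.exp(a) * np.array([[1.0, b], [0.0, 1.0]]))   # agrees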

Homework

Using the differentiation rule

d/dz exp(az) = a exp(az)

of the complex exponential function, prove that

1 d/dx sin x = cos x,
2 d/dx cos x = −sin x.

Medvegyev (CEU) Presession 2014 86 / 444

Overview

We assume that this material is well known. Just a short summary:

1 Sequences, limits
2 Continuous functions
3 Differentiable functions
4 Integration and the fundamental theorem of calculus

Medvegyev (CEU) Presession 2014 87 / 444

Sequences, limits

Definition. A sequence (a_n) is a mapping from the natural numbers to the set of real or complex numbers. Most of the time we will deal with real valued sequences, but what we say mainly applies to complex valued sequences and to sequences in R^n as well.

1 We say that a sequence (a_n) has a limit A if for any ε > 0 one can find an index N such that |a_n − A| < ε for every n ≥ N:

lim_{n→∞} a_n = A.

2 We say that a sequence (a_n) goes to plus infinity if for any K > 0 one can find an index N such that a_n > K for every n ≥ N:

lim_{n→∞} a_n = +∞.

Medvegyev (CEU) Presession 2014 88 / 444

Remark on the denition, metric spaces

In the definition of the limit the absolute value plays no special role. One can apply the definition to any metric space.

Definition. A structure (X, d) is called a metric space if the metric d : X × X → R satisfies the following axioms:

1 d(x, y) ≥ 0 and d(x, y) = 0 if and only if x = y.
2 d(x, y) = d(y, x).
3 d(x, y) ≤ d(x, z) + d(z, y).

Definition. We say that a sequence (a_n) in a metric space (X, d) has a limit A if for any ε > 0 one can find an index N such that d(a_n, A) < ε for every n ≥ N:

lim_{n→∞} a_n = A.

Medvegyev (CEU) Presession 2014 89 / 444

Metric structure of nite dimensional spaces

Example

In R^n the expression d(x, y) := √(∑_{i=1}^n (x_i − y_i)²) is a metric.

One should only prove the third property, the so-called triangle inequality:

√(∑_{i=1}^n (x_i − y_i)²) ≤ √(∑_{i=1}^n (x_i − z_i)²) + √(∑_{i=1}^n (z_i − y_i)²).

Writing u_i := x_i − z_i and v_i := z_i − y_i, this is

√(∑_{i=1}^n (u_i + v_i)²) ≤ √(∑_{i=1}^n u_i²) + √(∑_{i=1}^n v_i²),

that is,

∑_{i=1}^n (u_i + v_i)² ≤ ∑_{i=1}^n u_i² + ∑_{i=1}^n v_i² + 2 √(∑_{i=1}^n u_i²) √(∑_{i=1}^n v_i²),

which reduces to

∑_{i=1}^n u_i v_i ≤ √(∑_{i=1}^n u_i²) √(∑_{i=1}^n v_i²)  (Cauchy's inequality).

Medvegyev (CEU) Presession 2014 90 / 444

Cauchy inequality

∑ni=1 (ui λvi )

2 0 for any λ.

Aλ2 + Bλ+ C $ λ2 ∑iv2i 2λ ∑

iuivi +∑

iu2i 0

Hence D $ B2 4AC 0 that is

4

∑iuivi

!2 4∑

iv2i ∑

iv2i 0

which implies that ∑iuivi

!2 ∑

iv2i ∑

iv2i ,

which implies the inequality.

Medvegyev (CEU) Presession 2014 91 / 444

Properties of convergent sequences

1 The limit is unique.
2 If lim_{n→∞} a_n and lim_{n→∞} b_n exist and are finite, then lim_{n→∞} (a_n b_n) and lim_{n→∞} (a_n + b_n) also exist and

lim_{n→∞} (a_n b_n) = (lim_{n→∞} a_n)(lim_{n→∞} b_n),
lim_{n→∞} (a_n + b_n) = lim_{n→∞} a_n + lim_{n→∞} b_n.

3 If lim_{n→∞} a_n exists, is finite and is not zero, then a_n⁻¹ is well defined for all n large enough, lim_{n→∞} a_n⁻¹ exists and

lim_{n→∞} a_n⁻¹ = (lim_{n→∞} a_n)⁻¹.

4 If lim_{n→∞} a_n and lim_{n→∞} b_n exist and a_n ≤ b_n, then lim_{n→∞} a_n ≤ lim_{n→∞} b_n.

Medvegyev (CEU) Presession 2014 92 / 444

Divergent series

Example

∑_{n=1}^∞ 1/n is by definition lim_{N→∞} ∑_{n=1}^N 1/n. If a_N := ∑_{n=1}^N 1/n then (a_N) is increasing. Grouping the terms,

1 + 1/2 + (1/3 + 1/4) + ... + (1/(2^(n−1) + 1) + ... + 1/2^n) ≥
≥ 1 + 1/2 + 2·(1/2²) + ... + 2^(n−1)·(1/2^n) =
= 1 + 1/2 + 1/2 + ... + 1/2 → ∞.

Hence ∑_{n=1}^∞ 1/n = ∞.

Medvegyev (CEU) Presession 2014 93 / 444

BolzanoWeierstrass theorem

Theorem. Every bounded sequence has a convergent subsequence.

It is sufficient to show that every sequence of real numbers contains a monotone (increasing or decreasing) subsequence. An index n is a peak if a_n ≥ a_m for every m ≥ n. For a sequence (a_n) there are two possibilities:

1 the sequence contains infinitely many peaks,
2 the number of peaks is finite.

In the first case, if n1 < n2 < ... are the peaks, then a_{n1} ≥ a_{n2} ≥ ... is a decreasing subsequence. In the second case there is an index n0 such that if n ≥ n0 then n is not a peak. So there is an n1 > n0 such that a_{n1} ≥ a_{n0}, and there is an n2 > n1 such that a_{n2} ≥ a_{n1}, etc. Hence in this case a_{n0} ≤ a_{n1} ≤ ... is an increasing subsequence.

Medvegyev (CEU) Presession 2014 94 / 444

Continuous functions

Definition. Let (X, d_X) and (Y, d_Y) be metric spaces. A function f : X → Y is continuous at x0 ∈ X if, whenever lim_n x_n = x0, the limit lim_{n→∞} f(x_n) exists and its value is f(x0). A function f : X → Y is continuous if

lim_{n→∞} f(x_n) = f(lim_{n→∞} x_n)

whenever lim_n x_n exists.

Alternatively:

Definition. Let (X, d_X) and (Y, d_Y) be metric spaces. A function f : X → Y is continuous at x0 ∈ X if for every ε > 0 there is a δ > 0 such that d_X(x, x0) < δ implies d_Y(f(x), f(x0)) < ε. f is continuous on X if it is continuous at every point of X.

Medvegyev (CEU) Presession 2014 95 / 444

Homework

Prove that the two definitions are equivalent.

If f and g are continuous then fg and f + g are continuous.

If f is continuous at x0 and f(x0) ≠ 0 then 1/f(x) is also continuous at x0.

Medvegyev (CEU) Presession 2014 96 / 444

Theorem of Weierstrass

Theorem. If f : [a, b] → R is continuous then there is an x0 ∈ [a, b] with

f(x0) = max {f(x) : x ∈ [a, b]}.

First one proves that f is bounded from above. If not, then f(x_n) ≥ n for some sequence (x_n) ⊆ [a, b]. One can assume that (x_n) converges to some x0. Hence

∞ > f(x0) = f(lim_n x_n) = lim_n f(x_n) ≥ lim_n n = ∞,

a contradiction. As f is bounded from above, M := sup{f(x) : x ∈ [a, b]} < ∞. If f(x) < M for every x, then g(x) := 1/(M − f(x)) is a continuous but unbounded function on [a, b], again a contradiction.

Medvegyev (CEU) Presession 2014 97 / 444

Limit of a function

Definition. Let (X, d_X) and (Y, d_Y) be metric spaces. Assume that a function f maps X \ {x0} to Y. We say that f has a limit at x0, and the limit is A ∈ Y, if the function

f̃(x) := f(x) if x ∈ X \ {x0},  f̃(x0) := A

is continuous at x0 ∈ X. Notation:

lim_{x→x0} f(x) = A.

Medvegyev (CEU) Presession 2014 98 / 444

Right and left limits are not equal

Example. Calculate lim_{x→0} 1/x!

lim_{x↘0} 1/x = +∞,  lim_{x↗0} 1/x = −∞,

and the limit does not exist.

Example. Calculate lim_{x→2} (2x + 1)/(x − 2)!

lim_{x↘2} (2x + 1)/(x − 2) = +∞,  lim_{x↗2} (2x + 1)/(x − 2) = −∞,

and the limit does not exist.

Medvegyev (CEU) Presession 2014 99 / 444

Right and left limits are not equal

Example

Calculate lim_{x→0} 2^(1/x).

lim_{x↘0} 2^(1/x) = 2^(lim_{x↘0} 1/x) = 2^(+∞) = ∞,
lim_{x↗0} 2^(1/x) = 2^(lim_{x↗0} 1/x) = 2^(−∞) = 0.

Medvegyev (CEU) Presession 2014 100 / 444

Differentiation

Definition. A function f : (a, b) → R is differentiable at x0 ∈ (a, b), with derivative f′(x0), if

lim_{h→0} (f(x0 + h) − f(x0))/h =: f′(x0) =: (df/dx)(x0),

where the limit f′(x0) is finite.

Medvegyev (CEU) Presession 2014 101 / 444

Differentiation

Lemma. A function f : (a, b) → R is differentiable at x0 ∈ (a, b) if and only if

f(x0 + h) = f(x0) + f′(x0) h + o(h),

where

lim_{h→0} o(h)/h = 0.

Loosely speaking, the derivative can be thought of as how fast a quantity is changing at a given point.

Medvegyev (CEU) Presession 2014 102 / 444

Differentiation

Example. Calculate the derivative of the function f(x) = √x. If x > 0 then

f′(x) := lim_{h→0} (√(x + h) − √x)/h = lim_{h→0} (x + h − x)/(h(√(x + h) + √x)) =
= lim_{h→0} 1/(√(x + h) + √x) = 1/(2√x).

If x = 0 then f′(x) does not exist, as the limit is +∞.

Medvegyev (CEU) Presession 2014 103 / 444

Differentiation

Example. Calculate the derivative of the function f(x) = |x| at the point x = 0.

(f(h) − f(0))/h = |h|/h = 1 if h > 0, −1 if h < 0.

Hence the limit

lim_{h→0} (f(h) − f(0))/h

does not exist.

Medvegyev (CEU) Presession 2014 104 / 444

Differentiation

Lemma. Every differentiable function is continuous.

If f′(x) is finite then

|f(x + h) − f(x)| = |(f(x + h) − f(x))/h| · |h| ≤ K |h|

for h small enough. This implies that if h → 0 then f(x + h) → f(x).

Medvegyev (CEU) Presession 2014 105 / 444

Homework

Prove the rules of differentiation:

(f(x) + g(x))′ = f′(x) + g′(x),
(a f(x))′ = a f′(x),
(f(x) g(x))′ = f(x) g′(x) + f′(x) g(x),
(1/f(x))′ = −f′(x)/f²(x),  f(x) ≠ 0,
(f(x)/g(x))′ = (f′(x) g(x) − f(x) g′(x))/g²(x),  g(x) ≠ 0,
(f(g(x)))′ = f′(g(x)) g′(x).

Medvegyev (CEU) Presession 2014 106 / 444

Fermat's principle

Theorem. If x0 ∈ (a, b), f : (a, b) → R has a local minimum or maximum at x0 and f′(x0) exists, then f′(x0) = 0.

If x0 is a local minimum and h > 0 then

(f(x0 + h) − f(x0))/h ≥ 0;

if h < 0 then

(f(x0 + h) − f(x0))/h ≤ 0.

If h → 0 the limit is therefore zero (if it exists).

Medvegyev (CEU) Presession 2014 107 / 444

Lagrange's mean value theorem

Theorem. If f : [a, b] → R is continuous and differentiable over (a, b) then there is a ξ ∈ (a, b) with

f′(ξ) = (f(b) − f(a))/(b − a).

Medvegyev (CEU) Presession 2014 108 / 444

Lagrange's mean value theorem, proof

Let

g(x) := (f(b) − f(a))/(b − a) · x − f(x).

Obviously g(a) = g(b), as

(f(b) − f(a))/(b − a) · a − f(a) = (f(b) − f(a))/(b − a) · b − f(b).

If g is not constant then there is an a < ξ < b where g attains either its maximum or its minimum. By Fermat's principle

0 = g′(ξ) = (f(b) − f(a))/(b − a) − f′(ξ).

Analysis of functions

Let f be a differentiable function.

1 f is increasing if and only if f′ ≥ 0.
2 f is decreasing if and only if f′ ≤ 0.
3 f is convex if and only if f′ is increasing.
4 f is concave if and only if f′ is decreasing.

Analysis of functions

By definition a function f is convex on [a, b] if for all λ ∈ [0, 1]

f(λx + (1 − λ)y) ≤ λf(x) + (1 − λ)f(y),  x, y ∈ [a, b].

Reordering,

λ(f(x) − f(λx + (1 − λ)y)) + (1 − λ)(f(y) − f(λx + (1 − λ)y)) ≥ 0.

Medvegyev (CEU) Presession 2014 111 / 444

Analysis of functions

On the left side, by Lagrange's theorem, the expression equals

λ f′(ξ1)(x − (λx + (1 − λ)y)) + (1 − λ) f′(ξ2)(y − (λx + (1 − λ)y)),

which is

λ(1 − λ) f′(ξ1)(x − y) + (1 − λ)λ f′(ξ2)(y − x),

which is

λ(1 − λ)(x − y)(f′(ξ1) − f′(ξ2)).

If f′ is increasing then, for x ≥ y, f′(ξ1) ≥ f′(ξ2), so the product is non-negative and f is convex.

Medvegyev (CEU) Presession 2014 112 / 444

Analysis of functions

On the other hand, assume that f is convex. If u < v < x, then

(x − v)/(x − u) · u + (v − u)/(x − u) · x = (xu − vu + vx − ux)/(x − u) = (vx − vu)/(x − u) = v,

therefore

f(v) ≤ (x − v)/(x − u) · f(u) + (v − u)/(x − u) · f(x).

Rearranging,

(x − u) f(v) ≤ (x − v) f(u) + (v − u) f(x) =
= (x − v) f(u) + ((x − u) − (x − v)) f(x).

Medvegyev (CEU) Presession 2014 113 / 444

Analysis of functions

(x u) (f (v) f (x)) (x v) (f (u) f (x))The number

(x u) (x v) 0so

f (v) f (x)x v f (u) f (x)

x u ,

that is if u < v < x

f (u) f (x)u x f (v) f (x)

v x .

Medvegyev (CEU) Presession 2014 114 / 444

Analysis of functions

If x ↘ v, then using the fact that f is continuous at v,

(f(u) − f(v))/(u − v) ≤ f′(v).

Similarly, with u < x < v,

(f(u) − f(x))/(u − x) ≤ (f(v) − f(x))/(v − x),

so letting x ↘ u,

f′(u) ≤ (f(u) − f(v))/(u − v).

Hence f′(u) ≤ f′(v), that is, the derivative f′ is increasing.

Medvegyev (CEU) Presession 2014 115 / 444

Riemann's integral

Definition. Let f : [a, b] → R and let

∆ : a = x0 < x1 < ... < x_n = b

be a partition of [a, b]. Let

S_∆ := ∑_k sup_{x∈(x_k, x_{k+1})} f(x) · (x_{k+1} − x_k),
s_∆ := ∑_k inf_{x∈(x_k, x_{k+1})} f(x) · (x_{k+1} − x_k).

Then

∫_a^b f(x) dx = inf_∆ S_∆ = sup_∆ s_∆,

provided the infimum is equal to the supremum.

Medvegyev (CEU) Presession 2014 116 / 444

Fundamental theorem of calculus

Theorem. If f : [a, b] → R is continuous, f′ exists over (a, b) and f′ is Riemann integrable, then

∫_a^b f′(x) dx = f(b) − f(a).

Sometimes it is written as

∫_a^b f(x) dx = F(b) − F(a),

where F is an antiderivative of f, that is F′ = f, and f is Riemann integrable.

Medvegyev (CEU) Presession 2014 117 / 444

Fundamental theorem of calculus

Let (x_k)_{k=0}^n be a partition of [a, b], x0 := a, x_n := b. By Lagrange's theorem

F(b) − F(a) = ∑_{k=1}^n (F(x_k) − F(x_{k−1})) = ∑_{k=1}^n f(ξ_k)(x_k − x_{k−1}).

So if

S_n := ∑_k sup{f(x) : x ∈ [x_{k−1}, x_k]}(x_k − x_{k−1}),
s_n := ∑_k inf{f(x) : x ∈ [x_{k−1}, x_k]}(x_k − x_{k−1}),

then

s_n ≤ F(b) − F(a) ≤ S_n.

By the definition of the integral

F(b) − F(a) = inf S_n = sup s_n.

Medvegyev (CEU) Presession 2014 118 / 444
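A crude Riemann-sum sketch of the fundamental theorem above for f(x) = 2x on [0, 1]: the sums approach F(1) − F(0) = 1 (names and the example are mine):

def riemann_sum(f, a, b, n):
    h = (b - a) / n
    return sum(f(a + (k + 0.5) * h) * h for k in range(n))  # midpoint rule

for n in (10, 100, 1000):
    print(n, riemann_sum(lambda x: 2 * x, 0.0, 1.0, n))      # -> 1.0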

Stieltjes integral

Definition. Let f : [a, b] → R and let F : [a, b] → R be an increasing function. Let

∆ : a = x0 < x1 < ... < x_n = b

be a partition of [a, b]. Let

S_∆ := ∑_k sup_{x∈(x_k, x_{k+1})} f(x) · (F(x_{k+1}) − F(x_k)),
s_∆ := ∑_k inf_{x∈(x_k, x_{k+1})} f(x) · (F(x_{k+1}) − F(x_k)).

Then

∫_a^b f(x) dF(x) = inf_∆ S_∆ = sup_∆ s_∆,

provided the infimum is equal to the supremum.

Medvegyev (CEU) Presession 2014 119 / 444

Stieltjes integral

Theorem. If f is continuous and F is increasing then ∫_a^b f dF is well defined on any [a, b].

Definition. ∫_a^∞ f dF := lim_{N→∞} ∫_a^N f dF.

Definition. If ξ is a random variable and F is the cumulative distribution function of ξ, then

E(ξ) = ∫_{−∞}^∞ x dF(x)

if the Stieltjes integral is absolutely convergent, that is, if ∫_{−∞}^∞ |x| dF(x) < ∞. (Otherwise the expected value is not defined.)

Medvegyev (CEU) Presession 2014 120 / 444

Homework

Prove that if F(x) = ∫_{−∞}^x f(t) dt then ∫_a^b h(x) dF(x) = ∫_a^b h(x) f(x) dx.

Calculate the following integrals:

∫ (x² + 5)/√x dx,  ∫ x³/(x + 5) dx,  ∫ ⁵√(3x − 1) dx,  ∫ √((1 − x)/(1 + x)) dx,
∫ (sin √x)/√x dx,  ∫_1^10 dx/(x√(x − 1)),  ∫_0^1 e^(2x)/⁴√(1 + e^(2x)) dx,  ∫ dx/sin⁴x.

Medvegyev (CEU) Presession 2014 121 / 444

Taylor's formula

Theorem. Let f be an (n + 1) times differentiable function on [a, b] ⊆ (c, d). For every x ∈ [a, b] there is a ξ such that

f(x) = f(x0)/0! + f′(x0)/1! (x − x0) + ... + f^(n)(x0)/n! (x − x0)^n + f^(n+1)(ξ)/(n + 1)! (x − x0)^(n+1),

where x0 can be either a or b and ξ is between x and x0. Specially,

f(b) = f(a)/0! + f′(a)/1! (b − a) + ... + f^(n)(a)/n! (b − a)^n + f^(n+1)(ξ)/(n + 1)! (b − a)^(n+1).

Taylor's formula

Theorem. For every natural number p one can also write the remainder as

R_n(x0, x) = f^(n+1)(ξ)/(n!·p) · (x − ξ)^(n+1−p) (x − x0)^p.

Specially, if p = 1 then

R_n(x0, x) = f^(n+1)(ξ)/n! · (x − ξ)^n (x − x0).

For p = n + 1 this is the Lagrange form of the remainder; for p = 1 this is the Cauchy form of the remainder.

Medvegyev (CEU) Presession 2014 123 / 444

Taylor's formula

Example. Calculate the Taylor series of f(x) = √x around x0 = 1.

f(x0) = 1,  f′(x0) = (1/2) x^(−1/2) = 1/2,  f″(x0) = −(1/2²) x^(−3/2) = −1/2²,  f‴(x0) = (3/2³) x^(−5/2) = 3/2³.

In general

f^(n)(x0) = (−1)^(n+1) ((2n − 3) · ... · 1)/2^n · x^(−(2n−1)/2) = (−1)^(n+1) ((2n − 3) · ... · 1)/2^n,

so

f^(n)(x0)/n! = (−1)^(n+1) ((2n − 3) · ... · 1)/(2^n n!) = (−1)^(n+1) ((2n − 3) · ... · 1)/((2n)(2n − 2) · ... · 2).

Therefore R = 1.
Medvegyev (CEU) Presession 2014 124 / 444

Taylor's formula

One should check that the series converges to √x. From the Lagrange form of the remainder, if x > 1 = x0 then ξ ≥ 1 and the derivative factors are bounded, so

R_n(x, x0) = f^(n+1)(ξ)/(n + 1)! · (x − 1)^(n+1) = b_n q^(n+1) → 0.

If x < x0 = 1 then one cannot use the Lagrange form of the remainder. In this case calculate the Cauchy form and, using that 0 < x < ξ < 1,

f^(n+1)(ξ)/n! · (x − ξ)^n (x − x0) = c_n ξ^(−(2n+1)/2) (x − ξ)^n (x − x0) =
= c_n ξ^(−1/2) ξ^(−n) (x − ξ)^n (x − x0) = c_n ξ^(−1/2) (x/ξ − 1)^n (x − x0) → 0.

Homework

Calculate the Taylor expansion of the following functions. Calculate the radius of convergence.

The exponential function exp(x), x0 = 0.

The trigonometric functions sin x and cos x, x0 = 0.

The logarithmic function ln x, x0 = 1.

Medvegyev (CEU) Presession 2014 126 / 444

Homework

Find the following limits:

lim_{x→∞} x sin(1/x),  lim_{x→0} x·ctg x,  lim_{x→∞} ((3x + 1)/(3x + 7))^(x+3).

Analyze the following functions:

y = (x − 1)/(x² − x),  y = (x − 1)/x²,  y = 1/ln x,  y = x ln x.

Analyze the following functions:

y = x^x,  y = x^(1/x),  y = e^(−x²).

Medvegyev (CEU) Presession 2014 127 / 444

Separable variables

dy/dx = g(x) h(y),  p(y) dy/dx = g(x)  (where p := 1/h).

If y = f(x) is a solution then

∫ p(f(x)) f′(x) dx = ∫ g(x) dx.

Substituting y = f(x),

∫ p(y) dy = ∫ g(x) dx.

If H(y) = ∫ p(y) dy and G(x) = ∫ g(x) dx are the antiderivatives then H(f(x)) = G(x), so f(x) is an implicit solution of the equation F(x, y) = H(y) − G(x) = 0.

Medvegyev (CEU) Presession 2014 128 / 444

Separable variables

Example

Solve (1 + x) dy − y dx = 0.

dy/y = dx/(1 + x),
ln|y| = ln|x + 1| + c,
|y| = exp(ln|x + 1| + c),
y = c exp(ln|x + 1|),
y = c(x + 1).

Observe that there are many different c's in the calculation.

Medvegyev (CEU) Presession 2014 129 / 444
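A SymPy cross-check of the separable example above, (1 + x) dy − y dx = 0 (assuming SymPy is available; the constant appears as SymPy's C1 rather than c):

import sympy as sp

x = sp.symbols('x')
y = sp.Function('y')
print(sp.dsolve(sp.Eq((1 + x) * y(x).diff(x) - y(x), 0), y(x)))
# y(x) = C1*(x + 1)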

Separable variables

Example

Solve the initial value problem dy/dx = −x/y, y(4) = 3.

y dy = −x dx,
y²/2 = −x²/2 + c,
y² + x² = c,  3² + 4² = 25,
x² + y² = 25.

Medvegyev (CEU) Presession 2014 130 / 444

Separable variables

Example

Solve x y⁴ dx + (y² + 2) e^(−3x) dy = 0.

Separating the variables,

x e^(3x) dx + ((y² + 2)/y⁴) dy = 0.

Integrating,

(1/3) x e^(3x) − (1/9) e^(3x) − y⁻¹ − (2/3) y⁻³ = c.

We lost the solution y = 0, as

x·0 + (0 + 2) e^(−3x) · (d/dx) 0 = 0.

Medvegyev (CEU) Presession 2014 131 / 444

Separable variables

Example

Solve the equation dy/dx = y² − 4, y(0) = −2.

dy/(y² − 4) = dx,
dy/(4(y − 2)) − dy/(4(y + 2)) = dx,
(1/4) ln|y − 2| − (1/4) ln|y + 2| = x + c,
ln|(y − 2)/(y + 2)| = 4x + c,  (y − 2)/(y + 2) = c exp(4x),
y = 2 (1 + c exp(4x))/(1 − c exp(4x)),

which has no solution with x = 0 and y = −2: we lost the solution y = −2.

Medvegyev (CEU) Presession 2014 132 / 444

Linear equations

Example

Solve x dy/dx − 4y = x⁶ e^x.

dy/dx − (4/x) y = x⁵ e^x.

Multiplying by the integrating factor exp(∫ −(4/x) dx) = exp(−4 ln|x|) = |x|⁻⁴,

(exp(∫ −(4/x) dx) · y)′ = x⁵ e^x exp(−4 ln|x|),
|x|⁻⁴ y = ∫ (x⁵/|x|⁴) e^x dx + c,  x⁻⁴ y = ∫ x e^x dx + c,
x⁻⁴ y = x e^x − e^x + c,  y = x⁵ e^x − x⁴ e^x + c x⁴.

Medvegyev (CEU) Presession 2014 133 / 444

Linear equations

Example

Solve (x² + 9) dy/dx + xy = 0.

dy/dx + (x/(x² + 9)) y = 0.

Multiplying by the integrating factor exp((1/2) ∫ 2x/(x² + 9) dx) = exp((1/2) ln(x² + 9)) = √(x² + 9),

√(x² + 9) dy/dx + (x/√(x² + 9)) y = 0,
(√(x² + 9) y)′ = 0,
√(x² + 9) y = c,
y = c/√(x² + 9).

Medvegyev (CEU) Presession 2014 134 / 444

Linear equations

Example. Solve x dy/dx + y = 2x.

We try to solve it on (0, ∞):

dy/dx + (1/x) y = 2.

Multiplying by exp(∫ dx/x) = x,

x dy/dx + x (1/x) y = x dy/dx + 1·y = 2x,
(xy)′ = 2x,  xy = 2 ∫ x dx + c = x² + c,
y = x²/x + c/x = x + c/x.

Medvegyev (CEU) Presession 2014 135 / 444

Linear equations

Example. Solve x dy/dx + y = 2x.

We now solve it on (−∞, 0):

dy/dx + (1/x) y = 2.

Here exp(∫ dx/x) = exp(ln|x|) = −x, so multiplying by it,

−x dy/dx − x (1/x) y = −2x,
(−xy)′ = −2x,  xy = 2 ∫ x dx + c = x² + c,
y = x²/x + c/x = x + c/x.

Medvegyev (CEU) Presession 2014 136 / 444

Linear equations

If the linear equation is homogeneous, that is a(x) y′ + b(x) y = 0, then one can solve it as a separable equation:

dy/y = −(b(x)/a(x)) dx,  ln|y| = −∫ (b(x)/a(x)) dx + c,
y = c exp(−∫ (b(x)/a(x)) dx).

Which is the same as

y′(x) exp(∫ (b(x)/a(x)) dx) + y (b(x)/a(x)) exp(∫ (b(x)/a(x)) dx) = 0,
(y(x) exp(∫ (b(x)/a(x)) dx))′ = 0,
y(x) exp(∫ (b(x)/a(x)) dx) = c.

Medvegyev (CEU) Presession 2014 137 / 444

Linear equations

To find the general solution of the inhomogeneous equation y′ + P(x) y = Q(x) one should find a particular solution. One can try to find it by the variation of parameters, where y below is a solution of the homogeneous equation:

y_p(x) = u(x) y(x),
y_p′(x) = u′(x) y(x) + y′(x) u(x),
u′(x) y(x) + y′(x) u(x) + P(x) u(x) y(x) = Q(x),
u′(x) y(x) + u(x)(y′(x) + P(x) y(x)) = Q(x),
u′(x) y(x) + u(x)·0 = Q(x),
du/dx = Q(x)/y(x),  u(x) = ∫ (Q(x)/y(x)) dx.

Medvegyev (CEU) Presession 2014 138 / 444

Exact equations

We want to solve an equation of the form

M(x, y) + N(x, y) y′ = 0.

Can we use the implicit differentiation rule

F(x, y(x)) = c  ⟹  ∂F/∂x (x, y(x)) + ∂F/∂y (x, y(x)) y′ = 0?

If

∂F/∂x (x, y(x)) = M(x, y(x)),  ∂F/∂y (x, y(x)) = N(x, y(x)),

then F(x, y) = c is an implicit solution. As the Hesse matrix is symmetric,

∂M(x, y)/∂y = ∂²F(x, y)/∂x∂y = ∂²F(x, y)/∂y∂x = ∂N(x, y)/∂x.

Medvegyev (CEU) Presession 2014 139 / 444

Exact equations

Example

Solve the equation 2x + y² + 2xy y′ = 0.

Formally one can write it as (2x + y²) dx + 2xy dy = 0. Now

∂/∂x (x² + xy²) + ∂/∂y (x² + xy²) · y′ = (2x + y²) + (2xy) y′ = 0.

Hence the implicit solution is F(x, y) = x² + xy² = c. Observe that

∂/∂y (2x + y²) = 2y = ∂/∂x (2xy).

Medvegyev (CEU) Presession 2014 140 / 444

Exact equations

Example

Solve the equation x²y³ dx + x³y² dy = 0.

With the usual notation the equation is x²y³ + x³y² y′ = 0. Obviously

d/dx ((1/3) x³ y³(x)) = x² y³(x) + x³ y²(x) y′(x),

hence the implicit solution is F(x, y) = x³y³ = c. Observe that

∂(x²y³)/∂y = 3x²y² = ∂(x³y²)/∂x.

Medvegyev (CEU) Presession 2014 141 / 444

Exact equations

Definition. An equation M(x, y) + N(x, y) y′ = 0 is called exact if ∂M/∂y (x, y) = ∂N/∂x (x, y).

Theorem. If M, N and their partial derivatives are continuous and the equation is exact then the equation has an implicit solution F(x, y(x)) = c, that is, there is a differentiable function F(x, y) such that

∂F(x, y)/∂x = M(x, y),  ∂F(x, y)/∂y = N(x, y).

Observe that by Young's theorem, if for some F(x, y) the second order partial derivatives exist and are continuous then F is twice differentiable, hence the Hesse matrix is symmetric. The theorem is in some sense the converse of Young's theorem.

Medvegyev (CEU) Presession 2014 142 / 444

Exact equations

Definition. A function y(x) is a solution of an exact equation if it satisfies the implicit equation F(x, y) = c. In this case, differentiating by x and using the chain rule,

F(x, y(x)) = c,
∂F/∂x (x, y(x)) + ∂F/∂y (x, y(x)) y′(x) = 0,
M(x, y(x)) + N(x, y(x)) y′(x) = 0.

Medvegyev (CEU) Presession 2014 143 / 444

Exact equations

Example

Solve the equation (y cos x + 2x e^y) + (sin x + x² e^y − 1) y′ = 0.

The equation is exact, as

M′_y(x, y) = cos x + 2x e^y = N′_x(x, y).

So there is an implicit solution:

F′_x(x, y) = y cos x + 2x e^y,
F′_y(x, y) = sin x + x² e^y − 1.

Medvegyev (CEU) Presession 2014 144 / 444

Exact equations

Integrating the first line by x,

F(x, y) = ∫ (y cos x + 2x e^y) dx + c(y) = y sin x + x² e^y + c(y),
F′_y(x, y) = sin x + x² e^y + c′(y).

Hence c′(y) = −1, c(y) = −y. The implicit solution is

F(x, y) = y sin x + x² e^y − y = c.

Medvegyev (CEU) Presession 2014 145 / 444
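A symbolic verification of the exactness condition and of the potential F found above (SymPy assumed; not part of the original slides):

import sympy as sp

x, y = sp.symbols('x y')
M = y * sp.cos(x) + 2 * x * sp.exp(y)
N = sp.sin(x) + x**2 * sp.exp(y) - 1
print(sp.simplify(sp.diff(M, y) - sp.diff(N, x)))   # 0, so the equation is exact

F = y * sp.sin(x) + x**2 * sp.exp(y) - y
print(sp.simplify(sp.diff(F, x) - M), sp.simplify(sp.diff(F, y) - N))   # 0 0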

Exact equations

Example

Solve the equation (3xy + y²) + (x² + 2xy) y′ = 0.

M′_y(x, y) = 3x + 2y ≠ N′_x(x, y) = 2x + 2y,

so it is not an exact equation. If an F(x, y) existed then

F(x, y) = ∫ M(x, y) dx = ∫ (3xy + y²) dx = (3/2) x²y + xy² + c(y),
F′_y(x, y) = (3/2) x² + 2xy + c′(y) = x² + 2xy,
c′(y) = dc/dy = −(1/2) x²,

which cannot be solved independently of x.

Medvegyev (CEU) Presession 2014 146 / 444

Exact equations

Example

Solve the equation 2xy dx + (x² − 1) dy = 0.

M′_y(x, y) = 2x = N′_x(x, y).

F(x, y) = ∫ M(x, y) dx = x²y + c(y),
∂F/∂y (x, y) = x² + c′(y) = x² − 1,
c′(y) = −1,  c(y) = −y + c,
F(x, y) = x²y − y,
x²y − y = c,  y(x² − 1) = c,
y = c/(x² − 1).

Medvegyev (CEU) Presession 2014 147 / 444

Exact equations

One can also solve it by separation of variables:

2x dx/(x² − 1) = −dy/y,
ln|x² − 1| = −ln|y| + c,
|x² − 1| = exp(−ln|y| + c),
x² − 1 = c y⁻¹,
y = c/(x² − 1).

Medvegyev (CEU) Presession 2014 148 / 444

Homework

Determine whether or not each of the equations is exact. If it is exact, find the solution.

1 (2x + 3) + (2y − 2) y′ = 0
2 (2x + 4y) + (2x − 2y) y′ = 0
3 (3x² − 2xy + 2) dx + (6y² − x² + 3) dy = 0
4 (2xy² + 2y) + (2x²y + 2x) y′ = 0
5 dy/dx = −(ax + by)/(bx + cy)
6 dy/dx = −(ax − by)/(bx − cy)
7 (e^x sin y − 2y sin x) dx + (e^x cos y + 2 cos x) dy = 0
8 (e^x sin y + 3y) dx − (3x − e^x sin y) dy = 0

Medvegyev (CEU) Presession 2014 149 / 444

Second order linear differential equations

Definition. The equation

P(t) y″ + Q(t) y′ + R(t) y = G(t)

is called a second order linear equation. If P, Q, R are constant then it is a constant coefficient equation. If G = 0 then it is homogeneous, otherwise nonhomogeneous (inhomogeneous). The constant coefficient, second order, homogeneous equation is

a x″ + b x′ + c x = 0.

The polynomial

a λ² + b λ + c = 0

is the characteristic polynomial of the differential equation.

Medvegyev (CEU) Presession 2014 150 / 444

Second order linear differential equations

1. If λ0 is a solution of the characteristic equation then y = exp(λ0 t) is a solution of the differential equation:

a y″ + b y′ + c y = (a λ0² + b λ0 + c) exp(λ0 t) = 0.

Medvegyev (CEU) Presession 2014 151 / 444

Second order linear differential equations

2. If the characteristic polynomial has two different (complex) roots then the two solutions are linearly independent. We prove more: if

c1 exp(λ1 t) + c2 exp(λ2 t) = 0,

then, differentiating,

c1 λ1 exp(λ1 t) + c2 λ2 exp(λ2 t) = 0.

Medvegyev (CEU) Presession 2014 152 / 444

Second order linear differential equations

The two equations have only the trivial solution c1 = c2 = 0, because the determinant is not zero:

W = det[exp(λ1 t), exp(λ2 t); λ1 exp(λ1 t), λ2 exp(λ2 t)] =
= exp(λ1 t) exp(λ2 t) det[1, 1; λ1, λ2] = exp(λ1 t) exp(λ2 t)(λ2 − λ1) ≠ 0,

as by 1 = exp(0) = exp(z) exp(−z) the function exp(z) is never zero.

Medvegyev (CEU) Presession 2014 153 / 444

Second order linear differential equations

Lemma. If λ1 ≠ λ2 then

(exp(λ1 t), λ1 exp(λ1 t)) = exp(λ1 t) · (1, λ1),
(exp(λ2 t), λ2 exp(λ2 t)) = exp(λ2 t) · (1, λ2),

that is, the vectors

(y1(t), y1′(t)),  (y2(t), y2′(t))

are independent for any t. (y1 and y2 are the solutions of the equation.)

Medvegyev (CEU) Presession 2014 154 / 444

Second order linear differential equations

3. If there is a repeated root then we use d'Alembert's method, that is, we look for a second solution in the form y2(t) = v(t) exp(λ0 t).

y2′(t) = λ0 v(t) exp(λ0 t) + v′(t) exp(λ0 t),
y2″(t) = λ0² v(t) exp(λ0 t) + 2λ0 exp(λ0 t) v′(t) + v″(t) exp(λ0 t).

Substituting into the equation,

a (λ0² v(t) exp(λ0 t) + 2λ0 exp(λ0 t) v′(t) + v″(t) exp(λ0 t)) +
b (λ0 v(t) exp(λ0 t) + v′(t) exp(λ0 t)) +
c v(t) exp(λ0 t) = 0.

Medvegyev (CEU) Presession 2014 155 / 444

Second order linear differential equations

Using that, as we have just one root, λ0 = −b/(2a):

0 = exp(λ0 t) v(t) (a λ0² + b λ0 + c) + exp(λ0 t) v′(t) (b + 2aλ0) + a v″(t) exp(λ0 t) =
= exp(λ0 t) v(t)·0 + exp(λ0 t) v′(t)·0 + a v″(t) exp(λ0 t).

So v″(t) = 0, that is,

v(t) = c1 t + c2  ⟹  v(t) := t.

Medvegyev (CEU) Presession 2014 156 / 444

Second order linear differential equations

Now again the vectors

(y1(t), y1′(t)) = (exp(λ0 t), λ0 exp(λ0 t)),  (y2(t), y2′(t)) = (t exp(λ0 t), (tλ0 + 1) exp(λ0 t))

are linearly independent for every t, as

det[exp(λ0 t), t exp(λ0 t); λ0 exp(λ0 t), (tλ0 + 1) exp(λ0 t)] = (exp(λ0 t))² det[1, t; λ0, tλ0 + 1] =
= (exp(λ0 t))² (tλ0 + 1 − tλ0) = (exp(λ0 t))² ≠ 0.

Medvegyev (CEU) Presession 2014 157 / 444

Second order linear differential equations

4. If there are no real roots we use the complex exponential function to find two different complex solutions exp(λ1 t) and exp(λ2 t), where λ1 ≠ λ2 are complex numbers. As the equation is linear, for every complex number c1 and c2

c1 exp(λ1 t) + c2 exp(λ2 t)

is a complex solution. If the coefficients of the equation are real, the roots are conjugate, that is λ1 = a + ib and λ2 = a − ib. In this case

exp(λ1 t) = exp(at)(cos bt + i sin bt),
exp(λ2 t) = exp(at)(cos bt − i sin bt).

Medvegyev (CEU) Presession 2014 158 / 444

Second order linear differential equations

As c1 and c2 are arbitrary complex numbers,

y1(t) = (exp(λ1 t) + exp(λ2 t))/2 = exp(at) cos bt,
y2(t) = (exp(λ1 t) − exp(λ2 t))/(2i) = exp(at) sin bt

are real solutions.

Medvegyev (CEU) Presession 2014 159 / 444

Second order linear differential equations

Again, to prove the linear independence of the solutions one should show that cos bt and sin bt are independent:

det[cos bt, sin bt; −b sin bt, b cos bt] = b (cos² bt + sin² bt) = b ≠ 0,

as the roots are complex, so b ≠ 0.

Medvegyev (CEU) Presession 2014 160 / 444

Second order linear differential equations

Theorem. If y1 and y2 are the two (independent) solutions defined above then (y1(t), y1′(t)) and (y2(t), y2′(t)) are also independent for every t.

Definition. If we are looking for a solution for which y(t0) = y0 and y′(t0) = y0′ for some given (t0, y0, y0′), then we say that it is a Cauchy problem or an initial value problem.

Theorem. The solution of the initial value problem is unique.

Medvegyev (CEU) Presession 2014 161 / 444

Second order linear differential equations

Theorem. Let y1 and y2 be the two (independent) solutions above. If y is a solution then it is a linear combination of y1 and y2.

Fix a t0 and let y0 = y(t0) and y0′ = y′(t0). As y1 and y2 are independent, there are c1 and c2 such that

(y(t0), y′(t0)) = (y0, y0′) = c1 (y1(t0), y1′(t0)) + c2 (y2(t0), y2′(t0)).

Hence c1 y1 + c2 y2 is a solution of the initial value problem. As the solution of the initial value problem is unique, y(t) = c1 y1(t) + c2 y2(t) for every t.

Medvegyev (CEU) Presession 2014 162 / 444

Second order linear differential equations

Theorem. If the coefficients are real and if for some complex solution the initial values are real then the whole solution is real.

The point is that if the coefficients are real and there is a complex solution, then both the real and the imaginary parts are solutions. As the equation is linear and y(t) = Re y(t) + i·Im y(t) is a solution, the conjugate ȳ(t) = Re y(t) − i·Im y(t) is also a solution. So if Im y(t) ≠ 0 then there are two different solutions to the same initial value problem.

Medvegyev (CEU) Presession 2014 163 / 444

Definition. The set of functions y = c1 y1 + c2 y2 is the so-called general solution of the homogeneous equation.

Medvegyev (CEU) Presession 2014 164 / 444

Homework

In each problem find the general solution of the given differential equation:

y″ + 2y′ − 3y = 0
6y″ − y′ − y = 0
y″ + 5y′ = 0
y″ − 9y′ + 9y = 0
y″ − 2y′ − 2y = 0

Medvegyev (CEU) Presession 2014 165 / 444

Nonhomogeneous equations

Definition. The equations

a y″ + b y′ + c y = f(x)

are called nonhomogeneous second order linear differential equations.

Medvegyev (CEU) Presession 2014 166 / 444

Nonhomogeneous equations

Theorem. The solutions of the equation above are of the form y_p + L, where y_p is an arbitrary particular solution of the equation and L is the linear space of all solutions of the homogeneous equation

a y″ + b y′ + c y = 0.

If y1 and y2 are solutions then

a y1″ + b y1′ + c y1 = f(x),
a y2″ + b y2′ + c y2 = f(x),

hence

a (y1″ − y2″) + b (y1′ − y2′) + c (y1 − y2) = f(x) − f(x) = 0,

hence y1 − y2 ∈ L, that is y1 ∈ y2 + L := y_p + L.
Medvegyev (CEU) Presession 2014 167 / 444

Nonhomogeneous equations

The only question is how one can find a particular solution y_p. We will use the method of undetermined coefficients.

Medvegyev (CEU) Presession 2014 168 / 444

Nonhomogeneous equations

Example

Find a particular solution of y″ − 3y′ − 4y = 3e^(2t).

Assume that the solution has the form y_p = A e^(2t). Substituting back,

y_p′ = 2A e^(2t),  y_p″ = 4A e^(2t),
y_p″ − 3y_p′ − 4y_p = (4A − 6A − 4A) e^(2t) = −6A e^(2t) = 3 e^(2t),

which implies that A = −1/2. So our particular solution is

y_p = −(1/2) e^(2t).

Medvegyev (CEU) Presession 2014 169 / 444
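A SymPy cross-check of the particular solution above for y″ − 3y′ − 4y = 3e^(2t) (SymPy assumed; dsolve returns the general solution, which contains the term −exp(2t)/2):

import sympy as sp

t = sp.symbols('t')
y = sp.Function('y')
ode = sp.Eq(y(t).diff(t, 2) - 3 * y(t).diff(t) - 4 * y(t), 3 * sp.exp(2 * t))
print(sp.dsolve(ode, y(t)))
# y(t) = C1*exp(-t) + C2*exp(4*t) - exp(2*t)/2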

Nonhomogeneous equations

Example

Find a particular solution of y″ − 3y′ − 4y = 2 sin t.

Assume that the solution has the form y_p = A sin t + B cos t. Substituting back,

y_p′ = A cos t − B sin t,
y_p″ = −A sin t − B cos t,
y_p″ − 3y_p′ − 4y_p =
= −A sin t − B cos t − 3(A cos t − B sin t) − 4(A sin t + B cos t) =
= (−A + 3B − 4A) sin t + (−B − 3A − 4B) cos t = 2 sin t.

Medvegyev (CEU) Presession 2014 170 / 444

Nonhomogeneous equations

5A+ 3B = 2

3A 5B = 0

A =

2 30 5

5 33 5

=1034

,B =

5 23 0

5 33 5

=634

which implies that A = 5/17,B = 3/17.

Medvegyev (CEU) Presession 2014 171 / 444

Nonhomogeneous equations

Example

Find a particular solution of y'' + 4y = 2 cos 2t.

Try yp = A sin 2t + B cos 2t. In this case

yp' = 2A cos 2t − 2B sin 2t,
yp'' = −4A sin 2t − 4B cos 2t,

so yp'' + 4yp = 0 and this form is not working. Observe that the characteristic roots are given by λ² + 4 = 0, λ_{1,2} = ±2i, so yp is a solution of the homogeneous equation.

Medvegyev (CEU) Presession 2014 172 / 444

Nonhomogeneous equations

Assume that the particular solution is of the form yp = At sin 2t + Bt cos 2t.

yp' = A sin 2t + 2At cos 2t + B cos 2t − 2Bt sin 2t,
yp'' = 2A cos 2t + 2A cos 2t − 4At sin 2t − 2B sin 2t − 2B sin 2t − 4Bt cos 2t,
yp'' + 4yp = 4A cos 2t − 4B sin 2t − 4At sin 2t − 4Bt cos 2t + 4 (At sin 2t + Bt cos 2t)
           = 4A cos 2t − 4B sin 2t.

Medvegyev (CEU) Presession 2014 173 / 444

Nonhomogeneous equations

Hence

4A cos 2t − 4B sin 2t = 2 cos 2t,

which implies that A = 1/2, B = 0, so

yp = (t/2) sin 2t.

Medvegyev (CEU) Presession 2014 174 / 444

Nonhomogeneous equations

Rewrite the equation as

y'' + 4y = 2 exp(2it).

The solution of the original equation is the real part. Let yp = At exp(2it), where A is complex.

yp' = A (exp(2it) + 2it exp(2it)),
yp'' = A (2i exp(2it) + 2i exp(2it) + 4i² t exp(2it)) = A exp(2it) (4i − 4t),
yp'' + 4yp = 4iA exp(2it) = 2 exp(2it),

4iA = 2,   A = 1/(2i) = −i/2,

yp = −(i/2) t (cos 2t + i sin 2t) = −(it/2) cos 2t + (t/2) sin 2t.

The real part is yp = (t/2) sin 2t.

Medvegyev (CEU) Presession 2014 175 / 444

Nonhomogeneous equations

Lemma
If the assumed form duplicates a solution of the homogeneous equation, multiply it by t. If that is not sufficient, multiply it by t again.

Medvegyev (CEU) Presession 2014 176 / 444

Nonhomogeneous equations

Example
Find a particular solution of

y'' − 3y' − 4y = 8 exp(t) cos 2t.

We are looking for the solution as

yp(t) = A exp(t) cos 2t + B exp(t) sin 2t,
yp'(t) = (A + 2B) exp(t) cos 2t + (−2A + B) exp(t) sin 2t,
yp''(t) = (−3A + 4B) exp(t) cos 2t + (−4A − 3B) exp(t) sin 2t.

Substituting back:

−10A − 2B = 8,   2A − 10B = 0,   so A = −10/13, B = −2/13.

Medvegyev (CEU) Presession 2014 177 / 444

Nonhomogeneous equations

Example
Find a particular solution of

y'' + 4y' − 2y = 2t² − 3t + 6.

We are looking for the solution as y = At² + Bt + C.

y' = 2At + B,   y'' = 2A.

Substituting back:

2A + (8At + 4B) − (2At² + 2Bt + 2C) = 2t² − 3t + 6,
−2At² + (8A − 2B) t + (2A + 4B − 2C) = 2t² − 3t + 6.

Hence −2A = 2, 8A − 2B = −3, 2A + 4B − 2C = 6. That is A = −1, B = −5/2, C = −9, and

yp = −t² − (5/2) t − 9.

Medvegyev (CEU) Presession 2014 178 / 444

Nonhomogeneous equations

Example
Find a particular solution of

y'' − 5y' + 4y = 8 exp(t).

The trial solution A exp(t) is not working as λ = 1 is a root of the characteristic polynomial λ² − 5λ + 4 = 0. So we try At exp(t):

y' = At exp(t) + A exp(t),
y'' = At exp(t) + 2A exp(t).

Medvegyev (CEU) Presession 2014 179 / 444

Nonhomogeneous equations

Substituting back:

y'' − 5y' + 4y = At exp(t) + 2A exp(t) − 5 (At exp(t) + A exp(t)) + 4At exp(t)
              = −3A exp(t) = 8 exp(t),

A = −8/3,   so   yp = −(8/3) t exp(t).

Medvegyev (CEU) Presession 2014 180 / 444

Nonhomogeneous equations

Example
Find a particular solution of

y'' − 2y' + y = exp(t).

The general solution of the homogeneous equation is y = c1 exp(t) + c2 t exp(t). Hence we try y = At² exp(t).

y' = At² exp(t) + 2At exp(t),
y'' = At² exp(t) + 2At exp(t) + 2At exp(t) + 2A exp(t) = At² exp(t) + 4At exp(t) + 2A exp(t).

Medvegyev (CEU) Presession 2014 181 / 444

Nonhomogeneous equations

Substituting back:

(At² exp(t) + 4At exp(t) + 2A exp(t)) − 2 (At² exp(t) + 2At exp(t)) + At² exp(t) = exp(t),
(At² + 4At + 2A) − 2 (At² + 2At) + At² = 1,
2A = 1,   A = 1/2,   so   yp = (t²/2) exp(t).

Medvegyev (CEU) Presession 2014 182 / 444

Nonhomogeneous equations

Lemma

(u(x) v(x))^{(n)} = Σ_{k=0}^{n} C(n, k) u^{(k)}(x) v^{(n−k)}(x).

In particular

(u(x) v(x))'' = u''(x) v(x) + 2u'(x) v'(x) + u(x) v''(x).

Medvegyev (CEU) Presession 2014 183 / 444

Nonhomogeneous equations

Checking the particular solution:

((t²/2) exp(t))' = t exp(t) + (t²/2) exp(t),
((t²/2) exp(t))'' = (t²/2) exp(t) + 2t exp(t) + exp(t),

y'' − 2y' + y = (t²/2) exp(t) + 2t exp(t) + exp(t) − 2 (t exp(t) + (t²/2) exp(t)) + (t²/2) exp(t) = exp(t).

Medvegyev (CEU) Presession 2014 184 / 444

Homework

Find the solution of the given initial value problems:

1  y'' + y' − 2y = 2t,          y(0) = 0,  y'(0) = 1.
2  y'' + 4y = t² + 3eᵗ,         y(0) = 0,  y'(0) = 2.
3  y'' − 2y' + y = t eᵗ + 4,    y(0) = 1,  y'(0) = 1.
4  y'' − 2y' − 3y = 3t e^{2t},  y(0) = 1,  y'(0) = 0.
5  y'' + 4y = 3 sin 2t,         y(0) = 2,  y'(0) = −1.

Medvegyev (CEU) Presession 2014 185 / 444

Linear di¤erential equation systems

Solving with Matlab. Let y'' = −(b/a) y' − (c/a) y and let z = y'. Then

( z' ; y' ) = ( y'' ; y' ) = ( −b/a  −c/a ; 1  0 ) ( y' ; y ) = ( −b/a  −c/a ; 1  0 ) ( z ; y ).

So every second order, constant coefficient linear differential equation is a matrix differential equation. The solution is

( z ; y ) = exp( t ( −b/a  −c/a ; 1  0 ) ) ( z(0) ; y(0) ).

Medvegyev (CEU) Presession 2014 186 / 444
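To illustrate the reduction to a first order system, the sketch below evaluates exp(At)(z(0), y(0)) numerically and compares it with the known scalar solution. It assumes numpy and scipy are available; the coefficients and initial values are made-up illustrations, not from the slides.

import numpy as np
from scipy.linalg import expm

a, b, c = 1.0, 3.0, 2.0                       # y'' + 3y' + 2y = 0, roots -1 and -2
A = np.array([[-b / a, -c / a],
              [1.0,     0.0]])                # state (z, y) with z = y'
z0, y0 = 0.0, 1.0                             # y(0) = 1, y'(0) = 0
for t in (0.5, 1.0, 2.0):
    z, y = expm(A * t) @ np.array([z0, y0])
    exact = 2 * np.exp(-t) - np.exp(-2 * t)   # closed form for these initial values
    print(t, y, exact)                        # the last two columns agree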

Linear di¤erential equation systems

det ( −b/a − λ   −c/a ; 1   −λ ) = (b/a) λ + λ² + c/a.

The characteristic polynomial of the equation is the characteristic polynomial of the matrix.

Medvegyev (CEU) Presession 2014 187 / 444

Second order homogeneous di¤erence equations

Now we turn to the general solution of the second order homogeneous difference equation

y_{t+2} + a y_{t+1} + b y_t = 0.

As with the differential equations let us consider the characteristic polynomial

λ² + aλ + b = 0.

Depending on the roots there are three cases.

Medvegyev (CEU) Presession 2014 188 / 444

Second order homogeneous di¤erence equations, two roots

If λ1 and λ2 are distinct real roots then the general solution is

y_t = c1 λ1^t + c2 λ2^t.

One should prove that λk^t (k = 1, 2) are solutions and that all other solutions have this form.

λk^{t+2} + a λk^{t+1} + b λk^t = λk^t (λk² + a λk + b) = 0,

so y_t is a solution. If y0 and y1 are given then by the recursion y_{t+2} = −a y_{t+1} − b y_t the whole sequence (y_t) is well defined. The system

y0 = c1 + c2
y1 = c1 λ1 + c2 λ2

has exactly one solution (c1, c2) for any (y0, y1), as

| 1   1 ; λ1   λ2 | = λ2 − λ1 ≠ 0.

Medvegyev (CEU) Presession 2014 189 / 444

Second order homogeneous di¤erence equations, one root

There is one repeated root. One solution is still λ1^t = λ2^t = λ^t. The second solution is tλ^t. First we show that this is a solution:

(t + 2) λ^{t+2} + a (t + 1) λ^{t+1} + b t λ^t = t λ^t (λ² + aλ + b) + λ^{t+1} (2λ + a) = 0,

as λ = −a/2. On the other hand

y0 = c1 λ^0 + c2 · 0 · λ^0 = c1,
y1 = c1 λ + c2 · 1 · λ = (c1 + c2) λ.

If λ ≠ 0 then this has a solution (c1, c2) for any initial condition (y0, y1), as

| 1   0 ; λ   λ | = λ ≠ 0.

If λ = 0 then b = 0, and λ = −a/2 implies that a = 0, that is the equation is λ² = 0, that is y_{t+2} = 0.

Medvegyev (CEU) Presession 2014 190 / 444

Second order homogeneous di¤erence equations, complexroots

If the roots are complex the general solution is

z1 λ1^t + z2 λ̄1^t = z1 r^t e^{itθ} + z2 r^t e^{−itθ} = (z1 + z2) r^t cos tθ + i (z1 − z2) r^t sin tθ,

where z1 and z2 can be complex. This implies that the general real solution is

c1 r^t cos θt + c2 r^t sin θt,

where now c1 and c2 are real. Observe that for any c1 and c2 the system

z1 + z2 = c1
i z1 − i z2 = c2

has a solution, as

| 1   1 ; i   −i | = −2i ≠ 0.
Medvegyev (CEU) Presession 2014 191 / 444

Second order homogeneous difference equations, complex roots

On the other hand, if y0 and y1 are arbitrary initial values then the system

c1 r^0 cos(θ·0) + c2 r^0 sin(θ·0) = y0
c1 r cos θ + c2 r sin θ = y1

has a solution, as

| r^0 cos(θ·0)   r^0 sin(θ·0) ; r cos θ   r sin θ | = | 1   0 ; r cos θ   r sin θ | = r sin θ ≠ 0,

as otherwise the solutions of the characteristic equation would be real.

Medvegyev (CEU) Presession 2014 192 / 444

Second order homogeneous di¤erence equations, complexroots

Example
Find the general solution of the equation

y_{t+2} − 4 y_{t+1} + 16 y_t = 0.

The characteristic polynomial is λ² − 4λ + 16. The roots are

(4 ± √(16 − 64)) / 2 = (4 ± √(−48)) / 2 = (4 ± 4i√3) / 2 = 4 (1/2 ± i √3/2).

This implies that r = 4 and θ = π/3. So the general solution is

y_t = 4^t ( c1 cos(tπ/3) + c2 sin(tπ/3) ).

Medvegyev (CEU) Presession 2014 193 / 444
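As a numerical sanity check of the complex-root formula, one can iterate the recursion and compare it with the closed form 4^t (c1 cos(tπ/3) + c2 sin(tπ/3)). A sketch assuming numpy; the initial values are arbitrary illustrations:

import numpy as np

y0, y1 = 1.0, 2.0
ys = [y0, y1]
for _ in range(8):
    ys.append(4 * ys[-1] - 16 * ys[-2])       # recursion y_{t+2} = 4 y_{t+1} - 16 y_t

c1 = y0                                       # fit the constants to y_0 and y_1
c2 = (y1 / 4 - c1 * np.cos(np.pi / 3)) / np.sin(np.pi / 3)
closed = [4**t * (c1 * np.cos(t * np.pi / 3) + c2 * np.sin(t * np.pi / 3))
          for t in range(len(ys))]
print(np.allclose(ys, closed))                # True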

Homework

Solve the following equations:

x_{t+2} − 6 x_{t+1} + 8 x_t = 0,
x_{t+2} − 8 x_{t+1} + 16 x_t = 0,
x_{t+2} + 2 x_{t+1} + 3 x_t = 0.

Medvegyev (CEU) Presession 2014 194 / 444

EulerCauchy equation

Definition
The equation

a_n x^n d^n y/dx^n + a_{n−1} x^{n−1} d^{n−1} y/dx^{n−1} + a_{n−2} x^{n−2} d^{n−2} y/dx^{n−2} + ... + a_0 y = g(x)

is called an Euler–Cauchy equation. The second order Euler–Cauchy equation has the form

a x² y'' + b x y' + c y = f(x).

One can solve the equation only on the intervals (0, ∞) or (−∞, 0). We will solve it on (0, ∞).

Medvegyev (CEU) Presession 2014 195 / 444

EulerCauchy equation

Let us try to find the solution of the homogeneous equation in the form y = x^m.

dy/dx = m x^{m−1},   d²y/dx² = m (m − 1) x^{m−2},

0 = a x² d²y/dx² + b x dy/dx + c y = a m (m − 1) x^m + b m x^m + c x^m,

which implies that

a m (m − 1) + b m + c = 0,   that is   a m² + (b − a) m + c = 0.

Medvegyev (CEU) Presession 2014 196 / 444

EulerCauchy equation

Example
Solve x² y'' − 2x y' − 4y = 0.

The characteristic polynomial is

m² − 3m − 4 = 0,   m = (3 ± √(9 + 16)) / 2 = (3 ± 5) / 2,

m1 = −1, m2 = 4, so y = c1 x^{−1} + c2 x⁴.

Medvegyev (CEU) Presession 2014 197 / 444
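The same example can be handed to a symbolic solver; a minimal sketch assuming SymPy (the constants may come back with different labels):

import sympy as sp

x = sp.symbols('x', positive=True)
y = sp.Function('y')
ode = sp.Eq(x**2 * y(x).diff(x, 2) - 2 * x * y(x).diff(x) - 4 * y(x), 0)
print(sp.dsolve(ode))                         # y(x) = C1/x + C2*x**4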

EulerCauchy equation

Example

Solve 4x² y'' + 8x y' + y = 0.

The characteristic polynomial is

4m² + 4m + 1 = 0,   m = (−4 ± √(16 − 16)) / 8 = −1/2,

y = c1 x^{−1/2} + c2 x^{−1/2} ln x.

But why?

Medvegyev (CEU) Presession 2014 198 / 444

EulerCauchy equation

Assume that there is a double root. In this case

m1 = −(b − a) / (2a),   y1(x) = x^{m1},   y(x) = u(x) y1(x).

Writing the equation in the form

y'' + (b/(ax)) y' + (c/(ax²)) y = 0,   that is   y'' + P(x) y' + Q(x) y = 0,

and using

y'(x) = u'(x) y1(x) + u(x) y1'(x),
y''(x) = u''(x) y1(x) + 2u'(x) y1'(x) + y1''(x) u(x),

0 = u''(x) y1(x) + 2u'(x) y1'(x) + y1''(x) u(x) + P(x) (u'(x) y1(x) + u(x) y1'(x)) + Q(x) y1(x) u(x).

Medvegyev (CEU) Presession 2014 199 / 444

Euler–Cauchy equation

0 = u(x) (y1''(x) + P(x) y1'(x) + Q(x) y1(x)) + u'(x) (2 y1'(x) + P(x) y1(x)) + u''(x) y1(x).

As y1(x) is a solution, the first term is zero. Hence

0 = u''(x) y1(x) + u'(x) (2 y1'(x) + P(x) y1(x)).

Medvegyev (CEU) Presession 2014 200 / 444

Euler–Cauchy equation

Introducing w = u':

w' y1 + (2 y1' + P y1) w = 0.

This is a linear, or a separable, equation:

dw/w + 2 (y1'/y1) dx + P dx = 0,
ln|w| = −2 ln|y1| − ∫ P dx + c,
w = (1/y1²) c exp(−∫ P dx),
u = c1 ∫ exp(−∫ P dx) / y1² dx + c2.

Medvegyev (CEU) Presession 2014 201 / 444

Euler–Cauchy equation

Hence

y2(x) = y1(x) ∫ exp(−∫ P dx) / y1² dx.

In the Euler–Cauchy case we assume that x > 0:

y1(x) = x^{m1} = x^{−(b−a)/(2a)},   P(x) = b/(ax),

u(x) = ∫ exp(−(b/a) ∫ (1/x) dx) / x^{−(b−a)/a} dx = ∫ exp(−(b/a) ln|x|) / x^{−(b−a)/a} dx
     = ∫ |x|^{−b/a} / x^{−(b−a)/a} dx = ∫ x^{−b/a} x^{(b−a)/a} dx = ∫ (1/x) dx = ln x + c.

Hence y2(x) = x^{m1} ln x.

Medvegyev (CEU) Presession 2014 202 / 444

EulerCauchy equation

Let y = x^{α+βi}. Calculate the derivative y'.

y' = (x^α x^{βi})' = (x^α)' x^{βi} + x^α (x^{βi})'
   = α x^{α−1} x^{βi} + x^α (exp(iβ ln x))'
   = α x^{α−1} x^{βi} + x^α (cos(β ln x) + i sin(β ln x))'
   = α x^{α−1} x^{βi} + x^α ( −(β/x) sin(β ln x) + i (β/x) cos(β ln x) )
   = α x^{α−1} x^{βi} + β x^{α−1} ( −sin(β ln x) + i cos(β ln x) )
   = α x^{α−1} x^{βi} + iβ x^{α−1} ( cos(β ln x) + i sin(β ln x) )
   = α x^{α−1} x^{βi} + iβ x^{α−1} x^{βi} = (α + iβ) x^{α+iβ−1}.

Medvegyev (CEU) Presession 2014 203 / 444

Euler–Cauchy equation

If we have conjugate complex roots then the general solution is

y(x) = c1 x^{α+βi} + c2 x^{α−βi}.

But what does this mean?

x^{α+βi} = x^α x^{βi} = x^α (exp(ln x))^{βi} = x^α exp(βi ln x) = x^α (cos(β ln x) + i sin(β ln x)),

y1 = (1/2) (x^{α+βi} + x^{α−βi}) = x^α cos(β ln x),
y2 = (1/(2i)) (x^{α+βi} − x^{α−βi}) = x^α sin(β ln x).

Medvegyev (CEU) Presession 2014 204 / 444

EulerCauchy equation

Example
Solve the initial value problem

x² d²y/dx² + 3x dy/dx + 3y = 0,   y(1) = 1,   y'(1) = −5.

The characteristic equation is

m² + (b − a) m + c = m² + 2m + 3 = 0,
m_{1,2} = (−2 ± √(4 − 12)) / 2 = −1 ± i√2.

The general solution is

y(x) = (1/x) ( c1 cos(√2 ln x) + c2 sin(√2 ln x) ).

Medvegyev (CEU) Presession 2014 205 / 444

Euler–Cauchy equation

1 = (1/1) ( c1 cos(√2 · 0) + c2 sin(√2 · 0) ),   so c1 = 1.

y'(x) = −(1/x²) ( c1 cos(√2 ln x) + c2 sin(√2 ln x) )
        + (1/x) ( −c1 sin(√2 ln x) (√2/x) + c2 cos(√2 ln x) (√2/x) ),

−5 = −1 + c2 √2,
c2 = −4/√2 = −2√2.

Medvegyev (CEU) Presession 2014 206 / 444

EulerCauchy equation

Example
Solve the equation x² y'' − 3x y' + 3y = 2x⁴ exp(x).

First we solve the homogeneous equation:

0 = m² + (b − a) m + c = m² − 4m + 3,
m_{1,2} = (4 ± √(16 − 12)) / 2 = (4 ± 2) / 2,   m1 = 1, m2 = 3.

The general solution of the homogeneous equation is

y(x) = c1 x + c2 x³.

As the equation is linear, to find the general solution of the inhomogeneous equation it is sufficient to find a particular solution.

Medvegyev (CEU) Presession 2014 207 / 444

Finding a particular solution

Assume that we know the general solution c1 y1(x) + c2 y2(x) of the homogeneous equation

y''(x) + P(x) y' + Q(x) y = 0.

We try to find a particular solution of the inhomogeneous equation

y''(x) + P(x) y' + Q(x) y = R(x)

in the form

yp(x) = u1(x) y1(x) + u2(x) y2(x).

Calculating the derivatives yp' and yp'', substituting back, and using that y1 and y2 are solutions of the homogeneous equation, after some simple calculation one gets that

R(x) = yp''(x) + P(x) yp'(x) + Q(x) yp(x)
     = d/dx (y1 u1' + y2 u2') + P(x) (y1 u1' + y2 u2') + (y1' u1' + y2' u2').

Medvegyev (CEU) Presession 2014 208 / 444

Finding a particular solution

One way to solve it is to require

y1 u1' + y2 u2' = 0,
y1' u1' + y2' u2' = R(x),

that is

( y1(x)   y2(x) ; y1'(x)   y2'(x) ) ( u1'(x) ; u2'(x) ) = ( 0 ; R(x) ).

As y1(x) and y2(x) are linearly independent solutions, one can solve this system. Then integrating u1' and u2' one finds the particular solution.

Medvegyev (CEU) Presession 2014 209 / 444

Finding a particular solution

In our case

y1(x) = x,   y2(x) = x³,   R(x) = 2x⁴ exp(x) / x² = 2x² exp(x),

( x   x³ ; 1   3x² ) ( u1'(x) ; u2'(x) ) = ( 0 ; 2x² exp(x) ).

| x   x³ ; 1   3x² | = 3x³ − x³ = 2x³,

u2'(x) = | x   0 ; 1   2x² exp(x) | / 2x³ = 2x³ exp(x) / 2x³ = exp(x),   u2(x) = exp(x),

u1'(x) = | 0   x³ ; 2x² exp(x)   3x² | / 2x³ = −2x⁵ exp(x) / 2x³ = −x² exp(x),
u1(x) = −x² exp(x) + 2x exp(x) − 2 exp(x).

Medvegyev (CEU) Presession 2014 210 / 444

Finding a particular solution

The general solution of the above equation is

y(x) = c1 x + c2 x³ + (−x² exp(x) + 2x exp(x) − 2 exp(x)) x + exp(x) x³
     = c1 x + c2 x³ + 2x² exp(x) − 2x exp(x).

Medvegyev (CEU) Presession 2014 211 / 444
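One can confirm the particular solution 2x² exp(x) − 2x exp(x) by substituting it back into the equation; a short sketch assuming SymPy:

import sympy as sp

x = sp.symbols('x')
yp = 2 * x**2 * sp.exp(x) - 2 * x * sp.exp(x)
lhs = x**2 * yp.diff(x, 2) - 3 * x * yp.diff(x) + 3 * yp
print(sp.simplify(lhs - 2 * x**4 * sp.exp(x)))   # 0, so yp solves the equation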

Finding particular solution

Example
Find a particular solution of

y'' − 2y' + y = exp(t).

λ² − 2λ + 1 = 0,   λ_{1,2} = 1.

The solutions of the homogeneous equation are

y1 = exp(t),   y2 = t exp(t),

and we look for yp(t) = u1(t) exp(t) + u2(t) t exp(t):

( exp(t)   t exp(t) ; exp(t)   exp(t) + t exp(t) ) ( u1'(t) ; u2'(t) ) = ( 0 ; exp(t) ).

Medvegyev (CEU) Presession 2014 212 / 444

Finding particular solution

| exp(t)   t exp(t) ; exp(t)   exp(t) + t exp(t) | = exp(2t) | 1   t ; 1   1 + t | = exp(2t) (1 + t − t) = exp(2t),

u1'(t) = | 0   t exp(t) ; exp(t)   exp(t) + t exp(t) | / exp(2t) = −t exp(2t) / exp(2t) = −t,   u1(t) = −t²/2,

u2'(t) = | exp(t)   0 ; exp(t)   exp(t) | / exp(2t) = exp(2t) / exp(2t) = 1,   u2(t) = t.

The particular solution is

yp(t) = −(t²/2) exp(t) + t² exp(t) = (1/2) t² exp(t).

Medvegyev (CEU) Presession 2014 213 / 444

Finding particular solution

Example
Find a particular solution of y'' + 4y = 2 cos 2t.

The roots are given by λ² + 4 = 0, λ = ±2i. The homogeneous solutions are

y1(t) = cos 2t,   y2(t) = sin 2t.

The system is

( cos 2t   sin 2t ; −2 sin 2t   2 cos 2t ) ( u1'(t) ; u2'(t) ) = ( 0 ; 2 cos 2t ).

Medvegyev (CEU) Presession 2014 214 / 444

Finding particular solution

| cos 2t   sin 2t ; −2 sin 2t   2 cos 2t | = 2 cos² 2t + 2 sin² 2t = 2,

u1' = | 0   sin 2t ; 2 cos 2t   2 cos 2t | / 2 = −2 cos 2t sin 2t / 2 = −(1/2) sin 4t,   u1 = (1/8) cos 4t,

u2' = | cos 2t   0 ; −2 sin 2t   2 cos 2t | / 2 = 2 cos² 2t / 2 = cos² 2t,
u2 = ∫ cos² 2t dt = (1/2) t + (1/8) sin 4t.

Medvegyev (CEU) Presession 2014 215 / 444

Finding particular solution

yp = (1/8) cos 2t cos 4t + sin 2t ( (1/2) t + (1/8) sin 4t )
   = (1/2) t sin 2t + (1/8) (cos 2t cos 4t + sin 2t sin 4t)
   = (1/2) t sin 2t + (1/8) ( cos 2t (cos² 2t − sin² 2t) + 2 sin 2t sin 2t cos 2t )
   = (1/2) t sin 2t + (1/8) cos 2t ( cos² 2t − sin² 2t + 2 sin² 2t )
   = (1/2) t sin 2t + (1/8) cos 2t.

But cos 2t is a solution of the homogeneous equation, so one can use yp = (t/2) sin 2t.

Medvegyev (CEU) Presession 2014 216 / 444

Second order equations

Example
Solve the equation x² y'' + 2x y' − 1 = 0.

It is an inhomogeneous Euler–Cauchy equation. The characteristic polynomial is

m² + m = 0,   m = 0, m = −1.

The general solution of the homogeneous equation is c1 + c2 x^{−1}. To find a particular solution of the inhomogeneous equation (in standard form R(x) = x^{−2}):

( 1   x^{−1} ; 0   −x^{−2} ) ( u1'(x) ; u2'(x) ) = ( 0 ; x^{−2} ),

u1'(x) = −x^{−3} / (−x^{−2}) = x^{−1},   u2'(x) = x^{−2} / (−x^{−2}) = −1,

u1(x) = ∫ x^{−1} dx = ln x,   u2(x) = ∫ (−1) dx = −x,

yp(x) = 1 · ln x + x^{−1} (−x) = ln x − 1.
Medvegyev (CEU) Presession 2014 217 / 444

Matrices

Definition
Matrices are rectangular arrays

A = ( a11  a12  a13  ...  a1n ; a21  a22  a23  ...  a2n ; ... ; am1  am2  am3  ...  amn ).

The size of the matrix is m × n. The matrices of size m × 1 are called column vectors and the matrices of size 1 × n are called row vectors. The column vectors are sometimes called simply vectors.

Medvegyev (CEU) Presession 2014 218 / 444

Matrix addition

Definition
One can calculate the linear combination of matrices of the same size. The linear combination of two matrices is the matrix of the linear combinations of the elements. The matrices of a given size form a linear space.

Example

2 ( 1  2 ; 2  3 ; 4  −1 ) + 3 ( 5  3 ; 2  0 ; 0  1 ) = ( 17  13 ; 10  6 ; 8  1 ).

Medvegyev (CEU) Presession 2014 219 / 444

Matrix multiplication

Definition
One can multiply matrices of compatible size. If the size of A is m × n and the size of B is n × r then the size of the product matrix is m × r. The element cij of the product C is the scalar product of row i of matrix A and of column j of matrix B.

Example
If A = ( 1  1 ; 2  −1 ; 1  1 ) and B = ( 1  4  5 ; 2  3  0 ), then

C = A B = ( 3  7  5 ; 0  5  10 ; 3  7  5 ).
Medvegyev (CEU) Presession 2014 220 / 444

Matrix multiplication by vectors

Every vector is a special matrix, so if A is a matrix of size m × n and x is a vector of dimension n then one can define the product Ax, which is a column vector of size m. One can think about Ax as the linear combination of the columns of A with the coordinates of x.

Example
Let A = ( 1  1 ; 2  −1 ; 1  1 ) and let x = ( 1 ; 2 ). Then

Ax = ( 1  1 ; 2  −1 ; 1  1 ) ( 1 ; 2 ) = ( 3 ; 0 ; 3 ),

which is the same as

1 ( 1 ; 2 ; 1 ) + 2 ( 1 ; −1 ; 1 ) = ( 3 ; 0 ; 3 ).
Medvegyev (CEU) Presession 2014 221 / 444
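The same computation in numpy, showing that Ax is the linear combination of the columns of A with the coordinates of x (a small illustration, assuming numpy; the matrix is the A used in the example above):

import numpy as np

A = np.array([[1,  1],
              [2, -1],
              [1,  1]])
x = np.array([1, 2])
print(A @ x)                                  # [3 0 3]
print(1 * A[:, 0] + 2 * A[:, 1])              # [3 0 3], the same vector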

Matrix multiplication by vectors

The same is true for row vectors.

Example
Let A = ( 1  1 ; 2  −1 ; 1  1 ) and let x = ( 1  2  −1 ). Then

xA = ( 1  2  −1 ) ( 1  1 ; 2  −1 ; 1  1 ) = ( 4  −2 ),

which is the same as

1 ( 1  1 ) + 2 ( 2  −1 ) − 1 ( 1  1 ).

Medvegyev (CEU) Presession 2014 222 / 444

Matrix transposition

Definition
If A is a matrix of size m × n then B = Aᵀ is a matrix of size n × m with bij = aji.

Example
If x = ( 1 ; 1 ; 3 ) then xᵀ = ( 1  1  3 ), and

xᵀ A = ( 1  1  3 ) ( 1  1 ; 2  −1 ; 1  1 ) = ( 6  3 ).

Medvegyev (CEU) Presession 2014 223 / 444

Rules of the matrix calculation

1  A(BC) = (AB)C.
2  A(B + C) = AB + AC,   (B + C)A = BA + CA.
3  (A + B)ᵀ = Aᵀ + Bᵀ, but (AB)ᵀ = Bᵀ Aᵀ.
4  (Aᵀ)ᵀ = A.

Medvegyev (CEU) Presession 2014 224 / 444

Matrix multiplication is not commutative

Example

Let A = ( 0  0 ; 0  1 ), B = ( 1  2 ; 3  4 ). Then

AB = ( 0  0 ; 0  1 ) ( 1  2 ; 3  4 ) = ( 0  0 ; 3  4 ),

BA = ( 1  2 ; 3  4 ) ( 0  0 ; 0  1 ) = ( 0  2 ; 0  4 ).

Medvegyev (CEU) Presession 2014 225 / 444

Homework

1 Let A =

1 21 3

, B =

0 11 0

, C =

3 21 3

. Check

the rules of matrix calculation with these matrices.

2 Let A =

0@ 0 1 11 2 11 2 4

1A , B =0@ 1 1 13 2 11 5 7

1A . Check that(AB)T = BTAT .

Medvegyev (CEU) Presession 2014 226 / 444

Vector spaces, basic examples

1 The simplest example of vector spaces are Rn that is the set of allpossible n-tuples of real numbers x = (x1, x2, . . . , xn) .

2 One can also dene Cn that is the set of all possible n-tuples ofcomplex numbers z = (z1, z2, . . . , zn) .

3 The numbers x1, x2, . . . , xn are the coordinates.4 By denition x+ y $ (x1 + y1, x2 + y2, . . . , xn + yn) and

λx = (λx1,λx2, . . . ,λxn) .5 It is obvious that Rn and Cn are closed under the operation of nitelinear combination ∑k αkxk .

6 There are subsets of Rn and Cn which have the same property. Thegoal of linear algebra is to handle these subsets. The main tools arethe abstract vectors spaces.

Medvegyev (CEU) Presession 2014 227 / 444

Vector spaces

A structure (L,+, ) is a vector space if1 Addition is an abelian group.2 The multiplication with a scalar is distributive in both way, that is

λ(v+w) = λv+ λw and (λ+ µ)v = λv+ µv.3 Compatibility of scalar multiplication with eld multiplication that is(λµ) x = λ (µx) .

4 1v = v.

DenitionA subset of a linear space is a subspace if it is a linear space itself..

Medvegyev (CEU) Presession 2014 228 / 444

Bases and dimension

DenitionThe smallest linear subspace of a linear space containing of a set A is thelinear subspace or the linear space generated by A. This subspace isdenoted by lin (A) .

DenitionA linear space L is nite dimensional if there is a nite subset A withL = lin (A) . The smallest number of points, dim (L) , necessary togenerate the space is called the dimension of L.

DenitionThe smallest generating subsets are called bases.

Medvegyev (CEU) Presession 2014 229 / 444

Linear independence

DenitionVectors x1, x2, . . . , xn are linearly independent if ∑n

k=1 αkxk = 0 impliesthat all αk = 0.

LemmaThe vectors in a bases of a nite dimensional space are linearlyindependent.

If not then there is a collection of (αk ) with ∑nk=1 αkxk = 0 where some

αk 6= 0. If e.g. α1 6= 0 then x1 = ∑nk=2

αkα1xk . Substituting x1 back to

any linear combination one can reduce the number of vectors in thegenerating set.

Medvegyev (CEU) Presession 2014 230 / 444

Linear independence, coordinates

LemmaThe constants are unique.

If ∑k αkxk = ∑k βkxk then ∑k (αk βk ) xk = 0 which implies thatαk = βk

DenitionThe unique constants are called the coordinates with respect to the base.

Medvegyev (CEU) Presession 2014 231 / 444

Simplex method, Gaussian elimination

Let (λ1,λ2, . . . ,λn) be the coordinates of a in a basis (x1, x2, . . . , xn) .What are the coordinates of the same a in the basis (y, x2, . . . , xn) if thecoordinates of y are (µ1, µ2, . . . µn)?By the denition of the coordinates

a = λ1x1 + λ2x2 + . . .+ λnxn.y = µ1x1 + µ2x2 + . . .+ µnxn.

If µ1 6= 0 then we can express x1 with y and with the other xk

x1 =1

µ1(y (µ2x2 + . . .+ µnxn)) .

Medvegyev (CEU) Presession 2014 232 / 444

Simplex method, Gaussian elimination

Substituting back

a =λ1µ1y+

λ2 λ1µ2

µ1

x2 + . . .+

λn

λ1µnµ1

xn.

Medvegyev (CEU) Presession 2014 233 / 444

Homework

Let a1 = (1, 2, 3) , a2 = (1, 1, 1) , a3 = (1,1, 1) . Find thecoordinates of the next vectors with respect to a1, a2, a3.

1 (1, 2, 4)2 (2,1, 4)3 (1, 0, 1)4 (1, 0, 0)

Find the coordinates of the polynomial (1+ x)3 with respect to1, x , x2, x3.

Medvegyev (CEU) Presession 2014 234 / 444

Denition of Euclidean space

DenitionIf x, y 2 Rn then (x , y) $ ∑n

k=1 xkyk is called the scalar product of x andy. If x, y 2 Cn then the scalar product is dened as ∑n

k=1 xkyk .

DenitionThe length of a vector x = (x1, x2, . . . , xn) in Rn is

kxk $q

∑nk=1 x

2k =

p(x, x). The length of a vector z = (z1, z2, . . . , zn)

in Cn is kxk $q

∑nk=1 jxk j

2 =p(x, x). Where obviously if z = a+ bi is

a complex number then jz j =pzz =

pa2 + b2.

DenitionIf x, y 2 Rn then

cos θ $ (x, y)kxk kyk .

Medvegyev (CEU) Presession 2014 235 / 444

Euclidean space

DenitionA linear space X with a scalar product (x , y) is called a real Euclideanspace if the scalar product satises

1 (x , x) 0, (x , x) = 0, x = 02 (x , y) = (y , x)3 (α1x1 + α2x2, y) = (α1x1, y) + (α2x2, y)

If X is a linear space over the complex numbers then we assume that(x , y) = (y , x).

Medvegyev (CEU) Presession 2014 236 / 444

Projection on linear subspaces

Let E be a Euclidean space, let L ⊂ E be a linear subspace and let x0 ∉ L. What is the distance between x0 and L? That is, we should solve the optimization problem

min { ||x − x0|| : x ∈ L }.

Definition
If A is any set then for any x0 the closest point in A is called the projection of x0 onto A, denoted by P_A(x0).

Theorem
If L is a subspace and x* ∈ L is an optimal solution of the problem, then x* − x0 is orthogonal to L, that is (x* − x0, l) = 0 for every l ∈ L.

Medvegyev (CEU) Presession 2014 237 / 444

Projection on linear subspaces

Assume not, that is (x0 − x*, l) = σ ≠ 0 for some l ∈ L. Let

l̂ = x* + (σ/||l||²) l ∈ L   (using that L is a linear subspace).

||x0 − l̂||² = ( x0 − x* − (σ/||l||²) l , x0 − x* − (σ/||l||²) l )
            = ||x0 − x*||² − 2 (σ/||l||²) (l, x0 − x*) + (σ²/||l||⁴) ||l||²
            = ||x0 − x*||² − 2 σ²/||l||² + σ²/||l||²
            = ||x0 − x*||² − σ²/||l||² < ||x0 − x*||²,

which is impossible.

Medvegyev (CEU) Presession 2014 238 / 444

Projection on linear subspaces

Theorem
If x* − x0 is orthogonal to L then x* is an optimal solution.

Any l ∈ L has a representation l = x* + (l − x*) = x* + h with h ∈ L. Using that x0 − x* ⊥ L,

||x0 − l||² = ||x0 − x* − h||² = ||x0 − x*||² − 2 (h, x0 − x*) + ||h||² = ||x0 − x*||² + ||h||² ≥ ||x0 − x*||².

It is also clear that the optimum is unique: equality holds only when h = 0.

Medvegyev (CEU) Presession 2014 239 / 444

Projection on linear subspaces

Let X ⊂ Rⁿ be a finite dimensional subspace with basis b1, b2, ..., bm. x* − x0 is orthogonal to X if and only if it is orthogonal to every bk:

(bk, x* − x0) = 0.

If B = (b1, b2, ..., bm) then Bᵀ (x* − x0) = 0, that is Bᵀ x* = Bᵀ x0. Let x̂ be the coordinate vector of x*, that is x* = B x̂. Then

Bᵀ x* = Bᵀ B x̂ = Bᵀ x0.

Medvegyev (CEU) Presession 2014 240 / 444

Projection on linear subspaces

Lemma
The m × m matrix Bᵀ B is invertible.

If not, then there is a vector y ≠ 0 such that Bᵀ B y = 0. Hence yᵀ Bᵀ B y = 0. But this is just

(yᵀ Bᵀ)(B y) = 0,   that is   (B y)ᵀ (B y) = 0,   so   ||B y||² = 0,

hence B y = 0. But as the columns of B form a basis and y ≠ 0, this is impossible.

Medvegyev (CEU) Presession 2014 241 / 444

Projection on linear subspaces

Theorem
The OLS equation is x̂ = (Bᵀ B)^{−1} Bᵀ x0, therefore

x* = B x̂ = B (Bᵀ B)^{−1} Bᵀ x0.

Medvegyev (CEU) Presession 2014 242 / 444
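A small numerical illustration of the OLS projection formula, assuming numpy; the basis B and the point x0 below are made-up data, not from the slides:

import numpy as np

B = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [1.0, 2.0]])                    # basis of a two-dimensional subspace of R^3
x0 = np.array([1.0, 0.0, 2.0])

x_hat = np.linalg.solve(B.T @ B, B.T @ x0)    # coordinates (B^T B)^{-1} B^T x0
x_star = B @ x_hat                            # the projection itself
print(x_star)
print(B.T @ (x_star - x0))                    # ~[0 0]: the residual is orthogonal to the columns of B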

Projection on linear subspaces

ProblemThe basic numerical problem is how to nd a basis and how to invert amatrix.

Medvegyev (CEU) Presession 2014 243 / 444

Projection on linear subspaces

Example
Let L = { y = x } and x0 = ( 1 ; 0 ). Then B = ( 1 ; 1 ) and

x̂ = ( (1  1) ( 1 ; 1 ) )^{−1} (1  1) ( 1 ; 0 ) = 1/2.

Hence x* = ( 1 ; 1 ) (1/2) = ( 1/2 ; 1/2 ). The distance is

||x0 − x*|| = √( (1 − 1/2)² + (0 − 1/2)² ) = 1/√2.

Medvegyev (CEU) Presession 2014 244 / 444

Projection on linear subspaces

Example
Projection with the Mahalanobis distance.

Let Ω be a positive definite matrix. The Mahalanobis distance comes from a Euclidean space structure with the scalar product (x, y) = xᵀ Ω^{−1} y. (The inverse of a positive definite matrix is also positive definite. The idea is that we normalize with the standard deviations, as in √( Σ_{k=1}^{n} (x_k/σ_k)² ).) Now

0 = (bk, x* − x0) = bkᵀ Ω^{−1} (x* − x0),

that is Bᵀ Ω^{−1} x* = Bᵀ Ω^{−1} x0, that is

Bᵀ Ω^{−1} x* = Bᵀ Ω^{−1} B x̂ = Bᵀ Ω^{−1} x0.

Medvegyev (CEU) Presession 2014 245 / 444

Projection on linear subspaces

Hence we get the GLS estimator

x̂ = (Bᵀ Ω^{−1} B)^{−1} Bᵀ Ω^{−1} x0,
x* = B (Bᵀ Ω^{−1} B)^{−1} Bᵀ Ω^{−1} x0.

Medvegyev (CEU) Presession 2014 246 / 444

Projection on linear subspaces

Of course one should show that Bᵀ Ω^{−1} B is invertible. Again, if not then for some y ≠ 0

Bᵀ Ω^{−1} B y = 0,   so   yᵀ Bᵀ Ω^{−1} B y = 0,

hence

(B y)ᵀ Ω^{−1} (B y) = 0.

As Ω^{−1} is positive definite, B y = 0, which is impossible.

Medvegyev (CEU) Presession 2014 247 / 444

Projection on linear subspaces

Definition
We say that the basis vectors (bk) are orthonormal when (bk, bj) = δkj.

If the basis is orthonormal then from

b = λ1 b1 + λ2 b2 + ... + λm bm   it follows that   (b, bk) = λk.

That is, we have the so-called Fourier expansion

b = (b, b1) b1 + (b, b2) b2 + ... + (b, bm) bm.

In the OLS case x̂ = (Bᵀ B)^{−1} Bᵀ x0 = I^{−1} Bᵀ x0 = Bᵀ x0.
Medvegyev (CEU) Presession 2014 248 / 444

Matrices as linear operators

DenitionIf x 2 Rn is arbitrary and A is an m n matrix then x 7! Ax denes alinear operator from Rn to Rm .

im (A) $ fy : y = Ax, x 2 Rng ,ker (A) $ fx : Ax = 0, x 2 Rng .

Lemmaim (A) is a linear subspace of Rm , ker (A) is a linear subspace in Rn.

If y1, y2 2 im (A) then yi = Axi for some xi and obviously

α1y1 + α2y2 = A (α1x1 + α2x2) $ Ax 2 im (A) .

If x1, x2 2 ker (A) then

A (α1x1 + α2x2) = α1Ax1 + α2Ax2 = 0.

Medvegyev (CEU) Presession 2014 249 / 444

The rank of a matrix

DenitionThe column rank of a matrix A is the maximal number of linearlyindependent columns of A. Likewise, the row rank is the maximal numberof linearly independent rows of A.

LemmaFor any matrix the row rank and the column rank are the equal.

The number of independent column vectors is the number of columnvectors one can bring into the bases with Gauss elimination. But exactlythe same calculation is applied when one calculates the number of rowvectors one can bring into the basis of the row vectors.

Medvegyev (CEU) Presession 2014 250 / 444

The rank of a matrix

DenitionThe rank of a matrix is the column or the row rank.

LemmaThe rank of a matrix is just the dimension of the image space of the linearoperator generated by A,

rank (A) = dim (im (A))

Medvegyev (CEU) Presession 2014 251 / 444

Basis of the image space, rank of a matrix

1. To determine the rank of a matrix one should try to bring as manyvectors to the basis as possible.2. The vectors in the basis form a basis of the image space.

Medvegyev (CEU) Presession 2014 252 / 444

Solving homogeneous linear equations

Let A be an m × n matrix and let A1 be the matrix which contains a basis of im(A). Let D be the matrix of the coordinates of the columns of A not in A1. Then A = A1 (I, D), where I is the identity matrix of order rank(A). As the columns of A1 are independent,

ker(A) = { x : 0 = Ax } = { x : 0 = A1 (I, D) x } = { x : 0 = (I, D) x }
       = { (x1, x2) : 0 = (I, D) ( x1 ; x2 ) }
       = { (x1, x2) : 0 = x1 + D x2 } = { (x1, x2) : x1 = −D x2 },

where x2 is an arbitrary vector in R^{n − dim(im(A))}. (x1 has dim(im(A)) coordinates.)

Medvegyev (CEU) Presession 2014 253 / 444

Solving homogeneous linear equations

Obviously

ker(A) = { ( x1 ; x2 ) = ( −D ; I ) t,   t ∈ R^{n − dim(im(A))} },

so the columns of ( −D ; I ) form a basis for ker(A) and n − dim(im(A)) = dim(ker(A)).

Medvegyev (CEU) Presession 2014 254 / 444
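In practice a basis of ker(A) can be read off from a computer algebra system; the sketch below (assuming SymPy) uses the coefficient matrix of the homogeneous system that appears in the homework a few slides below.

import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [1, 2, 4]])
print(A.rank())                               # 2 = dim(im A)
print(A.nullspace())                          # one basis vector of ker(A), so dim(ker A) = 3 - 2 = 1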

Solving inhomogeneous linear equations

Let A be an m n matrix and let A1 be the matrix which contains thebases of im (A) . Let D the matrix of the coordinates of the columns of Anot in A1. Then A = A1 (I,D) , where I is the identity matrix of orderrank (A) . As the columns of A1 are independent

fx : Ax = bg = fx : A1d= A1 (I,D) xg =

= fx : d = (I,D) xg =(x1, x2) : d = (I,D)

x1x2

=

= f(x1, x2) : d = x1 +Dx2g = f(x1, x2) : x1 = dDx2g

where x2 is an arbitrary vector in Rndim(im(A)). (x1 has dim (im (A))coordinates.)

Medvegyev (CEU) Presession 2014 255 / 444

The dimension of the image space and the kernel

LemmaIf A is an m n matrix then

dim (ker (A)) + dim (im (A)) = n.

dim (ker (A)) is the degree of freedom of the equation Ax = b.

Medvegyev (CEU) Presession 2014 256 / 444

A basis of the row vector space

Lemma

The row vectors of (I,D) form a bases of imAT.

As I is in the matrix, the row vectors of (I,D) are independent. As thenumber of rows is equal to the rank of A one should only show that therows of (I,D) are in im

AT.

imAT= fu : u = xAg = fu : u = xA1(I,D)g == fu : u = y (I,D)g .

Medvegyev (CEU) Presession 2014 257 / 444

A basis of the row vector space

Of course it is not clear that all possible y has the representation ofy = xA1 with some x. Hence

imAT fu : u = y (I,D)g = lin ((I,D)) .

But as dimimATis just the rank of AT which is the same as the

rank of A, which is just the number of rows of (I,D) there is an equalityin the relation above. Hence the rows of (I,D) are in im

ATso they

form a basis of imAT.

Medvegyev (CEU) Presession 2014 258 / 444

A basis of the row vector space

dimimAT= dim (im (A)) and D is an

dim (im (A)) dim (ker (A)) = dimimAT dim (ker (A)) matrix.

Obviously Idim(im(AT)),D

DIdim(ker(A))

= 0n

so imAT? ker (A) . Observe that

Idim(im(AT)),D

is a bases for

imATand

D

Idim(ker(A))

or

DIdim(ker(A))

is a bases for ker (A) .

Medvegyev (CEU) Presession 2014 259 / 444

Homework

Let A =

0BB@1 1 03 1 03 1 12 0 1

1CCA . Calculate the dimension of the im (A) and

that of imAT. Give a basis of both spaces. What is the dimension

of ker (A) and kerAT?

Solve the homogeneous linear equation

x1 + 2x2 + 3x3 = 0

x1 + 2x2 + 4x3 = 0.

Medvegyev (CEU) Presession 2014 260 / 444

Homework

Let x1 = (1, 2, 3, 4) , x2 = (2, 2, 2, 2) , x3 = (1, 0, 0, 1) . What is thedimension of the subspace generated by, x1, x2, x3?

Calculate the rank of A =

0BB@1 1 11 1 01 0 00 0 0

1CCA .

Medvegyev (CEU) Presession 2014 261 / 444

Homework

Let A =

0BB@1 2 1 1 11 1 1 1 12 3 0 2 24 6 0 4 4

1CCA . What is the dimension of ker (A) .

Solve the linear equation Ax = 0. Give a basis of im (A) andimAT, ker (A) , ker

AT.

What is the rank of the matrix0@ 1 1 1 12 3 1 24 6 0 4

1A .Solve the linear equation Ax = 0. Find a solution where x4 = 0.Find a solution of the linear equation

x1 + 2x2 + 3x3 = 1

x1 + 2x2 + 4x3 = 1.

What is the set of the solutions.Medvegyev (CEU) Presession 2014 262 / 444

Homework

Let A =

0BB@1 2 10 0 21 0 12 0 1

1CCA . Calculate dim (im (A)) . How much is

dimkerAT

?

Let A =

0BB@1 2 10 0 21 0 12 0 1

1CCA . Calculate ATA and AAT . Calculate therank of the products.

Medvegyev (CEU) Presession 2014 263 / 444

Identity matrix

DenitionFor every n let us dene the identity matrix In with (δij ) where

δij $0 if i 6= j1 if i = j

.

Obviously if A is an m n matrix then ImA = A = AIn. The size of In isobviously n n . If the order of the identity matrix is obvious we drop thesubscript in the notation.

Example

I2 =1 00 1

, I3 =

0@ 1 0 00 1 00 0 1

1AMedvegyev (CEU) Presession 2014 264 / 444

Inverse matrix

DenitionWe say that a matrix A is quadratic or it is a square matrix if its size isn n, that is the number of rows equal to the number of columns.

DenitionA matrix, denoted by A1 is the inverse matrix of a square matrix A ifAA1 = I.

LemmaIf A and B are invertible then AB is also invertible and(AB)1 = B1A1.

(AB)B1A1

= A

BB1

A1 = A I A1 = AA1 = I.

Medvegyev (CEU) Presession 2014 265 / 444

Inverse matrix and the image space

LemmaA necessary and su¢ cient condition for an n n matrix A to be invertibleis that the columns of A form a bases of Rn.

If A is invertible then the columns of A form a basis as for any b 2 Rn ifone denes x $ A1b then

Ax $ AA1b

=AA1

b = I b = b

and it is impossible that the columns of A were dependent as in this caseone could form a bases in Rn with less vectors than n. On the other handif the columns of A form a bases of Rn then there are vectors xk withAxk = ek , where ek is the k-th column of I and A1 is just the matrixformed from xk .

Medvegyev (CEU) Presession 2014 266 / 444

Inverse matrix and the image space

LemmaIf a matrix is invertible then its inverse is also invertible.

AA1 = I also means that the rows of A1 form a basis in the row space,that is the rows are independent. But then A1 has full rank. In this casethe columns of A1 also form a basis in the columns space, therefore A1

is invertible.

DenitionA square matrix A is called regular, or sometime nonsingular if it isinvertible, otherwise it is called singular. Obviously a quadratic matrix isregular if and only if its columns or its rows are independent.

Medvegyev (CEU) Presession 2014 267 / 444

Left and right inverses

LemmaIf A is regular then A1A = I.

As A is invertible it has full rank, but then the rows are also independentso there is an X with XA = I. So

X = XI = XAA1

= (XA)A1 = IA1 = A1.

Corollary

The inverse matrix is unique.A1

1= A.

Let AX = I. Then X = I X =A1A

X = A1 (AX) = A1I = A1.

Medvegyev (CEU) Presession 2014 268 / 444

Permutations

DenitionThe permutations of order n are the bijective mappings

f1, 2, . . . , ng ! f1, 2, . . . , ng .

The set of all permutations of order n is denoted by Sn. A transposition iswhen one swaps two elements. A permutation is odd when it swaps oddnumber of elements otherwise it is even. To any permutation σ oneattaches the signature of σ. The signature sgn (σ) is +1 for even and 1for odd permutations.

Medvegyev (CEU) Presession 2014 269 / 444

Determinants

Example

σ $1 2 3 41 2 4 3

or σ $

1 2 3 41 4 3 2

are odd.

σ $1 2 3 42 3 1 4

is an even permutation.

Medvegyev (CEU) Presession 2014 270 / 444

Determinants

Denition

Let A be a square matrix of order n. By denition

det (A) $ jAj $ ∑σ2Sn

sgn (σ)∏iaiσ(i ),

where Sn is the set of all permutation of order n and sgn (σ) is thesignature of the permutation.

Medvegyev (CEU) Presession 2014 271 / 444

Determinants

Example 1 23 4

= 1 4 2 3 = 2.

1 2 31 2 11 0 1

=1 2 1+ 3 (1) 0+ 2 1 1 3 2 1 (1) 2 1 0 1 1 = 0.Observe that the third columns is the sum of the rst two.

Medvegyev (CEU) Presession 2014 272 / 444

Laplace expansion

DenitionThe minor of element aij is the determinant of dimension n 1 which weget deleting row i and column j . This minor is denoted by jMij j. Thecofactor related to aij is jCij j $ (1)i+j jMij j .

Example1 2 3

1 2 11 0 1

= 1 jC11j+ 2 jC12j+ 3 jC13j =1

2 10 1

2 1 11 1

+ 3 1 21 0

= 0.

Medvegyev (CEU) Presession 2014 273 / 444

Laplace expansion

TheoremIf A is a square matrix then jAj = ∑k aik jCik j and jAj = ∑k akj jCkj j forany i and j

Medvegyev (CEU) Presession 2014 274 / 444

Production rule

TheoremIf A and B are square matrices then jABj = jAj jBj .

Example

A =1 12 1

, B =

0 12 3

.

AB =1 12 1

0 12 3

=

2 42 1

.

1 12 1

0 12 3

= (3) (2) = 6And 2 4

2 1

= 6.Medvegyev (CEU) Presession 2014 275 / 444

Characterization of determinants

TheoremThe determinants are the only alternating multilinear functionals over thesquare matrices with the property that jIj = 1. That is

1 if one transposes two rows or two columns of a square matrix thenthe sign of the determinant changes, but the absolute value of thedeterminant does not change;

2 the determinants are linear functions in the rows or in the columns ofa matrix, assuming of course that the other rows or columns are xed.

Medvegyev (CEU) Presession 2014 276 / 444

Alternating functional

Example1 2 34 5 67 8 0

= 27.3 2 16 5 40 8 7

= 27.1 2 11 2 11 0 1

= 0.

Medvegyev (CEU) Presession 2014 277 / 444

Multilinear function

Example1+ 1 2 34+ 1 5 67+ 1 8 0

= 0.1 2 31 5 61 8 0

= 27.1 2 34 5 67 8 0

= 27.

Medvegyev (CEU) Presession 2014 278 / 444

Characterization of the determinants

Let f be a multilinear functional of n variable over Rn.

f (x1, x2, . . . , xn) = f∑ xi1ei ,∑ xi2ei , . . . ,∑ xinei

=

= ∑ xi1fei ,∑ xi2ei , . . . ,∑ xinei

=

= ∑ixi1

∑jxj2f

ei , ej , . . . ,∑ xinei

!=

= ∑i ,j ,k ,l

xi1xj2xk3xl4f (ei , ej , . . .) .

Medvegyev (CEU) Presession 2014 279 / 444

Characterization of the determinants

If f is alternating thenf (. . . , ei , . . . , ei , . . .) = f (. . . , ei , . . . , ei , . . .) = 0. So

f (x1, x2, . . . , xn) = ∑i ,j ,k ,l

xi1xj2xk3xl4f (ei , ej , . . .)

and in the sum no two indexes are the same so (i , j , k, l , . . .) is apermutation.

Medvegyev (CEU) Presession 2014 280 / 444

Characterization of the determinants

f (x1, x2, . . . , xn) = ∑ixi1

∑j 6=ixj2f

ei , ej , . . . ,∑ xinei

!=

= ∑ixi1

∑j 6=ixj2f

ei , ej , . . . ,∑ xinei

!

= ∑ixi1

∑j 6=ixj2

k 6=i ,k 6=jxk3f

ei , ej , . . . ,∑ xinei

!!=

= . . . = ∑σ2Sn

xσ11xσ22 xσnnf (eσ1 , eσ2 , . . . , eσn ) =

= ∑σ2Sn

xσ11xσ22 xσnnsgn (σ) f (e1, e2, . . . , en) = jXj .

Medvegyev (CEU) Presession 2014 281 / 444

Product rule and the characterization of the alternatingmultilinear functionals

Using the just presented argument

jABj = f (ABe1,ABe2, . . . ,ABen) =

= f (Ab1,Ab2, . . . ,Abn) = f

∑k

akbk1, . . . ,∑k

akbkn

!=

= ∑σ2Sn

bσ11bσ22 bσnnsgn (σ) f (a1, a2, . . . , an) =

= jBj f (a1, a2, . . . , an) = jBj jAj .

Medvegyev (CEU) Presession 2014 282 / 444

Matrix inversion with determinants

Definition
If A is a square matrix then

adj(A) = ( |C11|  |C21|  |C31|  ...  |Cn1| ; |C12|  |C22|  |C32|  ...  |Cn2| ; |C13|  |C23|  |C33|  ...  |Cn3| ; ... ; |C1n|  |C2n|  |C3n|  ...  |Cnn| ).

Observe the indexes!

Lemma
If A is a square matrix then A adj(A) = |A| I.

Medvegyev (CEU) Presession 2014 283 / 444

Matrix inversion with determinants

Theorem
A square matrix A is invertible if and only if |A| ≠ 0. In this case

A^{−1} = (1/|A|) adj(A).

Example
Let A = ( 1  3 ; 1  2 ). Then adj(A) = ( 2  −3 ; −1  1 ), |A| = | 1  3 ; 1  2 | = −1, and

A^{−1} = (1/(−1)) ( 2  −3 ; −1  1 ) = ( −2  3 ; 1  −1 ).

Medvegyev (CEU) Presession 2014 284 / 444

Cramers rule

Let A be a nonsingular matrix. The solution of the equation Ax = b is x = A^{−1} b. By the formula for the inverse matrix

x = A^{−1} b = (1/|A|) adj(A) b.

By the definition of matrix multiplication

xi = (1/|A|) Σ_k |Cki| bk.

By the Laplace expansion the sum Σ_k |Cki| bk is just the determinant of the matrix in which column i is replaced by b.

Medvegyev (CEU) Presession 2014 285 / 444

Cramers rule

Example
Solve the equation

2x1 + x2 − x3 = 1
3x1 − 2x2 + x3 = 2
4x1 + 6x2 − 5x3 = 0.

| 2  1  −1 ; 3  −2  1 ; 4  6  −5 | = 1,
| 1  1  −1 ; 2  −2  1 ; 0  6  −5 | = 2,
| 2  1  −1 ; 3  2  1 ; 4  0  −5 | = 7,
| 2  1  1 ; 3  −2  2 ; 4  6  0 | = 10.

The solution is x1 = 2, x2 = 7, x3 = 10.

Medvegyev (CEU) Presession 2014 286 / 444
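Cramer's rule is easy to mechanize; a sketch for the system above, assuming numpy:

import numpy as np

A = np.array([[2.0,  1.0, -1.0],
              [3.0, -2.0,  1.0],
              [4.0,  6.0, -5.0]])
b = np.array([1.0, 2.0, 0.0])

detA = np.linalg.det(A)
x = []
for i in range(3):
    Ai = A.copy()
    Ai[:, i] = b                              # replace column i by b
    x.append(np.linalg.det(Ai) / detA)
print(x)                                      # approximately [2, 7, 10]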

Homework

Calculate

1 ii 1

i 11 i

and 1 ii 1

i 11 i

.Calculate

1 2i2i 1

2 . and 1 2i2i 1

2 .Solve the linear equation

x1 + ix2 + x3 = 0

x1 + x2 + x3 = 0.

Calculate

0@ 1 2 33 4 51 1 1

1A10 .

Medvegyev (CEU) Presession 2014 287 / 444

Homework

Let A =1 21 1

. Calculate adj (A) and A1

Let A =1 01 1

. Calculate adj (A) and A1.

Let A =1 21 3

. Calculate adj (A) and A1

Medvegyev (CEU) Presession 2014 288 / 444

Eigenvalues, eigenvectors

In general, a matrix acts on a vector by changing both its magnitude andits direction. However, a matrix may act on certain vectors by changingonly their magnitude, and leaving their direction unchanged (or, possibly,reversing it). These vectors are the eigenvectors of the matrix. A matrixacts on an eigenvector by multiplying its magnitude by a factor, which ispositive if its direction is unchanged and negative if its direction isreversed. This factor is the eigenvalue associated with that eigenvector.An eigenspace is the set of all eigenvectors that have the same eigenvalue.

DenitionA complex or real number λ is an eigenvalue of a matrix A if there is anx 6= 0 such that Ax =λx. Every x 6= 0 for which the relation is valid iscalled an eigenvector related to the eigenvalue λ.

Medvegyev (CEU) Presession 2014 289 / 444

Existence of an eigenvalue

Let v 6= 0 be arbitrary. v,Av,A2v, . . . ,Anv are dependent as the numberof vectors is n+ 1. Hence for some ak

a0v+ a1Av+ . . .+ anAnv = 0.

By the fundamental theorem of algebra the polynomialp (λ) = a0 + a1λ+ a2λ

2 + . . .+ anλn has a decomposition with m n.

p (λ) = c (λ λ1) (λ λ2) (λ λm) .

Hence as the matrixes Ak commute

0 = c (A λ1I) (A λ2I) (A λmI) v.

Hence one of the matrices must be singular as the product is singular.

Medvegyev (CEU) Presession 2014 290 / 444

Characteristic polynomial

LemmaA complex or real number λ is an eigenvalue of a matrix A if and only ifjA λIj = 0. The polynomial p (λ) $ jA λIj is called the characteristicpolynomial of A.

LemmaEvery matrix has at least one eigenvalue and one eigenvector.

Medvegyev (CEU) Presession 2014 291 / 444

Characteristic polynomial

Example
Calculate the eigenvalues of A = ( 1  2 ; 1  3 ).

p(λ) = | 1 − λ   2 ; 1   3 − λ | = (1 − λ)(3 − λ) − 2 = λ² − 4λ + 1.

The roots of the characteristic equation are 2 ± √3.

Medvegyev (CEU) Presession 2014 292 / 444
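A numerical check of the eigenvalues 2 ± √3 found above, assuming numpy:

import numpy as np

A = np.array([[1.0, 2.0],
              [1.0, 3.0]])
vals, vecs = np.linalg.eig(A)
print(np.sort(vals))                          # [2 - sqrt(3), 2 + sqrt(3)] ~ [0.268, 3.732]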

Eigenvectors as basis

TheoremIf the eigenvalues are di¤erent then the eigenvectors related to theeigenvalues are independent.

Let λ1,λ2, . . . ,λn be the eigenvalues and x1, x2, . . . , xn be thecorresponding eigenvectors. x1 6= 0 so it is linearly independent. Now ifthe theorem were true for some k but xk+1 = ∑k

i=1 βixi , where not all βiare zero then

λk+1xk+1 = Axk+1 =k

∑i=1

βiAxi =k

∑i=1

βiλixi

which implies that

0 =k

∑i=1

βi (λi λk+1) xi

which is impossible as λk+1 6= λi and the vectors x1, . . . , xk areindependent.

Medvegyev (CEU) Presession 2014 293 / 444

Eigenvectors as basis

One can have many other nice proofs. Let

β1x1 + β2x2 + . . .+βmxm = 0.

Apply (A λ2I) (A λ3I) (A λmI) on both sides one gets

β1 (λ1 λ2) (λ1 λ3) (λ1 λm) x1 = 0

as the (A λk I) matrices commute and (A λk I) xk = 0 and

(A λk I) x1 = Ax1 λkx1 = λ1x1 λkx1 = (λ1 λk ) x1.

As λ1 6= λk if k 6= 1 this implies that β1 = 0. Then apply the same trickwith λ2....

Medvegyev (CEU) Presession 2014 294 / 444

Eigenvectors of symmetric matrices

Denition

A real matrix S is symmetric if S = ST .

LemmaThe eigenvalues of a symmetric matrix S are real.

If λ is an eigenvalue then Sx = λx for some possible complex vectorx 6= 0. As x can complex dene

(x, y) $ ∑ixiy i .

Using that S is a real and symmetric matrix

λ (x, x) = (λx, x) = (Sx, x) =x,ST x

=

= (x,Sx) = (x,λx) = λ (x, x) ,

which implies that λ = λ, that is λ is real.Medvegyev (CEU) Presession 2014 295 / 444

Eigenvectors of symmetric matrices

LemmaThe eigenvectors of a real symmetric matrix related to di¤erenteigenvalues are orthogonal.

Let x1, x2 be eigenvectors related to di¤erent eigenvalues λ1,λ2 . One canassume that λ2 6= 0

(x1, x2) =x1,

1λ2Sx2

=

ST x1,

1λ2x2

=

=

Sx1,

1λ2x2

=

λ1x1,

1λ2x2

=

λ1λ2(x1, x2) ,

Which implies that (x1, x2) = 0.

Medvegyev (CEU) Presession 2014 296 / 444

Eigenvectors of symmetric matrices

Let x1, x2, . . . , xk be orthogonal eigenvectors of S. Let

L? $ fx : (x.xk ) = 0g .

If x 2 L? then, using that S is symmetric

(Sx, xk ) = (x,Sxk ) = (x,λkxk ) = λk (x, xk ) = 0.

Hence S maps L? into itself. As every linear mapping of a nitedimensional space into itself has an eigenvalue and an eigenvector in thatspace S has an eigenvector in L?. So if L? 6= (0) then S has k + 1orthogonal eigenvectors....

Medvegyev (CEU) Presession 2014 297 / 444

Spectral representation theorem for symmetric matrices

CorollaryThe eigenvectors of a real symmetric matrix S form a basis of theunderlying linear space. Hence there is a matrix B, formed from theeigenvectors of S such that BΛ = SB where Λ is a real valued diagonalmatrix formed from the eigenvalues of S. Hence for every symmetricmatrix S there is an invertible matrix B such that.

S = BΛB1, Λ = B1SB.

One can normalize the eigenvectors and choose them to be orthogonal. Inthis case B is orthonormal that is B1 = BT .

Of course there are non orthogonal but independent eigenvectorsbelonging to repeated eigenvalues.

Medvegyev (CEU) Presession 2014 298 / 444

Spectral representation theorem for symmetric matrices

If (bk ) is orthonormal then

S = BΛBT =n

∑i=1

λibkbTk =n

∑i=1

λiPk .

Observe that Pk $ bkbTk is a projection on the one dimensional subspace.Lk = lin (bk ) .

Pk $ bkbTk = bk 1 bTk = bk bTk bk

1 bTk

Medvegyev (CEU) Presession 2014 299 / 444

Homework

Find the eigenvalues and the eigenvectors of the following matrices:

2 11 2

,

3 45 2

,

0 aa 0

,

0@ 5 6 31 0 11 2 1

1A ,0@ 0 0 10 1 01 0 0

1A ,0@ 2 1 2

5 3 31 0 2

1A .What is the relation between the eigenvalues and the eigenvectors ofa matrix and its inverse?

Let λ1,λ2 . . . ,λn be the eigenvalues of a matrix A. What are theeigenvalues of A2?

Medvegyev (CEU) Presession 2014 300 / 444

Linear systems

Definition
If A is a quadratic matrix then the equation

x'(t) = A x(t),   x(0) = a

is a linear differential equation. The recursive system

x_{t+1} = A x_t,   x_0 = a

is a linear difference equation.

Medvegyev (CEU) Presession 2014 301 / 444

Linear systems

Theorem
The general solution of a linear differential equation is

x(t) = exp(At) x(0) = ( Σ_{k=0}^{∞} (1/k!) (At)^k ) x0.

The general solution of a linear difference equation is

x_t = A^t x0.

Medvegyev (CEU) Presession 2014 302 / 444

Linear systems

Theorem
If A = S Λ S^{−1} then

A^n = S Λ^n S^{−1},   and so   exp(tA) = S exp(Λt) S^{−1}.

Indeed, A^n = S Λ S^{−1} S Λ S^{−1} ... S Λ S^{−1} = S Λ^n S^{−1}.

Medvegyev (CEU) Presession 2014 303 / 444

Linear systems

1 It Λ = diag (λ1,λ2, . . . ,λn) is diagonal thenexp (tΛ) = diag (exp (λ1t) , . . . exp (λnt)) .

2 If Λ is block diagonal Λ = diag (Λ1,Λ2, . . . ,Λm) thenexp (tΛ) = diag (exp (Λ1t) , . . . exp (Λmt)) .

3 For every matrix A there is a Λ with blocks like0BBBB@λ 1 0 0

0 λ. . . 0

. . . . . .. . . 1

0 0 0 λ

1CCCCAwhere λ is a characteristic root of the matrix. This decomposition iscalled the Jordan decomposition. (There can be more than oneJordan block with some λ.)

4 If the eigenvectors form a basis then there are no no-trivial Jordanblocks.Medvegyev (CEU) Presession 2014 304 / 444

Linear systems

The same holds for the di¤erence equations, that is if Λ is block diagonalΛ = diag (Λ1,Λ2, . . . ,Λm) then

Λt = diagΛt1, . . . Λt

m

.

Medvegyev (CEU) Presession 2014 305 / 444

Linear systems

Example
Let A = ( 0  1 ; 0  0 ). Solve the equation dx/dt = Ax.

The characteristic roots of A are λ1 = λ2 = 0. The corresponding differential equations are

x1' = x2,   x2' = 0.

Hence x2 = c2 and x1 = c2 t + c1.

Medvegyev (CEU) Presession 2014 306 / 444

Linear systems

Example
Let A = ( 1  1 ; 0  1 ). Solve the equation x_{t+1} = A x_t.

The characteristic roots of A are λ1 = λ2 = 1. The corresponding difference equations are

x(1)_{t+1} = x(1)_t + x(2)_t,   x(2)_{t+1} = x(2)_t.

The general solution is x(2)_t = x(2)_0 = c2 and x(1)_t = t c2 + x(1)_0 = t c2 + c1.

Medvegyev (CEU) Presession 2014 307 / 444

Linear systems

Example
Calculate exp(At) if A = ( λ  1 ; 0  λ ).

A = ( λ  0 ; 0  λ ) + ( 0  1 ; 0  0 ) = Λ + N.

Observe that

( λ  0 ; 0  λ ) ( 0  1 ; 0  0 ) = ( 0  λ ; 0  0 ) = ( 0  1 ; 0  0 ) ( λ  0 ; 0  λ ),

so Λ and N commute and we can use the "usual" rules of calculation.

Medvegyev (CEU) Presession 2014 308 / 444

Linear systems

Hence, as N² = 0,

(At)^n = t^n Σ_{k=0}^{n} C(n, k) Λ^k N^{n−k} = t^n ( Λ^n + n Λ^{n−1} N )
       = t^n ( ( λ^n  0 ; 0  λ^n ) + ( 0  n λ^{n−1} ; 0  0 ) )
       = t^n ( λ^n   n λ^{n−1} ; 0   λ^n ).

Medvegyev (CEU) Presession 2014 309 / 444

Linear systems

Hence

exp(At) = Σ_{n=0}^{∞} t^n ( Λ^n/n! + n Λ^{n−1} N/n! )
        = Σ_{n=0}^{∞} t^n Λ^n/n! + Σ_{n=1}^{∞} t^n n Λ^{n−1} N/n!
        = Σ_{n=0}^{∞} t^n Λ^n/n! + t ( Σ_{n=1}^{∞} t^{n−1} Λ^{n−1}/(n−1)! ) N
        = exp(Λt) + t exp(Λt) N
        = ( exp(λt)  0 ; 0  exp(λt) ) + ( 0  t exp(λt) ; 0  0 )
        = ( exp(λt)   t exp(λt) ; 0   exp(λt) ).

Medvegyev (CEU) Presession 2014 310 / 444
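The closed form just derived can be compared with a numerical matrix exponential; a sketch assuming numpy and scipy, with illustrative values of λ and t:

import numpy as np
from scipy.linalg import expm

lam, t = 0.7, 1.3                             # illustrative values, not from the slides
A = np.array([[lam, 1.0],
              [0.0, lam]])
closed = np.exp(lam * t) * np.array([[1.0, t],
                                     [0.0, 1.0]])
print(np.allclose(expm(A * t), closed))       # True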

Homework

Let A =1 22 1

. Calculate exp (At) and solve the equation

x = Ax, x (0) = x0

Let A =1 22 1

. Calculate At and solve the equation xt+1 = Axt ,

x (0) = x0

Let A =

1 11 1

. Calculate exp (At) and solve the equation

x = Ax, x (0) = x0

Let A =

1 11 1

. Calculate At and solve the equation

xt+1 = Axt , x (0) = x0.

Medvegyev (CEU) Presession 2014 311 / 444

Homework

Let A =0 10 0

. Calculate exp (At) .

Let A =

0@ 0 1 00 0 10 0 0

1A . Calculate exp (At) .Let A be a Jordan block with λ = 0 Calculate exp (At) .

Medvegyev (CEU) Presession 2014 312 / 444

Di¤erentiation in higher dimensions

DenitionA set U in Rn is open, if for any x 2 U there is an r > 0 such that a ballaround x of radius r is in U.

DenitionLet U be an open subset of Rn. The function F : U ! Rm is (totally)di¤erentiable at x 2 U, if there is a linear map A : Rn ! Rm such that

F (x + h) F (x) = Ah+ o (h) ,

where for o (h)

limh!0

o (h)khk = 0.

The linear map A is called the derivative or the total derivative of F . Thefunction F is di¤erentiable if it is di¤erentiable for every x 2 U.

Medvegyev (CEU) Presession 2014 313 / 444

Chain rule

LemmaIf F is di¤erentiable at x0, and G is di¤erentiable at y0 $ F (x0) , thenH $ G F is di¤erentiable at x0 and

(G F )0 (x0) = G 0 (y0) F 0 (x0) =

= G 0 (F (x0)) F 0 (x0) .

Medvegyev (CEU) Presession 2014 314 / 444

Partial derivatives, gradient

DenitionIf F is Rn ! R, one can dene the

∂F∂xi(x) $ lim

h!0

F (x + eih) F (x)h

partial derivatives. Let F : Rn ! Rm . The matrix (m n)

J (x) $

0BBBB@∂F1∂x1

∂F1∂x2

∂F1∂xn

∂F2∂x1

∂F2∂x2

∂F2∂xn

......

∂Fm∂x1

∂Fm∂x2

∂Fm∂xn

1CCCCAis called the Jacobi matrix of F . If F : Rn ! R, then the Jacobi-matrix iscalled the gradient of F .

Medvegyev (CEU) Presession 2014 315 / 444

Partial derivatives, gradient

Example
Calculate the derivative of f(x) = (x, Ax).

f(x) = Σ_{i=1}^{n} Σ_{j=1}^{n} aij xi xj,

∂f/∂xi = Σ_{j=1}^{n} aij xj + Σ_{j=1}^{n} aji xj.

As the derivative is a row vector,

d/dx (x, Ax) = (Ax)ᵀ + (Aᵀ x)ᵀ = xᵀ Aᵀ + xᵀ A.

Medvegyev (CEU) Presession 2014 316 / 444

Di¤erentiation and partial derivatives

The di¤erentiability is a nice concept but it is not easy to guarantee it.The only reasonable criteria is the following:

Theorem
If the Jacobi matrix exists in an open set containing x and x ↦ J(x) is continuous at x, then the mapping F is differentiable at x.

Example
The function

f(x, y) = 2xy / (x² + y²)  if x² + y² > 0,   and   f(x, y) = 0  if x² + y² = 0,

has partial derivatives at the origin, but it is not differentiable there since it is not even continuous at x = y = 0.

Medvegyev (CEU) Presession 2014 317 / 444

Di¤erentiation and partial derivatives

LemmaIf F : Rn ! Rm is di¤erentiable at a point x , then the Jacobi matrix J (x)is just the matrix of the derivative F 0 in the standard coordinate basis.

Medvegyev (CEU) Presession 2014 318 / 444

Di¤erentiation and partial derivatives

Corollary

If g (t) = f (x1 (t) , x2 (t) , , xm (t)) then

g 0 (t) =m

∑k=1

∂f∂xk

(x1 (t) , x2 (t) , , xm (t)) x 0k (t)

Medvegyev (CEU) Presession 2014 319 / 444

Eulers theorem

Example

The function f is homogeneous of degree k if f(tx) = t^k f(x). Differentiating both sides by xi and using the chain rule,

(∂f(tx)/∂(t xi)) t = t^k ∂f/∂xi (x),

so the partial derivatives are homogeneous of degree k − 1. Differentiating by t and using the chain rule,

Σ_{i=1}^{n} (∂f(tx)/∂(t xi)) xi = k t^{k−1} f(x).

If t = 1, then

Σ_{i=1}^{n} (∂f(x)/∂xi) xi = k f(x).

Medvegyev (CEU) Presession 2014 320 / 444

Eulers theorem

Example

Let f(x, y) = x²y.

f(tx, ty) = t³ x²y = t³ f(x, y),
∂f(x, y)/∂x = 2xy,   ∂f(x, y)/∂y = x²,

(∂f/∂x) x + (∂f/∂y) y = 2xy · x + x² · y = 3x²y.

Medvegyev (CEU) Presession 2014 321 / 444

Eulers theorem

Example

Let f(x, y) = √(xy).

f(tx, ty) = √(tx · ty) = t √(xy) = t f(x, y),
∂f(x, y)/∂x = y / (2√(xy)),   ∂f(x, y)/∂y = x / (2√(xy)),

(∂f/∂x) x + (∂f/∂y) y = yx / (2√(xy)) + xy / (2√(xy)) = √(xy).

Medvegyev (CEU) Presession 2014 322 / 444
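Euler's theorem is easy to verify numerically for the two examples above; a sketch assuming numpy, with an arbitrary evaluation point:

import numpy as np

def euler_check(f, grad_f, k, v):
    """Return (v . grad f(v), k f(v)); the two coincide for a degree-k homogeneous f."""
    return np.dot(v, grad_f(v)), k * f(v)

f1 = lambda v: v[0]**2 * v[1]
g1 = lambda v: np.array([2 * v[0] * v[1], v[0]**2])
f2 = lambda v: np.sqrt(v[0] * v[1])
g2 = lambda v: np.array([v[1], v[0]]) / (2 * np.sqrt(v[0] * v[1]))

v = np.array([1.5, 2.0])
print(euler_check(f1, g1, 3, v))              # both numbers equal 3 x^2 y
print(euler_check(f2, g2, 1, v))              # both numbers equal sqrt(x y)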

Envelope theorem

Theorem
Consider an arbitrary maximization problem where the objective function f depends on some parameter a:

M(a) = max_x f(x, a) = f(x*(a), a).

If the objective function f is good enough, then

dM(a)/da = ∂f/∂a (x, a) evaluated at x = x*(a),

where x*(a) is the point where f is maximal at a.

Medvegyev (CEU) Presession 2014 323 / 444

Envelope theorem

Assume that x*(a) is differentiable. By the chain rule

M'(a) = ∂f/∂x (x*(a), a) · dx*(a)/da + ∂f/∂a (x*(a), a) · da/da.

Since by Fermat's principle

∂f/∂x (x*(a), a) = 0,

we get

M'(a) = ∂f/∂a (x*(a), a).

Medvegyev (CEU) Presession 2014 324 / 444

Example
Let f(x, a) = −(x + a)².

M(a) = 0,   so   dM/da = 0.

On the other hand x*(a) = −a, ∂f/∂a = −2(x + a), and

dM/da = ∂f/∂a (x*(a), a) = −2 (x*(a) + a) = −2 (−a + a) = 0.

Medvegyev (CEU) Presession 2014 325 / 444

Envelope theorem

Example
Let f(x, a) = −x² + a.

M(a) = a,   so   dM/da = 1.

x*(a) = 0, ∂f/∂a = 1, and

dM/da = ∂f/∂a (x*(a), a) = 1.

Medvegyev (CEU) Presession 2014 326 / 444

Envelope theorem

Example
Let f(x, a) = −(x + a)² + a.

∂f(x, a)/∂x = −2(x + a),   so   x*(a) = −a,   M(a) = a,   dM/da = 1.

∂f/∂a (x, a) = −2(x + a) + 1, and

dM/da = ∂f/∂a (x*(a), a) = −2(−a + a) + 1 = 1.

Medvegyev (CEU) Presession 2014 327 / 444

Envelope theorem

Example
Let f(x, a) = −a(x + a)² + a, where a > 0.

∂f(x, a)/∂x = −2a(x + a),   so   x*(a) = −a,   M(a) = a,   dM/da = 1.

∂f/∂a (x, a) = −2a(x + a) − (x + a)² + 1, and

dM/da = ∂f/∂a (x*(a), a) = 1.

Medvegyev (CEU) Presession 2014 328 / 444

Parametric integrals

Example
Let

M(a) = ∫₀^{φ(a)} f(a, t) dt.

Calculate the derivative dM/da!

Medvegyev (CEU) Presession 2014 329 / 444

Parametric integrals

Assume that f and φ are good enough, e.g. we can differentiate under the integral sign. Let G(x, y) = ∫₀^x f(y, t) dt. The derivative of G is

( ∂G/∂x (x, y),  ∂G/∂y (x, y) ) = ( f(y, x),  ∂/∂y ∫₀^x f(y, t) dt ) = ( f(y, x),  ∫₀^x ∂f/∂y (y, t) dt ).

M(a) is a composite function: M(a) = G(φ(a), a). By the chain rule

dM/da = ( ∂G/∂x (φ(a), a),  ∂G/∂y (φ(a), a) ) ( φ'(a) ; 1 )
      = ( f(a, φ(a)),  ∫₀^{φ(a)} ∂f/∂a (a, t) dt ) ( φ'(a) ; 1 )
      = ∫₀^{φ(a)} ∂f/∂a (a, t) dt + f(a, φ(a)) φ'(a).

Medvegyev (CEU) Presession 2014 330 / 444

Parametric integrals

Example

Let M(a) = ∫₀^a exp(ax) dx.

Directly,

M(a) = [ exp(ax)/a ]₀^a = ( exp(a²) − 1 ) / a,
M'(a) = 2 exp(a²) − exp(a²)/a² + 1/a².

By the Leibniz rule,

M'(a) = ∫₀^a x exp(ax) dx + exp(a²) · 1
      = [ x exp(ax)/a ]₀^a − ∫₀^a exp(ax)/a dx + exp(a²)
      = 2 exp(a²) − [ exp(ax)/a² ]₀^a = 2 exp(a²) − exp(a²)/a² + 1/a².

Medvegyev (CEU) Presession 2014 331 / 444
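The Leibniz-rule formula can also be checked against a finite-difference derivative; a sketch assuming numpy and scipy, with an illustrative value of a:

import numpy as np
from scipy.integrate import quad

def M(a):
    return quad(lambda x: np.exp(a * x), 0.0, a)[0]

a, h = 0.8, 1e-5
numeric = (M(a + h) - M(a - h)) / (2 * h)                                # central difference
leibniz = quad(lambda x: x * np.exp(a * x), 0.0, a)[0] + np.exp(a * a)   # formula above
print(numeric, leibniz)                                                  # the two numbers agree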

Parametric integrals

Example

Let M(a) = ∫₀^{a²} cos(ax) dx.

Directly,

M(a) = ∫₀^{a²} cos(ax) dx = [ sin(ax)/a ]₀^{a²} = sin(a³)/a,
M'(a) = ( 3a² · a cos(a³) − sin(a³) ) / a² = 3a cos(a³) − sin(a³)/a².

By the Leibniz rule,

dM/da = ∫₀^{a²} (−x) sin(ax) dx + 2a cos(a · a²)
      = [ x cos(ax)/a ]₀^{a²} − ∫₀^{a²} cos(ax)/a dx + 2a cos(a³)
      = 3a cos(a³) − [ sin(ax)/a² ]₀^{a²} = 3a cos(a³) − sin(a³)/a².

Medvegyev (CEU) Presession 2014 332 / 444

Loglinearization

Theorem
The Taylor expansion of ln(1 + x) is convergent if |x| < 1. In this case

ln(1 + x) = ln(1 + x) − ln 1 = ∫₀^x 1/(1 + t) dt = ∫₀^x Σ_{n=0}^{∞} (−t)^n dt = Σ_{n=0}^{∞} ∫₀^x (−t)^n dt
          = Σ_{n=0}^{∞} (−1)^n x^{n+1}/(n + 1) = x − x²/2 + x³/3 − ... .

Medvegyev (CEU) Presession 2014 333 / 444

Loglinearization

Definition
A state of a dynamic system is a steady state if the system is "basically" not changing in that state. A more specific definition is that the constant solutions are the steady states.

Example
If the system is modelled with a difference equation x_{t+1} = f(x_t) then x* is a steady state if x* = f(x*). If the system is described by a relation f(x_t, y_t) = g(z_t) then (x*, y*, z*) is a steady state if f(x*, y*) = g(z*). If H(x(1)_t, x(2)_t, ..., x(n)_t) = 0, then (x(k)*) is a steady state if H(x(1)*, ..., x(n)*) = 0. It is possible that x(k)_t is of the form x(l)_{t+1} or x(l)_{t+2} etc.

Medvegyev (CEU) Presession 2014 334 / 444

Loglinearization

Definition
The log-difference of a dynamic variable $(x_t)$ is
$$\tilde{x}_{t+1} \triangleq \log x_{t+1} - \log x_t = \log\frac{x_{t+1}}{x_t} = \log\left(1 + \frac{x_{t+1} - x_t}{x_t}\right) = \frac{x_{t+1} - x_t}{x_t} + \ldots \approx \frac{x_{t+1} - x_t}{x_t}.$$
The interpretation is change in percentage. Sometimes, if $x^*$ is a steady state, then
$$\tilde{x}_t \triangleq \log x_t - \log x^* \approx \frac{x_t - x^*}{x^*}.$$
In this case $\tilde{x}_t$ is the loglinearization of $(x_t)$ around the steady state. (Of course we assume that everything is well defined, that is, $x_t$ and $x^*$ are positive and close enough.)

Medvegyev (CEU) Presession 2014 335 / 444

Loglinearization

The general process for loglinearization is the following. Let $\big(x_t^{(1)}, x_t^{(2)}, \ldots, x_t^{(n)}\big)$ be some time series and let
$$f\big(x_t^{(1)}, x_t^{(2)}, \ldots, x_t^{(n)}\big) = 0.$$
Let $\big(x^{(1)*}, x^{(2)*}, \ldots, x^{(n)*}\big)$ be a steady state (constant) solution, that is, assume that
$$f\big(x^{(1)*}, x^{(2)*}, \ldots, x^{(n)*}\big) = 0.$$

Medvegyev (CEU) Presession 2014 336 / 444

Loglinearization

"Discrete" loglinearization: Using the denition of the derivative

0 = fx (1)t , x (2)t , . . . , x (n)t

f

x (1) , x (2) , . . . , x (n)

=

=n

∑k=1

∂fx (1) , x (2) , . . . , x (n)

∂xk

x (k )t x (k )

+ o (..)

n

∑k=1

∂fx (1) , x (2) , . . . , x (n)

∂xk

x (k )x (k )t x (k )x (k )

=

=n

∑k=1

∂fx (1) , x (2) , . . . , x (n)

∂xk

x (k ) exkt .

Medvegyev (CEU) Presession 2014 337 / 444

Loglinearization

"Continuous" loglinearization: Using the denition of the derivative andintroducing X (k )t = log xkt

0 = fx (1)t , x (2)t , . . . , x (n)t

f

x (1) , x (2) , . . . , x (n)

=

= fexp

X (1)t

, . . . , exp

X (n)t

f

exp

X (1)

, . . . , exp

X (n)

n

∑k=1

∂fx (1) , x (2) , . . . , x (n)

∂xk

expX (k )

X (k )t X (k )

=

=n

∑k=1

∂fx (1) , x (2) , . . . , x (n)

∂xk

x (k )ln x (k )t ln x (k )

=n

∑k=1

∂fx (1) , x (2) , . . . , x (n)

∂xk

x (k ) exkt ..Medvegyev (CEU) Presession 2014 338 / 444

Loglinearization

Example
Loglinearization of a first order difference equation.

Let $x_{t+1} = f(x_t)$. The linearization of the equation around a point $x^*$ is
$$x_{t+1} = f(x^*) + f'(x^*)(x_t - x^*) + \ldots \approx f(x^*) + f'(x^*)(x_t - x^*).$$
If $x^*$ is a steady state then $f(x^*) = x^*$ and
$$x_{t+1} - x^* = f'(x^*)(x_t - x^*) + \ldots \approx f'(x^*)(x_t - x^*).$$
If $x^* > 0$ then
$$\frac{x_{t+1} - x^*}{x^*} \approx f'(x^*)\frac{x_t - x^*}{x^*}, \qquad \tilde{x}_{t+1} \approx f'(x^*)\,\tilde{x}_t.$$

Medvegyev (CEU) Presession 2014 339 / 444

Loglinearization

Example
Loglinearization of $x_{t+1} = \sqrt{x_t}$.

First we find the steady state solutions: $x^* = \sqrt{x^*}$, that is, $x^* = 0$ and $x^* = 1$. One cannot loglinearize around $x^* = 0$. The loglinearization around $x^* = 1$ is
$$\tilde{x}_{t+1} \approx \frac{1}{2\sqrt{x^*}}\,\tilde{x}_t = \frac{1}{2}\,\tilde{x}_t.$$
Of course one cannot even linearize the system around $x^* = 0$ as the function $\sqrt{x}$ is not differentiable at $x = 0$.

Medvegyev (CEU) Presession 2014 340 / 444
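A small numerical sketch of the previous example (the starting value is an arbitrary assumption): iterate $x_{t+1} = \sqrt{x_t}$ near $x^* = 1$ and compare the actual percentage deviation with the loglinearized prediction $\tilde{x}_{t+1} \approx \tfrac{1}{2}\tilde{x}_t$.

```python
import numpy as np

x = 1.10                                  # start 10% above the steady state x* = 1
for _ in range(4):
    x_next = np.sqrt(x)
    dev_next = (x_next - 1.0) / 1.0       # actual percentage deviation from x*
    dev_pred = 0.5 * (x - 1.0) / 1.0      # loglinearized prediction
    print(f"{dev_next: .5f}  {dev_pred: .5f}")
    x = x_next
```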

Loglinearization

Example
Loglinearization of a first order difference equation with a control parameter.

Let $x_{t+1} = f(x_t, u_t)$ where $u_t$ is a control variable. Then using the two-dimensional differentiation one can linearize the system around any point $(x^*, u^*)$:
$$x_{t+1} \approx f(x^*, u^*) + \frac{\partial f(x^*, u^*)}{\partial x}(x_t - x^*) + \frac{\partial f(x^*, u^*)}{\partial u}(u_t - u^*).$$
If $(x^*, u^*)$ is a steady state then $f(x^*, u^*) = x^*$ and
$$x_{t+1} - x^* \approx \frac{\partial f(x^*, u^*)}{\partial x}(x_t - x^*) + \frac{\partial f(x^*, u^*)}{\partial u}(u_t - u^*),$$
$$\frac{x_{t+1} - x^*}{x^*} \approx \frac{\partial f(x^*, u^*)}{\partial x}\,\frac{x_t - x^*}{x^*} + \frac{\partial f(x^*, u^*)}{\partial u}\,\frac{u_t - u^*}{x^*}.$$

Medvegyev (CEU) Presession 2014 341 / 444

Loglinearization

Hence the loglinearized version of
$$x_{t+1} = f(x_t, u_t)$$
is
$$\tilde{x}_{t+1} \approx \frac{\partial f(x^*, u^*)}{\partial x}\,\tilde{x}_t + \frac{u^*}{x^*}\frac{\partial f(x^*, u^*)}{\partial u}\,\tilde{u}_t.$$
One can also write it as
$$x^*\tilde{x}_{t+1} \approx \frac{\partial f(x^*, u^*)}{\partial x}\,x^*\tilde{x}_t + \frac{\partial f(x^*, u^*)}{\partial u}\,u^*\tilde{u}_t.$$

Medvegyev (CEU) Presession 2014 342 / 444

Loglinearization

Example
Loglinearization with three variables.

Let $f(x_t, y_t) = g(z_t)$. Then combining the two expansions above,
$$f(x^*, y^*) + \frac{\partial f(x^*, y^*)}{\partial x}(x_t - x^*) + \frac{\partial f(x^*, y^*)}{\partial y}(y_t - y^*) \approx g(z^*) + g'(z^*)(z_t - z^*).$$
If $(x^*, y^*, z^*)$ is a steady state then $f(x^*, y^*) = g(z^*)$, so
$$\frac{\partial f(x^*, y^*)}{\partial x}(x_t - x^*) + \frac{\partial f(x^*, y^*)}{\partial y}(y_t - y^*) \approx g'(z^*)(z_t - z^*).$$

Medvegyev (CEU) Presession 2014 343 / 444

Loglinearization

Hence the loglinearized version of
$$f(x_t, y_t) = g(z_t)$$
is
$$\frac{\partial f(x^*, y^*)}{\partial x}\,x^*\tilde{x}_t + \frac{\partial f(x^*, y^*)}{\partial y}\,y^*\tilde{y}_t \approx g'(z^*)\,z^*\tilde{z}_t.$$

Medvegyev (CEU) Presession 2014 344 / 444

Loglinearization

Example
Loglinearization of the production function
$$y_t = a_t k_t^{\alpha}.$$
$$y^*\tilde{y}_t \approx (k^*)^{\alpha}\, a^*\tilde{a}_t + a^*\alpha (k^*)^{\alpha-1}\, k^*\tilde{k}_t.$$
As $y^* = a^*(k^*)^{\alpha}$ we can simplify:
$$\tilde{y}_t = \tilde{a}_t + \alpha\tilde{k}_t.$$

Medvegyev (CEU) Presession 2014 345 / 444
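A numerical sketch of the production-function example (the steady-state values and the exponent below are assumptions chosen only for illustration):

```python
import numpy as np

alpha, a_ss, k_ss = 0.3, 1.0, 2.0          # assumed steady-state values
y_ss = a_ss * k_ss ** alpha

a_t, k_t = 1.02 * a_ss, 0.97 * k_ss        # small deviations from the steady state
y_t = a_t * k_t ** alpha

y_tilde = np.log(y_t) - np.log(y_ss)
y_tilde_rule = (np.log(a_t) - np.log(a_ss)) + alpha * (np.log(k_t) - np.log(k_ss))
print(y_tilde, y_tilde_rule)                # identical: log y_t = log a_t + α log k_t holds exactly here
```

For the Cobb–Douglas form the log-linear relation is in fact exact in the log-deviations, which is why the two printed numbers coincide.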

Loglinearization

Example
Loglinearization of a sum
$$y_t = c_t + i_t.$$
$$y^*\tilde{y}_t \approx c^*\tilde{c}_t + i^*\tilde{i}_t, \qquad \tilde{y}_t \approx \frac{c^*}{y^*}\tilde{c}_t + \frac{i^*}{y^*}\tilde{i}_t.$$

Medvegyev (CEU) Presession 2014 346 / 444

Loglinearization

Example
Loglinearization of a product
$$z_t = y_t x_t.$$
$$z^*\tilde{z}_t \approx y^*x^*\tilde{x}_t + x^*y^*\tilde{y}_t, \qquad \tilde{z}_t \approx \tilde{x}_t + \tilde{y}_t.$$
Sometimes it is written as
$$x_t y_t - x^*y^* \approx x^*y^*\left(\tilde{x}_t + \tilde{y}_t\right), \qquad x_t y_t \approx x^*y^*\left(1 + \tilde{x}_t + \tilde{y}_t\right).$$

Medvegyev (CEU) Presession 2014 347 / 444

Loglinearization

Example
Loglinearization of a ratio
$$z_t = \frac{x_t}{y_t}.$$
$$z^*\tilde{z}_t \approx \frac{1}{y^*}x^*\tilde{x}_t - \frac{x^*}{(y^*)^2}y^*\tilde{y}_t, \qquad \tilde{z}_t \approx \tilde{x}_t - \tilde{y}_t.$$
Sometimes it is written as
$$\frac{x_t}{y_t} - \frac{x^*}{y^*} \approx \frac{x^*}{y^*}\left(\tilde{x}_t - \tilde{y}_t\right), \qquad \frac{x_t}{y_t} \approx \frac{x^*}{y^*}\left(1 + \tilde{x}_t - \tilde{y}_t\right).$$

Medvegyev (CEU) Presession 2014 348 / 444

Loglinearization

Loglinearization of the model
$$x_t + a = (1-b)\frac{y_t}{z_t}.$$
$$x^*\tilde{x}_t \approx (1-b)\frac{1}{z^*}y^*\tilde{y}_t - (1-b)\frac{y^*}{(z^*)^2}z^*\tilde{z}_t, \qquad \tilde{x}_t \approx (1-b)\frac{y^*}{z^*x^*}\left(\tilde{y}_t - \tilde{z}_t\right).$$
But $x^* + a = (1-b)\,y^*/z^*$, hence
$$\tilde{x}_t \approx (1-b)\frac{y^*}{z^*x^*}\left(\tilde{y}_t - \tilde{z}_t\right) = \frac{x^* + a}{x^*}\left(\tilde{y}_t - \tilde{z}_t\right), \qquad \frac{x^*}{x^* + a}\tilde{x}_t \approx \tilde{y}_t - \tilde{z}_t.$$

Medvegyev (CEU) Presession 2014 349 / 444

Loglinearization

Example
Loglinearization of the model
$$\ln z_t = z_0 + \rho \ln z_{t-1} + \varepsilon_t.$$
$$\frac{1}{z^*}z^*\tilde{z}_t \approx \rho\frac{1}{z^*}z^*\tilde{z}_{t-1} + \varepsilon^*\tilde{\varepsilon}_t, \qquad \tilde{z}_t \approx \rho\tilde{z}_{t-1} + \varepsilon^*\tilde{\varepsilon}_t = \rho\tilde{z}_{t-1} + \varepsilon_t - \varepsilon^* = \rho\tilde{z}_{t-1} + \varepsilon_t,$$
as $\varepsilon^* = 0$ since it is an error term. (We used that $z^*$ is the steady state of $z_t$ and $z_{t-1}$.) Observe that in this case the calculation is meaningless as $\tilde{\varepsilon}_t$ is not defined.

Medvegyev (CEU) Presession 2014 350 / 444

Loglinearization

In this case we modify the argument:
$$\frac{\partial f(x^*, y^*)}{\partial x}(x_t - x^*) + \frac{\partial f(x^*, y^*)}{\partial y}(y_t - y^*) \approx g'(z^*)(z_t - z^*).$$
If $y^* = 0$ then
$$\frac{\partial f(x^*, y^*)}{\partial x}\,x^*\tilde{x}_t + \frac{\partial f(x^*, y^*)}{\partial y}\,y_t \approx g'(z^*)\,z^*\tilde{z}_t,$$
and one can prove the formula above.

Medvegyev (CEU) Presession 2014 351 / 444

Loglinearization

So far we assumed only that the steady state variables are nonzero. Now we assume that all variables are positive, hence we can take their logarithm:
$$g(z_t) = f(x_t, y_t), \qquad g(\exp(\ln z_t)) = f(\exp(\ln x_t), \exp(\ln y_t)).$$
Introducing new variables $Z_t \triangleq \ln z_t$, $X_t \triangleq \ln x_t$ and $Y_t \triangleq \ln y_t$,
$$g(\exp(Z_t)) = f(\exp(X_t), \exp(Y_t)),$$
$$g(\exp(Z^*)) + g'(\exp(Z^*))\exp(Z^*)(Z_t - Z^*) \approx f(\exp(X^*), \exp(Y^*)) + \frac{\partial f(\exp(X^*), \exp(Y^*))}{\partial x}\exp(X^*)(X_t - X^*) + \frac{\partial f(\exp(X^*), \exp(Y^*))}{\partial y}\exp(Y^*)(Y_t - Y^*).$$

Medvegyev (CEU) Presession 2014 352 / 444

Loglinearization

Using that $(X^*, Y^*, Z^*)$ form a steady state,
$$g'(\exp(Z^*))\exp(Z^*)(Z_t - Z^*) \approx \frac{\partial f(\exp(X^*), \exp(Y^*))}{\partial x}\exp(X^*)(X_t - X^*) + \frac{\partial f(\exp(X^*), \exp(Y^*))}{\partial y}\exp(Y^*)(Y_t - Y^*).$$
Substituting back,
$$g'(z^*)\,z^*(\ln z_t - \ln z^*) \approx \frac{\partial f(x^*, y^*)}{\partial x}\,x^*(\ln x_t - \ln x^*) + \frac{\partial f(x^*, y^*)}{\partial y}\,y^*(\ln y_t - \ln y^*).$$
Using the other, via-logarithm definition of $(\tilde{x}_t, \tilde{y}_t, \tilde{z}_t)$,
$$g'(z^*)\,z^*\tilde{z}_t \approx \frac{\partial f(x^*, y^*)}{\partial x}\,x^*\tilde{x}_t + \frac{\partial f(x^*, y^*)}{\partial y}\,y^*\tilde{y}_t.$$

Medvegyev (CEU) Presession 2014 353 / 444

Implicit differentiation

Let $F(y(x), x) = 0$. We want to calculate $y'$. Observe that here everything is a vector: if $x$ has $m$ components and $y$ has $n$ components then $F(y, x) = 0$ is in fact $n$ equations in $n + m$ variables, $\partial F/\partial y$ is an $n \times n$ matrix, and $y$ is a function of $m$ variables with values in $\mathbb{R}^n$, so $y'(x)$ is an $n \times m$ matrix. By the chain rule
$$\frac{\partial F}{\partial y}(y(x), x)\,y'(x) + \frac{\partial F}{\partial x}(y(x), x) = 0.$$
Reordering,
$$y'(x) = -\left(\frac{\partial F}{\partial y}(y(x), x)\right)^{-1}\frac{\partial F}{\partial x}(y(x), x), \qquad (n \times m) = (n \times n)(n \times m),$$
assuming that $\frac{\partial F}{\partial y}(y(x), x)$ is invertible. Here we assumed that everything was meaningful, e.g. that $y'(x)$ is well defined.

Medvegyev (CEU) Presession 2014 354 / 444

Implicit differentiation

There are many versions of the implicit function theorem. In all of them we assume that $F(y, x)$ is defined on an open set of $\mathbb{R}^n \times \mathbb{R}^m$ and that $\left(\frac{\partial F}{\partial y}(y_0, x_0)\right)^{-1}$ exists. In actual applications the biggest problem is how one can show that $F$ is differentiable at a certain point. The only practical way is to show that $F$ is continuously differentiable (via the Jacobian matrix, using partial derivatives). But from a theoretical point of view, if it is possible, we want to drop the condition of continuous differentiability.

Medvegyev (CEU) Presession 2014 355 / 444

Implicit differentiation

The best known version is the following:

Theorem (Implicit function theorem)
If $F$ is continuously differentiable and the inverse $\left(\frac{\partial F}{\partial y}(y_0, x_0)\right)^{-1}$ exists, then $y(x)$ exists locally and is continuously differentiable; that is, there is an open ball $V$ around $x_0$, an open ball $W$ around $y_0$ and a continuously differentiable function $y(x)$ such that

1. $y(x_0) = y_0$,
2. $\{(x, y) \in V \times W \mid F(y, x) = 0\} = \{(x, y(x)) \mid x \in V\}$,
3. $y(x)$ is the unique solution for every $x \in V$.

Medvegyev (CEU) Presession 2014 356 / 444

Implicit differentiation

The first generalization is about higher derivatives:

Theorem
If in the above theorem we assume that $F(x, y)$ is $k \geq 1$ times continuously differentiable, then $y(x)$ is also $k$ times continuously differentiable.

Another generalization is the following:

Theorem
If $F(x, y)$ is just differentiable (not continuously differentiable) and just $x \mapsto F'_y$ is continuous, then $y(x)$ is just differentiable.

Medvegyev (CEU) Presession 2014 357 / 444

Implicit differentiation

Now we drop the continuity of the derivative.

Theorem
In this case $F(y, x)$ is just continuous, and differentiable just at $(y_0, x_0)$. We still assume that $F'_y(y_0, x_0)$ is invertible. Then there is a neighbourhood $V$ of $x_0$ and a function $y(x)$ on $V$ such that

1. $y(x_0) = y_0$,
2. $F(y(x), x) = 0$ if $x \in V$ (we do not know that $y(x)$ is unique),
3. $y(x)$ is differentiable at $x_0$ (hence the implicit differentiation rule holds at $x_0$, that is, $F'_y y' + F'_x = 0$ at $(x_0, y_0)$),
4. so $y'(x_0) = -\left(F'_y(y_0, x_0)\right)^{-1} F'_x(y_0, x_0)$.

Medvegyev (CEU) Presession 2014 358 / 444

Implicit differentiation

Example
Let $F(x, y) = x^2 + y^2 - 25$. Then
$$\frac{\partial F}{\partial y}(x, y) = 2y.$$
If $y \neq 0$, that is outside the $x$-axis, the equation locally defines $y$ as a function of $x$, but of course not globally. For any $(x_0, y_0)$ on the circle with $y_0 \neq 0$ there is such a local solution. If $y_0 > 0$ then $y_0 = \sqrt{25 - x_0^2}$, and at $x = x_0$
$$\frac{dy}{dx} = \frac{1}{2\sqrt{25 - x_0^2}}(-2x_0).$$
The same follows from implicit differentiation:
$$\frac{\partial F}{\partial x} = 2x, \quad \frac{\partial F}{\partial y} = 2y, \quad \frac{dy}{dx} = -\frac{2x_0}{2y_0} = -\frac{x_0}{\sqrt{25 - x_0^2}}.$$

Medvegyev (CEU) Presession 2014 359 / 444
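A numerical sketch of the circle example (the only assumption is the chosen point $x_0 = 3$, so $y_0 = 4$):

```python
import numpy as np

x0 = 3.0
y0 = np.sqrt(25 - x0**2)                        # y0 = 4

implicit_slope = -x0 / y0                       # dy/dx = -(∂F/∂x)/(∂F/∂y) = -x0/y0
h = 1e-6
finite_diff = (np.sqrt(25 - (x0 + h)**2) - np.sqrt(25 - (x0 - h)**2)) / (2 * h)

print(implicit_slope, finite_diff)              # both ≈ -0.75
```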

Implicit differentiation

Let $F(x, y) = \sin x + y^2$, $x_0 = 3\pi/2$, $y_0 = 1$. Then
$$\sin\frac{3\pi}{2} + 1^2 = 0, \qquad \frac{\partial F}{\partial y} = 2y = 2 \neq 0, \qquad \frac{dy}{dx} = -\frac{\cos(3\pi/2)}{2} = 0.$$

Medvegyev (CEU) Presession 2014 360 / 444

Homework

Study the following implicit functions:

1. $\sin(xy) = 0$
2. $e^x + \sin y = 0$
3. $e^y + \sin x = 0$
4. $e^x + \sin y = 1$
5. $e^y + \sin x = 1$
6. $x^2 + y^2 = 1$.

Medvegyev (CEU) Presession 2014 361 / 444

Second derivatives

Definition
The second derivative is the derivative of the derivative.

Theorem
If $f : \mathbb{R}^n \to \mathbb{R}$ then the second derivative $f''(x)$ is a symmetric bilinear form.

Definition
The matrix of the second partial derivatives
$$H \triangleq \left(\frac{\partial^2 f}{\partial x_i \partial x_j}\right)$$
is called the Hesse matrix.

Medvegyev (CEU) Presession 2014 362 / 444

Lemma
If the Hesse matrix is continuous then $f$ is twice differentiable.

Lemma
The Hesse matrix is the matrix in the standard basis of the second derivative as a bilinear form. Hence the Hesse matrix of a twice differentiable function is always symmetric.

Medvegyev (CEU) Presession 2014 363 / 444

Optimization in higher dimensions

Theorem
Let $G \subseteq \mathbb{R}^n$ be an open set. If $f : G \to \mathbb{R}$ has a local minimum at $x_0 \in G$, and $f$ is differentiable at $x_0$, then
$$\operatorname{grad} f(x_0) = f'(x_0) = 0.$$
If $f : G \to \mathbb{R}$ has a local minimum at $x_0 \in G$, and $f$ is twice differentiable at $x_0$, then the second derivative is positive semi-definite.

For all $v \in \mathbb{R}^n$ the function $g(t) \triangleq f(tv + x_0)$ has a local minimum at $t = 0$. By the chain rule $g'(0) = f'(x_0)v = 0$, that is $f'(x_0) = 0$. If $f$ is twice differentiable, then $g$ is also twice differentiable, and
$$0 \leq g''(0) = \left.\frac{d}{dt}f'(tv + x_0)v\right|_{t=0} = v^T f''(x_0)v.$$

Medvegyev (CEU) Presession 2014 364 / 444

Optimization in higher dimensions

Example
In every direction a function can have a local minimum, yet the point is not a local minimum.

Let $f(x, y) = (y - x^2)(y - 2x^2)$. If $y = ax$ then
$$g_a(x) \triangleq f(x, ax) = (ax - x^2)(ax - 2x^2) = a^2x^2 - 2ax^3 - ax^3 + 2x^4 = 2x^4 - 3ax^3 + a^2x^2,$$
$$g'_a(x) = 8x^3 - 9ax^2 + 2a^2x, \qquad g'_a(0) = 0,$$
$$g''_a(x) = 24x^2 - 18ax + 2a^2, \qquad g''_a(0) = 2a^2 > 0.$$

Medvegyev (CEU) Presession 2014 365 / 444

Optimization in higher dimensions

So $g_a$ has a local minimum at $x = 0$. On the other hand $f(0, 0) = 0$ and $f$ takes negative values in any two-dimensional neighborhood of $(0, 0)$ (for instance at points between the parabolas $y = x^2$ and $y = 2x^2$), so $(0,0)$ is not a local minimum. Moreover
$$f''(0, 0) = \begin{pmatrix} 0 & 0 \\ 0 & 2 \end{pmatrix},$$
which is positive semi-definite.

Medvegyev (CEU) Presession 2014 366 / 444

Optimization in higher dimensions

Lemma
Let $G \subseteq \mathbb{R}^n$ be an open set. If $f : G \to \mathbb{R}$ is twice differentiable at $x_0$, $f'(x_0) = 0$, and $f''(x_0)$ is strictly positive definite, then $x_0$ is a local minimum of $f$.

Medvegyev (CEU) Presession 2014 367 / 444

Optimization in higher dimensions

Example
In the positive semi-definite case we cannot say anything.

Let $f(x, y) = x^4 - y^4$. Then
$$f''(0, 0) = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$
but $(0, 0)$ is not a local minimum. If $f(x, y) = x^4 + y^4$ then again
$$f''(0, 0) = \begin{pmatrix} 0 & 0 \\ 0 & 0 \end{pmatrix}$$
but in this case $(0, 0)$ is a global minimum.

Medvegyev (CEU) Presession 2014 368 / 444

Optimization in higher dimensions

Definition
A function $f$ is convex by definition if and only if $f$ is convex on the segment $[x, y]$ for any $x$ and $y$, where
$$[x, y] \triangleq \{z : z = \lambda x + (1 - \lambda)y,\ 0 \leq \lambda \leq 1\},$$
that is,
$$g(t) \triangleq f(ty + (1 - t)x) = f(y + (1 - t)(x - y))$$
is convex on the $[0, 1]$ interval.

For differentiable functions this means that
$$f'(x)(y - x) = f'(x)(-1)(x - y) = g'(0)\cdot 1 \leq g(1) - g(0) = f(y) - f(x).$$

Medvegyev (CEU) Presession 2014 369 / 444

Optimization in higher dimensions

Theorem
Let $C \subseteq \mathbb{R}^n$ be a convex, open set and let $f$ be a differentiable function. The following conditions are equivalent:

1. $f$ is convex;
2. for any $x \in C$ and $h$ such that $x + h \in C$,
$$f'(x)h = \sum_i \frac{\partial f}{\partial x_i}(x)h_i \leq f(x + h) - f(x).$$

Medvegyev (CEU) Presession 2014 370 / 444

Optimization in higher dimensions

Corollary
If $f$ is a convex function, then every stationary point, that is every point $x$ where $f'(x) = 0$, is a global minimum point.

Theorem
Let $C \subseteq \mathbb{R}^n$ be a convex, open set and let $f$ be a twice differentiable function. The following conditions are equivalent:

1. $f$ is convex;
2. $f''$ is positive semi-definite on $C$.

Medvegyev (CEU) Presession 2014 371 / 444

Optimization in higher dimensions

Let $g(t) \triangleq f(ty + (1 - t)x)$. $f$ is convex if and only if, for any $x$ and $y$, $g$ is convex, which is equivalent to
$$0 \leq g''(t) = \big(f'(x + t(y - x))(y - x)\big)' = (y - x)^T f''(x + t(y - x))(y - x).$$

Medvegyev (CEU) Presession 2014 372 / 444

Sylvester criterion

Definition
Let $A$ be an $m \times n$ matrix and let $k$ be an integer with $0 < k \leq m$ and $k \leq n$. A $k \times k$ minor of $A$ is the determinant of a $k \times k$ matrix obtained from $A$ by deleting $m - k$ rows and $n - k$ columns. For an $m \times n$ matrix there are a total of $\binom{m}{k}\binom{n}{k}$ minors of size $k \times k$.

Definition
Let $A$ be an $n \times n$ matrix. We denote by $A(i_1, \ldots, i_k)$ the principal submatrix of $A$ lying in the rows and columns with indices $i_1, \ldots, i_k$. $|A(i_1, \ldots, i_k)|$ is a principal minor. There are $\binom{n}{k}$ principal minors of size $k$ and $2^n - 1$ principal minors altogether.

Definition
$\Delta_k \triangleq |A(1, 2, \ldots, k)|$ is a leading principal minor. There are $n$ leading principal minors.

Medvegyev (CEU) Presession 2014 373 / 444

Sylvester criterion

Theorem
A symmetric matrix $A$ is positive definite if and only if all its leading principal minors are positive, i.e.
$$\Delta_k = |A(1, 2, \ldots, k)| > 0.$$

Theorem
All principal minors of a positive definite matrix are positive. A symmetric $n \times n$ matrix is positive definite if there exists a nested sequence of $n$ principal minors of $A$ (not just the leading principal minors) that are positive.

Theorem
A symmetric matrix $A$ is positive semi-definite if and only if all its principal minors are nonnegative.

Medvegyev (CEU) Presession 2014 374 / 444
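A small sketch of how the Sylvester criterion can be checked numerically (the matrix below is a made-up example, not one from the slides):

```python
import numpy as np

def leading_principal_minors(A):
    """Return the leading principal minors Δ_1, ..., Δ_n of a square matrix."""
    n = A.shape[0]
    return [np.linalg.det(A[:k, :k]) for k in range(1, n + 1)]

A = np.array([[ 2.0, -1.0,  0.0],
              [-1.0,  2.0, -1.0],
              [ 0.0, -1.0,  2.0]])

print(leading_principal_minors(A))          # [2.0, 3.0, 4.0] -> all positive
print(np.all(np.linalg.eigvalsh(A) > 0))    # True: A is indeed positive definite
```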

Sylvester criterion

Corollary
A symmetric matrix $A$ is negative definite if and only if
$$(-1)^k \Delta_k = (-1)^k |A(1, 2, \ldots, k)| > 0.$$

Corollary
A symmetric matrix $A$ is negative semi-definite if and only if
$$(-1)^k |A(i_1, \ldots, i_k)| \geq 0$$
for every principal minor.

Medvegyev (CEU) Presession 2014 375 / 444

Jacobi criterion

Definition
We denote the number of positive eigenvalues and the number of negative eigenvalues of a symmetric matrix $A$ by $\pi(A)$ and $\nu(A)$, respectively, and call them the positive inertia and the negative inertia of $A$. Let $\delta(A)$ be the number of zero eigenvalues; the symbol $\delta(A)$ also stands for the defect, or rank deficiency, of $A$. Obviously
$$\pi(A) + \nu(A) + \delta(A) = \operatorname{size}(A).$$
The triplet $(\pi(A), \nu(A), \delta(A))$ is called the inertia of $A$.

Medvegyev (CEU) Presession 2014 376 / 444

Jacobi criterion

Theorem
Let
$$\Delta_k \triangleq \begin{cases} 1 & \text{if } k = 0, \\ |A(1, 2, \ldots, k)| & \text{if } k \geq 1. \end{cases}$$
Assume that all the leading principal minors of a symmetric matrix $A$ are nonzero. Then the positive inertia of $A$ equals the number of sign coincidences in the sequence $\Delta_0, \Delta_1, \ldots, \Delta_n$, and the negative inertia equals the number of sign variations in this sequence.

Medvegyev (CEU) Presession 2014 377 / 444

Jacobi criterion

Example
Calculate the inertia of
$$A = \begin{pmatrix} 1 & 2 & 1 \\ 2 & 3 & 4 \\ 1 & 4 & 1 \end{pmatrix}.$$
$$\Delta_0 = 1, \quad \Delta_1 = 1, \quad \Delta_2 = \begin{vmatrix} 1 & 2 \\ 2 & 3 \end{vmatrix} = -1, \quad \Delta_3 = \begin{vmatrix} 1 & 2 & 1 \\ 2 & 3 & 4 \\ 1 & 4 & 1 \end{vmatrix} = -4.$$
The sequence $1, 1, -1, -4$ has two sign coincidences and one sign variation, so the inertia of $A$ is $(2, 1, 0)$.

Medvegyev (CEU) Presession 2014 378 / 444
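The inertia can be cross-checked from the eigenvalues; a short NumPy sketch for the matrix of the example above:

```python
import numpy as np

A = np.array([[1.0, 2.0, 1.0],
              [2.0, 3.0, 4.0],
              [1.0, 4.0, 1.0]])

eig = np.linalg.eigvalsh(A)                  # eigenvalues of the symmetric matrix
inertia = (int(np.sum(eig > 1e-12)),
           int(np.sum(eig < -1e-12)),
           int(np.sum(np.abs(eig) <= 1e-12)))
print(eig, inertia)                          # two positive, one negative eigenvalue -> (2, 1, 0)
```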

Gundelfinger and Frobenius criterion

Theorem
Let $A$ be a symmetric matrix. Assume that in the sequence $\Delta_0, \ldots, \Delta_n$ the determinant $\Delta_n \neq 0$.

1. Assume that a minor $\Delta_k$, $k < n$, may be zero. In each such occasion (i.e., when $\Delta_k = 0$), assume that $\Delta_{k-1}\Delta_{k+1} \neq 0$. Assign arbitrary signs to the zero minors $\Delta_k$. Then the Jacobi rule holds for the modified sequence.
2. Assume that it may be possible that $\Delta_k = \Delta_{k+1} = 0$ when $k < n - 1$. In each such occasion, assume that $\Delta_{k-1}\Delta_{k+2} \neq 0$. Assign the same (arbitrary) sign to $\Delta_k$ and $\Delta_{k+1}$ if $\Delta_{k-1}\Delta_{k+2} < 0$ and different signs (in any one of the two possible ways) if $\Delta_{k-1}\Delta_{k+2} > 0$. Then the Jacobi rule holds for the modified sequence.
3. One cannot say anything with three consecutive zeros in the sequence.

Medvegyev (CEU) Presession 2014 379 / 444

Optimization in higher dimensions

Example
It is not true that a symmetric matrix $A$ is positive semi-definite if and only if its leading principal minors are nonnegative.

If $A = \begin{pmatrix} 0 & 0 \\ 0 & -1 \end{pmatrix}$ then $\Delta_1 = \Delta_2 = 0$, yet $A$ is negative semi-definite.

Medvegyev (CEU) Presession 2014 380 / 444

Optimization in higher dimensions

Example
It is not true that a symmetric matrix $A$ is positive semi-definite if and only if its leading principal minors are nonnegative.

Let
$$A = \begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1/2 \end{pmatrix}.$$
Then $\Delta_1 = |1| = 1$, $\Delta_2 = \begin{vmatrix} 1 & 1 \\ 1 & 1 \end{vmatrix} = 0$ and $\Delta_3 = \begin{vmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1/2 \end{vmatrix} = 0$, but $\begin{vmatrix} 1 & 1 \\ 1 & 1/2 \end{vmatrix} = -1/2$ and
$$\begin{pmatrix} 1 & 1 & -2 \end{pmatrix}\begin{pmatrix} 1 & 1 & 1 \\ 1 & 1 & 1 \\ 1 & 1 & 1/2 \end{pmatrix}\begin{pmatrix} 1 \\ 1 \\ -2 \end{pmatrix} = -2 < 0,$$
so $A$ is not positive semi-definite.

Medvegyev (CEU) Presession 2014 381 / 444

Optimization in higher dimensions

Example
The Gundelfinger and Frobenius criterion for $n = 2$.

Let $A = \begin{pmatrix} \alpha & \beta \\ \beta & \gamma \end{pmatrix}$. To apply the criterion in the degenerate case let $\alpha = 0$ and $\beta \neq 0$, so that $\Delta_2 \neq 0$. In this case
$$\Delta_0 = 1, \quad \Delta_1 = 0, \quad \Delta_2 = -\beta^2 < 0.$$
Whichever sign is assigned to the zero minor $\Delta_1$, the modified sequence contains one sign coincidence and one sign variation, so $A$ is indefinite.

Medvegyev (CEU) Presession 2014 382 / 444

Optimization in higher dimensions

Example

What is the definiteness of the quadratic form $f(x) = x^T A x$ if
$$A = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 2 \\ 1 & 0 & 0 \end{pmatrix}?$$
The matrix is not symmetric, so one should check the definiteness of
$$B = \frac{A + A^T}{2} = \begin{pmatrix} 1 & 0 & 1 \\ 0 & 0 & 1 \\ 1 & 1 & 0 \end{pmatrix}.$$

Medvegyev (CEU) Presession 2014 383 / 444

Optimization in higher dimensions

Observe that
$$\begin{vmatrix} 1 & 0 \\ 0 & 0 \end{vmatrix} = 0.$$
The determinant of $B$ is $-1$. The determinant is the product of the eigenvalues, so $B$ is either indefinite or negative definite. But $e_1^T B e_1 = 1 > 0$, so $B$ is indefinite. (Indeed, $|B - \lambda E| = (-1)^n k(\lambda)$ where $k(\lambda) = \prod_i (\lambda - \lambda_i)^{n_i}$, so $(-1)^n k(0) = \det(B) = (-1)^n \prod_i (-\lambda_i)^{n_i} = \prod_i \lambda_i^{n_i}$.)

Medvegyev (CEU) Presession 2014 384 / 444

Optimization in higher dimensions

Observe that
$$\Delta_0 = 1, \quad \Delta_1 = 1, \quad \Delta_2 = 0, \quad \Delta_3 = -1.$$
If we set $\Delta_2 = +1$ the modified sequence is $(1, 1, 1, -1)$; if we set $\Delta_2 = -1$ it is $(1, 1, -1, -1)$. In both cases
$$(\pi(B), \nu(B), \delta(B)) = (2, 1, 0).$$

Medvegyev (CEU) Presession 2014 385 / 444

Optimization in higher dimensions

One can also observe that the characteristic polynomial of $B$ is $k(\lambda) = \lambda^3 - \lambda^2 - 2\lambda + 1$, so
$$\frac{dk(\lambda)}{d\lambda} = 3\lambda^2 - 2\lambda - 2, \qquad \lambda_{1,2} = \frac{2 \pm \sqrt{4 + 24}}{6} = \frac{2 \pm \sqrt{28}}{6}.$$
So the local maximum of $k(\lambda)$ is at $\lambda_1 < 0$ and the local minimum is at $\lambda_2 > 0$. Hence there is a negative and a positive root.

Medvegyev (CEU) Presession 2014 386 / 444

Optimization in higher dimensions

Example
Calculate the inertia of
$$A = \begin{pmatrix} 0 & 0 & 1 \\ 0 & 1 & 1 \\ 1 & 1 & 1 \end{pmatrix}.$$
Observe that $\Delta_0 = 1$, $\Delta_1 = 0$, $\Delta_2 = 0$, $\Delta_3 = -1$. By the Gundelfinger–Frobenius criterion the modified sequence is $(1, 1, 1, -1)$ or $(1, -1, -1, -1)$, so the inertia is $(2, 1, 0)$. The eigenvalues are (by Matlab) $-0.8019$, $0.5550$, $2.2470$.

Medvegyev (CEU) Presession 2014 387 / 444

Optimization in higher dimensions

Theorem
If $A$ is an $n \times n$ symmetric matrix and $\Delta_1 > 0$, $\Delta_2 > 0$, \ldots, $\Delta_{n-1} > 0$ and $\Delta_n = 0$, then $A$ is positive semi-definite. If $(-1)^k\Delta_k > 0$ for $k = 1, \ldots, n-1$ and $\Delta_n = 0$, then $A$ is negative semi-definite.

Medvegyev (CEU) Presession 2014 388 / 444

Optimization in higher dimensions

Example

Let
$$A = \begin{pmatrix} 2 & 1 & 3 \\ 1 & 2 & 3 \\ 3 & 3 & 6 \end{pmatrix}.$$
Then $\Delta_1 = 2 > 0$, $\Delta_2 = 3 > 0$, $\Delta_3 = 0$, so $A$ is positive semi-definite. The eigenvalues of $A$ are $9$, $1$, $0$.

Medvegyev (CEU) Presession 2014 389 / 444

Convexity and quasi-convexity

Definition
A function $f$ is called quasi-convex if the lower contour sets
$$\{x : f(x) \leq c\}$$
are convex for all $c$. If for all $c$ the upper contour sets
$$\{x : f(x) \geq c\}$$
are convex then the function is called quasi-concave.

Medvegyev (CEU) Presession 2014 390 / 444

Convexity and quasi-convexity

Theorem
A function $f$ defined on a convex set is quasi-convex if and only if for all $0 < \lambda < 1$
$$f(\lambda x + (1 - \lambda)y) \leq \max(f(x), f(y)).$$

If $f$ is quasi-convex by the first definition, then taking $c = \max(f(x), f(y))$, the set $\{u : f(u) \leq c\}$ is convex, so as $x, y \in \{u : f(u) \leq c\}$, therefore $\lambda x + (1 - \lambda)y \in \{u : f(u) \leq c\}$, hence
$$f(\lambda x + (1 - \lambda)y) \leq c = \max(f(x), f(y)).$$
On the other hand, if $f$ is quasi-convex by the second definition, then for any $c$, if $x, y \in \{u : f(u) \leq c\}$ then
$$f(\lambda x + (1 - \lambda)y) \leq \max(f(x), f(y)) \leq c,$$
hence
$$\lambda x + (1 - \lambda)y \in \{u : f(u) \leq c\},$$
that is, $f$ is quasi-convex.
Medvegyev (CEU) Presession 2014 391 / 444

Convexity and quasi-convexity

Corollary
Any convex function is quasi-convex.

If $f$ is convex then
$$f(\lambda x + (1 - \lambda)y) \leq \lambda f(x) + (1 - \lambda)f(y) \leq \max(f(x), f(y)).$$

Medvegyev (CEU) Presession 2014 392 / 444

Convexity and quasi-convexity

Corollary

If $f$ is quasi-convex and $g$ is increasing then $g \circ f$, that is $x \mapsto g(f(x))$, is quasi-convex as well.

As $g$ is increasing, from
$$f(\lambda x + (1 - \lambda)y) \leq \max(f(x), f(y))$$
we get
$$g(f(\lambda x + (1 - \lambda)y)) \leq g(\max(f(x), f(y))) = \max(g(f(x)), g(f(y))).$$

Medvegyev (CEU) Presession 2014 393 / 444

Convexity and quasi-convexity

Example
The Cobb–Douglas function
$$f(x_1, x_2, \ldots, x_n) \triangleq A\prod_{i=1}^{n} x_i^{\alpha_i}$$
with $\alpha_i > 0$, $A > 0$ is quasi-concave on $(x_1, x_2, \ldots, x_n) > 0$.

The functions $\ln x_i$ are concave, so
$$u \triangleq \alpha_1\ln x_1 + \alpha_2\ln x_2 + \ldots + \alpha_n\ln x_n$$
is concave. The function $A e^{u}$ is strictly increasing in $u$, so
$$f(x_1, x_2, \ldots, x_n) = A e^{u}$$
is quasi-concave.

Medvegyev (CEU) Presession 2014 394 / 444

Convexity and quasi-convexity

Since
$$f(\lambda x + (1 - \lambda)y) \leq \max(f(x), f(y)),$$
a continuous function is quasi-convex if and only if on any segment
$$[x, y] = \{u : u = \lambda x + (1 - \lambda)y,\ 0 \leq \lambda \leq 1\}$$
there is no strict local maximum in the interior of $[x, y]$. From this it is not difficult to prove the following two propositions:

Medvegyev (CEU) Presession 2014 395 / 444

Convexity and quasi-convexity

Lemma
Let $f$ be a differentiable function from an open, convex, non-empty set $U$ of $\mathbb{R}^n$ into $\mathbb{R}$. $f$ is quasi-convex if and only if
$$\forall x, y \in U: \quad f(y) \leq f(x) \;\Rightarrow\; f'(x)(y - x) \leq 0.$$

Let $f$ be a twice differentiable function from $\mathbb{R}^n$ into $\mathbb{R}$. If
$$\forall x \in \mathbb{R}^n,\ h \neq 0: \quad f'(x)h = 0 \;\Rightarrow\; h^T f''(x)h > 0,$$
then $f$ is quasi-convex.

Medvegyev (CEU) Presession 2014 396 / 444

Convexity and quasi-convexity

Example
If $\sum_i \alpha_i \leq 1$ then the Cobb–Douglas function is concave.

We prove it for $n = 2$. The Hesse matrix is
$$\begin{pmatrix} \alpha_1(\alpha_1-1)Ax_1^{\alpha_1-2}x_2^{\alpha_2} & \alpha_1\alpha_2 Ax_1^{\alpha_1-1}x_2^{\alpha_2-1} \\ \alpha_1\alpha_2 Ax_1^{\alpha_1-1}x_2^{\alpha_2-1} & \alpha_2(\alpha_2-1)Ax_1^{\alpha_1}x_2^{\alpha_2-2} \end{pmatrix} = Ax_1^{\alpha_1-2}x_2^{\alpha_2-2}\begin{pmatrix} \alpha_1(\alpha_1-1)x_2^2 & \alpha_1\alpha_2 x_1x_2 \\ \alpha_1\alpha_2 x_1x_2 & \alpha_2(\alpha_2-1)x_1^2 \end{pmatrix},$$
and
$$\begin{vmatrix} \alpha_1(\alpha_1-1)x_2^2 & \alpha_1\alpha_2 x_1x_2 \\ \alpha_1\alpha_2 x_1x_2 & \alpha_2(\alpha_2-1)x_1^2 \end{vmatrix} = \alpha_1 x_2\,\alpha_2 x_1\begin{vmatrix} (\alpha_1-1)x_2 & \alpha_1 x_2 \\ \alpha_2 x_1 & (\alpha_2-1)x_1 \end{vmatrix} = \alpha_1\alpha_2 x_1^2 x_2^2\begin{vmatrix} \alpha_1-1 & \alpha_1 \\ \alpha_2 & \alpha_2-1 \end{vmatrix}.$$
Medvegyev (CEU) Presession 2014 397 / 444

Convexity and quasi-convexity

$$\begin{vmatrix} \alpha_1-1 & \alpha_1 \\ \alpha_2 & \alpha_2-1 \end{vmatrix} = (\alpha_1-1)(\alpha_2-1) - \alpha_1\alpha_2 = 1 - (\alpha_1 + \alpha_2),$$
which is positive if $\alpha_1 + \alpha_2 < 1$. As $\alpha_1(\alpha_1-1) < 0$, the Hesse matrix is negative definite, so in this case the function is concave. (In fact in this case the Cobb–Douglas function is strictly concave as the Hesse matrix is negative definite and not just negative semi-definite.)

Medvegyev (CEU) Presession 2014 398 / 444
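A symbolic sketch of the Cobb–Douglas Hessian for one assumed choice of exponents with $\alpha_1 + \alpha_2 < 1$ (SymPy; the exponents and the evaluation point are illustration-only assumptions):

```python
import sympy as sp

x1, x2 = sp.symbols('x1 x2', positive=True)
a1, a2 = sp.Rational(1, 3), sp.Rational(1, 2)    # assumed exponents with a1 + a2 < 1

f = x1**a1 * x2**a2                              # Cobb-Douglas with A = 1
H = sp.hessian(f, (x1, x2))

point = {x1: 2, x2: 3}
d1 = H[0, 0].subs(point)                         # leading principal minor Δ1
d2 = H.det().subs(point)                         # leading principal minor Δ2 = det H
print(float(d1), float(d2))                      # Δ1 < 0 and Δ2 > 0, so H is negative definite there
```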

Convexity and quasi-convexity

If $\alpha_1 + \alpha_2 = 1$ then
$$\begin{vmatrix} \alpha_1(\alpha_1-1)x_2^2 - \lambda & \alpha_1\alpha_2 x_1x_2 \\ \alpha_1\alpha_2 x_1x_2 & \alpha_2(\alpha_2-1)x_1^2 - \lambda \end{vmatrix} = \begin{vmatrix} -\alpha_1\alpha_2 x_2^2 - \lambda & \alpha_1\alpha_2 x_1x_2 \\ \alpha_1\alpha_2 x_1x_2 & -\alpha_2\alpha_1 x_1^2 - \lambda \end{vmatrix} = \lambda^2 + \alpha_2\alpha_1\big(x_1^2 + x_2^2\big)\lambda = \lambda\Big(\lambda + \alpha_2\alpha_1\big(x_1^2 + x_2^2\big)\Big),$$
so the characteristic roots are non-positive, the Hesse matrix is negative semi-definite and the function is concave.

Medvegyev (CEU) Presession 2014 399 / 444

Optimization in higher dimensions

Lemma
The sum of two convex functions is convex.

Example
The sum of two quasi-convex functions is not necessarily quasi-convex.

$y = x^3$ and $y = -x$ are quasi-convex, but $y = x^3 - x$ is not quasi-convex.

Medvegyev (CEU) Presession 2014 400 / 444

Homework

Let $f(K, L) \triangleq A\left(\delta K^{\rho} + (1-\delta)L^{\rho}\right)^{1/\rho}$. Show that if $\rho \leq 1$ then the function is concave, and if $\rho \geq 1$ then the function is convex.
Show that the function $g(x, y) = e^{x+y} + e^{x-y} - x - y$ is convex.
Show that the function $f(x, y) = x + y - e^{x} - e^{x+y}$ is concave.
Show that the Cobb–Douglas function is not concave if $\sum_i \alpha_i > 1$.
Which of the following functions is quasi-concave: $f(x) = 3x + 4$, $f(x, y) = ye^{x}$, $f(x, y) = x^2y^3$, $x, y > 0$?
Prove that if $f$ and $g$ are quasi-convex then $\max(f, g)$ is also quasi-convex.
Give an example of two quasi-convex functions whose product is not quasi-convex.

Medvegyev (CEU) Presession 2014 401 / 444

Homework

Answer the following questions with yes or no.

1. The sum of convex functions is never concave.
2. If $f(x)$ and $g(y)$ are positive, decreasing functions then $f(x)g(y)$ is quasi-convex.
3. If $f(x)$ and $g(y)$ are positive, decreasing functions then $f(x)g(y)$ is quasi-concave.
4. If $f(x)$ and $g(y)$ are positive, concave functions then $f(x)g(y)$ is quasi-concave.
5. The product of concave functions is quasi-concave.
6. The quasi-convex functions form a cone, that is, if $f_1$ and $f_2$ are quasi-convex and $\alpha_1$ and $\alpha_2$ are non-negative then $\alpha_1 f_1 + \alpha_2 f_2$ is also quasi-convex.
7. A convex function on a closed interval is continuous.
8. The second derivative of a convex function is always positive definite.

Medvegyev (CEU) Presession 2014 402 / 444

Homework

Give an example of a convex function which is not log-convex.
Give an example of a log-concave function which is not concave.
Show a quasi-convex function $f$ for which there is no increasing monotone function $g$ such that $g \circ f$ is convex.
Show that if $A$ and $B$ are convex sets then $A + B \triangleq \{z \mid z = x + y,\ x \in A,\ y \in B\}$ is convex.
Show that if $A$ and $B$ are convex then $A \cap B$ is also convex.
Show that if $K$ is compact and $A : \mathbb{R}^n \to \mathbb{R}^m$ is a linear mapping then $A(K) \triangleq \{z \mid z = Ay,\ y \in K\}$ is also compact.
Show that if $C$ is a closed convex set and $K$ is a convex compact set then $K + C$ is also a closed convex set.
Show that if $A$ and $B$ are compact sets then $A + B$ is also compact.

Medvegyev (CEU) Presession 2014 403 / 444

Homework

Is it true that

if $f : [a, b] \to \mathbb{R}$ is continuous and $f : (a, b) \to \mathbb{R}$ is differentiable, then there is a $\xi \in (a, b)$ such that $f(b) - f(a) = f'(\xi)(b - a)$;
if $f : (a, b) \to \mathbb{R}$ is differentiable, then there is a $\xi \in (a, b)$ such that $f(b) - f(a) = f'(\xi)(b - a)$;
if $f \geq 0$ and $f'' \geq 0$ then $\sqrt{f}$ is quasi-convex;
if $f \geq 0$ and $f'' \leq 0$ then $\sqrt{f}$ is quasi-convex?

Medvegyev (CEU) Presession 2014 404 / 444

Homework

Is it true that

if $f$ and $g$ are convex then $f + g$ is also convex;
if $f$ is quasi-convex and $g$ is quasi-concave then $f - g$ is quasi-convex;
if $f$ and $g$ are convex and monotone functions and $h(x) \triangleq f(g(x))$ then $h$ is convex;
if $f$ and $g$ are convex differentiable functions and $h(x) = f(g(x))$ then $h'' \geq 0$?

Medvegyev (CEU) Presession 2014 405 / 444

Homework

Let $G$ be an open convex subset of $\mathbb{R}^n$ and $f : G \to \mathbb{R}$. Is it true that

if the characteristic roots of $f''(x)$ are non-negative for all $x \in G$ then $f$ is convex;
if the characteristic roots of $f''(x)$ are positive for all $x \in G$ then $f$ is strictly convex;
if $f$ is quasi-convex then the characteristic roots of $f''(x)$ are non-negative for all $x \in G$;
if for all $x \in G$ the matrix $f''(x)$ is non-negative then $f$ is convex?

Medvegyev (CEU) Presession 2014 406 / 444

Homework

Is it true that

if the set $A \subseteq \mathbb{R}^n$ is convex and $f : \mathbb{R}^n \to \mathbb{R}^m$ is a linear mapping then $f(A) = \{y : y = f(x),\ x \in A\}$ is convex;
if the set $A \subseteq \mathbb{R}^m$ is convex and $f : \mathbb{R}^n \to \mathbb{R}^m$ is a linear mapping then $f^{-1}(A) = \{x : y = f(x),\ y \in A\}$ is convex;
if the set $A \subseteq \mathbb{R}^n$ is convex and closed and $f : \mathbb{R}^n \to \mathbb{R}^m$ is a linear mapping then $f(A) = \{y : y = f(x),\ x \in A\}$ is convex and closed;
if the set $A \subseteq \mathbb{R}^m$ is convex and closed and $f : \mathbb{R}^n \to \mathbb{R}^m$ is a linear mapping then $f^{-1}(A) = \{x : y = f(x),\ y \in A\}$ is convex and closed?

Medvegyev (CEU) Presession 2014 407 / 444

Homework

Is it true that

the CES function $f(K, L) \triangleq A\left(\delta K^{\rho} + (1-\delta)L^{\rho}\right)^{1/\rho}$ is convex when $\rho \geq 1$;
the CES function $f(K, L) \triangleq A\left(\delta K^{\rho} + (1-\delta)L^{\rho}\right)^{1/\rho}$ is concave when $\rho \geq 1$;
the CES function $f(K, L) \triangleq A\left(\delta K^{\rho} + (1-\delta)L^{\rho}\right)^{1/\rho}$ is concave when $\rho \leq 1$;
the CES function $f(K, L) \triangleq A\left(\delta K^{\rho} + (1-\delta)L^{\rho}\right)^{1/\rho}$ is convex when $\rho \leq 1$?

Medvegyev (CEU) Presession 2014 408 / 444

Homework

Are these quadratic forms positive definite?

$$7x_1^2 + 5x_2^2 + 3x_3^2 - 8x_1x_2 + 8x_2x_3,$$
$$-8x_1x_3 + 2x_1x_4 + 2x_2x_3 + 8x_2x_4,$$
$$2x_1x_2 + 2x_3x_4.$$

Medvegyev (CEU) Presession 2014 409 / 444

Homework

What are the local extrema of the following functions? What are the global extrema?

$$z = e^{-(x^2+y^2)}\left(x^2 + y^2\right),$$
$$z = e^{x-2y}\left(5 - 2x + y\right),$$
$$z = x^2y^3\left(1 - x - y\right).$$

Medvegyev (CEU) Presession 2014 410 / 444

Theorem of Weierstrass

Theorem
If the function $f : [a, b] \to \mathbb{R}$ is continuous then there is a point $x_0 \in [a, b]$ which is a point of global minimum for $f$ on $[a, b]$.

Medvegyev (CEU) Presession 2014 411 / 444

Theorem of Weierstrass

We want to generalize the theorem to $\mathbb{R}^n$.

Definition
A set $A$ is bounded if there is a $K$ such that $\|x\| \leq K$ for every $x \in A$.

Definition
A set $A$ is closed if for every sequence $(x_n)$ in $A$ with $x_n \to x_0$ we have $x_0 \in A$ as well.

Definition
A set in $\mathbb{R}^n$ is compact when it is closed and bounded.

Theorem
If a function $f$ is continuous on a non-empty compact set then it attains its minimum on this set.

Medvegyev (CEU) Presession 2014 412 / 444

Separating hyperplanes

Example
If $A \subseteq \mathbb{R}^n$ is a non-empty, convex and closed set then for every $x_0$ there is a unique projection of $x_0$ on $A$.

Medvegyev (CEU) Presession 2014 413 / 444

Separating hyperplanes

Theorem
Let $K$ be a non-empty compact and convex set and let $A$ be a non-empty convex and closed set. If $K \cap A = \emptyset$ then there is a $p \neq 0$ such that
$$\sup\{(p, x) \mid x \in K\} < \inf\{(p, x) \mid x \in A\}.$$

Theorem (Separating hyperplanes)
If $A$ is a non-empty convex set and $x_0 \notin A$ then there is a $p \neq 0$ such that
$$(p, x_0) \leq \inf\{(p, x) \mid x \in A\}.$$

Theorem (Separating hyperplanes)
If $A$ and $B$ are non-empty convex sets and $A \cap B = \emptyset$ then there is a $p \neq 0$ such that
$$\sup\{(p, x) \mid x \in B\} \leq \inf\{(p, x) \mid x \in A\}.$$
Medvegyev (CEU) Presession 2014 414 / 444

Homework

A set $A \subseteq \mathbb{R}^n$ is called open if for every $x \in A$ there is an $r > 0$ such that the ball $\{y \mid \|x - y\| < r\}$ is contained in $A$.

1. Prove that if $(A_{\gamma})$ is a collection of open sets then their union is also open.
2. Prove that if $(A_n)$ is a finite collection of open sets then their intersection is also open. Give a counterexample showing that this is not true for arbitrary intersections.
3. Prove that a set is closed if and only if its complement is open.
4. Prove that an arbitrary intersection of closed sets is closed.
5. Prove that a finite union of closed sets is closed. Give a counterexample showing that this is not true for arbitrary unions.

Medvegyev (CEU) Presession 2014 415 / 444

Constrained optimization

Consider the following optimization problem:
$$\varphi_0(x) \to \min, \quad x \in X,$$
where $U \subseteq \mathbb{R}^n$ and $X \triangleq \{x \in U \mid \varphi_1(x) \leq 0, \ldots, \varphi_m(x) \leq 0\}$.

Definition
If $x \in X$ then $x$ is called feasible. If $x^*$ solves the minimization problem then $x^*$ is called optimal. If there is an open ball $V$ such that $x^*$ is optimal in $X \cap V$ then we say that $x^*$ is a local minimum.

Definition
The function
$$L(l, x) \triangleq \lambda_0\varphi_0(x) + \lambda_1\varphi_1(x) + \ldots + \lambda_m\varphi_m(x)$$
is called the Lagrangian of the problem. The real numbers $(\lambda_k)_{k=0}^{m}$ are the so-called multipliers.

Medvegyev (CEU) Presession 2014 416 / 444

Convex optimization

Theorem (Convex Kuhn–Tucker)
Assume that $x^*$ is the optimal solution of the above problem and that the set $U$ and the functions $\varphi_k$, $k = 0, 1, \ldots, m$, are convex. Then there exists a vector
$$l \triangleq (\lambda_0, \lambda_1, \ldots, \lambda_m) \in \mathbb{R}^{m+1}$$
such that

1. $0 \leq l$, $l \neq 0$,
2. $\lambda_i\varphi_i(x^*) = 0$ for all $i = 1, \ldots, m$,
3. $L(l, x^*) = \min\{L(l, x) \mid x \in U\}$.

Medvegyev (CEU) Presession 2014 417 / 444

Convex optimization

Example

Let $x + y \to \min$, $x^2 + y^2 \leq 1$, and let $\lambda_0 \triangleq 1$, $\lambda_1 \triangleq 1/\sqrt{2}$. Then
$$L(\lambda_0, \lambda_1, x, y) = x + y + \frac{1}{\sqrt{2}}\left(x^2 + y^2 - 1\right).$$
It is a convex function, so it has its minimum where the partial derivatives vanish:
$$\frac{\partial L}{\partial x} = 1 + \frac{2}{\sqrt{2}}x = 0, \qquad \frac{\partial L}{\partial y} = 1 + \frac{2}{\sqrt{2}}y = 0.$$
So $x = y = -1/\sqrt{2}$, which is feasible.

Medvegyev (CEU) Presession 2014 418 / 444
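A numerical sketch of the same problem with a general-purpose solver (SciPy; the code is an illustration, not part of the course material):

```python
import numpy as np
from scipy.optimize import minimize

# Solve  x + y -> min  subject to  x^2 + y^2 <= 1, written as g(v) >= 0
cons = [{"type": "ineq", "fun": lambda v: 1.0 - v[0] ** 2 - v[1] ** 2}]
res = minimize(lambda v: v[0] + v[1], x0=np.array([0.0, 0.0]), constraints=cons)

print(res.x, -1 / np.sqrt(2))     # both coordinates ≈ -0.7071, matching the Kuhn-Tucker solution
```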

Convex optimization

Example

If x ! min, x 0 then L (λ0,λ1, x) = λ0x λ1x which has a minimumif and only if λ0 = λ1 $ λ. In this case L 0. As l 6= 0 λ > 0, hence theonly x for which the complementarity condition λx = 0 holds is x = 0.

Medvegyev (CEU) Presession 2014 419 / 444

Convex optimization

Theorem (Sufficient Kuhn–Tucker)
If there is a multiplier vector $l$ such that $\lambda_0 > 0$, and for the feasible $x^*$ the conditions 1.–3. are true, then $x^*$ is an optimal solution of the problem.

Medvegyev (CEU) Presession 2014 420 / 444

Convex optimization

Definition
The function $\sum \alpha_i x_i + \beta$ is called affine.

Definition (Generalized Slater's condition)
The problem satisfies the generalized Slater's condition if there is an $\hat{x} \in U$ such that $\varphi_i(\hat{x}) < 0$ if $\varphi_i$ is non-affine and $\varphi_i(\hat{x}) \leq 0$ if $\varphi_i$ is affine. If the generalized Slater's conditions are satisfied, one can say that the problem is not only consistent but also superconsistent.

Theorem (Slater's condition)
If the problem is superconsistent then one can choose $\lambda_0 = 1$ in the convex optimization problem.

Medvegyev (CEU) Presession 2014 421 / 444

Convex optimization

Theorem
Assume that the problem is a superconsistent convex optimization problem and $U$ is an open convex set. Assume also that all functions $\varphi_i$ are differentiable at $x^*$. Then $x^*$ is an optimal solution of the constrained optimization problem if and only if there is a vector
$$l = (\lambda_1, \ldots, \lambda_m) \in \mathbb{R}^{m}$$
such that

1. $l \geq 0$,
2. $\lambda_i\varphi_i(x^*) = 0$ for all $i = 1, \ldots, m$,
3. $L'_x(l, x^*) = \varphi'_0(x^*) + \sum_{k=1}^{m}\lambda_k\varphi'_k(x^*) = 0$.

Medvegyev (CEU) Presession 2014 422 / 444

Convex optimization

Example
Slater's condition fails.

Consider the problem
$$x \to \min, \quad x^2 \leq 0.$$
Obviously it is a convex problem and the only feasible solution is $x = 0$, hence Slater's condition is not valid. The Lagrangian is
$$L(\lambda_0, \lambda_1, x) = \lambda_0 x + \lambda_1 x^2.$$
The system
$$\frac{\partial L}{\partial x} = \lambda_0 + 2\lambda_1 x = 0, \quad x^2 \leq 0$$
has a solution only when $\lambda_0 = 0$. Obviously $L(0, 1, x) = x^2$ has its global minimum at $x = 0$; $L(1, 0, x) = x$ has no minimum and $L(1, 1, x) = x + x^2$ has its minimum at
$$1 + 2x = 0, \quad x = -1/2.$$

Medvegyev (CEU) Presession 2014 423 / 444

Non-convex optimization

Example
If the problem is not convex then the optimal solution is not necessarily a global minimum point of the Lagrangian.

Consider the problem
$$x^3 \to \min, \quad x^2 \leq 1.$$
$$L(x, \lambda) = x^3 + \lambda\left(x^2 - 1\right),$$
which has no global minimum. The optimal solution is $x^* = -1$.
$$\frac{\partial L}{\partial x}(x, \lambda) = 3x^2 + 2\lambda x = 0.$$
At $x = -1$: $3 - 2\lambda = 0$, hence $\lambda = 3/2$. At $x = -1$
$$L''_{xx} = \left(x^3 + \frac{3}{2}\left(x^2 - 1\right)\right)'' = 6x + 3 = -3 < 0,$$
so the global minimum of the problem is a local maximum of the Lagrangian.
Medvegyev (CEU) Presession 2014 424 / 444

Non-convex optimization

Theorem (Differentiable Kuhn–Tucker)
Let $x^*$ be a local minimum and let $I \triangleq \{i \geq 1 \mid \varphi_i(x^*) = 0\}$. Assume that if $i \notin I$ then $\varphi_i$ is continuous at $x^*$ and if $i \in I$ then $\varphi_i$ is differentiable at $x^*$. Then there is a vector
$$l \triangleq (\lambda_0, \lambda_1, \ldots, \lambda_m) \in \mathbb{R}^{m+1}$$
such that

1. $l \geq 0$, $l \neq 0$,
2. $\lambda_i\varphi_i(x^*) = 0$ for all $i = 1, \ldots, m$,
3. $L'_x(l, x^*) \triangleq \lambda_0\varphi'_0(x^*) + \sum_{k=1}^{m}\lambda_k\varphi'_k(x^*) = 0$.

If the gradient vectors $\varphi'_i(x^*)$, $i \in I$, are linearly independent, then $\lambda_0 = 1$ is possible.

Medvegyev (CEU) Presession 2014 425 / 444

Non-convex optimization

Theorem
Consider the problem
$$\varphi_0(x) \to \min,$$
$$\varphi_i(x) \leq 0, \quad i = 1, 2, \ldots, m,$$
$$\varphi_i(x) = 0, \quad i = m+1, \ldots, p,$$
$$x \in U \subseteq \mathbb{R}^n.$$
Assume that

1. $U$ is open,
2. for $i = 0, 1, \ldots, m$ all $\varphi_i$ are differentiable at $x^*$,
3. for $i = m+1, m+2, \ldots, p$ all $\varphi_i$ are continuously differentiable at $x^*$.

Medvegyev (CEU) Presession 2014 426 / 444

Non-convex optimization

Theorem
If $x^*$ is a local minimum of the problem, then there is an
$$l \triangleq (\lambda_0, \lambda_1, \ldots, \lambda_p) \in \mathbb{R}^{p+1}$$
such that

1. $l \neq 0$,
2. $\lambda_i \geq 0$ for $i = 0, \ldots, m$,
3. $\lambda_i\varphi_i(x^*) = 0$ for $i = 1, \ldots, p$,
4. $L'_x(l, x^*) = \sum_{k=0}^{p}\lambda_k\varphi'_k(x^*) = 0$.

If the gradients $\varphi'_i(x^*)$, $i = m+1, \ldots, p$, together with $\varphi'_i(x^*)$, $i \in I \triangleq \{i \geq 1 \mid \varphi_i(x^*) = 0\}$, are linearly independent, then $\lambda_0 = 1$ is possible.

Medvegyev (CEU) Presession 2014 427 / 444

Lagrange's theorem

Theorem
Assume that $\varphi_0$ is differentiable, let $\varphi_k$, $k = 1, 2, \ldots, p$, be continuously differentiable, and assume that $\varphi_0$ has a local minimum at $x^*$ on the set
$$X \triangleq \{x \mid \varphi_k(x) = 0,\ k = 1, \ldots, p\} \triangleq \{x \mid F(x) = 0\}.$$
Then there are multipliers
$$l = (\lambda_0, \lambda_1, \ldots, \lambda_p) \in \mathbb{R}^{p+1}$$
such that

1. $l \neq 0$,
2. $L'_x(l, x^*) = \sum_{k=0}^{p}\lambda_k\varphi'_k(x^*) = 0$.

If the gradients $\varphi'_i(x^*)$, $i = 1, \ldots, p$, are linearly independent then $\lambda_0 = 1$ is possible.

Medvegyev (CEU) Presession 2014 428 / 444

Lagrange's theorem

Example
Constraint qualification is important.

Consider the problem
$$3x + 2y \to \min, \quad x^2 + y^2 = 0.$$
The "Lagrangian" is $L = 3x + 2y + \lambda\left(x^2 + y^2\right)$.
$$\frac{\partial L}{\partial x} = 3 + 2\lambda x = 0, \qquad \frac{\partial L}{\partial y} = 2 + 2\lambda y = 0,$$
$$-\frac{2}{2y} = \lambda = -\frac{3}{2x},$$
which has no feasible solution, although there is an optimal solution $x^* = y^* = 0$. At $x^* = y^* = 0$,
$$\operatorname{grad}\varphi(x^*, y^*) = \begin{pmatrix} 2x^* & 2y^* \end{pmatrix} = 0.$$

Medvegyev (CEU) Presession 2014 429 / 444

Lagrange's theorem

First of all observe that if the gradients of the constraints at $x^*$ are not linearly independent then the theorem is obvious, as one can choose $\lambda_0 \triangleq 0$ and, by the dependence of the row vector gradients $\varphi'_k(x^*)$, there are constants $\lambda_k$, $k = 1, \ldots, p$, not all of them zero, such that
$$\sum_{k=1}^{p}\lambda_k\varphi'_k(x^*) = 0.$$
Hence $L'_x = \sum_{k=0}^{p}\lambda_k\varphi'_k(x^*) = 0$. If the gradients are independent then one should prove that the row vector $\varphi'_0(x^*)$ is a linear combination of the row vectors $\varphi'_k(x^*)$, $k = 1, \ldots, p$. If $p = n$ then it is trivial, as in this case the matrix formed from the gradients is invertible. Otherwise we can use the implicit function theorem.

Medvegyev (CEU) Presession 2014 430 / 444

Lagrange's theorem

Lagrange's theorem is just a combination of Fermat's principle, the chain rule and the Implicit Function Theorem. The basic idea in optimization theory is that if a point $x_0$ is a local minimum then for any "variation" $h$ the function $t \mapsto f(x_0 + th)$ has a minimum at $t = 0$. Then one should calculate the derivative using the chain rule. With the IFT one eliminates the constraints. Let $\varphi_0(x_1, x_2)$ be optimal over
$$W \cap X = W \cap \{F(x_1, x_2) = 0\}.$$

Medvegyev (CEU) Presession 2014 431 / 444

Lagrange's theorem

By the IFT there is a differentiable function $x_1(x_2)$ and a neighborhood $V$ of $x_2^*$ such that $F(x_1(x_2), x_2) = 0$ and
$$\{(x_1(x_2), x_2) \mid x_2 \in V\} \subseteq W,$$
where $V \subseteq \mathbb{R}^{n-p}$ is also open. Observe that $x_2$ has $n - p$ components and $x_1(x_2)$ is a function of $n - p$ variables with $p$ values. The function
$$\psi(x_2) \triangleq \varphi_0(x_1(x_2), x_2)$$
is a real valued function of $n - p$ variables. By Fermat's principle its gradient is zero at $x_2^*$:
$$\frac{\partial}{\partial x_2}\psi(x_2^*) \triangleq \frac{\partial}{\partial x_2}\varphi_0(x_1(x_2^*), x_2^*) = \frac{\partial}{\partial x_2}\varphi_0(x^*) = 0,$$
where the derivative is taken along the constraint, that is, through $x_1(x_2)$ as well.

Medvegyev (CEU) Presession 2014 432 / 444

Lagrange's theorem

By the chain rule
$$\frac{\partial}{\partial x_2}\varphi_0(x^*) = \frac{\partial}{\partial x_1}\varphi_0(x_1(x_2^*), x_2^*)\,\frac{\partial x_1}{\partial x_2} + \frac{\partial}{\partial x_2}\varphi_0(x_1(x_2^*), x_2^*)\,\frac{\partial x_2}{\partial x_2} = \frac{\partial}{\partial x_1}\varphi_0(x^*)\,\frac{\partial x_1}{\partial x_2} + \frac{\partial}{\partial x_2}\varphi_0(x^*).$$
Observe that, as $x_2$ has $n - p$ components and $x_1$ has $p$ components, the dimensions are
$$1 \times (n - p) = (1 \times p)\,(p \times (n - p)) + 1 \times (n - p).$$

Medvegyev (CEU) Presession 2014 433 / 444

Lagrange's theorem

By the implicit differentiation rule
$$\frac{\partial x_1}{\partial x_2} = -\left(\frac{\partial F}{\partial x_1}\right)^{-1}\frac{\partial F}{\partial x_2},$$
where by definition $F(x_1, x_2) = 0$ is the system of equations formed by the constraints $\varphi_k$, $k = 1, 2, \ldots, p$. That is,
$$0 = \frac{\partial}{\partial x_1}\varphi_0(x^*)\,\frac{\partial x_1}{\partial x_2} + \frac{\partial}{\partial x_2}\varphi_0(x^*)\cdot I = -\frac{\partial}{\partial x_1}\varphi_0(x^*)\left(\frac{\partial F}{\partial x_1}\right)^{-1}\frac{\partial F}{\partial x_2} + \frac{\partial}{\partial x_2}\varphi_0(x^*).$$

Medvegyev (CEU) Presession 2014 434 / 444

Lagrange's theorem

Reordering and with a simple substitution, define
$$u^T \triangleq \frac{\partial}{\partial x_1}\varphi_0(x^*)\left(\frac{\partial F}{\partial x_1}\right)^{-1},$$
so that
$$\frac{\partial}{\partial x_2}\varphi_0(x^*) = \frac{\partial}{\partial x_1}\varphi_0(x^*)\left(\frac{\partial F}{\partial x_1}\right)^{-1}\frac{\partial F}{\partial x_2} = u^T\frac{\partial F}{\partial x_2}.$$
Obviously
$$u^T\frac{\partial F}{\partial x_1} = \frac{\partial}{\partial x_1}\varphi_0(x^*)\left(\frac{\partial F}{\partial x_1}\right)^{-1}\frac{\partial F}{\partial x_1} = \frac{\partial}{\partial x_1}\varphi_0(x^*).$$

Medvegyev (CEU) Presession 2014 435 / 444

Lagrange's theorem

That is, combining the two equations,
$$\varphi'_0(x^*) = \left(\frac{\partial}{\partial x_1}\varphi_0(x^*),\ \frac{\partial}{\partial x_2}\varphi_0(x^*)\right) = u^T\left(\frac{\partial F}{\partial x_1},\ \frac{\partial F}{\partial x_2}\right) = u^T F'(x^*) = u^T\begin{pmatrix}\varphi'_1(x^*) \\ \varphi'_2(x^*) \\ \vdots \\ \varphi'_p(x^*)\end{pmatrix} = \sum_{k=1}^{p}u_k\varphi'_k(x^*).$$

Medvegyev (CEU) Presession 2014 436 / 444

Lagrange's theorem

Lagrange's theorem is thus just an application of the implicit function theorem and Fermat's principle to
$$\psi(x_2) \triangleq \varphi_0(x_1(x_2), x_2) \quad \text{or to} \quad \psi(t) \triangleq \varphi_0(x_1(x_2^* + th), x_2^* + th).$$
Hence, using another version of the IFT, one can also prove the next generalization:

Medvegyev (CEU) Presession 2014 437 / 444

Lagrange's theorem

Theorem
Assume that $\varphi_k$, $k = 0, 1, \ldots, p$, are differentiable (one can just assume that $\varphi_k$, $k = 0, 1, \ldots, p$, are differentiable just at $x^*$ and that $\varphi_k$, $k = 1, 2, \ldots, p$, are continuous) and assume that $\varphi_0$ has a local minimum at $x^*$ on the set
$$X \triangleq \{x \mid \varphi_k(x) = 0,\ k = 1, \ldots, p\} \triangleq \{x \mid F(x) = 0\}.$$
Then there are multipliers
$$l = (\lambda_0, \lambda_1, \ldots, \lambda_p) \in \mathbb{R}^{p+1}$$
such that

1. $l \neq 0$,
2. $L'_x(l, x^*) = \sum_{k=0}^{p}\lambda_k\varphi'_k(x^*) = 0$.

If the gradients $\varphi'_i(x^*)$, $i = 1, \ldots, p$, are linearly independent then $\lambda_0 = 1$ is possible.

Medvegyev (CEU) Presession 2014 438 / 444

Homework

Study the following problems

$$y \to \min, \quad y^3 - x^2 = 0$$
$$3x^2 - 2x^3 \to \min, \quad (3 - x)^3 - y^2 = 0$$
$$x^2 - y^2 \to \max, \quad x^2 + y^2 = 1$$
$$x^2 - y^2 \to \min, \quad x^2 + y^2 = 1$$
$$x^2 + y^2 \to \min, \quad (x - 1)^3 - y^2 \geq 0$$
$$\ln x + \ln y \to \max, \quad x^2 + y^2 = 1, \ x, y \geq 0$$

Medvegyev (CEU) Presession 2014 439 / 444

Minimal norm solution

Assume that $A$ is fat (more columns than rows) and full rank. We are looking for the solution of
$$\|x\| \to \min, \quad Ax = b.$$
As $A$ is full rank one can apply the Lagrange theorem with $\lambda_0 = 1$ to this convex optimization problem:
$$L(l, x) = \|x\|^2 + l^T(Ax - b),$$
$$L'_x(l, x) = 2x^T + l^T A = 0,$$
$$x^T = -l^T A/2, \qquad x = -\frac{1}{2}A^T l.$$

Medvegyev (CEU) Presession 2014 440 / 444

Minimal norm solution

Substituting back,
$$-\frac{1}{2}AA^T l = b, \qquad l = -2\left(AA^T\right)^{-1}b,$$
$$x = A^T\left(AA^T\right)^{-1}b.$$
One should show that $AA^T$ is invertible. If $AA^T z = 0$ for some $z$ then $z^T AA^T z = 0$, which is $\|A^T z\|^2 = 0$. Hence $A^T z = 0$. As $A$ is full rank and fat, $z = 0$.

Medvegyev (CEU) Presession 2014 441 / 444
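A NumPy sketch checking the minimal norm formula on randomly generated data (the matrix and right-hand side are assumed, illustration only):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 5))                  # a "fat" full-rank matrix (3 x 5)
b = rng.standard_normal(3)

x_min_norm = A.T @ np.linalg.solve(A @ A.T, b)   # x = Aᵀ (A Aᵀ)⁻¹ b

print(np.allclose(A @ x_min_norm, b))            # True: it solves Ax = b
print(np.linalg.norm(x_min_norm),
      np.linalg.norm(np.linalg.pinv(A) @ b))     # same minimal norm as the pseudoinverse solution
```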

Minimal norm solution

Recall that the least squares solution of the equation $Ax = b$ is the solution of the problem
$$\|Ax - b\| \to \min.$$
It is an unconstrained minimization of a convex function. Using that
$$\frac{d}{dx}(x, Bx) = (Bx)^T + \left(B^T x\right)^T = x^T B^T + x^T B = x^T\left(B + B^T\right),$$

Medvegyev (CEU) Presession 2014 442 / 444

Minimal norm solution

$$\frac{\partial}{\partial x}\|Ax - b\|^2 = \frac{\partial}{\partial x}(Ax - b, Ax - b) = \frac{\partial}{\partial x}\Big((Ax, Ax) - (b, Ax) - (Ax, b) + (b, b)\Big)$$
$$= \frac{\partial}{\partial x}\Big(\left(x, A^TAx\right) - 2\left(x, A^Tb\right) + (b, b)\Big) = \left(2A^TAx - 2A^Tb\right)^T = 0.$$

Medvegyev (CEU) Presession 2014 443 / 444

Minimal norm solution

That is, $A^TAx = A^Tb$. If $A$ is skinny (more rows than columns) and full rank then $\left(A^TA\right)^{-1}$ exists and
$$x = \left(A^TA\right)^{-1}A^Tb.$$

Medvegyev (CEU) Presession 2014 444 / 444
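A NumPy sketch of the least squares formula on randomly generated data (the matrix and right-hand side are assumed, illustration only):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((6, 3))                  # a "skinny" full-rank matrix (6 x 3)
b = rng.standard_normal(6)

x_normal_eq = np.linalg.solve(A.T @ A, A.T @ b)  # x = (AᵀA)⁻¹ Aᵀ b from the normal equations
x_lstsq, *_ = np.linalg.lstsq(A, b, rcond=None)  # library least-squares solver

print(np.allclose(x_normal_eq, x_lstsq))         # True
```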