Mathematical Methods III

(partial) course notes
Daniel Litim

Contents

1 Fourier series
  1.1 Simple harmonic motion
  1.2 Periodic functions and Fourier coefficients
  1.3 Even and odd functions
  1.4 Parseval's theorem
2 Ordinary differential equations
  2.1 Introduction
  2.2 Solutions
  2.3 Exponential growth and decay
  2.4 First-order, separable equations
  2.5 Linear equations
  2.6 Numerical integration
    2.6.1 The Euler method
    2.6.2 The Heun method
    2.6.3 Practical methods
  2.7 2nd-order, linear homogeneous ODEs with constant coefficients
  2.8 Damped oscillators
    2.8.1 Weak under-damping
    2.8.2 Critical damping
    2.8.3 Over-damping
  2.9 2nd-order, linear inhomogeneous ODEs with constant coefficients in the homogeneous part
  2.10 Forced oscillators
    2.10.1 Non-resonant case
    2.10.2 Resonance
3 Some linear algebra
  3.1 Review of matrix properties
  3.2 Matrices and transformations
  3.3 Determinants
  3.4 Co-ordinate transformations
  3.5 Eigenvalues and eigenvectors
  3.6 Diagonalising a matrix
  3.7 Applications of diagonalisation
4 Dirac's delta-function


1 Fourier series

1.1 Simple harmonic motion

θ = ωt,   ω = const.

x = A cos θ = A cos ωt,
y = A sin θ = A sin ωt,
z = x + iy = A e^{iωt}.

A = amplitude, T = period:  T = 2π/ω.

For waves that vary in space, we use a different notation,

y = A sin kx = A sin(2πx/λ),   k = 2π/λ,

where k is the wavenumber and λ is the wavelength.

For waves travelling with speed v we have

y = A sin k(x − vt) = A sin(kx − ωt),   ω = kv.

Wavelength at fixed time: λ = 2π/k. Period at fixed position: T = 2π/ω = λ/v.

1.2 Periodic functions and Fourier coefficients

Any periodic function

f(x) = f(x + p),

where p is the period, or wavelength, can be expressed as a sum of sine and cosine waves of wavelength

λₙ = p/n,

where n = 1 is the fundamental.


For a function of period 2π, we write

f(x) = ½ a₀ + Σ_{n=1}^{∞} aₙ cos nx + Σ_{n=1}^{∞} bₙ sin nx,

where

a₀ = (1/π) ∫_{−π}^{π} f(x) dx,
aₙ = (1/π) ∫_{−π}^{π} f(x) cos nx dx,
bₙ = (1/π) ∫_{−π}^{π} f(x) sin nx dx.

More generally, for a function of period 2l, we write

f(x) = ½ a₀ + Σ_{n=1}^{∞} aₙ cos(nπx/l) + Σ_{n=1}^{∞} bₙ sin(nπx/l),

where

a₀ = (1/l) ∫_{−l}^{l} f(x) dx,
aₙ = (1/l) ∫_{−l}^{l} f(x) cos(nπx/l) dx,
bₙ = (1/l) ∫_{−l}^{l} f(x) sin(nπx/l) dx.
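As a check on these formulas, the coefficients can be computed numerically. The following Python sketch is not part of the original notes; the midpoint-rule integrator and the square-wave test function (whose series is known to be (4/π)(sin x + ⅓ sin 3x + …)) are illustrative choices.

```python
import math

def fourier_coeffs(f, n, steps=20000):
    """Approximate a_n and b_n for a function of period 2*pi
    using the midpoint rule on [-pi, pi]."""
    h = 2 * math.pi / steps
    a = b = 0.0
    for k in range(steps):
        x = -math.pi + (k + 0.5) * h
        a += f(x) * math.cos(n * x) * h
        b += f(x) * math.sin(n * x) * h
    return a / math.pi, b / math.pi

# Square wave: f(x) = +1 for 0 < x < pi, -1 for -pi < x < 0.
square = lambda x: 1.0 if x > 0 else -1.0

a1, b1 = fourier_coeffs(square, 1)
a2, b2 = fourier_coeffs(square, 2)
print(b1, 4 / math.pi)   # b_1 should be close to 4/pi
```

Since the square wave is odd, all the aₙ come out (numerically) zero, and the even-n bₙ vanish as well.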

1.3 Even and odd functions

An even function is one that is symmetric about the y-axis: f(−x) = f(x).


An odd function is one that is anti-symmetric about the origin: f (−x) = −f (x).

Any function can be split into even and odd parts like this:

f(x) = ½ [f(x) + f(−x)] + ½ [f(x) − f(−x)],

where the first term is even and the second odd.
Even functions contain only a constant plus cosine terms in their Fourier series; odd functions contain only sine terms.

1.4 Parseval’s theorem

Parseval’s theorem relates the mean square of a function to the Fourier coefficients:

(1/2π) ∫_{−π}^{π} f(x)² dx = ¼ a₀² + ½ Σ_{n=1}^{∞} aₙ² + ½ Σ_{n=1}^{∞} bₙ².

In physical terms, this shows how the mean power in a wave is split between the various harmonics.
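Parseval's theorem can be checked numerically for the ±1 square wave, whose only non-zero coefficients are bₙ = 4/(nπ) for odd n. The short Python sketch below is illustrative and not part of the notes: the mean square of the square wave is exactly 1, and the coefficient sum should converge to the same value.

```python
import math

# Mean square of the ±1 square wave over one period: f(x)^2 = 1 everywhere.
mean_square = 1.0

# Right-hand side of Parseval: (1/2) * sum of b_n^2 with b_n = 4/(n*pi), n odd.
rhs = 0.5 * sum((4 / (n * math.pi)) ** 2 for n in range(1, 200001, 2))
print(mean_square, rhs)   # the partial sum approaches 1
```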


2 Ordinary differential equations

2.1 Introduction

A differential equation (DE) is an equation involving derivatives of functions, e.g.

v(t) :  F = m dv/dt,  or  F = m v′.

Ordinary DEs (ODEs) contain derivatives with respect to a single variable, e.g.

q(t) :  I = dq/dt.

Partial DEs (PDEs) have derivatives with respect to two or more variables, e.g.

y(x, t) :  ∂y/∂t + α ∂²y/∂x² = 0.

The order of the equation is equal to that of the highest derivative that it contains, e.g.

y(x) :  y′ + x y² = 1    (1st-order)

y(x) :  y′′ + x y = 1    (2nd-order)

Linear homogeneous equations are those where each term is a linear function of the unknown and its derivatives:

y(x) :  a₀(x) y + a₁(x) y′ + a₂(x) y′′ + … + aₙ(x) y⁽ⁿ⁾ = 0,

where y⁽ⁿ⁾ is the nth derivative of y. E.g. the unforced pendulum

θ(t) :  θ′′ + ω² θ = 0.

Linear inhomogeneous equations in addition contain a non-zero function of the independent variable:

y(x) :  a₀(x) y + a₁(x) y′ + a₂(x) y′′ + … + aₙ(x) y⁽ⁿ⁾ = b(x),

where y⁽ⁿ⁾ is the nth derivative of y. E.g. the forced pendulum

θ(t) :  θ′′ + ω² θ = A cos Ωt.

The term linear on its own can mean either of the above, or just linear homogeneous.
Non-linear equations contain at least one non-linear term involving the unknown, e.g. the non-linear pendulum

θ(t) :  θ′′ + ω² sin θ = 0.

Examples:

Exponential growth (or decay):

x(t) :  x′ = k x

— linear, homogeneous, first-order ODE.

Forced, damped oscillator:

x(t) :  x′′ + b x′ + ω² x = γ sin φt

— linear, inhomogeneous, second-order ODE.

Wave equation:

E(x, t) :  ∂²E/∂t² = c² ∂²E/∂x²


2.2 Solutions

A solution of a DE is a differentiable function that satisfies the defining equation. For example, the equation

T(t) :  T′ = −k(T − T₀)

has the solution

T = T₀ + A e^{−kt},

where A is a constant.
The general solution of a linear, first-order ODE has one undetermined constant, in this case A.
A particular solution comes about by choosing a value for the constant, e.g.

T = T₀ + 30 K e^{−kt}.

A singular solution is one that is constant (an equilibrium). In this case

T = T₀.

An initial value problem (IVP) seeks a solution for which the unknown takes a given value at some time, e.g.

T(0) = 2T₀  ⇒  T = T₀ (1 + e^{−kt}).

If a first-order ODE can be put into the form y′ = f(x, y) with f sufficiently well-behaved, then there exists a unique solution to any given IVP.

2.3 Exponential growth and decay

The general 1st-order, linear equation with constant coefficients has the form

y(x) :  y′ + a y = b,

where a and b are constant. The solution should be memorised. It is

y = b/a + C e^{−ax},

where C is an arbitrary integration constant. This is exponential growth (a < 0) or decay (a > 0).

2.4 First-order, separable equations

These are equations of the form

x(t) : x′ = g(t) h(x).

To solve, separate the variables,

dx/h(x) = g(t) dt,

and integrate:

H(x) = G(t) + c,

where dH/dx = 1/h(x) and dG/dt = g(t).


For example,

y(x) :  x y′ = y + 1

becomes

dy/(y + 1) = dx/x,

which integrates to give

ln(y + 1) = ln x + const = ln x + ln a = ln(ax),

or

y = −1 + ax.

Orthogonal curves to a given function have gradients that, when multiplied by the original, give −1. In the above case

(y + 1) dy = −x dx  ⇒  x² + (y + 1)² = const.

The radial lines in the diagram show the solutions to the original equation, and the circles show the orthogonal curves.

2.5 Linear equations

First-order linear ODEs have the form

x(t) : x′ + p(t) x = q (t).

The method of solution is called Variation of Parameters. First solve the homogeneous equation

X(t) :  X′ + p(t) X = 0

by separation of variables:

dX/X = −p(t) dt,
ln X = −I + a,  where  I(t) = ∫ p(t) dt,
X = A e^{−I},

where A = e^a is an arbitrary constant.
Next try a solution of the form

x = A(t) e^{−I},

where the constant (or parameter) A has been turned into a variable.
The original inhomogeneous equation becomes

A′ = q(t) e^{I},

with solution

A = ∫ q(t) e^{I} dt + c.


The factor e^{I} is known as the integrating factor.

Putting all this together we get the following. The solution of the linear, inhomogeneous equation

x(t) :  x′ + p(t) x = q(t)

is

x e^{I} = ∫ q(t) e^{I} dt + c,  where  I = ∫ p(t) dt.

Note:
• When determining I it is not necessary to add an integration constant, as it will cancel out. However, when doing the integral of q e^{I} an integration constant is required in order to obtain the general solution.

• Be careful of the sign of I, as it is easy to get this wrong if the p(t) term is on the wrong side of the original equation.

• After solving for x, the term containing the integration constant becomes c e^{−I}, i.e. it is not constant.

For example:

x(t) :  x′ = (sin t)/t − 2x/t.

Here p(t) = +2/t. Hence the integrating factor is given by

I = ∫ (2/t) dt = ln t²;   e^{I} = t².

The solution is

x t² = ∫ ((sin t)/t) t² dt = ∫ t sin t dt = −t cos t + sin t + k,

so

x = −(cos t)/t + (sin t)/t² + k/t².
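The solution can be verified numerically by comparing a finite-difference derivative with the right-hand side of the ODE. A small Python sketch (not from the notes; the check point t = 2 and the choice k = 0 are arbitrary):

```python
import math

def x(t):
    # Particular solution with k = 0: x = -cos(t)/t + sin(t)/t^2
    return -math.cos(t) / t + math.sin(t) / t ** 2

t, h = 2.0, 1e-6
lhs = (x(t + h) - x(t - h)) / (2 * h)      # numerical x'(t)
rhs = math.sin(t) / t - 2 * x(t) / t       # right-hand side of the ODE
print(lhs, rhs)                            # the two should agree closely
```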

2.6 Numerical integration

Consider the initial value problem

x′ = f(t, x);   x(t₀) = x₀.

When we cannot solve this exactly, we can compute an approximate solution as follows.

2.6.1 The Euler method

From the definition of a derivative, dx = f(t, x) dt. Provided dt is small, this gives an approximate expression for the variation in x. Let us take equal steps (this is not strictly necessary) dt = h. Then

t_{n+1} = t_n + h,

x_{n+1} = x_n + h f(t_n, x_n).

This is known as a predict-evaluate (PE) method. It is 1st-order, which means that it gives the correct answer up to terms in h. However, this is not very accurate and it should not be used.


[Figure: the Euler method.]

2.6.2 The Heun method

A better method makes use of a corrector step that averages the value of the gradient at the beginning and end of the timestep:

t_{n+1} = t_n + h,
x^E_{n+1} = x_n + h f(t_n, x_n),
x_{n+1} = x_n + (h/2) [f(t_n, x_n) + f(t_{n+1}, x^E_{n+1})].

This predict-evaluate-correct-evaluate (PECE) method is 2nd-order, i.e. correct to terms in h². Although it requires two evaluations per step, it is much more accurate than the Euler method.
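The two schemes are easy to compare in code. A minimal Python sketch (not from the notes; the test problem x′ = x, x(0) = 1, with exact solution eᵗ, is an illustrative choice):

```python
import math

def euler(f, t, x, h, steps):
    for _ in range(steps):
        x += h * f(t, x)              # predict-evaluate
        t += h
    return x

def heun(f, t, x, h, steps):
    for _ in range(steps):
        k1 = f(t, x)                  # evaluate
        xe = x + h * k1               # predict (Euler step)
        k2 = f(t + h, xe)             # evaluate at the predicted point
        x += h * (k1 + k2) / 2        # correct with the averaged gradient
        t += h
    return x

f = lambda t, x: x                    # x' = x, exact solution exp(t)
steps, h = 100, 0.01
err_euler = abs(euler(f, 0.0, 1.0, h, steps) - math.e)
err_heun = abs(heun(f, 0.0, 1.0, h, steps) - math.e)
print(err_euler, err_heun)            # Heun is far more accurate
```

Halving h roughly halves the Euler error but quarters the Heun error, consistent with the stated orders.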

2.6.3 Practical methods

The most common method in practical use is a 4th-order Runge-Kutta scheme that requires 4 evaluations per step. In MATLAB it is implemented in the routine ode45. It performs well in most situations, but other methods exist to handle badly-behaved functions. An example use of ode45 is given in the routine first.m available on the course web-page.
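For reference, the classic fixed-step 4th-order Runge-Kutta step can be sketched in a few lines of Python (an illustrative sketch, not the notes' code; ode45 itself uses an adaptive embedded pair rather than this fixed-step version):

```python
import math

def rk4_step(f, t, x, h):
    # Four gradient evaluations per step, combined with weights 1, 2, 2, 1.
    k1 = f(t, x)
    k2 = f(t + h / 2, x + h * k1 / 2)
    k3 = f(t + h / 2, x + h * k2 / 2)
    k4 = f(t + h, x + h * k3)
    return x + h * (k1 + 2 * k2 + 2 * k3 + k4) / 6

# Integrate x' = x from t = 0 to t = 1 with 10 steps of size 0.1.
x, h = 1.0, 0.1
for i in range(10):
    x = rk4_step(lambda t, y: y, i * h, x, h)
err = abs(x - math.e)
print(err)   # tiny even with a coarse step, as expected for a 4th-order method
```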


2.7 2nd-order, linear homogeneous ODEs with constant coefficients

ay′′ + by′ + cy = 0, a, b, c = const.

Try exponentials, y = e^{λt}. Then we get the characteristic equation

a λ² + b λ + c = 0,

with solutions

λ± = (−b ± √(b² − 4ac)) / (2a).

Because we can multiply the solutions of linear homogeneous equations by a constant and add them together, this leads to the general solution

y = C₊ e^{λ₊t} + C₋ e^{λ₋t},

except when the two roots are equal. In that case we take the solution C e^{λt} and do Variation of Parameters by turning C into a variable C(t). This yields the solution C = A t + B, where A and B are new integration constants.
Hence when a, b & c are real (as they will be in most physical applications) there are three distinct kinds of solution:

λ = α, β      ⇒  y = A e^{αt} + B e^{βt},
λ = α ± βi    ⇒  y = (A cos βt + B sin βt) e^{αt},
λ = α, α      ⇒  y = (At + B) e^{αt},

where α & β are real and A & B are integration constants (which in general may be complex).
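The three cases can be explored with a small root-finder for the characteristic equation. A Python sketch using complex arithmetic (not part of the notes; the example coefficients are illustrative):

```python
import cmath

def char_roots(a, b, c):
    """Roots of the characteristic equation a*l^2 + b*l + c = 0."""
    disc = cmath.sqrt(b * b - 4 * a * c)
    return (-b + disc) / (2 * a), (-b - disc) / (2 * a)

print(char_roots(1, 1, -6))   # real, distinct roots: 2 and -3
print(char_roots(1, 2, 5))    # complex pair: -1 ± 2i (damped oscillation)
print(char_roots(1, 2, 1))    # repeated root: -1, -1 (the (At + B) case)
```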

2.8 Damped oscillators

The equation for a damped harmonic oscillator is

y′′ + k y′ + ω² y = 0,

where k is the damping coefficient and ω is the natural frequency in the absence of damping. The roots of the characteristic equation are

λ± = ½ (−k ± √(k² − 4ω²)).

2.8.1 Weak under-damping

means k² < 4ω², hence λ± = −k/2 ± Δi, where Δ = √(ω² − k²/4). Then

y = e^{−kt/2} (A cos Δt + B sin Δt).

2.8.2 Critical damping

means k² = 4ω², hence λ± = −k/2. Then

y = e^{−kt/2} (At + B).

This gives the fastest possible decay in amplitude.


2.8.3 Over-damping

means k² > 4ω², hence λ± = −k/2 ± Δ, where Δ = √(k²/4 − ω²). Then

y = A e^{−(k/2 + Δ)t} + B e^{−(k/2 − Δ)t}.

The second term here decays more slowly than in the case of critical damping.

[Figure: solutions of y′′ + k y′ + y = 0 for different values of the damping k. Panels: simple harmonic motion (k = 0); weak damping (k = 0.1, 0.3, 1.0); critical damping (k = 2); over-damping (k = 3, 5, 10).]
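The under-damped solution can be checked against the ODE numerically with finite differences. A Python sketch (not from the notes; the values k = 0.1, ω = 1 match the weak-damping panel, and the check point is arbitrary):

```python
import math

k, w = 0.1, 1.0
d = math.sqrt(w * w - k * k / 4)     # Delta for the under-damped case

def y(t):
    # Under-damped solution with A = 1, B = 0.
    return math.exp(-k * t / 2) * math.cos(d * t)

t, h = 1.5, 1e-4
yp = (y(t + h) - y(t - h)) / (2 * h)                # numerical y'
ypp = (y(t + h) - 2 * y(t) + y(t - h)) / h ** 2     # numerical y''
residual = ypp + k * yp + w * w * y(t)
print(residual)   # close to zero: the solution satisfies the ODE
```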

2.9 2nd-order, linear inhomogeneous ODEs with constant coefficients in the homogeneous part

a y′′ + b y′ + c y = f(t),   a, b, c = const.

Let the solution of the homogeneous equation be y_c — the complementary function. It contains the 2 undetermined integration constants.
Next look for any solution y_p of the full equation. Then the general solution is

y = y_c + y_p.

If f(t) = e^{dt} Pₙ(t) then the particular integral has the form

y_p = e^{dt} Qₙ(t)      if d ≠ λ₊ and d ≠ λ₋,
y_p = t e^{dt} Qₙ(t)    if d = λ₊ ≠ λ₋ (or d = λ₋ ≠ λ₊),
y_p = t² e^{dt} Qₙ(t)   if d = λ₊ = λ₋,

where λ± are the roots of the characteristic equation and Pₙ and Qₙ are polynomials of degree n.
Note: because the equation is linear, we are allowed to break f(t) up into pieces and then add together all the particular integrals for each part. If f(t) is periodic then we can do this using Fourier series.

2.10 Forced oscillators

The equation for an undamped harmonic oscillator with periodic forcing is

y′′ + ω² y = F cos ω′t.


2.10.1 Non-resonant case

means ω ≠ ω′. The solution is

y = F/(ω² − ω′²) cos ω′t + B sin ωt + C cos ωt.

To get the sense of the solution, consider the case y(0) = 0, y′(0) = 0. Then

y = F/(ω² − ω′²) (cos ω′t − cos ωt) = 2F/(ω² − ω′²) sin(½(ω − ω′)t) sin(½(ω + ω′)t).

When ω ≈ ω′ we get beating. The first sine term has low frequency (long period) and can be thought of as modulating the amplitude of the wave. The second sine term has high frequency (short period) and oscillates within the amplitude set by the first (see diagram below).

2.10.2 Resonance

means ω = ω′. In this case,

y = (F t/(2ω) + B) sin ωt + C cos ωt.

The amplitude grows linearly with time and tends to infinity as t → ∞. This can be useful (tuning radios) or destructive (breaking bridges). In practice either

• the system breaks due to extreme amplitudes,
• damping limits the amplitude of the oscillations,
• non-linear terms limit the amplitude of the oscillations, or
• the equations are invalid for ω = ω′ and instead take some special resonant form.

[Figure: solutions of y′′ + y = A cos wt for different values of the forcing frequency w. Panels: simple harmonic motion (a = 0); non-resonant forcing, w = 2 (a = 1, 3, 10); beating (w = 1.2); resonance (w = 1).]


3 Some linear algebra

3.1 Review of matrix properties

A matrix is a rectangular array of numbers (or functions),

A = | A₁₁  A₁₂  …  A₁ₙ |
    | A₂₁  A₂₂  …  A₂ₙ |
    |  …    …   …   …  |
    | Aₘ₁  Aₘ₂  …  Aₘₙ |

an m × n matrix.

If A is an l × m matrix and B is an m × n matrix, then the product C = AB is defined as

C_ik = Σ_{j=1}^{m} A_ij B_jk

— this is often described by the row × column rule.

Note: matrix algebra is non-commutative, i.e. in general AB ≠ BA. In fact BA does not exist unless l = n.

We will mainly be concerned with square n × n matrices.
The identity matrix is

Iₙ = | 1  0  …  0 |
     | 0  1  …  0 |
     | …  …  …  … |
     | 0  0  …  1 |

(Iₙ)_ij = δ_ij = 1 if i = j, 0 if i ≠ j.

The transpose, Aᵀ, of a matrix is obtained by swapping rows and columns. If A is m × n then Aᵀ is n × m, with (Aᵀ)_ij = A_ji. Also

(AB)ᵀ = BᵀAᵀ.

For square matrices it is often possible to find an inverse matrix A⁻¹ such that

A⁻¹A = I = A A⁻¹,

and

(AB)⁻¹ = B⁻¹A⁻¹.

An orthogonal matrix is one for which Aᵀ = A⁻¹.

3.2 Matrices and transformations

We can think of square matrices as transforming one vector into another. For example, take

A = | 2  −1 |
    | 1   1 |

Then

A (1, 0)ᵀ = (2, 1)ᵀ,   A (0, 1)ᵀ = (−1, 1)ᵀ.


There is a special class of linear transformations that preserve lengths and angles. These are called orthogonal. For every vector x = (x, y)ᵀ that gets mapped into X = (X, Y)ᵀ we must then have

X² + Y² = x² + y²,  i.e.  XᵀX = xᵀx,  where Xᵀ = (X  Y), xᵀ = (x  y).

But X = A x, so Xᵀ = (A x)ᵀ = xᵀAᵀ. Therefore XᵀX = xᵀ(AᵀA)x.

If this is to be true for all vectors then we need AᵀA = I, which is the definition of an orthogonal matrix. The converse also holds: if A is orthogonal then so is the corresponding co-ordinate transformation. This result holds also in 3 or more dimensions.

3.3 Determinants

In general a matrix transformation will cause stretching and rotation of vectors. The determinant of A, det(A) or |A|, measures the change in the area of the unit square (or volume of the unit cube, …) under the transformation.

• A = A₁₁:  det(A) = A₁₁.

• A = | A₁₁  A₁₂ |   det(A) = A₁₁A₂₂ − A₁₂A₂₁.
      | A₂₁  A₂₂ |

• For the 3 × 3 case,

  det(A) = A₁₁ det | A₂₂  A₂₃ | − A₁₂ det | A₂₁  A₂₃ | + A₁₃ det | A₂₁  A₂₂ |
                   | A₃₂  A₃₃ |           | A₃₁  A₃₃ |           | A₃₁  A₃₂ |

In general det(A) = Σⱼ A₁ⱼ C₁ⱼ, where the C₁ⱼ are the cofactors, equal to (−1)^{1+j} times the determinant of the matrix that results from crossing out the corresponding row and column.

If the determinant of A is non-zero then the inverse exists and is equal to

A⁻¹ = (1/det(A)) Cᵀ;   (A⁻¹)_ij = (1/det(A)) C_ji.

• inv(A₁₁) = 1/A₁₁.

• inv | A₁₁  A₁₂ | = (1/det A) |  A₂₂  −A₁₂ |
      | A₂₁  A₂₂ |            | −A₂₁   A₁₁ |

• For higher dimensions use MATLAB or row-reduction (e.g. Boas §3.2).
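The 2 × 2 inverse formula is easy to verify in code. A Python sketch with hand-rolled 2 × 2 helpers (not from the notes; the matrix is the example from section 3.2):

```python
def mat_mul(P, Q):
    # Product of two 2x2 matrices (row x column rule).
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv2(A):
    # Cofactor formula: swap the diagonal, negate the off-diagonal,
    # divide by the determinant.
    det = A[0][0] * A[1][1] - A[0][1] * A[1][0]
    return [[A[1][1] / det, -A[0][1] / det],
            [-A[1][0] / det, A[0][0] / det]]

A = [[2.0, -1.0], [1.0, 1.0]]
print(mat_mul(A, inv2(A)))   # the identity matrix, as required
```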

The determinant of A is zero if and only if the rows (and columns) of A are linearly dependent — i.e. some linear combination of them is equal to a vector of zeros. For example, both of the following matrices have zero determinant:

A = | 2  −4 |     2 (2, 1)ᵀ + (−4, −2)ᵀ = (0, 0)ᵀ;   (2, −4) − 2 (1, −2) = (0, 0).
    | 1  −2 |

A = | 1  1  0 |     (1, 1)ᵀ − (1, 0)ᵀ − (0, 1)ᵀ = (0, 0)ᵀ.
    | 1  0  1 |


3.4 Co-ordinate transformations

The above figure shows two different views of the action of matrices on vectors. On the left we have our original interpretation as a linear transformation of one vector into another:

X = A x;   | X | = | 2  −1 | | x |
           | Y |   | 1   1 | | y |

The right panel shows an alternative view, in which we think of the vector as remaining fixed and the transformation as representing a change in the co-ordinate system that we use to represent the vector:

x = A⁻¹X;   | x | = |  1/3  1/3 | | X | = X |  1/3 | + Y | 1/3 |
            | y |   | −1/3  2/3 | | Y |     | −1/3 |     | 2/3 |

From this we see that the columns of the inverse matrix represent the basis vectors of the newco-ordinate system.

3.5 Eigenvalues and eigenvectors

In any transformation, there are often vectors whose direction is unaltered, and it can be very useful to use these as the basis of a new co-ordinate system. They satisfy

A e = μ e,

where e is an eigenvector and μ an eigenvalue. Rearranging,

A e − μ I e = 0,

(A − μ I) e = 0.   (∗)

If e is non-zero then this means that the columns of A − μ I are linearly dependent. Hence

det(A − μ I) = 0.

Example:

A = |  5  −2 |      A − μ I = | 5 − μ    −2   |
    | −2   2 |                |  −2    2 − μ |


det(A − μ I) = 0  ⇒  μ = 1 or 6.

To find the eigenvectors, substitute μ back into equation (∗).

μ = 1, e₁ = (x, y)ᵀ:

| 5 − 1    −2   | | x | = | 0 |,   i.e.   |  4  −2 | | x | = | 0 |
|  −2    2 − 1 | | y |   | 0 |           | −2   1 | | y |   | 0 |

Hence

4x − 2y = 0,   −2x + y = 0   — linearly dependent, as desired  ⇒  e₁ ∝ (1, 2)ᵀ.

μ = 6, e₆ = (x, y)ᵀ:

| 5 − 6    −2   | | x | = | 0 |,   i.e.   | −1  −2 | | x | = | 0 |
|  −2    2 − 6 | | y |   | 0 |           | −2  −4 | | y |   | 0 |

−x − 2y = 0,   −2x − 4y = 0   — linearly dependent  ⇒  e₆ ∝ (−2, 1)ᵀ.

It is useful (but not necessary) to normalise the eigenvectors to unit length:

e₁ = (1/√5) (1, 2)ᵀ,   e₂ = (1/√5) (−2, 1)ᵀ.

3.6 Diagonalising a matrix

If we use the eigenvectors as the basis of a new co-ordinate system then the matrix that describes the transformation will be diagonal. Write x = X e₁ + Y e₂, where X and Y are the co-ordinates in the new system. Then

A x = A (X e₁ + Y e₂) = X A e₁ + Y A e₂ = X μ₁ e₁ + Y μ₂ e₂ = μ₁ (X e₁) + μ₂ (Y e₂).

In other words, in the new co-ordinate system A is equivalent to

D = | μ₁   0 |
    |  0  μ₂ |

where D has the eigenvalues down the main diagonal.
In matrix notation, the transformation can be expressed as x = C X or X = C⁻¹x, where C = (e₁, e₂) is the matrix of eigenvectors. From this, it is easy to show that D = C⁻¹A C, or A = C D C⁻¹.
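The relation D = C⁻¹ A C can be confirmed numerically for the example of section 3.5, where C is orthogonal so C⁻¹ = Cᵀ. A Python sketch (illustrative, not part of the notes):

```python
import math

A = [[5.0, -2.0], [-2.0, 2.0]]
s = 1 / math.sqrt(5)
C = [[s, -2 * s], [2 * s, s]]                    # columns: unit eigenvectors
Ct = [[C[0][0], C[1][0]], [C[0][1], C[1][1]]]    # transpose = inverse here

def mat_mul(P, Q):
    # Product of two 2x2 matrices.
    return [[sum(P[i][k] * Q[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

D = mat_mul(Ct, mat_mul(A, C))
print(D)   # close to [[1, 0], [0, 6]]: the eigenvalues on the diagonal
```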

Example:

A = | 1   2   0 |
    | 2  −1   2 |
    | 0   2   1 |

det(A − μ I) = 0  ⇒  μ = −3, 1 or 3.


e₋₃ = (1/√6) (1, −2, 1)ᵀ;   e₁ = (1/√2) (1, 0, −1)ᵀ;   e₃ = (1/√3) (1, 1, 1)ᵀ;

C = |  1/√6   1/√2   1/√3 |
    | −2/√6    0     1/√3 |
    |  1/√6  −1/√2   1/√3 |

3.7 Applications of diagonalisation

Example 1: Consider the conic 5x² − 4xy + 2y² = 30. This can be written

xᵀA x = 30,   where x = (x, y)ᵀ  and  A = |  5  −2 |
                                          | −2   2 |

A has eigenvalues μ₁ = 1, μ₂ = 6 and eigenvectors e₁ = (1/√5)(1, 2)ᵀ, e₂ = (1/√5)(−2, 1)ᵀ.

Recall that X = C⁻¹x and A = C D C⁻¹, where C is the matrix of eigenvectors and D is the diagonal matrix of eigenvalues. Hence

xᵀ A x = xᵀ C D C⁻¹ x = Xᵀ D X.

Therefore, in the new co-ordinate system, the equation for the conic becomes X² + 6Y² = 30.
Where the eigenvectors are perpendicular and of unit length, as here, the transformation is orthogonal and corresponds to a rotation and/or reflection:

CᵀC = (e₁ e₂)ᵀ(e₁ e₂) = | e₁ᵀ | (e₁ e₂) = | e₁ᵀe₁  e₁ᵀe₂ | = | 1  0 |
                        | e₂ᵀ |            | e₂ᵀe₁  e₂ᵀe₂ |   | 0  1 |

Example 2: Coupled oscillators. Consider the following system of masses, m, attached to light springs of spring constant k. Let the displacements from the equilibrium positions be x and y.

By considering the tension in each spring, it can be shown that the acceleration of each mass is

m x″ = k (y − x) − k x = −2k x + k y,
m y″ = −k y − k (y − x) = k x − 2k y,

or, in matrix form,

| x″ | = (k/m) | −2   1 | | x |
| y″ |         |  1  −2 | | y |

This matrix has eigenvalues μ₁ = −1, μ₂ = −3 and eigenvectors e₁ = (1/√2)(1, 1)ᵀ, e₂ = (1/√2)(1, −1)ᵀ. In these new co-ordinates, using e₁ and e₂ as basis vectors,

| X″ | = (k/m) | −1   0 | | X |
| Y″ |         |  0  −3 | | Y |

Now the system can easily be solved to yield X = X₀ sin(ω₁t + φ₁) and Y = Y₀ sin(ω₂t + φ₂), where ω₁ = √(k/m) and ω₂ = √(3k/m).
The motion has been split into two independent harmonic oscillations. The eigenvectors describe the modes of oscillation. In the first both masses move together in the same direction, while in the second they move in opposite directions.


4 Dirac’s delta-function

Definition

P.A.M. Dirac introduced an important new type of function into physics, the so-called δ-function. It is defined as follows: δ(x) vanishes identically for all x ≠ 0; at x = 0 it is infinite, in such a way that

δ(x) = 0 for x ≠ 0,   and   ∫_{−∞}^{∞} dx δ(x) = 1.

One may imagine the δ-function as describing e.g. the charge density of the electron: experimentally, the electron has no spatial extension and is therefore well approximated by a point particle. On the other hand, the electron has an electric charge e. Hence its charge density ρ(x), i.e. its charge per volume, is infinite at the point where the electron is located, and zero everywhere else: ρ(x) = e δ(x).

Another important use for δ-functions comes from differential equations, where they are used to describe a strong force acting during a very short period of time.

Now let's return to the δ-function itself and its basic properties. The definition is given via an integral,

∫_{−∞}^{∞} dx δ(x − y) f(x) = f(y)      (1)

for sufficiently "smooth" (differentiable, with at most finitely many "jumps") test functions f(x). Therefore, integrals involving a δ-function are particularly easy! For example,

∫_{−∞}^{∞} dx δ(x − 1) ln(3x) / (x²(1 + 7x)) = (1/8) ln 3.

There is an interesting link between the δ-function and the Heaviside step function θ(x), defined as

θ(x) = 1 for x > 0,   θ(x) = 0 for x < 0,      (2)

with θ(0) undefined. In particular ∂ₓθ(x) vanishes for x ≠ 0, similar to the δ-function. This is no coincidence, because the derivative of the θ-function is a δ-function itself: using integration by parts, one finds that

∫_{−∞}^{∞} dx (∂ₓθ(x)) f(x) = −∫_{−∞}^{∞} dx θ(x) ∂ₓf(x) + [θ(x) f(x)]_{x=−∞}^{x=∞}
                            = −∫_{0}^{∞} dx ∂ₓf(x) + f(∞)
                            = f(0) − f(∞) + f(∞)
                            = f(0).

Therefore, we conclude that ∂ₓθ(x) = δ(x).

Properties

Rescaling: δ(a x) = (1/|a|) δ(x), and in particular δ(−x) = δ(x).

Changing the argument: δ(x2) 1 δ(x)