
The Matrix Cookbook

Kaare Brandt Petersen
ISP, IMM, Technical University of Denmark

September 21, 2004

What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference.

Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large number of sources. These sources include similar but shorter notes found on the internet and appendices in books - see the references for a full list. Among the few exceptions are the derivatives involving traces and the Petersen-Hao approximation on inverses.

Errors: Very likely there are errors, typos, and mistakes for which I apologize and would be grateful to receive corrections at [email protected] or other channels of communication found on my homepage.

It's ongoing: The project of keeping a large repository of relations involving matrices is naturally ongoing, and the version will be apparent from the date in the header.

Suggestions: Your suggestions for additional content or elaboration of some topics are most welcome at [email protected].

Notation: Matrices are written in capital bold letters like A, vectors in bold lower case like a, and scalars as plain letters (both upper and lower case) like a or A. Thus, $A_{12}$ denotes the scalar placed at entry (1, 2) in the matrix A, while $\mathbf{A}_{12}$ would denote a matrix with some indices for whatever purpose. Parentheses around a matrix followed by indices denote that specific entry of the matrix, i.e. $(\mathbf{A})_{ij} = A_{ij}$.

Keywords: Matrix algebra, matrix relations, matrix identities, derivative of determinant, derivative of inverse matrix, differentiate a matrix.



Contents

1 Basics

2 Derivatives
  2.1 Derivatives of a Determinant
  2.2 Derivatives of an Inverse
  2.3 Derivatives of Matrices, Vectors and Scalar Forms
  2.4 Derivatives of Traces

3 Inverses
  3.1 Exact Relations
  3.2 Approximations
  3.3 Generalized Inverse
  3.4 Pseudo Inverse

4 Decompositions
  4.1 Eigenvalues and Eigenvectors
  4.2 Singular Value Decomposition
  4.3 Triangular Decomposition

5 General Statistics and Probability
  5.1 Moments of any distribution
  5.2 Expectations

6 Gaussians
  6.1 One Dimensional
  6.2 One Dimensional Mixture of Gaussians
  6.3 Basics
  6.4 Moments
  6.5 Miscellaneous
  6.6 Mixture of Gaussians

7 Miscellaneous
  7.1 Miscellaneous
  7.2 Indices, Entries and Vectors
  7.3 Solutions to Systems of Equations
  7.4 Block matrices
  7.5 Positive Definite and Semi-definite Matrices
  7.6 Integral Involving Dirac Delta Functions


1 Basics

$$(A^T)^{-1} = (A^{-1})^T$$

$$(AB)^{-1} = B^{-1}A^{-1}$$

$$(AB)^T = B^T A^T$$

$$\mathrm{Tr}(A) = \sum_i A_{ii} = \sum_i \lambda_i, \qquad \lambda_i = \mathrm{eig}(A)$$

$$\mathrm{Tr}(ABC) = \mathrm{Tr}(BCA) = \mathrm{Tr}(CAB)$$

$$\det(A) = |A| = \prod_i \lambda_i, \qquad \lambda_i = \mathrm{eig}(A)$$

$$|AB| = |A||B|, \qquad \text{if } A \text{ and } B \text{ are invertible}$$

$$|A^{-1}| = \frac{1}{|A|}$$
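These relations are easy to sanity-check numerically. The following is a minimal sketch, assuming NumPy is available; the matrix sizes and the seed are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))
C = rng.standard_normal((4, 4))

# Tr(A) equals the sum of the eigenvalues.
assert np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real)

# Cyclic invariance of the trace.
assert np.isclose(np.trace(A @ B @ C), np.trace(B @ C @ A))

# |AB| = |A||B| and |A^{-1}| = 1/|A|.
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))

# (AB)^{-1} = B^{-1} A^{-1} and (AB)^T = B^T A^T.
inv = np.linalg.inv
assert np.allclose(inv(A @ B), inv(B) @ inv(A))
assert np.allclose((A @ B).T, B.T @ A.T)
```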

2 Derivatives

2.1 Derivatives of a Determinant

2.1.1 General form

$$\frac{\partial |A|}{\partial x} = |A|\,\mathrm{Tr}\left[A^{-1}\frac{\partial A}{\partial x}\right]$$

2.1.2 Linear forms

$$\frac{\partial |A|}{\partial A} = |A|(A^{-1})^T$$

$$\frac{\partial |BAC|}{\partial A} = |BAC|(A^{-1})^T = |BAC|(A^T)^{-1}$$

2.1.3 Square forms

Assume B to be square and symmetric. Then

$$\frac{\partial |A^T B A|}{\partial A} = 2|A^T B A|\,BA(A^T B A)^{-1}$$

Note that A does not have to be square.

$$\frac{\partial \ln |A^T A|}{\partial A} = 2(A^+)^T$$

$$\frac{\partial \ln |A^T A|}{\partial A^+} = -2A^T$$

See [4].


2.1.4 Other nonlinear forms

$$\frac{\partial \ln |A|}{\partial A} = (A^{-1})^T = (A^T)^{-1}$$

$$\frac{\partial |X^k|}{\partial X} = k|X^k|\,X^{-T}$$

See [3].
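The linear form above can be verified by finite differences. A minimal sketch, assuming NumPy; `eps` is just a hypothetical step size.

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
eps = 1e-6

# Analytic gradient: d|A|/dA = |A| (A^{-1})^T
grad = np.linalg.det(A) * np.linalg.inv(A).T

# Central difference in entry (i, j).
i, j = 1, 2
E = np.zeros_like(A)
E[i, j] = eps
numeric = (np.linalg.det(A + E) - np.linalg.det(A - E)) / (2 * eps)
assert np.isclose(grad[i, j], numeric, rtol=1e-4)
```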

2.2 Derivatives of an Inverse

$$\frac{\partial A^{-1}}{\partial x} = -A^{-1}\frac{\partial A}{\partial x}A^{-1}$$

See [8]. If the entries of A are independent (i.e. not symmetric, Toeplitz or with other kinds of structure), then

$$\frac{\partial (A^{-1})_{kl}}{\partial A_{ij}} = -(A^{-1})_{ki}(A^{-1})_{jl}$$

$$\frac{\partial\, b^T A^{-1} c}{\partial A} = -A^{-T} b c^T A^{-T}$$

$$\frac{\partial |A^{-1}|}{\partial A} = -|A^{-1}|(A^{-1})^T$$

2.3 Derivatives of Matrices, Vectors and Scalar Forms

2.3.1 First Order

$$\frac{\partial\, a^T b}{\partial a} = \frac{\partial\, b^T a}{\partial a} = b$$

$$\frac{\partial\, b^T A c}{\partial A} = b c^T$$

$$\frac{\partial\, b^T A^T c}{\partial A} = c b^T$$

$$\frac{\partial\, b^T A b}{\partial A} = \frac{\partial\, b^T A^T b}{\partial A} = b b^T$$

If the elements of A are independent variables, then

$$\frac{\partial A}{\partial A_{ij}} = J^{ij}$$

$$\frac{\partial (AB)_{ij}}{\partial A_{mn}} = \delta_{im}(B)_{nj} = (J^{mn}B)_{ij}$$

$$\frac{\partial (A^T B)_{ij}}{\partial A_{mn}} = \delta_{in}(B)_{mj} = (J^{nm}B)_{ij}$$


2.3.2 Second Order

$$\frac{\partial}{\partial A_{ij}} \sum_{klmn} A_{kl}A_{mn} = 2\sum_{kl} A_{kl}$$

$$\frac{\partial\, b^T A^T A c}{\partial A} = A(bc^T + cb^T)$$

$$\frac{\partial\, (Ba + b)^T C (Da + d)}{\partial a} = B^T C(Da + d) + D^T C^T(Ba + b)$$

$$\frac{\partial (A^T B A)_{kl}}{\partial A_{ij}} = \delta_{lj}(A^T B)_{ki} + \delta_{kj}(BA)_{il}$$

$$\frac{\partial (A^T B A)}{\partial A_{ij}} = A^T B J^{ij} + J^{ji} B A, \qquad (J^{ij})_{kl} = \delta_{ik}\delta_{jl}$$

See Sec 7.2 for useful properties of the single-entry matrix $J^{ij}$.

$$\frac{\partial\, a^T B a}{\partial a} = (B + B^T)a$$

$$\frac{\partial\, b^T A^T D A c}{\partial A} = D^T A b c^T + D A c b^T$$

$$\frac{\partial}{\partial A}(Ab + c)^T D(Ab + c) = (D + D^T)(Ab + c)b^T$$
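The same finite-difference trick confirms the vector-argument forms, e.g. the gradient of $a^T B a$. A sketch assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((4, 4))
a = rng.standard_normal(4)
eps = 1e-6

grad = (B + B.T) @ a  # analytic gradient of a^T B a

# Central differences, one coordinate of a at a time.
numeric = np.array([
    ((a + eps * e) @ B @ (a + eps * e)
     - (a - eps * e) @ B @ (a - eps * e)) / (2 * eps)
    for e in np.eye(4)
])
assert np.allclose(grad, numeric, rtol=1e-5)
```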

2.4 Derivatives of Traces

2.4.1 First Order

$$\frac{\partial}{\partial A}\mathrm{Tr}(A) = I$$

$$\frac{\partial}{\partial A}\mathrm{Tr}(AB) = B^T$$

$$\frac{\partial}{\partial A}\mathrm{Tr}(BAC) = B^T C^T$$

$$\frac{\partial}{\partial A}\mathrm{Tr}(BA^T C) = CB$$

2.4.2 Second Order

$$\frac{\partial}{\partial A}\mathrm{Tr}(A^2) = 2A^T$$

$$\frac{\partial}{\partial A}\mathrm{Tr}(A^2 B) = (AB + BA)^T$$

$$\frac{\partial}{\partial A}\mathrm{Tr}(A^T B A) = BA + B^T A$$


$$\frac{\partial}{\partial A}\mathrm{Tr}(A^T A) = 2A$$

$$\frac{\partial}{\partial A}\mathrm{Tr}(B^T A^T C A B) = C^T A B B^T + C A B B^T$$

$$\frac{\partial}{\partial A}\mathrm{Tr}\left[A^T B A C\right] = BAC + B^T A C^T$$

$$\frac{\partial}{\partial X}\mathrm{Tr}(A X B X^T C) = A^T C^T X B^T + C A X B$$

$$\frac{\partial}{\partial X}\mathrm{Tr}\left[(AXb + c)(AXb + c)^T\right] = 2A^T(AXb + c)b^T$$

See [3].

2.4.3 Higher Order

$$\frac{\partial}{\partial X}\mathrm{Tr}(X^k) = k(X^{k-1})^T$$

$$\frac{\partial}{\partial X}\mathrm{Tr}(AX^k) = \sum_{r=0}^{k-1}(X^r A X^{k-r-1})^T$$

$$\frac{\partial}{\partial A}\mathrm{Tr}\left[B^T A^T C A A^T C A B\right] = CAA^T CABB^T + C^T A B B^T A^T C^T A + CABB^T A^T CA + C^T A A^T C^T A B B^T$$

2.4.4 Other

$$\frac{\partial}{\partial X}\mathrm{Tr}(AX^{-1}B) = -(X^{-1}BAX^{-1})^T$$

Assume B and C to be symmetric, then

$$\frac{\partial}{\partial X}\mathrm{Tr}\left[(X^T C X)^{-1} A\right] = -\left(CX(X^T C X)^{-1}\right)(A + A^T)(X^T C X)^{-1}$$

$$\frac{\partial}{\partial X}\mathrm{Tr}\left[(X^T C X)^{-1}(X^T B X)\right] = -2CX(X^T C X)^{-1}X^T B X(X^T C X)^{-1} + 2BX(X^T C X)^{-1}$$

See [3].
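A generic checker makes these trace derivatives easy to trust. A sketch assuming NumPy, shown here on $\mathrm{Tr}(AXBX^TC)$:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 3
A, B, C, X = (rng.standard_normal((n, n)) for _ in range(4))
eps = 1e-6

f = lambda X: np.trace(A @ X @ B @ X.T @ C)
grad = A.T @ C.T @ X @ B.T + C @ A @ X @ B  # analytic

numeric = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        E = np.zeros((n, n))
        E[i, j] = eps
        numeric[i, j] = (f(X + E) - f(X - E)) / (2 * eps)
assert np.allclose(grad, numeric, rtol=1e-4)
```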

3 Inverses

3.1 Exact Relations

3.1.1 The Woodbury identity

$$(A + CBC^T)^{-1} = A^{-1} - A^{-1}C(B^{-1} + C^T A^{-1} C)^{-1} C^T A^{-1}$$
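The identity is most useful when B is small compared to A, since the right-hand side only inverts matrices of B's size (given $A^{-1}$). A numerical sketch, assuming NumPy; the matrices are made diagonally dominant only so all inverses exist.

```python
import numpy as np

rng = np.random.default_rng(4)
n, k = 5, 2
A = 2 * np.eye(n) + 0.1 * rng.standard_normal((n, n))
B = np.eye(k) + 0.1 * rng.standard_normal((k, k))
C = rng.standard_normal((n, k))

inv = np.linalg.inv
lhs = inv(A + C @ B @ C.T)
rhs = inv(A) - inv(A) @ C @ inv(inv(B) + C.T @ inv(A) @ C) @ C.T @ inv(A)
assert np.allclose(lhs, rhs)
```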


3.1.2 The Kailath Variant

$$(A + BC)^{-1} = A^{-1} - A^{-1}B(I + CA^{-1}B)^{-1}CA^{-1}$$

See [2] page 153.

3.1.3 A PosDef identity

Assume P and R to be positive definite and invertible, then

$$(P^{-1} + B^T R^{-1} B)^{-1} B^T R^{-1} = P B^T (B P B^T + R)^{-1}$$

See [9].

3.2 Approximations

$$(I + A)^{-1} \cong I - A, \qquad \text{if } A \text{ small}$$

3.2.1 The Petersen-Hao approximation

$$A - A(I + A)^{-1}A \cong I - A^{-1}, \qquad \text{if } A \text{ large and symmetric}$$

3.3 Generalized Inverse

3.3.1 Definition

A generalized inverse matrix of the matrix A is any matrix A− such that

$$AA^{-}A = A$$

The matrix A− is not unique.

3.4 Pseudo Inverse

3.4.1 Definition

The pseudo inverse (or Moore-Penrose inverse) of a matrix A is the matrix $A^+$ that fulfils

I.   $AA^+A = A$
II.  $A^+AA^+ = A^+$
III. $AA^+$ is symmetric
IV.  $A^+A$ is symmetric

The matrix $A^+$ is unique and always exists.


3.4.2 Basic Properties

Assume $A^+$ to be the pseudo-inverse of A, then

$$(A^+)^+ = A$$

$$(A^T)^+ = (A^+)^T$$

$$(cA)^+ = (1/c)A^+$$

$$(A^T A)^+ = A^+(A^T)^+$$

$$(AA^T)^+ = (A^T)^+A^+$$

See [1].

3.4.3 Construction

Assume that A has full rank. Then

$A$ $n \times n$, square: $\mathrm{rank}(A) = n \Rightarrow A^+ = A^{-1}$
$A$ $n \times m$, broad:  $\mathrm{rank}(A) = n \Rightarrow A^+ = A^T(AA^T)^{-1}$
$A$ $n \times m$, tall:   $\mathrm{rank}(A) = m \Rightarrow A^+ = (A^T A)^{-1}A^T$

Assume A does not have full rank, i.e. A is $n \times m$ and $\mathrm{rank}(A) = r < \min(n, m)$. The pseudo inverse $A^+$ can be constructed from the singular value decomposition $A = UDV^T$ by

$$A^+ = VD^+U^T$$

A different way is this: there always exist two matrices $C$ ($n \times r$) and $D$ ($r \times m$) of rank $r$ such that $A = CD$. Using these matrices it holds that

$$A^+ = D^T(DD^T)^{-1}(C^T C)^{-1}C^T$$

See [1].
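A sketch (assuming NumPy) comparing the tall full-rank construction with numpy.linalg.pinv and checking the four defining conditions:

```python
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 3))  # tall, almost surely of rank 3

Ap = np.linalg.inv(A.T @ A) @ A.T          # A+ = (A^T A)^{-1} A^T
assert np.allclose(Ap, np.linalg.pinv(A))  # matches the SVD-based route

# The four Moore-Penrose conditions:
assert np.allclose(A @ Ap @ A, A)
assert np.allclose(Ap @ A @ Ap, Ap)
assert np.allclose(A @ Ap, (A @ Ap).T)
assert np.allclose(Ap @ A, (Ap @ A).T)
```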

4 Decompositions

4.1 Eigenvalues and Eigenvectors

4.1.1 Definition

The eigenvectors $v_i$ and eigenvalues $\lambda_i$ are the ones satisfying

$$Av_i = \lambda_i v_i$$

$$AV = VD, \qquad (D)_{ij} = \delta_{ij}\lambda_i$$

where the columns of V are the vectors $v_i$.

4.1.2 General Properties

$$\mathrm{eig}(AB) = \mathrm{eig}(BA)$$

$A$ is $n \times m$ ⇒ at most $\min(n, m)$ distinct $\lambda_i$

$\mathrm{rank}(A) = r$ ⇒ at most $r$ non-zero $\lambda_i$


4.1.3 Symmetric

Assume A is symmetric, then

$$VV^T = I \qquad \text{(i.e. } V \text{ is orthogonal)}$$

$$\lambda_i \in \mathbb{R} \qquad \text{(i.e. } \lambda_i \text{ is real)}$$

$$\mathrm{Tr}(A^p) = \sum_i \lambda_i^p$$

$$\mathrm{eig}(I + cA) = 1 + c\lambda_i$$

$$\mathrm{eig}(A^{-1}) = \lambda_i^{-1}$$

4.2 Singular Value Decomposition

Any $n \times m$ matrix A can be written as

$$A = UDV^T$$

where

U = eigenvectors of $AA^T$ ($n \times n$)
D = $\mathrm{diag}\left(\sqrt{\mathrm{eig}(AA^T)}\right)$ ($n \times m$)
V = eigenvectors of $A^T A$ ($m \times m$)
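As a quick sketch (assuming NumPy), numpy.linalg.svd returns exactly these factors, and the singular values square to the eigenvalues of $AA^T$:

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 6))

U, s, Vt = np.linalg.svd(A, full_matrices=False)
assert np.allclose((U * s) @ Vt, A)  # A = U D V^T

# The singular values are the square roots of eig(A A^T),
# and the columns of U are the corresponding eigenvectors.
w = np.sort(np.linalg.eigvalsh(A @ A.T))[::-1]
assert np.allclose(s**2, w)
```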

4.2.1 Symmetric Square decomposed into squares

Assume A to be $n \times n$ and symmetric. Then

$$A = VDV^T$$

where D is diagonal with the eigenvalues of A, and V is orthogonal and holds the eigenvectors of A.

4.2.2 Square decomposed into squares

Assume A to be $n \times n$. Then

$$A = VDU^T$$

where D is diagonal with the square roots of the eigenvalues of $AA^T$, V holds the eigenvectors of $AA^T$, and U holds the eigenvectors of $A^T A$.

4.2.3 Square decomposed into rectangular

Assume $V_* D_* U_*^T = 0$. Then we can expand the SVD of A into

$$A = \begin{bmatrix} V & V_* \end{bmatrix} \begin{bmatrix} D & 0 \\ 0 & D_* \end{bmatrix} \begin{bmatrix} U^T \\ U_*^T \end{bmatrix}$$

where the SVD of A is $A = VDU^T$.


4.2.4 Rectangular decomposition I

Assume A is $n \times m$. Then

$$A = VDU^T$$

where D is diagonal with the square roots of the eigenvalues of $AA^T$, V holds the eigenvectors of $AA^T$, and U holds the eigenvectors of $A^T A$.

4.2.5 Rectangular decomposition II

Assume A is $n \times m$. Then

$$A = VDU^T$$

4.2.6 Rectangular decomposition III

Assume A is $n \times m$. Then

$$A = VDU^T$$

where D is diagonal with the square roots of the eigenvalues of $AA^T$, V holds the eigenvectors of $AA^T$, and U holds the eigenvectors of $A^T A$.

4.3 Triangular Decomposition

4.3.1 Cholesky-decomposition

Assume A is positive definite. Then

$$A = B^T B$$

where B is a unique upper triangular matrix.
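Note that numpy.linalg.cholesky returns the lower triangular factor L with $A = LL^T$, so $B = L^T$ in the notation above. A quick sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)  # positive definite by construction

L = np.linalg.cholesky(A)    # lower triangular, A = L L^T
B = L.T                      # upper triangular, A = B^T B
assert np.allclose(B.T @ B, A)
```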

5 General Statistics and Probability

5.1 Moments of any distribution

5.1.1 Mean and covariance of linear forms

Assume X and x to be a matrix and a vector of random variables. Then

$$E[AXB + C] = A\,E[X]\,B + C$$

$$\mathrm{Var}[Ax] = A\,\mathrm{Var}[x]\,A^T$$

$$\mathrm{Cov}[Ax, By] = A\,\mathrm{Cov}[x, y]\,B^T$$

See [7].


5.1.2 Mean and Variance of Square Forms

Assume A is symmetric, $c = E[x]$ and $\Sigma = \mathrm{Var}[x]$. Assume also that all coordinates $x_i$ are independent, have the same central moments $\mu_1, \mu_2, \mu_3, \mu_4$, and denote $a = \mathrm{diag}(A)$. Then

$$E[x^T A x] = \mathrm{Tr}(A\Sigma) + c^T A c$$

$$\mathrm{Var}[x^T A x] = 2\mu_2^2\,\mathrm{Tr}(A^2) + 4\mu_2\,c^T A^2 c + 4\mu_3\,c^T A a + (\mu_4 - 3\mu_2^2)\,a^T a$$

See [7].

5.2 Expectations

Assume x to be a stochastic vector with mean m, covariance M and central moments $v_r = E[(x - m)^r]$.

5.2.1 Linear Forms

$$E[Ax + b] = Am + b$$

$$E[Ax] = Am$$

$$E[x + b] = m + b$$

5.2.2 Quadratic Forms

$$E[(Ax + a)(Bx + b)^T] = AMB^T + (Am + a)(Bm + b)^T$$

$$E[xx^T] = M + mm^T$$

$$E[xa^T x] = (M + mm^T)a$$

$$E[x^T a x^T] = a^T(M + mm^T)$$

$$E[(Ax)(Ax)^T] = A(M + mm^T)A^T$$

$$E[(x + a)(x + a)^T] = M + (m + a)(m + a)^T$$

$$E[(Ax + a)^T(Bx + b)] = \mathrm{Tr}(AMB^T) + (Am + a)^T(Bm + b)$$

$$E[x^T x] = \mathrm{Tr}(M) + m^T m$$

$$E[x^T A x] = \mathrm{Tr}(AM) + m^T A m$$

$$E[(Ax)^T(Ax)] = \mathrm{Tr}(AMA^T) + (Am)^T(Am)$$

$$E[(x + a)^T(x + a)] = \mathrm{Tr}(M) + (m + a)^T(m + a)$$

See [3].
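These identities hold for any distribution with the stated mean and covariance, which makes them easy to sanity-check by Monte Carlo. A sketch assuming NumPy; a Gaussian is used only for convenience, and the sample size and tolerance are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(8)
m = np.array([1.0, -2.0, 0.5])
M = np.diag([1.0, 0.5, 2.0])                 # a valid covariance
A = np.array([[2.0, 0.0, 1.0],
              [0.0, 3.0, 0.0],
              [1.0, 0.0, 2.0]])

x = rng.multivariate_normal(m, M, size=200_000)
lhs = np.einsum('ni,ij,nj->n', x, A, x).mean()   # sample E[x^T A x]
rhs = np.trace(A @ M) + m @ A @ m                # Tr(AM) + m^T A m
assert np.isclose(lhs, rhs, rtol=0.02)
```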


5.2.3 Cubic Forms

Assume the coordinates of x to be independent. Then

$$E[(Ax + a)(Bx + b)^T(Cx + c)] = A\,\mathrm{diag}(B^T C)\,v_3 + \mathrm{Tr}(BMC^T)(Am + a) + AMC^T(Bm + b) + \left(AMB^T + (Am + a)(Bm + b)^T\right)(Cm + c)$$

$$E[xx^T x] = v_3 + 2Mm + (\mathrm{Tr}(M) + m^T m)m$$

$$E[(Ax + a)(Ax + a)^T(Ax + a)] = A\,\mathrm{diag}(A^T A)\,v_3 + \left[2AMA^T + (Am + a)(Am + a)^T\right](Am + a) + \mathrm{Tr}(AMA^T)(Am + a)$$

$$E[(Ax + a)b^T(Cx + c)(Dx + d)^T] = (Am + a)b^T\left(CMD^T + (Cm + c)(Dm + d)^T\right) + \left(AMC^T + (Am + a)(Cm + c)^T\right)b(Dm + d)^T + b^T(Cm + c)\left(AMD^T - (Am + a)(Dm + d)^T\right)$$

See [3].

6 Gaussians

6.1 One Dimensional

6.1.1 Density and Normalization

The density is

$$p(s) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left(-\frac{(s - \mu)^2}{2\sigma^2}\right)$$

Normalization integrals:

$$\int e^{-\frac{(s - \mu)^2}{2\sigma^2}}\,ds = \sqrt{2\pi\sigma^2}$$

$$\int e^{-(ax^2 + bx + c)}\,dx = \sqrt{\frac{\pi}{a}}\,\exp\left[\frac{b^2 - 4ac}{4a}\right]$$

$$\int e^{c_2 x^2 + c_1 x + c_0}\,dx = \sqrt{\frac{\pi}{-c_2}}\,\exp\left[\frac{c_1^2 - 4c_2 c_0}{-4c_2}\right]$$

6.1.2 Completing the Squares

$$c_2 x^2 + c_1 x + c_0 = -a(x - b)^2 + w$$

$$-a = c_2, \qquad b = \frac{1}{2}\frac{c_1}{c_2}, \qquad w = \frac{1}{4}\frac{c_1^2}{c_2} + c_0$$


or

$$c_2 x^2 + c_1 x + c_0 = -\frac{1}{2\sigma^2}(x - \mu)^2 + d$$

$$\mu = \frac{-c_1}{2c_2}, \qquad \sigma^2 = \frac{-1}{2c_2}, \qquad d = c_0 - \frac{c_1^2}{4c_2}$$
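This second rewriting is exactly how one reads a Gaussian's mean and variance off an exponent. A small numeric sketch, assuming NumPy; the coefficients are arbitrary with $c_2 < 0$.

```python
import numpy as np

c2, c1, c0 = -0.5, 2.0, 1.0   # exponent c2*x^2 + c1*x + c0

mu = -c1 / (2 * c2)           # mean, here 2.0
sigma2 = -1 / (2 * c2)        # variance, here 1.0
d = c0 - c1**2 / (4 * c2)     # constant, here 3.0

x = np.linspace(-5, 5, 11)
lhs = c2 * x**2 + c1 * x + c0
rhs = -(x - mu) ** 2 / (2 * sigma2) + d
assert np.allclose(lhs, rhs)
```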

6.1.3 Moments

If the density is expressed by

$$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}} \exp\left[-\frac{(x - \mu)^2}{2\sigma^2}\right] \qquad \text{or} \qquad p(x) = C \exp(c_2 x^2 + c_1 x)$$

then the first few basic moments are

$$\langle x \rangle = \mu = \frac{-c_1}{2c_2}$$

$$\langle x^2 \rangle = \sigma^2 + \mu^2 = \frac{-1}{2c_2} + \left(\frac{-c_1}{2c_2}\right)^2$$

$$\langle x^3 \rangle = 3\sigma^2\mu + \mu^3 = \frac{c_1}{(2c_2)^2}\left[3 - \frac{c_1^2}{2c_2}\right]$$

$$\langle x^4 \rangle = \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4 = \left(\frac{1}{2c_2}\right)^2\left[\left(\frac{c_1^2}{2c_2}\right)^2 - 6\frac{c_1^2}{2c_2} + 3\right]$$

and the central moments are

$$\langle (x - \mu) \rangle = 0$$

$$\langle (x - \mu)^2 \rangle = \sigma^2 = \frac{-1}{2c_2}$$

$$\langle (x - \mu)^3 \rangle = 0$$

$$\langle (x - \mu)^4 \rangle = 3\sigma^4 = 3\left[\frac{1}{2c_2}\right]^2$$

A kind of pseudo-moments (un-normalized integrals) can easily be derived as

$$\int \exp(c_2 x^2 + c_1 x)\,x^n\,dx = Z\langle x^n \rangle = \sqrt{\frac{\pi}{-c_2}}\,\exp\left[\frac{c_1^2}{-4c_2}\right]\langle x^n \rangle$$

From the un-centralized moments one can derive other entities like

$$\langle x^2 \rangle - \langle x \rangle^2 = \sigma^2 = \frac{-1}{2c_2}$$

$$\langle x^3 \rangle - \langle x^2 \rangle\langle x \rangle = 2\sigma^2\mu = \frac{2c_1}{(2c_2)^2}$$

$$\langle x^4 \rangle - \langle x^2 \rangle^2 = 2\sigma^4 + 4\mu^2\sigma^2 = \frac{2}{(2c_2)^2}\left[1 - \frac{c_1^2}{c_2}\right]$$

6.2 One Dimensional Mixture of Gaussians

6.2.1 Density and Normalization

$$p(s) = \sum_k^K \frac{\rho_k}{\sqrt{2\pi\sigma_k^2}} \exp\left[-\frac{1}{2}\frac{(s - \mu_k)^2}{\sigma_k^2}\right]$$


6.2.2 Moments

A useful fact about mixtures of gaussians (MoG) is that

$$\langle x^n \rangle = \sum_k \rho_k \langle x^n \rangle_k$$

where $\langle\cdot\rangle_k$ denotes average with respect to the k-th component. We can calculate the first four moments from the densities

$$p(x) = \sum_k \rho_k \frac{1}{\sqrt{2\pi\sigma_k^2}} \exp\left[-\frac{1}{2}\frac{(x - \mu_k)^2}{\sigma_k^2}\right]$$

$$p(x) = \sum_k \rho_k C_k \exp\left[c_{k2} x^2 + c_{k1} x\right]$$

as

$$\langle x \rangle = \sum_k \rho_k \mu_k = \sum_k \rho_k \left[\frac{-c_{k1}}{2c_{k2}}\right]$$

$$\langle x^2 \rangle = \sum_k \rho_k (\sigma_k^2 + \mu_k^2) = \sum_k \rho_k \left[\frac{-1}{2c_{k2}} + \left(\frac{-c_{k1}}{2c_{k2}}\right)^2\right]$$

$$\langle x^3 \rangle = \sum_k \rho_k (3\sigma_k^2\mu_k + \mu_k^3) = \sum_k \rho_k \left[\frac{c_{k1}}{(2c_{k2})^2}\left(3 - \frac{c_{k1}^2}{2c_{k2}}\right)\right]$$

$$\langle x^4 \rangle = \sum_k \rho_k (\mu_k^4 + 6\mu_k^2\sigma_k^2 + 3\sigma_k^4) = \sum_k \rho_k \left[\left(\frac{1}{2c_{k2}}\right)^2\left(\left(\frac{c_{k1}^2}{2c_{k2}}\right)^2 - 6\frac{c_{k1}^2}{2c_{k2}} + 3\right)\right]$$

If all the gaussians are centered, i.e. $\mu_k = 0$ for all $k$, then (obviously)

$$\langle x \rangle = 0$$

$$\langle x^2 \rangle = \sum_k \rho_k \sigma_k^2 = \sum_k \rho_k \left[\frac{-1}{2c_{k2}}\right]$$

$$\langle x^3 \rangle = 0$$

$$\langle x^4 \rangle = \sum_k \rho_k 3\sigma_k^4 = \sum_k \rho_k\, 3\left[\frac{-1}{2c_{k2}}\right]^2$$

From the un-centralized moments one can derive other entities like

$$\langle x^2 \rangle - \langle x \rangle^2 = \sum_{k,k'} \rho_k \rho_{k'} \left[\mu_k^2 + \sigma_k^2 - \mu_k \mu_{k'}\right]$$

$$\langle x^3 \rangle - \langle x^2 \rangle\langle x \rangle = \sum_{k,k'} \rho_k \rho_{k'} \left[3\sigma_k^2\mu_k + \mu_k^3 - (\sigma_k^2 + \mu_k^2)\mu_{k'}\right]$$

$$\langle x^4 \rangle - \langle x^2 \rangle^2 = \sum_{k,k'} \rho_k \rho_{k'} \left[\mu_k^4 + 6\mu_k^2\sigma_k^2 + 3\sigma_k^4 - (\sigma_k^2 + \mu_k^2)(\sigma_{k'}^2 + \mu_{k'}^2)\right]$$

6.3 Basics

6.3.1 Density and normalization

The density of $x \sim \mathcal{N}(m, \Sigma)$ is

$$p(x) = \frac{1}{\sqrt{|2\pi\Sigma|}} \exp\left[-\frac{1}{2}(x - m)^T \Sigma^{-1} (x - m)\right]$$


Integration and normalization:

$$\int \exp\left[-\frac{1}{2}(x - m)^T \Sigma^{-1} (x - m)\right] dx = \sqrt{|2\pi\Sigma|}$$

$$\int \exp\left[-\frac{1}{2}\mathrm{Tr}(S^T A S) + \mathrm{Tr}(B^T S)\right] dS = \exp\left[\frac{1}{2}\mathrm{Tr}(B^T A^{-1} B)\right]\sqrt{|2\pi A^{-1}|}$$

The derivative of the density is a vector of the form

$$\frac{\partial p(x)}{\partial x} = -p(x)\,\Sigma^{-1}(x - m)$$

6.3.2 Linear combination

Assume $x \sim \mathcal{N}(m_x, \Sigma_x)$ and $y \sim \mathcal{N}(m_y, \Sigma_y)$ are independent, then

$$Ax + By + c \sim \mathcal{N}(Am_x + Bm_y + c,\; A\Sigma_x A^T + B\Sigma_y B^T)$$

6.3.3 Rearranging Means

$$\mathcal{N}_{Ax}[m, \Sigma] = \frac{\sqrt{|2\pi(A^T\Sigma^{-1}A)^{-1}|}}{\sqrt{|2\pi\Sigma|}}\,\mathcal{N}_x\left[A^{-1}m,\; (A^T\Sigma^{-1}A)^{-1}\right]$$

6.3.4 Rearranging into squared form

$$x^T A x - x^T b - c^T x + d = (x - m)^T A (x - m) + \eta$$

$$m = A^{-1}c = A^{-1}b, \qquad \eta = d - c^T A^{-1} b$$

A variant is (assume A is symmetric):

$$-\frac{1}{2}x^T A x + b^T x = -\frac{1}{2}(x - A^{-1}b)^T A (x - A^{-1}b) + \frac{1}{2}b^T A^{-1} b$$

A variant with traces:

$$-\frac{1}{2}\mathrm{Tr}(S^T A S) + \mathrm{Tr}(B^T S) = -\frac{1}{2}\mathrm{Tr}\left[(S - A^{-1}B)^T A (S - A^{-1}B)\right] + \frac{1}{2}\mathrm{Tr}(B^T A^{-1} B)$$

6.3.5 Sum of two squared forms

Rearranging the sum of two squared forms:

$$(x - m_1)^T \Sigma_1^{-1} (x - m_1) + (x - m_2)^T \Sigma_2^{-1} (x - m_2) = (x - m_c)^T \Sigma_c^{-1} (x - m_c) + C$$

$$\Sigma_c^{-1} = \Sigma_1^{-1} + \Sigma_2^{-1}$$

$$m_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}(\Sigma_1^{-1} m_1 + \Sigma_2^{-1} m_2)$$

$$C = m_1^T \Sigma_1^{-1} m_1 + m_2^T \Sigma_2^{-1} m_2 - (m_1^T \Sigma_1^{-1} + m_2^T \Sigma_2^{-1})(\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}(\Sigma_1^{-1} m_1 + \Sigma_2^{-1} m_2)$$


6.3.6 Product of gaussian densities

Let $\mathcal{N}_x(m, \Sigma)$ denote a density of x, then

$$\mathcal{N}_x(m_1, \Sigma_1) \cdot \mathcal{N}_x(m_2, \Sigma_2) = c_c\,\mathcal{N}_x(m_c, \Sigma_c)$$

$$c_c = \mathcal{N}_{m_1}(m_2, \Sigma_1 + \Sigma_2) = \frac{1}{\sqrt{|2\pi(\Sigma_1 + \Sigma_2)|}}\exp\left[-\frac{1}{2}(m_1 - m_2)^T(\Sigma_1 + \Sigma_2)^{-1}(m_1 - m_2)\right]$$

$$m_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}(\Sigma_1^{-1} m_1 + \Sigma_2^{-1} m_2)$$

$$\Sigma_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}$$

but note that the product is not normalized as a density of x.
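The claim can be checked pointwise with a hand-rolled density. A minimal sketch assuming NumPy; the gauss helper and the test point are purely illustrative.

```python
import numpy as np

def gauss(x, m, S):
    """Density N_x(m, S) evaluated at a single point x."""
    d = x - m
    return (np.exp(-0.5 * d @ np.linalg.solve(S, d))
            / np.sqrt(np.linalg.det(2 * np.pi * S)))

m1, m2 = np.array([0.0, 1.0]), np.array([2.0, -1.0])
S1, S2 = np.diag([1.0, 2.0]), np.diag([0.5, 1.5])

Sc = np.linalg.inv(np.linalg.inv(S1) + np.linalg.inv(S2))
mc = Sc @ (np.linalg.solve(S1, m1) + np.linalg.solve(S2, m2))
cc = gauss(m1, m2, S1 + S2)

x = np.array([0.3, -0.7])  # any test point
assert np.isclose(gauss(x, m1, S1) * gauss(x, m2, S2),
                  cc * gauss(x, mc, Sc))
```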

6.4 Moments

6.4.1 Mean and covariance of linear forms

First and second moments. Assume $x \sim \mathcal{N}(m, \Sigma)$:

$$E(x) = m$$

$$\mathrm{Cov}(x, x) = \mathrm{Var}(x) = \Sigma = E(xx^T) - E(x)E(x^T) = E(xx^T) - mm^T$$

As for any other distribution, it holds for gaussians that

$$E[Ax] = AE[x]$$

$$\mathrm{Var}[Ax] = A\,\mathrm{Var}[x]\,A^T$$

$$\mathrm{Cov}[Ax, By] = A\,\mathrm{Cov}[x, y]\,B^T$$

6.4.2 Mean and variance of square forms

Mean and variance of square forms: Assume $x \sim \mathcal{N}(m, \Sigma)$. Then

$$E(xx^T) = \Sigma + mm^T$$

$$E[x^T A x] = \mathrm{Tr}(A\Sigma) + m^T A m$$

$$\mathrm{Var}(x^T A x) = 2\sigma^4\,\mathrm{Tr}(A^2) + 4\sigma^2\,m^T A^2 m \qquad \text{(for symmetric } A \text{ and } \Sigma = \sigma^2 I)$$

$$E[(x - m')^T A (x - m')] = (m - m')^T A (m - m') + \mathrm{Tr}(A\Sigma)$$

Assume $x \sim \mathcal{N}(0, \sigma^2 I)$ and A and B to be symmetric, then

$$\mathrm{Cov}(x^T A x,\, x^T B x) = 2\sigma^4\,\mathrm{Tr}(AB)$$


6.4.3 Cubic forms

$$E[xb^T xx^T] = mb^T(M + mm^T) + (M + mm^T)bm^T + b^T m(M - mm^T)$$

6.4.4 Mean of Quartic Forms

$$E[xx^T xx^T] = 2(\Sigma + mm^T)^2 + m^T m(\Sigma - mm^T) + \mathrm{Tr}(\Sigma)(\Sigma + mm^T)$$

$$E[xx^T A xx^T] = (\Sigma + mm^T)(A + A^T)(\Sigma + mm^T) + m^T A m(\Sigma - mm^T) + \mathrm{Tr}(A\Sigma)(\Sigma + mm^T)$$

$$E[x^T xx^T x] = 2\,\mathrm{Tr}(\Sigma^2) + 4m^T\Sigma m + (\mathrm{Tr}(\Sigma) + m^T m)^2$$

$$E[x^T A xx^T B x] = \mathrm{Tr}[A\Sigma(B + B^T)\Sigma] + m^T(A + A^T)\Sigma(B + B^T)m + (\mathrm{Tr}(A\Sigma) + m^T A m)(\mathrm{Tr}(B\Sigma) + m^T B m)$$

$$E[a^T x\, b^T x\, c^T x\, d^T x] = (a^T(\Sigma + mm^T)b)(c^T(\Sigma + mm^T)d) + (a^T(\Sigma + mm^T)c)(b^T(\Sigma + mm^T)d) + (a^T(\Sigma + mm^T)d)(b^T(\Sigma + mm^T)c) - 2a^T m\, b^T m\, c^T m\, d^T m$$

$$E[(Ax + a)(Bx + b)^T(Cx + c)(Dx + d)^T] = [A\Sigma B^T + (Am + a)(Bm + b)^T][C\Sigma D^T + (Cm + c)(Dm + d)^T] + [A\Sigma C^T + (Am + a)(Cm + c)^T][B\Sigma D^T + (Bm + b)(Dm + d)^T] + (Bm + b)^T(Cm + c)[A\Sigma D^T - (Am + a)(Dm + d)^T] + \mathrm{Tr}(B\Sigma C^T)[A\Sigma D^T + (Am + a)(Dm + d)^T]$$

$$E[(Ax + a)^T(Bx + b)(Cx + c)^T(Dx + d)] = \mathrm{Tr}[A\Sigma(C^T D + D^T C)\Sigma B^T] + [(Am + a)^T B + (Bm + b)^T A]\Sigma[C^T(Dm + d) + D^T(Cm + c)] + [\mathrm{Tr}(A\Sigma B^T) + (Am + a)^T(Bm + b)][\mathrm{Tr}(C\Sigma D^T) + (Cm + c)^T(Dm + d)]$$

See [3].

6.4.5 Moments

For a mixture of gaussians with weights $\rho_k$, means $m_k$ and covariances $\Sigma_k$ (see Sec 6.6):

$$E[x] = \sum_k \rho_k m_k$$

$$\mathrm{Cov}(x) = \sum_k \sum_{k'} \rho_k \rho_{k'} (\Sigma_k + m_k m_k^T - m_k m_{k'}^T)$$


6.5 Miscellaneous

6.5.1 Whitening

Assume $x \sim \mathcal{N}(m, \Sigma)$. Then

$$z = \Sigma^{-1/2}(x - m) \sim \mathcal{N}(0, I)$$

Conversely, having $z \sim \mathcal{N}(0, I)$ one can generate data $x \sim \mathcal{N}(m, \Sigma)$ by setting

$$x = \Sigma^{1/2} z + m \sim \mathcal{N}(m, \Sigma)$$

Note that $\Sigma^{1/2}$ means the matrix which fulfils $\Sigma^{1/2}\Sigma^{1/2} = \Sigma$; it exists and is unique since $\Sigma$ is positive definite.
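A sketch (assuming NumPy) computing $\Sigma^{1/2}$ via the eigendecomposition, generating correlated samples and whitening them back:

```python
import numpy as np

rng = np.random.default_rng(10)
m = np.array([1.0, -1.0])
Sigma = np.array([[2.0, 0.5],
                  [0.5, 1.0]])

# Symmetric square root from the eigendecomposition of Sigma.
w, V = np.linalg.eigh(Sigma)
S_half = V @ np.diag(np.sqrt(w)) @ V.T
assert np.allclose(S_half @ S_half, Sigma)

z = rng.standard_normal((100_000, 2))
x = z @ S_half + m                        # x ~ N(m, Sigma)
z_back = (x - m) @ np.linalg.inv(S_half)  # whitened again
assert np.allclose(np.cov(z_back.T), np.eye(2), atol=0.02)
```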

6.5.2 The Chi-Square connection

Assume $x \sim \mathcal{N}(m, \Sigma)$ and x to be n-dimensional, then

$$z = (x - m)^T \Sigma^{-1} (x - m) \sim \chi^2_n$$

6.5.3 Entropy

Entropy of a D-dimensional gaussian:

$$H(x) = -\int \mathcal{N}(m, \Sigma)\,\ln \mathcal{N}(m, \Sigma)\,dx = \ln\sqrt{|2\pi\Sigma|} + \frac{D}{2}$$

6.6 Mixture of Gaussians

6.6.1 Density

The variable x is distributed as a mixture of gaussians if it has the density

$$p(x) = \sum_{k=1}^K \rho_k \frac{1}{\sqrt{|2\pi\Sigma_k|}} \exp\left[-\frac{1}{2}(x - m_k)^T \Sigma_k^{-1} (x - m_k)\right]$$

where the $\rho_k$ sum to 1 and the $\Sigma_k$ all are positive definite.

7 Miscellaneous

7.1 Miscellaneous

For any A it holds that

$$\mathrm{rank}(A) = \mathrm{rank}(A^T) = \mathrm{rank}(AA^T) = \mathrm{rank}(A^T A)$$

Assume A is positive definite. Then

$$\mathrm{rank}(B^T A B) = \mathrm{rank}(B)$$

A is positive definite ⇔ there exists an invertible B such that $A = BB^T$.


7.2 Indices, Entries and Vectors

Let $e_i$ denote the column vector which is 1 on entry i and zero elsewhere, i.e. $(e_i)_j = \delta_{ij}$, and let $J^{ij}$ denote the single-entry matrix which is 1 on entry (i, j) and zero elsewhere.

7.2.1 Rows and Columns

$$i\text{-th row of } A = e_i^T A$$

$$j\text{-th column of } A = A e_j$$

7.2.2 Permutations

Let P be some permutation matrix, e.g.

$$P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} e_2 & e_1 & e_3 \end{bmatrix} = \begin{bmatrix} e_2^T \\ e_1^T \\ e_3^T \end{bmatrix}$$

then

$$AP = \begin{bmatrix} Ae_2 & Ae_1 & Ae_3 \end{bmatrix} \qquad PA = \begin{bmatrix} e_2^T A \\ e_1^T A \\ e_3^T A \end{bmatrix}$$

That is, the first is a matrix which has the columns of A in permuted sequence, and the second is a matrix which has the rows of A in permuted sequence.

7.2.3 Swap and Zeros

Assume A to be $n \times m$ and $J^{ij}$ to be $m \times p$, then

$$AJ^{ij} = \begin{bmatrix} 0 & 0 & \ldots & A_i & \ldots & 0 \end{bmatrix}$$

i.e. an $n \times p$ matrix of zeros with the i-th column of A in place of the j-th column. Assume A to be $n \times m$ and $J^{ij}$ to be $p \times n$, then

$$J^{ij}A = \begin{bmatrix} 0 \\ \vdots \\ A_j \\ \vdots \\ 0 \end{bmatrix}$$

i.e. a $p \times m$ matrix of zeros with the j-th row of A in place of the i-th row.


7.2.4 Rewriting product of elements

AkiBjl = (AeieTj B)kl = (AJijB)kl

AikBlj = (AT eieTj BT )kl = (AT JijBT )kl

AikBjl = (AT eieTj B)kl = (AT JijB)kl

7.2.5 The Single-entry Matrix in Scalar Expressions

Assume A is $n \times m$ and $J^{ij}$ is $m \times n$, then

$$\mathrm{Tr}(AJ^{ij}) = \mathrm{Tr}(J^{ij}A) = A_{ji}$$

Assume A is $n \times n$, $J^{ij}$ is $n \times m$ and B is $m \times n$, then

$$\mathrm{Tr}(AJ^{ij}B) = (A^T B^T)_{ij}$$

$$\mathrm{Tr}(AJ^{ji}B) = (BA)_{ij}$$

$$x^T A J^{ij} B x = (A^T x x^T B^T)_{ij}$$
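A sketch of these single-entry identities, assuming NumPy; the sizes and indices are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(11)
n, m = 3, 4
A = rng.standard_normal((n, n))
B = rng.standard_normal((m, n))

i, j = 2, 1
Jij = np.zeros((n, m)); Jij[i, j] = 1.0  # single-entry matrix J^{ij}
Jji = np.zeros((n, m)); Jji[j, i] = 1.0  # J^{ji}

assert np.isclose(np.trace(A @ Jij @ B), (A.T @ B.T)[i, j])
assert np.isclose(np.trace(A @ Jji @ B), (B @ A)[i, j])
```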

7.3 Solutions to Systems of Equations

7.3.1 Existence in Linear Systems

Assume A is $n \times m$ and consider the linear system

$$Ax = b$$

Construct the augmented matrix $B = [A\ b]$, then

Condition                      Solution
rank(A) = rank(B) = m          Unique solution x
rank(A) = rank(B) < m          Many solutions x
rank(A) < rank(B)              No solutions x

7.3.2 Standard Square

Assume A is square and invertible, then

$$Ax = b \quad \Rightarrow \quad x = A^{-1}b$$


7.3.3 Degenerated Square

7.3.4 Over-determined Rectangular

Assume A to be $n \times m$ with $n > m$ (tall) and $\mathrm{rank}(A) = m$, then

$$Ax = b \quad \Rightarrow \quad x = (A^T A)^{-1}A^T b = A^+ b$$

that is, if there exists a solution x at all! If there is no solution, the following can be useful:

$$Ax = b \quad \Rightarrow \quad x_{\min} = A^+ b$$

Now $x_{\min}$ is the vector x which minimizes $||Ax - b||^2$, i.e. the vector which is "least wrong". The matrix $A^+$ is the pseudo-inverse of A. See [1].

7.3.5 Under-determined Rectangular

Assume A is $n \times m$ with $n < m$ ("broad") and $\mathrm{rank}(A) = n$, then

$$Ax = b \quad \Rightarrow \quad x_{\min} = A^T(AA^T)^{-1}b = A^+ b$$

The equation has many solutions x, but $x_{\min}$ is the solution which minimizes $||Ax - b||^2$ and also the solution with the smallest norm $||x||^2$. The same holds for a matrix version: assume A is $n \times m$, X is $m \times n$ and B is $n \times n$, then

$$AX = B \quad \Rightarrow \quad X_{\min} = A^+ B$$

The equation has many solutions X, but $X_{\min}$ is the solution which minimizes $||AX - B||^2$ and also the solution with the smallest norm $||X||^2$. See [1].

Similar but different: assume A is square $n \times n$ and the matrices $B_0, B_1$ are $n \times N$, where $N > n$. Then, if $B_0$ has maximal rank,

$$AB_0 = B_1 \quad \Rightarrow \quad A_{\min} = B_1 B_0^T (B_0 B_0^T)^{-1}$$

where $A_{\min}$ denotes the matrix which is optimal in a least squares sense. An interpretation is that A is the linear approximation which maps the column vectors of $B_0$ into the column vectors of $B_1$.
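A sketch contrasting the two rectangular cases, assuming NumPy:

```python
import numpy as np

rng = np.random.default_rng(12)

# Over-determined (tall): x_min minimizes ||Ax - b||^2.
A, b = rng.standard_normal((8, 3)), rng.standard_normal(8)
x_min = np.linalg.pinv(A) @ b
assert np.allclose(x_min, np.linalg.lstsq(A, b, rcond=None)[0])

# Under-determined (broad): minimum-norm exact solution.
A, b = rng.standard_normal((3, 8)), rng.standard_normal(3)
x_min = A.T @ np.linalg.solve(A @ A.T, b)  # = A+ b for full row rank
assert np.allclose(A @ x_min, b)
assert np.allclose(x_min, np.linalg.pinv(A) @ b)
```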

7.3.6 Linear form and zeros

$$Ax = 0, \ \forall x \quad \Rightarrow \quad A = 0$$

7.3.7 Square form and zeros

If A is symmetric, then

$$x^T A x = 0, \ \forall x \quad \Rightarrow \quad A = 0$$

7.4 Block matrices

Let $A_{ij}$ denote the ij-th block of A.


7.4.1 Multiplication

Assuming the dimensions of the blocks match, we have

$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix}$$

7.4.2 The Determinant

The determinant can be expressed by use of the Schur complements

$$C_1 = A_{11} - A_{12}A_{22}^{-1}A_{21}$$

$$C_2 = A_{22} - A_{21}A_{11}^{-1}A_{12}$$

as

$$\left|\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\right| = |A_{22}| \cdot |C_1| = |A_{11}| \cdot |C_2|$$

7.4.3 The Inverse

The inverse can be expressed by use of

$$C_1 = A_{11} - A_{12}A_{22}^{-1}A_{21}$$

$$C_2 = A_{22} - A_{21}A_{11}^{-1}A_{12}$$

as

$$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} C_1^{-1} & -A_{11}^{-1}A_{12}C_2^{-1} \\ -C_2^{-1}A_{21}A_{11}^{-1} & C_2^{-1} \end{bmatrix} = \begin{bmatrix} A_{11}^{-1} + A_{11}^{-1}A_{12}C_2^{-1}A_{21}A_{11}^{-1} & -C_1^{-1}A_{12}A_{22}^{-1} \\ -A_{22}^{-1}A_{21}C_1^{-1} & A_{22}^{-1} + A_{22}^{-1}A_{21}C_1^{-1}A_{12}A_{22}^{-1} \end{bmatrix}$$

7.4.4 Block diagonal

For block diagonal matrices we have

$$\begin{bmatrix} A_{11} & 0 \\ 0 & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} (A_{11})^{-1} & 0 \\ 0 & (A_{22})^{-1} \end{bmatrix}$$

$$\left|\begin{bmatrix} A_{11} & 0 \\ 0 & A_{22} \end{bmatrix}\right| = |A_{11}| \cdot |A_{22}|$$
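A sketch (assuming NumPy) verifying the Schur-complement inverse and the determinant identity on a random partition; the diagonal dominance only keeps every block invertible.

```python
import numpy as np

rng = np.random.default_rng(13)
n1, n2 = 3, 2
M = rng.standard_normal((n1 + n2, n1 + n2)) + 5 * np.eye(n1 + n2)
A11, A12 = M[:n1, :n1], M[:n1, n1:]
A21, A22 = M[n1:, :n1], M[n1:, n1:]

inv = np.linalg.inv
C1 = A11 - A12 @ inv(A22) @ A21  # Schur complement of A22
C2 = A22 - A21 @ inv(A11) @ A12  # Schur complement of A11

block_inv = np.block([
    [inv(C1),                   -inv(A11) @ A12 @ inv(C2)],
    [-inv(C2) @ A21 @ inv(A11),  inv(C2)],
])
assert np.allclose(block_inv, inv(M))

det = np.linalg.det
assert np.isclose(det(M), det(A22) * det(C1))
assert np.isclose(det(M), det(A11) * det(C2))
```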

7.5 Positive Definite and Semi-definite Matrices

7.5.1 Definitions

A matrix A is positive definite if and only if

$$x^T A x > 0, \qquad \forall x \neq 0$$


A matrix A is positive semi-definite if and only if

$$x^T A x \geq 0, \qquad \forall x$$

Note that if A is positive definite, then A is also positive semi-definite.

7.5.2 Eigenvalues

The following holds with respect to the eigenvalues:

A pos. def.       ⇒ eig(A) > 0
A pos. semi-def.  ⇒ eig(A) ≥ 0

7.5.3 Trace

The following holds with respect to the trace:

A pos. def.       ⇒ Tr(A) > 0
A pos. semi-def.  ⇒ Tr(A) ≥ 0

7.5.4 Inverse

If A is positive definite, then A is invertible and A−1 is also positive definite.

7.5.5 Diagonal

If A is positive definite, then $A_{ii} > 0,\ \forall i$.

7.5.6 Decomposition I

The matrix A is positive semi-definite of rank r ⇔ there exists a matrix B of rank r such that $A = BB^T$.

The matrix A is positive definite ⇔ there exists an invertible matrix B such that $A = BB^T$.

7.5.7 Decomposition II

Assume A is an $n \times n$ positive semi-definite matrix of rank r. Then there exists an $n \times r$ matrix B of rank r such that $B^T A B = I$.

7.5.8 Equation with zeros

Assume A is positive semi-definite, then $X^T A X = 0 \ \Rightarrow\ AX = 0$.

7.5.9 Rank of product

Assume A is positive definite, then $\mathrm{rank}(BAB^T) = \mathrm{rank}(B)$.


7.5.10 Positive definite property

If A is $n \times n$ positive definite and B is $r \times n$ of rank r, then $BAB^T$ is positive definite.

7.5.11 Outer Product

If X is $n \times r$, where $n \leq r$ and $\mathrm{rank}(X) = n$, then $XX^T$ is positive definite.

7.5.12 Small perturbations

If A is positive definite and B is symmetric, then $A - tB$ is positive definite for sufficiently small t.

7.6 Integral Involving Dirac Delta Functions

Assuming A to be square, then

$$\int p(s)\,\delta(x - As)\,ds = \frac{1}{|A|}\,p(A^{-1}x)$$

Assuming A to be "underdetermined", i.e. "tall", then

$$\int p(s)\,\delta(x - As)\,ds = \begin{cases} \frac{1}{\sqrt{|A^T A|}}\,p(A^+ x) & \text{if } x = AA^+ x \\ 0 & \text{elsewhere} \end{cases}$$

See [4].


References

[1] S. Barnett, Matrices: Methods and Applications, Oxford Applied Mathematics and Computing Science Series, Clarendon Press, Oxford, 1990. Comment: Well-written and reasonably comprehensive.

[2] C. M. Bishop, Neural Networks for Pattern Recognition, Oxford UniversityPress, 1995.

[3] M. Brookes, Matrix Reference Manual, website available (May 20, 2004) at http://www.ee.ic.ac.uk/hp/staff/dmb/matrix/calculus.html. Comment: To the point and very comprehensive, but the HTML format makes it less appealing and less easy to read.

[4] M. Dyrholm, "Some Matrix Results" (August 23, 2004), http://www.imm.dtu.dk/~mad/papers/madrix.pdf

[5] M. S. Pedersen, Matricks, note available on the internet (May 20, 2004) at http://www.imm.dtu.dk/pubdb/views/edoc_download.php/2976/pdf/imm2976.pdf

[6] S. Roweis, Matrix Identities, note available on the internet (May 20, 2004) at http://www.cs.toronto.edu/~roweis/notes/matrixid.pdf

[7] G. Seber and A. Lee, Linear Regression Analysis, 2nd Ed. Wiley, New York,2002.

[8] S. M. Selby, Standard Mathematical Tables, 23rd edition, CRC Press, 1974 (first published in 1964). Comment: Table-like desktop reference.

[9] M. Welling, The Kalman Filter, Lecture Note, California Institute of Tech-nology.