The Matrix Cookbook

Kaare Brandt Petersen and Michael Syskind Pedersen

    Version: January 5, 2005

What is this? These pages are a collection of facts (identities, approximations, inequalities, relations, ...) about matrices and matters relating to them. It is collected in this form for the convenience of anyone who wants a quick desktop reference.

Disclaimer: The identities, approximations and relations presented here were obviously not invented but collected, borrowed and copied from a large amount of sources. These sources include similar but shorter notes found on the internet and appendices in books - see the references for a full list.

Errors: Very likely there are errors, typos, and mistakes for which we apologize and would be grateful to receive corrections at [email protected].

It is ongoing: The project of keeping a large repository of relations involving matrices is naturally ongoing and the version will be apparent from the date in the header.

Suggestions: Your suggestions for additional content or elaboration of some topics are most welcome at [email protected].

Acknowledgements: We would like to thank the following for discussions, proofreading, extensive corrections and suggestions: Esben Hoegh-Rasmussen and Vasile Sima.

Keywords: Matrix algebra, matrix relations, matrix identities, derivative of determinant, derivative of inverse matrix, differentiate a matrix.


    Contents

1 Basics
  1.1 Trace and Determinants
  1.2 The Special Case 2x2

2 Derivatives
  2.1 Derivatives of a Determinant
  2.2 Derivatives of an Inverse
  2.3 Derivatives of Matrices, Vectors and Scalar Forms
  2.4 Derivatives of Traces
  2.5 Derivatives of Structured Matrices

3 Inverses
  3.1 Exact Relations
  3.2 Implication on Inverses
  3.3 Approximations
  3.4 Generalized Inverse
  3.5 Pseudo Inverse

4 Complex Matrices
  4.1 Complex Derivatives

5 Decompositions
  5.1 Eigenvalues and Eigenvectors
  5.2 Singular Value Decomposition
  5.3 Triangular Decomposition

6 General Statistics and Probability
  6.1 Moments of any distribution
  6.2 Expectations

7 Gaussians
  7.1 One Dimensional
  7.2 Basics
  7.3 Moments
  7.4 Miscellaneous
  7.5 One Dimensional Mixture of Gaussians
  7.6 Mixture of Gaussians

8 Miscellaneous
  8.1 Functions and Series
  8.2 Indices, Entries and Vectors
  8.3 Solutions to Systems of Equations
  8.4 Block matrices
  8.5 Matrix Norms
  8.6 Positive Definite and Semi-definite Matrices
  8.7 Integral Involving Dirac Delta Functions
  8.8 Miscellaneous

A Proofs and Details


    Notation and Nomenclature

$A$          Matrix
$A_{ij}$     Matrix indexed for some purpose
$A_i$        Matrix indexed for some purpose
$A^{ij}$     Matrix indexed for some purpose
$A^n$        Matrix indexed for some purpose or the $n$.th power of a square matrix
$A^{-1}$     The inverse matrix of the matrix $A$
$A^+$        The pseudo inverse matrix of the matrix $A$
$A^{1/2}$    The square root of a matrix (if unique), not elementwise
$(A)_{ij}$   The $(i,j)$.th entry of the matrix $A$
$A_{ij}$     The $(i,j)$.th entry of the matrix $A$
$a$          Vector
$a_i$        Vector indexed for some purpose
$a_i$        The $i$.th element of the vector $a$
$a$          Scalar


    1 Basics

$(AB)^{-1} = B^{-1}A^{-1}$

$(ABC...)^{-1} = ...C^{-1}B^{-1}A^{-1}$

$(A^T)^{-1} = (A^{-1})^T$

$(A + B)^T = A^T + B^T$

$(AB)^T = B^TA^T$

$(ABC...)^T = ...C^TB^TA^T$

$(A^H)^{-1} = (A^{-1})^H$

$(A + B)^H = A^H + B^H$

$(AB)^H = B^HA^H$

$(ABC...)^H = ...C^HB^HA^H$

    1.1 Trace and Determinants

$\mathrm{Tr}(A) = \sum_i A_{ii} = \sum_i \lambda_i, \qquad \lambda_i = \mathrm{eig}(A)$

$\mathrm{Tr}(A) = \mathrm{Tr}(A^T)$

$\mathrm{Tr}(AB) = \mathrm{Tr}(BA)$

$\mathrm{Tr}(A + B) = \mathrm{Tr}(A) + \mathrm{Tr}(B)$

$\mathrm{Tr}(ABC) = \mathrm{Tr}(BCA) = \mathrm{Tr}(CAB)$

$\det(A) = \prod_i \lambda_i, \qquad \lambda_i = \mathrm{eig}(A)$

$\det(AB) = \det(A)\det(B)$, if $A$ and $B$ are invertible

$\det(A^{-1}) = \frac{1}{\det(A)}$
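The identities above lend themselves to quick numerical sanity checks. A minimal sketch, assuming Python with numpy is available (the sizes and seed are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Tr(A) is the sum of the eigenvalues of A
assert np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real)
# Tr(AB) = Tr(BA)
assert np.isclose(np.trace(A @ B), np.trace(B @ A))
# det(AB) = det(A) det(B)
assert np.isclose(np.linalg.det(A @ B), np.linalg.det(A) * np.linalg.det(B))
# det(A^{-1}) = 1/det(A)
assert np.isclose(np.linalg.det(np.linalg.inv(A)), 1 / np.linalg.det(A))
```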


    1.2 The Special Case 2x2

Consider the matrix $A$

$A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}$

Determinant and trace

$\det(A) = A_{11}A_{22} - A_{12}A_{21}$

$\mathrm{Tr}(A) = A_{11} + A_{22}$

Eigenvalues

$\lambda^2 - \lambda\,\mathrm{Tr}(A) + \det(A) = 0$

$\lambda_1 = \frac{\mathrm{Tr}(A) + \sqrt{\mathrm{Tr}(A)^2 - 4\det(A)}}{2} \qquad \lambda_2 = \frac{\mathrm{Tr}(A) - \sqrt{\mathrm{Tr}(A)^2 - 4\det(A)}}{2}$

$\lambda_1 + \lambda_2 = \mathrm{Tr}(A) \qquad \lambda_1\lambda_2 = \det(A)$

Eigenvectors

$v_1 \propto \begin{bmatrix} A_{12} \\ \lambda_1 - A_{11} \end{bmatrix} \qquad v_2 \propto \begin{bmatrix} A_{12} \\ \lambda_2 - A_{11} \end{bmatrix}$

Inverse

$A^{-1} = \frac{1}{\det(A)}\begin{bmatrix} A_{22} & -A_{12} \\ -A_{21} & A_{11} \end{bmatrix}$
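A small numerical illustration of the 2x2 formulas, assuming numpy; the particular matrix is arbitrary:

```python
import numpy as np

A = np.array([[3.0, 1.0],
              [2.0, 4.0]])
tr, det = np.trace(A), np.linalg.det(A)

# Eigenvalues from lambda^2 - Tr(A) lambda + det(A) = 0
disc = np.sqrt(tr**2 - 4 * det)
lam1, lam2 = (tr + disc) / 2, (tr - disc) / 2
assert np.allclose(sorted([lam1, lam2]), sorted(np.linalg.eigvals(A).real))

# Inverse from the adjugate formula
Ainv = np.array([[ A[1, 1], -A[0, 1]],
                 [-A[1, 0],  A[0, 0]]]) / det
assert np.allclose(Ainv, np.linalg.inv(A))
```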


    2 Derivatives

This section covers differentiation of a number of expressions with respect to a matrix $X$. Note that it is always assumed that $X$ has no special structure, i.e. that the elements of $X$ are independent (e.g. not symmetric, Toeplitz, positive definite). See section 2.5 for differentiation of structured matrices. The basic assumptions can be written in a formula as

$\frac{\partial X_{kl}}{\partial X_{ij}} = \delta_{ik}\delta_{lj}$

that is for e.g. vector forms,

$\left[\frac{\partial \mathbf{x}}{\partial y}\right]_i = \frac{\partial x_i}{\partial y} \qquad \left[\frac{\partial x}{\partial \mathbf{y}}\right]_i = \frac{\partial x}{\partial y_i} \qquad \left[\frac{\partial \mathbf{x}}{\partial \mathbf{y}}\right]_{ij} = \frac{\partial x_i}{\partial y_j}$

The following rules are general and very useful when deriving the differential of an expression ([10]):

$\partial A = 0 \quad (A \text{ is a constant})$  (1)
$\partial(\alpha X) = \alpha\,\partial X$  (2)
$\partial(X + Y) = \partial X + \partial Y$  (3)
$\partial(\mathrm{Tr}(X)) = \mathrm{Tr}(\partial X)$  (4)
$\partial(XY) = (\partial X)Y + X(\partial Y)$  (5)
$\partial(X \circ Y) = (\partial X)\circ Y + X\circ(\partial Y)$  (6)
$\partial(X \otimes Y) = (\partial X)\otimes Y + X\otimes(\partial Y)$  (7)
$\partial(X^{-1}) = -X^{-1}(\partial X)X^{-1}$  (8)
$\partial(\det(X)) = \det(X)\mathrm{Tr}(X^{-1}\partial X)$  (9)
$\partial(\ln(\det(X))) = \mathrm{Tr}(X^{-1}\partial X)$  (10)
$\partial X^T = (\partial X)^T$  (11)
$\partial X^H = (\partial X)^H$  (12)

    2.1 Derivatives of a Determinant

    2.1.1 General form

$\frac{\partial \det(Y)}{\partial x} = \det(Y)\,\mathrm{Tr}\left[Y^{-1}\frac{\partial Y}{\partial x}\right]$

2.1.2 Linear forms

$\frac{\partial \det(X)}{\partial X} = \det(X)(X^{-1})^T$

$\frac{\partial \det(AXB)}{\partial X} = \det(AXB)(X^{-1})^T = \det(AXB)(X^T)^{-1}$
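The linear form can be checked against a central finite difference; a sketch assuming numpy (step size and seed are illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
X, eps = rng.standard_normal((3, 3)), 1e-6

# Numerical gradient of det(X), entry by entry
num = np.zeros_like(X)
for i in range(3):
    for j in range(3):
        Xp, Xm = X.copy(), X.copy()
        Xp[i, j] += eps
        Xm[i, j] -= eps
        num[i, j] = (np.linalg.det(Xp) - np.linalg.det(Xm)) / (2 * eps)

assert np.allclose(num, np.linalg.det(X) * np.linalg.inv(X).T, atol=1e-5)
```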


    2.1.3 Square forms

    If X is square and invertible, then

$\frac{\partial \det(X^TAX)}{\partial X} = 2\det(X^TAX)X^{-T}$

If $X$ is not square but $A$ is symmetric, then

$\frac{\partial \det(X^TAX)}{\partial X} = 2\det(X^TAX)AX(X^TAX)^{-1}$

If $X$ is not square and $A$ is not symmetric, then

$\frac{\partial \det(X^TAX)}{\partial X} = \det(X^TAX)\left(AX(X^TAX)^{-1} + A^TX(X^TA^TX)^{-1}\right)$  (13)

    2.1.4 Other nonlinear forms

    Some special cases are (See [8])

$\frac{\partial \ln\det(X^TX)}{\partial X} = 2(X^+)^T$

$\frac{\partial \ln\det(X^TX)}{\partial X^+} = -2X^T$

$\frac{\partial \ln|\det(X)|}{\partial X} = (X^{-1})^T = (X^T)^{-1}$

$\frac{\partial \det(X^k)}{\partial X} = k\det(X^k)X^{-T}$

    See [7].

    2.2 Derivatives of an Inverse

    From [15] we have the basic identity

$\frac{\partial Y^{-1}}{\partial x} = -Y^{-1}\frac{\partial Y}{\partial x}Y^{-1}$

from which it follows

$\frac{\partial (X^{-1})_{kl}}{\partial X_{ij}} = -(X^{-1})_{ki}(X^{-1})_{jl}$

$\frac{\partial a^TX^{-1}b}{\partial X} = -X^{-T}ab^TX^{-T}$

$\frac{\partial \det(X^{-1})}{\partial X} = -\det(X^{-1})(X^{-1})^T$

$\frac{\partial\,\mathrm{Tr}(AX^{-1}B)}{\partial X} = -(X^{-1}BAX^{-1})^T$
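A numerical sketch of the basic identity for a hypothetical matrix function Y(x) = A + xB, so that dY/dx = B (numpy assumed):

```python
import numpy as np

rng = np.random.default_rng(2)
A, B = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
Y = lambda x: A + x * B          # dY/dx = B
x0, eps = 0.7, 1e-6

num = (np.linalg.inv(Y(x0 + eps)) - np.linalg.inv(Y(x0 - eps))) / (2 * eps)
analytic = -np.linalg.inv(Y(x0)) @ B @ np.linalg.inv(Y(x0))
assert np.allclose(num, analytic, atol=1e-5)
```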


    2.3 Derivatives of Matrices, Vectors and Scalar Forms

    2.3.1 First Order

$\frac{\partial x^Tb}{\partial x} = \frac{\partial b^Tx}{\partial x} = b$

$\frac{\partial a^TXb}{\partial X} = ab^T$

$\frac{\partial a^TX^Tb}{\partial X} = ba^T$

$\frac{\partial a^TXa}{\partial X} = \frac{\partial a^TX^Ta}{\partial X} = aa^T$

$\frac{\partial X}{\partial X_{ij}} = J^{ij}$

$\frac{\partial (XA)_{ij}}{\partial X_{mn}} = \delta_{im}(A)_{nj} = (J^{mn}A)_{ij}$

$\frac{\partial (X^TA)_{ij}}{\partial X_{mn}} = \delta_{in}(A)_{mj} = (J^{nm}A)_{ij}$

    2.3.2 Second Order

$\frac{\partial}{\partial X_{ij}}\sum_{klmn} X_{kl}X_{mn} = 2\sum_{kl} X_{kl}$

$\frac{\partial b^TX^TXc}{\partial X} = X(bc^T + cb^T)$

$\frac{\partial (Bx + b)^TC(Dx + d)}{\partial x} = B^TC(Dx + d) + D^TC^T(Bx + b)$

$\frac{\partial (X^TBX)_{kl}}{\partial X_{ij}} = \delta_{lj}(X^TB)_{ki} + \delta_{kj}(BX)_{il}$

$\frac{\partial (X^TBX)}{\partial X_{ij}} = X^TBJ^{ij} + J^{ji}BX \qquad (J^{ij})_{kl} = \delta_{ik}\delta_{jl}$

See Sec 8.2 for useful properties of the single-entry matrix $J^{ij}$.

$\frac{\partial x^TBx}{\partial x} = (B + B^T)x$

$\frac{\partial b^TX^TDXc}{\partial X} = D^TXbc^T + DXcb^T$

$\frac{\partial}{\partial X}(Xb + c)^TD(Xb + c) = (D + D^T)(Xb + c)b^T$


    Assume W is symmetric, then

$\frac{\partial}{\partial s}(x - As)^TW(x - As) = -2A^TW(x - As)$

$\frac{\partial}{\partial x}(x - As)^TW(x - As) = 2W(x - As)$

$\frac{\partial}{\partial A}(x - As)^TW(x - As) = -2W(x - As)s^T$

    2.3.3 Higher order and non-linear

$\frac{\partial}{\partial X}a^TX^nb = \sum_{r=0}^{n-1}(X^r)^Tab^T(X^{n-1-r})^T$  (14)

$\frac{\partial}{\partial X}a^T(X^n)^TX^nb = \sum_{r=0}^{n-1}\left[X^{n-1-r}ab^T(X^n)^TX^r + (X^r)^TX^nab^T(X^{n-1-r})^T\right]$  (15)

    See A.0.1 for a proof.

Assume $s$ and $r$ are functions of $x$, i.e. $s = s(x)$, $r = r(x)$, and that $A$ is a constant, then

$\frac{\partial}{\partial x}s^TAs = \left[\frac{\partial s}{\partial x}\right]^T(A + A^T)s$

$\frac{\partial}{\partial x}s^TAr = \left[\frac{\partial s}{\partial x}\right]^TAr + \left[\frac{\partial r}{\partial x}\right]^TA^Ts$

    2.3.4 Gradient and Hessian

Using the above we have for the gradient and the Hessian

$f = x^TAx + b^Tx$

$\nabla_xf = \frac{\partial f}{\partial x} = (A + A^T)x + b$

$\frac{\partial^2f}{\partial x\partial x^T} = A + A^T$
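A finite-difference sketch of the gradient formula, assuming numpy (the Hessian is the constant matrix A + A^T and needs no per-point check):

```python
import numpy as np

rng = np.random.default_rng(3)
A, b, x = rng.standard_normal((4, 4)), rng.standard_normal(4), rng.standard_normal(4)
f = lambda x: x @ A @ x + b @ x

eps, num = 1e-6, np.zeros(4)
for i in range(4):
    e = np.zeros(4)
    e[i] = eps
    num[i] = (f(x + e) - f(x - e)) / (2 * eps)

assert np.allclose(num, (A + A.T) @ x + b, atol=1e-5)
```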


    2.4 Derivatives of Traces

    2.4.1 First Order

$\frac{\partial}{\partial X}\mathrm{Tr}(X) = I$

$\frac{\partial}{\partial X}\mathrm{Tr}(XB) = B^T$  (16)

$\frac{\partial}{\partial X}\mathrm{Tr}(BXC) = B^TC^T$

$\frac{\partial}{\partial X}\mathrm{Tr}(BX^TC) = CB$

$\frac{\partial}{\partial X}\mathrm{Tr}(X^TC) = C$

$\frac{\partial}{\partial X}\mathrm{Tr}(BX^T) = B$

    2.4.2 Second Order

$\frac{\partial}{\partial X}\mathrm{Tr}(X^2) = 2X^T$

$\frac{\partial}{\partial X}\mathrm{Tr}(X^2B) = (XB + BX)^T$

$\frac{\partial}{\partial X}\mathrm{Tr}(X^TBX) = BX + B^TX$

$\frac{\partial}{\partial X}\mathrm{Tr}(XBX^T) = XB^T + XB$

$\frac{\partial}{\partial X}\mathrm{Tr}(X^TX) = 2X$

$\frac{\partial}{\partial X}\mathrm{Tr}(BXX^T) = (B + B^T)X$

$\frac{\partial}{\partial X}\mathrm{Tr}(B^TX^TCXB) = C^TXBB^T + CXBB^T$

$\frac{\partial}{\partial X}\mathrm{Tr}\left[X^TBXC\right] = BXC + B^TXC^T$

$\frac{\partial}{\partial X}\mathrm{Tr}(AXBX^TC) = A^TC^TXB^T + CAXB$

$\frac{\partial}{\partial X}\mathrm{Tr}\left[(AXb + c)(AXb + c)^T\right] = 2A^T(AXb + c)b^T$

    See [7].
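One of the second-order trace rules, checked the same way as before; a sketch assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(4)
B, X = rng.standard_normal((3, 3)), rng.standard_normal((3, 3))
f = lambda X: np.trace(X.T @ B @ X)

eps, num = 1e-6, np.zeros_like(X)
for i in range(3):
    for j in range(3):
        E = np.zeros_like(X)
        E[i, j] = eps
        num[i, j] = (f(X + E) - f(X - E)) / (2 * eps)

# dTr(X^T B X)/dX = BX + B^T X
assert np.allclose(num, B @ X + B.T @ X, atol=1e-5)
```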


    2.4.3 Higher Order

$\frac{\partial}{\partial X}\mathrm{Tr}(X^k) = k(X^{k-1})^T$

$\frac{\partial}{\partial X}\mathrm{Tr}(AX^k) = \sum_{r=0}^{k-1}(X^rAX^{k-r-1})^T$

$\frac{\partial}{\partial X}\mathrm{Tr}\left[B^TX^TCXX^TCXB\right] = CXX^TCXBB^T + C^TXBB^TX^TC^TX + CXBB^TX^TCX + C^TXX^TC^TXBB^T$

    2.4.4 Other

$\frac{\partial}{\partial X}\mathrm{Tr}(AX^{-1}B) = -(X^{-1}BAX^{-1})^T = -X^{-T}A^TB^TX^{-T}$

Assume $B$ and $C$ to be symmetric, then

$\frac{\partial}{\partial X}\mathrm{Tr}\left[(X^TCX)^{-1}A\right] = -(CX(X^TCX)^{-1})(A + A^T)(X^TCX)^{-1}$

$\frac{\partial}{\partial X}\mathrm{Tr}\left[(X^TCX)^{-1}(X^TBX)\right] = -2CX(X^TCX)^{-1}X^TBX(X^TCX)^{-1} + 2BX(X^TCX)^{-1}$

    See [7].

    2.5 Derivatives of Structured Matrices

Assume that the matrix $A$ has some structure, i.e. is symmetric, Toeplitz, etc. In that case the derivatives of the previous sections do not apply in general. Instead, consider the following general rule for differentiating a scalar function $f(A)$

$\frac{df}{dA_{ij}} = \sum_{kl}\frac{\partial f}{\partial A_{kl}}\frac{\partial A_{kl}}{\partial A_{ij}} = \mathrm{Tr}\left[\left[\frac{\partial f}{\partial A}\right]^T\frac{\partial A}{\partial A_{ij}}\right]$

The matrix differentiated with respect to itself is in this document referred to as the structure matrix of $A$ and is defined simply by

$\frac{\partial A}{\partial A_{ij}} = S^{ij}$

If $A$ has no special structure we have simply $S^{ij} = J^{ij}$, that is, the structure matrix is simply the single-entry matrix. Many structures have a representation in single-entry matrices, see Sec. 8.2.7 for more examples of structure matrices.


    2.5.1 Symmetric

If $A$ is symmetric, then $S^{ij} = J^{ij} + J^{ji} - J^{ij}J^{ij}$ and therefore

$\frac{df}{dA} = \left[\frac{\partial f}{\partial A}\right] + \left[\frac{\partial f}{\partial A}\right]^T - \mathrm{diag}\left[\frac{\partial f}{\partial A}\right]$

That is, e.g., ([5], [16]):

$\frac{\partial\,\mathrm{Tr}(AX)}{\partial X} = A + A^T - (A\circ I)$, see (20)  (17)

$\frac{\partial \det(X)}{\partial X} = \det(X)\left(2X^{-1} - (X^{-1}\circ I)\right)$  (18)

$\frac{\partial \ln\det(X)}{\partial X} = 2X^{-1} - (X^{-1}\circ I)$  (19)

    2.5.2 Diagonal

If $X$ is diagonal, then ([10]):

$\frac{\partial\,\mathrm{Tr}(AX)}{\partial X} = A\circ I$  (20)


    3 Inverses

    3.1 Exact Relations

    3.1.1 The Woodbury identity

$(A + CBC^T)^{-1} = A^{-1} - A^{-1}C(B^{-1} + C^TA^{-1}C)^{-1}C^TA^{-1}$

    If P,R are positive definite, then (see [17])

$(P^{-1} + B^TR^{-1}B)^{-1}B^TR^{-1} = PB^T(BPB^T + R)^{-1}$
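The Woodbury identity is easy to verify numerically; a sketch assuming numpy (the diagonal shifts only keep the random matrices safely invertible):

```python
import numpy as np

rng = np.random.default_rng(5)
n, k = 5, 2
A = rng.standard_normal((n, n)) + n * np.eye(n)
B = rng.standard_normal((k, k)) + k * np.eye(k)
C = rng.standard_normal((n, k))

Ai, Bi = np.linalg.inv(A), np.linalg.inv(B)
lhs = np.linalg.inv(A + C @ B @ C.T)
rhs = Ai - Ai @ C @ np.linalg.inv(Bi + C.T @ Ai @ C) @ C.T @ Ai
assert np.allclose(lhs, rhs)
```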

    3.1.2 The Kailath Variant

$(A + BC)^{-1} = A^{-1} - A^{-1}B(I + CA^{-1}B)^{-1}CA^{-1}$

    See [4] page 153.

    3.1.3 The Searle Set of Identities

The following set of identities can be found in [13], page 151:

$(I + A^{-1})^{-1} = A(A + I)^{-1}$

$(A + BB^T)^{-1}B = A^{-1}B(I + B^TA^{-1}B)^{-1}$

$(A^{-1} + B^{-1})^{-1} = A(A + B)^{-1}B = B(A + B)^{-1}A$

$A - A(A + B)^{-1}A = B - B(A + B)^{-1}B$

$A^{-1} + B^{-1} = A^{-1}(A + B)B^{-1}$

$(I + AB)^{-1} = I - A(I + BA)^{-1}B$

$(I + AB)^{-1}A = A(I + BA)^{-1}$

    3.2 Implication on Inverses

$(A + B)^{-1} = A^{-1} + B^{-1} \quad \Rightarrow \quad AB^{-1}A = BA^{-1}B$

See [13].

    3.2.1 A PosDef identity

    Assume P,R to be positive definite and invertible, then

$(P^{-1} + B^TR^{-1}B)^{-1}B^TR^{-1} = PB^T(BPB^T + R)^{-1}$

    See [?].


    3.3 Approximations

$(I + A)^{-1} = I - A + A^2 - A^3 + ...$

$A - A(I + A)^{-1}A \approx I - A^{-1}$ if $A$ large and symmetric

If $\sigma^2$ is small then

$(Q + \sigma^2M)^{-1} \approx Q^{-1} - \sigma^2Q^{-1}MQ^{-1}$

    3.4 Generalized Inverse

    3.4.1 Definition

A generalized inverse matrix of the matrix $A$ is any matrix $A^-$ such that

$AA^-A = A$

The matrix $A^-$ is not unique.

    3.5 Pseudo Inverse

    3.5.1 Definition

The pseudo inverse (or Moore-Penrose inverse) of a matrix $A$ is the matrix $A^+$ that fulfils

I. $AA^+A = A$
II. $A^+AA^+ = A^+$
III. $AA^+$ symmetric
IV. $A^+A$ symmetric

The matrix $A^+$ is unique and always exists.

    3.5.2 Properties

    Assume A+ to be the pseudo-inverse of A, then (See [3])

$(A^+)^+ = A$

$(A^T)^+ = (A^+)^T$

$(cA)^+ = (1/c)A^+$

$(A^TA)^+ = A^+(A^T)^+$

$(AA^T)^+ = (A^T)^+A^+$

Assume $A$ to have full rank, then

$(AA^+)(AA^+) = AA^+$

$(A^+A)(A^+A) = A^+A$

$\mathrm{Tr}(AA^+) = \mathrm{rank}(AA^+)$  (See [14])

$\mathrm{Tr}(A^+A) = \mathrm{rank}(A^+A)$  (See [14])


    3.5.3 Construction

    Assume that A has full rank, then

$A$  $n\times n$  square  $\mathrm{rank}(A) = n$:  $A^+ = A^{-1}$
$A$  $n\times m$  broad   $\mathrm{rank}(A) = n$:  $A^+ = A^T(AA^T)^{-1}$
$A$  $n\times m$  tall    $\mathrm{rank}(A) = m$:  $A^+ = (A^TA)^{-1}A^T$

Assume $A$ does not have full rank, i.e. $A$ is $n\times m$ and $\mathrm{rank}(A) = r < \min(n, m)$. The pseudo inverse $A^+$ can be constructed from the singular value decomposition $A = UDV^T$, by

$A^+ = VD^+U^T$

A different way is this: There always exist two matrices $C$ ($n\times r$) and $D$ ($r\times m$) of rank $r$, such that $A = CD$. Using these matrices it holds that

$A^+ = D^T(DD^T)^{-1}(C^TC)^{-1}C^T$

    See [3].
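A sketch of the SVD construction and the four Moore-Penrose conditions, assuming numpy (the rank-3 test matrix is built as a product on purpose):

```python
import numpy as np

rng = np.random.default_rng(6)
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 4))   # 5x4, rank 3

# A+ = V D+ U^T, where D+ inverts the nonzero singular values only
U, d, Vt = np.linalg.svd(A, full_matrices=False)
dplus = np.where(d > 1e-10, 1 / d, 0.0)
Aplus = Vt.T @ np.diag(dplus) @ U.T
assert np.allclose(Aplus, np.linalg.pinv(A))

# The four defining conditions I-IV
assert np.allclose(A @ Aplus @ A, A)
assert np.allclose(Aplus @ A @ Aplus, Aplus)
assert np.allclose(A @ Aplus, (A @ Aplus).T)
assert np.allclose(Aplus @ A, (Aplus @ A).T)
```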


    4 Complex Matrices

    4.1 Complex Derivatives

In order to differentiate an expression $f(z)$ with respect to a complex $z$, the Cauchy-Riemann equations have to be satisfied ([7]):

$\frac{df(z)}{dz} = \frac{\partial\,\Re(f(z))}{\partial\,\Re z} + i\frac{\partial\,\Im(f(z))}{\partial\,\Re z}$

and

$\frac{df(z)}{dz} = -i\frac{\partial\,\Re(f(z))}{\partial\,\Im z} + \frac{\partial\,\Im(f(z))}{\partial\,\Im z}$


Complex Gradient Matrix: If $f$ is a real function of a complex matrix $Z$, then the complex gradient matrix is given by ([2])

$\nabla f(Z) = 2\frac{df(Z)}{dZ^*} = \frac{\partial f(Z)}{\partial\,\Re Z} + i\frac{\partial f(Z)}{\partial\,\Im Z}$  (28)




    5 Decompositions

    5.1 Eigenvalues and Eigenvectors

    5.1.1 Definition

The eigenvectors $v$ and eigenvalues $\lambda$ are the ones satisfying

$Av_i = \lambda_iv_i$

$AV = VD, \qquad (D)_{ij} = \delta_{ij}\lambda_i$

where the columns of $V$ are the vectors $v_i$.

    5.1.2 General Properties

$\mathrm{eig}(AB) = \mathrm{eig}(BA)$
$A$ is $n\times m$  $\Rightarrow$  at most $\min(n, m)$ distinct $\lambda_i$
$\mathrm{rank}(A) = r$  $\Rightarrow$  at most $r$ non-zero $\lambda_i$

    5.1.3 Symmetric

    Assume A is symmetric, then

$VV^T = I$ (i.e. $V$ is orthogonal)
$\lambda_i \in \mathbb{R}$ (i.e. $\lambda_i$ is real)
$\mathrm{Tr}(A^p) = \sum_i \lambda_i^p$
$\mathrm{eig}(I + cA) = 1 + c\lambda_i$
$\mathrm{eig}(A^{-1}) = \lambda_i^{-1}$

    5.2 Singular Value Decomposition

Any $n\times m$ matrix $A$ can be written as

$A = UDV^T$

where

$U$ = eigenvectors of $AA^T$  ($n\times n$)
$D$ = $\sqrt{\mathrm{diag}(\mathrm{eig}(AA^T))}$  ($n\times m$)
$V$ = eigenvectors of $A^TA$  ($m\times m$)

    5.2.1 Symmetric Square decomposed into squares

Assume $A$ to be $n\times n$ and symmetric. Then

$[A] = [V][D][V^T]$

where $D$ is diagonal with the eigenvalues of $A$ and $V$ is orthogonal and the eigenvectors of $A$.


    5.2.2 Square decomposed into squares

Assume $A$ to be $n\times n$. Then

$[A] = [V][D][U^T]$

where $D$ is diagonal with the square root of the eigenvalues of $AA^T$, $V$ is the eigenvectors of $AA^T$ and $U^T$ is the eigenvectors of $A^TA$.

    5.2.3 Square decomposed into rectangular

Assume $\tilde{V}\tilde{D}\tilde{U}^T = 0$. Then we can expand the SVD of $A$ into

$[A] = [V\ \ \tilde{V}]\begin{bmatrix} D & 0 \\ 0 & \tilde{D} \end{bmatrix}\begin{bmatrix} U^T \\ \tilde{U}^T \end{bmatrix}$

where the SVD of $A$ is $A = VDU^T$.

    5.2.4 Rectangular decomposition I

Assume $A$ is $n\times m$

$[A] = [V][D][U^T]$

where $D$ is diagonal with the square root of the eigenvalues of $AA^T$, $V$ is the eigenvectors of $AA^T$ and $U^T$ is the eigenvectors of $A^TA$.

    5.2.5 Rectangular decomposition II

Assume $A$ is $n\times m$

$[A] = [V][D][U^T]$

    5.2.6 Rectangular decomposition III

Assume $A$ is $n\times m$

$[A] = [V][D][U^T]$

where $D$ is diagonal with the square root of the eigenvalues of $AA^T$, $V$ is the eigenvectors of $AA^T$ and $U^T$ is the eigenvectors of $A^TA$.

    5.3 Triangular Decomposition

    5.3.1 Cholesky-decomposition

    Assume A is positive definite, then

$A = B^TB$

    where B is a unique upper triangular matrix.
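Note that numpy's Cholesky routine returns the lower triangular factor $L$ with $A = LL^T$, so $B = L^T$ in the convention above; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(7)
M = rng.standard_normal((4, 4))
A = M @ M.T + 4 * np.eye(4)      # positive definite by construction

L = np.linalg.cholesky(A)        # lower triangular, A = L L^T
B = L.T                          # upper triangular factor, A = B^T B
assert np.allclose(A, B.T @ B)
```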


    6 General Statistics and Probability

    6.1 Moments of any distribution

    6.1.1 Mean and covariance of linear forms

    Assume X and x to be a matrix and a vector of random variables. Then

$E[AXB + C] = AE[X]B + C$

$\mathrm{Var}[Ax] = A\mathrm{Var}[x]A^T$

$\mathrm{Cov}[Ax, By] = A\mathrm{Cov}[x, y]B^T$

    See [14].
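A Monte Carlo sketch of the covariance rule, assuming numpy (the covariance matrix, sample size and tolerance are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((2, 3))
Sigma = np.array([[2.0, 0.3, 0.0],
                  [0.3, 1.0, 0.2],
                  [0.0, 0.2, 0.5]])

# Sample covariance of Ax should approach A Var[x] A^T
x = rng.multivariate_normal(np.zeros(3), Sigma, size=200_000)
sample_cov = np.cov((x @ A.T).T)
assert np.allclose(sample_cov, A @ Sigma @ A.T, atol=0.05)
```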

    6.1.2 Mean and Variance of Square Forms

Assume $A$ is symmetric, $c = E[x]$ and $\Sigma = \mathrm{Var}[x]$. Assume also that all coordinates $x_i$ are independent, have the same central moments $\mu_1, \mu_2, \mu_3, \mu_4$ and denote $a = \mathrm{diag}(A)$. Then

$E[x^TAx] = \mathrm{Tr}(A\Sigma) + c^TAc$

$\mathrm{Var}[x^TAx] = 2\mu_2^2\mathrm{Tr}(A^2) + 4\mu_2c^TA^2c + 4\mu_3c^TAa + (\mu_4 - 3\mu_2^2)a^Ta$

See [14].

    6.2 Expectations

Assume $x$ to be a stochastic vector with mean $m$, covariance $M$ and central moments $v_r = E[(x - m)^r]$.

    6.2.1 Linear Forms

$E[Ax + b] = Am + b$

$E[Ax] = Am$

$E[x + b] = m + b$

    6.2.2 Quadratic Forms

$E[(Ax + a)(Bx + b)^T] = AMB^T + (Am + a)(Bm + b)^T$

$E[xx^T] = M + mm^T$

$E[xa^Tx] = (M + mm^T)a$

$E[x^Tax^T] = a^T(M + mm^T)$

$E[(Ax)(Ax)^T] = A(M + mm^T)A^T$

$E[(x + a)(x + a)^T] = M + (m + a)(m + a)^T$


$E[(Ax + a)^T(Bx + b)] = \mathrm{Tr}(AMB^T) + (Am + a)^T(Bm + b)$

$E[x^Tx] = \mathrm{Tr}(M) + m^Tm$

$E[x^TAx] = \mathrm{Tr}(AM) + m^TAm$

$E[(Ax)^T(Ax)] = \mathrm{Tr}(AMA^T) + (Am)^T(Am)$

$E[(x + a)^T(x + a)] = \mathrm{Tr}(M) + (m + a)^T(m + a)$

    See [7].

    6.2.3 Cubic Forms

    Assume x to be independent, then

$E[(Ax + a)(Bx + b)^T(Cx + c)] = A\mathrm{diag}(B^TC)v_3 + \mathrm{Tr}(BMC^T)(Am + a) + AMC^T(Bm + b) + (AMB^T + (Am + a)(Bm + b)^T)(Cm + c)$

$E[xx^Tx] = v_3 + 2Mm + (\mathrm{Tr}(M) + m^Tm)m$

$E[(Ax + a)(Ax + a)^T(Ax + a)] = A\mathrm{diag}(A^TA)v_3 + [2AMA^T + (Am + a)(Am + a)^T](Am + a) + \mathrm{Tr}(AMA^T)(Am + a)$

$E[(Ax + a)b^T(Cx + c)(Dx + d)^T] = (Am + a)b^T(CMD^T + (Cm + c)(Dm + d)^T) + (AMC^T + (Am + a)(Cm + c)^T)b(Dm + d)^T + b^T(Cm + c)(AMD^T - (Am + a)(Dm + d)^T)$

    See [7].


    7 Gaussians

    7.1 One Dimensional

    7.1.1 Density and Normalization

    The density is

$p(s) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left(-\frac{(s - \mu)^2}{2\sigma^2}\right)$

Normalization integrals

$\int e^{-\frac{(s - \mu)^2}{2\sigma^2}}\,ds = \sqrt{2\pi\sigma^2}$

$\int e^{-(ax^2 + bx + c)}\,dx = \sqrt{\frac{\pi}{a}}\exp\left[\frac{b^2 - 4ac}{4a}\right]$

$\int e^{c_2x^2 + c_1x + c_0}\,dx = \sqrt{\frac{\pi}{-c_2}}\exp\left[\frac{c_1^2 - 4c_2c_0}{-4c_2}\right]$

7.1.2 Completing the Squares

$c_2x^2 + c_1x + c_0 = -a(x - b)^2 + w$

$-a = c_2 \qquad b = -\frac{c_1}{2c_2} \qquad w = c_0 - \frac{c_1^2}{4c_2}$

or

$c_2x^2 + c_1x + c_0 = -\frac{1}{2\sigma^2}(x - \mu)^2 + d$

$\mu = \frac{-c_1}{2c_2} \qquad \sigma^2 = \frac{-1}{2c_2} \qquad d = c_0 - \frac{c_1^2}{4c_2}$

    7.1.3 Moments

    If the density is expressed by

$p(x) = \frac{1}{\sqrt{2\pi\sigma^2}}\exp\left[-\frac{(x - \mu)^2}{2\sigma^2}\right] \quad \text{or} \quad p(x) = C\exp(c_2x^2 + c_1x)$

then the first few basic moments are

$\langle x\rangle = \mu = \frac{-c_1}{2c_2}$

$\langle x^2\rangle = \sigma^2 + \mu^2 = \frac{-1}{2c_2} + \left(\frac{-c_1}{2c_2}\right)^2$

$\langle x^3\rangle = 3\sigma^2\mu + \mu^3 = \frac{c_1}{(2c_2)^2}\left[3 - \frac{c_1^2}{2c_2}\right]$

$\langle x^4\rangle = \mu^4 + 6\mu^2\sigma^2 + 3\sigma^4 = \left(\frac{c_1}{2c_2}\right)^4 + 6\left(\frac{c_1}{2c_2}\right)^2\left(\frac{-1}{2c_2}\right) + 3\left(\frac{1}{2c_2}\right)^2$

and the central moments are


$\langle(x - \mu)\rangle = 0$

$\langle(x - \mu)^2\rangle = \sigma^2 = \frac{-1}{2c_2}$

$\langle(x - \mu)^3\rangle = 0$

$\langle(x - \mu)^4\rangle = 3\sigma^4 = 3\left[\frac{1}{2c_2}\right]^2$

A kind of pseudo-moments (un-normalized integrals) can easily be derived as

$\int \exp(c_2x^2 + c_1x)x^n\,dx = Z\langle x^n\rangle = \sqrt{\frac{\pi}{-c_2}}\exp\left[\frac{c_1^2}{-4c_2}\right]\langle x^n\rangle$

From the un-centralized moments one can derive other entities like

$\langle x^2\rangle - \langle x\rangle^2 = \sigma^2 = \frac{-1}{2c_2}$

$\langle x^3\rangle - \langle x^2\rangle\langle x\rangle = 2\sigma^2\mu = \frac{2c_1}{(2c_2)^2}$

$\langle x^4\rangle - \langle x^2\rangle^2 = 2\sigma^4 + 4\mu^2\sigma^2 = \frac{2}{(2c_2)^2}\left[1 - \frac{c_1^2}{c_2}\right]$

7.2 Basics

    7.2.1 Density and normalization

The density of $x \sim \mathcal{N}(m, \Sigma)$ is

$p(x) = \frac{1}{\sqrt{\det(2\pi\Sigma)}}\exp\left[-\frac{1}{2}(x - m)^T\Sigma^{-1}(x - m)\right]$

Note that if $x$ is $d$-dimensional, then $\det(2\pi\Sigma) = (2\pi)^d\det(\Sigma)$.

Integration and normalization

$\int \exp\left[-\frac{1}{2}(x - m)^T\Sigma^{-1}(x - m)\right]dx = \sqrt{\det(2\pi\Sigma)}$

$\int \exp\left[-\frac{1}{2}x^TAx + b^Tx\right]dx = \sqrt{\det(2\pi A^{-1})}\exp\left[\frac{1}{2}b^TA^{-1}b\right]$

$\int \exp\left[-\frac{1}{2}\mathrm{Tr}(S^TAS) + \mathrm{Tr}(B^TS)\right]dS = \sqrt{\det(2\pi A^{-1})}\exp\left[\frac{1}{2}\mathrm{Tr}(B^TA^{-1}B)\right]$

The derivatives of the density are

$\frac{\partial p(x)}{\partial x} = -p(x)\,\Sigma^{-1}(x - m)$

$\frac{\partial^2p}{\partial x\partial x^T} = p(x)\left(\Sigma^{-1}(x - m)(x - m)^T\Sigma^{-1} - \Sigma^{-1}\right)$

7.2.2 Linear combination

Assume $x \sim \mathcal{N}(m_x, \Sigma_x)$ and $y \sim \mathcal{N}(m_y, \Sigma_y)$ are independent, then

$Ax + By + c \sim \mathcal{N}(Am_x + Bm_y + c,\ A\Sigma_xA^T + B\Sigma_yB^T)$


    7.2.3 Rearranging Means

$\mathcal{N}_{Ax}[m, \Sigma] = \frac{\sqrt{\det(2\pi(A^T\Sigma^{-1}A)^{-1})}}{\sqrt{\det(2\pi\Sigma)}}\,\mathcal{N}_x[A^{-1}m, (A^T\Sigma^{-1}A)^{-1}]$

    7.2.4 Rearranging into squared form

    If A is symmetric, then

$-\frac{1}{2}x^TAx + b^Tx = -\frac{1}{2}(x - A^{-1}b)^TA(x - A^{-1}b) + \frac{1}{2}b^TA^{-1}b$

$-\frac{1}{2}\mathrm{Tr}(X^TAX) + \mathrm{Tr}(B^TX) = -\frac{1}{2}\mathrm{Tr}\left[(X - A^{-1}B)^TA(X - A^{-1}B)\right] + \frac{1}{2}\mathrm{Tr}(B^TA^{-1}B)$

    7.2.5 Sum of two squared forms

In vector formulation (assuming $\Sigma_1, \Sigma_2$ are symmetric)

$-\frac{1}{2}(x - m_1)^T\Sigma_1^{-1}(x - m_1) - \frac{1}{2}(x - m_2)^T\Sigma_2^{-1}(x - m_2) = -\frac{1}{2}(x - m_c)^T\Sigma_c^{-1}(x - m_c) + C$

$\Sigma_c^{-1} = \Sigma_1^{-1} + \Sigma_2^{-1}$

$m_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}(\Sigma_1^{-1}m_1 + \Sigma_2^{-1}m_2)$

$C = \frac{1}{2}(m_1^T\Sigma_1^{-1} + m_2^T\Sigma_2^{-1})(\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}(\Sigma_1^{-1}m_1 + \Sigma_2^{-1}m_2) - \frac{1}{2}\left(m_1^T\Sigma_1^{-1}m_1 + m_2^T\Sigma_2^{-1}m_2\right)$

In a trace formulation (assuming $\Sigma_1, \Sigma_2$ are symmetric)

$-\frac{1}{2}\mathrm{Tr}((X - M_1)^T\Sigma_1^{-1}(X - M_1)) - \frac{1}{2}\mathrm{Tr}((X - M_2)^T\Sigma_2^{-1}(X - M_2)) = -\frac{1}{2}\mathrm{Tr}\left[(X - M_c)^T\Sigma_c^{-1}(X - M_c)\right] + C$

$\Sigma_c^{-1} = \Sigma_1^{-1} + \Sigma_2^{-1}$

$M_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}(\Sigma_1^{-1}M_1 + \Sigma_2^{-1}M_2)$

$C = \frac{1}{2}\mathrm{Tr}\left[(\Sigma_1^{-1}M_1 + \Sigma_2^{-1}M_2)^T(\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}(\Sigma_1^{-1}M_1 + \Sigma_2^{-1}M_2)\right] - \frac{1}{2}\mathrm{Tr}\left(M_1^T\Sigma_1^{-1}M_1 + M_2^T\Sigma_2^{-1}M_2\right)$


    7.2.6 Product of gaussian densities

Let $\mathcal{N}_x(m, \Sigma)$ denote a density of $x$, then

$\mathcal{N}_x(m_1, \Sigma_1)\cdot\mathcal{N}_x(m_2, \Sigma_2) = c_c\,\mathcal{N}_x(m_c, \Sigma_c)$

$c_c = \mathcal{N}_{m_1}(m_2, (\Sigma_1 + \Sigma_2)) = \frac{1}{\sqrt{\det(2\pi(\Sigma_1 + \Sigma_2))}}\exp\left[-\frac{1}{2}(m_1 - m_2)^T(\Sigma_1 + \Sigma_2)^{-1}(m_1 - m_2)\right]$

$m_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}(\Sigma_1^{-1}m_1 + \Sigma_2^{-1}m_2)$

$\Sigma_c = (\Sigma_1^{-1} + \Sigma_2^{-1})^{-1}$

but note that the product is not normalized as a density of $x$.
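A pointwise numerical check of the product formula, assuming numpy (means and covariances are arbitrary diagonal examples):

```python
import numpy as np

def gauss(x, m, S):
    d = x - m
    return np.exp(-0.5 * d @ np.linalg.solve(S, d)) / np.sqrt(np.linalg.det(2 * np.pi * S))

m1, m2 = np.array([1.0, 0.0]), np.array([0.0, 2.0])
S1, S2 = np.diag([1.0, 2.0]), np.diag([0.5, 1.5])

S1i, S2i = np.linalg.inv(S1), np.linalg.inv(S2)
Sc = np.linalg.inv(S1i + S2i)
mc = Sc @ (S1i @ m1 + S2i @ m2)
cc = gauss(m1, m2, S1 + S2)

x = np.array([0.3, -0.8])        # any test point
assert np.isclose(gauss(x, m1, S1) * gauss(x, m2, S2), cc * gauss(x, mc, Sc))
```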

    7.3 Moments

    7.3.1 Mean and covariance of linear forms

First and second moments. Assume $x \sim \mathcal{N}(m, \Sigma)$

$E(x) = m$

$\mathrm{Cov}(x, x) = \mathrm{Var}(x) = \Sigma = E(xx^T) - E(x)E(x^T) = E(xx^T) - mm^T$

As for any other distribution it holds for gaussians that

$E[Ax] = AE[x]$

$\mathrm{Var}[Ax] = A\mathrm{Var}[x]A^T$

$\mathrm{Cov}[Ax, By] = A\mathrm{Cov}[x, y]B^T$

    7.3.2 Mean and variance of square forms

Mean and variance of square forms: Assume $x \sim \mathcal{N}(m, \Sigma)$

$E(xx^T) = \Sigma + mm^T$

$E[x^TAx] = \mathrm{Tr}(A\Sigma) + m^TAm$

$\mathrm{Var}(x^TAx) = 2\sigma^4\mathrm{Tr}(A^2) + 4\sigma^2m^TA^2m \qquad (\text{for } \Sigma = \sigma^2I)$

$E[(x - m')^TA(x - m')] = (m - m')^TA(m - m') + \mathrm{Tr}(A\Sigma)$

Assume $x \sim \mathcal{N}(0, \sigma^2I)$ and $A$ and $B$ to be symmetric, then

$\mathrm{Cov}(x^TAx, x^TBx) = 2\sigma^4\mathrm{Tr}(AB)$


    7.3.3 Cubic forms

$E[xb^Txx^T] = mb^T(M + mm^T) + (M + mm^T)bm^T + b^Tm(M - mm^T)$

    7.3.4 Mean of Quartic Forms

$E[xx^Txx^T] = 2(\Sigma + mm^T)^2 + m^Tm(\Sigma - mm^T) + \mathrm{Tr}(\Sigma)(\Sigma + mm^T)$

$E[xx^TAxx^T] = (\Sigma + mm^T)(A + A^T)(\Sigma + mm^T) + m^TAm(\Sigma - mm^T) + \mathrm{Tr}(A\Sigma)(\Sigma + mm^T)$

$E[x^Txx^Tx] = 2\mathrm{Tr}(\Sigma^2) + 4m^T\Sigma m + (\mathrm{Tr}(\Sigma) + m^Tm)^2$

$E[x^TAxx^TBx] = \mathrm{Tr}[A\Sigma(B + B^T)\Sigma] + m^T(A + A^T)\Sigma(B + B^T)m + (\mathrm{Tr}(A\Sigma) + m^TAm)(\mathrm{Tr}(B\Sigma) + m^TBm)$

$E[a^Txb^Txc^Txd^Tx] = (a^T(\Sigma + mm^T)b)(c^T(\Sigma + mm^T)d) + (a^T(\Sigma + mm^T)c)(b^T(\Sigma + mm^T)d) + (a^T(\Sigma + mm^T)d)(b^T(\Sigma + mm^T)c) - 2a^Tmb^Tmc^Tmd^Tm$

$E[(Ax + a)(Bx + b)^T(Cx + c)(Dx + d)^T]$
$= [A\Sigma B^T + (Am + a)(Bm + b)^T][C\Sigma D^T + (Cm + c)(Dm + d)^T]$
$\quad + [A\Sigma C^T + (Am + a)(Cm + c)^T][B\Sigma D^T + (Bm + b)(Dm + d)^T]$
$\quad + (Bm + b)^T(Cm + c)[A\Sigma D^T - (Am + a)(Dm + d)^T]$
$\quad + \mathrm{Tr}(B\Sigma C^T)[A\Sigma D^T + (Am + a)(Dm + d)^T]$

$E[(Ax + a)^T(Bx + b)(Cx + c)^T(Dx + d)]$
$= \mathrm{Tr}[A\Sigma(C^TD + D^TC)\Sigma B^T]$
$\quad + [(Am + a)^TB + (Bm + b)^TA]\Sigma[C^T(Dm + d) + D^T(Cm + c)]$
$\quad + [\mathrm{Tr}(A\Sigma B^T) + (Am + a)^T(Bm + b)][\mathrm{Tr}(C\Sigma D^T) + (Cm + c)^T(Dm + d)]$

    See [7].

    7.3.5 Moments

$E[x] = \sum_k \rho_km_k$

$\mathrm{Cov}(x) = \sum_k\sum_{k'} \rho_k\rho_{k'}\left(\Sigma_k + m_km_k^T - m_km_{k'}^T\right)$


    7.4 Miscellaneous

    7.4.1 Whitening

Assume $x \sim \mathcal{N}(m, \Sigma)$, then

$z = \Sigma^{-1/2}(x - m) \sim \mathcal{N}(0, I)$

Conversely, having $z \sim \mathcal{N}(0, I)$ one can generate data $x \sim \mathcal{N}(m, \Sigma)$ by setting

$x = \Sigma^{1/2}z + m \sim \mathcal{N}(m, \Sigma)$

Note that $\Sigma^{1/2}$ means the matrix which fulfils $\Sigma^{1/2}\Sigma^{1/2} = \Sigma$, and that it exists and is unique since $\Sigma$ is positive definite.
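A sketch of whitening via the symmetric square root from the eigendecomposition, assuming numpy:

```python
import numpy as np

rng = np.random.default_rng(10)
m = np.array([1.0, -2.0, 0.5])
M = rng.standard_normal((3, 3))
Sigma = M @ M.T + np.eye(3)      # positive definite

# Unique symmetric square root: Sigma = V diag(lam) V^T  =>  Sigma^{1/2} = V diag(sqrt(lam)) V^T
lam, V = np.linalg.eigh(Sigma)
Shalf = V @ np.diag(np.sqrt(lam)) @ V.T
assert np.allclose(Shalf @ Shalf, Sigma)

# z = Sigma^{-1/2}(x - m) should have (approximately) identity covariance
x = rng.multivariate_normal(m, Sigma, size=100_000)
z = (x - m) @ np.linalg.inv(Shalf).T
assert np.allclose(np.cov(z.T), np.eye(3), atol=0.05)
```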

    7.4.2 The Chi-Square connection

Assume $x \sim \mathcal{N}(m, \Sigma)$ and $x$ to be $n$ dimensional, then

$z = (x - m)^T\Sigma^{-1}(x - m) \sim \chi^2_n$

    7.4.3 Entropy

Entropy of a D-dimensional gaussian

$H(x) = -\int \mathcal{N}(m, \Sigma)\ln\mathcal{N}(m, \Sigma)\,dx = \ln\sqrt{\det(2\pi\Sigma)} + \frac{D}{2}$

    7.5 One Dimensional Mixture of Gaussians

    7.5.1 Density and Normalization

$p(s) = \sum_k^K \frac{\rho_k}{\sqrt{2\pi\sigma_k^2}}\exp\left[-\frac{1}{2}\frac{(s - \mu_k)^2}{\sigma_k^2}\right]$

    7.5.2 Moments

A useful fact of MoG is that

$\langle x^n\rangle = \sum_k \rho_k\langle x^n\rangle_k$

where $\langle\cdot\rangle_k$ denotes average with respect to the $k$.th component. We can calculate the first four moments from the densities

$p(x) = \sum_k \rho_k\frac{1}{\sqrt{2\pi\sigma_k^2}}\exp\left[-\frac{1}{2}\frac{(x - \mu_k)^2}{\sigma_k^2}\right]$

$p(x) = \sum_k \rho_kC_k\exp\left[c_{k2}x^2 + c_{k1}x\right]$

as


$\langle x\rangle = \sum_k \rho_k\mu_k = \sum_k \rho_k\left[\frac{-c_{k1}}{2c_{k2}}\right]$

$\langle x^2\rangle = \sum_k \rho_k(\sigma_k^2 + \mu_k^2) = \sum_k \rho_k\left[\frac{-1}{2c_{k2}} + \left(\frac{-c_{k1}}{2c_{k2}}\right)^2\right]$

$\langle x^3\rangle = \sum_k \rho_k(3\sigma_k^2\mu_k + \mu_k^3) = \sum_k \rho_k\left[\frac{c_{k1}}{(2c_{k2})^2}\left[3 - \frac{c_{k1}^2}{2c_{k2}}\right]\right]$

$\langle x^4\rangle = \sum_k \rho_k(\mu_k^4 + 6\mu_k^2\sigma_k^2 + 3\sigma_k^4) = \sum_k \rho_k\left[\left(\frac{1}{2c_{k2}}\right)^2\left[\left(\frac{c_{k1}^2}{2c_{k2}}\right)^2 - 6\frac{c_{k1}^2}{2c_{k2}} + 3\right]\right]$

If all the gaussians are centered, i.e. $\mu_k = 0$ for all $k$, then

$\langle x\rangle = 0$

$\langle x^2\rangle = \sum_k \rho_k\sigma_k^2 = \sum_k \rho_k\left[\frac{-1}{2c_{k2}}\right]$

$\langle x^3\rangle = 0$

$\langle x^4\rangle = \sum_k \rho_k3\sigma_k^4 = \sum_k \rho_k3\left[\frac{-1}{2c_{k2}}\right]^2$

From the un-centralized moments one can derive other entities like

$\langle x^2\rangle - \langle x\rangle^2 = \sum_{k,k'} \rho_k\rho_{k'}\left[\sigma_k^2 + \mu_k^2 - \mu_k\mu_{k'}\right]$

$\langle x^3\rangle - \langle x^2\rangle\langle x\rangle = \sum_{k,k'} \rho_k\rho_{k'}\left[3\sigma_k^2\mu_k + \mu_k^3 - (\sigma_k^2 + \mu_k^2)\mu_{k'}\right]$

$\langle x^4\rangle - \langle x^2\rangle^2 = \sum_{k,k'} \rho_k\rho_{k'}\left[\mu_k^4 + 6\mu_k^2\sigma_k^2 + 3\sigma_k^4 - (\sigma_k^2 + \mu_k^2)(\sigma_{k'}^2 + \mu_{k'}^2)\right]$

7.6 Mixture of Gaussians

    7.6.1 Density

    The variable x is distributed as a mixture of gaussians if it has the density

$p(x) = \sum_{k=1}^K \rho_k\frac{1}{\sqrt{\det(2\pi\Sigma_k)}}\exp\left[-\frac{1}{2}(x - m_k)^T\Sigma_k^{-1}(x - m_k)\right]$

where the $\rho_k$ sum to 1 and the $\Sigma_k$ all are positive definite.


    8 Miscellaneous

    8.1 Functions and Series

    8.1.1 Finite Series

$(X^n - I)(X - I)^{-1} = I + X + X^2 + ... + X^{n-1}$

    8.1.2 Taylor Expansion of Scalar Function

Consider some scalar function $f(x)$ which takes the vector $x$ as an argument. This we can Taylor expand around $x_0$

$f(x) \cong f(x_0) + g(x_0)^T(x - x_0) + \frac{1}{2}(x - x_0)^TH(x_0)(x - x_0)$

where

$g(x_0) = \left.\frac{\partial f(x)}{\partial x}\right|_{x_0} \qquad H(x_0) = \left.\frac{\partial^2f(x)}{\partial x\partial x^T}\right|_{x_0}$

    8.1.3 Taylor Expansion of Vector Functions

    8.1.4 Matrix Functions by Infinite Series

As for analytical functions in one dimension, one can define a matrix function for square matrices $X$ by an infinite series

$f(X) = \sum_{n=0}^\infty c_nX^n$

assuming the limit exists and is finite. If the coefficients $c_n$ fulfil $\sum_n c_nx^n < \infty$, then one can prove that the above matrix function exists and is finite, see [1].


    8.1.5 Exponential Matrix Function

In analogy to the ordinary scalar exponential function, one can define exponential and logarithmic matrix functions:

$e^A = \sum_{n=0}^\infty \frac{1}{n!}A^n = I + A + \frac{1}{2}A^2 + ...$

$e^{-A} = \sum_{n=0}^\infty \frac{1}{n!}(-1)^nA^n = I - A + \frac{1}{2}A^2 - ...$

$e^{tA} = \sum_{n=0}^\infty \frac{1}{n!}(tA)^n = I + tA + \frac{1}{2}t^2A^2 + ...$

$\ln(I + A) = \sum_{n=1}^\infty \frac{(-1)^{n-1}}{n}A^n = A - \frac{1}{2}A^2 + \frac{1}{3}A^3 - ...$

Some of the properties of the exponential function are [1]

$e^Ae^B = e^{A+B} \quad \text{if} \quad AB = BA$

$(e^A)^{-1} = e^{-A}$

$\frac{d}{dt}e^{tA} = Ae^{tA}, \quad t \in \mathbb{R}$
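A truncated-series sketch of the matrix exponential and two of the listed properties, assuming numpy (40 terms is ample for the small matrix used):

```python
import numpy as np

def expm_series(A, terms=40):
    # Truncate e^A = sum_n A^n / n!
    E, term = np.eye(len(A)), np.eye(len(A))
    for n in range(1, terms):
        term = term @ A / n
        E = E + term
    return E

rng = np.random.default_rng(11)
A = 0.5 * rng.standard_normal((3, 3))

# (e^A)^{-1} = e^{-A}
assert np.allclose(np.linalg.inv(expm_series(A)), expm_series(-A))
# e^A e^B = e^{A+B} when AB = BA; here B = 2A commutes with A
assert np.allclose(expm_series(A) @ expm_series(2 * A), expm_series(3 * A))
```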

    8.1.6 Trigonometric Functions

$\sin(A) = \sum_{n=0}^\infty \frac{(-1)^nA^{2n+1}}{(2n+1)!} = A - \frac{1}{3!}A^3 + \frac{1}{5!}A^5 - ...$

$\cos(A) = \sum_{n=0}^\infty \frac{(-1)^nA^{2n}}{(2n)!} = I - \frac{1}{2!}A^2 + \frac{1}{4!}A^4 - ...$

    8.2 Indices, Entries and Vectors

Let $e_i$ denote the column vector which is 1 on entry $i$ and zero elsewhere, i.e. $(e_i)_j = \delta_{ij}$, and let $J^{ij}$ denote the matrix which is 1 on entry $(i, j)$ and zero elsewhere.

    8.2.1 Rows and Columns

$i$.th row of $A$ = $e_i^TA$

$j$.th column of $A$ = $Ae_j$


    8.2.2 Permutations

Let $P$ be some permutation matrix, e.g.

$P = \begin{bmatrix} 0 & 1 & 0 \\ 1 & 0 & 0 \\ 0 & 0 & 1 \end{bmatrix} = \begin{bmatrix} e_2 & e_1 & e_3 \end{bmatrix} = \begin{bmatrix} e_2^T \\ e_1^T \\ e_3^T \end{bmatrix}$

then

$AP = \begin{bmatrix} Ae_2 & Ae_1 & Ae_3 \end{bmatrix} \qquad PA = \begin{bmatrix} e_2^TA \\ e_1^TA \\ e_3^TA \end{bmatrix}$

That is, the first is a matrix which has the columns of $A$ but in permuted sequence and the second is a matrix which has the rows of $A$ but in the permuted sequence.

    8.2.3 Swap and Zeros

Assume $A$ to be $n\times m$ and $J^{ij}$ to be $m\times p$

$AJ^{ij} = \begin{bmatrix} 0 & 0 & \dots & A_i & \dots & 0 \end{bmatrix}$

i.e. an $n\times p$ matrix of zeros with the $i$.th column of $A$ in place of the $j$.th column. Assume $A$ to be $n\times m$ and $J^{ij}$ to be $p\times n$

$J^{ij}A = \begin{bmatrix} 0 \\ \vdots \\ A_j \\ \vdots \\ 0 \end{bmatrix}$

i.e. a $p\times m$ matrix of zeros with the $j$.th row of $A$ in place of the $i$.th row.

    8.2.4 Rewriting product of elements

$A_{ki}B_{jl} = (Ae_ie_j^TB)_{kl} = (AJ^{ij}B)_{kl}$

$A_{ik}B_{lj} = (A^Te_ie_j^TB^T)_{kl} = (A^TJ^{ij}B^T)_{kl}$

$A_{ik}B_{jl} = (A^Te_ie_j^TB)_{kl} = (A^TJ^{ij}B)_{kl}$

$A_{ki}B_{lj} = (Ae_ie_j^TB^T)_{kl} = (AJ^{ij}B^T)_{kl}$


    8.2.5 Properties of the Singleentry Matrix

If $i = j$

$J^{ij}J^{ij} = J^{ij} \qquad (J^{ij})^T(J^{ij})^T = J^{ij}$

$J^{ij}(J^{ij})^T = J^{ij} \qquad (J^{ij})^TJ^{ij} = J^{ij}$

If $i \neq j$

$J^{ij}J^{ij} = 0 \qquad (J^{ij})^T(J^{ij})^T = 0$

$J^{ij}(J^{ij})^T = J^{ii} \qquad (J^{ij})^TJ^{ij} = J^{jj}$

    8.2.6 The Singleentry Matrix in Scalar Expressions

Assume $A$ is $n\times m$ and $J$ is $m\times n$, then

$\mathrm{Tr}(AJ^{ij}) = \mathrm{Tr}(J^{ij}A) = (A^T)_{ij}$

Assume $A$ is $n\times n$, $J$ is $n\times m$ and $B$ is $m\times n$, then

$\mathrm{Tr}(AJ^{ij}B) = (A^TB^T)_{ij}$

$\mathrm{Tr}(AJ^{ji}B) = (BA)_{ij}$

$\mathrm{Tr}(AJ^{ij}J^{ij}B) = \mathrm{diag}(A^TB^T)_{ij}$

Assume $A$ is $n\times n$, $J^{ij}$ is $n\times m$ and $B$ is $m\times n$, then

$x^TAJ^{ij}Bx = (A^Txx^TB^T)_{ij}$

$x^TAJ^{ij}J^{ij}Bx = \mathrm{diag}(A^Txx^TB^T)_{ij}$

    8.2.7 Structure Matrices

The structure matrix is defined by

$\frac{\partial A}{\partial A_{ij}} = S^{ij}$

If $A$ has no special structure then

$S^{ij} = J^{ij}$

If $A$ is symmetric then

$S^{ij} = J^{ij} + J^{ji} - J^{ij}J^{ij}$


    8.3 Solutions to Systems of Equations

    8.3.1 Existence in Linear Systems

Assume $A$ is $n\times m$ and consider the linear system

$Ax = b$

Construct the augmented matrix $B = [A\ b]$, then

Condition                                      Solution
$\mathrm{rank}(A) = \mathrm{rank}(B) = m$      Unique solution $x$
$\mathrm{rank}(A) = \mathrm{rank}(B) < m$      Many solutions $x$
$\mathrm{rank}(A) < \mathrm{rank}(B)$          No solutions $x$

    8.3.2 Standard Square

    Assume A is square and invertible, then

$Ax = b \quad \Rightarrow \quad x = A^{-1}b$

    8.3.3 Degenerated Square

    8.3.4 Over-determined Rectangular

Assume $A$ to be $n\times m$, $n > m$ (tall) and $\mathrm{rank}(A) = m$, then

$Ax = b \quad \Rightarrow \quad x = (A^TA)^{-1}A^Tb = A^+b$

that is if there exists a solution $x$ at all! If there is no solution the following can be useful:

$Ax = b \quad \Rightarrow \quad x_{min} = A^+b$

Now $x_{min}$ is the vector $x$ which minimizes $||Ax - b||^2$, i.e. the vector which is "least wrong". The matrix $A^+$ is the pseudo-inverse of $A$. See [3].

    8.3.5 Under-determined Rectangular

Assume $A$ is $n\times m$ and $n < m$ (broad).

$Ax = b \quad \Rightarrow \quad x_{min} = A^T(AA^T)^{-1}b$

The equation has many solutions $x$. But $x_{min}$ is the solution which minimizes $||Ax - b||^2$ and also the solution with the smallest norm $||x||^2$. The same holds for a matrix version: Assume $A$ is $n\times m$, $X$ is $m\times n$ and $B$ is $n\times n$, then

$AX = B \quad \Rightarrow \quad X_{min} = A^+B$

The equation has many solutions $X$. But $X_{min}$ is the solution which minimizes $||AX - B||^2$ and also the solution with the smallest norm $||X||^2$. See [3].
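Both cases in one numpy sketch; np.linalg.pinv plays the role of $A^+$:

```python
import numpy as np

rng = np.random.default_rng(12)

# Over-determined (tall): x = A+ b is the least squares solution
A, b = rng.standard_normal((6, 3)), rng.standard_normal(6)
x = np.linalg.pinv(A) @ b
assert np.allclose(x, np.linalg.inv(A.T @ A) @ A.T @ b)

# Under-determined (broad): x = A^T (A A^T)^{-1} b solves the system with minimum norm
A, b = rng.standard_normal((3, 6)), rng.standard_normal(3)
x = A.T @ np.linalg.inv(A @ A.T) @ b
assert np.allclose(A @ x, b)
assert np.allclose(x, np.linalg.pinv(A) @ b)
```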


Similar but different: Assume $A$ is square $n\times n$ and the matrices $B_0, B_1$ are $n\times N$, where $N > n$. Then if $B_0$ has maximal rank

$AB_0 = B_1 \quad \Rightarrow \quad A_{min} = B_1B_0^T(B_0B_0^T)^{-1}$

where $A_{min}$ denotes the matrix which is optimal in a least square sense. An interpretation is that $A$ is the linear approximation which maps the column vectors of $B_0$ into the column vectors of $B_1$.

    8.3.6 Linear form and zeros

$Ax = 0, \ \forall x \quad \Rightarrow \quad A = 0$

    8.3.7 Square form and zeros

    If A is symmetric, then

$x^TAx = 0, \ \forall x \quad \Rightarrow \quad A = 0$

    8.4 Block matrices

Let $A_{ij}$ denote the $ij$.th block of $A$.

    8.4.1 Multiplication

Assuming the dimensions of the blocks match we have

$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\begin{bmatrix} B_{11} & B_{12} \\ B_{21} & B_{22} \end{bmatrix} = \begin{bmatrix} A_{11}B_{11} + A_{12}B_{21} & A_{11}B_{12} + A_{12}B_{22} \\ A_{21}B_{11} + A_{22}B_{21} & A_{21}B_{12} + A_{22}B_{22} \end{bmatrix}$

8.4.2 The Determinant

The determinant can be expressed by the use of

$C_1 = A_{11} - A_{12}A_{22}^{-1}A_{21} \qquad C_2 = A_{22} - A_{21}A_{11}^{-1}A_{12}$

as

$\det\left(\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}\right) = \det(A_{22})\det(C_1) = \det(A_{11})\det(C_2)$

    8.4.3 The Inverse

The inverse can be expressed by the use of

$C_1 = A_{11} - A_{12}A_{22}^{-1}A_{21} \qquad C_2 = A_{22} - A_{21}A_{11}^{-1}A_{12}$

as

$\begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} C_1^{-1} & -A_{11}^{-1}A_{12}C_2^{-1} \\ -C_2^{-1}A_{21}A_{11}^{-1} & C_2^{-1} \end{bmatrix} = \begin{bmatrix} A_{11}^{-1} + A_{11}^{-1}A_{12}C_2^{-1}A_{21}A_{11}^{-1} & -C_1^{-1}A_{12}A_{22}^{-1} \\ -A_{22}^{-1}A_{21}C_1^{-1} & A_{22}^{-1} + A_{22}^{-1}A_{21}C_1^{-1}A_{12}A_{22}^{-1} \end{bmatrix}$
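A numerical check of the Schur complement expressions, assuming numpy (the diagonal shifts keep the random blocks invertible):

```python
import numpy as np

rng = np.random.default_rng(13)
n = 3
A11 = rng.standard_normal((n, n)) + 3 * np.eye(n)
A22 = rng.standard_normal((n, n)) + 3 * np.eye(n)
A12, A21 = rng.standard_normal((n, n)), rng.standard_normal((n, n))
A = np.block([[A11, A12], [A21, A22]])

C1 = A11 - A12 @ np.linalg.inv(A22) @ A21
C2 = A22 - A21 @ np.linalg.inv(A11) @ A12

assert np.isclose(np.linalg.det(A), np.linalg.det(A22) * np.linalg.det(C1))
assert np.isclose(np.linalg.det(A), np.linalg.det(A11) * np.linalg.det(C2))
# The top-left block of the inverse is C1^{-1}
assert np.allclose(np.linalg.inv(A)[:n, :n], np.linalg.inv(C1))
```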

8.4.4 Block diagonal

For block diagonal matrices we have

$\begin{bmatrix} A_{11} & 0 \\ 0 & A_{22} \end{bmatrix}^{-1} = \begin{bmatrix} (A_{11})^{-1} & 0 \\ 0 & (A_{22})^{-1} \end{bmatrix}$

$\det\left(\begin{bmatrix} A_{11} & 0 \\ 0 & A_{22} \end{bmatrix}\right) = \det(A_{11})\det(A_{22})$

    8.5 Matrix Norms

    8.5.1 Definitions

    A matrix norm is a mapping which fulfils

    ||A|| 0 ||A|| = 0 A = 0||cA|| = |c|||A||, c R

    ||A+B|| ||A||+ ||B||

    8.5.2 Examples

$||A||_1 = \max_j \sum_i |A_{ij}|$

$||A||_2 = \sqrt{\max\,\mathrm{eig}(A^TA)}$

$||A||_p = \max_{||x||_p = 1} ||Ax||_p$

$||A||_\infty = \max_i \sum_j |A_{ij}|$

$||A||_F = \sqrt{\sum_{ij} |A_{ij}|^2}$  (Frobenius)

$||A||_{\max} = \max_{ij} |A_{ij}|$

$||A||_{KF} = ||\mathrm{sing}(A)||_1$  (Ky Fan)

where $\mathrm{sing}(A)$ is the vector of singular values of the matrix $A$.
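All of these norms are available (or one line away) in numpy; a minimal sketch:

```python
import numpy as np

rng = np.random.default_rng(14)
A = rng.standard_normal((4, 6))

assert np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())       # max column sum
assert np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())  # max row sum
assert np.isclose(np.linalg.norm(A, 2), np.sqrt(np.linalg.eigvalsh(A.T @ A).max()))
assert np.isclose(np.linalg.norm(A, 'fro'), np.sqrt((A**2).sum()))
# Ky Fan norm = sum of singular values (numpy calls it the nuclear norm)
assert np.isclose(np.linalg.svd(A, compute_uv=False).sum(), np.linalg.norm(A, 'nuc'))
```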


    8.5.3 Inequalities

E. H. Rasmussen has in yet unpublished material derived and collected the following inequalities. They are collected in a table as below, assuming $A$ is an $m\times n$ matrix, and $d = \min\{m, n\}$:

              ||A||max   ||A||1   ||A||inf   ||A||2   ||A||F   ||A||KF
||A||max          -         1        1          1        1        1
||A||1            m         -        m         √m       √m       √m
||A||inf          n         n        -         √n       √n       √n
||A||2          √mn        √n       √m          -        1        1
||A||F          √mn        √n       √m         √d        -        1
||A||KF        √mnd       √nd      √md          d       √d        -

which are to be read as, e.g.

$||A||_2 \leq \sqrt{m}\,||A||_\infty$

    8.6 Positive Definite and Semi-definite Matrices

    8.6.1 Definitions

A matrix $A$ is positive definite if and only if

$x^TAx > 0, \quad \forall x$

A matrix $A$ is positive semi-definite if and only if

$x^TAx \geq 0, \quad \forall x$

    Note that if A is positive definite, then A is also positive semi-definite.

    8.6.2 Eigenvalues

    The following holds with respect to the eigenvalues:

$A$ pos. def. $\Leftrightarrow$ $\mathrm{eig}(A) > 0$
$A$ pos. semi-def. $\Leftrightarrow$ $\mathrm{eig}(A) \geq 0$

    8.6.3 Trace

    The following holds with respect to the trace:

$A$ pos. def. $\Rightarrow$ $\mathrm{Tr}(A) > 0$
$A$ pos. semi-def. $\Rightarrow$ $\mathrm{Tr}(A) \geq 0$

    8.6.4 Inverse

If $A$ is positive definite, then $A$ is invertible and $A^{-1}$ is also positive definite.


    8.6.5 Diagonal

If $A$ is positive definite, then $A_{ii} > 0, \ \forall i$.

    8.6.6 Decomposition I

The matrix $A$ is positive semi-definite of rank $r$ $\Leftrightarrow$ there exists a matrix $B$ of rank $r$ such that $A = BB^T$.

The matrix $A$ is positive definite $\Leftrightarrow$ there exists an invertible matrix $B$ such that $A = BB^T$.

    8.6.7 Decomposition II

Assume $A$ is an $n\times n$ positive semi-definite matrix of rank $r$, then there exists an $n\times r$ matrix $B$ of rank $r$ such that $B^TAB = I$.

    8.6.8 Equation with zeros

Assume $A$ is positive semi-definite, then $X^TAX = 0 \quad \Rightarrow \quad AX = 0$.

    8.6.9 Rank of product

Assume $A$ is positive definite, then $\mathrm{rank}(BAB^T) = \mathrm{rank}(B)$.

    8.6.10 Positive definite property

If $A$ is $n\times n$ positive definite and $B$ is $r\times n$ of rank $r$, then $BAB^T$ is positive definite.

    8.6.11 Outer Product

If $X$ is $n\times r$, where $n \leq r$ and $\mathrm{rank}(X) = n$, then $XX^T$ is positive definite.

8.6.12 Small perturbations

If $A$ is positive definite and $B$ is symmetric, then $A - tB$ is positive definite for sufficiently small $t$.

    8.7 Integral Involving Dirac Delta Functions

Assuming $A$ to be square, then

$\int p(s)\,\delta(x - As)\,ds = \frac{1}{\det(A)}\,p(A^{-1}x)$

Assuming $A$ to be "underdetermined", i.e. tall, then

$\int p(s)\,\delta(x - As)\,ds = \begin{cases} \frac{1}{\sqrt{\det(A^TA)}}\,p(A^+x) & \text{if } x = AA^+x \\ 0 & \text{elsewhere} \end{cases}$


    See [8].

    8.8 Miscellaneous

    For any A it holds that

$\mathrm{rank}(A) = \mathrm{rank}(A^T) = \mathrm{rank}(AA^T) = \mathrm{rank}(A^TA)$

    Assume A is positive definite. Then

$\mathrm{rank}(B^TAB) = \mathrm{rank}(B)$

$A$ is positive definite $\Leftrightarrow$ there exists an invertible $B$, such that $A = BB^T$


    A Proofs and Details

    A.0.1 Proof of Equation 14

    Essentially we need to calculate

$\frac{\partial(X^n)_{kl}}{\partial X_{ij}} = \frac{\partial}{\partial X_{ij}}\sum_{u_1,...,u_{n-1}} X_{k,u_1}X_{u_1,u_2}\cdots X_{u_{n-1},l}$

$= \delta_{k,i}\delta_{u_1,j}X_{u_1,u_2}\cdots X_{u_{n-1},l} + X_{k,u_1}\delta_{u_1,i}\delta_{u_2,j}\cdots X_{u_{n-1},l} + \cdots + X_{k,u_1}X_{u_1,u_2}\cdots\delta_{u_{n-1},i}\delta_{l,j}$

$= \sum_{r=0}^{n-1}(X^r)_{ki}(X^{n-1-r})_{jl} = \sum_{r=0}^{n-1}(X^rJ^{ij}X^{n-1-r})_{kl}$

Using the properties of the single entry matrix found in Sec. 8.2.5, the result follows easily.

    A.0.2 Details on Eq. 47

$\partial\det(X^HAX) = \det(X^HAX)\,\mathrm{Tr}\left[(X^HAX)^{-1}\partial(X^HAX)\right]$
$= \det(X^HAX)\,\mathrm{Tr}\left[(X^HAX)^{-1}\left(\partial(X^H)AX + X^H\partial(AX)\right)\right]$
$= \det(X^HAX)\left(\mathrm{Tr}\left[(X^HAX)^{-1}\partial(X^H)AX\right] + \mathrm{Tr}\left[(X^HAX)^{-1}X^H\partial(AX)\right]\right)$
$= \det(X^HAX)\left(\mathrm{Tr}\left[AX(X^HAX)^{-1}\partial(X^H)\right] + \mathrm{Tr}\left[(X^HAX)^{-1}X^HA\,\partial(X)\right]\right)$

First, the derivative is found with respect to the real part of $X$; hence the derivative yields

$\frac{\partial\det(X^HAX)}{\partial X} = \frac{1}{2}\left(\frac{\partial\det(X^HAX)}{\partial\,\Re X} - i\frac{\partial\det(X^HAX)}{\partial\,\Im X}\right) = \det(X^HAX)\left((X^HAX)^{-1}X^HA\right)^T$


    References

[1] Karl Gustav Andersson and Lars-Christer Boiers. Ordinära differentialekvationer. Studentlitteratur, 1992.

[2] Jörn Anemüller, Terrence J. Sejnowski, and Scott Makeig. Complex independent component analysis of frequency-domain electroencephalographic data. Neural Networks, 16(9):1311-1323, November 2003.

[3] S. Barnett. Matrices: Methods and Applications. Oxford Applied Mathematics and Computing Science Series. Clarendon Press, 1990.

[4] Christopher M. Bishop. Neural Networks for Pattern Recognition. Oxford University Press, 1995.

[5] Robert J. Boik. Lecture notes: Statistics 550. Online, April 22 2002. Notes.

[6] D. H. Brandwood. A complex gradient operator and its application in adaptive array theory. IEE Proceedings, 130(1):11-16, February 1983. Pts. F and H.

[7] M. Brookes. Matrix reference manual, 2004. Website May 20, 2004.

[8] Mads Dyrholm. Some matrix results, 2004. Website August 23, 2004.

[9] Simon Haykin. Adaptive Filter Theory. Prentice Hall, Upper Saddle River, NJ, 4th edition, 2002.

[10] Thomas P. Minka. Old and new matrix algebra useful for statistics, December 2000. Notes.

[11] L. Parra and C. Spence. Convolutive blind separation of non-stationary sources. IEEE Transactions on Speech and Audio Processing, pages 320-327, May 2000.

[12] Laurent Schwartz. Cours d'Analyse, volume II. Hermann, Paris, 1967. As referenced in [9].

[13] Shayle R. Searle. Matrix Algebra Useful for Statistics. John Wiley and Sons, 1982.

[14] G. Seber and A. Lee. Linear Regression Analysis. John Wiley and Sons, 2002.

[15] S. M. Selby. Standard Mathematical Tables. CRC Press, 1974.

[16] Inna Stainvas. Matrix algebra in differential calculus. Neural Computing Research Group, Information Engineering, Aston University, UK, August 2002. Notes.

[17] Max Welling. The Kalman filter. Lecture note.
