Signal Processing 1 - TU Wien
Signal Processing 1: Linear Operators
Univ.-Prof. Dr.-Ing. Markus Rupp, WS 18/19
Th 14:00-15:30 EI3A, Fr 8:45-10:00 EI4
LVA 389.166
Last change: 24.8.2018
Learning Goals: Linear Operators (4 units, Chapters 4-7)
- Linear transformations, functionals (Ch 4.1)
- Null- and other spaces (Ch 4.5)
- Orthogonal subspaces, matrix rank (Ch 4.7)
- Projections (Ch 4.8-4.9)
- Factorization/decomposition
- Eigenvalue decomposition, Hermitian matrices (Ch 5.1-5.2, 6.1-6.3)
- Filter design based on eigenfilters (Ch 6.9)
- Subspace techniques: PHD, MUSIC, ESPRIT (Ch 6.10-6.11)
- Singular value decomposition SVD (Ch 7), condition number (Ch 4.10)
- MIMO transmission, blind source separation
Linearity

Definition 4.1: A transformation A: X → Y, in which X and Y are vector spaces defined over a ring, is called linear if for all x1, x2 from X and scalars α1, α2 from R we have:

A[\alpha_1 x_1 + \alpha_2 x_2] = A[\alpha_2 x_2 + \alpha_1 x_1] = \alpha_1 A[x_1] + \alpha_2 A[x_2]

(Note: we need an Abelian group w.r.t. +, and we need distributivity, thus a ring!)

Examples for linear operators are matrices, sampling, derivatives and convolution integrals (functionals).
Linearity

Note, in maths it is even more precise: A transformation A: X → Y, in which X and Y are vector spaces defined over a ring, is called linear if for all x1, x2 from X and scalars α1, α2 from a number field (Ger.: Zahlenkörper) K we have:

A[\alpha_1 x_1 + \alpha_2 x_2] = \alpha_1 A[x_1] + \alpha_2 A[x_2]

However, we (engineers) restrict K for our functions to the set of real numbers!
Linearity

Examples for linear operators are:

Example 4.1: A complex-valued number z from C is mapped onto a vector x from R^2:

A[z] = x = [\mathrm{real}(z), \mathrm{imag}(z)]^T

Example 4.2: A quadruple s = [s1, s2, s3, s4] from C^4 is mapped onto a 4x4 matrix from C^{4x4}:

A[s] = \begin{bmatrix} s_1 & -s_2^* & -s_3^* & -s_4^* \\ s_2 & s_1^* & -s_4^* & s_3^* \\ s_3 & s_4^* & s_1^* & -s_2^* \\ s_4 & -s_3^* & s_2^* & s_1^* \end{bmatrix}
Linear Operators

Examples for linear operators are:

Example 4.3: Let a continuous function g(t) from C[0,1] be sampled at fixed time points 0 < t1 < t2 < ... < tn < 1:

A[g(t)] = [g(t_1), g(t_2), \ldots, g(t_n)] \in R^n

Example 4.4: A function f: X → R (C) that maps from a vector space X onto the real (complex) numbers is called a functional. If it is linear, it is called a linear functional:

f_1(x) = \frac{1}{T}\int_0^T x(t)\,dt

f_2(x) = \int_a^b x(t)\,g(t-\tau)\,dt

f_3(x) = \int_{-\infty}^{\infty} x(t)\,\exp(-j\omega t)\,dt
Linear Operators

Example 4.5: Consider the causal sequence x_k, k = 0, 1, 2, ... The mapping of the sequence onto the sum

s_k = \sum_{l=0}^{k} x_l

is a linear operator.

Example 4.6: Consider the Hermitian operator H[.] that transposes a matrix and additionally takes the complex conjugate of all elements (Ger.: adjungierte Matrix, Engl.: adjoint matrix):

H[A] = A^H = B; \quad H[B] = B^H = A

H[\alpha_1 A_1 + \alpha_2 A_2] = (\alpha_1 A_1 + \alpha_2 A_2)^H = \alpha_1 A_1^H + \alpha_2 A_2^H; \quad \alpha_{1,2} \in R

(Named after the French mathematician Charles Hermite, 24.12.1822 - 14.1.1901.)
Linear Operators

Definition 4.2: Linear functional: f: X → R with f(ax + by) = a f(x) + b f(y).

Remark 1: all inner products over functions can be interpreted as linear functionals.

Remark 2: all continuous, linear functionals in a Hilbert space can be described by inner products (Riesz' theorem).

f_2(x) = \int_a^b x(t)\,g(t-\tau)\,dt = \langle x(t), g(t-\tau)\rangle
Linear Operators

Note that all functional units of a MIMO transmission can be considered as linear!

[Block diagram: space-time coding, sampling g_k, pulse shaping g(t), modulators at carrier frequency f_c, output signals s_1(t) and s_2(t).]

Let the modulation alphabet not be restricted to fixed signal points!
Bounded Operators

We already introduced vector norms and showed that they can also be used for matrices as induced norms. This can be extended towards linear operators:

Definition 4.3: If the norm of an operator is finite, we call this operator bounded (Ger.: beschränkt): ||A[.]||_p < M < ∞, or ||A[x]||_p < M ||x||_p, where

\|A[.]\|_p = \|A\|_{p,\mathrm{ind}} = \sup_{x \neq 0} \frac{\|A[x]\|_p}{\|x\|_p} = \sup_{\|x\|_p = 1} \|A[x]\|_p
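The supremum in Definition 4.3 can be made tangible numerically. A minimal sketch (not from the slides; the matrix and sample count are arbitrary choices): sample many unit-norm vectors, track the largest ||Ax||, and compare against the exact induced 2-norm that numpy computes as the largest singular value.

```python
import numpy as np

# Sketch: estimate the induced 2-norm ||A||_2 = sup_{||x||=1} ||Ax||
# by random sampling, and compare with the exact value from numpy.
rng = np.random.default_rng(0)
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])

best = 0.0
for _ in range(20000):
    x = rng.standard_normal(2)
    x /= np.linalg.norm(x)          # unit vector, ||x||_2 = 1
    best = max(best, np.linalg.norm(A @ x))

exact = np.linalg.norm(A, 2)        # induced 2-norm = sigma_max(A)
print(best, exact)                  # sampled estimate approaches the sup
```

The sampled maximum never exceeds the supremum and approaches it as more directions are tried.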
Bounded Operators

Theorem 4.1: A linear operator A: X → Y is bounded, i.e., \|A[x]\| \le M\|x\|, if and only if it is continuous, i.e., \|A[x+dx] - A[x]\| \le L\|dx\|, for some finite and positive M and L.

Proof: Let's assume A is bounded. Then, by linearity,

A[x+dx] - A[x] = A[dx], \quad \|A[dx]\| \le M\|dx\|.

But this is identical to the condition for continuity (with L = M). Starting with continuity we can likewise conclude boundedness!
Inner Products

Lemma 4.1: Inner products are continuous. I.e., if x_n → x holds in an inner product space S, then ⟨x_n, y⟩ → ⟨x, y⟩ for y from S.

Proof: If x_n converges, it must also be bounded, thus \|x_n\| \le M < \infty. Then we have:

|\langle x_n, y\rangle - \langle x, y\rangle| = |\langle x_n - x, y\rangle| \le \|x_n - x\|\,\|y\|

Since x_n converges towards x, \|x_n - x\| \to 0, and hence ⟨x_n, y⟩ converges towards ⟨x, y⟩.
Continuity of Functionals

With the same technique we can also show the following: let f(x) = ⟨x, g⟩ be a functional; then f(x) is continuous if g is bounded:

|\langle x_n, g\rangle - \langle x, g\rangle| = |\langle x_n - x, g\rangle| \le \|x_n - x\|\,\|g\|

Since \|x_n - x\| \to 0, f(x_n) = ⟨x_n, g⟩ converges towards f(x) = ⟨x, g⟩.
Bounded Operators

Such properties are also important for the inverses of linear operators.

Notation: If we concatenate an operator n times: A[A[...]] = A^n[.].
A^0[.] = I[.] is the identity operator; A^{-1}[A[.]] = I[.] defines the inverse of an operator.
Bounded Operators

Theorem 4.2: Let ||.|| be an operator norm satisfying the submultiplicative property and A[.]: X → X a linear operator with ||A[.]|| < 1. Then (I - A)^{-1} exists and:

(I - A)^{-1} = \sum_{i=0}^{\infty} A^i

A^{-1} = \sum_{i=0}^{\infty} (I - A)^i
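Theorem 4.2 (the Neumann series) is easy to check numerically. A sketch with an arbitrarily chosen matrix satisfying the premise ||A|| < 1:

```python
import numpy as np

# Numerical check of Theorem 4.2: if ||A|| < 1, then
# (I - A)^{-1} = sum_{i>=0} A^i (Neumann series).
A = np.array([[0.2, 0.1],
              [0.0, 0.3]])
assert np.linalg.norm(A, 2) < 1     # premise of the theorem

# partial sums of the Neumann series
S = np.zeros_like(A)
term = np.eye(2)
for _ in range(100):
    S += term
    term = term @ A

direct = np.linalg.inv(np.eye(2) - A)
print(np.allclose(S, direct))       # True
```

The partial sums converge geometrically, at rate ||A||, which is why the premise ||A[.]|| < 1 matters.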
Bounded Operators

Proof: Let ||A[.]|| < 1. If I - A were singular, there would be at least one vector x unequal to 0 such that (I - A)[x] = 0. Thus we would also have x = A[x] and

\|x\| = \|A[x]\| \le \|A[.]\|\,\|x\|.

In this case we would need \|A[.]\| \ge 1, which is a contradiction. Hence I - A is not singular!

By successive multiplication we have:

(I - A)(I + A + A^2 + \ldots + A^{k-1}) = I - A^k
Bounded Operators

Since \|A^k\| \le \|A\|^k (submultiplicative property) and ||A[.]|| < 1, it must be true that

\lim_{k\to\infty} A^k = 0.

And therefore also:

(I - A)\sum_{i=0}^{\infty} A^i = I

Note that A[.] must be square, X → X!
Bounded Operators

Example 4.7: Consider the following operator:

A\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x/2 \\ y^*/2 \end{bmatrix}

Obviously, this operator is bounded:

\left\|A\begin{bmatrix} x \\ y \end{bmatrix}\right\| = \left\|\begin{bmatrix} x/2 \\ y^*/2 \end{bmatrix}\right\| = \frac{1}{2}\left\|\begin{bmatrix} x \\ y \end{bmatrix}\right\| \;\Rightarrow\; \|A[.]\| = \frac{1}{2}

Thus, its inverse must exist in the form:

(I - A)^{-1} = \sum_{i=0}^{\infty} A^i
Bounded Operators

Still Example 4.7: We thus have to show that

(I - A)\sum_{i=0}^{\infty} A^i \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix} \quad\text{or}\quad \sum_{i=0}^{\infty} A^i (I - A)\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix}.

The powers of the operator are:

A^0\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x \\ y \end{bmatrix};\; A\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x/2 \\ y^*/2 \end{bmatrix};\; A^2\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x/4 \\ y/4 \end{bmatrix};\; A^3\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x/8 \\ y^*/8 \end{bmatrix};\ldots

A^{2k}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x/2^{2k} \\ y/2^{2k} \end{bmatrix};\quad A^{2k+1}\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x/2^{2k+1} \\ y^*/2^{2k+1} \end{bmatrix};\quad (I - A)\begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} x/2 \\ y - y^*/2 \end{bmatrix}
Bounded Operators

Still Example 4.7: Splitting the sum into even and odd powers:

\sum_{i=0}^{\infty} A^i \begin{bmatrix} a \\ b \end{bmatrix} = \sum_{i=0}^{\infty} A^{2i}\begin{bmatrix} a \\ b \end{bmatrix} + \sum_{i=0}^{\infty} A^{2i+1}\begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} a\sum_i 1/4^i \\ b\sum_i 1/4^i \end{bmatrix} + \begin{bmatrix} a\sum_i 1/2^{2i+1} \\ b^*\sum_i 1/2^{2i+1} \end{bmatrix}

With \sum_i 1/4^i = \frac{1}{1-1/4} = \frac{4}{3} and \sum_i 1/2^{2i+1} = \frac{1/2}{1-1/4} = \frac{2}{3}:

\sum_{i=0}^{\infty} A^i \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} 2a \\ \frac{4}{3}b + \frac{2}{3}b^* \end{bmatrix}

Applying (I - A) indeed returns the original vector:

(I - A)\sum_{i=0}^{\infty} A^i \begin{bmatrix} a \\ b \end{bmatrix} = \begin{bmatrix} a \\ \frac{4}{3}b + \frac{2}{3}b^* - \frac{1}{2}\left(\frac{4}{3}b^* + \frac{2}{3}b\right) \end{bmatrix} = \begin{bmatrix} a \\ b \end{bmatrix}
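The calculation in Example 4.7 can be confirmed numerically. A sketch (the test vector is an arbitrary choice): apply the operator repeatedly to build the partial Neumann sum, compare with the closed form derived above, and check that (I - A) maps the sum back to the input.

```python
import numpy as np

# Numeric check of Example 4.7: A[(x, y)] = (x/2, conj(y)/2),
# ||A|| = 1/2 < 1, so (I - A)^{-1} = sum_i A^i.
def A(v):
    return np.array([v[0] / 2, np.conj(v[1]) / 2])

v = np.array([1.0 + 2.0j, 3.0 - 1.0j])    # arbitrary test vector (a, b)

s = np.zeros(2, dtype=complex)
term = v.copy()
for _ in range(200):                       # partial Neumann sum of A^i [a, b]
    s += term
    term = A(term)

# closed form from the slides: [2a, 4b/3 + 2 conj(b)/3]
closed = np.array([2 * v[0], 4 * v[1] / 3 + 2 * np.conj(v[1]) / 3])
print(np.allclose(s, closed))              # True
print(np.allclose(s - A(s), v))            # (I - A) sum reproduces v: True
```

Note that A is only R-linear (it involves conjugation), which is exactly why the scalars in Definition 4.1 are restricted to R.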
Remark

Since most linear operators are used in the form of matrices, we will mainly deal with matrices in the following; thus A[x] = Ax. Most of the properties shown, in particular the projection properties, are not limited to matrices!
Null- and other Spaces

Definition 4.4: The vector space spanned by the columns of a matrix A = [a1, a2, ..., an]: X → Y is called its column space or range (Ger.: Spaltenraum von A):

R(A[.]) = R(A) = \mathrm{span}(a_1, a_2, \ldots, a_n), \quad A = [a_1, a_2, \ldots, a_n]

R(A) = \{y \in Y : A[x] = y \text{ for } x \in X\}

The second row is the more general form and describes the column space of a linear operator.
Null- and other Spaces

Definition 4.5: The vector space spanned by the (conjugate complex) rows of a matrix A = [b1^T; b2^T; ...; bn^T]: X → Y is called the row space of A (Ger.: Zeilenraum), or column space of the adjoint operator A*[.]:

R(A^*[.]) = R(A^H) = \mathrm{span}(b_1^*, b_2^*, \ldots, b_n^*), \quad A = \begin{bmatrix} b_1^T \\ b_2^T \\ \vdots \\ b_n^T \end{bmatrix}

R(A^H) = \{x \in X : A^H y = x \text{ for } y \in Y\}

Note: the Hermitian of a matrix is a special form of the adjoint (Ger.: adjungierter) linear operator: A^*[x] = A^H x.
Adjoint Operator

Definition 4.6: Consider a matrix A; then A^H is called the adjoint matrix (= Hermitian of the matrix):

\langle Ax, y\rangle = \langle x, A^H y\rangle

Now consider a linear operator A[.]; A^*[.] is called the adjoint operator:

\langle A[x], y\rangle = \langle x, \mathrm{adj}(A)[y]\rangle = \langle x, A^*[y]\rangle
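For matrices, Definition 4.6 is a one-line numerical check. A sketch with randomly chosen complex data, using the inner product convention ⟨u, v⟩ = v^H u:

```python
import numpy as np

# Sketch verifying <Ax, y> = <x, A^H y> for a random complex matrix.
rng = np.random.default_rng(1)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))
x = rng.standard_normal(2) + 1j * rng.standard_normal(2)
y = rng.standard_normal(3) + 1j * rng.standard_normal(3)

def inner(u, v):
    return np.vdot(v, u)            # np.vdot conjugates its first argument

lhs = inner(A @ x, y)               # <Ax, y>
rhs = inner(x, A.conj().T @ y)      # <x, A^H y>
print(np.isclose(lhs, rhs))         # True
```

Both sides evaluate to y^H A x, which is why the identity holds for every x and y.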
Adjoint Operator

Example 4.8: Adjoint operator. Consider the operator that maps a sequence \{x_n\} onto the function

\hat{x}(t) = \sum_{n=-\infty}^{\infty} x_n\,\mathrm{rec}(t-n),

where rec(t) is the rectangular pulse on [-1/2, 1/2]. What is its adjoint?

Answer: there is a rec(t) such that

\langle \hat{x}(t), y(t)\rangle = \int_{-\infty}^{\infty} \hat{x}(t)\,y^*(t)\,dt = \sum_{n=-\infty}^{\infty} x_n \int_{n-1/2}^{n+1/2} y^*(t)\,dt,

i.e., the adjoint maps the function y(t) onto the sequence y_n = \int_{n-1/2}^{n+1/2} y(t)\,dt.
Self-Adjoint

In some cases the operators are self-adjoint. Definition 4.7: A self-adjoint operator satisfies:

\langle A[x], y\rangle = \langle x, A[y]\rangle = \langle x, A^H y\rangle

This is given for all Hermitian matrices.
Null- and other Spaces

Definition 4.8: The vector space defined by the solutions of A[x] = 0 of a linear operator A[.]: X → Y is called the nullspace N(A) of A[.] (Ger.: Nullraum). It is also called the kernel (Ger.: Kern) of the operator: ker(A).

Definition 4.9: The vector space defined by the solutions of A^*[y] = 0 of a linear operator A[.]: X → Y is called the nullspace N(A^*) of A^*, or left nullspace (Ger.: linker Nullraum): ker(A^*).
Null- and other Spaces

Example 4.9: Let A be a linear matrix operator with:

A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix}

Then the column space and nullspace of A are given by:

R(A) = \mathrm{span}\left(\begin{bmatrix}1\\1\\0\end{bmatrix}, \begin{bmatrix}0\\1\\0\end{bmatrix}\right); \quad N(A) = \mathrm{span}\left(\begin{bmatrix}0\\1\\0\end{bmatrix}\right)
Null- and other Spaces

Still Example 4.9: The row space (column space of the adjoint matrix) and the left nullspace are given by:

A = \begin{bmatrix} 1 & 0 & 0 \\ 1 & 0 & 1 \\ 0 & 0 & 0 \end{bmatrix} \;\rightarrow\; A^H = \begin{bmatrix} 1 & 1 & 0 \\ 0 & 0 & 0 \\ 0 & 1 & 0 \end{bmatrix}

R(A^H) = \mathrm{span}\left(\begin{bmatrix}1\\0\\0\end{bmatrix}, \begin{bmatrix}1\\0\\1\end{bmatrix}\right); \quad N(A^H) = \mathrm{span}\left(\begin{bmatrix}0\\0\\1\end{bmatrix}\right)
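The four subspaces of Example 4.9 can be recovered with the SVD, a standard technique (a sketch, not part of the slides): with rank r, the first r columns of U span R(A), the rest span N(A^H), and the rows of Vh split the same way for R(A^H) and N(A).

```python
import numpy as np

# Sketch: four fundamental subspaces of the Example 4.9 matrix via SVD.
A = np.array([[1.0, 0.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])

U, s, Vh = np.linalg.svd(A)
r = int(np.sum(s > 1e-10))          # rank = 2

col_space  = U[:, :r]               # basis of R(A)
left_null  = U[:, r:]               # basis of N(A^H)
row_space  = Vh[:r, :].conj().T     # basis of R(A^H)
null_space = Vh[r:, :].conj().T     # basis of N(A)

print(np.allclose(A @ null_space, 0))    # True: A x = 0 on N(A)
print(np.allclose(A.T @ left_null, 0))   # True: A^H y = 0 on N(A^H)
```

The SVD bases are orthonormal; they span the same subspaces as the (non-orthogonal) spanning vectors listed on the slide.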
Null- and other Spaces (Kernel)

Example 4.10: Consider the convolution:

L(x(t)) = \int_0^t x(\tau)\,h(t-\tau)\,d\tau

The nullspace of the linear operator L consists of all functions x(t) which, convolved with h(t), result in zero. In the Fourier domain these are the functions X(jω) that have no overlap with H(jω). Thus:

N(L) = \{x(t)\,|\,H(j\omega)X(j\omega) \equiv 0\}
Null- and other Spaces

[Figure: the vector [5, 2, 8]^T defines N(A^H).]
Null- and other Spaces

Let vector b be from the column space of A. Then a linear combination of the columns of A must be exactly b: Ax = b. Given Ax = b:
- There is exactly one solution if b is in the column space of A and the columns are linearly independent.
- There is no solution if b is not in the column space of A.
- There are infinitely many solutions if b is in the column space of A and its columns are not linearly independent.

Proofs follow later...
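The three cases above can be distinguished mechanically by comparing ranks (a sketch with arbitrarily chosen data; the rank comparison is the classical Rouché-Capelli test, not stated on the slide):

```python
import numpy as np

# Sketch: classify solvability of Ax = b via rank tests.
def classify(A, b):
    rA  = np.linalg.matrix_rank(A)
    rAb = np.linalg.matrix_rank(np.column_stack([A, b]))
    if rAb > rA:
        return "no solution"            # b not in the column space
    if rA == A.shape[1]:
        return "unique solution"        # b in R(A), columns independent
    return "infinitely many solutions"  # b in R(A), columns dependent

A = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(classify(A, np.array([1.0, 2.0, 3.0])))   # unique solution
print(classify(A, np.array([1.0, 2.0, 0.0])))   # no solution
```

Appending b to A raises the rank exactly when b leaves the column space, which is why the comparison works.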
Null- and other Spaces

Based on these definitions we can already state the following relations for linear operators A: X → Y with A[x] = y, x ∈ X, y ∈ Y:

R(A) \subset Y; \quad N(A^*) \subset Y

R(A^*) \subset X; \quad N(A) \subset X
Example 4.11: Hands-Free Telephone (Freisprechtelefon)

[Figure: the far-end speaker signal is played back by the loudspeaker; the microphone picks up the local speaker plus the echo of the far-end speaker.]
Example 4.11: System Identification

The essential problem of a hands-free telephone is system identification. Such a problem can be described in the form of a system of linear equations.

[Figure: the far-end speaker signal excites the echo path; the local speaker adds to the microphone signal.]
45Univ.-Prof. Dr.-Ing.
Markus Rupp
System Identifikationwith random signals
Assume a (at least WSS) signal at the input of an LTI system with impulse response w(τ). The correlation of input and output signal is:
w(τ)x(t) y(t)
xy
xyxx )()()(
rwR
rdttwtr
xx =
=−∫ ττ
System Identification with Random Signals

Not knowing the correlation terms, the linear system of equations can be approximated by observations:

R_{xx}\,w = r_{xy}

E[x_k x_k^T]\,w = E[x_k y_k]

\underbrace{\frac{1}{n}\sum_{k=1}^{n} x_k x_k^T}_{R_{xx}(n)}\,w(n) = \frac{1}{n}\sum_{k=1}^{n} x_k y_k

In order to solve a system of order m (dim(w) = m), n must be at least m. In practice, often n = 2m. x_k must be persistently exciting!
47Univ.-Prof. Dr.-Ing.
Markus Rupp
Example 4.11 Consider now the problem of stereo transmission
of a source S (far end speaker):
Note: gk(i) are the various paths to the
microphones.
s gk(1)
gk(2)
xk(1)
xk(2)
M F
Example 4.11

With our vector notation we find the following relations:

x_k^{(i)} = [x_k^{(i)}, x_{k-1}^{(i)}, \ldots, x_{k-L+1}^{(i)}]^T

g^{(i)} = [g_0^{(i)}, g_1^{(i)}, \ldots, g_{L-1}^{(i)}]^T

x_k^{(i)} = S_k\,g^{(i)}; \quad i = 1, 2

with the Hankel matrix

S_k = \begin{bmatrix} s_k & s_{k-1} & \ldots & s_{k-L+1} \\ s_{k-1} & s_{k-2} & \ldots & s_{k-L} \\ \vdots & & & \vdots \\ s_{k-L+1} & s_{k-L} & \ldots & s_{k-2L+2} \end{bmatrix}
Example 4.11

Consider the following relation (S_k is symmetric):

x_k^{(i)T} g^{(j)} = \left(S_k g^{(i)}\right)^T g^{(j)} = g^{(i)T} S_k\,g^{(j)} = g^{(j)T} S_k\,g^{(i)} = x_k^{(j)T} g^{(i)}

Construct the vector x_k^T = [x_k^{T(1)}, x_k^{T(2)}]. Note, the following ACF matrix is singular!

R_{xx}(n) = \frac{1}{n}\sum_{k=1}^{n} x_k x_k^T = \frac{1}{n}\sum_{k=1}^{n} \begin{bmatrix} S_k g^{(1)} g^{(1)T} S_k & S_k g^{(1)} g^{(2)T} S_k \\ S_k g^{(2)} g^{(1)T} S_k & S_k g^{(2)} g^{(2)T} S_k \end{bmatrix}
50Univ.-Prof. Dr.-Ing.
Markus Rupp
Example 4.11 Consider the following vector for arbitrary values
α unequal to zero:
For this vector we have:
Obviously, u is in the nullspace of Rxx(n).
[ ])1()2( , TTT ggu −= α
01)( )1(
)2(
1)2()2()1()2(
)2()1()1()1(
=
−
= ∑
= gg
SggSSggS
SggSSggS
nunR
n
k kT
kkT
k
kT
kkT
kxx
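This singularity is easy to reproduce numerically (a sketch with arbitrary random channels, not part of the slides): generate both microphone signals from one source, build the stacked-regressor ACF matrix, and check that u = [g2; -g1] is annihilated.

```python
import numpy as np

# Sketch: with a single source s behind two FIR paths g1, g2, the
# stacked ACF matrix is singular; u = [g2; -g1] lies in its nullspace.
rng = np.random.default_rng(3)
L = 4
g1 = rng.standard_normal(L)
g2 = rng.standard_normal(L)
s = rng.standard_normal(5000)

x1 = np.convolve(s, g1)[: len(s)]         # microphone 1
x2 = np.convolve(s, g2)[: len(s)]         # microphone 2

def regressor(x, k):
    return x[k - L + 1 : k + 1][::-1]     # [x_k, x_{k-1}, ..., x_{k-L+1}]

X = np.stack([np.concatenate([regressor(x1, k), regressor(x2, k)])
              for k in range(2 * L, len(s))])
Rxx = X.T @ X / len(X)

u = np.concatenate([g2, -g1])
print(np.allclose(Rxx @ u, 0))            # True: u is in N(R_xx)
```

The cancellation g^{(2)T} x_k^{(1)} - g^{(1)T} x_k^{(2)} = 0 holds per sample, so R_xx u vanishes for every n.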
Sumspaces

Definition 4.10: Let V and W be linear subspaces; then the space S = V + W is called the inner sumspace, consisting of all combinations x = v + w.

Definition 4.11: Let V and W be linear subspaces. The direct sumspace T = V ⊕ W is constructed from the pairs (v, w). V + W and V ⊕ W are different linear spaces. If V and W are disjoint, they have the same mathematical properties and are said to be isomorphic.
Orthogonal Subspaces

Definition 4.12: Let S be a vector space and V and W both subspaces of S. V and W are called orthogonal subspaces if for each pair v from V and w from W we have ⟨v, w⟩ = 0.

Definition 4.13: Let V be a subset of a vector space S with inner product. The space of all vectors orthogonal to the vectors in V is called the orthogonal complement (Ger.: orthogonaler Komplementärraum) and is denoted V^⊥ = W.
Sumspaces

Example 4.12 (in S = GF(2)^3):

V = \{(0,0,0), (1,0,0)\}

W = V^\perp = \{(0,0,0), (0,1,0), (0,0,1), (0,1,1)\}

V \cup V^\perp = \{(0,0,0), (1,0,0), (0,1,0), (0,0,1), (0,1,1)\}

V + V^\perp = \mathrm{span}(V \cup V^\perp) = S

V \oplus V^\perp = \{(0,0,0,0,0,0), (1,0,0,0,0,0), (0,0,0,0,1,0), (1,0,0,0,1,0), (0,0,0,0,0,1), (1,0,0,0,0,1), (0,0,0,0,1,1), (1,0,0,0,1,1)\}

S = \{(0,0,0), (1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1), (1,1,1)\}
Orthogonal Subspaces

Example 4.13: Let S = GF(2)^3. The vectors v = (1,0,0) and w = (0,0,1) are from S. They span the subspaces V and W:
V = span(v) = {(0,0,0), (1,0,0)}
W = span(w) = {(0,0,0), (0,0,1)}

Both spaces are orthogonal subspaces. The subspace V has the orthogonal complement:

V^\perp = \{(0,0,0), (0,1,0), (0,0,1), (0,1,1)\}
Orthogonal Subspaces

Which vectors span the orthogonal complement? Answer:

V^\perp = \{(0,0,0), (0,1,0), (0,0,1), (0,1,1)\} = \mathrm{span}\big((0,1,0), (0,0,1)\big)

Note:

V \cup V^\perp = \{(0,0,0), (1,0,0), (0,1,0), (0,0,1), (0,1,1)\} \neq S = \{(0,0,0), (1,0,0), (0,1,0), (0,0,1), (1,1,0), (1,0,1), (0,1,1), (1,1,1)\}
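The orthogonal complement in GF(2)^3 is small enough to enumerate by brute force. A sketch (the inner product is the bitwise product summed modulo 2):

```python
import numpy as np
from itertools import product

# Sketch: brute-force orthogonal complement in S = GF(2)^3.
S = [np.array(t) for t in product([0, 1], repeat=3)]

def orth_complement(V):
    # keep every x in S orthogonal (mod 2) to all vectors of V
    return [x for x in S if all((x @ v) % 2 == 0 for v in V)]

V = [np.array([0, 0, 0]), np.array([1, 0, 0])]
Vp = orth_complement(V)
print([tuple(x) for x in Vp])
# [(0, 0, 0), (0, 0, 1), (0, 1, 0), (0, 1, 1)]
```

Enumerating also confirms the note above: (1,1,0), for instance, lies in S but in neither V nor V^⊥.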
Orthogonal Subspaces

Note: Let v be from V and w from W, and assume that V and W = V^⊥ are orthogonal complements in S. Then we do not necessarily have:

V \cup V^\perp = S; \quad \mathrm{span}(V \cup V^\perp) = S; \quad V + V^\perp = S

Typically, for such properties we need complete spaces (Cauchy sequences!).
Projections

We had already used projections P in the context of LS: P = P^2.

Definition 4.14: In an orthogonal projection, its range and its nullspace are orthogonal subspaces; in oblique projections this is not the case.
Example 4.14

Consider the following projection matrix:

A = \begin{bmatrix} 0 & 0 \\ \alpha & 1 \end{bmatrix}

Its range and nullspace are given by:

R(A) = \mathrm{span}\begin{bmatrix} 0 \\ 1 \end{bmatrix}; \quad N(A) = \mathrm{span}\begin{bmatrix} 1 \\ -\alpha \end{bmatrix}
Example 4.14

Thus, in general we have an oblique projection; only for α = 0 do we have an orthogonal projection.

Note that the eigenvalues of projection matrices are either 0 or 1.
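The claims of Example 4.14 are quickly verified numerically (a sketch; α = 2 is an arbitrary choice): the matrix is idempotent, its eigenvalues are 0 and 1, and for α ≠ 0 the range and nullspace are not orthogonal.

```python
import numpy as np

# Sketch: A = [[0, 0], [alpha, 1]] is a projection (A^2 = A) with
# eigenvalues {0, 1}; range and nullspace are orthogonal only for alpha=0.
alpha = 2.0
A = np.array([[0.0, 0.0],
              [alpha, 1.0]])

print(np.allclose(A @ A, A))                  # True: idempotent
print(np.sort(np.linalg.eigvals(A).real))     # eigenvalues 0 and 1

r = np.array([0.0, 1.0])                      # spans R(A)
n = np.array([1.0, -alpha])                   # spans N(A)
print(np.allclose(A @ n, 0), np.dot(r, n))    # A n = 0; <r, n> = -alpha
```

Setting alpha = 0 makes ⟨r, n⟩ = 0, recovering the orthogonal projection.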
Orthogonal Subspaces

Theorem 4.3: Let V and W be two subspaces of a vector space S (not necessarily a complete one) with inner product. Then we have:

1) V^\perp is a complete subspace of S
2) V \subset V^{\perp\perp}
3) If V \subset W, then W^\perp \subset V^\perp
4) V^{\perp\perp\perp} = V^\perp
5) If x \in V \cap V^\perp, then x = 0
6) \{0\}^\perp = S; \quad S^\perp = \{0\}
Recall Lemma 4.1: Inner products are continuous. I.e., if $x_n \to x$ in an inner product space S, then $\langle x_n, y \rangle \to \langle x, y \rangle$ for every y from S.
Orthogonal Subspaces

1) $V^{\perp}$ is a complete subspace of $S$.

Proof (Part 1): Let $x_n$ be a sequence of vectors in $V^{\perp}$ with $x_n \to x$ and $v$ from $V$. Because of the continuity property of the inner product (Lemma 4.1) we have:

$$0 = \lim_{n \to \infty} \langle x_n, v \rangle = \langle x, v \rangle \quad \text{for all } v \in V \ \Rightarrow\ x \in V^{\perp}$$

Note: The following is not true: $V = V^{\perp\perp}$. This is because the space S is not complete: there can be Cauchy sequences in V whose limits are not in V.
Orthogonal Subspaces

Theorem 4.4: Let A: X → Y be a bounded linear operator between two Hilbert spaces X and Y, and let R(A) as well as R(A^H) be complete subspaces. Then we have:

1) $R(A)^{\perp} = N(A^H); \quad N(A^H)^{\perp} = R(A)$
2) $R(A^H)^{\perp} = N(A); \quad N(A)^{\perp} = R(A^H)$

Proof: Throughout, $Ax = y$ with $x \in X$, $y \in Y$; $R(A) \subset Y$, $N(A) \subset X$.

Let $z \in N(A^H)$, i.e. $A^H z = 0$, and let $y \in R(A)$, i.e. $y = Ax$ for some $x \in X$. Then

$$\langle y, z \rangle = \langle Ax, z \rangle = \langle x, A^H z \rangle = \langle x, 0 \rangle = 0$$

Thus $z \perp y$; since for every such z we find this for every y from R(A), we conclude $N(A^H) \subset R(A)^{\perp}$.
Orthogonal Subspaces

Further: We have $Ax = y \in Y$ and $Ax \in R(A) \subset Y$. Let now $z \in R(A)^{\perp}$ and $x \in X$, so that $Ax \in R(A)$. Then

$$\langle Ax, z \rangle = \langle x, A^H z \rangle = 0 \quad \text{for every } x \in X,$$

no matter what x we select. Thus $A^H z = 0$ and therefore $z \in N(A^H)$. Finally: $R(A)^{\perp} \subset N(A^H)$.

From $N(A^H) \subset R(A)^{\perp}$ and $R(A)^{\perp} \subset N(A^H)$ we must have:

$$N(A^H) = R(A)^{\perp}$$
Orthogonal Subspaces

With the same technique, we can also prove:

Lemma 4.2: R(A*[.]) = R(A*[A[.]]), or for matrices: $R(A^H) = R(A^H A)$.
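As a numerical sanity check (a sketch, not part of the slides), equality of the two subspaces can be confirmed by comparing ranks on a deliberately rank-deficient matrix:

```python
import numpy as np

rng = np.random.default_rng(0)
# Build a 6x4 matrix of rank 2 (product of 6x2 and 2x4 factors).
A = rng.standard_normal((6, 2)) @ rng.standard_normal((2, 4))

AH = A.conj().T
r1 = np.linalg.matrix_rank(AH)
r2 = np.linalg.matrix_rank(AH @ A)
print(r1, r2)  # both 2

# R(A^H A) is always contained in R(A^H); equal ranks then imply equality.
assert r1 == r2 == 2
```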
Orthogonal Subspaces

Theorem 4.5 (Fredholm's Alternative Theorem, Ger.: Fredholmscher Alternativsatz): Let A be a bounded, linear operator. The equation Ax = b has at least one solution if and only if ⟨b, v⟩ = 0 for every vector v from N(A^H), i.e., A^H v = 0. More precisely:

$$b \in R(A) \iff b \perp N(A^H)$$

In particular, for matrices: the equation Ax = b has (at least) one solution if and only if $b^H v = 0$ for each vector v for which $A^H v = 0$.

Erik Ivar Fredholm (7.4.1866 – 17.8.1927) was a Swedish mathematician.
Orthogonal Subspaces

Proof: Let Ax = b and v ∈ N(A^H). Then

$$\langle b, v \rangle = \langle Ax, v \rangle = \langle x, A^H v \rangle = \langle x, 0 \rangle = 0$$

Converse: Consider now that ⟨b, v⟩ = 0 for v from N(A^H), but Ax = b has no solution. Since b is not from R(A), we write b = b_r + b_0, with b_r from R(A) and b_0 orthogonal to the vectors from R(A). Thus we have ⟨Ax, b_0⟩ = 0 for all x and thus A^H b_0 = 0. Hence b_0 is in N(A^H), and 0 = ⟨b, b_0⟩ = ⟨b_r + b_0, b_0⟩ = ⟨b_r, b_0⟩ + ⟨b_0, b_0⟩ = ⟨b_0, b_0⟩ ⇒ b_0 = 0; b is therefore from R(A), a contradiction.
Orthogonal Subspaces

We have thus proven the existence but not the uniqueness of the solution.

Theorem 4.6: The solution of Ax = b is unique if and only if the only solution of Ax = 0 is x = 0, thus N(A) = {0}.

Proof hint: start with A(x + Δx) = b.
Orthogonal Subspaces

Example 4.15 (Fredholm's Theorem): Consider

$$A = \begin{pmatrix} 1 & 4 \\ 2 & 5 \\ 3 & 6 \end{pmatrix}$$

$$R(A) = \mathrm{span}\left\{ \begin{pmatrix}1\\2\\3\end{pmatrix}, \begin{pmatrix}4\\5\\6\end{pmatrix} \right\}; \qquad N(A^H) = \mathrm{span}\left\{ \begin{pmatrix}1\\-2\\1\end{pmatrix} \right\}$$

$$R(A^H) = \mathrm{span}\left\{ \begin{pmatrix}1\\4\end{pmatrix}, \begin{pmatrix}2\\5\end{pmatrix} \right\}; \qquad N(A) = \{0\}$$

For $b = (5,7,9)^T$ we have $\langle b, v \rangle = 0$ for $v = (1,-2,1)^T$, so $Ax = b$ has a solution; for $b = (1,-2,1)^T$ it does not.
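A short numerical check of Fredholm's alternative on this example (a sketch, using numpy's least-squares solver):

```python
import numpy as np

A = np.array([[1.0, 4.0],
              [2.0, 5.0],
              [3.0, 6.0]])
v = np.array([1.0, -2.0, 1.0])   # spans N(A^H)
assert np.allclose(A.T @ v, 0)   # A^H v = 0

b_good = np.array([5.0, 7.0, 9.0])
b_bad = np.array([1.0, -2.0, 1.0])

print(b_good @ v)  # 0.0 -> Ax = b_good is solvable
print(b_bad @ v)   # 6.0 -> Ax = b_bad has no solution

# Solvable case: the LS fit reproduces b exactly.
x, *_ = np.linalg.lstsq(A, b_good, rcond=None)
assert np.allclose(A @ x, b_good)
```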
Orthogonal Subspaces

Remember: The dimension of a vector space is the number of linearly independent vectors required to span the space.

Definition 4.15: The rank (Ger.: Rang) of a matrix A is defined as the dimension of its column space (equivalently, of its row space).

Example 4.16: Let an m×n matrix A be of rank r:

$$\dim(R(A)) = r; \quad \dim(R(A^H)) = r; \quad \dim(N(A)) = n - r; \quad \dim(N(A^H)) = m - r$$
Remember Definition 2.23: Let T be a Hamel basis for S. The cardinality of T is the dimension of S, |T| = dim(S). It equals the number of linearly independent vectors required to span the space S.
Orthogonal Subspaces

Example 4.17:

$$A = \begin{pmatrix} 1 & 4 & 2 \\ 5 & 20 & 5 \end{pmatrix}; \qquad m = 2,\ n = 3,\ r = 2$$

$$R(A) = \mathrm{span}\left\{ \begin{pmatrix}1\\5\end{pmatrix}, \begin{pmatrix}2\\5\end{pmatrix} \right\}; \qquad R(A^H) = \mathrm{span}\left\{ \begin{pmatrix}1\\4\\2\end{pmatrix}, \begin{pmatrix}5\\20\\5\end{pmatrix} \right\}$$

$$N(A) = \mathrm{span}\left\{ \begin{pmatrix}-4\\1\\0\end{pmatrix} \right\}; \qquad N(A^H) = \{0\}$$
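The rank/dimension bookkeeping can be verified numerically on a 2×3 rank-2 example matrix (a sketch, as in Example 4.17):

```python
import numpy as np

A = np.array([[1.0, 4.0, 2.0],
              [5.0, 20.0, 5.0]])
m, n = A.shape
r = np.linalg.matrix_rank(A)
print(m, n, r)  # 2 3 2

# Rank-nullity: dim N(A) = n - r, dim N(A^H) = m - r.
assert n - r == 1 and m - r == 0

# A nullspace vector of this A:
x0 = np.array([-4.0, 1.0, 0.0])
assert np.allclose(A @ x0, 0)
```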
Orthogonal Subspaces

Definition 4.16: An m×n matrix is called of full rank if rank(A) = min(m, n). If a matrix is not of full rank, it is called rank-deficient.

Theorem 4.7: For matrix products AB we have:

$$N(B) \subset N(AB); \quad R(AB) \subset R(A); \quad N(A^H) \subset N\big((AB)^H\big); \quad R\big((AB)^H\big) \subset R(B^H)$$
Orthogonal Subspaces

Proof (only part 1, thus $N(B) \subset N(AB)$): If Bx = 0, then ABx = 0. Thus every x from N(B) is also in N(AB).

Note that the 2nd and 4th properties lead to the following:

$$\mathrm{rank}(AB) \le \mathrm{rank}(A); \qquad \mathrm{rank}(AB) \le \mathrm{rank}(B)$$

We can never increase the rank by a matrix product!
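The rank inequality can be illustrated numerically (a sketch on random matrices, which are full rank with probability one):

```python
import numpy as np

rng = np.random.default_rng(1)
A = rng.standard_normal((5, 3))   # rank 3
B = rng.standard_normal((3, 4))   # rank 3

rA = np.linalg.matrix_rank(A)
rB = np.linalg.matrix_rank(B)
rAB = np.linalg.matrix_rank(A @ B)
assert rAB <= min(rA, rB)

# Multiplying by a rank-1 matrix collapses the product's rank.
C = np.outer(rng.standard_normal(4), rng.standard_normal(6))  # rank 1
assert np.linalg.matrix_rank(A @ B @ C) <= 1
```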
How does LS help in solving sets of linear equations?

Example 4.18: Consider

$$Ax = b: \quad \begin{pmatrix} a_{11} & \cdots & a_{1N} \\ \vdots & & \vdots \\ a_{M1} & \cdots & a_{MN} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_M \end{pmatrix}, \qquad M > N,$$

an overdetermined set of equations. What happens if we apply LS?
How does LS help in solving sets of linear equations?

Consider the normal equations:

$$Ax = b \ \Rightarrow\ \underbrace{A^H A}_{N \times N}\, x = A^H b, \qquad M > N$$

If A^H A is of rank N, LS delivers a unique solution, as A^H b always lies in R(A^H) = R(A^H A) (Lemma 4.2)!

If rank(A^H A) < N, LS cannot solve the problem → regularisation.
How does LS help in solving sets of linear equations?

If rank(A^H A) < N, LS cannot solve the problem → regularisation:

$$Ax = b \ \Rightarrow\ \underbrace{\big(A^H A + \varepsilon I\big)}_{N \times N}\, x = A^H b, \qquad M > N,\ \varepsilon > 0$$

A small positive ε guarantees that the set of equations is solvable. The solution may be different, though!
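A small sketch of this regularisation on a rank-deficient problem (the value ε = 1e-6 is an arbitrary illustrative choice):

```python
import numpy as np

rng = np.random.default_rng(2)
col = rng.standard_normal(6)
# N = 3 columns, but column 2 duplicates column 1 -> rank 2.
A = np.column_stack([col, 2 * col, rng.standard_normal(6)])
b = rng.standard_normal(6)

G = A.T @ A
print(np.linalg.matrix_rank(G))  # 2 < N: the normal equations are singular

eps = 1e-6
x_reg = np.linalg.solve(G + eps * np.eye(3), A.T @ b)  # always solvable for eps > 0
assert np.all(np.isfinite(x_reg))
```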
How does LS help in solving sets of linear equations?

Example 4.18 (cont.): Consider

$$Ax = b: \quad \begin{pmatrix} a_{11} & \cdots & a_{1N} \\ \vdots & & \vdots \\ a_{M1} & \cdots & a_{MN} \end{pmatrix} \begin{pmatrix} x_1 \\ \vdots \\ x_N \end{pmatrix} = \begin{pmatrix} b_1 \\ \vdots \\ b_M \end{pmatrix}, \qquad M < N,$$

which we know as an underdetermined problem.
How does LS help in solving sets of linear equations?

Underdetermined LS delivers:

$$Ax = b \ \Rightarrow\ x_{LS} = A^H \big(A A^H\big)^{-1} b, \qquad M < N$$

If rank(AA^H) = M, the solution is the Minimum Norm LS solution. If rank(AA^H) < M, the inverse cannot be computed → regularization.
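A sketch of the minimum-norm solution, cross-checked against numpy's pseudoinverse:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((2, 5))   # M = 2 < N = 5, full row rank
b = rng.standard_normal(2)

x_mn = A.T @ np.linalg.solve(A @ A.T, b)   # A^H (A A^H)^{-1} b
x_pinv = np.linalg.pinv(A) @ b

assert np.allclose(A @ x_mn, b)       # it solves the system
assert np.allclose(x_mn, x_pinv)      # and matches the pseudoinverse solution
```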
Factorisation

A linear system of equations Ax = b can be solved in many different ways. In most cases the numerical precision is an important factor for the quality of the result.

There are numerous methods for matrix equations that convert the general problem Ax = b into an equivalent problem Bx = c in which the matrix B exhibits particular properties so that the system can be solved easily.

In the following we briefly present the major ideas without going into the details of each method.
Factorisation

LU: stands for "lower-" and "upper-triangular". A = LU is easier to solve since LUx = b splits into Lc = b and Ux = c, i.e., two triangular systems of equations that are easy to solve.

Cholesky: a particular case of the LU factorisation for Hermitian, positive-definite matrices: A = LU = LL^H. In general such matrices can be decomposed into LDL^H, UDU^H or QDQ^H. Here, Q is a unitary matrix, QQ^H = I, and D is a diagonal matrix. The form QDQ^H is called an eigenvalue decomposition.

QR: A = QR. Here, Q is a unitary matrix, QQ^H = I, and R = U is an upper triangular matrix. A = QR is easier to solve since QRx = b splits into Qc = b, i.e., c = Q^H b, and Rx = c: two sets of equations that are easy to solve.
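The QR solution path can be sketched as follows (numpy's `qr` assumed; the triangular solve is done with a generic solver for brevity):

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4))
b = rng.standard_normal(4)

Q, R = np.linalg.qr(A)      # A = QR, Q unitary, R upper triangular
c = Q.T @ b                 # step 1: c = Q^H b (real case: transpose)
x = np.linalg.solve(R, c)   # step 2: Rx = c; R triangular, so back-substitution suffices

assert np.allclose(A @ x, b)
```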
Factorisation

SVD: Singular Value Decomposition. A = UΣV^H with two unitary matrices U and V and the diagonal matrix Σ.

In the following we treat the eigenvalue decomposition first and then present the singular value decomposition.
Eigenvalue Decomposition

Let A be an m×m matrix over C. Consider the linear equation

Au = λu,

or equivalently (A − λI)u = 0.

Here the trivial solution u = 0 is not of interest, but the nullspace of (A − λI) is.

Particular values λ that generate non-trivial nullspaces are called eigenvalues; the corresponding vectors u are called eigenvectors.
Eigenvalue Decomposition

Definition 4.17: The polynomial in λ generated by the determinant of (A − λI) is called the characteristic polynomial.

The equation det(A − λI) = 0 is called the characteristic equation of A.

The roots of the characteristic equation are called eigenvalues. The set of all eigenvalues is called the spectrum of A.
Eigenvalue Decomposition

Example 4.19: Let a linear, time-invariant system be described by the following state-space equations:

$$z_{k+1} = A z_k + B x_k, \qquad y_k = C z_k$$

$$y_k = H(q)\, x_k, \qquad H(q) = C\,(qI - A)^{-1} B$$

Since the matrix inversion of (qI − A) determines the dynamic and stability behavior of the system, so does its determinant det(λI − A).
Eigenvalue Decomposition

Lemma 4.3: If the eigenvalues of a matrix A are all different, then the corresponding eigenvectors are all linearly independent.

Proof: We start with m = 2 and assume the opposite: let the eigenvectors u1 and u2 be linearly dependent, i.e.,

$$c_1 u_1 + c_2 u_2 = 0$$

Applying A:

$$A(c_1 u_1 + c_2 u_2) = c_1 \lambda_1 u_1 + c_2 \lambda_2 u_2 = 0$$

Multiplying the first equation by λ2 and subtracting:

$$c_1 (\lambda_1 - \lambda_2)\, u_1 = 0$$

Since λ1 and λ2 are different and u1 is not the zero vector, we must have c1 = 0.
Eigenvalue Decomposition

A similar argument leads to c2 = 0. This proves that the two eigenvectors are linearly independent. For m > 2 we always consider the case that two vectors are linearly dependent and derive the contradiction.

If the eigenvalues are not all different, the eigenvectors can be linearly dependent or not. Consider the following matrices:

$$A = \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix}; \qquad B = \begin{pmatrix} 4 & 1 \\ 0 & 4 \end{pmatrix}$$
Eigenvalue Decomposition

Consider the decomposition A = UΛU^{-1} with the diagonal matrix Λ. Let's assume first that A has n linearly independent eigenvectors. Then we have:

[Au1, Au2, ..., Aun] = [λ1u1, λ2u2, ..., λnun] = AU = UΛ.

If the eigenvectors are linearly independent, then U can be inverted, and we find:

A = UΛU^{-1}
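The diagonalisation can be checked numerically (a sketch for a small matrix with distinct eigenvalues):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 3.0]])
lam, U = np.linalg.eig(A)   # columns of U are eigenvectors

# A U = U Lambda
assert np.allclose(A @ U, U @ np.diag(lam))
# Distinct eigenvalues -> U invertible -> A = U Lambda U^{-1}
assert np.allclose(A, U @ np.diag(lam) @ np.linalg.inv(U))
```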
Eigenvalue Decomposition

Remark: Such matrix transformations, in which the eigenvalues are not changed, are called similarity transformations (Ger.: Ähnlichkeitstransformation). Two matrices are called similar if they have the same eigenvalues.

Advantage of the transformation:
$$A = U \Lambda U^{-1}, \qquad A^m = U \Lambda^m U^{-1}$$

$$f(A) = \sum_i f_i A^i = U \Big( \sum_i f_i \Lambda^i \Big) U^{-1} = U f(\Lambda)\, U^{-1}$$

$$e^A = \sum_{i=0}^{\infty} \frac{1}{i!} A^i = U e^{\Lambda}\, U^{-1}$$
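As a sketch of this advantage, a matrix function can be evaluated on the eigenvalues alone; here f = exp via the eigendecomposition, checked against a truncated power series:

```python
import numpy as np

A = np.array([[1.0, 1.0],
              [0.0, 2.0]])
lam, U = np.linalg.eig(A)
expA = U @ np.diag(np.exp(lam)) @ np.linalg.inv(U)   # U e^Lambda U^{-1}

# Truncated power series sum_i A^i / i! for comparison.
S = np.zeros_like(A)
term = np.eye(2)
for i in range(1, 30):
    S += term
    term = term @ A / i
assert np.allclose(expA, S)
```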
Jordan Form

If the eigenvectors are not linearly independent, a diagonalisation is not possible! However, a close-to-diagonal form is possible, the so-called Jordan form:

A = TJT^{-1}.

Here, the matrix J is of block-diagonal form with blocks J_i:

$$J_i = \begin{pmatrix} \lambda_i & 1 & & \\ & \lambda_i & \ddots & \\ & & \ddots & 1 \\ & & & \lambda_i \end{pmatrix}$$
Jordan Form

Example 4.22: Consider the following matrix:

$$B = \begin{pmatrix} 3 & 0 & 1 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix}$$

It has a single eigenvalue λ = 3 and two linearly independent eigenvectors:

$$u_1 = [1, 0, 0]^T; \qquad u_2 = [0, 1, 0]^T$$
Jordan Form

Still Example 4.22: Thus, the Jordan form of the matrix B becomes:

$$J(B) = \begin{pmatrix} 3 & 1 & 0 \\ 0 & 3 & 0 \\ 0 & 0 & 3 \end{pmatrix}; \qquad T = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 1 \\ 0 & 1 & 0 \end{pmatrix}$$

We have B = TJT^{-1} and hence B^m = TJ^mT^{-1}. However, J^m is not diagonal or of Jordan form!
Minimal Polynomial

Theorem 4.8 (Cayley-Hamilton): Each square matrix satisfies its own characteristic equation.

Definition 4.18: A polynomial f is called an annihilating polynomial of a square matrix A if f(A) = 0.

Definition 4.19: The annihilating (monic) polynomial of A with smallest degree is called the minimal polynomial of A.
Minimal Polynomial

Example 4.23: Consider the following matrices:

$$A_1 = \begin{pmatrix} 4 & 0 \\ 0 & 4 \end{pmatrix}; \quad A_2 = \begin{pmatrix} 5 & 1 \\ 0 & 5 \end{pmatrix}; \quad A_3 = \begin{pmatrix} 6 & 1 & 0 \\ 0 & 6 & 1 \\ 0 & 0 & 6 \end{pmatrix}$$

The corresponding minimal polynomials are:

$$f_1(x) = x - 4; \quad f_2(x) = (x - 5)^2; \quad f_3(x) = (x - 6)^3$$

We recognize the relation to the size of the Jordan blocks.
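These minimal polynomials can be checked by direct substitution (a sketch; A2 and A3 are taken as Jordan blocks of sizes 2 and 3):

```python
import numpy as np

I2, I3 = np.eye(2), np.eye(3)
A1 = np.array([[4.0, 0.0], [0.0, 4.0]])
A2 = np.array([[5.0, 1.0], [0.0, 5.0]])
A3 = np.array([[6.0, 1.0, 0.0],
               [0.0, 6.0, 1.0],
               [0.0, 0.0, 6.0]])

assert np.allclose(A1 - 4 * I2, 0)                               # f1(A1) = 0
assert np.allclose(np.linalg.matrix_power(A2 - 5 * I2, 2), 0)    # f2(A2) = 0
assert not np.allclose(A2 - 5 * I2, 0)                           # degree 1 is not enough
assert np.allclose(np.linalg.matrix_power(A3 - 6 * I3, 3), 0)    # f3(A3) = 0
assert not np.allclose(np.linalg.matrix_power(A3 - 6 * I3, 2), 0)
```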
Projections

We already found that the overdetermined LS problem Ax = b can be described by linearly filtering the observation vector:

$$\hat b = A \big(A^H A\big)^{-1} A^H b \in R(A)$$

$$e = b - \hat b = \Big(I - A \big(A^H A\big)^{-1} A^H\Big) b \in N(A^H)$$

Let's assume b can be described by two parts: b = Ay + z. The first part is in the column space R(A) of A and z is in its orthogonal complement, thus in N(A^H).
Projections

Thus we have for the LS estimate:

$$\hat b = A \big(A^H A\big)^{-1} A^H (Ay + z) = Ay$$

$$e_{LS} = \Big(I - A \big(A^H A\big)^{-1} A^H\Big)(Ay + z) = z$$

This also explains why the error e_LS and the LS estimate are orthogonal: they are from orthogonal complements.

This also means that the two projections P and I − P decompose a vector into components in two orthogonal complements.
Projections

[Figure: geometric picture of the LS decomposition b = Ay + z, with b̂ = A(A^H A)^{-1}A^H(Ay + z) = Ay ∈ R(A) and e_LS = z ∈ R(A)^⊥ = N(A^H).]
Example

The two signals a and b are to be transmitted. For this, three-component vectors [u, v, w] are available. If a and b are transmitted in the form [a, b, a+b], then they span a complete subspace V of R³.

Disturbed by additive noise, the vector x = [1, 2, 4] is received. Which is the closest vector to [1, 2, 4] in V? Is it [1, 2, 3], [1, 3, 4], [2, 2, 4], or something different?
Example

Consider the LS solution of the problem. The subspace V is spanned by the two vectors [1,0,1] and [0,1,1]. We can thus make use of the projection property of the LS solution:

$$A = \begin{pmatrix} 1 & 0 \\ 0 & 1 \\ 1 & 1 \end{pmatrix}; \qquad v = A \big(A^H A\big)^{-1} A^H x = [1.3333,\ 2.3333,\ 3.6667]^T$$
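The projection computation above, done explicitly as a sketch:

```python
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 1.0],
              [1.0, 1.0]])
x = np.array([1.0, 2.0, 4.0])

P = A @ np.linalg.inv(A.T @ A) @ A.T   # orthogonal projector onto V = R(A)
v = P @ x
print(v)  # approximately [1.3333, 2.3333, 3.6667]: none of the guessed vectors

assert np.allclose(A.T @ (x - v), 0)        # residual is orthogonal to V
assert np.allclose(v, [4/3, 7/3, 11/3])
```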
Projections

[Figure: the received vector [1,2,4] split by the projectors A(A^H A)^{-1}A^H and I − A(A^H A)^{-1}A^H:]

$$[1,2,4] = \underbrace{\tfrac{4}{3}[1,0,1] + \tfrac{7}{3}[0,1,1]}_{\hat b\, \in\, R(A)} + \underbrace{\tfrac{1}{3}[-1,-1,1]}_{e_{LS}\, \in\, R(A)^{\perp}}$$
Unitary Matrices

Definition 4.21: A matrix with the property U^H U = I is called (semi-)unitary (Ger.: unitär). A matrix with U^T U = I is called orthogonal.

Note: if U is an n×n matrix, it follows that U^H = U^{-1}.
Hermitian Matrices

Definition 4.20: If A = A^T for A over R, the matrix A is called symmetric (Ger.: symmetrisch). If A = A^H for A over C, the matrix A is called Hermitian (Ger.: Hermitesch).

Such matrices naturally occur in the form of covariance matrices R_xx = E[xx^H] or when solving LS problems.

Lemma 4.4: The eigenvalues of Hermitian matrices are real-valued.

Proof: ⟨Au, u⟩ = λ⟨u, u⟩ = ⟨u, A^H u⟩ = ⟨u, Au⟩ = λ*⟨u, u⟩ ⇒ λ* = λ.
Hermitian Matrices

Lemma 4.5: The eigenvectors to different eigenvalues of Hermitian matrices are orthogonal.

Proof: Let λ1 and λ2 be two different eigenvalues with corresponding eigenvectors u1 and u2. Then we have ⟨Au1, u2⟩ = ⟨u1, A^H u2⟩ = ⟨u1, Au2⟩ = ⟨u1, λ2u2⟩ = λ2⟨u1, u2⟩, and at the same time ⟨Au1, u2⟩ = λ1⟨u1, u2⟩; thus (λ2 − λ1)⟨u1, u2⟩ = 0 and therefore ⟨u1, u2⟩ = 0.

Lemma 4.6: Every Hermitian n×n matrix A can be diagonalized by a unitary matrix U. The unitary matrix U simply consists of orthonormal eigenvectors:

$$A = U \Lambda U^H = \sum_i \lambda_i u_i u_i^H$$
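Lemmas 4.4-4.6 can be verified numerically on a random Hermitian matrix (a sketch using numpy's Hermitian eigensolver `eigh`):

```python
import numpy as np

rng = np.random.default_rng(5)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = M + M.conj().T                # Hermitian by construction

lam, U = np.linalg.eigh(A)        # eigh: eigendecomposition for Hermitian matrices

assert np.allclose(lam.imag, 0)                   # eigenvalues are real
assert np.allclose(U.conj().T @ U, np.eye(4))     # eigenvectors are orthonormal
# Spectral decomposition A = sum_i lambda_i u_i u_i^H
A_rec = sum(lam[i] * np.outer(U[:, i], U[:, i].conj()) for i in range(4))
assert np.allclose(A, A_rec)
```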
Subspace Techniques

Definition 4.22: Let A be an m×m matrix and S a subspace of R(A). S is called an invariant subspace of A if for every x from S, Ax is also in S.

Example 4.25: Let an n×n matrix A have k (smaller than n) different eigenvalues with the corresponding eigenvectors u_i, i = 1, 2, ..., n. Let U = [u_1, u_2, ..., u_n] and let U_i, i = 1, 2, ..., k, be the k subsets of eigenvectors corresponding to the k eigenvalues λ_i, i = 1, 2, ..., k. The subspaces span(U_i) spanned by the subsets U_i are invariant subspaces of A.
Example 4.25

For example, consider a 6×6 matrix with:

λ1 has one eigenvector u1 → U1 = [u1]
λ2 has two eigenvectors u2 and u3 → U2 = [u2, u3]
λ3 has three eigenvectors u4, u5 and u6 → U3 = [u4, u5, u6]
Example 4.25

Simplified, let u2 and u3 be eigenvectors to λ2 and S = span{u2, u3} ⊂ R(A). For x = αu2 + βu3 ∈ S:

$$Ax = \alpha A u_2 + \beta A u_3 = \alpha \lambda_2 u_2 + \beta \lambda_2 u_3 = \lambda_2 (\alpha u_2 + \beta u_3) \in S$$

Thus span{u2, u3} is an invariant subspace for any linear combination of u2 and u3.
Example: "Hotel California" by the Eagles (1976). A = check out of hotel, x = any hotel customer, S = set of all hotel customers. Ax: S → R(A).

"You can check out any time you like, but you can never leave."
Subspace Techniques

Theorem 4.9: Let A be an n×n Hermitian matrix with k (at most n) different eigenvalues. Then we have:

spectral decomposition: $\quad A = \sum_{i=1}^{k} \lambda_i P_i$

identity: $\quad I = \sum_{i=1}^{k} P_i$

with: $\quad P_i = \sum_{u_j \in U_i} u_j u_j^H$

The matrices P_i are projection matrices onto the (invariant) subspaces span(U_i), spanned by the normalized eigenvectors u_j.
Example 4.25 again

For example, λ1 has one eigenvector u1, λ2 has two eigenvectors u2 and u3, and λ3 has three eigenvectors u4, u5 and u6:

$$A = \lambda_1 \underbrace{u_1 u_1^H}_{P_1} + \lambda_2 \underbrace{\left( u_2 u_2^H + u_3 u_3^H \right)}_{P_2} + \lambda_3 \underbrace{\left( u_4 u_4^H + u_5 u_5^H + u_6 u_6^H \right)}_{P_3}$$
Subspace Techniques

Proof: We already know that Hermitian matrices can be diagonalized by unitary matrices, thus:

$$A = U \Lambda U^H = \sum_{i=1}^{n} \lambda_i u_i u_i^H = \sum_{i=1}^{k} \lambda_i P_i$$

Note we have $U U^H = I$:

$$U U^H = \sum_{i=1}^{n} u_i u_i^H = \sum_{i=1}^{k} P_i = I$$
Subspace Techniques

Consider the reaction of a Hermitian matrix A applied to an arbitrary vector x:

Ax = Σ_{i=1}^{k} λ_i P_i x = Σ_{i=1}^{k} λ_i Σ_{u_j ∈ U_i} u_j (u_j^H x)

where u_j^H x is the projection of x onto the various components of the subspace and λ_i is the stretching of these components. An operator A can thus be decomposed into smaller (partial) operations called projections.
Subspace Techniques

Example 4.26: Consider the matrix

A = [ 6  2
      2  9 ]

with the two eigenvalues λ_1 = 5 and λ_2 = 10 and the corresponding eigenvectors

u_1 = (1/√5) [ 2 ; -1 ],   u_2 = (1/√5) [ 1 ; 2 ]

P_1 = u_1 u_1^T = (1/5) [ 4  -2 ; -2  1 ],   P_2 = u_2 u_2^T = (1/5) [ 1  2 ; 2  4 ]

A = 5 P_1 + 10 P_2
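Example 4.26 can be verified numerically; a minimal sketch with NumPy, checking both the spectral decomposition and the identity decomposition from Theorem 4.9:

```python
# Numerical check of Example 4.26: spectral decomposition of a
# 2x2 symmetric matrix into weighted projection matrices.
import numpy as np

A = np.array([[6.0, 2.0],
              [2.0, 9.0]])
lam, U = np.linalg.eigh(A)          # eigenvalues in ascending order: 5, 10

# One rank-1 projector per eigenvalue (all eigenvalues are simple here)
P = [np.outer(U[:, i], U[:, i]) for i in range(2)]

assert np.allclose(lam, [5.0, 10.0])
assert np.allclose(P[0] + P[1], np.eye(2))            # identity decomposition
assert np.allclose(lam[0]*P[0] + lam[1]*P[1], A)      # spectral decomposition
assert np.allclose(P[0] @ P[0], P[0])                 # idempotent: a projector
```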
Subspace Techniques

Example 4.27: Consider a weakly stationary random process with Hermitian autocorrelation matrix Rxx. The diagonalization of Rxx leads to:

Rxx = E[x x^H] = U Λ U^H

Consider y = U^H x:

Ryy = E[y y^H] = U^H E[x x^H] U = Λ   ("decorrelation" by the unitary U)

Considering the eigenvalues, one realizes that some can be extremely small and thus contribute little to the ACF matrix. They can be neglected, approximating the process. Describing the correlation by a few strong eigenvalues gives the Karhunen-Loève description of random processes x.

Kari Karhunen (1915-1992) was a Finnish mathematical statistician. Michel Loève (22.1.1907-17.2.1979) was a French-American mathematical statistician.
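The decorrelation in Example 4.27 can be sketched numerically; the covariance matrix below is an illustrative assumption, not from the slides:

```python
# Sketch of the Karhunen-Loeve idea: rotating a random vector by the
# eigenvectors of its covariance matrix decorrelates its components.
import numpy as np

rng = np.random.default_rng(0)
Rxx = np.array([[2.0, 0.9],
                [0.9, 1.0]])              # assumed covariance of x
L = np.linalg.cholesky(Rxx)
x = L @ rng.standard_normal((2, 100000))  # samples with covariance ~ Rxx

lam, U = np.linalg.eigh(Rxx)
y = U.conj().T @ x                        # y = U^H x
Ryy = y @ y.conj().T / x.shape[1]         # sample covariance of y

assert np.allclose(Ryy, np.diag(lam), atol=0.05)   # ~ diagonal = Lambda
```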
Subspace Techniques

Consider the expression

x^H A x = Σ_{i=1}^{k} λ_i x^H P_i x = Σ_{i=1}^{k} λ_i Σ_{u_j ∈ U_i} x^H u_j u_j^H x

Selecting the various eigenvectors x = u_n, we obtain the corresponding eigenvalues λ_n.
Subspace Techniques

Set x = u_max (x = u_min), the normalized eigenvector belonging to the largest (smallest) eigenvalue, in

x^H A x = Σ_{i=1}^{k} λ_i Σ_{u_j ∈ U_i} x^H u_j u_j^H x

For x = u_max only the term with u_max survives:

u_max^H A u_max = λ_max u_max^H u_max u_max^H u_max = λ_max (u_max^H u_max)^2 = λ_max

For the eigenvector of the largest eigenvalue we thus obtain the largest eigenvalue, and so on. In general:

max_x (x^H A x)/(x^H x) = λ_max ;   min_x (x^H A x)/(x^H x) = λ_min
Subspace Techniques

Definition 4.23: The expression

r(A, x) = (x^H A x)/(x^H x)

is called the Rayleigh quotient. It satisfies

λ_min ≤ r(A, x) ≤ λ_max

Note that such an expression only makes sense when applied to Hermitian matrices.
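The Rayleigh-quotient bounds can be checked empirically; the Hermitian test matrix below is an arbitrary assumption:

```python
# Empirical check of Definition 4.23: the Rayleigh quotient of a
# Hermitian matrix is always bounded by the extreme eigenvalues.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = B + B.conj().T                      # Hermitian test matrix (an assumption)
lam = np.linalg.eigvalsh(A)             # real, ascending

def rayleigh(A, x):
    return (x.conj() @ A @ x).real / (x.conj() @ x).real

for _ in range(1000):
    x = rng.standard_normal(4) + 1j * rng.standard_normal(4)
    r = rayleigh(A, x)
    assert lam[0] - 1e-9 <= r <= lam[-1] + 1e-9
```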
Example 4.28: Eigenfilter

The matched filter (Ger.: Signalangepasstes Filter) is well known to maximize the signal-to-noise ratio for deterministic signals. If, however, the maximal signal-to-noise ratio of random signals is considered, we speak of an eigenfilter.
Eigenfilter

Consider a filter h whose input is the random signal x_k corrupted by noise v_k. For the output we have:

y_k = Σ_m h_m (x_{k-m} + v_{k-m}) = h^T x_k + h^T v_k

Received signal power:  P = h^T E[x_k x_k^H] h* = h^H Rxx h
Noise power:            N = h^H E[v_k v_k^H] h = σ_v^2 h^H h

Maximize the signal-to-noise ratio:

max_h P/N = max_h (h^H Rxx h)/(σ_v^2 h^H h) = λ_max(Rxx)/σ_v^2

The optimal solution is the eigenvector belonging to λ_max.
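A minimal numerical sketch of the eigenfilter result: the SNR h^H Rxx h / (σ_v² h^H h) is maximized by the eigenvector of Rxx belonging to its largest eigenvalue. The matrix Rxx and the noise power below are illustrative assumptions:

```python
# Eigenfilter sketch: the principal eigenvector of Rxx maximizes the SNR.
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
Rxx = B @ B.T                      # assumed signal autocorrelation matrix (PSD)
sigma_v2 = 0.1                     # assumed noise power

lam, U = np.linalg.eigh(Rxx)
h_opt = U[:, -1]                   # eigenvector for the largest eigenvalue

def snr(h):
    return (h @ Rxx @ h) / (sigma_v2 * (h @ h))

assert np.isclose(snr(h_opt), lam[-1] / sigma_v2)
for _ in range(500):               # no random filter beats the eigenfilter
    h = rng.standard_normal(5)
    assert snr(h) <= snr(h_opt) + 1e-9
```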
Filter Example 4.29

This technique can be used for filter design. Let us design a linear-phase filter with 2N+1 coefficients for which the magnitude response (Ger.: Amplitudengang) Hd(e^{jΩ}) is given. A low-pass filter is to be designed with band-edge frequencies Ω_p, Ω_s so that

Hd(e^{jΩ}) = 1 for 0 ≤ Ω ≤ Ω_p ;   0 for Ω_s ≤ Ω ≤ π

H(e^{jΩ}) = e^{-jNΩ} H_R(e^{jΩ}) = e^{-jNΩ} Σ_{n=0}^{N} b_n cos(nΩ) = e^{-jNΩ} b^T c(Ω)

with b = [b_0, ..., b_N]^T and c(Ω) = [1, cos(Ω), ..., cos(NΩ)]^T.
Filter Example 4.29

In the stopband (Ger.: Sperrbereich) we have:

E_S = (1/π) ∫_{Ω_s}^{π} | Hd(e^{jΩ}) - H_R(e^{jΩ}) |^2 dΩ = (1/π) ∫_{Ω_s}^{π} b^T c(Ω) c^T(Ω) b dΩ = b^T P b

where we introduced the matrix P:

P_ij = (1/π) ∫_{Ω_s}^{π} cos(iΩ) cos(jΩ) dΩ
Filter Example 4.29

In the passband (Ger.: Durchlassbereich) we have for Ω = 0: Hd(e^{j0}) = 1, or equivalently b^T 1 = 1. The error is given by 1 - b^T c(Ω) = b^T [1 - c(Ω)]. The resulting error energy is to be minimized:

E_P = (1/π) ∫_{0}^{Ω_p} b^T [1 - c(Ω)] [1 - c(Ω)]^T b dΩ = b^T Q b

The entire filter problem is thus given by:

J = α E_S + (1-α) E_P = α b^T P b + (1-α) b^T Q b = b^T R b ;   0 < α < 1
Filter Example 4.29

Obviously, there is still freedom in the choice of b. We can restrict it by normalizing in the form b^T b = 1. The filter problem is then: minimize b^T R b with the constraint b^T b = 1, where

R = α P + (1-α) Q

Or, equivalently:

min_b b^T R b = λ_min(R)
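A minimal sketch of this eigenfilter design: build P and Q by numerical integration on a grid, combine them into R = αP + (1-α)Q, and take the eigenvector of the smallest eigenvalue. The order N, band edges, and α below are illustrative assumptions:

```python
# Eigenfilter low-pass design sketch (assumed N, band edges, alpha).
import numpy as np

N = 12
Wp, Ws, alpha = 0.3 * np.pi, 0.5 * np.pi, 0.5
n = np.arange(N + 1)

def c(W):                                   # c(W) = [1, cos W, ..., cos NW]^T
    return np.cos(np.outer(n, np.atleast_1d(W)))

K = 4000
Wstop = np.linspace(Ws, np.pi, K); dWs = Wstop[1] - Wstop[0]
Wpass = np.linspace(0.0, Wp, K);   dWp = Wpass[1] - Wpass[0]

Cs = c(Wstop)
P = (Cs @ Cs.T) * dWs / np.pi               # P_ij ~ (1/pi) int cos(iW)cos(jW) dW
E = 1.0 - c(Wpass)                          # 1 - c(W) on the passband grid
Q = (E @ E.T) * dWp / np.pi

R = alpha * P + (1 - alpha) * Q
lam, V = np.linalg.eigh(R)
b = V[:, 0]                                 # minimizes b^T R b under b^T b = 1

H = lambda W: float(abs(b @ c(W))[0])       # magnitude of b^T c(W)
assert H(0.0) > 5 * H(np.pi)                # roughly low-pass behaviour
```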
Further Subspace Techniques

Many modern DSP techniques are based on subspace methods. The most relevant are:

Pisarenko's Harmonic Decomposition (PHD)
MUSIC
ESPRIT

We will have a closer look at them in the following.
Further Subspace Techniques

Consider the following signal model:

x(t) = Σ_{i=1}^{p} a_i e^{j2πf_i t} + w(t)

with unknown random amplitudes a_i from C, unknown frequencies f_i, unknown model order p, and white noise w(t) of unknown variance σ_w^2.
Further Subspace Techniques

We assume the amplitudes a_i and the noise to be random; x(t) thus becomes a random process. We sample the signal equidistantly at M (M > p) positions and obtain:

x = Σ_{i=1}^{p} a_i s_i + w
s_i = [1, e^{j2πf_i T}, e^{j2πf_i 2T}, ..., e^{j2πf_i (M-1)T}]^T
Rxx = E[x x^H] = Σ_{i=1}^{p} E[|a_i|^2] s_i s_i^H + E[w w^H] = S P S^H + Rww   (Hermitian)
Further Subspace Techniques

Connection to LS:

x = S a + w ;   s_i = [1, e^{j2πf_i T}, e^{j2πf_i 2T}, ..., e^{j2πf_i (M-1)T}]^T
Rxx = S P S^H + Rww

LS:  min_{a, S | p} || x - S a ||_2^2

which is not exactly the same!
Further Subspace Techniques

A bit more detailed:

Rxx = Σ_{i=1}^{p} E[|a_i|^2] s_i s_i^H + Rww
    = [s_1 s_2 ... s_p] diag( E[|a_1|^2], E[|a_2|^2], ..., E[|a_p|^2] ) [s_1 s_2 ... s_p]^H + Rww
    = S P S^H + Rww

Note that S is a Vandermonde matrix.
Further Subspace Techniques

The vector space span{s_1,...,s_p}, spanned by the columns of S, is a subspace of the space of the signal x_k. It is called the signal subspace. Setting the noise to zero, we find only p < M eigenvalues different from zero, with corresponding eigenvectors (Ger.: Haupteigenvektoren):

Rxx(σ_w = 0) = Σ_{i=1}^{p} λ_i u_i u_i^H
span{s_1, s_2, ..., s_p} = span{u_1, u_2, ..., u_p}
Further Subspace Techniques

Back to LS:

S = [s_1, s_2, ..., s_p] ;  x = S a + w  →  min_{a, S | p} || x - S a ||_2^2
U = [u_1, u_2, ..., u_p] ;  x = U b + w  →  min_{b, U | p} || x - U b ||_2^2

An alternative method to find σ_w^2 and p!
Further Subspace Techniques

Complement U:

x = U b + w ;  U = [u_1, u_2, ..., u_p]  →  min_{b, U | p} || x - U b ||_2^2
Ũ = [U, U~]  →  min_{b | Ũ} || x - Ũ b ||_2^2   s.th. ||b||_0 = p

Find p by a sparsity method.
Further Subspace Techniques

We thus have the possibility to find the signal subspace from Rxx without knowing the various frequencies f_i. In a second step we can determine the unknown frequencies. If we further assume white noise w(t), we have:

Rxx = S P S^H + σ_w^2 I
Further Subspace Techniques

We recognize that the noise increases the eigenvalues by σ_w^2 without changing the eigenvectors. Note that next to the p signal eigenvalues, the M-p remaining eigenvalues now take on the value σ_w^2, with corresponding eigenvectors u_{p+1},...,u_M. These eigenvectors are solely determined by the noise. We call

N = span{u_{p+1}, u_{p+2}, ..., u_M}

the noise subspace. Note that every (eigen)vector of the signal subspace is orthogonal to every (eigen)vector of the noise subspace. Pisarenko recommends taking just one noise vector: M = p+1.
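The eigenvalue shift by σ_w² can be illustrated numerically; the frequencies, powers, and dimensions below are illustrative assumptions:

```python
# White noise shifts every eigenvalue of Rxx = S P S^H + sigma_w^2 I
# up by sigma_w^2 while leaving the eigenvectors unchanged.
import numpy as np

M, T = 6, 1.0
f = np.array([0.1, 0.27])            # assumed frequencies, p = 2
powers = np.array([1.0, 2.0])        # assumed powers E[|a_i|^2]
sigma_w2 = 0.05

m = np.arange(M)
S = np.exp(2j * np.pi * np.outer(m, f) * T)   # M x p Vandermonde matrix
Ryy = S @ np.diag(powers) @ S.conj().T        # noise-free part
Rxx = Ryy + sigma_w2 * np.eye(M)

lam_y = np.linalg.eigvalsh(Ryy)
lam_x = np.linalg.eigvalsh(Rxx)
assert np.allclose(lam_x, lam_y + sigma_w2)   # uniform shift by sigma_w^2
assert np.allclose(lam_x[:M - 2], sigma_w2)   # M - p noise eigenvalues
```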
Pisarenko's Harmonic Decomposition

In a first step, the eigenvalues and corresponding eigenvectors are computed from the ACF matrix Rxx. With them, we can determine p and σ_w^2. Since the eigenvectors of the noise subspace are orthogonal to those of the signal subspace, we conclude that:

s_i^H u_k = 0 ;  for i = 1,2,...,p and k = p+1, p+2, ..., M

that is,

[u_{p+1}, u_{p+2}, ..., u_M]^H [s_1, s_2, ..., s_p] = 0

This in turn provides (M-p) polynomials for the unknown frequencies.
Pisarenko's Harmonic Decomposition

We thus have

[u_{p+1}, u_{p+2}, ..., u_M]^H [s_1, s_2, ..., s_p] = 0

with s_i = [1, e^{j2πf_i T}, e^{j2πf_i 2T}, ..., e^{j2πf_i (M-1)T}]^T.

Take for example the first row:

u_{p+1}^H [s_1, s_2, ..., s_p] = 0
Σ_{m=0}^{M-1} u*_{p+1,m} exp(j2πf_i mT) = 0

u is given, so the f_i can be determined.
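A sketch of Pisarenko's recipe with M = p + 1: the noise eigenvector defines a polynomial whose roots lie at exp(-j2πf_i T) on the unit circle. Frequencies and powers below are illustrative assumptions:

```python
# Pisarenko harmonic decomposition sketch: recover frequencies from the
# roots of the noise eigenvector's polynomial (M = p + 1).
import numpy as np

p, M, T = 2, 3, 1.0
f_true = np.array([0.1, 0.3])
powers = np.array([1.0, 2.0])
sigma_w2 = 0.1

m = np.arange(M)
S = np.exp(2j * np.pi * np.outer(m, f_true) * T)
Rxx = S @ np.diag(powers) @ S.conj().T + sigma_w2 * np.eye(M)

lam, U = np.linalg.eigh(Rxx)        # ascending: noise eigenvalue first
u = U[:, 0]                         # the single noise eigenvector

# s_i^H u = 0  <=>  sum_m u_m z^m = 0 at z = exp(-j 2 pi f_i T)
roots = np.roots(u[::-1])           # np.roots wants highest power first
f_est = np.sort((-np.angle(roots) / (2 * np.pi)) % 1.0)
assert np.allclose(f_est, f_true, atol=1e-6)
assert np.isclose(lam[0], sigma_w2)            # noise power recovered
```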
Pisarenko's Harmonic Decomposition

For the ACF we have:

Rxx = Σ_{i=1}^{p} E[|a_i|^2] s_i s_i^H + Rww = S P S^H + Rww = Ryy + Rww

Take for example the first row of the ACF matrix:

r_xx(m) = r_yy(m) + σ_w^2 δ_m = Σ_{k=1}^{p} p_k exp(j2πf_k mT) + σ_w^2 δ_m ;   m = 0,1,...,M-1
Pisarenko's Harmonic Decomposition

Written out, with the M×p Vandermonde matrix S and P = diag(p_1, ..., p_p):

Rxx = Σ_{i=1}^{p} p_i s_i s_i^H + Rww = S P S^H + Rww = Ryy + Rww

S = [ 1                   1                   ...  1
      e^{j2πf_1 T}        e^{j2πf_2 T}        ...  e^{j2πf_p T}
      ...
      e^{j2πf_1 (M-1)T}   e^{j2πf_2 (M-1)T}   ...  e^{j2πf_p (M-1)T} ]

Rxx = [ r_xx(0)     r_xx(1)     ...  r_xx(M-1)
        r_xx(1)     r_xx(0)     ...
        ...
        r_xx(M-1)   r_xx(M-2)   ...  r_xx(0) ]
Pisarenko's Harmonic Decomposition

Classic solution: Eventually, we can determine the powers p_i = E[|a_i|^2]. Since x_k = y_k + w_k, we have r_xx(m) = r_yy(m) + σ_w^2 δ_m and can solve

[ e^{j2πf_1}     e^{j2πf_2}     ...  e^{j2πf_p}    ] [ p_1 ]   [ r_xx(1) ]
[ e^{j2π2f_1}    e^{j2π2f_2}    ...  e^{j2π2f_p}   ] [ p_2 ] = [ r_xx(2) ]
[ e^{j2π3f_1}    e^{j2π3f_2}    ...  e^{j2π3f_p}   ] [ p_3 ]   [ r_xx(3) ]
[ ...                                              ] [ ... ]   [ ...     ]
[ e^{j2πpf_1}    e^{j2πpf_2}    ...  e^{j2πpf_p}   ] [ p_p ]   [ r_xx(p) ]

With LS we can even find the complex-valued a_i. Problematic with this method is the imprecise determination of the frequencies.
MUSIC

MUSIC = MUltiple SIgnal Classification. Consider the vector:

s(f) = [1, e^{j2πfT}, e^{j2πf2T}, ..., e^{j2πf(M-1)T}]^T
s(f)^H u_k = 0 ;  for k = p+1, ..., M and f = f_1, ..., f_p

Computing the expression

P(f) = 1 / ( Σ_{j=p+1}^{M} | s(f)^H u_j |^2 )

it becomes maximal at the desired frequencies. Then we continue as for PHD.
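A minimal MUSIC sketch: the pseudospectrum P(f) built from the noise eigenvectors peaks sharply at the signal frequencies. All model parameters below are illustrative assumptions:

```python
# MUSIC pseudospectrum sketch: peaks at the signal frequencies.
import numpy as np

p, M = 2, 8
f_true = np.array([0.1, 0.3])
powers = np.array([1.0, 2.0])
sigma_w2 = 0.5

m = np.arange(M)
S = np.exp(2j * np.pi * np.outer(m, f_true))
Rxx = S @ np.diag(powers) @ S.conj().T + sigma_w2 * np.eye(M)

lam, U = np.linalg.eigh(Rxx)
Un = U[:, :M - p]                   # noise-subspace eigenvectors

def pseudospectrum(f):
    s = np.exp(2j * np.pi * f * m)
    return 1.0 / np.sum(np.abs(s.conj() @ Un) ** 2)

# At a true frequency s(f) lies in the signal subspace, so the
# denominator nearly vanishes and P(f) is huge.
assert pseudospectrum(0.1) > 1e4 * pseudospectrum(0.2)
assert pseudospectrum(0.3) > 1e4 * pseudospectrum(0.2)
```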
ESPRIT

ESPRIT = Estimation of Signal Parameters via Rotational Invariance Techniques. Let us consider the following two covariance matrices:

Rxx = E[x_k x_k^H] ;   Q = E[x_k x_{k+1}^H]
Rxx = S P S^H + σ_w^2 I ;   Q = S P Φ^H S^H + σ_w^2 E
Φ = diag( e^{j2πf_1}, e^{j2πf_2}, ..., e^{j2πf_p} )

with E the shift matrix carrying ones on the first subdiagonal:

E = [ 0  0  ...  0
      1  0  ...  0
      0  1  ...  0
      ...           ]
ESPRIT

As before, we determine p and σ_w^2 from the eigenvalues of Rxx. We then obtain S P S^H = Rxx - σ_w^2 I as well as S P Φ^H S^H.

Consider now S P [I - λ Φ^H] S^H u = 0. The generalized eigenvalues λ are the desired values e^{j2πf_i}. Generalized eigenvalues are defined by:

S P S^H u = λ S P Φ^H S^H u

Hint: try Matlab D = eig(A,B).
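The generalized eigenvalue idea can be sketched with NumPy. With M = p the matrix S P Φ^H S^H is invertible, so the generalized problem reduces to an ordinary one; frequencies and powers are illustrative assumptions:

```python
# ESPRIT sketch: generalized eigenvalues of (S P S^H, S P Phi^H S^H)
# are exp(j 2 pi f_i). With M = p, solve as eig(B^-1 A).
import numpy as np

p = 2
f_true = np.array([0.1, 0.3])
P = np.diag([1.0, 2.0])             # assumed powers

m = np.arange(p)                    # M = p samples
S = np.exp(2j * np.pi * np.outer(m, f_true))
Phi = np.diag(np.exp(2j * np.pi * f_true))

A = S @ P @ S.conj().T              # plays the role of Rxx - sigma_w^2 I
B = S @ P @ Phi.conj().T @ S.conj().T

lam = np.linalg.eigvals(np.linalg.solve(B, A))   # A u = lambda B u
f_est = np.sort(np.angle(lam) / (2 * np.pi) % 1.0)
assert np.allclose(f_est, f_true, atol=1e-8)
assert np.allclose(np.abs(lam), 1.0)             # on the unit circle
```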
ESPRIT Applications

Note that the Fourier transform gives: F[h(t-τ)] = H(jω) e^{-jωτ}.

Thus, all calculations of temporal changes or delays are equivalent to the determination of frequencies. This is, for example, used in radar techniques, as well as for AoA (Angle of Arrival) and AoD (Angle of Departure) computation in wireless systems.
Singular Value Decomposition

Consider MIMO transmission: the symbol vector s is precoded by T with power allocation P^{1/2}, sent over the channel H, and processed by the receive matrix B.

[Figure: MIMO block diagram s → T, P^{1/2} → H → B → ŝ]

Maximum channel capacity:

C_max = max_{R: trace(R) ≤ P} log2 det( I + (SNR/N) H R H^H )

How to select T, B and the power allocation P to obtain C_max?
Singular Value Decomposition

Theorem 4.10: Every matrix A from C^{m×n} can be decomposed in the following form: A = U Σ V^H, with unitary matrices U from C^{m×m} and V from C^{n×n} as well as the "diagonal matrix" Σ from R^{m×n}, with p = min(m,n):

Σ = [ Σ_D ]   or   Σ = [ Σ_D  O ] ,   Σ_D = diag(σ_1, σ_2, ..., σ_p)
    [  O  ]

This particular factorization is called the Singular Value Decomposition = SVD (Ger.: Singulärwertzerlegung). The diagonal elements of Σ are called singular values (Ger.: Singulärwerte). Singular values are never negative!

Invented by: E. Beltrami (1835-1899), C. Jordan (1838-1921), J.J. Sylvester (1814-1897), E. Schmidt (1876-1959) and H. Weyl (1885-1955).
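Theorem 4.10 is easy to check numerically for a small complex matrix (the test matrix is an arbitrary assumption):

```python
# Quick check of Theorem 4.10: A = U Sigma V^H with unitary U, V
# and non-negative singular values, for an arbitrary complex 3x2 matrix.
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 2)) + 1j * rng.standard_normal((3, 2))

U, s, Vh = np.linalg.svd(A)         # full SVD: U is 3x3, Vh is 2x2
Sigma = np.zeros((3, 2))
Sigma[:2, :2] = np.diag(s)

assert np.all(s >= 0)                                   # never negative
assert np.allclose(U @ U.conj().T, np.eye(3))           # U unitary
assert np.allclose(Vh @ Vh.conj().T, np.eye(2))         # V unitary
assert np.allclose(U @ Sigma @ Vh, A)
```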
Singular Value Decomposition

Quick proof: Assume all eigenvalues in Λ_1 are > 0 (Λ_2 is of large size, Λ_1 of small size):

B_1 = A^H A ;   A^H A V = V Λ_1  →  Λ_1^{-1/2} V^H A^H A V Λ_1^{-1/2} = I
B_2 = A A^H ;   A A^H U = U Λ_2

U_1 = A V_1 Λ_1^{-1/2}  →  U^H A V = [ Λ_1^{1/2} ; O ] = Σ

rank(B_1) = rank(B_2)!
Singular Value Decomposition

(lengthy) Proof: Consider the eigenvalue decomposition of the matrix A^H A with A from C^{m×n}:  A^H A V = V Λ_1.

Let the eigenvalues in Λ_1 be ordered so that λ_1,...,λ_r > 0 and λ_{r+1} = ... = λ_n = 0. We can thus construct the following r vectors:

u_i = A v_i / √λ_i ;   i = 1,2,...,r

We find that <u_i, u_j> = δ_{ij} for i,j = 1..r.
Singular Value Decomposition

The set u_1,...,u_r from U_1 can be extended with orthonormal vectors (for example by the Gram-Schmidt method). We thus obtain U = [u_1,...,u_r,...,u_m] = [U_1, U_2]: U^H U = I.

Obviously, the vectors in U are eigenvectors of A A^H from C^{m×m}:  A A^H u_i = A A^H A v_i / √λ_i = A λ_i v_i / √λ_i = λ_i u_i.

This is clear for the eigenvalues that are distinct from zero. For the zero eigenvalues, the corresponding eigenvectors must come from the nullspace of A A^H:  A A^H u_i = 0; i = r+1,...,m. Since we have for Hermitian matrices that R(A A^H) is the orthogonal complement of N((A A^H)^H) = N(A A^H), all eigenvectors are orthogonal.
Singular Value Decomposition

We therefore find for U^H A V:

i = 1..r:     u_i^H A v_j = (1/√λ_i) v_i^H A^H A v_j = (λ_j/√λ_i) δ_{ij} = √λ_i δ_{ij}
i = r+1..m:   A^H u_i = 0  →  u_i^H A v_j = (v_j^H A^H u_i)* = 0

For i > r, z_i = A^H u_i lies in the nullspace of A (A z_i = 0) and in the range of A^H, hence z_i = 0. Thus U^H A V = Σ has a diagonal block with the non-zero elements √λ_j, j = 1,2,...,r.
Singular Value Decomposition

Example 4.31:

m = 3, n = 2:   Σ = [ σ_1  0 ; 0  σ_2 ; 0  0 ]
m = 2, n = 3:   Σ = [ σ_1  0  0 ; 0  σ_2  0 ]

Example 4.32: Let B_1 = A^H A with A = U Σ V^H. Then:

B_1 V = V Λ_1 = (V Σ^T U^H)(U Σ V^H) V = V Σ^T Σ   (n×n)

Also, with B_2 = A A^H:  B_2 U = U Λ_2 = U Σ Σ^T   (m×m)
Singular Value Decomposition

Thus:

m = 3, n = 2:   Σ^T Σ = [ σ_1^2  0 ; 0  σ_2^2 ]            and   Σ Σ^T = [ σ_1^2  0  0 ; 0  σ_2^2  0 ; 0  0  0 ]
m = 2, n = 3:   Σ^T Σ = [ σ_1^2  0  0 ; 0  σ_2^2  0 ; 0  0  0 ]   and   Σ Σ^T = [ σ_1^2  0 ; 0  σ_2^2 ]
Singular Value Decomposition

Note further that if A is from R, then all matrices (U, Σ, V) are from R. Spectral decomposition:

A = U Σ V^H = Σ_{i=1}^{p=min(m,n)} σ_i u_i v_i^H = Σ_{i=1}^{r=rank(A)} σ_i u_i v_i^H

Example 4.33: Matrix norms:

Frobenius norm:   ||A||_F^2 = trace(A A^H) = σ_1^2 + σ_2^2 + ... + σ_p^2
l2 (induced) norm:   ||A||_{2,ind}^2 = σ_max^2
Singular Value Decomposition

Note that next to the described form of the SVD

A = U Σ V^H = [U_1 U_2] [ Σ_+ ; O ] [V_1 V_2]^H

there is also another one:

A = U Σ V^H = U_1 Σ_+ V_1^H

called the thin SVD.
Singular Value Decomposition

What is the consequence of the spectral decomposition?

A = U Σ V^H = Σ_{i=1}^{r} σ_i u_i v_i^H

allows us to decompose A into a product of two (three) matrices whose size depends on the rank: low-rank compression.
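Low-rank compression can be sketched by truncating the spectral decomposition after k terms; the test matrix is an arbitrary assumption, and the error identity is the Eckart-Young result:

```python
# Low-rank compression via the SVD: keep the k strongest rank-1 terms.
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((8, 6))
U, s, Vh = np.linalg.svd(A, full_matrices=False)

k = 2
A_k = U[:, :k] @ np.diag(s[:k]) @ Vh[:k, :]   # truncated spectral sum

assert np.linalg.matrix_rank(A_k) == k
# Approximation error in the 2-norm is the first discarded singular value
assert np.isclose(np.linalg.norm(A - A_k, 2), s[k])
```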
Singular Value Decomposition

What is the consequence of the spectral decomposition?

A = U Σ V^H = Σ_{i=1}^{min(m,n)} σ_i u_i v_i^H = Σ_{i=1}^{r=rank(A)} σ_i u_i v_i^H

0 ≤ | x^H A y | / ( ||x||_2 ||y||_2 ) ≤ σ_max

similar to the Rayleigh quotient.
Singular Value Decomposition

Let B = A A^H be Hermitian. Then we have: B = A A^H = U D U^H = U Σ V^H V Σ^T U^H = U Σ Σ^T U^H. In this case the eigenvalues of B = A A^H equal the squares of the singular values of A.

The rank of an arbitrary matrix A is given by the number r of non-zero singular values: rank(A) = r. Partition U = [U_1 U_2]:

R(A) = {b ∈ C^m : b = A x} = {b ∈ C^m : b = U Σ V^H x} = {b ∈ C^m : b = U Σ x̃}
     = {b ∈ C^m : b = U_1 Σ_+ y} = span(U_1)
Singular Value Decomposition

Thus we find a new interpretation of the four fundamental subspaces:

R(A) = span(U_1)
N(A^H) = span(U_2)
R(A^H) = span(V_1)
N(A) = span(V_2)
Singular Value Decomposition

Consider again the LS problem with m > n observations (overdetermined), b from C^m, x from C^n:

min_x || A x - b ||_2^2 = min_x || U Σ V^H x - b ||_2^2 = min_x || Σ V^H x - U^H b ||_2^2 = min_x̃ || Σ x̃ - b̃ ||_2^2

with x̃ = V^H x and b̃ = U^H b, where

Σ x̃ = [ σ_1 x̃_1 ; ... ; σ_n x̃_n ; 0 ; ... ; 0 ] ,   b̃ = [ b̃_1 ; ... ; b̃_n ; b̃_{n+1} ; ... ; b̃_m ]
Singular Value Decomposition

By the particular structure of Σ, the rows n+1..m of b̃ cannot be influenced by x̃; they are eliminated from the solution. With Σ_+ = diag(σ_1, ..., σ_r) we define the pseudoinverse of Σ:

Σ^# = [ Σ_+^{-1}  O ] = [ 1/σ_1          O ]
      [ O         O ]   [       ...        ]
                        [          1/σ_r   ]
                        [ O              O ]
Singular Value Decomposition

Thus:

x̃ = Σ^# b̃   →   x = V Σ^# U^H b

Pseudoinverse:  A^# = V Σ^# U^H = (A^H A)^{-1} A^H

Consider now the LS solution of the overdetermined system:

x = (A^H A)^{-1} A^H b
  = (V Σ^T U^H U Σ V^H)^{-1} V Σ^T U^H b
  = V (Σ^T Σ)^{-1} V^H V Σ^T U^H b = V (Σ^T Σ)^{-1} Σ^T U^H b
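The pseudoinverse built from the SVD can be checked against the normal-equation form for a full-rank tall matrix; the test matrix is an arbitrary assumption:

```python
# Pseudoinverse via the SVD: A# = V Sigma# U^H agrees with (A^H A)^-1 A^H.
import numpy as np

rng = np.random.default_rng(5)
A = rng.standard_normal((6, 3))                 # overdetermined, m > n
b = rng.standard_normal(6)

U, s, Vh = np.linalg.svd(A, full_matrices=False)
A_pinv = Vh.conj().T @ np.diag(1.0 / s) @ U.conj().T

assert np.allclose(A_pinv, np.linalg.inv(A.T @ A) @ A.T)
assert np.allclose(A_pinv, np.linalg.pinv(A))
x = A_pinv @ b                                  # least-squares solution
assert np.allclose(A.T @ (A @ x - b), 0)        # normal equations hold
```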
Singular Value Decomposition

The LS method thus searches in the reduced observation space C^n for the solution with smallest norm. How does this relate to the underdetermined case? Let us now consider m < n, b from C^m, x from C^n:

Σ x̃ = [ σ_1 x̃_1 ; ... ; σ_m x̃_m ] ,   b̃ = [ b̃_1 ; ... ; b̃_m ]

In this case the components x̃_{m+1}, ..., x̃_n of the parameter space are eliminated.
Singular Value Decomposition Let us thus consider the solution of the
underdetermined LS solution:
The underdetermined LS method also finds a minimum norm solution, however now in a reduced parameter space.
$$x = A^H\left(AA^H\right)^{-1} b = V\Sigma^T U^H \left(U\Sigma V^H\, V\Sigma^T U^H\right)^{-1} b = V\Sigma^T U^H U\left(\Sigma\Sigma^T\right)^{-1} U^H b = V\Sigma^T\left(\Sigma\Sigma^T\right)^{-1} U^H b = V\Sigma^{\#} U^H b$$
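The minimum-norm property of the underdetermined solution can be demonstrated directly; a sketch with made-up dimensions:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 6                       # underdetermined: fewer observations than unknowns
A = rng.standard_normal((m, n))
b = rng.standard_normal(m)

# x = A^H (A A^H)^{-1} b : the minimum-norm solution among all exact solutions
x_min = A.conj().T @ np.linalg.solve(A @ A.conj().T, b)

# Same result via the SVD pseudoinverse V Sigma^# U^H
assert np.allclose(x_min, np.linalg.pinv(A) @ b)

# Any other exact solution (add a null-space direction) has a larger norm:
# rows m..n-1 of Vh from the full SVD span the null space N(A)
x_other = x_min + np.linalg.svd(A)[2][-1]
assert np.allclose(A @ x_other, b)
print(np.linalg.norm(x_min) < np.linalg.norm(x_other))   # True
```

Adding a null-space component leaves $Ax = b$ untouched but only increases $\|x\|_2$, which is exactly the statement on the slide.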
Example 4.34: Basic MIMO System
[Block diagram: source s, power scaling $\sqrt{P/2}$, precoder B, channel H with additive noise n, receive filter T; the equivalent diagram shows parallel branches with gains $\sigma_1$, $\sigma_2$ and noises $n_1$, $n_2$.]

$$\hat y = Hx + n = \sqrt{P/2}\,HBs + n$$

With the SVD $H = U\Sigma V^H$, choose the precoder $B = V$ and the receive filter $T = U^H$:

$$\hat s = \sqrt{P/2}\,U^H H V s + n' = \sqrt{P/2}\,\Sigma s + n'$$
Example 4.34: Basic MIMO System The complicated MIMO system can now be
described as r=rank(H) independent subsystems. Maximum channel capacity:
$$c = \max_{\mathrm{trace}(R_{ss}) = P} \log_2\det\left(I + \frac{1}{N}\,H R_{ss} H^H\right) = \max_{\sum_{i=1}^r P_i = P}\; \sum_{i=1}^r \log_2\left(1 + \frac{\sigma_i^2 P_i}{N}\right)$$
Waterfilling solution due to Claude Shannon (1948)
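The decomposition into independent subsystems can be verified numerically. A sketch (random channel, illustrative power/noise values; the equal power split shown is not the waterfilling optimum, which would optimize the $P_i$):

```python
import numpy as np

rng = np.random.default_rng(2)
H = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

U, s, Vh = np.linalg.svd(H)

# Precoding with V and receive filtering with U^H turns the MIMO channel
# into r = rank(H) independent scalar subchannels with gains sigma_i
D = U.conj().T @ H @ Vh.conj().T
assert np.allclose(D, np.diag(s))

# Capacity of the parallel channels for an equal power split P_i = P/r
N0, P = 1.0, 4.0
c = sum(np.log2(1 + si**2 * (P / len(s)) / N0) for si in s)
print(c)
```

The assertion confirms $U^H H V = \Sigma$, i.e. the channel is fully diagonalized by the SVD-based precoder/equalizer pair.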
Example 4.35
Consider $W\in\mathbb{R}^{m\times n}$, $m>n$; solve $WW^TW = W$.
SVD: $W = U\Sigma V^T$. We have:

$$WW^TW = U\Sigma V^T\,V\Sigma^T U^T\,U\Sigma V^T = U\,\Sigma\Sigma^T\Sigma\,V^T \stackrel{!}{=} U\Sigma V^T$$

Thus: U, V arbitrary (but unitary), Σ has entries -1, 0, 1.
Example 4.35
Consider matrix $W_k$ of dimension $m\times n$, $m>n$:

$$W_{k+1} = W_k + \mu\,W_k\left(I - W_k^T W_k\right)$$

Let $\mu>0$ and $W_0 = X$ from $\mathbb{R}^{m\times n}$ with full rank.
Question: where does $W_k$ converge to, or $W_k^T W_k$?
SVD: let $W_k = U\Sigma V^T$. Then:

$$W_k + \mu\,W_k\left(I - W_k^T W_k\right) = U\Sigma V^T + \mu\,U\Sigma V^T\left(I - V\Sigma^T U^T U\Sigma V^T\right) = U\left[\Sigma + \mu\,\Sigma\left(I - \Sigma^T\Sigma\right)\right]V^T$$
Example 4.35
$$W_k + \mu\,W_k\left(I - W_k^T W_k\right) = U\left[\Sigma + \mu\,\Sigma\left(I - \Sigma^T\Sigma\right)\right]V^T$$
We can describe $W_{k+1} = U\Sigma_{k+1}V^T$ similarly: $\Sigma_{k+1} = \Sigma_k + \mu\,\Sigma_k\left(I - \Sigma_k^T\Sigma_k\right)$.
On its diagonal:
$$\sigma_{k+1} = \sigma_k + \mu\,\sigma_k\left(1 - \sigma_k^2\right) = \sigma_k\left[1 + \mu\left(1 - \sigma_k^2\right)\right]$$
$$1 - \sigma_{k+1} = \left(1 - \sigma_k\right)\left[1 - \mu\,\sigma_k\left(1 + \sigma_k\right)\right]$$
All $\sigma_k$ move towards 1: $W_\infty = U\tilde I V^T$ (with $\tilde I\in\mathbb{R}^{m\times n}$ carrying ones on its diagonal), so $W_\infty^T W_\infty = I$.
Example 4.35
µ determines the speed of the movement: $1 - \sigma_{k+1} = \left(1-\sigma_k\right)\left[1 - \mu\,\sigma_k\left(1+\sigma_k\right)\right]$.
Convergence condition: $\mu < 2/\left[\sigma_{k,\max} + \sigma_{k,\max}^2\right]$.
Note that $0.25 + 2x^2 \ge x + x^2$ for $x>0$; thus $\mu < 2/\left[0.25 + 2\sigma_{k,\max}^2\right]$ suffices.
Since $2/\left[0.25 + 2\sum_i\sigma_{k,i}^2\right] < 2/\left[0.25 + 2\sigma_{k,\max}^2\right]$, a conservative bound is:
$$\mu < \frac{2}{0.25 + 2\,\mathrm{trace}\!\left(W_k^T W_k\right)}$$
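The iteration and its conservative step-size bound can be sketched directly (random full-rank start, illustrative sizes; the iteration count is chosen generously):

```python
import numpy as np

rng = np.random.default_rng(3)
m, n = 5, 3
W = rng.standard_normal((m, n))          # W0 = X, full rank almost surely

# Conservative step size: mu < 2 / (0.25 + 2 trace(W^T W))
mu = 1.9 / (0.25 + 2 * np.trace(W.T @ W))

# W_{k+1} = W_k + mu W_k (I - W_k^T W_k)
for _ in range(200):
    W = W + mu * W @ (np.eye(n) - W.T @ W)

# All singular values have moved to 1: the columns are now orthonormal
print(np.allclose(W.T @ W, np.eye(n), atol=1e-6))   # True
```

Note the bound uses the initial trace only; since the singular values shrink towards 1, the bound stays valid along the iteration.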
Example 4.35
Applications:
Decorrelate a vector process: $W_\infty^T W_\infty = I$
Blind Source Separation
Invert a matrix: solve $W_\infty^T R^2 W_\infty = I$
Root of a matrix: $W_\infty^T R^{-1} W_\infty = I$
Example 4.36: CoMP Problem 2
But does this also work for K<N (K users, N antennas)?

$$\mathrm{SLR} = \max_x \frac{h_1^T x\, x^T h_1}{x^T \tilde H^T \tilde H x} = \max_x \frac{h_1^T x\, x^T h_1}{x^T B x}; \qquad B = \tilde H^T\tilde H \text{ is not of full rank} \;\Rightarrow\; x_{opt}\in\mathcal{N}(B) \;\to\; \mathrm{SLR} = \infty$$

Any x in the null space of B yields infinite SLR, even if the desired user receives little power.
CoMP Problem 2
Apply the SVD to B (B is Hermitian, thus V = U):

$$B = U\Sigma U^H, \qquad U = \left[u_1,\dots,u_K, u_{K+1},\dots,u_N\right], \qquad \Sigma = \mathrm{diag}\left(\sigma_1^2,\dots,\sigma_K^2, 0,\dots,0\right)$$

Substituting $x = Uy$:

$$\max_x \frac{h_1^T x\, x^T h_1}{x^T B x} = \max_y \frac{y^H U^H h_1\, h_1^T U y}{y^H \Sigma y}$$

Here, any y supported on the null-space directions $u_{K+1},\dots,u_N$ makes the denominator zero and thus maximizes the SLR. Find the y among these that maximizes the signal energy:

$$x_{opt} = U_0 U_0^H h_1, \qquad U_0 = \left[u_{K+1},\dots,u_N\right]$$

→ SLR = ∞, but the desired signal power is maximal → max SNLR.
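A sketch of this null-space beamformer with made-up channels (the names h1, Ht and all dimensions are illustrative, not from the lecture):

```python
import numpy as np

rng = np.random.default_rng(4)
N, K = 6, 3                           # N antennas, K users, K < N
h1 = rng.standard_normal(N)           # desired user's channel
Ht = rng.standard_normal((K - 1, N))  # channels of the other users (leakage)

B = Ht.T @ Ht                         # rank-deficient: nontrivial null space
U, s, _ = np.linalg.svd(B)            # B symmetric PSD, so V = U
U0 = U[:, K - 1:]                     # columns spanning N(B)

# Within N(B) (zero leakage, SLR = infinity), pick the direction that
# maximizes the desired signal power: project h1 onto N(B)
x = U0 @ U0.T @ h1
x /= np.linalg.norm(x)

assert np.allclose(Ht @ x, 0)         # no leakage to the other users
print((h1 @ x) ** 2)                  # remaining desired signal power
```

The projection $U_0U_0^Hh_1$ is the null-space vector closest to the matched filter $h_1$, which is why it maximizes the signal energy among all zero-leakage beamformers.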
CoMP Problem 2
Alternatively consider the SNLR:

$$\mathrm{SNLR} = \max_x \frac{\left|h_1^T x\right|^2}{\sigma_v^2 + \left\|\tilde H x\right\|_2^2}$$

back to the original Rayleigh quotient.
Singular Value Decomposition Applications of SVD:
Subspace techniques such as PHD, MUSIC and ESPRIT.
Total Least Squares: ||Ax-b|| with distorted observation matrix A.
Solution of numerically sensitive problems such as matrix inversion.
Condition Numbers
Example 4.37: Consider a matrix problem in which a few singular values are very small:

$$\Sigma = \mathrm{diag}\left(\sigma_{\max},\dots,\sigma_l,\varepsilon,\dots\right); \qquad \Sigma^{-1} \neq \Sigma^{\#}$$

Inverting the matrix strongly amplifies particularly these small values, which can lead to numerical errors. It can be better to set these values to zero and compute the pseudoinverse instead.
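A sketch of the effect (the singular values are made up to exaggerate the point; `rcond` is NumPy's relative cutoff below which singular values are treated as zero):

```python
import numpy as np

# Build an ill-conditioned matrix with one singular value near machine noise
U, _ = np.linalg.qr(np.random.default_rng(5).standard_normal((4, 4)))
V, _ = np.linalg.qr(np.random.default_rng(6).standard_normal((4, 4)))
A = U @ np.diag([10.0, 1.0, 0.1, 1e-12]) @ V.T

# The full inverse amplifies the tiny singular value by a factor 1e12
print(np.linalg.norm(np.linalg.inv(A)))   # ~1e12

# Truncated pseudoinverse: singular values below rcond * sigma_max -> zero
A_pinv = np.linalg.pinv(A, rcond=1e-8)
print(np.linalg.norm(A_pinv))             # ~10, numerically benign
```

The price of this stability is a rank reduction, which is exactly the point made on the next slide.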
Condition Numbers
Note, however, that setting small singular values to zero also reduces the rank of the matrix. Such methods are therefore called rank-reducing methods.
A measure of the quality of a matrix with respect to its invertibility is its condition number:

$$\kappa(A) = \|A\|_2\,\|A^{-1}\|_2$$
Condition Numbers
The smallest condition number is one! For example, for identity matrices, permutation matrices, and square unitary matrices. Otherwise we have κ > 1. For regular matrices A we have:

$$\|A\|_2 = \max_{\|x\|_2=1}\|Ax\|_2 = \sigma_{\max}, \qquad \|A^{-1}\|_2 = \max_{\|x\|_2=1}\|A^{-1}x\|_2 = \frac{1}{\min_{\|x\|_2=1}\|Ax\|_2} = \frac{1}{\sigma_{\min}}$$

$$\kappa(A) = \frac{\sigma_{\max}}{\sigma_{\min}}$$
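The equivalence of the three characterizations of κ can be checked in a few lines (random regular matrix, purely illustrative):

```python
import numpy as np

rng = np.random.default_rng(7)
A = rng.standard_normal((5, 5))

s = np.linalg.svd(A, compute_uv=False)
kappa = s[0] / s[-1]                       # sigma_max / sigma_min

# Matches ||A||_2 * ||A^{-1}||_2 and NumPy's built-in condition number
assert np.isclose(kappa, np.linalg.norm(A, 2) * np.linalg.norm(np.linalg.inv(A), 2))
assert np.isclose(kappa, np.linalg.cond(A, 2))

# A square orthogonal matrix attains the smallest possible kappa = 1
Q, _ = np.linalg.qr(A)
print(np.linalg.cond(Q, 2))                # ~1.0
```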
Condition Numbers
For Hermitian matrices A, the condition number is given by the ratio of the magnitudes of their (real-valued) eigenvalues:

$$\kappa(A) = \frac{\left|\lambda_{\max}(A)\right|}{\left|\lambda_{\min}(A)\right|}$$

If A is not Hermitian, we can alternatively compute the square roots of the eigenvalues of $A^HA$:

$$\kappa(A) = \sqrt{\frac{\lambda_{\max}\left(A^HA\right)}{\lambda_{\min}\left(A^HA\right)}}$$
Condition Numbers Consider the following distorted problem: A(x+∆x)=b+∆b.
We find that
The condition number thus determines how much an error on one side of the equation impacts the other side.
$$\frac{\|\Delta x\|}{\|x\|} \;\le\; \kappa(A)\,\frac{\|\Delta b\|}{\|b\|}$$
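The error-amplification bound can be observed numerically; a sketch with a random system and a small perturbation of b:

```python
import numpy as np

rng = np.random.default_rng(8)
A = rng.standard_normal((4, 4))
x = rng.standard_normal(4)
b = A @ x

db = 1e-6 * rng.standard_normal(4)        # perturbation of the right-hand side
dx = np.linalg.solve(A, b + db) - x       # induced perturbation of the solution

lhs = np.linalg.norm(dx) / np.linalg.norm(x)
rhs = np.linalg.cond(A) * np.linalg.norm(db) / np.linalg.norm(b)
print(lhs <= rhs)                         # True: relative error bounded by kappa(A)
```

The bound follows from $\Delta x = A^{-1}\Delta b$ together with $\|b\| \le \|A\|\,\|x\|$, and is tight for worst-case directions of $\Delta b$.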