
Multi-Dimensional Subspace-Based Parameter Estimation and Prewhitening

Stefanie Schwarz

Bachelor’s Thesis

Munich University of Technology
Institute for Circuit Theory and Signal Processing

Univ.-Prof. Dr.techn. Josef A. Nossek


Date of Start: 01/12/2011
Date of Examination: 26/03/2012

Supervisors: M.Sc. Qing Bai (Munich University of Technology), Prof. Dr.-Ing. João Paulo C. L. da Costa (Universidade de Brasília)

Theresienstr. 90, 80290 Munich, Germany

26/03/2012


Contents

1. Introduction

2. Tensor Calculus
   2.1 r-Mode Unfolding
   2.2 r-Mode Product
   2.3 Subspace-based Decomposition of Tensors
       2.3.1 Tensor Ranks
       2.3.2 The Higher-Order SVD (HOSVD)
       2.3.3 PARAFAC Decomposition

3. Data Model
   3.1 Matrix Notation
   3.2 Tensor Notation

4. R-D Parameter Estimation
   4.1 R-D Standard ESPRIT (R-D SE)
   4.2 R-D Standard Tensor-ESPRIT (R-D STE)
   4.3 Closed-Form PARAFAC based Parameter Estimation (CFP-PE)

5. R-D Prewhitening
   5.1 Sequential GSVD (S-GSVD)
       5.1.1 Prewhitening Correlation Factor Estimation (PCFE)
       5.1.2 Tensor Prewhitening Scheme: S-GSVD
   5.2 Iterative Sequential GSVD (I-S-GSVD)

6. Simulation Results
   6.1 White Noise Case
   6.2 Colored Noise Case

7. Conclusions

Appendix
   A1 The Kronecker Product

Bibliography


List of Figures

1.1 MIMO multipath scenario with 2 × 2 antenna arrays on the transmitter and receiver side.

2.1 Examples and notation for a scalar, vector, matrix and order-3 tensor.
2.2 Unfoldings of a 4 × 5 × 3 tensor. Left: the 1-mode vectors, center: the 2-mode vectors, right: the 3-mode vectors, which are then used as columns of the corresponding matrix unfolding.
2.3 n-mode products of an order-3 tensor. Left: the 1-mode product, center: the 2-mode product, right: the 3-mode product.
2.4 Full SVD, economy-size SVD and low-rank approximation of a matrix A ∈ C^{5×4} with rank ρ = 3 and model order d = 2.
2.5 Core tensor of an order-3 tensor with n-ranks ρ_1, ρ_2, and ρ_3. Only the first ρ_1 × ρ_2 × ρ_3 elements indicated in blue are non-zero.
2.6 Illustration of the PARAFAC decomposition for a 3-way tensor. Above: representation as a sum of rank-one tensors; below: r-mode-product based decomposition.

3.1 2-dimensional outer-product based array (OPA) of size 3 × 3.

4.1 R-D Standard ESPRIT (R-D SE), R-D Standard Tensor-ESPRIT (R-D STE) and Closed-Form PARAFAC based Parameter Estimation (CFP-PE).

5.1 Basic steps of the S-GSVD prewhitening scheme with Prewhitening Correlation Factor Estimation (PCFE).
5.2 Basic steps of the I-S-GSVD iterative prewhitening scheme.

6.1 RMSE vs. SNR for the white noise case for L = 50 runs.
6.2 RMSE vs. iterations with SNR = 15 dB, correlation coefficient ρ = 0.9 and L = 20 runs.
6.3 RMSE vs. SNR with correlation coefficient ρ = 0.9, K = 4 iterations and L = 20 runs.
6.4 RMSE vs. correlation coefficient with SNR = 20 dB, K = 4 iterations and L = 20 runs.
6.5 RMSE vs. array spacing variance with SNR = 40 dB, correlation coefficient ρ = 0.9, K = 4 iterations and L = 15 runs.


Acknowledgements

I would like to express my sincerest gratitude to Prof. Dr.-Ing. João Paulo Carvalho Lustosa da Costa, adjunct professor at the Universidade de Brasília (UnB), Brazil, for having given me the opportunity to work on this interesting topic under his supervision. His bright ideas and professional guidance regarding my thesis, along with his invaluable support in everyday issues, have made this work possible and made my stay in Brasília unforgettable.

I am also very thankful for the funding from the German Academic Exchange Service (DAAD) through the RISE weltweit programme, which has enabled my internship at UnB.

Finally, I would like to thank M.Sc. Qing Bai and Univ.-Prof. Dr.techn. Josef A. Nossek from the Institute for Circuit Theory and Signal Processing at the Technical University of Munich (TUM) for the acceptance of this thesis and the good cooperation.


Abstract

High-resolution parameter estimation is a research field that has gained considerable attention in the past decades. A typical application is in MIMO channel measurements, where parameters such as direction-of-arrival (DOA), direction-of-departure (DOD), path delay and Doppler spread are to be extracted from the measured signal.

Recently, subspace-based parameter estimation techniques have been improved by taking advantage of the multi-dimensional structure inherent in the measurement signal. This is accomplished by adopting subspace-based decompositions using tensor calculus, i.e., higher-dimensional matrices. State-of-the-art tensor-based decompositions include the Higher-Order Singular Value Decomposition (HOSVD) low-rank approximation and the Closed-Form Parallel Factor Analysis (CFP). The former served as the basis for the Standard Tensor-ESPRIT (STE) and the latter laid the foundation for the CFP based parameter estimation scheme (CFP-PE), both of which are presented in the first part of this thesis. The latter technique is appealing since it is applicable to mixed arbitrary arrays and outer product based arrays.

The second part of this thesis investigates the case where parameter estimation is subject to the presence of colored noise or interference, which can severely deteriorate the estimation accuracy. In order to avoid this, tensor-based prewhitening techniques are applied which exploit the Kronecker structure of the noise correlation matrices. Assuming that estimates of the noise covariance factors are available, e.g., through a noise-only measurement, the estimation accuracy can be significantly improved by using the Sequential Generalized Singular Value Decomposition (S-GSVD). In case the noise covariance information is unknown, the Iterative Sequential Generalized Singular Value Decomposition (I-S-GSVD) can successfully be applied. These tensor-based prewhitening techniques, S-GSVD and I-S-GSVD, can each be combined with the above-mentioned multi-dimensional HOSVD and CFP based parameter estimation schemes.

As a novelty in this thesis, the I-S-GSVD prewhitening in conjunction with CFP based parameter estimation is proposed. In this way, the advantages of both techniques are joined, that is, the suitability of the I-S-GSVD for data contaminated with colored noise without knowledge of the noise covariance, and the applicability of the CFP to mixed array geometries and its robustness to arrays with positioning errors.


1. Introduction

High-resolution parameter estimation involves the extraction of relevant parameters from a set of R-dimensional (R-D) data measured by an antenna array. In the field of MIMO channel sounding, the considered dimensions of the measured data can correspond to time, frequency, or spatial dimensions, i.e., the measurements captured by one- or two-dimensional antenna arrays at the transmitter and the receiver. The estimated parameters include direction-of-arrival (DOA), direction-of-departure (DOD), Doppler spread, or path delay. In this context, the desired parameters are also called spatial frequencies. A typical multipath scenario with 2 × 2 antenna arrays at the transmitter and receiver side is illustrated in Figure 1.1. Other applications of parameter estimation are manifold, ranging from radar and sonar to biomedical imaging and seismology.


Fig. 1.1. MIMO multipath scenario with 2 × 2 antenna arrays on the transmitter and receiver side.

A wide class of efficient parameter estimation schemes using subspace decomposition is based on Standard ESPRIT (SE) [1], which exploits the symmetries present in a one-dimensional antenna array. A generalized scheme which makes Standard ESPRIT applicable to multi-dimensional measurements is referred to as R-D Standard ESPRIT (R-D SE) [2], in which the R-dimensional data is unfolded into a matrix via a stacking operation. Obviously, this representation sees the problem from just one perspective, i.e., one projection, and neglects the R-D grid structure inherent in the data. Consequently, parameters cannot be estimated properly when signals are not resolvable in certain dimensions. A possibility to keep the multi-dimensional structure is to express the estimation problem using higher-dimensional matrices, so-called tensors. By considering all dimensions as a whole, it is possible to estimate parameters even if they are not resolvable in each dimension separately, and the resolution, accuracy, and robustness can be improved.

Tensor-based parameter estimation schemes have gained attention in the past few years and are presented in the first part of this thesis. Tensor-based extensions of the ESPRIT scheme have been developed recently, namely Standard Tensor-ESPRIT (STE) and Unitary Tensor-ESPRIT (UTE) [2], which utilize a tensor extension of the Singular Value Decomposition (SVD), the so-called Higher-Order SVD (HOSVD) [3]. However, one harsh constraint on ESPRIT schemes is imposed by the shift-invariance property, which stipulates that the antenna array must have a specific symmetric lattice structure. Positioning errors in real antenna arrays, for example, lead to a violation of this constraint. Schemes based on Parallel Factor Analysis (PARAFAC), a tool rooted in psychometrics [4], do not require the shift-invariance property, as they can be applied to arbitrary array geometries. There exist iterative solutions for the PARAFAC decomposition such as Alternating Least Squares (ALS) [5], which we do not consider in this thesis in favour of the closed-form PARAFAC (CFP) [6] solution. Based on this closed-form scheme, the closed-form PARAFAC based Parameter Estimation scheme (CFP-PE) [7] was proposed, which delivers accurate estimates for arbitrary arrays and is robust against positioning errors.

The second part of this thesis is dedicated to prewhitening schemes that mitigate the effect of multi-dimensional colored noise or interference present at the receiver and/or transmitter antennas. Since the colored noise affects the signal component more strongly, its presence can severely deteriorate the estimation accuracy. Prewhitening aims to distribute the noise power evenly across the noise space in order to improve the estimation accuracy. Moreover, the presented schemes assume that the colored noise has a Kronecker structure, as can be found in certain EEG [8] and MIMO applications [9], where the noise covariance matrix is taken to be the Kronecker product of the temporal and spatial correlation matrices.

A tensor-based prewhitening scheme that exploits the inherent Kronecker structure of the noise is the Sequential Generalized Singular Value Decomposition (S-GSVD), which can be applied if the second-order statistics of the noise are known. This scheme was combined with subspace decompositions via the HOSVD [10] and the closed-form PARAFAC [11]. Both combinations offer an improved accuracy over matrix-based prewhitening schemes, as well as high computational efficiency.

The iterative counterpart of the above prewhitening scheme (I-S-GSVD) [12] can be used if noise samples cannot be collected without the presence of the signal, thus hindering an estimation of the noise statistics. The proposal in this thesis is to combine the I-S-GSVD with the CFP decomposition. In this way, one joins the advantages of both techniques, that is, the suitability of the I-S-GSVD for data contaminated with colored noise without knowledge of the noise statistics, and the applicability of the CFP to mixed array geometries as well as its robustness to arrays with positioning errors.

The remainder of this thesis is organized as follows. A preliminary introduction to tensor calculus and the subspace decomposition of tensor-shaped data is given in Section 2. The data model and its tensor notation are presented in Section 3. The basic concepts of the above-mentioned multi-dimensional parameter estimation schemes are explained in Section 4. Efficient tensor-based prewhitening schemes are discussed in Section 5. Section 6 assesses the performance and accuracy of the presented methods via MATLAB simulations. Finally, conclusions are drawn in Section 7.


2. Tensor Calculus

The following section aims at familiarizing the reader with fundamental tensor calculus, which builds the basis for all multi-dimensional parameter estimation and prewhitening techniques presented in this thesis. The notation is in accordance with [3]. Furthermore, the tensor extension of the Singular Value Decomposition (SVD), the so-called Higher-Order SVD, is presented.

In essence, tensors are higher-dimensional matrices. An order-R tensor (also called R-dimensional or R-way tensor) is denoted by the calligraphic variable

\mathcal{A} \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times M_R} ,    (2.1)

which means that A has M_r complex elements along the dimension (or mode) r for r = 1, . . . , R. A single tensor element is symbolized by

a_{m_1, m_2, \ldots, m_R} \in \mathbb{C} , \quad m_r = 1, \ldots, M_r , \quad r = 1, \ldots, R .    (2.2)

In this sense, an order-0 tensor is a scalar, an order-1 tensor is equivalent to a vector, and an order-2 tensor represents a matrix. Order-3 tensors can be thought of as elements arranged in a cuboid. Higher-dimensional tensors (R > 3) go beyond graphical imagination, yet are the most natural way to represent the data sampled from antenna grids, as will be shown later on. An illustrative explanation together with the notation used in this thesis is shown in Fig. 2.1.

Fig. 2.1. Examples and notation for a scalar, vector, matrix and order-3 tensor.

2.1 r-Mode Unfolding

The r-mode unfolding of a tensor A is denoted as

[\mathcal{A}]_{(r)} \in \mathbb{C}^{M_r \times (M_1 \cdot M_2 \cdots M_{r-1} \cdot M_{r+1} \cdots M_R)}    (2.3)


and represents the matrix of r-mode vectors of the tensor A. The r-mode vectors of a tensor are obtained by varying the r-th index within its range (1, . . . , M_r) and keeping all the other indices fixed.

In other words, unfolding a tensor means slicing it into vectors along a certain dimension r and rearranging them as a matrix. As an example, all possible r-mode vectors of an order-3 tensor of size 4 × 5 × 3 are shown in Fig. 2.2. The order for rearranging the columns is chosen in accordance with [3] and indicated by the arrows in the figure.

Fig. 2.2. Unfoldings of a 4 × 5 × 3 tensor. Left: the 1-mode vectors, center: the 2-mode vectors, right: the 3-mode vectors, which are then used as columns of the corresponding matrix unfolding.
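For illustration, a short NumPy sketch of the r-mode unfolding is given below (the simulations in this thesis were carried out in MATLAB; the helper name unfold and its column ordering, which follows NumPy's row-major enumeration rather than necessarily the convention of [3], are assumptions of this sketch):

import numpy as np

def unfold(A, r):
    """r-mode unfolding [A]_(r): the r-mode vectors become the columns (r is 1-based)."""
    r0 = r - 1                                       # 0-based mode index
    return np.moveaxis(A, r0, 0).reshape(A.shape[r0], -1)

A = np.arange(4 * 5 * 3).reshape(4, 5, 3)            # order-3 tensor as in Fig. 2.2
print(unfold(A, 1).shape, unfold(A, 2).shape, unfold(A, 3).shape)
# (4, 15) (5, 12) (3, 20)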

2.2 r-Mode Product

The r-mode product of a tensor A ∈ C^{M_1×M_2×···×M_R} and a matrix U ∈ C^{J_r×M_r} along the r-th mode is denoted as

\mathcal{B} = \mathcal{A} \times_r U \in \mathbb{C}^{M_1 \times M_2 \times \cdots \times J_r \times \cdots \times M_R} .    (2.4)

Note that the number of elements in the r-th dimension of A, namely M_r, must match the number of columns in U. The r-mode product is obtained by multiplying all r-mode vectors of A from the left-hand side by the matrix U. It follows that

[\mathcal{A} \times_r U]_{(r)} = U \cdot [\mathcal{A}]_{(r)} .    (2.5)

Fig. 2.3 shows possible r-mode products of the order-3 tensor A with matrices U_1, U_2 and U_3.

Fig. 2.3. n-mode products of an order-3 tensor. Left: the 1-mode product, center: the 2-mode product, right: the 3-mode product.
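Continuing the NumPy sketch above, a hypothetical helper r_mode_product realizes (2.5) directly: unfold along mode r, multiply from the left by U, and fold back.

def r_mode_product(A, U, r):
    """r-mode product B = A x_r U, computed via (2.5): [B]_(r) = U [A]_(r)."""
    r0 = r - 1
    rest = [A.shape[i] for i in range(A.ndim) if i != r0]
    B_unf = U @ np.moveaxis(A, r0, 0).reshape(A.shape[r0], -1)
    # fold back by inverting the moveaxis/reshape used for the unfolding
    return np.moveaxis(B_unf.reshape([U.shape[0]] + rest), 0, r0)

U1 = np.random.randn(2, 4)                   # maps the 1-mode from length 4 to 2
B = r_mode_product(np.random.randn(4, 5, 3), U1, 1)
print(B.shape)                               # (2, 5, 3)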


2.3 Subspace-based Decomposition of Tensors

Since the parameter estimation techniques presented in this thesis are based on the analysis of the signal subspace, methods for decomposing the subspace of the tensor-shaped measurements are required. A technique that is commonly applied in conventional matrix-based parameter estimation methods (e.g., in Standard ESPRIT) is the Singular Value Decomposition (SVD). Recall the SVD of a matrix A ∈ C^{M×N}, which is defined as

A = U \Sigma V^H ,    (2.6)

where U ∈ C^{M×M} and V ∈ C^{N×N} are unitary matrices and Σ ∈ R^{M×N} is a pseudo-diagonal matrix containing the non-negative singular values of A ordered by magnitude. If ρ is the rank of the rank-deficient matrix A, i.e., there exist exactly ρ non-zero singular values, the corresponding lossless economy-size SVD is

A = U_s \Sigma_s V_s^H ,    (2.7)

where U_s ∈ C^{M×ρ} and V_s ∈ C^{N×ρ} contain the first ρ columns of U and V, respectively, and Σ_s ∈ R^{ρ×ρ} is the full-rank diagonal subspace matrix containing the singular values on its main diagonal. Considering only the d ≤ ρ significant singular values, a further reduction can be achieved through a so-called low-rank approximation (or truncated SVD)

A \approx U_s' \Sigma_s' V_s'^H ,    (2.8)

where U'_s ∈ C^{M×d}, V'_s ∈ C^{N×d} and Σ'_s ∈ R^{d×d}. All three types of SVD are shown in Figure 2.4.

In a MIMO channel measurement context, d is referred to as the model order, that is, the number of principal multipath components bearing a strong signal. The low-rank approximation thus isolates the signal subspace of the measured signal, while treating non-significant multipath components as noise.
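As a small numerical sketch of the truncated SVD (2.8) (assuming NumPy, as in the sketches above; lowrank_svd is a hypothetical helper):

def lowrank_svd(A, d):
    """Economy-size SVD truncated to the d dominant singular values, cf. (2.8)."""
    U, s, Vh = np.linalg.svd(A, full_matrices=False)   # singular values sorted descending
    return U[:, :d], np.diag(s[:d]), Vh[:d, :].conj().T

A = np.random.randn(5, 4) + 1j * np.random.randn(5, 4)
Us, Ss, Vs = lowrank_svd(A, 2)
A_approx = Us @ Ss @ Vs.conj().T                       # best rank-2 approximation of A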


Fig. 2.4. Full SVD, economy-size SVD and low-rank approximation of a matrix A ∈ C^{5×4} with rank ρ = 3 and model order d = 2.

2.3.1 Tensor Ranks

For matrices, the column (row) rank is defined as the dimension of the vector space spanned by the columns (rows). As a fundamental theorem, the column rank and the row rank of a matrix are always equal. For higher-order tensors, there exist two different rank definitions:


• The r-ranks of an R-dimensional tensor are defined as the dimensions of the vector spaces spanned by the r-mode vectors of the tensor. Consequently, the r-rank is equal to the rank of the r-mode unfolding. Unlike for matrices, the r-ranks of a tensor are not required to be equal.

• The tensor rank. A tensor A ∈ C^{M_1×M_2×···×M_R} has rank one if it can be represented via outer products of R non-zero vectors f^{(r)} ∈ C^{M_r} as

\mathcal{A} = f^{(1)} \circ f^{(2)} \circ \cdots \circ f^{(R)} .    (2.9)

Consequently, a tensor A has rank r if it can be stated as a linear combination of r rank-one tensors and if this cannot be accomplished with fewer than r terms:

\mathcal{A} = \sum_{n=1}^{r} f_n^{(1)} \circ f_n^{(2)} \circ \cdots \circ f_n^{(R)}    (2.10)

Note that

r\text{-rank}(\mathcal{A}) \le \text{rank}(\mathcal{A}) \quad \forall\, r = 1, \ldots, R ,    (2.11)

which means that the tensor rank of a higher-order tensor can be larger than all its r-ranks.

2.3.2 The Higher-Order SVD (HOSVD)

Analogously to the SVD of a matrix, we define the Higher-Order SVD (HOSVD) [13] of a tensor A ∈ C^{M_1×M_2×···×M_R} via the SVDs of all r-mode unfoldings of the tensor. It is given by

\mathcal{A} = \mathcal{S} \times_1 U_1 \times_2 U_2 \cdots \times_R U_R ,    (2.12)

where U_r ∈ C^{M_r×M_r}, r = 1, 2, . . . , R, are the unitary matrices containing the singular vectors of the r-th mode unfolding. S ∈ C^{M_1×M_2×···×M_R} is the core tensor, which is not diagonal but satisfies the so-called all-orthogonality conditions [3]. Figure 2.5 depicts a core tensor for an order-3 tensor. It is shown that only the first ρ_1 × ρ_2 × ρ_3 elements of the core tensor are non-zero. The size of the blue cuboid thus indicates the r-ranks ρ_r of the tensor A, as they were defined in Section 2.3.1.

Fig. 2.5. Core tensor of an order-3 tensor with n-ranks ρ_1, ρ_2, and ρ_3. Only the first ρ_1 × ρ_2 × ρ_3 elements indicated in blue are non-zero.

Therefore, an economy-size HOSVD of A can be stated as

\mathcal{A} = \mathcal{S}^{[s]} \times_1 U_1^{[s]} \times_2 U_2^{[s]} \cdots \times_R U_R^{[s]} ,    (2.13)

where S^{[s]} ∈ C^{ρ_1×ρ_2×···×ρ_R} as shown in Figure 2.5, and U_r^{[s]} ∈ C^{M_r×ρ_r}, r = 1, 2, . . . , R, contain the first ρ_r columns of U_r. An example of a core tensor S with its non-zero part S^{[s]} is depicted in Figure 2.5. Note that ρ_r ≤ M_r for all r = 1, 2, . . . , R.

Finally, for a model order d, the corresponding HOSVD low-rank approximation is

\mathcal{A} \approx \mathcal{S}'^{[s]} \times_1 U_1'^{[s]} \times_2 U_2'^{[s]} \cdots \times_R U_R'^{[s]} ,    (2.14)

where S'^{[s]} ∈ C^{d×d×···×d}, and U_r'^{[s]} ∈ C^{M_r×d}, r = 1, 2, . . . , R, are the matrices of r-mode singular vectors. In practice, the HOSVD is obtained via the SVDs of the matrix unfoldings.
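A compact NumPy sketch of the HOSVD low-rank approximation (2.14), reusing the unfold and r_mode_product helpers sketched above (hosvd_lowrank is a hypothetical name; d ≤ M_r is assumed for every mode):

def hosvd_lowrank(A, d):
    """HOSVD low-rank approximation (2.14): truncate every mode to d singular vectors."""
    U_list = []
    for r in range(1, A.ndim + 1):
        U, _, _ = np.linalg.svd(unfold(A, r), full_matrices=False)
        U_list.append(U[:, :d])
    S = A                                    # core tensor: project A onto the truncated bases
    for r, U in enumerate(U_list, start=1):
        S = r_mode_product(S, U.conj().T, r)
    A_hat = S                                # reconstruction according to (2.14)
    for r, U in enumerate(U_list, start=1):
        A_hat = r_mode_product(A_hat, U, r)
    return S, U_list, A_hat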


2.3.3 PARAFAC decomposition

The Parallel Factor Analysis (PARAFAC), a tool that originally stems from the field of psychometrics [4], takes a different approach at decomposing a tensor. While the HOSVD is focused on the r-spaces, PARAFAC considers the fact that the SVD can be seen as a decomposition of a matrix into the sum of a minimal number of rank-one matrices, which are given by the corresponding left and right singular vectors and weighted by the corresponding singular values. In the same manner, we can decompose the R-dimensional data tensor into a sum of a minimal number of rank-one tensors, as they were defined in (2.9). The aim of PARAFAC is therefore to decompose a tensor A of rank d into a sum of d rank-one tensors:

\mathcal{A} = \sum_{n=1}^{d} f_n^{(1)} \circ f_n^{(2)} \circ \cdots \circ f_n^{(R)} ,    (2.15)

where f_n^{(r)} ∈ C^{M_r}, n = 1, . . . , d. This means that the model order coincides with the tensor rank as defined in (2.10). By defining the so-called factor matrices F^{(r)} ∈ C^{M_r×d}, which contain the vectors f_n^{(r)} as columns,

F^{(r)} = \left[ f_1^{(r)}, \ldots, f_d^{(r)} \right] \in \mathbb{C}^{M_r \times d} ,    (2.16)

the PARAFAC decomposition of a tensor A ∈ C^{M_1×M_2×···×M_R} with model order d can be rewritten as

\mathcal{A} = \mathcal{I}_{R,d} \times_1 F^{(1)} \times_2 F^{(2)} \cdots \times_R F^{(R)} ,    (2.17)

where I_{R,d} is the R-dimensional identity tensor of size d × d × . . . × d. Its elements are equal to one for indices i_1 = i_2 = . . . = i_R and zero otherwise. Comparing (2.17) with the HOSVD low-rank approximation (2.14), the core tensor is replaced by the "diagonal" identity tensor in the PARAFAC decomposition. The dimensions are thus completely decoupled.

Figure 2.6 illustrates the PARAFAC decomposition for an order-3 tensor; first as a sum of rank-one tensors according to (2.15), then as an r-mode-product based decomposition (2.17).

Fig. 2.6. Illustration of the PARAFAC decomposition for a 3-way tensor. Above: representation as a sum of rank-one tensors; below: r-mode-product based decomposition.
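As a quick numerical check of (2.15)/(2.17), the following NumPy sketch (parafac_reconstruct is a hypothetical helper) rebuilds a tensor from given factor matrices as a sum of d rank-one terms:

def parafac_reconstruct(factors):
    """Rebuild a tensor from factor matrices F^(1), ..., F^(R) (each of size M_r x d),
    i.e. evaluate (2.15) as a sum of d rank-one tensors."""
    d = factors[0].shape[1]
    A = np.zeros([F.shape[0] for F in factors], dtype=complex)
    for n in range(d):
        comp = factors[0][:, n]
        for F in factors[1:]:
            comp = np.multiply.outer(comp, F[:, n])
        A = A + comp
    return A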

There exist iterative solutions for accomplishing the PARAFAC decomposition, such as Multilinear Alternating Least Squares (MALS) [5]. However, the MALS algorithm is not suitable for the case that the factor matrices are rank deficient [6]. Moreover, it has a high computational complexity and its convergence is not guaranteed, since it is an iterative solution. The solution used in this thesis is the Closed-Form PARAFAC (CFP) [6], which uses several simultaneous matrix diagonalizations based on the HOSVD. The problem here is the computationally expensive task of finding the correct factor matrix estimates out of a large set of estimates. However, the computational complexity of the CFP can be drastically reduced by computing only one solution.


3. Data Model

The tensor notation introduced in Section 2 is a convenient way to represent multi-dimensional signals sampled from antenna grids. For our data model, we assume that d superimposed planar wavefronts are captured by an R-dimensional (R-D) grid with M_r sensors in each dimension r ∈ {1, . . . , R}. These dimensions can, e.g., be the horizontal and vertical axes of the transmitter and receiver arrays, or frequency bins. Each dimension r represents a spatial frequency µ_i^{(r)} to be estimated for each path i, i = 1, . . . , d. The spatial frequencies correspond to physical parameters such as the elevation or azimuth angle of the direction-of-departure or direction-of-arrival, the time delay of arrival, or the Doppler shift.

At a sampling time instant n and sensor (m_1, . . . , m_R), we obtain the single measurement

x_{m_1,\ldots,m_R,n} = \sum_{i=1}^{d} s_{i,n} \cdot \prod_{r=1}^{R} e^{(m_r - 1)\, j \mu_i^{(r)}} + n_{m_1,\ldots,m_R,n} ,    (3.1)

where s_{i,n} are the complex symbols from the i-th source at snapshot n. The noise elements n_{m_1,\ldots,m_R,n} are i.i.d. ZMCSCG (zero-mean circularly-symmetric complex Gaussian) with variance σ_n^2. Note that in Section 4 this noise is assumed to be white, whereas the colored noise case is considered in Section 5.

The data are collected in N consecutive time instants, called snapshots. The model order d, that is, the number of principal multipath components, is assumed to be known. It can be estimated by using multi-dimensional model order selection schemes [7]. Furthermore, we assume that d ≤ N and d ≤ M_max (overdetermined system).

The signal is taken to be narrowband such that the antenna element spacing does not exceed half a wavelength. Figure 3.1 shows an example of a measurement grid in the form of a 2-dimensional outer-product array (OPA), where all distances ∆_i^{(r)} for i = 1, 2, 3 and r = 1, 2 can take different values.

Fig. 3.1. 2-dimensional outer-product based array (OPA) of size 3 × 3.

3.1 Matrix Notation

For matrix notation, the measurements have to be aligned into a matrix, which is accomplished by appropriate stacking. If we capture measurements over N subsequent time instants and stack each snapshot into a column of a matrix, one obtains for the measurement matrix X ∈ C^{M×N}

X = \begin{bmatrix}
x_{1,1,\ldots,1,1,1} & x_{1,1,\ldots,1,1,2} & \cdots & x_{1,1,\ldots,1,1,N} \\
x_{1,1,\ldots,1,2,1} & x_{1,1,\ldots,1,2,2} & \cdots & x_{1,1,\ldots,1,2,N} \\
\vdots & \vdots & & \vdots \\
x_{1,1,\ldots,1,M_R,1} & x_{1,1,\ldots,1,M_R,2} & \cdots & x_{1,1,\ldots,1,M_R,N} \\
x_{1,1,\ldots,2,1,1} & x_{1,1,\ldots,2,1,2} & \cdots & x_{1,1,\ldots,2,1,N} \\
x_{1,1,\ldots,2,2,1} & x_{1,1,\ldots,2,2,2} & \cdots & x_{1,1,\ldots,2,2,N} \\
\vdots & \vdots & & \vdots \\
x_{M_1,M_2,\ldots,M_{R-1},M_R,1} & x_{M_1,M_2,\ldots,M_{R-1},M_R,2} & \cdots & x_{M_1,M_2,\ldots,M_{R-1},M_R,N}
\end{bmatrix}    (3.2)

where M = \prod_{r=1}^{R} M_r. The additive noise samples can be summarized in a noise matrix N ∈ C^{M×N}, which is stacked in the same fashion as X. Using matrix-vector notation for the data model (3.1), one obtains

X = A \cdot S + N ,    (3.3)

where

S = \begin{bmatrix}
s_{1,1} & s_{1,2} & \cdots & s_{1,N} \\
s_{2,1} & s_{2,2} & \cdots & s_{2,N} \\
\vdots & \vdots & & \vdots \\
s_{d,1} & s_{d,2} & \cdots & s_{d,N}
\end{bmatrix} \in \mathbb{C}^{d \times N}    (3.4)

is the symbol matrix, and A ∈ C^{M×d} is the so-called joint array steering matrix whose columns contain the array steering vectors a(µ_i), i = 1, . . . , d, as given in

A = \left[ a(\mu_1), a(\mu_2), \ldots, a(\mu_d) \right]    (3.5)

with µ_i = [µ_i^{(1)}, µ_i^{(2)}, . . . , µ_i^{(R)}]^T. That is, the i-th column of A only contains the R spatial frequencies µ_i^{(r)}, r = 1, . . . , R, belonging to path i.

The array steering vectors can explicitly be calculated as the Kronecker products (matrix outer product, see A1) of the array steering vectors of the separate modes through

a(\mu_i) = a^{(1)}(\mu_i^{(1)}) \otimes a^{(2)}(\mu_i^{(2)}) \otimes \cdots \otimes a^{(R)}(\mu_i^{(R)}) .    (3.6)


The vectors a^{(r)}(µ_i^{(r)}) ∈ C^{M_r×1} denote the response of the array in the r-th mode due to the i-th wavefront. As an example, for a Uniform Rectangular Array (URA), which is an OPA (Fig. 3.1) with constant distances over a mode with M_r sensors, we have that

a^{(r)}(\mu_i^{(r)}) = \begin{bmatrix} 1 \\ e^{j \mu_i^{(r)}} \\ e^{2 j \mu_i^{(r)}} \\ \vdots \\ e^{(M_r - 1) j \mu_i^{(r)}} \end{bmatrix} .    (3.7)
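A one-line NumPy sketch of this uniform-mode response (steering_vector is a hypothetical helper that is reused in a later sketch):

import numpy as np

def steering_vector(mu, M):
    """Array response a^(r)(mu) of a uniform mode with M sensors, cf. (3.7)."""
    return np.exp(1j * mu * np.arange(M))    # [1, e^{j mu}, ..., e^{j (M-1) mu}]^T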

3.2 Tensor Notation

A more natural way to capture the samples (3.1) over N subsequent time instants is to arrange them as an (R+1)-dimensional measurement tensor X ∈ C^{M_1×...×M_R×N}. Similarly to (3.3), the tensor notation reads as

\mathcal{X} = \mathcal{A} \times_{R+1} S^T + \mathcal{N} ,    (3.8)

where A ∈ C^{M_1×...×M_R×d} is the array steering tensor of an outer product array (OPA) as in Figure 3.1, given by

\mathcal{A} = \sum_{i=1}^{d} a^{(1)}(\mu_i^{(1)}) \circ a^{(2)}(\mu_i^{(2)}) \circ \cdots \circ a^{(R)}(\mu_i^{(R)}) .    (3.9)

S ∈ C^{d×N} is the same transmitted symbol matrix as in (3.4), and N ∈ C^{M_1×...×M_R×N} is the noise tensor. Similarly to the procedure in Section 2.3.3, where (2.15) has the same structure as (3.9), the array steering tensor can also be stated as

\mathcal{A} = \mathcal{I}_{R+1,d} \times_1 A^{(1)} \times_2 A^{(2)} \cdots \times_R A^{(R)} ,    (3.10)

where A^{(r)} ∈ C^{M_r×d} comprises

A^{(r)} = \left[ a^{(r)}(\mu_1^{(r)}), a^{(r)}(\mu_2^{(r)}), \ldots, a^{(r)}(\mu_d^{(r)}) \right] .    (3.11)

The following relations between the matrix notation from Section 3.1 and the presented tensor notation hold:

A = [\mathcal{A}]_{(R+1)}^T ,    (3.12)
N = [\mathcal{N}]_{(R+1)}^T ,    (3.13)
X = [\mathcal{X}]_{(R+1)}^T ,    (3.14)

i.e., the measurement matrix X is equal to the transpose of the unfolding of the measurement tensor X along the last mode. The above steps are also referred to as stacking operations.
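Putting the pieces together, the following NumPy sketch builds a small synthetic measurement tensor according to (3.8)-(3.11), reusing the steering_vector, r_mode_product and unfold helpers sketched earlier; all sizes (M_1 = M_2 = 4, d = 3, N = 10) are arbitrary example values, not the thesis scenario.

def steering_tensor(A_r):
    """Array steering tensor (3.10): its i-th slice along mode R+1 is the outer
    product of the i-th columns of A^(1), ..., A^(R)."""
    d = A_r[0].shape[1]
    A = np.zeros([F.shape[0] for F in A_r] + [d], dtype=complex)
    for i in range(d):
        comp = A_r[0][:, i]
        for F in A_r[1:]:
            comp = np.multiply.outer(comp, F[:, i])
        A[..., i] = comp
    return A

rng = np.random.default_rng(0)
R, M, d, N = 2, 4, 3, 10
mu = rng.uniform(-np.pi, np.pi, size=(R, d))                 # spatial frequencies mu_i^(r)
A_r = [np.stack([steering_vector(mu[r, i], M) for i in range(d)], axis=1) for r in range(R)]
S = (rng.standard_normal((d, N)) + 1j * rng.standard_normal((d, N))) / np.sqrt(2)
noise = 0.1 * (rng.standard_normal((M, M, N)) + 1j * rng.standard_normal((M, M, N)))
X_tensor = r_mode_product(steering_tensor(A_r), S.T, R + 1) + noise   # (3.8)
X_matrix = unfold(X_tensor, R + 1).T                                  # (3.14): M x N stacked matrix
print(X_tensor.shape, X_matrix.shape)                                 # (4, 4, 10) (16, 10)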


4. R-D Parameter Estimation

In this section, multi-dimensional parameter estimation schemes based on subspace decomposition are presented, in which the signal and noise subspaces of the measurement tensor X as in (3.8) are separated. The number of principal path components d can be estimated according to Model Order Selection (MOS) schemes such as [7]. The three presented R-dimensional parameter estimation techniques are R-D Standard ESPRIT (R-D SE), R-D Standard Tensor-ESPRIT (R-D STE) – both of which can only be applied if the shift invariance property [1] holds – and finally Closed-Form PARAFAC based Parameter Estimation (CFP-PE). Figure 4.1 gives an overview of all three discussed schemes and shall help the reader follow the steps presented in the following subsections.


Fig. 4.1. R-D Standard ESPRIT (R-D SE), R-D Standard Tensor-ESPRIT (R-D STE) and Closed-Form PARAFAC based Parameter Estimation (CFP-PE).

4.1 R-D Standard ESPRIT (R-D SE)

Via the stacking operation (3.14), the measurement tensor X is reshaped to a matrix X ∈ C^{M×N}, where M = \prod_{r=1}^{R} M_r. The signal subspace is computed via a low-rank Singular Value Decomposition (SVD) according to (2.8) as

X \approx U_s \Sigma_s V_s^H ,    (4.1)

where Σ_s ∈ R^{d×d}. Note that the prime symbol is dropped for notational convenience. By exploiting the shift invariance of the antenna array, a closed-form expression of low computational cost for the parameter estimation can be deduced [2].

4.2 R-D Standard Tensor-ESPRIT (R-D STE)

This method employs the actual measurement tensor X and separates the signal and noise subspaces via the HOSVD low-rank approximation according to (2.14) as

\mathcal{X} \approx \mathcal{S}^{[s]} \times_1 U_1^{[s]} \cdots \times_R U_R^{[s]} \times_{R+1} U_{R+1}^{[s]} ,    (4.2)

where S^{[s]} ∈ C^{r_1×...×r_{R+1}} is the core tensor, U_r^{[s]} ∈ C^{M_r×r_r} is the subspace matrix of the r-th dimension, and r_r = min(M_r, d) is the r-rank of X.

The signal subspace tensor U^{[s]} ∈ C^{M_1×M_2×...×M_R×d} is therefore

\mathcal{U}^{[s]} = \mathcal{S}^{[s]} \times_1 U_1^{[s]} \cdots \times_R U_R^{[s]} .    (4.3)

Again, exploiting the shift-invariance structure here, we can build R shift invariance matrices according to [2].

4.3 Closed-Form PARAFAC based Parameter Estimation (CFP-PE)

The Closed-Form PARAFAC based Parameter Estimation (CFP-PE) scheme has been proposed in [7]. The measurement tensor X is decomposed via PARAFAC according to (2.17):

\mathcal{X} = \mathcal{I}_{R+1,d} \times_1 F^{(1)} \times_2 F^{(2)} \cdots \times_R F^{(R)} \times_{R+1} F^{(R+1)} ,    (4.4)

where I_{R+1,d} is the (R+1)-dimensional identity tensor, each dimension of which has size d. The factor matrices F^{(r)} ∈ C^{M_r×d} are found via the closed-form PARAFAC solution presented in [6].

Comparing with the tensor data model (3.8) and (3.10), one can see that the factor matrices F^{(r)} provide estimates for the system's steering matrices A^{(r)} and the symbol matrix S:

\mathcal{X} \approx \mathcal{I}_{R+1,d} \times_1 A^{(1)} \cdots \times_R A^{(R)} \times_{R+1} S^T    (4.5)

Thus, through the PARAFAC decomposition, we are able to find estimates of the steering matrices A^{(r)} with the correct structure, regardless of whether the sensor grid fulfils the shift-invariance property or not. This guarantees the flexibility of this scheme regarding the chosen sensor array structure, and leads to an increased robustness.

Furthermore, the CFP decouples the multi-dimensional data into vectors corresponding to a certain dimension and source. Therefore, after the CFP, a multi-dimensional problem is transformed into several one-dimensional problems. These one-dimensional problems can be solved via Peak Search (PS), or via Shift Invariance (SI) if applicable for the given sensor grid. Moreover, the CFP-PE allows one to introduce a step called merging dimensions, which is applied to increase the model order. A subsequent Least Squares Khatri-Rao Factorization (LSKRF) is used to refactorize the merged factor matrices.


5. R-D Prewhitening

In this section, state-of-the-art tensor-based prewhitening schemes are presented, namely the Sequential Generalized SVD (S-GSVD) and its iterative counterpart, the I-S-GSVD. The former can be applied if a noise-only measurement for estimating the noise statistics is available, while the latter scheme can deliver improved estimates even without any information about the noise.

From now on, we thus assume that the additive noise component from (3.8) is colored,

\mathcal{X} = \mathcal{A} \times_{R+1} S^T + \mathcal{N}^{(c)} ,    (5.1)

and that the colored noise tensor N^(c) ∈ C^{M_1×...×M_R×N} has a Kronecker structure, as can be found in certain EEG [8] and MIMO applications [9]. The colored noise tensor can thus be stated as

[\mathcal{N}^{(c)}]_{(R+1)} = [\mathcal{N}]_{(R+1)} \cdot (L_1 \otimes L_2 \otimes \cdots \otimes L_R)^T ,    (5.2)

where ⊗ is the Kronecker product (see A1), N is a white noise tensor collecting i.i.d. ZMCSCG noise samples with variance σ_n^2, and L_r ∈ C^{M_r×M_r}, r = 1, . . . , R, are the so-called noise correlation factors of the r-th dimension of the colored noise tensor.

As proven in [10], (5.2) can be rewritten as

\mathcal{N}^{(c)} = \mathcal{N} \times_1 L_1 \times_2 L_2 \cdots \times_R L_R ,    (5.3)

with N ∈ C^{M_1×...×M_R×N} denoting a white (i.e., uncorrelated) ZMCSCG noise tensor. Please note that while the noise tensor N is (R+1)-dimensional, there are only correlation matrices for the first R dimensions, as we assume that the time samples are uncorrelated. Alternatively, one can say that L_{R+1} is given to be an identity matrix, which has no effect on the noise tensor.

The noise covariance matrix in the r-th mode, R_r, is defined as

E\left\{ [\mathcal{N}^{(c)}]_{(r)} \cdot [\mathcal{N}^{(c)}]_{(r)}^H \right\} = \alpha \cdot R_r = \alpha \cdot L_r \cdot L_r^H ,    (5.4)

where α is a normalization constant such that tr(L_r · L_r^H) = M_r. The equivalence between (5.2), (5.3) and (5.4) is shown in [10].

5.1 Sequential GSVD (S-GSVD)

The Sequential GSVD prewhitening scheme was proposed in [10]. As presented in the following, it consists of two steps: first, the estimation of the correlation factors L_r from the noise-only measurement tensor N^(c); then, the actual prewhitening scheme can be applied.


5.1.1 Prewhitening Correlation Factor Estimation (PCFE)

In order to apply the S-GSVD prewhitening scheme, the correlation factors L_r must first be estimated from the noise-only measurement tensor N^(c). Dropping the expectation operator from (5.4), one can estimate the noise covariance matrix R_r for each dimension r = 1, . . . , R by

\hat{R}_r = \alpha' \cdot [\mathcal{N}^{(c)}]_{(r)} \cdot [\mathcal{N}^{(c)}]_{(r)}^H = \hat{L}_r \cdot \hat{L}_r^H ,    (5.5)

where again α' is chosen such that tr(\hat{R}_r) = M_r. These estimates then need to be factorized to obtain the correlation factor estimates, e.g., directly via a Cholesky decomposition or via an eigenvalue decomposition (EVD)

\hat{R}_r = Q_r \cdot \Lambda \cdot Q_r^H ,    (5.6)

from which it follows that

\hat{L}_r = Q_r \cdot \Lambda^{1/2} .    (5.7)
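A possible NumPy sketch of PCFE, reusing the unfold helper from Section 2.1 (pcfe is a hypothetical name; the EVD route (5.6)/(5.7) is used here):

def pcfe(N_c):
    """Prewhitening Correlation Factor Estimation (5.5)-(5.7) from a noise-only
    tensor N_c of size M_1 x ... x M_R x N (the time mode is left uncorrelated)."""
    L_list = []
    for r in range(1, N_c.ndim):                   # only the first R modes
        Nr = unfold(N_c, r)
        Rr = Nr @ Nr.conj().T
        Rr *= N_c.shape[r - 1] / np.trace(Rr).real    # normalize so that tr(R_r) = M_r
        w, Q = np.linalg.eigh(Rr)                     # EVD of the Hermitian estimate (5.6)
        L_list.append(Q @ np.diag(np.sqrt(np.maximum(w, 0))))   # L_r = Q Lambda^{1/2} (5.7)
    return L_list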

5.1.2 Tensor Prewhitening Scheme: S-GSVD

Once the estimates L_1, . . . , L_R ∈ C^{M_r×M_r} of the correlation factor matrices are computed through (5.5), the S-GSVD prewhitening scheme can be executed as follows (see also Figure 5.1):

1) Prewhiten the measurement tensor X ∈ C^{M_1×M_2×···×M_R×N}:

\bar{\mathcal{X}} = \mathcal{X} \times_1 \hat{L}_1^{-1} \times_2 \hat{L}_2^{-1} \cdots \times_R \hat{L}_R^{-1}    (5.8)

Note that due to the uncorrelatedness between the time instants, we have only R correlation factors, while the measurement tensor has R + 1 dimensions. By substituting our coloured measurement tensor (5.1) in (5.8),

\bar{\mathcal{X}} = \left( \mathcal{A} \times_{R+1} S^T + \mathcal{N}^{(c)} \right) \times_1 \hat{L}_1^{-1} \times_2 \hat{L}_2^{-1} \cdots \times_R \hat{L}_R^{-1}    (5.9)
= \mathcal{A} \times_1 \hat{L}_1^{-1} \times_2 \hat{L}_2^{-1} \cdots \times_R \hat{L}_R^{-1} \times_{R+1} S^T + \mathcal{N} ,    (5.10)

while taking into account the Kronecker model of the coloured noise tensor (5.3), the multi-dimensional noise component becomes white. However, the signal component of the prewhitened tensor has been distorted by the prewhitening. This must be accounted for in a later dewhitening step.

2) Compute the HOSVD low-rank approximation (2.14) of the prewhitened tensor,

\bar{\mathcal{X}} \approx \mathcal{S}^{[s]} \times_1 U_1^{[s]} \times_2 U_2^{[s]} \cdots \times_R U_R^{[s]} \times_{R+1} U_{R+1}^{[s]} ,    (5.11)

such that the corresponding subspace tensor is

\bar{\mathcal{U}}^{[s]} = \mathcal{S}^{[s]} \times_1 U_1^{[s]} \times_2 U_2^{[s]} \cdots \times_R U_R^{[s]} ,    (5.12)

where S^{[s]} ∈ C^{p_1×p_2×...×p_R×d} and U_r^{[s]} ∈ C^{M_r×p_r} such that p_r = min(M_r, d) for r = 1, . . . , R. We assume again that d ≤ N.

3) Dewhiten the estimated subspace in order to reconstruct the signal subspace:

\hat{\mathcal{U}}^{[s]} = \bar{\mathcal{U}}^{[s]} \times_1 \hat{L}_1 \times_2 \hat{L}_2 \cdots \times_R \hat{L}_R    (5.13)



Fig. 5.1. Basic steps of the S-GSVD prewhitening scheme with Prewhitening Correlation Factor Estimation (PCFE).

With the new, correctly dewhitened subspace tensor, the parameters can be estimated according to the Standard Tensor-ESPRIT or the CFP based parameter estimation (CFP-PE) scheme (see Sections 4.2 and 4.3).

Originally, the S-GSVD was derived by applying multiple GSVDs [13] to the measurement tensor – hence the name Sequential GSVD. In this way, the matrix inversions in the prewhitening step (5.8) can be avoided. However, the procedure presented above is more accurate than the original S-GSVD and therefore preferable.
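The three S-GSVD steps can be sketched in NumPy as follows, reusing the r_mode_product and hosvd_lowrank helpers from Section 2 together with pcfe from above (s_gsvd is a hypothetical name; for simplicity, the explicit inversions of (5.8) are used rather than the GSVD-based formulation mentioned above):

def s_gsvd(X, L_list, d):
    """S-GSVD sketch: prewhiten (5.8), HOSVD low-rank approximation (5.11),
    build the subspace tensor (5.12) and dewhiten it (5.13)."""
    Xw = X
    for r, L in enumerate(L_list, start=1):              # only the first R modes are correlated
        Xw = r_mode_product(Xw, np.linalg.inv(L), r)     # prewhitening (5.8)
    S_core, U_list, _ = hosvd_lowrank(Xw, d)             # (5.11), assuming d <= N and d <= M_r
    U_sub = S_core                                       # subspace tensor (5.12): skip the time mode
    for r in range(1, Xw.ndim):
        U_sub = r_mode_product(U_sub, U_list[r - 1], r)
    for r, L in enumerate(L_list, start=1):
        U_sub = r_mode_product(U_sub, L, r)              # dewhitening (5.13)
    return U_sub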

5.2 Iterative Sequential GSVD (I-S-GSVD)

If the second-order statistics of the noise cannot be estimated, e.g., if only a small number of noise snapshots is available, or if the noise cannot be measured without the presence of the signal component, then the Iterative Sequential GSVD (I-S-GSVD) can be used, which was proposed in conjunction with STE in [12]. The principal idea is to apply the Prewhitening Correlation Factor Estimation (PCFE) from Section 5.1.1 iteratively to compute estimates of the correlation factors L_r. The concept of the I-S-GSVD prewhitening scheme is depicted in Figure 5.2. In contrast to [12], the I-S-GSVD concept is expanded here by the option to choose CFP-PE in the parameter estimation step. This combination of I-S-GSVD and CFP-PE has not yet been investigated in the literature and will be scrutinized in the simulations of Section 6.

The I-S-GSVD algorithm works as follows:

1) Initialize L_r as M_r × M_r identity matrices for r = 1, . . . , R.

2) Do the S-GSVD from Section 5.1.2.

3) Estimate the parameters µ_i^{(r)} via STE or CFP-PE (see Sections 4.2 and 4.3).



Fig. 5.2. Basic steps of the I-S-GSVD iterative prewhitening scheme.

4) From the obtained µ_i^{(r)}, estimate the array steering tensor A according to the model in (3.9). Using X and the estimated steering tensor, calculate the signal matrix:

\hat{S} = \left( [\mathcal{X}]_{(R+1)} \cdot [\hat{\mathcal{A}}]_{(R+1)}^{+} \right)^T ,    (5.14)

where (·)^+ denotes the Moore-Penrose pseudo-inverse.

5) Given the estimates of A and S, compute an estimate of the noise tensor:

\hat{\mathcal{N}}^{(c)} = \mathcal{X} - \hat{\mathcal{A}} \times_{R+1} \hat{S}^T    (5.15)

6) From the estimated noise tensor, update the estimates of L_r using PCFE (see Section 5.1.1).

7) Go back to step 2.

According to [12], the root mean square change (RMSC) of the parameter estimates µ_i^{(r)} between two iterations can be applied as a stopping criterion. Via simulations in conjunction with STE, it could be shown that two to three iterations are always sufficient to achieve convergence. This fact could also be verified for the I-S-GSVD in conjunction with the CFP-PE, as will be shown in the following section.
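A NumPy skeleton of this iterative loop is sketched below, reusing unfold, r_mode_product, pcfe and s_gsvd from the earlier sketches; the parameter estimation step (STE or CFP-PE) is abstracted into a user-supplied callback, since its details are beyond this sketch, and a fixed number of K iterations stands in for the RMSC stopping criterion.

def i_s_gsvd(X, d, estimate_steering_tensor, K=4):
    """I-S-GSVD skeleton following steps 1)-7). estimate_steering_tensor(U_sub) -> A_hat
    is a hypothetical stand-in for the STE / CFP-PE parameter estimation step."""
    R = X.ndim - 1
    L_list = [np.eye(X.shape[r]) for r in range(R)]              # 1) identity initialization
    A_hat = None
    for _ in range(K):
        U_sub = s_gsvd(X, L_list, d)                             # 2) S-GSVD
        A_hat = estimate_steering_tensor(U_sub)                  # 3)+4) parameter estimation
        S_hat = (unfold(X, R + 1) @ np.linalg.pinv(unfold(A_hat, R + 1))).T   # (5.14)
        N_hat = X - r_mode_product(A_hat, S_hat.T, R + 1)        # (5.15)
        L_list = pcfe(N_hat)                                     # 6) update correlation factors
    return A_hat, L_list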


6. Simulation Results

In this section, simulations carried out in MATLAB shall demonstrate the performance of the discussed multi-dimensional parameter estimation techniques and prewhitening schemes using the R-D harmonic retrieval model of (3.8), where the spatial frequencies µ_i^{(r)} are drawn from a uniform distribution in [−π, π]. The source symbols are i.i.d. ZMCSCG distributed with power equal to σ_s^2 for all sources. The SNR at the receiver is defined as

\text{SNR} = 10 \cdot \log_{10}\!\left( \frac{\sigma_s^2}{\sigma_n^2} \right) ,    (6.1)

where σ_n^2 is the variance of the elements of the white noise tensor N in (3.8) for Section 6.1 and in (5.3) for Section 6.2.

For all executed simulations, a scenario with the specifications shown in Table 6.1 was employed. Five different parameters were estimated, rendering the scenario a 5-D parameter estimation problem.

                                          Variable   Estimated parameter
Transmitter antenna array of size 4 × 4   M_1 = 4    µ_i^{(1)}: DOD azimuth
                                          M_2 = 4    µ_i^{(2)}: DOD elevation
Receiver antenna array of size 4 × 4      M_3 = 4    µ_i^{(3)}: DOA azimuth
                                          M_4 = 4    µ_i^{(4)}: DOA elevation
Number of frequency bins                  M_5 = 4    µ_i^{(5)}: path delay
Number of snapshots                       N = 4
Number of paths                           d = 3

Table 6.1. Specification of the simulated 5-D scenario.

If a simulation is carried out with L realizations for each SNR value, the overall RMSE reads as

\text{RMSE} = \sqrt{ E\left\{ \sum_{r=1}^{R} \sum_{i=1}^{d} \left( \mu_i^{(r)} - \hat{\mu}_i^{(r)} \right)^2 \right\} } .    (6.2)
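For completeness, the RMSE (6.2) over L realizations can be computed as in the following small NumPy sketch, where the expectation is approximated by the sample mean over the runs:

import numpy as np

def rmse(mu_hat, mu):
    """RMSE (6.2): mu and mu_hat are arrays of shape (L, R, d)."""
    return np.sqrt(np.mean(np.sum((mu_hat - mu) ** 2, axis=(1, 2))))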

6.1 White Noise Case

First, the different tensor-based parameter estimation schemes are briefly compared for a white-noise scenario in Figure 6.1, where the RMSE is plotted versus the SNR according to (6.2). One can see that the tensor-based schemes Standard Tensor-ESPRIT (STE) (Section 4.2) and Closed-Form PARAFAC based parameter estimation (CFP-PE) (Section 4.3) have an improved performance over the ordinary Standard ESPRIT (SE). However, comparing all schemes with the Cramér-Rao bound [14], there is still room for improvement.


Fig. 6.1. RMSE vs. SNR for the white noise case for L = 50 runs.

6.2 Colored Noise Case

In this section, colored noise is generated according to (5.3). The main goal is to investigate the performance of the I-S-GSVD prewhitening method in conjunction with CFP-PE from Section 5.2, a combination not yet investigated in the literature, from now on denoted as I-S-CFP-PE. The new scheme is assessed against the plain non-iterative S-GSVD prewhitening scheme joined with CFP-PE, abbreviated as S-CFP-PE, as well as plain CFP-PE without prewhitening. In the simulations, it is considered that the elements of the noise covariance matrix in the r-th mode, R_r = L_r · L_r^H, vary as a function of the correlation coefficient ρ_r, similarly as in [10]. As an example, the structure of R_r as a function of ρ_r for M_r = 3 is given as

R_r = \begin{bmatrix} 1 & \rho_r^* & (\rho_r^*)^2 \\ \rho_r & 1 & \rho_r^* \\ \rho_r^2 & \rho_r & 1 \end{bmatrix} ,    (6.3)

where ρ_r is the correlation coefficient of the r-th mode. However, in the following simulations the correlation coefficients were constant over all correlated dimensions, with ρ = ρ_r ∀ r = 1, . . . , R. Note that other types of correlation models can also be used. To be consistent with (5.4), L_r is normalized such that tr(L_r · L_r^H) = M_r. Again, the RMSE is computed according to (6.2).
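A small NumPy sketch of this correlation model (assuming a real-valued ρ for simplicity), producing R_r with entries ρ^{|p−q|} as in (6.3) together with a Cholesky factor L_r; for this model tr(R_r) = M_r holds by construction:

import numpy as np

def correlation_factor(rho, M):
    """Toeplitz correlation matrix R_r as in (6.3) and a Cholesky factor L_r."""
    p = np.arange(M)
    Rr = rho ** np.abs(p[:, None] - p[None, :])
    return np.linalg.cholesky(Rr), Rr

Lr, Rr = correlation_factor(0.9, 4)      # one mode of the simulated scenario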

First of all, the convergence of I-S-CFP-PE is scrutinized in Figure 6.2. One can see that convergence is reached after only three iterations, which is remarkable. Although there is a small remaining gap to the S-CFP-PE scheme, the RMSE compared to the non-prewhitening scheme CFP-PE can be improved by a factor of ten for the given scenario with correlation coefficient ρ = 0.9. It is expected that the remaining error vanishes for an increasing number of snapshots N; however, this was impossible to simulate due to the great computational complexity.

Next, the performance of these schemes is tested over a wide SNR range in a colored noise scenario with correlation coefficient ρ = 0.9 in Figure 6.3. In the low-SNR region, the I-S-CFP-PE delivers only marginally better estimates than the non-prewhitening scheme. Again, it is expected that this gap to the S-CFP-PE scheme would decrease significantly for an increased number of snapshots N. In the high-SNR region, the I-S-CFP-PE performs very close to the non-iterative S-CFP-PE, and is thus able to successfully estimate the noise correlation factors.

In Figure 6.4, the performance over a varying correlation coefficient is examined. For low ρ, that is, low correlation over all dimensions, all three schemes perform equally well. For high correlation, the estimates can be improved drastically by the prewhitening schemes. At the chosen SNR of 20 dB, the I-S-GSVD is always very close to the performance of the non-iterative scheme.

Finally, in Figure 6.5, the performance of the schemes is assessed for a positioning error scenario. While all previously mentioned simulations were executed for shift-invariant outer product arrays, the sensor array is made non-shift-invariant in this simulation. To this end, the antennas of the first two dimensions, e.g., the 2-dimensional receiver antenna array, are randomly misplaced with a positioning error variance σ_p. For this scenario, the CFP combined with Peak Search (PS) can be successfully applied, while the CFP utilizing shift invariance (SI) naturally performs as poorly as Standard Tensor-ESPRIT with S-GSVD prewhitening.


Fig. 6.2. RMSE vs. iterations with SNR = 15 dB, correlation coefficient ρ = 0.9 and L = 20 runs.



Fig. 6.3. RMSE vs. SNR with correlation coefficient ρ = 0.9, K = 4 iterations and L = 20 runs.


Fig. 6.4. RMSE vs. correlation coefficient with SNR = 20 dB, K = 4 iterations and L = 20 runs.



Fig. 6.5. RMSE vs. array spacing variance with SNR = 40 dB, correlation coefficient ρ = 0.9, K = 4 iterations and L = 15 runs.


7. Conclusions

The tensor-based parameter estimation techniques presented in this thesis achieve an improved accuracy compared to matrix-based schemes. The advantage of the ESPRIT-based schemes is their low computational complexity through the closed-form shift-invariance equations, while the closed-form PARAFAC based parameter estimation can be praised for its applicability to mixed array geometries and its robustness to arrays with positioning errors.

For scenarios with Kronecker-structured colored noise, the results show that the proposed tensor-based prewhitening remarkably improves the estimation accuracy of the plain CFP parameter estimator, while retaining its advantages as described above.

Simulations assessed the performance of the proposed iterative tensor-based S-GSVD prewhitening in conjunction with a CFP based parameter estimator. This iterative algorithm can achieve both a very good estimation of the signal parameters and of the noise statistics, given a large number of snapshots. The iteration converges very fast and the remaining error is small. The performance is very close to the estimates obtained using the non-iterative S-GSVD prewhitening with knowledge of the noise covariance information.

Pointers for future research include carrying out simulations with more advanced and realistic channel models, e.g., in geometry-based scenarios.


Appendix

A1 The Kronecker product

The Kronecker product is the outer product of two matrices. Given A ∈ C^{M×N} and B ∈ C^{P×Q}, the Kronecker product is the block matrix

A \otimes B = \begin{bmatrix} a_{11} B & \cdots & a_{1N} B \\ \vdots & \ddots & \vdots \\ a_{M1} B & \cdots & a_{MN} B \end{bmatrix} \in \mathbb{C}^{(M \cdot P) \times (N \cdot Q)} .    (A1)
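For instance, with NumPy:

import numpy as np

A = np.array([[1, 2],
              [3, 4]])
B = np.array([[0, 1],
              [1, 0]])
print(np.kron(A, B))     # 4 x 4 block matrix: block (m, n) equals a_{mn} * B
# [[0 1 0 2]
#  [1 0 2 0]
#  [0 3 0 4]
#  [3 0 4 0]]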


Bibliography

[1] Roy, R.; Kailath, T.: ESPRIT – estimation of signal parameters via rotational invariance techniques. In: IEEE Transactions on Acoustics, Speech, and Signal Processing 37 (July 1989), pp. 984–995.

[2] Haardt, M.; Roemer, F.; Galdo, G. D.: Higher-order SVD based subspace estimation to improve the parameter estimation accuracy in multi-dimensional harmonic retrieval problems. In: IEEE Transactions on Signal Processing 56 (July 2008), pp. 3198–3213.

[3] De Lathauwer, L.; De Moor, B.; Vandewalle, J.: A multilinear singular value decomposition. In: SIAM J. Matrix Anal. Appl. 21(4) (2000).

[4] Cattell, R. B.: Parallel proportional profiles and other principles for determining the choice of factors by rotation. In: Psychometrika 9 (Dec. 1944).

[5] Bro, R.; Sidiropoulos, N.; Giannakis, G. B.: A fast least squares algorithm for separating trilinear mixtures. In: Proc. Int. Workshop on Independent Component Analysis for Blind Signal Separation (ICA 99), Jan. 1999, pp. 289–294.

[6] Roemer, F.; Haardt, M.: A closed-form solution for multilinear PARAFAC decompositions. In: Proc. 5th IEEE Sensor Array and Multich. Sig. Proc. Workshop (SAM 2008), Darmstadt, Germany, July 2008, pp. 487–491.

[7] da Costa, J. P. C. L.; Roemer, F.; Weis, M.; Haardt, M.: Robust R-D parameter estimation via closed-form PARAFAC. In: International ITG Workshop on Smart Antennas (WSA 2010), Dresden, Germany, March 2010, pp. 99–106.

[8] Huizenga, H. M.; de Munck, J. C.; Waldorp, L. J.; Grasman, R. P. P. P.: Spatiotemporal EEG/MEG source analysis based on a parametric noise covariance model. In: IEEE Transactions on Biomedical Engineering 49 (June 2002), pp. 533–539.

[9] Park, B.; Wong, T. F.: Training sequence optimization in MIMO systems with colored noise. In: Military Communications Conference (MILCOM 2003), Gainesville, USA, Oct. 2003.

[10] da Costa, J. P. C. L.; Roemer, F.; Haardt, M.: Sequential GSVD based prewhitening for multi-dimensional HOSVD based subspace estimation. In: Proc. International ITG Workshop on Smart Antennas (WSA 2009), Berlin, Germany, Feb. 2009.

[11] da Costa, J. P. C. L.; Haardt, M. et al.: Robust R-D parameter estimation via closed-form PARAFAC in Kronecker colored environments. In: International Symposium on Wireless Communication Systems (ISWCS 2010), York, United Kingdom, Sep. 2010, pp. 115–119.

[12] da Costa, J. P. C. L.; Roemer, F.; Haardt, M.: Iterative sequential GSVD (I-S-GSVD) based prewhitening for multidimensional HOSVD based subspace estimation without knowledge of the noise covariance information. In: International ITG Workshop on Smart Antennas (WSA 2010), Dresden, Germany, March 2010, pp. 151–155.


[13] Vandewalle, J.; Lathauwer, L. D.; Comon, P.: The generalized higher order singular value decomposition and the oriented signal-to-signal ratios of pairs of signal tensors and their use in signal processing. In: European Conference on Circuit Theory and Design, Krakow, Poland, Sep. 2003.

[14] Stoica, P.; Nehorai, A.: MUSIC, maximum likelihood, and Cramér-Rao bound. In: IEEE Transactions on Acoustics, Speech, and Signal Processing 37 (May 1989), pp. 720–741.