
Eindhoven University of Technology Department of Electrical Engineering Control Systems

The Department of Electrical Engineering of the Eindhoven University of Technology accepts no responsibility for the contents of M.Sc. theses or practical training reports

Model reduction of

multidimensional dynamical systems by tensor decompositions

by

Jaron Achterberg

Master of Science thesis
Project period: 2008
Report Number: 08A/05
Commissioned by: Prof.dr.ir. A.C.P.M. Backx
Supervisors: Dr. S. Weiland, Dr.ir. J. Ludlage (IPCOS)


Model reduction of multidimensional dynamical systems by tensor decompositions

Jaron Achterberg

Abstract—This paper investigates different methods for multidimensional POD reduction. The POD basis functions will be obtained from a tensor decomposition method. Multiple methods are available and will be compared for their use in model reduction. These methods will be applied to a dynamical system described by Partial Differential Equations (PDEs). The methods are well defined for functions on Cartesian domains; several methods to apply them on non-Cartesian domains are treated as well.

I. INTRODUCTION

An important part of scientific research is dedicated to the creation of mathematical models of physical processes. A lot of effort is made to describe dynamical processes as accurately as possible. Control designers use these models to find a suitable controller. However, the physical processes that need to be controlled are often very complex and, as a consequence, result in large-scale models. Many modern controller synthesis methods create a controller of the same order as the plant model. For complex processes this results in controllers that are hard to implement or whose operation is restricted by limited computational power.

To achieve a technically feasible controller for complex systems one has to reduce either the order of the plant model or the order of the controller. Various reduction methods have already been developed that are able to handle different classes of models. Physical processes that are described by Partial Differential Equations (PDEs) are known to be computationally intensive to simulate. A method for the reduction of PDE models was developed in [1]. This method is primarily known for this application as Proper Orthogonal Decomposition (POD), and in other fields as Karhunen-Loève decomposition or Principal Component Analysis. For systems that evolve both over time and over space, this method aims to decompose temporal-spatial signals in such a way that truncated spectral decompositions define optimal signal approximations in a well-defined sense. Specifically, the method finds a spatial and a temporal basis that approximately describe the dynamics of the process.

This paper looks for an appropriate extension of the method that uses the spatial structure to achieve a better reduction. The separation of the spatial structure follows easily for functions on a Cartesian domain. Applying the method to functions on non-Cartesian domains, however, is a non-trivial extension of the POD method, which explicitly uses the Cartesian structure of the spatial domain.

II. PROBLEM FORMULATION

A. Notation and terminology

Both superscript and subscript symbols have to be interpreted as indices. Functions are denoted by $a$, vectors by $\mathbf{a}$, matrices by capital symbols $A$, the set of reals by $\mathbb{R}$ and more general sets by $\mathbb{A}$. Sets of functions are written with calligraphic symbols $\mathcal{A}$. All inner products are denoted by $\langle\cdot,\cdot\rangle$.

Time derivatives $\frac{df}{dt}$ are written $\dot{f}$; the partial derivatives $\frac{\partial f}{\partial x}$ and $\frac{\partial^2 f}{\partial x^2}$ are written $f_x$ and $f_{xx}$, respectively.

B. Problem formulation

Many physical systems admit useful representations by Partial Differential Equations (PDEs). The problem addressed in this paper is, given a PDE, to find a reduced model of the system such that symmetries in the Cartesian structure of the domain are preserved in the reduction. In problems whose spatial domain has dimension larger than one, one can try to separate the spatial basis. The reduced model should approximate the original as closely as possible. First, spectral decompositions that admit separated basis functions are introduced, and reduction by truncation of this expansion is treated. To obtain the separated basis functions from data, the data are arranged as a tensor so that tensor decompositions can be applied. Several tensor decompositions are available and are compared for their use in model reduction. Optimality of the reduction is investigated.

The focus is on linear PDEs on Cartesian domains, so that the complexity of the benchmark problems stays bounded. The following example of a PDE is used throughout the paper to clarify the different methods that are investigated.

Example 1 (Heat equation). The heat $u$ on a two-dimensional slab of material with dimensions $\mathbb{X} \times \mathbb{Y} = [0, L_x] \times [0, L_y]$ changes in time, $u(t, x, y) : \mathbb{X} \times \mathbb{Y} \times \mathbb{T} \to \mathbb{R}$, according to the PDE

$$\rho C u_t = \kappa_x u_{xx} + \kappa_y u_{yy} + \nu_x u_x + \nu_y u_y.$$

Here $\rho$ is the heat resistance and $C$ the heat capacity of the material; the parameters $\kappa$ and $\nu$ denote the conductivity and the viscosity of the material in the various directions.

The POD methods that are investigated all look for a finite decomposition of $u$ into components that are simple but still represent the original solution. These components are sorted by how well they resemble the original solution of the PDE in an energy sense. Thus it becomes easy to determine which and how many of those components are needed to approximate the solution with a prescribed accuracy.

III. SPECTRAL DECOMPOSITIONS

To be able to reduce a large model, the model and its solution will be decomposed such that it is easy to find the dominant spectral components of the model and its solution. The reduction then follows easily from truncation of the spectral decomposition. This method is well known for functions of one spatial dimension. Under certain conditions it is possible to extend this decomposition to a separation of functions on multidimensional domains into multiple functions in the separated coordinates of the domain.

Like Fourier series, where a function or signal is decomposed as a sum of trigonometric functions, it is sometimes possible to decompose a general function on any full basis according to

$$f(x) = \sum_{i}^{\infty} a^i \phi^i(x).$$

Here the $\phi^i$ are called "basis functions" and the (real) numbers $a^i$ are coefficients. More precisely:

Theorem 1. If $(\mathcal{X}, \langle\cdot,\cdot\rangle)$ is a separable Hilbert space with orthonormal basis $\{\phi^i\}_{i\in I}$, any element $f \in \mathcal{X}$ can be written as

$$f = \sum_{i\in I} a^i \phi^i,$$

where $a^i = \langle f, \phi^i\rangle$ are called Fourier or modal coefficients.

For the proof see [2].

Consider the $L_2(\mathbb{X})$ inner product space of real functions $\mathbb{X} \to \mathbb{R}$. For $t \in \mathbb{T}$ fixed, assume $f(t,\cdot), \phi^i \in L_2(\mathbb{X})$, so that applying Theorem 1 with $\mathcal{X} = L_2(\mathbb{X})$ gives

$$a^i(t) = \langle f(t,\cdot), \phi^i\rangle = \int_{x\in\mathbb{X}} f(t,x)\,\phi^i(x)\,dx. \qquad (1)$$

This gives the decomposition

$$f(t,x) = \sum_i a^i(t)\,\phi^i(x). \qquad (2)$$
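As a small numerical illustration of (1) and (2), the following minimal sketch (assuming a cosine basis, a toy signal and trapezoid quadrature, none of which come from the paper) computes the modal coefficients and reconstructs the signal from a truncated expansion:

```python
import numpy as np

# Illustrative sketch of (1)-(2): compute modal coefficients a^i(t) by
# numerical quadrature and reconstruct f from a truncated expansion.
Nx, Nt, L = 256, 40, 1.0
x = np.linspace(0.0, L, Nx)
t = np.linspace(0.0, 2.0, Nt)
dx = x[1] - x[0]
w = np.full(Nx, dx); w[0] = w[-1] = dx / 2      # trapezoid weights

def phi(i, x):
    """Orthonormal cosine basis on [0, L] (an assumed choice of basis)."""
    return np.ones_like(x) / np.sqrt(L) if i == 0 else np.sqrt(2/L) * np.cos(i*np.pi*x/L)

f = np.exp(-t)[:, None] * np.cos(np.pi * x / L)[None, :]   # toy signal f(t, x)

r = 8  # truncation order
a = np.array([[np.sum(f[k] * phi(i, x) * w) for i in range(r)] for k in range(Nt)])
f_rec = sum(a[:, i:i+1] * phi(i, x)[None, :] for i in range(r))   # eq. (2)
print(np.max(np.abs(f - f_rec)))   # small reconstruction error
```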

Functions of multiple variables can also be decomposed. Introduce the Hilbert spaces $L_2(\Omega)$, $\Omega \subset \mathbb{R}^d$, of functions $f : \Omega \to \mathbb{R}$ with the inner product $\langle f, g\rangle = \int_{\omega\in\Omega} f(\omega)g(\omega)\,d\omega$. With the domain $\Omega = \mathbb{X}, \mathbb{Y}, \mathbb{T}, \mathbb{X}\times\mathbb{Y}$ we obtain

$$\mathcal{X} = L_2(\mathbb{X}), \quad \mathcal{Y} = L_2(\mathbb{Y}), \quad \mathcal{T} = L_2(\mathbb{T}), \quad \mathcal{F} = L_2(\mathbb{X}\times\mathbb{Y}). \qquad (3)$$

Let $\{\phi^i\}$ be an orthonormal basis of $\mathcal{F}$, $\{\xi^i\}$ an orthonormal basis of $\mathcal{X}$, $\{\eta^j\}$ an orthonormal basis of $\mathcal{Y}$ and $\{\tau^k\}$ an orthonormal basis of $\mathcal{T}$. Applying Theorem 1 with these Hilbert spaces leads to different decompositions:

$$f(t,x,y) = \sum_{i=1}^{\infty} a^i(t)\,\phi^i(x,y), \qquad (4)$$

$$f(t,x,y) = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty} a^{ij}(t)\,\xi^i(x)\,\eta^j(y), \qquad (5)$$

$$f(t,x,y) = \sum_{i=1}^{\infty}\sum_{j=1}^{\infty}\sum_{k=1}^{\infty} a^{ijk}\,\tau^i(t)\,\xi^j(x)\,\eta^k(y). \qquad (6)$$

The first decomposition, (4), follows directly from the Hilbert space $\mathcal{F}$ with an inner product like (1), but now $(x,y)$ is substituted for $x$.

To obtain the second decomposition (5), first Lemma 1 is introduced.

Lemma 1. $L_2$ inner products $\langle\cdot,\cdot\rangle_{\mathbb{X}}$ and $\langle\cdot,\cdot\rangle_{\mathbb{Y}}$ commute:

$$\big\langle\langle f(x,y), g(x)\rangle_{\mathbb{X}},\ h(y)\big\rangle_{\mathbb{Y}} = \big\langle\langle f(x,y), h(y)\rangle_{\mathbb{Y}},\ g(x)\big\rangle_{\mathbb{X}}.$$

For the proof see Appendix A. By use of Theorem 1, $f(t,x,y)$ can be decomposed as

$$f = \sum_i \langle f, \xi^i(x)\rangle_{\mathbb{X}}\,\xi^i(x) = \sum_i\sum_j \big\langle\langle f, \xi^i(x)\rangle_{\mathbb{X}},\ \eta^j(y)\big\rangle_{\mathbb{Y}}\,\xi^i(x)\,\eta^j(y).$$

By use of Lemma 1 this is the same as

$$= \sum_i\sum_j \big\langle\langle f, \eta^j(y)\rangle_{\mathbb{Y}},\ \xi^i(x)\big\rangle_{\mathbb{X}}\,\xi^i(x)\,\eta^j(y) = \sum_i^{\infty}\sum_j^{\infty} a^{ij}(t)\,\xi^i(x)\,\eta^j(y);$$

thus the $a^{ij}(t)$ are uniquely defined by

$$a^{ij}(t) = \big\langle\langle f, \eta^j(y)\rangle_{\mathbb{Y}},\ \xi^i(x)\big\rangle_{\mathbb{X}} = \big\langle\langle f, \xi^i(x)\rangle_{\mathbb{X}},\ \eta^j(y)\big\rangle_{\mathbb{Y}}.$$

The third decomposition, (6), is obtained in the same way, but now $f$ is also projected on the basis $\{\tau^k\}$ of $\mathcal{T}$.

A. Signal reduction

Reduction of the signal space is now achieved by truncation of the spectral expansion. Thus the representation of $u$ can be reduced to an $r$-dimensional approximation by

$$u_r = \sum_{i=1}^{r} a^i \phi^i,$$

where $a^i$ and $\phi^i$ are obtained from the spectral expansion of the original signal $u$. In the multi-variable case it is possible to have a different truncation order for each expansion argument. So, by use of equations (4), (5), (6), $u(t,x,y)$ can be reduced in the following ways:

$$u_r = \sum_{i=1}^{r} a^i(t)\,\phi^i(x,y), \qquad \phi^i \in \mathcal{F}_r \subset \mathcal{F},$$

$$u_{r_x,r_y} = \sum_{i=1}^{r_x}\sum_{j=1}^{r_y} a^{ij}(t)\,\xi^i(x)\,\eta^j(y), \qquad \xi^i \in \mathcal{X}_{r_x} \subset \mathcal{X},\ \ \eta^j \in \mathcal{Y}_{r_y} \subset \mathcal{Y},$$

$$u_{r_t,r_x,r_y} = \sum_{i=1}^{r_t}\sum_{j=1}^{r_x}\sum_{k=1}^{r_y} a^{ijk}\,\tau^i(t)\,\xi^j(x)\,\eta^k(y), \qquad \tau^i \in \mathcal{T}_{r_t} \subset \mathcal{T},\ \ \xi^j \in \mathcal{X}_{r_x} \subset \mathcal{X},\ \ \eta^k \in \mathcal{Y}_{r_y} \subset \mathcal{Y};$$

here $\mathcal{F}, \mathcal{X}, \mathcal{Y}, \mathcal{T}$ are the same spaces as given in (3).

B. System reduction

For reduction of the system the residual $R(u)$ of the PDE has to be considered. In the case of Example 1 the residual is

$$R(u) := \rho C u_t - \kappa_x u_{xx} - \kappa_y u_{yy} - \nu_x u_x - \nu_y u_y.$$

Let $\mathcal{H}$ be a Hilbert space such that $R : \mathrm{dom}(R) \to \mathcal{H}$ is a well-defined map. Call $u$ a weak solution of the PDE if

$$\langle R(u), \phi\rangle_{\mathcal{H}} = 0 \quad \text{for all } \phi \in \mathcal{H}.$$

Now reduction of the system means that the weak solutions are relaxed to the requirement that

$$\langle R(u), \phi\rangle_{\mathcal{H}} = 0 \quad \text{for all } \phi \in \mathcal{H}_r \subset \mathcal{H}, \qquad (7)$$

where $\mathcal{H}_r$ is an $r$-dimensional subspace of $\mathcal{H}$. Depending on the choices of $\mathcal{H}$ and $\mathcal{H}_r$ this gives different reduced order models:

1) $r = r_{x\times y}$, $\mathcal{H} = \mathcal{F}$, $\mathcal{H}_r = \mathrm{span}\{\phi^1, \ldots, \phi^r\}$:
$$0 = \langle R(u), \phi\rangle_{\mathcal{H}_r} \quad \text{for all } \phi \in \mathcal{H}_r \subset \mathcal{H}.$$

2) $r = (r_x, r_y)$, $\mathcal{H} = \mathcal{X}\otimes\mathcal{Y}$, $\mathcal{H}_r = \mathrm{span}\{\xi^1, \ldots, \xi^{r_x}\} \times \mathrm{span}\{\eta^1, \ldots, \eta^{r_y}\}$:
$$0 = \langle\langle R(u), \xi\rangle_{\mathbb{X}}, \eta\rangle_{\mathbb{Y}} \quad \text{for all } \xi \in \mathcal{X}_{r_x} \subset \mathcal{X},\ \eta \in \mathcal{Y}_{r_y} \subset \mathcal{Y}.$$

3) $r = (r_t, r_x, r_y)$, $\mathcal{H} = \mathcal{T}\otimes\mathcal{X}\otimes\mathcal{Y}$, $\mathcal{H}_r = \mathrm{span}\{\tau^1, \ldots, \tau^{r_t}\} \times \mathrm{span}\{\xi^1, \ldots, \xi^{r_x}\} \times \mathrm{span}\{\eta^1, \ldots, \eta^{r_y}\}$:
$$0 = \big\langle\langle\langle R(u), \xi\rangle_{\mathbb{X}}, \eta\rangle_{\mathbb{Y}}, \tau\big\rangle_{\mathbb{T}} \quad \text{for all } \tau \in \mathcal{T}_{r_t} \subset \mathcal{T},\ \xi \in \mathcal{X}_{r_x} \subset \mathcal{X},\ \eta \in \mathcal{Y}_{r_y} \subset \mathcal{Y}.$$

C. Galerkin Projection

For controller design it is useful to convert these PDEs into a set of Ordinary Differential Equations (ODEs). A method to achieve this is Galerkin projection [3]. If $u$ has a spectral decomposition $u = \sum_k a^k\phi^k$ with $\phi^k$ orthonormal basis functions in a Hilbert space $\mathcal{H}$ and $\mathcal{H}_r = \mathrm{span}\{\phi^1, \ldots, \phi^r\}$, then the reduced order model is equivalent to an ODE in the coefficient vector $a = [a^1, \ldots, a^r]^T$ of particularly simple structure. For Example 1 this can be obtained for the expansion of (4):

$$0 = \langle R(u), \phi^k(x,y)\rangle$$

is equivalent to

$$0 = \langle \rho C u_t - \kappa_x u_{xx} - \kappa_y u_{yy} - \nu_x u_x - \nu_y u_y,\ \phi^k(x,y)\rangle.$$

Substituting the reduced solution $u_r(t,x,y) = \sum_{i=1}^{r} a^i(t)\phi^i(x,y)$ gives

$$\Big\langle \rho C\sum_{i=1}^{r}\dot a^i(t)\phi^i(x,y) - \kappa_x\sum_{i=1}^{r} a^i(t)\phi^i_{xx}(x,y) - \kappa_y\sum_{i=1}^{r} a^i(t)\phi^i_{yy}(x,y) - \nu_x\sum_{i=1}^{r} a^i(t)\phi^i_x(x,y) - \nu_y\sum_{i=1}^{r} a^i(t)\phi^i_y(x,y),\ \phi^k(x,y)\Big\rangle = 0.$$

This leads to

$$\rho C\,\dot a^k(t) - \sum_{i=1}^{r}\big\langle \kappa_x\phi^i_{xx} + \kappa_y\phi^i_{yy} + \nu_x\phi^i_x + \nu_y\phi^i_y,\ \phi^k\big\rangle\, a^i(t) = 0.$$

With $a(t) = [a^1(t), \ldots, a^r(t)]^T$, this becomes an explicit ODE in the vector $a$,

$$\rho C\,\dot a = A a,$$

where

$$A_{ij} = \big\langle \kappa_x\phi^j_{xx} + \kappa_y\phi^j_{yy} + \nu_x\phi^j_x + \nu_y\phi^j_y,\ \phi^i\big\rangle, \qquad i, j = 1, \ldots, r.$$
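As an illustration of how such a Galerkin matrix can be assembled numerically, here is a minimal sketch for an assumed one-dimensional analogue $\rho C u_t = \kappa u_{xx} + \nu u_x$ with an orthonormal sine basis; the basis, grid and quadrature are illustrative assumptions, not the paper's setup:

```python
import numpy as np

# Assemble A_ij = <k*phi_xx^j + nu*phi_x^j, phi^i> for rhoC * da/dt = A a,
# using an assumed orthonormal sine basis on [0, L].
L, r, Nq = 1.0, 5, 400
x = np.linspace(0.0, L, Nq)
dx = x[1] - x[0]
k, nu, rhoC = 0.5, 0.5, 5.0

def phi(j):    return np.sqrt(2/L) * np.sin(j*np.pi*x/L)
def phi_x(j):  return np.sqrt(2/L) * (j*np.pi/L) * np.cos(j*np.pi*x/L)
def phi_xx(j): return -np.sqrt(2/L) * (j*np.pi/L)**2 * np.sin(j*np.pi*x/L)

A = np.zeros((r, r))
for i in range(1, r + 1):
    for j in range(1, r + 1):
        integrand = (k*phi_xx(j) + nu*phi_x(j)) * phi(i)
        A[i-1, j-1] = np.sum(integrand) * dx   # simple quadrature of the inner product

# the explicit ODE rhoC * da/dt = A a; e.g. one Euler step from a0:
a0 = np.ones(r)
a1 = a0 + 0.01 / rhoC * (A @ a0)
print(np.round(A, 3))
print(a1)
```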

The tensor decomposition methods use an expansion as given in (5), so in the same way a set of ODEs can be obtained from the PDE of Example 1, but now with this expansion:

$$\big\langle\langle \rho C u_t, \xi^k\rangle, \eta^l\big\rangle = \big\langle\langle \kappa_x u_{xx} + \kappa_y u_{yy} + \nu_x u_x + \nu_y u_y,\ \xi^k\rangle, \eta^l\big\rangle.$$

Substituting the reduced solution $u(t,x,y) = \sum_{i=1}^{r_x}\sum_{j=1}^{r_y} a^{ij}(t)\,\xi^i(x)\,\eta^j(y)$ leads to

$$\rho C\sum_{j=1}^{r_y}\dot a^{kj}\eta^j = \sum_{i=1}^{r_x}\sum_{j=1}^{r_y} a^{ij}\big\langle \kappa_x\xi^i_{xx} + \nu_x\xi^i_x,\ \xi^k\big\rangle\,\eta^j + \sum_{j=1}^{r_y} a^{kj}\big(\kappa_y\eta^j_{yy} + \nu_y\eta^j_y\big),$$

$$\Big\langle \rho C\sum_{j=1}^{r_y}\dot a^{kj}\eta^j,\ \eta^l\Big\rangle = \Big\langle \sum_{i=1}^{r_x}\sum_{j=1}^{r_y} a^{ij}\big\langle \kappa_x\xi^i_{xx} + \nu_x\xi^i_x,\ \xi^k\big\rangle\,\eta^j + \sum_{j=1}^{r_y} a^{kj}\big(\kappa_y\eta^j_{yy} + \nu_y\eta^j_y\big),\ \eta^l\Big\rangle,$$

$$\rho C\,\dot a^{kl} = \sum_{i=1}^{r_x} a^{il}\big\langle \kappa_x\xi^i_{xx} + \nu_x\xi^i_x,\ \xi^k\big\rangle + \sum_{j=1}^{r_y} a^{kj}\big\langle \kappa_y\eta^j_{yy} + \nu_y\eta^j_y,\ \eta^l\big\rangle.$$

With $(A)_{kl}(t) = a^{kl}$ this can be expressed as

$$\rho C\,\dot A = (\kappa_x\Xi_{xx} + \nu_x\Xi_x)A + A(\kappa_y H_{yy} + \nu_y H_y), \qquad (8)$$


where

$$(\Xi_{xx})_{ij} = \langle \xi^i_{xx}, \xi^j\rangle, \quad (\Xi_x)_{ij} = \langle \xi^i_x, \xi^j\rangle, \qquad i, j = 1, \ldots, r_x,$$

$$(H_{yy})_{ij} = \langle \eta^i_{yy}, \eta^j\rangle, \quad (H_y)_{ij} = \langle \eta^i_y, \eta^j\rangle, \qquad i, j = 1, \ldots, r_y.$$

This is again an explicit ODE, now in the matrix $A$.
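A minimal sketch of integrating the matrix ODE (8) with an explicit Euler scheme; the projection matrices Xi_xx, Xi_x, H_yy, H_y and the initial coefficient matrix below are random placeholders standing in for the quantities defined above:

```python
import numpy as np

def simulate_matrix_ode(A0, Xi_xx, Xi_x, H_yy, H_y,
                        kx, ky, nux, nuy, rhoC, dt, steps):
    """Explicit Euler for rhoC * dA/dt = (kx*Xi_xx + nux*Xi_x) A + A (ky*H_yy + nuy*H_y)."""
    Lx = kx * Xi_xx + nux * Xi_x    # acts on the x-index of A
    Ly = ky * H_yy + nuy * H_y      # acts on the y-index of A
    A = A0.copy()
    traj = [A.copy()]
    for _ in range(steps):
        A = A + dt / rhoC * (Lx @ A + A @ Ly)
        traj.append(A.copy())
    return traj

# placeholder projection matrices (illustrative only, not from the paper)
rng = np.random.default_rng(1)
rx, ry = 4, 5
traj = simulate_matrix_ode(rng.standard_normal((rx, ry)),
                           -np.eye(rx), 0.1 * rng.standard_normal((rx, rx)),
                           -np.eye(ry), 0.1 * rng.standard_normal((ry, ry)),
                           0.5, 0.5, 0.5, 0.5, 5.0, 0.01, 100)
print(len(traj), traj[-1].shape)
```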

For the decomposition of (6) the set of equations results in a purely algebraic equation in the tensor $A \in \mathbb{R}^{r_t\times r_x\times r_y}$. Solving this algebraic equation gives a static result that cannot be used as a dynamic system, so the details are omitted.

IV. TENSOR DECOMPOSITIONS ON CARTESIAN DOMAINS

In the previous section the spectral decompositions (4), (5), (6) were introduced. It was also shown how PDEs can be transformed to ODEs by performing Galerkin projections. However, the basis functions used in the expansion and the projections have not been specified. In Fourier series expansions sine and cosine functions are used as basis functions, but in general any set of orthonormal functions could be used. To achieve an optimal approximation by truncated spectral decompositions, the POD method proposes using basis functions derived from data. Measured or simulated data of a position- and time-dependent function is stored as snapshots in a matrix, such that the time samples, which are vectors containing spatial information, are represented by the columns. Applying the SVD to this matrix gives, besides the singular values, two unitary matrices: one whose columns represent spatial basis vectors and another whose columns represent the temporal basis. The basis vectors are ordered and can be used as basis functions for spectral decompositions and projections.
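A minimal sketch of this classical snapshot POD on toy data (the snapshot matrix here is an illustrative stand-in, not the paper's data):

```python
import numpy as np

# Snapshot POD: columns of U_snap are spatial samples at successive times.
Nx, Nt = 200, 60
x = np.linspace(0.0, 1.0, Nx)
t = np.linspace(0.0, 3.0, Nt)
U_snap = (np.sin(np.pi * x)[:, None] * np.exp(-t)[None, :]
          + 0.3 * np.sin(2*np.pi * x)[:, None] * np.exp(-2*t)[None, :]
          + 0.01 * np.random.default_rng(0).standard_normal((Nx, Nt)))

# the SVD separates space (columns of Phi) from time (rows of Vt)
Phi, s, Vt = np.linalg.svd(U_snap, full_matrices=False)

r = 3                                       # truncation order
U_r = (Phi[:, :r] * s[:r]) @ Vt[:r, :]      # rank-r POD approximation
err = np.linalg.norm(U_snap - U_r)          # equals sqrt(sum of s[r:]**2)
print(err, np.sqrt(np.sum(s[r:]**2)))
```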

In problems that have a spatial domain or range of dimension larger than one, one could also try to separate the spatial basis. In general these kinds of functions can be represented by a multidimensional linear mapping from $M$ to $N$ function spaces, $\mathcal{X}_1\times\cdots\times\mathcal{X}_M \to \mathcal{Y}_1\times\cdots\times\mathcal{Y}_N$.

The mathematical object to describe mappings between multiple vector spaces is the tensor: a multidimensional linear mapping toward $\mathbb{R}$ that maps the spaces of the domain and the dual spaces of the range toward $\mathbb{R}$, $\mathcal{X}_1\times\cdots\times\mathcal{X}_M\times\mathcal{Y}^*_1\times\cdots\times\mathcal{Y}^*_N \to \mathbb{R}$. The challenge is now to find a suitable decomposition of this tensor such that reduction of the dimension of the basis is optimal. Some properties and terminology of a tensor that will be used in the treatment of the different tensor decompositions are given next. Several of these terms are well known in the literature [4]; others, like tensor rank, are specifically defined for the use in tensor decompositions.

Definition 1 (Tensor). A tensor is a multilinear functional

$$T : \mathcal{V}_1\times\cdots\times\mathcal{V}_N \to \mathbb{R}.$$

The number of vector spaces the tensor operates on is called the order of the tensor. The space of order-$N$ tensors is a vector space $\mathcal{T}_N$; addition and scalar multiplication are inherited from the vector spaces the tensor operates on.

Example 2 (The matrix as an order-2 tensor). A matrix $A \in \mathbb{R}^{L_x\times L_y}$ defines a tensor $A : \mathbb{R}^{L_x}\times\mathbb{R}^{L_y} \to \mathbb{R}$,

$$A(\mathbf{x}, \mathbf{y}) := \langle A\mathbf{y}, \mathbf{x}\rangle = \langle A^T\mathbf{x}, \mathbf{y}\rangle.$$

The tensor product can be viewed as a generalisation of the outer product for vectors, $A = \mathbf{x}\mathbf{x}^T$.

Definition 2 (Tensor product). For elements $u_n \in \mathcal{V}_n$, $n = 1, \ldots, N$, the tensor product $u_1\otimes\cdots\otimes u_N$ is the tensor

$$(u_1\otimes\cdots\otimes u_N)(v_1, \ldots, v_N) = \prod_{i=1}^{N}\langle u_i, v_i\rangle.$$

The tensor $U = u_1\otimes\cdots\otimes u_N$ is called a rank-one tensor.

The tensor elements of $T \in \mathcal{T}_N$ are specified by $t_{i_1,\ldots,i_N}$, where $i_n$ ranges through $1, \ldots, \dim\mathcal{V}_n$. The elements $t_{i_1,\ldots,i_N}$ represent $T$ with respect to a special basis

$$\{e^{i_1}_1\}, \ldots, \{e^{i_N}_N\},$$

such that $t_{i_1,\ldots,i_N} = T(e^{i_1}_1, \ldots, e^{i_N}_N)$. The tensor can be decomposed on the natural unit basis $e^{i_1}_1\otimes\cdots\otimes e^{i_N}_N$ according to

$$T = \sum_{i_1}^{L_1}\cdots\sum_{i_N}^{L_N} t_{i_1,\ldots,i_N}\big(e^{i_1}_1\otimes\cdots\otimes e^{i_N}_N\big).$$

Definition 3 ($n$-mode kernel of a tensor $T$).

$$\ker_n T := \{v_n \in \mathcal{V}_n \mid T(v_1, \ldots, v_N) = 0,\ \forall v_k \in \mathcal{V}_k,\ k \neq n\}.$$

In matrix theory the rank of a matrix equals the column rank and the row rank of the matrix, which happen to be the same number. For tensors the notion of row or column rank can be seen as a modal rank.

Definition 4 (Modal rank). For a tensor $T : \mathcal{V}_1\times\cdots\times\mathcal{V}_N \to \mathbb{R}$, the $n$-mode rank of $T$ is defined as $\dim(\mathcal{V}_n) - \dim(\ker_n(T))$. Notice the resemblance to the rank-nullity theorem for matrices.

The modal rank of a tensor is not the same in each mode, so the notion of an overall tensor rank is difficult to express. Therefore the tensor of rank one, which is well defined, is used.

Definition 5 (Tensor rank). The minimum number $R$ of rank-one tensors that reconstruct the original tensor, $T = \sum_{i=1}^{R} U^i$ with $U^i$ of rank one, is called the rank of the tensor $T$.

Although these concepts are quite intuitive extensions of the rank known from linear algebra, there are no general results, only some special cases [5], on the maximum rank a tensor of arbitrary order and dimensions can attain.

In the tensor definition the vector spaces on which $T$ operates were already assumed to have an inner product. Now the inner product on tensors can be defined.


Fig. 1: Unfolding of a 3-tensor in one direction

Definition 6 (Tensor inner product). Given two tensors $S, T \in \mathcal{T}_N$, define the inner product

$$\langle T, S\rangle := \sum_{i_1}^{L_1}\cdots\sum_{i_N}^{L_N}\sum_{j_1}^{L_1}\cdots\sum_{j_N}^{L_N} t_{i_1,\ldots,i_N}\, s_{j_1,\ldots,j_N}\,\big\langle e^{i_1}_1, e^{j_1}_1\big\rangle_1\cdots\big\langle e^{i_N}_N, e^{j_N}_N\big\rangle_N,$$

where $\langle\cdot,\cdot\rangle_i$ is the inner product defined on $\mathcal{V}_i$. $\mathcal{T}_N$ equipped with $\langle\cdot,\cdot\rangle$ becomes an inner product space.

There are multiple norms possible for a tensor:

Definition 7 (The Frobenius norm). The Frobenius norm can easily be defined by using the inner product,

$$\|T\|^2_{\mathrm{Fro}} = \langle T, T\rangle.$$

Definition 8 (The operator norm).

$$\|T\| = \sup_{\|v_i\|_i = 1}\ |T(v_1, \ldots, v_N)|.$$

In Appendix B the decomposition of functionals in a tensor setting is treated.

A. High Order SVD (HOSVD)

This decomposition was proposed by Lieven De Lathauwer [6]. It uses the bases obtained from the normal matrix SVD on unfoldings of the tensor in all its directions. The $i$th unfolding of an $N$-way array along its $i$th mode is the reordering of its elements into a matrix with dimensions $W_i \in \mathbb{R}^{L_i\times L_1\cdots L_{i-1}L_{i+1}\cdots L_N}$. The $k$th row in the $i$th unfolding is the sequence of all elements in $T(\cdot, \ldots, \cdot, e^k_i, \cdot, \ldots, \cdot)$, in an arbitrary order. This is depicted for a tensor of order three in Fig. 1.

In [7] it is stated that the rank-one approximation made by the first singular vectors obtained from this decomposition is not the best possible. Another disadvantage is that its definition acts on the elements of the tensor rather than on the operation the tensor represents. A great advantage, though, is that its algorithm uses the very well developed SVD for matrices.
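A minimal sketch of the HOSVD along these lines, built on numpy's matrix SVD; the helper names and the toy tensor are assumptions for illustration:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding: rows indexed by `mode`, columns by the remaining modes."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def hosvd(T):
    """Factor matrices from the SVDs of the unfoldings, plus the core tensor."""
    U = [np.linalg.svd(unfold(T, n), full_matrices=False)[0] for n in range(T.ndim)]
    S = T.copy()
    for n, Un in enumerate(U):        # core: project every mode onto its basis
        S = np.moveaxis(np.tensordot(Un.T, S, axes=(1, n)), 0, n)
    return U, S

T = np.random.rand(4, 5, 6)           # toy order-3 tensor
U, S = hosvd(T)

R = S.copy()                          # reconstruct T = S x_1 U1 x_2 U2 x_3 U3
for n, Un in enumerate(U):
    R = np.moveaxis(np.tensordot(Un, R, axes=(1, n)), 0, n)
print(np.allclose(T, R))              # True: the HOSVD is exact before truncation
```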

Example 3 (HOSVD on a simple rank 2 × 2 × 2 tensor). In Fig. 2(a) the tensor $T(\mathbf{x}, \mathbf{y}, \mathbf{z})$ is depicted,

$$T(\mathbf{x}, \mathbf{y}, \mathbf{z}) = t_{ijk}\,\mathbf{x}_i\otimes\mathbf{y}_j\otimes\mathbf{z}_k,$$

Fig. 2: Rank 2 × 2 × 2 tensor (a) and its core (b).

with the coefficients $t_{ijk}$,

$$t_{111} = 1,\ t_{121} = 0,\ t_{112} = 1,\ t_{122} = 0,$$
$$t_{211} = 1,\ t_{221} = 0,\ t_{212} = 0,\ t_{222} = 0.$$

This tensor has the unfoldings

$$W_x = \begin{pmatrix} 1 & 0 & 0 & 0\\ 1 & 0 & 1 & 0\end{pmatrix}, \quad W_y = \begin{pmatrix} 0 & 0 & 0 & 0\\ 1 & 1 & 1 & 0\end{pmatrix}, \quad W_z = \begin{pmatrix} 1 & 0 & 0 & 0\\ 1 & 1 & 0 & 0\end{pmatrix}.$$

The SVD applied separately to these unfoldings gives basis elements defined by the left singular vectors. $W_i = U_i S_i V_i$ gives

$$U_x = \begin{pmatrix} -0.5257 & -0.8507\\ -0.8507 & 0.5257\end{pmatrix}, \quad U_y = \begin{pmatrix} 0 & -1\\ -1 & 0\end{pmatrix}, \quad U_z = \begin{pmatrix} -0.5257 & -0.8507\\ -0.8507 & 0.5257\end{pmatrix}.$$

The core tensor $S$, depicted in Fig. 2(b), is obtained by

$$S_{i,j,k} = T\big((U_x)e^i_x,\ (U_y)e^j_y,\ (U_z)e^k_z\big).$$

Example 4 (HOSVD on the heat equation data). Data was obtained from a simulation of the heat equation of Example 1 on a grid of 61 × 81 spatial samples and over a time horizon of $[0, 3]$ with sample time 0.05 s. An asymmetric initial condition, see Fig. 5, is applied to the dynamics of Example 1 with the parameters

$$\kappa_x = 0.5, \quad \nu_x = 0.5, \quad \kappa_y = 0.5, \quad \nu_y = 0.5, \quad \rho C = 5.$$

Application of the HOSVD algorithm resulted in a basis in $x$, $y$ and $t$ separately. In Fig. 3 the first five basis functions in both the $x$ and $y$ direction are shown. One can observe the displaced peaks of the different basis functions, which cover the convective behaviour of the system.

B. Tensor SVD (TSVD)

This method was proposed by Van Belzen, Weiland and De Graaf [8]. It looks for a set of orthonormal vectors in each domain $\mathcal{V}_n$, $n = 1, \ldots, N$, that maximises the tensor subject to the vectors having unit norm. The algorithm keeps searching for extra vectors that are orthogonal to the set already found, where orthogonality is determined by the inner products defined on the vector spaces the tensor operates on.

Definition 9 (Singular values).

$$\sigma_1(T) = \|T\| = T(v^1_1, \ldots, v^1_N),$$

and for $k > 1$,

$$\sigma_k(T) = \sup_{v_n\in S^k_n,\ 1\le n\le N} |T(v_1, \ldots, v_N)|,$$

where

$$S^k_n := \big\{v\in\mathcal{V}_n \mid \|v\|_n = 1,\ \langle v, v^j_n\rangle_n = 0 \text{ for } j = 1, \ldots, k-1\big\}.$$

Fig. 3: HOSVD basis vectors for the heat equation data: (a) basis functions in the $x$-direction, (b) basis functions in the $y$-direction.

The vectors $v^k_n$, $k = 1, \ldots, \dim\mathcal{V}_n$, are now considered to be basis vectors of $\mathcal{V}_n$. The $N$th-order tensor can be decomposed as

$$T = \sum_{k_1}^{L_1}\cdots\sum_{k_N}^{L_N} S_{k_1\cdots k_N}\, v^{k_1}_1\otimes\cdots\otimes v^{k_N}_N,$$

where $S$, the core tensor, is obtained by projection of $T$ onto the basis. That is,

$$S_{k_1,\ldots,k_N} = T(v^{k_1}_1, \ldots, v^{k_N}_N).$$

Notice that the singular values $\sigma_k$ are on the diagonal $S_{k\cdots k}$. This optimisation is solved by solving, for each $k$ consecutively,

$$\frac{\partial}{\partial v_i} L_k(v_1, \ldots, v_N, \lambda_1, \ldots, \lambda_N, \mu_1, \ldots, \mu_N) = 0,$$

where

$$L_k = T(v_1, \ldots, v_N) + \sum_{n=1}^{N}\Big\langle \tfrac{1}{2}\big(1 - v_n^T v_n\big),\ \lambda_n\Big\rangle + \sum_{n=1}^{N}\big\langle g_n(v_n),\ \mu_n\big\rangle,$$

with $g_n(v_n) := \big[\langle v_n, v^1_n\rangle, \ldots, \langle v_n, v^{k-1}_n\rangle\big]^T \in \mathbb{R}^{k-1}$.

To illustrate this procedure it is applied to the tensor of Example 3. The first basis vectors have to satisfy

$$T(\cdot, \mathbf{y}^{(1)}, \mathbf{z}^{(1)}) = \lambda^{(1)}_x\langle\cdot, \mathbf{x}^{(1)}\rangle,$$
$$T(\mathbf{x}^{(1)}, \cdot, \mathbf{z}^{(1)}) = \lambda^{(1)}_y\langle\cdot, \mathbf{y}^{(1)}\rangle,$$
$$T(\mathbf{x}^{(1)}, \mathbf{y}^{(1)}, \cdot) = \lambda^{(1)}_z\langle\cdot, \mathbf{z}^{(1)}\rangle.$$

The basis vectors are all on the unit ball, $\|\cdot\| = 1$; the superscript $(1)$ is dropped for the elements of the vectors:

$$x_1^2 + x_2^2 = 1, \qquad y_1^2 + y_2^2 = 1, \qquad z_1^2 + z_2^2 = 1.$$

The first basis vectors and their singular value are obtained as

$$\lambda = \tfrac{1}{2} + \tfrac{1}{2}\sqrt{5},$$

$$\mathbf{x}^{(1)} = \mathbf{z}^{(1)} = \frac{1}{\sqrt{\frac{5}{2} + \frac{1}{2}\sqrt{5}}}\begin{pmatrix}\frac{1}{2} + \frac{1}{2}\sqrt{5}\\ 1\end{pmatrix} \approx \begin{pmatrix}0.8507\\ 0.5257\end{pmatrix}, \qquad \mathbf{y}^{(1)} = \begin{pmatrix}1\\ 0\end{pmatrix}.$$

The second basis vectors $\mathbf{x}^{(2)}, \mathbf{y}^{(2)}, \mathbf{z}^{(2)}$ have to satisfy the extra conditions

$$\langle\mathbf{x}^{(2)}, \mathbf{x}^{(1)}\rangle = 0, \qquad \langle\mathbf{y}^{(2)}, \mathbf{y}^{(1)}\rangle = 0, \qquad \langle\mathbf{z}^{(2)}, \mathbf{z}^{(1)}\rangle = 0.$$

This gives

$$\lambda^{(2)} = 0, \qquad \mu_x = 0, \qquad \mu_y = \frac{-\sqrt{5}}{2\left(\frac{1}{2} + \frac{1}{2}\sqrt{5}\right)}, \qquad \mu_z = 0,$$

$$\mathbf{x}^{(2)} = \pm\frac{1}{\sqrt{\frac{5}{2} + \frac{1}{2}\sqrt{5}}}\begin{pmatrix}1\\ -\big(\frac{1}{2} + \frac{1}{2}\sqrt{5}\big)\end{pmatrix}, \qquad \mathbf{y}^{(2)} = \begin{pmatrix}0\\ \pm 1\end{pmatrix}, \qquad \mathbf{z}^{(2)} = \pm\frac{1}{\sqrt{\frac{5}{2} + \frac{1}{2}\sqrt{5}}}\begin{pmatrix}1\\ -\big(\frac{1}{2} + \frac{1}{2}\sqrt{5}\big)\end{pmatrix}.$$

The rank-one tensor formed from the first basis vectors found by the TSVD has minimal error in approximating the original tensor, measured in the Frobenius norm.

Theorem 2. Given a tensor $T$, let $\sigma_1$ be its first singular value and $u_1, \ldots, u_N$ the corresponding singular vectors. Let $T_1 = \sigma_1 u_1\otimes\cdots\otimes u_N$. Then $\|T - T_1\|_{\mathrm{Fro}}$ is minimal among all rank-one approximations of $T$, i.e. $\|T - T_1\|_{\mathrm{Fro}} \le \|T - T_0\|_{\mathrm{Fro}}$ for all $T_0 \in \mathcal{T}_N$ of rank one.

The proof is given in Appendix A.

Example 5. The TSVD applied to the same data as in Example 4 gives a set of basis functions in $x$, in $y$ and in $t$. In Fig. 4 the first five basis functions in both the $x$ and $y$ direction are shown.

C. Successive Rank-1 decomposition

HOSVD and TSVD impose orthogonality conditions on the separate vector spaces the tensor operates on. From a physical viewpoint this is a strange condition, for example for the heat equation: apparently the orientation of the coordinates gives different optimal approximations. Intuitively, one would like to impose the orthogonality condition on the total tensor space. Orthogonality in all the separate vector spaces the tensor operates on implies orthogonality in the total tensor space, but orthogonality of just one basis vector to the space spanned by its preceding basis vectors is already enough to guarantee that the rank-one tensor is orthogonal to the rank-one tensors of the preceding basis vectors. This follows from

$$\Big\langle u^{(n)}_1\otimes\cdots\otimes u^{(n)}_N,\ \sum_{i_1=1}^{n-1}\cdots\sum_{i_N=1}^{n-1} u^{(i)}_1\otimes\cdots\otimes u^{(i)}_N\Big\rangle = \Big\langle u^{(n)}_1,\ \sum_{i_1=1}^{n-1} u^{(i)}_1\Big\rangle\cdots\Big\langle u^{(n)}_N,\ \sum_{i_N=1}^{n-1} u^{(i)}_N\Big\rangle = 0.$$

The successive rank-one tensor decomposition is defined as

$$T = \sum_{i=1}^{R}\sigma_i\,(u_1, \ldots, u_N)^{(i)},$$

with

$$\big\langle (u_1, \ldots, u_N)^{(i)},\ (u_1, \ldots, u_N)^{(j)}\big\rangle = \begin{cases} 0 & i \neq j,\\ 1 & i = j.\end{cases}$$

The rank-one tensors $U^{(i)} = \sigma_i(u_1, \ldots, u_N)^{(i)}$ are obtained from the optimisation problem

$$U^{(i)} = \arg\max_{\substack{\|U^{(i)}\| = 1\\ U^{(i)}\perp U^{(j)}\ \forall j<i}} \big\langle T, U^{(i)}\big\rangle.$$

The values $\sigma_i$ are determined by

$$\sigma_i = \big\langle T, U^{(i)}\big\rangle.$$

This decomposition has some nice properties regarding the optimality of the error made by rank-$R$ approximations. A big disadvantage, though, is that it is hard to obtain an ODE from a PDE by using these basis functions in the Galerkin projections, because orthogonality of the separate basis functions is no longer a condition.
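For illustration, the sketch below computes successive rank-one terms by a higher-order power iteration with greedy deflation; note this is a related greedy scheme under the stated assumptions, not the exact orthogonality-constrained optimisation above:

```python
import numpy as np
from functools import reduce

def best_rank1(T, iters=100, seed=0):
    """Higher-order power iteration for a (locally) best rank-one approximation."""
    rng = np.random.default_rng(seed)
    vs = [rng.standard_normal(n) for n in T.shape]
    vs = [v / np.linalg.norm(v) for v in vs]
    sigma = 0.0
    for _ in range(iters):
        for n in range(T.ndim):
            w = T
            # contract away every mode except n, highest mode first so the
            # remaining axis numbers stay valid
            for m in sorted((m for m in range(T.ndim) if m != n), reverse=True):
                w = np.tensordot(w, vs[m], axes=(m, 0))
            sigma = np.linalg.norm(w)
            vs[n] = w / sigma
    return sigma, vs

def successive_rank1(T, R):
    """Greedy deflation: repeatedly subtract a best rank-one term."""
    terms, resid = [], T.copy()
    for i in range(R):
        sigma, vs = best_rank1(resid, seed=i)
        terms.append((sigma, vs))
        resid = resid - sigma * reduce(np.multiply.outer, vs)
    return terms, resid

T = np.random.rand(4, 5, 6)
terms, resid = successive_rank1(T, R=3)
print([round(s, 3) for s, _ in terms], np.linalg.norm(resid))
```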

Fig. 4: TSVD basis vectors for the heat equation data: (a) basis functions in the $x$-direction, (b) basis functions in the $y$-direction.

V. SIMULATION RESULTS

The HOSVD decomposition made in Example 4 and the TSVD decomposition of the same data are compared for their use in reduction. First the reduction of the solution data by truncation of the expansion (5) is reviewed. The method that uses the matrix SVD for the separation of signals into a spatial and a temporal signal is also applied to this data, so that it can be compared with the tensor decompositions. The bases obtained from these decompositions are then used to simulate the reduced dynamical systems, again with the same parameters as used in Example 4.

Fig. 5: Initial condition.

A. Reduction of the solution

The tensor decompositions give basis functions that can be used in the expansion $u_{(r_t,r_x,r_y)}(t,x,y) = \sum_i^{r_t}\sum_j^{r_x}\sum_k^{r_y} a^{ijk}\,\tau^i(t)\,\xi^j(x)\,\eta^k(y)$, where $r_t, r_x, r_y$ are the orders of truncation in the different directions. Since only $\xi^j(x)$ and $\eta^k(y)$ will be used in the simulations of the reduced systems, the approximations of the solution by an expansion truncated only in the $x$ and $y$ directions are the interesting ones to compare. Thus the basis $\tau^i(t)$ is taken in full, $u_{(r_x,r_y)}(t,x,y) = \sum_i^{L_t}\sum_j^{r_x}\sum_k^{r_y} a^{ijk}\,\tau^i(t)\,\xi^j(x)\,\eta^k(y)$. Notice this is impossible for the normal POD method that uses the matrix SVD, since the number of spatial basis functions equals the number of temporal basis functions in the reduced expansion of (4).

First the reduction of the solution by the classical POD method is carried out. The singular values obtained from the matrix SVD are depicted in Fig. 6. One can see that a reduction up to order five is reasonable; the operator norm of the error of the order-$r$ approximation is given by $\sigma_{r+1}$. This error norm cannot be calculated for the tensor approximations. Thus, to compare the methods with the matrix SVD, the Frobenius norm for approximations up to 5th order is given in Table I.

The Frobenius norms of the approximation errors up to fifth order in each spatial dimension are calculated. Table II and Table III show these error norms for the HOSVD and the TSVD, respectively. One can see from these tables that the TSVD only gives a better approximation of the tensor for just one basis function in each direction. Adding extra vectors to the basis increases the accuracy of the approximation, as one would expect, but the HOSVD gives better higher-order approximations. Notice that $r_x = r_y = 1$ in Table III is optimal.

B. Reduced system simulations

The bases obtained from the HOSVD and the TSVD are used to simulate the set of ODEs (8) that were obtained from the heat equation of Example 1. Again the Frobenius norm of the error is calculated for up to 5 basis functions in each spatial dimension. These are given in Table V for the HOSVD and in Table IV for the TSVD.

Order        1       2       3       4       5
‖E_r‖_Fro  24.041  10.718   4.476   1.760   0.665

TABLE I: Error norm

ry\rx      1       2       3       4       5
1       28.486  24.100  22.963  22.251  22.072
2       27.991  18.124  15.231  13.378  12.914
3       27.245  16.701  13.216  11.002  10.066
4       27.011  15.933  11.932   9.196   7.232
5       26.965  15.814  11.733   8.848   6.269

TABLE II: Error norm, HOSVD r_x × r_y approximation

Another measure to compare these methods is the normalised maximal difference between the original and the approximation, averaged over time:

$$E(r_x, r_y) := \frac{1}{N_t}\sum_{i=1}^{N_t}\frac{\max_{x,y}\big|T(x, y, t_i) - A_{r_x,r_y}(x, y, t_i)\big|}{\|T(\cdot, \cdot, t_i)\|_{\mathrm{Fro}}}.$$

This measure for all of the rank-$r_x\times r_y$ approximations is plotted in Fig. 7 for the HOSVD and in Fig. 8 for the TSVD, where $r_x$ ranges through 1 to 61 and $r_y$ through 1 to 81.

VI. NON-CARTESIAN DOMAINS

Data for the generation of the basis functions is usually obtained from Finite Element Method (FEM) simulations. Those methods are optimised to find a grid that describes the physics as accurately as possible with the fewest points needed. Often this results in a non-equidistant and non-square grid.

ry\rx      1       2       3       4       5
1       28.274  25.535  25.356  24.754  24.701
2       28.036  20.016  19.643  17.693  17.564
3       27.967  19.722  18.705  16.194  15.981
4       27.847  19.280  18.224  15.063  14.761
5       27.844  19.271  18.208  14.950  14.571

TABLE III: Error norm, TSVD r_x × r_y approximation

ry\rx      1       2       3       4       5
1       28.761  25.830  25.662  25.977  26.466
2       28.524  20.882  20.625  20.360  20.964
3       28.490  20.749  19.775  19.072  19.618
4       28.372  20.436  19.427  18.203  18.695
5       28.366  20.424  19.413  18.113  18.547

TABLE IV: Error norm, TSVD simulation on r_x × r_y basis


Fig. 6: Singular values.

Fig. 7: Time-averaged maximal error of approximations obtained from the HOSVD.

Fig. 8: Time-averaged maximal error of approximations obtained from the TSVD.

ry\rx      1       2       3       4       5
1       28.896  24.489  24.804  24.079  24.161
2       28.456  19.521  18.342  16.814  17.054
3       27.650  18.197  16.892  15.188  15.217
4       27.432  17.443  15.524  13.523  13.047
5       27.366  17.359  15.449  13.342  12.613

TABLE V: Error norm, HOSVD simulation on r_x × r_y basis

The structure of such a grid is called the mesh structure. When applying the normal SVD to such data, this mesh structure can be translated into a Gramian that weights the inner product, such that the Euclidean inner product on the discrete function values represents the inner product of the interpolated functions.

As an example, a flow problem on a non-Cartesian domain, see Fig. 9, was simulated in Comsol, a FEM solver, on a mesh structure defined by triangulation. A snapshot is shown in Fig. 10.

Fig. 9: Flow problem: a channel of 2.2 m × 0.41 m with no-slip boundaries ($u = v = 0$) and outlet pressure $P = 0$.

Fig. 10: Snapshot of flow problem from Example 6.

Example 6 (Flow problem). The PDE of the flow problem that was simulated is given as

$$\rho\left(\frac{\partial \mathbf{u}}{\partial t} + (\mathbf{u}\cdot\nabla)\mathbf{u}\right) = \eta\Delta\mathbf{u} - \nabla P, \qquad \nabla\cdot\mathbf{u} = 0.$$

A. Relation of the SVD for continuous functions with the SVD of the discretised functions

All the projections and data manipulations were stated for continuous functions, but the computation is done on discrete data. Therefore the numerical answers have to be related to the original continuous functions; this is interpolation.

One way to view sampling (discretisation) is to represent signals $u \in \mathcal{X}$ on a finite-dimensional basis $\phi^1, \ldots, \phi^N$ of elements in $\mathcal{X}$. That is, any sampled $u \in \mathcal{U}$ can be written as

$$u = \sum_k u_k\phi^k.$$

Call the coefficients $u_k$ sampling coefficients. Let $\mathcal{U} = \mathrm{span}\{\phi^1, \ldots, \phi^N\}$; then $\dim\mathcal{U} = N$, $\mathcal{U} \subset \mathcal{X}$.

Definition 10 (Gramian). The Gramian $G \in \mathbb{R}^{N\times N}$ is given by

$$G_{kl} = \langle \phi^k, \phi^l\rangle.$$

Note that $G$ is symmetric and $G \succ 0$.

Lemma 2. The inner product of two arbitrary solutions can be inferred from the sampling coefficients. If $u', u''$ are elements in $\mathcal{U}$ with sampling coefficients $u'_k, u''_l$ respectively, then

$$\langle u', u''\rangle = \mathbf{u}'^T G\,\mathbf{u}'',$$

where $\mathbf{u}' = \mathrm{col}(u'_1, \ldots, u'_N)$ and $\mathbf{u}'' = \mathrm{col}(u''_1, \ldots, u''_N)$ denote the vectors of sampling coefficients of $u'$ and $u''$, respectively.

Conversely, a partial POD basis of $\mathcal{X}$ can be obtained from sampling coefficients as follows.

Theorem 3. Let $U_{\mathrm{snap}} = U\Sigma V^T$ be an SVD of $U_{\mathrm{snap}}$. Let $G^{-1/2}U = [\mathbf{u}^1, \ldots, \mathbf{u}^N]$ with $\mathbf{u}^n \in \mathbb{R}^N$. Define

$$\xi^n := \sum_{k=1}^{N} u^n_k\,\phi^k, \qquad n = 1, \ldots, N;$$

then $\xi^n \in \mathcal{X}$ and $\{\xi^n\}$ is a POD basis for $\mathcal{U}$.

Proof:

$$\langle \xi^m, \xi^n\rangle = \mathbf{u}^{m\,T} G\,\mathbf{u}^n = (Ue_m)^T G^{-1/2} G\, G^{-1/2}(Ue_n) = e_m^T U^T U e_n = \delta_{mn}.$$

B. Separation of functions on non-Cartesian domains

There are several ways of applying the treated methods to functions on non-Cartesian domains $\Omega \subset \mathbb{R}^d$.

The first way would be to extend the values on the discretised non-Cartesian domain with arbitrary values, such that the data is represented as an $N$-way array. The interpolation functions will not consider the artificial values, and by using the Gramian to weight the inner product, the tensor decomposition will give little significance to the added data. This way one treats the inner product on the non-Cartesian domain as a weighted inner product on the Cartesian domain, where the weights are allowed to become zero, thus not defining a true inner product.

The second way would be to find a coordinate transformation $\pi : \mathbb{X}\times\mathbb{Y} \to \Omega \subset \mathbb{R}^2$ such that Hilbert spaces on the Cartesian coordinates can be related to the original Hilbert space. Thus inner products can be calculated in the new separated coordinates. Functions on the new coordinate domain $\mathbb{X}\times\mathbb{Y}$ can be expressed as a sum of separable functions, $f(x,y) = \sum_i f^i_x(x)\,f^i_y(y)$. By applying the coordinate transformation, the inner product on $\Omega$ can be expressed as an inner product on $\mathbb{X}\times\mathbb{Y}$ as follows:

$$\int_\Omega f(\omega)g(\omega)\,d\omega = \iint_{\mathbb{X}\times\mathbb{Y}} f(\pi(x,y))\,g(\pi(x,y))\,\det(\nabla\otimes\pi)\,dx\,dy,$$

where $(\nabla\otimes\pi)$ represents the Jacobian of $\pi$. Suppose $\pi$ has the property that $p(x,y) := \det(\nabla\otimes\pi)$ is separable as $p(x)p(y)$; substituting $f(\pi(x,y)) = \sum_i f^i_x(x)f^i_y(y)$ and $g(\pi(x,y)) = \sum_j g^j_x(x)g^j_y(y)$, one can express the inner product as

$$\langle f, g\rangle = \sum_i\sum_j \int_{\mathbb{X}} f^i_x(x)\,g^j_x(x)\,p(x)\,dx\ \int_{\mathbb{Y}} f^i_y(y)\,g^j_y(y)\,p(y)\,dy = \sum_i\sum_j \big\langle f^i_x, g^j_x\big\rangle_{\mathbb{X}}\,\big\langle f^i_y, g^j_y\big\rangle_{\mathbb{Y}},$$

where $\langle\cdot,\cdot\rangle_{\mathbb{X}}, \langle\cdot,\cdot\rangle_{\mathbb{Y}}$ (defined in the obvious way) are inner products if and only if $p(x) > 0$ on $\mathbb{X}$ and $p(y) > 0$ on $\mathbb{Y}$.

An alternative, third way would be to introduce a (nonlinear) coordinate transformation, but without the previously derived conditions. This amounts to computing the inner product on $\Omega$ only; the separation is in $\mathbb{X}$ and $\mathbb{Y}$. In Fig. 11 an assignment of new coordinates for the problem of Example 6 is shown, such that it is possible to separate the function on the new domain.

Fig. 11: Coordinate transformation

VII. SYMMETRY

A big disadvantage of the POD method is that its effectiveness is strongly related to the coordinates in which the data is represented; to illustrate this, see Example 7.

Example 7 (Travelling pulse). Let $u(t,x)$ be the solution of the PDE

$$\frac{\partial u}{\partial t} = \nu\frac{\partial u}{\partial x}, \qquad u(0,x) = u_0(x).$$

This simple PDE has the analytic solution $u(t,x) = u_0(x + \nu t)$. Now if $u_0(x)$ is taken to be a smooth pulse like a $\frac{\sin(x)}{x}$ function, discretised such that the snapshot data at successive time intervals, collected into a matrix, results in an identity matrix, then all singular vectors are equally important. This means that, although the dynamics are very simple, the POD method is not able to find a better basis.


Changing the coordinates gives the system

$$\dot w = \nu, \qquad u(t, x) = u_0(x - w).$$

Thus the same dynamics can be simulated using just one basis function on different coordinates. This change of coordinates gives an exact representation in which the PDE is changed into an ODE without model reduction. Since the dynamics it describes are still the same, this starts the search for a reduction technique that does not depend on the coordinate system. A possible way to achieve this is by describing the dynamics in a coordinate-free way. Operations like the gradient, divergence, rotation and Laplacian are all independent of the coordinate system that is used to express them. In other words, applying the operation in one coordinate system, or first performing a coordinate transformation, then applying the operation, and then transforming back to the original coordinates, gives the same result. A way to describe these operations without using coordinates is by using differential forms [9][10].

The aforementioned operations are defined in 3-dimensional space. They can also be extended to $n$-dimensional spaces by using the exterior derivative, the exterior product and the Hodge operator. Some literature [11][12][13] deals with finding symmetries in PDEs by using these operators from differential geometry.

This procedure, first introduced in [11], creates from the PDE a set of differential forms such that, when the dependent variables are considered to be functions of the independent variables and the forms are set equal to zero (called annulling), they form the original PDE. Annulling restricts the forms to the solution manifold of the PDE. The method then looks for invariances of the PDE, meaning the equations are still satisfied after a transformation. By solving for the invariance of the Lie derivative of these forms, the determining equations for the symmetry generators are obtained.

This method shows some promise for doing reduction in a coordinate-free way, yet no practical results have been obtained so far.

VIII. CONCLUSION

In this project the use of tensor decompositions in model reduction applications was studied. More specifically, three types of reduction strategies have been compared, each of them based on a different spectral decomposition of signals. For the reduction of PDEs these POD-by-tensor-decomposition methods still have some difficulties. Compared to the normal POD they achieve a less accurate reduction for the same number of basis functions. The HOSVD seems to work better than the TSVD, yet it is hard to make a valid comparison, because this could be due to numerical implementation issues in the TSVD: its algorithm is still in development, while the HOSVD makes use of the well-developed SVD algorithm for matrices. It should be noted that the conclusions on performance are based on the application of the methods to one example. The successive rank-one approximations are not useful for obtaining a set of ODEs from a PDE by Galerkin projections, yet one could change the method to one that imposes the condition that the time basis functions be orthonormal. Applying the Galerkin projection on those basis functions could then result in an ODE.

A major concern should also be the application of these methods to functions on non-Cartesian domains, and how boundary conditions get translated to these reduced models, as those boundary conditions determine most of the behaviour a PDE describes.

Another application for these tensor decompositions is identification on multidimensional data, for example data obtained from batch processes that exhibit dynamics both in time and in batch number. These tensor decompositions could lead to multidimensional controller design, where the process variation over batch number is also controlled. The value of separating the basis functions is clearer for these kinds of processes than for the processes described by PDEs.

The real computational reduction will come from the reduction of the number of ODEs that have to be solved. Typically, for linear problems the ODE can be written in a state-space representation. The $A$ matrix does not have to be full rank, so one could try to reduce this set of equations to a smaller set. For the simple dynamics of the heat equation it was observed that it is possible to separate the dynamics into two sets of ODEs, one for each direction.

For the dynamics described by PDEs, reduction by finding symmetries would be a good extension to the reduction by POD, since this method operates at a higher level than POD.

ACKNOWLEDGMENT

I would like to thank Siep Weiland, Femke van Belzen, Jobert Ludlage and Leila Özkan for their supervision, the people from IPCOS for giving me the opportunity to perform my research at their company, and my fellow graduate students at the control department for the excellent ambiance.

APPENDIX

A. Proofs

Lemma 1:

$$\langle\langle f(x,y), g(x)\rangle_{\mathbb{X}}, h(y)\rangle_{\mathbb{Y}} = \int_{\mathbb{Y}}\Big(\int_{\mathbb{X}} f(x,y)\,g(x)\,dx\Big)h(y)\,dy = \int_{\mathbb{Y}}\int_{\mathbb{X}} f(x,y)\,g(x)\,h(y)\,dx\,dy$$
$$= \int_{\mathbb{X}}\int_{\mathbb{Y}} f(x,y)\,g(x)\,h(y)\,dy\,dx = \int_{\mathbb{X}}\Big(\int_{\mathbb{Y}} f(x,y)\,h(y)\,dy\Big)g(x)\,dx = \langle\langle f(x,y), h(y)\rangle_{\mathbb{Y}}, g(x)\rangle_{\mathbb{X}}.$$

Theorem 2: Given a tensor $T$, a rank-one approximation $T_1 = \sigma_1(u_1\otimes\cdots\otimes u_N)$, where $u_i \in S = \{u_i \mid \|u_i\| = 1\}$ and $\sigma_1$ is obtained as $\langle T, (u_1\otimes\cdots\otimes u_N)\rangle$. The error in Frobenius norm is minimised by the vectors $u_i$ obtained from the maximisation problem $\max_{u_i\in S}|T(u_1, \ldots, u_N)|$:

$$\begin{aligned}
\arg\min_{u_i\in S}\|T - T_1\| &= \arg\min_{u_i\in S}\|T - T_1\|^2\\
&= \arg\min_{u_i\in S}\langle T - T_1, T - T_1\rangle\\
&= \arg\min_{u_i\in S}\big(\langle T, T\rangle - 2\langle T, T_1\rangle + \langle T_1, T_1\rangle\big)\\
&= \arg\min_{u_i\in S}\big(\langle T, T\rangle - 2\langle T, \sigma_1(u_1\otimes\cdots\otimes u_N)\rangle + \sigma_1^2\langle (u_1\otimes\cdots\otimes u_N), (u_1\otimes\cdots\otimes u_N)\rangle\big)\\
&= \arg\min_{u_i\in S}\big(\langle T, T\rangle - 2\sigma_1\langle T, (u_1\otimes\cdots\otimes u_N)\rangle + \sigma_1^2\big)\\
&= \arg\min_{u_i\in S}\big(\langle T, T\rangle - 2\langle T, (u_1\otimes\cdots\otimes u_N)\rangle^2 + \langle T, (u_1\otimes\cdots\otimes u_N)\rangle^2\big)\\
&= \arg\min_{u_i\in S}\big(\langle T, T\rangle - \langle T, (u_1\otimes\cdots\otimes u_N)\rangle^2\big)\\
&= \arg\max_{u_i\in S}\langle T, (u_1\otimes\cdots\otimes u_N)\rangle^2\\
&= \arg\max_{u_i\in S}\big|\langle T, (u_1\otimes\cdots\otimes u_N)\rangle\big|\\
&= \arg\max_{u_i\in S}\big|T(u_1, \ldots, u_N)\big|.
\end{aligned}$$

B. Functional Approach

One can view these functions of one or more variables as elements of a function space. By introducing a scalar multiplication and addition of functions, this function space becomes a vector space. Let it also consist only of functions that are bounded in the $L_2$ norm derived from the $L_2$ inner product

$$\langle f^i, f^j\rangle = \int_{x\in\mathbb{X}} f^i(x)\,f^j(x)\,dx,$$

thus obtaining Hilbert spaces. Now let $\{e^i_x\}$ be an orthonormal basis for $\mathcal{X}$ and $\{e^j_y\}$ an orthonormal basis for $\mathcal{Y}$, so that

$$\xi = \sum_i \xi_i e^i_x, \qquad \eta = \sum_j \eta_j e^j_y.$$

Definition 11. The 2-tensor $e^i_x\otimes e^j_y : \mathcal{X}\times\mathcal{Y} \to \mathbb{R}$ is defined by

$$\big(e^i_x\otimes e^j_y\big)(\xi, \eta) = \langle e^i_x, \xi\rangle\,\langle e^j_y, \eta\rangle.$$

Now let $\mathcal{H} = \mathcal{X}\otimes\mathcal{Y}$ consist of all functions $h(x,y)$, $(x,y) \in \mathbb{X}\times\mathbb{Y} \to \mathbb{R}$, bounded in the $L_2$ norm derived from the $L_2$ inner product

$$\langle h^i, h^j\rangle = \iint_{(x,y)\in\mathbb{X}\times\mathbb{Y}} h^i(x,y)\,h^j(x,y)\,dx\,dy.$$

Lemma 3. The set $\{e^i_x\otimes e^j_y\}$ is a basis for $\mathcal{H} = \mathcal{X}\otimes\mathcal{Y}$:

$$\zeta = \sum_i\sum_j \zeta_{ij}\, e^i_x\otimes e^j_y.$$

Proof:

$$\mathcal{X}\otimes\mathcal{Y} \ni \zeta(\cdot,\cdot) : \mathcal{X}\times\mathcal{Y} \to \mathbb{R}, \qquad \zeta(\xi, \eta) = \zeta\Big(\sum_i \xi_i e^i_x,\ \sum_j \eta_j e^j_y\Big).$$

By the multilinearity of the elements of $\mathcal{X}\otimes\mathcal{Y}$ this is

$$\sum_i\sum_j \xi_i\eta_j\,\zeta(e^i_x, e^j_y) = \sum_i\sum_j \xi_i\eta_j\,\zeta_{ij} = \sum_i\sum_j \langle e^i_x, \xi\rangle\langle e^j_y, \eta\rangle\,\zeta_{ij} = \sum_i\sum_j \zeta_{ij}\big(e^i_x\otimes e^j_y\big)(\xi, \eta).$$

Thus $\zeta = \sum_i\sum_j \zeta_{ij}\big(e^i_x\otimes e^j_y\big)$.

To show the difference between the inner product on the tensor product of Hilbert spaces and the inner product on the Cartesian product of Hilbert spaces, the latter is given here. $\mathcal{F}_{\mathbb{X}\times\mathbb{Y}} = \mathcal{X}\times\mathcal{Y}$ consists of the functions $f(x,y)$ on the domain $\mathbb{X}\times\mathbb{Y}$ of the form $\begin{pmatrix} f_x(x)\\ f_y(y)\end{pmatrix}$. Thus the inner product is

$$\langle f, g\rangle_{\mathcal{F}} = \left\langle\begin{pmatrix} f_x(x)\\ f_y(y)\end{pmatrix}, \begin{pmatrix} g_x(x)\\ g_y(y)\end{pmatrix}\right\rangle_{\mathcal{F}} := \langle f_x(x), g_x(x)\rangle_{\mathbb{X}} + \langle f_y(y), g_y(y)\rangle_{\mathbb{Y}}.$$

REFERENCES

[1] P. Astrid, Reduction of Process Simulation Models: A Proper Orthogonal Decomposition Approach. Technische Universiteit Eindhoven, 2004.

[2] R. Curtain and H. Zwart, An Introduction to Infinite-Dimensional Linear Systems Theory. Springer, 1995.

[3] V. Thomée, Galerkin Finite Element Methods for Parabolic Problems. Lecture Notes in Mathematics, 1984.

[4] R. Abraham, J. Marsden, and T. Ratiu, Manifolds, Tensor Analysis, and Applications. Springer, 1988.

[5] J. Kruskal, "Rank, decomposition, and uniqueness for 3-way and n-way arrays," Multiway Data Analysis, pp. 7–18, 1989.

[6] L. De Lathauwer, Signal Processing Based on Multilinear Algebra. Katholieke Universiteit Leuven, Faculteit der Toegepaste Wetenschappen, Dept. Elektrotechniek, 1997.

[7] L. De Lathauwer, B. De Moor, and J. Vandewalle, "On the best rank-1 and rank-(R, R, ..., R) approximation of higher-order tensors," SIAM Journal on Matrix Analysis and Applications, vol. 21, p. 1324, 2000.

[8] F. van Belzen, S. Weiland, and J. de Graaf, "Singular value decompositions and low rank approximations of multi-linear functionals," in Proc. 46th IEEE Conference on Decision and Control. IEEE, December 2007, pp. 3751–3756.

[9] D. Edelen, Applied Exterior Calculus. Wiley, New York, 1985.

[10] H. Flanders, Differential Forms with Applications to the Physical Sciences. Courier Dover Publications, 1989.

[11] B. Harrison, "Differential form symmetry analysis of two equations cited by Fushchych," Symmetry in Nonlinear Mathematical Physics, vol. 1, pp. 21–33, 1997.

[12] B. Harrison and F. Estabrook, "Geometric approach to invariance groups and solution of partial differential systems," Journal of Mathematical Physics, vol. 12, p. 653, 1971.

[13] B. Harrison, "The differential form method for finding symmetries," SIGMA, vol. 1, no. 001, p. 12, 2005.