Modified SMS method for computing outer inverses of Toeplitz matrices




Applied Mathematics and Computation 218 (2011) 3131–3143



Marko Miladinović ⇑, Sladjana Miljković, Predrag Stanimirović
University of Niš, Department of Mathematics, Faculty of Science, Višegradska 33, 18000 Niš, Serbia

Keywords: Toeplitz matrix; Displacement rank; Successive matrix squaring; Outer inverse; Convergence rate

0096-3003/$ - see front matter © 2011 Elsevier Inc. All rights reserved.
doi:10.1016/j.amc.2011.08.046

q This paper is supported by the research Project 174013 of the Serbian Ministry of Science. The authors are grateful for this type of support.
⇑ Corresponding author.

E-mail addresses: [email protected] (M. Miladinović), [email protected] (S. Miljković), [email protected] (P. Stanimirović).

Abstract: We introduce a new algorithm based on the successive matrix squaring (SMS) method. This algorithm uses the strategy of ε-displacement rank in order to find various outer inverses with prescribed ranges and null spaces of a square Toeplitz matrix. Using the idea of displacement theory, which decreases the memory space requirements as well as the computational cost, our method tends to be very effective for Toeplitz matrices.


1. Introduction

The classical Newton’s iteration

$$X_{i+1} = \Phi(X_i) = 2X_i - X_i M X_i, \quad i = 0, 1, 2, \ldots, \tag{1.1}$$

for the inversion of a matrix M was proposed by Schultz in 1933 [29] and is well studied in [26]. If we look at Schultz's paper, we immediately find that he does not suggest this method for arbitrary matrices (this would be a bit too expensive). It is interesting that he considers an example of a Toeplitz matrix, since its structure admits very cheap matrix operations. Thus, the Schultz iteration is advocated only in the case of agreeably structured matrices. Kailath et al. [17] introduced the concept of displacement rank as well as the displacement operator for close-to-Toeplitz matrices, which is systematically studied in the general case in [13]. It has been shown that the displacement theory approach is very powerful in designing fast inversion algorithms for structured matrices, especially Toeplitz and Hankel matrices. A number of different displacement operators have been introduced and analyzed in the literature [2,14,17,19,20,25,32]. The initial guess $X_0$ is chosen such that it has low displacement rank. The limit $X_\infty$, which represents the inverse or the generalized inverse of M, also has low displacement rank; see [13,14]. However, the iteration matrices $X_i$ in Newton's method may not have low displacement rank. In the worst case, the displacement rank of $X_i$ increases exponentially until it reaches n, the dimension of the matrix. In parallel with the increase of the displacement rank, the computational cost of the Newton process grows significantly as i increments. It follows that the growth of the displacement rank of $X_i$ must be controlled in order to develop an efficient algorithm. To turn Newton's iteration into a useful tool for computing the inverse as well as generalized inverses of a Toeplitz matrix, the authors in [2] introduced the concept of the orthogonal displacement representation and the ε-displacement rank of a matrix. In this way they control the growth of the displacement rank of $X_i$. At each step, via truncation, the iteration matrix is approximated by an appropriate matrix with low displacement rank. This strategy belongs to the class of compression techniques for inversion of structured matrices described in [7,8,25,28]. Because of the displacement structure of the iteration matrix, the matrix-vector multiplication involved in Newton's iteration can be done efficiently.





Numerical properties of the Schultz method powered by the concept of orthogonal displacement representation as well as ε-displacement rank show the effectiveness of the method applied to structured matrices. The entire computation uses much less memory and CPU time with respect to the classical method, especially for large-scale matrices.

Let $\mathbb{C}^{m\times n}$ and $\mathbb{C}^{m\times n}_r$ denote the set of all complex m × n matrices and the set of all complex m × n matrices of rank r, respectively. By $A^*$, $\mathcal{R}(A)$, $\operatorname{rank}(A)$ and $\mathcal{N}(A)$ we denote the conjugate transpose, the range, the rank and the null space of $A \in \mathbb{C}^{m\times n}$, respectively.

Recall that, for an arbitrary matrix $A \in \mathbb{C}^{m\times n}$, the set of all outer inverses (also called {2}-inverses) is defined by
$$A\{2\} = \{X \in \mathbb{C}^{n\times m} \mid XAX = X\}. \tag{1.2}$$

With $A\{2\}_s$ we denote the set of all outer inverses of rank s, and the symbol $A^{(2)}$ stands for an arbitrary outer inverse of A. If $A \in \mathbb{C}^{m\times n}_r$, V is a subspace of $\mathbb{C}^n$ of dimension $t \le r$ and U is a subspace of $\mathbb{C}^m$ of dimension $m - t$, then A has a {2}-inverse X such that $\mathcal{R}(X) = V$ and $\mathcal{N}(X) = U$ if and only if $AV \oplus U = \mathbb{C}^m$, in which case X is unique and is denoted by $A^{(2)}_{V,U}$.

The outer generalized inverse with prescribed range V and null space U is the generalized inverse of special interest in matrix theory. The reason for the importance of this inverse is the fact that the most common generalized inverses, namely the Moore–Penrose inverse $A^\dagger$, the weighted Moore–Penrose inverse $A^\dagger_{M,N}$, the Drazin inverse $A^D$, the group inverse $A^\#$, the Bott–Duffin inverse $A^{(-1)}_{(L)}$ and the generalized Bott–Duffin inverse $A^{(+)}_{(L)}$, are all {2}-generalized inverses of A with prescribed range and null space.

In order to compute the generalized inverse $A^{(2)}_{V,U}$, many authors have proposed various numerical methods (see, for example, [22,36,37]).

The Newton iterative scheme (1.1) has been frequently modified for the purpose of computing various generalized inverses of structured matrices. The main results are comprised in [2–4,31,32]. The authors of the previously mentioned papers use various strategies in choosing the starting approximation $X_0$, in defining the iterations, as well as in the usage of displacement operators.

The paper aims at presenting a uniform algorithm applicable to the computation of outer inverses with prescribed range and null space of an arbitrary Toeplitz matrix. Our algorithm is based on the successive matrix squaring (SMS) algorithm, which is equivalent to the Schultz method. Besides the classical modifications of the Schultz method available in the literature, corresponding to the orthogonal displacement representation and the ε-displacement rank, we use several adaptations of the SMS algorithm which enable a uniform approach to various generalized inverses. The algorithm is an adaptation of the SMS algorithm proposed in [30] and uses the concept of the approximate orthogonal displacement representation introduced in [2,4].

Since the SMS algorithm for computing outer inverses from [30] comprises known SMS techniques from [6,33,34], it seems reasonable to expect that the algorithm presented in the current paper generalizes known results from the papers [2–4,31,32]. Indeed, our method can be considered as a universal algorithm which computes the outer generalized inverses of a square Toeplitz matrix. As particular cases of the general algorithm, we derive iterative procedures for computing reflexive g-inverses, the Moore–Penrose, the group and the Drazin inverse, suitable for square Toeplitz matrices. Therefore, our method includes the algorithms investigated in [2,4] as particular cases. Furthermore, in the case m = n we get an alternative approach to computation of the Moore–Penrose inverse, taking into account that the algorithms derived in [3,32] are based on different concepts of displacement operator.

The organization of the remainder of this paper is as follows. In the next section we state some known definitions related to a displacement operator and the ε-displacement rank; the unified algorithm is also presented there. Convergence properties and the computational complexity of the algorithm are investigated in Section 3. Numerical examples and some applications are presented in the fourth section.

2. Algorithm construction

The concept of the displacement operator, introduced in [17], has been studied in many papers [8,12,14,18,19,27], and many different forms of it have been presented. The displacement structure of the Drazin inverse, the group inverse and the $M^{(2)}_{V,U}$ inverse is considered in [4,9,21,35]. In the present paper we keep our attention on the displacement operator of Sylvester type, given by

$$D_{A,B}(M) = AM - MB, \tag{2.1}$$

where A, B and M are matrices of appropriate sizes.

The next proposition gives some arithmetic regarding the defined displacement operator $D_{A,B}$.

Proposition 2.1. The following holds

$$D_{A,B}(MN) = D_{A,B}(M)N + M D_{A,B}(N) + M(B - A)N.$$

Proof. Follows immediately from the definition of the displacement operator $D_{A,B}$. □
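The identity of Proposition 2.1 is easy to confirm numerically; the following sketch codes $D_{A,B}$ directly from (2.1) and checks the product rule on random dense matrices.

```python
import numpy as np

# Check D_{A,B}(MN) = D_{A,B}(M) N + M D_{A,B}(N) + M (B - A) N
# for the Sylvester-type operator D_{A,B}(X) = A X - X B.
rng = np.random.default_rng(0)
n = 6
A, B, M, N = (rng.standard_normal((n, n)) for _ in range(4))

def D(A, B, X):
    return A @ X - X @ B

lhs = D(A, B, M @ N)
rhs = D(A, B, M) @ N + M @ D(A, B, N) + M @ (B - A) @ N
assert np.allclose(lhs, rhs)
```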


The rank of the displacement matrix $D_{A,B}(M)$ is called the displacement rank of M and is denoted by $\mathrm{drk}(M)$. For a Toeplitz matrix M and appropriate choices of the matrices A and B we have $\mathrm{drk}(M) \le 2$ [13]. Here we recall the definition of the ε-displacement rank as the rank of a perturbed displacement.

Definition 2.1 [2]. For a given ε > 0 the relative ε-displacement rank of a matrix M is defined as
$$\mathrm{drk}_\varepsilon(M) = \min_{\|E\| \le \varepsilon\|M\|} \operatorname{rank}(D_{A,B}(M) + E),$$

where ‖·‖ denotes the Euclidean norm.

The main purpose of the displacement operator is to enable representation of the original matrix through a matrix that has low rank and from which one can also easily recover the original matrix. Besides the fact that the displacement matrix $D_{A,B}(M)$ has low rank, it is well known that its singular value decomposition can be easily computed.

Remark 2.1. The computation of the singular value decomposition of a matrix is, in general, very expensive. In our case, what makes it cheaper, as is also explained in [2], is the fact that it can be calculated using the QR factorization of appropriate matrices.

Let $\sigma_1 \ge \cdots \ge \sigma_k$ be the nonzero singular values of $D_{A,B}(M)$ and let
$$D_{A,B}(M) = U\Sigma V^T = \sum_{i=1}^{k} \sigma_i u_i v_i^T \tag{2.2}$$
be the singular value decomposition of $D_{A,B}(M)$. Suppose that the original matrix M can be recovered by the formula
$$M = c \sum_{i=1}^{k} \sigma_i f(u_i) g(v_i), \tag{2.3}$$

where f and g, generally speaking, represent products of matrices which can be generated by the vectors $u_i$ and $v_i$ respectively, and where c is a constant. The representation (2.3) of the matrix M is called the orthogonal displacement representation (odr) (with respect to $D_{A,B}$) and the corresponding 3-tuple $(U, \sigma, V)$ the orthogonal displacement generator (odg), where $\sigma = (\sigma_1, \sigma_2, \ldots, \sigma_k)$ [2]. For example, the odr of an arbitrary Toeplitz matrix is given in [2].

Proposition 2.2. Suppose that M is a Toeplitz matrix. Let $D_{A,B}(M) = U\Sigma V^T = \sum_{i=1}^{k} \sigma_i u_i v_i^T$ $(\sigma_1 \ge \sigma_2 \ge \cdots \ge \sigma_k > 0)$ be the singular value decomposition of $D_{A,B}(M)$ and let ε be such that $0 < \varepsilon < \sigma_1$. Then $\mathrm{drk}_\varepsilon(M) = r$ if and only if $\sigma_r > \varepsilon\|M\| \ge \sigma_{r+1}$.

Proof. See Theorem 2.1 from [2]. □
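By Proposition 2.2, the minimization in Definition 2.1 collapses to a singular-value count: $\mathrm{drk}_\varepsilon(M)$ is the number of singular values of $D_{A,B}(M)$ exceeding ε‖M‖. A dense sketch, using the shift-based choices $A = C_-$, $B = C_+$ that are introduced later in this section:

```python
import numpy as np

# drk_eps(M): count the singular values of D_{A,B}(M) that exceed
# eps * ||M|| (spectral norm), per Proposition 2.2.
def drk_eps(M, A, B, eps):
    sigma = np.linalg.svd(A @ M - M @ B, compute_uv=False)
    return int(np.sum(sigma > eps * np.linalg.norm(M, 2)))

n = 5
Z = np.diag(np.ones(n - 1), -1)                      # lower shift
e1, en = np.eye(n)[:, [0]], np.eye(n)[:, [-1]]
Cm, Cp = Z - e1 @ en.T, Z + e1 @ en.T                # C_-, C_+
c = np.array([1.0, 2.0, 3.0, 4.0, 5.0])              # first column
r = np.array([1.0, -1.0, -2.0, -3.0, -4.0])          # first row
M = np.array([[c[i - j] if i >= j else r[j - i]
               for j in range(n)] for i in range(n)])
assert drk_eps(M, Cm, Cp, 1e-12) <= 2                # Toeplitz: drk(M) <= 2
```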

For a given ε > 0, if $\mathrm{drk}_\varepsilon(M) = r$ one can get an approximation $M_\varepsilon$ of M,
$$M_\varepsilon = c \sum_{i=1}^{r} \sigma_i f(u_i) g(v_i), \quad r < k,$$
which is called an approximate orthogonal displacement representation (aodr) of the matrix M, and the associated generator $(\widehat{U}, \bar{\sigma}, \widehat{V})$ is called an approximate orthogonal displacement generator (aodg), where
$$\bar{\sigma} = (\sigma_1, \sigma_2, \ldots, \sigma_r), \quad \widehat{U} = [u_1, u_2, \ldots, u_r], \quad \widehat{V} = [v_1, v_2, \ldots, v_r].$$

We also use the operator $\mathrm{trunc}_\varepsilon(\cdot)$, defined on the set of matrices by $M_\varepsilon = \mathrm{trunc}_\varepsilon(M)$ [2].

In the present paper we derive an algorithm for finding the outer inverse with prescribed range and null space of an arbitrary Toeplitz matrix $M \in \mathbb{C}^{n\times n}$. The entries of the matrix M, defined by the general rule $m_{i,j} = m_{i-j}$, are constant complex numbers along the diagonals parallel to the main diagonal. In the remainder of the paper the notation toeplitz[c, r] stands for the Toeplitz matrix determined by its first column c and its first row r. As usual, by $C_+$ and $C_-$ we denote the Toeplitz matrices defined by
$$C_+ = Z + e_1 e_n^T, \quad C_- = Z - e_1 e_n^T,$$
where Z = toeplitz[(0, 1, 0, …, 0), (0, …, 0)] and $e_i$ is the ith column of the identity matrix. Let $C_+(x)$ and $C_-(x)$ be the circulant and anti-circulant matrices defined by
$$C_+(x) = \mathrm{toeplitz}[(x_1, x_2, \ldots, x_n), (x_1, x_n, \ldots, x_2)], \quad C_-(x) = \mathrm{toeplitz}[(x_1, x_2, \ldots, x_n), (x_1, -x_n, \ldots, -x_2)].$$

For an arbitrary Toeplitz matrix M we make the choice $A = C_-$ and $B = C_+$ in the definition of the displacement operator (2.1). The parameters from (2.3) become [2]:
$$c = -\frac{1}{2}, \quad f(u_i) = C_-(u_i), \quad g(v_i) = C_+(J v_i),$$
where J is the permutation matrix having 1's on the anti-diagonal. For the sake of simplicity, in the rest of the paper we use the notation D instead of $D_{C_-,C_+}$.


Remark 2.2 [2]. For the approximation of a matrix $M \in \mathbb{C}^{n\times n}$ given by its aodg $M_\varepsilon$, the matrix-vector product can be efficiently computed with $O(r n \log n)$ FFTs, where $r = \mathrm{drk}_\varepsilon(M)$.
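The $O(n \log n)$ counts in Remark 2.2 rest on FFT-based Toeplitz products. The classical circulant-embedding version of that kernel is sketched below (the embedding is the common textbook construction, not taken verbatim from [2]).

```python
import numpy as np

# Multiply an n x n Toeplitz matrix toeplitz[c, r] (first column c,
# first row r, with c[0] == r[0]) by a vector x in O(n log n):
# embed it in a 2n x 2n circulant, which the FFT diagonalizes.
def toeplitz_matvec(c, r, x):
    n = len(x)
    col = np.concatenate([c, [0.0], r[:0:-1]])     # circulant first column
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(np.concatenate([x, np.zeros(n)])))
    return y[:n].real

c = np.array([1.0, 2.0, 3.0])
r = np.array([1.0, -1.0, 4.0])
T = np.array([[c[i - j] if i >= j else r[j - i] for j in range(3)]
              for i in range(3)])
x = np.array([1.0, 0.5, -2.0])
assert np.allclose(toeplitz_matvec(c, r, x), T @ x)
```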

The following result provides an error bound for the aodr of M.

Proposition 2.3 [2]. Suppose that M is a Toeplitz matrix. Let $r = \mathrm{drk}_\varepsilon(M) \le \mathrm{drk}(M) = k$, let $M_\varepsilon$ be an aodr of M, and let $\sigma_1, \sigma_2, \ldots, \sigma_k$ be the nonzero singular values of $D_{A,B}(M)$. Then it holds that
$$\|M - M_\varepsilon\| \le \frac{1}{2} n \sum_{i=r+1}^{k} \sigma_i \le \frac{1}{2} n (k - r)\varepsilon. \tag{2.4}$$

Many authors have built a number of iterative methods for finding generalized inverses of an arbitrary Toeplitz matrix by modifying Newton's method [2–4,31,32]. For that purpose, they use operators of the form $D_{A,B}(M)$, as well as the concepts of displacement representation and ε-displacement rank of matrices. It has been shown that these methods are very effective and that they significantly decrease the computational cost.

In the following part of this section we construct an algorithm for finding outer inverses with prescribed range and null space of an arbitrary square Toeplitz matrix.

Let $M \in \mathbb{C}^{n\times n}_q$ and $R \in \mathbb{C}^{n\times n}_s$, $0 \le s \le q$, be arbitrary matrices. We start with the following iterative scheme (see [30]):
$$X_1 = Q, \quad X_{k+1} = P X_k + Q, \quad k \in \mathbb{N}, \tag{2.5}$$

where $P = I - \beta R M$, $Q = \beta R$ and β denotes the relaxation parameter. The original idea of the iterative scheme (2.5), named the successive matrix squaring method, was previously proposed by Chen et al. [6] in the case $R = M^*$. The method proposed by Chen et al. is used for computing the Moore–Penrose inverse of the matrix M. Instead of the iterative scheme (2.5) it is possible to use the 2n × 2n matrix

$$T = \begin{bmatrix} P & Q \\ 0 & I \end{bmatrix}.$$

The kth power of the matrix T is given by
$$T^k = \begin{bmatrix} P^k & \sum_{i=0}^{k-1} P^i Q \\ 0 & I \end{bmatrix}, \quad k > 1,$$
where the n × n matrix block $\sum_{i=0}^{k-1} P^i Q$ is actually the kth approximation $X_k$ of the pseudoinverse of the matrix M from (2.5), whence $X_k = \sum_{i=0}^{k-1} P^i Q$. The matrix power $T^k$ can be computed by means of repeated squaring of the matrix T; therefore, instead of using the iterative scheme (2.5) we can observe the iterative process

instead of using the iterative scheme (2.5) we can observe the iterative process

T0 ¼ T;

Tkþ1 ¼ T2k ; k 2 N0:

ð2:6Þ

The block $\sum_i P^i Q$ generated after k steps of the successive squaring (2.6) is then equal to the approximation produced by $2^k$ steps of the iterative process (2.5). Namely, we consider the sequence $\{Y_k\}$ given by $Y_k = X_{2^k} = \sum_i P^i Q$. For the sake of completeness we restate the following well-known result.

Lemma 2.1 [15]. Let $H \in \mathbb{C}^{n\times n}$ and ε > 0 be given. There is at least one matrix norm ‖·‖ such that
$$\rho(H) \le \|H\| \le \rho(H) + \varepsilon, \tag{2.7}$$
where ρ(H) denotes the spectral radius of H.

The authors in [30] prove that the iterative scheme given in terms of the block matrix (2.6) converges to the outer inverse of the matrix M with prescribed range and null space, determined by an appropriately chosen matrix R.

Proposition 2.4 [30]. Let $M \in \mathbb{C}^{n\times n}_q$ and $R \in \mathbb{C}^{n\times n}_s$, $0 \le s \le q$, be given such that
$$M\mathcal{R}(R) \oplus \mathcal{N}(R) = \mathbb{C}^n. \tag{2.8}$$

The sequence of approximations
$$Y_k = \sum_{i=0}^{2^k - 1} (I - \beta R M)^i \beta R \tag{2.9}$$
converges, in the matrix norm ‖·‖, to the outer inverse $Y = M^{(2)}_{\mathcal{R}(R),\mathcal{N}(R)}$ of the matrix M if β is a fixed real number such that
$$\max_{1 \le i \le t} |1 - \beta\lambda_i| < 1,$$
where $\operatorname{rank}(RM) = t$, $\lambda_i$, $i = 1, \ldots, t$, are the eigenvalues of RM, and ‖·‖ satisfies condition (2.7) for $H = I - \beta M R$.


Since the usage of the partitioned 2n × 2n matrix in the SMS acceleration scheme is not appropriate for application of a displacement operator, our intention is to avoid such an approach. For that purpose, we observe that the sequence of approximations given by (2.9) can be rewritten by the following steps

$$Y_{k+1} = (S_k + I) Y_k, \quad S_{k+1} = I - Y_{k+1} M, \quad k \in \mathbb{N}_0, \tag{2.10}$$
where the initial choices are $Y_0 = Q$, $S_0 = P$. With the rules given by (2.10) one can easily verify that $Y_k = X_{2^k}$ and $S_k = P^{2^k}$.

Because manipulation with the sequence $\{Y_k\}$ is ineffective when its displacement rank is high (in the worst case the displacement rank can increase exponentially), we introduce the following modification of the method (2.10). Namely, using the concept of the displacement representation, we modify the successive matrix squaring method from [30] by approximating the matrix $Y_{k+1}$ with a matrix of low displacement rank. We define the sequence $\{Z_k\}$ by
$$Z_{k+1} = ((P_k + I) Z_k)_{\varepsilon_k}, \quad P_{k+1} = I - Z_{k+1} M, \quad k \in \mathbb{N}_0, \tag{2.11}$$

with the starting values $Z_0 = Q$, $P_0 = P$. For notational simplicity let us denote
$$Z'_k = (P_k + I) Z_k, \tag{2.12}$$
which satisfies the inequality $\mathrm{drk}(Z'_k) \le \mathrm{drk}(P_k) + \mathrm{drk}(Z_k)$. After we use the aodr of $Z'_k$, the displacement rank of the sequence $\{Z_k\}$ stays low during the iterative process. The low displacement rank of the sequence of matrices $P_k$ is also retained, since $\mathrm{drk}(P_k) \le \mathrm{drk}(Z_k) + m$, where $m = \mathrm{drk}(M)$.

In the rest of this section we present an algorithm for computing the outer inverse with prescribed range and null space ofan arbitrary Toeplitz matrix.

In our modified SMS method we compute and store the SVD of $D(Z_k)$ instead of $Z_k$, and later determine $Z_k$ using (2.3). From (2.12) and Proposition 2.1 we obtain
$$D(Z'_k) = D(P_k) Z_k + P_k D(Z_k) + D(Z_k) + 2 P_k e_1 e_n^T Z_k. \tag{2.13}$$

If we denote the SVD of $D(Z_k)$ by $U_Z \Sigma_Z V_Z^T$ and the SVD of $D(P_k)$ by $U_P \Sigma_P V_P^T$, then the extended form of $D(Z'_k)$ can be rewritten in the following matrix form:
$$D(Z'_k) = U_{Z'} \Sigma_{Z'} V_{Z'}^T = \begin{bmatrix} U_P & (P_k + I) U_Z & P_k e_1 \end{bmatrix} \begin{bmatrix} \Sigma_P & 0 & 0 \\ 0 & \Sigma_Z & 0 \\ 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} V_P^T Z_k \\ V_Z^T \\ e_n^T Z_k \end{bmatrix}. \tag{2.14}$$

On the other hand, in order to compute the SVD of $D(P_{k+1})$, which we need for the next iteration, we have
$$D(P_{k+1}) = D(I) - D(Z_{k+1}) M - Z_{k+1} D(M) - 2 Z_{k+1} e_1 e_n^T M.$$

We denote the SVD of $D(Z_{k+1})$ by $U_Z \Sigma_Z V_Z^T$ and the SVD of $D(M)$ by $U_M \Sigma_M V_M^T$. The extended form of the previous identity can be written as
$$D(P_{k+1}) = U_P \Sigma_P V_P^T = \begin{bmatrix} -U_Z & -Z_{k+1} U_M & -Z_{k+1} e_1 & -e_1 \end{bmatrix} \begin{bmatrix} \Sigma_Z & 0 & 0 & 0 \\ 0 & \Sigma_M & 0 & 0 \\ 0 & 0 & 2 & 0 \\ 0 & 0 & 0 & 2 \end{bmatrix} \begin{bmatrix} V_Z^T M \\ V_M^T \\ e_n^T M \\ e_n^T \end{bmatrix}. \tag{2.15}$$

Therefore the SVD of $D(Z_{k+1})$ can be computed by Algorithm 2.1.

Algorithm 2.1. The odg of $Z_{k+1}$ and the odg of $P_{k+1}$

Require: The odg of $Z_k$, the odg of $P_k$, the odg of M and the truncation value ε.
1: Determine the matrices $U_{Z'}, \Sigma_{Z'}, V_{Z'}^T$ according to (2.14).
2: Compute the QR factorizations $U_{Z'} = Q_1 R_1$ and $V_{Z'} = Q_2 R_2$.
3: Compute the SVD of $R_1 \Sigma_{Z'} R_2^T = U \Sigma V^T$.
4: Set $\widehat{\Sigma} = \mathrm{Diag}(\sigma_1, \sigma_2, \ldots, \sigma_r)$, where r is defined in relation to the truncation value ε: $\sigma_{r+1} \le \varepsilon\sigma_1 < \sigma_r$.
5: Set $U_{Z_{k+1}} = Q_1 \widehat{U}$, $\Sigma_{Z_{k+1}} = \widehat{\Sigma}$ and $V_{Z_{k+1}} = Q_2 \widehat{V}$, where $\widehat{U}$ and $\widehat{V}$ consist of the first r columns of U and V, respectively.
6: Determine the matrices $U_P, \Sigma_P, V_P^T$ according to (2.15).
7: Compute the QR factorizations $U_P = Q_3 R_3$ and $V_P = Q_4 R_4$.
8: Compute the SVD of $R_3 \Sigma_P R_4^T = U \Sigma V^T$.
9: Set $U_{P_{k+1}} = Q_3 U$, $\Sigma_{P_{k+1}} = \Sigma$ and $V_{P_{k+1}} = Q_4 V$.
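The core of steps 2–5 (and 7–9) is a generic recompression of a redundant SVD-like generator: two thin QR factorizations, an SVD of the small middle factor, and truncation at the relative threshold of step 4. A dense sketch of that kernel, independent of the displacement bookkeeping:

```python
import numpy as np

# Given a generator (U, s, V) with A = U diag(s) V^T, return an
# orthogonal generator truncated where sigma_{r+1} <= eps * sigma_1.
def recompress(U, s, V, eps):
    Q1, R1 = np.linalg.qr(U)
    Q2, R2 = np.linalg.qr(V)
    Uc, sc, Vct = np.linalg.svd(R1 @ np.diag(s) @ R2.T)
    r = int(np.sum(sc > eps * sc[0]))
    return Q1 @ Uc[:, :r], sc[:r], Q2 @ Vct.T[:, :r]

rng = np.random.default_rng(3)
n = 8
U = rng.standard_normal((n, 4))
V = rng.standard_normal((n, 4))
s = np.array([5.0, 2.0, 1.0, 0.5])
A = U @ np.diag(s) @ V.T
Ur, sr, Vr = recompress(U, s, V, 1e-12)
assert np.allclose(Ur @ np.diag(sr) @ Vr.T, A)   # lossless at tiny eps
```

The cost is dominated by the two thin QR factorizations and a small SVD, which is why, with Toeplitz structure, Algorithm 2.1 avoids any SVD of an n × n matrix.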


The computational cost of Algorithm 2.1 is about O(mh) FFTs, where $h = \max_k \mathrm{drk}(Z_k)$. If h is small enough, or even independent of n, the algorithm requires $O(n \log n)$ operations per step.

To complete our modified SMS method we present Algorithm 2.2 for finding the outer inverses of a Toeplitz matrix, given in terms of the odr of the approximate matrices with respect to the operator D.

Algorithm 2.2. Modified SMS algorithm

Require: A Toeplitz matrix M and its displacement operator D, a matrix R, a relaxation parameter β and a residual error bound ξ.
1: Set k = 0, $Z_k = \beta R$ and $P_k = I - \beta R M$.
2: Compute the SVDs $D(P_k) = U_{P_k} \Sigma_{P_k} V_{P_k}^T$, $D(Z_k) = U_{Z_k} \Sigma_{Z_k} V_{Z_k}^T$ and $D(M) = U_M \Sigma_M V_M^T$.
3: Determine the truncation value $\varepsilon_k$.
4: Compute the odg $(U_{Z_{k+1}}, \Sigma_{Z_{k+1}}, V_{Z_{k+1}})$ of $Z_{k+1}$ and $(U_{P_{k+1}}, \Sigma_{P_{k+1}}, V_{P_{k+1}})$ of $P_{k+1}$ according to Algorithm 2.1 with input $(U_{Z_k}, \Sigma_{Z_k}, V_{Z_k})$, $(U_{P_k}, \Sigma_{P_k}, V_{P_k})$, $(U_M, \Sigma_M, V_M)$ and $\varepsilon = \varepsilon_k$.
5: Determine the norm ‖res($Z_{k+1}$)‖. If ‖res($Z_{k+1}$)‖ < ξ, go to step 7; otherwise continue.
6: Set k = k + 1 and go to step 3.
7: Return the matrix $Z_{k+1}$.

The value ‖res($Z_k$)‖ in step 5 denotes the norm of the residual, ‖res($Z_k$)‖ = ‖$Z_k M Z_k - Z_k$‖. Since, in general, computing ‖res($Z_k$)‖ is an expensive operation, a cheaper way is to compute the norm of the vector ‖res($Z_k$)$e_1$‖. This quantity can be efficiently computed using a few FFTs.
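In dense form the two residual measures compare as follows; the vector version never exceeds the full norm, which is why it serves as an inexpensive surrogate (with Toeplitz structure it costs only a few FFTs).

```python
import numpy as np

# Full residual norm ||Z M Z - Z|| versus the cheap surrogate
# ||res(Z) e_1|| used in step 5 of Algorithm 2.2.
rng = np.random.default_rng(4)
n = 6
M = rng.standard_normal((n, n))
Z = np.linalg.inv(M) + 1e-8 * rng.standard_normal((n, n))  # near-inverse

res = Z @ M @ Z - Z
full = np.linalg.norm(res)         # Frobenius norm of the residual
cheap = np.linalg.norm(res[:, 0])  # norm of res(Z) e_1
assert cheap <= full + 1e-15
```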

3. Convergence properties

It is well known that matrices with an AB-displacement structure possess generalized inverses with a BA-displacement structure. Investigations in this field started in the paper [14] for the Moore–Penrose inverse. In the recent paper [21] these investigations are extended to the generalized inverse $M^{(2)}_{V,U}$.

Proposition 3.1 [21]. Let $M \in \mathbb{C}^{n\times n}$ be an arbitrary matrix and $M^{(2)}_{V,U}$ its {2}-inverse. Then
$$\operatorname{rank}\left(D_{B,A}\left(M^{(2)}_{V,U}\right)\right) \le \operatorname{rank}(D_{A,B}(M)) + \operatorname{rank}(D_{B,A}(R)).$$

In the case when the input matrix M has a low displacement rank with respect to $D_{A,B}$ and R has a low displacement rank with respect to $D_{B,A}$, the generalized inverse $M^{(2)}_{V,U}$ also possesses a low displacement rank with respect to $D_{B,A}$. As a consequence, it is evident that choosing R to be a matrix with low displacement rank (with respect to $D_{B,A}$) is of crucial importance for keeping the displacement rank of the outer inverse with prescribed range and null space low. Then, it is possible to use the following result from [12] in order to show the convergence of the algorithm.

Theorem 3.1 [12]. Let V be a normed space and consider a function f : V → V and M ∈ V. Assume that B = f(M) can be obtained by an iteration of the form
$$X_k = \Phi(X_{k-1}), \quad k = 1, 2, \ldots, \tag{3.1}$$

where Φ is a one-step operator. Further, assume that for any initial guess $X_0$ sufficiently close to B, the process converges:
$$\lim_{k \to \infty} X_k = B.$$

Assume that there are constants $c_\Phi, \epsilon_\Phi > 0$ and α > 1 such that
$$\|\Phi(X) - B\| \le c_\Phi \|X - B\|^\alpha \quad \text{for all } X \in V \text{ with } \|X - B\| \le \epsilon_\Phi. \tag{3.2}$$

Let S ⊂ V be a set of structured elements and $\mathrm{trunc}_\varepsilon : V \to S$ a truncation operator such that
$$\|X - X_\varepsilon\| \le c_R \|X - B\| \quad \text{for all } X \in V \text{ with } \|X - B\| \le \epsilon_\Phi. \tag{3.3}$$

Then there exists δ > 0 such that the truncated iterative process $Y_k = (\Phi(Y_{k-1}))_\varepsilon$ converges to B, with
$$\|Y_k - B\| \le c_{R\Phi} \|Y_{k-1} - B\|^\alpha, \quad c_{R\Phi} = (c_R + 1) c_\Phi, \quad k = 1, 2, \ldots, \tag{3.4}$$
for any starting value $Y_0 = (Y_0)_\varepsilon$ satisfying $\|Y_0 - B\| < \delta$.

For our iterative process, condition (3.2) in Theorem 3.1 is undoubtedly satisfied for the choices $c_\Phi = \|M\|$ and α = 2. The choice of the truncation value $\varepsilon_k$ at each step of the iterative process (2.11) is very important, since it determines the value of the displacement rank r, which is actually the balance point between the convergence of the process and the low computational requirements of the algorithm. Instead of analyzing the conditions under which (3.3) is satisfied, we give


some specifics in the following alternative proof of convergence, which characterizes the described process. In addition, the presented result also determines the distance between the values of the exact process and the truncated process at each iteration.

The notion of the q-Pochhammer symbol (or q-rising factorial), defined by
$$(a; q)_k = \prod_{i=0}^{k-1} (1 - a q^i),$$
is used in the proof of the next result.
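For reference, the symbol transliterates directly into code; a minimal sketch:

```python
# q-Pochhammer symbol (a; q)_k = prod_{i=0}^{k-1} (1 - a q^i).
def q_pochhammer(a, q, k):
    out = 1.0
    for i in range(k):
        out *= 1.0 - a * q ** i
    return out

assert q_pochhammer(-1.0, 0.5, 3) == (1 + 1) * (1 + 0.5) * (1 + 0.25)
```

In the bound (3.8) below, the relevant values are $(-1; \delta)_k = \prod_{i=0}^{k-1}(1 + \delta^i)$, which stay bounded because $\delta < 1$.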

Theorem 3.2. Let $M \in \mathbb{C}^{n\times n}_q$ and $R \in \mathbb{C}^{n\times n}_s$, $0 \le s \le q$, be given such that condition (2.8) is satisfied. The sequence of approximations $\{Z_k\}$ given by (2.11) converges to the outer inverse $Z = M^{(2)}_{\mathcal{R}(R),\mathcal{N}(R)}$ of the matrix M if the following three conditions are satisfied:

1. β is a real number such that
$$\max_{1 \le i \le t} |1 - \beta\lambda_i| < 1,$$
where $\operatorname{rank}(RM) = t$ and $\lambda_i$, $i = 1, \ldots, t$, are the eigenvalues of RM.
2. The truncation values $\varepsilon_k$, $k \in \mathbb{N}_0$, are chosen such that
$$\varepsilon_k = \frac{2}{n\left(h'_k - h_{k+1}\right)}\, \theta_k, \tag{3.5}$$
where
$$h'_k = \mathrm{drk}(Z'_k), \quad h_k = \mathrm{drk}(Z_k), \quad k \ge 0, \quad \theta_0 = \min\left\{\varepsilon, \frac{\delta^2 - \|P\|^2}{\|M\|}\right\}$$
and
$$\theta_k = \min\left\{\varepsilon^{k+1}, \frac{\delta^{k+2}(1 - \delta^k)}{\|M\|}\right\}, \quad k \ge 1, \tag{3.6}$$
for $0 < \varepsilon \ll 1$ and $\delta = 1 - \varepsilon$.
3. We choose a matrix norm ‖·‖ such that condition (2.7) from Lemma 2.1 is satisfied for $H = I - \beta M R$ and, in addition, the property $\|P\| \le 1 - \varepsilon$ holds.

Under these assumptions we have the quadratic convergence of the algorithm, described by the rule
$$\frac{\|Z - Z_k\|}{\|Z\|} \le \left(\max_{1 \le i \le t} |1 - \beta\lambda_i|\right)^{2^k} + O(\varepsilon), \tag{3.7}$$
and the norm estimate
$$\|R_k\| \le \frac{(-1; \delta)_{k+1}}{2(1 + \delta)} \cdot \frac{\varepsilon(1 - \varepsilon^k)}{1 - \varepsilon} = O(\varepsilon), \quad k \ge 0, \tag{3.8}$$
where $\{Y_k\}$ is the sequence generated by the SMS algorithm and $R_k = Z_k - Y_k$.

Proof. Without loss of generality one can assume that ‖I‖ = 1. Otherwise, in the case ‖I‖ > 1, instead of the norm ‖·‖ which satisfies Condition 3 of the theorem, the induced matrix norm $N(M) = \sup_{\|E\| = 1} \|ME\|$ can be used. According to Theorem 5.6.26 from [15] the new norm satisfies $N(M) \le \|M\|$ for all $M \in \mathbb{C}^{n\times n}$ and N(I) = 1.

Further, since the matrix $Z_{k+1}$ is the aodr of $Z'_k$, it is possible to write $Z_{k+1} = Z'_k + E_k$. According to (3.5) as well as (2.4), it is evident that $\|E_k\| \le \theta_k$, $k \ge 0$. Let us prove the auxiliary result
$$\|P_k\| \le \delta^{k+1}, \quad k \ge 0.$$
In the case k = 0 we have $\|P_0\| = \|P\| \le \delta$. The proof in the case k = 1 follows from
$$P_1 = I - Z_1 M = I - (P_0 + I) Z_0 M - E_0 M = I - Z_0 M - P_0 Z_0 M - E_0 M = P_0^2 - E_0 M,$$
which produces
$$\|P_1\| \le \|P_0\|^2 + \theta_0 \|M\| \le \|P\|^2 + \frac{\delta^2 - \|P\|^2}{\|M\|} \|M\| = \delta^2.$$


Assuming that $\|P_k\| \le \delta^{k+1}$ and taking into account $P_{k+1} = P_k^2 - E_k M$ as well as $\|E_k\| \le \theta_k$, we verify the inductive step:
$$\|P_{k+1}\| \le \|P_k\|^2 + \theta_k \|M\| \le \delta^{2k+2} + \frac{\delta^{k+2}(1 - \delta^k)}{\|M\|} \|M\| = \delta^{k+2}.$$

Now we estimate the norm $\|R_k\|$ as in (3.8). In the case k = 0 we have $\|R_0\| = 0$. To prove the statement in the case k ≥ 1 we again use mathematical induction. The base of the induction (case k = 1) can be verified from
$$Z_1 = (Z'_0)_{\varepsilon_0} = (Y_1)_{\varepsilon_0} = Y_1 + E_0,$$
which implies
$$\|R_1\| = \|E_0\| \le \theta_0 \le \varepsilon = \frac{(-1; \delta)_2}{2(1 + \delta)} \varepsilon.$$

Suppose that the norm $\|R_k\|$ satisfies the bound (3.8). Then the inductive step follows from
$$Z_{k+1} = ((P_k + I) Z_k)_{\varepsilon_k} = (P_k + I)(Y_k + R_k) + E_k = Y_{k+1} + (P_k + I) R_k + E_k,$$
which together with (3.6) gives
$$\|R_{k+1}\| \le (\delta^{k+1} + 1)\|R_k\| + \theta_k \le (\delta^{k+1} + 1) \frac{(-1; \delta)_{k+1}}{2(1 + \delta)} \cdot \frac{\varepsilon(1 - \varepsilon^k)}{1 - \varepsilon} + \theta_k \le (\delta^{k+1} + 1) \sum_{s=1}^{k} \prod_{i=2}^{k} (\delta^i + 1)\varepsilon^s + \varepsilon^{k+1} \le \sum_{s=1}^{k} \prod_{i=2}^{k+1} (\delta^i + 1)\varepsilon^s + \prod_{i=2}^{k+1} (\delta^i + 1)\varepsilon^{k+1} = \sum_{s=1}^{k+1} \prod_{i=2}^{k+1} (\delta^i + 1)\varepsilon^s.$$

Continuing the previous inequalities we obtain
$$\|R_{k+1}\| \le \frac{(-1; \delta)_{k+2}}{2(1 + \delta)} \sum_{s=1}^{k+1} \varepsilon^s = \frac{(-1; \delta)_{k+2}}{2(1 + \delta)} \cdot \frac{\varepsilon(1 - \varepsilon^{k+1})}{1 - \varepsilon} = O(\varepsilon).$$

In this way we complete the induction and prove $\|R_k\| = O(\varepsilon)$. Denote $Y = \lim Y_k$ and $Z = \lim Z_k$. Since $\sum_{i=0}^{\infty} \delta^i < \infty$, it follows that $(-1; \delta)_k$ converges to $(-1; \delta)_\infty$, and
$$\|Z - Y\| \le \frac{(-1; \delta)_\infty}{2(2 - \varepsilon)} \cdot \frac{\varepsilon}{1 - \varepsilon} = O(\varepsilon).$$

Therefore, the estimate (3.8) is verified and it is possible to use Z = Y. Now, following assumptions 1 and 3, we have
$$\frac{\|Y - Y_k\|}{\|Y\|} \le \left(\max_{1 \le i \le t} |1 - \beta\lambda_i|\right)^{2^k} + O(\varepsilon)$$

(see [30], Theorem 2.1). On the other hand we got the estimate for kRkk, which produces

kZ � Zkk ¼ kZ � Yk � Rkk ¼ kY � Yk � Rkk 6 kY � Ykk þ kRkk:

Finally, we get

$$\frac{\|Z - Z_k\|}{\|Z\|} \le \max_{1 \le i \le t}|1 - \beta\lambda_i|^{2^k} + O(\varepsilon) + \frac{(-1;\delta)_{k+1}}{2(1+\delta)\|Z\|}\cdot\frac{\varepsilon(1-\varepsilon^{k})}{1-\varepsilon},$$

which immediately confirms (3.7) and the quadratic convergence of the sequence (2.11). It is known that all norms on a finite dimensional vector space are equivalent, and we have found a norm for which the process converges. Therefore, the general convergence with respect to any norm follows. □

However, one can choose $\varepsilon_k$ to be constant during the process or to be dynamic (to depend on some changeable parameters). A realistic strategy is to choose the value of $\varepsilon$ according to some specific heuristics. Several heuristics for the choice of the parameter $\varepsilon_k$ are presented in the next section.

Remark 3.1. For the purpose of choosing an appropriate truncation value we remark that the restrictive condition $\|P\| \le 1 - \varepsilon$ in Theorem 3.2 is always satisfied in the case of the ordinary inverse, the Moore–Penrose inverse, the group inverse and the Drazin inverse. In the first two cases we have $R = M^*$, which yields $P = P^*$, i.e. $\rho(P) = \|P\|$, and in the case of the Drazin inverse $R = M^l$, $l \ge \operatorname{ind}(M)$, from which $P = H$ follows.


4. Application and numerical examples

The {2}-inverses have applications in iterative methods for solving nonlinear equations [1,24] as well as in statistics [11,16]. In particular, outer inverses play an important role in stable approximations of ill-posed problems and in linear and nonlinear problems involving rank-deficient generalized inverses [23,38].

On the other hand, the theory of Toeplitz matrices arises in scientific computing and engineering, for example in image processing, numerical differential equations, integral equations, time series analysis and control theory (see, for example, [5,19]). Toeplitz matrices are important in the study of discrete time random processes. The covariance matrices of weakly stationary processes are Toeplitz matrices. Triangular Toeplitz matrices provide a matrix representation of causal linear time invariant filters [10]. The Moore–Penrose inverse of a Toeplitz matrix is used in the image restoration process [3].

In the sequel we state some results which are relevant for computing outer inverses of an arbitrary Toeplitz matrix $M \in \mathbb{C}^{n \times n}$. In order to get an explicit representation of the outer inverse of $M$, which is the limit of the sequence of approximations (2.11) for given $M$ and $R$, we observe a full rank factorization of the matrix $R$. Therefore, we present the following result.

Corollary 4.1. For given $M \in \mathbb{C}^{n \times n}_q$ let us choose an arbitrary matrix $R \in \mathbb{C}^{n \times n}_s$, $0 \le s \le q$, and its full rank factorization $R = FG$, $F \in \mathbb{C}^{n \times s}_s$, $G \in \mathbb{C}^{s \times n}_s$, such that $GMF$ is invertible. Suppose that $\beta$ is a selected real number satisfying $\max_{1 \le i \le s}|1 - \beta\lambda_i| < 1$, where the $\lambda_i$ are eigenvalues of $FGM$. Under the assumption 2 from Theorem 3.2 the sequence of approximations given by (2.11) converges to the outer inverse of $M$,

$$Z = F(GMF)^{-1}G = M^{(2)}_{\mathcal{R}(F),\mathcal{N}(G)} \in M\{2\}_s.$$

Proof. Follows immediately from $Z = Y$, Theorem 2.3 from [30] and Theorem 3.2. □
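As a quick sanity check of Corollary 4.1, the closed form $Z = F(GMF)^{-1}G$ can be evaluated and its {2}-inverse property verified directly. The sketch below (NumPy, with a hypothetical helper name, not the authors' code) uses a random full rank factorization:

```python
import numpy as np

def outer_inverse(M, F, G):
    """Z = F (G M F)^{-1} G; assumes G @ M @ F is invertible."""
    return F @ np.linalg.solve(G @ M @ F, G)

rng = np.random.default_rng(0)
n, s = 6, 3
M = rng.standard_normal((n, n))
F = rng.standard_normal((n, s))   # full column rank (almost surely)
G = rng.standard_normal((s, n))   # full row rank (almost surely)

Z = outer_inverse(M, F, G)
assert np.allclose(Z @ M @ Z, Z)        # Z is a {2}-inverse of M
assert np.linalg.matrix_rank(Z) == s    # rank(Z) = s
```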

Corollary 4.2. Under the assumptions of Theorem 3.2 the following holds:

(i) $Z = M^D$ in the case $R = M^l$, $l \ge \operatorname{ind}(M)$;
(ii) $Z = M^\dagger$ in the case $R = M^*$;
(iii) $Z = M^\#$ in the case $R = M$;
(iv) $Z \in M\{1,2\}$ for $s = q$.

Proof. It suffices to use the following representations:

$$M^D = M^{(2)}_{\mathcal{R}(M^l),\mathcal{N}(M^l)},\ l \ge \operatorname{ind}(M); \qquad M^\dagger = M^{(2)}_{\mathcal{R}(M^*),\mathcal{N}(M^*)}; \qquad M^\# = M^{(2)}_{\mathcal{R}(M),\mathcal{N}(M)}; \qquad M\{1,2\} = M\{2\}_q$$

in conjunction with Corollary 4.1. □

Next, we report some numerical results obtained from testing our method, named the modified SMS method, for computing outer inverses of a Toeplitz matrix. We select 5 different types of Toeplitz matrices. During the execution of the algorithm we observe and later report three different indicators: the number of iterations (No of iter), the maximum displacement rank (Mdrk), as well as the sum of the displacement ranks (Sdrk). The code is written in the programming package MATLAB on a Workstation Intel Celeron 2.0 GHz. The stopping condition for Algorithm 2.2 is $\|\mathrm{res}(Z_k)\| \le 10^{-6}$. Also, the value of the starting parameter is $\beta = 1/\|RM\|$, where the matrix $R$ is chosen according to Theorem 3.2 and Corollary 4.2.

In order to find a unified approach in choosing the parameter $\varepsilon$ for all test matrices we use the static choice $\varepsilon = 10^{-8}$, which is roughly the square root of the double precision machine epsilon. In the case of computing the inverse as well as the group inverse of the matrix we present a strategy of choosing $\varepsilon$ according to some specific heuristic. This heuristic yields improvements of the algorithm such as a lower Mdrk and a lower Sdrk, which decrease the computational cost of the algorithm and the number of operations per step.
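In dense arithmetic, ignoring the displacement-rank compression and truncation that make the actual algorithm fast, the update $Z_{k+1} = (P_k + I)Z_k = 2Z_k - Z_kMZ_k$ used in the convergence proof can be sketched directly with $Z_0 = \beta R$. The concrete residual used for stopping below is an assumption, since $\mathrm{res}(Z_k)$ is defined earlier in the paper:

```python
import numpy as np
from scipy.linalg import toeplitz

def schulz_iteration(M, R, tol=1e-6, maxit=100):
    """Dense sketch of Z_{k+1} = 2 Z_k - Z_k M Z_k with Z_0 = R / ||R M||."""
    Z = R / np.linalg.norm(R @ M, 2)   # beta = 1/||RM||, as in the text
    for k in range(1, maxit + 1):
        Z_next = 2.0 * Z - Z @ M @ Z
        # Assumed stopping rule: step difference below the tolerance.
        if np.linalg.norm(Z_next - Z) <= tol:
            return Z_next, k
        Z = Z_next
    return Z, maxit

# Symmetric positive definite Toeplitz test matrix; with R = M^T the
# iteration converges to the ordinary inverse (cf. Corollary 4.2 (ii)).
M = toeplitz([4.0, 1.0, 0.0, 0.0, 0.0])
Z, iters = schulz_iteration(M, M.T)
assert np.allclose(Z @ M, np.eye(5), atol=1e-5)
```

With $R = M^l$ the same loop targets the Drazin inverse, provided the spectral condition $\max_i|1-\beta\lambda_i| < 1$ holds.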

Remark 4.1. Based on Theorem 3.2, similarly as in [30], we can also determine a priori the number of steps required for achieving a predefined accuracy. It is obvious that the number of steps for SMS and modified SMS is identical.

By M[p|q] we denote the submatrix of M obtained by extracting the rows indexed by p and the columns indexed by q. When q = {1,2,...,n}, we denote M[p|q] by M[p|:], and in the case p = {1,2,...,n}, we denote M[p|q] by M[:|q]. The sequence {1,...,s} is simply denoted by 1:s.
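For illustration, the same submatrix notation maps directly onto NumPy indexing (0-based indices, unlike the 1-based indices of the paper):

```python
import numpy as np

M = np.arange(16).reshape(4, 4)
p, q = [0, 2], [1, 3]

sub  = M[np.ix_(p, q)]   # M[p|q]: rows p, columns q
rows = M[p, :]           # M[p|:]
cols = M[:, q]           # M[:|q]
first_s = M[:, :3]       # M[:|1:s] with s = 3

assert sub.shape == (2, 2) and rows.shape == (2, 4) and cols.shape == (4, 2)
```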

Example 4.1. In this example we consider the Toeplitz matrices $M \in \mathbb{R}^{n \times n}_q$ and $R \in \mathbb{R}^{n \times n}_s$ ($n$ is an odd number) given by

$$M = \mathrm{toeplitz}[(1,0,\ldots,0,1)], \qquad R = \mathrm{toeplitz}[(1,0,\ldots,0,1,0,\ldots,0,1)],$$

where $s = (n-1)/2 < q = n-1$. The vector which represents $R$ has 1 in the middle position.

Page 10: Modified SMS method for computing outer inverses of Toeplitz matrices

3140 M. Miladinovic et al. / Applied Mathematics and Computation 218 (2011) 3131–3143

One of the possible full rank factorizations of the matrix R = FG is F = R[:|1:s], G = R[1:s|:]. In the case n = 7 we have


$$R = \begin{bmatrix} 1&0&0&1&0&0&1 \\ 0&1&0&0&1&0&0 \\ 0&0&1&0&0&1&0 \\ 1&0&0&1&0&0&1 \\ 0&1&0&0&1&0&0 \\ 0&0&1&0&0&1&0 \\ 1&0&0&1&0&0&1 \end{bmatrix} = \begin{bmatrix} 1&0&0 \\ 0&1&0 \\ 0&0&1 \\ 1&0&0 \\ 0&1&0 \\ 0&0&1 \\ 1&0&0 \end{bmatrix} \cdot \begin{bmatrix} 1&0&0&1&0&0&1 \\ 0&1&0&0&1&0&0 \\ 0&0&1&0&0&1&0 \end{bmatrix}.$$

Based on Corollary 4.1, the explicit representation of the outer inverse is

$$M^{(2)}_{\mathcal{R}(F),\mathcal{N}(G)} = F(GMF)^{-1}G = \begin{bmatrix} 0.2&0&0&0.2&0&0&0.2 \\ 0&0.5&0&0&0.5&0&0 \\ 0&0&0.5&0&0&0.5&0 \\ 0.2&0&0&0.2&0&0&0.2 \\ 0&0.5&0&0&0.5&0&0 \\ 0&0&0.5&0&0&0.5&0 \\ 0.2&0&0&0.2&0&0&0.2 \end{bmatrix}.$$

The sequence of approximations {Z_k} given by Algorithm 2.2 converges to the outer inverse given by this explicit representation. In Table 1 we present numerical results obtained from computing the outer inverse of M by Algorithm 2.2 for several different dimensions of M. The number of iterations of the original SMS method is also presented for comparison.

As one can see from Table 1, the number of iterations of the original SMS method is the same as the number of iterative steps obtained by our method. Also, the Mdrk is very low for all dimensions of the matrix M, i.e. Mdrk = 3.

In the remainder of the section, for each test matrix, we consider 8 different numerical experiments with the dimensions 16, 32, 64, 128, 256, 512, 1024 and 2048.

Example 4.2. The inverses of the two matrices

$$M_1 = \mathrm{toeplitz}[(1, 1/2, \ldots, 1/n)], \qquad M_2 = \mathrm{toeplitz}[(4, 1, 0, \ldots, 0)],$$

chosen from [2], are computed in the case of $R = M^*$. Numerical results obtained during the computation of the inverses of the matrices $M_1$ and $M_2$ are given in Tables 2 and 3. Besides the static choice of the truncation value $\varepsilon = 10^{-8}$, we observe the additional heuristic $\varepsilon = \max(\mathrm{res}/\|M\|, 10^{-8})$.

We see that the number of iterations is almost identical for the two different heuristics. On the other hand, it is evident that, with respect to all other indicators, a great improvement is achieved by putting $\varepsilon = \max(\mathrm{res}/\|M\|, 10^{-8})$ instead of $\varepsilon = 10^{-8}$. These improvements are obtained for both matrices $M_1$ and $M_2$.

Example 4.3. The test matrix

$$M = \mathrm{toeplitz}[(1, 1/2, \ldots, 1/(n-1), 1),\ (1, 1/(n-1), \ldots, 1/2, 1)]$$

is taken from [4]. Tables 4 and 5 report the numerical results obtained during the computation of the group inverse (generated in the case $R = M$), as well as the Moore–Penrose inverse ($R = M^*$) of a Toeplitz matrix $M \in \mathbb{R}^{n \times n}$.

Let us mention that a dynamic heuristic different from the one used in Example 4.2 is chosen for computing the group inverse. It is obvious that the presented heuristic, which makes $\varepsilon$ at least as small as $10^{-6}$, provides significant improvements of the observed indicators in comparison to the static choice $\varepsilon = 10^{-8}$.

Table 1
Numerical results generated by computing the outer inverse of M.

       Modified SMS                   SMS
Dim    No of iter   Mdrk   Sdrk      No of iter
17     5            3      15        5
33     5            3      15        5
65     5            3      15        5
129    5            3      15        5
257    5            3      15        5
513    5            3      15        5
1025   5            3      15        5
2049   5            3      15        5


Table 2
Numerical results generated by computing the inverse of M1.

       Modified SMS                                               SMS
       ε = max(res/‖A‖, 10^-8)      ε = 10^-8
Dim    No of iter   Mdrk   Sdrk     No of iter   Mdrk   Sdrk     No of iter
16     11           2      15       11           13     105      11
32     11           2      17       12           16     129      12
64     12           2      20       13           16     148      13
128    13           2      25       13           16     158      13
256    13           3      27       14           16     171      14
512    14           4      35       14           16     176      14
1024   14           4      38       14           16     180      14
2048   14           4      42       15           16     189      15

Table 3
Numerical results generated by computing the inverse of M2.

       Modified SMS                                               SMS
       ε = max(res/‖A‖, 10^-8)      ε = 10^-8
Dim    No of iter   Mdrk   Sdrk     No of iter   Mdrk   Sdrk     No of iter
16     7            2      11       7            13     65       7
32     7            2      11       7            13     69       7
64     7            3      16       7            13     69       7
128    7            3      18       7            13     69       7
256    7            3      18       7            13     69       7
512    7            4      20       7            13     69       7
1024   7            4      21       7            13     69       7
2048   7            4      21       7            13     69       7

Table 4
Numerical results for computing the group inverse.

       Modified SMS                                               SMS
       ε = min((1 + 10^-6 − ‖P_k‖^2)/‖A‖, 10^-6)    ε = 10^-8
Dim    No of iter   Mdrk   Sdrk     No of iter   Mdrk   Sdrk     No of iter
16     9            9      60       9            9      66       9
32     10           10     66       10           10     76       10
64     10           10     68       10           11     81       10
128    10           9      72       10           12     87       10
256    11           9      77       11           11     92       11
512    11           9      80       11           11     91       11
1024   11           9      82       11           11     93       11
2048   11           9      82       11           11     95       11

Table 5
Numerical results generated by computing the Moore–Penrose inverse.

       Modified SMS                   SMS
Dim    No of iter   Mdrk   Sdrk      No of iter
16     9            13     84        9
32     10           14     98        10
64     10           15     107       10
128    10           15     112       10
256    11           15     119       11
512    11           15     122       11
1024   11           15     124       11
2048   11           16     128       11


Table 6
Numerical results generated by computing the Drazin inverse of M.

       Modified SMS                   SMS
Dim    No of iter   Mdrk   Sdrk      No of iter
16     8            6      48        8
32     8            6      48        8
64     8            6      48        8
128    8            6      48        8
256    8            6      48        8
512    8            6      48        8
1024   8            6      48        8
2048   8            6      48        8


It is interesting to point out that the results presented in Tables 4 and 5 are obtained for the same starting matrix M. Also, the group and the Moore–Penrose inverse of the matrix M are identical. As one can see from the tables, although the number of iterations is the same, the Mdrk as well as the Sdrk obtained from the computation of the group inverse is essentially smaller than in the case of computing the Moore–Penrose inverse, for the same choice of the parameter $\varepsilon = 10^{-8}$.

Example 4.4. Let us consider the matrix $M \in \mathbb{R}^{n \times n}$ represented by

$$M = \mathrm{toeplitz}[(0,\ldots,0,1,1,0,\ldots,0),\ (0,0,\ldots,0,1,1)].$$

Since the index of the matrix M is ind(M) = 2, the Drazin inverse of M is computed in the case $R = M^l$, $l \ge 2$. For the chosen matrix R we have rank(R) = 6. According to Corollary 4.2, the iterative process of the modified SMS method converges to the Drazin inverse. Using the static choice $\varepsilon = 10^{-8}$ we get the following results.

It is evident from Table 6 that Mdrk = 6, which is a good result having in mind that $\mathrm{drk}(Z_0) = \mathrm{drk}(\beta R) = 4$.
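The drk values reported in the tables can be illustrated with the classical Toeplitz displacement operator; this is a generic sketch (the paper's specific operator and shift pair are defined earlier in the text and may differ):

```python
import numpy as np
from scipy.linalg import toeplitz

def displacement_rank(A):
    """rank(A - Z A Z^T) for the lower shift Z (one classical choice)."""
    Z = np.eye(A.shape[0], k=-1)
    return np.linalg.matrix_rank(A - Z @ A @ Z.T)

A = toeplitz(np.arange(1.0, 9.0), np.arange(1.0, 9.0) ** 2)
assert displacement_rank(A) <= 2     # any Toeplitz matrix: at most 2
B = np.random.default_rng(1).standard_normal((8, 8))
assert displacement_rank(B) > 2      # a generic matrix: high displacement rank
```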

5. Conclusion

The Newton’s method originally proposed by Shultz in 1933 [29] is a very important tool for computing the inverse aswell as the generalized inverses of an arbitrary structured matrix. There exist several different modifications of the Newton’smethod dealing with a Toeplitz matrix. Each of them is intended for computing either the ordinary inverse, the group inverseor the Moore–Penrose inverse [2–4,32]. Various heuristics in the choice of the starting approximation, in defining the iter-ative rules as well as in the selection of displacement operators are used in these papers. Hence, the necessity of having aunified algorithm is obvious.

In this paper we use the idea introduced in [30], accompanied by the concept of the displacement operator and the displacement rank of the iteration matrices. Namely, we present a modification of the SMS algorithm for computing the outer inverse with prescribed range and null space of an arbitrary Toeplitz matrix. Our method is actually a generalization of the ideas from [2,4] and represents a universal algorithm for computing generalized inverses which contains these results as particular cases. Furthermore, by using circulant and anti-circulant matrices for obtaining the odr of the sequence of approximations Z_k, we propose an approach for computing the Moore–Penrose inverse of a square Toeplitz matrix different from the method of [3,32]. Also, by giving a unified approach we resolve the previously mentioned diversities in strategies for choosing the starting approximations, as well as the diversities in defining the iterative rules and in the usage of various displacement operators.

The presented quadratic convergence and the numerical results show that our algorithm is as good as the SMS algorithm for computing outer generalized inverses when it comes to the number of iterations. Furthermore, the achieved low values of Mdrk and Sdrk evidently decrease the computational cost per step, having in mind that FFTs and convolutions are used in the matrix-vector multiplications. Additionally, three different heuristics are used for the choice of the truncation value $\varepsilon_k$. For the given test matrices, the numerical results presented in the paper favour the strategy of a dynamic choice of the truncation parameter over the static choice.
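The FFT-based matrix-vector products mentioned above rely on embedding an n × n Toeplitz matrix in a 2n × 2n circulant and multiplying by convolution in O(n log n). A minimal sketch, assuming the standard 2n-point embedding rather than the paper's exact routine:

```python
import numpy as np
from scipy.linalg import toeplitz

def toeplitz_matvec(c, r, x):
    """y = toeplitz(c, r) @ x in O(n log n) via circulant embedding."""
    n = len(c)
    # First column of the embedding circulant: c, one zero, then r reversed.
    col = np.concatenate([c, [0.0], r[-1:0:-1]])
    y = np.fft.ifft(np.fft.fft(col) * np.fft.fft(x, 2 * n))
    return y[:n].real

c = np.array([4.0, 1.0, 0.0, 0.0])   # first column
r = np.array([4.0, 2.0, 0.0, 0.0])   # first row
x = np.array([1.0, 2.0, 3.0, 4.0])
assert np.allclose(toeplitz_matvec(c, r, x), toeplitz(c, r) @ x)
```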

References

[1] A. Ben-Israel, T.N.E. Greville, Generalized Inverses: Theory and Applications, second ed., Springer, 2003.
[2] D. Bini, B. Meini, Approximate displacement rank and applications, in: V. Olshevsky (Ed.), Structured Matrices in Mathematics, Computer Science and Engineering II, Contemporary Mathematics 281, American Mathematical Society, Rhode Island, 2001, pp. 215–232.
[3] D. Bini, G. Codevico, M. Van Barel, Solving Toeplitz least squares problems by means of Newton's iteration, Numer. Algorithms 33 (2003) 93–103.
[4] J.F. Cai, M.K. Ng, Y. Wei, Modified Newton's algorithm for computing the group inverses of singular Toeplitz matrices, J. Comput. Math. 24 (2006) 647–656.
[5] R. Chan, M. Ng, Conjugate gradient methods for Toeplitz systems, SIAM Rev. 38 (1996) 427–482.
[6] L. Chen, E.V. Krishnamurthy, I. Macleod, Generalized matrix inversion and rank computation by successive matrix powering, Parallel Comput. 20 (1994) 297–311.


[7] G. Codevico, V.Y. Pan, M. Van Barel, X. Wang, A. Zheng, The least squares compression policy for Newton-like iteration of structured matrices, in: P. Mitic, J. Carne (Eds.), IMS2004 Proceedings, 2004, pp. 1–22.
[8] G. Codevico, V.Y. Pan, M. Van Barel, Newton-like iteration based on a cubic polynomial for structured matrices, Numer. Algorithms 36 (4) (2004) 365–380.
[9] H. Diao, Y. Wei, S. Qiao, Displacement rank of the Drazin inverse, J. Comput. Appl. Math. 167 (2004) 147–161.
[10] U. Grenander, M. Rosenblatt, Statistical Analysis of Stationary Time Series, Wiley and Sons, New York, 1966.
[11] A.J. Getson, F.C. Hsuan, {2}-Inverses and their Statistical Applications, Lecture Notes in Statistics 47, Springer, Berlin, 1988.
[12] W. Hackbusch, B.N. Khoromskij, E.E. Tyrtyshnikov, Approximate iterations for structured matrices, Numer. Math. 109 (2008) 365–383.
[13] G. Heinig, K. Rost, Algebraic Methods for Toeplitz-like Matrices and Operators, Akademie-Verlag, Berlin and Birkhäuser, 1984.
[14] G. Heinig, F. Hellinger, Displacement structure of pseudoinverses, Linear Algebra Appl. 197/198 (1994) 623–649.
[15] R.A. Horn, C.R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1986.
[16] F. Husen, P. Langenberg, A. Getson, The {2}-inverse with applications to statistics, Linear Algebra Appl. 70 (1985) 241–248.
[17] T. Kailath, S.Y. Kung, M. Morf, Displacement ranks of matrices and linear equations, J. Math. Anal. Appl. 68 (1979) 395–407.
[18] T. Kailath, A.H. Sayed, Fast algorithms for generalized displacement structures, in: H. Kimura, S. Kodama (Eds.), Recent Advances in Mathematical Theory of Systems, Control, Networks and Signal Processing II, Proceedings of the MTNS-91, Mita Press, Japan, 1992, pp. 27–32.
[19] T. Kailath, A.H. Sayed, Displacement structure: theory and applications, SIAM Rev. 37 (1995) 297–386.
[20] T. Kailath, S.Y. Kung, M. Morf, Displacement ranks of a matrix, Bull. Amer. Math. Soc. 1 (1979) 769–773.
[21] S. Li, Displacement structure of the generalized inverse $A^{(2)}_{T,S}$, Appl. Math. Comput. 156 (2004) 33–40.
[22] X. Liu, C. Hu, Y. Yu, Further results on iterative methods for computing generalized inverses, J. Comput. Appl. Math. 234 (2010) 684–694.
[23] M.Z. Nashed, Generalized Inverses and Applications, Academic Press, New York, 1976.
[24] M.Z. Nashed, X. Chen, Convergence of Newton-like methods for singular operator equations using outer inverses, Numer. Math. 66 (1993) 235–257.
[25] V.Y. Pan, Structured Matrices and Polynomials: Unified Superfast Algorithms, Birkhäuser/Springer, 2001.
[26] V.Y. Pan, R. Schreiber, An improved Newton iteration for the generalized inverse of a matrix, with applications, SIAM J. Sci. Stat. Comput. 12 (1991) 1109–1131.
[27] V.Y. Pan, Y. Rami, X. Wang, Structured matrices and Newton's iteration: unified approach, Linear Algebra Appl. 343–344 (2002) 233–265.
[28] V.Y. Pan, M. Van Barel, X.M. Wang, G. Codevico, Iterative inversion of structured matrices, Theor. Comput. Sci. 315 (2004) 581–592.
[29] G. Schulz, Iterative Berechnung der reziproken Matrix, ZAMM Z. Angew. Math. Mech. 13 (1933) 57–59.
[30] P. Stanimirović, D. Cvetković-Ilić, Successive matrix squaring algorithm for computing outer inverses, Appl. Math. Comput. 203 (2008) 19–29.
[31] W.F. Trench, An algorithm for the inversion of finite Toeplitz matrices, SIAM J. Appl. Math. 12 (1964) 515–522.
[32] Y. Wei, J. Cai, M.K. Ng, Computing Moore–Penrose inverses of Toeplitz matrices by Newton's iteration, Math. Comput. Model. 40 (2004) 181–191.
[33] Y. Wei, Successive matrix squaring algorithm for computing the Drazin inverse, Appl. Math. Comput. 108 (2000) 67–75.
[34] Y. Wei, H. Wu, J. Wei, Successive matrix squaring algorithm for parallel computing the weighted generalized inverse $A^{\dagger}_{MN}$, Appl. Math. Comput. 116 (2000) 289–296.
[35] Y. Wei, M.K. Ng, Displacement structure of group inverses, Numer. Linear Algebra Appl. 12 (2005) 103–110.
[36] Y. Yu, Y. Wei, Determinantal representation of the generalized inverse $A^{(2)}_{T,S}$ over integral domains and its applications, Linear Multilinear Algebra 57 (2009) 547–559.
[37] Y. Yu, Y. Wei, The representation and computational procedures for the generalized inverse $A^{(2)}_{T,S}$ of an operator A in Hilbert spaces, Numer. Funct. Anal. Optim. 30 (2009) 168–182.
[38] B. Zheng, R.B. Bapat, Generalized inverse $A^{(2)}_{T,S}$ and a rank equation, Appl. Math. Comput. 155 (2004) 407–415.