Additive rank-one preserving mappings on triangular matrix algebras



Linear Algebra and its Applications 312 (2000) 13–33, www.elsevier.com/locate/laa


Jeremy Bell 1, A.R. Sourour ∗,2

Department of Mathematics and Statistics, University of Victoria, Victoria, British Columbia, Canada V8W 3P4

Received 15 September 1999; accepted 27 December 1999

Submitted by M. Marcus

Abstract

We classify surjective additive maps on the space of block upper triangular matrices that preserve matrices of rank one, as well as linear maps preserving matrices of rank one on fairly general subspaces of matrices. © 2000 Elsevier Science Inc. All rights reserved.

AMS classification: Primary 15A04; Secondary 15A03

Keywords: Triangular matrix algebras; Rank-one preservers

1. Introduction

Characterizing linear maps on spaces of matrices or operators that preserve certain subsets or properties has been an active area of research for quite a while. Of these so-called linear preserver problems, one of the most basic is arguably that of the rank-one preservers. Indeed, several other questions about preservers may be reduced to, or solved with the help of, rank-one preservers. This has been observed in [10,14]. For example, preserving commutativity [4,15,17], spectrum [7,13] or invertibility [16] involves rank-one preservers. Classifying isomorphisms of several types of

∗ Corresponding author.
E-mail addresses: [email protected] (J. Bell), [email protected] (A.R. Sourour).

1 Research supported by an NSERC undergraduate student research award.
2 Research supported by an NSERC research grant.

0024-3795/00/$ - see front matter © 2000 Elsevier Science Inc. All rights reserved.
PII: S0024-3795(00)00024-0

14 J. Bell, A.R. Sourour / Linear Algebra and its Applications 312 (2000) 13–33

operator algebras is frequently accomplished by exploiting the fact that they preserve rank-one operators; see, e.g., [6, Chapter 17].

The linear rank-one preservers on the space of all n × n matrices were characterized by Marcus and Moyls [11]. They show that every such map is a composition of a left multiplication L_A by an invertible matrix A, a right multiplication R_B by an invertible matrix B, and possibly the transpose map. Related results may be found in [2,3,8,9,12]. More recently, Omladič and Šemrl [14] characterized surjective additive maps on the space of finite rank operators on real or complex Banach spaces. In the case of finite-dimensional spaces, they show that every such map is a composition of the three types of maps described above and a fourth type induced by an automorphism of the underlying field.

The main purpose of this paper is to characterize additive rank-one preserving maps on algebras of block upper triangular matrices (Sections 5 and 7). In addition to the four types of maps described above, we identify a fifth type that may occur. Furthermore, we also identify linear rank-one preserving maps on fairly general subspaces of matrices (Section 3).

Let us now fix some notation and terminology. By Mmn(F), we denote the space of all m × n matrices over an arbitrary field F, and as usual Mn = Mnn. Given two vectors u ∈ F^m and v ∈ F^n, we shall denote by u ⊗ v the m × n matrix uvᵗ, which we may associate with the operator z ↦ (vᵗz)u from F^n into F^m. It is obvious that a matrix A has rank one if and only if A = u ⊗ v for nonzero u and v. The standard basis for F^n is denoted by {ek}, i.e., e1 = (1, 0, . . . , 0), e2 = (0, 1, 0, . . . , 0), etc. The matrix units ei ⊗ ej are denoted by Eij.

A map ϕ from a space S1 of matrices into a space S2 of matrices is said to preserve matrices of rank one if ϕ(T) is of rank one whenever T has rank one. It is said to preserve rank-one matrices in both directions when ϕ(T) is of rank one if and only if T has rank one. A left multiplication L_A is the map T ↦ AT for a fixed matrix A. Right multiplications R_B are defined analogously.
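As a quick illustration of this notation, the following sketch (NumPy; the specific vectors and matrices are arbitrary illustrative choices, not taken from the paper) checks that u ⊗ v = uvᵗ has rank one and that left and right multiplications by invertible matrices preserve it:

```python
import numpy as np

def tensor(u, v):
    """u ⊗ v: the m x n matrix u v^t, i.e. the operator z -> (v^t z) u."""
    return np.outer(u, v)

u = np.array([1.0, 2.0, 0.0])    # arbitrary nonzero u in F^3
v = np.array([0.0, 3.0, -1.0])   # arbitrary nonzero v in F^3
T = tensor(u, v)
assert np.linalg.matrix_rank(T) == 1          # u ⊗ v has rank one

# Left multiplication L_A and right multiplication R_B by invertible
# matrices preserve rank; moreover A(u ⊗ v)B = (Au) ⊗ (B^t v).
A = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 2.0],
              [0.0, 0.0, 1.0]])               # invertible (unit triangular)
B = np.array([[2.0, 0.0, 1.0],
              [0.0, 1.0, 0.0],
              [0.0, 0.0, 1.0]])               # invertible
assert np.linalg.matrix_rank(A @ T @ B) == 1
assert np.allclose(A @ T @ B, tensor(A @ u, B.T @ v))
```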

We make use of a particular permutation matrix J given by

        ⎡0 0 … 0 1⎤
        ⎢0 0 … 1 0⎥
    J = ⎢⋮       ⋮⎥        (1)
        ⎢0 1 … 0 0⎥
        ⎣1 0 … 0 0⎦

i.e., J = [δ_{i,n+1-i}], where δ is the Kronecker delta symbol. If Tᵗ denotes the transpose of T, then it is straightforward to verify that

T⁺ := JTᵗJ        (2)

maps the algebra of upper triangular matrices onto itself and preserves rank-one matrices. We observe that T ↦ T⁺ may also be described as the transpose with respect to the anti-diagonal, i.e., the “diagonal” that contains the positions (i, n + 1 − i).
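A small numerical sketch of the map in Eq. (2) (NumPy, with an arbitrary 4 × 4 upper triangular example):

```python
import numpy as np

n = 4
J = np.eye(n)[::-1]          # J = [delta_{i, n+1-i}]: the backward identity

def plus(T):
    """T^+ := J T^t J, transposition across the anti-diagonal."""
    return J @ T.T @ J

T = np.triu(np.arange(1.0, n * n + 1).reshape(n, n))  # arbitrary upper triangular
Tp = plus(T)

assert np.allclose(Tp, np.triu(Tp))      # T^+ is again upper triangular
assert np.linalg.matrix_rank(Tp) == np.linalg.matrix_rank(T)  # rank preserved
assert np.allclose(plus(Tp), T)          # T -> T^+ is an involution
# In 1-based indexing, (T^+)_{ij} = T_{n+1-j, n+1-i}; e.g. the corner
# diagonal entries are swapped:
assert Tp[0, 0] == T[n - 1, n - 1] and Tp[n - 1, n - 1] == T[0, 0]
```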

J. Bell, A.R. Sourour / Linear Algebra and its Applications 312 (2000) 13–33 15

We will also find it useful to define a space of rank-one matrices to be a subspace S of matrices with the property that all nonzero matrices in S have rank one.

2. Preliminaries

We prove a lemma which will be used frequently in the following sections.

Lemma 2.1. Assume that x1, x2, u are nonzero vectors in F^n, that y1, y2, v are nonzero vectors in F^m, and that each of the three linear transformations

C1 = u ⊗ v + x1 ⊗ y1,
C2 = u ⊗ v + x2 ⊗ y2,
C3 = u ⊗ v + x1 ⊗ y1 + x2 ⊗ y2 + x1 ⊗ y2

has rank one. Then:
(i) If x1, x2 are linearly independent, and y1, y2 are linearly independent, then u ⊗ v = x2 ⊗ y1.

(ii) If x1, x2 are linearly independent but y1, y2 are linearly dependent, then v is a scalar multiple of y1.

(iii) If x1, x2 are linearly dependent but y1, y2 are linearly independent, then u is a scalar multiple of x1.

(iv) If both pairs {x1, x2} and {y1, y2} are linearly dependent, then u is a scalar multiple of x1 or v is a scalar multiple of y1.

Proof. (i) If u, x1 are linearly independent, then v = c1y1 for some c1 ∈ F, since rank C1 = 1. Now v, y2 are linearly independent, and so u = c2x2 for some c2 ∈ F, since rank C2 = 1. Now C3 = x1 ⊗ (y1 + y2) + x2 ⊗ (c1c2y1 + y2) has rank one, implying that y1 + y2 and c1c2y1 + y2 are linearly dependent; hence c1c2 = 1 and u ⊗ v = x2 ⊗ y1. On the other hand, if u = c1x1, then v = c2y2 since rank C2 = 1, and hence C3 = (c1c2 + 1)x1 ⊗ y2 + x1 ⊗ y1 + x2 ⊗ y2, which is never of rank one.

Parts (ii)–(iv) are quite easy to verify. □

3. Linear maps

In this section, we characterize rank-one preserving linear maps on fairly general subspaces of matrices.

Theorem 3.1. Let ℒ be a subspace of Mmn(F) satisfying the following conditions:
(a) ℒ contains x0 ⊗ F^n for some x0 ∈ F^m;
(b) ℒ contains F^m ⊗ y0 for some y0 ∈ F^n;


(c) ℒ is spanned by its rank-one matrices.
Let ϕ : ℒ → Mkl(F) be a linear mapping preserving rank-one matrices. Then either
(i) m ≤ k, n ≤ l, and there exist a k × m matrix A of rank m and an n × l matrix B of rank n such that

ϕ(T) = ATB for every T ∈ ℒ,

or
(ii) m ≤ l, n ≤ k, and there exist a k × n matrix A of rank n and an m × l matrix B of rank m such that

ϕ(T) = ATᵗB for every T ∈ ℒ,

or
(iii) ϕ(ℒ) is a space of rank-one matrices.

Before proving Theorem 3.1, we state two immediate corollaries.

Corollary 3.2. With the same notation as above, if ϕ preserves rank one, then either ϕ preserves every rank, i.e., rank ϕ(T) = rank T for every T ∈ ℒ, or ϕ(ℒ) is a space of rank-one matrices.

Corollary 3.3. With the same notation as above, if ϕ preserves rank one in both directions, then:
(a) ϕ is of the form (i) or (ii) of Theorem 3.1;
(b) ϕ is injective;
(c) ϕ preserves every rank.

Proof of Theorem 3.1. The image under ϕ of x0 ⊗ F^n is a vector space of rank-one matrices. Then ϕ(x0 ⊗ F^n) = V ⊗ v0 or u0 ⊗ W, for some subspace V of F^k and a vector v0 ∈ F^l, or a subspace W of F^l and a vector u0 ∈ F^k. Replacing ϕ by the map ψ(T) = ϕ(Tᵗ) if necessary, we may assume with no loss of generality that ϕ(x0 ⊗ F^n) = u0 ⊗ W. Since the kernel of ϕ contains no matrices of rank one, dim W = n. Consequently, l ≥ n and ϕ(x0 ⊗ y) = u0 ⊗ g(y) for some injective linear transformation g : F^n → F^l, i.e., ϕ(x0 ⊗ y) = u0 ⊗ Bᵗy for an n × l matrix B of rank n.

Similarly, ϕ(F^m ⊗ y0) is a space of rank-one matrices and hence takes one of the two forms mentioned above. We consider two cases.

Case 1. ϕ(F^m ⊗ y0) ⊆ u1 ⊗ F^l. Therefore ϕ(x ⊗ y0) = u1 ⊗ h(x) for an injective linear transformation h. Since ϕ(x0 ⊗ y0) = u0 ⊗ w = u1 ⊗ w′ for nonzero vectors w and w′, the vectors u0 and u1 are linearly dependent. But u1 is only determined up to a multiplicative scalar; hence we may assume that u0 = u1.

We show that the image under ϕ of every rank-one matrix in ℒ is contained in u0 ⊗ F^l, and consequently ϕ(ℒ) ⊆ u0 ⊗ F^l. Assume, to the contrary, that there


exist nonzero vectors x, y, u, v such that ϕ(x ⊗ y) = u ⊗ v and {u0, u} are linearly independent. Let K1 = x ⊗ y, and

K2 = (x + x0) ⊗ y,  K3 = x ⊗ (y + y0),  K4 = (x + x0) ⊗ (y + y0),

and let Cj = ϕ(Kj), 1 ≤ j ≤ 4. Thus C1, C2, C3, C4 are all of rank one, C1 = u ⊗ v,

C2 = u ⊗ v + u0 ⊗ g(y),
C3 = u ⊗ v + u0 ⊗ h(x),
and
C4 = u ⊗ v + u0 ⊗ (h(x) + g(y) + g(y0)).

Since u and u0 are linearly independent, we conclude that v = αg(y) = βh(x) = γg(y0) for nonzero scalars α, β and γ. It follows that ϕ((βx − γx0) ⊗ y0) = 0, contradicting the rank-one preserving property. This establishes that ϕ(ℒ) ⊆ u0 ⊗ F^l, a space of rank-one matrices.

Case 2. ϕ(F^m ⊗ y0) ⊆ F^k ⊗ v0. As before, we have ϕ(x ⊗ y0) = Ax ⊗ v0 for a k × m matrix A of rank m, i.e., an injective linear transformation from F^m into F^k. Furthermore, u0 ⊗ Bᵗy0 = ϕ(x0 ⊗ y0) = Ax0 ⊗ v0. After absorbing a constant in u0 and v0 if necessary, we may assume that Ax0 = u0 and Bᵗy0 = v0.

Now consider an arbitrary rank-one matrix K1 = x ⊗ y ∈ ℒ. Let

K2 = (x + x0) ⊗ y,  K3 = x ⊗ (y + y0),  K4 = (x + x0) ⊗ (y + y0),

and let Cj = ϕ(Kj), 1 ≤ j ≤ 4. Thus C1, C2, C3, C4 are all of rank one. If C1 = u ⊗ v, then

C2 = u ⊗ v + u0 ⊗ Bᵗy,
C3 = u ⊗ v + Ax ⊗ v0,
and
C4 = u ⊗ v + u0 ⊗ Bᵗy + Ax ⊗ v0 + u0 ⊗ v0.

If u0, Ax are linearly independent and v0, Bᵗy are linearly independent, then by Lemma 2.1 we conclude that

ϕ(x ⊗ y) = Ax ⊗ Bᵗy.

On the other hand, if Ax = cu0 for a scalar c, then ϕ((x − cx0) ⊗ y0) = Ax ⊗ v0 − cu0 ⊗ v0 = 0. Thus (x − cx0) ⊗ y0 is not of rank one, and so x = cx0. In this case, we also get ϕ(x ⊗ y) = cu0 ⊗ Bᵗy = Ax ⊗ Bᵗy. A similar argument proves the same conclusion when Bᵗy and v0 are linearly dependent. Therefore ϕ(x ⊗ y) = A(x ⊗ y)B for every rank-one matrix x ⊗ y ∈ ℒ. From condition (c) we conclude that ϕ(T) = ATB for every T ∈ ℒ. □

We will now give a couple of examples to illustrate that conditions (a) and (b) in Theorem 3.1 cannot be removed. It is quite easy to construct examples violating condition (c).

Example 3.4. Let ℒ be the subspace of M3 spanned by E11, E12, E22, E23, E33, and define ϕ : ℒ → M3 by

      ⎛⎡a11 a12  0 ⎤⎞   ⎡a11  0   0 ⎤
    ϕ ⎜⎢ 0  a22 a23⎥⎟ = ⎢a12 a22 a23⎥ .
      ⎝⎣ 0   0  a33⎦⎠   ⎣ 0   0  a33⎦

Example 3.5. Let ℒ = M2 ⊕ M2, identified as usual with a subspace of M4, and define ϕ : ℒ → M4 by

      ⎛⎡a1 b1  0  0 ⎤⎞   ⎡a1+a2 b1+b2 c2 d2⎤
    ϕ ⎜⎢c1 d1  0  0 ⎥⎟ = ⎢ c1    d1   0  0 ⎥
      ⎜⎢ 0  0  a2 b2⎥⎟   ⎢ 0     0    0  0 ⎥
      ⎝⎣ 0  0  c2 d2⎦⎠   ⎣ 0     0    0  0 ⎦ .

In either example, it is clear that ϕ preserves rank-one matrices, and that the image of ϕ is not a space of rank-one matrices. It is easy to verify that there do not exist matrices A and B, invertible or not, such that ϕ(T) = ATB or ϕ(T) = ATᵗB.
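Example 3.4 can be checked exhaustively over a small grid of entries; the sketch below (NumPy; the grid of values is an arbitrary choice) verifies both claims:

```python
import numpy as np
from itertools import product

rank = np.linalg.matrix_rank

def phi(T):
    """The map of Example 3.4: the (1,2) entry moves to position (2,1)."""
    return np.array([[T[0, 0], 0,       0],
                     [T[0, 1], T[1, 1], T[1, 2]],
                     [0,       0,       T[2, 2]]])

vals = [-1.0, 0.0, 1.0, 2.0]             # arbitrary small grid of entries
for a11, a12, a22, a23, a33 in product(vals, repeat=5):
    T = np.array([[a11, a12, 0], [0, a22, a23], [0, 0, a33]])
    if rank(T) == 1:
        assert rank(phi(T)) == 1         # phi preserves rank one ...

# ... but the image of phi is not a space of rank-one matrices:
S = phi(np.eye(3))
assert rank(S) == 3
```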

4. Triangular algebras

For every finite sequence of positive integers n1, n2, . . . , nk satisfying n1 + n2 + · · · + nk = n, we associate an algebra 𝒯(n1, n2, . . . , nk) consisting of all n × n matrices of the form

        ⎡A11 A12 … A1k⎤
    A = ⎢ 0  A22 … A2k⎥        (3)
        ⎢ ⋮           ⎥
        ⎣ 0   0  … Akk⎦ ,

where Aij is an ni × nj matrix. We call such an algebra a block upper triangular algebra.

Associated with such an algebra 𝒜 is its chain of invariant subspaces


{0} = V0 ⊂ V1 ⊂ · · · ⊂ Vk = F^n,

i.e.,

Vj = span{ei : 1 ≤ i ≤ n1 + n2 + · · · + nj}.

It is evident that the triangular algebra 𝒜 is the set of all operators leaving every Vj invariant. We also make use of the subspaces

Wj = span{ei : n1 + n2 + · · · + n_{j-1} < i ≤ n1 + n2 + · · · + nj}.

These are the subspaces corresponding to the diagonal blocks. We have Vj = W1 ⊕ · · · ⊕ Wj; in particular, F^n = W1 ⊕ · · · ⊕ Wk.

A special case of block upper triangular algebras is the algebra 𝒯n(F) of upper triangular matrices. In this case Wj = span{ej} and Vj = span{e1, . . . , ej}.

Our first lemma is a standard fact and quite easy to prove. First, we fix some notation. For a subspace V of F^n, we denote, as usual, its orthogonal complement with respect to the standard dot product by V^⊥. Thus, we have Vj^⊥ = W_{j+1} ⊕ · · · ⊕ Wk.

Lemma 4.1. Let 𝒜 be a block upper triangular algebra, and let {Vj : 1 ≤ j ≤ k} be its chain of invariant subspaces. The rank-one matrix x ⊗ y belongs to 𝒜 if and only if x ∈ Vj and y ∈ V_{j-1}^⊥ for some index j (1 ≤ j ≤ k).

Proof. Omitted. □
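Lemma 4.1 is easy to test numerically; the sketch below (NumPy, for the assumed small case 𝒯(1, 2) ⊂ M3) compares the membership x ⊗ y ∈ 𝒯(1, 2) with the stated condition over a small grid of coordinates:

```python
import numpy as np
from itertools import product

sizes = [1, 2]                         # block sizes (n1, n2) of T(1, 2) in M_3
cuts = np.cumsum([0] + sizes)          # 0, 1, 3: V_j = span{e_1, ..., e_cuts[j]}

def in_algebra(T, tol=1e-12):
    """Membership in T(1, 2): every strictly-lower block must vanish."""
    return all(abs(T[i, j]) < tol
               for b in range(1, len(sizes))
               for i in range(cuts[b], cuts[-1])
               for j in range(cuts[b - 1], cuts[b]))

def condition(x, y, tol=1e-12):
    """x in V_j and y in V_{j-1}^perp for some index j."""
    return any(np.allclose(x[cuts[j]:], 0, atol=tol)
               and np.allclose(y[:cuts[j - 1]], 0, atol=tol)
               for j in range(1, len(sizes) + 1))

vals = [0.0, 1.0, -2.0]                # arbitrary small grid of coordinates
for xs in product(vals, repeat=3):
    for ys in product(vals, repeat=3):
        x, y = np.array(xs), np.array(ys)
        if np.any(x) and np.any(y):    # nonzero vectors only
            assert in_algebra(np.outer(x, y)) == condition(x, y)
```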

Lemma 4.2. Let A, B be invertible matrices in Mn(F), and let 𝒜 be a block upper triangular algebra in Mn(F). If the mapping T ↦ ATB maps 𝒜 into 𝒜, then A, B ∈ 𝒜.

Proof. Take any x ∈ Vj; then x ⊗ z ∈ 𝒜 for every z ∈ V_{j-1}^⊥, and so A(x ⊗ z)B = Ax ⊗ Bᵗz ∈ 𝒜. As B is invertible, the space {Bᵗz : z ∈ V_{j-1}^⊥} has the same dimension as V_{j-1}^⊥, and so Ax ∈ Vj. This shows that A leaves every Vj invariant, and thus A ∈ 𝒜. By a similar argument, we also have B ∈ 𝒜. □

Lemma 4.3. Let A, B be invertible matrices in Mn(F), and let 𝒜 and ℬ be block upper triangular algebras in Mn(F). If the mapping ϕ defined by ϕ(T) = ATB maps 𝒜 onto ℬ, then 𝒜 = ℬ and A, B ∈ 𝒜.

Proof. First we observe that ϕ⁻¹ exists and that ϕ⁻¹(T) = A⁻¹TB⁻¹. Denote the chain of invariant subspaces of 𝒜 (respectively, ℬ) by {Vj : 1 ≤ j ≤ k} (respectively, {V′j : 1 ≤ j ≤ k′}).

Now take x ∈ V1; we know that x ⊗ F^n ⊆ 𝒜. As B is invertible, we have Ax ⊗ F^n = A(x ⊗ F^n)B, and so Ax ⊗ F^n ⊆ ℬ. Therefore Ax ∈ V′1, and hence AV1 ⊆ V′1. The same argument applied to ϕ⁻¹ gives us A⁻¹V′1 ⊆ V1. Therefore dim V1 = dim V′1.


Now take x ∈ V2. Then x ⊗ V1^⊥ ⊆ 𝒜, and so Ax ⊗ BᵗV1^⊥ ⊆ ℬ. As dim(BᵗV1^⊥) = dim V1^⊥ = dim (V′1)^⊥, we conclude that Ax ∈ V′2. Thus AV2 ⊆ V′2, and using ϕ⁻¹ we get that A⁻¹V′2 ⊆ V2, and hence that dim V2 = dim V′2.

Continuing in this fashion, one sees that the dimension of every block in 𝒜 agrees with the dimension of the corresponding block in ℬ. Thus 𝒜 = ℬ. Now it follows from Lemma 4.2 that A, B ∈ 𝒜. □

We are now in a position to apply the results of Section 3 to triangular algebras.

Theorem 4.4. Let 𝒜 and ℬ be block upper triangular algebras in Mn(F) and Mm(F), respectively, and let ϕ : 𝒜 → ℬ be a surjective linear mapping preserving rank-one matrices. Then m = n, ℬ = 𝒜 or 𝒜⁺, and

ϕ(T) = ATB or ϕ(T) = AT⁺B,

where A and B are invertible matrices in ℬ, and T ↦ T⁺ is the transpose relative to the anti-diagonal as in Eq. (2). Consequently, ϕ is bijective and preserves every rank.

Proof. By Theorem 3.1, we have two cases.

Case 1. m ≥ n and ϕ(T) = ATB, where A and B are matrices of size m × n and n × m, respectively, and each has rank n. As the map ϕ is onto, there exists T0 ∈ 𝒜 such that AT0B = Im, the m × m identity matrix. But this is possible only when n ≥ m. So we have n = m, and the result now follows from Lemma 4.3.

Case 2. m ≥ n and there exist matrices C and D of rank n such that ϕ(T) = CTᵗD. As in Case 1, we get that n = m. We may now write ϕ(T) = CJT⁺JD, where J is the matrix in Eq. (1). Let A = CJ, B = JD, and ψ(T) = CJTJD, a map from 𝒜⁺ onto ℬ. Now it follows from Lemma 4.3 that 𝒜⁺ = ℬ and that A, B ∈ ℬ. □

Remarks.
1. The second form of ϕ may be written as ϕ(T) = (ATB)⁺, where A and B are now in 𝒜, rather than ℬ.
2. Both forms of ϕ may be present when 𝒜 = 𝒜⁺, i.e., when the sizes n1, . . . , nk of the diagonal blocks satisfy nj = n_{k-j+1}.
3. Even for such highly structured spaces of matrices as triangular algebras, it is possible to have a rank-one preserving map whose range is a space of rank-one matrices, as the following example illustrates.

Example 4.5. Define ϕ : 𝒯3(F) → 𝒯3(F) by

      ⎛⎡a11 a12 a13⎤⎞   ⎡a11+a22+a33  a12+a23  a13⎤
    ϕ ⎜⎢ 0  a22 a23⎥⎟ = ⎢     0          0      0 ⎥ .
      ⎝⎣ 0   0  a33⎦⎠   ⎣     0          0      0 ⎦

Then ϕ preserves rank one.
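The sketch below (NumPy; the grid of values is an arbitrary choice) verifies Example 4.5 exhaustively over a small grid of entries:

```python
import numpy as np
from itertools import product

rank = np.linalg.matrix_rank

def phi(T):
    """Example 4.5: the image is supported on row one, hence the range of
    phi is contained in a space of rank-one matrices."""
    out = np.zeros((3, 3))
    out[0] = [T[0, 0] + T[1, 1] + T[2, 2], T[0, 1] + T[1, 2], T[0, 2]]
    return out

vals = [-1.0, 0.0, 1.0, 2.0]           # arbitrary small grid of entries
for a11, a12, a13, a22, a23, a33 in product(vals, repeat=6):
    T = np.array([[a11, a12, a13], [0, a22, a23], [0, 0, a33]])
    if rank(T) == 1:
        assert rank(phi(T)) == 1       # phi preserves rank one
assert rank(phi(np.eye(3))) == 1       # yet phi is far from injective
```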


5. Surjective additive maps

In this section, we characterize surjective additive, rather than linear, mappings on triangular algebras that preserve rank one. As in [13,14], the proofs are quite a bit more delicate than the corresponding proofs for the linear case. As to the form of such maps, in addition to left multiplications, right multiplications and “transposing”, other “elementary” rank-one preservers appear, which we will presently describe.

5.1. Assume that c ↦ c̄ is an automorphism of F, and C = [cij] ∈ Mmn(F). We denote the matrix [c̄ij] by C̄. The map C ↦ C̄ preserves every rank.

5.2. Let each of f1, f2, . . . , fn be an additive mapping from F to F such that f1 is bijective, and let f = (f1, f2, . . . , fn). Define a mapping f on a triangular algebra 𝒜 = 𝒯(n1, . . . , nk), with n1 = 1, by

      ⎛⎡c11 c12 … c1n⎤⎞   ⎡f1(c11)  f2(c11)+c12  …  fn(c11)+c1n⎤
    f ⎜⎢ 0  c22 … c2n⎥⎟ = ⎢   0         c22      …      c2n    ⎥
      ⎜⎢ ⋮           ⎥⎟   ⎢   ⋮                                ⎥
      ⎝⎣ 0   0  … cnn⎦⎠   ⎣   0          0       …      cnn    ⎦ ,

i.e.,

f(c11E11) = Σ_{j=1}^{n} fj(c11)E1j,   f(cEij) = cEij if (i, j) ≠ (1, 1).

This is a surjective additive mapping on 𝒜, and it preserves rank-one matrices, but only when n1 = 1.
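The maps fj need only be additive; the sketch below uses scalar multiples as the fj (an assumed illustrative choice; over Q every additive map is of this form, though over general fields additivity is strictly weaker than linearity). It checks the rank-one preserving claim for 𝒯3(F), where n1 = 1:

```python
import numpy as np
from itertools import product

rank = np.linalg.matrix_rank
# Illustrative additive maps on F: f1 bijective, f2 and f3 arbitrary.
f1 = lambda c: 2.0 * c
f2 = lambda c: 3.0 * c
f3 = lambda c: -1.0 * c

def f(C):
    """The map of 5.2 on T_3(F) (block sizes (1,1,1), so n1 = 1):
    only the first row changes, driven by the (1,1) entry."""
    out = C.copy()
    out[0, 0] = f1(C[0, 0])
    out[0, 1] = C[0, 1] + f2(C[0, 0])
    out[0, 2] = C[0, 2] + f3(C[0, 0])
    return out

vals = [-1.0, 0.0, 1.0, 2.0]           # arbitrary small grid of entries
for a11, a12, a13, a22, a23, a33 in product(vals, repeat=6):
    C = np.array([[a11, a12, a13], [0, a22, a23], [0, 0, a33]])
    if rank(C) == 1:
        assert rank(f(C)) == 1         # f preserves rank one when n1 = 1
```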

5.3. For f and f1, f2, . . . , fn as above, define a mapping f̂ on a triangular algebra 𝒜 = 𝒯(n1, . . . , nk), with nk = 1, by

      ⎛⎡c11 c12 … c1n      ⎤⎞   ⎡c11 c12 … fn(cnn)+c1n      ⎤
   f̂ ⎜⎢ ⋮                 ⎥⎟ = ⎢ ⋮                          ⎥
      ⎜⎢ 0   0  … c_{n-1,n}⎥⎟   ⎢ 0   0  … f2(cnn)+c_{n-1,n}⎥
      ⎝⎣ 0   0  … cnn      ⎦⎠   ⎣ 0   0  … f1(cnn)          ⎦ ,

i.e., f̂(C) = (f(C⁺))⁺. Again, this is an additive mapping on 𝒜 preserving rank-one matrices, but only when nk = 1.

Next we recall a classical definition.


Definition 5.4. A mapping ϕ from a vector space V to a vector space W is said to be semilinear if it is additive and if there exists an automorphism f : F → F such that ϕ(λv) = f(λ)ϕ(v) for every λ ∈ F and v ∈ V.

Now, we state the main theorem of this section. The case n = 2 and 𝒜 = 𝒯2 = 𝒯(1, 1), the upper triangular 2 × 2 matrices, is exceptional, as will be seen in Example 6.2. We should point out, however, that the theorem is valid for 𝒜 = M2 = 𝒯(2).

Theorem 5.5. Let 𝒜 = 𝒯(n1, . . . , nk) be a block upper triangular algebra in Mn(F) such that 𝒜 ≠ 𝒯2(F). Let ϕ : 𝒜 → 𝒜 be a surjective additive mapping that preserves rank-one matrices. Then ϕ is a composition of some or all of the following maps:

(i) Left multiplication by an invertible matrix in 𝒜.
(ii) Right multiplication by an invertible matrix in 𝒜.
(iii) The map C ↦ C̄, induced by a field automorphism a ↦ ā of F.
(iv) The map f defined in 5.2 above, but only when n1 = 1.
(v) The map f̂ defined in 5.3 above, but only when nk = 1.
(vi) The transpose with respect to the antidiagonal, T ↦ T⁺. This is present only when 𝒜 = 𝒜⁺, i.e., nj = n_{k-j+1} for every j.

Thus the restriction of ϕ to the space

ℳ := {[cij] ∈ 𝒜 : c11 = 0 if n1 = 1, and cnn = 0 if nk = 1}

is semilinear. In particular, if n1 ≥ 2 and nk ≥ 2, then ϕ is semilinear.

Before proving Theorem 5.5, we state a couple of corollaries.

Corollary 5.6. If ϕ is as in Theorem 5.5, then:
(a) ϕ is injective;
(b) ϕ preserves every rank, i.e., rank ϕ(T) = rank T for every T ∈ 𝒜.

Corollary 5.7. Let 𝒜 be a block upper triangular algebra in Mn(R), over the field of real numbers, and assume that each of the first and last diagonal blocks has size at least 2 × 2. Then every additive surjective rank-one preserving mapping ϕ : 𝒜 → 𝒜 is linear.

Proof of Corollary 5.7. It is well known that the identity is the only automorphism of the field of real numbers; see, e.g., [1, p. 58]. □

The proof of Theorem 5.5 will be accomplished via several lemmas. We find it convenient to deal with mappings between slightly different triangular algebras 𝒜 and ℬ having the same dimension, and to show that ℬ must equal 𝒜 or 𝒜⁺, in addition to the conclusions of Theorem 5.5.


Lemma 5.8. Let 𝒜 = 𝒯(n1, . . . , nk) and ℬ = 𝒯(m1, . . . , mℓ) be block upper triangular algebras in Mn(F) such that dim 𝒜 = dim ℬ ≥ 3. If ϕ : 𝒜 → ℬ is a surjective additive mapping that preserves rank-one matrices, then there exist nonzero vectors u0, v0 ∈ F^n and injective additive mappings g, h : F^n → F^n such that either

ϕ(e1 ⊗ y) = u0 ⊗ g(y) and ϕ(x ⊗ en) = h(x) ⊗ v0 for x, y ∈ F^n,   (5.8.1)

or

ϕ(e1 ⊗ y) = g(y) ⊗ u0 and ϕ(x ⊗ en) = v0 ⊗ h(x) for x, y ∈ F^n.   (5.8.2)

Proof. The image under ϕ of e1 ⊗ F^n is an additive group of rank-one matrices. It follows that ϕ(e1 ⊗ F^n) = u0 ⊗ G or G ⊗ u0 for some u0 ∈ F^n and an additive subgroup G of F^n. It follows easily that there exists an injective additive mapping g : F^n → F^n such that

(a) ϕ(e1 ⊗ y) = u0 ⊗ g(y); or
(b) ϕ(e1 ⊗ y) = g(y) ⊗ u0.

Similarly,

(a)′ ϕ(x ⊗ en) = h(x) ⊗ v0; or
(b)′ ϕ(x ⊗ en) = v0 ⊗ h(x)

for some v0 ∈ F^n and an injective additive mapping h : F^n → F^n.

To prove the lemma, we must show that it is not possible to have Eqs. (a) and (b)′ satisfied simultaneously. The impossibility of (a)′ together with (b) may be proved similarly. Towards this end, assume that ϕ(e1 ⊗ y) = u0 ⊗ g(y) and ϕ(x ⊗ en) = v0 ⊗ h(x). Then u0 ⊗ g(en) = ϕ(e1 ⊗ en) = v0 ⊗ h(e1). It follows that u0 and v0 are linearly dependent, and since each is determined up to a multiplicative scalar, we may assume with no loss of generality that u0 = v0. We now proceed exactly as in the proof of Theorem 3.1 (Case 1) to conclude that ϕ(𝒜) ⊆ u0 ⊗ F^n, contradicting surjectivity. □

5.9. If ϕ satisfies Eq. (5.8.2), we may replace ϕ by the map ϕ⁺ : 𝒜 → ℬ⁺ defined by ϕ⁺(T) = (ϕ(T))⁺. In view of this, we shall henceforth assume that ϕ satisfies Eq. (5.8.1), in addition to the hypotheses of Lemma 5.8.


For the following lemma, we recall that {V0, V1, . . . , Vk} is the chain of invariant subspaces of 𝒜.

Lemma 5.10. Let ℒ1 := e1 ⊗ F^n ∪ F^n ⊗ en and ℒ2 := u0 ⊗ F^n ∪ F^n ⊗ v0. Let ξ ⊗ η be a rank-one matrix in 𝒜, with ξ ∈ Vj and η ∈ V_{j-1}^⊥, such that ϕ(ξ ⊗ η) ∉ ℒ2. Then:

(i) ϕ(ξ ⊗ η) = h(ξ) ⊗ g(η).
(ii) ϕ(x ⊗ y) = h(x) ⊗ g(y) for every x ∈ Vj and y ∈ V_{j-1}^⊥.
(iii) For every c ∈ F, there exists c̄ ∈ F, independent of x and y, such that h(cx) = c̄h(x) and g(cy) = c̄g(y) for every x ∈ Vj and y ∈ V_{j-1}^⊥.

Furthermore, if ℬ ≠ 𝒯2(F), then there exists an invariant subspace Vj, with dim Vj ≥ 2 and dim V_{j-1}^⊥ ≥ 2, such that the conclusions of parts (ii) and (iii) hold.

Proof. (i) Consider K1 = ξ ⊗ η,

K2 = (ξ + e1) ⊗ η,  K3 = ξ ⊗ (η + en),  K4 = (ξ + e1) ⊗ (η + en),

and proceed exactly as in the proof of Theorem 3.1 (Case 2).
(ii) For any x ∈ Vj, we have

ϕ(x ⊗ η) = ϕ(ξ ⊗ η) + ϕ((x − ξ) ⊗ η).

If ϕ((x − ξ) ⊗ η) ∉ ℒ2, then by part (i) we have ϕ((x − ξ) ⊗ η) = h(x − ξ) ⊗ g(η), and so

ϕ(x ⊗ η) = h(ξ) ⊗ g(η) + h(x − ξ) ⊗ g(η) = h(x) ⊗ g(η).

On the other hand, if ϕ((x − ξ) ⊗ η) ∈ ℒ2, then ϕ(x ⊗ η) ∉ ℒ2, and again by applying part (i) directly to x ⊗ η, we get ϕ(x ⊗ η) = h(x) ⊗ g(η). For x ∈ Vj and y ∈ V_{j-1}^⊥, we may repeat the above argument using the equation

ϕ(x ⊗ y) = ϕ(x ⊗ η) + ϕ(x ⊗ (y − η))

to reach the desired conclusion.
(iii) Assume that x ∈ Vj, y ∈ V_{j-1}^⊥, and c ∈ F. First we observe that dim Vj ≥ 2 and dim V_{j-1}^⊥ ≥ 2, since otherwise ξ ⊗ η ∈ ℒ1, and hence ϕ(ξ ⊗ η) ∈ ℒ2. It follows that e2 ∈ Vj and e_{n-1} ∈ V_{j-1}^⊥, and that x ⊗ e_{n-1} and e2 ⊗ y ∈ 𝒜. So

h(cx) ⊗ g(e_{n-1}) = ϕ(cx ⊗ e_{n-1}) = ϕ(x ⊗ ce_{n-1}) = h(x) ⊗ g(ce_{n-1}).

Therefore there exists a scalar c̄ such that g(ce_{n-1}) = c̄g(e_{n-1}), and consequently h(cx) = c̄h(x). Considering ϕ(x ⊗ cy) yields that g(cy) = c̄g(y).


As to the last assertion of the lemma, we note that since ϕ is surjective, there exists a rank-one matrix in 𝒜 whose image under ϕ is not in ℒ2. The result now readily follows. □

For the purpose of the next lemma, we define the support of an n × n matrix C = [cij] to be the set of indices (i, j) for which cij ≠ 0, and we denote it by supp C. For a collection 𝒞 of matrices, we define supp 𝒞 by supp 𝒞 = ⋃{supp C : C ∈ 𝒞}. We denote the cardinality of a set X by card X. We also recall the matrix units Eij = ei ⊗ ej.

We continue to assume that ϕ is as in Lemma 5.8 and satisfies Eq. (5.8.1). We also assume that ℬ ≠ 𝒯2(F), which implies that 𝒜 ≠ 𝒯2(F).

Lemma 5.11. Let A be the support of 𝒜, and let 𝒥 be the subset of all (i, j) for which ϕ(Eij) ∉ ℒ2. Then:

(i) {ϕ(Eij) : (i, j) ∈ 𝒥} are linearly independent.
(ii) span{ϕ(Eij) : (i, j) ∈ 𝒥} is disjoint from ℒ2.
(iii) 𝒥 consists of all indices (i, j) ∈ A for which i ≥ 2 and j ≤ n − 1.
(iv) If x ⊗ y ∉ ℒ1, then ϕ(x ⊗ y) ∉ ℒ2.
(v) If x ⊗ y ∈ ℳ, then ϕ(x ⊗ y) = h(x) ⊗ g(y).
(vi) u0 ∈ W1 and v0 ∈ Wk; in particular, when n1 = 1 we may take u0 = e1, and when nk = 1 we may take v0 = en.

Proof. Define A0 = {(i, j) ∈ A : i ≥ 2 and j ≤ n − 1}. It follows from Lemma 5.10 that if T = Eij with (i, j) ∈ 𝒥, then ϕ(span{T}) ⊆ span{ϕ(T)}, and so

ϕ(span{Eij : (i, j) ∈ 𝒥}) ⊆ span{ϕ(Eij) : (i, j) ∈ 𝒥}.

Every matrix in 𝒜 may be written as L + E, where L ∈ ℒ1 and E ∈ span{Eij : (i, j) ∈ 𝒥}. So

ℬ = ϕ(𝒜) ⊆ ϕ(ℒ1) + ϕ(span{Eij : (i, j) ∈ 𝒥}) ⊆ ℒ2 + span{ϕ(Eij) : (i, j) ∈ 𝒥}.

Thus

dim ℬ ≤ dim ℒ2 + card 𝒥 ≤ (2n − 1) + card 𝒥 ≤ (2n − 1) + card A0 = dim 𝒜.

But dim ℬ = dim 𝒜, so all of the above inequalities become equalities. Assertions (i)–(iv) and (vi) are immediate.

To prove (v), we first notice that the conclusion has been established for x ⊗ y ∉ ℒ1 in Lemma 5.10 together with part (iv). If x ⊗ y ∈ ℒ1, then x is a scalar multiple of e1 or y is a scalar multiple of en. In the former case, we have e2 ⊗ y ∈ 𝒜 and e2 ⊗ y ∉ ℒ1. Then

ϕ(x ⊗ y) = ϕ(e2 ⊗ y) + ϕ((x − e2) ⊗ y) = h(e2) ⊗ g(y) + h(x − e2) ⊗ g(y) = h(x) ⊗ g(y).

A similar calculation establishes the result when y is a scalar multiple of en. □

Lemma 5.12. There exists an automorphism c ↦ c̄ of F such that ϕ(cT) = c̄ϕ(T) for every T ∈ ℳ and c ∈ F, where ℳ is defined in the statement of Theorem 5.5.

Proof. Define

Ug := span{e2, . . . , en} or F^n, according as n1 = 1 or not,

and

Uh := span{e1, . . . , e_{n-1}} or F^n, according as nk = 1 or not.

Take x ∈ Uh and y ∈ Ug with x ⊗ y ∈ 𝒜. By Lemma 5.11, we have ϕ(x ⊗ y) = h(x) ⊗ g(y). We also know that e2 ⊗ y ∈ 𝒜 and x ⊗ e_{n-1} ∈ 𝒜. Exactly as in Lemma 5.10(iii), this leads to the existence of a function c ↦ c̄ such that ϕ(cR) = c̄ϕ(R) for every rank-one R ∈ ℳ, and as ℳ is the span of its rank-one matrices, ϕ(cT) = c̄ϕ(T) for every T ∈ ℳ. It remains to show that c ↦ c̄ is an automorphism. Additivity and surjectivity are obvious. The map is multiplicative, since (c1c2)‾ϕ(T) = ϕ(c1c2T) = c̄1ϕ(c2T) = c̄1c̄2ϕ(T). Finally, the map is injective since its kernel is obviously not all of F, and, being an ideal, it must then be {0}. □

Lemma 5.13. There exist invertible matrices A and B in Mn(F) and a surjective rank-one preserving mapping ϕ2 : 𝒜 → A⁻¹ℬB⁻¹ such that:
(i) ϕ is a composition of ϕ2, the multiplication operators L_A, R_B, and the mapping C ↦ C̄ induced by a field automorphism.
(ii) The restriction of ϕ2 to ℳ is the identity mapping.

Proof. Let σ : c ↦ c̄ be the automorphism of Lemma 5.12, and define ϕ1 by applying σ⁻¹ entrywise to ϕ(C). Then ϕ1 is a surjective rank-one preserving mapping from 𝒜 to ℬ, and ϕ1|ℳ is linear. Furthermore, there exist injective linear mappings h1 (respectively, g1) from Uh (respectively, Ug) into F^n such that ϕ1(x ⊗ y) = h1(x) ⊗ g1(y) for all x ⊗ y ∈ ℳ. (Here Uh and Ug are the spaces defined in the proof of the preceding lemma.) It is now easy to find invertible n × n matrices A and B such that h1(x) = Ax for x ∈ Uh and g1(y) = Bᵗy for y ∈ Ug. Define ϕ2(T) = A⁻¹ϕ1(T)B⁻¹. Both assertions of the lemma are easily verified. □

Lemma 5.14. 𝒜 = ℬ, and A, B ∈ 𝒜.

Proof. We will first show that 𝒜 = A⁻¹ℬB⁻¹. As ϕ2(e1 ⊗ Ug) = e1 ⊗ Ug, we get that ϕ2(e1 ⊗ F^n) ⊆ e1 ⊗ F^n. Similarly, ϕ2(F^n ⊗ en) ⊆ F^n ⊗ en. Since 𝒜 = ℳ + span E11 + span Enn, we see that

A⁻¹ℬB⁻¹ = ϕ2(𝒜) ⊆ ℳ + e1 ⊗ F^n + F^n ⊗ en ⊆ 𝒜.

Since dim A⁻¹ℬB⁻¹ = dim ℬ = dim 𝒜, it follows that 𝒜 = A⁻¹ℬB⁻¹.

The function T ↦ ATB maps the triangular algebra 𝒜 = A⁻¹ℬB⁻¹ onto the triangular algebra ℬ. So, by Lemma 4.3, we get that the two algebras are equal and that A and B ∈ 𝒜. □

Lemma 5.15. The mapping ϕ2 of Lemma 5.13 is a composition of mappings f and ĝ of the forms defined in 5.2 and 5.3. The former is present only when n1 = 1, and the latter is present only when nk = 1.

Proof. First assume that n1 = 1. As before, we have ϕ2(e1 ⊗ y) = e1 ⊗ g2(y) for an injective additive mapping g2 on F^n. In particular, ϕ2(c11E11) = Σ_{j=1}^{n} fj(c11)E1j for additive maps f1, f2, . . . , fn on F. Next we show that f1 is bijective. Surjectivity is obvious. To prove injectivity, assume that f1(c) = 0. Then ϕ2(cE11 − Σ_{j=2}^{n} fj(c)E1j) = 0. Thus c = 0, since otherwise ϕ2 annihilates a rank-one matrix. This proves injectivity. If n1 ≥ 2, then ϕ2|span E11 is the identity, and we may take f = (id, 0, . . . , 0), where id is the identity.

The case nk = 1 is dealt with similarly, showing the existence of an additive mapping g = (gn, . . . , g1) such that ϕ2(cnnEnn) = Σ_{j=1}^{n} gj(cnn)Ejn. It is now straightforward to verify that ϕ2 is a composition of f and ĝ. □

Conclusion. Theorem 5.5 now follows from Lemmas 5.8–5.15. When ϕ satisfies Eq. (5.8.1), we have written ϕ as a composition of maps of the forms (i)–(v). In the alternative case, the map ϕ⁺ : T ↦ (ϕ(T))⁺ is a composition of the same maps.

6. Counterexamples

In this section, we give examples related to Section 5. We provide examples to illustrate that the hypothesis of surjectivity in Theorem 5.5 is indispensable, to show that 𝒯2 is indeed an exceptional case, and to show that, unlike the linear case, the condition that ϕ preserves rank-one matrices in both directions does not imply surjectivity and does not imply that ϕ has a form resembling that of Theorem 5.5.

We will find the following facts useful:
(i) R ≅ R^d as additive groups for every positive integer d. This is true since each of R and R^d is, as an additive group, a vector space over Q of dimension c, where c is the cardinality of the continuum.
(ii) A field F may be isomorphic to a proper subfield of itself. Indeed, if K is an arbitrary field and t is an indeterminate, then t ↦ t³ induces an isomorphism between K(t) and K(t³). For a concrete example, we have Q(π) ≅ Q(π³).

In each of the following examples, we have an additive rank-one preserving mapping that is not of the form of Theorem 5.5, and also the range is not included in a space of rank-one matrices. Example 4.5 provides a linear map with range included in a space of rank-one matrices.

Example 6.1. Let f : R⁴ → R be an additive-group isomorphism, and define mappings ϕ : T3(R) → T3(R) and ψ : T3(R) → T2(R) by

$$
\varphi\left(\begin{bmatrix} a_1 & a_2 & a_3\\ 0 & a_4 & a_5\\ 0 & 0 & a_6 \end{bmatrix}\right)
= \begin{bmatrix} a_1 & 0 & f(a_2,a_3,a_4,a_5)\\ 0 & 0 & 0\\ 0 & 0 & a_6 \end{bmatrix},
$$

$$
\psi\left(\begin{bmatrix} a_1 & a_2 & a_3\\ 0 & a_4 & a_5\\ 0 & 0 & a_6 \end{bmatrix}\right)
= \begin{bmatrix} a_1 & f(a_2,a_3,a_4,a_5)\\ 0 & a_6 \end{bmatrix}.
$$

The mapping ϕ is not surjective, while ψ is bijective but maps Tm onto Tn, where m ≠ n.
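As a check (our elaboration; the verification is left to the reader in the text), ϕ does preserve rank one. The image of ϕ is supported on the entries (1,1), (1,3), (3,3), and such a nonzero matrix has rank one precisely when its only nontrivial 2 × 2 minor vanishes:

```latex
\det\begin{bmatrix} a_1 & f(a_2,a_3,a_4,a_5)\\ 0 & a_6 \end{bmatrix} = a_1 a_6 = 0.
```

For a rank-one T ∈ T3(R), the rows of T are proportional: if a1 ≠ 0, then the third row (0, 0, a6) must be a scalar multiple of the first row (a1, a2, a3), which forces a6 = 0. Hence a1a6 = 0 always holds. Moreover ϕ(T) ≠ 0: if a1 = a6 = 0, then T ≠ 0 gives (a2, a3, a4, a5) ≠ 0, and f is injective with f(0) = 0. So ϕ(T) has rank one.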

The next example shows that T2(R) is indeed exceptional, even when ϕ preserves rank-one matrices in both directions in addition to being bijective.

Example 6.2. Let f : R → R be an additive-group isomorphism and define ϕ : T2(R) → T2(R) by

$$
\varphi\left(\begin{bmatrix} a_{11} & a_{12}\\ 0 & a_{22} \end{bmatrix}\right)
= \begin{bmatrix} a_{11} & f(a_{12})\\ 0 & a_{22} \end{bmatrix}.
$$

In the linear case, we have seen that surjectivity together with preserving rank one is equivalent to preserving rank one in both directions. Our next examples illustrate the difference in the additive case. They show that preserving rank one in both directions does not give us the forms in Theorem 5.5. We give three examples for three different types of matrix algebras, namely Tn, block triangular algebras, and the full matrix algebra Mn.

Example 6.3. Let F be a field which is isomorphic to a proper subfield F′ such that [F : F′] ≥ 3. (As usual, [F : F′] is the dimension of F as a vector space over F′.) Let θ be an isomorphism from F to F′, and let π1, π2 be elements of F such that 1, π1, π2 are linearly independent over F′. For instance, we may take F = Q(π), F′ = Q(π³), π1 = π, π2 = π², and θ(r(π)) = r(π³) for every rational expression r(x) ∈ Q(x).

Define ϕ : T4(F) → T4(F), ψ : T(1,2)(F) → T(1,2)(F), and χ : M3(F) → M3(F) by

$$
\varphi\left(\begin{bmatrix}
a_{11} & a_{12} & a_{13} & a_{14}\\
0 & a_{22} & a_{23} & a_{24}\\
0 & 0 & a_{33} & a_{34}\\
0 & 0 & 0 & a_{44}
\end{bmatrix}\right)
= \begin{bmatrix}
\theta(a_{11}) & \theta(a_{12}) & \theta(a_{13})+\pi_1\theta(a_{33}) & \theta(a_{14})+\pi_1\theta(a_{34})\\
0 & \theta(a_{22}) & \theta(a_{23})+\pi_2\theta(a_{33}) & \theta(a_{24})+\pi_2\theta(a_{34})\\
0 & 0 & \theta(a_{33}) & \theta(a_{34})\\
0 & 0 & 0 & \theta(a_{44})
\end{bmatrix},
$$

$$
\psi\left(\begin{bmatrix}
a_{11} & a_{12} & a_{13}\\
0 & a_{22} & a_{23}\\
0 & a_{32} & a_{33}
\end{bmatrix}\right)
= \begin{bmatrix}
\theta(a_{11}) & \theta(a_{12})+\pi_1\theta(a_{32}) & \theta(a_{13})+\pi_1\theta(a_{33})\\
0 & \theta(a_{22})+\pi_2\theta(a_{32}) & \theta(a_{23})+\pi_2\theta(a_{33})\\
0 & 0 & 0
\end{bmatrix},
$$

$$
\chi\left(\begin{bmatrix}
a_{11} & a_{12} & a_{13}\\
a_{21} & a_{22} & a_{23}\\
a_{31} & a_{32} & a_{33}
\end{bmatrix}\right)
= \begin{bmatrix}
f(a_{11})+f(a_{31})\pi_1 & f(a_{12})+f(a_{32})\pi_1 & f(a_{13})+f(a_{33})\pi_1\\
f(a_{21})+f(a_{31})\pi_2 & f(a_{22})+f(a_{32})\pi_2 & f(a_{23})+f(a_{33})\pi_2\\
0 & 0 & 0
\end{bmatrix}.
$$

Using the fact that a nonzero matrix has rank one if and only if every 2 × 2 submatrix has zero determinant, it is straightforward to verify that each of the three maps preserves rank-one matrices in both directions and is injective.
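To illustrate one step of this verification for ϕ (our elaboration, not spelled out in the text): since θ is a ring homomorphism, θ(ad − bc) = θ(a)θ(d) − θ(b)θ(c), so θ carries 2 × 2 minors to minors; for minors meeting the corrected columns, the linear independence of 1, π1, π2 over F′ does the rest. For instance, the minor on rows 1, 2 and columns 1, 3 of the image of ϕ is

```latex
\theta(a_{11})\bigl(\theta(a_{23})+\pi_2\theta(a_{33})\bigr) - \bigl(\theta(a_{13})+\pi_1\theta(a_{33})\bigr)\cdot 0
= \theta(a_{11}a_{23}) + \pi_2\,\theta(a_{11}a_{33}),
```

which vanishes if and only if a11a23 = 0 and a11a33 = 0, i.e. if and only if the corresponding 2 × 2 minors of the argument vanish, because both θ-values lie in F′ and 1, π2 are linearly independent over F′.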

7. Additive maps preserving rank one in both directions

As seen in Example 6.3, additive maps that preserve rank one in both directions need not be of a form resembling the forms described in Theorem 5.5. We show, however, that the only “obstruction” is the fact that the field F may be isomorphic to a proper subfield of itself. When this is not the case, we obtain a form for such maps that is nearly identical to the form of Theorem 5.5. The details are slightly less delicate than the proofs in Section 5.

Remark. None of the following fields is isomorphic to a proper subfield of itself: finite fields and their algebraic closures; the field of (real or complex) algebraic numbers; finite extensions of Q; and the field R of real numbers. All this is easy to verify. For the field R, we may also refer to [1, p. 58].

Of course, every prime field, i.e. Q or Zp (for a prime p), is not isomorphic to a proper subfield of itself, but in this case additive maps are automatically linear.

Definition 7.1. A mapping ϕ from a vector space V to a vector space W is said to be quasi-linear if it is additive and if there exists a nonzero ring-endomorphism θ : F → F such that ϕ(λv) = θ(λ)ϕ(v) for every λ ∈ F and v ∈ V. It should be noted that the map θ is an isomorphism from F onto a subfield of itself.

In the following theorem, we again assume that A ≠ T2. Example 6.2 shows that this assumption is necessary.

Theorem 7.2. Let F be a field that is not isomorphic to a proper subfield of itself, and let A = T(n1 · · · nk) be a block upper triangular algebra in Mn(F) such that A ≠ T2(F). Let ϕ : A → A be an additive mapping that preserves rank-one matrices in both directions. Then ϕ is a composition of some or all of the maps (i)–(iv) of Theorem 5.5, except that the map f1 (in 5.2 and 5.3) is only required to be injective rather than bijective.

The restriction of ϕ to the space

M := {[cij] ∈ A : c11 = 0 if n1 = 1, and cnn = 0 if nk = 1}

is semilinear. In particular, if n1 ≥ 2 and nk ≥ 2, then ϕ is semilinear.

Furthermore, ϕ is injective and preserves every rank.

Again, we prove Theorem 7.2 via several lemmas.

Lemma 7.3 (Cf. Lemma 5.8). Let A = T(n1 · · · nk) be a block upper triangular algebra in Mn(F) and let ϕ : A → Mn(F) be an additive mapping that preserves rank-one matrices in both directions. Then there exist nonzero vectors u0, v0 ∈ Fⁿ and injective additive mappings g, h : Fⁿ → Fⁿ such that ϕ satisfies Eq. (5.8.1) or Eq. (5.8.2).

Proof. The vectors u0, v0 and the mappings g, h are established exactly as in the first paragraph of the proof of Lemma 5.8. Upon examining the remainder of the proof of Lemma 5.8, we see that it suffices to show that it is not possible to have ϕ(e1 ⊗ y) = u0 ⊗ g(y) and ϕ(x ⊗ en) = u0 ⊗ h(x).

If this is the case, then consider Tx,y := e1 ⊗ y + x ⊗ en, with each of the pairs {x, e1}, {y, en} linearly independent. Each Tx,y has rank two and ϕ(Tx,y) = u0 ⊗ (h(x) + g(y)). Thus h(x) + g(y) = 0, since otherwise the image of a rank-two matrix would have rank one. Upon replacing x by e1 + x, we get that h(e1) = 0. But then the image of e1 ⊗ en is zero, a contradiction. □

The next lemma replaces Lemmas 5.10 and 5.11.

Lemma 7.4. Let L1 and L2 be as in Lemma 5.10, and let x ⊗ y be a rank-one matrix in A but not in L1. Then:

(i) ϕ(x ⊗ y) ∉ L2.
(ii) h(x), u0 are linearly independent, and g(y), v0 are linearly independent.
(iii) ϕ(x ⊗ y) = h(x) ⊗ g(y).


Proof. (i) If ϕ(x ⊗ y) ∈ L2, then ϕ(x ⊗ y) = u0 ⊗ w or w ⊗ v0. In the former case, we consider the rank-two matrices Kz := x ⊗ y + e1 ⊗ z, where z and y are linearly independent. Then ϕ(Kz) = u0 ⊗ (w + g(z)), so w + g(z) = 0 for every z that is not a scalar multiple of y. This contradicts the injectivity of g. If ϕ(x ⊗ y) = w ⊗ v0, a similar calculation leads to a contradiction.

(ii) Consider

K1 = x ⊗ y,  K2 = (x + e1) ⊗ y,  K3 = x ⊗ (y + en),  K4 = (x + e1) ⊗ (y + en),

and let Cj = ϕ(Kj) for 1 ≤ j ≤ 4. Thus C1, C2, C3, C4 are all of rank one. If C1 = u ⊗ v, then

C2 = u ⊗ v + u0 ⊗ g(y),  C3 = u ⊗ v + h(x) ⊗ v0,

and

C4 = u ⊗ v + u0 ⊗ g(y) + h(x) ⊗ v0 + u0 ⊗ v0.

If u0, h(x) are linearly dependent or v0, g(y) are linearly dependent, then by Lemma 2.1 we conclude that ϕ(x ⊗ y) ∈ L2, contradicting (i).

(iii) Applying Lemma 2.1 again to C1, C2, C3, C4, we now conclude, in view of (ii), that ϕ(x ⊗ y) = h(x) ⊗ g(y). □

Lemma 7.5. Assume that ϕ is as above and that A ≠ T2(F). Then there exists a ring homomorphism c ↦ c̄ from F into F such that ϕ(cT) = c̄ϕ(T) for every T ∈ M and c ∈ F. Thus the restriction of ϕ to M is quasi-linear. In particular, if F is not isomorphic to a proper subfield of itself, then c ↦ c̄ is an automorphism of F and ϕ|M is semilinear.

Proof. First consider a rank-one matrix x ⊗ y ∉ L1; by Lemma 7.4(iii) we know that ϕ(x ⊗ y) = h(x) ⊗ g(y). This may now be extended to all matrices e1 ⊗ y (similarly x ⊗ en) that are in M, via the equation ϕ(e1 ⊗ cy) = ϕ(e2 ⊗ cy) + ϕ((e1 − e2) ⊗ cy). Thus, for all x ⊗ y ∈ M we have ϕ(x ⊗ y) = h(x) ⊗ g(y).

Now, exactly as in the proofs of Lemma 5.10(iii) and Lemma 5.13, we see that h(cx) = c̄h(x) and g(cy) = c̄g(y) for a mapping c ↦ c̄ from F into itself. Thus ϕ(cT) = c̄ϕ(T) for every T ∈ M. We see that c ↦ c̄ is additive, multiplicative and injective as in Lemma 5.13. Thus ϕ|M is quasi-linear. Also, c ↦ c̄ is surjective if F is not isomorphic to a proper subfield of itself, yielding that c ↦ c̄ is an automorphism and ϕ|M is semilinear. □

Lemma 7.6. Assume that ϕ and A are as above. Assume further that F is not isomorphic to a proper subfield of itself. Then ϕ is a composition of a mapping C ↦ C̄ induced by a field automorphism and an additive mapping ϕ1 : A → Mn(F) that preserves rank one in both directions, with ϕ1|M linear.

Proof. Define ϕ1(C) by applying the inverse of the automorphism c ↦ c̄ entrywise to ϕ(C). The results are easily verified. □

We continue to make the same assumptions as in Lemma 7.6.

Lemma 7.7. There exist invertible matrices A and B in Mn(F) and a rank-one preserving mapping ϕ2 : A → Mn(F) such that:
(i) ϕ1(T) = Aϕ2(T)B;
(ii) the restriction of ϕ2 to M is the identity mapping.

Proof. Exactly as in Lemma 5.14. �

Lemma 7.8. The mapping ϕ2 of Lemma 7.7 is a composition of mappings f and g defined in 5.2 and 5.3, with f1 and g1 injective. The former is present only when n1 = 1 and the latter is present only when nk = 1.

Proof. This is exactly as in Lemma 5.16, except that in this case f1 and g1 need not be surjective. □

7.9. Conclusion. Lemmas 7.3–7.8 constitute a proof of Theorem 7.2. □

Note added in proof

Special cases of our results (the linear case for the space of upper triangular matrices) were obtained by Chooi and Lim [5].

References

[1] J. Aczél, J. Dhombres, Functional Equations in Several Variables, Encyclopedia of Mathematics and its Applications, vol. 31, Cambridge University Press, Cambridge, 1989.

[2] L. Beasley, Rank-k-preservers and preservers of sets of ranks, Linear Algebra Appl. 55 (1983) 11–17.

[3] P. Botta, Linear maps preserving rank less than or equal to one, Linear and Multilinear Algebra 20 (1987) 197–201.

[4] G.H. Chan, M.H. Lim, Linear transformations on symmetric matrices that preserve commutativity, Linear Algebra Appl. 47 (1982) 11–22.

[5] W.L. Chooi, M.H. Lim, Linear preservers on triangular matrices, Linear Algebra Appl. 269 (1998) 241–255.

[6] K.R. Davidson, Nest Algebras, Pitman Research Notes in Mathematics, vol. 191, Longman Scientific and Technical, London, 1988.

[7] A.A. Jafarian, A.R. Sourour, Spectrum-preserving linear maps, J. Funct. Anal. 66 (1986) 255–261.

[8] T.J. Laffey, R. Loewy, Linear transformations which do not increase rank, Linear and Multilinear Algebra 26 (1990) 181–186.

[9] R. Loewy, Linear transformations which preserve or decrease rank, Linear Algebra Appl. 121 (1989) 151–161.

[10] M. Marcus, Linear transformations on matrices, J. Nat. Bureau Standards 75B (1971) 107–113.

[11] M. Marcus, B.N. Moyls, Transformations on tensor product spaces, Pacific J. Math. 9 (1959) 1215–1221.

[12] H. Minc, Linear transformations on matrices: rank 1 preservers and determinant preservers, Linear and Multilinear Algebra 4 (1977) 265–272.

[13] M. Omladič, P. Šemrl, Spectrum-preserving additive maps, Linear Algebra Appl. 153 (1991) 67–72.

[14] M. Omladič, P. Šemrl, Additive mappings preserving operators of rank one, Linear Algebra Appl. 182 (1993) 239–256.

[15] H. Radjavi, Commutativity-preserving operators on symmetric matrices, Linear Algebra Appl. 61 (1984) 219–224.

[16] A.R. Sourour, Invertibility preserving linear maps on L(X), Trans. Amer. Math. Soc. 348 (1996) 14–30.

[17] W. Watkins, Linear maps that preserve commuting pairs of matrices, Linear Algebra Appl. 14 (1976) 29–35.