Final Exam December 12, 2002 - University of California...

Math 110

PROFESSOR KENNETH A. RIBET

Final Exam

December 12, 2002

12:30–3:30 PM

The scalar field F will be the field of real numbers unless otherwise specified.

Please put away all books, calculators, electronic games, cell phones, pagers, .mp3 players, PDAs, and other electronic devices. You may refer to a single 2-sided sheet of notes. Please write your name on each sheet of paper that you turn in. Don't trust staples to keep your papers together. Explain your answers as is customary and appropriate. Your paper is your ambassador when it is graded.

Disclaimer: These solutions were written by Ken Ribet. As usual, sorry if they're a bit terse, and apologies also if I messed something up. If you see an error, send me e-mail and I'll post an updated document.

1. Let T : V → V be a linear transformation. Suppose that all non-zero elements of V are eigenvectors for T. Show that T is a scalar multiple of the identity map, i.e., that there is a λ ∈ R such that T(v) = λv for all v ∈ V.

We can and do assume that V is non-zero. Choose v non-zero in V and let λ be the eigenvalue for v. We must show that Tw = λw for all w ∈ V. This is clear if w is a multiple of v. If not, w and v are linearly independent, so that w + v is non-zero, in particular. In this case, let µ be the eigenvalue of w and let a be the eigenvalue of w + v. Then a(w + v) = T(w + v) = Tw + Tv = µw + λv, and so (a − µ)w = (λ − a)v. Because v and w are linearly independent, we get a = µ and a = λ. Hence µ = λ.

2. Let V be a 7-dimensional vector space over R. Consider linear transformations T : V → V that satisfy T^2 = 0. What are the possible values of nullity(T)? (Be sure to justify your answer: don't just reply with a list of numbers.)


I believe that 4, 5, 6, and 7 are the possible values. The condition T^2 = 0 may be rephrased as the inclusion R(T) ⊆ N(T). Since the dimensions of R(T) and N(T) need to sum to 7, the inclusion forces rank(T) ≤ nullity(T), so we must have nullity(T) ≥ 4. Conversely, for each desired value of nullity(T) between 4 and 7, we can fabricate a T that gives this value. For example, to have nullity(T) = 5, we define T on the standard basis vectors e1, . . . , e7 by having T(e1) = · · · = T(e5) = 0, T(e6) = e1 and T(e7) = e2. This works: the rank is at least 2 because the range contains the space spanned by e1 and e2, while the nullity is at least 5 because the first 5 basis vectors are sent to 0 by T.

3. Let A be a real n × n matrix and let A^t be its transpose. Prove that L_A and L_{A^tA} have the same null space. In other words, for x in R^n, regarded as a column vector, show that A^tAx = 0 if and only if Ax = 0. Prove also that the linear transformations L_{A^t} and L_{A^tA} have the same range. (It may help to introduce an inner product on R^n.)

Use the standard inner product on R^n. If Ax is non-zero, then the inner product 〈Ax, Ax〉 is positive. Rewrite it as 〈x, A^tAx〉 to see that A^tAx is non-zero. The statement about the null space is what we have just proved. The one about ranges follows by dimension considerations, since one range is a priori contained in the other.
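The constructions in these solutions are easy to spot-check numerically. For instance, the nullity(T) = 5 example from Problem 2 can be written as a 7 × 7 matrix and verified with NumPy (a sketch, not part of the exam):

```python
import numpy as np

# Problem 2's example as a matrix acting on column vectors:
# T(e1) = ... = T(e5) = 0, T(e6) = e1, T(e7) = e2.
T = np.zeros((7, 7))
T[0, 5] = 1  # column 6 is e1, i.e. T(e6) = e1
T[1, 6] = 1  # column 7 is e2, i.e. T(e7) = e2

assert np.all(T @ T == 0)                  # T^2 = 0
nullity = 7 - np.linalg.matrix_rank(T)     # rank-nullity theorem
print(nullity)  # 5
```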

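Similarly, Problem 3's claim that L_A and L_{A^tA} have the same null space can be sanity-checked on a random rank-deficient matrix (a numerical illustration only; the proof is the inner-product argument above):

```python
import numpy as np

rng = np.random.default_rng(0)
# Product of 5x3 and 3x5 factors: rank at most 3, so a genuine null space.
A = rng.standard_normal((5, 3)) @ rng.standard_normal((3, 5))

def nullity(M):
    return M.shape[1] - np.linalg.matrix_rank(M)

# N(A) is contained in N(A^t A); equal dimensions then force equality.
print(nullity(A), nullity(A.T @ A))  # the two values agree
```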
4. Let T : V → V be a linear transformation. Suppose that v1, v2, . . . , vk ∈ V are eigenvectors of T that correspond to distinct eigenvalues. Assume that W is a T-invariant subspace of V that contains the vector v1 + v2 + · · · + vk. Show that W contains each of v1, v2, . . . , vk.

This is a recycled quiz problem. See Tom’s web page for a solution.

Let A ∈ M_{n×n}(R) be a real square matrix of size n. Let v1, . . . , vn be the rows of A. Assume that 〈vi, vj〉 = δij for 1 ≤ i, j ≤ n; here, 〈 , 〉 is the standard inner product on R^n and δij is the Kronecker delta. Show that the corresponding statement holds for the columns of A, i.e., that the inner product of the ith and the jth columns of A is δij.

I discussed this in class a week ago. After you write down what's involved, you see that you need to know only that BA = I if AB = I when A and B are square matrices.

5. Let A = (aij) be an n × n real matrix with the following properties:

(1) the diagonal entries are positive;
(2) the non-diagonal entries are negative;
(3) the sum of the entries in each row is positive.

Suppose, for each i = 1, . . . , n, that we have

    ai1x1 + ai2x2 + · · · + ainxn = 0,

where the xi are real numbers. Show that all xi are 0. [Assume the contrary and let i be such that xi is at least as big in absolute value as the other xk. Consider the ith equation.] Prove that det A is non-zero.


We assume the contrary and let i be such that xi is at least as big in absolute value as the other xk. After changing the sign of all the xk, we can and do assume that xi is positive. Then

    aii xi = ∑_{j≠i} (−aij) xj ≤ ∑_{j≠i} (−aij) xi = xi ∑_{j≠i} (−aij).

Dividing by xi, we get aii ≤ −∑_{j≠i} aij, which is contrary to the hypothesis that the sum of the entries in the ith row is positive. This business with the xk shows that the null space of L_A is {0}. We conclude that A is invertible, so that its determinant is non-zero.
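A quick numerical illustration of Problem 5 (the matrix below is a hypothetical example, built to satisfy the three conditions: positive diagonal, negative off-diagonal entries, positive row sums):

```python
import numpy as np

A = np.array([[ 3., -1., -1.],
              [-1.,  3., -1.],
              [-1., -1.,  3.]])

assert np.all(np.diag(A) > 0)            # condition (1)
assert np.all(A.sum(axis=1) > 0)         # condition (3): each row sums to 1
print(np.linalg.det(A))  # non-zero, as the argument predicts (16 exactly)
```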

6. Decide whether or not each of the following real matrices is diagonalizable over the field of real numbers:

    [ 10  9 ]        [ 1 0 2 ]        [ 1 2 9  1 ]
    [  0 10 ]        [ 0 1 2 ]        [ 2 9 9  6 ]
                     [ 0 0 3 ]        [ 9 9 3  4 ]
                                      [ 1 6 4 −1 ]

Choose one of the above three matrices—call it A. Exhibit a matrix Q of the same size as A such that Q^{−1}AQ is diagonal.

The first matrix is a quintessential example of a non-diagonalizable matrix, as discussed in class. Its only eigenvalue is 10. If it were diagonalizable, it would be 10 times the identity matrix. The third matrix is diagonalizable because it is symmetric; all I want here is that you quote the theorem to the effect that symmetric matrices can be diagonalized over R. The middle matrix is diagonalizable because R^3 has a basis of eigenvectors for it: there's some eigenvector with eigenvalue 3, and we see that e1 and e2 are eigenvectors with eigenvalue 1. For this matrix, we can find Q by finding an eigenvector with eigenvalue 3. Such a vector has to have a non-zero third coefficient, since otherwise it would have eigenvalue 1. We can scale it so that it's of the form (a, b, 1). Apply the matrix to this vector and see what condition on a and b we have if we want the vector to be turned into 3 times itself. The answer is that we need a = b = 1. And, indeed, you can check instantly that (1, 1, 1) is an eigenvector with eigenvalue 3. We take Q here to be the matrix whose columns are the three eigenvectors, namely

    Q = [ 1 0 1 ]
        [ 0 1 1 ]
        [ 0 0 1 ].
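The diagonalization of the middle matrix by this Q can be checked directly:

```python
import numpy as np

A = np.array([[1., 0., 2.],
              [0., 1., 2.],
              [0., 0., 3.]])
Q = np.array([[1., 0., 1.],   # columns: e1, e2, and the eigenvector (1, 1, 1)
              [0., 1., 1.],
              [0., 0., 1.]])

D = np.linalg.inv(Q) @ A @ Q
assert np.allclose(D, np.diag([1., 1., 3.]))   # Q^{-1} A Q = diag(1, 1, 3)
```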


Math 110 Professor K. A. Ribet

Final Exam May 18, 2005

This exam was a 180-minute exam. It began at 5:00 PM. There were 7 problems, for which the point counts were 8, 9, 8, 7, 8, 7, and 7. The maximum possible score was 54.

Please put away all books, calculators, electronic games, cell phones, pagers, .mp3 players, PDAs, and other electronic devices. You may refer to a single 2-sided sheet of notes. Explain your answers in full English sentences as is customary and appropriate. Your paper is your ambassador when it is graded. At the conclusion of the exam, please hand in your paper to your GSI.

1. Let T be a linear operator on a vector space V. Suppose that v1, . . . , vk are vectors in V such that T(vi) = λi vi for each i, where the numbers λ1, . . . , λk are distinct elements of F. If W is a T-invariant subspace of V that contains v1 + · · · + vk, show that W contains vi for each i = 1, . . . , k.

See my solutions for homework set #11.

2. Assume that T : V → W is a linear transformation between finite-dimensional vector spaces over F. Show that T is 1-1 if and only if there is a linear transformation U : W → V such that UT is the identity map on V.

One direction is obvious; if UT = 1_V and T(v) = 0, then v = U(T(v)) = 0, so that T must be injective. The harder direction is to construct U when T is given as 1-1. Choose a basis v1, . . . , vn of V and let wi = T(vi) for each i. Because T is injective, the wi are linearly independent. Complete w1, . . . , wn to a basis w1, . . . , wn, wn+1, . . . , wm of W. We can define U : W → V by declaring the images U(wi) of the basis vectors wi; if we want wi to go to xi ∈ V, then we define U(∑ ai wi) = ∑ ai xi. We take xi = vi for i = 1, . . . , n and take (for instance) xi = 0 for i > n. It is clear that (UT)(vi) = vi for each basis vector vi of V. It follows from this that UT is the identity map on V.

3. Let T be a self-adjoint linear operator on a finite-dimensional inner product space V (over R or C). Show that every eigenvalue of T is a positive real number if and only if 〈T(x), x〉 is a positive real number for all non-zero x ∈ V.


Because T is self-adjoint, there is an orthonormal basis β of V in which T is diagonal. Let λ1, . . . , λn be the diagonal entries of the diagonal matrix [T]β. The λi are real numbers even if V is a complex vector space. The issue is whether or not these real numbers are all positive. If x has coordinates a1, . . . , an in the basis β, then 〈T(x), x〉 = ∑_i |ai|^2 λi. In particular, we can take x to be the ith element of β, so that its ith coordinate is 1 and its other coordinates are 0. Then 〈T(x), x〉 = λi. Thus if 〈T(x), x〉 is always positive, λi is positive for each i. Conversely, if the λi are positive, the sum ∑_i |ai|^2 λi is non-negative for all n-tuples (a1, . . . , an) and is positive whenever (a1, . . . , an) is non-zero, i.e., whenever the vector x corresponding to (a1, . . . , an) is a non-zero element of V.
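A numerical illustration of Problem 3 (the operator below is a hypothetical example, built to be symmetric with all eigenvalues at least 1/2):

```python
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
T = B.T @ B + 0.5 * np.eye(4)       # symmetric; eigenvalues >= 0.5 > 0

assert np.all(np.linalg.eigvalsh(T) > 0)   # every eigenvalue is positive

x = rng.standard_normal(4)                 # a (generic) non-zero vector
assert x @ T @ x > 0                       # ... and <T(x), x> > 0
```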

4. Let V be an inner product space over F and let X and Y be subspaces of V such that 〈x, y〉 = 0 for all x ∈ X, y ∈ Y. Suppose further that V = X + Y. Prove that Y coincides with X⊥ = { v ∈ V | 〈x, v〉 = 0 for all x ∈ X }.

Under the assumptions of the problem, everything in Y is perpendicular to everything in X, so we have Y ⊆ X⊥. If v is perpendicular to all vectors in X, we must show that v lies in Y. Because V = X + Y, we may write v = x + y with x ∈ X, y ∈ Y. We have 0 = 〈v, x〉 = 〈x + y, x〉 = 〈x, x〉 + 〈y, x〉 = 〈x, x〉 + 0 = 〈x, x〉. Because 〈x, x〉 = 0, x = 0 by the axioms of an inner product. Hence v = y does indeed lie in Y. (This problem was inspired by a comment of Chu-Wee Lim, who pointed out to me that the definition on page 398 of the book, and the comments following the definition, are extremely bizarre.)

5. Let V be the space of polynomials in t with real coefficients. Use the Gram–Schmidt process to find non-zero polynomials p0(t), p1(t), p2(t), p3(t) in V such that ∫_{−1}^{1} pi(t) pj(t) dt = 0 for 0 ≤ i < j ≤ 3. (It may help to note that ∫_{−1}^{1} t^i dt = 0 when i is an odd positive integer.)

There is an implicit inner product here: 〈f, g〉 = ∫_{−1}^{1} f(t) g(t) dt. The vectors 1, t, t^2, and t^3 are linearly independent elements of V and we can apply G–S to this sequence of vectors to generate an orthogonal set of vectors; this is what the problem asks for. The computations, which I won't reproduce, are easy because of the remark about the integrals of odd powers of t. The answer that I got is that the pi are, in order: 1, t, t^2 − 1/3, t^3 − (3/5)t. I did these computations in class; the pi are called Legendre polynomials.
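The Gram–Schmidt computation can be reproduced symbolically; this sketch (using SymPy) recovers the polynomials stated above:

```python
import sympy as sp

t = sp.symbols('t')

def inner(f, g):
    # The implicit inner product: integrate f*g over [-1, 1].
    return sp.integrate(f * g, (t, -1, 1))

ortho = []
for v in [1, t, t**2, t**3]:
    for p in ortho:                       # subtract projections onto earlier p's
        v = v - inner(v, p) / inner(p, p) * p
    ortho.append(sp.expand(v))

print(ortho)  # [1, t, t**2 - 1/3, t**3 - 3*t/5]
```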


6. Let A be an n × n matrix over F and let A^t be the transpose of A. Using the equality "row rank = column rank," show for each λ ∈ F that the vector spaces { x ∈ F^n | Ax = λx } and { x ∈ F^n | A^t x = λx } have the same dimension.

Combining the "row rank = column rank" theorem with the formula relating rank and nullity, we see that the linear transformations L_A and L_{A^t} have equal nullities. These nullities are the dimensions of the null spaces of the two transformations; meanwhile, the null spaces are exactly the two vector spaces of the problem in the case λ = 0. To treat the case of arbitrary λ, we have only to replace A by A − λI_n.

7. Let A be an element of the vector space M_{n×n}(F), which has dimension n^2 over F. Show that the span of the set of matrices { I_n, A, A^2, A^3, . . . } has dimension ≤ n over F.

Direct application of Cayley–Hamilton: the set { I_n, A, A^2, A^3, . . . , A^{n−1} } has the same span as the full set of all powers of A.
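The bound is easy to illustrate numerically on a random matrix (a hypothetical example): vectorize the powers of A and compute the dimension of their span as a matrix rank.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4
A = rng.standard_normal((n, n))

# Flatten I, A, A^2, ..., A^(2n) into row vectors; by Cayley-Hamilton each
# power is a combination of I, A, ..., A^(n-1), so the span has dimension <= n.
powers = np.vstack([np.linalg.matrix_power(A, k).ravel() for k in range(2 * n + 1)])
print(np.linalg.matrix_rank(powers))  # at most n = 4
```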


Math 110

PROFESSOR KENNETH A. RIBET

Final Examination

December 20, 2008

12:40–3:30 PM, 101 Barker Hall

Please put away all books, calculators, and other portable electronic devices—anything with an ON/OFF switch. You may refer to a single 2-sided sheet of notes. When you answer questions, write your arguments in complete sentences that explain what you are doing: your paper becomes your only representative after the exam is over. All vector spaces are finite-dimensional over the field of real numbers or the field of complex numbers (except for the space P(R) of all real polynomials, which occurs in the first problem).

Problem   Possible points
1         9 points
2         6 points
3         7 points
4         7 points
5         7 points
6         7 points
7         7 points
Total:    50 points

1. Exhibit examples of:

(a.) A linear operator D : P(R) → P(R) and a linear operator I : P(R) → P(R) such that DI is the identity but ID is not the identity.

(b.) A (non-zero) generalized eigenvector that is not an eigenvector.

(c.) A normal operator on a positive-dimensional real vector space whose characteristic polynomial has no real roots.


In the first part, "D" was intended to evoke differentiation and "I" was intended to suggest integration (indefinite integration with constant term 0, for example). It's hard to give "answers" to these questions because different people will have different examples.

2. On the vector space P2(R) of real polynomials of degree ≤ 2, consider the inner product given by

    〈p, q〉 = ∫_{−1}^{1} p(x) q(x) dx.

Apply the Gram–Schmidt procedure to the basis (1, x, x^2) to produce an orthonormal basis of P2(R).

When you apply the Gram–Schmidt process to the sequence of polynomials 1, x, x^2, x^3, . . ., the resulting sequence of orthogonal polynomials is (up to normalization) the sequence of Legendre polynomials. According to the Wikipedia article on Orthogonal Polynomials, the first three of them are 1, x and (3x^2 − 1)/2. I haven't done this calculation lately, but I'll be wading through 38 such calculations in the very near future.

3. Suppose that P is a linear operator on V satisfying P^2 = P, and let v be an element of V. Show that there are unique x ∈ null P and y ∈ range P such that v = x + y.

First, let x = v − Pv and y = Pv. Then clearly v = x + y, and y = Pv is in the range of P. Since Px = Pv − P^2v = Pv − Pv = 0, x is in the null space of P. Secondly, suppose that v = x + y with x ∈ null P and y ∈ range P. Then Pv = Px + Py = 0 + Py = Py. Further, Py = y because y is in the range of P. Indeed, if y = Pw, then Py = P(Pw) = P^2w = Pw = y. Hence Pv = y, which implies that x = v − y = v − Pv. In other words, the x and y that we are dealing with are the ones that we knew about already. Conclusion: x and y are unique. Note: this was Exercise 21 of Chapter 5.
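The decomposition can be illustrated with a small concrete projection (a hypothetical example: projection onto the x-axis along the line y = x):

```python
import numpy as np

P = np.array([[1., -1.],
              [0.,  0.]])
assert np.allclose(P @ P, P)        # P^2 = P

v = np.array([3., 2.])
x = v - P @ v                       # the null-space piece
y = P @ v                           # the range piece
assert np.allclose(P @ x, 0)        # x is in null P
assert np.allclose(x + y, v)        # v = x + y
```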

4. Let T be a linear operator on an inner product space for which trace(T*T) = 0. Prove that T = 0.

Choose an orthonormal basis for the space, and let A = [aij] be the matrix of T in this basis. The matrix of T* is the conjugate-transpose A* of A. The matrix of T*T is then A*A; for each i, i = 1, . . . , n, the (i, i)th entry of this matrix is ∑_j (conjugate of aji)·aji = ∑_j |aji|^2. The trace of A*A is thus ∑_{i,j} |aji|^2. Each term |aji|^2 is a non-negative real number. If the sum is 0, then each term is 0. This means that all aij are 0, i.e., that A = 0. If A = 0, then of course T = 0. Note: See problem 18 of Chapter 10, where a somewhat more sophisticated proof of the indicated assertion was contemplated by the author of our book.
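The identity trace(A*A) = ∑ |aij|^2 at the heart of this argument is easy to confirm numerically:

```python
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

lhs = np.trace(A.conj().T @ A)    # trace of A*A
rhs = np.sum(np.abs(A) ** 2)      # sum of |aij|^2 over all entries
assert np.isclose(lhs, rhs)
```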


5. If X and Y are subspaces of V with dim X ≥ dim Y, show that there is a linear operator T : V → V such that T(X) = Y.

Let d = dim Y and let n = dim V. Choose a basis (v1, . . . , ve) of X; note that e ≥ d by hypothesis. Complete this basis to a basis (v1, . . . , vn) of V. Choose a basis (y1, . . . , yd) of Y. We define T : V → V so that T(vi) = yi for i = 1, . . . , d and T(vi) = 0 for i > d. Namely, if v = ∑_{i=1}^{n} ai vi, we set Tv = ∑_{i=1}^{d} ai yi. The range of T is then the span of the yi, which is Y. Already, however, the span of (v1, . . . , vd) is mapped onto Y by T. Thus X is mapped onto Y by T.

6. Let N be a linear operator on the inner product space V. Suppose that N is both normal and nilpotent. Prove that N = 0. [The case F = C will probably be easier for you. Do it first to ensure partial credit.]

In the complex case, we can invoke the spectral theorem and find an orthonormal basis in which N has a diagonal matrix representation. Since some power of N is 0, the diagonal entries are all 0. Hence N = 0, as required. In the real case, the required assertion follows from the statement of Exercise 24 of Chapter 8, or—even better—from Exercise 7 of Chapter 7. (You have Axler's solution to that exercise.) Alternatively, we can argue (cheat) as follows: choose an orthonormal basis for the space, so that N becomes a nilpotent matrix of real numbers that commutes with its transpose. We can think of this as a nilpotent matrix of complex numbers that commutes with its adjoint. Such a matrix corresponds to a complex normal nilpotent operator, which we already know to be 0. Hence it's 0.

7. Suppose n is a positive integer and T : C^n → C^n is defined by

    T(x1, . . . , xn) = (x1 + · · · + xn, . . . , x1 + · · · + xn);

in other words, T is the linear operator whose matrix (with respect to the standard basis) consists of all 1's. Find all eigenvalues and eigenvectors of T.

This was Exercise 7 of Chapter 5. You should have a solution available to you.
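For reference, the eigenvalues of the all-1's matrix can be computed numerically; the exercise shows they are n (on the all-1's vector) and 0 (on vectors whose coordinates sum to 0):

```python
import numpy as np

n = 5
T = np.ones((n, n))                        # the all-1's matrix, here with n = 5
eigvals = np.sort(np.linalg.eigvalsh(T))   # T is symmetric, so eigvalsh applies
print(eigvals)  # 0 with multiplicity n-1, plus the single eigenvalue n
```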


Math 110

PROFESSOR KENNETH A. RIBET

Final Examination

May 10, 2010
11:30 AM–2:30 PM, 10 Evans Hall

Please put away all books, calculators, and other portable electronic devices—anything with an ON/OFF switch. You may refer to a single 2-sided sheet of notes. For numerical questions, show your work but do not worry about simplifying answers. For proofs, write your arguments in complete sentences that explain what you are doing. Remember that your paper becomes your only representative after the exam is over. Please turn in your exam paper to your GSI when your work is complete.

The point values of the problems were 12, 7, 7, 8, 8, 8 for a total of 50 points.

1. Label each of the following statements as TRUE or FALSE. Along with your answer,provide a clear justification (e.g., a proof or counterexample).

a. Each system of n linear equations in n unknowns has at least one solution.

Obviously false: for example, we could have the system x + y = 1, x + y = 0, which is clearly inconsistent (no solutions).

b. If A is an n×n complex matrix such that A∗ = −A, every eigenvalue of A has real part 0.

Because A* = −A, A commutes with its adjoint and thus is diagonalizable in an orthonormal basis. In this basis, we compute the adjoint by conjugating the elements on the diagonal. These elements must be the negatives of their conjugates, which implies that they are indeed purely imaginary. So the answer is "true"!


c. If W and W′ are 5-dimensional subspaces of a 9-dimensional vector space V, there is at least one non-zero vector of V that lies in both W and W′.

Yes, this is true. A fancy way to see this is to consider the linear transformation W × W′ → V taking a pair (w, w′) to w − w′. Because W × W′ has dimension 5 + 5 = 10 and V has dimension 9 < 10, there must be a non-zero element (w, w′) in the null space of this map. We then have w − w′ = 0, i.e., w = w′. Because w is in W and w′ in W′ and because these elements are equal, w lies in the intersection W ∩ W′. So the correct answer is "true."

d. If T is a linear transformation on V = C^25, there is a T-invariant subspace of V that has dimension 17.

Again, this is true: Schur's theorem tells you that there is an orthonormal basis of V in which T is upper-triangular. The span of the first 17 elements of this basis will be a T-invariant subspace of V of dimension 17.

2. Let T be a linear transformation on an inner-product space. Show that T*T and T have the same null space.

This problem was done in class toward the end of the semester. If Tv = 0, then of course T*Tv = 0 as well. The problem is to prove that if T*Tv = 0, then already Tv = 0. However, if T*Tv = 0, then 〈T*Tv, v〉 = 0. Using the definition of "adjoint," we convert the inner product to 〈Tv, Tv〉. Since this quantity is 0, the vector Tv must be zero (in view of the definition of an inner-product space).

3. Let T : V → W be a linear transformation between finite-dimensional vector spaces over a field F. Show that there is a subspace X of V such that the restriction of T to X is 1-1 and has the same range as T.

If we admit the rank–nullity theorem, then we can do this problem in the following fairly brain-dead way. Choose a basis v1, . . . , vm for the null space of T and extend this basis to a basis v1, . . . , vn of V. Let X be the span of the last n − m vectors in this basis. Clearly, the range of the restriction of T to X is the same as the range of T; indeed, the range of T is the set of all vectors T(a1v1 + · · · + anvn), which is the set of all vectors T(am+1vm+1 + · · · + anvn), i.e., the range of the restriction. Since the range has dimension n − m, which is the dimension of X, the restriction has to be 1-1.

A more enlightened way to do this is to redo the proof of the rank–nullity theorem.

4. Suppose that T is a self-adjoint operator on a finite-dimensional real vector space V and that S : V → V is a linear transformation with the following property: every eigenvector of T is also an eigenvector of S. Show that there is a basis of V in which both T and S are diagonal. Conclude that S and T commute.

Choose an orthonormal basis of V in which T is diagonal. The elements of this basis must be eigenvectors of T and thus will be eigenvectors of S. Accordingly, S (as well as T) is diagonal in this basis. Since diagonal matrices commute with each other, S and T commute.


5. Use mathematical induction and the definition of the determinant to show for all n × n complex matrices A that the determinant of the complex conjugate of A is the complex conjugate of det A. (The "complex conjugate" of a matrix A is the matrix whose entries are the complex conjugates of the entries of A.)

Let B be the complex conjugate of A. Work by induction as instructed. For the 1 × 1 case, A and B have one entry each, and these entries are conjugates of each other. The determinant of a 1 × 1 matrix is just the single element in the matrix, so we're good. For the induction step, we assume n > 1 and that the result is known for the (n − 1) × (n − 1) case. We have, by definition,

    det B = ∑_{j=1}^{n} (−1)^{1+j} b1j det B1j.

In the sum, b1j is the complex conjugate of a1j by the definition of B. Also, det B1j is the complex conjugate of det A1j by the inductive assumption. (It's best to remark first that B1j is the complex conjugate of the matrix A1j.) It follows that det B is the complex conjugate of det A = ∑_{j=1}^{n} (−1)^{1+j} a1j det A1j, as required.
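The identity checks out numerically on a random complex matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))

# det of the entrywise conjugate equals the conjugate of det A
assert np.isclose(np.linalg.det(A.conj()), np.conj(np.linalg.det(A)))
```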

6. Let W be a subspace of V, where V is a finite-dimensional vector space. Assume that W is a proper subspace of V (i.e., that it is not all of V). Show that there is a non-zero element of V* that is 0 on each element of W. (A harder version of this problem was on the second midterm.)

Take a basis v1, . . . , vm of W and extend it to a basis v1, . . . , vn of V. Let f1, . . . , fn be the basis of V* that's dual to v1, . . . , vn. If f = fn, f is non-zero but it's zero on v1, . . . , vm and therefore on W.


MATH 110, mock final test.

Name:
Student ID #:

All the necessary work to justify an answer and all the necessary steps of a proof must be shown clearly to obtain full credit. Partial credit may be given but only for significant progress towards a solution. Show all relevant work in logical sequence and indicate all answers clearly. Cross out all work you do not wish considered. Books and notes are allowed. Calculators, computers, cell phones, pagers and other electronic devices are not allowed during the test.

1. Using reduced row echelon form, determine whether the linear system

x1 + 2x2 − x3 = 1

2x1 + x2 + 2x3 = 3

x1 − 4x2 + 7x3 = 4

has a solution.
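No solutions are given for the mock final, but Problem 1 can be spot-checked with a computer algebra system; this sketch (using SymPy) row-reduces the augmented matrix:

```python
import sympy as sp

# Augmented matrix of the system in Problem 1.
M = sp.Matrix([[1,  2, -1, 1],
               [2,  1,  2, 3],
               [1, -4,  7, 4]])
R, pivots = M.rref()
print(R)       # last row is [0, 0, 0, 1]: the system is inconsistent
print(pivots)  # the augmented column is a pivot column
```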

2. Let A and B be linear maps on a vector space V such that AB = 0. Prove that rank(A) + rank(B) ≤ dim V.

3. Let A be an n × n matrix such that A(i, j) = 0 for more than n^2 − n pairs of values of i and j. Prove that det A = 0.

4. Squares are labelled 1 through 4 consecutively from left to right. A player begins by placing a marker in square 2. A die is rolled and the marker is moved one square to the left if 1 or 2 is rolled, or one square to the right if 3, 4, 5 or 6 is rolled. This process continues until the marker ends in square 1 (winning the game) or in square 4 (losing the game). What is the probability of winning?
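One way to attack Problem 4 is first-step analysis; this sketch (a hypothetical setup, with w2 and w3 denoting the winning probabilities from squares 2 and 3) solves the resulting 2 × 2 linear system:

```python
import numpy as np

# From square 2: move left (win) with prob 1/3, or to square 3 with prob 2/3.
# From square 3: move left to square 2 with prob 1/3, or right (lose) with 2/3.
# Hence w2 = 1/3 + (2/3) w3 and w3 = (1/3) w2, i.e. in matrix form:
M = np.array([[1.0, -2.0 / 3.0],
              [-1.0 / 3.0, 1.0]])
b = np.array([1.0 / 3.0, 0.0])
w2, w3 = np.linalg.solve(M, b)
print(w2)  # 0.42857... = 3/7
```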

5. Find the general solution to the following system of differential equations:

x′1 = 8x1 + 10x2

x′2 = −5x1 − 7x2
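For Problem 5, the eigenvalues of the coefficient matrix determine which exponentials appear in the general solution; a quick numerical check (not a full solution):

```python
import numpy as np

A = np.array([[ 8.0, 10.0],
              [-5.0, -7.0]])
eigvals = np.linalg.eigvals(A)
print(np.sort(eigvals))  # eigenvalues -2 and 3: solutions mix e^{-2t} and e^{3t}
```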

6. Let V be the vector space of polynomials in x and y of (total) degree at most 2, and let T : V → V be a linear map defined by

    (Tf)(x, y) = ∂f/∂x (x, y) + ∂f/∂y (x, y).

Find the Jordan canonical form of T, the corresponding Jordan basis, and the minimal polynomial of T.

7. Let A : V → V be a linear map on an n-dimensional vector space V .

• Prove that the set of all linear maps B on V satisfying the condition AB = 0 is a subspace of the space of all linear maps on V.

• Can every subspace of the space of all linear transformations on V be obtained in that manner, by the choice of a suitable A?

• What is the dimension of that subspace when A is a Jordan block of order n with eigenvalue 0?

8. Using the Gram-Schmidt procedure, find an orthonormal basis for the real vector space span{sin t, cos t, 1} equipped with the inner product 〈f, g〉 = ∫_0^π f(t) g(t) dt.

9. Let V be an inner product space, and let y, z ∈ V. Define T : V → V by Tx := 〈x, y〉z. Prove that T is a linear map and find an explicit expression for T*.

10. Let V be the inner product space of complex-valued continuous functions on [0, 1] with the inner product

    〈f, g〉 = ∫_0^1 f(t) \overline{g(t)} dt.

Let h ∈ V, and define T : V → V by Tf := hf. Prove that T is a unitary operator if and only if |h(t)| = 1 for all t ∈ [0, 1].
