KKP MTE 3110_3


Transcript of KKP MTE 3110_3


1.3.1 HISTORY OF VECTOR SPACE

Historically, the first ideas leading to vector spaces can be traced back as far as

17th century's analytic geometry, matrices, systems of linear equations, and Euclidean

vectors. The modern, more abstract treatment, first formulated by Giuseppe Peano in

the late 19th century, encompasses more general objects than Euclidean space, but

much of the theory can be seen as an extension of classical geometric ideas like

lines, planes and their higher-dimensional analogs. He called his n-tuples “number

complexes of order n," which suggests an affinity with the study of hypercomplex numbers.

(Mathematics Math21b, Fall 2004; http://www.math.harvard.edu/archive/21b_fall_04/exhibits/historyvectorspace/index.html)

Vector spaces stem from affine geometry, via the introduction of coordinates in

the plane or three-dimensional space. Around 1636,

Descartes and Fermat founded analytic geometry by identifying solutions to an

equation of two variables with points on a plane curve. To achieve geometric solutions

without using coordinates, Bolzano introduced, in 1804, certain operations on points,

lines and planes, which are predecessors of vectors. This work was made use of in the

conception of barycentric coordinates by Möbius in 1827. The foundation of the

definition of vectors was Bellavitis' notion of the bipoint, an oriented segment one of

whose ends is the origin and the other one a target. Vectors were reconsidered with the

presentation of complex numbers by Argand and Hamilton and the inception

of quaternions and biquaternions by the latter. They are elements in R2, R4, and R8;

treating them using linear combinations goes back to Laguerre in 1867, who also

defined systems of linear equations.

In 1857, Cayley introduced the matrix notation, which allows for a harmonization

and simplification of linear maps. Around the same time, Grassmann studied the

barycentric calculus initiated by Möbius. He envisaged sets of abstract objects endowed with

operations. In his work, the concepts of linear independence and dimension, as well

as scalar products are present. Actually Grassmann's 1844 work exceeds the

framework of vector spaces, since his considering multiplication, too, led him to what

are today called algebras. 


An important development of vector spaces is due to the construction of function

spaces by Lebesgue. This was later formalized by Banach and Hilbert, around

1920. Hilbert took up the concepts of fields and vector spaces in number theory. At that

time, algebra and the new field of functional analysis began to interact, notably with key

concepts such as spaces of p-integral functions and Hilbert spaces. Vector spaces,

including infinite-dimensional ones, then became a firmly established notion, and many

mathematical branches started making use of this concept. (Wikipedia;

http://en.wikipedia.org/wiki/Vector_space)

It is possible that the concept of a vector space is one of many which

mathematicians have found they have been using for many years without knowing it, or

perhaps one should say, needing to know it. For the problem is not merely to assess the

neglect of Grassmann by mathematicians, but to respond to the fact that for 50 years

mathematicians worked confidently without feeling the need to elaborate an abstract

structure, when it is now so often assumed that making such abstractions is essential to

mathematical work.


1.3.2 THEORY OF VECTOR SPACE

Vectors in Plane R2

Introduction to Vectors

The concept of vector space relies on the idea of vectors. A first example

of vectors is arrows in a fixed plane, starting at one fixed point. Such vectors are

called Euclidean vectors and can be used to describe

physical forces or velocities or further entities having both a magnitude and a

direction. In general, the term vector is used for objects on which two operations

can be exerted. The concrete nature of these operations depends on the type of

vector under consideration, and can often be described by different means, e.g.

geometric or algebraic. In view of the algebraic ideas behind these concepts

explained below, the two operations are called vector addition and scalar

multiplication.

Vector addition means that two vectors v and w can be "added" to yield the

sum v + w, another vector. The sum of two arrow vectors is calculated by

constructing the parallelogram two of whose sides are the given vectors v and w.

The sum of the two is given by the diagonal arrow of the parallelogram, starting

at the common point of the two vectors (left-most image below).

Scalar multiplication combines a number—also called a scalar—r and a vector w. In

the example, a vector w represented by an arrow is multiplied by a scalar by

dilating or shrinking the arrow accordingly: if r = 2 (r = 1/4), the resulting

vector r · w has the same direction as w, but is stretched to double the length

(shrunk to a fourth of the length, respectively) of w (right image below).

Equivalently 2 · w is the sum w + w. In addition, for negative factors, the direction

of the arrow is swapped: (−1) · v = −v has the opposite direction and the same

length as v (blue vector in the right image).


Another example of vectors is provided by pairs of real numbers x and y, denoted

(x, y). (The order of the components x and y is significant, so such a pair is also

called an ordered pair.) These pairs form vectors, by defining vector addition and

scalar multiplication componentwise, i.e.

(x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)

and

r · (x, y) = (rx, ry).

(Wikipedia; http://en.wikipedia.org/wiki/Vector_space)
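The componentwise definitions above can be tried out directly. The following Python sketch is illustrative only (the helper names are mine, not from the module):

```python
def vec_add(u, v):
    # (x1, y1) + (x2, y2) = (x1 + x2, y1 + y2)
    return (u[0] + v[0], u[1] + v[1])

def scalar_mul(r, v):
    # r . (x, y) = (rx, ry)
    return (r * v[0], r * v[1])

print(vec_add((1, 2), (3, 4)))    # (4, 6)
print(scalar_mul(2, (3, -1)))     # (6, -2)
```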

Vector Operations

Let’s consider a vector v whose initial point is the origin in an xy-

coordinate system and whose terminal point is (a, b). We say that the vector is

in standard position and refer to it as a position vector. Note that the ordered pair

(a, b) defines the vector uniquely. Thus we can use (a, b) to denote the vector. To emphasize

that we are thinking of a vector and to avoid confusion with

ordered-pair and interval notation, we generally write

v = < a, b >.

The coordinate a is the scalar horizontal component of the vector, and the

coordinate b is the scalar vertical component of the vector. By scalar, we mean


a numerical quantity rather than a vector quantity. Thus, < a, b > is considered to be

the component form of v. Note that a and b are not vectors and should not be

confused with the vector component definition.

Now consider AC, with A = (x1, y1) and C = (x2, y2). Let’s see how to find the

position vector equivalent to AC. As we can see in the figure below, the initial

point A is relocated to the origin (0, 0). The coordinates of P are found by

subtracting the coordinates of A from the coordinates of C. Thus, P = (x2 - x1, y2 - y1)

and the position vector is OP.

It can be shown that AC and OP have the same magnitude and direction and

are therefore equivalent. Thus, AC = OP = < x2 - x1, y2 - y1 >.

The component form of AC, with A = (x1, y1) and C = (x2, y2), is

AC = < x2 - x1, y2 - y1 >.

Now that we know how to write vectors in component form, let’s restate some

definitions.

The length of a vector v is easy to determine when the components of the vector

are known. For v = < v1, v2 >, we have

|v|^2 = v1^2 + v2^2.

Using the Pythagorean theorem,

|v| = √(v1^2 + v2^2).

The length, or magnitude, of a vector v = < v1, v2 > is given by |v| = √(v1^2 + v2^2).
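The component form and the magnitude formula can be checked numerically; a small Python sketch (the sample points are my own choice):

```python
import math

def component_form(A, C):
    # AC = < x2 - x1, y2 - y1 >
    return (C[0] - A[0], C[1] - A[1])

def magnitude(v):
    # |v| = sqrt(v1^2 + v2^2), by the Pythagorean theorem
    return math.sqrt(v[0] ** 2 + v[1] ** 2)

v = component_form((1, 2), (4, 6))   # A = (1, 2), C = (4, 6)
print(v)                             # (3, 4)
print(magnitude(v))                  # 5.0
```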

Two vectors are equivalent if they have the same magnitude and the same

direction.

Let u = < u1, u2 > and v = < v1, v2 >.

Then < u1, u2 > = < v1, v2 > if and only if u1 = v1 and u2 = v2.

Operations on Vectors

To multiply a vector v by a positive real number, we multiply its length by the

number. Its direction stays the same. When a vector v is multiplied by 2 for

instance, its length is doubled and its direction is not changed. When a vector is

multiplied by 1.6, its length is increased by 60% and its direction stays the same.

To multiply a vector v by a negative real number, we multiply its length by the

absolute value of the number and reverse its direction. When a vector is multiplied by -2, its length is

doubled and its direction is reversed. Since real numbers work like scaling

factors in vector multiplication, we call them scalars and the products kv are

called scalar multiples of v.


For a real number k and a vector v = < v1, v2 >, the scalar product of k and v is

kv = k.< v1, v2 > = < kv1, kv2 >.

The vector kv is a scalar multiple of the vector v.

Now we can add two vectors using components. To add two vectors given in

component form, we add the corresponding components. Let u = < u1, u2 > and v

= < v1, v2 >.

Then

u + v = < u1 + v1, u2 + v2 >

If u = < u1, u2 > and v = < v1, v2 >,

Then u + v = < u1 + v1, u2 + v2 >.

Before we define vector subtraction, we need to define - v.


The opposite of v = < v1, v2 >, shown below, is 

- v = (- 1).v = (- 1)< v1, v2 > = < - v1, - v2 >

Vector subtraction such as u - v involves subtracting corresponding components.

We show this by rewriting u - v as u + (- v). If u = < u1, u2 > and v = < v1, v2 >,

then

u - v = u + (- v) = < u1, u2 > + < - v1, - v2 > = < u1 + (- v1), u2 + (- v2) > = < u1 - v1,

u2 - v2 >

We can illustrate vector subtraction with parallelograms, just as we did vector

addition.

Vector Subtraction

If u = < u1, u2 > and v = < v1, v2 >, then

u - v = < u1 - v1, u2 - v2 >.
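The operations defined so far can be collected in a few lines of Python; this sketch (with hypothetical helper names) also checks that u - v agrees with u + (-v):

```python
def scalar_mult(k, v):
    # kv = < k v1, k v2 >
    return (k * v[0], k * v[1])

def add(u, v):
    # u + v = < u1 + v1, u2 + v2 >
    return (u[0] + v[0], u[1] + v[1])

def sub(u, v):
    # u - v = < u1 - v1, u2 - v2 >
    return (u[0] - v[0], u[1] - v[1])

u, v = (5, 2), (3, 7)
print(add(u, v))    # (8, 9)
print(sub(u, v))    # (2, -5)
assert sub(u, v) == add(u, scalar_mult(-1, v))   # u - v = u + (-v)
```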


It is interesting to compare the sum of two vectors with the difference of the same

two vectors in the same parallelogram. The vectors u + v and u - v are the

diagonals of the parallelogram.

Before we state the properties of vector addition and scalar multiplication, we

need to define another special vector—the zero vector. The vector whose initial

and terminal points are both at the origin is the zero vector, denoted by O, or < 0, 0 >. Its

magnitude is 0. In vector addition, the zero vector is the additive identity vector; v

+ O = v with

< v1, v2 > + < 0, 0 > = < v1, v2 >

Operations on vectors share many of the same properties as operations on real

numbers.

(Maths Online; http://www.math10.com/en/geometry/vectors-operations/vectors-operations.html)

Properties of Vector Operations

Properties of Vector Addition and Scalar Multiplication

For all vectors u, v, and w, and for all scalars b and c:

1. u + v = v + u (commutative property of addition)

2. u + (v + w) = (u + v) + w (associative property of addition)

3. v + O = v


4. 1.v = v; 0.v = O

5. v + (- v) = O

6. b(cv) = (bc)v

7. (b + c)v = bv + cv (distributive property)

8. b(u + v) = bu + bv (distributive property)

(Maths Online; http://www.math10.com/en/geometry/vectors-operations/vectors-operations.html)

(A.K. Sharma: 2007)
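The eight properties listed above can be spot-checked on sample vectors; the following Python sketch (values chosen arbitrarily for illustration) verifies several of them:

```python
def add(p, q):
    return (p[0] + q[0], p[1] + q[1])

def mul(k, p):
    return (k * p[0], k * p[1])

u, v, w = (1, 2), (3, -4), (-2, 5)
b, c = 2, 5
O = (0, 0)

assert add(u, v) == add(v, u)                          # property 1
assert add(u, add(v, w)) == add(add(u, v), w)          # property 2
assert add(v, O) == v                                  # property 3
assert mul(b, mul(c, v)) == mul(b * c, v)              # property 6
assert mul(b + c, v) == add(mul(b, v), mul(c, v))      # property 7
assert mul(b, add(u, v)) == add(mul(b, u), mul(b, v))  # property 8
print("all checked properties hold for these samples")
```

Of course, such checks only confirm the properties for particular values; the general statements follow from the componentwise definitions.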

Length of Vector

This drawing shows how we set about doing this.

Notice the right-hand coordinate system but with the z-axis vertical this time.

The formula for the length OP or |p| is a kind of three-dimensional form of

Pythagoras' theorem.

Here's how the working would go if we had a = 4 and b = 3 and c = 12.

Using Pythagoras' theorem in the yellow right-angled triangle we get 

OQ = the square root of (16 + 9) = 5

and then, using Pythagoras' theorem in the vertical green right-angled triangle,

we get

OP = the square root of (25 + 144) = 13.

Alternatively, in just one step of working, we have


OP = the square root of (16 + 9 + 144) = 13.
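The two-step and one-step computations above can be reproduced directly:

```python
import math

a, b, c = 4, 3, 12
OQ = math.sqrt(a ** 2 + b ** 2)                   # sqrt(16 + 9) = 5.0
OP = math.sqrt(OQ ** 2 + c ** 2)                  # sqrt(25 + 144) = 13.0
OP_direct = math.sqrt(a ** 2 + b ** 2 + c ** 2)   # sqrt(16 + 9 + 144) = 13.0
print(OQ, OP, OP_direct)
```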

A reminder on unit vectors

Unit vectors are vectors which have unit length. They are important in many

applications. In three dimensions, the vectors i, j and k which run along the x, y

and z axes respectively, all have unit length.

Sometimes, for example when working with planes, it is necessary to find a unit

vector in the same direction as a given vector.

Suppose we need the unit vector in the same direction as u = 2i - j + 2k, written

as a row vector. The length or magnitude of u is √(2^2 + (-1)^2 + 2^2) = 3, and the

required vector must be parallel to u.

So we can get the vector we want by just scaling down u by a factor of 3.

Unit vectors are often written with a little hat on top, so here we would have

found û.

(http://www.netcomuk.co.uk/~jenolive/vect5.html)
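The unit-vector construction for u = 2i - j + 2k can be sketched in Python (representing the i, j, k components as a tuple):

```python
import math

u = (2, -1, 2)
length = math.sqrt(sum(x * x for x in u))   # sqrt(4 + 1 + 4) = 3.0
u_hat = tuple(x / length for x in u)        # (2/3, -1/3, 2/3)
print(u_hat)
```

Scaling u down by its own length always yields a vector of length 1 in the same direction, provided u is nonzero.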

Properties of Length in Rn

If u and v are vectors in Rn and k is any scalar, then:

(a) ║u║ ≥ 0

(b) ║u║ = 0 if and only if u = 0

(c) ║ku║ = │k│║u║

(d) ║u + v║ ≤ ║u║ + ║v║ (Triangle inequality)

Proof (c) If u = (u1, u2, . . . , un), then ku = (ku1, ku2, . . . , kun), so

║ku║ = √[(ku1)^2 + (ku2)^2 + . . . + (kun)^2]

║ku║ = │k│√[u1^2 + u2^2 + . . . + un^2]

║ku║ = │k│║u║

Proof (d) ║u + v║^2 = (u + v) . (u + v) = (u . u) + 2(u . v) + (v . v)

= ║u║^2 + 2(u . v) + ║v║^2

≤ ║u║^2 + 2│u . v│ + ║v║^2

≤ ║u║^2 + 2║u║║v║ + ║v║^2 (by the Cauchy-Schwarz inequality)

= (║u║ + ║v║)^2

The result now follows on taking square roots of both sides.

(Howard Anton & Chris Rorres: 2005)

Dot Product

The dot product of a and b, written a.b, is defined by a.b = a b cos θ, where a and

b are the magnitudes of a and b and θ is the angle between the two vectors, which

satisfies 0 ≤ θ ≤ π.

The dot product is distributive: a.(b + c) = a.b + a.c 

and commutative: a.b = b.a 

Knowing that the angle between each pair of the i, j, and k vectors is π/2 radians (90

degrees) and cos π/2 = 0, we can derive a handy alternative definition: Let

u = ai + bj + ck 

v = xi + yj + zk 

then, 


u.v = (ai + bj + ck).( xi + yj + zk) 

=>u.v = (ai + bj + ck). xi + (ai + bj + ck).yj + (ai + bj + ck).zk 

The angle between any nonzero vector and itself is 0, and cos 0 = 1, so i.i = 1.

Hence, 

u.v = a x + b y + c z 

This means that for any vector, a,

a.a = a^2

(http://members.tripod.com/Paul_Kirby/vector/Vdotproduct.html)
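The component formula u.v = ax + by + cz derived above is easy to implement; a minimal Python sketch:

```python
def dot(u, v):
    # u.v = a x + b y + c z for u = ai + bj + ck and v = xi + yj + zk
    return sum(ui * vi for ui, vi in zip(u, v))

u, v = (2, 3, 1), (1, 0, 4)
print(dot(u, v))    # 2*1 + 3*0 + 1*4 = 6
a = (1, 2, 2)
print(dot(a, a))    # a.a = 9, the square of the magnitude |a| = 3
```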

Angle Between Two Vectors

We can now, given the coordinates of any two nonzero vectors u and v 

find the angle θ between them:

u = ai + bj + ck 

v = xi + yj + zk 

u.v = u v cos θ 

u.v = a x + b y + c z 

=> u v cos θ = a x + b y + c z 

=> θ = cos^-1[(a x + b y + c z) / (u v)]

If one of the vectors is the null vector 0, from (0,0,0) to (0,0,0), this is

the only vector without a direction and it isn't meaningful to ask the angle

between this vector and another vector. One of the main uses of the dot product

is to determine whether two vectors, a and b, are orthogonal (perpendicular).

If a . b = 0, then either a is orthogonal to b or a = 0 or b = 0. 

(http://members.tripod.com/Paul_Kirby/vector/Vdotproduct.html)
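The angle formula θ = cos^-1[(ax + by + cz)/(uv)] translates directly into code; this sketch assumes both vectors are nonzero, as required above:

```python
import math

def angle(u, v):
    # theta = arccos(u.v / (|u| |v|)); defined only for nonzero u and v
    d = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return math.acos(d / (nu * nv))

theta = angle((1, 0, 0), (0, 1, 0))   # i and j have dot product 0
print(math.degrees(theta))            # orthogonal vectors: 90 degrees
```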


Vectors in Space Rn

Subspace

A subset W of a vector space V is called a subspace of V if W is itself a

vector space under addition and scalar multiplication as defined on V.

Theorem 1:

If W is a set of one or more vectors from a vector space V , then W is a subspace

of V if and only if the following hold:

1. If u and v are vectors in W, then u + v is in W

2. If k is any scalar and u is any vector in W, then ku is in W.

If W satisfies the conditions above, then W must also contain the zero vector of

V (take k = 0) and the additive inverse of each of its vectors (take k = −1).

Theorem 2:

If Ax = Ō is a homogeneous linear system of m equations in n unknowns, then

the set of solution vectors is a subspace of Rn.

Theorem 3:

A vector w is a linear combination of the vectors v1, v2, . . . , vr if there are scalars

ki such that w = k1 v1 + k2 v2 + . . . + kr vr

If v1, v2, . . . , vr are vectors in V then

1. The set W of all linear combinations of v1, v2, . . . , vr is a subspace of V

2. W is the smallest subspace of V that contains v1, v2, . . . , vr

Theorem 4:

If S = {v1, v2, . . . , vr}, vi є V , then the subspace W of V consisting of all linear

combinations of the vectors in S is called the space spanned by v1, v2, . . . , vr,

and the vectors v1, v2, . . . , vr span W. We write W = span(S) or W = span {v1,

v2, . . . , vr}


If S = {v1, v2, . . . , vr}, S′ = { w1, w2, . . . , wr} are two sets of vectors in the vector

space V , then span {v1, v2, . . . , vr} = span{ w1, w2, . . . , wr}

if and only if each vector in S is a linear combination of vectors in S′, and each vector

in S′ is a linear combination of vectors in S.

(http://www.math.ualberta.ca/~atdawes/m102_2009/ch5.pdf)
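Deciding whether a vector w lies in span{v1, v2} amounts to solving a linear system for the coefficients. For two vectors in R2 this can be done with Cramer's rule; the vectors below are chosen purely for illustration:

```python
v1, v2, w = (1, 1), (1, -1), (5, 1)

# Solve k1*v1 + k2*v2 = w by Cramer's rule (assumes the determinant is nonzero)
det = v1[0] * v2[1] - v2[0] * v1[1]       # 1*(-1) - 1*1 = -2
k1 = (w[0] * v2[1] - v2[0] * w[1]) / det  # 3.0
k2 = (v1[0] * w[1] - w[0] * v1[1]) / det  # 2.0
print(k1, k2)

# Check: w really equals k1*v1 + k2*v2
assert (k1 * v1[0] + k2 * v2[0], k1 * v1[1] + k2 * v2[1]) == w
```

Here w = 3 v1 + 2 v2, so w is in span{v1, v2}; in fact these two vectors span all of R2.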

Linear Independence

If S = {v1, v2, . . . , vr} is a nonempty set of vectors, then S is a linearly

independent set if the vector equation k1 v1 + k2 v2 + . . . + kr vr = 0 has only

the trivial solution k1 = 0, k2 = 0, . . . kr = 0.

If there are some nonzero ki such that k1 v1 + k2 v2 + . . . + kr vr = 0, then S is

a linearly dependent set.

Theorem 1:

A set S with two or more vectors is

1. Linearly dependent if and only if at least one vector in S is expressible as a

linear combination of the other vectors in S

2. Linearly independent if and only if no vector in S is expressible as a linear

combination of the other vectors in S.

Theorem 2:

1. A finite set of vectors that contains the zero vector is linearly dependent

2. A set with exactly two vectors is linearly independent if and only if neither

vector is a scalar multiple of the other.

Theorem 3:

Let S = {v1, v2, . . . , vr} be a nonempty set of vectors in Rn. If r > n, then S is linearly

dependent.

Note: The case r > n is equivalent to having a system of equations with more

unknowns than equations.


(http://www.math.ualberta.ca/~atdawes/m102_2009/ch5.pdf)
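Theorem 2, part 2, gives a quick computational test in R2: two vectors are linearly dependent exactly when the determinant u1 v2 - u2 v1 vanishes. A small Python sketch (the function name is mine):

```python
def dependent_pair(u, v):
    # In R2, u and v are linearly dependent iff one is a scalar
    # multiple of the other, i.e. iff u1*v2 - u2*v1 = 0.
    return u[0] * v[1] - u[1] * v[0] == 0

print(dependent_pair((1, 2), (2, 4)))   # True: (2, 4) = 2 * (1, 2)
print(dependent_pair((1, 2), (2, 5)))   # False: linearly independent
```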

Basis, Dimension and Rank

Basis

If V is any vector space and S = {v1, v2, . . . , vn} is a set of vectors in V, then S

is called a basis for V if the following two conditions hold:

(a) S is linearly independent.

(b) S spans V.

If S = {v1, v2, . . . , vn} is a basis for a vector space V, then every vector v in V can be

expressed in the form v = c1 v1 + c2 v2 + . . . + cn vn in exactly one way.

Proof: Since S spans V, it follows from the definition of a spanning set that every

vector in V is expressible as a linear combination of the vectors in S. To see that

there is only one way to express a vector as a linear combination of the vectors

in S, suppose that some vector v can be written as

v = c1 v1 + c2 v2 +...+ cn vn and also as

v = k1 v1 + k2 v2 +...+ kn vn

Subtracting the second equation from the first gives

0 = (c1 - k1) v1 + (c2 - k2) v2 + . . . + (cn - kn) vn

Since the right side of this equation is a linear combination of vectors in S, the

linear independence of S implies that

c1 – k1 = 0, c2 – k2 = 0, . . . , cn – kn = 0

that is

c1 = k1, c2 = k2, . . . , cn = kn

Thus, the two expressions for v are the same.

Dimension

In mathematics, the dimension of a vector space V is the cardinality of

a basis of V. It is sometimes called Hamel dimension or algebraic dimension to

distinguish it from other types of dimension. All bases of a vector space have


equal cardinality (see dimension theorem for vector spaces) and so the

dimension of a vector space is uniquely defined. The dimension of the vector

space V over the field F can be written as dimF(V) or as [V : F], read "dimension

of V over F". When F can be inferred from context, often just dim(V) is written.

We say V is finite-dimensional if the dimension of V is finite. Here are some

theorems:

(Wikipedia; http://en.wikipedia.org/wiki/Dimension_(vector_space))

Theorem 1:

Let V be a finite dimensional vector space and let {v1, v2, . . . , vn} be any basis of V.

1. If a set has more than n vectors, then it is linearly dependent

2. If a set has fewer than n vectors, then it does not span V

Theorem 2:

All bases for a finite dimensional vector space have the same number of vectors.

The dimension of a finite dimensional vector space V, denoted by dim (V), is

defined to be the number of vectors in a basis for V. In addition, we define the

zero vector space to have dimension zero.

Theorem 3:

If V is an n-dimensional vector space, and if S is a set in V with exactly n vectors,

then S is a basis for V if either S spans V or S is linearly independent. We know

how many vectors are in the basis of V since we know its dimension. So given a

set S containing the right number of vectors, we only need to show S spans V or

the vectors in S are linearly independent.


Theorem 4:

Let S be a set of vectors in a finite dimensional vector space V

1. If S spans V but is not a basis for V , then S can be reduced to a basis for V

by removing appropriate vectors from S.

2. If S is a linearly independent set that is not a basis for V , then S can be

enlarged to a basis for V by adding appropriate vectors into S.

Theorem 5:

If W is a subspace of a finite dimensional space V , then dim(W) ≤ dim(V ) and if

dim(W) = dim(V ) then W = V .

(Howard Anton, Chris Rorres : 2005)

Rank

Theorem 1:

If A is any matrix, then rank(A) = rank(AT)

Proof

rank(A) = dim(row space of A) = dim(column space of AT) = rank(AT)

The following theorem establishes an important relationship between the rank

and nullity of a matrix.

Theorem 2:

If A is a matrix with n columns, then rank(A) + nullity (A) = n

Proof Since A has n columns, the homogeneous linear system Ax = 0 has n unknowns

(variables). These fall into two categories: the leading variables and the free

variables. Thus

[number of leading variables] + [number of free variables] = n

But the number of leading variables is the same as the number of leading 1's in

the reduced row-echelon form of A, and this is the rank of A. Thus

rank(A) + [number of free variables] = n

The number of free variables is equal to the nullity of A. This is so because the

nullity of A is the dimension of the solution space of Ax = 0, which is the same as

the number of parameters in the general solution [see 3, for example], which is

the same as the number of free variables. Thus

rank(A) + nullity(A) = n

The proof of the preceding theorem contains two results that are of importance in

their own right.

Theorem 3:

If A is an m × n matrix, then

(a) rank(A) = number of leading variables in the solution of Ax=0

(b) nullity (A) = number of parameters in the general solution of Ax=0

(Howard Anton, Chris Rorres : 2005)
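Theorem 2 (rank(A) + nullity(A) = n) can be verified computationally by row-reducing a sample matrix; the sketch below uses exact Fraction arithmetic and an illustrative 3x3 matrix of my own choosing:

```python
from fractions import Fraction

def rank(rows):
    # Rank via Gaussian elimination with exact rational arithmetic
    m = [[Fraction(x) for x in row] for row in rows]
    r = 0
    for col in range(len(m[0])):
        pivot = next((i for i in range(r, len(m)) if m[i][col] != 0), None)
        if pivot is None:
            continue
        m[r], m[pivot] = m[pivot], m[r]
        for i in range(len(m)):
            if i != r and m[i][col] != 0:
                f = m[i][col] / m[r][col]
                m[i] = [a - f * b for a, b in zip(m[i], m[r])]
        r += 1
    return r

A = [[1, 2, 3], [2, 4, 6], [1, 0, 1]]   # second row = 2 * first row
n = 3
print(rank(A), n - rank(A))             # rank 2, nullity 1: 2 + 1 = 3
```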


1.3.3 APPLICATION OF VECTOR SPACE

Vector spaces have manifold applications as they occur in many circumstances, namely

wherever functions with values in some field are involved. They provide a framework to

deal with analytical and geometrical problems, or are used in the Fourier transform. This

list is not exhaustive: many more applications exist, for example in optimization.

The minimax theorem of game theory, stating the existence of a unique payoff when all players

play optimally, can be formulated and proven using vector space

methods. Representation theory fruitfully transfers the good understanding of linear

algebra and vector spaces to other mathematical domains such as group theory.

Example 1: Application of Subspace

The CMYK Color Model

Color magazines and books are printed using what is called a CMYK color model.

Colors in this model are created using four colored inks: cyan (C), magenta (M), yellow (Y), and black (K). The

colors can be created either by mixing inks of the four types and printing with the mixed

inks (the spot color method) or by printing dot patterns (called rosettes) with the four

colors and allowing the reader's eye and perception process to create the desired color

combination (the process color method). There is a numbering system for commercial

inks, called the Pantone Matching System, that assigns every commercial ink color a

number in accordance with its percentages of cyan, magenta, yellow, and black. One

way to represent a Pantone color is by associating the four base colors with the vectors

in R4 and describing the ink color as a linear combination of these using coefficients

between 0 and 1, inclusive. Thus, an ink color p is represented as a linear combination of

the form

p = c1 c + c2 m + c3 y + c4 k

where 0 ≤ ci ≤ 1 and c, m, y, k are the vectors representing the four base colors. The set of all such linear combinations is called CMYK space,

although it is not a subspace of R4 . (Why?) For example, Pantone color 876CVC is a

mixture of 38% cyan, 59% magenta, 73% yellow, and 7% black; Pantone color 216CVC

is a mixture of 0% cyan, 83% magenta, 34% yellow, and 47% black; and Pantone color

328CVC is a mixture of 100% cyan, 0% magenta, 47% yellow, and 30% black. We can

denote these colors by p1 = (0.38, 0.59, 0.73, 0.07), p2 = (0, 0.83, 0.34, 0.47), and

p3 = (1, 0, 0.47, 0.30), respectively.

Example 2: Application in Game

The game is a grid of 5 by 5 buttons that can light up. Pressing a button will toggle its

light (on to off, or off to on), but it will also toggle the lights of the (4, 3, or 2) adjacent

buttons. Each problem presents you with a certain pattern of lit buttons, and to solve the

puzzle we have to turn all the lights out (which is not easy, because if we're not careful

we will turn more lights on than off). This is the primary goal; the secondary goal is to

accomplish this with as few moves (pressing a button is one move) as possible.

Consider the following game:


 


 

The solution presented here uses the algebraic notion of "fields". A field is a set with

two operations, addition and multiplication that satisfies the following axioms:

 

1. Closure under addition.

2. Closure under multiplication.

3. Associative Law of Addition.

4. Associative Law of Multiplication.

5. Distributive Law.

6. Existence of 0.

7. Existence of 1.

8. Existence of additive inverses (negatives).

9. Existence of multiplicative inverses (reciprocals), except for 0.

10. Commutative Law of Addition.

11. Commutative Law of Multiplication.

 

Examples of fields are Q (rational numbers), R (real numbers), C (complex numbers),

and Z/pZ (integers modulo a prime p).

 

In particular, Z/2Z (denoted usually by Z2) is a field consisting of only two elements 0

and 1. The addition and multiplication on Z2 will be explained later. One can talk about a

vector space over a field k. One example of such a space is k^n, where n is a positive

integer.

 


Let us return to our game now. Carsten Haese gives the solution presented below.

 The goal here is to find a universal solution to this puzzle, i.e. to find a general

algorithm that will tell you which buttons you have to press given a certain initial pattern.

The central questions concerning this universal solution are

 a) Is there a solution for any initial pattern? If yes, why? If not, describe the set of initial

patterns that have a solution.

 b) Assuming a solution exists for a given initial pattern, how can this solution be derived

from the pattern?

 This approach to the puzzle is to find an equation that describes how pressing the

buttons will affect the lights. First, I need to represent the lights numerically. One

obvious representation that incidentally will also prove to be algebraically very useful is

to represent an unlit button with 0 and a lit button with 1. I enumerate the 25

buttons/lights from left to right, top to bottom, and thus each pattern of lights is

represented by a 25-tuple of 0's and 1's, i.e. by an element of the set Z2^25.

 To represent the way in which pressing a button affects the pattern of lights, I define an

addition operator on the set Z2 = {0, 1} by

0 + 0 := 0,  0 + 1 := 1 + 0 := 1,  1 + 1 := 0.

This describes accurately the switching mechanism when one summand represents the

state of a light and the other summand represents whether or not that light is switched.

(Not switching an unlit button results in an unlit button, as well as switching a lit button;

not switching a lit button results in a lit button, as well as switching an unlit button.) By


componentwise application, the addition on Z2 yields an addition on Z2^25 in a natural

way.

Of course, to take advantage of this addition in Z2^25 I will want to represent the effect of

pressing a button as an element of Z2^25. I do this the obvious way. Each button has its

own pattern of which lights it will switch and which lights it will leave. By writing 0 for

leaving the button unchanged and 1 for switching, each of the buttons (or rather the

"button action", i.e. the effect of pressing that button) is described by an element in Z2^25.

Let ai in Z2^25 (i = 1, . . . , 25) be the button action that is caused by pressing the i-th button.

For example,

a2 = (1,1,1,0,0,0,1,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0),

a19 = (0,0,0,0,0,0,0,0,0,0,0,0,0,1,0,0,0,1,1,1,0,0,0,1,0).

 

I can now start the first attempt at writing down an equation that will solve a lights out

puzzle. Let y in Z2^25 be the initial light pattern of the puzzle. Now I might play around with

the buttons, for example by pressing the 21st button I would transform the light pattern y

into a21 + y. Then I might press the 15th button, resulting in a15 + (a21 + y). And so on.

The addition is associative, so I can omit the parentheses without ambiguity. The goal

is to find a (finite!) sequence (n(1),n(2),...,n(N)) such that 

 

an(N) + an(N-1) + ... + an(1) + y = (0,0,...,0).

 

This approach is awkward, though. The addition in Z2^25 is commutative, so without loss

of generality the sequence n(1)...n(N) can be chosen to be non-decreasing.

Furthermore, since ai + ai = 0 for all i = 1, . . . , 25 (in fact, v + v = 0 for all v in Z2^25), the

sequence can even be chosen to be strictly increasing since no button action needs to

be performed more than once. Evidently this means that N≤25. To achieve an elegant

form of the equation with exactly 25 summands, I define a multiplication operator

on Z2 that will allow me to skip those button actions that are not needed in the solution

sequence. I define 0*0 := 0*1 := 1*0 := 0 and 1*1 := 1. It is known that (Z2, + ,*) is a field,


and with the natural multiplication s*(v1, . . . , v25) := (s*v1, . . . , s*v25), (Z2^25, +, *) is a vector

space over the field Z2.

 

Now, the above equation can be equivalently rewritten as

 x1*a1 + ... + x25*a25 + y = (0,0,...,0)

 with some x = (x1, . . . , x25) in Z2^25. By adding y to both sides of this equation I get

 x1*a1 + ... + x25*a25 = y.

 

 Now I want to write the left hand side as a matrix product. I should mention at this point

that all vectors are supposed to be columns, even if I write them as rows for

convenience.

If I let M be the matrix whose i-th row is ai  (transposed), my equation becomes

 Mx = y.

 It is easy to verify that M is the block matrix

M =
| B I 0 0 0 |
| I B I 0 0 |
| 0 I B I 0 |
| 0 0 I B I |
| 0 0 0 I B |

with

B =
| 1 1 0 0 0 |
| 1 1 1 0 0 |
| 0 1 1 1 0 |
| 0 0 1 1 1 |
| 0 0 0 1 1 |

where I is the 5x5 identity matrix and 0 is the 5x5 zero matrix.

 

Now I have completed the task of finding an equation that describes the solution of a

lights out puzzle. A sequence of button actions that is described

by x = (x1, . . . , x25) in Z2^25 will make the light pattern y in Z2^25 vanish if and only if Mx = y. This


is a linear equation in a vector space, so even though the equation does not deal with

real numbers, I can apply standard solving methods.

 

The above questions can now be answered by studying the properties of M.

 I have written a small computer program that simultaneously solves the 25 equations

 Mx = bi

 (where {bi | i = 1, ..., 25} is the canonical basis of Z2^25) using Gaussian elimination.

 I won't flood this paper with the resulting matrix, even though the following main results

can't really be verified without it.

 The main results are:

 * dim range M = 23, dim ker M = 2.

  Therefore there are puzzles that can't be solved. Those that can be solved have 4

different solutions, because (x24, x25) can be chosen arbitrarily from Z2^2.

 * Examining the last two lines in the final matrix yields a criterion for the existence of a

solution. With the vectors

 k1 = (1,0,1,0,1,1,0,1,0,1,0,0,0,0,0,1,0,1,0,1,1,0,1,0,1),

 k2 = (1,1,0,1,1,0,0,0,0,0,1,1,0,1,1,0,0,0,0,0,1,1,0,1,1),

   the criterion can be written as 0 = <y, k1> = <y, k2>, where < . , . > denotes the

canonical scalar product in Z2^25. It turns out that {k1, k2} is a basis of ker M, which is not

really surprising, because the solvability of Mx = y is equivalent to y being in range

M, which (since M is symmetric) is equivalent to y being orthogonal to ker M.
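These claims can be checked with a short program. The sketch below is my own stdlib-only Python (not the author's original program): it builds M from the button actions, row reduces it over Z2, and confirms that k1 lies in ker M.

```python
# Build the 25x25 Lights Out matrix M over Z2 and row reduce it.
# Entry M[i][j] = 1 iff pressing button j toggles light i on the 5x5 grid.

def lights_out_matrix(n=5):
    size = n * n
    M = [[0] * size for _ in range(size)]
    for r in range(n):
        for c in range(n):
            j = r * n + c
            # button j toggles itself and its orthogonal neighbours
            for dr, dc in ((0, 0), (1, 0), (-1, 0), (0, 1), (0, -1)):
                rr, cc = r + dr, c + dc
                if 0 <= rr < n and 0 <= cc < n:
                    M[rr * n + cc][j] = 1
    return M

def rank_gf2(M):
    """Rank over Z2 via Gaussian elimination (addition of rows is XOR)."""
    A = [row[:] for row in M]
    rank = 0
    for col in range(len(A[0])):
        pivot = next((r for r in range(rank, len(A)) if A[r][col]), None)
        if pivot is None:
            continue
        A[rank], A[pivot] = A[pivot], A[rank]
        for r in range(len(A)):
            if r != rank and A[r][col]:
                A[r] = [a ^ b for a, b in zip(A[r], A[rank])]
        rank += 1
    return rank

M = lights_out_matrix()
print(rank_gf2(M), 25 - rank_gf2(M))  # dim range M and dim ker M

# k1 from the text is indeed in ker M, i.e. M*k1 = 0 over Z2:
k1 = [1,0,1,0,1, 1,0,1,0,1, 0,0,0,0,0, 1,0,1,0,1, 1,0,1,0,1]
print(all(sum(M[i][j] * k1[j] for j in range(25)) % 2 == 0 for i in range(25)))
```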


3.0 Mini Project

Question:

Figure 1.0: The Road of Downtown Jacksonville, Florida

The figure above represents an area of downtown Jacksonville, Florida. The streets are

all one-way with the arrows indicating the direction of traffic flow. The traffic is measured

in vehicles per hour (vph) while the figures in and out of the network given here are

based on midweek peak traffic hours, 7 am to 9 am and 4 pm to 6 pm. Based on the

figure:

(a) Set up and solve a system of linear equations to find the possible flows f1, f2, f3 and f4.

(b) If traffic is regulated on B to C so that f4 = 300 vehicles per hour, what will the average flows on the streets be?

(c) What are the minimum and maximum possible flows on each street?

[Figure 1.0: Network diagram. The one-way streets Hogan Street, Laura Street, Duval Street and Monroe Street meet at intersections A, B, C and D; the internal flows are labelled f1, f2, f3 and f4, and the external flows in and out of the network are 400, 800, 225, 350, 125, 300, 600 and 250 vph.]


Answer:

First, we can highlight the main points:

• The roads represent an area of downtown Jacksonville, Florida.

• The streets are all one-way, with the arrows indicating the direction of traffic flow.

• The traffic is measured in vehicles per hour (vph).

• The figures in and out of the network are based on midweek peak traffic hours, 7 am to 9 am and 4 pm to 6 pm.

Then, let's construct a mathematical model that can be used to analyse the flows f1, f2, f3

and f4 within the network for question (a).

We have to assume that the following traffic law applies:

- 'All traffic entering an intersection must leave that intersection.'

The constraints on the traffic are described by the following system of linear equations, one for each intersection.

A: f1 + f2 = 625

B: f1 + f4 = 475

C: f3 + f4 = 900

D: f2 + f3 = 1050

Intersection | Traffic in | Traffic out | Linear equation
A            | f1 + f2    | 400 + 225   | f1 + f2 = 625
B            | 350 + 125  | f1 + f4     | f1 + f4 = 475
C            | f3 + f4    | 600 + 300   | f3 + f4 = 900
D            | 800 + 250  | f2 + f3     | f2 + f3 = 1050


Therefore, the method of Gauss-Jordan elimination is used to solve this system of equations. The augmented matrix and the reduced row echelon form of the preceding system are as follows.

[ 1  1  0  0 |  625 ]
[ 1  0  0  1 |  475 ]
[ 0  0  1  1 |  900 ]
[ 0  1  1  0 | 1050 ]

which row reduces (using operations such as R2 <-> R1 and R4 = R4 - R2) to

[ 1  0  0  1 |  475 ]
[ 0  1  0 -1 |  150 ]
[ 0  0  1  1 |  900 ]
[ 0  0  0  0 |    0 ]


The system of equations that corresponds to this reduced row echelon form is

f1 + f4 = 475

f2 – f4 = 150

f3 + f4 = 900

We see that f4 is a free variable. Therefore the linear system is consistent and has infinitely many solutions. Expressing each leading variable in terms of the free variable, the possible flows are:

f1 = 475 – f4

f2 = 150 + f4

f3 = 900 – f4

f4 free
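As a quick sanity check, the parametric solution can be verified against the four intersection equations, and part (b) follows by substituting f4 = 300. This is a sketch of my own (not part of the original working):

```python
# Verify the parametric solution of the traffic network and evaluate part (b).

def flows(f4):
    """Flows (f1, f2, f3, f4) implied by the reduced system; f4 is the free variable."""
    return (475 - f4, 150 + f4, 900 - f4, f4)

def conserved(f1, f2, f3, f4):
    """Traffic in equals traffic out at every intersection."""
    return (f1 + f2 == 625 and   # A
            f1 + f4 == 475 and   # B
            f3 + f4 == 900 and   # C
            f2 + f3 == 1050)     # D

# Part (b): regulating B -> C so that f4 = 300 vph
print(flows(300))             # prints (175, 450, 600, 300)
print(conserved(*flows(300))) # prints True
```

Since every flow must be non-negative, f1 = 475 − f4 ≥ 0 and f4 ≥ 0 bound the free variable to 0 ≤ f4 ≤ 475, which is the natural starting point for part (c).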


NAME : NUR FATIHAH BT HALIM

IC NUMBER : 881112 – 04 – 5018

UNIT : PISMP MATHEMATICS JAN 2008 (SEM 5)

SUBJECT : LINEAR ALGEBRA (MTE 3110)

LECTURER : PN ROZANA BT SAHUL HAMID


NAME : SITI KHIRNIE BINTI KASBOLAH

IC NUMBER : 880711 – 23 – 5710

UNIT : PISMP MATHEMATICS JAN 2008 (SEM 5)

SUBJECT : LINEAR ALGEBRA (MTE 3110)

LECTURER : PN ROZANA BT SAHUL HAMID

Individual Reflection

The project task for this knowledge-based coursework was given on 17th February

2010. Our lecturer, Pn Rozana, gave a brief explanation of how to work on the coursework.

We were given four weeks to complete our part of the project task. I worked with

Nur Fatihah Halim and Siti Madhihah Abdul Rahman for all the group work tasks.

The project is divided into four main tasks. Task 1 requires students to work in

groups of three or four. Each group needs to gather information relating to our three main

topics in Linear Algebra in the aspects of history, theory and application. The

information must come from various sources. Then we need to review each aspect

based on the resources collected. I found this task difficult because at first we

could not imagine how to approach it. Furthermore, it involves the topic of Vector

Space, which we have not learnt yet. Therefore it was a challenge for my group mates

and me to work on. Besides that, we were confused about how to arrange the

information we had gathered, especially on the theory side, because

there were so many theories to consider. However, I found some strengths in

our group. One of them is cooperation. Although we did this task during the mid-

semester break, we tried to contact each other whenever we did not

understand something or were confused. The telephone and the internet were helpful

during our discussions. Furthermore, my group mates could easily share their ideas while

we were doing the task, and we agreed on many parts of the work, especially when we

wanted to divide it up.


For Task 2, there are individual and group tasks. For the individual task, I have to sit two

tests prepared by my lecturer, Pn Rozana. So far my class has sat only one test,

and unfortunately I did not do very well in it because of my lack

of preparation. I realise it was my fault, because Pn Rozana had given the class a very long time

to prepare for the test. Therefore I need to be well prepared for the

next test, because I will have no second chance. I need to do a lot of

exercises to increase my understanding of the concepts in each topic. As for the

group task, my group had to give an oral presentation based on Task 3. Before the

presentation, we did not have much time to discuss it because of our

other commitments; in other words, we lacked preparation time for both the task

and the presentation. Luckily, my group mates and I took the initiative to understand the

presentation slides by ourselves. Whenever there was free time, we had short discussions

on the parts that might attract questions. During the

presentation, we did well.

For Task 3 (the mini project, a group task), we needed to choose an application

of linear systems in real life, then discuss and solve it using a suitable method

that we had already learnt in Linear Algebra. We chose the application of traffic

network analysis. Honestly, Nur Fatihah did many parts of this task, because

she is the one who found the question and who adapted it

for our application problem; luckily, she did not mind doing all of that.

Madhihah and I prepared the slide presentation, and we changed certain parts that

we thought would make it easier for people to understand what we were trying to say during the

presentation. One of our strategies was to paste the figure of the network onto each slide so that we

would not have to go back to the first slide just to refer to it. For this mini project,

we used the Gauss-Jordan elimination that we had learnt in the topic of linear systems.

Furthermore, we also referred to our textbook (Linear Algebra by David Poole) to make

sure our presentation was complete and that there would be no unanswered questions. For

me, unexpected questions only disturb a presentation, and if we cannot answer a

question, it can ruin everything!


For the last task, I have to write an individual reflection as usual. The reflection

should include the problems, strengths and weaknesses. In conclusion, from this

assignment I learnt many things, and I realise that Linear Algebra is useful

in our daily life; we can apply these methods in any suitable situation. Although this task was

the hardest this semester, I still enjoyed doing it.


NAME : SITI MADHIHAH BINTI ABDUL RAHMAN

IC NUMBER : 880408 – 10 – 5358

UNIT : PISMP MATHEMATICS JAN 2008 (SEM 5)

SUBJECT : LINEAR ALGEBRA (MTE 3110)

LECTURER : PN ROZANA BT SAHUL HAMID


BIBLIOGRAPHY

A.K. Sharma (2007). Linear Algebra. New Delhi, India.

Basis. http://en.wikipedia.org/wiki/Basis_(linear_algebra) Accessed on March 05, 2010.

Basic Vector Operations. http://hyperphysics.phy-astr.gsu.edu/hbase/vect.html Accessed on March 05, 2010.

Dimension (Vector Space). http://en.wikipedia.org/wiki/Dimension_(vector_space) Accessed on March 15, 2010.

History of Finite-Dimensional Vector Spaces. http://www.math.harvard.edu/archive/21b_fall_04/exhibits/historyvectorspace/index.html Accessed on March 05, 2010.

Howard Anton, Chris Rorres (2005). Elementary Linear Algebra: Applications Version, Ninth Edition.

Linear Equations. http://www.numbertheory.org/book/cha1.pdf Accessed on March 05, 2010.

Linear Independence. http://www.cliffsnotes.com/study_guide/Linear-Independence.topicArticleId-20807,articleId-20789.html Accessed on March 05, 2010.

Linear Independence. http://en.wikipedia.org/wiki/Linear_independence Accessed on March 15, 2010.

The Dot Product of 2 Vectors. http://members.tripod.com/Paul_Kirby/vector/Vdotproduct.html Accessed on March 10, 2010.

Vector Space. http://en.wikipedia.org/wiki/Vector_space Accessed on March 05, 2010.