Linear Algebra

Chapter 1

1.2: Row Reduction and Echelon Forms

Theorem 1

Theorem 1 Each matrix is row equivalent to one and only one reduced row echelon matrix.

Echelon Form

Echelon Form requirements:
1. All nonzero rows are above any rows of all zeros.
2. Each leading entry of a row is in a column to the right of the leading entry of the row above it.
3. All entries in a column below a leading entry are zero.

Examples (key: * = any number, @ = leading entry):

[ @ * * * ]   [ @ * * * ]   [ @ * * * ]
[ 0 @ * * ]   [ 0 0 @ * ]   [ 0 @ * * ]
[ 0 0 @ * ]   [ 0 0 0 0 ]   [ 0 0 0 * ]

Reduced row Echelon form

Reduced Echelon Form requirements:
1-3. All of the echelon form requirements above, plus:
4. The leading entry in each nonzero row is 1.
5. Each leading 1 is the only nonzero entry in its column.

Examples (key: * = any number):

[ 1 0 0 * ]   [ 1 0 * * ]   [ 1 * 0 * ]
[ 0 1 0 * ]   [ 0 1 * * ]   [ 0 0 1 * ]
[ 0 0 1 * ]   [ 0 0 0 0 ]   [ 0 0 0 * ]

Pivot

Pivot position: a position in a matrix A that corresponds to a leading 1 in the reduced echelon form of A; it holds a nonzero number used to create zeros by row operations. Pivot column: a column of A that contains a pivot position.

1.1: Systems of Linear Equations

Row Operations

Row operations are used to solve systems of equations. There are three operations:
1. Interchange any two rows.
2. Replace a row by a nonzero multiple of that row.
3. Replace a row by the sum of that row and a constant multiple of another row.
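
Below is a minimal sketch (not from the original notes) of the three row operations in NumPy, applied to an assumed example matrix; row indices are 0-based.

```python
import numpy as np

# Assumed example augmented matrix (hypothetical data).
M = np.array([[1., 2., 3.],
              [2., 5., 8.]])

M[[0, 1]] = M[[1, 0]]      # 1) interchange two rows
M[0] = 3 * M[0]            # 2) replace a row by a nonzero multiple of itself
M[1] = M[1] + (-2) * M[0]  # 3) add a constant multiple of one row to another
print(M)
```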

Matrix Notation

Matrix notation comes in two types: the coefficient matrix and the augmented matrix. The augmented matrix includes the constants that the equations are equal to, while the coefficient matrix does not.

Systems of Linear equations

System of linear equations: a collection of one or more linear equations involving the same variables. There are three cases for solutions: a consistent system of independent equations, an inconsistent system of independent equations, and a consistent system of dependent equations.

consistent system independent equations

Consistent system, independent equations: the system of linear equations has exactly one solution; the graphs intersect in a single point.

inconsistent system independent equations

Inconsistent system, independent equations: the system of linear equations has no solution; the graphs are parallel lines.

consistent system dependent equations

Consistent system, dependent equations: the system of linear equations has infinitely many solutions; the graphs show the same line, one on top of the other.

Solution

Solution: a list (s1, s2, ..., sn) of numbers that makes each equation a true statement.

1.3: Vector Equations

Vectors

Vectors: a matrix with only one column is called a column vector, or simply a vector.

Ex:
[ 1 ]   [ 1 ]
[ 2 ]   [ 3 ]

Two vectors are equal if and only if their corresponding entries are equal.

Ex:
[ 2 ]   [ 2 ]
[ 5 ] = [ 5 ]

Vector Equation

Vector equation: x1a1 + x2a2 + ... + xnan = b

Ex:
   [ 3 ]      [  1 ]   [ 4 ]
x1 [ 2 ] + x2 [ -4 ] = [ 1 ]
   [ 1 ]      [ -3 ]   [ 3 ]

Multiplication

Multiplication: given a vector u and a real number c, the scalar multiple of u by c is the vector cu obtained by multiplying each entry of u by c.

Ex:
  [ r1 ]   [ cr1 ]
c [ r2 ] = [ cr2 ]

Addition

Addition: given two vectors u and v, their sum is the vector u + v obtained by adding the corresponding entries of u and v.

Ex:
[ v1 ]   [ u1 ]   [ v1 + u1 ]
[ v2 ] + [ u2 ] = [ v2 + u2 ]

Parallelogram Rule for Addition

Parallelogram Rule for Addition: if u and v in R^2 are represented as points in the plane, then u + v corresponds to the fourth vertex of the parallelogram whose other vertices are u, 0, and v.

Algebraic Properties of R^n

Algebraic Properties of R^n: for all u, v, w in R^n and all scalars c and d:
i) u + v = v + u (commutative property)
ii) (u + v) + w = u + (v + w) (associative property)
iii) u + 0 = 0 + u = u (zero property)
iv) u + (-u) = -u + u = 0 (inverse property)
v) c(u + v) = cu + cv (distributive property)
vi) (c + d)u = cu + du (distributive property)
vii) c(du) = (cd)u (associative property of multiplication)
viii) 1u = u (identity property)

Linear Combinations

Linear combination: given vectors v1, v2, ..., vp in R^n and given scalars c1, c2, ..., cp, the vector y defined by
y = c1v1 + c2v2 + ... + cpvp
is called a linear combination of v1, ..., vp with weights c1, ..., cp.

Span

Span: if v1, ..., vp are in R^n, then the set of all linear combinations of v1, ..., vp is denoted by Span{v1, ..., vp} and is called the subset of R^n spanned (or generated) by v1, ..., vp. That is, Span{v1, ..., vp} is the collection of all vectors that can be written in the form c1v1 + c2v2 + ... + cpvp with c1, ..., cp scalars.

1.4: The Matrix Equation Ax = b

Product of Ax

Product Ax: if A is an m x n matrix with columns a1, ..., an, and if x is in R^n, then the product of A and x, denoted by Ax, is the linear combination of the columns of A using the corresponding entries in x as weights.

Ex: Ax = [a1 a2 ... an][x1, ..., xn]^T = x1a1 + x2a2 + ... + xnan
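
A short sketch (assumed example data) showing that Ax computed as a matrix-vector product equals the weighted sum x1*a1 + x2*a2 of the columns of A:

```python
import numpy as np

A = np.array([[1., -4.],
              [2.,  3.]])
x = np.array([5., -1.])

by_product = A @ x                            # usual matrix-vector product
by_columns = x[0] * A[:, 0] + x[1] * A[:, 1]  # linear combination of columns
print(by_product, by_columns)                 # identical results
```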

Finding B

Finding b: if A is an m x n matrix with columns a1, ..., an, and if b is in R^m, the matrix equation Ax = b has the same solution set as the vector equation x1a1 + x2a2 + ... + xnan = b, which in turn has the same solution set as the system of linear equations whose augmented matrix is [a1 a2 ... an b]. It follows that Ax = b has a solution if and only if b is a linear combination of the columns of A.

Row- vector Rule for Computing ax

Row-vector rule for computing Ax: if the product Ax is defined, then the ith entry of Ax is the sum of the products of corresponding entries from row i of A and from the vector x.

1.5: Solution Sets of Linear Systems

Theorem 6

Theorem 6 Suppose the equation Ax = b is consistent for some given b, and let p be a solution. Then the solution set of Ax = b is the set of all vectors of the form w = p + vh, where vh is any solution of the homogeneous equation Ax = 0.

Homogeneous

Homogeneous: a system of linear equations that can be written Ax = 0, where A is an m x n matrix and 0 is the zero vector in R^m. Such a system always has at least one solution, x = 0.

Trivial and non Trivial Solution

Trivial solution: x = 0. Nontrivial solution: a nonzero vector x that satisfies Ax = 0.

1.6: Applications of Linear System

Equilibrium prices

Equilibrium prices: prices that exist so that each sector's income balances its expenses.

1.7 Linear Independence

Linearly Independent

Linearly independent: an indexed set of vectors {v1, ..., vp} is linearly independent if x1v1 + x2v2 + ... + xpvp = 0 has only the trivial solution. The columns of a matrix A are linearly independent if and only if the equation Ax = 0 has only the trivial solution. A set containing one vector v is linearly independent iff v is not the zero vector.
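
A minimal sketch of this test, assuming NumPy: the columns of A are linearly independent exactly when Ax = 0 has only the trivial solution, i.e. when the rank of A equals the number of columns.

```python
import numpy as np

# Hypothetical example: the third column is -2*(col 1) + 1*(col 2),
# so the columns are linearly dependent.
A = np.array([[1., 4., 2.],
              [2., 5., 1.],
              [3., 6., 0.]])

independent = np.linalg.matrix_rank(A) == A.shape[1]
print(independent)  # False
```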

Set of two Vectors

Set of two vectors: a set of two vectors {v1, v2} is linearly dependent if at least one of the vectors is a multiple of the other; the set is linearly independent if and only if neither of the vectors is a multiple of the other.

Linearly Dependent Sets

Linearly Dependent

Linearly dependent: the set {v1, ..., vp} is linearly dependent if there exist weights c1, ..., cp, not all zero, such that
c1v1 + c2v2 + ... + cpvp = 0
(called a linear dependence relation)

Theorem 9

Theorem 9 If a set S = {v1, ..., vp} in R^n contains the zero vector, then the set is linearly dependent.

Theorem 8

Theorem 8 If a set contains more vectors than there are entries in each vector, then the set is linearly dependent.

Theorem 7

Theorem 7 An indexed set S = {v1, ..., vp} of two or more vectors is linearly dependent if and only if at least one of the vectors in S is a linear combination of the others. In fact, if S is linearly dependent and v1 =/= 0, then some vj (with j > 1) is a linear combination of the preceding vectors v1, ..., vj-1.

1.8: Intro to Linear Transformations

Transformation

Transformation: a function (or mapping) T from R^n to R^m is a rule that assigns to each vector x in R^n a vector T(x) in R^m.

Domain

Domain: the set R^n is called the domain of T. For a matrix transformation x -> Ax, the domain of T is R^n when A has n columns.

Codomain

Codomain: the set R^m is called the codomain of T. For a matrix transformation x -> Ax, the codomain of T is R^m when each column of A has m entries.

Image

Image: the vector T(x) in R^m is called the image of x (under the action of T).

Range

Range: the set of all images T(x) is called the range of T. The range of T is the set of all linear combinations of the columns of A, because each image T(x) is of the form Ax.

Linear Transformation

Linear transformation: T is linear if (i) T(u + v) = T(u) + T(v) for all u, v in the domain of T; (ii) T(cu) = cT(u) for all scalars c and all u in the domain of T.

1.9 Matrix of Linear Transformations

Standard Matrix for linear
transformation

Standard matrix for a linear transformation: A = [T(e1) ... T(en)], where ej is the jth column of the identity matrix in R^n.

Matrix Transformation

Matrix transformation: a transformation x -> Ax defined by a matrix A; the matrix A describes how the mapping is implemented.

One-to-One

One-to-one: a mapping T: R^n -> R^m is one-to-one if each b in R^m is the image of at most one x in R^n.

Onto

Onto: a mapping T: R^n -> R^m is onto if each b in R^m is the image of at least one x in R^n.

Theorem 11

Theorem 11 Let T: R^n -> R^m be a linear transformation. Then T is one-to-one if and only if the equation T(x) = 0 has only the trivial solution.

Theorem 12

Theorem 12 Let T: R^n -> R^m be a linear transformation and let A be the standard matrix for T. Then:
a) T maps R^n onto R^m if and only if the columns of A span R^m.
b) T is one-to-one if and only if the columns of A are linearly independent.

Chapter 2

2.1: Matrix Operations

Theorem 1

Theorem 1 Let A, B, and C be matrices of the same size, and let r and s be scalars.
a. A + B = B + A
b. (A + B) + C = A + (B + C)
c. A + 0 = A
d. r(A + B) = rA + rB
e. (r + s)A = rA + sA
f. r(sA) = (rs)A

Theorem 2

Theorem 2 Let A be an m x n matrix, and let B and C have sizes for which the indicated sums and products are defined.
a. A(BC) = (AB)C
b. A(B + C) = AB + AC
c. (B + C)A = BA + CA
d. r(AB) = (rA)B = A(rB)
e. Im A = A = A In, where Im and In are identity matrices

Theorem 3

Theorem 3 Let A and B denote matrices whose sizes are appropriate for the following sums and products.
a. (A^T)^T = A
b. (A + B)^T = A^T + B^T
c. (rA)^T = rA^T
d. (AB)^T = B^T A^T
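
A quick numerical check of Theorem 3 on random matrices (a sketch, not part of the original notes):

```python
import numpy as np

A = np.random.rand(3, 4)
B = np.random.rand(3, 4)
C = np.random.rand(4, 2)
r = 2.5

print(np.allclose((A.T).T, A))            # a. (A^T)^T = A
print(np.allclose((A + B).T, A.T + B.T))  # b. (A+B)^T = A^T + B^T
print(np.allclose((r * A).T, r * A.T))    # c. (rA)^T = r A^T
print(np.allclose((A @ C).T, C.T @ A.T))  # d. (AC)^T = C^T A^T
```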

Transpose

Transpose: given an m x n matrix A, the transpose of A is the n x m matrix, denoted A^T, whose columns are formed from the corresponding rows of A.

Powers of a matrix

Powers of a matrix: if A is n x n and k is a positive integer, then A^k denotes the product of k copies of A: A^k = A...A.

Sum and scalar multiples

Sum and scalar multiples: two m x n matrices A and B are equal, written A = B, if their corresponding entries are equal. If r is a scalar and A is a matrix, then the scalar multiple rA is the matrix whose columns are r times the corresponding columns of A. If A is an m x n matrix and B is an n x p matrix with columns b1, ..., bp, then the product AB is the m x p matrix whose columns are Ab1, ..., Abp:
AB = A[b1 b2 ... bp] = [Ab1 Ab2 ... Abp]

2.2: Inverse of a Matrix

Theorem 4

Theorem 4 Let A = [ a  b ]
                  [ c  d ]

If ad - bc =/= 0, then A is invertible and

A^-1 = 1/(ad - bc) [  d  -b ]
                   [ -c   a ]

If ad - bc = 0, then A is not invertible.
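
A minimal sketch of Theorem 4 as code (the function name is hypothetical):

```python
import numpy as np

def inverse_2x2(A):
    # Invert a 2x2 matrix by the ad - bc formula of Theorem 4.
    a, b = A[0]
    c, d = A[1]
    det = a * d - b * c
    if det == 0:
        raise ValueError("not invertible: ad - bc = 0")
    return (1 / det) * np.array([[d, -b],
                                 [-c, a]])

A = np.array([[3., 4.],
              [5., 6.]])
print(inverse_2x2(A))  # agrees with np.linalg.inv(A)
```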

Theorem 5

Theorem 5 If A is an invertible n x n matrix, then for each b in R^n the equation Ax = b has the unique solution x = A^-1 b.

Theorem 6

Theorem 6 a) (A^-1)^-1 = A  b) (AB)^-1 = B^-1 A^-1  c) (A^T)^-1 = (A^-1)^T

Theorem 7

Theorem 7 An n x n matrix A is invertible if and only if A is row equivalent to the identity matrix In, and in this case, any sequence of elementary row operations that reduces A to In also transforms In into A^-1.

2.3: Characterization of Invertible
Matrices

Invertible Matrix Theorem

Invertible Matrix Theorem: let A be a square n x n matrix. Then the following statements are equivalent:
a. A is an invertible matrix.
b. A is row equivalent to the n x n identity matrix.
c. A has n pivot positions.
d. The equation Ax = 0 has only the trivial solution.
e. The columns of A form a linearly independent set.
f. The linear transformation x -> Ax is one-to-one.
g. The equation Ax = b has at least one solution for each b in R^n.
h. The columns of A span R^n.
i. The linear transformation x -> Ax maps R^n onto R^n.
j. There is an n x n matrix C such that CA = I.
k. There is an n x n matrix D such that AD = I.
l. A^T is an invertible matrix.

Theorem 9

Theorem 9 Let T: R^n -> R^n be a linear transformation and let A be the standard matrix for T. Then T is invertible if and only if A is an invertible matrix.

2.4: Partitioned Matrices

Theorem 10

Theorem 10 If A is m x n and B is n x p, then

AB = [col1(A) col2(A) ... coln(A)] [ row1(B) ]
                                   [ row2(B) ]
                                   [   ...   ]
                                   [ rown(B) ]

   = col1(A)row1(B) + col2(A)row2(B) + ... + coln(A)rown(B)

2.5: Matrix Factorization

Algorithm For an lu factorization

Algorithm for an LU factorization:
1. Reduce A to an echelon form U by a sequence of row replacement operations, if possible.
2. Place entries in L such that the same sequence of row operations reduces L to I.
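
In practice an LU factorization can be obtained from SciPy; a sketch with assumed example data follows. Note that scipy.linalg.lu returns a permuted factorization P @ L @ U = A, which reduces to the plain A = LU of these notes when no row interchanges are needed.

```python
import numpy as np
from scipy.linalg import lu

A = np.array([[ 2., 4., -1.],
              [ 4., 9.,  1.],
              [-2., 1.,  5.]])

P, L, U = lu(A)                   # L unit lower triangular, U echelon form
print(np.allclose(P @ L @ U, A))  # True
```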

Factorization

Factorization an equation that expresses A as a product of two or more matrices

2.8 Subspaces of R^n

Theorem 12

Theorem 12 The null space of an m x n matrix A is a subspace of R^n. Equivalently, the set of all solutions of a system Ax = 0 of m homogeneous linear equations in n unknowns is a subspace of R^n.

Null Space

Null space: the null space of a matrix A is the set Nul A of all solutions of the homogeneous equation Ax = 0.

Basis

Basis: a basis for a subspace H is a linearly independent set in H that spans H.

Theorem 13

Theorem 13 The pivot columns of a matrix A form a basis for the column space of A

2.9 Dimension and Rank

Theorem 14

Theorem 14 If a matrix A has n columns then rank A + dim Nul A = n

Theorem 15

Theorem 15 Let H be a p-dimensional subspace of R^n. Any linearly independent set of exactly p elements in H is automatically a basis for H. Also, any set of p elements of H that spans H is automatically a basis for H.

IMT Continued

IMT continued:
m. The columns of A form a basis of R^n.
n. Col A = R^n
o. rank A = n
p. dim Nul A = 0
q. Nul A = {0}

Chapter 3

3.1: Intro to Determinants

Theorem 1

Theorem 1 The determinant of an n x n matrix A can be computed by a cofactor expansion across any row or down any column. The expansion across the ith row is
det A = ai1Ci1 + ai2Ci2 + ... + ainCin
The cofactor expansion down the jth column is
det A = a1jC1j + a2jC2j + ... + anjCnj
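
A sketch of Theorem 1 as a recursive function (cofactor expansion across the first row; the data is an assumed example):

```python
import numpy as np

def det_cofactor(A):
    # det A = a11*C11 + a12*C12 + ... + a1n*C1n, where the cofactor
    # C1j is (-1)^(1+j) times the determinant of the minor of a1j.
    n = len(A)
    if n == 1:
        return A[0][0]
    total = 0.0
    for j in range(n):
        minor = np.delete(np.delete(A, 0, axis=0), j, axis=1)
        total += (-1) ** j * A[0][j] * det_cofactor(minor)
    return total

A = np.array([[1.,  5.,  0.],
              [2.,  4., -1.],
              [0., -2.,  0.]])
print(det_cofactor(A))  # -2.0, agrees with np.linalg.det(A)
```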

Theorem 2

Theorem 2 If A is a triangular matrix, then det A is the product of the entries on the main diagonal of A

3.2 : Properties of Determinants

Theorem 4

Theorem 4 A square matrix A is invertible if and only if det A =/= 0

Theorem 5

Theorem 5 If A is an n x n matrix, then det A^T = det A.

Theorem 6

Theorem 6 If A and B are n x n matrices, then det AB = (det A)(det B).

Theorem 3

Theorem 3 Let A be a square matrix.
a) If a multiple of one row of A is added to another row to produce a matrix B, then det B = det A.
b) If two rows of A are interchanged to produce B, then det B = -det A.
c) If one row of A is multiplied by k to produce B, then det B = k * det A.

3.3 Cramer's Rule, Volume,
and Linear Transformations

Theorem 7

Theorem 7 Let A be an invertible n x n matrix. For any b in R^n, the unique solution x of Ax = b has entries given by
xi = det Ai(b) / det A
where Ai(b) is the matrix obtained from A by replacing column i with the vector b.
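
A minimal sketch of Cramer's rule in NumPy (the function name and data are assumptions for illustration):

```python
import numpy as np

def cramer(A, b):
    # x_i = det(A_i(b)) / det(A), with column i of A replaced by b.
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b
        x[i] = np.linalg.det(Ai) / d
    return x

A = np.array([[ 3., -2.],
              [-5.,  4.]])
b = np.array([6., 8.])
print(cramer(A, b))  # [20. 27.], same as np.linalg.solve(A, b)
```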

Theorem 8

Theorem 8 Let A be an invertible n x n matrix. Then A^-1 = (1/det A) * adj A.

Theorem 9

Theorem 9 If A is a 2 x 2 matrix, the area of the parallelogram determined by the columns of A is |det A|. If A is a 3 x 3 matrix, the volume of the parallelepiped determined by the columns of A is |det A|.

Theorem 10

Theorem 10
2 x 2: {area of T(S)} = |det A| * {area of S}
3 x 3: {volume of T(S)} = |det A| * {volume of S}

Chapter 4

4.1 Vector Space and Subspace

Theorem 1

If v1, ..., vp are in a vector space V, then Span{v1, ..., vp} is a subspace of V.

Vector Space

A vector space is a nonempty set V of objects, called vectors, on which are defined two operations, addition and multiplication by scalars, subject to the ten axioms listed below. The axioms must hold for all vectors u, v, and w in V and for all scalars c and d.
1. The sum of u and v, denoted by u + v, is in V.
2. u + v = v + u
3. (u + v) + w = u + (v + w)
4. There is a zero vector 0 in V such that u + 0 = u.
5. For each u in V, there is a vector -u in V such that u + (-u) = 0.
6. The scalar multiple of u by c, denoted by cu, is in V.
7. c(u + v) = cu + cv
8. (c + d)u = cu + du
9. c(du) = (cd)u
10. 1u = u

Subspaces

A subspace of a vector space V is a subset H of V that has three properties:
a. The zero vector of V is in H.
b. H is closed under vector addition. That is, for each u and v in H, the sum u + v is in H.
c. H is closed under multiplication by scalars. That is, for each u in H and each scalar c, the vector cu is in H.

4.2 Null, Column, Row Spaces and Linear Transformations

Theorem 2

The null space of an m x n matrix A is a subspace of R^n. Equivalently, the set of all solutions to a system Ax = 0 of m homogeneous linear equations in n unknowns is a subspace of R^n.

Theorem 3

The column space of an m x n matrix A is a subspace of R^m.

null space

The null space of an m x n matrix A, written as Nul A, is the set of all solutions of the homogeneous equation Ax = 0. In set notation,
Nul A = {x : x is in R^n and Ax = 0}

Explicit Description of Nul A

We say that Nul A is defined implicitly, because it is defined by a condition that must be checked. No explicit list or description of the elements in Nul A is given. However, solving the equation Ax = 0 amounts to producing an explicit description of Nul A.

Column space

The column space of an m x n matrix A, written as Col A, is the set of all linear combinations of the columns of A. If A = [a1 ... an], then
Col A = Span{a1, ..., an}

Row Space

If A is an m x n matrix, each row of A has n entries and thus can be identified with a vector in R^n. The set of all linear combinations of the row vectors is called the row space of A and is denoted by Row A. Each row has n entries, so Row A is a subspace of R^n. Since the rows of A are identified with the columns of A^T, we could also write Col A^T in place of Row A.

contrast between Nul A and Col A

Nul A
1. Nul A is a subspace of R^n.
2. Nul A is implicitly defined.
3. It takes time to find vectors in Nul A.
4. There is no obvious relation between Nul A and the entries in A.
5. A typical vector v in Nul A has the property that Av = 0.
6. Given a specific vector v, it is easy to tell if v is in Nul A: just compute Av.
7. Nul A = {0} if and only if the equation Ax = 0 has only the trivial solution.
8. Nul A = {0} if and only if the linear transformation x -> Ax is one-to-one.

Col A
1. Col A is a subspace of R^m.
2. Col A is explicitly defined.
3. It is easy to find vectors in Col A.
4. There is no obvious relation between Col A and the entries in A.
5. A typical vector v in Col A has the property that the equation Ax = v is consistent.
6. Given a specific vector v, telling whether v is in Col A requires row operations.
7. Col A = R^m if and only if the equation Ax = b has a solution for every b in R^m.
8. Col A = R^m if and only if the linear transformation x -> Ax maps R^n onto R^m.
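
SymPy makes the contrast concrete; a sketch with an assumed example matrix:

```python
from sympy import Matrix

A = Matrix([[1, 2, 3],
            [2, 4, 6]])

print(A.nullspace())    # basis for Nul A: found by solving Ax = 0 (takes work)
print(A.columnspace())  # basis for Col A: just the pivot columns of A
```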

Linear Transformation

A linear transformation T from a vector space V into a vector space W is a rule that assigns to each vector x in V a unique vector T(x) in W, such that:
1. T(u + v) = T(u) + T(v) for all u, v in V
2. T(cu) = cT(u) for all u in V and all scalars c

4.3 Linearly Independent Sets and Bases

Theorem 4

An indexed set {v1, ..., vp} of two or more vectors, with v1 =/= 0, is linearly dependent if and only if some vj (with j > 1) is a linear combination of the preceding vectors v1, ..., vj-1.

Theorem 5

The Spanning Set Theorem: let S = {v1, ..., vp} be a set in a vector space V, and let H = Span{v1, ..., vp}.
a. If one of the vectors in S, say vk, is a linear combination of the remaining vectors in S, then the set formed from S by removing vk still spans H.
b. If H =/= {0}, some subset of S is a basis for H.

Theorem 6

The pivot columns of a matrix A form a basis for col A

Theorem 7

If two matrices A and B are row equivalent, then their row spaces are the same. If B is in echelon form, the nonzero rows of B form a basis for the row space of A as well as for that of B.

Indexed set dependent/independent

An indexed set is linearly dependent if there is a nontrivial solution to
c1v1 + c2v2 + ... + cpvp = 0
It is linearly independent if the vector equation
c1v1 + c2v2 + ... + cpvp = 0
has only the trivial solution c1 = 0, ..., cp = 0.

Basis for H

Let H be a subspace of a vector space V. A set of vectors B in V is a basis for H if:
1. B is a linearly independent set.
2. The subspace spanned by B coincides with H; that is, H = Span B.

4.4 Coordinate Systems

Theorem 8

The Unique Representation Theorem: let B = {b1, ..., bn} be a basis for a vector space V. Then for each x in V there exists a unique set of scalars c1, ..., cn such that
x = c1b1 + ... + cnbn

Theorem 9

Let B = {b1, ..., bn} be a basis for a vector space V. Then the coordinate mapping x -> [x]B is a one-to-one linear transformation from V onto R^n.

B coordinates of x

Suppose B = {b1, ..., bn} is a basis for a vector space V and x is in V. The coordinates of x relative to the basis B (or the B-coordinates of x) are the weights c1, ..., cn such that x = c1b1 + ... + cnbn.

4.5 The Dimension of a Vector Subspace

Theorem 10

If a vector space V has a basis B = {b1, ..., bn}, then any set in V containing more than n vectors must be linearly dependent.

Theorem 11

If a vector Space V has a basis of n vectors, then every basis of V must consist of exactly n vectors

Theorem 12

Let H be a subspace of a finite-dimensional vector space V. Any linearly independent set in H can be expanded, if necessary, to a basis for H. Also, H is finite-dimensional and
dim H <= dim V

Theorem 13

The Basis Theorem: let V be a p-dimensional vector space, p >= 1. Any linearly independent set of exactly p elements in V is automatically a basis for V. Any set of exactly p elements that spans V is automatically a basis for V.

Theorem 14

The Rank Theorem: the dimensions of the column space and the null space of an m x n matrix A satisfy the equation
rank A + nullity A = number of columns in A

rank/nullity

The rank of an m x n matrix A is the dimension of its column space, and the nullity of A is the dimension of its null space.

infinite/finite

If a vector space V is spanned by a finite set, then V is said to be finite-dimensional, and the dimension of V, written as dim V, is the number of vectors in a basis for V. The dimension of the zero vector space {0} is defined to be zero. If V is not spanned by a finite set, then V is said to be infinite-dimensional.

IMT Continued

Let A be an n x n matrix. Then the following statements are each equivalent to the statement that A is an invertible matrix:
m. The columns of A form a basis of R^n.
n. Col A = R^n
o. rank A = n
p. nullity A = 0
q. Nul A = {0}

4.6 Change of Basis

Theorem 15

Let B = {b1, ..., bn} and C = {c1, ..., cn} be bases of a vector space V. Then there is a unique n x n matrix P(C<-B) such that
[x]C = P(C<-B) [x]B
The columns of P(C<-B) are the C-coordinate vectors of the vectors in the basis B; that is,
P(C<-B) = [ [b1]C [b2]C ... [bn]C ]

Chapter 5

5.1 Eigenvectors and Eigenvalues

Theorem 1

The eigenvalues of a triangular matrix are the entries on its main diagonal

Theorem 2

If v1...vr are eigenvectors that correspond to distinct eigenvalues λ1 .... λr of an n * n matrix A then the set {v1 .... vr} is linearly independent

Difference equation

The recursively defined vector-valued sequence
xk+1 = Axk
where A is an n x n matrix, is called a difference equation. It can be rewritten as
xk+1 = A^(k+1) x0
If x0 is an eigenvector of A with associated eigenvalue λ, this becomes
xk+1 = λ^(k+1) x0

Eigenvector

An eigenvector of an n*n matrix A is a nonzero vector x such that Ax = λx for some scalar λ.

Eigenvalue

A scalar λ is called an eigenvalue of A if there is a nontrivial solution x of Ax = λx; such an x is called an eigenvector corresponding to λ.
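
Numerically, eigenvalues and eigenvectors come from np.linalg.eig; a sketch with assumed data, where column i of vecs satisfies A vecs[:, i] = vals[i] vecs[:, i]:

```python
import numpy as np

A = np.array([[4., -2.],
              [1.,  1.]])
vals, vecs = np.linalg.eig(A)
print(vals)                                # eigenvalues: 3 and 2 here
print(np.allclose(A @ vecs, vecs * vals))  # checks Ax = (lambda)x column-wise
```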

5.2 The Characteristic Equation

Theorem 3

Properties of Determinants: let A and B be n x n matrices.
a. A is invertible if and only if det A =/= 0.
b. det AB = (det A)(det B)
c. det A^T = det A
d. If A is a triangular matrix, then det A is the product of the entries on the main diagonal of A.
e. A row replacement operation does not change the determinant.

Theorem 4

If n * n matrices A and B are similar then they have the same characteristic polynomial and hence the same eigenvalues

IMT Continued

Let A be an n x n matrix. Then A is invertible if and only if:
r. The number 0 is not an eigenvalue of A.

5.3 Diagonalization

Theorem 5

The Diagonalization Theorem: an n x n matrix A is diagonalizable if and only if A has n linearly independent eigenvectors. In fact, A = PDP^-1, with D a diagonal matrix, if and only if the columns of P are n linearly independent eigenvectors of A. In this case, the diagonal entries of D are eigenvalues of A that correspond, respectively, to the eigenvectors in P.
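
A sketch of the theorem in NumPy (assumed example matrix). The payoff of diagonalization is that A^k = P D^k P^-1, so powers of A reduce to powers of scalars:

```python
import numpy as np

A = np.array([[ 7., 2.],
              [-4., 1.]])
vals, P = np.linalg.eig(A)  # columns of P are eigenvectors
D = np.diag(vals)

print(np.allclose(P @ D @ np.linalg.inv(P), A))   # A = P D P^-1
print(np.allclose(P @ np.diag(vals**5) @ np.linalg.inv(P),
                  np.linalg.matrix_power(A, 5)))  # A^5 via D^5
```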

Theorem 6

An n *n matrix with n distinct eigenvalues is diagonalizable

Theorem 7

Let A be an n x n matrix whose distinct eigenvalues are λ1, ..., λp.
a. For 1 <= k <= p, the dimension of the eigenspace for λk is less than or equal to the multiplicity of the eigenvalue λk.
b. The matrix A is diagonalizable if and only if the sum of the dimensions of the eigenspaces equals n, and this happens if and only if (1) the characteristic polynomial factors completely into linear factors and (2) the dimension of the eigenspace for each λk equals the multiplicity of λk.
c. If A is diagonalizable and Bk is a basis for the eigenspace corresponding to λk for each k, then the total collection of vectors in the sets B1, ..., Bp forms an eigenvector basis for R^n.

5.4 Eigenvectors and Linear Transformations

5.5 Complex Eigenvalues

(Complex)eigenvalues/eigenvectors

The matrix eigenvalue-eigenvector theory already developed for R^n applies equally well to C^n, so a complex scalar λ satisfies det(A - λI) = 0 if and only if there is a nonzero vector x in C^n such that Ax = λx. We call λ a (complex) eigenvalue and x a (complex) eigenvector corresponding to λ.

Real and imaginary Parts of Vectors

The complex conjugate of a complex vector x in C^n is the vector x̄ in C^n whose entries are the complex conjugates of the entries in x. The real and imaginary parts of a complex vector x are the vectors Re x and Im x in R^n formed from the real and imaginary parts of the entries of x. Thus
x = Re x + i Im x

Theorem 9

Let A be a real 2 x 2 matrix with a complex eigenvalue λ = a - bi (b =/= 0) and an associated eigenvector v in C^2. Then A = PCP^-1, where P = [Re v  Im v] and

C = [ a  -b ]
    [ b   a ]

Chapter 6

6.1 Inner Product, Length, Orthogonality

Theorem 1

Let u, v, and w be vectors in R^n, and let c be a scalar. Then:
a. u ⋅ v = v ⋅ u
b. (u + v) ⋅ w = u ⋅ w + v ⋅ w
c. (cu) ⋅ v = c(u ⋅ v) = u ⋅ (cv)
d. u ⋅ u ≥ 0, and u ⋅ u = 0 if and only if u = 0

Theorem 2

Two vectors u and v are orthogonal if and only if
||u + v||^2 = ||u||^2 + ||v||^2

Theorem 3

Let A be an m x n matrix. The orthogonal complement of the row space of A is the null space of A, and the orthogonal complement of the column space of A is the null space of A^T:
(Row A)⊥ = Nul A and (Col A)⊥ = Nul A^T

Dot Product

If u and v are vectors in R^n, then we regard u and v as n x 1 matrices. The transpose u^T is a 1 x n matrix, and the matrix product u^T v is a 1 x 1 matrix, which we write as a single real number without brackets. The number u^T v is called the inner product of u and v, often written as u ⋅ v.

Length/Norm

The length (or norm) of v is the nonnegative scalar ||v|| defined by
||v|| = √(v ⋅ v) = √(v1^2 + v2^2 + ... + vn^2)
Equivalently, ||v||^2 = v ⋅ v.

Distance

For u and v in R^n, the distance between u and v, written as dist(u, v), is the length of the vector u - v; that is,
dist(u, v) = ||u - v||

Orthogonal

Two vectors u and v in R^n are orthogonal to each other if u ⋅ v = 0

6.2 Orthogonal Sets

Theorem 4

If S = {u1, ..., up} is an orthogonal set of nonzero vectors in R^n, then S is linearly independent and hence is a basis for the subspace spanned by S.

Theorem 5

Let {u1, ..., up} be an orthogonal basis for a subspace W of R^n. For each y in W, the weights in the linear combination
y = c1u1 + ... + cpup
are given by
cj = (y ⋅ uj)/(uj ⋅ uj)   (j = 1, ..., p)

Theorem 6

An m x n matrix U has orthonormal columns if and only if U^T U = I (the n x n identity matrix).

Theorem 7

Let U be an m x n matrix with orthonormal columns, and let x and y be in R^n. Then:
a. ||Ux|| = ||x||
b. (Ux) ⋅ (Uy) = x ⋅ y
c. (Ux) ⋅ (Uy) = 0 if and only if x ⋅ y = 0

Orthogonal Basis

An orthogonal basis for a subspace W of R^n is a basis for W that is also an orthogonal set.

Orthogonal Projection

Decompose a vector y in R^n into the sum of two vectors, one a multiple of u and the other orthogonal to u:
y = yhat + z
where yhat = αu for some scalar α and z is some vector orthogonal to u. Let z = y - αu. Then y - yhat is orthogonal to u if and only if
0 = (y - αu) ⋅ u = y ⋅ u - (αu) ⋅ u = y ⋅ u - α(u ⋅ u)
so α = (y ⋅ u)/(u ⋅ u) and yhat = ((y ⋅ u)/(u ⋅ u))u. The vector yhat is called the orthogonal projection of y onto u, and the vector z is called the component of y orthogonal to u:
yhat = proj_L y = ((y ⋅ u)/(u ⋅ u))u
(where L is the line through u and the origin)
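
A minimal sketch of the boxed formula (assumed example vectors):

```python
import numpy as np

y = np.array([7., 6.])
u = np.array([4., 2.])

yhat = (y @ u) / (u @ u) * u  # orthogonal projection of y onto u
z = y - yhat                  # component of y orthogonal to u
print(yhat)   # [8. 4.]
print(z @ u)  # 0.0 -- z is orthogonal to u
```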

Orthonormal Sets

A set {u1, ..., up} is an orthonormal set if it is an orthogonal set of unit vectors. If W is the subspace spanned by such a set, then {u1, ..., up} is an orthonormal basis for W, since the set is automatically linearly independent by Theorem 4.

Orthogonal Matrix

A square matrix U such that U^-1 = U^T.

6.3 Orthogonal Projections

Theorem 8

The Orthogonal Decomposition Theorem: let W be a subspace of R^n. Then each y in R^n can be written uniquely in the form
y = yhat + z
where yhat is in W and z is in W⊥. In fact, if {u1, ..., up} is any orthogonal basis of W, then
yhat = ((y ⋅ u1)/(u1 ⋅ u1))u1 + ... + ((y ⋅ up)/(up ⋅ up))up
and z = y - yhat.

Theorem 9

The Best Approximation Theorem: let W be a subspace of R^n, let y be any vector in R^n, and let yhat be the orthogonal projection of y onto W. Then yhat is the closest point in W to y, in the sense that
||y - yhat|| < ||y - v||
for all v in W distinct from yhat.

Theorem 10

If {u1, ..., up} is an orthonormal basis for a subspace W of R^n, then
projW y = (y ⋅ u1)u1 + (y ⋅ u2)u2 + ... + (y ⋅ up)up
If U = [u1 u2 ... up], then projW y = UU^T y for all y in R^n.

Properties of orthogonal projection

If {u1, ..., up} is an orthogonal basis for W and if y happens to be in W, then the formula for projW y is exactly the same as the representation of y given in Theorem 5; in this case, projW y = y.

6.4 Gram-Schmidt Process

Theorem 11

The Gram-Schmidt Process: given a basis {x1, ..., xp} for a nonzero subspace W of R^n, define
v1 = x1
v2 = x2 - ((x2 ⋅ v1)/(v1 ⋅ v1))v1
v3 = x3 - ((x3 ⋅ v1)/(v1 ⋅ v1))v1 - ((x3 ⋅ v2)/(v2 ⋅ v2))v2
...
vp = xp - ((xp ⋅ v1)/(v1 ⋅ v1))v1 - ((xp ⋅ v2)/(v2 ⋅ v2))v2 - ... - ((xp ⋅ vp-1)/(vp-1 ⋅ vp-1))vp-1
Then {v1, ..., vp} is an orthogonal basis for W. In addition,
Span{v1, ..., vk} = Span{x1, ..., xk} for 1 <= k <= p
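
A sketch of classical Gram-Schmidt on the columns of a matrix (function name and data are assumptions; the vectors are orthogonalized but not normalized):

```python
import numpy as np

def gram_schmidt(X):
    # Orthogonalize the columns of X, left to right, by subtracting from
    # each column its projections onto the previously built v's.
    V = []
    for x in X.T:
        v = x.copy()
        for w in V:
            v = v - (x @ w) / (w @ w) * w
        V.append(v)
    return np.column_stack(V)

X = np.array([[1., 0., 0.],
              [1., 1., 0.],
              [1., 1., 1.],
              [1., 1., 1.]])
V = gram_schmidt(X)
print(np.round(V.T @ V, 10))  # off-diagonal zeros: columns are orthogonal
```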

Theorem 12

If A is an m x n matrix with linearly independent columns, then A can be factored as A = QR, where Q is an m x n matrix whose columns form an orthonormal basis for Col A and R is an n x n upper triangular invertible matrix with positive entries on its diagonal.

6.5 Least Squares Problems

Theorem 13

The set of least-squares solutions of Ax = b coincides with the nonempty set of solutions of the normal equations
(A^T)A x = (A^T)b

Theorem 14

Let A be an m x n matrix. The following statements are logically equivalent:
a. The equation Ax = b has a unique least-squares solution for each b in R^m.
b. The columns of A are linearly independent.
c. The matrix A^T A is invertible.
When these statements are true, the least-squares solution xhat is given by
xhat = (A^T A)^-1 A^T b
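
A sketch of Theorem 14's formula, solving the normal equations and comparing with NumPy's built-in least-squares routine (data is an assumed example):

```python
import numpy as np

A = np.array([[1., 0.],
              [1., 1.],
              [1., 2.]])
b = np.array([6., 0., 0.])

xhat = np.linalg.solve(A.T @ A, A.T @ b)         # normal equations
xhat_np, *_ = np.linalg.lstsq(A, b, rcond=None)  # library solver
print(xhat, xhat_np)  # same solution: [5. -3.]
```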

Theorem 15

Given an m x n matrix A with linearly independent columns, let A = QR be a QR factorization of A as in Theorem 12. Then, for each b in R^m, the equation Ax = b has a unique least-squares solution, given by
xhat = (R^-1)(Q^T)b

Least-squares solution

If A is m x n and b is in R^m, a least-squares solution of Ax = b is an xhat in R^n such that
||b - A xhat|| <= ||b - Ax||
for all x in R^n.

6.7 Inner Product Spaces

Inner Product space

An inner product on a vector space V is a function that, to each pair of vectors u and v in V, associates a real number <u, v> and satisfies the following axioms for all u, v, and w in V and all scalars c:
a. <u, v> = <v, u>
b. <u + v, w> = <u, w> + <v, w>
c. <cu, v> = c<u, v>
d. <u, u> ≥ 0, and <u, u> = 0 if and only if u = 0
A vector space with an inner product is called an inner product space.

Lengths, Distances, and Orthogonality

Let V be an inner product space, with the inner product denoted by <u, v>. Just as in R^n, we define the length of a vector v to be the scalar
||v|| = √<v, v>. Equivalently, ||v||^2 = <v, v>.
A unit vector is one whose length is 1. The distance between u and v is ||u - v||. Vectors u and v are orthogonal if <u, v> = 0.

Theorem 16

The Cauchy-Schwarz Inequality: for all u, v in V,
|<u, v>| <= ||u|| ||v||

Theorem 17

The Triangle Inequality: for all u, v in V,
||u + v|| <= ||u|| + ||v||