
by Jude Gabriel Miranda, 11 years ago


In the context of linear algebra, the concept of the adjoint plays a significant role in understanding inner product spaces and their related properties. The adjoint of an operator T ∈ L(V, W), often denoted T^∗, is the map from W to V defined by ⟨Tv, w⟩ = ⟨v, T^∗w⟩ for all v ∈ V and w ∈ W


Linear Algebra

Chapter 7: Operators on Inner Product Spaces
isometries

Suppose V is a complex inner-product space and S ∈ L(V ). Then S is an isometry if and only if there is an orthonormal basis of V consisting of eigenvectors of S all of whose corresponding eigenvalues have absolute value 1

suppose S ∈ L(V) then the following are equivalent

there exists an orthonormal basis (e_1,...,e_n) of V such that (S^∗e_1,...,S^∗e_n) is orthonormal

(S^∗e_1,...,S^∗e_n) is orthonormal whenever (e_1,...,e_n) is an orthonormal list of vectors in V

SS^∗ = I

⟨S^∗u, S^∗v⟩ = ⟨u, v⟩ for all u, v ∈ V

S^∗ is an isometry

there exists an orthonormal basis (e_1,...,e_n) of V such that (Se_1,...,Se_n) is orthonormal

(Se_1,...,Se_n) is orthonormal whenever (e_1,...,e_n) is an orthonormal list of vectors in V

S^∗S = I

⟨Su, Sv⟩ = ⟨u, v⟩ for all u, v ∈ V

S is an isometry

an operator S ∈ L(V) is called an isometry if ||Sv|| = ||v|| for all v ∈ V

||Sv||^2 = |⟨v, e_1⟩|^2 + ··· + |⟨v, e_n⟩|^2
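These equivalences can be sanity-checked numerically; a minimal sketch with NumPy (not part of the original notes), using a rotation of C^2 as the isometry:

```python
import numpy as np

# A concrete isometry on C^2: a rotation matrix (angle chosen arbitrarily).
theta = 0.7
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]], dtype=complex)

# S is an isometry: ||Sv|| = ||v|| for every v.
v = np.array([1.0 + 2.0j, -0.5j])
assert np.isclose(np.linalg.norm(S @ v), np.linalg.norm(v))

# Equivalent characterizations: S*S = I and SS* = I.
assert np.allclose(S.conj().T @ S, np.eye(2))
assert np.allclose(S @ S.conj().T, np.eye(2))

# All eigenvalues have absolute value 1.
assert np.allclose(np.abs(np.linalg.eigvals(S)), 1.0)
```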

positive operators

every positive operator on V has a unique positive square root

Tv = S^2v = α^2v

an operator S is called a square root of an operator T if S^2 = T

Let T ∈ L(V) then the following are equivalent

there exists an operator S ∈ L(V ) such that T = S^∗S

T has a self adjoint square root

T has a positive square root

T is self adjoint and all the eigenvalues of T are nonnegative

T is positive

operator T ∈ L(V) is called positive if T is self adjoint and ⟨Tv, v⟩ ≥ 0 for all v ∈ V
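A sketch of the square-root construction with NumPy (not from the original notes; `numpy.linalg.eigh` stands in for the spectral decomposition): diagonalize, take square roots of the eigenvalues, reassemble.

```python
import numpy as np

# A positive operator: T = A*A is always positive (example A is arbitrary).
A = np.array([[1.0, 2.0], [0.0, 3.0]])
T = A.T @ A

# T is self-adjoint with nonnegative eigenvalues.
assert np.allclose(T, T.T)
w, Q = np.linalg.eigh(T)          # orthonormal eigenbasis (spectral theorem)
assert np.all(w >= -1e-12)

# The unique positive square root: take square roots of the eigenvalues.
S = Q @ np.diag(np.sqrt(w)) @ Q.T
assert np.allclose(S @ S, T)      # S^2 = T
assert np.allclose(S, S.T)        # S is self adjoint
assert np.all(np.linalg.eigvalsh(S) >= -1e-12)  # and positive
```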

normal operators on real inner product spaces

A block diagonal matrix is a square matrix of the form |A_1 0| | ... | |0 A_m|

suppose that V is a real inner product space and T ∈ L(V) then T is normal if and only if there is an orthonormal basis of V with respect to which T has a block diagonal matrix where each block is a 1-by-1 matrix or a 2-by-2 matrix of the form |a -b| |b a| with b > 0

A_1,...,A_m are square matrices lying along the diagonal and all the other entries of the matrix equal 0

suppose T ∈ L(V) is normal and U is a subspace of V that is invariant under T

T|_U^⊥ is a normal operator on U^⊥

T|_U is a normal operator on U

(T|_U)^∗ = (T^∗)|_U

U is invariant under T^*

U^⊥ is invariant under T

M(T, (e_1, e_2)) = |a c| |b d|

||Te_1||^2 = a^2 + b^2, ||T^∗e_1||^2 = a^2 + c^2

T is normal, ||Te_1|| = ||T^∗e_1||

suppose V is a two dimensional real inner product space and T ∈ L(V ) then the following are equivalent

the matrix of T with respect to some orthonormal basis of V has the form

|a -b| |b a| b > 0

the matrix of T with respect to every orthonormal basis of V has the form

|a -b| |b a| b ≠ 0

T is normal but not self adjoint
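A quick numerical illustration of this block (the values a = 2, b = 3 are mine):

```python
import numpy as np

# The 2-by-2 block [[a, -b], [b, a]] with b != 0 is normal but not self adjoint.
a, b = 2.0, 3.0
T = np.array([[a, -b], [b, a]])

assert np.allclose(T @ T.T, T.T @ T)   # normal: TT* = T*T
assert not np.allclose(T, T.T)         # not self adjoint
```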

the spectral theorem

real spectral theorem

suppose that T ∈ L(V) is self adjoint (or that F = C and that T ∈ L(V) is normal) Let λ_1,...,λ_m denote the distinct eigenvalues of T

then V = null(T −λ_1I) ⊕···⊕null(T −λ_mI)

each vector in each null(T −λ_jI) is orthogonal to all vectors in the other subspaces of this decomposition

Suppose that V is a real inner-product space and T ∈ L(V) then V has an orthonormal basis consisting of eigenvectors of T if and only if T is self adjoint

suppose T ∈ L(V) is self-adjoint; if α, β ∈ R are such that α^2 < 4β, then T^2 + αT + βI is invertible

suppose T ∈ L(V ) is self-adjoint then T has an eigenvalue

complex spectral theorem

suppose that V is a complex inner product space and T ∈ L(V) then V has an orthonormal basis consisting of eigenvectors of T if and only if T is normal

M(T, (e_1,...,e_n)) = |a_1,1 ... a_1,n| | ... | |0 ... a_n,n|

||Te_1||^2 = |a1,1|^2

||T^∗e_1||^2 = |a_1,1|^2 + |a_1,2|^2 +···+ |a_1,n|^2
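The complex spectral theorem can be illustrated with a normal, non-self-adjoint matrix; this NumPy sketch (not from the notes) relies on `numpy.linalg.eig` returning unit-norm eigenvectors, which for a normal matrix with distinct eigenvalues are automatically orthonormal:

```python
import numpy as np

# A normal (but not self adjoint) operator on C^2: TT* = T*T.
T = np.array([[0.0, -1.0], [1.0, 0.0]], dtype=complex)
assert np.allclose(T @ T.conj().T, T.conj().T @ T)

# Complex spectral theorem: an orthonormal basis of eigenvectors of T exists.
eigvals, V = np.linalg.eig(T)
# Columns of V are unit eigenvectors; distinct eigenvalues of a normal matrix
# give orthogonal eigenvectors, so V is unitary.
assert np.allclose(V.conj().T @ V, np.eye(2))
# T is diagonalized by this basis: T = V diag(eigvals) V*.
assert np.allclose(V @ np.diag(eigvals) @ V.conj().T, T)
```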

self adjoint and normal operators

an operator T ∈ L(V ) is called self adjoint if T = T^∗

operator on an inner product space is called normal if it commutes with its adjoint

If T ∈ L(V) is normal, then eigenvectors of T corresponding to distinct eigenvalues are orthogonal

suppose T ∈ L(V) is normal if v ∈ V is an eigenvector of T with eigenvalue λ ∈ F, then v is also an eigenvector of T^∗ with eigenvalue λ^bar

operator T ∈ L(V) is normal if and only if ||Tv|| = ||T^∗v|| for all v ∈ V

T ∈ L(V ) is normal if TT^∗ = T^∗T

If T is a self adjoint operator on V such that ⟨Tv, v⟩ = 0 for all v ∈ V, then T = 0

let V be a complex inner product space and let T ∈ L(V); then T is self adjoint if and only if ⟨Tv, v⟩ ∈ R for every v ∈ V

if V is a complex inner product space and T is an operator on V such that ⟨Tv, v⟩ = 0 for all v ∈ V, then T = 0

every eigenvalue of a self adjoint operator is real
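A numerical check of these facts on a Hermitian matrix (the entries are arbitrary example values):

```python
import numpy as np

# A self adjoint operator on C^2 (equals its conjugate transpose).
T = np.array([[2.0, 1.0 - 1.0j],
              [1.0 + 1.0j, 3.0]])
assert np.allclose(T, T.conj().T)

# Every eigenvalue of a self adjoint operator is real.
eigvals = np.linalg.eigvals(T)
assert np.allclose(eigvals.imag, 0.0)

# And <Tv, v> is real for every v.  np.vdot conjugates its first argument,
# so np.vdot(v, T @ v) computes <Tv, v> in the first-slot-linear convention.
v = np.array([1.0 + 2.0j, -3.0j])
assert np.isclose(np.vdot(v, T @ v).imag, 0.0)
```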

Chapter 6: Inner Product Spaces
linear functionals and adjoints

conjugate transpose of an m-by-n matrix is the n-by-m matrix obtained by interchanging the rows and columns and then taking the complex conjugate of each entry

Suppose T ∈ L(V,W) if (e_1,...,e_n) is an orthonormal basis of V and (f_1,...,f_m) is an orthonormal basis of W

then M(T^∗, (f_1,...,f_m), (e_1,...,e_n))

is the conjugate transpose of M(T,(e_1,...,e_n), (f_1,...,f_m))
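With respect to the standard bases this is just the conjugate transpose; a sketch with NumPy, using a helper `ip` (name mine) for the first-slot-linear inner product:

```python
import numpy as np

# Axler convention: <u, v> = sum_j u_j * conj(v_j), linear in the first slot.
def ip(u, v):
    return np.sum(u * np.conj(v))

# M(T*) is the conjugate transpose of M(T) (orthonormal/standard bases).
T = np.array([[1.0 + 1.0j, 2.0],
              [0.0, 3.0 - 1.0j],
              [1.0j, 4.0]])        # a map from C^2 to C^3 (entries arbitrary)
T_adj = T.conj().T                 # the adjoint: a map from C^3 to C^2

# Defining property of the adjoint: <Tv, w> = <v, T*w>.
v = np.array([1.0 - 1.0j, 2.0j])
w = np.array([0.5, 1.0 + 1.0j, -2.0])
assert np.isclose(ip(T @ v, w), ip(v, T_adj @ w))
```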

suppose ϕ is a linear functional on V; then there is a unique vector v ∈ V such that ϕ(u) = ⟨u, v⟩ for every u ∈ V

adjoint of T, denoted T^∗, is the function from W to V such that ⟨Tv, w⟩ = ⟨v, T^∗w⟩ for every v ∈ V and w ∈ W

suppose T ∈ L(V, W )

rangeT = (nullT^∗)^⊥

nullT = (rangeT^∗)^⊥

rangeT^∗ = (nullT)^⊥

nullT^∗ = (rangeT)^⊥

products (ST)^∗ = T^∗S^∗ for all T ∈ L(V,W) and S ∈ L(W, U)

identity I^∗ = I, where I is the identity operator on V

adjoint of adjoint (T^∗)^∗ = T for all T ∈ L(V, W )

conjugate homogeneity (aT)^∗ = a^barT^∗ for all a ∈ F and T ∈ L(V, W )

additivity (S+T)^∗ = S^∗+T^∗ for all S,T ∈ L(V,W)

⟨Tv, w⟩ = ⟨v, T^∗w⟩

linear functional on V is a linear map from V to the scalars F

orthogonal projections and minimization problem

orthogonal projection

V = U ⊕U^⊥

suppose U is a subspace of V and v ∈ V; then ||v − P_Uv|| ≤ ||v − u|| for every u ∈ U, with equality if and only if u = P_Uv

P_Uv = ⟨v, e_1⟩e_1 + ··· + ⟨v, e_m⟩e_m, where (e_1,...,e_m) is an orthonormal basis of U

||P_Uv|| ≤ ||v|| for every v ∈ V

P_U^2= P_U

v−P_Uv ∈ U^⊥ for every v ∈ V

nullP_U= U^⊥

rangeP_U= U

denoted P_U

The decomposition V = U ⊕ U^⊥ means that each vector v ∈ V can be written uniquely in the form v = u + w where u ∈ U and w ∈ U^⊥

U ∩ U^⊥ = {0}

if U is a subspace of V, then U = (U^⊥)^⊥

U ⊂ (U^⊥)^⊥

if U is a subspace of V, then V = U ⊕U^⊥

V = U + U^⊥

if U is a subset of V, then the orthogonal complement of U, denoted U^⊥

U^⊥ = {v ∈ V : ⟨v, u⟩ = 0 for all u ∈ U}
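The listed properties of P_U can be verified concretely; a NumPy sketch (not from the notes) in which U is spanned by the orthonormal columns of `E` and all example vectors are arbitrary:

```python
import numpy as np

# Orthonormal basis of a 2-dimensional subspace U of R^3, via QR.
E = np.linalg.qr(np.array([[1.0, 1.0],
                           [0.0, 1.0],
                           [1.0, 0.0]]))[0]   # 3x2, orthonormal columns

def P_U(v):
    # P_U v = <v,e_1>e_1 + ... + <v,e_m>e_m
    return E @ (E.T @ v)

v = np.array([2.0, -1.0, 3.0])
p = P_U(v)

assert np.allclose(P_U(p), p)                  # P_U^2 = P_U
assert np.allclose(E.T @ (v - p), 0.0)         # v - P_U v ∈ U^⊥
assert np.linalg.norm(p) <= np.linalg.norm(v) + 1e-12   # ||P_U v|| <= ||v||

# Minimization: P_U v is the closest point of U to v.
u = E @ np.array([0.3, -0.7])                  # some other point of U
assert np.linalg.norm(v - p) <= np.linalg.norm(v - u)
```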

orthonormal bases

Gram-Schmidt: if (v_1,...,v_m) is a linearly independent list of vectors in V, then there exists an orthonormal list (e_1,...,e_m) of vectors in V such that span(v_1,...,v_j) = span(e_1,...,e_j) for j = 1,...,m

suppose V is a complex vector space and T ∈ L(V) then T has an upper triangular matrix with respect to some orthonormal basis of V

suppose T ∈ L(V) if T has an upper-triangular matrix with respect to some basis of V, then T has an upper triangular matrix with respect to some orthonormal basis of V

every orthonormal list of vectors in V can be extended to an orthonormal basis of V

(e_1,...,e_m, f_1,...,f_n)

every finite-dimensional inner-product space has an orthonormal basis

span(v_1,...,v_j−1) = span(e_1,...,e_j−1)

span(v_1,...,v_j) = span(e_1,...,e_j)
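The procedure translates directly into code; a minimal Gram-Schmidt sketch (my own implementation, assuming the input list is linearly independent so no division by zero occurs):

```python
import numpy as np

def gram_schmidt(vectors):
    """Turn a linearly independent list into an orthonormal list with
    span(v_1,...,v_j) = span(e_1,...,e_j) for each j."""
    es = []
    for v in vectors:
        w = v.astype(float).copy()
        for e in es:
            w = w - np.dot(w, e) * e       # subtract the component along e
        es.append(w / np.linalg.norm(w))   # normalize
    return es

vs = [np.array([1.0, 1.0, 0.0]),
      np.array([1.0, 0.0, 1.0]),
      np.array([0.0, 1.0, 1.0])]
es = gram_schmidt(vs)

# The result is orthonormal: <e_j, e_k> = 1 if j = k, else 0.
G = np.array([[np.dot(a, b) for b in es] for a in es])
assert np.allclose(G, np.eye(3))
```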

suppose (e_1,...,e_n) is an orthonormal basis of V

||v||^2 = |⟨v, e_1⟩|^2 + ··· + |⟨v, e_n⟩|^2

v = ⟨v, e_1⟩e_1 + ··· + ⟨v, e_n⟩e_n

orthonormal basis of V is an orthonormal list of vectors in V that is also a basis of V

if (e_1,...,e_m) is an orthonormal list of vectors in V, then

||a_1e_1+···+a_me_m||^2= |a_1|^2+···+|a_m|^2

list of vectors is called orthonormal if the vectors in it are pairwise orthogonal and each vector has norm 1

every orthonormal list of vectors is linearly independent

a list (e_1,...,e_m) of vectors in V is orthonormal if ⟨e_j, e_k⟩ equals 0 when j ≠ k and equals 1 when j = k

norms

two vectors u, v ∈ V are said to be orthogonal if ⟨u, v⟩ = 0

Parallelogram Equality: if u, v ∈ V, then

||u+v||^2+ ||u−v||^2 = 2(||u||^2+ ||v||^2)

Triangle Inequality: if u, v ∈ V, then

||u+v|| ≤ ||u||+||v||

|⟨u, v⟩| = ||u|| ||v|| if and only if one of u, v is a scalar multiple of the other

Cauchy-Schwarz Inequality: if u, v ∈ V, then

|⟨u, v⟩| ≤ ||u|| ||v||

Pythagorean Theorem: if u,v are orthogonal vectors in V

||u+v||^2 = ||u||^2+ ||v||^2

the norm of v, denoted ||v||, is defined by ||v|| = sqrt(⟨v, v⟩)
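All of these norm facts are easy to sanity-check numerically (example vectors mine):

```python
import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0, 0.0, -1.0])
norm = np.linalg.norm

# ||v|| = sqrt(<v, v>)
assert np.isclose(norm(v), np.sqrt(np.dot(v, v)))

# Cauchy-Schwarz: |<u,v>| <= ||u|| ||v||
assert abs(np.dot(u, v)) <= norm(u) * norm(v)

# Triangle inequality: ||u+v|| <= ||u|| + ||v||
assert norm(u + v) <= norm(u) + norm(v)

# Parallelogram equality: ||u+v||^2 + ||u-v||^2 = 2(||u||^2 + ||v||^2)
assert np.isclose(norm(u + v)**2 + norm(u - v)**2,
                  2 * (norm(u)**2 + norm(v)**2))
```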

inner products

inner product on V is a function that takes each ordered pair (u, v) of elements of V to a number ⟨u, v⟩ ∈ F

inner-product space is a vector space V along with an inner product on V

⟨p, q⟩ = integral from 0 to 1 of p(x)q(x) dx

⟨(w_1,...,w_n), (z_1,...,z_n)⟩ = w_1z_1^bar + ··· + w_nz_n^bar

conjugate symmetry: ⟨v, w⟩ = ⟨w, v⟩^bar for all v, w ∈ V

homogeneity in first slot: ⟨av, w⟩ = a⟨v, w⟩ for all a ∈ F and all v, w ∈ V

additivity in first slot: ⟨u + v, w⟩ = ⟨u, w⟩ + ⟨v, w⟩ for all u, v, w ∈ V

definiteness: ⟨v, v⟩ = 0 if and only if v = 0

positivity: ⟨v, v⟩ ≥ 0 for all v ∈ V

The length of a vector x in R^2 or R^3 is called the norm of x, denoted ||x||

if λ = a + bi, where a, b ∈ R, then the absolute value of λ is defined by

complex conjugate

λ^bar = a−bi

|λ|^2 = λλ^bar

for z = (z_1,...,z_n) ∈ C^n, we define the norm of z by ||z ||= sqrt(|z_1|^2+···+|z_n|^2)

||z||^2= z_1z_1^bar+··· +z_nz_n^bar

|λ| = sqrt(a^2+b^2)

norm is not linear on R^n

to bring linearity into the discussion, introduce the dot product

for x, y ∈ R^n, the dot product of x and y, denoted x · y

x ·y = x_1y_1+···+x_ny_n

x = (x_1, x_2) ∈ R^2, we have ||x|| = sqrt(x_1^2+x_2^2)

Chapter 5: Eigenvalues and Eigenvectors
invariant subspaces on real vector spaces

Every operator on an odd-dimensional real vector space has an eigenvalue

Every operator on a finite-dimensional, nonzero, real vector space has an invariant subspace of dimension 1 or 2

diagonal matrix

V = null(T −λ_1I)+··· +null(T −λ_mI)

dimV = dim null(T −λ_1I)+··· +dim null(T −λ_mI)

Suppose T ∈ L(V) let λ_1,...,λ_m denote the distinct eigenvalues of T then the following are equivalent

dimV = dim null(T −λ_1I)+··· +dim null(T−λ_mI)

V = null(T −λ_1I)⊕··· ⊕null(T −λ_mI)

there exist one-dimensional subspaces U_1,...,U_n of V, each invariant under T, such that V = U_1⊕···⊕U_n

V has a basis consisting of eigenvectors of T

T has a diagonal matrix with respect to some basis of V

if T ∈ L(V) has dimV distinct eigenvalues, then T has a diagonal matrix with respect to some basis of V

T(w, z) = (z,0)

diagonal matrix is a square matrix that is 0 everywhere except possibly along the diagonal

|1 0 0| |0 2 0| |0 0 3|
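A numerical illustration (not from the notes; the example matrix is mine) of "dim V distinct eigenvalues ⇒ diagonal matrix with respect to some basis":

```python
import numpy as np

# An operator on R^2 with dim V = 2 distinct eigenvalues.
T = np.array([[2.0, 1.0], [0.0, 5.0]])
eigvals, V = np.linalg.eig(T)       # columns of V: a basis of eigenvectors
assert len(set(np.round(eigvals, 8))) == 2

# With respect to that basis, T has a diagonal matrix.
D = np.linalg.inv(V) @ T @ V
assert np.allclose(D, np.diag(eigvals))
```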

upper triangular matrix

matrix of T with respect to the basis (v_1,...,v_n): |a_1,1 ... a_1,n| | ... ... | |a_n,1 ... a_n,n|

Suppose V is a complex vector space and T ∈ L(V) then T has an upper-triangular matrix with respect to some basis of V

Tu_j= (T|U)(u_j) ∈ span(u_1,...,u_j)

Suppose T ∈ L(V) has an upper triangular matrix with respect to some basis of V; then the eigenvalues of T consist precisely of the entries on the diagonal of that upper-triangular matrix

suppose T ∈ L(V) has an upper triangular matrix with respect to some basis of V; then T is invertible if and only if all the entries on the diagonal of that upper triangular matrix are nonzero

T|_U has an upper-triangular matrix

Tv_k ∈ span(u_1,...,u_m, v_1,...,v_k)

Suppose T ∈ L(V) and (v_1,...,v_n) is a basis of V

span(v_1,...,v_k) is invariant under T for each k = 1,...,n

Tv_k ∈ span(v1,...,vk) for each k = 1,...,n

the matrix of T with respect to (v_1,...,v_n) is upper triangular

The diagonal of a square matrix consists of the entries along the straight line from the upper left corner to the bottom right corner

upper triangular if all the entries below the diagonal equal 0

|λ_1 ... ∗| | ... | |0 ... λ_n|

|1 2 3 4| |0 2 3 4| |0 0 3 4| |0 0 0 4|

use ∗ to denote matrix entries that we do not know about or that are irrelevant

denote it by M(T, (v_1,...,v_n)) or just by M(T) if the basis (v_1,...,v_n) is clear from the context

every operator on a finite-dimensional, nonzero, complex vector space has an eigenvalue

polynomials applied to operators

p and q are polynomials with coefficients in F, then pq is the polynomial defined by

(pq)(T) = p(T)q(T)

p(T)q(T) = (pq)(T) = (qp)(T) = q(T)p(T)

(pq)(z) = p(z)q(z)

T ∈ L(V)

for z ∈ F, then p(T) is the operator defined by

p(T) = a_0I +a_1T+ a_2T^2+···+a_mT^m

T ∈ L(V) and p ∈ P(F)

p(z) = a_0+a_1z + a_2z^2+···+a_mz^m

T^mT^n = T^(m+n), (T^m)^n = T^(mn)

T^m = T···T (the product of T with itself m times)
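A sketch of p(T) as a function, with p(T)q(T) = (pq)(T) = q(T)p(T) checked on an example (the matrix and polynomial coefficients are mine):

```python
import numpy as np

def poly_of_operator(coeffs, T):
    """p(T) = a_0 I + a_1 T + ... + a_m T^m, coeffs = [a_0, ..., a_m]."""
    n = T.shape[0]
    result = np.zeros_like(T, dtype=float)
    power = np.eye(n)               # T^0 = I
    for a in coeffs:
        result = result + a * power
        power = power @ T           # next power of T
    return result

T = np.array([[1.0, 2.0], [3.0, 4.0]])
p = [1.0, -2.0]              # p(z) = 1 - 2z
q = [0.0, 0.0, 3.0]          # q(z) = 3z^2
pq = [0.0, 0.0, 3.0, -6.0]   # (pq)(z) = 3z^2 - 6z^3

# p(T)q(T) = (pq)(T) = q(T)p(T): polynomials in one operator commute.
assert np.allclose(poly_of_operator(p, T) @ poly_of_operator(q, T),
                   poly_of_operator(pq, T))
assert np.allclose(poly_of_operator(p, T) @ poly_of_operator(q, T),
                   poly_of_operator(q, T) @ poly_of_operator(p, T))
```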

invariant subspaces

Suppose T ∈ L(V) and V = U_1⊕···⊕U_m

eigenvalues

eigenvectors

let T ∈ L(V ) suppose λ_1,...,λ_m are distinct eigenvalues of T and v_1,...,v_m are corresponding nonzero eigenvectors then (v_1,...,v_m) is linearly independent

each operator on V has at most dimV distinct eigenvalues

v_k ∈ span(v_1,...,v_k−1)

v_k = a_1v_1 + ··· + a_k−1v_k−1

λ_kv_k = a_1λ_1v_1 + ··· + a_k−1λ_k−1v_k−1

0 = a_1(λ_k−λ_1)v_1+··· +a_k−1(λ_k−λ_k−1)v_k−1
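Numerically, linear independence of eigenvectors for distinct eigenvalues shows up as the eigenvector matrix having full rank (example matrix mine):

```python
import numpy as np

# Distinct eigenvalues 1, 2, 3 on the diagonal of a triangular matrix.
T = np.array([[1.0, 1.0, 0.0],
              [0.0, 2.0, 1.0],
              [0.0, 0.0, 3.0]])
eigvals, V = np.linalg.eig(T)

assert len(set(np.round(eigvals, 8))) == 3   # distinct eigenvalues...
assert np.linalg.matrix_rank(V) == 3         # ...so eigenvectors are independent
```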

if a ∈ F, then aI has only one eigenvalue, namely, a, and every vector is an eigenvector for this eigenvalue

T ∈ L(F^2) T(w,z) = (−z,w)

T(w,z) = λ(w,z)

−z = λw,w = λz

−z = λ^2z

−1 = λ^2

suppose T ∈ L(V) and λ ∈ F is an eigenvalue of T; a vector u ∈ V is called an eigenvector of T if Tu = λu

a nonzero vector u ∈ V such that Tu = λu

invariant

dim 1 invariant subspace U = {au : a ∈ F}

T ∈ L(V), U a subspace of V; u ∈ U implies Tu ∈ U

a subspace that gets mapped into itself

U_j is a proper subspace of V

Chapter 4: Polynomials
fundamental theorem of algebra

p(z) = c(z − λ_1)···(z − λ_m)

If p ∈ P(C) is a nonconstant polynomial, then p has a unique factorization of the form

every nonconstant polynomial with complex coefficients has a root

Chapter 3: Linear Maps
matrix of a linear map

invertibility

inverse

suppose V is finite dimensional if T ∈ L(V ), then the following are equivalent

T is surjective

T is injective

T is invertible

If V and W are finite dimensional, then L(V,W) is finite dimensional and dimL(V,W) = (dimV)(dimW)

Suppose that (v_1,...,v_n) is a basis of V and (w_1,...,w_m) is a basis of W. Then M is an invertible linear map between L(V,W) and Mat(m, n, F)

Two finite-dimensional vector spaces are isomorphic if and only if they have the same dimension

Two vector spaces are called isomorphic if there is an invertible linear map from one vector space onto the other one

A linear map is invertible if and only if it is injective and surjective

if S and S' are both inverses of T, then S = SI = S(TS') = (ST)S' = IS' = S', so the inverse is unique

linear map S ∈ L(W,V) satisfying ST=I and TS=I

linear map T ∈ L(V,W) is invertible if there exists a linear map S ∈ L(W,V) such that ST equals the identity map on V and TS equals the identity map on W

(v_1,...,v_n) is a basis of V; if v ∈ V, then there exist unique scalars b_1,...,b_n such that v = b_1v_1+···+b_nv_n

Suppose T ∈ L(V, W), (v_1,...,v_n) is a basis of V, and (w_1,...,w_m) is a basis of W; then M(Tv) = M(T)M(v)

matrix of v, denoted M(v): M(v) = |b_1| | ... | |b_n|

matrix functions

matrix multiplication: M(TS) = M(T)M(S)

scalar multiplication: M(cT) = cM(T)

c |a_1,1 ... a_1,n| |... ... ...| |a_m,1 ... a_m,n| = |ca_1,1 ... ca_1,n| |... ... ...| |ca_m,1 ... ca_m,n|

matrix addition: M(T+S) = M(T) + M(S)

|a_1,1 ... a_1,n| |... ... ...| |a_m,1 ... a_m,n| + |b_1,1 ... b_1,n| |... ... ...| |b_m,1 ... b_m,n| = |a_1,1+b_1,1 ... a_1,n+b_1,n| |... ... ...| |a_m,1+b_m,1 ... a_m,n+b_m,n|

T ∈ L(V, W)

Suppose that (v_1,...,v_n) is a basis of V and(w_1,...,w_m) is a basis of W, for each k=1,...,n, we can write Tv_k uniquely as a linear combination of the w’s

Tv_k= a_1,kw_1+···+a_m,kw_m

a_j,k ∈ F for j = 1,...,m

matrix formed by these scalars is called the matrix of T with respect to the bases (v_1,...,v_n) and (w_1,...,w_m)

The kth column of M(T) consists of the scalars needed to write Tv_k as a linear combination of the w’s

Tv_k is retrieved from the matrix M(T) by multiplying each entry in the kth column by the corresponding w from the left column, and then adding up the resulting vectors

T(x,y) = (x+3y, 2x+5y, 7x+9y) has matrix M(T) = |1 3| |2 5| |7 9|

elements of F^m as columns of m numbers, then you can think of the kth column of M(T) as T applied to the k th basis vector

unless stated otherwise the bases in a linear map from F^n to F^m are the standard ones

M(T,(v_1,...,v_n),(w_1,...,w_m))

the scalars aj,k completely determine the linear map T because a linear map is determined by its values on a basis

an m-by-n matrix is a rectangular array with m rows and n columns
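The column rule can be seen directly in code; a sketch using the T(x,y) = (x+3y, 2x+5y, 7x+9y) example from above:

```python
import numpy as np

# The example map T(x, y) = (x+3y, 2x+5y, 7x+9y).
def T(v):
    x, y = v
    return np.array([x + 3*y, 2*x + 5*y, 7*x + 9*y])

# The kth column of M(T) is T applied to the kth standard basis vector.
M = np.column_stack([T(e) for e in np.eye(2)])
assert np.allclose(M, np.array([[1, 3], [2, 5], [7, 9]]))

# M(Tv) = M(T) M(v): applying T is matrix-vector multiplication.
v = np.array([2.0, -1.0])
assert np.allclose(M @ v, T(v))
```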

null spaces and ranges

T ∈ L(V,W), the null space of T, or null T, is the subset of V consisting of those vectors that T maps to 0

nullT = {v ∈ V:Tv = 0}

linear map T:V to W is called injective if whenever u,v ∈ V and Tu = Tv, we have u = v

If V and W are finite-dimensional vector spaces such that dimV < dimW, then no linear map from V to W is surjective

dim rangeT = dimV −dim nullT ≤ dimV < dimW

Homogeneous, in this context, means that the constant term on the right side of each equation equals 0

If V and W are finite-dimensional vector spaces such that dimV > dimW, then no linear map from V to W is injective

dim nullT = dimV −dim rangeT ≥ dimV −dimW > 0

T ∈ L(V, W), then rangeT is a subspace of W rangeT = {T v : v ∈ V}

If V is finite dimensional and T ∈ L(V, W ) then rangeT is a finite- dimensional subspace of W dimV = dim nullT + dim rangeT

linear map T:V→W is called surjective if its range equals W

T ∈ L(V, W), the range of T, denoted range T, is the subset of W consisting of those vectors that are of the form T v for some v ∈ V

if T ∈ L(V, W ) then T is injective if and only if nullT = {0}

If T ∈ L(V,W), then nullT is a subspace of V

for the differentiation map, the null space consists of the constant functions
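The formula dim V = dim null T + dim range T can be checked numerically; in this sketch (example matrix mine) dim null T is counted independently via the SVD rather than derived from the formula:

```python
import numpy as np

T = np.array([[1.0, 2.0, 3.0],
              [2.0, 4.0, 6.0]])              # a linear map from R^3 to R^2
dim_V = T.shape[1]

dim_range = np.linalg.matrix_rank(T)          # dim range T

# dim null T = number of zero singular values, counting the n - min(m, n)
# directions the SVD does not list explicitly.
s = np.linalg.svd(T, compute_uv=False)
s_full = np.concatenate([s, np.zeros(dim_V - len(s))])
dim_null = int(np.sum(s_full < 1e-10))

assert dim_range == 1                         # row 2 is twice row 1
assert dim_null == 2
assert dim_V == dim_null + dim_range          # dim V = dim null T + dim range T
```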

types of linear maps

from F^n to F^m

T ∈ L(F^n,F^m) is equal to T (x_1,...,x_n) = (a_1,1x_1+ ···+a_1,nx_n,...,a_m,1x_1+ ···+a_m,nx_n)

(v_1,...,v_n) is a basis of V and T:V → W is linear, v ∈ V v = a_1v_1+···+a_nv_n

U is a vector space over F T ∈ L(U,V); S ∈ L(V,W) then ST ∈ L(U,W) is equal to (ST)(v) = S(Tv) for v ∈ U

when S and T are both linear then S (dot) T is written as just ST; ST is the product of S and T

properties

multiplication of linear maps is not commutative

distributive

(S_1 + S_2)T = S_1T + S_2T S(T_1+ T_2) = ST_1 + ST_2 where T,T_1,T_2∈ L(U,V) and S,S_1,S_2 ∈ L(V, W)

TI = T and IT = T T ∈ L(V, W) where the first I is the identity map on V and the second I is the identity map on W

associativity

(T_1T_2)T_3= T_1(T_2T_3) T_1,T_2, and T_3 are linear maps such that T_3 maps into domain of T_2, and T_2 maps into the domain of T_1

L(V, W) into a vector space

(aT)v = a(Tv)

S,T ∈ L(V, W) S +T ∈ L(V, W) (S+T)v = Sv +T v

given a basis (v_1,...,v_n) of V and w_1,...,w_n ∈ W, there is a unique linear map T: V → W with Tv_j = w_j for j = 1,...,n; it is given by T(a_1v_1+···+a_nv_n) = a_1w_1+···+a_nw_n

since T is linear, Tv = a_1Tv_1+···+a_nTv_n

backwards shift

T ∈ L(F^∞,F^∞) is equal to T(x_1, x_2, x_3,...) = (x_2, x_3,...)

multiplication by x^2

T ∈ L(P(R),P(R)) is equal to (Tp)(x) = x^2p(x) for x ∈ R

integration

T ∈ L(P(R),R) is equal to Tp = integral from 0 to 1 p(x) dx

differentiation

T ∈ L(P(R),P(R)) is equal to Tp = p'

(f+g)' = f' + g' and (af)' = af'

identity

I, is the function on some vector space that takes each element to itself

I ∈ L(V, V) is equal to Iv = v

zero

0 is the function that takes each element of some vector space to the additive identity of another vector space

0 ∈ L(V,W) is equal to 0v = 0

the 0 on the left side of the equation above is a function from V to W, and the 0 on the right side is the additive identity in W

linear map from V to W is a function T: V → W with the following properties

homogeneity: T(av) = a(Tv) for all a in F and all v in V

additivity: T(u+v) = Tu+Tv for all u, v in V

Chapter 2: Finite Dimensional Vector Spaces
dimensions

Suppose V is finite dimensional and U_1,...,U_m are subspaces of V such that V = U_1+···+U_m and dimV = dimU_1+···+dimU_m; then V = U_1 ⊕ ··· ⊕ U_m

If U_1 and U_2 are subspaces of a finite dimensional vector space, then dim(U_1+U_2) = dimU_1 + dimU_2 − dim(U_1∩U_2)

If V is finite dimensional, then every linearly independent list of vectors in V with length dimV is a basis of V

If V is finite dimensional, then every spanning list of vectors in V with length dimV is a basis of V

If V is finite dimensional and U is a subspace of V, then dimU ≤ dimV

dimension of a finite dimensional vector space is the length of any basis of the vector space

Any two bases of a finite dimensional vector space have the same length

bases

Suppose V is finite dimensional and U is a subspace of V then there is a subspace W of V such that V equals the direct sum of U and W

Every linearly independent list of vectors in a finite dimensional vector space can be extended to a basis of the vector space

Every finite dimensional vector space has a basis

Every spanning list in a vector space can be reduced to a basis of the vector space

A list (v_1,...,v_n) of vectors in V is a basis of V if and only if every v ∈ V can be written uniquely in the form v = a_1v_1+···+a_nv_n

a basis of V is a list of vectors in V that is linearly independent and spans V

linear dependence

linear dependence lemma

if (v_1,...,v_m) is a linearly dependent list in V and v_1 ≠ 0, then there exists j ∈ {2,...,m} such that

if the jth term is removed, then the span of the remaining list equals span(v_1,...,v_m)

v_j ∈ span(v_1,...,v_j−1)

Every subspace of a finite-dimensional vector space is finite dimensional

a linearly dependent list is a list of vectors that is not linearly independent

linear independence

in a finite dimensional vector space, the length of every linearly independent list of vectors is less than or equal to the length of every spanning list of vectors

linear independence: a list (v_1,...,v_m) of vectors in V is linearly independent if the only scalars a_1,...,a_m ∈ F that make a_1v_1+...+a_mv_m equal to 0 are a_1 = ··· = a_m = 0

linear combination

span is the set of all linear combinations

a polynomial is said to have degree m if there exist scalars a_0,a_1,...,a_m ∈ F with a_m ≠ 0 such that p(z) = a_0+a_1z+...+a_mz^m

if the polynomial is equal to zero then its degree is negative infinity

a vector space is finite dimensional if some list of vectors in it spans the space

a vector space is infinite dimensional if it is not finite dimensional

if span(v_1,...,v_m) equals V, then (v_1,...,v_m) spans V

if (v_1,...,v_m) is a list of vectors in V, then each v_j is a linear combination of (v_1,...,v_m)

the span of any list of vectors in V is a subspace of V

a linear combination of (v_1,...,v_m) is a vector of the form a_1v_1+...+a_mv_m where a_1,...,a_m ∈ F

Chapter 1: Vector Spaces
Vector Spaces

Vector Sums and Direct Sums

Proposition 1.9

Suppose that U and W are subspaces of V. Then V = U ⊕ W if and only if V = U + W and U ∩ W = {0}

Proposition 1.8

Suppose that U1, . . . , Un are subspaces of V. Then V = U1 ⊕ · · · ⊕ Un if and only if both the following conditions hold

the only way to write 0 as a sum u1 + · · · + un, where each uj ∈ Uj , is by taking all the uj ’s equal to 0

V = U1 + · · · + Un

Proposition 1.7

U = {(x, y, 0) ∈ F^3: x, y ∈ F} W = {(0, 0, z) ∈ F^3: z ∈ F} U + W = {(x, y, z) : x, y, z ∈ F} = F^3

Subspaces

must satisfy the following

Closed Under Scalar Multiplication: au ∈ U for all a ∈ F and u ∈ U

Closed Under Addition: u + v ∈ U for all u, v ∈ U

Additive Identity: 0 ∈ U

A subset U of V is called a subspace of V if U is also a vector space

Properties of Vector Spaces

Proposition 1.6

(−1)v = −v for all v ∈ V: v + (−1)v = 1v + (−1)v = (1 + (−1))v = 0v = 0

Proposition 1.5

a0 = 0 for all a ∈ F: a0 = a(0 + 0) = a0 + a0

Proposition 1.4

0v = 0 for all v ∈ V: 0v = (0 + 0)v = 0v + 0v

Proposition 1.3

Every element in a vector space has a unique additive inverse: if w and w' are additive inverses of v, then w = w + 0 = w + (v + w') = (w + v) + w' = 0 + w' = w'

Proposition 1.2

A vector space has a unique additive identity: 0' = 0' + 0 = 0

a set V along with an addition on V and a scalar multiplication on V such that the following properties hold

for every v ∈ V, there exists w ∈ V such that v + w = 0

add: there exists 0 ∈ V with v + 0 = v for all v ∈ V; mult: 1v = v for all v ∈ V

a(u + v) = au + av and (a + b)u = au + bu for all a, b ∈ F and u, v ∈ V

add: (u+v)+w = u+(v+w); mult: (ab)v = a(bv) for all u, v, w ∈ V and a, b ∈ F

u + v = v + u for all u, v ∈ V

Complex Numbers

Axioms for Complex Numbers

Inverse

add: for every a ∈ C there exists b ∈ C with a + b = 0; mult: for every a ∈ C with a ≠ 0 there exists b ∈ C with ab = 1

Identities

add: a + 0 = a; mult: a·1 = a for all a ∈ C

Distributive

a(b+c) = ab + ac a,b,c ∈ C

Associativity

add: (a+b)+c = a+(b+c) mult: (ab)c = a(bc) a,b,c ∈ C

Commutativity

add: a+b = b+a mult: ab = ba a,b ∈ C

C = {a + bi : a, b ∈ R} where R is all Real Numbers
