Linear Algebra

Chapter 1:
Vector Spaces

Complex Numbers

C = {a + bi : a, b ∈ R}
where R denotes the set of real numbers

Axioms for Complex Numbers

Commutativity

add: a+b = b+a
mult: ab = ba
a,b ∈ C

Associativity

add: (a+b)+c = a+(b+c)
mult: (ab)c = a(bc)
a,b,c ∈ C

Distributive

a(b+c) = ab + ac
a,b,c ∈ C

Identities

add: a+0 = a
mult: a*1 = a
a ∈ C

Inverse

add: for every a ∈ C there exists a unique b ∈ C with a+b = 0
mult: for every a ∈ C with a ≠ 0 there exists a unique b ∈ C with ab = 1
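
a quick numeric spot check of these axioms (a sketch using Python's built-in complex type; the sample values are illustrative, not from the notes):

# spot-check the complex-number axioms on sample values
a, b, c = 2 + 3j, -1 + 4j, 0.5 - 2j
close = lambda x, y: abs(x - y) < 1e-12   # tolerate floating-point error

assert close(a + b, b + a)                # commutativity (add)
assert close(a * b, b * a)                # commutativity (mult)
assert close((a + b) + c, a + (b + c))    # associativity (add)
assert close((a * b) * c, a * (b * c))    # associativity (mult)
assert close(a * (b + c), a * b + a * c)  # distributivity
assert close(a + 0, a) and close(a * 1, a)   # identities
assert close(a + (-a), 0)                 # additive inverse
assert close(a * (1 / a), 1)              # multiplicative inverse (a != 0)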

Vector Spaces

a set V along with an addition on V
and a scalar multiplication on V such
that the following properties hold

Commutativity

u+v = v+u
u, v ∈ V

Associativity

add: (u+v)+w = u+(v+w)
mult: (ab)v = a(bv)
u, v, w ∈ V
a, b ∈ F

Distributive

a(u + v) = au + av
(a + b)u = au + bu
a, b ∈ F
u, v ∈ V

Identities

add: 0 ∈ V
v+0 = v
v ∈ V
mult: 1v = v
v ∈ V

Inverse

for every v ∈ V there exists
w ∈ V such that
v + w = 0

Properties of Vector Spaces

Proposition 1.2

A vector space has a
unique additive identity:
if 0′ is another additive identity,
0′ = 0′+0 = 0

Proposition 1.3

Every element in a
vector space has a
unique additive inverse:
if w and w′ are both
additive inverses of v,
w = w+0
= w + (v + w′)
= (w + v) + w′
= 0 + w′
= w′

Proposition 1.4

0v = 0
v ∈ V
0v = (0 + 0)v
= 0v + 0v
adding −(0v) to both sides
gives 0v = 0

Proposition 1.5

a0 = 0
a ∈ F
a0 = a(0 + 0)
= a0 + a0
adding −(a0) to both sides
gives a0 = 0

Proposition 1.6

(−1)v = −v
v ∈ V
v + (−1)v = 1v + (−1)v
= (1 + (−1))v
= 0v
= 0

Subspaces

A subset U of V is called a subspace
of V if U is also a vector space

must satisfy the following

Additive Identity
0 ∈ U
(the three conditions are spot-checked in the sketch after this list)

Closed Under Addition
u + v ∈ U
u, v ∈ U

Closed Under Scalar Multiplication
au ∈ U
a ∈ F
u ∈ U
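
as a numeric sketch of the three conditions (assuming F = R, V = R^3, and U the plane z = 0; names and values are illustrative):

import numpy as np

# membership test for U = {(x, y, 0) : x, y in R} as a subset of R^3
def in_U(v, tol=1e-12):
    return abs(v[2]) < tol

u = np.array([1.0, 2.0, 0.0])
v = np.array([-3.0, 0.5, 0.0])
a = 4.2

assert in_U(np.zeros(3))   # additive identity: 0 is in U
assert in_U(u + v)         # closed under addition
assert in_U(a * u)         # closed under scalar multiplication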

Vector Sums and Direct Sums

Proposition 1.7

U = {(x, y, 0) ∈ F^3: x, y ∈ F}
W = {(0, 0, z) ∈ F^3: z ∈ F}
U + W = {(x, y, z) : x, y, z ∈ F} = F^3

Proposition 1.8

Suppose that U1, . . . , Un are subspaces of V.
Then V = U1 ⊕ · · · ⊕ Un if and only if both the
following conditions hold

V = U1 + · · · + Un

the only way to write 0 as a sum u_1 + ··· + u_n,
where each u_j ∈ U_j, is by taking all the u_j's
equal to 0

Proposition 1.9

Suppose that U and W are subspaces of V.
Then V = U ⊕ W if and only if V = U + W
and U ∩ W = {0}

Chapter 2:
Finite Dimensional
Vector Spaces

linear combination

a vector of the form
a_1v_1+...+a_mv_m
where a_1,...,a_m ∈ F and
(v_1,...,v_m) is a list of vectors in V

span is the set of
all linear combinations

the span of any list
of vectors in V is a subspace of V

if (v_1,...,v_m) is a list of vectors
in V, then each v_j is a linear
combination of (v_1,...,v_m)

if span(v_1,...,v_m) equals
V, then (v_1,...,v_m) spans
V

a vector space is finite
dimensional if some list
of vectors spans the space

a vector space is infinite
dimensional if it is not
finite dimensional

a polynomial p is said
to have degree m
if there exist scalars
a_0,a_1,...,a_m with
a_m ≠ 0 such that
p(z) = a_0+a_1z+...+a_mz^m

if the polynomial is
equal to zero then
its degree is negative
infinity

linear independence

a list (v_1,...,v_m) of vectors in V
is linearly independent if the only
choice of scalars a_1,...,a_m that
makes a_1v_1+...+a_mv_m equal
to 0 is a_1 = ... = a_m = 0
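
numerically, a list in F^n is linearly independent exactly when the matrix with the vectors as columns has full column rank; a NumPy sketch (illustrative vectors):

import numpy as np

def linearly_independent(vectors):
    # independent iff the matrix with the vectors as columns
    # has rank equal to the length of the list
    A = np.column_stack(vectors)
    return np.linalg.matrix_rank(A) == len(vectors)

assert linearly_independent([np.array([1.0, 0.0, 0.0]),
                             np.array([0.0, 1.0, 0.0])])
assert not linearly_independent([np.array([1.0, 2.0, 3.0]),
                                 np.array([2.0, 4.0, 6.0])])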

in a finite dimensional
vector space, the length
of every linearly independent
list of vectors is less than or
equal to the length of every
spanning list of vectors

linear dependence

a list of vectors is linearly
dependent if it is not
linearly independent

Every subspace of a
finite-dimensional vector
space is finite dimensional

linear dependence
lemma

if (v_1,...,v_m) is a
linearly dependent list
in V and v_1 ≠ 0, there
exists j ∈ {2,...,m}
such that

v_j is in the
span(v_1,...,v_j-1)

if the jth term is
removed, then the
span of the remaining
list equals span(v_1,...,v_m)

bases

a basis of V is a list of
vectors in V that is
linearly independent
and spans V

A list (v_1,...,v_n) of vectors
in V is a basis of V if and only
if every v in V can be written
uniquely in the form
v = a_1v_1+...+a_nv_n
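
the unique scalars can be computed by solving a linear system; a sketch with an illustrative basis of R^2:

import numpy as np

# basis (v_1, v_2) of R^2 as the columns of B
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])          # v_1 = (1, 0), v_2 = (1, 1)
v = np.array([3.0, 2.0])

a = np.linalg.solve(B, v)           # the unique scalars a_1, a_2
assert np.allclose(a[0] * B[:, 0] + a[1] * B[:, 1], v)
print(a)                            # [1. 2.], so v = 1*v_1 + 2*v_2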

Every spanning list in a
vector space can be
reduced to a basis of
the vector space

Every finite dimensional
vector space has a basis

Every linearly independent list
of vectors in a finite dimensional
vector space can be extended to
a basis of the vector space

Suppose V is finite dimensional
and U is a subspace of V then
there is a subspace W of V such
that V equals the direct sum of U
and W

dimensions

Any two bases of a finite
dimensional vector space
have the same length

dimension of a finite dimensional
vector space is the length of any
basis of the vector space

If V is finite dimensional
and U is a subspace of V,
then dimU ≤ dimV

If V is finite dimensional, then
every spanning list of vectors
in V with length dimV is a
basis of V

If V is finite dimensional, then
every linearly independent list
of vectors in V with length
dimV is a basis of V

If U_1 and U_2 are subspaces of a
finite dimensional vector space,
then dim(U1+U2) =
dimU1+dimU2−dim(U1∩U2)

Suppose V is finite dimensional
and U_1,...,U_m are subspaces
of V such that V = U_1+···+U_m
and dimV = dimU_1+···+dimU_m
then V = U_1 ⊕···⊕ U_m

Chapter 3:
Linear Maps

a linear map from V to W
is a function T: V → W
with the following properties

additivity: T(u+v) = Tu+Tv
for all u, v in V

homogeneity: T(av) = a(Tv)
for all a in F and all v in V

types of linear maps

zero

0 is the function that takes each
element of some vector space to
the additive identity of another
vector space

0 ∈ L(V,W) is defined by 0v = 0

the 0 on the left side of the
equation above is a function from
V to W, and the 0 on the right
side is the additive identity in W

identity

I, is the function on some
vector space that takes
each element to itself

I ∈ L(V, V) is defined by
Iv = v

differentiation

T ∈ L(P(R),P(R)) is defined by
Tp = p'

(f+g)' = f' + g' and
(af)' = af'

integration

T ∈ L(P(R),R) is defined by
Tp = integral from 0 to 1 of
p(x) dx

multiplication by x^2

T ∈ L(P(R),P(R)) is defined by
(Tp)(x) = x^2p(x) for x ∈ R

backwards shift

T ∈ L(F^∞,F^∞)
is defined by

T(x_1, x_2, x_3,...)
= (x_2, x_3,...)

from F^n to F^m

T ∈ L(F^n,F^m) is defined by

T (x_1,...,x_n) = (a_1,1x_1+
···+a_1,nx_n,...,a_m,1x_1+
···+a_m,nx_n)

(v_1,...,v_n) is a basis of V and
T:V → W is linear, v ∈ V

v = a_1v_1+···+a_nv_n

since T is linear,
Tv = a_1Tv_1+···+a_nTv_n

given a basis (v_1,...,v_n) of V
and w_1,...,w_n ∈ W such
that Tv_j = w_j for j=1,...,n

define T: V→W by
T(a_1v_1+···+a_nv_n)
= a_1w_1+···+a_nw_n

L(V, W) into a vector space

S,T ∈ L(V, W)
S +T ∈ L(V, W)
(S+T)v = Sv +T v

(aT)v = a(Tv)

U is a vector space over F
T ∈ L(U,V); S ∈ L(V,W) then

ST ∈ L(U,W) is equal to
(ST)(v) = S(Tv) for v ∈ U

when S and T are both linear
maps, the composition S ∘ T
is written as just ST; ST is
the product of S and T

properties

associativity

(T_1T_2)T_3= T_1(T_2T_3)
T_1,T_2, and T_3 are linear
maps such that T_3 maps into
the domain of T_2, and T_2 maps
into the domain of T_1

identity

TI = T and IT = T
T ∈ L(V, W)
where the first I
is the identity map
on V and the second
I is the identity map
on W

distributive

(S_1 + S_2)T = S_1T + S_2T
S(T_1+ T_2) = ST_1 + ST_2
where T,T_1,T_2∈ L(U,V)
and S,S_1,S_2 ∈ L(V, W)

multiplication of linear
maps is not commutative

null spaces and
ranges

T ∈ L(V,W), the null space of T,
or nullT, is the subset of V
consisting of those vectors that
T maps to 0

nullT = {v ∈ V:Tv = 0}

for the differentiation map,
the null space consists precisely
of the constant functions

If T ∈ L(V,W), then nullT is
a subspace of V

linear map T:V to W is called
injective if whenever u,v ∈ V
and Tu = Tv, we have u = v

if T ∈ L(V, W ) then T
is injective if and only
if nullT = {0}

T ∈ L(V, W), then rangeT is
a subspace of W

rangeT = {Tv : v ∈ V}

T ∈ L(V, W), the range of T,
denoted range T, is the subset
of W consisting of those vectors
that are of the form T v for
some v ∈ V

linear map T:V→W is
called surjective if its
range equals W

If V is finite dimensional
and T ∈ L(V, W )

then rangeT is a finite-
dimensional subspace
of W and
dimV =
dim nullT + dim rangeT
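
a numeric check of this formula, computing a null-space basis independently from the SVD (the matrix is an illustrative choice):

import numpy as np

A = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0]])     # a linear map T: R^3 -> R^2

rank = np.linalg.matrix_rank(A)     # dim range T
_, _, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]              # rows spanning null T (from the SVD)

assert np.allclose(A @ null_basis.T, 0)
assert A.shape[1] == len(null_basis) + rank   # dim V = dim null T + dim range T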

If V and W are finite-dimensional
vector spaces such that
dimV > dimW, then no linear map
from V to W is injective

dim nullT = dimV −dim rangeT
≥ dimV −dimW
> 0

If V and W are finite-dimensional
vector spaces such that
dimV < dimW, then no linear map
from V to W is surjective

dim rangeT = dimV −dim nullT
≤ dimV
< dimW

Homogeneous, in this
context, means that the
constant term on the
right side of each
equation equals 0

matrix of a
linear map

an m-by-n matrix is a
rectangular array with
m rows and n columns

T ∈ L(V, W)

Suppose that (v_1,...,v_n) is a
basis of V and(w_1,...,w_m) is
a basis of W, for each k=1,...,n,
we can write Tv_k uniquely as a
linear combination of the w’s

Tv_k= a_1,kw_1+···+a_m,kw_m

a_j,k ∈ F for j = 1,...,m

the scalars a_j,k completely
determine the linear map T
because a linear map is
determined by its values on
a basis

matrix formed by these
scalars is called the
matrix of T with respect
to the bases (v_1,...,v_n)
and (w_1,...,w_m)

M(T,(v_1,...,v_n),(w_1,...,w_m))

The kth column of M(T) consists
of the scalars needed to write
Tv_k as a linear combination of
the w’s

Tv_k is retrieved from the
matrix M(T) by multiplying
each entry in the kth column
by the corresponding w from
the left column, and then
adding up the resulting
vectors

unless stated otherwise
the bases in a linear
map from F^n to F^m
are the standard ones

if you think of elements of F^m
as columns of m numbers, then
you can think of the kth column
of M(T) as T applied to the kth
standard basis vector

T(x,y)=(x+3y,2x+5y,7x+9y)

|1 3|
|2 5|
|7 9|
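
building that matrix column by column, and checking M(Tv) = M(T)M(v) (a NumPy sketch of the example above):

import numpy as np

def T(x, y):
    # the example map T(x, y) = (x + 3y, 2x + 5y, 7x + 9y)
    return np.array([x + 3*y, 2*x + 5*y, 7*x + 9*y])

# kth column of M(T) = T applied to the kth standard basis vector
M = np.column_stack([T(1.0, 0.0), T(0.0, 1.0)])
print(M)                            # [[1. 3.] [2. 5.] [7. 9.]]

v = np.array([2.0, -1.0])
assert np.allclose(M @ v, T(*v))    # M(Tv) = M(T) M(v)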

matrix functions

matrix addition

M(T+S)=M(T)+ M(S)

|a_1,1 ... a_1,n|   |b_1,1 ... b_1,n|
| ...   ...  ... | + | ...   ...  ... |
|a_m,1 ... a_m,n|   |b_m,1 ... b_m,n|
=
|a_1,1+b_1,1 ... a_1,n+b_1,n|
|    ...      ...      ...    |
|a_m,1+b_m,1 ... a_m,n+b_m,n|

scalar multiplication

M(cT) = cM(T)

  |a_1,1 ... a_1,n|
c | ...   ...  ... |
  |a_m,1 ... a_m,n|
=
|ca_1,1 ... ca_1,n|
|  ...   ...   ...  |
|ca_m,1 ... ca_m,n|

matrix multiplication

M(TS) = M(T)M(S)

(v_1,...,v_n) is a basis of V
if v ∈ V, then there exist
unique scalars b_1,...,b_n

v = b_1v_1+···+b_nv_n

matrix of v, denoted M(v)

       |b_1|
M(v) = |...|
       |b_n|

Suppose T ∈ L(V, W) and
(v_1,...,v_n) is a basis
of V and (w_1,...,w_m) is
a basis of W

M(Tv) = M(T)M(v)

invertibility

linear map T ∈ L(V,W) is invertible
if there exists a linear map
S ∈ L(W,V) such that ST equals
the identity map on V and TS
equals the identity map on W

inverse

linear map S ∈ L(W,V)
satisfying ST=I and
TS=I

if S and S′ are both
inverses of T, then
S = SI
= S(TS′)
= (ST)S′
= IS′
= S′

A linear map is invertible
if and only if it is injective
and surjective

Two vector spaces are called
isomorphic if there is an
invertible linear map from
one vector space onto the
other one

Two finite-dimensional vector
spaces are isomorphic if and
only if they have the same
dimension

Suppose that (v_1,...,v_n) is a
basis of V and (w_1,...,w_m) is
a basis of W. Then M is an
invertible linear map between
L(V,W) and Mat(m, n, F)

If V and W are finite
dimensional, then L(V,W)
is finite dimensional and
dimL(V,W) = (dimV)(dimW)

suppose V is finite dimensional;
if T ∈ L(V), then the following
are equivalent

T is invertible

T is injective

T is surjective

Chapter 4:
Polynomials

fundamental theorem
of algebra

every nonconstant polynomial
with complex coefficients has
a root

If p ∈ P(C) is a nonconstant
polynomial, then p has a
unique factorization of the
form

p(z) = c(z −λ_1)...(z −λ_m)

Chapter 5:
Eigenvalues and
Eigenvectors

invariant subspaces

Suppose T ∈ L(V) and
V = U_1⊕···⊕U_m

where each U_j is a
proper subspace of V

invariant

a subspace that
gets mapped into
itself

T ∈ L(V), U a subspace
of V; u ∈ U implies
Tu ∈ U

dim 1 invariant
subspace

U = {au : a ∈ F}

eigenvalues

a scalar λ ∈ F such that
Tu = λu for some nonzero
vector u ∈ V

eigenvectors

suppose T ∈ L(V) and λ ∈ F is an
eigenvalue of T; a vector
u ∈ V is called an eigenvector of
T (corresponding to λ) if Tu = λu

if a ∈ F, then aI has only one
eigenvalue, namely, a, and
every vector is an eigenvector
for this eigenvalue

T ∈ L(F^2)

T(w,z) = (−z,w)

T(w,z) = λ(w,z)

−z = λw, w = λz

−z = λ^2z

−1 = λ^2
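
so T has no eigenvalues over R, but over C it has ±i; a NumPy check of the same operator (matrix taken in the standard basis):

import numpy as np

# matrix of T(w, z) = (-z, w) with respect to the standard basis
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

print(np.linalg.eigvals(A))   # [0.+1.j 0.-1.j]: eigenvalues i and -i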

let T ∈ L(V ) suppose λ_1,...,λ_m
are distinct eigenvalues of T and
v_1,...,v_m are corresponding
nonzero eigenvectors then
(v_1,...,v_m) is linearly
independent

suppose v_k ∈ span(v_1,...,v_k−1), so

v_k = a_1v_1+···
+a_k−1v_k−1

applying T gives

λ_kv_k =
a_1λ_1v_1+···
+a_k−1λ_k−1v_k−1

subtracting λ_k times the
first equation,

0 = a_1(λ_k−λ_1)v_1+···
+a_k−1(λ_k−λ_k−1)v_k−1

each operator on V has
at most dimV distinct
eigenvalues

polynomials applied
to operators

T^m = T···T
(T applied m times)

T^mT^n = T^(m+n)
(T^m)^n = T^(mn)

T ∈ L(V) and p ∈ P(F)

p(z) = a_0+a_1z +
a_2z^2+···+a_mz^m

for z ∈ F, then p(T) is the operator

p(T) = a_0I +a_1T+
a_2T^2+···+a_mT^m

p and q are polynomials with
coefficients in F, then pq is the
polynomial defined by

(pq)(z) = p(z)q(z)

T ∈ L(V)

(pq)(T) = p(T)q(T)

p(T)q(T) =
(pq)(T) =
(qp)(T) =
q(T)p(T)
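
a numeric illustration that polynomials in the same operator commute (sketch; the matrix and coefficients are arbitrary examples):

import numpy as np

def poly_of_operator(coeffs, T):
    # p(T) = a_0 I + a_1 T + ... + a_m T^m, coeffs = [a_0, ..., a_m]
    result = np.zeros_like(T)
    power = np.eye(T.shape[0])
    for a in coeffs:
        result = result + a * power
        power = power @ T
    return result

T = np.array([[1.0, 2.0],
              [0.0, 3.0]])
p = [2.0, -1.0, 1.0]    # p(z) = 2 - z + z^2
q = [0.0, 1.0, 1.0]     # q(z) = z + z^2

P, Q = poly_of_operator(p, T), poly_of_operator(q, T)
assert np.allclose(P @ Q, Q @ P)    # p(T)q(T) = q(T)p(T)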

upper triangular
matrix

every operator on a
finite-dimensional,
nonzero, complex
vector space has
an eigenvalue

matrix of T with respect
to the basis (v_1,...,v_n)

|a_1,1...a_1,n|
|... ...|
|a_n,1...a_n,n|

denote it by
M(T, (v_1,...,v_n)), or just by
M(T) if the basis (v_1,...,v_n)
is clear from the context

use ∗ to denote matrix
entries that we do not
know about or that are
irrelevant

The diagonal of a square matrix
consists of the entries along the
straight line from the upper left
corner to the bottom right corner

upper triangular if all the
entries below the diagonal
equal 0

|1 2 3 4|
|0 2 3 4|
|0 0 3 4|
|0 0 0 4|

|λ        *|
|  λ       |
|    ...    |
|0        λ|

Suppose T ∈ L(V) and
(v_1,...,v_n) is a basis of V;
then the following are equivalent

the matrix of T with respect
to (v_1,...,v_n) is upper
triangular

Tv_k ∈ span(v_1,...,v_k)
for each k = 1,...,n

span(v_1,...,v_k) is invariant
under T for each k = 1,...,n

Suppose V is a complex
vector space and T ∈ L(V)
then T has an upper-triangular
matrix with respect to some
basis of V

Tu_j= (T|U)(u_j) ∈ span(u_1,...,u_j)

T|U has an
upper-triangular
matrix

Tv_k ∈ span(u_1,...,u_m,
v_1,...,v_k)

suppose T ∈ L(V) has an upper
triangular matrix with respect
to some basis of V; then T is
invertible if and only if all the
entries on the diagonal of that
upper triangular matrix are
nonzero

Suppose T ∈ L(V) has an upper
triangular matrix with respect to
some basis of V; then the
eigenvalues of T consist precisely
of the entries on the diagonal of
that upper-triangular matrix

diagonal matrix

diagonal matrix is a square
matrix that is 0 everywhere
except possibly along the
diagonal

|1 0 0|
|0 2 0|
|0 0 3|

example with no diagonal matrix:
T(w, z) = (z, 0) has 0 as its only
eigenvalue, and its eigenvectors
do not span F^2

if T ∈ L(V) has dimV distinct
eigenvalues, then T has a
diagonal matrix with respect
to some basis of V
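
a numeric illustration of this result (NumPy sketch; the matrix is an arbitrary example with distinct eigenvalues):

import numpy as np

A = np.array([[2.0, 1.0],
              [0.0, 5.0]])             # distinct eigenvalues 2 and 5

vals, P = np.linalg.eig(A)             # columns of P: eigenvectors
D = np.linalg.inv(P) @ A @ P           # matrix of T in the eigenvector basis
assert np.allclose(D, np.diag(vals))   # diagonal, as the result predicts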

Suppose T ∈ L(V); let λ_1,...,λ_m
denote the distinct eigenvalues of T;
then the following are equivalent

T has a diagonal matrix with
respect to some basis of V

V has a basis consisting
of eigenvectors of T

there exist one-dimensional
subspaces U1,...,Un of V,
each invariant under T, such
that

V = U_1⊕···⊕U_n

V = null(T −λ_1I)⊕···
⊕null(T −λ_mI)

dimV = dim null(T −λ_1I)+···
+dim null(T−λ_mI)

(in the proof, the sum
V = null(T −λ_1I)+···
+null(T −λ_mI) together
with the dimension equality
forces the sum to be direct)

invariant subspaces
on real vector spaces

Every operator on a
finite-dimensional,
nonzero, real vector
space has an invariant
subspace of dimension
1 or 2

Every operator on
an odd-dimensional
real vector space
has an eigenvalue

Chapter 6:
Inner Product
Spaces

inner products

The length of a vector x
in R^2 or R^3 is called
the norm of x, denoted
||x||

x = (x_1, x_2) ∈ R^2, we have
||x|| = sqrt(x_1^2+x_2^2)

the norm is not
linear on R^n

to bring linearity into
the discussion, we use
the dot product

for x, y ∈ R^n,
the dot product
of x and y is
denoted x · y

x ·y = x_1y_1+···+x_ny_n

if λ = a + bi, where
a, b ∈ R, then the
absolute value
of λ is defined by

|λ| = sqrt(a^2+b^2)

complex conjugate

λ^bar = a−bi

|λ|^2 = λλ^bar

for z = (z_1,...,z_n) ∈ C^n,
we define the norm of z by
||z|| =
sqrt(|z_1|^2+···+|z_n|^2)

||z||^2= z_1z_1^bar+···
+z_nz_n^bar

an inner product on V is a function
that takes each ordered pair
(u,v) of elements of V to a
number <u,v> ∈ F

properties

positivity

<v,v> ≥ 0 for all
v ∈ V

definiteness

<v,v> = 0 if and
only if v = 0

additivity in first slot

<u+v,w> = <u,w>+
<v,w> for all
u,v,w ∈ V

homogeneity in first slot

<av,w> = a<v,w> for all
a ∈ F and all v,w ∈ V

conjugate symmetry

<v,w> = <w,v>^bar
for all v,w ∈ V

inner-product space is a
vector space V along with
an inner product on V

<(w_1,...,w_n), (z_1,...,z_n)>
= w_1z_1^bar+···+w_nz_n^bar
(the conjugates can be dropped
when F = R)

<p,q> = integral from
0 to 1 p(x)q(x) dx

norms

the norm of v, denoted ||v||

||v|| = sqrt(<v,v>)

two vectors u, v ∈ V
are said to be orthogonal
if <u,v> = 0

Pythagorean Theorem:
if u,v are orthogonal
vectors in V

||u+v||^2 =
||u||^2+ ||v||^2

Cauchy-Schwarz Inequality:
if u, v ∈ V, then

|<u,v>| ≤ ||u|| ||v||

Triangle Inequality:
if u, v ∈ V, then

||u+v|| ≤ ||u||+||v||

equality |<u,v>| = ||u|| ||v||
holds if and only if one of
u, v is a scalar multiple
of the other

Parallelogram Equality:
if u, v ∈ V, then

||u+v||^2+ ||u−v||^2 =
2(||u||^2+ ||v||^2)
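
a quick numeric check of these three facts (NumPy sketch with illustrative vectors):

import numpy as np

u = np.array([1.0, -2.0, 3.0])
v = np.array([4.0, 0.0, -1.0])
norm = np.linalg.norm

assert abs(u @ v) <= norm(u) * norm(v)            # Cauchy-Schwarz
assert norm(u + v) <= norm(u) + norm(v)           # triangle inequality
assert np.isclose(norm(u + v)**2 + norm(u - v)**2,
                  2 * (norm(u)**2 + norm(v)**2))  # parallelogram equality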

orthonormal bases

list of vectors is called
orthonormal if the
vectors in it are pairwise
orthogonal and each
vector has norm 1

a list (e_1,...,e_m) of vectors
in V is orthonormal if
<e_j,e_k> equals 0 when
j ≠ k and equals 1 when j = k

every orthonormal list
of vectors is linearly
independent

if (e_1,...,e_m) is an
orthonormal list of
vectors in V, then

||a_1e_1+···+a_me_m||^2=
|a_1|^2+···+|a_m|^2

orthonormal basis of V is
an orthonormal list of
vectors in V that is also a
basis of V

suppose (e_1,...,e_n) is an
orthonormal basis of V

v = <v,e_1>e_1+···
+<v,e_n>e_n

||v||^2= |<v,e_1>|^2+ ···
+ |<v,e_n>|^2

Gram-Schmidt:
if (v_1,...,v_m) is a
linearly independent
list of vectors in V,
then there exists an
orthonormal list
(e_1,...,e_m) of
vectors in V such
that

span(v_1,...,v_j) =
span(e_1,...,e_j)
for each j = 1,...,m
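
a direct implementation of the procedure (real case; for C^n the inner product would need complex conjugation):

import numpy as np

def gram_schmidt(vectors):
    # orthonormalize a linearly independent list of real vectors
    es = []
    for v in vectors:
        w = v - sum((v @ e) * e for e in es)  # remove components along earlier e's
        es.append(w / np.linalg.norm(w))
    return es

vs = [np.array([1.0, 1.0, 0.0]), np.array([1.0, 0.0, 1.0])]
e1, e2 = gram_schmidt(vs)
assert np.isclose(e1 @ e2, 0)                 # orthogonal
assert np.isclose(np.linalg.norm(e1), 1)      # unit norm
assert np.isclose(np.linalg.norm(e2), 1)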

every finite-dimensional
inner-product space has
an orthonormal basis

every orthonormal list
of vectors in V can be
extended to an
orthonormal basis of V

extend (e_1,...,e_m) to an
orthonormal basis
(e_1,...,e_m, f_1,...,f_n)

suppose T ∈ L(V) if T has
an upper-triangular matrix
with respect to some basis
of V, then T has an upper
triangular matrix with
respect to some
orthonormal basis of V

suppose V is a complex
vector space and T ∈ L(V)
then T has an upper
triangular matrix with
respect to some
orthonormal basis of V

orthogonal projections
and minimization
problem

if U is a subset of V,
then the orthogonal
complement of U,
denoted U^⊥

U^⊥ = {v ∈ V : <v,u> = 0
for all u ∈ U}

if U is a subspace
of V, then V =
U ⊕U^⊥

V = U + U^⊥

U ∩ U^⊥ = {0}

if U is a subspace of V,
then U = (U^⊥)^⊥

U ⊂ (U^⊥)^⊥

orthogonal projection

the decomposition V = U⊕U^⊥
means that each vector v ∈ V
can be written uniquely in the
form v = u+w where u ∈ U
and w ∈ U^⊥

the map taking v to the vector
u above is called the orthogonal
projection of V onto U,
denoted P_U

properties

rangeP_U= U

nullP_U= U^⊥

v−P_Uv ∈ U^⊥
for every v ∈ V

P_U^2= P_U

||P_U v|| ≤ ||v||
for every v ∈ V

P_Uv = <v,e_1>e_1+···+<v,e_m>e_m
for any orthonormal basis
(e_1,...,e_m) of U

suppose U is a subspace of V
and v ∈ V; then
||v −P_Uv|| ≤ ||v −u||
for every u ∈ U

V = U ⊕U^⊥
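
a numeric sketch of the projection formula and the minimization property (U spanned by an illustrative orthonormal pair in R^3):

import numpy as np

e1 = np.array([1.0, 0.0, 0.0])
e2 = np.array([0.0, 1.0, 0.0])
v = np.array([3.0, -1.0, 4.0])

Pv = (v @ e1) * e1 + (v @ e2) * e2     # P_U v = <v,e_1>e_1 + <v,e_2>e_2
assert np.allclose(Pv, [3.0, -1.0, 0.0])

# P_U v is the closest point of U to v
rng = np.random.default_rng(0)
for _ in range(100):
    u = rng.normal(size=2) @ np.array([e1, e2])   # random element of U
    assert np.linalg.norm(v - Pv) <= np.linalg.norm(v - u) + 1e-12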

linear functionals
and adjoints

linear functional on V is
a linear map from V to
the scalars F

suppose ϕ is a linear
functional on V; then
there is a unique vector
v ∈ V such that
ϕ(u) = <u,v> for every u ∈ V

the adjoint of T, denoted T^∗,
is the function from W to V
defined by

<Tv,w> = <v,T^∗w>

properties

additivity

(S+T)^∗ = S^∗+T^∗
for all S,T ∈ L(V,W)

conjugate homogeneity

(aT)^∗ = a^barT^∗
for all a ∈ F and
T ∈ L(V, W )

adjoint of adjoint

(T^∗)^∗ = T for all
T ∈ L(V, W )

identity

I^∗ = I, where
I is the identity
operator on V

products

(ST)^∗ = T^∗S^∗
for all T ∈ L(V,W)
and S ∈ L(W, U)

suppose T ∈ L(V, W )

nullT^∗ = (rangeT)^⊥

rangeT^∗ = (nullT)^⊥

nullT = (rangeT^∗)^⊥

rangeT = (nullT^∗)^⊥

conjugate transpose of
an m-by-n matrix is the
n-by-m matrix obtained
by interchanging the
rows and columns and
then taking the complex
conjugate of each entry

Suppose T ∈ L(V,W) if
(e_1,...,e_n) is an
orthonormal basis of V
and (f_1,...,f_m) is an
orthonormal basis of W

then M(T^∗, (f_1,...,f_m), (e_1,...,_en))

is the conjugate transpose of
M(T,(e_1,...,e_n), (f_1,...,f_m))
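
a numeric check that the conjugate transpose really satisfies the defining equation of the adjoint (sketch; random illustrative data):

import numpy as np

rng = np.random.default_rng(1)
T = rng.normal(size=(3, 2)) + 1j * rng.normal(size=(3, 2))   # T: C^2 -> C^3
T_star = T.conj().T                                          # conjugate transpose

v = rng.normal(size=2) + 1j * rng.normal(size=2)
w = rng.normal(size=3) + 1j * rng.normal(size=3)

inner = lambda x, y: x @ y.conj()       # <x, y> = sum_j x_j * conj(y_j)
assert np.isclose(inner(T @ v, w), inner(v, T_star @ w))   # <Tv,w> = <v,T*w>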

Chapter 7:
Operators on
Inner Product
Spaces

self adjoint and
normal operators

an operator T ∈ L(V )
is called self adjoint if
T = T^∗

every eigenvalue
of a self adjoint
operator is real

if V is a complex inner
product space and T is
an operator on V such
that <Tv,v> = 0 for all
v ∈ V, then T = 0

let V be a complex inner
product space and let
T ∈ L(V); then T is self
adjoint if and only if
<Tv,v> ∈ R for every v ∈ V

if T is a self adjoint
operator on V such
that <Tv,v> = 0 for all
v ∈ V, then T = 0

operator on an inner
product space is called
normal if it commutes
with its adjoint

T ∈ L(V ) is normal if
TT^∗ = T^∗T

an operator T ∈ L(V) is
normal if and only if
||Tv|| = ||T^∗v|| for all v ∈ V

suppose T ∈ L(V) is normal
if v ∈ V is an eigenvector
of T with eigenvalue λ ∈ F,
then v is also an eigenvector
of T^∗ with eigenvalue λ^bar

If T ∈ L(V) is normal,
then eigenvectors of
T corresponding to
distinct eigenvalues
are orthogonal

the spectral
theorem

complex spectral
theorem

suppose that V is a complex
inner product space and
T ∈ L(V) then V has an
orthonormal basis consisting
of eigenvectors of T if and
only if T is normal

M(T,(e_1,...,e_n)) =

|a_1,1 ... a_1,n|
|      ...      |
|0        a_n,n|

||Te_1||^2 = |a_1,1|^2

||T^∗e_1||^2 =
|a_1,1|^2 + |a_1,2|^2
+···+ |a_1,n|^2

suppose T ∈ L(V) is
self-adjoint; if α,β ∈ R
are such that α^2 < 4β,
then T^2+αT+βI is
invertible

suppose T ∈ L(V )
is self-adjoint then
T has an eigenvalue

real spectral
theorem

Suppose that V is a
real inner-product
space and T ∈ L(V)
then V has an
orthonormal basis
consisting of
eigenvectors of T if
and only if T is self
adjoint
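
a numeric illustration via numpy.linalg.eigh, which diagonalizes a symmetric matrix with an orthonormal eigenvector basis (the matrix is an illustrative choice):

import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # self-adjoint (symmetric) operator on R^2

vals, Q = np.linalg.eigh(A)      # columns of Q: orthonormal eigenvectors
assert np.allclose(Q.T @ Q, np.eye(2))         # the basis is orthonormal
assert np.allclose(A @ Q, Q @ np.diag(vals))   # each column is an eigenvector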

suppose that T ∈ L(V) is
self adjoint (or that F = C
and that T ∈ L(V) is normal);
let λ_1,...,λ_m denote
the distinct eigenvalues of T

then V = null(T −λ_1I)
⊕···⊕null(T −λ_mI)

each vector in each null(T −λ_jI)
is orthogonal to all vectors in the
other subspaces of this decomposition

normal operators
on real inner
product spaces

suppose V is a two dimensional
real inner product space and
T ∈ L(V ) then the following
are equivalent

T is normal but
not self adjoint

the matrix of T with
respect to every
orthonormal basis of
V has the form

|a -b|
|b a|

b ≠ 0

the matrix of T with
respect to some
orthonormal basis of
V has the form

|a -b|
|b a|

b > 0

M(T, (e_1, e_2)) =

|a c|
|b d|

||Te_1||^2 = a^2+ b^2
||T^∗e_1||^2= a^2+ c^2

since T is normal,
||Te_1|| = ||T^∗e_1||,
so b^2 = c^2

suppose T ∈ L(V) is
normal and U is a
subspace of V that
is invariant under T

U^⊥ is invariant
under T

U is invariant
under T^*

(T|_U)^∗ =
(T^∗)|_U

T|_U is a normal
operator on U

T|_U^⊥ is a normal
operator on U^⊥

A block diagonal matrix
is a square matrix of
the form

|A_1      0|
|    ...    |
|0      A_m|

A_1,...,A_m are square
matrices lying along the
diagonal and all the
other entries of the
matrix equal 0

suppose that V is a real inner
product space and T ∈ L(V)
then T is normal if and only
if there is an orthonormal
basis of V with respect to
which T has a block diagonal
matrix where each block is a
1-by-1 matrix or a 2-by-2
matrix of the form

|a -b|
|b a| with b > 0

positive
operators

an operator T ∈ L(V)
is called positive if
T is self adjoint and
<Tv,v> ≥ 0 for all v ∈ V

Let T ∈ L(V) then
the following are
equivalent

T is
positive

T is self adjoint and
all the eigenvalues
of T are nonnegative

T has a positive
square root

T has a self
adjoint
square root

there exists an
operator S ∈ L(V )
such that T = S^∗S

every positive
operator on V
has a unique
positive square
root

an operator S is called
a square root of an
operator T if S^2 = T
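
a sketch constructing the positive square root from the spectral decomposition (real symmetric case; positive_sqrt is an illustrative helper name, not from the notes):

import numpy as np

def positive_sqrt(T):
    # positive square root of a positive operator, via eigh
    vals, Q = np.linalg.eigh(T)
    return Q @ np.diag(np.sqrt(vals)) @ Q.T

T = np.array([[2.0, 1.0],
              [1.0, 2.0]])       # positive: symmetric with eigenvalues 1, 3
S = positive_sqrt(T)
assert np.allclose(S @ S, T)                     # S^2 = T
assert np.allclose(S, S.T)                       # S is self-adjoint
assert all(np.linalg.eigvalsh(S) >= 0)           # and positive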

V = null(T −λ_1I)⊕···
⊕null(T −λ_mI)

if Sv = αv, then
Tv = S^2v
= α^2v

isometries

an operator S ∈ L(V)
is called an isometry if
||Sv|| = ||v|| for all v ∈ V

v = <v,e_1>e_1+···
+<v,e_n>e_n

||v||^2= |<v,e_1>|^2+ ···
+ |<v,e_n>|^2

||Sv||^2= |<v,e_1>|^2 +
...+ |<v,e_n>|^2

suppose S ∈ L(V)
then the following
are equivalent

S is an
isometry

<Su,Sv> = <u,v>
for all u, v ∈ V

S^∗S
= I

(Se_1,...,Se_n) is orthonormal
whenever (e_1,...,e_n) is an
orthonormal list of vectors in V

there exists an orthonormal
basis (e_1,...,e_n) of V such
that (Se_1,...,Se_n) is
orthonormal

S^∗ is an
isometry

<S^∗u, S^∗v> = <u,v>
for all u, v ∈ V

SS^∗ = I

(S^∗e_1,...,S^∗e_n) is orthonormal
whenever (e_1,...,e_n) is an
orthonormal list of vectors in V

there exists an orthonormal
basis (e_1,...,e_n) of V such
that (S^∗e_1,...,S^∗e_n) is
orthonormal

Suppose V is a complex inner-product space and
S ∈ L(V ). Then S is an isometry if and only if there is an orthonormal
basis of V consisting of eigenvectors of S all of whose corresponding
eigenvalues have absolute value 1
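
a closing numeric sketch: a rotation of R^2 is an isometry, and (viewed as a normal operator on C^2) its eigenvalues have absolute value 1 (values are illustrative):

import numpy as np

theta = 0.7
S = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation: an isometry

assert np.allclose(S.T @ S, np.eye(2))            # S*S = I
v = np.array([3.0, -4.0])
assert np.isclose(np.linalg.norm(S @ v), np.linalg.norm(v))   # ||Sv|| = ||v||
assert np.allclose(np.abs(np.linalg.eigvals(S)), 1.0)  # |eigenvalues| = 1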