Linear Algebra
Vector Spaces—The Basic Concepts
2.1 Concept of a Vector Space
Exercises
Exercise 2.1.1 Let V be an abstract vector space over a field 𝔽. Denote by 0 and 1 the identity elements with respect to addition and multiplication of scalars, respectively. Let −1 ∈ 𝔽 be the element∗ opposite to 1 (with respect to scalar addition). Prove the identities
(i) 0 = 0x, ∀x ∈ V
(ii) −x = (−1)x, ∀x ∈ V
where 0 ∈ V is the zero vector, i.e., the identity element with respect to vector addition, and −x denotes the opposite vector to x.
(i) Let x be an arbitrary vector. By the axioms of a vector space, we have

x + 0x = 1x + 0x = (1 + 0)x = 1x = x

Adding to both sides the inverse element −x, we obtain that 0 + 0x = 0x = 0.
(ii) Using the first result, we obtain

x + (−1)x = (1 + (−1))x = 0x = 0

so, by the uniqueness of the opposite vector, (−1)x = −x.
∗ It is unique.
Exercise 2.1.2 Let ℂ denote the field of complex numbers. Prove that ℂⁿ satisfies the axioms of a vector space with operations analogous to those in ℝⁿ, i.e., componentwise addition and componentwise multiplication by a scalar.
This is really a trivial exercise. One by one, one has to verify the axioms. For instance, the associative law for vector addition is a direct consequence of the definition of the vector addition and the associative law for the addition of complex numbers, etc.
Exercise 2.1.3 Prove Euler's theorem on rigid rotations. Consider a rigid body fixed at a point A in an initial configuration Ω. Suppose the body is carried from the configuration Ω to a new configuration Ω1 by a rotation about an axis l1 and, next, from Ω1 to a configuration Ω2 by a rotation about another axis l2.
Show that there exists a unique axis l and a corresponding rotation carrying the rigid body from the initial configuration Ω to the final one, Ω2, directly. Consult any textbook on rigid body dynamics, if necessary.
This seemingly obvious result is far from trivial. We offer a proof based on the use of matrices, and you may want to postpone studying the solution until after Section 2.7 or even later.
A matrix A = [Aij] is called orthonormal if its transpose coincides with its inverse, i.e.,

AᵀA = AAᵀ = I

or, in terms of its components,

Σ_k A_ki A_kj = δij,  Σ_k A_ik A_jk = δij    (2.1)
The Cauchy theorem for determinants implies that

det A det Aᵀ = det²A = det I = 1

Consequently, for an orthonormal matrix A, det A = ±1. It is easy to check that orthonormal matrices form a (noncommutative) group. Cauchy's theorem implies that orthonormal matrices with det A = 1 constitute a subgroup of this group.
We shall show now that, for n = 2, 3, orthonormal matrices with det A = 1 represent (rigid body) rotations.
Case: n = 2. The matrix representation of a rotation by angle θ has the form

A = [ cos θ  −sin θ ]
    [ sin θ   cos θ ]
and it is easy to see that it is an orthonormal matrix with unit determinant. Conversely, let aij satisfy conditions (2.1). The identities

(a11)² + (a21)² = 1,  (a12)² + (a22)² = 1

imply that there exist angles θ1, θ2 ∈ [0, 2π) such that

a11 = cos θ1,  a21 = sin θ1  and  a22 = cos θ2,  a12 = −sin θ2

But the orthogonality of the columns,

a11 a12 + a21 a22 = −cos θ1 sin θ2 + sin θ1 cos θ2 = sin(θ1 − θ2) = 0

and the determinant condition,

det A = a11 a22 − a12 a21 = cos(θ1 − θ2) = 1

The two equations admit only one solution: θ1 = θ2, so A is the rotation matrix by the angle θ = θ1 = θ2.
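The n = 2 argument is easy to spot-check numerically. The sketch below (sample angle θ = 0.7, chosen arbitrarily, not from the text) verifies orthonormality and the unit determinant of the rotation matrix, and that the two angles recovered from its columns coincide:

```python
import math

# Numerical spot-check of the n = 2 case (sample angle theta = 0.7): the
# rotation matrix is orthonormal with det A = 1, and both of its columns
# determine the same angle.
theta = 0.7
A = [[math.cos(theta), -math.sin(theta)],
     [math.sin(theta),  math.cos(theta)]]

# Components of A^T A, which should be the identity:
AtA = [[sum(A[k][i] * A[k][j] for k in range(2)) for j in range(2)]
       for i in range(2)]
det = A[0][0] * A[1][1] - A[0][1] * A[1][0]

# Angles theta1 (from the first column) and theta2 (from the second column):
theta1 = math.atan2(A[1][0], A[0][0])
theta2 = math.atan2(-A[0][1], A[1][1])

print(all(abs(AtA[i][j] - (i == j)) < 1e-12 for i in range(2) for j in range(2)),
      abs(det - 1.0) < 1e-12,
      abs(theta1 - theta2) < 1e-12)  # True True True
```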
Case: n = 3. A linear transformation represented by an orthonormal matrix preserves the length of vectors (it is an isometry). Indeed,

|Ax|² = (Ax)ᵀ(Ax) = xᵀAᵀAx = xᵀx = |x|²

Consequently, the transformation maps the unit ball into itself. By the Schauder Fixed Point Theorem (a heavy but very convenient argument), there exists a vector x that is mapped into itself. Selecting a system of coordinates in such a way that vector x coincides with the third axis, we deduce that A has the following representation

A = [ a11  a12  0 ]
    [ a21  a22  0 ]
    [ a31  a32  1 ]

Orthogonality conditions (2.1) imply that a31 = a32 = 0 and that aij, i, j = 1, 2, is a two-dimensional orthonormal matrix with unit determinant. Consequently, the transformation represents a rotation about the axis spanned by the vector x.
Exercise 2.1.4 Construct an example showing that the sum of two finite rotation "vectors" does not need to lie in a plane generated by those vectors.
Use your textbook to verify that the composition of rotations represented by "vectors" (π, 0, 0) and (0, π, 0) is represented by the "vector" (0, 0, π).
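The composition can be checked by multiplying rotation matrices; the sketch below (standard rotation matrices about the coordinate axes, applied in the stated order) confirms that Ry(π)Rx(π) = Rz(π):

```python
import math

# Numerical check of the example: rotation by pi about the x-axis followed by
# rotation by pi about the y-axis equals rotation by pi about the z-axis.
def rot_x(t):
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(3)) for j in range(3)]
            for i in range(3)]

# The second rotation acts on the result of the first one: R = Ry(pi) Rx(pi).
R = matmul(rot_y(math.pi), rot_x(math.pi))
Rz = rot_z(math.pi)
print(all(abs(R[i][j] - Rz[i][j]) < 1e-12 for i in range(3) for j in range(3)))  # True
```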
Exercise 2.1.5 Let P_k(Ω) denote the set of all real- or complex-valued polynomials defined on a set Ω ⊂ ℝⁿ (ℂⁿ) with degree less than or equal to k. Show that P_k(Ω) with the standard operations for functions is a vector space.
It is sufficient only to show that the set is closed with respect to the vector space operations. But this is immediate: the sum of two polynomials of degree ≤ k is a polynomial of degree ≤ k and, upon multiplying a polynomial from P_k(Ω) by a number, we obtain a polynomial from P_k(Ω) as well.
Exercise 2.1.6 Let G_k(Ω) denote the set of all polynomials of order greater than or equal to k. Is G_k(Ω) a vector space? Why?
SOLUTION MANUAL
No, it is not. The set is not closed with respect to function addition. Adding polynomial f(x) and −f(x), we obtain the zero function, i.e., a polynomial of zero degree, which is outside of the set.
Exercise 2.1.7 The extension f1 in the definition of a function f from class C^k(Ω̄) need not be unique. The boundary values of f1, however, do not depend upon a particular extension. Explain why.
By definition, function f1 is continuous in the larger set Ω1. Let x ∈ ∂Ω ⊂ Ω1, and let Ω ∋ xn → x.
By continuity, f1(x) = lim_{n→∞} f1(xn) = lim_{n→∞} f(xn), since f1 = f in Ω; the limit thus depends only upon f and not upon the particular extension. The same argument applies to the derivatives of f1.
Exercise 2.1.8 Show that C^k(Ω), k = 0, 1,..., ∞, is a vector space.
It is sufficient to notice that all these sets are closed with respect to the function addition and multiplication by a number.
2.2 Subspaces
2.3 Equivalence Relations and Quotient Spaces
Exercises
Exercise 2.3.1 Prove that the operations in the quotient space V/M are well defined, i.e., the equivalence classes [x + y] and [αx] do not depend upon the choice of elements x ∈ [x] and y ∈ [y].
Let xi ∈ [x] = x + M, yi ∈ [y] = y + M, i = 1, 2. We need to show that

x1 + y1 + M = x2 + y2 + M

Let z ∈ x1 + y1 + M, i.e., z = x1 + y1 + m, m ∈ M. Then

z = x2 + y2 + (x1 − x2) + (y1 − y2) + m ∈ x2 + y2 + M

since each of the vectors x1 − x2, y1 − y2, m is an element of subspace M, and M is closed with respect to the vector addition. By the same argument, x2 + y2 + M ⊂ x1 + y1 + M. Similarly, let xi ∈ [x] = x + M, i = 1, 2. We need to show that

αx1 + M = αx2 + M

This is equivalent to showing that αx1 − αx2 = α(x1 − x2) is an element of subspace M. But this follows from the fact that x1 − x2 ∈ M and that M is closed with respect to the multiplication by a scalar.
Exercise 2.3.2 Let M be a subspace of a real space V and R_M the corresponding equivalence relation. Together with the three equivalence axioms (i)–(iii), relation R_M satisfies two extra conditions:
(iv) x R y and u R v imply (x + u) R (y + v)
(v) x R y implies (αx) R (αy), for every scalar α
We say that R_M is consistent with the linear structure on V. Let R be an arbitrary relation satisfying conditions (i)–(v), i.e., an equivalence relation consistent with the linear structure on V. Show that there exists a unique subspace M of V such that R = R_M, i.e., R is generated by the subspace M.
Define M to be the equivalence class of the zero vector, M = [0]. Axioms (iv) and (v) imply that M is closed with respect to the vector space operations and, therefore, is a vector subspace of V. Let y R x. Since x R x, axiom (v) implies (−x) R (−x) and, by axiom (iv), (y − x) R 0. By definition of M, (y − x) ∈ M. But this is equivalent to y ∈ x + M = [x]_{R_M}.
Exercise 2.3.3 Another way to see the difference between the two equivalence relations discussed in Example 2.3.3 is to discuss the equations of rigid body motions. For the sake of simplicity let us consider the two-dimensional case.
(i) Prove that, under the assumption that the Jacobian of the deformation gradient F is positive, E(u) = 0 if and only if u takes the form

u1 = c1 + (cos θ − 1) x1 + sin θ x2
u2 = c2 − sin θ x1 + (cos θ − 1) x2

where θ ∈ [0, 2π) is the angle of rotation.
(ii) Prove that εij(u) = 0 if and only if u has the following form

u1 = c1 + θ x2
u2 = c2 − θ x1

One can see that for small values of angle θ (cos θ ≈ 1, sin θ ≈ θ) the second set of equations can be obtained by linearizing the first.
(i) Using the notation from Example 2.3.3, we need to show that the right Cauchy–Green tensor Cij = x_k,i x_k,j = δij if and only if

x1 = c1 + cos θ X1 + sin θ X2
x2 = c2 − sin θ X1 + cos θ X2
A direct computation shows that Cij = δij for the (relative) configuration above. Conversely,

(x1,1)² + (x2,1)² = 1

implies that there exists an angle θ1 such that

x1,1 = cos θ1,  x2,1 = −sin θ1

Similarly,

(x1,2)² + (x2,2)² = 1

implies that there exists an angle θ2 such that

x1,2 = sin θ2,  x2,2 = cos θ2

Finally, condition

x1,1 x1,2 + x2,1 x2,2 = 0

implies that sin(θ1 − θ2) = 0. Restricting ourselves to angles in [0, 2π), we see that either θ1 = θ2 + π or θ1 = θ2. In the first case, sin θ1 = −sin θ2 and cos θ1 = −cos θ2, which results in a deformation gradient with a negative Jacobian. Thus θ1 = θ2 =: θ. A direct integration results then in the final formula. Angle θ is the angle of rotation, and integration constants c1, c2 are the components of the rigid displacement.
(ii) Integrating u1,1 = 0, we obtain u1 = u1(x2). Similarly, u2,2 = 0 implies u2 = u2(x1). The remaining condition 2ε12 = u1,2 + u2,1 = 0 then gives u1,2 = −u2,1; since the left-hand side depends only upon x2 and the right-hand side only upon x1, both must be equal to a constant θ. Integrating once more, we obtain u1 = c1 + θx2 and u2 = c2 − θx1.
2.4 Linear Dependence and Independence, Hamel Basis, Dimension
Exercises
Exercise 2.5.1 Find the matrix representation of rotation R about angle θ in ℝ² with respect to basis a1 = (1, 0), a2 = (1, 1).
Start by representing basis a1, a2 in terms of the canonical basis e1 = (1, 0), e2 = (0, 1):

a1 = e1,  a2 = e1 + e2,  so that  e1 = a1,  e2 = a2 − a1

Then,

R a1 = R e1 = cos θ e1 + sin θ e2 = cos θ a1 + sin θ (a2 − a1) = (cos θ − sin θ) a1 + sin θ a2

Similarly, R a2 = R e1 + R e2 = (cos θ − sin θ) e1 + (cos θ + sin θ) e2

or,

R a2 = (cos θ − sin θ) a1 + (cos θ + sin θ)(a2 − a1) = −2 sin θ a1 + (cos θ + sin θ) a2

Therefore, the matrix representation is:

R = [ cos θ − sin θ   −2 sin θ       ]
    [ sin θ            cos θ + sin θ ]
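The computed representation can be verified against the change-of-basis formula; the sketch below checks P⁻¹RP (P having columns a1, a2) against the matrix derived above, at an arbitrary sample angle:

```python
import math

# Check of Exercise 2.5.1: the matrix of R in the basis a1 = (1,0), a2 = (1,1)
# is P^{-1} R P, with P the change-of-basis matrix (sample angle theta = 0.5).
theta = 0.5
c, s = math.cos(theta), math.sin(theta)
R = [[c, -s], [s, c]]
P = [[1, 1], [0, 1]]        # columns are a1, a2
Pinv = [[1, -1], [0, 1]]    # inverse of P

def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Ra = matmul(Pinv, matmul(R, P))
expected = [[c - s, -2 * s], [s, c + s]]   # the matrix derived in the solution
print(all(abs(Ra[i][j] - expected[i][j]) < 1e-12 for i in range(2) for j in range(2)))  # True
```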
Exercise 2.5.2 Let V = X ⊕ Y, and dim X = n, dim Y = m. Prove that dim V = n + m.
Let e1,...,en be a basis for X, and let g1,...,gm be a basis for Y. It is sufficient to show that e1,...,en, g1,...,gm is a basis for V. Let v ∈ V. Then v = x + y with x ∈ X, y ∈ Y, and x = Σi xi ei, y = Σj yj gj, so v = Σi xi ei + Σj yj gj, which proves the span condition. To prove linear independence, assume that

Σi αi ei + Σj βj gj = 0

Then Σi αi ei = −Σj βj gj belongs to both X and Y and, from the fact that X ∩ Y = {0}, it follows that

Σi αi ei = 0  and  Σj βj gj = 0

which in turn implies that α1 = ··· = αn = 0 and β1 = ··· = βm = 0, since each of the two sets of vectors is separately linearly independent.
2.6 Isomorphic Vector Spaces
2.7 More About Linear Transformations
Exercises
Exercise 2.7.1 Let V be a vector space and id_V the identity transformation on V. Prove that a linear transformation T : V → V is a projection if and only if id_V − T is a projection.
Assume T is a projection, i.e., T² = T. Then

(id_V − T)² = (id_V − T)(id_V − T)
            = id_V − T − T + T²
            = id_V − T − T + T
            = id_V − T

The converse follows from the first step and T = id_V − (id_V − T).
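A quick numerical illustration, using a hypothetical sample projection onto the first coordinate axis in ℝ² (not from the text):

```python
# Sample projection T onto span{e1} in R^2: T^2 = T, and (I - T)^2 = I - T.
def matmul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

T = [[1, 0], [0, 0]]      # projection onto the first coordinate axis
I = [[1, 0], [0, 1]]
ImT = [[I[i][j] - T[i][j] for j in range(2)] for i in range(2)]

print(matmul(T, T) == T, matmul(ImT, ImT) == ImT)  # True True
```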
2.8 Linear Transformations and Matrices
2.9 Solvability of Linear Equations
Exercises
Exercise 2.9.1 Equivalent and Similar Matrices. Given matrices A and B, when nonsingular matrices P and Q exist such that

B = P⁻¹AQ

we say that the matrices A and B are equivalent. If B = P⁻¹AP, we say A and B are similar.
Let A and B be similar n × n matrices. Prove that

det A = det B,  r(A) = r(B),  n(A) = n(B)
The first assertion follows immediately from Cauchy's Theorem for Determinants. Indeed, PA = BP implies

det P det A = det B det P

and, consequently, det A = det B.
Let A : X → X be a linear map. Recall that the rank of A equals the maximum number of linearly independent vectors Aej, where ej, j = 1,...,n, is an arbitrary basis in X. Let P : X → X now be an isomorphism. Consider a basis ej, j = 1,...,n, in space X. Then Pej is another basis in X, and the rank of A equals the maximum number of linearly independent vectors APej, which is also the rank of AP. The Rank and Nullity Theorem implies then that the nullity of AP equals the nullity of A.
Similarly, the nullity of A is the maximum number of linearly independent vectors ej such that Aej = 0. But

Aej = 0 ⇔ PAej = 0

so the nullity of A is equal to the nullity of PA. The Rank and Nullity Theorem implies then that the rank of PA equals the rank of A. Consequently, for similar transformations (matrices), rank and nullity are the same.
Exercise 2.9.2 Let T1 and T2 be two different linear transformations from an n-dimensional linear vector space V into itself. Prove that T1 and T2 are represented relative to two different bases by the same matrix if and only if there exists a nonsingular transformation Q on V such that T2 = Q⁻¹T1Q.
Let T1 gj = Σi Tij gi and T2 ej = Σi Tij ei, where gj, ej are two bases in V. Define a nonsingular transformation Q mapping basis ej into basis gj. Then

Q⁻¹T1Q ej = Q⁻¹T1 gj = Q⁻¹ Σi Tij gi = Σi Tij ei = T2 ej

which implies T2 = Q⁻¹T1Q.
Conversely, if T2 = Q⁻¹T1Q and Q maps basis ej into basis gj, then the matrix representation of T2 with respect to basis ej equals the matrix representation of T1 with respect to basis gj.
Exercise 2.9.3 Let T be a linear transformation represented by a given matrix relative to bases {a1, a2} of ℝ² and {b1, b2, b3} of ℝ³. Compute the matrix representing T relative to the new bases:
We have

Inverting the formulas for ai, we get

We have now,

Similarly,

and

Thus, the matrix representation of transformation T with respect to the new bases is
Exercise 2.9.4 Let A be an n × n matrix. Show that transformations which
(a) interchange rows or columns of A
(b) multiply any row or column of A by a scalar ≠ 0
(c) add any multiple of a row or column to a parallel row or column
produce a matrix with the same rank as A.
Recall that the j-th column represents the value Aej. All the discussed operations on columns redefine the map but do not change its range. Indeed,

span{Ae1,...,Aej,...,Aei,...,Aen} = span{Ae1,...,Aei,...,Aej,...,Aen}
span{Ae1,...,A(αei),...,Aej,...,Aen} = span{Ae1,...,Aei,...,Aej,...,Aen},  α ≠ 0
span{Ae1,...,A(ei + βej),...,Aej,...,Aen} = span{Ae1,...,Aei,...,Aej,...,Aen}

The same conclusions apply to the rows of matrix A, as they represent vectors Aᵀe*_i, and rank Aᵀ = rank A.
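The invariance can be spot-checked; the sketch below applies one operation of each kind to a hypothetical rank-2 matrix (exact rational arithmetic) and recomputes the rank:

```python
from fractions import Fraction as F

def rank(M):
    # Rank via Gaussian elimination over the rationals.
    A = [[F(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

# Hypothetical sample of rank 2 (third row = first + second):
A = [[1, 2, 3], [0, 1, 4], [1, 3, 7]]
B = [A[2], A[1], A[0]]                                     # (a) swap rows 1, 3
C = [[-5 * row[j] if j == 1 else row[j] for j in range(3)] for row in A]
#   ^ (b) multiply column 2 by the non-zero scalar -5
D = [A[0], [a + 2 * b for a, b in zip(A[1], A[0])], A[2]]  # (c) row2 + 2*row1

print(rank(A), rank(B), rank(C), rank(D))  # 2 2 2 2
```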
Exercise 2.9.5 Let {a1, a2} and {e1, e2} be two bases for ℝ², where a1 = (−1, 2), a2 = (0, 3), and e1 = (1, 0), e2 = (0, 1). Let T : ℝ² → ℝ² be given by T(x, y) = (3x − 4y, x + y). Find the matrices for T for each choice of basis and show that these matrices are similar.
The matrix representation of T in the canonical basis e1, e2 is:

T = [ 3  −4 ]
    [ 1   1 ]

We have

a1 = −e1 + 2e2,  a2 = 3e2

Linearity of map T implies the following relations:

T a1 = T(−1, 2) = (−11, 1) = 11 a1 − 7 a2
T a2 = T(0, 3) = (−12, 3) = 12 a1 − 7 a2

Consequently, the matrix representation T̃ij in basis a1, a2 is:

T̃ = [ 11  12 ]
    [ −7  −7 ]

or, in the matrix form, with P denoting the matrix whose columns are the components of a1, a2,

T̃ = P⁻¹ T P

which shows that matrices T and T̃ are similar. Finally, computing the products above, we recover the same matrix T̃.
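The similarity computation above can be verified in exact arithmetic:

```python
from fractions import Fraction as F

# Verification of Exercise 2.9.5: with T = [[3,-4],[1,1]] in the canonical
# basis and P = [a1 a2], a1 = (-1,2), a2 = (0,3), the product P^{-1} T P
# should equal [[11,12],[-7,-7]].
def matmul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

T = [[F(3), F(-4)], [F(1), F(1)]]
P = [[F(-1), F(0)], [F(2), F(3)]]
detP = P[0][0] * P[1][1] - P[0][1] * P[1][0]
Pinv = [[P[1][1] / detP, -P[0][1] / detP],
        [-P[1][0] / detP, P[0][0] / detP]]

Ttil = matmul(Pinv, matmul(T, P))
print([[int(x) for x in row] for row in Ttil])  # [[11, 12], [-7, -7]]
```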
Algebraic Duals
2.10 The Algebraic Dual Space, Dual Basis
Exercises
Exercise 2.10.1 Consider the canonical basis e1 = (1, 0), e2 = (0, 1) for ℝ². Expanding a vector x with respect to this basis,

x = x1 e1 + x2 e2

x1, x2 are the components of x with respect to the canonical basis. The dual basis functional e*_j returns the j-th component:

e*_j(x) = xj
Consider now a different basis for ℝ², say a1 = (1, 1), a2 = (−1, 1). Write down the explicit formulas for the dual basis.
We follow the same reasoning. Expanding x in the new basis, x = ξ1 a1 + ξ2 a2, we apply a*_j to both sides to learn that the dual basis functionals a*_j return the components with respect to basis aj,

a*_j(x) = ξj

The whole issue is thus simply in computing the components ξj. This is done by representing the canonical basis vectors ei in terms of vectors aj,

e1 = ½(a1 − a2),  e2 = ½(a1 + a2)

Then,

x = x1 e1 + x2 e2 = ½(x1 + x2) a1 + ½(x2 − x1) a2

Therefore, ξ1 = ½(x1 + x2) and ξ2 = ½(x2 − x1) are the dual basis functionals.
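A quick check of the derived dual basis functionals (the helper name dual_coefficients is illustrative): applied to a1 and a2 they must return the Kronecker delta,

```python
# Dual basis for a1 = (1,1), a2 = (-1,1), as derived above:
# xi1 = (x1 + x2)/2, xi2 = (x2 - x1)/2.
def dual_coefficients(x1, x2):
    return ((x1 + x2) / 2, (x2 - x1) / 2)

# Sanity check: a*_i(a_j) = delta_ij.
a1, a2 = (1, 1), (-1, 1)
print(dual_coefficients(*a1), dual_coefficients(*a2))  # (1.0, 0.0) (0.0, 1.0)
```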
Exercise 2.10.2 Let V be a finite-dimensional vector space, and V* denote its algebraic dual. Let ei, i = 1,...,n, be a basis in V, and e*_j, j = 1,...,n, denote its dual basis. What is the matrix representation of the duality pairing with respect to these two bases? Does it depend upon whether we define the dual space as linear or antilinear functionals?
It follows from the definition of the dual basis that the matrix representation of the duality pairing is the Kronecker delta δij. This is true for both definitions of the dual space.
Exercise 2.10.3 Let V be a complex vector space. Let L(V, ℂ) denote the space of linear functionals defined on V, and let L̄(V, ℂ) denote the space of antilinear functionals defined on V. Define the (complex conjugate) map C as

(Cf)(v) = \overline{f(v)}

Show that operator C is well defined, bijective, and antilinear. What is the inverse of C?
Let f be a linear functional defined on V. Then,

(Cf)(αv + βw) = \overline{f(αv + βw)} = \overline{α f(v) + β f(w)} = ᾱ(Cf)(v) + β̄(Cf)(w)

so Cf is antilinear, and the map C is well defined. Similarly,

(C(αf))(v) = \overline{α f(v)} = ᾱ(Cf)(v)

so the map C is itself antilinear. Similarly, the map

D : L̄(V, ℂ) → L(V, ℂ),  (Dg)(v) = \overline{g(v)}

is well defined and antilinear. Notice that C and D are defined on different spaces, so you cannot say that C = D. Obviously, both compositions D ∘ C and C ∘ D are identities, so D is the inverse of C, and both maps are bijective.
Exercise 2.10.4 Let V be a finite-dimensional vector space. Consider the map ι from V into its bidual space V**, prescribing for each v ∈ V the evaluation at v, and establishing the canonical isomorphism between space V and its bidual V**. Let e1,...,en be a basis for V, and let e*_1,...,e*_n be the corresponding dual basis. Consider the bidual basis, i.e., the basis e**_i, i = 1,...,n, in the bidual space, dual to the dual basis, and prove that

ι(ei) = e**_i
This is simple. The definition of map ι implies that

ι(ei)(e*_j) = e*_j(ei) = δij

Thus, ι(ei) is dual to the dual basis. The relation follows then from the uniqueness of the (bi)dual basis.
2.11 Transpose of a Linear Transformation
Exercises
Exercise 2.11.1 The following is a "sanity check" of your understanding of the concepts discussed in the last two sections. Consider ℝ².
(a) Prove that a1 = (1, 0), a2 = (1, 1) is a basis in ℝ².
It is sufficient to show linear independence: any n linearly independent vectors in an n-dimensional vector space provide a basis for the space. The vectors are clearly not collinear, so they are linearly independent. Formally, α1 a1 + α2 a2 = (α1 + α2, α2) = (0, 0) implies α1 = α2 = 0, so the vectors are linearly independent.
(b) Consider a functional f : ℝ² → ℝ, f(x1, x2) = 2x1 + 3x2. Prove that the functional is linear, and determine its components in the dual basis a*_1, a*_2.
Linearity is trivial. Dual basis functionals return components with respect to the original basis,

a*_j(x) = ξj,  where x = ξ1 a1 + ξ2 a2

It is, therefore, sufficient to determine ξ1, ξ2. We have,

x = ξ1 a1 + ξ2 a2 = (ξ1 + ξ2, ξ2)

so x1 = ξ1 + ξ2 and x2 = ξ2. Inverting, we get ξ1 = x1 − x2, ξ2 = x2. These are the dual basis functionals. Consequently,

f(x) = 2x1 + 3x2 = 2(ξ1 + ξ2) + 3ξ2 = 2ξ1 + 5ξ2

Using the argumentless notation,

f = 2a*_1 + 5a*_2

If you are not interested in the form of the dual basis functionals, you can compute the components of f with respect to the dual basis faster. Assume f = α1 a*_1 + α2 a*_2. Evaluating both sides at x = a1, we get

α1 = f(a1) = f(1, 0) = 2

Similarly, evaluating at x = a2, we get α2 = f(1, 1) = 5.
(c) Consider a linear map A : ℝ² → ℝ² whose matrix representation in basis a1, a2 is

[ 1  0 ]
[ 1  2 ]

Compute the matrix representation of the transpose operator with respect to the dual basis.
Nothing to compute. The matrix representation of the transpose operator with respect to the dual basis is equal to the transpose of the original matrix,

[ 1  1 ]
[ 0  2 ]
Exercise 2.11.2 Prove Proposition 2.11.3.
All five properties of the matrices are directly related to the properties of linear transformations discussed in Proposition 2.11.1 and Proposition 2.11.2. They can also be easily verified directly.
(i)
(ii)
(iii)
(iv) Follow the reasoning for linear transformations:

Aᵀ(A⁻¹)ᵀ = (A⁻¹A)ᵀ = I,  (A⁻¹)ᵀAᵀ = (AA⁻¹)ᵀ = I

Consequently, matrix Aᵀ is invertible, and (Aᵀ)⁻¹ = (A⁻¹)ᵀ.
(v) Conclude this from Proposition 2.11.2. Given a matrix Aij, i, j = 1,...,n, we can interpret it as the matrix representation of the map A : ℝⁿ → ℝⁿ defined as:

(Ax)_i = Σ_j Aij xj

with respect to the canonical basis ei, i = 1,...,n. The transpose matrix Aᵀ can then be interpreted as the matrix of the transpose transformation, with respect to the dual basis. The conclusion follows then from the facts that the rank of matrix A equals the rank of transformation A, the rank of matrix Aᵀ equals the rank of transformation Aᵀ, and Proposition 2.11.2.
Exercise 2.11.3 Construct an example of square matrices A and B such that
(a) AB ≠ BA
(b) AB = 0, but neither A = 0 nor B = 0
(c) AB = AC, but B ≠ C
Take A, B from (b) and C = 0.
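A hypothetical sample pair realizing (a) and (b) simultaneously (with C = 0 then realizing (c)):

```python
# Sample matrices: A, B nonzero with AB = 0, AB != BA, and AB = AC with B != C.
def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

A = [[0, 1], [0, 0]]
B = [[1, 0], [0, 0]]
Z = [[0, 0], [0, 0]]   # C = 0

print(matmul(A, B))                           # [[0, 0], [0, 0]]  (b): AB = 0
print(matmul(B, A))                           # [[0, 1], [0, 0]]  (a): AB != BA
print(matmul(A, B) == matmul(A, Z), B != Z)   # True True         (c): AB = AC, B != C
```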
Exercise 2.11.4 If A = [Aij] is an m × n rectangular matrix, its transpose Aᵀ is the n × m matrix Aᵀ = [Aji]. Prove that
(i) (Aᵀ)ᵀ = A.
(ii) (A + B)ᵀ = Aᵀ + Bᵀ.
Particular case of Proposition 2.11.3(i).
(iii) (ABC···XYZ)ᵀ = ZᵀYᵀXᵀ···CᵀBᵀAᵀ.
Use Proposition 2.11.3(ii) and recursion: (ABC···XYZ)ᵀ = (BC···XYZ)ᵀAᵀ = ((C···XYZ)ᵀBᵀ)Aᵀ = ···
(iv) (qA)ᵀ = qAᵀ.
Particular case of Proposition 2.11.3(i).
Exercise 2.11.5 In this exercise, we develop a classical formula for the inverse of a square matrix. Let A = [aij] be a matrix of order n. We define the cofactor Aij of the element aij as the determinant of the matrix obtained by deleting the i-th row and j-th column of A, multiplied by (−1)^{i+j}.
(a) Show that

Σ_{k=1}^n aik Ajk = δij det A

where δij is the Kronecker delta. Hint: Compare Exercise 2.13.4.
For i = j, the formula reduces to the Laplace Expansion Formula for determinants discussed in Exercise 2.13.4. For i ≠ j, the right-hand side represents the Laplace expansion of the determinant of an array where two rows are identical. Antisymmetry of the determinant (comp. Section 2.13) implies then that the value must be zero.
(b) Using the result in (a), conclude that

(A⁻¹)ij = Aji / det A

Divide both sides of the identity in (a) by det A.
(c) Use (b) to compute the inverse of the given matrix, and verify your answer by showing that AA⁻¹ = I.
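The cofactor formula from this exercise can be sketched in code; the helper names below (minor, det, inverse) are illustrative, the sample matrix is hypothetical, and the arithmetic is exact:

```python
from fractions import Fraction as F

# Cofactor inverse: det via Laplace expansion along the first row, cofactors
# Aij, and (A^{-1})_{ij} = A_{ji}/det A.
def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def inverse(A):
    n, d = len(A), det(A)
    cof = [[(-1) ** (i + j) * det(minor(A, i, j)) for j in range(n)] for i in range(n)]
    return [[cof[j][i] / d for j in range(n)] for i in range(n)]  # transposed cofactors

A = [[F(2), F(1)], [F(1), F(1)]]
Ainv = inverse(A)
print([[int(x) for x in row] for row in Ainv])  # [[1, -1], [-1, 2]]
```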
Exercise 2.11.6 Consider the matrices A, B, C, D, and E.
If possible, compute the following:
(a) AAᵀ + 4DᵀD + Eᵀ
The expression is ill-defined: AAᵀ ∈ Matr(2, 2) and Eᵀ ∈ Matr(4, 4), so the two matrices cannot be added to each other.
(b) CᵀC + E − E²
(c) BᵀD
Ill-defined, mismatched dimensions.
(d) BᵀBD − D
(e) EC − AᵀA
EC is not computable.
(f) AᵀDC(E − 2I)
Exercise 2.11.7 Do the following vectors provide a basis for ℝ⁴?
It is sufficient to check linear independence,

α1 a + α2 b + α3 c + α4 d = 0  ⇒  α1 = α2 = α3 = α4 = 0

Computing the linear combination, we arrive at a homogeneous system of equations whose matrix has the vectors as its columns. The system has a nontrivial solution iff the matrix is singular, i.e., det A = 0. By inspection, the third row equals minus the first one, so the determinant is zero. Vectors a, b, c, d are linearly dependent and, therefore, do not provide a basis for ℝ⁴.
Exercise 2.11.8 Evaluate the determinant of the matrix
Use e.g. the Laplace expansion with respect to the last row and Sarrus' formulas.
Exercise 2.11.9 Invert the following matrices (see Exercise 2.11.5).
Exercise 2.11.10 Prove that if A is symmetric and nonsingular, so is A⁻¹.
Use Proposition 2.11.3(iv): (A⁻¹)ᵀ = (Aᵀ)⁻¹ = A⁻¹, so A⁻¹ is symmetric, and it is nonsingular since it has an inverse, namely A.
Exercise 2.11.11 Prove that if A, B, C, and D are nonsingular matrices of the same order, then

(ABCD)⁻¹ = D⁻¹C⁻¹B⁻¹A⁻¹

Use the fact that the matrix product is associative,

(ABCD)(D⁻¹C⁻¹B⁻¹A⁻¹) = ABC(DD⁻¹)C⁻¹B⁻¹A⁻¹ = ··· = I

In the same way,

(D⁻¹C⁻¹B⁻¹A⁻¹)(ABCD) = I

and, consequently, (ABCD)⁻¹ = D⁻¹C⁻¹B⁻¹A⁻¹.
Exercise 2.11.12 Consider the linear problem:
(i) Determine the rank of T.
Multiplication of columns (rows) by a non-zero factor, addition of columns (rows), and interchange of columns (rows) do not change the rank of a matrix. We may use those operations and mimic Gaussian elimination to compute the rank of matrices: first zero out the first column below the pivot, then manipulate in the same way the columns to zero out the first row. Proceeding in this manner, we get

rank T = 2
(ii) Determine the null space of T.
Set x3 = α and x4 = β, and solve for x1, x2 to obtain the null space vectors.
(iii) Obtain a particular solution and the general solution.
Check that the rank of the augmented matrix is also equal to 2. Set x3 = x4 = 0 to obtain a particular solution. The general solution is then the sum of the particular solution and an arbitrary element of the null space.
(iv) Determine the range space of T.
As rank T = 2, we know that the range of T is two-dimensional. It is thus sufficient to find two linearly independent vectors that are in the range; e.g., we can take Te1, Te2, represented by the first two columns of the matrix.
Exercise 2.11.13 Construct examples of linear systems of equations having (1) no solutions, (2) infinitely many solutions, (3) if possible, unique solutions for the following cases:
(a) 3 equations, 4 unknowns
(1)
(3) A unique solution is not possible: the rank of the matrix is at most 3, so a consistent system always has at least one free unknown.
(b) 3 equations, 3 unknowns
(1)
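The case analysis can be illustrated with hypothetical systems (not the ones from the text) by comparing the rank of the matrix with the rank of the augmented matrix:

```python
from fractions import Fraction as F

def rank(M):
    # Rank via Gaussian elimination over the rationals.
    A = [[F(x) for x in row] for row in M]
    r = 0
    for c in range(len(A[0])):
        piv = next((i for i in range(r, len(A)) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        for i in range(len(A)):
            if i != r and A[i][c] != 0:
                f = A[i][c] / A[r][c]
                A[i] = [a - f * b for a, b in zip(A[i], A[r])]
        r += 1
    return r

def ranks(A, b):
    aug = [row + [bi] for row, bi in zip(A, b)]
    return rank(A), rank(aug)

# 3 equations, 4 unknowns (third equation = sum of the first two):
A = [[1, 0, 0, 0], [0, 1, 0, 0], [1, 1, 0, 0]]
print(ranks(A, [0, 0, 1]))  # (2, 3): inconsistent -> no solutions
print(ranks(A, [1, 1, 2]))  # (2, 2): consistent, x3, x4 free -> infinitely many
# A unique solution would require rank 4, impossible with only 3 equations.
```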
Exercise 2.11.14 Determine the rank of the following matrices:
In all three cases, the rank is equal to 3.
Exercise 2.11.15 Solve, if possible, the following systems:
(a)
2.12 Tensor Products, Covariant and Contravariant Tensors
2.13 Elements of Multilinear Algebra
Exercises
Exercise 2.13.1 Let X be a finite-dimensional space of dimension n. Prove that the dimension of the space M^s_m(X) of all m-linear symmetric functionals defined on X is given by the formula

dim M^s_m(X) = n(n + 1)···(n + m − 1)/m!

(the binomial coefficient C(n + m − 1, m)). Proceed along the following steps.
(a) Let P_{i,m} denote the number of increasing sequences of m natural numbers ending with i,

j1 ≤ j2 ≤ ··· ≤ jm = i

Argue that

dim M^s_m(X) = Σ_{i=1}^n P_{i,m}
Let a be a general m-linear functional defined on X. Let e1,...,en be a basis for X, and let v^j, j = 1,...,n, denote the components of a vector v with respect to the basis. The multilinearity of a implies the representation formula,

a(v1,...,vm) = Σ_{j1,...,jm} a(e_{j1}, e_{j2},...,e_{jm}) v1^{j1} v2^{j2} ··· vm^{jm}

On the other side, if the form a is symmetric, we can interchange any two arguments in the coefficient a(e_{j1}, e_{j2},...,e_{jm}) without changing its value. The form is thus determined by the coefficients a(e_{j1}, e_{j2},...,e_{jm}) where

j1 ≤ j2 ≤ ··· ≤ jm

The number of such increasing sequences equals the dimension of space M^s_m(X). Obviously, we can partition the set of such sequences into subsets that contain sequences ending at particular indices 1, 2,...,n, from which the identity above follows.
(b) Argue that

P_{i,m+1} = Σ_{j=1}^i P_{j,m}

The first m elements of an increasing sequence of m + 1 integers ending at i form an increasing sequence of m integers ending at some j ≤ i.
(c) Use the identity above and mathematical induction to prove that

P_{i,m} = i(i + 1)···(i + m − 2)/(m − 1)!

For m = 1, P_{i,1} = 1. For m = 2,

P_{i,2} = Σ_{j=1}^i P_{j,1} = i

For m = 3,

P_{i,3} = Σ_{j=1}^i P_{j,2} = Σ_{j=1}^i j = i(i + 1)/2

Assume the formula is true for a particular m. Then

P_{i,m+1} = Σ_{j=1}^i j(j + 1)···(j + m − 2)/(m − 1)!
We shall use induction in i to prove that

Σ_{j=1}^i j(j + 1)···(j + m − 2)/(m − 1)! = i(i + 1)···(i + m − 1)/m!

The case i = 1 is obvious. Suppose the formula is true for a particular value of i. Then,

Σ_{j=1}^{i+1} j(j + 1)···(j + m − 2)/(m − 1)!
  = i(i + 1)···(i + m − 1)/m! + (i + 1)(i + 2)···(i + m − 1)/(m − 1)!
  = (i + 1)(i + 2)···(i + m − 1) [ i/m! + 1/(m − 1)! ]
  = (i + 1)(i + 2)···(i + m − 1)(i + m)/m!

which is the formula for i + 1. This completes the induction step in m as well: P_{i,m+1} = i(i + 1)···(i + m − 1)/m!.
(d) Conclude the final formula.
Just use the formula above:

dim M^s_m(X) = Σ_{i=1}^n P_{i,m} = P_{n,m+1} = n(n + 1)···(n + m − 1)/m! = C(n + m − 1, m)
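The count can be verified by brute force: the increasing (nondecreasing) index sequences are exactly what itertools.combinations_with_replacement enumerates,

```python
from itertools import combinations_with_replacement
from math import comb

# Count the nondecreasing index sequences of length m with entries in
# {1,...,n}; the count should equal C(n + m - 1, m).
def dim_sym(n, m):
    return sum(1 for _ in combinations_with_replacement(range(1, n + 1), m))

assert all(dim_sym(n, m) == comb(n + m - 1, m)
           for n in range(1, 6) for m in range(1, 5))
print(dim_sym(3, 2))  # 6
```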
Exercise 2.13.2 Prove that any bilinear functional can be decomposed in a unique way into the sum of a symmetric functional and an antisymmetric functional. In other words,

M_2(X) = M^s_2(X) ⊕ M^a_2(X)

Does this result hold for a general m-linear functional with m > 2?
The result follows from the simple decomposition,

a(v1, v2) = ½[a(v1, v2) + a(v2, v1)] + ½[a(v1, v2) − a(v2, v1)]

the first term being symmetric and the second antisymmetric. Unfortunately, the result does not generalize to m > 2. This can for instance be seen from a simple comparison of the dimensions of the involved spaces in the finite-dimensional case,

dim M_m(X) = n^m > C(n + m − 1, m) + C(n, m) = dim M^s_m(X) + dim M^a_m(X)

for 2 < m ≤ n.
Exercise 2.13.3 Antisymmetric linear functionals are a great tool to check for linear independence of vectors.
Let a be an m-linear antisymmetric functional defined on a vector space V. Let v1,...,vm be m vectors in space V such that a(v1,...,vm) ≠ 0. Prove that vectors v1,...,vm are linearly independent. Is the converse true? In other words, if vectors v1,...,vm are linearly independent, and a is a nontrivial m-linear antisymmetric form, is a(v1,...,vm) ≠ 0?
Assume to the contrary that there exists an index i such that

vi = Σ_{j≠i} βj vj

for some constants βj, j ≠ i. Substituting into the functional a, we get,

a(v1,...,vi,...,vm) = a(v1,..., Σ_{j≠i} βj vj,...,vm) = Σ_{j≠i} βj a(v1,...,vj,...,vm) = 0

since in each of the terms a(v1,...,vj,...,vm) two arguments are the same.
The converse is not true. Consider for instance a bilinear, antisymmetric form defined on a three-dimensional space. Let e1, e2, e3 be a basis for the space. As discussed in the text, the form is uniquely determined by its values on pairs of basis vectors: a(e1, e2), a(e1, e3), a(e2, e3). It is sufficient for one of these numbers to be non-zero in order to have a nontrivial form. Thus we may have a(e1, e2) = 0 for the linearly independent vectors e1, e2, and a nontrivial form a. The discussed criterion is only a sufficient condition for linear independence but not a necessary one.
Exercise 2.13.4 Use the fact that the determinant of matrix A is a multilinear antisymmetric functional of the matrix columns and rows to prove the Laplace Expansion Formula. Select a particular column of matrix Aij, say the j-th column. Let 𝒜ij denote the submatrix of A obtained by removing the i-th row and j-th column (do not confuse it with a matrix representation). Prove that

det A = Σ_{i=1}^n (−1)^{i+j} Aij det 𝒜ij
Formulate and prove an analogous expansion formula with respect to an i-th row.
It follows from the linearity of the determinant with respect to the j-th column that,

det A = Σ_{i=1}^n Aij det(a1,...,a_{j−1}, ei, a_{j+1},...,an)

where a1,...,an denote the columns of A and ei the canonical basis vectors. On the other side, the determinant of the matrix

(a1,...,a_{j−1}, ei, a_{j+1},...,an)

is a multilinear functional of the remaining columns (and rows) and, for 𝒜ij = I (the I denotes here the identity matrix in ℝ^{n−1}), its value reduces to (−1)^{i+j}. Hence,

det(a1,...,a_{j−1}, ei, a_{j+1},...,an) = (−1)^{i+j} det 𝒜ij

and the expansion formula follows.
The reasoning follows identical lines for the expansion with respect to an i-th row.
Exercise 2.13.5 Prove Cramer's formulas for the solution of a nonsingular system of n equations with n unknowns,

Σ_{j=1}^n aij xj = bi,  i = 1,...,n,  xj = det Aj / det A

where Aj denotes the matrix A with its j-th column replaced by the right-hand side vector b.
Hint: In order to develop the formula for the j-th unknown, rewrite the system in the form:

A Xj = Aj

where Xj denotes the identity matrix with its j-th column replaced by the solution vector x (so that det Xj = xj).
Compute the determinant of both sides of the identity, and use Cauchy's Theorem for Determinants for the left-hand side.
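Cramer's formulas can be sketched directly (exact arithmetic; the 3 × 3 system below is a hypothetical sample, not from the text):

```python
from fractions import Fraction as F

# Cramer's rule: x_j = det(A_j)/det(A), where A_j is A with its j-th column
# replaced by b.
def minor(A, i, j):
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    if len(A) == 1:
        return A[0][0]
    return sum((-1) ** j * A[0][j] * det(minor(A, 0, j)) for j in range(len(A)))

def cramer(A, b):
    d = det(A)
    xs = []
    for j in range(len(A)):
        Aj = [row[:j] + [b[i]] + row[j + 1:] for i, row in enumerate(A)]
        xs.append(det(Aj) / d)
    return xs

# Sample system: 2x1 + x3 = 3, x2 = 2, x1 + x3 = 2.
A = [[F(2), F(0), F(1)], [F(0), F(1), F(0)], [F(1), F(0), F(1)]]
b = [F(3), F(2), F(2)]
x = cramer(A, b)
print([int(v) for v in x])  # [1, 2, 1]
```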
Exercise 2.13.6 Explain why the rank of a (not necessarily square) matrix is equal to the maximum size of a square submatrix with a non-zero determinant.
Consider an m × n matrix Aij. The matrix can be considered to be a representation of a linear map A from an n-dimensional space X with a basis ei, i = 1,...,n, into an m-dimensional space Y with a basis g1,...,gm. The transpose of the matrix represents the transpose operator Aᵀ mapping dual space Y* into the dual space X*, with respect to the dual bases g*_1,...,g*_m and e*_1,...,e*_n. The rank of the matrix is equal to the dimension of the range space of operator A and operator Aᵀ. Let e_{j1},...,e_{jk} be such vectors that Ae_{j1},...,Ae_{jk} is a basis for the range of operator A. The corresponding submatrix represents a restriction B of operator A to the subspace X0 = span(e_{j1},...,e_{jk}) and has the same rank as the original whole matrix. Its transpose has the same rank, equal to k. By the same argument, there exist k vectors g_{i1},...,g_{ik} such that Aᵀg*_{i1},...,Aᵀg*_{ik} are linearly independent. The corresponding k × k submatrix represents the restriction of the transpose operator to the k-dimensional subspace Y*_0 = span(g*_{i1},...,g*_{ik}), with values in the dual space X*_0, and has the same rank equal to k. Thus, the final submatrix represents an isomorphism from a k-dimensional space into a k-dimensional space and, consequently, must have a non-zero determinant.
Conversely, let v1,...,vk be k column vectors in ℝᵐ. Consider a matrix composed of the columns. If there exists a k × k square submatrix of the matrix with a non-zero determinant, the vectors must be linearly independent. Indeed, the determinant of any k × k square submatrix of the matrix represents a k-linear, antisymmetric functional of the column vectors, so, by Exercise 2.13.3, v1,...,vk are linearly independent vectors. The same argument applies to the rows of the matrix.
Euclidean Spaces
2.14 Scalar (Inner) Product. Representation Theorem in Finite-Dimensional Spaces
2.15 Basis and Cobasis. Adjoint of a Transformation. Contra- and Covariant Components of Tensors
Exercises
Exercise 2.15.1 Go back to Exercise 2.11.1 and consider the following product in ℝ²,

(x, y)_V = x1 y1 + 2 x2 y2

Prove that (x, y)_V satisfies the axioms for an inner product. Determine the adjoint of map A from Exercise 2.11.1 with respect to this inner product.
The product is bilinear, symmetric, and positive definite, since (x, x)_V = x1² + 2x2² ≥ 0, and (x, x)_V = 0 implies x1 = x2 = 0. The easiest way to determine a matrix representation of A* is to determine the cobasis a¹, a² of the basis a1 = (1, 0), a2 = (1, 1) used to define the map A. Assume that a¹ = (α, β). Then

(a¹, a1)_V = α = 1,  (a¹, a2)_V = α + 2β = 0

so a¹ = (1, −1/2). Similarly, if a² = (α, β), then

(a², a1)_V = α = 0,  (a², a2)_V = α + 2β = 1

so a² = (0, 1/2). The matrix representation of A* in the cobasis is simply the transpose of the original matrix,

[ 1  1 ]
[ 0  2 ]
In order to represent A* in the original, canonical basis, we need to switch between the bases. The transpose matrix means that

A* a¹ = a¹,  A* a² = a¹ + 2a²

and, since e1 = a¹ + a² and e2 = 2a²,

A* e1 = A* a¹ + A* a² = 2a¹ + 2a² = (2, 0)
A* e2 = 2 A* a² = 2a¹ + 4a² = (2, 1)

Then, in the canonical basis,

A* = [ 2  2 ]
     [ 0  1 ]
Now, let us check our calculations. First, let us compute the original map (that has been given to us in basis a1, a2) in the canonical basis: A a1 = a1 + a2 and A a2 = 2a2, so

A e1 = A a1 = a1 + a2 = (2, 1),  A e2 = A(a2 − a1) = 2a2 − a1 − a2 = a2 − a1 = (0, 1)

i.e.,

A = [ 2  0 ]
    [ 1  1 ]

If our calculations are correct then, for every x, y,

(Ax, y)_V = (x, A*y)_V

must hold. Computing both sides,

(Ax, y)_V = 2x1 y1 + 2(x1 + x2) y2 = 2x1 y1 + 2x1 y2 + 2x2 y2
(x, A*y)_V = x1 (2y1 + 2y2) + 2x2 y2 = 2x1 y1 + 2x1 y2 + 2x2 y2

which it does! Needless to say, you can solve this problem in many other ways.
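The adjoint identity can also be machine-checked on a grid of integer vectors, using the matrices computed above:

```python
# Check of the adjoint: with (x,y)_V = x1 y1 + 2 x2 y2, A = [[2,0],[1,1]] and
# A* = [[2,2],[0,1]] in the canonical basis, (Ax, y)_V = (x, A*y)_V for all x, y.
def apply(M, v):
    return (M[0][0] * v[0] + M[0][1] * v[1], M[1][0] * v[0] + M[1][1] * v[1])

def ip(x, y):
    return x[0] * y[0] + 2 * x[1] * y[1]

A = [[2, 0], [1, 1]]
Astar = [[2, 2], [0, 1]]

ok = all(ip(apply(A, (x1, x2)), (y1, y2)) == ip((x1, x2), apply(Astar, (y1, y2)))
         for x1 in range(-2, 3) for x2 in range(-2, 3)
         for y1 in range(-2, 3) for y2 in range(-2, 3))
print(ok)  # True
```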