Solution Manual for Linear Algebra with Applications, 2nd Edition, Holt (ISBN-10 1464193347, ISBN-13 9781464193347)
Chapter 4 Subspaces
4.1 Practice Problems
Thus, Ax = 0 has solutions of the given form, and therefore null (A) is the span of the resulting vectors.
So T(x) = c has no solution, and c ∉ range (T).
So Ax = c has no solution, and c ∉ range (T).
4. (a) False. ker(T ) is a subset of the domain space.
(b) False. ker(T ) is a subspace of R3
(c) False. The trivial subspace {0} contains a single vector.
(d) False. If S is a subspace, then 0 ∈ S, so 0 ∉ S^C, and S^C cannot be a subspace.
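As an illustrative aside, the checks in these practice problems can be reproduced with a computer algebra system. The following minimal SymPy sketch uses a hypothetical matrix A and vector c (the ones from the problems are not reproduced here) to compute a basis for null (A) and to test whether c lies in range (T) by checking whether Ax = c is consistent.

    # Minimal sketch (not from the text): A and c below are hypothetical stand-ins.
    from sympy import Matrix

    A = Matrix([[1, 2, 0],
                [2, 4, 1]])      # hypothetical matrix defining T(x) = Ax
    c = Matrix([1, 3])           # hypothetical target vector

    # null(A): basis vectors for the solution set of Ax = 0
    print(A.nullspace())         # [Matrix([[-2], [1], [0]])]

    # c is in range(T) exactly when Ax = c is consistent,
    # i.e. when rank([A | c]) == rank(A)
    augmented = A.row_join(c)
    print(A.rank() == augmented.rank())   # True here, so c is in range(T)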
4.1 Introduction
1. Let S be the set of vectors of the given form. Letting a = 0 and b = 0, we see that 0 ∈ S. Suppose u and v are in S; then u + v and ru again have the given form, and we conclude that S is a subspace. (Alternatively, S is the span of the given vectors, and hence S is a subspace.)
2. Let S be the set of vectors of the form (a, a, 0). Letting a = 0, we see that 0 ∈ S. Suppose u and v are in S; then u + v and ru again have this form, and we conclude that S is a subspace. (Alternatively, we have S = span{(1, 1, 0)}, and hence S is a subspace.)
3. Not a subspace, because 0 + 0 ≠ 1, and thus 0 is not in this set.
4. Let S be the set of vectors of the given form. Letting each parameter equal 0, we see that 0 ∈ S. Suppose u and v are in S; then u + v and ru again have this form, and we conclude that S is a subspace. (Alternatively, we have S = null ([ 1 1 1 ]), and hence S is a subspace.)
5. Not a subspace, since 0 is not in this subset because the second component of 0 is 0 ≠ 1.
6. Let S be the set of vectors of the given form. Then S is the span of the two given vectors, and hence S is a subspace.
7. Not a subspace. Let r = 1/2 and u = (1, 0). Then u belongs to the set, but ru = (1/2, 0) does not.
8. The condition c = b − a is equivalent to a − b + c = 0, and therefore we see that this subset is equal to null ([ 1 −1 1 ]), which is a subspace.
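As a quick illustrative check of this description (not part of the original solution), the following SymPy sketch confirms that vectors of the form (a, b, b − a) lie in null ([ 1 −1 1 ]).

    # Check that vectors (a, b, c) with c = b - a form the null space of [1 -1 1].
    from sympy import Matrix, symbols

    a, b = symbols('a b')
    M = Matrix([[1, -1, 1]])

    print(M.nullspace())
    # [Matrix([[1], [1], [0]]), Matrix([[-1], [0], [1]])]: a basis for the subset

    v = Matrix([a, b, b - a])    # generic vector satisfying c = b - a
    print(M * v)                 # Matrix([[0]]): every such vector is in null(M)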
9. Not a subspace. Let u = (1, 0, 0) and v = (0, 1, 1). Then both u and v belong to the subset, but u + v = (1, 1, 1) does not, since 1(1)(1) = 1 ≠ 0.
10. Not a subspace. Let r = 2 and u = (1, 0). Then u is in the subset, but ru = (2, 0) is not, since 2² + 0² = 4 > 1.
13. Not a subspace. Let r = −1 and u = (1, 2). Then u is in the subset since 1 ≤ 2, but ru = (−1, −2) is not, since −1 > −2.
14. Not a subspace. Let u = (1, 1) and v = (1, −1). Then u and v are in the subset since 1 = |1| and 1 = |−1|, but u + v = (2, 0) is not.
15. This subset is equal to null ([ 1 · · · 1 ]), which is a subspace.
16. This subset is equal to null ([ 1 −1 1 −1 · · · 1 −1 ]), which is a subspace.
17. It would not be closed under scalar multiplication, since there is a vector (a, 0), where 0 < a, which is in the set, but r(a, 0) = (ra, 0) would not be in the set for r sufficiently large. One might also note that (b, b) and (b, −b) are in the set, but the sum (2b, 0) is not, for b sufficiently large.
18. It would not be closed under scalar multiplication, since there is a vector (a, 0), where 0 < a, which is in the set, but r(a, 0) = (ra, 0) would not be in the set for r sufficiently large.
19. It would not be closed under vector addition. For example, (1, 0) and (0, 1) are in the region, but the sum (1, 1) is not.
20. It would not be closed under scalar multiplication, since there is a vector (a, 0), where 0 < a, which is in the set, but r(a, 0) = (ra, 0) would not be in the set for r sufficiently large.
23. A is row-reduced, and we see that Ax = 0 has solutions of the form
45. (a) True. Since A0 = 0 ≠ b, 0 is not a solution to Ax = b, and hence the set of solutions is not a subspace.
(b) False. For any 5 × 3 matrix A, null(A) is a subspace of R3, not R5.
46. (a) True, by Theorem 4.3.
(b) True, by Theorem 4.5.
47. (a) False, because ker(T ) is a subspace of the domain R5, not the codomain R8.
(b) False, because range(T ) is a subspace of the codomain R7, not the domain R2.
48. (a) True, by Theorem 4.5.
(b) False. The union of the subspaces span{(1, 0)} and span{(0, 1)} consists of the coordinate axes of R2, which is not a subspace.
49. (a) True. Proof: Let S1 and S2 be two subspaces of Rn. Since 0 must belong to both S1 and S2, 0 ∈ S1 ∩ S2. Now let u and v be two vectors in S1 ∩ S2. Since u and v are both in S1, and S1 is a subspace, u + v ∈ S1. Likewise, since u and v are both in S2, and S2 is a subspace, u + v ∈ S2. Thus u + v ∈ S1 ∩ S2. Now let u ∈ S1 ∩ S2 and r ∈ R. Since u ∈ S1 and S1 is a subspace, ru ∈ S1. Likewise, since u ∈ S2 and S2 is a subspace, ru ∈ S2. Thus ru ∈ S1 ∩ S2, and we conclude that S1 ∩ S2 is a subspace.
(b) True. Proof: Since S1 and S2 are two subspaces of Rn, 0 must belong to both S1 and S2, and hence 0 + 0 = 0 ∈ S. Now let u and v be two vectors in S. Then there exist u1 ∈ S1 and u2 ∈ S2 with u = u1 + u2. Likewise, there exist v1 ∈ S1 and v2 ∈ S2 with v = v1 + v2, and thus u + v = (u1 + u2) + (v1 + v2) = (u1 + v1) + (u2 + v2). Since S1 is a subspace, u1 + v1 ∈ S1, and since S2 is a subspace, u2 + v2 ∈ S2. Thus, u + v ∈ S. Now let u ∈ S and r ∈ R. Since u ∈ S, there exist u1 ∈ S1 and u2 ∈ S2 with u = u1 + u2, and so ru = ru1 + ru2. Since S1 is a subspace, ru1 ∈ S1, and likewise since S2 is a subspace, ru2 ∈ S2. Hence, ru ∈ S. We conclude that S is a subspace.
50. (a) True. Proof: Since S1 and S2 are two subspaces of Rn, 0 must belong to both S1 and S2, and hence 0 − 0 = 0 ∈ S. Now let u and v be two vectors in S. Then there exist u1 ∈ S1 and u2 ∈ S2 with u = u1 − u2. Likewise, there exist v1 ∈ S1 and v2 ∈ S2 with v = v1 − v2, and thus u + v = (u1 − u2) + (v1 − v2) = (u1 + v1) − (u2 + v2). Since S1 is a subspace, u1 + v1 ∈ S1, and since S2 is a subspace, u2 + v2 ∈ S2. Thus, u + v ∈ S. Now let u ∈ S and r ∈ R. Since u ∈ S, there exist u1 ∈ S1 and u2 ∈ S2 with u = u1 − u2, and so ru = ru1 − ru2. Since S1 is a subspace, ru1 ∈ S1, and likewise since S2 is a subspace, ru2 ∈ S2. Hence, ru ∈ S. We conclude that S is a subspace.
(b) False. The set of integers is not closed under scalar multiplication. For example, let r = 1/2, then r (1) = 1/2 is not an integer.
51. (a) False. If S ≠ {0}, then there exists v ∈ S with v ≠ 0. Since S is a subspace, rv ∈ S for all scalars r. Each rv is a distinct vector, for if r1v = r2v, then (r1 − r2)v = 0, and since v ≠ 0 we must have r1 − r2 = 0, and thus r1 = r2. Thus, S must contain infinitely many vectors.
(b) True. Every point on the line connecting u and v is of the form (1 − s)u + sv for some scalar s. Since S is a subspace, both (1 − s)u and sv belong to S, and hence (1 − s)u + sv belongs to S.
52. (a) False. For example, let S1 = {x ∈R : x < 0} and S2 = {x ∈R : x ≥ 0} Then S1 and S2 are not subspaces, but the union S1 ∪S2 = R is a subspace.
(b) False . Let S1 = {0, u} and S2 = {0, v} be subsets of Rn , where u and v are distinct non-zero vectors. Then S1 and S2 are not subspaces, but S1 ∩ S2 = {0} is a subspace.
53. Let S be a subspace of R, with S ≠ {0}. Then there exists x ∈ S with x ≠ 0. Let y ∈ R. Set r = y/x; then rx = (y/x)x = y, and therefore, since S is a subspace, y ∈ S. Since y was arbitrary, it follows that S = R.
54. Since 0 ∈ S, c0 = 0 ∈ cS. Let u and v be two vectors in cS. Then there exists s1 ∈ S such that u = cs1 and s2 ∈ S such that v = cs2. Thus u + v = cs1 + cs2 = c(s1 + s2). Now S is a subspace, so s1 + s2 ∈ S, and hence u + v ∈ cS. Next let u ∈ cS and r ∈ R. Then there exists s ∈ S such that u = cs, and hence ru = r(cs) = c(rs). Because S is a subspace, rs ∈ S, and hence ru ∈ cS. We conclude that cS is a subspace.
55. Since A0 = 0 ≠ b, 0 does not belong to the set of solutions to Ax = b, and therefore this set is not a subspace.
56. The subspaces of R2 consist of {0}, R2 , and all lines which contain the origin.
57. The subspaces of R3 consist of {0}, R3 , all lines which contain the origin, and all planes which contain the origin.
58. Provided there exists some vector v ∈S, then if condition (c) is satisfied, we may use r = 0 to conclude that rv = 0v = 0 ∈ S, so condition (a) is satisfied. So we can replace condition (a) by the condition that S is not empty
59. Since A0 = 0 ≠ y, 0 does not belong to the set of solutions to Ax = y, and therefore this set is not a subspace of Rm.
60. Since x ∈ null (A), we have Ax = 0.
61. Suppose ker (T ) = {0}. Then T (x) = Ax = 0 has the unique solution x = 0. If A = [ a1 · · · an ], then c1a1 + · · · + cnan = Ac = 0 implies c = 0, and thus every ci = 0. Hence the columns of A are linearly independent. Now suppose the columns of A = [ a1 · · · an ] are linearly independent. If Ac = c1a1 + · · · + cnan = 0, then every ci = 0. Thus, c = 0, which shows that T (x) = Ax = 0 has the unique solution x = 0. Thus, ker (T ) = {0}.
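The equivalence in Exercise 61 can be illustrated numerically. The following SymPy sketch, with two small matrices chosen here for illustration, checks that the columns are linearly independent exactly when Ax = 0 has only the trivial solution.

    # Illustration of Exercise 61 (matrices chosen here, not from the text).
    from sympy import Matrix

    A = Matrix([[1, 0],
                [0, 1],
                [1, 1]])         # linearly independent columns

    B = Matrix([[1, 2],
                [2, 4],
                [3, 6]])         # second column is 2 times the first

    for M in (A, B):
        independent = (M.rank() == M.cols)     # rank equals number of columns
        trivial_kernel = (M.nullspace() == []) # Ax = 0 has only x = 0
        print(independent, trivial_kernel)     # the two tests always agree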
62. Since T is a linear transformation, we have T (0) = 0, and thus 0 ∈ker (T ). (See Exercise 55 in Section 3.1.)
63. Since −v = (−1)v and S is closed under both addition and scalar multiplication, it follows that u + (−1)v is in S and hence u − v is in S.
64. Suppose T is one-to-one, and let T (x) = 0. Since T (0) = 0 also, we must have that x = 0 because T is one-to-one. Thus, ker (T ) = {0}. Now suppose ker (T ) = {0} and that T (x1) = T (x2). Since T is a linear transformation, T (x1 − x2) = T (x1) − T (x2) = 0. Thus x1 − x2 ∈ ker (T ), and since ker (T ) = {0}, we have x1 − x2 = 0 and hence x1 = x2. We conclude that T is one-to-one.
65. We solve the linear system
The general solution to this system is
where s is any real number. The set of solutions is given by
66. We solve the linear system
The general solution to this system is
where s is any real number. The set of solutions is given by
67. We solve the linear system
x1 x3 = 0 (Calcium atoms) 2
The general solution to this system is
where s is any real number. The set of solutions is
68. We solve the linear system
where s is any real number. The set of solutions is given by
4.2 Practice Problems
1. (a) Row-reduce the matrix with the given vectors as rows. The basis for S is given by the non-zero row vectors.
(b) Row-reduce the matrix with the given vectors as rows. The basis for S is given by the non-zero row vectors.
2. (a) Row-reduce the matrix with the given vectors as columns. The basis for S is given by columns 1 and 2 of the original matrix, corresponding to the pivot columns of the row-reduced matrix.
(b) Row-reduce the matrix with the given vectors as columns. The basis for S is given by columns 1, 2, and 3 of the original matrix, corresponding to the pivot columns of the row-reduced matrix.
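As an illustration of the pivot-column method used in these practice problems, the following SymPy sketch applies it to a hypothetical spanning set (the matrices from the problems are not reproduced here).

    # Sketch of the pivot-column method with a hypothetical spanning set.
    from sympy import Matrix

    vectors = [Matrix([1, 2, 3]), Matrix([2, 4, 6]), Matrix([1, 0, 1])]
    A = Matrix.hstack(*vectors)          # given vectors as columns

    rrefA, pivots = A.rref()
    print(pivots)                        # (0, 2): columns 1 and 3 are pivot columns

    basis = [vectors[i] for i in pivots] # original vectors in the pivot positions
    print(basis)

    print(A.columnspace())               # SymPy returns the same basis directly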
3. (a) The second vector is −1 times the first, so we eliminate the second vector and obtain the basis consisting of the first vector.
(b) The second vector is 2 times the first, and is eliminated as a dependent vector. Because the remaining vectors are not multiples of one another, they are linearly independent, and form the basis.
(b) We row-reduce
Thus, Ax = 0 has solutions of the form
Therefore, the null space has basis
the dimension is
6. (a) False. If S = {0} , then dim (S) = 0
(b) False. If the subspace is non-trivial, then it has a basis, and the dimension is the unique number of vectors in a basis. If S = {0} , then dim (S) = 0
(c) True. With a basis of two vectors, the span of a basis forms a plane.
(d) False. If A = [ 0 0 ], then n = 1 < 2 = m, but nullity(A) = 2 ≠ 1.
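The counterexample in part (d) is easy to confirm with a computer algebra system; the following SymPy sketch is one way to do so.

    # Quick check of part (d): for A = [0 0], the nullity is 2.
    from sympy import Matrix

    A = Matrix([[0, 0]])
    print(A.rank())              # 0
    print(len(A.nullspace()))    # 2
    print(A.cols - A.rank())     # 2, consistent with rank + nullity = m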
4.2 Basis and Dimension
1. Not a basis, since u1 and u2 are not linearly independent. Also, they do not span R2
2. A basis, since u1 and u2 are linearly independent and span R2
3. Not a basis, since three vectors in a two-dimensional space must be linearly dependent.
4. Not a basis, since three vectors in a two-dimensional space must be linearly dependent. Alternatively, since {u1, u3} is linearly dependent, {u1, u2, u3} must also be linearly dependent.
5. Row-reduce the matrix with the given vectors as rows. Thus a basis for S is given by the non-zero row vector.
6. Row-reduce the matrix with the given vectors as rows. Thus a basis for S is given by the non-zero row vectors.
7. Row-reduce the matrix with the given vectors as rows. Thus a basis for S is given by the non-zero row vectors.
8. Row-reduce the matrix with the given vectors as rows. A basis for S is given by the non-zero row vectors.
9. Row-reduce the matrix with the given vectors as rows. A basis for S is given by the non-zero row vectors.
10. Row-reduce the matrix with the given vectors as rows. A basis for S is given by the non-zero row vectors.
11. Row-reduce the matrix with the given vectors as columns,
A basis for S is given by columns 1 and 2 of the original matrix corresponding to the pivot columns of the row-reduced matrix. Hence a basis for S is
12. Row-reduce the matrix with the given vectors as columns. A basis for S is given by column 1 of the original matrix corresponding to the pivot column of the row-reduced matrix. Hence a basis for S is {(2, 6)}.
13. Row-reduce the matrix with the given vectors as columns, [
A basis for S is given by columns 1, 2, and 3 of the original matrix corresponding to the pivot columns of the row-reduced matrix. Hence a basis for S is
14. Row-reduce the matrix with the given vectors as columns,
A basis for S is given by columns 1 and 2 of the original matrix corresponding to the pivot columns of the row-reduced matrix. Hence a basis for S is
15. Row-reduce the matrix with the given vectors as columns,
A basis for S is given by columns 1 and 2 of the original matrix corresponding to the pivot columns of
the row-reduced matrix. Hence a basis for S is
16. Row-reduce the matrix with the given vectors as columns,
A basis for S is given by columns 1, 2, and 3 of the original matrix corresponding to the pivot columns
of the row-reduced matrix. Hence a basis for S is ,
17. Since the second vector is 3 times the first, we eliminate the second vector, and obtain the basis {(2, 6)}. The dimension is 1.
18. Since the vectors are not multiples of each other, they are linearly independent, and a basis is {(12, 3), (18, 6)}. The dimension is 2.
19. The third vector is 3 times the first, and is eliminated as a dependent vector. Likewise, the second vector is 2 times the first and is eliminated, leaving the basis {(1, 1, 1)}. The dimension is 1.
20. The second vector is 5 times the first, and is eliminated as a dependent vector. Since the remaining vectors are not multiples of one another, they are linearly independent, and a basis is {(1, 1), (4, 3)}. The dimension is 2.
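The eliminations in Exercises 17-20 can be checked with a computer algebra system. The following SymPy sketch uses a pair of vectors matching the description in Exercise 17 (the second vector is 3 times the first); the same pattern applies to the other exercises.

    # Dimension of a span and a basis via the rank and pivot columns.
    from sympy import Matrix

    vectors = [Matrix([2, 6]), Matrix([6, 18])]   # second vector is 3 times the first
    A = Matrix.hstack(*vectors)

    print(A.rank())           # 1: the span is one-dimensional
    print(A.columnspace())    # [Matrix([[2], [6]])]: the first vector alone is a basis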
21. The first vector is eliminated, as the zero vector is always linearly dependent. The other vectors are linearly independent, since the corresponding matrix row-reduces to an echelon form with 3 pivots. Hence a basis is given by these three vectors.
22. The fourth vector is the sum of the first two columns, and hence is eliminated as linearly dependent. The remaining vectors can be shown to be linearly dependent as well, so one more vector is eliminated; the two vectors that remain are linearly independent, and thus form our basis.
The extended basis is given by columns 1, 2, and 3 of the original matrix corresponding to the pivot columns of the row-reduced matrix.
The extended basis is given by columns 1, 2, and 4 of the original matrix corresponding to the pivot columns of the row-reduced matrix. Hence our basis is
Thus Ax = 0 has only the trivial solution x = 0, and thus null (A) = {0}. This subspace has no basis, and nullity(A) = 0.
35. For example, take the span of the first m vectors of the n standard basis vectors of Rn
False. For example, let s1 = (1, 0) and s2 = (0, 1), then {s1, s2} is a basis for S = R2 , but the vector s1 + s2 = (1, 1) is not a vector in the basis {s1, s2}
True. Because S1 is a proper subspace of S2 and dim (S2) = 1, we must have dim (S1) = 0. The only subspace with dimension 0 is {0}.
44. (a) False. For example, S1 = span{(1, 0)} and S2 = span{(0, 1)}. Then each subspace has dimension 1, but S1 ≠ S2.
45. (a) False. For example, if U already spans S = R2, then adding additional vectors will not yield a basis.
(b) False. For example, if U = {(1, 0), (0, 1)}, then U is linearly independent and is already a basis for S = R2, so adding additional vectors will not yield a basis.
46. (a) False. If the vectors in U are linearly independent, and thus a basis for S, then after the removal of a vector, U will no longer be a basis.
(b) False. For example, if S is the span of a single nonzero vector, then removing the only vector will not leave a basis for S.
47. (a) False. If the three vectors lie in the same plane, then they must be linearly dependent, and cannot form a basis.
(b) True. If S1 ⊂ S2 and dim (S2) = 3, then S1 = S2. If dim (S2) = 4, then S2 = R4. Hence if S1 ⊂ S2, then either S1 = S2 or S2 = R4.
48. (a) False. The set {0} is linearly dependent, and thus cannot be a basis. The subspace {0} does not have a basis.
(b) False. For instance, S1 = span{(1, 0)} and S2 = span{(0, 1)} are different subspaces of dimension 1 of R2. (However, if n = 1, then the unique subspaces of dimension 0 and 1 are {0} and R.)
49. (a) False. For example, if U = {(1, 0), (2, 0), (3, 0)}, then removing vectors from U will not form a basis for R2.
(b) False. For example, U = {(1, 0, 0), (0, 0, 0)} cannot be extended to a basis for R3, since any set containing the zero vector is linearly dependent.
50. (a) True. The vectors {u1, u2} are linearly independent, and hence span a two-dimensional subspace, a plane.
(b) False. The nullity is the dimension of the null space of A, which is equal to the number of free variables in the row-reduced form of A; whereas the column space will have dimension equal to the number of pivot variables in the row-reduced form of A
51. (a) The dimension of S1 cannot exceed the dimension of S2, since S1 is contained in S2. S1 is non-zero, and thus its dimension can't be 0. Hence the possible dimensions of S1 are 1, 2, and 3.
(b) If S1 ≠ S2, then S1 is properly contained in S2, and the dimension of S1 is strictly less than the dimension of S2. Thus the possible dimensions of S1 are 1 and 2.
52. (a) The dimension of S1 cannot exceed the dimension of S2, since S1 is contained in S2. S1 is non-zero, and thus its dimension can't be 0. Hence the possible dimensions of S1 are 1, 2, 3, and 4.
(b) If S1 ≠ S2, then S1 is properly contained in S2, and the dimension of S1 is strictly less than the dimension of S2. So the possible dimensions of S1 are 1, 2, and 3.
53. Let S be a subspace of Rn of dimension n. Then S has a basis {u1, u2, . . . , un}. If S ≠ Rn, then there exists a vector v ∉ S such that {u1, u2, . . . , un, v} is linearly independent, for otherwise one could express v in terms of the vectors ui, and then v would belong to S. But now the subspace span{u1, u2, . . . , un, v} is an (n + 1)-dimensional subspace of Rn, which is not possible. Hence we must have that S = Rn.
54. For example, span{(cos θ, sin θ)} is a one-dimensional subspace for each 0 ≤ θ < π, and each of these subspaces is distinct.
55. The vectors {s1, s2, . . . , sm} span S, since every vector s in S can be written in terms of these vectors. If we consider c1s1 + c2s2 + · · · + cmsm = 0, then since also 0s1 + 0s2 + · · · + 0sm = 0, we now have the vector 0 ∈ S expressed in terms of the si in two ways. Since each vector in S is uniquely written in terms of the si, we must have that each ci = 0. Consequently, {s1, s2, . . . , sm} is linearly independent, and therefore a basis for S.
56. Suppose {u1, . . . , um} is not a basis for S. Then by Theorem 4.14(b) we can remove vectors to obtain a collection of vectors which forms a basis for S. But this basis for S would contain fewer than m vectors, contradicting the given dimension m of S. Thus we must have that {u1, . . . , um} is a basis for S.
57. Let {u1, . . . , um} be a basis for S1. By Theorem 4.14(a), either {u1, . . . , um} is a basis for S2 or we can add vectors to form a basis of S2. If {u1, . . . , um} is a basis for S2, then m = dim (S1) = dim (S2). If we add k vectors, then dim (S1) = m < m + k = dim (S2). Hence dim (S1) ≤ dim (S2). Now if the dimensions are equal, then the basis {u1, . . . , um} for S1 is also a basis for S2. Thus S1 = span{u1, . . . , um} = S2.
58. (a) Suppose U spans S. By Theorem 4.14(b), either U is a basis for S, or we can remove vectors to obtain a collection of vectors which forms a basis for S. Either way, we would have a basis with no more than m vectors, which implies dim (S) ≤ m. But this contradicts dim (S) = k > m, hence U does not span S.
(b) Suppose U is linearly independent. By Theorem 4.14(a), either {u1, . . . , um} is a basis for S or we can add vectors to form a basis of S. Either way, we would have a basis with at least m vectors, which implies dim (S) ≥ m. But this contradicts dim (S) = k < m, hence U is not linearly independent.
59. Suppose the pivots occur in columns c1 < c2 < · · · < ck, let u1, . . . , uk be the nonzero row vectors, and consider the equation a1u1 + · · · + akuk = 0. By considering the c1 component, we have a1 = 0, since each of the other rows has entry 0 in column c1. Hence we have a2u2 + · · · + akuk = 0. Now consider the c2 component, and conclude that a2 = 0 as before. Continue in this way to conclude that each ai = 0 for 1 ≤ i ≤ k. Thus the nonzero rows are linearly independent.
60. Since {u1, u2, u3} spans R3, ker (A) = {0} by the Unifying Theorem. Hence, nullity(A) = 0.
61. The maximum value of m1 + m2 is n. To see why, suppose that m1 + m2 > n. Let {u1, . . . , um1} and {v1, . . . , vm2} be bases for S1 and S2, respectively. Since m1 + m2 > n, the combined set {u1, . . . , um1, v1, . . . , vm2} must be linearly dependent by Theorem 4.17b. Therefore there exists a nontrivial linear combination a1u1 + · · · + am1um1 + b1v1 + · · · + bm2vm2 = 0. (∗∗) Since a1u1 + · · · + am1um1 = −(b1v1 + · · · + bm2vm2) belongs to S1 ∩ S2 = {0}, it follows that a1u1 + · · · + am1um1 = 0 and b1v1 + · · · + bm2vm2 = 0. Since {u1, . . . , um1} and {v1, . . . , vm2} are bases, these equations imply that a1 = · · · = am1 = 0 and b1 = · · · = bm2 = 0. But this contradicts (∗∗) being a nontrivial linear combination. Hence it must be that m1 + m2 ≤ n, as claimed.
62. Since B and A are equivalent, B results from a finite number of elementary row operations applied to A. Hence we can just prove that the span of the rows is the same when B is obtained from A by a single elementary row operation. Let {u1, . . . , um} be the rows of A, and suppose we multiply row j by cj ≠ 0. Then the rows of B are {u1, . . . , cjuj, . . . , um}. Since the factor cj does not change the set of all linear combinations of the vectors ui, we conclude that the rows of A and the rows of B span the same subspace. Now consider the row operation that interchanges two rows of A. This simply re-orders the row vectors in B, which doesn't change the span of the rows of A. Finally, we consider the row operation which replaces row k of A by cjuj + uk. Then the rows of B are {u1, . . . , cjuj + uk, . . . , um}. If v ∈ span{u1, . . . , cjuj + uk, . . . , um}, then v can be rewritten as a linear combination of u1, . . . , um, so the span of the rows of B is contained within the span of the rows of A. Now let v ∈ span{u1, . . . , um}; then v can be rewritten as a linear combination of u1, . . . , cjuj + uk, . . . , um, so the span of the rows of A is contained within the span of the rows of B. Therefore we conclude that the span of the rows of A is the same as the span of the rows of B.
63. A linear dependence in the vectors {u1, . . . , un} is an expression of the form a1u1 + · · · + anun = 0. This implies Ua = 0, where a is the vector with components ai. Since U and V are equivalent, we know that V = EU where E is an invertible matrix. Thus Va = (EU)a = E(Ua) = E0 = 0. But this implies the same linear dependence in the vectors {v1, . . . , vn}. Similarly, an initial linear dependence in {v1, . . . , vn} implies the same linear dependence in {u1, . . . , un}, by writing U = E⁻¹V.
64. Suppose U = {u1, . . . , uj} and V = {v1, . . . , vk} are both bases for the same subspace, and suppose j < k. We can write, for 1 ≤ i ≤ k, vi = c1iu1 + · · · + cjiuj. Consider now a1v1 + a2v2 + · · · + akvk = 0 and substitute to obtain a homogeneous linear system in which the coefficient of each um must equal zero. Considering this as a system of equations in the unknowns ai, we have j equations with k unknowns. If j < k, we know that this system has infinitely many solutions, and therefore there exists a nontrivial solution to a1v1 + a2v2 + · · · + akvk = 0. Therefore V is linearly dependent, a contradiction. Reversing the roles of U and V rules out k < j in the same way. We conclude that j = k, and that in general any two bases for a subspace have the same number of vectors.
65. Using a computer algebra system, we determine that the span of the vectors has dimension 2. The vectors are not a basis for R3.
66. Using a computer algebra system, we determine that the span of the vectors has dimension 3. The vectors are a basis for R3.
67. Using a computer algebra system, we determine that the span of the vectors has dimension 4. The vectors thus span R4.
68. The computation is carried out with a computer algebra system in the same way; a sketch of this kind of computation follows.
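The vectors for Exercises 65-68 are not reproduced above, so the following SymPy sketch uses hypothetical vectors in R3 to show the kind of computer algebra computation these exercises call for: the dimension of the span is the rank of the matrix whose columns are the given vectors.

    # Hypothetical vectors in R3; the third is the sum of the first two.
    from sympy import Matrix

    vectors = [Matrix([1, 0, 2]), Matrix([0, 1, 1]), Matrix([1, 1, 3])]
    A = Matrix.hstack(*vectors)

    dim = A.rank()
    print(dim)               # 2: the span is two-dimensional
    print(dim == A.rows)     # False: these vectors are not a basis for R3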
4.3 Practice Problems
1. (a) We reduce A to echelon form:
[1 4 6; 2 8 12] ∼ [1 4 6; 0 0 0].
A basis for the column space, which is determined from the pivot column 1, is {(1, 2)}. A basis for the row space is determined from the nonzero rows of the echelon form, {[1 4 6]}. We solve Ax = 0 to obtain the nullspace basis {(−4, 1, 0), (−6, 0, 1)}. Then rank (A) = 1, nullity (A) = 2, and rank (A) + nullity (A) = 1 + 2 = 3.
(b) We reduce A to echelon form. A basis for the column space is determined from the pivot columns, and a basis for the row space is determined from the nonzero rows of the echelon form. We solve Ax = 0 to obtain a one-parameter family of solutions x = sv, and so our nullspace basis is {v}. We have rank (A) = 3, nullity (A) = 1, and rank (A) + nullity (A) = 3 + 1 = 4.
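The computations in part (a) can be verified with a computer algebra system; the following SymPy sketch uses the matrix from part (a).

    # Verification of Practice Problem 1(a).
    from sympy import Matrix

    A = Matrix([[1, 4, 6],
                [2, 8, 12]])

    print(A.rref())          # (Matrix([[1, 4, 6], [0, 0, 0]]), (0,))
    print(A.columnspace())   # [Matrix([[1], [2]])]
    print(A.rowspace())      # [Matrix([[1, 4, 6]])]
    print(A.nullspace())     # [Matrix([[-4], [1], [0]]), Matrix([[-6], [0], [1]])]
    print(A.rank() + len(A.nullspace()))   # 1 + 2 = 3 = number of columns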
2. (a) The dimension of the row space is 7 − 2 = 5, the number of nonzero rows in the echelon form.
(b) The dimension of the column space is also 5.
(c) The dimension of the null space is 12 − 5 = 7, so nullity (A) = 7.
(d) The dimension of the row space is 5, so rank (A) = 5.
3. nullity (A) = m − rank (A) = 8 − 5 = 3.
4. (a) False. A = [ 1 0 ] has rank (A) = 1 = nullity (A).
(b) False. A = [0] has rank (A) = 0.
(c) False. A = [1; 0] has nullity (A) = 0.
(d) True. A has, at most, n pivots, so rank (A) ≤ n.
4.3 Row and Column Spaces
A basis for the row space is determined from the nonzero rows of the echelon form, and we solve Ax = 0 to obtain our nullspace basis.
Thus, for rank (A) = 2, we need two pivots, and hence x = 6.
13. The dimension of the column space is 5, the same as the dimension of the row space.
14. The dimension of the row space is 5, the same as the dimension of the column space.
15. The dimension of the row space is 4 − 1 = 3, the number of nonzero rows in the echelon form. The dimension of the column space is also 3, and the dimension of the null space is 7 − 3 = 4.
16. The dimension of the row space is 6 − 2 = 4, the number of nonzero rows in the echelon form. The dimension of the column space is also 4, and the dimension of the null space is 11 − 4 = 7.
17. rank (A) = m − nullity (A) = 5 − 3 = 2.
18. rank (A) = m − nullity (A) = 13 − 10 = 3.
19. nullity (A) = m − rank (A) = 11 − 4 = 7.
20. nullity (A) = m − rank (A) = 9 − 7 = 2.
21. dim (range (T )) = rank (A) = m − nullity (A) = 11 − 7 = 4.
22. dim (ker (T )) = nullity (A) = m − rank (A) = 12 − 8 = 4.
23. Since T is one-to-one, ker (T ) = {0}, and dim (ker (T )) = 0. Hence nullity (A) = dim (null (A)) = dim (ker (T )) = 0.
24. Since T is onto, dim (range (T )) = 5. Thus rank (A) = dim (col(A)) = dim (range (T )) = 5, and nullity (A) = m − rank (A) = 13 − 5 = 8.
25. The maximum possible value for the rank of A is 5, since the echelon form can have at most 5 pivots. The minimum possible value of the nullity of A is 8, since nullity (A) = m − rank(A) = 13 − rank(A) ≥ 13 − 5 = 8.
26. The minimum possible value for the rank of A is 0, since the echelon form may have 0 pivots. The maximum possible value of the nullity of A is 7, since nullity (A) = m − rank (A) = 7 − rank (A) ≤ 7 − 0 = 7.
27. rank (A) = 3, the number of nonzero rows of B
28. rank (A) = 2, the number of pivot columns of B
29. nullity (A) = m rank (A) = 5 3 = 2, since the rank of A is the number of nonzero rows of B
30. nullity (A) = m rank (A) = 5 1 = 4, since the rank of A is the number of pivot columns of B
31. B has 3 nonzero rows, since the rank of A is equal to the number of nonzero rows of B
32. B has 1 pivot column, since the rank of A is equal to the number of pivot columns of B
33. A must be 7 × 5, since col(A) is a subspace of Rn , and row (A) is a subspace of Rm
34. m = rank (A) + nullity (A) = 4 + 3 = 7. Since col(A) is a subspace of R5, it must be that A is 5 × 7.
35. For example, A = [1 0 0; 0 1 0].
36. For example, A =
37. For example, A =
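The specific matrices requested in Exercises 36 and 37 are not reproduced above. The following SymPy sketch shows how a candidate example can be checked; the matrix used here is the one given for Exercise 35, which has rank 2 and nullity 1.

    # Checking the rank and nullity of a candidate example matrix.
    from sympy import Matrix

    A = Matrix([[1, 0, 0],
                [0, 1, 0]])

    print(A.rank())              # 2
    print(len(A.nullspace()))    # 1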
48. (a) True, since the rank is the number of pivots, which can not exceed the number of rows.
(b) True, by Theorem 4.10, and the definition of row space. In particular, if B is an echelon form of A, then this is Theorem 4.20(a).
49. (a) False. For example, see A and B in Example 1.
(b) False, b is in col(A), not row (A).
50. (a) False. Solutions of Ax = b are not related to row (A).
(b) False. If nullity (A) = 5, then rank (A) = 13 − 5 = 8. But rank (A) ≤ 4, since A has 4 rows.
51. (a) False, since dim (range (T )) = rank (A) ≤ 5, T cannot map onto R9.
(b) True. For example, if A = [I5×5; 04×5], then T is a one-to-one mapping.
52. (a) True. For example, if A = [ I4×4 04×9 ], then T maps onto R4.
(b) False, since dim (ker (T )) = nullity (A) = m − rank(A) = 13 − rank(A) ≥ 13 − 4 = 9, T cannot be one-to-one, as the set of solutions to T(x) = 0, i.e. ker (T ), is at least a nine-dimensional subspace.
53. The span of the rows of A is the same subspace as the span of the columns of AT , since these subspaces are determined by the same vectors. Hence row (A) = col(AT ) , and thus rank (A) = dim (row (A)) = dim (col(AT )) = rank (AT )
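Exercise 53 can be illustrated numerically. The following SymPy sketch, with a matrix chosen here for illustration, compares rank (A) with rank (AT) and exhibits bases for row (A) and col(AT).

    # Illustration of rank(A) = rank(A^T) on an example matrix.
    from sympy import Matrix

    A = Matrix([[1, 2, 3],
                [2, 4, 6],
                [0, 1, 1]])

    print(A.rank(), A.T.rank())   # 2 2: the ranks agree
    print(A.rowspace())           # a basis for row(A)
    print(A.T.columnspace())      # a basis for col(A^T), the same subspace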
54. The matrix cA is equivalent to A if c ≠ 0, as it results from multiplying each row of A by c. Since equivalent matrices have the same row space (Exercise 48b), they also have the same rank, and thus rank (A) = rank (cA).
55. If rank (A) < m, then nullity (A) = m − rank (A) > m − m = 0. Thus dim (null (A)) > 0, and therefore there exist nontrivial solutions to Ax = 0.
56. The number of zero rows in the reduced row echelon form of A plus the rank of A is equal to the number of rows. Thus, the number of zero rows in the reduced row echelon form of A is n − rank (A) > n − n = 0. Hence the reduced row echelon form of A has a row of zeroes.
57. If m > n, then rank(A) ≤ n < m, so it must be that nullity(A) > 0. If m < n, then the same reasoning applies to AT , which is m × n.
58. (a) Let A = [ a1 a2 · · · am ]; then Ax = b if and only if x1a1 + x2a2 + · · · + xmam = b. This is equivalent to b ∈ col(A). Thus Ax = b is consistent if and only if b ∈ col(A).
(b) Ax = b has a solution if and only if b ∈ col(A), by part (a). If the columns of A are linearly independent, then Ax = 0 has only the trivial solution. So if Ax = b and Ay = b, then A(x − y) = Ax − Ay = b − b = 0, so x − y = 0, and x = y. Thus Ax = b has a unique solution. Conversely, if Ax = b has a unique solution x, and Ay = 0, then A(x + y) = Ax + Ay = b + 0 = b, so we must have x + y = x, and thus y = 0. Since Ay = 0 has only the trivial solution, the columns of A are linearly independent. We now conclude that Ax = b has a unique solution if and only if b is in the column space of A and the columns of A are linearly independent.
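The two criteria in Exercise 58 can be tested directly with a computer algebra system. The following SymPy sketch uses a small system chosen here for illustration: consistency is checked by comparing rank ([ A | b ]) with rank (A), and uniqueness by checking that the columns of A are linearly independent.

    # Consistency and uniqueness tests for Ax = b (example system chosen here).
    from sympy import Matrix

    A = Matrix([[1, 1],
                [0, 1],
                [1, 2]])
    b = Matrix([3, 1, 4])        # b = 2*(col 1) + 1*(col 2), so b is in col(A)

    consistent = (A.row_join(b).rank() == A.rank())
    unique = consistent and (A.rank() == A.cols)
    print(consistent, unique)    # True True

    solution, params = A.gauss_jordan_solve(b)
    print(solution)              # Matrix([[2], [1]]): the unique solution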
59. Using a computer algebra system, we determine
60. Using a computer algebra system, we determine
61. Using a computer algebra system, we determine
62. Using a computer algebra system, we determine
4.4 Practice Problems
4. (a) True. Because A = W⁻¹V, where V has columns given by the basis vectors of B1 and W has columns given by the basis vectors of B2, it follows that A is invertible, with A⁻¹ = V⁻¹W.
(b) True. Let A = W⁻¹V, where V has columns given by the basis vectors of B1 and W has columns given by the basis vectors of B2. Then A⁻¹ = V⁻¹W, which is the change of basis matrix from B2 to B1.
(c) True. If A = W⁻¹V, where V has columns given by the basis vectors of B1 and W has columns given by the basis vectors of B2, it follows that W = V, so A = W⁻¹V = V⁻¹V = I.
(d) True, by Theorem 4.28.
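The relationships among A, V, and W described in this practice problem can be illustrated with a computer algebra system. The following SymPy sketch uses two bases of R2 chosen here for illustration; V and W hold the basis vectors of B1 and B2 as columns, A = W⁻¹V is the change of basis matrix from B1 to B2, and A⁻¹ = V⁻¹W changes coordinates back.

    # Change of basis illustration with two example bases of R2.
    from sympy import Matrix

    V = Matrix([[1, 1],
                [0, 1]])          # columns: basis B1
    W = Matrix([[2, 0],
                [0, 1]])          # columns: basis B2

    A = W.inv() * V               # change of basis matrix from B1 to B2
    print(A)

    # A vector with B1-coordinates x has B2-coordinates A*x: both describe V*x.
    x = Matrix([3, 4])
    print(V * x == W * (A * x))   # True

    print(A.inv() == V.inv() * W) # True: the inverse changes basis from B2 to B1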