Let V be a
vector space.
A subset W of V is called a subspace
of V if W is closed under addition and scalar multiplication, that
is, if for all vectors A and B in W the sum A+B belongs to W, and for every
vector A in W and every scalar k the product kA belongs to W.
Positive Examples.
1. The whole space Rn is a subspace of itself. And the set consisting of
one vector, 0, is a subspace of any space.
2. In R2, consider the set W of all vectors
which are parallel to a given line L. It is clear that the sum of two
vectors which are parallel to L is itself parallel to L, and a scalar
multiple of a vector which is parallel to L is itself parallel to L.
Thus W is a subspace.
3. A similar argument shows that in R3, the
set W of all vectors which are parallel to a given plane (or to a given line)
is a subspace.
4. The set of all polynomials is a subspace of the space of continuous functions on [0,1], C[0,1]. The set of all polynomials whose degrees do not exceed a given number is a subspace of the vector space of polynomials, and a subspace of C[0,1].
5. The set of differentiable functions is also a subspace of C[0,1].
Negative Examples.
1. In R2, the set of all vectors which are
parallel to one of two fixed non-parallel lines
is not a subspace. Indeed, if we take a
non-zero vector parallel to one of the lines and add a non-zero vector parallel to the other line, we get a vector which is parallel to neither of these
lines.
2. The set of polynomials of degree 2 is not a subspace of C[0,1]. Indeed,
the sum of x2+x and -x2 is a polynomial of degree 1.
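The failure of closure in this negative example is easy to check by machine as well. The sketch below uses Python with sympy, which is an illustration added here and not a tool used in these notes.

```python
# Negative example 2, checked with sympy: the sum of two polynomials of
# degree 2 has degree 1, so the set of degree-2 polynomials is not closed
# under addition and is not a subspace of C[0,1].
import sympy as sp

x = sp.symbols('x')
p = x**2 + x
q = -x**2
s = sp.expand(p + q)          # s = x
print(s, sp.degree(s, x))     # x 1  -- the degree dropped from 2 to 1
```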
Theorem. Every subspace W of a vector space V is itself a vector space with the same operations as V. Every subspace of a Euclidean vector space is itself a Euclidean vector space.
Indeed, all axioms of a vector space hold for any subspace of the vector space, and the dot product of a Euclidean vector space, restricted to any subspace, is still a dot product.
Sources of subspaces: kernels and ranges of linear transformations
Let T be a linear transformation from a vector space V to a vector space W.
Then the kernel of T is the set of all vectors A
in V such that T(A)=0, that is,
ker(T) = { A in V : T(A) = 0 }.
The range of T is the set of all vectors in W which are
images of some vectors in V, that is,
range(T) = { B in W : B = T(A) for some A in V }.
Notice that the kernel of a transformation from V to W
is a subset of V and the range is a subset of W. For example if T is a
transformation from the space of functions to the space of real numbers then
the kernel must consist of functions and the range must consist of numbers.
Examples.
1. Let P be the projection of R2 onto a line L in R2. Then the kernel of P
is the set of all vectors in R2 which are perpendicular to L and the
range of P is the set of all vectors parallel to L.
Indeed, the vectors which are perpendicular to L and only these vectors
are annihilated by the projection. This proves the statement about the kernel.
The projection on L of every vector is parallel to L (by the definition of
projection) and conversely, every vector which is parallel to L is the
projection of some vector from R2, for example, it is the projection of
itself. This proves the statement about the range.
2. Let R be the rotation of R2 through the angle Pi/3 counterclockwise.
Then the kernel of R is
0 (no non-zero vectors
are annihilated by the rotation). The range of the rotation
is the whole R2. Indeed, for every vector W in R2 let V be the vector
obtained by rotating W through the angle Pi/3 clockwise. Then W is the result of
rotating V through Pi/3 counterclockwise, thus W is in the range of our transformation. Since W was an arbitrary vector in R2, the range of the rotation
is the whole R2.
3. Let R be the reflection of R2 about a line L. Then the kernel of R
is 0. Indeed, no non-zero vectors are annihilated by the reflection.
The range is the whole R2 (prove it!).
4. Let Z be the zero transformation from V to W
which takes every vector in V to the zero of W. Then the kernel of Z is V
and the range is {0} (prove it!).
5. Let T be the linear operator
on the vector space P of polynomials which takes every polynomial to its derivative. Then the
range of T is the whole P (every polynomial is the derivative of some polynomial) and the kernel of T is the set of all constant polynomials (prove it!).
6. Let T be the linear operator on the space P of polynomials
which takes every
polynomial g(x) to the polynomial
∫ g(t) dt, taken from t=0 to t=x (the integral of g from 0 to x).
Then the range of T is the set of all polynomials h(x) such that h(0)=0
(every such polynomial is the image under T of its derivative) and
the kernel of T is 0 (the only polynomial whose integral from 0 to x is identically zero is the zero polynomial).
7. Let T be the linear transformation from the space of all n by n matrices
M to R which takes every matrix to its trace. Then the range of T is the whole
R (every number is the trace of some matrix) and the kernel consists of all
n by n matrices with zero trace.
Theorem. Let T
be a linear transformation from V to W.
Then ker(T) is a subspace of V and range(T) is a subspace of W.
Proof. We need to prove that ker(T) is closed
under addition and scalar multiplication. Let A and B be elements from ker(T).
Then T(A)=0 and T(B)=0 (the definition of a kernel).
Hence T(A)+T(B)=0+0=0. But T is a
linear transformation,
so T(A)+T(B)=T(A+B). Hence T(A+B)=0. By the definition of a
kernel this implies that A+B is in ker(T). Thus ker(T) is closed under
taking sums.
Now let A be an element of ker(T) and let k be a number. Since A is in
ker(T), T(A)=0. Then kT(A)=k0=0 (prove the last equality!). On the other
hand, since
T is a linear transformation, kT(A)=T(kA). So T(kA)=0, whence kA is in ker(T).
This shows that ker(T) is closed under scalar multiplication.
The proof that range(T) is a subspace is left as an exercise.
We shall show later that every subspace of a vector space is a kernel of
some linear transformation and the range of some other linear transformation.
Example.
Let E be the subspace of R3 consisting of vectors
which are parallel to a plane P. Then E is the kernel of the projection of R3
on the line perpendicular to the plane and E is the range of the projection on
the plane P.
The theorem about kernels and ranges
implies the following important property of
homogeneous systems of linear equations.
Theorem. Let Av=0 be a homogeneous system of linear equations with n unknowns and m equations. Then the set of
solutions of this system coincides with the kernel of the linear
transformation TA from Rn to Rm with standard matrix A and so it
is a subspace of Rn.
Indeed the set of solutions of the system Av=0 is precisely the set of
vectors annihilated by the linear transformation TA.
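A quick numerical illustration of this theorem, sketched in Python with numpy (an assumption of this sketch, not a tool used in the text): sums and scalar multiples of solutions of Av=0 are again solutions.

```python
# The solution set of A v = 0 is closed under addition and scalar
# multiplication, i.e. it is a subspace (the kernel of T_A).
import numpy as np

A = np.array([[1., 2., 3.],
              [2., 0., 1.]])              # any coefficient matrix works here
v1 = np.array([-0.5, -1.25, 1.0])         # one solution of A v = 0
v2 = 3.0 * v1                             # another solution
assert np.allclose(A @ v1, 0) and np.allclose(A @ v2, 0)
assert np.allclose(A @ (v1 + v2), 0)      # the sum is again a solution
assert np.allclose(A @ (7.0 * v1), 0)     # a scalar multiple is again a solution
print("closure checks passed")
```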
Sources of subspaces: subspaces spanned by vectors
Let V be a vector space and let S be a subset of V.
A linear combination of distinct
elements s1,...,sn of S with
coefficients x1,...,xn is the vector x1s1 + x2s2 + ... + xnsn.
Theorem. Let S be a subset of a vector space V. Then:
1. The set W of all linear combinations of elements of S is a subspace of V.
2. W is the smallest subspace of V containing S, that is, every subspace of V which contains S also contains W.
Proof. 1. Let us use the definition of subspaces. We need to prove that the set W of
all linear combinations of elements from S is closed under sums and scalar
multiples. Let w1=x1s1+...+xnsn and w2=y1s'1+...+yms'm be two arbitrary
linear combinations of elements from S. Then the sum w1+w2 is equal to
x1s1+...+xnsn+y1s'1+...+yms'm which is again a linear combination
of elements from S. Thus the sum of any two elements of W is an element of W and W is closed under taking sums.
Now take an arbitrary linear combination w=x1s1+...+xnsn of elements in S and an arbitrary number k. Then the product kw is equal to kx1s1+...+kxnsn (we used the distributivity of scalar multiplication with respect to addition in vector spaces), which is again a linear combination of elements of S. Thus W is closed under scalar multiplication.
2. Let U be any subspace of the vector space V which contains S. By the
definition of
subspaces U is closed under taking sums and scalar multiples. Therefore
U contains all linear combinations of elements of S (which are sums of scalar
multiples of elements of S), so U contains W. Thus W is the smallest subspace
of V containing S. The theorem is proved.
Let W be a subspace of V. We say that W is
spanned by a set of vectors S={A1,A2,...} if a) S is
contained in W
and b) every vector in W is a linear combination of vectors from S.
In this case we write W=span(S) or W=span{A1,A2,...}.
Examples.
1. Let V=R3. Let S consist of one non-zero
vector A.
Then span{A} consists of all vectors of the form xA. In other words,
span{A} consists of all vectors which are
proportional to A, that is, span{A} is the set of vectors parallel to A.
2. Let V=R3, S={(1,0,0), (0,1,0)}. Then span(S) consists of all vectors
of the form x(1,0,0)+y(0,1,0)=(x,y,0), that is span(S) consists of all vectors
parallel to the (x,y)-plane. More generally, if we take any two non-parallel
vectors A and B in R3 then span{A,B} is the subspace of all vectors which are
parallel to the plane containing A and B.
3. Let V=R3, S={(1,0,0), (0,1,0), (0,0,1)}. Then span(S) coincides with the
whole R3. More generally if V=Rn then V is spanned by the set of basic
vectors {(1,0,...,0), (0,1,...,0), ...,(0,0,...,1)}.
4. Let T be the linear transformation from R3 to R2 with standard matrix
[ 1  2  3 ]
[ 2  0  1 ]
Let us find a set of vectors spanning the kernel of T. The kernel is the set of solutions of the homogeneous system Av=0, which has the following augmented matrix:
[ 1  2  3 | 0 ]
[ 2  0  1 | 0 ]
Applying the Gauss-Jordan elimination procedure, we get:
[ 1  0  1/2 | 0 ]
[ 0  1  5/4 | 0 ]
Thus the general solution is the following:
x = -(1/2) t
y = -(5/4) t
z = t
Or in vector notation: (x, y, z) = t (-1/2, -5/4, 1).
Thus the kernel of T is spanned by the vector (-1/2,-5/4,1). Likewise
the general solution of any homogeneous system of linear equations with matrix of coefficients A gives
us a system of vectors which span the kernel of the corresponding linear
transformation. This subspace is usually called the null
space of matrix A.
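For readers who want to verify this computation, here is a short sketch using Python with sympy (not part of the original notes); it reproduces the spanning vector (-1/2,-5/4,1) found above.

```python
# Null space of the matrix from example 4.
import sympy as sp

A = sp.Matrix([[1, 2, 3],
               [2, 0, 1]])
print(A.nullspace())     # [Matrix([[-1/2], [-5/4], [1]])]
```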
5. Let T be a linear transformation from Rn to Rm with standard matrix
A. Then the range of T consists of all vectors b=(b1,...,bm)
in Rm which are expressible
in the form b = Av, where v=(x1,...,xn) is in Rn, or, in vector form,
b = x1 c1 + x2 c2 + ... + xn cn,
where c1, ..., cn are the column vectors of the matrix A. Thus a vector b in Rm
belongs to range(T) if and only if it is a linear combination of the column
vectors of A. Therefore range(T) is spanned by these column vectors. The
subspace range(T) is usually called the column space
of matrix A.
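The membership test behind this example can be phrased computationally: b belongs to the column space of A exactly when adjoining b to A does not increase the rank. A minimal sketch in Python with numpy (an added illustration, not the text's method):

```python
# b is in the column space of A  <=>  rank([A | b]) = rank(A),
# i.e. the system A x = b is consistent.
import numpy as np

A = np.array([[1., 2.],
              [2., 4.],
              [3., 6.]])              # column space = all multiples of (1, 2, 3)
b_in  = np.array([2., 4., 6.])        # a multiple of (1, 2, 3)
b_out = np.array([1., 0., 0.])        # not a multiple of (1, 2, 3)

def in_column_space(A, b):
    return np.linalg.matrix_rank(np.column_stack([A, b])) == np.linalg.matrix_rank(A)

print(in_column_space(A, b_in))       # True
print(in_column_space(A, b_out))      # False
```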
Every subspace W of any vector space V is spanned by some set of vectors S.
Indeed, one can take the whole W to be S. One of the main problems in the
theory of vector spaces is, given a subspace W, to find a minimal set of vectors
S that spans W.
It is clear, for example, that the space R3 can be spanned by 3 vectors
(in fact it is spanned by any 3 vectors which are not parallel to the
same plane) and cannot be spanned by 2 vectors. The subspace of R3
consisting of all vectors parallel to a given plane can be spanned by two
non-parallel vectors and cannot be spanned by one vector and so on.
Thus a subspace of a vector space can be spanned by many sets of vectors.
The next theorem answers the question of when two sets of vectors span the
same subspace.
Theorem. Two subsets S1 and S2 of a vector space V span the same subspace if and only if every vector of S1 is a linear combination of vectors of S2 and every vector of S2 is a linear combination of vectors of S1.
The proof is left as an exercise.
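One convenient way to apply this theorem in Rn is a rank test: S1 and S2 span the same subspace exactly when rank(S1) = rank(S2) = rank(S1 together with S2). The sketch below (Python with numpy, an addition to these notes) uses that equivalent criterion.

```python
# Do two finite sets of vectors in R^n span the same subspace?
import numpy as np

def same_span(S1, S2):
    M1 = np.array(S1, dtype=float)             # rows are the vectors of S1
    M2 = np.array(S2, dtype=float)             # rows are the vectors of S2
    r1 = np.linalg.matrix_rank(M1)
    r2 = np.linalg.matrix_rank(M2)
    r12 = np.linalg.matrix_rank(np.vstack([M1, M2]))
    return r1 == r2 == r12                     # no vector of either set leaves the other's span

print(same_span([(1, 0, 0), (0, 1, 0)], [(1, 1, 0), (1, -1, 0)]))  # True
print(same_span([(1, 0, 0), (0, 1, 0)], [(1, 1, 0), (0, 0, 1)]))   # False
```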
Examples.
1. Vectors a=(1,2,3), b=(0,1,2), and c=(0,0,1)
span R3. Indeed, we know that R3 is spanned by the vectors i=(1,0,0),
j=(0,1,0) and k=(0,0,1). Thus we need to show that
the sets {a,b,c} and {i,j,k} span the same subspace. By the previous theorem,
we need to show that every vector from the first subset is a linear combination
of vectors from the second subset and conversely every vector from the second
subset is a linear combination of vectors of the first subset.
It is clear that a, b, c are linear combinations of i, j, k (as any vector
in R3). So we need to show only that i, j, k are linear combinations
of a, b, c. This is easy to check: k = c, j = b - 2c, and i = a - 2b + c.
2. Polynomials f1=x2+2x+1, f2=x+1 and f3=x+2 in the space of all polynomials P span the subspace P2 of all polynomials of degree not exceeding 2. Indeed, it is clear that P2 is spanned by the polynomials p1=1, p2=x and p3=x2. So we need to show that the sets {f1, f2, f3} and {p1, p2, p3} span the same subspace. It is clear that f1, f2, f3 are linear combinations of p1, p2, p3. Conversely, it is easy to check that p1 = f3 - f2, p2 = 2f2 - f3, and p3 = f1 - 3f2 + f3.
3. Let a=(1,2,3,4), b=(3, -1, 5, 2), c=(-1, 2, 0, 1), d=(2, 1, 4, 5).
Determine whether span{a,b}=span{c,d}. We need to check if a and b are linear
combinations of vectors c and d, and whether c and d are linear combinations
of vectors a and b.
Let us check, for instance, whether a is a linear combination of c and d, that is, whether a = x*c + y*d for some numbers x and y.
This gives the following system of linear equations:
1 = -x + 2y
2 = 2x + y
3 = 0x + 4y
4 = x + 5y
This system does not have a solution, so a is not a linear combination of
c and d, and therefore span{a,b} is not equal to span{c,d}.
4. Is it true that the range of the linear transformation T from R5 to R3 with
the following standard matrix:
[ 2  3  4  5  5 ]
[ 1  2  3  1  2 ]
[ 2  1  2  3  3 ]
coincides with the whole R3?
We know that the range of a linear transformation from Rm to Rn
is spanned by the column-vectors of its standard matrix. Thus the range of
T is spanned by the following vectors: t1=(2,1,2), t2=(3,2,1), t3=(4,3,2), t4=(5,1,3),
t5=(5,2,3).
We need to check whether the subspace of R3 spanned by these vectors coincides
with the whole R3. We know that R3 is spanned by the vectors i=(1,0,0), j=(0,1,0) and
k=(0,0,1). So we can apply the theorem about spans. It is clear that each vector ti (i=1,2,3,4,5) is a linear combination of i, j, k.
So it remains to check whether i, j, k are linear combinations of t1,...,t5. So we
need to solve the following three systems of linear equations:
x1 (2,1,2) + x2 (3,2,1) + x3 (4,3,2) + x4 (5,1,3) + x5 (5,2,3) = (1,0,0),
x1 (2,1,2) + x2 (3,2,1) + x3 (4,3,2) + x4 (5,1,3) + x5 (5,2,3) = (0,1,0),
x1 (2,1,2) + x2 (3,2,1) + x3 (4,3,2) + x4 (5,1,3) + x5 (5,2,3) = (0,0,1).
These systems have the same matrices of coefficients and different right sides, so we
can solve these systems simultaneously. Consider the combined augmented matrix:
[ 2  3  4  5  5 | 1  0  0 ]
[ 1  2  3  1  2 | 0  1  0 ]
[ 2  1  2  3  3 | 0  0  1 ]
Applying the Gauss-Jordan elimination procedure, we get:
[ 1  0  0   3   2 |  1/2  -1   1/2 ]
[ 0  1  0   5   3 |   2   -2   -1  ]
[ 0  0  1  -4  -2 | -3/2   2   1/2 ]
Therefore each of the three systems of equations has infinitely many
solutions (the unknowns x4 and x5 are free in each of these systems).
Thus the range of T coincides with R3.
Linearly independent sets of vectors
As we said before, every subspace W of any vector space V
is spanned by some set of vectors S. Our goal is to find a minimal set S
such that W=span(S). The next theorem shows that in some cases a set S which spans a subspace
W can be made smaller by throwing away extra elements.
Theorem.
Let S be a subset of a vector space V and let a be an element in S which is
equal to a linear combination of other elements of S. Let S' be the set
obtained by removing a from S. Then span(S)=span(S').
This theorem implies that if S is the smallest subset of a subspace W
which spans W then no element of S is a linear combination of other elements
of S. This leads to the following definition. A
subset S of a vector space V is called linearly independent
if no element of S is a linear combination of other elements of S. This definition gives an algorithm to check whether a subset S is linearly
independent. But this algorithm is very slow: one needs to check whether every
element in S is a linear combination of other elements in S. The following
theorem gives a much faster algorithm.
Theorem. A subset S of a vector space V is linearly independent if
and only if
there exists exactly one
linear combination of elements of S which is equal to 0, the one with all
zero coefficients.
A subset S of a vector space is called linearly dependent if it is not linearly independent.
Notice that if a subset of a set S is linearly dependent then the whole set S is linearly dependent
(the proof is an exercise).
Examples.
1. Every two vectors in the line R are linearly dependent (one vector is proportional to the other one).
2. Every three vectors a, b, c in the plane R2 are linearly dependent.
Indeed, if a and b are parallel then one of them is a multiple of another one,
and so a and b are linearly dependent which implies that all three vectors are linearly dependent.
If a and b are not parallel then from Calculus 3 we know that every vector on the plane,
including the vector c, is a linear combination of a and b. Thus a,b,c are linearly dependent.
3. If a subset S of Rn consists of more than n vectors then S is
linearly dependent. (Prove it!)
4. The set of polynomials x+1, x2+x+1, x2-2x-2, x2-3x+1 is linearly
dependent.
To prove this we need to find numbers a, b, c, d, not all equal to 0,
such that a(x+1) + b(x2+x+1) + c(x2-2x-2) + d(x2-3x+1) = 0.
Comparing coefficients of 1, x and x2, this leads to a homogeneous
system of linear equations with 4 unknowns and 3 equations. Such a system
must have a non-trivial solution by the theorem about homogeneous systems of linear equations.
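A non-trivial solution of this system can be found by comparing coefficients directly; here is a sketch in Python with sympy (added as an illustration, not part of the notes).

```python
# Solve a(x+1) + b(x^2+x+1) + c(x^2-2x-2) + d(x^2-3x+1) = 0 for a, b, c, d.
import sympy as sp

x, a, b, c, d = sp.symbols('x a b c d')
expr = sp.expand(a*(x + 1) + b*(x**2 + x + 1) + c*(x**2 - 2*x - 2) + d*(x**2 - 3*x + 1))
eqs = [sp.Eq(expr.coeff(x, k), 0) for k in range(3)]   # coefficients of 1, x, x^2
print(sp.solve(eqs, [a, b, c, d], dict=True))
# a one-parameter family of solutions, e.g. a = -3, b = 1, c = -1, d = 0
```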
Theorem.
1. If a subset S of a vector space V contains 0 then S is linearly dependent.
2. A set S with exactly two vectors is linearly dependent if and only if
one of these vectors is a scalar multiple of the other.
The proof is left as an exercise.
Let f1, f2,...,fn be functions in C[0,1] each of which has
first n-1 derivatives. The determinant of the following matrix
[ f1(x)         f2(x)         ...   fn(x)        ]
[ f1'(x)        f2'(x)        ...   fn'(x)       ]
[ f1''(x)       f2''(x)       ...   fn''(x)      ]
[ ...           ...           ...   ...          ]
[ f1^(n-1)(x)   f2^(n-1)(x)   ...   fn^(n-1)(x)  ]
is called the Wronskian of this set of functions.
Theorem. Let f1, f2,...,fn be functions in C[0,1] each of which has
first n-1 derivatives. If the Wronskian of this set of functions is not
identically zero then the set of functions is linearly independent.
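As a small illustration of this theorem (a Python/sympy sketch added here, not taken from the notes), the Wronskian of the functions 1, x, x2 is the determinant of an upper triangular matrix and equals 2, so these functions are linearly independent.

```python
# Wronskian of 1, x, x^2: row i of the matrix holds the i-th derivatives.
import sympy as sp

x = sp.symbols('x')
fs = [sp.Integer(1), x, x**2]
W = sp.Matrix(3, 3, lambda i, j: sp.diff(fs[j], x, i))
print(W)         # Matrix([[1, x, x**2], [0, 1, 2*x], [0, 0, 2]])
print(W.det())   # 2, which is not identically zero
```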
Basis and dimension
We know that the set of vectors V1=(1,0,...,0), V2=(0,1,...,0), ...,
Vn=(0,...,1) in Rn is linearly independent and such that every vector in Rn
is (uniquely) expressible as a linear combination of these vectors. We called
these vectors basic because of this property. In this section we will
generalize the concept of basis to arbitrary vector spaces.
A subset S of a vector space V is called a basis of V if S is linearly independent and V is spanned by S, that is, V=span(S).
Positive examples.
1. The vectors (1,2) and (2,3) form a basis of R2. Indeed, these vectors are linearly independent because
they are not proportional. In order to check that R2 is spanned by these vectors, it is enough to check that (1,0) and (0,1) are linear combinations of them
(theorem about spans): (1,0) = -3*(1,2) + 2*(2,3) and (0,1) = 2*(1,2) - (2,3).
In fact, every two non-parallel vectors in the plane R2 form a basis of R2.
2. The matrices
[ 1  0 ]   [ 0  1 ]   [ 0  0 ]   [ 0  0 ]
[ 0  0 ] , [ 0  0 ] , [ 1  0 ] , [ 0  1 ]
form a basis of the space of all 2 by 2 matrices.
First we need to prove that these matrices are linearly independent. Indeed, if we take a linear combination of these matrices with coefficients a, b, c, d, we get the matrix
[ a  b ]
[ c  d ]
This matrix is equal to the zero matrix only if a=b=c=d=0. Second, we need to show that these 4 matrices span the space of all 2 by 2 matrices. Indeed, every 2 by 2 matrix
[ a  b ]
[ c  d ]
is the linear combination of our four matrices with coefficients a, b, c, d.
Negative examples.
In fact the space of continuous functions C[0,1] does not have a finite basis at all (this is proved in example 3 below).
Theorem. Let V be a vector space.
The following properties of bases of V hold:
1. Every vector in V is uniquely representable as a linear combination of elements of a basis of V.
2. If V has a basis consisting of n vectors, then:
2.1. every subset of V containing more than n vectors is linearly dependent.
3. All bases of V contain the same number of elements.
The first statement of this theorem allows us to introduce the following definition.
Let S be a basis of a vector space V and let a be a vector from V. Then a is uniquely representable as a linear combination of elements from S. The coefficients of this linear combination are called the coordinates of a in the basis S.
Example. Vectors (1,2) and (2,3) form a basis of R2 (we have shown it before). The vector (4,7) is equal to the linear combination 2*(1,2)+(2,3). Thus the vector (4,7) has coordinates 2, 1 in the basis of vectors (1,2) and (2,3). The same vector has coordinates 4 and 7 in the basis of vectors (1,0) and (0,1). Thus a vector has different coordinates in different bases. It is sometimes very important to find a basis where the vectors you are dealing with have the simplest possible coordinates.
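Finding coordinates amounts to solving a linear system whose columns are the basis vectors; a minimal sketch in Python with numpy (added for illustration) reproduces the coordinates 2, 1 from the example above.

```python
# Coordinates of (4,7) in the basis (1,2), (2,3).
import numpy as np

B = np.array([[1., 2.],
              [2., 3.]])                        # columns of B are the basis vectors (1,2) and (2,3)
print(np.linalg.solve(B, np.array([4., 7.])))   # [2. 1.]
```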
The last condition of the theorem about bases allows us to introduce the following important definition.
The dimension of a vector space V, denoted by
dim(V), is the number of elements in a basis of V.
There is one exception to this definition: the dimension of the zero space
(the vector space consisting of one vector, zero) is defined to be 0 and
not 1.
Examples.
1. Consider the following 5 vectors in R4:
(1,2,3,4), (1,1,0,0), (1,2,1,0), (0,1,2,3), (1,0,0,0). It can be shown (check!)
that these vectors span R4. Since R4 is 4-dimensional (it has the standard basis with 4 vectors), these 5 vectors must be linearly dependent by the theorem about bases.
By the theorem about dimension we can throw away one of these vectors and get a basis of R4. By the theorem about throwing away extra elements from a spanning set, we can throw away a vector which is a linear combination of the other vectors in the set. Let us check that the vector (1,2,3,4) is such a vector. In order to find the linear combination which is equal to this vector, we need to solve the system of linear equations:
(1,2,3,4) = x1 (1,1,0,0) + x2 (1,2,1,0) + x3 (0,1,2,3) + x4 (1,0,0,0).
This system of equations has the following augmented matrix:
[ 1  1  0  1 | 1 ]
[ 1  2  1  0 | 2 ]
[ 0  1  2  0 | 3 ]
[ 0  0  3  0 | 4 ]
Using the Gauss-Jordan procedure, we get the following matrix:
[ 1  0  0  0 |  0  ]
[ 0  1  0  0 | 1/3 ]
[ 0  0  1  0 | 4/3 ]
[ 0  0  0  1 | 2/3 ]
Thus x1=0, x2=1/3, x3=4/3, x4=2/3. So the vector (1,2,3,4) can be thrown away. The other vectors, (1,1,0,0), (1,2,1,0), (0,1,2,3), (1,0,0,0), form a basis of R4. Indeed, they span R4 by the theorem about throwing away extra elements, and by the theorem about dimension, every four vectors in a 4-dimensional vector space which span the vector space form a basis of this vector space.
2. Take two vectors (1,2,3,4), (2,1,1,1) in R4. These vectors are
linearly independent because they are not proportional (see the
theorem about linearly dependent sets). Thus by the
theorem about dimension we can add two vectors
and get a basis of R4. Let us add (1,0,0,0) and (0,1,0,0). Notice
that when we add vectors we need to make sure that the added vectors are
not linear combinations of the previous vectors.
In order to check
that the four vectors (1,2,3,4), (2,1,1,1), (1,0,0,0), (0,1,0,0) form a basis
of R4, we need to check only that they are linearly independent, that is the
system of equations:
x1 (1,2,3,4) + x2 (2,1,1,1) + x3 (1,0,0,0) + x4 (0,1,0,0) = (0,0,0,0)
has only one, trivial, solution (see the theorem about dimension). This is a homogeneous system with 4 equations and 4 unknowns. We
know that this system has only one solution if and only if the matrix
of coefficients is invertible (see the second theorem about inverses). And we know that a square
matrix is invertible if and only if its determinant is not zero (see
the third theorem
about determinants). Thus we need to check that the determinant of the matrix
of coefficients of our system is not zero. Maple says that the determinant
det [ 1  2  1  0 ]
    [ 2  1  0  1 ]
    [ 3  1  0  0 ]
    [ 4  1  0  0 ]
is equal to -1, which is not zero.
Thus, our four vectors form a basis of R4.
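The determinant can of course be checked with any computer algebra system; here is an equivalent check in Python with numpy (the notes themselves use Maple).

```python
# Determinant of the matrix whose columns are (1,2,3,4), (2,1,1,1), (1,0,0,0), (0,1,0,0).
import numpy as np

M = np.column_stack([(1, 2, 3, 4), (2, 1, 1, 1), (1, 0, 0, 0), (0, 1, 0, 0)]).astype(float)
print(round(np.linalg.det(M)))   # -1, which is not zero
```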
3. Let us prove that the space of functions C[0,1] is not finite
dimensional. By contradiction,
suppose that C[0,1] has a finite dimension n. Consider the set of n+1 functions
1, x, x2,..., xn. It is easy to check that the Wronskian of this set of
functions is non-zero (the matrix of derivatives is upper triangular). Thus
by the theorem about Wronskian, this set of
functions is linearly independent. This contradicts statement 2.1 of
the theorem about bases: if a vector space is
n-dimensional then every set of more than n vectors in this vector space is
linearly dependent.
4. Using the theorems about bases and
dimension, one can simplify solutions
of several problems considered before. For example, let us prove that
the range of the linear transformation from R5 to R2 with the following
standard matrix:
[ 1  2  3  4  5 ]
[ 3  4  5  6  7 ]
coincides with R2. Indeed, we know that the range
of our linear transformation is spanned by the column vectors of the standard
matrix: (1,3), (2,4), (3,5), (4,6), (5,7). We need to prove that these
vectors span R2. Notice that the vectors (1,3) and (2,4) are non-proportional,
so they are linearly independent. By the theorem
about dimension, every two linearly independent vectors in a 2-dimensional
vector space form a basis of this vector space. Thus R2 is spanned
by the vectors (1,3) and (2,4), and therefore R2 is spanned by all the column vectors of our standard matrix, that is, the range coincides with R2.
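The same conclusion follows from a one-line rank computation (Python with numpy, added here as a check): the standard matrix has rank 2, so its columns span all of R2.

```python
import numpy as np

A = np.array([[1., 2., 3., 4., 5.],
              [3., 4., 5., 6., 7.]])
print(np.linalg.matrix_rank(A))   # 2 = dim(R2), so range(T) = R2
```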
Rank of a matrix
The rank of a finite set of vectors S is the dimension of span(S). The row rank of a matrix A is the rank of the set of its rows, and the column rank of A is the rank of the set of its columns.
We shall prove that the column rank of A is equal to the row rank of A
for every matrix A. First suppose that A is in reduced row echelon form. It is easy to see that the rows containing leading
1's are linearly independent (see the example below).
The other rows are zero vectors. So the rank of
the set of rows of A is equal to the number of leading 1's (= the number of rows
containing leading 1's).
Example. Consider the following matrix in reduced row echelon form:
[ 1  2  0  3  0  4  0  5  0 ]
[ 0  0  1  2  0  3  0  4  0 ]
[ 0  0  0  0  1  3  0  7  0 ]
[ 0  0  0  0  0  0  0  0  0 ]
Let us check that the rows containing the leading 1's are linearly independent. Let us denote the rows by r1, r2, r3. Suppose that a linear
combination x1r1 + x2r2 +x3r3 is equal to zero. Consider the coordinates
of this linear combination which correspond to the leading 1's (i.e. the
first, the third and the fifth coordinates).
The first coordinate
of this linear combination is x1+0+0=x1. So x1=0. The third coordinate
is equal to 0+x2+0=x2, so x2=0. The fifth coordinate is equal to
0+0+x3=x3, so x3=0. Thus x1=x2=x3=0, so by the criterion of linear independence (the theorem above) the rows r1, r2, r3 are linearly independent.
It is also clear that the columns containing leading 1's are linearly
independent (they are just the standard basic vectors) and other columns are
linear combinations of the columns containing the leading 1's. So the
column
rank of A
is also equal to the number
of leading 1's in the matrix.
Example. In the case of the previous matrix the columns containing the
leading 1's are
[ 1 ]   [ 0 ]   [ 0 ]
[ 0 ] , [ 1 ] , [ 0 ]
[ 0 ]   [ 0 ]   [ 1 ]
[ 0 ]   [ 0 ]   [ 0 ]
and the other columns are
[ 2 ]   [ 3 ]   [ 4 ]   [ 0 ]   [ 5 ]   [ 0 ]
[ 0 ]   [ 2 ]   [ 3 ]   [ 0 ]   [ 4 ]   [ 0 ]
[ 0 ] , [ 0 ] , [ 3 ] , [ 0 ] , [ 7 ] , [ 0 ]
[ 0 ]   [ 0 ]   [ 0 ]   [ 0 ]   [ 0 ]   [ 0 ]
It is clear that these other columns are linear combinations of the columns containing the leading 1's.
Thus the column rank and the row rank coincide
in the case of reduced row echelon matrices.
Theorem.
The column rank and the row rank of every matrix coincide, are equal to
the (row) rank of the reduced row echelon form of this matrix, and are equal
to the number of leading 1's in this reduced row echelon form.
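Numerically the theorem can be observed by comparing the rank of a matrix with the rank of its transpose; a short sketch in Python with numpy (an added illustration) follows.

```python
# The column rank of A is the row rank of A, i.e. rank(A) = rank(A^T).
import numpy as np

rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 7)).astype(float)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))   # two equal numbers
```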
Let S be a set of vectors in a vector space V. We call a
subset T of S a core of S if T is linearly independent and every element of S is a linear combination of elements of T. It is clear that the
number of elements in the core is the rank of S and that the core is a basis
of span(S).
Theorem.
In order to find a core of the set of columns of a matrix A, one can reduce
A to the reduced row echelon form A'. Then the columns in A corresponding
to the leading 1's in A' form a core of the set of columns.
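The procedure of this theorem is exactly what sympy's rref() reports: it returns the reduced row echelon form together with the indices of the columns containing leading 1's. A sketch (Python with sympy, added for illustration):

```python
# A core of the set of columns of A: take the columns of A at the pivot positions of rref(A).
import sympy as sp

A = sp.Matrix([[2, 3, 4, 5, 5],
               [1, 2, 3, 1, 2],
               [2, 1, 2, 3, 3]])
R, pivots = A.rref()
core = [A.col(j) for j in pivots]   # these columns form a basis of the column space
print(pivots)                       # (0, 1, 2)
```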
This theorem tells us how to find a basis of the column space of an m by n
matrix A.
In order to find a basis of the null space, one needs to find the
general solution of the system Av=0 (the set of solutions, as we know, forms a subspace of the vector
space Rn) and the vectors spanning this subspace. The number of these
vectors is the number of free unknowns, and it is easy to see that they
are linearly independent. Thus the dimension of the null space is equal to the number of free unknowns of the system Av=0. This implies the following result.
Theorem.
For every matrix A the dimension of the column space plus the dimension of
the null space is equal to the number of columns of A. In terms of linear
transformations this reads as follows: if T is a linear
transformation from Rm to Rn then dim(range(T)) + dim(kernel(T)) = m.
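For a concrete check of this theorem, take the 3 by 5 matrix used earlier in this section; a sketch in Python with sympy (added for illustration) confirms that rank plus nullity equals the number of columns.

```python
import sympy as sp

A = sp.Matrix([[2, 3, 4, 5, 5],
               [1, 2, 3, 1, 2],
               [2, 1, 2, 3, 3]])
print(A.rank(), len(A.nullspace()), A.cols)   # 3 2 5, and 3 + 2 = 5
```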