A linear equation is an equation of the form
a_{1}x_{1} + a_{2}x_{2} + ... + a_{n}x_{n} = b (1)
where x_{1},...,x_{n} are unknowns and a_{1},...,a_{n}, b are coefficients.
Example:
3x + 4y + 5z = 6 (2)
This equation has three unknowns and four coefficients (3, 4, 5, 6).
Example:
x=2, y=0, z=0
is a solution of equation (2).
A linear equation can have infinitely many solutions, exactly
one solution or no solutions at all.
Equation (2) has infinitely many solutions. To find them all
we can set arbitrary values of x and y and then solve (2) for z.
We get:
x = s, y = t, z = (6 - 3s - 4t)/5
These formulas give all solutions of our equation, meaning that for every
choice of values of s and t we get a solution and every solution is obtained
this way. Thus this is a (the) general solution of our equation.
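As a quick sanity check, the general solution can be plugged back into the equation programmatically; here is a short Python sketch (the helper name `solution` is ours):

```python
# Verify the general solution x = s, y = t, z = (6 - 3s - 4t)/5
# of the equation 3x + 4y + 5z = 6 for a few parameter values.
def solution(s, t):
    return s, t, (6 - 3*s - 4*t) / 5

for s, t in [(0, 0), (2, 0), (1.5, -3.0)]:
    x, y, z = solution(s, t)
    assert abs(3*x + 4*y + 5*z - 6) < 1e-12
```

Note that solution(2, 0) returns the particular solution (2, 0, 0) mentioned above.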
There may be many formulas giving all solutions of a given equation. For
example Maple gives another formula:
> with(linalg);
This command starts the linear algebra package.
> A:=matrix(1,3,[3,4,5]):b:=vector([6]):linsolve(A, b);
This command asks Maple to solve the system of equations.
The solution has two parameters t_{1} and t_{2}. In order to get this
solution "by hand" one can give y and z arbitrary values
(t_{1} and t_{2}) and solve for x.
A system of linear equations is
any sequence of linear equations. A solution
of a system of linear equations
is any common solution of these equations.
A system is called consistent if it
has a solution.
A general solution of a
system of linear equations is
a formula which gives
all solutions for different values of parameters.
Examples. 1. Consider the system:
x + y = 7 
2x + 4y = 18 
This system has just one solution: x=5, y=2. This
is a general solution of the system.
2. Consider the system:
x + y + z = 7 
2x + 4y + z = 18.
This system has infinitely many solutions given by this formula:
x = 5 - 3s/2, y = 2 + s/2, z = s
This is a general solution of our system.
In order to find a general solution of a system of equations, one needs to simplify it as much as possible. The simplest system of linear equations is
x = a, y = b, ...
where every equation has only one unknown and all these unknowns are different.
It is not possible to reduce every system of linear equations to this form, but
we can get very close. There are three
operations that one can apply to any system of linear equations:
1. Replace an equation by the sum of this equation and another equation multiplied by a number.
2. Swap two equations.
3. Multiply an equation by a nonzero number.
The system obtained
after each of these operations is equivalent to the original
system, meaning that they have the same solutions.
For example consider the system
x + y = 7 
2x + 4y = 18 
We can first replace the second equation by the second equation plus the
first equation multiplied by -2. We get
x + y = 7 
2y = 4 
Now we can use the third operation and multiply the second equation by 1/2:
x + y = 7 
y = 2 
Finally we can replace the first equation by the sum of the first equation and
the second equation multiplied by -1:
x = 5 
y = 2 
Since this system is equivalent to the original system, we get that x=5, y=2
is the general solution of the original system.
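The three steps above can also be carried out mechanically on the rows of coefficients; here is a small Python sketch (the variable names are ours):

```python
# Each equation of x + y = 7, 2x + 4y = 18 is stored as a row
# [coefficient of x, coefficient of y, right side].
eq1 = [1, 1, 7]
eq2 = [2, 4, 18]

# Step 1: replace eq2 by eq2 plus eq1 multiplied by -2.
eq2 = [b - 2*a for a, b in zip(eq1, eq2)]   # [0, 2, 4]
# Step 2: multiply eq2 by 1/2.
eq2 = [b / 2 for b in eq2]                  # [0.0, 1.0, 2.0]
# Step 3: replace eq1 by eq1 plus eq2 multiplied by -1.
eq1 = [a - b for a, b in zip(eq1, eq2)]     # [1.0, 0.0, 5.0]

x, y = eq1[2], eq2[2]   # x = 5.0, y = 2.0
```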
Consider the (x,y)-plane and the set of points satisfying ax + by = c.
This set of points is either a line (if a or b is not 0) or the
whole plane (if a=b=c=0), or empty (if a=b=0 but c is not 0).
The set of solutions of the system
ax + by = c a'x + b'y = c'
is the intersection of the sets of solutions of the individual equations.
For example, if these equations define lines on the plane, the intersection
may be a point (if the lines are not parallel), a line (if the lines
coincide), or empty (if the lines are parallel and distinct).
A system of equations in 3 or more variables has similar geometric meaning.
Consider the following problem:
x + y + 2z = a 
x + z = b     (1) 
2x + y + 3z = c 
Show that this system has a solution only if a + b = c.
In order to prove that, replace the first equation by the sum of the first
two equations:
2x + y + 3z = a + b 
x + z = b 
2x + y + 3z = c 
This system is equivalent to the previous one, so it has a solution if and only if the initial system has a solution. But comparing the first and the third equations of this system we notice that it has a solution only if a+b=c. The problem is solved.
Now suppose that we have that a+b=c and we want
to find the general
solution of this system.
Then we need to simplify the system
by using three operations
(adding, swapping, multiplying). It is more
convenient to work not with the system but with its
augmented matrix, the array (table, matrix)
consisting of the coefficients of
the left sides of the equations and the right sides.
For example the system (1)
from the problem that we just solved
has
the following augmented
matrix:
[ 1  1  2  a ] 
[ 1  0  1  b ] 
[ 2  1  3  c ] 
The number of equations in a system of linear equations
is equal to the number of rows in the
augmented matrix, the number of unknowns is equal to the number of columns
minus 1, the last column consists of the right sides of the equations.
When we execute the operations on the systems
of equations, the augmented matrix changes. If we add equation i to
equation j, then row i will be added to row j, if we swap equations, the
corresponding rows get swapped,
if we multiply an equation by a (nonzero) number, the corresponding row
is multiplied by this number.
Thus, in order to simplify a system of equations it is enough to simplify
its augmented matrix by using the following row operations:
1. Add a row multiplied by a number to another row.
2. Swap two rows.
3. Multiply a row by a nonzero number.
For example let us simplify the augmented matrix of
the system (1) from the problem that we just
solved.
First we replace the first row by the sum of the
first and the second rows:
[ 2  1  3  a+b ] 
[ 1  0  1  b ] 
[ 2  1  3  c ] 
Then we subtract the first row from the third row (remember that a+b=c):
[ 2  1  3  a+b ] 
[ 1  0  1  b ] 
[ 0  0  0  0 ] 
Then we subtract the second row multiplied by 2 from the first row:
[ 0  1  1  a-b ] 
[ 1  0  1  b ] 
[ 0  0  0  0 ] 
Then we swap the first two rows and obtain the following matrix:
[ 1  0  1  b ] 
[ 0  1  1  a-b ] 
[ 0  0  0  0 ] 
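The same row reduction can be checked numerically for concrete right sides satisfying a + b = c; the sketch below (plain Python, names ours) takes a = 3, b = 4, c = 7:

```python
# Augmented matrix of system (1) with a = 3, b = 4, c = a + b = 7.
a, b, c = 3, 4, 7
M = [[1, 1, 2, a],
     [1, 0, 1, b],
     [2, 1, 3, c]]

M[0] = [u + v for u, v in zip(M[0], M[1])]     # R1 <- R1 + R2
M[2] = [u - v for u, v in zip(M[2], M[0])]     # R3 <- R3 - R1
M[0] = [u - 2*v for u, v in zip(M[0], M[1])]   # R1 <- R1 - 2*R2
M[0], M[1] = M[1], M[0]                        # swap R1 and R2

# M is now [[1, 0, 1, b], [0, 1, 1, a - b], [0, 0, 0, 0]]
```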
The last matrix has several important features:
1. All rows which consist entirely of zeros are at the bottom of the matrix.
2. The first nonzero entry of every nonzero row is equal to 1; it is called the leading 1 of this row.
3. The leading 1 of every row is strictly to the right of the leading 1 of the row above it.
4. All entries below each leading 1 are equal to 0.
5. All entries above each leading 1 are equal to 0.
A matrix which satisfies all five conditions is called a matrix
in the reduced row echelon form,
or a reduced row echelon matrix.
It is very easy to find the general solution
of a system of linear equations whose augmented
matrix has the reduced row echelon
form.
Consider the system of equations corresponding to the last
matrix that we got:
x + z = b 
y + z = a - b 
The unknowns
corresponding to the leading 1's in the row echelon form of the
augmented matrix are called
leading unknowns. In our case the leading 1's are in the
first and the second columns, so the leading unknowns are
x and y. The other unknowns are called free.
In our case we have only one free unknown, z. If we move it to the right and denote it by t, we get the following formulas:
x = b - t 
y = a - b - t 
z = t 
This system gives us the general solution of the
original system with parameter t. Indeed, giving t arbitrary values, we
can compute x, y and z and obtain all solutions of the original system
of equations.
Similarly, we can get a general solution of every system of equations whose
augmented matrix is in the reduced row echelon form:
one just has to move all free unknowns to the right side of the
equations and consider them as parameters.
Example Consider the system of equations:
x_{1} + 2x_{2} + x_{4} = 6 
x_{3} + 6x_{4} = 7 
x_{5} = 1 
Its augmented matrix is
[ 1  2  0  1  0  6 ] 
[ 0  0  1  6  0  7 ] 
[ 0  0  0  0  1  1 ] 
The matrix has the reduced row echelon form. The leading unknowns are x_{1}, x_{3} and x_{5}; the free unknowns are x_{2} and x_{4}. So the general solution is:
x_{1} = 6 - 2s - t, x_{2} = s, x_{3} = 7 - 6t, x_{4} = t, x_{5} = 1
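One can verify this general solution by substituting it back into the system; here is a short Python check (the function name is ours):

```python
# General solution of the system x1 + 2x2 + x4 = 6, x3 + 6x4 = 7, x5 = 1
# with free unknowns x2 = s and x4 = t.
def solution(s, t):
    return (6 - 2*s - t, s, 7 - 6*t, t, 1)

for s, t in [(0, 0), (1, 2), (-3, 5)]:
    x1, x2, x3, x4, x5 = solution(s, t)
    assert x1 + 2*x2 + x4 == 6
    assert x3 + 6*x4 == 7
    assert x5 == 1
```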
If the augmented matrix does not have the reduced
row echelon form but has the (ordinary) row echelon
form then the general solution also can be easily found.
The method of finding
the solution is called back-substitution.
First we solve each of the equations for its leading unknown.
The last nonzero equation gives us the
expression for the last leading unknown in terms of the free unknowns.
Then we substitute this leading unknown in all other equations by this
expression. After that we are able to find an expression for the next to the
last leading unknown, replace this unknown everywhere by this expression,
etc. until we get expressions for all leading unknowns. The expressions for
leading unknowns that we find in this process form the general solution of
our system of equations.
Example. Consider the following system of equations.
x_{1} - 3x_{2} + x_{3} - x_{4} = 2 
x_{2} + 2x_{3} - x_{4} = 3 
x_{3} + x_{4} = 1 
Its augmented matrix
[ 1  -3  1  -1  2 ] 
[ 0  1  2  -1  3 ] 
[ 0  0  1  1  1 ] 
is in the row echelon form.
The leading unknowns are x_{1}, x_{2}, x_{3}; the free unknown is x_{4}.
Solving each equation for the leading unknown we get:
x_{1} = 2 + 3x_{2} - x_{3} + x_{4} 
x_{2} = 3 - 2x_{3} + x_{4} 
x_{3} = 1 - x_{4} 
The last equation gives us an expression for x_{3}: x_{3} = 1 - x_{4}. Substituting this into the first and the second equations gives:
x_{1} = 2 + 3x_{2} - (1 - x_{4}) + x_{4} = 1 + 3x_{2} + 2x_{4} 
x_{2} = 3 - 2(1 - x_{4}) + x_{4} = 1 + 3x_{4} 
x_{3} = 1 - x_{4} 
Now substituting x_{2} = 1 + 3x_{4} into the first equation, we get
x_{1} = 1 + 3(1 + 3x_{4}) + 2x_{4} = 4 + 11x_{4} 
x_{2} = 1 + 3x_{4} 
x_{3} = 1 - x_{4} 
Now we can write the general solution:
x_{1} = 4 + 11s, x_{2} = 1 + 3s, x_{3} = 1 - s, x_{4} = s
Let us check if we made any arithmetic mistakes. Take x_{4}=1 and compute
x_{1}=15, x_{2}=4, x_{3}=0, x_{4}=1. Substitute it into the original system of equations:
15 - 3*4 + 0 - 1 = 2 
4 + 2*0 - 1 = 3 
0 + 1 = 1 
OK, it seems that our solution is correct.
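The back-substitution carried out above is easy to script; here is a Python sketch for this particular system (the function name is ours):

```python
# Back-substitution for the row echelon system
#   x1 - 3*x2 + x3 - x4 = 2
#   x2 + 2*x3 - x4 = 3
#   x3 + x4 = 1
# with free unknown x4 = s.
def general_solution(s):
    x4 = s
    x3 = 1 - x4               # from the last equation
    x2 = 3 - 2*x3 + x4        # substitute x3 into the second equation
    x1 = 2 + 3*x2 - x3 + x4   # substitute x2, x3 into the first equation
    return x1, x2, x3, x4

# s = 1 reproduces the check from the text: (15, 4, 0, 1).
assert general_solution(1) == (15, 4, 0, 1)
```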
The Gauss-Jordan elimination procedure
There exists a standard procedure to obtain a reduced row echelon
matrix from a given matrix by using the row operations.
This procedure consists of the following steps:
1. Find the leftmost column that contains a nonzero entry.
2. If necessary, swap rows so that the top entry of this column is nonzero.
3. Multiply the top row by the inverse of this entry, so that its leading entry becomes 1.
4. Add suitable multiples of the top row to the rows below it, so that all entries below this leading 1 become 0.
5. Repeat steps 1-4 for the matrix formed by the remaining rows, until the matrix is in row echelon form.
6. Finally, working upward from the last leading 1, add suitable multiples of each row to the rows above it, so that all entries above every leading 1 become 0.
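These steps can be written out as a short program. The following Python sketch (the function name rref is ours) uses exact rational arithmetic to avoid rounding errors:

```python
from fractions import Fraction

def rref(matrix):
    """Gauss-Jordan elimination: return the reduced row echelon form."""
    M = [[Fraction(x) for x in row] for row in matrix]
    rows, cols = len(M), len(M[0])
    lead = 0
    for r in range(rows):
        if lead >= cols:
            break
        # Find a row with a nonzero entry in column `lead`; if there is
        # none, move on to the next column.
        i = r
        while M[i][lead] == 0:
            i += 1
            if i == rows:
                i = r
                lead += 1
                if lead == cols:
                    return M
        M[r], M[i] = M[i], M[r]                 # swap it into position
        M[r] = [x / M[r][lead] for x in M[r]]   # make the leading entry 1
        # Make every other entry in the pivot column 0.
        for j in range(rows):
            if j != r:
                M[j] = [x - M[j][lead] * y for x, y in zip(M[j], M[r])]
        lead += 1
    return M
```

For the augmented matrix of the system x + y = 7, 2x + 4y = 18 this returns [[1, 0, 5], [0, 1, 2]], i.e. x = 5, y = 2.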
Theorem. A system of linear equations either has no
solutions or has exactly one solution or has infinitely many solutions.
A
system of linear equations has infinitely many solutions if and only if
its reduced row echelon form has free unknowns.
A system of linear equations is called homogeneous if the right sides of all equations are equal to 0.
Example:
2x + 3y - 4z = 0 
x - y + z = 0 
x - y = 0 
A homogeneous system of equations always has a solution (0,0,...,0). Therefore the theorem about solutions of systems of linear equations implies the first part of the following result.
Theorem. Every homogeneous system has either exactly one
solution or infinitely many solutions. If a homogeneous system has more
unknowns than equations, then it has infinitely many solutions.
Matrices and matrix operations
A matrix is a rectangular array of numbers. The numbers in the array are called entries.
Examples. Here are three matrices:
[ 1  2  3 ] 
[ 4  5  6 ] 

[ 1  2  3  4 ] 

[ 1 ] 
[ 2 ] 
The size
of a matrix is the pair of numbers: the number
of rows and the number of columns.
The matrices above have sizes (2,3), (1,4),
(2,1), respectively.
A matrix with one row is called a
row-vector.
A matrix with one column is called a
column-vector. In the example above the second
matrix is a row-vector and the third one is a column-vector. The entry of a
matrix A which lies in the i-th row and j-th column will usually be denoted by
A_{ij} or A(i,j).
A matrix with n rows and n columns is called a square matrix of size n.
Discussing matrices, we shall call numbers scalars. In some cases one can view scalars as 1x1 matrices.
Matrices were first introduced in the middle of the 19th century by W. Hamilton and A. Cayley. Following Cayley, we are going to describe an arithmetic where the role of numbers is played by matrices.
In order to solve an equation ax = b
with a not equal to 0, we just divide b by a and get x = b/a. We want to solve
systems of linear equations in a similar manner. Instead of the scalar a we
shall
have a matrix of coefficients
of the system of equations, that is the array of the coefficients of
the unknowns
(i.e. the augmented matrix without the last column).
Instead of x we shall
have a vector of
unknowns and instead of b we shall have the vector of right sides of the
system.
In order to do that we must learn how to multiply and divide matrices.
But first we need to learn when two matrices are equal, how to add two
matrices and how to multiply a matrix by a scalar.
Two matrices are called equal if
they have the same size and their corresponding entries are equal.
The sum of two matrices A and B of the same size
(m,n) is the matrix C of size (m,n) such that C(i,j)=A(i,j)+B(i,j) for
every i and j.
Example.
The sum of
[ 1  2 ] 
[ 3  4 ] 
and
[ 5  6 ] 
[ 7  8 ] 
is
[ 6  8 ] 
[ 10  12 ] 
In order to multiply a matrix by a scalar, one has to
multiply all entries of the matrix by this scalar.
Example:
The product of the scalar 3 and the matrix
[ 1  2 ] 
[ 3  4 ] 
is
[ 3  6 ] 
[ 9  12 ] 
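Both operations act entry by entry, which makes them one-liners in code; here is a Python sketch (the function names are ours):

```python
# Matrices as lists of rows; addition and multiplication by a scalar
# are performed entry by entry.
def add(A, B):
    # The matrices must have the same size.
    assert len(A) == len(B) and len(A[0]) == len(B[0])
    return [[a + b for a, b in zip(ra, rb)] for ra, rb in zip(A, B)]

def scale(k, A):
    return [[k * a for a in row] for row in A]
```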
The product of a row-vector v of size (1,n)
and a column-vector u of size (n,1) is the sum of products of corresponding
entries: vu = v(1)u(1) + v(2)u(2) + ... + v(n)u(n).
Example:
            [ 3 ] 
(1, 2, 3) * [ 4 ] = 1*3 + 2*4 + 3*1 = 3 + 8 + 3 = 14 
            [ 1 ] 
Example:
            [ x ] 
(2, 4, 3) * [ y ] = 2x + 4y + 3z 
            [ z ] 
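In code this product is a single sum; here is a Python sketch (the function name is ours):

```python
# Product of a row-vector and a column-vector: the sum of products
# of corresponding entries.
def row_times_col(v, u):
    assert len(v) == len(u)   # the sizes must match
    return sum(a * b for a, b in zip(v, u))

row_times_col([1, 2, 3], [3, 4, 1])   # 1*3 + 2*4 + 3*1 = 14
```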
As you see, we can represent the left side of a linear equation as a
product of two matrices. The product of two arbitrary matrices, which we
shall define next, will allow us to represent the left side of any system of
equations as a product of two matrices.
Let A be a matrix of size (m,n) and let B be a matrix of size (n,k) (that
is, the number of columns in A is equal to the number of rows in B). We can
subdivide A into a column of m row-vectors r_{1}, r_{2}, ..., r_{m} of size
(1,n). We can also subdivide B into a row of k column-vectors c_{1}, c_{2},
..., c_{k} of size (n,1). Then the product of A and B is the matrix C of
size (m,k) such that C(i,j) is the product of the row-vector r_{i} and the
column-vector c_{j}.
Matrices A and B such that the number of columns of A is not equal to the
number of rows of B cannot be multiplied.
Example:
The product of
[ 1  2 ] 
[ 3  4 ] 
and
[ 5  6 ] 
[ 7  8 ] 
is
[ 19  22 ] 
[ 43  50 ] 
Example:
The product of
[ 2  4  3 ] 
[ 1  1  1 ] 
and the column-vector with entries x, y, z is
[ 2x + 4y + 3z ] 
[ x + y + z ] 
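The definition translates directly into code: entry (i,j) of the product is the i-th row of A times the j-th column of B. Here is a Python sketch (the function name is ours):

```python
# Product of an (m,n) matrix A and an (n,k) matrix B.
def multiply(A, B):
    assert len(A[0]) == len(B)   # columns of A must equal rows of B
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))]
            for i in range(len(A))]
```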
You see: we can represent the left side of a system of linear equations
as a product of a matrix and a column-vector. The whole system of linear
equations can thus be written in the following form:
Av = b
where A is the matrix of coefficients of the system, that is, the array of
coefficients of the left side (do not confuse it with the augmented matrix),
v is the column-vector of unknowns and b is the column-vector of the right
sides (constants).
Properties of matrix operations.
The following properties hold for all matrices A, B, C of appropriate sizes and all scalars k:
1. A + B = B + A (commutativity of addition).
2. (A + B) + C = A + (B + C) (associativity of addition).
3. A(BC) = (AB)C (associativity of multiplication).
4. A(B + C) = AB + AC and (B + C)A = BA + CA (distributivity).
5. k(AB) = (kA)B = A(kB).
The following properties of matrix operations do not hold: in general AB is
not equal to BA (matrix multiplication is not commutative), and AB = AC does
not imply B = C (there is no cancellation law).
Example:
A = 
[ 1  0 ] 
[ 0  0 ] 
,  B = 
[ 0  1 ] 
[ 0  0 ] 
Indeed,
AB = 
[ 0  1 ] 
[ 0  0 ] 
,  BA = 
[ 0  0 ] 
[ 0  0 ] 
Example:
A = 
[ 0  1 ] 
[ 0  0 ] 
,  B = 
[ 1  0 ] 
[ 0  0 ] 
,  C = 
[ 2  0 ] 
[ 0  0 ] 
Then AB=AC=0 but B and C are not equal. Notice that this example shows also that a product of two nonzero matrices can be zero.
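Both failures are easy to reproduce in a few lines of Python (the matrices below are our own small examples):

```python
def mul(A, B):
    # entry (i,j) of the product = i-th row of A times j-th column of B
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# Multiplication is not commutative: AB != BA.
A = [[1, 0], [0, 0]]
B = [[0, 1], [0, 0]]
assert mul(A, B) != mul(B, A)

# There is no cancellation: AB = AC = 0 even though B != C,
# and none of the three matrices is zero.
A2 = [[0, 1], [0, 0]]
B2 = [[1, 0], [0, 0]]
C2 = [[2, 0], [0, 0]]
assert mul(A2, B2) == mul(A2, C2) == [[0, 0], [0, 0]]
assert B2 != C2
```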
There are three other important operations on matrices.
If A is any m by n matrix then the transpose
of A, denoted by A^{T}, is defined to be the n by m matrix obtained by interchanging the rows and columns of A, that is the first column of A^{T} is the first
row of A, the second column of A^{T} is the second row of A, etc.
Example.
The transpose of
[ 1  2  3 ] 
[ 4  5  6 ] 
is
[ 1  4 ] 
[ 2  5 ] 
[ 3  6 ] 
If A is a square matrix of size n then the sum of the entries on the main diagonal of A is called the trace of A and is denoted by tr(A).
Example.
The trace of the matrix
[ 1  2  3 ] 
[ 4  5  6 ] 
[ 7  8  9 ] 
is 1 + 5 + 9 = 15.
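Both operations are short in code; here is a Python sketch (the function names are ours):

```python
# The transpose swaps rows and columns; the trace sums the main diagonal.
def transpose(A):
    return [list(col) for col in zip(*A)]

def trace(A):
    assert len(A) == len(A[0])   # the trace is defined for square matrices
    return sum(A[i][i] for i in range(len(A)))

transpose([[1, 2, 3], [4, 5, 6]])          # [[1, 4], [2, 5], [3, 6]]
trace([[1, 2, 3], [4, 5, 6], [7, 8, 9]])   # 1 + 5 + 9 = 15
```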
A square matrix A of size n is called invertible
if there exists a square matrix B of the same size such that
AB = BA = I_{n}, the identity matrix of size
n. In this case B is called the
inverse of A and is denoted by A^{-1}.
Examples. 1.
The matrix I_{n} is invertible. The inverse matrix is I_{n}:
I_{n} times I_{n} is I_{n} because I_{n} is the identity
matrix.
2. The matrix A
[ 1  3 ] 
[ 0  1 ] 
is invertible. Indeed, the following matrix B:
[ 1  -3 ] 
[ 0  1 ] 
is the inverse of A, since A*B = I_{2} = B*A.
3. The zero matrix O is not invertible. Indeed, if O*B=I_{n}
then O=O*B=I_{n} which is impossible.
4.
A matrix A with a zero row cannot be invertible because
in this case for every matrix B the product A*B will have a zero row
but I_{n} does not have zero rows.
5. The following matrix A:
[ 1  2  3 ] 
[ 3  4  5 ] 
[ 4  6  8 ] 
is not invertible. Indeed, suppose that there exists a matrix B:
[ a  b  c ] 
[ d  e  f ] 
[ g  h  i ] 
such that A*B=I_{3}. The corresponding entries of A*B and I_{3} must be equal, so we get the following system of nine linear equations with nine unknowns:
a + 2d + 3g = 1    (the (1,1)-entry) 
b + 2e + 3h = 0    (the (1,2)-entry) 
c + 2f + 3i = 0    (the (1,3)-entry) 
3a + 4d + 5g = 0 
3b + 4e + 5h = 1 
3c + 4f + 5i = 0 
4a + 6d + 8g = 0 
4b + 6e + 8h = 0 
4c + 6f + 8i = 1 
This system does not have a solution, which can be shown with the help
of Maple. (It can also be seen directly: adding the first and the fourth
equations gives 4a + 6d + 8g = 1, which contradicts the seventh equation
4a + 6d + 8g = 0.)
Now we are going to prove some theorems about transposes, traces and inverses.
Theorem. The following properties of transposes hold:
1. (A^{T})^{T} = A.
2. (A + B)^{T} = A^{T} + B^{T} and (A - B)^{T} = A^{T} - B^{T}.
3. (kA)^{T} = kA^{T} for every scalar k.
4. (AB)^{T} = B^{T}A^{T}.
Theorem. The following properties of traces hold:
1. tr(A + B) = tr(A) + tr(B).
2. tr(kA) = k tr(A) for every scalar k.
3. tr(AB) = tr(BA).
4. tr(A^{T}) = tr(A).
Theorem. The following properties hold: if A and B are invertible matrices of the same size, then the product AB is invertible and
(AB)^{-1} = B^{-1}A^{-1},
that is, the inverse of the product is the product of the inverses in the opposite order. In particular,
(A^{n})^{-1} = (A^{-1})^{n}.
The proofs of 2, 4, 5 are left as exercises.
Notice that using inverses we can solve some systems of linear equations in
the same way we solve the equation ax = b where a and b are numbers. Suppose
that we have a system of linear equations with n equations and n unknowns.
Then, as we know, this system can be represented in the form Av = b, where A
is the matrix of the system, v is the column-vector of unknowns and b is the
column-vector of the right sides of the equations. The matrix A is a square
matrix. Suppose that it has an inverse A^{-1}. Then we can multiply both
sides of the equation Av = b by A^{-1} on the left. Using associativity, the
fact that A^{-1}A = I and the fact that Iv = v, we get: v = A^{-1}b. This is
the solution of our system.
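Here is this method in action for a 2x2 system, using the invertible matrix from the examples above and its inverse (a Python sketch, names ours):

```python
def mul(A, B):
    # entry (i,j) of the product = i-th row of A times j-th column of B
    return [[sum(A[i][t] * B[t][j] for t in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

# The system x + 3y = 7, y = 2, written as A v = b.
A     = [[1, 3], [0, 1]]
A_inv = [[1, -3], [0, 1]]   # the inverse of A (A * A_inv = I_2)
b     = [[7], [2]]          # column-vector of the right sides

v = mul(A_inv, b)           # v = A^{-1} b = [[1], [2]], i.e. x = 1, y = 2
```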