Here is the theorem we need to prove.

Theorem. A system of linear equations has either no solutions, exactly one solution, or infinitely many solutions. It has infinitely many solutions if and only if its reduced row echelon form has free unknowns and the last column of the reduced row echelon form contains no leading 1. It has exactly one solution if and only if the reduced row echelon form has no free unknowns and the last column contains no leading 1. It has no solutions if and only if the last column of the reduced row echelon form contains a leading 1.

Proof. Consider the reduced row echelon form of our system of equations.

Suppose first that it contains an equation whose left side is zero and whose right side is non-zero. Such an equation must have the form

0x1 + 0x2 + ... + 0xn = 1

(the right-hand side must be 1, since it is the leading non-zero entry of the corresponding row of the reduced row echelon matrix). Clearly the system then has no solutions.
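For instance (an illustrative example not in the original text), subtracting the first equation from the second produces exactly such a row:

```latex
\begin{aligned}
x_1 + x_2 &= 1\\
x_1 + x_2 &= 2
\end{aligned}
\qquad\longrightarrow\qquad
\begin{aligned}
x_1 + x_2 &= 1\\
0x_1 + 0x_2 &= 1
\end{aligned}
```

The second equation on the right, 0 = 1, cannot be satisfied by any choice of x1 and x2, so the system has no solutions.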

Now suppose that the system contains no such equation. If there are no free unknowns, we get exactly one solution, because the system looks like this:

x1 = b1
x2 = b2
x3 = b3
.......
xn = bn

where each bi is a constant. If there are free unknowns, they can be assigned arbitrary values, and each choice of values determines exactly one solution, since the leading unknowns are then expressed through the free ones. A free unknown can take infinitely many different values, and different choices give different solutions, so in this case the system has infinitely many solutions. The theorem is proved.
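The case analysis in the proof can be sketched as a small program. The following is a minimal illustration, not part of the original text: the function names `rref` and `classify` are my own, the routine works on an augmented matrix [A | b] given as a list of rows, and exact rational arithmetic (`fractions.Fraction`) is assumed so that row reduction has no rounding issues.

```python
from fractions import Fraction

def rref(aug):
    """Reduce an augmented matrix [A | b] (list of rows) to reduced
    row echelon form; also return the list of pivot columns."""
    m = [[Fraction(x) for x in row] for row in aug]
    rows, cols = len(m), len(m[0])
    pivot_row = 0
    pivot_cols = []
    for col in range(cols):
        # Find a row at or below pivot_row with a nonzero entry in this column.
        pr = next((r for r in range(pivot_row, rows) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot_row], m[pr] = m[pr], m[pivot_row]
        # Scale the pivot row so the leading entry becomes 1.
        lead = m[pivot_row][col]
        m[pivot_row] = [x / lead for x in m[pivot_row]]
        # Eliminate this column from every other row.
        for r in range(rows):
            if r != pivot_row and m[r][col] != 0:
                factor = m[r][col]
                m[r] = [a - factor * b for a, b in zip(m[r], m[pivot_row])]
        pivot_cols.append(col)
        pivot_row += 1
        if pivot_row == rows:
            break
    return m, pivot_cols

def classify(aug):
    """Return 'none', 'unique', or 'infinite', following the theorem's
    three cases for the augmented system [A | b]."""
    _, pivot_cols = rref(aug)
    last = len(aug[0]) - 1            # index of the right-hand-side column
    if last in pivot_cols:            # leading 1 in the last column: row 0 = 1
        return "none"
    n_unknowns = last
    if len(pivot_cols) == n_unknowns: # every unknown is leading: no free unknowns
        return "unique"
    return "infinite"                 # at least one free unknown
```

For example, `classify([[1, 1, 1], [1, 1, 2]])` falls into the no-solutions case, `classify([[1, 0, 2], [0, 1, 3]])` into the unique-solution case, and `classify([[1, 1, 1], [2, 2, 2]])` into the infinitely-many-solutions case.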