An elementary matrix is a matrix obtained from an identity matrix by one of the row operations.
Example.

[ 1  3 ]
[ 0  1 ],

[ 1  0 ]
[ 3  1 ],

[ 0  1 ]
[ 1  0 ],

[ 1  0 ]
[ 0  5 ]

The first two matrices are obtained by adding a multiple of one row to another row. The third matrix is obtained by swapping two rows. The last matrix is obtained by multiplying a row by a number.
As we see, elementary matrices usually have lots of zeroes.
Elementary matrices which are obtained by adding a multiple of one row to another row have 1s on the diagonal and contain exactly one nonzero entry off the diagonal.
Elementary matrices which are obtained by multiplying a row by a number contain exactly one nonunit entry on the diagonal and no nonzero entries outside the diagonal.
Elementary matrices which are obtained by swapping two rows consist of 0s and 1s and contain exactly two nonzero entries off the diagonal.
The converse statements are also true (for example, every matrix with 1s on the diagonal and exactly one nonzero entry outside the diagonal is an elementary matrix).
The main result about elementary matrices is that every invertible matrix is a product of elementary matrices. These are in some sense the smallest particles in the world of invertible matrices. We shall prove it later.
Theorem. If the elementary matrix E is obtained by performing a row operation on the identity matrix I_{m}, and if A is an m by n matrix, then the product EA is the matrix obtained from A by applying the same row operation.
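A quick numerical check of this theorem (the helper functions and the particular matrices A and E below are mine, not from the text):

```python
# Multiplying by an elementary matrix performs the corresponding row operation.

def identity(n):
    return [[1 if i == j else 0 for j in range(n)] for i in range(n)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

def add_multiple(M, src, dst, c):
    """Return a copy of M with c times row src added to row dst."""
    R = [row[:] for row in M]
    R[dst] = [R[dst][j] + c * R[src][j] for j in range(len(R[0]))]
    return R

A = [[1, 2, 3],
     [4, 5, 6]]

# E: add 10 times row 0 to row 1 of the 2 by 2 identity matrix.
E = add_multiple(identity(2), 0, 1, 10)

# The theorem: EA equals the same row operation applied directly to A.
assert matmul(E, A) == add_multiple(A, 0, 1, 10)
```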
Lemma. Every elementary matrix is invertible, and the inverse is again an elementary matrix. If an elementary matrix E is obtained from I by using a certain row operation q, then E^{-1} is obtained from I by the "inverse" operation q^{-1} defined as follows: if q adds c times one row to another row, then q^{-1} adds -c times that row; if q multiplies a row by a nonzero number c, then q^{-1} multiplies the same row by 1/c; if q swaps two rows, then q^{-1} is the same swap.
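A quick check of the lemma on 3 by 3 examples of each type (the particular matrices are mine):

```python
# Each elementary matrix times the matrix of its "inverse" operation gives I.

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

I = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# q: add 5 times row 0 to row 2   ->   q^{-1}: add -5 times row 0 to row 2
E_add     = [[1, 0, 0], [0, 1, 0], [5, 0, 1]]
E_add_inv = [[1, 0, 0], [0, 1, 0], [-5, 0, 1]]
assert matmul(E_add, E_add_inv) == I

# q: multiply row 1 by 4          ->   q^{-1}: multiply row 1 by 1/4
E_mul     = [[1, 0, 0], [0, 4, 0], [0, 0, 1]]
E_mul_inv = [[1, 0, 0], [0, 0.25, 0], [0, 0, 1]]
assert matmul(E_mul, E_mul_inv) == I

# q: swap rows 0 and 2            ->   q^{-1}: the same swap
E_swap = [[0, 0, 1], [0, 1, 0], [1, 0, 0]]
assert matmul(E_swap, E_swap) == I
```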
Now we are able to prove the second theorem about inverses.
Theorem. If A is an n by n square matrix, then the following statements are equivalent:
(a) A is invertible;
(b) the reduced row echelon form of A is the identity matrix I;
(c) A is a product of elementary matrices.
The proof of this theorem gives an algorithm to represent a matrix as a product of elementary matrices: in order to represent an invertible square matrix A as a product of elementary matrices, one needs to find a sequence of row operations p_{1},..., p_{m} which reduces A to its reduced row echelon form, which is the identity matrix; then A is the product of the elementary matrices E_{1}^{-1},...,E_{m}^{-1} corresponding to the inverse row operations p_{1}^{-1},...,p_{m}^{-1}:

A = E_{1}^{-1} E_{2}^{-1} ... E_{m}^{-1}     (1)
If we take inverses of both sides of formula (1), then we get the following formula (recall that the inverse of a product is the product of the inverses in the opposite order, and that the inverse of A^{-1} is A):

A^{-1} = E_{m} E_{m-1} ... E_{1}

Notice that the matrices E_{1},..., E_{m} are (by the lemma about elementary matrices) the elementary matrices corresponding to the row operations p_{1},...,p_{m}. Since E_{1}I=E_{1}, we can rewrite the last equality in the following form:

A^{-1} = E_{m} E_{m-1} ... E_{1} I
Now the theorem about elementary matrices allows us to interpret this equality in the following way: in order to get the inverse A^{-1} of an invertible matrix A, one can find a sequence of row operations that reduces A to I, and then perform this sequence of operations on I.
We can join these two steps by first augmenting A with I (denote the resulting matrix by [A I]) and then applying the row operations to the resulting matrix, reducing its left part to I. The right part will then be transformed into A^{-1}.
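This two-step procedure can be sketched as a small Gauss-Jordan routine (the helper name `inverse_via_row_ops` is mine; exact rational arithmetic via `fractions` avoids round-off):

```python
from fractions import Fraction

def inverse_via_row_ops(A):
    """Invert A by row-reducing the augmented matrix [A I].
    A sketch of the procedure in the text; assumes A is invertible."""
    n = len(A)
    # Build [A I] with exact rational entries.
    M = [[Fraction(x) for x in row] + [Fraction(int(i == j)) for j in range(n)]
         for i, row in enumerate(A)]
    for col in range(n):
        # Swap in a row with a nonzero pivot.
        pivot = next(r for r in range(col, n) if M[r][col] != 0)
        M[col], M[pivot] = M[pivot], M[col]
        # Multiply the pivot row to make the pivot equal to 1.
        p = M[col][col]
        M[col] = [x / p for x in M[col]]
        # Add multiples of the pivot row to clear the rest of the column.
        for r in range(n):
            if r != col and M[r][col] != 0:
                c = M[r][col]
                M[r] = [x - c * y for x, y in zip(M[r], M[col])]
    # The left part is now I; the right part is A^{-1}.
    return [row[n:] for row in M]

A = [[2, 1],
     [5, 3]]
assert inverse_via_row_ops(A) == [[3, -1], [-5, 2]]
```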
A square matrix A is called symmetric
if A^{T}=A, that is if A(i,j)=A(j,i) for
every i and j. Thus A is symmetric if and only if it
is symmetric with respect to the main diagonal. Here is an example of a symmetric matrix:
[ 1  2  3 ] 
[ 2  4  5 ] 
[ 3  5  6 ] 
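The definition translates directly into a membership test (the helper name `is_symmetric` is mine); the matrix S below is the example above:

```python
def is_symmetric(M):
    """M equals its transpose: M[i][j] == M[j][i] for all i, j."""
    return all(M[i][j] == M[j][i]
               for i in range(len(M)) for j in range(len(M)))

S = [[1, 2, 3],
     [2, 4, 5],
     [3, 5, 6]]
assert is_symmetric(S)

S[0][1] = 7          # break the symmetry at (0, 1) vs (1, 0)
assert not is_symmetric(S)
```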
An important
subclass of symmetric matrices is formed by diagonal matrices,
i.e. matrices which have zeroes everywhere outside the main diagonal. For
example, the identity matrix is a diagonal matrix.
We present here three theorems about symmetric matrices.
The product of symmetric matrices need not be symmetric.
Example.
Let

A =
[ 1  2 ]
[ 2  3 ],

B =
[ 0  1 ]
[ 1  0 ]

then

AB =
[ 2  1 ]
[ 3  2 ]

Both A and B are symmetric but AB is not symmetric.
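This kind of example is easy to verify numerically; a sketch with assumed entries (any pair of symmetric matrices that do not commute would work):

```python
# Two symmetric matrices whose product is not symmetric.

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2],
     [2, 3]]
B = [[0, 1],
     [1, 0]]

assert transpose(A) == A and transpose(B) == B   # both are symmetric
AB = matmul(A, B)
assert transpose(AB) != AB                       # their product is not
```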
In fact the following result holds.
Theorem. If A and B are symmetric matrices of the same size, then AB is symmetric if and only if AB=BA, that is, if and only if A and B commute.
We leave the proof of this theorem as an exercise (it is similar to the proof of the first theorem about symmetric matrices).
The following theorem is much harder to prove:
Theorem. A square matrix is symmetric if and only if it is equal to a product A*A^{T} for some square matrix A with possibly complex entries.
I will prove only the easy part of this statement: for every matrix A the product A*A^{T} is symmetric.
Indeed, let B=A*A^{T}. Then B^{T}=(A*A^{T})^{T}. By the theorem about transposes, the transpose of a product is the product of transposes in the opposite order. Thus B^{T}=(A^{T})^{T}*A^{T}. By the same theorem (A^{T})^{T}=A. Thus B^{T}=A*A^{T}=B, so by definition B is symmetric.
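The easy direction can be checked on any matrix; a sketch with an arbitrary integer matrix A of my choosing:

```python
# For any matrix A, the product B = A * A^T is symmetric.

def transpose(M):
    return [list(col) for col in zip(*M)]

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(len(Y)))
             for j in range(len(Y[0]))] for i in range(len(X))]

A = [[1, 2, 0],
     [3, -1, 4],
     [0, 5, 2]]
B = matmul(A, transpose(A))
assert transpose(B) == B
```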
A square matrix A is called upper triangular (resp. lower triangular) if all its entries below (resp. above) the main diagonal are zeroes, that is A(i,j)=0 whenever i is greater than j (resp. A(i,j)=0 whenever i is less than j).
Example.

[ 1  2  3 ]
[ 0  4  5 ]
[ 0  0  6 ],

[ 1  0  0 ]
[ 2  3  0 ]
[ 4  5  6 ]

The first matrix is upper triangular, the second is lower triangular.
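The definitions of upper and lower triangular translate directly into membership tests (the helper names are mine):

```python
def is_upper_triangular(M):
    """All entries below the main diagonal are zero: M[i][j] == 0 for i > j."""
    return all(M[i][j] == 0
               for i in range(len(M)) for j in range(len(M)) if i > j)

def is_lower_triangular(M):
    """All entries above the main diagonal are zero: M[i][j] == 0 for i < j."""
    return all(M[i][j] == 0
               for i in range(len(M)) for j in range(len(M)) if i < j)

U = [[1, 2, 3],
     [0, 4, 5],
     [0, 0, 6]]
L = [[1, 0, 0],
     [2, 3, 0],
     [4, 5, 6]]

assert is_upper_triangular(U) and not is_lower_triangular(U)
assert is_lower_triangular(L) and not is_upper_triangular(L)
```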
Example:
[ 0  2  3 ] 
[ 2  0  4 ] 
[ 3  4  0 ] 
I leave the proof of this theorem as an exercise.