Proofs Due Class 12


>with(linalg);

A square matrix A is called skew-symmetric if A^T = -A, that is A(i,j) = -A(j,i) for every i and j.

Theorem a) If A is invertible and skew-symmetric then the inverse of A is skew-symmetric. We want to prove the above theorem. We are given that A is invertible and skew-symmetric. This means that A*A^(-1) = I and that A^T = -A. We want to prove that A^(-1) is skew-symmetric. By Theorem 1.4.10 we know that if A is an invertible matrix, then A^T is also invertible and (A^(-1))^T = (A^T)^(-1). Thus (A^(-1))^T = (A^T)^(-1) = (-A)^(-1) = -(A^(-1)), where the last equality holds because (-A)(-A^(-1)) = A*A^(-1) = I. From this we see that (A^(-1))^T = -A^(-1), so A^(-1) is skew-symmetric.
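
As a quick sanity check (not a substitute for the proof), we can try this in Maple on one example. The 2 x 2 matrix S below is just a sample invertible skew-symmetric matrix that we picked; both equal tests should return true if the theorem holds.

>S:=matrix([[0,2],[-2,0]]);
>equal(transpose(S),evalm(-S));        # confirms that S is skew-symmetric
>Sinv:=inverse(S);
>equal(transpose(Sinv),evalm(-Sinv));  # should also return true, so inverse(S) is skew-symmetric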

Theorem b) If A and B are skew-symmetric then A^T, A+B, AB-BA, and kA are skew-symmetric for every scalar k.

We want to prove that A^T is skew-symmetric, in other words that (A^T)^T = -A^T. But (A^T)^T = A by the theorem about transposes. Thus we need to prove that A = -A^T, which is just the definition of a skew-symmetric matrix rearranged. Therefore, A^T is skew-symmetric.
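
As a small spot check of this part in Maple (the matrix M is just a sample skew-symmetric matrix of our own choosing):

>M:=matrix([[0,1,-2],[-1,0,3],[2,-3,0]]);
>equal(transpose(transpose(M)),evalm(-transpose(M)));  # checks (M^T)^T = -(M^T), should return true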

We want to prove that A+B is skew-symmetric if A and B are both skew-symmetric. By definition A^T = -A and B^T = -B. We want to show that (A+B)^T = -(A+B). By a theorem about transposes, (A+B)^T = A^T + B^T. So (A+B)^T = A^T + B^T = -A - B = -(A+B). So A+B is skew-symmetric.
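
We can also check the sum numerically in Maple; M and N below are just two sample skew-symmetric matrices, not part of the problem.

>M:=matrix([[0,1,2],[-1,0,3],[-2,-3,0]]); N:=matrix([[0,4,-1],[-4,0,5],[1,-5,0]]);
>S:=evalm(M+N);
>equal(transpose(S),evalm(-S));   # should return true, so M+N is skew-symmetric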

We want to prove that AB-BA is skew-symmetric if A and B are both skew-symmetric. We want to show that (AB-BA)^T = -(AB-BA). (AB-BA)^T = (AB)^T - (BA)^T by a theorem about transposes. Then (AB)^T = B^T A^T and (BA)^T = A^T B^T by the theorem about the transpose of a product. Since A and B are skew-symmetric, A^T = -A and B^T = -B, so (AB-BA)^T = (-B)(-A) - (-A)(-B) = BA - AB = -(AB-BA). So AB-BA is skew-symmetric.
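
Again a quick numerical check in Maple, with the same sample matrices (3 x 3 and chosen so that they do not commute, so the commutator is not just the zero matrix):

>M:=matrix([[0,1,2],[-1,0,3],[-2,-3,0]]); N:=matrix([[0,4,-1],[-4,0,5],[1,-5,0]]);
>C1:=evalm(M &* N - N &* M);      # the commutator MN-NM
>equal(transpose(C1),evalm(-C1)); # should return true, so MN-NM is skew-symmetric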

We want to prove that kA is skew-symmetric for any scalar k if A is skew-symmetric. We want to show that (kA)^T = -(kA). (kA)^T = kA^T by a theorem about transposes. Since A is skew-symmetric, A^T = -A, so indeed (kA)^T = k(-A) = -(kA). So kA is skew-symmetric.
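
And a check of the scalar multiple, with the scalar 7 and the matrix M chosen arbitrarily:

>M:=matrix([[0,1,2],[-1,0,3],[-2,-3,0]]);
>K:=evalm(7*M);
>equal(transpose(K),evalm(-K));   # should return true, so 7*M is skew-symmetric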

Theorem c) We are asked to prove that every square matrix is the sum of a symmetric matrix and a skew-symmetric matrix. Take some square matrix A. We have that A = 1/2(A + A^T) + 1/2(A - A^T). We know that for a matrix to be symmetric it must equal its transpose, and for a matrix to be skew-symmetric it must equal the negative of its transpose. We want to prove that 1/2(A + A^T) is symmetric and 1/2(A - A^T) is skew-symmetric. So let us prove that 1/2(A + A^T) is symmetric. By the definition we stated above we take its transpose: (1/2(A + A^T))^T = 1/2(A^T + (A^T)^T) = 1/2(A^T + A) = 1/2(A + A^T). So indeed, 1/2(A + A^T) is symmetric. Now we must prove that 1/2(A - A^T) is skew-symmetric. By the definition we stated above, taking its transpose must give its negative. So (1/2(A - A^T))^T = 1/2(A^T - (A^T)^T) = 1/2(A^T - A) = -1/2(A - A^T). This means that 1/2(A - A^T) is skew-symmetric. So A is the sum of a symmetric and a skew-symmetric matrix.
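
We can illustrate the decomposition in Maple on one arbitrary square matrix (the entries of M below are just an example we made up):

>M:=matrix([[1,2,3],[4,5,6],[7,8,10]]);
>S:=evalm(1/2*(M+transpose(M)));   # the symmetric part
>K:=evalm(1/2*(M-transpose(M)));   # the skew-symmetric part
>equal(transpose(S),S), equal(transpose(K),evalm(-K)), equal(evalm(S+K),M);   # all three should return true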

Problem 2: We are asked to find the determinant of an n x n matrix that has 0's on the diagonal and 1's everywhere else. We begin with what looks like a matrix of order 6, but since we cannot show the breaks in a general matrix, it should be read as standing in for the n x n case.


>A:=matrix([[0,1,1,1,1,1],[1,0,1,1,1,1],[1,1,0,1,1,1],[1,1,1,0,1,1],[1,1,1,1,0,1],[1,1,1,1,1,0]]);
A := [ 0 1 1 1 1 1 ]
[ 1 0 1 1 1 1 ]
[ 1 1 0 1 1 1 ]
[ 1 1 1 0 1 1 ]
[ 1 1 1 1 0 1 ]
[ 1 1 1 1 1 0 ]

We know that if we reduce the matrix to the identity matrix, then the product of all the values we pull out along the way will give us our determinant, because adding one row to another does not change the determinant, while factoring a scalar out of a row contributes that scalar as a factor of the determinant. Let us begin this process. We begin by adding all the rows below row one to row one. For the first entry we have n-1 1's that we add to 0. The second entry and each entry after it in row one has n-2 1's (and one 0) added to its 1, so they all will have n-1 as their entry. Knowing this, we get the following matrix after adding all the rows below row one to row one.
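
Before writing down the resulting matrix in general form, we can check the operation concretely in Maple on the 6 x 6 matrix A above (T is just a scratch name we introduce):

>T:=copy(A):
>for i from 2 to 6 do T:=addrow(T,i,1,1) od:   # add each of rows 2..6 to row 1
>evalm(T);   # row one should now read 5 5 5 5 5 5, that is n-1 everywhere with n = 6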


>B:=matrix([[n-1,n-1,n-1,n-1,n-1,n-1],[1,0,1,1,1,1],[1,1,0,1,1,1],[1,1,1,0,1,1],[1,1,1,1,0,1],[1,1,1,1,1,0]]);
B := [ n-1 n-1 n-1 n-1 n-1 n-1 ]
[ 1 0 1 1 1 1 ]
[ 1 1 0 1 1 1 ]
[ 1 1 1 0 1 1 ]
[ 1 1 1 1 0 1 ]
[ 1 1 1 1 1 0 ]

Once again this may look like a matrix of order 6, but in actuality it is an n x n matrix. Now we can factor out n-1 from row one, remembering that this contributes a factor of n-1 to the determinant, to get a matrix like this:


>C:=matrix([[1,1,1,1,1,1],[1,0,1,1,1,1],[1,1,0,1,1,1],[1,1,1,0,1,1],[1,1,1,1,0,1],[1,1,1,1,1,0]]);
C := [ 1 1 1 1 1 1 ]
[ 1 0 1 1 1 1 ]
[ 1 1 0 1 1 1 ]
[ 1 1 1 0 1 1 ]
[ 1 1 1 1 0 1 ]
[ 1 1 1 1 1 0 ]
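
We can check this factoring step in Maple on the symbolic matrix B above: scaling row one of B by 1/(n-1) should give back the matrix C of 1's and 0's (and, by the rules for determinants, det(B) = (n-1)*det(C)).

>equal(mulrow(B,1,1/(n-1)),C);   # should return true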

Now if we subtract row one from all the rows below it we get the following matrix:


>D1:=matrix([[1,1,1,1,1,1],[0,-1,0,0,0,0],[0,0,-1,0,0,0],[0,0,0,-1,0,0],[0,0,0,0,-1,0],[0,0,0,0,0,-1]]);   # the name D is protected in Maple, so we call this one D1
D1 := [ 1 1 1 1 1 1 ]
[ 0 -1 0 0 0 0 ]
[ 0 0 -1 0 0 0 ]
[ 0 0 0 -1 0 0 ]
[ 0 0 0 0 -1 0 ]
[ 0 0 0 0 0 -1 ]
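
This subtraction step can likewise be confirmed in Maple, starting from C (T2 is just a scratch name):

>T2:=copy(C):
>for i from 2 to 6 do T2:=addrow(T2,1,i,-1) od:   # subtract row 1 from each of rows 2..6
>equal(T2,D1);   # should return true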

If we add all the rows below row one to row one, we get the following matrix:


>F:=matrix([[1,0,0,0,0,0],[0,-1,0,0,0,0],[0,0,-1,0,0,0],[0,0,0,-1,0,0],[0,0,0,0,-1,0],[0,0,0,0,0,-1]]);
F := [ 1 0 0 0 0 0 ]
[ 0 -1 0 0 0 0 ]
[ 0 0 -1 0 0 0 ]
[ 0 0 0 -1 0 0 ]
[ 0 0 0 0 -1 0 ]
[ 0 0 0 0 0 -1 ]

Now to get the identity matrix we must factor out all the -1's. Knowing that the matrix is an n x n matrix and that every row except the first has a -1, we can say that we need to factor out n-1 factors of -1, that is, a factor of (-1)^(n-1). The determinant is the product of everything we factored out, since the row additions we did do not affect the determinant by the rules we know about determinants. So the determinant of an n x n matrix with 0's on the diagonal and 1's everywhere else is (n-1)*(-1)^(n-1).
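
As a final check, we can have Maple compute the determinant of this pattern for several small n and compare it with the formula. The helper name Zpattern is just something we made up for the matrix with 0's on the diagonal and 1's everywhere else.

>Zpattern:=n->matrix(n,n,(i,j)->piecewise(i=j,0,1)):
>seq(evalb(det(Zpattern(n))=(n-1)*(-1)^(n-1)),n=2..7);   # should give a sequence of true's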