Matrix Determinant

The determinant of a square matrix (usually denoted by writing the matrix between vertical bars) is the algebraic sum of all possible products formed by taking exactly one factor from each row and each column, so that the number of factors in each product equals the number of rows (or columns). The sign of a given product is positive or negative according to whether the number of pairwise interchanges needed to put the indices of its factors into the order of the natural numbers is even or odd.
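
Written compactly, this is the Leibniz formula for the determinant (standard notation, not specific to this page): the sum runs over all permutations σ, where σ(j) is the row supplying the factor for column j, and sgn(σ) is +1 for an even permutation and −1 for an odd one.

\det(A) \;=\; \sum_{\sigma} \operatorname{sgn}(\sigma)\, a_{\sigma(1)\,1}\, a_{\sigma(2)\,2} \cdots a_{\sigma(n)\,n}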

Example: consider this 4x4 matrix:

A = [aij] =

    ⎡ a11  a12  a13  a14 ⎤
    ⎢ a21  a22  a23  a24 ⎥
    ⎢ a31  a32  a33  a34 ⎥
    ⎣ a41  a42  a43  a44 ⎦

4P4 = 24 (there are 24 permutations of four things taken four at a time), so there are 24 products in which each factor is taken from a different row and column.  If the four factors are taken from successive columns and from rows 1, 2, 3, 4 in that order, the product is added to the determinant.  If two rows are interchanged (e.g. rows 1, 2, 4, 3), then the product is subtracted.  Another way to look at it is this: if an even number of pairwise interchanges is needed to restore the rows to 1, 2, 3, 4 order, the product is added to the determinant; if an odd number is needed, the product is subtracted.

In each product below, the four factors are taken from successive columns.  If the order of the rows was achieved by an even number of pairwise interchanges, a "plus" sign is attached to the product; if by an odd number of pairwise interchanges, a "minus" sign is attached:

|A| = 

a11a22a33a44
-a11a22a34a43
+a11a23a34a42
-a11a23a32a44
+a11a24a32a43
-a11a24a33a42
-a12a23a34a41
+a12a23a31a44
-a12a24a31a43
+a12a24a33a41
-a12a21a33a44
+a12a21a34a43
+a13a24a31a42
-a13a24a32a41
+a13a21a32a44
-a13a21a34a42
+a13a22a34a41
-a13a22a31a44
-a14a21a32a43
+a14a21a33a42
-a14a22a33a41
+a14a22a31a43
-a14a23a31a42
+a14a23a32a41
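
If you want to check these 24 terms (or compute a small determinant directly from the definition), here is a brute-force sketch in Python.  It is only an illustration of the permutation rule, since the amount of work grows like n!:

from itertools import permutations

def det_by_permutations(A):
    # A is a list of n rows, each a list of n numbers.
    n = len(A)
    total = 0
    for rows in permutations(range(n)):   # rows[j] is the row supplying the factor for column j
        # Count the pairwise interchanges (inversions) needed to restore natural row order.
        inversions = sum(1 for j in range(n) for k in range(j + 1, n) if rows[j] > rows[k])
        sign = -1 if inversions % 2 == 1 else 1
        product = 1
        for j in range(n):
            product *= A[rows[j]][j]      # one factor from each column, each from a different row
        total += sign * product
    return total

# Example: a diagonal 4x4 matrix, whose determinant is just the product of the diagonal entries.
A = [[2, 0, 0, 0],
     [0, 3, 0, 0],
     [0, 0, 4, 0],
     [0, 0, 0, 5]]
print(det_by_permutations(A))   # prints 120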

Algorithm

To find the determinant of larger and larger matrices by the method described so far, you would have to contend with the faster-than-exponential growth of the number of permutations: an n×n matrix gives n! products.  So it is important to realize that the determinant of a matrix [aij] is equal to

a11 multiplied by the determinant of the matrix containing all but the row and column containing a11,

minus a12 multiplied by the determinant of the matrix containing all but the row and column containing a12,

plus a13 multiplied by the determinant of the matrix containing all but the row and column containing a13,

etc.

This suggests a recursive algorithm for finding the determinant of a matrix A of order n (here expanding down the first column rather than across the first row, which works just as well):

If n == 1 then return a11
Else {
    set c = 0
    for i = 1 to n {
        matrix B = matrix A with row i and column 1 deleted
        c = c + (-1)^(i-1) * ai1 * |B|
    }
    return c
}
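
Here is the same recursion written out as runnable Python (my own sketch; it uses 0-based indices, so the sign (-1)^(i-1) becomes (-1)**i):

def minor(A, i, j):
    # Return matrix A with row i and column j deleted (0-based indices).
    return [row[:j] + row[j + 1:] for k, row in enumerate(A) if k != i]

def det(A):
    n = len(A)
    if n == 1:
        return A[0][0]                       # base case: the determinant of [a11] is a11
    c = 0
    for i in range(n):                       # expand down the first column
        B = minor(A, i, 0)
        c += (-1) ** i * A[i][0] * det(B)    # (-1)^(i-1) in the 1-based notation above
    return c

# Example:
print(det([[1, 2],
           [3, 4]]))   # prints 1*4 - 2*3 = -2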

Why is the determinant useful?

The determinant has a number of interesting properties.  If a square matrix represents the coefficients of a system of linear equations, then the determinant can tell you whether the equations are independent, meaning that no equation (row) is a linear combination of the others.  If the equations are independent, then the determinant of the matrix is non-zero.  If the equations are not independent, then the determinant of the matrix is zero.

In a sense, the determinant measures the "size" of a matrix, and this size respects multiplication: if a = |A| and b = |B|, then |AB| = |BA| = ab.

Thus, even though AB ≠ BA in general, because the commutative property doesn't hold for matrix multiplication, the determinants of these two matrices (AB and BA) are the same, and they equal the product of the determinants of the two factor matrices.

Another area where the determinant comes in handy is finding the inverse of a matrix.  If B is the inverse of A, then AB = BA = I, the identity matrix.  As before, let a = |A| and b = |B|.  Then, since the determinant is "preserved" by matrix multiplication and |I| = 1, it follows from AB = I that ab = 1.  So a matrix can't have an inverse if its determinant is zero.  Conversely, if the determinant is not zero, then the matrix has an inverse.
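
These properties are easy to check numerically.  The sketch below uses the third-party NumPy library (its numpy.linalg.det and numpy.linalg.inv routines); the particular matrices are just examples chosen for illustration:

import numpy as np

A = np.array([[2.0, 1.0],
              [5.0, 3.0]])     # |A| = 2*3 - 1*5 = 1
B = np.array([[1.0, 4.0],
              [2.0, 9.0]])     # |B| = 1*9 - 4*2 = 1

# |AB| = |BA| = |A|*|B|, even though AB and BA are generally different matrices.
print(np.linalg.det(A @ B), np.linalg.det(B @ A), np.linalg.det(A) * np.linalg.det(B))

# A matrix whose third row is the sum of its first two rows is not independent,
# so its determinant is zero (up to floating-point rounding) and it has no inverse.
S = np.array([[1.0, 2.0, 3.0],
              [4.0, 5.0, 6.0],
              [5.0, 7.0, 9.0]])
print(np.linalg.det(S))

# A has a non-zero determinant, so it does have an inverse.
print(np.linalg.inv(A))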

Related Pages in this website

Matrix Definitions

Cramer's Rule

Triangle Area using Determinant

Vectors -- the "dot" product and the "cross" product, explained.

 


The webmaster and author of this Math Help site is Graeme McRae.