Part of a series on Exterior Algebra:

  1. Multi-indices
  2. (This post)

Laplace Expansion for the Determinant

With the basics of wedge products adequately notated in the previous post, we’ll now turn our attention to the “Laplace Expansion” of the determinant of a matrix. This is not so interesting on its own, but will be a stepping stone to the full matrix inverse.

I mentioned already that the determinant could be written as the action of $A$ on each vector in the $n$-volume $e_{1 \cdots n}$, which becomes a wedge product of the columns of $A$:

$$\det(A)\, e_{1 \cdots n} = A^{\wedge n}\, e_{1 \cdots n} = (A e_1) \wedge (A e_2) \wedge \cdots \wedge (A e_n)$$

This expression is clearly linear in the components of each column (that is, with respect to any of the $A e_j$); we should therefore be able to write it as some kind of scalar product with that column alone.

Here we’ll bring in a new notation: the star basis element $e_{\star i} = \star e_i$, where $\star$ is the Hodge star. The specific set of indices entailed by $\star i$ is “any ordering of all of the indices except for $i$ such that $e_i \wedge e_{\star i} = e_{1 \cdots n}$”. So $e_{\star 1}$ could be $e_{2 3 \cdots n}$, but any other sequence with the same overall sign works too. When working in the “combination basis” of multivectors (see the previous post in this series) we don’t care which set of indices is used to refer to an equivalent multivector.

These expressions are easy to use, because the Hodge star operator acts predictably on their indices:

$$\star e_i = e_{\star i} \qquad e_i \wedge e_{\star i} = e_{1 \cdots n}$$

I’m currently thinking of $\star i$ itself standing for a specific multi-index like $(2, 3, \ldots, n)$, but we won’t write the explicit wedge product in $e_{\star i}$. A star index is always wedged, since the Hodge star is defined in terms of wedge products anyway. This will mess up the Einstein notation though; we will have to think of an index $i$ as pairing with a $\star i$. Most of the time we’ll be pairing a $\star i$ with a $\star j$, which will be better behaved.
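
As a concrete check on the sign convention (a worked example of my own), take $n = 4$:

$$e_{\star 2} = -\,e_{134} = e_{314}\,, \qquad \text{since} \quad e_2 \wedge e_{314} = e_{2314} = e_{1234}$$

Any reordering of $\{1, 3, 4\}$ works here, so long as the attached sign keeps $e_2 \wedge e_{\star 2} = e_{1234}$.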

When I want to refer to star-basis matrix elements of wedge powers of $A$, I will write the wedge power explicitly1 as $(A^{\wedge n-1})^{\star i}_{\star j}$.

Using the star-basis the above becomes:

$$\det(A)\, e_{1 \cdots n} = A_j \wedge A_{\star j}$$

Here the subscript on the matrix $A_{\star j}$ stands for a wedge product of $n-1$ of its columns: $A_{\star j} = A^{\wedge n-1}\, e_{\star j}$, while $A_j = A e_j$ is a single column. Note that this expression is not Einstein-summed over $j$; $j$ is the index of the column we are expanding with respect to.

The previous expression can then be expanded in the tensor basis (or you could use the combination basis from the previous post to jump straight to the answer):

$$A_{\star j} = (A e_{j_1}) \wedge \cdots \wedge (A e_{j_{n-1}}) = A^{k_1}_{~j_1} \cdots A^{k_{n-1}}_{~j_{n-1}}\; e_{k_1} \wedge \cdots \wedge e_{k_{n-1}}$$

Here $(j_1, \ldots, j_{n-1}) = \star j$. The basis element $e_{k_1 \cdots k_{n-1}}$ will contain every index except some $i$, and will be related to the “canonical” order $e_{\star i}$ by a sign which can be represented by the Levi-Civita symbol $\epsilon$:

$$e_{k_1 \cdots k_{n-1}} = \epsilon_{i\, k_1 \cdots k_{n-1}}\, e_{\star i}$$

The $\epsilon$ acts like a sum over $i$ in Einstein notation. The Levi-Civita symbol could also be written as $\epsilon_{i\,\star i}$ or $\epsilon_{i\, k_1 \cdots k_{n-1}}$.

Note that while $(A^{\wedge n-1})^{\star i}_{\star j}$ stands for a product of matrix elements $A^k_{~l}$, the indices in $\star j$ should not be thought of as coming in any particular order out of all the positively-signed possibilities. This expression is still valid, though, because the opposite index is antisymmetrized; all the possible orderings are summed together anyway.

We can then use $e_{k_1 \cdots k_{n-1}} = \epsilon_{i\, k_1 \cdots k_{n-1}}\, e_{\star i}$ to get:

$$A_{\star j} = \epsilon_{i\, k_1 \cdots k_{n-1}}\, A^{k_1}_{~j_1} \cdots A^{k_{n-1}}_{~j_{n-1}}\; e_{\star i} = (A^{\wedge n-1})^{\star i}_{\star j}\; e_{\star i}$$

This therefore gives us the determinant as the inner product of the column with a vector whose components are $(A^{\wedge n-1})^{\star i}_{\star j}$:

$$\det(A) = A_j \cdot \big[ (A^{\wedge n-1})^{\star i}_{\star j}\, e_i \big] = A^i_{~j}\, (A^{\wedge n-1})^{\star i}_{\star j}$$

This is the “Laplace Expansion” for the determinant in terms of the column $A_j$, with $(A^{\wedge n-1})^{\star i}_{\star j}$ as the “cofactor” of $A^i_{~j}$.

If we use $\epsilon$ to antisymmetrize indices we can make this very succinct, but with some questionable indexing:

$$\det(A) = \frac{1}{(n-1)!}\, A^{i}_{~j}\; \epsilon_{i\, k_1 \cdots k_{n-1}}\, \epsilon^{j\, l_1 \cdots l_{n-1}}\, A^{k_1}_{~l_1} \cdots A^{k_{n-1}}_{~l_{n-1}} \qquad (\text{no sum over } j)$$

The “Laplace expansion in rows” would be $\det(A) = \sum_j A^i_{~j}\, (A^{\wedge n-1})^{\star i}_{\star j}$ for a fixed row $i$, by analogy.
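
As a quick sanity check (mine), take $n = 2$, where $e_{\star 1} = e_2$ and $e_{\star 2} = -e_1$. Then $A_{\star 1} = A e_2 = A^2_{~2}\, e_{\star 1} - A^1_{~2}\, e_{\star 2}$, so the cofactors of the first column are $(A^{\wedge 1})^{\star 1}_{\star 1} = A^2_{~2}$ and $(A^{\wedge 1})^{\star 2}_{\star 1} = -A^1_{~2}$, and the expansion reproduces the familiar

$$\det(A) = A^1_{~1} A^2_{~2} - A^2_{~1} A^1_{~2}$$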

Normally one sees some extra minus-signs in the cofactor $(A^{\wedge n-1})^{\star i}_{\star j}$, because the normal expression is given in terms of matrix minors. These are equivalent to $(A^{\wedge n-1})^{\hat i}_{\hat j}$. This differs from the $\star i$-index in that the vector $e_i$ has been removed “in place” rather than first being transposed to the front of the list. We can represent this with another special multi-index $\hat i$, called “strike-$i$”. The two differ by a sign, since it takes $i-1$ transpositions to move $e_i$ from the front of the list to its normal position:

$$e_{\star i} = (-1)^{i-1}\, e_{\hat i}$$

Therefore:

$$(A^{\wedge n-1})^{\star i}_{\star j} = (-1)^{i-1} (-1)^{j-1}\, (A^{\wedge n-1})^{\hat i}_{\hat j} = (-1)^{i+j}\, (A^{\wedge n-1})^{\hat i}_{\hat j}$$

The cofactor can be seen as the matrix element $(A^{\wedge n-1})^{\star i}_{\star j}$: the $e_{\star i}$ component of the action of $A^{\wedge n-1}$ on the specific volume $e_{\star j}$. We can find the sign of the appropriate strike-basis matrix element by:

$$(A^{\wedge n-1})^{\star i}_{\star j} = e^{\star i} \cdot \big( A^{\wedge n-1}\, e_{\star j} \big) = (-1)^{i+j}\; e^{\hat i} \cdot \big( A^{\wedge n-1}\, e_{\hat j} \big)$$

In a typical notation $(A^{\wedge n-1})^{\hat i}_{\hat j}$ would be thought of as the “determinant of a minor” and might be written $M_{ij}$, in which case we have:

$$(A^{\wedge n-1})^{\star i}_{\star j} = (-1)^{i+j}\, M_{ij}$$

which is the typical formula for the cofactor.
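
To keep the bookkeeping honest, here’s a minimal numerical sketch (my own, not part of the derivation) that computes the determinant both ways: as the explicit Levi-Civita sum over permutations, and as a Laplace expansion down a chosen column with $(-1)^{i+j}$-signed minors. All function names are made up for illustration.

```python
import itertools
import math

def perm_sign(p):
    """Sign of a permutation, counted via inversions."""
    inversions = sum(1 for a in range(len(p)) for b in range(a + 1, len(p))
                     if p[a] > p[b])
    return -1 if inversions % 2 else 1

def det_levi_civita(A):
    """det(A) = sum over permutations p of sign(p) * prod_i A[p(i)][i]."""
    n = len(A)
    return sum(perm_sign(p) * math.prod(A[p[i]][i] for i in range(n))
               for p in itertools.permutations(range(n)))

def minor(A, i, j):
    """Strike out row i and column j 'in place' (the strike-index picture)."""
    return [[A[r][c] for c in range(len(A)) if c != j]
            for r in range(len(A)) if r != i]

def det_laplace(A, j=0):
    """Laplace expansion down column j: det(A) = sum_i A[i][j] (-1)^(i+j) M_ij."""
    if len(A) == 1:
        return A[0][0]
    return sum(A[i][j] * (-1) ** (i + j) * det_laplace(minor(A, i, j))
               for i in range(len(A)))

A = [[2, 0, 1], [1, 3, -1], [0, 5, 4]]
assert det_levi_civita(A) == det_laplace(A, j=1) == 39
```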

We could have also arrived here via the Hodge star identity

$$u \wedge w = \langle \star u,\, w \rangle\, e_{1 \cdots n}$$

(for a vector $u$ and an $(n-1)$-vector $w$). This would give

$$\det(A) = \langle \star A_j,\; A^{\wedge n-1}\, (\star e_j) \rangle = \langle A_j,\; (\star^{-1} A^{\wedge n-1} \star)\, e_j \rangle$$

which tells us that the cofactor matrix is equivalent to:

$$\operatorname{cof}(A) = \star^{-1}\, A^{\wedge n-1}\, \star$$

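For $n = 2$, where $\star$ on vectors is just a 90-degree rotation, this can be checked by hand (my own check): using $\star e_1 = e_2$ and $\star e_2 = -e_1$,

$$\star^{-1} \begin{pmatrix} a & b \\ c & d \end{pmatrix} \star = \begin{pmatrix} d & -c \\ -b & a \end{pmatrix} = \operatorname{cof} \begin{pmatrix} a & b \\ c & d \end{pmatrix}$$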

The Matrix Inverse via Cofactors

The Laplace Expansion just shown is very nearly the matrix inverse already. We had

$$\det(A)\, e_{1 \cdots n} = A_j \wedge A_{\star j}$$

and also that this expression was equal to

$$\det(A)\, e_{1 \cdots n} = A^i_{~j}\, (A^{\wedge n-1})^{\star i}_{\star j}\; e_{1 \cdots n}$$

Together these imply that

$$A_k \wedge A_{\star j} = A^i_{~k}\, (A^{\wedge n-1})^{\star i}_{\star j}\; e_{1 \cdots n}$$

Therefore the scalar product of the cofactor vector $(A^{\wedge n-1})^{\star i}_{\star j}$ with any other column $A_k$, $k \neq j$, is zero, since all the other columns are already represented in $A_{\star j}$. Thus these obey

$$A^i_{~k}\, (A^{\wedge n-1})^{\star i}_{\star j} = \det(A)\, \delta_{jk}$$

which makes $(A^{\wedge n-1})^{\star i}_{\star j} / \det(A)$ the dual basis vector to $A_j$, and therefore a row of the inverse matrix:

$$(A^{-1})^j_{~i} = \frac{(A^{\wedge n-1})^{\star i}_{\star j}}{\det(A)}$$

This gives the adjugate matrix as the transpose of the cofactor matrix:

$$\operatorname{adj}(A)^j_{~i} = \operatorname{cof}(A)^i_{~j} = (A^{\wedge n-1})^{\star i}_{\star j}$$

where, again, each $(A^{\wedge n-1})^{\star i}_{\star j}$ is a product of elements of the original matrix $A$. (Strictly speaking I should be using positional indices and $\delta$ factors to swap these indices, but I won’t bother / don’t completely understand this.)

And the inverse matrix itself is just:

$$A^{-1} = \frac{\operatorname{adj}(A)}{\det(A)}$$
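
As a numerical sanity check of the cofactor story (again my own sketch, reusing the `minor` and `det_laplace` helpers from the earlier snippet):

```python
def adjugate(A):
    """adj(A)[j][i] = cofactor C[i][j] = (-1)^(i+j) * M_ij; note the transpose."""
    n = len(A)
    return [[(-1) ** (i + j) * det_laplace(minor(A, i, j)) for i in range(n)]
            for j in range(n)]

def inverse(A):
    """A^{-1} = adj(A) / det(A)."""
    d = det_laplace(A)
    return [[entry / d for entry in row] for row in adjugate(A)]

A = [[2, 0, 1], [1, 3, -1], [0, 5, 4]]
Ainv = inverse(A)
# A times its inverse should come out to the identity:
for i in range(3):
    for j in range(3):
        entry = sum(A[i][k] * Ainv[k][j] for k in range(3))
        assert abs(entry - (1.0 if i == j else 0.0)) < 1e-12
```
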
I’m including these derivations mainly to demonstrate the index notation, and as a reference. As such I won’t consider, in this post, the complicated cases of non-square matrices. Still I find the $\star$-basis derivations to be substantially more satisfying than whatever I learned in school: everything follows straightforwardly from the properties of $\wedge$.


The Matrix Inverse via Cramer’s Rule

Now for another matrix inverse. This time we start with the equation

$$A \mathbf{x} = \mathbf{b}$$

This amounts to the statement that $\mathbf{x}$ gives the coordinates of $\mathbf{b}$ in the basis of the columns of $A$. We can therefore write this coordinate expression:

$$\mathbf{b} = x^j A_j = x^1 A_1 + x^2 A_2 + \cdots + x^n A_n$$

Now, wedge both sides of this with the product of the other $n-1$ columns, omitting column $j$ and keeping them in the $\star j$ order:

$$\mathbf{b} \wedge A_{\star j} = x^j\, A_j \wedge A_{\star j}$$

Every term except the $x^j$ one drops out, and the remaining component is:

$$x^j = \frac{\mathbf{b} \wedge A_{\star j}}{A_j \wedge A_{\star j}} = \frac{\mathbf{b} \wedge A_{\star j}}{\det(A)\, e_{1 \cdots n}}$$

We can rewrite the numerator using $\star$ to cancel the $e_{1 \cdots n}$s:

$$x^j = \frac{\star(\mathbf{b} \wedge A_{\star j})}{\det(A)} = \frac{b^i\, (A^{\wedge n-1})^{\star i}_{\star j}}{\det(A)}$$

And, undoing the $b^i$, we once again get the $j$th row of the matrix inverse as the transpose of this thing:

$$(A^{-1})^j_{~i} = \frac{\delta^{jk}\, \delta_{il}\, (A^{\wedge n-1})^{\star l}_{\star k}}{\det(A)}$$

and

$$x^j = (A^{-1})^j_{~i}\, b^i$$

(I think I have to use $\delta$s like that to get the index positions to work.)

Going back to the expression

$$x^j = \frac{\star(\mathbf{b} \wedge A_{\star j})}{\det(A)}$$

Here I’ve used the $\star$-basis to simplify the derivation. The usual Cramer formula thinks of transposing $\mathbf{b}$ into the position originally occupied by $A_j$. But these are exactly the transpositions that relate $A_{\hat j}$ to $A_{\star j}$, so no extra signs are needed.

The usual Cramer formula then interprets the numerator of this expression as the “determinant of the matrix with the $j$th column replaced by $\mathbf{b}$”. In this formulation, there’s no need to bother.
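
In code, the “replace column $j$” reading of Cramer’s rule is a one-liner per component (my sketch, with the same helpers as before); numerically it computes exactly $\star(\mathbf{b} \wedge A_{\star j}) / \det(A)$:

```python
def cramer_solve(A, b):
    """x[j] = det(A with column j replaced by b) / det(A)."""
    n = len(A)
    d = det_laplace(A)
    return [det_laplace([[b[r] if c == j else A[r][c] for c in range(n)]
                         for r in range(n)]) / d
            for j in range(n)]

A = [[2, 0, 1], [1, 3, -1], [0, 5, 4]]
b = [3, 3, 9]  # chosen so that the solution is (1, 1, 1)
assert cramer_solve(A, b) == [1.0, 1.0, 1.0]
```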


The Matrix Inverse via a Homogeneous Matrix

A third method. We write the matrix equation $A \mathbf{x} = \mathbf{b}$ as an $(n+1)$-dimensional “homogeneous matrix” equation:

$$M \begin{pmatrix} \mathbf{x} \\ 1 \end{pmatrix} = \begin{pmatrix} A & -\mathbf{b} \end{pmatrix} \begin{pmatrix} \mathbf{x} \\ 1 \end{pmatrix} = 0$$

Then the plan will be: first we take the $n$th wedge power of this thing, which will map from the $(n+1)$-dimensional space $\Lambda^n \mathbb{R}^{n+1}$ to the 1-dimensional space $\Lambda^n \mathbb{R}^n$. The components of this mapping on the input space represent a single grade-$n$ multivector which describes an $n$-dimensional subspace of $\mathbb{R}^{n+1}$: the subspace spanned by the rows of $M$. The vector $(\mathbf{x}, 1)$ is not in this subspace, since $M (\mathbf{x}, 1)^T = 0$, and so it must be proportional to the complement of this space, and because one component is known, we can solve for the constant of proportionality.

To be explicit, I’ll use bases $f_i$ on $\mathbb{R}^n$ (though we’ll only need $f_{1 \cdots n}$) and $e_i$ on the $\mathbb{R}^{n+1}$ space. Then we can write $M^{\wedge n}$ as a tensor product:

$$M^{\wedge n} = f_{1 \cdots n} \otimes \left[ (M^{\wedge n})^{1 \cdots n}_{~I}\; e^I \right]$$

The components of $M^{\wedge n}$ can each be found by taking wedge products of $n$ columns of $M$. The term for $I = (1 \cdots n)$ will be the product of the first $n$ columns, which are simply the matrix $A$ and will give $\det(A)$. The other values of $I$ will have terms consisting of $n-1$ columns of $A$ wedged with $-\mathbf{b}$. Working it out:

$$M^{\wedge n} = f_{1 \cdots n} \otimes \left[ \det(A)\, e^{1 \cdots n} + \sum_j \star\big( A_{\hat j} \wedge (-\mathbf{b}) \big)\, e^{(\hat j,\, n+1)} \right]$$

Here $(\hat j,\, n+1)$ is the multi-index which strikes $j$ out of $(1 \cdots n)$ and appends $n+1$.

There are a couple of signs we need to get right.

  • $A_{\hat j} \wedge (-\mathbf{b})$ is a wedge in $f$-space. We’d like to extract the coefficient $\star(\mathbf{b} \wedge A_{\star j})$ from this, but to get the signs right we have to transpose $-\mathbf{b}$ to the left $n-1$ times to put it in the front of the wedge product, which lets us write $\star\big( A_{\hat j} \wedge (-\mathbf{b}) \big) = (-1)^{n} (-1)^{j-1}\, \star(\mathbf{b} \wedge A_{\star j})$.
  • It will be helpful to replace the expression $e^{(\hat j,\, n+1)}$ with $e^{\star j}$. Since this is $e$-space, it will take $j-1$ transpositions to move $j$ back to its normal spot in the line. So we need to include a sign $(-1)^{j-1}$; similarly $e^{1 \cdots n} = (-1)^n\, e^{\star(n+1)}$.

With those we can move $\mathbf{b}$ and all of the minus signs to the front:

$$M^{\wedge n} = (-1)^n\; f_{1 \cdots n} \otimes \left[ \det(A)\, e^{\star(n+1)} + \sum_j \star(\mathbf{b} \wedge A_{\star j})\, e^{\star j} \right]$$

It’s not too pretty, but that object on the right represents the “span of the rows of $M$” in $\mathbb{R}^{n+1}$.

We can now drop the tensor product and signs from this whole expression. To get the complement we simply un-star the basis (dual) vectors, and let’s assume we can lower all covector indices, although I don’t completely understand what this means here. Then we must have

$$\begin{pmatrix} \mathbf{x} \\ 1 \end{pmatrix} \propto \det(A)\, e_{n+1} + \sum_j \star(\mathbf{b} \wedge A_{\star j})\, e_j$$

And we find:

$$x^j = \frac{\star(\mathbf{b} \wedge A_{\star j})}{\det(A)}$$

This argument would be shorter if we didn’t bother with all of the covectors. I needed to write those to see what was happening, but the final result is actually a very simple operation on the columns of $A$, equivalent of course to Cramer’s Rule.
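
And here is a sketch of the homogeneous recipe itself (my own code, same caveats): build the $n \times (n+1)$ block $[A \mid -\mathbf{b}]$, compute the complement of its row span as the vector of alternating-signed maximal minors (a generalized cross product of the rows), and normalize the last component to 1.

```python
def solve_homogeneous(A, b):
    """Solve Ax = b from the row span of the n x (n+1) matrix M = [A | -b].

    Component k of the complement of the row span is (-1)^k times the
    n x n minor of M with column k struck out; the resulting vector is
    proportional to (x, 1), so we normalize its last component to 1.
    """
    n = len(A)
    M = [A[r] + [-b[r]] for r in range(n)]
    complement = [(-1) ** k * det_laplace([[M[r][c] for c in range(n + 1) if c != k]
                                           for r in range(n)])
                  for k in range(n + 1)]
    return [ck / complement[n] for ck in complement[:n]]

A = [[2, 0, 1], [1, 3, -1], [0, 5, 4]]
b = [3, 3, 9]
assert solve_homogeneous(A, b) == cramer_solve(A, b) == [1.0, 1.0, 1.0]
```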

It is as if we simply had the two-dimensional vector equation:

$$\begin{pmatrix} a & -b \end{pmatrix} \begin{pmatrix} x \\ 1 \end{pmatrix} = 0$$

and then we wanted to invert this. We’d get:

$$\begin{pmatrix} x \\ 1 \end{pmatrix} \propto \begin{pmatrix} a \\ -b \end{pmatrix}^{\perp}$$

$(a, -b)^{\perp}$ would be a 90-degree rotation of the vector $(a, -b)$, i.e. $(b, a)$, and fixing one component to 1 would give our solution: $x = b/a$ and $1 = a/a$. It’s remarkable that the $n$-dimensional case ends up looking similar.
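
As a made-up numerical check: $3x = 6$ becomes the row $(3, -6)$, whose 90-degree rotation is $(6, 3)$; normalizing the last component to 1 gives $(2, 1)$, i.e. $x = 2$.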

Footnotes

  1. This is probably more verbose than is necessary. In our existing notation, it is already unambiguous for a single star index to stand for a wedge product of columns, like $A_{\star j}$. It would be a reasonable extension for a double star index $A^{\star i}_{\star j}$ to stand for a scalar component thereof. But this would probably be pushing it.