In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression of the determinant of an n × n matrix B as a weighted sum of minors, which are the determinants of some (n − 1) × (n − 1) submatrices of B. Specifically, for every i, the ''Laplace expansion along the i-th row'' is the equality

: \det(B) = \sum_{j=1}^{n} (-1)^{i+j} b_{i,j} m_{i,j},

where b_{i,j} is the entry of the i-th row and j-th column of B, and m_{i,j} is the determinant of the submatrix obtained by removing the i-th row and the j-th column of B. Similarly, the ''Laplace expansion along the j-th column'' is the equality

: \det(B) = \sum_{i=1}^{n} (-1)^{i+j} b_{i,j} m_{i,j}.

(Each identity implies the other, since the determinants of a matrix and its transpose are the same.)

The coefficient (-1)^{i+j} m_{i,j} of b_{i,j} in the above sum is called the cofactor of b_{i,j} in B.

The Laplace expansion is often useful in proofs, as, for example, in allowing recursion on the size of matrices. It is also of didactic interest for its simplicity and as one of several ways to view and compute the determinant. For large matrices, it quickly becomes inefficient to compute when compared to Gaussian elimination.
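The row expansion can be stated directly in code. The following sketch is our own illustration (the function names `minor`, `det`, and `laplace_row` are not from the article): it builds m_{i,j} by deleting one row and one column, and applies the formula above.

```python
def minor(B, i, j):
    # Submatrix with row i and column j removed (0-based indices).
    return [row[:j] + row[j + 1:] for k, row in enumerate(B) if k != i]

def det(B):
    # Reference determinant: recursive cofactor expansion along the first row.
    if len(B) == 1:
        return B[0][0]
    return sum((-1) ** j * B[0][j] * det(minor(B, 0, j)) for j in range(len(B)))

def laplace_row(B, i):
    # Laplace expansion along row i: sum over j of (-1)**(i+j) * b_ij * m_ij.
    return sum((-1) ** (i + j) * B[i][j] * det(minor(B, i, j))
               for j in range(len(B)))
```

Expanding along any row gives the same value; for example, with B = [[2, 0, 1], [1, 3, 0], [0, 1, 4]], laplace_row(B, i) returns 25 for every i.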


Examples

Consider the matrix

: B = \begin{pmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{pmatrix}.

The determinant of this matrix can be computed by using the Laplace expansion along any one of its rows or columns. For instance, an expansion along the first row yields:

: |B| = 1 \cdot \begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} - 2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 3 \cdot \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = 1 \cdot (-3) - 2 \cdot (-6) + 3 \cdot (-3) = 0.

Laplace expansion along the second column yields the same result:

: |B| = -2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 5 \cdot \begin{vmatrix} 1 & 3 \\ 7 & 9 \end{vmatrix} - 8 \cdot \begin{vmatrix} 1 & 3 \\ 4 & 6 \end{vmatrix} = -2 \cdot (-6) + 5 \cdot (-12) - 8 \cdot (-6) = 0.

It is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.
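As a quick programmatic check (our own illustration, not part of the article), the two expansions above reduce to arithmetic on 2 × 2 determinants:

```python
def det2(a, b, c, d):
    # Determinant of the 2x2 matrix [[a, b], [c, d]].
    return a * d - b * c

# Expansion of |B| along the first row.
row1 = 1 * det2(5, 6, 8, 9) - 2 * det2(4, 6, 7, 9) + 3 * det2(4, 5, 7, 8)

# Expansion of |B| along the second column.
col2 = -2 * det2(4, 6, 7, 9) + 5 * det2(1, 3, 7, 9) - 8 * det2(1, 3, 4, 6)

# Both equal 0, matching the hand computation.
```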


Proof

Suppose B is an ''n'' × ''n'' matrix and i, j \in \{1, 2, \ldots, n\}. For clarity we also label the entries of B that compose its i, j minor matrix M_{i,j} as (a_{s,t}) for 1 \le s, t \le n-1.

Consider the terms in the expansion of |B| that have b_{i,j} as a factor. Each has the form

: \sgn \tau \, b_{1,\tau(1)} \cdots b_{i,j} \cdots b_{n,\tau(n)} = \sgn \tau \, b_{i,j} a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)}

for some permutation \tau \in S_n with \tau(i) = j, and a unique and evidently related permutation \sigma \in S_{n-1} which selects the same minor entries as \tau. Similarly each choice of \sigma determines a corresponding \tau, i.e. the correspondence \sigma \leftrightarrow \tau is a bijection between S_{n-1} and \{\tau \in S_n : \tau(i) = j\}.

Using Cauchy's two-line notation, the explicit relation between \tau and \sigma can be written as

: \sigma = \begin{pmatrix} 1 & 2 & \cdots & i & \cdots & n-1 \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & (\leftarrow)_j(\tau(i+1)) & \cdots & (\leftarrow)_j(\tau(n)) \end{pmatrix}

where (\leftarrow)_j is a temporary shorthand notation for the cycle (n, n-1, \ldots, j+1, j). This operation decrements all indices larger than j so that every index fits in the set \{1, 2, \ldots, n-1\}.

The permutation \tau can be derived from \sigma as follows. Define \sigma' \in S_n by \sigma'(k) = \sigma(k) for 1 \le k \le n-1 and \sigma'(n) = n. Then \sigma' is expressed as

: \sigma' = \begin{pmatrix} 1 & 2 & \cdots & i & \cdots & n-1 & n \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & (\leftarrow)_j(\tau(i+1)) & \cdots & (\leftarrow)_j(\tau(n)) & n \end{pmatrix}

Now, the operation which applies (\leftarrow)_i first and then applies \sigma' is (notice that applying A before B is equivalent to applying the inverse of A to the upper row of B in two-line notation)

: \sigma' (\leftarrow)_i = \begin{pmatrix} 1 & 2 & \cdots & i+1 & \cdots & n & i \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & (\leftarrow)_j(\tau(i+1)) & \cdots & (\leftarrow)_j(\tau(n)) & n \end{pmatrix}

where (\leftarrow)_i is a temporary shorthand notation for the cycle (n, n-1, \ldots, i+1, i). The operation which applies \tau first and then applies (\leftarrow)_j is

: (\leftarrow)_j \tau = \begin{pmatrix} 1 & 2 & \cdots & i & \cdots & n-1 & n \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & n & \cdots & (\leftarrow)_j(\tau(n-1)) & (\leftarrow)_j(\tau(n)) \end{pmatrix}

The two permutations above are equal; thus

: (\leftarrow)_j \tau = \sigma' (\leftarrow)_i
: \tau = (\rightarrow)_j \sigma' (\leftarrow)_i

where (\rightarrow)_j is the inverse of (\leftarrow)_j, namely the cycle (j, j+1, \ldots, n). Thus

: \tau = (j, j+1, \ldots, n) \, \sigma' \, (n, n-1, \ldots, i).

Since the two cycles can be written respectively as n - i and n - j transpositions,

: \sgn \tau = (-1)^{2n - i - j} \sgn \sigma' = (-1)^{i+j} \sgn \sigma.

And since the map \sigma \leftrightarrow \tau is bijective,

: \sum_{i=1}^{n} \sum_{\substack{\tau \in S_n \\ \tau(i) = j}} \sgn \tau \, b_{1,\tau(1)} \cdots b_{n,\tau(n)} = \sum_{i=1}^{n} \sum_{\sigma \in S_{n-1}} (-1)^{i+j} \sgn \sigma \, b_{i,j} a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)} = \sum_{i=1}^{n} b_{i,j} (-1)^{i+j} \sum_{\sigma \in S_{n-1}} \sgn \sigma \, a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)} = \sum_{i=1}^{n} b_{i,j} (-1)^{i+j} M_{i,j},

from which the result follows. Similarly, the result holds if the index of the outer summation is replaced with j.
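The sign relation \sgn \tau = (-1)^{i+j} \sgn \sigma can be checked by brute force. The sketch below is our own illustration (the helper names sgn and check are ours): it enumerates every \tau \in S_n with \tau(i) = j, builds the corresponding \sigma by deleting position i and decrementing the values larger than j, and compares signs.

```python
from itertools import permutations

def sgn(p):
    # Sign of a permutation given as a tuple of 1-based values,
    # computed from the parity of its inversion count.
    inv = sum(p[a] > p[b] for a in range(len(p)) for b in range(a + 1, len(p)))
    return -1 if inv % 2 else 1

def check(n):
    # Verify sgn(tau) == (-1)**(i+j) * sgn(sigma) for all i, j and
    # every tau in S_n with tau(i) = j.
    for i in range(1, n + 1):
        for j in range(1, n + 1):
            for tau in permutations(range(1, n + 1)):
                if tau[i - 1] != j:
                    continue
                # sigma: delete position i, decrement values larger than j.
                sigma = tuple(v - 1 if v > j else v
                              for k, v in enumerate(tau) if k != i - 1)
                if sgn(tau) != (-1) ** (i + j) * sgn(sigma):
                    return False
    return True
```

check(4) exhausts all 4! permutations for every choice of i and j and returns True.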


Laplace expansion of a determinant by complementary minors

Laplace's cofactor expansion can be generalised as follows.


Example

Consider the matrix

: A = \begin{pmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{pmatrix}.

The determinant of this matrix can be computed by using Laplace's cofactor expansion along the first two rows as follows. First note that there are 6 sets of two distinct numbers in \{1, 2, 3, 4\}; let S = \{ \{1,2\}, \{1,3\}, \{1,4\}, \{2,3\}, \{2,4\}, \{3,4\} \} be the set of such pairs. By defining the complementary cofactors to be

: b_{\{j,k\}} = \begin{vmatrix} a_{1j} & a_{1k} \\ a_{2j} & a_{2k} \end{vmatrix},
: c_{\{p,q\}} = \begin{vmatrix} a_{3p} & a_{3q} \\ a_{4p} & a_{4q} \end{vmatrix},

and the sign of their permutation to be

: \varepsilon^{\{j,k\},\{p,q\}} = \sgn \begin{pmatrix} 1 & 2 & 3 & 4 \\ j & k & p & q \end{pmatrix}, \text{ where } p \neq j,\ q \neq k,

the determinant of A can be written out as

: |A| = \sum_{H \in S} \varepsilon^{H, H^c} b_H c_{H^c},

where H^c is the complementary set to H. In our explicit example this gives us

: |A| = b_{\{1,2\}} c_{\{3,4\}} - b_{\{1,3\}} c_{\{2,4\}} + b_{\{1,4\}} c_{\{2,3\}} + b_{\{2,3\}} c_{\{1,4\}} - b_{\{2,4\}} c_{\{1,3\}} + b_{\{3,4\}} c_{\{1,2\}}

: = \begin{vmatrix} 1 & 2 \\ 5 & 6 \end{vmatrix} \cdot \begin{vmatrix} 11 & 12 \\ 15 & 16 \end{vmatrix} - \begin{vmatrix} 1 & 3 \\ 5 & 7 \end{vmatrix} \cdot \begin{vmatrix} 10 & 12 \\ 14 & 16 \end{vmatrix} + \begin{vmatrix} 1 & 4 \\ 5 & 8 \end{vmatrix} \cdot \begin{vmatrix} 10 & 11 \\ 14 & 15 \end{vmatrix} + \begin{vmatrix} 2 & 3 \\ 6 & 7 \end{vmatrix} \cdot \begin{vmatrix} 9 & 12 \\ 13 & 16 \end{vmatrix} - \begin{vmatrix} 2 & 4 \\ 6 & 8 \end{vmatrix} \cdot \begin{vmatrix} 9 & 11 \\ 13 & 15 \end{vmatrix} + \begin{vmatrix} 3 & 4 \\ 7 & 8 \end{vmatrix} \cdot \begin{vmatrix} 9 & 10 \\ 13 & 14 \end{vmatrix}

: = (-4) \cdot (-4) - (-8) \cdot (-8) + (-12) \cdot (-4) + (-4) \cdot (-12) - (-8) \cdot (-8) + (-4) \cdot (-4) = 16 - 64 + 48 + 48 - 64 + 16 = 0.

As above, it is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.


General statement

Let B = [b_{i,j}] be an n × n matrix, S the set of k-element subsets of \{1, 2, \ldots, n\}, and H an element of it. Then the determinant of B can be expanded along the k rows identified by H as follows:

: |B| = \sum_{L \in S} \varepsilon^{H,L} b_{H,L} c_{H,L},

where \varepsilon^{H,L} is the sign of the permutation determined by H and L, equal to (-1)^{\left(\sum_{h \in H} h\right) + \left(\sum_{l \in L} l\right)}, b_{H,L} is the square k × k minor of B whose rows have indices in H and whose columns have indices in L, and c_{H,L} (called the complement of b_{H,L}) is defined to be b_{H',L'}, with H' and L' the complements of H and L respectively. This coincides with the theorem above when k = 1. The same expansion holds along any fixed set of k columns.
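A direct implementation can make the bookkeeping concrete. This is our own sketch (the names det, submatrix, and laplace_general are assumptions, not from the article); it uses 0-based index sums, whose parity equals that of the 1-based sums in the text because both shift by the even amount 2k.

```python
from itertools import combinations

def det(B):
    # Reference determinant via recursive first-row cofactor expansion.
    if len(B) == 1:
        return B[0][0]
    return sum((-1) ** j * B[0][j] * det([r[:j] + r[j + 1:] for r in B[1:]])
               for j in range(len(B)))

def submatrix(B, rows, cols):
    # Minor of B built from the given (0-based) row and column indices.
    return [[B[r][c] for c in cols] for r in rows]

def laplace_general(B, H):
    # Expand |B| along the rows with 0-based indices in H: sum over column
    # sets L of sign(H, L) * |b_{H,L}| * |b_{H',L'}|.  The 0-based index
    # sums have the same parity as the 1-based sums in the text.
    n, k = len(B), len(H)
    Hc = tuple(r for r in range(n) if r not in H)
    total = 0
    for L in combinations(range(n), k):
        Lc = tuple(c for c in range(n) if c not in L)
        sign = (-1) ** (sum(H) + sum(L))
        total += sign * det(submatrix(B, H, L)) * det(submatrix(B, Hc, Lc))
    return total
```

For the 4 × 4 matrix in the example above, laplace_general(A, (0, 1)) returns 0, and for a nonsingular matrix the expansion along any pair of rows agrees with the full determinant.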


Computational expense

The Laplace expansion is computationally inefficient for high-dimensional matrices, with a time complexity in big O notation of O(n!). Alternatively, using a decomposition into triangular matrices as in the LU decomposition can yield determinants with a time complexity of O(n^3). (Stoer, Bulirsch: ''Introduction to Numerical Mathematics''.) The following Python code implements the Laplace expansion:

def determinant(M):
    # Base case of recursive function: 1x1 matrix
    if len(M) == 1:
        return M[0][0]
    total = 0
    for column, element in enumerate(M[0]):
        # Exclude first row and current column.
        K = [x[:column] + x[column + 1:] for x in M[1:]]
        s = 1 if column % 2 == 0 else -1
        total += s * element * determinant(K)
    return total
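For comparison, the LU-based alternative mentioned above runs in cubic time. The following is our own sketch (not from the article) of a determinant via Gaussian elimination with partial pivoting: each elimination step multiplies a running product by the pivot, and each row swap flips its sign.

```python
def determinant_ge(M):
    # Determinant in O(n^3) via Gaussian elimination with partial pivoting.
    A = [row[:] for row in M]   # work on a copy
    n = len(A)
    det = 1.0
    for k in range(n):
        # Choose the largest pivot in column k for numerical stability.
        p = max(range(k, n), key=lambda r: abs(A[r][k]))
        if A[p][k] == 0:
            return 0.0          # singular matrix
        if p != k:
            A[k], A[p] = A[p], A[k]
            det = -det          # a row swap flips the sign
        det *= A[k][k]
        for r in range(k + 1, n):
            f = A[r][k] / A[k][k]
            for c in range(k, n):
                A[r][c] -= f * A[k][c]
    return det
```

On the singular 3 × 3 example earlier in the article this returns 0 (up to floating-point rounding), matching the Laplace expansion.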


See also

* Leibniz formula for determinants
* Rule of Sarrus for 3 \times 3 determinants


References

* David Poole: ''Linear Algebra. A Modern Introduction''. Cengage Learning 2005, pp. 265–267
* Harvey E. Rose: ''Linear Algebra. A Pure Mathematical Approach''. Springer 2002, pp. 57–60