In linear algebra, the Laplace expansion, named after Pierre-Simon Laplace, also called cofactor expansion, is an expression of the determinant of an n \times n matrix B as a weighted sum of minors, which are the determinants of some (n-1) \times (n-1) submatrices of B. Specifically, for every i \in \{1, 2, \ldots, n\},

: \det(B) = \sum_{j=1}^{n} (-1)^{i+j} B_{i,j} M_{i,j},

where B_{i,j} is the entry of the ith row and jth column of B, and M_{i,j} is the determinant of the submatrix obtained by removing the ith row and the jth column of B. The term (-1)^{i+j} M_{i,j} is called the cofactor of B_{i,j} in B. An analogous formula, with the roles of rows and columns exchanged, gives the expansion along any fixed column j.

The Laplace expansion is often useful in proofs, as in, for example, allowing recursion on the size of matrices. It is also of didactic interest for its simplicity and as one of several ways to view and compute the determinant. For large matrices, however, it quickly becomes inefficient to compute when compared to Gaussian elimination.
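As a concrete illustration of the formula, the following Python sketch checks the row expansion numerically against NumPy's built-in determinant. It is an added example, not part of the original text; the function name expand_along_row and the use of NumPy are illustrative choices.

import numpy as np

def expand_along_row(B, i):
    # Laplace expansion of det(B) along row i (0-based; switching between
    # 0-based and 1-based indexing does not change the parity of i + j).
    n = B.shape[0]
    total = 0.0
    for j in range(n):
        # M_ij: delete row i and column j, then take the determinant of the submatrix.
        minor = np.linalg.det(np.delete(np.delete(B, i, axis=0), j, axis=1))
        total += (-1) ** (i + j) * B[i, j] * minor
    return total

B = np.random.rand(5, 5)
print(expand_along_row(B, 2), np.linalg.det(B))  # the two values agree up to rounding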


Examples

Consider the matrix

: B = \begin{bmatrix} 1 & 2 & 3 \\ 4 & 5 & 6 \\ 7 & 8 & 9 \end{bmatrix}.

The determinant of this matrix can be computed by using the Laplace expansion along any one of its rows or columns. For instance, an expansion along the first row yields:

: |B| = 1 \cdot \begin{vmatrix} 5 & 6 \\ 8 & 9 \end{vmatrix} - 2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 3 \cdot \begin{vmatrix} 4 & 5 \\ 7 & 8 \end{vmatrix} = 1 \cdot (-3) - 2 \cdot (-6) + 3 \cdot (-3) = 0.

Laplace expansion along the second column yields the same result:

: |B| = -2 \cdot \begin{vmatrix} 4 & 6 \\ 7 & 9 \end{vmatrix} + 5 \cdot \begin{vmatrix} 1 & 3 \\ 7 & 9 \end{vmatrix} - 8 \cdot \begin{vmatrix} 1 & 3 \\ 4 & 6 \end{vmatrix} = -2 \cdot (-6) + 5 \cdot (-12) - 8 \cdot (-6) = 0.

It is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.


Proof

Suppose B is an n \times n matrix and i, j \in \{1, 2, \ldots, n\}. For clarity we also label the entries of B that compose its i, j minor matrix M_{i,j} as (a_{st}) for 1 \le s, t \le n-1.

Consider the terms in the expansion of |B| that have b_{i,j} as a factor. Each has the form

: \sgn\tau\, b_{1,\tau(1)} \cdots b_{i,j} \cdots b_{n,\tau(n)} = \sgn\tau\, b_{i,j}\, a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)}

for some permutation \tau \in S_n with \tau(i) = j, and a unique and evidently related permutation \sigma \in S_{n-1} which selects the same minor entries as \tau. Similarly each choice of \sigma determines a corresponding \tau, i.e. the correspondence \sigma \leftrightarrow \tau is a bijection between S_{n-1} and \{\tau \in S_n : \tau(i) = j\}. Using Cauchy's two-line notation, the explicit relation between \tau and \sigma can be written as

: \sigma = \begin{pmatrix} 1 & 2 & \cdots & i & \cdots & n-1 \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & (\leftarrow)_j(\tau(i+1)) & \cdots & (\leftarrow)_j(\tau(n)) \end{pmatrix}

where (\leftarrow)_j is a temporary shorthand notation for the cycle (n, n-1, \ldots, j+1, j). This operation decrements all indices larger than j, so that every index fits in the set \{1, 2, \ldots, n-1\}.

The permutation \tau can be derived from \sigma as follows. Define \sigma' \in S_n by \sigma'(k) = \sigma(k) for 1 \le k \le n-1 and \sigma'(n) = n. Then \sigma' is expressed as

: \sigma' = \begin{pmatrix} 1 & 2 & \cdots & i & \cdots & n-1 & n \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & (\leftarrow)_j(\tau(i+1)) & \cdots & (\leftarrow)_j(\tau(n)) & n \end{pmatrix}

Now, the operation which applies (\leftarrow)_i first and then applies \sigma' is (notice that applying A before B is equivalent to applying the inverse of A to the upper row of B in two-line notation)

: \sigma' (\leftarrow)_i = \begin{pmatrix} 1 & 2 & \cdots & i+1 & \cdots & n & i \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & (\leftarrow)_j(\tau(i+1)) & \cdots & (\leftarrow)_j(\tau(n)) & n \end{pmatrix}

where (\leftarrow)_i is a temporary shorthand notation for the cycle (n, n-1, \ldots, i+1, i). The operation which applies \tau first and then applies (\leftarrow)_j is

: (\leftarrow)_j \tau = \begin{pmatrix} 1 & 2 & \cdots & i & \cdots & n-1 & n \\ (\leftarrow)_j(\tau(1)) & (\leftarrow)_j(\tau(2)) & \cdots & n & \cdots & (\leftarrow)_j(\tau(n-1)) & (\leftarrow)_j(\tau(n)) \end{pmatrix}

The two operations above are equal, thus

: (\leftarrow)_j \tau = \sigma' (\leftarrow)_i
: \tau = (\rightarrow)_j \sigma' (\leftarrow)_i

where (\rightarrow)_j is the inverse of (\leftarrow)_j, namely the cycle (j, j+1, \ldots, n). Thus

: \tau = (j, j+1, \ldots, n)\, \sigma'\, (n, n-1, \ldots, i).

Since the two cycles (j, j+1, \ldots, n) and (n, n-1, \ldots, i) can be written as n-j and n-i transpositions respectively,

: \sgn\tau = (-1)^{2n-i-j}\, \sgn\sigma' = (-1)^{i+j}\, \sgn\sigma.

And since the map \sigma \leftrightarrow \tau is bijective,

: \sum_{j=1}^{n} \sum_{\tau \in S_n,\ \tau(i)=j} \sgn\tau\, b_{1,\tau(1)} \cdots b_{n,\tau(n)} = \sum_{j=1}^{n} \sum_{\sigma \in S_{n-1}} (-1)^{i+j} \sgn\sigma\, b_{i,j}\, a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)} = \sum_{j=1}^{n} b_{i,j} (-1)^{i+j} \sum_{\sigma \in S_{n-1}} \sgn\sigma\, a_{1,\sigma(1)} \cdots a_{n-1,\sigma(n-1)} = \sum_{j=1}^{n} b_{i,j} (-1)^{i+j} M_{i,j},

from which the result follows, since the left-hand side is exactly |B|. Similarly, the expansion along the j-th column is obtained by fixing j instead and summing over i.
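The key sign relation \sgn\tau = (-1)^{i+j}\sgn\sigma can also be spot-checked by brute force. The following Python sketch is an added illustration, not part of the original proof; it uses 0-based indices, which leaves the parity of i + j unchanged. It enumerates every \tau with \tau(i) = j for a small n, builds the induced permutation \sigma on the minor, and verifies the relation.

from itertools import permutations

def sgn(p):
    # Sign of a permutation given as a tuple of images, via its inversion count.
    inversions = sum(1 for a in range(len(p)) for b in range(a + 1, len(p)) if p[a] > p[b])
    return -1 if inversions % 2 else 1

n, i, j = 4, 1, 2  # one concrete choice; any i, j in range(n) works
for tau in permutations(range(n)):
    if tau[i] != j:
        continue
    # sigma: drop position i, then relabel images greater than j downwards by one.
    sigma = tuple(t - (t > j) for pos, t in enumerate(tau) if pos != i)
    assert sgn(tau) == (-1) ** (i + j) * sgn(sigma)
print("sgn(tau) == (-1)**(i+j) * sgn(sigma) for all tau with tau(i) = j")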


Laplace expansion of a determinant by complementary minors

Laplace's cofactor expansion can be generalised as follows.


Example

Consider the matrix

: A = \begin{bmatrix} 1 & 2 & 3 & 4 \\ 5 & 6 & 7 & 8 \\ 9 & 10 & 11 & 12 \\ 13 & 14 & 15 & 16 \end{bmatrix}.

The determinant of this matrix can be computed by using Laplace's cofactor expansion along the first two rows as follows. Firstly note that there are 6 sets of two distinct numbers in \{1, 2, 3, 4\}, namely let S = \{\{1,2\}, \{1,3\}, \{1,4\}, \{2,3\}, \{2,4\}, \{3,4\}\} be the aforementioned set.

By defining the complementary cofactors to be

: b_{\{j,k\}} = \begin{vmatrix} a_{1j} & a_{1k} \\ a_{2j} & a_{2k} \end{vmatrix},
: c_{\{p,q\}} = \begin{vmatrix} a_{3p} & a_{3q} \\ a_{4p} & a_{4q} \end{vmatrix},

and the sign of their permutation to be

: \varepsilon^{\{j,k\},\{p,q\}} = \sgn\begin{pmatrix} 1 & 2 & 3 & 4 \\ j & k & p & q \end{pmatrix}, \text{ where } p \neq j,\ q \neq k,

the determinant of A can be written out as

: |A| = \sum_{H \in S} \varepsilon^{H,H'}\, b_{H}\, c_{H'},

where H' is the complementary set to H.

In our explicit example this gives us

: |A| = b_{\{1,2\}} c_{\{3,4\}} - b_{\{1,3\}} c_{\{2,4\}} + b_{\{1,4\}} c_{\{2,3\}} + b_{\{2,3\}} c_{\{1,4\}} - b_{\{2,4\}} c_{\{1,3\}} + b_{\{3,4\}} c_{\{1,2\}}

: = \begin{vmatrix} 1 & 2 \\ 5 & 6 \end{vmatrix} \cdot \begin{vmatrix} 11 & 12 \\ 15 & 16 \end{vmatrix} - \begin{vmatrix} 1 & 3 \\ 5 & 7 \end{vmatrix} \cdot \begin{vmatrix} 10 & 12 \\ 14 & 16 \end{vmatrix} + \begin{vmatrix} 1 & 4 \\ 5 & 8 \end{vmatrix} \cdot \begin{vmatrix} 10 & 11 \\ 14 & 15 \end{vmatrix} + \begin{vmatrix} 2 & 3 \\ 6 & 7 \end{vmatrix} \cdot \begin{vmatrix} 9 & 12 \\ 13 & 16 \end{vmatrix} - \begin{vmatrix} 2 & 4 \\ 6 & 8 \end{vmatrix} \cdot \begin{vmatrix} 9 & 11 \\ 13 & 15 \end{vmatrix} + \begin{vmatrix} 3 & 4 \\ 7 & 8 \end{vmatrix} \cdot \begin{vmatrix} 9 & 10 \\ 13 & 14 \end{vmatrix}

: = (-4)\cdot(-4) - (-8)\cdot(-8) + (-12)\cdot(-4) + (-4)\cdot(-12) - (-8)\cdot(-8) + (-4)\cdot(-4)

: = 16 - 64 + 48 + 48 - 64 + 16 = 0.

As above, it is easy to verify that the result is correct: the matrix is singular because the sum of its first and third columns is twice the second column, and hence its determinant is zero.
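The same computation can be reproduced mechanically. The Python sketch below is an added illustration; it assumes NumPy and uses 0-based indices, which does not change the parity of the index sums because both H and L contain two elements. It sums the signed products of 2 \times 2 minors over all column pairs.

from itertools import combinations
import numpy as np

A = np.array([[ 1,  2,  3,  4],
              [ 5,  6,  7,  8],
              [ 9, 10, 11, 12],
              [13, 14, 15, 16]])

H = (0, 1)                                   # expand along the first two rows
Hc = (2, 3)                                  # complementary rows
total = 0.0
for L in combinations(range(4), 2):          # all 6 column pairs {j, k}
    Lc = tuple(c for c in range(4) if c not in L)
    sign = (-1) ** (sum(H) + sum(L))         # epsilon^{H,L}; parity matches the 1-based convention
    b = np.linalg.det(A[np.ix_(H, L)])       # minor on rows H and columns L
    c = np.linalg.det(A[np.ix_(Hc, Lc)])     # complementary minor on the remaining rows and columns
    total += sign * b * c
print(total)                                 # 0.0 up to rounding, as computed by hand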


General statement

Let B = [b_{ij}] be an n \times n matrix and S the set of k-element subsets of \{1, 2, \ldots, n\}, H an element in it. Then the determinant of B can be expanded along the k rows identified by H as follows:

: |B| = \sum_{L \in S} \varepsilon^{H,L}\, b_{H,L}\, c_{H,L},

where \varepsilon^{H,L} is the sign of the permutation determined by H and L, equal to (-1)^{\left(\sum_{h \in H} h\right) + \left(\sum_{\ell \in L} \ell\right)}, b_{H,L} is the square minor of B obtained by deleting from B the rows and columns with indices in H and L respectively, and c_{H,L} (called the complement of b_{H,L}) is defined to be b_{H',L'}, H' and L' being the complements of H and L respectively.

This coincides with the theorem above when k = 1. The same holds for any fixed k columns.
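A direct transcription of this statement into Python might look as follows. This is a sketch under the assumption that NumPy is available; the name laplace_by_rows is illustrative, and indices are kept 1-based to mirror the formula.

from itertools import combinations
import numpy as np

def laplace_by_rows(B, H):
    # Determinant of B expanded along the rows listed in H (1-based row indices).
    n = B.shape[0]
    Hc = [r for r in range(1, n + 1) if r not in H]        # complementary rows H'
    total = 0.0
    for L in combinations(range(1, n + 1), len(H)):        # all k-element column sets
        Lc = [c for c in range(1, n + 1) if c not in L]    # complementary columns L'
        sign = (-1) ** (sum(H) + sum(L))                   # epsilon^{H,L}
        # Minor on rows H and columns L, and the complementary minor on H' and L'.
        m1 = np.linalg.det(B[np.ix_([r - 1 for r in H], [c - 1 for c in L])])
        m2 = np.linalg.det(B[np.ix_([r - 1 for r in Hc], [c - 1 for c in Lc])])
        total += sign * m1 * m2
    return total

B = np.random.rand(5, 5)
# k = 1 reduces to the ordinary cofactor expansion; larger k gives the same value.
print(laplace_by_rows(B, (1,)), laplace_by_rows(B, (2, 4)), np.linalg.det(B))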


Computational expense

The Laplace expansion is computationally inefficient for high-dimensional matrices, with a time complexity in big O notation of O(n!). Alternatively, using a decomposition into triangular matrices as in the LU decomposition can yield determinants with a time complexity of O(n^3) (Stoer, Bulirsch: Introduction to Numerical Mathematics). The following Python code implements the Laplace expansion recursively:

def determinant(M):
    # Base case of recursive function: 1x1 matrix
    if len(M) == 1:
        return M[0][0]

    total = 0
    for column, element in enumerate(M[0]):
        # Minor K: exclude the first row and the current column.
        K = [x[:column] + x[column + 1:] for x in M[1:]]
        # Alternate signs along the first row: +, -, +, -, ...
        s = 1 if column % 2 == 0 else -1
        total += s * element * determinant(K)
    return total
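As a quick sanity check (an added usage example, not part of the original listing), the function reproduces the 3 \times 3 example from above:

print(determinant([[1, 2, 3], [4, 5, 6], [7, 8, 9]]))  # prints 0

Each call on an n \times n matrix makes n recursive calls on (n-1) \times (n-1) minors, which is exactly where the factorial running time comes from.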


See also

* Leibniz formula for determinants
* Rule of Sarrus for 3 \times 3 determinants



