Desnanot–Jacobi Identity

In mathematics, Dodgson condensation or method of contractants is a method of computing the determinants of square matrices. It is named for its inventor, Charles Lutwidge Dodgson (better known by his pseudonym, Lewis Carroll, the popular author), who discovered it in 1866. The method in the case of an ''n'' × ''n'' matrix is to construct an (''n'' − 1) × (''n'' − 1) matrix, an (''n'' − 2) × (''n'' − 2), and so on, finishing with a 1 × 1 matrix, which has one entry, the determinant of the original matrix.


General method

This algorithm can be described in the following four steps:

# Let A be the given ''n'' × ''n'' matrix. Arrange A so that no zeros occur in its interior. An explicit definition of the interior would be all a_{i,j} with i, j \ne 1, n. One can do this using any operation that one could normally perform without changing the value of the determinant, such as adding a multiple of one row to another.
# Create an (''n'' − 1) × (''n'' − 1) matrix B, consisting of the determinants of every 2 × 2 submatrix of A. Explicitly, we write b_{i,j} = \begin{vmatrix} a_{i,j} & a_{i,j+1} \\ a_{i+1,j} & a_{i+1,j+1} \end{vmatrix}.
# Using this (''n'' − 1) × (''n'' − 1) matrix, perform step 2 to obtain an (''n'' − 2) × (''n'' − 2) matrix C. Divide each term in C by the corresponding term in the interior of A, so that c_{i,j} = \begin{vmatrix} b_{i,j} & b_{i,j+1} \\ b_{i+1,j} & b_{i+1,j+1} \end{vmatrix} / a_{i+1,j+1}.
# Let A = B, and B = C. Repeat step 3 as necessary until the 1 × 1 matrix is found; its only entry is the determinant.
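
The four steps above can be sketched in code. The following is a minimal illustration, not part of the original description: the function name dodgson_condensation is hypothetical, exact Fraction arithmetic is used so that the divisions in step 3 stay exact, and it is assumed that no division by zero occurs (i.e. any necessary rearrangement from step 1 has already been done).

```python
from fractions import Fraction

def dodgson_condensation(matrix):
    """Illustrative sketch of Dodgson condensation.

    Assumes no division by zero is encountered; otherwise the input
    must first be rearranged as described in step 1.
    """
    # Exact rational arithmetic keeps every division in step 3 exact.
    a = [[Fraction(x) for x in row] for row in matrix]
    n = len(a)
    if n == 1:
        return a[0][0]
    # Step 2: matrix of all 2 x 2 minors of adjacent entries.
    b = [[a[i][j] * a[i + 1][j + 1] - a[i][j + 1] * a[i + 1][j]
          for j in range(n - 1)] for i in range(n - 1)]
    # Steps 3-4: condense repeatedly, dividing by the interior of the
    # previous matrix, until a 1 x 1 matrix remains.
    while len(b) > 1:
        m = len(b)
        c = [[(b[i][j] * b[i + 1][j + 1] - b[i][j + 1] * b[i + 1][j]) / a[i + 1][j + 1]
              for j in range(m - 1)] for i in range(m - 1)]
        a, b = b, c
    return b[0][0]
```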


Examples


Without zeros

One wishes to find

:\begin{vmatrix} -2 & -1 & -1 & -4 \\ -1 & -2 & -1 & -6 \\ -1 & -1 & 2 & 4 \\ 2 & 1 & -3 & -8 \end{vmatrix}.

All of the interior elements are non-zero, so there is no need to re-arrange the matrix.

We make a matrix of its 2 × 2 submatrices.

:\begin{bmatrix} \begin{vmatrix} -2 & -1 \\ -1 & -2 \end{vmatrix} & \begin{vmatrix} -1 & -1 \\ -2 & -1 \end{vmatrix} & \begin{vmatrix} -1 & -4 \\ -1 & -6 \end{vmatrix} \\ \\ \begin{vmatrix} -1 & -2 \\ -1 & -1 \end{vmatrix} & \begin{vmatrix} -2 & -1 \\ -1 & 2 \end{vmatrix} & \begin{vmatrix} -1 & -6 \\ 2 & 4 \end{vmatrix} \\ \\ \begin{vmatrix} -1 & -1 \\ 2 & 1 \end{vmatrix} & \begin{vmatrix} -1 & 2 \\ 1 & -3 \end{vmatrix} & \begin{vmatrix} 2 & 4 \\ -3 & -8 \end{vmatrix} \end{bmatrix} = \begin{bmatrix} 3 & -1 & 2 \\ -1 & -5 & 8 \\ 1 & 1 & -4 \end{bmatrix}.

We then find another matrix of determinants:

:\begin{bmatrix} \begin{vmatrix} 3 & -1 \\ -1 & -5 \end{vmatrix} & \begin{vmatrix} -1 & 2 \\ -5 & 8 \end{vmatrix} \\ \\ \begin{vmatrix} -1 & -5 \\ 1 & 1 \end{vmatrix} & \begin{vmatrix} -5 & 8 \\ 1 & -4 \end{vmatrix} \end{bmatrix} = \begin{bmatrix} -16 & 2 \\ 4 & 12 \end{bmatrix}.

We must then divide each element by the corresponding element of our original matrix. The interior of the original matrix is \begin{bmatrix} -2 & -1 \\ -1 & 2 \end{bmatrix}, so after dividing we get \begin{bmatrix} 8 & -2 \\ -4 & 6 \end{bmatrix}. The process must be repeated to arrive at a 1 × 1 matrix.

:\begin{bmatrix} \begin{vmatrix} 8 & -2 \\ -4 & 6 \end{vmatrix} \end{bmatrix} = \begin{bmatrix} 40 \end{bmatrix}.

Dividing by the interior of the 3 × 3 matrix, which is just −5, gives \begin{bmatrix} -8 \end{bmatrix}, and −8 is indeed the determinant of the original matrix.
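
As a quick check with the illustrative dodgson_condensation sketch given earlier (a hypothetical helper, not part of the original article), the same matrix condenses to −8:

```python
m = [[-2, -1, -1, -4],
     [-1, -2, -1, -6],
     [-1, -1,  2,  4],
     [ 2,  1, -3, -8]]
print(dodgson_condensation(m))  # expected output: -8
```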


With zeros

Simply writing out the matrices:

:\begin{bmatrix} 2 & -1 & 2 & 1 & -3 \\ 1 & 2 & 1 & -1 & 2 \\ 1 & -1 & -2 & -1 & -1 \\ 2 & 1 & -1 & -2 & -1 \\ 1 & -2 & -1 & -1 & 2 \end{bmatrix} \to \begin{bmatrix} 5 & -5 & -3 & -1 \\ -3 & -3 & -3 & 3 \\ 3 & 3 & 3 & -1 \\ -5 & -3 & -1 & -5 \end{bmatrix} \to \begin{bmatrix} -15 & 6 & 12 \\ 0 & 0 & 6 \\ 6 & -6 & 8 \end{bmatrix}.

Here we run into trouble. If we continue the process, we will eventually be dividing by 0. We can perform four row exchanges on the initial matrix to preserve the determinant and repeat the process, with most of the determinants precalculated:

:\begin{bmatrix} 1 & 2 & 1 & -1 & 2 \\ 1 & -1 & -2 & -1 & -1 \\ 2 & 1 & -1 & -2 & -1 \\ 1 & -2 & -1 & -1 & 2 \\ 2 & -1 & 2 & 1 & -3 \end{bmatrix} \to \begin{bmatrix} -3 & -3 & -3 & 3 \\ 3 & 3 & 3 & -1 \\ -5 & -3 & -1 & -5 \\ 3 & -5 & 1 & 1 \end{bmatrix} \to \begin{bmatrix} 0 & 0 & 6 \\ 6 & -6 & 8 \\ -17 & 8 & -4 \end{bmatrix} \to \begin{bmatrix} 0 & 12 \\ 18 & 40 \end{bmatrix} \to \begin{bmatrix} 36 \end{bmatrix}.

Hence, we arrive at a determinant of 36.
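
Since the illustrative sketch above does not rearrange rows itself, one would pass it the row-exchanged matrix directly (again, dodgson_condensation is a hypothetical helper used only for these checks):

```python
m = [[1,  2,  1, -1,  2],
     [1, -1, -2, -1, -1],
     [2,  1, -1, -2, -1],
     [1, -2, -1, -1,  2],
     [2, -1,  2,  1, -3]]
print(dodgson_condensation(m))  # expected output: 36
```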


Desnanot–Jacobi identity and proof of correctness of the condensation algorithm

The proof that the condensation method computes the determinant of the matrix if no divisions by zero are encountered is based on an identity known as the Desnanot–Jacobi identity (1841) or, more generally, the Sylvester determinant identity (1851). Let M=(m_{i,j})_{i,j=1}^k be a square matrix, and for each 1\le i, j\le k, denote by M_i^j the matrix that results from M by deleting the i-th row and the j-th column. Similarly, for 1\le i, j, p, q\le k, denote by M_{i,j}^{p,q} the matrix that results from M by deleting the i-th and j-th rows and the p-th and q-th columns.


Desnanot–Jacobi identity

:\det(M) \det(M_{1,k}^{1,k}) = \det(M_1^1)\det(M_k^k) - \det(M_1^k) \det(M_k^1).
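
The identity can also be checked numerically for a concrete matrix. The following sketch is purely illustrative (the minor helper is hypothetical, and numpy.linalg.det supplies the determinants); it verifies the identity for one random 5 × 5 matrix:

```python
import numpy as np

def minor(m, rows, cols):
    """Return a copy of m with the given rows and columns deleted (illustrative helper)."""
    keep_r = [i for i in range(m.shape[0]) if i not in rows]
    keep_c = [j for j in range(m.shape[1]) if j not in cols]
    return m[np.ix_(keep_r, keep_c)]

rng = np.random.default_rng(0)
M = rng.integers(-5, 6, size=(5, 5)).astype(float)
k = M.shape[0] - 1  # 0-based index of the last row/column

lhs = np.linalg.det(M) * np.linalg.det(minor(M, [0, k], [0, k]))
rhs = (np.linalg.det(minor(M, [0], [0])) * np.linalg.det(minor(M, [k], [k]))
       - np.linalg.det(minor(M, [0], [k])) * np.linalg.det(minor(M, [k], [0])))
print(np.isclose(lhs, rhs))  # expected: True
```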


Proof of the correctness of Dodgson condensation

Rewrite the identity as

:\det(M) = \frac{\det(M_1^1)\det(M_k^k) - \det(M_1^k) \det(M_k^1)}{\det(M_{1,k}^{1,k})}.

Now note that by induction it follows that when applying the Dodgson condensation procedure to a square matrix A of order n, the matrix in the k-th stage of the computation (where the first stage k=1 corresponds to the matrix A itself) consists of all the ''connected minors'' of order k of A, where a connected minor is the determinant of a connected k\times k sub-block of adjacent entries of A. Indeed, applying the rewritten identity to a connected sub-block of order k+1 expresses its determinant in terms of its four corner connected minors of order k and its central connected minor of order k-1, which is exactly the computation performed in step 3 of the algorithm. In particular, in the last stage k=n, one gets a matrix containing a single element equal to the unique connected minor of order n, namely the determinant of A.
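
To see the connected-minor description concretely (an illustrative check only, using numpy), the four 3 × 3 connected minors of the 4 × 4 matrix from the first example are exactly the entries 8, −2, −4, 6 of the third-stage matrix computed there:

```python
import numpy as np

A = np.array([[-2, -1, -1, -4],
              [-1, -2, -1, -6],
              [-1, -1,  2,  4],
              [ 2,  1, -3, -8]], dtype=float)

# Connected minors of order 3: determinants of the contiguous 3 x 3 sub-blocks.
connected = [[round(np.linalg.det(A[i:i + 3, j:j + 3])) for j in range(2)]
             for i in range(2)]
print(connected)  # expected: [[8, -2], [-4, 6]]
```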


Proof of the Desnanot–Jacobi identity

We follow the treatment in the book ''Proofs and Confirmations: The Story of the Alternating Sign Matrix Conjecture''; an alternative combinatorial proof was given in a paper by Doron Zeilberger. Denote a_{i,j} = (-1)^{i+j} \det(M_i^j) (up to sign, the (i,j)-th minor of M), and define a k\times k matrix M' by
: M' = \begin{pmatrix} a_{1,1} & 0 & 0 & 0 & \ldots & 0 & a_{k,1} \\ a_{1,2} & 1 & 0 & 0 & \ldots & 0 & a_{k,2} \\ a_{1,3} & 0 & 1 & 0 & \ldots & 0 & a_{k,3} \\ a_{1,4} & 0 & 0 & 1 & \ldots & 0 & a_{k,4} \\ \vdots & \vdots & \vdots & \vdots & & \vdots & \vdots \\ a_{1,k-1} & 0 & 0 & 0 & \ldots & 1 & a_{k,k-1} \\ a_{1,k} & 0 & 0 & 0 & \ldots & 0 & a_{k,k} \end{pmatrix}.
(Note that the first and last columns of M' are equal to those of the adjugate matrix of M.) The identity is now obtained by computing \det(M M') in two ways. First, we can directly compute the matrix product M M' (using simple properties of the adjugate matrix, or alternatively using the formula for the expansion of a matrix determinant in terms of a row or a column) to arrive at
: M M' = \begin{pmatrix} \det(M) & m_{1,2} & m_{1,3} & \ldots & m_{1,k-1} & 0 \\ 0 & m_{2,2} & m_{2,3} & \ldots & m_{2,k-1} & 0 \\ 0 & m_{3,2} & m_{3,3} & \ldots & m_{3,k-1} & 0 \\ \vdots & \vdots & \vdots & & \vdots & \vdots \\ 0 & m_{k-1,2} & m_{k-1,3} & \ldots & m_{k-1,k-1} & 0 \\ 0 & m_{k,2} & m_{k,3} & \ldots & m_{k,k-1} & \det(M) \end{pmatrix}
where we use m_{i,j} to denote the (i,j)-th entry of M. The determinant of this matrix is \det(M)^2 \cdot \det(M_{1,k}^{1,k}).
Second, this is equal to the product of the determinants, \det(M) \cdot \det(M'). But clearly
\det(M') = a_{1,1} a_{k,k} - a_{1,k} a_{k,1} = \det(M_1^1)\det(M_k^k) - \det(M_1^k) \det(M_k^1),
so the identity follows from equating the two expressions we obtained for \det(M M') and dividing out by \det(M) (this is allowed if one thinks of the identities as polynomial identities over the ring of polynomials in the k^2 indeterminate variables (m_{i,j})_{i,j=1}^k).
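
The two-way computation of \det(M M') can also be checked numerically. The sketch below is purely illustrative (the cofactor helper is hypothetical and numpy is assumed, as in the earlier check): it builds M' from cofactors of M and confirms the corner structure of M M' displayed above.

```python
import numpy as np

def cofactor(m, i, j):
    """(-1)^(i+j) times the minor deleting row i and column j (0-based; illustrative helper)."""
    sub = np.delete(np.delete(m, i, axis=0), j, axis=1)
    return (-1) ** (i + j) * np.linalg.det(sub)

rng = np.random.default_rng(1)
M = rng.integers(-5, 6, size=(5, 5)).astype(float)
k = M.shape[0]

# M' is the identity matrix except for its first and last columns,
# which are the corresponding columns of the adjugate of M.
Mp = np.eye(k)
Mp[:, 0] = [cofactor(M, 0, j) for j in range(k)]       # first column of adj(M)
Mp[:, -1] = [cofactor(M, k - 1, j) for j in range(k)]  # last column of adj(M)

P = M @ Mp
print(np.isclose(P[0, 0], np.linalg.det(M)),    # top-left corner equals det(M)
      np.isclose(P[-1, -1], np.linalg.det(M)),  # bottom-right corner equals det(M)
      np.allclose(P[1:, 0], 0),                 # rest of the first column vanishes
      np.allclose(P[:-1, -1], 0))               # rest of the last column vanishes
```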



Further reading

* Bressoud, David M. and Propp, James, "How the alternating sign matrix conjecture was solved", ''Notices of the American Mathematical Society'', 46 (1999), 637-646.
* Knuth, Donald, "Overlapping Pfaffians", ''Electronic Journal of Combinatorics'', 3 no. 2 (1996).
* Mills, William H., Robbins, David P., and Rumsey, Howard, Jr., "Proof of the Macdonald conjecture", ''Inventiones Mathematicae'', 66 (1982), 73-87.
* Mills, William H., Robbins, David P., and Rumsey, Howard, Jr., "Alternating sign matrices and descending plane partitions", ''Journal of Combinatorial Theory, Series A'', 34 (1983), 340-359.
* Robbins, David P., "The story of 1, 2, 7, 42, 429, 7436, \cdots", ''The Mathematical Intelligencer'', 13 (1991), 12-19.


External links

* {{MathWorld|title=Condensation}}