Jacobi's Formula

In matrix calculus, Jacobi's formula expresses the derivative of the determinant of a matrix ''A'' in terms of the adjugate of ''A'' and the derivative of ''A''. (Part Three, Section 8.3)

If ''A'' is a differentiable map from the real numbers to ''n'' × ''n'' matrices, then

: \frac{d}{dt} \det A(t) = \operatorname{tr} \left( \operatorname{adj}(A(t)) \, \frac{dA(t)}{dt} \right) = \left( \det A(t) \right) \cdot \operatorname{tr} \left( A(t)^{-1} \cdot \frac{dA(t)}{dt} \right)

where \operatorname{tr}(X) is the trace of the matrix X. (The latter equality only holds if ''A''(''t'') is invertible.)

As a special case,

: \frac{\partial \det(A)}{\partial A_{ij}} = \operatorname{adj}(A)_{ji}.

Equivalently, if dA stands for the differential of A, the general formula is

: d \det(A) = \operatorname{tr}(\operatorname{adj}(A) \, dA).

The formula is named after the mathematician Carl Gustav Jacob Jacobi.
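As a numerical sanity check, the two sides of Jacobi's formula can be compared against a finite-difference approximation of the derivative. The sketch below uses pure Python and an arbitrary illustrative 2 × 2 matrix path ''A''(''t'') (the particular entries are not from the source, just a convenient example):

```python
import math

def det2(m):
    # Determinant of a 2x2 matrix
    return m[0][0] * m[1][1] - m[0][1] * m[1][0]

def inv2(m):
    # Inverse of a 2x2 matrix
    d = det2(m)
    return [[m[1][1] / d, -m[0][1] / d],
            [-m[1][0] / d, m[0][0] / d]]

def mul2(a, b):
    # Product of two 2x2 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def tr2(m):
    return m[0][0] + m[1][1]

def A(t):
    # An arbitrary differentiable 2x2 matrix path (illustrative choice)
    return [[1.0 + t, t * t],
            [math.sin(t), 2.0 + math.cos(t)]]

def dA(t):
    # Entrywise derivative of A(t)
    return [[1.0, 2.0 * t],
            [math.cos(t), -math.sin(t)]]

t, h = 0.7, 1e-6
# Right-hand side of Jacobi's formula: det(A) * tr(A^{-1} dA/dt)
rhs = det2(A(t)) * tr2(mul2(inv2(A(t)), dA(t)))
# Left-hand side: central finite difference of t -> det A(t)
lhs = (det2(A(t + h)) - det2(A(t - h))) / (2 * h)
assert abs(lhs - rhs) < 1e-5
```

The agreement up to discretization error illustrates the invertible-matrix form of the formula; it is of course not a proof.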


Derivation


Via Matrix Computation

We first prove a preliminary lemma.

Lemma. Let ''A'' and ''B'' be a pair of square matrices of the same dimension ''n''. Then

:\sum_i \sum_j A_{ij} B_{ij} = \operatorname{tr}(A^{\rm T} B).

''Proof.'' The product ''AB'' of the pair of matrices has components

:(AB)_{jk} = \sum_i A_{ji} B_{ik}.

Replacing the matrix ''A'' by its transpose ''A''^T is equivalent to permuting the indices of its components:

:(A^{\rm T} B)_{jk} = \sum_i A_{ij} B_{ik}.

The result follows by taking the trace of both sides:

:\operatorname{tr}(A^{\rm T} B) = \sum_j (A^{\rm T} B)_{jj} = \sum_j \sum_i A_{ij} B_{ij} = \sum_i \sum_j A_{ij} B_{ij}. \ \square

Theorem. (Jacobi's formula) For any differentiable map ''A'' from the real numbers to ''n'' × ''n'' matrices,

: d \det(A) = \operatorname{tr}(\operatorname{adj}(A) \, dA).

''Proof.'' Laplace's formula for the determinant of a matrix ''A'' can be stated as

:\det(A) = \sum_j A_{ij} \operatorname{adj}^{\rm T}(A)_{ij}.

Notice that the summation is performed over some arbitrary row ''i'' of the matrix. The determinant of ''A'' can be considered to be a function of the elements of ''A'':

:\det(A) = F\,(A_{11}, A_{12}, \ldots, A_{21}, A_{22}, \ldots, A_{nn}),

so that, by the chain rule, its differential is

:d \det(A) = \sum_i \sum_j \frac{\partial F}{\partial A_{ij}} \, dA_{ij}.

This summation is performed over all ''n''×''n'' elements of the matrix. To find ∂''F''/∂''A''_{ij}, consider that on the right-hand side of Laplace's formula the index ''i'' can be chosen at will. (Any other choice would eventually yield the same result, but this choice keeps the calculation simple.) In particular, it can be chosen to match the first index of ∂/∂''A''_{ij}:

:\frac{\partial \det(A)}{\partial A_{ij}} = \frac{\partial \sum_k A_{ik} \operatorname{adj}^{\rm T}(A)_{ik}}{\partial A_{ij}} = \sum_k \frac{\partial \left( A_{ik} \operatorname{adj}^{\rm T}(A)_{ik} \right)}{\partial A_{ij}}.

Thus, by the product rule,

:\frac{\partial \det(A)}{\partial A_{ij}} = \sum_k \frac{\partial A_{ik}}{\partial A_{ij}} \operatorname{adj}^{\rm T}(A)_{ik} + \sum_k A_{ik} \frac{\partial \operatorname{adj}^{\rm T}(A)_{ik}}{\partial A_{ij}}.

Now, if an element ''A''_{ij} of the matrix and a cofactor adj^T(''A'')_{ik} of element ''A''_{ik} lie on the same row (or column), then the cofactor will not be a function of ''A''_{ij}, because the cofactor of ''A''_{ik} is expressed in terms of elements not in its own row (nor column). Thus,

:\frac{\partial \operatorname{adj}^{\rm T}(A)_{ik}}{\partial A_{ij}} = 0,

so

:\frac{\partial \det(A)}{\partial A_{ij}} = \sum_k \operatorname{adj}^{\rm T}(A)_{ik} \frac{\partial A_{ik}}{\partial A_{ij}}.

All the elements of ''A'' are independent of each other, i.e.

:\frac{\partial A_{ik}}{\partial A_{ij}} = \delta_{jk},

where ''δ'' is the Kronecker delta, so

:\frac{\partial \det(A)}{\partial A_{ij}} = \sum_k \operatorname{adj}^{\rm T}(A)_{ik} \delta_{jk} = \operatorname{adj}^{\rm T}(A)_{ij}.

Therefore,

:d(\det(A)) = \sum_i \sum_j \operatorname{adj}^{\rm T}(A)_{ij} \, dA_{ij},

and applying the Lemma yields

:d(\det(A)) = \operatorname{tr}(\operatorname{adj}(A) \, dA). \ \square
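The preliminary lemma is easy to verify numerically. The sketch below checks \sum_i \sum_j A_{ij} B_{ij} = \operatorname{tr}(A^{\rm T} B) for two arbitrary 3 × 3 integer matrices (the particular entries are an illustrative choice, not from the source):

```python
# Fixed 3x3 integer matrices for the check (arbitrary illustrative values)
A = [[1, 2, 3], [4, 5, 6], [7, 8, 10]]
B = [[2, 0, 1], [1, 3, 5], [4, 1, 2]]
n = 3

# Left side: entrywise sum  sum_i sum_j A_ij * B_ij
lhs = sum(A[i][j] * B[i][j] for i in range(n) for j in range(n))

# Right side: tr(A^T B), built directly from the definitions
At = [[A[j][i] for j in range(n)] for i in range(n)]          # transpose
AtB = [[sum(At[i][k] * B[k][j] for k in range(n)) for j in range(n)]
       for i in range(n)]                                     # matrix product
rhs = sum(AtB[j][j] for j in range(n))                        # trace

assert lhs == rhs
```

Since all entries are integers the two sides agree exactly, mirroring the index manipulation in the proof.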


Via Chain Rule

Lemma 1. \det'(I) = \mathrm{tr}, where \det' is the differential of \det.

This equation means that the differential of \det, evaluated at the identity matrix, is equal to the trace. The differential \det'(I) is a linear operator that maps an ''n'' × ''n'' matrix to a real number.

''Proof.'' Using the definition of a directional derivative together with one of its basic properties for differentiable functions, we have

:\det'(I)(T) = \nabla_T \det(I) = \lim_{\varepsilon \to 0} \frac{\det(I + \varepsilon T) - \det I}{\varepsilon}.

\det(I + \varepsilon T) is a polynomial in \varepsilon of order ''n''. It is closely related to the characteristic polynomial of T. The constant term (\varepsilon = 0) is 1, while the linear term in \varepsilon is \mathrm{tr}\ T, so the limit equals \mathrm{tr}\ T.

Lemma 2. For an invertible matrix ''A'', we have: \det'(A)(T) = \det A \; \mathrm{tr}(A^{-1} T).

''Proof.'' Consider the following function of ''X'':

:\det X = \det(A A^{-1} X) = \det(A) \ \det(A^{-1} X).

We calculate the differential of \det X and evaluate it at X = A using Lemma 1, the equation above, and the chain rule:

:\det'(A)(T) = \det A \ \det'(I)(A^{-1} T) = \det A \ \mathrm{tr}(A^{-1} T).

Theorem. (Jacobi's formula)

:\frac{d}{dt} \det A = \mathrm{tr}\left(\mathrm{adj}\ A \, \frac{dA}{dt}\right).

''Proof.'' If A is invertible, by Lemma 2 with T = dA/dt,

:\frac{d}{dt} \det A = \det A \; \mathrm{tr}\left(A^{-1} \frac{dA}{dt}\right) = \mathrm{tr}\left(\mathrm{adj}\ A \; \frac{dA}{dt}\right),

using the equation \mathrm{adj}\ A = \det A \cdot A^{-1} relating the adjugate of A to A^{-1}. The formula then holds for all matrices, since the set of invertible matrices is dense in the space of matrices and both sides are continuous in A.
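The adjugate identity \mathrm{adj}\ A = \det A \cdot A^{-1} used in the last step can be spot-checked directly. For a 2 × 2 matrix the adjugate has the familiar closed form, so a minimal sketch (with an arbitrary invertible example matrix) is:

```python
# An arbitrary invertible 2x2 matrix [[a, b], [c, d]]
a, b, c, d = 3.0, 1.0, 2.0, 5.0
det = a * d - b * c                          # 13.0, nonzero so A is invertible
adj = [[d, -b], [-c, a]]                     # adjugate of a 2x2 matrix
inv = [[d / det, -b / det], [-c / det, a / det]]  # explicit inverse

# Verify adj(A) = det(A) * A^{-1} entry by entry
for i in range(2):
    for j in range(2):
        assert abs(adj[i][j] - det * inv[i][j]) < 1e-12
```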


Corollary

The following is a useful relation connecting the trace to the determinant of the associated matrix exponential:

: \det e^{B} = e^{\operatorname{tr}(B)}.

This statement is clear for diagonal matrices, and a proof of the general claim follows.

For any invertible matrix A(t), in the previous section "Via Chain Rule", we showed that

:\frac{d}{dt} \det A(t) = \det A(t) \; \operatorname{tr}\left(A(t)^{-1} \, \frac{d}{dt} A(t)\right).

Considering A(t) = \exp(tB) in this equation yields

: \frac{d}{dt} \det e^{tB} = \operatorname{tr}(B) \det e^{tB}.

The desired result follows as the solution to this ordinary differential equation: since \det e^{0 \cdot B} = 1, the solution is \det e^{tB} = e^{t \operatorname{tr}(B)}, and evaluating at t = 1 gives the claim.
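The corollary \det e^{B} = e^{\operatorname{tr}(B)} can be checked numerically with a truncated Taylor series for the matrix exponential. A minimal pure-Python sketch, using an arbitrary small 2 × 2 example matrix ''B'':

```python
import math

def mul2(a, b):
    # Product of two 2x2 matrices
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def expm2(m, terms=40):
    # Matrix exponential via truncated Taylor series: sum_k M^k / k!
    result = [[1.0, 0.0], [0.0, 1.0]]   # running sum, starts at I
    term = [[1.0, 0.0], [0.0, 1.0]]     # current term M^k / k!
    for k in range(1, terms):
        term = mul2(term, m)
        term = [[term[i][j] / k for j in range(2)] for i in range(2)]
        result = [[result[i][j] + term[i][j] for j in range(2)]
                  for i in range(2)]
    return result

B = [[0.2, 0.5], [-0.3, 0.1]]           # arbitrary illustrative matrix
E = expm2(B)
det_E = E[0][0] * E[1][1] - E[0][1] * E[1][0]
# tr(B) = 0.2 + 0.1 = 0.3, so det(e^B) should equal e^{0.3}
assert abs(det_E - math.exp(0.3)) < 1e-9
```

The small entries of ''B'' make the truncated series converge well within the tolerance.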


Applications

Several forms of the formula underlie the Faddeev–LeVerrier algorithm for computing the characteristic polynomial, and explicit applications of the Cayley–Hamilton theorem. For example, starting from the following equation, which was proved above:

:\frac{d}{dt} \det A(t) = \det A(t) \ \operatorname{tr}\left(A(t)^{-1} \, \frac{d}{dt} A(t)\right),

and using A(t) = tI - B, we get

:\frac{d}{dt} \det(tI - B) = \det(tI - B) \operatorname{tr}\left[(tI - B)^{-1}\right] = \operatorname{tr}\left[\operatorname{adj}(tI - B)\right],

where adj denotes the adjugate matrix.
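For concreteness, here is a minimal pure-Python sketch of the Faddeev–LeVerrier recursion mentioned above, in the standard form M_0 = 0, M_k = A M_{k-1} + c_{n-k+1} I, c_{n-k} = -\tfrac{1}{k}\operatorname{tr}(A M_k). The helper names and the 2 × 2 test matrix are illustrative choices, not from the source:

```python
def matmul(a, b):
    # Product of two square matrices of matching size
    n = len(a)
    return [[sum(a[i][k] * b[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def faddeev_leverrier(A):
    # Returns [c_0, c_1, ..., c_n] with det(tI - A) = sum_k c_k t^k
    n = len(A)
    c = [0.0] * (n + 1)
    c[n] = 1.0                            # leading coefficient
    M = [[0.0] * n for _ in range(n)]     # M_0 = 0
    for k in range(1, n + 1):
        # M_k = A M_{k-1} + c_{n-k+1} I
        M = matmul(A, M)
        for i in range(n):
            M[i][i] += c[n - k + 1]
        # c_{n-k} = -(1/k) tr(A M_k)
        AM = matmul(A, M)
        c[n - k] = -sum(AM[i][i] for i in range(n)) / k
    return c

# For A = [[1, 2], [3, 4]]: det(tI - A) = t^2 - 5t - 2
coeffs = faddeev_leverrier([[1.0, 2.0], [3.0, 4.0]])
assert coeffs == [-2.0, -5.0, 1.0]
```

The recursion needs only matrix products and traces, which is exactly the structure supplied by the trace identity derived above.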

