Faddeev–LeVerrier algorithm

In mathematics (linear algebra), the Faddeev–LeVerrier algorithm is a recursive method to calculate the coefficients of the characteristic polynomial p_A(\lambda)=\det (\lambda I_n - A) of a square matrix, A, named after Dmitry Konstantinovich Faddeev and Urbain Le Verrier. Calculation of this polynomial yields the eigenvalues of A as its roots; as a matrix polynomial in the matrix A itself, it vanishes by the Cayley–Hamilton theorem. Computing the characteristic polynomial directly from the definition of the determinant is computationally cumbersome insofar as it introduces a new symbolic quantity \lambda; by contrast, the Faddeev–Le Verrier algorithm works directly with the coefficients of the matrix A.

The algorithm has been independently rediscovered several times in different forms. It was first published in 1840 by Urbain Le Verrier, subsequently redeveloped by P. Horst, Jean-Marie Souriau, in its present form here by Faddeev and Sominsky, and further by J. S. Frame, and others. (For historical points, see Householder. An elegant shortcut to the proof, bypassing Newton polynomials, was introduced by Hou. The bulk of the presentation here follows Gantmacher, p. 88.)


The Algorithm

The objective is to calculate the coefficients c_k of the characteristic polynomial of the n×n matrix A,

::p_A(\lambda)\equiv \det(\lambda I_n-A)=\sum_{k=0}^{n} c_k \lambda^k~,

where, evidently, c_n = 1 and c_0 = (-1)^n \det A.

The coefficients c_{n-i} are determined by induction on i, using an auxiliary sequence of matrices,

::\begin{align} M_0 &\equiv 0 & c_n &= 1 \qquad &(k=0) \\ M_k &\equiv AM_{k-1} + c_{n-k+1} I \qquad \qquad & c_{n-k} &= -\frac{1}{k} \operatorname{tr}(AM_k) \qquad &k=1,\ldots ,n ~. \end{align}

Thus,

: M_1= I ~, \quad c_{n-1} = - \operatorname{tr} A =-c_n \operatorname{tr} A ;

: M_2= A-I\operatorname{tr} A , \quad c_{n-2}=-\frac{1}{2}\Bigl(\operatorname{tr} A^2 -(\operatorname{tr} A)^2\Bigr) = -\frac{1}{2} (c_n \operatorname{tr} A^2 +c_{n-1} \operatorname{tr} A) ;

: M_3= A^2-A\operatorname{tr} A -\frac{1}{2}\Bigl(\operatorname{tr} A^2 -(\operatorname{tr} A)^2\Bigr) I,
:: c_{n-3}=- \tfrac{1}{6}\Bigl( (\operatorname{tr}A)^3-3\operatorname{tr}(A^2)(\operatorname{tr}A)+2\operatorname{tr}(A^3)\Bigr)=-\frac{1}{3}(c_n \operatorname{tr} A^3+c_{n-1} \operatorname{tr} A^2 +c_{n-2}\operatorname{tr} A);

etc., ...;

: M_m= \sum_{k=1}^{m} c_{n-m+k} A^{k-1} ~,

: c_{n-m}= -\frac{1}{m}(c_n \operatorname{tr} A^m+c_{n-1} \operatorname{tr} A^{m-1}+\cdots+c_{n-m+1}\operatorname{tr} A)= -\frac{1}{m}\sum_{k=1}^{m} c_{n-m+k} \operatorname{tr} A^k ~ ;

...

Observe that M_{n+1}\equiv AM_n + c_0 I = 0 terminates the recursion. This could be used to obtain the inverse or the determinant of A, since it implies A^{-1}=-M_n/c_0 whenever c_0=(-1)^n\det A\neq 0.
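In code, the recursion takes only a few lines. The following is a minimal sketch in Python with NumPy; the function name faddeev_leverrier and the convention of returning the coefficients ordered from c_n down to c_0 are choices made here for illustration, not part of the algorithm's standard statement.

 import numpy as np

 def faddeev_leverrier(A):
     """Coefficients [c_n, ..., c_0] of det(lambda*I - A), computed by the
     Faddeev-LeVerrier recursion  M_k = A M_{k-1} + c_{n-k+1} I,
     c_{n-k} = -tr(A M_k)/k."""
     A = np.asarray(A, dtype=float)
     n = A.shape[0]
     I = np.eye(n)
     M = np.zeros_like(A)      # M_0 = 0
     coeffs = [1.0]            # c_n = 1
     for k in range(1, n + 1):
         M = A @ M + coeffs[-1] * I           # M_k
         coeffs.append(-np.trace(A @ M) / k)  # c_{n-k}
     return np.array(coeffs)

The result can be compared against numpy.poly(A), which obtains the same coefficients (highest power first) from the eigenvalues of A.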


Derivation

The proof relies on the modes of the adjugate matrix, B_k \equiv M_{n-k}, the auxiliary matrices encountered above. This matrix is defined by

::(\lambda I-A) B = I ~ p_A(\lambda)

and is thus proportional to the resolvent,

:B = (\lambda I-A)^{-1} I ~ p_A(\lambda) ~.

It is evidently a matrix polynomial in \lambda of degree n-1. Thus,

::B\equiv \sum_{k=0}^{n-1} \lambda^k~ B_k= \sum_{k=0}^{n-1} \lambda^k~ M_{n-k} ,

where one may define the harmless M_0\equiv 0.

Inserting the explicit polynomial forms into the defining equation for the adjugate, above,

:\sum_{k=0}^{n} \Bigl(\lambda^{k+1} M_{n-k} - \lambda^k (AM_{n-k} +c_k I)\Bigr)= 0~.

Now, at the highest order, the first term vanishes by M_0=0; whereas at the bottom order (constant in \lambda, from the defining equation of the adjugate, above),

:M_n A= B_0 A=-c_0 I~,

so that shifting the dummy indices of the first term yields

:\sum_{k=1}^{n} \lambda^{k} \Big ( M_{n-k+1} - AM_{n-k} -c_k I\Big )= 0~,

which thus dictates the recursion

:\therefore \qquad M_m =A M_{m-1} +c_{n-m+1} I ~,

for m=1,...,n. Note that ascending index m amounts to descending in powers of \lambda, but the polynomial coefficients c are yet to be determined in terms of the Ms and A.

This can be easiest achieved through the following auxiliary equation (Hou, 1998),

::\lambda \frac{\partial p_A}{\partial \lambda} -n p_A =\operatorname{tr} (AB)~.

This is but the trace of the defining equation for B by dint of Jacobi's formula,

:\frac{\partial p_A(\lambda)}{\partial \lambda}= p_A(\lambda) \sum^\infty _{m=0}\lambda ^{-(m+1)} \operatorname{tr}A^m = p_A(\lambda) ~ \operatorname{tr} \frac{I}{\lambda I -A}\equiv\operatorname{tr} B~.

Inserting the polynomial mode forms in this auxiliary equation yields

:\sum_{k=0}^{n-1} \lambda^{k} \Big ( k c_k - n c_k - \operatorname{tr}(A M_{n-k})\Big )= 0~,

so that

:\sum_{m=1}^{n} \lambda^{n-m} \Big ( m c_{n-m} + \operatorname{tr}(A M_{m})\Big )= 0~,

and finally

:\therefore \qquad c_{n-m} = -\frac{1}{m} \operatorname{tr}(A M_{m}) ~.

This completes the recursion of the previous section, unfolding in descending powers of \lambda. Further note in the algorithm that, more directly,

: M_{m+1} =A M_{m} - \frac{1}{m} \bigl(\operatorname{tr}(A M_{m})\bigr) I ~,

and, in comportance with the Cayley–Hamilton theorem,

: \operatorname{adj}(A) =(-1)^{n-1} M_n=(-1)^{n-1} \bigl(A^{n-1}+c_{n-1}A^{n-2}+ \cdots+c_2 A+ c_1 I\bigr)=(-1)^{n-1} \sum_{k=1}^n c_k A^{k-1}~.

The final solution might be more conveniently expressed in terms of complete exponential Bell polynomials as

: c_{n-k} = \frac{(-1)^k}{k!} \mathcal{B}_k \Bigl ( \operatorname{tr}A ,\ -1! ~ \operatorname{tr}A^2,\ 2! ~\operatorname{tr}A^3, \ldots, (-1)^{k-1}(k-1)! ~ \operatorname{tr}A^k\Bigr ) ~.
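As an illustration of this last expression, the coefficients can be computed purely from the traces \operatorname{tr}A^j. The sketch below (Python with NumPy; the function name is chosen here for the example) evaluates the complete Bell polynomials through their standard recurrence B_{k+1}=\sum_i \binom{k}{i} B_{k-i} x_{i+1}, with B_0 = 1.

 import numpy as np
 from math import comb, factorial

 def charpoly_via_bell(A):
     """Coefficients [c_n, ..., c_0] from traces of powers of A, using the
     complete exponential Bell polynomial expression."""
     A = np.asarray(A, dtype=float)
     n = A.shape[0]
     # x_j = (-1)^(j-1) (j-1)! tr(A^j),  j = 1..n  (x[0] is a dummy placeholder)
     P = np.eye(n)
     x = [0.0]
     for j in range(1, n + 1):
         P = P @ A
         x.append((-1) ** (j - 1) * factorial(j - 1) * np.trace(P))
     # complete Bell polynomials B_0, ..., B_n
     B = [1.0]
     for k in range(n):
         B.append(sum(comb(k, i) * B[k - i] * x[i + 1] for i in range(k + 1)))
     return np.array([(-1) ** k / factorial(k) * B[k] for k in range(n + 1)])

Its output should agree with the matrix recursion of the algorithm section.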


Example

Consider, for instance, the matrix

: A = \begin{pmatrix} 3 & 1 & 5 \\ 3 & 3 & 1 \\ 4 & 6 & 4 \end{pmatrix}~.

The recursion of the algorithm yields

: M_1 = I~, \qquad c_2 = -\operatorname{tr}A = -10~;

: M_2 = A - 10\, I = \begin{pmatrix} -7 & 1 & 5 \\ 3 & -7 & 1 \\ 4 & 6 & -6 \end{pmatrix}, \qquad c_1 = -\tfrac{1}{2}\operatorname{tr}(AM_2) = 4~;

: M_3 = AM_2 + 4\, I = \begin{pmatrix} 6 & 26 & -14 \\ -8 & -8 & 12 \\ 6 & -14 & 6 \end{pmatrix}, \qquad c_0 = -\tfrac{1}{3}\operatorname{tr}(AM_3) = -40~.

Furthermore, M_4 = AM_3 + c_0 I = 0, which confirms the above calculations. The characteristic polynomial of A is thus p_A(\lambda)=\lambda^3-10\lambda^2+4\lambda-40; the determinant of A is \det A = (-1)^3 c_0 = 40; the trace is 10=-c_2; and the inverse of A is

: A^{-1} = -\frac{M_3}{c_0} = \frac{1}{40}\begin{pmatrix} 6 & 26 & -14 \\ -8 & -8 & 12 \\ 6 & -14 & 6 \end{pmatrix}~.
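As a quick numerical check of this example (assuming NumPy is available; numpy.poly returns the characteristic-polynomial coefficients of a square matrix, highest power first):

 import numpy as np

 A = np.array([[3., 1., 5.],
               [3., 3., 1.],
               [4., 6., 4.]])

 print(np.poly(A))             # approx [  1. -10.   4. -40.]  = (c_3, c_2, c_1, c_0)
 print(np.linalg.det(A))       # approx 40.0
 print(40 * np.linalg.inv(A))  # approx M_3 given above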


An equivalent but distinct expression

The coefficients may alternatively be obtained from a compact determinant of an m×m matrix of traces, which follows from the above Jacobi's formula:

:c_{n-m} = \frac{(-1)^m}{m!} \begin{vmatrix} \operatorname{tr}A & m-1 &0&\cdots&0\\ \operatorname{tr}A^2 &\operatorname{tr}A& m-2 &\cdots&0\\ \vdots & \vdots & & & \vdots \\ \operatorname{tr}A^{m-1} &\operatorname{tr}A^{m-2}& \cdots & \cdots & 1 \\ \operatorname{tr}A^m &\operatorname{tr}A^{m-1}& \cdots & \cdots & \operatorname{tr}A \end{vmatrix} ~.
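This determinant is easy to evaluate numerically. The sketch below builds the m×m matrix of traces and takes its determinant with NumPy; the function name is a placeholder introduced here.

 import numpy as np
 from math import factorial

 def coeff_from_trace_determinant(A, m):
     """c_{n-m} from the m x m determinant of traces displayed above."""
     A = np.asarray(A, dtype=float)
     traces = [np.trace(np.linalg.matrix_power(A, j)) for j in range(1, m + 1)]
     T = np.zeros((m, m))
     for i in range(m):
         for j in range(m):
             if j <= i:
                 T[i, j] = traces[i - j]   # tr A^{i-j+1} on and below the diagonal
             elif j == i + 1:
                 T[i, j] = m - 1 - i       # superdiagonal entries m-1, m-2, ..., 1
     return (-1) ** m / factorial(m) * np.linalg.det(T)

For the 3×3 example above, coeff_from_trace_determinant(A, 3) gives approximately -40 = c_0, while m = 1, 2 reproduce c_2 = -10 and c_1 = 4.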


See also

* Characteristic polynomial
* Horner's method
* Fredholm determinant


References

Barbaresco F. (2019). "Souriau Exponential Map Algorithm for Machine Learning on Matrix Lie Groups". In: Nielsen F., Barbaresco F. (eds), Geometric Science of Information (GSI 2019), Lecture Notes in Computer Science, vol. 11712. Springer, Cham. https://doi.org/10.1007/978-3-030-26980-7_10