Faddeev–LeVerrier Algorithm
In mathematics (linear algebra), the Faddeev–LeVerrier algorithm is a recursive method to calculate the coefficients of the characteristic polynomial p_A(\lambda)=\det (\lambda I_n - A) of a square matrix, A, named after Dmitry Konstantinovich Faddeev and Urbain Le Verrier. Calculation of this polynomial yields the eigenvalues of A as its roots; as a matrix polynomial in the matrix A itself, it vanishes by the fundamental Cayley–Hamilton theorem. Computing the determinant from the definition of the characteristic polynomial, however, is computationally cumbersome, because it introduces the new symbolic quantity \lambda; by contrast, this algorithm works directly with the coefficients of the matrix A.

The algorithm has been independently rediscovered several times, in some form or another. It was first published in 1840 by Urbain Le Verrier, subsequently redeveloped by P. Horst, Jean-Marie Souriau, in its present form here by Faddeev and Sominsky, and further by J. S. Frame, and others. (For historical points, see Householder. An elegant shortcut to the proof, bypassing Newton polynomials, was introduced by Hou. The bulk of the presentation here follows Gantmacher, p. 88.)


The Algorithm

The objective is to calculate the coefficients c_k of the characteristic polynomial of the n×n matrix A,
::p_A(\lambda)\equiv \det(\lambda I_n-A)=\sum_{k=0}^{n} c_k \lambda^k~,
where, evidently, c_n = 1 and c_0 = (−1)^n det A.

The coefficients c_{n-k} are determined recursively from the top down, by dint of the sequence of auxiliary matrices M_k,
: \begin{align} M_0 &\equiv 0 & c_n &= 1 \qquad &(k=0) \\ M_k &\equiv AM_{k-1} + c_{n-k+1} I \qquad \qquad & c_{n-k} &= -\frac{1}{k} \operatorname{tr}(AM_k) \qquad &k=1,\ldots ,n ~. \end{align}
Thus,
: M_1= I ~, \quad c_{n-1} = - \operatorname{tr} A =-c_n \operatorname{tr} A ;
:M_2= A-I\operatorname{tr} A , \quad c_{n-2}=-\frac{1}{2}\Bigl(\operatorname{tr} A^2 -(\operatorname{tr} A)^2\Bigr) = -\frac{1}{2} (c_n \operatorname{tr} A^2 +c_{n-1} \operatorname{tr} A) ;
:M_3= A^2-A\operatorname{tr} A -\frac{1}{2}\Bigl(\operatorname{tr} A^2 -(\operatorname{tr} A)^2\Bigr) I,
::c_{n-3}=- \tfrac{1}{6}\Bigl( (\operatorname{tr}A)^3-3\operatorname{tr}(A^2)(\operatorname{tr}A)+2\operatorname{tr}(A^3)\Bigr)=-\frac{1}{3}(c_n \operatorname{tr} A^3+c_{n-1} \operatorname{tr} A^2 +c_{n-2}\operatorname{tr} A);
etc.,   ...;
:M_m= \sum_{k=1}^{m} c_{n-m+k}\, A^{k-1} ~,
:c_{n-m}= -\frac{1}{m}(c_n \operatorname{tr} A^m+c_{n-1} \operatorname{tr} A^{m-1}+\cdots+c_{n-m+1}\operatorname{tr} A)= -\frac{1}{m}\sum_{k=1}^{m} c_{n-m+k}\, \operatorname{tr} A^k ~ ;
...

Observe that M_{n+1} = AM_n + c_0 I = 0 terminates the recursion at k = n. This could be used to obtain the inverse or the determinant of A.
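As a concrete sketch, the recursion above translates directly into code. The following Python uses exact rational arithmetic and an illustrative 2×2 matrix of our own choosing; the function name is ours, not from the literature:

```python
from fractions import Fraction

def faddeev_leverrier(A):
    """Return ([c_0, ..., c_n], M_n) for p(lambda) = det(lambda*I - A),
    via M_k = A M_{k-1} + c_{n-k+1} I and c_{n-k} = -tr(A M_k)/k."""
    n = len(A)
    mul = lambda X, Y: [[sum(X[i][r]*Y[r][j] for r in range(n))
                         for j in range(n)] for i in range(n)]
    M = [[Fraction(0)] * n for _ in range(n)]          # M_0 = 0
    c = [Fraction(0)] * (n + 1)
    c[n] = Fraction(1)                                 # c_n = 1
    for k in range(1, n + 1):
        M = mul(A, M)                                  # A M_{k-1}
        for i in range(n):
            M[i][i] += c[n - k + 1]                    # + c_{n-k+1} I
        c[n - k] = -sum(mul(A, M)[i][i] for i in range(n)) / k
    return c, M

A = [[Fraction(3), Fraction(1)], [Fraction(2), Fraction(4)]]
c, Mn = faddeev_leverrier(A)
print(c)   # [Fraction(10, 1), Fraction(-7, 1), Fraction(1, 1)], i.e. p = l^2 - 7l + 10
```

Since M_{n+1} = AM_n + c_0 I = 0, the final auxiliary matrix also yields the inverse, A^{-1} = −M_n/c_0, whenever c_0 ≠ 0.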


Derivation

The proof relies on the modes of the adjugate matrix, B_k ≡ M_{n−k}, the auxiliary matrices encountered.   This matrix is defined by
::(\lambda I-A) B = I ~ p_A(\lambda)
and is thus proportional to the resolvent
:B = (\lambda I-A)^{-1} I ~ p_A(\lambda) ~.
It is evidently a matrix polynomial in \lambda of degree n−1. Thus,
::B\equiv \sum_{k=0}^{n-1} \lambda^k~ B_k= \sum_{k=0}^{n-1} \lambda^k~ M_{n-k} ,
where one may define the harmless M_0 ≡ 0.

Inserting the explicit polynomial forms into the defining equation for the adjugate, above,
:\sum_{k=0}^{n} \lambda^{k+1} M_{n-k} - \lambda^k (AM_{n-k} +c_k I)= 0~.
Now, at the highest order, the first term vanishes by M_0 = 0; whereas at the bottom order (constant in \lambda, from the defining equation of the adjugate, above),
:M_n A= B_0 A= -c_0 I~,
so that shifting the dummy indices of the first term yields
:\sum_{k=0}^{n} \lambda^{k} \Big ( M_{n-k+1} - AM_{n-k} -c_k I\Big )= 0~,
which thus dictates the recursion
:\therefore \qquad M_{m} =A M_{m-1} +c_{n-m+1} I ~,
for m=1,...,n. Note that ascending index m amounts to descending in powers of \lambda, but the polynomial coefficients c are yet to be determined in terms of the M_ks and A.

This can be easiest achieved through the following auxiliary equation (Hou, 1998),
::\lambda \frac{\partial p}{\partial \lambda} -n p =\operatorname{tr} AB~.
This is but the trace of the defining equation for B by dint of Jacobi's formula,
:\frac{\partial p_A(\lambda)}{\partial \lambda}= p_A(\lambda) \sum^\infty_{m=0}\lambda^{-(m+1)} \operatorname{tr}A^m = p_A(\lambda) ~ \operatorname{tr} \frac{I}{\lambda I -A}\equiv\operatorname{tr} B~.
Inserting the polynomial mode forms in this auxiliary equation yields
:\sum_{k=0}^{n} \lambda^{k} \Big ( k c_k - n c_k - \operatorname{tr}A M_{n-k}\Big )= 0~,
so that
:\sum_{m=0}^{n} \lambda^{n-m} \Big ( m c_{n-m} + \operatorname{tr}A M_{m}\Big )= 0~,
and finally
:\therefore \qquad c_{n-m} = -\frac{1}{m} \operatorname{tr}A M_{m} ~.
This completes the recursion of the previous section, unfolding in descending powers of \lambda.

Further note in the algorithm that, more directly,
: M_{m+1} =A M_{m} - \frac{1}{m} (\operatorname{tr}A M_{m})\, I ~,
and, in comportance with the Cayley–Hamilton theorem,
: \operatorname{adj}(A) =(-1)^{n-1} M_n=(-1)^{n-1}(A^{n-1}+c_{n-1}A^{n-2}+ \cdots+c_2 A+ c_1 I)=(-1)^{n-1} \sum_{k=1}^n c_k A^{k-1}~.

The final solution might be more conveniently expressed in terms of complete exponential Bell polynomials as
: c_{n-k} = \frac{(-1)^k}{k!} B_k \Bigl( \operatorname{tr}A , -1! ~ \operatorname{tr}A^2, 2! ~\operatorname{tr}A^3, \ldots, (-1)^{k-1}(k-1)! ~ \operatorname{tr}A^k\Bigr ) .
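The derivation's two pivotal identities — the defining equation (λI − A)B = p_A(λ)I for the mode expansion B = Σ_k λ^k M_{n−k}, and the Bell-polynomial closed form for the coefficients — can be spot-checked numerically. The sketch below uses an illustrative 2×2 matrix of our own choosing and assumes nothing beyond the recursion already stated:

```python
from fractions import Fraction
from math import comb, factorial

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][r]*Y[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

def tr(X):
    return sum(X[i][i] for i in range(len(X)))

A = [[Fraction(3), Fraction(1)], [Fraction(2), Fraction(4)]]  # illustrative
n = len(A)

# Run the recursion, keeping every M_k.
M = [[[Fraction(0)]*n for _ in range(n)]]            # M_0 = 0
c = [Fraction(0)]*(n+1); c[n] = Fraction(1)
for k in range(1, n+1):
    Mk = mul(A, M[-1])
    for i in range(n):
        Mk[i][i] += c[n-k+1]                         # M_k = A M_{k-1} + c_{n-k+1} I
    M.append(Mk)
    c[n-k] = -tr(mul(A, Mk)) / k

# Identity 1: (lam I - A) B(lam) = p(lam) I, with B = sum_k lam^k M_{n-k}.
for lam in (Fraction(2), Fraction(5), Fraction(-1)):
    B = [[sum(lam**k * M[n-k][i][j] for k in range(n)) for j in range(n)]
         for i in range(n)]
    lamI_A = [[lam*(i == j) - A[i][j] for j in range(n)] for i in range(n)]
    p = sum(c[k] * lam**k for k in range(n+1))
    assert mul(lamI_A, B) == [[p*(i == j) for j in range(n)] for i in range(n)]

# Identity 2: c_{n-k} = (-1)^k/k! B_k(tr A, -1! tr A^2, ..., (-1)^{k-1}(k-1)! tr A^k).
def bell(k, x):            # complete exponential Bell polynomial, via recurrence
    B = [Fraction(1)]
    for m in range(k):
        B.append(sum(comb(m, i) * B[m-i] * x[i] for i in range(m+1)))
    return B[k]

P, traces = A, []
for _ in range(n):
    traces.append(tr(P)); P = mul(P, A)              # traces = [tr A, tr A^2, ...]
for k in range(1, n+1):
    x = [Fraction((-1)**j * factorial(j)) * traces[j] for j in range(k)]
    assert c[n-k] == Fraction((-1)**k, factorial(k)) * bell(k, x)
```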


Example

Consider a 3×3 matrix A with trace 10. The recursion yields M_1 = I with c_2 = −tr A = −10, then M_2 with c_1, and finally M_3 with c_0. Furthermore, M_4 = AM_3 + c_0 I = 0, which confirms the above calculations. The characteristic polynomial of the matrix A is thus p_A(λ) = λ³ + c_2λ² + c_1λ + c_0; the determinant of A is (−1)³c_0; the trace is 10 = −c_2; and the inverse of A is A^{−1} = −M_3/c_0.
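A worked example can be reproduced in code. The 3×3 matrix below is an illustrative choice of ours (its trace is 10, matching tr A = −c_2 = 10 above), not necessarily the matrix of the original source:

```python
from fractions import Fraction

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][r]*Y[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

# Illustrative 3x3 matrix with trace 10 (our choice).
A = [[Fraction(v) for v in row] for row in [[3, 1, 5], [3, 3, 1], [4, 6, 4]]]
n = 3
M = [[Fraction(0)]*n for _ in range(n)]                 # M_0 = 0
c = [Fraction(0)]*(n+1); c[n] = Fraction(1)
for k in range(1, n+1):
    M = mul(A, M)
    for i in range(n):
        M[i][i] += c[n-k+1]                             # M_k = A M_{k-1} + c_{n-k+1} I
    c[n-k] = -sum(mul(A, M)[i][i] for i in range(n)) / k

print(c)   # [Fraction(-40, 1), Fraction(4, 1), Fraction(-10, 1), Fraction(1, 1)]
det_A = (-1)**n * c[0]                                  # det A = (-1)^n c_0 = 40
inv_A = [[-M[i][j] / c[0] for j in range(n)] for i in range(n)]  # A^{-1} = -M_n/c_0
```

For this matrix, p_A(λ) = λ³ − 10λ² + 4λ − 40, det A = 40, tr A = −c_2 = 10, and A^{−1} = M_3/40.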


An equivalent but distinct expression

A compact determinant of an m×m-matrix solution for the above Jacobi's formula may alternatively determine the coefficients c:
:c_{n-m} = \frac{(-1)^m}{m!} \begin{vmatrix} \operatorname{tr}A & m-1 &0&\cdots\\ \operatorname{tr}A^2 &\operatorname{tr}A& m-2 &\cdots\\ \vdots & \vdots & & & \vdots \\ \operatorname{tr}A^{m-1} &\operatorname{tr}A^{m-2}& \cdots & \cdots & 1 \\ \operatorname{tr}A^m &\operatorname{tr}A^{m-1}& \cdots & \cdots & \operatorname{tr}A \end{vmatrix} ~.
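This determinant expression can be checked against the recursion. Below is a sketch in Python with exact rationals, using an illustrative 2×2 matrix of our own choosing whose characteristic polynomial λ² − 7λ + 10 is easy to verify by hand:

```python
from fractions import Fraction
from math import factorial

def mul(X, Y):
    n = len(X)
    return [[sum(X[i][r]*Y[r][j] for r in range(n)) for j in range(n)]
            for i in range(n)]

def det(M):                 # cofactor expansion along the first row
    if len(M) == 1:
        return M[0][0]
    return sum(Fraction((-1)**j) * M[0][j]
               * det([row[:j] + row[j+1:] for row in M[1:]])
               for j in range(len(M)))

A = [[Fraction(3), Fraction(1)], [Fraction(2), Fraction(4)]]   # illustrative
n = len(A)
t, P = [None], A            # t[j] = tr A^j (1-indexed)
for _ in range(n):
    t.append(sum(P[i][i] for i in range(n))); P = mul(P, A)

def coeff(m):               # c_{n-m} from the m x m trace determinant
    T = [[Fraction(0)]*m for _ in range(m)]
    for i in range(m):
        for j in range(i+1):
            T[i][j] = t[i-j+1]           # lower triangle: tr A, tr A^2, ...
        if i + 1 < m:
            T[i][i+1] = Fraction(m-1-i)  # superdiagonal: m-1, m-2, ..., 1
    return Fraction((-1)**m, factorial(m)) * det(T)

print([coeff(m) for m in range(1, n+1)])   # [Fraction(-7, 1), Fraction(10, 1)]
```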


See also

* Characteristic polynomial
* Exterior algebra § Leverrier's algorithm
* Horner's method
* Fredholm determinant


References

Barbaresco F. (2019) Souriau Exponential Map Algorithm for Machine Learning on Matrix Lie Groups. In: Nielsen F., Barbaresco F. (eds) Geometric Science of Information. GSI 2019. Lecture Notes in Computer Science, vol 11712. Springer, Cham. https://doi.org/10.1007/978-3-030-26980-7_10