In mathematics, Sylvester's criterion is a necessary and sufficient criterion to determine whether a Hermitian matrix is positive-definite.

Sylvester's criterion states that an ''n'' × ''n'' Hermitian matrix ''M'' is positive-definite if and only if all of the following matrices have a positive determinant:
* the upper left 1-by-1 corner of ''M'',
* the upper left 2-by-2 corner of ''M'',
* the upper left 3-by-3 corner of ''M'',
* \quad\vdots
* ''M'' itself.
In other words, all of the ''leading'' principal minors must be positive. By using appropriate permutations of rows and columns of ''M'', it can also be shown that the positivity of ''any'' nested sequence of ''n'' principal minors of ''M'' is equivalent to ''M'' being positive-definite.

An analogous theorem holds for characterizing positive-semidefinite Hermitian matrices, except that it is no longer sufficient to consider only the ''leading'' principal minors, as illustrated by the Hermitian matrix
: \begin{pmatrix} 0 & 0 & -1 \\ 0 & -1 & 0 \\ -1 & 0 & 0 \end{pmatrix}\quad\text{with eigenvectors}\quad \begin{pmatrix} 0\\1\\0 \end{pmatrix},\quad \begin{pmatrix} 1\\0\\1 \end{pmatrix}\quad\text{and}\quad \begin{pmatrix} 1\\0\\-1 \end{pmatrix}.
Its leading principal minors are 0, 0, and 1, all nonnegative, yet the matrix is not positive-semidefinite: the first two eigenvectors have eigenvalue −1.
A Hermitian matrix ''M'' is positive-semidefinite if and only if ''all'' principal minors of ''M'' are nonnegative.
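The statement can be checked numerically. The following Python/NumPy sketch (the helper names are our own, not standard) tests positive-definiteness via the leading principal minors and reproduces the counterexample above, where all leading principal minors are nonnegative yet the matrix is not positive-semidefinite:

```python
import numpy as np

def leading_principal_minors(M):
    """Determinants of the upper-left k-by-k corners of M, for k = 1..n."""
    n = M.shape[0]
    return [np.linalg.det(M[:k, :k]).real for k in range(1, n + 1)]

def is_positive_definite(M):
    """Sylvester's criterion: M is positive-definite iff all leading
    principal minors are positive (M assumed Hermitian)."""
    return all(d > 0 for d in leading_principal_minors(M))

# A positive-definite example: its leading minors are 2, 3, 4 (all positive).
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 2.0, 1.0],
              [0.0, 1.0, 2.0]])
print(is_positive_definite(A))            # True
print(np.all(np.linalg.eigvalsh(A) > 0))  # True: agrees with the eigenvalue test

# The counterexample from the text: leading minors are 0, 0, 1 (nonnegative),
# but the spectrum contains -1, so the matrix is NOT positive-semidefinite.
B = np.array([[ 0.0,  0.0, -1.0],
              [ 0.0, -1.0,  0.0],
              [-1.0,  0.0,  0.0]])
print(leading_principal_minors(B))  # approximately [0.0, 0.0, 1.0]
print(np.linalg.eigvalsh(B))        # approximately [-1, -1, 1]
```

This illustrates why the semidefinite version of the theorem must inspect ''all'' principal minors: the 2-by-2 principal minor of B obtained from rows and columns {2, 2} is −1 < 0, which leading minors alone never see.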


Proof for the case of positive definite matrices

Suppose M_n is an n\times n Hermitian matrix, M_n^\dagger=M_n. Let M_k, k=1,\ldots,n, be the leading principal submatrices, i.e. the k\times k upper left corner matrices. It will be shown that if M_n is positive definite, then the leading principal minors are positive; that is, \det M_k>0 for all k.

Each M_k is positive definite. Indeed, choosing
: x=\begin{pmatrix} x_1\\ \vdots\\ x_k\\ 0\\ \vdots\\ 0 \end{pmatrix} = \begin{pmatrix} \vec{x}\\ 0\\ \vdots\\ 0 \end{pmatrix}
we can notice that 0 < x^\dagger M_n x = \vec{x}^\dagger M_k \vec{x} for every nonzero \vec{x}\in\mathbb{C}^k. Equivalently, the eigenvalues of M_k are positive, and this implies that \det M_k>0, since the determinant is the product of the eigenvalues.

To prove the reverse implication, we use induction. The general form of an (n+1) \times (n+1) Hermitian matrix is :
: M_{n+1}= \begin{pmatrix} M_n & \vec{v}\\ \vec{v}^\dagger & d \end{pmatrix} \qquad (*),
where M_n is an n \times n Hermitian matrix, \vec{v} is a vector, and d is a real constant. Suppose the criterion holds for M_n. Assuming that all the leading principal minors of M_{n+1} are positive implies that \det M_{n+1}>0, that \det M_k>0 for all k\le n, and hence that M_n is positive definite by the inductive hypothesis. Denote :
: x=\begin{pmatrix} \vec{x}\\ x_{n+1} \end{pmatrix}
then
: x^\dagger M_{n+1} x=\vec{x}^\dagger M_n\vec{x}+x_{n+1}\vec{x}^\dagger\vec{v}+\bar{x}_{n+1}\vec{v}^\dagger\vec{x}+d|x_{n+1}|^2.
By completing the square, this last expression is equal to
: (\vec{x}^\dagger+\bar{x}_{n+1}\vec{v}^\dagger M_n^{-1})M_n(\vec{x}+x_{n+1}M_n^{-1}\vec{v})-|x_{n+1}|^2\vec{v}^\dagger M_n^{-1}\vec{v}+d|x_{n+1}|^2
: =(\vec{x}+\vec{c})^\dagger M_n(\vec{x}+\vec{c})+|x_{n+1}|^2(d-\vec{v}^\dagger M_n^{-1}\vec{v})
where \vec{c}=x_{n+1}M_n^{-1}\vec{v} (note that M_n^{-1} exists because the eigenvalues of M_n are all positive). The first term is nonnegative by the inductive hypothesis, and positive whenever \vec{x}+\vec{c}\neq 0. We now examine the sign of the second term. By using the block matrix determinant formula
: \det\begin{pmatrix} A & B\\ C & D \end{pmatrix}=\det A\,\det(D-CA^{-1}B)
on (*), we obtain
: \det M_{n+1}=\det M_n\,(d-\vec{v}^\dagger M_n^{-1}\vec{v})>0,
which implies d-\vec{v}^\dagger M_n^{-1}\vec{v}>0. Consequently, x^\dagger M_{n+1}x>0 for every nonzero x: if \vec{x}+\vec{c}\neq 0, the first term is positive; otherwise x_{n+1}\neq 0 (else x=0) and the second term is positive. Hence M_{n+1} is positive definite.
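The Schur-complement determinant identity used in the last step can be verified numerically. The sketch below (variable names ours) builds a random Hermitian matrix in the block form (*) and checks that \det M_{n+1}=\det M_n\,(d-\vec{v}^\dagger M_n^{-1}\vec{v}); the identity only requires M_n to be invertible, which holds here since M_n is shifted to be positive definite:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random Hermitian M_n, shifted by a multiple of I to make it positive definite.
X = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
M_n = X + X.conj().T + 10 * n * np.eye(n)
v = rng.standard_normal(n) + 1j * rng.standard_normal(n)
d = 5.0

# Assemble M_{n+1} in the block form (*): [[M_n, v], [v^dagger, d]].
M_n1 = np.block([[M_n, v[:, None]],
                 [v.conj()[None, :], np.array([[d]])]])

# det M_{n+1} = det M_n * (d - v^dagger M_n^{-1} v)  (Schur complement formula)
schur = d - v.conj() @ np.linalg.solve(M_n, v)
lhs = np.linalg.det(M_n1)
rhs = np.linalg.det(M_n) * schur
print(np.isclose(lhs, rhs))  # True
```

Note that `np.linalg.solve(M_n, v)` computes M_n^{-1}\vec{v} without forming the inverse explicitly, which is the numerically preferred route.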


Proof for the case of positive semidefinite matrices

Let M_n be an ''n'' × ''n'' Hermitian matrix. Suppose M_n is positive semidefinite. Essentially the same proof as for the case that M_n is strictly positive definite shows that all principal minors (not necessarily only the leading ones) are nonnegative.

For the reverse implication, it suffices to show that if M_n has all nonnegative principal minors, then for all ''t'' > 0, all leading principal minors of the Hermitian matrix M_n+tI_n are strictly positive, where I_n is the ''n'' × ''n'' identity matrix. Indeed, from the positive definite case, we would then know that the matrices M_n+tI_n are strictly positive definite. Since the limit of positive definite matrices is always positive semidefinite, we can take t \to 0 to conclude.

To show this, let M_k be the ''k''th leading principal submatrix of M_n. We know that q_k(t) = \det(M_k + tI_k) is a polynomial in ''t'', related to the characteristic polynomial p_{M_k} via q_k(t) = (-1)^k p_{M_k}(-t). We use the identity in Characteristic polynomial#Properties to write
: q_k(t) = \sum_{j=0}^k t^{k-j} \operatorname{tr}\left(\textstyle\bigwedge^j M_k\right),
where \operatorname{tr}\left(\bigwedge^j M_k\right) is the trace of the ''j''th exterior power of M_k. From the definition of minors, we know that the entries in the matrix expansion of \bigwedge^j M_k (for ''j'' > 0) are just the minors of M_k. In particular, the diagonal entries are the ''j''-by-''j'' principal minors of M_k, which of course are also principal minors of M_n, and are thus nonnegative. Since the trace of a matrix is the sum of the diagonal entries, it follows that \operatorname{tr}\left(\textstyle\bigwedge^j M_k\right) \geq 0. Thus the coefficient of t^{k-j} in q_k(t) is nonnegative for all ''j'' > 0. For ''j'' = 0, the coefficient is clearly 1. In particular, q_k(t) > 0 for all ''t'' > 0, which is what was required to show.
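The key identity in this proof, that the coefficient of t^{k-j} in q_k(t)=\det(M_k+tI_k) equals the sum of all ''j''-by-''j'' principal minors (the trace of the ''j''th exterior power), can be checked directly. The sketch below (helper names ours) computes these coefficient sums by brute force over index subsets for a positive-semidefinite example with a zero eigenvalue:

```python
import itertools
import numpy as np

def principal_minor_sums(M):
    """e_j = sum of all j-by-j principal minors of M = tr(wedge^j M), j = 0..k."""
    k = M.shape[0]
    sums = [1.0]  # j = 0: the empty minor contributes 1
    for j in range(1, k + 1):
        s = sum(np.linalg.det(M[np.ix_(idx, idx)]).real
                for idx in itertools.combinations(range(k), j))
        sums.append(s)
    return sums

# A positive-semidefinite matrix with eigenvalues 0, 2, 2.
M = np.array([[1.0, 1.0, 0.0],
              [1.0, 1.0, 0.0],
              [0.0, 0.0, 2.0]])
k = M.shape[0]

# Coefficients of q_k(t) = det(M + t I) = sum_j t^(k-j) e_j.
e = principal_minor_sums(M)
assert all(s >= 0 for s in e)  # all principal-minor sums are nonnegative

# Compare the coefficient expansion against a direct determinant at t = 0.5.
t = 0.5
q_direct = np.linalg.det(M + t * np.eye(k))
q_series = sum(t ** (k - j) * e[j] for j in range(k + 1))
print(np.isclose(q_direct, q_series))  # True
```

Since every e_j \geq 0 and the leading coefficient is 1, q_k(t) > 0 for all t > 0, exactly as the proof argues.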

