Polynomial Matrix Spectral Factorization
Polynomial matrix spectral factorization, also called the matrix Fejér–Riesz theorem, is a tool used to study the matrix decomposition of polynomial matrices. Polynomial matrices are widely studied in the fields of systems theory and control theory and have seen other uses relating to stable polynomials. In stability theory, spectral factorization has been used to find determinantal matrix representations for bivariate stable polynomials and real zero polynomials.

Given a univariate positive polynomial, i.e., p(t) > 0 for all t \in \mathbb{R}, the Fejér–Riesz theorem yields the polynomial spectral factorization p(t) = q(t)\overline{q(t)}. Results of this form are generically referred to as Positivstellensatz results. Likewise, polynomial matrix spectral factorization provides a factorization for positive definite polynomial matrices; this decomposition also relates to the Cholesky decomposition A = LL^* for scalar matrices. The result was originally proven by Norbert Wiener in a more general context, concerned with integrable matrix-valued functions that also have an integrable log determinant. Because applications are often concerned with the polynomial restriction, simpler proofs and individual analyses exist focusing on this case. Weaker Positivstellensatz conditions have been studied, specifically considering when the polynomial matrix has a positive definite image on semi-algebraic subsets of the reals. Many recent publications have focused on streamlining proofs for these related results. This article roughly follows the recent proof method of Lasha Ephremidze, which relies only on elementary linear algebra and complex analysis.

Spectral factorization is used extensively in linear–quadratic–Gaussian control, and many algorithms exist to calculate spectral factors. Some modern algorithms focus on the more general setting originally studied by Wiener, while others use advances in Toeplitz matrix computations to speed up the calculation of the factors.


Definition

Consider the polynomial matrix

P(t) = \begin{bmatrix} p_{11}(t) & \cdots & p_{1n}(t) \\ \vdots & \ddots & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) \end{bmatrix},

where each entry p_{ij}(t) is a polynomial with complex coefficients of degree at most N. If P(t) is a positive definite Hermitian matrix for all t \in \mathbb{R}, then there exists a polynomial matrix

Q(t) = \begin{bmatrix} q_{11}(t) & \cdots & q_{1n}(t) \\ \vdots & \ddots & \vdots \\ q_{n1}(t) & \cdots & q_{nn}(t) \end{bmatrix}

such that P(t) = Q(t)Q(t)^*, where Q(t)^* is the conjugate transpose. Since t is real, the entries of Q(t)^* are obtained from those of Q(t) by transposing and conjugating the coefficients, so when the q_{ij}(t) are complex coefficient polynomials (or rational functions), the entries of Q(t)^* are as well. Furthermore, Q(t) can be chosen to be nonsingular on the lower half plane.
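
To make the hypothesis concrete, here is a small numeric sketch (the matrix P below is a hypothetical illustration, not taken from this article) that samples a polynomial matrix at real points and checks that each value is Hermitian with strictly positive eigenvalues:

import numpy as np

# Spot-check the hypothesis "P(t) is positive definite Hermitian for all real t"
# at finitely many sample points; this is only a heuristic illustration.
def positive_definite_on_samples(P, ts):
    for t in ts:
        A = P(t)
        if not np.allclose(A, A.conj().T):          # Hermitian?
            return False
        if np.min(np.linalg.eigvalsh(A)) <= 0:      # strictly positive eigenvalues?
            return False
    return True

# Hypothetical example: P(t) = [[t^2 + 2, t], [t, 1]] is positive definite for every real t.
P = lambda t: np.array([[t**2 + 2, t], [t, 1.0]], dtype=complex)
print(positive_definite_on_samples(P, np.linspace(-5, 5, 101)))   # True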


Rational spectral factorization

Let p(t) be a rational function with p(t) > 0 for all t \in \mathbb{R}. Then there exists a rational function q(t) such that p(t) = q(t)\overline{q(t)} and q(t) has no poles or zeroes in the lower half plane. This decomposition is unique up to multiplication by a complex scalar of norm 1.

To prove existence, write p(t) = c \, \frac{\prod_i (t-\alpha_i)}{\prod_j (t-\beta_j)}, where \alpha_i \neq \beta_j. Letting t \to \infty, we can conclude that c is real and positive; dividing p by c (and later multiplying q by \sqrt{c}) reduces to the monic case. The numerator and denominator have distinct sets of roots, so any real root appearing in either must have even multiplicity (to prevent a sign change locally). We can divide out these real roots to reduce to the case where p(t) has only non-real zeroes and poles. Since p takes real values on the real line, for t \in \mathbb{R} we have

p(t) = \frac{\prod_i (t-\alpha_i)}{\prod_j (t-\beta_j)} = \overline{p(t)} = \frac{\prod_i (t-\bar{\alpha}_i)}{\prod_j (t-\bar{\beta}_j)},

and since none of the \alpha_i, \beta_j are fixed points of conjugation, they come in conjugate pairs. For each conjugate pair, pick the zero or pole lying in the upper half plane and accumulate these factors to obtain q(t). The uniqueness result follows in a standard fashion.
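
The existence argument above translates into a short numerical sketch for the purely polynomial case. The routine below assumes p has real coefficients, a positive leading coefficient, and no real roots; it splits the roots by half plane using numpy's root finder and is an illustration rather than a robust factorization algorithm:

import numpy as np

# Scalar spectral factorization by root-splitting: keep one zero from each
# conjugate pair (the one in the upper half plane) so that q has no zeros
# in the lower half plane and p(t) = |q(t)|^2 on the real line.
def spectral_factor(p):
    roots = np.roots(p)                      # p given in descending powers
    upper = roots[roots.imag > 0]            # zeros chosen in the upper half plane
    return np.sqrt(p[0]) * np.poly(upper)    # q(t) = sqrt(c) * prod (t - alpha_i)

p = np.polymul([1, 0, 1], [1, 0, 1])         # p(t) = (t^2 + 1)^2 > 0 on the real line
q = spectral_factor(p)
ts = np.linspace(-3, 3, 7)
print(np.allclose(np.polyval(p, ts), np.abs(np.polyval(q, ts))**2))   # True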


Cholesky decomposition

The inspiration for this result is a factorization which characterizes positive definite matrices.


Decomposition for scalar matrices

Given any positive definite scalar matrix A, the Cholesky decomposition allows us to write A = LL^*, where L is a lower triangular matrix. If we do not restrict to lower triangular matrices, we can consider all factorizations of the form A = VV^*. It is not hard to check that all such factorizations arise from the orbit of L under right multiplication by a unitary matrix, V = LU. To obtain the lower triangular decomposition we induct by splitting off the first row and first column:

\begin{bmatrix} a_{11} & \mathbf{a}_{21}^* \\ \mathbf{a}_{21} & A_{22} \end{bmatrix} = \begin{bmatrix} l_{11} & 0 \\ \mathbf{l}_{21} & L_{22} \end{bmatrix} \begin{bmatrix} l_{11}^* & \mathbf{l}_{21}^* \\ 0 & L_{22}^* \end{bmatrix} = \begin{bmatrix} l_{11} l_{11}^* & l_{11} \mathbf{l}_{21}^* \\ l_{11}^* \mathbf{l}_{21} & \mathbf{l}_{21} \mathbf{l}_{21}^* + L_{22} L_{22}^* \end{bmatrix}

Solving these equations in terms of a_{11}, \mathbf{a}_{21}, A_{22}, we get

l_{11} = \sqrt{a_{11}}, \qquad \mathbf{l}_{21} = \frac{1}{\sqrt{a_{11}}}\,\mathbf{a}_{21}, \qquad L_{22} L_{22}^* = A_{22} - \frac{1}{a_{11}} \mathbf{a}_{21} \mathbf{a}_{21}^*.

Since A is positive definite, a_{11} is a positive real number, so it has a square root. The last condition can be satisfied by induction, since the right-hand side is the Schur complement of a_{11} in A, which is itself positive definite.
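
A direct transcription of this recursion (a sketch for small Hermitian positive definite matrices; the library routine numpy.linalg.cholesky computes the same lower triangular factor) is:

import numpy as np

# Recursive Cholesky: peel off the first row/column, then recurse on the
# Schur complement, exactly as in the block identity above.
def cholesky(A):
    A = np.asarray(A, dtype=complex)
    n = A.shape[0]
    L = np.zeros((n, n), dtype=complex)
    a11 = A[0, 0].real                       # positive, since A is positive definite
    L[0, 0] = np.sqrt(a11)
    if n == 1:
        return L
    L[1:, 0] = A[1:, 0] / np.sqrt(a11)       # l_21 = a_21 / sqrt(a_11)
    schur = A[1:, 1:] - np.outer(A[1:, 0], A[1:, 0].conj()) / a11
    L[1:, 1:] = cholesky(schur)              # the Schur complement is again positive definite
    return L

A = np.array([[4, 2 + 2j], [2 - 2j, 6]])
L = cholesky(A)
print(np.allclose(L @ L.conj().T, A))        # True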


Decomposition for rational polynomial matrices

A rational polynomial matrix is defined as a matrix

P(t) = \begin{bmatrix} p_{11}(t) & \cdots & p_{1n}(t) \\ \vdots & \ddots & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) \end{bmatrix},

where each entry p_{ij}(t) is a complex rational function. If P(t) is a positive definite Hermitian matrix for all t \in \mathbb{R}, then by the symmetric Gaussian elimination performed above, all we need to show is that there exists a rational q_{11}(t) such that p_{11}(t) = q_{11}(t) q_{11}(t)^*, which follows from the rational spectral factorization. Once we have that, we can solve for l_{11}(t) and \mathbf{l}_{21}(t). Since the Schur complement is positive definite for real t away from the poles and is itself a rational polynomial matrix, we can induct to find L_{22}(t). It is not hard to check that we then obtain P(t) = L(t) L(t)^*, where L(t) is a rational polynomial matrix with no poles in the lower half plane.
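
The first step of this induction can be carried out symbolically. The sketch below (using SymPy, on the 2×2 matrix that reappears in the example further down; the scalar factor q11 = t - i is one admissible choice, with its zero in the upper half plane) computes l21 and the Schur complement:

import sympy as sp

# One block step of the symmetric Gaussian elimination described above.
t = sp.symbols('t', real=True)
P = sp.Matrix([[t**2 + 1, 2*t], [2*t, t**2 + 1]])

q11 = t - sp.I                                            # spectral factor of p11 = t^2 + 1
l21 = sp.simplify(P[1, 0] / sp.conjugate(q11))            # l21 = p21 / q11^*  ->  2*t/(t + I)
schur = sp.simplify(P[1, 1] - P[1, 0]*sp.conjugate(P[0, 1]) / P[0, 0])

print(l21)
print(schur)   # equal to (t^2 - 1)^2 / (t^2 + 1), again positive for real t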


Extension to polynomial decompositions

One way to prove the existence of polynomial matrix spectral factorization is to apply the Cholesky decomposition to a rational polynomial matrix and modify it to remove lower half plane singularities. That is, given

P(t) = \begin{bmatrix} p_{11}(t) & \cdots & p_{1n}(t) \\ \vdots & \ddots & \vdots \\ p_{n1}(t) & \cdots & p_{nn}(t) \end{bmatrix},

where each entry p_{ij}(t) is a complex coefficient polynomial and P(t) is positive definite for all t \in \mathbb{R}, there exists a rational polynomial matrix L(t) with no lower half plane poles such that P(t) = L(t) L(t)^*. Given a rational polynomial matrix U(t) which is unitary valued for real t, there exists another decomposition P(t) = L(t) L(t)^* = L(t) U(t)U(t)^*L(t)^*. If \det(L(a)) = 0, then there exists a scalar unitary matrix U whose first column U e_1 is a unit vector in the kernel of L(a). This implies that the first column of L(t) U vanishes at a. To remove the singularity at a we multiply by

U(t) = \operatorname{diag}\left(\frac{t-\bar{a}}{t-a}, 1, \ldots, 1\right).

Then L(t) U U(t) has determinant with one less zero (counted with multiplicity) at a, without introducing any poles in the lower half plane into any of the entries. Repeating this procedure yields a factor whose determinant has no zeros in the lower half plane.
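
A quick symbolic check (under the article's setup; the point a below is an arbitrary choice in the lower half plane) that such a diagonal modification is unitary for every real t, so right-multiplying by it leaves Q(t)Q(t)^* unchanged:

import sympy as sp

# U(t) = diag((t - conj(a))/(t - a), 1) satisfies U(t) U(t)^* = I for real t.
t = sp.symbols('t', real=True)
a = sp.Rational(1, 2) - sp.I                 # sample point in the lower half plane (an assumption)
U = sp.diag((t - sp.conjugate(a)) / (t - a), 1)
print((U * U.conjugate().T).applyfunc(sp.simplify))   # identity matrix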


Example

Consider the following rational matrix decomposition:

A(t) = \begin{bmatrix} t^2 + 1 & 2 t \\ 2 t & t^2 + 1 \end{bmatrix} = \begin{bmatrix} t - i & 0 \\ \frac{2t}{t+i} & \frac{t^2-1}{t+i} \end{bmatrix} \begin{bmatrix} t + i & \frac{2t}{t-i} \\ 0 & \frac{t^2-1}{t-i} \end{bmatrix}.

The left factor has no poles in the upper half plane. However,

\det\left(\begin{bmatrix} t - i & 0 \\ \frac{2t}{t+i} & \frac{t^2-1}{t+i} \end{bmatrix}\right) = \frac{(t-i)(t^2-1)}{t+i},

so we need to modify the decomposition to get rid of the singularity at -i. First we multiply by the scalar unitary matrix U = \frac{1}{\sqrt{2}} \begin{bmatrix} 1 & 1 \\ i & -i \end{bmatrix}, so that

\begin{bmatrix} t - i & 0 \\ \frac{2t}{t+i} & \frac{t^2-1}{t+i} \end{bmatrix} U = \frac{1}{\sqrt{2}}\begin{bmatrix} t-i & t - i \\ i \frac{(t-i)^2}{t+i} & -i(t+i) \end{bmatrix}

becomes a new candidate for the decomposition. Now the first column vanishes at t = i, so we multiply through (on the right) by U(t) = \begin{bmatrix} \frac{t+i}{t-i} & 0 \\ 0 & 1 \end{bmatrix} to obtain

Q(t) = \frac{1}{\sqrt{2}}\begin{bmatrix} t-i & t - i \\ i \frac{(t-i)^2}{t+i} & -i(t+i) \end{bmatrix} U(t) = \frac{1}{\sqrt{2}}\begin{bmatrix} t + i & t - i \\ i(t-i) & -i(t+i) \end{bmatrix},

where \det(Q(t)) = i(1-t^2). This is the desired decomposition A(t) = Q(t) Q(t)^* with no singularities in the lower half plane.
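
The result can be verified symbolically; the short SymPy check below (t is declared real so that conjugation acts only on coefficients) confirms that Q(t)Q(t)^* reproduces A(t) and that \det Q(t) = i(1-t^2):

import sympy as sp

# Symbolic verification of the worked example.
t = sp.symbols('t', real=True)
A = sp.Matrix([[t**2 + 1, 2*t], [2*t, t**2 + 1]])
Q = sp.Matrix([[t + sp.I, t - sp.I],
               [sp.I*(t - sp.I), -sp.I*(t + sp.I)]]) / sp.sqrt(2)

print((Q * Q.conjugate().T - A).applyfunc(sp.simplify))   # zero matrix
print(sp.simplify(Q.det()))                               # i*(1 - t**2), up to term ordering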


Extending analyticity to all of C

After these modifications, the decomposition P(t) = Q(t) Q(t)^* has the property that Q(t) is
holomorphic and invertible on the lower half plane. To extend analyticity to the upper half plane we need the following key observation: if an invertible rational matrix Q(t) is holomorphic on the lower half plane, then Q(t)^{-1} is holomorphic on the lower half plane as well. The analyticity follows from the adjugate matrix formula, since both the entries of Q(t) and \det(Q(t))^{-1} are analytic on the lower half plane: the determinant of a rational polynomial matrix can only have poles where its entries have poles, so \det(Q(t)) has no poles in the lower half plane, and it has no zeroes there because Q(t) is invertible. Subsequently, Q(t) = P(t)\left(Q(t)^{-1}\right)^*. Since Q(t)^{-1} is analytic on the lower half plane, \left(Q(t)^{-1}\right)^* is analytic on the upper half plane, and therefore so is Q(t). Finally, if Q(t) had a pole on the real line, then some entry of Q(t) would be unbounded there, and the corresponding diagonal entry of Q(t)Q(t)^*, being a sum of squared moduli, would be unbounded as well, contradicting the hypothesis that P(t) has no poles on the real line (it is analytic everywhere). The above shows that if Q(t) is analytic and invertible on the lower half plane, then Q(t) is in fact analytic everywhere and hence a polynomial matrix.


Uniqueness

Given two polynomial matrix decompositions which are invertible on the lower half plane, P(t) = Q(t) Q(t)^* = R(t) R(t)^*, we have R(t)^{-1}Q(t)Q(t)^*(R(t)^*)^{-1} = I. Since R(t) is analytic on the lower half plane and nonsingular there, R(t)^{-1}Q(t) is a rational polynomial matrix which is analytic and invertible on the lower half plane. By the argument of the previous section, R(t)^{-1}Q(t) is therefore a polynomial matrix, and it is unitary for all t \in \mathbb{R}. This means that if \mathbf{w}_i(t) is the i-th row of R(t)^{-1}Q(t), then \mathbf{w}_i(t) \mathbf{w}_i(t)^* = 1. For real t this is a sum of non-negative polynomials which sums to a constant, so each of the summands is in fact a constant polynomial, and hence every entry of R(t)^{-1}Q(t) is constant. Then Q(t) = R(t) U, where U is a scalar unitary matrix.


See also

* Matrix factorization of a polynomial



References

* Wiener, N.; Masani, P. (1957). "The prediction theory of multivariate stochastic processes: I. The regularity condition". Acta Mathematica. 98: 111–150. doi:10.1007/BF02404472.