Lyapunov Equation
In control theory, the discrete Lyapunov equation is of the form
: A X A^H - X + Q = 0
where Q is a Hermitian matrix and A^H is the conjugate transpose of A. The continuous Lyapunov equation is of the form
: A X + X A^H + Q = 0.
The Lyapunov equation occurs in many branches of control theory, such as stability analysis and optimal control. This and related equations are named after the Russian mathematician Aleksandr Lyapunov.


Application to stability

In the following theorems A, P, Q \in \mathbb{R}^{n \times n}, and P and Q are symmetric. The notation P > 0 means that the matrix P is positive definite.

Theorem (continuous time version). Given any Q > 0, there exists a unique P > 0 satisfying A^T P + P A + Q = 0 if and only if the linear system \dot{x} = A x is globally asymptotically stable. The quadratic function V(x) = x^T P x is a Lyapunov function that can be used to verify stability.

Theorem (discrete time version). Given any Q > 0, there exists a unique P > 0 satisfying A^T P A - P + Q = 0 if and only if the linear system x_{t+1} = A x_t is globally asymptotically stable. As before, x^T P x is a Lyapunov function.
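As an illustrative sketch (not part of the standard statement), the continuous-time theorem can be checked numerically. The snippet below assumes NumPy and SciPy are available and uses arbitrary example matrices A and Q; it also assumes SciPy's documented convention that solve_continuous_lyapunov(M, R) solves M X + X M^H = R, hence the transposition and sign flip.

 # Sketch of the continuous-time theorem: A is Hurwitz iff the Lyapunov
 # solution P is positive definite.  Example matrices are arbitrary.
 import numpy as np
 from scipy import linalg
 
 A = np.array([[0.0, 1.0],
               [-2.0, -3.0]])        # eigenvalues -1, -2: globally asymptotically stable
 Q = np.eye(2)                       # any Q > 0
 
 # SciPy's solve_continuous_lyapunov(M, R) solves M P + P M^H = R,
 # so pass A^T and -Q to match A^T P + P A + Q = 0 as written above.
 P = linalg.solve_continuous_lyapunov(A.T, -Q)
 
 print("residual:", np.linalg.norm(A.T @ P + P @ A + Q))   # ~ 0
 print("eig(P):  ", np.linalg.eigvalsh(P))                 # all > 0, so V(x) = x^T P x is a Lyapunov function
 print("eig(A):  ", np.linalg.eigvals(A).real)             # all < 0, consistent with the theorem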


Computational aspects of solution

The Lyapunov equation is linear: in vectorized form it is a system of n^2 linear equations in the n^2 entries of X, which can be solved in \mathcal{O}(n^6) time using standard matrix factorization methods. However, specialized algorithms are available which can yield solutions much more quickly (typically in \mathcal{O}(n^3) operations) by exploiting the specific structure of the Lyapunov equation. For the discrete case, the Schur method of Kitagawa is often used. For the continuous Lyapunov equation, the Bartels–Stewart algorithm can be used.
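For readers who want to experiment, SciPy exposes solvers along these lines: scipy.linalg.solve_continuous_lyapunov (which, in recent versions, follows a Schur-based, Bartels–Stewart-style approach via a triangular Sylvester solve) and scipy.linalg.solve_discrete_lyapunov. The sketch below assumes those functions and their documented conventions (A X A^H - X + Q = 0 for the discrete solver, A X + X A^H = Q for the continuous one); the random matrices are arbitrary examples, and the printed residuals confirm the conventions.

 # Sketch: solving both Lyapunov equations with SciPy's specialized solvers.
 # Example matrices are arbitrary; residual norms should be near zero.
 import numpy as np
 from scipy import linalg
 
 rng = np.random.default_rng(0)
 n = 50
 Q = np.eye(n)
 
 # Continuous case: shift a random matrix to make it Hurwitz (all Re(eig) < 0).
 A_c = rng.standard_normal((n, n)) - 5.0 * n**0.5 * np.eye(n)
 # SciPy solves A X + X A^H = Q, so pass -Q to match A X + X A^H + Q = 0.
 X_c = linalg.solve_continuous_lyapunov(A_c, -Q)
 print(np.linalg.norm(A_c @ X_c + X_c @ A_c.conj().T + Q))   # ~ 0
 
 # Discrete case: rescale a random matrix to be Schur stable (spectral radius < 1).
 A_d = rng.standard_normal((n, n))
 A_d *= 0.9 / max(abs(np.linalg.eigvals(A_d)))
 # SciPy's discrete solver uses the convention A X A^H - X + Q = 0.
 X_d = linalg.solve_discrete_lyapunov(A_d, Q)
 print(np.linalg.norm(A_d @ X_d @ A_d.conj().T - X_d + Q))   # ~ 0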


Analytic solution

Defining the vectorization operator \operatorname{vec}(A) as stacking the columns of a matrix A and A \otimes B as the Kronecker product of A and B, the continuous time and discrete time Lyapunov equations can be expressed as solutions of a matrix equation. Furthermore, if the matrix A is stable, the solution can also be expressed as an integral (continuous time case) or as an infinite sum (discrete time case).


Discrete time

Using the result that \operatorname{vec}(ABC) = (C^T \otimes A)\operatorname{vec}(B), one has
: (I_{n^2} - \bar{A} \otimes A)\operatorname{vec}(X) = \operatorname{vec}(Q)
where I_{n^2} is a conformable identity matrix and \bar{A} is the element-wise complex conjugate of A. One may then solve for \operatorname{vec}(X) by inverting or solving the linear equations. To get X, one must just reshape \operatorname{vec}(X) appropriately. Moreover, if A is stable, the solution X can also be written as
: X = \sum_{k=0}^{\infty} A^{k} Q (A^{H})^{k} .
For comparison, consider the one-dimensional case, where this just says that the solution of (1 - a^2) x = q is
: x = \frac{q}{1 - a^2} = \sum_{k=0}^{\infty} q a^{2k} .
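A small numerical sketch of both formulas, assuming NumPy and using an arbitrary Schur-stable example matrix: build the Kronecker system, reshape its solution, and compare against a truncated version of the infinite sum.

 # Sketch of the vectorized (Kronecker) solution of A X A^H - X + Q = 0
 # and of the infinite-sum formula, for an arbitrary Schur-stable A.
 import numpy as np
 
 rng = np.random.default_rng(1)
 n = 4
 A = rng.standard_normal((n, n)) + 1j * rng.standard_normal((n, n))
 A *= 0.8 / max(abs(np.linalg.eigvals(A)))     # spectral radius 0.8 < 1: stable
 Q = np.eye(n)
 
 def vec(M):
     return M.flatten(order="F")               # column-stacking vectorization
 
 # (I_{n^2} - conj(A) kron A) vec(X) = vec(Q)
 K = np.eye(n * n) - np.kron(A.conj(), A)
 X = np.linalg.solve(K, vec(Q)).reshape((n, n), order="F")
 print(np.linalg.norm(A @ X @ A.conj().T - X + Q))          # ~ 0
 
 # Truncated version of X = sum_{k>=0} A^k Q (A^H)^k; terms decay geometrically.
 X_sum = sum(np.linalg.matrix_power(A, k) @ Q @ np.linalg.matrix_power(A.conj().T, k)
             for k in range(200))
 print(np.linalg.norm(X - X_sum))                           # ~ 0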


Continuous time

Using again the Kronecker product notation and the vectorization operator, one has the matrix equation
: (I_n \otimes A + \bar{A} \otimes I_n) \operatorname{vec}(X) = -\operatorname{vec}(Q),
where \bar{A} denotes the matrix obtained by complex conjugating the entries of A. Similar to the discrete-time case, if A is stable, the solution X can also be written as
: X = \int_0^{\infty} e^{A\tau} Q e^{A^H \tau} \, d\tau .
For comparison, consider the one-dimensional case, where this just says that the solution of 2 a x = -q is
: x = \frac{-q}{2a} = \int_0^{\infty} q e^{2 a \tau} \, d\tau .
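For completeness, a similar sketch for the continuous-time formulas, assuming NumPy and SciPy's matrix exponential and using an arbitrary Hurwitz example matrix: solve the Kronecker system, then compare against a finite-horizon midpoint-rule approximation of the integral.

 # Sketch of the vectorized solution of A X + X A^H + Q = 0 and of the
 # integral formula X = int_0^inf exp(A t) Q exp(A^H t) dt, for a Hurwitz A.
 import numpy as np
 from scipy.linalg import expm
 
 n = 3
 A = np.array([[-1.0, 2.0, 0.0],
               [0.0, -3.0, 1.0],
               [0.0, 0.0, -2.0]])    # triangular, eigenvalues -1, -3, -2: Hurwitz
 Q = np.eye(n)
 
 def vec(M):
     return M.flatten(order="F")     # column-stacking vectorization
 
 # (I_n kron A + conj(A) kron I_n) vec(X) = -vec(Q)
 K = np.kron(np.eye(n), A) + np.kron(A.conj(), np.eye(n))
 X = np.linalg.solve(K, -vec(Q)).reshape((n, n), order="F")
 print(np.linalg.norm(A @ X + X @ A.conj().T + Q))          # ~ 0
 
 # Midpoint-rule approximation of the integral on [0, T]; exp(A t) is
 # negligible beyond t ~ 40 for this A, so truncation error is tiny.
 dt, T = 0.01, 40.0
 ts = np.arange(dt / 2.0, T, dt)
 X_int = sum(expm(A * t) @ Q @ expm(A.conj().T * t) for t in ts) * dt
 print(np.linalg.norm(X - X_int))                           # small (quadrature error only)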


Relationship between discrete and continuous Lyapunov equations

We start with the continuous-time linear dynamics
: \dot{\mathbf{x}} = \mathbf{A}\mathbf{x}
and then discretize them as follows:
: \dot{\mathbf{x}} \approx \frac{\mathbf{x}_{t+1} - \mathbf{x}_t}{\delta}
where \delta > 0 indicates a small forward displacement in time. Substituting the bottom equation into the top and shuffling terms around, we get a discrete-time equation for \mathbf{x}_{t+1}:
: \mathbf{x}_{t+1} = \mathbf{x}_t + \delta \mathbf{A} \mathbf{x}_t = (\mathbf{I} + \delta\mathbf{A})\mathbf{x}_t = \mathbf{B}\mathbf{x}_t
where we've defined \mathbf{B} \equiv \mathbf{I} + \delta\mathbf{A}. Now we can use the discrete time Lyapunov equation for \mathbf{B}:
: \mathbf{B}^T\mathbf{M}\mathbf{B} - \mathbf{M} = -\delta\mathbf{Q} .
Plugging in our definition for \mathbf{B}, we get:
: (\mathbf{I} + \delta \mathbf{A})^T\mathbf{M}(\mathbf{I} + \delta \mathbf{A}) - \mathbf{M} = -\delta \mathbf{Q} .
Expanding this expression out yields:
: (\mathbf{M} + \delta \mathbf{A}^T\mathbf{M}) (\mathbf{I} + \delta \mathbf{A}) - \mathbf{M} = \delta(\mathbf{A}^T\mathbf{M} + \mathbf{M}\mathbf{A}) + \delta^2 \mathbf{A}^T\mathbf{M}\mathbf{A} = -\delta \mathbf{Q} .
Recall that \delta is a small displacement in time. Letting \delta go to zero brings us closer and closer to having continuous dynamics, and in the limit we achieve them. It stands to reason that we should also recover the continuous-time Lyapunov equation in the limit. Dividing through by \delta on both sides, and then letting \delta \to 0, we find that
: \mathbf{A}^T\mathbf{M} + \mathbf{M}\mathbf{A} = -\mathbf{Q} ,
which is the continuous-time Lyapunov equation, as desired.
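This limiting argument can also be checked numerically. The sketch below is illustrative only: it assumes NumPy/SciPy, an arbitrary Hurwitz example matrix, and SciPy's documented solver conventions (solve_discrete_lyapunov(a, q) solves a X a^H - X + q = 0, solve_continuous_lyapunov(a, q) solves a X + X a^H = q). It solves the discrete equation with B = I + δA and right-hand side -δQ for shrinking δ, and shows that the solution approaches the continuous-time one.

 # Numerical check of the limit: with B = I + delta*A, the solution of
 # B^T M B - M = -delta*Q approaches the continuous-time solution as delta -> 0.
 import numpy as np
 from scipy import linalg
 
 A = np.array([[0.0, 1.0],
               [-2.0, -3.0]])                 # Hurwitz example (eigenvalues -1, -2)
 Q = np.eye(2)
 
 # Continuous-time solution of A^T M + M A = -Q
 # (SciPy solves a X + X a^H = q, so pass A^T and -Q).
 M_cont = linalg.solve_continuous_lyapunov(A.T, -Q)
 
 for delta in [1e-1, 1e-2, 1e-3, 1e-4]:
     B = np.eye(2) + delta * A
     # SciPy solves a X a^H - X + q = 0; pass a = B^T, q = delta*Q
     # to obtain B^T M B - M = -delta*Q.
     M_disc = linalg.solve_discrete_lyapunov(B.T, delta * Q)
     print(delta, np.linalg.norm(M_disc - M_cont))           # shrinks roughly like delta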


See also

* Sylvester equation
* Algebraic Riccati equation
* Kalman filter

