A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders. A matrix differential equation contains more than one function stacked into vector form with a matrix relating the functions to their derivatives.
For example, a first-order matrix ordinary differential equation is

:\dot{\mathbf{x}}(t) = \mathbf{A}(t)\mathbf{x}(t)

where x(''t'') is an ''n''×1 vector of functions of an underlying variable ''t'', ẋ(''t'') is the vector of first derivatives of these functions, and A(''t'') is an ''n''×''n'' matrix of coefficients.
In the case where A is constant and has ''n'' linearly independent eigenvectors, this differential equation has the following general solution,

:\mathbf{x}(t) = c_1 e^{\lambda_1 t}\mathbf{u}_1 + c_2 e^{\lambda_2 t}\mathbf{u}_2 + \cdots + c_n e^{\lambda_n t}\mathbf{u}_n ,

where λ₁, λ₂, …, λₙ are the eigenvalues of A; u₁, u₂, …, uₙ are the respective eigenvectors of A; and c₁, c₂, …, cₙ are constants.
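As a numerical sketch of this eigenvector construction (the 2×2 matrix and initial condition below are illustrative assumptions, not from the article), the constants can be fitted to an initial condition by solving a linear system, and the resulting solution checked against the differential equation:

```python
import numpy as np

# Illustrative 2x2 constant matrix with distinct eigenvalues (-2 and -4).
A = np.array([[-3.0, 1.0],
              [1.0, -3.0]])

# Eigendecomposition: lam[i] is an eigenvalue, U[:, i] the matching eigenvector.
lam, U = np.linalg.eig(A)

# Fit the constants c_i to an initial condition x(0) = x0 by solving U c = x0.
x0 = np.array([1.0, 0.0])
c = np.linalg.solve(U, x0)

def x(t):
    """General solution x(t) = sum_i c_i * exp(lam_i * t) * u_i."""
    return U @ (c * np.exp(lam * t))

# The construction satisfies x'(t) = A x(t) exactly, since
# x'(t) = sum_i c_i * lam_i * exp(lam_i t) * u_i and A u_i = lam_i u_i.
t = 0.7
assert np.allclose(U @ (c * lam * np.exp(lam * t)), A @ x(t))
```

Fitting c by solving U c = x₀ is possible precisely because the ''n'' eigenvectors are linearly independent, so U is invertible.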
More generally, if A(''t'') commutes with its integral ∫ₐᵗ A(''s'') d''s'', then the Magnus expansion reduces to leading order, and the general solution to the differential equation is

:\mathbf{x}(t) = e^{\int_a^t \mathbf{A}(s)\,ds}\mathbf{c} ,

where c is an ''n''×1 constant vector.
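One family that always satisfies the commutation condition is A(''t'') = ''f''(''t'')·M for a fixed matrix M, since A(''t'') and its integral are then multiples of the same matrix. A minimal sketch using SciPy's `expm`, with an illustrative choice of ''f'' and M:

```python
import numpy as np
from scipy.linalg import expm

# A(t) = cos(t) * M commutes with its integral sin(t) * M,
# so x(t) = exp(sin(t) * M) c solves x'(t) = A(t) x(t).
# M and f(t) = cos(t) are illustrative assumptions, not from the article.
M = np.array([[0.0, -1.0],
              [1.0, 0.0]])

def A(t):
    return np.cos(t) * M

def x(t, c=np.array([1.0, 0.0])):
    # exp of the integral from 0 to t of A(s) ds, applied to the constant vector c
    return expm(np.sin(t) * M) @ c

# Finite-difference check that x'(t) = A(t) x(t).
t, h = 0.8, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(deriv, A(t) @ x(t), atol=1e-5)
```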
By use of the Cayley–Hamilton theorem and Vandermonde-type matrices, this formal matrix exponential solution may be reduced to a simple form. Below, this solution is displayed in terms of Putzer's algorithm.
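For a 2×2 matrix with distinct eigenvalues λ₁ ≠ λ₂, Putzer's algorithm reduces to the two-term formula e^{At} = r₁(t) I + r₂(t)(A − λ₁I), with r₁(t) = e^{λ₁t} and r₂(t) = (e^{λ₁t} − e^{λ₂t})/(λ₁ − λ₂). The following sketch (the example matrix is an assumption, not from the article) checks that formula numerically:

```python
import numpy as np

# Putzer's algorithm, 2x2 case with distinct eigenvalues lam1 != lam2:
#   exp(A t) = r1(t) I + r2(t) (A - lam1 I)
#   r1(t) = exp(lam1 t),  r2(t) = (exp(lam1 t) - exp(lam2 t)) / (lam1 - lam2)
A = np.array([[1.0, 2.0],
              [0.0, 3.0]])            # upper triangular: eigenvalues 1 and 3
lam1, lam2 = 1.0, 3.0

def expAt(t):
    r1 = np.exp(lam1 * t)
    r2 = (np.exp(lam1 * t) - np.exp(lam2 * t)) / (lam1 - lam2)
    return r1 * np.eye(2) + r2 * (A - lam1 * np.eye(2))

# Sanity check: exp(A * 0) must be the identity.
assert np.allclose(expAt(0.0), np.eye(2))
```

For this triangular example the exact exponential is [[e^t, e^{3t} − e^t], [0, e^{3t}]], which the formula reproduces.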
Stability and steady state of the matrix system
The matrix equation

:\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{b} ,

with ''n''×1 parameter constant vector b, is stable if and only if all eigenvalues of the constant matrix A have a negative real part.
The steady state x* to which it converges if stable is found by setting

:\dot{\mathbf{x}}(t) = \mathbf{0} ,

thus yielding

:\mathbf{x}^* = -\mathbf{A}^{-1}\mathbf{b} ,

assuming A is invertible.
Thus, the original equation can be written in the homogeneous form in terms of deviations from the steady state,

:\dot{\mathbf{x}}(t) = \mathbf{A}\left[\mathbf{x}(t) - \mathbf{x}^*\right] .

An equivalent way of expressing this is that x* is a particular solution to the inhomogeneous equation, while all solutions are in the form

:\mathbf{x} = \mathbf{x}_h + \mathbf{x}^* ,

with x_h a solution to the homogeneous equation (b = 0).
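A minimal numerical sketch of the stability test and the steady state (the matrix and vector below are illustrative assumptions, not from the article):

```python
import numpy as np

# Illustrative stable system xdot = A x + b.
A = np.array([[-2.0, 1.0],
              [0.0, -1.0]])           # triangular: eigenvalues -2 and -1
b = np.array([4.0, 2.0])

# Stability: all eigenvalues of A must have negative real part.
assert np.all(np.linalg.eigvals(A).real < 0)

# Steady state from 0 = A x* + b, i.e. x* = -A^{-1} b.
x_star = -np.linalg.solve(A, b)

# At the steady state the right-hand side vanishes.
assert np.allclose(A @ x_star + b, 0.0)

# Crude forward-Euler integration from an arbitrary start converges to x*.
x = np.array([10.0, -10.0])
dt = 0.01
for _ in range(5000):
    x = x + dt * (A @ x + b)
```

Note that x* is also a fixed point of the Euler update, so the deviation x − x* shrinks geometrically for small enough dt.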
Stability of the two-state-variable case
In the ''n'' = 2 case (with two state variables), the stability conditions that the two eigenvalues of the transition matrix ''A'' each have a negative real part are equivalent to the conditions that the
trace of ''A'' be negative and its
determinant
be positive.
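This trace–determinant test follows from the characteristic polynomial λ² − tr(A)λ + det(A) = 0, since tr(A) = λ₁ + λ₂ and det(A) = λ₁λ₂. A quick randomized spot-check of the equivalence (illustrative only):

```python
import numpy as np

# For n = 2: Re(lam1) < 0 and Re(lam2) < 0  <=>  tr(A) < 0 and det(A) > 0.
# Spot-check on random 2x2 matrices; seed fixed for reproducibility.
rng = np.random.default_rng(0)
for _ in range(1000):
    A = rng.uniform(-5, 5, size=(2, 2))
    eig_stable = np.all(np.linalg.eigvals(A).real < 0)
    trace_det_stable = (np.trace(A) < 0) and (np.linalg.det(A) > 0)
    assert eig_stable == trace_det_stable
```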
Solution in matrix form
The formal solution of

:\dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t)

has the matrix exponential form

:\mathbf{x}(t) = e^{\mathbf{A}t}\mathbf{x}(0) ,

evaluated using any of a multitude of techniques.
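As a sketch, the matrix exponential form x(''t'') = e^{At} x(0) can be evaluated numerically with SciPy's `expm` (the matrix and initial condition are illustrative assumptions):

```python
import numpy as np
from scipy.linalg import expm  # matrix exponential via Pade approximation

A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])          # illustrative matrix, eigenvalues -1 and -2
x0 = np.array([1.0, 1.0])

def x(t):
    """Evaluate the formal solution x(t) = exp(A t) x(0)."""
    return expm(A * t) @ x0

# Central-difference check that the solution satisfies x'(t) = A x(t).
t, h = 0.5, 1e-6
deriv = (x(t + h) - x(t - h)) / (2 * h)
assert np.allclose(deriv, A @ x(t), atol=1e-5)
```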