A differential equation is a mathematical equation for an unknown function of one or several variables that relates the values of the function itself and its derivatives of various orders. A matrix differential equation contains more than one function stacked into vector form with a matrix relating the functions to their derivatives.

For example, a first-order matrix ordinary differential equation is

: \dot{\mathbf{x}}(t) = \mathbf{A}(t)\mathbf{x}(t)

where \mathbf{x}(t) is an n \times 1 vector of functions of an underlying variable t, \dot{\mathbf{x}}(t) is the vector of first derivatives of these functions, and \mathbf{A}(t) is an n \times n matrix of coefficients.

In the case where \mathbf{A} is constant and has ''n'' linearly independent eigenvectors, this differential equation has the following general solution,

: \mathbf{x}(t) = c_1 e^{\lambda_1 t} \mathbf{u}_1 + c_2 e^{\lambda_2 t} \mathbf{u}_2 + \cdots + c_n e^{\lambda_n t} \mathbf{u}_n ~,

where \lambda_1, \lambda_2, \dots, \lambda_n are the eigenvalues of A; \mathbf{u}_1, \mathbf{u}_2, \dots, \mathbf{u}_n are the respective eigenvectors of A; and c_1, c_2, \dots, c_n are constants.

More generally, if \mathbf{A}(t) commutes with its integral \int_a^t \mathbf{A}(s)\,ds, then the Magnus expansion reduces to leading order, and the general solution to the differential equation is

: \mathbf{x}(t) = e^{\int_a^t \mathbf{A}(s)\,ds}\,\mathbf{c} ~,

where \mathbf{c} is an n \times 1 constant vector.

By use of the Cayley–Hamilton theorem and Vandermonde-type matrices, this formal matrix exponential solution may be reduced to a simple form. Below, this solution is displayed in terms of Putzer's algorithm.
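
For a constant coefficient matrix with a full set of eigenvectors, the eigenvalue/eigenvector solution above can be assembled numerically. The following minimal sketch (using NumPy, which is not part of the original article) determines the constants c_i from an initial condition x(0) and evaluates the general solution; the matrix and initial condition are the ones reused in the worked example later in this article.

```python
import numpy as np

# Sketch of the eigenvector solution x(t) = sum_i c_i e^{lambda_i t} u_i
# for a constant matrix A with n linearly independent eigenvectors.
A = np.array([[3.0, -4.0], [4.0, -7.0]])   # matrix reused in the worked example below
x0 = np.array([1.0, 1.0])                  # initial condition x(0)

lam, U = np.linalg.eig(A)    # eigenvalues lam_i and eigenvectors (columns of U)
c = np.linalg.solve(U, x0)   # constants c_i from x(0) = U c

def x(t):
    """Evaluate the general solution at time t."""
    return U @ (c * np.exp(lam * t))

print(x(0.0))   # recovers the initial condition [1., 1.]
```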


Stability and steady state of the matrix system

The matrix equation

: \dot{\mathbf{x}}(t) = \mathbf{A}\mathbf{x}(t) + \mathbf{b}

with ''n''×1 parameter constant vector \mathbf{b} is stable if and only if all eigenvalues of the constant matrix \mathbf{A} have a negative real part.

The steady state \mathbf{x}^* to which it converges if stable is found by setting

: \dot{\mathbf{x}}^*(t) = \mathbf{0} ~,

thus yielding

: \mathbf{x}^* = -\mathbf{A}^{-1}\mathbf{b} ~,

assuming \mathbf{A} is invertible. Thus, the original equation can be written in homogeneous form in terms of deviations from the steady state,

: \dot{\mathbf{x}}(t) = \mathbf{A}[\mathbf{x}(t) - \mathbf{x}^*] ~.

An equivalent way of expressing this is that \mathbf{x}^* is a particular solution to the inhomogeneous equation, while all solutions are of the form

: \mathbf{x}_h + \mathbf{x}^* ~,

with \mathbf{x}_h a solution to the homogeneous equation (\mathbf{b} = \mathbf{0}).
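
As a quick illustration of the stability test and the steady state formula, here is a minimal NumPy sketch; the matrix A and vector b below are hypothetical values chosen only for this example.

```python
import numpy as np

# Hypothetical system dx/dt = A x + b, chosen so that A is a stable matrix.
A = np.array([[-3.0, 1.0], [2.0, -4.0]])
b = np.array([1.0, 2.0])

# Stable iff every eigenvalue of A has a negative real part.
stable = bool(np.all(np.linalg.eigvals(A).real < 0))

# Steady state x* = -A^{-1} b (A assumed invertible); solving is preferable to inverting.
x_star = -np.linalg.solve(A, b)

print(stable)   # True
print(x_star)   # the state the system converges to
```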


Stability of the two-state-variable case

In the ''n'' = 2 case (with two state variables), the stability conditions that the two eigenvalues of the transition matrix ''A'' each have a negative real part are equivalent to the conditions that the trace of ''A'' be negative and its determinant be positive.
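
For the 2×2 case the trace/determinant test is a one-liner; the sketch below (NumPy assumed) applies it to the transition matrix of the worked example later in the article, which fails the test because one of its eigenvalues is +1.

```python
import numpy as np

A = np.array([[3.0, -4.0], [4.0, -7.0]])   # transition matrix of the worked example below
# n = 2 stability test: trace negative AND determinant positive.
print(np.trace(A) < 0, np.linalg.det(A) > 0)   # True False -> not stable (eigenvalues 1 and -5)
```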


Solution in matrix form

The formal solution of \dot{\mathbf{x}}(t) = \mathbf{A}[\mathbf{x}(t) - \mathbf{x}^*] has the matrix exponential form

: \mathbf{x}(t) = \mathbf{x}^* + e^{\mathbf{A}t}[\mathbf{x}(0) - \mathbf{x}^*] ~,

evaluated using any of a multitude of techniques.
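
One such technique is to evaluate e^{At} numerically. The sketch below uses SciPy's expm (an assumption; any matrix exponential routine would do) with the same hypothetical stable system as above.

```python
import numpy as np
from scipy.linalg import expm

# x(t) = x* + e^{A t} (x(0) - x*) for dx/dt = A (x - x*); hypothetical stable example.
A = np.array([[-3.0, 1.0], [2.0, -4.0]])
b = np.array([1.0, 2.0])
x_star = -np.linalg.solve(A, b)
x0 = np.zeros(2)

def x(t):
    return x_star + expm(A * t) @ (x0 - x_star)

print(np.allclose(x(0.0), x0))        # True: the formula reproduces the initial state
print(np.allclose(x(50.0), x_star))   # True: a stable system relaxes to the steady state
```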


Putzer Algorithm for computing e^{\mathbf{A}t}

Given a matrix A with eigenvalues \lambda_1, \lambda_2, \dots, \lambda_n,

: e^{\mathbf{A}t} = \sum_{j=1}^{n} r_j(t)\,\mathbf{P}_{j-1}

where

: \mathbf{P}_0 = \mathbf{I}
: \mathbf{P}_j = \prod_{k=1}^{j}\left(\mathbf{A} - \lambda_k \mathbf{I}\right) = \mathbf{P}_{j-1}\left(\mathbf{A} - \lambda_j \mathbf{I}\right), \qquad j = 1, 2, \dots, n-1
: \dot{r}_1 = \lambda_1 r_1, \qquad r_1(0) = 1
: \dot{r}_j = \lambda_j r_j + r_{j-1}, \qquad r_j(0) = 0, \qquad j = 2, 3, \dots, n

The equations for r_i(t) are simple first-order inhomogeneous ODEs. Note that the algorithm does not require that the matrix A be diagonalizable and bypasses the complexities of the Jordan canonical form normally utilized.
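
A direct transcription of the algorithm into code is straightforward: build the matrices P_j by the recursion above and integrate the triangular system of scalar ODEs for the r_j. The sketch below (NumPy/SciPy, an assumption of this example) keeps the eigenvalues real only to simplify the integration, and checks the result against scipy.linalg.expm for the matrix used in the worked example.

```python
import numpy as np
from scipy.integrate import solve_ivp
from scipy.linalg import expm

def putzer_expm(A, eigvals, t):
    """Evaluate e^{A t} by Putzer's algorithm (no diagonalization, no Jordan form).

    eigvals may list the eigenvalues of A in any order, repeats included.
    Real eigenvalues and t > 0 are assumed here only to keep the integration simple.
    """
    n = A.shape[0]
    I = np.eye(n)

    # P_0 = I,  P_j = P_{j-1} (A - lambda_j I)  for j = 1, ..., n-1
    P = [I]
    for lam in eigvals[:-1]:
        P.append(P[-1] @ (A - lam * I))

    # r_1' = lambda_1 r_1, r_1(0) = 1;  r_j' = lambda_j r_j + r_{j-1}, r_j(0) = 0
    def rhs(_, r):
        dr = np.empty_like(r)
        dr[0] = eigvals[0] * r[0]
        for j in range(1, n):
            dr[j] = eigvals[j] * r[j] + r[j - 1]
        return dr

    r0 = np.zeros(n)
    r0[0] = 1.0
    r = solve_ivp(rhs, (0.0, t), r0, rtol=1e-10, atol=1e-12).y[:, -1]

    # e^{A t} = sum_{j=1}^{n} r_j(t) P_{j-1}
    return sum(r[j] * P[j] for j in range(n))

A = np.array([[3.0, -4.0], [4.0, -7.0]])   # matrix from the worked example, eigenvalues 1 and -5
print(np.allclose(putzer_expm(A, [1.0, -5.0], 0.7), expm(A * 0.7)))   # True
```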


Deconstructed example of a matrix ordinary differential equation

A first-order homogeneous matrix ordinary differential equation in two functions ''x''(''t'') and ''y''(''t''), when taken out of matrix form, has the following form:

: \frac{dx}{dt} = a_1 x + b_1 y, \quad \frac{dy}{dt} = a_2 x + b_2 y

where a_1, a_2, b_1, and b_2 may be any arbitrary scalars. Higher-order matrix ODEs may possess a much more complicated form.
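
In code, "taking the equations out of matrix form" is just reading off the coefficient matrix. The sketch below (NumPy/SciPy assumed; the coefficient values are hypothetical) integrates the pair of scalar equations by stacking them back into the vector form dz/dt = Mz.

```python
import numpy as np
from scipy.integrate import solve_ivp

# Hypothetical scalars a1, b1, a2, b2: dx/dt = a1 x + b1 y, dy/dt = a2 x + b2 y,
# i.e. dz/dt = M z with z = (x, y).
a1, b1, a2, b2 = 0.5, -1.0, 1.0, -0.5
M = np.array([[a1, b1],
              [a2, b2]])

sol = solve_ivp(lambda t, z: M @ z, (0.0, 5.0), [1.0, 0.0], rtol=1e-8)
print(sol.y[:, -1])   # (x(5), y(5)) for the initial condition x(0) = 1, y(0) = 0
```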


Solving deconstructed matrix ordinary differential equations

The process of solving the above equations and finding the required functions of this particular order and form consists of 3 main steps. Brief descriptions of each of these steps are listed below:
*Finding the eigenvalues
*Finding the eigenvectors
*Finding the needed functions

The final, third step in solving these sorts of ordinary differential equations is usually done by plugging the values calculated in the two previous steps into a specialized general form equation, mentioned later in this article.


Solved example of a matrix ODE

To solve a matrix ODE according to the three steps detailed above, using simple matrices in the process, let us find, say, a function x and a function y, both in terms of the single independent variable t, in the following homogeneous linear differential equation of the first order,

: \frac{dx}{dt} = 3x - 4y, \quad \frac{dy}{dt} = 4x - 7y ~.

To solve this particular ordinary differential equation system, at some point in the solution process we shall need a set of two initial values (corresponding to the two state variables at the starting point). In this case, let us pick x(0) = y(0) = 1.


First step

The first step, already mentioned above, is finding the eigenvalues of A in

: \begin{bmatrix} x' \\ y' \end{bmatrix} = \begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix}\begin{bmatrix} x \\ y \end{bmatrix} ~.

(The derivative notation ''x''′ etc. seen in one of the vectors above is known as Lagrange's notation, first introduced by Joseph Louis Lagrange. It is equivalent to the derivative notation dx/dt used in the previous equation, known as Leibniz's notation, honoring the name of Gottfried Leibniz.)

Once the coefficients of the two variables have been written in the matrix form A displayed above, one may evaluate the eigenvalues. To that end, one finds the determinant of the matrix that is formed when an identity matrix, I_n, multiplied by some constant \lambda, is subtracted from the above coefficient matrix to yield the characteristic polynomial of it,

: \det\left(\begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix} - \lambda\begin{bmatrix} 1 & 0 \\ 0 & 1 \end{bmatrix}\right) ~,

and solve for its zeroes. Applying further simplification and basic rules of matrix addition yields

: \det\begin{bmatrix} 3-\lambda & -4 \\ 4 & -7-\lambda \end{bmatrix} ~.

Applying the rules of finding the determinant of a single 2×2 matrix yields the following elementary quadratic equation,

: \det\begin{bmatrix} 3-\lambda & -4 \\ 4 & -7-\lambda \end{bmatrix} = 0
: -21 - 3\lambda + 7\lambda + \lambda^2 + 16 = 0

which may be reduced further to get a simpler version of the above,

: \lambda^2 + 4\lambda - 5 = 0 ~.

Now finding the two roots, \lambda_1 and \lambda_2, of the given quadratic equation by applying the factorization method yields

: \lambda^2 + 5\lambda - \lambda - 5 = 0
: \lambda(\lambda + 5) - 1(\lambda + 5) = 0
: (\lambda - 1)(\lambda + 5) = 0
: \lambda = 1, -5 ~.

The values \lambda_1 = 1 and \lambda_2 = -5 calculated above are the required eigenvalues of A. In some cases, say other matrix ODEs, the eigenvalues may be complex, in which case the following step of the solving process, as well as the final form and the solution, may dramatically change.
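
The eigenvalue computation of this first step can be checked numerically; the snippet below (NumPy assumed) finds the roots of the characteristic polynomial \lambda^2 + 4\lambda - 5 and compares them with a direct eigenvalue call.

```python
import numpy as np

A = np.array([[3.0, -4.0], [4.0, -7.0]])
# Roots of the characteristic polynomial lambda^2 + 4 lambda - 5 ...
print(np.roots([1.0, 4.0, -5.0]))   # eigenvalues 1 and -5 (order may differ)
# ... coincide with the eigenvalues of A.
print(np.linalg.eigvals(A))
```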


Second step

As mentioned above, this step involves finding the eigenvectors of A from the information originally provided. For each of the eigenvalues calculated, we have an individual eigenvector. For the first eigenvalue, which is \lambda_1 = 1, we have

: \begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix}\begin{bmatrix} \alpha \\ \beta \end{bmatrix} = 1\begin{bmatrix} \alpha \\ \beta \end{bmatrix} ~.

Simplifying the above expression by applying basic matrix multiplication rules yields

: 3\alpha - 4\beta = \alpha
: \alpha = 2\beta ~.

All of these calculations have been done only to obtain the last expression, which in our case is \alpha = 2\beta. Now taking some arbitrary value, presumably a small insignificant value, which is much easier to work with, for either \alpha or \beta (in most cases, it does not really matter), we substitute it into \alpha = 2\beta. Doing so produces a simple vector, which is the required eigenvector for this particular eigenvalue. In our case, we pick \beta = 1, which in turn determines that \alpha = 2 and, using the standard vector notation, our vector looks like

: \hat{\mathbf{v}}_1 = \begin{bmatrix} 2 \\ 1 \end{bmatrix} ~.

Performing the same operation using the second eigenvalue we calculated, which is \lambda_2 = -5, we obtain our second eigenvector. The process of working out this vector is not shown, but the final result is

: \hat{\mathbf{v}}_2 = \begin{bmatrix} 1 \\ 2 \end{bmatrix} ~.
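
The same eigenvectors can be obtained numerically; in the sketch below (NumPy assumed), the columns returned by eig are unit-length but proportional to (2, 1) and (1, 2) found above.

```python
import numpy as np

A = np.array([[3.0, -4.0], [4.0, -7.0]])
w, V = np.linalg.eig(A)
print(w)   # eigenvalues 1 and -5 (order may differ)
print(V)   # columns are normalized eigenvectors, proportional to (2, 1) and (1, 2)
```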


Third step

This final step finds the required functions that are 'hidden' behind the derivatives given to us originally. There are two functions, because our differential equations deal with two variables. The equation which involves all the pieces of information that we have previously found has the following form:

: \begin{bmatrix} x \\ y \end{bmatrix} = A e^{\lambda_1 t}\hat{\mathbf{v}}_1 + B e^{\lambda_2 t}\hat{\mathbf{v}}_2 ~.

Substituting the values of eigenvalues and eigenvectors yields

: \begin{bmatrix} x \\ y \end{bmatrix} = A e^{t}\begin{bmatrix} 2 \\ 1 \end{bmatrix} + B e^{-5t}\begin{bmatrix} 1 \\ 2 \end{bmatrix} ~.

Applying further simplification,

: \begin{bmatrix} x \\ y \end{bmatrix} = \begin{bmatrix} 2 & 1 \\ 1 & 2 \end{bmatrix}\begin{bmatrix} A e^{t} \\ B e^{-5t} \end{bmatrix} ~.

Simplifying further and writing the equations for functions x and y separately,

: x = 2A e^{t} + B e^{-5t}
: y = A e^{t} + 2B e^{-5t} ~.

The above equations are, in fact, the general functions sought, but they are in their general form (with unspecified values of A and B), whilst we want to actually find their exact forms and solutions. So now we consider the problem's given initial conditions (the problem including given initial conditions is the so-called initial value problem). Suppose we are given x(0) = y(0) = 1, which plays the role of starting point for our ordinary differential equation; application of these conditions specifies the constants A and B. As we see from the x(0) = y(0) = 1 conditions, when t = 0, the left sides of the above equations equal 1. Thus we may construct the following system of linear equations,

: 1 = 2A + B
: 1 = A + 2B ~.

Solving these equations, we find that both constants A and B equal 1/3. Therefore, substituting these values into the general form of these two functions specifies their exact forms,

: x = \tfrac{2}{3}e^{t} + \tfrac{1}{3}e^{-5t}
: y = \tfrac{1}{3}e^{t} + \tfrac{2}{3}e^{-5t} ~,

the two functions sought.
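
As a sanity check, the closed-form solution can be compared with a direct numerical integration of the system; the sketch below (NumPy/SciPy assumed) does so at t = 1.

```python
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[3.0, -4.0], [4.0, -7.0]])
sol = solve_ivp(lambda t, z: A @ z, (0.0, 1.0), [1.0, 1.0], rtol=1e-10, atol=1e-12)

t = sol.t[-1]   # t = 1
x_exact = (2/3) * np.exp(t) + (1/3) * np.exp(-5*t)
y_exact = (1/3) * np.exp(t) + (2/3) * np.exp(-5*t)
print(np.allclose(sol.y[:, -1], [x_exact, y_exact]))   # True
```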


Using matrix exponentiation

The above problem could have been solved with a direct application of the matrix exponential. That is, we can say that

: \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} = \exp\left(\begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix} t\right)\begin{bmatrix} x(0) \\ y(0) \end{bmatrix} ~.

Given that (which can be computed using any suitable tool, such as MATLAB's expm tool, or by performing matrix diagonalisation and leveraging the property that the matrix exponential of a diagonal matrix is the same as element-wise exponentiation of its elements)

: \exp\left(\begin{bmatrix} 3 & -4 \\ 4 & -7 \end{bmatrix} t\right) = \begin{bmatrix} 4e^{t}/3 - e^{-5t}/3 & 2e^{-5t}/3 - 2e^{t}/3 \\ 2e^{t}/3 - 2e^{-5t}/3 & 4e^{-5t}/3 - e^{t}/3 \end{bmatrix} ~,

the final result is

: \begin{bmatrix} x(t) \\ y(t) \end{bmatrix} = \begin{bmatrix} 4e^{t}/3 - e^{-5t}/3 & 2e^{-5t}/3 - 2e^{t}/3 \\ 2e^{t}/3 - 2e^{-5t}/3 & 4e^{-5t}/3 - e^{t}/3 \end{bmatrix}\begin{bmatrix} 1 \\ 1 \end{bmatrix} = \begin{bmatrix} 2e^{t}/3 + e^{-5t}/3 \\ e^{t}/3 + 2e^{-5t}/3 \end{bmatrix} ~.

This is the same as the eigenvector approach shown before.
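
The same computation in SciPy (an assumption, playing the role of MATLAB's expm mentioned above) reproduces both the closed-form matrix exponential and the final solution:

```python
import numpy as np
from scipy.linalg import expm

A = np.array([[3.0, -4.0], [4.0, -7.0]])
t = 0.5

E = expm(A * t)   # numerical e^{A t}
closed_form = np.array([
    [4*np.exp(t)/3 - np.exp(-5*t)/3,   2*np.exp(-5*t)/3 - 2*np.exp(t)/3],
    [2*np.exp(t)/3 - 2*np.exp(-5*t)/3, 4*np.exp(-5*t)/3 - np.exp(t)/3],
])
print(np.allclose(E, closed_form))   # True
print(E @ np.array([1.0, 1.0]))      # [x(t), y(t)] for x(0) = y(0) = 1
```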


See also

* Nonhomogeneous equations
* Matrix difference equation
* Newton's law of cooling
* Fibonacci sequence
* Difference equation
* Wave equation
* Autonomous system (mathematics)

