Power series solution of differential equations

In mathematics, the power series method is used to seek a power series solution to certain differential equations. In general, such a solution assumes a power series with unknown coefficients, then substitutes that solution into the differential equation to find a recurrence relation for the coefficients.


Method

Consider the second-order linear differential equation

a_2(z) f''(z) + a_1(z) f'(z) + a_0(z) f(z) = 0.

Suppose a_2 is nonzero for all z. Then we can divide throughout to obtain

f'' + \frac{a_1}{a_2} f' + \frac{a_0}{a_2} f = 0.

Suppose further that a_1/a_2 and a_0/a_2 are analytic functions. The power series method calls for the construction of a power series solution

f = \sum_{k=0}^\infty A_k z^k.

If a_2 is zero for some z, then the Frobenius method, a variation on this method, is suited to deal with so-called "singular points". The method works analogously for higher-order equations as well as for systems.
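The construction can also be carried out mechanically. As a minimal illustrative sketch (the function below and its truncated coefficient-list representation are assumptions made for this example, not a standard routine), suppose p = a_1/a_2 and q = a_0/a_2 are given by lists of their power series coefficients. Matching the coefficient of z^k in f'' + p f' + q f = 0 gives

A_{k+2} = -\frac{1}{(k+1)(k+2)} \left( \sum_{j} p_j (k-j+1) A_{k-j+1} + \sum_{j} q_j A_{k-j} \right),

which the following Python code iterates:

# Illustrative sketch: power series coefficients A_k for
# f'' + p(z) f' + q(z) f = 0, with p and q given as truncated
# lists of power series coefficients, and A_0 = f(0), A_1 = f'(0).
def series_solution(p, q, A0, A1, n_terms):
    A = [A0, A1]
    for k in range(n_terms - 2):
        # Coefficient of z^k in p*f' and in q*f (Cauchy products).
        pf1 = sum(p[j] * (k - j + 1) * A[k - j + 1]
                  for j in range(min(k, len(p) - 1) + 1))
        qf = sum(q[j] * A[k - j]
                 for j in range(min(k, len(q) - 1) + 1))
        # Coefficient of z^k in f'' is (k+1)(k+2) A_{k+2}; solve for A_{k+2}.
        A.append(-(pf1 + qf) / ((k + 1) * (k + 2)))
    return A

# Example: f'' + f = 0 (p = 0, q = 1) with f(0) = 1, f'(0) = 0
# recovers the cosine series 1, 0, -1/2, 0, 1/24, 0, ...
print(series_solution([0.0], [1.0], 1.0, 0.0, 6))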


Example usage

Let us look at the Hermite differential equation,

f'' - 2zf' + \lambda f = 0; \quad \lambda = 1.

We can try to construct a series solution

\begin{align}
f &= \sum_{k=0}^\infty A_k z^k \\
f' &= \sum_{k=1}^\infty k A_k z^{k-1} \\
f'' &= \sum_{k=2}^\infty k(k-1) A_k z^{k-2}.
\end{align}

Substituting these in the differential equation gives

\begin{align}
& \sum_{k=2}^\infty k(k-1) A_k z^{k-2} - 2z \sum_{k=1}^\infty k A_k z^{k-1} + \sum_{k=0}^\infty A_k z^k = 0 \\
{}={} & \sum_{k=2}^\infty k(k-1) A_k z^{k-2} - \sum_{k=1}^\infty 2k A_k z^k + \sum_{k=0}^\infty A_k z^k.
\end{align}

Making a shift on the first sum,

\begin{align}
&= \sum_{k=0}^\infty (k+2)(k+1) A_{k+2} z^k - \sum_{k=1}^\infty 2k A_k z^k + \sum_{k=0}^\infty A_k z^k \\
&= 2A_2 + \sum_{k=1}^\infty (k+2)(k+1) A_{k+2} z^k - \sum_{k=1}^\infty 2k A_k z^k + A_0 + \sum_{k=1}^\infty A_k z^k \\
&= 2A_2 + A_0 + \sum_{k=1}^\infty \left( (k+2)(k+1) A_{k+2} + (-2k+1) A_k \right) z^k.
\end{align}

If this series is a solution, then all these coefficients must be zero, so for both k = 0 and k > 0:

(k+2)(k+1) A_{k+2} + (-2k+1) A_k = 0.

We can rearrange this to get a recurrence relation for A_{k+2}:

\begin{align}
(k+2)(k+1) A_{k+2} &= -(-2k+1) A_k \\
A_{k+2} &= \frac{2k-1}{(k+2)(k+1)} A_k.
\end{align}

Now we have

A_2 = \frac{-1}{2} A_0, \qquad A_3 = \frac{1}{6} A_1.

We can determine A_0 and A_1 if there are initial conditions, i.e. if we have an initial value problem. So we have

\begin{align}
A_4 &= \frac{3}{4 \cdot 3} A_2 = \left(\frac{1}{4}\right)\left(\frac{-1}{2}\right) A_0 = \frac{-1}{8} A_0 \\
A_5 &= \frac{5}{5 \cdot 4} A_3 = \left(\frac{1}{4}\right)\left(\frac{1}{6}\right) A_1 = \frac{1}{24} A_1 \\
A_6 &= \frac{7}{6 \cdot 5} A_4 = \left(\frac{7}{30}\right)\left(\frac{-1}{8}\right) A_0 = \frac{-7}{240} A_0 \\
A_7 &= \frac{9}{7 \cdot 6} A_5 = \left(\frac{3}{14}\right)\left(\frac{1}{24}\right) A_1 = \frac{1}{112} A_1
\end{align}

and the series solution is

\begin{align}
f &= A_0 z^0 + A_1 z^1 + A_2 z^2 + A_3 z^3 + A_4 z^4 + A_5 z^5 + A_6 z^6 + A_7 z^7 + \cdots \\
&= A_0 z^0 + A_1 z^1 - \frac{1}{2} A_0 z^2 + \frac{1}{6} A_1 z^3 - \frac{1}{8} A_0 z^4 + \frac{1}{24} A_1 z^5 - \frac{7}{240} A_0 z^6 + \frac{1}{112} A_1 z^7 + \cdots \\
&= A_0 z^0 - \frac{1}{2} A_0 z^2 - \frac{1}{8} A_0 z^4 - \frac{7}{240} A_0 z^6 + A_1 z + \frac{1}{6} A_1 z^3 + \frac{1}{24} A_1 z^5 + \frac{1}{112} A_1 z^7 + \cdots
\end{align}

which we can break up into the sum of two linearly independent series solutions:

f = A_0 \left(1 - \frac{z^2}{2} - \frac{z^4}{8} - \frac{7z^6}{240} - \cdots\right) + A_1 \left(z + \frac{z^3}{6} + \frac{z^5}{24} + \frac{z^7}{112} + \cdots\right),

which can be further simplified by the use of hypergeometric series.
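The recurrence A_{k+2} = \frac{2k-1}{(k+2)(k+1)} A_k derived above can also be iterated directly. The following short Python sketch (an illustrative implementation using exact rational arithmetic) reproduces the coefficients listed above for the two independent choices of A_0 and A_1:

from fractions import Fraction

# Illustrative sketch: coefficients A_k of the Hermite equation with
# lambda = 1, generated from A_{k+2} = (2k - 1) / ((k+2)(k+1)) * A_k.
def hermite_coefficients(A0, A1, n_terms):
    A = [Fraction(A0), Fraction(A1)]
    for k in range(n_terms - 2):
        A.append(Fraction(2 * k - 1, (k + 2) * (k + 1)) * A[k])
    return A

# The even part (A_0 = 1, A_1 = 0) and odd part (A_0 = 0, A_1 = 1)
# give the two linearly independent series solutions:
print(hermite_coefficients(1, 0, 8))  # 1, 0, -1/2, 0, -1/8, 0, -7/240, 0
print(hermite_coefficients(0, 1, 8))  # 0, 1, 0, 1/6, 0, 1/24, 0, 1/112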


A simpler way using Taylor series

A much simpler way of solving this equation (and power series solutions in general) is to use the Taylor series form of the expansion. Here we assume the answer is of the form

f = \sum_{k=0}^\infty \frac{A_k z^k}{k!}.

If we do this, the general rule for obtaining the recurrence relationship for the coefficients is

y^{(n)} \to A_{k+n} \quad \text{and} \quad x^m y^{(n)} \to k(k-1)\cdots(k-m+1) A_{k+n-m}.

In this case we can solve the Hermite equation in fewer steps:

f'' - 2zf' + \lambda f = 0; \quad \lambda = 1

becomes

A_{k+2} - 2k A_k + \lambda A_k = 0,

or

A_{k+2} = (2k - \lambda) A_k,

in the series f = \sum_{k=0}^\infty \frac{A_k z^k}{k!}.
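As a brief illustrative sketch of this shortcut (the helper function below is hypothetical, not a library routine), the simpler recurrence A_{k+2} = (2k - \lambda) A_k can be iterated directly; dividing the resulting Taylor-form coefficients by k! recovers the ordinary power series coefficients found in the previous section:

from fractions import Fraction
from math import factorial

# Illustrative sketch: Taylor-form recurrence A_{k+2} = (2k - lambda) A_k
# for f = sum_k A_k z^k / k!, with lambda = 1 by default.
def hermite_taylor_coefficients(A0, A1, n_terms, lam=1):
    A = [Fraction(A0), Fraction(A1)]
    for k in range(n_terms - 2):
        A.append((2 * k - lam) * A[k])
    return A

# Dividing by k! recovers the ordinary power series coefficients:
# A_0 = 1, A_1 = 0 gives 1, 0, -1/2, 0, -1/8, 0, -7/240, 0.
coeffs = hermite_taylor_coefficients(1, 0, 8)
print([a / factorial(k) for k, a in enumerate(coeffs)])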


Nonlinear equations

The power series method can be applied to certain nonlinear differential equations, though with less flexibility. A very large class of nonlinear equations can be solved analytically by using the Parker–Sochacki method. Since the Parker–Sochacki method involves an expansion of the original system of ordinary differential equations through auxiliary equations, it is not simply referred to as the power series method. The Parker–Sochacki method is applied before the power series method to make the power series method possible on many nonlinear problems. An ODE problem can be expanded with auxiliary variables which make the power series method trivial for an equivalent, larger system. Expanding the ODE problem with auxiliary variables produces the same coefficients (since the power series for a function is unique) at the cost of also calculating the coefficients of the auxiliary equations. Often, without using auxiliary variables, there is no known way to obtain the power series for the solution of a system, so the power series method alone is difficult to apply to most nonlinear equations.

The power series method gives solutions only to initial value problems (as opposed to boundary value problems); this is not an issue when dealing with linear equations, since the method may turn up multiple linearly independent solutions which may be combined (by superposition) to solve boundary value problems as well. A further restriction is that the series coefficients will be specified by a nonlinear recurrence (the nonlinearities are inherited from the differential equation).

In order for the solution method to work, as in linear equations, it is necessary to express every term in the nonlinear equation as a power series so that all of the terms may be combined into one power series. As an example, consider the initial value problem

F F'' + 2 F'^2 + \eta F' = 0; \quad F(1) = 0, \ F'(1) = -\frac{1}{2},

which describes a solution to capillary-driven flow in a groove. There are two nonlinearities: the first and second terms involve products. The initial values are given at \eta = 1, which hints that the power series must be set up as

F(\eta) = \sum_{i=0}^\infty c_i (\eta - 1)^i,

since in this way

\left.\frac{d^n F}{d\eta^n}\right|_{\eta = 1} = n! \, c_n,

which makes the initial values very easy to evaluate. It is necessary to rewrite the equation slightly in light of the definition of the power series,

F F'' + 2 F'^2 + (\eta - 1) F' + F' = 0; \quad F(1) = 0, \ F'(1) = -\frac{1}{2},

so that the third term contains the same form \eta - 1 that appears in the power series. The last consideration is what to do with the products; substituting the power series in would result in products of power series when it is necessary that each term be its own power series. This is where the Cauchy product
\left(\sum_{i=0}^\infty a_i x^i\right) \left(\sum_{i=0}^\infty b_i x^i\right) = \sum_{i=0}^\infty x^i \sum_{j=0}^i a_{i-j} b_j

is useful; substituting the power series into the differential equation and applying this identity leads to an equation where every term is a power series. After much rearrangement, the recurrence

\sum_{j=0}^i \left( (j+1)(j+2) c_{j+2} c_{i-j} + 2 (i-j+1)(j+1) c_{i-j+1} c_{j+1} \right) + i c_i + (i+1) c_{i+1} = 0

is obtained, specifying exact values of the series coefficients. From the initial values, c_0 = 0 and c_1 = -\frac{1}{2}; thereafter the above recurrence is used. For example, the next few coefficients are

c_2 = -\frac{1}{6}, \quad c_3 = -\frac{1}{108}, \quad c_4 = \frac{7}{3240}, \quad c_5 = -\frac{19}{48600}, \ \dots

A limitation of the power series solution shows itself in this example. A numeric solution of the problem shows that the function is smooth and always decreasing to the left of \eta = 1, and zero to the right. At \eta = 1, a slope discontinuity exists, a feature which the power series is incapable of rendering; for this reason the series solution continues decreasing to the right of \eta = 1 instead of suddenly becoming zero.
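The recurrence above can be evaluated mechanically. In the following illustrative Python sketch (the function names are assumptions made for this example), each step i is linear in the single unknown c_{i+1}, because the c_{i+2} term is multiplied by c_0 = 0; solving that linear relation at each step reproduces the coefficients quoted above with exact rational arithmetic:

from fractions import Fraction

# Illustrative sketch: evaluate the nonlinear recurrence for the
# groove-flow problem, starting from c_0 = 0 and c_1 = -1/2.
def residual(c, i):
    """Left-hand side of the recurrence at index i for coefficients c."""
    s = sum((j + 1) * (j + 2) * c[j + 2] * c[i - j]
            + 2 * (i - j + 1) * (j + 1) * c[i - j + 1] * c[j + 1]
            for j in range(i + 1))
    return s + i * c[i] + (i + 1) * c[i + 1]

def groove_coefficients(n_terms):
    c = [Fraction(0), Fraction(-1, 2)] + [Fraction(0)] * n_terms
    for i in range(1, n_terms):
        # The residual is affine in c[i+1]; evaluate it at 0 and 1,
        # then solve residual = 0 for c[i+1].
        r0 = residual(c, i)
        c[i + 1] = Fraction(1)
        r1 = residual(c, i)
        c[i + 1] = -r0 / (r1 - r0)
    return c[:n_terms + 1]

# Expect 0, -1/2, -1/6, -1/108, 7/3240, -19/48600.
print(groove_coefficients(5))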


