In mathematics, specifically the study of differential equations, the Picard–Lindelöf theorem gives a set of conditions under which an initial value problem has a unique solution. It is also known as Picard's existence theorem, the Cauchy–Lipschitz theorem, or the existence and uniqueness theorem. The theorem is named after Émile Picard, Ernst Lindelöf, Rudolf Lipschitz and Augustin-Louis Cauchy.


Theorem

Let D \subseteq \R \times \R^n be a closed rectangle with (t_0, y_0) \in \operatorname{int} D, the interior of D. Let f: D \to \R^n be a function that is continuous in t and Lipschitz continuous in y (with Lipschitz constant independent of t). Then there exists some \varepsilon > 0 such that the initial value problem y'(t)=f(t,y(t)),\qquad y(t_0)=y_0 has a unique solution y(t) on the interval [t_0-\varepsilon,\, t_0+\varepsilon].
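As a concrete illustration of the hypotheses (our own example, using the right-hand side f(t,y) = 1 + y^2 that reappears in the Picard iteration example below): on any strip |y| \le b one has
:|f(t,y_1)-f(t,y_2)| = |y_1^2-y_2^2| = |y_1+y_2|\,|y_1-y_2| \le 2b\,|y_1-y_2|,
so f is Lipschitz continuous in y with constant L = 2b, independent of t, and the theorem applies locally; it does not apply with a single global constant, since 1+y^2 is not Lipschitz on all of \R.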


Proof sketch

A standard proof relies on transforming the differential equation into an integral equation, then applying the Banach fixed-point theorem to prove the existence and uniqueness of solutions. Integrating both sides of the differential equation y'(t)=f(t,y(t)) shows that any solution to the differential equation must also satisfy the integral equation
:y(t) - y(t_0) = \int_{t_0}^t f(s,y(s)) \, ds.
Given the hypotheses that f is continuous in t and Lipschitz continuous in y, this integral operator is a contraction and so the Banach fixed-point theorem proves that a solution can be obtained by fixed-point iteration of successive approximations. In this context, this fixed-point iteration method is known as Picard iteration. Set
:\varphi_0(t)=y_0
and
:\varphi_{k+1}(t)=y_0+\int_{t_0}^t f(s,\varphi_k(s))\,ds.
It follows from the Banach fixed-point theorem that the sequence of "Picard iterates" \varphi_k is convergent and that its limit is a solution to the original initial value problem:
:\lim_{k\to\infty} \varphi_k(t) = y(t).
Since the Banach fixed-point theorem states that the fixed point is unique, the solution found through this iteration is the unique solution to the differential equation given an initial value.
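In practice the iteration can be carried out numerically. The following is a minimal Python sketch (the function name picard_iterate, the grid size, and the use of the trapezoidal rule for the integral are our own choices, not part of the theorem) that applies the integral operator repeatedly on a uniform grid; for an f satisfying the hypotheses the iterates approach the true solution.

    import numpy as np

    def picard_iterate(f, t0, y0, eps, n_grid=201, n_iter=50):
        """Apply phi_{k+1}(t) = y0 + integral from t0 to t of f(s, phi_k(s)) ds
        on a uniform grid over [t0 - eps, t0 + eps], using the trapezoidal rule."""
        ts = np.linspace(t0 - eps, t0 + eps, n_grid)
        i0 = n_grid // 2                      # index of t0 (grid midpoint)
        y = np.full(n_grid, float(y0))        # phi_0: the constant function y0
        for _ in range(n_iter):
            g = f(ts, y)
            # cumulative trapezoidal integral of g from ts[0] to each grid point
            cum = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(ts))))
            y = y0 + (cum - cum[i0])          # signed integral from t0 to t
        return ts, y

    # Example: y' = y, y(0) = 1 on [-0.5, 0.5]; the limit of the iterates is exp(t).
    ts, y = picard_iterate(lambda t, y: y, 0.0, 1.0, 0.5)
    print(np.max(np.abs(y - np.exp(ts))))     # small: only discretization error remains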


Example of Picard iteration

Let y(t)=\tan(t), the solution to the equation y'(t)=1+y(t)^2 with initial condition y(t_0)=y_0=0, t_0=0. Starting with \varphi_0(t)=0, we iterate
:\varphi_{k+1}(t)=\int_0^t \left(1+(\varphi_k(s))^2\right)\,ds
so that \varphi_n(t) \to y(t):
:\varphi_1(t)=\int_0^t (1+0^2)\,ds = t
:\varphi_2(t)=\int_0^t (1+s^2)\,ds = t + \frac{t^3}{3}
:\varphi_3(t)=\int_0^t \left(1+\left(s + \frac{s^3}{3}\right)^2\right)\,ds = t + \frac{t^3}{3} + \frac{2t^5}{15} + \frac{t^7}{63}
and so on. Evidently, the functions are computing the Taylor series expansion of our known solution y=\tan(t). Since \tan has poles at \pm\tfrac{\pi}{2} and so is not Lipschitz continuous in a neighborhood of those points, the iteration converges only to a local solution, valid for |t|<\tfrac{\pi}{2} but not on all of \R.
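The same iterates can be generated symbolically; the following short sketch (in Python with sympy, our own illustration rather than part of the article) reproduces \varphi_1, \varphi_2, \varphi_3 above and compares them with the Taylor expansion of \tan(t).

    import sympy as sp

    t, s = sp.symbols('t s')
    phi = sp.Integer(0)                                        # phi_0(t) = 0
    for k in range(3):
        phi = sp.integrate(1 + phi.subs(t, s)**2, (s, 0, t))   # phi_{k+1}
        print(sp.expand(phi))
    # prints t, then t + t**3/3, then t + t**3/3 + 2*t**5/15 + t**7/63 (up to term ordering)

    print(sp.series(sp.tan(t), t, 0, 8))
    # t + t**3/3 + 2*t**5/15 + 17*t**7/315 + O(t**8):
    # successive iterates agree with tan's expansion to increasingly high order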


Example of non-uniqueness

To understand uniqueness of solutions, contrast the following two examples of first order ordinary differential equations for y(t) \ge 0. Both differential equations will possess a single stationary point y = 0.

First, for the homogeneous linear equation \tfrac{dy}{dt} = ay (a<0), a stationary solution is y(t) = 0, which is obtained for the initial condition y(0) = 0. Beginning with any other initial condition y(0) = y_0 \neq 0, the solution y(t) = y_0 e^{at} tends toward the stationary point y = 0, but it only approaches it in the limit of infinite time, so the uniqueness of solutions over all finite times is guaranteed.

By contrast, for an equation in which the stationary point can be reached after a ''finite'' time, uniqueness of solutions does not hold. Consider the homogeneous nonlinear equation \tfrac{dy}{dt} = ay^{2/3}, which has at least these two solutions corresponding to the initial condition y(0) = 0: y(t) = 0 and
:y(t)=\begin{cases} \left(\tfrac{at}{3}\right)^{3} & t<0\\ \ \ \ \ 0 & t \ge 0, \end{cases}
so the previous state of the system is not uniquely determined by its state at or after ''t'' = 0. The uniqueness theorem does not apply because the derivative of the function y \mapsto y^{2/3} is not bounded in the neighborhood of y = 0 and therefore it is not Lipschitz continuous, violating the hypothesis of the theorem.
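A quick numerical check of this non-uniqueness (our own sketch, choosing a = 3 for concreteness and reading y^{2/3} as the square of the real cube root):

    import numpy as np

    a = 3.0

    def rhs(y):
        return a * np.cbrt(y)**2               # a * y^(2/3), via the real cube root

    def nonzero_branch(t):
        return np.where(t < 0, (a * t / 3)**3, 0.0)

    ts = np.linspace(-2.0, -0.1, 20)
    h = 1e-6
    dydt = (nonzero_branch(ts + h) - nonzero_branch(ts - h)) / (2 * h)
    print(np.max(np.abs(dydt - rhs(nonzero_branch(ts)))))  # ~0: the nonzero branch solves the ODE
    print(rhs(0.0))                                        # 0.0: so y(t) = 0 is a solution as well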


Detailed proof

Let L be the Lipschitz constant of (t, y) \mapsto f(t,y) with respect to y. The function f is continuous as a function of (t,y). In particular, since t \mapsto f(t,y) is a continuous function of t, we have that for any point (t_0, y_0) and \epsilon>0 there exists \delta>0 such that |f(t,y_0)-f(t_0,y_0)|<\epsilon/2 when |t - t_0| < \delta. We have
:|f(t,y)-f(t_0,y_0)| \leq |f(t,y)-f(t,y_0)|+|f(t,y_0)-f(t_0,y_0)| <\epsilon,
provided |t-t_0|<\delta and |y-y_0|<\epsilon/(2L), which shows that f is continuous at (t_0,y_0).

Let a := \tfrac{1}{2L} and take any b > 0 such that C_{a,b} = I_a(t_0) \times B_b(y_0) is a subset of D, where
:\begin{align} I_a(t_0) &= [t_0-a,\, t_0+a] \\ B_b(y_0) &= [y_0-b,\, y_0+b] \end{align}
Such a set exists because (t_0, y_0) is in the interior of D, by assumption. Let
:M = \sup_{C_{a,b}} \|f(t,y)\|,
which is the supremum of (the absolute values of) the slopes of the function. The function f attains a maximum on C_{a,b} because f is continuous and C_{a,b} is compact. For a later step in the proof, we need that a < b/M, so if a \geq b/M, then change a to a := \tfrac{1}{2}\min\left\{\tfrac{1}{L}, \tfrac{b}{M}\right\} and update I_{a}(t_0), B_{b}(y_0), C_{a,b}, and M accordingly (this update will be needed at most once, since M cannot increase as a result of restricting C_{a,b}).

Consider \mathcal{C}(I_{a}(t_0),B_b(y_0)), the function space of continuous functions I_{a}(t_0)\to B_b(y_0). We will proceed by applying the Banach fixed-point theorem using the metric on \mathcal{C}(I_{a}(t_0),B_b(y_0)) induced by the uniform norm. Namely, for each continuous function \varphi : I_{a}(t_0) \to B_b(y_0), the norm of \varphi is
:\|\varphi\|_\infty = \sup_{t \in I_a(t_0)} \|\varphi(t)\|.
The ''Picard operator'' \Gamma:\mathcal{C}\big(I_{a}(t_0),B_b(y_0)\big) \to \mathcal{C}\big(I_{a}(t_0),B_b(y_0)\big) is defined for each \varphi \in \mathcal{C}(I_{a}(t_0),B_b(y_0)) by
:\Gamma \varphi(t) = y_0 + \int_{t_0}^{t} f(s,\varphi(s)) \, ds \quad \forall t \in I_a(t_0).
To apply the Banach fixed-point theorem, we must show that \Gamma maps a complete non-empty metric space ''X'' into itself and also is a contraction mapping.

We first show that \Gamma takes B_b(y_0) into itself in the space of continuous functions with the uniform norm. Here, B_b(y_0) is a closed ball in the space of continuous (and bounded) functions "centered" at the constant function y_0. Hence we need to show that
:\left\| \Gamma\varphi(t)-y_0 \right\| = \left\| \int_{t_0}^t f(s,\varphi(s))\, ds \right\| \leq \int_{t_0}^{t'} \left\| f(s,\varphi(s))\right\| ds \leq \int_{t_0}^{t'} M\, ds = M \left| t'-t_0 \right| \leq M a \leq b,
where t' is some number in [t_0-a,\, t_0+a] where the maximum is achieved. The last inequality in the chain is true since a < b/M.

Now let us prove that \Gamma is a contraction mapping, as required to apply the Banach fixed-point theorem. In particular, we want to show that there exists 0 \leq q < 1 such that
:\left\| \Gamma \varphi_1 - \Gamma \varphi_2 \right\|_\infty \le q \left\| \varphi_1 - \varphi_2 \right\|_\infty
for all \varphi_1,\varphi_2\in\mathcal{C}(I_{a}(t_0),B_b(y_0)). Let q = aL and take any \varphi_1,\varphi_2\in\mathcal{C}(I_{a}(t_0),B_b(y_0)). Take t such that
:\|\Gamma \varphi_1 - \Gamma \varphi_2\|_\infty = \left\| \left(\Gamma\varphi_1 - \Gamma\varphi_2 \right)(t) \right\|.
Then, using the definition of \Gamma,
:\begin{align} \left\| \left(\Gamma\varphi_1 - \Gamma\varphi_2 \right)(t) \right\| &= \left\| \int_{t_0}^t \left( f(s,\varphi_1(s))-f(s,\varphi_2(s)) \right)ds \right\| \\ &\leq \int_{t_0}^t \left\| f\left(s,\varphi_1(s)\right)-f\left(s,\varphi_2(s)\right) \right\| ds \\ &\leq L \int_{t_0}^t \left\| \varphi_1(s)-\varphi_2(s) \right\| ds && \text{since } f \text{ is Lipschitz continuous in } y \\ &\leq L \int_{t_0}^t \left\| \varphi_1-\varphi_2 \right\|_\infty \,ds \\ &\leq La \left\| \varphi_1-\varphi_2 \right\|_\infty, \end{align}
where |t - t_0| \leq a because t lies in I_a(t_0), the common domain of \varphi_1 and \varphi_2. By definition, q = aL, and a < 1/L, so q < 1. Therefore, \Gamma is a contraction.

We have established that the Picard operator is a contraction on this Banach space with the metric induced by the uniform norm. This allows us to apply the Banach fixed-point theorem to conclude that the operator has a unique fixed point. In particular, there is a unique function \varphi\in \mathcal{C}(I_a(t_0), B_b(y_0)) such that \Gamma \varphi = \varphi. Thus, \varphi is the unique solution of the initial value problem, valid on the interval I_a.
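The contraction estimate can also be observed numerically. The sketch below (our own illustration; the choice f(t,y)=1+y^2, the grid, and the random test functions are all ours) discretizes the Picard operator around (t_0,y_0)=(0,0), picks a, L, and M as in the proof, and compares the observed ratios with the bound q = aL.

    import numpy as np

    b = 1.0
    L = 2 * b                        # Lipschitz constant of y -> 1 + y^2 on |y| <= b
    M = 1 + b**2                     # sup of |f| on the box
    a = 0.5 * min(1.0 / L, b / M)    # interval half-width, as chosen in the proof

    ts = np.linspace(-a, a, 401)
    i0 = len(ts) // 2                # index of t0 = 0

    def gamma(phi):
        """Discretized Picard operator for f(t, y) = 1 + y^2 with y0 = 0."""
        g = 1 + phi**2
        cum = np.concatenate(([0.0], np.cumsum((g[1:] + g[:-1]) / 2 * np.diff(ts))))
        return cum - cum[i0]         # y0 + signed integral from t0 to t, with y0 = 0

    rng = np.random.default_rng(0)
    ratios = []
    for _ in range(200):
        p1 = b * (2 * rng.random(ts.size) - 1)   # arbitrary grid functions with values in [-b, b]
        p2 = b * (2 * rng.random(ts.size) - 1)
        ratios.append(np.max(np.abs(gamma(p1) - gamma(p2))) / np.max(np.abs(p1 - p2)))
    print(max(ratios), "<=", a * L)   # observed contraction factors stay below q = aL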


Optimization of the solution's interval

We wish to remove the dependence of the interval I_a on L. To this end, there is a corollary of the Banach fixed-point theorem: if an operator T^n is a contraction for some n \in \N, then T has a unique fixed point. Before applying this theorem to the Picard operator, recall the following:

''Lemma.'' For all m \ge 1 and all t \in [t_0 - \alpha, t_0 + \alpha],
:\left\| \Gamma^m \varphi_1(t) - \Gamma^m\varphi_2(t) \right\| \leq \frac{L^m|t-t_0|^m}{m!} \left\| \varphi_1-\varphi_2\right\|.

''Proof.'' Induction on m. For the base of the induction (m=1) we have already seen this, so suppose the inequality holds for m-1; then we have:
:\begin{align} \left\| \Gamma^m \varphi_1(t) - \Gamma^m\varphi_2(t) \right\| &= \left\| \Gamma\Gamma^{m-1} \varphi_1(t) - \Gamma\Gamma^{m-1}\varphi_2(t) \right\| \\ &\leq \left| \int_{t_0}^t \left\| f\left(s,\Gamma^{m-1}\varphi_1(s)\right)-f\left(s,\Gamma^{m-1}\varphi_2(s)\right)\right\| ds \right| \\ &\leq L \left| \int_{t_0}^t \left\| \Gamma^{m-1}\varphi_1(s)-\Gamma^{m-1}\varphi_2(s)\right\| ds \right| \\ &\leq L \left| \int_{t_0}^t \frac{L^{m-1}|s-t_0|^{m-1}}{(m-1)!} \left\| \varphi_1-\varphi_2\right\| ds \right| \\ &\leq \frac{L^m|t-t_0|^m}{m!} \left\| \varphi_1 - \varphi_2 \right\|. \end{align}
By taking a supremum over t \in [t_0 - \alpha, t_0 + \alpha] we see that
:\left\| \Gamma^m \varphi_1 - \Gamma^m\varphi_2 \right\| \leq \frac{L^m\alpha^m}{m!}\left\| \varphi_1-\varphi_2\right\|.
This inequality assures that for some large m,
:\frac{L^m\alpha^m}{m!}<1,
and hence \Gamma^m will be a contraction. So by the previous corollary \Gamma will have a unique fixed point. Finally, we have been able to optimize the interval of the solution by taking \alpha = \min\{a, b/M\}.

In the end, this result shows that the interval of definition of the solution does not depend on the Lipschitz constant of the field, but only on the interval of definition of the field and its maximum absolute value.
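The key point, that L^m\alpha^m/m! eventually falls below 1 however large L\alpha is, is easy to see numerically (a trivial sketch with arbitrarily chosen values):

    import math

    L, alpha = 5.0, 2.0              # arbitrary values with L * alpha > 1
    factors = [(L * alpha)**m / math.factorial(m) for m in range(1, 40)]
    first = next(m for m, c in enumerate(factors, start=1) if c < 1)
    print(first)                     # smallest m with (L*alpha)^m / m! < 1, so Gamma^m is a contraction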


Other existence theorems

The Picard–Lindelöf theorem shows that the solution exists and that it is unique. The Peano existence theorem shows only existence, not uniqueness, but it assumes only that f is continuous in y, instead of Lipschitz continuous. For example, the right-hand side of the equation \tfrac{dy}{dt} = y^{1/3} with initial condition y(0) = 0 is continuous but not Lipschitz continuous. Indeed, rather than being unique, this equation has at least three solutions:
:y(t) = 0, \qquad y(t) = \pm\left(\tfrac23 t\right)^{3/2}.
Even more general is Carathéodory's existence theorem, which proves existence (in a more general sense) under weaker conditions on f. Although these conditions are only sufficient, there also exist necessary and sufficient conditions for the solution of an initial value problem to be unique, such as Okamura's theorem.
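A short symbolic check (our own sketch using sympy) that the nonzero branches indeed solve y' = y^{1/3}:

    import sympy as sp

    t = sp.symbols('t', positive=True)
    y = (sp.Rational(2, 3) * t)**sp.Rational(3, 2)              # ((2/3) t)^(3/2)
    print(sp.simplify(sp.diff(y, t) - y**sp.Rational(1, 3)))    # 0: the positive branch solves y' = y^(1/3)
    # y(t) = 0 works trivially, and the negative branch follows by symmetry
    # when y^(1/3) is read as the real cube root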


Global existence of solution

The Picard–Lindelöf theorem ensures that solutions to initial value problems exist uniquely within a local interval [t_0-\varepsilon,\, t_0+\varepsilon], possibly dependent on each solution. The behavior of solutions beyond this local interval can vary depending on the properties of f and the domain over which f is defined. For instance, if f is globally Lipschitz, then the local interval of existence of each solution can be extended to the entire real line and all the solutions are defined over the entire \R.

If f is only locally Lipschitz, some solutions may not be defined for certain values of ''t'', even if f is smooth. For instance, the differential equation y' = y^2 with initial condition y(0) = 1 has the solution ''y''(''t'') = 1/(1-''t''), which is not defined at ''t'' = 1. Nevertheless, if f is a differentiable function defined on a compact submanifold of \R^n such that the prescribed derivative is tangent to the given submanifold, then the initial value problem has a unique solution for all time. More generally, in differential geometry: if f is a differentiable vector field defined over a domain which is a compact smooth manifold, then all its trajectories (integral curves) exist for all time.
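A small numerical illustration of such finite-time blow-up (our own sketch using scipy's solve_ivp; the threshold 10^6 is an arbitrary stand-in for "leaving every bounded set"):

    import numpy as np
    from scipy.integrate import solve_ivp

    def f(t, y):
        return y**2                     # y' = y^2, y(0) = 1, exact solution 1/(1 - t)

    def blow_up(t, y):                  # stop once |y| exceeds a large threshold
        return np.abs(y[0]) - 1e6
    blow_up.terminal = True

    sol = solve_ivp(f, (0.0, 2.0), [1.0], events=blow_up, rtol=1e-8, atol=1e-10)
    print(sol.t[-1])                    # just below 1: the solution cannot be continued past t = 1
    print(sol.y[0, -1])                 # huge: |y| blows up as t -> 1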


See also

* Cauchy–Kovalevskaya theorem
* Complete vector fields
* Frobenius theorem (differential topology)
* Integrability conditions for differential systems
* Newton's method
* Euler method
* Trapezoidal rule



External links

* Fixed Points and the Picard Algorithm, recovered from http://www.krellinst.org/UCES/archive/classes/CNA/dir2.6/uces2.6.html