In mathematics, the inverse function theorem is a theorem that asserts that, if a real function ''f'' has a continuous derivative near a point where its derivative is nonzero, then, near this point, ''f'' has an inverse function. The inverse function is also differentiable, and the ''inverse function rule'' expresses its derivative as the multiplicative inverse of the derivative of ''f''.

The theorem applies verbatim to complex-valued functions of a complex variable. It generalizes to functions from ''n''-tuples (of real or complex numbers) to ''n''-tuples, and to functions between vector spaces of the same finite dimension, by replacing "derivative" with "Jacobian matrix" and "nonzero derivative" with "nonzero Jacobian determinant".

If the function of the theorem belongs to a higher differentiability class, the same is true for the inverse function. There are also versions of the inverse function theorem for holomorphic functions, for differentiable maps between manifolds, for differentiable functions between Banach spaces, and so forth.

The theorem was first established by Picard and Goursat using an iterative scheme: the basic idea is to prove a fixed point theorem using the contraction mapping theorem.


Statements

For functions of a single variable, the theorem states that if f is a continuously differentiable function with nonzero derivative at the point a, then f is injective (equivalently, bijective onto its image) in a neighborhood of a, the inverse is continuously differentiable near b=f(a), and the derivative of the inverse function at b is the reciprocal of the derivative of f at a:
:\bigl(f^{-1}\bigr)'(b) = \frac{1}{f'(a)} = \frac{1}{f'(f^{-1}(b))}.

It can happen that a function f is injective near a point a while f'(a) = 0. An example is f(x) = (x - a)^3. In fact, for such a function, the inverse cannot be differentiable at b = f(a), since if f^{-1} were differentiable at b, then, by the chain rule, 1 = (f^{-1} \circ f)'(a) = (f^{-1})'(b)\,f'(a), which implies f'(a) \ne 0. (The situation is different for holomorphic functions; see #Holomorphic inverse function theorem below.)

For functions of more than one variable, the theorem states that if f is a continuously differentiable function from an open subset A of \R^n into \R^n, and the derivative f'(a) is invertible at a point a (that is, the determinant of the Jacobian matrix of f at a is nonzero), then there exist neighborhoods U of a in A and V of b = f(a) such that f(U) \subset V and f : U \to V is bijective. Writing f=(f_1,\ldots,f_n), this means that the system of equations y_i = f_i(x_1, \dots, x_n) has a unique solution for x_1, \dots, x_n in terms of y_1, \dots, y_n when x \in U, y \in V. Note that the theorem ''does not'' say that f is bijective onto its image wherever f' is invertible, but that it is locally bijective wherever f' is invertible.

Moreover, the theorem says that the inverse function f^{-1} : V \to U is continuously differentiable, and its derivative at b=f(a) is the inverse map of f'(a); i.e.,
:(f^{-1})'(b) = f'(a)^{-1}.
In other words, if Jf^{-1}(b), Jf(a) are the Jacobian matrices representing (f^{-1})'(b), f'(a), this means:
:Jf^{-1}(b) = Jf(a)^{-1}.

The hard part of the theorem is the existence and differentiability of f^{-1}. Assuming this, the inverse derivative formula follows from the chain rule applied to f^{-1}\circ f = I. (Indeed, I = I'(a) = (f^{-1} \circ f)'(a) = (f^{-1})'(b) \circ f'(a).) Since the map A \mapsto A^{-1} on invertible linear maps is infinitely differentiable, the formula for the derivative of the inverse shows that if f is continuously k times differentiable, with invertible derivative at the point a, then the inverse is also continuously k times differentiable. Here k is a positive integer or \infty.

There are two variants of the inverse function theorem. Given a continuously differentiable map f : U \to \R^m, the first is
*The derivative f'(a) is surjective (i.e., the Jacobian matrix representing it has rank m) if and only if there exists a continuously differentiable function g on a neighborhood V of b = f(a) such that f \circ g = I near b,
and the second is
*The derivative f'(a) is injective if and only if there exists a continuously differentiable function g on a neighborhood V of b = f(a) such that g \circ f = I near a.

In the first case (when f'(a) is surjective), the point b = f(a) is called a regular value. Since f'(a) is surjective exactly when its rank is m, the first case is equivalent to saying that b = f(a) is not in the image of a critical point a (a critical point being a point a at which f'(a) fails to be surjective). The statement in the first case is a special case of the submersion theorem.

These variants are restatements of the inverse function theorem. Indeed, in the first case, when f'(a) is surjective, we can find an (injective) linear map T such that f'(a) \circ T = I. Define h(x) = a + Tx so that we have:
:(f \circ h)'(0) = f'(a) \circ T = I.
Thus, by the inverse function theorem, f \circ h has an inverse near 0; i.e., f \circ h \circ (f \circ h)^{-1} = I near b. The second case (f'(a) is injective) is handled in a similar way.
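Returning to the basic derivative formula (f^{-1})'(b) = 1/f'(a), a quick numerical sanity check is easy to run. The following is only an illustrative Python sketch; the function f(x) = x^3 + x and the point a = 1 are chosen arbitrarily.

```python
def f(x):
    return x**3 + x          # strictly increasing, hence invertible on all of R

def f_prime(x):
    return 3 * x**2 + 1

def f_inverse(y, lo=-10.0, hi=10.0, tol=1e-12):
    """Invert f by bisection, using that f is increasing on [lo, hi]."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

a = 1.0
b = f(a)                     # b = 2
h = 1e-6
finite_diff = (f_inverse(b + h) - f_inverse(b - h)) / (2 * h)
print(finite_diff, 1 / f_prime(a))   # both are approximately 0.25
```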


Example

Consider the vector-valued function F:\R^2\to\R^2 defined by:
:F(x,y)= \begin{bmatrix} e^x \cos y \\ e^x \sin y \end{bmatrix}.
The Jacobian matrix of it at (x, y) is:
:JF(x,y)= \begin{bmatrix} e^x \cos y & -e^x \sin y \\ e^x \sin y & e^x \cos y \end{bmatrix}
with the determinant:
:\det JF(x,y)= e^{2x} \cos^2 y + e^{2x} \sin^2 y = e^{2x}.
The determinant e^{2x} is nonzero everywhere. Thus the theorem guarantees that, for every point p in \R^2, there exists a neighborhood about p over which F is invertible. This does not mean F is invertible over its entire domain: in this case F is not even injective since it is periodic: F(x,y)=F(x,y+2\pi).
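The Jacobian computation above can be reproduced symbolically. This is a minimal sketch assuming SymPy is available, purely as an illustration of the example:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
F = sp.Matrix([sp.exp(x) * sp.cos(y), sp.exp(x) * sp.sin(y)])

J = F.jacobian([x, y])                 # the Jacobian matrix JF(x, y)
print(J)
print(sp.simplify(J.det()))            # exp(2*x): nonzero for every (x, y)
```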


Counter-example

If one drops the assumption that the derivative is continuous, the function need no longer be invertible. For example, f(x) = x + 2x^2\sin(\tfrac1x) for x \ne 0 and f(0)= 0 has discontinuous derivative f'\!(x) = 1 -2\cos(\tfrac1x) + 4x\sin(\tfrac1x) for x \ne 0 and f'\!(0) = 1, which vanishes arbitrarily close to x=0. These critical points are local max/min points of f, so f is not one-to-one (and not invertible) on any interval containing x=0. Intuitively, the slope f'\!(0)=1 does not propagate to nearby points, where the slopes are governed by a weak but rapid oscillation.
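The failure of monotonicity is easy to see numerically: at the points x_n = 1/(2\pi n) the derivative is essentially -1 even though f'(0) = 1. A small illustrative Python sketch:

```python
import math

def f_prime(x):
    # derivative of f(x) = x + 2 x^2 sin(1/x) for x != 0
    return 1 - 2 * math.cos(1 / x) + 4 * x * math.sin(1 / x)

for n in (1, 10, 100, 1000):
    x_n = 1 / (2 * math.pi * n)
    print(x_n, f_prime(x_n))   # negative values at points arbitrarily close to 0
```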


Methods of proof

As an important result, the inverse function theorem has been given numerous proofs. The proof most commonly seen in textbooks relies on the contraction mapping principle, also known as the Banach fixed-point theorem (which can also be used as the key step in the proof of existence and uniqueness of solutions to ordinary differential equations). Since the fixed point theorem applies in infinite-dimensional (Banach space) settings, this proof generalizes immediately to the infinite-dimensional version of the inverse function theorem (see #Generalizations below).

An alternate proof in finite dimensions hinges on the extreme value theorem for functions on a compact set. This approach has the advantage that the proof generalizes to a situation where there is no Cauchy completeness (see #Over a real closed field below).

Yet another proof uses Newton's method, which has the advantage of providing an effective version of the theorem: bounds on the derivative of the function imply an estimate of the size of the neighborhood on which the function is invertible.
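To illustrate the Newton's-method viewpoint, here is a minimal sketch (assuming NumPy; the map is the one from the example above, and the starting point and target are chosen arbitrarily) that computes a local inverse by solving F(x) = y iteratively:

```python
import numpy as np

def newton_inverse(F, J, y, x0, steps=20):
    """Approximate a local solution of F(x) = y by Newton's method,
    starting near a point where the Jacobian J is invertible."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x - np.linalg.solve(J(x), F(x) - y)
    return x

# The map F(x, y) = (e^x cos y, e^x sin y) from the example above.
F = lambda v: np.array([np.exp(v[0]) * np.cos(v[1]), np.exp(v[0]) * np.sin(v[1])])
J = lambda v: np.array([[np.exp(v[0]) * np.cos(v[1]), -np.exp(v[0]) * np.sin(v[1])],
                        [np.exp(v[0]) * np.sin(v[1]),  np.exp(v[0]) * np.cos(v[1])]])

y_target = F(np.array([0.3, 0.2]))
x_recovered = newton_inverse(F, J, y_target, x0=[0.0, 0.0])
print(x_recovered)            # approximately [0.3, 0.2]
```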


Proof for single-variable functions

We want to prove the following: ''Let D \subseteq \R be an open set with x_0 \in D, f: D \to \R a continuously differentiable function defined on D, and suppose that f'(x_0) \ne 0. Then there exists an open interval I with x_0 \in I such that f maps I bijectively onto the open interval J = f(I), and such that the inverse function f^{-1} : J \to I is continuously differentiable, and for any y \in J, if x \in I is such that f(x) = y, then (f^{-1})'(y) = \dfrac{1}{f'(x)}.''

We may without loss of generality assume that f'(x_0) > 0. Given that D is an open set and f' is continuous at x_0, there exists r > 0 such that (x_0 - r, x_0 + r) \subseteq D and
:|f'(x) - f'(x_0)| < \dfrac{f'(x_0)}{2} \qquad \text{for all } |x - x_0| < r.
In particular,
:f'(x) > \dfrac{f'(x_0)}{2} >0 \qquad \text{for all } |x - x_0| < r.
This shows that f is strictly increasing for all |x - x_0| < r. Let \delta > 0 be such that \delta < r. Then [x_0 - \delta, x_0 + \delta] \subseteq (x_0 - r, x_0 + r). By the intermediate value theorem, we find that f maps the interval [x_0 - \delta, x_0 + \delta] bijectively onto [f(x_0 - \delta), f(x_0 + \delta)]. Denote I = (x_0-\delta, x_0+\delta) and J = (f(x_0 - \delta),f(x_0 + \delta)). Then f: I \to J is a bijection and the inverse f^{-1}: J \to I exists.

The fact that f^{-1}: J \to I is differentiable follows from the differentiability of f. In particular, the result follows from the fact that if f: I \to \R is a strictly monotonic and continuous function that is differentiable at x_0 \in I with f'(x_0) \ne 0, then f^{-1}: f(I) \to \R is differentiable with (f^{-1})'(y_0) = \dfrac{1}{f'(x_0)}, where y_0 = f(x_0) (a standard result in analysis). This completes the proof.


A proof using successive approximation

To prove existence, it can be assumed after an affine transformation that f(0)=0 and f'(0)=I, so that a=b=0.

By the mean value theorem for vector-valued functions, for a differentiable function u: [0,1] \to \mathbb R^m, \|u(1)-u(0)\| \le \sup_{0 \le t \le 1} \|u'(t)\|. Setting u(t)=f(x+t(x' -x)) - x-t(x'-x), it follows that
:\|f(x) - f(x') - x + x'\| \le \|x -x'\|\,\sup_{0 \le t \le 1} \|f'(x+t(x' -x))-I\|.
Now choose \delta>0 so that \|f'(x) - I\| < \tfrac12 for \|x\| < \delta. Suppose that \|y\| <\delta/2 and define x_n inductively by x_0=0 and x_{n+1}=x_n + y - f(x_n). The assumptions show that if \|x\|, \|x'\| < \delta then
:\|f(x)-f(x') - x + x'\| \le \|x-x'\|/2.
In particular f(x)=f(x') implies x=x'. In the inductive scheme \|x_n\| <\delta and \|x_{n+1} - x_n\| < \delta/2^n. Thus (x_n) is a Cauchy sequence tending to some x. By construction f(x)=y as required.

To check that g=f^{-1} is C^1, write g(y+k) = x+h so that f(x+h)=f(x)+k. By the inequalities above, \|h-k\| <\|h\|/2 so that \|h\|/2<\|k\|< 2\|h\|. On the other hand, if A=f'(x), then \|A-I\| <1/2. Using the geometric series for B=I-A, it follows that \|A^{-1}\| < 2. But then
:\frac{\|g(y+k) - g(y) - f'(g(y))^{-1}k\|}{\|k\|} = \frac{\|h - A^{-1}[f(x+h)-f(x)]\|}{\|k\|} \le 4\,\frac{\|f(x+h) - f(x) - f'(x)h\|}{\|h\|}
tends to 0 as k and h tend to 0, proving that g is C^1 with g'(y)=f'(g(y))^{-1}.

The proof above is presented for a finite-dimensional space, but applies equally well for Banach spaces. If an invertible function f is C^k with k>1, then so too is its inverse. This follows by induction using the fact that the map F(A)=A^{-1} on operators is C^k for any k (in the finite-dimensional case this is an elementary fact because the inverse of a matrix is given as the adjugate matrix divided by its determinant). The method of proof here can be found in the books of Henri Cartan, Jean Dieudonné, Serge Lang, Roger Godement and Lars Hörmander.
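The iteration x_{n+1} = x_n + y - f(x_n) used above can be run directly. Here is a minimal Python sketch (assuming NumPy; the quadratic perturbation is an arbitrary illustration of a map with f(0) = 0 and f'(0) = I):

```python
import numpy as np

def local_inverse(f, y, steps=60):
    """Successive approximation x_{n+1} = x_n + y - f(x_n); valid in the
    normalized setting f(0) = 0, f'(0) = I, for y of small enough norm."""
    x = np.zeros_like(y)
    for _ in range(steps):
        x = x + y - f(x)
    return x

# A map with f(0) = 0 and f'(0) = I (chosen only for illustration).
f = lambda v: v + 0.1 * np.array([v[0]**2, v[0] * v[1]])

y = np.array([0.05, -0.03])
x = local_inverse(f, y)
print(x, f(x))                # f(x) is approximately equal to y
```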


A proof using the contraction mapping principle

Here is a proof based on the contraction mapping theorem. Specifically, following T. Tao, it uses the following consequence of the contraction mapping theorem: if g : B(0, r) \to \R^n satisfies g(0) = 0 and
:|g(y) - g(x)| \le c|y - x|
for all x, y in B(0, r) and some constant 0 \le c < 1, then f = g + I is injective on B(0, r) and
:B(0, (1-c)r) \subset f(B(0, r)) \subset B(0, (1+c)r).
Basically, the lemma says that a small perturbation of the identity map by a contraction map is injective and preserves a ball in some sense. Assuming the lemma for a moment, we prove the theorem first.

As in the above proof, it is enough to prove the special case when a = 0, b = f(a) = 0 and f'(0) = I. Let g = f - I. The mean value inequality applied to t \mapsto g(x + t(y - x)) says:
:|g(y) - g(x)| \le |y-x| \sup_{0 \le t \le 1} |g'(x + t(y - x))|.
Since g'(0) = I - I = 0 and g' is continuous, we can find an r > 0 such that
:|g(y) - g(x)| \le 2^{-1}|y-x|
for all x, y in B(0, r). Then the lemma says that f = g + I is injective on B(0, r) and B(0, r/2) \subset f(B(0, r)). Then
:f : U = B(0, r) \cap f^{-1}(B(0, r/2)) \to V = B(0, r/2)
is bijective and thus has an inverse.

Next, we show the inverse f^{-1} is continuously differentiable (this part of the argument is the same as that in the previous proof). This time, let g = f^{-1} denote the inverse of f and A = f'(x). For x = g(y), we write g(y + k) = x + h or y + k = f(x+h). Now, by the early estimate, we have
:|h - k| = |f(x+h) - f(x) - h| \le |h|/2
and so |h|/2 \le |k|. Writing \| \cdot \| for the operator norm,
:|g(y+k) - g(y) - A^{-1} k| = |h - A^{-1}(f(x + h) - f(x))| \le \|A^{-1}\|\, |Ah - f(x+h) + f(x)|.
As k \to 0, we have h \to 0 and |h|/|k| is bounded. Hence, g is differentiable at y with the derivative g'(y) = f'(g(y))^{-1}. Also, g' is the same as the composition \iota \circ f' \circ g where \iota : T \mapsto T^{-1}; so g' is continuous.

It remains to show the lemma. First, we have:
:|x - y| - |f(x) - f(y)| \le |g(x) - g(y)| \le c|x - y|,
which is to say
:(1 - c)|x - y| \le |f(x) - f(y)|.
This proves the first part (injectivity). Next, we show f(B(0, r)) \supset B(0, (1-c)r). The idea is to note that this is equivalent to, given a point y in B(0, (1-c) r), finding a fixed point of the map
:F : \overline{B}(0, r') \to \overline{B}(0, r'), \, x \mapsto y - g(x)
where 0 < r' < r is such that |y| \le (1-c)r' and the bar means a closed ball. To find a fixed point, we use the contraction mapping theorem, and checking that F is a well-defined strict-contraction mapping is straightforward. Finally, we have f(B(0, r)) \subset B(0, (1+c)r), since
:|f(x)| = |x + g(x) - g(0)| \le (1+c)|x|. \square

As might be clear, this proof is not substantially different from the previous one, as the proof of the contraction mapping theorem is by successive approximation.


Applications


Implicit function theorem

The inverse function theorem can be used to solve a system of equations
:\begin{cases} f_1(x) = y_1 \\ \quad \vdots\\ f_n(x) = y_n,\end{cases}
i.e., expressing x_1, \dots, x_n as functions of y_1, \dots, y_n, provided the Jacobian matrix is invertible. The implicit function theorem allows one to solve a more general system of equations:
:\begin{cases} f_1(x, y) = 0 \\ \quad \vdots\\ f_n(x, y) = 0\end{cases}
for y in terms of x. Though more general, the theorem is actually a consequence of the inverse function theorem. First, the precise statement of the implicit function theorem is as follows:
*Given a map f : \R^n \times \R^m \to \R^m, if f(a, b) = 0, f is continuously differentiable in a neighborhood of (a, b) and the derivative of y \mapsto f(a, y) at b is invertible, then there exists a differentiable map g : U \to V for some neighborhoods U, V of a, b such that f(x, g(x)) = 0. Moreover, if f(x, y) = 0, x \in U, y \in V, then y = g(x); i.e., g(x) is a unique solution.

To see this, consider the map F(x, y) = (x, f(x, y)). By the inverse function theorem, F : U \times V \to W has an inverse G for some neighborhoods U, V, W. We then have:
:(x, y) = F(G_1(x, y), G_2(x, y)) = (G_1(x, y), f(G_1(x, y), G_2(x, y))),
implying x = G_1(x, y) and y = f(x, G_2(x, y)). Thus g(x) = G_2(x, 0) has the required property. \square
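As a concrete illustration, here is a sketch assuming SymPy; the circle equation and the base point (0, 1) are chosen only as an example, and the slope formula g'(x) = -(\partial f/\partial x)/(\partial f/\partial y) is the standard one-variable consequence of the theorem:

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
f = x**2 + y**2 - 1        # f(0, 1) = 0 and df/dy = 2y is invertible at (0, 1)

# Slope of the implicitly defined solution y = g(x) at the point (0, 1).
implicit_slope = (-sp.diff(f, x) / sp.diff(f, y)).subs({x: 0, y: 1})
print(implicit_slope)       # 0

# Compare with the explicit local solution g(x) = sqrt(1 - x^2) near x = 0.
g = sp.sqrt(1 - x**2)
print(sp.diff(g, x).subs(x, 0))   # 0
```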


Giving a manifold structure

In differential geometry, the inverse function theorem is used to show that the pre-image of a regular value under a smooth map is a manifold. Indeed, let f : U \to \R^r be such a smooth map from an open subset U of \R^n (since the result is local, there is no loss of generality in considering such a map). Fix a point a in f^{-1}(b) and then, by permuting the coordinates on \R^n, assume the matrix \left[ \frac{\partial f_i}{\partial x_j}(a) \right]_{1 \le i, j \le r} has rank r. Then the map F : U \to \R^r \times \R^{n-r} = \R^n, \, x \mapsto (f(x), x_{r+1}, \dots, x_n) is such that F'(a) has rank n. Hence, by the inverse function theorem, we find the smooth inverse G of F defined in a neighborhood V \times W of (b, a_{r+1}, \dots, a_n). We then have
:x = (F \circ G)(x) = (f(G(x)), G_{r+1}(x), \dots, G_n(x)),
which implies
:(f \circ G)(x_1, \dots, x_n) = (x_1, \dots, x_r).
That is, after the change of coordinates by G, f is a coordinate projection (this fact is known as the submersion theorem). Moreover, since G : V \times W \to U' = G(V \times W) is bijective, the map
:g = G(b, \cdot) : W \to f^{-1}(b) \cap U', \, (x_{r+1}, \dots, x_n) \mapsto G(b, x_{r+1}, \dots, x_n)
is bijective with smooth inverse. That is to say, g gives a local parametrization of f^{-1}(b) around a. Hence, f^{-1}(b) is a manifold. \square (Note the proof is quite similar to the proof of the implicit function theorem and, in fact, the implicit function theorem can also be used instead.)

More generally, the theorem shows that if a smooth map f : P \to E is transversal to a submanifold M \subset E, then the pre-image f^{-1}(M) \hookrightarrow P is a submanifold.
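A standard instance of this regular-value construction is the unit sphere. The following sketch (assuming SymPy; the sphere is our illustration, not an example taken from the text) checks that 1 is a regular value of f(x, y, z) = x^2 + y^2 + z^2, so its pre-image is a 2-dimensional manifold:

```python
import sympy as sp

x, y, z = sp.symbols('x y z', real=True)
f = sp.Matrix([x**2 + y**2 + z**2])      # smooth map R^3 -> R^1

J = f.jacobian([x, y, z])                # [2x, 2y, 2z]
print(J)

# On f^{-1}(1) we have x^2 + y^2 + z^2 = 1, so (x, y, z) != 0 and the
# Jacobian has rank 1 there; hence 1 is a regular value.
print(J.subs({x: 1, y: 0, z: 0}))        # Matrix([[2, 0, 0]]), rank 1
```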


Global version

The inverse function theorem is a local result; it applies to each point. ''A priori'', the theorem thus only shows the function f is locally bijective (or locally diffeomorphic of some class). The next topological lemma can be used to upgrade local injectivity to injectivity that is global to some extent. Roughly, the lemma says: if f : X \to Z is a continuous map that is locally injective, A \subset X is a closed subset, X admits an exhaustion by compact subsets, and f is injective on A, then f is injective on some open neighborhood of A.

Proof: First assume X is compact. If the conclusion of the theorem is false, we can find two sequences x_i \ne y_i such that f(x_i) = f(y_i) and x_i, y_i each converge to some points x, y in A. Since f is injective on A, x = y. Now, if i is large enough, x_i, y_i are in a neighborhood of x = y where f is injective; thus, x_i = y_i, a contradiction.

In general, consider the set E = \{(x, y) \in X \times X : x \ne y, \, f(x) = f(y)\}. It is disjoint from S \times S for any subset S \subset X where f is injective. Let X_1 \subset X_2 \subset \cdots be an increasing sequence of compact subsets with union X and with X_i contained in the interior of X_{i+1}. Then, by the first part of the proof, for each i, we can find a neighborhood U_i of A \cap X_i such that U_i^2 \subset X^2 - E. Then U = \bigcup_i U_i has the required property. \square

The lemma implies the following (a sort of) global version of the inverse function theorem: if a continuously differentiable map f is injective on a closed subset A of its open domain and f'(x) is invertible for each x in A, then f is injective on some open neighborhood of A, and its inverse there is continuously differentiable. Note that if A is a single point, then this recovers the usual inverse function theorem.


Holomorphic inverse function theorem

There is a version of the inverse function theorem for holomorphic maps: if f is a holomorphic map between open subsets of \Complex^n whose Jacobian matrix in the complex variables z_1, \dots, z_n is invertible at a point (which we may take to be 0), then f is injective near 0 and the inverse is holomorphic.

The theorem follows from the usual inverse function theorem. Indeed, let J_{\R}(f) denote the Jacobian matrix of f in the real variables x_i, y_i and J(f) the Jacobian matrix in the complex variables z_j. Then we have \det J_{\R}(f) = |\det J(f)|^2, which is nonzero by assumption. Hence, by the usual inverse function theorem, f is injective near 0 with continuously differentiable inverse. By the chain rule, with w = f(z),
:\frac{\partial}{\partial \bar{z}_i} (f_j^{-1} \circ f)(z) = \sum_k \frac{\partial f_j^{-1}}{\partial w_k}(w) \frac{\partial f_k}{\partial \bar{z}_i}(z) + \sum_k \frac{\partial f_j^{-1}}{\partial \bar{w}_k}(w) \frac{\partial \bar{f}_k}{\partial \bar{z}_i}(z),
where the left-hand side and the first term on the right vanish since f_j^{-1} \circ f and f_k are holomorphic. Thus, \frac{\partial f_j^{-1}}{\partial \bar{w}_k}(w) = 0 for each k. \square

Similarly, there is the implicit function theorem for holomorphic functions.

As already noted earlier, it can happen that an injective smooth function has an inverse that is not smooth (e.g., f(x) = x^3 in a real variable). This is not the case for holomorphic functions: an injective holomorphic map between open subsets of \Complex^n is automatically a biholomorphism onto its image, so its inverse is again holomorphic.
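The identity \det J_{\R}(f) = |\det J(f)|^2 used above can be checked symbolically in a simple case. A sketch assuming SymPy, with the one-variable holomorphic map f(z) = z^2 chosen only for illustration (here \det J(f) = f'(z) = 2z):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
z = x + sp.I * y
f = sp.expand(z**2)                       # holomorphic, f'(z) = 2z

u, v = sp.re(f), sp.im(f)                 # real and imaginary parts
J_real = sp.Matrix([[sp.diff(u, x), sp.diff(u, y)],
                    [sp.diff(v, x), sp.diff(v, y)]])

print(sp.simplify(J_real.det()))               # 4*x**2 + 4*y**2
print(sp.expand(2 * z * sp.conjugate(2 * z)))  # |2z|^2 = 4*x**2 + 4*y**2
```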


Formulations for manifolds

The inverse function theorem can be rephrased in terms of differentiable maps between differentiable manifolds. In this context the theorem states that for a differentiable map F: M \to N (of class C^1), if the differential of F,
:dF_p: T_p M \to T_{F(p)} N
is a linear isomorphism at a point p in M, then there exists an open neighborhood U of p such that
:F|_U: U \to F(U)
is a diffeomorphism. Note that this implies that the connected components of M and N containing p and F(p) have the same dimension, as is already directly implied from the assumption that dF_p is an isomorphism. If the derivative of F is an isomorphism at all points p in M then the map F is a local diffeomorphism.


Generalizations


Banach spaces

The inverse function theorem can also be generalized to differentiable maps between Banach spaces X and Y. Let U be an open neighbourhood of the origin in X and F: U \to Y a continuously differentiable function, and assume that the Fréchet derivative dF_0: X \to Y of F at 0 is a bounded linear isomorphism of X onto Y. Then there exists an open neighbourhood V of F(0) in Y and a continuously differentiable map G: V \to X such that F(G(y)) = y for all y in V. Moreover, G(y) is the only sufficiently small solution x of the equation F(x) = y. There is also the inverse function theorem for Banach manifolds.


Constant rank theorem

The inverse function theorem (and the implicit function theorem) can be seen as a special case of the constant rank theorem, which states that a smooth map with constant rank near a point can be put in a particular normal form near that point. Specifically, if F:M\to N has constant rank near a point p\in M, then there are open neighborhoods U of p and V of F(p) and there are diffeomorphisms u:T_pM\to U and v:T_{F(p)}N\to V such that F(U)\subseteq V and such that the derivative dF_p:T_pM\to T_{F(p)}N is equal to v^{-1}\circ F\circ u. That is, F "looks like" its derivative near p. The set of points p\in M such that the rank is constant in a neighborhood of p is an open dense subset of M; this is a consequence of semicontinuity of the rank function. Thus the constant rank theorem applies to a generic point of the domain.

When the derivative of F is injective (resp. surjective) at a point p, it is also injective (resp. surjective) in a neighborhood of p, and hence the rank of F is constant on that neighborhood, and the constant rank theorem applies.


Polynomial functions

If true, the Jacobian conjecture would be a variant of the inverse function theorem for polynomials. It states that if a vector-valued polynomial function has a Jacobian determinant that is an invertible polynomial (that is, a nonzero constant), then it has an inverse that is also a polynomial function. It is unknown whether this is true or false, even in the case of two variables. This is a major open problem in the theory of polynomials.


Selections

When f: \R^n \to \R^m with m\leq n, f is k times continuously differentiable, and the Jacobian A=\nabla f(\overline{x}) at a point \overline{x} is of rank m, the inverse of f may not be unique. However, there exists a local selection function s such that f(s(y)) = y for all y in a neighborhood of \overline{y} = f(\overline{x}), s(\overline{y}) = \overline{x}, s is k times continuously differentiable in this neighborhood, and \nabla s(\overline{y}) = A^T(A A^T)^{-1} (i.e., \nabla s(\overline{y}) is the Moore–Penrose pseudoinverse of A).
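A small numerical sketch of the gradient formula \nabla s(\overline{y}) = A^T(AA^T)^{-1}, which coincides with the Moore–Penrose pseudoinverse when A has full row rank (assuming NumPy; the underdetermined map below is chosen only for illustration):

```python
import numpy as np

def jacobian(v):
    """Jacobian of the illustrative map f(x, y, z) = (x + y^2, y + z)."""
    x, y, z = v
    return np.array([[1.0, 2 * y, 0.0],
                     [0.0, 1.0,   1.0]])

x_bar = np.array([1.0, 1.0, 0.5])
A = jacobian(x_bar)                          # rank 2 (full row rank)

print(np.linalg.pinv(A))                     # Moore-Penrose pseudoinverse of A
print(A.T @ np.linalg.inv(A @ A.T))          # A^T (A A^T)^{-1}: the same matrix
```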


Over a real closed field

The inverse function theorem also holds over a real closed field ''k'' (or an O-minimal structure). Precisely, the theorem holds for a semialgebraic (or definable) map between open subsets of k^n that is continuously differentiable.

The usual proof of the IFT uses Banach's fixed point theorem, which relies on the Cauchy completeness. That part of the argument is replaced by the use of the extreme value theorem, which does not need completeness. Explicitly, in the proof via the contraction mapping principle above, the Cauchy completeness is used only to establish the inclusion B(0, r/2) \subset f(B(0, r)). Here, we shall directly show B(0, r/4) \subset f(B(0, r)) instead (which is enough). Given a point y in B(0, r/4), consider the function P(x) = |f(x) - y|^2 defined on a neighborhood of \overline{B}(0, r). If P'(x) = 0, then 0 = P'(x) = 2[f_1(x) - y_1, \dots, f_n(x) - y_n]\,f'(x) and so f(x) = y, since f'(x) is invertible. Now, by the extreme value theorem, P attains a minimum at some point x_0 on the closed ball \overline{B}(0, r), which can be shown to lie in B(0, r) using 2^{-1}|x| \le |f(x)|. Since P'(x_0) = 0, f(x_0) = y, which proves the claimed inclusion. \square

Alternatively, one can deduce the theorem from the one over the real numbers by Tarski's principle.


See also

* Nash–Moser theorem

