In mathematics, calculus on Euclidean space is a generalization of calculus of functions in one or several variables to calculus of functions on a Euclidean space \mathbb{R}^n as well as on a finite-dimensional real vector space. This calculus is also known as advanced calculus, especially in the United States. It is similar to multivariable calculus but is somewhat more sophisticated in that it uses linear algebra (or some functional analysis) more extensively and covers some concepts from differential geometry such as differential forms and Stokes' formula in terms of differential forms. This extensive use of linear algebra also allows a natural generalization of multivariable calculus to calculus on Banach spaces or topological vector spaces. Calculus on Euclidean space is also a local model of calculus on manifolds, a theory of functions on manifolds.


Basic notions


Functions in one real variable

This section is a brief review of function theory in one-variable calculus. A real-valued function f : \mathbb{R} \to \mathbb{R} is continuous at a if it is ''approximately constant'' near a; i.e.,
:\lim_{h \to 0} (f(a + h) - f(a)) = 0.
In contrast, the function f is differentiable at a if it is ''approximately linear'' near a; i.e., there is some real number \lambda such that
:\lim_{h \to 0} \frac{f(a + h) - f(a) - \lambda h}{h} = 0.
(For simplicity, suppose f(a) = 0. Then the above means that f(a + h) = \lambda h + g(a, h), where g(a, h) goes to 0 faster than h as h goes to 0; in that sense, f(a + h) behaves like \lambda h.) The number \lambda depends on a and is thus denoted f'(a). If f is differentiable on an open interval U and if f' is a continuous function on U, then f is called a ''C''<sup>1</sup> function; more generally, f is called a ''C''<sup>''k''</sup> function if its derivative f' is a ''C''<sup>''k''-1</sup> function. Taylor's theorem states that a ''C''<sup>''k''</sup> function is precisely a function that can be approximated by a polynomial of degree k.

If f : \mathbb{R} \to \mathbb{R} is a C^1 function and f'(a) \ne 0 for some a, then either f'(a) > 0 or f'(a) < 0; i.e., either f is strictly increasing or strictly decreasing in some open interval containing a. In particular, f : f^{-1}(U) \to U is bijective for some open interval U containing f(a). The inverse function theorem then says that the inverse function f^{-1} is differentiable on U with the derivative: for y \in U,
:(f^{-1})'(y) = \frac{1}{f'(f^{-1}(y))}.
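The inverse-derivative formula above can be checked numerically; the sketch below uses f(x) = x^3 + x (my own illustrative choice, not from the text), inverts it by bisection, and compares a finite-difference derivative of the inverse with 1/f'(f^{-1}(y)):

```python
# Numerically verify (f^-1)'(y) = 1 / f'(f^-1(y)) for f(x) = x^3 + x.
# f is strictly increasing (f'(x) = 3x^2 + 1 > 0), so f^-1 exists on all of R.

def f(x):
    return x**3 + x

def f_prime(x):
    return 3 * x**2 + 1

def f_inv(y, lo=-10.0, hi=10.0, tol=1e-12):
    """Invert f by bisection (valid since f is strictly increasing)."""
    while hi - lo > tol:
        mid = (lo + hi) / 2
        if f(mid) < y:
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2

y = 2.0                      # f(1) = 2, so f_inv(2.0) should be 1
h = 1e-6
numeric = (f_inv(y + h) - f_inv(y - h)) / (2 * h)   # central difference
exact = 1 / f_prime(f_inv(y))                        # = 1/4 at x = 1
print(numeric, exact)        # both close to 0.25
```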


Derivative of a map and chain rule

For functions f defined in the plane or, more generally, on a Euclidean space \mathbb{R}^n, it is necessary to consider functions that are vector-valued or matrix-valued. It is also conceptually helpful to do this in an invariant manner (i.e., a coordinate-free way). Derivatives of such maps at a point are then vectors or linear maps, not real numbers.

Let f : X \to Y be a map from an open subset X of \mathbb{R}^n to an open subset Y of \mathbb{R}^m. Then the map f is said to be differentiable at a point x in X if there exists a (necessarily unique) linear transformation f'(x) : \mathbb{R}^n \to \mathbb{R}^m, called the derivative of f at x, such that
:\lim_{h \to 0} \frac{1}{|h|} |f(x + h) - f(x) - f'(x)h| = 0
where f'(x)h is the application of the linear transformation f'(x) to h. If f is differentiable at x, then it is continuous at x, since
:|f(x + h) - f(x)| \le (|h|^{-1} |f(x + h) - f(x) - f'(x)h|) |h| + |f'(x)h| \to 0
as h \to 0.

As in the one-variable case, there is the chain rule: if f is differentiable at x and g is differentiable at y = f(x), then the composition g \circ f is differentiable at x with
:(g \circ f)'(x) = g'(y) \circ f'(x).
This is proved exactly as for functions in one variable. Indeed, with the notation \widetilde{h} = f(x + h) - f(x), we have:
:\begin{aligned} & \frac{1}{|h|} |g(f(x + h)) - g(y) - g'(y) f'(x) h| \\ & \le \frac{1}{|h|} |g(y + \widetilde{h}) - g(y) - g'(y)\widetilde{h}| + \frac{1}{|h|} |g'(y)(f(x+h) - f(x) - f'(x) h)|. \end{aligned}
Here, since f is differentiable at x, the second term on the right goes to zero as h \to 0. As for the first term, it can be written as:
:\begin{cases} \dfrac{|\widetilde{h}|}{|h|} \, \dfrac{|g(y + \widetilde{h}) - g(y) - g'(y)\widetilde{h}|}{|\widetilde{h}|}, & \widetilde{h} \neq 0, \\ 0, & \widetilde{h} = 0. \end{cases}
Now, by the argument showing the continuity of f at x, we see that |\widetilde{h}|/|h| is bounded. Also, \widetilde{h} \to 0 as h \to 0 since f is continuous at x. Hence, the first term also goes to zero as h \to 0 by the differentiability of g at y. \square

The map f as above is called continuously differentiable or C^1 if it is differentiable on the domain and the derivatives vary continuously; i.e., x \mapsto f'(x) is continuous. As a linear transformation, f'(x) is represented by an m \times n matrix, called the Jacobian matrix Jf(x) of f at x, and we write it as:
:(Jf)(x) = \begin{pmatrix} \frac{\partial f_1}{\partial x_1}(x) & \cdots & \frac{\partial f_1}{\partial x_n}(x) \\ \vdots & \ddots & \vdots \\ \frac{\partial f_m}{\partial x_1}(x) & \cdots & \frac{\partial f_m}{\partial x_n}(x) \end{pmatrix}.
Taking h to be h e_j, with h a real number and e_j = (0, \cdots, 1, \cdots, 0) the j-th standard basis element, we see that the differentiability of f at x implies:
:\lim_{h \to 0} \frac{f_i(x + h e_j) - f_i(x)}{h} = \frac{\partial f_i}{\partial x_j}(x)
where f_i denotes the i-th component of f. That is, each component of f is differentiable at x in each variable with the derivative \frac{\partial f_i}{\partial x_j}(x). In terms of Jacobian matrices, the chain rule says J(g \circ f)(x) = Jg(y) Jf(x); i.e., as (g \circ f)_i = g_i \circ f,
:\frac{\partial (g_i \circ f)}{\partial x_j}(x) = \frac{\partial g_i}{\partial y_1}(y) \frac{\partial f_1}{\partial x_j}(x) + \cdots + \frac{\partial g_i}{\partial y_m}(y) \frac{\partial f_m}{\partial x_j}(x),
which is the form of the chain rule that is often stated.

A partial converse to the above holds. Namely, if the partial derivatives \partial f_i / \partial x_j are all defined and continuous, then f is continuously differentiable. This is a consequence of the mean value inequality, which says: if t \mapsto f(x + ty) is differentiable on [0, 1], then, for any linear map A,
:|f(x + y) - f(x) - Ay| \le |y| \sup_{0 < t < 1} \| (Jf)(x + ty) - A \|.
(This version of the mean value inequality follows from the mean value inequality in one variable applied to the function [0, 1] \to \mathbb{R}^m, \, t \mapsto f(x + ty) - tv, with v = Ay.) Indeed, let g(x) = (Jf)(x). We note that, if y = y_i e_i (no summation), then
:\frac{d}{dt} f(x + ty) = \frac{\partial f}{\partial x_i}(x+ty) y_i = g(x + ty)(y_i e_i).
For simplicity, assume n = 2 (the argument for the general case is similar). Then, with the notation \Delta_y f(x) = f(x + y) - f(x), by the mean value inequality, with the operator norm \| \cdot \|,
:\begin{aligned} &|\Delta_y f (x) - g(x)y| \\ &\le |\Delta_{y_1 e_1} f(x_1, x_2 + y_2) - g(x)(y_1 e_1)| + |\Delta_{y_2 e_2} f(x_1, x_2) - g(x)(y_2 e_2)| \\ &\le |y_1| \sup_{0 < t < 1} \|g(x_1 + t y_1, x_2 + y_2) - g(x)\| + |y_2| \sup_{0 < t < 1} \|g(x_1, x_2 + ty_2) - g(x)\|, \end{aligned}
which implies |\Delta_y f (x) - g(x)y|/|y| \to 0 as required. \square

Example: Let U be the set of all invertible real square matrices of size n. Note U can be identified with an open subset of \mathbb{R}^{n^2} with coordinates x_{ij}, 1 \le i, j \le n. Consider the function f(g) = g^{-1}, the inverse matrix of g, defined on U. To guess its derivative, assume f is differentiable and consider the curve c(t) = ge^{th}, where e^{A} means the matrix exponential of A. By the chain rule applied to f(c(t)) = e^{-th} g^{-1}, we have:
:f'(c(t)) \circ c'(t) = \frac{d}{dt} f(c(t)) = -h e^{-th} g^{-1}.
Taking t = 0, we get f'(g)(gh) = -h g^{-1}; i.e.,
:f'(g) h = -g^{-1}h g^{-1}.
Now, we then have:
:\| (g+h)^{-1} - g^{-1} + g^{-1}h g^{-1}\| \le \|(g+h)^{-1} \| \, \|h\| \, \|g^{-1} h g^{-1}\|.
Since the operator norm is equivalent to the Euclidean norm on \mathbb{R}^{n^2} (any two norms on a finite-dimensional space are equivalent to each other), this implies f is differentiable. Finally, from the formula for f', we see the partial derivatives of f are smooth (infinitely differentiable); whence, f is smooth too.
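The formula f'(g)h = -g^{-1} h g^{-1} for the derivative of matrix inversion can be checked numerically. The sketch below (with a hand-picked 2×2 matrix and direction, my own example) compares the difference quotient ((g + th)^{-1} - g^{-1})/t with -g^{-1} h g^{-1} using pure-Python 2×2 linear algebra:

```python
# Check the derivative of matrix inversion, f(g) = g^-1, in the 2x2 case:
# f'(g)h = -g^-1 h g^-1.

def matmul(a, b):
    return [[sum(a[i][k] * b[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

def inv(a):
    det = a[0][0] * a[1][1] - a[0][1] * a[1][0]
    return [[ a[1][1] / det, -a[0][1] / det],
            [-a[1][0] / det,  a[0][0] / det]]

g = [[2.0, 1.0], [0.0, 1.0]]     # an invertible matrix (example choice)
h = [[0.3, -0.2], [0.5, 0.1]]    # an arbitrary direction

t = 1e-6
g_t = [[g[i][j] + t * h[i][j] for j in range(2)] for i in range(2)]
quotient = [[(inv(g_t)[i][j] - inv(g)[i][j]) / t for j in range(2)]
            for i in range(2)]
exact = [[-x for x in row] for row in matmul(matmul(inv(g), h), inv(g))]

err = max(abs(quotient[i][j] - exact[i][j]) for i in range(2) for j in range(2))
print(err)   # of order t, i.e. very small
```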


Higher derivatives and Taylor formula

If f : X \to \mathbb{R}^m is differentiable, where X \subset \mathbb{R}^n is an open subset, then the derivatives determine the map f' : X \to \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m), where \operatorname{Hom} stands for homomorphisms between vector spaces; i.e., linear maps. If f' is differentiable, then f'' : X \to \operatorname{Hom}(\mathbb{R}^n, \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)). Here, the codomain of f'' can be identified with the space of bilinear maps by:
:\operatorname{Hom}(\mathbb{R}^n, \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)) \overset{\varphi}{\underset{\sim}{\to}} \{ g : \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^m \mid g \text{ bilinear} \}
where \varphi(g)(x, y) = g(x)y; \varphi is bijective with the inverse \psi given by (\psi(g)x)y = g(x, y). In general, f^{(k)} = (f^{(k-1)})' is a map from X to the space of k-multilinear maps (\mathbb{R}^n)^k \to \mathbb{R}^m.

Just as f'(x) is represented by a matrix (the Jacobian matrix), when m = 1 (so that a bilinear map is a bilinear form), the bilinear form f''(x) is represented by a matrix called the Hessian matrix of f at x; namely, the square matrix H of size n such that f''(x)(y, z) = (Hy, z), where the pairing refers to an inner product of \mathbb{R}^n, and H is none other than the Jacobian matrix of f' : X \to (\mathbb{R}^n)^* \simeq \mathbb{R}^n. The (i, j)-th entry of H is thus given explicitly as H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}(x).

Moreover, if f'' exists and is continuous, then the matrix H is symmetric, a fact known as the symmetry of second derivatives. This is seen using the mean value inequality. For vectors u, v in \mathbb{R}^n, writing \Delta_u f(x) = f(x + u) - f(x) and using the mean value inequality twice, we have:
:|\Delta_v \Delta_u f(x) - f''(x)(u, v)| \le \sup_{0 < t_1, t_2 < 1} |f''(x + t_1 u + t_2 v)(u, v) - f''(x)(u, v)|,
which says
:f''(x)(u, v) = \lim_{s, t \to 0} \frac{\Delta_{tv} \Delta_{su} f(x)}{st}.
Since the right-hand side is symmetric in u, v, so is the left-hand side: f''(x)(u, v) = f''(x)(v, u). By induction, if f is C^k, then the k-multilinear map f^{(k)}(x) is symmetric; i.e., the order of taking partial derivatives does not matter.

As in the case of one variable, the Taylor expansion can then be proved by integration by parts; in two variables, for a C^n function f and z = (x, y),
:f(z+(h,k)) = \sum_{a+b \le n-1} \frac{h^a k^b}{a! \, b!} \, \partial_x^a \partial_y^b f(z) + n \int_0^1 (1-t)^{n-1} \sum_{a+b = n} \frac{h^a k^b}{a! \, b!} \, \partial_x^a \partial_y^b f(z+t(h,k)) \, dt.
Taylor's formula has an effect of dividing a function by variables, which can be illustrated by the next typical theoretical use of the formula.

Example: Let T : \mathcal{S} \to \mathcal{S} be a linear map between the vector space \mathcal{S} of smooth functions on \mathbb{R}^n with rapidly decreasing derivatives; i.e., \sup |x^{\beta} \partial^{\alpha} \varphi| < \infty for any multi-indices \alpha, \beta. (The space \mathcal{S} is called the Schwartz space.) For each \varphi in \mathcal{S}, Taylor's formula implies we can write:
:\varphi - \psi \varphi(y) = \sum_{j=1}^n (x_j - y_j) \varphi_j
with \varphi_j \in \mathcal{S}, where \psi is a smooth function with compact support and \psi(y) = 1. Now, assume T commutes with coordinates; i.e., T(x_j \varphi) = x_j T\varphi. Then
:T\varphi - \varphi(y) T\psi = \sum_{j=1}^n (x_j - y_j) T\varphi_j.
Evaluating the above at y, we get T\varphi(y) = \varphi(y) T\psi(y). In other words, T is multiplication by some function m; i.e., T\varphi = m \varphi. Now, assume further that T commutes with partial differentiations. We then easily see that m is a constant; that is, T is multiplication by a constant.

(Aside: the above discussion ''almost'' proves the Fourier inversion formula. Indeed, let F, R : \mathcal{S} \to \mathcal{S} be the Fourier transform and the reflection; i.e., (R \varphi)(x) = \varphi(-x). Then, dealing directly with the integral that is involved, one can see T = RF^2 commutes with coordinates and partial differentiations; hence, T is multiplication by a constant. This is ''almost'' a proof since one still has to compute this constant.)

A partial converse to the Taylor formula also holds; see Borel's lemma and the Whitney extension theorem.
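The limit characterization f''(x)(u, v) = \lim \Delta_{tv}\Delta_{su} f(x)/(st), and with it the symmetry of second derivatives, can be observed numerically. A minimal sketch (the test function is my own choice) compares the iterated difference quotient with the analytically computed mixed partial:

```python
# Approximate the mixed second partial of f(x, y) = x^3 * y^2 + sin(x*y)
# by an iterated difference quotient (Delta_v Delta_u f)/(h*h) and compare
# with the analytic value; note the quotient is symmetric in u, v exactly,
# mirroring the symmetry of second derivatives.
import math

def f(x, y):
    return x**3 * y**2 + math.sin(x * y)

def d2_mixed(f, x, y, h):
    # Delta_{h e_2} Delta_{h e_1} f(x, y) / h^2
    return (f(x + h, y + h) - f(x + h, y) - f(x, y + h) + f(x, y)) / (h * h)

x, y, h = 0.7, -0.4, 1e-4
numeric = d2_mixed(f, x, y, h)
# f_x = 3x^2 y^2 + y cos(xy), so f_xy = 6x^2 y + cos(xy) - x y sin(xy)
analytic = 6 * x**2 * y + math.cos(x * y) - x * y * math.sin(x * y)
print(numeric, analytic)
```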


Inverse function theorem and submersion theorem

The inverse function theorem says: if f : X \to \mathbb{R}^n is a C^k map (k \ge 1) on an open subset X \subset \mathbb{R}^n and the derivative f'(x) is invertible at a point x, then f restricts to a C^k-diffeomorphism from some open neighborhood of x onto an open neighborhood of f(x). Here, a C^k-map with a C^k-inverse is called a C^k-diffeomorphism; thus, the theorem says that a map f satisfying the hypothesis at a point x is a diffeomorphism near x, f(x).

The implicit function theorem says: given a map f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^m, if f(a, b) = 0, f is C^k in a neighborhood of (a, b) and the derivative of y \mapsto f(a, y) at b is invertible, then there exists a differentiable map g : U \to V for some neighborhoods U, V of a, b such that f(x, g(x)) = 0. The theorem follows from the inverse function theorem. Another consequence is the submersion theorem.
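The implicit function theorem can be illustrated numerically. In this sketch (the choice f(x, y) = x^2 + y^2 - 1 near (0, 1) is my own example), the solution y = g(x) = \sqrt{1 - x^2} satisfies the implicit-differentiation formula g'(x) = -f_x / f_y:

```python
# Implicit function theorem check for f(x, y) = x^2 + y^2 - 1 near (0, 1):
# y = g(x) = sqrt(1 - x^2) solves f(x, g(x)) = 0, and g'(x) = -f_x / f_y.
import math

def g(x):
    return math.sqrt(1 - x**2)

x = 0.5
h = 1e-6
numeric = (g(x + h) - g(x - h)) / (2 * h)   # finite-difference g'(x)
fx, fy = 2 * x, 2 * g(x)                    # partials of f at (x, g(x))
implicit = -fx / fy                         # = -x / sqrt(1 - x^2)
print(numeric, implicit)
```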


Integrable functions on Euclidean spaces

A partition of an interval [a, b] is a finite sequence a = t_0 \le t_1 \le \cdots \le t_k = b. A partition P of a rectangle D (a product of intervals) in \mathbb{R}^n then consists of partitions of the sides of D; i.e., if D = \prod_1^n [a_i, b_i], then P consists of P_1, \dots, P_n such that P_i is a partition of [a_i, b_i].

Given a function f on D, we then define the upper Riemann sum of it as:
:U(f, P) = \sum_{Q} (\sup_Q f) \operatorname{vol}(Q)
where
*Q is a partition element of P; i.e., Q = \prod_{i=1}^n [t_{i, j_i}, t_{i, j_i + 1}] for some indices j_i, when P_i : a_i = t_{i, 0} \le \cdots \le t_{i, k_i} = b_i is a partition of [a_i, b_i].
*The volume \operatorname{vol}(Q) of Q is the usual Euclidean volume; i.e., \operatorname{vol}(Q) = \prod_{i=1}^n (t_{i, j_i + 1} - t_{i, j_i}).
The lower Riemann sum L(f, P) of f is then defined by replacing \sup by \inf. Finally, the function f is called integrable if it is bounded and \sup \{ L(f, P) \mid P \} = \inf \{ U(f, P) \mid P \}. In that case, the common value is denoted \int_D f \, dx.

A subset of \mathbb{R}^n is said to have measure zero if, for each \epsilon > 0, there are some, possibly infinitely many, rectangles D_1, D_2, \dots whose union contains the set and \sum_i \operatorname{vol}(D_i) < \epsilon. A key theorem is Lebesgue's criterion: a bounded function on a rectangle is integrable if and only if its set of discontinuities has measure zero. The next basic result, Fubini's theorem, allows us to compute the integral of a function as the iteration of the integrals of the function in one variable; in particular, the order of integrations can be changed.

Finally, if M \subset \mathbb{R}^n is a bounded open subset and f a function on M, then we define \int_M f \, dx := \int_D \chi_M f \, dx, where D is a closed rectangle containing M and \chi_M is the characteristic function of M; i.e., \chi_M(x) = 1 if x \in M and = 0 if x \not\in M, provided \chi_M f is integrable.
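The upper and lower Riemann sums can be computed directly. In this sketch (my own example), f(x, y) = xy on D = [0, 1]^2, whose integral is 1/4, and U(f, P) and L(f, P) squeeze that value over a uniform partition:

```python
# Upper and lower Riemann sums of f(x, y) = x*y on the rectangle [0,1] x [0,1]
# over a uniform partition into k x k subrectangles.  The exact integral is 1/4.

def f(x, y):
    return x * y

def riemann_sums(k):
    upper = lower = 0.0
    step = 1.0 / k
    for i in range(k):
        for j in range(k):
            # corner values on Q = [i/k, (i+1)/k] x [j/k, (j+1)/k];
            # f is increasing in each variable here, so sup/inf occur at corners
            vals = [f((i + a) * step, (j + b) * step)
                    for a in (0, 1) for b in (0, 1)]
            vol = step * step
            upper += max(vals) * vol
            lower += min(vals) * vol
    return upper, lower

u, l = riemann_sums(100)
print(l, u)    # l <= 0.25 <= u, both close to 0.25
```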


Surface integral

If a bounded surface M in \mathbb{R}^3 is parametrized by \mathbf{r} = \mathbf{r}(u, v) with domain D, then the surface integral of a measurable function F on M is defined and denoted as:
:\int_M F \, dS := \iint_D (F \circ \mathbf{r}) \, |\mathbf{r}_u \times \mathbf{r}_v| \, du \, dv.
If F : M \to \mathbb{R}^3 is vector-valued, then we define
:\int_M F \cdot dS := \int_M (F \cdot \mathbf{n}) \, dS
where \mathbf{n} is an outward unit normal vector to M. Since \mathbf{n} = \dfrac{\mathbf{r}_u \times \mathbf{r}_v}{|\mathbf{r}_u \times \mathbf{r}_v|}, we have:
:\int_M F \cdot dS = \iint_D (F \circ \mathbf{r}) \cdot (\mathbf{r}_u \times \mathbf{r}_v) \, du \, dv = \iint_D \det(F \circ \mathbf{r}, \mathbf{r}_u, \mathbf{r}_v) \, du \, dv.
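As a numerical illustration (my own example), the flux of F(x, y, z) = (x, y, z) through a sphere of radius a, computed from the double-integral formula above with the spherical parametrization, equals 4\pi a^3:

```python
# Flux of F(x,y,z) = (x,y,z) through a sphere of radius a, via
# the formula  integral over D of (F o r) . (r_u x r_v) du dv,
# with r(u, v) = (a sin u cos v, a sin u sin v, a cos u), 0<=u<=pi, 0<=v<=2pi.
import math

a = 2.0

def r(u, v):
    return (a * math.sin(u) * math.cos(v),
            a * math.sin(u) * math.sin(v),
            a * math.cos(u))

def cross(p, q):
    return (p[1]*q[2] - p[2]*q[1], p[2]*q[0] - p[0]*q[2], p[0]*q[1] - p[1]*q[0])

def partial(fun, u, v, which, h=1e-6):
    if which == 'u':
        p, q = fun(u + h, v), fun(u - h, v)
    else:
        p, q = fun(u, v + h), fun(u, v - h)
    return tuple((pi - qi) / (2 * h) for pi, qi in zip(p, q))

n_u, n_v = 200, 200
du, dv = math.pi / n_u, 2 * math.pi / n_v
flux = 0.0
for i in range(n_u):
    for j in range(n_v):
        u, v = (i + 0.5) * du, (j + 0.5) * dv     # midpoint rule
        normal = cross(partial(r, u, v, 'u'), partial(r, u, v, 'v'))
        F = r(u, v)                                # F(x,y,z) = (x,y,z) on the sphere
        flux += sum(Fi * ni for Fi, ni in zip(F, normal)) * du * dv

print(flux, 4 * math.pi * a**3)
```

For this parametrization \mathbf{r}_u \times \mathbf{r}_v = a \sin u \cdot \mathbf{r} points outward, so no sign flip is needed.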


Vector analysis


Tangent vectors and vector fields

Let c : [0, 1] \to \mathbb{R}^n be a differentiable curve. Then the tangent vector to the curve c at t is the vector v at the point c(t) whose components are given as:
:v = (c_1'(t), \dots, c_n'(t)).
For example, if c(t) = (a \cos(t), a \sin(t), bt), a > 0, b > 0, is a helix, then the tangent vector at t is:
:c'(t) = (-a \sin(t), a \cos(t), b).
It corresponds to the intuition that a point on the helix moves up at a constant speed.

If M \subset \mathbb{R}^n is a differentiable curve or surface, then the tangent space to M at a point p is the set of all tangent vectors at p to the differentiable curves c : [0, 1] \to M with c(0) = p.

A vector field X is an assignment to each point p in M of a tangent vector X_p to M at p such that the assignment varies smoothly.
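The helix tangent vector can be verified with a finite difference; a small sketch using the curve from the text, with a = 2, b = 1 as sample values of my own choosing:

```python
# Finite-difference check of the helix tangent vector
# c(t) = (a cos t, a sin t, b t)  =>  c'(t) = (-a sin t, a cos t, b).
import math

a, b = 2.0, 1.0

def c(t):
    return (a * math.cos(t), a * math.sin(t), b * t)

t, h = 0.8, 1e-6
numeric = tuple((p - q) / (2 * h) for p, q in zip(c(t + h), c(t - h)))
exact = (-a * math.sin(t), a * math.cos(t), b)
print(numeric, exact)

# The speed |c'(t)| = sqrt(a^2 + b^2) is independent of t, matching the
# intuition that a point on the helix moves at constant speed.
speed = math.sqrt(sum(x * x for x in exact))
print(speed, math.sqrt(a**2 + b**2))
```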


Differential forms

The dual notion of a vector field is a differential form. Given an open subset M in \mathbb{R}^n, by definition, a differential 1-form (often just 1-form) \omega is an assignment to each point p in M of a linear functional \omega_p on the tangent space T_p M to M at p such that the assignment varies smoothly. For a (real or complex-valued) smooth function f, define the 1-form df by: for a tangent vector v at p,
:df_p(v) = v(f)
where v(f) denotes the directional derivative of f in the direction v at p. For example, if x_i is the i-th coordinate function, then dx_{i, p}(v) = v_i; i.e., the dx_{i, p} are the dual basis to the standard basis on T_p M. Then every differential 1-form \omega can be written uniquely as
:\omega = f_1 \, dx_1 + \cdots + f_n \, dx_n
for some smooth functions f_1, \dots, f_n on M (since, for every point p, the linear functional \omega_p is a unique linear combination of the dx_i over the real numbers).

More generally, a differential k-form is an assignment to each point p in M of a vector \omega_p in the k-th exterior power \bigwedge^k T^*_p M of the dual space T^*_p M of T_p M such that the assignment varies smoothly. In particular, a 0-form is the same as a smooth function. Also, any k-form \omega can be written uniquely as:
:\omega = \sum_{i_1 < \cdots < i_k} f_{i_1 \cdots i_k} \, dx_{i_1} \wedge \cdots \wedge dx_{i_k}
for some smooth functions f_{i_1 \cdots i_k}.

Like a smooth function, we can differentiate and integrate differential forms. If f is a smooth function, then df can be written as:
:df = \sum_{i=1}^n \frac{\partial f}{\partial x_i} \, dx_i
since, for v = \partial / \partial x_j |_p, we have df_p(v) = \frac{\partial f}{\partial x_j}(p) = \sum_{i=1}^n \frac{\partial f}{\partial x_i}(p) \, dx_i(v). Note that, in the above expression, the left-hand side (whence the right-hand side) is independent of the coordinates x_1, \dots, x_n; this property is called the invariance of the differential.

The operation d is called the exterior derivative, and it extends to arbitrary differential forms inductively by the requirement (Leibniz rule)
:d(\alpha \wedge \beta) = d \alpha \wedge \beta + (-1)^p \alpha \wedge d \beta
where \alpha, \beta are a p-form and a q-form.

The exterior derivative has the important property that d \circ d = 0; that is, for any differential form \omega, the exterior derivative of d\omega is zero: d(d\omega) = 0. This property is a consequence of the symmetry of second derivatives (mixed partials are equal).
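As a worked example (my own, not from the text) of the Leibniz rule and the coordinate formula for d, take the 1-form \omega = -y\,dx + x\,dy on \mathbb{R}^2:

```latex
% Exterior derivative of a 1-form on R^2, computed term by term:
d\omega = d(-y\,dx + x\,dy)
        = d(-y) \wedge dx + d(x) \wedge dy
        = (-dy) \wedge dx + dx \wedge dy
        = dx \wedge dy + dx \wedge dy
        = 2\, dx \wedge dy .
% Applying d once more gives d(2\,dx \wedge dy) = 0, illustrating
% d \circ d = 0 (here trivially, since the coefficient 2 is constant
% and dx \wedge dy is already a top-degree form on R^2).
```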


Boundary and orientation

A circle can be oriented clockwise or counterclockwise. Mathematically, we say that a subset M of \mathbb{R}^n is oriented if there is a consistent choice of normal vectors to M that varies continuously. For example, a circle or, more generally, an n-sphere can be oriented; i.e., it is orientable. On the other hand, a Möbius strip (a surface obtained by identifying two opposite sides of a rectangle in a twisted way) cannot be oriented: if we start with a normal vector and travel around the strip, the normal vector at the end will point in the opposite direction. In fact, a surface is orientable precisely when it admits a nowhere-vanishing top-degree form (a volume form); this characterization is useful because it allows us to give an orientation by giving a volume form.


Integration of differential forms

If \omega = f \, dx_1 \wedge \cdots \wedge dx_n is a differential n-form on an open subset M in \mathbb{R}^n (any n-form is of that form), then its integration over M with the standard orientation is defined as:
:\int_M \omega = \int_M f \, dx_1 \cdots dx_n.
If M is given the orientation opposite to the standard one, then \int_M \omega is defined as the negative of the right-hand side.

Then we have the fundamental formula relating the exterior derivative and integration (Stokes' formula): for an (n-1)-form \omega with compact support on a suitably nice oriented n-dimensional subset M with boundary \partial M,
:\int_M d\omega = \int_{\partial M} \omega.
Here is a sketch of a proof of the formula. If f is a smooth function on \mathbb{R}^n with compact support, then we have:
:\int d(f \omega) = 0
(since, by the fundamental theorem of calculus, the above can be evaluated on the boundaries of a set containing the support). On the other hand,
:\int d(f \omega) = \int df \wedge \omega + \int f \, d\omega.
Let f approach the characteristic function of M. Then the second term on the right goes to \int_M d \omega, while the first goes to -\int_{\partial M} \omega, by an argument similar to the proof of the fundamental theorem of calculus. \square

The formula generalizes the fundamental theorem of calculus as well as Stokes' theorem in multivariable calculus. Indeed, if M = [a, b] is an interval and \omega = f, then d\omega = f' \, dx and the formula says:
:\int_M f' \, dx = f(b) - f(a).
Similarly, if M is an oriented bounded surface in \mathbb{R}^3 and \omega = f\,dx + g\,dy + h\,dz, then d(f\,dx) = df \wedge dx = \frac{\partial f}{\partial y} \, dy \wedge dx + \frac{\partial f}{\partial z} \, dz \wedge dx, and similarly for d(g\,dy) and d(h\,dz). Collecting the terms, we thus get:
:d\omega = \left( \frac{\partial h}{\partial y} - \frac{\partial g}{\partial z} \right) dy \wedge dz + \left( \frac{\partial f}{\partial z} - \frac{\partial h}{\partial x} \right) dz \wedge dx + \left( \frac{\partial g}{\partial x} - \frac{\partial f}{\partial y} \right) dx \wedge dy.
Then, from the definition of the integration of \omega, we have \int_M d \omega = \int_M (\nabla \times F) \cdot dS, where F = (f, g, h) is the vector-valued function and \nabla = \left( \frac{\partial}{\partial x}, \frac{\partial}{\partial y}, \frac{\partial}{\partial z} \right). Hence, Stokes' formula becomes
:\int_M (\nabla \times F) \cdot dS = \int_{\partial M} (f\,dx + g\,dy + h\,dz),
which is the usual form of Stokes' theorem on surfaces. Green's theorem is also a special case of Stokes' formula.

Stokes' formula also yields a general version of Cauchy's integral formula. To state and prove it, for the complex variable z = x + iy and the conjugate \bar z, let us introduce the operators
:\frac{\partial}{\partial z} = \frac{1}{2}\left( \frac{\partial}{\partial x} - i \frac{\partial}{\partial y} \right), \quad \frac{\partial}{\partial \bar z} = \frac{1}{2}\left( \frac{\partial}{\partial x} + i \frac{\partial}{\partial y} \right).
In these notations, a function f is holomorphic (complex-analytic) if and only if \frac{\partial f}{\partial \bar z} = 0 (the Cauchy–Riemann equations). Also, we have:
:df = \frac{\partial f}{\partial z} dz + \frac{\partial f}{\partial \bar z} d \bar z.
Let D be a disk with center z_0 and, for \epsilon > 0, let D_{\epsilon} = \{ z \in D : |z - z_0| > \epsilon \} be the disk with a small disk around z_0 removed. Since 1/(z - z_0) is holomorphic on D_{\epsilon}, we have:
:d \left( \frac{f}{z - z_0} \, dz \right) = \frac{\partial f}{\partial \bar z} \frac{d \bar z \wedge dz}{z - z_0}.
By Stokes' formula,
:\int_{D_{\epsilon}} \frac{\partial f}{\partial \bar z} \frac{d \bar z \wedge dz}{z - z_0} = \left( \int_{\partial D} - \int_{|z - z_0| = \epsilon} \right) \frac{f}{z - z_0} \, dz.
Letting \epsilon \to 0, and noting that \int_{|z - z_0| = \epsilon} \frac{f}{z - z_0} \, dz \to 2\pi i \, f(z_0), we then get:
:2\pi i \, f(z_0) = \int_{\partial D} \frac{f}{z - z_0} \, dz - \int_{D} \frac{\partial f}{\partial \bar z} \frac{d \bar z \wedge dz}{z - z_0}.
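Green's theorem, the planar case of Stokes' formula, can be checked numerically. A sketch with the field (f, g) = (-y, x) on the unit disk (my own example), where both sides equal 2\pi:

```python
# Green's theorem check for f dx + g dy with (f, g) = (-y, x) on the unit disk:
#   boundary:  line integral of (f dx + g dy) over the unit circle
#   interior:  double integral of (dg/dx - df/dy) = 2 over the disk = 2 * pi
import math

n = 100000
# Line integral over c(t) = (cos t, sin t): (-y)x' + x y' = sin^2 + cos^2 = 1
line = 0.0
for i in range(n):
    t = 2 * math.pi * (i + 0.5) / n
    x, y = math.cos(t), math.sin(t)
    dx, dy = -math.sin(t), math.cos(t)        # c'(t)
    line += ((-y) * dx + x * dy) * (2 * math.pi / n)

# Double integral of the constant curl 2 over the disk, midpoint rule on a grid
k = 400
step = 2.0 / k
interior = 0.0
for i in range(k):
    for j in range(k):
        x = -1 + (i + 0.5) * step
        y = -1 + (j + 0.5) * step
        if x * x + y * y <= 1:
            interior += 2 * step * step

print(line, interior, 2 * math.pi)   # all approximately 6.2832
```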


Winding numbers and Poincaré lemma

A differential form \omega is called closed if d\omega = 0 and is called exact if \omega = d\eta for some differential form \eta (often called a potential). Since d \circ d = 0, an exact form is closed. But the converse does not hold in general: there may be non-exact closed forms. A classic example of such a form is:
:\omega = \frac{-y}{x^2 + y^2} \, dx + \frac{x}{x^2 + y^2} \, dy,
which is a differential form on \mathbb{R}^2 - 0. Suppose we switch to polar coordinates: x = r \cos \theta, y = r \sin \theta, where r = \sqrt{x^2 + y^2}. Then
:\omega = r^{-2}(-r \sin \theta \, dx + r \cos \theta \, dy) = d \theta.
This does not show that \omega is exact: the trouble is that \theta is not a well-defined continuous function on \mathbb{R}^2 - 0. Since any function f on \mathbb{R}^2 - 0 with df = \omega would locally differ from \theta by a constant, this means that \omega is not exact. The calculation, however, shows that \omega is exact, for example, on \mathbb{R}^2 - \{ x = 0 \}, since we can take \theta = \arctan(y/x) there.

There is a result (the Poincaré lemma) that gives a condition guaranteeing that closed forms are exact. To state it, we need some notions from topology. Given two continuous maps f, g : X \to Y between subsets of \mathbb{R}^m, \mathbb{R}^n (or more generally topological spaces), a homotopy from f to g is a continuous function H : X \times [0, 1] \to Y such that f(x) = H(x, 0) and g(x) = H(x, 1). Intuitively, a homotopy is a continuous variation of one function into another. A loop in a set X is a curve whose starting point coincides with its end point; i.e., a curve c : [0, 1] \to X such that c(0) = c(1). Then a subset of \mathbb{R}^n is called simply connected if every loop is homotopic to a constant function. A typical example of a simply connected set is a disk D = \{ (x, y) \in \mathbb{R}^2 \mid x^2 + y^2 < 1 \}. Indeed, given a loop c : [0, 1] \to D, we have the homotopy H : [0, 1]^2 \to D, \, H(x, t) = (1-t) c(x) + t c(0) from c to the constant function c(0). A punctured disk, on the other hand, is not simply connected. The Poincaré lemma then says, in particular, that every closed 1-form on a simply connected open subset of \mathbb{R}^n is exact.
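The failure of exactness of \omega = (-y\,dx + x\,dy)/(x^2+y^2) is detected by integrating it over loops. The sketch below computes (1/2\pi) \oint_c \omega, the winding number of a loop c around the origin, for two sample circles (my own choices):

```python
# Winding number via the closed-but-not-exact form
#   w = (-y dx + x dy) / (x^2 + y^2):
# (1/2pi) * (integral of w over a loop c) counts how many times c winds
# around the origin.
import math

def winding_number(c, dc, n=100000):
    total = 0.0
    for i in range(n):
        t = (i + 0.5) / n
        x, y = c(t)
        dx, dy = dc(t)
        total += (-y * dx + x * dy) / (x * x + y * y) / n
    return total / (2 * math.pi)

# Unit circle traversed once counterclockwise: winding number 1
w1 = winding_number(lambda t: (math.cos(2*math.pi*t), math.sin(2*math.pi*t)),
                    lambda t: (-2*math.pi*math.sin(2*math.pi*t),
                               2*math.pi*math.cos(2*math.pi*t)))

# A circle not enclosing the origin (center (3, 0)): winding number 0
w0 = winding_number(lambda t: (3 + math.cos(2*math.pi*t), math.sin(2*math.pi*t)),
                    lambda t: (-2*math.pi*math.sin(2*math.pi*t),
                               2*math.pi*math.cos(2*math.pi*t)))

print(round(w1), round(w0))
```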


Geometry of curves and surfaces


Moving frame

Vector fields E_1, E_2, E_3 on \mathbb{R}^3 are called a frame field if they are orthonormal at each point; i.e., E_i \cdot E_j = \delta_{ij} at each point. The basic example is the standard frame U_i; i.e., U_i(x) is the i-th standard basis vector for each point x in \mathbb{R}^3. Another example is the cylindrical frame
:E_1 = \cos \theta \, U_1 + \sin \theta \, U_2, \quad E_2 = -\sin \theta \, U_1 + \cos \theta \, U_2, \quad E_3 = U_3.
For the study of the geometry of a curve, the important frame to use is the Frenet frame T, N, B on a unit-speed curve \beta : I \to \mathbb{R}^3, given as:
:T = \beta', \quad N = \frac{T'}{|T'|}, \quad B = T \times N.


The Gauss–Bonnet theorem

The Gauss–Bonnet theorem relates the ''topology'' of a surface to its geometry: for a compact oriented surface M,
:\int_M K \, dA = 2\pi \chi(M),
where K is the Gaussian curvature of M and \chi(M) is the Euler characteristic of M.


Calculus of variations


Method of Lagrange multiplier

The set g^{-1}(0) is usually called a constraint. Example: Suppose we want to find the minimum distance between the circle x^2 + y^2 = 1 and the line x + y = 4. That means that we want to minimize the function f(x, y, u, v) = (x - u)^2 + (y - v)^2, the square of the distance between a point (x, y) on the circle and a point (u, v) on the line, under the constraint g = (x^2 + y^2 - 1, u + v - 4). We have:
:\nabla f = (2(x - u), 2(y - v), -2(x - u), -2(y - v)).
:\nabla g_1 = (2x, 2y, 0, 0), \nabla g_2 = (0, 0, 1, 1).
Since the Jacobian matrix of g has rank 2 everywhere on g^{-1}(0), the method of Lagrange multipliers gives:
:x - u = \lambda_1 x, \, y - v = \lambda_1 y, \, 2(x-u) = -\lambda_2, \, 2(y-v) = -\lambda_2.
If \lambda_1 = 0, then x = u, y = v, which is not possible. Thus, \lambda_1 \ne 0 and
:x = \frac{u}{1 - \lambda_1}, \, y = \frac{v}{1 - \lambda_1}.
From this, it easily follows that x = y = 1/\sqrt{2} and u = v = 2. Hence, the minimum distance is 2\sqrt{2} - 1 (as a minimum distance clearly exists).

Here is an application to linear algebra. Let V be a finite-dimensional real vector space and T : V \to V a self-adjoint operator. We shall show V has a basis consisting of eigenvectors of T (i.e., T is diagonalizable) by induction on the dimension of V. Choosing a basis on V, we can identify V = \mathbb^n, and T is represented by the matrix [a_{ij}]. Consider the function f(x) = (Tx, x), where the bracket means the inner product. Then
:\nabla f = 2(\sum_i a_{1i} x_i, \dots, \sum_i a_{ni} x_i).
On the other hand, for g = \sum x_i^2 - 1, since g^{-1}(0) is compact, f attains a maximum or minimum at a point u in g^{-1}(0). Since \nabla g = 2(x_1, \dots, x_n), by the method of Lagrange multipliers, we find a real number \lambda such that 2 \sum_i a_{ji} u_i = 2 \lambda u_j, \, 1 \le j \le n. But that means Tu = \lambda u. By the inductive hypothesis, the self-adjoint operator T : W \to W, where W is the orthogonal complement to u, has a basis consisting of eigenvectors. Hence, we are done. \square
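The answer 2\sqrt{2} - 1 from the circle-and-line example can be sanity-checked numerically; a minimal sketch (for a point on the circle, the nearest point of the line is the foot of the perpendicular, so the perpendicular-distance formula applies):

```python
import math

def distance_to_line(x, y):
    # perpendicular distance from (x, y) to the line x + y = 4
    return abs(x + y - 4.0) / math.sqrt(2.0)

def min_circle_to_line(n=10000):
    """Minimize the distance over sample points (cos t, sin t) on the unit circle."""
    return min(
        distance_to_line(math.cos(2 * math.pi * i / n), math.sin(2 * math.pi * i / n))
        for i in range(n)
    )
```

The minimum is attained near t = pi/4, i.e., at (1/sqrt(2), 1/sqrt(2)), in agreement with the Lagrange-multiplier computation.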


Weak derivatives

Up to measure-zero sets, two functions can be determined to be equal or not by means of integration against other functions (called test functions); this is the content of what is sometimes called the fundamental lemma of calculus of variations. Given a continuous function f, by the lemma, a continuously differentiable function u is such that \frac{\partial u}{\partial x_i} = f if and only if
:\int \frac{\partial u}{\partial x_i} \varphi \, dx = \int f \varphi \, dx
for every \varphi \in C_c^{\infty}(M). But, by integration by parts, the partial derivative on the left-hand side of u can be moved to that of \varphi; i.e.,
:-\int u \frac{\partial \varphi}{\partial x_i} \, dx = \int f \varphi \, dx
where there is no boundary term since \varphi has compact support. Now the key point is that this expression makes sense even if u is not necessarily differentiable and thus can be used to give sense to a derivative of such a function. Note each locally integrable function u defines the linear functional \varphi \mapsto \int u \varphi \, dx on C_c^{\infty}(M) and, moreover, each locally integrable function can be identified with such a linear functional, because of the earlier lemma. Hence, quite generally, if u is a linear functional on C_c^{\infty}(M), then we define \frac{\partial u}{\partial x_i} to be the linear functional \varphi \mapsto -\left \langle u, \frac{\partial \varphi}{\partial x_i} \right\rangle where the bracket means \langle \alpha, \varphi \rangle = \alpha(\varphi). It is then called the weak derivative of u with respect to x_i. If u is continuously differentiable, then the weak derivative of it coincides with the usual one; i.e., the linear functional \frac{\partial u}{\partial x_i} is the same as the linear functional determined by the usual partial derivative of u with respect to x_i. A usual derivative is often then called a classical derivative. When a linear functional on C_c^{\infty}(M) is continuous with respect to a certain topology on C_c^{\infty}(M), such a linear functional is called a distribution, an example of a generalized function.

A classic example of a weak derivative is that of the Heaviside function H, the characteristic function on the interval (0, \infty). For every test function \varphi, we have:
:\langle H', \varphi \rangle = -\int_0^{\infty} \varphi' \, dx = \varphi(0).
Let \delta_a denote the linear functional \varphi \mapsto \varphi(a), called the Dirac delta function (although not exactly a function). Then the above can be written as:
:H' = \delta_0.
Cauchy's integral formula has a similar interpretation in terms of weak derivatives. For the complex variable z = x + iy, let E_{z_0}(z) = \frac{1}{\pi (z - z_0)}. For a test function \varphi, if the disk |z - z_0| \le r contains the support of \varphi, by Cauchy's integral formula, we have:
:\varphi(z_0) = \int \frac{\partial \varphi}{\partial \bar z} \frac{dz \wedge d \bar z}{2\pi i (z - z_0)}.
Since dz \wedge d \bar z = -2i \, dx \wedge dy, this means:
:\varphi(z_0) = -\int E_{z_0} \frac{\partial \varphi}{\partial \bar z} \, dx dy = \left\langle \frac{\partial E_{z_0}}{\partial \bar z}, \varphi \right \rangle,
or
:\frac{\partial E_{z_0}}{\partial \bar z} = \delta_{z_0}.
In general, a generalized function is called a fundamental solution for a linear partial differential operator if the application of the operator to it is the Dirac delta. Hence, the above says E_{z_0} is the fundamental solution for the differential operator \partial/\partial \bar z.


Hamilton–Jacobi theory


Calculus on manifolds


Definition of a manifold

:''This section requires some background in general topology.''

A manifold is a Hausdorff topological space that is locally modeled by a Euclidean space. By definition, an atlas of a topological space M is a set of maps \varphi_i : U_i \to \mathbb^n, called charts, such that
*U_i are an open cover of M; i.e., each U_i is open and M = \cup_i U_i,
*\varphi_i : U_i \to \varphi_i(U_i) is a homeomorphism, and
*\varphi_j \circ \varphi_i^{-1} : \varphi_i(U_i \cap U_j) \to \varphi_j(U_i \cap U_j) is smooth; thus a diffeomorphism by the inverse function theorem.
By definition, a manifold is a second-countable Hausdorff topological space with a maximal atlas (called a differentiable structure); "maximal" means that it is not contained in a strictly larger atlas. The dimension of the manifold M is the dimension of the model Euclidean space \mathbb^n; namely, n, and a manifold is called an ''n''-manifold when it has dimension ''n''. A function f on a manifold M is said to be smooth if f|_U \circ \varphi^{-1} is smooth on \varphi(U) for each chart \varphi : U \to \mathbb^n in the differentiable structure. A manifold is paracompact; this implies that it admits a partition of unity subordinate to a given open cover.

If \mathbb^n is replaced by an upper half-space \mathbb{H}^n, then we get the notion of a manifold-with-boundary. The set of points that map to the boundary of \mathbb{H}^n under charts is denoted by \partial M and is called the boundary of M. This boundary may not be the topological boundary of M. Since the interior of \mathbb{H}^n is diffeomorphic to \mathbb^n, a manifold is a manifold-with-boundary with empty boundary.

The next theorem furnishes many examples of manifolds. For example, for g(x) = x_1^2 + \cdots + x_{n+1}^2 - 1, the derivative g'(x) = \begin{bmatrix}2 x_1 & 2 x_2 & \cdots & 2 x_{n+1}\end{bmatrix} has rank one at every point p in g^{-1}(0). Hence, the ''n''-sphere g^{-1}(0) is an ''n''-manifold. The theorem is proved as a corollary of the inverse function theorem.

Many familiar manifolds are subsets of \mathbb^n. The next theoretically important result says that there is no other kind of manifold. An immersion is a smooth map whose differential is injective.
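The chart and transition-map conditions above can be illustrated on the circle, a 1-manifold, with the standard stereographic charts (an illustrative sketch; the specific charts are a standard choice, not from the original text):

```python
import math

def chart_north(p):
    # stereographic projection from the north pole (0, 1); defined for y != 1
    x, y = p
    return x / (1 - y)

def chart_south(p):
    # stereographic projection from the south pole (0, -1); defined for y != -1
    x, y = p
    return x / (1 + y)

def chart_north_inverse(t):
    # inverse of the north chart, landing on the unit circle
    d = 1 + t * t
    return (2 * t / d, (t * t - 1) / d)

def transition(t):
    # on the overlap, chart_south o chart_north^{-1} is t -> 1/t,
    # which is smooth away from t = 0
    return chart_south(chart_north_inverse(t))
```

The transition map being a diffeomorphism on the overlap is exactly the compatibility condition in the definition of an atlas.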
An embedding is an immersion that is a homeomorphism (thus a diffeomorphism) onto its image. The proof that a manifold can be embedded into \mathbb^N for ''some'' N is considerably easier and can be readily given here. It is known that a manifold has a finite atlas \{ \varphi_i : U_i \to \mathbb^n \mid 1 \le i \le r \}. Let \lambda_i be smooth functions such that \operatorname{supp}(\lambda_i) \subset U_i and \{ \lambda_i = 1 \} cover M (e.g., a partition of unity). Consider the map
:f = (\lambda_1 \varphi_1, \dots, \lambda_r \varphi_r, \lambda_1, \dots, \lambda_r) : M \to \mathbb^{rn + r}
It is easy to see that f is an injective immersion. It may not be an embedding; to fix that, we shall use:
:(f, g) : M \to \mathbb^{rn + r + 1}
where g is a smooth proper map. The existence of a smooth proper map is a consequence of a partition of unity. See the references for the rest of the proof in the case of an immersion. \square

Nash's embedding theorem says that, if M is equipped with a Riemannian metric, then the embedding can be taken to be isometric at the expense of increasing 2k; for this, see this T. Tao's blog.


Tubular neighborhood and transversality

A technically important result is: This can be proved by putting a Riemannian metric on the manifold M. Indeed, the choice of metric makes the normal bundle \nu_N a complementary bundle to TN; i.e., TM|_N is the direct sum of TN and \nu_N. Then, using the metric, we have the exponential map \exp : U \to V for some neighborhood U of N in the normal bundle \nu_N to some neighborhood V of N in M. The exponential map here may not be injective, but it is possible to make it injective (thus diffeomorphic) by shrinking U (see the references for details).


Integration on manifolds and distribution densities

The starting point for the topic of integration on manifolds is that there is no ''invariant way'' to integrate functions on manifolds. This may be obvious if we ask: what is an integration of functions on a finite-dimensional real vector space? (In contrast, there is an invariant way to do differentiation since, by definition, a manifold comes with a differentiable structure). There are several ways to introduce integration theory to manifolds:
*Integrate differential forms.
*Do integration against some measure.
*Equip a manifold with a Riemannian metric and do integration against such a metric.
For example, if a manifold is embedded into a Euclidean space \mathbb^n, then it acquires the Lebesgue measure by restriction from the ambient Euclidean space, and then the second approach works. The first approach is fine in many situations but it requires the manifold to be oriented (and there are non-orientable manifolds that are not pathological). The third approach generalizes, and that gives rise to the notion of a density.
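As an illustration of the third approach, integrating the area element sin θ dθ dφ induced on the unit sphere by the round metric recovers the area 4π (a minimal sketch with illustrative names):

```python
import math

def sphere_area(n_theta=400):
    """Area of the unit sphere by integrating the Riemannian area element.

    In spherical coordinates the metric induced from R^3 gives
    dA = sin(theta) dtheta dphi; the phi integral contributes a factor 2*pi.
    """
    dtheta = math.pi / n_theta
    # midpoint rule in theta over [0, pi]
    return 2 * math.pi * sum(
        math.sin((i + 0.5) * dtheta) * dtheta for i in range(n_theta)
    )
```

The computation only uses the metric, so it makes sense without choosing an orientation, in contrast with the differential-form approach.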


Generalizations


Extensions to infinite-dimensional normed spaces

Notions like differentiability extend to normed spaces.
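For instance, the definition of the derivative carries over; a sketch of the standard Fréchet-derivative condition on normed spaces, mirroring the one-variable definition in the introduction (this formulation is supplied here, not from the original text):

```latex
% f : V \to W is differentiable at x if there is a bounded linear map
% \lambda : V \to W such that
\lim_{h \to 0} \frac{\| f(x + h) - f(x) - \lambda h \|_W}{\| h \|_V} = 0.
```

Boundedness of \lambda is automatic in finite dimensions but must be imposed in the infinite-dimensional case.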


See also

* Differential geometry of surfaces
* Integration along fibers
* Lusin's theorem
* Density on a manifold


Notes


Citations


References

* {{cite book |title=Calculus on Manifolds: A Modern Approach to Classical Theorems of Advanced Calculus |last1=Spivak |first1=Michael |title-link=Calculus on Manifolds (book) |publisher=Benjamin Cummings |year=1965 |isbn=0-8053-9021-9 |location=San Francisco |author1-link=Michael Spivak}}