In mathematics, calculus on Euclidean space is a generalization of calculus of functions in one or several variables to calculus of functions on a Euclidean space <math>\mathbb{R}^n</math> as well as on a finite-dimensional real vector space. This calculus is also known as advanced calculus, especially in the United States. It is similar to multivariable calculus but is somewhat more sophisticated in that it uses linear algebra (or some functional analysis) more extensively and covers some concepts from differential geometry such as differential forms and Stokes' formula in terms of differential forms. This extensive use of linear algebra also allows a natural generalization of multivariable calculus to calculus on Banach spaces or topological vector spaces.
Calculus on Euclidean space is also a local model of calculus on manifolds, a theory of functions on manifolds.
Basic notions
Functions in one real variable
This section is a brief review of function theory in one-variable calculus.
A real-valued function <math>f : \mathbb{R} \to \mathbb{R}</math> is continuous at <math>a</math> if it is ''approximately constant'' near <math>a</math>; i.e.,
:<math>\lim_{h \to 0} (f(a + h) - f(a)) = 0.</math>
In contrast, the function <math>f</math> is differentiable at <math>a</math> if it is ''approximately linear'' near <math>a</math>; i.e., there is some real number <math>\lambda</math> such that
:<math>\lim_{h \to 0} \frac{f(a + h) - f(a) - \lambda h}{h} = 0.</math>
(For simplicity, suppose <math>f(a) = 0</math>. Then the above means that <math>f(a + h) = \lambda h + g(h)</math> where <math>g(h)</math> goes to 0 faster than ''h'' going to 0 and, in that sense, <math>f(a + h)</math> behaves like <math>\lambda h</math>.)
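The next snippet is a minimal numerical sketch of this definition, added here for illustration; the sample function <code>f</code> and the point <code>a</code> are arbitrary choices. It shows the difference quotient stabilizing near the limit <math>\lambda = f'(a)</math> as ''h'' shrinks.
<syntaxhighlight lang="python">
import math

def f(x):
    return math.sin(x)  # sample function; any differentiable f works

a = 0.3  # arbitrary point of differentiation

# Difference quotients (f(a+h) - f(a)) / h for shrinking h
# should stabilize near lambda = f'(a) = cos(0.3).
for h in [1e-1, 1e-3, 1e-5, 1e-7]:
    quotient = (f(a + h) - f(a)) / h
    print(h, quotient, abs(quotient - math.cos(a)))
</syntaxhighlight>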
The number <math>\lambda</math> depends on <math>a</math> and thus is denoted as <math>f'(a)</math>. If <math>f</math> is differentiable on an open interval <math>U</math> and if <math>f'</math> is a continuous function on <math>U</math>, then <math>f</math> is called a <math>C^1</math> function. More generally, <math>f</math> is called a <math>C^k</math> function if its derivative <math>f'</math> is a <math>C^{k-1}</math> function.
Taylor's theorem
states that a <math>C^k</math> function is precisely a function that can be approximated by a polynomial of degree ''k''.
If <math>f : U \to \mathbb{R}</math> is a <math>C^1</math> function and <math>f'(a) \ne 0</math> for some <math>a</math>, then either <math>f'(a) > 0</math> or <math>f'(a) < 0</math>; i.e., either <math>f</math> is strictly increasing or strictly decreasing in some open interval containing ''a''. In particular, <math>f</math> is bijective on some open interval <math>U'</math> containing <math>a</math>. The inverse function theorem then says that the inverse function <math>f^{-1}</math> is differentiable on <math>V = f(U')</math> with the derivatives: for <math>y \in V</math>,
:<math>(f^{-1})'(y) = \frac{1}{f'(f^{-1}(y))}.</math>
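As a quick numerical illustration (an addition, not part of the original text), one can check this formula for the hypothetical choice <math>f(x) = e^x</math>, whose inverse is <math>\ln y</math>:
<syntaxhighlight lang="python">
import math

# f(x) = exp(x) is C^1 with f'(x) = exp(x) != 0, so it is invertible
# (its inverse is log), and the theorem predicts
# (f^{-1})'(y) = 1 / f'(f^{-1}(y)) = 1 / y.
y = 2.5
h = 1e-6
numeric = (math.log(y + h) - math.log(y)) / h  # difference quotient of f^{-1}
predicted = 1.0 / math.exp(math.log(y))        # 1 / f'(f^{-1}(y))
print(numeric, predicted)  # both approximately 0.4
</syntaxhighlight>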
Derivative of a map and chain rule
For functions <math>f</math> defined in the plane or more generally on a Euclidean space <math>\mathbb{R}^n</math>, it is necessary to consider functions that are vector-valued or matrix-valued. It is also conceptually helpful to do this in an invariant manner (i.e., a coordinate-free way). Derivatives of such maps at a point are then vectors or linear maps, not real numbers.
Let <math>f : X \to Y</math> be a map from an open subset <math>X</math> of <math>\mathbb{R}^n</math> to an open subset <math>Y</math> of <math>\mathbb{R}^m</math>. Then the map <math>f</math> is said to be differentiable at a point <math>x</math> in <math>X</math> if there exists a (necessarily unique) linear transformation <math>f'(x) : \mathbb{R}^n \to \mathbb{R}^m</math>, called the derivative of <math>f</math> at <math>x</math>, such that
:<math>\lim_{h \to 0} \frac{1}{|h|} \left| f(x + h) - f(x) - f'(x)h \right| = 0</math>
where <math>f'(x)h</math> is the application of the linear transformation <math>f'(x)</math> to <math>h</math>. If <math>f</math> is differentiable at <math>x</math>, then it is continuous at <math>x</math> since
:<math>|f(x + h) - f(x)| \le |f(x + h) - f(x) - f'(x)h| + |f'(x)h| \to 0</math>
as <math>h \to 0</math>.
As in the one-variable case, there is the chain rule: if <math>f</math> is differentiable at <math>x</math> and <math>g</math> is differentiable at <math>y = f(x)</math>, then the composition <math>g \circ f</math> is differentiable at <math>x</math> with the derivative
:<math>(g \circ f)'(x) = g'(y) \circ f'(x).</math>
This is proved exactly as for functions in one variable. Indeed, with the notation <math>\widetilde{h} = f(x + h) - f(x)</math>, we have:
:<math>\frac{|g(f(x+h)) - g(f(x)) - g'(y)f'(x)h|}{|h|} \le \frac{|g(y + \widetilde{h}) - g(y) - g'(y)\widetilde{h}|}{|h|} + \frac{|g'(y)(\widetilde{h} - f'(x)h)|}{|h|}.</math>
Here, since <math>f</math> is differentiable at <math>x</math>, the second term on the right goes to zero as <math>h \to 0</math>. As for the first term, it can be written as:
:<math>\begin{cases} \dfrac{|g(y + \widetilde{h}) - g(y) - g'(y)\widetilde{h}|}{|\widetilde{h}|} \dfrac{|\widetilde{h}|}{|h|}, & \widetilde{h} \ne 0, \\ 0, & \widetilde{h} = 0. \end{cases}</math>
Now, by the argument showing the continuity of <math>f</math> at <math>x</math>, we see <math>|\widetilde{h}|/|h|</math> is bounded. Also, <math>\widetilde{h} \to 0</math> as <math>h \to 0</math> since <math>f</math> is continuous at <math>x</math>. Hence, the first term also goes to zero as <math>h \to 0</math> by the differentiability of <math>g</math> at <math>y</math>.
The map <math>f</math> as above is called continuously differentiable or <math>C^1</math> if it is differentiable on the domain and also the derivatives vary continuously; i.e., <math>x \mapsto f'(x)</math> is continuous.
As a linear transformation, <math>f'(x)</math> is represented by an <math>m \times n</math>-matrix, called the Jacobian matrix <math>(Jf)(x)</math> of <math>f</math> at <math>x</math>, and we write it as:
:<math>(Jf)(x) = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1}(x) & \cdots & \dfrac{\partial f_1}{\partial x_n}(x) \\ \vdots & & \vdots \\ \dfrac{\partial f_m}{\partial x_1}(x) & \cdots & \dfrac{\partial f_m}{\partial x_n}(x) \end{bmatrix}.</math>
Taking <math>h</math> to be <math>h e_j</math>, <math>h</math> a real number and <math>e_j</math> the ''j''-th standard basis element, we see that the differentiability of <math>f</math> at <math>x</math> implies:
:<math>\lim_{h \to 0} \frac{f_i(x + h e_j) - f_i(x)}{h} = \frac{\partial f_i}{\partial x_j}(x)</math>
where <math>f_i</math> denotes the ''i''-th component of <math>f</math>. That is, each component of <math>f</math> is differentiable at <math>x</math> in each variable with the derivative <math>\frac{\partial f_i}{\partial x_j}(x)</math>. In terms of Jacobian matrices, the chain rule says <math>J(g \circ f)(x) = Jg(y) \, Jf(x)</math>; i.e., as <math>(g \circ f)_i = g_i \circ f</math>,
:<math>\frac{\partial (g_i \circ f)}{\partial x_j}(x) = \sum_{k=1}^m \frac{\partial g_i}{\partial y_k}(y) \frac{\partial f_k}{\partial x_j}(x),</math>
which is the form of the chain rule that is often stated.
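The matrix form of the chain rule can be checked numerically. The sketch below is an added illustration, with arbitrarily chosen sample maps <code>f</code> and <code>g</code>; it compares a finite-difference Jacobian of <math>g \circ f</math> against the product of the Jacobians of <math>g</math> and <math>f</math>.
<syntaxhighlight lang="python">
import numpy as np

def jacobian(func, x, eps=1e-6):
    """Finite-difference Jacobian of func at x (one column per variable)."""
    fx = func(x)
    cols = []
    for j in range(len(x)):
        xe = x.copy()
        xe[j] += eps
        cols.append((func(xe) - fx) / eps)
    return np.column_stack(cols)

# Arbitrary sample maps f : R^2 -> R^3 and g : R^3 -> R^2.
f = lambda x: np.array([x[0] * x[1], np.sin(x[0]), x[1] ** 2])
g = lambda y: np.array([y[0] + y[1] * y[2], np.exp(y[2])])

x = np.array([0.5, -1.2])
lhs = jacobian(lambda t: g(f(t)), x)      # J(g o f)(x)
rhs = jacobian(g, f(x)) @ jacobian(f, x)  # Jg(f(x)) Jf(x)
print(np.allclose(lhs, rhs, atol=1e-4))   # True up to discretization error
</syntaxhighlight>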
A partial converse to the above holds. Namely, if the partial derivatives <math>\partial f_i / \partial x_j</math> are all defined and continuous, then <math>f</math> is continuously differentiable. This is a consequence of the mean value inequality: if <math>f</math> is differentiable on an open set containing the line segment from <math>x</math> to <math>y</math>, then
:<math>|f(y) - f(x)| \le |y - x| \sup_{0 < t < 1} \|f'(x + t(y - x))\|.</math>
(This version of the mean value inequality follows from the one-variable mean value inequality applied to the function <math>t \mapsto f(x + t(y - x))</math>.)
Indeed, let <math>g(x) = (Jf)(x)</math>. For simplicity, assume <math>n = 2</math> (the argument for the general case is similar). We note that, if <math>h = (h_1, h_2)</math>, then <math>g(x)h = \frac{\partial f}{\partial x_1}(x) h_1 + \frac{\partial f}{\partial x_2}(x) h_2</math> and thus
:<math>f(x + h) - f(x) - g(x)h = \left( f(x + h) - f(x + h_2 e_2) - \frac{\partial f}{\partial x_1}(x) h_1 \right) + \left( f(x + h_2 e_2) - f(x) - \frac{\partial f}{\partial x_2}(x) h_2 \right).</math>
Then, by the mean value inequality applied to each term on the right,
:<math>|f(x + h) - f(x) - g(x)h| \le |h_1| \sup_{0 < t < 1} \left| \frac{\partial f}{\partial x_1}(x + h_2 e_2 + t h_1 e_1) - \frac{\partial f}{\partial x_1}(x) \right| + |h_2| \sup_{0 < t < 1} \left| \frac{\partial f}{\partial x_2}(x + t h_2 e_2) - \frac{\partial f}{\partial x_2}(x) \right|,</math>
which implies <math>|f(x + h) - f(x) - g(x)h| / |h| \to 0</math> as <math>h \to 0</math> by the continuity of the partial derivatives, as required.
Example: Let <math>U</math> be the set of all invertible real square matrices of size ''n''. Note <math>U</math> can be identified as an open subset of <math>\mathbb{R}^{n^2}</math> with coordinates <math>x_{ij}, \, 1 \le i, j \le n</math>. Consider the function <math>f(g) = g^{-1}</math> = the inverse matrix of <math>g</math> defined on <math>U</math>. To guess its derivatives, assume <math>f</math> is differentiable and consider the curve <math>c(t) = g e^{th}</math> where <math>e^{th}</math> means the matrix exponential of <math>th</math>. By the chain rule applied to <math>f(c(t)) = e^{-th} g^{-1}</math>, we have:
:<math>\frac{d}{dt} f(c(t)) = f'(c(t)) \cdot c'(t) = -h e^{-th} g^{-1}</math>.
Taking <math>t = 0</math>, we get:
:<math>f'(g) \cdot (gh) = -h g^{-1}</math>.
Now, replacing <math>h</math> by <math>g^{-1} h</math>, we then have:
:<math>f'(g)h = -g^{-1} h g^{-1}.</math>
To verify this is indeed the derivative, note the identity <math>(g + h)^{-1} - g^{-1} = -(g + h)^{-1} h g^{-1}</math>, which gives
:<math>\left\| (g + h)^{-1} - g^{-1} + g^{-1} h g^{-1} \right\| = \left\| \left( g^{-1} - (g + h)^{-1} \right) h g^{-1} \right\| \le \left\| g^{-1} - (g + h)^{-1} \right\| \left\| h \right\| \left\| g^{-1} \right\|.</math>
Since <math>(g + h)^{-1} \to g^{-1}</math> as <math>h \to 0</math> and the operator norm is equivalent to the Euclidean norm on <math>\mathbb{R}^{n^2}</math> (any norms are equivalent to each other), this implies <math>f</math> is differentiable. Finally, from the formula for <math>f'</math>, we see the partial derivatives of <math>f</math> are smooth (infinitely differentiable); whence, <math>f</math> is smooth too.
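As a sanity check (an added illustration, not in the original), the formula <math>f'(g)h = -g^{-1} h g^{-1}</math> can be compared against a finite-difference approximation of the directional derivative of matrix inversion; the matrix sizes and random seed below are arbitrary.
<syntaxhighlight lang="python">
import numpy as np

rng = np.random.default_rng(0)
g = np.eye(3) + 0.1 * rng.standard_normal((3, 3))  # a small perturbation of I, invertible
h = rng.standard_normal((3, 3))                    # arbitrary direction

eps = 1e-7
g_inv = np.linalg.inv(g)

# Directional derivative of f(g) = g^{-1} in the direction h ...
numeric = (np.linalg.inv(g + eps * h) - g_inv) / eps
# ... versus the closed form f'(g)h = -g^{-1} h g^{-1}.
predicted = -g_inv @ h @ g_inv
print(np.allclose(numeric, predicted, atol=1e-5))  # True
</syntaxhighlight>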
Higher derivatives and Taylor formula
If <math>f : U \to \mathbb{R}^m</math> is differentiable where <math>U \subset \mathbb{R}^n</math> is an open subset, then the derivatives determine the map <math>f' : U \to \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)</math>, where <math>\operatorname{Hom}</math> stands for homomorphisms between vector spaces; i.e., linear maps. If <math>f'</math> is differentiable, then <math>f'' : U \to \operatorname{Hom}(\mathbb{R}^n, \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m))</math>. Here, the codomain of <math>f''</math> can be identified with the space of bilinear maps by:
:<math>\operatorname{Hom}(\mathbb{R}^n, \operatorname{Hom}(\mathbb{R}^n, \mathbb{R}^m)) \overset{\varphi}{\to} \{ \text{bilinear maps } \mathbb{R}^n \times \mathbb{R}^n \to \mathbb{R}^m \}</math>
where <math>\varphi(g)(x, y) = g(x)y</math> and <math>\varphi</math> is bijective with the inverse <math>\psi</math> given by <math>(\psi(g)x)y = g(x, y)</math>. In general, <math>f^{(k)} = \left( f^{(k-1)} \right)'</math> is a map from <math>U</math> to the space of <math>k</math>-multilinear maps <math>(\mathbb{R}^n)^k \to \mathbb{R}^m</math>.
Just as <math>f'(x)</math> is represented by a matrix (the Jacobian matrix), when <math>m = 1</math> (so that a bilinear map is a bilinear form), the bilinear form <math>f''(x)</math> is represented by a matrix called the Hessian matrix of <math>f</math> at <math>x</math>; namely, the square matrix <math>H</math> of size <math>n</math> such that <math>f''(x)(y, z) = \langle Hy, z \rangle</math>, where the pairing refers to an inner product of <math>\mathbb{R}^n</math>, and <math>H</math> is none other than the Jacobian matrix of <math>f' : U \to (\mathbb{R}^n)^* \simeq \mathbb{R}^n</math>. The <math>(i, j)</math>-th entry of <math>H</math> is thus given explicitly as <math>H_{ij} = \frac{\partial^2 f}{\partial x_i \partial x_j}(x)</math>.
Moreover, if <math>f''</math> exists and is continuous, then the matrix <math>H</math> is symmetric, a fact known as the symmetry of second derivatives (also called the equality of mixed partials). This is seen using the mean value inequality. For vectors <math>u, v</math> in <math>\mathbb{R}^n</math>, using the mean value inequality twice, we have:
:<math>|f(x + u + v) - f(x + u) - f(x + v) + f(x) - f''(x)(u, v)| \le \sup_{0 \le s, t \le 1} \left\| f''(x + su + tv) - f''(x) \right\| |u| |v|,</math>
which says
:<math>f''(x)(u, v) = \lim_{t \to 0} \frac{f(x + t(u + v)) - f(x + tu) - f(x + tv) + f(x)}{t^2}.</math>
Since the right-hand side is symmetric in <math>u, v</math>, so is the left-hand side: <math>f''(x)(u, v) = f''(x)(v, u)</math>. By induction, if <math>f</math> is <math>C^k</math>, then the ''k''-multilinear map <math>f^{(k)}(x)</math> is symmetric; i.e., the order of taking partial derivatives does not matter.
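Numerically, this symmetry is easy to observe. The following added sketch (with an arbitrary sample function) builds a finite-difference Hessian from the second-difference quotient appearing in the limit formula above and checks that it is symmetric up to discretization error.
<syntaxhighlight lang="python">
import numpy as np

def hessian(func, x, eps=1e-4):
    """Finite-difference Hessian H[i][j] ~ d^2 f / dx_i dx_j at x."""
    n = len(x)
    H = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            ei, ej = np.eye(n)[i], np.eye(n)[j]
            # Second-difference quotient, as in the limit formula above.
            H[i, j] = (func(x + eps * ei + eps * ej) - func(x + eps * ei)
                       - func(x + eps * ej) + func(x)) / eps**2
    return H

f = lambda x: np.sin(x[0] * x[1]) + x[0] ** 3 * x[1]  # sample C^2 function
H = hessian(f, np.array([0.7, -0.4]))
print(np.allclose(H, H.T, atol=1e-3))  # True: mixed partials agree
</syntaxhighlight>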
As in the case of one variable, the Taylor series expansion can then be proved by integration by parts:
:<math>f(x + h) = \sum_{|\alpha| < k} \frac{\partial^\alpha f(x)}{\alpha!} h^\alpha + k \sum_{|\alpha| = k} \frac{h^\alpha}{\alpha!} \int_0^1 (1 - t)^{k - 1} \partial^\alpha f(x + th) \, dt.</math>
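The following added sketch checks this formula numerically in the simplest case <math>k = 1</math>, where it reads <math>f(x + h) = f(x) + \sum_j h_j \int_0^1 \partial_j f(x + th) \, dt</math>; the sample function is an arbitrary choice.
<syntaxhighlight lang="python">
import numpy as np
from scipy.integrate import quad

f = lambda x: np.exp(x[0]) * np.cos(x[1])  # sample smooth function
grad = lambda x: np.array([np.exp(x[0]) * np.cos(x[1]),
                           -np.exp(x[0]) * np.sin(x[1])])

x = np.array([0.2, 0.5])
h = np.array([0.3, -0.1])

# Right-hand side of Taylor's formula for k = 1:
# f(x) + sum_j h_j * integral_0^1 d_j f(x + t h) dt
rhs = f(x) + sum(
    h[j] * quad(lambda t, j=j: grad(x + t * h)[j], 0, 1)[0]
    for j in range(2)
)
print(f(x + h), rhs)  # the two values agree
</syntaxhighlight>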
Taylor's formula has the effect of ''dividing'' a function by the coordinate variables, which can be illustrated by the next typical theoretical use of the formula.
Example: Let <math>T : \mathcal{S} \to \mathcal{S}</math> be a linear map between the vector space <math>\mathcal{S} = \mathcal{S}(\mathbb{R}^n)</math> of smooth functions on <math>\mathbb{R}^n</math> with rapidly decreasing derivatives; i.e., <math>\sup |x^\beta \partial^\alpha \varphi| < \infty</math> for any multi-indices <math>\alpha, \beta</math>. (The space <math>\mathcal{S}</math> is called a Schwartz space.) Given <math>\varphi</math> in <math>\mathcal{S}</math>, for each <math>y</math> in <math>\mathbb{R}^n</math>, Taylor's formula implies we can write:
:<math>\varphi - \psi \, \varphi(y) = \sum_{j=1}^n (x_j - y_j) \varphi_j</math>
with <math>\varphi_j \in \mathcal{S}</math>, where <math>\psi</math> is a smooth function with compact support and <math>\psi(y) = 1</math>. Now, assume <math>T</math> commutes with coordinates; i.e., <math>T(x_j \varphi) = x_j T\varphi</math>. Then
:<math>T\varphi - \varphi(y) T\psi = \sum_{j=1}^n (x_j - y_j) T\varphi_j</math>.
Evaluating the above at <math>y</math>, we get <math>(T\varphi)(y) = \varphi(y) (T\psi)(y).</math> In other words, <math>T</math> is a multiplication by some function <math>m</math>; i.e., <math>T\varphi = m\varphi</math>. Now, assume further that <math>T</math> commutes with partial differentiations. Then <math>\partial_j(m\varphi) = m \, \partial_j \varphi</math>, i.e., <math>(\partial_j m)\varphi = 0</math> for every <math>\varphi</math>, and we easily see that <math>m</math> is a constant; <math>T</math> is a multiplication by a constant.
(Aside: the above discussion ''almost'' proves the Fourier inversion formula. Indeed, let <math>F</math> be the Fourier transform and <math>R</math> the reflection; i.e., <math>(R\varphi)(x) = \varphi(-x)</math>. Then, dealing directly with the integral that is involved, one can see <math>T = RF^2</math> commutes with coordinates and partial differentiations; hence, <math>T</math> is a multiplication by a constant. This is ''almost'' a proof since one still has to compute this constant.)
A partial converse to the Taylor formula also holds; see Borel's lemma and the Whitney extension theorem.
Inverse function theorem and submersion theorem
The inverse function theorem states that if <math>f</math> is a <math>C^k</math>-map (<math>k \ge 1</math>) on an open subset of <math>\mathbb{R}^n</math> and the derivative <math>f'(x)</math> is invertible at a point <math>x</math>, then <math>f</math> is invertible near <math>x</math> with the <math>C^k</math>-inverse. A <math>C^k</math>-map with the <math>C^k</math>-inverse is called a <math>C^k</math>-diffeomorphism. Thus, the theorem says that, for a map <math>f</math> satisfying the hypothesis at a point <math>x</math>, <math>f</math> is a diffeomorphism near <math>x</math>. For a proof, see .
The implicit function theorem says: given a map <math>f : \mathbb{R}^n \times \mathbb{R}^m \to \mathbb{R}^m</math>, if <math>f(a, b) = 0</math>, <math>f</math> is <math>C^k</math> in a neighborhood of <math>(a, b)</math> and the derivative of <math>y \mapsto f(a, y)</math> at <math>b</math> is invertible, then there exists a differentiable map <math>g : U \to V</math> for some neighborhoods <math>U</math> of <math>a</math> and <math>V</math> of <math>b</math> such that <math>f(x, g(x)) = 0</math>. The theorem follows from the inverse function theorem; see .
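As an added numerical illustration (not part of the original text), take the hypothetical example <math>f(x, y) = x^2 + y^2 - 1</math> with <math>f(0, 1) = 0</math>: since <math>\partial f / \partial y = 2y \ne 0</math> at <math>(0, 1)</math>, the theorem guarantees a local solution <math>y = g(x)</math>, which can be computed by Newton iteration in <math>y</math>.
<syntaxhighlight lang="python">
import numpy as np

f = lambda x, y: x**2 + y**2 - 1.0      # f(0, 1) = 0
df_dy = lambda x, y: 2.0 * y            # invertible (nonzero) near (0, 1)

def g(x, y0=1.0, iters=20):
    """Solve f(x, y) = 0 for y near y0 by Newton's method in y."""
    y = y0
    for _ in range(iters):
        y -= f(x, y) / df_dy(x, y)
    return y

xs = np.linspace(-0.5, 0.5, 5)
ys = np.array([g(x) for x in xs])
print(np.allclose(f(xs, ys), 0.0))          # g satisfies f(x, g(x)) = 0
print(np.allclose(ys, np.sqrt(1 - xs**2)))  # and matches the explicit branch
</syntaxhighlight>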
Another consequence is the submersion theorem.
Integrable functions on Euclidean spaces
A partition of an interval