
In vector calculus, the gradient of a scalar-valued differentiable function f of several variables is the vector field (or vector-valued function) \nabla f whose value at a point p is the "direction and rate of fastest increase". If the gradient of a function is non-zero at a point p, the direction of the gradient is the direction in which the function increases most quickly from p, and the magnitude of the gradient is the rate of increase in that direction, the greatest absolute directional derivative. Further, a point where the gradient is the zero vector is known as a stationary point. The gradient thus plays a fundamental role in optimization theory, where it is used to maximize a function by gradient ascent. In coordinate-free terms, the gradient of a function f(\mathbf{r}) may be defined by:
:df=\nabla f \cdot d\mathbf{r}
where ''df'' is the total infinitesimal change in ''f'' for an infinitesimal displacement d\mathbf{r}, and is seen to be maximal when d\mathbf{r} is in the direction of the gradient \nabla f. The nabla symbol \nabla, written as an upside-down triangle and pronounced "del", denotes the vector differential operator.

When a coordinate system is used in which the basis vectors are not functions of position, the gradient is given by the vector whose components are the partial derivatives of f at p. That is, for f \colon \R^n \to \R, its gradient \nabla f \colon \R^n \to \R^n is defined at the point p = (x_1,\ldots,x_n) in ''n''-dimensional space as the vector
:\nabla f(p) = \begin{bmatrix} \dfrac{\partial f}{\partial x_1}(p) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix}.
The gradient is dual to the total derivative df: the value of the gradient at a point is a tangent vector – a vector at each point – while the value of the derivative at a point is a ''co''tangent vector – a linear functional on vectors. They are related in that the dot product of the gradient of f at a point p with another tangent vector \mathbf{v} equals the directional derivative of f at p along \mathbf{v}; that is, \nabla f(p) \cdot \mathbf{v} = \frac{\partial f}{\partial \mathbf{v}}(p) = df_p(\mathbf{v}). The gradient admits multiple generalizations to more general functions on manifolds; see the section on generalizations below.


Motivation

Consider a room where the temperature is given by a scalar field, T, so at each point (x, y, z) the temperature is T(x, y, z), independent of time. At each point in the room, the gradient of T at that point will show the direction in which the temperature rises most quickly, moving away from (x, y, z). The magnitude of the gradient will determine how fast the temperature rises in that direction.

Consider a surface whose height above sea level at point (x, y) is H(x, y). The gradient of H at a point is a plane vector pointing in the direction of the steepest slope or grade at that point. The steepness of the slope at that point is given by the magnitude of the gradient vector.

The gradient can also be used to measure how a scalar field changes in other directions, rather than just the direction of greatest change, by taking a dot product. Suppose that the steepest slope on a hill is 40%. A road going directly uphill has slope 40%, but a road going around the hill at an angle will have a shallower slope. For example, if the road is at a 60° angle from the uphill direction (when both directions are projected onto the horizontal plane), then the slope along the road will be the dot product between the gradient vector and a unit vector along the road, namely 40% times the cosine of 60°, or 20%. More generally, if the hill height function H is differentiable, then the gradient of H dotted with a unit vector gives the slope of the hill in the direction of the vector, the directional derivative of H along the unit vector.
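The 40% and 20% figures can be checked directly from the dot-product rule. Below is a minimal numerical sketch; the hill's gradient vector is assumed, purely for illustration, to point along the first coordinate axis with magnitude 0.40.

    import numpy as np

    # Hypothetical hill: its gradient at the chosen point is assumed to have
    # magnitude 0.40 (a 40% steepest slope) and to point along the first axis.
    grad_H = np.array([0.40, 0.0])

    # Unit vector for a road at 60 degrees from the uphill direction
    # (both directions projected onto the horizontal plane).
    theta = np.radians(60)
    road_dir = np.array([np.cos(theta), np.sin(theta)])

    # Slope along the road = dot product of the gradient with the unit direction.
    slope_along_road = grad_H @ road_dir
    print(slope_along_road)   # 0.20, i.e. a 20% grade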


Notation

The gradient of a function f at point a is usually written as \nabla f (a). It may also be denoted by any of the following:
* \vec{\nabla} f (a) : to emphasize the vector nature of the result.
* \operatorname{grad} f
* \partial_i f and f_i : Einstein notation.


Definition

The gradient (or gradient vector field) of a scalar function f(x_1, x_2, x_3, \ldots, x_n) is denoted \nabla f or \vec{\nabla} f, where \nabla ( nabla) denotes the vector differential operator, del. The notation \operatorname{grad} f is also commonly used to represent the gradient. The gradient of f is defined as the unique vector field whose dot product with any vector \mathbf{v} at each point x is the directional derivative of f along \mathbf{v}. That is,
:\big(\nabla f(x)\big)\cdot \mathbf{v} = D_{\mathbf v}f(x)
where the right-hand side is the directional derivative and there are many ways to represent it. Formally, the derivative is ''dual'' to the gradient; see relationship with derivative. When a function also depends on a parameter such as time, the gradient often refers simply to the vector of its spatial derivatives only (see spatial gradient). The magnitude and direction of the gradient vector are independent of the particular coordinate representation.
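This defining relation between the gradient and the directional derivative can be checked numerically. The sketch below uses central finite differences and an arbitrary test function chosen only for illustration.

    import numpy as np

    def f(p):
        x, y, z = p
        return x * np.sin(y) + z**2        # arbitrary smooth test function (an assumption)

    def numerical_gradient(f, p, h=1e-6):
        """Central-difference approximation of the gradient of f at p."""
        p = np.asarray(p, dtype=float)
        grad = np.zeros_like(p)
        for i in range(p.size):
            e = np.zeros_like(p)
            e[i] = h
            grad[i] = (f(p + e) - f(p - e)) / (2 * h)
        return grad

    p = np.array([1.0, 2.0, 3.0])
    v = np.array([0.6, 0.8, 0.0])          # a unit vector
    grad = numerical_gradient(f, p)

    # Directional derivative two ways: dot product with the gradient,
    # and a finite difference of f along v.
    h = 1e-6
    D_v = (f(p + h * v) - f(p - h * v)) / (2 * h)
    print(grad @ v, D_v)                   # the two values agree to high precision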


Cartesian coordinates

In the three-dimensional Cartesian coordinate system with a Euclidean metric, the gradient, if it exists, is given by:
:\nabla f = \frac{\partial f}{\partial x} \mathbf{i} + \frac{\partial f}{\partial y} \mathbf{j} + \frac{\partial f}{\partial z} \mathbf{k},
where \mathbf{i}, \mathbf{j}, \mathbf{k} are the standard unit vectors in the directions of the x, y and z coordinates, respectively. For example, the gradient of the function
:f(x,y,z)= 2x+3y^2-\sin(z)
is
:\nabla f(x,y,z) = 2\mathbf{i}+ 6y\mathbf{j} -\cos(z)\mathbf{k}.
In some applications it is customary to represent the gradient as a row vector or column vector of its components in a rectangular coordinate system; this article follows the convention of the gradient being a column vector, while the derivative is a row vector.
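The worked example above is easy to confirm with a computer algebra system; here is a minimal SymPy sketch.

    import sympy as sp

    x, y, z = sp.symbols('x y z')
    f = 2*x + 3*y**2 - sp.sin(z)

    # The gradient is the vector of partial derivatives with respect to x, y, z.
    grad_f = sp.Matrix([sp.diff(f, v) for v in (x, y, z)])
    print(grad_f.T)   # Matrix([[2, 6*y, -cos(z)]])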


Cylindrical and spherical coordinates

In cylindrical coordinates with a Euclidean metric, the gradient is given by:
:\nabla f(\rho, \varphi, z) = \frac{\partial f}{\partial \rho}\mathbf{e}_\rho + \frac{1}{\rho}\frac{\partial f}{\partial \varphi}\mathbf{e}_\varphi + \frac{\partial f}{\partial z}\mathbf{e}_z,
where \rho is the axial distance, \varphi is the azimuthal or azimuth angle, z is the axial coordinate, and \mathbf{e}_\rho, \mathbf{e}_\varphi and \mathbf{e}_z are unit vectors pointing along the coordinate directions. In spherical coordinates, the gradient is given by:
:\nabla f(r, \theta, \varphi) = \frac{\partial f}{\partial r}\mathbf{e}_r + \frac{1}{r}\frac{\partial f}{\partial \theta}\mathbf{e}_\theta + \frac{1}{r \sin\theta}\frac{\partial f}{\partial \varphi}\mathbf{e}_\varphi,
where r is the radial distance, \varphi is the azimuthal angle and \theta is the polar angle, and \mathbf{e}_r, \mathbf{e}_\theta and \mathbf{e}_\varphi are again local unit vectors pointing in the coordinate directions (that is, the normalized covariant basis). For the gradient in other orthogonal coordinate systems, see Orthogonal coordinates (Differential operators in three dimensions).
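The cylindrical formula can be cross-checked against the Cartesian gradient. The SymPy sketch below uses a test function chosen purely for illustration and projects the Cartesian gradient onto the cylindrical unit vectors.

    import sympy as sp

    x, y, z = sp.symbols('x y z', real=True)
    rho, phi = sp.symbols('rho phi', positive=True)

    # Illustrative test function: rho**2*cos(phi) + z in cylindrical coordinates,
    # which is x*sqrt(x**2 + y**2) + z in Cartesian coordinates.
    f_cyl  = rho**2 * sp.cos(phi) + z
    f_cart = x * sp.sqrt(x**2 + y**2) + z

    # Gradient in the orthonormal cylindrical basis (e_rho, e_phi, e_z), per the formula above.
    grad_cyl = sp.Matrix([sp.diff(f_cyl, rho),
                          sp.diff(f_cyl, phi) / rho,
                          sp.diff(f_cyl, z)])

    # Cartesian gradient, then projected onto the cylindrical unit vectors.
    grad_cart = sp.Matrix([sp.diff(f_cart, v) for v in (x, y, z)])
    subs = {x: rho * sp.cos(phi), y: rho * sp.sin(phi)}
    e_rho = sp.Matrix([sp.cos(phi), sp.sin(phi), 0])
    e_phi = sp.Matrix([-sp.sin(phi), sp.cos(phi), 0])
    e_z   = sp.Matrix([0, 0, 1])
    projected = sp.simplify(sp.Matrix([grad_cart.dot(e) for e in (e_rho, e_phi, e_z)]).subs(subs))

    print(sp.simplify(projected - grad_cyl))   # zero vector: both routes agree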


General coordinates

We consider general coordinates, which we write as x^1, \ldots, x^i, \ldots, x^n, where n is the number of dimensions of the domain. Here, the upper index refers to the position in the list of the coordinate or component, so x^2 refers to the second component, not the quantity x squared. The index variable i refers to an arbitrary element x^i. Using Einstein notation, the gradient can then be written as:
:\nabla f = \frac{\partial f}{\partial x^{i}}g^{ij} \mathbf{e}_j
(note that its dual is \mathrm{d}f = \frac{\partial f}{\partial x^{i}}\mathbf{e}^i), where \mathbf{e}_i = \partial \mathbf{x}/\partial x^i and \mathbf{e}^i = \mathrm{d}x^i refer to the unnormalized local covariant and contravariant bases respectively, g^{ij} is the inverse metric tensor, and the Einstein summation convention implies summation over ''i'' and ''j''.

If the coordinates are orthogonal we can easily express the gradient (and the differential) in terms of the normalized bases, which we refer to as \hat{\mathbf{e}}_i and \hat{\mathbf{e}}^i, using the scale factors (also known as Lamé coefficients) h_i= \lVert \mathbf{e}_i \rVert = \sqrt{g_{ii}} = 1\, / \lVert \mathbf{e}^i \rVert :
:\nabla f = \frac{\partial f}{\partial x^{i}}g^{ij} \hat{\mathbf{e}}_{j}\sqrt{g_{jj}} = \sum_{i=1}^n \, \frac{\partial f}{\partial x^{i}} \frac{1}{h_i} \hat{\mathbf{e}}_i
(and \mathrm{d}f = \sum_{i=1}^n \, \frac{\partial f}{\partial x^{i}} \frac{1}{h_i} \hat{\mathbf{e}}^i), where we cannot use Einstein notation, since it is impossible to avoid the repetition of more than two indices. Despite the use of upper and lower indices, \hat{\mathbf{e}}_i, \hat{\mathbf{e}}^i, and h_i are neither contravariant nor covariant. The latter expression evaluates to the expressions given above for cylindrical and spherical coordinates.
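As a concrete check, the scale factors and the resulting gradient components for spherical coordinates can be computed directly from the parametrization. A minimal SymPy sketch follows.

    import sympy as sp

    r, theta, phi = sp.symbols('r theta phi', positive=True)
    f = sp.Function('f')(r, theta, phi)

    # Position vector in Cartesian components, parametrized by spherical coordinates.
    X = sp.Matrix([r * sp.sin(theta) * sp.cos(phi),
                   r * sp.sin(theta) * sp.sin(phi),
                   r * sp.cos(theta)])

    coords = (r, theta, phi)

    # Unnormalized covariant basis e_i = dX/dx^i and scale factors h_i = |e_i|.
    e = [X.diff(q) for q in coords]
    h = [sp.simplify(sp.sqrt(ei.dot(ei))) for ei in e]
    print(h)     # [1, r, r*sin(theta)]

    # Gradient components on the normalized basis: (1/h_i) * df/dx^i.
    grad = [sp.diff(f, q) / hi for q, hi in zip(coords, h)]
    print(grad)  # matches the spherical-coordinate formula given above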


Relationship with derivative


Relationship with total derivative

The gradient is closely related to the total derivative ( total differential) df: they are transpose ( dual) to each other. Using the convention that vectors in \R^n are represented by column vectors, and that covectors (linear maps \R^n \to \R) are represented by row vectors, the gradient \nabla f and the derivative df are expressed as a column and row vector, respectively, with the same components, but transpose of each other:
:\nabla f(p) = \begin{bmatrix}\dfrac{\partial f}{\partial x_1}(p) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix} ;
:df_p = \begin{bmatrix}\dfrac{\partial f}{\partial x_1}(p) & \cdots & \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix} .
While these both have the same components, they differ in what kind of mathematical object they represent: at each point, the derivative is a cotangent vector, a linear form ( covector) which expresses how much the (scalar) output changes for a given infinitesimal change in (vector) input, while at each point, the gradient is a tangent vector, which represents an infinitesimal change in (vector) input. In symbols, the gradient is an element of the tangent space at a point, \nabla f(p) \in T_p \R^n, while the derivative is a map from the tangent space to the real numbers, df_p \colon T_p \R^n \to \R. The tangent spaces at each point of \R^n can be "naturally" identified with the vector space \R^n itself, and similarly the cotangent space at each point can be naturally identified with the dual vector space (\R^n)^* of covectors; thus the value of the gradient at a point can be thought of as a vector in the original \R^n, not just as a tangent vector. Computationally, given a tangent vector, the vector can be ''multiplied'' by the derivative (as matrices), which is equal to taking the dot product with the gradient:
:(df_p)(v) = \begin{bmatrix}\dfrac{\partial f}{\partial x_1}(p) & \cdots & \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix} \begin{bmatrix}v_1 \\ \vdots \\ v_n\end{bmatrix} = \sum_{i=1}^n \dfrac{\partial f}{\partial x_i}(p) v_i = \begin{bmatrix}\dfrac{\partial f}{\partial x_1}(p) \\ \vdots \\ \dfrac{\partial f}{\partial x_n}(p) \end{bmatrix} \cdot \begin{bmatrix}v_1 \\ \vdots \\ v_n\end{bmatrix} = \nabla f(p) \cdot v
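The row-vector/column-vector bookkeeping can be illustrated numerically. In the sketch below the component values are arbitrary placeholders, not taken from any particular function.

    import numpy as np

    # Gradient of a hypothetical function at a point, as a column vector,
    # and the derivative (total differential) at the same point as a row vector.
    grad_f = np.array([[2.0], [6.0], [-1.0]])   # column vector, shape (3, 1)
    df     = grad_f.T                           # row vector, shape (1, 3)

    v = np.array([[0.5], [1.0], [2.0]])         # a tangent vector (column)

    # Applying the derivative is matrix multiplication; it equals
    # the dot product with the gradient.
    print(df @ v)                               # [[5.]]
    print(grad_f[:, 0] @ v[:, 0])               # 5.0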


Differential or (exterior) derivative

The best linear approximation to a differentiable function
:f \colon \R^n \to \R
at a point x_0 in \R^n is a linear map from \R^n to \R which is often denoted by df_{x_0} or Df(x_0) and called the differential or total derivative of f at x_0. The function df, which maps x to df_x, is called the total differential or exterior derivative of f and is an example of a differential 1-form. Much as the derivative of a function of a single variable represents the slope of the tangent to the graph of the function, the directional derivative of a function in several variables represents the slope of the tangent hyperplane in the direction of the vector. The gradient is related to the differential by the formula
:(\nabla f)_x\cdot v = df_x(v)
for any v \in \R^n, where \cdot is the dot product: taking the dot product of a vector with the gradient is the same as taking the directional derivative along the vector. If \R^n is viewed as the space of (dimension n) column vectors (of real numbers), then one can regard df as the row vector with components
:\left( \frac{\partial f}{\partial x_1}, \dots, \frac{\partial f}{\partial x_n}\right),
so that df_x(v) is given by matrix multiplication. Assuming the standard Euclidean metric on \R^n, the gradient is then the corresponding column vector, that is,
:(\nabla f)_i = df^\mathsf{T}_i.


Linear approximation to a function

The best linear approximation to a function can be expressed in terms of the gradient, rather than the derivative. The gradient of a function f from the Euclidean space \R^n to \R at any particular point x_0 in \R^n characterizes the best linear approximation to f at x_0. The approximation is as follows:
:f(x) \approx f(x_0) + (\nabla f)_{x_0}\cdot(x-x_0)
for x close to x_0, where (\nabla f)_{x_0} is the gradient of f computed at x_0, and the dot denotes the dot product on \R^n. This equation is equivalent to the first two terms in the multivariable Taylor series expansion of f at x_0.
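A quick numerical check of the approximation, using a test function chosen only for illustration:

    import numpy as np

    def f(p):
        x, y = p
        return np.exp(x) * np.sin(y)

    def grad_f(p):
        x, y = p
        return np.array([np.exp(x) * np.sin(y), np.exp(x) * np.cos(y)])

    x0 = np.array([0.3, 1.1])
    x  = x0 + np.array([0.01, -0.02])             # a nearby point

    linear = f(x0) + grad_f(x0) @ (x - x0)
    print(f(x), linear)   # agree to first order; the error scales like |x - x0|**2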


Relationship with Fréchet derivative

Let U be an open set in \R^n. If the function f \colon U \to \R is differentiable, then the differential of f is the Fréchet derivative of f. Thus \nabla f is a function from U to the space \R^n such that
:\lim_{h\to 0} \frac{\left|f(x+h)-f(x) -\nabla f(x)\cdot h\right|}{\lVert h\rVert} = 0,
where \cdot is the dot product. As a consequence, the usual properties of the derivative hold for the gradient, though the gradient is not a derivative itself, but rather dual to the derivative:

; Linearity : The gradient is linear in the sense that if f and g are two real-valued functions differentiable at the point a \in \R^n, and \alpha and \beta are two constants, then \alpha f+\beta g is differentiable at a, and moreover
:\nabla\left(\alpha f+\beta g\right)(a) = \alpha \nabla f(a) + \beta\nabla g (a).

; Product rule : If f and g are real-valued functions differentiable at a point a \in \R^n, then the product rule asserts that the product fg is differentiable at a, and
:\nabla (fg)(a) = f(a)\nabla g(a) + g(a)\nabla f(a).

; Chain rule : Suppose that f \colon A \to \R is a real-valued function defined on a subset A of \R^n, and that f is differentiable at a point a. There are two forms of the chain rule applying to the gradient. First, suppose that the function g is a parametric curve; that is, a function g \colon I \to \R^n maps a subset I \subset \R into \R^n. If g is differentiable at a point c \in I such that g(c) = a, then
:(f\circ g)'(c) = \nabla f(a)\cdot g'(c),
where \circ is the composition operator: (f \circ g)(x) = f(g(x)). More generally, if instead I \subset \R^k, then the following holds:
:\nabla (f\circ g)(c) = \big(Dg(c)\big)^\mathsf{T} \big(\nabla f(a)\big),
where (Dg)^\mathsf{T} denotes the transpose Jacobian matrix. For the second form of the chain rule, suppose that h \colon I \to \R is a real-valued function on a subset I of \R, and that h is differentiable at the point f(a) \in I. Then
:\nabla (h\circ f)(a) = h'\big(f(a)\big)\nabla f(a).
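The second form of the chain rule is easy to verify symbolically. The SymPy sketch below uses concrete choices of f and h purely for illustration.

    import sympy as sp

    x, y, t = sp.symbols('x y t')
    f = x**2 + sp.sin(y)     # inner real-valued function (illustrative choice)
    h = t**3                 # outer real-valued function of one variable (illustrative choice)

    grad = lambda expr: sp.Matrix([sp.diff(expr, v) for v in (x, y)])

    lhs = grad(h.subs(t, f))                   # gradient of the composition h(f(x, y))
    rhs = sp.diff(h, t).subs(t, f) * grad(f)   # h'(f) * grad f

    print(sp.simplify(lhs - rhs))              # Matrix([[0], [0]]): the two sides agree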


Further properties and applications


Level sets

A level surface, or isosurface, is the set of all points where some function has a given value. If f is differentiable, then the dot product (\nabla f)_x \cdot v of the gradient at a point x with a vector v gives the directional derivative of f at x in the direction v. It follows that in this case the gradient of f is orthogonal to the level sets of f. For example, a level surface in three-dimensional space is defined by an equation of the form F(x, y, z) = c. The gradient of F is then normal to the surface. More generally, any embedded hypersurface in a Riemannian manifold can be cut out by an equation of the form F(P) = 0 such that dF is nowhere zero. The gradient of F is then normal to the hypersurface. Similarly, an affine algebraic hypersurface may be defined by an equation F = 0, where F is a polynomial. The gradient of F is zero at a singular point of the hypersurface (this is the definition of a singular point). At a non-singular point, it is a nonzero normal vector.
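The orthogonality of the gradient to level sets can be seen numerically. In the sketch below the level surface is the unit sphere, and the tangent vector comes from a curve lying in that sphere.

    import numpy as np

    def F(p):
        x, y, z = p
        return x**2 + y**2 + z**2      # the level set F = 1 is the unit sphere

    def grad_F(p):
        return 2 * np.asarray(p)

    # A point on the level surface F = 1 and a curve lying in that surface through it.
    theta = 0.7
    p = np.array([np.cos(theta), np.sin(theta), 0.0])

    # Tangent vector to the surface at p (derivative of the curve t -> (cos t, sin t, 0)).
    tangent = np.array([-np.sin(theta), np.cos(theta), 0.0])

    print(grad_F(p) @ tangent)         # 0.0 (up to rounding): gradient is orthogonal to the level set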


Conservative vector fields and the gradient theorem

The gradient of a function is called a gradient field. A (continuous) gradient field is always a conservative vector field: its line integral along any path depends only on the endpoints of the path, and can be evaluated by the gradient theorem (the fundamental theorem of calculus for line integrals). Conversely, a (continuous) conservative vector field is always the gradient of a function.
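Path independence can be illustrated numerically: integrating a gradient field along two different paths with the same endpoints gives the same value, equal to the difference of the potential. A minimal sketch, with a potential function chosen only for illustration:

    import numpy as np

    def f(p):
        x, y = p
        return x**2 * y + np.sin(y)

    def grad_f(p):
        x, y = p
        return np.array([2 * x * y, x**2 + np.cos(y)])

    def line_integral(path, n=20000):
        """Numerically integrate grad f . dr along a path t -> path(t), t in [0, 1]."""
        t = np.linspace(0.0, 1.0, n)
        pts = np.array([path(ti) for ti in t])
        dr = np.diff(pts, axis=0)
        mid = 0.5 * (pts[:-1] + pts[1:])
        return sum(grad_f(m) @ d for m, d in zip(mid, dr))

    a, b = np.array([0.0, 0.0]), np.array([1.0, 2.0])
    straight = lambda t: a + t * (b - a)                                        # straight segment
    wiggly   = lambda t: a + t * (b - a) + np.array([np.sin(np.pi * t), 0.0])   # detour, same endpoints

    print(line_integral(straight), line_integral(wiggly), f(b) - f(a))  # all three agree (about 2.909)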


Generalizations


Jacobian

The Jacobian matrix is the generalization of the gradient for vector-valued functions of several variables and differentiable maps between Euclidean spaces or, more generally, manifolds. A further generalization for a function between Banach spaces is the Fréchet derivative. Suppose f \colon \R^n \to \R^m is a function such that each of its first-order partial derivatives exist on \R^n. Then the Jacobian matrix of f is defined to be an m \times n matrix, denoted by \mathbf{J}_{\mathbf{f}}(\mathbf{x}) or simply \mathbf{J}. The (i,j)th entry is \mathbf J_{ij} = \frac{\partial f_i}{\partial x_j}. Explicitly
:\mathbf J = \begin{bmatrix} \dfrac{\partial \mathbf{f}}{\partial x_1} & \cdots & \dfrac{\partial \mathbf{f}}{\partial x_n} \end{bmatrix} = \begin{bmatrix} \nabla^\mathsf{T} f_1 \\ \vdots \\ \nabla^\mathsf{T} f_m \end{bmatrix} = \begin{bmatrix} \dfrac{\partial f_1}{\partial x_1} & \cdots & \dfrac{\partial f_1}{\partial x_n}\\ \vdots & \ddots & \vdots\\ \dfrac{\partial f_m}{\partial x_1} & \cdots & \dfrac{\partial f_m}{\partial x_n} \end{bmatrix}.


Gradient of a vector field

Since the total derivative of a vector field is a linear mapping from vectors to vectors, it is a tensor quantity. In rectangular coordinates, the gradient of a vector field \mathbf{f} = (f^1, f^2, f^3) is defined by:
:\nabla \mathbf{f}=g^{jk}\frac{\partial f^i}{\partial x^j} \mathbf{e}_i \otimes \mathbf{e}_k,
(where the Einstein summation notation is used and the tensor product of the vectors \mathbf{e}_i and \mathbf{e}_k is a dyadic tensor of type (2,0)). Overall, this expression equals the transpose of the Jacobian matrix:
:\frac{\partial f^i}{\partial x^j} = \frac{\partial (f^1,f^2,f^3)}{\partial (x^1,x^2,x^3)}.
In curvilinear coordinates, or more generally on a curved manifold, the gradient involves Christoffel symbols:
:\nabla \mathbf{f}=g^{jk}\left(\frac{\partial f^i}{\partial x^j}+{\Gamma^i}_{jl}f^l\right) \mathbf{e}_i \otimes \mathbf{e}_k,
where g^{jk} are the components of the inverse metric tensor and the \mathbf{e}_i are the coordinate basis vectors. Expressed more invariantly, the gradient of a vector field \mathbf{f} can be defined by the Levi-Civita connection and metric tensor:
:\nabla^a f^b = g^{ac} \nabla_c f^b ,
where \nabla_c is the connection.
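In the flat rectangular case the component array of the gradient of a vector field is just the transpose of its Jacobian matrix, which a computer algebra system produces directly. A minimal SymPy sketch with an illustrative vector field:

    import sympy as sp

    x, y, z = sp.symbols('x y z')

    # A vector field on R^3 (illustrative choice).
    F = sp.Matrix([x * y, y * z, z * x])

    J = F.jacobian([x, y, z])

    # With the flat metric (g^{jk} the identity), the components of grad F in the
    # basis e_i (x) e_k are dF^i/dx^k, i.e. the transpose of the Jacobian.
    grad_F = J.T
    print(grad_F)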


Riemannian manifolds

For any smooth function f on a Riemannian manifold (M, g), the gradient of f is the vector field \nabla f such that for any vector field X,
:g(\nabla f, X) = \partial_X f,
that is,
:g_x\big((\nabla f)_x, X_x \big) = (\partial_X f) (x),
where g_x( , ) denotes the inner product of tangent vectors at x defined by the metric g and \partial_X f is the function that takes any point x \in M to the directional derivative of f in the direction X, evaluated at x. In other words, in a coordinate chart \varphi from an open subset of M to an open subset of \R^n, (\partial_X f)(x) is given by:
:\sum_{j=1}^n X^{j} \big(\varphi(x)\big) \frac{\partial}{\partial x_{j}}(f \circ \varphi^{-1}) \Bigg|_{\varphi(x)},
where X^{j} denotes the jth component of X in this coordinate chart. So, the local form of the gradient takes the form:
:\nabla f = g^{ik} \frac{\partial f}{\partial x^{k}} {\mathbf e}_i .
Generalizing the case M = \R^n, the gradient of a function is related to its exterior derivative, since
:(\partial_X f) (x) = (df)_x(X_x) .
More precisely, the gradient \nabla f is the vector field associated to the differential 1-form df using the musical isomorphism
:\sharp=\sharp^g\colon T^*M\to TM
(called "sharp") defined by the metric g. The relation between the exterior derivative and the gradient of a function on \R^n is a special case of this in which the metric is the flat metric given by the dot product.
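As a concrete instance of the local formula \nabla f = g^{ik} (\partial f/\partial x^k) \mathbf{e}_i, the sketch below computes the gradient of a function on the unit 2-sphere with the round metric, expressed in the coordinate basis.

    import sympy as sp

    theta, phi = sp.symbols('theta phi', positive=True)
    f = sp.Function('f')(theta, phi)

    # Round metric on the unit 2-sphere in coordinates (theta, phi): g = diag(1, sin(theta)**2).
    g = sp.diag(1, sp.sin(theta)**2)
    g_inv = g.inv()

    coords = (theta, phi)

    # Local form of the gradient: (grad f)^i = g^{ik} * df/dx^k, on the coordinate basis.
    grad = sp.simplify(g_inv * sp.Matrix([sp.diff(f, q) for q in coords]))
    print(grad)   # Matrix([[Derivative(f, theta)], [Derivative(f, phi)/sin(theta)**2]])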


See also

* Curl
* Divergence
* Four-gradient
* Hessian matrix
* Skew gradient
