Ricci Calculus
In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be called the absolute differential calculus (the foundation of tensor calculus), developed by Gregorio Ricci-Curbastro in 1887–1896, and subsequently popularized in a paper written with his pupil Tullio Levi-Civita in 1900. Jan Arnoldus Schouten developed the modern notation and formalism for this mathematical framework, and made contributions to the theory, during its applications to general relativity and differential geometry in the early twentieth century.

A component of a tensor is a real number that is used as a coefficient of a basis element for the tensor space. The tensor is the sum of its components multiplied by their corresponding basis elements. Tensors and tensor fields can be expressed in terms of their components, and operations on tensors and tensor fields can be expressed in terms of operations on their components. The description of tensor fields and operations on them in terms of their components is the focus of the Ricci calculus. This notation allows an efficient expression of such tensor fields and operations. While much of the notation may be applied with any tensors, operations relating to a differential structure are only applicable to tensor fields. Where needed, the notation extends to components of non-tensors, particularly multidimensional arrays.

A tensor may be expressed as a linear sum of the tensor product of vector and covector basis elements. The resulting tensor components are labelled by indices of the basis. Each index has one possible value per dimension of the underlying vector space. The number of indices equals the degree (or order) of the tensor. For compactness and convenience, the Ricci calculus incorporates Einstein notation, which implies summation over indices repeated within a term and universal quantification over free indices. Expressions in the notation of the Ricci calculus may generally be interpreted as a set of simultaneous equations relating the components as functions over a manifold, usually more specifically as functions of the coordinates on the manifold. This allows intuitive manipulation of expressions with familiarity of only a limited set of rules.


Notation for indices


Basis-related distinctions


Space and time coordinates

Where a distinction is to be made between the space-like basis elements and a time-like element in the four-dimensional spacetime of classical physics, this is conventionally done through indices as follows:

* The lowercase Latin alphabet is used to indicate restriction to 3-dimensional Euclidean space; these indices take the values 1, 2, 3 for the spatial components, and the time-like element, indicated by 0, is shown separately.
* The lowercase Greek alphabet is used for 4-dimensional spacetime; these indices typically take the value 0 for time components and 1, 2, 3 for the spatial components.

Some sources use 4 instead of 0 as the index value corresponding to time; in this article, 0 is used. Otherwise, in general mathematical contexts, any symbols can be used for the indices, generally running over all dimensions of the vector space.


Coordinate and index notation

The author(s) will usually make it clear whether a subscript is intended as an index or as a label. For example, in 3-D Euclidean space and using Cartesian coordinates, the coordinate vector

: \mathbf{A} = (A_1, A_2, A_3) = (A_x, A_y, A_z)

shows a direct correspondence between the subscripts 1, 2, 3 and the labels x, y, z. In the expression A_i, the symbol i is interpreted as an index ranging over the values 1, 2, 3, while the x, y, z subscripts are only labels, not variables. In the context of spacetime, the index value 0 conventionally corresponds to the label t.


Reference to basis

Indices themselves may be ''labelled'' using diacritic-like symbols, such as a hat (ˆ), bar (¯), tilde (˜), or prime (′), as in:

: X_{\hat\alpha}\,,\ Y_{\bar\alpha}\,,\ Z_{\tilde\alpha}\,,\ T_{\alpha'}

to denote a possibly different basis for that index. An example is in Lorentz transformations from one frame of reference to another, where one frame could be unprimed and the other primed, as in:

: v^{\mu'} = v^\nu L_\nu{}^{\mu'} \,.

This is not to be confused with van der Waerden notation for spinors, which uses hats and overdots on indices to reflect the chirality of a spinor.


Upper and lower indices

Ricci calculus, and index notation more generally, distinguishes between lower indices (subscripts) and upper indices (superscripts); the latter are ''not'' exponents, even though they may look like exponents to a reader familiar only with other parts of mathematics. In the special case that the metric tensor is everywhere equal to the identity matrix, it is possible to drop the distinction between upper and lower indices, and then all indices could be written in the lower position. Coordinate formulae in linear algebra such as a_{ij} b_{jk} for the product of matrices may be examples of this. But in general, the distinction between upper and lower indices should be maintained.
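As a small computational sketch of the all-lower-index Euclidean convention (an illustration only; the array names are arbitrary), the matrix product a_{ij} b_{jk}, with an implied sum over the repeated index j, can be written directly with NumPy's einsum:

```python
import numpy as np

# Arbitrary 3x3 arrays standing in for the components a_ij and b_jk.
a = np.arange(9.0).reshape(3, 3)
b = np.ones((3, 3))

# c_ik = a_ij b_jk : the repeated index j is summed over.
c = np.einsum("ij,jk->ik", a, b)

assert np.allclose(c, a @ b)  # identical to ordinary matrix multiplication
```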


Covariant tensor components

A ''lower index'' (subscript) indicates covariance of the components with respect to that index:

: A_{\alpha\beta\gamma\cdots}


Contravariant tensor components

An ''upper index'' (superscript) indicates contravariance of the components with respect to that index:

: A^{\alpha\beta\gamma\cdots}


Mixed-variance tensor components

A tensor may have both upper and lower indices:

: A_\alpha{}^\beta{}_\gamma{}^\delta \,.

Ordering of indices is significant, even when of differing variance. However, when it is understood that no indices will be raised or lowered while retaining the base symbol, covariant indices are sometimes placed below contravariant indices for notational convenience (e.g. with the generalized Kronecker delta).


Tensor type and degree

The number of each upper and lower indices of a tensor gives its ''type'': a tensor with m upper and n lower indices is said to be of type (m, n), or to be a type-(m, n) tensor. The number of indices of a tensor, regardless of variance, is called the ''degree'' of the tensor (alternatively, its ''valence'', ''order'' or ''rank'', although ''rank'' is ambiguous). Thus, a tensor of type (m, n) has degree m + n.


Summation convention

The same symbol occurring twice (one upper and one lower) within a term indicates a pair of indices that are summed over:

: A_\alpha B^\alpha \equiv \sum_\alpha A_\alpha B^\alpha \quad \text{and} \quad A^\alpha B_\alpha \equiv \sum_\alpha A^\alpha B_\alpha \,.

The operation implied by such a summation is called tensor contraction:

: A_\alpha B^\beta \rightarrow A_\alpha B^\alpha \equiv \sum_\alpha A_\alpha B^\alpha \,.

This summation may occur more than once within a term with a distinct symbol per pair of indices, for example:

: A_\alpha{}^\gamma B^\alpha C_\gamma{}^\beta \equiv \sum_\alpha \sum_\gamma A_\alpha{}^\gamma B^\alpha C_\gamma{}^\beta \,.

Other combinations of repeated indices within a term are considered to be ill-formed, such as A_\alpha B_\alpha (both occurrences of \alpha are lower) or A^\alpha B^\alpha C_\alpha (\alpha appears three times). The reason for excluding such formulae is that although these quantities could be computed as arrays of numbers, they would not in general transform as tensors under a change of basis.
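As a numerical sketch of the summation convention (purely illustrative; NumPy arrays only mimic the component arithmetic and do not track upper versus lower positions), np.einsum performs exactly this repeated-index summation:

```python
import numpy as np

n = 4                          # dimension of the underlying vector space
A = np.random.rand(n)          # components A_alpha
B = np.random.rand(n)          # components B^alpha
A2 = np.random.rand(n, n)      # components A_alpha^gamma
C = np.random.rand(n, n)       # components C_gamma^beta

# A_alpha B^alpha : the repeated index alpha is summed over.
s = np.einsum("a,a->", A, B)

# A_alpha^gamma B^alpha C_gamma^beta : alpha and gamma are summed, beta stays free.
t = np.einsum("ag,a,gb->b", A2, A, C)

print(s, t.shape)              # a scalar, and an array with one free index
```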


Multi-index notation

If a tensor has a list of all upper or lower indices, one shorthand is to use a capital letter for the list:

: A_{i_1 \cdots i_n} B^{i_1 \cdots i_n \, j_1 \cdots j_m} C_{j_1 \cdots j_m} \equiv A_I B^{IJ} C_J \,,

where I = i_1 \, i_2 \cdots i_n and J = j_1 \, j_2 \cdots j_m.


Sequential summation

A pair of vertical bars | ⋅ | around a set of all-upper indices or all-lower indices (but not both), associated with contraction with another set of indices when the expression is completely antisymmetric in each of the two sets of indices:

: A_{|\alpha\beta\gamma|} B^{\alpha\beta\gamma} = A_{\alpha\beta\gamma} B^{|\alpha\beta\gamma|} = \sum_{\alpha<\beta<\gamma} A_{\alpha\beta\gamma} B^{\alpha\beta\gamma}

means a restricted sum over index values, where each index is constrained to being strictly less than the next. More than one group can be summed in this way, for example:

: \begin{align}
& A_{|\alpha\beta\gamma|}{}^{|\delta\epsilon\cdots\lambda|} B^{\alpha\beta\gamma}{}_{|\mu\nu\cdots\zeta|} C^{\delta\epsilon\cdots\lambda\,\mu\nu\cdots\zeta} \\
= {} & \sum_{\alpha<\beta<\gamma} ~ \sum_{\delta<\epsilon<\cdots<\lambda} ~ \sum_{\mu<\nu<\cdots<\zeta} A_{\alpha\beta\gamma}{}^{\delta\epsilon\cdots\lambda} B^{\alpha\beta\gamma}{}_{\mu\nu\cdots\zeta} C^{\delta\epsilon\cdots\lambda\,\mu\nu\cdots\zeta}
\end{align}

When using multi-index notation, an underarrow is placed underneath the block of indices:

: A_{\underset{\rightharpoondown}{P}}{}^{\underset{\rightharpoondown}{Q}} B^{P}{}_{\underset{\rightharpoondown}{R}} C^{QR} = \sum_{\underset{\rightharpoondown}{P}} \sum_{\underset{\rightharpoondown}{Q}} \sum_{\underset{\rightharpoondown}{R}} A_P{}^Q B^P{}_R C^{QR}

where

: \underset{\rightharpoondown}{P} = |\alpha\beta\gamma| \,,\quad \underset{\rightharpoondown}{Q} = |\delta\epsilon\cdots\lambda| \,,\quad \underset{\rightharpoondown}{R} = |\mu\nu\cdots\zeta|


Raising and lowering indices

By contracting an index with a non-singular metric tensor, the type of a tensor can be changed, converting a lower index to an upper index or vice versa:

: B^\gamma{}_\beta = g^{\gamma\alpha} A_{\alpha\beta} \quad \text{and} \quad A_{\alpha\beta} = g_{\alpha\gamma} B^\gamma{}_\beta \,.

The base symbol in many cases is retained (e.g. using A where B appears here), and when there is no ambiguity, repositioning an index may be taken to imply this operation.
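As a numerical sketch (illustration only; a Minkowski metric with signature (−,+,+,+) is assumed here), raising the first index of a covariant array A_{\alpha\beta} amounts to contracting it with the inverse metric g^{\gamma\alpha}:

```python
import numpy as np

g = np.diag([-1.0, 1.0, 1.0, 1.0])   # metric g_{alpha beta}, assumed for illustration
g_inv = np.linalg.inv(g)             # inverse metric g^{gamma alpha}

A = np.random.rand(4, 4)             # components A_{alpha beta}

# B^gamma_beta = g^{gamma alpha} A_{alpha beta}
B = np.einsum("ga,ab->gb", g_inv, A)

# Lowering the raised index again recovers the original components.
assert np.allclose(np.einsum("cg,gb->cb", g, B), A)
```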


Correlations between index positions and invariance

This table summarizes how the manipulation of covariant and contravariant indices fits in with invariance under a passive transformation between bases, with the components of each basis set in terms of the other reflected in the first column. The barred indices refer to the final coordinate system after the transformation. The Kronecker delta is used; see also below.


General outlines for index notation and operations

Tensors are equal if and only if every corresponding component is equal; e.g., tensor A equals tensor B if and only if

: A^\alpha{}_{\beta\gamma} = B^\alpha{}_{\beta\gamma}

for all \alpha, \beta, \gamma. Consequently, there are facets of the notation that are useful in checking that an equation makes sense (an analogous procedure to dimensional analysis).


Free and dummy indices

Indices not involved in contractions are called ''free indices''. Indices used in contractions are termed ''dummy indices'', or ''summation indices''.


A tensor equation represents many ordinary (real-valued) equations

The components of tensors (like A^\alpha, B_\beta, etc.) are just real numbers. Since the indices take various integer values to select specific components of the tensors, a single tensor equation represents many ordinary equations. If a tensor equality has n free indices, and if the dimensionality of the underlying vector space is m, the equality represents m^n equations: each index takes on every value of a specific set of values. For instance, if

: A^\alpha B_\beta{}^\gamma C_{\gamma\delta} + D^\alpha{}_\beta E_\delta = T^\alpha{}_{\beta\delta}

is in four dimensions (that is, each index runs from 0 to 3 or from 1 to 4), then because there are three free indices (\alpha, \beta, \delta), there are 4^3 = 64 equations. Three of these are:

: \begin{align}
A^0 B_1{}^0 C_{00} + A^0 B_1{}^1 C_{10} + A^0 B_1{}^2 C_{20} + A^0 B_1{}^3 C_{30} + D^0{}_1 E_0 &= T^0{}_{10} \\
A^1 B_0{}^0 C_{00} + A^1 B_0{}^1 C_{10} + A^1 B_0{}^2 C_{20} + A^1 B_0{}^3 C_{30} + D^1{}_0 E_0 &= T^1{}_{00} \\
A^1 B_2{}^0 C_{02} + A^1 B_2{}^1 C_{12} + A^1 B_2{}^2 C_{22} + A^1 B_2{}^3 C_{32} + D^1{}_2 E_2 &= T^1{}_{22} \,.
\end{align}

This illustrates the compactness and efficiency of using index notation: many equations which all share a similar structure can be collected into one simple tensor equation.
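A minimal numerical sketch (with arbitrary component arrays) shows how a single contraction evaluates all 4^3 = 64 component equations of the left-hand side at once:

```python
import numpy as np

n = 4
A = np.random.rand(n)          # A^alpha
B = np.random.rand(n, n)       # B_beta^gamma
C = np.random.rand(n, n)       # C_{gamma delta}
D = np.random.rand(n, n)       # D^alpha_beta
E = np.random.rand(n)          # E_delta

# T^alpha_{beta delta} = A^alpha B_beta^gamma C_{gamma delta} + D^alpha_beta E_delta
T = np.einsum("a,bg,gd->abd", A, B, C) + np.einsum("ab,d->abd", D, E)

# One of the 64 ordinary equations, written out explicitly for alpha=0, beta=1, delta=0:
lhs = sum(A[0] * B[1, g] * C[g, 0] for g in range(n)) + D[0, 1] * E[0]
assert np.isclose(lhs, T[0, 1, 0])
```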


Indices are replaceable labels

Replacing any index symbol throughout by another leaves the tensor equation unchanged (provided there is no conflict with other symbols already used). This can be useful when manipulating indices, such as using index notation to verify vector calculus identities or identities of the Kronecker delta and Levi-Civita symbol (see also below). An example of a correct change is:

: A^\alpha B_\beta{}^\gamma C_{\gamma\delta} + D^\alpha{}_\beta E_\delta \rightarrow A^\lambda B_\beta{}^\mu C_{\mu\delta} + D^\lambda{}_\beta E_\delta \,,

whereas an erroneous change is:

: A^\alpha B_\beta{}^\gamma C_{\gamma\delta} + D^\alpha{}_\beta E_\delta \nrightarrow A^\lambda B_\beta{}^\gamma C_{\mu\delta} + D^\alpha{}_\beta E_\delta \,.

In the first replacement, \lambda replaced \alpha and \mu replaced \gamma ''everywhere'', so the expression still has the same meaning. In the second, \lambda did not fully replace \alpha, and \mu did not fully replace \gamma (incidentally, the contraction on the \gamma index became a tensor product), which is entirely inconsistent for reasons shown next.


Indices are the same in every term

The free indices in a tensor expression always appear in the same (upper or lower) position throughout every term, and in a tensor equation the free indices are the same on each side. Dummy indices (which implies a summation over that index) need not be the same, for example:

: A^\alpha B_\beta{}^\gamma C_{\gamma\delta} + D^\alpha{}_\delta E_\beta = T^\alpha{}_{\beta\delta}

as for an erroneous expression:

: A^\alpha B_\beta{}^\gamma C_{\gamma\delta} + D_{\alpha\beta}{}^\gamma E^\delta \,.

In other words, non-repeated indices must be of the same type in every term of the equation. In the above identity, \alpha, \beta, \delta line up throughout and \gamma occurs twice in one term due to a contraction (once as an upper index and once as a lower index), and thus it is a valid expression. In the invalid expression, while \beta lines up, \alpha and \delta do not, and \gamma appears twice in one term (contraction) ''and'' once in another term, which is inconsistent.


Brackets and punctuation used once where implied

When applying a rule to a number of indices (differentiation, symmetrization, etc., shown next), the bracket or punctuation symbols denoting the rules are only shown on one group of the indices to which they apply. If the brackets enclose ''covariant indices'', the rule applies only to ''all covariant indices enclosed in the brackets'', not to any contravariant indices which happen to be placed intermediately between the brackets. Similarly, if brackets enclose ''contravariant indices'', the rule applies only to ''all enclosed contravariant indices'', not to intermediately placed covariant indices.


Symmetric and antisymmetric parts


Symmetric part of tensor

Parentheses, ( ), around multiple indices denote the symmetrized part of the tensor. When symmetrizing p indices using \sigma to range over permutations of the numbers 1 to p, one takes a sum over the permutations of those indices \alpha_{\sigma(i)} for i = 1, 2, 3, \ldots, p, and then divides by the number of permutations:

: A_{(\alpha_1\alpha_2\cdots\alpha_p)\alpha_{p+1}\cdots\alpha_q} = \dfrac{1}{p!} \sum_{\sigma} A_{\alpha_{\sigma(1)}\alpha_{\sigma(2)}\cdots\alpha_{\sigma(p)}\alpha_{p+1}\cdots\alpha_q} \,.

For example, two symmetrizing indices mean there are two indices to permute and sum over:

: A_{(\alpha\beta)\gamma\cdots} = \dfrac{1}{2!} \left(A_{\alpha\beta\gamma\cdots} + A_{\beta\alpha\gamma\cdots} \right)

while for three symmetrizing indices, there are three indices to sum over and permute:

: A_{(\alpha\beta\gamma)\delta\cdots} = \dfrac{1}{3!} \left(A_{\alpha\beta\gamma\delta\cdots} + A_{\alpha\gamma\beta\delta\cdots} + A_{\beta\alpha\gamma\delta\cdots} + A_{\beta\gamma\alpha\delta\cdots} + A_{\gamma\alpha\beta\delta\cdots} + A_{\gamma\beta\alpha\delta\cdots} \right)

The symmetrization is distributive over addition;

: A_{(\alpha} \left(B_{\beta)\gamma\cdots} + C_{\beta)\gamma\cdots} \right) = A_{(\alpha}B_{\beta)\gamma\cdots} + A_{(\alpha}C_{\beta)\gamma\cdots}

Indices are not part of the symmetrization when they are:
* not on the same level, for example;
*: A_{(\alpha}B^\gamma{}_{\beta)} = \dfrac{1}{2!} \left(A_\alpha B^\gamma{}_\beta + A_\beta B^\gamma{}_\alpha \right)
* within the parentheses and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example;
*: A_{(\alpha}B_{|\gamma|\beta)} = \dfrac{1}{2!} \left(A_\alpha B_{\gamma\beta} + A_\beta B_{\gamma\alpha} \right)

Here the \alpha and \beta indices are symmetrized, \gamma is not.
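As a computational sketch (the helper below is an illustrative assumption, not standard notation), the totally symmetric part can be formed by summing a component array over all permutations of its index positions and dividing by p!:

```python
import itertools
import math
import numpy as np

def symmetrize(A):
    """Totally symmetric part of A: sum over index permutations divided by p!."""
    p = A.ndim
    return sum(np.transpose(A, perm)
               for perm in itertools.permutations(range(p))) / math.factorial(p)

A = np.random.rand(4, 4, 4)                        # components A_{alpha beta gamma}
S = symmetrize(A)                                  # components A_(alpha beta gamma)
assert np.allclose(S, np.transpose(S, (1, 0, 2)))  # unchanged by swapping two indices
```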


Antisymmetric or alternating part of tensor

Square brackets, [ ], around multiple indices denote the ''anti''symmetrized part of the tensor. For p antisymmetrizing indices, the sum over the permutations of those indices \alpha_{\sigma(i)} multiplied by the signature of the permutation \sgn(\sigma) is taken, then divided by the number of permutations:

: \begin{align}
& A_{[\alpha_1\cdots\alpha_p]\alpha_{p+1}\cdots\alpha_q} \\
= {} & \dfrac{1}{p!} \sum_{\sigma} \sgn(\sigma) A_{\alpha_{\sigma(1)}\cdots\alpha_{\sigma(p)}\alpha_{p+1}\cdots\alpha_q} \\
= {} & \delta_{\alpha_1\cdots\alpha_p}^{\beta_1\cdots\beta_p} A_{\beta_1\cdots\beta_p\alpha_{p+1}\cdots\alpha_q} \,,
\end{align}

where \delta_{\alpha_1\cdots\alpha_p}^{\beta_1\cdots\beta_p} is the generalized Kronecker delta of degree 2p, with scaling as defined below. For example, two antisymmetrizing indices imply:

: A_{[\alpha\beta]\gamma\cdots} = \dfrac{1}{2!} \left(A_{\alpha\beta\gamma\cdots} - A_{\beta\alpha\gamma\cdots} \right)

while three antisymmetrizing indices imply:

: A_{[\alpha\beta\gamma]\delta\cdots} = \dfrac{1}{3!} \left(A_{\alpha\beta\gamma\delta\cdots} + A_{\beta\gamma\alpha\delta\cdots} + A_{\gamma\alpha\beta\delta\cdots} - A_{\alpha\gamma\beta\delta\cdots} - A_{\gamma\beta\alpha\delta\cdots} - A_{\beta\alpha\gamma\delta\cdots} \right)

as for a more specific example, if F_{\alpha\beta} represents the electromagnetic tensor, then the equation

: 0 = F_{[\alpha\beta,\gamma]} = \dfrac{1}{3!} \left( F_{\alpha\beta,\gamma} + F_{\beta\gamma,\alpha} + F_{\gamma\alpha,\beta} - F_{\beta\alpha,\gamma} - F_{\gamma\beta,\alpha} - F_{\alpha\gamma,\beta} \right) \,

represents Gauss's law for magnetism and Faraday's law of induction. As before, the antisymmetrization is distributive over addition;

: A_{[\alpha} \left(B_{\beta]\gamma\cdots} + C_{\beta]\gamma\cdots} \right) = A_{[\alpha}B_{\beta]\gamma\cdots} + A_{[\alpha}C_{\beta]\gamma\cdots}

As with symmetrization, indices are not antisymmetrized when they are:
* not on the same level, for example;
*: A_{[\alpha}B^\gamma{}_{\beta]} = \dfrac{1}{2!} \left(A_\alpha B^\gamma{}_\beta - A_\beta B^\gamma{}_\alpha \right)
* within the square brackets and between vertical bars (i.e. |⋅⋅⋅|), modifying the previous example;
*: A_{[\alpha}B_{|\gamma|\beta]} = \dfrac{1}{2!} \left(A_\alpha B_{\gamma\beta} - A_\beta B_{\gamma\alpha} \right)

Here the \alpha and \beta indices are antisymmetrized, \gamma is not.


Sum of symmetric and antisymmetric parts

Any tensor can be written as the sum of its symmetric and antisymmetric parts on two indices:

: A_{\alpha\beta} = A_{(\alpha\beta)} + A_{[\alpha\beta]}

as can be seen by adding the above expressions for A_{(\alpha\beta)} and A_{[\alpha\beta]}. This does not hold for other than two indices.
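For two indices the decomposition can be checked directly on a component array (a trivial numerical sketch):

```python
import numpy as np

A = np.random.rand(4, 4)          # components A_{alpha beta}
sym = 0.5 * (A + A.T)             # A_(alpha beta)
antisym = 0.5 * (A - A.T)         # A_[alpha beta]
assert np.allclose(A, sym + antisym)
```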


Differentiation

For compactness, derivatives may be indicated by adding indices after a comma or semicolon.


Partial derivative

While most of the expressions of the Ricci calculus are valid for arbitrary bases, the expressions involving partial derivatives of tensor components with respect to coordinates apply only with a coordinate basis: a basis that is defined through differentiation with respect to the coordinates. Coordinates are typically denoted by x^\mu, but do not in general form the components of a vector. In flat spacetime with linear coordinatization, a tuple of ''differences'' in coordinates, \Delta x^\mu, can be treated as a contravariant vector. With the same constraints on the space and on the choice of coordinate system, the partial derivatives with respect to the coordinates yield a result that is effectively covariant. Aside from use in this special case, the partial derivatives of components of tensors do not in general transform covariantly, but are useful in building expressions that are covariant, albeit still with a coordinate basis if the partial derivatives are explicitly used, as with the covariant, exterior and Lie derivatives below.

To indicate partial differentiation of the components of a tensor field with respect to a coordinate variable x^\gamma, a ''comma'' is placed before an appended lower index of the coordinate variable:

: A_{\alpha\beta,\gamma} = \dfrac{\partial}{\partial x^\gamma} A_{\alpha\beta}

This may be repeated (without adding further commas):

: A_{\alpha_1\alpha_2\cdots\alpha_p\,,\,\alpha_{p+1}\cdots\alpha_q} = \dfrac{\partial}{\partial x^{\alpha_q}} \cdots \dfrac{\partial}{\partial x^{\alpha_{p+2}}} \dfrac{\partial}{\partial x^{\alpha_{p+1}}} A_{\alpha_1\alpha_2\cdots\alpha_p} \,.

These components do ''not'' transform covariantly, unless the expression being differentiated is a scalar. This derivative is characterized by the product rule and the derivatives of the coordinates

: x^\alpha_{,\gamma} = \delta^\alpha_\gamma \,,

where \delta is the Kronecker delta.
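A symbolic sketch of the comma derivative (the coordinates, dimension and component functions below are assumptions chosen purely for illustration):

```python
import sympy as sp

# Coordinates on a 2-dimensional chart.
x = sp.symbols("x0 x1")

# A sample covariant tensor field A_{alpha beta} given by explicit component functions.
A = sp.Matrix([[x[0]**2, x[0] * x[1]],
               [sp.sin(x[1]), x[1]**3]])

# Comma derivative A_{alpha beta, gamma} = d A_{alpha beta} / d x^gamma,
# stored as a nested list indexed [alpha][beta][gamma].
A_comma = [[[sp.diff(A[a, b], x[g]) for g in range(2)]
            for b in range(2)]
           for a in range(2)]

print(A_comma[0][1][1])   # d(x0*x1)/dx1 = x0
```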


Covariant derivative

The covariant derivative is only defined if a connection is defined. For any tensor field, a ''semicolon'' ( ; ) placed before an appended lower (covariant) index indicates covariant differentiation. Less common alternatives to the semicolon include a ''forward slash'' ( / ) or, in three-dimensional curved space, a single vertical bar ( | ). The covariant derivative of a scalar function, a contravariant vector and a covariant vector are:

: f_{;\beta} = f_{,\beta}

: A^\alpha{}_{;\beta} = A^\alpha{}_{,\beta} + \Gamma^\alpha{}_{\gamma\beta} A^\gamma

: A_{\alpha;\beta} = A_{\alpha,\beta} - \Gamma^\gamma{}_{\alpha\beta} A_\gamma \,,

where \Gamma^\alpha{}_{\gamma\beta} are the connection coefficients. For an arbitrary tensor:

: \begin{align}
& T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_s;\gamma} \\
= {} & T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_s,\gamma} + \Gamma^{\alpha_1}{}_{\delta\gamma} T^{\delta\alpha_2\cdots\alpha_r}{}_{\beta_1\cdots\beta_s} + \cdots + \Gamma^{\alpha_r}{}_{\delta\gamma} T^{\alpha_1\cdots\alpha_{r-1}\delta}{}_{\beta_1\cdots\beta_s} \\
& - \Gamma^\delta{}_{\beta_1\gamma} T^{\alpha_1\cdots\alpha_r}{}_{\delta\beta_2\cdots\beta_s} - \cdots - \Gamma^\delta{}_{\beta_s\gamma} T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_{s-1}\delta} \,.
\end{align}

An alternative notation for the covariant derivative of any tensor is the subscripted nabla symbol \nabla_\beta. For the case of a vector field A^\alpha:

: \nabla_\beta A^\alpha = A^\alpha{}_{;\beta} \,.

The covariant formulation of the directional derivative of any tensor field along a vector v^\gamma may be expressed as its contraction with the covariant derivative, e.g.:

: v^\gamma A_{\alpha;\gamma} \,.

The components of this derivative of a tensor field transform covariantly, and hence form another tensor field, despite subexpressions (the partial derivative and the connection coefficients) separately not transforming covariantly. This derivative is characterized by the product rule:

: (A^\alpha{}_\beta B^\gamma{}_\delta)_{;\epsilon} = A^\alpha{}_{\beta;\epsilon} B^\gamma{}_\delta + A^\alpha{}_\beta B^\gamma{}_{\delta;\epsilon} \,.
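A symbolic sketch, assuming the round metric on a unit 2-sphere purely as an example: the connection coefficients are computed from the metric (Christoffel symbols of the second kind, see below) and then used in A^\alpha{}_{;\beta} = A^\alpha{}_{,\beta} + \Gamma^\alpha{}_{\gamma\beta} A^\gamma:

```python
import sympy as sp

# Coordinates (theta, phi) and the round metric on the unit 2-sphere (assumed example).
th, ph = sp.symbols("theta phi", positive=True)
x = [th, ph]
n = 2
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
g_inv = g.inv()

# Christoffel symbols Gamma^a_{bc} = (1/2) g^{ad} (g_{db,c} + g_{dc,b} - g_{bc,d}).
Gamma = [[[sum(sp.Rational(1, 2) * g_inv[a, d]
               * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
               for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

# A sample contravariant vector field A^alpha.
A = [sp.sin(th), sp.cos(ph)]

# Covariant derivative A^alpha_{;beta} = A^alpha_{,beta} + Gamma^alpha_{gamma beta} A^gamma.
cov_dA = [[sp.simplify(sp.diff(A[a], x[b]) + sum(Gamma[a][c][b] * A[c] for c in range(n)))
           for b in range(n)] for a in range(n)]

print(cov_dA[1][0])   # the (alpha = phi, beta = theta) component
```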


Connection types

A Koszul connection on the tangent bundle of a differentiable manifold is called an affine connection. A connection is a metric connection when the covariant derivative of the metric tensor vanishes:

: g_{\alpha\beta;\gamma} = 0 \,.

An affine connection that is also a metric connection is called a Riemannian connection. A Riemannian connection that is torsion-free (i.e., for which the torsion tensor vanishes: T^\alpha{}_{\beta\gamma} = 0) is a Levi-Civita connection. The \Gamma^\alpha{}_{\beta\gamma} for a Levi-Civita connection in a coordinate basis are called Christoffel symbols of the second kind.


Exterior derivative

The exterior derivative of a totally antisymmetric type (0, s) tensor field with components A_{\alpha_1\cdots\alpha_s} (also called a differential form) is a derivative that is covariant under basis transformations. It does not depend on either a metric tensor or a connection: it requires only the structure of a differentiable manifold. In a coordinate basis, it may be expressed as the antisymmetrization of the partial derivatives of the tensor components:

: (\mathrm{d}A)_{\gamma\alpha_1\cdots\alpha_s} = \frac{\partial}{\partial x^{[\gamma}} A_{\alpha_1\cdots\alpha_s]} = A_{[\alpha_1\cdots\alpha_s,\gamma]} \,.

This derivative is not defined on any tensor field with contravariant indices or that is not totally antisymmetric. It is characterized by a graded product rule.


Lie derivative

The Lie derivative is another derivative that is covariant under basis transformations. Like the exterior derivative, it does not depend on either a metric tensor or a connection. The Lie derivative of a type (r, s) tensor field T along (the flow of) a contravariant vector field X^\gamma may be expressed using a coordinate basis as

: \begin{align}
& (\mathcal{L}_X T)^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_s} \\
= {} & X^\gamma T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_s,\gamma} - X^{\alpha_1}{}_{,\gamma} T^{\gamma\alpha_2\cdots\alpha_r}{}_{\beta_1\cdots\beta_s} - \cdots - X^{\alpha_r}{}_{,\gamma} T^{\alpha_1\cdots\alpha_{r-1}\gamma}{}_{\beta_1\cdots\beta_s} \\
& + X^\gamma{}_{,\beta_1} T^{\alpha_1\cdots\alpha_r}{}_{\gamma\beta_2\cdots\beta_s} + \cdots + X^\gamma{}_{,\beta_s} T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_{s-1}\gamma} \,.
\end{align}

This derivative is characterized by the product rule and the fact that the Lie derivative of a contravariant vector field along itself is zero:

: (\mathcal{L}_X X)^\alpha = X^\gamma X^\alpha{}_{,\gamma} - X^\alpha{}_{,\gamma} X^\gamma = 0 \,.
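A symbolic sketch for the simplest case, the Lie derivative of one contravariant vector field along another (the two-dimensional coordinates and component functions are illustrative assumptions):

```python
import sympy as sp

x0, x1 = sp.symbols("x0 x1")
coords = [x0, x1]

# (L_X Y)^a = X^c Y^a_{,c} - X^a_{,c} Y^c  in a coordinate basis.
def lie(X, Y):
    return [sp.simplify(sum(X[c] * sp.diff(Y[a], coords[c])
                            - Y[c] * sp.diff(X[a], coords[c])
                            for c in range(2)))
            for a in range(2)]

X = [x1, -x0]
Y = [x0**2, sp.sin(x1)]

print(lie(X, Y))   # components of (L_X Y)^a
print(lie(X, X))   # [0, 0]: the Lie derivative of X along itself vanishes
```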


Notable tensors


Kronecker delta

The Kronecker delta is like the identity matrix when multiplied and contracted:

: \begin{align}
\delta^\alpha{}_\beta \, A^\beta &= A^\alpha \\
\delta^\alpha{}_\beta \, B_\alpha &= B_\beta \,.
\end{align}

The components \delta^\alpha{}_\beta are the same in any basis and form an invariant tensor of type (1, 1), i.e. the identity of the tangent bundle over the identity mapping of the base manifold, and so its trace is an invariant. Its trace is the dimensionality of the space; for example, in four-dimensional spacetime,

: \delta^\rho{}_\rho = \delta^0{}_0 + \delta^1{}_1 + \delta^2{}_2 + \delta^3{}_3 = 4 \,.

The Kronecker delta is one of the family of generalized Kronecker deltas. The generalized Kronecker delta of degree 2p may be defined in terms of the Kronecker delta by (a common definition includes an additional multiplier of p! on the right):

: \delta^{\alpha_1 \cdots \alpha_p}_{\beta_1 \cdots \beta_p} = \delta^{\alpha_1}_{[\beta_1} \cdots \delta^{\alpha_p}_{\beta_p]} \,,

and acts as an antisymmetrizer on p indices:

: \delta^{\alpha_1 \cdots \alpha_p}_{\beta_1 \cdots \beta_p} \, A^{\beta_1 \cdots \beta_p} = A^{[\alpha_1 \cdots \alpha_p]} \,.
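A small numerical sketch of these identities (with the dimension chosen arbitrarily; einsum again only mimics the component arithmetic):

```python
import numpy as np

n = 4
delta = np.eye(n)                      # components delta^alpha_beta

A = np.random.rand(n)                  # A^beta
B = np.random.rand(n)                  # B_alpha

assert np.allclose(np.einsum("ab,b->a", delta, A), A)   # delta^a_b A^b = A^a
assert np.allclose(np.einsum("ab,a->b", delta, B), B)   # delta^a_b B_a = B_b
assert np.isclose(np.einsum("aa->", delta), n)          # the trace is the dimensionality

# Generalized Kronecker delta of degree 4 (the 1/2! comes from the bracket convention):
gdelta = 0.5 * (np.einsum("am,bn->abmn", delta, delta)
                - np.einsum("an,bm->abmn", delta, delta))
M = np.random.rand(n, n)
assert np.allclose(np.einsum("abmn,mn->ab", gdelta, M),
                   0.5 * (M - M.T))                      # it antisymmetrizes two indices
```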


Torsion tensor

An affine connection has a torsion tensor T^\alpha{}_{\beta\gamma}:

: T^\alpha{}_{\beta\gamma} = \Gamma^\alpha{}_{\beta\gamma} - \Gamma^\alpha{}_{\gamma\beta} - \gamma^\alpha{}_{\beta\gamma} \,,

where \gamma^\alpha{}_{\beta\gamma} are given by the components of the Lie bracket of the local basis, which vanish when it is a coordinate basis. For a Levi-Civita connection this tensor is defined to be zero, which for a coordinate basis gives the equations

: \Gamma^\alpha{}_{\beta\gamma} = \Gamma^\alpha{}_{\gamma\beta} \,.


Riemann curvature tensor

If this tensor is defined as

: R^\rho{}_{\sigma\mu\nu} = \Gamma^\rho{}_{\nu\sigma,\mu} - \Gamma^\rho{}_{\mu\sigma,\nu} + \Gamma^\rho{}_{\mu\lambda}\Gamma^\lambda{}_{\nu\sigma} - \Gamma^\rho{}_{\nu\lambda}\Gamma^\lambda{}_{\mu\sigma} \,,

then it is the commutator of the covariant derivative with itself:

: A_{\nu;\rho\sigma} - A_{\nu;\sigma\rho} = A_\beta R^\beta{}_{\nu\rho\sigma} \,,

since the connection is torsionless, which means that the torsion tensor vanishes. This can be generalized to get the commutator for two covariant derivatives of an arbitrary tensor as follows:

: \begin{align}
& T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_s;\gamma\delta} - T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_s;\delta\gamma} \\
= {} & - R^{\alpha_1}{}_{\rho\gamma\delta} T^{\rho\alpha_2\cdots\alpha_r}{}_{\beta_1\cdots\beta_s} - \cdots - R^{\alpha_r}{}_{\rho\gamma\delta} T^{\alpha_1\cdots\alpha_{r-1}\rho}{}_{\beta_1\cdots\beta_s} \\
& + R^\sigma{}_{\beta_1\gamma\delta} T^{\alpha_1\cdots\alpha_r}{}_{\sigma\beta_2\cdots\beta_s} + \cdots + R^\sigma{}_{\beta_s\gamma\delta} T^{\alpha_1\cdots\alpha_r}{}_{\beta_1\cdots\beta_{s-1}\sigma} \,,
\end{align}

which are often referred to as the ''Ricci identities''.
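A symbolic sketch evaluating the component formula for R^\rho{}_{\sigma\mu\nu} on the unit 2-sphere (the same assumed example as above, with the setup repeated so the block stands alone):

```python
import sympy as sp

th, ph = sp.symbols("theta phi", positive=True)
x = [th, ph]
n = 2
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])
g_inv = g.inv()

# Christoffel symbols Gamma^a_{bc} of the Levi-Civita connection.
Gamma = [[[sum(sp.Rational(1, 2) * g_inv[a, d]
               * (sp.diff(g[d, b], x[c]) + sp.diff(g[d, c], x[b]) - sp.diff(g[b, c], x[d]))
               for d in range(n))
           for c in range(n)] for b in range(n)] for a in range(n)]

# R^r_{s m v} = Gamma^r_{v s, m} - Gamma^r_{m s, v}
#             + Gamma^r_{m l} Gamma^l_{v s} - Gamma^r_{v l} Gamma^l_{m s}
def riemann(r, s, m, v):
    return sp.simplify(
        sp.diff(Gamma[r][v][s], x[m]) - sp.diff(Gamma[r][m][s], x[v])
        + sum(Gamma[r][m][l] * Gamma[l][v][s] - Gamma[r][v][l] * Gamma[l][m][s]
              for l in range(n)))

print(riemann(0, 1, 0, 1))   # sin(theta)**2, the expected curvature of the unit sphere
```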


Metric tensor

The metric tensor g_{\alpha\beta} is used for lowering indices and gives the length of any space-like curve

: \text{length} = \int_{\gamma_1}^{\gamma_2} \sqrt{ g_{\alpha\beta} \dfrac{dx^\alpha}{d\gamma} \dfrac{dx^\beta}{d\gamma} } \, d\gamma \,,

where \gamma is any smooth strictly monotone parameterization of the path. It also gives the duration of any time-like curve

: \text{duration} = \int_{\gamma_1}^{\gamma_2} \sqrt{ \frac{-1}{c^2} g_{\alpha\beta} \dfrac{dx^\alpha}{d\gamma} \dfrac{dx^\beta}{d\gamma} } \, d\gamma \,,

where \gamma is any smooth strictly monotone parameterization of the trajectory. See also ''Line element''. The inverse matrix g^{\alpha\beta} of the metric tensor is another important tensor, used for raising indices:

: g^{\alpha\beta} g_{\beta\gamma} = \delta^\alpha{}_\gamma \,.
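As a quick symbolic sketch of the length formula (using the assumed unit 2-sphere again and a curve chosen for illustration):

```python
import sympy as sp

# The equator of the unit 2-sphere: theta = pi/2, phi = gamma, for gamma in [0, 2*pi].
gamma = sp.symbols("gamma")
th, ph = sp.pi / 2, gamma
g = sp.Matrix([[1, 0], [0, sp.sin(th)**2]])

# length = integral of sqrt( g_{alpha beta} dx^alpha/dgamma dx^beta/dgamma ) dgamma
xdot = sp.Matrix([sp.diff(th, gamma), sp.diff(ph, gamma)])
integrand = sp.sqrt((xdot.T * g * xdot)[0, 0])
print(sp.integrate(integrand, (gamma, 0, 2 * sp.pi)))   # 2*pi, the circumference
```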


See also

* Abstract index notation
* Connection
* Exterior algebra
* Differential form
* Hodge star operator
* Holonomic basis
* Metric tensor
* Penrose graphical notation
* Regge calculus
* Ricci decomposition
* Tensor (intrinsic definition)
* Tensor calculus
* Tensor field

