Euclidean Tensor

In geometry and linear algebra, a Cartesian tensor uses an orthonormal basis to represent a tensor in a Euclidean space in the form of components. Converting a tensor's components from one such basis to another is done through an orthogonal transformation.

The most familiar coordinate systems are the two-dimensional and three-dimensional Cartesian coordinate systems. Cartesian tensors may be used with any Euclidean space, or more technically, any finite-dimensional vector space over the field of real numbers that has an inner product.

Use of Cartesian tensors occurs in physics and engineering, such as with the Cauchy stress tensor and the moment of inertia tensor in rigid body dynamics. Sometimes general curvilinear coordinates are convenient, as in high-deformation continuum mechanics, or even necessary, as in general relativity. While orthonormal bases may be found for some such coordinate systems (e.g. tangent to spherical coordinates), Cartesian tensors may provide considerable simplification for applications in which rotations of rectilinear coordinate axes suffice. The transformation is a passive transformation, since the coordinates are changed and not the physical system.


Cartesian basis and related terminology


Vectors in three dimensions

In 3D Euclidean space, \mathbb{R}^3, the standard basis is ex, ey, ez. Each basis vector points along the x-, y-, and z-axes, and the vectors are all unit vectors (or normalized), so the basis is orthonormal.

Throughout, when referring to Cartesian coordinates in three dimensions, a right-handed system is assumed, and this is much more common than a left-handed system in practice; see orientation (vector space) for details.

For Cartesian tensors of order 1, a Cartesian vector a can be written algebraically as a linear combination of the basis vectors ex, ey, ez:
:\mathbf{a} = a_\text{x}\mathbf{e}_\text{x} + a_\text{y}\mathbf{e}_\text{y} + a_\text{z}\mathbf{e}_\text{z}
where the coordinates of the vector with respect to the Cartesian basis are denoted ''a''x, ''a''y, ''a''z. It is common and helpful to display the basis vectors as column vectors
:\mathbf{e}_\text{x} = \begin{pmatrix} 1 \\ 0 \\ 0 \end{pmatrix} \,,\quad \mathbf{e}_\text{y} = \begin{pmatrix} 0 \\ 1 \\ 0 \end{pmatrix} \,,\quad \mathbf{e}_\text{z} = \begin{pmatrix} 0 \\ 0 \\ 1 \end{pmatrix}
when we have a coordinate vector in a column vector representation:
:\mathbf{a} = \begin{pmatrix} a_\text{x} \\ a_\text{y} \\ a_\text{z} \end{pmatrix}
A row vector representation is also legitimate, although in the context of general curvilinear coordinate systems the row and column vector representations are used separately for specific reasons – see Einstein notation and covariance and contravariance of vectors for why.

The term "component" of a vector is ambiguous: it could refer to:
*a specific coordinate of the vector such as ''a''z (a scalar), and similarly for ''x'' and ''y'', or
*the coordinate scalar-multiplying the corresponding basis vector, in which case the "''y''-component" of a is ''a''yey (a vector), and similarly for x and z.

A more general notation is tensor index notation, which has the flexibility of numerical values rather than fixed coordinate labels. The Cartesian labels are replaced by tensor indices in the basis vectors ex ↦ e1, ey ↦ e2, ez ↦ e3 and coordinates ''a''x ↦ ''a''1, ''a''y ↦ ''a''2, ''a''z ↦ ''a''3. In general, the notation e1, e2, e3 refers to ''any'' basis, and ''a''1, ''a''2, ''a''3 refers to the corresponding coordinate system; although here they are restricted to the Cartesian system. Then:
:\mathbf{a} = a_1\mathbf{e}_1 + a_2\mathbf{e}_2 + a_3\mathbf{e}_3 = \sum_{i=1}^3 a_i\mathbf{e}_i
It is standard to use the Einstein notation: the summation sign for summation over an index that is present exactly twice within a term may be suppressed for notational conciseness:
:\mathbf{a} = \sum_{i=1}^3 a_i\mathbf{e}_i \equiv a_i\mathbf{e}_i
An advantage of the index notation over coordinate-specific notations is the independence of the dimension of the underlying vector space, i.e. the same expression on the right hand side takes the same form in higher dimensions (see below). Previously, the Cartesian labels x, y, z were just labels and ''not'' indices. (It is informal to say "''i'' = x, y, z".)
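As a quick numerical illustration (a minimal NumPy sketch, not part of the original article; the array names and values are arbitrary), the decomposition a = ''a''ie''i'' and the suppressed summation can be checked directly:

```python
import numpy as np

# Standard orthonormal basis of R^3 as column vectors
e = np.eye(3)          # e[:, 0] = e_1, e[:, 1] = e_2, e[:, 2] = e_3

a_components = np.array([2.0, -1.0, 3.0])   # a_1, a_2, a_3

# a = a_i e_i  (sum over the repeated index i)
a = sum(a_components[i] * e[:, i] for i in range(3))

# The same contraction written with einsum, mirroring Einstein notation
a_einsum = np.einsum('i,ji->j', a_components, e)

assert np.allclose(a, a_components)
assert np.allclose(a_einsum, a)
```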


Second-order tensors in three dimensions

A dyadic tensor T is an order-2 tensor formed by the tensor product ⊗ of two Cartesian vectors a and b, written T = a ⊗ b. Analogous to vectors, it can be written as a linear combination of the tensor basis ex ⊗ ex ≡ exx, ex ⊗ ey ≡ exy, ..., ez ⊗ ez ≡ ezz (the right-hand side of each identity is only an abbreviation, nothing more):
:\begin{align} \mathbf{T} = \quad &\left(a_\text{x}\mathbf{e}_\text{x} + a_\text{y}\mathbf{e}_\text{y} + a_\text{z}\mathbf{e}_\text{z}\right)\otimes\left(b_\text{x}\mathbf{e}_\text{x} + b_\text{y}\mathbf{e}_\text{y} + b_\text{z}\mathbf{e}_\text{z}\right) \\ = \quad &a_\text{x} b_\text{x} \mathbf{e}_\text{x} \otimes \mathbf{e}_\text{x} + a_\text{x} b_\text{y}\mathbf{e}_\text{x} \otimes \mathbf{e}_\text{y} + a_\text{x} b_\text{z}\mathbf{e}_\text{x} \otimes \mathbf{e}_\text{z} \\ + &a_\text{y} b_\text{x}\mathbf{e}_\text{y} \otimes \mathbf{e}_\text{x} + a_\text{y} b_\text{y}\mathbf{e}_\text{y} \otimes \mathbf{e}_\text{y} + a_\text{y} b_\text{z}\mathbf{e}_\text{y} \otimes \mathbf{e}_\text{z} \\ + &a_\text{z} b_\text{x} \mathbf{e}_\text{z} \otimes \mathbf{e}_\text{x} + a_\text{z} b_\text{y}\mathbf{e}_\text{z} \otimes \mathbf{e}_\text{y} + a_\text{z} b_\text{z}\mathbf{e}_\text{z} \otimes \mathbf{e}_\text{z} \end{align}
Representing each basis tensor as a matrix:
:\begin{align} \mathbf{e}_\text{x} \otimes \mathbf{e}_\text{x} &\equiv \mathbf{e}_\text{xx} = \begin{pmatrix} 1 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}\,,& \mathbf{e}_\text{x} \otimes \mathbf{e}_\text{y} &\equiv \mathbf{e}_\text{xy} = \begin{pmatrix} 0 & 1 & 0\\ 0 & 0 & 0\\ 0 & 0 & 0 \end{pmatrix}\,,& \mathbf{e}_\text{z} \otimes \mathbf{e}_\text{z} &\equiv \mathbf{e}_\text{zz} = \begin{pmatrix} 0 & 0 & 0\\ 0 & 0 & 0\\ 0 & 0 & 1 \end{pmatrix} \end{align}
then T can be represented more systematically as a matrix:
:\mathbf{T} = \begin{pmatrix} a_\text{x} b_\text{x} & a_\text{x} b_\text{y} & a_\text{x} b_\text{z} \\ a_\text{y} b_\text{x} & a_\text{y} b_\text{y} & a_\text{y} b_\text{z} \\ a_\text{z} b_\text{x} & a_\text{z} b_\text{y} & a_\text{z} b_\text{z} \end{pmatrix}
See matrix multiplication for the notational correspondence between matrices and the dot and tensor products.

More generally, whether or not T is a tensor product of two vectors, it is always a linear combination of the basis tensors with coordinates ''T''xx, ''T''xy, ..., ''T''zz:
:\begin{align} \mathbf{T} = \quad &T_\text{xx}\mathbf{e}_\text{xx} + T_\text{xy}\mathbf{e}_\text{xy} + T_\text{xz}\mathbf{e}_\text{xz} \\ + &T_\text{yx}\mathbf{e}_\text{yx} + T_\text{yy}\mathbf{e}_\text{yy} + T_\text{yz}\mathbf{e}_\text{yz} \\ + &T_\text{zx}\mathbf{e}_\text{zx} + T_\text{zy}\mathbf{e}_\text{zy} + T_\text{zz}\mathbf{e}_\text{zz} \end{align}
while in terms of tensor indices:
:\mathbf{T} = T_{ij} \mathbf{e}_{ij} \equiv \sum_{ij} T_{ij} \mathbf{e}_i \otimes \mathbf{e}_j \,,
and in matrix form:
:\mathbf{T} = \begin{pmatrix} T_\text{xx} & T_\text{xy} & T_\text{xz} \\ T_\text{yx} & T_\text{yy} & T_\text{yz} \\ T_\text{zx} & T_\text{zy} & T_\text{zz} \end{pmatrix}
Second-order tensors occur naturally in physics and engineering when physical quantities have directional dependence in the system, often in a "stimulus-response" way. This can be seen mathematically through one aspect of tensors – they are multilinear functions. A second-order tensor T which takes in a vector u of some magnitude and direction will return a vector v of a different magnitude and, in general, a different direction to u. The notation used for functions in mathematical analysis leads us to write \mathbf{v} = \mathbf{T}(\mathbf{u}), while the same idea can be expressed in matrix and index notations (including the summation convention), respectively:
:\begin{align} \begin{pmatrix} v_\text{x} \\ v_\text{y} \\ v_\text{z} \end{pmatrix} &= \begin{pmatrix} T_\text{xx} & T_\text{xy} & T_\text{xz} \\ T_\text{yx} & T_\text{yy} & T_\text{yz} \\ T_\text{zx} & T_\text{zy} & T_\text{zz} \end{pmatrix}\begin{pmatrix} u_\text{x} \\ u_\text{y} \\ u_\text{z} \end{pmatrix}\,, & v_i &= T_{ij}u_j \end{align}
By "linear" is meant that, if u = ''ρ''r + ''σ''s for two scalars ''ρ'' and ''σ'' and vectors r and s, then in function and index notations:
:\begin{align} \mathbf{v} &= \mathbf{T}(\rho\mathbf{r} + \sigma\mathbf{s}) = \rho\mathbf{T}(\mathbf{r}) + \sigma\mathbf{T}(\mathbf{s}) \\ v_i &= T_{ij}(\rho r_j + \sigma s_j) = \rho T_{ij} r_j + \sigma T_{ij} s_j \end{align}
and similarly for the matrix notation. The function, matrix, and index notations all mean the same thing. The matrix forms provide a clear display of the components, while the index form allows easier tensor-algebraic manipulation of the formulae in a compact manner. Both provide the physical interpretation of ''directions''; vectors have one direction, while second-order tensors connect two directions together. One can associate a tensor index or coordinate label with a basis vector direction.

Second-order tensors are the minimum needed to describe changes in magnitudes and directions of vectors, since the dot product of two vectors is always a scalar, while the cross product of two vectors is always a pseudovector perpendicular to the plane defined by the vectors, so these products of vectors alone cannot obtain a new vector of any magnitude in any direction. (See also below for more on the dot and cross products.) The tensor product of two vectors is a second-order tensor, although this has no obvious directional interpretation by itself.

The previous idea can be continued: if T takes in two vectors p and q, it will return a scalar ''r''. In function notation we write ''r'' = T(p, q), while in matrix and index notations (including the summation convention) respectively:
:r = \begin{pmatrix} p_\text{x} & p_\text{y} & p_\text{z} \end{pmatrix}\begin{pmatrix} T_\text{xx} & T_\text{xy} & T_\text{xz} \\ T_\text{yx} & T_\text{yy} & T_\text{yz} \\ T_\text{zx} & T_\text{zy} & T_\text{zz} \end{pmatrix}\begin{pmatrix} q_\text{x} \\ q_\text{y} \\ q_\text{z} \end{pmatrix} = p_i T_{ij} q_j
The tensor T is linear in both input vectors. When vectors and tensors are written without reference to components, and indices are not used, sometimes a dot ⋅ is placed where summations over indices (known as tensor contractions) are taken. For the above cases:
:\begin{align} \mathbf{v} &= \mathbf{T}\cdot\mathbf{u}\\ r &= \mathbf{p}\cdot\mathbf{T}\cdot\mathbf{q} \end{align}
motivated by the dot product notation:
:\mathbf{a}\cdot\mathbf{b} \equiv a_i b_i
More generally, a tensor of order ''m'' which takes in ''n'' vectors (where ''n'' is between 0 and ''m'' inclusive) will return a tensor of order ''m'' − ''n''; see below for further generalizations and details.

The concepts above also apply to pseudovectors in the same way as for vectors. The vectors and tensors themselves can vary throughout space, in which case we have vector fields and tensor fields, and they can also depend on time.

Some examples follow. For the electrical conduction example, relating the current density J to the electric field E through the conductivity tensor σ, the index and matrix notations would be:
:\begin{align} J_i &= \sigma_{ij}E_j \equiv \sum_j \sigma_{ij}E_j \\ \begin{pmatrix} J_\text{x} \\ J_\text{y} \\ J_\text{z} \end{pmatrix} &= \begin{pmatrix} \sigma_\text{xx} & \sigma_\text{xy} & \sigma_\text{xz} \\ \sigma_\text{yx} & \sigma_\text{yy} & \sigma_\text{yz} \\ \sigma_\text{zx} & \sigma_\text{zy} & \sigma_\text{zz} \end{pmatrix} \begin{pmatrix} E_\text{x} \\ E_\text{y} \\ E_\text{z} \end{pmatrix} \end{align}
while for the rotational kinetic energy ''T'':
:\begin{align} T &= \frac{1}{2} \omega_i I_{ij} \omega_j \equiv \frac{1}{2} \sum_{ij} \omega_i I_{ij} \omega_j \,, \\ &= \frac{1}{2} \begin{pmatrix} \omega_\text{x} & \omega_\text{y} & \omega_\text{z} \end{pmatrix} \begin{pmatrix} I_\text{xx} & I_\text{xy} & I_\text{xz} \\ I_\text{yx} & I_\text{yy} & I_\text{yz} \\ I_\text{zx} & I_\text{zy} & I_\text{zz} \end{pmatrix} \begin{pmatrix} \omega_\text{x} \\ \omega_\text{y} \\ \omega_\text{z} \end{pmatrix} \,. \end{align}
See also constitutive equation for more specialized examples.
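The "stimulus-response" reading of a second-order tensor can be checked numerically. The following sketch (not from the original article; the conductivity and inertia values are purely illustrative) shows that the matrix and index (einsum) forms of the two examples above agree:

```python
import numpy as np

# Hypothetical anisotropic conductivity tensor (values illustrative only)
sigma = np.array([[3.0, 0.5, 0.0],
                  [0.5, 2.0, 0.1],
                  [0.0, 0.1, 1.5]])
E = np.array([1.0, -2.0, 0.5])             # electric field components E_x, E_y, E_z

J_matrix = sigma @ E                        # matrix form   J = sigma E
J_index  = np.einsum('ij,j->i', sigma, E)   # index form    J_i = sigma_ij E_j
assert np.allclose(J_matrix, J_index)

# Rotational kinetic energy  T = 1/2 omega_i I_ij omega_j, with an illustrative inertia tensor
I = np.diag([2.0, 3.0, 4.0])
omega = np.array([0.1, 0.0, 0.2])
T = 0.5 * np.einsum('i,ij,j->', omega, I, omega)
assert np.isclose(T, 0.5 * omega @ I @ omega)
```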


Vectors and tensors in ''n'' dimensions

In ''n''-dimensional Euclidean space over the real numbers, \mathbb{R}^n, the standard basis is denoted e1, e2, e3, ..., e''n''. Each basis vector e''i'' points along the positive ''xi'' axis, with the basis being orthonormal. Component ''j'' of e''i'' is given by the Kronecker delta:
:(\mathbf{e}_i)_j = \delta_{ij}
A vector in \mathbb{R}^n takes the form:
:\mathbf{a} = a_i\mathbf{e}_i \equiv \sum_i a_i\mathbf{e}_i \,.
Similarly for the order-2 tensor above, for each vector a and b in \mathbb{R}^n:
:\mathbf{a}\otimes\mathbf{b} = a_i b_j \mathbf{e}_{ij} \equiv \sum_{ij} a_i b_j \mathbf{e}_i \otimes \mathbf{e}_j \,,
or more generally:
:\mathbf{T} = T_{ij} \mathbf{e}_{ij} \equiv \sum_{ij} T_{ij} \mathbf{e}_i \otimes \mathbf{e}_j \,.
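A short sketch (not part of the original article; the dimension and component values are arbitrary) illustrating the Kronecker-delta basis and the dyadic expansion in ''n'' dimensions:

```python
import numpy as np

n = 5
e = np.eye(n)                    # (e_i)_j = delta_ij: row i is the i-th standard basis vector

a = np.arange(1.0, n + 1)        # arbitrary components a_1 ... a_n
b = np.linspace(0.0, 1.0, n)

# (a (x) b)_ij = a_i b_j as an order-2 array
dyad = np.einsum('i,j->ij', a, b)
assert np.allclose(dyad, np.outer(a, b))

# Expanding in the tensor basis e_i (x) e_j reproduces the same array
T = sum(a[i] * b[j] * np.outer(e[i], e[j]) for i in range(n) for j in range(n))
assert np.allclose(T, dyad)
```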


Transformations of Cartesian vectors (any number of dimensions)


Meaning of "invariance" under coordinate transformations

The position vector x in \mathbb{R}^n is a simple and common example of a vector, and can be represented in ''any'' coordinate system. Consider the case of rectangular coordinate systems with orthonormal bases only. It is possible to have a coordinate system with rectangular geometry if the basis vectors are all mutually perpendicular and not normalized, in which case the basis is ortho''gonal'' but not ortho''normal''. However, orthonormal bases are easier to manipulate and are often used in practice. The following results are true for orthonormal bases, not orthogonal ones.

In one rectangular coordinate system, x as a contravector has coordinates x^i and basis vectors \mathbf{e}_i, while as a covector it has coordinates x_i and basis covectors \mathbf{e}^i, and we have:
:\begin{align} \mathbf{x} &= x^i\mathbf{e}_i\,, & \mathbf{x} &= x_i\mathbf{e}^i \end{align}
In another rectangular coordinate system, x as a contravector has coordinates \bar{x}^i and bases \bar{\mathbf{e}}_i, while as a covector it has coordinates \bar{x}_i and bases \bar{\mathbf{e}}^i, and we have:
:\begin{align} \mathbf{x} &= \bar{x}^i\bar{\mathbf{e}}_i\,, & \mathbf{x} &= \bar{x}_i\bar{\mathbf{e}}^i \end{align}
Each new coordinate is a function of all the old ones, and vice versa for the inverse function:
:\begin{align} \bar{x}^i = \bar{x}^i\left(x^1, x^2, \ldots\right) \quad &\rightleftharpoons \quad x^i = x^i\left(\bar{x}^1, \bar{x}^2, \ldots\right) \\ \bar{x}_i = \bar{x}_i\left(x_1, x_2, \ldots\right) \quad &\rightleftharpoons \quad x_i = x_i\left(\bar{x}_1, \bar{x}_2, \ldots\right) \end{align}
and similarly each new basis vector is a function of all the old ones, and vice versa for the inverse function:
:\begin{align} \bar{\mathbf{e}}_j = \bar{\mathbf{e}}_j\left(\mathbf{e}_1, \mathbf{e}_2, \ldots\right) \quad &\rightleftharpoons \quad \mathbf{e}_j = \mathbf{e}_j\left(\bar{\mathbf{e}}_1, \bar{\mathbf{e}}_2, \ldots\right) \\ \bar{\mathbf{e}}^j = \bar{\mathbf{e}}^j\left(\mathbf{e}^1, \mathbf{e}^2, \ldots\right) \quad &\rightleftharpoons \quad \mathbf{e}^j = \mathbf{e}^j\left(\bar{\mathbf{e}}^1, \bar{\mathbf{e}}^2, \ldots\right) \end{align}
for all ''i'', ''j''.

A vector is invariant under any change of basis, so if coordinates transform according to a transformation matrix L, the bases transform according to the matrix inverse L−1, and conversely if the coordinates transform according to the inverse L−1, the bases transform according to the matrix L. The difference between each of these transformations is shown conventionally through the indices as superscripts for contravariance and subscripts for covariance, and the coordinates and bases are linearly transformed according to the following rules:
:\begin{align} \bar{x}^i &= \mathsf{L}^i{}_j\, x^j\,, & \bar{\mathbf{e}}_i &= \left(\boldsymbol{\mathsf{L}}^{-1}\right)^j{}_i\,\mathbf{e}_j\,, \\ \bar{x}_i &= \left(\boldsymbol{\mathsf{L}}^{-1}\right)^j{}_i\, x_j\,, & \bar{\mathbf{e}}^i &= \mathsf{L}^i{}_j\,\mathbf{e}^j\,, \end{align}
where \mathsf{L}^i{}_j represents the entries of the transformation matrix (row number is ''i'' and column number is ''j'') and \left(\boldsymbol{\mathsf{L}}^{-1}\right)^i{}_k denotes the entries of the inverse of the matrix \mathsf{L}^i{}_k.

If L is an orthogonal transformation (orthogonal matrix), the objects transforming by it are defined as Cartesian tensors. This geometrically has the interpretation that a rectangular coordinate system is mapped to another rectangular coordinate system, in which the norm of the vector x is preserved (and distances are preserved).

The determinant of L is det(L) = ±1, which corresponds to two types of orthogonal transformation: (+1) for rotations and (−1) for improper rotations (including reflections).

There are considerable algebraic simplifications; the matrix transpose is the inverse from the definition of an orthogonal transformation:
:\boldsymbol{\mathsf{L}}^\textsf{T} = \boldsymbol{\mathsf{L}}^{-1} \quad\Rightarrow\quad \left(\boldsymbol{\mathsf{L}}^{-1}\right)^i{}_j = \left(\boldsymbol{\mathsf{L}}^\textsf{T}\right)^i{}_j = \mathsf{L}^j{}_i
From the previous transformation rules, orthogonal transformations of covectors and contravectors are identical. There is no need to differentiate between raising and lowering indices, and in this context and applications to physics and engineering the indices are usually all subscripted to avoid confusion with exponents. All indices will be lowered in the remainder of this article. One can determine the actual raised and lowered indices by considering which quantities are covectors or contravectors, and the relevant transformation rules.

Exactly the same transformation rules apply to any vector a, not only the position vector. If its components ''a''''i'' do not transform according to the rules, a is not a vector.

Despite the similarity between the expressions above, for the change of coordinates such as \bar{\mathbf{x}} = \boldsymbol{\mathsf{L}}\mathbf{x}, and the action of a tensor on a vector like \mathbf{v} = \mathbf{T}\cdot\mathbf{u}, L is not a tensor, but T is. In the change of coordinates, L is a ''matrix'', used to relate two rectangular coordinate systems with orthonormal bases together. For the tensor relating a vector to a vector, the vectors and tensors throughout the equation all belong to the same coordinate system and basis.
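The invariance of the vector itself, with coordinates transforming by L and bases by the inverse, can be verified numerically. A minimal NumPy sketch (not from the original article; the rotation angle and components are illustrative):

```python
import numpy as np

# An orthogonal change of basis: rotation by 30 degrees about the z-axis
theta = np.radians(30.0)
L = np.array([[ np.cos(theta), np.sin(theta), 0.0],
              [-np.sin(theta), np.cos(theta), 0.0],
              [ 0.0,           0.0,           1.0]])
assert np.allclose(L.T @ L, np.eye(3))   # L^T = L^-1, so L is orthogonal

e = np.eye(3)                    # old basis vectors stored as rows: e[i] = e_i
x = np.array([1.0, 2.0, 3.0])    # components of a fixed vector in the old basis

x_bar = L @ x                    # coordinates transform with L
# Basis vectors transform with the inverse; with row storage this is (L^-1)^T,
# which for an orthogonal L is again L itself.
e_bar = np.linalg.inv(L).T @ e

# The vector itself is invariant: x_i e_i = x_bar_i e_bar_i
assert np.allclose(np.einsum('i,ij->j', x, e),
                   np.einsum('i,ij->j', x_bar, e_bar))
```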


Derivatives and Jacobian matrix elements

The entries of L are partial derivatives of the new or old coordinates with respect to the old or new coordinates, respectively. Differentiating \bar{x}_i with respect to x_k:
:\frac{\partial\bar{x}_i}{\partial x_k} = \frac{\partial}{\partial x_k}\left(\mathsf{L}_{ij}x_j\right) = \mathsf{L}_{ij}\frac{\partial x_j}{\partial x_k} = \delta_{jk}\mathsf{L}_{ij} = \mathsf{L}_{ik}
so
:\mathsf{L}^i{}_j \equiv \mathsf{L}_{ij} = \frac{\partial\bar{x}_i}{\partial x_j}
is an element of the Jacobian matrix. There is a (partially mnemonical) correspondence between index positions attached to L and in the partial derivative: ''i'' at the top and ''j'' at the bottom, in each case, although for Cartesian tensors the indices can be lowered.

Conversely, differentiating x_i with respect to \bar{x}_k:
:\frac{\partial x_i}{\partial\bar{x}_k} = \frac{\partial}{\partial\bar{x}_k}\left(\left(\boldsymbol{\mathsf{L}}^{-1}\right)_{ij}\bar{x}_j\right) = \left(\boldsymbol{\mathsf{L}}^{-1}\right)_{ij}\frac{\partial\bar{x}_j}{\partial\bar{x}_k} = \delta_{jk}\left(\boldsymbol{\mathsf{L}}^{-1}\right)_{ij} = \left(\boldsymbol{\mathsf{L}}^{-1}\right)_{ik}
so
:\left(\boldsymbol{\mathsf{L}}^{-1}\right)^i{}_j \equiv \left(\boldsymbol{\mathsf{L}}^{-1}\right)_{ij} = \frac{\partial x_i}{\partial\bar{x}_j}
is an element of the inverse Jacobian matrix, with a similar index correspondence. Many sources state transformations in terms of the partial derivatives:
:\begin{align} \bar{x}_i = \frac{\partial\bar{x}_i}{\partial x_j}\,x_j \quad &\rightleftharpoons \quad x_i = \frac{\partial x_i}{\partial\bar{x}_j}\,\bar{x}_j \end{align}
and the explicit matrix equations in 3d are:
:\begin{align} \bar{\mathbf{x}} &= \boldsymbol{\mathsf{L}}\mathbf{x} \\ \begin{pmatrix} \bar{x}_1 \\ \bar{x}_2 \\ \bar{x}_3 \end{pmatrix} &= \begin{pmatrix} \frac{\partial\bar{x}_1}{\partial x_1} & \frac{\partial\bar{x}_1}{\partial x_2} & \frac{\partial\bar{x}_1}{\partial x_3}\\ \frac{\partial\bar{x}_2}{\partial x_1} & \frac{\partial\bar{x}_2}{\partial x_2} & \frac{\partial\bar{x}_2}{\partial x_3}\\ \frac{\partial\bar{x}_3}{\partial x_1} & \frac{\partial\bar{x}_3}{\partial x_2} & \frac{\partial\bar{x}_3}{\partial x_3} \end{pmatrix}\begin{pmatrix} x_1\\ x_2\\ x_3 \end{pmatrix} \end{align}
similarly for
:\mathbf{x} = \boldsymbol{\mathsf{L}}^{-1}\bar{\mathbf{x}} = \boldsymbol{\mathsf{L}}^\textsf{T}\bar{\mathbf{x}}
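As a numerical cross-check (a sketch under the assumption of a linear, orthogonal change of coordinates; not from the original article), the entries of L can be recovered as partial derivatives by finite differences:

```python
import numpy as np

theta = np.radians(40.0)
L = np.array([[np.cos(theta), -np.sin(theta), 0.0],
              [np.sin(theta),  np.cos(theta), 0.0],
              [0.0,            0.0,           1.0]])

def new_coords(x):
    """Linear change of coordinates: x_bar = L x."""
    return L @ x

# Numerical Jacobian J[i, j] = d x_bar_i / d x_j at an arbitrary point
x0 = np.array([0.3, -1.2, 2.0])
eps = 1e-6
J = np.empty((3, 3))
for j in range(3):
    dx = np.zeros(3)
    dx[j] = eps
    J[:, j] = (new_coords(x0 + dx) - new_coords(x0 - dx)) / (2 * eps)

assert np.allclose(J, L, atol=1e-6)          # the entries of L are the partial derivatives
assert np.allclose(np.linalg.inv(L), L.T)    # and the inverse Jacobian is just L^T
```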


Projections along coordinate axes

As with all linear transformations, L depends on the basis chosen. For two orthonormal bases
:\begin{align} \bar{\mathbf{e}}_i\cdot\bar{\mathbf{e}}_j &= \mathbf{e}_i\cdot\mathbf{e}_j = \delta_{ij}\,, & \left|\mathbf{e}_i\right| &= \left|\bar{\mathbf{e}}_i\right| = 1\,, \end{align}
* projecting x to the \bar{x} axes: \bar{x}_i = \bar{\mathbf{e}}_i\cdot\mathbf{x} = \bar{\mathbf{e}}_i\cdot x_j\mathbf{e}_j = \mathsf{L}_{ij}x_j \,,
* projecting x to the ''x'' axes: x_i = \mathbf{e}_i\cdot\mathbf{x} = \mathbf{e}_i\cdot\bar{x}_j\bar{\mathbf{e}}_j = \left(\boldsymbol{\mathsf{L}}^{-1}\right)_{ij}\bar{x}_j \,.
Hence the components reduce to direction cosines between the \bar{x}_i and x_j axes:
:\begin{align} \mathsf{L}_{ij} &= \bar{\mathbf{e}}_i\cdot\mathbf{e}_j = \cos\theta_{ij} \\ \left(\boldsymbol{\mathsf{L}}^{-1}\right)_{ij} &= \mathbf{e}_i\cdot\bar{\mathbf{e}}_j = \cos\theta_{ji} \end{align}
where ''θij'' and ''θji'' are the angles between the \bar{x}_i and x_j axes. In general, ''θij'' is not equal to ''θji'', because for example ''θ''12 and ''θ''21 are two different angles. The transformation of coordinates can be written in terms of these dot products, and the explicit matrix equations in 3d are:
:\begin{align} \bar{\mathbf{x}} &= \boldsymbol{\mathsf{L}}\mathbf{x} \\ \begin{pmatrix} \bar{x}_1\\ \bar{x}_2\\ \bar{x}_3 \end{pmatrix} &= \begin{pmatrix}\bar{\mathbf{e}}_1\cdot\mathbf{e}_1 & \bar{\mathbf{e}}_1\cdot\mathbf{e}_2 & \bar{\mathbf{e}}_1\cdot\mathbf{e}_3\\ \bar{\mathbf{e}}_2\cdot\mathbf{e}_1 & \bar{\mathbf{e}}_2\cdot\mathbf{e}_2 & \bar{\mathbf{e}}_2\cdot\mathbf{e}_3\\ \bar{\mathbf{e}}_3\cdot\mathbf{e}_1 & \bar{\mathbf{e}}_3\cdot\mathbf{e}_2 & \bar{\mathbf{e}}_3\cdot\mathbf{e}_3 \end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ x_3 \end{pmatrix} = \begin{pmatrix}\cos\theta_{11} & \cos\theta_{12} & \cos\theta_{13}\\ \cos\theta_{21} & \cos\theta_{22} & \cos\theta_{23}\\ \cos\theta_{31} & \cos\theta_{32} & \cos\theta_{33} \end{pmatrix}\begin{pmatrix}x_1\\ x_2\\ x_3 \end{pmatrix} \end{align}
similarly for
:\mathbf{x} = \boldsymbol{\mathsf{L}}^{-1}\bar{\mathbf{x}} = \boldsymbol{\mathsf{L}}^\textsf{T}\bar{\mathbf{x}}
The geometric interpretation is that each \bar{x}_i component equals the sum of the projections of the x_j components onto the \bar{x}_i axis.

The numbers e''i''⋅e''j'' arranged into a matrix would form a symmetric matrix (a matrix equal to its own transpose) due to the symmetry in the dot products; in fact, it is the metric tensor g. By contrast \bar{\mathbf{e}}_i\cdot\mathbf{e}_j or \mathbf{e}_i\cdot\bar{\mathbf{e}}_j do ''not'' form symmetric matrices in general, as displayed above. Therefore, while the L matrices are still orthogonal, they are not symmetric. Apart from a rotation about any one axis, in which the ''xi'' and \bar{x}_i for some ''i'' coincide, the angles are not the same as Euler angles, and so the L matrices are not the same as the rotation matrices.
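The direction-cosine construction is easy to check numerically. A minimal sketch (not part of the original article; the rotation and components are illustrative):

```python
import numpy as np

# Old orthonormal basis (rows) and a new one obtained by a rotation about the z-axis
e = np.eye(3)
theta = np.radians(25.0)
Rz = np.array([[np.cos(theta), -np.sin(theta), 0.0],
               [np.sin(theta),  np.cos(theta), 0.0],
               [0.0,            0.0,           1.0]])
e_bar = (Rz @ e.T).T                 # rows are the rotated basis vectors e_bar_i

# Direction-cosine matrix L_ij = e_bar_i . e_j
L = e_bar @ e.T

x = np.array([1.0, -0.5, 2.0])       # components in the old basis
x_bar = L @ x                        # components in the new basis

# Projecting back: x_i = (L^-1)_ij x_bar_j, and L^-1 = L^T for orthonormal bases
assert np.allclose(L.T @ x_bar, x)
assert np.allclose(L @ L.T, np.eye(3))
```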


Transformation of the dot and cross products (three dimensions only)

The dot product and cross product occur very frequently in applications of vector analysis to physics and engineering; examples include:
*the power ''P'' transferred by an object exerting a force F with velocity v along a straight-line path: P = \mathbf{F} \cdot \mathbf{v}
*the tangential velocity v at a point x of a rotating rigid body with angular velocity ω: \mathbf{v} = \boldsymbol{\omega} \times \mathbf{x}
*the potential energy ''U'' of a magnetic dipole of magnetic moment m in a uniform external magnetic field B: U = -\mathbf{m}\cdot\mathbf{B}
*the angular momentum J for a particle with position vector r and momentum p: \mathbf{J} = \mathbf{r}\times \mathbf{p}
*the torque τ acting on an electric dipole of electric dipole moment p in a uniform external electric field E: \boldsymbol{\tau} = \mathbf{p}\times\mathbf{E}
*the induced surface current density jS in a magnetic material of magnetization M on a surface with unit normal n: \mathbf{j}_\mathrm{S} = \mathbf{M} \times \mathbf{n}
How these products transform under orthogonal transformations is illustrated below.


Dot product, Kronecker delta, and metric tensor

The dot product ⋅ of each possible pairing of the basis vectors follows from the basis being orthonormal. For perpendicular pairs we have
:\mathbf{e}_\text{x}\cdot\mathbf{e}_\text{y} = \mathbf{e}_\text{y}\cdot\mathbf{e}_\text{z} = \mathbf{e}_\text{z}\cdot\mathbf{e}_\text{x} = \mathbf{e}_\text{y}\cdot\mathbf{e}_\text{x} = \mathbf{e}_\text{z}\cdot\mathbf{e}_\text{y} = \mathbf{e}_\text{x}\cdot\mathbf{e}_\text{z} = 0
while for parallel pairs we have
:\mathbf{e}_\text{x}\cdot\mathbf{e}_\text{x} = \mathbf{e}_\text{y}\cdot\mathbf{e}_\text{y} = \mathbf{e}_\text{z}\cdot\mathbf{e}_\text{z} = 1.
Replacing Cartesian labels by index notation as shown above, these results can be summarized by
:\mathbf{e}_i\cdot\mathbf{e}_j = \delta_{ij}
where ''δij'' are the components of the Kronecker delta. The Cartesian basis can be used to represent ''δ'' in this way.

In addition, each metric tensor component ''gij'' with respect to any basis is the dot product of a pairing of basis vectors:
:g_{ij} = \mathbf{e}_i\cdot\mathbf{e}_j .
For the Cartesian basis the components arranged into a matrix are:
:\mathbf{g} = \begin{pmatrix} g_\text{xx} & g_\text{xy} & g_\text{xz} \\ g_\text{yx} & g_\text{yy} & g_\text{yz} \\ g_\text{zx} & g_\text{zy} & g_\text{zz} \end{pmatrix} = \begin{pmatrix} \mathbf{e}_\text{x}\cdot\mathbf{e}_\text{x} & \mathbf{e}_\text{x}\cdot\mathbf{e}_\text{y} & \mathbf{e}_\text{x}\cdot\mathbf{e}_\text{z} \\ \mathbf{e}_\text{y}\cdot\mathbf{e}_\text{x} & \mathbf{e}_\text{y}\cdot\mathbf{e}_\text{y} & \mathbf{e}_\text{y}\cdot\mathbf{e}_\text{z} \\ \mathbf{e}_\text{z}\cdot\mathbf{e}_\text{x} & \mathbf{e}_\text{z}\cdot\mathbf{e}_\text{y} & \mathbf{e}_\text{z}\cdot\mathbf{e}_\text{z} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{pmatrix}
so the components are the simplest possible for the metric tensor, namely the ''δ'':
:g_{ij} = \delta_{ij}
This is ''not'' true for general bases: orthogonal coordinates have diagonal metrics containing various scale factors (i.e. not necessarily 1), while general curvilinear coordinates could also have nonzero entries for off-diagonal components.

The dot product of two vectors a and b transforms according to
:\mathbf{a}\cdot\mathbf{b} = \bar{a}_j \bar{b}_j = \mathsf{L}_{ji} a_i \mathsf{L}_{jk} b_k = \delta_{ik} a_i b_k = a_i b_i
which is intuitive, since the dot product of two vectors is a single scalar independent of any coordinates. This also applies more generally to any coordinate system, not just rectangular ones; the dot product in one coordinate system is the same in any other.
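The invariance of the dot product, and of the metric, under an orthogonal change of basis can be checked directly. A minimal sketch (not from the original article; the random seed is arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)

# Random rotation via QR decomposition (orthogonal matrix, determinant forced to +1)
L, _ = np.linalg.qr(rng.standard_normal((3, 3)))
if np.linalg.det(L) < 0:
    L[:, 0] = -L[:, 0]

a = rng.standard_normal(3)
b = rng.standard_normal(3)
a_bar, b_bar = L @ a, L @ b          # components in the rotated basis

# The scalar a.b is unchanged, and the metric stays the identity
assert np.allclose(a_bar @ b_bar, a @ b)
assert np.allclose(L @ np.eye(3) @ L.T, np.eye(3))   # g_bar = L g L^T = identity
```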


Cross product, Levi-Civita symbol, and pseudovectors

For the cross product × of two vectors, the results are (almost) the other way round. Again, assuming a right-handed 3d Cartesian coordinate system, cyclic permutations in perpendicular directions yield the next vector in the cyclic collection of vectors:
:\begin{align} \mathbf{e}_\text{x}\times\mathbf{e}_\text{y} &= \mathbf{e}_\text{z} & \mathbf{e}_\text{y}\times\mathbf{e}_\text{z} &= \mathbf{e}_\text{x} & \mathbf{e}_\text{z}\times\mathbf{e}_\text{x} &= \mathbf{e}_\text{y} \\ \mathbf{e}_\text{y}\times\mathbf{e}_\text{x} &= -\mathbf{e}_\text{z} & \mathbf{e}_\text{z}\times\mathbf{e}_\text{y} &= -\mathbf{e}_\text{x} & \mathbf{e}_\text{x}\times\mathbf{e}_\text{z} &= -\mathbf{e}_\text{y} \end{align}
while parallel vectors clearly vanish:
:\mathbf{e}_\text{x}\times\mathbf{e}_\text{x} = \mathbf{e}_\text{y}\times\mathbf{e}_\text{y} = \mathbf{e}_\text{z}\times\mathbf{e}_\text{z} = \boldsymbol{0}
and replacing Cartesian labels by index notation as above, these can be summarized by:
:\mathbf{e}_i\times\mathbf{e}_j = \begin{cases} +\mathbf{e}_k & \text{if } (i,j,k) = (1,2,3), (2,3,1), (3,1,2) \\ -\mathbf{e}_k & \text{if } (i,j,k) = (2,1,3), (3,2,1), (1,3,2) \\ \boldsymbol{0} & \text{if } i = j \end{cases}
where ''i'', ''j'', ''k'' are indices which take values 1, 2, 3. It follows that:
:\varepsilon_{ijk} = \begin{cases} +1 & \text{if } (i,j,k) = (1,2,3), (2,3,1), (3,1,2) \\ -1 & \text{if } (i,j,k) = (2,1,3), (3,2,1), (1,3,2) \\ 0 & \text{if } i = j \text{ or } j = k \text{ or } k = i \end{cases}
These permutation relations and their corresponding values are important, and there is an object coinciding with this property: the Levi-Civita symbol, denoted by ''ε''. The Levi-Civita symbol entries can be represented by the Cartesian basis:
:\varepsilon_{ijk} = \mathbf{e}_i\cdot \mathbf{e}_j\times\mathbf{e}_k
which geometrically corresponds to the volume of a cube spanned by the orthonormal basis vectors, with sign indicating orientation (and ''not'' a "positive or negative volume"). Here, the orientation is fixed by ''ε''123 = +1, for a right-handed system. A left-handed system would fix ''ε''123 = −1 or equivalently ''ε''321 = +1.

The scalar triple product can now be written:
:\mathbf{c} \cdot \mathbf{a} \times \mathbf{b} = c_i\mathbf{e}_i \cdot a_j\mathbf{e}_j \times b_k\mathbf{e}_k = \varepsilon_{ijk} c_i a_j b_k
with the geometric interpretation of volume (of the parallelepiped spanned by a, b, c) and algebraically it is a determinant:
:\mathbf{c} \cdot \mathbf{a} \times \mathbf{b} = \begin{vmatrix} c_\text{x} & a_\text{x} & b_\text{x} \\ c_\text{y} & a_\text{y} & b_\text{y} \\ c_\text{z} & a_\text{z} & b_\text{z} \end{vmatrix}
This in turn can be used to rewrite the cross product of two vectors as follows:
:\begin{align} (\mathbf{a} \times \mathbf{b})_i = \mathbf{e}_i\cdot\mathbf{a}\times\mathbf{b} &= \varepsilon_{\ell jk} (\mathbf{e}_i)_\ell a_j b_k = \varepsilon_{\ell jk} \delta_{i\ell} a_j b_k = \varepsilon_{ijk} a_j b_k \\ \Rightarrow\quad \mathbf{a} \times \mathbf{b} = (\mathbf{a} \times \mathbf{b})_i \mathbf{e}_i &= \varepsilon_{ijk} a_j b_k \mathbf{e}_i \end{align}
Contrary to its appearance, the Levi-Civita symbol is ''not a tensor'', but a pseudotensor; the components transform according to:
:\bar{\varepsilon}_{ijk} = \det(\boldsymbol{\mathsf{L}})\, \varepsilon_{pqr} \mathsf{L}_{ip}\mathsf{L}_{jq}\mathsf{L}_{kr} \,.
Therefore, the transformation of the cross product of a and b is:
:\begin{align} \left(\bar{\mathbf{a}} \times \bar{\mathbf{b}}\right)_i &= \bar{\varepsilon}_{ijk}\, \bar{a}_j \bar{b}_k \\ &= \det(\boldsymbol{\mathsf{L}})\, \varepsilon_{pqr} \mathsf{L}_{ip}\mathsf{L}_{jq}\mathsf{L}_{kr}\, \mathsf{L}_{jm} a_m\, \mathsf{L}_{kn} b_n \\ &= \det(\boldsymbol{\mathsf{L}})\, \varepsilon_{pqr} \mathsf{L}_{ip}\, \delta_{qm}\, \delta_{rn}\, a_m b_n \\ &= \det(\boldsymbol{\mathsf{L}})\, \mathsf{L}_{ip}\, \varepsilon_{pqr} a_q b_r \\ &= \det(\boldsymbol{\mathsf{L}})\, \mathsf{L}_{ip} (\mathbf{a}\times\mathbf{b})_p \end{align}
and so a × b transforms as a pseudovector, because of the determinant factor.

The tensor index notation applies to any object which has entities that form multidimensional arrays – not everything with indices is a tensor by default. Instead, tensors are defined by how their coordinates and basis elements change under a transformation from one coordinate system to another. Note the cross product of two vectors is a pseudovector, while the cross product of a pseudovector with a vector is another vector.
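The pseudovector behaviour, i.e. the extra det(L) factor, is easy to observe numerically with an improper transformation. A minimal sketch (not from the original article; vectors and the reflection are illustrative):

```python
import numpy as np

def levi_civita():
    """Build the 3x3x3 Levi-Civita symbol epsilon_ijk."""
    eps = np.zeros((3, 3, 3))
    for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
        eps[i, j, k] = 1.0
        eps[i, k, j] = -1.0
    return eps

eps = levi_civita()
a = np.array([1.0, 2.0, -1.0])
b = np.array([0.5, -3.0, 2.0])

# (a x b)_i = epsilon_ijk a_j b_k
cross_index = np.einsum('ijk,j,k->i', eps, a, b)
assert np.allclose(cross_index, np.cross(a, b))

# Under an improper orthogonal transformation (a reflection, det = -1)
# the cross product picks up the det(L) factor: (La) x (Lb) = det(L) L (a x b)
L = np.diag([1.0, 1.0, -1.0])
lhs = np.cross(L @ a, L @ b)
rhs = np.linalg.det(L) * (L @ np.cross(a, b))
assert np.allclose(lhs, rhs)
```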


Applications of the ''δ'' tensor and ''ε'' pseudotensor

Other identities can be formed from the ''δ'' tensor and ''ε'' pseudotensor; a notable and very useful identity is one that converts two Levi-Civita symbols adjacently contracted over two indices into an antisymmetrized combination of Kronecker deltas:
:\varepsilon_{ijk}\varepsilon_{pqk} = \delta_{ip}\delta_{jq} - \delta_{iq}\delta_{jp}
The index forms of the dot and cross products, together with this identity, greatly facilitate the manipulation and derivation of other identities in vector calculus and algebra, which in turn are used extensively in physics and engineering. For instance, it is clear that the dot and cross products are distributive over vector addition:
:\begin{align} \mathbf{a}\cdot(\mathbf{b} + \mathbf{c}) &= a_i ( b_i + c_i ) = a_i b_i + a_i c_i = \mathbf{a}\cdot\mathbf{b} + \mathbf{a}\cdot\mathbf{c} \\ \mathbf{a}\times(\mathbf{b} + \mathbf{c}) &= \mathbf{e}_i\varepsilon_{ijk} a_j ( b_k + c_k ) = \mathbf{e}_i \varepsilon_{ijk} a_j b_k + \mathbf{e}_i \varepsilon_{ijk} a_j c_k = \mathbf{a}\times\mathbf{b} + \mathbf{a}\times\mathbf{c} \end{align}
without resort to any geometric constructions – the derivation in each case is a quick line of algebra. Although the procedure is less obvious, the vector triple product can also be derived. Rewriting in index notation:
:\left[\mathbf{a}\times(\mathbf{b}\times\mathbf{c})\right]_i = \varepsilon_{ijk} a_j ( \varepsilon_{k\ell m} b_\ell c_m ) = (\varepsilon_{ijk} \varepsilon_{k\ell m} ) a_j b_\ell c_m
and because cyclic permutations of indices in the ''ε'' symbol do not change its value, cyclically permuting indices in ''εkℓm'' to obtain ''εℓmk'' allows us to use the above ''δ''-''ε'' identity to convert the ''ε'' symbols into ''δ'' tensors:
:\begin{align} \left[\mathbf{a}\times(\mathbf{b}\times\mathbf{c})\right]_i &= \left(\delta_{i\ell} \delta_{jm} - \delta_{im} \delta_{j\ell}\right) a_j b_\ell c_m \\ &= \delta_{i\ell} \delta_{jm} a_j b_\ell c_m - \delta_{im} \delta_{j\ell} a_j b_\ell c_m \\ &= a_j b_i c_j - a_j b_j c_i \\ &= \left[(\mathbf{a}\cdot\mathbf{c})\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\mathbf{c}\right]_i \end{align}
thus:
:\mathbf{a}\times(\mathbf{b}\times\mathbf{c}) = (\mathbf{a}\cdot\mathbf{c})\mathbf{b} - (\mathbf{a}\cdot\mathbf{b})\mathbf{c}
Note this is antisymmetric in b and c, as expected from the left hand side. Similarly, via index notation or even just cyclically relabelling a, b, and c in the previous result and taking the negative:
:(\mathbf{a}\times \mathbf{b})\times\mathbf{c} = (\mathbf{a}\cdot\mathbf{c})\mathbf{b} - (\mathbf{b}\cdot\mathbf{c})\mathbf{a}
and the difference in results shows that the cross product is not associative. More complex identities, like quadruple products,
:(\mathbf{a}\times \mathbf{b})\cdot(\mathbf{c}\times\mathbf{d}),\quad (\mathbf{a}\times \mathbf{b})\times(\mathbf{c}\times\mathbf{d}),\quad \ldots
and so on, can be derived in a similar manner.
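Both the ''δ''-''ε'' identity and the resulting triple-product rule can be verified numerically. A minimal sketch (not from the original article; the random vectors are arbitrary):

```python
import numpy as np

# Levi-Civita symbol as a 3x3x3 array (same construction as above)
eps = np.zeros((3, 3, 3))
for i, j, k in [(0, 1, 2), (1, 2, 0), (2, 0, 1)]:
    eps[i, j, k], eps[i, k, j] = 1.0, -1.0
delta = np.eye(3)

# epsilon_ijk epsilon_pqk = delta_ip delta_jq - delta_iq delta_jp
lhs = np.einsum('ijk,pqk->ijpq', eps, eps)
rhs = np.einsum('ip,jq->ijpq', delta, delta) - np.einsum('iq,jp->ijpq', delta, delta)
assert np.allclose(lhs, rhs)

# a x (b x c) = (a.c) b - (a.b) c
rng = np.random.default_rng(1)
a, b, c = rng.standard_normal((3, 3))
assert np.allclose(np.cross(a, np.cross(b, c)), (a @ c) * b - (a @ b) * c)
```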


Transformations of Cartesian tensors (any number of dimensions)

Tensors are defined as quantities which transform in a certain way under linear transformations of coordinates.


Second order

Let a = ''ai''e''i'' and b = ''bi''e''i'' be two vectors, so that they transform according to \bar{a}_j = a_i \mathsf{L}_{ij} and \bar{b}_j = b_i \mathsf{L}_{ij}. Taking the tensor product gives: :\mathbf{a}\otimes\mathbf{b} = a_i\mathbf{e}_i\otimes b_j\mathbf{e}_j = a_i b_j\,\mathbf{e}_i\otimes\mathbf{e}_j then applying the transformation to the components :\bar{a}_p\bar{b}_q = a_i \mathsf{L}_{ip}\, b_j \mathsf{L}_{jq} = \mathsf{L}_{ip}\mathsf{L}_{jq}\, a_i b_j and to the bases :\bar{\mathbf{e}}_p\otimes\bar{\mathbf{e}}_q = \left(\mathsf{L}^{-1}\right)_{pi}\mathbf{e}_i\otimes\left(\mathsf{L}^{-1}\right)_{qj}\mathbf{e}_j = \left(\mathsf{L}^{-1}\right)_{pi}\left(\mathsf{L}^{-1}\right)_{qj}\mathbf{e}_i\otimes\mathbf{e}_j = \mathsf{L}_{ip} \mathsf{L}_{jq}\, \mathbf{e}_i\otimes\mathbf{e}_j gives the transformation law of an order-2 tensor. The tensor a⊗b is invariant under this transformation: :\begin{align} \bar{a}_p\bar{b}_q\,\bar{\mathbf{e}}_p\otimes\bar{\mathbf{e}}_q &= \mathsf{L}_{kp} \mathsf{L}_{\ell q}\, a_k b_\ell \, \left(\mathsf{L}^{-1}\right)_{pi} \left(\mathsf{L}^{-1}\right)_{qj} \mathbf{e}_i\otimes\mathbf{e}_j \\ &= \mathsf{L}_{kp} \left(\mathsf{L}^{-1}\right)_{pi}\, \mathsf{L}_{\ell q} \left(\mathsf{L}^{-1}\right)_{qj} \, a_k b_\ell \,\mathbf{e}_i\otimes\mathbf{e}_j \\ &= \delta_{ki} \delta_{\ell j} \, a_k b_\ell \,\mathbf{e}_i\otimes\mathbf{e}_j \\ &= a_i b_j\,\mathbf{e}_i\otimes\mathbf{e}_j \end{align} More generally, for any order-2 tensor :\mathbf{R} = R_{ij}\,\mathbf{e}_i\otimes\mathbf{e}_j\,, the components transform according to: :\bar{R}_{pq} = \mathsf{L}_{ip}\mathsf{L}_{jq}\, R_{ij}\,, and the basis transforms by: :\bar{\mathbf{e}}_p\otimes\bar{\mathbf{e}}_q = \left(\mathsf{L}^{-1}\right)_{pi}\mathbf{e}_i\otimes \left(\mathsf{L}^{-1}\right)_{qj}\mathbf{e}_j If R does not transform according to this rule – whatever quantity R may be – it is not an order-2 tensor.
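A minimal numerical sketch of this transformation law, assuming Python/NumPy and reading the component law \bar{R}_{pq} = \mathsf{L}_{ip}\mathsf{L}_{jq}R_{ij} as the matrix statement R̄ = LᵀRL for an orthogonal L, might look as follows; it confirms that transforming the components of a⊗b directly agrees with transforming a and b first. This example is an addition for illustration, not part of the source text.

<syntaxhighlight lang="python">
import numpy as np

# An orthogonal transformation L: rotation by 30 degrees about the z axis.
t = np.deg2rad(30.0)
L = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])
assert np.allclose(L.T @ L, np.eye(3))     # L is orthogonal

a = np.array([1.0, -2.0, 0.5])
b = np.array([3.0,  1.0, 2.0])

# Order-2 tensor built as an outer (tensor) product, R_ij = a_i b_j.
R = np.outer(a, b)

# Component transformation law: Rbar_pq = L_ip L_jq R_ij.
R_bar = np.einsum('ip,jq,ij->pq', L, L, R)

# The same result follows from transforming the vectors first,
# which is what makes a_i b_j the components of a genuine tensor.
a_bar = np.einsum('ip,i->p', L, a)
b_bar = np.einsum('jq,j->q', L, b)
assert np.allclose(R_bar, np.outer(a_bar, b_bar))
print("both routes agree: the outer product transforms as an order-2 tensor")
</syntaxhighlight>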


Any order

More generally, for any order ''p'' tensor :\mathbf{T} = T_{j_1 j_2 \cdots j_p}\, \mathbf{e}_{j_1}\otimes\mathbf{e}_{j_2}\otimes\cdots\otimes\mathbf{e}_{j_p} the components transform according to: :\bar{T}_{j_1 j_2 \cdots j_p} = \mathsf{L}_{i_1 j_1} \mathsf{L}_{i_2 j_2}\cdots \mathsf{L}_{i_p j_p}\, T_{i_1 i_2 \cdots i_p} and the basis transforms by: :\bar{\mathbf{e}}_{j_1}\otimes\bar{\mathbf{e}}_{j_2}\otimes\cdots\otimes\bar{\mathbf{e}}_{j_p} = \left(\mathsf{L}^{-1}\right)_{j_1 i_1}\mathbf{e}_{i_1}\otimes\left(\mathsf{L}^{-1}\right)_{j_2 i_2}\mathbf{e}_{i_2}\otimes\cdots\otimes\left(\mathsf{L}^{-1}\right)_{j_p i_p}\mathbf{e}_{i_p} For a
pseudotensor In physics and mathematics, a pseudotensor is usually a quantity that transforms like a tensor under an orientation-preserving coordinate transformation (e.g. a proper rotation) but additionally changes sign under an orientation-reversing coordinat ...
S of order ''p'', the components transform according to: :\bar{S}_{j_1 j_2 \cdots j_p} = \det(\mathsf{L})\, \mathsf{L}_{i_1 j_1} \mathsf{L}_{i_2 j_2}\cdots \mathsf{L}_{i_p j_p}\, S_{i_1 i_2 \cdots i_p}\,.
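As a concrete check of the pseudotensor rule, the Levi-Civita symbol itself can be used: under the ordinary order-3 tensor law its components flip sign for an improper transformation, and the extra det(L) factor restores them, so they are the same in every orthonormal basis. A short sketch (Python/NumPy, added here as an illustration, not from the source):

<syntaxhighlight lang="python">
import numpy as np

def levi_civita():
    """The rank-3 Levi-Civita symbol as a 3x3x3 array (illustrative helper)."""
    eps = np.zeros((3, 3, 3))
    eps[0, 1, 2] = eps[1, 2, 0] = eps[2, 0, 1] = 1.0
    eps[0, 2, 1] = eps[2, 1, 0] = eps[1, 0, 2] = -1.0
    return eps

eps = levi_civita()

# A proper rotation and an improper transformation (reflection).
t = np.deg2rad(40.0)
rotation = np.array([[np.cos(t), -np.sin(t), 0.0],
                     [np.sin(t),  np.cos(t), 0.0],
                     [0.0,        0.0,       1.0]])
reflection = np.diag([1.0, -1.0, 1.0])

for L in (rotation, reflection):
    # Ordinary order-3 tensor law applied to the epsilon components:
    tensor_law = np.einsum('ia,jb,kc,ijk->abc', L, L, L, eps)
    # For any orthogonal L this equals det(L) * eps ...
    assert np.allclose(tensor_law, np.linalg.det(L) * eps)
    # ... so with the extra det(L) factor of the pseudotensor law,
    # the components of epsilon are the same in every orthonormal basis.
    pseudo_law = np.linalg.det(L) * tensor_law
    assert np.allclose(pseudo_law, eps)
print("epsilon transforms as an order-3 pseudotensor: components unchanged")
</syntaxhighlight>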


Pseudovectors as antisymmetric second order tensors

The antisymmetric nature of the cross product can be recast into a tensorial form as follows. Let c be a vector, a be a pseudovector, b be another vector, and T be a second order tensor such that: :\mathbf{c} = \mathbf{a}\times\mathbf{b} = \mathbf{T}\cdot\mathbf{b} As the cross product is linear in a and b, the components of T can be found by inspection, and they are: :\mathbf{T} = \begin{pmatrix} 0 & - a_\text{z} & a_\text{y} \\ a_\text{z} & 0 & - a_\text{x} \\ - a_\text{y} & a_\text{x} & 0 \end{pmatrix} so the pseudovector a can be written as an antisymmetric tensor. This transforms as a tensor, not a pseudotensor. For the mechanical example above for the tangential velocity of a rigid body, given by \mathbf{v} = \boldsymbol{\omega}\times\mathbf{x}, this can be rewritten as \mathbf{v} = \boldsymbol{\Omega}\cdot\mathbf{x} where Ω is the tensor corresponding to the pseudovector ω: :\boldsymbol{\Omega} = \begin{pmatrix} 0 & - \omega_\text{z} & \omega_\text{y} \\ \omega_\text{z} & 0 & - \omega_\text{x} \\ - \omega_\text{y} & \omega_\text{x} & 0 \end{pmatrix} For an example in
electromagnetism In physics, electromagnetism is an interaction that occurs between particles with electric charge. It is the second-strongest of the four fundamental interactions, after the strong force, and it is the dominant force in the interactions of a ...
, while the
electric field An electric field (sometimes E-field) is the physical field that surrounds electrically charged particles and exerts force on all other charged particles in the field, either attracting or repelling them. It also refers to the physical field fo ...
E is a vector field, the
magnetic field A magnetic field is a vector field that describes the magnetic influence on moving electric charges, electric currents, and magnetic materials. A moving charge in a magnetic field experiences a force perpendicular to its own velocity and to ...
B is a pseudovector field. These fields are defined from the
Lorentz force In physics (specifically in electromagnetism) the Lorentz force (or electromagnetic force) is the combination of electric and magnetic force on a point charge due to electromagnetic fields. A particle of charge moving with a velocity in an elect ...
for a particle of
electric charge Electric charge is the physical property of matter that causes charged matter to experience a force when placed in an electromagnetic field. Electric charge can be ''positive'' or ''negative'' (commonly carried by protons and electrons respe ...
''q'' traveling at velocity v: :\mathbf{F} = q(\mathbf{E} + \mathbf{v} \times \mathbf{B}) = q(\mathbf{E} - \mathbf{B} \times \mathbf{v}) and considering the second term containing the cross product of a pseudovector B and velocity vector v, it can be written in matrix form, with F, E, and v as column vectors and B as an antisymmetric matrix: :\begin{pmatrix} F_\text{x} \\ F_\text{y} \\ F_\text{z} \end{pmatrix} = q\begin{pmatrix} E_\text{x} \\ E_\text{y} \\ E_\text{z} \end{pmatrix} - q \begin{pmatrix} 0 & - B_\text{z} & B_\text{y} \\ B_\text{z} & 0 & - B_\text{x} \\ - B_\text{y} & B_\text{x} & 0 \end{pmatrix}\begin{pmatrix} v_\text{x} \\ v_\text{y} \\ v_\text{z} \end{pmatrix} If a pseudovector is explicitly given by a cross product of two vectors (as opposed to entering the cross product with another vector), then such pseudovectors can also be written as antisymmetric tensors of second order, with each entry a component of the cross product. The angular momentum of a classical pointlike particle orbiting about an axis, defined by \mathbf{J} = \mathbf{x}\times\mathbf{p}, is another example of a pseudovector, with corresponding antisymmetric tensor: :\mathbf{J} = \begin{pmatrix} 0 & - J_\text{z} & J_\text{y} \\ J_\text{z} & 0 & - J_\text{x} \\ - J_\text{y} & J_\text{x} & 0 \end{pmatrix} = \begin{pmatrix} 0 & - (x p_\text{y} - y p_\text{x}) & (z p_\text{x} - x p_\text{z}) \\ (x p_\text{y} - y p_\text{x}) & 0 & - (y p_\text{z} - z p_\text{y}) \\ - (z p_\text{x} - x p_\text{z}) & (y p_\text{z} - z p_\text{y}) & 0 \end{pmatrix} Although Cartesian tensors do not occur in the theory of relativity, the tensor form of orbital angular momentum J enters the spacelike part of the
relativistic angular momentum In physics, relativistic angular momentum refers to the mathematical formalisms and physical concepts that define angular momentum in special relativity (SR) and general relativity (GR). The relativistic quantity is subtly different from the thr ...
tensor, and the above tensor form of the magnetic field B enters the spacelike part of the
electromagnetic tensor In electromagnetism, the electromagnetic tensor or electromagnetic field tensor (sometimes called the field strength tensor, Faraday tensor or Maxwell bivector) is a mathematical object that describes the electromagnetic field in spacetime. Th ...
.
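The correspondence between a pseudovector and its antisymmetric tensor can be checked directly: building the matrix from the three components and multiplying by a vector reproduces the cross product. A small sketch (Python/NumPy; the helper name antisymmetric_from_pseudovector is an invention of this illustration, not notation from the article):

<syntaxhighlight lang="python">
import numpy as np

def antisymmetric_from_pseudovector(a):
    """Antisymmetric matrix T with T @ b == np.cross(a, b) for every b."""
    ax, ay, az = a
    return np.array([[0.0, -az,  ay],
                     [ az, 0.0, -ax],
                     [-ay,  ax, 0.0]])

omega = np.array([0.3, -1.2, 2.0])    # e.g. an angular-velocity pseudovector
x = np.array([1.0, 4.0, -0.5])        # a position vector

Omega = antisymmetric_from_pseudovector(omega)

# Tangential velocity of a rigid body: omega x x, or equivalently Omega . x
assert np.allclose(np.cross(omega, x), Omega @ x)

# The matrix really is antisymmetric, and its three independent entries
# are just the components of the pseudovector.
assert np.allclose(Omega, -Omega.T)
print("omega x x and Omega @ x agree")
</syntaxhighlight>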


Vector and tensor calculus

The following formulae are only so simple in Cartesian coordinates – in general curvilinear coordinates there are factors of the metric and its determinant – see
tensors in curvilinear coordinates Curvilinear coordinates can be formulated in tensor calculus, with important applications in physics and engineering, particularly for describing transportation of physical quantities and deformation of matter in fluid mechanics and continuum mecha ...
for more general analysis.


Vector calculus

Following are the differential operators of
vector calculus Vector calculus, or vector analysis, is concerned with differentiation and integration of vector fields, primarily in 3-dimensional Euclidean space \mathbb^3. The term "vector calculus" is sometimes used as a synonym for the broader subject ...
. Throughout, let Φ(r, ''t'') be a
scalar field In mathematics and physics, a scalar field is a function (mathematics), function associating a single number to every point (geometry), point in a space (mathematics), space – possibly physical space. The scalar may either be a pure Scalar ( ...
, and :\mathbf{A}(\mathbf{r},t) = A_\text{x}(\mathbf{r},t)\mathbf{e}_\text{x} + A_\text{y}(\mathbf{r},t)\mathbf{e}_\text{y} + A_\text{z}(\mathbf{r},t)\mathbf{e}_\text{z} :\mathbf{B}(\mathbf{r},t) = B_\text{x}(\mathbf{r},t)\mathbf{e}_\text{x} + B_\text{y}(\mathbf{r},t)\mathbf{e}_\text{y} + B_\text{z}(\mathbf{r},t)\mathbf{e}_\text{z} be vector fields, in which all scalar and vector fields are functions of the
position vector In geometry, a position or position vector, also known as location vector or radius vector, is a Euclidean vector that represents the position of a point ''P'' in space in relation to an arbitrary reference origin ''O''. Usually denoted x, r, or s ...
r and time ''t''. The
gradient In vector calculus, the gradient of a scalar-valued differentiable function of several variables is the vector field (or vector-valued function) \nabla f whose value at a point p is the "direction and rate of fastest increase". If the gradi ...
operator in Cartesian coordinates is given by: :\nabla = \mathbf{e}_\text{x}\frac{\partial}{\partial x} + \mathbf{e}_\text{y}\frac{\partial}{\partial y} + \mathbf{e}_\text{z}\frac{\partial}{\partial z} and in index notation, this is usually abbreviated in various ways: :\nabla_i \equiv \partial_i \equiv \frac{\partial}{\partial x_i} This operator acts on a scalar field Φ to obtain the vector field directed in the maximum rate of increase of Φ: :\left(\nabla\Phi\right)_i = \nabla_i \Phi The index notation for the dot and cross products carries over to the differential operators of vector calculus. The
directional derivative In mathematics, the directional derivative of a multivariable differentiable (scalar) function along a given vector v at a given point x intuitively represents the instantaneous rate of change of the function, moving through x with a velocity s ...
of a scalar field Φ is the rate of change of Φ along some direction vector a (not necessarily a
unit vector In mathematics, a unit vector in a normed vector space is a vector (often a spatial vector) of length 1. A unit vector is often denoted by a lowercase letter with a circumflex, or "hat", as in \hat (pronounced "v-hat"). The term ''direction vecto ...
), formed out of the components of a and the gradient: :\mathbf{a}\cdot(\nabla\Phi) = a_j (\nabla\Phi)_j The
divergence In vector calculus, divergence is a vector operator that operates on a vector field, producing a scalar field giving the quantity of the vector field's source at each point. More technically, the divergence represents the volume density of the ...
of a vector field A is: :\nabla\cdot\mathbf{A} = \nabla_i A_i Note that interchanging the components of the gradient and the vector field yields a different differential operator: :\mathbf{A}\cdot\nabla = A_i \nabla_i which could act on scalar or vector fields. In fact, if A is replaced by the
velocity field In continuum mechanics the flow velocity in fluid dynamics, also macroscopic velocity in statistical mechanics, or drift velocity in electromagnetism, is a vector field used to mathematically describe the motion of a continuum. The length of the f ...
u(r, ''t'') of a fluid, this is a term in the
material derivative In continuum mechanics, the material derivative describes the time rate of change of some physical quantity (like heat or momentum) of a material element that is subjected to a space-and-time-dependent macroscopic velocity field. The material der ...
(with many other names) of
continuum mechanics Continuum mechanics is a branch of mechanics that deals with the mechanical behavior of materials modeled as a continuous mass rather than as discrete particles. The French mathematician Augustin-Louis Cauchy was the first to formulate such m ...
, with another term being the partial time derivative: :\frac{D}{Dt} = \frac{\partial}{\partial t} + \mathbf{u}\cdot\nabla which usually acts on the velocity field, leading to the non-linearity in the Navier–Stokes equations. As for the
curl In vector calculus, the curl is a vector operator that describes the infinitesimal circulation of a vector field in three-dimensional Euclidean space. The curl at a point is represented by a vector whose length and direction denote the magnitude and axis of the maximum circulation ...
of a vector field A, this can be defined as a pseudovector field by means of the ''ε'' symbol: :\left(\nabla\times\mathbf{A}\right)_i = \varepsilon_{ijk} \nabla_j A_k which is only valid in three dimensions, or as an antisymmetric tensor field of second order via antisymmetrization of indices, indicated by delimiting the antisymmetrized indices by square brackets (see
Ricci calculus In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be cal ...
): :\left(\nabla\times\mathbf{A}\right)_{ij} = \nabla_i A_j - \nabla_j A_i = 2\nabla_{[i} A_{j]} which is valid in any number of dimensions. In each case, the order of the gradient and vector field components should not be interchanged, as this would result in a different differential operator: :\varepsilon_{ijk} A_j \nabla_k \quad\text{or}\quad A_i \nabla_j - A_j \nabla_i = 2 A_{[i} \nabla_{j]} which could act on scalar or vector fields. Finally, the
Laplacian operator In mathematics, the Laplace operator or Laplacian is a differential operator given by the divergence of the gradient of a scalar function on Euclidean space. It is usually denoted by the symbols \nabla\cdot\nabla, \nabla^2 (where \nabla is the ...
is defined in two ways, the divergence of the gradient of a scalar field Φ: :\nabla\cdot(\nabla \Phi) = \nabla_i (\nabla_i \Phi) or the square of the gradient operator, which acts on a scalar field Φ or a vector field A: :\begin{align} (\nabla\cdot\nabla) \Phi &= (\nabla_i \nabla_i) \Phi \\ (\nabla\cdot\nabla) \mathbf{A} &= (\nabla_i \nabla_i) \mathbf{A} \end{align} In physics and engineering, the gradient, divergence, curl, and Laplacian operator arise inevitably in
fluid mechanics Fluid mechanics is the branch of physics concerned with the mechanics of fluids ( liquids, gases, and plasmas) and the forces on them. It has applications in a wide range of disciplines, including mechanical, aerospace, civil, chemical and bio ...
,
Newtonian gravitation Newton's law of universal gravitation is usually stated as that every particle attracts every other particle in the universe with a force that is proportional to the product of their masses and inversely proportional to the square of the distanc ...
,
electromagnetism In physics, electromagnetism is an interaction that occurs between particles with electric charge. It is the second-strongest of the four fundamental interactions, after the strong force, and it is the dominant force in the interactions of a ...
,
heat conduction Conduction is the process by which heat is transferred from the hotter end to the colder end of an object. The ability of the object to conduct heat is known as its ''thermal conductivity'', and is denoted . Heat spontaneously flows along a te ...
, and even
quantum mechanics Quantum mechanics is a fundamental theory in physics that provides a description of the physical properties of nature at the scale of atoms and subatomic particles. It is the foundation of all quantum physics including quantum chemistry, ...
. Vector calculus identities can be derived in a similar way to those of vector dot and cross products and combinations. For example, in three dimensions, the curl of a cross product of two vector fields A and B: :\begin{align} \left[\nabla\times(\mathbf{A}\times\mathbf{B})\right]_i &= \varepsilon_{ijk} \nabla_j (\varepsilon_{k\ell m} A_\ell B_m) \\ &= (\varepsilon_{ijk} \varepsilon_{k\ell m}) \nabla_j (A_\ell B_m) \\ &= (\delta_{i\ell}\delta_{jm} - \delta_{im}\delta_{j\ell}) (B_m \nabla_j A_\ell + A_\ell \nabla_j B_m) \\ &= (B_j \nabla_j A_i + A_i \nabla_j B_j) - (B_i \nabla_j A_j + A_j \nabla_j B_i) \\ &= (B_j \nabla_j)A_i + A_i(\nabla_j B_j) - B_i (\nabla_j A_j ) - (A_j \nabla_j) B_i \\ &= \left[(\mathbf{B} \cdot \nabla)\mathbf{A} + \mathbf{A}(\nabla\cdot \mathbf{B}) - \mathbf{B}(\nabla\cdot \mathbf{A}) - (\mathbf{A}\cdot \nabla) \mathbf{B} \right]_i \end{align} where the
product rule In calculus, the product rule (or Leibniz rule or Leibniz product rule) is a formula used to find the derivatives of products of two or more functions. For two functions, it may be stated in Lagrange's notation as (u \cdot v)' = u ' \cdot v + ...
was used, and throughout the differential operator was not interchanged with A or B. Thus: :\nabla\times(\mathbf{A}\times\mathbf{B}) = (\mathbf{B} \cdot \nabla)\mathbf{A} + \mathbf{A}(\nabla \cdot \mathbf{B}) - \mathbf{B}(\nabla \cdot \mathbf{A}) - (\mathbf{A} \cdot \nabla) \mathbf{B}
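Identities of this kind can also be confirmed symbolically. The sketch below (Python with SymPy, added as an illustration; the generic component functions A0…A2 and B0…B2 are placeholders, not notation from the article) expands both sides of the final identity for arbitrary smooth fields and checks that their difference vanishes.

<syntaxhighlight lang="python">
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

# Two generic smooth vector fields A(r) and B(r), component by component.
A = sp.Matrix([sp.Function(f'A{i}')(x, y, z) for i in range(3)])
B = sp.Matrix([sp.Function(f'B{i}')(x, y, z) for i in range(3)])

def curl(F):
    return sp.Matrix([
        sp.diff(F[2], y) - sp.diff(F[1], z),
        sp.diff(F[0], z) - sp.diff(F[2], x),
        sp.diff(F[1], x) - sp.diff(F[0], y),
    ])

def div(F):
    return sum(sp.diff(F[i], coords[i]) for i in range(3))

def directional(F, G):
    """(F . grad) G, returned as a column vector."""
    return sp.Matrix([sum(F[j] * sp.diff(G[i], coords[j]) for j in range(3))
                      for i in range(3)])

lhs = curl(A.cross(B))
rhs = directional(B, A) + A * div(B) - B * div(A) - directional(A, B)
assert all(sp.expand(e) == 0 for e in (lhs - rhs))
print("curl(A x B) = (B.grad)A + A(div B) - B(div A) - (A.grad)B verified")
</syntaxhighlight>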


Tensor calculus

One can continue the operations on tensors of higher order. Let T = T(r, ''t'') denote a second order tensor field, again dependent on the position vector r and time ''t''. For instance, the gradient of a vector field in two equivalent notations ("dyadic" and "tensor", respectively) is: :(\nabla \mathbf{A})_{ij} \equiv (\nabla \otimes \mathbf{A})_{ij} = \nabla_i A_j which is a tensor field of second order. The divergence of a tensor is: :(\nabla \cdot \mathbf{T})_j = \nabla_i T_{ij} which is a vector field. This arises in continuum mechanics in Cauchy's laws of motion – the divergence of the Cauchy stress tensor σ is a vector field, related to
body force In physics, a body force is a force that acts throughout the volume of a body. Springer site - Book 'Solid mechanics'preview paragraph 'Body forces'./ref> Forces due to gravity, electric fields and magnetic fields are examples of body forces. Bo ...
s acting on the fluid.
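For readers who prefer to see the index bookkeeping spelled out, here is a brief symbolic sketch (Python with SymPy, an illustration added here; the component functions T00…T22 and A0…A2 are placeholders) of the two operations just described: the gradient of a vector field as an order-2 tensor field, and the divergence of an order-2 tensor field as a vector field.

<syntaxhighlight lang="python">
import sympy as sp

x, y, z = sp.symbols('x y z')
coords = (x, y, z)

# A second-order tensor field T_ij(r), e.g. a stress field, entered symbolically.
T = sp.Matrix(3, 3, lambda i, j: sp.Function(f'T{i}{j}')(x, y, z))

# Divergence of the tensor: (div T)_j = nabla_i T_ij, giving a vector field.
div_T = sp.Matrix([sum(sp.diff(T[i, j], coords[i]) for i in range(3))
                   for j in range(3)])

# Gradient of a vector field: (grad A)_ij = nabla_i A_j, an order-2 tensor field.
A = sp.Matrix([sp.Function(f'A{j}')(x, y, z) for j in range(3)])
grad_A = sp.Matrix(3, 3, lambda i, j: sp.diff(A[j], coords[i]))

assert grad_A.shape == (3, 3) and div_T.shape == (3, 1)
sp.pprint(div_T)   # (div T)_j written out component by component
</syntaxhighlight>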


Difference from the standard tensor calculus

Cartesian tensors are as in
tensor algebra In mathematics, the tensor algebra of a vector space ''V'', denoted ''T''(''V'') or ''T''(''V''), is the algebra of tensors on ''V'' (of any rank) with multiplication being the tensor product. It is the free algebra on ''V'', in the sense of being ...
, but the Euclidean structure and the restriction of the basis bring some simplifications compared to the general theory. The general tensor algebra consists of general
mixed tensor In tensor analysis, a mixed tensor is a tensor which is neither strictly covariant nor strictly contravariant; at least one of the indices of a mixed tensor will be a subscript (covariant) and at least one of the indices will be a superscript ( ...
s of type (''p'', ''q''): :\mathbf{T} = T^{i_1 i_2 \cdots i_p}{}_{j_1 j_2 \cdots j_q}\, \mathbf{e}_{i_1 i_2 \cdots i_p}{}^{j_1 j_2 \cdots j_q} with basis elements: :\mathbf{e}_{i_1 i_2 \cdots i_p}{}^{j_1 j_2 \cdots j_q} = \mathbf{e}_{i_1}\otimes\mathbf{e}_{i_2}\otimes\cdots\otimes\mathbf{e}_{i_p}\otimes\mathbf{e}^{j_1}\otimes\mathbf{e}^{j_2}\otimes\cdots\otimes\mathbf{e}^{j_q} the components transform according to: :\bar{T}^{k_1 k_2 \cdots k_p}{}_{m_1 m_2 \cdots m_q} = \mathsf{L}_{i_1}{}^{k_1} \mathsf{L}_{i_2}{}^{k_2} \cdots \mathsf{L}_{i_p}{}^{k_p} \left(\mathsf{L}^{-1}\right)_{m_1}{}^{j_1}\left(\mathsf{L}^{-1}\right)_{m_2}{}^{j_2} \cdots \left(\mathsf{L}^{-1}\right)_{m_q}{}^{j_q}\, T^{i_1 i_2 \cdots i_p}{}_{j_1 j_2 \cdots j_q} as for the bases: :\bar{\mathbf{e}}_{k_1 k_2 \cdots k_p}{}^{m_1 m_2 \cdots m_q} = \left(\mathsf{L}^{-1}\right)_{k_1}{}^{i_1}\left(\mathsf{L}^{-1}\right)_{k_2}{}^{i_2} \cdots \left(\mathsf{L}^{-1}\right)_{k_p}{}^{i_p} \mathsf{L}_{j_1}{}^{m_1} \mathsf{L}_{j_2}{}^{m_2} \cdots \mathsf{L}_{j_q}{}^{m_q}\, \mathbf{e}_{i_1 i_2 \cdots i_p}{}^{j_1 j_2 \cdots j_q} For Cartesian tensors, only the order of the tensor matters in a Euclidean space with an orthonormal basis, and all indices can be lowered. A Cartesian basis does not exist unless the vector space has a positive-definite metric, and thus cannot be used in relativistic contexts.
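The simplification claimed here is easy to verify numerically: for an orthogonal L the inverse equals the transpose, so the separate rules for upper and lower indices collapse into a single rule and only the total order of the tensor matters. A short sketch (Python/NumPy, added here for illustration, not part of the source):

<syntaxhighlight lang="python">
import numpy as np

# Any orthogonal matrix: here a rotation about the z axis.
t = np.deg2rad(25.0)
L = np.array([[np.cos(t), -np.sin(t), 0.0],
              [np.sin(t),  np.cos(t), 0.0],
              [0.0,        0.0,       1.0]])

# The Euclidean simplification: L^{-1} = L^T for orthogonal L.
assert np.allclose(np.linalg.inv(L), L.T)

# Hence an index transformed with L ("contravariant") and one transformed
# with L^{-1} ("covariant") obey the same numerical rule for an orthonormal
# basis, so only the total order of a Cartesian tensor matters.
rng = np.random.default_rng(1)
T = rng.standard_normal((3, 3))
all_lower_rule = np.einsum('ip,jq,ij->pq', L, L, T)
mixed_rule = np.einsum('ip,qj,ij->pq', L, np.linalg.inv(L), T)
assert np.allclose(all_lower_rule, mixed_rule)
print("with an orthonormal basis, index position makes no numerical difference")
</syntaxhighlight>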


History

Dyadic tensor In mathematics, specifically multilinear algebra, a dyadic or dyadic tensor is a second order tensor, written in a notation that fits in with vector algebra. There are numerous ways to multiply two Euclidean vectors. The dot product takes in two v ...
s were historically the first approach to formulating second-order tensors, similarly triadic tensors for third-order tensors, and so on. Cartesian tensors use
tensor index notation In mathematics, Ricci calculus constitutes the rules of index notation and manipulation for tensors and tensor fields on a differentiable manifold, with or without a metric tensor or connection. It is also the modern name for what used to be cal ...
, in which the
variance In multilinear algebra and tensor analysis, covariance and contravariance describe how the quantitative description of certain geometric or physical entities changes when the basis is changed. Contravariant components transform oppositely to the basis vectors, while covariant components transform in the same way as the basis ...
may be glossed over and is often ignored, since the components remain unchanged by
raising and lowering indices In mathematics and mathematical physics, raising and lowering indices are operations on tensors which change their type. Raising and lowering indices are a form of index manipulation in tensor expressions. Vectors, covectors and the metric Math ...
.


See also

*
Tensor algebra In mathematics, the tensor algebra of a vector space ''V'', denoted ''T''(''V'') or ''T''(''V''), is the algebra of tensors on ''V'' (of any rank) with multiplication being the tensor product. It is the free algebra on ''V'', in the sense of being ...
*
Tensor calculus In mathematics, tensor calculus, tensor analysis, or Ricci calculus is an extension of vector calculus to tensor fields (tensors that may vary over a manifold, e.g. in spacetime). Developed by Gregorio Ricci-Curbastro and his student Tullio Levi ...
*
Tensors in curvilinear coordinates Curvilinear coordinates can be formulated in tensor calculus, with important applications in physics and engineering, particularly for describing transportation of physical quantities and deformation of matter in fluid mechanics and continuum mecha ...
*
Rotation group In mathematics, the orthogonal group in dimension , denoted , is the group of distance-preserving transformations of a Euclidean space of dimension that preserve a fixed point, where the group operation is given by composing transformations. ...



External links


* ''Cartesian Tensors''
* V. N. Kaliakin, ''Brief Review of Tensors'', University of Delaware
* R. E. Hunt, ''Cartesian Tensors'', University of Cambridge