In linear algebra, the Gram matrix (or Gramian matrix, Gramian) of a set of vectors v_1,\dots, v_n in an inner product space is the Hermitian matrix of inner products, whose entries are given by G_{ij} = \left\langle v_i, v_j \right\rangle (p. 441, Theorem 7.2.10). If the vectors v_1,\dots, v_n are the columns of a matrix X, then the Gram matrix is X^* X in the general case that the vector coordinates are complex numbers, which simplifies to X^\top X when the vector coordinates are real numbers. An important application is to test linear independence: a set of vectors is linearly independent if and only if the Gram determinant (the determinant of the Gram matrix) is non-zero. The matrix is named after Jørgen Pedersen Gram.
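
These definitions translate directly into code. A minimal NumPy sketch (the matrix X below is an arbitrary illustration, not taken from the text):

    import numpy as np

    # Columns of X are the vectors v_1, v_2, v_3 (arbitrary complex examples).
    X = np.array([[1, 1j, 0],
                  [0, 2,  1],
                  [1, 0, 1j]])

    G = X.conj().T @ X        # Gram matrix X* X; for real X this is X.T @ X

    # Linear independence test: the Gram determinant is non-zero
    # if and only if the columns of X are linearly independent.
    print(not np.isclose(np.linalg.det(G), 0))   # True for this X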


Examples

For finite-dimensional real vectors in \mathbb{R}^n with the usual Euclidean dot product, the Gram matrix is G = V^\top V, where V is a matrix whose columns are the vectors v_k and V^\top is its transpose, whose rows are the vectors v_k^\top. For complex vectors in \mathbb{C}^n, G = V^\dagger V, where V^\dagger is the conjugate transpose of V.

Given square-integrable functions \{\ell_i(\cdot),\ i = 1,\dots,n\} on the interval \left[t_0, t_f\right], the Gram matrix G = \left[G_{ij}\right] is:

: G_{ij} = \int_{t_0}^{t_f} \ell_i^*(\tau)\,\ell_j(\tau)\, d\tau,

where \ell_i^*(\tau) is the complex conjugate of \ell_i(\tau).

For any bilinear form B on a finite-dimensional vector space over any field we can define a Gram matrix G attached to a set of vectors v_1, \dots, v_n by G_{ij} = B\left(v_i, v_j\right). The matrix will be symmetric if the bilinear form B is symmetric.
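
The function-space Gram matrix can be approximated numerically. A sketch, assuming three illustrative functions on [t_0, t_f] = [0, 1] and a simple Riemann-sum quadrature:

    import numpy as np

    # Illustrative choices of l_1, l_2, l_3 sampled on a uniform grid.
    t0, tf, N = 0.0, 1.0, 4001
    tau = np.linspace(t0, tf, N)
    dt = tau[1] - tau[0]
    ells = [np.ones_like(tau, dtype=complex),
            tau.astype(complex),
            np.exp(2j * np.pi * tau)]

    n = len(ells)
    G = np.empty((n, n), dtype=complex)
    for i in range(n):
        for j in range(n):
            # G_ij ~ integral of conj(l_i(tau)) * l_j(tau) d tau (Riemann sum)
            G[i, j] = np.sum(np.conj(ells[i]) * ells[j]) * dt

    print(np.round(G, 3))   # Hermitian; e.g. G[0, 0] ~ 1.0, G[0, 1] ~ 0.5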


Applications

* In Riemannian geometry, given an embedded k-dimensional Riemannian manifold M \subset \mathbb{R}^n and a parametrization \phi: U \to M, the volume form \omega on M induced by the embedding may be computed using the Gramian of the coordinate tangent vectors: \omega = \sqrt{\det G}\ dx_1 \cdots dx_k, \quad G = \left[\left\langle \frac{\partial\phi}{\partial x_i}, \frac{\partial\phi}{\partial x_j} \right\rangle\right]. This generalizes the classical surface integral of a parametrized surface \phi: U \to S \subset \mathbb{R}^3 for (x, y) \in U \subset \mathbb{R}^2: \int_S f\ dA = \iint_U f(\phi(x, y))\, \left|\frac{\partial\phi}{\partial x} \times \frac{\partial\phi}{\partial y}\right|\, dx\, dy.
* If the vectors are centered random variables, the Gramian is approximately proportional to the covariance matrix, with the scaling determined by the number of elements in the vector.
* In quantum chemistry, the Gram matrix of a set of basis vectors is the overlap matrix.
* In control theory (or more generally systems theory), the controllability Gramian and observability Gramian determine properties of a linear system.
* Gramian matrices arise in covariance structure model fitting (see e.g., Jamshidian and Bentler, 1993, Applied Psychological Measurement, Volume 18, pp. 79–94).
* In the finite element method, the Gram matrix arises from approximating a function from a finite-dimensional space; the Gram matrix entries are then the inner products of the basis functions of the finite-dimensional subspace.
* In machine learning, kernel functions are often represented as Gram matrices. (See also kernel PCA.)
* Since the Gram matrix over the reals is a symmetric matrix, it is diagonalizable and its eigenvalues are non-negative. Its eigendecomposition is closely tied to the singular value decomposition of V: the eigenvalues of G = V^\top V are the squared singular values of V, and its eigenvectors are right-singular vectors of V (see the sketch following this list).
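
A quick numerical check of the last point above, with an arbitrary random matrix:

    import numpy as np

    rng = np.random.default_rng(0)
    V = rng.standard_normal((5, 3))           # columns: three vectors in R^5

    G = V.T @ V
    eig = np.linalg.eigvalsh(G)               # real, since G is symmetric
    sv = np.linalg.svd(V, compute_uv=False)   # singular values of V

    print(np.allclose(np.sort(eig), np.sort(sv**2)))   # True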


Properties


Positive-semidefiniteness

The Gram matrix is symmetric in the case the inner product is real-valued; it is Hermitian in the general, complex case by definition of an inner product.

The Gram matrix is positive semidefinite, and every positive semidefinite matrix is the Gramian matrix for some set of vectors. The fact that the Gramian matrix is positive-semidefinite can be seen from the following simple derivation:

: x^\dagger \mathbf{G} x = \sum_{i,j} x_i^* x_j \left\langle v_i, v_j \right\rangle = \sum_{i,j} \left\langle x_i v_i, x_j v_j \right\rangle = \left\langle \sum_i x_i v_i, \sum_j x_j v_j \right\rangle = \left\| \sum_i x_i v_i \right\|^2 \geq 0.

The first equality follows from the definition of matrix multiplication, the second and third from the sesquilinearity of the inner product (bilinearity in the real case), and the last from its positive definiteness. Note that this also shows that the Gramian matrix is positive definite if and only if the vectors v_i are linearly independent (that is, \sum_i x_i v_i \neq 0 for all nonzero x).
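
This identity is easy to verify numerically; the vectors and coefficients below are arbitrary:

    import numpy as np

    rng = np.random.default_rng(1)
    V = rng.standard_normal((4, 3)) + 1j * rng.standard_normal((4, 3))
    G = V.conj().T @ V                   # Gram matrix of the columns of V

    x = rng.standard_normal(3) + 1j * rng.standard_normal(3)
    quad = (x.conj() @ G @ x).real       # x^dagger G x (imaginary part ~ 0)
    norm2 = np.linalg.norm(V @ x) ** 2   # || sum_i x_i v_i ||^2

    print(np.isclose(quad, norm2), quad >= 0)   # True True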


Finding a vector realization

Given any positive semidefinite matrix M, one can decompose it as:

: M = B^\dagger B,

where B^\dagger is the conjugate transpose of B (or M = B^\top B in the real case). Here B is a k \times n matrix, where k is the rank of M. Various ways to obtain such a decomposition include computing the Cholesky decomposition or taking the non-negative square root of M.

The columns b^{(1)}, \dots, b^{(n)} of B can be seen as ''n'' vectors in \mathbb{C}^k (or ''k''-dimensional Euclidean space \mathbb{R}^k, in the real case). Then

: M_{ij} = b^{(i)} \cdot b^{(j)},

where the dot product a \cdot b = \sum_{\ell=1}^k a_\ell^* b_\ell is the usual inner product on \mathbb{C}^k. Thus a Hermitian matrix M is positive semidefinite if and only if it is the Gram matrix of some vectors b^{(1)}, \dots, b^{(n)}. Such vectors are called a vector realization of M. The infinite-dimensional analog of this statement is Mercer's theorem.
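
A sketch of one such decomposition, via the eigendecomposition square root (which, unlike a plain Cholesky factorization, also handles rank-deficient M); the helper name is ours:

    import numpy as np

    def vector_realization(M, tol=1e-10):
        """Return B with B^dagger B = M for a Hermitian positive semidefinite M.

        B has rank(M) rows; its columns are the realizing vectors b^(j).
        """
        eigvals, Q = np.linalg.eigh(M)   # M = Q diag(eigvals) Q^dagger
        keep = eigvals > tol             # drop numerically zero eigenvalues
        return np.sqrt(eigvals[keep])[:, None] * Q[:, keep].conj().T

    M = np.array([[2.0, 1.0, 1.0],
                  [1.0, 2.0, 1.0],
                  [1.0, 1.0, 2.0]])
    B = vector_realization(M)
    print(np.allclose(B.conj().T @ B, M))   # True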


Uniqueness of vector realizations

If M is the Gram matrix of vectors v_1,\dots,v_n in \mathbb{R}^k, then applying any rotation or reflection of \mathbb{R}^k (any orthogonal transformation, that is, any Euclidean isometry preserving 0) to the sequence of vectors results in the same Gram matrix. That is, for any k \times k orthogonal matrix Q, the Gram matrix of Q v_1,\dots, Q v_n is also M.

This is the only way in which two real vector realizations of M can differ: the vectors v_1,\dots,v_n are unique up to orthogonal transformations. In other words, the dot products v_i \cdot v_j and w_i \cdot w_j are equal if and only if some rigid transformation of \mathbb{R}^k transforms the vectors v_1,\dots,v_n to w_1, \dots, w_n and 0 to 0.

The same holds in the complex case, with unitary transformations in place of orthogonal ones. That is, if the Gram matrix of vectors v_1, \dots, v_n is equal to the Gram matrix of vectors w_1, \dots, w_n in \mathbb{C}^k, then there is a unitary k \times k matrix U (meaning U^\dagger U = I) such that v_i = U w_i for i = 1, \dots, n (p. 452, Theorem 7.3.11).
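
The invariance is again easy to check numerically (the vectors and the orthogonal matrix below are randomly generated):

    import numpy as np

    rng = np.random.default_rng(2)
    V = rng.standard_normal((4, 3))                    # columns: v_1, v_2, v_3

    Q, _ = np.linalg.qr(rng.standard_normal((4, 4)))   # a random orthogonal matrix
    W = Q @ V                                          # columns: Q v_1, Q v_2, Q v_3

    print(np.allclose(V.T @ V, W.T @ W))               # same Gram matrix -> True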


Other properties

* Because G = G^\dagger, it is necessarily the case that G and G^\dagger commute. That is, a real or complex Gram matrix G is also a normal matrix.
* The Gram matrix of any orthonormal basis is the identity matrix. Equivalently, the Gram matrix of the rows or the columns of a real rotation matrix is the identity matrix. Likewise, the Gram matrix of the rows or columns of a unitary matrix is the identity matrix.
* The rank of the Gram matrix of vectors in \mathbb{R}^k or \mathbb{C}^k equals the dimension of the space spanned by these vectors (see the sketch below).
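
A short check of the rank property, with a deliberately dependent set of vectors:

    import numpy as np

    rng = np.random.default_rng(3)
    A = rng.standard_normal((5, 4))
    A[:, 3] = A[:, 0] + A[:, 1]     # force a dependence: span has dimension 3

    G = A.T @ A
    print(np.linalg.matrix_rank(G), np.linalg.matrix_rank(A))   # 3 3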


Gram determinant

The Gram determinant or Gramian is the determinant of the Gram matrix:

: \left|G(v_1, \dots, v_n)\right| = \begin{vmatrix} \langle v_1,v_1\rangle & \langle v_1,v_2\rangle & \dots & \langle v_1,v_n\rangle \\ \langle v_2,v_1\rangle & \langle v_2,v_2\rangle & \dots & \langle v_2,v_n\rangle \\ \vdots & \vdots & \ddots & \vdots \\ \langle v_n,v_1\rangle & \langle v_n,v_2\rangle & \dots & \langle v_n,v_n\rangle \end{vmatrix}.

If v_1, \dots, v_n are vectors in \mathbb{R}^m, then it is the square of the ''n''-dimensional volume of the parallelotope formed by the vectors. In particular, the vectors are linearly independent if and only if the parallelotope has nonzero ''n''-dimensional volume, if and only if the Gram determinant is nonzero, if and only if the Gram matrix is nonsingular. When n > m, the determinant and volume are zero. When n = m, this reduces to the standard theorem that the absolute value of the determinant of ''n'' ''n''-dimensional vectors is the ''n''-dimensional volume. The Gram determinant is also useful for computing the volume of the simplex formed by the vectors; its volume is \frac{1}{n!}\sqrt{\left|G(v_1, \dots, v_n)\right|} (the volume of the parallelotope divided by n!).

The Gram determinant can also be expressed in terms of the exterior product of vectors by

: \left|G(v_1, \dots, v_n)\right| = \left\| v_1 \wedge \cdots \wedge v_n \right\|^2.

When the vectors v_1, \ldots, v_n \in \mathbb{R}^m are defined from the positions of points p_1, \ldots, p_n relative to some reference point p_0,

: (v_1, v_2, \ldots, v_n) = (p_1 - p_0, p_2 - p_0, \ldots, p_n - p_0)\,,

then the Gram determinant can be written as the difference of two Gram determinants,

: \left|G(p_1 - p_0, \dots, p_n - p_0)\right| = \left|G((p_0, 1), (p_1, 1), \dots, (p_n, 1))\right| - \left|G(p_0, p_1, \dots, p_n)\right|\,,

where each (p_j, 1) is the corresponding point p_j supplemented with the coordinate value of 1 for an (m+1)-st dimension. Note that in the common case that n = m, the second term on the right-hand side will be zero.
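
For example, in \mathbb{R}^3 the Gram determinant of two vectors recovers the classical parallelogram area (the vectors below are illustrative):

    import numpy as np

    v1 = np.array([1.0, 2.0, 0.0])
    v2 = np.array([0.0, 1.0, 3.0])

    V = np.column_stack([v1, v2])
    gram_det = np.linalg.det(V.T @ V)         # |G(v_1, v_2)|

    area = np.linalg.norm(np.cross(v1, v2))   # parallelogram area in R^3
    print(np.isclose(gram_det, area**2))      # True: |G| = area^2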


Constructing an orthonormal basis

Given a set of linearly independent vectors \{v_i\} with Gram matrix G_{ij} := \langle v_i, v_j \rangle, one can construct an orthonormal basis

: w_i := \sum_j \left(G^{-1/2}\right)_{ji} v_j.

The matrix G^{-1/2} is guaranteed to exist because, as mentioned above, the v_i are linearly independent if and only if G is invertible, and an invertible Gram matrix is positive definite (not just semidefinite). The inverse G^{-1} of a positive definite matrix G is unique and also positive definite, and thus has a unique positive definite square root G^{-1/2} := (G^{-1})^{1/2}. One can check that these new vectors are orthonormal:

: \langle w_i, w_j \rangle = \sum_{i'} \sum_{j'} \left(G^{-1/2}\right)_{i'i}^* \left\langle v_{i'}, v_{j'} \right\rangle \left(G^{-1/2}\right)_{j'j} = \sum_{i'} \sum_{j'} \left(G^{-1/2}\right)_{ii'} G_{i'j'} \left(G^{-1/2}\right)_{j'j} = \left(G^{-1/2}\, G\, G^{-1/2}\right)_{ij} = \delta_{ij},

where the second step uses that G^{-1/2} is Hermitian, so \left(G^{-1/2}\right)_{i'i}^* = \left(G^{-1/2}\right)_{ii'}.
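
A sketch of this construction, computing G^{-1/2} through the eigendecomposition of G (the input vectors are arbitrary, and the helper name is ours):

    import numpy as np

    def inv_sqrt(G):
        """The unique positive definite G^{-1/2} of a Hermitian positive definite G."""
        eigvals, Q = np.linalg.eigh(G)
        return Q @ np.diag(eigvals ** -0.5) @ Q.conj().T

    rng = np.random.default_rng(4)
    V = rng.standard_normal((5, 3))   # columns: linearly independent v_1, v_2, v_3
    G = V.T @ V

    W = V @ inv_sqrt(G)               # column i is w_i = sum_j (G^{-1/2})_{ji} v_j
    print(np.allclose(W.T @ W, np.eye(3)))   # the w_i are orthonormal -> True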


See also

* Controllability Gramian
* Observability Gramian




External links

* ''Volumes of parallelograms'' by Frank Jones