In mathematics, particularly linear algebra and numerical analysis, the Gram–Schmidt process or Gram–Schmidt algorithm is a way of finding a set of two or more vectors that are perpendicular to each other.
By technical definition, it is a method of constructing an orthonormal basis from a set of vectors in an inner product space, most commonly the Euclidean space \mathbb{R}^n equipped with the standard inner product. The Gram–Schmidt process takes a finite, linearly independent set of vectors S = \{\mathbf{v}_1, \ldots, \mathbf{v}_k\} for k \le n and generates an orthogonal set S' = \{\mathbf{u}_1, \ldots, \mathbf{u}_k\} that spans the same k-dimensional subspace of \mathbb{R}^n as S.
The method is named after Jørgen Pedersen Gram and Erhard Schmidt, but Pierre-Simon Laplace had been familiar with it before Gram and Schmidt. In the theory of Lie group decompositions, it is generalized by the Iwasawa decomposition.
The application of the Gram–Schmidt process to the column vectors of a full column rank matrix yields the QR decomposition (it is decomposed into an orthogonal and a triangular matrix).
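For instance, this connection can be checked numerically with a small sketch that reuses the gramschmidt function listed in the Algorithm section below; the matrix here is an arbitrary full column rank example of our own choosing:

A = [1 1 0; 1 0 1; 0 1 1];   % any full column rank matrix
Q = gramschmidt(A);          % orthonormal columns produced by the Gram-Schmidt process
R = Q' * A;                  % upper triangular, up to rounding errors
disp(norm(A - Q * R));       % close to zero, so A = QR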
The Gram–Schmidt process

The vector projection of a vector \mathbf{v} on a nonzero vector \mathbf{u} is defined as
\operatorname{proj}_{\mathbf{u}}(\mathbf{v}) = \frac{\langle \mathbf{v}, \mathbf{u} \rangle}{\langle \mathbf{u}, \mathbf{u} \rangle}\,\mathbf{u},
[In the complex case, this assumes that the inner product is linear in the first argument and conjugate-linear in the second. In physics a more common convention is linearity in the second argument, in which case we define \operatorname{proj}_{\mathbf{u}}(\mathbf{v}) = \frac{\langle \mathbf{u}, \mathbf{v} \rangle}{\langle \mathbf{u}, \mathbf{u} \rangle}\,\mathbf{u}.]
where \langle \mathbf{v}, \mathbf{u} \rangle denotes the inner product of the vectors \mathbf{v} and \mathbf{u}. This means that \operatorname{proj}_{\mathbf{u}}(\mathbf{v}) is the orthogonal projection of \mathbf{v} onto the line spanned by \mathbf{u}. If \mathbf{u} is the zero vector, then \operatorname{proj}_{\mathbf{u}}(\mathbf{v}) is defined as the zero vector.
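For real column vectors, this projection can be written as a one-line MATLAB helper (a minimal sketch; the name proj is ours):

proj = @(u, v) (dot(v, u) / dot(u, u)) * u;   % orthogonal projection of v onto the line spanned by u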
Given k nonzero linearly-independent vectors \mathbf{v}_1, \ldots, \mathbf{v}_k, the Gram–Schmidt process defines the vectors \mathbf{u}_1, \ldots, \mathbf{u}_k as follows:
\mathbf{u}_1 = \mathbf{v}_1,
\mathbf{u}_2 = \mathbf{v}_2 - \operatorname{proj}_{\mathbf{u}_1}(\mathbf{v}_2),
\mathbf{u}_3 = \mathbf{v}_3 - \operatorname{proj}_{\mathbf{u}_1}(\mathbf{v}_3) - \operatorname{proj}_{\mathbf{u}_2}(\mathbf{v}_3),
\vdots
\mathbf{u}_k = \mathbf{v}_k - \sum_{j=1}^{k-1} \operatorname{proj}_{\mathbf{u}_j}(\mathbf{v}_k).
The sequence \mathbf{u}_1, \ldots, \mathbf{u}_k is the required system of orthogonal vectors, and the normalized vectors \mathbf{e}_1, \ldots, \mathbf{e}_k, where \mathbf{e}_i = \frac{\mathbf{u}_i}{\|\mathbf{u}_i\|}, form an orthonormal set. The calculation of the sequence \mathbf{u}_1, \ldots, \mathbf{u}_k is known as ''Gram–Schmidt orthogonalization'', and the calculation of the sequence \mathbf{e}_1, \ldots, \mathbf{e}_k is known as ''Gram–Schmidt orthonormalization''.
To check that these formulas yield an orthogonal sequence, first compute \langle \mathbf{u}_1, \mathbf{u}_2 \rangle by substituting the above formula for \mathbf{u}_2: we get zero. Then use this to compute \langle \mathbf{u}_1, \mathbf{u}_3 \rangle again by substituting the formula for \mathbf{u}_3: we get zero. For arbitrary k the proof is accomplished by mathematical induction.
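For example, in the real case, where the inner product is symmetric and bilinear, the first of these computations written out in full reads
\langle \mathbf{u}_1, \mathbf{u}_2 \rangle = \left\langle \mathbf{u}_1,\ \mathbf{v}_2 - \frac{\langle \mathbf{v}_2, \mathbf{u}_1 \rangle}{\langle \mathbf{u}_1, \mathbf{u}_1 \rangle}\mathbf{u}_1 \right\rangle = \langle \mathbf{u}_1, \mathbf{v}_2 \rangle - \frac{\langle \mathbf{v}_2, \mathbf{u}_1 \rangle}{\langle \mathbf{u}_1, \mathbf{u}_1 \rangle}\langle \mathbf{u}_1, \mathbf{u}_1 \rangle = 0,
since \langle \mathbf{u}_1, \mathbf{v}_2 \rangle = \langle \mathbf{v}_2, \mathbf{u}_1 \rangle for a real inner product.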
Geometrically, this method proceeds as follows: to compute \mathbf{u}_i, it projects \mathbf{v}_i orthogonally onto the subspace U generated by \mathbf{u}_1, \ldots, \mathbf{u}_{i-1}, which is the same as the subspace generated by \mathbf{v}_1, \ldots, \mathbf{v}_{i-1}. The vector \mathbf{u}_i is then defined to be the difference between \mathbf{v}_i and this projection, guaranteed to be orthogonal to all of the vectors in the subspace U.
The Gram–Schmidt process also applies to a linearly independent countably infinite sequence \{\mathbf{v}_i\}_i. The result is an orthogonal (or orthonormal) sequence \{\mathbf{u}_i\}_i such that for each natural number n: the algebraic span of \mathbf{v}_1, \ldots, \mathbf{v}_n is the same as that of \mathbf{u}_1, \ldots, \mathbf{u}_n.
If the Gram–Schmidt process is applied to a linearly dependent sequence, it outputs the zero vector on the ith step, assuming that \mathbf{v}_i is a linear combination of \mathbf{v}_1, \ldots, \mathbf{v}_{i-1}.
. If an orthonormal basis is to be produced, then the algorithm should test for zero vectors in the output and discard them because no multiple of a zero vector can have a length of 1. The number of vectors output by the algorithm will then be the dimension of the space spanned by the original inputs.
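In floating-point arithmetic this test is made against a small tolerance rather than exact zero. A minimal MATLAB sketch of such a discarding variant (the function name and the tolerance are illustrative choices, not part of any standard library):

function U = gramschmidt_skipzero(V)
    % Orthonormalize the columns of V, discarding any column that is
    % (numerically) a linear combination of the previously kept columns.
    tol = 1e-10;                              % illustrative tolerance
    U = zeros(size(V, 1), 0);
    for i = 1:size(V, 2)
        w = V(:, i);
        for j = 1:size(U, 2)
            w = w - (U(:, j)' * w) * U(:, j); % remove components along kept directions
        end
        if norm(w) > tol
            U = [U, w / norm(w)];             % keep only genuinely new directions
        end
    end
end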
A variant of the Gram–Schmidt process using transfinite recursion applied to a (possibly uncountably) infinite sequence of vectors (\mathbf{v}_\alpha)_{\alpha<\lambda} yields a set of orthonormal vectors (\mathbf{u}_\alpha)_{\alpha<\kappa} with \kappa \le \lambda such that for any \alpha \le \lambda, the completion of the span of \{\mathbf{u}_\beta : \beta < \min(\alpha,\kappa)\} is the same as that of \{\mathbf{v}_\beta : \beta < \alpha\}. In particular, when applied to an (algebraic) basis of a Hilbert space (or, more generally, a basis of any dense subspace), it yields a (functional-analytic) orthonormal basis. Note that in the general case often the strict inequality \kappa < \lambda holds, even if the starting set was linearly independent, and the span of (\mathbf{u}_\alpha)_{\alpha<\kappa} need not be a subspace of the span of (\mathbf{v}_\alpha)_{\alpha<\lambda} (rather, it's a subspace of its completion).
Example
Euclidean space
Consider a set of two linearly independent vectors \mathbf{v}_1 and \mathbf{v}_2 in \mathbb{R}^2 (with the conventional inner product).
Now, perform Gram–Schmidt to obtain an orthogonal set of vectors:
\mathbf{u}_1 = \mathbf{v}_1, \qquad \mathbf{u}_2 = \mathbf{v}_2 - \operatorname{proj}_{\mathbf{u}_1}(\mathbf{v}_2).
We check that the vectors \mathbf{u}_1 and \mathbf{u}_2 are indeed orthogonal:
\langle \mathbf{u}_1, \mathbf{u}_2 \rangle = 0,
noting that if the dot product of two vectors is 0 then they are orthogonal.
For non-zero vectors, we can then normalize the vectors by dividing out their sizes as shown above:
\mathbf{e}_1 = \frac{\mathbf{u}_1}{\|\mathbf{u}_1\|}, \qquad \mathbf{e}_2 = \frac{\mathbf{u}_2}{\|\mathbf{u}_2\|}.
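As a concrete illustration, the computation can be carried out in MATLAB for two vectors chosen here purely to keep the arithmetic simple, say \mathbf{v}_1 = (3, 1) and \mathbf{v}_2 = (2, 2):

v1 = [3; 1];
v2 = [2; 2];
u1 = v1;                                      % first orthogonal vector
u2 = v2 - (dot(v2, u1) / dot(u1, u1)) * u1;   % subtract the projection onto u1; gives [-2/5; 6/5]
disp(dot(u1, u2));                            % 0, confirming that u1 and u2 are orthogonal
e1 = u1 / norm(u1);                           % normalized (orthonormal) vectors
e2 = u2 / norm(u2);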
Properties
Denote by \mathrm{GS}(\mathbf{v}_1, \ldots, \mathbf{v}_k) the result of applying the Gram–Schmidt process to a collection of vectors \mathbf{v}_1, \ldots, \mathbf{v}_k. This yields a map \mathrm{GS} \colon (\mathbb{R}^n)^k \to (\mathbb{R}^n)^k.
It has the following properties:
* It is continuous
* It is orientation preserving in the sense that \operatorname{or}(\mathbf{v}_1, \ldots, \mathbf{v}_k) = \operatorname{or}(\mathrm{GS}(\mathbf{v}_1, \ldots, \mathbf{v}_k)).
* It commutes with orthogonal maps: Let g \colon \mathbb{R}^n \to \mathbb{R}^n be orthogonal (with respect to the given inner product). Then we have \mathrm{GS}(g\mathbf{v}_1, \ldots, g\mathbf{v}_k) = \left(g\,\mathrm{GS}_1(\mathbf{v}_1, \ldots, \mathbf{v}_k), \ldots, g\,\mathrm{GS}_k(\mathbf{v}_1, \ldots, \mathbf{v}_k)\right).
Further, a parametrized version of the Gram–Schmidt process yields a (strong) deformation retraction of the general linear group \mathrm{GL}(\mathbb{R}^n) onto the orthogonal group \mathrm{O}(\mathbb{R}^n).
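The commuting property lends itself to a quick numerical spot-check; the sketch below reuses the gramschmidt function listed in the Algorithm section below and draws arbitrary random data:

n = 4; k = 3;
V = randn(n, k);                 % random columns, linearly independent with probability 1
[g, ~] = qr(randn(n));           % a random orthogonal map g
lhs = gramschmidt(g * V);        % Gram-Schmidt applied to the transformed vectors
rhs = g * gramschmidt(V);        % the transformation applied after Gram-Schmidt
disp(norm(lhs - rhs));           % close to zero, up to rounding errors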
Numerical stability
When this process is implemented on a computer, the vectors \mathbf{u}_i are often not quite orthogonal, due to rounding errors. For the Gram–Schmidt process as described above (sometimes referred to as "classical Gram–Schmidt") this loss of orthogonality is particularly bad; therefore, it is said that the (classical) Gram–Schmidt process is numerically unstable.
The Gram–Schmidt process can be stabilized by a small modification; this version is sometimes referred to as modified Gram–Schmidt or MGS. This approach gives the same result as the original formula in exact arithmetic and introduces smaller errors in finite-precision arithmetic.
Instead of computing the vector \mathbf{u}_k as
\mathbf{u}_k = \mathbf{v}_k - \operatorname{proj}_{\mathbf{u}_1}(\mathbf{v}_k) - \operatorname{proj}_{\mathbf{u}_2}(\mathbf{v}_k) - \cdots - \operatorname{proj}_{\mathbf{u}_{k-1}}(\mathbf{v}_k),
it is computed as
\mathbf{u}_k^{(1)} = \mathbf{v}_k - \operatorname{proj}_{\mathbf{u}_1}(\mathbf{v}_k),
\mathbf{u}_k^{(2)} = \mathbf{u}_k^{(1)} - \operatorname{proj}_{\mathbf{u}_2}(\mathbf{u}_k^{(1)}),
\vdots
\mathbf{u}_k = \mathbf{u}_k^{(k-1)} = \mathbf{u}_k^{(k-2)} - \operatorname{proj}_{\mathbf{u}_{k-1}}(\mathbf{u}_k^{(k-2)}),
so that each partially orthogonalized intermediate vector, rather than the original \mathbf{v}_k, is projected onto the next direction.
Here is another description of the modified algorithm. Given the vectors \mathbf{v}_1, \mathbf{v}_2, \dots, \mathbf{v}_k, in our first step we produce vectors \mathbf{v}_1, \mathbf{v}_2^{(1)}, \dots, \mathbf{v}_k^{(1)} by removing components along the direction of \mathbf{v}_1. In formulas, \mathbf{v}_j^{(1)} := \mathbf{v}_j - \frac{\langle \mathbf{v}_j, \mathbf{v}_1 \rangle}{\langle \mathbf{v}_1, \mathbf{v}_1 \rangle}\mathbf{v}_1. After this step we already have two of our desired orthogonal vectors \mathbf{u}_1, \dots, \mathbf{u}_k, namely \mathbf{u}_1 = \mathbf{v}_1, \mathbf{u}_2 = \mathbf{v}_2^{(1)}, but we also made \mathbf{v}_3^{(1)}, \dots, \mathbf{v}_k^{(1)} already orthogonal to \mathbf{u}_1. Next, we orthogonalize those remaining vectors against \mathbf{u}_2 = \mathbf{v}_2^{(1)}. This means we compute \mathbf{v}_3^{(2)}, \mathbf{v}_4^{(2)}, \dots, \mathbf{v}_k^{(2)} by subtraction \mathbf{v}_j^{(2)} := \mathbf{v}_j^{(1)} - \frac{\langle \mathbf{v}_j^{(1)}, \mathbf{u}_2 \rangle}{\langle \mathbf{u}_2, \mathbf{u}_2 \rangle}\mathbf{u}_2. Now we have stored the vectors \mathbf{v}_1, \mathbf{v}_2^{(1)}, \mathbf{v}_3^{(2)}, \mathbf{v}_4^{(2)}, \dots, \mathbf{v}_k^{(2)} where the first three vectors are already \mathbf{u}_1, \mathbf{u}_2, \mathbf{u}_3 and the remaining vectors are already orthogonal to \mathbf{u}_1, \mathbf{u}_2. As should be clear now, the next step orthogonalizes \mathbf{v}_4^{(2)}, \dots, \mathbf{v}_k^{(2)} against \mathbf{u}_3 = \mathbf{v}_3^{(2)}. Proceeding in this manner we find the full set of orthogonal vectors \mathbf{u}_1, \dots, \mathbf{u}_k. If orthonormal vectors are desired, then we normalize as we go, so that the denominators in the subtraction formulas turn into ones.
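A minimal MATLAB sketch of this modified procedure, normalizing as it goes (the function name mgs is our own, and the columns of V are assumed linearly independent):

function U = mgs(V)
    % Modified Gram-Schmidt: as soon as a direction is fixed, its component is
    % removed from all remaining columns, rather than column by column at the end.
    [n, k] = size(V);
    U = V;
    for i = 1:k
        U(:, i) = U(:, i) / norm(U(:, i));      % normalize the current direction
        for j = i+1:k
            % orthogonalize every remaining column against the direction just fixed
            U(:, j) = U(:, j) - (U(:, i)' * U(:, j)) * U(:, i);
        end
    end
end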
Algorithm
The following MATLAB algorithm implements classical Gram–Schmidt orthonormalization. The vectors (columns of matrix V, so that V(:,j) is the jth vector) are replaced by orthonormal vectors (columns of U) which span the same subspace.
function U = gramschmidt(V)
    [n, k] = size(V);
    U = zeros(n, k);
    U(:,1) = V(:,1) / norm(V(:,1));                  % normalize the first vector
    for i = 2:k
        U(:,i) = V(:,i);
        for j = 1:i-1
            % subtract the component of the working vector along the already computed U(:,j)
            U(:,i) = U(:,i) - (U(:,j)'*U(:,i)) * U(:,j);
        end
        U(:,i) = U(:,i) / norm(U(:,i));              % normalize the result
    end
end
The cost of this algorithm is asymptotically O(nk^2) floating point operations, where n is the dimensionality of the vectors and k is the number of vectors.
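A short usage sketch (the test matrix is an arbitrary example with full column rank):

V = [3 2; 1 2];       % columns are the input vectors
U = gramschmidt(V);   % orthonormalize the columns
disp(U' * U);         % close to the 2-by-2 identity matrix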
Via Gaussian elimination
If the rows are written as a matrix A, then applying Gaussian elimination to the augmented matrix