In mathematics, the polar decomposition of a square real or complex matrix A is a factorization of the form A = UP, where U is an orthogonal matrix and P is a positive semi-definite symmetric matrix (U is a unitary matrix and P is a positive semi-definite Hermitian matrix in the complex case), both square and of the same size.
Intuitively, if a real n × n matrix A is interpreted as a linear transformation of n-dimensional space R^n, the polar decomposition separates it into a rotation or reflection U of R^n, and a scaling of the space along a set of n orthogonal axes.
The polar decomposition of a square matrix A always exists. If A is invertible, the decomposition is unique, and the factor P will be positive-definite. In that case, A can be written uniquely in the form A = U e^X, where U is unitary and X is the unique self-adjoint logarithm of the matrix P. This decomposition is useful in computing the fundamental group of (matrix) Lie groups.
The polar decomposition can also be defined as A = P'U, where P' = U P U^{-1} is a symmetric positive-definite matrix with the same eigenvalues as P but different eigenvectors.
The polar decomposition of a matrix can be seen as the matrix analog of the polar form of a complex number z as z = u r, where r is its absolute value (a non-negative real number), and u is a complex number with unit norm (an element of the circle group).
The definition A = UP may be extended to rectangular matrices A ∈ C^{m×n} by requiring U ∈ C^{m×n} to be a semi-unitary matrix and P ∈ C^{n×n} to be a positive-semidefinite Hermitian matrix. The decomposition always exists and P is always unique. The matrix U is unique if and only if A has full rank.
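As a concrete numerical illustration of the definition above, the following sketch (assuming NumPy is available; the helper name polar_decompose is illustrative, not a standard API) builds U and P for a random square complex matrix from its SVD and checks that A = UP with U unitary and P positive semi-definite Hermitian.

```python
# Minimal sketch: polar decomposition A = U P of a square complex matrix via the SVD.
import numpy as np

def polar_decompose(A):
    """Return (U, P) with A = U @ P, U unitary, P positive semi-definite Hermitian."""
    W, s, Vh = np.linalg.svd(A)           # A = W diag(s) Vh
    U = W @ Vh                            # unitary factor
    P = Vh.conj().T @ np.diag(s) @ Vh     # positive semi-definite Hermitian factor
    return U, P

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))
U, P = polar_decompose(A)
print(np.allclose(A, U @ P))                           # A = U P
print(np.allclose(U.conj().T @ U, np.eye(3)))          # U is unitary
print(np.allclose(P, P.conj().T),                      # P is Hermitian ...
      np.all(np.linalg.eigvalsh(P) >= -1e-12))         # ... and positive semi-definite
```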
Intuitive interpretation
A real square n × n matrix A can be interpreted as the linear transformation of R^n that takes a column vector x to Ax. Then, in the polar decomposition A = UP, the factor U is an n × n real orthogonal matrix. The polar decomposition can then be seen as expressing the linear transformation defined by A as a scaling of the space R^n along each eigenvector e_i of P by a scale factor σ_i (the action of P), followed by a single rotation or reflection of R^n (the action of U).
Alternatively, the decomposition A = P'U expresses the transformation defined by A as a rotation or reflection (U) followed by a scaling (P') along certain orthogonal directions. The scale factors are the same, but the directions are different.
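A minimal sketch of this "scale, then rotate" reading in R^2, assuming NumPy; the rotation angle and the symmetric positive-definite matrix below are arbitrary illustrative choices, not taken from the article.

```python
# Illustrative sketch: A = U P acts by scaling (P) and then rotating (U) in R^2.
import numpy as np

theta = 0.7
U = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # rotation
P = np.array([[2.0, 0.5],
              [0.5, 1.0]])                        # symmetric positive-definite
A = U @ P

x = np.array([1.0, -2.0])
print(np.allclose(A @ x, U @ (P @ x)))   # scale first (P), then rotate (U)

# P scales along its orthogonal eigenvectors by its positive eigenvalues:
evals, evecs = np.linalg.eigh(P)
print(evals)                             # the scale factors
```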
Properties
The polar decomposition of the complex conjugate of A is given by
\bar{A} = \bar{U} \bar{P}.
Note that
\det A = \det U \cdot \det P = e^{i\theta} r
gives the corresponding polar decomposition of the determinant of A, since \det U = e^{i\theta} and \det P = r = |\det A|. In particular, if A has determinant 1 then both U and P have determinant 1.
The positive-semidefinite matrix P is always unique, even if A is singular, and is denoted as
P = (A^* A)^{1/2},
where A^* denotes the conjugate transpose of A. The uniqueness of P ensures that this expression is well-defined. The uniqueness is guaranteed by the fact that A^* A is a positive-semidefinite Hermitian matrix and, therefore, has a unique positive-semidefinite Hermitian square root. If A is invertible, then P is positive-definite, thus also invertible, and the matrix U is uniquely determined by
U = A P^{-1}.
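The formulas P = (A^* A)^{1/2} and U = A P^{-1} can be checked numerically as follows; this sketch assumes NumPy and SciPy (scipy.linalg.sqrtm for the matrix square root) and a random, generically invertible A.

```python
# Sketch of P = (A* A)^{1/2} and U = A P^{-1} for an invertible matrix A.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))  # generically invertible

P = sqrtm(A.conj().T @ A)        # unique positive-definite square root of A* A
U = A @ np.linalg.inv(P)         # uniquely determined unitary factor

print(np.allclose(A, U @ P))
print(np.allclose(U.conj().T @ U, np.eye(4)))
```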
Relation to the SVD
In terms of the singular value decomposition (SVD) of A, A = W Σ V^*, one has
P = V Σ V^*,
U = W V^*,
where W and V are unitary matrices (called orthogonal matrices if the field is the reals R) and Σ is the diagonal matrix of singular values. This confirms that P is positive semi-definite and U is unitary. Thus, the existence of the SVD is equivalent to the existence of the polar decomposition.
One can also decompose A in the form
A = P'U.
Here U is the same as before and P' is given by
P' = U P U^{-1} = W Σ W^*.
This is known as the left polar decomposition, whereas the previous decomposition is known as the right polar decomposition. The left polar decomposition is also known as the reverse polar decomposition.
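A short sketch, assuming NumPy, that builds both the right factor P = V Σ V^* and the left factor P' = W Σ W^* from a single SVD and verifies A = UP = P'U; the variable names mirror the formulas above.

```python
# Sketch relating the SVD A = W Σ V* to the right (A = U P) and left (A = P' U)
# polar decompositions, with U = W V*, P = V Σ V*, P' = W Σ W*.
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3))

W, s, Vh = np.linalg.svd(A)
Sigma = np.diag(s)
U = W @ Vh
P = Vh.conj().T @ Sigma @ Vh        # right factor
P_left = W @ Sigma @ W.conj().T     # left factor

print(np.allclose(A, U @ P))        # right polar decomposition
print(np.allclose(A, P_left @ U))   # left polar decomposition
```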
The polar decomposition of a square invertible real matrix A is of the form
A = |A| R,
where |A| = (A A^T)^{1/2} is a positive-definite matrix and R = |A|^{-1} A is an orthogonal matrix.
Relation to normal matrices
The matrix A with polar decomposition A = UP is normal if and only if U and P commute: UP = PU, or equivalently, if they are simultaneously diagonalizable.
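The commutation property can be checked numerically; in this sketch (NumPy assumed) a normal matrix is generated as Q Λ Q^* for a random unitary Q and diagonal Λ, and the polar factors obtained from the SVD are verified to commute.

```python
# Numerical check that the polar factors of a normal matrix commute (U P = P U).
import numpy as np

rng = np.random.default_rng(3)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
Lam = np.diag(rng.standard_normal(3) + 1j * rng.standard_normal(3))
A = Q @ Lam @ Q.conj().T                             # normal: A A* = A* A

W, s, Vh = np.linalg.svd(A)
U, P = W @ Vh, Vh.conj().T @ np.diag(s) @ Vh         # polar factors of A
print(np.allclose(A @ A.conj().T, A.conj().T @ A))   # A is normal
print(np.allclose(U @ P, P @ U))                     # the factors commute
```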
Construction and proofs of existence
The core idea behind the construction of the polar decomposition is similar to that used to compute the
singular-value decomposition.
Derivation for normal matrices
If A is normal, then it is unitarily equivalent to a diagonal matrix: A = V Λ V^* for some unitary matrix V and some diagonal matrix Λ. This makes the derivation of its polar decomposition particularly straightforward, as we can then write
A = V Φ_Λ |Λ| V^* = (V Φ_Λ V^*)(V |Λ| V^*),
where Φ_Λ is a diagonal matrix containing the ''phases'' of the elements of Λ, that is, (Φ_Λ)_{ii} = Λ_{ii} / |Λ_{ii}| when Λ_{ii} ≠ 0, and (Φ_Λ)_{ii} = 0 when Λ_{ii} = 0.
The polar decomposition is thus A = UP, with U = V Φ_Λ V^* and P = V |Λ| V^* diagonal in the eigenbasis of A and having eigenvalues equal to the phases and absolute values of those of A, respectively.
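A sketch of this construction, assuming NumPy: a normal matrix is built from a random unitary V and a diagonal Λ, and U and P are assembled from the phases and absolute values of the eigenvalues.

```python
# Sketch of the normal-matrix derivation: A = V Λ V*, U = V Φ V*, P = V |Λ| V*.
import numpy as np

rng = np.random.default_rng(4)
V, _ = np.linalg.qr(rng.standard_normal((3, 3)) + 1j * rng.standard_normal((3, 3)))
lam = rng.standard_normal(3) + 1j * rng.standard_normal(3)   # eigenvalues of A
A = V @ np.diag(lam) @ V.conj().T                            # a normal matrix

phases = np.where(lam != 0, lam / np.abs(lam), 0.0)          # phase set to 0 where Λ_ii = 0
U = V @ np.diag(phases) @ V.conj().T
P = V @ np.diag(np.abs(lam)) @ V.conj().T
print(np.allclose(A, U @ P))
```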
Derivation for invertible matrices
From the singular-value decomposition, it can be shown that a matrix A is invertible if and only if A^* A (equivalently, A A^*) is. Moreover, this is true if and only if the eigenvalues of A^* A are all nonzero. [Note how this implies, by the positivity of A^* A, that the eigenvalues are all real and strictly positive.]
In this case, the polar decomposition is directly obtained by writing
A = A (A^* A)^{-1/2} (A^* A)^{1/2},
and observing that A (A^* A)^{-1/2} is unitary. To see this, we can exploit the spectral decomposition of A^* A = V D V^* to write
A (A^* A)^{-1/2} = A V D^{-1/2} V^*.
In this expression, V^* is unitary because V is. To show that A V D^{-1/2} is also unitary, we can use the SVD to write A = W D^{1/2} V^*, so that
A V D^{-1/2} = W D^{1/2} V^* V D^{-1/2} = W,
where again W is unitary by construction.
Yet another way to directly show the unitarity of A (A^* A)^{-1/2} is to note that, writing the SVD of A in terms of rank-1 matrices as A = \sum_k σ_k u_k v_k^*, where σ_k are the singular values of A, we have
A (A^* A)^{-1/2} = \left( \sum_j σ_j u_j v_j^* \right) \left( \sum_k σ_k^{-1} v_k v_k^* \right) = \sum_k u_k v_k^*,
which directly implies the unitarity of A (A^* A)^{-1/2}, because a matrix is unitary if and only if its singular values all equal 1.
Note how, from the above construction, it follows that ''the unitary matrix in the polar decomposition of an invertible matrix is uniquely defined''.
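The rank-one-sum argument can be verified numerically. The sketch below (NumPy and SciPy assumed) compares A (A^* A)^{-1/2} with the product of the SVD's unitary factors and checks that its singular values are all 1.

```python
# Numerical check: A (A* A)^{-1/2} equals Σ_k u_k v_k*, hence has singular values 1.
import numpy as np
from scipy.linalg import sqrtm

rng = np.random.default_rng(5)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))   # generically invertible

W, s, Vh = np.linalg.svd(A)                       # columns of W / rows of Vh give u_k / v_k*
lhs = A @ np.linalg.inv(sqrtm(A.conj().T @ A))    # A (A* A)^{-1/2}
rhs = W @ Vh                                      # Σ_k u_k v_k*
print(np.allclose(lhs, rhs))
print(np.allclose(np.linalg.svd(lhs, compute_uv=False), np.ones(4)))  # singular values all 1
```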
General derivation
The SVD of a square matrix A reads A = W D^{1/2} V^*, with W, V unitary matrices, and D^{1/2} a diagonal, positive semi-definite matrix. By simply inserting an additional pair of W's or V's, we obtain the two forms of the polar decomposition of A:
A = W D^{1/2} V^* = (W D^{1/2} W^*)(W V^*) = (W V^*)(V D^{1/2} V^*).
More generally, if A is some n × m rectangular matrix, its SVD can be written as
A = W D^{1/2} V^*,
where now W and V are isometries with dimensions n × r and m × r, respectively, where r ≡ rank(A), and D^{1/2} is again a diagonal positive semi-definite square matrix with dimensions r × r. We can now apply the same reasoning used in the above equation to write A = PU with P ≡ W D^{1/2} W^* and U ≡ W V^*, but now U is not in general unitary. Nonetheless, U has the same support and range as A, and it satisfies U^* U = V V^* and U U^* = W W^*. This makes U into an isometry when its action is restricted onto the support of A, that is, it means that U is a partial isometry.
As an explicit example of this more general case, consider the SVD of the following matrix:
We then have
which is an isometry, but not unitary. On the other hand, if we consider the decomposition of
we find
which is a partial isometry (but not an isometry).
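Since the article's explicit example matrix is not reproduced here, the following sketch uses an arbitrary 3 × 2 matrix of our own choosing (NumPy assumed) to illustrate the same point: the factor U = W V^* obtained from the thin SVD is an isometry but not unitary.

```python
# Sketch of the rectangular case: U = W V* from the thin SVD is an isometry, not a unitary.
import numpy as np

A = np.array([[1.0, 0.0],
              [0.0, 2.0],
              [0.0, 0.0]])                        # arbitrary 3×2 matrix of rank 2

W, s, Vh = np.linalg.svd(A, full_matrices=False)  # thin SVD: W is 3×2, Vh is 2×2
U = W @ Vh                                        # 3×2 factor
P = Vh.conj().T @ np.diag(s) @ Vh                 # 2×2 positive semi-definite factor

print(np.allclose(A, U @ P))
print(np.allclose(U.conj().T @ U, np.eye(2)))     # True: U* U = I, an isometry
print(np.allclose(U @ U.conj().T, np.eye(3)))     # False: U U* ≠ I, not unitary
```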
Bounded operators on Hilbert space
The polar decomposition of any bounded linear operator ''A'' between complex Hilbert spaces is a canonical factorization as the product of a partial isometry and a non-negative operator.
The polar decomposition for matrices generalizes as follows: if ''A'' is a bounded linear operator then there is a unique factorization of ''A'' as a product ''A'' = ''UP'' where ''U'' is a partial isometry, ''P'' is a non-negative self-adjoint operator and the initial space of ''U'' is the closure of the range of ''P''.
The operator ''U'' must be weakened to a partial isometry, rather than unitary, because of the following issue. If ''A'' is the one-sided shift on ''l''^2(N), then |''A''| = (''A''*''A'')^{1/2} = ''I''. So if ''A'' = ''U''|''A''|, then ''U'' must be ''A'', which is not unitary.
The existence of a polar decomposition is a consequence of Douglas' lemma: if ''A'', ''B'' are bounded operators on a Hilbert space ''H'' and ''A''*''A'' ≤ ''B''*''B'', then there exists a contraction ''C'' such that ''A'' = ''CB''; furthermore, ''C'' is unique if ''Ker''(''B''*) ⊂ ''Ker''(''C'').
The operator ''C'' can be defined by ''C''(''Bh'') := ''Ah'' for all ''h'' in ''H'', extended by continuity to the closure of ''Ran''(''B''), and by zero on the orthogonal complement of ''Ran''(''B''), so that it is defined on all of ''H''. The lemma then follows since ''A''*''A'' ≤ ''B''*''B'' implies ''Ker''(''B'') ⊂ ''Ker''(''A'').
In particular, if ''A''*''A'' = ''B''*''B'', then ''C'' is a partial isometry, which is unique if ''Ker''(''B''*) ⊂ ''Ker''(''C'').
In general, for any bounded operator ''A'',
''A''*''A'' = (''A''*''A'')^{1/2} (''A''*''A'')^{1/2},
where (''A''*''A'')^{1/2} is the unique positive square root of ''A''*''A'' given by the usual functional calculus. So by the lemma, we have
''A'' = ''U''(''A''*''A'')^{1/2}
for some partial isometry ''U'', which is unique if ''Ker''(''A'') ⊂ ''Ker''(''U''). Take ''P'' to be (''A''*''A'')^{1/2} and one obtains the polar decomposition ''A'' = ''UP''. Notice that an analogous argument can be used to show ''A'' = ''P′U'', where ''P′'' is positive and ''U'' a partial isometry.
When ''H'' is finite-dimensional, ''U'' can be extended to a unitary operator; this is not true in general (see the example above). Alternatively, the polar decomposition can be shown using the operator version of the singular value decomposition.
By a property of the continuous functional calculus, |''A''| is in the C*-algebra generated by ''A''. A similar but weaker statement holds for the partial isometry: ''U'' is in the von Neumann algebra generated by ''A''. If ''A'' is invertible, the polar part ''U'' will be in the C*-algebra as well.
Unbounded operators
If ''A'' is a closed, densely defined unbounded operator between complex Hilbert spaces, then it still has a (unique) polar decomposition
''A'' = ''U''|''A''|,
where |''A''| is a (possibly unbounded) non-negative self-adjoint operator with the same domain as ''A'', and ''U'' is a partial isometry vanishing on the orthogonal complement of the range ''Ran''(|''A''|).
The proof uses the same lemma as above, which goes through for unbounded operators in general. If ''Dom''(''A''*''A'') = ''Dom''(''B''*''B'') and ''A''*''Ah'' = ''B''*''Bh'' for all ''h'' ∈ ''Dom''(''A''*''A''), then there exists a partial isometry ''U'' such that ''A'' = ''UB''. ''U'' is unique if ''Ran''(''B'')^⊥ ⊂ ''Ker''(''U''). The operator ''A'' being closed and densely defined ensures that the operator ''A''*''A'' is self-adjoint (with dense domain) and therefore allows one to define (''A''*''A'')^{1/2}. Applying the lemma gives the polar decomposition.
If an unbounded operator ''A'' is affiliated to a von Neumann algebra M, and ''A'' = ''UP'' is its polar decomposition, then ''U'' is in M and so is the spectral projection of ''P'', 1_''B''(''P''), for any Borel set ''B'' in [0, +∞).
Quaternion polar decomposition
The polar decomposition of quaternions H depends on the unit 2-dimensional sphere \{x i + y j + z k ∈ H : x^2 + y^2 + z^2 = 1\} of square roots of minus one. Given any ''r'' on this sphere, and an angle −π < ''a'' ≤ π, the versor e^{ar} = cos ''a'' + ''r'' sin ''a'' is on the unit 3-sphere of H. For ''a'' = 0 and ''a'' = π, the versor is 1 or −1 regardless of which ''r'' is selected. The norm ''t'' of a quaternion ''q'' is the Euclidean distance from the origin to ''q''. When a quaternion is not just a real number, then there is a ''unique'' polar decomposition ''q'' = ''t'' e^{ar}.
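A small numerical sketch of the quaternion polar form q = t(cos a + r sin a), assuming NumPy and representing a quaternion as a (w, x, y, z) array; the helper quaternion_polar is illustrative, not a library function, and assumes q is not purely real.

```python
# Sketch of the quaternion polar form q = t (cos a + r sin a).
import numpy as np

def quaternion_polar(q):
    """Return (t, a, r): norm t, angle a in (0, pi), unit imaginary direction r."""
    w, v = q[0], q[1:]
    t = np.linalg.norm(q)                   # Euclidean norm of the quaternion
    a = np.arctan2(np.linalg.norm(v), w)    # angle, valid when q is not purely real
    r = v / np.linalg.norm(v)               # unit square root of -1
    return t, a, r

q = np.array([1.0, 2.0, -1.0, 0.5])
t, a, r = quaternion_polar(q)
reconstructed = t * np.concatenate(([np.cos(a)], np.sin(a) * r))
print(np.allclose(q, reconstructed))        # q = t (cos a + r sin a)
```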
Alternative planar decompositions
In the Cartesian plane, alternative planar ring decompositions arise analogously.
Numerical determination of the matrix polar decomposition
To compute an approximation of the polar decomposition ''A'' = ''UP'', usually the unitary factor ''U'' is approximated.
The iteration is based on Heron's method for the square root of 1 and computes, starting from ''U''_0 = ''A'', the sequence
''U''_{''k''+1} = (''U''_''k'' + (''U''_''k''*)^{−1}) / 2,    ''k'' = 0, 1, 2, ….
The combination of inversion and Hermitian conjugation is chosen so that, in the singular value decomposition, the unitary factors remain the same and the iteration reduces to Heron's method on the singular values.
This basic iteration may be refined to speed up the process, for example by rescaling the iterates at each step.
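A minimal sketch of the basic iteration, assuming NumPy; the tolerance, iteration cap, and function name are arbitrary choices of ours.

```python
# Sketch of the Newton-Heron iteration U_{k+1} = (U_k + (U_k*)^{-1}) / 2 for the
# unitary polar factor of an invertible matrix A, starting from U_0 = A.
import numpy as np

def polar_unitary_newton(A, tol=1e-12, max_iter=100):
    U = A.astype(complex)
    for _ in range(max_iter):
        U_next = 0.5 * (U + np.linalg.inv(U.conj().T))
        if np.linalg.norm(U_next - U) < tol * np.linalg.norm(U_next):
            return U_next
        U = U_next
    return U

rng = np.random.default_rng(6)
A = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
U = polar_unitary_newton(A)
P = U.conj().T @ A                               # then P = U* A
print(np.allclose(U.conj().T @ U, np.eye(4)))    # U is (numerically) unitary
print(np.allclose(A, U @ P))                     # A = U P
```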
See also
* Cartan decomposition
* Algebraic polar decomposition
* Polar decomposition of a complex measure
* Lie group decomposition
References
* Conway, J. B.: ''A Course in Functional Analysis''. Graduate Texts in Mathematics. New York: Springer, 1990.
* Douglas, R. G.: On Majorization, Factorization, and Range Inclusion of Operators on Hilbert Space. ''Proc. Amer. Math. Soc.'' 17, 413–415 (1966).