In linear algebra and functional analysis, a spectral theorem is a result about when a linear operator or matrix can be diagonalized (that is, represented as a diagonal matrix in some basis). This is extremely useful because computations involving a diagonalizable matrix can often be reduced to much simpler computations involving the corresponding diagonal matrix. The concept of diagonalization is relatively straightforward for operators on finite-dimensional vector spaces but requires some modification for operators on infinite-dimensional spaces. In general, the spectral theorem identifies a class of linear operators that can be modeled by multiplication operators, which are as simple as one can hope to find. In more abstract language, the spectral theorem is a statement about commutative C*-algebras. See also spectral theory for a historical perspective.

Examples of operators to which the spectral theorem applies are self-adjoint operators or more generally normal operators on Hilbert spaces. The spectral theorem also provides a canonical decomposition, called the spectral decomposition, of the underlying vector space on which the operator acts.

Augustin-Louis Cauchy proved the spectral theorem for symmetric matrices, i.e., that every real symmetric matrix is diagonalizable. In addition, Cauchy was the first to be systematic about determinants. The spectral theorem as generalized by John von Neumann is today perhaps the most important result of operator theory. This article mainly focuses on the simplest kind of spectral theorem, that for a self-adjoint operator on a Hilbert space. However, as noted above, the spectral theorem also holds for normal operators on a Hilbert space.


Finite-dimensional case


Hermitian maps and Hermitian matrices

We begin by considering a Hermitian matrix on \mathbb{C}^n (but the following discussion will be adaptable to the more restrictive case of symmetric matrices on \mathbb{R}^n). We consider a Hermitian map A on a finite-dimensional complex inner product space V endowed with a positive definite sesquilinear inner product \langle\cdot,\cdot\rangle. The Hermitian condition on A means that for all x, y \in V, \langle A x, y \rangle = \langle x, A y \rangle. An equivalent condition is that A^* = A, where A^* is the Hermitian conjugate of A. In the case that A is identified with a Hermitian matrix, the matrix of A^* is equal to its conjugate transpose. (If A is a real matrix, then this is equivalent to A^T = A, that is, A is a symmetric matrix.)

This condition implies that all eigenvalues of a Hermitian map are real: to see this, it is enough to apply it to the case when x = y is an eigenvector. (Recall that an eigenvector of a linear map A is a non-zero vector v such that A v = \lambda v for some scalar \lambda. The value \lambda is the corresponding eigenvalue. Moreover, the eigenvalues are roots of the characteristic polynomial.)

We provide a sketch of a proof for the case where the underlying field of scalars is the complex numbers. By the fundamental theorem of algebra, applied to the characteristic polynomial of A, there is at least one complex eigenvalue \lambda_1 and corresponding eigenvector v_1, which must by definition be non-zero. Then since

\lambda_1 \langle v_1, v_1 \rangle = \langle A(v_1), v_1 \rangle = \langle v_1, A(v_1) \rangle = \bar\lambda_1 \langle v_1, v_1 \rangle,

we find that \lambda_1 is real. Now consider the space \mathcal{K}^{n-1} = \operatorname{span}(v_1)^\perp, the orthogonal complement of v_1. By Hermiticity, \mathcal{K}^{n-1} is an invariant subspace of A. To see that, consider any k \in \mathcal{K}^{n-1}, so that \langle k, v_1 \rangle = 0 by definition of \mathcal{K}^{n-1}. To satisfy invariance, we need to check whether A(k) \in \mathcal{K}^{n-1}. This is true because \langle A(k), v_1 \rangle = \langle k, A(v_1) \rangle = \langle k, \lambda_1 v_1 \rangle = 0. Applying the same argument to \mathcal{K}^{n-1} shows that A has at least one real eigenvalue \lambda_2 with a corresponding eigenvector v_2 \in \mathcal{K}^{n-1}, which is therefore orthogonal to v_1. This can be used to build another invariant subspace \mathcal{K}^{n-2} = \operatorname{span}(\{v_1, v_2\})^\perp. Finite induction then finishes the proof.

The matrix representation of A in a basis of eigenvectors is diagonal, and by construction the proof gives a basis of mutually orthogonal eigenvectors; by choosing them to be unit vectors one obtains an orthonormal basis of eigenvectors. A can be written as a linear combination of pairwise orthogonal projections, called its spectral decomposition. Let V_\lambda = \{ v \in V : A v = \lambda v \} be the eigenspace corresponding to an eigenvalue \lambda. Note that the definition does not depend on any choice of specific eigenvectors. In general, V is the orthogonal direct sum of the spaces V_\lambda, where \lambda ranges over the spectrum of A. When the matrix being decomposed is Hermitian, the spectral decomposition is a special case of the Schur decomposition (see the proof in the case of normal matrices below).
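
In finite dimensions this is easy to check numerically. The following is a minimal sketch, assuming NumPy: numpy.linalg.eigh returns an orthonormal eigenbasis of a Hermitian matrix, from which A can be reassembled as a linear combination of rank-one orthogonal projections, its spectral decomposition.

```python
# Minimal sketch of the spectral decomposition of a Hermitian matrix,
# assuming NumPy.
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
A = (B + B.conj().T) / 2            # a random Hermitian matrix

# eigh is specialized to Hermitian input: the eigenvalues come out real
# and the eigenvectors (columns of U) come out orthonormal.
eigvals, U = np.linalg.eigh(A)
assert np.allclose(U.conj().T @ U, np.eye(4))    # U is unitary

# Reassemble A as a sum of rank-one orthogonal projections P_i = v_i v_i*.
A_rebuilt = sum(lam * np.outer(v, v.conj()) for lam, v in zip(eigvals, U.T))
assert np.allclose(A, A_rebuilt)
```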


Spectral decomposition and the singular value decomposition

The spectral decomposition is a special case of the singular value decomposition, which states that any matrix A \in \mathbb{C}^{m \times n} can be expressed as A = U \Sigma V^*, where U \in \mathbb{C}^{m \times m} and V \in \mathbb{C}^{n \times n} are unitary matrices and \Sigma \in \mathbb{R}^{m \times n} is a (rectangular) diagonal matrix with non-negative entries. The diagonal entries of \Sigma are uniquely determined by A and are known as the singular values of A. If A is Hermitian, its singular values are the absolute values of its eigenvalues; if A is moreover positive semidefinite, one may take U = V, so that the singular value decomposition A = U \Sigma U^* coincides with the spectral decomposition.
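
A short numerical sketch of this relationship, assuming NumPy:

```python
# Sketch, assuming NumPy: for a Hermitian (here, real symmetric) matrix the
# singular values are the absolute values of the eigenvalues; for a positive
# semidefinite matrix the two decompositions carry the same data.
import numpy as np

rng = np.random.default_rng(1)
B = rng.standard_normal((4, 4))
A = (B + B.T) / 2                   # real symmetric, hence Hermitian

eigvals = np.linalg.eigvalsh(A)                  # ascending order
singvals = np.linalg.svd(A, compute_uv=False)    # descending order
assert np.allclose(sorted(np.abs(eigvals), reverse=True), singvals)

P = A @ A.T                                      # positive semidefinite
assert np.allclose(np.linalg.eigvalsh(P)[::-1],
                   np.linalg.svd(P, compute_uv=False))
```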


Normal matrices

The spectral theorem extends to a more general class of matrices. Let A be an operator on a finite-dimensional inner product space. A is said to be normal if A^* A = A A^*. One can show that A is normal if and only if it is unitarily diagonalizable, using the Schur decomposition. That is, any matrix can be written as A = U T U^*, where U is unitary and T is upper triangular. If A is normal, then T^* T = T T^*. Therefore, T must be diagonal, since a normal upper triangular matrix is diagonal (see normal matrix). The converse is obvious.

In other words, A is normal if and only if there exists a unitary matrix U such that A = U D U^*, where D is a diagonal matrix. Then, the entries of the diagonal of D are the eigenvalues of A. The column vectors of U are the eigenvectors of A and they are orthonormal. Unlike the Hermitian case, the entries of D need not be real.
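
For illustration, a sketch assuming NumPy: a plane rotation is normal but not Hermitian, and it is unitarily diagonalizable with non-real eigenvalues e^{\pm i\theta}.

```python
# Sketch, assuming NumPy: a rotation matrix is normal but not Hermitian;
# it is unitarily diagonalizable, with non-real eigenvalues.
import numpy as np

theta = 0.7
A = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])

assert np.allclose(A.conj().T @ A, A @ A.conj().T)   # A*A = AA*: normal

eigvals, U = np.linalg.eig(A)
# For a normal matrix with distinct eigenvalues the eigenvectors are
# automatically orthogonal, and NumPy returns them with unit norm,
# so U is unitary here.
assert np.allclose(U.conj().T @ U, np.eye(2))
assert np.allclose(U @ np.diag(eigvals) @ U.conj().T, A)
assert not np.allclose(eigvals.imag, 0)     # eigenvalues e^{±i theta}, not real
```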


Compact self-adjoint operators

In the more general setting of Hilbert spaces, which may have an infinite dimension, the statement of the spectral theorem for compact self-adjoint operators is virtually the same as in the finite-dimensional case. As for Hermitian matrices, the key point is to prove the existence of at least one nonzero eigenvector. One cannot rely on determinants to show existence of eigenvalues, but one can use a maximization argument analogous to the variational characterization of eigenvalues. If the compactness assumption is removed, then it is ''not'' true that every self-adjoint operator has eigenvectors. For example, the multiplication operator M_x on L^2([0,1]), which takes each \psi(x) \in L^2([0,1]) to x\psi(x), is bounded and self-adjoint, but has no eigenvectors. However, its spectrum, suitably defined, is still equal to [0,1]; see spectrum of a bounded operator.
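
The maximization argument can be made concrete in finite dimensions. A sketch, assuming NumPy: repeatedly applying a positive semidefinite matrix and renormalizing (power iteration) drives the Rayleigh quotient \langle A x, x \rangle / \langle x, x \rangle to its supremum, which is attained at an eigenvector; the proof for compact operators follows the same variational idea.

```python
# Sketch, assuming NumPy: the variational characterization of the top
# eigenvalue, illustrated by power iteration on a symmetric PSD matrix.
import numpy as np

rng = np.random.default_rng(2)
B = rng.standard_normal((5, 5))
A = B @ B.T                          # symmetric positive semidefinite

x = rng.standard_normal(5)
for _ in range(1000):                # power iteration (assumes a spectral gap)
    x = A @ x
    x /= np.linalg.norm(x)

rayleigh = x @ A @ x                 # <Ax, x> with ||x|| = 1
assert np.allclose(rayleigh, np.linalg.eigvalsh(A)[-1])   # sup attained
assert np.allclose(A @ x, rayleigh * x)                   # ...at an eigenvector
```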


Bounded self-adjoint operators


Possible absence of eigenvectors

The next generalization we consider is that of bounded self-adjoint operators on a Hilbert space. Such operators may have no eigenvectors: for instance let A be the operator of multiplication by t on L^2([0,1]), that is, (A f)(t) = t f(t). This operator does not have any eigenvectors ''in'' L^2([0,1]), though it does have eigenvectors in a larger space. Namely the distribution f(t) = \delta(t - t_0), where \delta is the Dirac delta function, is an eigenvector when construed in an appropriate sense. The Dirac delta function is however not a function in the classical sense and does not lie in the Hilbert space L^2([0,1]). Thus, the delta functions are "generalized eigenvectors" of A but not eigenvectors in the usual sense.
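
A discrete caricature of this situation, assuming NumPy: sampling the multiplication operator on an n-point grid gives a diagonal matrix whose eigenvectors are the standard basis vectors, grid-scale stand-ins for the delta distributions; as the grid is refined these concentrate on single points and have no limit in L^2.

```python
# Sketch, assuming NumPy: the discretized multiplication operator
# (Af)(t) = t f(t) on [0, 1] is the diagonal matrix diag(t_1, ..., t_n).
import numpy as np

n = 8
t = np.linspace(0.0, 1.0, n)
A = np.diag(t)

eigvals, U = np.linalg.eigh(A)
assert np.allclose(eigvals, t)               # the spectrum samples [0, 1]
# The eigenvectors are (up to sign) the standard basis vectors e_k,
# discrete analogues of delta(t - t_k).
assert np.allclose(np.abs(U), np.eye(n))
```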


Spectral subspaces and projection-valued measures

In the absence of (true) eigenvectors, one can look for a "spectral subspace" consisting of ''almost eigenvectors'', i.e., a closed subspace V_E of V associated with a Borel set E \subset \sigma(A) in the spectrum of A. This subspace can be thought of as the closed span of generalized eigenvectors for A with eigen''values'' in E. In the above example, where (A f)(t) = t f(t), we might consider the subspace of functions supported on a small interval [a, a + \varepsilon] inside [0,1]. This space is invariant under A, and for any f in this subspace, A f is very close to a f. Each subspace, in turn, is encoded by the associated projection operator, and the collection of all the subspaces is then represented by a projection-valued measure.

One formulation of the spectral theorem expresses the operator A as an integral of the coordinate function over the operator's spectrum \sigma(A) with respect to a projection-valued measure \pi:

A = \int_{\sigma(A)} \lambda \, d\pi(\lambda).

When the self-adjoint operator in question is compact, this version of the spectral theorem reduces to something similar to the finite-dimensional spectral theorem above, except that the operator is expressed as a finite or countably infinite linear combination of projections, that is, the measure consists only of atoms.
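
Continuing the discretized example, a sketch assuming NumPy: the spectral projection for E = [a, a + \varepsilon] is multiplication by the indicator function of E; it is an orthogonal projection commuting with A, and on its range A differs from the scalar a by at most \varepsilon.

```python
# Sketch, assuming NumPy: the spectral projection pi(E) of the discretized
# multiplication operator, for E = [a, a + eps], is multiplication by the
# indicator function of E.
import numpy as np

n, a, eps = 200, 0.30, 0.05
t = np.linspace(0.0, 1.0, n)
A = np.diag(t)

P = np.diag(((t >= a) & (t <= a + eps)).astype(float))

assert np.allclose(P @ P, P) and np.allclose(P.T, P)   # orthogonal projection
assert np.allclose(A @ P, P @ A)                       # commutes with A
# On the range of P, A is within eps of a times the identity.
assert np.linalg.norm((A - a * np.eye(n)) @ P, 2) <= eps + 1e-12
```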


Multiplication operator version

An alternative formulation of the spectral theorem says that every bounded self-adjoint operator is unitarily equivalent to a multiplication operator, a relatively simple type of operator.

Multiplication operators are a direct generalization of diagonal matrices. A finite-dimensional Hermitian vector space V may be coordinatized as the space of functions f : B \to \C from a basis B to the complex numbers, so that the B-coordinates of a vector are the values of the corresponding function f. The finite-dimensional spectral theorem for a self-adjoint operator A : V \to V states that there exists an orthonormal basis of eigenvectors B, so that the inner product becomes the dot product with respect to the B-coordinates: thus V is isomorphic to L^2(B, \mu) for the discrete unit measure \mu on B. Also A is unitarily equivalent to the multiplication operator (A f)(v) = \lambda(v) f(v), where \lambda(v) is the eigenvalue of v \in B: that is, A multiplies each B-coordinate by the corresponding eigenvalue \lambda(v), the action of a diagonal matrix. Finally, the operator norm \|A\| is equal to the magnitude of the largest eigenvalue, \|\lambda\|_\infty.

The spectral theorem is the beginning of the vast research area of functional analysis called operator theory; see also spectral measure. There is also an analogous spectral theorem for bounded normal operators on Hilbert spaces. The only difference in the conclusion is that now \lambda may be complex-valued.
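
The norm identity at the end is easy to verify numerically. A sketch, assuming NumPy:

```python
# Sketch, assuming NumPy: for a Hermitian matrix the operator norm equals
# the sup norm of the eigenvalue (multiplier) function lambda.
import numpy as np

rng = np.random.default_rng(3)
B = rng.standard_normal((6, 6))
A = (B + B.T) / 2

op_norm = np.linalg.norm(A, 2)       # spectral (operator) norm
assert np.allclose(op_norm, np.max(np.abs(np.linalg.eigvalsh(A))))
```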


Direct integrals

There is also a formulation of the spectral theorem in terms of direct integrals. It is similar to the multiplication-operator formulation, but more canonical. Let A be a bounded self-adjoint operator and let \sigma(A) be the spectrum of A. The direct-integral formulation of the spectral theorem associates two quantities to A. First, a measure \mu on \sigma(A), and second, a family of Hilbert spaces \{H_\lambda\}, \lambda \in \sigma(A). We then form the direct integral Hilbert space

\int_{\sigma(A)}^\oplus H_\lambda \, d\mu(\lambda).

The elements of this space are functions (or "sections") s(\lambda), \lambda \in \sigma(A), such that s(\lambda) \in H_\lambda for all \lambda. The direct-integral version of the spectral theorem states that A is unitarily equivalent to the operator of multiplication by \lambda on this direct integral. The spaces H_\lambda can be thought of as something like "eigenspaces" for A. Note, however, that unless the one-element set \{\lambda\} has positive measure, the space H_\lambda is not actually a subspace of the direct integral. Thus, the H_\lambda's should be thought of as "generalized eigenspaces"; that is, the elements of H_\lambda are "eigenvectors" that do not actually belong to the Hilbert space. Although both the multiplication-operator and direct-integral formulations of the spectral theorem express a self-adjoint operator as unitarily equivalent to a multiplication operator, the direct-integral approach is more canonical. First, the set over which the direct integral takes place (the spectrum of the operator) is canonical. Second, the function we are multiplying by is canonical in the direct-integral approach: simply the function \lambda \mapsto \lambda.
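
In finite dimensions the fibers H_\lambda are literal eigenspaces, and their dimensions record eigenvalue multiplicities. A sketch, assuming NumPy:

```python
# Sketch, assuming NumPy: a finite-dimensional caricature of the fibers
# H_lambda, whose dimensions equal the eigenvalue multiplicities.
import numpy as np

rng = np.random.default_rng(4)
Q, _ = np.linalg.qr(rng.standard_normal((3, 3)))   # random orthogonal matrix
A = Q @ np.diag([2.0, 2.0, 5.0]) @ Q.T             # eigenvalues 2, 2, 5

eigvals = np.linalg.eigvalsh(A)
for lam in np.unique(np.round(eigvals, 8)):
    dim = int(np.sum(np.isclose(eigvals, lam)))
    print(f"dim H_{lam} = {dim}")                  # 2 for lam = 2, 1 for lam = 5
```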


Cyclic vectors and simple spectrum

A vector \varphi is called a cyclic vector for A if the vectors \varphi, A\varphi, A^2\varphi, \ldots span a dense subspace of the Hilbert space. Suppose A is a bounded self-adjoint operator for which a cyclic vector exists. In that case, there is no distinction between the direct-integral and multiplication-operator formulations of the spectral theorem. Indeed, in that case, there is a measure \mu on the spectrum \sigma(A) of A such that A is unitarily equivalent to the "multiplication by \lambda" operator on L^2(\sigma(A), \mu). This result represents A simultaneously as a multiplication operator ''and'' as a direct integral, since L^2(\sigma(A), \mu) is just a direct integral in which each Hilbert space H_\lambda is just \mathbb{C}.

Not every bounded self-adjoint operator admits a cyclic vector; indeed, by the uniqueness in the direct integral decomposition, this can occur only when all the H_\lambda's have dimension one. When this happens, we say that A has "simple spectrum" in the sense of spectral multiplicity theory. That is, a bounded self-adjoint operator that admits a cyclic vector should be thought of as the infinite-dimensional generalization of a self-adjoint matrix with distinct eigenvalues (i.e., each eigenvalue has multiplicity one). Although not every A admits a cyclic vector, it is easy to see that we can decompose the Hilbert space as a direct sum of invariant subspaces on which A has a cyclic vector. This observation is the key to the proofs of the multiplication-operator and direct-integral forms of the spectral theorem.
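
In finite dimensions, whether \varphi is cyclic can be tested by the rank of the Krylov matrix [\varphi, A\varphi, A^2\varphi, \ldots]. A sketch, assuming NumPy; is_cyclic is a hypothetical helper introduced here for illustration:

```python
# Sketch, assuming NumPy: is_cyclic (a hypothetical helper) tests whether
# phi is a cyclic vector for A via the rank of the Krylov matrix.
import numpy as np

def is_cyclic(A, phi):
    n = A.shape[0]
    krylov = np.column_stack([np.linalg.matrix_power(A, k) @ phi
                              for k in range(n)])
    return np.linalg.matrix_rank(krylov) == n

# Distinct eigenvalues ("simple spectrum"): a generic vector is cyclic.
assert is_cyclic(np.diag([1.0, 2.0, 3.0]), np.ones(3))

# A repeated eigenvalue: no vector is cyclic.
rng = np.random.default_rng(5)
assert not is_cyclic(np.diag([1.0, 1.0, 3.0]), rng.standard_normal(3))
```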


Functional calculus

One important application of the spectral theorem (in whatever form) is the idea of defining a functional calculus. That is, given a function f defined on the spectrum of A, we wish to define an operator f(A). If f is simply a positive power, f(x) = x^n, then f(A) is just the n-th power of A, A^n. The interesting cases are where f is a nonpolynomial function such as a square root or an exponential. Either of the versions of the spectral theorem provides such a functional calculus. In the direct-integral version, for example, f(A) acts as the "multiplication by f" operator in the direct integral: (f(A) s)(\lambda) = f(\lambda) s(\lambda). That is to say, each space H_\lambda in the direct integral is a (generalized) eigenspace for f(A) with eigenvalue f(\lambda).
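
In finite dimensions the functional calculus amounts to applying f to the eigenvalues. A sketch, assuming NumPy; apply_function is a hypothetical helper introduced here for illustration:

```python
# Sketch, assuming NumPy: the finite-dimensional functional calculus
# f(A) = U diag(f(lambda)) U* for a Hermitian matrix A.
import numpy as np

def apply_function(A, f):
    eigvals, U = np.linalg.eigh(A)
    return U @ np.diag(f(eigvals)) @ U.conj().T

rng = np.random.default_rng(6)
B = rng.standard_normal((4, 4))
A = B @ B.T + 4 * np.eye(4)          # positive definite, so sqrt(A) makes sense

sqrt_A = apply_function(A, np.sqrt)
assert np.allclose(sqrt_A @ sqrt_A, A)   # a genuine square root of A

exp_A = apply_function(A, np.exp)        # agrees with the power-series exponential
```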


Unbounded self-adjoint operators

Many important linear operators which occur in analysis, such as differential operators, are unbounded. There is also a spectral theorem for self-adjoint operators that applies in these cases. To give an example, every constant-coefficient differential operator is unitarily equivalent to a multiplication operator. Indeed, the unitary operator that implements this equivalence is the Fourier transform; the multiplication operator is a type of Fourier multiplier. In general, the spectral theorem for self-adjoint operators may take several equivalent forms. Notably, all of the formulations given in the previous section for bounded self-adjoint operators (the projection-valued measure version, the multiplication-operator version, and the direct-integral version) continue to hold for unbounded self-adjoint operators, with small technical modifications to deal with domain issues. Specifically, the only reason the multiplication operator A on L^2([0,1]) is bounded is the choice of the domain [0,1]; the same operator on, e.g., L^2(\mathbb{R}) would be unbounded. The notion of "generalized eigenvectors" naturally extends to unbounded self-adjoint operators, as they are characterized as non-normalizable eigenvectors. Contrary to the case of almost eigenvectors, however, the eigenvalues can be real or complex and, even if they are real, do not necessarily belong to the spectrum. Nevertheless, for self-adjoint operators there always exists a real subset of "generalized eigenvalues" such that the corresponding set of eigenvectors is complete.
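
The claim about constant-coefficient operators has a discrete analogue that is easy to check. A sketch, assuming NumPy: the periodic second-difference operator is diagonalized by the discrete Fourier transform, becoming multiplication by the symbol 2\cos(2\pi k/n) - 2.

```python
# Sketch, assuming NumPy: the DFT diagonalizes the periodic second-difference
# operator, a discrete analogue of the Fourier transform turning d^2/dx^2
# into a multiplication operator.
import numpy as np

n = 64
rng = np.random.default_rng(7)
f = rng.standard_normal(n)

# Direct action: (Lf)_j = f_{j+1} - 2 f_j + f_{j-1}, indices mod n.
Lf = np.roll(f, -1) - 2 * f + np.roll(f, 1)

# Fourier side: multiply the DFT coefficients by the symbol, transform back.
k = np.arange(n)
symbol = 2 * np.cos(2 * np.pi * k / n) - 2
Lf_fourier = np.fft.ifft(symbol * np.fft.fft(f)).real

assert np.allclose(Lf, Lf_fourier)
```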


See also

* Spectral theory of compact operators
* Spectral theory of normal C*-algebras
* Borel functional calculus
* Spectral theory
* Matrix decomposition
* Canonical form
* Jordan decomposition, of which the spectral decomposition is a special case
* Singular value decomposition, a generalisation of the spectral theorem to arbitrary matrices
* Eigendecomposition of a matrix
* Wiener–Khinchin theorem




References

* Sheldon Axler, ''Linear Algebra Done Right'', Springer Verlag, 1997
* Paul Halmos, "What Does the Spectral Theorem Say?", ''American Mathematical Monthly'', volume 70, number 3 (1963), pages 241–247
* M. Reed and B. Simon, ''Methods of Mathematical Physics'', vols I–IV, Academic Press, 1972
* G. Teschl, ''Mathematical Methods in Quantum Mechanics with Applications to Schrödinger Operators'', American Mathematical Society, 2009. https://www.mat.univie.ac.at/~gerald/ftp/book-schroe/