Normal Matrix
In mathematics, a complex square matrix A is normal if it commutes with its conjugate transpose A^*: that is, A^* A = A A^*. The concept of normal matrices can be extended to normal operators on infinite-dimensional normed spaces and to normal elements in C*-algebras. As in the matrix case, normality means commutativity is preserved, to the extent possible, in the noncommutative setting. This makes normal operators, and normal elements of C*-algebras, more amenable to analysis. The spectral theorem states that a matrix is normal if and only if it is unitarily similar to a diagonal matrix, and therefore any matrix satisfying the equation A^* A = A A^* is diagonalizable. The converse does not hold, because diagonalizable matrices may have non-orthogonal eigenspaces. The left and right singular vectors in the singular value decomposition \mathbf{A} = \mathbf{U} \boldsymbol{\Sigma} \mathbf{V}^* of a normal matrix differ only in complex phase from each other and from the corresponding eigenvectors, since the phase must be factored out ...
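A minimal numerical sketch (assuming NumPy and SciPy are available; the rotation matrix is just an illustrative example) that checks the defining identity and illustrates the spectral theorem: for a normal matrix the complex Schur form is diagonal, so the columns of Q are orthonormal eigenvectors.

```python
import numpy as np
from scipy.linalg import schur

# A real rotation matrix is normal (it is orthogonal, so A* A = A A* = I).
A = np.array([[0.0, -1.0],
              [1.0,  0.0]])

# Normality check: A* A == A A*
print(np.allclose(A.conj().T @ A, A @ A.conj().T))   # True

# Complex Schur form A = Q T Q*: for a normal matrix, T is diagonal and the
# columns of Q form an orthonormal basis of eigenvectors.
T, Q = schur(A, output="complex")
print(np.allclose(T, np.diag(np.diag(T))))           # True: T is diagonal
print(np.diag(T))                                    # eigenvalues, here ±i
```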
Complex Number
In mathematics, a complex number is an element of a number system that extends the real numbers with a specific element denoted i, called the imaginary unit and satisfying the equation i^2 = -1; every complex number can be expressed in the form a + bi, where a and b are real numbers. Because no real number satisfies the above equation, i was called an imaginary number by René Descartes. For the complex number a + bi, a is called the real part, and b is called the imaginary part. The set of complex numbers is denoted by either of the symbols \mathbb{C} or C. Despite the historical nomenclature "imaginary", complex numbers are regarded in the mathematical sciences as just as "real" as the real numbers and are fundamental in many aspects of the scientific description of the natural world. Complex numbers allow solutions to all polynomial equations, even those that have no solutions in real numbers. More precisely, the fundamental theorem of algebra asserts that every non-constant polynomial equation with real or ...
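A short illustration using Python's built-in complex type, whose imaginary unit is written j:

```python
import cmath

i = 1j
print(i ** 2)            # (-1+0j): i^2 = -1

z = 3 + 4j               # a + bi with a = 3, b = 4
print(z.real, z.imag)    # 3.0 4.0 (real and imaginary parts)
print(abs(z))            # 5.0, the modulus sqrt(a^2 + b^2)

# x^2 + 1 = 0 has no real solution, but it has the complex solutions ±i.
print(cmath.sqrt(-1))    # 1j
```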
Skew-symmetric Matrix
In mathematics, particularly in linear algebra, a skew-symmetric (or antisymmetric or antimetric) matrix is a square matrix whose transpose equals its negative. That is, it satisfies the condition A^\textsf{T} = -A. In terms of the entries of the matrix, if a_{ij} denotes the entry in the i-th row and j-th column, then the skew-symmetric condition is equivalent to a_{ji} = -a_{ij} for all i and j.

Example: The matrix
:A = \begin{pmatrix} 0 & 2 & -45 \\ -2 & 0 & -4 \\ 45 & 4 & 0 \end{pmatrix}
is skew-symmetric because
: -A = \begin{pmatrix} 0 & -2 & 45 \\ 2 & 0 & 4 \\ -45 & -4 & 0 \end{pmatrix} = A^\textsf{T}.

Properties: Throughout, we assume that all matrix entries belong to a field \mathbb{F} whose characteristic is not equal to 2. That is, we assume that 1 + 1 \neq 0, where 1 denotes the multiplicative identity and 0 the additive identity of the given field. If the characteristic of the field is 2, then a skew-symmetric matrix is the same thing as a symmetric matrix.
* The sum of two skew-symmetric matrices is skew-symmetric.
* A scala ...
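A quick numerical check of the definition and of the closure-under-addition property, using NumPy and the example matrix above (the second matrix B is an arbitrary illustrative choice):

```python
import numpy as np

A = np.array([[  0,  2, -45],
              [ -2,  0,  -4],
              [ 45,  4,   0]])

# Skew-symmetric: A^T == -A (equivalently a_ji == -a_ij, so the diagonal is zero).
print(np.array_equal(A.T, -A))        # True
print(np.all(np.diag(A) == 0))        # True

# The sum of two skew-symmetric matrices is again skew-symmetric.
B = np.array([[ 0,  1,  0],
              [-1,  0,  7],
              [ 0, -7,  0]])
S = A + B
print(np.array_equal(S.T, -S))        # True
```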
Diagonalizable Matrix
In linear algebra, a square matrix A is called diagonalizable or non-defective if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P and a diagonal matrix D such that P^{-1} A P = D, or equivalently A = P D P^{-1}. (Such D are not unique.) For a finite-dimensional vector space V, a linear map T : V \to V is called diagonalizable if there exists an ordered basis of V consisting of eigenvectors of T. These definitions are equivalent: if T has a matrix representation T = P D P^{-1} as above, then the column vectors of P form a basis consisting of eigenvectors of T, and the diagonal entries of D are the corresponding eigenvalues of T; with respect to this eigenvector basis, T is represented by D. Diagonalization is the process of finding the above P and D. Diagonalizable matrices and maps are especially easy for computations, once their eigenvalues and eigenvectors are known. One can raise a diagonal matrix D to a power by simply raising the diagonal entries to that power, and the determi ...
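A small NumPy sketch (the matrix is an arbitrary example with distinct eigenvalues) showing the factorization A = P D P^{-1} and the cheap computation of a matrix power through it:

```python
import numpy as np

A = np.array([[4.0, 1.0],
              [2.0, 3.0]])

# Eigen-decomposition: columns of P are eigenvectors, w holds the eigenvalues.
w, P = np.linalg.eig(A)
D = np.diag(w)

# A = P D P^{-1}
print(np.allclose(A, P @ D @ np.linalg.inv(P)))        # True

# A^5 via the diagonal form: A^5 = P D^5 P^{-1}, raising only the diagonal entries.
A5 = P @ np.diag(w ** 5) @ np.linalg.inv(P)
print(np.allclose(A5, np.linalg.matrix_power(A, 5)))   # True
```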
Simultaneously Triangularizable
In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries ''above'' the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries ''below'' the main diagonal are zero. Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix ''L'' and an upper triangular matrix ''U'' if and only if all its leading principal minors are non-zero.

Description: A matrix of the form
:L = \begin{pmatrix} \ell_{1,1} & & & & 0 \\ \ell_{2,1} & \ell_{2,2} & & & \\ \ell_{3,1} & \ell_{3,2} & \ddots & & \\ \vdots & \vdots & \ddots & \ddots & \\ \ell_{n,1} & \ell_{n,2} & \ldots & \ell_{n,n-1} & \ell_{n,n} \end{pmatrix}
is called a lower triangular matrix or left triangular matrix, and a ...
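A brief sketch (assuming SciPy; the matrix and right-hand side are illustrative) of why triangular factors matter: after an LU factorization, a linear system is solved by two easy triangular solves.

```python
import numpy as np
from scipy.linalg import lu, solve_triangular

A = np.array([[4.0, 3.0],
              [6.0, 3.0]])
b = np.array([10.0, 12.0])

# PLU factorization with partial pivoting: A = P L U, L lower and U upper triangular.
P, L, U = lu(A)
print(np.allclose(A, P @ L @ U))    # True

# Triangular systems need only forward/back substitution.
y = solve_triangular(L, P.T @ b, lower=True)    # L y = P^T b
x = solve_triangular(U, y, lower=False)         # U x = y
print(np.allclose(A @ x, b))        # True, x = [1, 2]
```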
Commuting Matrices
In linear algebra, two matrices A and B are said to commute if AB = BA, or equivalently if their commutator [A, B] = AB - BA is zero. A set of matrices A_1, \ldots, A_k is said to commute if they commute pairwise, meaning that every pair of matrices in the set commutes with each other.

Characterizations and properties:
* Commuting matrices preserve each other's eigenspaces. As a consequence, commuting matrices over an algebraically closed field are simultaneously triangularizable; that is, there are bases over which they are both upper triangular. In other words, if A_1, \ldots, A_k commute, there exists a similarity matrix P such that P^{-1} A_i P is upper triangular for all i \in \{1, \ldots, k\}. The converse is not necessarily true, as the following counterexample shows:
*:\begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}\begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 1 & 3 \\ 0 & 3 \end{pmatrix} \ne \begin{pmatrix} 1 & 5 \\ 0 & 3 \end{pmatrix} = \begin{pmatrix} 1 & 1 \\ 0 & 1 \end{pmatrix}\begin{pmatrix} 1 & 2 \\ 0 & 3 \end{pmatrix}.
: However, if the square of the commutator of two matrices is zero, that ...
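The counterexample above is easy to verify numerically; a small NumPy sketch (the commuting pair at the end is an added illustration, any matrix commutes with its own powers):

```python
import numpy as np

A = np.array([[1, 2],
              [0, 3]])
B = np.array([[1, 1],
              [0, 1]])

# Commutator [A, B] = AB - BA; A and B commute iff it is the zero matrix.
print(A @ B)            # [[1 3] [0 3]]
print(B @ A)            # [[1 5] [0 3]]
print(A @ B - B @ A)    # [[ 0 -2] [ 0  0]]  -- nonzero, so A and B do not commute

# A pair that does commute: a matrix always commutes with its own powers.
C = A @ A
print(np.array_equal(A @ C, C @ A))   # True
```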
Simultaneously Diagonalizable
In linear algebra, a square matrix A is called diagonalizable or non-defective if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P and a diagonal matrix D such that P^{-1} A P = D, or equivalently A = P D P^{-1}. (Such D are not unique.) For a finite-dimensional vector space V, a linear map T : V \to V is called diagonalizable if there exists an ordered basis of V consisting of eigenvectors of T. These definitions are equivalent: if T has a matrix representation T = P D P^{-1} as above, then the column vectors of P form a basis consisting of eigenvectors of T, and the diagonal entries of D are the corresponding eigenvalues of T; with respect to this eigenvector basis, T is represented by D. Diagonalization is the process of finding the above P and D. Diagonalizable matrices and maps are especially easy for computations, once their eigenvalues and eigenvectors are known. One can raise a diagonal matrix D to a power by simply raising the diagonal entries to that power, and the determ ...
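Since this entry is reached under "Simultaneously Diagonalizable", here is a small NumPy sketch of the related fact that commuting symmetric matrices can be diagonalized in a common orthonormal basis (the matrices are illustrative; B is built as a polynomial in A so that the pair commutes):

```python
import numpy as np

# Two commuting symmetric matrices: B is a polynomial in A, so AB = BA.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
B = A @ A + 3 * np.eye(2)
print(np.allclose(A @ B, B @ A))               # True

# A has distinct eigenvalues, so its orthonormal eigenbasis also diagonalizes B.
w, Q = np.linalg.eigh(A)
DA = Q.T @ A @ Q
DB = Q.T @ B @ Q
print(np.allclose(DA, np.diag(np.diag(DA))))   # True: A is diagonal in this basis
print(np.allclose(DB, np.diag(np.diag(DB))))   # True: so is B
```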
Real Number
In mathematics, a real number is a number that can be used to measure a ''continuous'' one-dimensional quantity such as a distance, duration or temperature. Here, ''continuous'' means that values can have arbitrarily small variations. Every real number can be almost uniquely represented by an infinite decimal expansion. The real numbers are fundamental in calculus (and more generally in all mathematics), in particular by their role in the classical definitions of limits, continuity and derivatives. The set of real numbers is denoted R or \mathbb{R} and is sometimes called "the reals". The adjective ''real'' in this context was introduced in the 17th century by René Descartes to distinguish real numbers, associated with physical reality, from imaginary numbers (such as the square roots of -1), which seemed like a theoretical contrivance unrelated to physical reality. The real numbers include the rational numbers, such as the integers and fractions. The rest of the real number ...
Self-adjoint
In mathematics, and more specifically in abstract algebra, an element ''x'' of a *-algebra is self-adjoint if x^* = x. A self-adjoint element is also Hermitian, though the reverse doesn't necessarily hold. A collection ''C'' of elements of a star-algebra is self-adjoint if it is closed under the involution operation. For example, if x^* = y then since y^* = x^{**} = x in a star-algebra, the set \{x, y\} is a self-adjoint set even though ''x'' and ''y'' need not be self-adjoint elements. In functional analysis, a linear operator A : H \to H on a Hilbert space is called self-adjoint if it is equal to its own adjoint ''A''*. See self-adjoint operator for a detailed discussion. If the Hilbert space is finite-dimensional and an orthonormal basis has been chosen, then the operator ''A'' is self-adjoint if and only if the matrix describing ''A'' with respect to this basis is Hermitian, i.e. if it is equal to its own conjugate transpose. Hermitian matrices are also called self-adjoint. In a dagger categor ...
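A minimal finite-dimensional sketch with NumPy (the matrix is an arbitrary Hermitian example): self-adjointness as equality with the conjugate transpose, and the resulting real spectrum and orthonormal eigenbasis.

```python
import numpy as np

# A Hermitian (self-adjoint) matrix: it equals its own conjugate transpose.
A = np.array([[2.0,    1 - 1j],
              [1 + 1j, 3.0   ]])
print(np.allclose(A, A.conj().T))               # True

# Self-adjoint matrices have real eigenvalues and an orthonormal eigenbasis.
w, V = np.linalg.eigh(A)
print(w)                                        # real eigenvalues: [1. 4.]
print(np.allclose(V.conj().T @ V, np.eye(2)))   # True: eigenvectors are orthonormal
```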
Schur Decomposition
In the mathematical discipline of linear algebra, the Schur decomposition or Schur triangulation, named after Issai Schur, is a matrix decomposition. It allows one to write an arbitrary complex square matrix as unitarily equivalent to an upper triangular matrix whose diagonal elements are the eigenvalues of the original matrix.

Statement: The Schur decomposition reads as follows: if ''A'' is an n \times n square matrix with complex entries, then ''A'' can be expressed as
: A = Q U Q^{-1}
where ''Q'' is a unitary matrix (so that its inverse ''Q''^{-1} is also the conjugate transpose ''Q''* of ''Q''), and ''U'' is an upper triangular matrix, which is called a Schur form of ''A''. Since ''U'' is similar to ''A'', it has the same spectrum, and since it is triangular, its eigenvalues are the diagonal entries of ''U''. The Schur decomposition implies that there exists a nested sequence of ''A''-invariant subspaces \{0\} = V_0 \subset V_1 \subset \cdots \subset V_n = \mathbb{C}^n, and that there exists an or ...
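A short sketch of the statement in code (assuming SciPy; the non-symmetric matrix is an arbitrary example): the complex Schur factorization returns a unitary Q and an upper triangular U whose diagonal carries the eigenvalues.

```python
import numpy as np
from scipy.linalg import schur

# A non-symmetric matrix; it is still unitarily similar to an upper triangular matrix.
A = np.array([[0.0, 2.0, 2.0],
              [0.0, 1.0, 2.0],
              [1.0, 0.0, 1.0]])

U, Q = schur(A, output="complex")                 # A = Q U Q*, Q unitary, U upper triangular
print(np.allclose(A, Q @ U @ Q.conj().T))         # True
print(np.allclose(Q.conj().T @ Q, np.eye(3)))     # True: Q is unitary
print(np.diag(U))                                 # diagonal entries = eigenvalues of A
```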
Orthogonal
In mathematics, orthogonality is the generalization of the geometric notion of ''perpendicularity''. By extension, orthogonality is also used to refer to the separation of specific features of a system. The term also has specialized meanings in other fields including art and chemistry.

Etymology: The word comes from the Ancient Greek ''orthós'', meaning "upright", and ''gōnía'', meaning "angle". The Ancient Greek ''orthogṓnion'' and Classical Latin ''orthogonium'' originally denoted a rectangle. Later, they came to mean a right triangle. In the 12th century, the post-classical Latin word ''orthogonalis'' came to mean a right angle or something related to a right angle.

Physics:
* In optics, polarization states are said to be orthogonal when they propagate independently of each other, as in vertical and horizontal linear polarization or right- and left-handed circular polarization.
* In special relativity, a time axis determined by a rapidity of motion is hyperbolic-orthogonal to a space axis of s ...
Eigenspace
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes at most by a scalar factor when that linear transformation is applied to it. The corresponding eigenvalue, often denoted by \lambda, is the factor by which the eigenvector is scaled. Geometrically, an eigenvector, corresponding to a real nonzero eigenvalue, points in a direction in which it is stretched by the transformation and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. Loosely speaking, in a multidimensional vector space, the eigenvector is not rotated.

Formal definition: If T is a linear transformation from a vector space V over a field F into itself and \mathbf{v} is a nonzero vector in V, then \mathbf{v} is an eigenvector of T if T(\mathbf{v}) is a scalar multiple of \mathbf{v}. This can be written as T(\mathbf{v}) = \lambda \mathbf{v}, where \lambda is a scalar in F, known as the eigenvalue, characteristic value, or characteristic root ass ...
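A tiny NumPy check of the defining relation A v = λ v for each eigenpair (the matrix is an arbitrary symmetric example):

```python
import numpy as np

A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# Eigenvalues w and eigenvectors (columns of V): A @ v == lam * v for each pair.
w, V = np.linalg.eig(A)
for lam, v in zip(w, V.T):
    print(np.allclose(A @ v, lam * v))   # True for every eigenpair
print(w)                                 # e.g. [3. 1.]
```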
Orthonormal Basis
In mathematics, particularly linear algebra, an orthonormal basis for an inner product space ''V'' with finite dimension is a basis for V whose vectors are orthonormal, that is, they are all unit vectors and orthogonal to each other. For example, the standard basis for a Euclidean space \R^n is an orthonormal basis, where the relevant inner product is the dot product of vectors. The image of the standard basis under a rotation or reflection (or any orthogonal transformation) is also orthonormal, and every orthonormal basis for \R^n arises in this fashion. For a general inner product space V, an orthonormal basis can be used to define normalized orthogonal coordinates on V. Under these coordinates, the inner product becomes a dot product of vectors. Thus the presence of an orthonormal basis reduces the study of a finite-dimensional inner product space to the study of \R^n under dot product. Every finite-dimensional inner product space has an orthonormal basis, which may be ob ...
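A brief sketch with NumPy (the starting vectors are arbitrary linearly independent examples): QR factorization orthonormalizes a basis, and in the resulting coordinates the inner product is the plain dot product.

```python
import numpy as np

# Three linearly independent vectors in R^3, taken as the columns of A.
A = np.array([[1.0, 1.0, 0.0],
              [1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])

# QR factorization orthonormalizes the columns (a stable form of Gram-Schmidt).
Q, R = np.linalg.qr(A)
print(np.allclose(Q.T @ Q, np.eye(3)))   # True: columns of Q are orthonormal

# In orthonormal coordinates, the inner product is the ordinary dot product.
u, v = np.array([1.0, 2.0, 3.0]), np.array([4.0, 5.0, 6.0])
cu, cv = Q.T @ u, Q.T @ v                # coordinates of u, v in the basis Q
print(np.isclose(u @ v, cu @ cv))        # True
```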