Householder Transformation
In linear algebra, a Householder transformation (also known as a Householder reflection or elementary reflector) is a linear transformation that describes a reflection about a plane or hyperplane containing the origin. The Householder transformation was used in a 1958 paper by Alston Scott Householder. Its analogue over general inner product spaces is the Householder operator.

Definition: Transformation. The reflection hyperplane can be defined by its ''normal vector'', a unit vector v (a vector with length 1) that is orthogonal to the hyperplane. The reflection of a point x about this hyperplane is the linear transformation:
:x - 2\langle x, v\rangle v = x - 2v\left(v^\textsf{H} x\right),
where v is given as a column unit vector with Hermitian transpose v^\textsf{H}.

Householder matrix. The matrix constructed from this transformation can be expressed in terms of an outer product as:
:P = I - 2vv^\textsf{H}
and is known as the Householder matrix, where I is the identity matrix. Prope ...
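As an illustration (a sketch, not part of the original article; the helper name householder_matrix is hypothetical), here is the Householder matrix in Python/NumPy for the real case, where v^H = v^T:

```python
import numpy as np

def householder_matrix(v):
    """Build the Householder matrix P = I - 2 v v^T for a real unit vector v."""
    v = np.asarray(v, dtype=float).reshape(-1, 1)   # column vector
    v = v / np.linalg.norm(v)                       # ensure unit length
    return np.eye(v.shape[0]) - 2.0 * (v @ v.T)     # real case: v^H = v^T

# Reflect x across the hyperplane orthogonal to v.
v = np.array([1.0, 0.0, 0.0])
x = np.array([3.0, 4.0, 5.0])
P = householder_matrix(v)
print(P @ x)   # [-3. 4. 5.]: only the component along v flips sign
```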


Linear Algebra
Linear algebra is the branch of mathematics concerning linear equations such as
:a_1x_1+\cdots +a_nx_n=b,
linear maps such as
:(x_1, \ldots, x_n) \mapsto a_1x_1+\cdots +a_nx_n,
and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to spaces of functions. Linear algebra is also used in most sciences and fields of engineering, because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the line ...
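As a small illustrative sketch (not from the article), the linear map above is a dot product in NumPy, and a system of such equations becomes a matrix equation A x = b:

```python
import numpy as np

a = np.array([2.0, -1.0, 3.0])   # coefficients a_1, ..., a_n
x = np.array([1.0, 4.0, 2.0])    # point (x_1, ..., x_n)
print(a @ x)                     # the linear map's value: 2*1 - 1*4 + 3*2 = 4

# A system of linear equations a_i . x = b_i is the matrix equation A x = b.
A = np.array([[2.0, -1.0],
              [1.0,  1.0]])
b = np.array([0.0, 3.0])
print(np.linalg.solve(A, b))     # [1. 2.] solves 2x - y = 0, x + y = 3
```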


Involutory Matrix
In mathematics, an involutory matrix is a square matrix that is its own inverse. That is, multiplication by the matrix A is an involution if and only if A^2 = I, where I is the ''n'' × ''n'' identity matrix. Involutory matrices are all square roots of the identity matrix. This is simply a consequence of the fact that any nonsingular matrix multiplied by its inverse is the identity.

Examples. The 2 × 2 real matrix
:\begin{pmatrix} a & b \\ c & -a \end{pmatrix}
is involutory provided that a^2 + bc = 1. The Pauli matrices in M(2, C) are involutory:
:\begin{align} \sigma_1 = \sigma_x &= \begin{pmatrix} 0 & 1 \\ 1 & 0 \end{pmatrix}, \\ \sigma_2 = \sigma_y &= \begin{pmatrix} 0 & -i \\ i & 0 \end{pmatrix}, \\ \sigma_3 = \sigma_z &= \begin{pmatrix} 1 & 0 \\ 0 & -1 \end{pmatrix}. \end{align}
One of the three classes of elementary matrix is involutory, namely the row-interchange elementary matrix. A special case of another class of elementary matrix, that which re ...
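A quick numeric check in Python/NumPy (illustrative, not from the article) that the Pauli matrices and the 2 × 2 family above square to the identity:

```python
import numpy as np

# Pauli matrices: each is its own inverse, so sigma @ sigma = I.
sigma_x = np.array([[0, 1], [1, 0]], dtype=complex)
sigma_y = np.array([[0, -1j], [1j, 0]])
sigma_z = np.array([[1, 0], [0, -1]], dtype=complex)
for sigma in (sigma_x, sigma_y, sigma_z):
    assert np.allclose(sigma @ sigma, np.eye(2))

# The real family [[a, b], [c, -a]] is involutory when a^2 + bc = 1.
a, b = 3.0, 4.0
c = (1 - a**2) / b               # choose c so that a^2 + bc = 1
M = np.array([[a, b], [c, -a]])
assert np.allclose(M @ M, np.eye(2))
print("all involutory checks passed")
```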


Givens Rotation
In numerical linear algebra, a Givens rotation is a rotation in the plane spanned by two coordinate axes. Givens rotations are named after Wallace Givens, who introduced them to numerical analysts in the 1950s while he was working at Argonne National Laboratory.

Matrix representation. A Givens rotation is represented by a matrix of the form
:G(i, j, \theta) = \begin{pmatrix} 1 & \cdots & 0 & \cdots & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & & \vdots & & \vdots \\ 0 & \cdots & c & \cdots & -s & \cdots & 0 \\ \vdots & & \vdots & \ddots & \vdots & & \vdots \\ 0 & \cdots & s & \cdots & c & \cdots & 0 \\ \vdots & & \vdots & & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & \cdots & 0 & \cdots & 1 \end{pmatrix},
where c = \cos\theta and s = \sin\theta appear at the int ...
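A sketch in Python/NumPy (the helper name givens and its 0-based index convention are choices made here, not the article's) building G(i, j, θ) and choosing θ to zero one component of a vector, the standard use of Givens rotations:

```python
import numpy as np

def givens(n, i, j, theta):
    """Return the n x n Givens rotation G(i, j, theta), 0-based indices."""
    G = np.eye(n)
    c, s = np.cos(theta), np.sin(theta)
    G[i, i], G[j, j] = c, c
    G[i, j], G[j, i] = -s, s          # -s above the diagonal, s below
    return G

# Choose theta so the inverse rotation zeroes the second entry of x.
x = np.array([3.0, 4.0])
theta = np.arctan2(x[1], x[0])
G = givens(2, 0, 1, theta)
print(G.T @ x)   # approximately [5. 0.]: the norm moves into the first entry
```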


Orthogonal Matrix
In linear algebra, an orthogonal matrix, or orthonormal matrix, is a real square matrix whose columns and rows are orthonormal vectors. One way to express this is
:Q^\mathrm{T} Q = Q Q^\mathrm{T} = I,
where Q^\mathrm{T} is the transpose of Q and I is the identity matrix. This leads to the equivalent characterization: a matrix Q is orthogonal if its transpose is equal to its inverse:
:Q^\mathrm{T} = Q^{-1},
where Q^{-1} is the inverse of Q. An orthogonal matrix Q is necessarily invertible (with inverse Q^{-1} = Q^\mathrm{T}), unitary (Q^{-1} = Q^*), where Q^* is the Hermitian adjoint (conjugate transpose) of Q, and therefore normal (Q^* Q = Q Q^*) over the real numbers. The determinant of any orthogonal matrix is either +1 or −1. As a linear transformation, an orthogonal matrix preserves the inner product of vectors, and therefore acts as an isometry of Euclidean space, such as a rotation, reflection or rotoreflection. In other words, it is a unitary transformation. The set of orthogonal matrices, under multiplication, forms the group O(n), known as the o ...
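For illustration (a sketch, not part of the article), a plane rotation verifies the defining identities in NumPy:

```python
import numpy as np

theta = 0.7
Q = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])   # a plane rotation

# Q^T Q = Q Q^T = I, so the transpose is the inverse.
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(Q @ Q.T, np.eye(2))
print(np.linalg.det(Q))   # 1.0 for a rotation; a reflection would give -1.0

# Isometry: inner products (hence lengths and angles) are preserved.
x, y = np.array([1.0, 2.0]), np.array([-3.0, 0.5])
assert np.isclose((Q @ x) @ (Q @ y), x @ y)
```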






Inequality Of Arithmetic And Geometric Means
In mathematics, the inequality of arithmetic and geometric means, or more briefly the AM–GM inequality, states that the arithmetic mean of a list of non-negative real numbers is greater than or equal to the geometric mean of the same list; and further, that the two means are equal if and only if every number in the list is the same (in which case they are both that number). The simplest non-trivial case – i.e., with more than one variable – for two non-negative numbers x and y, is the statement that
:\frac{x+y}{2} \ge \sqrt{xy}
with equality if and only if x = y. This case can be seen from the fact that the square of a real number is always non-negative (greater than or equal to zero) and from the elementary case of the binomial formula:
:\begin{align} 0 & \le (x-y)^2 \\ & = x^2-2xy+y^2 \\ & = x^2+2xy+y^2 - 4xy \\ & = (x+y)^2 - 4xy. \end{align}
Hence (x+y)^2 \ge 4xy, with equality precisely when (x-y)^2 = 0, i.e. x = y. The AM–GM inequality then follows from taking the positive square root of both sides and then dividing both ...
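A short numeric check of the two-variable case (illustrative only; the sample values are made up):

```python
import math

# Check (x + y)/2 >= sqrt(x*y) for a few non-negative pairs.
for x, y in [(1.0, 9.0), (4.0, 4.0), (0.5, 2.0)]:
    am = (x + y) / 2
    gm = math.sqrt(x * y)
    assert am >= gm
    print(f"x={x}, y={y}: AM={am:.4f}, GM={gm:.4f}, equal={math.isclose(am, gm)}")
# Equality holds exactly when x == y (the (4.0, 4.0) pair).
```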


Unitary Transformation
In mathematics, a unitary transformation is a transformation that preserves the inner product: the inner product of two vectors before the transformation is equal to their inner product after the transformation.

Formal definition. More precisely, a unitary transformation is an isomorphism between two inner product spaces (such as Hilbert spaces). In other words, a ''unitary transformation'' is a bijective function U : H_1 \to H_2 between two inner product spaces, H_1 and H_2, such that
:\langle Ux, Uy \rangle_{H_2} = \langle x, y \rangle_{H_1} \quad \text{for all } x, y \in H_1.

Properties. A unitary transformation is an isometry, as one can see by setting x = y in this formula.

Unitary operator. In the case when H_1 and H_2 are the same space, a unitary transformation is an automorphism of that Hilbert space, and then it is also called a unitary operator.

Antiunitary transformation. A closely related notion is that of an antiunitary transformation, which is a bijective function
:U : H_1 \to H_2
between t ...
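An illustrative check in NumPy (the particular matrix U is a choice made here) that a unitary matrix, viewed as a transformation of C^2, preserves complex inner products:

```python
import numpy as np

# A unitary matrix acting on C^2: U* U = I.
U = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)
assert np.allclose(U.conj().T @ U, np.eye(2))

x = np.array([1.0 + 2.0j, -1.0j])
y = np.array([0.5 + 0.0j, 3.0 - 1.0j])
inner = lambda a, b: np.vdot(a, b)   # <a, b>, conjugating the first argument

# <Ux, Uy> = <x, y>: the inner product is preserved.
assert np.isclose(inner(U @ x, U @ y), inner(x, y))
print("unitary: inner product preserved")
```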


Minor (linear Algebra)
In linear algebra, a minor of a matrix A is the determinant of some smaller square matrix, cut down from A by removing one or more of its rows and columns. Minors obtained by removing just one row and one column from square matrices (first minors) are required for calculating matrix cofactors, which in turn are useful for computing both the determinant and inverse of square matrices.

Definition and illustration: First minors. If A is a square matrix, then the ''minor'' of the entry in the ''i''th row and ''j''th column (also called the (''i'', ''j'') ''minor'', or a ''first minor'') is the determinant of the submatrix formed by deleting the ''i''th row and ''j''th column. This number is often denoted M_{i,j}. The (''i'', ''j'') ''cofactor'' is obtained by multiplying the minor by (-1)^{i+j}. To illustrate these definitions, consider the following 3 by 3 matrix,
:\begin{pmatrix} 1 & 4 & 7 \\ 3 & 0 & 5 \\ -1 & 9 & 11 \end{pmatrix}
To compute the minor M_{2,3} and the cofactor C_{2,3}, we fin ...
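A short Python/NumPy sketch (the helper minor is hypothetical) computing the minor M_{2,3} and cofactor C_{2,3} of the matrix above:

```python
import numpy as np

A = np.array([[ 1, 4,  7],
              [ 3, 0,  5],
              [-1, 9, 11]])

def minor(A, i, j):
    """Determinant of A with row i and column j removed (1-based indices)."""
    sub = np.delete(np.delete(A, i - 1, axis=0), j - 1, axis=1)
    return np.linalg.det(sub)

M23 = minor(A, 2, 3)            # delete row 2 and column 3: det [[1, 4], [-1, 9]]
C23 = (-1) ** (2 + 3) * M23     # cofactor: sign factor times the minor
print(M23, C23)                 # 13.0 -13.0
```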


Tridiagonal
In linear algebra, a tridiagonal matrix is a band matrix that has nonzero elements only on the main diagonal, the subdiagonal/lower diagonal (the first diagonal below this), and the supradiagonal/upper diagonal (the first diagonal above the main diagonal). For example, the following matrix is tridiagonal:
:\begin{pmatrix} 1 & 4 & 0 & 0 \\ 3 & 4 & 1 & 0 \\ 0 & 2 & 3 & 4 \\ 0 & 0 & 1 & 3 \end{pmatrix}.
The determinant of a tridiagonal matrix is given by the ''continuant'' of its elements. An orthogonal transformation of a symmetric (or Hermitian) matrix to tridiagonal form can be done with the Lanczos algorithm.

Properties. A tridiagonal matrix is a matrix that is both upper and lower Hessenberg. In particular, a tridiagonal matrix is a direct sum of ''p'' 1-by-1 and ''q'' 2-by-2 matrices such that ''p'' + 2''q'' = ''n'', the dimension of the tridiagonal matrix. Although a general tridiagonal matrix is not necessarily symmetric or Hermitian, many of those that arise when solving linear algebra problems hav ...
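An illustrative sketch (the helper tridiag_det is hypothetical) of the continuant recurrence f_k = a_k f_{k-1} - b_{k-1} c_{k-1} f_{k-2} for the determinant, checked against the 4 × 4 example above:

```python
import numpy as np

def tridiag_det(a, b, c):
    """Determinant via the continuant recurrence.

    a: main diagonal (length n); b: subdiagonal, c: superdiagonal (length n-1).
    """
    f_prev, f = 1.0, float(a[0])
    for k in range(1, len(a)):
        f_prev, f = f, a[k] * f - b[k - 1] * c[k - 1] * f_prev
    return f

# The example matrix: diagonal [1,4,3,3], subdiagonal [3,2,1], superdiagonal [4,1,4].
a, b, c = [1, 4, 3, 3], [3, 2, 1], [4, 1, 4]
A = np.diag(a) + np.diag(b, -1) + np.diag(c, 1)
print(tridiag_det(a, b, c), np.linalg.det(A))   # both give the same value
```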




Hessenberg Matrix
In linear algebra, a Hessenberg matrix is a special kind of square matrix, one that is "almost" triangular. To be exact, an upper Hessenberg matrix has zero entries below the first subdiagonal, and a lower Hessenberg matrix has zero entries above the first superdiagonal. They are named after Karl Hessenberg.

Definitions: Upper Hessenberg matrix. A square n \times n matrix A is said to be in upper Hessenberg form or to be an upper Hessenberg matrix if a_{ij} = 0 for all i, j with i > j+1. An upper Hessenberg matrix is called unreduced if all subdiagonal entries are nonzero, i.e. if a_{i+1,i} \neq 0 for all i \in \{1, \ldots, n-1\}.

Lower Hessenberg matrix. A square n \times n matrix A is said to be in lower Hessenberg form or to be a lower Hessenberg matrix if its transpose is an upper Hessenberg matrix or, equivalently, if a_{ij} = 0 for all i, j with j > i+1. A lower Hessenberg matrix is called unreduced if all superdiagonal entries are nonzero, i.e. if a_{i,i+1} \neq 0 for all i \in \{1, \ldots, n-1\}.

Examples. Consider the following matric ...
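A small illustrative predicate (not from the article) testing the upper Hessenberg condition a_{ij} = 0 for i > j + 1; the condition reads the same with 0-based indices:

```python
import numpy as np

def is_upper_hessenberg(A):
    """True if A[i, j] == 0 whenever i > j + 1 (zero below the first subdiagonal)."""
    n = A.shape[0]
    return all(A[i, j] == 0 for i in range(n) for j in range(n) if i > j + 1)

H = np.array([[1, 4, 2, 3],
              [3, 4, 1, 7],
              [0, 2, 3, 4],
              [0, 0, 1, 3]])
print(is_upper_hessenberg(H))     # True
print(is_upper_hessenberg(H.T))   # False: H.T is lower Hessenberg instead
```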


QR Algorithm
In numerical linear algebra, the QR algorithm or QR iteration is an eigenvalue algorithm: that is, a procedure to calculate the eigenvalues and eigenvectors of a matrix. The QR algorithm was developed in the late 1950s by John G. F. Francis and by Vera N. Kublanovskaya, working independently. The basic idea is to perform a QR decomposition, writing the matrix as a product of an orthogonal matrix and an upper triangular matrix, multiply the factors in the reverse order, and iterate.

The practical QR algorithm. Formally, let ''A'' be a real matrix of which we want to compute the eigenvalues, and let A_0 := A. At the ''k''-th step (starting with ''k'' = 0), we compute the QR decomposition A_k = Q_k R_k, where Q_k is an orthogonal matrix (i.e., Q_k^\mathrm{T} = Q_k^{-1}) and R_k is an upper triangular matrix. We then form A_{k+1} = R_k Q_k. Note that
:A_{k+1} = R_k Q_k = Q_k^{-1} Q_k R_k Q_k = Q_k^{-1} A_k Q_k = Q_k^\mathrm{T} A_k Q_k,
so all the A_k ...
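A minimal sketch of the unshifted iteration in Python/NumPy (practical implementations add shifts and deflation; this simplified version is for illustration only):

```python
import numpy as np

def qr_algorithm(A, iterations=200):
    """Unshifted QR iteration. For many matrices A_k converges toward
    (quasi-)triangular form, with eigenvalues appearing on the diagonal."""
    Ak = np.array(A, dtype=float)
    for _ in range(iterations):
        Q, R = np.linalg.qr(Ak)   # A_k = Q_k R_k
        Ak = R @ Q                # A_{k+1} = R_k Q_k = Q_k^T A_k Q_k
    return Ak

A = np.array([[4.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])   # symmetric, so the eigenvalues are real
print(np.diag(qr_algorithm(A)))  # compare with np.linalg.eigvalsh(A)
```

Because each step is a similarity transformation by the orthogonal factor Q_k, every A_k has the same eigenvalues as A.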


picture info

QR Decomposition
In linear algebra, a QR decomposition, also known as a QR factorization or QU factorization, is a decomposition of a matrix ''A'' into a product ''A'' = ''QR'' of an orthogonal matrix ''Q'' and an upper triangular matrix ''R''. QR decomposition is often used to solve the linear least squares problem and is the basis for a particular eigenvalue algorithm, the QR algorithm.

Cases and definitions: Square matrix. Any real square matrix ''A'' may be decomposed as
:A = QR,
where ''Q'' is an orthogonal matrix (its columns are orthogonal unit vectors, meaning Q^\mathrm{T} Q = Q Q^\mathrm{T} = I) and ''R'' is an upper triangular matrix (also called right triangular matrix). If ''A'' is invertible, then the factorization is unique if we require the diagonal elements of ''R'' to be positive. If instead ''A'' is a complex square matrix, then there is a decomposition ''A'' = ''QR'' where ''Q'' is a unitary matrix (so Q^* Q = I). If ''A'' has ''n'' linearly independent columns, then the first ''n'' columns of ''Q'' for ...
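An illustrative use of NumPy's QR factorization to solve a small least squares problem (the data points here are made up): since Q has orthonormal columns, minimizing ||Ax - b|| reduces to the triangular system R x = Q^T b.

```python
import numpy as np

# Fit a line y = c0 + c1 * t to three points via least squares.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

Q, R = np.linalg.qr(A)              # reduced QR: A = QR, Q has orthonormal columns
assert np.allclose(Q.T @ Q, np.eye(2))
assert np.allclose(A, Q @ R)

x = np.linalg.solve(R, Q.T @ b)     # solve R x = Q^T b
print(x)                            # ~[0.6667 0.5], matching np.linalg.lstsq
```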