Matrix Transpose
In linear algebra, the transpose of a matrix is an operator which flips a matrix over its diagonal; that is, it switches the row and column indices of the matrix, producing another matrix, often denoted by \mathbf{A}^\operatorname{T} (among other notations). The transpose of a matrix was introduced in 1858 by the British mathematician Arthur Cayley.

Definition

The transpose of a matrix \mathbf{A}, denoted by \mathbf{A}^\operatorname{T} (also \mathbf{A}', \mathbf{A}^\operatorname{tr}, or {}^\operatorname{t}\mathbf{A}), may be constructed by any one of the following methods:
# Reflect \mathbf{A} over its main diagonal (which runs from top-left to bottom-right) to obtain \mathbf{A}^\operatorname{T}
# Write the rows of \mathbf{A} as the columns of \mathbf{A}^\operatorname{T}
# Write the columns of \mathbf{A} as the rows of \mathbf{A}^\operatorname{T}
Formally, the i-th row, j-th column element of \mathbf{A}^\operatorname{T} is the j-th row, i-th column element of \mathbf{A}:
:\left[\mathbf{A}^\operatorname{T}\right]_{ij} = \left[\mathbf{A}\right]_{ji}.
If \mathbf{A} is an m \times n matrix, then \mathbf{A}^\operatorname{T} is an n \times m matrix. In the case of square matrices, \mathbf{A}^\operatorname{T} may also denote the T-th power of the matrix \mathbf{A}. To avoid possible confusion, many authors use left upperscripts, t ...
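A minimal NumPy sketch (illustrative, not from the article) showing that the transpose swaps row and column indices and flips the shape from m × n to n × m:

```python
import numpy as np

# A 2x3 matrix; its transpose is 3x2.
A = np.array([[1, 2, 3],
              [4, 5, 6]])
At = A.T

print(A.shape, At.shape)          # (2, 3) (3, 2)

# [A^T]_{ij} = [A]_{ji} for every valid index pair.
assert all(At[i, j] == A[j, i]
           for i in range(At.shape[0])
           for j in range(At.shape[1]))
```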




Transpose Of A Linear Map
In linear algebra, the transpose of a linear map between two vector spaces, defined over the same field, is an induced map between the dual spaces of the two vector spaces. The transpose or algebraic adjoint of a linear map is often used to study the original linear map. This concept is generalised by adjoint functors.

Definition

Let X^{\#} denote the algebraic dual space of a vector space X. Let X and Y be vector spaces over the same field \mathcal{F}. If u : X \to Y is a linear map, then its algebraic adjoint or dual is the map {}^{\#}u : Y^{\#} \to X^{\#} defined by f \mapsto f \circ u. The resulting functional {}^{\#}u(f) := f \circ u is called the pullback of f by u. The continuous dual space of a topological vector space (TVS) X is denoted by X'. If X and Y are TVSs then a linear map u : X \to Y is weakly continuous if and only if {}^{\#}u\left(Y'\right) \subseteq X', in which case we let {}^{t}u : Y' \to X' denote the restriction of {}^{\#}u to Y'. The map {}^{t}u is called the tra ...
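A small finite-dimensional sketch (an assumption for illustration): if u is given by a matrix A, so u(x) = Ax, and a functional f on the codomain is represented by a row vector w, then the pullback f ∘ u is represented by wA, equivalently by A^T applied to w written as a column:

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 3.]])      # u : R^3 -> R^2, u(x) = A x
w = np.array([2., -1.])           # functional f on R^2: f(y) = w . y
x = np.array([1., 4., -2.])

# Pullback: (f o u)(x) = w . (A x) = (A^T w) . x
assert np.isclose(w @ (A @ x), (A.T @ w) @ x)
```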


Invariant Factors
The invariant factors of a module over a principal ideal domain (PID) occur in one form of the structure theorem for finitely generated modules over a principal ideal domain. If R is a PID and M a finitely generated R-module, then
:M\cong R^r\oplus R/(a_1)\oplus R/(a_2)\oplus\cdots\oplus R/(a_m)
for some integer r\geq 0 and a (possibly empty) list of nonzero elements a_1,\ldots,a_m\in R for which a_1 \mid a_2 \mid \cdots \mid a_m. The nonnegative integer r is called the ''free rank'' or ''Betti number'' of the module M, while a_1,\ldots,a_m are the ''invariant factors'' of M and are unique up to associatedness. The invariant factors of a matrix over a PID occur in the Smith normal form and provide a means of computing the struct ...
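As a rough illustration over the PID Z, the invariant factors can be read off the diagonal of the Smith normal form. This sketch assumes SymPy's smith_normal_form helper (available in recent SymPy versions) and an arbitrary example matrix:

```python
from sympy import Matrix, ZZ
from sympy.matrices.normalforms import smith_normal_form

M = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, 4, 16]])

# The diagonal entries of the Smith normal form are the invariant
# factors a_1 | a_2 | ... (up to units), here over the PID Z.
print(smith_normal_form(M, domain=ZZ))
```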


Matrix Similarity
In linear algebra, two ''n''-by-''n'' matrices A and B are called similar if there exists an invertible ''n''-by-''n'' matrix P such that
:B = P^{-1} A P .
Similar matrices represent the same linear map under two possibly different bases, with P being the change-of-basis matrix. A transformation A \mapsto P^{-1} A P is called a similarity transformation or conjugation of the matrix A. In the general linear group, similarity is therefore the same as conjugacy, and similar matrices are also called conjugate; however, in a given subgroup H of the general linear group, the notion of conjugacy may be more restrictive than similarity, since it requires that P be chosen to lie in H.

Motivating example

When defining a linear transformation, it can be the case that a change of basis can result in a simpler form of the same transformation. For example, the matrix representing a rotation in \mathbb{R}^3 when the axis of rotation is not aligned with the coordinate axis can be complicated to compute. If the axis of rotation were ...
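A quick NumPy check (illustrative) that B = P^{-1} A P shares the eigenvalues of A, as similar matrices must:

```python
import numpy as np

A = np.array([[2., 1.],
              [0., 3.]])          # eigenvalues 2 and 3
P = np.array([[1., 1.],
              [0., 1.]])          # any invertible P
B = np.linalg.inv(P) @ A @ P      # B is similar to A

assert np.allclose(np.sort(np.linalg.eigvals(A)),
                   np.sort(np.linalg.eigvals(B)))
```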


Characteristic Polynomial
In linear algebra, the characteristic polynomial of a square matrix is a polynomial which is invariant under matrix similarity and has the eigenvalues as roots. It has the determinant and the trace of the matrix among its coefficients. The characteristic polynomial of an endomorphism of a finite-dimensional vector space is the characteristic polynomial of the matrix of that endomorphism over any basis (that is, the characteristic polynomial does not depend on the choice of a basis). The characteristic equation, also known as the determinantal equation, is the equation obtained by equating the characteristic polynomial to zero. In spectral graph theory, the characteristic polynomial of a graph is the characteristic polynomial of its adjacency matrix.

Motivation

In linear algebra, eigenvalues and eigenvectors play a fundamental role, since, given a linear transformation, an eigenvector is a vector whose direction is not changed by the transformation, and the correspondi ...
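An illustrative NumPy check: np.poly returns the monic characteristic-polynomial coefficients of a square matrix, in which the trace and determinant appear as stated (for a 2 × 2 matrix, p(λ) = λ² − tr(A)λ + det(A)):

```python
import numpy as np

A = np.array([[2., 1.],
              [1., 3.]])

coeffs = np.poly(A)               # [1, -tr(A), det(A)] for a 2x2 matrix
assert np.isclose(coeffs[1], -np.trace(A))
assert np.isclose(coeffs[2], np.linalg.det(A))

# The roots of the characteristic polynomial are the eigenvalues.
assert np.allclose(np.sort(np.roots(coeffs)),
                   np.sort(np.linalg.eigvals(A)))
```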



Eigenvalue, Eigenvector And Eigenspace
In linear algebra, an eigenvector or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector \mathbf v of a linear transformation T is scaled by a constant factor \lambda when the linear transformation is applied to it: T\mathbf v=\lambda \mathbf v. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor \lambda (possibly a negative or complex number).

Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed. The e ...
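A minimal NumPy sketch verifying T v = λ v for each eigenpair of a small example matrix (illustrative only):

```python
import numpy as np

A = np.array([[2., 0.],
              [1., 3.]])

eigenvalues, eigenvectors = np.linalg.eig(A)  # eigenvectors are columns

for k in range(len(eigenvalues)):
    v, lam = eigenvectors[:, k], eigenvalues[k]
    assert np.allclose(A @ v, lam * v)        # A v = lambda v
```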




Positive-semidefinite Matrix
In mathematics, a symmetric matrix M with real entries is positive-definite if the real number \mathbf{x}^\mathsf{T} M \mathbf{x} is positive for every nonzero real column vector \mathbf{x}, where \mathbf{x}^\mathsf{T} is the row vector transpose of \mathbf{x}. More generally, a Hermitian matrix (that is, a complex matrix equal to its conjugate transpose) is positive-definite if the real number \mathbf{z}^* M \mathbf{z} is positive for every nonzero complex column vector \mathbf{z}, where \mathbf{z}^* denotes the conjugate transpose of \mathbf{z}. Positive semi-definite matrices are defined similarly, except that the scalars \mathbf{x}^\mathsf{T} M \mathbf{x} and \mathbf{z}^* M \mathbf{z} are required to be positive ''or zero'' (that is, nonnegative). Negative-definite and negative semi-definite matrices are defined analogously. A matrix that is not positive semi-definite and not negative semi-definite is sometimes called ''indefinite''. Some authors use more general definitions of definiteness, permitting the matrices to be ...
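A small numerical sketch, assuming the standard fact (not stated in the excerpt) that A^T A is always positive semi-definite: the quadratic form x^T M x is nonnegative and all eigenvalues are ≥ 0.

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.normal(size=(3, 3))
M = A.T @ A                       # symmetric positive semi-definite

# All eigenvalues of a PSD matrix are >= 0 (eigvalsh: symmetric input).
assert np.all(np.linalg.eigvalsh(M) >= -1e-12)

# The quadratic form x^T M x is nonnegative for every x.
for _ in range(100):
    x = rng.normal(size=3)
    assert x @ M @ x >= -1e-12
```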



Dot Product
In mathematics, the dot product or scalar product (the term ''scalar product'' means literally "product with a scalar as a result"; it is also used for other symmetric bilinear forms, for example in a pseudo-Euclidean space, and is not to be confused with scalar multiplication) is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors), and returns a single number. In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product (or rarely the projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see ''Inner product space'' for more). It should not be confused with the cross product.

Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euc ...
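An illustrative sketch of the two characterizations: the algebraic sum of entrywise products, and the geometric product of the lengths times the cosine of the angle between the vectors:

```python
import numpy as np

a = np.array([1., 2., 3.])
b = np.array([4., -5., 6.])

# Algebraic: sum of products of corresponding entries.
assert np.isclose(np.dot(a, b), sum(x * y for x, y in zip(a, b)))

# Geometric: for a unit vector along the x-axis and a vector at a
# known angle theta, the dot product is |u| |v| cos(theta).
theta = 0.7
u = np.array([1., 0.])
v = 5.0 * np.array([np.cos(theta), np.sin(theta)])
assert np.isclose(np.dot(u, v),
                  np.linalg.norm(u) * np.linalg.norm(v) * np.cos(theta))
```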



Determinant
In mathematics, the determinant is a scalar-valued function of the entries of a square matrix. The determinant of a matrix A is commonly denoted \det(A), \det A, or |A|. Its value characterizes some properties of the matrix and the linear map represented, on a given basis, by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the corresponding linear map is an isomorphism. However, if the determinant is zero, the matrix is referred to as singular, meaning it does not have an inverse.

The determinant is completely determined by the two following properties: the determinant of a product of matrices is the product of their determinants, and the determinant of a triangular matrix is the product of its diagonal entries. The determinant of a 2 \times 2 matrix is
:\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad-bc,
and the determinant of a 3 \times 3 matrix is
:\begin{vmatrix} a & b & c \\ d & e ...
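An illustrative NumPy check of the 2 × 2 formula and of the two characterizing properties quoted above (multiplicativity, and the triangular determinant as the product of diagonal entries):

```python
import numpy as np

A = np.array([[1., 2.],
              [3., 4.]])
B = np.array([[0., 1.],
              [5., 2.]])

# 2x2 formula: det = ad - bc.
assert np.isclose(np.linalg.det(A), 1. * 4. - 2. * 3.)

# Multiplicativity: det(AB) = det(A) det(B).
assert np.isclose(np.linalg.det(A @ B),
                  np.linalg.det(A) * np.linalg.det(B))

# Triangular matrix: determinant is the product of the diagonal.
T = np.array([[2., 7.],
              [0., 5.]])
assert np.isclose(np.linalg.det(T), np.prod(np.diag(T)))
```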



Vector Space
In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called ''vectors'', can be added together and multiplied ("scaled") by numbers called ''scalars''. The operations of vector addition and scalar multiplication must satisfy certain requirements, called ''vector axioms''. Real vector spaces and complex vector spaces are kinds of vector spaces based on different kinds of scalars: real numbers and complex numbers. Scalars can also be, more generally, elements of any field.

Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities (such as forces and velocity) that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrices, which ...
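A small numerical sketch spot-checking a few of the vector axioms for R^3 with NumPy arrays (illustrative only; a finite check is not a proof):

```python
import numpy as np

u = np.array([1., 2., 3.])
v = np.array([-4., 0., 5.])
c, d = 2.0, -3.0

assert np.allclose(u + v, v + u)                  # commutativity of addition
assert np.allclose(c * (u + v), c * u + c * v)    # distributivity over vectors
assert np.allclose((c + d) * u, c * u + d * u)    # distributivity over scalars
assert np.allclose(u + np.zeros(3), u)            # additive identity
```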


Linear Map
In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping V \to W between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism.

If a linear map is a bijection then it is called a linear isomorphism. In the case where V = W, a linear map is called a linear endomorphism. Sometimes the term linear operator refers to this case, but the term "linear operator" can have different meanings for different conventions: for example, it can be used to emphasize that V and W are real vector spaces (not necessarily with V = W), or it can be used to emphasize that V is a function space, which is a common convention in functional analysis. Sometimes the term ''linear function'' has the same meaning as ''linear m ...
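A minimal sketch of the two preservation properties, under the assumption that the map is given by a matrix (as every linear map between finite-dimensional coordinate spaces is):

```python
import numpy as np

A = np.array([[1., 2.],
              [0., -1.],
              [3., 1.]])

def T(x):
    return A @ x                  # T : R^2 -> R^3

x = np.array([1., 4.])
y = np.array([-2., 0.5])
c = 3.0

assert np.allclose(T(x + y), T(x) + T(y))   # preserves vector addition
assert np.allclose(T(c * x), c * T(x))      # preserves scalar multiplication
```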



Matrix Addition
In mathematics, matrix addition is the operation of adding two matrices by adding the corresponding entries together. For a vector \vec{v}, adding two matrices would have the geometric effect of applying each matrix transformation separately onto \vec{v}, then adding the transformed vectors:
:\mathbf{A}\vec{v} + \mathbf{B}\vec{v} = (\mathbf{A} + \mathbf{B})\vec{v}.

Definition

Two matrices must have an equal number of rows and columns to be added. In that case, the sum of two matrices A and B is a matrix with the same number of rows and columns as A and B. The sum of A and B, denoted A + B, is computed by adding corresponding elements of A and B:
:\begin{align} \mathbf{A}+\mathbf{B} & = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} + \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{pmatrix} \\ & = \begin{pmatrix} a_{11} + b_{11} & a_{12} + b_{12} & \cdots & a_{1n} + b_{1n} \\ a_{21} + b_{21} ...
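An illustrative check of entrywise addition and of the geometric identity A v + B v = (A + B) v quoted above:

```python
import numpy as np

A = np.array([[1, 3],
              [1, 0]])
B = np.array([[0, 0],
              [7, 5]])
v = np.array([2, -1])

assert np.array_equal(A + B, np.array([[1, 3],
                                       [8, 5]]))   # entrywise sum
assert np.array_equal(A @ v + B @ v, (A + B) @ v)  # distributes over v
```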