In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}, while an example of a 3×3 diagonal matrix is \begin{bmatrix} 6 & 0 & 0 \\ 0 & 5 & 0 \\ 0 & 0 & 4 \end{bmatrix}. An identity matrix of any size, or any multiple of it, is a diagonal matrix called a ''scalar matrix''; for example, \begin{bmatrix} 0.5 & 0 \\ 0 & 0.5 \end{bmatrix}. In geometry, a diagonal matrix may be used as a ''scaling matrix'', since matrix multiplication with it results in changing scale (size) and possibly also shape; only a scalar matrix results in a uniform change in scale.


Definition

As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix \mathbf{D} = (d_{i,j}) with n columns and n rows is diagonal if \forall i,j \in \{1, 2, \ldots, n\},\ i \ne j \implies d_{i,j} = 0. However, the main diagonal entries are unrestricted. The term ''diagonal matrix'' may sometimes refer to a ''rectangular diagonal matrix'', which is an m-by-n matrix with all the entries not of the form d_{i,i} being zero. For example: \begin{bmatrix} 1 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & -3\\ 0 & 0 & 0 \end{bmatrix} \quad \text{or} \quad \begin{bmatrix} 1 & 0 & 0 & 0 & 0\\ 0 & 4 & 0 & 0 & 0\\ 0 & 0 & -3 & 0 & 0 \end{bmatrix} More often, however, ''diagonal matrix'' refers to square matrices, which can be specified explicitly as a ''square diagonal matrix''. A square diagonal matrix is a symmetric matrix, so it can also be called a ''symmetric diagonal matrix''. The following matrix is a square diagonal matrix: \begin{bmatrix} 1 & 0 & 0\\ 0 & 4 & 0\\ 0 & 0 & -2 \end{bmatrix} If the entries are real numbers or complex numbers, then it is a normal matrix as well. In the remainder of this article we will consider only square diagonal matrices, and refer to them simply as "diagonal matrices".
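The definition can be checked numerically; the following is a minimal NumPy sketch (the helper name is_diagonal is ours, not a standard routine):

```python
import numpy as np

def is_diagonal(M, tol=0.0):
    """Return True if every off-diagonal entry of the square matrix M is within tol of zero."""
    M = np.asarray(M, dtype=float)
    off_diagonal = M - np.diag(np.diag(M))  # keep only the off-diagonal part
    return bool(np.all(np.abs(off_diagonal) <= tol))

print(is_diagonal([[1, 0, 0], [0, 4, 0], [0, 0, -2]]))  # True
print(is_diagonal([[1, 2], [0, 3]]))                    # False
```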


Vector-to-matrix diag operator

A diagonal matrix \mathbf{D} can be constructed from a vector \mathbf{a} = \begin{bmatrix}a_1 & \dots & a_n\end{bmatrix}^\textsf{T} using the \operatorname{diag} operator: \mathbf{D} = \operatorname{diag}(a_1, \dots, a_n). This may be written more compactly as \mathbf{D} = \operatorname{diag}(\mathbf{a}). The same operator is also used to represent block diagonal matrices as \mathbf{A} = \operatorname{diag}(\mathbf{A}_1, \dots, \mathbf{A}_n), where each argument \mathbf{A}_i is a matrix. The \operatorname{diag} operator may be written as \operatorname{diag}(\mathbf{a}) = \left(\mathbf{a} \mathbf{1}^\textsf{T}\right) \circ \mathbf{I}, where \circ represents the Hadamard product and \mathbf{1} is a constant vector with elements 1.
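As an illustration, numpy.diag and scipy.linalg.block_diag provide these vector-to-matrix and block-diagonal constructions, and the identity \operatorname{diag}(\mathbf{a}) = (\mathbf{a}\mathbf{1}^\textsf{T}) \circ \mathbf{I} can be verified directly; this is a small sketch assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.linalg import block_diag  # assumes SciPy is installed

a = np.array([1.0, 4.0, -2.0])
D = np.diag(a)                       # vector-to-matrix: diag(a_1, ..., a_n)

# Block-diagonal construction diag(A_1, ..., A_n) with matrix arguments
A1 = np.array([[1.0, 2.0], [3.0, 4.0]])
A2 = np.array([[5.0]])
B = block_diag(A1, A2)

# diag(a) = (a 1^T) ∘ I, with ∘ the Hadamard (entrywise) product
ones = np.ones_like(a)
D_alt = np.outer(a, ones) * np.eye(len(a))
print(np.allclose(D, D_alt))  # True
```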


Matrix-to-vector diag operator

The inverse matrix-to-vector operator is sometimes denoted by the identically named \operatorname{diag}(\mathbf{D}) = \begin{bmatrix}a_1 & \dots & a_n\end{bmatrix}^\textsf{T}, where the argument is now a matrix, and the result is a vector of its diagonal entries. The following property holds: \operatorname{diag}(\mathbf{A}\mathbf{B}) = \sum_j \left(\mathbf{A} \circ \mathbf{B}^\textsf{T}\right)_{ij} = \left( \mathbf{A} \circ \mathbf{B}^\textsf{T} \right) \mathbf{1}.
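A short sketch, assuming NumPy is available, that checks the identity \operatorname{diag}(\mathbf{A}\mathbf{B}) = (\mathbf{A} \circ \mathbf{B}^\textsf{T})\mathbf{1} on random matrices:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

d = np.diag(A @ B)               # matrix-to-vector: the diagonal of the product AB
d_alt = np.sum(A * B.T, axis=1)  # (A ∘ B^T) 1, i.e. row sums of the Hadamard product
print(np.allclose(d, d_alt))     # True
```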


Scalar matrix

A diagonal matrix with equal diagonal entries is a scalar matrix; that is, a scalar multiple \lambda\mathbf{I} of the identity matrix. Its effect on a vector is scalar multiplication by \lambda. For example, a 3×3 scalar matrix has the form: \begin{bmatrix} \lambda & 0 & 0 \\ 0 & \lambda & 0 \\ 0 & 0 & \lambda \end{bmatrix} \equiv \lambda \mathbf{I}_3 The scalar matrices are the center of the algebra of matrices: that is, they are precisely the matrices that commute with all other square matrices of the same size. By contrast, over a field (like the real numbers), a diagonal matrix with all diagonal elements distinct only commutes with diagonal matrices (its centralizer is the set of diagonal matrices). That is because if a diagonal matrix \mathbf{D} = \operatorname{diag}(a_1, \dots, a_n) has a_i \neq a_j, then given a matrix \mathbf{M} with m_{ij} \neq 0, the (i, j) terms of the two products are (\mathbf{DM})_{ij} = a_i m_{ij} and (\mathbf{MD})_{ij} = m_{ij} a_j, and a_j m_{ij} \neq m_{ij} a_i (since one can divide by m_{ij}), so they do not commute unless the off-diagonal terms are zero. Diagonal matrices where the diagonal entries are not all equal or all distinct have centralizers intermediate between the whole space and only diagonal matrices.

For an abstract vector space V (rather than the concrete vector space K^n), the analog of scalar matrices are scalar transformations. This is true more generally for a module M over a ring R, with the endomorphism algebra \operatorname{End}(M) (the algebra of linear operators on M) replacing the algebra of matrices. Formally, scalar multiplication is a linear map, inducing a map R \to \operatorname{End}(M) (from a scalar \lambda to its corresponding scalar transformation, multiplication by \lambda) exhibiting \operatorname{End}(M) as an R-algebra. For vector spaces, the scalar transforms are exactly the center of the endomorphism algebra, and, similarly, scalar invertible transforms are the center of the general linear group GL(V). The former is more generally true of free modules M \cong R^n, for which the endomorphism algebra is isomorphic to a matrix algebra.
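The commutation claims above can be illustrated numerically; in this NumPy sketch the scalar matrix commutes with an arbitrary matrix, while a diagonal matrix with distinct entries generally does not:

```python
import numpy as np

rng = np.random.default_rng(1)
M = rng.standard_normal((3, 3))   # a generic matrix with nonzero off-diagonal entries

S = 2.5 * np.eye(3)               # scalar matrix λI
print(np.allclose(S @ M, M @ S))  # True: scalar matrices commute with every matrix

D = np.diag([1.0, 2.0, 3.0])      # diagonal matrix with distinct entries
print(np.allclose(D @ M, M @ D))  # False: it only commutes with diagonal matrices
```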


Vector operations

Multiplying a vector by a diagonal matrix multiplies each of the terms by the corresponding diagonal entry. Given a diagonal matrix \mathbf{D} = \operatorname{diag}(a_1, \dots, a_n) and a vector \mathbf{x} = \begin{bmatrix} x_1 & \dotsm & x_n \end{bmatrix}^\textsf{T}, the product is: \mathbf{D}\mathbf{x} = \operatorname{diag}(a_1, \dots, a_n)\begin{bmatrix}x_1 \\ \vdots \\ x_n\end{bmatrix} = \begin{bmatrix} a_1 \\ & \ddots \\ & & a_n \end{bmatrix} \begin{bmatrix}x_1 \\ \vdots \\ x_n\end{bmatrix} = \begin{bmatrix}a_1 x_1 \\ \vdots \\ a_n x_n\end{bmatrix}. This can be expressed more compactly by using a vector instead of a diagonal matrix, \mathbf{d} = \begin{bmatrix} a_1 & \dotsm & a_n \end{bmatrix}^\textsf{T}, and taking the Hadamard product of the vectors (entrywise product), denoted \mathbf{d} \circ \mathbf{x}: \mathbf{D}\mathbf{x} = \mathbf{d} \circ \mathbf{x} = \begin{bmatrix} a_1 \\ \vdots \\ a_n \end{bmatrix} \circ \begin{bmatrix} x_1 \\ \vdots \\ x_n \end{bmatrix} = \begin{bmatrix} a_1 x_1 \\ \vdots \\ a_n x_n \end{bmatrix}. This is mathematically equivalent, but avoids storing all the zero terms of this sparse matrix. This product is thus used in machine learning, such as computing products of derivatives in backpropagation or multiplying IDF weights in TF-IDF, since some BLAS frameworks, which multiply matrices efficiently, do not include Hadamard product capability directly.
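A small NumPy sketch of this equivalence, using elementwise multiplication in place of a stored diagonal matrix:

```python
import numpy as np

a = np.array([2.0, -1.0, 0.5])
x = np.array([3.0, 4.0, 8.0])

Dx = np.diag(a) @ x         # dense diagonal matrix times vector
ax = a * x                  # Hadamard (entrywise) product: same result, no n×n matrix stored
print(np.allclose(Dx, ax))  # True
```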


Matrix operations

The operations of matrix addition and matrix multiplication are especially simple for diagonal matrices. Write \operatorname{diag}(a_1, \ldots, a_n) for a diagonal matrix whose diagonal entries starting in the upper left corner are a_1, \ldots, a_n. Then, for addition, we have \operatorname{diag}(a_1,\, \ldots,\, a_n) + \operatorname{diag}(b_1,\, \ldots,\, b_n) = \operatorname{diag}(a_1 + b_1,\, \ldots,\, a_n + b_n) and for matrix multiplication, \operatorname{diag}(a_1,\, \ldots,\, a_n) \operatorname{diag}(b_1,\, \ldots,\, b_n) = \operatorname{diag}(a_1 b_1,\, \ldots,\, a_n b_n). The diagonal matrix \operatorname{diag}(a_1,\, \ldots,\, a_n) is invertible if and only if the entries a_1, \ldots, a_n are all nonzero. In this case, we have \operatorname{diag}(a_1,\, \ldots,\, a_n)^{-1} = \operatorname{diag}(a_1^{-1},\, \ldots,\, a_n^{-1}). In particular, the diagonal matrices form a subring of the ring of all n-by-n matrices. Multiplying an n-by-n matrix \mathbf{A} from the ''left'' with \operatorname{diag}(a_1,\, \ldots,\, a_n) amounts to multiplying the i-th ''row'' of \mathbf{A} by a_i for all i; multiplying the matrix \mathbf{A} from the ''right'' with \operatorname{diag}(a_1,\, \ldots,\, a_n) amounts to multiplying the i-th ''column'' of \mathbf{A} by a_i for all i.
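A NumPy sketch verifying the addition, multiplication, inversion, and row/column-scaling rules above:

```python
import numpy as np

a = np.array([1.0, 2.0, 4.0])
b = np.array([3.0, 5.0, -1.0])

# Addition and multiplication act entrywise on the diagonals
print(np.allclose(np.diag(a) + np.diag(b), np.diag(a + b)))      # True
print(np.allclose(np.diag(a) @ np.diag(b), np.diag(a * b)))      # True

# The inverse exists iff all diagonal entries are nonzero, and is the entrywise reciprocal
print(np.allclose(np.linalg.inv(np.diag(a)), np.diag(1.0 / a)))  # True

# Left multiplication scales rows, right multiplication scales columns
M = np.arange(9.0).reshape(3, 3)
print(np.allclose(np.diag(a) @ M, a[:, None] * M))               # True
print(np.allclose(M @ np.diag(a), M * a[None, :]))               # True
```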


Operator matrix in eigenbasis

As explained in determining coefficients of operator matrix, there is a special basis, \mathbf{e}_1, \ldots, \mathbf{e}_n, for which the matrix \mathbf{A} takes the diagonal form. Hence, in the defining equation \mathbf{A}\mathbf{e}_j = \sum_i a_{i,j} \mathbf{e}_i, all coefficients a_{i,j} with i \ne j are zero, leaving only one term per sum. The surviving diagonal elements, a_{i,i}, are known as eigenvalues and designated with \lambda_i in the equation, which reduces to \mathbf{A}\mathbf{e}_i = \lambda_i \mathbf{e}_i. The resulting equation is known as the eigenvalue equation and is used to derive the characteristic polynomial and, further, eigenvalues and eigenvectors. In other words, the eigenvalues of \operatorname{diag}(\lambda_1, \ldots, \lambda_n) are \lambda_1, \ldots, \lambda_n with associated eigenvectors \mathbf{e}_1, \ldots, \mathbf{e}_n.
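For a diagonal matrix these eigenpairs can be read off directly, as a quick NumPy check illustrates (the ordering returned by numpy.linalg.eig is not guaranteed in general):

```python
import numpy as np

D = np.diag([2.0, 5.0, 7.0])
eigenvalues, eigenvectors = np.linalg.eig(D)
print(eigenvalues)   # [2. 5. 7.] -- the diagonal entries
print(eigenvectors)  # the 3x3 identity: the standard basis vectors e_i are the eigenvectors
```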


Properties

* The determinant of \operatorname{diag}(a_1, \ldots, a_n) is the product a_1 \cdots a_n.
* The adjugate of a diagonal matrix is again diagonal.
* Where all matrices are square,
** A matrix is diagonal if and only if it is triangular and normal.
** A matrix is diagonal if and only if it is both upper- and lower-triangular.
** A diagonal matrix is symmetric.
* The identity matrix \mathbf{I}_n and zero matrix are diagonal.
* A 1×1 matrix is always diagonal.
* The square of a 2×2 matrix with zero trace is always diagonal (this and the determinant property are checked numerically in the sketch after this list).
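A brief NumPy check of the determinant property and the zero-trace property from the list above:

```python
import numpy as np

D = np.diag([1.0, 4.0, -2.0])
print(np.isclose(np.linalg.det(D), 1.0 * 4.0 * -2.0))  # True: det is the product of the diagonal

A = np.array([[3.0, 2.0],
              [5.0, -3.0]])  # a 2x2 matrix with zero trace
print(A @ A)                 # [[19. 0.] [0. 19.]] -- diagonal (in fact scalar)
```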


Applications

Diagonal matrices occur in many areas of linear algebra. Because of the simple description of the matrix operations and eigenvalues/eigenvectors given above, it is typically desirable to represent a given matrix or linear map by a diagonal matrix. In fact, a given n-by-n matrix \mathbf{A} is similar to a diagonal matrix (meaning that there is a matrix \mathbf{X} such that \mathbf{X}^{-1}\mathbf{A}\mathbf{X} is diagonal) if and only if it has n linearly independent eigenvectors. Such matrices are said to be diagonalizable. Over the field of real or complex numbers, more is true. The spectral theorem says that every normal matrix is unitarily similar to a diagonal matrix (if \mathbf{A}\mathbf{A}^* = \mathbf{A}^*\mathbf{A}, then there exists a unitary matrix \mathbf{U} such that \mathbf{U}\mathbf{A}\mathbf{U}^* is diagonal). Furthermore, the singular value decomposition implies that for any matrix \mathbf{A}, there exist unitary matrices \mathbf{U} and \mathbf{V} such that \mathbf{U}^*\mathbf{A}\mathbf{V} is diagonal with positive entries.
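Both decompositions are available in NumPy; the following is a minimal sketch using numpy.linalg.eigh for a real symmetric (hence normal) matrix and numpy.linalg.svd for a rectangular one:

```python
import numpy as np

# Spectral theorem for a real symmetric (hence normal) matrix: A = U diag(w) U^T
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])
w, U = np.linalg.eigh(A)
print(np.allclose(U @ np.diag(w) @ U.T, A))  # True

# Singular value decomposition of an arbitrary matrix: M = U diag(s) V^*
M = np.array([[1.0, 0.0, 2.0],
              [0.0, 3.0, 0.0]])
U, s, Vh = np.linalg.svd(M, full_matrices=False)
print(np.allclose(U @ np.diag(s) @ Vh, M))   # True
```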


Operator theory

In operator theory, particularly the study of PDEs, operators are particularly easy to understand and PDEs easy to solve if the operator is diagonal with respect to the basis with which one is working; this corresponds to a separable partial differential equation. Therefore, a key technique to understanding operators is a change of coordinates (in the language of operators, an integral transform), which changes the basis to an eigenbasis of eigenfunctions and makes the equation separable. An important example of this is the Fourier transform, which diagonalizes constant coefficient differentiation operators (or more generally translation invariant operators), such as the Laplacian operator, say, in the heat equation. Especially easy are multiplication operators, which are defined as multiplication by (the values of) a fixed function; the values of the function at each point correspond to the diagonal entries of a matrix.
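As a concrete finite-dimensional analogue, a translation-invariant (circulant) matrix is diagonalized by the discrete Fourier transform; the following NumPy sketch checks this, with an assumed periodic discrete-Laplacian stencil as the example operator:

```python
import numpy as np

# A circulant (translation-invariant) operator is diagonalized by the discrete Fourier
# transform: applying C is the same as multiplying by fft(c) in the Fourier basis.
n = 6
c = np.array([2.0, -1.0, 0.0, 0.0, 0.0, -1.0])          # assumed example: periodic Laplacian stencil
C = np.column_stack([np.roll(c, k) for k in range(n)])  # circulant matrix with first column c

x = np.random.default_rng(2).standard_normal(n)
y_direct = C @ x
y_fourier = np.fft.ifft(np.fft.fft(c) * np.fft.fft(x)).real
print(np.allclose(y_direct, y_fourier))  # True
```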


See also

* Anti-diagonal matrix
* Banded matrix
* Bidiagonal matrix
* Diagonally dominant matrix
* Diagonalizable matrix
* Jordan normal form
* Multiplication operator
* Tridiagonal matrix
* Toeplitz matrix
* Toral Lie algebra
* Circulant matrix

