In mathematics, a block matrix or a partitioned matrix is a matrix that is interpreted as having been broken into sections called blocks or submatrices. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices. For example, the 3×4 matrix presented below is divided by horizontal and vertical lines into four blocks: the top-left 2×3 block, the top-right 2×1 block, the bottom-left 1×3 block, and the bottom-right 1×1 block.

:\left[\begin{array}{ccc|c} a_{11} & a_{12} & a_{13} & b_1 \\ a_{21} & a_{22} & a_{23} & b_2 \\ \hline c_1 & c_2 & c_3 & d \end{array}\right]

Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned. This notion can be made more precise for an n by m matrix M by partitioning n into a collection \text{rowgroups}, and then partitioning m into a collection \text{colgroups}. The original matrix is then considered as the "total" of these groups, in the sense that the (i, j) entry of the original matrix corresponds in a 1-to-1 way with some (s, t) offset entry of some (x, y), where x \in \text{rowgroups} and y \in \text{colgroups}. Block matrix algebra arises in general from biproducts in categories of matrices.


Example

The matrix

:\mathbf{P} = \begin{bmatrix} 1 & 2 & 2 & 7 \\ 1 & 5 & 6 & 2 \\ 3 & 3 & 4 & 5 \\ 3 & 3 & 6 & 7 \end{bmatrix}

can be visualized as divided into four blocks, as

:\mathbf{P} = \left[\begin{array}{cc|cc} 1 & 2 & 2 & 7 \\ 1 & 5 & 6 & 2 \\ \hline 3 & 3 & 4 & 5 \\ 3 & 3 & 6 & 7 \end{array}\right].

The horizontal and vertical lines have no special mathematical meaning, but are a common way to visualize a partition. By this partition, \mathbf{P} is partitioned into four 2×2 blocks, as

:\mathbf{P}_{11} = \begin{bmatrix} 1 & 2 \\ 1 & 5 \end{bmatrix}, \quad \mathbf{P}_{12} = \begin{bmatrix} 2 & 7 \\ 6 & 2 \end{bmatrix}, \quad \mathbf{P}_{21} = \begin{bmatrix} 3 & 3 \\ 3 & 3 \end{bmatrix}, \quad \mathbf{P}_{22} = \begin{bmatrix} 4 & 5 \\ 6 & 7 \end{bmatrix}.

The partitioned matrix can then be written as

:\mathbf{P} = \begin{bmatrix} \mathbf{P}_{11} & \mathbf{P}_{12} \\ \mathbf{P}_{21} & \mathbf{P}_{22} \end{bmatrix}.
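
This partition is easy to reproduce computationally. The following is a minimal sketch using NumPy (an assumption of this illustration, not part of the text above): array slicing extracts the blocks, and np.block reassembles them.

```python
import numpy as np

P = np.array([[1, 2, 2, 7],
              [1, 5, 6, 2],
              [3, 3, 4, 5],
              [3, 3, 6, 7]])

# The four 2x2 blocks of the partition shown above.
P11, P12 = P[:2, :2], P[:2, 2:]
P21, P22 = P[2:, :2], P[2:, 2:]

# Reassembling the blocks recovers the original matrix.
assert np.array_equal(np.block([[P11, P12],
                                [P21, P22]]), P)
```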


Formal definition

Let A \in \mathbb{C}^{m \times n}. A partitioning of A is a representation of A in the form

:A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1} & A_{p2} & \cdots & A_{pq} \end{bmatrix},

where A_{ij} \in \mathbb{C}^{m_i \times n_j} are contiguous submatrices, \sum_{i=1}^{p} m_i = m, and \sum_{j=1}^{q} n_j = n. The elements A_{ij} of the partition are called blocks. By this definition, the blocks in any one column must all have the same number of columns. Similarly, the blocks in any one row must have the same number of rows.


Partitioning methods

A matrix can be partitioned in many ways. For example, a matrix A is said to be partitioned by columns if it is written as

:A = (a_1 \ a_2 \ \cdots \ a_n),

where a_j is the jth column of A. A matrix can also be partitioned by rows:

:A = \begin{bmatrix} a_1^T \\ a_2^T \\ \vdots \\ a_m^T \end{bmatrix},

where a_i^T is the ith row of A.
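
As a sketch of both partitions in NumPy (again an assumed tool for illustration), the columns and rows are obtained by slicing and restacked to recover A:

```python
import numpy as np

A = np.arange(6).reshape(2, 3)

# Partition by columns: a list of the n column vectors of A.
columns = [A[:, j] for j in range(A.shape[1])]

# Partition by rows: a list of the m row vectors of A.
rows = [A[i, :] for i in range(A.shape[0])]

assert np.array_equal(np.column_stack(columns), A)
assert np.array_equal(np.vstack(rows), A)
```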


Common partitions

Often, we encounter the 2×2 partition

:A = \begin{bmatrix} A_{11} & A_{12} \\ A_{21} & A_{22} \end{bmatrix},

particularly in the form where A_{11} is a scalar:

:A = \begin{bmatrix} a_{11} & a_{12}^T \\ a_{21} & A_{22} \end{bmatrix}.


Block matrix operations


Transpose

Let

:A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1q} \\ A_{21} & A_{22} & \cdots & A_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1} & A_{p2} & \cdots & A_{pq} \end{bmatrix},

where A_{ij} \in \mathbb{C}^{k_i \times \ell_j}. (This matrix A will be reused in the Addition and Multiplication sections below.) Then its transpose is

:A^T = \begin{bmatrix} A_{11}^T & A_{21}^T & \cdots & A_{p1}^T \\ A_{12}^T & A_{22}^T & \cdots & A_{p2}^T \\ \vdots & \vdots & \ddots & \vdots \\ A_{1q}^T & A_{2q}^T & \cdots & A_{pq}^T \end{bmatrix},

and the same equation holds with the transpose replaced by the conjugate transpose.
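
The identity can be checked numerically. A hedged NumPy sketch with arbitrarily chosen block sizes: the transpose of the assembled matrix equals the assembly of the transposed blocks, with the block grid itself transposed.

```python
import numpy as np

rng = np.random.default_rng(0)
# A 2x2 grid of blocks with row sizes (2, 3) and column sizes (1, 4).
A11, A12 = rng.random((2, 1)), rng.random((2, 4))
A21, A22 = rng.random((3, 1)), rng.random((3, 4))

A = np.block([[A11, A12], [A21, A22]])

# Transpose blockwise: swap the grid positions AND transpose each block.
A_T_blockwise = np.block([[A11.T, A21.T],
                          [A12.T, A22.T]])

assert np.allclose(A.T, A_T_blockwise)
```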


Block transpose

A special form of matrix transpose can also be defined for block matrices, where individual blocks are reordered but not transposed. Let A = (B_{ij}) be a k \times l block matrix with m \times n blocks B_{ij}. The block transpose of A is the l \times k block matrix A^{\mathcal{B}} with m \times n blocks \left(A^{\mathcal{B}}\right)_{ij} = B_{ji}. As with the conventional transpose, the block transpose is a linear mapping, so that (A + C)^{\mathcal{B}} = A^{\mathcal{B}} + C^{\mathcal{B}}. However, in general the property (AC)^{\mathcal{B}} = C^{\mathcal{B}} A^{\mathcal{B}} does not hold unless the blocks of A and C commute.
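
No standard NumPy routine computes the block transpose directly; the helper below (block_transpose, an illustrative name) is a sketch assuming all blocks share a common m × n size, as in the definition above.

```python
import numpy as np

def block_transpose(A, m, n):
    """Block transpose of A viewed as a k x l grid of m x n blocks:
    blocks are swapped across the grid diagonal but not transposed."""
    k, l = A.shape[0] // m, A.shape[1] // n
    return np.block([[A[i*m:(i+1)*m, j*n:(j+1)*n] for i in range(k)]
                     for j in range(l)])

rng = np.random.default_rng(1)
A = rng.random((4, 6))          # a 2x3 grid of 2x2 blocks

B = block_transpose(A, 2, 2)    # a 3x2 grid of 2x2 blocks
assert B.shape == (6, 4)
# Unlike A.T, each individual block keeps its own orientation.
assert np.array_equal(B[0:2, 2:4], A[2:4, 0:2])
```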


Addition

Let

:B = \begin{bmatrix} B_{11} & B_{12} & \cdots & B_{1s} \\ B_{21} & B_{22} & \cdots & B_{2s} \\ \vdots & \vdots & \ddots & \vdots \\ B_{r1} & B_{r2} & \cdots & B_{rs} \end{bmatrix},

where B_{ij} \in \mathbb{C}^{m_i \times n_j}, and let A be the matrix defined in the Transpose section above. (This matrix B will be reused in the Multiplication section.) Then if p = r, q = s, k_i = m_i, and \ell_j = n_j, we have

:A + B = \begin{bmatrix} A_{11} + B_{11} & A_{12} + B_{12} & \cdots & A_{1q} + B_{1q} \\ A_{21} + B_{21} & A_{22} + B_{22} & \cdots & A_{2q} + B_{2q} \\ \vdots & \vdots & \ddots & \vdots \\ A_{p1} + B_{p1} & A_{p2} + B_{p2} & \cdots & A_{pq} + B_{pq} \end{bmatrix}.


Multiplication

It is possible to use a block partitioned matrix product that involves only algebra on submatrices of the factors. The partitioning of the factors is not arbitrary, however, and requires "conformable partitions" between two matrices A and B such that all submatrix products that will be used are defined. Let A be the matrix defined in the Transpose section, and let B be the matrix defined in the Addition section; the product AB then requires q = r and \ell_k = m_k for each k, so that every block product A_{ik} B_{kj} is defined. The matrix product

:C = AB

can be performed blockwise, yielding C as a p \times s block matrix. The blocks of the resulting matrix C are calculated by multiplying:

:C_{ij} = \sum_{k=1}^{q} A_{ik} B_{kj}.

Or, using the Einstein notation that implicitly sums over repeated indices:

:C_{ij} = A_{ik} B_{kj}.

Depicting C as a matrix, we have

:C = AB = \begin{bmatrix} \sum_{k=1}^{q} A_{1k} B_{k1} & \sum_{k=1}^{q} A_{1k} B_{k2} & \cdots & \sum_{k=1}^{q} A_{1k} B_{ks} \\ \sum_{k=1}^{q} A_{2k} B_{k1} & \sum_{k=1}^{q} A_{2k} B_{k2} & \cdots & \sum_{k=1}^{q} A_{2k} B_{ks} \\ \vdots & \vdots & \ddots & \vdots \\ \sum_{k=1}^{q} A_{pk} B_{k1} & \sum_{k=1}^{q} A_{pk} B_{k2} & \cdots & \sum_{k=1}^{q} A_{pk} B_{ks} \end{bmatrix}.
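
A numerical sketch of the blockwise product under a conformable partition (the sizes below are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(2)

# A: 2x2 block grid, row sizes (2, 3), inner (column) sizes (4, 1).
# B: 2x2 block grid, inner (row) sizes (4, 1), column sizes (2, 5).
A = [[rng.random((m, k)) for k in (4, 1)] for m in (2, 3)]
B = [[rng.random((k, n)) for n in (2, 5)] for k in (4, 1)]

# C_ij = sum_k A_ik @ B_kj, computed block by block.
C_blocks = [[sum(A[i][k] @ B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

# The blockwise product agrees with the ordinary matrix product.
assert np.allclose(np.block(C_blocks), np.block(A) @ np.block(B))
```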


Inversion

If a matrix is partitioned into four blocks, it can be inverted blockwise as follows:

:P^{-1} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} A^{-1} + A^{-1}B\left(D - CA^{-1}B\right)^{-1}CA^{-1} & -A^{-1}B\left(D - CA^{-1}B\right)^{-1} \\ -\left(D - CA^{-1}B\right)^{-1}CA^{-1} & \left(D - CA^{-1}B\right)^{-1} \end{bmatrix},

where A and D are square blocks of arbitrary size, and B and C are conformable with them for partitioning. Furthermore, A and the Schur complement of A in P, namely P/A = D - CA^{-1}B, must be invertible. Equivalently, by permuting the blocks:

:P^{-1} = \begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} \left(A - BD^{-1}C\right)^{-1} & -\left(A - BD^{-1}C\right)^{-1}BD^{-1} \\ -D^{-1}C\left(A - BD^{-1}C\right)^{-1} & D^{-1} + D^{-1}C\left(A - BD^{-1}C\right)^{-1}BD^{-1} \end{bmatrix}.

Here, D and the Schur complement of D in P, namely P/D = A - BD^{-1}C, must be invertible. If A and D are both invertible, then:

:\begin{bmatrix} A & B \\ C & D \end{bmatrix}^{-1} = \begin{bmatrix} \left(A - BD^{-1}C\right)^{-1} & 0 \\ 0 & \left(D - CA^{-1}B\right)^{-1} \end{bmatrix} \begin{bmatrix} I & -BD^{-1} \\ -CA^{-1} & I \end{bmatrix}.

By the Weinstein–Aronszajn identity, one of the two matrices in the block-diagonal matrix is invertible exactly when the other is.
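
The first formula can be verified numerically. A sketch using the Schur complement of A (random blocks, which are invertible almost surely; NumPy assumed):

```python
import numpy as np

rng = np.random.default_rng(3)
A, B = rng.random((3, 3)), rng.random((3, 2))
C, D = rng.random((2, 3)), rng.random((2, 2))

P = np.block([[A, B], [C, D]])

Ainv = np.linalg.inv(A)
S = D - C @ Ainv @ B               # Schur complement of A in P
Sinv = np.linalg.inv(S)

P_inv_blockwise = np.block([
    [Ainv + Ainv @ B @ Sinv @ C @ Ainv, -Ainv @ B @ Sinv],
    [-Sinv @ C @ Ainv,                   Sinv            ],
])

assert np.allclose(P_inv_blockwise, np.linalg.inv(P))
```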


Computing submatrix inverses from the full inverse

By the symmetry between a matrix and its inverse in the block inversion formula, if a matrix P and its inverse P^{-1} are partitioned conformally:

:P = \begin{bmatrix} A & B \\ C & D \end{bmatrix}, \quad P^{-1} = \begin{bmatrix} E & F \\ G & H \end{bmatrix},

then the inverse of any principal submatrix can be computed from the corresponding blocks of P^{-1}:

:A^{-1} = E - FH^{-1}G,
:D^{-1} = H - GE^{-1}F.

This relationship follows from recognizing that E^{-1} = A - BD^{-1}C (the Schur complement), and applying the same block inversion formula with the roles of P and P^{-1} reversed.
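
A short numerical check of the first identity, as a sketch with a random (almost surely invertible) matrix:

```python
import numpy as np

rng = np.random.default_rng(4)
P = rng.random((5, 5))
Pinv = np.linalg.inv(P)

A = P[:3, :3]                      # principal 3x3 submatrix of P
E, F = Pinv[:3, :3], Pinv[:3, 3:]  # corresponding blocks of P^{-1}
G, H = Pinv[3:, :3], Pinv[3:, 3:]

# A^{-1} = E - F H^{-1} G
assert np.allclose(np.linalg.inv(A), E - F @ np.linalg.inv(H) @ G)
```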


Determinant

The formula for the determinant of a 2 \times 2 matrix above continues to hold, under appropriate further assumptions, for a matrix composed of four submatrices A, B, C, D with A and D square. The easiest such formula, which can be proven using either the Leibniz formula or a factorization involving the Schur complement, is

:\det\begin{bmatrix} A & 0 \\ C & D \end{bmatrix} = \det(A)\det(D) = \det\begin{bmatrix} A & B \\ 0 & D \end{bmatrix}.

Using this formula, we can derive that the characteristic polynomials of \begin{bmatrix} A & 0 \\ C & D \end{bmatrix} and \begin{bmatrix} A & B \\ 0 & D \end{bmatrix} are the same and equal to the product of the characteristic polynomials of A and D. Furthermore, if \begin{bmatrix} A & 0 \\ C & D \end{bmatrix} or \begin{bmatrix} A & B \\ 0 & D \end{bmatrix} is diagonalizable, then A and D are diagonalizable too. The converse is false; simply check \begin{bmatrix} 1 & 1 \\ 0 & 1 \end{bmatrix}. If A is invertible, one has

:\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(A)\det\left(D - CA^{-1}B\right),

and if D is invertible, one has

:\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(D)\det\left(A - BD^{-1}C\right).

If the blocks are square matrices of the ''same'' size, further formulas hold. For example, if C and D commute (i.e., CD = DC), then

:\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD - BC).

Similar statements hold when AB = BA, AC = CA, or BD = DB. Namely, if AC = CA, then

:\det\begin{bmatrix} A & B \\ C & D \end{bmatrix} = \det(AD - CB).

Note the change in order of C and B (we have CB instead of BC). Similarly, if BD = DB, then AD should be replaced with DA (i.e., we get \det(DA - BC)), and if AB = BA, then we get \det(DA - CB). Note that the last two results require commutativity of the underlying ring, but the first two do not. This formula has been generalized to matrices composed of more than 2 \times 2 blocks, again under appropriate commutativity conditions among the individual blocks. For A = D and B = C, the following formula holds (even if A and B do not commute):

:\det\begin{bmatrix} A & B \\ B & A \end{bmatrix} = \det(A - B)\det(A + B).
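
Two of these identities are checked numerically below; in this sketch the commuting pair C, D is manufactured as polynomials in a common matrix X, an illustrative device, not a construction from the text.

```python
import numpy as np

rng = np.random.default_rng(5)
A, B = rng.random((3, 3)), rng.random((3, 3))
X = rng.random((3, 3))
C, D = X @ X + 2 * X, 3 * X + np.eye(3)   # built so that CD == DC

M = np.block([[A, B], [C, D]])

# det M = det(A) det(D - C A^{-1} B) when A is invertible.
schur = D - C @ np.linalg.inv(A) @ B
assert np.isclose(np.linalg.det(M), np.linalg.det(A) * np.linalg.det(schur))

# det M = det(AD - BC) when C and D commute.
assert np.isclose(np.linalg.det(M), np.linalg.det(A @ D - B @ C))
```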


Special types of block matrices


Direct sums and block diagonal matrices


Direct sum

For any arbitrary matrices A (of size ''m'' × ''n'') and B (of size ''p'' × ''q''), we have the direct sum of A and B, denoted by A \oplus B and defined as

:A \oplus B = \begin{bmatrix} a_{11} & \cdots & a_{1n} & 0 & \cdots & 0 \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ a_{m1} & \cdots & a_{mn} & 0 & \cdots & 0 \\ 0 & \cdots & 0 & b_{11} & \cdots & b_{1q} \\ \vdots & \ddots & \vdots & \vdots & \ddots & \vdots \\ 0 & \cdots & 0 & b_{p1} & \cdots & b_{pq} \end{bmatrix}.

For instance,

:\begin{bmatrix} 1 & 3 & 2 \\ 2 & 3 & 1 \end{bmatrix} \oplus \begin{bmatrix} 1 & 6 \\ 0 & 1 \end{bmatrix} = \begin{bmatrix} 1 & 3 & 2 & 0 & 0 \\ 2 & 3 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 6 \\ 0 & 0 & 0 & 0 & 1 \end{bmatrix}.

This operation generalizes naturally to arbitrarily dimensioned arrays (provided that A and B have the same number of dimensions). Note that any element in the direct sum of two vector spaces of matrices can be represented as a direct sum of two matrices.
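
In code, SciPy exposes the direct sum as scipy.linalg.block_diag (SciPy is an assumption of this sketch); the same matrix can be built from np.block with explicit zero blocks:

```python
import numpy as np
from scipy.linalg import block_diag

A = np.array([[1, 3, 2],
              [2, 3, 1]])
B = np.array([[1, 6],
              [0, 1]])

# Direct sum: A and B on the diagonal, zero blocks elsewhere.
assert np.array_equal(
    block_diag(A, B),
    np.block([[A, np.zeros((2, 2), dtype=int)],
              [np.zeros((2, 3), dtype=int), B]]))
```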


Block diagonal matrices

A block diagonal matrix is a block matrix that is a square matrix such that the main-diagonal blocks are square matrices and all off-diagonal blocks are zero matrices. That is, a block diagonal matrix A has the form

:A = \begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n \end{bmatrix},

where A_k is a square matrix for all k = 1, \ldots, n. In other words, matrix A is the direct sum of A_1, \ldots, A_n. It can also be indicated as A_1 \oplus A_2 \oplus \cdots \oplus A_n or \operatorname{diag}(A_1, A_2, \ldots, A_n) (the latter being the same formalism used for a diagonal matrix). Any square matrix can trivially be considered a block diagonal matrix with only one block.

For the determinant and trace, the following properties hold:

:\det A = \det A_1 \times \cdots \times \det A_n,

and

:\operatorname{tr} A = \operatorname{tr} A_1 + \cdots + \operatorname{tr} A_n.

A block diagonal matrix is invertible if and only if each of its main-diagonal blocks is invertible, and in this case its inverse is another block diagonal matrix given by

:\begin{bmatrix} A_1 & 0 & \cdots & 0 \\ 0 & A_2 & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n \end{bmatrix}^{-1} = \begin{bmatrix} A_1^{-1} & 0 & \cdots & 0 \\ 0 & A_2^{-1} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_n^{-1} \end{bmatrix}.

The eigenvalues and eigenvectors of A are simply those of the A_k's combined.
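
A sketch confirming these properties numerically, using symmetric random blocks so that the eigenvalues are real and easy to compare:

```python
import numpy as np
from scipy.linalg import block_diag

rng = np.random.default_rng(6)
A1 = rng.random((2, 2))
A2 = rng.random((3, 3))
A1, A2 = A1 + A1.T, A2 + A2.T   # symmetric blocks: real eigenvalues
A = block_diag(A1, A2)

# Determinant and trace factor across the diagonal blocks.
assert np.isclose(np.linalg.det(A), np.linalg.det(A1) * np.linalg.det(A2))
assert np.isclose(np.trace(A), np.trace(A1) + np.trace(A2))

# The inverse is block diagonal with the blockwise inverses.
assert np.allclose(np.linalg.inv(A),
                   block_diag(np.linalg.inv(A1), np.linalg.inv(A2)))

# The spectrum of A is the combined spectra of the blocks.
assert np.allclose(np.sort(np.linalg.eigvalsh(A)),
                   np.sort(np.concatenate([np.linalg.eigvalsh(A1),
                                           np.linalg.eigvalsh(A2)])))
```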


Block tridiagonal matrices

A block tridiagonal matrix is another special block matrix, which is, just like the block diagonal matrix, a square matrix having square matrices (blocks) on the lower diagonal, main diagonal and upper diagonal, with all other blocks being zero matrices. It is essentially a tridiagonal matrix, but with submatrices in place of scalars. A block tridiagonal matrix A has the form

:A = \begin{bmatrix} B_1 & C_1 & & & \cdots & & \\ A_2 & B_2 & C_2 & & & & \\ & \ddots & \ddots & \ddots & & & \vdots \\ & & A_k & B_k & C_k & & \\ \vdots & & & \ddots & \ddots & \ddots & \\ & & & & A_{n-1} & B_{n-1} & C_{n-1} \\ & & \cdots & & & A_n & B_n \end{bmatrix},

where A_k, B_k and C_k are square sub-matrices of the lower, main and upper diagonal respectively. Block tridiagonal matrices are often encountered in numerical solutions of engineering problems (e.g., computational fluid dynamics). Optimized numerical methods for LU factorization are available, and hence efficient solution algorithms exist for equation systems with a block tridiagonal matrix as coefficient matrix. The Thomas algorithm, used for the efficient solution of equation systems involving a tridiagonal matrix, can also be applied using matrix operations to block tridiagonal matrices (see also Block LU decomposition).
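
A direct construction of a block tridiagonal matrix from its three block diagonals; the helper below is a sketch (no dedicated library routine is assumed):

```python
import numpy as np

def block_tridiag(lower, main, upper):
    """Assemble a block tridiagonal matrix from lists of square blocks:
    len(main) == n, len(lower) == len(upper) == n - 1."""
    n, b = len(main), main[0].shape[0]
    grid = [[np.zeros((b, b)) for _ in range(n)] for _ in range(n)]
    for i in range(n):
        grid[i][i] = main[i]
        if i > 0:
            grid[i][i - 1] = lower[i - 1]
        if i < n - 1:
            grid[i][i + 1] = upper[i]
    return np.block(grid)

rng = np.random.default_rng(7)
main  = [rng.random((2, 2)) for _ in range(4)]
lower = [rng.random((2, 2)) for _ in range(3)]
upper = [rng.random((2, 2)) for _ in range(3)]

T = block_tridiag(lower, main, upper)
assert T.shape == (8, 8)
assert np.all(T[0:2, 4:] == 0)   # blocks beyond the tridiagonal band are zero
```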


Block triangular matrices


Upper block triangular

A matrix A is upper block triangular (or block upper triangular) if

:A = \begin{bmatrix} A_{11} & A_{12} & \cdots & A_{1k} \\ 0 & A_{22} & \cdots & A_{2k} \\ \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & \cdots & A_{kk} \end{bmatrix},

where A_{ij} \in \mathbb{C}^{n_i \times n_j} for all i, j = 1, \ldots, k.


Lower block triangular

A matrix A is lower block triangular if

:A = \begin{bmatrix} A_{11} & 0 & \cdots & 0 \\ A_{21} & A_{22} & \cdots & 0 \\ \vdots & \vdots & \ddots & \vdots \\ A_{k1} & A_{k2} & \cdots & A_{kk} \end{bmatrix},

where A_{ij} \in \mathbb{C}^{n_i \times n_j} for all i, j = 1, \ldots, k.


Block Toeplitz matrices

A block Toeplitz matrix is another special block matrix, which contains blocks that are repeated down the diagonals of the matrix, just as a Toeplitz matrix has elements repeated down its diagonals. A matrix A is block Toeplitz if A_{ij} = A_{kl} whenever k - i = l - j, that is,

:A = \begin{bmatrix} A_1 & A_2 & A_3 & \cdots \\ A_4 & A_1 & A_2 & \cdots \\ A_5 & A_4 & A_1 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix},

where A_i \in \mathbb{C}^{m \times n}.
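
A sketch constructing a block Toeplitz matrix: one block per diagonal offset j - i, repeated along each block diagonal (the helper name and layout are illustrative):

```python
import numpy as np

def block_toeplitz(blocks_by_offset, n):
    """Build an n x n block grid where grid cell (i, j) holds the block
    associated with the offset j - i (constant along block diagonals)."""
    return np.block([[blocks_by_offset[j - i] for j in range(n)]
                     for i in range(n)])

rng = np.random.default_rng(8)
# One 2x2 block per diagonal offset -(n-1) .. n-1.
offsets = {d: rng.random((2, 2)) for d in range(-2, 3)}

T = block_toeplitz(offsets, 3)
# Blocks repeat along diagonals: cell (0, 0) equals cell (1, 1), etc.
assert np.array_equal(T[0:2, 0:2], T[2:4, 2:4])
assert np.array_equal(T[0:2, 2:4], T[2:4, 4:6])
```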


Block Hankel matrices

A matrix A is block Hankel if A_{ij} = A_{kl} whenever i + j = k + l, that is,

:A = \begin{bmatrix} A_1 & A_2 & A_3 & \cdots \\ A_2 & A_3 & A_4 & \cdots \\ A_3 & A_4 & A_5 & \cdots \\ \vdots & \vdots & \vdots & \ddots \end{bmatrix},

where A_i \in \mathbb{C}^{m \times n}.


See also

* Kronecker product (matrix direct product resulting in a block matrix)
* Jordan normal form (canonical form of a linear operator on a finite-dimensional complex vector space)
* Strassen algorithm (algorithm for matrix multiplication that is faster than the conventional matrix multiplication algorithm)

