In mathematics, the Kronecker product, sometimes denoted by ⊗, is an operation on two matrices of arbitrary size resulting in a block matrix. It is a generalization of the outer product (which is denoted by the same symbol) from vectors to matrices, and it gives the matrix of the tensor product linear map with respect to a standard choice of basis. The Kronecker product is to be distinguished from the usual matrix multiplication, which is an entirely different operation. The Kronecker product is also sometimes called the matrix direct product. It is named after the German mathematician Leopold Kronecker (1823–1891), even though there is little evidence that he was the first to define and use it. The Kronecker product has also been called the ''Zehfuss matrix'' and the ''Zehfuss product'', after Johann Georg Zehfuss, who described this matrix operation in 1858, but "Kronecker product" is currently the most widely used name.


Definition

If \mathbf{A} is an m \times n matrix and \mathbf{B} is a p \times q matrix, then the Kronecker product \mathbf{A} \otimes \mathbf{B} is the pm \times qn block matrix:
: \mathbf{A}\otimes\mathbf{B} = \begin{bmatrix} a_{11} \mathbf{B} & \cdots & a_{1n}\mathbf{B} \\ \vdots & \ddots & \vdots \\ a_{m1} \mathbf{B} & \cdots & a_{mn} \mathbf{B} \end{bmatrix},
more explicitly:
: \mathbf{A}\otimes\mathbf{B} = \begin{bmatrix}
a_{11} b_{11} & a_{11} b_{12} & \cdots & a_{11} b_{1q} & \cdots & \cdots & a_{1n} b_{11} & a_{1n} b_{12} & \cdots & a_{1n} b_{1q} \\
a_{11} b_{21} & a_{11} b_{22} & \cdots & a_{11} b_{2q} & \cdots & \cdots & a_{1n} b_{21} & a_{1n} b_{22} & \cdots & a_{1n} b_{2q} \\
\vdots & \vdots & \ddots & \vdots & & & \vdots & \vdots & \ddots & \vdots \\
a_{11} b_{p1} & a_{11} b_{p2} & \cdots & a_{11} b_{pq} & \cdots & \cdots & a_{1n} b_{p1} & a_{1n} b_{p2} & \cdots & a_{1n} b_{pq} \\
\vdots & \vdots & & \vdots & \ddots & & \vdots & \vdots & & \vdots \\
\vdots & \vdots & & \vdots & & \ddots & \vdots & \vdots & & \vdots \\
a_{m1} b_{11} & a_{m1} b_{12} & \cdots & a_{m1} b_{1q} & \cdots & \cdots & a_{mn} b_{11} & a_{mn} b_{12} & \cdots & a_{mn} b_{1q} \\
a_{m1} b_{21} & a_{m1} b_{22} & \cdots & a_{m1} b_{2q} & \cdots & \cdots & a_{mn} b_{21} & a_{mn} b_{22} & \cdots & a_{mn} b_{2q} \\
\vdots & \vdots & \ddots & \vdots & & & \vdots & \vdots & \ddots & \vdots \\
a_{m1} b_{p1} & a_{m1} b_{p2} & \cdots & a_{m1} b_{pq} & \cdots & \cdots & a_{mn} b_{p1} & a_{mn} b_{p2} & \cdots & a_{mn} b_{pq}
\end{bmatrix}.
Using /\!/ and \% to denote truncating integer division and remainder, respectively, and numbering the matrix elements starting from 0, one obtains (\mathbf{A}\otimes \mathbf{B})_{pr+v,\, qs+w} = a_{rs}\, b_{vw} and (\mathbf{A}\otimes \mathbf{B})_{i,j} = a_{i /\!/ p,\, j /\!/ q}\, b_{i \% p,\, j \% q}. For the usual numbering starting from 1, one obtains (\mathbf{A}\otimes \mathbf{B})_{p(r-1)+v,\, q(s-1)+w} = a_{rs}\, b_{vw} and (\mathbf{A}\otimes \mathbf{B})_{i,j} = a_{\lceil i/p \rceil,\, \lceil j/q \rceil}\, b_{(i-1) \% p + 1,\, (j-1) \% q + 1}. If \mathbf{A} and \mathbf{B} represent linear transformations V_1 \to W_1 and V_2 \to W_2, respectively, then \mathbf{A} \otimes \mathbf{B} represents the tensor product of the two maps, V_1 \otimes V_2 \to W_1 \otimes W_2.
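The 0-based index formula lends itself to a direct implementation. Below is a minimal sketch, assuming NumPy is available (`kron_by_formula` is a hypothetical helper name), which builds the product entry by entry and checks it against NumPy's built-in `np.kron`:

```python
import numpy as np

def kron_by_formula(A, B):
    """Kronecker product from the 0-based index formula
    (A ⊗ B)[i, j] = A[i // p, j // q] * B[i % p, j % q]."""
    m, n = A.shape
    p, q = B.shape
    out = np.empty((m * p, n * q), dtype=np.result_type(A, B))
    for i in range(m * p):
        for j in range(n * q):
            out[i, j] = A[i // p, j // q] * B[i % p, j % q]
    return out

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])
# The entrywise construction agrees with the block-matrix definition.
assert np.array_equal(kron_by_formula(A, B), np.kron(A, B))
```

In practice `np.kron` is the idiomatic choice; the explicit loop only serves to make the index bookkeeping concrete.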


Examples

: \begin{bmatrix} 1 & 2 \\ 3 & 4 \end{bmatrix} \otimes \begin{bmatrix} 0 & 5 \\ 6 & 7 \end{bmatrix} = \begin{bmatrix} 1 \begin{bmatrix} 0 & 5 \\ 6 & 7 \end{bmatrix} & 2 \begin{bmatrix} 0 & 5 \\ 6 & 7 \end{bmatrix} \\ 3 \begin{bmatrix} 0 & 5 \\ 6 & 7 \end{bmatrix} & 4 \begin{bmatrix} 0 & 5 \\ 6 & 7 \end{bmatrix} \end{bmatrix} = \begin{bmatrix} 1\times 0 & 1\times 5 & 2\times 0 & 2\times 5 \\ 1\times 6 & 1\times 7 & 2\times 6 & 2\times 7 \\ 3\times 0 & 3\times 5 & 4\times 0 & 4\times 5 \\ 3\times 6 & 3\times 7 & 4\times 6 & 4\times 7 \end{bmatrix} = \begin{bmatrix} 0 & 5 & 0 & 10 \\ 6 & 7 & 12 & 14 \\ 0 & 15 & 0 & 20 \\ 18 & 21 & 24 & 28 \end{bmatrix}.
Similarly:
: \begin{bmatrix} 1 & -4 & 7 \\ -2 & 3 & 3 \end{bmatrix} \otimes \begin{bmatrix} 8 & -9 & -6 & 5 \\ 1 & -3 & -4 & 7 \\ 2 & 8 & -8 & -3 \\ 1 & 2 & -5 & -1 \end{bmatrix} = \begin{bmatrix}
8 & -9 & -6 & 5 & -32 & 36 & 24 & -20 & 56 & -63 & -42 & 35 \\
1 & -3 & -4 & 7 & -4 & 12 & 16 & -28 & 7 & -21 & -28 & 49 \\
2 & 8 & -8 & -3 & -8 & -32 & 32 & 12 & 14 & 56 & -56 & -21 \\
1 & 2 & -5 & -1 & -4 & -8 & 20 & 4 & 7 & 14 & -35 & -7 \\
-16 & 18 & 12 & -10 & 24 & -27 & -18 & 15 & 24 & -27 & -18 & 15 \\
-2 & 6 & 8 & -14 & 3 & -9 & -12 & 21 & 3 & -9 & -12 & 21 \\
-4 & -16 & 16 & 6 & 6 & 24 & -24 & -9 & 6 & 24 & -24 & -9 \\
-2 & -4 & 10 & 2 & 3 & 6 & -15 & -3 & 3 & 6 & -15 & -3
\end{bmatrix}
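Assuming NumPy is available, the first worked example can be reproduced with `np.kron`; the check below also illustrates that the Kronecker product is not commutative:

```python
import numpy as np

A = np.array([[1, 2], [3, 4]])
B = np.array([[0, 5], [6, 7]])

expected = np.array([[0,  5,  0, 10],
                     [6,  7, 12, 14],
                     [0, 15,  0, 20],
                     [18, 21, 24, 28]])

# np.kron reproduces the worked example above.
assert np.array_equal(np.kron(A, B), expected)
# The product is not commutative: B ⊗ A permutes the blocks differently.
assert not np.array_equal(np.kron(A, B), np.kron(B, A))
```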


Properties


Relations to other matrix operations

If \mathbf{A} is n \times n and \mathbf{B} is m \times m, the Kronecker sum, denoted \oplus, is defined by
: \mathbf{A} \oplus \mathbf{B} = \mathbf{A} \otimes \mathbf{I}_m + \mathbf{I}_n \otimes \mathbf{B} .
The matrix exponential then satisfies
: \exp(\mathbf{A} \oplus \mathbf{B}) = \exp(\mathbf{A}) \otimes \exp(\mathbf{B}) .
Kronecker sums appear naturally in physics when considering ensembles of non-interacting systems. Let H^k be the Hamiltonian of the k-th such system. Then the total Hamiltonian of the ensemble is
: H_{\mathrm{Tot}} = \bigoplus_k H^k .
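The exponential identity exp(A ⊕ B) = exp(A) ⊗ exp(B) can be checked numerically. Below is a sketch assuming NumPy, using the definition A ⊕ B = A ⊗ I_m + I_n ⊗ B; diagonal matrices are used so the matrix exponential is elementwise, avoiding a SciPy dependency (`kron_sum` is a hypothetical helper name):

```python
import numpy as np

def kron_sum(A, B):
    """Kronecker sum A ⊕ B = A ⊗ I_m + I_n ⊗ B (A is n×n, B is m×m)."""
    n, m = A.shape[0], B.shape[0]
    return np.kron(A, np.eye(m)) + np.kron(np.eye(n), B)

# For diagonal matrices the matrix exponential acts elementwise on the
# diagonal, so exp(A ⊕ B) = exp(A) ⊗ exp(B) can be verified directly.
A = np.diag([1.0, 2.0])
B = np.diag([0.5, -1.0, 3.0])
lhs = np.diag(np.exp(np.diag(kron_sum(A, B))))
rhs = np.kron(np.diag(np.exp(np.diag(A))), np.diag(np.exp(np.diag(B))))
assert np.allclose(lhs, rhs)
```

For general (non-diagonal) matrices the same identity holds with `scipy.linalg.expm` in place of the elementwise exponential.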


Abstract properties


Matrix equations

The Kronecker product can be used to get a convenient representation for some matrix equations. Consider for instance the equation \mathbf{A}\mathbf{X}\mathbf{B} = \mathbf{C}, where \mathbf{A}, \mathbf{B} and \mathbf{C} are given matrices and the matrix \mathbf{X} is the unknown. We can use the "vec trick" to rewrite this equation as
: \left(\mathbf{B}^\textsf{T} \otimes \mathbf{A}\right) \operatorname{vec}(\mathbf{X}) = \operatorname{vec}(\mathbf{A}\mathbf{X}\mathbf{B}) = \operatorname{vec}(\mathbf{C}) .
Here, \operatorname{vec}(\mathbf{X}) denotes the vectorization of the matrix \mathbf{X}, formed by stacking the columns of \mathbf{X} into a single column vector. It now follows from the properties of the Kronecker product that the equation \mathbf{A}\mathbf{X}\mathbf{B} = \mathbf{C} has a unique solution if and only if \mathbf{A} and \mathbf{B} are invertible. If \mathbf{X} and \mathbf{C} are row-ordered into the column vectors \mathbf{u} and \mathbf{v}, respectively, then
: \mathbf{v} = \left(\mathbf{A} \otimes \mathbf{B}^\textsf{T}\right)\mathbf{u} .
The reason is that
: \mathbf{v} = \operatorname{vec}\left((\mathbf{A}\mathbf{X}\mathbf{B})^\textsf{T}\right) = \operatorname{vec}\left(\mathbf{B}^\textsf{T}\mathbf{X}^\textsf{T}\mathbf{A}^\textsf{T}\right) = \left(\mathbf{A} \otimes \mathbf{B}^\textsf{T}\right)\operatorname{vec}\left(\mathbf{X}^\textsf{T}\right) = \left(\mathbf{A} \otimes \mathbf{B}^\textsf{T}\right)\mathbf{u} .
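The vec trick is easy to verify numerically. Below is a sketch assuming NumPy, where column-stacking vectorization corresponds to `ravel(order="F")`:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((3, 4))
X = rng.standard_normal((4, 5))
B = rng.standard_normal((5, 2))

# Column-stacking vectorization: vec(M) stacks the columns of M.
vec = lambda M: M.ravel(order="F")

# vec(A X B) = (Bᵀ ⊗ A) vec(X)
lhs = np.kron(B.T, A) @ vec(X)
rhs = vec(A @ X @ B)
assert np.allclose(lhs, rhs)
```

Solving AXB = C then reduces to one linear solve against the (possibly large) matrix Bᵀ ⊗ A, which is why the representation is mostly of theoretical and structural interest for big systems.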


Applications

For an example of the application of this formula, see the article on the Lyapunov equation. This formula also comes in handy in showing that the matrix normal distribution is a special case of the multivariate normal distribution. It is also useful for representing 2D image processing operations in matrix-vector form. Another example is when a matrix can be factored as a Kronecker product, in which case matrix multiplication can be performed faster by using the above formula. This can be applied recursively, as done in the radix-2 FFT and the Fast Walsh–Hadamard transform. Splitting a known matrix into the Kronecker product of two smaller matrices is known as the "nearest Kronecker product" problem, and can be solved exactly by using the SVD. To split a matrix into the Kronecker product of more than two matrices, in an optimal fashion, is a difficult problem and the subject of ongoing research; some authors cast it as a tensor decomposition problem. In conjunction with the least squares method, the Kronecker product can be used as an accurate solution to the hand–eye calibration problem.
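The nearest Kronecker product problem can be sketched via the Van Loan–Pitsianis rearrangement: reshuffling C so that any exact Kronecker product C = A ⊗ B becomes the rank-1 matrix vec(A) vec(B)ᵀ, after which the best Frobenius-norm factors come from the dominant SVD term. A sketch assuming NumPy (`nearest_kron` is a hypothetical helper name):

```python
import numpy as np

def nearest_kron(C, shape_A, shape_B):
    """Best A, B (in Frobenius norm) with C ≈ A ⊗ B, via a rank-1 SVD
    of the rearranged matrix (Van Loan–Pitsianis)."""
    m1, n1 = shape_A
    m2, n2 = shape_B
    vec = lambda M: M.ravel(order="F")
    # Each row of R is a vectorized (i, j) block of C, rows ordered
    # column-major in (i, j), so R = vec(A) vec(B)ᵀ when C = A ⊗ B.
    R = np.stack([vec(C[i*m2:(i+1)*m2, j*n2:(j+1)*n2])
                  for j in range(n1) for i in range(m1)])
    U, s, Vt = np.linalg.svd(R, full_matrices=False)
    a = np.sqrt(s[0]) * U[:, 0]
    b = np.sqrt(s[0]) * Vt[0]
    return a.reshape((m1, n1), order="F"), b.reshape((m2, n2), order="F")

# If C is exactly a Kronecker product, the product is recovered exactly
# (the individual factors only up to a scalar).
A0 = np.array([[1.0, 2.0], [3.0, 4.0]])
B0 = np.array([[0.0, 5.0], [6.0, 7.0]])
A, B = nearest_kron(np.kron(A0, B0), A0.shape, B0.shape)
assert np.allclose(np.kron(A, B), np.kron(A0, B0))
```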


Related matrix operations

Two related matrix operations are the Tracy–Singh and Khatri–Rao products, which operate on partitioned matrices. Let the m \times n matrix \mathbf{A} be partitioned into the m_i \times n_j blocks \mathbf{A}_{ij} and the p \times q matrix \mathbf{B} into the p_k \times q_l blocks \mathbf{B}_{kl}, with of course \sum_i m_i = m, \sum_j n_j = n, \sum_k p_k = p and \sum_l q_l = q.


Tracy–Singh product

The Tracy–Singh product is defined as
: \mathbf{A} \circ \mathbf{B} = \left(\mathbf{A}_{ij} \circ \mathbf{B}\right)_{ij} = \left(\left(\mathbf{A}_{ij} \otimes \mathbf{B}_{kl}\right)_{kl}\right)_{ij}
which means that the (ij)-th subblock of the mp \times nq product \mathbf{A} \circ \mathbf{B} is the m_i p \times n_j q matrix \mathbf{A}_{ij} \circ \mathbf{B}, of which the (kl)-th subblock equals the m_i p_k \times n_j q_l matrix \mathbf{A}_{ij} \otimes \mathbf{B}_{kl}. Essentially the Tracy–Singh product is the pairwise Kronecker product for each pair of partitions in the two matrices. For example, if \mathbf{A} and \mathbf{B} are both 2 \times 2 partitioned matrices, e.g.:
: \mathbf{A} = \left[\begin{array}{cc|c} 1 & 2 & 3 \\ 4 & 5 & 6 \\ \hline 7 & 8 & 9 \end{array}\right] = \left[\begin{array}{c|c} \mathbf{A}_{11} & \mathbf{A}_{12} \\ \hline \mathbf{A}_{21} & \mathbf{A}_{22} \end{array}\right],\quad \mathbf{B} = \left[\begin{array}{cc|c} 1 & 4 & 7 \\ \hline 2 & 5 & 8 \\ 3 & 6 & 9 \end{array}\right] = \left[\begin{array}{c|c} \mathbf{B}_{11} & \mathbf{B}_{12} \\ \hline \mathbf{B}_{21} & \mathbf{B}_{22} \end{array}\right],
we get:
: \begin{align} \mathbf{A} \circ \mathbf{B} &= \left[\begin{array}{c|c} \mathbf{A}_{11} \circ \mathbf{B} & \mathbf{A}_{12} \circ \mathbf{B} \\ \hline \mathbf{A}_{21} \circ \mathbf{B} & \mathbf{A}_{22} \circ \mathbf{B} \end{array}\right] \\
&= \left[\begin{array}{cc|cc}
\mathbf{A}_{11} \otimes \mathbf{B}_{11} & \mathbf{A}_{11} \otimes \mathbf{B}_{12} & \mathbf{A}_{12} \otimes \mathbf{B}_{11} & \mathbf{A}_{12} \otimes \mathbf{B}_{12} \\
\mathbf{A}_{11} \otimes \mathbf{B}_{21} & \mathbf{A}_{11} \otimes \mathbf{B}_{22} & \mathbf{A}_{12} \otimes \mathbf{B}_{21} & \mathbf{A}_{12} \otimes \mathbf{B}_{22} \\
\hline
\mathbf{A}_{21} \otimes \mathbf{B}_{11} & \mathbf{A}_{21} \otimes \mathbf{B}_{12} & \mathbf{A}_{22} \otimes \mathbf{B}_{11} & \mathbf{A}_{22} \otimes \mathbf{B}_{12} \\
\mathbf{A}_{21} \otimes \mathbf{B}_{21} & \mathbf{A}_{21} \otimes \mathbf{B}_{22} & \mathbf{A}_{22} \otimes \mathbf{B}_{21} & \mathbf{A}_{22} \otimes \mathbf{B}_{22}
\end{array}\right] \\
&= \left[\begin{array}{cccccc|ccc}
1 & 4 & 2 & 8 & 7 & 14 & 3 & 12 & 21 \\
4 & 16 & 5 & 20 & 28 & 35 & 6 & 24 & 42 \\
\hline
2 & 5 & 4 & 10 & 8 & 16 & 6 & 15 & 24 \\
3 & 6 & 6 & 12 & 9 & 18 & 9 & 18 & 27 \\
8 & 20 & 10 & 25 & 32 & 40 & 12 & 30 & 48 \\
12 & 24 & 15 & 30 & 36 & 45 & 18 & 36 & 54 \\
\hline
7 & 28 & 8 & 32 & 49 & 56 & 9 & 36 & 63 \\
\hline
14 & 35 & 16 & 40 & 56 & 64 & 18 & 45 & 72 \\
21 & 42 & 24 & 48 & 63 & 72 & 27 & 54 & 81
\end{array}\right]. \end{align}
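Since the Tracy–Singh product reduces to Kronecker products of blocks, a direct implementation is short. Below is a sketch assuming NumPy, representing each partitioned matrix as a nested list of blocks (`tracy_singh` is a hypothetical helper name):

```python
import numpy as np

def tracy_singh(A_blocks, B_blocks):
    """Tracy–Singh product of partitioned matrices: every block A_ij is
    paired with every block B_kl via a Kronecker product, arranged so the
    (ij)-th outer block is A_ij ∘ B."""
    return np.block([
        [np.block([[np.kron(Aij, Bkl) for Bkl in Brow]
                   for Brow in B_blocks])
         for Aij in Arow]
        for Arow in A_blocks])

# The partitioned matrices from the example above.
A_blocks = [[np.array([[1, 2], [4, 5]]), np.array([[3], [6]])],
            [np.array([[7, 8]]),         np.array([[9]])]]
B_blocks = [[np.array([[1, 4]]),            np.array([[7]])],
            [np.array([[2, 5], [3, 6]]),    np.array([[8], [9]])]]

C = tracy_singh(A_blocks, B_blocks)
assert C.shape == (9, 9)
assert np.array_equal(C[0], np.array([1, 4, 2, 8, 7, 14, 3, 12, 21]))
```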


Khatri–Rao product

* Block Kronecker product
* Column-wise Khatri–Rao product


Face-splitting product

Mixed-products properties:
: \mathbf{A} \otimes (\mathbf{B} \bullet \mathbf{C}) = (\mathbf{A} \otimes \mathbf{B}) \bullet \mathbf{C} , where \bullet denotes the face-splitting product.
: (\mathbf{A} \bullet \mathbf{B})(\mathbf{C} \otimes \mathbf{D}) = (\mathbf{A}\mathbf{C}) \bullet (\mathbf{B}\mathbf{D}) . Similarly:
: (\mathbf{A} \bullet \mathbf{L})(\mathbf{B} \otimes \mathbf{M}) \cdots (\mathbf{C} \otimes \mathbf{S}) = (\mathbf{A}\mathbf{B} \cdots \mathbf{C}) \bullet (\mathbf{L}\mathbf{M} \cdots \mathbf{S}) ,
: \mathbf{c}^\textsf{T} \bullet \mathbf{d}^\textsf{T} = \mathbf{c}^\textsf{T} \otimes \mathbf{d}^\textsf{T} , where \mathbf{c} and \mathbf{d} are vectors,
: (\mathbf{A} \bullet \mathbf{B})(\mathbf{c} \otimes \mathbf{d}) = (\mathbf{A}\mathbf{c}) \circ (\mathbf{B}\mathbf{d}) , where \mathbf{c} and \mathbf{d} are vectors, and \circ denotes the Hadamard product. Similarly:
: (\mathbf{A} \bullet \mathbf{B})(\mathbf{M}\mathbf{N}\mathbf{c} \otimes \mathbf{P}\mathbf{Q}\mathbf{d}) = (\mathbf{A}\mathbf{M}\mathbf{N}\mathbf{c}) \circ (\mathbf{B}\mathbf{P}\mathbf{Q}\mathbf{d}) ,
: \mathcal{F}(C^{(1)}x \star C^{(2)}y) = \left(\mathcal{F} C^{(1)} \bullet \mathcal{F} C^{(2)}\right)(x \otimes y) = \mathcal{F} C^{(1)} x \circ \mathcal{F} C^{(2)} y , where \star is vector convolution and \mathcal{F} is the Fourier transform matrix (this result is a development of count sketch properties),
: (\mathbf{A} \bullet \mathbf{L})(\mathbf{B} \otimes \mathbf{M}) \cdots (\mathbf{C} \otimes \mathbf{S})(\mathbf{K} \ast \mathbf{T}) = (\mathbf{A}\mathbf{B} \cdots \mathbf{C}\mathbf{K}) \circ (\mathbf{L}\mathbf{M} \cdots \mathbf{S}\mathbf{T}) , where \ast denotes the column-wise Khatri–Rao product. Similarly:
: (\mathbf{A} \bullet \mathbf{L})(\mathbf{B} \otimes \mathbf{M}) \cdots (\mathbf{C} \otimes \mathbf{S})(\mathbf{c} \otimes \mathbf{d}) = (\mathbf{A}\mathbf{B} \cdots \mathbf{C}\mathbf{c}) \circ (\mathbf{L}\mathbf{M} \cdots \mathbf{S}\mathbf{d}) ,
: (\mathbf{A} \bullet \mathbf{L})(\mathbf{B} \otimes \mathbf{M}) \cdots (\mathbf{C} \otimes \mathbf{S})(\mathbf{P}\mathbf{c} \otimes \mathbf{Q}\mathbf{d}) = (\mathbf{A}\mathbf{B} \cdots \mathbf{C}\mathbf{P}\mathbf{c}) \circ (\mathbf{L}\mathbf{M} \cdots \mathbf{S}\mathbf{Q}\mathbf{d}) , where \mathbf{c} and \mathbf{d} are vectors.
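The identity (\mathbf{A} \bullet \mathbf{B})(\mathbf{c} \otimes \mathbf{d}) = (\mathbf{A}\mathbf{c}) \circ (\mathbf{B}\mathbf{d}) can be checked numerically. Below is a sketch assuming NumPy, implementing the face-splitting product as a row-wise Kronecker product (`face_split` is a hypothetical helper name):

```python
import numpy as np

def face_split(A, B):
    """Face-splitting (transposed Khatri–Rao) product: row i of the result
    is the Kronecker product of row i of A with row i of B."""
    return np.stack([np.kron(a, b) for a, b in zip(A, B)])

rng = np.random.default_rng(1)
A = rng.standard_normal((4, 3))
B = rng.standard_normal((4, 5))
c = rng.standard_normal(3)
d = rng.standard_normal(5)

# (A • B)(c ⊗ d) = (A c) ∘ (B d), with ∘ the Hadamard (elementwise) product:
# each row gives (a_i ⊗ b_i)·(c ⊗ d) = (a_i·c)(b_i·d).
lhs = face_split(A, B) @ np.kron(c, d)
rhs = (A @ c) * (B @ d)
assert np.allclose(lhs, rhs)
```

This row-wise structure is what makes the face-splitting product useful in sketching and randomized linear algebra: products against c ⊗ d never require forming the full Kronecker vector.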


See also

* Generalized linear array model
* Kronecker coefficient




External links

* The entry on the Kronecker, Zehfuss, or Direct Product of matrices has historical information.
* Software source in more than 40 languages.