In mathematics, particularly in matrix theory, a permutation matrix is a square binary matrix that has exactly one entry of 1 in each row and each column and 0s elsewhere. Each such matrix, say ''P'', represents a permutation of ''m'' elements and, when used to multiply another matrix, say ''M'', results in permuting the rows (when pre-multiplying, to form ''PM'') or the columns (when post-multiplying, to form ''MP'') of the matrix ''M''.


Definition

Given a permutation \pi of ''m'' elements,
:\pi : \lbrace 1, \ldots, m \rbrace \to \lbrace 1, \ldots, m \rbrace,
represented in two-line form by
:\begin{pmatrix} 1 & 2 & \cdots & m \\ \pi(1) & \pi(2) & \cdots & \pi(m) \end{pmatrix},
there are two natural ways to associate the permutation with a permutation matrix; namely, starting with the ''m'' × ''m'' identity matrix, I_m, either permute the columns or permute the rows, according to \pi. Both methods of defining permutation matrices appear in the literature, and the properties expressed in one representation can be easily converted to the other. This article will primarily deal with just one of these representations; the other will be mentioned only when there is a difference to be aware of.

The ''m'' × ''m'' permutation matrix P_\pi = (p_{ij}) obtained by permuting the columns of the identity matrix I_m, that is, for each ''i'', p_{ij} = 1 if ''j'' = \pi(i) and p_{ij} = 0 otherwise, will be referred to as the column representation in this article. Since the entries in row ''i'' are all 0 except that a 1 appears in column \pi(i), we may write
:P_\pi = \begin{pmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \vdots \\ \mathbf{e}_{\pi(m)} \end{pmatrix},
where \mathbf{e}_j, a standard basis vector, denotes a row vector of length ''m'' with 1 in the ''j''th position and 0 in every other position (Brualdi 2006, p. 2).

For example, the permutation matrix P_\pi corresponding to the permutation
:\pi=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 1 & 4 & 2 & 5 & 3 \end{pmatrix}
is
:P_\pi = \begin{pmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \mathbf{e}_{\pi(3)} \\ \mathbf{e}_{\pi(4)} \\ \mathbf{e}_{\pi(5)} \end{pmatrix} = \begin{pmatrix} \mathbf{e}_{1} \\ \mathbf{e}_{4} \\ \mathbf{e}_{2} \\ \mathbf{e}_{5} \\ \mathbf{e}_{3} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}.
Observe that the ''j''th column of the I_5 identity matrix now appears as the \pi(j)th column of P_\pi.

The other representation, obtained by permuting the rows of the identity matrix I_m, that is, for each ''j'', p_{ij} = 1 if ''i'' = \pi(j) and p_{ij} = 0 otherwise, will be referred to as the row representation.
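As an illustration of the column representation (not from the source article; a minimal Python sketch assuming NumPy, with a hypothetical helper name permutation_matrix), the following builds P_\pi from the one-line form of \pi and reproduces the 5 × 5 example above. Indices are 0-based in the code, so the permutation 1→1, 2→4, 3→2, 4→5, 5→3 is written as [0, 3, 1, 4, 2].

```python
import numpy as np

def permutation_matrix(pi):
    """Column-representation permutation matrix P_pi.

    pi is a 0-indexed one-line form: pi[i] is the image of i.
    Row i of the result is the standard basis row vector e_{pi[i]}.
    """
    m = len(pi)
    P = np.zeros((m, m), dtype=int)
    for i, j in enumerate(pi):
        P[i, j] = 1          # p_ij = 1 exactly when j = pi(i)
    return P

pi = [0, 3, 1, 4, 2]         # the example permutation, 0-indexed
P = permutation_matrix(pi)
print(P)                     # reproduces the matrix displayed above

# Column j of the identity matrix appears as column pi(j) of P,
# so selecting the columns of P in the order pi recovers the identity.
assert np.array_equal(P[:, pi], np.eye(5, dtype=int))
```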


Properties

The column representation of a permutation matrix is used throughout this section, except when otherwise indicated.

Multiplying P_\pi times a column vector \mathbf{g} will permute the rows of the vector:
:P_\pi \mathbf{g} = \begin{pmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \vdots \\ \mathbf{e}_{\pi(n)} \end{pmatrix} \begin{pmatrix} g_1 \\ g_2 \\ \vdots \\ g_n \end{pmatrix} = \begin{pmatrix} g_{\pi(1)} \\ g_{\pi(2)} \\ \vdots \\ g_{\pi(n)} \end{pmatrix}.
Repeated use of this result shows that if ''M'' is an appropriately sized matrix, the product P_\pi M is just a permutation of the rows of ''M''. However, observing that
:P_\pi \mathbf{e}_k^{\mathsf T} = \mathbf{e}_{\pi^{-1}(k)}^{\mathsf T}
for each ''k'' shows that the permutation of the rows is given by \pi^{-1}. (M^{\mathsf T} denotes the transpose of the matrix ''M''.)

As permutation matrices are orthogonal matrices (that is, P_\pi P_\pi^{\mathsf T} = I), the inverse matrix exists and can be written as
:P_\pi^{-1} = P_{\pi^{-1}} = P_\pi^{\mathsf T}.

Multiplying a row vector \mathbf{h} times P_\pi will permute the columns of the vector:
:\mathbf{h} P_\pi = \begin{pmatrix} h_1 & h_2 & \cdots & h_n \end{pmatrix} \begin{pmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \vdots \\ \mathbf{e}_{\pi(n)} \end{pmatrix} = \begin{pmatrix} h_{\pi^{-1}(1)} & h_{\pi^{-1}(2)} & \cdots & h_{\pi^{-1}(n)} \end{pmatrix}.
Again, repeated application of this result shows that post-multiplying a matrix ''M'' by the permutation matrix P_\pi, that is, M P_\pi, results in permuting the columns of ''M''. Notice also that
:\mathbf{e}_k P_\pi = \mathbf{e}_{\pi(k)}.

Given two permutations \pi and \sigma of ''n'' elements, the corresponding permutation matrices P_\pi and P_\sigma acting on column vectors are composed with
:P_\sigma P_\pi\, \mathbf{g} = P_{\pi\,\circ\,\sigma}\, \mathbf{g}.
The same matrices acting on row vectors (that is, post-multiplication) compose according to the same rule,
:\mathbf{h}\, P_\sigma P_\pi = \mathbf{h}\, P_{\pi\,\circ\,\sigma}.
To be clear, the above formulas use the prefix notation for permutation composition, that is, (\pi\,\circ\,\sigma)(k) = \pi\left(\sigma(k)\right).

Let Q_\pi be the permutation matrix corresponding to \pi in its row representation. The properties of this representation can be determined from those of the column representation, since
:Q_\pi = P_\pi^{\mathsf T} = P_{\pi^{-1}}.
In particular,
:Q_\pi \mathbf{e}_k^{\mathsf T} = P_{\pi^{-1}} \mathbf{e}_k^{\mathsf T} = \mathbf{e}_{(\pi^{-1})^{-1}(k)}^{\mathsf T} = \mathbf{e}_{\pi(k)}^{\mathsf T}.
From this it follows that
:Q_\pi Q_\sigma\, \mathbf{g} = Q_{\pi\,\circ\,\sigma}\, \mathbf{g}.
Similarly,
:\mathbf{h}\, Q_\pi Q_\sigma = \mathbf{h}\, Q_{\pi\,\circ\,\sigma}.

Permutation matrices can be characterized as the orthogonal matrices whose entries are all non-negative.
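The main identities above can be checked numerically. The following sketch (assuming NumPy; perm_matrix is the same hypothetical helper as before, not part of any library) verifies, for the running example and an arbitrary second permutation, that P_\pi \mathbf{g} places g_{\pi(i)} in position ''i'', that the inverse of P_\pi is its transpose and represents \pi^{-1}, and that P_\sigma P_\pi = P_{\pi\circ\sigma}.

```python
import numpy as np

def perm_matrix(pi):
    """Column representation: row i is the basis row vector e_{pi[i]} (0-indexed)."""
    m = len(pi)
    P = np.zeros((m, m), dtype=int)
    P[np.arange(m), pi] = 1
    return P

pi    = [0, 3, 1, 4, 2]                 # the running example
sigma = [2, 0, 1, 4, 3]                 # an arbitrary second permutation
P_pi, P_sigma = perm_matrix(pi), perm_matrix(sigma)

g = np.array([10, 20, 30, 40, 50])
# P_pi g puts g[pi(i)] in position i, i.e. the entries are permuted by pi^{-1}.
assert np.array_equal(P_pi @ g, g[pi])

# Orthogonality: the inverse is the transpose, which represents pi^{-1}.
assert np.array_equal(P_pi @ P_pi.T, np.eye(5, dtype=int))
assert np.array_equal(P_pi.T, perm_matrix([pi.index(k) for k in range(5)]))

# Composition in the column representation: P_sigma P_pi = P_{pi o sigma}.
pi_o_sigma = [pi[sigma[k]] for k in range(5)]
assert np.array_equal(P_sigma @ P_pi, perm_matrix(pi_o_sigma))
```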


Matrix group

If (1) denotes the identity permutation, then P_{(1)} is the identity matrix.

Let S_n denote the symmetric group, or group of permutations, on \lbrace 1, 2, \ldots, n \rbrace. Since there are n! permutations, there are n! permutation matrices. By the formulas above, the ''n'' × ''n'' permutation matrices form a group under matrix multiplication, with the identity matrix as the identity element; this group is isomorphic to S_n. By the composition rules above, the map \pi \mapsto Q_\pi sending a permutation to its row representation is a faithful representation (the column representation \pi \mapsto P_\pi reverses the order of composition, so it is an antihomomorphism).
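As a small concrete check (a sketch assuming NumPy and the standard library's itertools; not taken from the source), the following enumerates all 3! = 6 permutation matrices of order 3 and verifies that the set is closed under multiplication and contains each matrix's inverse.

```python
import itertools
import numpy as np

def perm_matrix(pi):
    """Column-representation matrix for a 0-indexed permutation pi."""
    m = len(pi)
    P = np.zeros((m, m), dtype=int)
    P[np.arange(m), pi] = 1
    return P

# All 3! = 6 permutation matrices of order 3.
mats = [perm_matrix(p) for p in itertools.permutations(range(3))]
assert len(mats) == 6

keys = {P.tobytes() for P in mats}
identity = np.eye(3, dtype=int)
for A in mats:
    assert np.array_equal(A @ A.T, identity)     # the transpose is the inverse
    assert A.T.copy().tobytes() in keys          # ... and is itself in the set
    for B in mats:
        assert (A @ B).tobytes() in keys         # closure under multiplication
print("the 6 matrices form a group under matrix multiplication")
```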


Doubly stochastic matrices

A permutation matrix is itself a doubly stochastic matrix, but it also plays a special role in the theory of these matrices. The Birkhoff–von Neumann theorem says that every doubly stochastic real matrix is a convex combination of permutation matrices of the same order, and that the permutation matrices are precisely the extreme points of the set of doubly stochastic matrices. That is, the Birkhoff polytope, the set of doubly stochastic matrices, is the convex hull of the set of permutation matrices (Brualdi 2006, p. 19).
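The easy direction of this statement can be illustrated numerically (a sketch assuming NumPy and itertools, with the same hypothetical perm_matrix helper): any convex combination of permutation matrices has non-negative entries and unit row and column sums, so it is doubly stochastic. The converse, decomposing a given doubly stochastic matrix into such a combination, is what the Birkhoff–von Neumann theorem guarantees and is not shown here.

```python
import itertools
import numpy as np

def perm_matrix(pi):
    m = len(pi)
    P = np.zeros((m, m))
    P[np.arange(m), pi] = 1
    return P

rng = np.random.default_rng(1)
mats = [perm_matrix(p) for p in itertools.permutations(range(3))]

# Random convex weights: non-negative, summing to 1.
w = rng.random(len(mats))
w /= w.sum()

D = sum(wi * Pi for wi, Pi in zip(w, mats))

# D is doubly stochastic: non-negative with unit row and column sums.
assert np.all(D >= 0)
assert np.allclose(D.sum(axis=0), 1) and np.allclose(D.sum(axis=1), 1)
print(D)
```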


Linear algebraic properties

The trace of a permutation matrix is the number of fixed points of the permutation. If the permutation has fixed points, so it can be written in cycle form as \pi = (a_1)(a_2)\cdots(a_k)\sigma where \sigma has no fixed points, then \mathbf{e}_{a_1}, \mathbf{e}_{a_2}, \ldots, \mathbf{e}_{a_k} are eigenvectors of the permutation matrix, each with eigenvalue 1.

To calculate the eigenvalues of a permutation matrix P_\sigma, write \sigma as a product of cycles, say, \sigma = C_1 C_2 \cdots C_t. Let the corresponding lengths of these cycles be l_1, l_2, \ldots, l_t, and let R_i (1 \le i \le t) be the set of complex solutions of x^{l_i} = 1. The union of all the R_i is the set of eigenvalues of the corresponding permutation matrix. The geometric multiplicity of each eigenvalue equals the number of R_i that contain it (Najnudel & Nikeghbali 2010, p. 4).

From group theory we know that any permutation may be written as a product of transpositions. Therefore, any permutation matrix factors as a product of row-interchanging elementary matrices, each having determinant −1. Thus, the determinant of a permutation matrix is the sign (signature) of the corresponding permutation.
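These facts can be checked for the running example (a sketch assuming NumPy; the helpers perm_matrix and cycle_lengths are illustrative, not from the source): the trace counts fixed points, the eigenvalue multiset is the union of the l-th roots of unity over the cycle lengths l, and the determinant equals the sign of the permutation.

```python
import numpy as np

def perm_matrix(pi):
    m = len(pi)
    P = np.zeros((m, m), dtype=int)
    P[np.arange(m), pi] = 1
    return P

def cycle_lengths(pi):
    """Lengths of the disjoint cycles of a 0-indexed permutation (fixed points included)."""
    seen, lengths = set(), []
    for start in range(len(pi)):
        if start in seen:
            continue
        length, j = 0, start
        while j not in seen:
            seen.add(j)
            j = pi[j]
            length += 1
        lengths.append(length)
    return lengths

pi = [0, 3, 1, 4, 2]                 # one fixed point and one 4-cycle
P = perm_matrix(pi)

# Trace = number of fixed points.
assert np.trace(P) == sum(1 for i, j in enumerate(pi) if i == j)

# Eigenvalues (with multiplicity) = union of the l-th roots of unity over cycle lengths l.
expected = np.concatenate([np.exp(2j * np.pi * np.arange(l) / l) for l in cycle_lengths(pi)])
computed = np.linalg.eigvals(P)
sort = lambda a: a[np.lexsort((a.imag.round(8), a.real.round(8)))]
assert np.allclose(sort(expected), sort(computed))

# Determinant = sign of the permutation = (-1)^(n - number of cycles).
assert round(np.linalg.det(P)) == (-1) ** (len(pi) - len(cycle_lengths(pi)))
```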


Examples


Permutation of rows and columns

When a matrix ''M'' is multiplied by a permutation matrix ''P'' on the left to make ''PM'', the product is the result of permuting the rows of ''M''; as a special case, if ''M'' is a column vector, then ''PM'' is the result of permuting the entries of ''M''. When instead ''M'' is multiplied by a permutation matrix on the right to make ''MP'', the product is the result of permuting the columns of ''M''; as a special case, if ''M'' is a row vector, then ''MP'' is the result of permuting the entries of ''M''. The column-vector case is worked out in the next subsection, and both products are illustrated in the sketch below.
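A minimal numerical illustration of both products, assuming NumPy (the helper perm_matrix and the test matrix are illustrative choices, not from the article):

```python
import numpy as np

def perm_matrix(pi):
    m = len(pi)
    P = np.zeros((m, m), dtype=int)
    P[np.arange(m), pi] = 1
    return P

pi = [0, 3, 1, 4, 2]                    # the running example, 0-indexed
P = perm_matrix(pi)
M = np.arange(25).reshape(5, 5)         # an arbitrary 5 x 5 matrix

# Left multiplication permutes rows: row i of PM is row pi(i) of M.
assert np.array_equal(P @ M, M[pi, :])

# Right multiplication permutes columns: column k of M becomes column pi(k) of MP.
assert np.array_equal((M @ P)[:, pi], M)
```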


Permutation of rows

The permutation matrix P_\pi corresponding to the permutation
:\pi=\begin{pmatrix} 1 & 2 & 3 & 4 & 5 \\ 1 & 4 & 2 & 5 & 3 \end{pmatrix}
is
:P_\pi = \begin{pmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \mathbf{e}_{\pi(3)} \\ \mathbf{e}_{\pi(4)} \\ \mathbf{e}_{\pi(5)} \end{pmatrix} = \begin{pmatrix} \mathbf{e}_{1} \\ \mathbf{e}_{4} \\ \mathbf{e}_{2} \\ \mathbf{e}_{5} \\ \mathbf{e}_{3} \end{pmatrix} = \begin{pmatrix} 1 & 0 & 0 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 1 & 0 & 0 \end{pmatrix}.
Given a vector \mathbf{g},
:P_\pi \mathbf{g} = \begin{pmatrix} \mathbf{e}_{\pi(1)} \\ \mathbf{e}_{\pi(2)} \\ \mathbf{e}_{\pi(3)} \\ \mathbf{e}_{\pi(4)} \\ \mathbf{e}_{\pi(5)} \end{pmatrix} \begin{pmatrix} g_1 \\ g_2 \\ g_3 \\ g_4 \\ g_5 \end{pmatrix} = \begin{pmatrix} g_1 \\ g_4 \\ g_2 \\ g_5 \\ g_3 \end{pmatrix}.
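The same computation in code (a short sketch assuming NumPy; g is given the numeric values 1 through 5 so the permutation of the entries is visible):

```python
import numpy as np

P = np.array([[1, 0, 0, 0, 0],
              [0, 0, 0, 1, 0],
              [0, 1, 0, 0, 0],
              [0, 0, 0, 0, 1],
              [0, 0, 1, 0, 0]])
g = np.array([1, 2, 3, 4, 5])   # stands in for (g1, g2, g3, g4, g5)

print(P @ g)                    # [1 4 2 5 3], i.e. (g1, g4, g2, g5, g3)
```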


Explanation

A permutation matrix will always be of the form
:\begin{pmatrix} \mathbf{e}_{a_1} \\ \mathbf{e}_{a_2} \\ \vdots \\ \mathbf{e}_{a_j} \end{pmatrix},
where \mathbf{e}_{a_i} is the standard basis vector (as a row) of \R^j with a 1 in position a_i, and where
:\begin{pmatrix} 1 & 2 & \ldots & j \\ a_1 & a_2 & \ldots & a_j \end{pmatrix}
is the permutation form of the permutation matrix.

Now, in performing matrix multiplication, one essentially forms the dot product of each row of the first matrix with each column of the second. In this instance, we form the dot product of each row of the permutation matrix with the vector of elements we want to permute. That is, for \mathbf{v} = (g_1, \ldots, g_j)^{\mathsf T},
:\mathbf{e}_{a_i} \cdot \mathbf{v} = g_{a_i}.
So the product of the permutation matrix with \mathbf{v} is the vector (g_{a_1}, g_{a_2}, \ldots, g_{a_j})^{\mathsf T}, and this is a permutation of \mathbf{v} since the permutation form is
:\begin{pmatrix} 1 & 2 & \ldots & j \\ a_1 & a_2 & \ldots & a_j \end{pmatrix}.
So permutation matrices do indeed permute the order of the elements in vectors multiplied with them.


Restricted forms

* Costas array, a permutation matrix in which the displacement vectors between the entries are all distinct
* ''n''-queens puzzle, a permutation matrix in which there is at most one entry in each diagonal and antidiagonal


See also

* Alternating sign matrix
* Exchange matrix
* Generalized permutation matrix
* Rook polynomial
* Permanent


References

* Brualdi, Richard A. (2006). ''Combinatorial Matrix Classes''. Encyclopedia of Mathematics and Its Applications. Cambridge University Press.
* Najnudel, Joseph; Nikeghbali, Ashkan (2010). "The distribution of eigenvalues of randomized permutation matrices".