Permutation Matrix
In mathematics, particularly in matrix theory, a permutation matrix is a square binary matrix that has exactly one entry of 1 in each row and each column, with all other entries 0. An ''n'' × ''n'' permutation matrix can represent a permutation of ''n'' elements. Pre-multiplying an ''n''-row matrix ''M'' by a permutation matrix ''P'', forming ''PM'', permutes the rows of ''M'', while post-multiplying an ''n''-column matrix ''M'', forming ''MP'', permutes the columns of ''M''. Every permutation matrix ''P'' is orthogonal, with its inverse equal to its transpose: P^{-1} = P^{\mathsf{T}}. Indeed, permutation matrices can be characterized as the orthogonal matrices whose entries are all non-negative.

The two permutation/matrix correspondences
There are two natural one-to-one correspondences between permutations and permutation matrices, one of which works along the rows of the matrix, the other along its columns. Here is an example, starting with a permutation in two-line form at the upper left:
:\pi\colon\begin{pmatrix}1&2&3&4\\3&2&4&1\end{pmatrix} ...
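A minimal Python sketch of the row/column conventions described above, assuming numpy; the variable names and the 0-indexed encoding of the two-line permutation are my own choices, and the P[i, pi[i]] convention is only one of the two correspondences the article mentions:

```python
import numpy as np

# The permutation above, 0-indexed: position i maps to pi[i]
# (1->3, 2->2, 3->4, 4->1 in 1-indexed terms).
pi = [2, 1, 3, 0]
n = len(pi)

# One common convention: P[i, j] = 1 exactly when j = pi[i].
P = np.zeros((n, n), dtype=int)
for i, j in enumerate(pi):
    P[i, j] = 1

M = np.arange(16).reshape(4, 4)

# Pre-multiplying permutes the rows of M: row i of P @ M is row pi[i] of M.
assert np.array_equal(P @ M, M[pi, :])

# Post-multiplying permutes the columns instead: column j of M @ P is
# column k of M, where pi[k] = j.

# Orthogonality: the inverse of P is its transpose.
assert np.array_equal(P @ P.T, np.eye(n, dtype=int))
```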
Mathematics
Mathematics is a field of study that discovers and organizes methods, theories, and theorems that are developed and proved for the needs of empirical sciences and mathematics itself. There are many areas of mathematics, which include number theory (the study of numbers), algebra (the study of formulas and related structures), geometry (the study of shapes and spaces that contain them), analysis (the study of continuous changes), and set theory (presently used as a foundation for all mathematics). Mathematics involves the description and manipulation of abstract objects that consist of either abstractions from nature or, in modern mathematics, purely abstract entities that are stipulated to have certain properties, called axioms. Mathematics uses pure reason to prove properties of objects, a ''proof'' consisting of a succession of applications of ...
Permutation Group
In mathematics, a permutation group is a group ''G'' whose elements are permutations of a given set ''M'' and whose group operation is the composition of permutations in ''G'' (which are thought of as bijective functions from the set ''M'' to itself). The group of ''all'' permutations of a set ''M'' is the symmetric group of ''M'', often written as Sym(''M''). The term ''permutation group'' thus means a subgroup of the symmetric group. If ''M'' = {1, 2, ..., ''n''}, then Sym(''M'') is usually denoted by S''n'', and may be called the ''symmetric group on n letters''. By Cayley's theorem, every group is isomorphic to some permutation group. The way in which the elements of a permutation group permute the elements of the set is called its group action. Group actions have applications in the study of symmetries, combinatorics and many other branches of mathematics, physics and chemistry.

Basic properties and terminology
A ''permutation group'' is a subgroup of a symmetric group; that is, its elements ...
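To illustrate composition as the group operation, here is a small hedged Python sketch; the helper name compose and the "apply q first, then p" convention are my own, not anything the entry prescribes. It checks that the cyclic rotations form a subgroup of S3:

```python
from itertools import permutations

def compose(p, q):
    """Composition of permutations in one-line form: apply q first, then p."""
    return tuple(p[q[i]] for i in range(len(p)))

# S3: all permutations of {0, 1, 2}.
S3 = set(permutations(range(3)))

# The three cyclic rotations form a subgroup of S3, hence a permutation group.
rotations = {(0, 1, 2), (1, 2, 0), (2, 0, 1)}

# Closure under composition (identity and inverses can be checked similarly).
assert all(compose(p, q) in rotations for p in rotations for q in rotations)
assert rotations <= S3
```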
Eigenvalue and Eigenvector
In linear algebra, an eigenvector or characteristic vector is a vector that has its direction unchanged (or reversed) by a given linear transformation. More precisely, an eigenvector \mathbf v of a linear transformation T is scaled by a constant factor \lambda when the linear transformation is applied to it: T\mathbf v = \lambda \mathbf v. The corresponding eigenvalue, characteristic value, or characteristic root is the multiplying factor \lambda (possibly a negative or complex number). Geometrically, vectors are multi-dimensional quantities with magnitude and direction, often pictured as arrows. A linear transformation rotates, stretches, or shears the vectors upon which it acts. A linear transformation's eigenvectors are those vectors that are only stretched or shrunk, with neither rotation nor shear. The corresponding eigenvalue is the factor by which an eigenvector is stretched or shrunk. If the eigenvalue is negative, the eigenvector's direction is reversed. ...
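A quick numerical check of T\mathbf v = \lambda \mathbf v in Python, assuming numpy; the specific 2 × 2 matrix is my own example, chosen to show both a positive eigenvalue (stretch) and a negative one (direction reversal):

```python
import numpy as np

# Stretches the x-direction by 2 and reverses the y-direction.
A = np.array([[2.0, 0.0],
              [0.0, -1.0]])

eigenvalues, eigenvectors = np.linalg.eig(A)

# Each column of `eigenvectors` satisfies A v = lambda v.
for lam, v in zip(eigenvalues, eigenvectors.T):
    assert np.allclose(A @ v, lam * v)

print(eigenvalues)  # [ 2. -1.]: stretch by 2 along x, reverse along y
```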
Standard Basis
In mathematics, the standard basis (also called natural basis or canonical basis) of a coordinate vector space (such as \mathbb{R}^n or \mathbb{C}^n) is the set of vectors, each of whose components are all zero, except one that equals 1. For example, in the case of the Euclidean plane \mathbb{R}^2 formed by the pairs of real numbers, the standard basis is formed by the vectors \mathbf{e}_x = (1,0),\quad \mathbf{e}_y = (0,1). Similarly, the standard basis for the three-dimensional space \mathbb{R}^3 is formed by the vectors \mathbf{e}_x = (1,0,0),\quad \mathbf{e}_y = (0,1,0),\quad \mathbf{e}_z = (0,0,1). Here the vector e''x'' points in the ''x'' direction, the vector e''y'' points in the ''y'' direction, and the vector e''z'' points in the ''z'' direction. There are several common notations for standard-basis vectors, including (e''x'', e''y'', e''z''), (e1, e2, e3), (i, j, k), and (x, y, z). These vectors are sometimes written with a hat to emphasize their status as unit vectors (standard unit vectors). These vectors are a basis in the sense that any other ...
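A short Python illustration, assuming numpy (the vector v is an arbitrary example of mine): the standard basis of \mathbb{R}^3 is simply the columns of the identity matrix, and any vector is the combination of those columns weighted by its own components.

```python
import numpy as np

# The standard basis of R^3 as the columns of the identity matrix.
e = np.eye(3)          # e[:, 0], e[:, 1], e[:, 2] are e_x, e_y, e_z

v = np.array([4.0, -2.0, 7.0])

# Each component of v is the coefficient of the corresponding basis vector.
reconstructed = v[0] * e[:, 0] + v[1] * e[:, 1] + v[2] * e[:, 2]
assert np.array_equal(reconstructed, v)
```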
Trace (linear Algebra)
In linear algebra, the trace of a square matrix \mathbf{A}, denoted \operatorname{tr}(\mathbf{A}), is the sum of the elements on its main diagonal, a_{11} + a_{22} + \dots + a_{nn}. It is only defined for a square matrix (''n'' × ''n''). The trace of a matrix is the sum of its eigenvalues (counted with multiplicities). Also, \operatorname{tr}(\mathbf{AB}) = \operatorname{tr}(\mathbf{BA}) for any matrices \mathbf{A} and \mathbf{B} of the same size. Thus, similar matrices have the same trace. As a consequence, one can define the trace of a linear operator mapping a finite-dimensional vector space into itself, since all matrices describing such an operator with respect to a basis are similar. The trace is related to the derivative of the determinant (see Jacobi's formula).

Definition
The trace of an ''n'' × ''n'' square matrix \mathbf{A} is defined as
:\operatorname{tr}(\mathbf{A}) = \sum_{i=1}^n a_{ii} = a_{11} + a_{22} + \dots + a_{nn}
where a_{ii} denotes the entry on the ''i''-th row and ''i''-th column of \mathbf{A}. The entries of \mathbf{A} can be real numbers, complex numbers, or more generally elements of a field ''F''. The trace is not defined for non-square matrices.

Example
Let \mathbf{A} be a matrix, with ...
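A numerical check of the three properties above in Python, assuming numpy; the random 4 × 4 matrices are arbitrary test data of my own:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((4, 4))
B = rng.standard_normal((4, 4))

# Trace = sum of the main diagonal.
assert np.isclose(np.trace(A), sum(A[i, i] for i in range(4)))

# tr(AB) = tr(BA).
assert np.isclose(np.trace(A @ B), np.trace(B @ A))

# Trace = sum of eigenvalues, counted with multiplicity
# (imaginary parts of complex-conjugate eigenvalue pairs cancel).
assert np.isclose(np.trace(A), np.linalg.eigvals(A).sum().real)
```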
Fixed Point (mathematics)
In mathematics, a fixed point (sometimes shortened to fixpoint), also known as an invariant point, is a value that does not change under a given transformation. Specifically, for functions, a fixed point is an element that is mapped to itself by the function. Any set of fixed points of a transformation is also an invariant set.

Fixed point of a function
Formally, ''c'' is a fixed point of a function ''f'' if ''c'' belongs to both the domain and the codomain of ''f'', and ''f''(''c'') = ''c''. In particular, ''f'' cannot have any fixed point if its domain is disjoint from its codomain. If ''f'' is defined on the real numbers, it corresponds, in graphical terms, to a curve in the Euclidean plane, and each fixed point corresponds to an intersection of the curve with the line ''y'' = ''x''. For example, if ''f'' is defined on the real numbers by f(x) = x^2 - 3x + 4, then 2 is a fixed point of ''f'', because ''f''(2) = 2. Not all functions have fixed points: for example, ...
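The worked example above, checked in a few lines of Python:

```python
def f(x):
    # The example above: f(x) = x^2 - 3x + 4.
    return x**2 - 3*x + 4

# 2 is a fixed point, since f(2) = 4 - 6 + 4 = 2.
assert f(2) == 2

# Fixed points of f are roots of f(x) - x = x^2 - 4x + 4 = (x - 2)^2,
# so x = 2 is a double root and the only fixed point of this particular f.
```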
Convex Hull
In geometry, the convex hull, convex envelope or convex closure of a shape is the smallest convex set that contains it. The convex hull may be defined either as the intersection of all convex sets containing a given subset of a Euclidean space, or equivalently as the set of all convex combinations of points in the subset. For a bounded subset of the plane, the convex hull may be visualized as the shape enclosed by a rubber band stretched around the subset. Convex hulls of open sets are open, and convex hulls of compact sets are compact. Every compact convex set is the convex hull of its extreme points. The convex hull operator is an example of a closure operator, and every antimatroid can be represented by applying this closure operator to finite sets of points. The algorithmic problems of finding the convex hull of a finite set of points in the plane or other low-dimensional Euclidean spaces, and its dual problem of intersecting half-space ...
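As a sketch of the planar algorithmic problem just mentioned, here is a compact Python implementation of Andrew's monotone chain algorithm; the choice of algorithm is mine, since the passage above does not name one:

```python
def convex_hull(points):
    """Convex hull of 2D points, counter-clockwise, via Andrew's monotone chain."""
    pts = sorted(set(points))
    if len(pts) <= 2:
        return pts

    def cross(o, a, b):
        # z-component of (a - o) x (b - o); > 0 means a left turn.
        return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

    lower, upper = [], []
    for p in pts:                      # build lower hull left to right
        while len(lower) >= 2 and cross(lower[-2], lower[-1], p) <= 0:
            lower.pop()
        lower.append(p)
    for p in reversed(pts):            # build upper hull right to left
        while len(upper) >= 2 and cross(upper[-2], upper[-1], p) <= 0:
            upper.pop()
        upper.append(p)

    # Concatenate the halves, dropping the duplicated endpoints.
    return lower[:-1] + upper[:-1]

# The interior point (1, 1) is not a vertex of the hull.
print(convex_hull([(0, 0), (2, 0), (2, 2), (0, 2), (1, 1)]))
# [(0, 0), (2, 0), (2, 2), (0, 2)]
```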
Extreme Point
In mathematics, an extreme point of a convex set S in a real or complex vector space is a point in S that does not lie in any open line segment joining two points of S. The extreme points of a line segment are called its ''endpoints''. In linear programming problems, an extreme point is also called a ''vertex'' or ''corner point'' of S.

Definition
Throughout, it is assumed that X is a real or complex vector space. For any p, x, y \in X, say that p ''lies between'' x and y if x \neq y and there exists a 0 < t < 1 such that p = tx + (1 - t)y. If K is a subset of X and p \in K, then p is called an ''extreme point'' of K if it does not lie between any two points of K. That is, if there do not exist x, y \in K and 0 < t < 1 such that x \neq y and p = tx + (1 - t)y. The s ...
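To make the definition concrete for a polytope given as the convex hull of finitely many points, here is a hedged Python sketch; the approach (testing whether p is a convex combination of the remaining points via a feasibility linear program) and the helper name is_extreme are my own, and scipy is assumed available:

```python
import numpy as np
from scipy.optimize import linprog

def is_extreme(p, others):
    """True if p is NOT a convex combination of the points in `others`."""
    A = np.array(others, dtype=float).T          # columns are candidate points
    m = A.shape[1]
    # Feasibility LP: find weights a >= 0 with sum(a) = 1 and A a = p.
    res = linprog(c=np.zeros(m),
                  A_eq=np.vstack([A, np.ones(m)]),
                  b_eq=np.append(np.asarray(p, dtype=float), 1.0),
                  bounds=[(0, None)] * m)
    return not res.success                       # infeasible => p is extreme

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
assert is_extreme((0, 0), square[1:])            # a corner is extreme
assert not is_extreme((0.5, 0.5), square)        # the center is not
```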
Convex Combination
In convex geometry and vector algebra, a convex combination is a linear combination of points (which can be vectors, scalars, or more generally points in an affine space) where all coefficients are non-negative and sum to 1. In other words, the operation is equivalent to a standard weighted average, but whose weights are expressed as a percent of the total weight, instead of as a fraction of the ''count'' of the weights as in a standard weighted average.

Formal definition
More formally, given a finite number of points x_1, x_2, \dots, x_n in a real vector space, a convex combination of these points is a point of the form
:\alpha_1 x_1 + \alpha_2 x_2 + \cdots + \alpha_n x_n
where the real numbers \alpha_i satisfy \alpha_i \ge 0 and \alpha_1 + \alpha_2 + \cdots + \alpha_n = 1. As a particular example, every convex combination of two points lies on the line segment between the points. A set is convex if it ...
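A small Python sketch of the formal definition, assuming numpy; the helper name and the two example points are mine:

```python
import numpy as np

def convex_combination(points, weights):
    """Weighted sum of points, with non-negative weights summing to 1."""
    weights = np.asarray(weights, dtype=float)
    assert np.all(weights >= 0) and np.isclose(weights.sum(), 1.0)
    return weights @ np.asarray(points, dtype=float)

# Every convex combination of two points lies on the segment between them.
a, b = np.array([0.0, 0.0]), np.array([4.0, 2.0])
for t in (0.0, 0.25, 0.5, 1.0):
    print(convex_combination([a, b], [1 - t, t]))   # moves from a to b
```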
Birkhoff–von Neumann Theorem
In mathematics, especially in probability and combinatorics, a doubly stochastic matrix (also called a bistochastic matrix) is a square matrix X = (x_{ij}) of nonnegative real numbers, each of whose rows and columns sums to 1, i.e.,
:\sum_i x_{ij} = \sum_j x_{ij} = 1.
Thus, a doubly stochastic matrix is both left stochastic and right stochastic. Indeed, any matrix that is both left and right stochastic must be square: if every row sums to 1 then the sum of all entries in the matrix must be equal to the number of rows, and since the same holds for columns, the number of rows and columns must be equal.

Birkhoff polytope
The class of n \times n doubly stochastic matrices is a convex polytope known as the Birkhoff polytope B_n. Using the matrix entries as Cartesian coordinates, it lies in an (n-1)^2-dimensional affine subspace of n^2-dimensional Euclidean space defined by 2n - 1 independent linear constraints specifying that the row and column sums all equal 1. (There are 2n - 1 constraints rather than 2n ...
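Since this entry's title is the Birkhoff–von Neumann theorem, a hedged Python sketch of the decomposition it guarantees may help: any doubly stochastic matrix can be peeled into a convex combination of permutation matrices. The greedy matching strategy and helper names are my own, and scipy is assumed available:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def birkhoff_decompose(X, tol=1e-9):
    """Greedy Birkhoff-von Neumann decomposition of a doubly stochastic X."""
    X = np.array(X, dtype=float)
    n = X.shape[0]
    terms = []
    while X.max() > tol:
        # Find a permutation supported on positive entries: penalize zeros
        # heavily so the assignment avoids them (one exists by the theorem).
        cost = np.where(X > tol, 1.0 - X, 1e9)
        rows, cols = linear_sum_assignment(cost)
        theta = X[rows, cols].min()      # largest weight we can peel off
        P = np.zeros((n, n))
        P[rows, cols] = 1.0
        terms.append((theta, P))
        X = X - theta * P                # residual is doubly stochastic
                                         # up to rescaling
    return terms

X = np.array([[0.5, 0.5, 0.0],
              [0.5, 0.0, 0.5],
              [0.0, 0.5, 0.5]])
terms = birkhoff_decompose(X)
assert np.isclose(sum(t for t, _ in terms), 1.0)
assert np.allclose(sum(t * P for t, P in terms), X)
```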
Birkhoff Polytope
The Birkhoff polytope ''B''''n'' (also called the assignment polytope, the polytope of doubly stochastic matrices, or the perfect matching polytope of the complete bipartite graph K_{n,n}) is the convex polytope in R''N'' (where ''N'' = ''n''^2) whose points are the doubly stochastic matrices, i.e., the ''n'' × ''n'' matrices whose entries are non-negative real numbers and whose rows and columns each add up to 1. It is named after Garrett Birkhoff.

Properties
Vertices
The Birkhoff polytope has ''n''! vertices, one for each permutation on ''n'' items. This follows from the Birkhoff–von Neumann theorem, which states that the extreme points of the Birkhoff polytope are the permutation matrices, and therefore that any doubly stochastic matrix may be represented as a convex combination of permutation matrices; this was stated in a 1946 paper by Garrett Birkhoff, but equivalent results in the languages of projective configurations and of regular bipartite graph matchings, respectively, were sho ...
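A quick Python check of the vertex count for the small case n = 3 (an illustration of my own, assuming numpy): enumerate one permutation matrix per permutation and verify each is doubly stochastic.

```python
from itertools import permutations
import numpy as np

n = 3

# The n! vertices of the Birkhoff polytope B_n: one permutation matrix
# per permutation of n items.
vertices = []
for perm in permutations(range(n)):
    P = np.zeros((n, n))
    P[range(n), perm] = 1.0
    vertices.append(P)

assert len(vertices) == 6   # 3! = 6

# Every vertex is doubly stochastic: rows and columns each sum to 1.
for P in vertices:
    assert np.allclose(P.sum(axis=0), 1) and np.allclose(P.sum(axis=1), 1)
```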