In linear algebra, the rank of a matrix is the dimension of the vector space generated (or spanned) by its columns.
[p. 48, §1.16] This corresponds to the maximal number of linearly independent columns of . This, in turn, is identical to the dimension of the vector space spanned by its rows.
Rank is thus a measure of the "nondegenerateness" of the system of linear equations and linear transformation encoded by . There are multiple equivalent definitions of rank. A matrix's rank is one of its most fundamental characteristics.
The rank is commonly denoted by or ;
sometimes the parentheses are not written, as in .
[Alternative notation includes from and .]
Main definitions
In this section, we give some definitions of the rank of a matrix. Many definitions are possible; see
Alternative definitions for several of these.
The column rank of is the dimension of the column space of , while the row rank of is the dimension of the row space of .
A fundamental result in linear algebra is that the column rank and the row rank are always equal. (Two proofs of this result are given in , below.) This number (i.e., the number of linearly independent rows or columns) is simply called the rank of .
A matrix is said to have full rank if its rank equals the largest possible for a matrix of the same dimensions, which is the lesser of the number of rows and columns. A matrix is said to be rank-deficient if it does not have full rank. The rank deficiency of a matrix is the difference between the lesser of the number of rows and columns, and the rank.
The rank of a linear map or operator is defined as the dimension of its image:
where is the dimension of a vector space, and is the image of a map.
Examples
The matrix
has rank 2: the first two columns are linearly independent, so the rank is at least 2, but since the third column is a linear combination of the first two (the first column minus the second), the three columns are linearly dependent, so the rank must be less than 3.
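This reasoning can be checked numerically. Since the entries of the example matrix are not reproduced above, the sketch below (assuming NumPy) uses an illustrative stand-in whose third column is the first minus the second:

```python
import numpy as np

# Illustrative stand-in matrix: column 3 = column 1 - column 2,
# so the three columns are linearly dependent.
A = np.array([[1, 0, 1],
              [0, 1, -1],
              [1, 1, 0]])

print(np.linalg.matrix_rank(A))  # 2: only two linearly independent columns
```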
The matrix
has rank 1: there are nonzero columns, so the rank is positive, but any pair of columns is linearly dependent. Similarly, the transpose of has rank 1. Indeed, since the column vectors of are the row vectors of the transpose of , the statement that the column rank of a matrix equals its row rank is equivalent to the statement that the rank of a matrix is equal to the rank of its transpose, i.e., .
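The invariance of rank under transposition is easy to verify numerically; the matrix below is a rank-1 stand-in (every column a multiple of the first), since the article's own example is not reproduced here:

```python
import numpy as np

# Rank-1 stand-in: each column is a multiple of the first.
A = np.array([[1, 2, 4],
              [2, 4, 8]])

print(np.linalg.matrix_rank(A))    # 1
print(np.linalg.matrix_rank(A.T))  # 1: rank is preserved under transpose
```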
Computing the rank of a matrix
Rank from row echelon forms
A common approach to finding the rank of a matrix is to reduce it to a simpler form, generally row echelon form, by elementary row operations. Row operations do not change the row space (hence do not change the row rank), and, being invertible, map the column space to an isomorphic space (hence do not change the column rank). Once in row echelon form, the rank is clearly the same for both row rank and column rank, and equals the number of pivots (or basic columns) and also the number of non-zero rows.
For example, the matrix given by
can be put in reduced row-echelon form by using the following elementary row operations:
The final matrix (in row echelon form) has two non-zero rows and thus the rank of matrix is 2.
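The procedure can be sketched in code: perform Gaussian elimination and count the non-zero rows. The matrix below is an illustrative stand-in for the elided example (chosen to have rank 2), using exact Fraction arithmetic to avoid rounding:

```python
from fractions import Fraction

def rank_by_row_reduction(rows):
    """Rank = number of non-zero rows after Gaussian elimination."""
    m = [[Fraction(x) for x in row] for row in rows]
    rank, n_rows, n_cols = 0, len(m), len(m[0])
    for col in range(n_cols):
        # Find a pivot row for this column among the unreduced rows.
        pivot = next((r for r in range(rank, n_rows) if m[r][col] != 0), None)
        if pivot is None:
            continue
        m[rank], m[pivot] = m[pivot], m[rank]
        # Eliminate the column entries below the pivot.
        for r in range(rank + 1, n_rows):
            factor = m[r][col] / m[rank][col]
            m[r] = [a - factor * b for a, b in zip(m[r], m[rank])]
        rank += 1
    return rank

# Stand-in matrix: third row = first row + second row, so the rank is 2.
print(rank_by_row_reduction([[1, 2, 1],
                             [-2, -3, 1],
                             [-1, -1, 2]]))  # 2
```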
Computation
When applied to floating point computations on computers, basic Gaussian elimination (LU decomposition) can be unreliable, and a rank-revealing decomposition should be used instead. An effective alternative is the singular value decomposition (SVD), but there are other less expensive choices, such as QR decomposition with pivoting (so-called rank-revealing QR factorization), which are still more numerically robust than Gaussian elimination. Numerical determination of rank requires a criterion for deciding when a value, such as a singular value from the SVD, should be treated as zero, a practical choice which depends on both the matrix and the application.
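A minimal sketch of such a criterion, assuming NumPy: compute the singular values and count those above a tolerance scaled by the largest singular value and the matrix size (a common heuristic; the matrix below is made up for illustration):

```python
import numpy as np

def numerical_rank(A, rtol=None):
    """Count singular values above a size- and scale-dependent tolerance."""
    s = np.linalg.svd(A, compute_uv=False)
    if rtol is None:
        rtol = max(A.shape) * np.finfo(s.dtype).eps  # default heuristic
    return int(np.sum(s > rtol * s[0]))

# A nearly rank-deficient matrix: second row is the first plus tiny noise.
A = np.array([[1.0, 2.0, 3.0],
              [1.0, 2.0, 3.0 + 1e-12]])
print(numerical_rank(A))             # 2 with the default tolerance
print(numerical_rank(A, rtol=1e-8))  # 1: the tiny singular value is dropped
```

The choice of `rtol` decides whether the perturbed matrix counts as rank 1 or rank 2, which is exactly the application-dependent judgment described above.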
Proofs that column rank = row rank
Proof using row reduction
The fact that the column and row ranks of any matrix are equal is fundamental in linear algebra. Many proofs have been given. One of the most elementary has been sketched in . Here is a variant of this proof:
It is straightforward to show that neither the row rank nor the column rank are changed by an elementary row operation. As Gaussian elimination proceeds by elementary row operations, the reduced row echelon form of a matrix has the same row rank and the same column rank as the original matrix. Further elementary column operations allow putting the matrix in the form of an identity matrix possibly bordered by rows and columns of zeros. Again, this changes neither the row rank nor the column rank. It is immediate that both the row and column ranks of this resulting matrix equal the number of its nonzero entries.
We present two other proofs of this result. The first uses only basic properties of linear combinations of vectors, and is valid over any field. The proof is based upon Wardlaw (2005). The second uses orthogonality and is valid for matrices over the real numbers; it is based upon Mackiw (1995). Both proofs can be found in the book by Banerjee and Roy (2014).
Proof using linear combinations
Let be an matrix. Let the column rank of be , and let be any basis for the column space of . Place these as the columns of an matrix . Every column of can be expressed as a linear combination of the columns in . This means that there is an matrix such that . is the matrix whose th column is formed from the coefficients giving the th column of as a linear combination of the columns of . In other words, is the matrix which contains the multiples for the bases of the column space of (which is ), which are then used to form as a whole. Now, each row of is given by a linear combination of the rows of . Therefore, the rows of form a spanning set of the row space of and, by the Steinitz exchange lemma, the row rank of cannot exceed . This proves that the row rank of is less than or equal to the column rank of . This result can be applied to any matrix, so apply the result to the transpose of . Since the row rank of the transpose of is the column rank of and the column rank of the transpose of is the row rank of , this establishes the reverse inequality and we obtain the equality of the row rank and the column rank of . (Also see rank factorization.)
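The construction in this proof can be illustrated with a small made-up example, assuming NumPy: place a basis of the column space in a matrix C, solve for the coefficient matrix with least squares, and check that the product reproduces the original matrix:

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])  # rank 2: third column = first + second

C = A[:, :2]                               # a basis of the column space of A
R, *_ = np.linalg.lstsq(C, A, rcond=None)  # coefficients so that A = C @ R

print(np.allclose(A, C @ R))  # True: the factorization is exact here
print(R.shape)                # (2, 3): rank-many rows span the row space
```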
Proof using orthogonality
Let be an matrix with entries in the real numbers whose row rank is . Therefore, the dimension of the row space of is . Let be a basis of the row space of . We claim that the vectors are linearly independent. To see why, consider a linear homogeneous relation involving these vectors with scalar coefficients :
where . We make two observations: (a) is a linear combination of vectors in the row space of , which implies that belongs to the row space of , and (b) since , the vector is
orthogonal to every row vector of and, hence, is orthogonal to every vector in the row space of . The facts (a) and (b) together imply that is orthogonal to itself, which proves that or, by the definition of ,
But recall that the were chosen as a basis of the row space of and so are linearly independent. This implies that . It follows that are linearly independent.
Now, each is obviously a vector in the column space of . So, is a set of linearly independent vectors in the column space of and, hence, the dimension of the column space of (i.e., the column rank of ) must be at least as big as . This proves that row rank of is no larger than the column rank of . Now apply this result to the transpose of to get the reverse inequality and conclude as in the previous proof.
Alternative definitions
In all the definitions in this section, the matrix is taken to be an matrix over an arbitrary field.
Dimension of image
Given the matrix , there is an associated linear mapping defined by
The rank of is the dimension of the image of . This definition has the advantage that it can be applied to any linear map without need for a specific matrix.
Rank in terms of nullity
Given the same linear mapping as above, the rank is minus the dimension of the kernel of . The rank–nullity theorem states that this definition is equivalent to the preceding one.
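Assuming NumPy, the rank–nullity relation can be spot-checked on a made-up matrix by extracting a basis of the kernel from the SVD and comparing dimensions:

```python
import numpy as np

A = np.array([[1., 2., 3.],
              [4., 5., 6.]])  # a 2x3 matrix of rank 2

rank = np.linalg.matrix_rank(A)          # dimension of the image
_, s, Vt = np.linalg.svd(A)
null_basis = Vt[rank:]                   # rows of V^T spanning the kernel
nullity = null_basis.shape[0]

print(np.allclose(A @ null_basis.T, 0))  # True: these vectors map to zero
print(rank + nullity == A.shape[1])      # True: rank + nullity = #columns
```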
Column rank – dimension of column space
The rank of is the maximal number of linearly independent columns of ; this is the dimension of the column space of (the column space being the subspace of generated by the columns of , which is in fact just the image of the linear map associated to ).
Row rank – dimension of row space
The rank of is the maximal number of linearly independent rows of ; this is the dimension of the row space of .
Decomposition rank
The rank of is the smallest integer such that can be factored as
, where is an matrix and is a matrix. In fact, for all integers , the following are equivalent:
# the column rank of is less than or equal to ,
# there exist columns of size such that every column of is a linear combination of ,
# there exist an matrix and a matrix such that (when is the rank, this is a rank factorization of ),
# there exist rows of size such that every row of is a linear combination of ,
# the row rank of is less than or equal to .
Indeed, the following equivalences are obvious:
.
For example, to prove (3) from (2), take to be the matrix whose columns are
from (2).
To prove (2) from (3), take
to be the columns of .
It follows from the equivalence
that the row rank is equal to the column rank.
As in the case of the "dimension of image" characterization, this can be generalized to a definition of the rank of any linear map: the rank of a linear map is the minimal dimension of an intermediate space such that can be written as the composition of a map and a map . Unfortunately, this definition does not suggest an efficient manner to compute the rank (for which it is better to use one of the alternative definitions). See
rank factorization for details.
Rank in terms of singular values
The rank of equals the number of non-zero singular values, which is the same as the number of non-zero diagonal elements in Σ in the singular value decomposition .
Determinantal rank – size of largest non-vanishing minor
The rank of is the largest order of any non-zero
minor in . (The order of a minor is the side-length of the square sub-matrix of which it is the determinant.) Like the decomposition rank characterization, this does not give an efficient way of computing the rank, but it is useful theoretically: a single non-zero minor witnesses a lower bound (namely its order) for the rank of the matrix, which can be useful (for example) to prove that certain operations do not lower the rank of a matrix.
A non-vanishing -minor ( submatrix with non-zero determinant) shows that the rows and columns of that submatrix are linearly independent, and thus those rows and columns of the full matrix are linearly independent (in the full matrix), so the row and column rank are at least as large as the determinantal rank; however, the converse is less straightforward. The equivalence of determinantal rank and column rank is a strengthening of the statement that if the span of vectors has dimension , then of those vectors span the space (equivalently, that one can choose a spanning set that is a ''subset'' of the vectors): the equivalence implies that a subset of the rows and a subset of the columns simultaneously define an invertible submatrix (equivalently, if the span of vectors has dimension , then of these vectors span the space ''and'' there is a set of coordinates on which they are linearly independent).
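Though far too slow for practical use, determinantal rank can be computed directly by checking all square submatrices, which makes the definition concrete. A small pure-Python sketch (the matrix is made up for illustration):

```python
from itertools import combinations

def det(m):
    """Determinant by cofactor expansion along the first row."""
    if len(m) == 1:
        return m[0][0]
    return sum((-1) ** j * m[0][j] * det([row[:j] + row[j+1:] for row in m[1:]])
               for j in range(len(m)))

def determinantal_rank(A):
    """Largest order of a square submatrix with non-zero determinant."""
    n_rows, n_cols = len(A), len(A[0])
    for k in range(min(n_rows, n_cols), 0, -1):
        for rows in combinations(range(n_rows), k):
            for cols in combinations(range(n_cols), k):
                sub = [[A[r][c] for c in cols] for r in rows]
                if det(sub) != 0:
                    return k
    return 0  # only the zero matrix has no non-zero minor

A = [[1, 2, 1],
     [-2, -3, 1],
     [-1, -1, 2]]  # third row = first + second, so the rank is 2
print(determinantal_rank(A))  # 2: the 3x3 determinant vanishes, a 2x2 minor does not
```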
Tensor rank – minimum number of simple tensors
The rank of is the smallest number such that can be written as a sum of rank 1 matrices, where a matrix is defined to have rank 1 if and only if it can be written as a nonzero product of a column vector and a row vector . This notion of rank is called tensor rank; it can be generalized in the separable models interpretation of the singular value decomposition.
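Assuming NumPy, the SVD makes this characterization concrete: summing rank-many rank-1 terms of the form σᵢuᵢvᵢᵀ reproduces the matrix (the matrix below is made up for illustration):

```python
import numpy as np

A = np.array([[1., 0., 1.],
              [0., 1., 1.],
              [1., 1., 2.]])  # rank 2: third column = first + second

U, s, Vt = np.linalg.svd(A)
r = np.linalg.matrix_rank(A)

# Sum of r rank-1 matrices sigma_i * u_i * v_i^T.
B = sum(s[i] * np.outer(U[:, i], Vt[i]) for i in range(r))

print(np.allclose(A, B))  # True: r rank-1 terms suffice, matching the rank
```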
Properties
We assume that is an matrix, and we define the linear map by as above.
* The rank of an matrix is a nonnegative integer and cannot be greater than either or . That is,
A matrix that has rank is said to have ''full rank''; otherwise, the matrix is ''rank deficient''.
* Only a zero matrix has rank zero.
* is injective (or "one-to-one") if and only if has rank (in this case, we say that has ''full column rank'').
* is surjective (or "onto") if and only if has rank (in this case, we say that has ''full row rank'').
* If is a square matrix (i.e., ), then is invertible if and only if has rank (that is, has full rank).
* If is any matrix, then
* If is an matrix of rank , then
* If is an matrix of rank , then
* The rank of is equal to if and only if there exists an invertible matrix and an invertible matrix such that
where denotes the
identity matrix.
* Sylvester's rank inequality: if is an matrix and is , then
[Proof: Apply the rank–nullity theorem to the inequality .] This is a special case of the next inequality.
* The inequality due to Frobenius: if , and are defined, then
[Proof. The map is well-defined and injective. We thus obtain the inequality in terms of dimensions of kernel, which can then be converted to the inequality in terms of ranks by the rank–nullity theorem.]
Alternatively, if is a linear subspace then ; apply this inequality to the subspace defined by the orthogonal complement of the image of in the image of , whose dimension is ; its image under has dimension .
* Subadditivity:
when and are of the same dimension. As a consequence, a rank- matrix can be written as the sum of rank-1 matrices, but not fewer.
* The rank of a matrix plus the nullity of the matrix equals the number of columns of the matrix. (This is the rank–nullity theorem.)
* If is a matrix over the real numbers then the rank of and the rank of its corresponding Gram matrix are equal. Thus, for real matrices
This can be shown by proving equality of their null spaces. The null space of the Gram matrix is given by vectors for which
If this condition is fulfilled, we also have
* If is a matrix over the complex numbers and denotes the complex conjugate of and the conjugate transpose of (i.e., the adjoint of ), then
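Several of these properties are easy to spot-check numerically, assuming NumPy; for instance the Gram-matrix identity for real matrices (the matrix below is made up for illustration):

```python
import numpy as np

A = np.array([[1., 2., 0.],
              [0., 1., 1.],
              [1., 3., 1.],
              [2., 4., 0.]])  # rank 2: third column = second - 2 * first

r = np.linalg.matrix_rank(A)
print(r)                                    # 2
print(np.linalg.matrix_rank(A.T @ A) == r)  # True: rank of the Gram matrix
print(np.linalg.matrix_rank(A @ A.T) == r)  # True: same for the other product
```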
Applications
One useful application of calculating the rank of a matrix is the computation of the number of solutions of a system of linear equations. According to the Rouché–Capelli theorem, the system is inconsistent if the rank of the augmented matrix is greater than the rank of the coefficient matrix. If, on the other hand, the ranks of these two matrices are equal, then the system must have at least one solution. The solution is unique if and only if the rank equals the number of variables. Otherwise the general solution has free parameters where is the difference between the number of variables and the rank. In this case (and assuming the system of equations is in the real or complex numbers) the system of equations has infinitely many solutions.
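A quick classification in the spirit of the Rouché–Capelli theorem, assuming NumPy (the systems below are made up for illustration):

```python
import numpy as np

def classify(A, b):
    """Compare the ranks of the coefficient and augmented matrices."""
    r = np.linalg.matrix_rank(A)
    r_aug = np.linalg.matrix_rank(np.column_stack([A, b]))
    if r_aug > r:
        return "inconsistent"
    return "unique" if r == A.shape[1] else "infinitely many"

A = np.array([[1., 1.],
              [1., 1.]])
print(classify(A, np.array([1., 2.])))          # inconsistent: ranks 1 vs 2
print(classify(A, np.array([1., 1.])))          # infinitely many: rank 1 < 2 unknowns
print(classify(np.eye(2), np.array([1., 2.])))  # unique: rank equals #unknowns
```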
In control theory, the rank of a matrix can be used to determine whether a linear system is controllable, or observable.
In the field of communication complexity, the rank of the communication matrix of a function gives bounds on the amount of communication needed for two parties to compute the function.
Generalization
There are different generalizations of the concept of rank to matrices over arbitrary rings, where column rank, row rank, dimension of column space, and dimension of row space of a matrix may be different from the others or may not exist.
Thinking of matrices as tensors, the tensor rank generalizes to arbitrary tensors; for tensors of order greater than 2 (matrices are order-2 tensors), rank is very hard to compute, unlike for matrices.
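The contrast can be seen in practice: a minimal sketch, assuming NumPy, where matrix rank is a single cheap (SVD-based) call, while no analogous routine exists for higher-order tensors:

```python
import numpy as np

# For matrices (order-2 tensors), rank is cheap to compute via the SVD:
A = np.array([[1., 2., 3.],
              [2., 4., 6.],
              [1., 0., 1.]])
print(np.linalg.matrix_rank(A))  # 2: the second row is twice the first

# For an order-3 tensor there is no analogous one-call routine; computing
# tensor rank is NP-hard in general, and NumPy provides no such function.
T = np.zeros((2, 2, 2))
T[0, 0, 0] = T[1, 1, 1] = 1.0    # a "diagonal" order-3 tensor
```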
There is a notion of rank for smooth maps between smooth manifolds. It is equal to the linear rank of the derivative.
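Concretely, the rank of a smooth map at a point is the rank of its Jacobian matrix there. A small sketch, using NumPy and a hypothetical example map f(x, y) = (x, x·y) chosen for illustration:

```python
import numpy as np

# Rank of a smooth map f at a point p = rank of its derivative at p.
# Example map f(x, y) = (x, x*y); its Jacobian is [[1, 0], [y, x]].
def jacobian(x, y):
    return np.array([[1.0, 0.0],
                     [y,   x  ]])

# At a generic point the map has full rank 2 ...
print(np.linalg.matrix_rank(jacobian(2.0, 3.0)))  # 2
# ... but wherever x = 0 the derivative drops to rank 1.
print(np.linalg.matrix_rank(jacobian(0.0, 3.0)))  # 1
```

Points where the rank drops below its maximum are exactly the critical points of the map.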
Matrices as tensors
Matrix rank should not be confused with tensor order, which is called tensor rank. Tensor order is the number of indices required to write a tensor, and thus matrices all have tensor order 2. More precisely, matrices are tensors of type (1,1), having one row index and one column index, also called covariant order 1 and contravariant order 1; see Tensor (intrinsic definition) for details.
The tensor rank of a matrix can also mean the minimum number of simple tensors necessary to express the matrix as a linear combination, and this definition does agree with matrix rank as here discussed.
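One way to see this agreement in practice: the singular value decomposition writes a matrix as a sum of exactly rank(A) outer products (simple tensors). A sketch assuming NumPy:

```python
import numpy as np

# The SVD expresses A as a sum of rank(A) simple tensors:
# A = sum_i s_i * outer(u_i, v_i), matching the matrix rank.
A = np.array([[3., 1.],
              [1., 3.]])
U, s, Vt = np.linalg.svd(A)
r = np.linalg.matrix_rank(A)   # 2 for this matrix

# Rebuild A from its r rank-one terms.
terms = [s[i] * np.outer(U[:, i], Vt[i]) for i in range(r)]
print(np.allclose(sum(terms), A))  # True
```

No decomposition with fewer than rank(A) simple tensors exists, which is why the two definitions coincide for matrices.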
See also
* Matroid rank
* Nonnegative rank (linear algebra)
* Rank (differential topology)
* Multicollinearity
* Linear dependence
Further reading
* Kaw, Autar K. Two Chapters from the book Introduction to Matrix Algebra: 1. Vectors and System of Equations
* Mike Brookes: Matrix Reference Manual
{{DEFAULTSORT:Rank (Linear Algebra)}}
Linear algebra