In linear algebra, a matrix is in row echelon form if it can be obtained as the result of Gaussian elimination. Every matrix can be put in row echelon form by applying a sequence of elementary row operations. The term ''echelon'' comes from the French ''échelon'' ("level", or step of a ladder), and refers to the fact that the nonzero entries of a matrix in row echelon form look like an inverted staircase.

For square matrices, an upper triangular matrix with nonzero entries on the diagonal is in row echelon form, and a matrix in row echelon form is (weakly) upper triangular. Thus, the row echelon form can be viewed as a generalization of upper triangular form for rectangular matrices.

A matrix is in reduced row echelon form if it is in row echelon form, with the additional property that the first nonzero entry of each row is equal to 1 and is the only nonzero entry of its column. The reduced row echelon form of a matrix is unique and does not depend on the sequence of elementary row operations used to obtain it. The variant of Gaussian elimination that transforms a matrix to reduced row echelon form is sometimes called Gauss–Jordan elimination.

A matrix is in column echelon form if its transpose is in row echelon form. Since all properties of column echelon forms can therefore immediately be deduced from the corresponding properties of row echelon forms, ''only row echelon forms are considered in the remainder of the article.''


(General) row echelon form

A matrix is in row echelon form if
* All rows having only zero entries are at the bottom.
* The leading entry (that is, the left-most nonzero entry) of every nonzero row, called the pivot, is to the right of the leading entry of every row above.
Some texts add the condition that the leading coefficient must be 1, while others require this only in reduced row echelon form. These two conditions imply that all entries in a column below a leading coefficient are zeros.

The following is an example of a 4\times 5 matrix in row echelon form, but not in reduced row echelon form (see below):
: \begin{bmatrix} 1 & a_0 & a_1 & a_2 & a_3 \\ 0 & 0 & 2 & a_4 & a_5 \\ 0 & 0 & 0 & 1 & a_6 \\ 0 & 0 & 0 & 0 & 0 \end{bmatrix}

Many properties of matrices may be easily deduced from their row echelon form, such as the rank and the kernel.
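The two conditions above can be checked mechanically. The following is a minimal Python sketch; the function name is_row_echelon and the list-of-lists matrix representation are arbitrary illustrative choices.

def is_row_echelon(matrix):
    """Return True when the matrix (a list of rows) satisfies the two
    conditions above: zero rows at the bottom, and each pivot strictly
    to the right of the pivot of the row above."""
    def pivot_column(row):
        # Position of the left-most nonzero entry, or None for a zero row.
        return next((j for j, x in enumerate(row) if x != 0), None)

    last_pivot = -1
    seen_zero_row = False
    for row in matrix:
        p = pivot_column(row)
        if p is None:
            seen_zero_row = True            # every later row must also be zero
        elif seen_zero_row or p <= last_pivot:
            return False                    # nonzero row below a zero row, or pivot not further right
        else:
            last_pivot = p
    return True

# The 4x5 example above, with arbitrary numbers in place of the a_i:
print(is_row_echelon([[1, 2, 3, 4, 5],
                      [0, 0, 2, 6, 7],
                      [0, 0, 0, 1, 8],
                      [0, 0, 0, 0, 0]]))   # True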


Reduced row echelon form

A matrix is in reduced row echelon form (also called row canonical form) if it satisfies the following conditions:
* It is in row echelon form.
* The leading entry in each nonzero row is 1 (called a leading one).
* Each column containing a leading 1 has zeros in all its other entries.
If the first two conditions are verified, the last condition is equivalent to:
* Each column containing a leading 1 has zeros in all entries above the leading 1.

While a matrix may have several echelon forms, its reduced echelon form is unique. Given a matrix in reduced row echelon form, if one permutes the columns in order to have the leading 1 of the ith row in the ith column, one gets a matrix of the form
:\begin{bmatrix} I & X \\ 0 & 0 \end{bmatrix},
where I is the identity matrix of dimension j equal to the rank of the entire matrix, X is a matrix with j rows and n-j columns, and the two 0's are zero matrices of appropriate size. Since a permutation of columns is not a row operation, the resulting matrix is inequivalent to the original under elementary row operations. In the Gaussian elimination method, this corresponds to a permutation of the unknowns in the original linear system that allows a linear parametrization of the row space, in which the first j coefficients are unconstrained, and the remaining n-j are determined as linear combinations of these.
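To illustrate this block structure, here is a small Python sketch; the 3×4 example matrix R and the variable names are arbitrary illustrative choices.

# A reduced row echelon form R with pivots in columns 0 and 2 (0-based),
# so k = 3 rows, n = 4 columns and rank j = 2:
R = [[1, 4, 0, 5],
     [0, 0, 1, 6],
     [0, 0, 0, 0]]

# Moving the pivot columns to the front puts R into the block form
# [[I, X], [0, 0]] described above.
pivot_cols = [0, 2]
other_cols = [j for j in range(4) if j not in pivot_cols]
permutation = pivot_cols + other_cols
P = [[row[j] for j in permutation] for row in R]
# P == [[1, 0, 4, 5],
#       [0, 1, 0, 6],
#       [0, 0, 0, 0]]   i.e. I is the 2x2 identity and X = [[4, 5], [0, 6]]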


Systems of linear equations

A system of linear equations is said to be in ''row echelon form'' if its augmented matrix is in row echelon form. Similarly, a system of linear equations is said to be in ''reduced row echelon form'' or in ''canonical form'' if its augmented matrix is in reduced row echelon form.

The canonical form may be viewed as an explicit solution of the linear system. In fact, the system is inconsistent if and only if one of the equations of the canonical form is reduced to 0 = 1; that is, if there is a leading 1 in the column of the constant terms. Otherwise, regrouping in the right-hand side all the terms of the equations but the leading ones expresses the variables corresponding to the pivots as constants or linear functions of the other variables, if any.
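As a worked illustration, the following sketch uses SymPy's Matrix.rref (assuming SymPy is available; the system chosen is arbitrary) to bring an augmented matrix into canonical form and read off the parametrized solution.

# Augmented matrix of the system  x + 2y + z = 3,  2x + 4y + 3z = 8.
from sympy import Matrix

augmented = Matrix([[1, 2, 1, 3],
                    [2, 4, 3, 8]])
rref_matrix, pivot_columns = augmented.rref()
# rref_matrix == Matrix([[1, 2, 0, 1],
#                        [0, 0, 1, 2]]),  pivot_columns == (0, 2)
# Read as equations, the canonical form gives  x = 1 - 2*y  and  z = 2,
# with y a free variable: exactly the parametrization described above.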


Transformation to row echelon form

Gaussian elimination is the main algorithm for transforming every matrix into a matrix in row echelon form. A variant, sometimes called Gauss–Jordan elimination, produces a reduced row echelon form. Both consist of a finite sequence of elementary row operations; for an m-by-n matrix, the number of required elementary row operations is bounded in terms of m and n. For a given matrix, despite the row echelon form not being unique, all row echelon forms, including the reduced row echelon form, have the same number of zero rows, and the pivots are located in the same positions.

This is an example of a matrix in reduced row echelon form, which shows that the left part of the matrix is not always an identity matrix:
: \begin{bmatrix} 1 & 0 & a_1 & 0 & b_1 \\ 0 & 1 & a_2 & 0 & b_2 \\ 0 & 0 & 0 & 1 & b_3 \end{bmatrix}

For a matrix with integer coefficients, the Hermite normal form is a row echelon form that can be calculated without introducing any denominator, by using Euclidean division or Bézout's identity. The reduced echelon form of a matrix with integer entries generally contains non-integer entries, because of the need to divide each row of the echelon form by its leading coefficient.

The non-uniqueness of the row echelon form of a matrix follows from the fact that some elementary row operations transform a matrix in row echelon form into another (equivalent) matrix that is also in row echelon form. These elementary row operations include the multiplication of a row by a nonzero scalar and the addition of a scalar multiple of a row to one of the rows above it. For example, adding the second row to the first row leaves the matrix in row echelon form:
: \begin{bmatrix} 1 & 3 & -1 \\ 0 & 1 & 7 \end{bmatrix} \xrightarrow{\;\text{add row 2 to row 1}\;} \begin{bmatrix} 1 & 4 & 6 \\ 0 & 1 & 7 \end{bmatrix}.
In this example, the unique reduced row echelon form can be obtained by subtracting three times the second row from the first row:
: \begin{bmatrix} 1 & 3 & -1 \\ 0 & 1 & 7 \end{bmatrix} \xrightarrow{\;\text{subtract }3\times\text{row 2 from row 1}\;} \begin{bmatrix} 1 & 0 & -22 \\ 0 & 1 & 7 \end{bmatrix}.
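The following Python sketch of Gauss–Jordan elimination is a simple illustrative implementation, not an excerpt from any particular library; exact rational arithmetic is used so that the divisions by leading coefficients introduce no rounding.

from fractions import Fraction

def reduced_row_echelon_form(matrix):
    """Gauss-Jordan elimination on a list of rows, using exact
    rational arithmetic."""
    A = [[Fraction(x) for x in row] for row in matrix]
    n_rows, n_cols = len(A), len(A[0])
    pivot_row = 0
    for col in range(n_cols):
        # Look for a usable pivot in this column, at or below pivot_row.
        r = next((i for i in range(pivot_row, n_rows) if A[i][col] != 0), None)
        if r is None:
            continue
        A[pivot_row], A[r] = A[r], A[pivot_row]              # row swap
        pivot = A[pivot_row][col]
        A[pivot_row] = [x / pivot for x in A[pivot_row]]     # make the leading entry 1
        for i in range(n_rows):                              # clear the rest of the column
            if i != pivot_row and A[i][col] != 0:
                factor = A[i][col]
                A[i] = [a - factor * b for a, b in zip(A[i], A[pivot_row])]
        pivot_row += 1
        if pivot_row == n_rows:
            break
    return A

print(reduced_row_echelon_form([[1, 3, -1],
                                [0, 1, 7]]))
# [[1, 0, -22], [0, 1, 7]] (as Fractions), matching the example above.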


Affine spaces of reduced echelon forms

In this section and the following one, we denote the location of the columns containing the leading entries of the successive rows of a k\times n matrix A in reduced row echelon form (the pivots) as (L_1, \dots, L_j), with
: 0 < L_1 < \cdots < L_j \le n,
where j \le k is the dimension of the row space of the matrix. The data (k, n, L_1, \ldots, L_j) will be called the ''shape'' of A, which has leading non-zero entries \{A_{i, L_i} = 1\}_{i=1, \dots, j}; the entries in the column L_i above and below each leading entry vanish, and so do all those to the left of it within the same row, as well as all entries in the ith row for i>j:
:\begin{align} A_{i, L_i} = 1 \qquad &\text{for } i = 1, \dots, j,\\ A_{l, L_i} = 0 \qquad &\text{for } l \ne i,\\ A_{i, l} = 0 \qquad &\text{for } l < L_i,\\ A_{i, l} = 0 \qquad &\text{for } i > j. \end{align}

Since all other entries are arbitrary elements of the base field K, the set A(k, n, L_1, \ldots, L_j) of all reduced echelon form matrices with shape (k, n, L_1, \ldots, L_j) is a K-affine space of dimension
:\dim(A(k, n, L_1, \dots, L_j)) = nj - \frac{1}{2}j(j-1) - \sum_{i=1}^j L_i.

To see this, note that, of the nj possible matrix entries within the first j rows, j^2 are determined as 0's and 1's because they are in the columns (L_1, \dots, L_j) containing the pivots. A further \sum_{i=1}^j (L_i - 1) are also required to be 0, because they are to the left of the pivots, but of these,
::\sum_{i=1}^{j} (i-1) = \frac{1}{2}j(j-1)
are also in the columns (L_1, \dots, L_j). Therefore, the total number of entries that are not fixed to be equal to 0 or 1 is
:: nj - j^2 + \frac{1}{2}j(j-1) - \sum_{i=1}^j L_i + j = nj - \frac{1}{2}j(j-1) - \sum_{i=1}^j L_i.
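The dimension formula is easy to evaluate. The helper below is a minimal sketch; the function name shape_dimension and the example shape are arbitrary choices.

def shape_dimension(n, pivots):
    """Dimension of the affine space A(k, n, L_1, ..., L_j) of reduced row
    echelon forms with the given 1-based pivot columns, per the formula above
    (the value depends only on n and the pivot positions, not on k)."""
    j = len(pivots)
    return n * j - j * (j - 1) // 2 - sum(pivots)

# Shape with n = 5 and pivots in columns 1 and 3: the free entries are the
# five starred positions of [[1, *, 0, *, *], [0, 0, 1, *, *], [0, 0, 0, 0, 0]].
print(shape_dimension(5, [1, 3]))   # 5*2 - 1 - (1 + 3) = 5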


Maximal rank: Schubert cells

The row echelon form can be used to give a concrete description of the Schubert cells associated with the Grassmannian of k-dimensional subspaces of a vector space.

If j = k \le n, the matrices A \in A(k, n, L_1, \dots, L_k) are of maximal rank k, and determine k-dimensional subspaces w \subset V of the free K-module V := K^n, as the span
:: w = \operatorname{span}\{W_1, \dots, W_k\}
of the linear combinations
:: W_i := \sum_{l=1}^n A_{i, l} e_l, \quad i = 1, \dots, k
of the elementary basis vectors (e_1, \dots, e_n), with coefficients equal to the row vectors.

In this case, the affine space A(k, n, L_1, \dots, L_k) is the Schubert cell X_\lambda(\mathcal{V}) of the Grassmannian \mathbf{Gr}_k(V), consisting of k-dimensional subspaces of V corresponding to the integer partition
:\lambda = (\lambda_1 \ge \cdots \ge \lambda_k \ge 0)
with parts equal to
:\lambda_i := n - k - L_i + i, \quad 1 \le i \le k,
relative to the complete flag
: \mathcal{V} = (V_1 \subset V_2 \subset \cdots \subset V_n = V),
where
: V_i = \operatorname{span}\{e_1, \dots, e_i\}, \quad i = 1, \dots, n.
This means that X_\lambda(\mathcal{V}) \subset \mathbf{Gr}_k(V) consists of those k-dimensional subspaces w \subset V whose intersections with the subspaces \{V_j\}_{j=1, \dots, n} have dimensions
:: \dim(w \cap V_j) = i, \quad \text{for } n - k - \lambda_i + i \le j \le n - k - \lambda_{i+1} + i, \quad i = 1, \dots, k
(with \lambda_{k+1} := 0). Its dimension is then equal to the weight |\lambda| = \sum_{i=1}^k \lambda_i of the partition:
:\dim(X_\lambda(\mathcal{V})) = |\lambda|.

An equivalent, but simpler characterization of the Schubert cell X_\lambda(\mathcal{V}) may be given in terms of the dual complete flag
: \tilde{\mathcal{V}} = (\tilde{V}_1 \subset \tilde{V}_2 \subset \cdots \subset \tilde{V}_n = V),
where
: \tilde{V}_i = \operatorname{span}\{e_n, e_{n-1}, \dots, e_{n-i+1}\}, \quad i = 1, \dots, n.
Then X_\lambda(\mathcal{V}) \subset \mathbf{Gr}_k(V) consists of those k-dimensional subspaces w \subset V that have a basis (\tilde{W}_1, \dots, \tilde{W}_k) consisting of elements
:: \tilde{W}_i \in \tilde{V}_{n - L_{k+1-i} + 1} = \tilde{V}_{\lambda_{k+1-i} + i}, \quad i = 1, \dots, k
of the subspaces \{\tilde{V}_{\lambda_{k+1-i} + i}\}_{i=1, \dots, k}, which, relative to the standard basis, are the row vectors (W_k, \dots, W_1) of the row echelon form, written in reverse order.
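As a small numerical check, the partition attached to a maximal-rank shape can be computed directly from the pivot positions, and its weight agrees with the dimension formula of the previous section; the helper name partition_from_pivots and the example are arbitrary choices.

def partition_from_pivots(k, n, pivots):
    """Partition lambda attached to a maximal-rank shape (j = k), with parts
    lambda_i = n - k - L_i + i as above; pivot columns L_i are 1-based."""
    return [n - k - L + i for i, L in enumerate(pivots, start=1)]

# k = 2, n = 5, pivots in columns 1 and 3:
lam = partition_from_pivots(2, 5, [1, 3])
print(lam)        # [3, 2]
print(sum(lam))   # 5 -- the weight |lambda|, i.e. the cell dimension, which
                  # agrees with shape_dimension(5, [1, 3]) from the previous sketch.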




External links


Interactive Row Echelon Form with rational output