Rational Canonical Form
In linear algebra, the Frobenius normal form or rational canonical form of a square matrix ''A'' with entries in a field ''F'' is a canonical form for matrices obtained by conjugation by invertible matrices over ''F''. The form reflects a minimal decomposition of the vector space into subspaces that are cyclic for ''A'' (i.e., spanned by some vector and its repeated images under ''A''). Since only one normal form can be reached from a given matrix (whence the "canonical"), a matrix ''B'' is similar to ''A'' if and only if it has the same rational canonical form as ''A''. Since this form can be found without any operations that might change when extending the field ''F'' (whence the "rational"), notably without factoring polynomials, this shows that whether two matrices are similar does not change upon field extensions. The form is named after German mathematician Ferdinand Georg Frobenius. Some authors use the term rational canonical form for a somewhat different form that is mor ...
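As a brief illustration of the form itself: each invariant factor contributes one companion-matrix block, and the invariant factors divide one another in turn. For a monic polynomial p(x) = x^3 + a_2x^2 + a_1x + a_0, the companion matrix is

C(p) = \begin{pmatrix} 0 & 0 & -a_0 \\ 1 & 0 & -a_1 \\ 0 & 1 & -a_2 \end{pmatrix},

and the rational canonical form of A is the block diagonal matrix

C(p_1) \oplus C(p_2) \oplus \cdots \oplus C(p_k), \qquad p_1 \mid p_2 \mid \cdots \mid p_k,

where the p_i are the invariant factors of A.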



Linear Algebra
Linear algebra is the branch of mathematics concerning linear equations such as a_1x_1 + \cdots + a_nx_n = b, linear maps such as (x_1, \ldots, x_n) \mapsto a_1x_1 + \cdots + a_nx_n, and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to spaces of functions. Linear algebra is also used in most sciences and fields of engineering, because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the linear ma ...
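As a minimal sketch of these two objects in code (the numbers are an arbitrary example, not taken from the text above):

import numpy as np

# The linear map (x1, x2) |-> (2*x1 + x2, x1 + 3*x2) represented as a matrix
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])
b = np.array([5.0, 10.0])

# Solving the linear system A x = b; det(A) = 5 != 0, so the solution is unique
x = np.linalg.solve(A, b)
print(x)  # [1. 3.], since 2*1 + 3 = 5 and 1 + 3*3 = 10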


Monic Polynomial
In algebra, a monic polynomial is a single-variable polynomial (that is, a univariate polynomial) in which the leading coefficient (the nonzero coefficient of highest degree) is equal to 1. Therefore, a monic polynomial has the form x^n + c_{n-1}x^{n-1} + \cdots + c_2x^2 + c_1x + c_0. Univariate polynomials If a polynomial has only one indeterminate (univariate polynomial), then the terms are usually written either from highest degree to lowest degree ("descending powers") or from lowest degree to highest degree ("ascending powers"). A univariate polynomial in x of degree n then takes the general form c_nx^n + c_{n-1}x^{n-1} + \cdots + c_2x^2 + c_1x + c_0, where c_n ≠ 0 and c_{n-1}, ..., c_2, c_1, c_0 are constants, the coefficients of the polynomial. Here the term c_nx^n is called the ''leading term'', and its coefficient c_n the ''leading coefficient''; if the leading coefficient is 1, the univariate polynomial is called monic. Properties Multiplicatively closed The set ...
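A quick worked example (added for illustration): a nonzero polynomial becomes monic after division by its leading coefficient, and monic polynomials are closed under multiplication:

\tfrac{1}{3}(3x^2 - 6x + 9) = x^2 - 2x + 3, \qquad (x + 1)(x^2 - 2x + 3) = x^3 - x^2 + x + 3,

and the product on the right is again monic.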


Smith Normal Form
In mathematics, the Smith normal form (sometimes abbreviated SNF) is a normal form that can be defined for any matrix (not necessarily square) with entries in a principal ideal domain (PID). The Smith normal form of a matrix is diagonal, and can be obtained from the original matrix by multiplying on the left and right by invertible square matrices. In particular, the integers are a PID, so one can always calculate the Smith normal form of an integer matrix. The Smith normal form is very useful for working with finitely generated modules over a PID, and in particular for deducing the structure of a quotient of a free module. It is named after the Irish mathematician Henry John Stephen Smith. Definition Let A be a nonzero m×n matrix over a principal ideal domain R. There exist invertible m \times m and n \times n matrices S, T (with coefficients in R) such that the product S A T is \begin{pmatrix} \alpha_1 & 0 & 0 & \cdots & 0 \\ 0 & \alpha_2 & 0 & ...
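For a concrete computation, here is a minimal Python/sympy sketch; the integer matrix is an arbitrary example (the integers form a PID, so its Smith normal form exists):

from sympy import Matrix
from sympy.matrices.normalforms import smith_normal_form

A = Matrix([[2, 4, 4],
            [-6, 6, 12],
            [10, 4, 16]])

# Diagonal matrix diag(alpha_1, alpha_2, alpha_3) with alpha_1 | alpha_2 | alpha_3;
# here the invariant factors are 2, 2 and 156 (up to unit factors).
print(smith_normal_form(A))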


Jordan Block
In the mathematical discipline of matrix theory, a Jordan matrix, named after Camille Jordan, is a block diagonal matrix over a ring R (whose identities are the zero 0 and the one 1), where each block along the diagonal, called a Jordan block, has the following form: \begin{pmatrix} \lambda & 1 & 0 & \cdots & 0 \\ 0 & \lambda & 1 & \cdots & 0 \\ \vdots & \vdots & \vdots & \ddots & \vdots \\ 0 & 0 & 0 & \lambda & 1 \\ 0 & 0 & 0 & 0 & \lambda \end{pmatrix}. Definition Every Jordan block is specified by its dimension n and its eigenvalue \lambda \in R, and is denoted J_{\lambda,n}. It is an n \times n matrix of zeroes everywhere except for the diagonal, which is filled with \lambda, and for the superdiagonal, which is composed of ones. Any block diagonal matrix whose blocks are Jordan blocks is called a Jordan matrix. This square m ...
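A Jordan block can be assembled directly; a minimal Python/sympy sketch (the size n = 4 is an arbitrary choice):

from sympy import eye, symbols

lam = symbols('lambda')
n = 4

# lambda on the diagonal, ones on the superdiagonal, zeroes elsewhere
J = lam * eye(n)
for i in range(n - 1):
    J[i, i + 1] = 1
print(J)  # the 4x4 Jordan block with eigenvalue lambda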



Block Matrix
In mathematics, a block matrix or a partitioned matrix is a matrix that is ''interpreted'' as having been broken into sections called blocks or submatrices. Intuitively, a matrix interpreted as a block matrix can be visualized as the original matrix with a collection of horizontal and vertical lines, which break it up, or partition it, into a collection of smaller matrices. Any matrix may be interpreted as a block matrix in one or more ways, with each interpretation defined by how its rows and columns are partitioned. This notion can be made more precise for an n by m matrix M by partitioning n into a collection of row groups, and then partitioning m into a collection of column groups. The original matrix is then considered as the "total" of these groups, in the sense that the (i, j) entry of the original matrix corresponds in a 1-to-1 way with some (s, t) offset entry of some block (x, y), where x is a row group and y is a column group. Block matrix algebra arises in general from biproducts in categories of matrices ...
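A small sketch of the idea in code (the 2 + 3 split of rows and columns is one arbitrary choice of row groups and column groups):

import numpy as np

A = np.eye(2)          # top-left     2x2 block
B = np.zeros((2, 3))   # top-right    2x3 block
C = np.zeros((3, 2))   # bottom-left  3x2 block
D = 2 * np.eye(3)      # bottom-right 3x3 block

# Reassemble the four blocks into the full 5x5 matrix
M = np.block([[A, B],
              [C, D]])
print(M.shape)  # (5, 5)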







Eigenvalue
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes at most by a scalar factor when that linear transformation is applied to it. The corresponding eigenvalue, often denoted by \lambda, is the factor by which the eigenvector is scaled. Geometrically, an eigenvector, corresponding to a real nonzero eigenvalue, points in a direction in which it is stretched by the transformation and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. Loosely speaking, in a multidimensional vector space, the eigenvector is not rotated. Formal definition If T is a linear transformation from a vector space V over a field F into itself and \mathbf{v} is a nonzero vector in V, then \mathbf{v} is an eigenvector of T if T(\mathbf{v}) is a scalar multiple of \mathbf{v}. This can be written as T(\mathbf{v}) = \lambda \mathbf{v}, where \lambda is a scalar in F, known as the eigenvalue, characteristic value, or characteristic root ass ...
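A minimal Python/sympy sketch of the definition (the matrix is a made-up example):

from sympy import Matrix

A = Matrix([[2, 1],
            [1, 2]])

print(A.eigenvals())  # {3: 1, 1: 1}: eigenvalue -> algebraic multiplicity

# Check T(v) = lambda*v for v = (1, 1): A*v = (3, 3) = 3*v,
# so v is an eigenvector with eigenvalue 3.
v = Matrix([1, 1])
print(A * v)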


Diagonal Matrix
In linear algebra, a diagonal matrix is a matrix in which the entries outside the main diagonal are all zero; the term usually refers to square matrices. Elements of the main diagonal can either be zero or nonzero. An example of a 2×2 diagonal matrix is \begin{bmatrix} 3 & 0 \\ 0 & 2 \end{bmatrix}, while an example of a 3×3 diagonal matrix is \begin{bmatrix} 6 & 0 & 0 \\ 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}. An identity matrix of any size, or any multiple of it (a scalar matrix), is a diagonal matrix. A diagonal matrix is sometimes called a scaling matrix, since matrix multiplication with it results in changing scale (size). Its determinant is the product of its diagonal values. Definition As stated above, a diagonal matrix is a matrix in which all off-diagonal entries are zero. That is, the matrix D = (d_{i,j}) with n columns and n rows is diagonal if \forall i,j \in \{1, 2, \ldots, n\},\ i \ne j \implies d_{i,j} = 0. However, the main diagonal entries are unrestricted. The term ''diagonal matrix'' may s ...
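The scaling behaviour and the determinant rule are easy to check numerically; a minimal sketch using the 2×2 example above:

import numpy as np

D = np.diag([3.0, 2.0])   # the 2x2 diagonal matrix from the text
v = np.array([1.0, 1.0])

print(D @ v)              # [3. 2.]: each coordinate is scaled separately
print(np.linalg.det(D))   # 6.0, the product of the diagonal entries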



Hamel Basis
In mathematics, a set B of vectors in a vector space V is called a basis if every element of V may be written in a unique way as a finite linear combination of elements of B. The coefficients of this linear combination are referred to as components or coordinates of the vector with respect to B. The elements of a basis are called basis vectors. Equivalently, a set B is a basis if its elements are linearly independent and every element of V is a linear combination of elements of B. In other words, a basis is a linearly independent spanning set. A vector space can have several bases; however all the bases have the same number of elements, called the ''dimension'' of the vector space. This article deals mainly with finite-dimensional vector spaces. However, many of the principles are also valid for infinite-dimensional vector spaces. Definition A basis B of a vector space V over a field F (such as the real numbers \mathbb{R} or the complex numbers \mathbb{C}) is a linearly independent subset of V that spans V. This m ...
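A short numerical sketch of unique coordinates with respect to a basis (the basis vectors are an arbitrary non-standard choice):

import numpy as np

# Columns are the basis vectors b1 = (1, 0) and b2 = (1, 1) of R^2
B = np.array([[1.0, 1.0],
              [0.0, 1.0]])
v = np.array([3.0, 2.0])

# Unique coordinates of v with respect to this basis: v = 1*b1 + 2*b2
coords = np.linalg.solve(B, v)
print(coords)  # [1. 2.]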



Jordan Normal Form
In linear algebra, a Jordan normal form, also known as a Jordan canonical form (JCF), is an upper triangular matrix of a particular form called a Jordan matrix representing a linear operator on a finite-dimensional vector space with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal (on the superdiagonal), and with identical diagonal entries to the left and below them. Let ''V'' be a vector space over a field ''K''. Then a basis with respect to which the matrix has the required form exists if and only if all eigenvalues of the matrix lie in ''K'', or equivalently if the characteristic polynomial of the operator splits into linear factors over ''K''. This condition is always satisfied if ''K'' is algebraically closed (for instance, if it is the field of complex numbers). The diagonal entries of the normal form are the eigenvalues (of the operator), and the number of times each eigenvalue occurs is called th ...
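A minimal Python/sympy sketch (the matrix is a made-up example whose repeated eigenvalue has only a one-dimensional eigenspace, so it is not diagonalizable):

from sympy import Matrix

A = Matrix([[5, 4],
            [-1, 1]])

P, J = A.jordan_form()  # A = P * J * P**(-1)
print(J)  # Matrix([[3, 1], [0, 3]]): a single 2x2 Jordan block for lambda = 3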


Elementary Divisors
In algebra, the elementary divisors of a module over a principal ideal domain (PID) occur in one form of the structure theorem for finitely generated modules over a principal ideal domain. If R is a PID and M a finitely generated R-module, then M is isomorphic to a finite sum of the form M \cong R^r \oplus \bigoplus_{i=1}^{l} R/(q_i) \quad \text{with } r, l \geq 0, where the (q_i) are nonzero primary ideals. The list of primary ideals is unique up to order (but a given ideal may be present more than once, so the list represents a multiset of primary ideals); the elements q_i are unique only up to associatedness, and are called the ''elementary divisors''. Note that in a PID, the nonzero primary ideals are powers of prime ideals, so the elementary divisors can be written as powers q_i = p_i^{r_i} of irreducible elements. The nonnegative integer r is called the ''free rank'' or ''Betti number'' of the module M. The module is determined up to isomorphism by specifying its free rank r, and for class of a ...
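A short worked example over R = \mathbb{Z} (added for illustration):

\mathbb{Z}/12 \oplus \mathbb{Z}/2 \;\cong\; \mathbb{Z}/4 \oplus \mathbb{Z}/3 \oplus \mathbb{Z}/2,

so the elementary divisors are 2^2, 3 and 2 (prime powers, listed with multiplicity), while the corresponding invariant factors are 2 and 12 (each dividing the next).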