Drazin Inverse
In mathematics, the Drazin inverse, named after Michael P. Drazin, is a kind of generalized inverse of a matrix. Let ''A'' be a square matrix. The index of ''A'' is the least nonnegative integer ''k'' such that \operatorname{rank}(A^{k+1}) = \operatorname{rank}(A^k). The Drazin inverse of ''A'' is the unique matrix A^\text{D} that satisfies
:A^{k+1}A^\text{D} = A^k,\quad A^\text{D}AA^\text{D} = A^\text{D},\quad AA^\text{D} = A^\text{D}A.
It is not a generalized inverse in the classical sense, since A A^\text{D} A \neq A in general.
* If ''A'' is invertible with inverse A^{-1}, then A^\text{D} = A^{-1}.
* Drazin inversion is invariant under conjugation: if A^\text{D} is the Drazin inverse of A, then P A^\text{D} P^{-1} is the Drazin inverse of PAP^{-1}.
* The Drazin inverse of a matrix of index 0 or 1 is called the group inverse or \{1,2,5\}-inverse and is denoted A^\#. The group inverse can be defined, equivalently, by the properties AA^\#A = A, A^\#AA^\# = A^\#, and AA^\# = A^\#A.
* A projection matrix ''P'', defined as a matrix such th ...
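The following sketch is not part of the original article; it assumes NumPy and uses the identity A^\text{D} = A^k (A^{2k+1})^+ A^k, where ''k'' is the index of ''A'' and {}^+ denotes the Moore–Penrose pseudoinverse. The matrix and the tolerance value are arbitrary illustrative choices.

import numpy as np

def drazin_inverse(A, tol=1e-12):
    """Drazin inverse via the identity A^D = A^k (A^(2k+1))^+ A^k."""
    A = np.asarray(A, dtype=float)
    n = A.shape[0]
    # The index is the smallest k with rank(A^(k+1)) == rank(A^k); it is at most n.
    k = 0
    Ak = np.eye(n)                                   # current power A^k
    while np.linalg.matrix_rank(A @ Ak, tol=tol) != np.linalg.matrix_rank(Ak, tol=tol):
        Ak = A @ Ak
        k += 1
    # The Moore-Penrose pseudoinverse of an odd power of A yields the Drazin inverse.
    return Ak @ np.linalg.pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

# Example: an invertible 1x1 block direct-summed with a nilpotent 2x2 block (index 2).
A = np.array([[2.0, 0.0, 0.0],
              [0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0]])
print(np.round(drazin_inverse(A), 6))   # diag(0.5, 0, 0): inverts the invertible part, zeroes the nilpotent part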



Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in the case of abstraction from nature, some basic properties that are considered true starting points of t ...


Perfect Field
In algebra, a field ''k'' is perfect if any one of the following equivalent conditions holds:
* Every irreducible polynomial over ''k'' has distinct roots.
* Every irreducible polynomial over ''k'' is separable.
* Every finite extension of ''k'' is separable.
* Every algebraic extension of ''k'' is separable.
* Either ''k'' has characteristic 0, or, when ''k'' has characteristic ''p'' > 0, every element of ''k'' is a ''p''th power.
* Either ''k'' has characteristic 0, or, when ''k'' has characteristic ''p'' > 0, the Frobenius endomorphism is an automorphism of ''k''.
* The separable closure of ''k'' is algebraically closed.
* Every reduced commutative ''k''-algebra ''A'' is a separable algebra; i.e., A \otimes_k F is reduced for every field extension ''F''/''k''. (see below)
Otherwise, ''k'' is called imperfect. In particular, all fields of characteristic zero and all finite fields are perfect. Perfect fields are significant because Galois theory over these fields becomes simpler, si ...
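A brief illustration, added here rather than taken from the excerpt: any finite field \mathbb{F}_q with q = p^n elements is perfect, because the Frobenius map x \mapsto x^p is injective and hence, on a finite set, also surjective, so every element is a ''p''th power. By contrast, the rational function field \mathbb{F}_p(t) is imperfect: ''t'' has no ''p''th root in \mathbb{F}_p(t), so \mathbb{F}_p(t^{1/p})/\mathbb{F}_p(t) is a finite extension that is not separable.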



Generalized Eigenvector
In linear algebra, a generalized eigenvector of an n\times n matrix A is a vector that satisfies certain criteria more relaxed than those for an (ordinary) eigenvector. Let V be an n-dimensional vector space; let \phi be a linear map in L(V), the set of all linear maps from V into itself; and let A be the matrix representation of \phi with respect to some ordered basis. There may not always exist a full set of n linearly independent eigenvectors of A that form a complete basis for V. That is, the matrix A may not be diagonalizable. This happens when the algebraic multiplicity of at least one eigenvalue \lambda_i is greater than its geometric multiplicity (the nullity of the matrix (A-\lambda_i I), or the dimension of its nullspace). In this case, \lambda_i is called a defective eigenvalue and A is called a defective matrix. A generalized eigenvector x_i corresponding to \lambda_i, together with the matrix (A-\lambda_i I), generates a Jordan chain of linearly indepe ...
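A small sketch, not from the article, assuming NumPy; the matrix is an arbitrary illustrative choice with a defective eigenvalue.

import numpy as np

# lambda = 1 has algebraic multiplicity 2 but geometric multiplicity 1, so A is defective.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
N = A - 1.0 * np.eye(2)          # A - lambda*I

e1 = np.array([1.0, 0.0])        # ordinary eigenvector: (A - I) e1 = 0
e2 = np.array([0.0, 1.0])        # generalized eigenvector of rank 2

print(N @ e1)                    # [0. 0.]           e1 is an eigenvector
print(N @ e2)                    # [1. 0.], nonzero  e2 is not an ordinary eigenvector
print(N @ N @ e2)                # [0. 0.]           but (A - I)^2 e2 = 0, and {e2, (A - I) e2} is a Jordan chain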



Jordan Normal Form
In linear algebra, a Jordan normal form, also known as a Jordan canonical form (JCF), is an upper triangular matrix of a particular form called a Jordan matrix representing a linear operator on a finite-dimensional vector space with respect to some basis. Such a matrix has each non-zero off-diagonal entry equal to 1, immediately above the main diagonal (on the superdiagonal), and with identical diagonal entries to the left and below them. Let ''V'' be a vector space over a field ''K''. Then a basis with respect to which the matrix has the required form exists if and only if all eigenvalues of the matrix lie in ''K'', or equivalently if the characteristic polynomial of the operator splits into linear factors over ''K''. This condition is always satisfied if ''K'' is algebraically closed (for instance, if it is the field of complex numbers). The diagonal entries of the normal form are the eigenvalues (of the operator), and the number of times each eigenvalue occurs is called ...
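The snippet below is an added sketch, not part of the article; it assumes SymPy, and the 2×2 matrix is an arbitrary illustrative choice whose only eigenvalue, 2, is defective.

from sympy import Matrix

# Characteristic polynomial (x - 2)^2, but the eigenspace of 2 is one-dimensional,
# so the Jordan normal form is a single 2x2 Jordan block.
A = Matrix([[3, 1],
            [-1, 1]])
P, J = A.jordan_form()           # A = P * J * P**(-1)
print(J)                         # Matrix([[2, 1], [0, 2]])
print(P * J * P.inv() - A)       # zero matrix, confirming the similarity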


Moore–Penrose Inverse
In mathematics, and in particular linear algebra, the Moore–Penrose inverse of a matrix is the most widely known generalization of the inverse matrix. It was independently described by E. H. Moore in 1920, Arne Bjerhammar in 1951, and Roger Penrose in 1955. Earlier, Erik Ivar Fredholm had introduced the concept of a pseudoinverse of integral operators in 1903. When referring to a matrix, the term pseudoinverse, without further specification, is often used to indicate the Moore–Penrose inverse. The term generalized inverse is sometimes used as a synonym for pseudoinverse. A common use of the pseudoinverse is to compute a "best fit" (least squares) solution to a system of linear equations that lacks a solution (see § Applications below). Another use is to find the minimum (Euclidean) norm solution to a system of linear equations with multiple solutions. The pseudoinverse facilitates the statement and proof of results in linear algebra. The pseudoinverse is de ...
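An added sketch, not from the article, assuming NumPy; the data are arbitrary illustrative numbers showing the "best fit" use of the pseudoinverse.

import numpy as np

# Overdetermined system A x = b with no exact solution.
A = np.array([[1.0, 1.0],
              [1.0, 2.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0, 2.0])

x = np.linalg.pinv(A) @ b                      # least-squares solution via the pseudoinverse
x_ls, *_ = np.linalg.lstsq(A, b, rcond=None)   # reference least-squares solver
print(np.allclose(x, x_ls))                    # True: both give the same best-fit solution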


Inverse Element
In mathematics, the concept of an inverse element generalises the concepts of opposite (-x) and reciprocal (1/x) of numbers. Given an operation denoted here *, and an identity element denoted ''e'', if x * y = e, one says that ''x'' is a left inverse of ''y'', and that ''y'' is a right inverse of ''x''. (An identity element is an element such that x * e = x and e * y = y for all ''x'' and ''y'' for which the left-hand sides are defined.) When the operation is associative, if an element has both a left inverse and a right inverse, then these two inverses are equal and unique; they are called the ''inverse element'' or simply the ''inverse''. Often an adjective is added for specifying the operation, such as in additive inverse, multiplicative inverse, and functional inverse. In this case (associative operation), an invertible element is an element that has an inverse. Inverses are commonly used in groups, where every element is invertible, and rings, where invertible elements are also called units. They are also commonly used for operations th ...
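A small added sketch under the matrix-multiplication operation, assuming NumPy; the matrix is an arbitrary illustrative choice. It shows an element with a right inverse but no left inverse.

import numpy as np

# A wide full-row-rank matrix has right inverses under matrix multiplication,
# but no left inverse (its columns are linearly dependent).
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])             # 2 x 3, rank 2
R = A.T @ np.linalg.inv(A @ A.T)            # one particular right inverse
print(np.allclose(A @ R, np.eye(2)))        # True:  A R = identity on the right
print(np.allclose(R @ A, np.eye(3)))        # False: R A has rank 2, so it cannot be the 3x3 identity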




Constrained Generalized Inverse
In linear algebra, a constrained generalized inverse is obtained by solving a system of linear equations with an additional constraint that the solution is in a given subspace. One also says that the problem is described by a system of constrained linear equations. In many practical problems, the solution x of a linear system of equations
:Ax=b\qquad (\text{where } A\in\R^{m\times n} \text{ and } b\in\R^m)
is acceptable only when it is in a certain linear subspace L of \R^n. In the following, the orthogonal projection on L will be denoted by P_L. The constrained system of linear equations
:Ax=b\qquad x\in L
has a solution if and only if the unconstrained system of equations
:(A P_L) x = b\qquad x\in\R^n
is solvable. If the subspace L is a proper subspace of ...
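A small added sketch assuming NumPy; the matrices are arbitrary illustrative choices. It follows the equivalence above: solve the unconstrained system (A P_L) y = b and project the result back onto L.

import numpy as np

# Constrained system: solve A x = b with x restricted to L = span{e1, e2} in R^3.
A = np.array([[1.0, 0.0, 1.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])
P_L = np.diag([1.0, 1.0, 0.0])            # orthogonal projection onto L

y = np.linalg.pinv(A @ P_L) @ b           # solve the unconstrained system (A P_L) y = b
x = P_L @ y                               # pull the solution back into L

print(np.allclose(A @ x, b))              # True: x solves A x = b
print(np.allclose(P_L @ x, x))            # True: x lies in L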


Shift Matrix
In mathematics, a shift matrix is a binary matrix with ones only on the superdiagonal or subdiagonal, and zeroes elsewhere. A shift matrix ''U'' with ones on the superdiagonal is an upper shift matrix. The alternative subdiagonal matrix ''L'' is unsurprisingly known as a lower shift matrix. The (''i'',''j'')-th components of ''U'' and ''L'' are
:U_{ij} = \delta_{i+1,j}, \quad L_{ij} = \delta_{i,j+1},
where \delta_{ij} is the Kronecker delta symbol. For example, the 5×5 shift matrices are
:U_5 = \begin{pmatrix} 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 0 & 1 \\ 0 & 0 & 0 & 0 & 0 \end{pmatrix} \quad L_5 = \begin{pmatrix} 0 & 0 & 0 & 0 & 0 \\ 1 & 0 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 & 0 \\ 0 & 0 & 1 & 0 & 0 \\ 0 & 0 & 0 & 1 & 0 \end{pmatrix}.
Clearly, the transpose of a lower shift matrix is an upper shift matrix and vice versa. As a linear transformation, a lower shift matrix shifts the components of a column vector one position down, with a zero appearing in the first position. An upper shift matrix shift ...
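An added sketch, not from the article, assuming NumPy; it builds the 5×5 shift matrices above and shows their action on a vector.

import numpy as np

# Upper and lower 5x5 shift matrices built from shifted identity diagonals.
U5 = np.eye(5, k=1)            # ones on the superdiagonal
L5 = np.eye(5, k=-1)           # ones on the subdiagonal

v = np.arange(1.0, 6.0)        # [1, 2, 3, 4, 5]
print(L5 @ v)                  # [0, 1, 2, 3, 4]  components shifted one position down
print(U5 @ v)                  # [2, 3, 4, 5, 0]  components shifted one position up
print(np.allclose(L5.T, U5))   # True: the transpose of a lower shift matrix is an upper shift matrix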


Michael P
Michael may refer to: People * Michael (given name), a given name * Michael (surname), including a list of people with the surname Michael. Given name "Michael": * Michael (archangel), ''first'' of God's archangels in the Jewish, Christian and Islamic religions * Michael (bishop elect), English 13th-century Bishop of Hereford elect * Michael (Khoroshy) (1885–1977), cleric of the Ukrainian Orthodox Church of Canada * Michael Donnellan (1915–1985), Irish-born London fashion designer, often referred to simply as "Michael" * Michael (footballer, born 1982), Brazilian footballer * Michael (footballer, born 1983), Brazilian footballer * Michael (footballer, born 1993), Brazilian footballer * Michael (footballer, born February 1996), Brazilian footballer * Michael (footballer, born March 1996), Brazilian footballer * Michael (footballer, born 1999), Brazilian footballer. Rulers, Byzantine emperors: * Michael I Rangabe (d. 844), married the daughter of Emperor Nikephoro ...




Nilpotent Matrix
In linear algebra, a nilpotent matrix is a square matrix ''N'' such that
:N^k = 0
for some positive integer k. The smallest such k is called the index of N, sometimes the degree of N. More generally, a nilpotent transformation is a linear transformation L of a vector space such that L^k = 0 for some positive integer k (and thus, L^j = 0 for all j \geq k). Both of these concepts are special cases of a more general concept of nilpotence that applies to elements of rings.
Examples
Example 1
The matrix
:A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
is nilpotent with index 2, since A^2 = 0.
Example 2
More generally, any n-dimensional triangular matrix with zeros along the main diagonal is nilpotent, with index \le n. For example, the matrix
:B=\begin{pmatrix} 0 & 2 & 1 & 6\\ 0 & 0 & 1 & 2\\ 0 & 0 & 0 & 3\\ 0 & 0 & 0 & 0 \end{pmatrix}
is nilpotent, with
:B^2=\begin{pmatrix} 0 & 0 & 2 & 7\\ 0 & 0 & 0 & 3\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix} ;\ B^3=\begin{pmatrix} 0 & 0 & 0 & 6\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix} ; ...
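An added sketch, not from the article, assuming NumPy; it checks the powers of the matrix ''B'' from Example 2 above.

import numpy as np

# The 4x4 strictly upper triangular matrix from Example 2.
B = np.array([[0.0, 2.0, 1.0, 6.0],
              [0.0, 0.0, 1.0, 2.0],
              [0.0, 0.0, 0.0, 3.0],
              [0.0, 0.0, 0.0, 0.0]])

print(np.linalg.matrix_power(B, 3))   # still nonzero (a single 6 in the upper-right corner)
print(np.linalg.matrix_power(B, 4))   # the zero matrix, so B is nilpotent with index 4 <= n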



Projection (linear algebra)
In linear algebra and functional analysis, a projection is a linear transformation P from a vector space to itself (an endomorphism) such that P\circ P=P. That is, whenever P is applied twice to any vector, it gives the same result as if it were applied once (i.e. P is idempotent). It leaves its image unchanged. This definition of "projection" formalizes and generalizes the idea of graphical projection. One can also consider the effect of a projection on a geometrical object by examining the effect of the projection on points in the object. Definitions A projection on a vector space V is a linear operator P : V \to V such that P^2 = P. When V has an inner product and is complete (i.e. when V is a Hilbert space) the concept of orthogonality can be used. A projection P on a Hilbert space V is called an orthogonal projection if it satisfies \langle P \mathbf x, \mathbf y \rangle = \langle \mathbf x, P \mathbf y \rangle for all \mathbf x, \mathbf y \in V. A projection on a Hilbe ...
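An added sketch, not from the article, assuming NumPy; the matrix A is an arbitrary illustrative choice, and the projector onto its column space is built from the pseudoinverse.

import numpy as np

# Orthogonal projection onto the column space of A.
A = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 1.0]])
P = A @ np.linalg.pinv(A)

print(np.allclose(P @ P, P))    # True: P is idempotent (P applied twice equals P applied once)
print(np.allclose(P, P.T))      # True: P is symmetric, hence an orthogonal projection
print(np.allclose(P @ A, A))    # True: P leaves its image (the column space of A) unchanged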