Semisimple Operator
In mathematics, a linear operator ''T'' on a vector space is semisimple if every ''T''-invariant subspace has a complementary ''T''-invariant subspace (Lam 2001, p. 39); in other words, the vector space is a semisimple representation of the operator ''T''. Equivalently, a linear operator is semisimple if its minimal polynomial is a product of distinct irreducible polynomials. A linear operator on a finite-dimensional vector space over an algebraically closed field is semisimple if and only if it is diagonalizable. This is immediate from the characterization in terms of the minimal polynomial, but it can also be seen directly as follows. Such an operator always has an eigenvector; if it is, in addition, semisimple, then it has a complementary invariant hyperplane, which itself has an eigenvector, and so by induction the operator is diagonalizable. Conversely, diagonalizable operators are easily seen to be semisimple, as invariant subspaces are direct sums of eigenspaces, and a basis of eigenvectors for such a subspace can be extended to an eigenbasis of the whole space, which yields a complementary invariant subspace.
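As a rough illustration of the minimal-polynomial criterion, the following sketch (assuming SymPy is available; the matrices and the helper name is_semisimple are purely illustrative) tests semisimplicity over the complex numbers by checking whether the squarefree part of the characteristic polynomial annihilates the operator.

    from sympy import Matrix, symbols, gcd, div, eye, zeros

    x = symbols('x')

    def is_semisimple(A):
        """Over C, A is semisimple iff the squarefree (radical) part of its
        characteristic polynomial annihilates A."""
        p = A.charpoly(x).as_expr()              # characteristic polynomial of A
        rad, _ = div(p, gcd(p, p.diff(x)), x)    # squarefree part: p / gcd(p, p')
        coeffs = rad.as_poly(x).all_coeffs()
        result = zeros(*A.shape)                 # evaluate rad(A) by Horner's rule
        for c in coeffs:
            result = result * A + c * eye(A.rows)
        return result == zeros(*A.shape)

    print(is_semisimple(Matrix([[2, 0], [0, 3]])))   # True: diagonal, hence semisimple
    print(is_semisimple(Matrix([[2, 1], [0, 2]])))   # False: a nontrivial Jordan block

For the first matrix the squarefree part is the characteristic polynomial itself; for the Jordan block it is x − 2, which does not annihilate the matrix.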
Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in the case of abstraction from nature, some basic properties that are considered true starting points of the theory under consideration.
Linear Operator
In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping V \to W between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see Module homomorphism. If a linear map is a bijection then it is called a linear isomorphism. In the case where V = W, a linear map is called a (linear) ''endomorphism''. Sometimes the term ''linear operator'' refers to this case, but the term can have different meanings for different conventions: for example, it can be used to emphasize that V and W are real vector spaces (not necessarily with V = W), or it can be used to emphasize that V is a function space, which is a common convention in functional analysis. Sometimes the term ''linear function'' has the same meaning as ''linear map'', while in analysis it does not.
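A minimal numerical check (the matrix and test vectors are arbitrary choices, and NumPy is assumed) of the two defining properties, additivity and homogeneity, for a map given by a matrix:

    import numpy as np

    A = np.array([[1., 2., 0.],
                  [0., -1., 3.]])            # a 2x3 matrix defines a map R^3 -> R^2
    T = lambda v: A @ v

    u = np.array([1., 2., 3.])
    v = np.array([-1., 0., 4.])
    c = 2.5
    print(np.allclose(T(u + v), T(u) + T(v)))   # additivity: True
    print(np.allclose(T(c * u), c * T(u)))      # homogeneity: True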
Vector Space
In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called ''vectors'', may be added together and multiplied ("scaled") by numbers called ''scalars''. Scalars are often real numbers, but can be complex numbers or, more generally, elements of any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called ''vector axioms''. The terms real vector space and complex vector space are often used to specify the nature of the scalars, as in real coordinate space or complex coordinate space. Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities, such as forces and velocity, that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrix, which allows computing in vector spaces. This provides a concise and synthetic way of manipulating and studying systems of linear equations.
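As a small sanity check (vectors and scalars chosen arbitrarily, NumPy assumed), coordinate vectors in R^3 with the usual operations satisfy typical vector axioms such as commutativity of addition and the two distributive laws:

    import numpy as np

    u = np.array([1., -2., 0.5])
    v = np.array([3., 0., 4.])
    a, b = 2.0, -1.5
    print(np.allclose(u + v, v + u))                  # commutativity of addition
    print(np.allclose(a * (u + v), a * u + a * v))    # distributivity over vector addition
    print(np.allclose((a + b) * u, a * u + b * u))    # distributivity over scalar addition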
Invariant Subspace
In mathematics, an invariant subspace of a linear mapping ''T'' : ''V'' → ''V'', i.e. from some vector space ''V'' to itself, is a subspace ''W'' of ''V'' that is preserved by ''T''; that is, ''T''(''W'') ⊆ ''W''.

General description: Consider a linear mapping T: \mathbb{R}^n \to \mathbb{R}^n. An invariant subspace W of T has the property that all vectors \mathbf{v} \in W are transformed by T into vectors also contained in W. This can be stated as
:\mathbf{v} \in W \implies T(\mathbf{v}) \in W.
Trivial examples of invariant subspaces are \mathbb{R}^n itself, since T maps every vector in \mathbb{R}^n into \mathbb{R}^n, and \{\mathbf{0}\}, since a linear map has to map \mathbf{0} \mapsto \mathbf{0}.

1-dimensional invariant subspace ''U'': A basis of a 1-dimensional space is simply a non-zero vector \mathbf{v}. Consequently, any vector \mathbf{x} \in U can be represented as \lambda \mathbf{v} where \lambda is a scalar. If we represent T by a matrix A then, for U to be an invariant subspace, it must satisfy
:\forall \mathbf{x} \in U \;\; \exists \alpha : A\mathbf{x} = \alpha\mathbf{x},
that is, every vector of U, and in particular the basis vector \mathbf{v}, must be an eigenvector of A.
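For instance (an illustrative NumPy check, with the matrix and eigenvector chosen by hand), the span of an eigenvector is a one-dimensional invariant subspace, since the matrix sends it to a scalar multiple of itself:

    import numpy as np

    A = np.array([[3., 1.],
                  [0., 2.]])
    v = np.array([1., 0.])               # eigenvector of A with eigenvalue 3

    Av = A @ v
    alpha = Av[0] / v[0]                 # the scalar by which v is stretched
    print(np.allclose(Av, alpha * v))    # True: span{v} is mapped into itself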
Direct Sum Of Modules
In abstract algebra, the direct sum is a construction which combines several modules into a new, larger module. The direct sum of modules is the smallest module which contains the given modules as submodules with no "unnecessary" constraints, making it an example of a coproduct. Contrast with the direct product, which is the dual notion. The most familiar examples of this construction occur when considering vector spaces (modules over a field) and abelian groups (modules over the ring Z of integers). The construction may also be extended to cover Banach spaces and Hilbert spaces. See the article decomposition of a module for a way to write a module as a direct sum of submodules. Construction for vector spaces and abelian groups: We give the construction first in these two cases, under the assumption that we have only two objects. Then we generalize to an arbitrary family of arbitrary modules. The key elements of the general construction are more clearly identified by considering these two cases first.
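A toy model of the vector-space case (not a general module construction; NumPy and the helper name direct_sum are illustrative): elements of V ⊕ W can be represented as pairs, realised here by concatenating coordinates, with componentwise addition.

    import numpy as np

    def direct_sum(v, w):
        """Embed the pair (v, w) from R^m x R^n into R^(m+n)."""
        return np.concatenate([v, w])

    v1, v2 = np.array([1., 2.]), np.array([0., 5.])   # elements of V = R^2
    w1, w2 = np.array([3.]), np.array([-1.])          # elements of W = R^1

    # addition is componentwise: (v1, w1) + (v2, w2) = (v1 + v2, w1 + w2)
    lhs = direct_sum(v1, w1) + direct_sum(v2, w2)
    rhs = direct_sum(v1 + v2, w1 + w2)
    print(np.allclose(lhs, rhs))                      # True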
Semisimple Representation
In mathematics, specifically in representation theory, a semisimple representation (also called a completely reducible representation) is a linear representation of a group or an algebra that is a direct sum of simple representations (also called irreducible representations). It is an example of the general mathematical notion of semisimplicity. Many representations that appear in applications of representation theory are semisimple or can be approximated by semisimple representations. A semisimple module over an algebra over a field is an example of a semisimple representation. Conversely, a semisimple representation of a group ''G'' over a field ''k'' is a semisimple module over the group ring ''k''[''G'']. Equivalent characterizations: Let ''V'' be a representation of a group ''G''; or more generally, let ''V'' be a vector space with a set of linear endomorphisms acting on it. In general, a vector space acted on by a set of linear endomorphisms is said to be simple (or irreducible) if the only invariant subspaces for those operators are the zero subspace and the vector space itself.
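As a concrete toy example (the choice of group action and the NumPy check are illustrative), the representation of the two-element group on R^2 in which the nontrivial element acts by swapping the coordinates is semisimple: R^2 splits into the two eigenlines of the swap, each a one-dimensional (hence simple) invariant subspace.

    import numpy as np

    s = np.array([[0., 1.],
                  [1., 0.]])              # the swap (reflection) acting on R^2

    eigvals, eigvecs = np.linalg.eig(s)   # eigenvalues 1 and -1 (order may vary)
    for i in range(2):
        v = eigvecs[:, i]
        # each eigenline is invariant under s (and under the identity), so R^2
        # is a direct sum of two simple subrepresentations
        print(np.allclose(s @ v, eigvals[i] * v))   # True, True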
Algebraically Closed
In mathematics, a field ''F'' is algebraically closed if every non-constant polynomial in ''F''[''x''] (the univariate polynomial ring with coefficients in ''F'') has a root in ''F''. As an example, the field of real numbers is not algebraically closed, because the polynomial equation ''x''^2 + 1 = 0 has no solution in real numbers, even though all its coefficients (1 and 0) are real. The same argument proves that no subfield of the real field is algebraically closed; in particular, the field of rational numbers is not algebraically closed. Also, no finite field ''F'' is algebraically closed, because if ''a''_1, ''a''_2, ..., ''a''_''n'' are the elements of ''F'', then the polynomial (''x'' − ''a''_1)(''x'' − ''a''_2) ⋯ (''x'' − ''a''_''n'') + 1 has no zero in ''F''. By contrast, the fundamental theorem of algebra states that the field of complex numbers is algebraically closed. Another example of an algebraically closed field is the field of (complex) algebraic numbers.
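A quick numerical illustration (using NumPy's root finder) of the example above: x^2 + 1 has no real root, but over the algebraically closed field of complex numbers it has two roots.

    import numpy as np

    roots = np.roots([1, 0, 1])                  # coefficients of x^2 + 0x + 1
    print(roots)                                 # approximately i and -i
    print(all(abs(r.imag) > 0 for r in roots))   # True: neither root is real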
Diagonalizable
In linear algebra, a square matrix A is called diagonalizable or non-defective if it is similar to a diagonal matrix, i.e., if there exists an invertible matrix P and a diagonal matrix D such that P^{-1}AP = D, or equivalently A = PDP^{-1}. (Such P and D are not unique.) For a finite-dimensional vector space V, a linear map T: V \to V is called diagonalizable if there exists an ordered basis of V consisting of eigenvectors of T. These definitions are equivalent: if T has a matrix representation A = PDP^{-1} as above, then the column vectors of P form a basis consisting of eigenvectors of T, and the diagonal entries of D are the corresponding eigenvalues of T; with respect to this eigenvector basis, T is represented by D. Diagonalization is the process of finding the above P and D. Diagonalizable matrices and maps are especially easy for computations, once their eigenvalues and eigenvectors are known. One can raise a diagonal matrix D to a power by simply raising the diagonal entries to that power, and the determinant of a diagonal matrix is simply the product of all diagonal entries.
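A short sketch with SymPy (the matrix is an arbitrary example): diagonalize A, verify A = PDP^{-1}, and use the diagonal factor to compute a power cheaply.

    from sympy import Matrix

    A = Matrix([[4, 1],
                [2, 3]])
    P, D = A.diagonalize()              # columns of P are eigenvectors of A
    print(D)                            # the eigenvalues 2 and 5 on the diagonal
    print(A == P * D * P.inv())         # True: A = P D P^(-1)
    print(A**3 == P * D**3 * P.inv())   # True: powers via powers of the diagonal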
Hyperplane
In geometry, a hyperplane is a subspace whose dimension is one less than that of its ''ambient space''. For example, if a space is 3-dimensional then its hyperplanes are the 2-dimensional planes, while if the space is 2-dimensional, its hyperplanes are the 1-dimensional lines. This notion can be used in any general space in which the concept of the dimension of a subspace is defined. In different settings, hyperplanes may have different properties. For instance, a hyperplane of an ''n''-dimensional affine space is a flat subset with dimension ''n'' − 1, and it separates the space into two half spaces, while a hyperplane of an ''n''-dimensional projective space does not have this property. The difference in dimension between a subspace ''S'' and its ambient space ''X'' is known as the codimension of ''S'' with respect to ''X''. Therefore, a necessary and sufficient condition for ''S'' to be a hyperplane in ''X'' is for ''S'' to have codimension one in ''X''. In more technical terms, a hyperplane of an ''n''-dimensional space ''V'' is a subspace of dimension ''n'' − 1, or equivalently, of codimension 1 in ''V''.
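As a small computation (SymPy assumed, the normal vector chosen arbitrarily): in R^3, the solution set of a single homogeneous linear equation a · x = 0 is a hyperplane, i.e. a subspace of dimension 3 − 1 = 2.

    from sympy import Matrix

    a = Matrix([[1, -2, 3]])    # normal (row) vector of the hyperplane
    basis = a.nullspace()       # basis of {x : a . x = 0}
    print(len(basis))           # 2: the hyperplane has codimension one in R^3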
Jordan–Chevalley Decomposition
In mathematics, the Jordan–Chevalley decomposition, named after Camille Jordan and Claude Chevalley, expresses a linear operator as the sum of its commuting semisimple part and its nilpotent part. The multiplicative decomposition expresses an invertible operator as the product of its commuting semisimple and unipotent parts. The decomposition is easy to describe when the Jordan normal form of the operator is given, but it exists under weaker hypotheses than the existence of a Jordan normal form. Analogues of the Jordan–Chevalley decomposition exist for elements of linear algebraic groups, Lie algebras, and Lie groups, and the decomposition is an important tool in the study of these objects. Decomposition of a linear operator: Consider linear operators on a finite-dimensional vector space over a field. An operator T is semisimple if every T-invariant subspace has a complementary T-invariant subspace (if the underlying field is algebraically closed, this is the same as the requirement that the operator be diagonalizable).
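A hedged sketch of the additive decomposition using SymPy's Jordan form (the matrix is an illustrative example, and this route presupposes that a Jordan form exists over the field at hand): keep the diagonal of the Jordan form to get the semisimple part S, set N = T − S, and check that the two parts commute and that N is nilpotent.

    from sympy import Matrix, diag, zeros

    T = Matrix([[2, 1, 0],
                [0, 2, 0],
                [0, 0, 3]])

    P, J = T.jordan_form()                          # T = P * J * P^(-1)
    D = diag(*[J[i, i] for i in range(J.rows)])     # diagonal part of the Jordan form
    S = P * D * P.inv()                             # semisimple part of T
    N = T - S                                       # nilpotent part of T

    print(S * N == N * S)                           # True: the parts commute
    print(N**3 == zeros(3, 3))                      # True: N is nilpotent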
Nilpotent Endomorphism
In linear algebra, a nilpotent matrix is a square matrix ''N'' such that
:N^k = 0\,
for some positive integer k. The smallest such k is called the index of N, sometimes the degree of N. More generally, a nilpotent transformation is a linear transformation L of a vector space such that L^k = 0 for some positive integer k (and thus, L^j = 0 for all j \geq k). Both of these concepts are special cases of a more general concept of nilpotence that applies to elements of rings.

Examples: The matrix
:A = \begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}
is nilpotent with index 2, since A^2 = 0. More generally, any n-dimensional triangular matrix with zeros along the main diagonal is nilpotent, with index \le n. For example, the matrix
:B = \begin{pmatrix} 0 & 2 & 1 & 6\\ 0 & 0 & 1 & 2\\ 0 & 0 & 0 & 3\\ 0 & 0 & 0 & 0 \end{pmatrix}
is nilpotent, with
:B^2 = \begin{pmatrix} 0 & 0 & 2 & 7\\ 0 & 0 & 0 & 3\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix} ;\quad B^3 = \begin{pmatrix} 0 & 0 & 0 & 6\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0\\ 0 & 0 & 0 & 0 \end{pmatrix} ;\quad B^4 = 0,
so B is nilpotent with index 4.
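The powers of B above can be checked numerically (NumPy assumed):

    import numpy as np

    B = np.array([[0, 2, 1, 6],
                  [0, 0, 1, 2],
                  [0, 0, 0, 3],
                  [0, 0, 0, 0]])

    for k in range(1, 5):
        print(k, np.count_nonzero(np.linalg.matrix_power(B, k)))
    # B, B^2 and B^3 are nonzero, while B^4 = 0, so B is nilpotent of index 4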
Linear Algebra
Linear algebra is the branch of mathematics concerning linear equations such as
:a_1x_1+\cdots +a_nx_n=b,
linear maps such as
:(x_1, \ldots, x_n) \mapsto a_1x_1+\cdots +a_nx_n,
and their representations in vector spaces and through matrices. Linear algebra is central to almost all areas of mathematics. For instance, linear algebra is fundamental in modern presentations of geometry, including for defining basic objects such as lines, planes and rotations. Also, functional analysis, a branch of mathematical analysis, may be viewed as the application of linear algebra to spaces of functions. Linear algebra is also used in most sciences and fields of engineering, because it allows modeling many natural phenomena, and computing efficiently with such models. For nonlinear systems, which cannot be modeled with linear algebra, it is often used for dealing with first-order approximations, using the fact that the differential of a multivariate function at a point is the linear map that best approximates the function near that point.
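A minimal illustration (the coefficients are arbitrary, NumPy assumed): several linear equations a_1 x_1 + ... + a_n x_n = b taken together form a system A x = b, which can be solved directly.

    import numpy as np

    A = np.array([[2., 1.],
                  [1., 3.]])          # coefficient matrix of the system
    b = np.array([5., 10.])           # right-hand sides
    x = np.linalg.solve(A, b)
    print(x)                          # [1. 3.]
    print(np.allclose(A @ x, b))      # True: the solution satisfies both equations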