Natural Homomorphism
In category theory, a branch of mathematics, a natural transformation provides a way of transforming one functor into another while respecting the internal structure (i.e., the composition of morphisms) of the categories involved. Hence, a natural transformation can be considered to be a "morphism of functors". Informally, the notion of a natural transformation states that a particular map between functors can be done consistently over an entire category. Indeed, this intuition can be formalized to define so-called functor categories. Natural transformations are, after categories and functors, one of the most fundamental notions of category theory and consequently appear in the majority of its applications. Definition: if F and G are functors between the categories C and D, then a natural transformation \eta from F to G is a family of morphisms that satisfies two requirements. First, it must associate, to every object X in C, a morphism \eta_X : F(X) \to G(X) between objects of D, called the component of \eta at X; second, the components must satisfy the naturality condition written out below.
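For reference, the naturality requirement is the standard one (added here, since the excerpt above is cut off before stating it): for every morphism f : X \to Y in C, the components must satisfy
\[
\eta_Y \circ F(f) = G(f) \circ \eta_X ,
\]
that is, the square formed by F(f), G(f), \eta_X, and \eta_Y commutes.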
Category Theory
Category theory is a general theory of mathematical structures and their relations that was introduced by Samuel Eilenberg and Saunders Mac Lane in the middle of the 20th century in their foundational work on algebraic topology. Nowadays, category theory is used in almost all areas of mathematics and in some areas of computer science. In particular, many constructions of new mathematical objects from previous ones that appear similarly in several contexts are conveniently expressed and unified in terms of categories. Examples include quotient spaces, direct products, completion, and duality. A category is formed by two sorts of objects: the objects of the category, and the morphisms, which relate two objects called the "source" and the "target" of the morphism. One often says that a morphism is an "arrow" that "maps" its source to its target. Morphisms can be composed if the target of the first morphism equals the source of the second one, and morphism composition has properties similar to function composition: it is associative, and every object has an identity morphism (see the laws written out below).
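Spelled out, in a standard formulation added here for concreteness: for morphisms f : X \to Y, g : Y \to Z, and h : Z \to W, composition must satisfy
\[
h \circ (g \circ f) = (h \circ g) \circ f, \qquad f \circ \mathrm{id}_X = f = \mathrm{id}_Y \circ f .
\]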
Algebraic Topology
Algebraic topology is a branch of mathematics that uses tools from abstract algebra to study topological spaces. The basic goal is to find algebraic invariants that classify topological spaces up to homeomorphism, though usually most classify up to homotopy equivalence. Although algebraic topology primarily uses algebra to study topological problems, using topology to solve algebraic problems is sometimes also possible. Algebraic topology, for example, allows for a convenient proof that any subgroup of a free group is again a free group. Among the main areas studied in algebraic topology are the homotopy groups, which are used to classify topological spaces. The first and simplest homotopy group is the fundamental group, which records information about loops in a space. Intuitively, homotopy groups record information about the basic shape, or "holes," of a topological space.
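One concrete computation, added here as an illustration of the loop-counting role of the fundamental group: the fundamental group of the circle is infinite cyclic,
\[
\pi_1(S^1) \cong \mathbb{Z},
\]
with the class of a loop determined by its winding number around the circle.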
Linear Map
In mathematics, and more specifically in linear algebra, a linear map (also called a linear mapping, linear transformation, vector space homomorphism, or in some contexts linear function) is a mapping V \to W between two vector spaces that preserves the operations of vector addition and scalar multiplication. The same names and the same definition are also used for the more general case of modules over a ring; see module homomorphism. If a linear map is a bijection, then it is called a linear isomorphism. In the case where V = W, a linear map is called a (linear) endomorphism. Sometimes the term "linear operator" refers to this case, but that term can have different meanings under different conventions: for example, it can be used to emphasize that V and W are real vector spaces (not necessarily with V = W), or it can be used to emphasize that V is a function space, which is a common convention in functional analysis.
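Concretely, the two preservation requirements in the definition can be written as follows (a standard restatement, added here), for all vectors u, v \in V and every scalar c:
\[
f(u + v) = f(u) + f(v), \qquad f(c\,u) = c\,f(u).
\]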
Injective
In mathematics, an injective function (also known as an injection, or a one-to-one function) is a function that maps distinct elements of its domain to distinct elements; that is, x_1 \neq x_2 implies f(x_1) \neq f(x_2) (equivalently, in contrapositive form, f(x_1) = f(x_2) implies x_1 = x_2). In other words, every element of the function's codomain is the image of at most one element of its domain. The term "one-to-one function" must not be confused with "one-to-one correspondence", which refers to bijective functions: functions such that each element in the codomain is the image of exactly one element in the domain. A homomorphism between algebraic structures is a function that is compatible with the operations of the structures. For all common algebraic structures, and in particular for vector spaces, an injective homomorphism is also called a monomorphism. However, in the more general context of category theory, the definition of a monomorphism differs from that of an injective homomorphism; it is thus a theorem that the two notions are equivalent for algebraic structures.
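As a simple illustration (a standard example, not taken from the excerpt above): cubing is injective on the real numbers, while squaring is not:
\[
a^3 = b^3 \implies a = b \quad \text{for all } a, b \in \mathbb{R}, \qquad \text{but} \quad (-1)^2 = 1^2 \text{ with } -1 \neq 1 .
\]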
Vector Space
In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called vectors, may be added together and multiplied ("scaled") by numbers called scalars. Scalars are often real numbers, but can be complex numbers or, more generally, elements of any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called vector axioms. The terms real vector space and complex vector space are often used to specify the nature of the scalars, as in real coordinate space or complex coordinate space. Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities, such as forces and velocity, that have not only a magnitude but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrix, which allows computing in vector spaces. This provides a concise and synthetic way for manipulating and studying systems of linear equations.
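A minimal concrete example, added here for orientation: the plane \mathbb{R}^2 is a real vector space under componentwise operations,
\[
(a_1, a_2) + (b_1, b_2) = (a_1 + b_1,\, a_2 + b_2), \qquad c \cdot (a_1, a_2) = (c a_1,\, c a_2).
\]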
Field (mathematics)
In mathematics, a field is a set on which addition, subtraction, multiplication, and division are defined and behave as the corresponding operations on rational and real numbers do. A field is thus a fundamental algebraic structure which is widely used in algebra, number theory, and many other areas of mathematics. The best known fields are the field of rational numbers, the field of real numbers, and the field of complex numbers. Many other fields, such as fields of rational functions, algebraic function fields, algebraic number fields, and p-adic fields, are commonly used and studied in mathematics, particularly in number theory and algebraic geometry. Most cryptographic protocols rely on finite fields, i.e., fields with finitely many elements. The relation of two fields is expressed by the notion of a field extension. Galois theory, initiated by Évariste Galois in the 1830s, is devoted to understanding the symmetries of field extensions.
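As a small illustration of a finite field (a standard example, added here): the integers modulo the prime 5 form a field, in which every nonzero element has a multiplicative inverse; for instance,
\[
2 \cdot 3 = 6 \equiv 1 \pmod 5, \qquad \text{so } 3^{-1} = 2 \text{ in } \mathbb{Z}/5\mathbb{Z}.
\]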
Determinant
In mathematics, the determinant is a scalar value that is a function of the entries of a square matrix. It characterizes some properties of the matrix and the linear map represented by the matrix. In particular, the determinant is nonzero if and only if the matrix is invertible and the linear map represented by the matrix is an isomorphism. The determinant of a product of matrices is the product of their determinants (the preceding property is a corollary of this one). The determinant of a matrix A is denoted \det(A), \det A, or |A|. The determinant of a 2 \times 2 matrix is
\[
\begin{vmatrix} a & b \\ c & d \end{vmatrix} = ad - bc,
\]
and the determinant of a 3 \times 3 matrix is
\[
\begin{vmatrix} a & b & c \\ d & e & f \\ g & h & i \end{vmatrix} = aei + bfg + cdh - ceg - bdi - afh.
\]
The determinant of an n \times n matrix can be defined in several equivalent ways. The Leibniz formula expresses the determinant as a sum of signed products of matrix entries such that each summand is the product of n different entries, and the number of these summands is n!, the factorial of n.
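A quick numerical check of the 2 \times 2 formula (an added example):
\[
\begin{vmatrix} 1 & 2 \\ 3 & 4 \end{vmatrix} = 1 \cdot 4 - 2 \cdot 3 = -2 .
\]
Since the result is nonzero, this matrix is invertible.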
Group Of Units
In algebra, a unit of a ring is an invertible element for the multiplication of the ring. That is, an element u of a ring R is a unit if there exists v in R such that vu = uv = 1, where 1 is the multiplicative identity; the element v is unique with this property and is called the multiplicative inverse of u. The set of units of R forms a group under multiplication, called the group of units or unit group of R. Other notations for the unit group are R^\times, U(R), and E(R) (the last from the German term Einheit). Less commonly, the term "unit" is sometimes used to refer to the element 1 of the ring, in expressions like "ring with a unit" or "unit ring", and also "unit matrix". Because of this ambiguity, 1 is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or "ring with identity" may be used to emphasize that one is considering a ring instead of a rng. Examples: the multiplicative identity 1 and its additive inverse -1 are always units; more generally, any root of unity in a ring is a unit.
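Two standard examples, added here for concreteness: in the ring of integers the only units are \pm 1, while in the ring of integers modulo 6 the units are exactly the residues coprime to 6:
\[
\mathbb{Z}^\times = \{1, -1\}, \qquad (\mathbb{Z}/6\mathbb{Z})^\times = \{\overline{1}, \overline{5}\}.
\]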
Invertible Matrix
In linear algebra, an n-by-n square matrix A is called invertible (also nonsingular or nondegenerate) if there exists an n-by-n square matrix B such that
\[
\mathbf{AB} = \mathbf{BA} = \mathbf{I}_n ,
\]
where \mathbf{I}_n denotes the n-by-n identity matrix and the multiplication used is ordinary matrix multiplication. If this is the case, then the matrix B is uniquely determined by A, and is called the (multiplicative) inverse of A, denoted by A^{-1}. Matrix inversion is the process of finding the matrix that satisfies the prior equation for a given invertible matrix A. A square matrix that is not invertible is called singular or degenerate. A square matrix is singular if and only if its determinant is zero. Singular matrices are rare in the sense that if a square matrix's entries are randomly selected from any finite region on the number line or complex plane, the probability that the matrix is singular is 0; that is, it will "almost never" be singular. Non-square matrices (m-by-n matrices for which m \neq n) do not have an inverse, although in some cases such a matrix may have a one-sided left or right inverse.
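For a concrete instance, added here and assuming the familiar 2 \times 2 adjugate formula: whenever ad - bc \neq 0,
\[
\begin{pmatrix} a & b \\ c & d \end{pmatrix}^{-1} = \frac{1}{ad - bc} \begin{pmatrix} d & -b \\ -c & a \end{pmatrix};
\]
for example,
\[
\begin{pmatrix} 1 & 2 \\ 3 & 4 \end{pmatrix}^{-1} = \frac{1}{-2}\begin{pmatrix} 4 & -2 \\ -3 & 1 \end{pmatrix} = \begin{pmatrix} -2 & 1 \\ 3/2 & -1/2 \end{pmatrix}.
\]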
Ring Homomorphism
In ring theory, a branch of abstract algebra, a ring homomorphism is a structure-preserving function between two rings. More explicitly, if R and S are rings, then a ring homomorphism is a function f : R \to S that is addition preserving, f(a+b) = f(a) + f(b) for all a and b in R; multiplication preserving, f(ab) = f(a) f(b) for all a and b in R; and unit (multiplicative identity) preserving, f(1_R) = 1_S. Additive inverses and the additive identity are part of the structure too, but it is not necessary to require explicitly that they are respected, because these conditions are consequences of the three conditions above. If in addition f is a bijection, then its inverse f^{-1} is also a ring homomorphism. In this case, f is called a ring isomorphism, and the rings R and S are called isomorphic. From the standpoint of ring theory, isomorphic rings cannot be distinguished. If R and S are rngs, then the corresponding notion is that of a rng homomorphism, defined as above except without the condition on the multiplicative identity.
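A standard example, added for illustration, with \pi denoting the reduction map (notation introduced here): reduction modulo n from the integers to the integers modulo n is a ring homomorphism,
\[
\pi : \mathbb{Z} \to \mathbb{Z}/n\mathbb{Z}, \qquad \pi(a) = a \bmod n,
\]
since \pi(a+b) = \pi(a) + \pi(b), \pi(ab) = \pi(a)\pi(b), and \pi(1) = \overline{1}.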
Commutative Ring
In mathematics, a commutative ring is a ring in which the multiplication operation is commutative. The study of commutative rings is called commutative algebra. Complementarily, noncommutative algebra is the study of ring properties that are not specific to commutative rings. This distinction results from the high number of fundamental properties of commutative rings that do not extend to noncommutative rings. Definition and first examples: a ring is a set R equipped with two binary operations, i.e. operations combining any two elements of the ring to a third. They are called addition and multiplication and are commonly denoted by "+" and "\cdot", e.g. a + b and a \cdot b. To form a ring, these two operations have to satisfy a number of properties: the ring has to be an abelian group under addition as well as a monoid under multiplication, where multiplication distributes over addition, i.e. a \cdot (b + c) = (a \cdot b) + (a \cdot c).
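For orientation, two standard examples added here: the integers \mathbb{Z} with the usual addition and multiplication form a commutative ring, whereas the ring of 2 \times 2 real matrices is not commutative; for instance,
\[
\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}\begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix} = \begin{pmatrix} 1 & 0 \\ 0 & 0 \end{pmatrix} \neq \begin{pmatrix} 0 & 0 \\ 0 & 1 \end{pmatrix} = \begin{pmatrix} 0 & 0 \\ 1 & 0 \end{pmatrix}\begin{pmatrix} 0 & 1 \\ 0 & 0 \end{pmatrix}.
\]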
Singular Homology
In algebraic topology, singular homology refers to the study of a certain set of algebraic invariants of a topological space X, the so-called homology groups H_n(X). Intuitively, singular homology counts, for each dimension n, the n-dimensional holes of a space. Singular homology is a particular example of a homology theory, which has now grown to be a rather broad collection of theories. Of the various theories, it is perhaps one of the simpler ones to understand, being built on fairly concrete constructions (see also the related theory of simplicial homology). In brief, singular homology is constructed by taking maps of the standard n-simplex to a topological space, and composing them into formal sums, called singular chains. The boundary operation, mapping each n-dimensional simplex to its (n-1)-dimensional boundary, induces the singular chain complex. The singular homology is then the homology of the chain complex.
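In symbols, a standard formulation added here for reference, writing \partial_n for the boundary map on singular n-chains: one has \partial_n \circ \partial_{n+1} = 0, so the image of \partial_{n+1} lies in the kernel of \partial_n, and the n-th singular homology group is the quotient
\[
H_n(X) = \ker \partial_n / \operatorname{im} \partial_{n+1}.
\]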