Coxeter Matrix
In mathematics, a Coxeter group, named after H. S. M. Coxeter, is an abstract group that admits a formal description in terms of reflections (or kaleidoscopic mirrors). Indeed, the finite Coxeter groups are precisely the finite Euclidean reflection groups; the symmetry groups of regular polyhedra are an example. However, not all Coxeter groups are finite, and not all can be described in terms of symmetries and Euclidean reflections. Coxeter groups were introduced in 1934 as abstractions of reflection groups, and finite Coxeter groups were classified in 1935. Coxeter groups find applications in many areas of mathematics. Examples of finite Coxeter groups include the symmetry groups of regular polytopes and the Weyl groups of simple Lie algebras. Examples of infinite Coxeter groups include the triangle groups corresponding to regular tessellations of the Euclidean plane and the hyperbolic plane, and the Weyl groups of infinite-dimensional Kac–Moody algebras.
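For reference (an addition to the excerpt, stating the standard definition rather than quoting it), a Coxeter group is presented by a symmetric Coxeter matrix (m_{ij}):

W = \left\langle r_1, \dots, r_n \mid (r_i r_j)^{m_{ij}} = 1 \right\rangle, \qquad m_{ii} = 1, \quad m_{ij} = m_{ji} \in \{2, 3, \dots\} \cup \{\infty\} \text{ for } i \neq j,

where m_{ij} = \infty means that no relation is imposed on r_i r_j. For example, the Coxeter matrix \begin{pmatrix} 1 & 3 \\ 3 & 1 \end{pmatrix} presents the dihedral group of order 6, the symmetry group of an equilateral triangle.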
Mathematics
Mathematics is an area of knowledge that includes the topics of numbers, formulas and related structures, shapes and the spaces in which they are contained, and quantities and their changes. These topics are represented in modern mathematics with the major subdisciplines of number theory, algebra, geometry, and analysis, respectively. There is no general consensus among mathematicians about a common definition for their academic discipline. Most mathematical activity involves the discovery of properties of abstract objects and the use of pure reason to prove them. These objects consist of either abstractions from nature or, in modern mathematics, entities that are stipulated to have certain properties, called axioms. A ''proof'' consists of a succession of applications of deductive rules to already established results. These results include previously proved theorems, axioms, and, in the case of abstraction from nature, some basic properties that are considered true starting points of the theory under consideration.
Kac–Moody Algebra
In mathematics, a Kac–Moody algebra (named for Victor Kac and Robert Moody, who independently and simultaneously discovered them in 1968) is a Lie algebra, usually infinite-dimensional, that can be defined by generators and relations through a generalized Cartan matrix. These algebras form a generalization of finite-dimensional semisimple Lie algebras, and many properties related to the structure of a Lie algebra, such as its root system, irreducible representations, and connection to flag manifolds, have natural analogues in the Kac–Moody setting. A class of Kac–Moody algebras called affine Lie algebras is of particular importance in mathematics and theoretical physics, especially two-dimensional conformal field theory and the theory of exactly solvable models. Kac discovered an elegant proof of certain combinatorial identities, the Macdonald identities, which is based on the representation theory of affine Kac–Moody algebras. Howard Garland and James Lepowsky demonstrated that the Rogers–Ramanujan identities can be derived in a similar fashion.
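For reference (not part of the excerpt), the conditions defining a generalized Cartan matrix C = (c_{ij}) are standard and can be stated compactly:

c_{ii} = 2 \text{ for all } i, \qquad c_{ij} \in \mathbb{Z}_{\le 0} \text{ for } i \neq j, \qquad c_{ij} = 0 \iff c_{ji} = 0.

For an indecomposable C, the finite-dimensional semisimple case corresponds to C being positive definite, and the affine Lie algebras correspond to C being positive semidefinite of corank 1.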
Eigenvalues
In linear algebra, an eigenvector or characteristic vector of a linear transformation is a nonzero vector that changes at most by a scalar factor when that linear transformation is applied to it. The corresponding eigenvalue, often denoted by \lambda, is the factor by which the eigenvector is scaled. Geometrically, an eigenvector corresponding to a real nonzero eigenvalue points in a direction in which it is stretched by the transformation, and the eigenvalue is the factor by which it is stretched. If the eigenvalue is negative, the direction is reversed. Loosely speaking, in a multidimensional vector space, the eigenvector is not rotated.
Formal definition
If T is a linear transformation from a vector space V over a field F into itself and \mathbf{v} is a nonzero vector in V, then \mathbf{v} is an eigenvector of T if T(\mathbf{v}) is a scalar multiple of \mathbf{v}. This can be written as T(\mathbf{v}) = \lambda \mathbf{v}, where \lambda is a scalar in F, known as the eigenvalue, characteristic value, or characteristic root associated with \mathbf{v}.
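The defining relation is easy to check numerically. The following Python sketch (my addition; the matrix A is an arbitrary example) uses NumPy to confirm T(\mathbf{v}) = \lambda \mathbf{v} for each computed eigenpair.

import numpy as np

# A small matrix standing in for the linear transformation T.
A = np.array([[2.0, 1.0],
              [1.0, 2.0]])

# np.linalg.eig returns the eigenvalues and the eigenvectors as columns.
eigvals, eigvecs = np.linalg.eig(A)

for lam, v in zip(eigvals, eigvecs.T):
    # A @ v equals lam * v up to floating-point error.
    assert np.allclose(A @ v, lam * v)
    print(f"eigenvalue {lam:.1f}, eigenvector {v}")

Here the eigenvalues are 3 and 1, corresponding to the directions (1, 1) and (1, -1).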
Dot Product
In mathematics, the dot product or scalar product is an algebraic operation that takes two equal-length sequences of numbers (usually coordinate vectors) and returns a single number. (The term ''scalar product'' means literally "product with a scalar as a result"; it is also sometimes used for other symmetric bilinear forms, for example in a pseudo-Euclidean space.) In Euclidean geometry, the dot product of the Cartesian coordinates of two vectors is widely used. It is often called the inner product (or rarely the projection product) of Euclidean space, even though it is not the only inner product that can be defined on Euclidean space (see Inner product space for more). Algebraically, the dot product is the sum of the products of the corresponding entries of the two sequences of numbers. Geometrically, it is the product of the Euclidean magnitudes of the two vectors and the cosine of the angle between them. These definitions are equivalent when using Cartesian coordinates.
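The two definitions are easy to compare in code. The following Python sketch (my addition, with arbitrary example vectors) computes the algebraic dot product and checks it against the geometric formula |a| |b| \cos\theta.

import math

a = [1.0, 2.0, 2.0]
b = [2.0, 0.0, 1.0]

# Algebraic definition: sum of products of corresponding entries.
dot = sum(x * y for x, y in zip(a, b))       # 1*2 + 2*0 + 2*1 = 4

# Euclidean magnitudes of the two vectors.
norm_a = math.sqrt(sum(x * x for x in a))    # 3
norm_b = math.sqrt(sum(y * y for y in b))    # sqrt(5)

# Angle recovered from the dot product, then fed back into the geometric form.
theta = math.acos(dot / (norm_a * norm_b))
assert math.isclose(dot, norm_a * norm_b * math.cos(theta))
print(dot, math.degrees(theta))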
Disjoint Union
In mathematics, a disjoint union (or discriminated union) of a family of sets (A_i : i \in I) is a set A, often denoted by \bigsqcup_{i \in I} A_i, with an injection of each A_i into A, such that the images of these injections form a partition of A (that is, each element of A belongs to exactly one of these images). A disjoint union of a family of pairwise disjoint sets is their union. In category theory, the disjoint union is the coproduct of the category of sets, and thus defined up to a bijection. In this context, the notation \coprod_{i \in I} A_i is often used. The disjoint union of two sets A and B is written with infix notation as A \sqcup B. Some authors use the alternative notation A \uplus B or A \mathbin{\dot\cup} B (along with the corresponding \biguplus_{i \in I} A_i or \dot{\bigcup}_{i \in I} A_i). A standard way for building the disjoint union is to define A as the set of ordered pairs (x, i) such that x \in A_i, and the injection A_i \to A as x \mapsto (x, i).
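The tagged-pair construction just described carries over directly to code. The following Python sketch (my addition, with small example sets) builds a disjoint union of two overlapping sets by tagging each element with the index of the set it came from.

def disjoint_union(family):
    # The standard construction: pair each element x of family[i] with its index i.
    return {(x, i) for i, a_i in enumerate(family) for x in a_i}

A0 = {5, 6, 7}
A1 = {5, 6}

U = disjoint_union([A0, A1])
# The copies of 5 and 6 coming from A0 and A1 stay distinct because of the tag.
print(sorted(U))   # [(5, 0), (5, 1), (6, 0), (6, 1), (7, 0)]
print(len(U))      # 5 = |A0| + |A1|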
Direct Product Of Groups
In mathematics, specifically in group theory, the direct product is an operation that takes two groups G and H and constructs a new group, usually denoted G \times H. This operation is the group-theoretic analogue of the Cartesian product of sets and is one of several important notions of direct product in mathematics. In the context of abelian groups, the direct product is sometimes referred to as the direct sum, and is denoted G \oplus H. Direct sums play an important role in the classification of abelian groups: according to the fundamental theorem of finite abelian groups, every finite abelian group can be expressed as the direct sum of cyclic groups.
Definition
Given groups G (with operation *) and H (with operation \Delta), the direct product G \times H is defined as follows: its underlying set is the Cartesian product \{(g, h) : g \in G, h \in H\}, and its binary operation is defined componentwise, (g_1, h_1)(g_2, h_2) = (g_1 * g_2, h_1 \Delta h_2). The resulting algebraic object satisfies the axioms for a group. Specifically:
Associativity: the binary operation on G \times H is associative.
Identity: the direct product has an identity element, namely (1_G, 1_H), where 1_G is the identity element of G and 1_H is the identity element of H.
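The componentwise operation is simple to work with computationally. The following Python sketch (my addition) builds the direct product of the cyclic groups Z_2 and Z_3, written additively, and checks that (1, 1) generates the whole group, so the product is cyclic of order 6.

from itertools import product

Z2, Z3 = range(2), range(3)
elements = list(product(Z2, Z3))   # the 6 elements of Z_2 x Z_3

def op(a, b):
    # Componentwise operation: add mod 2 in the first slot, mod 3 in the second.
    return ((a[0] + b[0]) % 2, (a[1] + b[1]) % 3)

identity = (0, 0)
assert all(op(identity, x) == x for x in elements)

# Repeatedly adding the generator (1, 1) visits every element.
x, seen = identity, set()
for _ in range(len(elements)):
    seen.add(x)
    x = op(x, (1, 1))
assert seen == set(elements)
print(len(elements), "elements; (1, 1) generates all of them")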
Connected Component (Graph Theory)
In graph theory, a component of an undirected graph is a connected subgraph that is not part of any larger connected subgraph. The components of any graph partition its vertices into disjoint sets, and are the induced subgraphs of those sets. A graph that is itself connected has exactly one component, consisting of the whole graph. Components are sometimes called connected components. The number of components in a given graph is an important graph invariant, and is closely related to invariants of matroids, topological spaces, and matrices. In random graphs, a frequently occurring phenomenon is the incidence of a giant component, one component that is significantly larger than the others, and of a percolation threshold, an edge probability above which a giant component exists and below which it does not. The components of a graph can be constructed in linear time, and a special case of the problem, connected-component labeling, is a basic technique in image analysis.
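The linear-time construction mentioned above is typically done with a graph search. The following Python sketch (my addition) finds the components of a small undirected graph with breadth-first search in O(V + E) time.

from collections import deque

def connected_components(adj):
    # adj maps each vertex to the list of its neighbours (undirected graph).
    seen, components = set(), []
    for start in adj:
        if start in seen:
            continue
        seen.add(start)
        component, queue = [], deque([start])
        while queue:
            u = queue.popleft()
            component.append(u)
            for v in adj[u]:
                if v not in seen:
                    seen.add(v)
                    queue.append(v)
        components.append(component)
    return components

# Two components: {0, 1, 2} and {3, 4}.
graph = {0: [1], 1: [0, 2], 2: [1], 3: [4], 4: [3]}
print(connected_components(graph))   # [[0, 1, 2], [3, 4]]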
Commutative Operation
In mathematics, a binary operation is commutative if changing the order of the operands does not change the result. It is a fundamental property of many binary operations, and many mathematical proofs depend on it. Most familiar as the name of the property that says something like 3 + 4 = 4 + 3 or 2 \times 5 = 5 \times 2, the property can also be used in more advanced settings. The name is needed because there are operations, such as division and subtraction, that do not have it (for example, 3 - 5 \neq 5 - 3); such operations are ''not'' commutative, and so are referred to as ''noncommutative operations''. The idea that simple operations, such as the multiplication and addition of numbers, are commutative was for many years implicitly assumed. Thus, this property was not named until the 19th century, when mathematics started to become formalized. A similar property exists for binary relations; a binary relation is said to be symmetric if the relation applies regardless of the order of its operands; for example, equality is symmetric.
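A brute-force check over a small finite domain (my illustration, not from the excerpt) makes the contrast between commutative and noncommutative operations concrete:

from itertools import product

domain = range(-3, 4)

def is_commutative(op):
    # True if op(a, b) == op(b, a) for every pair of operands in the domain.
    return all(op(a, b) == op(b, a) for a, b in product(domain, repeat=2))

print(is_commutative(lambda a, b: a + b))   # True: addition commutes
print(is_commutative(lambda a, b: a * b))   # True: multiplication commutes
print(is_commutative(lambda a, b: a - b))   # False: e.g. 3 - 5 != 5 - 3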
Coxeter–Dynkin Diagram
In geometry, a Coxeter–Dynkin diagram (or Coxeter diagram, Coxeter graph) is a graph with numerically labeled edges (called branches) representing the spatial relations between a collection of mirrors (or reflecting hyperplanes). It describes a kaleidoscopic construction: each graph "node" represents a mirror (domain facet), and the label attached to a branch encodes the dihedral angle order between two mirrors (on a domain ridge), that is, the amount by which the angle between the reflective planes can be multiplied to get 180 degrees. An unlabeled branch implicitly represents order 3 (60 degrees), and each pair of nodes that is not connected by a branch at all (such as non-adjacent nodes) represents a pair of mirrors at order 2 (90 degrees). Each diagram represents a Coxeter group, and Coxeter groups are classified by their associated diagrams. Dynkin diagrams are closely related objects, which differ from Coxeter diagrams in two respects: firstly, branches labeled "4" or greater are directed, while Coxeter diagram branches are undirected; secondly, Dynkin diagrams must satisfy an additional (crystallographic) restriction, namely that the only allowed branch labels are 2, 3, 4, and 6.
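In symbols (an addition for reference, not quoted from the excerpt): a branch labeled m between nodes i and j means that the corresponding mirrors meet at dihedral angle \pi / m, and the symmetric bilinear form attached to the diagram has entries

B_{ii} = 1, \qquad B_{ij} = -\cos\!\left(\frac{\pi}{m_{ij}}\right) \text{ for } i \neq j,

so the unlabeled branch m = 3 gives the 60-degree angle mentioned above, and an absent branch, m = 2, gives perpendicular mirrors with B_{ij} = 0.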
Symmetric Matrix
In linear algebra, a symmetric matrix is a square matrix that is equal to its transpose. Formally, a matrix A is symmetric if A = A^\mathsf{T}. Because equal matrices have equal dimensions, only square matrices can be symmetric. The entries of a symmetric matrix are symmetric with respect to the main diagonal. So if a_{ij} denotes the entry in the i-th row and j-th column, then a_{ij} = a_{ji} for all indices i and j. Every square diagonal matrix is symmetric, since all off-diagonal elements are zero. Similarly, in characteristic different from 2, each diagonal element of a skew-symmetric matrix must be zero, since each is its own negative. In linear algebra, a real symmetric matrix represents a self-adjoint operator, expressed in an orthonormal basis, over a real inner product space. The corresponding object for a complex inner product space is a Hermitian matrix with complex-valued entries, which is equal to its conjugate transpose. Therefore, in linear algebra over the complex numbers, it is often assumed that a symmetric matrix refers to one which has real-valued entries.
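As a small numerical illustration (my addition, with an arbitrary example matrix), the following Python sketch checks the defining property A = A^\mathsf{T} and uses NumPy's symmetric eigensolver, which returns real eigenvalues and an orthonormal set of eigenvectors.

import numpy as np

# A real symmetric matrix: it equals its transpose.
A = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])

assert np.array_equal(A, A.T)   # the defining property

# eigh is specialized to symmetric (Hermitian) matrices.
eigvals, eigvecs = np.linalg.eigh(A)
print(eigvals)                                        # all real: 1, 2, 4
assert np.allclose(eigvecs.T @ eigvecs, np.eye(3))    # orthonormal eigenvectors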
Conjugate Elements
In mathematics, in particular field theory, the conjugate elements or algebraic conjugates of an algebraic element \alpha, over a field extension L/K, are the roots of the minimal polynomial of \alpha over K. Conjugate elements are commonly called conjugates in contexts where this is not ambiguous. Normally \alpha itself is included in the set of conjugates of \alpha. Equivalently, the conjugates of \alpha are the images of \alpha under the field automorphisms of L that leave fixed the elements of K. The equivalence of the two definitions is one of the starting points of Galois theory. The concept generalizes complex conjugation, since the algebraic conjugates over \R of a complex number are the number itself and its ''complex conjugate''.
Example
The cube roots of the number one are:
\sqrt[3]{1} = \begin{cases} 1 \\[3pt] -\frac{1}{2} + \frac{\sqrt{3}}{2}i \\[5pt] -\frac{1}{2} - \frac{\sqrt{3}}{2}i \end{cases}
The latter two roots are conjugate elements in \Q\left[i\sqrt{3}\,\right], with minimal polynomial x^2 + x + 1.
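As a quick verification (an addition, not part of the excerpt): writing \omega = -\tfrac{1}{2} + \tfrac{\sqrt{3}}{2} i, one computes

\omega^2 = \tfrac{1}{4} - \tfrac{3}{4} - \tfrac{\sqrt{3}}{2} i = -\tfrac{1}{2} - \tfrac{\sqrt{3}}{2} i = \bar{\omega},

so \omega^2 + \omega + 1 = \bar{\omega} + \omega + 1 = -\tfrac{1}{2} - \tfrac{1}{2} + 1 = 0. Since \omega \notin \Q, the monic quadratic x^2 + x + 1 is indeed its minimal polynomial over \Q, and its two roots \omega and \bar{\omega} are each other's conjugates.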