Incidence Algebra
In order theory, a field of mathematics, an incidence algebra is an associative algebra, defined for every locally finite partially ordered set and commutative ring with unity. Subalgebras called reduced incidence algebras give a natural construction of various types of generating functions used in combinatorics and number theory.

Definition
A locally finite poset is one in which every closed interval [''a'', ''b''] = \{ x : a \le x \le b \} is finite. The members of the incidence algebra are the functions ''f'' assigning to each nonempty interval [''a'', ''b''] a scalar ''f''(''a'', ''b''), which is taken from the ''ring of scalars'', a commutative ring with unity. On this underlying set one defines addition and scalar multiplication pointwise, and "multiplication" in the incidence algebra is a convolution defined by
:(f*g)(a, b) = \sum_{a \le x \le b} f(a, x)\, g(x, b).
An incidence algebra is finite-dimensional if and only if the underlying poset is finite.

Related concepts
An incidence algebra is analogous to a group algebr ...
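To make the convolution concrete, here is a minimal Python sketch (illustrative only, not part of the article) of the incidence algebra of the divisibility poset on the divisors of 12; the names conv, delta and zeta are ad hoc, and zeta is the standard all-ones function on intervals rather than something defined in the excerpt above.

# Divisibility poset on the divisors of 12; "a <= b" means "a divides b".
elements = [1, 2, 3, 4, 6, 12]

def leq(a, b):
    # Partial order of the poset: a divides b.
    return b % a == 0

def interval(a, b):
    # Closed interval [a, b] = {x : a <= x <= b}; finite because the poset is locally finite.
    return [x for x in elements if leq(a, x) and leq(x, b)]

def conv(f, g):
    # Convolution (f*g)(a, b) = sum over a <= x <= b of f(a, x) * g(x, b).
    return lambda a, b: sum(f(a, x) * g(x, b) for x in interval(a, b))

def delta(a, b):
    # Identity of the incidence algebra.
    return 1 if a == b else 0

def zeta(a, b):
    # Zeta function: 1 on every nonempty interval.
    return 1 if leq(a, b) else 0

print(conv(zeta, zeta)(1, 12))   # 6: (zeta*zeta)(a, b) counts the elements of [a, b]
print(conv(delta, zeta)(1, 12))  # 1: delta is the multiplicative identity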



Order Theory
Order theory is a branch of mathematics that investigates the intuitive notion of order using binary relations. It provides a formal framework for describing statements such as "this is less than that" or "this precedes that". This article introduces the field and provides basic definitions. A list of order-theoretic terms can be found in the order theory glossary. Background and motivation Orders are everywhere in mathematics and related fields like computer science. The first order often discussed in primary school is the standard order on the natural numbers e.g. "2 is less than 3", "10 is greater than 5", or "Does Tom have fewer cookies than Sally?". This intuitive concept can be extended to orders on other sets of numbers, such as the integers and the reals. The idea of being greater than or less than another number is one of the basic intuitions of number systems (compare with numeral systems) in general (although one usually is also interested in the actual difference ...


Group Ring
In algebra, a group ring is a free module and at the same time a ring, constructed in a natural way from any given ring and any given group. As a free module, its ring of scalars is the given ring, and its basis is the set of elements of the given group. As a ring, its addition law is that of the free module and its multiplication extends "by linearity" the given group law on the basis. Less formally, a group ring is a generalization of a given group, by attaching to each element of the group a "weighting factor" from a given ring. If the ring is commutative then the group ring is also referred to as a group algebra, for it is indeed an algebra over the given ring. A group algebra over a field has a further structure of a Hopf algebra; in this case, it is thus called a group Hopf algebra. The apparatus of group rings is especially useful in the theory of group representations. Definition Let ''G'' be a group, written multiplicatively, and let ''R'' be a ring. The group ring of ...
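As an illustration of the definition (a sketch, not taken from the article), the following Python snippet models the group ring Z[Z/3Z], storing an element as a dictionary from group elements to coefficients; addition is pointwise, and multiplication extends the group law by linearity.

# Group ring Z[Z/3Z]: an element is a dict {group element: integer coefficient}.
n = 3  # the cyclic group Z/3Z, written additively; element k stands for g^k

def add(x, y):
    # Addition is that of the free module: add coefficients pointwise.
    return {g: x.get(g, 0) + y.get(g, 0) for g in range(n)}

def mul(x, y):
    # Multiplication extends the group law by linearity:
    # (x*y)[h] = sum over g1 + g2 = h (mod 3) of x[g1] * y[g2].
    out = {g: 0 for g in range(n)}
    for g1, c1 in x.items():
        for g2, c2 in y.items():
            out[(g1 + g2) % n] += c1 * c2
    return out

a = {0: 2, 1: 1}    # 2*e + g
b = {1: 3, 2: -1}   # 3*g - g^2
print(add(a, b))    # {0: 2, 1: 4, 2: -1}
print(mul(a, b))    # {0: -1, 1: 6, 2: 1}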



Derivative
In mathematics, the derivative of a function of a real variable measures the sensitivity to change of the function value (output value) with respect to a change in its argument (input value). Derivatives are a fundamental tool of calculus. For example, the derivative of the position of a moving object with respect to time is the object's velocity: this measures how quickly the position of the object changes when time advances. The derivative of a function of a single variable at a chosen input value, when it exists, is the slope of the tangent line to the graph of the function at that point. The tangent line is the best linear approximation of the function near that input value. For this reason, the derivative is often described as the "instantaneous rate of change", the ratio of the instantaneous change in the dependent variable to that of the independent variable. Derivatives can be generalized to functions of several real variables. In this generalization, the derivativ ...
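As a quick numerical illustration (not part of the article), a central difference approximates the derivative as the slope of a short secant line through nearby points; the step size h below is an arbitrary choice.

def derivative(f, x, h=1e-6):
    # Central difference: slope of the secant through (x-h, f(x-h)) and (x+h, f(x+h)),
    # which approaches the slope of the tangent line as h -> 0.
    return (f(x + h) - f(x - h)) / (2 * h)

# Position of a falling object, s(t) = 4.9 * t**2; its derivative is the velocity 9.8 * t.
position = lambda t: 4.9 * t ** 2
print(derivative(position, 2.0))  # approximately 19.6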


Unit (ring Theory)
In algebra, a unit of a ring is an invertible element for the multiplication of the ring. That is, an element ''u'' of a ring ''R'' is a unit if there exists ''v'' in ''R'' such that vu = uv = 1, where 1 is the multiplicative identity; the element ''v'' is unique for this property and is called the multiplicative inverse of ''u''. The set of units of ''R'' forms a group under multiplication, called the group of units or unit group of ''R''. Other notations for the unit group are ''R''^×, ''U''(''R''), and ''E''(''R'') (from the German term ''Einheit''). Less commonly, the term ''unit'' is sometimes used to refer to the element 1 of the ring, in expressions like ''ring with a unit'' or ''unit ring'', and also unit matrix. Because of this ambiguity, 1 is more commonly called the "unity" or the "identity" of the ring, and the phrases "ring with unity" or a "ring with identity" may be used to emphasize that one is considering a ring instead of a rng. Examples The multiplicative identity 1 and its additive inverse −1 are always units. More generally, any root of unit ...
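For a concrete example (a sketch assuming the ring Z/nZ, a case not discussed in the excerpt above), the units of Z/nZ are exactly the residues coprime to n, and each has a multiplicative inverse.

from math import gcd

def units(n):
    # In Z/nZ, u is a unit exactly when gcd(u, n) == 1.
    return [u for u in range(1, n) if gcd(u, n) == 1]

def inverse(u, n):
    # The multiplicative inverse v with u*v = 1 (mod n); built-in pow supports this.
    return pow(u, -1, n)

print(units(12))                                  # [1, 5, 7, 11]: the unit group of Z/12Z
print([(u, inverse(u, 12)) for u in units(12)])   # each unit paired with its inverse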



Integral
In mathematics, an integral assigns numbers to functions in a way that describes displacement, area, volume, and other concepts that arise by combining infinitesimal data. The process of finding integrals is called integration. Along with differentiation, integration is a fundamental, essential operation of calculus, and serves as a tool to solve problems in mathematics and physics involving the area of an arbitrary shape, the length of a curve, and the volume of a solid, among others. The integrals enumerated here are those termed definite integrals, which can be int ...
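As an illustrative sketch (not from the article), a midpoint Riemann sum in Python approximates a definite integral by combining the areas of many thin rectangles; the number of subintervals n is an arbitrary choice.

def integrate(f, a, b, n=10_000):
    # Riemann-sum approximation of the definite integral of f over [a, b]
    # using n midpoint rectangles of width (b - a) / n.
    width = (b - a) / n
    return sum(f(a + (i + 0.5) * width) for i in range(n)) * width

# Area under f(x) = x**2 from 0 to 3 is 3**3 / 3 = 9.
print(integrate(lambda x: x ** 2, 0.0, 3.0))  # approximately 9.0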


Kronecker Delta
In mathematics, the Kronecker delta (named after Leopold Kronecker) is a function of two variables, usually just non-negative integers. The function is 1 if the variables are equal, and 0 otherwise:
:\delta_{ij} = \begin{cases} 0 & \text{if } i \neq j, \\ 1 & \text{if } i = j, \end{cases}
or with use of Iverson brackets:
:\delta_{ij} = [i = j],
where the Kronecker delta is a piecewise function of variables ''i'' and ''j''. For example, \delta_{12} = 0, whereas \delta_{33} = 1. The Kronecker delta appears naturally in many areas of mathematics, physics and engineering, as a means of compactly expressing its definition above. In linear algebra, the identity matrix has entries equal to the Kronecker delta:
:I_{ij} = \delta_{ij},
where ''i'' and ''j'' take the values 1, 2, \dots, n, and the inner product of vectors can be written as
:\mathbf{a} \cdot \mathbf{b} = \sum_{i,j=1}^n a_i \delta_{ij} b_j = \sum_{i=1}^n a_i b_i.
Here the Euclidean vectors are defined as ''n''-tuples: \mathbf{a} = (a_1, a_2, \dots, a_n) and \mathbf{b} = (b_1, b_2, \dots, b_n), and the last step is obtained by using the values of the Kronecker delta ...
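A short Python sketch (illustrative only, not part of the article) shows the delta as a two-argument function, the identity matrix built from it, and the inner product written with it.

def kronecker_delta(i, j):
    # 1 if the indices are equal, 0 otherwise.
    return 1 if i == j else 0

n = 3
identity = [[kronecker_delta(i, j) for j in range(n)] for i in range(n)]
print(identity)  # [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

# Inner product written with the delta: sum over i, j of a_i * delta_ij * b_j.
a, b = [1, 2, 3], [4, 5, 6]
dot = sum(a[i] * kronecker_delta(i, j) * b[j] for i in range(n) for j in range(n))
print(dot)  # 32, the same as sum(a[i] * b[i] for i in range(n))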



Matrix Multiplication
In mathematics, particularly in linear algebra, matrix multiplication is a binary operation that produces a matrix from two matrices. For matrix multiplication, the number of columns in the first matrix must be equal to the number of rows in the second matrix. The resulting matrix, known as the matrix product, has the number of rows of the first and the number of columns of the second matrix. The product of matrices ''A'' and ''B'' is denoted as ''AB''. Matrix multiplication was first described by the French mathematician Jacques Philippe Marie Binet in 1812, to represent the composition of linear maps that are represented by matrices. Matrix multiplication is thus a basic tool of linear algebra, and as such has numerous applications in many areas of mathematics, as well as in applied mathematics, statistics, physics, economics, and engineering. Computing matrix products is a central operation in all computational applications of linear algebra. Notation This article will use the following notati ...
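A minimal Python sketch (illustrative only, not the article's notation) shows the dimension rule and the row-by-column computation of the product.

def matmul(A, B):
    # The number of columns of A must equal the number of rows of B;
    # the product has len(A) rows and len(B[0]) columns.
    assert len(A[0]) == len(B), "inner dimensions must agree"
    return [[sum(A[i][k] * B[k][j] for k in range(len(B)))
             for j in range(len(B[0]))] for i in range(len(A))]

A = [[1, 2, 3],
     [4, 5, 6]]          # 2 x 3
B = [[7, 8],
     [9, 10],
     [11, 12]]           # 3 x 2
print(matmul(A, B))      # [[58, 64], [139, 154]], a 2 x 2 matrix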



Matrix Addition
In mathematics, matrix addition is the operation of adding two matrices by adding the corresponding entries together. However, there are other operations which could also be considered addition for matrices, such as the direct sum and the Kronecker sum. Entrywise sum Two matrices must have an equal number of rows and columns to be added. In that case, the sum of two matrices A and B will be a matrix which has the same number of rows and columns as A and B. The sum of A and B, denoted A + B, is computed by adding corresponding elements of A and B:
:\begin{align} \mathbf{A}+\mathbf{B} & = \begin{pmatrix} a_{11} & a_{12} & \cdots & a_{1n} \\ a_{21} & a_{22} & \cdots & a_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1} & a_{m2} & \cdots & a_{mn} \end{pmatrix} + \begin{pmatrix} b_{11} & b_{12} & \cdots & b_{1n} \\ b_{21} & b_{22} & \cdots & b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ b_{m1} & b_{m2} & \cdots & b_{mn} \end{pmatrix} \\ & = \begin{pmatrix} a_{11}+b_{11} & a_{12}+b_{12} & \cdots & a_{1n}+b_{1n} \\ a_{21}+b_{21} & a_{22}+b_{22} & \cdots & a_{2n}+b_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ a_{m1}+b_{m1} & a_{m2}+b_{m2} & \c ...
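A small Python sketch (illustrative only, not part of the article) computes the entrywise sum and checks that the shapes match.

def matadd(A, B):
    # Entrywise sum: A and B must have the same number of rows and columns.
    assert len(A) == len(B) and len(A[0]) == len(B[0]), "shapes must match"
    return [[A[i][j] + B[i][j] for j in range(len(A[0]))] for i in range(len(A))]

A = [[1, 3], [1, 0], [1, 2]]
B = [[0, 0], [7, 5], [2, 1]]
print(matadd(A, B))  # [[1, 3], [8, 5], [3, 3]]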




Isomorphic
In mathematics, an isomorphism is a structure-preserving mapping between two structures of the same type that can be reversed by an inverse mapping. Two mathematical structures are isomorphic if an isomorphism exists between them. The word isomorphism is derived from the Ancient Greek: ἴσος ''isos'' "equal", and μορφή ''morphe'' "form" or "shape". The interest in isomorphisms lies in the fact that two isomorphic objects have the same properties (excluding further information such as additional structure or names of objects). Thus isomorphic structures cannot be distinguished from the point of view of structure only, and may be identified. In mathematical jargon, one says that two objects are the same up to an isomorphism. An automorphism is an isomorphism from a structure to itself. An isomorphism between two structures is a canonical isomorphism (a canonical map that is an isomorphism) if there is only one isomorphism between the two structures (as it is the case for solutions of a univer ...


Upper-triangular Matrix
In mathematics, a triangular matrix is a special kind of square matrix. A square matrix is called lower triangular if all the entries ''above'' the main diagonal are zero. Similarly, a square matrix is called upper triangular if all the entries ''below'' the main diagonal are zero. Because matrix equations with triangular matrices are easier to solve, they are very important in numerical analysis. By the LU decomposition algorithm, an invertible matrix may be written as the product of a lower triangular matrix ''L'' and an upper triangular matrix ''U'' if and only if all its leading principal minors are non-zero. Description A matrix of the form
:L = \begin{pmatrix} \ell_{1,1} & & & & 0 \\ \ell_{2,1} & \ell_{2,2} & & & \\ \ell_{3,1} & \ell_{3,2} & \ddots & & \\ \vdots & \vdots & \ddots & \ddots & \\ \ell_{n,1} & \ell_{n,2} & \ldots & \ell_{n,n-1} & \ell_{n,n} \end{pmatrix}
is called a lower triangular matrix or left triangular matrix, ...
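As an illustration of why triangular systems are easy to solve (a sketch, not part of the article), forward substitution solves Lx = b for a lower triangular L one unknown at a time.

def forward_substitution(L, b):
    # Solve L x = b for lower triangular L by computing x[0], x[1], ... in order;
    # each step uses only entries on or below the diagonal.
    n = len(b)
    x = [0.0] * n
    for i in range(n):
        x[i] = (b[i] - sum(L[i][j] * x[j] for j in range(i))) / L[i][i]
    return x

L = [[2.0, 0.0, 0.0],
     [3.0, 1.0, 0.0],
     [1.0, -1.0, 4.0]]
print(forward_substitution(L, [2.0, 4.0, 8.0]))  # [1.0, 1.0, 2.0]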



Matrix (mathematics)
In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example,
:\begin{pmatrix} 1 & 9 & -13 \\ 20 & 5 & -6 \end{pmatrix}
is a matrix with two rows and three columns. This is often referred to as a "two by three matrix", a "2 × 3 matrix", or a matrix of dimension 2 × 3. Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra. Therefore, the study of matrices is a large part of linear algebra, and most properties and operations of abstract linear algebra can be expressed in terms of matrices. For example, matrix multiplication represents composition of linear maps. Not all matrices are related to linear algebra. This is, in particular, the case in graph theory, of incidence matrices, and adjacency matrices. ''This article focuses on matrices related to linear algebra, and, unle ...
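As an illustrative sketch (not part of the article), the following Python snippet stores the 2 × 3 example above as a list of rows and checks that matrix multiplication corresponds to composition of linear maps.

# The article's 2 x 3 example, stored as a list of rows.
M = [[1, 9, -13],
     [20, 5, -6]]

def apply(M, v):
    # The linear map represented by M sends a vector v to M v.
    return [sum(row[j] * v[j] for j in range(len(v))) for row in M]

print(apply(M, [1, 0, 0]))    # [1, 20], the first column of M

# Composition of linear maps corresponds to matrix multiplication:
# applying B then A to v gives the same result as applying the product AB.
A = [[0, 1], [1, 0]]          # swap coordinates in R^2
B = [[2, 0, 0], [0, 3, 0]]    # scale and project R^3 -> R^2
v = [1, 1, 1]
print(apply(A, apply(B, v)))  # [3, 2]
AB = [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(3)] for i in range(2)]
print(apply(AB, v))           # [3, 2], the same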



Category (mathematics)
In mathematics, a category (sometimes called an abstract category to distinguish it from a concrete category) is a collection of "objects" that are linked by "arrows". A category has two basic properties: the ability to compose the arrows associatively and the existence of an identity arrow for each object. A simple example is the category of sets, whose objects are sets and whose arrows are functions. '' Category theory'' is a branch of mathematics that seeks to generalize all of mathematics in terms of categories, independent of what their objects and arrows represent. Virtually every branch of modern mathematics can be described in terms of categories, and doing so often reveals deep insights and similarities between seemingly different areas of mathematics. As such, category theory provides an alternative foundation for mathematics to set theory and other proposed axiomatic foundations. In general, the objects and arrows may be abstract entities of any kind, and the n ...