Hyperdeterminant
In algebra, the hyperdeterminant is a generalization of the determinant. Whereas a determinant is a scalar-valued function defined on an ''n'' × ''n'' square matrix, a hyperdeterminant is defined on a multidimensional array of numbers or tensor. Like a determinant, the hyperdeterminant is a homogeneous polynomial with integer coefficients in the components of the tensor. Many other properties of determinants generalize in some way to hyperdeterminants, but unlike a determinant, the hyperdeterminant does not have a simple geometric interpretation in terms of volumes. There are at least three definitions of hyperdeterminant. The first was discovered by Arthur Cayley in 1843 and presented to the Cambridge Philosophical Society (A. Cayley, "On the theory of determinants", ''Trans. Camb. Philos. Soc.'', 1–16 (1843), https://archive.org/details/collectedmathem01caylgoog). The paper is in two parts, and Cayley's first hyperdeterminant is covered in the second part. It is usually denoted by det0. The se ...
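The excerpt breaks off before the definitions, but the smallest nontrivial case can be shown concretely. The Python (SymPy) sketch below is an added illustration, not part of the source text and not Cayley's det0: it computes the 2 × 2 × 2 hyperdeterminant which, up to normalization, equals the discriminant in t of det(A0 + t A1), where A0 and A1 are the two 2 × 2 slices of the array.

```python
# Sketch (not from the source): the 2x2x2 hyperdeterminant, obtained as the
# discriminant in t of det(A0 + t*A1), where A0 and A1 are the two 2x2 slices
# of the array a[i][j][k].
import sympy as sp

a = {(i, j, k): sp.Symbol(f"a{i}{j}{k}")
     for i in (0, 1) for j in (0, 1) for k in (0, 1)}
t = sp.Symbol("t")

A0 = sp.Matrix([[a[0, 0, 0], a[0, 0, 1]], [a[0, 1, 0], a[0, 1, 1]]])
A1 = sp.Matrix([[a[1, 0, 0], a[1, 0, 1]], [a[1, 1, 0], a[1, 1, 1]]])

p = sp.expand((A0 + t * A1).det())           # quadratic polynomial in t
hyperdet = sp.expand(sp.discriminant(p, t))  # degree-4 polynomial in the a_ijk

print(hyperdet)

# Sanity check: the hyperdeterminant vanishes on a rank-one (decomposable) tensor.
x0, x1, y0, y1, z0, z1 = sp.symbols("x0 x1 y0 y1 z0 z1")
rank_one = {a[i, j, k]: [x0, x1][i] * [y0, y1][j] * [z0, z1][k]
            for i in (0, 1) for j in (0, 1) for k in (0, 1)}
assert sp.simplify(hyperdet.subs(rank_one)) == 0
```

The printed polynomial is homogeneous of degree 4 with integer coefficients, in line with the general description above.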


Arthur Cayley
Arthur Cayley (16 August 1821 – 26 January 1895) was a prolific British mathematician who worked mostly on algebra. He helped found the modern British school of pure mathematics. As a child, Cayley enjoyed solving complex maths problems for amusement. He entered Trinity College, Cambridge, where he excelled in Greek, French, German, and Italian, as well as mathematics. He worked as a lawyer for 14 years. He postulated the Cayley–Hamilton theorem, that every square matrix is a root of its own characteristic polynomial, and verified it for matrices of order 2 and 3. He was the first to define the concept of a group in the modern way, as a set with a binary operation satisfying certain laws. Formerly, when mathematicians spoke of "groups", they had meant permutation groups. Cayley tables and Cayley graphs as well as Cayle ...
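As an added numerical illustration (not from the source text), the Python snippet below checks the Cayley–Hamilton theorem for one 2 × 2 matrix, whose characteristic polynomial is p(t) = t^2 - tr(A) t + det(A).

```python
# Sketch: verify the Cayley-Hamilton theorem numerically for a 2x2 matrix.
# For a 2x2 matrix A, the characteristic polynomial is
# p(t) = t^2 - tr(A)*t + det(A), and the theorem says p(A) = 0.
import numpy as np

A = np.array([[1.0, 2.0],
              [3.0, 4.0]])
tr = np.trace(A)
det = np.linalg.det(A)

p_of_A = A @ A - tr * A + det * np.eye(2)
print(p_of_A)                    # approximately the zero matrix
assert np.allclose(p_of_A, 0.0)
```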


Algebra
Algebra is one of the broad areas of mathematics. Roughly speaking, algebra is the study of mathematical symbols and the rules for manipulating these symbols in formulas; it is a unifying thread of almost all of mathematics. Elementary algebra deals with the manipulation of variables (commonly represented by Roman letters) as if they were numbers and is therefore essential in all applications of mathematics. Abstract algebra is the name given, mostly in education, to the study of algebraic structures such as groups, rings, and fields (the term is no longer in common use outside an educational context). Linear algebra, which deals with linear equations and linear mappings, is used for modern presentations of geometry, and has many practical applications (in weather forecasting, for example). There are many areas of mathematics that belong to algebra, some having "algebra" in their name, such as commutative algebra, and some not, such as Galois theory. The word ''algebra'' is ...


Prime Number
A prime number (or a prime) is a natural number greater than 1 that is not a product of two smaller natural numbers. A natural number greater than 1 that is not prime is called a composite number. For example, 5 is prime because the only ways of writing it as a product, 1 × 5 or 5 × 1, involve 5 itself. However, 4 is composite because it is a product (2 × 2) in which both numbers are smaller than 4. Primes are central in number theory because of the fundamental theorem of arithmetic: every natural number greater than 1 is either a prime itself or can be factorized as a product of primes that is unique up to their order. The property of being prime is called primality. A simple but slow method of checking the primality of a given number n, called trial division, tests whether n is a multiple of any integer between 2 and \sqrt{n}. Faster algorithms include the Miller–Rabin primality test, which is fast but has a small chance of error, and the AKS primality test, which always pr ...
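The trial-division method described above translates directly into code; the following Python sketch (an added illustration) tests divisors up to \sqrt{n}.

```python
# Sketch of trial division: n > 1 is prime unless some integer d with
# 2 <= d <= sqrt(n) divides it.
import math

def is_prime(n: int) -> bool:
    if n < 2:
        return False
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return False      # found a nontrivial divisor, so n is composite
    return True

print([p for p in range(2, 30) if is_prime(p)])
# [2, 3, 5, 7, 11, 13, 17, 19, 23, 29]
```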


Least Common Multiple
In arithmetic and number theory, the least common multiple, lowest common multiple, or smallest common multiple of two integers ''a'' and ''b'', usually denoted by lcm(''a'', ''b''), is the smallest positive integer that is divisible by both ''a'' and ''b''. Since division of integers by zero is undefined, this definition has meaning only if ''a'' and ''b'' are both different from zero. However, some authors define lcm(''a'',0) as 0 for all ''a'', since 0 is the only common multiple of ''a'' and 0. The lcm is the "lowest common denominator" (lcd) that can be used before fractions can be added, subtracted or compared. The least common multiple of more than two integers ''a'', ''b'', ''c'', . . . , usually denoted by lcm(''a'', ''b'', ''c'', . . .), is also well defined: It is the smallest positive integer that is divisible by each of ''a'', ''b'', ''c'', . . . Overview A multiple of a number is the product of that number and an integer. For example, 10 is a ...
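A common way to compute the lcm in practice uses the identity lcm(a, b) · gcd(a, b) = |a·b|; the Python sketch below (an added illustration, using the lcm(a, 0) = 0 convention mentioned above) does exactly that.

```python
# Sketch: lcm(a, b) via the identity lcm(a, b) * gcd(a, b) = |a * b|,
# with the convention lcm(a, 0) = 0.
import math

def lcm(a: int, b: int) -> int:
    if a == 0 or b == 0:
        return 0
    return abs(a * b) // math.gcd(a, b)

print(lcm(4, 6))    # 12
print(lcm(21, 6))   # 42
print(lcm(5, 0))    # 0
```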


Matrix (mathematics)
In mathematics, a matrix (plural matrices) is a rectangular array or table of numbers, symbols, or expressions, arranged in rows and columns, which is used to represent a mathematical object or a property of such an object. For example, \begin{bmatrix}1 & 9 & -13 \\ 20 & 5 & -6 \end{bmatrix} is a matrix with two rows and three columns. This is often referred to as a "two by three matrix", a "2 × 3 matrix", or a matrix of dimension 2 × 3. Without further specifications, matrices represent linear maps, and allow explicit computations in linear algebra. Therefore, the study of matrices is a large part of linear algebra, and most properties and operations of abstract linear algebra can be expressed in terms of matrices. For example, matrix multiplication represents composition of linear maps. Not all matrices are related to linear algebra. This is, in particular, the case in graph theory, of incidence matrices, and adjacency matrices. ''This article focuses on matrices related to linear algebra, and, unle ...
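The statement that matrix multiplication represents composition of linear maps can be checked numerically; the following Python (NumPy) sketch, an added illustration using the 2 × 3 matrix from the text and an arbitrarily chosen 2 × 2 matrix, does so.

```python
# Sketch: matrix multiplication represents composition of linear maps.
# Applying A and then B to a vector x is the same as applying the product B @ A.
import numpy as np

A = np.array([[1, 9, -13],
              [20, 5, -6]])   # the 2x3 matrix from the text: maps R^3 -> R^2
B = np.array([[2, 0],
              [1, 3]])        # an arbitrary 2x2 matrix: maps R^2 -> R^2

x = np.array([1, 2, 3])
assert np.array_equal(B @ (A @ x), (B @ A) @ x)   # composition = product
print(B @ A)                                      # 2x3 matrix of the composed map
```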


Dual Space
In mathematics, any vector space ''V'' has a corresponding dual vector space (or just dual space for short) consisting of all linear forms on ''V'', together with the vector space structure of pointwise addition and scalar multiplication by constants. The dual space as defined above is defined for all vector spaces, and to avoid ambiguity may also be called the ''algebraic dual space''. When defined for a topological vector space, there is a subspace of the dual space, corresponding to continuous linear functionals, called the ''continuous dual space''. Dual vector spaces find application in many branches of mathematics that use vector spaces, such as in tensor analysis with finite-dimensional vector spaces. When applied to vector spaces of functions (which are typically infinite-dimensional), dual spaces are used to describe measures, distributions, and Hilbert spaces. Consequently, the dual space is an important concept in functional analysis. Early terms for ''dual'' include ''polarer Raum'' [Hahn 1927] ...
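For a finite-dimensional space such as \R^3, every linear form is represented by a row vector, and the pointwise operations on the dual space correspond to ordinary row-vector arithmetic; the Python sketch below is an added illustration of this.

```python
# Sketch: on R^3, each linear form is represented by a (row) vector, and the
# dual-space operations (pointwise addition, scalar multiplication) match the
# corresponding row-vector operations.
import numpy as np

f = np.array([1.0, -2.0, 0.5])   # the linear form f(v) = v1 - 2*v2 + 0.5*v3
g = np.array([0.0, 1.0, 4.0])

v = np.array([3.0, 1.0, 2.0])

# (2f + g)(v) computed pointwise equals the form represented by the row 2f + g.
assert np.isclose(2 * f.dot(v) + g.dot(v), (2 * f + g).dot(v))
print((2 * f + g).dot(v))
```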


Vector Space
In mathematics and physics, a vector space (also called a linear space) is a set whose elements, often called ''vectors'', may be added together and multiplied ("scaled") by numbers called '' scalars''. Scalars are often real numbers, but can be complex numbers or, more generally, elements of any field. The operations of vector addition and scalar multiplication must satisfy certain requirements, called ''vector axioms''. The terms real vector space and complex vector space are often used to specify the nature of the scalars: real coordinate space or complex coordinate space. Vector spaces generalize Euclidean vectors, which allow modeling of physical quantities, such as forces and velocity, that have not only a magnitude, but also a direction. The concept of vector spaces is fundamental for linear algebra, together with the concept of matrix, which allows computing in vector spaces. This provides a concise and synthetic way for manipulating and studying systems of linear eq ...
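As an added spot check (not from the source text), the following Python snippet verifies a few of the vector-space axioms for coordinatewise addition and scaling in \R^3.

```python
# Sketch: vectors in R^3 under coordinatewise addition and scaling satisfy the
# vector-space axioms; three of them are spot-checked numerically here.
import numpy as np

u = np.array([1.0, -2.0, 0.5])
v = np.array([3.0, 0.0, 4.0])
a, b = 2.0, -1.5

assert np.allclose(u + v, v + u)                 # commutativity of addition
assert np.allclose(a * (u + v), a * u + a * v)   # distributivity over vector addition
assert np.allclose((a + b) * u, a * u + b * u)   # distributivity over scalar addition
print(a * (u + v))
```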


Dimension (vector Space)
In mathematics, the dimension of a vector space ''V'' is the cardinality (i.e., the number of vectors) of a basis of ''V'' over its base field. It is sometimes called Hamel dimension (after Georg Hamel) or algebraic dimension to distinguish it from other types of dimension. For every vector space there exists a basis, and all bases of a vector space have equal cardinality; as a result, the dimension of a vector space is uniquely defined. We say V is ''finite-dimensional'' if the dimension of V is finite, and ''infinite-dimensional'' if its dimension is infinite. The dimension of the vector space V over the field F can be written as \dim_F(V) or as [V : F], read "dimension of V over F". When F can be inferred from context, \dim(V) is typically written. Examples The vector space \R^3 has \left\{ \begin{pmatrix}1\\0\\0\end{pmatrix}, \begin{pmatrix}0\\1\\0\end{pmatrix}, \begin{pmatrix}0\\0\\1\end{pmatrix} \right\} as a standard basis, and therefore \dim_{\R}(\R^3) = 3. More generally, \dim_{\R}(\R^n) = n, and even more generally, \dim_{F}(F^n) = n for any field F. The complex numbers \Complex are both a real and complex vector space; we have ...
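Numerically, the dimension of a subspace spanned by finitely many vectors can be read off as the rank of the matrix having those vectors as columns; the Python sketch below (an added illustration) confirms \dim_{\R}(\R^3) = 3 for two different bases.

```python
# Sketch: the standard basis of R^3 has 3 independent vectors, so dim_R(R^3) = 3;
# the number of independent vectors is the rank of the matrix whose columns are
# the basis vectors.
import numpy as np

standard_basis = np.eye(3)                    # columns e1, e2, e3
print(np.linalg.matrix_rank(standard_basis))  # 3 = dim of R^3

# Any other basis of R^3 has the same cardinality, e.g. these three vectors:
other_basis = np.array([[1.0, 1.0, 0.0],
                        [0.0, 1.0, 1.0],
                        [0.0, 0.0, 1.0]])
print(np.linalg.matrix_rank(other_basis))     # also 3
```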


Partial Derivative
In mathematics, a partial derivative of a function of several variables is its derivative with respect to one of those variables, with the others held constant (as opposed to the total derivative, in which all variables are allowed to vary). Partial derivatives are used in vector calculus and differential geometry. The partial derivative of a function f(x, y, \dots) with respect to the variable x is variously denoted by f_x, f'_x, \partial_x f, or \frac{\partial f}{\partial x}. It can be thought of as the rate of change of the function in the x-direction. Sometimes, for z=f(x, y, \ldots), the partial derivative of z with respect to x is denoted as \tfrac{\partial z}{\partial x}. Since a partial derivative generally has the same arguments as the original function, its functional dependence is sometimes explicitly signified by the notation, such as in: :f'_x(x, y, \ldots), \frac{\partial f}{\partial x} (x, y, \ldots). The symbol used to denote partial derivatives is ∂. One of the first known uses of this symbol in mathematics is by Marquis de Condorcet from 1770, who used it for ...
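As an added worked example (not from the source text), the following SymPy snippet computes both partial derivatives of a two-variable function, holding the other variable constant in each case.

```python
# Sketch: partial derivatives of f(x, y) = x**2 * y + sin(y), computed with SymPy.
# d f / d x treats y as a constant; d f / d y treats x as a constant.
import sympy as sp

x, y = sp.symbols("x y")
f = x**2 * y + sp.sin(y)

print(sp.diff(f, x))   # 2*x*y
print(sp.diff(f, y))   # x**2 + cos(y)
```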


Multilinear Form
In abstract algebra and multilinear algebra, a multilinear form on a vector space V over a field K is a map :f\colon V^k \to K that is separately ''K''-linear in each of its ''k'' arguments. More generally, one can define multilinear forms on a module over a commutative ring. The rest of this article, however, will only consider multilinear forms on finite-dimensional vector spaces. A multilinear ''k''-form on V over \mathbf{R} is called a (covariant) ''k''-tensor, and the vector space of such forms is usually denoted \mathcal{L}^k(V) or \mathcal{T}^k(V). Tensor product Given a ''k''-tensor f\in\mathcal{L}^k(V) and an ''ℓ''-tensor g\in\mathcal{L}^\ell(V), a product f\otimes g\in\mathcal{L}^{k+\ell}(V), known as the tensor product, can be defined by the property : (f\otimes g)(v_1,\ldots,v_k,v_{k+1},\ldots, v_{k+\ell})=f(v_1,\ldots,v_k)\,g(v_{k+1},\ldots, v_{k+\ell}), for all v_1,\ldots,v_{k+\ell}\in V. The tensor product of multilinear forms is not commutative; however it is bilinear and associative: : f\otimes(ag_1+bg_2)=a(f\ot ...
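The defining property of the tensor product above can be implemented directly; the Python sketch below (an added illustration, with the forms represented as plain functions) builds f ⊗ g from a 1-form and a 2-form on \R^2 and spot-checks the property on sample vectors.

```python
# Sketch: the tensor product of a k-form and an l-form, represented here as
# Python functions of k (resp. l) vectors, plus a numerical spot check on R^2.
import numpy as np

def tensor_product(f, k, g, l):
    """(f (x) g)(v_1,...,v_{k+l}) = f(v_1,...,v_k) * g(v_{k+1},...,v_{k+l})."""
    return lambda *v: f(*v[:k]) * g(*v[k:k + l])

# A 1-form and a 2-form on R^2 (the 2-form here is the 2x2 determinant).
f = lambda u: u[0] + 2 * u[1]
g = lambda u, v: u[0] * v[1] - u[1] * v[0]

fg = tensor_product(f, 1, g, 2)    # a 3-form on R^2

u, v, w = np.array([1., 2.]), np.array([0., 1.]), np.array([3., -1.])
assert np.isclose(fg(u, v, w), f(u) * g(v, w))
print(fg(u, v, w))
```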


Levi-Civita Symbol
In mathematics, particularly in linear algebra, tensor analysis, and differential geometry, the Levi-Civita symbol or Levi-Civita epsilon represents a collection of numbers, defined from the sign of a permutation of the natural numbers 1, 2, \dots, n, for some positive integer n. It is named after the Italian mathematician and physicist Tullio Levi-Civita. Other names include the permutation symbol, antisymmetric symbol, or alternating symbol, which refer to its antisymmetric property and definition in terms of permutations. The standard letters to denote the Levi-Civita symbol are the Greek lower case epsilon \varepsilon or \epsilon, or less commonly the Latin lower case e. Index notation allows one to display permutations in a way compatible with tensor analysis: \varepsilon_{i_1 i_2 \dots i_n} where ''each'' index i_1, i_2, \dots, i_n takes values 1, 2, \dots, n. There are n^n indexed values of \varepsilon_{i_1 i_2 \dots i_n}, which can be arranged into an n-dimensional array. The key defining property of the symbol is ''total antisymmetry'' in the indices. When any two indices are interchanged, e ...
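The symbol can be generated directly from the sign of a permutation; the Python sketch below (an added illustration, using 0-based indices 0, ..., n-1 rather than 1, ..., n) builds the n = 3 array and checks total antisymmetry.

```python
# Sketch: the Levi-Civita symbol for n = 3, built from the sign of a permutation;
# it is +1 for even permutations, -1 for odd ones, and 0 if any index repeats.
import numpy as np

def levi_civita(*indices):
    n = len(indices)
    if len(set(indices)) != n:
        return 0                 # repeated index: total antisymmetry forces 0
    sign = 1
    idx = list(indices)
    for i in range(n):           # sort by transpositions and track their parity
        for j in range(i + 1, n):
            if idx[i] > idx[j]:
                idx[i], idx[j] = idx[j], idx[i]
                sign = -sign
    return sign

eps = np.array([[[levi_civita(i, j, k) for k in range(3)]
                 for j in range(3)] for i in range(3)])
print(eps[0, 1, 2], eps[1, 0, 2], eps[0, 0, 1])   # 1 -1 0

# Swapping two indices flips the sign (total antisymmetry).
assert levi_civita(2, 1, 0) == -levi_civita(0, 1, 2)
```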


Einstein Convention
In mathematics, especially the usage of linear algebra in mathematical physics, Einstein notation (also known as the Einstein summation convention or Einstein summation notation) is a notational convention that implies summation over a set of indexed terms in a formula, thus achieving brevity. As part of mathematics it is a notational subset of Ricci calculus; however, it is often used in physics applications that do not distinguish between tangent and cotangent spaces. It was introduced to physics by Albert Einstein in 1916. Introduction Statement of convention According to this convention, when an index variable appears twice in a single term and is not otherwise defined (see Free and bound variables), it implies summation of that term over all the values of the index. So where the indices can range over the set \{1, 2, 3\}, : y = \sum_{i=1}^3 c_i x^i = c_1 x^1 + c_2 x^2 + c_3 x^3 is simplified by the convention to: : y = c_i x^i The upper indices are not exponents but are in ...
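The example y = c_i x^i above can be evaluated either as an explicit sum or with numpy.einsum, which implements the repeated-index summation convention; the snippet below (an added illustration) shows both.

```python
# Sketch: the summation y = c_i x^i from the text, written both as an explicit
# sum and with numpy.einsum, which sums over the repeated index.
import numpy as np

c = np.array([2.0, -1.0, 3.0])
x = np.array([1.0, 4.0, 2.0])

y_explicit = sum(c[i] * x[i] for i in range(3))
y_einsum = np.einsum("i,i->", c, x)     # repeated index i is summed over

assert np.isclose(y_explicit, y_einsum)
print(y_einsum)                         # 4.0
```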