Resultant

In mathematics, the resultant of two polynomials is a polynomial expression of their coefficients, which is equal to zero if and only if the polynomials have a common root (possibly in a field extension), or, equivalently, a common factor (over their field of coefficients). In some older texts, the resultant is also called the eliminant.

The resultant is widely used in number theory, either directly or through the discriminant, which is essentially the resultant of a polynomial and its derivative. The resultant of two polynomials with rational or polynomial coefficients may be computed efficiently on a computer. It is a basic tool of computer algebra, and is a built-in function of most computer algebra systems. It is used, among others, for cylindrical algebraic decomposition, integration of rational functions and drawing of curves defined by a bivariate polynomial equation.

The resultant of ''n'' homogeneous polynomials in ''n'' variables (also called multivariate resultant, or Macaulay's resultant for distinguishing it from the usual resultant) is a generalization, introduced by Macaulay, of the usual resultant. It is, with Gröbner bases, one of the main tools of elimination theory.


Notation

The resultant of two univariate polynomials ''A'' and ''B'' is commonly denoted \operatorname{res}(A,B) or \operatorname{Res}(A,B). In many applications of the resultant, the polynomials depend on several indeterminates and may be considered as univariate polynomials in one of their indeterminates, with polynomials in the other indeterminates as coefficients. In this case, the indeterminate that is selected for defining and computing the resultant is indicated as a subscript: \operatorname{res}_x(A,B) or \operatorname{Res}_x(A,B). The degrees of the polynomials are used in the definition of the resultant. However, a polynomial of degree ''d'' may also be considered as a polynomial of higher degree where the leading coefficients are zero. If such a higher degree is used for the resultant, it is usually indicated as a subscript or a superscript, such as \operatorname{res}_{d,e}(A,B) or \operatorname{res}_x^{d,e}(A,B).


Definition

The resultant of two univariate polynomials over a field or over a commutative ring is commonly defined as the determinant of their Sylvester matrix. More precisely, let
:A=a_0x^d +a_1x^{d-1} + \cdots + a_d
and
:B=b_0x^e +b_1x^{e-1} + \cdots + b_e
be nonzero polynomials of degrees ''d'' and ''e'' respectively. Let us denote by \mathcal{P}_i the vector space (or free module if the coefficients belong to a commutative ring) of dimension ''i'' whose elements are the polynomials of degree strictly less than ''i''. The map
:\varphi:\mathcal{P}_e\times \mathcal{P}_d \rightarrow \mathcal{P}_{d+e}
such that
:\varphi(P,Q)=AP+BQ
is a linear map between two spaces of the same dimension. Over the basis of the powers of ''x'' (listed in descending order), this map is represented by a square matrix of dimension d+e, which is called the ''Sylvester matrix'' of ''A'' and ''B'' (for many authors and in the article Sylvester matrix, the Sylvester matrix is defined as the transpose of this matrix; this convention is not used here, as it breaks the usual convention for writing the matrix of a linear map).

The resultant of ''A'' and ''B'' is thus the determinant
:\begin{vmatrix} a_0 & 0 & \cdots & 0 & b_0 & 0 & \cdots & 0 \\ a_1 & a_0 & \cdots & 0 & b_1 & b_0 & \cdots & 0 \\ a_2 & a_1 & \ddots & 0 & b_2 & b_1 & \ddots & 0 \\ \vdots &\vdots & \ddots & a_0 & \vdots &\vdots & \ddots & b_0 \\ a_d & a_{d-1} & \cdots & \vdots & b_e & b_{e-1} & \cdots & \vdots\\ 0 & a_d & \ddots & \vdots & 0 & b_e & \ddots & \vdots \\ \vdots & \vdots & \ddots & a_{d-1} & \vdots & \vdots & \ddots & b_{e-1} \\ 0 & 0 & \cdots & a_d & 0 & 0 & \cdots & b_e \end{vmatrix},
which has ''e'' columns of a_i and ''d'' columns of b_j (the fact that the first column of ''a''s and the first column of ''b''s have the same length, that is d=e, is here only for simplifying the display of the determinant). For instance, taking d=3 and e=2 we get
:\begin{vmatrix} a_0 & 0 & b_0 & 0 & 0 \\ a_1 & a_0 & b_1 & b_0 & 0 \\ a_2 & a_1 & b_2 & b_1 & b_0 \\ a_3 & a_2 & 0 & b_2 & b_1 \\ 0 & a_3 & 0 & 0 & b_2 \end{vmatrix}.

If the coefficients of the polynomials belong to an integral domain, then
:\operatorname{res}(A, B) = a_0^e b_0^d \prod_{i=1}^{d}\prod_{j=1}^{e} (\lambda_i-\mu_j) = a_0^e \prod_{i=1}^d B(\lambda_i) = (-1)^{de}\, b_0^d \prod_{j=1}^e A(\mu_j),
where \lambda_1, \dots, \lambda_d and \mu_1,\dots,\mu_e are respectively the roots, counted with their multiplicities, of ''A'' and ''B'' in any algebraically closed field containing the integral domain. This is a straightforward consequence of the characterizing properties of the resultant that appear below. In the common case of integer coefficients, the algebraically closed field is generally chosen as the field of complex numbers.
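
As a concrete illustration of this definition, the following short Python sketch (not part of the original article; the function names are ours) builds the Sylvester matrix from coefficient lists and evaluates its determinant with exact rational arithmetic.

```python
from fractions import Fraction

def sylvester_matrix(A, B):
    """Sylvester matrix of A and B (coefficient lists, highest degree first):
    e shifted copies of A followed by d shifted copies of B, as columns."""
    d, e = len(A) - 1, len(B) - 1
    n = d + e
    M = [[Fraction(0)] * n for _ in range(n)]
    for j in range(e):                      # columns 0 .. e-1 hold copies of A
        for i, a in enumerate(A):
            M[i + j][j] = Fraction(a)
    for j in range(d):                      # columns e .. e+d-1 hold copies of B
        for i, b in enumerate(B):
            M[i + j][e + j] = Fraction(b)
    return M

def det(M):
    """Determinant by Gaussian elimination with exact rational arithmetic."""
    M = [row[:] for row in M]
    n, sign = len(M), 1
    for k in range(n):
        pivot = next((i for i in range(k, n) if M[i][k] != 0), None)
        if pivot is None:
            return Fraction(0)
        if pivot != k:
            M[k], M[pivot] = M[pivot], M[k]
            sign = -sign
        for i in range(k + 1, n):
            factor = M[i][k] / M[k][k]
            for j in range(k, n):
                M[i][j] -= factor * M[k][j]
    result = Fraction(sign)
    for k in range(n):
        result *= M[k][k]
    return result

def resultant(A, B):
    return det(sylvester_matrix(A, B))

# res(x^3 - 2x + 5, 2x^2 + 3x - 1) = 104
print(resultant([1, 0, -2, 5], [2, 3, -1]))
```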


Properties

In this section and its subsections, ''A'' and ''B'' are two polynomials in ''x'' of respective degrees ''d'' and ''e'', and their resultant is denoted \operatorname{res}(A,B).


Characterizing properties

The following properties hold for the resultant of two polynomials with coefficients in a commutative ring ''R''. If ''R'' is a field or, more generally, an integral domain, the resultant is the unique function of the coefficients of two polynomials that satisfies these properties.
* If ''R'' is a subring of another ring ''S'', then \operatorname{res}_R(A,B) = \operatorname{res}_S(A,B). That is, ''A'' and ''B'' have the same resultant when considered as polynomials over ''R'' or over ''S''.
* If d=0 (that is, if A=a_0 is a nonzero constant) then \operatorname{res}(A,B) = a_0^e. Similarly, if e=0, then \operatorname{res}(A,B) = b_0^d.
* \operatorname{res}(x+a_1, x+b_1) = b_1-a_1
* \operatorname{res}(B,A)=(-1)^{de} \operatorname{res}(A,B)
* \operatorname{res}(AB,C) = \operatorname{res}(A,C)\operatorname{res}(B,C)
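
As an illustration of the third property, for d=e=1 the Sylvester matrix is 2\times 2 and the definition gives directly
:\operatorname{res}(x+a_1, x+b_1) = \begin{vmatrix} 1 & 1 \\ a_1 & b_1 \end{vmatrix} = b_1-a_1.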


Zeros

* The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common divisor of positive degree.
* The resultant of two polynomials with coefficients in an integral domain is zero if and only if they have a common root in an algebraically closed field containing the coefficients.
* There exists a polynomial ''P'' of degree less than ''e'' and a polynomial ''Q'' of degree less than ''d'' such that \operatorname{res}(A,B)=AP+BQ. This is a generalization of Bézout's identity to polynomials over an arbitrary commutative ring. In other words, the resultant of two polynomials belongs to the ideal generated by these polynomials.
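
For example, with A = x^2-1 and B = x^2-4 one gets \operatorname{res}(A,B) = B(1)\,B(-1) = 9, and the constant polynomials P=3 and Q=-3 satisfy AP+BQ = 3(x^2-1)-3(x^2-4) = 9 = \operatorname{res}(A,B).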


Invariance by ring homomorphisms

Let ''A'' and ''B'' be two polynomials of respective degrees ''d'' and ''e'' with coefficients in a commutative ring ''R'', and \varphi\colon R\to S a ring homomorphism of ''R'' into another commutative ring ''S''. Applying \varphi to the coefficients of a polynomial extends \varphi to a homomorphism of polynomial rings R[x] \to S[x], which is also denoted \varphi. With this notation, we have:
* If \varphi preserves the degrees of ''A'' and ''B'' (that is, if \deg(\varphi(A)) = d and \deg(\varphi(B))= e), then
::\varphi(\operatorname{res}(A,B))=\operatorname{res}(\varphi(A), \varphi(B)).
* If \deg(\varphi(A)) < d and \deg(\varphi(B))< e, then
::\varphi(\operatorname{res}(A,B)) = 0.
* If \deg(\varphi(A)) = d and \deg(\varphi(B)) =f < e, and the leading coefficient of ''A'' is a_0, then
::\varphi(\operatorname{res}(A,B))=\varphi(a_0)^{e-f}\operatorname{res}(\varphi(A), \varphi(B)).
* If \deg(\varphi(A)) = f < d and \deg(\varphi(B)) = e, and the leading coefficient of ''B'' is b_0, then
::\varphi(\operatorname{res}(A,B)) = (-1)^{e(d-f)}\varphi(b_0)^{d-f}\operatorname{res}(\varphi(A), \varphi(B)).
These properties are easily deduced from the definition of the resultant as a determinant. They are mainly used in two situations. For computing a resultant of polynomials with integer coefficients, it is generally faster to compute it modulo several primes and to retrieve the desired resultant with the Chinese remainder theorem. When ''R'' is a polynomial ring in other indeterminates, and ''S'' is the ring obtained by specializing to numerical values some or all indeterminates of ''R'', these properties may be restated as ''if the degrees are preserved by the specialization, the resultant of the specialization of two polynomials is the specialization of the resultant''. This property is fundamental, for example, for cylindrical algebraic decomposition.
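
For instance, for A = x^2+t and B = x+1 over R=\mathbb{Q}[t], one has \operatorname{res}_x(A,B) = A(-1) = t+1; the specialization t\mapsto 3 preserves the degrees, and indeed \operatorname{res}_x(x^2+3, x+1) = 4 = 3+1.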


Invariance under change of variable

*\operatorname{res}(A(x+a), B(x+a)) = \operatorname{res}(A(x), B(x))
*\operatorname{res}(A(ax), B(ax)) = a^{de}\operatorname{res}(A(x), B(x))
* If A_r(x)=x^dA(1/x) and B_r(x)=x^eB(1/x) are the reciprocal polynomials of ''A'' and ''B'', respectively, then
::\operatorname{res}(A_r, B_r)= (-1)^{de}\operatorname{res}(A,B)
This means that the property of the resultant being zero is invariant under linear and projective changes of the variable.


Invariance under change of polynomials

*If ''a'' and ''b'' are nonzero constants (that is, they are independent of the indeterminate ''x''), and ''A'' and ''B'' are as above, then
::\operatorname{res}(aA,bB) =a^eb^d\operatorname{res}(A,B).
*If ''A'' and ''B'' are as above, and ''C'' is another polynomial such that the degree of A-CB is \delta, then
::\operatorname{res}(A,B)=(-1)^{e(d-\delta)}\,b_0^{d-\delta}\operatorname{res}(A-CB, B).
*In particular, if \delta = d (that is, if \deg C < d-e), then
::\operatorname{res}(A-CB,B) = \operatorname{res}(A,B),
:and, if ''B'' is monic, the two resultants are equal up to the sign (-1)^{e(d-\delta)}.
These properties imply that in the Euclidean algorithm for polynomials, and all its variants (pseudo-remainder sequences), the resultant of two successive remainders (or pseudo-remainders) differs from the resultant of the initial polynomials by a factor which is easy to compute. Conversely, this allows one to deduce the resultant of the initial polynomials from the value of the last remainder or pseudo-remainder. This is the starting idea of the subresultant-pseudo-remainder-sequence algorithm, which uses the above formulae for getting subresultant polynomials as pseudo-remainders, and the resultant as the last nonzero pseudo-remainder (provided that the resultant is not zero). This algorithm works for polynomials over the integers or, more generally, over an integral domain, without any division other than exact divisions (that is, without involving fractions). It involves O(de) arithmetic operations, while the computation of the determinant of the Sylvester matrix with standard algorithms requires O((d+e)^3) arithmetic operations. A simple fraction-based version of this Euclidean approach is sketched below.
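
The following sketch (an illustration, not the subresultant algorithm itself) computes the resultant over the rational numbers with the plain Euclidean algorithm, multiplying at each step by the correction factor given above; the subresultant pseudo-remainder variant avoids the fractions that appear here.

```python
from fractions import Fraction

def poly_rem(A, B):
    """Remainder of A divided by B (coefficient lists over the rationals, highest degree first)."""
    A = [Fraction(a) for a in A]
    while len(A) >= len(B):
        if A[0] == 0:
            A.pop(0)
            continue
        c = A[0] / Fraction(B[0])
        for i in range(len(B)):
            A[i] -= c * B[i]
        A.pop(0)
    while A and A[0] == 0:
        A.pop(0)
    return A

def resultant(A, B):
    """res(A, B) via res(A, B) = (-1)^(d e) * b0^(d - deg R) * res(B, R), with R = A mod B."""
    d, e = len(A) - 1, len(B) - 1
    b0 = Fraction(B[0])
    if e == 0:
        return b0 ** d                      # res(A, b0) = b0^d
    R = poly_rem(A, B)
    if not R:
        return Fraction(0)                  # A and B have a common factor
    r = len(R) - 1
    sign = -1 if (d * e) % 2 else 1
    return sign * b0 ** (d - r) * resultant(B, R)

# res(x^3 - 2x + 5, 2x^2 + 3x - 1) = 104, matching the Sylvester determinant
print(resultant([1, 0, -2, 5], [2, 3, -1]))
```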


Generic properties

In this section, we consider two polynomials
:A=a_0x^d +a_1x^{d-1} + \cdots + a_d
and
:B=b_0x^e +b_1x^{e-1} + \cdots + b_e
whose coefficients are distinct indeterminates. Let
:R=\mathbb{Z}[a_0, \ldots, a_d, b_0, \ldots, b_e]
be the polynomial ring over the integers defined by these indeterminates. The resultant \operatorname{res}(A,B) is often called the generic resultant for the degrees ''d'' and ''e''. It has the following properties.
*\operatorname{res}(A,B) is an absolutely irreducible polynomial.
*If I is the ideal of R[x] generated by ''A'' and ''B'', then I\cap R is the principal ideal generated by \operatorname{res}(A,B).


Homogeneity

The generic resultant for the degrees ''d'' and ''e'' is homogeneous in various ways. More precisely:
* It is homogeneous of degree ''e'' in a_0, \ldots, a_d.
* It is homogeneous of degree ''d'' in b_0, \ldots, b_e.
* It is homogeneous of degree d+e in all the variables a_i and b_j.
* If a_i and b_i are given the weight ''i'' (that is, the weight of each coefficient is its degree as elementary symmetric polynomial), then it is quasi-homogeneous of total weight ''de''.
*If ''P'' and ''Q'' are homogeneous multivariate polynomials of respective degrees ''d'' and ''e'', then their resultant in degrees ''d'' and ''e'' with respect to an indeterminate ''x'', denoted \operatorname{res}_x^{d,e}(P,Q) as in § Notation, is homogeneous of degree ''de'' in the other indeterminates.
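
For instance, the generic resultant in degrees d=e=1 is \operatorname{res}(a_0x+a_1,\,b_0x+b_1) = a_0b_1-a_1b_0: it is homogeneous of degree 1 in (a_0,a_1), of degree 1 in (b_0,b_1), of degree 2 in all four coefficients, and quasi-homogeneous of total weight 1=de when a_i and b_i are given the weight ''i''.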


Elimination property

Let I=\langle A, B\rangle be the ideal generated by two polynomials ''A'' and ''B'' in a polynomial ring R[x], where R=k[y_1,\ldots,y_n] is itself a polynomial ring over a field. If at least one of ''A'' and ''B'' is monic in ''x'', then:
* \operatorname{res}_x(A,B)\in I \cap R
* The ideals I\cap R and R\operatorname{res}_x(A,B) define the same algebraic set. That is, a tuple of elements of an algebraically closed field is a common zero of the elements of I\cap R if and only if it is a zero of \operatorname{res}_x(A,B).
* The ideal I\cap R has the same radical as the principal ideal R\operatorname{res}_x(A,B). That is, each element of I\cap R has a power that is a multiple of \operatorname{res}_x(A,B).
* All irreducible factors of \operatorname{res}_x(A,B) divide every element of I\cap R.
The first assertion is a basic property of the resultant. The other assertions are immediate corollaries of the second one, which can be proved as follows. As at least one of ''A'' and ''B'' is monic, a tuple (\beta_1,\ldots, \beta_n) is a zero of \operatorname{res}_x(A,B) if and only if there exists \alpha such that (\beta_1,\ldots, \beta_n, \alpha) is a common zero of ''A'' and ''B''. Such a common zero is also a zero of all elements of I\cap R. Conversely, if (\beta_1,\ldots, \beta_n) is a common zero of the elements of I\cap R, it is a zero of the resultant, and there exists \alpha such that (\beta_1,\ldots, \beta_n, \alpha) is a common zero of ''A'' and ''B''. So I\cap R and R\operatorname{res}_x(A,B) have exactly the same zeros.


Computation

Theoretically, the resultant could be computed by using the formula expressing it as a product of root differences. However, as the roots may generally not be computed exactly, such an algorithm would be inefficient and numerically unstable. As the resultant is a symmetric function of the roots of each polynomial, it could also be computed by using the fundamental theorem of symmetric polynomials, but this would be highly inefficient.

As the resultant is the determinant of the Sylvester matrix (and of the Bézout matrix), it may be computed by using any algorithm for computing determinants. This needs O((d+e)^3) arithmetic operations. As algorithms are known with a better complexity (see below), this method is not used in practice.

It follows from § Invariance under change of polynomials that the computation of a resultant is strongly related to the Euclidean algorithm for polynomials. This shows that the computation of the resultant of two polynomials of degrees ''d'' and ''e'' may be done in O(de) arithmetic operations in the field of coefficients. However, when the coefficients are integers, rational numbers or polynomials, these arithmetic operations imply a number of GCD computations of coefficients which is of the same order and make the algorithm inefficient. The subresultant pseudo-remainder sequences were introduced to solve this problem and avoid any fraction and any GCD computation of coefficients. A more efficient algorithm is obtained by using the good behavior of the resultant under a ring homomorphism on the coefficients: to compute a resultant of two polynomials with integer coefficients, one computes their resultants modulo sufficiently many prime numbers and then reconstructs the result with the Chinese remainder theorem. The use of fast multiplication of integers and polynomials allows algorithms for resultants and greatest common divisors that have a better time complexity, which is of the order of the complexity of the multiplication, multiplied by the logarithm of the size of the input (\log(s(d+e)), where ''s'' is an upper bound of the number of digits of the input polynomials).
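
The modular strategy can be sketched as follows (an illustrative Python fragment assuming Python 3.8+ for modular inverses, not the article's reference implementation): the resultant is computed modulo a few primes that do not divide the leading coefficients, and the integer result is recovered by the Chinese remainder theorem with a symmetric lift; in practice the number of primes is chosen from an a priori bound (for example Hadamard's bound) on the size of the resultant.

```python
def poly_rem_mod(A, B, p):
    """Remainder of A divided by B over Z/pZ (coefficient lists, highest degree first)."""
    A = [a % p for a in A]
    inv = pow(B[0] % p, -1, p)
    while len(A) >= len(B):
        if A[0] == 0:
            A.pop(0)
            continue
        c = A[0] * inv % p
        for i in range(len(B)):
            A[i] = (A[i] - c * B[i]) % p
        A.pop(0)
    while A and A[0] == 0:
        A.pop(0)
    return A

def resultant_mod(A, B, p):
    """res(A, B) mod p, assuming p divides neither leading coefficient."""
    d, e = len(A) - 1, len(B) - 1
    if e == 0:
        return pow(B[0] % p, d, p)
    R = poly_rem_mod(A, B, p)
    if not R:
        return 0
    r = len(R) - 1
    sign = p - 1 if (d * e) % 2 else 1      # (-1)^(d*e) modulo p
    return sign * pow(B[0] % p, d - r, p) * resultant_mod(B, R, p) % p

def crt(residues, moduli):
    """Combine x = r_i (mod m_i) for pairwise coprime moduli, then lift to the symmetric range."""
    x, m = 0, 1
    for r, mi in zip(residues, moduli):
        t = (r - x) * pow(m, -1, mi) % mi
        x, m = x + m * t, m * mi
    return x - m if x > m // 2 else x

A, B = [1, 0, -2, 5], [2, 3, -1]            # x^3 - 2x + 5 and 2x^2 + 3x - 1
primes = [101, 103, 107]                    # none divides the leading coefficients 1 and 2
print(crt([resultant_mod(A, B, p) for p in primes], primes))   # prints 104
```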


Application to polynomial systems

Resultants were introduced for solving systems of polynomial equations and provide the oldest proof that there exist algorithms for solving such systems. These are primarily intended for systems of two equations in two unknowns, but also allow solving general systems.


Case of two equations in two unknowns

Consider the system of two polynomial equations
:\begin{align} P(x,y)&=0\\ Q(x,y)&=0, \end{align}
where ''P'' and ''Q'' are polynomials of respective total degrees ''d'' and ''e''. Then R=\operatorname{res}_y^{d,e}(P,Q) is a polynomial in ''x'', which is generically of degree ''de'' (by the properties of § Homogeneity). A value \alpha of ''x'' is a root of ''R'' if and only if either there exists \beta in an algebraically closed field containing the coefficients, such that P(\alpha,\beta)=Q(\alpha,\beta)=0, or \deg(P(\alpha,y))<d and \deg(Q(\alpha,y))<e (in this case, one says that ''P'' and ''Q'' have a common root at infinity for x=\alpha).

Therefore, solutions to the system are obtained by computing the roots of ''R'', and for each root \alpha, computing the common root(s) of P(\alpha,y), Q(\alpha,y), and \operatorname{res}_x(P,Q).

Bézout's theorem results from the value of \deg\left(\operatorname{res}_y^{d,e}(P,Q)\right)\le de, the product of the degrees of ''P'' and ''Q''. In fact, after a linear change of variables, one may suppose that, for each root \alpha of the resultant, there is exactly one value \beta of ''y'' such that (\alpha,\beta) is a common zero of ''P'' and ''Q''. This shows that the number of common zeros is at most the degree of the resultant, that is, at most the product of the degrees of ''P'' and ''Q''. With some technicalities, this proof may be extended to show that, counting multiplicities and zeros at infinity, the number of zeros is exactly the product of the degrees.
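
For example, for the circle P = x^2+y^2-1 and the line Q = y-x, one has
:\operatorname{res}_y(P,Q) = P(x,x) = 2x^2-1,
whose roots x=\pm 1/\sqrt 2 are the ''x''-coordinates of the two intersection points of the two curves.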


General case

At first glance, it seems that resultants may be applied to a general polynomial system of equations
:P_1(x_1, \ldots, x_n)=0
:\vdots
:P_k(x_1, \ldots, x_n)=0
by computing the resultants of every pair (P_i,P_j) with respect to x_n for eliminating one unknown, and repeating the process until getting univariate polynomials. Unfortunately, this introduces many spurious solutions, which are difficult to remove.

A method, introduced at the end of the 19th century, works as follows: introduce new indeterminates U_2, \ldots, U_k and compute
:\operatorname{res}_{x_n}(P_1, U_2P_2 +\cdots +U_kP_k).
This is a polynomial in U_2, \ldots, U_k whose coefficients are polynomials in x_1, \ldots, x_{n-1}, which have the property that \alpha_1, \ldots, \alpha_{n-1} is a common zero of these polynomial coefficients if and only if the univariate polynomials P_i(\alpha_1, \ldots, \alpha_{n-1}, x_n) have a common zero, possibly at infinity. This process may be iterated until finding univariate polynomials.

To get a correct algorithm two complements have to be added to the method. Firstly, at each step, a linear change of variables may be needed in order that the degrees of the polynomials in the last variable are the same as their total degree. Secondly, if, at any step, the resultant is zero, this means that the polynomials have a common factor and that the solutions split in two components: one where the common factor is zero, and the other which is obtained by factoring out this common factor before continuing.

This algorithm is very complicated and has a huge time complexity. Therefore, its interest is mainly historical.


Other applications


Number theory

The discriminant of a polynomial, which is a fundamental tool in number theory, is the quotient by its leading coefficient of the resultant of the polynomial and its derivative.

If \alpha and \beta are algebraic numbers such that P(\alpha)=Q(\beta)=0, then \gamma=\alpha+\beta is a root of the resultant \operatorname{res}_x(P(x),Q(z-x)), and \tau = \alpha\beta is a root of \operatorname{res}_x(P(x),x^nQ(z/x)), where ''n'' is the degree of Q(y). Combined with the fact that 1/\beta is a root of y^nQ(1/y), this shows that the set of algebraic numbers is a field.

Let K(\alpha) be an algebraic field extension generated by an element \alpha which has P(x) as minimal polynomial. Every element \beta \in K(\alpha) may be written as \beta=Q(\alpha), where ''Q'' is a polynomial. Then \beta is a root of \operatorname{res}_x(P(x),z-Q(x)), and this resultant is a power of the minimal polynomial of \beta.
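
For example, with P(x) = x^2-2 and Q(x) = x^2-3, a direct computation gives
:\operatorname{res}_x\bigl(x^2-2,\,(z-x)^2-3\bigr) = z^4-10z^2+1,
whose roots are \pm\sqrt 2\pm\sqrt 3; in particular, \sqrt 2+\sqrt 3 is an algebraic number of degree 4.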


Algebraic geometry

Given two plane algebraic curves defined as the zeros of the polynomials P(x,y) and Q(x,y), the resultant allows the computation of their intersection. More precisely, the roots of \operatorname{res}_y(P,Q) are the ''x''-coordinates of the intersection points and of the common vertical asymptotes, and the roots of \operatorname{res}_x(P,Q) are the ''y''-coordinates of the intersection points and of the common horizontal asymptotes.

A rational plane curve may be defined by a parametric equation
:x=\frac{P(t)}{R(t)},\qquad y=\frac{Q(t)}{R(t)},
where ''P'', ''Q'' and ''R'' are polynomials. An implicit equation of the curve is given by
:\operatorname{res}_t(xR-P,\,yR-Q).
The ''degree'' of this curve is the highest degree of ''P'', ''Q'' and ''R'', which is equal to the total degree of the resultant.
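
For example, the parametrization x=t^2, y=t^3 (with R=1) gives
:\operatorname{res}_t(x-t^2,\,y-t^3) = x^3-y^2,
so the implicit equation of this curve is y^2=x^3 (the cuspidal cubic), in accordance with its degree 3.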


Symbolic integration

In symbolic integration, for computing the antiderivative of a rational fraction, one uses partial fraction decomposition for decomposing the integral into a "rational part", which is a sum of rational fractions whose antiprimitives are rational fractions, and a "logarithmic part" which is a sum of rational fractions of the form
:\frac{P(x)}{Q(x)},
where ''Q'' is a square-free polynomial and ''P'' is a polynomial of lower degree than ''Q''. The antiderivative of such a function necessarily involves logarithms, and generally algebraic numbers (the roots of ''Q''). In fact, the antiderivative is
:\int \frac{P(x)}{Q(x)}\,dx=\sum_{Q(\alpha)=0} \frac{P(\alpha)}{Q'(\alpha)}\log(x-\alpha),
where the sum runs over all complex roots of ''Q''.

The number of algebraic numbers involved by this expression is generally equal to the degree of ''Q'', but it occurs frequently that an expression with fewer algebraic numbers may be computed. The Lazard–Rioboo–Trager method produces an expression, where the number of algebraic numbers is minimal, without any computation with algebraic numbers.

Let
:S_1(r)\, S_2(r)^2 \cdots S_k(r)^k = \operatorname{res}_x (rQ'(x)-P(x),\, Q(x))
be the square-free factorization of the resultant which appears on the right. Trager proved that the antiderivative is
:\int \frac{P(x)}{Q(x)}\,dx=\sum_{i=1}^k\sum_{S_i(\alpha)=0} \alpha \log(T_i(\alpha,x)),
where the internal sums run over the roots of the S_i (if S_i=1 the sum is zero, as being the empty sum), and T_i(r,x) is a polynomial of degree ''i'' in ''x''. The Lazard–Rioboo contribution is the proof that T_i(r,x) is the subresultant of degree ''i'' of rQ'(x)-P(x) and Q(x). It is thus obtained for free if the resultant is computed by the subresultant pseudo-remainder sequence.
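
As a small worked illustration, take P=1 and Q=x^2-2. Then
:\operatorname{res}_x\bigl(rQ'(x)-P(x),\,Q(x)\bigr) = \operatorname{res}_x(2rx-1,\,x^2-2) = 1-8r^2,
which is square-free, with roots r=\pm\tfrac{\sqrt 2}{4}; for each root \alpha, the corresponding polynomial T_1(\alpha,x) is proportional to \gcd(Q,\,\alpha Q'-P)=x\mp\sqrt 2, so that, up to an additive constant,
:\int\frac{dx}{x^2-2} = \frac{\sqrt 2}{4}\log\bigl(x-\sqrt 2\bigr) - \frac{\sqrt 2}{4}\log\bigl(x+\sqrt 2\bigr).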


Computer algebra

All preceding applications, and many others, show that the resultant is a fundamental tool in computer algebra. In fact most computer algebra systems include an efficient implementation of the computation of resultants.


Homogeneous resultant

The resultant is also defined for two homogeneous polynomials in two indeterminates. Given two homogeneous polynomials P(x,y) and Q(x,y) of respective total degrees ''d'' and ''e'', their homogeneous resultant is the determinant of the matrix over the monomial basis of the linear map
:(A,B) \mapsto AP+BQ,
where ''A'' runs over the bivariate homogeneous polynomials of degree e-1, and ''B'' runs over the homogeneous polynomials of degree d-1. In other words, the homogeneous resultant of ''P'' and ''Q'' is the resultant of P(x,1) and Q(x,1) when they are considered as polynomials of degree ''d'' and ''e'' (their degree in ''x'' may be lower than their total degree):
:\operatorname{Res}(P(x,y),Q(x,y)) = \operatorname{res}_{d,e}(P(x,1),Q(x,1)).
(The capitalization of "Res" is used here for distinguishing the two resultants, although there is no standard rule for the capitalization of the abbreviation.)

The homogeneous resultant has essentially the same properties as the usual resultant, with essentially two differences: instead of polynomial roots, one considers zeros in the projective line, and the degree of a polynomial may not change under a ring homomorphism. That is:
* The resultant of two homogeneous polynomials over an integral domain is zero if and only if they have a non-zero common zero over an algebraically closed field containing the coefficients.
* If ''P'' and ''Q'' are two bivariate homogeneous polynomials with coefficients in a commutative ring ''R'', and \varphi\colon R\to S a ring homomorphism of ''R'' into another commutative ring ''S'', then extending \varphi to polynomials over ''R'', one has
::\operatorname{Res}(\varphi(P), \varphi(Q)) = \varphi(\operatorname{Res}(P,Q)).
* The property of a homogeneous resultant to be zero is invariant under any projective change of variables.
Any property of the usual resultant may similarly be extended to the homogeneous resultant, and the resulting property is either very similar to or simpler than the corresponding property of the usual resultant.


Macaulay's resultant

Macaulay's resultant, named after Francis Sowerby Macaulay, also called the multivariate resultant or the multipolynomial resultant, is a generalization of the homogeneous resultant to ''n'' homogeneous polynomials in ''n'' indeterminates. Macaulay's resultant is a polynomial in the coefficients of these ''n'' homogeneous polynomials that vanishes if and only if the polynomials have a common non-zero solution in an algebraically closed field containing the coefficients, or, equivalently, if the ''n'' hypersurfaces defined by the polynomials have a common zero in the (''n'' − 1)-dimensional projective space. The multivariate resultant is, with Gröbner bases, one of the main tools of effective elimination theory (elimination theory on computers).

Like the homogeneous resultant, Macaulay's resultant may be defined with determinants, and thus behaves well under ring homomorphisms. However, it cannot be defined by a single determinant. It follows that it is easier to define it first on generic polynomials.


Resultant of generic homogeneous polynomials

A homogeneous polynomial of degree ''d'' in ''n'' variables may have up to
:\binom{n+d-1}{d}=\frac{(n+d-1)!}{d!\,(n-1)!}
coefficients; it is said to be ''generic'' if these coefficients are distinct indeterminates.

Let P_1, \ldots, P_n be ''n'' generic homogeneous polynomials in ''n'' indeterminates, of respective degrees d_1, \dots, d_n. Together, they involve
:\sum_{i=1}^n\binom{n+d_i-1}{d_i}
indeterminate coefficients. Let ''C'' be the polynomial ring over the integers in all these indeterminate coefficients. The polynomials P_1, \ldots, P_n thus belong to C[x_1,\ldots, x_n], and their resultant (still to be defined) belongs to ''C''.

The Macaulay degree is the integer D=d_1+\cdots+d_n-n+1, which is fundamental in Macaulay's theory. For defining the resultant, one considers the Macaulay matrix, which is the matrix over the monomial basis of the ''C''-linear map
:(Q_1, \ldots, Q_n)\mapsto Q_1P_1+\cdots+Q_nP_n,
in which each Q_i runs over the homogeneous polynomials of degree D-d_i, and the codomain is the ''C''-module of the homogeneous polynomials of degree ''D''.

If n=2, the Macaulay matrix is the Sylvester matrix, and is a square matrix, but this is no longer true for n>2. Thus, instead of considering the determinant, one considers all the maximal minors, that is, the determinants of the square submatrices that have as many rows as the Macaulay matrix. Macaulay proved that the ''C''-ideal generated by these maximal minors is a principal ideal, which is generated by the greatest common divisor of these minors. As one is working with polynomials with integer coefficients, this greatest common divisor is defined up to its sign. The generic Macaulay resultant is the greatest common divisor which becomes 1 when, for each ''i'', zero is substituted for all coefficients of P_i, except the coefficient of x_i^{d_i}, for which one is substituted.


Properties of the generic Macaulay resultant

*The generic Macaulay resultant is an irreducible polynomial.
*It is homogeneous of degree B/d_i in the coefficients of P_i, where B=d_1 \cdots d_n is the Bézout bound.
*The product with the resultant of every monomial of degree ''D'' in x_1,\dots, x_n belongs to the ideal of C[x_1,\dots,x_n] generated by P_1,\dots,P_n.


Resultant of polynomials over a field

From now on, we consider that the homogeneous polynomials P_1,\ldots,P_n of degrees d_1,\ldots,d_n have their coefficients in a field ''k'', that is, that they belong to k[x_1,\dots,x_n]. Their resultant is defined as the element of ''k'' obtained by replacing in the generic resultant the indeterminate coefficients by the actual coefficients of the P_i.

The main property of the resultant is that it is zero if and only if P_1,\ldots,P_n have a nonzero common zero in an algebraically closed extension of ''k''. The "only if" part of this theorem results from the last property of the preceding paragraph, and is an effective version of the projective Nullstellensatz: if the resultant is nonzero, then
:\langle x_1,\ldots, x_n\rangle^D \subseteq \langle P_1,\ldots,P_n\rangle,
where D=d_1+\cdots +d_n-n+1 is the Macaulay degree, and \langle x_1,\ldots, x_n\rangle is the maximal homogeneous ideal. This implies that P_1,\ldots,P_n have no other common zero than the unique common zero, (0, \ldots, 0), of x_1,\ldots,x_n.


Computability

As the computation of a resultant may be reduced to computing determinants and polynomial greatest common divisors, there are algorithms for computing resultants in a finite number of steps. However, the generic resultant is a polynomial of very high degree (exponential in ''n'') depending on a huge number of indeterminates. It follows that, except for very small ''n'' and very small degrees of input polynomials, the generic resultant is, in practice, impossible to compute, even with modern computers. Moreover, the number of monomials of the generic resultant is so high that, if it were computable, the result could not be stored on available memory devices, even for rather small values of ''n'' and of the degrees of the input polynomials. Therefore, computing the resultant makes sense only for polynomials whose coefficients belong to a field or are polynomials in few indeterminates over a field.

In the case of input polynomials with coefficients in a field, the exact value of the resultant is rarely important; only its equality (or not) to zero matters. As the resultant is zero if and only if the rank of the Macaulay matrix is lower than the number of its rows, this equality to zero may be tested by applying Gaussian elimination to the Macaulay matrix. This provides a computational complexity d^{O(n)}, where ''d'' is the maximum degree of the input polynomials.

Another case where the computation of the resultant may provide useful information is when the coefficients of the input polynomials are polynomials in a small number of indeterminates, often called parameters. In this case, the resultant, if not zero, defines a hypersurface in the parameter space. A point belongs to this hypersurface if and only if there are values of x_1, \ldots,x_n which, together with the coordinates of the point, are a zero of the input polynomials. In other words, the resultant is the result of the "elimination" of x_1, \ldots,x_n from the input polynomials.


''U''-resultant

Macaulay's resultant provides a method, called "''U''-resultant" by Macaulay, for solving systems of polynomial equations.

Given n-1 homogeneous polynomials P_1, \ldots, P_{n-1}, of degrees d_1, \ldots, d_{n-1}, in ''n'' indeterminates x_1, \ldots, x_n, over a field ''k'', their ''U''-resultant is the resultant of the ''n'' polynomials P_1, \ldots, P_{n-1}, P_n, where
:P_n=u_1x_1 +\cdots +u_nx_n
is the generic linear form whose coefficients are new indeterminates u_1, \ldots, u_n. The notation u_i or U_i for these generic coefficients is traditional, and is the origin of the term ''U''-resultant.

The ''U''-resultant is a homogeneous polynomial in k[u_1, \ldots, u_n]. It is zero if and only if the common zeros of P_1, \ldots, P_{n-1} form a projective algebraic set of positive dimension (that is, there are infinitely many projective zeros over an algebraically closed extension of ''k'').

If the ''U''-resultant is not zero, its degree is the Bézout bound d_1\cdots d_{n-1}. The ''U''-resultant factorizes over an algebraically closed extension of ''k'' into a product of linear forms. If \alpha_1u_1+\cdots+\alpha_nu_n is such a linear factor, then \alpha_1, \ldots, \alpha_n are the homogeneous coordinates of a common zero of P_1, \ldots, P_{n-1}. Moreover, every common zero may be obtained from one of these linear factors, and the multiplicity as a factor is equal to the intersection multiplicity of the P_i at this zero. In other words, the ''U''-resultant provides a completely explicit version of Bézout's theorem.


Extension to more polynomials and computation

The ''U''-resultant as defined by Macaulay requires the number of homogeneous polynomials in the system of equations to be n-1, where ''n'' is the number of indeterminates. In 1981, Daniel Lazard extended the notion to the case where the number of polynomials may differ from n-1, and the resulting computation can be performed via a specialized Gaussian elimination procedure followed by symbolic determinant computation.

Let P_1, \ldots, P_k be homogeneous polynomials in x_1, \ldots, x_n, of degrees d_1, \ldots, d_k, over a field ''K''. Without loss of generality, one may suppose that d_1\ge d_2\ge \cdots \ge d_k. Setting d_i=1 for i>k, the Macaulay bound is D=d_1+\cdots + d_n-n+1.

Let u_1, \ldots, u_n be new indeterminates and define P_{k+1}=u_1x_1+\cdots +u_nx_n. In this case, the Macaulay matrix is defined to be the matrix, over the basis of the monomials in x_1, \ldots, x_n, of the linear map
:(Q_1, \ldots, Q_{k+1}) \mapsto P_1Q_1+\cdots+P_{k+1}Q_{k+1},
where, for each ''i'', Q_i runs over the linear space consisting of zero and the homogeneous polynomials of degree D-d_i.

Reducing the Macaulay matrix by a variant of Gaussian elimination, one obtains a square matrix of linear forms in u_1, \ldots, u_n. The determinant of this matrix is the ''U''-resultant. As with the original ''U''-resultant, it is zero if and only if P_1, \ldots, P_k have infinitely many common projective zeros (that is, if the projective algebraic set defined by P_1, \ldots, P_k has infinitely many points over an algebraic closure of ''K''). Again as with the original ''U''-resultant, when this ''U''-resultant is not zero, it factorizes into linear factors over any algebraically closed extension of ''K''. The coefficients of these linear factors are the homogeneous coordinates of the common zeros of P_1, \ldots, P_k, and the multiplicity of a common zero equals the multiplicity of the corresponding linear factor.

The number of rows of the Macaulay matrix is less than (ed)^n, where ''e'' is the usual mathematical constant and ''d'' is the arithmetic mean of the degrees of the P_i. It follows that all solutions of a system of polynomial equations with a finite number of projective zeros can be determined in time d^{O(n)}. Although this bound is large, it is nearly optimal in the following sense: if all input degrees are equal, then the time complexity of the procedure is polynomial in the expected number of solutions (Bézout's theorem). This computation may be practically viable when ''n'', ''k'', and ''d'' are not large.


See also

* Elimination theory
* Subresultant
* Nonlinear algebra

