In algebra, the greatest common divisor (frequently abbreviated as GCD) of two polynomials is a polynomial, of the highest possible degree, that is a factor of both of the two original polynomials. This concept is analogous to the greatest common divisor of two integers. In the important case of univariate polynomials over a field, the polynomial GCD may be computed, like for the integer GCD, by the Euclidean algorithm using long division. The polynomial GCD is defined only up to the multiplication by an invertible constant.

The similarity between the integer GCD and the polynomial GCD allows extending to univariate polynomials all the properties that may be deduced from the Euclidean algorithm and Euclidean division. Moreover, the polynomial GCD has specific properties that make it a fundamental notion in various areas of algebra. Typically, the roots of the GCD of two polynomials are the common roots of the two polynomials, and this provides information on the roots without computing them. For example, the multiple roots of a polynomial are the roots of the GCD of the polynomial and its derivative, and further GCD computations allow computing the square-free factorization of the polynomial, which provides polynomials whose roots are the roots of a given multiplicity of the original polynomial.

The greatest common divisor may be defined and exists, more generally, for multivariate polynomials over a field or the ring of integers, and also over a unique factorization domain. There exist algorithms to compute them as soon as one has a GCD algorithm in the ring of coefficients. These algorithms proceed by a recursion on the number of variables to reduce the problem to a variant of the Euclidean algorithm. They are a fundamental tool in computer algebra, because computer algebra systems use them systematically to simplify fractions. Conversely, most of the modern theory of polynomial GCD has been developed to satisfy the need for efficiency of computer algebra systems.


General definition

Let ''p'' and ''q'' be polynomials with coefficients in an integral domain ''F'', typically a field or the integers. A greatest common divisor of ''p'' and ''q'' is a polynomial ''d'' that divides ''p'' and ''q'', and such that every common divisor of ''p'' and ''q'' also divides ''d''. Every pair of polynomials (not both zero) has a GCD if and only if ''F'' is a unique factorization domain.

If ''F'' is a field and ''p'' and ''q'' are not both zero, a polynomial ''d'' is a greatest common divisor if and only if it divides both ''p'' and ''q'', and it has the greatest degree among the polynomials having this property. If ''p'' = ''q'' = 0, the GCD is 0. However, some authors consider that it is not defined in this case.

The greatest common divisor of ''p'' and ''q'' is usually denoted "gcd(''p'', ''q'')". The greatest common divisor is not unique: if ''d'' is a GCD of ''p'' and ''q'', then the polynomial ''f'' is another GCD if and only if there is an invertible element ''u'' of ''F'' such that
:f=u d
and
:d=u^{-1} f.
In other words, the GCD is unique up to the multiplication by an invertible constant.

In the case of the integers, this indetermination has been settled by choosing, as the GCD, the unique one which is positive (there is another one, which is its opposite). With this convention, the GCD of two integers is also the greatest (for the usual ordering) common divisor. However, since there is no natural total order for polynomials over an integral domain, one cannot proceed in the same way here. For univariate polynomials over a field, one can additionally require the GCD to be monic (that is, to have 1 as its coefficient of the highest degree), but in more general cases there is no general convention. Therefore, equalities like ''d'' = gcd(''p'', ''q'') or gcd(''p'', ''q'') = gcd(''r'', ''s'') are common abuses of notation which should be read "''d'' is a GCD of ''p'' and ''q''" and "''p'' and ''q'' have the same set of GCDs as ''r'' and ''s''". In particular, gcd(''p'', ''q'') = 1 means that the invertible constants are the only common divisors. In this case, by analogy with the integer case, one says that ''p'' and ''q'' are coprime polynomials.


Properties

*As stated above, the GCD of two polynomials exists if the coefficients belong either to a field, the ring of the integers, or more generally to a unique factorization domain.
*If ''c'' is any common divisor of ''p'' and ''q'', then ''c'' divides their GCD.
*\gcd(p,q)= \gcd(q,p).
*\gcd(p, q)= \gcd(q,p+rq) for any polynomial ''r''. This property is at the basis of the proof of the Euclidean algorithm.
*For any invertible element ''k'' of the ring of the coefficients, \gcd(p,q)=\gcd(p,kq).
*Hence \gcd(p,q)=\gcd(a_1p+b_1q,a_2p+b_2q) for any scalars a_1, b_1, a_2, b_2 such that a_1 b_2 - a_2 b_1 is invertible.
*If \gcd(p, r)=1, then \gcd(p, q)=\gcd(p, qr).
*If \gcd(q, r)=1, then \gcd(p, qr)=\gcd(p, q)\,\gcd(p, r).
*For two univariate polynomials ''p'' and ''q'' over a field, there exist polynomials ''a'' and ''b'', such that \gcd(p,q)=ap+bq and \gcd(p,q) divides every such linear combination of ''p'' and ''q'' (Bézout's identity).
*The greatest common divisor of three or more polynomials may be defined similarly as for two polynomials. It may be computed recursively from GCDs of two polynomials by the identities:
::\gcd(p, q, r) = \gcd(p, \gcd(q, r)),
:and
::\gcd(p_1, p_2, \dots , p_n) = \gcd( p_1, \gcd(p_2, \dots , p_n)).


GCD by hand computation

There are several ways to find the greatest common divisor of two polynomials. Two of them are:
#''Factorization of polynomials'', in which one finds the factors of each expression, then selects the set of common factors held by all from within each set of factors. This method may be useful only in simple cases, as factoring is usually more difficult than computing the greatest common divisor.
#The ''Euclidean algorithm'', which can be used to find the GCD of two polynomials in the same manner as for two numbers.


Factoring

To find the GCD of two polynomials using factoring, simply factor the two polynomials completely. Then, take the product of all common factors. At this stage, we do not necessarily have a monic polynomial, so finally multiply this by a constant to make it a monic polynomial. This will be the GCD of the two polynomials as it includes all common divisors and is monic.

Example one: Find the GCD of x^2 + 7x + 6 and x^2 - 5x - 6.
:x^2 + 7x + 6 = (x + 1)(x + 6)
:x^2 - 5x - 6 = (x + 1)(x - 6)
Thus, their GCD is x + 1.
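The factoring method can be sketched in code. In the following minimal Python sketch (the helper names `poly_mul` and `gcd_from_factors` are illustrative, not from any library), a polynomial is a list of coefficients indexed by power, and the GCD of two polynomials, given their lists of factors, is the product of the common factors (counted with multiplicity):

```python
from collections import Counter

def poly_mul(p, q):
    # multiply two polynomials given as coefficient lists (index = power)
    out = [0] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += a * b
    return out

def gcd_from_factors(factors1, factors2):
    # product of the common factors: multiset intersection of the two factor lists
    common = Counter(map(tuple, factors1)) & Counter(map(tuple, factors2))
    g = [1]
    for fac, mult in common.items():
        for _ in range(mult):
            g = poly_mul(g, list(fac))
    return g
```

For instance, with the factor lists of x^2 + 7x + 6 = (x + 1)(x + 6) and x^2 - 5x - 6 = (x + 1)(x - 6), the only common factor is x + 1, which is already monic.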


Euclidean algorithm

Factoring polynomials can be difficult, especially if the polynomials have a large degree. The Euclidean algorithm is a method that works for any pair of polynomials. It makes repeated use of Euclidean division. When using this algorithm on two numbers, the size of the numbers decreases at each stage. With polynomials, the degree of the polynomials decreases at each stage. The last nonzero remainder, made monic if necessary, is the GCD of the two polynomials.

More specifically, for finding the GCD of two polynomials ''a''(''x'') and ''b''(''x''), one can suppose ''b'' ≠ 0 (otherwise, the GCD is ''a''(''x'')), and
:\deg(b(x)) \le \deg(a(x)) \,.
The Euclidean division provides two polynomials q_0(x), the ''quotient'', and r_0(x), the ''remainder'', such that
:a(x) = q_0(x)b(x) + r_0(x)\qquad\text{and}\qquad \deg(r_0(x)) < \deg(b(x)).
A polynomial divides both a(x) and b(x) if and only if it divides both b(x) and r_0(x). Thus
:\gcd(a(x), b(x)) = \gcd(b(x), r_0(x)).
Setting
:a_1(x) = b(x),\quad b_1(x) = r_0(x),
one can repeat the Euclidean division to get new polynomials q_1(x), r_1(x), a_2(x), b_2(x) and so on. At each stage we have
:\deg(a_{k+1})+\deg(b_{k+1}) < \deg(a_k)+\deg(b_k),
so the sequence will eventually reach a point at which
:b_N(x) = 0
and one has got the GCD:
:\gcd(a,b)=\gcd(a_1,b_1)=\cdots=\gcd(a_N, 0)=a_N .

Example: finding the GCD of x^2 + 7x + 6 and x^2 - 5x - 6:
:x^2 + 7x + 6 = 1\cdot(x^2 - 5x - 6) + (12 x + 12)
:x^2 - 5x - 6 = (12x + 12)\left(\tfrac{1}{12}x - \tfrac{1}{2}\right) + 0
Since 12''x'' + 12 is the last nonzero remainder, it is a GCD of the original polynomials, and the monic GCD is ''x'' + 1.

In this example, it is not difficult to avoid introducing denominators by factoring out 12 before the second step. This can always be done by using pseudo-remainder sequences, but, without care, this may introduce very large integers during the computation. Therefore, for computer computation, other algorithms are used, that are described below.

This method works only if one can test the equality to zero of the coefficients that occur during the computation. So, in practice, the coefficients must be integers, rational numbers, elements of a finite field, or must belong to some finitely generated field extension of one of the preceding fields. If the coefficients are floating-point numbers that represent real numbers that are known only approximately, then one must know the degree of the GCD to have a well-defined computation result (that is, a numerically stable result); in this case, other techniques may be used, usually based on singular value decomposition.


Univariate polynomials with coefficients in a field

The case of univariate polynomials over a field is especially important for several reasons. Firstly, it is the most elementary case and therefore appears in most first courses in algebra. Secondly, it is very similar to the case of the integers, and this analogy is the source of the notion of Euclidean domain. A third reason is that the theory and the algorithms for the multivariate case and for coefficients in a unique factorization domain are strongly based on this particular case. Last but not least, polynomial GCD algorithms and derived algorithms allow one to get useful information on the roots of a polynomial, without computing them.


Euclidean division

Euclidean division of polynomials, which is used in Euclid's algorithm for computing GCDs, is very similar to Euclidean division of integers. Its existence is based on the following theorem: Given two univariate polynomials ''a'' and ''b'' ≠ 0 defined over a field, there exist two polynomials ''q'' (the ''quotient'') and ''r'' (the ''remainder'') which satisfy
:a=bq+r
and
:\deg(r)<\deg(b),
where "deg(...)" denotes the degree and the degree of the zero polynomial is defined as being negative. Moreover, ''q'' and ''r'' are uniquely defined by these relations.

The difference from Euclidean division of the integers is that, for the integers, the degree is replaced by the absolute value, and that to have uniqueness one has to suppose that ''r'' is non-negative. The rings for which such a theorem exists are called Euclidean domains.

Like for the integers, the Euclidean division of the polynomials may be computed by the long division algorithm. This algorithm is usually presented for paper-and-pencil computation, but it works well on computers when formalized as follows (note that the names of the variables correspond exactly to the regions of the paper sheet in a pencil-and-paper computation of long division). In the following computation "deg" stands for the degree of its argument (with the convention deg(0) < 0), and "lc" stands for the leading coefficient, the coefficient of the highest degree of the variable.

The proof of the validity of this algorithm relies on the fact that during the whole "while" loop, we have ''a'' = ''bq'' + ''r'' and deg(''r'') is a non-negative integer that decreases at each iteration. Thus the proof of the validity of this algorithm also proves the validity of the Euclidean division.
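The long-division algorithm can be written, for instance, as the following Python sketch (a minimal illustration, not a library routine). Polynomials are lists of coefficients indexed by power, and exact rational arithmetic avoids rounding issues; as in the description above, `q` and `r` play the roles of the quotient and remainder regions of the paper sheet, while `d` and `c` hold deg(''b'') and lc(''b''):

```python
from fractions import Fraction

def deg(p):
    # degree of a coefficient list (index = power); -1 for the zero polynomial
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def poly_divmod(a, b):
    # Euclidean division of a by b (b must be nonzero):
    # returns (q, r) with a = b*q + r and deg(r) < deg(b)
    r = [Fraction(x) for x in a]
    d, c = deg(b), Fraction(b[deg(b)])          # degree and leading coefficient of b
    q = [Fraction(0)] * max(deg(a) - d + 1, 1)
    while deg(r) >= d:
        k = deg(r)
        s = r[k] / c                            # coefficient of x**(k-d) in the quotient
        q[k - d] += s
        for i in range(d + 1):                  # r := r - s * x**(k-d) * b
            r[k - d + i] -= s * Fraction(b[i])
    return q, r
```

For example, dividing x^3 + 2x^2 + 3x + 4 by x + 1 gives quotient x^2 + x + 2 and remainder 2.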


Euclid's algorithm

As for the integers, the Euclidean division allows us to define Euclid's algorithm for computing GCDs. Starting from two polynomials ''a'' and ''b'', Euclid's algorithm consists of recursively replacing the pair (''a'', ''b'') by (''b'', rem(''a'', ''b'')) (where "rem(''a'', ''b'')" denotes the remainder of the Euclidean division, computed by the algorithm of the preceding section), until ''b'' = 0. The GCD is the last nonzero remainder. Euclid's algorithm may be formalized either in the recursive programming style, or in the imperative programming style, giving a name to each intermediate remainder.

The sequence of the degrees of the r_i is strictly decreasing. Thus after, at most, \deg(b) steps, one gets a null remainder, say r_k. As (''a'', ''b'') and (''b'', rem(''a'', ''b'')) have the same divisors, the set of the common divisors is not changed by Euclid's algorithm and thus all pairs (r_i, r_{i+1}) have the same set of common divisors. The common divisors of ''a'' and ''b'' are thus the common divisors of r_{k-1} and 0. Thus r_{k-1} is a GCD of ''a'' and ''b''. This not only proves that Euclid's algorithm computes GCDs but also proves that GCDs exist.
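Both styles can be sketched in Python (an illustrative fragment, not a library implementation, built on a Euclidean-division routine; polynomials are coefficient lists indexed by power):

```python
from fractions import Fraction

def deg(p):
    # degree of a coefficient list (index = power); -1 for the zero polynomial
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def poly_divmod(a, b):
    # Euclidean division (b nonzero): (q, r) with a = b*q + r, deg(r) < deg(b)
    r = [Fraction(x) for x in a]
    d, c = deg(b), Fraction(b[deg(b)])
    q = [Fraction(0)] * max(deg(a) - d + 1, 1)
    while deg(r) >= d:
        k = deg(r)
        s = r[k] / c
        q[k - d] += s
        for i in range(d + 1):
            r[k - d + i] -= s * Fraction(b[i])
    return q, r

def gcd_recursive(a, b):
    # recursive style: gcd(a, b) = gcd(b, rem(a, b)), until b = 0
    if deg(b) < 0:
        return a
    return gcd_recursive(b, poly_divmod(a, b)[1])

def gcd_iterative(a, b):
    # imperative style: the remainder sequence r0, r1, r2, ...
    r0, r1 = [Fraction(c) for c in a], [Fraction(c) for c in b]
    while deg(r1) >= 0:
        r0, r1 = r1, poly_divmod(r0, r1)[1]
    return r0

def monic(p):
    # divide by the leading coefficient to normalize the GCD
    lc = Fraction(p[deg(p)])
    return [Fraction(c) / lc for c in p[:deg(p) + 1]]
```

On the running example, both versions give 12x + 12 as the last nonzero remainder, hence the monic GCD x + 1.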


Bézout's identity and extended GCD algorithm

Bézout's identity is a GCD-related theorem, initially proved for the integers, which is valid for every principal ideal domain. In the case of the univariate polynomials over a field, it may be stated as follows: if ''g'' is a GCD of two nonzero polynomials ''a'' and ''b'', then there exist two polynomials ''u'' and ''v'' such that
:au+bv=g,
and, if neither of ''a'' and ''b'' divides the other,
:\deg(u)<\deg(b)-\deg(g)\qquad\text{and}\qquad\deg(v)<\deg(a)-\deg(g).

The interest of this result in the case of the polynomials is that there is an efficient algorithm to compute the polynomials ''u'' and ''v''. This algorithm differs from Euclid's algorithm by a few more computations done at each iteration of the loop. It is therefore called the extended GCD algorithm. Another difference with Euclid's algorithm is that it also uses the quotient, denoted "quo", of the Euclidean division, instead of only the remainder.

The proof that the algorithm satisfies its output specification relies on the fact that, for every ''i'' we have
:r_i=as_i+bt_i
:s_it_{i+1}-t_is_{i+1}=s_it_{i-1}-t_is_{i-1},
the latter equality implying
:s_it_{i+1}-t_is_{i+1}=(-1)^i.
The assertion on the degrees follows from the fact that, at every iteration, the degrees of s_i and t_i increase at most as the degree of r_i decreases.

An interesting feature of this algorithm is that, when the coefficients of Bézout's identity are needed, one gets for free the quotient of the input polynomials by their GCD.
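As an illustration, here is a minimal Python sketch of the extended GCD algorithm (not a library implementation; polynomials are coefficient lists indexed by power, with exact rational coefficients). It carries the Bézout cofactors s and t through the remainder sequence exactly as described:

```python
from fractions import Fraction

def deg(p):
    # degree of a coefficient list; -1 for the zero polynomial
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def trim(p):
    return p[:deg(p) + 1]

def poly_divmod(a, b):
    # Euclidean division (b nonzero): (q, r) with a = b*q + r, deg(r) < deg(b)
    r = [Fraction(x) for x in a]
    d, c = deg(b), Fraction(b[deg(b)])
    q = [Fraction(0)] * max(deg(a) - d + 1, 1)
    while deg(r) >= d:
        k = deg(r)
        s = r[k] / c
        q[k - d] += s
        for i in range(d + 1):
            r[k - d + i] -= s * Fraction(b[i])
    return q, r

def poly_mul(p, q):
    out = [Fraction(0)] * max(len(p) + len(q) - 1, 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += Fraction(a) * Fraction(b)
    return out

def poly_add(p, q):
    n = max(len(p), len(q))
    p = list(p) + [Fraction(0)] * (n - len(p))
    q = list(q) + [Fraction(0)] * (n - len(q))
    return [Fraction(a) + Fraction(b) for a, b in zip(p, q)]

def poly_sub(p, q):
    return poly_add(p, [-Fraction(c) for c in q])

def ext_gcd(a, b):
    # returns (g, s, t) with a*s + b*t = g, where g is the monic GCD of a and b
    r0, r1 = [Fraction(c) for c in a], [Fraction(c) for c in b]
    s0, s1 = [Fraction(1)], [Fraction(0)]
    t0, t1 = [Fraction(0)], [Fraction(1)]
    while deg(r1) >= 0:
        q, r = poly_divmod(r0, r1)           # uses the quotient, not only the remainder
        r0, r1 = r1, trim(r)
        s0, s1 = s1, poly_sub(s0, poly_mul(q, s1))
        t0, t1 = t1, poly_sub(t0, poly_mul(q, t1))
    lc = r0[deg(r0)]
    return ([c / lc for c in trim(r0)],
            [c / lc for c in s0],
            [c / lc for c in t0])
```

On the running example, ext_gcd gives g = x + 1 with s = 1/12 and t = −1/12, since (a − b)/12 = x + 1.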


Arithmetic of algebraic extensions

An important application of the extended GCD algorithm is that it allows one to compute division in algebraic field extensions. Let ''L'' be an algebraic extension of a field ''K'', generated by an element whose minimal polynomial ''f'' has degree ''n''. The elements of ''L'' are usually represented by univariate polynomials over ''K'' of degree less than ''n''.

The addition in ''L'' is simply the addition of polynomials:
:a+_Lb=a+_{K[X]}b.
The multiplication in ''L'' is the multiplication of polynomials followed by the division by ''f'':
:a\cdot_Lb=\operatorname{rem}(a\cdot_{K[X]}b,f).
The inverse of a nonzero element ''a'' of ''L'' is the coefficient ''u'' in Bézout's identity ''au'' + ''fv'' = 1, which may be computed by the extended GCD algorithm (the GCD is 1 because the minimal polynomial is irreducible). The degrees inequality in the specification of the extended GCD algorithm shows that a further division by ''f'' is not needed to get deg(''u'') < deg(''f'').
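This inversion can be sketched in Python (an illustrative fragment; for the inverse, only the cofactor of ''a'' in the extended GCD needs to be tracked). As an example, in Q(√2) ≅ Q[X]/(X² − 2), the inverse of 1 + √2 is −1 + √2, since (1 + √2)(−1 + √2) = 1:

```python
from fractions import Fraction

def deg(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def trim(p):
    return p[:deg(p) + 1]

def poly_divmod(a, b):
    # Euclidean division (b nonzero): (q, r) with a = b*q + r, deg(r) < deg(b)
    r = [Fraction(x) for x in a]
    d, c = deg(b), Fraction(b[deg(b)])
    q = [Fraction(0)] * max(deg(a) - d + 1, 1)
    while deg(r) >= d:
        k = deg(r)
        s = r[k] / c
        q[k - d] += s
        for i in range(d + 1):
            r[k - d + i] -= s * Fraction(b[i])
    return q, r

def poly_mul(p, q):
    out = [Fraction(0)] * max(len(p) + len(q) - 1, 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            out[i + j] += Fraction(a) * Fraction(b)
    return out

def poly_sub(p, q):
    n = max(len(p), len(q))
    p = list(p) + [Fraction(0)] * (n - len(p))
    q = list(q) + [Fraction(0)] * (n - len(q))
    return [a - b for a, b in zip(p, q)]

def inverse_mod(a, f):
    # inverse of a nonzero a in K[X]/(f), f irreducible: extended GCD tracking
    # only the cofactor s of a, so that a*s ≡ gcd(a, f) (mod f)
    r0, r1 = [Fraction(c) for c in a], [Fraction(c) for c in f]
    s0, s1 = [Fraction(1)], [Fraction(0)]
    while deg(r1) >= 0:
        q, r = poly_divmod(r0, r1)
        r0, r1 = r1, trim(r)
        s0, s1 = s1, poly_sub(s0, poly_mul(q, s1))
    if deg(r0) != 0:
        raise ValueError("a and f are not coprime; f must be irreducible")
    lc = r0[deg(r0)]
    return [c / lc for c in trim(s0)]
```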


Subresultants

In the case of univariate polynomials, there is a strong relationship between the greatest common divisors and resultants. More precisely, the resultant of two polynomials ''P'', ''Q'' is a polynomial function of the coefficients of ''P'' and ''Q'' which has the value zero if and only if the GCD of ''P'' and ''Q'' is not constant. The subresultants theory is a generalization of this property that allows characterizing generically the GCD of two polynomials, and the resultant is the 0-th subresultant polynomial.

The ''i''-th ''subresultant polynomial'' ''S''''i''(''P'', ''Q'') of two polynomials ''P'' and ''Q'' is a polynomial of degree at most ''i'' whose coefficients are polynomial functions of the coefficients of ''P'' and ''Q'', and the ''i''-th ''principal subresultant coefficient'' ''s''''i''(''P'', ''Q'') is the coefficient of degree ''i'' of ''S''''i''(''P'', ''Q''). They have the property that the GCD of ''P'' and ''Q'' has a degree ''d'' if and only if
:s_0(P,Q)=\cdots=s_{d-1}(P,Q) =0 \ , s_d(P,Q)\neq 0.
In this case, ''S''''d''(''P'', ''Q'') is a GCD of ''P'' and ''Q'' and
:S_0(P,Q)=\cdots=S_{d-1}(P,Q) =0.

Every coefficient of the subresultant polynomials is defined as the determinant of a submatrix of the Sylvester matrix of ''P'' and ''Q''. This implies that subresultants "specialize" well. More precisely, subresultants are defined for polynomials over any commutative ring ''R'', and have the following property. Let ''φ'' be a ring homomorphism of ''R'' into another commutative ring ''S''. It extends to another homomorphism, denoted also ''φ'', between the polynomial rings over ''R'' and ''S''. Then, if ''P'' and ''Q'' are univariate polynomials with coefficients in ''R'' such that
:\deg(P)=\deg(\varphi(P))
and
:\deg(Q)=\deg(\varphi(Q)),
then the subresultant polynomials and the principal subresultant coefficients of ''φ''(''P'') and ''φ''(''Q'') are the image by ''φ'' of those of ''P'' and ''Q''.

The subresultants have two important properties which make them fundamental for the computation on computers of the GCD of two polynomials with integer coefficients. Firstly, their definition through determinants allows bounding, through Hadamard inequality, the size of the coefficients of the GCD. Secondly, this bound and the property of good specialization allow computing the GCD of two polynomials with integer coefficients through modular computation and the Chinese remainder theorem (see below).


Technical definition

Let
:P=p_0+p_1 X+\cdots +p_m X^m,\quad Q=q_0+q_1 X+\cdots +q_n X^n
be two univariate polynomials with coefficients in a field ''K''. Let us denote by \mathcal{P}_i the ''K'' vector space of dimension ''i'' of the polynomials of degree less than ''i''. For a non-negative integer ''i'' such that ''i'' ≤ ''m'' and ''i'' ≤ ''n'', let
:\varphi_i:\mathcal{P}_{n-i} \times \mathcal{P}_{m-i} \rightarrow \mathcal{P}_{m+n-i}
be the linear map such that
:\varphi_i(A,B)=AP+BQ.
The resultant of ''P'' and ''Q'' is the determinant of the Sylvester matrix, which is the (square) matrix of \varphi_0 on the bases of the powers of ''X''. Similarly, the ''i''-subresultant polynomial is defined in terms of determinants of submatrices of the matrix of \varphi_i.

Let us describe these matrices more precisely. Let ''p''''i'' = 0 for ''i'' < 0 or ''i'' > ''m'', and ''q''''i'' = 0 for ''i'' < 0 or ''i'' > ''n''. The Sylvester matrix is the (''m'' + ''n'') × (''m'' + ''n'')-matrix such that the coefficient of the ''i''-th row and the ''j''-th column is ''p''''m''+''j''−''i'' for ''j'' ≤ ''n'' and ''q''''j''−''i'' for ''j'' > ''n'':
:S=\begin{pmatrix}
p_m & 0 & \cdots & 0 & q_n & 0 & \cdots & 0 \\
p_{m-1} & p_m & \cdots & 0 & q_{n-1} & q_n & \cdots & 0 \\
p_{m-2} & p_{m-1} & \ddots & 0 & q_{n-2} & q_{n-1} & \ddots & 0 \\
\vdots &\vdots & \ddots & p_m & \vdots &\vdots & \ddots & q_n \\
\vdots &\vdots & \cdots & p_{m-1} & \vdots &\vdots & \cdots & q_{n-1}\\
p_0 & p_1 & \cdots & \vdots & q_0 & q_1 & \cdots & \vdots\\
0 & p_0 & \ddots & \vdots & 0 & q_0 & \ddots & \vdots \\
\vdots & \vdots & \ddots & p_1 & \vdots & \vdots & \ddots & q_1 \\
0 & 0 & \cdots & p_0 & 0 & 0 & \cdots & q_0
\end{pmatrix}.
The matrix ''T''''i'' of \varphi_i is the (''m'' + ''n'' − ''i'') × (''m'' + ''n'' − 2''i'')-submatrix of ''S'' which is obtained by removing the last ''i'' rows of zeros in the submatrix of the columns 1 to ''n'' − ''i'' and ''n'' + 1 to ''m'' + ''n'' − ''i'' of ''S'' (that is, removing ''i'' columns in each block and the ''i'' last rows of zeros).

The ''principal subresultant coefficient'' ''s''''i'' is the determinant of the ''m'' + ''n'' − 2''i'' first rows of ''T''''i''. Let ''V''''i'' be the (''m'' + ''n'' − 2''i'') × (''m'' + ''n'' − ''i'') matrix defined as follows. First we add (''i'' + 1) columns of zeros to the right of the (''m'' + ''n'' − 2''i'' − 1) × (''m'' + ''n'' − 2''i'' − 1) identity matrix. Then we border the bottom of the resulting matrix by a row consisting in (''m'' + ''n'' − ''i'' − 1) zeros followed by ''X''''i'', ''X''''i''−1, ..., ''X'', 1:
:V_i=\begin{pmatrix}
1 & 0 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
0 & 1 & \cdots & 0 & 0 & 0 & \cdots & 0 \\
\vdots &\vdots & \ddots & \vdots & \vdots &\ddots & \vdots & 0 \\
0 & 0 & \cdots & 1 & 0 & 0 & \cdots & 0 \\
0 & 0 & \cdots & 0 & X^i & X^{i-1}& \cdots & 1
\end{pmatrix}.
With this notation, the ''i''-th ''subresultant polynomial'' is the determinant of the matrix product ''V''''i''''T''''i''. Its coefficient of degree ''j'' is the determinant of the square submatrix of ''T''''i'' consisting in its ''m'' + ''n'' − 2''i'' − 1 first rows and the (''m'' + ''n'' − ''i'' − ''j'')-th row.
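To make the construction concrete, the following Python sketch (illustrative only) builds the Sylvester matrix from the coefficient lists of ''P'' and ''Q'', following the indexing above, and computes the resultant as its determinant by exact Gaussian elimination over the rationals:

```python
from fractions import Fraction

def sylvester(p, q):
    # p, q: coefficient lists (index = power) of degrees m and n
    m, n = len(p) - 1, len(q) - 1
    size = m + n
    S = [[Fraction(0)] * size for _ in range(size)]
    for j in range(n):                 # first n columns: entry (i, j) = p_{m+j-i}
        for i in range(size):
            k = m + j - i
            if 0 <= k <= m:
                S[i][j] = Fraction(p[k])
    for j in range(m):                 # last m columns: entry (i, n+j) = q_{n+j-i}
        for i in range(size):
            k = n + j - i
            if 0 <= k <= n:
                S[i][n + j] = Fraction(q[k])
    return S

def det(matrix):
    # determinant by exact Gaussian elimination with pivot search
    M = [row[:] for row in matrix]
    n = len(M)
    d = Fraction(1)
    for col in range(n):
        piv = next((r for r in range(col, n) if M[r][col] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != col:
            M[col], M[piv] = M[piv], M[col]
            d = -d                      # a row swap flips the sign
        d *= M[col][col]
        for r in range(col + 1, n):
            f = M[r][col] / M[col][col]
            for c in range(col, n):
                M[r][c] -= f * M[col][c]
    return d

def resultant(p, q):
    return det(sylvester(p, q))
```

For instance, the resultant of x^2 − 1 and x − 2 is 3 ≠ 0, consistent with their GCD being constant, while the resultant of (x − 1)^2 and its derivative 2x − 2 is 0, detecting the common root.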


Sketch of the proof

It is not obvious that, as defined, the subresultants have the desired properties. Nevertheless, the proof is rather simple if the properties of linear algebra and those of polynomials are put together.

As defined, the columns of the matrix ''T''''i'' are the vectors of the coefficients of some polynomials belonging to the image of \varphi_i. The definition of the ''i''-th subresultant polynomial ''S''''i'' shows that the vector of its coefficients is a linear combination of these column vectors, and thus that ''S''''i'' belongs to the image of \varphi_i.

If the degree of the GCD is greater than ''i'', then Bézout's identity shows that every nonzero polynomial in the image of \varphi_i has a degree larger than ''i''. This implies that ''S''''i'' = 0.

If, on the other hand, the degree of the GCD is ''i'', then Bézout's identity again allows proving that the multiples of the GCD that have a degree lower than ''m'' + ''n'' − ''i'' are in the image of \varphi_i. The vector space of these multiples has the dimension ''m'' + ''n'' − 2''i'' and has a basis of polynomials of pairwise different degrees, not smaller than ''i''. This implies that the submatrix of the ''m'' + ''n'' − 2''i'' first rows of the column echelon form of ''T''''i'' is the identity matrix and thus that ''s''''i'' is not 0. Thus ''S''''i'' is a polynomial in the image of \varphi_i, which is a multiple of the GCD and has the same degree. It is thus a greatest common divisor.


GCD and root finding


Square-free factorization

Most root-finding algorithms behave badly with polynomials that have multiple roots. It is therefore useful to detect and remove them before calling a root-finding algorithm. A GCD computation allows detection of the existence of multiple roots, since the multiple roots of a polynomial are the roots of the GCD of the polynomial and its derivative.

After computing the GCD of the polynomial and its derivative, further GCD computations provide the complete ''square-free factorization'' of the polynomial, which is a factorization
:f=\prod_{i=1}^{\deg(f)} f_i^i
where, for each ''i'', the polynomial ''f''''i'' either is 1 if ''f'' does not have any root of multiplicity ''i'' or is a square-free polynomial (that is, a polynomial without multiple roots) whose roots are exactly the roots of multiplicity ''i'' of ''f'' (see Yun's algorithm).

Thus the square-free factorization reduces root-finding of a polynomial with multiple roots to root-finding of several square-free polynomials of lower degree. The square-free factorization is also the first step in most polynomial factorization algorithms.
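The detection step can be sketched as follows (an illustrative Python fragment, with polynomials as coefficient lists indexed by power): the monic GCD of ''f'' and ''f''′ is nonconstant exactly when ''f'' has a multiple root, and its roots are the multiple roots of ''f''.

```python
from fractions import Fraction

def deg(p):
    # degree of a coefficient list; -1 for the zero polynomial
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def poly_divmod(a, b):
    # Euclidean division (b nonzero): (q, r) with a = b*q + r, deg(r) < deg(b)
    r = [Fraction(x) for x in a]
    d, c = deg(b), Fraction(b[deg(b)])
    q = [Fraction(0)] * max(deg(a) - d + 1, 1)
    while deg(r) >= d:
        k = deg(r)
        s = r[k] / c
        q[k - d] += s
        for i in range(d + 1):
            r[k - d + i] -= s * Fraction(b[i])
    return q, r

def poly_gcd(a, b):
    # Euclid's algorithm, returning the monic GCD
    r0, r1 = [Fraction(c) for c in a], [Fraction(c) for c in b]
    while deg(r1) >= 0:
        r0, r1 = r1, poly_divmod(r0, r1)[1]
    lc = r0[deg(r0)]
    return [c / lc for c in r0[:deg(r0) + 1]]

def derivative(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]
```

For f = (x − 1)^2 (x + 2) = x^3 − 3x + 2, the GCD of f and f′ is x − 1, exposing the double root 1, while for the square-free x^2 − 2 the GCD is 1.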


Sturm sequence

The ''Sturm sequence'' of a polynomial with real coefficients is the sequence of the remainders provided by a variant of Euclid's algorithm applied to the polynomial and its derivative. For getting the Sturm sequence, one simply replaces the instruction
:r_{i+1}:=\operatorname{rem}(r_{i-1},r_i)
of Euclid's algorithm by
:r_{i+1}:=-\operatorname{rem}(r_{i-1},r_i).
Let ''V''(''a'') be the number of changes of signs in the sequence, when evaluated at a point ''a''. Sturm's theorem asserts that ''V''(''a'') − ''V''(''b'') is the number of real roots of the polynomial in the interval [''a'', ''b'']. Thus the Sturm sequence allows computing the number of real roots in a given interval. By subdividing the interval until every subinterval contains at most one root, this provides an algorithm that locates the real roots in intervals of arbitrarily small length.
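This variant of Euclid's algorithm, and the sign-change count ''V'', can be sketched in Python (an illustrative fragment reusing the same coefficient-list representation; the function names are not from any library):

```python
from fractions import Fraction

def deg(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def poly_divmod(a, b):
    # Euclidean division (b nonzero): (q, r) with a = b*q + r, deg(r) < deg(b)
    r = [Fraction(x) for x in a]
    d, c = deg(b), Fraction(b[deg(b)])
    q = [Fraction(0)] * max(deg(a) - d + 1, 1)
    while deg(r) >= d:
        k = deg(r)
        s = r[k] / c
        q[k - d] += s
        for i in range(d + 1):
            r[k - d + i] -= s * Fraction(b[i])
    return q, r

def derivative(p):
    return [i * c for i, c in enumerate(p)][1:] or [0]

def sturm_sequence(p):
    # p, p', then each next term is MINUS the remainder of the two previous ones
    chain = [p, derivative(p)]
    while deg(chain[-1]) >= 0:
        _, r = poly_divmod(chain[-2], chain[-1])
        if deg(r) < 0:
            break
        chain.append([-c for c in r[:deg(r) + 1]])
    return chain

def sign_changes(chain, x):
    # V(x): sign changes in the sequence evaluated at x (zeros are skipped)
    values = [sum(c * Fraction(x) ** i for i, c in enumerate(p)) for p in chain]
    signs = [v for v in values if v != 0]
    return sum(1 for a, b in zip(signs, signs[1:]) if (a > 0) != (b > 0))

def count_real_roots(p, a, b):
    # V(a) - V(b): number of real roots of p in [a, b] (Sturm's theorem)
    chain = sturm_sequence(p)
    return sign_changes(chain, a) - sign_changes(chain, b)
```

For x^2 − 2, whose roots are ±√2, the count over [−2, 2] is 2 and over [0, 2] is 1.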


GCD over a ring and its field of fractions

In this section, we consider polynomials over a unique factorization domain ''R'', typically the ring of the integers, and over its field of fractions ''F'', typically the field of the rational numbers, and we denote ''R''[''X''] and ''F''[''X''] the rings of polynomials in a set of variables over these rings.


Primitive part–content factorization

The ''content'' of a polynomial ''p'' ∈ ''R''[''X''], denoted "cont(''p'')", is the GCD of its coefficients. A polynomial ''q'' ∈ ''F''[''X''] may be written
:q = \frac{p}{c}
where ''p'' ∈ ''R''[''X''] and ''c'' ∈ ''R'': it suffices to take for ''c'' a multiple of all denominators of the coefficients of ''q'' (for example their product) and ''p'' = ''cq''. The ''content'' of ''q'' is defined as:
:\operatorname{cont} (q) =\frac{\operatorname{cont}(p)}{c}.
In both cases, the content is defined up to the multiplication by a unit of ''R''. The ''primitive part'' of a polynomial in ''R''[''X''] or ''F''[''X''] is defined by
:\operatorname{prim} (p) =\frac{p}{\operatorname{cont}(p)}.
In both cases, it is a polynomial in ''R''[''X''] that is ''primitive'', which means that 1 is a GCD of its coefficients. Thus every polynomial in ''R''[''X''] or ''F''[''X''] may be factorized as
:p =\operatorname{cont} (p)\,\operatorname{prim} (p),
and this factorization is unique up to the multiplication of the content by a unit of ''R'' and of the primitive part by the inverse of this unit.

Gauss's lemma implies that the product of two primitive polynomials is primitive. It follows that
:\operatorname{prim} (pq)=\operatorname{prim} (p) \operatorname{prim}(q)
and
:\operatorname{cont} (pq)=\operatorname{cont} (p) \operatorname{cont}(q).
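For ''R'' = '''Z''', content and primitive part reduce to integer GCDs; a minimal Python sketch (illustrative helper names, not a library API):

```python
from math import gcd

def content(p):
    # GCD of the coefficients of an integer polynomial (defined up to sign)
    c = 0
    for a in p:
        c = gcd(c, a)
    return c

def primitive_part(p):
    # p = content(p) * primitive_part(p)
    c = content(p)
    return [a // c for a in p]
```

For example, 2x^2 + 4x + 6 has content 2 and primitive part x^2 + 2x + 3.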


Relation between the GCD over ''R'' and over ''F''

The relations of the preceding section imply a strong relation between the GCDs in ''R''[''X''] and in ''F''[''X'']. To avoid ambiguities, the notation "gcd" will be indexed, in the following, by the ring in which the GCD is computed.

If ''q''1 and ''q''2 belong to ''F''[''X''], then
:\operatorname{prim}(\gcd_{F[X]}(q_1,q_2))=\gcd_{R[X]}(\operatorname{prim}(q_1),\operatorname{prim}(q_2)).
If ''p''1 and ''p''2 belong to ''R''[''X''], then
:\gcd_{R[X]}(p_1,p_2)=\gcd_R(\operatorname{cont}(p_1),\operatorname{cont}(p_2)) \gcd_{R[X]}(\operatorname{prim}(p_1),\operatorname{prim}(p_2)),
and
:\gcd_{R[X]}(\operatorname{prim}(p_1),\operatorname{prim}(p_2))=\operatorname{prim}(\gcd_{F[X]}(p_1,p_2)).
Thus the computation of polynomial GCDs is essentially the same problem over ''F''[''X''] and over ''R''[''X''].

For univariate polynomials over the rational numbers, one may think that Euclid's algorithm is a convenient method for computing the GCD. However, it involves simplifying a large number of fractions of integers, and the resulting algorithm is not efficient. For this reason, methods have been designed to modify Euclid's algorithm for working only with polynomials over the integers. They consist of replacing the Euclidean division, which introduces fractions, by a so-called ''pseudo-division'', and replacing the remainder sequence of the Euclid's algorithm by so-called ''pseudo-remainder sequences'' (see below).
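The second relation above gives a direct way to compute a GCD in '''Z'''[''X'']: the integer GCD of the contents times a primitive integer polynomial obtained from the monic GCD over '''Q'''. A minimal Python sketch (illustrative, not a library routine; it relies on `math.lcm`, available in Python 3.9+):

```python
from fractions import Fraction
from math import gcd, lcm

def deg(p):
    d = len(p) - 1
    while d >= 0 and p[d] == 0:
        d -= 1
    return d

def poly_divmod(a, b):
    # Euclidean division (b nonzero): (q, r) with a = b*q + r, deg(r) < deg(b)
    r = [Fraction(x) for x in a]
    d, c = deg(b), Fraction(b[deg(b)])
    q = [Fraction(0)] * max(deg(a) - d + 1, 1)
    while deg(r) >= d:
        k = deg(r)
        s = r[k] / c
        q[k - d] += s
        for i in range(d + 1):
            r[k - d + i] -= s * Fraction(b[i])
    return q, r

def poly_gcd(a, b):
    # monic GCD over Q, by Euclid's algorithm
    r0, r1 = [Fraction(c) for c in a], [Fraction(c) for c in b]
    while deg(r1) >= 0:
        r0, r1 = r1, poly_divmod(r0, r1)[1]
    lc = r0[deg(r0)]
    return [c / lc for c in r0[:deg(r0) + 1]]

def content(p):
    c = 0
    for a in p:
        c = gcd(c, a)
    return c

def gcd_ZX(p1, p2):
    # gcd_{Z[X]}(p1, p2) = gcd_Z(cont p1, cont p2) * prim(gcd_{Q[X]}(p1, p2))
    c = gcd(content(p1), content(p2))
    g = poly_gcd(p1, p2)                 # monic, Fraction coefficients
    m = 1
    for coef in g:
        m = lcm(m, coef.denominator)     # clear denominators
    gi = [int(coef * m) for coef in g]
    cg = content(gi)
    return [c * (a // cg) for a in gi]   # content GCD times primitive part
```

For example, the GCD in '''Z'''[''X''] of 2x^2 − 2 and 4x^2 + 8x + 4 is 2x + 2 = 2(x + 1).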


Proof that GCD exists for multivariate polynomials

In the previous section we have seen that the GCD of polynomials in ''R''[''X''] may be deduced from GCDs in ''R'' and in ''F''[''X'']. A closer look at the proof shows that this allows us to prove the existence of GCDs in ''R''[''X''], if they exist in ''R'' and in ''F''[''X'']. In particular, if GCDs exist in ''R'', and if ''X'' is reduced to one variable, this proves that GCDs exist in ''R''[''X''] (Euclid's algorithm proves the existence of GCDs in ''F''[''X'']).

A polynomial in ''n'' variables may be considered as a univariate polynomial over the ring of polynomials in (''n'' − 1) variables. Thus a recursion on the number of variables shows that if GCDs exist and may be computed in ''R'', then they exist and may be computed in every multivariate polynomial ring over ''R''. In particular, if ''R'' is either the ring of the integers or a field, then GCDs exist in ''R''[''x''1, ..., ''x''''n''], and what precedes provides an algorithm to compute them.

The proof that a polynomial ring over a unique factorization domain is also a unique factorization domain is similar, but it does not provide an algorithm, because there is no general algorithm to factor univariate polynomials over a field (there are examples of fields for which there does not exist any factorization algorithm for the univariate polynomials).


Pseudo-remainder sequences

In this section, we consider an integral domain ''Z'' (typically the ring '''Z''' of the integers) and its field of fractions ''Q'' (typically the field '''Q''' of the rational numbers). Given two polynomials ''A'' and ''B'' in the univariate polynomial ring ''Z''[''X''], the Euclidean division (over ''Q'') of ''A'' by ''B'' provides a quotient and a remainder which may not belong to ''Z''[''X'']. For, if one applies Euclid's algorithm to the following polynomials
:X^8+X^6-3 X^4-3 X^3+8 X^2+2 X-5
and
:3 X^6+5 X^4-4 X^2-9 X+21,