A division algorithm is an algorithm which, given two integers ''N'' and ''D'' (respectively the numerator and the denominator), computes their quotient and/or remainder, the result of Euclidean division. Some are applied by hand, while others are employed by digital circuit designs and software.
Division algorithms fall into two main categories: slow division and fast division. Slow division algorithms produce one digit of the final quotient per iteration. Examples of slow division include restoring, non-performing restoring, non-restoring, and SRT division. Fast division methods start with a close approximation to the final quotient and produce twice as many digits of the final quotient on each iteration. Newton–Raphson and Goldschmidt algorithms fall into this category.
Variants of these algorithms allow using fast multiplication algorithms. As a result, for large integers, the computer time needed for a division is the same, up to a constant factor, as the time needed for a multiplication, whichever multiplication algorithm is used.
Discussion will refer to the form <math>N/D = (Q, R)</math>, where
* ''N'' = numerator (dividend)
* ''D'' = denominator (divisor)
is the input, and
* ''Q'' = quotient
* ''R'' = remainder
is the output.
Division by repeated subtraction
The simplest division algorithm, historically incorporated into a greatest common divisor algorithm presented in Euclid's ''Elements'', Book VII, Proposition 1, finds the remainder given two positive integers using only subtractions and comparisons:
R := N
Q := 0
while R ≥ D do
R := R − D
Q := Q + 1
end
return (Q,R)
The proof that the quotient and remainder exist and are unique (described at Euclidean division) gives rise to a complete division algorithm, applicable to both negative and positive numbers, using additions, subtractions, and comparisons:
function divide(N, D)
if D = 0 then error(DivisionByZero) end
if D < 0 then (Q, R) := divide(N, −D); return (−Q, R) end
if N < 0 then
(Q,R) := divide(−N, D)
if R = 0 then return (−Q, 0)
else return (−Q − 1, D − R) end
end
-- At this point, N ≥ 0 and D > 0
return divide_unsigned(N, D)
end
function divide_unsigned(N, D)
Q := 0; R := N
while R ≥ D do
Q := Q + 1
R := R − D
end
return (Q, R)
end
This procedure always produces R ≥ 0. Although very simple, it takes Ω(Q) steps, and so is exponentially slower than even slow division algorithms like long division. It is useful if Q is known to be small (being an
output-sensitive algorithm), and can serve as an executable specification.
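Since the procedure can serve as an executable specification, the following is a minimal Python transliteration of the pseudocode above; the function names simply mirror the pseudocode and are otherwise arbitrary choices for this sketch.

def divide_unsigned(n, d):
    """Repeated-subtraction division for n >= 0 and d > 0."""
    q, r = 0, n
    while r >= d:
        q += 1
        r -= d
    return q, r

def divide(n, d):
    """Euclidean division of integers: returns (q, r) with 0 <= r < |d|."""
    if d == 0:
        raise ZeroDivisionError("division by zero")
    if d < 0:
        q, r = divide(n, -d)
        return -q, r
    if n < 0:
        q, r = divide(-n, d)
        return (-q, 0) if r == 0 else (-q - 1, d - r)
    # At this point, n >= 0 and d > 0
    return divide_unsigned(n, d)

# Example: divide(-7, 2) == (-4, 1), since -7 = 2 * (-4) + 1.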
Long division
Long division is the standard algorithm used for pen-and-paper division of multi-digit numbers expressed in decimal notation. It shifts gradually from the left to the right end of the dividend, subtracting the largest possible multiple of the divisor (at the digit level) at each stage; the multiples then become the digits of the quotient, and the final difference is then the remainder.
When used with a binary radix, this method forms the basis for the (unsigned) integer division with remainder algorithm below.
Short division is an abbreviated form of long division suitable for one-digit divisors.
Chunking, also known as the partial quotients method or the hangman method, is a less-efficient form of long division which may be easier to understand. By allowing one to subtract more multiples than what one currently has at each stage, a more freeform variant of long division can be developed as well.
Integer division (unsigned) with remainder
The following algorithm, the binary version of the famous long division, will divide ''N'' by ''D'', placing the quotient in ''Q'' and the remainder in ''R''. In the following pseudo-code, all values are treated as unsigned integers.
if D = 0 then error(DivisionByZeroException) end
Q := 0 -- Initialize quotient and remainder to zero
R := 0
for i := n − 1 .. 0 do -- Where n is number of bits in N
R := R << 1 -- Left-shift R by 1 bit
R(0) := N(i) -- Set the least-significant bit of R equal to bit i of the numerator
if R ≥ D then
R := R − D
Q(i) := 1
end
end
Example
If we take N=1100<sub>2</sub> (12<sub>10</sub>) and D=100<sub>2</sub> (4<sub>10</sub>):
''Step 1'': Set R=0 and Q=0
''Step 2'': Take i=3 (one less than the number of bits in N)
''Step 3'': R=00 (left shifted by 1)
''Step 4'': R=01 (setting R(0) to N(i))
''Step 5'': R < D, so skip statement
''Step 2'': Set i=2
''Step 3'': R=010
''Step 4'': R=011
''Step 5'': R < D, statement skipped
''Step 2'': Set i=1
''Step 3'': R=0110
''Step 4'': R=0110
''Step 5'': R>=D, statement entered
''Step 5b'': R=10 (R−D)
''Step 5c'': Q=10 (setting Q(i) to 1)
''Step 2'': Set i=0
''Step 3'': R=100
''Step 4'': R=100
''Step 5'': R>=D, statement entered
''Step 5b'': R=0 (R−D)
''Step 5c'': Q=11 (setting Q(i) to 1)
end
Q=11<sub>2</sub> (3<sub>10</sub>) and R=0.
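The walkthrough can also be checked mechanically. The following is a short Python transliteration of the unsigned binary algorithm above, with the bit width passed explicitly as a parameter (a choice made for this sketch):

def divide_unsigned_binary(n_val, d_val, n_bits):
    """Binary long division: returns (quotient, remainder) of n_val / d_val."""
    if d_val == 0:
        raise ZeroDivisionError("division by zero")
    q, r = 0, 0
    for i in range(n_bits - 1, -1, -1):
        r = (r << 1) | ((n_val >> i) & 1)  # shift in bit i of the numerator
        if r >= d_val:
            r -= d_val
            q |= 1 << i                    # set bit i of the quotient
    return q, r

# Reproduces the example: 1100 (binary) divided by 100 (binary)
assert divide_unsigned_binary(0b1100, 0b100, 4) == (3, 0)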
Slow division methods
Slow division methods are all based on a standard recurrence equation
:<math>R_{j+1} = B \times R_j - q_{n-(j+1)} \times D,</math>
where:
* ''R''<sub>''j''</sub> is the ''j''-th partial remainder of the division
* ''B'' is the radix (base, usually 2 internally in computers and calculators)
* ''q''<sub>''n'' − (''j'' + 1)</sub> is the digit of the quotient in position ''n''−(''j''+1), where the digit positions are numbered from least-significant 0 to most-significant ''n'' − 1
* ''n'' is the number of digits in the quotient
* ''D'' is the divisor
Restoring division
Restoring division operates on fixed-point fractional numbers and depends on the assumption 0 < ''D'' < ''N''. The quotient digits ''q'' are formed from the digit set {0, 1}.
The basic algorithm for binary (radix 2) restoring division is:
R := N
D := D << n -- R and D need twice the word width of N and Q
for i := n − 1 .. 0 do -- For example 31..0 for 32 bits
R := 2 * R − D -- Trial subtraction from shifted value (multiplication by 2 is a shift in binary representation)
if R >= 0 then
q(i) := 1 -- Result-bit 1
else
q(i) := 0 -- Result-bit 0
R := R + D -- New partial remainder is (restored) shifted value
end
end
-- Where: N = numerator, D = denominator, n = #bits, R = partial remainder, q(i) = bit #i of quotient
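A corresponding Python sketch of the radix-2 restoring algorithm, with the quotient width n passed explicitly (an assumption of this sketch rather than part of the original formulation):

def restoring_divide(n_val, d_val, n_bits):
    """Radix-2 restoring division of unsigned integers with n_bits quotient bits."""
    r = n_val
    d = d_val << n_bits          # R and D need twice the word width of N and Q
    q = 0
    for i in range(n_bits - 1, -1, -1):
        r = 2 * r - d            # trial subtraction from the shifted value
        if r >= 0:
            q |= 1 << i          # result bit 1
        else:
            r += d               # result bit 0: restore the partial remainder
    return q, r >> n_bits        # the remainder occupies the upper half of R

assert restoring_divide(12, 4, 4) == (3, 0)
assert restoring_divide(7, 2, 3) == (3, 1)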
Non-performing restoring division is similar to restoring division except that the value of 2R is saved, so ''D'' does not need to be added back in for the case of R < 0.
Non-restoring division
Non-restoring division uses the digit set {−1, 1} for the quotient digits instead of {0, 1}. The algorithm is more complex, but has the advantage when implemented in hardware that there is only one decision and addition/subtraction per quotient bit; there is no restoring step after the subtraction, which potentially cuts down the number of operations by up to half and lets it be executed faster. The basic algorithm for binary (radix 2) non-restoring division of non-negative numbers is:
R := N
D := D << n -- R and D need twice the word width of N and Q
for i = n − 1 .. 0 do -- for example 31..0 for 32 bits
if R >= 0 then
q(i) := +1
R := 2 * R − D
else
q(i) := −1
R := 2 * R + D
end if
end
-- Note: N=numerator, D=denominator, n=#bits, R=partial remainder, q(i)=bit #i of quotient.
Following this algorithm, the quotient is in a non-standard form consisting of digits of −1 and +1. This form needs to be converted to binary to form the final quotient. If the −1 digits of ''Q'' are stored as zeros (0), as is common, the conversion is trivial: the bits that are 0 in the stored form stand for −1 digits, so the final quotient is the stored ''Q'' minus its ones' complement (bit by bit complement).
Q := Q − bit.bnot(Q) -- Appropriate if −1 digits in Q are represented as zeros as is common.
Finally, quotients computed by this algorithm are always odd, and the remainder in R is in the range −D ≤ R < D. For example, 5 / 2 = 3 R −1. To convert to a positive remainder, do a single restoring step ''after'' Q is converted from non-standard form to standard form:
if R < 0 then
Q := Q − 1
R := R + D -- Needed only if the remainder is of interest.
end if
The actual remainder is R >> n. (As with restoring division, the low-order bits of R are used up at the same rate as bits of the quotient Q are produced, and it is common to use a single shift register for both.)
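Combining the iteration, the quotient conversion, and the final restoring step, a compact Python sketch (again with an explicit bit width, an assumption of this sketch) is:

def nonrestoring_divide(n_val, d_val, n_bits):
    """Radix-2 non-restoring division; returns (quotient, remainder)."""
    r = n_val
    d = d_val << n_bits              # R and D need twice the word width of N and Q
    q = 0                            # -1 digits are stored as 0 bits
    for i in range(n_bits - 1, -1, -1):
        if r >= 0:
            q |= 1 << i              # digit +1
            r = 2 * r - d
        else:
            r = 2 * r + d            # digit -1, stored as a 0 bit
    mask = (1 << n_bits) - 1
    q -= q ^ mask                    # Q := Q - bit.bnot(Q): convert from the {-1,+1} form
    if r < 0:                        # single restoring step for a non-negative remainder
        q -= 1
        r += d
    return q, r >> n_bits

assert nonrestoring_divide(5, 2, 3) == (2, 1)
assert nonrestoring_divide(12, 4, 4) == (3, 0)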
SRT division
SRT division is a popular method for division in many microprocessor implementations. The algorithm is named after D. W. Sweeney of IBM, James E. Robertson of University of Illinois, and K. D. Tocher of Imperial College London. They all developed the algorithm independently at approximately the same time (published in February 1957, September 1958, and January 1958 respectively).
SRT division is similar to non-restoring division, but it uses a lookup table based on the dividend and the divisor to determine each quotient digit.
The most significant difference is that a ''redundant representation'' is used for the quotient. For example, when implementing radix-4 SRT division, each quotient digit is chosen from ''five'' possibilities: {−2, −1, 0, +1, +2}. Because of this, the choice of a quotient digit need not be perfect; later quotient digits can correct for slight errors. (For example, the quotient digit pairs (0, +2) and (1, −2) are equivalent, since 0×4+2 = 1×4−2.) This tolerance allows quotient digits to be selected using only a few most-significant bits of the dividend and divisor, rather than requiring a full-width subtraction. This simplification in turn allows a radix higher than 2 to be used.
Like non-restoring division, the final steps are a final full-width subtraction to resolve the last quotient bit, and conversion of the quotient to standard binary form.
The Intel Pentium processor's infamous floating-point division bug was caused by an incorrectly coded lookup table. Five of the 1066 entries had been mistakenly omitted.
Fast division methods
Newton–Raphson division
Newton–Raphson uses Newton's method to find the reciprocal of ''D'' and multiply that reciprocal by ''N'' to find the final quotient ''Q''.
The steps of Newton–Raphson division are:
# Calculate an estimate <math>X_0</math> for the reciprocal <math>1/D</math> of the divisor <math>D</math>.
# Compute successively more accurate estimates <math>X_1, X_2, \ldots, X_S</math> of the reciprocal. This is where one employs the Newton–Raphson method as such.
# Compute the quotient by multiplying the dividend by the reciprocal of the divisor: <math>Q = N X_S</math>.
In order to apply Newton's method to find the reciprocal of ''D'', it is necessary to find a function <math>f(X)</math> that has a zero at <math>X = 1/D</math>. The obvious such function is <math>f(X) = DX - 1</math>, but the Newton–Raphson iteration for this is unhelpful, since it cannot be computed without already knowing the reciprocal of ''D'' (moreover it attempts to compute the exact reciprocal in one step, rather than allow for iterative improvements). A function that does work is <math>f(X) = 1/X - D</math>, for which the Newton–Raphson iteration gives
:<math>X_{i+1} = X_i - \frac{f(X_i)}{f'(X_i)} = X_i + X_i(1 - DX_i) = X_i(2 - DX_i),</math>
which can be calculated from <math>X_i</math> using only multiplication and subtraction, or using two fused multiply–adds.
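As an illustration, the following floating-point Python sketch applies this iteration. The scaling via math.frexp and the linear initial estimate 48/17 − 32/17·''D'' (discussed under "Initial estimate" below) are choices made for the sketch rather than a description of any particular implementation.

import math

def newton_raphson_divide(n_val, d_val, iterations=4):
    """Approximate n_val / d_val (d_val > 0) by Newton-Raphson iteration on the reciprocal."""
    if d_val == 0:
        raise ZeroDivisionError("division by zero")
    m, e = math.frexp(d_val)            # d_val = m * 2**e with 0.5 <= m < 1
    x = 48 / 17 - (32 / 17) * m         # linear initial estimate of 1/m
    for _ in range(iterations):
        x = x * (2 - m * x)             # X_{i+1} = X_i (2 - D X_i); the error squares each step
    return n_val * math.ldexp(x, -e)    # undo the scaling and multiply by the dividend

print(newton_raphson_divide(355.0, 113.0))   # ~3.14159292, close to 355/113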
From a computation point of view, the expressions <math>X_i + X_i(1 - DX_i)</math> and <math>X_i(2 - DX_i)</math> are not equivalent. To obtain a result with a precision of 2''n'' bits while making use of the second expression, one must compute the product between <math>X_i</math> and <math>(2 - DX_i)</math> with double the given precision of <math>X_i</math> (''n'' bits). In contrast, the product between <math>X_i</math> and <math>(1 - DX_i)</math> need only be computed with a precision of ''n'' bits, because the leading ''n'' bits (after the binary point) of <math>(1 - DX_i)</math> are zeros.
If the error is defined as <math>\varepsilon_i = 1 - D X_i</math>, then:
:<math>\varepsilon_{i+1} = 1 - D X_{i+1} = 1 - D X_i (2 - D X_i) = (1 - D X_i)^2 = \varepsilon_i^2.</math>
This squaring of the error at each iteration step (the so-called quadratic convergence of Newton–Raphson's method) has the effect that the number of correct digits in the result roughly ''doubles for every iteration'', a property that becomes extremely valuable when the numbers involved have many digits (e.g. in the large integer domain). But it also means that the initial convergence of the method can be comparatively slow, especially if the initial estimate <math>X_0</math> is poorly chosen.
Initial estimate
For the subproblem of choosing an initial estimate <math>X_0</math>, it is convenient to apply a bit-shift to the divisor ''D'' to scale it so that 0.5 ≤ ''D'' ≤ 1. Applying the same bit-shift to the numerator ''N'' ensures the quotient does not change. Once within a bounded range, a simple polynomial approximation can be used to find an initial estimate.
The linear approximation with minimum worst-case absolute error on the interval <math>[0.5, 1]</math> is:
:<math>X_0 = \frac{48}{17} - \frac{32}{17} D.</math>