2Sum
2Sum is a floating-point algorithm for computing the exact round-off error in a floating-point addition operation. 2Sum and its variant Fast2Sum were first published by Møller in 1965. Fast2Sum is often used implicitly in other algorithms, such as compensated summation algorithms; Kahan's summation algorithm was published first, in 1965, and Fast2Sum was later factored out of it by Dekker in 1971 for double-double arithmetic algorithms. The names ''2Sum'' and ''Fast2Sum'' appear to have been applied retroactively by Shewchuk in 1997.


Algorithm

Given two floating-point numbers a and b, 2Sum computes the floating-point sum s := \operatorname{fl}(a + b) and the floating-point error t := a + b - \operatorname{fl}(a + b), so that s + t = a + b exactly. The error t is itself a floating-point number.

:Inputs: floating-point numbers a, b
:Outputs: sum s = \operatorname{fl}(a + b) and error t = a + b - \operatorname{fl}(a + b)
:# s := a \oplus b
:# a' := s \ominus b
:# b' := s \ominus a'
:# \delta_a := a \ominus a'
:# \delta_b := b \ominus b'
:# t := \delta_a \oplus \delta_b
:# return (s, t)

Provided the floating-point arithmetic is correctly rounded to nearest (with ties resolved in any way), as is the default in IEEE 754, and provided the sum does not overflow and, if it underflows, underflows gradually, it can be proven that s + t = a + b.

A variant of 2Sum called Fast2Sum uses only three floating-point operations, for floating-point arithmetic in radix 2 or radix 3, under the assumption that the exponent of a is at least as large as the exponent of b, such as when |a| \geq |b|:

:Inputs: radix-2 or radix-3 floating-point numbers a and b, of which at least one is zero, or which respectively have normalized exponents e_a \geq e_b
:Outputs: sum s = \operatorname{fl}(a + b) and error t = a + b - \operatorname{fl}(a + b)
:# s := a \oplus b
:# z := s \ominus a
:# t := b \ominus z
:# return (s, t)

Even if these conditions are not satisfied, 2Sum and Fast2Sum often provide reasonable approximations to the error, so that s + t \approx a + b, which enables algorithms for compensated summation, dot products, etc., to have low error even if the inputs are not sorted or the rounding mode is unusual. More complicated variants of 2Sum and Fast2Sum also exist for rounding modes other than round-to-nearest.


See also

* Kahan summation algorithm
* Round-off error
* Double-double arithmetic

