Pairwise summation

In numerical analysis, pairwise summation, also called cascade summation, is a technique for summing a sequence of finite-precision floating-point numbers that substantially reduces the accumulated round-off error compared to naively accumulating the sum in sequence. Although other techniques such as Kahan summation typically have even smaller round-off errors, pairwise summation is nearly as good (differing only by a logarithmic factor) while having much lower computational cost: it can be implemented so as to have nearly the same cost (and exactly the same number of arithmetic operations) as naive summation.

In particular, pairwise summation of a sequence of n numbers x_n works by recursively breaking the sequence into two halves, summing each half, and adding the two sums: a divide-and-conquer algorithm. Its worst-case roundoff error grows asymptotically as at most O(ε log n), where ε is the machine precision (assuming a fixed condition number, as discussed below). In comparison, the naive technique of accumulating the sum in sequence (adding each x_i one at a time for i = 1, ..., n) has roundoff errors that grow at worst as O(εn). Kahan summation has a worst-case error of roughly O(ε), independent of n, but requires several times more arithmetic operations. If the roundoff errors are random, and in particular have random signs, then they form a random walk and the error growth is reduced to an average of O(\varepsilon \sqrt{\log n}) for pairwise summation (Tasche & Zeuner 2000).

A very similar recursive structure of summation is found in many fast Fourier transform (FFT) algorithms, and is responsible for the same slow accumulation of roundoff in those FFTs (Johnson & Frigo 2008).
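For example, with n = 8 the pairwise method groups the additions as

    S_8 = ((x_1 + x_2) + (x_3 + x_4)) + ((x_5 + x_6) + (x_7 + x_8)),

so each summand passes through only \log_2 8 = 3 rounded additions on its way to the final result, whereas in left-to-right accumulation the first summand passes through n − 1 = 7 rounded additions.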


The algorithm

In pseudocode, the pairwise summation algorithm for an array x of length n > 0 can be written:

    s = pairwise(x[1…n])
        if n ≤ N                 base case: naive summation for a sufficiently small array
            s = x[1]
            for i = 2 to n
                s = s + x[i]
        else                     divide and conquer: recursively sum two halves of the array
            m = floor(n / 2)
            s = pairwise(x[1…m]) + pairwise(x[m+1…n])
        end if

For some sufficiently small N, this algorithm switches to a naive loop-based summation as a base case, whose error bound is O(Nε). The entire sum has a worst-case error that grows asymptotically as O(ε log n) for large n, for a given condition number (see below).

In an algorithm of this sort (as for divide-and-conquer algorithms in general), it is desirable to use a larger base case in order to amortize the overhead of the recursion. If N = 1, then there is roughly one recursive subroutine call for every input, but more generally there is one recursive call for (roughly) every N/2 inputs if the recursion stops at exactly n = N. By making N sufficiently large, the overhead of recursion can be made negligible (precisely this technique of a large base case for recursive summation is employed by high-performance FFT implementations). Regardless of N, exactly n − 1 additions are performed in total, the same as for naive summation, so if the recursion overhead is made negligible then pairwise summation has essentially the same computational cost as naive summation.

A variation on this idea is to break the sum into b blocks at each recursive stage, summing each block recursively, and then summing the results, which was dubbed a "superblock" algorithm by its proposers (Castaldo, Whaley & Chronopoulos 2008). The above pairwise algorithm corresponds to b = 2 for every stage except for the last stage, which is b = N.
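As a concrete illustration, a direct Python transcription of this pseudocode might look like the following sketch. The function names, the cutoff N = 128, and the block count b = 8 are illustrative choices made here, not taken from any particular library.

    def naive_sum(x):
        # Base case: plain left-to-right accumulation; error bound O(len(x) * eps).
        s = 0.0
        for v in x:
            s = s + v
        return s

    def pairwise_sum(x, N=128):
        # Pairwise (cascade) summation: recursively sum the two halves of x.
        n = len(x)
        if n <= N:
            return naive_sum(x)   # sufficiently small: fall back to the naive loop
        m = n // 2
        return pairwise_sum(x[:m], N) + pairwise_sum(x[m:], N)

    def superblock_sum(x, b=8, N=128):
        # "Superblock" variation: split into b blocks per stage instead of 2,
        # sum each block recursively, then sum the b partial sums.
        n = len(x)
        if n <= N:
            return naive_sum(x)
        step = -(-n // b)         # ceiling division gives the block length
        partials = [superblock_sum(x[i:i + step], b, N) for i in range(0, n, step)]
        return naive_sum(partials)

The slicing here copies subarrays for clarity; an optimized implementation would pass index ranges (start and end) instead, so that no extra storage is needed beyond the recursion stack.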


Accuracy

Suppose that one is summing n values x_i, for i = 1, ..., n. The exact sum is

    S_n = \sum_{i=1}^n x_i

(computed with infinite precision). With pairwise summation for a base case N = 1, one instead obtains S_n + E_n, where the error E_n is bounded above by

    |E_n| \leq \frac{\varepsilon \log_2 n}{1 - \varepsilon \log_2 n} \sum_{i=1}^n |x_i|,

where ε is the machine precision of the arithmetic being employed (e.g. ε ≈ 10^−16 for standard double-precision floating point). Usually, the quantity of interest is the relative error |E_n| / |S_n|, which is therefore bounded above by

    \frac{|E_n|}{|S_n|} \leq \frac{\varepsilon \log_2 n}{1 - \varepsilon \log_2 n} \left( \frac{\sum_{i=1}^n |x_i|}{\left| \sum_{i=1}^n x_i \right|} \right).

In the expression for the relative error bound, the fraction \sum |x_i| / |\sum x_i| is the condition number of the summation problem. Essentially, the condition number represents the intrinsic sensitivity of the summation problem to errors, regardless of how it is computed. The relative error bound of every (backwards stable) summation method by a fixed algorithm in fixed precision (i.e. not those that use arbitrary-precision arithmetic, nor algorithms whose memory and time requirements change based on the data) is proportional to this condition number. An ill-conditioned summation problem is one in which this ratio is large, and in this case even pairwise summation can have a large relative error. For example, if the summands x_i are uncorrelated random numbers with zero mean, the sum is a random walk and the condition number grows proportional to \sqrt{n}. On the other hand, for random inputs with nonzero mean the condition number asymptotes to a finite constant as n \to \infty. If the inputs are all non-negative, then the condition number is 1.

Note that the 1 - \varepsilon \log_2 n denominator is effectively 1 in practice, since \varepsilon \log_2 n is much smaller than 1 until n becomes of order 2^{1/\varepsilon}, which is roughly 10^{10^{15}} in double precision.

In comparison, the relative error bound for naive summation (simply adding the numbers in sequence, rounding at each step) grows as O(\varepsilon n) multiplied by the condition number. In practice, it is much more likely that the rounding errors have a random sign, with zero mean, so that they form a random walk; in this case, naive summation has a root-mean-square relative error that grows as O(\varepsilon \sqrt{n}) and pairwise summation has an error that grows as O(\varepsilon \sqrt{\log n}) on average.
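These growth rates can be checked empirically. The following is a small illustrative script (the function names, the base-case size, and the input distribution are assumptions made here for demonstration); it compares naive and pairwise summation of one million positive double-precision values against math.fsum as a correctly rounded reference:

    import math
    import random

    def naive_sum(x):
        # Left-to-right accumulation, rounding after each addition.
        s = 0.0
        for v in x:
            s = s + v
        return s

    def pairwise_sum(x, N=128):
        # Same pairwise sketch as in the previous section.
        if len(x) <= N:
            return naive_sum(x)
        m = len(x) // 2
        return pairwise_sum(x[:m], N) + pairwise_sum(x[m:], N)

    random.seed(0)
    x = [random.random() for _ in range(10**6)]   # all positive, so the condition number is 1
    exact = math.fsum(x)                          # correctly rounded reference sum
    print("naive    relative error:", abs(naive_sum(x) - exact) / abs(exact))
    print("pairwise relative error:", abs(pairwise_sum(x) - exact) / abs(exact))

The exact figures depend on the seed and platform, but for inputs of this size the naive error is typically a couple of orders of magnitude larger than the pairwise error, consistent with the O(\varepsilon \sqrt{n}) versus O(\varepsilon \sqrt{\log n}) average-case growth above.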


Software implementations

Pairwise summation is the default summation algorithm in NumPy and the Julia technical-computing language, where in both cases it was found to have comparable speed to naive summation (thanks to the use of a large base case). Other software implementations include the HPCsharp library for the C# language and the standard-library summation in the D programming language.
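Because NumPy's default summation is pairwise (as stated above), its accumulated error on a long single-precision array is far smaller than that of a naive left-to-right loop in the same precision. A quick, purely illustrative check (assuming NumPy is installed; the array size and seed are arbitrary choices made here):

    import numpy as np

    rng = np.random.default_rng(0)
    x = rng.random(10**6, dtype=np.float32)         # one million single-precision values in [0, 1)

    pairwise_result = np.sum(x)                     # NumPy's default summation (pairwise, per the text above)

    naive_result = np.float32(0.0)                  # naive left-to-right accumulation in the same precision
    for v in x:
        naive_result = np.float32(naive_result + v)

    reference = float(np.sum(x, dtype=np.float64))  # higher-precision reference sum
    print("pairwise relative error:", abs(float(pairwise_result) - reference) / reference)
    print("naive    relative error:", abs(float(naive_result) - reference) / reference)

For values of roughly equal magnitude at this size, the naive single-precision loop typically loses a few more decimal digits than the pairwise result.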


References

Anthony M. Castaldo, R. Clint Whaley, and Anthony T. Chronopoulos, "Reducing floating-point error in dot product using the superblock family of algorithms," SIAM J. Sci. Comput., vol. 32, pp. 1156–1174 (2008).

S. G. Johnson and M. Frigo, "Implementing FFTs in practice," in Fast Fourier Transforms, edited by C. Sidney Burrus (2008).

Manfred Tasche and Hansmartin Zeuner, Handbook of Analytic-Computational Methods in Applied Mathematics (Boca Raton, FL: CRC Press, 2000).