The Remez algorithm or Remez exchange algorithm, published by Evgeny Yakovlevich Remez in 1934, is an iterative algorithm used to find simple approximations to functions, specifically, approximations by functions in a Chebyshev space that are the best in the uniform norm (L_\infty) sense. It is sometimes referred to as the Remes algorithm or Reme algorithm. A typical example of a Chebyshev space is the subspace of Chebyshev polynomials of order ''n'' in the space of real continuous functions on an interval, ''C''[''a'', ''b'']. The polynomial of best approximation within a given subspace is defined to be the one that minimizes the maximum absolute difference between the polynomial and the function. In this case, the form of the solution is made precise by the equioscillation theorem.


Procedure

The Remez algorithm starts with the function f to be approximated and a set X of n + 2 sample points x_1, x_2, \ldots, x_{n+2} in the approximation interval, usually the extrema of a Chebyshev polynomial linearly mapped to the interval. The steps are:

* Solve the linear system of equations
:b_0 + b_1 x_i + \ldots + b_n x_i^n + (-1)^i E = f(x_i) (where i = 1, 2, \ldots, n+2),
:for the unknowns b_0, b_1, \ldots, b_n and ''E''.
* Use the b_i as coefficients to form a polynomial P_n.
* Find the set M of points of local maximum error |P_n(x) - f(x)|.
* If the errors at every m \in M are of equal magnitude and alternate in sign, then P_n is the minimax approximation polynomial. If not, replace X with M and repeat the steps above.

The result is called the polynomial of best approximation; the procedure itself is the minimax approximation algorithm. A review of technicalities in implementing the Remez algorithm is given by W. Fraser. A sketch of the exchange loop in code follows.
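The following is a minimal sketch of this exchange loop in Python, assuming NumPy; the function name remez, the crude grid-based search for the error extrema, and all tolerances are illustrative choices, not a reference implementation.

import numpy as np

def remez(f, n, a, b, iters=20, grid_size=4001):
    """Sketch of the Remez exchange loop for a degree-n minimax polynomial."""
    # Initial reference set X: extrema of the Chebyshev polynomial T_{n+1},
    # linearly mapped from [-1, 1] to [a, b].
    k = np.arange(n + 2)
    x = np.sort((a + b) / 2 - (b - a) / 2 * np.cos(np.pi * k / (n + 1)))
    for _ in range(iters):
        # Step 1: solve b_0 + b_1 x_i + ... + b_n x_i^n + (-1)^i E = f(x_i).
        A = np.hstack([np.vander(x, n + 1, increasing=True),
                       ((-1.0) ** k)[:, None]])
        sol = np.linalg.solve(A, f(x))
        p, E = np.polynomial.Polynomial(sol[:-1]), sol[-1]
        # Steps 2-3: one extremum of the error per sign-constant interval
        # (crude grid search; real implementations refine these locations).
        t = np.linspace(a, b, grid_size)
        r = p(t) - f(t)
        blocks = np.split(np.arange(t.size),
                          np.where(np.diff(np.sign(r)) != 0)[0] + 1)
        peaks = np.array([t[idx[np.argmax(np.abs(r[idx]))]] for idx in blocks])
        if peaks.size != n + 2:   # unexpected sign pattern: stop with current p
            break
        x = peaks                 # exchange: the new reference set
        # Stop when the error amplitudes have (nearly) equalized.
        z = p(x) - f(x)
        if np.ptp(np.abs(z)) <= 1e-12 * np.max(np.abs(z)):
            break
    return p, abs(E)

# Example (illustrative): degree-5 minimax approximation to exp on [0, 1].
p5, err5 = remez(np.exp, 5, 0.0, 1.0)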


Choice of initialization

The Chebyshev nodes are a common choice for the initial approximation because of their role in the theory of polynomial interpolation. For the initialization of the optimization problem for function ''f'' by the Lagrange interpolant L_n(f), it can be shown that this initial approximation is bounded by

:\lVert f - L_n(f)\rVert_\infty \le \left(1 + \lVert L_n\rVert_\infty\right) \inf_{p \in P_n} \lVert f - p\rVert

with the norm or Lebesgue constant of the Lagrange interpolation operator L_n of the nodes (t_1, \ldots, t_{n+1}) being

:\lVert L_n\rVert_\infty = \overline{\Lambda}_n(T) = \max_{-1 \le x \le 1} \lambda_n(T; x),

''T'' being the zeros of the Chebyshev polynomials, and the Lebesgue functions being

:\lambda_n(T; x) = \sum_{j=1}^{n+1} \left| l_j(x) \right|, \quad l_j(x) = \prod_{k=1,\, k \ne j}^{n+1} \frac{x - t_k}{t_j - t_k}.

Theodore A. Kilgore, Carl de Boor, and Allan Pinkus proved that there exists a unique t_i for each L_n, although it is not known explicitly for (ordinary) polynomials. Similarly, \underline{\Lambda}_n(T) = \min_{-1 \le x \le 1} \lambda_n(T; x), and the optimality of a choice of nodes can be expressed as \overline{\Lambda}_n - \underline{\Lambda}_n \ge 0.

For Chebyshev nodes, which provide a suboptimal but analytically explicit choice, the asymptotic behavior is known as

:\overline{\Lambda}_n(T) = \frac{2}{\pi} \log(n + 1) + \frac{2}{\pi}\left(\gamma + \log\frac{8}{\pi}\right) + \alpha_{n+1}

(\gamma being the Euler–Mascheroni constant) with

:0 < \alpha_n < \frac{\pi}{72\, n^2} for n \ge 1,

and upper bound

:\overline{\Lambda}_n(T) \le \frac{2}{\pi} \log(n + 1) + 1.

Lev Brutman obtained the bound for n \ge 3, with \hat{T} being the zeros of the expanded Chebyshev polynomials:

:\overline{\Lambda}_n(\hat{T}) - \underline{\Lambda}_n(\hat{T}) < \overline{\Lambda}_3 - \frac{1}{6} \cot\frac{\pi}{8} + \frac{\pi}{64}\, \frac{1}{\sin^2(3\pi/16)} - \frac{2}{\pi}\left(\gamma - \log\pi\right) \approx 0.201.

Rüdiger Günttner obtained from a sharper estimate for n \ge 40:

:\overline{\Lambda}_n(\hat{T}) - \underline{\Lambda}_n(\hat{T}) < 0.0196.
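As a quick numerical illustration of these quantities, one can evaluate the Lebesgue function \lambda_n(T; x) on a fine grid at the Chebyshev-zero nodes and compare its maximum with the upper bound \frac{2}{\pi}\log(n+1) + 1. This is a sketch assuming NumPy; the helper name lebesgue_constant is ours.

import numpy as np

def lebesgue_constant(nodes, grid):
    """Max over the grid of the Lebesgue function sum_j |l_j(x)|."""
    lam = np.zeros_like(grid)
    for j, tj in enumerate(nodes):
        others = np.delete(nodes, j)
        # Lagrange basis polynomial l_j(x) = prod_{k != j} (x - t_k)/(t_j - t_k)
        lj = np.prod((grid[:, None] - others) / (tj - others), axis=1)
        lam += np.abs(lj)
    return lam.max()

n = 10
# Chebyshev nodes: the n+1 zeros of T_{n+1} on [-1, 1].
T = np.cos((2 * np.arange(n + 1) + 1) * np.pi / (2 * (n + 1)))
grid = np.linspace(-1.0, 1.0, 10001)
print(lebesgue_constant(T, grid))        # roughly 2.49 for n = 10
print(2 / np.pi * np.log(n + 1) + 1)     # upper bound, roughly 2.53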


Detailed discussion

This section provides more information on the steps outlined above. In this section, the index ''i'' runs from 0 to ''n''+1.

Step 1: Given x_0, x_1, \ldots, x_{n+1}, solve the linear system of ''n''+2 equations

:b_0 + b_1 x_i + \ldots + b_n x_i^n + (-1)^i E = f(x_i) (where i = 0, 1, \ldots, n+1),

for the unknowns b_0, b_1, \ldots, b_n and ''E''.

It should be clear that (-1)^i E in this equation makes sense only if the nodes x_0, \ldots, x_{n+1} are ''ordered'', either strictly increasing or strictly decreasing. Then this linear system has a unique solution. (As is well known, not every linear system has a solution.) Also, the solution can be obtained with only O(n^2) arithmetic operations, while a standard solver from a library would take O(n^3) operations.

Here is the simple proof: Compute the standard ''n''-th degree interpolant p_1(x) to f(x) at the first ''n''+1 nodes and also the standard ''n''-th degree interpolant p_2(x) to the ordinates (-1)^i:

:p_1(x_i) = f(x_i), \quad p_2(x_i) = (-1)^i, \quad i = 0, \ldots, n.

To this end, use each time Newton's interpolation formula with the divided differences of order 0, \ldots, n and O(n^2) arithmetic operations.

The polynomial p_2(x) has its ''i''-th zero between x_{i-1} and x_i,\ i = 1, \ldots, n, and thus no further zeros between x_n and x_{n+1}: p_2(x_n) and p_2(x_{n+1}) have the same sign (-1)^n.

The linear combination p(x) := p_1(x) - p_2(x)\cdot E is also a polynomial of degree ''n'' and

:p(x_i) = p_1(x_i) - p_2(x_i)\cdot E = f(x_i) - (-1)^i E, \quad i = 0, \ldots, n.

This is the same as the equation above for i = 0, \ldots, n and for any choice of ''E''. The same equation for ''i'' = ''n''+1 is

:p(x_{n+1}) = p_1(x_{n+1}) - p_2(x_{n+1})\cdot E = f(x_{n+1}) - (-1)^{n+1} E

and needs special reasoning: solved for the variable ''E'', it is the ''definition'' of ''E'':

:E := \frac{p_1(x_{n+1}) - f(x_{n+1})}{p_2(x_{n+1}) + (-1)^n}.

As mentioned above, the two terms in the denominator have the same sign: ''E'' and thus p(x) \equiv b_0 + b_1 x + \ldots + b_n x^n are always well-defined.

The error at the given ''n''+2 ordered nodes is positive and negative in turn because

:p(x_i) - f(x_i) = -(-1)^i E, \quad i = 0, \ldots, n+1.

The theorem of de La Vallée Poussin states that under this condition no polynomial of degree ''n'' exists with error less than ''E''. Indeed, if such a polynomial existed, call it \tilde p(x), then the difference p(x) - \tilde p(x) = (p(x) - f(x)) - (\tilde p(x) - f(x)) would still be positive/negative at the ''n''+2 nodes x_i and therefore have at least ''n''+1 zeros, which is impossible for a polynomial of degree ''n''. Thus, this ''E'' is a lower bound for the minimum error which can be achieved with polynomials of degree ''n''.

Step 2 changes the notation from b_0 + b_1 x + \ldots + b_n x^n to p(x).

Step 3 improves upon the input nodes x_0, \ldots, x_{n+1} and their errors \pm E as follows. In each P-region (an interval on which the error p(x) - f(x) is positive), the current node x_i is replaced with the local maximizer \bar{x}_i, and in each N-region (where the error is negative) x_i is replaced with the local minimizer. (Expect \bar{x}_0 at ''A'', the \bar{x}_i near x_i, and \bar{x}_{n+1} at ''B'', where ''A'' and ''B'' are the endpoints of the approximation interval.) No high precision is required here; the standard ''line search'' with a couple of ''quadratic fits'' should suffice.

Let z_i := p(\bar{x}_i) - f(\bar{x}_i). Each amplitude |z_i| is greater than or equal to ''E''. The theorem of ''de La Vallée Poussin'' and its proof also apply to z_0, \ldots, z_{n+1}, with \min_i |z_i| \geq E as the new lower bound for the best error possible with polynomials of degree ''n''. Moreover, \max_i |z_i| comes in handy as an obvious upper bound for that best possible error.

Step 4: With \min_i |z_i| and \max_i |z_i| as lower and upper bound for the best possible approximation error, one has a reliable stopping criterion: repeat the steps until \max_i |z_i| - \min_i |z_i| is sufficiently small or no longer decreases. These bounds indicate the progress.
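The O(n^2) construction of Step 1 described above translates almost directly into code. The following sketch, assuming NumPy and with helper names of our own choosing, builds the two Newton interpolants p_1 and p_2 by divided differences and recovers ''E'' from its defining formula; x is a NumPy array of ''n''+2 ordered nodes.

import numpy as np

def newton_coeffs(xs, ys):
    """Divided-difference coefficients of the Newton-form interpolant."""
    c = np.array(ys, dtype=float)
    for j in range(1, len(xs)):
        c[j:] = (c[j:] - c[j - 1:-1]) / (xs[j:] - xs[:-j])
    return c

def newton_eval(c, xs, t):
    """Evaluate the Newton-form polynomial at t by Horner's scheme."""
    val = c[-1]
    for ck, xk in zip(c[-2::-1], xs[-2::-1]):
        val = val * (t - xk) + ck
    return val

def solve_step1(x, fx):
    """Step 1 in O(n^2): nodes x[0..n+1], values fx = f(x); returns (p, E)."""
    xs, n = x[:-1], len(x) - 2                           # first n+1 nodes
    c1 = newton_coeffs(xs, fx[:-1])                      # p1 interpolates f
    c2 = newton_coeffs(xs, (-1.0) ** np.arange(n + 1))   # p2 interpolates (-1)^i
    # E = (p1(x_{n+1}) - f(x_{n+1})) / (p2(x_{n+1}) + (-1)^n)
    E = ((newton_eval(c1, xs, x[-1]) - fx[-1])
         / (newton_eval(c2, xs, x[-1]) + (-1.0) ** n))
    # p(t) = p1(t) - E * p2(t), the levelled interpolant of degree n
    return (lambda t: newton_eval(c1, xs, t) - E * newton_eval(c2, xs, t)), E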


Variants

Some modifications of the algorithm are present in the literature. These include:
* Replacing more than one sample point with the locations of nearby maximum absolute differences.
* Replacing all of the sample points in a single iteration with the locations of all alternating-sign maximum differences (Temes, G.C.; Barcilon, V.; Marshall, F.C. (1973). "The optimization of bandlimited systems". ''Proceedings of the IEEE''. 61 (2): 196–234. doi:10.1109/PROC.1973.9004. ISSN 0018-9219).
* Using the relative error to measure the difference between the approximation and the function, especially if the approximation will be used to compute the function on a computer which uses floating-point arithmetic (see the sketch after this list).
* Including zero-error point constraints.
* The Fraser–Hart variant, used to determine the best rational Chebyshev approximation.
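For instance, the relative-error variant can be obtained with a one-column change to the Step-1 linear system: the levelled quantity becomes (p(x_i) - f(x_i))/f(x_i), so the alternating column is scaled by f(x_i). A minimal sketch, assuming NumPy and that f has no zeros on the interval; the name solve_relative is ours.

import numpy as np

def solve_relative(x, fx, n):
    """Levelled system for relative error: (p(x_i) - f(x_i))/f(x_i) = -(-1)^i E."""
    k = np.arange(n + 2)
    A = np.hstack([np.vander(x, n + 1, increasing=True),
                   (((-1.0) ** k) * fx)[:, None]])   # column scaled by f(x_i)
    sol = np.linalg.solve(A, fx)
    return sol[:-1], sol[-1]    # coefficients b_0..b_n and levelled error E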


See also

*
Approximation theory


References


External links


* Minimax Approximations and the Remez Algorithm, background chapter in the Boost Math Tools documentation, with a link to an implementation in C++
* Intro to DSP
* {{MathWorld |urlname=RemezAlgorithm |title=Remez Algorithm |author1-link=Ronald Aarts |author=Aarts, Ronald M. |author2=Bond, Charles |author3=Mendelsohn, Phil |author4=Weisstein, Eric W. |name-list-style=amp}}

Categories: Polynomials, Approximation theory, Numerical analysis