Approximation error

The approximation error in a data value is the discrepancy between an exact, true value and some approximation of it. This error can be quantified in two principal ways: as an absolute error, the direct numerical magnitude of the discrepancy irrespective of the true value's scale, or as a relative error, which scales the absolute error by the magnitude of the exact value and thus offers a context-dependent assessment of the error's significance.

An approximation error can arise for many reasons. Prominent among them are the limits of computing machine precision, since digital systems cannot represent all real numbers exactly and must round or truncate, and inherent measurement error, stemming from the practical limitations of instruments, environmental factors, or observational processes. For instance, if the actual length of a piece of paper is 4.53 cm but the measuring ruler only permits estimation to the nearest 0.1 cm, the recorded measurement of 4.5 cm carries an unavoidable error.

In the mathematical field of numerical analysis, the numerical stability of an algorithm indicates the extent to which errors or perturbations in the algorithm's input are likely to propagate and amplify into errors in its output. Numerically stable algorithms are robust in the sense that they do not magnify small input inaccuracies into large output errors; numerically unstable algorithms may exhibit dramatic error growth from small input changes, rendering their results unreliable.


Formal definition

Given some true or exact value ''v'', we formally state that an approximation ''v''approx estimates or represents ''v'' with absolute error bounded by a positive value ''ε'' (i.e., ''ε'' > 0) if the following inequality holds:

:|v - v_\text{approx}| \le \varepsilon,

where the vertical bars denote the absolute value of the difference between the true value ''v'' and its approximation ''v''approx. This quantity is the magnitude of the error, irrespective of whether the approximation is an overestimate or an underestimate. Similarly, we state that ''v''approx approximates ''v'' with relative error bounded by a positive value ''η'' (i.e., ''η'' > 0), provided ''v'' ≠ 0, if the following inequality is satisfied:

:|v - v_\text{approx}| \le \eta \cdot |v|.

This definition makes ''η'' an upper bound on the ratio of the absolute error to the magnitude of the true value. If ''v'' ≠ 0, the actual relative error, often also denoted by ''η'' in context (the calculated value rather than a bound), is

:\eta = \frac{\varepsilon}{|v|} = \left|\frac{v - v_\text{approx}}{v}\right| = \left|1 - \frac{v_\text{approx}}{v}\right|,

where ''ε'' = |''v'' − ''v''approx| is the absolute error. The percent error, often denoted ''δ'', expresses the relative error as a percentage for easier interpretation and comparison across contexts:

:\delta = 100\% \times \eta = 100\% \times \left|\frac{v - v_\text{approx}}{v}\right|.

An error bound is an established upper limit on either the relative or the absolute magnitude of an approximation error. Such a bound provides a formal guarantee on the maximum possible deviation of the approximation from the true value, which is critical in applications requiring known levels of precision.
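
These definitions translate directly into code. The following is a minimal illustrative sketch (the helper names are ours, not a standard library API); printed values are approximate because of floating-point rounding:

<syntaxhighlight lang="python">
def absolute_error(v, v_approx):
    """Magnitude of the difference between the true value and its approximation."""
    return abs(v - v_approx)

def relative_error(v, v_approx):
    """Absolute error scaled by the magnitude of the true value (v must be non-zero)."""
    if v == 0:
        raise ValueError("relative error is undefined when the true value is zero")
    return absolute_error(v, v_approx) / abs(v)

def percent_error(v, v_approx):
    """Relative error expressed as a percentage."""
    return 100.0 * relative_error(v, v_approx)

print(absolute_error(50, 49.9))  # ~0.1
print(relative_error(50, 49.9))  # ~0.002
print(percent_error(50, 49.9))   # ~0.2
</syntaxhighlight>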


Examples

To illustrate these concepts, consider an exact, accepted value of 50 approximated by 49.9. The absolute error is 0.1 (|50 − 49.9|), and the relative error is 0.1/50 = 0.002, or 0.2%. In a more practical setting, when measuring the volume of liquid in a 6 mL beaker, if the instrument reads 5 mL while the true volume is 6 mL, the percent error is |(6 mL − 5 mL) / 6 mL| × 100% ≈ 16.7% (rounded to one decimal place).

The utility of relative error is clearest when comparing approximations of numbers with widely differing magnitudes. Approximating 1,000 with an absolute error of 3 gives a relative error of 0.003 (0.3%); approximating 1,000,000 with the same absolute error of 3 gives a relative error of only 0.000003 (0.0003%). In most scientific and engineering contexts, the first is considered a significantly less accurate approximation, even though the absolute errors are identical. This comparison highlights how relative error provides a more meaningful assessment of precision when values span different orders of magnitude.

Two caveats should be kept in mind when interpreting relative error. First, relative error is mathematically undefined whenever the true value ''v'' is zero, since ''v'' appears in the denominator of its definition (as detailed above) and division by zero is undefined. Second, relative error is only truly meaningful when measurements are made on a ratio scale, one with a true, non-arbitrary zero point that signifies the complete absence of the quantity being measured. On other scales (e.g., interval scales such as Celsius temperature), the calculated relative error is sensitive to the choice of measurement units and can be misleading.
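
The magnitude comparison above can be reproduced in a few lines (an illustrative sketch):

<syntaxhighlight lang="python">
# An identical absolute error of 3 at two very different magnitudes.
for v, v_approx in ((1_000, 1_003), (1_000_000, 1_000_003)):
    relative = abs(v - v_approx) / abs(v)
    print(f"true value {v}: relative error {relative:.6%}")
# true value 1000: relative error 0.300000%
# true value 1000000: relative error 0.000300%
</syntaxhighlight>
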
As an example of this scale sensitivity, suppose an absolute error of 1 °C in a temperature measurement whose true value is 2 °C; the relative error is |1 °C / 2 °C| = 0.5, or 50%. If the very same approximation, representing the same physical temperature, is instead expressed on the Kelvin scale (a ratio scale on which 0 K is absolute zero), the 1 K absolute error (equal in magnitude to a 1 °C error) against the true value of 275.15 K (equivalent to 2 °C) gives a markedly different relative error of |1 K / 275.15 K| ≈ 0.00363, or about 0.363%. This disparity underscores the importance of the underlying measurement scale.
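
The scale sensitivity is easy to verify numerically; the following sketch computes both relative errors for the same physical situation:

<syntaxhighlight lang="python">
# The same physical situation on two scales: a 1-degree absolute error
# on a true temperature of 2 degrees Celsius (= 275.15 K).
rel_celsius = abs(1.0 / 2.0)      # interval scale with an arbitrary zero
rel_kelvin = abs(1.0 / 275.15)    # ratio scale with a true zero

print(f"{rel_celsius:.2%}")  # 50.00%
print(f"{rel_kelvin:.3%}")   # 0.363%
</syntaxhighlight>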


Comparison

When comparing the behavior and intrinsic characteristics of these two fundamental error types, it is important to recognize their differing sensitivities to common arithmetic operations. Specifically, statements made about ''relative errors'' are sensitive to the addition of a non-zero constant to the underlying true and approximated values, as such an addition alters the base value against which the error is relativized, thereby changing the ratio. However, relative errors are unaffected by the multiplication of both the true and approximated values by the same non-zero constant, because this constant appears in both the numerator (the absolute error) and the denominator (the true value's magnitude) of the relative error and cancels out. Conversely, for ''absolute errors'', the opposite holds: absolute errors are directly sensitive to multiplication of the underlying values by a constant (as this scales the magnitude of the difference itself), but they are unaffected by the addition of a constant to both values, since adding the same constant to the true value and its approximation does not change the difference between them: (''v'' + c) − (''v''approx + c) = ''v'' − ''v''approx.
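
A short sketch verifies these invariances numerically (values are approximate up to floating-point rounding):

<syntaxhighlight lang="python">
def abs_err(a, b):
    return abs(a - b)

def rel_err(a, b):
    return abs(a - b) / abs(a)

v, v_approx = 50.0, 49.9

# Multiplying both values by a constant leaves the relative error
# unchanged but scales the absolute error by that constant.
c = 10.0
print(rel_err(v, v_approx), rel_err(c * v, c * v_approx))  # both ~0.002
print(abs_err(v, v_approx), abs_err(c * v, c * v_approx))  # ~0.1 vs ~1.0

# Adding a constant to both values leaves the absolute error unchanged
# but alters the relative error, since the base of comparison shifts.
d = 1000.0
print(abs_err(v, v_approx), abs_err(v + d, v_approx + d))  # both ~0.1
print(rel_err(v, v_approx), rel_err(v + d, v_approx + d))  # ~0.002 vs ~0.0000952
</syntaxhighlight>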


Polynomial-time approximation of real numbers

In the realm of computational complexity theory, a real value ''v'' is said to be polynomially computable with absolute error from a given input if, for any specified rational number ''ε'' > 0 representing the desired maximum permissible absolute error, it is algorithmically possible to compute a rational number ''v''approx such that |''v'' − ''v''approx| ≤ ''ε''. Crucially, this computation must be achievable in time polynomial in the size of the input data and the encoding size of ''ε'' (the latter being of the order O(log(1/''ε'')) bits, the number of bits needed to represent the precision). Analogously, ''v'' is polynomially computable with relative error if, for any specified rational ''η'' > 0, it is possible to compute a rational ''v''approx with |(''v'' − ''v''approx)/''v''| ≤ ''η'' (assuming ''v'' ≠ 0), likewise in time polynomial in the size of the input and the encoding size of ''η'' (typically O(log(1/''η'')) bits).

If a value ''v'' is polynomially computable with relative error (by an algorithm we designate REL), then it is consequently also polynomially computable with absolute error.

''Proof sketch'': Let ''ε'' > 0 be the target maximum absolute error. First invoke REL with relative error bound ''η'' = 1/2 to obtain a rational ''r''1 with |''v'' − ''r''1| ≤ |''v''|/2. By the reverse triangle inequality (|''v''| − |''r''1| ≤ |''v'' − ''r''1|), this implies |''v''| ≤ 2|''r''1| (assuming ''r''1 ≠ 0; if ''r''1 = 0, the relative error condition forces ''v'' = 0, in which case ''v''approx = 0 achieves any absolute error ''ε'' > 0 and we are done). Since REL runs in polynomial time, the encoding length of ''r''1 is polynomial in the input size. Now invoke REL a second time with the new, typically much smaller, relative error target ''η'' = ''ε''/(2|''r''1|), obtaining a rational ''r''2 with |''v'' − ''r''2| ≤ ''η''|''v''|. Substituting for ''η'' and using |''v''| ≤ 2|''r''1| gives |''v'' − ''r''2| ≤ (''ε''/(2|''r''1|)) × 2|''r''1| = ''ε''. Thus ''r''2 approximates ''v'' with the desired absolute error ''ε'', demonstrating that polynomial computability with relative error implies polynomial computability with absolute error.
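
The proof sketch can be written as a short reduction. In the sketch below, rel_approx is a hypothetical stand-in for the REL oracle (a function taking a relative error bound and returning a rational approximation); exact rational arithmetic via Python's fractions module keeps the error bounds exact:

<syntaxhighlight lang="python">
from fractions import Fraction

def abs_from_rel(rel_approx, eps):
    """Achieve absolute error <= eps given an oracle with a relative-error guarantee.

    rel_approx(eta) is assumed to return a rational r with |v - r| <= eta * |v|.
    """
    # Step 1: a coarse call with eta = 1/2 yields |v - r1| <= |v|/2, and hence
    # |v| <= 2|r1| by the reverse triangle inequality.
    r1 = rel_approx(Fraction(1, 2))
    if r1 == 0:
        # |v - 0| <= |v|/2 forces v = 0, so 0 already has absolute error 0.
        return Fraction(0)
    # Step 2: a second call with eta = eps / (2|r1|) yields
    # |v - r2| <= eta * |v| <= (eps / (2|r1|)) * 2|r1| = eps.
    return rel_approx(eps / (2 * abs(r1)))

# Toy oracle for v = 2/3 that always meets its bound: |v - r| = (eta/2) * |v|.
v = Fraction(2, 3)
oracle = lambda eta: v * (1 - eta / 2)
print(abs_from_rel(oracle, Fraction(1, 10**6)))  # a rational within 10**-6 of 2/3
</syntaxhighlight>
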
The reverse implication, namely that polynomial computability with absolute error implies polynomial computability with relative error, is generally not true without additional assumptions. However, a significant special case exists: if a positive lower bound ''b'' on the magnitude of ''v'' (i.e., |''v''| > ''b'' > 0) can itself be computed in polynomial time, and ''v'' is known to be polynomially computable with absolute error (by an algorithm designated ABS), then ''v'' is also polynomially computable with relative error. One simply invokes ABS with the carefully chosen target absolute error ''ε''target = ''ηb'', where ''η'' is the desired relative error. The resulting approximation satisfies |''v'' − ''v''approx| ≤ ''ηb''; dividing by |''v''| (which is non-zero) gives |(''v'' − ''v''approx)/''v''| ≤ ''ηb''/|''v''| < ''η'', since |''v''| > ''b'' implies ''b''/|''v''| < 1, which is the desired relative error bound.

An algorithm that, for every rational ''η'' > 0, computes a rational ''v''approx approximating ''v'' with relative error at most ''η'', in time polynomial in both the input size and 1/''η'' (rather than in log(1/''η''), which would permit fast computation even for extremely small ''η''), is known as a fully polynomial-time approximation scheme (FPTAS). The dependence on 1/''η'' rather than log(1/''η'') is a defining characteristic of an FPTAS and distinguishes it from weaker approximation schemes.
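
The special case above is likewise a one-line reduction; in this illustrative sketch, abs_approx stands in for a hypothetical ABS oracle:

<syntaxhighlight lang="python">
from fractions import Fraction

def rel_from_abs(abs_approx, eta, b):
    """Achieve relative error <= eta given a lower bound 0 < b < |v|.

    abs_approx(eps) is assumed to return a rational r with |v - r| <= eps.
    """
    # |v - r| <= eta*b < eta*|v|, so the relative-error bound follows.
    return abs_approx(eta * b)

# Toy oracle for v = 10/3 that always meets its bound: |v - r| = eps/2.
v = Fraction(10, 3)
oracle = lambda eps: v + eps / 2
r = rel_from_abs(oracle, Fraction(1, 100), Fraction(3))
print(abs(v - r) / abs(v) <= Fraction(1, 100))  # True
</syntaxhighlight>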


Instruments

In the context of most indicating measurement instruments, such as analog or digital voltmeters, pressure gauges, and thermometers, the specified accuracy is frequently guaranteed by their manufacturers as a certain percentage of the instrument's full-scale reading capability, rather than as a percentage of the actual reading. The defined boundaries or limits of these permissible deviations from the true or specified values under operational conditions are commonly referred to as limiting errors or, alternatively, guarantee errors. This method of specifying accuracy implies that the maximum possible absolute error can be larger when measuring values towards the higher end of the instrument's scale, while the relative error with respect to the full-scale value itself remains constant across the range. Consequently, the relative error with respect to the actual measured value can become quite large for readings at the lower end of the instrument's scale.
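
For example (a hypothetical instrument; the figures are illustrative only), a voltmeter specified as accurate to ±1% of full scale behaves as follows:

<syntaxhighlight lang="python">
# Hypothetical voltmeter: 100 V full scale, accuracy quoted as +/-1% of
# full scale.  The limiting (guarantee) error is then a fixed 1 V at every
# reading, so the relative error w.r.t. the reading grows as the reading shrinks.
full_scale = 100.0
accuracy = 0.01                          # 1% of full scale
limiting_error = accuracy * full_scale   # 1 V anywhere on the scale

for reading in (100.0, 50.0, 10.0):
    print(f"reading {reading} V: worst-case relative error "
          f"{limiting_error / reading:.0%}")
# reading 100.0 V: worst-case relative error 1%
# reading 50.0 V: worst-case relative error 2%
# reading 10.0 V: worst-case relative error 10%
</syntaxhighlight>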


Generalizations

The fundamental definitions of absolute and relative error, presented above for scalar (one-dimensional) values, extend naturally and rigorously to the case where the quantity of interest ''v'' and its approximation ''v''approx are ''n''-dimensional vectors, matrices, or, more generally, elements of a normed vector space. The generalization is achieved by replacing the absolute value function (which measures magnitude for scalar numbers) with an appropriate vector or matrix norm. Common choices include the L1 norm (the sum of the absolute values of the components), the L2 norm (the Euclidean norm, the square root of the sum of squared components), and the L∞ norm (the maximum absolute component value). These norms quantify the "distance" between the true vector (or matrix) and its approximation, thereby allowing analogous definitions of absolute and relative error in higher-dimensional contexts.
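
For instance, using NumPy (an illustrative sketch with arbitrary vectors), the choice of norm changes both error measures:

<syntaxhighlight lang="python">
import numpy as np

v = np.array([3.0, 4.0, 0.0])          # arbitrary "true" vector
v_approx = np.array([3.1, 3.9, 0.05])  # arbitrary approximation

for name, order in (("L1", 1), ("L2", 2), ("Linf", np.inf)):
    absolute = np.linalg.norm(v - v_approx, ord=order)
    relative = absolute / np.linalg.norm(v, ord=order)
    print(f"{name}: absolute error {absolute:.4f}, relative error {relative:.4f}")
</syntaxhighlight>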


See also

* Accepted and experimental value
* Condition number
* Errors and residuals in statistics
* Experimental uncertainty analysis
* Machine epsilon
* Measurement error
* Measurement uncertainty
* Propagation of uncertainty
* Quantization error
* Relative difference
* Round-off error
* Uncertainty


External links

* "Percentage error". MathWorld.