In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a complex function of a complex variable defined as:
: \operatorname{erf} z = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2} \, dt
This integral is a special (non-elementary) sigmoid function that occurs often in probability, statistics, and partial differential equations. In many of these applications, the function argument is a real number. If the function argument is real, then the function value is also real.
In statistics, for non-negative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and standard deviation 1/√2, erf x is the probability that Y falls in the range [−x, x].
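As a quick numerical illustration of this interpretation, the following sketch (Python, standard library only; the sample count and seed are arbitrary demo choices) compares math.erf with a Monte Carlo estimate of the probability that such a normal variable lands in [−x, x]:

```python
import math
import random

# Sketch: check the probabilistic interpretation numerically.
# For Y normally distributed with mean 0 and standard deviation 1/sqrt(2),
# P(-x <= Y <= x) should equal erf(x).
x = 0.8
sigma = 1 / math.sqrt(2)

random.seed(42)
n = 200_000
hits = sum(-x <= random.gauss(0.0, sigma) <= x for _ in range(n))

print(math.erf(x))  # exact value, ~0.7421
print(hits / n)     # Monte Carlo estimate, close to erf(x)
```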
Two closely related functions are the complementary error function (erfc) defined as
: \operatorname{erfc} z = 1 - \operatorname{erf} z
and the imaginary error function (erfi) defined as
: \operatorname{erfi} z = -i \operatorname{erf}(iz)
where i is the imaginary unit.
Name
The name "error function" and its abbreviation were proposed by
J. W. L. Glaisher
James Whitbread Lee Glaisher FRS FRSE FRAS (5 November 1848, Lewisham – 7 December 1928, Cambridge), son of James Glaisher and Cecilia Glaisher, was a prolific English mathematician and astronomer. His large collection of (mostly) English ce ...
in 1871 on account of its connection with "the theory of Probability, and notably the theory of
Errors."
The error function complement was also discussed by Glaisher in a separate publication in the same year.
For the "law of facility" of errors whose
density is given by
:
(the
normal distribution), Glaisher calculates the probability of an error lying between and as:
:
Applications
When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then erf(a/(σ√2)) is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the
bit error rate of a digital communication system.
The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.
The error function and its approximations can be used to estimate results that hold with high probability or with low probability. Given a random variable X ~ Norm[μ, σ] (a normal distribution with mean μ and standard deviation σ) and a constant L < μ:
: \Pr[X \le L] = \frac{1}{2} + \frac{1}{2} \operatorname{erf} \frac{L - \mu}{\sqrt{2} \sigma} \approx A \exp\left( -B \left( \frac{L - \mu}{\sigma} \right)^2 \right)
where A and B are certain numeric constants. If L is sufficiently far from the mean, specifically μ − L ≥ σ√(ln k), then:
: \Pr[X \le L] \le A \exp(-B \ln k) = \frac{A}{k^B}
so the probability goes to 0 as k → ∞.
The probability for X being in the interval [L_a, L_b] can be derived as
: \Pr[L_a \le X \le L_b] = \frac{1}{2} \left( \operatorname{erf} \frac{L_b - \mu}{\sqrt{2} \sigma} - \operatorname{erf} \frac{L_a - \mu}{\sqrt{2} \sigma} \right)
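A minimal sketch of this computation in Python (standard library only; the helper name normal_interval_prob is illustrative, not from any particular library):

```python
import math

def normal_interval_prob(mu: float, sigma: float, la: float, lb: float) -> float:
    """P(la <= X <= lb) for X ~ Normal(mu, sigma), via the error function."""
    a = (la - mu) / (sigma * math.sqrt(2))
    b = (lb - mu) / (sigma * math.sqrt(2))
    return 0.5 * (math.erf(b) - math.erf(a))

# Example: probability that a measurement error (mean 0, sigma = 1) lies in [-1, 1].
print(normal_interval_prob(0.0, 1.0, -1.0, 1.0))  # ~0.6827, the familiar 68% rule
```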
Properties
The property erf(−z) = −erf z means that the error function is an odd function. This directly results from the fact that the integrand e^{−t²} is an even function (the antiderivative of an even function which is zero at the origin is an odd function, and vice versa).
Since the error function is an entire function which takes real numbers to real numbers, for any complex number z:
: \operatorname{erf} \bar{z} = \overline{\operatorname{erf} z}
where z̄ is the complex conjugate of ''z''.
The integrand exp(−z²) and erf z can be shown in the complex z-plane with domain coloring.
The error function at +∞ is exactly 1 (see Gaussian integral). At the real axis, erf z approaches unity at z → +∞ and −1 at z → −∞. At the imaginary axis, it tends to ±i∞.
Taylor series
The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges, but is famously known "[...] for its bad convergence if x > 1".
The defining integral cannot be evaluated in closed form in terms of elementary functions, but by expanding the integrand e^{−t²} into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as:
: \operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{n! (2n+1)} = \frac{2}{\sqrt{\pi}} \left( z - \frac{z^3}{3} + \frac{z^5}{10} - \frac{z^7}{42} + \frac{z^9}{216} - \cdots \right)
which holds for every complex number z. The denominator terms are sequence A007680 in the OEIS.
For iterative calculation of the above series, the following alternative formulation may be useful:
: \operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \left( z \prod_{k=1}^n \frac{-(2k-1) z^2}{k (2k+1)} \right)
because −(2k − 1)z²/(k(2k + 1)) expresses the multiplier to turn the kth term into the (k + 1)th term (considering z as the first term).
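A short sketch of this iterative scheme in Python, where each loop iteration applies the multiplier above to obtain the next term (the term count of 30 is an arbitrary choice, ample for moderate arguments):

```python
import math

def erf_maclaurin(z: float, n_terms: int = 30) -> float:
    """Maclaurin series for erf, built iteratively: each term is the previous
    one times -(2k-1)*z**2 / (k*(2k+1)), as in the formulation above."""
    term = z      # the k = 0 term
    total = z
    for k in range(1, n_terms):
        term *= -(2 * k - 1) * z * z / (k * (2 * k + 1))
        total += term
    return 2 / math.sqrt(math.pi) * total

print(erf_maclaurin(1.0))  # ~0.8427007929
print(math.erf(1.0))       # reference value
```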
The imaginary error function has a very similar Maclaurin series, which is:
: \operatorname{erfi} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{z^{2n+1}}{n! (2n+1)}
which holds for every complex number z.
Derivative and integral
The derivative of the error function follows immediately from its definition:
: \frac{d}{dz} \operatorname{erf} z = \frac{2}{\sqrt{\pi}} e^{-z^2}
From this, the derivative of the imaginary error function is also immediate:
: \frac{d}{dz} \operatorname{erfi} z = \frac{2}{\sqrt{\pi}} e^{z^2}
An antiderivative of the error function, obtainable by integration by parts, is
: z \operatorname{erf} z + \frac{e^{-z^2}}{\sqrt{\pi}}
An antiderivative of the imaginary error function, also obtainable by integration by parts, is
: z \operatorname{erfi} z - \frac{e^{z^2}}{\sqrt{\pi}}
Higher order derivatives are given by
: \operatorname{erf}^{(k)} z = \frac{2 (-1)^{k-1}}{\sqrt{\pi}} H_{k-1}(z) e^{-z^2} = \frac{2}{\sqrt{\pi}} \frac{d^{k-1}}{dz^{k-1}} \left( e^{-z^2} \right), \qquad k = 1, 2, \dots
where H_k are the physicists' Hermite polynomials.
Bürmann series
An expansion, which converges more rapidly for all real values of x than a Taylor expansion, is obtained by using Hans Heinrich Bürmann's theorem:
: \operatorname{erf} x = \frac{2}{\sqrt{\pi}} \sgn x \cdot \sqrt{1 - e^{-x^2}} \left( \frac{\sqrt{\pi}}{2} + \sum_{k=1}^\infty c_k e^{-k x^2} \right)
where sgn is the sign function. By keeping only the first two coefficients and choosing c₁ = 31/200 and c₂ = −341/8000, the resulting approximation shows its largest relative error at x = ±1.3796, where it is less than 0.0036127:
: \operatorname{erf} x \approx \frac{2}{\sqrt{\pi}} \sgn x \cdot \sqrt{1 - e^{-x^2}} \left( \frac{\sqrt{\pi}}{2} + \frac{31}{200} e^{-x^2} - \frac{341}{8000} e^{-2 x^2} \right)
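As a sketch, the two-coefficient approximation can be coded directly (Python; the coefficients are the 31/200 and −341/8000 quoted above):

```python
import math

def erf_burmann(x: float) -> float:
    """Two-coefficient Buermann-series approximation of erf."""
    e = math.exp(-x * x)
    s = math.copysign(1.0, x)
    return (2 / math.sqrt(math.pi)) * s * math.sqrt(1 - e) * (
        math.sqrt(math.pi) / 2 + (31 / 200) * e - (341 / 8000) * e * e
    )

for x in (0.5, 1.0, 1.3796, 3.0):
    approx, exact = erf_burmann(x), math.erf(x)
    print(f"x={x}: approx={approx:.6f} exact={exact:.6f} "
          f"rel.err={abs(approx - exact) / exact:.2e}")
```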
Inverse functions
Given a complex number z, there is not a ''unique'' complex number w satisfying erf w = z, so a true inverse function would be multivalued. However, for −1 < x < 1, there is a unique ''real'' number denoted erf⁻¹ x satisfying
: \operatorname{erf}\left( \operatorname{erf}^{-1} x \right) = x
The inverse error function is usually defined with domain (−1, 1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series
: \operatorname{erf}^{-1} z = \sum_{k=0}^\infty \frac{c_k}{2k+1} \left( \frac{\sqrt{\pi}}{2} z \right)^{2k+1}
where c₀ = 1 and
: c_k = \sum_{m=0}^{k-1} \frac{c_m c_{k-1-m}}{(m+1)(2m+1)} = \left\{ 1, 1, \frac{7}{6}, \frac{127}{90}, \frac{4369}{2520}, \frac{34807}{16200}, \dots \right\}
So we have the series expansion (common factors have been canceled from numerators and denominators):
: \operatorname{erf}^{-1} z = \frac{\sqrt{\pi}}{2} \left( z + \frac{\pi}{12} z^3 + \frac{7 \pi^2}{480} z^5 + \frac{127 \pi^3}{40320} z^7 + \frac{4369 \pi^4}{5806080} z^9 + \frac{34807 \pi^5}{182476800} z^{11} + \cdots \right)
(After cancellation the numerator/denominator fractions are entries A092676/A092677 in the OEIS; without cancellation the numerator terms are given in entry A002067.) The error function's value at ±∞ is equal to ±1.
For |z| < 1, we have erf(erf⁻¹ z) = z.
The inverse complementary error function is defined as
: \operatorname{erfc}^{-1}(1 - z) = \operatorname{erf}^{-1} z
For real x, there is a unique ''real'' number erfi⁻¹ x satisfying erfi(erfi⁻¹ x) = x. The inverse imaginary error function is defined as erfi⁻¹ x.
For any real x, Newton's method can be used to compute erfi⁻¹ x, and for −1 ≤ x ≤ 1, the following Maclaurin series converges:
: \operatorname{erfi}^{-1} z = \sum_{k=0}^\infty \frac{(-1)^k c_k}{2k+1} \left( \frac{\sqrt{\pi}}{2} z \right)^{2k+1}
where c_k is defined as above.
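The same Newton iteration works for the ordinary inverse error function, since math.erf and the derivative 2/√π · e^{−x²} are available; a minimal sketch (the function name erfinv_newton is illustrative):

```python
import math

def erfinv_newton(y: float, tol: float = 1e-14) -> float:
    """Invert erf by Newton's method: x <- x - (erf(x) - y) / erf'(x),
    with erf'(x) = 2/sqrt(pi) * exp(-x**2).  Valid for -1 < y < 1."""
    x = 0.0
    for _ in range(100):
        err = math.erf(x) - y
        if abs(err) < tol:
            break
        x -= err / (2 / math.sqrt(math.pi) * math.exp(-x * x))
    return x

x = erfinv_newton(0.5)
print(x, math.erf(x))  # erf(x) should come back as ~0.5
```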
Asymptotic expansion
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is
: \operatorname{erfc} x = \frac{e^{-x^2}}{x \sqrt{\pi}} \left( 1 + \sum_{n=1}^\infty (-1)^n \frac{1 \cdot 3 \cdot 5 \cdots (2n-1)}{(2x^2)^n} \right) = \frac{e^{-x^2}}{x \sqrt{\pi}} \sum_{n=0}^\infty (-1)^n \frac{(2n-1)!!}{(2x^2)^n}
where (2n − 1)!! is the double factorial of (2n − 1), which is the product of all odd numbers up to (2n − 1). This series diverges for every finite x, and its meaning as asymptotic expansion is that for any integer N ≥ 1 one has
: \operatorname{erfc} x = \frac{e^{-x^2}}{x \sqrt{\pi}} \sum_{n=0}^{N-1} (-1)^n \frac{(2n-1)!!}{(2x^2)^n} + R_N(x)
where the remainder, in Landau notation, is
: R_N(x) = O\left( x^{1-2N} e^{-x^2} \right)
as x → ∞.
Indeed, the exact value of the remainder is
: R_N(x) := \frac{(-1)^N}{\sqrt{\pi}} 2^{1-2N} \frac{(2N)!}{N!} \int_x^\infty t^{-2N} e^{-t^2} \, dt
which follows easily by induction, writing
: e^{-t^2} = -\left(2t\right)^{-1} \left( e^{-t^2} \right)'
and integrating by parts.
For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of erfc x (while for not too large values of x, the above Taylor expansion at 0 provides a very fast convergence).
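A sketch comparing truncations of this expansion against math.erfc (the update term *= −(2n − 1)/(2x²) generates the (2n − 1)!!/(2x²)ⁿ terms with alternating sign):

```python
import math

def erfc_asymptotic(x: float, n_terms: int) -> float:
    """Truncated asymptotic expansion of erfc for large x."""
    total, term = 1.0, 1.0
    for n in range(1, n_terms):
        term *= -(2 * n - 1) / (2 * x * x)  # next odd factor over 2x^2, with sign
        total += term
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * total

x = 3.0
for n in (1, 2, 4, 8):
    print(n, erfc_asymptotic(x, n), math.erfc(x))  # truncations approach erfc(3)
```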
Continued fraction expansion
A continued fraction expansion of the complementary error function is:
: \operatorname{erfc} z = \frac{z}{\sqrt{\pi}} e^{-z^2} \cfrac{1}{z^2 + \cfrac{a_1}{1 + \cfrac{a_2}{z^2 + \cfrac{a_3}{1 + \dotsb}}}}, \qquad a_m = \frac{m}{2}
Integral of error function with Gaussian density function
: \int_{-\infty}^\infty \operatorname{erf}(ax + b) \frac{1}{\sqrt{2 \pi \sigma^2}} \exp\left( -\frac{(x - \mu)^2}{2 \sigma^2} \right) dx = \operatorname{erf} \frac{a\mu + b}{\sqrt{1 + 2 a^2 \sigma^2}}, \qquad a, b, \mu, \sigma \in \mathbb{R}
which appears related to Ng and Geller, formula 13 in section 4.3 with a change of variables.
Factorial series
The inverse factorial series:
: \operatorname{erfc} z = \frac{e^{-z^2}}{\sqrt{\pi}\, z} \sum_{n=0}^\infty \frac{(-1)^n Q_n}{(z^2 + 1)^{\bar{n}}}
converges for Re(z²) > 0. Here
: Q_n \overset{\text{def}}{=} \frac{1}{\Gamma\left(\frac{1}{2}\right)} \int_0^\infty \tau (\tau - 1) \cdots (\tau - n + 1) \tau^{-\frac{1}{2}} e^{-\tau} \, d\tau = \sum_{k=0}^n \left( \tfrac{1}{2} \right)^{\bar{k}} s(n, k)
(z² + 1)^{n̄} denotes the rising factorial, and s(n, k) denotes a signed Stirling number of the first kind.
There also exists a representation by an infinite sum containing the double factorial:
: \operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-2)^n (2n-1)!!}{(2n+1)!} z^{2n+1}
Numerical approximations
Approximation with elementary functions
- Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25–28). This allows one to choose the fastest approximation suitable for a given application. In order of increasing accuracy, they are:
: \operatorname{erf} x \approx 1 - \frac{1}{\left( 1 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4 \right)^4}
(maximum error: 5 × 10⁻⁴)
where a₁ = 0.278393, a₂ = 0.230389, a₃ = 0.000972, a₄ = 0.078108
: \operatorname{erf} x \approx 1 - \left( a_1 t + a_2 t^2 + a_3 t^3 \right) e^{-x^2}, \qquad t = \frac{1}{1 + px}
(maximum error: 2.5 × 10⁻⁵)
where p = 0.47047, a₁ = 0.3480242, a₂ = −0.0958798, a₃ = 0.7478556
: \operatorname{erf} x \approx 1 - \frac{1}{\left( 1 + a_1 x + a_2 x^2 + \cdots + a_6 x^6 \right)^{16}}
(maximum error: 3 × 10⁻⁷)
where a₁ = 0.0705230784, a₂ = 0.0422820123, a₃ = 0.0092705272, a₄ = 0.0001520143, a₅ = 0.0002765672, a₆ = 0.0000430638
: \operatorname{erf} x \approx 1 - \left( a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5 \right) e^{-x^2}, \qquad t = \frac{1}{1 + px}
(maximum error: 1.5 × 10⁻⁷)
where p = 0.3275911, a₁ = 0.254829592, a₂ = −0.284496736, a₃ = 1.421413741, a₄ = −1.453152027, a₅ = 1.061405429
All of these approximations are valid for x ≥ 0. To use these approximations for negative x, use the fact that erf x is an odd function, so erf x = −erf(−x).
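As an illustration, the last (most accurate) of these approximations can be coded as follows, using the coefficients quoted above and the odd symmetry for negative arguments (a sketch, not a library implementation):

```python
import math

P = 0.3275911
A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)

def erf_as(x: float) -> float:
    """Abramowitz-Stegun-style rational approximation, max error ~1.5e-7."""
    sign = math.copysign(1.0, x)
    x = abs(x)
    t = 1.0 / (1.0 + P * x)
    poly = sum(a * t ** (k + 1) for k, a in enumerate(A))
    return sign * (1.0 - poly * math.exp(-x * x))

for x in (0.5, 1.0, 2.0):
    print(x, erf_as(x), math.erf(x))  # values agree to ~1.5e-7
```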
- Exponential bounds and a pure exponential approximation for the complementary error function are given by
: \operatorname{erfc} x \le \frac{1}{2} e^{-2x^2} + \frac{1}{2} e^{-x^2} \le e^{-x^2}, \qquad x > 0
: \operatorname{erfc} x \approx \frac{1}{6} e^{-x^2} + \frac{1}{2} e^{-\frac{4}{3} x^2}, \qquad x > 0
- The above have been generalized to sums of N exponentials with increasing accuracy in terms of N so that erfc x can be accurately approximated or bounded by 2Q̃(√2 x), where
: \tilde{Q}(x) = \sum_{n=1}^N a_n e^{-b_n x^2}
In particular, there is a systematic methodology to solve the numerical coefficients {(a_n, b_n)} that yield a minimax approximation or bound for the closely related Q-function: Q(x) ≈ Q̃(x), Q(x) ≤ Q̃(x), or Q(x) ≥ Q̃(x) for x ≥ 0. The coefficients {(a_n, b_n)} for many variations of the exponential approximations and bounds up to N = 25 have been released to open access as a comprehensive dataset.
- A tight approximation of the complementary error function for x ∈ [0, ∞) is given by Karagiannidis & Lioumpas (2007), who showed for the appropriate choice of parameters {A, B} that
: \operatorname{erfc} x \approx \frac{\left( 1 - e^{-Ax} \right) e^{-x^2}}{B \sqrt{\pi} x}
They determined {A, B} = {1.98, 1.135}, which gave a good approximation for all x ≥ 0. Alternative coefficients are also available for tailoring accuracy for a specific application or transforming the expression into a tight bound.
- A single-term lower bound is
: \operatorname{erfc} x \ge \sqrt{\frac{2e}{\pi}} \frac{\sqrt{\beta - 1}}{\beta} e^{-\beta x^2}, \qquad x \ge 0, \quad \beta > 1
where the parameter β can be picked to minimize error on the desired interval of approximation.
- Another approximation is given by Sergei Winitzki using his "global Padé approximations":
: \operatorname{erf} x \approx \sgn x \cdot \sqrt{1 - \exp\left( -x^2 \frac{\frac{4}{\pi} + a x^2}{1 + a x^2} \right)}
where
: a = \frac{8 (\pi - 3)}{3 \pi (4 - \pi)} \approx 0.140012
This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the ''relative'' error is less than 0.00035 for all real x. Using the alternate value a ≈ 0.147 reduces the maximum relative error to about 0.00013.
This approximation can be inverted to obtain an approximation for the inverse error function:
: \operatorname{erf}^{-1} x \approx \sgn x \cdot \sqrt{ \sqrt{ \left( \frac{2}{\pi a} + \frac{\ln(1 - x^2)}{2} \right)^2 - \frac{\ln(1 - x^2)}{a} } - \left( \frac{2}{\pi a} + \frac{\ln(1 - x^2)}{2} \right) }
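Both the approximation and its inversion are short to code; a Python sketch using the value a = 8(π − 3)/(3π(4 − π)) from above:

```python
import math

A = 8 * (math.pi - 3) / (3 * math.pi * (4 - math.pi))  # ~0.140012

def erf_winitzki(x: float) -> float:
    """Winitzki's global Pade approximation (relative error < 0.00035)."""
    x2 = x * x
    inner = -x2 * (4 / math.pi + A * x2) / (1 + A * x2)
    return math.copysign(math.sqrt(1 - math.exp(inner)), x)

def erfinv_winitzki(y: float) -> float:
    """Inversion of the approximation above, for -1 < y < 1."""
    l = math.log(1 - y * y)
    c = 2 / (math.pi * A) + l / 2
    return math.copysign(math.sqrt(math.sqrt(c * c - l / A) - c), y)

print(erf_winitzki(1.0), math.erf(1.0))
print(erf_winitzki(erfinv_winitzki(0.75)))  # round trip, ~0.75
```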
- An approximation with a maximal error of 1.2 × 10⁻⁷ for any real argument is:
: \operatorname{erf} x = \begin{cases} 1 - \tau & x \ge 0 \\ \tau - 1 & x < 0 \end{cases}
with
: \tau = t \exp\left( -x^2 - 1.26551223 + 1.00002368 t + 0.37409196 t^2 + 0.09678418 t^3 - 0.18628806 t^4 + 0.27886807 t^5 - 1.13520398 t^6 + 1.48851587 t^7 - 0.82215223 t^8 + 0.17087277 t^9 \right)
and
: t = \frac{1}{1 + \frac{1}{2} |x|}
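A direct transcription of this piecewise approximation into Python (a sketch; the nested Horner form simply evaluates the polynomial in t quoted above):

```python
import math

def erf_approx(x: float) -> float:
    """Piecewise approximation above, maximal error ~1.2e-7."""
    t = 1.0 / (1.0 + 0.5 * abs(x))
    tau = t * math.exp(-x * x - 1.26551223 + t * (1.00002368 + t * (0.37409196
        + t * (0.09678418 + t * (-0.18628806 + t * (0.27886807 + t * (-1.13520398
        + t * (1.48851587 + t * (-0.82215223 + t * 0.17087277)))))))))
    return 1.0 - tau if x >= 0 else tau - 1.0

for x in (-2.0, 0.5, 3.0):
    print(x, erf_approx(x), math.erf(x))  # values agree to ~1.2e-7
```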
Related functions
Complementary error function
The complementary error function, denoted erfc, is defined as
: \operatorname{erfc} x = 1 - \operatorname{erf} x = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2} \, dt = e^{-x^2} \operatorname{erfcx} x
which also defines erfcx, the scaled complementary error function (which can be used instead of erfc to avoid arithmetic underflow). Another form of erfc x for x ≥ 0 is known as Craig's formula, after its discoverer:
: \operatorname{erfc}(x \mid x \ge 0) = \frac{2}{\pi} \int_0^{\pi/2} \exp\left( -\frac{x^2}{\sin^2 \theta} \right) d\theta
This expression is valid only for positive values of x, but it can be used in conjunction with erfc x = 2 − erfc(−x) to obtain erfc(x) for negative values. This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for the erfc of the sum of two non-negative variables is as follows:
: \operatorname{erfc}(x + y \mid x, y \ge 0) = \frac{2}{\pi} \int_0^{\pi/2} \exp\left( -\frac{x^2}{\sin^2 \theta} - \frac{y^2}{\cos^2 \theta} \right) d\theta
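Because the range of integration is fixed and finite, Craig's formula is easy to check with a simple quadrature rule; a sketch (midpoint rule, node count arbitrary):

```python
import math

def erfc_craig(x: float, n: int = 10_000) -> float:
    """Craig's formula for erfc (x >= 0), via a midpoint rule on [0, pi/2]."""
    h = (math.pi / 2) / n
    s = sum(math.exp(-x * x / math.sin((k + 0.5) * h) ** 2) for k in range(n))
    return (2 / math.pi) * s * h

print(erfc_craig(1.0), math.erfc(1.0))  # both ~0.1573
```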
Imaginary error function
The imaginary error function, denoted erfi, is defined as
: \operatorname{erfi} x = -i \operatorname{erf}(ix) = \frac{2}{\sqrt{\pi}} \int_0^x e^{t^2} \, dt = \frac{2}{\sqrt{\pi}} e^{x^2} D(x)
where D(x) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow).
Despite the name "imaginary error function", erfi x is real when x is real.
When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function:
: w(z) = e^{-z^2} \operatorname{erfc}(-iz) = \operatorname{erfcx}(-iz)
Cumulative distribution function
The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ, also named norm(x) by some software languages, as they differ only by scaling and translation. Indeed,
: \Phi(x) = \frac{1}{\sqrt{2\pi}} \int_{-\infty}^x e^{-\frac{t^2}{2}} \, dt = \frac{1}{2} \left( 1 + \operatorname{erf} \frac{x}{\sqrt{2}} \right) = \frac{1}{2} \operatorname{erfc}\left( -\frac{x}{\sqrt{2}} \right)
or rearranged for erf and erfc:
: \operatorname{erf}(x) = 2 \Phi\left( x \sqrt{2} \right) - 1, \qquad \operatorname{erfc}(x) = 2 \Phi\left( -x \sqrt{2} \right) = 2 \left( 1 - \Phi\left( x \sqrt{2} \right) \right)
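This relation can be checked in a couple of lines against Python's statistics.NormalDist:

```python
import math
from statistics import NormalDist

# The standard normal CDF expressed via erf, compared with the
# statistics module's own implementation.
def phi(x: float) -> float:
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

for x in (-1.0, 0.0, 1.96):
    print(x, phi(x), NormalDist().cdf(x))  # the pairs agree
```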
Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as
: Q(x) = \frac{1}{2} - \frac{1}{2} \operatorname{erf} \frac{x}{\sqrt{2}} = \frac{1}{2} \operatorname{erfc} \frac{x}{\sqrt{2}}
The inverse of Φ is known as the normal quantile function, or probit function, and may be expressed in terms of the inverse error function as
: \operatorname{probit}(p) = \Phi^{-1}(p) = \sqrt{2} \operatorname{erf}^{-1}(2p - 1) = -\sqrt{2} \operatorname{erfc}^{-1}(2p)
The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.
The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function):
: \operatorname{erf} x = \frac{2x}{\sqrt{\pi}} M\left( \tfrac{1}{2}, \tfrac{3}{2}, -x^2 \right)
It has a simple expression in terms of the Fresnel integral.
In terms of the regularized gamma function P and the incomplete gamma function,
: \operatorname{erf} x = \operatorname{sgn} x \cdot P\left( \tfrac{1}{2}, x^2 \right) = \frac{\operatorname{sgn} x}{\sqrt{\pi}} \gamma\left( \tfrac{1}{2}, x^2 \right)
where sgn x is the sign function.
Generalized error functions
Some authors discuss the more general functions:
: E_n(x) = \frac{n!}{\sqrt{\pi}} \int_0^x e^{-t^n} \, dt = \frac{n!}{\sqrt{\pi}} \sum_{p=0}^\infty (-1)^p \frac{x^{np+1}}{(np+1) \, p!}
Notable cases are:
* E₀(x) is a straight line through the origin: E₀(x) = x/(e√π)
* E₂(x) is the error function, erf x.
After division by n!, all the E_n for odd n look similar (but not identical) to each other. Similarly, the E_n for even n > 0 look similar (but not identical) to each other after a simple division by n!. All generalised error functions for n > 0 look similar on the positive x side of the graph.
These generalised functions can equivalently be expressed for x > 0 using the gamma function and incomplete gamma function:
: E_n(x) = \frac{n!}{\sqrt{\pi}\, n} \left( \Gamma\left( \frac{1}{n} \right) - \Gamma\left( \frac{1}{n}, x^n \right) \right), \qquad x > 0
Therefore, we can define the error function in terms of the incomplete gamma function:
: \operatorname{erf} x = 1 - \frac{\Gamma\left( \frac{1}{2}, x^2 \right)}{\sqrt{\pi}}
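Assuming SciPy is available, this identity is easy to verify: since Γ(½) = √π, the right-hand side equals 1 − Q(½, x²), where Q = scipy.special.gammaincc is the regularized upper incomplete gamma function:

```python
import math
from scipy.special import gammaincc  # regularized upper incomplete gamma Q(a, x)

# erf x = 1 - Gamma(1/2, x^2)/sqrt(pi) = 1 - Q(1/2, x^2), for x >= 0.
for x in (0.5, 1.0, 2.0):
    print(x, 1 - gammaincc(0.5, x * x), math.erf(x))  # the pairs agree
```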
Iterated integrals of the complementary error function
The iterated integrals of the complementary error function are defined by
: i^n \operatorname{erfc} z = \int_z^\infty i^{n-1} \operatorname{erfc} \zeta \, d\zeta, \qquad i^0 \operatorname{erfc} z = \operatorname{erfc} z
The general recurrence formula is
: 2n \cdot i^n \operatorname{erfc} z = i^{n-2} \operatorname{erfc} z - 2z \cdot i^{n-1} \operatorname{erfc} z
They have the power series
: i^n \operatorname{erfc} z = \sum_{j=0}^\infty \frac{(-z)^j}{2^{n-j} \, j! \, \Gamma\left( 1 + \frac{n-j}{2} \right)}
from which follow the symmetry properties
: i^{2m} \operatorname{erfc}(-z) = -i^{2m} \operatorname{erfc} z + \sum_{q=0}^m \frac{z^{2q}}{2^{2(m-q)-1} \, (2q)! \, (m-q)!}
and
: i^{2m+1} \operatorname{erfc}(-z) = i^{2m+1} \operatorname{erfc} z + \sum_{q=0}^m \frac{z^{2q+1}}{2^{2(m-q)-1} \, (2q+1)! \, (m-q)!}
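A sketch evaluating the power series and checking the recurrence numerically (terms where 1 + (n − j)/2 hits a pole of Γ are skipped, since 1/Γ vanishes there):

```python
import math

def inerfc(n: int, z: float, terms: int = 60) -> float:
    """Power series for the iterated integrals i^n erfc z quoted above."""
    total = 0.0
    for j in range(terms):
        a = 1 + (n - j) / 2
        if a <= 0 and a == int(a):
            continue  # 1/Gamma is zero at the poles, so the term vanishes
        total += (-z) ** j / (2 ** (n - j) * math.factorial(j) * math.gamma(a))
    return total

z = 0.3
print(inerfc(0, z), math.erfc(z))  # i^0 erfc reduces to erfc
n = 2
lhs = 2 * n * inerfc(n, z)
rhs = inerfc(n - 2, z) - 2 * z * inerfc(n - 1, z)
print(lhs, rhs)  # recurrence check: the two values agree
```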
Implementations
As real function of a real argument
* In POSIX-compliant operating systems, the header math.h shall declare and the mathematical library libm shall provide the functions erf and erfc (double precision) as well as their single precision and extended precision counterparts erff, erfl and erfcf, erfcl.
* The GNU Scientific Library provides erf, erfc, log(erf), and scaled error functions.
As complex function of a complex argument
* libcerf, a numeric C library for complex error functions, provides the complex functions cerf, cerfc, cerfcx and the real functions erfi, erfcx with approximately 13–14 digits precision, based on the Faddeeva function as implemented in the MIT Faddeeva Package.
See also
Related functions
* Gaussian integral, over the whole real line
* Gaussian function, derivative
* Dawson function, renormalized imaginary error function
* Goodwin–Staton integral
In probability
* Normal distribution
* Normal cumulative distribution function, a scaled and shifted form of error function
* Probit, the inverse or quantile function of the normal CDF
* Q-function, the tail probability of the normal distribution
References
External links
A Table of Integrals of the Error Functions