In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a complex function of a complex variable defined as:
:\operatorname{erf} z = \frac{2}{\sqrt{\pi}}\int_0^z e^{-t^2}\,dt
This integral is a special (non-elementary) sigmoid function that occurs often in probability, statistics, and partial differential equations. In many of these applications, the function argument is a real number. If the function argument is real, then the function value is also real.
In statistics, for non-negative values of x, the error function has the following interpretation: for a random variable Y that is normally distributed with mean 0 and standard deviation 1/√2, erf x is the probability that Y falls in the range [−x, x].
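This interpretation is easy to check numerically; the sketch below uses only Python's standard library, with NormalDist modelling the variable (the choice of test points is arbitrary):

```python
import math
from statistics import NormalDist

# Y ~ Normal(mean 0, standard deviation 1/sqrt(2))
Y = NormalDist(mu=0.0, sigma=1 / math.sqrt(2))

for x in (0.5, 1.0, 2.0):
    prob = Y.cdf(x) - Y.cdf(-x)   # P(-x <= Y <= x)
    assert abs(prob - math.erf(x)) < 1e-12
```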
Two closely related functions are the complementary error function (erfc) defined as
:\operatorname{erfc} z = 1 - \operatorname{erf} z
and the imaginary error function (erfi) defined as
:\operatorname{erfi} z = -i\operatorname{erf}(iz)
where i is the imaginary unit.
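Python's standard math module provides erf and erfc directly; erfi is not in the standard library, but for real arguments it can be summed from its Maclaurin series (given later in this article). A minimal sketch; the 40-term cutoff is an arbitrary choice that is ample for moderate arguments:

```python
import math

# erfc is related to erf by erfc(x) = 1 - erf(x)
assert abs(math.erfc(1.25) - (1 - math.erf(1.25))) < 1e-12

def erfi(x, terms=40):
    """erfi for real x via its Maclaurin series:
    erfi(x) = 2/sqrt(pi) * sum_n x^(2n+1) / (n! (2n+1))."""
    total = 0.0
    for n in range(terms):
        total += x ** (2 * n + 1) / (math.factorial(n) * (2 * n + 1))
    return 2 / math.sqrt(math.pi) * total

# erfi is an odd function, like erf
assert abs(erfi(-0.7) + erfi(0.7)) < 1e-12
```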
Name
The name "error function" and its abbreviation erf were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors." The error function complement was also discussed by Glaisher in a separate publication in the same year.
For the "law of facility" of errors whose density is given by
:f(x) = \left(\frac{c}{\pi}\right)^{\frac12} e^{-cx^2}
(the normal distribution), Glaisher calculates the probability of an error lying between p and q as:
:\left(\frac{c}{\pi}\right)^{\frac12}\int_p^q e^{-cx^2}\,dx = \frac12\left(\operatorname{erf}\left(q\sqrt{c}\right) - \operatorname{erf}\left(p\sqrt{c}\right)\right)
Applications
When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then erf(a/(σ√2)) is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.
The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.
The error function and its approximations can be used to estimate results that hold with high probability or with low probability. Given a random variable X ~ Norm[μ, σ] (a normal distribution with mean μ and standard deviation σ) and a constant L < μ:
:\Pr[X \le L] = \frac12\operatorname{erfc}\left(\frac{\mu - L}{\sqrt{2}\,\sigma}\right) \le A e^{-B\left(\frac{\mu - L}{\sigma}\right)^2}
where A and B are certain numeric constants. If L is sufficiently far from the mean, specifically μ − L ≥ σ√(ln k), then:
:\Pr[X \le L] \le A e^{-B\ln k} = \frac{A}{k^B}
so the probability goes to 0 as k → ∞.
The probability for X being in the interval [L_a, L_b] can be derived as
:\Pr[L_a \le X \le L_b] = \frac12\left(\operatorname{erf}\left(\frac{L_b - \mu}{\sqrt{2}\,\sigma}\right) - \operatorname{erf}\left(\frac{L_a - \mu}{\sqrt{2}\,\sigma}\right)\right)
Properties
The property erf(−z) = −erf z means that the error function is an odd function. This directly results from the fact that the integrand e^{−t²} is an even function (the antiderivative of an even function which is zero at the origin is an odd function and vice versa).
Since the error function is an entire function which takes real numbers to real numbers, for any complex number z:
:\operatorname{erf} \bar{z} = \overline{\operatorname{erf} z}
where \bar{z} is the complex conjugate of z.
The integrand f = exp(−z²) and f = erf z are shown in the complex z-plane in the figures at right with domain coloring.
The error function at +∞ is exactly 1 (see Gaussian integral). At the real axis, erf z approaches unity at z → +∞ and −1 at z → −∞. At the imaginary axis, it tends to ±i∞.
Taylor series
The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges, but is famously known "for its bad convergence if x > 1".
The defining integral cannot be evaluated in closed form in terms of elementary functions, but by expanding the integrand e^{−t²} into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as:
:\operatorname{erf} z = \frac{2}{\sqrt{\pi}}\sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{n!\,(2n+1)} = \frac{2}{\sqrt{\pi}}\left(z - \frac{z^3}{3} + \frac{z^5}{10} - \frac{z^7}{42} + \frac{z^9}{216} - \cdots\right)
which holds for every complex number z. The denominator terms are sequence A007680 in the OEIS.
For iterative calculation of the above series, the following alternative formulation may be useful:
:\operatorname{erf} z = \frac{2}{\sqrt{\pi}}\sum_{n=0}^\infty \left(z\prod_{k=1}^{n}\frac{-(2k-1)\,z^2}{k\,(2k+1)}\right)
because −(2k−1)z²/(k(2k+1)) expresses the multiplier to turn the kth term into the (k+1)th term (considering z as the first term).
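As a sketch, the multiplier form translates into a short accumulation loop; the 30-term cutoff is an arbitrary choice that is more than enough for moderate |z|:

```python
import math

def erf_series(z, terms=30):
    """Maclaurin series of erf, accumulated via the term-to-term multiplier."""
    term = z        # first term of the sum
    total = z
    for k in range(1, terms):
        # multiplier turning the (k-1)-th term into the k-th term
        term *= -z * z * (2 * k - 1) / (k * (2 * k + 1))
        total += term
    return 2 / math.sqrt(math.pi) * total

assert abs(erf_series(1.0) - math.erf(1.0)) < 1e-12
```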
The imaginary error function has a very similar Maclaurin series, which is:
:\operatorname{erfi} z = \frac{2}{\sqrt{\pi}}\sum_{n=0}^\infty \frac{z^{2n+1}}{n!\,(2n+1)}
which holds for every complex number z.
Derivative and integral
The derivative of the error function follows immediately from its definition:
:\frac{d}{dz}\operatorname{erf} z = \frac{2}{\sqrt{\pi}} e^{-z^2}
From this, the derivative of the imaginary error function is also immediate:
:\frac{d}{dz}\operatorname{erfi} z = \frac{2}{\sqrt{\pi}} e^{z^2}
An antiderivative of the error function, obtainable by integration by parts, is
:z\operatorname{erf} z + \frac{e^{-z^2}}{\sqrt{\pi}}
An antiderivative of the imaginary error function, also obtainable by integration by parts, is
:z\operatorname{erfi} z - \frac{e^{z^2}}{\sqrt{\pi}}
Higher order derivatives are given by
:\operatorname{erf}^{(k)} z = \frac{2(-1)^{k-1}}{\sqrt{\pi}} H_{k-1}(z)\, e^{-z^2} = \frac{2}{\sqrt{\pi}}\frac{d^{k-1}}{dz^{k-1}}\left(e^{-z^2}\right), \qquad k = 1, 2, \dots
where H_k are the physicists' Hermite polynomials.
Bürmann series
An expansion, which converges more rapidly for all real values of x than a Taylor expansion, is obtained by using Hans Heinrich Bürmann's theorem:
:
where sgn is the sign function. By keeping only the first two coefficients and choosing c_1 = \frac{31}{200} and c_2 = -\frac{341}{8000}, the resulting approximation shows its largest relative error at x = ±1.3796, where it is less than 0.0036127:
:\operatorname{erf} x \approx \frac{2}{\sqrt{\pi}}\operatorname{sgn} x \cdot \sqrt{1 - e^{-x^2}}\left(\frac{\sqrt{\pi}}{2} + \frac{31}{200} e^{-x^2} - \frac{341}{8000} e^{-2x^2}\right)
Inverse functions

Given a complex number z, there is not a ''unique'' complex number w satisfying erf w = z, so a true inverse function would be multivalued. However, for −1 < x < 1, there is a unique ''real'' number denoted erf⁻¹ x satisfying
:\operatorname{erf}\left(\operatorname{erf}^{-1} x\right) = x
The inverse error function is usually defined with domain (−1, 1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series
:\operatorname{erf}^{-1} z = \sum_{k=0}^\infty \frac{c_k}{2k+1}\left(\frac{\sqrt{\pi}}{2} z\right)^{2k+1}
where c_0 = 1 and
:c_k = \sum_{m=0}^{k-1} \frac{c_m c_{k-1-m}}{(m+1)(2m+1)} = \left\{1, 1, \frac{7}{6}, \frac{127}{90}, \frac{4369}{2520}, \frac{34807}{16200}, \dots\right\}
So we have the series expansion (common factors have been canceled from numerators and denominators):
:\operatorname{erf}^{-1} z = \frac{\sqrt{\pi}}{2}\left(z + \frac{\pi}{12} z^3 + \frac{7\pi^2}{480} z^5 + \frac{127\pi^3}{40320} z^7 + \frac{4369\pi^4}{5806080} z^9 + \frac{34807\pi^5}{182476800} z^{11} + \cdots\right)
(After cancellation the numerator/denominator fractions are entries A092676/A092677 in the OEIS; without cancellation the numerator terms are given in entry A002067.) The error function's value at ±∞ is equal to ±1.
For |z| < 1, we have erf(erf⁻¹ z) = z.
The inverse complementary error function is defined as
:\operatorname{erfc}^{-1}(1 - z) = \operatorname{erf}^{-1} z
For real x, there is a unique ''real'' number erfi⁻¹ x satisfying erfi(erfi⁻¹ x) = x. The inverse imaginary error function is defined as erfi⁻¹ x.
For any real ''x'', Newton's method can be used to compute erfi⁻¹ x, and for −1 ≤ x ≤ 1, the following Maclaurin series converges:
:\operatorname{erfi}^{-1} z = \sum_{k=0}^\infty \frac{(-1)^k c_k}{2k+1}\left(\frac{\sqrt{\pi}}{2} z\right)^{2k+1}
where c_k is defined as above.
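Newton's method applies equally to the ordinary inverse error function, where Python's math.erf supplies the function and its derivative is (2/√π)e^{−x²}; the same iteration works for erfi⁻¹ with erfi substituted. A sketch, with an arbitrary iteration cap and tolerance:

```python
import math

def erfinv(y, tol=1e-14):
    """Invert erf on (-1, 1) by Newton's method.

    Iteration: x <- x - (erf(x) - y) / erf'(x),
    with erf'(x) = 2/sqrt(pi) * exp(-x**2).
    """
    if not -1 < y < 1:
        raise ValueError("erfinv is real only on (-1, 1)")
    x = 0.0
    for _ in range(60):
        step = (math.erf(x) - y) * math.sqrt(math.pi) / 2 * math.exp(x * x)
        x -= step
        if abs(step) < tol:
            break
    return x

assert abs(math.erf(erfinv(0.75)) - 0.75) < 1e-12
```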
Asymptotic expansion
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is
:\operatorname{erfc} x = \frac{e^{-x^2}}{x\sqrt{\pi}}\left(1 + \sum_{n=1}^\infty (-1)^n \frac{(2n-1)!!}{(2x^2)^n}\right)
where (2n − 1)!! is the double factorial of (2n − 1), which is the product of all odd numbers up to (2n − 1). This series diverges for every finite x, and its meaning as asymptotic expansion is that for any integer N ≥ 1 one has
:\operatorname{erfc} x = \frac{e^{-x^2}}{x\sqrt{\pi}}\sum_{n=0}^{N-1} (-1)^n \frac{(2n-1)!!}{(2x^2)^n} + R_N(x)
where the remainder, in Landau notation, is
:R_N(x) = O\left(x^{1-2N} e^{-x^2}\right)
as x → ∞.
Indeed, the exact value of the remainder is
:R_N(x) := \frac{(-1)^N}{\sqrt{\pi}}\, 2^{1-2N}\, \frac{(2N)!}{N!}\int_x^\infty t^{-2N} e^{-t^2}\,dt
which follows easily by induction, writing
:e^{-t^2} = -\frac{1}{2t}\frac{d}{dt}\left(e^{-t^2}\right)
and integrating by parts.
For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of erfc x (while for not too large values of x, the above Taylor expansion at 0 provides a very fast convergence).
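A sketch of the truncated expansion; the choices of x = 4 and four terms are illustrative only:

```python
import math

def erfc_asymptotic(x, n_terms=4):
    """Leading terms of the large-x asymptotic expansion:
    erfc(x) ~ exp(-x^2)/(x sqrt(pi)) * sum_n (-1)^n (2n-1)!!/(2x^2)^n."""
    s = 0.0
    term = 1.0                           # n = 0 term ((-1)!! is the empty product)
    for n in range(n_terms):
        s += term
        term *= -(2 * n + 1) / (2 * x * x)   # ratio of consecutive terms
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * s

x = 4.0
rel_err = abs(erfc_asymptotic(x) - math.erfc(x)) / math.erfc(x)
assert rel_err < 1e-3   # already quite accurate at x = 4
```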
Continued fraction expansion
A continued fraction expansion of the complementary error function is:
:\operatorname{erfc} z = \frac{z}{\sqrt{\pi}}\, e^{-z^2} \cfrac{1}{z^2 + \cfrac{a_1}{1 + \cfrac{a_2}{z^2 + \cfrac{a_3}{1 + \dotsb}}}}, \qquad a_m = \frac{m}{2}
Integral of error function with Gaussian density function
:\int_{-\infty}^\infty \operatorname{erf}(ax + b)\,\frac{1}{\sqrt{2\pi\sigma^2}} e^{-\frac{(x-\mu)^2}{2\sigma^2}}\,dx = \operatorname{erf}\left(\frac{a\mu + b}{\sqrt{1 + 2a^2\sigma^2}}\right), \qquad a, b, \mu, \sigma \in \mathbb{R}
which appears related to Ng and Geller, formula 13 in section 4.3, with a change of variables.
Factorial series
The inverse factorial series:
:\operatorname{erfc} z = \frac{e^{-z^2}}{\sqrt{\pi}\,z}\sum_{n=0}^\infty \frac{(-1)^n Q_n}{(z^2+1)^{\bar{n}}}
converges for Re(z²) > 0. Here
:Q_n \overset{\text{def}}{=} \frac{1}{\Gamma\left(\frac12\right)}\int_0^\infty \tau(\tau-1)\cdots(\tau-n+1)\,\tau^{-\frac12} e^{-\tau}\,d\tau = \sum_{k=0}^n \left(\frac12\right)^{\bar{k}} s(n, k),
z^{\bar{n}} denotes the rising factorial, and s(n, k) denotes a signed Stirling number of the first kind.
There also exists a representation by an infinite sum containing the double factorial:
:\operatorname{erf} x = \frac{2}{\sqrt{\pi}}\, e^{-x^2} \sum_{n=0}^\infty \frac{2^n x^{2n+1}}{(2n+1)!!}
Numerical approximations
Approximation with elementary functions
- Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25–28). This allows one to choose the fastest approximation suitable for a given application. In order of increasing accuracy, they are:
:\operatorname{erf} x \approx 1 - \frac{1}{\left(1 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4\right)^4}, \qquad x \ge 0
(maximum error: 5×10⁻⁴)
where a_1 = 0.278393, a_2 = 0.230389, a_3 = 0.000972, a_4 = 0.078108
:\operatorname{erf} x \approx 1 - \left(a_1 t + a_2 t^2 + a_3 t^3\right) e^{-x^2}, \qquad t = \frac{1}{1 + px}, \quad x \ge 0
(maximum error: 2.5×10⁻⁵)
where p = 0.47047, a_1 = 0.3480242, a_2 = −0.0958798, a_3 = 0.7478556
:\operatorname{erf} x \approx 1 - \frac{1}{\left(1 + a_1 x + a_2 x^2 + \cdots + a_6 x^6\right)^{16}}, \qquad x \ge 0
(maximum error: 3×10⁻⁷)
where a_1 = 0.0705230784, a_2 = 0.0422820123, a_3 = 0.0092705272, a_4 = 0.0001520143, a_5 = 0.0002765672, a_6 = 0.0000430638
:\operatorname{erf} x \approx 1 - \left(a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5\right) e^{-x^2}, \qquad t = \frac{1}{1 + px}, \quad x \ge 0
(maximum error: 1.5×10⁻⁷)
where p = 0.3275911, a_1 = 0.254829592, a_2 = −0.284496736, a_3 = 1.421413741, a_4 = −1.453152027, a_5 = 1.061405429
All of these approximations are valid for x ≥ 0. To use these approximations for negative x, use the fact that erf x is an odd function, so erf x = −erf(−x).
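As an illustration, the simplest of the Abramowitz and Stegun approximations (equation 7.1.26, a rational fourth-power form with four coefficients, maximum error about 5×10⁻⁴) can be coded directly; the coefficients below are the standard published values, and oddness handles negative arguments:

```python
import math

# Abramowitz & Stegun 7.1.26 coefficients (max error about 5e-4 for x >= 0)
A1, A2, A3, A4 = 0.278393, 0.230389, 0.000972, 0.078108

def erf_as(x):
    """A&S 7.1.26 rational approximation; extended to x < 0 by oddness."""
    s = 1.0 if x >= 0 else -1.0
    x = abs(x)
    denom = (1 + A1 * x + A2 * x ** 2 + A3 * x ** 3 + A4 * x ** 4) ** 4
    return s * (1 - 1 / denom)

# check the stated error bound on a grid of points
worst = max(abs(erf_as(k / 100) - math.erf(k / 100)) for k in range(500))
assert worst < 1e-3
```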
- Exponential bounds and a pure exponential approximation for the complementary error function are given by
:\operatorname{erfc} x \le \frac12 e^{-2x^2} + \frac12 e^{-x^2} \le e^{-x^2}, \qquad x > 0
:\operatorname{erfc} x \approx \frac16 e^{-x^2} + \frac12 e^{-\frac43 x^2}, \qquad x > 0
- The above have been generalized to sums of N exponentials with increasing accuracy in terms of N so that erfc x can be accurately approximated or bounded by 2\tilde{Q}(\sqrt{2}x), where
:\tilde{Q}(x) = \sum_{n=1}^N a_n e^{-b_n x^2}
In particular, there is a systematic methodology to solve the numerical coefficients (a_n, b_n) that yield a minimax approximation or bound for the closely related Q-function: Q(x) \approx \tilde{Q}(x), Q(x) \le \tilde{Q}(x), or Q(x) \ge \tilde{Q}(x) for x ≥ 0. The coefficients for many variations of the exponential approximations and bounds up to N = 25 have been released to open access as a comprehensive dataset.
- A tight approximation of the complementary error function for x ∈ [0, ∞) is given by Karagiannidis & Lioumpas (2007), who showed for the appropriate choice of parameters (A, B) that
:\operatorname{erfc} x \approx \frac{\left(1 - e^{-Ax}\right) e^{-x^2}}{B\sqrt{\pi}\,x}
They determined (A, B) = (1.98, 1.135), which gave a good approximation for all x ≥ 0. Alternative coefficients are also available for tailoring accuracy for a specific application or transforming the expression into a tight bound.
- A single-term lower bound is
:\operatorname{erfc} x \ge \sqrt{\frac{2e}{\pi}}\,\frac{\sqrt{\beta - 1}}{\beta}\, e^{-\beta x^2}, \qquad x \ge 0,\ \beta > 1,
where the parameter β can be picked to minimize error on the desired interval of approximation.
- Another approximation is given by Sergei Winitzki using his "global Padé approximations":
:\operatorname{erf} x \approx \operatorname{sgn} x \cdot \sqrt{1 - \exp\left(-x^2\,\frac{\frac{4}{\pi} + ax^2}{1 + ax^2}\right)}
where
:a = \frac{8(\pi - 3)}{3\pi(4 - \pi)} \approx 0.140012
This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the ''relative'' error is less than 0.00035 for all real x. Using the alternate value a ≈ 0.147 reduces the maximum relative error to about 0.00013.
This approximation can be inverted to obtain an approximation for the inverse error function:
:\operatorname{erf}^{-1} x \approx \operatorname{sgn} x \cdot \sqrt{\sqrt{\left(\frac{2}{\pi a} + \frac{\ln(1 - x^2)}{2}\right)^2 - \frac{\ln(1 - x^2)}{a}} - \left(\frac{2}{\pi a} + \frac{\ln(1 - x^2)}{2}\right)}
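Winitzki's approximation erf x ≈ sgn x · √(1 − exp(−x²(4/π + ax²)/(1 + ax²))), with a = 8(π − 3)/(3π(4 − π)), and its algebraic inversion are both short to implement; a sketch:

```python
import math

A = 8 * (math.pi - 3) / (3 * math.pi * (4 - math.pi))   # ~0.140012

def erf_winitzki(x):
    """Winitzki's global Pade-type approximation to erf."""
    s = math.copysign(1.0, x)
    x2 = x * x
    return s * math.sqrt(1 - math.exp(-x2 * (4 / math.pi + A * x2) / (1 + A * x2)))

def erfinv_winitzki(y):
    """Exact algebraic inversion of the same approximation."""
    s = math.copysign(1.0, y)
    ln1 = math.log(1 - y * y)
    t = 2 / (math.pi * A) + ln1 / 2
    return s * math.sqrt(math.sqrt(t * t - ln1 / A) - t)

# relative error stays within the stated ~0.00035 bound
for k in range(1, 300):
    x = k / 100
    assert abs(erf_winitzki(x) - math.erf(x)) / math.erf(x) < 5e-4

# the inversion undoes the forward approximation (up to rounding)
assert abs(erfinv_winitzki(erf_winitzki(0.5)) - 0.5) < 1e-9
```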
- An approximation with a maximal error of 1.2×10⁻⁷ for any real argument is:
:\operatorname{erf} x = \begin{cases} 1 - \tau & x \ge 0 \\ \tau - 1 & x < 0 \end{cases}
with
:\tau = t\exp\left(-x^2 - 1.26551223 + 1.00002368t + 0.37409196t^2 + 0.09678418t^3 - 0.18628806t^4 + 0.27886807t^5 - 1.13520398t^6 + 1.48851587t^7 - 0.82215223t^8 + 0.17087277t^9\right)
and
:t = \frac{1}{1 + \frac12|x|}
Table of values
Related functions
Complementary error function
The complementary error function, denoted erfc, is defined as
:\operatorname{erfc} x = 1 - \operatorname{erf} x = \frac{2}{\sqrt{\pi}}\int_x^\infty e^{-t^2}\,dt = e^{-x^2}\operatorname{erfcx} x
which also defines erfcx, the scaled complementary error function (which can be used instead of erfc to avoid arithmetic underflow). Another form of erfc x for x ≥ 0 is known as Craig's formula, after its discoverer:
:\operatorname{erfc}(x \mid x \ge 0) = \frac{2}{\pi}\int_0^{\pi/2}\exp\left(-\frac{x^2}{\sin^2\theta}\right)d\theta
This expression is valid only for positive values of x, but it can be used in conjunction with erfc x = 2 − erfc(−x) to obtain erfc x for negative values. This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for the erfc of the sum of two non-negative variables is as follows:
:\operatorname{erfc}(x + y \mid x, y \ge 0) = \frac{2}{\pi}\int_0^{\pi/2}\exp\left(-\frac{x^2}{\sin^2\theta} - \frac{y^2}{\cos^2\theta}\right)d\theta
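Craig's formula erfc(x) = (2/π)∫₀^{π/2} exp(−x²/sin²θ) dθ (for x ≥ 0) is convenient numerically precisely because the integration range is finite; a sketch that checks it against math.erfc using a simple midpoint rule (the 4000-point resolution is an arbitrary choice):

```python
import math

def erfc_craig(x, n=4000):
    """Craig's formula for erfc(x), x >= 0, by midpoint-rule quadrature."""
    h = (math.pi / 2) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        total += math.exp(-x * x / math.sin(theta) ** 2)
    return 2 / math.pi * total * h

assert abs(erfc_craig(1.0) - math.erfc(1.0)) < 1e-6
```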
Imaginary error function
The imaginary error function, denoted erfi, is defined as
:\operatorname{erfi} x = -i\operatorname{erf}(ix) = \frac{2}{\sqrt{\pi}}\int_0^x e^{t^2}\,dt = \frac{2}{\sqrt{\pi}}\, e^{x^2} D(x)
where D(x) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow).
Despite the name "imaginary error function", erfi x is real when x is real.
When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function:
:w(z) = e^{-z^2}\operatorname{erfc}(-iz) = \operatorname{erfcx}(-iz)
Cumulative distribution function
The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ, also named norm(x) by some software languages, as they differ only by scaling and translation. Indeed,
:\Phi(x) = \frac12\left(1 + \operatorname{erf}\frac{x}{\sqrt{2}}\right) = \frac12\operatorname{erfc}\left(-\frac{x}{\sqrt{2}}\right)
or rearranged for erf and erfc:
:\operatorname{erf} x = 2\Phi\left(x\sqrt{2}\right) - 1, \qquad \operatorname{erfc} x = 2\Phi\left(-x\sqrt{2}\right) = 2\left(1 - \Phi\left(x\sqrt{2}\right)\right)
Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as
:Q(x) = \frac12 - \frac12\operatorname{erf}\frac{x}{\sqrt{2}} = \frac12\operatorname{erfc}\frac{x}{\sqrt{2}}
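These scaling relations can be verified against Python's standard-library normal distribution:

```python
import math
from statistics import NormalDist

std_normal = NormalDist()  # standard normal: mean 0, sigma 1

for x in (-1.5, 0.0, 0.7, 2.0):
    phi = 0.5 * (1 + math.erf(x / math.sqrt(2)))   # Phi(x) via erf
    q = 0.5 * math.erfc(x / math.sqrt(2))          # Q(x) via erfc
    assert abs(phi - std_normal.cdf(x)) < 1e-12
    assert abs(q - (1 - std_normal.cdf(x))) < 1e-12
```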
The inverse of Φ is known as the normal quantile function, or probit function, and may be expressed in terms of the inverse error function as
:\operatorname{probit}(p) = \Phi^{-1}(p) = \sqrt{2}\operatorname{erf}^{-1}(2p - 1) = -\sqrt{2}\operatorname{erfc}^{-1}(2p)
The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.
The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function):
:\operatorname{erf} x = \frac{2x}{\sqrt{\pi}}\, M\!\left(\frac12, \frac32, -x^2\right)
It has a simple expression in terms of the Fresnel integral.
In terms of the regularized gamma function P and the incomplete gamma function,
:\operatorname{erf} x = \operatorname{sgn} x \cdot P\!\left(\frac12, x^2\right) = \frac{\operatorname{sgn} x}{\sqrt{\pi}}\,\gamma\!\left(\frac12, x^2\right)
where sgn x is the sign function.
Generalized error functions
Some authors discuss the more general functions:
:E_n(x) = \frac{n!}{\sqrt{\pi}}\int_0^x e^{-t^n}\,dt = \frac{n!}{\sqrt{\pi}}\sum_{p=0}^\infty \frac{(-1)^p x^{np+1}}{(np+1)\,p!}
Notable cases are:
* E_0(x) is a straight line through the origin: E_0(x) = \frac{x}{e\sqrt{\pi}}
* E_2(x) is the error function, erf x.
After division by n!, all the E_n for odd n look similar (but not identical) to each other. Similarly, the E_n for even n look similar (but not identical) to each other after a simple division by n!. All generalised error functions for n > 0 look similar on the positive x side of the graph.
These generalised functions can equivalently be expressed for x > 0 using the gamma function and incomplete gamma function:
:E_n(x) = \frac{n!}{\sqrt{\pi}\,n}\left(\Gamma\!\left(\frac1n\right) - \Gamma\!\left(\frac1n, x^n\right)\right), \qquad x > 0
Therefore, we can define the error function in terms of the incomplete gamma function:
:\operatorname{erf} x = 1 - \frac{1}{\sqrt{\pi}}\,\Gamma\!\left(\frac12, x^2\right)
Iterated integrals of the complementary error function
The iterated integrals of the complementary error function are defined by
:i^n\operatorname{erfc} z = \int_z^\infty i^{n-1}\operatorname{erfc}\zeta\,d\zeta, \qquad i^0\operatorname{erfc} z = \operatorname{erfc} z, \qquad i^{-1}\operatorname{erfc} z = \frac{2}{\sqrt{\pi}} e^{-z^2}
The general recurrence formula is
:2n \cdot i^n\operatorname{erfc} z = i^{n-2}\operatorname{erfc} z - 2z \cdot i^{n-1}\operatorname{erfc} z
They have the power series
:i^n\operatorname{erfc} z = \sum_{j=0}^\infty \frac{(-z)^j}{2^{n-j}\,j!\,\Gamma\!\left(1 + \frac{n-j}{2}\right)}
from which follow the symmetry properties
:i^{2m}\operatorname{erfc}(-z) = -i^{2m}\operatorname{erfc} z + \sum_{q=0}^m \frac{z^{2q}}{2^{2(m-q)-1}\,(2q)!\,(m-q)!}
and
:i^{2m+1}\operatorname{erfc}(-z) = i^{2m+1}\operatorname{erfc} z + \sum_{q=0}^m \frac{z^{2q+1}}{2^{2(m-q)-1}\,(2q+1)!\,(m-q)!}
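The recurrence 2n·iⁿerfc z = i^{n−2}erfc z − 2z·i^{n−1}erfc z, seeded with i⁰erfc = erfc and i⁻¹erfc z = (2/√π)e^{−z²}, gives a direct way to evaluate iⁿerfc for real arguments; a sketch:

```python
import math

def ierfc(n, z):
    """Iterated integral i^n erfc(z), n >= -1, via the recurrence
    2k * i^k erfc z = i^(k-2) erfc z - 2z * i^(k-1) erfc z."""
    prev2 = 2 / math.sqrt(math.pi) * math.exp(-z * z)  # i^-1 erfc z
    prev1 = math.erfc(z)                               # i^0 erfc z
    if n == -1:
        return prev2
    if n == 0:
        return prev1
    for k in range(1, n + 1):
        prev2, prev1 = prev1, (prev2 - 2 * z * prev1) / (2 * k)
    return prev1

# closed form for n = 1: i^1 erfc z = exp(-z^2)/sqrt(pi) - z*erfc(z)
z = 0.5
expected = math.exp(-z * z) / math.sqrt(math.pi) - z * math.erfc(z)
assert abs(ierfc(1, z) - expected) < 1e-12
```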
Implementations
As real function of a real argument
* In POSIX-compliant operating systems, the header math.h shall declare and the mathematical library libm shall provide the functions erf and erfc (double precision) as well as their single precision and extended precision counterparts erff, erfl and erfcf, erfcl.
* The GNU Scientific Library provides erf, erfc, log(erf), and scaled error functions.
As complex function of a complex argument
* libcerf, a numeric C library for complex error functions, provides the complex functions cerf, cerfc, cerfcx and the real functions erfi, erfcx with approximately 13–14 digits precision, based on the Faddeeva function as implemented in the MIT Faddeeva Package.
See also
Related functions
* Gaussian integral, over the whole real line
* Gaussian function, derivative
* Dawson function, renormalized imaginary error function
* Goodwin–Staton integral
In probability
* Normal distribution
* Normal cumulative distribution function, a scaled and shifted form of error function
* Probit, the inverse or quantile function of the normal CDF
* Q-function, the tail probability of the normal distribution
References
External links
A Table of Integrals of the Error Functions
Special hypergeometric functions
Gaussian function
Functions related to probability distributions
Analytic functions