In mathematics, the error function (also called the Gauss error function), often denoted by erf, is a function erf : ℂ → ℂ defined as:

\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2} \,dt.

The integral here is a complex contour integral which is path-independent because e^{-t^2} is holomorphic on the whole complex plane ℂ. In many applications, the function argument is a real number, in which case the function value is also real.

In some old texts, the error function is defined without the factor of 2/\sqrt{\pi}.
This nonelementary integral is a sigmoid function that occurs often in probability, statistics, and partial differential equations.
In statistics, for non-negative real values of x, the error function has the following interpretation: for a real random variable Y that is normally distributed with mean 0 and standard deviation 1/\sqrt{2}, erf x is the probability that Y falls in the range [−x, x].
Two closely related functions are the complementary error function

\operatorname{erfc} z = 1 - \operatorname{erf} z,

and the imaginary error function

\operatorname{erfi} z = -i \operatorname{erf}(iz),

where i is the imaginary unit.
Name
The name "error function" and its abbreviation erf were proposed by J. W. L. Glaisher in 1871 on account of its connection with "the theory of Probability, and notably the theory of Errors." The error function complement was also discussed by Glaisher in a separate publication in the same year. For the "law of facility" of errors whose density is given by

f(x) = \left(\frac{c}{\pi}\right)^{1/2} e^{-c x^2}

(the normal distribution), Glaisher calculates the probability of an error lying between p and q as:

\left(\frac{c}{\pi}\right)^{1/2} \int_p^q e^{-c x^2} \,dx = \tfrac{1}{2} \left(\operatorname{erf}\left(q \sqrt{c}\right) - \operatorname{erf}\left(p \sqrt{c}\right)\right).
Applications
When the results of a series of measurements are described by a normal distribution with standard deviation σ and expected value 0, then erf(a/(σ√2)) is the probability that the error of a single measurement lies between −a and +a, for positive a. This is useful, for example, in determining the bit error rate of a digital communication system.
The error and complementary error functions occur, for example, in solutions of the heat equation when boundary conditions are given by the Heaviside step function.
The error function and its approximations can be used to estimate results that hold with high probability or with low probability. Given a random variable X ~ Norm[μ, σ] (a normal distribution with mean μ and standard deviation σ) and a constant L < μ, it can be shown via integration by substitution:

\Pr[X \le L] = \tfrac{1}{2} + \tfrac{1}{2} \operatorname{erf}\frac{L - \mu}{\sqrt{2}\,\sigma} \approx A \exp\left(-B \left(\frac{L - \mu}{\sigma}\right)^2\right)

where A and B are certain numeric constants. If L is sufficiently far from the mean, specifically μ − L ≥ σ√(ln k), then:

\Pr[X \le L] \le A \exp(-B \ln k) = \frac{A}{k^B}

so the probability goes to 0 as k → ∞.

The probability for X being in the interval [L_a, L_b] can be derived as

\Pr[L_a \le X \le L_b] = \tfrac{1}{2} \left(\operatorname{erf}\frac{L_b - \mu}{\sqrt{2}\,\sigma} - \operatorname{erf}\frac{L_a - \mu}{\sqrt{2}\,\sigma}\right).
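Such interval probabilities reduce to two calls to a library error function. A minimal Python sketch (the function name is illustrative, not a standard API):

```python
import math

def normal_interval_prob(a, b, mu=0.0, sigma=1.0):
    """Pr[a <= X <= b] for X ~ Normal(mu, sigma), via the error function."""
    za = (a - mu) / (sigma * math.sqrt(2.0))
    zb = (b - mu) / (sigma * math.sqrt(2.0))
    return 0.5 * (math.erf(zb) - math.erf(za))

# For a standard normal, about 68.27% of the mass lies within one sigma.
p_one_sigma = normal_interval_prob(-1.0, 1.0)
```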
Properties
The property erf(−z) = −erf(z) means that the error function is an odd function. This directly results from the fact that the integrand e^{-t^2} is an even function (the antiderivative of an even function which is zero at the origin is an odd function, and vice versa).
Since the error function is an entire function which takes real numbers to real numbers, for any complex number z:

\operatorname{erf} \bar{z} = \overline{\operatorname{erf} z}

where \bar{z} denotes the complex conjugate of z.
The integrand f = exp(−z²) and f = erf z are shown in the complex z-plane in the figures at right with domain coloring.
The error function at +∞ is exactly 1 (see Gaussian integral). On the real axis, erf z approaches unity at z → +∞ and −1 at z → −∞. On the imaginary axis, it tends to ±i∞.
Taylor series
The error function is an entire function; it has no singularities (except that at infinity) and its Taylor expansion always converges. For x ≫ 1, however, cancellation of leading terms makes the Taylor expansion impractical.
The defining integral cannot be evaluated in closed form in terms of elementary functions (see Liouville's theorem), but by expanding the integrand e^{-t^2} into its Maclaurin series and integrating term by term, one obtains the error function's Maclaurin series as:

\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{(-1)^n z^{2n+1}}{n!\,(2n+1)} = \frac{2}{\sqrt{\pi}} \left(z - \frac{z^3}{3} + \frac{z^5}{10} - \frac{z^7}{42} + \frac{z^9}{216} - \cdots\right)

which holds for every complex number z. The denominator terms are sequence A007680 in the OEIS.
For iterative calculation of the above series, the following alternative formulation may be useful:

\operatorname{erf} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \left(z \prod_{k=1}^n \frac{-(2k-1)\, z^2}{k\, (2k+1)}\right)

because \frac{-(2k-1)\, z^2}{k\, (2k+1)} expresses the multiplier to turn the kth term into the (k + 1)th term (considering z as the first term).
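This iterative scheme, where each term is the previous one times −(2k−1)z²/(k(2k+1)), is cheap to code. A minimal Python sketch, checked against the standard library's erf:

```python
import math

def erf_maclaurin(z, n_terms=40):
    """erf via its Maclaurin series, updating each term with the
    multiplier -(2k-1) z^2 / (k (2k+1))."""
    term = z       # the k = 0 term
    total = z
    for k in range(1, n_terms):
        term *= -(2 * k - 1) * z * z / (k * (2 * k + 1))
        total += term
    return (2.0 / math.sqrt(math.pi)) * total
```

Convergence is fast for |z| of order 1; for large real arguments the cancellation of leading terms makes an asymptotic expansion preferable.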
The imaginary error function has a very similar Maclaurin series, which is:

\operatorname{erfi} z = \frac{2}{\sqrt{\pi}} \sum_{n=0}^\infty \frac{z^{2n+1}}{n!\,(2n+1)}

which holds for every complex number z.
Derivative and integral
The derivative of the error function follows immediately from its definition:

\frac{d}{dz} \operatorname{erf} z = \frac{2}{\sqrt{\pi}} e^{-z^2}.

From this, the derivative of the imaginary error function is also immediate:

\frac{d}{dz} \operatorname{erfi} z = \frac{2}{\sqrt{\pi}} e^{z^2}.

An antiderivative of the error function, obtainable by integration by parts, is

z \operatorname{erf} z + \frac{e^{-z^2}}{\sqrt{\pi}}.

An antiderivative of the imaginary error function, also obtainable by integration by parts, is

z \operatorname{erfi} z - \frac{e^{z^2}}{\sqrt{\pi}}.

Higher order derivatives are given by

\operatorname{erf}^{(k)} z = \frac{2 (-1)^{k-1}}{\sqrt{\pi}} H_{k-1}(z)\, e^{-z^2}, \qquad k = 1, 2, \dots

where H_k are the physicists' Hermite polynomials.
Bürmann series
An expansion, which converges more rapidly for all real values of x than a Taylor expansion, is obtained by using Hans Heinrich Bürmann's theorem:

\operatorname{erf} x = \frac{2}{\sqrt{\pi}} \operatorname{sgn}(x) \sqrt{1 - e^{-x^2}} \left(\frac{\sqrt{\pi}}{2} + \sum_{k=1}^\infty c_k\, e^{-k x^2}\right)

where sgn is the sign function. By keeping only the first two coefficients and choosing c₁ = 31/200 and c₂ = −341/8000, the resulting approximation shows its largest relative error at x = ±1.3796, where it is less than 0.0034361:

\operatorname{erf} x \approx \frac{2}{\sqrt{\pi}} \operatorname{sgn}(x) \sqrt{1 - e^{-x^2}} \left(\frac{\sqrt{\pi}}{2} + \frac{31}{200} e^{-x^2} - \frac{341}{8000} e^{-2 x^2}\right).
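A minimal Python sketch of this truncated Bürmann approximation, assuming the two coefficients 31/200 and −341/8000 commonly quoted for it:

```python
import math

def erf_burmann(x):
    """Two-coefficient Buermann approximation of erf; its largest
    relative error is below about 0.0034 over the whole real line."""
    e = math.exp(-x * x)
    return (2.0 / math.sqrt(math.pi)) * math.copysign(1.0, x) * math.sqrt(1.0 - e) * (
        math.sqrt(math.pi) / 2.0 + (31.0 / 200.0) * e - (341.0 / 8000.0) * e * e)
```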
Inverse functions

Given a complex number z, there is not a ''unique'' complex number w satisfying erf w = z, so a true inverse function would be multivalued. However, for −1 < x < 1, there is a unique ''real'' number denoted erf⁻¹ x satisfying

\operatorname{erf}\left(\operatorname{erf}^{-1} x\right) = x.

The inverse error function is usually defined with domain (−1, 1), and it is restricted to this domain in many computer algebra systems. However, it can be extended to the disk |z| < 1 of the complex plane, using the Maclaurin series

\operatorname{erf}^{-1} z = \sum_{k=0}^\infty \frac{c_k}{2k+1} \left(\frac{\sqrt{\pi}}{2} z\right)^{2k+1}

where c₀ = 1 and

c_k = \sum_{m=0}^{k-1} \frac{c_m\, c_{k-1-m}}{(m+1)(2m+1)}.

So we have the series expansion (common factors have been canceled from numerators and denominators):

\operatorname{erf}^{-1} z = \frac{\sqrt{\pi}}{2} \left(z + \frac{\pi}{12} z^3 + \frac{7 \pi^2}{480} z^5 + \frac{127 \pi^3}{40320} z^7 + \frac{4369 \pi^4}{5806080} z^9 + \cdots\right).

(After cancellation the numerator and denominator values are sequences A092676 and A092677 in the OEIS, respectively; without cancellation the numerator terms are values in sequence A002067.) The error function's value at ±∞ is equal to ±1.

For |z| < 1, we have erf(erf⁻¹ z) = z.

The inverse complementary error function is defined as

\operatorname{erfc}^{-1}(1 - z) = \operatorname{erf}^{-1} z.

For real x, there is a unique ''real'' number erfi⁻¹ x satisfying erfi(erfi⁻¹ x) = x; this defines the inverse imaginary error function.

For any real x, Newton's method can be used to compute erfi⁻¹ x, and for −1 ≤ x ≤ 1, the following Maclaurin series converges:

\operatorname{erfi}^{-1} z = \sum_{k=0}^\infty \frac{(-1)^k c_k}{2k+1} \left(\frac{\sqrt{\pi}}{2} z\right)^{2k+1}

where c_k is defined as above.
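Since erf and its derivative are cheap to evaluate, Newton's method also gives a quick way to compute the real inverse error function numerically (this is a generic numerical inversion, not the series above). A minimal Python sketch:

```python
import math

def erfinv_newton(y, tol=1e-14, max_iter=100):
    """Solve erf(x) = y for -1 < y < 1 by Newton's method,
    using d/dx erf(x) = (2 / sqrt(pi)) exp(-x^2)."""
    if not -1.0 < y < 1.0:
        raise ValueError("real erfinv requires -1 < y < 1")
    x = 0.0
    for _ in range(max_iter):
        err = math.erf(x) - y
        if abs(err) < tol:
            break
        # Newton step: x -= err / erf'(x)
        x -= err * (math.sqrt(math.pi) / 2.0) * math.exp(x * x)
    return x
```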
Asymptotic expansion
A useful asymptotic expansion of the complementary error function (and therefore also of the error function) for large real x is

\operatorname{erfc} x = \frac{e^{-x^2}}{x \sqrt{\pi}} \left(1 + \sum_{n=1}^\infty (-1)^n \frac{(2n-1)!!}{\left(2 x^2\right)^n}\right)

where (2n − 1)!! is the double factorial of (2n − 1), which is the product of all odd numbers up to (2n − 1). This series diverges for every finite x, and its meaning as asymptotic expansion is that for any integer N ≥ 1 one has

\operatorname{erfc} x = \frac{e^{-x^2}}{x \sqrt{\pi}} \sum_{n=0}^{N-1} (-1)^n \frac{(2n-1)!!}{\left(2 x^2\right)^n} + R_N(x)

where the remainder is

R_N(x) := \frac{(-1)^N}{\sqrt{\pi}}\, 2^{1-2N}\, \frac{(2N)!}{N!} \int_x^\infty t^{-2N} e^{-t^2} \,dt,

which follows easily by induction, writing

e^{-t^2} = -\frac{\left(e^{-t^2}\right)'}{2t}

and integrating by parts.

The asymptotic behavior of the remainder term, in Landau notation, is

R_N(x) = O\left(x^{-1-2N} e^{-x^2}\right)

as x → ∞. This can be found by noting that t^{-2N} ≤ x^{-2N} on the range of integration and bounding the remaining tail integral of e^{-t^2} by e^{-x^2}/(2x).

For large enough values of x, only the first few terms of this asymptotic expansion are needed to obtain a good approximation of erfc x (while for not too large values of x, the above Taylor expansion at 0 provides a very fast convergence).
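A minimal Python sketch of the truncated asymptotic series, which is already accurate for moderately large x with very few terms:

```python
import math

def erfc_asymptotic(x, n_terms=5):
    """Truncated asymptotic expansion of erfc for large real x > 0.
    Each term is the previous one times -(2n - 1) / (2 x^2); the
    series must be truncated, since it diverges if carried too far."""
    total = 1.0
    term = 1.0
    for n in range(1, n_terms):
        term *= -(2 * n - 1) / (2.0 * x * x)
        total += term
    return math.exp(-x * x) / (x * math.sqrt(math.pi)) * total
```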
Continued fraction expansion
A continued fraction expansion of the complementary error function was found by Laplace:

\operatorname{erfc} z = \frac{z}{\sqrt{\pi}}\, e^{-z^2}\, \cfrac{1}{z^2 + \cfrac{a_1}{1 + \cfrac{a_2}{z^2 + \cfrac{a_3}{1 + \dotsb}}}}, \qquad a_m = \frac{m}{2}.
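Evaluated bottom-up from a finite depth, Laplace's continued fraction (partial numerators a_m = m/2, denominators alternating z² and 1) gives a practical way to compute the scaled function erfcx x = e^{x²} erfc x. A minimal Python sketch, with the truncation depth chosen ad hoc:

```python
import math

def erfcx_laplace(x, depth=60):
    """Scaled complementary error function erfcx(x) = exp(x^2) erfc(x)
    via Laplace's continued fraction; intended for x > 0, and most
    accurate when x is not small."""
    f = 0.0
    for m in range(depth, 0, -1):
        a_m = m / 2.0
        base = 1.0 if m % 2 == 1 else x * x  # denominators alternate 1, x^2
        f = a_m / (base + f)
    return x / (math.sqrt(math.pi) * (x * x + f))
```

Multiplying the result by e^{-x^2} recovers erfc x without ever forming the underflow-prone product directly.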
Factorial series
The inverse factorial series

\operatorname{erfc} z = \frac{e^{-z^2}}{\sqrt{\pi}\, z} \sum_{n=0}^\infty \frac{(-1)^n Q_n}{\left(z^2 + 1\right)^{\bar{n}}}

converges for Re(z²) > 0. Here (z² + 1)^{\bar{n}} denotes the rising factorial, and the coefficients Q_n can be expressed in terms of s(n, k), the signed Stirling numbers of the first kind.

There also exists a representation by an infinite sum containing the double factorial:

\operatorname{erf} z = \frac{2}{\sqrt{\pi}}\, e^{-z^2} \sum_{n=0}^\infty \frac{2^n z^{2n+1}}{(2n+1)!!}.
Bounds and numerical approximations
Approximation with elementary functions
Abramowitz and Stegun give several approximations of varying accuracy (equations 7.1.25–28). This allows one to choose the fastest approximation suitable for a given application. In order of increasing accuracy, they are:

\operatorname{erf} x \approx 1 - \frac{1}{\left(1 + a_1 x + a_2 x^2 + a_3 x^3 + a_4 x^4\right)^4} \qquad \text{(maximum error: } 5 \times 10^{-4}\text{)}

where a₁ = 0.278393, a₂ = 0.230389, a₃ = 0.000972, a₄ = 0.078108;

\operatorname{erf} x \approx 1 - \left(a_1 t + a_2 t^2 + a_3 t^3\right) e^{-x^2}, \quad t = \frac{1}{1 + p x} \qquad \text{(maximum error: } 2.5 \times 10^{-5}\text{)}

where p = 0.47047, a₁ = 0.3480242, a₂ = −0.0958798, a₃ = 0.7478556;

\operatorname{erf} x \approx 1 - \frac{1}{\left(1 + a_1 x + a_2 x^2 + \cdots + a_6 x^6\right)^{16}} \qquad \text{(maximum error: } 3 \times 10^{-7}\text{)}

where a₁ = 0.0705230784, a₂ = 0.0422820123, a₃ = 0.0092705272, a₄ = 0.0001520143, a₅ = 0.0002765672, a₆ = 0.0000430638;

\operatorname{erf} x \approx 1 - \left(a_1 t + a_2 t^2 + a_3 t^3 + a_4 t^4 + a_5 t^5\right) e^{-x^2}, \quad t = \frac{1}{1 + p x} \qquad \text{(maximum error: } 1.5 \times 10^{-7}\text{)}

where p = 0.3275911, a₁ = 0.254829592, a₂ = −0.284496736, a₃ = 1.421413741, a₄ = −1.453152027, a₅ = 1.061405429.

All of these approximations are valid for x ≥ 0. To use these approximations for negative x, use the fact that erf x is an odd function, so erf x = −erf(−x).
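A minimal Python sketch of the most accurate of the Abramowitz & Stegun-style approximations, assuming the classic coefficients p = 0.3275911 and a₁…a₅ usually quoted for it, with negative arguments handled through the odd symmetry:

```python
import math

_P = 0.3275911
_A = (0.254829592, -0.284496736, 1.421413741, -1.453152027, 1.061405429)

def erf_as(x):
    """Abramowitz & Stegun-style rational-exponential approximation
    of erf; maximum absolute error about 1.5e-7."""
    t = 1.0 / (1.0 + _P * abs(x))
    # Horner evaluation of a1*t + a2*t^2 + ... + a5*t^5
    poly = t * (_A[0] + t * (_A[1] + t * (_A[2] + t * (_A[3] + t * _A[4]))))
    return math.copysign(1.0 - poly * math.exp(-x * x), x)
```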
Exponential bounds and a pure exponential approximation for the complementary error function are given by

\operatorname{erfc} x \le \tfrac{1}{2} e^{-2 x^2} + \tfrac{1}{2} e^{-x^2} \le e^{-x^2}, \qquad x > 0,

\operatorname{erfc} x \approx \tfrac{1}{6} e^{-x^2} + \tfrac{1}{2} e^{-\frac{4}{3} x^2}, \qquad x > 0.
The above have been generalized to sums of N exponentials with increasing accuracy in terms of N, so that erfc x can be accurately approximated or bounded by 2 Q̃(√2 x), where

\tilde{Q}(x) = \sum_{n=1}^N a_n\, e^{-b_n x^2}.

In particular, there is a systematic methodology to solve the numerical coefficients {(a_n, b_n)} that yield a minimax approximation or bound for the closely related Q-function: Q(x) ≈ Q̃(x), Q(x) ≤ Q̃(x), or Q(x) ≥ Q̃(x) for x ≥ 0. The coefficients for many variations of the exponential approximations and bounds up to N = 25 have been released to open access as a comprehensive dataset.
A tight approximation of the complementary error function for x ∈ [0, ∞) is given by Karagiannidis & Lioumpas (2007), who showed for the appropriate choice of parameters {A, B} that

\operatorname{erfc} x \approx \frac{\left(1 - e^{-A x}\right) e^{-x^2}}{B \sqrt{\pi}\, x}.

They determined {A, B} = {1.98, 1.135}, which gives a good approximation for all x ≥ 0. Alternative coefficients are also available for tailoring accuracy for a specific application or transforming the expression into a tight bound.
A single-term lower bound is

\operatorname{erfc} x \ge \sqrt{\frac{2 e}{\pi}}\, \frac{\sqrt{\beta - 1}}{\beta}\, e^{-\beta x^2}, \qquad x \ge 0, \quad \beta > 1,

where the parameter β can be picked to minimize error on the desired interval of approximation.
Another approximation is given by Sergei Winitzki using his "global Padé approximations":

\operatorname{erf} x \approx \operatorname{sgn}(x) \sqrt{1 - \exp\left(-x^2 \frac{\frac{4}{\pi} + a x^2}{1 + a x^2}\right)}

where

a = \frac{8 (\pi - 3)}{3 \pi (4 - \pi)} \approx 0.140012.

This is designed to be very accurate in a neighborhood of 0 and a neighborhood of infinity, and the ''relative'' error is less than 0.00035 for all real x. Using the alternate value a ≈ 0.147 reduces the maximum relative error to about 0.00013.

This approximation can be inverted to obtain an approximation for the inverse error function:

\operatorname{erf}^{-1} x \approx \operatorname{sgn}(x) \sqrt{\sqrt{\left(\frac{2}{\pi a} + \frac{\ln\left(1 - x^2\right)}{2}\right)^2 - \frac{\ln\left(1 - x^2\right)}{a}} - \left(\frac{2}{\pi a} + \frac{\ln\left(1 - x^2\right)}{2}\right)}.
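Both Winitzki's forward approximation and its inversion are short computations; a minimal Python sketch, assuming the value a = 8(π − 3)/(3π(4 − π)):

```python
import math

A = 8.0 * (math.pi - 3.0) / (3.0 * math.pi * (4.0 - math.pi))  # ~0.140012

def erf_winitzki(x):
    """Winitzki's global Pade-style approximation of erf
    (relative error below roughly 3.5e-4 for all real x)."""
    x2 = x * x
    inner = -x2 * (4.0 / math.pi + A * x2) / (1.0 + A * x2)
    return math.copysign(math.sqrt(1.0 - math.exp(inner)), x)

def erfinv_winitzki(y):
    """The inverted form of the same approximation, for -1 < y < 1."""
    l = math.log(1.0 - y * y)
    u = 2.0 / (math.pi * A) + 0.5 * l
    return math.copysign(math.sqrt(math.sqrt(u * u - l / A) - u), y)
```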
An approximation with a maximal error of 1.2 × 10⁻⁷ for any real argument is:

\operatorname{erf} x = \begin{cases} 1 - \tau & x \ge 0 \\ \tau - 1 & x < 0 \end{cases}

with

\tau = t \cdot \exp\left(-x^2 - 1.26551223 + 1.00002368\, t + 0.37409196\, t^2 + 0.09678418\, t^3 - 0.18628806\, t^4 + 0.27886807\, t^5 - 1.13520398\, t^6 + 1.48851587\, t^7 - 0.82215223\, t^8 + 0.17087277\, t^9\right)

and

t = \frac{1}{1 + \tfrac{1}{2} |x|}.
An approximation of erfc with a maximum relative error less than 2⁻⁵³ in absolute value is given in the literature, stated separately for x ≥ 0 and for x < 0.
A simple approximation for real-valued arguments can be obtained through hyperbolic functions, which keeps the absolute difference from erf x small for all real x.
Since the error function and the Gaussian Q-function are closely related through the identity erfc x = 2 Q(√2 x), or equivalently Q(x) = ½ erfc(x/√2), bounds developed for the Q-function can be adapted to approximate the complementary error function. A pair of tight lower and upper bounds on the Gaussian Q-function for positive arguments was introduced by Abreu (2012), based on a simple algebraic expression with only two exponential terms. These bounds stem from a unified form whose parameters are selected to ensure the lower- and upper-bounding properties, respectively.

These expressions maintain simplicity and tightness, providing a practical trade-off between accuracy and ease of computation. They are particularly valuable in theoretical contexts, such as communication theory over fading channels, where both functions frequently appear. Additionally, the original Q-function bounds can be extended to powers of the Q-function for positive integer exponents via the binomial theorem, suggesting potential adaptability for powers of erfc, though this is less commonly required in error function applications.
Related functions
Complementary error function
The complementary error function, denoted erfc, is defined as

\operatorname{erfc} x = 1 - \operatorname{erf} x = \frac{2}{\sqrt{\pi}} \int_x^\infty e^{-t^2} \,dt = e^{-x^2} \operatorname{erfcx} x,

which also defines erfcx, the scaled complementary error function (which can be used instead of erfc to avoid arithmetic underflow). Another form of erfc x for x ≥ 0 is known as Craig's formula, after its discoverer:

\operatorname{erfc} x = \frac{2}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{\sin^2 \theta}\right) d\theta.

This expression is valid only for positive values of x, but it can be used in conjunction with erfc x = 2 − erfc(−x) to obtain erfc x for negative values. This form is advantageous in that the range of integration is fixed and finite. An extension of this expression for the erfc of the sum of two non-negative variables is as follows:

\operatorname{erfc}(x + y) = \frac{2}{\pi} \int_0^{\pi/2} \exp\left(-\frac{x^2}{\sin^2 \theta} - \frac{y^2}{\cos^2 \theta}\right) d\theta, \qquad x, y \ge 0.
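Because the integration range is fixed and finite and the integrand is smooth and bounded, even a naive quadrature reproduces erfc well from Craig's form. A minimal Python sketch using the midpoint rule (the step count is an arbitrary choice):

```python
import math

def erfc_craig(x, n=4000):
    """erfc(x) for x >= 0 via Craig's fixed-interval integral form,
    evaluated with a simple midpoint rule over [0, pi/2]."""
    h = (math.pi / 2.0) / n
    total = 0.0
    for k in range(n):
        theta = (k + 0.5) * h
        total += math.exp(-x * x / math.sin(theta) ** 2)
    return (2.0 / math.pi) * h * total
```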
Imaginary error function
The imaginary error function, denoted erfi, is defined as

\operatorname{erfi} x = -i \operatorname{erf}(ix) = \frac{2}{\sqrt{\pi}} \int_0^x e^{t^2} \,dt = \frac{2}{\sqrt{\pi}}\, e^{x^2} D(x),

where D(x) is the Dawson function (which can be used instead of erfi to avoid arithmetic overflow).

Despite the name "imaginary error function", erfi x is real when x is real.

When the error function is evaluated for arbitrary complex arguments z, the resulting complex error function is usually discussed in scaled form as the Faddeeva function:

w(z) = e^{-z^2} \operatorname{erfc}(-iz) = \operatorname{erfcx}(-iz).
Cumulative distribution function
The error function is essentially identical to the standard normal cumulative distribution function, denoted Φ, also named norm(x) by some software languages, as they differ only by scaling and translation. Indeed,

\Phi(x) = \frac{1}{2} + \frac{1}{2} \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right) = \frac{1}{2} \operatorname{erfc}\left(-\frac{x}{\sqrt{2}}\right)

or rearranged for erf and erfc:

\operatorname{erf} x = 2 \Phi\left(x \sqrt{2}\right) - 1, \qquad \operatorname{erfc} x = 2 \Phi\left(-x \sqrt{2}\right) = 2 \left(1 - \Phi\left(x \sqrt{2}\right)\right).

Consequently, the error function is also closely related to the Q-function, which is the tail probability of the standard normal distribution. The Q-function can be expressed in terms of the error function as

Q(x) = \frac{1}{2} - \frac{1}{2} \operatorname{erf}\left(\frac{x}{\sqrt{2}}\right) = \frac{1}{2} \operatorname{erfc}\left(\frac{x}{\sqrt{2}}\right).

The inverse of Φ is known as the normal quantile function, or probit function, and may be expressed in terms of the inverse error function as

\operatorname{probit}(p) = \Phi^{-1}(p) = \sqrt{2}\, \operatorname{erf}^{-1}(2p - 1) = -\sqrt{2}\, \operatorname{erfc}^{-1}(2p).
The standard normal cdf is used more often in probability and statistics, and the error function is used more often in other branches of mathematics.
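These scaling identities are exactly how Φ and the Q-function are typically built on top of a library erf/erfc; a minimal Python sketch:

```python
import math

def phi(x):
    """Standard normal CDF, Phi(x) = (1 + erf(x / sqrt(2))) / 2."""
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def q_function(x):
    """Gaussian upper-tail probability, Q(x) = erfc(x / sqrt(2)) / 2.
    Using erfc directly avoids cancellation for large x."""
    return 0.5 * math.erfc(x / math.sqrt(2.0))
```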
The error function is a special case of the Mittag-Leffler function, and can also be expressed as a confluent hypergeometric function (Kummer's function):

\operatorname{erf} x = \frac{2x}{\sqrt{\pi}}\, M\left(\tfrac{1}{2}, \tfrac{3}{2}, -x^2\right).

It has a simple expression in terms of the Fresnel integral.

In terms of the regularized gamma function P and the incomplete gamma function,

\operatorname{erf} x = \operatorname{sgn}(x)\, P\left(\tfrac{1}{2}, x^2\right) = \frac{\operatorname{sgn}(x)}{\sqrt{\pi}}\, \gamma\left(\tfrac{1}{2}, x^2\right),

where sgn x is the sign function.
Iterated integrals of the complementary error function
The iterated integrals of the complementary error function are defined by

i^n \operatorname{erfc} z = \int_z^\infty i^{n-1} \operatorname{erfc} \zeta \,d\zeta, \qquad i^0 \operatorname{erfc} z = \operatorname{erfc} z, \qquad i^{-1} \operatorname{erfc} z = \frac{2}{\sqrt{\pi}}\, e^{-z^2}.

The general recurrence formula is

2n\, i^n \operatorname{erfc} z = i^{n-2} \operatorname{erfc} z - 2z\, i^{n-1} \operatorname{erfc} z.

They have the power series

i^n \operatorname{erfc} z = \sum_{j=0}^\infty \frac{(-z)^j}{2^{n-j}\, j!\, \Gamma\left(1 + \frac{n-j}{2}\right)},

from which follow the symmetry properties

i^{2m} \operatorname{erfc}(-z) = -i^{2m} \operatorname{erfc} z + \sum_{q=0}^m \frac{z^{2q}}{2^{2(m-q)-1}\, (2q)!\, (m-q)!}

and

i^{2m+1} \operatorname{erfc}(-z) = i^{2m+1} \operatorname{erfc} z + \sum_{q=0}^m \frac{z^{2q+1}}{2^{2(m-q)-1}\, (2q+1)!\, (m-q)!}.
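The recurrence gives a simple way to tabulate the first few iterated integrals numerically, seeded with i⁻¹ erfc and i⁰ erfc; a minimal Python sketch (the forward recurrence loses accuracy for large n, but is fine for small n):

```python
import math

def inerfc(n, z):
    """i^n erfc(z) for n >= -1, via the recurrence
    i^n erfc z = -(z/n) i^{n-1} erfc z + (1/(2n)) i^{n-2} erfc z."""
    prev2 = 2.0 * math.exp(-z * z) / math.sqrt(math.pi)  # i^{-1} erfc z
    prev1 = math.erfc(z)                                 # i^{0} erfc z
    if n == -1:
        return prev2
    for k in range(1, n + 1):
        prev2, prev1 = prev1, -(z / k) * prev1 + prev2 / (2.0 * k)
    return prev1
```

For n = 1 this matches the closed form i¹ erfc z = e^{−z²}/√π − z erfc z.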
Implementations
As real function of a real argument
* In POSIX-compliant operating systems, the header math.h shall declare and the mathematical library libm shall provide the functions erf and erfc (double precision) as well as their single precision and extended precision counterparts erff, erfl and erfcf, erfcl.
* The GNU Scientific Library provides erf, erfc, log(erf), and scaled error functions.
As complex function of a complex argument
* libcerf, a numeric C library for complex error functions, provides the complex functions cerf, cerfc, cerfcx and the real functions erfi, erfcx with approximately 13–14 digits precision, based on the Faddeeva function as implemented in the MIT Faddeeva Package.
External links
A Table of Integrals of the Error Functions