The Gaussian integral, also known as the Euler–Poisson integral, is the integral of the Gaussian function f(x) = e^{-x^2} over the entire real line. Named after the German mathematician Carl Friedrich Gauss, the integral is

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.

Abraham de Moivre originally discovered this type of integral in 1733, while Gauss published the precise integral in 1809, attributing its discovery to Laplace. The integral has a wide range of applications. For example, with a slight change of variables it is used to compute the normalizing constant of the normal distribution. The same integral with finite limits is closely related to both the error function and the cumulative distribution function of the normal distribution. In physics this type of integral appears frequently: for example, in quantum mechanics, to find the probability density of the ground state of the harmonic oscillator; in the path integral formulation, to find the propagator of the harmonic oscillator; and in statistical mechanics, to find its partition function.

Although no elementary function exists for the error function, as can be proven by the Risch algorithm, the Gaussian integral can be solved analytically through the methods of multivariable calculus. That is, there is no elementary ''indefinite integral'' for \int e^{-x^2}\,dx, but the definite integral \int_{-\infty}^{\infty} e^{-x^2}\,dx can be evaluated. The definite integral of an arbitrary Gaussian function is

\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.
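Both identities are easy to check numerically. The following is a minimal sketch using SciPy's quad; the values a = 2 and b = 0.5 are arbitrary illustrative choices, not anything prescribed by the text above.

```python
import numpy as np
from scipy.integrate import quad

# Basic Gaussian integral: the integral of exp(-x^2) over the real line equals sqrt(pi).
val, _ = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)
print(val, np.sqrt(np.pi))                       # both ~1.7724538509

# General form: integral of exp(-a*(x+b)^2) equals sqrt(pi/a) for any a > 0 and real b.
a, b = 2.0, 0.5                                  # illustrative values
val, _ = quad(lambda x: np.exp(-a * (x + b)**2), -np.inf, np.inf)
print(val, np.sqrt(np.pi / a))                   # both ~1.2533141373
```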


Computation


By polar coordinates

A standard way to compute the Gaussian integral, the idea of which goes back to Poisson, is to make use of the property that:

\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \int_{-\infty}^{\infty} e^{-x^2}\,dx \int_{-\infty}^{\infty} e^{-y^2}\,dy = \int_{-\infty}^{\infty}\int_{-\infty}^{\infty} e^{-\left(x^2+y^2\right)}\,dx\,dy.

Consider the function e^{-\left(x^2+y^2\right)} = e^{-r^2} on the plane \mathbb{R}^2, and compute its integral two ways:
# on the one hand, by double integration in the Cartesian coordinate system, its integral is a square: \left(\int e^{-x^2}\,dx\right)^2;
# on the other hand, by shell integration (a case of double integration in polar coordinates), its integral is computed to be \pi.

Comparing these two computations yields the integral, though one should take care about the improper integrals involved.

\begin{align}
\iint_{\mathbb{R}^2} e^{-\left(x^2+y^2\right)}\,dx\,dy
&= \int_0^{2\pi} \int_0^{\infty} e^{-r^2} r\,dr\,d\theta \\
&= 2\pi \int_0^{\infty} re^{-r^2}\,dr \\
&= 2\pi \int_{-\infty}^{0} \tfrac{1}{2} e^{s}\,ds && s = -r^2 \\
&= \pi \int_{-\infty}^{0} e^{s}\,ds \\
&= \pi \left[e^{s}\right]_{-\infty}^{0} \\
&= \pi \left(e^0 - e^{-\infty}\right) \\
&= \pi \left(1 - 0\right) \\
&= \pi,
\end{align}

where the factor of r is the Jacobian determinant which appears because of the transform to polar coordinates (r\,dr\,d\theta is the standard measure on the plane, expressed in polar coordinates), and the substitution involves taking s = -r^2, so ds = -2r\,dr. Combining these yields

\left(\int_{-\infty}^{\infty} e^{-x^2}\,dx\right)^2 = \pi,

so

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.
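The two computations can also be compared numerically. Below is a minimal sketch: the square of the one-dimensional integral versus the polar double integral, with the radial integration truncated at r = 10, where e^{-r^2} is already negligible.

```python
import numpy as np
from scipy.integrate import quad, dblquad

# Square of the one-dimensional Gaussian integral.
I, _ = quad(lambda x: np.exp(-x**2), -np.inf, np.inf)

# The same quantity as a double integral in polar coordinates: integrand r*exp(-r^2),
# with theta over [0, 2*pi] and r over [0, 10] (the tail beyond r = 10 is negligible).
polar, _ = dblquad(lambda r, theta: r * np.exp(-r**2), 0, 2 * np.pi, 0, 10)

print(I**2, polar, np.pi)   # all three ~3.14159
```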


Complete proof

To justify the improper double integrals and equating the two expressions, we begin with an approximating function:

I(a) = \int_{-a}^{a} e^{-x^2}\,dx.

If the integral \int_{-\infty}^{\infty} e^{-x^2}\,dx were absolutely convergent we would have that its Cauchy principal value, that is, the limit \lim_{a\to\infty} I(a), would coincide with \int_{-\infty}^{\infty} e^{-x^2}\,dx. To see that this is the case, consider that

\int_{-\infty}^{\infty} \left|e^{-x^2}\right| dx < \int_{-\infty}^{-1} -x\, e^{-x^2}\, dx + \int_{-1}^{1} e^{-x^2}\, dx + \int_{1}^{\infty} x\, e^{-x^2}\, dx < \infty.

So we can compute \int_{-\infty}^{\infty} e^{-x^2}\,dx by just taking the limit \lim_{a\to\infty} I(a).

Taking the square of I(a) yields

\begin{align}
I(a)^2 &= \left(\int_{-a}^{a} e^{-x^2}\,dx\right)\left(\int_{-a}^{a} e^{-y^2}\,dy\right) \\
&= \int_{-a}^{a} \left(\int_{-a}^{a} e^{-y^2}\,dy\right) e^{-x^2}\,dx \\
&= \int_{-a}^{a} \int_{-a}^{a} e^{-\left(x^2+y^2\right)}\,dy\,dx.
\end{align}

Using Fubini's theorem, the above double integral can be seen as an area integral

\iint_{[-a,a]\times[-a,a]} e^{-\left(x^2+y^2\right)}\,d(x,y),

taken over a square with vertices \{(-a,a),(a,a),(a,-a),(-a,-a)\} on the ''xy''-plane.

Since the exponential function is greater than 0 for all real numbers, it then follows that the integral taken over the square's incircle must be less than I(a)^2, and similarly the integral taken over the square's circumcircle must be greater than I(a)^2. The integrals over the two disks can easily be computed by switching from Cartesian coordinates to polar coordinates:

\begin{align} x &= r\cos\theta, & y &= r\sin\theta \end{align}

\mathbf{J}(r,\theta) = \begin{bmatrix} \dfrac{\partial x}{\partial r} & \dfrac{\partial x}{\partial\theta} \\[1em] \dfrac{\partial y}{\partial r} & \dfrac{\partial y}{\partial\theta} \end{bmatrix} = \begin{bmatrix} \cos\theta & -r\sin\theta \\ \sin\theta & \hphantom{-}r\cos\theta \end{bmatrix}

d(x,y) = \left|J(r,\theta)\right|\, d(r,\theta) = r\, d(r,\theta).

\int_0^{2\pi}\int_0^{a} re^{-r^2}\,dr\,d\theta < I^2(a) < \int_0^{2\pi}\int_0^{a\sqrt{2}} re^{-r^2}\,dr\,d\theta.

Integrating,

\pi\left(1-e^{-a^2}\right) < I^2(a) < \pi\left(1-e^{-2a^2}\right).

By the squeeze theorem, this gives the Gaussian integral

\int_{-\infty}^{\infty} e^{-x^2}\,dx = \sqrt{\pi}.
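These sandwich bounds are easy to inspect numerically. A minimal sketch, evaluating I(a)^2 for a few truncation radii and comparing it against the incircle and circumcircle values:

```python
import numpy as np
from scipy.integrate import quad

def I(a):
    # truncated one-dimensional integral over [-a, a]
    val, _ = quad(lambda x: np.exp(-x**2), -a, a)
    return val

for a in (0.5, 1.0, 2.0, 4.0):
    lower = np.pi * (1 - np.exp(-a**2))        # disk of radius a (incircle)
    upper = np.pi * (1 - np.exp(-2 * a**2))    # disk of radius a*sqrt(2) (circumcircle)
    print(a, lower, I(a)**2, upper)            # lower < I(a)^2 < upper, all -> pi
```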


By Cartesian coordinates

A different technique, which goes back to Laplace (1812), is the following. Let

\begin{align} y &= xs \\ dy &= x\,ds. \end{align}

Since the limits on s as y \to \pm\infty depend on the sign of x, it simplifies the calculation to use the fact that e^{-x^2} is an even function, and, therefore, the integral over all real numbers is just twice the integral from zero to infinity. That is,

\int_{-\infty}^{\infty} e^{-x^2}\,dx = 2\int_{0}^{\infty} e^{-x^2}\,dx.

Thus, over the range of integration, x \ge 0, and the variables y and s have the same limits. This yields:

\begin{align}
I^2 &= 4 \int_0^{\infty} \int_0^{\infty} e^{-\left(x^2+y^2\right)}\, dy\,dx \\
&= 4 \int_0^{\infty} \left( \int_0^{\infty} e^{-\left(x^2+y^2\right)}\, dy \right) dx \\
&= 4 \int_0^{\infty} \left( \int_0^{\infty} e^{-x^2\left(1+s^2\right)}\, x\,ds \right) dx.
\end{align}

Then, using Fubini's theorem to switch the order of integration:

\begin{align}
I^2 &= 4 \int_0^{\infty} \left( \int_0^{\infty} e^{-x^2\left(1+s^2\right)}\, x\,dx \right) ds \\
&= 4 \int_0^{\infty} \left[ \frac{e^{-x^2\left(1+s^2\right)}}{-2\left(1+s^2\right)} \right]_{x=0}^{x=\infty} ds \\
&= 4 \left( \frac{1}{2} \int_0^{\infty} \frac{ds}{1+s^2} \right) \\
&= 2 \arctan(s)\Big|_0^{\infty} \\
&= \pi.
\end{align}

Therefore, I = \sqrt{\pi}, as expected.
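The key steps above — the inner x-integral collapsing to 1/\left(2\left(1+s^2\right)\right) and the outer s-integral giving \pi — can be checked numerically. A minimal sketch; s_0 = 1.5 is just an illustrative value:

```python
import numpy as np
from scipy.integrate import quad

# Inner integral in x has the closed form 1 / (2 * (1 + s**2)); check it at one value of s.
s0 = 1.5
inner, _ = quad(lambda x: x * np.exp(-x**2 * (1 + s0**2)), 0, np.inf)
print(inner, 1 / (2 * (1 + s0**2)))

# Outer integral: I^2 = 4 * (1/2) * integral of ds / (1 + s^2) over [0, inf) = pi.
outer, _ = quad(lambda s: 2 / (1 + s**2), 0, np.inf)
print(outer, np.pi)
```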


By Laplace's method

In Laplace approximation, we deal only with up to second-order terms in the Taylor expansion, so we consider e^{-x^2} \approx 1-x^2 \approx \left(1+x^2\right)^{-1}. In fact, since (1+t)e^{-t} \leq 1 for all t, we have the exact bounds

1-x^2 \leq e^{-x^2} \leq \left(1+x^2\right)^{-1}.

Then we can do the bound at the Laplace approximation limit:

\int_{[-1,1]} \left(1-x^2\right)^n dx \leq \int_{[-1,1]} e^{-nx^2}\, dx \leq \int_{\mathbb{R}} \left(1+x^2\right)^{-n} dx.

That is, after rescaling x \mapsto x/\sqrt{n} in the middle integral,

\sqrt{n}\int_{[-1,1]} \left(1-x^2\right)^n dx \leq \int_{\left[-\sqrt{n},\sqrt{n}\right]} e^{-x^2}\, dx \leq \sqrt{n}\int_{\mathbb{R}} \left(1+x^2\right)^{-n} dx.

By trigonometric substitution, we exactly compute those two bounds: 2\sqrt{n}\,\frac{(2n)!!}{(2n+1)!!} and 2\sqrt{n}\,\frac{\pi}{2}\,\frac{(2n-3)!!}{(2n-2)!!}. By taking the square root of the Wallis formula,

\frac{\pi}{2} = \prod_{n=1}^{\infty} \frac{4n^2}{4n^2-1},

we have \sqrt{\pi} = 2 \lim_{n\to\infty} \sqrt{n}\,\frac{(2n)!!}{(2n+1)!!}, the desired lower bound limit. Similarly we can get the desired upper bound limit. Conversely, if we first compute the integral with one of the other methods above, we would obtain a proof of the Wallis formula.
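One can watch both bounds close in on \sqrt{\pi} numerically. A minimal sketch, building the double-factorial ratios iteratively to avoid floating-point overflow:

```python
import numpy as np

def lower(n):
    # 2*sqrt(n) * (2n)!! / (2n+1)!!, with the ratio accumulated term by term
    ratio = 1.0
    for k in range(1, n + 1):
        ratio *= (2 * k) / (2 * k + 1)
    return 2 * np.sqrt(n) * ratio

def upper(n):
    # 2*sqrt(n) * (pi/2) * (2n-3)!! / (2n-2)!!
    ratio = 1.0
    for k in range(1, n):
        ratio *= (2 * k - 1) / (2 * k)
    return 2 * np.sqrt(n) * (np.pi / 2) * ratio

for n in (10, 100, 1000, 10000):
    print(n, lower(n), upper(n), np.sqrt(np.pi))   # both approach sqrt(pi) ~ 1.77245
```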


Relation to the gamma function

The integrand is an even function,

\int_{-\infty}^{\infty} e^{-x^2}\, dx = 2 \int_0^{\infty} e^{-x^2}\, dx.

Thus, after the change of variable x = \sqrt{t}, this turns into the Euler integral

2 \int_0^{\infty} e^{-x^2}\, dx = 2\int_0^{\infty} \frac{1}{2}\, e^{-t}\, t^{-\frac{1}{2}}\, dt = \Gamma\left(\tfrac{1}{2}\right) = \sqrt{\pi},

where \Gamma(z) = \int_0^{\infty} t^{z-1} e^{-t}\, dt is the gamma function. This shows why the factorial of a half-integer is a rational multiple of \sqrt{\pi}. More generally,

\int_0^{\infty} x^n e^{-ax^b}\, dx = \frac{\Gamma\left(\frac{n+1}{b}\right)}{b\,a^{\frac{n+1}{b}}},

which can be obtained by substituting t = ax^b in the integrand of the gamma function to get \Gamma(z) = a^z\, b \int_0^{\infty} x^{bz-1} e^{-ax^b}\, dx.
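A minimal numerical sketch of both identities, with n = 3, a = 2, b = 4 as arbitrary illustrative parameters:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Gamma(1/2) = sqrt(pi)
print(gamma(0.5), np.sqrt(np.pi))

# Integral of x^n * exp(-a*x^b) over [0, inf) equals Gamma((n+1)/b) / (b * a**((n+1)/b)).
n, a, b = 3, 2.0, 4.0   # illustrative values
val, _ = quad(lambda x: x**n * np.exp(-a * x**b), 0, np.inf)
print(val, gamma((n + 1) / b) / (b * a**((n + 1) / b)))   # both 0.125
```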


Generalizations


The integral of a Gaussian function

The integral of an arbitrary Gaussian function is

\int_{-\infty}^{\infty} e^{-a(x+b)^2}\,dx = \sqrt{\frac{\pi}{a}}.

An alternative form is

\int_{-\infty}^{\infty} e^{-ax^2+bx+c}\,dx = \sqrt{\frac{\pi}{a}}\, e^{\frac{b^2}{4a}+c}.

This form is useful for calculating expectations of some continuous probability distributions related to the normal distribution, such as the log-normal distribution, for example.
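A minimal numerical sketch of the alternative form; a = 1.5, b = 0.7, c = -0.2 are arbitrary illustrative coefficients:

```python
import numpy as np
from scipy.integrate import quad

a, b, c = 1.5, 0.7, -0.2   # illustrative values, a > 0
val, _ = quad(lambda x: np.exp(-a * x**2 + b * x + c), -np.inf, np.inf)
print(val, np.sqrt(np.pi / a) * np.exp(b**2 / (4 * a) + c))   # the two agree
```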


Complex form

\int_{-\infty}^{\infty} e^{\frac{1}{2} i a t^2 + iJt}\, dt = e^{-\frac{iJ^2}{2a}} \sqrt{\frac{2\pi i}{a}}

and more generally,

\int e^{\frac{1}{2} i \mathbf{x}^\mathsf{T} A \mathbf{x} + i \mathbf{J}^\mathsf{T} \mathbf{x}}\, d^N\mathbf{x} = \det(A)^{-\frac{1}{2}} \left(2\pi i\right)^{\frac{N}{2}} e^{-\frac{i}{2} \mathbf{J}^\mathsf{T} A^{-1} \mathbf{J}}

for any positive-definite symmetric matrix A.
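The oscillatory form above can be reached as a limit of the absolutely convergent complex Gaussian \int e^{-\frac{\alpha}{2}t^2 + iJt}\,dt = \sqrt{2\pi/\alpha}\, e^{-J^2/(2\alpha)} with \operatorname{Re}\alpha > 0 (take \alpha \to -ia). A minimal sketch checking that convergent case; \alpha = 1 - 2i and J = 0.7 are illustrative values:

```python
import numpy as np
from scipy.integrate import quad

# Absolutely convergent case Re(alpha) > 0:
#   integral of exp(-(alpha/2) t^2 + i J t) dt = sqrt(2*pi/alpha) * exp(-J^2/(2*alpha)).
alpha, J = 1.0 - 2.0j, 0.7          # illustrative values
f = lambda t: np.exp(-0.5 * alpha * t**2 + 1j * J * t)
re, _ = quad(lambda t: f(t).real, -np.inf, np.inf)
im, _ = quad(lambda t: f(t).imag, -np.inf, np.inf)
print(re + 1j * im, np.sqrt(2 * np.pi / alpha) * np.exp(-J**2 / (2 * alpha)))
```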


''n''-dimensional and functional generalization

Suppose ''A'' is a symmetric positive-definite (hence invertible) precision matrix, which is the matrix inverse of the covariance matrix. Then,

\begin{align}
\int_{\mathbb{R}^n} \exp\left(-\frac{1}{2} \sum_{i,j=1}^{n} A_{ij} x_i x_j \right) d^n\mathbf{x}
&= \int_{\mathbb{R}^n} \exp\left(-\frac{1}{2} \mathbf{x}^\mathsf{T} A \mathbf{x} \right) d^n\mathbf{x} \\
&= \sqrt{\frac{(2\pi)^n}{\det A}} = \sqrt{\frac{1}{\det\left(A/2\pi\right)}} = \sqrt{\det\left(2\pi A^{-1}\right)}
\end{align}

By completing the square, this generalizes to

\int_{\mathbb{R}^n} \exp\left(-\frac{1}{2} \mathbf{x}^\mathsf{T} A \mathbf{x} + \mathbf{b}^\mathsf{T}\mathbf{x} + c\right) d^n\mathbf{x} = \sqrt{\det\left(2\pi A^{-1}\right)} \exp\left(\tfrac{1}{2} \mathbf{b}^\mathsf{T} A^{-1} \mathbf{b} + c\right)

This fact is applied in the study of the multivariate normal distribution. Also,

\int x_{k_1}\cdots x_{k_{2N}} \, \exp\left(-\frac{1}{2} \sum_{i,j=1}^{n} A_{ij} x_i x_j \right) d^n\mathbf{x} = \sqrt{\frac{(2\pi)^n}{\det A}} \, \frac{1}{2^N N!} \sum_{\sigma \in S_{2N}} \left(A^{-1}\right)_{k_{\sigma(1)} k_{\sigma(2)}} \cdots \left(A^{-1}\right)_{k_{\sigma(2N-1)} k_{\sigma(2N)}}

where ''σ'' is a permutation of \{1,\dots,2N\} and the extra factor on the right-hand side is the sum over all combinatorial pairings of \{1,\dots,2N\} of ''N'' copies of A^{-1}. Alternatively,

\int f(\mathbf{x}) \exp\left(-\frac{1}{2} \sum_{i,j=1}^{n} A_{ij} x_i x_j \right) d^n\mathbf{x} = \sqrt{\frac{(2\pi)^n}{\det A}} \, \left. \exp\left(\frac{1}{2} \sum_{i,j=1}^{n} \left(A^{-1}\right)_{ij} \frac{\partial}{\partial x_i} \frac{\partial}{\partial x_j}\right) f(\mathbf{x}) \right|_{\mathbf{x}=0}

for some analytic function ''f'', provided it satisfies some appropriate bounds on its growth and some other technical criteria. (It works for some functions and fails for others. Polynomials are fine.) The exponential over a differential operator is understood as a power series.

While functional integrals have no rigorous definition (or even a nonrigorous computational one in most cases), we can ''define'' a Gaussian functional integral in analogy to the finite-dimensional case. There is still the problem, though, that (2\pi)^\infty is infinite and also, the functional determinant would also be infinite in general. This can be taken care of if we only consider ratios:

\frac{\displaystyle\int f(x_1)\cdots f(x_{2N}) \exp\left[-\iint \tfrac{1}{2} A(x_{2N+1},x_{2N+2})\, f(x_{2N+1})\, f(x_{2N+2}) \, d^dx_{2N+1} \, d^dx_{2N+2}\right] \mathcal{D}f}{\displaystyle\int \exp\left[-\iint \tfrac{1}{2} A(x_{2N+1},x_{2N+2})\, f(x_{2N+1})\, f(x_{2N+2}) \, d^dx_{2N+1} \, d^dx_{2N+2}\right] \mathcal{D}f} = \frac{1}{2^N N!} \sum_{\sigma \in S_{2N}} A^{-1}(x_{\sigma(1)},x_{\sigma(2)}) \cdots A^{-1}(x_{\sigma(2N-1)},x_{\sigma(2N)}).

In the DeWitt notation, the equation looks identical to the finite-dimensional case.
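A minimal numerical sketch of the basic formula in two dimensions, using an arbitrary illustrative positive-definite matrix and truncating the integration box where the integrand is negligible:

```python
import numpy as np
from scipy.integrate import dblquad

# An illustrative 2x2 symmetric positive-definite matrix.
A = np.array([[2.0, 0.6],
              [0.6, 1.0]])

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v)

L = 10.0   # the integrand is negligible outside [-10, 10]^2
val, _ = dblquad(integrand, -L, L, -L, L)
print(val, np.sqrt((2 * np.pi)**2 / np.linalg.det(A)))   # both ~4.906
```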


''n''-dimensional with linear term

If ''A'' is again a symmetric positive-definite matrix, then (assuming all are column vectors)

\begin{align}
\int \exp\left(-\frac{1}{2}\sum_{i,j=1}^{n} A_{ij} x_i x_j + \sum_{i=1}^{n} b_i x_i\right) d^n\mathbf{x}
&= \int \exp\left(-\tfrac{1}{2} \mathbf{x}^\mathsf{T} A \mathbf{x} + \mathbf{b}^\mathsf{T}\mathbf{x}\right) d^n\mathbf{x} \\
&= \sqrt{\frac{(2\pi)^n}{\det A}} \exp\left(\tfrac{1}{2} \mathbf{b}^\mathsf{T} A^{-1} \mathbf{b}\right).
\end{align}
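Extending the previous sketch with an illustrative linear term b confirms this closed form as well:

```python
import numpy as np
from scipy.integrate import dblquad

A = np.array([[2.0, 0.6],
              [0.6, 1.0]])       # illustrative positive-definite matrix
b = np.array([0.3, -0.5])        # illustrative linear term

def integrand(y, x):
    v = np.array([x, y])
    return np.exp(-0.5 * v @ A @ v + b @ v)

L = 10.0
val, _ = dblquad(integrand, -L, L, -L, L)
closed = np.sqrt((2 * np.pi)**2 / np.linalg.det(A)) * np.exp(0.5 * b @ np.linalg.solve(A, b))
print(val, closed)   # the two agree
```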


Integrals of similar form

\int_0^{\infty} x^{2n} e^{-\frac{x^2}{a^2}}\,dx = \sqrt{\pi}\,\frac{a^{2n+1}\,(2n-1)!!}{2^{n+1}}

\int_0^{\infty} x^{2n+1} e^{-\frac{x^2}{a^2}}\,dx = \frac{n!}{2}\, a^{2n+2}

\int_0^{\infty} x^{2n} e^{-ax^2}\,dx = \frac{(2n-1)!!}{a^n\, 2^{n+1}} \sqrt{\frac{\pi}{a}}

\int_0^{\infty} x^{2n+1} e^{-ax^2}\,dx = \frac{n!}{2\,a^{n+1}}

\int_0^{\infty} x^{n} e^{-ax^2}\,dx = \frac{\Gamma\left(\frac{n+1}{2}\right)}{2\,a^{\frac{n+1}{2}}}

where n is a positive integer.

An easy way to derive these is by differentiating under the integral sign:

\begin{align}
\int_{-\infty}^{\infty} x^{2n} e^{-\alpha x^2}\,dx
&= \left(-1\right)^n \int_{-\infty}^{\infty} \frac{\partial^n}{\partial\alpha^n} e^{-\alpha x^2}\,dx \\
&= \left(-1\right)^n \frac{\partial^n}{\partial\alpha^n} \int_{-\infty}^{\infty} e^{-\alpha x^2}\,dx \\
&= \sqrt{\pi}\, \left(-1\right)^n \frac{\partial^n}{\partial\alpha^n} \alpha^{-\frac{1}{2}} \\
&= \sqrt{\frac{\pi}{\alpha}}\, \frac{(2n-1)!!}{\left(2\alpha\right)^n}
\end{align}

One could also integrate by parts and find a recurrence relation to solve this.
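A minimal numerical check of the even- and odd-power formulas for an illustrative value a = 1.3:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import factorial, factorial2

a = 1.3   # illustrative value
for n in range(1, 4):
    even, _ = quad(lambda x: x**(2 * n) * np.exp(-a * x**2), 0, np.inf)
    odd, _ = quad(lambda x: x**(2 * n + 1) * np.exp(-a * x**2), 0, np.inf)
    print(n,
          even, factorial2(2 * n - 1) / (a**n * 2**(n + 1)) * np.sqrt(np.pi / a),
          odd, factorial(n) / (2 * a**(n + 1)))
```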


Higher-order polynomials

Applying a linear change of basis shows that the integral of the exponential of a homogeneous polynomial in ''n'' variables may depend only on SL(''n'')-invariants of the polynomial. One such invariant is the discriminant, zeros of which mark the singularities of the integral. However, the integral may also depend on other invariants.

Exponentials of other even polynomials can numerically be solved using series. These may be interpreted as formal calculations when there is no convergence. For example, the solution to the integral of the exponential of a quartic polynomial is

\int_{-\infty}^{\infty} e^{ax^4+bx^3+cx^2+dx+f}\,dx = \frac{1}{2} e^{f} \sum_{\substack{n,m,p \ge 0 \\ n+p \equiv 0 \pmod 2}} \frac{b^n}{n!} \frac{c^m}{m!} \frac{d^p}{p!} \frac{\Gamma\left(\frac{3n+2m+p+1}{4}\right)}{(-a)^{\frac{3n+2m+p+1}{4}}}.

The n + p \equiv 0 \pmod 2 requirement is because the integral from −∞ to 0 contributes a factor of (−1)^{n+p}/2 to each term, while the integral from 0 to +∞ contributes a factor of 1/2 to each term. These integrals turn up in subjects such as quantum field theory.
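In the simplest quartic case, with b = c = d = f = 0 and a = −1, the integral reduces via the gamma-function relation above to \Gamma\left(\tfrac{1}{4}\right)/2, which is easy to confirm numerically:

```python
import numpy as np
from scipy.integrate import quad
from scipy.special import gamma

# Integral of exp(-x^4) over the real line equals Gamma(1/4) / 2.
val, _ = quad(lambda x: np.exp(-x**4), -np.inf, np.inf)
print(val, gamma(0.25) / 2)   # both ~1.8128
```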


See also

* List of integrals of Gaussian functions
* Common integrals in quantum field theory
* Normal distribution
* List of integrals of exponential functions
* Error function
* Berezin integral

