In mathematics, Monte Carlo integration is a technique for numerical integration using random numbers. It is a particular Monte Carlo method that numerically computes a definite integral. While other algorithms usually evaluate the integrand at a regular grid, Monte Carlo randomly chooses points at which the integrand is evaluated. This method is particularly useful for higher-dimensional integrals.
There are different methods to perform a Monte Carlo integration, such as uniform sampling, stratified sampling, importance sampling, sequential Monte Carlo (also known as a particle filter), and mean-field particle methods.
Overview
In numerical integration, methods such as the trapezoidal rule use a deterministic approach. Monte Carlo integration, on the other hand, employs a non-deterministic approach: each realization provides a different outcome. In Monte Carlo, the final outcome is an approximation of the correct value with respective error bars, and the correct value is likely to be within those error bars.
The problem Monte Carlo integration addresses is the computation of a multidimensional definite integral
$$ I = \int_{\Omega} f(\bar{\mathbf{x}})\, d\bar{\mathbf{x}}, $$
where Ω, a subset of $\mathbb{R}^m$, has volume
$$ V = \int_{\Omega} d\bar{\mathbf{x}}. $$
The naive Monte Carlo approach is to sample points uniformly on Ω: given ''N'' uniform samples $\bar{\mathbf{x}}_1, \ldots, \bar{\mathbf{x}}_N \in \Omega$,
''I'' can be approximated by
$$ I \approx Q_N \equiv V \frac{1}{N} \sum_{i=1}^{N} f(\bar{\mathbf{x}}_i) = V \langle f \rangle. $$
This is because the law of large numbers ensures that
$$ \lim_{N \to \infty} Q_N = I. $$
Given the estimation of ''I'' from ''QN'', the error bars of ''QN'' can be estimated by the sample variance, using the unbiased estimate of the variance:
$$ \mathrm{Var}(f) \equiv \sigma_N^2 = \frac{1}{N-1} \sum_{i=1}^{N} \big( f(\bar{\mathbf{x}}_i) - \langle f \rangle \big)^2, $$
which leads to
$$ \mathrm{Var}(Q_N) = \frac{V^2}{N^2} \sum_{i=1}^{N} \mathrm{Var}(f) = V^2 \frac{\mathrm{Var}(f)}{N} = V^2 \frac{\sigma_N^2}{N}. $$
Since the sequence
$$ \left\{ \sigma_1^2, \sigma_2^2, \sigma_3^2, \ldots \right\} $$
is bounded due to being identically equal to Var(''f''), as long as this is assumed finite, this variance decreases asymptotically to zero as 1/''N''. The estimation of the error of ''QN'' is thus
$$ \delta Q_N \approx \sqrt{\mathrm{Var}(Q_N)} = V \frac{\sigma_N}{\sqrt{N}}, $$
which decreases as $1/\sqrt{N}$. This is the standard error of the mean multiplied by ''V''.
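For illustration, the naive estimator and its error estimate can be written in a few lines of Python; this is a minimal sketch, and the test integrand and sample size are arbitrary choices, not taken from a specific source.

import numpy as np

rng = np.random.default_rng(0)

def naive_mc(f, a, b, n):
    # Draw n points uniformly on [a, b]; the volume is V = b - a.
    x = rng.uniform(a, b, n)
    fx = f(x)
    v = b - a
    q_n = v * fx.mean()                      # Q_N = V * <f>
    error = v * fx.std(ddof=1) / np.sqrt(n)  # V * sigma_N / sqrt(N)
    return q_n, error

# Integrate f(x) = x**2 over [0, 1]; the exact value is 1/3.
estimate, error = naive_mc(lambda x: x**2, 0.0, 1.0, 100_000)
print(estimate, "+/-", error)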
This result does not depend on the number of dimensions of the integral, which is the promised advantage of Monte Carlo integration over most deterministic methods, whose cost grows exponentially with the dimension.
It is important to notice that, unlike in deterministic methods, the estimate of the error is not a strict error bound; random sampling may not uncover all the important features of the integrand, which can result in an underestimate of the error.
While the naive Monte Carlo works for simple examples, an improvement over deterministic algorithms can only be accomplished with algorithms that use problem-specific sampling distributions.
With an appropriate sample distribution it is possible to exploit the fact that almost all higher-dimensional integrands are very localized and only a small subspace contributes notably to the integral.
A large part of the Monte Carlo literature is dedicated to developing strategies to improve the error estimates. In particular, stratified sampling (dividing the region into sub-domains) and importance sampling (sampling from non-uniform distributions) are two examples of such techniques; a sketch of the latter follows below.
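As a rough Python sketch of importance sampling (the integrand, proposal density, and sample size below are illustrative assumptions, not taken from a particular source): samples are drawn from a non-uniform distribution that concentrates where the integrand is large, and each evaluation is re-weighted by the inverse of the proposal density.

import numpy as np

rng = np.random.default_rng(1)

# Integrate f(x) = exp(-x**2) over [0, inf) by drawing samples from an
# exponential proposal density g(x) = exp(-x) and weighting each sample
# by f(x)/g(x); the mean of the weights estimates the integral.
n = 100_000
x = rng.exponential(scale=1.0, size=n)   # samples from g
weights = np.exp(-x**2) / np.exp(-x)     # f(x) / g(x)
estimate = weights.mean()
error = weights.std(ddof=1) / np.sqrt(n)
print(estimate, "+/-", error)            # exact value: sqrt(pi)/2 ~ 0.886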
Example
A paradigmatic example of a Monte Carlo integration is the estimation of π. Consider the function
$$ H(x, y) = \begin{cases} 1 & \text{if } x^2 + y^2 \le 1 \\ 0 & \text{else} \end{cases} $$
and the set Ω = [−1, 1] × [−1, 1] with ''V'' = 4. Notice that
$$ I_\pi = \int_{\Omega} H(x, y)\, dx\, dy = \pi. $$
Thus, a crude way of calculating the value of π with Monte Carlo integration is to pick ''N'' random numbers on Ω and compute
$$ Q_N = 4 \frac{1}{N} \sum_{i=1}^{N} H(x_i, y_i). $$
In the figure on the right, the relative error is measured as a function of ''N'', confirming the $1/\sqrt{N}$ scaling of the error.
C/C++ example
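A minimal sketch in C, estimating π by counting uniform random points of the square [−1, 1] × [−1, 1] that fall inside the unit circle; the standard library generator rand() and the number of throws are illustrative choices.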
#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    int throws = 99999, inside_circle = 0;
    srand(time(NULL));
    for (int i = 0; i < throws; ++i) {
        /* Random point in the square [-1, 1] x [-1, 1] */
        double x = rand() / (double) RAND_MAX * 2.0 - 1.0;
        double y = rand() / (double) RAND_MAX * 2.0 - 1.0;
        if (x * x + y * y <= 1.0)
            ++inside_circle;
    }
    /* V = 4, so pi is approximated by 4 * (fraction of points inside) */
    printf("%f\n", 4.0 * inside_circle / throws);
    return 0;
}
Python example
An example implementation in Python, estimating π by the same method:
import numpy as np
rng = np.random.default_rng(0)
throws = 2000
radius = 1
# Choose random X and Y data centered around 0,0
x = rng.uniform(-radius, radius, throws)
y = rng.uniform(-radius, radius, throws)
# Count the times (x, y) is inside the circle,
# which happens when sqrt(x^2 + y^2) <= radius.
inside_circle = np.count_nonzero(np.hypot(x, y) <= radius)
# Calculate area and print; should be closer to Pi with increasing number of throws
area = (2 * radius)**2 * inside_circle / throws
print(area)
Wolfram Mathematica example
The code below describes a process of integrating the function
$$ f(x) = \frac{1}{1 + \sinh(2x)\,\log(x)^2} $$
from 0.8 < ''x'' < 3 using the Monte Carlo method in Mathematica: