The cross-entropy (CE) method is a Monte Carlo method for importance sampling and optimization. It is applicable to both combinatorial and continuous problems, with either a static or noisy objective.
The method approximates the optimal importance sampling estimator by repeating two phases:[Rubinstein, R.Y. and Kroese, D.P. (2004), The Cross-Entropy Method: A Unified Approach to Combinatorial Optimization, Monte-Carlo Simulation, and Machine Learning, Springer-Verlag, New York.]
1. Draw a sample from a probability distribution.
2. Minimize the ''cross-entropy'' between this distribution and a target distribution to produce a better sample in the next iteration.
Reuven Rubinstein developed the method in the context of ''rare-event simulation'', where tiny probabilities must be estimated, for example in network reliability analysis, queueing models, or performance analysis of telecommunication systems. The method has also been applied to the traveling salesman, quadratic assignment, DNA sequence alignment, max-cut, and buffer allocation problems.
Estimation via importance sampling
Consider the general problem of estimating the quantity

\ell = \mathbb{E}_{\mathbf{u}}[H(\mathbf{X})] = \int H(\mathbf{x})\, f(\mathbf{x}; \mathbf{u})\, \mathrm{d}\mathbf{x},

where H is some ''performance function'' and f(\mathbf{x}; \mathbf{u}) is a member of some parametric family of distributions. Using importance sampling this quantity can be estimated as

\hat{\ell} = \frac{1}{N} \sum_{i=1}^N H(\mathbf{X}_i) \frac{f(\mathbf{X}_i; \mathbf{u})}{g(\mathbf{X}_i)},

where \mathbf{X}_1, \dots, \mathbf{X}_N is a random sample from g. For positive H, the theoretically ''optimal'' importance sampling density (PDF) is given by

g^*(\mathbf{x}) = \frac{H(\mathbf{x})\, f(\mathbf{x}; \mathbf{u})}{\ell}.

This, however, depends on the unknown \ell. The CE method aims to approximate the optimal PDF by adaptively selecting members of the parametric family that are closest (in the Kullback–Leibler sense) to the optimal PDF g^*.
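As a concrete illustration of the estimator above, consider estimating the rare-event probability \ell = P(X > 3) for X \sim N(0, 1), so that H(x) = 1_{\{x > 3\}}, by sampling from a proposal g = N(3, 1) shifted toward the rare region. This is a minimal sketch using only the Python standard library; the function names are illustrative, not from any particular package:

```python
import math
import random

def norm_pdf(x, mu=0.0, sigma=1.0):
    """Density of the normal distribution N(mu, sigma^2) at x."""
    return math.exp(-((x - mu) ** 2) / (2 * sigma ** 2)) / (sigma * math.sqrt(2 * math.pi))

def is_estimate(n=100_000, seed=1):
    """Importance-sampling estimate of P(X > 3) for X ~ N(0, 1)."""
    random.seed(seed)
    total = 0.0
    for _ in range(n):
        x = random.gauss(3.0, 1.0)                 # draw X_i from the proposal g = N(3, 1)
        if x > 3.0:                                # H(X_i) = 1{X_i > 3}
            total += norm_pdf(x) / norm_pdf(x, 3.0)  # likelihood ratio f(X_i)/g(X_i)
    return total / n

print(is_estimate())  # close to the true value P(X > 3) ≈ 0.00135
```

Because the proposal concentrates mass where the rare event occurs, this estimate is far more accurate than naive Monte Carlo with the same sample size, which would see only about 135 hits in 100,000 draws.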
Generic CE algorithm
1. Choose an initial parameter vector \mathbf{v}^{(0)}; set t = 1.
2. Generate a random sample \mathbf{X}_1, \dots, \mathbf{X}_N from f(\cdot; \mathbf{v}^{(t-1)}).
3. Solve for \mathbf{v}^{(t)}, where

\mathbf{v}^{(t)} = \mathop{\mathrm{argmax}}_{\mathbf{v}} \frac{1}{N} \sum_{i=1}^N H(\mathbf{X}_i) \frac{f(\mathbf{X}_i; \mathbf{u})}{f(\mathbf{X}_i; \mathbf{v}^{(t-1)})} \log f(\mathbf{X}_i; \mathbf{v}).

4. If convergence is reached then stop; otherwise, increase t by 1 and reiterate from step 2.
In several cases, the solution to step 3 can be found ''analytically''. Situations in which this occurs are:
* When f belongs to the natural exponential family
* When f is discrete with finite support
* When H(\mathbf{x}) = \mathrm{I}_{\{\mathbf{x} \in A\}} and f(\mathbf{X}_i; \mathbf{u}) = f(\mathbf{X}_i; \mathbf{v}^{(t-1)}), then \mathbf{v}^{(t)} corresponds to the maximum likelihood estimator based on those \mathbf{X}_i \in A.
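For a one-dimensional Gaussian family (a member of the natural exponential family), the analytic solution of step 3 is a likelihood-ratio-weighted maximum likelihood update. The following sketch is illustrative (the helper name is hypothetical); each weight plays the role of H(X_i) f(X_i; u) / f(X_i; v^(t-1)) in the step-3 objective:

```python
def ce_gaussian_update(samples, weights):
    """Weighted maximum likelihood estimates of mean and variance for a
    1-D Gaussian family; weights[i] stands in for
    H(X_i) * f(X_i; u) / f(X_i; v_prev) from step 3 of the CE algorithm."""
    total = sum(weights)
    mu = sum(w * x for w, x in zip(weights, samples)) / total
    var = sum(w * (x - mu) ** 2 for w, x in zip(weights, samples)) / total
    return mu, var

# With unit weights this reduces to the ordinary sample mean and variance:
print(ce_gaussian_update([1.0, 2.0, 3.0], [1.0, 1.0, 1.0]))  # (2.0, 0.666...)
```

In the indicator case H = I_{X ∈ A} with unit likelihood ratios, the weights are 0/1 and the update is exactly the maximum likelihood estimator over the samples falling in A, as stated in the last bullet above.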
Continuous optimization—example
The same CE algorithm can be used for optimization, rather than estimation.
Suppose the problem is to maximize some function
, for example,
.
To apply CE, one considers first the ''associated stochastic problem'' of estimating
for a given ''level''
, and parametric family
, for example the 1-dimensional
Gaussian distribution
In statistics, a normal distribution or Gaussian distribution is a type of continuous probability distribution for a real-valued random variable. The general form of its probability density function is
:
f(x) = \frac e^
The parameter \mu i ...
,
parameterized by its mean
and variance
(so
here).
Hence, for a given
, the goal is to find
so that
is minimized. This is done by solving the sample version (stochastic counterpart) of the KL divergence minimization problem, as in step 3 above.
It turns out that parameters that minimize the stochastic counterpart for this choice of target distribution and
parametric family are the sample mean and sample variance corresponding to the ''elite samples'', which are those samples that have objective function value
.
The worst of the elite samples is then used as the level parameter for the next iteration.
This yields the following randomized algorithm that happens to coincide with the so-called Estimation of Multivariate Normal Algorithm (EMNA), an
estimation of distribution algorithm.
Pseudocode
''// Initialize parameters''
μ := −6
σ2 := 100
t := 0
maxits := 100
N := 100
Ne := 10
ε := 0.001  ''// convergence threshold (any small positive value)''
''// While maxits not exceeded and not converged''
while t < maxits and σ2 > ε do
    ''// Obtain N samples from current sampling distribution''
    X := SampleGaussian(μ, σ2, N)
    ''// Evaluate objective function at sampled points''
    S := exp(−(X − 2) ^ 2) + 0.8 exp(−(X + 2) ^ 2)
    ''// Sort X by objective function values in descending order''
    X := sort(X, S)
    ''// Update parameters of sampling distribution using the Ne elite samples''
    μ := mean(X(1:Ne))
    σ2 := var(X(1:Ne))
    t := t + 1
''// Return mean of final sampling distribution as solution''
return μ
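The pseudocode above can be sketched as a runnable Python program using only the standard library; the function name and default constants mirror the pseudocode and are otherwise illustrative:

```python
import math
import random

def ce_maximize(mu=-6.0, sigma2=100.0, maxits=100, n=100, n_elite=10, eps=1e-3):
    """CE maximization of S(x) = exp(-(x-2)^2) + 0.8*exp(-(x+2)^2)."""
    S = lambda x: math.exp(-(x - 2) ** 2) + 0.8 * math.exp(-(x + 2) ** 2)
    t = 0
    while t < maxits and sigma2 > eps:
        # Obtain n samples from the current Gaussian sampling distribution.
        xs = [random.gauss(mu, math.sqrt(sigma2)) for _ in range(n)]
        # Keep the n_elite samples with the highest objective values.
        elite = sorted(xs, key=S, reverse=True)[:n_elite]
        # Refit the mean and variance of the sampling distribution to the elite set.
        mu = sum(elite) / n_elite
        sigma2 = sum((x - mu) ** 2 for x in elite) / n_elite
        t += 1
    return mu

random.seed(0)
print(ce_maximize())  # typically lands near one of the modes, x = 2 or x = -2
```

Because the objective has two modes, the algorithm concentrates on one of them as the variance collapses; in most runs that is the global maximum at x = 2, whose peak value (1.0) dominates the local one at x = −2 (0.8).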
Related methods
* Simulated annealing
* Genetic algorithms
* Harmony search
* Estimation of distribution algorithm
* Tabu search
* Natural Evolution Strategy
See also
* Cross entropy
* Kullback–Leibler divergence
* Randomized algorithm
* Importance sampling
Journal papers
* De Boer, P.-T., Kroese, D.P., Mannor, S. and Rubinstein, R.Y. (2005). A Tutorial on the Cross-Entropy Method. ''Annals of Operations Research'', 134 (1), 19–67.
* Rubinstein, R.Y. (1997). Optimization of Computer Simulation Models with Rare Events. ''European Journal of Operational Research'', 99, 89–112.
Software implementations
* CEoptim R package
* Novacta.Analytics .NET library