Stochastic Optimization
Stochastic optimization (SO) methods are optimization methods that generate and use random variables. For stochastic problems, the random variables appear in the formulation of the optimization problem itself, which involves random objective functions or random constraints. Stochastic optimization methods also include methods with random iterates; some methods use random iterates to solve stochastic problems, combining both meanings of stochastic optimization. Stochastic optimization methods generalize deterministic methods for deterministic problems.
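In symbols, a prototypical stochastic program minimizes an expected objective under expected-value constraints; the notation below is a standard illustration rather than a formulation taken from any particular source:

```latex
\min_{x \in X} \; \mathbb{E}_{\xi}\left[ f(x,\xi) \right]
\quad \text{subject to} \quad
\mathbb{E}_{\xi}\left[ g_i(x,\xi) \right] \le 0, \qquad i = 1,\dots,m,
```

where \xi is a random variable modelling the uncertain data and the expectations are taken over its distribution.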


Methods for stochastic functions

Partly random input data arise in such areas as real-time estimation and control, simulation-based optimization where Monte Carlo simulations are run as estimates of an actual system, and problems where there is experimental (random) error in the measurements of the criterion. In such cases, knowledge that the function values are contaminated by random "noise" leads naturally to algorithms that use statistical inference tools to estimate the "true" values of the function and/or make statistically optimal decisions about the next steps. Methods of this class include:

* stochastic approximation (SA), by Robbins and Monro (1951)
* stochastic gradient descent
* finite-difference SA by Kiefer and Wolfowitz (1952)
* simultaneous perturbation SA by Spall (1992)
* scenario optimization
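As an illustration of the simultaneous-perturbation idea listed above, here is a minimal Python sketch; the noisy quadratic objective `noisy_loss` and the gain constants `a`, `c`, `A` are illustrative assumptions, not values taken from the article or from Spall's paper.

```python
# Minimal sketch of simultaneous perturbation stochastic approximation (SPSA).
# Each iteration estimates the gradient from only two noisy function
# evaluations, using a random +/-1 (Rademacher) perturbation of all coordinates.
import numpy as np

rng = np.random.default_rng(0)

def noisy_loss(theta):
    # Illustrative objective: true minimum at theta = [1, -2], plus measurement noise.
    return np.sum((theta - np.array([1.0, -2.0])) ** 2) + rng.normal(scale=0.1)

def spsa(theta, iters=2000, a=0.1, c=0.1, A=100, alpha=0.602, gamma=0.101):
    theta = np.asarray(theta, dtype=float)
    for k in range(iters):
        ak = a / (k + 1 + A) ** alpha      # decaying step-size gain
        ck = c / (k + 1) ** gamma          # decaying perturbation size
        delta = rng.choice([-1.0, 1.0], size=theta.shape)  # Rademacher perturbation
        # Two noisy measurements give a gradient estimate in every coordinate at once.
        g_hat = (noisy_loss(theta + ck * delta) - noisy_loss(theta - ck * delta)) / (2 * ck * delta)
        theta = theta - ak * g_hat
    return theta

print(spsa(np.zeros(2)))   # approaches roughly [1, -2] despite the measurement noise
```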


Randomized search methods

On the other hand, even when the data set consists of precise measurements, some methods introduce randomness into the search process to accelerate progress. Such randomness can also make the method less sensitive to modeling errors. Further, the injected randomness can be used to obtain interval estimates of the minimum of a function via extreme value statistics, and it may enable the method to escape a local optimum and eventually to approach a global optimum. Indeed, this randomization principle is known to be a simple and effective way to obtain algorithms with ''almost certain'' good performance uniformly across many data sets, for many sorts of problems. Stochastic optimization methods of this kind include:

* simulated annealing by S. Kirkpatrick, C. D. Gelatt and M. P. Vecchi (1983) (a minimal sketch follows this section)
* quantum annealing
* Probability Collectives by D. H. Wolpert, S. R. Bieniawski and D. G. Rajnarayan (2011)
* reactive search optimization (RSO) by Roberto Battiti and G. Tecchiolli (1994), recently reviewed in the reference book
* cross-entropy method by Rubinstein and Kroese (2004)
* random search by Anatoly Zhigljavsky (1991)
* informational search
* stochastic tunneling
* parallel tempering, a.k.a. replica exchange
* stochastic hill climbing
* swarm algorithms
* evolutionary algorithms
** genetic algorithms by Holland (1975)
** evolution strategies
* cascade object optimization & modification algorithm (2016)

In contrast, some authors have argued that randomization can only improve a deterministic algorithm if the deterministic algorithm was poorly designed in the first place. Fred W. Glover argues that reliance on random elements may prevent the development of more intelligent and better deterministic components. The way in which results of stochastic optimization algorithms are usually presented (e.g., presenting only the average, or even the best, out of N runs without any mention of the spread) may also result in a positive bias towards randomness.
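To illustrate the randomized-search idea, here is a minimal simulated annealing sketch in Python. The test function, proposal scale and geometric cooling schedule are illustrative assumptions, not part of the article or of the original Kirkpatrick et al. formulation.

```python
# Minimal sketch of simulated annealing on a one-dimensional multimodal function.
# Uphill moves are sometimes accepted, with a probability that shrinks as the
# "temperature" cools, which lets the search escape local minima.
import math
import random

random.seed(0)

def f(x):
    # Illustrative multimodal objective: global minimum near x ~ -0.5, several local minima.
    return x * x + 10 * math.sin(3 * x)

def simulated_annealing(x0, iters=10000, t0=5.0, cooling=0.999, step=0.5):
    x, fx = x0, f(x0)
    best_x, best_f = x, fx
    t = t0
    for _ in range(iters):
        candidate = x + random.gauss(0.0, step)   # random proposal move
        fc = f(candidate)
        # Always accept improvements; accept uphill moves with probability exp(-(fc - fx) / t).
        if fc < fx or random.random() < math.exp(-(fc - fx) / t):
            x, fx = candidate, fc
            if fx < best_f:
                best_x, best_f = x, fx
        t *= cooling   # geometric cooling
    return best_x, best_f

# Often reaches the neighbourhood of the global minimum, though this is not guaranteed.
print(simulated_annealing(x0=4.0))
```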


See also

* Global optimization
* Machine learning
* Scenario optimization
* Gaussian process
* State space model
* Model predictive control
* Nonlinear programming
* Entropic value at risk



Further reading

* Michalewicz, Z. and Fogel, D. B. (2000), ''How to Solve It: Modern Heuristics'', Springer-Verlag, New York.
* Zhang, Y. and Li, S., "PSA: A novel optimization algorithm based on survival rules of ''Porcellio scaber''".

