Differential evolution (DE) is an evolutionary algorithm that optimizes a problem by iteratively trying to improve a candidate solution with regard to a given measure of quality. Such methods are commonly known as metaheuristics, as they make few or no assumptions about the problem being optimized and can search very large spaces of candidate solutions. However, metaheuristics such as DE do not guarantee that an optimal solution is ever found. DE is used for multidimensional real-valued functions but does not use the gradient of the problem being optimized, which means DE does not require the optimization problem to be differentiable, as is required by classic optimization methods such as gradient descent and quasi-Newton methods. DE can therefore also be used on optimization problems that are not even continuous, are noisy, change over time, etc. DE optimizes a problem by maintaining a population of candidate solutions and creating new candidate solutions by combining existing ones according to its simple formulae, keeping whichever candidate solution has the best score or fitness on the optimization problem at hand. In this way, the optimization problem is treated as a black box that merely provides a measure of quality for a given candidate solution; the gradient is therefore not needed.


History

Storn and Price introduced differential evolution in 1995. Books have been published on theoretical and practical aspects of using DE in parallel computing, multiobjective optimization, and constrained optimization; these books also contain surveys of application areas. Surveys of the multi-faceted research on DE can be found in journal articles.


Algorithm

A basic variant of the DE algorithm works by having a population of candidate solutions (called agents). These agents are moved around in the search-space by using simple mathematical formulae to combine the positions of existing agents from the population. If the new position of an agent is an improvement, it is accepted and forms part of the population; otherwise the new position is simply discarded. The process is repeated, and by doing so it is hoped, but not guaranteed, that a satisfactory solution will eventually be discovered.

Formally, let f : \mathbb{R}^n \to \mathbb{R} be the fitness function, which must be minimized (note that maximization can be performed by considering the function h := -f instead). The function takes a candidate solution as argument in the form of a vector of real numbers and produces a real number as output, which indicates the fitness of the given candidate solution. The gradient of f is not known. The goal is to find a solution \mathbf{m} for which f(\mathbf{m}) \leq f(\mathbf{p}) for all \mathbf{p} in the search-space, which means that \mathbf{m} is the global minimum.

Let \mathbf{x} \in \mathbb{R}^n designate a candidate solution (agent) in the population. The basic DE algorithm can then be described as follows:

* Choose the parameters \text{NP} \geq 4, \text{CR} \in [0,1], and F \in [0,2].
** \text{NP} is the population size, i.e. the number of candidate agents or "parents".
** The parameter \text{CR} \in [0,1] is called the ''crossover probability''.
** The parameter F \in [0,2] is called the ''differential weight''.
** Typical settings are \text{NP} = 10n, \text{CR} = 0.9 and F = 0.8.
** Optimization performance may be greatly impacted by these choices; see below.
* Initialize all agents \mathbf{x} with random positions in the search-space.
* Until a termination criterion is met (e.g. number of iterations performed, or adequate fitness reached), repeat the following:
** For each agent \mathbf{x} in the population do:
*** Pick three agents \mathbf{a}, \mathbf{b}, and \mathbf{c} from the population at random; they must be distinct from each other as well as from agent \mathbf{x}. (\mathbf{a} is called the "base" vector.)
*** Pick a random index R \in \{1, \ldots, n\}, where n is the dimensionality of the problem being optimized.
*** Compute the agent's potentially new position \mathbf{y} = [y_1, \ldots, y_n] as follows:
**** For each i \in \{1, \ldots, n\}, pick a uniformly distributed random number r_i \sim U(0,1).
**** If r_i < \text{CR} or i = R, set y_i = a_i + F \times (b_i - c_i); otherwise set y_i = x_i. (Index position R is replaced for certain.)
*** If f(\mathbf{y}) \leq f(\mathbf{x}), replace the agent \mathbf{x} in the population with the improved or equal candidate solution \mathbf{y}.
* Pick the agent from the population that has the best fitness and return it as the best found candidate solution.


Parameter selection

The choice of DE parameters \text, \text and F can have a large impact on optimization performance. Selecting the DE parameters that yield good performance has therefore been the subject of much research. Rules of thumb for parameter selection were devised by Storn et al. and Liu and Lampinen. Mathematical convergence analysis regarding parameter selection was done by Zaharie.


Constraint handling

Differential evolution can be utilized for constrained optimization as well. A common method involves modifying the target function to include a penalty for any violation of constraints, expressed as: \tilde{f}(x) = f(x) + \rho \cdot \mathrm{viol}(x). Here, \mathrm{viol}(x) represents either a constraint violation (an L1 penalty) or the square of a constraint violation (an L2 penalty). This method, however, has certain drawbacks. One significant challenge is the appropriate selection of the penalty coefficient \rho. If \rho is set too low, it may not effectively enforce constraints. Conversely, if it is too high, it can greatly slow down or even halt the convergence process. Despite these challenges, this approach remains widely used due to its simplicity and because it does not require altering the differential evolution algorithm itself. There are alternative strategies, such as projecting onto a feasible set or reducing dimensionality, which can be used for box-constrained or linearly constrained cases. However, in the context of general nonlinear constraints, the most reliable methods typically involve penalty functions.
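The penalty approach above can be sketched as a wrapper around the objective, which is why it needs no change to the DE algorithm itself. This is an illustrative sketch: the function names, the g(x) <= 0 feasibility convention, and the value of rho are assumptions, not prescribed by the source.

```python
def penalized(f, constraints, rho=100.0):
    """Wrap objective f with an L1 penalty term, f~(x) = f(x) + rho * viol(x).

    `constraints` is a list of functions g_j with the convention that
    g_j(x) <= 0 means x is feasible; viol(x) sums the positive parts.
    Squaring each term instead would give the L2-penalty variant.
    """
    def f_tilde(x):
        viol = sum(max(0.0, g(x)) for g in constraints)
        return f(x) + rho * viol
    return f_tilde
```

For instance, minimizing x^2 subject to x >= 1 (i.e. g(x) = 1 - x <= 0) with this wrapper shifts the penalized minimum to the constraint boundary x = 1, provided rho is large enough.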


Variants

Variants of the DE algorithm are continually being developed in an effort to improve optimization performance. The following directions of development can be outlined:

* New schemes for performing crossover and mutation of agents
* Various strategies for handling constraints
* Adaptive strategies that dynamically adjust population size and the F and CR parameters
* Specialized algorithms for large-scale optimization
* Multi-objective and many-objective algorithms
* Techniques for handling binary/integer variables
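As one example of the adaptive-parameter direction, the self-adaptive scheme of jDE (Brest et al.) lets each agent carry its own F and CR values, occasionally resampling them. The sketch below uses the commonly cited defaults for the constants; treat it as an illustration of the idea rather than a definitive implementation.

```python
import random

def jde_update(F_i, CR_i, tau1=0.1, tau2=0.1, F_l=0.1, F_u=0.9):
    """jDE-style self-adaptation: with probability tau1 resample the
    agent's differential weight F from [F_l, F_l + F_u], and with
    probability tau2 resample its crossover probability CR from [0, 1);
    otherwise the inherited values are kept for the next generation."""
    if random.random() < tau1:
        F_i = F_l + random.random() * F_u
    if random.random() < tau2:
        CR_i = random.random()
    return F_i, CR_i
```

Surviving agents pass their (possibly updated) parameter values on to their offspring, so parameter settings that produce improvements tend to persist in the population.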


See also

* Artificial bee colony algorithm
* CMA-ES
* Evolution strategy
* Genetic algorithm

