Reward-based selection
Reward-based selection is a technique used in evolutionary algorithms for selecting potentially useful solutions for recombination. The probability of an individual being selected is proportional to the cumulative reward it has obtained. The cumulative reward can be computed as the sum of the individual's own reward and the reward inherited from its parents.
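As a minimal sketch of the selection rule above, the following Python draws an individual with probability proportional to its cumulative reward (a roulette-wheel spin over rewards). The function names and the decay factor applied to the inherited parental reward are illustrative assumptions, not part of the source:

```python
import random

def cumulative_reward(own_reward, parent_rewards, decay=0.5):
    """Cumulative reward: the individual's own reward plus a share of the
    reward inherited from its parents. The decay factor is an
    illustrative choice, not specified by the source."""
    return own_reward + decay * sum(parent_rewards)

def reward_based_select(population, rewards, rng=random):
    """Select one individual with probability proportional to its
    cumulative reward (roulette wheel over the reward values)."""
    total = sum(rewards)
    if total == 0:                    # no reward information yet: uniform
        return rng.choice(population)
    pick = rng.uniform(0, total)
    running = 0.0
    for individual, r in zip(population, rewards):
        running += r
        if pick <= running:
            return individual
    return population[-1]             # guard against floating-point rounding
```

An individual with zero cumulative reward is never chosen once any other individual has accumulated reward, which is what lets the scheme concentrate sampling on fruitful parents.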


Description

Reward-based selection can be used within a multi-armed bandit framework for multi-objective optimization to obtain a better approximation of the Pareto front.

The newborn a'^{(g+1)} and its parents receive a reward r^{(g)} if a'^{(g+1)} was selected for the new population Q^{(g+1)}; otherwise the reward is zero. Several reward definitions are possible:

*1. r^{(g)} = 1, if the newborn individual a'^{(g+1)} was selected for the new population Q^{(g+1)}.
*2. r^{(g)} = 1 - \frac{\operatorname{rank}(a'^{(g+1)})}{\mu}, if a'^{(g+1)} \in Q^{(g+1)}, where \operatorname{rank}(a'^{(g+1)}) is the rank of the newly inserted individual in the population of \mu individuals. The rank can be computed using the well-known non-dominated sorting procedure.
*3. r^{(g)} = \sum_{a \in Q^{(g+1)}} \Delta(a, Q^{(g+1)}) - \sum_{a \in Q^{(g)}} \Delta(a, Q^{(g)}), where \Delta(a, Q) is the hypervolume indicator contribution of the individual a to the population Q. The reward r^{(g)} > 0 if the newly inserted individual improves the quality of the population, measured as its hypervolume contribution in the objective space.
*4. A relaxation of the above reward, involving a rank-based penalization for points of the k-th dominated Pareto front: r^{(g)} = \frac{1}{2^k} \left( \sum_{a \in \operatorname{ndom}_k(Q^{(g+1)})} \Delta(a, \operatorname{ndom}_k(Q^{(g+1)})) - \sum_{a \in \operatorname{ndom}_k(Q^{(g)})} \Delta(a, \operatorname{ndom}_k(Q^{(g)})) \right)

Reward-based selection can quickly identify the most fruitful directions of search by maximizing the cumulative reward of individuals.
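The rank-based and hypervolume-based rewards above can be sketched for the two-objective minimization case. The helper names and the 2-D exclusive-hypervolume computation (with an explicit reference point) are illustrative assumptions, not part of the source:

```python
def reward_rank(selected, rank, mu):
    """Reward definition 2: r = 1 - rank/mu if the newborn entered the
    new population, else 0; rank comes from non-dominated sorting."""
    return 1.0 - rank / mu if selected else 0.0

def nondominated(points):
    """Points of a 2-objective minimization set not dominated by any other."""
    return [p for p in points
            if not any(q[0] <= p[0] and q[1] <= p[1] and q != p for q in points)]

def hv_contributions(points, ref):
    """Exclusive hypervolume contribution Delta(a, Q) of each
    non-dominated point, for two objectives and reference point ref."""
    front = sorted(set(nondominated(points)))   # ascending f1, descending f2
    contrib = {}
    for i, (x, y) in enumerate(front):
        next_x = front[i + 1][0] if i + 1 < len(front) else ref[0]
        prev_y = front[i - 1][1] if i > 0 else ref[1]
        contrib[(x, y)] = (next_x - x) * (prev_y - y)
    return contrib

def reward_hv(old_pop, new_pop, ref):
    """Reward definition 3: difference of summed per-point hypervolume
    contributions between the new and old populations."""
    return (sum(hv_contributions(new_pop, ref).values())
            - sum(hv_contributions(old_pop, ref).values()))
```

Note that definition 3 compares summed exclusive contributions, not the total hypervolume of the front, so inserting a point can redistribute contributions among its neighbours as well as add its own.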


See also

* Fitness proportionate selection
* Selection (genetic algorithm)
* Stochastic universal sampling
* Tournament selection

