Thompson Sampling

Thompson sampling, named after William R. Thompson, is a heuristic for choosing actions that addresses the exploration-exploitation dilemma in the multi-armed bandit problem. It consists of choosing the action that maximizes the expected reward with respect to a randomly drawn belief.


Description

Consider a set of contexts \mathcal{X}, a set of actions \mathcal{A}, and rewards in \mathbb{R}. In each round, the player obtains a context x \in \mathcal{X}, plays an action a \in \mathcal{A}, and receives a reward r \in \mathbb{R} drawn from a distribution that depends on the context and the issued action. The aim of the player is to play actions so as to maximize the cumulative reward.

The elements of Thompson sampling are as follows:
# a likelihood function P(r\mid\theta,a,x);
# a set \Theta of parameters \theta of the distribution of r;
# a prior distribution P(\theta) on these parameters;
# past observations, given as triplets \mathcal{D} = \{(x; a; r)\};
# a posterior distribution P(\theta\mid\mathcal{D}) \propto P(\mathcal{D}\mid\theta)P(\theta), where P(\mathcal{D}\mid\theta) is the likelihood function.

Thompson sampling consists of playing the action a^\ast \in \mathcal{A} according to the probability that it maximizes the expected reward; that is, action a^\ast is chosen with probability

:\int \mathbb{I}\left[\mathbb{E}(r\mid a^\ast,x,\theta) = \max_{a'} \mathbb{E}(r\mid a',x,\theta)\right] P(\theta\mid\mathcal{D}) \, d\theta,

where \mathbb{I} is the indicator function.

In practice, the rule is implemented by sampling. In each round, parameters \theta^\ast are sampled from the posterior P(\theta\mid\mathcal{D}), and an action a^\ast is chosen that maximizes \mathbb{E}[r\mid\theta^\ast,a^\ast,x], i.e. the expected reward given the sampled parameters, the action, and the current context. Conceptually, this means that the player instantiates their beliefs randomly in each round according to the posterior distribution, and then acts optimally according to them. In most practical applications, it is computationally onerous to maintain and sample from a posterior distribution over models. As such, Thompson sampling is often used in conjunction with approximate sampling techniques.
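The rule above can be made concrete in the simplest setting. The following is a minimal sketch, not a canonical implementation, of Thompson sampling for a context-free Bernoulli bandit, where a Beta prior on each arm's success probability keeps the posterior in closed form; the arm probabilities, horizon, and function name are illustrative assumptions.

<syntaxhighlight lang="python">
import random

def thompson_sampling_bernoulli(true_probs, n_rounds=10_000, seed=0):
    """Beta-Bernoulli Thompson sampling sketch (illustrative, not canonical)."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    # Beta(1, 1) prior for every arm: alpha tracks successes + 1, beta tracks failures + 1.
    alpha = [1] * n_arms
    beta = [1] * n_arms
    total_reward = 0
    for _ in range(n_rounds):
        # Sample a plausible success probability for each arm from its posterior ...
        samples = [rng.betavariate(alpha[i], beta[i]) for i in range(n_arms)]
        # ... and act greedily with respect to the sampled beliefs.
        arm = max(range(n_arms), key=lambda i: samples[i])
        reward = 1 if rng.random() < true_probs[arm] else 0
        # Conjugate posterior update for the pulled arm.
        alpha[arm] += reward
        beta[arm] += 1 - reward
        total_reward += reward
    return total_reward, alpha, beta

if __name__ == "__main__":
    reward, alpha, beta = thompson_sampling_bernoulli([0.2, 0.5, 0.7])
    print("cumulative reward:", reward)
</syntaxhighlight>

Because the Beta prior is conjugate to the Bernoulli likelihood, the posterior update reduces to incrementing success and failure counts; for richer models the posterior can usually only be sampled approximately, as noted above.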


History

Thompson sampling was originally described by Thompson in 1933. It was subsequently rediscovered numerous times independently in the context of multi-armed bandit problems. A first proof of convergence for the bandit case was given in 1997. The first application to Markov decision processes came in 2000. A related approach (see Bayesian control rule below) was published in 2010. In 2010 it was also shown that Thompson sampling is ''instantaneously self-correcting''. Asymptotic convergence results for contextual bandits were published in 2011. Thompson sampling has been widely used in many online learning problems, including A/B testing in website design and online advertising, and accelerated learning in decentralized decision making. A Double Thompson Sampling (D-TS) algorithm has been proposed for dueling bandits, a variant of the traditional multi-armed bandit problem in which feedback comes in the form of pairwise comparisons.


Relationship to other approaches


Probability matching

Probability matching is a decision strategy in which predictions of class membership are proportional to the class base rates. Thus, if in the training set positive examples are observed 60% of the time, and negative examples are observed 40% of the time, the observer using a probability-matching strategy will predict (for unlabeled examples) a class label of "positive" on 60% of instances, and a class label of "negative" on 40% of instances.
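As a small illustration of the strategy described above, the sketch below draws predictions in proportion to the observed base rates; the 60/40 rates follow the example in the text, and the function name and random seed are illustrative assumptions.

<syntaxhighlight lang="python">
import random

rng = random.Random(0)

def probability_matching_prediction(base_rates):
    """Predict a label with probability proportional to its observed base rate."""
    labels, weights = zip(*base_rates.items())
    return rng.choices(labels, weights=weights, k=1)[0]

# With the 60/40 example above, "positive" is predicted on roughly 60% of instances,
# whereas a maximizing strategy would predict "positive" every time.
rates = {"positive": 0.6, "negative": 0.4}
predictions = [probability_matching_prediction(rates) for _ in range(10)]
print(predictions)
</syntaxhighlight>

The connection to Thompson sampling is that the action is sampled rather than chosen deterministically: each action is played with probability equal to the (posterior) probability that it is the best one, just as each label above is predicted with probability equal to its estimated base rate.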


Bayesian control rule

A generalization of Thompson sampling to arbitrary dynamical environments and causal structures, known as the Bayesian control rule, has been shown to be the optimal solution to the adaptive coding problem with actions and observations. In this formulation, an agent is conceptualized as a mixture over a set of behaviours. As the agent interacts with its environment, it learns the causal properties and adopts the behaviour that minimizes the relative entropy to the behaviour with the best prediction of the environment's behaviour. If these behaviours have been chosen according to the maximum expected utility principle, then the asymptotic behaviour of the Bayesian control rule matches the asymptotic behaviour of the perfectly rational agent.

The setup is as follows. Let a_1, a_2, \ldots, a_T be the actions issued by an agent up to time T, and let o_1, o_2, \ldots, o_T be the observations gathered by the agent up to time T. Then, the agent issues the action a_{T+1} with probability

:P(a_{T+1}\mid\hat{a}_{1:T}, o_{1:T}),

where the "hat"-notation \hat{a}_t denotes the fact that a_t is a causal intervention (see Causality), and not an ordinary observation. If the agent holds beliefs \theta \in \Theta over its behaviours, then the Bayesian control rule becomes

:P(a_{T+1}\mid\hat{a}_{1:T}, o_{1:T}) = \int_\Theta P(a_{T+1}\mid\theta, \hat{a}_{1:T}, o_{1:T}) \, P(\theta\mid\hat{a}_{1:T}, o_{1:T}) \, d\theta,

where P(\theta\mid\hat{a}_{1:T}, o_{1:T}) is the posterior distribution over the parameter \theta given the actions a_{1:T} and observations o_{1:T}.

In practice, the Bayesian control rule amounts to sampling, at each time step, a parameter \theta^\ast from the posterior distribution P(\theta\mid\hat{a}_{1:T}, o_{1:T}), where the posterior is computed using Bayes' rule by considering only the (causal) likelihoods of the observations o_1, o_2, \ldots, o_T and ignoring the (causal) likelihoods of the actions a_1, a_2, \ldots, a_T, and then sampling the action a^\ast_{T+1} from the action distribution P(a_{T+1}\mid\theta^\ast, \hat{a}_{1:T}, o_{1:T}).
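To make the sampling procedure explicit, here is a minimal sketch assuming a finite set of candidate behaviours with known observation models and policies; the dictionaries, probabilities, and names below are illustrative assumptions rather than part of the original formulation.

<syntaxhighlight lang="python">
import random

rng = random.Random(0)

# Hypothetical finite set of behaviours (values of theta). Each has an observation
# model P(o | theta) and a policy P(a | theta); the numbers are illustrative only.
BEHAVIOURS = {
    "theta_1": {"obs": {"good": 0.8, "bad": 0.2}, "policy": {"left": 0.9, "right": 0.1}},
    "theta_2": {"obs": {"good": 0.3, "bad": 0.7}, "policy": {"left": 0.2, "right": 0.8}},
}

def bcr_step(posterior, observation):
    """One Bayesian-control-rule step: update the posterior using only the
    observation likelihoods (past actions are treated as interventions and
    contribute no likelihood term), sample theta*, then sample an action."""
    unnormalized = {th: p * BEHAVIOURS[th]["obs"][observation] for th, p in posterior.items()}
    norm = sum(unnormalized.values())
    posterior = {th: w / norm for th, w in unnormalized.items()}
    # Sample a behaviour theta* from the posterior ...
    theta_star = rng.choices(list(posterior), weights=list(posterior.values()), k=1)[0]
    # ... and sample the next action from that behaviour's policy.
    policy = BEHAVIOURS[theta_star]["policy"]
    action = rng.choices(list(policy), weights=list(policy.values()), k=1)[0]
    return action, posterior

posterior = {th: 1 / len(BEHAVIOURS) for th in BEHAVIOURS}  # uniform prior over behaviours
for obs in ["good", "good", "bad"]:
    action, posterior = bcr_step(posterior, obs)
    print(obs, "->", action, posterior)
</syntaxhighlight>

The intervention aspect appears only through what is omitted: the posterior update never weights a behaviour by how likely it was to have produced the past actions, only by how well it predicted the observations.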


Upper-Confidence-Bound (UCB) algorithms

Thompson sampling and upper-confidence bound algorithms share a fundamental property that underlies many of their theoretical guarantees. Roughly speaking, both algorithms allocate exploratory effort to actions that might be optimal and are in this sense "optimistic". Leveraging this property, one can translate regret bounds established for UCB algorithms to Bayesian regret bounds for Thompson sampling or unify regret analysis across both these algorithms and many classes of problems.
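For contrast with the posterior-sampling sketch in the Description section, the following is a minimal sketch of the classical UCB1 index rule for a Bernoulli bandit; the exploration bonus \sqrt{2\ln t / n_i}, the arm probabilities, and the function name are standard but illustrative choices, and this is not the specific algorithm treated in the cited analyses.

<syntaxhighlight lang="python">
import math
import random

def ucb1_bernoulli(true_probs, n_rounds=10_000, seed=0):
    """UCB1 sketch: play the arm with the highest optimistic (mean + bonus) estimate."""
    rng = random.Random(seed)
    n_arms = len(true_probs)
    counts = [0] * n_arms      # number of pulls per arm
    sums = [0.0] * n_arms      # total reward per arm
    total_reward = 0.0
    for t in range(1, n_rounds + 1):
        if t <= n_arms:
            arm = t - 1  # play each arm once to initialize the estimates
        else:
            # Optimism: empirical mean plus a confidence bonus that shrinks with pulls.
            arm = max(
                range(n_arms),
                key=lambda i: sums[i] / counts[i] + math.sqrt(2 * math.log(t) / counts[i]),
            )
        reward = 1.0 if rng.random() < true_probs[arm] else 0.0
        counts[arm] += 1
        sums[arm] += reward
        total_reward += reward
    return total_reward

print(ucb1_bernoulli([0.2, 0.5, 0.7]))
</syntaxhighlight>

Where UCB adds an explicit confidence bonus to the empirical mean, Thompson sampling achieves a comparable optimism implicitly: arms with few observations have wide posteriors and therefore occasionally produce large sampled values.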


References

* Thompson, William R. "On the likelihood that one unknown probability exceeds another in view of the evidence of two samples". ''Biometrika'', 25(3–4):285–294, 1933.
* Thompson, W. R. "On the theory of apportionment". ''American Journal of Mathematics'', 57(2):450–456, 1935.
* J. Wyatt. ''Exploration and Inference in Learning from Reinforcement''. Ph.D. thesis, Department of Artificial Intelligence, University of Edinburgh, March 1997.
* O. Chapelle and L. Li. "An empirical evaluation of Thompson sampling". ''Advances in Neural Information Processing Systems'', 2011. http://papers.nips.cc/paper/4321-an-empirical-evaluation-of-thompson-sampling
* B. C. May, N. Korda, A. Lee, and D. S. Leslie. "Optimistic Bayesian sampling in contextual-bandit problems". Technical report, Statistics Group, Department of Mathematics, University of Bristol, 2011.
* P. A. Ortega and D. A. Braun. "A Minimum Relative Entropy Principle for Learning and Acting". ''Journal of Artificial Intelligence Research'', 38:475–511, 2010.
* M. J. A. Strens. "A Bayesian Framework for Reinforcement Learning". ''Proceedings of the Seventeenth International Conference on Machine Learning'', Stanford University, California, June 29–July 2, 2000. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.140.1701
* O.-C. Granmo. "Solving Two-Armed Bernoulli Bandit Problems Using a Bayesian Learning Automaton". ''International Journal of Intelligent Computing and Cybernetics'', 3(2):207–234, 2010.
* O.-C. Granmo and S. Glimsdal. "Accelerated Bayesian learning for decentralized two-armed bandit based decision making with applications to the Goore Game". ''Applied Intelligence'', 38(4):479–488, 2012. doi:10.1007/s10489-012-0346-z
* Ian Clarke. "Proportionate A/B testing". September 22, 2011. http://blog.locut.us/2011/09/22/proportionate-ab-testing/
* H. Wu, X. Liu, and R. Srikant. "Double Thompson Sampling for Dueling Bandits". 2016. arXiv:1604.07101
* D. J. Russo, B. Van Roy, A. Kazerouni, I. Osband, and Z. Wen. "A Tutorial on Thompson Sampling". ''Foundations and Trends in Machine Learning'', 11(1):1–96, 2018. https://web.stanford.edu/~bvr/pubs/TS_Tutorial.pdf
* D. J. Russo and B. Van Roy. "Eluder Dimension and the Sample Complexity of Optimistic Exploration". ''Advances in Neural Information Processing Systems'' 26, pp. 2256–2264, 2013. https://proceedings.neurips.cc/paper/2013/file/41bfd20a38bb1b0bec75acf0845530a7-Paper.pdf
* D. J. Russo and B. Van Roy. "Learning to Optimize Via Posterior Sampling". ''Mathematics of Operations Research'', 39(4):1221–1243, 2014. https://pubsonline.informs.org/doi/abs/10.1287/moor.2014.0650