
A Tutorial on Thompson Sampling

Daniel J. Russo, Columbia University, USA, djr2174@columbia.edu
Benjamin Van Roy, Stanford University, USA, bvr@stanford.edu
Abbas Kazerouni, Stanford University, USA, abbask@stanford.edu
Ian Osband, Google DeepMind, UK, ian.osband@gmail.com
Zheng Wen, Adobe Research, USA, zwen@adobe.com
 
Suggested Citation
Daniel J. Russo, Benjamin Van Roy, Abbas Kazerouni, Ian Osband and Zheng Wen (2018), "A Tutorial on Thompson Sampling", Foundations and Trends® in Machine Learning: Vol. 11: No. 1, pp. 1-96. http://dx.doi.org/10.1561/2200000070

Published: 12 Jul 2018
© 2018 D. J. Russo, B. Van Roy, A. Kazerouni, I. Osband and Z. Wen
 
Subjects
Reinforcement learning, Online learning, Bayesian learning, Optimization
 
Keywords
Exploration, Bandit learning
 

In this article:
1. Introduction
2. Greedy Decisions
3. Thompson Sampling for the Bernoulli Bandit
4. General Thompson Sampling
5. Approximations
6. Practical Modeling Considerations
7. Further Examples
8. Why it Works, When it Fails, and Alternative Approaches
Acknowledgements
References

Abstract

Thompson sampling is an algorithm for online decision problems where actions are taken sequentially in a manner that must balance between exploiting what is known to maximize immediate performance and investing to accumulate new information that may improve future performance. The algorithm addresses a broad range of problems in a computationally efficient manner and is therefore enjoying wide use. This tutorial covers the algorithm and its application, illustrating concepts through a range of examples, including Bernoulli bandit problems, shortest path problems, product recommendation, assortment, active learning with neural networks, and reinforcement learning in Markov decision processes. Most of these problems involve complex information structures, where information revealed by taking an action informs beliefs about other actions. We will also discuss when and why Thompson sampling is or is not effective and relations to alternative algorithms.
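
As a concrete illustration of the balance between exploration and exploitation described above, the following is a minimal sketch of Thompson sampling for the Bernoulli bandit, the setting developed in Section 3 of the tutorial. It assumes independent Beta(1, 1) priors over each arm's success probability; the arm probabilities in true_probs, the horizon T, and the random seed are illustrative choices, not values from the text.

    import numpy as np

    rng = np.random.default_rng(0)
    true_probs = np.array([0.4, 0.5, 0.6])  # unknown to the agent; illustrative values
    K, T = len(true_probs), 10_000

    # Independent Beta(1, 1) priors on each arm's success probability.
    alpha = np.ones(K)
    beta = np.ones(K)

    for t in range(T):
        # Sample one model from the posterior, then act greedily under that sample.
        theta = rng.beta(alpha, beta)
        arm = int(np.argmax(theta))
        # Observe a Bernoulli reward and apply the conjugate posterior update.
        reward = int(rng.random() < true_probs[arm])
        alpha[arm] += reward
        beta[arm] += 1 - reward

    print("posterior means:", alpha / (alpha + beta))

At each step the agent samples one plausible model from its posterior and plays the arm that is best under that sample, so arms that are probably inferior are selected less and less often as the posterior concentrates. This randomized exploration is what distinguishes Thompson sampling from the greedy approach of Section 2, which can fixate on an inferior arm.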

DOI: 10.1561/2200000070
ISBN (print): 978-1-68083-470-3
ISBN (e-book): 978-1-68083-471-0
112 pp.
