Foundations and Trends® in Machine Learning, Vol. 6, Issue 4

A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning

Alborz Geramifard, Massachusetts Institute of Technology, USA, agf@mit.edu
Thomas J. Walsh, Massachusetts Institute of Technology, USA, twalsh@mit.edu
Stefanie Tellex, Massachusetts Institute of Technology, USA, stefie10@csail.mit.edu
Girish Chowdhary, Massachusetts Institute of Technology, USA, girishc@mit.edu
Nicholas Roy, Massachusetts Institute of Technology, USA, nickroy@csail.mit.edu
Jonathan P. How, Massachusetts Institute of Technology, USA, jhow@mit.edu
 
Suggested Citation
Alborz Geramifard, Thomas J. Walsh, Stefanie Tellex, Girish Chowdhary, Nicholas Roy and Jonathan P. How (2013), "A Tutorial on Linear Function Approximators for Dynamic Programming and Reinforcement Learning", Foundations and Trends® in Machine Learning: Vol. 6: No. 4, pp 375-451. http://dx.doi.org/10.1561/2200000042

Published: 19 Dec 2013
© 2013 A. Geramifard, T. J. Walsh, S. Tellex, G. Chowdhary, N. Roy, and J. P. How
 
Subjects
Reinforcement learning, Markov Decision Processes
 


Abstract

A Markov Decision Process (MDP) is a natural framework for formulating sequential decision-making problems under uncertainty. In recent years, researchers have greatly advanced algorithms for learning and acting in MDPs. This article reviews such algorithms, beginning with well-known dynamic programming methods for solving MDPs such as policy iteration and value iteration, then describes approximate dynamic programming methods such as trajectory-based value iteration, and finally moves to reinforcement learning methods such as Q-Learning, SARSA, and least-squares policy iteration. We describe algorithms in a unified framework, giving pseudocode together with memory and iteration complexity analysis for each. Empirical evaluations of these techniques with four representations across four domains provide insight into how these algorithms perform with various feature sets in terms of running time and performance.
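To make the dynamic-programming starting point concrete, the following is a minimal value-iteration sketch on a hypothetical two-state, two-action MDP. The transition probabilities and rewards below are invented for illustration and do not come from the article; value iteration itself is the standard Bellman-backup loop the abstract refers to.

```python
# Value iteration on a made-up 2-state, 2-action MDP (numbers are illustrative only).
GAMMA = 0.9

# P[s][a] = list of (next_state, probability); R[s][a] = expected immediate reward.
P = {0: {0: [(0, 0.8), (1, 0.2)], 1: [(0, 0.5), (1, 0.5)]},
     1: {0: [(0, 0.1), (1, 0.9)], 1: [(0, 0.6), (1, 0.4)]}}
R = {0: {0: 1.0, 1: 0.0},
     1: {0: 0.0, 1: 2.0}}

def value_iteration(tol=1e-8):
    """Iterate the Bellman optimality backup until the values stop changing."""
    V = {s: 0.0 for s in P}
    while True:
        # V(s) <- max_a [ R(s,a) + gamma * sum_s' P(s'|s,a) V(s') ]
        V_new = {s: max(R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a])
                        for a in P[s])
                 for s in P}
        if max(abs(V_new[s] - V[s]) for s in P) < tol:
            return V_new
        V = V_new

V = value_iteration()
# Extract the greedy policy from the converged value function.
policy = {s: max(P[s], key=lambda a: R[s][a] + GAMMA * sum(p * V[s2] for s2, p in P[s][a]))
          for s in P}
```

Because the backup is a gamma-contraction, the loop converges geometrically; the greedy policy extracted at the end is optimal for this toy MDP.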

DOI:10.1561/2200000042
ISBN: 978-1-60198-760-0 (print)
ISBN: 978-1-60198-761-7 (e-book)
91 pp.
Table of contents:
1. Introduction
2. Dynamic Programming and Reinforcement Learning
3. Representations
4. Empirical Results
5. Summary
Acknowledgments
References


This tutorial provides practical guidance for researchers seeking to extend DP and RL techniques to larger domains through linear value function approximation. The practical algorithms and empirical successes outlined also form a guide for practitioners trying to weigh computational costs, accuracy requirements, and representational concerns. Decision making in large domains will always be challenging, but with the tools presented here this challenge is not insurmountable.
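The tutorial's central theme is representing the value function linearly, Q(s, a) ≈ θᵀφ(s, a). As a hedged illustration of that idea (a sketch, not code from the tutorial), here is Q-learning with a linear approximator over one-hot features on a toy five-state chain invented for this example; the tutorial's four domains and four representations are richer, but the weight-update structure is the same.

```python
# Q-learning with a linear value-function approximator, Q(s,a) = w . phi(s,a).
# The chain environment and feature map are invented here for illustration.
import random

random.seed(0)

N = 5                      # states 0..4; reaching state 4 ends the episode with reward 1
ACTIONS = (-1, 1)          # step left or right along the chain
GAMMA, ALPHA, EPS = 0.95, 0.1, 0.1

def phi(s, a):
    """One-hot (tabular) features -- the simplest linear representation."""
    f = [0.0] * (N * len(ACTIONS))
    f[s * len(ACTIONS) + ACTIONS.index(a)] = 1.0
    return f

def q(w, s, a):
    return sum(wi * fi for wi, fi in zip(w, phi(s, a)))

w = [0.0] * (N * len(ACTIONS))
for _ in range(2000):
    s = random.randrange(N - 1)          # random non-terminal start aids exploration
    for _ in range(100):                 # cap episode length
        # Epsilon-greedy action selection from the current linear Q estimate.
        if random.random() < EPS:
            a = random.choice(ACTIONS)
        else:
            a = max(ACTIONS, key=lambda b: q(w, s, b))
        s2 = max(0, min(N - 1, s + a))
        done = (s2 == N - 1)
        r = 1.0 if done else 0.0
        target = r + (0.0 if done else GAMMA * max(q(w, s2, b) for b in ACTIONS))
        delta = target - q(w, s, a)      # TD error
        # Gradient step: for linear Q, the gradient w.r.t. w is just phi(s,a).
        w = [wi + ALPHA * delta * fi for wi, fi in zip(w, phi(s, a))]
        if done:
            break
        s = s2

# Greedy policy from the learned weights: should move right toward the goal.
greedy = [max(ACTIONS, key=lambda b: q(w, s, b)) for s in range(N - 1)]
```

With one-hot features this reduces to tabular Q-learning; the point of the sketch is that swapping in a richer φ (tiles, radial basis functions, etc.) changes only the feature function, not the update rule.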

 
MAL-042