By Saurabh Kumar, Stanford University, USA, szk@stanford.edu | Henrik Marklund, Stanford University, USA | Ashish Rao, Stanford University, USA | Yifan Zhu, Stanford University, USA | Hong Jun Jeon, Stanford University, USA | Yueyang Liu, Rice University, USA | Benjamin Van Roy, Stanford University, USA
An agent that accumulates knowledge to develop increasingly sophisticated skills over a long lifetime could advance the frontier of artificial intelligence capabilities. The design of such agents, which remains a long-standing challenge, is addressed by the subject of continual learning. This monograph clarifies and formalizes concepts of continual learning, introducing a framework and tools to stimulate further research. We also present a range of empirical case studies to illustrate the roles of forgetting, relearning, exploration, and auxiliary learning.
Metrics presented in previous literature for evaluating continual learning agents tend to focus on particular behaviors that are deemed desirable, such as avoiding catastrophic forgetting, retaining plasticity, relearning quickly, and maintaining low memory or compute footprints. In order to systematically reason about design choices and compare agents, a coherent, holistic objective that encompasses all such requirements would be helpful. To provide such an objective, we cast continual learning as reinforcement learning with limited compute resources. In particular, we pose the continual learning objective to be the maximization of infinite-horizon average reward subject to a computational constraint. Continual supervised learning, for example, is a special case of our general formulation where the reward is taken to be negative log-loss or accuracy. Among the implications of maximizing average reward are that remembering all information from the past is unnecessary, forgetting nonrecurring information is not “catastrophic,” and learning about how an environment changes over time is useful.
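As an illustrative sketch of this objective (the notation here is schematic rather than the monograph's fixed notation): an agent policy $\pi$, restricted to a set $\Pi_{\mathrm{c}}$ of policies executable within the per-timestep compute budget, is evaluated by its long-run average reward,
\[
\max_{\pi \in \Pi_{\mathrm{c}}} \ \liminf_{T \to \infty} \ \frac{1}{T}\, \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{T-1} R_{t+1}\right].
\]
Continual supervised learning is then recovered by taking the reward to be the negative log-loss of the agent's predictive distribution $P_t$ on the next observed pair, for example $R_{t+1} = \ln P_t(Y_{t+1} \mid X_{t+1})$.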
Computational constraints give rise to informational constraints in the sense that they limit the amount of information used to make decisions. A consequence is that, unlike in more common framings of machine learning in which per-timestep regret vanishes as an agent accumulates information, the regret experienced in continual learning typically persists. Relatedly, even in stationary environments, informational constraints can incentivize perpetual adaptation. Informational constraints also give rise to the familiar stability-plasticity dilemma, which we formalize in information-theoretic terms.
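One way to see the dilemma in these terms (a schematic sketch, not the monograph's exact development): if the agent must compress its history $H_t$ into an agent state $U_t$ of at most $C$ bits, then the information the state carries about the history is bounded,
\[
I(U_t; H_t) \le \mathbb{H}(U_t) \le C,
\]
so bits devoted to retaining information from old experience (stability) and bits devoted to ingesting information from new experience (plasticity) draw on the same budget. Because this budget does not grow over time, per-timestep regret relative to an unconstrained agent need not vanish, in contrast with standard framings where regret per timestep decays as data accumulates.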
Continual learning remains a long-standing challenge. Success requires continuously ingesting new knowledge while retaining old knowledge that remains useful. More generally, an agent needs to efficiently accumulate knowledge to develop increasingly sophisticated skills over a long lifetime. Existing incremental machine learning techniques fall short of these ambitions: a major challenge has been to develop scalable systems that judiciously control what information they ingest, retain, or forget.