
Continual Learning as Computationally Constrained Reinforcement Learning
An agent that accumulates knowledge to develop increasingly sophisticated skills over a long lifetime could advance the frontier of artificial intelligence capabilities. The design of such agents, which remains a long-standing challenge, is addressed by the subject of continual learning. This monograph clarifies and formalizes concepts of continual learning, introducing a framework and tools to stimulate further research. We also present a range of empirical case studies to illustrate the roles of forgetting, relearning, exploration, and auxiliary learning.

Metrics presented in previous literature for evaluating continual learning agents tend to focus on particular behaviors that are deemed desirable, such as avoiding catastrophic forgetting, retaining plasticity, relearning quickly, and maintaining low memory or compute footprints. In order to systematically reason about design choices and compare agents, a coherent, holistic objective that encompasses all such requirements would be helpful. To provide such an objective, we cast continual learning as reinforcement learning with limited compute resources. In particular, we pose the continual learning objective to be the maximization of infinite-horizon average reward subject to a computational constraint. Continual supervised learning, for example, is a special case of our general formulation where the reward is taken to be negative log-loss or accuracy.

Among the implications of maximizing average reward are that remembering all information from the past is unnecessary, forgetting nonrecurring information is not "catastrophic," and learning about how an environment changes over time is useful. Computational constraints give rise to informational constraints in the sense that they limit the amount of information used to make decisions. A consequence is that, unlike in more common framings of machine learning in which per-timestep regret vanishes as an agent accumulates information, the regret experienced in continual learning typically persists. Related to this is that even in stationary environments, informational constraints can incentivize perpetual adaptation. Informational constraints also give rise to the familiar stability-plasticity dilemma, which we formalize in information-theoretic terms.
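To make the formulation concrete, the following is a minimal sketch in notation introduced here for illustration (the symbols pi, R_t, c(pi), and B are ours and need not match the monograph's): the agent selects among policies Pi to maximize infinite-horizon average reward, subject to a per-timestep compute budget.

% Continual learning as computationally constrained RL
% (a sketch under the assumptions stated above; notation is ours).
\max_{\pi \in \Pi} \;\; \liminf_{T \to \infty} \frac{1}{T}\,
  \mathbb{E}_{\pi}\!\left[ \sum_{t=0}^{T-1} R_{t+1} \right]
\qquad \text{subject to} \qquad c(\pi) \le B,
% where c(\pi) measures the computation the agent expends per timestep
% and B is the available budget. Continual supervised learning arises
% as a special case: with input-label pairs (X_t, Y_t) and the agent's
% predictive distribution P_t, taking the reward to be negative log-loss,
R_{t+1} \;=\; \log P_t\!\left(Y_t \mid X_t\right),
% makes average reward the average log-likelihood; a 0-1 reward
% analogously recovers average accuracy.

The liminf keeps the objective well defined even when the per-timestep reward sequence does not converge, as can happen in changing environments.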