Potential for Reinforcement Learning in the Cerebellum.
Published version
Peer-reviewed
Abstract
This article explores how simple reinforcement learning algorithms might be implemented by the anatomy of the cerebellum. In doing so, we highlight which anatomical and physiological details are most important for assessing algorithmic fit, and we discuss which algorithm components are easiest to accommodate in a neural system. We describe hypothetical cerebellar implementations of four reinforcement learning algorithms and discuss the anatomical plausibility of the components each requires. We show how one of the algorithms can learn to generate short sequences of actions without continuous information about the resulting changes to the environment. We finish with simulations illustrating how the algorithms learn to balance an inverted pendulum, commonly known as the cart-pole problem. We highlight two physiological features, reward signals and the combination of information across time, that indicate that some form of reinforcement learning adaptation may be taking place. We also explain why a commonly used algorithmic feature, the eligibility trace, is particularly difficult to implement in known neural anatomy.
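The abstract names two algorithmic ingredients, a reward-driven error signal and an eligibility trace, that standard reinforcement learning relies on. As a generic illustration (not the paper's cerebellar model; the environment, parameter values, and episode count below are illustrative assumptions), a minimal TD(λ) sketch on a small random-walk task shows how a decaying trace spreads a single reward signal back over recently visited states:

```python
import numpy as np

# Minimal TD(lambda) value-learning sketch on a 5-state random walk.
# This is a generic textbook-style illustration of an eligibility trace,
# not the cerebellar implementation discussed in the article.

rng = np.random.default_rng(0)
n_states = 5                   # states 0..4; episodes start in the middle
V = np.zeros(n_states)         # value estimates
alpha, gamma, lam = 0.1, 1.0, 0.8   # illustrative learning parameters

for episode in range(200):
    e = np.zeros(n_states)     # eligibility trace: decaying memory of visits
    s = n_states // 2
    done = False
    while not done:
        s_next = s + (1 if rng.random() < 0.5 else -1)
        if s_next < 0:
            r, v_next, done = 0.0, 0.0, True       # left exit: no reward
        elif s_next >= n_states:
            r, v_next, done = 1.0, 0.0, True       # right exit: reward 1
        else:
            r, v_next = 0.0, V[s_next]
        delta = r + gamma * v_next - V[s]          # TD error (reward signal)
        e[s] += 1.0                                # mark current state eligible
        V += alpha * delta * e                     # credit all recent states
        e *= gamma * lam                           # trace decays over time
        s = s_next

print(V)   # values should rise from left (far from reward) to right
```

The key point for the article's argument is the line `e *= gamma * lam`: the trace is a per-synapse memory that must persist and decay between reward events, which is exactly the component that is hard to map onto known cerebellar anatomy.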
Journal ISSN
1530-888X

