Data-efficient reinforcement learning in continuous state-action Gaussian-POMDPs

Accepted version
Peer-reviewed


Type

Conference Object

Authors

McAllister, RT 
Rasmussen, CE 

Abstract

We present a data-efficient reinforcement learning method for continuous state-action systems under significant observation noise. Data-efficient solutions under small noise exist, such as PILCO, which learns the cartpole swing-up task in 30 s. PILCO evaluates policies by planning state trajectories using a dynamics model. However, PILCO applies policies to the observed state, therefore planning in observation space. We extend PILCO with filtering to instead plan in belief space, consistent with partially observable Markov decision process (POMDP) planning. This enables data-efficient learning under significant observation noise, outperforming more naive methods such as post-hoc application of a filter to policies optimised by the original (unfiltered) PILCO algorithm. We test our method on the cartpole swing-up task, which involves nonlinear dynamics and requires nonlinear control.
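
The belief-space idea in the abstract can be illustrated with a short sketch: filter each noisy observation into a Gaussian belief over the latent state, then apply the policy to that belief rather than to the raw observation. The Python snippet below is a hypothetical illustration only, not the authors' implementation; the full method propagates beliefs through a Gaussian-process dynamics model as in PILCO, whereas here the identity observation model, the toy linear policy, and all numeric values are assumptions for illustration.

```python
import numpy as np

def belief_update(mu, Sigma, y, Q):
    """Kalman-style measurement update for an identity observation model,
    y = x + v with v ~ N(0, Q). Given the prior belief N(mu, Sigma) over
    the latent state x, returns the posterior belief N(mu', Sigma')."""
    S = Sigma + Q                    # innovation covariance
    K = Sigma @ np.linalg.inv(S)     # Kalman gain
    mu_post = mu + K @ (y - mu)
    Sigma_post = Sigma - K @ Sigma
    return mu_post, Sigma_post

rng = np.random.default_rng(0)
Q = 0.5 * np.eye(2)                          # significant observation noise
x = np.array([0.1, -0.2])                    # true (latent) state, hypothetical
y = x + rng.multivariate_normal(np.zeros(2), Q)

mu, Sigma = np.zeros(2), np.eye(2)           # prior belief over the state
mu, Sigma = belief_update(mu, Sigma, y, Q)
action = -mu                                 # toy policy acting on the belief mean
print(action)
```

Acting on the filtered mean rather than the raw observation is what distinguishes belief-space planning from the naive post-hoc filtering baseline mentioned in the abstract.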

Journal Title

Advances in Neural Information Processing Systems

Conference Name

Neural Information Processing Systems

Journal ISSN

1049-5258

Volume Title

2017-December

Sponsorship
Engineering and Physical Sciences Research Council (EP/J012300/1)
Alan Turing Institute (unknown)