Learning and policy search in stochastic dynamical systems with Bayesian neural networks
Authors
Depeweg, S
Hernández-Lobato, JM
Doshi-Velez, F
Udluft, S
Publication Date
2017-01-01
Journal Title
5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings
Type
Conference Object
This Version
VoR (Version of Record)
Citation
Depeweg, S., Hernández-Lobato, J., Doshi-Velez, F., & Udluft, S. (2017). Learning and policy search in stochastic dynamical systems with Bayesian neural networks. 5th International Conference on Learning Representations, ICLR 2017 - Conference Track Proceedings https://doi.org/10.17863/CAM.55902
Abstract
We present an algorithm for policy search in stochastic dynamical systems using model-based reinforcement learning. The system dynamics are described with Bayesian neural networks (BNNs) that include stochastic input variables. These input variables allow us to capture complex statistical patterns in the transition dynamics (e.g. multi-modality and heteroskedasticity), which are usually missed by alternative modeling approaches. After learning the dynamics, our BNNs are then fed into an algorithm that performs random roll-outs and uses stochastic optimization for policy learning. We train our BNNs by minimizing α-divergences with α = 0.5, which usually produces better results than other techniques such as variational Bayes. We illustrate the performance of our method by solving a challenging problem where model-based approaches usually fail and by obtaining promising results in real-world scenarios including the control of a gas turbine and an industrial benchmark.
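The sketch below is not the authors' code; it is a minimal illustration of the roll-out-based policy search the abstract describes, with a transition model that receives an extra latent stochastic input z (mirroring the paper's stochastic input variables). The BNN weight uncertainty and the α-divergence training objective are omitted, and all names and dimensions (TransitionModel, Policy, cost_fn, STATE_DIM, etc.) are illustrative assumptions.

```python
# Policy search by differentiating the expected cost of Monte Carlo roll-outs
# through a learned stochastic transition model (minimal sketch, not the paper's code).
import torch
import torch.nn as nn

STATE_DIM, ACTION_DIM, Z_DIM, HORIZON, N_ROLLOUTS = 4, 1, 1, 20, 32


class TransitionModel(nn.Module):
    """Predicts the next state from (state, action, z); z models stochastic disturbances."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM + ACTION_DIM + Z_DIM, 64), nn.ReLU(),
            nn.Linear(64, STATE_DIM),
        )

    def forward(self, s, a, z):
        return self.net(torch.cat([s, a, z], dim=-1))


class Policy(nn.Module):
    """Deterministic policy mapping state to action."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(STATE_DIM, 64), nn.Tanh(),
            nn.Linear(64, ACTION_DIM), nn.Tanh(),
        )

    def forward(self, s):
        return self.net(s)


def cost_fn(s):
    # Placeholder quadratic state cost; the true cost is task-specific.
    return (s ** 2).sum(dim=-1)


def expected_rollout_cost(model, policy, s0):
    """Average cost over stochastic roll-outs; z is re-sampled at every step."""
    s, total = s0, 0.0
    for _ in range(HORIZON):
        a = policy(s)
        z = torch.randn(s.shape[0], Z_DIM)   # stochastic input variable
        s = model(s, a, z)
        total = total + cost_fn(s).mean()
    return total / HORIZON


model = TransitionModel()   # in practice: fitted to observed transitions beforehand
policy = Policy()
opt = torch.optim.Adam(policy.parameters(), lr=1e-3)

for step in range(100):
    s0 = torch.randn(N_ROLLOUTS, STATE_DIM)          # sampled start states
    loss = expected_rollout_cost(model, policy, s0)
    opt.zero_grad()
    loss.backward()   # gradients flow through the simulated roll-outs into the policy
    opt.step()
```

In the paper the transition model is a Bayesian neural network, so roll-outs would also average over posterior weight samples obtained via α-divergence minimization; here a single point-estimate network stands in for that model purely to keep the example short.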
Identifiers
This record's DOI: https://doi.org/10.17863/CAM.55902
This record's URL: https://www.repository.cam.ac.uk/handle/1810/308814
Rights
All rights reserved