Generative model-enhanced human motion prediction

Type
Article
Authors
Griffiths, Ryan-Rhys (ORCID: https://orcid.org/0000-0003-3117-4559)
Gray, Robert 
Jha, Ashwani 
Nachev, Parashkev 
Abstract

The task of predicting human motion is complicated by the natural heterogeneity and compositionality of actions, necessitating robustness to distributional shifts up to and including out-of-distribution (OoD) inputs. Here, we formulate a new OoD benchmark based on the Human3.6M and Carnegie Mellon University (CMU) motion capture datasets, and introduce a hybrid framework for hardening discriminative architectures against OoD failure by augmenting them with a generative model. When applied to current state-of-the-art discriminative models, we show that the proposed approach improves OoD robustness without sacrificing in-distribution performance, and can, in principle, facilitate model interpretability. We suggest that human motion predictors ought to be designed with OoD challenges in mind, and provide an extensible general framework for hardening diverse discriminative architectures against extreme distributional shift. The code is available at: https://github.com/bouracha/OoDMotion.

Keywords
deep learning, generative models, human motion prediction, variational autoencoders
Journal Title
Applied AI Letters
Journal ISSN
2689-5595
Volume
3
Publisher
Wiley
Sponsorship
Wellcome Trust (213038)