
Probabilistic Future Prediction for Video Scene Understanding

Accepted version
Peer-reviewed

Type

Conference Object

Authors

Hu, Anthony 
Cotter, Fergal 
Mohan, Nikhil 
Gurau, Corina 
Kendall, Alex 

Abstract

We present a novel deep learning architecture for probabilistic future prediction from video. We predict the future semantics, geometry and motion of complex real-world urban scenes and use this representation to control an autonomous vehicle. This work is the first to jointly predict ego-motion, static scene, and the motion of dynamic agents in a probabilistic manner, which allows sampling consistent, highly probable futures from a compact latent space. Our model learns a representation from RGB video with a spatio-temporal convolutional module. The learned representation can be explicitly decoded to future semantic segmentation, depth, and optical flow, in addition to being an input to a learnt driving policy. To model the stochasticity of the future, we introduce a conditional variational approach which minimises the divergence between the present distribution (what could happen given what we have seen) and the future distribution (what we observe actually happens). During inference, diverse futures are generated by sampling from the present distribution.
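The abstract describes training a conditional variational model by minimising the divergence between a "present" distribution (conditioned only on observed frames) and a "future" distribution (which also sees what actually happened), then sampling from the present distribution at inference. The snippet below is a minimal sketch of that idea only, not the authors' implementation; the module names, feature dimensions, and diagonal-Gaussian parameterisation are illustrative assumptions.

```python
# Minimal sketch (assumed, not the paper's code) of a present/future
# conditional variational head with a KL term between the two distributions.
import torch
import torch.nn as nn
import torch.distributions as D


class PresentFutureDistributions(nn.Module):
    def __init__(self, feat_dim=512, latent_dim=32):
        super().__init__()
        # Heads predicting diagonal-Gaussian parameters (mean, log-variance).
        self.present_head = nn.Linear(feat_dim, 2 * latent_dim)
        self.future_head = nn.Linear(2 * feat_dim, 2 * latent_dim)

    def _gaussian(self, params):
        mu, log_var = params.chunk(2, dim=-1)
        return D.Normal(mu, torch.exp(0.5 * log_var))

    def forward(self, present_feat, future_feat=None):
        # Present distribution: what could happen given what we have seen.
        present = self._gaussian(self.present_head(present_feat))
        if future_feat is None:
            # Inference: sample diverse futures from the present distribution.
            return present.rsample(), None
        # Future distribution: conditioned on observed and future features.
        future = self._gaussian(
            self.future_head(torch.cat([present_feat, future_feat], dim=-1)))
        # Training: KL(future || present) pulls the present distribution
        # towards covering the future that was actually observed.
        kl = D.kl_divergence(future, present).sum(-1).mean()
        return future.rsample(), kl
```

In training, the sampled latent would be passed to the decoders for future semantic segmentation, depth, and optical flow, with the KL term added to the task losses; at test time, repeated sampling from the present distribution yields diverse future rollouts.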

Conference Name

European Conference on Computer Vision (ECCV)

Sponsorship
Toshiba Europe, grant G100453