Recurrent Variational Autoencoders for Learning Nonlinear Generative Models in the Presence of Outliers

Accepted version
Peer-reviewed

Type

Article

Authors

Wang, Yu 
Dai, Bin 
Hua, Gang 
Aston, JAD 
Wipf, David 

Abstract

This paper explores two useful modifications of the recent variational autoencoder (VAE), a popular deep generative modeling framework that dresses traditional autoencoders with probabilistic attire. The first involves a specially tailored form of conditioning that allows us to simplify the VAE decoder structure while simultaneously introducing robustness to outliers. In a related vein, a second, complementary alteration is proposed to further build invariance to contaminated or dirty samples via a data augmentation process that amounts to recycling. In brief, to the extent that the VAE is legitimately a representative generative model, each output from the decoder should closely resemble an authentic sample, which can then be resubmitted as a novel input ad infinitum. Moreover, this can be accomplished via special recurrent connections without the need to train additional parameters. We evaluate these proposals on multiple practical outlier-removal and generative modeling tasks involving nonlinear low-dimensional manifolds, demonstrating considerable improvements over existing algorithms.
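
The recycling mechanism described in the abstract lends itself to a short illustration. Below is a minimal Python/PyTorch sketch, not the authors' implementation: a toy VAE whose reconstructions are resubmitted as fresh inputs through the same weight-shared encoder and decoder, so the recurrent passes introduce no additional parameters. The layer sizes, the recycling depth T, the equal loss weighting across passes, and the choice to detach recycled inputs are all illustrative assumptions.

# Illustrative sketch (not the paper's code): a toy VAE that "recycles" its
# reconstructions as new inputs through the same weight-shared networks.
import torch
import torch.nn as nn

class RecyclingVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16, h_dim=256):  # sizes are assumptions
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def step(self, x):
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        return self.dec(z), mu, logvar

    def forward(self, x, T=3):  # T recycling passes; the depth is an assumption
        total = 0.0
        for _ in range(T):
            x_hat, mu, logvar = self.step(x)
            recon = nn.functional.binary_cross_entropy(x_hat, x, reduction="sum")
            kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
            total = total + recon + kl
            # Resubmit the reconstruction as a novel input; the same encoder and
            # decoder weights are reused, so the recurrence adds no trainable
            # parameters.
            x = x_hat.detach()
        return total / T

model = RecyclingVAE()
loss = model(torch.rand(8, 784))  # dummy batch of inputs in [0, 1]
loss.backward()

As a design note, detaching the recycled input treats each reconstruction as a genuinely new sample rather than backpropagating through the whole recycling chain; either choice is compatible with the parameter-free recurrence described in the abstract.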

Keywords

Deep generative models, variational autoencoder, robust PCA, outlier removal, variational Bayesian model, deep learning

Journal Title

IEEE Journal on Selected Topics in Signal Processing

Journal ISSN

1932-4553 (print)
1941-0484 (electronic)

Publisher

IEEE

Sponsorship

Engineering and Physical Sciences Research Council (EP/N014588/1)
Y. Wang and J. Aston are sponsored by the EPSRC Centre for Mathematical Imaging in Healthcare, EP/N014588/1.