Recurrent Variational Autoencoders for Learning Nonlinear Generative Models in the Presence of Outliers
Abstract
This paper explores two useful modifications of the recent variational autoencoder (VAE), a popular deep generative modeling framework that dresses traditional autoencoders with probabilistic attire. The first involves a specially tailored form of conditioning that allows us to simplify the VAE decoder structure while simultaneously introducing robustness to outliers. In a related vein, a second, complementary alteration is proposed to further build invariance to contaminated or dirty samples via a data augmentation process that amounts to recycling. In brief, to the extent that the VAE is legitimately a representative generative model, each output from the decoder should closely resemble an authentic sample, which can then be resubmitted as a novel input ad infinitum. Moreover, this can be accomplished via special recurrent connections without the need for additional parameters to be trained. We evaluate these proposals on multiple practical outlier-removal and generative modeling tasks involving nonlinear low-dimensional manifolds, demonstrating considerable improvements over existing algorithms.
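To make the recycling idea concrete, below is a minimal PyTorch sketch, not the authors' implementation, of a VAE whose decoder outputs are fed back through the same encoder and decoder for a few extra passes, reusing the existing weights so that no additional parameters are trained. The module name RecurrentVAE, the layer sizes, the number of recycling passes, and the choices to detach the recycled samples and score each pass against its own input are all illustrative assumptions rather than details taken from the paper.

```python
# A minimal sketch (assumed, not the authors' code): decoder outputs are
# recycled as fresh inputs through the same encoder/decoder, so the extra
# recurrent passes introduce no new trainable parameters.
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentVAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, h_dim), nn.ReLU())
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec = nn.Sequential(nn.Linear(z_dim, h_dim), nn.ReLU(),
                                 nn.Linear(h_dim, x_dim), nn.Sigmoid())

    def step(self, x):
        # One standard VAE pass: encode, reparameterize, decode, KL term.
        h = self.enc(x)
        mu, logvar = self.mu(h), self.logvar(h)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()
        x_hat = self.dec(z)
        kl = -0.5 * (1 + logvar - mu.pow(2) - logvar.exp()).sum(dim=1)
        return x_hat, kl

    def forward(self, x, recycles=2):
        # First pass on the real (possibly contaminated) input, then feed
        # each reconstruction back in as a novel sample for `recycles`
        # additional passes through the shared weights.
        loss, inp = 0.0, x
        for _ in range(recycles + 1):
            x_hat, kl = self.step(inp)
            rec = F.binary_cross_entropy(x_hat, inp, reduction='none').sum(dim=1)
            loss = loss + (rec + kl).mean()
            inp = x_hat.detach()  # detaching here is one design choice, not prescribed by the paper
        return loss

model = RecurrentVAE()
x = torch.rand(32, 784)  # stand-in batch with values in [0, 1]
loss = model(x, recycles=2)
loss.backward()
```

In this sketch the recycled passes act as free data augmentation: to the degree that the reconstructions resemble authentic samples, they supply extra training inputs at no parameter cost, which is the intuition the abstract describes.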