Recurrent Variational Autoencoders for Learning Nonlinear Generative Models in the Presence of Outliers
IEEE Journal on Selected Topics in Signal Processing
Wang, Y., Dai, B., Hua, G., Aston, J., & Wipf, D. (2018). Recurrent Variational Autoencoders for Learning Nonlinear Generative Models in the Presence of Outliers. IEEE Journal on Selected Topics in Signal Processing. https://doi.org/10.1109/JSTSP.2018.2876995
This paper explores two useful modifications of the recent variational autoencoder (VAE), a popular deep generative modeling framework that dresses traditional autoencoders with probabilistic attire. The first involves a specially tailored form of conditioning that allows us to simplify the VAE decoder structure while simultaneously introducing robustness to outliers. In a related vein, a second, complementary alteration is proposed to further build invariance to contaminated or dirty samples via a data augmentation process that amounts to recycling. In brief, to the extent that the VAE is legitimately a representative generative model, each output from the decoder should closely resemble an authentic sample, and can therefore be resubmitted as a novel input ad infinitum. Moreover, this can be accomplished via special recurrent connections without the need for additional trainable parameters. We evaluate these proposals on multiple practical outlier-removal and generative modeling tasks involving nonlinear low-dimensional manifolds, demonstrating considerable improvements over existing algorithms.
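The recycling idea described in the abstract can be illustrated with a minimal sketch: reconstructions are fed back through the same encoder/decoder weights, so the recurrence adds no new trainable parameters. The linear toy model, weight shapes, and function names below are assumptions for illustration only, not the paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy weights for a linear encoder and decoder (assumed shapes,
# chosen only for illustration).
W_enc = rng.standard_normal((4, 8)) * 0.1   # 8-dim input -> 4-dim code
W_dec = rng.standard_normal((8, 4)) * 0.1   # 4-dim code -> 8-dim reconstruction

def autoencode(x):
    """One encoder/decoder pass; stands in for the VAE's mean path."""
    z = np.tanh(W_enc @ x)   # encode to latent code
    return W_dec @ z         # decode back to data space

def recycle(x, steps=3):
    """Resubmit each reconstruction as a new input.

    The same weights are reused at every step, so the recurrence
    introduces no additional parameters to train, mirroring the
    recycling mechanism sketched in the abstract.
    """
    outputs = []
    for _ in range(steps):
        x = autoencode(x)
        outputs.append(x.copy())
    return outputs

x0 = rng.standard_normal(8)          # a (possibly contaminated) sample
recons = recycle(x0, steps=3)
print(len(recons), recons[0].shape)  # three recycled reconstructions
```

In training, each recycled reconstruction would serve as an extra (augmented) input, so samples closer to the learned manifold reinforce the model while outlier energy is progressively filtered out.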
Y. Wang and J. Aston are sponsored by the EPSRC Centre for Mathematical Imaging in Healthcare, EP/N014588/1.
Engineering and Physical Sciences Research Council (EP/N014588/1)
External DOI: https://doi.org/10.1109/JSTSP.2018.2876995
This record's URL: https://www.repository.cam.ac.uk/handle/1810/286783