A lifted Bregman formulation for the inversion of deep neural networks

Published version
Peer-reviewed

Authors

Wang, X 
Benning, M 

Abstract

We propose a novel framework for the regularized inversion of deep neural networks. The framework is based on the authors' recent work on training feed-forward neural networks without the differentiation of activation functions. The framework lifts the parameter space into a higher-dimensional space by introducing auxiliary variables, and penalizes these variables with tailored Bregman distances. We propose a family of variational regularizations based on these Bregman distances, present theoretical results, and support their practical application with numerical examples. In particular, we present the first convergence result (to the best of our knowledge) for the regularized inversion of a single-layer perceptron that only assumes that the solution of the inverse problem is in the range of the regularization operator, and that shows that the regularized inverse provably converges to the true inverse if measurement errors converge to zero.
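To make the idea of penalizing auxiliary variables with a tailored Bregman distance concrete, here is a minimal illustrative sketch (not the paper's exact formulation). It assumes the activation is ReLU, which is the proximal map of the indicator function of the nonnegative orthant; for such activations, a lifted penalty of the form B(z, y) = ½‖z‖² + ½‖relu(y)‖² − ⟨z, y⟩ (for z ≥ 0) is nonnegative and vanishes exactly when the auxiliary variable z equals the activation output relu(y), so minimizing it enforces the network equations without differentiating the activation:

```python
import numpy as np

def relu(y):
    return np.maximum(y, 0.0)

def bregman_penalty(z, y):
    """Illustrative lifted Bregman-type penalty for the ReLU activation.

    For z >= 0, B(z, y) = 0.5*||z||^2 + 0.5*||relu(y)||^2 - <z, y>
    is nonnegative and equals zero exactly when z = relu(y).
    This is a sketch of the general principle, not the authors' exact penalty.
    """
    assert np.all(z >= 0), "auxiliary variable must lie in the constraint set"
    return 0.5 * np.dot(z, z) + 0.5 * np.dot(relu(y), relu(y)) - np.dot(z, y)

y = np.array([1.5, -2.0, 0.3])          # pre-activation values (hypothetical)
print(bregman_penalty(relu(y), y))       # 0.0: penalty vanishes at z = relu(y)
print(bregman_penalty(np.ones(3), y))    # > 0: any other z is penalized
```

The key design point this illustrates is that the penalty is smooth in y (the pre-activation, and hence in the network parameters), so the nonsmooth activation never needs to be differentiated during optimization.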

Description

Peer reviewed: True

Keywords

4901 Applied Mathematics, 49 Mathematical Sciences, 4905 Statistics

Journal Title

Frontiers in Applied Mathematics and Statistics

Journal ISSN

2297-4687

Publisher

Frontiers Media SA