Choose your path wisely: gradient descent in a Bregman distance framework
Accepted version
Peer-reviewed
Repository URI
Repository DOI
Change log
Authors
Abstract
We propose an extension of a special form of gradient descent, known in the literature as linearised Bregman iteration, to a larger class of non-convex functions. We replace the classical (squared) two-norm metric in the gradient descent setting with a generalised Bregman distance based on a proper, convex and lower semi-continuous function. The algorithm's global convergence is proven for functions that satisfy the Kurdyka-Łojasiewicz property. Examples illustrate that features of different scale are introduced throughout the iteration, transitioning from coarse to fine. This coarse-to-fine approach with respect to scale allows us to recover solutions of non-convex optimisation problems that are superior to those obtained with conventional gradient descent, or even with projected and proximal gradient descent. The effectiveness of linearised Bregman iteration combined with early stopping is illustrated for the applications of parallel magnetic resonance imaging, blind deconvolution, and image classification with neural networks.
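As context for the abstract, the classical linearised Bregman iteration that the paper generalises can be sketched in its best-known convex instance: sparse least-squares recovery with the Bregman function J(x) = λ‖x‖₁ + ½‖x‖₂². This is a minimal illustrative sketch; the toy problem, variable names, and parameter values are assumptions for demonstration and are not taken from the paper, whose non-convex setting is more general.

```python
import numpy as np

# Toy sparse recovery problem (illustrative assumption, not from the paper):
# minimise f(x) = 0.5 * ||A x - b||^2 by linearised Bregman iteration
# with Bregman function J(x) = lam * ||x||_1 + 0.5 * ||x||_2^2.
rng = np.random.default_rng(0)
m, n = 50, 200
A = rng.standard_normal((m, n))
x_true = np.zeros(n)
x_true[rng.choice(n, 5, replace=False)] = rng.standard_normal(5)
b = A @ x_true

lam = 1.0                                # sparsity weight in J
tau = 1.0 / np.linalg.norm(A, 2) ** 2    # step size, well below 2 / ||A||^2

p = np.zeros(n)   # subgradient variable, p_k lies in the subdifferential of J at x_k
x = np.zeros(n)
for _ in range(2000):
    grad = A.T @ (A @ x - b)             # gradient of the data fit f at x_k
    p = p - tau * grad                   # Bregman (dual/subgradient) update
    # x_{k+1} = argmin_x  J(x) - <p, x>,  which is soft-thresholding:
    x = np.sign(p) * np.maximum(np.abs(p) - lam, 0.0)

print("relative residual:", np.linalg.norm(A @ x - b) / np.linalg.norm(b))
```

Components of large magnitude enter the iterate first and smaller ones later, which is the coarse-to-fine (inverse scale space) behaviour the abstract refers to; stopping the loop early therefore acts as a regulariser.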
Description
Keywords
Journal Title
Conference Name
Journal ISSN
1936-4954
Volume Title
Publisher
Publisher DOI
Rights
Sponsorship
Leverhulme Trust (RPG-2015-250)
Engineering and Physical Sciences Research Council (EP/N014588/1)
European Commission Horizon 2020 (H2020) Marie Skłodowska-Curie actions (691070)
European Commission Horizon 2020 (H2020) Marie Skłodowska-Curie actions (777826)
Leverhulme Trust (RPG-2018-121)
Leverhulme Trust (PLP-2017-275)
Alan Turing Institute (Unknown)
EPSRC (EP/S026045/1)
EPSRC (EP/T017961/1)
Royal Society (RSWF\R3\193016)
EPSRC (EP/T003553/1)