
Analysis of stochastic gradient descent in continuous time

Published version
Peer-reviewed

Abstract

Stochastic gradient descent is an optimisation method that combines classical gradient descent with random subsampling within the target functional. In this work, we introduce the stochastic gradient process as a continuous-time representation of stochastic gradient descent. The stochastic gradient process is a dynamical system coupled with a continuous-time Markov process living on a finite state space. The dynamical system -- a gradient flow -- represents the gradient descent part; the process on the finite state space represents the random subsampling. Processes of this type are, for instance, used to model clonal populations in fluctuating environments. After introducing the stochastic gradient process, we study its theoretical properties: we show that it converges weakly to the gradient flow with respect to the full target function as the learning rate approaches zero. We give conditions under which the stochastic gradient process with constant learning rate is exponentially ergodic in the Wasserstein sense. We then study the case where the learning rate goes to zero sufficiently slowly and the single target functions are strongly convex. In this case, the process converges weakly to the point mass concentrated at the global minimum of the full target function, indicating consistency of the method. We conclude with a discussion of discretisation strategies for the stochastic gradient process and numerical experiments.
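The construction described in the abstract can be illustrated with a small simulation. The sketch below is a hypothetical toy example, not the authors' code: the single targets are quadratics f_i(x) = (x - a_i)^2/2 (so the full target has its minimum at the mean of the a_i), the index process jumps at rate 1/eta where eta plays the role of the learning rate, and between jumps the state follows the exact gradient flow of the currently selected f_i.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data: N quadratic targets f_i(x) = (x - a_i)^2 / 2,
# so the full target (1/N) * sum_i f_i is minimised at mean(a).
a = np.array([-1.0, 0.5, 2.0])
N = len(a)

def simulate(eta, T, x0=5.0):
    """Simulate the stochastic gradient process on [0, T].

    Between jumps of a Markov process on the index set {0, ..., N-1},
    the state follows the gradient flow dx/dt = -(x - a_i), which for
    these quadratics has the exact solution
        x(t) = a_i + (x(0) - a_i) * exp(-t).
    Jumps occur after Exp(1/eta) waiting times, i.e. at rate 1/eta.
    """
    x, t = x0, 0.0
    i = rng.integers(N)
    while t < T:
        tau = rng.exponential(eta)           # waiting time until next jump
        dt = min(tau, T - t)
        x = a[i] + (x - a[i]) * np.exp(-dt)  # exact gradient-flow step
        t += dt
        i = rng.integers(N)                  # resample the data index
    return x

# For small eta the index switches quickly, the gradients average out,
# and the process tracks the full-target gradient flow towards mean(a).
print(simulate(eta=0.01, T=20.0))
```

With a small constant learning rate the terminal state fluctuates around the global minimum mean(a) = 0.5, consistent with the exponential-ergodicity result sketched in the abstract; larger eta produces larger stationary fluctuations.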

Keywords

Stochastic optimisation, Ergodicity, Piecewise-deterministic Markov processes, Wasserstein distance

Journal Title

Statistics and Computing

Journal ISSN

0960-3174
1573-1375

Volume Title

31

Publisher

Springer Science and Business Media LLC

Sponsorship

EPSRC (EP/S026045/1)