On fundamental computational barriers in the mathematics of information
Abstract
This thesis concerns computational theory in the setting of the mathematics of information. The first goal is to demonstrate that many commonly considered problems in optimisation theory cannot be solved by any algorithm when the input data is known only up to an arbitrarily small error (modelling the fact that most real numbers cannot be represented to infinite precision on a floating-point computational device). These problems include computing the minimisers of basis pursuit, linear programming, lasso and image deblurring, as well as finding an optimal neural network given training data. These results are somewhat paradoxical given the success that existing algorithms exhibit when tackling these problems on real-world datasets, and a substantial portion of this thesis is dedicated to explaining this apparent disparity, particularly in the context of compressed sensing. Doing so requires the introduction of a variety of new concepts, including that of a breakdown epsilon, which may have broader applicability to computational problems beyond those central to this thesis. We conclude with a discussion of future research directions opened up by this work.
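For orientation, a minimal sketch of the standard formulations of two of the problems named above, basis pursuit and lasso, written in generic notation. The symbols A, y, delta and lambda are introduced here purely for illustration and are assumptions on my part; the thesis may use different notation or variants of these problems.

% Standard formulations (illustrative; notation assumed, not taken from the thesis).
% Basis pursuit with denoising parameter $\delta \ge 0$: minimise the $\ell^1$ norm
% subject to a data-fidelity constraint.
\begin{equation*}
  \min_{x \in \mathbb{R}^N} \|x\|_1 \quad \text{subject to} \quad \|Ax - y\|_2 \le \delta .
\end{equation*}
% Lasso (unconstrained form) with regularisation parameter $\lambda > 0$:
\begin{equation*}
  \min_{x \in \mathbb{R}^N} \|Ax - y\|_2^2 + \lambda \|x\|_1 .
\end{equation*}

In both cases the non-computability results described in the abstract concern producing (approximations to) the minimisers when A and y are available only through arbitrarily accurate, but inexact, rational approximations.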