Which neural networks can be computed by an algorithm? – Generalised hardness of approximation meets Deep Learning

Thesing, Laura 
Hansen, Anders C 

Abstract: Classical hardness of approximation (HA) is the phenomenon that, assuming P ≠ NP, one can easily compute an ϵ-approximation to the solution of a discrete computational problem for ϵ > ϵ₀ > 0, but for ϵ < ϵ₀, where ϵ₀ is the approximation threshold, the problem becomes intractable. Recently, a similar yet more general phenomenon has been documented in AI: generalised hardness of approximation (GHA). This phenomenon includes the following occurrence: for any approximation threshold ϵ₁ > 0, there are AI problems for which provably there exist stable neural networks (NNs) that solve the problem, yet no algorithm can compute any NN that approximates the AI problem to ϵ₁-accuracy. Moreover, this issue is independent of the P vs NP question and is thus a rather different mathematical phenomenon than HA. GHA implies that the universal approximation theorem for NNs provides only a partial understanding of the power of NNs in AI. A classification theory describing which NNs can be computed by algorithms to particular accuracies is therefore needed to fill this gap. We initiate such a theory by showing the correspondence between the functions that can be computed to ϵ-accuracy by an algorithm and the functions that can be approximated by NNs which can themselves be computed to ϵ̂-accuracy by an algorithm. In particular, the approximation thresholds ϵ and ϵ̂ cannot differ by more than a factor of 12. This means that computing function approximations through NNs is optimal, in the sense of the best approximation accuracy achievable by an algorithm, up to a small constant, compared with any other computational technique.
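The factor-of-12 claim in the abstract can be restated compactly; the following is a paraphrase using the abstract's symbols ϵ (threshold achievable by any algorithm) and ϵ̂ (threshold achievable via algorithmically computed NNs), not a formula taken from the paper:

```latex
% Paraphrase of the factor-of-12 bound: the two approximation
% thresholds can differ by at most a multiplicative factor of 12,
% in either direction.
\[
  \max\!\left(\frac{\epsilon}{\hat{\epsilon}},\;\frac{\hat{\epsilon}}{\epsilon}\right) \le 12 .
\]
```

Read one way, this says that approximating a function through computed NNs loses at most a constant factor in achievable accuracy relative to any other computational technique.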

46 Information and Computing Sciences, 4611 Machine Learning