On existence, stability, accuracy and learning of approximate decoders for ill-posed inverse problems
Abstract
Artificial intelligence (AI) methods are transforming the sciences and scientific computing, not least in the field of inverse problems. In inverse problems, for example in imaging, AI-based methods appear to achieve higher reconstruction accuracy than standard methods such as compressed sensing. However, trustworthiness has become a serious concern, as there is empirical evidence that deep learning (DL) may lead to unstable methods in inverse problems. Recently, a further phenomenon has been reported in inverse problems in imaging: DL decoders producing false yet realistic-looking artefacts, coined AI-generated hallucinations. This thesis explores the use of DL in inverse problems and aims to provide a theoretical basis for assessing the stability and accuracy of such methods.

In the second chapter, we examine a fully learned neural network approach to image reconstruction, introduced in [192] and coined 'automated transform by manifold approximation'. In particular, we investigate its potential benefits in accuracy, and its disadvantages in stability and robustness, compared with standard methods for image reconstruction. We show that, without further conditions on the sampling operator, such fully learned approaches to solving inverse problems become unstable.

In the third chapter, we present a comprehensive mathematical analysis explaining different causes of AI-generated hallucinations and their links to instabilities. Our results establish four crucial issues for AI methods in inverse problems. Firstly, overly accurate AI methods will wrongly transfer details from one image into another reconstructed image, creating a hallucination. Secondly, there is an accuracy-hallucination trade-off. Thirdly, there is an accuracy-stability trade-off, and optimising these trade-offs through standard training processes is difficult. Lastly, hallucinations can occur for any noise model and any probability distribution on the training set.

In the last chapter, we investigate how DL-based methods for solving inverse problems can perform better than standard methods. To this end, we establish fundamental accuracy bounds for solving ill-posed inverse problems. This is achieved by obtaining upper and lower bounds on a universal optimality constant, which covers the best worst-case error over the noise as well as the average and the statistical reconstruction error for an ill-posed inverse problem. This framework encompasses both linear and non-linear inverse problems with different noise models, and allows one to assess the stability, accuracy and learning of approximate decoders for ill-posed inverse problems.
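For orientation, the following is a minimal sketch, in assumed standard notation not taken from the thesis itself, of the kind of quantities the abstract refers to: a (possibly non-linear) forward map, noisy measurements, a reconstruction map (decoder), its worst-case reconstruction error over a model class and noise level, and the resulting optimality benchmark that upper and lower bounds could target.

% Sketch in assumed notation (not the thesis's own definitions):
% forward map A, model class M, noise level epsilon, decoder phi.
\[
  y = A(x) + e, \qquad x \in \mathcal{M}, \quad \|e\| \le \varepsilon,
\]
% worst-case reconstruction error of a decoder \varphi:
\[
  \mathrm{err}(\varphi) = \sup_{x \in \mathcal{M}} \; \sup_{\|e\| \le \varepsilon}
  \bigl\| \varphi\bigl(A(x) + e\bigr) - x \bigr\|,
\]
% and the best achievable (optimal) worst-case error over all decoders,
% the type of quantity for which upper and lower bounds are sought:
\[
  c_{\mathrm{opt}} = \inf_{\varphi} \, \mathrm{err}(\varphi).
\]

Analogous expectations over a distribution on \(\mathcal{M}\) and over the noise would give average-case and statistical counterparts of the same quantity; the symbols \(\mathcal{M}\), \(\varepsilon\), \(\varphi\) and \(c_{\mathrm{opt}}\) here are illustrative placeholders, not the thesis's notation.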