
Iterative Techniques for Radial Basis Function Interpolation


Type

Thesis

Authors

Faul, Anita Christine 

Abstract

The problem of interpolating functions arises naturally in many areas of applied mathematics and the natural sciences. Radial basis function methods provide an interpolant to the values of a real function of d variables and are highly useful in many applications, especially when the function values are given at scattered data points.
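
For orientation, the interpolant in such methods typically takes the standard form (the precise setting of the dissertation may differ in detail)

    s(x) = \sum_{j=1}^{n} \lambda_j \, \phi(\|x - x_j\|) + p(x), \qquad x \in \mathbb{R}^d,

where \phi is the chosen radial basis function, x_1, \ldots, x_n are the scattered data points, and p is a low-degree polynomial that some choices of \phi, such as thin plate splines, require. The coefficients \lambda_j are determined by the interpolation conditions s(x_i) = f(x_i) for i = 1, \ldots, n.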

The need for iterative procedures arises when the number of interpolation conditions $n$ is large, since hardly any sparsity occurs in the linear system of interpolation equations. Solving this system with direct methods would require $O(n^3)$ operations.
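
As a minimal sketch of this direct approach, and of why it becomes impractical for large n, the following Python fragment assembles the dense interpolation matrix and solves the system by Gaussian elimination; the Gaussian basis function and the name rbf_interpolate are illustrative assumptions, not taken from the dissertation.

import numpy as np

def rbf_interpolate(points, values, phi):
    """Solve the dense interpolation system A lam = f directly."""
    # A[i, j] = phi(||x_i - x_j||): dense, since phi has global support.
    dists = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    A = phi(dists)
    return np.linalg.solve(A, values)   # Gaussian elimination: O(n^3) work

# Example: n = 50 scattered points in d = 2 dimensions, with a Gaussian
# basis function (positive definite, so no polynomial term is needed).
rng = np.random.default_rng(0)
pts = rng.random((50, 2))
vals = np.sin(pts[:, 0]) * np.cos(pts[:, 1])
lam = rbf_interpolate(pts, vals, phi=lambda r: np.exp(-r**2))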

This dissertation considers several iterative techniques. They were developed from an algorithm described by Beatson, Goodsell and Powell (1995), which is examined first. By gaining successively deeper theoretical insight into the original algorithm, new algorithms are developed and connections to known methods are made. We establish the important role a certain semi-inner product plays in the convergence analysis of the original algorithm, and the first proof of convergence is given. This leads to a new technique using line searches, described later. It is then shown that the original algorithm is equivalent to solving a certain symmetric and positive definite system of equations by Gauss-Seidel iterations, so iterative techniques such as Jacobi iterations and conjugate gradient methods follow. This symmetric and positive definite system can be derived from the original system of equations by preconditioning it with a certain matrix. The preconditioned conjugate gradient algorithm was first suggested for this problem by Dyn et al. (1983, 1986), motivated by the variational theory of thin plate splines. It is helpful to view the original algorithm as a linear operator acting on a certain linear space equipped with the aforementioned semi-inner product.
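
To make the semi-inner product concrete, a standard native-space form for radial basis functions may serve as a sketch (the dissertation's precise definition may differ in detail): for s = \sum_j \lambda_j \phi(\|\cdot - x_j\|) and t = \sum_k \mu_k \phi(\|\cdot - x_k\|), one sets

    \langle s, t \rangle = \sum_{j=1}^{n} \sum_{k=1}^{n} \lambda_j \mu_k \, \phi(\|x_j - x_k\|) = \lambda^T A \mu,

where A is the matrix of the interpolation equations. For conditionally positive definite \phi, nonnegativity of \langle s, s \rangle holds only under the usual polynomial side conditions on the coefficients, and polynomials have seminorm zero, which is why this is a semi-inner product rather than an inner product.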

The original algorithm had the drawback that the residuals had to be updated at several stages during each iteration. Another algorithm defers the updates until the end of each iteration, which usually improves efficiency greatly, but it diverges in some cases. Therefore a line search method is developed that ensures convergence.
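
Such a line search can be sketched in the semi-inner product written above; the notation here is illustrative rather than the dissertation's. If s_k is the current approximation to the required interpolant s^* and d_k is the correction proposed by the iteration, one chooses the step length that minimizes the seminorm of the error:

    s_{k+1} = s_k + \alpha_k d_k, \qquad \alpha_k = \frac{\langle s^* - s_k, \, d_k \rangle}{\langle d_k, d_k \rangle}.

The numerator is computable without knowing s^*, since the reproducing property reduces \langle s^* - s_k, d_k \rangle to a sum of the current residuals f(x_j) - s_k(x_j) weighted by the coefficients of d_k.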

The last technique described is a Krylov subspace method, which proved to be very successful. It can be applied to any algorithm that fulfils certain criteria. If the underlying algorithm is convergent, the Krylov subspace technique speeds up the convergence; in cases of divergence, it enforces convergence. It is shown that the Krylov subspace method, applied to the algorithm where updating of the residuals is deferred until the end of each iteration, is analogous to the conjugate gradient technique applied to the aforementioned symmetric and positive definite system of equations.
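
For reference, the textbook conjugate gradient iteration for a symmetric and positive definite system, to which the above equivalence refers, can be sketched in Python as follows; this is the generic method, not the particular preconditioned algorithm of the dissertation.

import numpy as np

def conjugate_gradient(A, b, tol=1e-10, max_iter=None):
    """Textbook conjugate gradients for symmetric positive definite A."""
    x = np.zeros_like(b, dtype=float)
    r = b.astype(float)             # residual b - A x for the start x = 0
    p = r.copy()                    # first search direction
    rs = r @ r
    for _ in range(max_iter or len(b)):
        Ap = A @ p
        alpha = rs / (p @ Ap)       # exact line search along p
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if np.sqrt(rs_new) < tol:   # stop once the residual is small
            break
        p = r + (rs_new / rs) * p   # next direction, conjugate to the old ones
        rs = rs_new
    return x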

All the algorithms are related to one another, and theoretical insight into the properties of one algorithm leads to an improved algorithm. These considerations provide a highly useful theory, linking different techniques for iterative radial basis function interpolation.

Advisors

Powell, Michael

Keywords

Function interpolation, Radial basis function methods

Qualification

Doctor of Philosophy (PhD)

Awarding Institution

University of Cambridge