Numerical methods for systems of highly oscillatory ordinary differential equations

Marianna Khanamiryan
University of Cambridge
Trinity College

This dissertation is submitted for the degree of Doctor of Philosophy
May 20, 2010

Numerical methods for systems of highly oscillatory ordinary differential equations

Marianna Khanamiryan
University of Cambridge
Department of Applied Mathematics and Theoretical Physics
Wilberforce Road, Cambridge CB3 0WA, United Kingdom.

Abstract

This thesis presents methods for the efficient numerical approximation of linear and non-linear systems of highly oscillatory ordinary differential equations.

The phenomenon of high oscillation is a major computational problem occurring in Fourier analysis, computational harmonic analysis, quantum mechanics, electrodynamics and fluid dynamics. Classical methods based on Gaussian quadrature fail to approximate oscillatory integrals. In this work we introduce numerical methods which share the remarkable feature that the accuracy of approximation improves as the frequency of oscillation increases. Asymptotically, the error of our methods decays with inverse powers of the frequency of oscillation, turning a major computational problem into an advantage.

Developing ideas from the method of stationary phase, we first apply the asymptotic method to solve highly oscillatory linear systems of differential equations. The asymptotic method provides the background for our next method, the Filon-type method, which is highly accurate and requires the computation of moments. We also introduce two novel methods. The first, which we call the FM method, combines the Magnus approach with the Filon-type method to approximate the matrix exponential. The second, which we call the WRF method, combines the Filon-type method with waveform relaxation methods for solving highly oscillatory non-linear systems. Finally, completing the theory, we show that the Filon-type method can be replaced by a less accurate but moment-free Levin-type method.

Declaration

I, Marianna Khanamiryan, declare that the thesis entitled "Numerical methods for systems of highly oscillatory ordinary differential equations" and the work presented in the thesis are both my own, and have been generated by me as the result of my own original research, and include nothing which is the outcome of work done in collaboration. I confirm that where I have quoted from the work of others, the source is always given. With the exception of such quotations, this thesis is entirely my own work.

Marianna Khanamiryan

Dedication

For my family, who offered me unconditional love and support throughout the course of this thesis.

Acknowledgements

First of all I would like to express my sincere gratitude to my supervisor, Arieh Iserles. It was Arieh who encouraged me to apply for graduate studies at Cambridge University in the first instance. Arieh's advice on the choice of research topic, and the fantastic opportunity to pursue my PhD at the University of Cambridge, have been invaluable for my career. For very useful discussions and feedback on my research I also must acknowledge, in alphabetical order, Luca Dieci (School of Mathematics, Georgia Institute of Technology), Ernst Hairer (University of Geneva), Liz Mansfield (IMS, Kent), Alexander Ostermann (University of Innsbruck), Brynjulf Owren (NTNU), Chus Sanz-Serna (Universidad de Valladolid), Yen-Hsi Richard Tsai (University of Texas at Austin), Stefan Vandewalle (Katholieke Universiteit Leuven) and Jan Verwer (CWI Amsterdam).
For their efforts and assistance, a very special thanks goes to Uri Ascher (UBC) and David Levin (Tel-Aviv University). I would like to express my deepest gratitude to Olavi Nevanlinna (Helsinki University of Technology) for the valuable advice and the generous support that have been instrumental to the completion of this thesis. I would also like to extend my thanks to my family, for their love, encouragement and editing assistance. Finally, I would be remiss without mentioning Trinity College, University of Cambridge and the Cambridge Overseas Trust for financial assistance. To each of the above, I extend my deepest appreciation.

Marianna Khanamiryan
Cambridge, August 2009

Preface

This thesis presents the results of my PhD research in Numerical Analysis of Differential Equations at the Centre for Mathematical Sciences at Cambridge University. The main focus of my PhD was on developing numerical methods efficient for solving linear and non-linear systems of highly oscillatory differential equations.

The key problem of our discussion throughout the thesis is a family of highly oscillatory vector-valued integrals of the form

$$I[\mathbf{f}] = \int_a^b X_\omega \mathbf{f}\,dt, \qquad X'_\omega = A_\omega X_\omega, \qquad X_\omega(0) = I,$$

where $X_\omega$ is a time-dependent highly oscillatory matrix-valued kernel, satisfying the matrix linear ordinary differential equation above, and $A_\omega$ is a non-singular $d \times d$ matrix with large imaginary eigenvalues, $\sigma(A_\omega) \subset i\mathbb{R}$. We also assume that $\|A_\omega^{-1}\| \ll 1$, that $\omega \gg 1$ is a real parameter describing the frequency of oscillation, and that $\mathbf{f} \in \mathbb{R}^d$ is a smooth vector-valued function. In the current work we present numerical methods for systems of ODEs with a constant matrix $A_\omega$ and with a time-dependent matrix $A_\omega(t)$. In our later discussion we also allow non-linearity in the vector-valued function $\mathbf{f}$.

The motivation to study the family of integrals $I[\mathbf{f}]$ lies in the application of such integrals to the solution of highly oscillatory linear systems of ODEs,

$$\mathbf{y}'(t) = A_\omega \mathbf{y}(t) + \mathbf{f}(t), \qquad \mathbf{y}(0) = \mathbf{y}_0 \in \mathbb{R}^d, \qquad t \ge 0,$$

as well as non-linear systems of ODEs,

$$\mathbf{y}'(t) = A_\omega \mathbf{y}(t) + \mathbf{f}(t, \mathbf{y}(t)), \qquad \mathbf{y}(0) = \mathbf{y}_0 \in \mathbb{R}^d, \qquad t \ge 0.$$

The variation of constants formula provides an analytic representation of the solution for linear systems of ODEs,

$$\mathbf{y}(t) = X_\omega(t)\mathbf{y}_0 + \int_0^t X_\omega(t-\tau)\mathbf{f}(\tau)\,d\tau = X_\omega(t)\mathbf{y}_0 + I[\mathbf{f}(t)],$$

and an implicit representation for non-linear systems of ODEs,

$$\mathbf{y}(t) = X_\omega(t)\mathbf{y}_0 + \int_0^t X_\omega(t-\tau)\mathbf{f}(\tau, \mathbf{y}(\tau))\,d\tau = X_\omega(t)\mathbf{y}_0 + I[\mathbf{f}(t, \mathbf{y}(t))].$$

We first derive efficient numerical methods for solving the family of highly oscillatory vector-valued integrals $I[\mathbf{f}]$, and then apply these novel methods to the solution of systems of highly oscillatory differential equations.

In essence, the matrix $A_\omega$ need not depend on the parameter $\omega$ explicitly. Rather, the largest absolute values of the matrix eigenvalues determine the frequency of oscillation in $I[\mathbf{f}]$, and the norm of the matrix grows with an increase in frequency. However, for numerical purposes we have expressed this information in $\omega$, as a way of showing the improvement in the accuracy of approximation as $\omega$ grows. By this we mean that the larger the eigenvalues of the matrix $A_\omega$, the larger the norm of the matrix, and the better our approximation.

In getting to grips with our underlying task of solving highly oscillatory non-linear systems of ordinary differential equations, we advance gradually, first considering some special cases of the problem.
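Because the variation of constants formula above underpins everything that follows, here is a minimal sketch, not part of the thesis itself, checking it numerically on a small 2 x 2 oscillatory system; the particular matrix, forcing term and tolerances are our own illustrative assumptions.

```python
# A minimal sketch (ours, illustrative) checking the variation-of-constants
# formula y(t) = e^{tA} y0 + int_0^t e^{(t-s)A} f(s) ds on a 2x2 system
# whose matrix has eigenvalues +/- i*omega.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp, quad_vec

omega = 50.0
A = omega * np.array([[0.0, 1.0], [-1.0, 0.0]])   # sigma(A) = {+i omega, -i omega}
f = lambda t: np.array([np.cos(t), 1.0 / (1.0 + t)])
y0 = np.array([1.0, 0.0])
t = 1.0

# Variation of constants: matrix exponential plus the oscillatory integral I[f].
integral, _ = quad_vec(lambda s: expm((t - s) * A) @ f(s), 0.0, t,
                       epsabs=1e-12, epsrel=1e-12)
y_voc = expm(t * A) @ y0 + integral

# Reference: integrate y' = A y + f directly with a tight-tolerance solver.
sol = solve_ivp(lambda s, y: A @ y + f(s), (0.0, t), y0, rtol=1e-12, atol=1e-12)
print(np.linalg.norm(y_voc - sol.y[:, -1]))       # agreement up to tolerances
```

The adaptive quadrature used here becomes expensive as omega grows, which is precisely the difficulty the methods of this thesis address.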
Before beginning to present our methods, in Chapter 1 we give an introduction and background information on classical quadrature methods, highly oscillatory quadrature rules and exponential integrators, [DR84], [HL03].

In Chapter 2 we briefly describe univariate quadrature methods used to approximate highly oscillatory Fourier-type integrals of the form

$$I[f] = \int_a^b f(x) e^{i\omega g(x)}\,dx,$$

where $f, g \in C^\infty$ are smooth, $g$ is strictly monotone in $[a, b]$, $a \le x \le b$, and the frequency is $\omega \gg 1$. We introduce the univariate asymptotic and Filon-type methods for this scheme of Fourier-type integrals. In the same chapter we also provide motivation for our research and establish the link between our work and the work by A. Iserles and S. Nørsett on classical quadrature rules for univariate highly oscillatory integrals, [IN05].

In Chapter 3 we present numerical solvers for highly oscillatory linear systems of ODEs with a constant matrix,

$$\mathbf{y}' = A_\omega \mathbf{y} + \mathbf{f}, \qquad \mathbf{y}(0) = \mathbf{y}_0 \in \mathbb{R}^d, \qquad \mathbf{f} \in \mathbb{R}^d, \qquad t \ge 0.$$

The analytic solution is given by the formula

$$\mathbf{y}(t) = e^{tA_\omega}\mathbf{y}_0 + \int_0^t e^{(t-\tau)A_\omega}\mathbf{f}\,d\tau = e^{tA_\omega}\mathbf{y}_0 + I[\mathbf{f}].$$

We assume that $A_\omega$ is a non-singular constant matrix with large eigenvalues, $\sigma(A_\omega) \subset i\mathbb{R}$, $\|A_\omega^{-1}\| \ll 1$, $\omega \gg 1$ is a real parameter and $\mathbf{f} \in \mathbb{R}^d$ is a smooth vector-valued function. Developing ideas of the stationary phase approximation [Ste93], we present the asymptotic and the Filon-type methods to solve linear systems of highly oscillatory ODEs. Expanding the integral $I[\mathbf{f}]$ into its asymptotic series, we demonstrate the dependence of the approximation error on powers of the matrix inverse, with the implication that for both the asymptotic and the Filon-type methods the error depends on inverse powers of $\omega$. Our methods require the explicit availability of more terms in the asymptotic expansion, leading to the assumptions of non-singularity of the matrix $A_\omega$ and smoothness of the function $\mathbf{f}$, [Kha08b].

In Chapter 4 we focus on linear matrix ODEs,

$$I[\mathbf{f}] = \int_a^b X_\omega(t)\mathbf{f}(t)\,dt, \qquad X'_\omega = A_\omega X_\omega, \qquad X_\omega(0) = I,$$

where $A_\omega$ is a non-singular matrix with large imaginary eigenvalues, $\det(A_\omega) \ne 0$, $\|A_\omega^{-1}\| \ll 1$, $\sigma(A_\omega) \subset i\mathbb{R}$, $\omega \gg 1$ is a real parameter and $\mathbf{f} \in \mathbb{R}^d$ is a smooth vector-valued function. We present Magnus methods, modified Magnus methods, and introduce a novel FM method for solving Lie-type linear equations, [Kha09a].

With the research background provided in previous chapters, in Chapter 5 we solve linear systems with a time-dependent matrix,

$$\mathbf{y}'(t) = A_\omega(t)\mathbf{y}(t) + \mathbf{f}(t), \qquad \mathbf{y}(0) = \mathbf{y}_0 \in \mathbb{R}^d, \qquad t \ge 0,$$

where $A_\omega$ is a non-singular matrix with large eigenvalues, $\sigma(A_\omega) \subset i\mathbb{R}$, $\|A_\omega\| \gg 1$, $\omega \gg 1$ is a real parameter and $\mathbf{f} \in \mathbb{R}^d$ is a smooth vector-valued function. We apply the FM method to linear systems with a time-dependent matrix, [Kha09b].

The special combination of the Filon-type methods and iterative waveform relaxation methods forms a novel WRF method, applied to solving highly oscillatory non-linear systems of ODEs. Remarkably, both the Filon-type method and the WRF method work with end points only, and as such do not require further subdivision of the integration interval. One can obtain a higher order of approximation by adding higher-order derivatives in $I[\mathbf{f}]$. We apply the FM method to linear systems with a time-dependent matrix and develop the WRFM method, a combination of the WRF and FM methods, for non-linear systems. This is the subject of discussion in Chapter 6, [Kha08b], [Kha09b].
In Chapter 7 we present the conclusion to our work and explain why numerical methods based on Taylor's reasoning fail to approximate highly oscillatory systems, in contrast to the remarkable feature of numerical methods based on the asymptotic expansion, where the error term decays with inverse powers of the frequency. Indeed, we show that oscillatory equations are beneficial. Our methods are designed for large frequencies, meaning that the numerical solution improves with an increase in frequency. We mention in passing that the Filon-type methods require the computation of moments, which may not always be available. In Chapter 8 we describe our current project, [Kha08a], on alternative moment-free Filon and Levin-type methods. We also provide our plans for future work on applications to PDEs. It is a fascinating fact that oscillations occur in many mathematical models describing not only physical systems but also, for example, biological systems or even models in human society. We would also like to mention an important application in medicine, where the quality of computed tomography and medical image analysis improves once better techniques are applied to read oscillating patterns in image processing. The bottom line is that oscillatory equations figure prominently in many interdisciplinary subjects, with applications in a very broad sense, yet appear as a major computational problem in those fields.

This dissertation is the result of my own work and is based on a series of articles, [Kha08b], [Kha09b], [Kha08a], [Kha09a].

Table of Contents

Declaration
Dedication
Acknowledgements
Preface
1 Introduction
1.1 Classical quadrature rules
1.2 Highly oscillatory quadrature
1.3 Exponential integrators
2 Fourier-type oscillators
2.1 The univariate asymptotic method
2.2 The univariate Filon-type method
2.3 Applications to linear systems
3 Highly oscillatory linear systems with constant variables
3.1 The asymptotic method for matrix-valued kernels
3.2 The Filon-type method for matrix-valued kernels
4 The FM methods
4.1 The matrix linear ODEs
4.2 The Magnus method
4.3 The modified Magnus method
4.4 The FM method
5 Highly oscillatory linear systems with variable coefficients
5.1 The asymptotic method
5.2 The FM method
5.3 The modified FM method
6 Highly oscillatory non-linear systems
6.1 Waveform relaxation methods
6.2 WRF method
6.3 WRFM method
7 Conclusions
7.1 Asymptotic expansion versus Taylor expansion for highly oscillatory ODEs
8 Future work
8.1 Moment-free approximation
8.2 Applications to PDEs
References
Chapter 1
Introduction

1.1 Classical quadrature rules

Numerical integration constitutes a broad family of algorithms for calculating the numerical value of a definite integral and, by extension, the term is also sometimes used to describe the numerical solution of differential equations. The term numerical quadrature, often abbreviated to quadrature, is more or less a synonym for numerical integration, especially as applied to univariate integrals. Multivariate integration is sometimes described as cubature, although the meaning of quadrature is understood for three- and higher-dimensional integration as well.

The basic problem considered by numerical integration is to compute an approximate solution to a definite integral,

$$\int_a^b f(x)\,dx.$$

If $f(x)$ is a smooth function and the limits of integration are bounded, there are many methods of approximating the integral with arbitrary precision. We will briefly list some of them, [DR84].

In many applications the integrand $f(x)$ may be known only at certain points, such as obtained by sampling. Or it may be possible to find an anti-derivative symbolically, but easier to compute a numerical approximation than to evaluate the anti-derivative. This may be the case if the anti-derivative is given as an infinite series or product, or if its evaluation requires a special function which is not available. And finally, a formula for the integrand may be known, but it may be difficult or impossible to find an anti-derivative which is an elementary function. The error function is a good example,

$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \int_0^z e^{-t^2}\,dt.$$

It is a special non-elementary function which occurs in probability, statistics, materials science and partial differential equations. The error function is an entire function: it has no singularities except that at infinity, and its Taylor expansion always converges. The integral of the error function cannot be evaluated in closed form in terms of elementary functions. However, by expanding the integrand $e^{-z^2}$ into its Taylor series and integrating term by term, one obtains the error function's Taylor series,

$$\operatorname{erf}(z) = \frac{2}{\sqrt{\pi}} \sum_{n=0}^{\infty} \frac{(-1)^n z^{2n+1}}{n!(2n+1)} = \frac{2}{\sqrt{\pi}} \left( z - \frac{z^3}{3} + \frac{z^5}{10} - \frac{z^7}{42} + \frac{z^9}{216} - \cdots \right).$$

Taylor expansion is a valid tool in constructing classical numerical methods of approximation. However, in the chapters to come we show that numerical methods based on Taylor's reasoning are not suitable for the approximation of highly oscillatory integrals. Instead, methods based on the asymptotic expansion prove to be more accurate in the numerical approximation of highly oscillatory integrals.

The following section presents a brief review of numerical integrators. We refer our reader to [AS64], [DR84], [Ise09b] and [RR01] for more information.

The most straightforward numerical integration technique uses the Newton–Cotes formulas, [RR01], which approximate a function tabulated at a sequence of regularly spaced intervals by polynomials of various degrees. Assume that the function defined on the interval $[a, b]$ is known at equally spaced points $a = x_0, x_1, \ldots, x_n = b$. There are two types of Newton–Cotes formulas: the so-called closed type uses the function value at all points, and the open type does not use the function values at the endpoints.
The closed Newton–Cotes formula of degree $n$ is given by

$$\int_a^b w(x) f(x)\,dx \approx \sum_{i=0}^n W_i f(x_i),$$

where $w(x)$ is a weight function and the $W_i$ are Cotes numbers, defined as follows,

$$W_i = \int_a^b w(x)\,\ell_i(x)\,dx.$$

With the help of Lagrangian interpolation,

$$\ell_i(x) = \frac{p_{n+1}(x)}{(x - x_i)\, p'_{n+1}(x_i)}, \qquad p_{n+1}(x) = \prod_{j=0}^n (x - x_j),$$

we find the weights,

$$\int_a^b w(x) f(x)\,dx = \sum_{i=0}^n W_i f(x_i) + \frac{1}{(n+1)!} \int_a^b w(x)\, p_{n+1}(x)\, f^{(n+1)}(\xi)\,dx,$$

where the $(n+1)$st derivative appears as a factor in the error term. Thus the rule recovers polynomials of degree $n$ or less exactly. For the open Newton–Cotes formula,

$$\int_a^b w(x) f(x)\,dx \approx \sum_{i=1}^{n-1} W_i f(x_i),$$

with $n - 1$ weights and exact polynomial recovery of degree $n - 2$.

If the endpoints are tabulated, then for $w(x) = 1$ and $n = 1$ the Newton–Cotes formula yields the trapezoidal rule,

$$\int_a^b f(x)\,dx \approx (b-a)\,\frac{f(a) + f(b)}{2}.$$

To calculate this integral more accurately, one first splits the interval of integration $[a, b]$ into $n$ smaller subintervals, and then applies the trapezoidal rule on each of them. This results in the composite trapezoidal rule,

$$\int_a^b f(x)\,dx \approx \frac{b-a}{n} \left[ \frac{f(a) + f(b)}{2} + \sum_{k=1}^{n-1} f\!\left(a + k\,\frac{b-a}{n}\right) \right].$$

A generalization of the trapezoidal rule is Romberg integration, which can yield accurate results for many fewer function evaluations. The method uses acceleration techniques, such as Richardson extrapolation, to improve the results of a numerical method: from a method of order $O(h^p)$ it produces a method of higher order. Suppose we want to approximate a quantity $Q$, and we have available approximations $Q(h)$ for $h > 0$. Typically, this approximation is of a certain order, $Q = Q(h) + O(h^p)$. But often more can be said: for some quantity $A$ not depending on $h$,

$$Q = Q(h) + A h^p + O(h^{p+1}).$$

The idea of Richardson extrapolation is to take two different values of $h$, typically $h_1 = h$ and $h_2 = h/2$, and eliminate the $A$ term,

$$Q_R = \frac{h_2^p\, Q(h_1) - h_1^p\, Q(h_2)}{h_2^p - h_1^p}.$$

The Romberg integration method can now be defined inductively as follows,

$$R(0,0) = \tfrac{1}{2}(b-a)\bigl(f(a) + f(b)\bigr),$$
$$R(n,0) = \tfrac{1}{2} R(n-1,0) + h_n \sum_{k=1}^{2^{n-1}} f\bigl(a + (2k-1)h_n\bigr),$$
$$R(n,m) = R(n,m-1) + \frac{1}{4^m - 1}\bigl(R(n,m-1) - R(n-1,m-1)\bigr).$$

In the same way as the trapezoidal rule, Simpson's rule can be derived from the Newton–Cotes formula for $n = 2$,

$$\int_a^b f(x)\,dx \approx \frac{b-a}{6} \left[ f(a) + 4 f\!\left(\frac{a+b}{2}\right) + f(b) \right].$$

Simpson's rule is a special case of Romberg's method. The theory of this method shows that Simpson's rule is exact when the integrand is a polynomial of degree three or lower. The composite Simpson's rule is given by

$$\int_a^b f(x)\,dx \approx \frac{h}{3} \left[ f(x_0) + 2 \sum_{j=1}^{n/2-1} f(x_{2j}) + 4 \sum_{j=1}^{n/2} f(x_{2j-1}) + f(x_n) \right],$$

where $x_j = a + jh$ for $j = 0, 1, \ldots, n$ with $h = (b - a)/n$; in particular, $x_0 = a$ and $x_n = b$. The above formula can also be written as

$$\int_a^b f(x)\,dx \approx \frac{h}{3} \bigl[ f(x_0) + 4f(x_1) + 2f(x_2) + 4f(x_3) + 2f(x_4) + \cdots + 4f(x_{n-1}) + f(x_n) \bigr].$$

The error committed by the composite Simpson's rule is bounded in absolute value by

$$\frac{h^4}{180}(b-a) \max_{\xi \in [a,b]} |f^{(4)}(\xi)|,$$

where $h = (b - a)/n$ is the step size. This formulation splits the interval $[a, b]$ into subintervals of equal length. In practice, it is often advantageous to use subintervals of different lengths and concentrate the effort on the places where the integrand is less well behaved. This leads to the adaptive Simpson's method. A compact sketch of the composite trapezoidal rule and of Romberg's tableau follows.
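The recursions above translate directly into a few lines of code; the following sketch, with our own naming and test integrand, is one minimal rendering.

```python
# A compact sketch (ours) of the composite trapezoidal rule and Romberg's
# tableau, exactly as defined by R(0,0), R(n,0) and R(n,m) above.
import math

def trapezoid(f, a, b, n):
    """Composite trapezoidal rule with n equal subintervals."""
    h = (b - a) / n
    return h * (0.5 * (f(a) + f(b)) + sum(f(a + k * h) for k in range(1, n)))

def romberg(f, a, b, levels):
    """Romberg tableau R(n, m); returns the most extrapolated entry."""
    R = [[0.5 * (b - a) * (f(a) + f(b))]]            # R(0,0)
    for n in range(1, levels):
        h = (b - a) / 2 ** n
        # R(n,0): halve the step, reuse previous values, add new midpoints.
        new = 0.5 * R[n - 1][0] + h * sum(f(a + (2 * k - 1) * h)
                                          for k in range(1, 2 ** (n - 1) + 1))
        row = [new]
        for m in range(1, n + 1):
            # Richardson extrapolation eliminates the leading error term.
            row.append(row[m - 1] + (row[m - 1] - R[n - 1][m - 1]) / (4 ** m - 1))
        R.append(row)
    return R[-1][-1]

print(trapezoid(math.sin, 0.0, math.pi, 64), romberg(math.sin, 0.0, math.pi, 6))
# Both approach the exact value 2; Romberg converges far faster per evaluation.
```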
If the function is known analytically instead of being tabulated at equally spaced intervals, the best numerical method of integration is called Gaussian quadrature. By picking the abscissas at which to evaluate the function, Gaussian quadrature produces the most accurate approximations possible. An $n$-point Gaussian quadrature rule is a quadrature rule constructed to yield an exact result for polynomials of degree $2n - 1$ or less by a suitable choice of the points $x_i$ and weights $w_i$ for $i = 1, \ldots, n$. The formula

$$\int_a^b f(x)\,dx \approx \frac{b-a}{2} \sum_{i=1}^n w_i\, f\!\left(\frac{b-a}{2} x_i + \frac{a+b}{2}\right)$$

is used whenever the integrand is sufficiently smooth, and the gain in the number of nodes is substantial. Suppose that the integrated function can be written as $f(x) = W(x) g(x)$, where $g(x)$ is approximately polynomial and $W(x)$ is known; then there are alternative weights such that

$$\int_{-1}^1 f(x)\,dx = \int_{-1}^1 W(x) g(x)\,dx \approx \sum_{i=1}^n w_i\, g(x_i).$$

Common weighting functions include the Gauss–Chebyshev weight $W(x) = (1 - x^2)^{-1/2}$ and the Gauss–Hermite weight $W(x) = e^{-x^2}$.

1.2 Highly oscillatory quadrature

The first known numerical quadrature scheme for oscillatory integrals was developed in 1928 by Louis Napoleon George Filon [IN05]. Filon presented a method for efficiently computing the Fourier integrals

$$\int_a^b f(x)\sin(\omega x)\,dx \qquad \text{and} \qquad \int_a^b \frac{f(x)}{x}\sin(\omega x)\,dx.$$

We will see that asymptotic expansions often lie at the heart of oscillatory quadrature. We aim to find and investigate methods which preserve the asymptotic properties of an asymptotic expansion whilst allowing for arbitrarily high accuracy for a fixed frequency. Fortunately, methods with these properties have been developed, in particular the Filon method and the Levin collocation method.

As originally constructed, the Filon method consists of dividing the interval into $2n$ panels of size $h$ and applying a modified Simpson's rule on each panel. In other words, $f$ is interpolated at the endpoints and midpoint of each panel by a quadratic. In each panel the integral becomes a polynomial multiplied by the oscillatory kernel $\sin \omega x$, which can be integrated in closed form. We determine the quadratic for the $k$th panel as

$$v_k(x) = c_{k,0} + c_{k,1} x + c_{k,2} x^2,$$

by solving the system

$$v_k(x_{2k}) = f(x_{2k}), \qquad v_k(x_{2k+1}) = f(x_{2k+1}), \qquad v_k(x_{2k+2}) = f(x_{2k+2}).$$

We then sum up the approximation on each subinterval,

$$\int_a^b f(x)\sin(\omega x)\,dx \approx \sum_{k=0}^{n-1} \int_{x_{2k}}^{x_{2k+2}} v_k(x)\sin(\omega x)\,dx.$$

The infinite integral was then computed using a series transformation, since the moments $\int_a^b x^k \sin(\omega x)\,dx$ are known explicitly. These ideas were generalized and developed further by Arieh Iserles and Syvert Nørsett in a number of articles, [IN05], [IN06], [IN04], [Ise04b], [Ise05], [INO06]. In the original paper by Filon it is shown that the error of the Filon method is bounded by

$$C \sin\frac{h\omega}{2}\left(1 - \frac{1}{16}\sec\frac{h\omega}{4}\right).$$

However, it was noticed in papers by Arieh Iserles and Syvert Nørsett that $h$ need not shrink as $\omega$ increases; rather, if anything, it should increase, thus reducing the required number of operations. This is the most important property of the Filon method: its accuracy actually improves as the frequency increases! Indeed, for a fixed step size the error decays like $O(\omega^{-2})$. A sketch of this panel construction follows.
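The construction is easy to reproduce; the following sketch, under our own panel layout and naming, interpolates f by a quadratic on each panel and integrates the monomial moments against e^{iwx} exactly by integration by parts, taking the imaginary part for the sin kernel.

```python
# A sketch (ours, illustrative) of the original Filon idea for
# int_a^b f(x) sin(wx) dx: quadratic interpolation per panel, exact moments.
import numpy as np

def exp_moments(a, b, w, kmax):
    """M_k = int_a^b x^k e^{iwx} dx via M_k = [x^k e^{iwx}/(iw)]_a^b - (k/(iw)) M_{k-1}."""
    iw = 1j * w
    M = [(np.exp(iw * b) - np.exp(iw * a)) / iw]
    for k in range(1, kmax + 1):
        M.append((b ** k * np.exp(iw * b) - a ** k * np.exp(iw * a)) / iw
                 - k / iw * M[-1])
    return M

def filon_sin(f, a, b, w, panels):
    total = 0.0
    edges = np.linspace(a, b, 2 * panels + 1)
    for k in range(panels):
        x0, x1, x2 = edges[2 * k], edges[2 * k + 1], edges[2 * k + 2]
        # Quadratic through (x0,f0), (x1,f1), (x2,f2); coefficients highest first.
        c = np.polyfit([x0, x1, x2], [f(x0), f(x1), f(x2)], 2)
        M = exp_moments(x0, x2, w, 2)
        total += (c[2] * M[0] + c[1] * M[1] + c[0] * M[2]).imag   # sin = Im part
    return total

w = 1000.0
approx = filon_sin(np.cos, 0.0, 1.0, w, 4)
exact = 0.5 * ((1 - np.cos(w + 1)) / (w + 1) + (1 - np.cos(w - 1)) / (w - 1))
print(approx, exact)    # with h fixed, accuracy improves as w grows
```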
1.3 Exponential integrators

Referring to Hochbruck [Hoc04], we relate our work to the class of works on exponential integrators and present a brief review of the methods.

Definition 1.3.1 [Hoc04] An exponential integrator is a numerical method which involves an exponential function, or a related function, of the Jacobian or an approximation to it.

Recently there has been a great deal of interest in the construction of exponential integrators. These integrators, as their name suggests, use the exponential function (and related functions) of the Jacobian, or an approximation to it, inside the numerical method. Various methods have been developed for the differential equation

$$y'(t) = f(y(t)) = L y(t) + N(y(t)), \qquad y(t_n) = y_n.$$

The first paper to construct what are now known as exponential integrators was by Certaine, published in 1960. Before introducing the framework of exponential integrators, we examine several well-known extensions of the Euler method. Linearising the initial value problem for $f(y(t))$ gives

$$y'(t) = f(y_{n-1}) + f'(y_{n-1})\,(y - y_{n-1}).$$

The exact solution to this linearised problem is

$$y_n = y_{n-1} + h\,\varphi_1\bigl(h f'(y_{n-1})\bigr)\, f(y_{n-1}),$$

where the function $\varphi_1$ is defined as

$$\varphi_1(z) = \frac{e^z - 1}{z}.$$

This method is of order two for general problems of the form $f(y(t))$, and exact for problems where $f(y) = Ly + N$. The method is known as the exponential Euler method. Assume that we use $L$ as an approximation to the full Jacobian in the method above; then the Exponential Time Differencing (ETD) Euler method is, [Nør69],

$$y_n = y_{n-1} + h\,\varphi_1(hL)\bigl(L y_{n-1} + N(y_{n-1})\bigr) = e^{hL} y_{n-1} + h\,\varphi_1(hL)\, N(y_{n-1}).$$

A brief sketch of this scheme closes the chapter.
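As a small illustration, the ETD Euler step can be coded as follows; the test problem, and the way phi_1 is evaluated (a linear solve against e^M - I, adequate for the small invertible matrices used here), are our own assumptions rather than a prescription from the literature reviewed above.

```python
# A hedged sketch (ours) of the ETD Euler step
# y_{n+1} = e^{hL} y_n + h phi_1(hL) N(y_n).
import numpy as np
from scipy.linalg import expm

def phi1(M):
    """phi_1(M) = M^{-1}(e^M - I); fine for small, invertible M as used here."""
    return np.linalg.solve(M, expm(M) - np.eye(M.shape[0]))

def etd_euler(L, N, y0, h, steps):
    E, P = expm(h * L), phi1(h * L)     # precompute once: L is constant
    y = y0.copy()
    for _ in range(steps):
        y = E @ y + h * P @ N(y)
    return y

# Stiff linear part with an i*omega-type spectrum, mild nonlinearity.
omega = 50.0
L = omega * np.array([[0.0, 1.0], [-1.0, 0.0]])
N = lambda y: np.array([0.0, np.sin(y[0])])
print(etd_euler(L, N, np.array([1.0, 0.0]), 1e-2, 100))
```

The point of the scheme is visible in the code: the stiff linear part is propagated exactly through the matrix exponential, so the step size is not constrained by the frequency of the linear oscillation.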
Chapter 2
Fourier-type oscillators

It was shown by A. Iserles and S. Nørsett in [IN05] that the standard numerical approach based on Gauss–Christoffel quadrature fails to approximate highly oscillatory integrals, since the error of approximation is $O(1)$ for $\omega \to \infty$. Instead the authors developed the asymptotic and the Filon-type methods, which share the feature that accuracy improves as $\omega$ increases. Our research is very much inspired by these early works of A. Iserles and S. Nørsett. Below we present initial ideas from the research done by these authors and explain the continuous connection between our work and their work on oscillatory quadrature rules, [Ise04b, IN04, IN05].

Bear in mind the linear system of ODEs

$$\mathbf{y}'(t) = A_\omega \mathbf{y}(t) + \mathbf{f}(t), \qquad \mathbf{y}(0) = \mathbf{y}_0 \in \mathbb{R}^d, \qquad \mathbf{f}: \mathbb{R} \to \mathbb{R}^d, \qquad t \ge 0,$$

with the exact solution

$$\mathbf{y}(t) = e^{tA_\omega}\mathbf{y}_0 + \int_0^t e^{(t-\tau)A_\omega}\mathbf{f}(\tau)\,d\tau = e^{tA_\omega}\mathbf{y}_0 + I[\mathbf{f}]. \tag{2.0.1}$$

The following assumptions hold: $A_\omega$ is a constant non-singular matrix with large imaginary eigenvalues, $\sigma(A_\omega) \subset i\mathbb{R}$, $\|A_\omega\| \gg 1$, $\omega \gg 1$ is a real parameter and $\mathbf{f} \in \mathbb{R}^d$ is a smooth vector-valued function.

For a $2\pi$-periodic function $f(x)$, integrable on $[-\pi, \pi]$, the well-known Fourier series is given by

$$\frac{a_0}{2} + \sum_{n=1}^{\infty} \bigl[a_n \cos(nx) + b_n \sin(nx)\bigr].$$

The Fourier coefficients are oscillatory integrals described as follows,

$$a_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t)\cos(nt)\,dt, \quad n \ge 0, \qquad b_n = \frac{1}{\pi} \int_{-\pi}^{\pi} f(t)\sin(nt)\,dt, \quad n \ge 1.$$

Employing Euler's formula, $e^{inx} = \cos(nx) + i\sin(nx)$, we can rewrite the sum in terms of the exponential function,

$$f(x) = \sum_{n=-\infty}^{\infty} c_n e^{inx}.$$

The Fourier coefficients are now determined as

$$c_n = \frac{1}{2\pi} \int_{-\pi}^{\pi} f(x) e^{-inx}\,dx,$$

and $a_n = c_n + c_{-n}$ for $n = 0, 1, 2, \ldots$, while $b_n = i(c_n - c_{-n})$ for $n = 1, 2, \ldots$.

In order to present our readers with the motivation for our research, we now briefly state two important theorems from [IN05], describing the quadrature methods used to approximate a more general family of Fourier-type highly oscillatory integrals of the form

$$I[f] = \int_a^b f(x) e^{i\omega g(x)}\,dx, \tag{2.0.2}$$

where $f, g \in C^\infty$ are smooth, $g$ is strictly monotone in $[a, b]$, $a \le x \le b$, and the frequency is $\omega \gg 1$. Later in this chapter we will explain the link between the linear systems of ODEs above and the Fourier-type integrals given in (2.0.2).

2.1 The univariate asymptotic method

Suppose that $f$ and $g$ are differentiable up to a sufficiently high order, and that $g$ is free of stationary points on $[a, b]$. In that case we can obtain an asymptotic expansion for the Fourier-type integral $I[f] = \int_a^b f(x) e^{i\omega g(x)}\,dx$. The truncated asymptotic expansion proves to be a powerful tool in the approximation of the integral $I[f]$, as compared to the aforementioned classical methods: the accuracy of approximation improves as the frequency grows large, $\omega \gg 1$. Once the assumption on the differentiability of the functions is satisfied, the asymptotic method uses very little information, namely function values and derivatives at the end points. We can obtain the expansion directly by repeatedly integrating $I[f]$ by parts,

$$I[f] = \int_a^b f(x) e^{i\omega g(x)}\,dx = \frac{1}{i\omega} \int_a^b \frac{f(x)}{g'(x)}\,\frac{d}{dx} e^{i\omega g(x)}\,dx = \frac{1}{i\omega} \left[ \frac{f(b)}{g'(b)}\, e^{i\omega g(b)} - \frac{f(a)}{g'(a)}\, e^{i\omega g(a)} \right] - \frac{1}{i\omega} \int_a^b \frac{d}{dx}\!\left[\frac{f(x)}{g'(x)}\right] e^{i\omega g(x)}\,dx.$$

The error term shares the property

$$-\frac{1}{i\omega}\, I\!\left[ \frac{d}{dx}\!\left[\frac{f(x)}{g'(x)}\right] \right] = O(\omega^{-2}).$$

It is now evident that the error term depends on inverse powers of the frequency and is itself an oscillatory integral. Hence one can apply integration by parts further to obtain methods of arbitrarily high asymptotic order.

Lemma 2.1.1 [A. Iserles & S. Nørsett][IN05] Let $f, g \in C^\infty$, $g'(x) \ne 0$ on $[a, b]$ and

$$\sigma_0[f](x) = f(x), \qquad \sigma_{k+1}[f](x) = \frac{d}{dx}\,\frac{\sigma_k[f](x)}{g'(x)}, \quad k = 0, 1, \ldots.$$

Then, for $\omega \gg 1$,

$$I[f] \sim -\sum_{m=1}^{\infty} \frac{1}{(-i\omega)^m} \left[ \frac{e^{i\omega g(b)}}{g'(b)}\,\sigma_{m-1}[f](b) - \frac{e^{i\omega g(a)}}{g'(a)}\,\sigma_{m-1}[f](a) \right].$$

The asymptotic method is defined as follows,

$$Q^A_s[f] = -\sum_{m=1}^{s} \frac{1}{(-i\omega)^m} \left[ \frac{e^{i\omega g(b)}}{g'(b)}\,\sigma_{m-1}[f](b) - \frac{e^{i\omega g(a)}}{g'(a)}\,\sigma_{m-1}[f](a) \right].$$

Theorem 2.1.2 [A. Iserles & S. Nørsett][IN05] For every smooth $f$ and $g$ such that $g'(x) \ne 0$ on $[a, b]$, it is true that

$$Q^A_s[f] - I[f] \sim O(\omega^{-s-1}), \qquad \omega \to \infty.$$

A small numerical check of this decay rate follows.
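For g(x) = x the weights sigma_k[f] reduce to plain derivatives, so the rate in Theorem 2.1.2 can be observed directly; the following check, with our own naming, uses f(x) = cos x on [0, 1].

```python
# A short check (ours) of Q^A_s for I[f] = int_0^1 cos(x) e^{iwx} dx, g(x) = x.
import numpy as np

def deriv_cos(k, x):
    """k-th derivative of cos evaluated at x."""
    return np.cos(x + k * np.pi / 2)

def QA(s, w, a=0.0, b=1.0):
    total = 0.0 + 0.0j
    for m in range(1, s + 1):
        total -= ((np.exp(1j * w * b) * deriv_cos(m - 1, b)
                   - np.exp(1j * w * a) * deriv_cos(m - 1, a))
                  / (-1j * w) ** m)
    return total

def exact(w):
    # cos(x) e^{iwx} = (e^{i(w+1)x} + e^{i(w-1)x}) / 2, integrated exactly.
    g = lambda u: (np.exp(1j * u) - 1.0) / (1j * u)
    return 0.5 * (g(w + 1.0) + g(w - 1.0))

for w in (100.0, 1000.0):
    print(w, abs(QA(2, w) - exact(w)))   # error decays like O(w^{-3}) for s = 2
```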
2.2 The univariate Filon-type method

A precursor of the Filon-type method was first pioneered in the work of L. N. G. Filon in 1928, modified by E. A. Flinn [Fli60], and developed by A. Iserles and S. Nørsett in [IN05], and later by A. Iserles, S. Nørsett and S. Olver in [INO06]. The method replaces $f$ in (2.0.2) by its Hermite interpolant,

$$v(x) = \sum_{l=1}^{\nu} \sum_{j=0}^{\theta_l - 1} \alpha_{l,j}(x)\, f^{(j)}(c_l),$$

which satisfies $v^{(j)}(c_l) = f^{(j)}(c_l)$ at node points $a = c_1 < c_2 < \ldots < c_\nu = b$, with associated multiplicities $\theta_1, \theta_2, \ldots, \theta_\nu \ge 1$, $j = 0, 1, \ldots, \theta_l - 1$, $l = 1, 2, \ldots, \nu$, and $r = \sum_{l=1}^{\nu} \theta_l - 1$ the degree of the approximating polynomial. The Filon-type method is defined by

$$Q^F_s[f] = I[v] = \int_a^b v(x) e^{i\omega g(x)}\,dx = \sum_{l=1}^{\nu} \sum_{j=0}^{\theta_l - 1} b_{l,j}(\omega)\, f^{(j)}(c_l),$$

where

$$b_{l,j}(\omega) = \int_a^b \alpha_{l,j}(x) e^{i\omega g(x)}\,dx, \qquad j = 0, 1, \ldots, \theta_l - 1, \quad l = 1, 2, \ldots, \nu.$$

Theorem 2.2.1 [A. Iserles & S. Nørsett][IN05] Suppose that $s = \min\{\theta_1, \theta_\nu\}$. For every smooth $f$ and $g$, $g'(x) \ne 0$ on $[a, b]$, it is true that

$$I[f] - Q^F_s[f] \sim O(\omega^{-s-1}), \qquad \omega \to \infty.$$

Example 2.2.1 Here we consider the asymptotic and the Filon-type methods with first derivatives for (2.0.2) over the interval $[0, 1]$ for the case $g(x) = x$,

$$Q^A_2 = \frac{e^{i\omega} f(1) - f(0)}{i\omega} + \frac{e^{i\omega} f'(1) - f'(0)}{\omega^2},$$

$$Q^F_2 = \left( -\frac{1}{i\omega} - 6\,\frac{1 + e^{i\omega}}{i\omega^3} + 12\,\frac{1 - e^{i\omega}}{\omega^4} \right) f(0) + \left( \frac{e^{i\omega}}{i\omega} + 6\,\frac{1 + e^{i\omega}}{i\omega^3} - 12\,\frac{1 - e^{i\omega}}{\omega^4} \right) f(1) + \left( -\frac{1}{\omega^2} - 2\,\frac{2 + e^{i\omega}}{i\omega^3} + 6\,\frac{1 - e^{i\omega}}{\omega^4} \right) f'(0) + \left( \frac{e^{i\omega}}{\omega^2} - 2\,\frac{1 + 2e^{i\omega}}{i\omega^3} + 6\,\frac{1 - e^{i\omega}}{\omega^4} \right) f'(1).$$

Figure 2.2.1: The global error of the asymptotic method $Q^A_2$ (right) and the Filon-type method $Q^F_2$ (left) for (2.0.2), $f(x) = \cos(x)$, $g(x) = x$, $x \in [0, 1]$, $\theta_1 = \theta_2 = 2$ and $100 \le \omega \le 200$.

Figure 2.2.2: The exact solution of the Fourier-type integral (2.0.2) with $f(x) = \cos(x)$, $g(x) = x$, $x \in [0, 1]$ and $100 \le \omega \le 200$.

In Figure 2.2.1 we present numerical results for the asymptotic and Filon-type methods with function values and first derivatives at the end points only, $c_1 = 0$, $c_2 = 1$, for the integral

$$I[f] = \int_0^1 \cos(x) e^{i\omega x}\,dx, \qquad 100 \le \omega \le 200.$$

Both methods have the same asymptotic order and use exactly the same information. However, as we can see from Figure 2.2.1, the Filon-type method yields a greater measure of accuracy than the asymptotic method. We would like to emphasize that the Filon-type method works for small values of $\omega$ as well: with Gaussian points it is equivalent to Gauss–Christoffel quadrature for $\omega \to 0$, using the same information. In [IN05] it was shown by the authors that adding more internal points leads to a decay of the leading error constant, resulting in a marked improvement in the accuracy of approximation. This addition does not affect the asymptotic order, which is determined by the end points.

We would like to mention here that these results remain valid for vector-valued functions as well as for vector-valued polynomials. This is particularly important when constructing similar methods for oscillatory ODEs. Having both large and small eigenvalues in the system makes a difference in applications, and the beauty of the Filon-type method is that it is valid for both large and small eigenvalues, just as it is valid for large and small $\omega$. This means that one does not need to split the system and apply different methods to its large and small parts. Instead, the Filon method takes care of both parts.

Note that in replacing the function $f$ by a polynomial, the Filon-type method requires the computation of the moments $\int_a^b x^m e^{i\omega g(x)}\,dx$, $m \ge 0$, which may not always be available. As a consequence, since integrals of this kind appear as elements of the vector-valued integral $I[\mathbf{f}]$ in oscillatory systems, the latter may also not always be available once $\mathbf{f}$ is replaced by a vector-valued polynomial. We used both the MATLAB numerical package and its symbolic toolbox linked to a MAPLE kernel to perform the numerical evaluations for this work.

Numerical approximation of highly oscillatory integrals is an advanced topic of research and involves a wide spectrum of different approaches, including moment-free approximation. This includes Levin-type methods, [Lev82, Lev97], invented by D. Levin and further extended by S. Olver, [Olv07]. The second alternative method we refer to is the numerical steepest descent method by D. Huybrechs and S. Vandewalle, [HO08, HV06]. There is also very relevant work using exponential integrators for oscillatory equations by V. Grimm and M. Hochbruck, [GH06]. For asymptotic methods for integrals we refer to [dB81, Olv74, Won01]. Before moving on, a small cross-check of the coefficients in Example 2.2.1 follows.
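The coefficients in Example 2.2.1 can be cross-checked by integrating the cubic Hermite cardinal basis against e^{iwx} via a moment recurrence; the sketch below, our own construction, reproduces Q^F_2 numerically and exhibits the same O(w^{-3}) decay as the asymptotic method, with a smaller error constant.

```python
# An illustrative cross-check (ours) of Q^F_2: cubic Hermite basis on [0,1]
# integrated against e^{iwx} through exact monomial moments.
import numpy as np

def moments(w, kmax):
    """M_k = int_0^1 x^k e^{iwx} dx by the integration-by-parts recurrence."""
    iw = 1j * w
    M = [(np.exp(iw) - 1.0) / iw]
    for k in range(1, kmax + 1):
        M.append(np.exp(iw) / iw - k / iw * M[-1])
    return M

def QF2(w, f0, f1, d0, d1):
    M = moments(w, 3)
    pm = lambda c: sum(ci * M[k] for k, ci in enumerate(c))   # ascending powers
    b10 = pm([1, 0, -3, 2])    # H_00 = 2x^3 - 3x^2 + 1  (value at 0)
    b20 = pm([0, 0, 3, -2])    # H_01 = -2x^3 + 3x^2     (value at 1)
    b11 = pm([0, 1, -2, 1])    # H_10 = x^3 - 2x^2 + x   (derivative at 0)
    b21 = pm([0, 0, -1, 1])    # H_11 = x^3 - x^2        (derivative at 1)
    return b10 * f0 + b20 * f1 + b11 * d0 + b21 * d1

exact = lambda w: 0.5 * ((np.exp(1j * (w + 1)) - 1) / (1j * (w + 1))
                         + (np.exp(1j * (w - 1)) - 1) / (1j * (w - 1)))
for w in (100.0, 200.0):
    q = QF2(w, np.cos(0.0), np.cos(1.0), -np.sin(0.0), -np.sin(1.0))
    print(w, abs(q - exact(w)))    # same O(w^{-3}) decay, smaller constant
```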
2.3 Applications to linear systems

The work presented in this thesis entails the extension of the ideas introduced in [IN05]. Namely, we develop the asymptotic and the Filon-type methods further, with the purpose of approximating highly oscillatory integrals $I[\mathbf{f}]$ with a matrix-valued kernel and a vector-valued function. The results of our approximation are then used to solve linear and non-linear systems of highly oscillatory ODEs. We should mention that all norms hereafter are $L_\infty$ norms.

Having introduced the asymptotic and Filon-type methods for the family of integrals (2.0.2), we now explain the link between the present work, published in [Kha08b], and the early results from [IN05]. Take for simplicity a spectral decomposition of the matrix $A_\omega = PDP^{-1}$, having a pure imaginary spectrum $\sigma(A_\omega) = \{i\omega_k\}_{k=1}^d$, $\omega_k \in \mathbb{R}$,

$$A_\omega = P \begin{pmatrix} i\omega_1 & 0 & \cdots & 0 \\ 0 & i\omega_2 & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & i\omega_d \end{pmatrix} P^{-1},$$

therefore

$$A_\omega^{-1} = P \begin{pmatrix} \tfrac{1}{i\omega_1} & 0 & \cdots & 0 \\ 0 & \tfrac{1}{i\omega_2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & \tfrac{1}{i\omega_d} \end{pmatrix} P^{-1} \qquad \text{and} \qquad e^{A_\omega} = P \begin{pmatrix} e^{i\omega_1} & 0 & \cdots & 0 \\ 0 & e^{i\omega_2} & \cdots & 0 \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & e^{i\omega_d} \end{pmatrix} P^{-1}.$$

This suggests that, in order to approximate a highly oscillatory integral $I[\mathbf{f}]$ with a matrix-valued kernel and a vector-valued function in (2.0.1), we need to approximate highly oscillatory integrals of the kind (2.0.2) inside $I[\mathbf{f}]$. Needless to say, the asymptotic order of our approximation to $I[\mathbf{f}]$ depends on inverse powers of the eigenvalues; thus the error decays as the eigenvalues grow. For smaller eigenvalues the method is comparable with the classical methods, while in the case of zero eigenvalues the method is equivalent to polynomial approximation of the integrable function, [Kha08b].

Chapter 3
Highly oscillatory linear systems with constant variables

3.1 The asymptotic method for matrix-valued kernels

Consider a vector-valued integral over a compact interval $[a, b]$,

$$I[\mathbf{f}] = \int_a^b X_\omega(t)\mathbf{f}(t)\,dt, \qquad X'_\omega = A_\omega X_\omega, \tag{3.1.1}$$

where the matrix kernel $X_\omega$ satisfies the linear differential equation in (3.1.1) with a constant non-singular matrix $A_\omega$ of large eigenvalues, $\|A_\omega^{-1}\| \ll 1$, $\sigma(A_\omega) \subset i\mathbb{R}$, $\omega$ is a real parameter and $\mathbf{f} \in \mathbb{R}^d$ is a smooth vector-valued function, $\mathbf{f} \in C^\infty[a, b]$. The fact that the matrix $X_\omega$ satisfies the linear matrix ODE (3.1.1) allows us to integrate (3.1.1) by parts,

$$I[\mathbf{f}] = A_\omega^{-1}\bigl[ X_\omega(b)\mathbf{f}(b) - X_\omega(a)\mathbf{f}(a) \bigr] - A_\omega^{-1} \int_a^b X_\omega(t)\mathbf{f}'(t)\,dt = Q^A_1 - A_\omega^{-1} I[\mathbf{f}'].$$

We define the asymptotic method $Q^A_s$ as

$$Q^A_s[\mathbf{f}] = -\sum_{m=1}^{s} (-A_\omega)^{-m} \bigl[ X_\omega(b)\mathbf{f}^{(m-1)}(b) - X_\omega(a)\mathbf{f}^{(m-1)}(a) \bigr],$$

representing the $s$-partial sum of the asymptotic expansion for $I[\mathbf{f}]$ in (3.1.1).

At this point of the discussion it is appropriate to introduce some notation on matrix and function asymptotics from [Olv07]. We say that $f = O(\tilde{f})$ for an arbitrary function $f$ and non-negative constant $\tilde{f}$, which depend on a real parameter $\omega$, if the norms of $f$ and its derivatives are all of order $O(\tilde{f})$ as $\omega \to \infty$, namely $\|f^{(m)}\| = O(\tilde{f})$ for $m = 0, 1, \ldots$. For two arbitrary $n \times m$ matrices $A(x) = (a_{ij}(x))$ and $\tilde{A} = (\tilde{a}_{ij})$, $\tilde{a}_{ij} \ge 0$, depending on a real parameter $\omega$, we can thus posit $A(x) = O(\tilde{A})$ if $a_{ij}(x) = O(\tilde{a}_{ij})$ element-wise as $\omega \to \infty$. We may also say that $f = O(1)$ if $f$ and its derivatives remain bounded on $[a, b]$ as $\omega \to \infty$. Let $\mathbf{1} = \{1_{ij}\}$ stand for the matrix with all entries equal to one. This allows us to write $A(x) = O(\mathbf{1})$ if $a_{ij}(x) = O(1)$ element-wise as $\omega \to \infty$. Finally, if $A = O(\tilde{A})$ and $B = O(\tilde{B})$, then the integration and multiplication properties are $\int_a^b A(x)\,dx = O(\tilde{A})$ and $AB = O(\tilde{A}\tilde{B})$, [Kha08b], [Kha09b]. Note that the constant in $AB = O(\tilde{A}\tilde{B})$ does not depend on the dimension of the matrices, but rather on the largest absolute value of the constants in the element-wise estimates of the entries of the matrix product.
The reason for this is the following. For two arbitrary $n \times m$ matrices $A(x) = (a_{ij}(x))$ and $\tilde{A} = (\tilde{a}_{ij})$, $\tilde{a}_{ij} \ge 0$, depending on a real parameter $\omega$, we say that $A(x) = O(\tilde{A})$ if $a_{ij}(x) = O(\tilde{a}_{ij})$ element-wise as $\omega \to \infty$; in other words, $\|a_{ij}\| \le C_{ij}\|\tilde{a}_{ij}\|$, hence in the estimate $A(x) = O(\tilde{A})$ the constant depends on the maximum value $C = \max\{C_{ij}\}$. The same rule applies to the product of two matrices: for $Z = AB$ and $\tilde{Z} = \tilde{A}\tilde{B}$, where $Z(x) = (z_{ij}(x))$ and $\tilde{Z} = (\tilde{z}_{ij})$, $\tilde{z}_{ij} \ge 0$, we have $Z = AB = O(\tilde{A}\tilde{B}) = O(\tilde{Z})$, whereby $\|z_{ij}\| \le \tilde{C}_{ij}\|\tilde{z}_{ij}\|$. Thus the constant in $AB = O(\tilde{A}\tilde{B})$ depends on the maximum value $\tilde{C} = \max\{\tilde{C}_{ij}\}$, which provides a uniform estimate $\|z_{ij}\| \le \tilde{C}\|\tilde{z}_{ij}\|$ for all entries, and not on the dimension of the matrices.

Lemma 3.1.1 [Kha08b] Let

$$I[\mathbf{f}] = \int_a^b X_\omega(t)\mathbf{f}(t)\,dt, \qquad X'_\omega = A_\omega X_\omega,$$

where the matrix kernel $X_\omega$ satisfies the linear matrix ODE as above, $A_\omega$ is a constant non-singular matrix, $\|A_\omega^{-1}\| \ll 1$, and $\mathbf{f}: \mathbb{R} \to \mathbb{R}^d$ is a smooth vector-valued function. Then, for $\omega \gg 1$,

$$I[\mathbf{f}] \sim -\sum_{m=1}^{\infty} (-A_\omega)^{-m} \bigl[ X_\omega(b)\mathbf{f}^{(m-1)}(b) - X_\omega(a)\mathbf{f}^{(m-1)}(a) \bigr].$$

For $\psi = \max\{\|\mathbf{f}^{(s)}\|, \|\mathbf{f}^{(s+1)}\|\}$,

$$Q^A_s[\mathbf{f}] - I[\mathbf{f}] \sim O(\|A_\omega^{-s-1}\|\,\|X_\omega\|\,\psi), \qquad \omega \to \infty.$$

If $X_\omega = O(\hat{X}_\omega)$ and $\mathbf{f} = O(\tilde{f})$, then

$$Q^A_s[\mathbf{f}] - I[\mathbf{f}] = O(A_\omega^{-s-1}\hat{X}_\omega\tilde{f}), \qquad \omega \to \infty,$$

element-wise.

Proof: By induction,

$$Q^A_s[\mathbf{f}] = I[\mathbf{f}] - (-A_\omega)^{-s} \int_a^b X_\omega(t)\mathbf{f}^{(s)}(t)\,dt = I[\mathbf{f}] - (-A_\omega)^{-s} I[\mathbf{f}^{(s)}].$$

Indeed, for $s = 0$ the identity holds trivially. Suppose that the equality holds for some $s \ge 1$; we now prove it for $s + 1$. This follows from

$$I[\mathbf{f}^{(s)}] = \int_a^b X_\omega(t)\mathbf{f}^{(s)}(t)\,dt = A_\omega^{-1}\bigl[ X_\omega(b)\mathbf{f}^{(s)}(b) - X_\omega(a)\mathbf{f}^{(s)}(a) \bigr] - A_\omega^{-1}\int_a^b X_\omega(t)\mathbf{f}^{(s+1)}(t)\,dt.$$

For $L_\infty$ norms,

$$I[\mathbf{f}^{(s)}] \sim O(\|A_\omega^{-1}\|\,\|X_\omega\|\,\|\mathbf{f}^{(s)}\|) + O(\|A_\omega^{-1}\|\,\|X_\omega\|\,\|\mathbf{f}^{(s+1)}\|) = O(\|A_\omega^{-1}\|\,\|X_\omega\|\,\psi),$$

therefore

$$Q^A_s[\mathbf{f}] - I[\mathbf{f}] \sim O(\|A_\omega^{-s-1}\|\,\|X_\omega\|\,\psi).$$

If $X_\omega = O(\hat{X}_\omega)$ and $\mathbf{f} = O(\tilde{f})$ element-wise, then

$$I[\mathbf{f}^{(s)}] = O(A_\omega^{-1}\hat{X}_\omega\tilde{f}),$$

yielding the further result

$$Q^A_s[\mathbf{f}] - I[\mathbf{f}] = O(A_\omega^{-s-1}\hat{X}_\omega\tilde{f}).$$

Corollary 3.1.2 [Kha08b] If $\mathbf{f}^{(i)}(a) = \mathbf{f}^{(i)}(b) = 0$ for $i = 0, \ldots, s - 1$, then in the $L_\infty$ norm

$$I[\mathbf{f}] \sim O(\|A_\omega^{-s-1}\|\,\|X_\omega\|\,\psi),$$

and $I[\mathbf{f}] = O(A_\omega^{-s-1}\hat{X}_\omega\tilde{f})$ element-wise.

Proof: Follows immediately from Lemma 3.1.1.

Corollary 3.1.3 [Kha08b] If $X_\omega = O(\mathbf{1})$ and $\mathbf{f} = O(1)$, then

$$Q^A_s[\mathbf{f}] - I[\mathbf{f}] = O(A_\omega^{-s-1}\mathbf{1}).$$

Proof: The statement follows from the notation on matrix asymptotics and Lemma 3.1.1.

The point of departure in the construction of our numerical solvers for systems of ordinary differential equations is the initial-value integrator

$$\mathbf{y}_{n+1} = e^{hA_\omega}\mathbf{y}_n + \int_0^h e^{(h-\tau)A_\omega}\mathbf{f}(t_n + \tau)\,d\tau. \tag{3.1.2}$$

Example 3.1.1 Let

$$I_h[\mathbf{f}] = \int_0^h e^{A_\omega(h-t)}\mathbf{f}(t)\,dt.$$

The asymptotic method for $s = 2$ with end points only is

$$Q^A_2[\mathbf{f}] = -A_\omega^{-1}\bigl(\mathbf{f}(h) - e^{A_\omega h}\mathbf{f}(0)\bigr) - A_\omega^{-2}\bigl(\mathbf{f}'(h) - e^{A_\omega h}\mathbf{f}'(0)\bigr).$$

A short numerical illustration of the matrix-kernel asymptotic method follows.
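A minimal sketch, assuming the kernel X_w(t) = e^{tA_w} and a concrete 2 x 2 matrix of our own choosing, confirms the decay of Q^A_s[f] - I[f] in Lemma 3.1.1 as s increases.

```python
# A minimal sketch (ours) of Q^A_s for I[f] = int_0^1 e^{tA} f(t) dt,
# with X_w(t) = e^{tA} so that X' = A X, X(0) = I, checked against quadrature.
import numpy as np
from scipy.linalg import expm
from scipy.integrate import quad_vec

omega = 200.0
A = omega * np.array([[0.0, 1.0], [-1.0, 0.0]])     # sigma(A) = {+/- i omega}
Ainv = np.linalg.inv(A)
f  = lambda t: np.array([np.exp(t), np.cos(t)])
fp = lambda t: np.array([np.exp(t), -np.sin(t)])    # first derivative of f

def QA(s, a=0.0, b=1.0):
    derivs = [f, fp]
    q = np.zeros(2)
    for m in range(1, s + 1):
        g = derivs[m - 1]
        term = expm(b * A) @ g(b) - expm(a * A) @ g(a)
        q -= np.linalg.matrix_power(-Ainv, m) @ term
    return q

I, _ = quad_vec(lambda t: expm(t * A) @ f(t), 0.0, 1.0, epsabs=1e-12, epsrel=1e-12)
print(np.linalg.norm(QA(1) - I), np.linalg.norm(QA(2) - I))   # error drops with s
```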
In the sequel we provide some applications of the asymptotic method to solving highly oscillatory linear systems. Figures 3.1.1 and 3.1.2 capture how the accuracy of the method increases with $\omega$, as long as the step size $h$ is fixed and the characteristic frequency $h\omega \gg 1$. The method remains accurate for magnitudes of $\omega = 10^4$ and $h\omega = 10^3$. This allows us to work with larger step-sizes, bearing in mind that it is $\omega$, rather than the step-size, that makes the error small. Our method may be compared with the fourth-order Runge–Kutta method presented in Figure 3.1.3 for the same equation and the same step size, and with the MATLAB ode15s and ode113 solvers in Figure 3.2.5. For a fixed step-size $h = \frac{1}{10}$ the error of the fourth-order Runge–Kutta method increases with $\omega$. Due to the stability of the Runge–Kutta method the error remains bounded, as in the right plot of Figure 3.1.2. However, the method is accurate only for small values of $t$ around the origin, whilst on a large time scale the approximation has nothing to do with the exact solution for increasing $\omega$.

Figure 3.1.1: Global error of the asymptotic method $Q^A_2$ with end points only for the equation $y''(t) = -\omega y(t) - \cos(t)$, $0 \le t \le 100$, with $[1, 0]^\top$ initial conditions and step-size $h = \frac{1}{10}$, for $\omega = 10$ (left) and $\omega = 10^2$ (right).

Figure 3.1.2: Global error of the asymptotic method $Q^A_2$ with end points only for the equation $y''(t) = -\omega y(t) - \cos(t)$, $0 \le t \le 100$, with $[1, 0]^\top$ initial conditions and step-size $h = \frac{1}{10}$, for $\omega = 10^3$ (left) and $\omega = 10^4$ (right).

Figure 3.1.3: Global error of the fourth-order Runge–Kutta method for the equation $y''(t) = -\omega y(t) - \cos(t)$, $0 \le t \le 100$, with $[1, 0]^\top$ initial conditions and step-size $h = \frac{1}{10}$, for $\omega = 10$ (left) and $\omega = 10^2$ (right).

3.2 The Filon-type method for matrix-valued kernels

In this section we extend the Filon-type method [IN05] to solve systems of ordinary differential equations. We interpolate the vector-valued function $\mathbf{f}$ in (3.1.1) by an $r$-degree vector-valued polynomial $\mathbf{v}$,

$$\mathbf{v}(t) = \sum_{l=1}^{\nu} \sum_{j=0}^{\theta_l - 1} \alpha_{l,j}(t)\,\mathbf{f}^{(j)}(t_l), \tag{3.2.1}$$

such that $\mathbf{v}^{(j)}(t_l) = \mathbf{f}^{(j)}(t_l)$ at node points $a = t_1 < t_2 < \ldots < t_\nu = b$, with $\theta_1, \theta_2, \ldots, \theta_\nu \ge 1$ the associated multiplicities, $j = 0, 1, \ldots, \theta_l - 1$ and $l = 1, 2, \ldots, \nu$, [Kha08b]. We define the Filon-type method as follows,

$$Q^F_s[\mathbf{f}] = \int_a^b X_\omega(t)\mathbf{v}(t)\,dt = \sum_{l=1}^{\nu} \sum_{j=0}^{\theta_l - 1} \beta_{l,j}\,\mathbf{f}^{(j)}(t_l), \qquad \beta_{l,j} = \int_a^b X_\omega(t)\,\alpha_{l,j}(t)\,dt.$$

Theorem 3.2.1 [Kha08b] Let

$$I[\mathbf{f}] = \int_a^b X_\omega(t)\mathbf{f}(t)\,dt, \qquad X'_\omega = A_\omega X_\omega,$$

where $A_\omega$ is a constant non-singular matrix with pure imaginary spectrum, $\sigma(A_\omega) \subset i\mathbb{R}$, $\|A_\omega^{-1}\| \ll 1$, $\theta_1, \theta_\nu \ge s$ and $\mathbf{f}: \mathbb{R} \to \mathbb{R}^d$ is a smooth vector-valued function. Then, for $\psi = \max\{\|\mathbf{f}^{(s)}\|, \|\mathbf{f}^{(s+1)}\|\}$,

$$Q^F_s[\mathbf{f}] - I[\mathbf{f}] \sim O(\|A_\omega^{-s-1}\|\,\|X_\omega\|\,\psi), \qquad \omega \to \infty.$$

If $X_\omega = O(\hat{X}_\omega)$ and $\mathbf{f} = O(\tilde{f})$, then element-wise

$$Q^F_s[\mathbf{f}] - I[\mathbf{f}] = O(A_\omega^{-s-1}\hat{X}_\omega\tilde{f}), \qquad \omega \to \infty.$$

Proof: The proof is equivalent to that for the classical Filon-type method. As a consequence of Corollary 3.1.2, replacing $\mathbf{f}$ in the asymptotic method with $\mathbf{f} - \mathbf{v}$ implies that $[\mathbf{f} - \mathbf{v}]^{(j)}(a) = [\mathbf{f} - \mathbf{v}]^{(j)}(b) = 0$ for $j = 0, 1, \ldots, s - 1$.

Corollary 3.2.2 [Kha08b] If $X_\omega = O(\mathbf{1})$ and $\mathbf{f} = O(1)$, then

$$Q^F_s[\mathbf{f}] - I[\mathbf{f}] = O(A_\omega^{-s-1}\mathbf{1}).$$

Proof: The statement follows from the notation on matrix asymptotics and Corollary 3.1.3.

Employing the initial-value integrator (3.1.2), we present the Filon-type method for systems of highly oscillatory ODEs,

$$\mathbf{y}_{n+1} = e^{hA_\omega}\mathbf{y}_n + \int_0^h e^{(h-\tau)A_\omega}\mathbf{v}(t_n + \tau)\,d\tau = e^{hA_\omega}\mathbf{y}_n + Q^F_s[\mathbf{f}]. \tag{3.2.2}$$

Here $Q^F_s = I[\mathbf{v}]$ stands for the approximation to $I[\mathbf{f}]$.

Example 3.2.1 Take the same integral $I_h[\mathbf{f}]$ as in Example 3.1.1. For $s = 2$, $t_1 = 0$, $t_2 = h$ and $\mathbf{f} = [f_1, f_2]^\top$ we derive the Filon-type method,

$$Q^F_s[\mathbf{f}] = I_h[\mathbf{v}] = \int_0^h e^{A_\omega(h-t)}\mathbf{v}(t)\,dt = \left[\int_0^h e^{A_\omega(h-t)} v_1(t)\,dt\right]\mathbf{f}(0) + \left[\int_0^h e^{A_\omega(h-t)} v_2(t)\,dt\right]\mathbf{f}(h) + \left[\int_0^h e^{A_\omega(h-t)} v_3(t)\,dt\right]\mathbf{f}'(0) + \left[\int_0^h e^{A_\omega(h-t)} v_4(t)\,dt\right]\mathbf{f}'(h).$$

We note in passing that the computational cost is relatively cheap.
The algorithm requires only some linear algebra once we have precomputed the moments in $I_h[\mathbf{v}]$.

Theorem 3.2.3 [Kha08b] Let $\theta_1, \theta_\nu \ge s$ and $r = \sum_{l=1}^{\nu} \theta_l - 1$. Then $r$ is the numerical order of the Filon-type method applied to the linear system,

$$\mathbf{y}(t_n) - \mathbf{y}_n = O(h^{r+1}).$$

Proof: Suppose that $\mathbf{f} = \mathbf{v} + \mathbf{p}$, where $\mathbf{v}$ is an $r$-degree vector-valued polynomial approximation (e.g. Hermite, (3.2.1)) to $\mathbf{f}$, with approximation error

$$\mathbf{p} = \frac{p_r}{r!}\,\mathbf{f}^{(r)}(\xi), \qquad p_r = \prod_{l=1}^{\nu} (t - t_l)^{\theta_l}.$$

We can now derive the local error of our numerical solver,

$$I[\mathbf{p}] = \int_0^h e^{(h-\tau)A_\omega}\mathbf{p}(t_n + \tau)\,d\tau = \int_0^h e^{(h-\tau)A_\omega}\,\frac{p_r(\tau)}{r!}\,\mathbf{f}^{(r)}(\xi)\,d\tau,$$

where $p_r(\tau) = O(\tau^r)$. Recall that

$$I[\mathbf{p}] = I[\mathbf{f}] - I[\mathbf{v}] = I[\mathbf{f}] - Q^F_s[\mathbf{f}] = O(h^{r+1}), \tag{3.2.3}$$

which proves the order of the method.

Theorem 3.2.4 [Kha08b] The numerical solution (3.2.2) is convergent.

Proof: Presenting the matrix $A_\omega$ in its Jordan normal form $J = P^{-1} A_\omega P$, we let $C_\omega = \{\|P\|\,\|P^{-1}\| : P^{-1} A_\omega P = J\}$ and consider the following bound for the matrix exponential,

$$\|e^{tA_\omega}\| = \|P e^{tJ} P^{-1}\| \le C_\omega \|e^{tJ}\|.$$

We remind our reader that $\sigma(A_\omega) \subset i\mathbb{R}$, which means that the norm of the matrix exponential is always bounded, $e^{tJ} = O(1)$. In other words, for a fixed value of the parameter $\omega$, the convergence of the numerical scheme to the exact solution follows from the estimate of the residual term,

$$\|I[\mathbf{p}]\| \le \frac{C_\omega h}{r!}\,\|p_r(\tau)\|\,\|\mathbf{f}^{(r)}\|,$$

as the step-size $h$ tends to zero.

Note that the method requires information only about the function values and derivatives at the end points. Figures 3.2.1 and 3.2.2 offer some numerical examples, where the step size $h$ is fixed and $\omega$ increases. The examples demonstrate a gain in accuracy with the increase of $\omega$. Comparison shows that the Filon-type method performs better than the asymptotic method, although both methods are of the same asymptotic order. It is evident that, even for a larger step-size $h = \frac{1}{4}$ than that assumed for the asymptotic method ($h = \frac{1}{10}$), the accuracy of approximation improves as $h\omega \gg 1$.

We can compare our solutions with the MATLAB solvers presented in Figure 3.2.5. To achieve better results with MATLAB we set AbsTol = RelTol = $10^{-8}$. Accuracy decreases for the solver ode15s while remaining the same for ode113. Both methods work with variable step size but, taking an average, for $\omega = 100$ it is $h \approx \frac{1}{186}$ for ode15s and $h \approx \frac{1}{60}$ for ode113, reducing to the exceptionally small values, for $\omega = 10^4$, of $h \approx 5 \times 10^{-4}$ in ode15s and $h \approx 10^{-3}$ in ode113, which is in no way comparable with the $h = \frac{1}{4}$ of the Filon-type method. The logarithmic error in Figure 3.2.4 describes both the numerical and the asymptotic behaviour of the method for increasing $\omega$.

These considerations leave us at a point where the connections between [IN05] and the present work are evident. It follows from the spectral decomposition of the matrix $A_\omega$ that factors of the form $e^{i\lambda_k t}$ appear in the fundamental matrix in the solutions of both linear and non-linear systems of highly oscillatory ODEs, and this provides valid reasons to extend the described methods to the given setting. A sketch of a single step of the scheme (3.2.2) follows.
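One possible realization of a single step of (3.2.2), with end points only and multiplicities 2, computes the matrix moments M_k = int_0^h e^{(h-t)A} t^k dt from M_0 = A^{-1}(e^{hA} - I) and M_k = -A^{-1} h^k + k A^{-1} M_{k-1}, obtained by integration by parts. The sketch below is our own illustrative rendering, not the thesis code.

```python
# A sketch (ours) of one Filon-type step (3.2.2): cubic Hermite interpolation
# of f on [0,h] combined with exact matrix moments M_k.
import numpy as np
from scipy.linalg import expm

def filon_step(A, yn, tn, h, f, fp):
    d = A.shape[0]
    Ainv = np.linalg.inv(A)
    E = expm(h * A)
    M = [Ainv @ (E - np.eye(d))]                     # M_0 = A^{-1}(e^{hA} - I)
    for k in (1, 2, 3):
        M.append(Ainv @ (k * M[-1] - h ** k * np.eye(d)))
    f0, f1, d0, d1 = f(tn), f(tn + h), fp(tn), fp(tn + h)
    # Coefficients of v(t) = c0 + c1 t + c2 t^2 + c3 t^3 from the Hermite data.
    c0, c1 = f0, d0
    c2 = (3 * (f1 - f0) - h * (2 * d0 + d1)) / h ** 2
    c3 = (-2 * (f1 - f0) + h * (d0 + d1)) / h ** 3
    QF = M[0] @ c0 + M[1] @ c1 + M[2] @ c2 + M[3] @ c3
    return E @ yn + QF

# Example: y'' = -omega*y - cos(t) rewritten as a first-order system, one step.
omega = 1000.0
A = np.array([[0.0, 1.0], [-omega, 0.0]])
f  = lambda t: np.array([0.0, -np.cos(t)])
fp = lambda t: np.array([0.0,  np.sin(t)])
print(filon_step(A, np.array([1.0, 0.0]), 0.0, 0.25, f, fp))
```

With r = theta_1 + theta_2 - 1 = 3, Theorem 3.2.3 gives local order O(h^4) for this step, while the error also decays with the inverse powers of omega.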
Figure 3.2.1: Global error of the Filon-type method $Q^F_2$ with end points only and multiplicities all 2, for the equation $y''(t) = -\omega y(t) - \cos(t)$, $0 \le t \le 100$, with $[1, 0]^\top$ initial conditions and step-size $h = \frac{1}{4}$, for $\omega = 10$ (left) and $\omega = 10^2$ (right).

Figure 3.2.2: Global error of the Filon-type method $Q^F_2$ with end points only and multiplicities all 2, for the equation $y''(t) = -\omega y(t) - \cos(t)$, $0 \le t \le 100$, with $[1, 0]^\top$ initial conditions and step-size $h = \frac{1}{4}$, for $\omega = 10^3$ (left) and $\omega = 10^4$ (right).

Figure 3.2.3: Global error of the Filon-type method $Q^F_2$ with end points only and multiplicities all 2, for the equation $y''(t) = -\omega y(t) - \cos(t)$, $0 \le t \le 100$, with $[1, 0]^\top$ initial conditions and step-size $h = \frac{1}{4}$, for $\omega = 10$ (top left), $\omega = 10^2$ (top right), $\omega = 10^3$ (bottom left) and $\omega = 10^4$ (bottom right).

Figure 3.2.4: Logarithmic error (y-axis) against the step-size (x-axis) of the Filon-type method $Q^F_2$, with end points only and multiplicities all 2, for the equation $y''(t) = -\omega y(t) - \cos(t)$ with initial values $[1, 0]^\top$, for $\omega = 10, 100, 1000$.

Figure 3.2.5: Global error of the MATLAB ode15s routine (top left) and ode113 routine (top right), set to RelTol = AbsTol = $10^{-8}$, for the equation $y''(t) = -\omega y(t) - \cos(t)$, $0 \le t \le 100$, with $[1, 0]^\top$ initial conditions and $\omega = 10^2$; the same solvers with the same settings and $\omega = 10^4$ (bottom two plots).

Chapter 4
The FM methods

4.1 The matrix linear ODEs

In this chapter we present a new method from [Kha09a] for solving Lie-type equations

$$X'_\omega = A_\omega(t) X_\omega, \qquad X_\omega(0) = I.$$

The solution of equations of this type has the representation $X_\omega = \exp(\Omega)$. Here $\Omega$ stands for an infinite Magnus series of integral commutators. We assume that the matrix $A_\omega$ has a large imaginary spectrum and that $\omega$ is a real parameter describing the frequency of oscillation. The novel method, called the FM method, combines the Magnus method, which is based on iterative techniques to approximate $\Omega$, with the Filon-type method, an efficient quadrature rule for oscillatory integrals. We show that the FM method has superior performance compared to the classical Magnus method when applied to oscillatory differential equations.

We proceed with the matrix ordinary differential equation

$$X'_\omega = A_\omega X_\omega, \qquad X_\omega(0) = I, \qquad X_\omega = e^{\Omega}, \tag{4.1.1}$$

where $\Omega$ represents the Magnus expansion, an infinite recursive series,

$$\Omega(t) = \int_0^t A_\omega(\tau)\,d\tau + \frac{1}{2}\int_0^t \Bigl[A_\omega(\tau), \int_0^\tau A_\omega(\xi)\,d\xi\Bigr]\,d\tau + \frac{1}{4}\int_0^t \Bigl[A_\omega(\tau), \int_0^\tau \Bigl[A_\omega(\xi), \int_0^\xi A_\omega(\zeta)\,d\zeta\Bigr]\,d\xi\Bigr]\,d\tau + \frac{1}{12}\int_0^t \Bigl[\Bigl[A_\omega(\tau), \int_0^\tau A_\omega(\xi)\,d\xi\Bigr], \int_0^\tau A_\omega(\zeta)\,d\zeta\Bigr]\,d\tau + \cdots.$$
We make the following assumptions: $A_\omega(t)$ is a smooth matrix-valued function, the spectrum $\sigma(A_\omega)$ of the matrix $A_\omega$ has large imaginary eigenvalues, and $\omega \gg 1$ is a real parameter describing the frequency of oscillation.

It was shown by Hausdorff, [Hau06], that the solution of the linear matrix equation in (4.1.1) is a matrix exponential $X_\omega(t) = e^{\Omega(t)}$, where $\Omega(t)$ satisfies the following non-linear differential equation,

$$\Omega' = \sum_{j=0}^{\infty} \frac{B_j}{j!}\,\operatorname{ad}^j_\Omega(A_\omega) = \operatorname{dexp}^{-1}_\Omega(A_\omega(t)), \qquad \Omega(0) = 0, \tag{4.1.2}$$

as long as $\|\Omega(t)\| < \pi$, to ensure that the operator $\operatorname{dexp}^{-1}_\Omega$ is invertible. Here the $B_j$ are the Bernoulli numbers and the adjoint operator is defined as

$$\operatorname{ad}^0_\Omega(A_\omega) = A_\omega, \qquad \operatorname{ad}_\Omega(A_\omega) = [\Omega, A_\omega] = \Omega A_\omega - A_\omega \Omega, \qquad \operatorname{ad}^{j+1}_\Omega(A_\omega) = [\Omega, \operatorname{ad}^j_\Omega A_\omega].$$

Later it was observed by W. Magnus, [Mag54], that by solving equation (4.1.2) with Picard iteration it is possible to obtain an infinite recursive series for $\Omega$, whose truncation can then be used to approximate $\Omega$.

Magnus methods preserve peculiar geometric properties of the solution, ensuring that if $X_\omega$ is in a matricial Lie group $G$ and $A_\omega$ is in the associated Lie algebra $\mathfrak{g}$ of $G$, then the numerical solution after discretization stays on the manifold. Moreover, Magnus methods preserve the time-symmetric properties of the solution after discretization: for any $a > b$, $X_\omega(a, b)^{-1} = X_\omega(b, a)$ yields $\Omega(a, b) = -\Omega(b, a)$. These properties prove to be valuable in applications. Employing the Magnus method to solve highly oscillatory differential equations, we develop it in combination with appropriate quadrature rules, such as Filon-type methods, which have proved to be more efficient for oscillatory equations than, for example, classical Gaussian quadrature.

Below we remind our reader of the Filon-type method and the asymptotic method, applied to approximate highly oscillatory integrals of the form

$$I[f] = \int_a^b f(x) e^{i\omega g(x)}\,dx. \tag{4.1.3}$$

The asymptotic method provides the theoretical background for the Filon-type method. Suppose $f, g \in C^\infty$ are smooth, $g$ is strictly monotone in $[a, b]$, $a \le x \le b$, and the frequency is $\omega \gg 1$. We interpolate the function $f$ in (4.1.3) at chosen node points. For instance, take the Hermite interpolation

$$\tilde{v}(x) = \sum_{l=1}^{\nu} \sum_{j=0}^{\theta_l - 1} \alpha_{l,j}(x)\, f^{(j)}(c_l), \tag{4.1.4}$$

which satisfies $\tilde{v}^{(j)}(c_l) = f^{(j)}(c_l)$ at node points $a = c_1 < c_2 < \ldots < c_\nu = b$, with associated multiplicities $\theta_1, \theta_2, \ldots, \theta_\nu \ge 1$, $j = 0, 1, \ldots, \theta_l - 1$, $l = 1, 2, \ldots, \nu$, and $r = \sum_{l=1}^{\nu} \theta_l - 1$ the degree of the approximating polynomial. Then, for the Filon-type method, by definition,

$$Q^F_s[f] = I[\tilde{v}] = \int_a^b \tilde{v}(x) e^{i\omega g(x)}\,dx = \sum_{l=1}^{\nu} \sum_{j=0}^{\theta_l - 1} b_{l,j}(\omega)\, f^{(j)}(c_l),$$

where

$$b_{l,j}(\omega) = \int_a^b \alpha_{l,j}(x) e^{i\omega g(x)}\,dx, \qquad j = 0, 1, \ldots, \theta_l - 1, \quad l = 1, 2, \ldots, \nu.$$

The Filon-type method has the same asymptotic order as the asymptotic method. For a formal proof of the asymptotic order of the Filon-type method for univariate integrals we refer the reader to [IN05], and for matrix- and vector-valued functions to [Kha08b]. The following theorem from [Kha08b] gives the numerical order of the Filon-type method; it was also shown in [Kha08b] that the numerical solution obtained after discretization of the integral with the Filon-type method is convergent.

Theorem 4.1.1 [Kha08b] Let $s = \min\{\theta_1, \theta_\nu\}$. Then the numerical order of the Filon-type method is equal to $r = \sum_{l=1}^{\nu} \theta_l - 1$.

Adding more internal interpolation points leads to a decay of the leading error constant, resulting in a marked improvement in the accuracy of approximation.
However, interpolating the function $f$ at internal points does not contribute to a higher asymptotic order of the Filon-type method.

4.2 The Magnus method

In this section we focus on Magnus methods for the approximation of the matrix-valued function $X_\omega$ in (4.1.1). There is a large list of publications available on Lie-group methods; here we refer to some of them: [BCR00], [CG93], [Ise09a], [Ise02a], [MK98], [OM97], [Zan96].

Currently, the most general theorem on the existence and convergence of the Magnus series is proved for a bounded linear operator $A(t)$ in a Hilbert space, [Cas07]. The following theorem gives a sufficient condition for the convergence of the Magnus series, extending Theorem 3 from [MN08], where the same result is stated for real matrices.

Theorem 4.2.1 (F. Casas, [Cas07]) Consider the differential equation $X' = A(t)X$ defined in a Hilbert space $H$ with $X(0) = I$, and let $A(t)$ be a bounded linear operator on $H$. Then the Magnus series $\Omega(t)$ in (4.1.2) converges in the interval $t \in [0, T)$ such that

$$\int_0^T \|A(\tau)\|\,d\tau < \pi,$$

and the sum $\Omega(t)$ satisfies $\exp \Omega(t) = X(t)$.

For recent results on the error bounds for the Magnus method for the Schrödinger equation, the author refers to the work by M. Hochbruck and C. Lubich available in [HL03]. The authors in [HL03] considered the Schrödinger equation with a time-dependent Hamiltonian,

$$i\,\frac{d\psi}{dt} = H(t)\psi, \qquad \psi(t_0) = \psi_0.$$

Here $H(t)$ is a finite-dimensional Hermitian operator and, as the result of the discretization of an unbounded operator, typically the sum of a discretized negative Laplacian and a time-dependent potential, $H(t)$ may have an arbitrarily large norm. In that paper the authors provide asymptotically sharp error bounds for Magnus integrators in a framework, applied to the time-dependent Schrödinger equation, in which there are no restrictions on the smallness of, nor bounds on, $h\|H(t)\|$, where $h$ stands for the step-size of the numerical integrator.

Given the representation of $\Omega(t)$ in (4.1.2), the numerical task of evaluating the commutator brackets is fairly simple, [BCR00], [Ise09a], [Ise02a]. For example, choosing a symmetric grid $c_1, c_2, \ldots, c_\nu$, say Gaussian points symmetric with respect to $\frac{1}{2}$, consider the set $\{A_1, A_2, \ldots, A_\nu\}$ with $A_k = hA(t_0 + c_k h)$, $k = 1, 2, \ldots, \nu$. Linear combinations of this basis form an adjoint basis $\{B_1, B_2, \ldots, B_\nu\}$ with

$$A_k = \sum_{l=1}^{\nu} \left(c_k - \frac{1}{2}\right)^{l-1} B_l, \qquad k = 1, 2, \ldots, \nu.$$

In this basis the sixth-order method, with Gaussian points $c_1 = \frac{1}{2} - \frac{\sqrt{15}}{10}$, $c_2 = \frac{1}{2}$, $c_3 = \frac{1}{2} + \frac{\sqrt{15}}{10}$ and $A_k = hA(t_0 + c_k h)$, is

$$\Omega(t_0 + h) \approx B_1 + \frac{1}{12}B_3 - \frac{1}{12}[B_1, B_2] + \frac{1}{240}[B_2, B_3] + \frac{1}{360}[B_1, [B_1, B_3]] - \frac{1}{240}[B_2, [B_1, B_2]] + \frac{1}{720}[B_1, [B_1, [B_1, B_2]]],$$

where

$$B_1 = A_2, \qquad B_2 = \frac{\sqrt{15}}{3}(A_3 - A_1), \qquad B_3 = \frac{10}{3}(A_3 - 2A_2 + A_1).$$

This can be reduced further and written in a more compact manner [Ise02a], [Ise09a],

$$\Omega(t_0 + h) \approx B_1 + \frac{1}{12}B_3 + P_1 + P_2 + P_3,$$

where

$$P_1 = \Bigl[B_2, \frac{1}{12}B_1 + \frac{1}{240}B_3\Bigr], \qquad P_2 = \Bigl[B_1, \Bigl[B_1, \frac{1}{360}B_3 - \frac{1}{60}P_1\Bigr]\Bigr], \qquad P_3 = \frac{1}{20}[B_2, P_1].$$

A sketch of a lower-order member of this family follows.
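For orientation, the analogous fourth-order construction uses two Gauss points and a single commutator; this is a standard member of the family above, though the exact rendering below, applied to the Airy equation, is our own illustrative choice.

```python
# A compact sketch (ours) of the fourth-order Magnus step with two Gauss
# points: Omega = (h/2)(A1 + A2) + (sqrt(3) h^2 / 12) [A2, A1].
import numpy as np
from scipy.linalg import expm

def airy_A(t):
    return np.array([[0.0, 1.0], [-t, 0.0]])

def magnus4_step(Afun, t0, h, y):
    c = np.sqrt(3.0) / 6.0
    A1 = Afun(t0 + (0.5 - c) * h)
    A2 = Afun(t0 + (0.5 + c) * h)
    Omega = 0.5 * h * (A1 + A2) \
            + (np.sqrt(3.0) * h ** 2 / 12.0) * (A2 @ A1 - A1 @ A2)
    return expm(Omega) @ y

y, t, h = np.array([1.0, 0.0]), 0.0, 0.25
while t < 1000.0 - 1e-12:
    y = magnus4_step(airy_A, t, h, y)
    t += h
print(y)    # stays on the oscillatory solution over a long time interval
```

Because each step maps y through a matrix exponential of an element of the Lie algebra, the discrete solution inherits the geometric properties discussed above.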
The numerical behaviour of the fourth- and sixth-order classical Magnus methods is illustrated in Figures 4.2.1, 4.2.2 and 4.2.3, and in Figures 4.2.4, 4.2.5 and 4.2.6, respectively. The method is applied to solve the Airy equation $y''(t) = -ty(t)$ with $[1, 0]^\top$ initial conditions, $t \in [0, 1000]$, for various step-sizes, $h = \frac14$, $h = \frac{1}{10}$ and $h = \frac{1}{25}$. The comparison shows that for larger step-sizes $h$ both the fourth- and sixth-order methods give similar results, as illustrated in Figures 4.2.1, 4.2.2, 4.2.3 and 4.2.6. However, for smaller step-sizes the sixth-order Magnus method shows a more rapid improvement in approximation than the fourth-order method, Figures 4.2.3 and 4.2.6.

Figure 4.2.1: Global error of the fourth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 1000$ and step-size $h = \frac14$.

Figure 4.2.2: Global error of the fourth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 1000$ and step-size $h = \frac{1}{10}$.

Figure 4.2.3: Global error of the fourth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 1000$ and step-size $h = \frac{1}{25}$.

Figure 4.2.4: Global error of the sixth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 1000$ and step-size $h = \frac14$.

Figure 4.2.5: Global error of the sixth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 1000$ and step-size $h = \frac{1}{10}$.

Figure 4.2.6: Global error of the sixth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 1000$ and step-size $h = \frac{1}{25}$.

In the current work we present an alternative method to solve equations of the kind (4.1.1), [Kha09a]. We apply Filon quadrature to evaluate the integral commutators in $\Omega$ and then solve for the matrix exponential $X_\omega$ with the Magnus method or the modified Magnus method. The combination of Filon-type methods and Magnus methods forms the FM method, presented in the next section. Applications of the FM method to systems of highly oscillatory ordinary differential equations can be found in [Kha09b], [Kha09a].

4.3 The modified Magnus method

In this section we continue our discussion of the solution of the matrix differential equation
\[ X'_\omega = A_\omega(t)X_\omega, \qquad X_\omega(0) = I, \qquad X_\omega = \exp(\Omega). \tag{4.3.1} \]
It was shown in [Ise02c] that one can achieve better accuracy in the approximation of the fundamental matrix $X_\omega$ if one solves (4.3.1) at each time step locally, with a constant matrix $A_\omega$. We commence from a linear oscillator studied in [Ise02c],
\[ y' = A_\omega(t)y, \qquad y(0) = y_0. \]
We again assume that the spectrum of the matrix $A_\omega(t)$ has large imaginary values. Introducing a local change of variables at each mesh point, we write the solution in the form
\[ y(t) = e^{(t - t_n)A_\omega(t_n + \frac12 h)}\,x(t - t_n), \qquad t \ge t_n, \]
and observe that
\[ x' = B(t)x, \qquad x(0) = y_n, \]
with
\[ B(t) = e^{-tA_\omega(t_{n+\frac12})}\big[A_\omega(t_n + t) - A_\omega(t_{n+\frac12})\big]e^{tA_\omega(t_{n+\frac12})}, \]
where $t_{n+\frac12} = t_n + \frac{h}{2}$. The modified Magnus method is defined as a local approximation of the solution vector $y$, obtained by solving $x' = B(t)x$, $x(0) = y_n$, with the classical Magnus method. This approximation results in the following algorithm,
\[ y_{n+1} = e^{hA_\omega(t_n + h/2)}x_n, \qquad x_n = e^{\tilde\Omega_s}y_n. \]
The performance of the modified Magnus method is better than that of the classical Magnus method for a number of reasons. Firstly, the fact that the matrix $B$ is small, $B(t) = O(t - t_{n+\frac12})$, contributes a higher-order correction to the solution. Secondly, the order of the modified Magnus method increases from $p = 2s$ to $p = 3s + 1$, [Ise02b].
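The following Python sketch shows one step of this construction for the Airy equation: $B(t)$ is formed exactly as above from matrix exponentials of the frozen midpoint matrix, $\tilde\Omega$ is approximated by a two-point Gauss–Legendre discretization of the first Magnus term $\int_0^h B(\tau)\,d\tau$ (a deliberately crude treatment, enough to convey the structure), and the frozen-coefficient flow is applied on top. All names and parameters are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm
from scipy.special import airy

A = lambda t: np.array([[0.0, 1.0], [-t, 0.0]])

def modified_magnus_step(tn, h, yn):
    Am = A(tn + h / 2)                       # frozen midpoint matrix
    def B(tau):                              # B(t) = e^{-t Am} (A(tn+t) - Am) e^{t Am}
        E = expm(tau * Am)
        return np.linalg.solve(E, (A(tn + tau) - Am) @ E)
    # first Magnus term, two-point Gauss-Legendre on [0, h]
    tau1 = h * (0.5 - np.sqrt(3) / 6)
    tau2 = h * (0.5 + np.sqrt(3) / 6)
    Omega = h / 2 * (B(tau1) + B(tau2))
    return expm(h * Am) @ expm(Omega) @ yn

def exact(t, y0):
    Ai0, Aip0, Bi0, Bip0 = airy(0.0)
    a, b = np.linalg.solve(np.array([[Ai0, Bi0], [-Aip0, -Bip0]]), y0)
    Ai, Aip, Bi, Bip = airy(-t)
    return np.array([a * Ai + b * Bi, -a * Aip - b * Bip])

y, t, h = np.array([1.0, 0.0]), 0.0, 0.2
while t < 1000.0 - 1e-12:
    y = modified_magnus_step(t, h, y)
    t += h
print("global error:", np.abs(y - exact(t, np.array([1.0, 0.0]))))
```

Because $B(t) = O(t - t_{n+\frac12})$, even this crude treatment of $\tilde\Omega$ tracks the oscillation far better than freezing $A_\omega$ alone; the FM method of the next section replaces the Gauss points by Filon quadrature, the appropriate choice once the entries of $B$ themselves oscillate.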
4.4 The FM method

In this section we present the novel FM method for solving matrix ODEs, [Kha09a]. The method combines ideas from Filon methods and Lie-group methods to solve the matrix exponential with nested integral commutators. Consider the Airy-type equation
\[ y''(t) + g(t)y(t) = 0, \qquad g(t) > 0 \ \text{for} \ t > 0, \qquad \lim_{t\to\infty} g(t) = +\infty. \]
Replacing the second-order differential equation by a first-order system, we obtain $y' = A_\omega(t)y$, where
\[ A_\omega(t) = \begin{pmatrix} 0 & 1 \\ -g(t) & 0 \end{pmatrix}. \]
Due to the large imaginary spectrum of the matrix $A_\omega$, this Airy-type equation oscillates rapidly. We apply the modified Magnus method to solve the system $y' = A_\omega(t)y$,
\[ y_{n+1} = e^{hA_\omega(t_n + h/2)}\,e^{\tilde\Omega_n}\,y_n. \]
Here the integral commutators in $\tilde\Omega_n$ are computed according to the rules of Filon quadrature. This results in a highly accurate numerical method, the FM method. Taking into account that the entries of the matrix $B(t)$ are likely to be oscillatory, the advantage of the Filon-type method is evident. In the example below we provide a more detailed evaluation of the FM method applied to solve the Airy-type equation $y' = A_\omega(t)y$.

Example 4.4.1 Once we have obtained the representation for $\tilde\Omega_n$, the commutator brackets are formed by the matrix $B(t)$. It is possible to reduce the cost of evaluating the matrix $B(t)$ by simplifying it, [Ise04a]. Denote $q = \sqrt{g(t_n + \frac12 h)}$ and $v(t) = g(t_n + t) - g(t_n + \frac12 h)$. Then
\[
B(t) = v(t)\begin{pmatrix} (2q)^{-1}\sin 2qt & q^{-2}\sin^2 qt \\ -\cos^2 qt & -(2q)^{-1}\sin 2qt \end{pmatrix}
= v(t)\begin{pmatrix} q^{-1}\sin qt \\ -\cos qt \end{pmatrix}\begin{pmatrix} \cos qt & q^{-1}\sin qt \end{pmatrix},
\]
and for the product,
\[
B(t)B(s) = v(t)v(s)\,\frac{\sin q(s-t)}{q}\begin{pmatrix} q^{-1}\sin qt \\ -\cos qt \end{pmatrix}\begin{pmatrix} \cos qs & q^{-1}\sin qs \end{pmatrix}.
\]
It was shown in [Ise04a] that
\[ \|B(t)\| = \cos^2\omega t + \frac{\sin^2\omega t}{\omega^2} \qquad \text{and} \qquad B(t) = O(t - t_{n+\frac12}). \]
We can now estimate the integral
\[ \int_0^h \|B(t)\|\,dt = \frac{h}{2}\Big(1 + \frac{1}{\omega^2}\Big) + \frac{\sin 2\omega h}{2\omega}\Big(1 - \frac{1}{\omega^2}\Big) =: q(h, \omega). \]
To meet the requirements of Theorem 4.2.1 we demand $q(h,\omega) \le \pi$. In other words, to satisfy Theorem 4.2.1 we obtain the following condition on the step-size $h$,
\[ h \le \frac{2\pi\omega^3 - \omega^2 + 1}{\omega(\omega^2 + 1)}. \]
Given the compact representation of the matrix $B(t)$ with oscillatory entries, we solve for $\Omega(t)$ with a Filon-type method. We approximate the functions in $B(t)$ by a polynomial $\tilde v(t)$, for example the Hermite polynomial (4.1.4), as in the classical Filon-type method. Furthermore, in our approximation we use end points only, although the method is general and more approximation nodes may be used.

In Figure 4.4.1 we present the global error of the fourth-order modified Magnus method with exact integrals for the Airy equation $y''(t) = -ty(t)$ with $[1, 0]^\top$ initial conditions, time interval $0 \le t \le 2000$ and step-size $h = \frac15$. This can be compared with the global error of the FM method applied to the same equation under exactly the same conditions and step-size, Figure 4.4.2.

Figure 4.4.1: Global error of the fourth-order modified Magnus method with exact evaluation of the integral commutators for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 2000$ and step-size $h = \frac15$.

Figure 4.4.2: Global error of the fourth-order FM method with end points only and multiplicities all 2 for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 2000$ and step-size $h = \frac15$.
In Figures 4.4.3 and 4.4.4 we compare the fourth-order classical Magnus method with the remarkable performance of the fourth-order FM method, applied to the Airy equation with a large step-size, $h = \frac12$, while in Figures 4.4.5 and 4.4.6 we show the solution of the Airy equation with the fourth-order FM method for step-sizes $h = \frac14$ and $h = \frac{1}{10}$, respectively.

Figure 4.4.3: Global error of the fourth-order FM method with end points only and multiplicities all 2 for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 2000$ and step-size $h = \frac12$.

Figure 4.4.4: Global error of the fourth-order Magnus method for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 2000$ and step-size $h = \frac12$.

Figure 4.4.5: Global error of the fourth-order FM method with end points only and multiplicities all 2 for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 2000$ and step-size $h = \frac14$.

Figure 4.4.6: Global error of the fourth-order FM method with end points only and multiplicities all 2 for the Airy equation $y''(t) = -ty(t)$ with $[1,0]^\top$ initial conditions, $0 \le t \le 2000$ and step-size $h = \frac{1}{10}$.

Chapter 5
Highly oscillatory linear systems with variable coefficients

5.1 The asymptotic method

In this chapter we introduce further applications of the FM methods. In particular, we develop the FM method for solving highly oscillatory systems of ODEs with a time-dependent matrix, [Kha09b], [Kha09a]. We solve the matrix exponential in Lie groups applying the Magnus method; for discretizing the nested commutators we apply Filon quadrature, and so develop the FM method. We commence from a system of ordinary differential equations written in vector form,
\[ y' = A_\omega y + f(t), \qquad y(0) = y_0 \in \mathbb{R}^d, \qquad t \ge 0, \]
where $A_\omega$ is a non-singular matrix with large imaginary eigenvalues, $\sigma(A_\omega) \subset i\mathbb{R}$, $\|A_\omega\| \gg 1$, $\omega \gg 1$ is a real parameter describing the frequency of oscillation, and $f : \mathbb{R}\times\mathbb{R}^d \to \mathbb{R}^d$ is a smooth vector-valued function.

In the present chapter we allow the matrix $A_\omega$ to be a time-dependent non-singular matrix, as compared to the constant matrix $A_\omega$ discussed previously and presented in [Kha08b]. For more generality one can assume that $A_\omega$ is a function from $[0,\infty)$ to the set of matrices. However, the scope of this thesis is to provide answers when $A_\omega$ is the source of the oscillation occurring in the system; the methods introduced in our work are beneficial for large frequencies of oscillation, while for small frequencies, or even in the absence of oscillation, our discretization methods are of the same order as the Gauss–Christoffel rule with Gaussian points. The analytic solution of the system is given by the variation of constants formula,
\[ y(t) = X_\omega(t)y_0 + \int_0^t X_\omega(t - \tau)f(\tau)\,d\tau = X_\omega y_0 + I[f]. \tag{5.1.1} \]
The fact that $A_\omega$ is no longer a constant matrix results in a more complicated form of the fundamental matrix $X_\omega$, satisfying the matrix linear ODE
\[ X'_\omega = A_\omega X_\omega, \qquad X_\omega = e^{\Omega}, \]
where $\Omega$ is the Magnus expansion, an infinite recursive series,
\[
\Omega(t) = \int_0^t A(\tau)\,d\tau + \frac12\int_0^t\Big[A(\tau), \int_0^\tau A(\xi)\,d\xi\Big]d\tau + \frac14\int_0^t\Big[A(\tau), \int_0^\tau\Big[A(\xi), \int_0^\xi A(\zeta)\,d\zeta\Big]d\xi\Big]d\tau + \frac{1}{12}\int_0^t\Big[\Big[A(\tau), \int_0^\tau A(\xi)\,d\xi\Big], \int_0^\tau A(\zeta)\,d\zeta\Big]d\tau + \cdots.
\]
The challenge is to evaluate the highly oscillatory integral $I[f]$ in (5.1.1) efficiently. We already know that ordinary methods based on Gauss–Christoffel quadrature fail to approximate highly oscillatory integrals for increasing $\omega$, [IN05]. Hence we first focus on developing numerical methods to evaluate highly oscillatory integrals with a matrix-valued kernel, and next we use the results in the numerical approximation of (5.1.1).

Consider a vector-valued integral over a compact interval $[a,b]$,
\[ I[f] = \int_a^b X_\omega(t)f(t)\,dt, \qquad X'_\omega = A_\omega X_\omega, \qquad X_\omega = e^{\Omega}, \]
with an arbitrary non-singular matrix $A_\omega$ with large eigenvalues, $\|A_\omega^{-1}\| \ll 1$, $\sigma(A_\omega) \subset i\mathbb{R}$, a real parameter $\omega$ and a smooth vector-valued function $f \in \mathbb{R}^d$.

We remind the reader that if the matrix $A_\omega$ were constant, the task of obtaining an asymptotic expansion for the integral $I[f]$ would be quite simple, [Kha08b]. Applying integration by parts,
\[ I[f] = A_\omega^{-1}\big[X_\omega(b)f(b) - X_\omega(a)f(a)\big] - A_\omega^{-1}\int_a^b X_\omega(t)f'(t)\,dt = Q^A_1 - A_\omega^{-1}I[f'], \]
we obtain the asymptotic method $Q^A_s$, defined as the $s$-partial sum of the asymptotic expansion of $I[f]$,
\[ Q^A_s[f] = -\sum_{m=1}^{s}(-A_\omega)^{-m}\big[X_\omega(b)f^{(m-1)}(b) - X_\omega(a)f^{(m-1)}(a)\big]. \]
This is not the case when the matrix $A_\omega$ is arbitrary, since the substitution $X_\omega = A_\omega^{-1}X'_\omega$ in $I[f]$ is no longer useful for integration by parts,
\[ I[f] = \int_a^b A_\omega^{-1}(t)X'_\omega(t)f(t)\,dt. \]
However, it is possible to overcome this issue and obtain the asymptotic expansion of $I[f]$ for an arbitrary matrix $A_\omega$. In this section we show how applying matrix transformations to $I[f]$ leads to its explicit asymptotic expansion. Given
\[ I[f] = \int_a^b X_\omega(t)f(t)\,dt, \qquad X'_\omega = A_\omega X_\omega, \qquad X_\omega = e^{\Omega}, \tag{5.1.2} \]
with a matrix-valued kernel $X_\omega$ depending on a real parameter $\omega \gg 1$ and satisfying the matrix linear differential equation in (5.1.2), assume that $A_\omega$ is an arbitrary non-singular matrix of pure imaginary spectrum, $\sigma(A_\omega) \subset i\mathbb{R}$, $\|A_\omega^{-1}\| \ll 1$, and that $f \in \mathbb{R}^d$ is a smooth vector-valued function.

Transforming the integral (5.1.2) into an equivalent integral with a representation appropriate for our purposes yields the asymptotic expansion of (5.1.2). We first write the matrix $X_\omega$ in terms of its columns, $X_\omega = [\bar x_1, \bar x_2, \dots, \bar x_d]$, and stack them into the corresponding vector
\[ \bar X_\omega = [\bar x_1, \bar x_2, \dots, \bar x_d]^\top. \]
It is easy to verify that the latter satisfies the linear differential equation
\[ \bar X'_\omega = B_\omega\bar X_\omega, \qquad B_\omega = \bigoplus_{i=1}^{d} A_\omega, \]
where $B_\omega$ is the direct $d$-tuple sum of the original matrix $A_\omega$, a block-diagonal matrix with $d$ copies of $A_\omega$ on the diagonal. We also scale an identity matrix by taking its direct (Kronecker) product with the row vector $f^\top = [f_1, f_2, \dots, f_d]$,
\[ F = f^\top_{1\times d}\otimes I_{d\times d}. \]
For instance, take $f = [f_1, f_2]^\top$ and $I = \begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix}$. The direct product $F = f^\top\otimes I$ results in the matrix
\[ F = \begin{pmatrix} f_1 & 0 & f_2 & 0 \\ 0 & f_1 & 0 & f_2 \end{pmatrix}. \]
Thus $F = [F_1\ F_2\ \cdots\ F_d]$ consists of $d$ diagonal blocks, each block a diagonal matrix carrying the corresponding component $f_i$ on the diagonal, $F_i = f_i I$. These transformations result in the equality $X_\omega f = F\bar X_\omega$, and they turn out to be suitable for integration by parts and for the representation of the asymptotic expansion of the integral (5.1.2).

Example 5.1.1 Let
\[ X_\omega = \begin{pmatrix} x_{11} & x_{12} \\ x_{21} & x_{22} \end{pmatrix}, \qquad A_\omega = \begin{pmatrix} a_{11} & a_{12} \\ a_{21} & a_{22} \end{pmatrix}, \qquad f = \begin{pmatrix} f_1 \\ f_2 \end{pmatrix}. \]
Then
\[ \bar X_\omega = \begin{pmatrix} x_{11} \\ x_{21} \\ x_{12} \\ x_{22} \end{pmatrix}, \qquad B_\omega = \begin{pmatrix} a_{11} & a_{12} & 0 & 0 \\ a_{21} & a_{22} & 0 & 0 \\ 0 & 0 & a_{11} & a_{12} \\ 0 & 0 & a_{21} & a_{22} \end{pmatrix}, \qquad F = \begin{pmatrix} f_1 & 0 & f_2 & 0 \\ 0 & f_1 & 0 & f_2 \end{pmatrix}. \]
Hence
\[ X_\omega f = F\bar X_\omega = \begin{pmatrix} x_{11}f_1 + x_{12}f_2 \\ x_{21}f_1 + x_{22}f_2 \end{pmatrix}, \]
and $\bar X'_\omega = B_\omega\bar X_\omega$ whenever $X'_\omega = A_\omega X_\omega$.

As a result, we have obtained the following equality between two integrals,
\[ I[f] = \int_a^b X_\omega(t)f(t)\,dt = \int_a^b F(t)\bar X_\omega(t)\,dt = I[F], \]
where the matrix $X_\omega$ and the vector $\bar X_\omega$ satisfy the linear ODEs $X'_\omega = A_\omega X_\omega$ and $\bar X'_\omega = B_\omega\bar X_\omega$, respectively. This transformation allows us to approximate $I[F]$ and thus $I[f]$.
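The identities $X_\omega f = F\bar X_\omega$ and $B_\omega = \bigoplus A_\omega$ are precisely the standard vectorization rules, and they are easy to confirm numerically. The following short Python check, with random data and purely illustrative names, does so.

```python
import numpy as np

rng = np.random.default_rng(0)
d = 3
A = rng.standard_normal((d, d))   # stands in for A_omega at a fixed time
X = rng.standard_normal((d, d))   # stands in for X_omega at a fixed time
f = rng.standard_normal(d)

Xbar = X.flatten(order="F")       # stack the columns of X into one vector
B = np.kron(np.eye(d), A)         # direct d-tuple sum of A (block diagonal)
F = np.kron(f, np.eye(d))         # F = f^T (x) I, a d-by-d^2 matrix

print(np.allclose(F @ Xbar, X @ f))                        # X f = F Xbar
print(np.allclose(B @ Xbar, (A @ X).flatten(order="F")))   # vec(A X) = B vec(X)
```

The second check confirms that column-stacking turns the matrix equation $X'_\omega = A_\omega X_\omega$ into the vector equation $\bar X'_\omega = B_\omega\bar X_\omega$.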
Applying integration by parts to $I[F]$, we obtain the asymptotic expansion of the integral $I[f]$,
\[
I[f] = \int_a^b F(t)\bar X_\omega(t)\,dt = \int_a^b F(t)B_\omega^{-1}(t)\bar X'_\omega(t)\,dt = \big[F(t)B_\omega^{-1}(t)\bar X_\omega(t)\big]_a^b - \int_a^b\big(F(t)B_\omega^{-1}(t)\big)'\bar X_\omega(t)\,dt.
\]
Here we remind the reader of our notation for the asymptotics of matrix functions depending on a real parameter $\omega$. Throughout this work all norms are $L^\infty$ norms, for vectors, matrices and functions; the norm of a function is taken over the interval $[a,b]$. We say that $f = O(\tilde f)$, for an arbitrary function $f$ and a non-negative constant $\tilde f$ depending on a real parameter $\omega$, if the norms of $f$ and of its derivatives are all of order $O(\tilde f)$ as $\omega\to\infty$, namely $\|f^{(m)}\| = O(\tilde f)$ for $m = 0, 1, \dots$. For two arbitrary $n\times m$ matrices $A(x) = (a_{ij}(x))$ and $\tilde A = (\tilde a_{ij})$, $\tilde a_{ij} \ge 0$, depending on a real parameter $\omega$, we can thus posit $A(x) = O(\tilde A)$ if $a_{ij}(x) = O(\tilde a_{ij})$ element-wise as $\omega\to\infty$. We may also say that $f = O(1)$ if $f$ and its derivatives remain bounded on $[a,b]$ as $\omega\to\infty$. Let $\mathbf{1} = \{1_{ij}\}$ stand for the matrix with all entries equal to one; this allows us to write $A(x) = O(\mathbf{1})$ if $a_{ij}(x) = O(1)$ element-wise as $\omega\to\infty$. Finally, if $A = O(\tilde A)$ and $B = O(\tilde B)$, then integration and multiplication obey $\int_a^b A(x)\,dx = O(\tilde A)$ and $AB = O(\tilde A\tilde B)$.

Letting
\[ \sigma_0 = F, \qquad \sigma_{k+1} = \big(\sigma_k B_\omega^{-1}\big)', \qquad k = 0, 1, \dots, \]
we define the asymptotic method as
\[ Q^A_s[f] = \sum_{k=0}^{s-1}(-1)^k\big(\sigma_k(b)B_\omega^{-1}(b)\bar X_\omega(b) - \sigma_k(a)B_\omega^{-1}(a)\bar X_\omega(a)\big). \tag{5.1.3} \]
Below we state Theorem 3.1 from [Olv07] and Lemma 2.1 from [Kha08b]. In Theorem 5.1.3 we generalize these results to a matrix-valued kernel $X_\omega$, a vector-valued function $f$ and a time-dependent matrix $A_\omega$ in $I[f]$, (5.1.2).

Theorem 5.1.1 (S. Olver, [Olv07]) Suppose that $y$ satisfies the differential equation $y'(x) = A(x)y(x)$ in the interval $[a,b]$, for some invertible matrix-valued function $A$ such that $A^{-1} = O(\hat A)$ as $\omega\to\infty$. Assume that
\[ I[f] = \int_a^b f^\top(x)\,y(x)\,dx, \]
where $f \in \mathbb{R}^d$ is a smooth vector-valued function and $y \in \mathbb{R}^d$ is a smooth, highly oscillatory vector-valued function. Define
\[ Q^A_s[f] = \sum_{k=0}^{s-1}(-1)^k\big[\sigma_k^\top(b)A^{-1}(b)y(b) - \sigma_k^\top(a)A^{-1}(a)y(a)\big], \]
where
\[ \sigma_0 \equiv f, \qquad \sigma_{k+1} = (A^{-\top}\sigma_k)', \qquad k = 0, 1, \dots. \]
If $f = O(\tilde f)$ and $y(x) = O(\tilde y)$ for $a \le x \le b$, then
\[ I[f] - Q^A_s[f] = (-1)^s\int_a^b\sigma_s^\top y\,dx = O(\tilde f^\top\hat A^{s+1}\tilde y), \qquad \omega\to\infty. \]
Lemma 5.1.2 (MK, [Kha08b]) Let
\[ I[f] = \int_a^b X_\omega(t)f(t)\,dt, \qquad X'_\omega = A_\omega X_\omega, \]
where the matrix kernel $X_\omega$ satisfies the linear matrix ODE as above, $A_\omega$ is a constant non-singular matrix, $\|A_\omega^{-1}\| \ll 1$ and $f : \mathbb{R}\to\mathbb{R}^d$ is a smooth vector-valued function. Then, for $\omega \gg 1$,
\[ I[f] \sim -\sum_{m=1}^{\infty}(-A_\omega)^{-m}\big[X_\omega(b)f^{(m-1)}(b) - X_\omega(a)f^{(m-1)}(a)\big]. \]
For $\psi = \max\{\|f^{(s)}\|, \|f^{(s+1)}\|\}$,
\[ Q^A_s[f] - I[f] \sim O(\|A_\omega^{-s-1}\|\,\|X_\omega\|\,\psi), \qquad \omega\to\infty. \]
If $X_\omega = O(\hat X_\omega)$ and $f = O(\tilde f)$, then
\[ Q^A_s[f] - I[f] = O(A_\omega^{-s-1}\hat X_\omega\tilde f), \qquad \omega\to\infty, \]
element-wise.

Theorem 5.1.3 [Kha09b] Postulate that
\[ I[f] = \int_a^b X_\omega(t)f(t)\,dt \qquad \text{and} \qquad X'_\omega = A_\omega(t)X_\omega, \]
where $A_\omega$ is an arbitrary non-singular matrix, $\sigma(A_\omega) \subset i\mathbb{R}$, $\omega$ is an oscillatory parameter and $f \in \mathbb{R}^d$ is a smooth vector-valued function. If $F = O(\hat F)$, $B_\omega^{-1} = O(\hat B)$ and $\bar X_\omega = O(\hat X_\omega)$, then
\[ I[f] - Q^A_s[f] = (-1)^s\int_a^b\sigma_s(t)\bar X_\omega(t)\,dt = O(\hat F\hat B^{s+1}\hat X), \qquad \omega\to\infty. \]

Proof: We first prove the identity by induction,
\[ I[F] = \big[F(t)B_\omega^{-1}(t)\bar X_\omega(t)\big]_a^b - \int_a^b\big(F(t)B_\omega^{-1}(t)\big)'\bar X_\omega(t)\,dt = \big[\sigma_0(t)B_\omega^{-1}(t)\bar X_\omega(t)\big]_a^b - \int_a^b\sigma_1(t)\bar X_\omega(t)\,dt. \]
Suppose the identity holds for some $k \in \mathbb{N}$; we prove it for $k+1$:
\[ \int_a^b\sigma_{k+1}(t)\bar X_\omega(t)\,dt = \int_a^b\sigma_{k+1}(t)B_\omega^{-1}(t)\bar X'_\omega(t)\,dt = \big[\sigma_{k+1}(t)B_\omega^{-1}(t)\bar X_\omega(t)\big]_a^b - \int_a^b\sigma_{k+2}(t)\bar X_\omega(t)\,dt. \]
By induction, $\sigma_k = O(\hat F\hat B^k)$. Indeed, for $k = 0$, $\sigma_0 = F = O(\hat F)$. Assuming the equality holds for some $\sigma_k$,
\[ \sigma_{k+1} = \big[\sigma_k B_\omega^{-1}\big]' = \sigma_k'B_\omega^{-1} + \sigma_k\big(B_\omega^{-1}\big)' = O(\hat F\hat B^k)O(\hat B) + O(\hat F\hat B^k)O(\hat B) = O(\hat F\hat B^{k+1}). \]
Finally,
\[ \int_a^b\sigma_s\bar X_\omega\,dt = \big[\sigma_s B_\omega^{-1}\bar X_\omega\big]_a^b - \int_a^b\sigma_{s+1}\bar X_\omega\,dt = O(\hat F\hat B^{s+1}\hat X) + O(\hat F\hat B^{s+1}\hat X) = O(\hat F\hat B^{s+1}\hat X). \]

Corollary 5.1.4 [Kha09b] If $f^{(k)}(a) = f^{(k)}(b) = 0$ for $k = 0, 1, \dots, s-1$, then
\[ I[f] = O(\hat F\hat B^{s+1}\hat X). \]

Example 5.1.2 For practical purposes consider the integral
\[ I_h[f] = \int_0^h e^{\Omega(h-\tau)}f(\tau)\,d\tau. \]
Once we vectorize the system to obtain the asymptotic method, $\bar I$ stands for the vectorized identity matrix $I$ and $\bar X_\omega$ stands for the vectorized matrix $e^{\Omega}$. For $s = 2$ the asymptotic method, using information at the end points only, is
\[
Q^A_2 = \big[F(h)B_\omega^{-1}(0) - F'(h)B_\omega^{-2}(0) - F(h)\big(B_\omega^{-1}\big)'(0)B_\omega^{-1}(0)\big]\bar I - \big[F(0)B_\omega^{-1}(h) - F'(0)B_\omega^{-2}(h) - F(0)\big(B_\omega^{-1}\big)'(h)B_\omega^{-1}(h)\big]\bar X_\omega(h).
\]
Employing the initial-value integrator
\[ y_{n+1} = e^{\Omega_s}y_n + \int_0^h e^{\Omega_s(h-\tau)}f(t_n + \tau)\,d\tau, \]
where $\Omega_s$ stands for the truncated Magnus expansion, we can now solve (5.1.1) with the asymptotic method,
\[ y_{n+1} = e^{\Omega_s}y_n + Q^A_s. \]
We solve the matrix exponential $e^{\Omega_s}$ with the FM method described in [Kha09a]; the method combines the ideas of Lie-group solvers, namely the Magnus method, with Filon quadrature for solving the nested integral commutators in $\Omega$. The following lemma highlights a property of the asymptotic method which makes it a natural tool for the analysis of highly oscillatory ODEs, in place of Taylor-based reasoning.

Lemma 5.1.5 [Kha09a] The approximation with the asymptotic method $Q^A_s$ applied to an integral $I[f]$ is independent of the step-size of integration.

Proof: For any step-size $h$ and any number $n$ of node points $t_n = t_{n-1} + h$, the components evaluated at the internal node points of the truncated asymptotic sum $Q^A_s$ appear in pairs with opposite signs, leaving only the values at the end points:
\[
Q^A_s[f](t_1) - Q^A_s[f](t_0) + Q^A_s[f](t_2) - Q^A_s[f](t_1) + \cdots + Q^A_s[f](t_n) - Q^A_s[f](t_{n-1}) = Q^A_s[f](t_n) - Q^A_s[f](t_0).
\]
As a result, no matter how we partition the interval, we obtain the same representation of the asymptotic method $Q^A_s$, evaluated at the end points.
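Before moving on, here is a Python sketch of the constant-matrix asymptotic method of Lemma 5.1.2. The choices of $A_\omega = \omega\begin{psmallmatrix}0&1\\-1&0\end{psmallmatrix}$ (so that the kernel $e^{A_\omega t}$ is a rotation matrix, available in closed form), of $f$ and of the interval are illustrative assumptions; the reference value comes from a fine composite Simpson rule.

```python
import numpy as np
from scipy.integrate import simpson

fs = [lambda t: np.array([1.0 / (1.0 + t), np.exp(-t)]),         # f
      lambda t: np.array([-1.0 / (1.0 + t) ** 2, -np.exp(-t)])]  # f'

def kernel(omega, t):
    # X(t) = e^{A t} for A = omega * [[0, 1], [-1, 0]] is a rotation
    c, s = np.cos(omega * t), np.sin(omega * t)
    return np.array([[c, s], [-s, c]])

def QA(omega, s, a=0.0, b=1.0):
    # Q^A_s[f] = -sum_m (-A)^{-m} [X(b) f^{(m-1)}(b) - X(a) f^{(m-1)}(a)]
    A = omega * np.array([[0.0, 1.0], [-1.0, 0.0]])
    Q = np.zeros(2)
    for m in range(1, s + 1):
        Am = np.linalg.matrix_power(np.linalg.inv(-A), m)
        Q -= Am @ (kernel(omega, b) @ fs[m - 1](b) - kernel(omega, a) @ fs[m - 1](a))
    return Q

def reference(omega, a=0.0, b=1.0, n=200001):
    t = np.linspace(a, b, n)
    c, s = np.cos(omega * t), np.sin(omega * t)
    f1, f2 = 1.0 / (1.0 + t), np.exp(-t)
    return np.array([simpson(c * f1 + s * f2, x=t),
                     simpson(-s * f1 + c * f2, x=t)])

for omega in [100, 200, 400]:
    I = reference(omega)
    print(omega, np.linalg.norm(QA(omega, 1) - I), np.linalg.norm(QA(omega, 2) - I))
```

The printed errors fall roughly like $\omega^{-2}$ for $s = 1$ and $\omega^{-3}$ for $s = 2$, matching the $O(\|A_\omega^{-s-1}\|)$ estimate of the lemma; note also that, as Lemma 5.1.5 states, the value of $Q^A_s$ over $[a,b]$ uses end-point data only.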
5.2 The FM method

Employing the classical Filon-type quadrature, proven to be highly accurate for increasing $\omega$, as in [Kha08a], we construct an $r$-degree polynomial interpolant $v$ of the vector-valued function $f$ in (5.1.2),
\[ v(t) = \sum_{l=1}^{\nu}\sum_{j=0}^{\theta_l-1}\alpha_{l,j}(t)f^{(j)}(t_l), \]
which agrees with the function values and derivatives, $v^{(j)}(t_l) = f^{(j)}(t_l)$, at node points $a = t_1 < t_2 < \dots < t_\nu = b$, with associated multiplicities $\theta_1, \theta_2, \dots, \theta_\nu$, $j = 0, 1, \dots, \theta_l - 1$ and $l = 1, 2, \dots, \nu$, [Kha09b]. By definition, the Filon-type method is
\[ Q^F_s[f] = \int_a^b X_\omega(t)v(t)\,dt = \sum_{l=1}^{\nu}\sum_{j=0}^{\theta_l-1}\beta_{l,j}f^{(j)}(t_l), \qquad \beta_{l,j} = \int_a^b X_\omega(t)\alpha_{l,j}(t)\,dt. \]

Theorem 5.2.1 [Kha09b] Postulate that
\[ I[f] = \int_a^b X_\omega(t)f(t)\,dt \qquad \text{and} \qquad X'_\omega = A_\omega(t)X_\omega, \]
where $A_\omega$ is a non-singular matrix, $\sigma(A_\omega) \subset i\mathbb{R}$, $\omega$ is an oscillatory parameter and $f \in \mathbb{R}^d$ is a smooth vector-valued function. If $F = O(\hat F)$, $B_\omega^{-1} = O(\hat B)$ and $\bar X_\omega = O(\hat X_\omega)$, then
\[ I[f] - Q^F_s[f] = (-1)^s\int_a^b\sigma_s(t)\bar X_\omega(t)\,dt = O(\hat F\hat B^{s+1}\hat X), \qquad \omega\to\infty. \]

Proof: Observing that $I[f - v] = I[f] - Q^F_s[f]$, the proof follows by replacing $f$ in the asymptotic method (5.1.3) by $f - v$, together with the fact that $[f - v]^{(j)}(a) = [f - v]^{(j)}(b) = 0$ for $j = 0, 1, \dots, s-1$.

Corollary 5.2.2 [Kha09b] If $X_\omega = O(\mathbf{1})$ and $f = O(\mathbf{1})$, then
\[ Q^F_s[f] - I[f] = O(A_\omega^{-s-1}\mathbf{1}). \]

Theorem 5.2.3 [Kha09b] Let $\theta_1, \theta_\nu \ge s$ and $r = \sum_{l=1}^{\nu}\theta_l - 1$. Then $r$ is the numerical order of the Filon-type method applied to the linear system,
\[ y(t_n) - y_n = O(h^{r+1}). \]

Numerical experiments with the $Q^{FM}_s$ method, a combination of Filon quadrature and the Magnus method, are shown in Figures 5.2.1–5.2.7. By definition, the FM method is
\[ Q^{FM}_s[f] = \int_a^b e^{\Omega_s(t)}v(t)\,dt = \sum_{l=1}^{\nu}\sum_{j=0}^{\theta_l-1}\beta_{l,j}f^{(j)}(t_l), \qquad \beta_{l,j} = \int_a^b e^{\Omega_s(t)}\alpha_{l,j}(t)\,dt. \]
In our case it is the numerical discretization of the integral
\[ I[f] = \int_0^h e^{\Omega_s}f(t_n + \tau)\,d\tau, \]
applied to solve
\[ y_{n+1} = e^{\Omega_s}y_n + Q^{FM}_s[f]. \]
We present numerical results for the FM method $Q^{FM}_2$ with end points only and multiplicities all 2, for the equation $y''(t) = -ty(t) - \cos(t)$, $0 \le t \le 100$, with $[1,0]^\top$ initial conditions and various step-sizes, in Figures 5.2.1–5.2.7. First we show that with the FM method one can afford large step-sizes, such as $h = 1$ and $h = \frac12$, Figure 5.2.1. Decreasing the step-size to $h = \frac14$ and $h = \frac{1}{10}$ we obtain accuracy up to $10^{-8}$, Figure 5.2.2. In Figure 5.2.4 we show that for a fixed large step-size the accuracy of approximation improves with an increase in frequency, $h = \frac14$, $\omega = 10^2$ and $10^3$. For smaller step-sizes we demonstrate high accuracy of approximation for $\omega = 10^4$, Figure 5.2.5, and $\omega = 10^5$, Figure 5.2.6, while in Figure 5.2.7 we show that for $\omega = 10^6$ and $h\omega = 10^4$ the method delivers highly desirable accuracy, with error $10^{-14}$. This can be compared with Figure 5.2.3 for the same step-size.

Figure 5.2.1: Global error of the FM method $Q^{FM}_2$ with end points only and multiplicities all 2, for the equation $y''(t) = -ty(t) - \cos(t)$, $0 \le t \le 100$, with $[1,0]^\top$ initial conditions and step-sizes $h = 1$ and $h = \frac12$ (from left to right).

Figure 5.2.2: Global error of the FM method $Q^{FM}_2$ with end points only and multiplicities all 2, for the equation $y''(t) = -ty(t) - \cos(t)$, $0 \le t \le 100$, with $[1,0]^\top$ initial conditions and step-sizes $h = \frac14$ and $h = \frac{1}{10}$ (from left to right).

Figure 5.2.3: Global error of the FM method $Q^{FM}_2$ with end points only and multiplicities all 2, for the equation $y''(t) = -ty(t) - \cos(t)$, $0 \le t \le 100$, with $[1,0]^\top$ initial conditions and step-size $h = \frac{1}{100}$.

Figure 5.2.4: Global error of the $Q^{FM}_2$ FM method with end points only and multiplicities all 2, for the equation $y''(t) = -\omega ty(t) - \cos(t)$, $0 \le t \le 100$, with $[1,0]^\top$ initial conditions, step-size $h = \frac14$ and $\omega = 10^2$, $\omega = 10^3$ (from left to right).

Figure 5.2.5: Global error of the $Q^{FM}_2$ FM method with end points only and multiplicities all 2, for the equation $y''(t) = -\omega ty(t) - \cos(t)$, $0 \le t \le 100$, with $[1,0]^\top$ initial conditions, $\omega = 10^4$ and step-sizes $h = \frac{1}{10}$ and $h = \frac{1}{25}$ (from left to right).

Figure 5.2.6: Global error of the $Q^{FM}_2$ FM method with end points only and multiplicities all 2, for the equation $y''(t) = -\omega ty(t) - \cos(t)$, $0 \le t \le 100$, with $[1,0]^\top$ initial conditions and step-size $h = \frac{1}{100}$, with $\omega = 25\times 10^2$ and $\omega = 10^5$ (from left to right).

Figure 5.2.7: Global error of the $Q^{FM}_2$ FM method with end points only and multiplicities all 2, for the equation $y''(t) = -\omega ty(t) - \cos(t)$, $0 \le t \le 100$, with $[1,0]^\top$ initial conditions, step-size $h = \frac{1}{100}$ and $\omega = 10^6$.
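For a constant matrix, the vector-valued Filon weights $\beta_{l,j}$ can be generated from the moments $M_k = \int_0^h e^{A(h-\tau)}\tau^k\,d\tau$, which obey the integration-by-parts recursion $M_0 = A^{-1}(e^{Ah} - I)$, $M_k = A^{-1}(kM_{k-1} - h^k I)$. The Python sketch below uses this recursion, end-point Hermite data ($\theta = 2$) and the integrator $y_{n+1} = e^{Ah}y_n + Q^F[f]$; the test problem, parameters and reference tolerances are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

omega = 200.0
A = omega * np.array([[0.0, 1.0], [-1.0, 0.0]])
f  = lambda t: np.array([0.0, np.cos(t)])
df = lambda t: np.array([0.0, -np.sin(t)])

def moments(A, h, n):
    # M_k = int_0^h e^{A(h-tau)} tau^k dtau via the recursion above
    Ainv = np.linalg.inv(A)
    M = [Ainv @ (expm(A * h) - np.eye(2))]
    for k in range(1, n + 1):
        M.append(Ainv @ (k * M[k - 1] - h ** k * np.eye(2)))
    return M

def filon_step(tn, h, yn, M, Eh):
    # cubic Hermite coefficients of tau -> f(tn + tau) on [0, h]
    V = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                  [1, h, h**2, h**3], [0, 1, 2*h, 3*h**2]], float)
    C = np.linalg.solve(V, np.array([f(tn), df(tn), f(tn + h), df(tn + h)]))
    return Eh @ yn + sum(M[k] @ C[k] for k in range(4))

h, T, y, t = 0.1, 10.0, np.array([1.0, 0.0]), 0.0
M, Eh = moments(A, h, 3), expm(A * h)
while t < T - 1e-12:
    y = filon_step(t, h, y, M, Eh)
    t += h

ref = solve_ivp(lambda t, y: A @ y + f(t), (0, T), [1.0, 0.0],
                rtol=1e-12, atol=1e-12, max_step=1e-3).y[:, -1]
print("error:", np.linalg.norm(y - ref))
```

Since the kernel is frozen over each step, the moments are precomputed once; each step then costs a handful of small matrix-vector products, which is exactly the computational profile claimed for Filon-type methods in this chapter.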
5.3 The modified FM method

In our further discussion we develop a modified FM method $Q^{FM}_s$, employing ideas from the modified Magnus method, [Ise02c], [Kha09a]. By solving the variation-of-constants equation locally with a constant matrix, we introduce a local change of variables. Applications of the modified Magnus method include highly oscillatory linear systems of ODEs, [Kha09a],
\[ y'(t) = A_\omega(t)y(t) + f(t), \qquad y(t_0) = y_0, \]
with analytic solution
\[ y(t) = X_\omega(t)y_0 + \int_0^t X_\omega(t-\tau)f(\tau)\,d\tau = X_\omega y_0 + I[f]. \]
We assume that the spectrum of the matrix $A_\omega(t)$ has large imaginary values. Introducing a local change of variables at each mesh point, we write the solution in the form
\[ y(t) = e^{(t-t_n)A(t_n+\frac12 h)}\,x(t - t_n), \qquad t \ge t_n. \]
We apply the modified Magnus technique to the system and solve it locally with a constant matrix $A_\omega$, while $x(t)$ satisfies the highly oscillatory equation
\[ x'(t) = B_\omega(t)x(t) + e^{-tA_\omega(t_n+\frac h2)}f(t), \qquad t \ge 0, \]
with the same matrix $B_\omega$ as for the linear matrix ODEs,
\[ B_\omega(t) = e^{-tA_\omega(t_n+\frac h2)}\big[A_\omega(t_n + t) - A_\omega(t_n + \tfrac h2)\big]e^{tA_\omega(t_n+\frac h2)}. \]
It is possible to reduce the cost of evaluating the matrix $B(t)$ by simplifying it, [Ise04a]. Denote $q = \sqrt{g(t_n + \frac12 h)}$ and $v(t) = g(t_n + t) - g(t_n + \frac12 h)$. Then
\[
B(t) = v(t)\begin{pmatrix} (2q)^{-1}\sin 2qt & q^{-2}\sin^2 qt \\ -\cos^2 qt & -(2q)^{-1}\sin 2qt \end{pmatrix}
= v(t)\begin{pmatrix} q^{-1}\sin qt \\ -\cos qt \end{pmatrix}\begin{pmatrix} \cos qt & q^{-1}\sin qt \end{pmatrix},
\]
and for the product,
\[
B(t)B(s) = v(t)v(s)\,\frac{\sin q(s-t)}{q}\begin{pmatrix} q^{-1}\sin qt \\ -\cos qt \end{pmatrix}\begin{pmatrix} \cos qs & q^{-1}\sin qs \end{pmatrix}.
\]
It was shown in [Ise04a] that
\[ \|B(t)\| = \cos^2\omega t + \frac{\sin^2\omega t}{\omega^2} \qquad \text{and} \qquad B(t) = O(t - t_{n+\frac12}). \]
This approximation results in the following algorithm,
\[ y_{n+1} = e^{hA(t_n+h/2)}x_n, \qquad x_n = e^{\tilde\Omega_n}y_n. \]
Given the compact representation of the matrix $B(t)$ with oscillatory entries, we solve for $\Omega(t)$ with a Filon-type method, approximating the function $v(t)$ by a polynomial $\tilde v(t)$, for example a Hermite polynomial, as in the classical Filon-type method. Furthermore, in our approximation we use end points only, although the method is general and more approximation nodes may be used. As a result, we can now solve (5.1.2) with the modified Magnus method in combination with Filon quadrature, which we call the modified FM method; this proves to be a more accurate approximation than solving the commutators with Gaussian quadrature.

The performance of the modified Magnus method improves on that of the classical Magnus method for a number of reasons. The fact that the matrix $B$ is small, $B(t) = O(t - t_{n+\frac12})$, contributes a higher-order correction to the solution, [Ise02b], [Kha09a], and the order of the modified Magnus method increases from $p = 2s$ to $p = 3s + 1$.
Chapter 6
Highly oscillatory non-linear systems

6.1 Waveform relaxation methods

In this chapter we present iteration techniques, namely waveform relaxation algorithms, for solving non-linear ordinary differential equations with associated initial conditions,
\[ \frac{d}{dt}z(t) = f(t, z), \qquad z(0) = z_0, \qquad t \in [0, T], \tag{6.1.1} \]
where $T \ge 0$, $f : [0,T]\times\mathbb{R}^n\to\mathbb{R}^n$, $z_0 = [z_{1,0}, z_{2,0}, \dots, z_{n,0}] \in \mathbb{R}^n$ is the vector of initial values, and $z(t) = [z_1(t), z_2(t), \dots, z_n(t)] \in \mathbb{R}^n$ is the solution vector. In other words, the system is written as follows,
\[ \frac{d}{dt}z_i(t) = f_i(t, z_1, z_2, \dots, z_n), \qquad z_i(0) = z_{i,0}, \qquad i = 1, 2, \dots, n. \]
A simple example of a waveform relaxation scheme, mapping the previous iterate $z^{(\nu-1)}$ into the new iterate $z^{(\nu)}$, is
\[ \frac{d}{dt}z_i^{(\nu)}(t) = f_i\big(t, z_1^{(\nu)}, \dots, z_i^{(\nu)}, z_{i+1}^{(\nu-1)}, \dots, z_n^{(\nu-1)}\big), \qquad z_i^{(\nu)}(0) = z_{i,0}, \qquad i = 1, \dots, n. \]
This relaxation scheme is called Gauss–Seidel waveform relaxation, by analogy with the Gauss–Seidel iterative method for solving linear and non-linear systems of algebraic equations. Iterative methods of this kind reduce the problem of solving a large system of differential equations in $n$ variables to a sequence of differential equations in a single variable. Another example of an iteration scheme is the Jacobi waveform relaxation method,
\[ \frac{d}{dt}z_i^{(\nu)}(t) = f_i\big(t, z_1^{(\nu-1)}, \dots, z_{i-1}^{(\nu-1)}, z_i^{(\nu)}, z_{i+1}^{(\nu-1)}, \dots, z_n^{(\nu-1)}\big), \qquad z_i^{(\nu)}(0) = z_{i,0}. \]
In both cases the iteration takes the initial condition as the initial approximation $z^{(0)}(t)$,
\[ z_i^{(0)}(t) = z_{i,0}, \qquad t \in [0,T], \qquad i = 1, \dots, n, \]
defined along the whole interval of integration. It is easy to notice that, unlike the Jacobi algorithm, the Gauss–Seidel method is sequential, i.e. one has to solve the equations one after another, while the Jacobi scheme can be solved in parallel, simultaneously.

Let us define the waveform relaxation operator $\mathcal{F}$ by $z^{(\nu)} = \mathcal{F}(z^{(\nu-1)})$. Then a one-step iteration method can be written as
\[ \frac{d}{dt}z^{(\nu)}(t) = F(t, z^{(\nu-1)}, z^{(\nu)}), \qquad z^{(\nu)}(0) = z_0, \]
where $F(t, u, v)$ is defined so that $F(t, v, v) = f(t, v)$. The well-known Picard iteration, as well as the Jacobi and Gauss–Seidel schemes, fit this format with
\[ F_i(t, u, v) = f_i(t, u_1, u_2, \dots, u_i, \dots, u_n) \quad \text{(Picard)}, \]
\[ F_i(t, u, v) = f_i(t, u_1, u_2, \dots, u_{i-1}, v_i, u_{i+1}, \dots, u_n) \quad \text{(Jacobi)}, \]
\[ F_i(t, u, v) = f_i(t, v_1, v_2, \dots, v_{i-1}, v_i, u_{i+1}, \dots, u_n) \quad \text{(Gauss–Seidel)}. \]
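A compact Python sketch of both sweeps, on a deliberately small non-linear system integrated by Euler's method on a fixed grid (all choices illustrative): each sweep solves one scalar ODE per component, reading the other components from stored iterate arrays.

```python
import numpy as np

# z1' = -z2 + sin(t) z1^2,  z2' = z1,  an illustrative 2-by-2 system
f = [lambda t, z: -z[1] + np.sin(t) * z[0] ** 2,
     lambda t, z: z[0]]

T, m, sweeps = 1.0, 2001, 8
t = np.linspace(0.0, T, m)
ht = t[1] - t[0]
z0 = np.array([1.0, 0.0])

def sweep(zprev, gauss_seidel):
    znew = zprev.copy()
    for i in range(2):                     # one scalar ODE per component
        src = znew if gauss_seidel else zprev   # GS reads already-updated rows
        zi = np.empty(m)
        zi[0] = z0[i]
        for j in range(m - 1):             # explicit Euler in the i-th variable
            frozen = [zi[j] if k == i else src[k][j] for k in range(2)]
            zi[j + 1] = zi[j] + ht * f[i](t[j], frozen)
        znew[i] = zi
    return znew

for name, gs in [("Jacobi", False), ("Gauss-Seidel", True)]:
    z = np.tile(z0[:, None], (1, m))       # z^{(0)}(t) = z_0 on the whole interval
    for nu in range(1, sweeps + 1):
        znew = sweep(z, gs)
        print(name, nu, np.max(np.abs(znew - z)))
        z = znew
```

On a short window the successive differences decay geometrically, and the Gauss–Seidel sweep typically contracts faster than Jacobi, consistent with the contraction analysis that follows.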
Convergence analysis of waveform relaxation methods is presented in [Van93], [Nev89a], [Nev89b] and [MN87]. The convergence of the sequences $\{z^{(\nu)}\}_{\nu=0}^{\infty}$ is analysed in the context of Banach spaces. We consider continuous vector-valued functions defined on $[0,T]$, i.e. $C([0,T];\mathbb{R}^n)$, with the maximum norm
\[ \|z\|_T = \max_{t\in[0,T]}\|z(t)\|, \]
where $\|\cdot\|$ is a vector norm on $\mathbb{R}^n$.

Definition 6.1.1 Let $(\mathbb{X}, \|\cdot\|_{\mathbb{X}})$ be a normed linear space with its associated norm. An operator $\mathcal{U} : \mathbb{X}\to\mathbb{X}$ is called a contraction if there exists a $\gamma$, with $0 \le \gamma < 1$, such that for all $x, y \in \mathbb{X}$,
\[ \|\mathcal{U}(x) - \mathcal{U}(y)\|_{\mathbb{X}} \le \gamma\|x - y\|_{\mathbb{X}}. \]
It is shown in [Van93] that waveform relaxation is a contraction mapping.

Theorem 6.1.1 [Van93] Let $\mathbb{X}$ be a Banach space and $\mathcal{U}$ a contraction. Then there is a unique $x^\star \in \mathbb{X}$ such that $\mathcal{U}(x^\star) = x^\star$. Moreover, if $x^{(0)}$ is any point in $\mathbb{X}$ and we define the sequence $\{x^{(\nu)}\}_{\nu=0}^{\infty}$ by $x^{(\nu)} = \mathcal{U}(x^{(\nu-1)})$, then $x^{(\nu)}\to x^\star$ as $\nu\to\infty$.

In order to prove convergence results we introduce an exponentially scaled norm,
\[ \|z\|_b = \max_{t\in[0,T]} e^{-bt}\|z(t)\|, \tag{6.1.2} \]
for some $b > 0$; the two norms are equivalent, $e^{-bT}\|z\|_T \le \|z\|_b \le \|z\|_T$ for every element $z(t)$ of the space. It was also proved in [Van93] that the iteration contracts the derivatives of the iterates,
\[ \Big\|\frac{d}{dt}z^{(n+1)} - \frac{d}{dt}z^{(k+1)}\Big\|_b \le \gamma\Big\|\frac{d}{dt}z^{(n)} - \frac{d}{dt}z^{(k)}\Big\|_b, \qquad \gamma < 1. \]
In the present work we consider an important class of waveform relaxation methods,
\[ \frac{d}{dt}v = F(t, u, v), \]
such as the Picard, Jacobi and Gauss–Seidel iterations, which are representatives of the more general class of waveform iteration methods. The Lipschitz condition on $f(t, y(t))$ induces a Lipschitz condition on $F(t,u,v)$: there exist positive constants $l_1$ and $l_2$ such that for all $u_1, u_2, v_1, v_2 \in \mathbb{R}^d$ and $t \in [0,T]$,
\[ \|F(t, u_1, v_1) - F(t, u_2, v_2)\| \le l_1\|u_1 - u_2\| + l_2\|v_1 - v_2\|. \tag{6.1.3} \]
In the sequel we prove convergence of the Jacobi, Picard and Gauss–Seidel waveform relaxation methods applied to systems of non-linear ODEs. We commence from a system of ordinary differential equations written in vector form,
\[ y' = A_\omega y + f(t, y), \qquad y(0) = y_0 \in \mathbb{R}^d, \qquad t \ge 0. \]
The following assumptions hold: $A(t)$ is a $d\times d$ time-dependent matrix and $f : \mathbb{R}\times\mathbb{R}^d\to\mathbb{R}^d$ is a smooth vector-valued function. The analytic solution is given by the variation of constants formula,
\[ y(t) = X(t)y_0 + \int_0^t X(t-\tau)f(\tau, y(\tau))\,d\tau = Xy_0 + I[f]. \]
The fact that $A(t)$ is not a constant matrix results in a more complicated fundamental matrix $X(t)$, satisfying the matrix linear ODE
\[ X'(t) = A(t)X(t), \qquad X(t) = e^{\Omega}, \]
where $\Omega$ is the Magnus expansion, the infinite recursive series written out in Section 5.1. We mention here that the following theorem is also valid for a constant matrix $A$, which represents a special case.

Theorem 6.1.2 [Kha09b] Suppose that $f(t, y(t))$ is continuous and satisfies a Lipschitz condition on $D = [0,T]\times\mathbb{R}^d$, and $\mathcal{F}$ is the waveform relaxation operator $y^{(\nu+1)} = \mathcal{F}(y^{(\nu)})$, $y(0) = y_0$, defined as
\[ y^{(\nu+1)}(t) = X(t)y_0 + \int_{t_0}^{t} X(t-\tau)F\big(\tau, y^{(\nu)}(\tau), y^{(\nu+1)}(\tau)\big)\,d\tau. \]
Then the Jacobi, Picard and Gauss–Seidel waveform relaxation methods applied to the non-linear system of ODEs $y'(t) = A(t)y(t) + f(t, y(t))$ converge.
Proof: First we prove that $\mathcal{F}$ is a contraction. Consider any $y, z \in \mathbb{X}$. One step of the waveform relaxation iteration can be written as
\[ y^{(1)}(t) = X(t)y_0 + \int_{t_0}^{t} X(t-\tau)F\big(\tau, y(\tau), y^{(1)}(\tau)\big)\,d\tau \]
and
\[ z^{(1)}(t) = X(t)z_0 + \int_{t_0}^{t} X(t-\tau)F\big(\tau, z(\tau), z^{(1)}(\tau)\big)\,d\tau. \]
Taking the difference,
\[ \|y^{(1)}(t) - z^{(1)}(t)\| \le \int_0^t\big\|X(t-\tau)\big(F(\tau, y(\tau), y^{(1)}(\tau)) - F(\tau, z(\tau), z^{(1)}(\tau))\big)\big\|\,d\tau. \]
Then there exist positive constants $l_1, l_2$ so that for all $t \in [0,T]$,
\[ \|y^{(1)}(t) - z^{(1)}(t)\| \le \max_{t\in[0,T]}\|X(t)\|\Big[l_1\int_0^t\|y(\tau) - z(\tau)\|\,d\tau + l_2\int_0^t\|y^{(1)}(\tau) - z^{(1)}(\tau)\|\,d\tau\Big]. \]
We multiply both sides by $e^{-bt}$ and maximize over $[0,T]$. It is easy to see that
\[ \max_{t\in[0,T]} e^{-bt}\int_0^t e^{b\tau}\,d\tau \le 1/b, \]
and since $X(t)$ is continuous, it is bounded over the finite interval $[0,T]$, i.e. there exists $c > 0$ such that $\|X(t)\|_\infty \le c$. Using the $b$-norm notation we can write
\[ \|y^{(1)} - z^{(1)}\|_b \le \frac{cl_1}{b}\|y - z\|_b + \frac{cl_2}{b}\|y^{(1)} - z^{(1)}\|_b, \tag{6.1.4} \]
and for sufficiently large $b$ we obtain
\[ \|y^{(1)} - z^{(1)}\|_b \le \gamma\|y - z\|_b, \qquad \gamma = \frac{l_1/b}{1/c - l_2/b} < 1, \tag{6.1.5} \]
which proves that $\mathcal{F}$ is a contraction mapping. Convergence follows from Theorem 6.1.1.

Remark 6.1.1 It is shown in [Van93] that from step to step of the iteration the error of approximation decreases when measured in the $b$-norm: if $y(t)$ is the exact solution and $y^{(\nu-1)}$ is the iterate, then
\[ \|y^{(\nu)} - y\|_b \le \gamma\|y^{(\nu-1)} - y\|_b, \]
for $0 \le \gamma < 1$ and $b > 0$.

Definition 6.1.2 [Van93] A differential system is said to have the strict WR contractivity property on $[0,T]$ if the waveform relaxation algorithm applied to the system contracts in the maximum norm on $[0,T]$, i.e.
\[ \|z^{(\nu+1)} - z^{(\nu)}\|_T \le \gamma\|z^{(\nu)} - z^{(\nu-1)}\|_T, \qquad \gamma < 1, \qquad \nu \ge 1. \tag{6.1.6} \]

Theorem 6.1.3 [Van93] For any non-linear system $y' = f(t, y)$ which satisfies the assumptions of the convergence theorem, there exists a $T > 0$ such that the system has the strict WR contractivity property on $[0,T]$.

Proof: Let $y = y^{(\nu)}$ and $z = y^{(\nu-1)}$. Then
\[ \|y^{(\nu+1)}(t) - y^{(\nu)}(t)\| \le l_1\int_0^t\|y^{(\nu)}(s) - y^{(\nu-1)}(s)\|\,ds + l_2\int_0^t\|y^{(\nu+1)}(s) - y^{(\nu)}(s)\|\,ds. \]
Maximizing the norm over the interval $[0,T]$,
\[ \|y^{(\nu+1)} - y^{(\nu)}\|_T \le l_1 T\|y^{(\nu)} - y^{(\nu-1)}\|_T + l_2 T\|y^{(\nu+1)} - y^{(\nu)}\|_T, \]
so for sufficiently small $T$ we get
\[ \|y^{(\nu+1)} - y^{(\nu)}\|_T \le \gamma\|y^{(\nu)} - y^{(\nu-1)}\|_T, \qquad \gamma = \frac{l_1 T}{1 - l_2 T} < 1. \]

6.2 WRF method

In the approximation of highly oscillatory non-linear systems of ODEs we use the implicit representation of the solution and the initial-value integrator
\[ y_{n+1} = e^{hA_\omega}y_n + \int_0^h e^{(h-\tau)A_\omega}f\big(t_n + \tau, y(t_n + \tau)\big)\,d\tau. \]
Our method develops further some of the implications of Filon quadrature and waveform relaxation methods in this setting.

Waveform relaxation (WR) methods, a family of iterative techniques designed for analysing dynamical systems, have been studied by a number of authors, among whom we mention [AC91], [AHvdD07], [GS98], [GV07], [JOW98], [JV97], [LO87], [MN87], [Nev89a], [Nev89b], [Van93], [WOSVR85]. Assuming that $f : \mathbb{R}\times\mathbb{R}^d\to\mathbb{R}^d$ satisfies the Lipschitz condition, the classical waveform Picard algorithm states
\[ y^{[s]}(t) = X(t)y_0 + \int_0^t X(t-\tau)f\big(\tau, y^{[s-1]}(\tau)\big)\,d\tau. \]
It is desirable to apply Filon quadrature, with its attractive properties for large $\omega$, to non-linear dynamical systems, to solve the integral
\[ I[f] = \int_a^b X_\omega(t)f(t, y(t))\,dt, \qquad X'_\omega = A_\omega X_\omega, \qquad \|A_\omega^{-1}\| \ll 1. \]
Formally, the asymptotic expansion for the integral $I[f]$ above reads
\[ I[f] \sim \sum_{m=1}^{\infty}(-1)^{m-1}A_\omega^{-m}\big[X_\omega(b)f^{(m-1)}(b, y(b)) - X_\omega(a)f^{(m-1)}(a, y(a))\big], \]
where the function values and derivatives at the end point, $f^{(m)}(b, y(b))$, $m = 0, 1, 2, \dots$, are no longer available. To overcome this issue we introduce waveform methods. Note that if the solution $y$ is oscillatory, then the function $f(t, y)$ is also likely to be oscillatory, causing waveform relaxation methods, used by themselves, to be inefficient. Thus, for efficiency, we discretize $I[f]$ according to the rules of Filon quadrature, choosing a vector-valued polynomial (e.g. Hermite) which agrees with the function values and derivatives $f^{(m)}$ at the node points. Our WRF method iterates $y$ with a waveform method, solving $I[f]$ at each step with Filon quadrature,
\[ y^{[0]}_{n+1} = y^{[s]}_n, \]
\[ y^{[1]}_{n+1} = e^{A_\omega h}y^{[s]}_n + \int_0^h e^{(h-\tau)A_\omega}f\big(t_n + \tau, y^{[0]}_{n+1}\big)\,d\tau, \]
\[ \vdots \]
\[ y^{[s]}_{n+1} = e^{A_\omega h}y^{[s]}_n + \int_0^h e^{A_\omega(h-\tau)}v\big(t_n + \tau, y^{[s-1]}_{n+1}(t_n + \tau)\big)\,d\tau, \]
where $v$ is a polynomial approximation to $f$ as before. The method takes the initial-value constant $y^{[0]}_{n+1}$ at the first step of the iteration to obtain the first value $y^{[1]}_{n+1}$. Having values at the two end points, we can then evaluate the derivatives at those points and construct a polynomial which agrees with the function values and derivatives at the end points. In order to obtain a method of any desired high order, higher-order derivatives of the function $f$ are required. In that respect we recommend differentiating the equation of the system, $y' = Ay + f(t, y)$, itself to obtain higher-order derivatives of the solution vector $y$, and using the results in the construction of the approximating polynomial. The more derivatives at the end points are used in the polynomial approximation, the more terms cancel in the asymptotic expansion of $I[f]$, leading to higher-order methods and hence better accuracy.

We highlight that the WRF method employs end points only; adding internal points would lead to an increasingly fine discretization of the interval, since the approximation at the internal points itself requires further partition of the subinterval, and this recursion is endless. From the point of view of the function asymptotics, the fundamental point is that the performance of Filon quadrature is determined by the values at the end points of the integration interval only, making the addition of internal points less valuable, Theorem 2.2.1, Theorem 3.2.1. The computational cost of the WRF method is comparable to that of the Filon-type method: having precomputed the vector-valued moments, all that remains are a few linear algebra operations.
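A Python sketch of the WRF step for the illustrative non-linear oscillator $y'' = -\omega^2 y - \varepsilon y^3$, written as $y' = A_\omega y + f(y)$ with $f(y) = [0, -\varepsilon y_1^3]^\top$: the Hermite data for the composition $\tau\mapsto f(y(t_n+\tau))$ come from the chain rule, $\frac{d}{dt}f = [0, -3\varepsilon y_1^2 y_2]^\top$, and the oscillatory weights reuse the moment recursion of the earlier sketch. All parameter values and names are illustrative assumptions.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

omega, eps = 50.0, 1.0
A = np.array([[0.0, 1.0], [-omega**2, 0.0]])
f  = lambda y: np.array([0.0, -eps * y[0] ** 3])
ft = lambda y: np.array([0.0, -3.0 * eps * y[0] ** 2 * y[1]])  # d/dt f(y(t)), y1' = y2

def moments(A, h, n):
    # M_k = int_0^h e^{A(h-tau)} tau^k dtau
    Ainv = np.linalg.inv(A)
    M = [Ainv @ (expm(A * h) - np.eye(2))]
    for k in range(1, n + 1):
        M.append(Ainv @ (k * M[k - 1] - h ** k * np.eye(2)))
    return M

def wrf_step(yn, h, M, Eh, sweeps=4):
    ynew = yn.copy()                      # y^{[0]}_{n+1}
    V = np.array([[1, 0, 0, 0], [0, 1, 0, 0],
                  [1, h, h**2, h**3], [0, 1, 2*h, 3*h**2]], float)
    for _ in range(sweeps):
        # Hermite data of tau -> f(y(tn + tau)) at tau = 0 and tau = h
        C = np.linalg.solve(V, np.array([f(yn), ft(yn), f(ynew), ft(ynew)]))
        ynew = Eh @ yn + sum(M[k] @ C[k] for k in range(4))
    return ynew

h, T = 0.05, 1.0
M, Eh = moments(A, h, 3), expm(A * h)
y, t = np.array([1.0, 0.0]), 0.0
while t < T - 1e-12:
    y = wrf_step(y, h, M, Eh)
    t += h

ref = solve_ivp(lambda t, y: A @ y + f(y), (0, T), [1.0, 0.0],
                rtol=1e-12, atol=1e-12).y[:, -1]
print("WRF error:", np.linalg.norm(y - ref))
```

Four sweeps per step, as in the experiments reported below, already bring the iterate to the accuracy of the underlying Filon discretization; further iteration cannot improve on the quadrature order, as the next theorem makes precise.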
Theorem 6.2.1 [Kha08b] Suppose that $r$ is the numerical order of a waveform relaxation method and $s$ is the numerical order of the quadrature discretization applied to a non-linear system of ODEs,
\[ y' = Ay + f(t, y), \qquad y(t_0) = y_0, \qquad t \ge 0, \]
with an arbitrary matrix $A$ and $f : \mathbb{R}\times\mathbb{R}^d\to\mathbb{R}^d$ satisfying the Lipschitz condition. Then
\[ y(h) - y_h = O(h^{q+1}), \qquad q = \min\{r, s\}. \]

Proof: Let $y(h)$ be the exact solution, $y^{[r]}_h$ its numerical approximation by a waveform method, and $y_h$ the final numerical solution after discretization. Then, estimating in the $L^\infty$-norm,
\[ \|y(h) - y_h\| \le \|y(h) - y^{[r]}_h\| + \|y^{[r]}_h - y_h\| = O(h^{r+1}) + O(h^{s+1}) = O(h^{q+1}), \tag{6.2.1} \]
where $q = \min\{r, s\}$.

Corollary 6.2.2 [Kha09b] The numerical order of the WRF method is the minimum over both the Filon quadrature and the waveform method applied to solve the non-linear system $y' = Ay + f$.

Proof: It follows immediately from Theorem 6.2.1. Thus, once we have chosen a quadrature rule of a given order, we iterate the equation until the order of the quadrature is attained; further iteration is pointless.

Theorem 6.2.3 [Kha09b] The WRF method is convergent.

Proof: Follows from (6.2.1) as $h\to 0$.

In Table 6.2.1 we demonstrate the accuracy of the WRF method for increasing $\omega$ in just four iterations. Due to the non-linearity the accuracy improves slightly more slowly, since we have chosen to iterate oscillatory equations. On the other hand, taking into account the rapid oscillation of the solution for large $\omega$, applying only four iterations combined with Filon quadrature is very little effort to achieve up to ten digits of accuracy, as described in Table 6.2.1. Figure 6.2.1 demonstrates the logarithmic error of the WRF method for different values of $\omega$. Waveform methods do not contribute to the function asymptotics, which makes it harder to parallel the two methods, namely the Filon-type method and the WRF method, as shown in Figures 6.2.1 and 3.2.4. We have seen that the Filon-type methods have the same asymptotic order as the asymptotic method, where the error depends on inverse powers of $\omega$. For the asymptotic behaviour of the WRF method the bottom line is that one iterates the asymptotic expansion, and the advantage of applying the right quadrature rule is that, for increasing $\omega$, the error remains remarkably close to that for smaller $\omega$. Waveform methods do not preserve the nice asymptotic features of the Filon quadrature, but the latter, even in the presence of iteration, captures up to $10^{-12}$ accuracy as $\omega$ increases.

Finally, for the methods introduced in the current work there were no restrictions on the phase of the solution, whilst for the discretization of $I[f]$ with Filon quadrature some special functions can be considered the most obvious choice. The analysis of the asymptotic, Filon-type and WRF methods for a time-dependent matrix using Magnus expansions, as well as some alternative choices of quadrature methods, can be found in [Kha08b, Kha09b].

Table 6.2.1: Approximation error of the WRF method for the non-linear ODE $y'' = -\omega y - 3y^3$, compared with the results of the MATLAB ode45 solver set to RelTol $= 10^{-12}$, AbsTol $= 10^{-16}$.

 h          ω = 10       h          ω = 100      h          ω = 1000
 2.50e-01   1.04e-03     1.00e-01   -8.32e-04    4.00e-02   -8.90e-04
 1.00e-01   2.25e-05     5.00e-02   -5.33e-05    2.00e-02   -5.16e-05
 5.00e-02   1.40e-06     2.50e-02   -3.35e-06    1.00e-02   -3.22e-06
 4.00e-02   5.74e-07     1.00e-02   -8.59e-08    3.33e-02   -3.98e-08
 1.25e-02   4.83e-09     5.00e-03   -5.36e-09    1.25e-03   -6.63e-10

Figure 6.2.1: Logarithmic error (y-axis) against step-size (x-axis) of the WRF method for the equation $y'' = -\omega y - 3y^3$, initial values $[1,1]^\top$, with end points only and multiplicities all 2, for $\omega = 10, 100, 1000$.

6.3 WRFM method

Extending the WRF method to non-linear systems with a time-dependent matrix, we now make use of the Filon–Magnus combination, the FM method. These ideas combine nicely, forming a new WRFM method. Consider the non-linear system
\[ y' = A_\omega(t)y + f(t, y), \qquad y(t_0) = y_0, \]
where $A_\omega$ has large imaginary eigenvalues and $f : \mathbb{R}\times\mathbb{R}^d\to\mathbb{R}^d$. The analytic solution is given by
\[ y(t) = X_\omega(t)y_0 + \int_0^t X_\omega(t-\tau)f(\tau, y(\tau))\,d\tau = X_\omega y_0 + I[f]. \]
Our WRFM method iterates $y$ with a waveform method, solving $I[f]$ at each step with the FM method,
\[ y^{[0]}_{n+1} = y^{[s]}_n, \]
\[ y^{[1]}_{n+1} = e^{\Omega_s}y^{[s]}_n + \int_0^h e^{\Omega_s(h-\tau)}f\big(t_n + \tau, y^{[0]}_{n+1}\big)\,d\tau, \]
\[ \vdots \]
\[ y^{[s]}_{n+1} = e^{\Omega_s}y^{[s]}_n + \int_0^h e^{\Omega_s(h-\tau)}v\big(t_n + \tau, y^{[s-1]}_{n+1}(t_n + \tau)\big)\,d\tau, \]
where $v$ is a polynomial approximation to $f$ as before.

Modified WRFM method

Further applications of the modified Magnus method include systems of highly oscillatory non-linear equations
\[ y' = A_\omega(t)y + f(t, y), \qquad y(t_0) = y_0, \]
with analytic solution
\[ y(t) = X_\omega(t)y_0 + \int_0^t X_\omega(t-\tau)f(\tau, y(\tau))\,d\tau = X_\omega y_0 + I[f]. \]
We apply the modified Magnus technique to the system of non-linear ODEs by introducing the change of variables
\[ y(t) = e^{(t-t_n)A_\omega(t_n+\frac h2)}\,x(t - t_n), \qquad t \ge t_0, \]
and solving the system locally with a constant matrix $A_\omega$, while $x(t)$ satisfies the non-linear highly oscillatory equation
\[ x'(t) = B_\omega(t)x(t) + e^{-tA_\omega(t_n+\frac h2)}f\big(t, e^{tA_\omega(t_n+\frac h2)}x(t)\big), \qquad t \ge 0, \]
with exactly the same matrix $B_\omega$ as for the linear equations,
\[ B_\omega(t) = e^{-tA_\omega(t_n+\frac h2)}\big[A_\omega(t_n + t) - A_\omega(t_n + h/2)\big]e^{tA_\omega(t_n+\frac h2)}. \]
The analytic solution of the new system is of the form
\[ x(t) = \tilde X_\omega(t)y_0 + \int_0^t \tilde X_\omega(t-\tau)\,e^{-\tau A_\omega(t_n+\frac h2)}f\big(\tau, e^{\tau A_\omega(t_n+\frac h2)}x(\tau)\big)\,d\tau, \tag{6.3.1} \]
where $\tilde X'_\omega(t) = B_\omega(t)\tilde X_\omega(t)$ and $\tilde X_\omega(0) = I$. The Magnus expansion provides us with a solution for the matrix kernel $\tilde X(t)$, namely $\tilde X(t) = e^{\Omega(t)}$ with
\[
\Omega(t) = \int_0^t B_\omega(\tau)\,d\tau + \frac12\int_0^t\Big[B_\omega(\tau), \int_0^\tau B_\omega(\xi)\,d\xi\Big]d\tau + \frac14\int_0^t\Big[B_\omega(\tau), \int_0^\tau\Big[B_\omega(\xi), \int_0^\xi B_\omega(\zeta)\,d\zeta\Big]d\xi\Big]d\tau + \frac{1}{12}\int_0^t\Big[\Big[B_\omega(\tau), \int_0^\tau B_\omega(\xi)\,d\xi\Big], \int_0^\tau B_\omega(\zeta)\,d\zeta\Big]d\tau + \cdots.
\]
The modified WRFM method is defined as a local approximation of the solution vector $y$, obtained by solving
\[ x'(t) = B_\omega(t)x(t) + e^{-tA_\omega(t_n+\frac h2)}f\big(t, e^{tA_\omega(t_n+\frac h2)}x(t)\big), \qquad x(0) = y_n, \]
with the classical Magnus method for the linear part of the system and the WRF method, [Kha08b], for the non-linear part. This approximation results in the following algorithm,
\[ x^{[0]}_{n+1} = y_n, \]
\[ x^{[1]}_{n+1} = e^{\Omega_s}y_n + \int_0^h e^{\Omega_s(h-\tau)}e^{-\tau A(t_n+\frac h2)}f\big(\tau, e^{\tau A(t_n+\frac h2)}x^{[0]}_{n+1}(\tau)\big)\,d\tau, \]
\[ \vdots \]
\[ x^{[s]}_{n+1} = e^{\Omega_s}y_n + \int_0^h e^{\Omega_s(h-\tau)}e^{-\tau A(t_n+\frac h2)}f\big(\tau, e^{\tau A(t_n+\frac h2)}x^{[s-1]}_{n+1}(\tau)\big)\,d\tau, \]
\[ y_{n+1} = e^{hA_\omega(t_n+h/2)}x^{[s]}_{n+1}. \]
Here $\Omega_s$ stands for the truncated Magnus expansion $\Omega(t)$.

To prove convergence one can use the exponentially scaled norm introduced in (6.1.2). We notice that for large values of $\|A_\omega\|$ one can replace the $b$-norm notation by an $A$-norm notation,
\[ \|z\|_{\|A_\omega\|} = \max_{t\in[0,T]} e^{-\|A_\omega\|t}\|z(t)\|. \tag{6.3.2} \]
This notation is particularly interesting in the scope of the current work, since we consider matrices with large eigenvalues and hence large norm.

Lemma 6.3.1 [Kha09b] Suppose that $f(t, y(t))$ is continuous and satisfies a Lipschitz condition on $D = [0,T]\times\mathbb{R}^d$. Then the Jacobi, Picard and Gauss–Seidel waveform relaxation methods applied to the modified system (6.3.1),
\[ x^{[0]}(t) = y_0, \]
\[ x^{[1]}(t) = e^{\Omega_s(t)}y_0 + \int_0^t e^{\Omega_s(t-\tau)}e^{-\tau A_\omega(t_n+h/2)}F\big(\tau, e^{\tau A_\omega(t_n+h/2)}x^{[0]}(\tau)\big)\,d\tau, \]
\[ \vdots \]
\[ x^{[s]}(t) = e^{\Omega_s(t)}y_0 + \int_0^t e^{\Omega_s(t-\tau)}e^{-\tau A_\omega(t_n+h/2)}F\big(\tau, e^{\tau A_\omega(t_n+h/2)}x^{[s-1]}(\tau)\big)\,d\tau, \]
are convergent, and the system has strict WR contractivity on $[0,T]$.

Proof: The proof in the $b$-norm follows from Theorem 6.1.2 and Theorem 6.1.3. The proof in the $A$-norm follows from Theorem 6.1.2 and Theorem 6.1.3 by replacing the $b$-norm notation with the $A$-norm notation,
\[ \|x(t) - x^{[1]}(t)\|_\infty \le \gamma_{\|A_\omega\|}\,\|x(t) - x^{[1]}(t)\|_{\|A_\omega\|}, \qquad \gamma = \frac{l_1/\|A_\omega\|}{1/c - l_2/\|A_\omega\|} < 1. \]

Chapter 7
Conclusions

7.1 Asymptotic expansion versus Taylor expansion for highly oscillatory ODEs

Numerical analysis of differential equations is based on the Taylor theorem. This includes the fundamental concepts of time-stepping, truncation error, order and so on, which underlie the construction and analysis of discretization algorithms. Suppose $f$ has $n+1$ continuous derivatives on an open interval containing $a$. Then for each $x$ in the interval,
\[ f(x) = f(a) + \frac{f'(a)}{1!}(x-a) + \frac{f^{(2)}(a)}{2!}(x-a)^2 + \cdots + \frac{f^{(n)}(a)}{n!}(x-a)^n + R_n(x), \]
where
\[ R_n(x) = \frac{f^{(n+1)}(\xi)}{(n+1)!}(x-a)^{n+1} \]
is the Lagrange form of the remainder, for some $\xi \in (a, x)$. For Euler's method the truncation error can be examined using Taylor's theorem,
\[ y(t_{n+1}) = y(t_n) + \Delta t\,y'(t_n) + \tfrac12\Delta t^2\,y''(\xi_n), \qquad \xi_n \in [t_n, t_{n+1}]. \]
The magnitude of the errors arising from, for example, the Euler method can be demonstrated by comparison with a Taylor expansion of $y$,
\[ y(t_0 + h) = y(t_0) + hy'(t_0) + \tfrac12 h^2 y''(t_0) + O(h^3). \]
But the Taylor theorem is the wrong tool for high oscillation: each time we differentiate, the amplitude is roughly scaled by the frequency. This means that in methods based on Taylor reasoning we are compelled to choose a ridiculously small step-size. In contrast to the Taylor expansion, the asymptotic expansion is a more powerful tool in the analysis of highly oscillatory ODEs, since the asymptotic expansion, and its error term, are expressed in terms of inverse powers of the frequency of oscillation. Consider, for instance, the extrinsic linear oscillator $y' = Ay + \alpha\sin(\omega t)$ with $\omega \gg \|A\|$. The asymptotic expansion for this equation is available at any given time point, and for numerical purposes the truncated asymptotic expansion can be evaluated at any large time value $t$,
\[ y(t) \sim e^{At}y_0 + \sum_{n=0}^{\infty}\frac{1}{\omega^{2n+1}}\big(e^{At} - I\cos(\omega t)\big)A^{2n}\alpha - \sum_{n=0}^{\infty}\frac{1}{\omega^{2n+2}}A^{2n+1}\alpha\sin(\omega t). \]
In this work we developed numerical tools, based on asymptotic expansions, for solving systems of highly oscillatory ODEs, in which the accuracy of the approximation improves with large frequencies of oscillation. We addressed the issues of the numerical approximation of systems of ODEs with large frequencies of oscillation, which was an open problem. For small frequencies our methods are comparable with classical methods, but they have a remarkable advantage once applied to equations with large frequencies. This is demonstrated in examples comparing with classical methods as well as with state-of-the-art solvers in Matlab and Maple. Our algorithms for the Filon-type methods and FM methods, as well as parallel computing for non-linear systems, are implemented in Matlab, except for the symbolic calculation in the asymptotic method, implemented using the Maple symbolic package. The efficiency of the methods was tested and compared with Matlab built-in routines. The efficiency of our algorithms is superior for large frequencies: where Matlab solvers take hours to complete the computation with a variable but incredibly small step-size, we get the answer within a couple of seconds, roughly at the press of a button. And this is simple to explain: not only do we build our methods with a relatively large step-size, we also use very little information about the function, working with end points only. Moreover, our methods do not require explicit availability of the derivatives, as mentioned before: for instance, on the interval $[0,1]$ one can replace derivative data by function values at points shifted by $1/\omega$ from the end points, sampling at $1/\omega$ near zero and at $1 - 1/\omega$ near one.
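As a final quantitative illustration of this point, the following Python sketch evaluates the leading terms ($n = 0$) of the expansion above for an illustrative $2\times2$ matrix $A$ and compares them at $t = 1$ with a tightly-toleranced reference solution; the matrix, initial data and tolerances are all assumptions of the sketch, not taken from the thesis experiments.

```python
import numpy as np
from scipy.linalg import expm
from scipy.integrate import solve_ivp

A = np.array([[0.0, 1.0], [-1.0, -0.1]])
alpha = np.array([0.0, 1.0])
y0 = np.array([1.0, 0.0])
t_end = 1.0

def asymptotic(omega, t):
    # leading (n = 0) terms of the expansion above; the remainder is O(omega^{-3})
    E = expm(A * t)
    return (E @ y0
            + (E - np.eye(2) * np.cos(omega * t)) @ alpha / omega
            - A @ alpha * np.sin(omega * t) / omega ** 2)

for omega in [1e2, 1e3, 1e4]:
    rhs = lambda t, y: A @ y + alpha * np.sin(omega * t)
    ref = solve_ivp(rhs, (0.0, t_end), y0, rtol=1e-12, atol=1e-14,
                    max_step=0.2 / omega).y[:, -1]
    err = np.linalg.norm(asymptotic(omega, t_end) - ref)
    print(f"omega = {omega:8.0f}   truncated-expansion error = {err:.3e}")
```

The printed error falls by roughly three orders of magnitude per decade of $\omega$, the $O(\omega^{-3})$ behaviour of the two leading terms, while the cost of the reference solver grows with $\omega$, since it must resolve every oscillation.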
Chapter 8
Future work

8.1 Moment-free approximation

In my current work I am developing moment-free numerical methods, such as the moment-free Filon method and the Levin-type method, for solving highly oscillatory systems of ODEs, [Kha08a]. We presented the asymptotic method based on integration by parts, as well as the Filon-type method approximating $f$ by a polynomial, for example Hermite,
\[ \tilde f(x) = \sum_{l=1}^{\nu}\sum_{j=0}^{\theta_l-1}\alpha_{l,j}(x)f^{(j)}(c_l), \]
which satisfies $\tilde f^{(j)}(c_l) = f^{(j)}(c_l)$ at node points $0 = c_1 < c_2 < \dots < c_\nu = 1$, with associated multiplicities $\theta_1, \theta_2, \dots, \theta_\nu$, $j = 0, 1, \dots, \theta_l - 1$, $l = 1, 2, \dots, \nu$. However, replacing the function $f(x)$ by a polynomial $\tilde f(x)$, the method requires the computation of moments $\int x^m e^{i\omega g(x)}\,dx$, $m \ge 0$,
\[ Q^F_s[f] = I[\tilde f] = \sum_{k=0}^{n}c_k I[x^k], \]
which are not always available.

Instead, in the current project we consider a function $F(x)$ such that
\[ \frac{d}{dx}\big[F(x)e^{i\omega g(x)}\big] = f(x)e^{i\omega g(x)}, \]
and obviously $I[f] = \big[F(x)e^{i\omega g(x)}\big]_a^b$. The idea is to approximate the integral by $Q^L[f] = \big[v(x)e^{i\omega g(x)}\big]_a^b$, where $v(x)$ is some function (e.g. a collocation polynomial) approximating $F(x)$, [Lev97], [Olv07]. We obtain
\[ F'(x)e^{i\omega g(x)} + i\omega g'(x)F(x)e^{i\omega g(x)} = f(x)e^{i\omega g(x)}, \]
so that $L[F](x) = f(x)$, with the operator $L$ defined as
\[ L[F] = F' + i\omega g'F. \]

Theorem 8.1.1 [Olv06] Suppose that $g'(x) \ne 0$ for $x \in [a,b]$. Let $\{\psi_k\}_0^n$ be a basis of functions independent of $\omega$, let $s$ be some positive integer, and let $\{x_k\}_0^\nu$ be a set of node points, $a = x_0 < x_1 < \dots < x_\nu = b$, with a set of multiplicities $\{m_k\}_0^\nu$, $m_0, m_\nu \ge s$, associated with the node points. Suppose that $v = \sum_{k=0}^{n}c_k\psi_k$, where $n = \sum_{k=0}^{\nu}m_k - 1$, is the solution of the system of collocation equations
\[ L[v](x_k) = f(x_k), \quad \frac{dL[v]}{dx}(x_k) = f'(x_k), \quad \dots, \quad \frac{d^{m_k-1}L[v]}{dx^{m_k-1}}(x_k) = f^{(m_k-1)}(x_k) \]
for every integer $0 \le k \le \nu$, with $L[v] = v' + i\omega g'v$. Define
\[ g_k = \big[(g'\psi_k)(x_0), \dots, (g'\psi_k)^{(m_0-1)}(x_0), \dots, (g'\psi_k)(x_\nu), \dots, (g'\psi_k)^{(m_\nu-1)}(x_\nu)\big]^\top. \]
If the vectors $\{g_0, \dots, g_n\}$ are linearly independent, then for sufficiently large $\omega$ the system has a unique solution and
\[ I[f] - Q^L[f] \sim O(\omega^{-s-1}), \]
where $Q^L[f] \equiv \big[v(x)e^{i\omega g(x)}\big]_a^b = v(b)e^{i\omega g(b)} - v(a)e^{i\omega g(a)}$.

In work in progress, [Kha08a], we extend these ideas and apply the Levin-type method to vector-valued functions with a matrix-valued kernel,
\[ I[f] = \int_a^b X_\omega f\,dx. \]
The theory is available for the family of vector-valued integrals
\[ I[f] = \int_a^b f(x)^\top y(x)\,dx, \]
where $f$ is a vector-valued function and $y$ is a highly oscillatory vector-valued function satisfying the differential equation $y'(x) = A(x)y(x)$.

Theorem 8.1.2 [Olv07] Suppose that $y$ satisfies the differential equation $y' = Ay$ on $[a,b]$, for some matrix-valued function $A$ such that $A^{-1} = O(\hat A)$ as $\omega\to\infty$. Let
\[ Q^A_s[f] = \sum_{k=0}^{s-1}(-1)^k\big[\sigma_k(b)^\top A^{-1}(b)y(b) - \sigma_k(a)^\top A^{-1}(a)y(a)\big], \]
where
\[ \sigma_0^\top \equiv f^\top, \qquad \sigma_{k+1}^\top = \big(\sigma_k^\top A^{-1}\big)', \qquad k = 0, 1, \dots. \]
For $f = O(\tilde f)$ and $y(x) = O(\tilde y)$ it is true that
\[ Q^A_s[f] - I[f] = (-1)^{s+1}\int_a^b\sigma_s^\top y\,dx = O\big(\tilde f^\top\hat A^{s+1}\tilde y\big), \qquad \omega\to\infty. \]
More precisely, $L[F] = f$ for $L[F] = F' + A^\top F$. To compute $F$ exactly is in general not possible, but it can be approximated using collocation methods, [Olv07], where the Levin method for vector-valued functions is defined as follows. Let $v(x) = \sum_{k=1}^{n}c_k\psi_k(x)$ for some set of basis functions $\{\psi_k\}$, $\psi_k : \mathbb{R}\to\mathbb{R}^d$, where $n = d\sum m_k$ is the total number of equations in the system. For a sequence of nodes $x_1, \dots, x_\nu$ and multiplicities $m_1, \dots, m_\nu$, the coefficients of $v$ are determined by solving the system
\[ L[v](x_k) = f(x_k), \quad \dots, \quad L[v]^{(m_k-1)}(x_k) = f^{(m_k-1)}(x_k), \]
and the Levin-type method is defined as
\[ Q^L[f] = v(b)^\top y(b) - v(a)^\top y(a). \]
The method can be applied to solve linear and non-linear systems once we apply the matrix transformation of Chapter 5 to obtain the asymptotic expansion for the family of integrals with a matrix-valued kernel.
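A minimal Python sketch of the scalar Levin collocation just described, with monomial basis $\psi_k(x) = x^k$, simple nodes (all multiplicities 1) and the illustrative choices $g(x) = x + x^2$, $f(x) = 1/(1+x)$ on $[0,1]$; the reference value comes from adaptive quadrature of the real and imaginary parts.

```python
import numpy as np
from scipy.integrate import quad

f  = lambda x: 1.0 / (1.0 + x)
g  = lambda x: x + x ** 2          # g'(x) = 1 + 2x > 0 on [0, 1]
dg = lambda x: 1.0 + 2.0 * x

def levin(omega, nu=8):
    # collocate L[v](x_j) = f(x_j), with L[v] = v' + i omega g' v, v = sum_k c_k x^k
    x = np.linspace(0.0, 1.0, nu)
    k = np.arange(nu)
    deriv = np.where(k > 0, k * x[:, None] ** np.maximum(k - 1, 0), 0.0)
    M = deriv + 1j * omega * dg(x)[:, None] * x[:, None] ** k
    c = np.linalg.solve(M, f(x))
    v = lambda t: np.polyval(c[::-1], t)   # polyval wants highest degree first
    return v(1.0) * np.exp(1j * omega * g(1.0)) - v(0.0) * np.exp(1j * omega * g(0.0))

for omega in [50, 100, 200]:
    re, _ = quad(lambda x: f(x) * np.cos(omega * g(x)), 0, 1, limit=4000)
    im, _ = quad(lambda x: f(x) * np.sin(omega * g(x)), 0, 1, limit=4000)
    print(omega, abs(levin(omega) - (re + 1j * im)))
```

With simple nodes ($m_0 = m_\nu = 1$, i.e. $s = 1$) the observed error decays like $O(\omega^{-2})$, in line with Theorem 8.1.1, and no moments of $e^{i\omega g}$ are required, only values of $f$ and $g'$.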
8.2 Applications to PDEs

The results of our research are particularly interesting in application to partial differential equations. Discretization of oscillatory PDEs results in systems of rapidly oscillatory ODEs. Hence, having obtained efficient solvers for highly oscillatory systems of ODEs, we expect to apply them to solve oscillatory PDEs at the very core of quantum mechanics, molecular dynamics and electrodynamics, e.g. the Schrödinger equation with large potentials, the Helmholtz equation and the Maxwell equations. This may well involve the combination of our ideas with other approaches, e.g. homogenization, Wigner measures, eikonal expansions and geometric optics. These expectations have strong theoretical support, since the methods introduced in our papers are by construction independent of the matrix dimension and valid for large and small eigenvalues, and hence for all frequencies. The methods also seem promising for the Fermi–Pasta–Ulam problem and for other Hamiltonian ODEs with highly oscillatory solutions.

The phenomenon of high oscillation observed in certain dynamical systems may have different explanations for its occurrence. Sometimes oscillations occur due to the nature of the system itself, the way it is designed, owing to its inner properties. This is what we call intrinsic oscillation, when the oscillation is a global feature of the dynamical system itself. For example, if the system has a dominating imaginary spectrum, then it is in the nature of the system to be oscillatory, and hence we deal with intrinsic high oscillation. On the other hand, it may happen that, for numerous reasons, an oscillatory element is embedded into a non-oscillatory system. When such an oscillatory forcing term is added to a non-oscillatory system, making the solution oscillate rapidly, we deal with extrinsic high oscillation: on substantially larger time scales, when the frequency of the input signal is negligible, the extrinsic oscillation originates in the forcing term. A simple example of an extrinsic oscillator is $y' = Ay + \alpha\sin(\omega t)$ with $\omega \gg \|A\|$. Note that the asymptotic expansion for this equation is available at any given time point,
\[ y(t) \sim e^{At}y_0 + \sum_{n=0}^{\infty}\frac{1}{\omega^{2n+1}}\big(e^{At} - I\cos(\omega t)\big)A^{2n}\alpha - \sum_{n=0}^{\infty}\frac{1}{\omega^{2n+2}}A^{2n+1}\alpha\sin(\omega t). \]
In our work we are mainly concerned with numerical methods designed for solving dynamical systems with intrinsic high oscillation. There are many famous examples of such systems. For instance, take the Airy equation,
\[ y'' \pm k^2 t y = 0, \]
where the imaginary part of the spectrum ensures that the solution is oscillatory. Similarly, a particularly interesting example of intrinsic high oscillation is the non-linear Emden–Fowler equation,
\[ y'' + ty^3 = 0. \]
For applications in mechanics the Hamiltonian system is a classical example,
\[ H(p, q) = \frac12(q_1^2 + q_2^2) + \frac12\sin(p_1 - p_2) + 3p_2^4. \]
The Hamiltonian formalism is considered the closest approach of classical mechanics to the Schrödinger equation of quantum mechanics,
\[ i\hbar\frac{\partial}{\partial t}\Psi(r, t) = -\frac{\hbar^2}{2m}\nabla^2\Psi(r, t) + V(r)\Psi(r, t). \]
In chaos theory, the apparent paradox between the results of the Fermi-Pasta-Ulam experiment and the hypothesis that essentially any non-linearity would lead to a system exhibiting ergodic behaviour has become known as the Fermi-Pasta-Ulam problem,
\[
y_j'' = (y_{j+1} - 2 y_j + y_{j-1}) \Big[ 1 + \frac{1}{2} (y_{j+1} - y_{j-1}) \Big].
\]
Instead of the expected ergodic behaviour, such physical systems exhibited almost exactly periodic behaviour. Finally, we mention the mathematical model describing wave propagation on the surface of shallow water, known as the Korteweg-de Vries equation,
\[
\partial_t \phi + \partial_x^3 \phi + 6 \phi\, \partial_x \phi = 0.
\]

References

[AC91] U. M. Ascher and S. Y. P. Chan. On parallel methods for boundary value ODEs. Computing, 46(1):1–17, 1991.
[AHvdD07] U. M. Ascher, H. Huang, and K. van den Doel. Artificial time integration. BIT, 47(1):3–25, 2007.
[AS64] M. Abramowitz and I. A. Stegun. Handbook of Mathematical Functions with Formulas, Graphs, and Mathematical Tables, volume 55 of National Bureau of Standards Applied Mathematics Series. U.S. Government Printing Office, Washington, D.C., 1964.
[BCR00] S. Blanes, F. Casas, and J. Ros. Improved high order integrators based on the Magnus expansion. BIT, 40(3):434–450, 2000.
[Cas07] F. Casas. Sufficient conditions for the convergence of the Magnus expansion. J. Phys. A, 40(50):15001–15017, 2007.
[CG93] P. E. Crouch and R. Grossman. Numerical integration of ordinary differential equations on manifolds. J. Nonlinear Sci., 3(1):1–33, 1993.
[dB81] N. G. de Bruijn. Asymptotic Methods in Analysis. Dover Publications, New York, third edition, 1981.
[DR84] P. J. Davis and P. Rabinowitz. Methods of Numerical Integration. Computer Science and Applied Mathematics. Academic Press, Orlando, FL, second edition, 1984.
[Fli60] E. A. Flinn. A modification of Filon's method of numerical integration. J. Assoc. Comput. Mach., 7:181–184, 1960.
[GH06] V. Grimm and M. Hochbruck. Error analysis of exponential integrators for oscillatory second-order differential equations. J. Phys. A, 39(19):5495–5507, 2006.
[GS98] M. J. Gander and A. M. Stuart. Space-time continuous analysis of waveform relaxation for the heat equation. SIAM J. Sci. Comput., 19(6):2014–2031, 1998.
[GV07] M. J. Gander and S. Vandewalle. Analysis of the parareal time-parallel time-integration method. SIAM J. Sci. Comput., 29(2):556–578, 2007.
[Hau06] F. Hausdorff. Die symbolische Exponentialformel in der Gruppentheorie. Berichte der Sächsischen Akademie der Wissenschaften (Math. Phys. Klasse), 58:19–48, 1906.
[HL03] M. Hochbruck and C. Lubich. On Magnus integrators for time-dependent Schrödinger equations. SIAM J. Numer. Anal., 41(3):945–963, 2003.
[HO08] D. Huybrechs and S. Olver. Highly oscillatory quadrature. In Highly Oscillatory Problems: Computation, Theory and Applications, Isaac Newton Institute for Mathematical Sciences, to appear, 2008.
[Hoc04] M. Hochbruck. Exponential integrators. Workshop on Exponential Integrators, Innsbruck, 2004.
[HV06] D. Huybrechs and S. Vandewalle. On the evaluation of highly oscillatory integrals by analytic continuation. SIAM J. Numer. Anal., 44(3):1026–1048, 2006.
[IN04] A. Iserles and S. P. Nørsett. On quadrature methods for highly oscillatory integrals and their implementation. BIT, 44(4):755–772, 2004.
[IN05] A. Iserles and S. P. Nørsett.
Efficient quadrature of highly oscillatory integrals using derivatives. Proc. R. Soc. Lond. Ser. A Math. Phys. Eng. Sci., 461(2057):1383–1399, 2005.
[IN06] A. Iserles and S. P. Nørsett. On the computation of highly oscillatory multivariate integrals with stationary points. BIT, 46(3):549–566, 2006.
[INO06] A. Iserles, S. P. Nørsett, and S. Olver. Highly oscillatory quadrature: the story so far. In Numerical Mathematics and Advanced Applications, pages 97–118. Springer, Berlin, 2006.
[Ise02a] A. Iserles. Brief introduction to Lie-group methods. In Collected Lectures on the Preservation of Stability under Discretization (Fort Collins, CO, 2001), pages 123–143. SIAM, Philadelphia, PA, 2002.
[Ise02b] A. Iserles. On the global error of discretization methods for highly-oscillatory ordinary differential equations. BIT, 42(3):561–599, 2002.
[Ise02c] A. Iserles. Think globally, act locally: solving highly-oscillatory ordinary differential equations. Appl. Numer. Math., 43(1-2):145–160, 2002. 19th Dundee Biennial Conference on Numerical Analysis (2001).
[Ise04a] A. Iserles. On the method of Neumann series for highly oscillatory equations. BIT, 44(3):473–488, 2004.
[Ise04b] A. Iserles. On the numerical quadrature of highly-oscillating integrals. I. Fourier transforms. IMA J. Numer. Anal., 24(3):365–391, 2004.
[Ise05] A. Iserles. On the numerical quadrature of highly-oscillating integrals. II. Irregular oscillators. IMA J. Numer. Anal., 25(1):25–44, 2005.
[Ise09a] A. Iserles. Magnus expansions and beyond. To appear in Proc. MPIM, Bonn, 2009.
[Ise09b] A. Iserles. A First Course in the Numerical Analysis of Differential Equations. Cambridge Texts in Applied Mathematics. Cambridge University Press, Cambridge, second edition, 2009.
[JOW98] Z. Jackiewicz, B. Owren, and B. Welfert. Pseudospectra of waveform relaxation operators. Comput. Math. Appl., 36(8):67–85, 1998.
[JV97] J. Janssen and S. Vandewalle. On SOR waveform relaxation methods. SIAM J. Numer. Anal., 34(6):2456–2481, 1997.
[Kha08a] M. Khanamiryan. Levin-type method for highly oscillatory systems of ordinary differential equations. Preprint, 2008.
[Kha08b] M. Khanamiryan. Quadrature methods for highly oscillatory linear and nonlinear systems of ordinary differential equations. I. BIT, 48(4):743–761, 2008.
[Kha09a] M. Khanamiryan. The Magnus method for solving oscillatory Lie-type ordinary differential equations. Submitted, 2009.
[Kha09b] M. Khanamiryan. Quadrature methods for highly oscillatory linear and nonlinear systems of ordinary differential equations. II. Submitted to BIT, 2009.
[Lev82] D. Levin. Procedures for computing one- and two-dimensional integrals of functions with rapid irregular oscillations. Math. Comp., 38(158):531–538, 1982.
[Lev97] D. Levin. Analysis of a collocation method for integrating rapidly oscillatory functions. J. Comput. Appl. Math., 78(1):131–138, 1997.
[LO87] Ch. Lubich and A. Ostermann. Multigrid dynamic iteration for parabolic equations. BIT, 27(2):216–234, 1987.
[Mag54] W. Magnus. On the exponential solution of differential equations for a linear operator. Comm. Pure Appl. Math., 7:649–673, 1954.
[MK98] H. Munthe-Kaas. Runge-Kutta methods on Lie groups. BIT, 38(1):92–111, 1998.
[MN87] U. Miekkala and O. Nevanlinna. Convergence of dynamic iteration methods for initial value problems. SIAM J. Sci. Statist. Comput., 8(4):459–482, 1987.
[MN08] P. C. Moan and J. Niesen. Convergence of the Magnus series. Found. Comput. Math., 8(3):291–301, 2008.
[Nev89a] O. Nevanlinna. Remarks on Picard-Lindelöf iteration. I. BIT, 29(2):328–346, 1989.
[Nev89b] O. Nevanlinna. Remarks on Picard-Lindelöf iteration. II. BIT, 29(3):535–562, 1989.
[Nør69] S. P. Nørsett. An A-stable modification of the Adams-Bashforth methods. In Conf. on Numerical Solution of Differential Equations (Dundee, 1969). Springer, Berlin, 1969.
[Olv74] F. W. J. Olver. Asymptotics and Special Functions. Computer Science and Applied Mathematics. Academic Press, New York-London, 1974.
[Olv06] S. Olver. Moment-free numerical integration of highly oscillatory functions. IMA J. Numer. Anal., 26(2):213–227, 2006.
[Olv07] S. Olver. Numerical approximation of vector-valued highly oscillatory integrals. BIT, 47(3):637–655, 2007.
[OM97] B. Owren and A. Marthinsen. Integration methods based on rigid frames. Tech. Report, Norwegian University of Science & Technology, 1997.
[RR01] A. Ralston and P. Rabinowitz. A First Course in Numerical Analysis. Dover Publications, Mineola, NY, second edition, 2001.
[Ste93] E. M. Stein. Harmonic Analysis: Real-Variable Methods, Orthogonality, and Oscillatory Integrals, volume 43 of Princeton Mathematical Series. Princeton University Press, Princeton, NJ, 1993. With the assistance of Timothy S. Murphy; Monographs in Harmonic Analysis, III.
[Van93] S. Vandewalle. Parallel Multigrid Waveform Relaxation for Parabolic Problems. Teubner Scripts on Numerical Mathematics. B. G. Teubner, Stuttgart, 1993.
[Won01] R. Wong. Asymptotic Approximations of Integrals, volume 34 of Classics in Applied Mathematics. SIAM, Philadelphia, PA, 2001. Corrected reprint of the 1989 original.
[WOSVR85] J. White, F. Odeh, A. L. Sangiovanni-Vincentelli, and A. Ruehli. Waveform relaxation: theory and practice. Technical Report UCB/ERL M85/65, EECS Department, University of California, Berkeley, 1985.
[Zan96] A. Zanna. The method of iterated commutators for ordinary differential equations on Lie groups. Tech. Rep. DAMTP 1996/NA12, University of Cambridge, 1996.