Learning Elliptic Partial Differential Equations with Randomized Linear Algebra

Published version
Peer-reviewed

Type

Article

Authors

Boullé, N 
Townsend, A 

Abstract

Given input–output pairs of an elliptic partial differential equation (PDE) in three dimensions, we derive the first theoretically rigorous scheme for learning the associated Green's function $G$. By exploiting the hierarchical low-rank structure of $G$, we show that one can construct an approximant to $G$ that converges almost surely and achieves a relative error of $\mathcal{O}(\varGamma_\epsilon^{-1/2}\log^3(1/\epsilon)\epsilon)$ using at most $\mathcal{O}(\epsilon^{-6}\log^4(1/\epsilon))$ input–output training pairs with high probability, for any $0<\epsilon<1$. The quantity $0<\varGamma_\epsilon\le 1$ characterizes the quality of the training dataset. Along the way, we extend the randomized singular value decomposition algorithm for learning matrices to Hilbert–Schmidt operators and characterize the quality of covariance kernels for PDE learning.
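The abstract's extension to Hilbert–Schmidt operators builds on the standard randomized SVD for matrices. The following is a minimal sketch of that matrix-case algorithm (in the Halko–Martinsson–Tropp style), not the paper's operator-valued scheme; the function name and the oversampling parameter are illustrative choices, and the random test matrix plays the role of the random input functions in the PDE-learning setting.

```python
import numpy as np

def randomized_svd(A, rank, oversample=10, rng=None):
    """Sketch of the randomized SVD for a matrix A (illustrative only)."""
    rng = np.random.default_rng(rng)
    m, n = A.shape
    # Gaussian test matrix: its columns are random "inputs"; the products
    # A @ Omega are the corresponding "outputs" sampled from A.
    Omega = rng.standard_normal((n, rank + oversample))
    Y = A @ Omega
    # Orthonormal basis for the sampled range of A.
    Q, _ = np.linalg.qr(Y)
    # Project A onto that basis and take a small deterministic SVD.
    B = Q.T @ A
    U_tilde, s, Vt = np.linalg.svd(B, full_matrices=False)
    U = Q @ U_tilde
    return U[:, :rank], s[:rank], Vt[:rank, :]
```

With enough oversampling, the returned factors recover a near-best rank-`rank` approximation of `A` with high probability; the paper's contribution is a rigorous analogue of this guarantee when `A` is replaced by the solution operator of an elliptic PDE and the Gaussian columns by draws from a covariance kernel.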

Keywords

Data-driven discovery of PDEs, Randomized SVD, Green's function, Hilbert-Schmidt operators, Low-rank approximation

Journal Title

Foundations of Computational Mathematics

Journal ISSN

1615-3375
1615-3383

Publisher

Springer Science and Business Media LLC