CAMBRIDGE MONOGRAPHS ON APPLIED AND COMPUTATIONAL MATHEMATICS

Series Editors

M. Ablowitz, S. Davis, J. Hinch, A. Iserles, J. Ockendon, P. Olver

28 Multiscale Methods for Fredholm Integral Equations

Downloaded from http://www.cambridge.org/core, Lund University Libraries, on 17 Oct 2016, subject to the Cambridge Core terms of use, available at http://www.cambridge.org/core/terms. http://dx.doi.org/10.1017/CBO9781316216637

The Cambridge Monographs on Applied and Computational Mathematics series reflects the crucial role of mathematical and computational techniques in contemporary science. The series publishes expositions on all aspects of applicable and numerical mathematics, with an emphasis on new developments in this fast-moving area of research.

State-of-the-art methods and algorithms, as well as modern mathematical descriptions of physical and mechanical ideas, are presented in a manner suited to graduate research students and professionals alike. Sound pedagogical presentation is a prerequisite. It is intended that books in the series will serve to inform a new generation of researchers.

A complete list of books in the series can be found at www.cambridge.org/mathematics. Recent titles include the following:

14. Simulating Hamiltonian dynamics, Benedict Leimkuhler & Sebastian Reich
15. Collocation methods for Volterra integral and related functional differential equations, Hermann Brunner
16. Topology for computing, Afra J. Zomorodian
17. Scattered data approximation, Holger Wendland
18. Modern computer arithmetic, Richard Brent & Paul Zimmermann
19. Matrix preconditioning techniques and applications, Ke Chen
20. Greedy approximation, Vladimir Temlyakov
21. Spectral methods for time-dependent problems, Jan Hesthaven, Sigal Gottlieb & David Gottlieb
22. The mathematical foundations of mixing, Rob Sturman, Julio M. Ottino & Stephen Wiggins
23. Curve and surface reconstruction, Tamal K. Dey
24. Learning theory, Felipe Cucker & Ding Xuan Zhou
25. Algebraic geometry and statistical learning theory, Sumio Watanabe
26. A practical guide to the invariant calculus, Elizabeth Louise Mansfield
27. Difference equations by differential equation methods, Peter E. Hydon
28. Multiscale methods for Fredholm integral equations, Zhongying Chen, Charles A. Micchelli & Yuesheng Xu
29. Partial differential equation methods for image inpainting, Carola-Bibiane Schönlieb


Multiscale Methods for Fredholm Integral Equations

ZHONGYING CHEN
Sun Yat-Sen University, Guangzhou, China

CHARLES A. MICCHELLI
State University of New York, Albany

YUESHENG XU
Sun Yat-Sen University, Guangzhou, China


University Printing House, Cambridge CB2 8BS, United Kingdom

Cambridge University Press is part of the University of Cambridge.

It furthers the University's mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107103474

© Zhongying Chen, Charles A. Micchelli and Yuesheng Xu 2015

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2015

A catalogue record for this publication is available from the British Library

Library of Congress Cataloguing in Publication data
Chen, Zhongying, 1946–
Multiscale methods for Fredholm integral equations / Zhongying Chen, Sun Yat-Sen University, Guangzhou, China; Charles A. Micchelli, State University of New York, Albany; Yuesheng Xu, Sun Yat-Sen University, Guangzhou, China.
pages cm – (The Cambridge monographs on applied and computational mathematics series)
Includes bibliographical references and index.
ISBN 978-1-107-10347-4 (Hardback)
1. Fredholm equations. 2. Integral equations. I. Micchelli, Charles A. II. Xu, Yuesheng. III. Title.
QA431.C4634 2015
515′.45–dc23 2014050239

ISBN 978-1-107-10347-4 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


Contents

Preface page ix
List of symbols xi

Introduction 1

1 A review of the Fredholm approach 5
1.1 Introduction 5
1.2 Second-kind matrix Fredholm equations 7
1.3 Fredholm functions 11
1.4 Resolvent kernels 17
1.5 Fredholm determinants 20
1.6 Eigenvalue estimates and a trace formula 24
1.7 Bibliographical remarks 31

2 Fredholm equations and projection theory 32
2.1 Fredholm integral equations 32
2.2 General theory of projection methods 53
2.3 Bibliographical remarks 78

3 Conventional numerical methods 80
3.1 Degenerate kernel methods 80
3.2 Quadrature methods 86
3.3 Galerkin methods 94
3.4 Collocation methods 105
3.5 Petrov–Galerkin methods 112
3.6 Bibliographical remarks 142

4 Multiscale basis functions 144
4.1 Multiscale functions on the unit interval 145
4.2 Multiscale partitions 153


4.3 Multiscale orthogonal bases 166
4.4 Refinable sets and set wavelets 169
4.5 Multiscale interpolating bases 184
4.6 Bibliographical remarks 197

5 Multiscale Galerkin methods 199
5.1 The multiscale Galerkin method 200
5.2 The fast multiscale Galerkin method 205
5.3 Theoretical analysis 209
5.4 Bibliographical remarks 221

6 Multiscale Petrov–Galerkin methods 223
6.1 Fast multiscale Petrov–Galerkin methods 223
6.2 Discrete multiscale Petrov–Galerkin methods 231
6.3 Bibliographical remarks 263

7 Multiscale collocation methods 265
7.1 Multiscale basis functions and collocation functionals 266
7.2 Multiscale collocation methods 281
7.3 Analysis of the truncation scheme 288
7.4 Bibliographical remarks 298

8 Numerical integrations and error control 300
8.1 Discrete systems of the multiscale collocation method 300
8.2 Quadrature rules with polynomial order of accuracy 302
8.3 Quadrature rules with exponential order of accuracy 314
8.4 Numerical experiments 318
8.5 Bibliographical remarks 321

9 Fast solvers for discrete systems 322
9.1 Multilevel augmentation methods 322
9.2 Multilevel iteration methods 347
9.3 Bibliographical remarks 354

10 Multiscale methods for nonlinear integral equations 356
10.1 Critical issues in solving nonlinear equations 356
10.2 Multiscale methods for the Hammerstein equation 359
10.3 Multiscale methods for nonlinear boundary integral equations 377
10.4 Numerical experiments 402
10.5 Bibliographical remarks 413


11 Multiscale methods for ill-posed integral equations 416
11.1 Numerical solutions of regularization problems 416
11.2 Multiscale Galerkin methods via the Lavrentiev regularization 420
11.3 Multiscale collocation methods via the Tikhonov regularization 438
11.4 Numerical experiments 456
11.5 Bibliographical remarks 463

12 Eigen-problems of weakly singular integral operators 465
12.1 Introduction 465
12.2 An abstract framework 466
12.3 A multiscale collocation method 474
12.4 Analysis of the fast algorithm 478
12.5 A power iteration algorithm 483
12.6 A numerical example 484
12.7 Bibliographical remarks 487

Appendix Basic results from functional analysis 488
A.1 Metric spaces 488
A.2 Linear operator theory 494
A.3 Invariant sets 502

References 519
Index 534


Preface

Fredholm equations arise in many areas of science and engineering. Consequently, they occupy a central topic in applied mathematics. Traditional numerical methods, developed during the period prior to the mid-1980s, include mainly quadrature, collocation and Galerkin methods. Unfortunately, all of these approaches suffer from the fact that the resulting discretization matrices are dense; that is, they have a large number of nonzero entries. This bottleneck leads to significant computational costs for the solution of the corresponding integral equations.

The recent appearance of wavelets as a new computational tool in applied mathematics has given a new direction to the area of the numerical solution of Fredholm integral equations. Shortly after their introduction, it was discovered that using a wavelet basis for a singular integral equation led to a numerically sparse matrix discretization. This observation, combined with a truncation strategy, then led to a fast numerical solution of this class of integral equations.

Approximately 20 years ago, the authors of this book began a systematic study of the construction of wavelet bases suitable for solving Fredholm integral equations, and explored their usefulness for developing fast multiscale Galerkin, Petrov–Galerkin and collocation methods. The purpose of this book is to provide a self-contained account of these ideas, as well as some traditional material on Fredholm equations, to make this book accessible to as large an audience as possible.

The goal of this book is twofold. It can be used as a reference text for practitioners who need to solve integral equations numerically and wish to use the new techniques presented here. At the same time, portions of this book can be used as a modern text treating the subject of the numerical solution of integral equations, suitable for upper-level undergraduate students as well as graduate students. Specifically, the first five chapters of this book are designed for a one-semester course which provides students with a solid background in integral equations and fast multiscale methods for their numerical solutions.

An early version of this book was used in a summer school on applied mathematics sponsored by the Ministry of Education of the People's Republic of China. Subsequently, the authors used revised versions of this book for courses on integral equations at our respective institutions. These teaching experiences led us to make many changes in presentation resulting from our interactions with our many students.

We are indebted to our many colleagues who gave freely of their time and advice concerning the material in this book, and whose expertise on the subject of the numerical solution of Fredholm equations collectively far exceeds ours. We mention here that a preliminary version of the book was provided to Kendall Atkinson, Uday Banerjee, Hermann Brunner, Yanzhao Cao, Wolfgang Dahmen, Leslie Greengard, Weimin Han, George Hsiao, Hideaki Kaneko, Rainer Kress, Wayne Lawton, Qun Lin, Paul Martin, Richard Noren, Sergei Pereverzyev, Reinhold Schneider, Johannes Tausch, Ezio Venturino and Aihui Zhou. We are grateful to them all for their constructive comments, which improved our presentation.

Our special thanks go to Kendall Atkinson for his encouragement and support in writing this book. We would also like to thank our colleagues at Sun Yat-Sen University, including Bin Wu, Sirui Cheng and Xianglin Chen, as well as the graduate student Jieyang Chen, for their assistance in preparing this book.

Finally, we are deeply indebted to our families for their understanding, patience and continued support throughout our efforts to complete this project.


Symbols

a.e.  almost everywhere §1.1
A*  adjoint operator of A §2.1.1
A[i, j]  minor of matrix A with lattice vectors i and j §1.2
B(X, Y)  normed linear space of all bounded linear operators from X into Y §2.1.1
C  set of complex numbers §1.1
C(D)  linear space of all real-valued continuous functions on D §2.1
C^m(D)  linear space of all real-valued m-times continuously differentiable functions on D §2.1
C^∞(D)  linear space of all real-valued infinitely differentiable functions on D §2.1
C_0(D)  subspace of C(D) consisting of functions with support contained inside D §A.1
C_0^∞(D)  subspace of C^∞(D) consisting of functions with support contained inside D and bounded §A.1
c_σ(D)  positive constant defined in §2.1.2
card T  cardinality of T §2.2.2
cond(A)  condition number of A §2.2.3
D(λ)  complex-valued function at λ defined by (1.18)
det(A)  determinant of matrix A §1.2
diag(·)  diagonal matrix §1.2
diam(S)  diameter of set S §1.1
H^m(D)  Sobolev space §A.1
H^m_0(D)  Sobolev space §A.1
L^p(D)  linear space of all real-valued pth-power integrable functions (1 ≤ p < ∞) §2.1


L^∞(D)  linear space of all real-valued essentially bounded measurable functions §2.1
m(D)  positive constant defined in §2.1.2
N  set of positive integers {1, 2, 3, …} §1.1
N_0  set of integers {0, 1, 2, …} §1.1
N_n  set of positive integers {1, 2, …, n} for n ∈ N §1.1
P_A  characteristic polynomial of matrix A §1.2
R  set of real numbers §1.1
R^d  d-dimensional Euclidean space §1.1
Re f  real part of f §1.6
r_q(A)  minor equation of A (1.4)
R_λ  resolvent kernel §1.4
rank A  rank of matrix A §3.3.5
s(n)  dimension of space X_n §3.3
span S  span of set S §3.3.1
U  index set {(i, j) : i ∈ N_0, j ∈ Z_{w(i)}} §4.1
U_n  index set {(i, j) : i ∈ Z_{n+1}, j ∈ Z_{w(i)}} §4.5.1
vol(S)  volume of set S §1.1
W^{m,p}(D)  Sobolev space §A.1
W^{m,p}_0(D)  Sobolev space §A.1
w(n)  dimension of space W_n §4.1
Z  set of integers {0, ±1, ±2, …} §1.1
Z_n  set of integers {0, 1, 2, …, n − 1} for n ∈ N §1.1
Γ(·)  gamma function §2.1.2
∇  gradient operator §2.1.3
Δ  Laplace operator §2.1.3
ρ(T)  resolvent set of operator T §1.1.2
σ(T)  spectrum of operator T §1.1.2
ω_{d−1}  surface area of the unit sphere in R^d §2.1.3
ω(K, h)  modulus of continuity of K §1.3
!  factorial, for example in (1.4)
∪^⊥  union of orthogonal sets §4.1
⊕  direct sum of spaces §4.1
⊗  tensor product (direct product) §1.3
∘  functional composition §3.3.1
|α|  sum of components of lattice vector α §2.1
|s − t|  Euclidean distance between s and t §2.1.2


‖A‖  norm of operator A §2.1.1
‖·‖_{m,∞}  norm of C^m(D) §2.1
‖·‖_p  norm of L^p(D) (1 ≤ p ≤ ∞) §2.1
(·, ·)  inner product §2.1
⟨·, ·⟩  value of a linear functional at a function §2.1.1
∼  same order §5.1.1
→^s  converges pointwise §2.1.1
→^u  converges uniformly §2.1.1


Introduction

The equations we consider in this book are primarily Fredholm integral equations of the second kind on bounded domains in the Euclidean space. These equations are used as mathematical models for a multitude of physical problems and cover many important applications, such as radiosity equations for realistic image synthesis [18, 85, 244] and especially boundary integral equations [12, 177, 203], which themselves occur as reformulations of other problems, typically originating as partial differential equations. In practice, Fredholm integral equations are solved numerically using piecewise polynomial collocation or Galerkin methods, and when the order of the coefficient matrix (which is typically full) is large, the computational cost of generating the matrix as well as solving the corresponding linear system is large. Therefore, to enhance the range of applicability of the Fredholm equation methodology, it is critical to provide alternate algorithms which are fast, efficient and accurate. This book is concerned with this challenge: designing fast multiscale methods for the numerical solution of Fredholm integral equations.

The development and use of multiscale methods for solving integral equations is a subject of recent intense study. The history of fast multiscale solutions of integral equations began with the introduction of multiscale Galerkin (Petrov–Galerkin) methods for solving integral equations, as presented in [28, 64, 68, 88, 94, 95, 202, 260, 261] and the references cited therein. Most noteworthy is the discovery in [28] that the representation of a singular integral operator by compactly supported orthonormal wavelets produces numerically sparse matrices. In other words, most of their entries are so small in absolute value that, to some degree of precision, they can be neglected without affecting the overall accuracy of the approximation. Later, the papers [94, 95] studied Petrov–Galerkin methods using periodic multiscale bases constructed from refinement equations for periodic elliptic pseudodifferential equations, and in this restricted environment stability, convergence and matrix compression were investigated. For a first-kind boundary integral equation, a truncation strategy for the Galerkin method using spline-based multiscale basis functions of low degree was proposed in [260]. Also, in [261], for elliptic pseudodifferential equations of order zero on a three-dimensional manifold, a Galerkin method using discontinuous piecewise linear multiscale basis functions on triangles was studied.

In another direction, a general construction of multidimensional discontinuous orthogonal and bi-orthogonal wavelets on invariant sets was presented in [200, 201]. Invariant sets include, among others, the important cases of simplices and cubes, and, in the two-dimensional case, L-shaped domains. A similar recursive structure was explored in [65] for multiscale function representation and approximation constructed by interpolation on invariant sets. In this regard, an essential advantage of this approach is the existence of efficient schemes for generating recursively multilevel partitions of invariant sets and their associated multiscale functions. All of these methods even extend to domains which are a finite union of invariant sets, thereby significantly expanding the range of their applicability. Therefore, the constructions given in [65, 200, 201] led to a wide variety of multiscale basis functions which, on the one hand, have a desirable simple recursive structure and, on the other hand, can be used in diverse areas in which the Fredholm methodology is applied.

Subsequently, the papers [64, 68, 202] developed multiscale piecewise polynomial Galerkin, Petrov–Galerkin and discrete multiscale Petrov–Galerkin methods. An important advantage of multiscale piecewise polynomials is that their closed-form expressions are very convenient for computation. Moreover, they can easily be related to standard bases used in the conventional numerical methods, thereby providing an advantage for theoretical analysis as well. Among conventional numerical methods for solving integral equations, the collocation method has received the most favorable attention in the engineering community due to its lower computational cost in generating the coefficient matrix of the corresponding discrete equations. In comparison, the implementation of the Galerkin method requires much more computational effort for the evaluation of integrals (see, for example, [19, 77] for a discussion of this point). Motivated by this issue, [69] proposed and analyzed a fast collocation algorithm for solving general multidimensional integral equations. Moreover, a matrix truncation strategy was introduced there by making a careful choice of basis functions and collocation functionals, the end result being fast multiscale algorithms for solving the integral equations.

The development of stable, efficient and fast numerical algorithms for solving operator equations, including differential equations and integral equations, is a main focus of research in numerical analysis and scientific computation, since such algorithms are particularly important for large-scale computation. We review the three main steps in solving an operator equation. The first is at the level of approximation theory: here we must choose appropriate subspaces and suitable bases for them. The second step is to discretize the operator equations using these bases and to analyze the convergence properties of the approximate solutions. The end result of this step of processing is a discrete linear system, and its construction is considered a main task for the numerical solution of operator equations. The third step employs methods of numerical linear algebra to design an efficient solver for the discrete linear system. The ultimate goal is, of course, to solve the discrete linear system efficiently and obtain an accurate approximate solution to the original operator equation. Theoretical considerations and practical implementations in the numerical solution of operator equations show that these three steps of processing are closely related. Therefore, designing efficient algorithms for the discrete linear system should take into consideration the choice of subspaces and their bases, the methodologies of discretization of the operator equations, and the specific characteristics and advantages of the numerical solvers used to solve the resulting discrete linear system. In this book we describe how these three steps are integrated in a multiscale environment, and thereby achieve our goal of providing a wide selection of fast and accurate algorithms for second-kind integral equations. We also describe work in progress addressing related issues of eigenvalue and eigenfunction computation, as well as the solution of Fredholm equations of the first kind.

This book is organized into 12 chapters plus an appendix. Chapter 1 is devoted to a review of the Fredholm approach to solving an integral equation of the second kind. In Chapter 2 we introduce essential concepts from Fredholm integral equations of the second kind and describe a general setup of projection methods for solving operator equations, which will be used in later chapters. The purpose of Chapter 3 is to describe conventional numerical methods for solving Fredholm integral equations of the second kind, including the degenerate kernel method, the quadrature method, the Galerkin method, the Petrov–Galerkin method and the collocation method. In Chapter 4, a general construction of multiscale bases of piecewise polynomial spaces, including multiscale orthogonal and interpolating bases, is presented. Chapters 5, 6 and 7 use the material from Chapter 4 to construct multiscale Galerkin, Petrov–Galerkin and collocation methods. We study the discretization schemes resulting from these methods, propose truncation strategies for building fast and accurate algorithms, and give a complete analysis of the order of convergence, computational complexity, stability and condition numbers for the truncated schemes. In Chapter 8, two types of quadrature rule for the numerical integration required to generate the coefficient matrix are introduced, and error control strategies are designed so that the quadrature errors will neither ruin the overall convergence order nor increase the overall computational complexity of the original multiscale methods. The goal of Chapter 9 is to investigate fast solvers for the discrete linear systems resulting from multiscale methods. We introduce multilevel augmentation methods and multilevel iteration methods based on direct sum decompositions of the range and domain of the operator equation. In Chapters 10, 11 and 12, the fast algorithms are applied to solving nonlinear integral equations of the second kind, ill-posed integral equations of the first kind, and eigen-problems of compact integral operators, respectively. We summarize in the Appendix some of the standard concepts and results from functional analysis in a form which is used throughout the book. The appendix provides the reader with a convenient source of the background material needed to follow the ideas and arguments presented in other chapters of this book.

Most of the material in this book can only be found in research papers; this is the first time that it has been assembled into a book. Although this book is pronouncedly a research monograph, selected material from the initial chapters can be used in a semester course on numerical methods for integral equations which presents the multiscale point of view.


1

A review of the Fredholm approach

In this chapter we pay homage to Ivar Fredholm (April 7, 1866 – August 17, 1927) and review his approach to solving an integral equation of the second kind. The methods employed in this chapter are classical and differ from the approach taken in the rest of the book. We include it here because those readers inexperienced in integral equations should be familiar with these important ideas. The basic tools of matrix theory and some complex analysis are needed, and we shall provide a reasonably self-contained discussion of the required material.

1.1 Introduction

We start by introducing the notation that will be used throughout this book. Let C, R, Z and N denote, respectively, the set of complex numbers, the set of real numbers, the set of integers and the set of positive integers. We also let N_0 = {0} ∪ N. For the purpose of enumerating a nonempty finite set of objects, we use the sets N_d = {1, 2, …, d} and Z_d = {0, 1, …, d − 1}, both of which consist of d distinct integers. For d ∈ N, let R^d denote the d-dimensional Euclidean space and Ω a subset of R^d. By C(Ω) we mean the linear space of all continuous real-valued functions defined on Ω. We usually denote matrices or vectors over R in boldface, for example A = [A_{ij} : i, j ∈ N_d] ∈ R^{d×d} and u = [u_j : j ∈ N_d] ∈ R^d. When the vector has all integer coordinates, that is, u ∈ Z^d, we sometimes call it a lattice vector. Moreover, we usually denote integral operators by calligraphic letters. In particular, the integral operator with a kernel K will be denoted by 𝒦; that is, for the kernel K defined on Ω × Ω and the function u defined on Ω, we define

$$(\mathcal{K}u)(s) = \int_\Omega K(s,t)\,u(t)\,dt, \qquad s \in \Omega.$$


The most direct approach to solving a second-kind integral equation merely replaces integrals by sums, and thereby obtains a linear system of equations whose solution approximates the solution of the original equation. The study of the resulting linear system of equations leads naturally to the important notions of the Fredholm function and determinant, which remain a central tool in the theory of second-kind integral equations; see, for example, [183, 253]. We consider this direct approach when we are given a continuous kernel K ∈ C(Ω × Ω) on a compact subset Ω of R^d with positive Borel measure, a continuous function f ∈ C(Ω), and a nonzero complex number λ ∈ C. The task is to find a function u ∈ C(Ω) such that, for s ∈ Ω,

$$u(s) - \lambda \int_\Omega K(s,t)\,u(t)\,dt = f(s). \tag{1.1}$$

To this end, for each h > 0, we partition Ω into nonempty compact subsets Ω_i, i ∈ N_n,

$$\Omega = \bigcup_{i \in \mathbb{N}_n} \Omega_i,$$

such that different subsets have no overlapping interior and

$$\operatorname{diam}\Omega_i = \max\{|x - y| : x, y \in \Omega_i\} \le h,$$

where |x| is the 2-norm of the vector x ∈ R^d. This partition can be constructed by first putting a large "box" around the set Ω and then decomposing this box into cubes, each of which has diameter less than or equal to h. The sets Ω_i are then formed by intersecting the set Ω with the cubes, where we discard sets of zero Borel measure. Therefore the partition of Ω constructed in this manner is defined a.e. Next we choose any finite set of points T = {t_i : i ∈ N_n} such that, for each i ∈ N_n, we have t_i ∈ Ω_i. With these points we now replace our integral equation (1.1) with a linear system of equations. Specifically, we choose the number ρ = −λ and the n × n matrix A defined by

$$\mathbf{A} = [\operatorname{vol}(\Omega_j)\,K(t_i, t_j) : i, j \in \mathbb{N}_n],$$

where vol(Ω_j) denotes the volume of the set Ω_j, and replace (1.1) with the system of linear equations

$$(\mathbf{I} + \rho\mathbf{A})\mathbf{u} = \mathbf{f}. \tag{1.2}$$

Here f is the vector obtained by evaluating the function f on the set T. Of course, the point of view we take here is that the vector u ∈ R^n which solves equation (1.2) is an approximation to the function u on the set T. Therefore the problem of determining the function u is replaced by the simpler one of numerically solving for the vector u when h is small. Certainly,


an important role is played by the determinant of the coefficient matrix of the linear system (1.2). Its properties, especially as h → 0+, will be our main concern for a significant part of this chapter. We start by studying the determinant of the coefficient matrix of the linear system (1.2), and then derive a formula for the entries of the inverse of the matrix I + ρA in terms of the matrix A.
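As a concrete illustration of the discretization leading to (1.2) (this example is ours, not from the book), the sketch below takes Ω = [0, 1] with a uniform midpoint partition, the separable kernel K(s, t) = st, and data f(s) = s, for which (1.1) has the exact solution u(s) = s/(1 − λ/3). The function names and all numerical choices are illustrative assumptions.

```python
# Sketch of (1.2) on Omega = [0, 1]: partition into n subintervals of
# width h = 1/n, take t_i as midpoints, set A = [vol(Omega_j) K(t_i, t_j)]
# and rho = -lambda, then solve (I + rho A) u = f.

def solve_linear(M, b):
    """Solve M x = b by Gaussian elimination with partial pivoting."""
    n = len(b)
    M = [row[:] for row in M]
    b = b[:]
    for k in range(n):
        p = max(range(k, n), key=lambda r: abs(M[r][k]))
        M[k], M[p], b[k], b[p] = M[p], M[k], b[p], b[k]
        for r in range(k + 1, n):
            m = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= m * M[k][c]
            b[r] -= m * b[k]
    x = [0.0] * n
    for k in range(n - 1, -1, -1):
        s = sum(M[k][c] * x[c] for c in range(k + 1, n))
        x[k] = (b[k] - s) / M[k][k]
    return x

def solve_second_kind(K, f, lam, n=100):
    """Approximate the solution of (1.1) on [0, 1] via the system (1.2)."""
    h = 1.0 / n
    t = [(i + 0.5) * h for i in range(n)]          # points t_i in Omega_i
    rho = -lam
    M = [[(1.0 if i == j else 0.0) + rho * h * K(t[i], t[j])
          for j in range(n)] for i in range(n)]    # matrix I + rho A
    return t, solve_linear(M, [f(ti) for ti in t])

if __name__ == "__main__":
    lam = 0.5
    t, u = solve_second_kind(lambda s, t: s * t, lambda s: s, lam)
    # For K(s,t) = st and f(s) = s the exact solution is u(s) = s/(1 - lam/3).
    err = max(abs(ui - ti / (1.0 - lam / 3.0)) for ti, ui in zip(t, u))
    print("max error:", err)
```

Because the midpoint rule has O(h²) quadrature error, the discrete solution u agrees with the exact solution to high accuracy already for moderate n, illustrating the remark that u approximates the function u on the set T when h is small.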

1.2 Second-kind matrix Fredholm equations

We define the minors of an n × n matrix. If A = [A_{ij} : i, j ∈ N_n] is an n × n matrix, q is a non-negative integer in Z_{n+1}, and i = [i_l : l ∈ N_q], j = [j_l : l ∈ N_q] are lattice vectors in N_n^q, we define the corresponding minor by

$$\mathbf{A}[\mathbf{i}, \mathbf{j}] = \det[A_{i_r j_s} : r, s \in \mathbb{N}_q].$$

Sometimes the more elaborate notation

$$\mathbf{A}\begin{pmatrix} i_1 & i_2 & \cdots & i_q \\ j_1 & j_2 & \cdots & j_q \end{pmatrix} \tag{1.3}$$

is used for A[i, j]. When i = j, that is, for a principal minor of A, we use the simplified notation A[i] in place of A[i, i]. For a positive integer q ∈ N_n we set

$$r_q(\mathbf{A}) = \frac{1}{q!} \sum_{\mathbf{i} \in \mathbb{N}_n^q} \mathbf{A}[\mathbf{i}], \tag{1.4}$$

and we also choose r_0(A) = 1.

Lemma 1.1 If $\mathbf{A}$ is an $n \times n$ matrix and $\rho \in \mathbb{C}$, then

$$\det(\mathbf{I} + \rho\mathbf{A}) = \sum_{q \in \mathbb{Z}_{n+1}} r_q(\mathbf{A})\rho^q. \qquad (1.5)$$

Before proving this lemma we make two remarks.

Remark 1.2 Using the extended notation for a minor as indicated in (1.3), we see that equation (1.4) is equivalent to the formula

$$r_q(\mathbf{A}) = \frac{1}{q!} \sum_{[i_l : l \in \mathbb{N}_q] \in \mathbb{N}_n^q} \mathbf{A}\begin{pmatrix} i_1 & \cdots & i_q \\ i_1 & \cdots & i_q \end{pmatrix}. \qquad (1.6)$$

Certainly, if any two components of the vector $\mathbf{i} = [i_l : l \in \mathbb{N}_q]$ are equal, then the corresponding minor has a repeated row (and column) and so is zero. These terms may be neglected. Moreover, any permutation of the components of the vector $\mathbf{i}$ effects both a row and a column exchange of the determinant appearing in (1.6), and so does not affect the value of the determinant. Since there are $q!$ such permutations, we get that

$$r_q(\mathbf{A}) = \sum_{1 \le i_1 < i_2 < \cdots < i_q \le n} \mathbf{A}\begin{pmatrix} i_1 & \cdots & i_q \\ i_1 & \cdots & i_q \end{pmatrix}. \qquad (1.7)$$

Remark 1.3 If the characteristic values of the matrix $\mathbf{A}$ are denoted by $\lambda_j$, $j \in \mathbb{N}_n$, then

$$r_q(\mathbf{A}) = \sum_{1 \le i_1 < i_2 < \cdots < i_q \le n} \lambda_{i_1}\lambda_{i_2}\cdots\lambda_{i_q}. \qquad (1.8)$$

The right-hand side of this equation is an elementary symmetric function of the eigenvalues of $\mathbf{A}$, which is invariant under a similarity transformation of the matrix $\mathbf{A}$. This fact will be the basis of the proof of Lemma 1.1 presented below.

We next present the proof of Lemma 1.1.

Proof By Schur's upper-triangular factorization theorem (see, for example, [142], p. 79), the matrix $\mathbf{A}$ can be factored in the form

$$\mathbf{A} = \mathbf{P}^{-1}\mathbf{T}\mathbf{P}, \qquad (1.9)$$

where $\mathbf{P}$ is an orthogonal matrix and $\mathbf{T}$ is an upper-triangular matrix whose diagonal entries are the eigenvalues of $\mathbf{A}$ (chosen in any prescribed order). For an upper-triangular matrix $\mathbf{T}$, we observe from (1.8) that

$$\det(\mathbf{I} + \rho\mathbf{T}) = \prod_{j \in \mathbb{N}_n} (1 + \rho\lambda_j) = \sum_{q \in \mathbb{Z}_{n+1}} \rho^q \sum_{1 \le j_1 < j_2 < \cdots < j_q \le n} \lambda_{j_1}\lambda_{j_2}\cdots\lambda_{j_q} = \sum_{q \in \mathbb{Z}_{n+1}} r_q(\mathbf{T})\rho^q,$$

thereby verifying that equation (1.5) is valid, at least for an upper-triangular matrix. For a general matrix we use the reduction to the upper-triangular case by an orthogonal similarity (1.9). Since all determinants in equations (1.5) and (1.6) are unchanged under an orthogonal similarity, this comment proves the general case.
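Lemma 1.1 is easy to test numerically. The sketch below, with an arbitrary random $4 \times 4$ test matrix, compares $\det(\mathbf{I} + \rho\mathbf{A})$ with the sum $\sum_q r_q(\mathbf{A})\rho^q$, computing $r_q(\mathbf{A})$ as the sum of principal minors over increasing index tuples, as in (1.7).

```python
import itertools
import numpy as np

# Numerical check of Lemma 1.1 (a sketch; the 4x4 matrix is an arbitrary test case):
# det(I + rho*A) equals sum_q r_q(A) * rho^q, where r_q(A) is the sum of the
# principal q x q minors of A over increasing index tuples, as in (1.7), r_0(A) = 1.
rng = np.random.default_rng(0)
n = 4
A = rng.standard_normal((n, n))

def r(q, A):
    if q == 0:
        return 1.0
    # sum of principal minors A[i] over 1 <= i_1 < ... < i_q <= n
    return sum(np.linalg.det(A[np.ix_(idx, idx)])
               for idx in itertools.combinations(range(n), q))

for rho in [0.0, 0.7, -1.3, 2.0]:
    lhs = np.linalg.det(np.eye(n) + rho * A)
    rhs = sum(r(q, A) * rho**q for q in range(n + 1))
    assert abs(lhs - rhs) < 1e-8 * max(1.0, abs(lhs))
```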

We now add a more difficult computation, which provides an expansion of the elements of the matrix $(\mathbf{I} + \rho\mathbf{A})^{-1}$ as a rational function of $\rho$. To this end, we introduce some needed constants. We define, for any $k, l \in \mathbb{N}_n$ and $q \in \mathbb{Z}_{n+1}$, the constants

$$u_{lkq} = \frac{1}{q!} \sum_{\mathbf{i} \in \mathbb{N}_n^q} \mathbf{A}\begin{bmatrix} l & \mathbf{i} \\ k & \mathbf{i} \end{bmatrix}. \qquad (1.10)$$

In (1.10) there are (not necessarily principal) minors of order $q + 1$. Moreover, when $q = n - 1$ and $k, l \in \mathbb{N}_n$ with $k \ne l$, we have that $u_{lkq} = 0$, since $\mathbb{N}_n$ has only $n$ distinct elements. Also, for the same reason, $u_{lkn} = 0$ for all $k, l \in \mathbb{N}_n$.

We shall relate these constants. But first we introduce, for $k, l \in \mathbb{N}_n$, the polynomials $U_{lk}$ defined at $\rho$ to be

$$U_{lk}(\rho) = \sum_{q \in \mathbb{Z}_{n+1}} u_{lkq}\rho^q.$$

Now, to relate all of these quantities, we start with the minor

$$\mathbf{A}\begin{pmatrix} l & i_1 & \cdots & i_q \\ k & i_1 & \cdots & i_q \end{pmatrix}, \qquad (1.11)$$

where $l, k \in \mathbb{N}_n$, and expand it by the Laplace expansion by minors across its first row to obtain the formula

$$\mathbf{A}\begin{pmatrix} l & i_1 & \cdots & i_q \\ k & i_1 & \cdots & i_q \end{pmatrix} = A_{lk}\,\mathbf{A}\begin{pmatrix} i_1 & \cdots & i_q \\ i_1 & \cdots & i_q \end{pmatrix} + \sum_{m \in \mathbb{N}_q} (-1)^m A_{l i_m}\,\mathbf{A}\begin{pmatrix} i_1 & i_2 & \cdots & i_q \\ k & i_1 & \cdots & \widehat{i_m} & \cdots & i_q \end{pmatrix}. \qquad (1.12)$$

The symbol $\widehat{i_m}$ appearing in the minor above is to be interpreted to mean that the $i_m$th column of $\mathbf{A}$ is deleted in that minor. We now sum both sides of this formula over all integers $i_1, i_2, \ldots, i_q$ in $\mathbb{N}_n$ and interchange the summations of the second term on the right-hand side of the resulting equation to yield the formula

$$\sum_{[i_p : p \in \mathbb{N}_q] \in \mathbb{N}_n^q} \mathbf{A}\begin{pmatrix} l & i_1 & \cdots & i_q \\ k & i_1 & \cdots & i_q \end{pmatrix} = \sum_{[i_p : p \in \mathbb{N}_q] \in \mathbb{N}_n^q} A_{lk}\,\mathbf{A}\begin{pmatrix} i_1 & \cdots & i_q \\ i_1 & \cdots & i_q \end{pmatrix} + \sum_{m \in \mathbb{N}_q} \sum_{[i_p : p \in \mathbb{N}_q] \in \mathbb{N}_n^q} (-1)^m A_{l i_m}\,\mathbf{A}\begin{pmatrix} i_1 & i_2 & \cdots & i_q \\ k & i_1 & \cdots & \widehat{i_m} & \cdots & i_q \end{pmatrix}. \qquad (1.13)$$

The first term on the right-hand side of equation (1.13) is clearly $A_{lk}\,q!\,r_q(\mathbf{A})$, which follows from our definition (1.6). The value for the second term requires more explanation.


For the second term, we point out that there are two sums over $i_m \in \mathbb{N}_n$. The outer sum already appears in the right-hand side of equation (1.13), and the inner one appears in the sum over all indices $i_1, i_2, \ldots, i_q$ in $\mathbb{N}_n$. In the inner sum we first fix $i_m$ and then sum over all the other indices $i_1, \ldots, i_{m-1}, i_{m+1}, \ldots, i_q \in \mathbb{N}_n$. This leads us to the expression

$$\sum_{[i_1, \ldots, i_{m-1}, i_{m+1}, \ldots, i_q] \in \mathbb{N}_n^{q-1}} (-1)^m \mathbf{A}\begin{pmatrix} i_1 & \cdots & i_q \\ k & i_1 & \cdots & i_{m-1} & i_{m+1} & \cdots & i_q \end{pmatrix}.$$

We now locate the $i_m$th row in the minor of $\mathbf{A}$ and move it forward to the first row. This requires $m - 1$ row exchanges and gives us the expression

$$-\sum_{[i_1, \ldots, i_{m-1}, i_{m+1}, \ldots, i_q] \in \mathbb{N}_n^{q-1}} \mathbf{A}\begin{pmatrix} i_m & i_1 & \cdots & i_{m-1} & i_{m+1} & \cdots & i_q \\ k & i_1 & \cdots & i_{m-1} & i_{m+1} & \cdots & i_q \end{pmatrix}.$$

Next we multiply this expression by $A_{l i_m}$ and compute the (first) sum of it over $i_m \in \mathbb{N}_n$. This yields the quantity

$$-\sum_{r \in \mathbb{N}_n} A_{lr} \sum_{[i_p : p \in \mathbb{N}_{q-1}] \in \mathbb{N}_n^{q-1}} \mathbf{A}\begin{pmatrix} r & i_1 & \cdots & i_{q-1} \\ k & i_1 & \cdots & i_{q-1} \end{pmatrix}.$$

But this quantity is independent of $m$, and so appears $q$ times in the other (second) sum, over $m \in \mathbb{N}_q$. So, in summary, we get the equation

$$\sum_{[i_p : p \in \mathbb{N}_q] \in \mathbb{N}_n^q} \mathbf{A}\begin{pmatrix} l & i_1 & \cdots & i_q \\ k & i_1 & \cdots & i_q \end{pmatrix} = A_{lk} \sum_{[i_p : p \in \mathbb{N}_q] \in \mathbb{N}_n^q} \mathbf{A}\begin{pmatrix} i_1 & \cdots & i_q \\ i_1 & \cdots & i_q \end{pmatrix} - q \sum_{r \in \mathbb{N}_n} A_{lr} \sum_{[i_p : p \in \mathbb{N}_{q-1}] \in \mathbb{N}_n^{q-1}} \mathbf{A}\begin{pmatrix} r & i_1 & \cdots & i_{q-1} \\ k & i_1 & \cdots & i_{q-1} \end{pmatrix},$$

which is equivalent to the formula

$$\sum_{\mathbf{i} \in \mathbb{N}_n^q} \mathbf{A}\begin{bmatrix} l & \mathbf{i} \\ k & \mathbf{i} \end{bmatrix} = A_{lk} \sum_{\mathbf{i} \in \mathbb{N}_n^q} \mathbf{A}[\mathbf{i}] - q \sum_{r \in \mathbb{N}_n} A_{lr} \sum_{\mathbf{j} \in \mathbb{N}_n^{q-1}} \mathbf{A}\begin{bmatrix} r & \mathbf{j} \\ k & \mathbf{j} \end{bmatrix}.$$

When $q = 0$, the second term on the right is zero, while the first sum on the right is set to one. Likewise, the expression on the left is set to $A_{lk}$, and so this formula is still true when $q = 0$. We now multiply both sides by $\rho^q/q!$ and sum over $q \in \mathbb{Z}_{n+1}$. Upon simplification, we conclude, for $l, k \in \mathbb{N}_n$, that

$$U_{lk}(\rho) = A_{lk}\det(\mathbf{I} + \rho\mathbf{A}) - \rho\sum_{r \in \mathbb{N}_n} A_{lr} U_{rk}(\rho). \qquad (1.14)$$


Let $\mathbf{U}$ be the $n \times n$ matrix defined by

$$\mathbf{U} = (U_{lk}(\rho))_{l,k \in \mathbb{N}_n}.$$

Using this matrix, we re-express equation (1.14) in an equivalent matrix form as follows:

$$\mathbf{U} - \mathbf{A}\det(\mathbf{I} + \rho\mathbf{A}) + \rho\mathbf{A}\mathbf{U} = 0. \qquad (1.15)$$

Likewise, by performing the Laplace expansion on (1.11) by minors down its first column, we get the equation

$$\mathbf{U} - \mathbf{A}\det(\mathbf{I} + \rho\mathbf{A}) + \rho\mathbf{U}\mathbf{A} = 0. \qquad (1.16)$$

From these formulas we get that

$$(\mathbf{I} + \rho\mathbf{A})^{-1} = \mathbf{I} - \frac{\rho\mathbf{U}}{\det(\mathbf{I} + \rho\mathbf{A})}. \qquad (1.17)$$

Indeed, this can be obtained by checking the equations

$$(\mathbf{I} + \rho\mathbf{A})\left(\mathbf{I} - \frac{\rho\mathbf{U}}{\det(\mathbf{I} + \rho\mathbf{A})}\right) = \left(\mathbf{I} - \frac{\rho\mathbf{U}}{\det(\mathbf{I} + \rho\mathbf{A})}\right)(\mathbf{I} + \rho\mathbf{A}) = \mathbf{I},$$

which may be confirmed by employing equations (1.15) and (1.16).
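Formula (1.17) can be checked numerically by building the constants $u_{lkq}$ of (1.10) by brute force. The sketch below does this for an arbitrary random $3 \times 3$ test matrix; the scaling of $\mathbf{A}$ and the value of $\rho$ are illustrative choices that keep $\mathbf{I} + \rho\mathbf{A}$ safely invertible.

```python
import itertools
import math
import numpy as np

# Sketch verifying (1.17): (I + rho*A)^(-1) = I - rho*U / det(I + rho*A),
# with U = (U_lk(rho)) assembled from the constants u_lkq of (1.10).
rng = np.random.default_rng(1)
n, rho = 3, 0.6
A = 0.5 * rng.standard_normal((n, n))   # small entries keep I + rho*A invertible

def minor(rows, cols):
    # the minor A[rows; cols]; repeated indices give a repeated row, hence 0
    return np.linalg.det(A[np.ix_(rows, cols)])

U = np.zeros((n, n))
for l in range(n):
    for k in range(n):
        for q in range(n + 1):
            s = sum(minor([l] + list(i), [k] + list(i))
                    for i in itertools.product(range(n), repeat=q))
            U[l, k] += s / math.factorial(q) * rho**q   # u_lkq * rho^q

d = np.linalg.det(np.eye(n) + rho * A)
lhs = np.linalg.inv(np.eye(n) + rho * A)
rhs = np.eye(n) - rho * U / d
assert np.allclose(lhs, rhs)
```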

1.3 Fredholm functions

We now apply Lemma 1.1 to Fredholm's discretization given in equation (1.2). For this purpose, we introduce the notation for the Fredholm minors of a continuous kernel $K \in C(\Omega \times \Omega)$.

Definition 1.4 For any two vectors $\mathbf{x} = [x_l : l \in \mathbb{N}_q]$ and $\mathbf{y} = [y_l : l \in \mathbb{N}_q]$ in $\Omega^q$, we define the corresponding $q$th-order minor of a continuous kernel $K$ as

$$K\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix} = \det[K(x_l, y_r) : l, r \in \mathbb{N}_q].$$

If $\mathbf{x} = \mathbf{y}$, we use the simplified notation $K[\mathbf{x}]$. Sometimes, to avoid confusion, we may use the extended notation

$$K\begin{pmatrix} x_1 & \cdots & x_q \\ y_1 & \cdots & y_q \end{pmatrix}$$

for the minor $K\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix}$, and similarly we might use

$$K\begin{pmatrix} x_1 & \cdots & x_q \\ x_1 & \cdots & x_q \end{pmatrix}$$

for $K[\mathbf{x}]$.


Now, returning to equation (1.2), we recall that in that case the matrix $\mathbf{A}$ is given by the formula

$$\mathbf{A} = [\mathrm{vol}(\Omega_j)K(t_i, t_j) : i, j \in \mathbb{N}_n].$$

For each lattice vector $\mathbf{i} = [i_l : l \in \mathbb{N}_q]$ in $\mathbb{N}_n^q$, we form the vector $s_{\mathbf{i}} = [t_{i_l} : l \in \mathbb{N}_q]$ in $\Omega^q$ from the points in the set $T = \{t_i : i \in \mathbb{N}_n\}$, and the subset

$$\Omega_{\mathbf{i}} = \bigotimes_{l \in \mathbb{N}_q} \Omega_{i_l}$$

of $\Omega^q$, and obtain a formula for the determinant of the coefficient matrix of the linear system (1.2), namely

$$D_h(\lambda) = \sum_{q \in \mathbb{Z}_{n+1}} \frac{(-1)^q}{q!}\lambda^q \sum_{\mathbf{i} \in \mathbb{N}_n^q} \mathrm{vol}(\Omega_{\mathbf{i}})K[s_{\mathbf{i}}].$$

Note that the vector $s_{\mathbf{i}} = [t_{i_l} : l \in \mathbb{N}_q]$ is in $\Omega_{\mathbf{i}}$ and that

$$\Omega^q = \bigcup_{\mathbf{i} \in \mathbb{N}_n^q} \Omega_{\mathbf{i}},$$

thereby forming a partition of the set $\Omega^q$ a.e. In our expanded notation, the determinant becomes

$$D_h(\lambda) = \sum_{q \in \mathbb{Z}_{n+1}} (-1)^q\lambda^q \sum_{1 \le i_1 < i_2 < \cdots < i_q \le n} \mathrm{vol}(\Omega_{\mathbf{i}})K\begin{pmatrix} t_{i_1} & \cdots & t_{i_q} \\ t_{i_1} & \cdots & t_{i_q} \end{pmatrix}.$$

We now begin the task of identifying the limit of the polynomials $D_h$ as $h \to 0^+$.

Definition 1.5 If $K$ is a continuous kernel on $\Omega \times \Omega$ and $\lambda \in \mathbb{C}$, we define the complex-valued function $D$ at $\lambda \in \mathbb{C}$ by

$$D(\lambda) = \sum_{q \in \mathbb{N}_0} \frac{(-1)^q\lambda^q}{q!} \int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x}. \qquad (1.18)$$

Next we point out that $D$ is an entire function of $\lambda$. We shall refer to the function $D$ as the Fredholm function. To prove that the Fredholm function $D$ is an entire function, we recall the Hadamard inequality for the determinant of an $n \times n$ matrix (see, for example, [142]). When we use the Hadamard inequality, it is convenient to express an $n \times n$ matrix $\mathbf{A}$ in terms of its columns. Thus we write $\mathbf{A} = [\mathbf{a}_i : i \in \mathbb{N}_n]$, in which each $\mathbf{a}_i \in \mathbb{R}^n$ is the $i$th column of $\mathbf{A}$.

Lemma 1.6 If $\mathbf{A} = [\mathbf{a}_i : i \in \mathbb{N}_n]$ is an $n \times n$ matrix, then

$$|\det \mathbf{A}| \le \prod_{i \in \mathbb{N}_n} |\mathbf{a}_i|,$$

where $|\mathbf{a}_i|$ denotes the 2-norm of $\mathbf{a}_i$.

We now set

$$\|K\|_\infty = \max\{|K(x, y)| : x, y \in \Omega\}$$

and derive the next lemma as an immediate application of the Hadamard inequality.

Lemma 1.7 If $K \in C(\Omega \times \Omega)$ and $\mathbf{x}, \mathbf{y} \in \Omega^q$, then

$$\left|K\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix}\right| \le \|K\|_\infty^q\, q^{q/2}.$$

Proof Since for each $r \in \mathbb{N}_q$ and $x_r, y \in \Omega$ we have that

$$\sum_{r \in \mathbb{N}_q} K^2(x_r, y) \le q\|K\|_\infty^2,$$

the result of this lemma follows immediately from the Hadamard inequality.
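Both the Hadamard inequality and the resulting minor bound are easy to test on random data. In the sketch below, the kernel $K(x,y) = \cos(xy)$ on $[0,1]$ is an arbitrary continuous test kernel with $\|K\|_\infty = 1$; the matrix and point sets are random illustrative choices.

```python
import numpy as np

# Sketch checking the Hadamard inequality (Lemma 1.6) and the minor bound of
# Lemma 1.7 on random data; K(x,y) = cos(x*y) is an arbitrary test kernel.
rng = np.random.default_rng(2)

# Lemma 1.6: |det A| <= product of the column 2-norms of A
A = rng.standard_normal((5, 5))
assert abs(np.linalg.det(A)) <= np.prod(np.linalg.norm(A, axis=0)) + 1e-12

# Lemma 1.7: |K[x; y]| <= ||K||_inf^q * q^(q/2); here ||K||_inf = 1 on [0,1]^2
q = 6
x, y = rng.random(q), rng.random(q)
M = np.cos(np.outer(x, y))          # the matrix [K(x_l, y_r)]
assert abs(np.linalg.det(M)) <= 1.0**q * q**(q / 2)
```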

Lemma 1.8 The function $D$ defined by equation (1.18) is an entire function.

Proof For $q \in \mathbb{N}_0$ and $\lambda \in \mathbb{C}$, we have by Lemma 1.7 that

$$\left|\frac{(-1)^q\lambda^q}{q!}\int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x}\right| \le \frac{\mathrm{vol}(\Omega)^q|\lambda|^q}{q!}\|K\|_\infty^q\, q^{q/2}.$$

By using the inequality

$$\frac{q^q}{q!} < e^q, \qquad (1.19)$$

valid for all $q \in \mathbb{N}$, we obtain that

$$\frac{\mathrm{vol}(\Omega)^q|\lambda|^q}{q!}\|K\|_\infty^q\, q^{q/2} = \left(\frac{\mathrm{vol}(\Omega)|\lambda|\,\|K\|_\infty}{\sqrt{q}}\right)^q \frac{q^q}{q!} \le \left(\frac{\mathrm{vol}(\Omega)|\lambda|\,\|K\|_\infty e}{\sqrt{q}}\right)^q.$$

Hence, when $q$ is chosen to satisfy the condition

$$q > (2\,\mathrm{vol}(\Omega)|\lambda|\,\|K\|_\infty e)^2,$$

we conclude that

$$\left|\frac{(-1)^q\lambda^q}{q!}\int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x}\right| < 2^{-q}.$$

Thus the series on the right-hand side of equation (1.18) converges absolutely.


Theorem 1.9 For $\lambda \in \mathbb{C}$,

$$\lim_{h \to 0^+} D_h(\lambda) = D(\lambda)$$

uniformly and absolutely on any bounded subset of $\mathbb{C}$.

For the proof of Theorem 1.9 we need some ancillary lemmas. The first is about $n \times n$ matrices $\mathbf{A} = [\mathbf{a}_i : i \in \mathbb{N}_n]$ and $\mathbf{B} = [\mathbf{b}_i : i \in \mathbb{N}_n]$.

Lemma 1.10 If there exists a constant $\theta$ such that for all $i \in \mathbb{N}_n$, $|\mathbf{a}_i| \le \theta$ and $|\mathbf{b}_i| \le \theta$, then

$$|\det \mathbf{A} - \det \mathbf{B}| \le \theta^{n-1}\sum_{i \in \mathbb{N}_n}|\mathbf{a}_i - \mathbf{b}_i|.$$

Proof The proof follows from the Hadamard inequality applied to the formula

$$\det \mathbf{A} - \det \mathbf{B} = \sum_{i \in \mathbb{N}_n} \det[\mathbf{b}_1, \ldots, \mathbf{b}_{i-1}, \mathbf{a}_i - \mathbf{b}_i, \mathbf{a}_{i+1}, \ldots, \mathbf{a}_n].$$

For the next fact we need to recall the definition of the modulus of continuity of the function $K$, namely, for $h > 0$,

$$\omega(K; h) = \max\{|K(x, y) - K(x', y')| : |x - x'| \le h,\ |y - y'| \le h\}.$$

Lemma 1.11 For any $\mathbf{x}, \mathbf{y} \in \Omega^q$ with $|\mathbf{x} - \mathbf{y}| \le h$, there holds the estimate

$$|K[\mathbf{x}] - K[\mathbf{y}]| \le \frac{q}{\|K\|_\infty}(\|K\|_\infty\sqrt{q})^q\,\omega(K; h).$$

Proof We apply Lemma 1.10 to obtain the inequalities

$$|K[\mathbf{x}] - K[\mathbf{y}]| \le (\|K\|_\infty\sqrt{q})^{q-1}\sum_{i \in \mathbb{N}_q}\left(\sum_{r \in \mathbb{N}_q}(K(x_r, x_i) - K(y_r, y_i))^2\right)^{1/2} \le (\|K\|_\infty\sqrt{q})^{q-1}\, q^{3/2}\,\omega(K; h),$$

which gives the desired estimate.

The next lemma estimates the difference between a multivariate integral and a finite sum which approximates it.

Lemma 1.12 If $f \in C(\Omega^q)$, $\Omega^q$ is partitioned as $\Omega^q = \bigcup_{\mathbf{i} \in \mathbb{N}_n^q}\Omega_{\mathbf{i}}$ a.e., and $s_{\mathbf{i}} \in \Omega_{\mathbf{i}}$ for $\mathbf{i} \in \mathbb{N}_n^q$, then

$$\left|\int_{\Omega^q} f(\mathbf{x})\,d\mathbf{x} - \sum_{\mathbf{i} \in \mathbb{N}_n^q}\mathrm{vol}(\Omega_{\mathbf{i}})f(s_{\mathbf{i}})\right| \le (\mathrm{vol}(\Omega))^q\,\omega(f; h),$$

where $h$ is chosen so that $\mathrm{diam}\,\Omega_{\mathbf{i}} \le h$ for all $\mathbf{i} \in \mathbb{N}_n^q$.

Proof We have that

$$\sum_{\mathbf{i} \in \mathbb{N}_n^q}\mathrm{vol}(\Omega_{\mathbf{i}})f(s_{\mathbf{i}}) - \int_{\Omega^q}f(\mathbf{x})\,d\mathbf{x} = \sum_{\mathbf{i} \in \mathbb{N}_n^q}\left(\mathrm{vol}(\Omega_{\mathbf{i}})f(s_{\mathbf{i}}) - \int_{\Omega_{\mathbf{i}}}f(\mathbf{x})\,d\mathbf{x}\right) = \sum_{\mathbf{i} \in \mathbb{N}_n^q}\int_{\Omega_{\mathbf{i}}}(f(s_{\mathbf{i}}) - f(\mathbf{x}))\,d\mathbf{x}.$$

Hence, taking absolute values of both sides of the equation gives the inequality

$$\left|\sum_{\mathbf{i} \in \mathbb{N}_n^q}\mathrm{vol}(\Omega_{\mathbf{i}})f(s_{\mathbf{i}}) - \int_{\Omega^q}f(\mathbf{x})\,d\mathbf{x}\right| \le \sum_{\mathbf{i} \in \mathbb{N}_n^q}\int_{\Omega_{\mathbf{i}}}|f(s_{\mathbf{i}}) - f(\mathbf{x})|\,d\mathbf{x} \le \omega(f; h)\sum_{\mathbf{i} \in \mathbb{N}_n^q}\mathrm{vol}(\Omega_{\mathbf{i}}) \le (\mathrm{vol}(\Omega))^q\,\omega(f; h).$$

We are now ready to prove Theorem 1.9.

Proof Note that

$$|D_h(\lambda) - D(\lambda)| \le \Sigma_1 + \Sigma_2,$$

where

$$\Sigma_1 = \left|D_h(\lambda) - \sum_{q \in \mathbb{Z}_{n+1}}\frac{(-1)^q\lambda^q}{q!}\int_{\Omega^q}K[\mathbf{x}]\,d\mathbf{x}\right|$$

and

$$\Sigma_2 = \left|\sum_{q \in \mathbb{N}_0\setminus\mathbb{Z}_{n+1}}\frac{(-1)^q\lambda^q}{q!}\int_{\Omega^q}K[\mathbf{x}]\,d\mathbf{x}\right|.$$

We estimate $\Sigma_1$ and $\Sigma_2$ separately.

We apply the above two lemmas to the function $f : \Omega^q \to \mathbb{R}$ defined for $\mathbf{x} \in \Omega^q$ as $f(\mathbf{x}) = K[\mathbf{x}]$ and obtain the inequality

$$\left|\sum_{\mathbf{i} \in \mathbb{N}_n^q}\mathrm{vol}(\Omega_{\mathbf{i}})K[s_{\mathbf{i}}] - \int_{\Omega^q}K[\mathbf{x}]\,d\mathbf{x}\right| \le \frac{q}{\|K\|_\infty}(\mathrm{vol}(\Omega)\|K\|_\infty\sqrt{q})^q\,\omega(K; h).$$


Therefore we obtain that

$$\Sigma_1 \le \sum_{q \in \mathbb{Z}_{n+1}}\frac{|\lambda|^q}{q!}\left|\sum_{\mathbf{i} \in \mathbb{N}_n^q}\mathrm{vol}(\Omega_{\mathbf{i}})K[s_{\mathbf{i}}] - \int_{\Omega^q}K[\mathbf{x}]\,d\mathbf{x}\right| \le \left\{\sum_{q \in \mathbb{Z}_{n+1}}\frac{|\lambda|^q}{q!}\,\frac{q}{\|K\|_\infty}(\mathrm{vol}(\Omega)\|K\|_\infty\sqrt{q})^q\right\}\omega(K; h) \le \left\{\sum_{q \in \mathbb{N}_0}\frac{q}{\|K\|_\infty}\left(\frac{\mathrm{vol}(\Omega)|\lambda|\,\|K\|_\infty e}{\sqrt{q}}\right)^q\right\}\omega(K; h),$$

where we have again used inequality (1.19) from the proof of Lemma 1.8, and where, in the tail of the series, $q$ is sufficiently large that the inequality

$$\frac{\mathrm{vol}(\Omega)|\lambda|\,\|K\|_\infty e}{\sqrt{q}} < \frac{1}{2}$$

holds. Therefore the series above converges uniformly and absolutely on any bounded set. In fact, if we introduce the constant

$$c = \sum_{q \in \mathbb{N}_0}\frac{q}{\|K\|_\infty}\left(\frac{\mathrm{vol}(\Omega)|\lambda|\,\|K\|_\infty e}{\sqrt{q}}\right)^q,$$

we can write the above observation as $\Sigma_1 \le c\,\omega(K; h)$.

For $\Sigma_2$ we also have that

$$\Sigma_2 \le \sum_{q \in \mathbb{N}_0\setminus\mathbb{Z}_{n+1}}\frac{|\lambda|^q}{q!}\int_{\Omega^q}|K[\mathbf{x}]|\,d\mathbf{x} \le \sum_{q \in \mathbb{N}_0\setminus\mathbb{Z}_{n+1}}\frac{|\lambda|^q}{q!}(\|K\|_\infty\sqrt{q})^q\,\mathrm{vol}(\Omega^q) \le \sum_{q \in \mathbb{N}_0\setminus\mathbb{Z}_{n+1}}\left(\frac{\mathrm{vol}(\Omega)|\lambda|\,\|K\|_\infty e}{\sqrt{q}}\right)^q \le \sum_{q \in \mathbb{N}_0\setminus\mathbb{Z}_{n+1}}\frac{1}{2^q}.$$

Since $n \ge \mathrm{vol}(\Omega)/(2^d h^d)$, we see that $h \to 0^+$ implies both $n \to \infty$ and $q \to \infty$, so that the right-hand side of the above inequality goes to zero uniformly in $\lambda$ in any bounded subset of the complex plane. These two inequalities prove the desired result. Indeed, when $n$ is sufficiently large, we have

$$|D_h(\lambda) - D(\lambda)| \le c\,\omega(K; h) + \sum_{q \in \mathbb{N}_0\setminus\mathbb{Z}_{n+1}}\frac{1}{2^q}.$$


Since $\omega(K; h) \to 0$ as $h \to 0^+$, the right-hand side of the above inequality tends to zero uniformly.
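For a kernel whose Fredholm function is known in closed form, the convergence of Theorem 1.9 can be observed directly. The rank-one test kernel $K(s,t) = st$ on $[0,1]$ below is an illustrative assumption: all of its minors of order at least two vanish, so $D(\lambda) = 1 - \lambda/3$, while $D_h(\lambda) = \det(\mathbf{I} - \lambda\mathbf{A})$ for the matrix $\mathbf{A}$ of (1.2).

```python
import numpy as np

# Sketch of Theorem 1.9 for the rank-one test kernel K(s,t) = s*t on [0,1]
# (an illustrative choice): D(lambda) = 1 - lambda/3, and D_h(lambda) is the
# determinant of the coefficient matrix of the linear system (1.2).
def D_h(lam, n):
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h
    A = h * np.outer(t, t)                 # A = [vol(Omega_j) K(t_i, t_j)]
    return np.linalg.det(np.eye(n) - lam * A)

lam = 2.5
exact = 1.0 - lam / 3.0
errs = [abs(D_h(lam, n) - exact) for n in (10, 40, 160)]
assert errs[-1] < 1e-4
assert errs[0] > errs[-1]                  # the error decreases as h -> 0+
```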

1.4 Resolvent kernels

We now explain the central role that the Fredholm determinant plays in solving a second-kind integral equation with a continuous kernel.

To this end, for each $\lambda \in \mathbb{C}$, we introduce the resolvent kernel $R_\lambda$, defined for $s, t \in \Omega$ as

$$R_\lambda(s, t) = \sum_{q \in \mathbb{N}_0}\frac{(-1)^{q+1}}{q!}\lambda^{q+1}\int_{\Omega^q}K\begin{bmatrix} s & \mathbf{x} \\ t & \mathbf{x} \end{bmatrix}d\mathbf{x}.$$

Here, of course, the expression $K\begin{bmatrix} s & \mathbf{x} \\ t & \mathbf{x} \end{bmatrix}$ stands for the Fredholm minor

$$K\begin{pmatrix} s & x_1 & \cdots & x_q \\ t & x_1 & \cdots & x_q \end{pmatrix}.$$

Alternatively, we may write the resolvent kernel in the form

$$R_\lambda(s, t) = -\lambda\left\{K(s, t) + \sum_{q \in \mathbb{N}}\frac{(-1)^q}{q!}\lambda^q\int_{\Omega^q}K\begin{bmatrix} s & \mathbf{x} \\ t & \mathbf{x} \end{bmatrix}d\mathbf{x}\right\}.$$

As in the proof of Lemma 1.8, we see, for fixed $s, t \in \Omega$, that $R_\lambda(s, t)$ is an entire function of $\lambda \in \mathbb{C}$, and, for a fixed $\lambda \in \mathbb{C}$, it is continuous in $s, t \in \Omega$. The next observation gives an indication of the role of the resolvent kernel in solving a second-kind integral equation.

Proposition 1.13 For any $\lambda \in \mathbb{C}$, we have that

$$\mathcal{R}_\lambda - \lambda\mathcal{K}\mathcal{R}_\lambda + \lambda D(\lambda)\mathcal{K} = 0 \qquad (1.20)$$

and

$$\mathcal{R}_\lambda - \lambda\mathcal{R}_\lambda\mathcal{K} + \lambda D(\lambda)\mathcal{K} = 0, \qquad (1.21)$$

where $\mathcal{K}$ and $\mathcal{R}_\lambda$ denote the integral operators with kernels $K$ and $R_\lambda$, respectively.

Proof We first prove equation (1.20). For $\mathbf{x} \in \Omega^q$ and $s, t \in \Omega$, we expand the determinant $K\begin{bmatrix} s & \mathbf{x} \\ t & \mathbf{x} \end{bmatrix}$ along its first row to obtain the formula

$$K\begin{pmatrix} s & x_1 & \cdots & x_q \\ t & x_1 & \cdots & x_q \end{pmatrix} = K(s, t)K\begin{pmatrix} x_1 & \cdots & x_q \\ x_1 & \cdots & x_q \end{pmatrix} + \sum_{j \in \mathbb{N}_q}(-1)^j K(s, x_j)\,K\begin{pmatrix} x_1 & x_2 & \cdots & x_q \\ t & x_1 & \cdots & x_{j-1} & x_{j+1} & \cdots & x_q \end{pmatrix}.$$


Now we integrate both sides of this equation over $\mathbf{x} \in \Omega^q$ and note that the integral of each summand is independent of the summation index $j$. Hence we conclude that

$$\int_{\Omega^q}K\begin{bmatrix} s & \mathbf{x} \\ t & \mathbf{x} \end{bmatrix}d\mathbf{x} = K(s, t)\int_{\Omega^q}K[\mathbf{x}]\,d\mathbf{x} - q\int_\Omega K(s, r)\left(\int_{\Omega^{q-1}}K\begin{bmatrix} r & \mathbf{x} \\ t & \mathbf{x} \end{bmatrix}d\mathbf{x}\right)dr,$$

and this equation is also, by definition, valid for $q = 0$. We now substitute this equation into the power series for the resolvent kernel to obtain the equation

$$R_\lambda(s, t) = \lambda\int_\Omega K(s, r)R_\lambda(r, t)\,dr - \lambda D(\lambda)K(s, t). \qquad (1.22)$$

Next we choose any $u \in C(\Omega)$, multiply both sides of equation (1.22) by $u(t)$, and integrate both sides of the resulting equation over $t \in \Omega$ to obtain equation (1.20). The second equation, (1.21), is proved in an analogous fashion by expanding the minor $K\begin{bmatrix} s & \mathbf{x} \\ t & \mathbf{x} \end{bmatrix}$ along its first column.

The next result is concerned with the unique solvability of equation (1.1).

Theorem 1.14 If $D(\lambda) \ne 0$, then $(\mathcal{I} - \lambda\mathcal{K})^{-1}$ is given by the formula

$$(\mathcal{I} - \lambda\mathcal{K})^{-1} = \mathcal{I} - D(\lambda)^{-1}\mathcal{R}_\lambda.$$

Proof We compute

$$(\mathcal{I} - \lambda\mathcal{K})(\mathcal{I} - D(\lambda)^{-1}\mathcal{R}_\lambda) = \mathcal{I} - D(\lambda)^{-1}[\mathcal{R}_\lambda + \lambda D(\lambda)\mathcal{K} - \lambda\mathcal{K}\mathcal{R}_\lambda]$$

and now use equation (1.20) to conclude that the term inside the brackets is zero. Similarly, we verify that

$$(\mathcal{I} - D(\lambda)^{-1}\mathcal{R}_\lambda)(\mathcal{I} - \lambda\mathcal{K}) = \mathcal{I}$$

by using equation (1.21).
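As a concrete sanity check of Theorem 1.14 (with illustrative choices, not from the text): for the rank-one kernel $K(s,t) = st$ on $[0,1]$, all minors of order at least two vanish, and one can check from the definitions that $D(\lambda) = 1 - \lambda/3$ and $R_\lambda(s,t) = -\lambda st$. The sketch below evaluates $u = (\mathcal{I} - D(\lambda)^{-1}\mathcal{R}_\lambda)f$ by quadrature and compares it with the known exact solution.

```python
import numpy as np

# Sketch of Theorem 1.14 for the rank-one test kernel K(s,t) = s*t on [0,1]
# (illustrative assumptions): D(lambda) = 1 - lambda/3 and
# R_lambda(s,t) = -lambda*s*t, so u = (I - D(lambda)^(-1) R_lambda) f.
lam = 1.0
n = 2000
h = 1.0 / n
t = (np.arange(n) + 0.5) * h     # quadrature points for the integral over Omega
f = t                            # test data f(s) = s

D = 1.0 - lam / 3.0
s = 0.5
# u(s) = f(s) - D^(-1) * int_0^1 R_lambda(s,t) f(t) dt
u_s = s - (1.0 / D) * h * np.sum((-lam * s * t) * f)
# independent check: the exact solution is u(s) = 1.5*s at lam = 1, f(s) = s
assert abs(u_s - 0.75) < 1e-6
```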

It follows immediately from Theorem 1.14 that if $\lambda$ is not a zero of the function $D$, then equation (1.1) has a unique solution in $C(\Omega)$ for any $f \in C(\Omega)$. We now prove the converse of Theorem 1.14, namely, if $D(\lambda) = 0$, then $\mathcal{I} - \lambda\mathcal{K}$ is not invertible. To this end, we note the following ancillary fact.

Lemma 1.15 For any $\lambda \in \mathbb{C}$, there holds the equation

$$\int_\Omega R_\lambda(s, s)\,ds = \lambda D'(\lambda). \qquad (1.23)$$

The proof of this formula follows directly from a term-by-term integration and differentiation of the relevant power series.

Corollary 1.16 If $D$ has a zero of order $m$ at some $\mu \ne 0$, then there is a $y \in \Omega$ for which the function $\lambda \mapsto R_\lambda(y, y)$ has a zero at $\mu$ of order strictly less than $m$.


Proof From Lemma 1.15 above and our hypothesis, we have that

$$\int_\Omega \frac{\partial^{m-1}}{\partial\lambda^{m-1}}\left(R_\lambda(s, s)\right)\Big|_{\lambda=\mu}\,ds = \mu D^{(m)}(\mu) \ne 0.$$

Theorem 1.17 If $K$ is a continuous kernel such that $D(\lambda) = 0$, then $\mathcal{I} - \lambda\mathcal{K}$ is not invertible.

Proof Choose a $y \in \Omega$ and consider the function $u_y$ defined to be $R_\lambda(\cdot, y)$. Since $D(\lambda) = 0$ by hypothesis, equation (1.20) implies that $u_y - \lambda\mathcal{K}u_y = 0$. We would be finished with the proof if there were a $y \in \Omega$ for which $u_y \ne 0$. Otherwise, we must argue differently.

Suppose that $D$ vanishes at $\mu$ and the multiplicity of this zero is $m$ (note that $D(0) = 1$, and so each zero of $D$ must be of finite multiplicity). Let $l$ be the smallest order, over $x, y \in \Omega$, of the zero which the function $\lambda \mapsto R_\lambda(x, y)$ has at $\mu$. Corollary 1.16 says that $l < m$. We now expand $R_\lambda(x, y)$ near $\lambda = \mu$ and have that

$$R_\lambda = (\lambda - \mu)^l g + O((\lambda - \mu)^{l+1}),$$

where the function $g$ is continuous on $\Omega \times \Omega$ and does not vanish identically there. Substituting this equation into equation (1.22), we obtain that

$$(\lambda - \mu)^l g(s, t) + O((\lambda - \mu)^{l+1}) = \lambda\int_\Omega K(s, t')\left[(\lambda - \mu)^l g(t', t) + O((\lambda - \mu)^{l+1})\right]dt' - \lambda D(\lambda)K(s, t).$$

Dividing both sides of the above equation by $(\lambda - \mu)^l$, we get that

$$g(s, t) + O(\lambda - \mu) = \lambda\int_\Omega K(s, t')[g(t', t) + O(\lambda - \mu)]\,dt' - \frac{\lambda D(\lambda)}{(\lambda - \mu)^l}K(s, t).$$

Letting $\lambda \to \mu$, and noting that the last term tends to zero because $l < m$, we have that

$$g(s, t) = \mu\int_\Omega K(s, t')g(t', t)\,dt'. \qquad (1.24)$$

However, we may rewrite equation (1.24) in the equivalent form

$$(\mathcal{I} - \mu\mathcal{K})g = 0,$$

which, together with the fact that $g$ is not identically zero, establishes that $\mathcal{I} - \mu\mathcal{K}$ is not invertible, thereby proving the theorem.

We see from the above theorems the importance of the zeros of $D$ in the study of second-kind integral equations. Since $D(0) = 1$, the zeros of $D$ are nonzero, countable, and only have infinity as a possible accumulation point. The reciprocals of the zeros of $D$ correspond to the nonzero eigenvalues of the operator $\mathcal{K}$. These observations prove many of the general statements we have made about compact operators in the Appendix; see Section A.2.7. We shall say more about the zeros of $D$ later. First, we want to prove Fredholm's remarkable formula for the determinant of a product of two operators.

1.5 Fredholm determinants

In this section we consider the Fredholm determinant and prove a product formula for it.

Definition 1.18 Let $K \in C(\Omega \times \Omega)$. The linear operator $\mathcal{I} + \mathcal{K} : C(\Omega) \to C(\Omega)$, defined for $s \in \Omega$ and $u \in C(\Omega)$ by

$$((\mathcal{I} + \mathcal{K})u)(s) = u(s) + \int_\Omega K(s, t)u(t)\,dt,$$

has a Fredholm determinant, which is given by

$$\det(\mathcal{I} + \mathcal{K}) = \sum_{q \in \mathbb{N}_0}\frac{1}{q!}\int_{\Omega^q}K[\mathbf{x}]\,d\mathbf{x}.$$

Alternatively, we may express the Fredholm determinant directly in terms of the Fredholm function, namely

$$\det(\mathcal{I} + \mathcal{K}) = D(-1).$$

Note that the Fredholm determinant of the operator $\mathcal{K}$ is the same as the Fredholm determinant of the operator $\mathcal{K}^*$ corresponding to the adjoint kernel, which is defined for $s, t \in \Omega$ by $K^*(s, t) = K(t, s)$. That is, for any continuous kernel $K$ on the compact set $\Omega$, we have the equation

$$\det(\mathcal{I} + \mathcal{K}) = \det(\mathcal{I} + \mathcal{K}^*). \qquad (1.25)$$

The main result of this section is the Fredholm determinant product formula.

Theorem 1.19 If $K, H \in C(\Omega \times \Omega)$, then

$$\det((\mathcal{I} + \mathcal{H})(\mathcal{I} + \mathcal{K})) = \det(\mathcal{I} + \mathcal{H})\det(\mathcal{I} + \mathcal{K}).$$

We start our discussion of this product formula by introducing a continuous kernel $L$ associated with two prescribed continuous kernels $K$ and $H$.


Definition 1.20 If $K$ and $H$ are continuous kernels, we define $L$ to be the kernel of the operator

$$\mathcal{L} = \mathcal{K} + \mathcal{H} + \mathcal{H}\mathcal{K}.$$

Remark 1.21 The kernel $L$ has the characteristic property that

$$\mathcal{I} + \mathcal{L} = (\mathcal{I} + \mathcal{H})(\mathcal{I} + \mathcal{K}). \qquad (1.26)$$
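The product formula can be previewed at the discrete level (a sketch with arbitrary illustrative kernels): approximating each operator by its quadrature matrix with uniform weights $h$, the identity (1.26) holds exactly for the matrices, so the corresponding determinants multiply, just as Theorem 1.19 asserts for the continuous operators.

```python
import numpy as np

# Discrete sketch of Theorem 1.19 (illustrative kernels K(s,t) = exp(-(s-t)^2)
# and H(s,t) = cos(s*t)): det(I + K) is approximated by det(I + h*Kmat), and
# the kernel L = K + H + H*K of Definition 1.20 becomes
# Lmat = Kmat + Hmat + h*Hmat@Kmat at the discrete level.
n = 400
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
Kmat = np.exp(-np.subtract.outer(t, t) ** 2)
Hmat = np.cos(np.outer(t, t))
Lmat = Kmat + Hmat + h * Hmat @ Kmat

dK = np.linalg.det(np.eye(n) + h * Kmat)
dH = np.linalg.det(np.eye(n) + h * Hmat)
dL = np.linalg.det(np.eye(n) + h * Lmat)
assert abs(dL - dH * dK) < 1e-8 * abs(dH * dK)
```

Since $\mathbf{I} + h\mathbf{L} = (\mathbf{I} + h\mathbf{H})(\mathbf{I} + h\mathbf{K})$ exactly for the matrices, the check illustrates why the characteristic property (1.26) is the natural starting point for the proof.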

There are some special cases of Theorem 1.19 that are readily proven. For example, if $\det(\mathcal{I} + \mathcal{K}) = 0$, then $\mathcal{I} + \mathcal{K}$ is not one to one. Therefore it follows that $\mathcal{I} + \mathcal{L}$ is also not one to one. Consequently, we conclude that $\det(\mathcal{I} + \mathcal{L}) = 0$. Similarly, if $\det(\mathcal{I} + \mathcal{H}) = 0$, then $\det(\mathcal{I} + \mathcal{H}^*) = 0$, from which it follows that $\mathcal{I} + \mathcal{H}$ is not onto. Hence the operator $\mathcal{I} + \mathcal{L}$ is also not onto, and so $\det(\mathcal{I} + \mathcal{L}) = 0$. In other words, in the proof of the above theorem we may assume that the operators $\mathcal{I} + \mathcal{L}$, $\mathcal{I} + \mathcal{H}$ and $\mathcal{I} + \mathcal{K}$ are all invertible.

To proceed further, we need the following differentiation formula. To this end, we define, for $\tau \in \mathbb{R}$ and a continuous kernel $G$, the one-parameter family of kernels given as $K_\tau = K + \tau G$.

Proposition 1.22 If $K$ and $G$ are continuous kernels, then

$$\frac{d}{d\tau}\det(\mathcal{I} + \mathcal{K}_\tau)\Big|_{\tau=0} = \det(\mathcal{I} + \mathcal{K})\int_\Omega G(s, s)\,ds - \int_\Omega\left(\int_\Omega R_{-1}(s, t)G(t, s)\,dt\right)ds.$$

Proof For each $\mathbf{x} \in \Omega^q$, a straightforward differentiation of the requisite determinant and a Laplace expansion yield the formula

$$\frac{d}{d\tau}(K_\tau[\mathbf{x}])\Big|_{\tau=0} = \sum_{l,k \in \mathbb{N}_q}(-1)^{l+k}K\begin{pmatrix} x_1 & \cdots & x_{k-1} & x_{k+1} & \cdots & x_q \\ x_1 & \cdots & x_{l-1} & x_{l+1} & \cdots & x_q \end{pmatrix}G(x_k, x_l). \qquad (1.27)$$

We now integrate both sides of this equation over $\mathbf{x} \in \Omega^q$. The integral on the left-hand side of the equation is clear. As for the right-hand side, we distinguish the summands corresponding to $k = l$ from the remaining summands. All the summands corresponding to $k = l$ have the same integral, given by

$$\int_{\Omega^{q-1}}K[\mathbf{x}]\,d\mathbf{x}\cdot\int_\Omega G(s, s)\,ds. \qquad (1.28)$$

There are $q(q - 1)$ remaining terms, corresponding to the case $k \ne l$. They also have the same integral, which is independent of $k$ and $l$, so that their total contribution is computed to be

$$-q(q - 1)\int_{\Omega^2}\left(\int_{\Omega^{q-2}}K\begin{bmatrix} s & \mathbf{x} \\ t & \mathbf{x} \end{bmatrix}d\mathbf{x}\right)G(t, s)\,ds\,dt.$$


Therefore, in summary, we obtain that

$$\int_{\Omega^q}\frac{d}{d\tau}(K_\tau[\mathbf{x}])\Big|_{\tau=0}\,d\mathbf{x} = q\int_{\Omega^{q-1}}K[\mathbf{x}]\,d\mathbf{x}\cdot\int_\Omega G(s, s)\,ds - q(q - 1)\int_\Omega\left[\int_\Omega\left(\int_{\Omega^{q-2}}K\begin{bmatrix} s & \mathbf{x} \\ t & \mathbf{x} \end{bmatrix}d\mathbf{x}\right)G(t, s)\,dt\right]ds. \qquad (1.29)$$

Substituting equation (1.29) into the series expansion

$$\det(\mathcal{I} + \mathcal{K}_\tau) = \sum_{q \in \mathbb{N}_0}\frac{1}{q!}\int_{\Omega^q}K_\tau[\mathbf{x}]\,d\mathbf{x}$$

and rearranging terms yields the desired formula

$$\frac{d}{d\tau}\det(\mathcal{I} + \mathcal{K}_\tau)\Big|_{\tau=0} = D(-1)\int_\Omega G(s, s)\,ds - \int_\Omega\left(\int_\Omega R_{-1}(s, t)G(t, s)\,dt\right)ds.$$

Remark 1.23 When $\det(\mathcal{I} + \mathcal{K}) \ne 0$, we may use equation (1.20) to express the formula in Proposition 1.22 in the equivalent form

$$\frac{d}{d\tau}\log\det(\mathcal{I} + \mathcal{K}_\tau)\Big|_{\tau=0} = \int_\Omega W(s, s)\,ds, \qquad (1.30)$$

where $W$ is the continuous kernel corresponding to the integral operator $(\mathcal{I} - D^{-1}(-1)\mathcal{R}_{-1})\mathcal{G}$. Indeed, by Proposition 1.22, we have that

$$\frac{d}{d\tau}\log\det(\mathcal{I} + \mathcal{K}_\tau)\Big|_{\tau=0} = \frac{\frac{d}{d\tau}\det(\mathcal{I} + \mathcal{K}_\tau)\big|_{\tau=0}}{\det(\mathcal{I} + \mathcal{K})} = \int_\Omega G(s, s)\,ds - D^{-1}(-1)\int_\Omega\left(\int_\Omega R_{-1}(s, t)G(t, s)\,dt\right)ds = \int_\Omega W(s, s)\,ds,$$

where

$$W(s, s) = G(s, s) - D^{-1}(-1)\int_\Omega R_{-1}(s, t)G(t, s)\,dt.$$


Moreover, for any $u \in C(\Omega)$ and $s \in \Omega$, we have that

$$\int_\Omega W(s, t)u(t)\,dt = \int_\Omega\left\{G(s, t) - D^{-1}(-1)\int_\Omega R_{-1}(s, t')G(t', t)\,dt'\right\}u(t)\,dt = \int_\Omega G(s, t)u(t)\,dt - D^{-1}(-1)\int_\Omega\int_\Omega R_{-1}(s, t')G(t', t)u(t)\,dt'\,dt = (((\mathcal{I} - D^{-1}(-1)\mathcal{R}_{-1})\mathcal{G})u)(s),$$

which ensures the desired result. By Theorem 1.14, we can rewrite this integral operator in the equivalent, simpler form $(\mathcal{I} + \mathcal{K})^{-1}\mathcal{G}$. In order to make it easy to recall the association of the kernel $W$ with this integral operator, we express equation (1.30) in the notationally suggestive form

$$\frac{d}{d\tau}\log\det(\mathcal{I} + \mathcal{K}_\tau)\Big|_{\tau=0} = \int_\Omega\left((\mathcal{I} + \mathcal{K})^{-1}\mathcal{G}\right)(s, s)\,ds. \qquad (1.31)$$

As we already pointed out in equation (1.25), the Fredholm determinants of $\mathcal{K}$ and $\mathcal{K}^*$ are the same. Hence, since $\mathcal{K}_\tau^* = \mathcal{K}^* + \tau\mathcal{G}^*$, we also have that

$$\frac{d}{d\tau}\log\det(\mathcal{I} + \mathcal{K}_\tau)\Big|_{\tau=0} = \int_\Omega\left((\mathcal{I} + \mathcal{K}^*)^{-1}\mathcal{G}^*\right)(s, s)\,ds. \qquad (1.32)$$

We are now ready to prove Theorem 1.19.

Proof We consider two one-parameter perturbations $K_\tau = K + \tau K_0$ and $H_\tau = H + \tau H_0$, and set

$$\mathcal{I} + \mathcal{L}_\tau = (\mathcal{I} + \mathcal{H}_\tau)(\mathcal{I} + \mathcal{K}_\tau).$$

Note that

$$\mathcal{L}_\tau = \mathcal{L} + \tau\mathcal{L}_0 + O(\tau^2), \quad \tau \to 0^+, \qquad (1.33)$$

where $\mathcal{L}$ is defined in Definition 1.20 and $\mathcal{L}_0$ is given by the formula

$$\mathcal{L}_0 = (\mathcal{I} + \mathcal{H})\mathcal{K}_0 + \mathcal{H}_0(\mathcal{I} + \mathcal{K}). \qquad (1.34)$$

Combining equations (1.31), (1.32), (1.33) and (1.34), we obtain that

$$\frac{d}{d\tau}\log\det(\mathcal{I} + \mathcal{L}_\tau)\Big|_{\tau=0} = \int_\Omega\left((\mathcal{I} + \mathcal{L})^{-1}(\mathcal{I} + \mathcal{H})\mathcal{K}_0\right)(s, s)\,ds + \int_\Omega\left((\mathcal{I} + \mathcal{L}^*)^{-1}(\mathcal{I} + \mathcal{K}^*)\mathcal{H}_0^*\right)(s, s)\,ds.$$

But we also have that

$$(\mathcal{I} + \mathcal{L})^{-1} = (\mathcal{I} + \mathcal{K})^{-1}(\mathcal{I} + \mathcal{H})^{-1}$$

and

$$(\mathcal{I} + \mathcal{L}^*)^{-1} = (\mathcal{I} + \mathcal{H}^*)^{-1}(\mathcal{I} + \mathcal{K}^*)^{-1}.$$


Therefore, we get that

$$\frac{d}{d\tau}\log\det(\mathcal{I} + \mathcal{L}_\tau)\Big|_{\tau=0} = \int_\Omega\left((\mathcal{I} + \mathcal{H}^*)^{-1}\mathcal{H}_0^*\right)(s, s)\,ds + \int_\Omega\left((\mathcal{I} + \mathcal{K})^{-1}\mathcal{K}_0\right)(s, s)\,ds = \frac{d}{d\tau}\log\det(\mathcal{I} + \mathcal{H}_\tau)\Big|_{\tau=0} + \frac{d}{d\tau}\log\det(\mathcal{I} + \mathcal{K}_\tau)\Big|_{\tau=0}.$$

Let $\gamma : [0, 1] \to \mathbb{R}$ be the function defined at $\tau \in [0, 1]$ as

$$\gamma(\tau) = \log\det(\mathcal{I} + \mathcal{L}_\tau) - \log\{\det(\mathcal{I} + \mathcal{H}_\tau)\det(\mathcal{I} + \mathcal{K}_\tau)\}.$$

The above formula means that $\gamma'(0) = 0$. Clearly, we can derive the same conclusion along any continuous path $\mathcal{H}(\tau)$ and $\mathcal{K}(\tau)$, as long as $\mathcal{H}(0) = \mathcal{H}$ and $\mathcal{K}(0) = \mathcal{K}$. Moreover, we can also show that the derivative of $\gamma$ is zero at any point $\mu$ in $(0, 1)$, provided that both the integral operators $\mathcal{I} + \mathcal{H}(\mu)$ and $\mathcal{I} + \mathcal{K}(\mu)$ have inverses.

Next, we choose any continuously differentiable path $\sigma : [0, 1] \to [0, 1]$ such that $\sigma(0) = 0$ and $\sigma(1) = 1$, and then, for any $\tau \in [0, 1]$ which satisfies $\det(\mathcal{I} + \tau\mathcal{H}) \ne 0$ and $\det(\mathcal{I} + \tau\mathcal{K}) \ne 0$, we introduce the operators $\mathcal{H}_\tau = \sigma(\tau)\mathcal{H}$ and $\mathcal{K}_\tau = \sigma(\tau)\mathcal{K}$. Consequently, we conclude by our previous remarks that the function $\gamma$ defined above must be constant in the interval $[0, 1]$. This, together with the fact that $\gamma(0) = 0$ and $\gamma(1) = \log\det(\mathcal{I} + \mathcal{L}) - \log\{\det(\mathcal{I} + \mathcal{H})\det(\mathcal{I} + \mathcal{K})\}$, completes the proof.

1.6 Eigenvalue estimates and a trace formula

We now turn our attention to a new topic and provide an estimate of the growth of the zeros of $D$ in the complex plane. We accomplish this only when $\Omega = [0, 1]$ and the kernel $K$ has the property that there are constants $\rho > 0$ and $\alpha \in (0, 1]$ such that for all $u, s, t \in [0, 1]$,

$$|K(u, s) - K(u, t)| \le \rho|s - t|^\alpha.$$

When this is the case, we say that $K$ is Hölder continuous of order $\alpha$ with constant $\rho$ (with respect to the second variable). For Hölder continuous kernels of order $\alpha$ we can provide a better estimate for the Fredholm minor than that given in Lemma 1.7.

Lemma 1.24 If $K$ is Hölder continuous of order $\alpha$ with constant $\rho$, then, for any $\mathbf{x}, \mathbf{y} \in [0, 1]^q$,

$$\left|K\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix}\right| \le 4^\alpha (q^q)^{\frac12 - \alpha}\rho^q.$$


Proof First, we reorder the columns of the minor appearing on the left-hand side of the inequality so that the components of the vector $\mathbf{y} = [y_i : i \in \mathbb{N}_q]$ are increasing. Next, for each $i \in \mathbb{N}_{q-1}$, we subtract the $(i+1)$th column from the $i$th column in the above minor (which does not alter its value), use the Hadamard inequality, and then the hypothesis on the kernel, to obtain the inequality

$$\left|K\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix}\right| \le (\rho\sqrt{q})^q\prod_{i \in \mathbb{N}_{q-1}}|y_{i+1} - y_i|^\alpha.$$

We now apply the arithmetic-geometric mean inequality to obtain the inequalities

$$\left|K\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix}\right| \le (\rho\sqrt{q})^q\left(\frac{1}{q - 1}\sum_{i \in \mathbb{N}_{q-1}}(y_{i+1} - y_i)\right)^{(q-1)\alpha} \le 4^\alpha(q^q)^{\frac12 - \alpha}\rho^q,$$

thereby confirming the desired bound.

Remark 1.25 The above lemma also holds if $K$ is a Hölder continuous kernel of order $\alpha$ with respect to the first variable rather than the second variable.

Note that for the next result we introduce the constant

$$\mu = \frac{2}{1 + 2\alpha}.$$

For $\alpha \in (\frac12, 1]$ we have that $\mu \in [\frac23, 1)$.

Theorem 1.26 If $K(s, t)$ is a Hölder continuous kernel of order $\alpha \in (\frac12, 1]$ with respect to either $s$ or $t$ in $[0, 1]$, then for every $\varepsilon > 0$ there is a constant $a > 0$ such that for all $\lambda \in \mathbb{C}$,

$$|D(\lambda)| \le a\,e^{|\lambda|^{\mu+\varepsilon}}.$$

Proof To prove this theorem, we first note that any polynomial appearing in the proof of this result can be neglected, because any polynomial is surely dominated in the complex plane by an exponential.

For every $\lambda \in \mathbb{C}$ we have that

$$|D(\lambda)| \le 4^\alpha\sum_{m \in \mathbb{N}_0}a_m|\lambda|^m,$$

where we define the constant

$$a_m = \frac{\rho^m(m^m)^{\frac12 - \alpha}}{m!}.$$


This constant satisfies the inequality

$$a_m \le \left(\rho e\, m^{-\left(\frac{1}{\mu} - \frac{1}{\mu+\varepsilon}\right)}\right)^m \frac{1}{m^{m/(\mu+\varepsilon)}}, \qquad (1.35)$$

since $m! \ge (m/e)^m$ and $\frac12 + \alpha = 1/\mu$. Now, for $m$ large enough, the expression in parentheses on the right-hand side of inequality (1.35) is less than one. Hence, as we can ignore polynomials of a fixed degree in $|\lambda|$, we assume without loss of generality that it suffices to bound from above the power series

$$\sum_{m \in \mathbb{N}}\frac{|\lambda|^m}{m^{m/(\mu+\varepsilon)}}.$$

Certainly the series above is bounded for $|\lambda| \le 1$, and so we only have to consider it for $|\lambda| > 1$. In that case we break the sum above into two parts: for the first part we sum over positive integers $m < (2|\lambda|)^{\mu+\varepsilon}$, and for the second part over the remaining integers. For the first sum we have that
$$\sum_{m < (2|\lambda|)^{\mu+\varepsilon}} \frac{|\lambda|^m}{m^{m/(\mu+\varepsilon)}} \le c\,|\lambda|^{(2|\lambda|)^{\mu+\varepsilon}},$$
where
$$c = \sum_{m\in\mathbb{N}} m^{-m/(\mu+\varepsilon)} < \infty.$$
Since
$$\lim_{|\lambda|\to\infty} |\lambda|^{-\varepsilon}\log|\lambda| = 0,$$
we conclude that there is a constant $a > 0$ such that
$$|\lambda|^{(2|\lambda|)^{\mu+\varepsilon}} \le a e^{|\lambda|^{\mu+2\varepsilon}}.$$
For the other summands, corresponding to $m \ge (2|\lambda|)^{\mu+\varepsilon}$, we have $m^{1/(\mu+\varepsilon)} \ge 2|\lambda|$, so that
$$\frac{|\lambda|^m}{m^{m/(\mu+\varepsilon)}} \le 2^{-m},$$
and so we conclude that
$$\sum_{m \ge (2|\lambda|)^{\mu+\varepsilon}} \frac{|\lambda|^m}{m^{m/(\mu+\varepsilon)}} \le \sum_{m\in\mathbb{N}} 2^{-m} < \infty.$$

This theorem says, under the hypothesis on the kernel $K$, that $D$ is an entire function of order less than or equal to $\mu$.
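The coefficient bound used in the proof above rests on Stirling's bound $m! \ge (m/e)^m$ together with $\tfrac12+\alpha = \tfrac1\mu$, which give $a_m \le (\rho e)^m m^{-m/\mu}$. This is easy to confirm numerically; a quick sanity check with arbitrarily chosen sample values $\rho = 2$ and $\alpha = 3/4$:

```python
import math

# Check a_m = rho^m (m^m)^(1/2 - alpha) / m!  <=  (rho*e)^m * m^(-m/mu),
# where mu = 2/(1 + 2*alpha), i.e. 1/mu = 1/2 + alpha.
rho, alpha = 2.0, 0.75          # arbitrary sample values with alpha in (1/2, 1]
mu = 2.0 / (1.0 + 2.0 * alpha)

for m in range(1, 201):
    log_am = m * math.log(rho) + m * (0.5 - alpha) * math.log(m) - math.lgamma(m + 1)
    log_bound = m * (math.log(rho) + 1.0) - (m / mu) * math.log(m)
    assert log_am <= log_bound + 1e-9   # comparison done in logarithms
```

The comparison is carried out in logarithms since both sides underflow rapidly as $m$ grows.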

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637003Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 010447 subject to the Cambridge Core terms of

16 Eigenvalue estimates and a trace formula 27

Corollary 1.27 If $\lambda_n$, $n\in\mathbb{N}$, are the zeros of $D$ in $\mathbb{C}$, ordered so that $0 < |\lambda_1| \le |\lambda_2| \le \cdots \le |\lambda_n| \le \cdots$, and $K(s,t)$ is a Hölder continuous kernel in either $s$ or $t$ on $[0,1]$ with exponent $\alpha \in (\tfrac12, 1]$, then
$$\sum_{n\in\mathbb{N}} |\lambda_n|^{-1} < \infty.$$

The proof of this corollary relies on standard techniques in the study of entire functions. We briefly review some of the details.

The first step is to recall Jensen's formula (see, for example, [2], p. 208). If $\rho > 0$ and $f$ is a function analytic in the disc $\mathbb{D}_\rho = \{z : |z| < \rho\}$ and continuous on the boundary, with $f(0) \neq 0$ and a finite number of zeros $a_j$, $j\in\mathbb{N}_m$, in the closed disc $\overline{\mathbb{D}}_\rho$, then
$$\log|f(0)| + \sum_{j\in\mathbb{N}_m} \log\frac{\rho}{|a_j|} = \frac{1}{2\pi}\int_0^{2\pi} \log|f(\rho e^{i\theta})|\,d\theta. \qquad (1.36)$$
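Jensen's formula is easy to verify numerically for a polynomial; a sketch with the arbitrarily chosen $f(z) = (z-a)(z-b)$, both zeros inside the unit disc, where both sides equal $2\log\rho = 0$ for $\rho = 1$:

```python
import cmath, math

# Both zeros of f lie strictly inside the unit circle, so Jensen's formula
# applies with rho = 1 and both sides equal 2*log(rho) = 0.
a, b, rho = 0.3 + 0.2j, -0.5 + 0.0j, 1.0
f = lambda z: (z - a) * (z - b)

lhs = math.log(abs(f(0))) + math.log(rho / abs(a)) + math.log(rho / abs(b))

# Trapezoidal rule on the circle; the integrand is smooth and periodic,
# so the rule converges very rapidly.
N = 4096
rhs = sum(math.log(abs(f(rho * cmath.exp(1j * 2 * math.pi * k / N))))
          for k in range(N)) / N

assert abs(lhs - rhs) < 1e-8
```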

We apply Jensen's formula to the function $D$ in the following way. We let $\nu(\rho)$ be the number of zeros of $D$ (counting multiplicities) in the disc $\mathbb{D}_\rho$. Recall that $|\lambda_1| \le |\lambda_2| \le \cdots$, and so it follows that
$$n \le \nu(|\lambda_n|). \qquad (1.37)$$
Now, according to Jensen's formula applied to the function $D$ on the disc $\mathbb{D}_{2\rho}$ (recall that $D(0) = 1$), we have that
$$\sum_{j\in\mathbb{N}_k} \log\frac{2\rho}{|\lambda_j|} = \frac{1}{2\pi}\int_0^{2\pi} \log|D(2\rho e^{i\theta})|\,d\theta, \qquad (1.38)$$
where $\lambda_j$, $j\in\mathbb{N}_k$, are the zeros of $D$ in $\mathbb{D}_{2\rho}$. Since $|\lambda_j| \le 2\rho$ for $j\in\mathbb{N}_k$, each summand on the left-hand side of equation (1.38) is non-negative. We neglect the terms corresponding to the zeros of $D$ in $\mathbb{D}_{2\rho}\setminus\mathbb{D}_\rho$ and note that each of the remaining terms is at least $\log 2$, thereby obtaining the inequality
$$\nu(\rho)\log 2 < \frac{1}{2\pi}\int_0^{2\pi} \log|D(2\rho e^{i\theta})|\,d\theta. \qquad (1.39)$$
Now choose $\varepsilon > 0$ so that $\mu+\varepsilon < 1$ (recall that when $\alpha\in(\tfrac12,1]$ we have that $\mu < 1$). Next we use the estimate for $D(\lambda)$ in Theorem 1.26 and get that
$$\nu(\rho)\log 2 \le \log a + (2\rho)^{\mu+\varepsilon}. \qquad (1.40)$$
Consequently, for sufficiently large $m$ there is a positive constant $c$ such that for $n \ge m$ we have $|\lambda_n| \ge 1$ and
$$\nu(|\lambda_n|) \le c|\lambda_n|^{\mu+\varepsilon}.$$



According to the inequality (1.37), this inequality implies for $n \ge m$ that
$$n^{1/(\mu+\varepsilon)} \le c|\lambda_n|.$$
In other words, for some other positive constant $b > 0$ we have that
$$\sum_{n\in\mathbb{N}} |\lambda_n|^{-1} \le b \sum_{n\in\mathbb{N}} n^{-1/(\mu+\varepsilon)} < \infty.$$

Theorem 1.28 If $K(s,t)$ is a Hölder continuous kernel on $[0,1]$ in either $s$ or $t$ with exponent $\alpha \in (\tfrac12, 1]$, then
$$D(\lambda) = \prod_{n\in\mathbb{N}}\left(1 - \frac{\lambda}{\lambda_n}\right).$$

Proof According to the above corollary, we see that the right-hand side of the above equation is an entire function of $\lambda\in\mathbb{C}$. Moreover, the function $h$ defined at $\lambda$ as
$$h(\lambda) = \frac{D(\lambda)}{\prod_{n\in\mathbb{N}}(1-\lambda/\lambda_n)}$$
is free of zeros in $\mathbb{C}$. Therefore, by the Weierstrass factorization theorem, there is an entire function $g$ such that for all $\lambda\in\mathbb{C}$,
$$D(\lambda) = e^{g(\lambda)}\prod_{n\in\mathbb{N}}\left(1-\frac{\lambda}{\lambda_n}\right). \qquad (1.41)$$

We now show that $g$ is a constant; since $D(0) = 1$, the result will then follow. To this end we need the following version of the Poisson–Jensen formula (see, for example, [2], p. 208). Specifically, if $f$ is a function analytic in $\mathbb{D}_\rho$ with only zeros $a_j$, $j\in\mathbb{N}_m$, in $\mathbb{D}_\rho$, and $z\in\mathbb{D}_\rho$, then
$$\log|f(z)| = -\sum_{j\in\mathbb{N}_m}\log\left|\frac{\rho^2-\bar a_j z}{\rho(z-a_j)}\right| + \frac{1}{2\pi}\int_0^{2\pi} \operatorname{Re}\frac{\rho e^{i\theta}+z}{\rho e^{i\theta}-z}\,\log|f(\rho e^{i\theta})|\,d\theta. \qquad (1.42)$$

We shall first differentiate both sides of this equation with respect to $z$ appropriately, and then examine the behavior of each resulting term on the right-hand side when $f = D$ and $\rho\to\infty$. For the process of differentiation we recall the following lemma.



Lemma 1.29 If $f$ is analytic in a neighborhood of $z$, then
$$\frac{\partial}{\partial z}\log|f(z)| = \frac{f'(z)}{f(z)} \quad\text{and}\quad \frac{\partial}{\partial z}\operatorname{Re}f(z) = f'(z).$$

To prove this lemma, we write $f(z) = u + iv$ and then, by a direct application of the Cauchy–Riemann equations, the result may be verified.

We now fix $z$, choose $\rho > 2|z|$, and apply the derivative operator $\partial/\partial z$ to both sides of equation (1.42) for the choice $f = D$ to get the equation
$$\frac{D'(z)}{D(z)} = \sum_{j\in\mathbb{N}_n} \frac{\bar\lambda_j}{\rho^2-\bar\lambda_j z} + \sum_{j\in\mathbb{N}_n} \frac{1}{z-\lambda_j} + \frac{1}{\pi}\int_0^{2\pi} \rho e^{i\theta}(\rho e^{i\theta}-z)^{-2}\log|D(\rho e^{i\theta})|\,d\theta, \qquad (1.43)$$
where $\lambda_j$, $j\in\mathbb{N}_n$, are the zeros of $D$ in $\mathbb{D}_\rho$.

We first estimate the integral using Theorem 1.26. Since $\rho > 2|z|$, we know that $|\rho e^{i\theta}-z| \ge \rho/2$, which together with Theorem 1.26 yields that
$$\left|\rho e^{i\theta}(\rho e^{i\theta}-z)^{-2}\log|D(\rho e^{i\theta})|\right| \le \rho\,|\rho e^{i\theta}-z|^{-2}\left(\log a + |\rho e^{i\theta}|^{\mu+\varepsilon}\right) \le \frac{4}{\rho}\left(\log a + \rho^{\mu+\varepsilon}\right),$$
and thus the absolute value of the third term on the right-hand side of equation (1.43) is bounded by $\frac{8}{\rho}\left(\log a + \rho^{\mu+\varepsilon}\right)$, which tends to zero as $\rho\to\infty$. For the first sum on the right-hand side of equation (1.43), we note that
$$|\rho^2 - \bar\lambda_j z| \ge \rho^2 - \rho|z| \ge \frac{\rho^2}{2}.$$
Therefore we see that the first sum on the right-hand side of equation (1.43) is bounded by $2\rho^{-1}\nu(\rho)$, which, according to inequality (1.40), also goes to zero as $\rho\to\infty$. Consequently, equation (1.43) yields, as $\rho\to\infty$, the equation
$$\frac{D'(z)}{D(z)} = \sum_{j\in\mathbb{N}}\frac{1}{z-\lambda_j}. \qquad (1.44)$$

However, we also have that
$$D(z) = e^{g(z)}\prod_{j\in\mathbb{N}}\left(1-\frac{z}{\lambda_j}\right). \qquad (1.45)$$



Now we take the logarithmic derivative of both sides of equation (1.45) to obtain that
$$\frac{D'(z)}{D(z)} = g'(z) + \sum_{j\in\mathbb{N}}\frac{1}{z-\lambda_j}. \qquad (1.46)$$
Comparing equations (1.44) and (1.46) shows that $g'$ vanishes identically, leading to the desired conclusion that $g$ is constant and thereby completing the proof of the theorem.

Our discussion in this chapter indicates that a Hölder continuous kernel $K(s,t)$ on $[0,1]$ in either $s$ or $t$ with exponent $\alpha\in(\tfrac12,1]$ has eigenvalues which are $\ell^1$-summable. We let $\mu_n$, $n\in\mathbb{N}$, be the nonzero eigenvalues of $\mathcal{K}$, so that $\lambda_n = 1/\mu_n$. We then have the following result.

Theorem 1.30 If $K(s,t)$ is a Hölder continuous kernel on $[0,1]$ in either $s$ or $t$ with exponent $\alpha\in(\tfrac12,1]$, then
$$\int_0^1 K(s,s)\,ds = \sum_{n\in\mathbb{N}}\mu_n$$
and
$$\det(I+\mathcal{K}) = \prod_{n\in\mathbb{N}}(1+\mu_n).$$

Proof First we prove the second equality. Indeed, we have that
$$\det(I+\mathcal{K}) = D(-1) = \prod_{n\in\mathbb{N}}(1+\lambda_n^{-1}) = \prod_{n\in\mathbb{N}}(1+\mu_n).$$

Next we show the first equality. On the one hand, we know that
$$\frac{D'(z)}{D(z)} = \sum_{j\in\mathbb{N}}\frac{1}{z-\lambda_j},$$
which in turn yields
$$D'(0) = D(0)\sum_{j\in\mathbb{N}}\left(-\frac{1}{\lambda_j}\right) = -\sum_{j\in\mathbb{N}}\mu_j.$$
On the other hand, by the power series expansion of $D$ we have that
$$D'(\lambda) = \sum_{q\in\mathbb{N}_0} (-1)^{q+1}\frac{\lambda^q}{q!}\int_{[0,1]^{q+1}} K[x]\,dx,$$
and hence it follows that
$$D'(0) = -\int_0^1 K(s,s)\,ds.$$
Combining the above two facts, we obtain the desired result.
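Both identities of the theorem can be confirmed numerically on a simple example chosen for illustration: the rank-one kernel $K(s,t) = st$ on $[0,1]$ has the single nonzero eigenvalue $\mu_1 = \int_0^1 t^2\,dt = 1/3$ (eigenfunction $u(t) = t$), so the trace formula gives $\int_0^1 s^2\,ds = 1/3$ and the determinant is $1 + 1/3 = 4/3$. A Gauss–Legendre Nyström discretization reproduces both values:

```python
import numpy as np

# Nystrom discretization of the rank-one kernel K(s,t) = s*t on [0,1]:
# A_ij = K(s_i, s_j) * w_j, with Gauss-Legendre nodes/weights mapped to [0,1].
n = 40
x, w = np.polynomial.legendre.leggauss(n)
s = 0.5 * (x + 1.0)
w = 0.5 * w

A = np.outer(s, s) * w
mu = np.linalg.eigvals(A)          # one eigenvalue near 1/3, the rest near 0

# Trace formula: sum of eigenvalues = quadrature of the diagonal = 1/3.
assert abs(mu.real.sum() - 1.0 / 3.0) < 1e-10
# Determinant identity: prod(1 + mu_n) = 1 + 1/3 = 4/3.
assert abs(np.prod(1.0 + mu).real - 4.0 / 3.0) < 1e-10
```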



This completes our brief discussion of the classical theory of Fredholm integral equations. We next turn our attention to further essential background material on integral equations, and postpone until Chapter 4 the main subject of this book, namely multiscale methods for the numerical solution of Fredholm integral equations.

1.7 Bibliographical remarks

Most of the material presented in this chapter is taken from the book [183]. For the important notions of the Fredholm function and the Fredholm determinant, readers are referred to [183, 253]. Readers may find a discussion of the distribution of the eigenvalues of the Fredholm integral operator in [141]. In addition, [249] is a nice reference on the topic of integral equations, in which three fundamental papers on integral equations published in the first decade of the twentieth century by three outstanding mathematicians (Ivar Fredholm (1903), David Hilbert (1904) and Erhard Schmidt (1907)) are translated into English with commentaries.


2 Fredholm equations and projection theory

In this chapter we provide concepts and results useful for the study of the Fredholm integral equation of the second kind, especially the theory of projection methods needed for the development of the multiscale methods.

2.1 Fredholm integral equations

The main concern of this book is the Fredholm integral equation of the second kind. In this section we review concepts relevant to the study of this class of integral equations. The general form of a Fredholm integral equation of the second kind is
$$u - \mathcal{K}u = f, \qquad (2.1)$$
where the linear operator $\mathcal{K}$ is defined on a normed linear space with values in another such space, the function $f$ is given and $u$ is a solution to be determined. Typically, these spaces consist of real or complex-valued functions on a measurable subset $\Omega$ of the $d$-dimensional Euclidean space $\mathbb{R}^d$. The function $\mathcal{K}u$ is determined by a kernel $K(s,t)$, $s,t\in\Omega$, and the Fredholm integral operator is defined by the formula
$$(\mathcal{K}u)(s) = \int_\Omega K(s,t)u(t)\,dt, \quad s\in\Omega, \qquad (2.2)$$
or, in short, by
$$(\mathcal{K}u)(s) = \int_\Omega K(s,\cdot)u(\cdot), \quad s\in\Omega.$$

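In computation, the integral defining the Fredholm operator above is typically replaced by a quadrature rule. A minimal sketch, with the illustrative choices $\Omega = [0,1]$, $K(s,t) = st$ and $u \equiv 1$, for which $(\mathcal{K}u)(s) = s/2$ exactly:

```python
import numpy as np

# Quadrature approximation of (K u)(s) = int_0^1 K(s,t) u(t) dt.
# Sample choices: K(s,t) = s*t and u = 1, so (K u)(s) = s/2 exactly.
K = lambda s, t: s * t
u = lambda t: np.ones_like(t)

n = 20
x, w = np.polynomial.legendre.leggauss(n)
t = 0.5 * (x + 1.0)                 # Gauss-Legendre nodes mapped to [0,1]
w = 0.5 * w

def apply_K(s):
    return np.sum(K(s, t) * u(t) * w)

for s in (0.0, 0.25, 1.0):
    assert abs(apply_K(s) - s / 2.0) < 1e-13
```

Discretizing in this way at a full set of nodes is the starting point of the Nyström and projection methods discussed later in the book.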



Properties of the operator $\mathcal{K}$ are inherited from those of the kernel $K$. We shall restrict ourselves to kernels for which $\mathcal{K}$ is a compact operator, that is, it maps bounded subsets to relatively compact ones. Let us postpone the discussion of operators of this type until we describe our preferred notation and terminology.

For $d\in\mathbb{N}$, let $\mathbb{R}^d$ be the $d$-dimensional Euclidean space and $\Omega$ a domain (open set) in $\mathbb{R}^d$. By $C^m(\Omega)$, $m\in\mathbb{N}_0$, we mean the linear space of all real-valued functions defined on $\Omega$ which are $m$-times continuously differentiable there; that is, all derivatives up to and including all those of total order $m$ are continuous on $\Omega$. Therefore, the space of infinitely differentiable functions on $\Omega$ is given by $C^\infty(\Omega) = \bigcap_{m\in\mathbb{N}_0} C^m(\Omega)$. We use $\bar\Omega$ for the closure of $\Omega$, and denote by $C^m(\bar\Omega)$ the subspace of all functions which, together with all their derivatives up to order $m$, are bounded and uniformly continuous on the closure of $\Omega$. The simplified notational conventions $C(\Omega)$ and $C(\bar\Omega)$ for the spaces $C^0(\Omega)$ and $C^0(\bar\Omega)$, respectively, will be used throughout the book. For a lattice vector $\alpha\in\mathbb{N}_0^d$ with non-negative coordinates we use the notation $|\alpha| = \sum_{i\in\mathbb{N}_d}\alpha_i$. Corresponding to any vector $t = [t_j : j\in\mathbb{N}_d]\in\mathbb{R}^d$, we denote the $|\alpha|$-derivative of a function $u$ at $t$ (when it exists) by
$$D^\alpha u(t) = \frac{\partial^{|\alpha|} u(t)}{\partial t_1^{\alpha_1}\cdots\partial t_d^{\alpha_d}}.$$
When the set $\bar\Omega$ is compact, the linear space $C^m(\bar\Omega)$, $m\in\mathbb{N}_0$, is a Banach space with the norm
$$\|u\|_{m,\infty} = \max\{|D^\alpha u(t)| : t\in\bar\Omega,\ \alpha\in\mathbb{Z}_{m+1}^d,\ |\alpha|\le m\},$$
and for $m=0$ we simply denote it by $\|u\|_\infty$.

For a Lebesgue measurable subset $\Omega$ of $\mathbb{R}^d$, the linear space of all real-valued functions defined on $\Omega$ whose $p$th powers, $1\le p<\infty$, are integrable is denoted by $L^p(\Omega)$. Unless stated otherwise, all integrals are taken in the sense of Lebesgue integration. Likewise, we use $L^\infty(\Omega)$ for the linear space of all real-valued essentially bounded (that is, bounded except on a set of measure zero) measurable functions. Moreover, $L^p(\Omega)$ is a Banach space with the norm
$$\|u\|_p = \begin{cases} \left(\int_\Omega |u(t)|^p\,dt\right)^{1/p}, & 1\le p<\infty,\\ \inf\{\sup\{|u(t)| : t\in\Omega\setminus E\} : \operatorname{meas}(E)=0\}, & p=\infty.\end{cases}$$
In the special case $p=2$, $L^2(\Omega)$ is a Hilbert space equipped with the inner product
$$(u,v) = \int_\Omega u(t)v(t)\,dt, \quad u,v\in L^2(\Omega).$$



We remark that the integral of an integrable function $f$ over a measurable set $\Omega\subseteq\mathbb{R}^d$ with respect to the Lebesgue measure will be denoted by $\int_\Omega f(t)\,dt$, or in short by $\int_\Omega f$ when there is no independent variable indicated and no confusion can be expected. Additional details about these spaces can be found in standard texts, for example [1, 183, 236, 276].

2.1.1 Compact integral operators

Compact operators play a central role in the theory of integral equations and hence are frequently discussed in that context. We review here some of their important properties, to which we shall refer from time to time in the text. Readers are referred to [10, 15, 47, 177, 183, 203, 236] for additional details on the material reviewed here, and to the Appendix of this book for basic elements of functional analysis.

We use the symbol $\mathbf{B}(X,Y)$ for the normed linear space of all bounded linear operators from a normed linear space $X$ into a normed linear space $Y$, with operator norm $\|A\| = \sup\{\|Ax\| : x\in X,\ \|x\|\le 1\}$. When $Y$ is a Banach space, then so is $\mathbf{B}(X,Y)$, and in the case that $X = Y$ we denote it simply by $\mathbf{B}(X)$. Convergence of a sequence of operators $A_n$, $n\in\mathbb{N}$, in $\mathbf{B}(X,Y)$ to an operator $A\in\mathbf{B}(X,Y)$ relative to the operator norm is said to be uniform convergence and will be denoted by $A_n \xrightarrow{u} A$. This notion of convergence differs from the weaker requirement that for all $x\in X$ we have that $\lim_{n\to\infty}A_n x = Ax$; this is called pointwise convergence and is denoted by $A_n \xrightarrow{s} A$.

The normed linear space $\mathbf{B}(X,\mathbb{R})$ is called the dual space of $X$ and is denoted by $X^*$. The dual space of a normed linear space is always a Banach space and consists of all continuous linear functionals on $X$. For every $x\in X$ and $\ell\in X^*$ we use the familiar bracket notation $\langle \ell, x\rangle$ (or $\langle x, \ell\rangle$) $= \ell(x)$, and associated with any operator $A\in\mathbf{B}(X,Y)$, its adjoint operator $A^*\in\mathbf{B}(Y^*,X^*)$ is defined for all $x\in X$ and $\ell\in Y^*$ by
$$\langle \ell, Ax\rangle = \langle A^*\ell, x\rangle.$$
An operator and its adjoint have the same norm, that is, $\|A\| = \|A^*\|$. When $X$ is a Hilbert space, the Riesz representation theorem (see Theorem A.31 in the Appendix) can be used to identify $X$ with its dual space. In that case the adjoint of any $A\in\mathbf{B}(X)$ is likewise in $\mathbf{B}(X)$, and $A$ is said to be self-adjoint whenever $A = A^*$.

Definition 2.1 A linear operator $\mathcal{K}$ from a normed linear space $X$ to a normed linear space $Y$ is said to be compact if it maps each bounded set in $X$ to a relatively compact set in $Y$.



It follows from the definition of a compact operator that $A\in\mathbf{B}(X,Y)$ is compact if there is a sequence of compact operators $A_n$, $n\in\mathbb{N}$, in $\mathbf{B}(X,Y)$ which converges uniformly to it, that is, $A_n \xrightarrow{u} A$.

We now describe some examples of compact operators which are relevant. To do this, we first recall the Arzelà–Ascoli theorem, which states that when $\Omega$ is compact, a set $S$ is relatively compact in $C(\Omega)$ if and only if $S$ is uniformly bounded and equicontinuous (for more details see Theorem A.43 in the Appendix). This classical result leads us to our first example.

1. The Fredholm operator defined by a continuous kernel

Proposition 2.2 If $\Omega\subseteq\mathbb{R}^d$ is a compact set and $K\in C(\Omega\times\Omega)$, then the corresponding integral operator $\mathcal{K}$ given in (2.2) is a compact operator in $\mathbf{B}(C(\Omega))$.

Proof First note, by the Lebesgue dominated convergence theorem (cf. [236]), that $\mathcal{K}\in\mathbf{B}(C(\Omega))$, since its operator norm satisfies the inequality
$$\|\mathcal{K}\| \le \max\left\{\int_\Omega |K(s,t)|\,dt : s\in\Omega\right\}.$$
Next, let $S\subseteq C(\Omega)$ be a bounded subset. Choose a constant $c>0$ such that for any $v\in S$ we have that $\|v\|_\infty \le c$. Therefore, for any $v\in S$ and $s, s_1, s_2\in\Omega$, we conclude that
$$|(\mathcal{K}v)(s)| \le c\,\operatorname{meas}(\Omega)\max\{|K(s,t)| : (s,t)\in\Omega\times\Omega\}$$
and
$$|(\mathcal{K}v)(s_1) - (\mathcal{K}v)(s_2)| \le c\int_\Omega |K(s_1,t)-K(s_2,t)|\,dt \le c\,\operatorname{meas}(\Omega)\max\{|K(s_1,t)-K(s_2,t)| : t\in\Omega\},$$
where $\operatorname{meas}(\Omega)$ denotes the Lebesgue measure of the domain $\Omega$. Since the kernel $K$ is bounded and uniformly continuous on $\Omega\times\Omega$, the right-hand side of the first inequality is finite, and that of the second inequality can be made as small as desired provided that $|s_1-s_2|$ is small enough. Thus the image $\mathcal{K}(S)$ of the set $S$ under $\mathcal{K}$ is uniformly bounded and equicontinuous, and an appeal to the Arzelà–Ascoli theorem completes the proof.
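The norm bound at the start of the proof can be probed numerically. A sketch with the arbitrarily chosen continuous kernel $K(s,t) = \cos(s+t)$ on $[0,1]$, discretizing the integrals by the midpoint rule:

```python
import numpy as np

# Check that ||K v||_inf <= max_s int_0^1 |K(s,t)| dt whenever ||v||_inf <= 1,
# for the arbitrary continuous kernel K(s,t) = cos(s + t) on [0,1].
rng = np.random.default_rng(0)
t = (np.arange(2000) + 0.5) / 2000            # midpoint-rule nodes
s_grid = np.linspace(0.0, 1.0, 201)
Kmat = np.cos(s_grid[:, None] + t[None, :])

bound = np.max(np.abs(Kmat).mean(axis=1))     # approximates max_s int |K(s,t)| dt

for _ in range(20):
    v = rng.uniform(-1.0, 1.0, size=t.size)   # a function with ||v||_inf <= 1
    Kv = (Kmat * v).mean(axis=1)              # midpoint-rule values of (K v)(s)
    assert np.max(np.abs(Kv)) <= bound + 1e-12
```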

2. The Fredholm operator defined by a Schmidt kernel

Recall that a kernel $K$ is called a Schmidt kernel provided that $K\in L^2(\Omega\times\Omega)$.

Proposition 2.3 If $\Omega\subseteq\mathbb{R}^d$ is a measurable set and $K$ is a Schmidt kernel, then the integral operator $\mathcal{K}$ defined by (2.2) is a compact operator in $\mathbf{B}(L^2(\Omega))$.



Proof We first show that the linear operator $\mathcal{K}$ is in $\mathbf{B}(L^2(\Omega))$. Indeed, for any $v\in L^2(\Omega)$ and any compact subset $\Omega_0$ of $\Omega$, by the Fubini theorem (Theorem A.14) and the Cauchy–Schwarz inequality (Section A.1.3) we have that
$$\int_{\Omega_0} |(\mathcal{K}v)(s)|\,ds \le \int_{\Omega_0}\int_\Omega |K(s,t)v(t)|\,ds\,dt \le (\operatorname{meas}(\Omega_0))^{1/2}\left(\int_{\Omega_0}\int_\Omega |K(s,t)|^2\,ds\,dt\right)^{1/2}\|v\|_{L^2(\Omega)}.$$
Therefore, again by the Fubini theorem, we obtain that $(\mathcal{K}v)(s)$ exists almost everywhere for $s\in\Omega_0$ and is measurable; hence it is so on the entire set $\Omega$. Once again using the Fubini theorem and the Cauchy–Schwarz inequality, we obtain that
$$\|\mathcal{K}v\|_{L^2(\Omega)} = \left(\int_\Omega\left|\int_\Omega K(s,t)v(t)\,dt\right|^2 ds\right)^{1/2} \le \left(\int_\Omega\int_\Omega |K(s,t)|^2\,ds\,dt\right)^{1/2}\left(\int_\Omega |v(t)|^2\,dt\right)^{1/2} = \|K\|_{L^2(\Omega\times\Omega)}\|v\|_{L^2(\Omega)}, \qquad (2.3)$$
which proves that $\mathcal{K}\in\mathbf{B}(L^2(\Omega))$.

We next show that $\mathcal{K}$ is actually a compact operator. First we consider the case that $K$ is a degenerate kernel; that is, there exist $K_{1j}, K_{2j}\in L^2(\Omega)$, $j\in\mathbb{N}_n$, such that the kernel $K$ for any $s,t\in\Omega$ has the form
$$K(s,t) = \sum_{j\in\mathbb{N}_n} K_{1j}(s)K_{2j}(t).$$
In this case we have for $v\in L^2(\Omega)$ that
$$\mathcal{K}v = \sum_{j\in\mathbb{N}_n} K_{1j}\int_\Omega K_{2j}(t)v(t)\,dt,$$
which implies that the range of $\mathcal{K}$ is finite-dimensional, and so $\mathcal{K}$ is indeed a compact operator. Next we show that for any $K\in L^2(\Omega\times\Omega)$ there is always a sequence of degenerate kernels $K_j$, $j\in\mathbb{N}$, in $L^2(\Omega\times\Omega)$ such that
$$\lim_{j\to\infty}\|K_j - K\|_{L^2(\Omega\times\Omega)} = 0. \qquad (2.4)$$

From this fact it will follow that $\mathcal{K}$ is a compact operator. Indeed, let $\mathcal{K}_j$ be the integral operator corresponding to the kernel $K_j$; thus $\mathcal{K}_j$ is compact. From inequality (2.3) we obtain that
$$\|\mathcal{K}_j - \mathcal{K}\| \le \|K_j - K\|_{L^2(\Omega\times\Omega)},$$
and consequently we conclude that $\mathcal{K}_j \xrightarrow{u} \mathcal{K}$. Therefore, by Proposition A.47, part (v), we see that $\mathcal{K}$ is compact.

Now, the existence of a sequence $K_j$, $j\in\mathbb{N}$, of degenerate kernels in $L^2(\Omega\times\Omega)$ which satisfy (2.4) follows from Theorem A.31 and Corollary A.30. Specifically, we use the Fubini theorem and conclude that the only function $h\in L^2(\Omega\times\Omega)$ with the property that
$$\int_\Omega\int_\Omega f(s)g(t)h(s,t)\,ds\,dt = 0$$
for all $f,g\in L^2(\Omega)$ is the zero function. Therefore the set of degenerate kernels forms a dense subset of $L^2(\Omega\times\Omega)$.

We remark that a constructive approximation argument can also be used to establish the existence of a sequence of degenerate kernels $K_j$, $j\in\mathbb{N}$, which satisfy (2.4) when $\Omega$ is compact. For example, first we approximate the Schmidt kernel $K$ by a kernel in $C(\Omega\times\Omega)$ and then approximate the continuous kernel uniformly on $\Omega\times\Omega$ by bivariate polynomials. In particular, when $K\in C(\Omega\times\Omega)$ the degenerate kernels $K_j$, $j\in\mathbb{N}$, can also be chosen in $C(\Omega\times\Omega)$ so that
$$\lim_{j\to\infty}\|K_j - K\|_{C(\Omega\times\Omega)} = 0,$$
thereby giving an alternative proof of Proposition 2.3.
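A degenerate-kernel approximation satisfying (2.4) can also be produced numerically with a truncated singular value decomposition of the sampled kernel. A sketch for the arbitrarily chosen Schmidt kernel $K(s,t) = e^{st}$ on $[0,1]^2$:

```python
import numpy as np

# A rank-r truncated SVD of the sampled kernel K(s,t) = exp(s*t) is a
# degenerate kernel sum_j K1j(s) K2j(t); its L2 distance to K decreases
# rapidly as the rank r grows.
n = 200
t = (np.arange(n) + 0.5) / n                  # midpoint nodes, weight 1/n each
Kmat = np.exp(np.outer(t, t))

U, sv, Vt = np.linalg.svd(Kmat)

def l2_error(rank):
    Kr = (U[:, :rank] * sv[:rank]) @ Vt[:rank, :]   # degenerate kernel of rank r
    return np.sqrt(np.mean((Kmat - Kr) ** 2))       # approximates ||K - K_r||_L2

errs = [l2_error(r) for r in (1, 2, 4, 8, 12)]
assert all(errs[i + 1] < errs[i] for i in range(len(errs) - 1))
assert errs[-1] < 1e-8
```

The rapid decay reflects the analyticity of this particular kernel; a general Schmidt kernel is only guaranteed convergence, not a rate.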

2.1.2 Weakly singular integral operators

We now turn our attention to the class of weakly singular integral operators, which are among the most important compact integral operators.

Definition 2.4 Let $\Omega$ be a bounded and measurable subset of $\mathbb{R}^d$. If there exists a bounded and measurable function $M$ defined on $\Omega\times\Omega$ such that for $s,t\in\Omega$ with $s\neq t$,
$$K(s,t) = M(s,t)\log|s-t| \qquad (2.5)$$
or
$$K(s,t) = \frac{M(s,t)}{|s-t|^\sigma}, \qquad (2.6)$$
where $\sigma$ is a constant in the interval $(0,d)$ and $|s-t|$ is the Euclidean distance between $s,t\in\Omega$, then the function $K$ is called a kernel with a weak singularity, and the operator $\mathcal{K}$ defined by (2.2) is called a weakly singular integral operator. The case in which the kernel $K$ has a logarithmic singularity (2.5) is sometimes referred to merely by saying that $\sigma = 0$.



We introduce several constants which are convenient for our discussion of weakly singular integral operators, namely
$$c_\sigma(\Omega) = \sup\left\{\int_\Omega \frac{dt}{|s-t|^\sigma} : s\in\Omega\right\},\quad m(\Omega) = \sup\{|M(s,t)| : s,t\in\Omega\}$$
and
$$\operatorname{diam}(\Omega) = \sup\{|s-t| : s,t\in\Omega\}.$$
Also, we let $S_d$ be the unit sphere in $\mathbb{R}^d$ and recall that
$$\operatorname{vol}(S_d) = \frac{d\pi^{d/2}}{\Gamma(d/2+1)},$$
where $\Gamma$ is the gamma function. In the next lemma we estimate an upper bound for $c_\sigma(\Omega)$.

Lemma 2.5 If $\Omega\subseteq\mathbb{R}^d$ is a bounded and measurable set and $\sigma\in[0,d)$, then
$$c_\sigma(\Omega) \le \frac{\operatorname{vol}(S_d)\operatorname{diam}(\Omega)^{d-\sigma}}{d-\sigma}. \qquad (2.7)$$

Proof Fix a choice of $s\in\Omega$. We use spherical coordinates with center at $s$ to estimate the integral in the definition of $c_\sigma(\Omega)$. Specifically, for any $t\in\Omega$ we have that $dt = r^{d-1}\,dr\,d\omega_d$, where $r\in[0,\operatorname{diam}(\Omega)]$ and $\omega_d$ is the Lebesgue measure on the unit sphere $S_d$. Consequently, we obtain the estimate
$$\int_\Omega \frac{dt}{|s-t|^\sigma} \le \int_{S_d}\left(\int_0^{\operatorname{diam}(\Omega)} r^{d-1-\sigma}\,dr\right)d\omega_d = \frac{\operatorname{vol}(S_d)\operatorname{diam}(\Omega)^{d-\sigma}}{d-\sigma}. \qquad (2.8)$$
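In dimension $d = 1$ with $\Omega = [0,1]$ the integral in the definition of $c_\sigma(\Omega)$ has a closed form, $\int_0^1 |s-t|^{-\sigma}\,dt = \big(s^{1-\sigma} + (1-s)^{1-\sigma}\big)/(1-\sigma)$, so the bound of the lemma (here $\operatorname{vol}(S_1) = 2$ and $\operatorname{diam}(\Omega) = 1$, giving $2/(1-\sigma)$) can be checked directly:

```python
import numpy as np

# d = 1, Omega = [0,1]: the supremum over s of the closed-form integral is
# attained at s = 1/2 (the integrand sum is concave in s) and equals
# 2^sigma / (1 - sigma), which lies below the Lemma 2.5 bound 2/(1 - sigma).
for sigma in (0.2, 0.5, 0.8):
    s = np.linspace(0.0, 1.0, 1001)
    exact = (s ** (1.0 - sigma) + (1.0 - s) ** (1.0 - sigma)) / (1.0 - sigma)
    c_sigma = exact.max()
    assert c_sigma <= 2.0 / (1.0 - sigma) + 1e-12
    assert abs(c_sigma - 2.0 ** sigma / (1.0 - sigma)) < 1e-9
```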

Note that from inequality (2.8) it follows that every weakly singular kernel $K$ is in $L^1(\Omega\times\Omega)$, and when $\sigma\in[0,d/2)$, $K$ is likewise a Schmidt kernel. We consider the general weakly singular integral operator in the next result.

Proposition 2.6 The integral operator $\mathcal{K}$ defined by (2.2) with a weakly singular kernel (2.6) is in $\mathbf{B}(L^2(\Omega))$, with norm satisfying the inequality $\|\mathcal{K}\| \le m(\Omega)c_\sigma(\Omega)$.



Proof We first observe, by the Fubini theorem, for any $u\in L^2(\Omega)$, that
$$\int_\Omega\int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\,ds\,dt = \int_\Omega u^2(t)\left(\int_\Omega \frac{ds}{|s-t|^\sigma}\right)dt \le c_\sigma(\Omega)\|u\|^2_{L^2(\Omega)}. \qquad (2.9)$$
Therefore the function $v$ defined for all $s\in\Omega$ as
$$v(s) = \int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\,dt$$
exists at almost every $s\in\Omega$ and is integrable. Next we point out, for each $s,t\in\Omega$, that
$$|K(s,t)u(t)| \le m(\Omega)\,\frac{1}{|s-t|^{\sigma/2}}\,\frac{|u(t)|}{|s-t|^{\sigma/2}} \le \frac{m(\Omega)}{2}\,\frac{1}{|s-t|^\sigma} + \frac{m(\Omega)}{2}\,\frac{u^2(t)}{|s-t|^\sigma}.$$
For almost every $s\in\Omega$, both terms on the right-hand side of this inequality are integrable with respect to $t\in\Omega$, and so we conclude that $\int_\Omega |K(s,t)u(t)|\,dt$ is finite for almost every $s\in\Omega$. Moreover, by the Cauchy–Schwarz inequality we have that
$$[(\mathcal{K}u)(s)]^2 = \left[\int_\Omega K(s,t)u(t)\,dt\right]^2 \le m^2(\Omega)\int_\Omega \frac{dt}{|s-t|^\sigma}\int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\,dt \le m^2(\Omega)c_\sigma(\Omega)\int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\,dt,$$
which implies that $\mathcal{K}u\in L^2(\Omega)$ is square-integrable; that is, $\mathcal{K}$ is defined on $L^2(\Omega)$ and maps $L^2(\Omega)$ to $L^2(\Omega)$. Moreover, integrating both sides of the last inequality with respect to $s\in\Omega$ and employing estimate (2.9) yields the inequality
$$\|\mathcal{K}u\|^2_{L^2(\Omega)} \le m^2(\Omega)c^2_\sigma(\Omega)\|u\|^2_{L^2(\Omega)},$$
which completes the proof.

We next establish the compactness of a weakly singular integral operator on $L^2(\Omega)$.

Theorem 2.7 The integral operator $\mathcal{K}$ with a weakly singular kernel (2.6) is a compact operator in $\mathbf{B}(L^2(\Omega))$.

Proof For $\varepsilon > 0$, let $\mathcal{K}_\varepsilon$ and $\mathcal{K}'_\varepsilon$ be the integral operators whose kernels $K_\varepsilon$ and $K'_\varepsilon$ are defined, respectively, for $s,t\in\Omega$ by the equations
$$K_\varepsilon(s,t) = \begin{cases} K(s,t), & |s-t|\ge\varepsilon,\\ 0, & |s-t|<\varepsilon \end{cases}$$
and
$$K'_\varepsilon(s,t) = \begin{cases} 0, & |s-t|\ge\varepsilon,\\ K(s,t), & |s-t|<\varepsilon. \end{cases}$$
These kernels were chosen to provide the decomposition
$$\mathcal{K} = \mathcal{K}_\varepsilon + \mathcal{K}'_\varepsilon. \qquad (2.10)$$
Since $|K_\varepsilon(s,t)| \le m(\Omega)\varepsilon^{-\sigma}$ for $s,t\in\Omega$ and $\Omega$ is bounded, it follows that $K_\varepsilon\in L^2(\Omega\times\Omega)$. Consequently, we conclude from Proposition 2.3 that $\mathcal{K}_\varepsilon$ is a compact operator in $\mathbf{B}(L^2(\Omega))$. Moreover, setting $\Omega_{s,\varepsilon} = \{t : t\in\Omega,\ |s-t|<\varepsilon\}$ for each $s\in\Omega$, the Cauchy–Schwarz inequality yields the inequality
$$|(\mathcal{K}'_\varepsilon u)(s)| = \left|\int_{\Omega_{s,\varepsilon}} \frac{M(s,t)}{|s-t|^\sigma}u(t)\,dt\right| \le m(\Omega)\left(\int_{\Omega_{s,\varepsilon}} \frac{dt}{|s-t|^\sigma}\right)^{1/2}\left(\int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\,dt\right)^{1/2}.$$
We bound the first integral on the right-hand side of this inequality by the method of proof used for Lemma 2.5, and then integrate both sides of the resulting inequality over $s\in\Omega$ to obtain the inequality
$$\|\mathcal{K}'_\varepsilon u\|_{L^2(\Omega)} \le \left[\frac{m^2(\Omega)\operatorname{vol}(S_d)(2\varepsilon)^{d-\sigma}}{d-\sigma}\int_\Omega\int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\,ds\,dt\right]^{1/2}.$$
This inequality combined with (2.9) and Lemma 2.5 leads to the estimate
$$\|\mathcal{K}'_\varepsilon u\|_{L^2(\Omega)} \le \frac{m(\Omega)\operatorname{vol}(S_d)[2\varepsilon\operatorname{diam}(\Omega)]^{(d-\sigma)/2}}{d-\sigma}\|u\|_{L^2(\Omega)}.$$
In other words, we have proved that $\lim_{\varepsilon\to 0}\|\mathcal{K}'_\varepsilon\| = 0$, and so, being the uniform limit of the compact operators $\mathcal{K}_\varepsilon$, $\varepsilon>0$, the operator $\mathcal{K}$ is compact.

The next result establishes a similar fact, namely that $\mathcal{K}$ is a compact operator in $\mathbf{B}(C(\Omega))$, under a modified hypothesis.

Proposition 2.8 If $\Omega\subseteq\mathbb{R}^d$ is a compact set of positive measure and the function $M$ in (2.6) is in $C(\Omega\times\Omega)$, then the integral operator $\mathcal{K}$ with a weakly singular kernel (2.6) is a compact operator in $\mathbf{B}(C(\Omega))$.

Proof Let $S$ be a bounded subset of $C(\Omega)$. By the Arzelà–Ascoli theorem it suffices to prove that the set $\mathcal{K}(S)$ is uniformly bounded and equicontinuous. We choose a positive constant $c$ which ensures for any $u\in S$ that $\|u\|_\infty\le c$, and so we have for any $u\in S$ that
$$|(\mathcal{K}u)(s)| = \left|\int_\Omega \frac{M(s,t)}{|s-t|^\sigma}u(t)\,dt\right| \le c\,m(\Omega)c_\sigma(\Omega), \quad s\in\Omega.$$
Therefore, using Lemma 2.5, we conclude that the set $\mathcal{K}(S)$ is uniformly bounded.

Next we shall show not only that $\mathcal{K}u\in C(\Omega)$ but also that the set $\mathcal{K}(S)$ is equicontinuous. To this end we choose $\varepsilon>0$ and points $s+h, s\in\Omega$, and obtain the equation
$$(\mathcal{K}u)(s+h) - (\mathcal{K}u)(s) = \int_\Omega\left[\frac{M(s+h,t)}{|s+h-t|^\sigma} - \frac{M(s,t)}{|s-t|^\sigma}\right]u(t)\,dt.$$
Let $B(s,2\varepsilon)$ be the ball with center at $s$ and radius $2\varepsilon$, and set $\Omega(s) = \Omega\setminus B(s,2\varepsilon)$. We have that
$$|(\mathcal{K}u)(s+h) - (\mathcal{K}u)(s)| \le c\,m(\Omega)\left(\int_{\Omega\cap B(s,2\varepsilon)} \frac{dt}{|s+h-t|^\sigma} + \int_{\Omega\cap B(s,2\varepsilon)} \frac{dt}{|s-t|^\sigma}\right) + c\int_{\Omega(s)}\left|\frac{M(s+h,t)}{|s+h-t|^\sigma} - \frac{M(s,t)}{|s-t|^\sigma}\right|dt. \qquad (2.11)$$
For every $s\in\Omega$, it follows from the method used to prove Lemma 2.5 that
$$\int_{B(s,2\varepsilon)} \frac{dt}{|s-t|^\sigma} \le \frac{\operatorname{vol}(S_d)(4\varepsilon)^{d-\sigma}}{d-\sigma}.$$
When $|h| < 2\varepsilon$ we are assured that $B(s,2\varepsilon)\subseteq B(s+h,4\varepsilon)$, and consequently we obtain the next inequality,
$$\int_{B(s,2\varepsilon)} \frac{dt}{|s+h-t|^\sigma} \le \int_{B(s+h,4\varepsilon)} \frac{dt}{|s+h-t|^\sigma} \le \frac{\operatorname{vol}(S_d)(16\varepsilon)^{d-\sigma}}{d-\sigma},$$
where in the last step we again employ the method of proof for Lemma 2.5. These two inequalities demonstrate that the first two quantities on the right-hand side of (2.11) do not exceed
$$\frac{2c\,m(\Omega)\operatorname{vol}(S_d)(16\varepsilon)^{d-\sigma}}{d-\sigma}.$$
We now estimate the third integral appearing on the right-hand side of (2.11). To this end we observe that on the compact set $W = \{(s,t) : s,t\in\Omega,\ |s-t|\ge\varepsilon\}$ the function defined as $H(s,t) = M(s,t)/|s-t|^\sigma$ is uniformly continuous. Hence there exists a $\delta>0$ such that whenever $(s',t'), (s,t)\in W$ with $|s'-s|\le\delta$ and $|t'-t|\le\delta$, we have that
$$\left|\frac{M(s',t')}{|s'-t'|^\sigma} - \frac{M(s,t)}{|s-t|^\sigma}\right| \le \varepsilon.$$
Now assume that $|h|\le\min(\varepsilon,\delta)$ and $t\in\Omega(s)$. Therefore we have that $(s+h,t), (s,t)\in W$, and so the third term on the right-hand side of (2.11) is bounded by $c\,\operatorname{meas}(\Omega)\varepsilon$ for any $s\in\Omega$ and $|h|\le\min(\varepsilon,\delta)$. Therefore not only is the function $\mathcal{K}u$ continuous on $\Omega$, but also the set $\mathcal{K}(S)$ is equicontinuous. An application of the Arzelà–Ascoli theorem establishes that the set $\mathcal{K}(S)$ is relatively compact, and so $\mathcal{K}$ is a compact operator.

2.1.3 Boundary integral equations

Some important boundary value problems of partial differential equations over a prescribed domain can be reformulated as equivalent integral equations over the boundary of the domain. The resulting integral equations on the boundary are called boundary integral equations (BIEs). The superiority of the BIE methodology for solving boundary value problems rests on the fact that the dimension of the domain of the functions appearing in the BIE is one lower than in the original partial differential equation. This means that the computational effort required to solve the partial differential equation can be reduced significantly by using an efficient numerical method to solve the associated BIE. In this subsection we briefly review the BIE reformulation of boundary value problems for the Laplace equation. This material will provide the reader with some concrete integral equations that supplement the general theory described throughout the book.

We begin with a discussion of the Green identities and integral representations of harmonic functions. To this end, we let $\Omega\subseteq\mathbb{R}^d$ be a bounded open domain with piecewise smooth boundary $\partial\Omega$. Throughout our discussion we use the standard notation $\nabla = [\partial/\partial x_l : l\in\mathbb{N}_d]$ for the gradient, and the Laplace operator is defined by $\Delta u = \nabla\cdot\nabla u$. Let $A = [a_{ij} : i,j\in\mathbb{N}_d]$ be a $d\times d$ symmetric matrix with entries in $C^2(\bar\Omega)$, $b = [b_i : i\in\mathbb{N}_d]$ a vector field with coordinates in $C^1(\bar\Omega)$, and $c$ a scalar-valued function in $C(\bar\Omega)$.

The proof of the lemma below is straightforward.

Lemma 2.9 If $u:\mathbb{R}^d\to\mathbb{R}$ and $a:\mathbb{R}^d\to\mathbb{R}^d$ are in $C^1(\bar\Omega)$, then
$$\nabla\cdot(ua) = u\nabla\cdot a + a\cdot\nabla u.$$
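The product rule of the lemma is easy to confirm numerically by central finite differences; a sketch in $d = 2$ with arbitrarily chosen smooth $u$ and $a$:

```python
import numpy as np

# Check div(u a) = u div(a) + a . grad(u) at random points in d = 2,
# with sample choices u(x,y) = sin(x)cos(y), a = (x^2 y, e^x + y).
u  = lambda x, y: np.sin(x) * np.cos(y)
a1 = lambda x, y: x * x * y
a2 = lambda x, y: np.exp(x) + y

def ddx(f, x, y, h=1e-5):
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def ddy(f, x, y, h=1e-5):
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

rng = np.random.default_rng(1)
for _ in range(10):
    x, y = rng.uniform(-1, 1, size=2)
    lhs = (ddx(lambda p, q: u(p, q) * a1(p, q), x, y)
           + ddy(lambda p, q: u(p, q) * a2(p, q), x, y))
    rhs = (u(x, y) * (ddx(a1, x, y) + ddy(a2, x, y))
           + a1(x, y) * ddx(u, x, y) + a2(x, y) * ddy(u, x, y))
    assert abs(lhs - rhs) < 1e-7
```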



For the next lemma we introduce the vector field $P:\mathbb{R}^d\to\mathbb{R}^d$,
$$P = A(v\nabla u - u\nabla v) + buv,$$
the second-order elliptic partial differential operator
$$\mathcal{M}u = \nabla\cdot(A\nabla u) + b\cdot\nabla u + cu$$
and its formal adjoint operator
$$\mathcal{M}^* v = \nabla\cdot(A\nabla v) - \nabla\cdot(bv) + cv.$$

Lemma 2.10 If $u,v:\mathbb{R}^d\to\mathbb{R}$ are in $C^2(\bar\Omega)$, and $A$, $b$, $c$ are as above, then
$$v\mathcal{M}u - u\mathcal{M}^* v = \nabla\cdot P.$$

Proof By direct computation, we have that
$$v\mathcal{M}u - u\mathcal{M}^* v = v\nabla\cdot(A\nabla u) - u\nabla\cdot(A\nabla v) + vb\cdot\nabla u + u\nabla\cdot(bv),$$
while Lemma 2.9, together with the symmetry of $A$ and the definition of $P$, shows that $\nabla\cdot P$ equals the right-hand side of this equation.

The formula appearing in the above lemma is often referred to as the adjoint identity.

Next we write the adjoint identity in an alternative form. For this purpose we introduce the notation
$$Pu = A\nabla u \cdot n \quad \text{and} \quad Qv = A\nabla v \cdot n - vb \cdot n, \qquad (2.12)$$
where $n$ denotes the unit outer normal along $\partial\Omega$. It follows from the Gauss formula and Lemma 2.10 that
$$\int_\Omega (vMu - uM^*v) = \int_{\partial\Omega} (vPu - uQv). \qquad (2.13)$$

We are interested in obtaining a representation for the solution $u$ of the homogeneous elliptic equation $Mu = 0$ in terms of its values on the boundary of the domain. The standard method for doing this employs the fundamental solution of the inhomogeneous problem corresponding to the adjoint operator. We briefly describe this process. Recall that the fundamental solution of the linear partial differential operator $M$ is a function $U$ defined on $\Omega \times \Omega$ such that for each $x \in \Omega$ the solution $u$ of the equation $Mu = f$ is given by the integral representation
$$u(x) = \int_{\mathbb{R}^d} U(x, y)f(y)\,dy. \qquad (2.14)$$


We assume that the fundamental solution of the adjoint operator $M^*$ is available and is denoted by $G^*$. Therefore, we are ensured that the solution $v$ of the adjoint equation $M^*v = f$ is given for each $x \in \Omega$ by
$$v(x) = \int_{\mathbb{R}^d} G^*(x, y)f(y)\,dy. \qquad (2.15)$$

The function $G^*$ leads us to the following basic result.

Proposition 2.11 If $u$ is the solution of the homogeneous equation
$$Mu = 0, \qquad (2.16)$$
then for each $x \in \Omega$,
$$u(x) = \int_{\partial\Omega} \left[u(y)\,(QG^*(x, \cdot))(y) - G^*(x, y)\,Pu(y)\right] dy, \qquad (2.17)$$
where $P$, $Q$ are defined by (2.12) and $G^*$ is the fundamental solution of the operator $M^*$.

Proof It follows from (2.13) that
$$\int_\Omega \left[v(y)(Mu)(y) - u(y)(M^*v)(y)\right] dy = \int_{\partial\Omega} \left[v(y)(Pu)(y) - u(y)(Qv)(y)\right] dy.$$
We choose $v = G^*(x, \cdot)$ in this formula and use the definition of $G^*$ to obtain the desired conclusion.

Note that (2.17) expresses $u$ over $\Omega$ in terms of the boundary values of $u$ and its normal derivative on $\partial\Omega$. Let us specialize this result to the Laplace operator. Thus we choose $M$ to be the Laplace operator and observe that in this case $M = M^* = \Delta$ and $Pu = Qu = \nabla u \cdot n$. Therefore we obtain from (2.13) the following theorem.

Theorem 2.12 (Green theorem) If $u, v \in C^2(\bar\Omega)$, then
$$\int_\Omega (v\Delta u - u\Delta v) = \int_{\partial\Omega} \left(v\frac{\partial u}{\partial n} - u\frac{\partial v}{\partial n}\right). \qquad (2.18)$$

The fundamental solution of the Laplace operator is given by the formula
$$G(x, y) = \begin{cases} -\dfrac{1}{2\pi}\log|x - y|, & d = 2,\\[4pt] -\dfrac{1}{(d-2)\,\omega_{d-1}}\,\dfrac{1}{|x-y|^{d-2}}, & d \ge 3, \end{cases} \qquad (2.19)$$
where $\omega_{d-1}$ denotes the surface area of the unit sphere in $\mathbb{R}^d$ (cf. [7, 177]).

Definition 2.13 If $u \in C^2(\Omega)$ satisfies the Laplace equation $\Delta u = 0$ on $\Omega$, then $u$ is called harmonic on $\Omega$.

It follows from direct computation that $u = G(\cdot, y)$ is harmonic on $\mathbb{R}^d \setminus \{y\}$.


Corollary 2.14 If $u$ is harmonic on $\Omega$, then
$$\int_{\partial\Omega} \frac{\partial u}{\partial n} = 0.$$

Proof This result follows from Theorem 2.12 by choosing $v = 1$.
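As a quick numerical sanity check (our illustration, not part of the book), Corollary 2.14 can be verified for the harmonic function $u(x_1, x_2) = x_1^2 - x_2^2$ on the unit disk, where the normal derivative on the boundary circle is $\partial u/\partial n = 2\cos 2\theta$:

```python
import numpy as np

# Boundary of the unit disk, parametrized by theta; arc length ds = dtheta.
N = 400
theta = 2 * np.pi * np.arange(N) / N

# u(x1, x2) = x1^2 - x2^2 is harmonic; on the unit circle n = (cos t, sin t),
# so du/dn = grad u . n = 2 x1 cos t - 2 x2 sin t = 2 cos(2t).
du_dn = 2 * np.cos(2 * theta)

# The trapezoidal rule on a smooth periodic integrand is spectrally accurate.
integral = du_dn.sum() * (2 * np.pi / N)
print(abs(integral))  # ~0, as Corollary 2.14 predicts
```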

In what follows we review techniques for finding solutions of the Laplace equation under various circumstances. In particular, we discuss the direct method for solving the Dirichlet and Neumann problems in both the interior and the exterior of the domain. Moreover, we shall also review both single- and double-layer representations for the solutions of these problems. All of our remarks pertain to the practically important cases of two and three dimensions. The two-dimensional case will be presented in some detail, while the corresponding three-dimensional results will be stated without detailed explanation, since they follow the pattern of proof of the two-dimensional case. We begin with a proposition which shows that harmonic functions have boundary integral representations.

Proposition 2.15 If $\Omega \subseteq \mathbb{R}^2$ is a bounded open domain with smooth boundary $\partial\Omega$, $\Omega' = \mathbb{R}^2 \setminus \bar\Omega$, and $u$ is a harmonic function on $\Omega$, then
$$u(x) = \frac{1}{2\pi}\int_{\partial\Omega} \left(u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n}\right) dy, \quad x \in \Omega, \qquad (2.20)$$
$$u(x) = \frac{1}{\pi}\int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n}\right] dy, \quad x \in \partial\Omega, \qquad (2.21)$$
$$0 = \int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n}\right] dy, \quad x \in \Omega'. \qquad (2.22)$$

Proof The first equation (2.20) follows from Proposition 2.11. We now turn to the case that $x \in \partial\Omega$. To deal with the singularity at $x = y$ of the integrand on the right-hand side of equation (2.21), we choose a positive $\varepsilon$ and denote by $\Omega_\varepsilon$ the domain obtained from $\Omega$ after removing the small disc $B(x, \varepsilon) = \{y : |x - y| \le \varepsilon\}$. On this punctured domain we can use the Green identity (2.18) to obtain the equation
$$\int_{\Omega_\varepsilon} \left(v(y)\Delta u(y) - u(y)\Delta v(y)\right) dy = \int_{\partial\Omega_\varepsilon} \left(v(y)\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial v(y)}{\partial n}\right) dy. \qquad (2.23)$$
Let $v$ be the fundamental solution (2.19) of the Laplace operator. Both of the functions $u$ and $v$ are harmonic on $\Omega_\varepsilon$. Therefore the above equation can be written as
$$\frac{1}{2\pi}\int_{\partial\Omega_\varepsilon} \left(\log|x-y|\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial}{\partial n}\log|x-y|\right) dy = 0. \qquad (2.24)$$

To evaluate the limit as $\varepsilon \to 0$ of the integral on the left-hand side of the equation above, we split it into a sum of the following two integrals:
$$I_{1,\varepsilon} = \frac{1}{2\pi}\int_{\Gamma_\varepsilon} \left(\log|x-y|\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial}{\partial n}\log|x-y|\right) dy$$
and
$$I_{2,\varepsilon} = \frac{1}{2\pi}\int_{\partial\Omega_\varepsilon \setminus \Gamma_\varepsilon} \left(\log|x-y|\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial}{\partial n}\log|x-y|\right) dy,$$
with $\Gamma_\varepsilon = \partial B(x, \varepsilon) \cap \Omega$. It follows that
$$I_{1,\varepsilon} = \frac{1}{2\pi}\int_{\Gamma_\varepsilon} \left(\log\varepsilon\,\frac{\partial u(y)}{\partial n} + \varepsilon^{-1}u(y)\right) dy, \qquad (2.25)$$
from which we obtain that
$$\lim_{\varepsilon \to 0} I_{1,\varepsilon} = \frac{1}{2}u(x). \qquad (2.26)$$
Moreover, we have that
$$\lim_{\varepsilon \to 0} I_{2,\varepsilon} = \frac{1}{2\pi}\int_{\partial\Omega} \left(\log|x-y|\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial}{\partial n}\log|x-y|\right) dy. \qquad (2.27)$$

Combining equations (2.24)–(2.27) yields (2.21). Finally, we note that when $x \in \Omega'$, both functions $u$ and $v = \frac{1}{2\pi}\log|x - \cdot|$ are harmonic on $\Omega$. Thus (2.22) follows from the Green identity (2.18).

We state the corresponding result for the three-dimensional case. The proof follows the pattern of the proof of Proposition 2.15.

Proposition 2.16 If $\Omega \subseteq \mathbb{R}^3$ is a bounded open domain with smooth boundary $\partial\Omega$, $\Omega' = \mathbb{R}^3 \setminus \bar\Omega$, and $u$ is a harmonic function on $\Omega$, then
$$u(x) = -\frac{1}{4\pi}\int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|} - \frac{1}{|x-y|}\frac{\partial u(y)}{\partial n}\right] dy, \quad x \in \Omega, \qquad (2.28)$$
$$u(x) = -\frac{1}{2\pi}\int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|} - \frac{1}{|x-y|}\frac{\partial u(y)}{\partial n}\right] dy, \quad x \in \partial\Omega, \qquad (2.29)$$
$$0 = \int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|} - \frac{1}{|x-y|}\frac{\partial u(y)}{\partial n}\right] dy, \quad x \in \Omega'. \qquad (2.30)$$


Next we make use of these boundary value formulas for harmonic functions to rewrite several boundary value problems as integral equations. We start with a description of methods for obtaining boundary integral equations of the direct type. Later we turn our attention to the indirect methods of single- and double-layer potentials. First we consider the following interior boundary value problems.

The interior Dirichlet problem Find $u \in C(\bar\Omega) \cap C^2(\Omega)$ such that
$$\begin{cases} \Delta u(x) = 0, & x \in \Omega,\\ u(x) = u_0(x), & x \in \partial\Omega, \end{cases} \qquad (2.31)$$
where $u_0 \in C(\partial\Omega)$ is a given boundary function.

The interior Neumann problem Find $u \in C(\bar\Omega) \cap C^2(\Omega)$ such that
$$\begin{cases} \Delta u(x) = 0, & x \in \Omega,\\ \partial u(x)/\partial n = u_1(x), & x \in \partial\Omega, \end{cases} \qquad (2.32)$$
where $u_1 \in C(\partial\Omega)$ is a given boundary function satisfying $\int_{\partial\Omega} u_1(x)\,dx = 0$.

It is known that both of these problems have unique solutions (see, for example, Chapter 6 of [177]). We now use (2.21) and (2.29) with the boundary condition $u = u_0$ to reformulate the interior Dirichlet problem (2.31) for $d = 2$ as the BIE of the first kind
$$\frac{1}{\pi}\int_{\partial\Omega} \log|x-y|\,\rho(y)\,dy = f(x), \quad x \in \partial\Omega, \qquad (2.33)$$
where
$$\rho = \frac{\partial u}{\partial n} \quad \text{and} \quad f = -u_0 + \frac{1}{\pi}\int_{\partial\Omega} u_0(y)\frac{\partial}{\partial n}\log|\cdot - y|\,dy.$$

For the case $d = 3$ we choose
$$\rho = \frac{\partial u}{\partial n} \quad \text{and} \quad f = u_0 + \frac{1}{2\pi}\int_{\partial\Omega} u_0(y)\frac{\partial}{\partial n}\frac{1}{|\cdot - y|}\,dy$$
and obtain the reformulation of the interior Dirichlet problem (2.31) with $d = 3$ as the BIE of the first kind
$$\frac{1}{2\pi}\int_{\partial\Omega} \frac{1}{|x-y|}\,\rho(y)\,dy = f(x), \quad x \in \partial\Omega. \qquad (2.34)$$

In a similar manner we treat the interior Neumann problem. First, for $d = 2$, we use (2.21) and (2.29) with the boundary condition $\partial u/\partial n = u_1$ and convert the interior Neumann problem (2.32) to the equivalent BIE of the second kind
$$u(x) - \frac{1}{\pi}\int_{\partial\Omega} u(y)\frac{\partial}{\partial n}\log|x-y|\,dy = g(x), \quad x \in \partial\Omega, \qquad (2.35)$$
where
$$g = -\frac{1}{\pi}\int_{\partial\Omega} u_1(y)\log|\cdot - y|\,dy.$$
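As a small check of (2.35) (our own illustration), take $\Omega$ the unit disk and $u(x_1,x_2) = x_1^2 - x_2^2$. On the unit circle the kernel is constant, $\partial/\partial n_y \log|x-y| = 1/2$, and the classical Fourier expansion $\log|x-y| = -\sum_{k\ge1} \cos k(t-s)/k$ gives the right-hand side $g(t) = \cos 2t$ in closed form, so both sides of (2.35) can be compared directly:

```python
import numpy as np

# Check the second-kind BIE (2.35) on the unit circle for u = x1^2 - x2^2,
# where u1 = du/dn = 2 cos(2s) and, analytically, g(t) = cos(2t).
N = 1000
s = 2 * np.pi * np.arange(N) / N
h = 2 * np.pi / N
u_b = np.cos(2 * s)                    # boundary values of u

t = 0.7                                # test point on the boundary (angle)
# Left-hand side of (2.35): the kernel d/dn_y log|x - y| equals 1/2 on the circle.
lhs = np.cos(2 * t) - (1 / np.pi) * np.sum(u_b * 0.5) * h
rhs = np.cos(2 * t)                    # analytic value of g at the test point
print(abs(lhs - rhs))                  # ~0
```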

For the corresponding three-dimensional case $d = 3$ we set
$$g = \frac{1}{2\pi}\int_{\partial\Omega} \frac{u_1(y)}{|\cdot - y|}\,dy$$
and obtain the BIE of the second kind
$$u(x) + \frac{1}{2\pi}\int_{\partial\Omega} u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy = g(x), \quad x \in \partial\Omega. \qquad (2.36)$$

Let us now consider the Dirichlet and Neumann problems in the exterior domain $\Omega' = \mathbb{R}^d \setminus \bar\Omega$ for $d = 2, 3$. Specifically, we reformulate the following two problems as BIEs.

The exterior Dirichlet problem Find $u \in C(\bar\Omega') \cap C^2(\Omega')$ such that
$$\begin{cases} \Delta u(x) = 0, & x \in \Omega',\\ u(x) = u_0(x), & x \in \partial\Omega, \end{cases} \qquad (2.37)$$
where $u_0 \in C(\partial\Omega)$ is a given boundary function.

The exterior Neumann problem Find $u \in C(\bar\Omega') \cap C^2(\Omega')$ such that
$$\begin{cases} \Delta u(x) = 0, & x \in \Omega',\\ \partial u(x)/\partial n' = u_1(x), & x \in \partial\Omega, \end{cases} \qquad (2.38)$$
where $u_1 \in C(\partial\Omega)$ is a given boundary function satisfying $\int_{\partial\Omega} u_1(x)\,dx = 0$ and $n'$ is the outer unit normal to $\partial\Omega\ (= \partial\Omega')$ with respect to $\Omega'$.

It is known (see, for example, [177]) that both problems have unique solutions under the condition that
$$u(x) = O(|x|^{-1}), \quad |\nabla u(x)| = O(|x|^{-2}), \quad |x| \to \infty. \qquad (2.39)$$

Below we state the analogs of Propositions 2.15 and 2.16 for the exterior domain $\Omega'$. We begin with the case $d = 2$.


Proposition 2.17 If $\Omega \subseteq \mathbb{R}^2$ is a bounded open domain with smooth boundary $\partial\Omega$, $\Omega' = \mathbb{R}^2 \setminus \bar\Omega$, and $u$ is a harmonic function on $\Omega'$, then
$$u(x) = \frac{1}{2\pi}\int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n'}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n'}\right] dy, \quad x \in \Omega', \qquad (2.40)$$
$$u(x) = \frac{1}{\pi}\int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n'}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n'}\right] dy, \quad x \in \partial\Omega, \qquad (2.41)$$
$$0 = \int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n'}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n'}\right] dy, \quad x \in \Omega. \qquad (2.42)$$

Proof We only prove (2.40); the other two equations are obtained similarly. Let the ball $B_R = \{x : |x| < R\}$ be chosen such that $\bar\Omega \subset B_R$, and let $\Omega'_R = \Omega' \cap B_R$. Consequently, it follows from (2.20) with $\Omega = \Omega'_R$ that
$$u(x) = \frac{1}{2\pi}\int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n'}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n'}\right] dy + I_R, \quad x \in \Omega'_R, \qquad (2.43)$$
where
$$I_R = \frac{1}{2\pi}\int_{\partial B_R} \left[u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\frac{\partial u(y)}{\partial n}\right] dy, \quad x \in \Omega'_R,$$

and $n$ is the outer unit normal to $\partial B_R$. Using the condition (2.39), we have that there exists a positive constant $c$ such that
$$|I_R| \le \frac{1}{2\pi}\int_{\partial B_R} \left[|u(y)|\left|\frac{\partial}{\partial n}\log|x-y|\right| + \bigl|\log|x-y|\bigr|\left|\frac{\partial u(y)}{\partial n}\right|\right] dy \le c\,\frac{\log R}{R}.$$
Note that the upper bound tends to zero as $R$ tends to infinity. Therefore this estimate combined with (2.43) yields (2.40).

The three-dimensional version of Proposition 2.16 for the exterior domain is described next. The proof is similar to that of Proposition 2.17 and so is omitted.


Proposition 2.18 If $\Omega \subseteq \mathbb{R}^3$ is a bounded open domain with smooth boundary $\partial\Omega$, $\Omega' = \mathbb{R}^3 \setminus \bar\Omega$, and $u$ is a harmonic function on $\Omega'$, then
$$u(x) = -\frac{1}{4\pi}\int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n'}\frac{1}{|x-y|} - \frac{1}{|x-y|}\frac{\partial u(y)}{\partial n'}\right] dy, \quad x \in \Omega', \qquad (2.44)$$
$$u(x) = -\frac{1}{2\pi}\int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n'}\frac{1}{|x-y|} - \frac{1}{|x-y|}\frac{\partial u(y)}{\partial n'}\right] dy, \quad x \in \partial\Omega, \qquad (2.45)$$
$$0 = \int_{\partial\Omega} \left[u(y)\frac{\partial}{\partial n'}\frac{1}{|x-y|} - \frac{1}{|x-y|}\frac{\partial u(y)}{\partial n'}\right] dy, \quad x \in \Omega. \qquad (2.46)$$

We now make use of (2.41) and (2.45) to rewrite the exterior Dirichlet problem (2.37) for $d = 2$ as the BIE
$$\frac{1}{\pi}\int_{\partial\Omega} \log|x-y|\,\rho(y)\,dy = f(x), \quad x \in \partial\Omega, \qquad (2.47)$$
where
$$\rho = \frac{\partial u}{\partial n'} \quad \text{and} \quad f = -u_0 + \frac{1}{\pi}\int_{\partial\Omega} u_0(y)\frac{\partial}{\partial n'}\log|\cdot - y|\,dy,$$
while for $d = 3$ we have the equation
$$\frac{1}{2\pi}\int_{\partial\Omega} \frac{1}{|x-y|}\,\rho(y)\,dy = f(x), \quad x \in \partial\Omega, \qquad (2.48)$$
where
$$\rho = \frac{\partial u}{\partial n'} \quad \text{and} \quad f = u_0 + \frac{1}{2\pi}\int_{\partial\Omega} u_0(y)\frac{\partial}{\partial n'}\frac{1}{|\cdot - y|}\,dy.$$

For the exterior Neumann problem (2.38) the BIE is of the second kind; for $d = 2$ it is explicitly given as
$$u(x) - \frac{1}{\pi}\int_{\partial\Omega} u(y)\frac{\partial}{\partial n'}\log|x-y|\,dy = g(x), \quad x \in \partial\Omega, \qquad (2.49)$$
where
$$g(x) = -\frac{1}{\pi}\int_{\partial\Omega} u_1(y)\log|x-y|\,dy.$$
The case $d = 3$ is covered by the following BIE:
$$u(x) + \frac{1}{2\pi}\int_{\partial\Omega} u(y)\frac{\partial}{\partial n'}\frac{1}{|x-y|}\,dy = g(x), \quad x \in \partial\Omega, \qquad (2.50)$$
where
$$g(x) = \frac{1}{2\pi}\int_{\partial\Omega} \frac{u_1(y)}{|x-y|}\,dy.$$


In the remaining part of this section our goal is to describe BIEs for the Laplace equation of the indirect type. First we consider representing the unknown harmonic function $u$ as a single-layer potential
$$u(x) = \int_{\partial\Omega} \rho(y)G(x, y)\,dy, \quad x \in \mathbb{R}^d \setminus \partial\Omega, \qquad (2.51)$$
and then later as a double-layer potential
$$u(x) = \int_{\partial\Omega} \rho(y)\frac{\partial G(x, y)}{\partial n}\,dy, \quad x \in \mathbb{R}^d \setminus \partial\Omega, \qquad (2.52)$$
where $G$ is the fundamental solution of the Laplace operator and $\rho \in C(\partial\Omega)$ is a function to be determined, depending on the nature of the boundary conditions. We show that the single- and double-layer potentials can be used to solve both the interior and the exterior Dirichlet and Neumann problems.

Let us start with the single-layer method. For the interior or exterior Dirichlet problem, the boundary condition and the continuity of $u$ on $\partial\Omega$ lead to the demand that the function $\rho$ satisfy the first-kind Fredholm integral equation
$$\int_{\partial\Omega} \rho(y)G(x, y)\,dy = u_0(x), \quad x \in \partial\Omega.$$
In particular, for $d = 2$ the solution $u$ of the interior (resp. exterior) Dirichlet problem has the single-layer representation given by the equation
$$u(x) = \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\log|x-y|\,dy, \quad x \in \Omega \ (\text{resp. } \Omega'), \qquad (2.53)$$
with $\rho$ satisfying the requirement that
$$\frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\log|x-y|\,dy = u_0(x), \quad x \in \partial\Omega. \qquad (2.54)$$

In the three-dimensional case we get the equation
$$u(x) = -\frac{1}{4\pi}\int_{\partial\Omega} \frac{\rho(y)}{|x-y|}\,dy, \quad x \in \Omega \ (\text{resp. } \Omega'), \qquad (2.55)$$
where $\rho$ satisfies the equation
$$-\frac{1}{4\pi}\int_{\partial\Omega} \frac{\rho(y)}{|x-y|}\,dy = u_0(x), \quad x \in \partial\Omega. \qquad (2.56)$$

For the two-dimensional interior Neumann problem we consider the equation
$$\int_{\partial\Omega_\varepsilon} \rho(y)\log|x-y|\,dy = \int_{\Gamma_\varepsilon} \rho(y)\log|x-y|\,dy + \int_{\partial\Omega_\varepsilon \setminus \Gamma_\varepsilon} \rho(y)\log|x-y|\,dy,$$
where $\partial\Omega_\varepsilon$ is the boundary of the domain $\Omega_\varepsilon = \Omega \setminus B(x, \varepsilon)$ and $\Gamma_\varepsilon = \partial B(x, \varepsilon) \cap \Omega$. By taking the directional derivative in the normal direction $n$ of both sides of this equation, letting $\varepsilon \to 0$, using the boundary condition $\partial u/\partial n = u_1$ and arguments similar to those used in the proof of Proposition 2.15, we conclude that $u$ is represented as the single-layer potential (2.53) with $\rho$ satisfying the second-kind Fredholm integral equation
$$-\frac{\rho(x)}{2} + \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy = u_1(x), \quad x \in \partial\Omega. \qquad (2.57)$$

In a similar manner, the solution $u$ of the three-dimensional interior Neumann problem is represented as the single-layer potential (2.55) with $\rho$ satisfying the second-kind Fredholm integral equation
$$-\frac{\rho(x)}{2} - \frac{1}{4\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy = u_1(x), \quad x \in \partial\Omega. \qquad (2.58)$$
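As a small numerical sketch (ours, under the assumption that $\Omega$ is the unit disk, where $\partial/\partial n \log|x-y| = 1/2$ on the circle), the pair (2.57) and (2.53) can be exercised for $u_1 = 2\cos 2\theta$: the Nyström system below is solved in the least-squares sense, since the discrete operator annihilates constants there, and the resulting single-layer potential reproduces $u = x_1^2 - x_2^2$ at an interior point:

```python
import numpy as np

# Interior Neumann problem on the unit disk via the single layer (2.53), (2.57).
N = 256
t = 2 * np.pi * np.arange(N) / N
h = 2 * np.pi / N
y = np.stack([np.cos(t), np.sin(t)])            # boundary quadrature nodes
u1 = 2 * np.cos(2 * t)                          # Neumann data for u = x1^2 - x2^2

# Nystrom discretization of (2.57); on the circle the kernel is the constant 1/2.
A = -0.5 * np.eye(N) + (1 / (2 * np.pi)) * 0.5 * h * np.ones((N, N))
# A kills constant vectors, so take the minimum-norm least-squares solution.
rho = np.linalg.lstsq(A, u1, rcond=None)[0]

# Evaluate the single-layer potential (2.53) at an interior point.
x = np.array([0.4, 0.1])
r = np.linalg.norm(y - x[:, None], axis=0)
u_x = (1 / (2 * np.pi)) * np.sum(rho * np.log(r)) * h
print(u_x)   # approximates 0.4^2 - 0.1^2 = 0.15
```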

For the exterior Neumann problem a similar argument leads to the result that the solution $u$ in the two- and three-dimensional cases is represented as (2.53) and (2.55), respectively, with $\rho$ satisfying the second-kind Fredholm integral equations
$$\frac{\rho(x)}{2} + \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy = u_1(x), \quad x \in \partial\Omega, \qquad (2.59)$$
and
$$\frac{\rho(x)}{2} - \frac{1}{4\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy = u_1(x), \quad x \in \partial\Omega. \qquad (2.60)$$

We close this section with a review of double-layer potentials for harmonic functions. We start with the two-dimensional interior Dirichlet problem. Suppose that $u = u^+ \in C(\bar\Omega)$ is a harmonic function in $\Omega$. Let $u^-$ be the solution of the exterior Neumann problem with
$$\frac{\partial u^-(y)}{\partial n} = \frac{\partial u^+(y)}{\partial n}, \quad y \in \partial\Omega,$$
which satisfies (2.39). It follows from Propositions 2.15 and 2.17 that
$$u(x) = \frac{1}{2\pi}\int_{\partial\Omega} \left(u^+(y) - u^-(y)\right)\frac{\partial}{\partial n}\log|x-y|\,dy, \quad x \in \Omega.$$

This equation can be written as a double-layer potential:
$$u(x) = \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy, \quad x \in \Omega, \qquad (2.61)$$


with $\rho = u^+ - u^-$. According to the proof of Proposition 2.15, we have for $\bar x \in \partial\Omega$ that
$$\lim_{x \to \bar x} \int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy = \pi\rho(\bar x) + \int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|\bar x-y|\,dy.$$
This, together with the boundary condition of the Dirichlet problem, shows that $\rho$ satisfies the second-kind Fredholm integral equation
$$\frac{\rho(x)}{2} + \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy = u_0(x), \quad x \in \partial\Omega. \qquad (2.62)$$

Similarly, the solution $u$ of the three-dimensional interior Dirichlet problem is represented as the double-layer potential
$$u(x) = -\frac{1}{4\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy, \quad x \in \Omega,$$
with $\rho$ satisfying the second-kind Fredholm integral equation
$$\frac{\rho(x)}{2} - \frac{1}{4\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy = u_0(x), \quad x \in \partial\Omega.$$

For the exterior Dirichlet problem the corresponding integral equations in the two- and three-dimensional cases are
$$-\frac{\rho(x)}{2} + \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy = u_0(x), \quad x \in \partial\Omega,$$
and
$$-\frac{\rho(x)}{2} - \frac{1}{4\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy = u_0(x), \quad x \in \partial\Omega,$$
respectively.

2.2 General theory of projection methods

The main concern of the present section is the general theory of projection methods for the approximate solution of operator equations of the form
$$Au = f, \qquad (2.63)$$
where $A \in B(X, Y)$, $f \in Y$ are given and $u \in X$ is the solution to be determined. The case of central importance to us takes the form $A = I - K$, where $I$ is the identity operator in $B(X)$ and $K$ is a compact operator in $B(X)$. In this case (2.63) becomes
$$(I - K)u = f, \qquad (2.64)$$
a Fredholm equation of the second kind.


2.2.1 Projection operators

We begin with a description of various projections, an essential tool for the development of approximation schemes for (2.63). The following notation will be used throughout the book. For a linear operator $A$ not defined on all of the linear space $X$, we denote by $D(A)$ its domain and by $N(A) = \{x : x \in X,\ Ax = 0\}$ its null space. For the range of $A$ we use $R(A) = \{Ax : x \in D(A)\}$. Alternatively, we may sometimes write $A(U)$ for the range of $A$, where $U$ is the domain of $A$. We start with the following definition.

Definition 2.19 Let $X$ be a normed linear space and $V$ a closed linear subspace of $X$. A bounded linear operator $P : X \to V$ is called a projection from $X$ onto $V$ if for all $v \in V$,
$$Pv = v. \qquad (2.65)$$

Note that a projection $P : X \to V$ necessarily has the property that $V = R(P)$. For later use we make the following remark.

Proposition 2.20 Let $X$ be a normed linear space and $P \in B(X)$. Then $P$ is a projection on $X$ if and only if $P^2 = P$. Moreover, in this case, if $P \ne 0$ then $\|P\| \ge 1$.

Proof If $P : X \to R(P)$ is a projection, then for all $x \in X$ we have that $P^2x = P(Px) = Px$. Conversely, if $P^2 = P$, then any $v \in R(P)$, written as $v = Px$ for some $x \in X$, satisfies the equation $Pv = P^2x = Px = v$. Finally, it follows from the equation $P^2 = P$ that $\|P\| = \|P^2\| \le \|P\|^2$, which implies that $\|P\| \ge 1$ when $P \ne 0$.
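A finite-dimensional illustration (ours): an oblique projection matrix is idempotent, and its operator norm exceeds 1 precisely because it is not orthogonal:

```python
import numpy as np

# An oblique (non-orthogonal) projection in R^2: project onto span{(1, 0)}
# along span{(1, 1)}, i.e. P[x1, x2] = [x1 - x2, 0].
P = np.array([[1.0, -1.0],
              [0.0,  0.0]])

assert np.allclose(P @ P, P)       # idempotent: P^2 = P

# Operator 2-norm (largest singular value): here sqrt(2) > 1,
# consistent with ||P|| >= 1 from Proposition 2.20.
print(np.linalg.norm(P, 2))        # ~1.41421356
```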

Now we describe the three kinds of projections which are most important for the practical development of approximation schemes to solve operator equations. We have in mind the well-known orthogonal and interpolating projections, and the perhaps less familiar concept of a projection defined by a generalized best approximation.

1 Orthogonal projections
We recall the standard definition of the orthogonal projection on a Hilbert space.

Definition 2.21 Let $X$ be a Hilbert space with inner product $(\cdot, \cdot)$. Two vectors $x, y \in X$ are said to be orthogonal provided that $(x, y) = 0$. If $V$ is a nontrivial closed linear subspace of $X$, then a linear operator $P$ from $X$ onto $V$ is called the orthogonal projection if for all $x \in X$, $y \in V$ it satisfies the equation
$$(Px, y) = (x, y). \qquad (2.66)$$


In other words, the orthogonal projection onto $V$ has the property that $x - Px$ is orthogonal to all $y \in V$. The orthogonal projection satisfies $\|P\| = 1$ and is self-adjoint, that is, $P^* = P$. Moreover, we have the following well-known extremal characterization of the orthogonal projection.

Proposition 2.22 If $X$ is a Hilbert space and $V$ a nontrivial closed linear subspace of $X$, then there exists an orthogonal projection $P$ from $X$ onto $V$, and for all $x \in X$,
$$\|x - Px\| = \min\{\|x - v\| : v \in V\}.$$
Moreover, the last equation uniquely characterizes $Px \in V$.

Proof The existence of $Px$ follows from the completeness of $X$ and the parallelogram law. The remaining claim follows from the definition of the orthogonal projection, which gives for $x \in X$ and $v \in V$ that
$$\|x - v\|^2 = \|x - Px\|^2 + \|Px - v\|^2.$$
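The extremal characterization of Proposition 2.22 is easy to test numerically (our sketch): project a random vector onto a two-dimensional subspace of $\mathbb{R}^5$ and compare against random competitors from the subspace:

```python
import numpy as np

rng = np.random.default_rng(0)

# Orthogonal projection in R^5 onto V = column span of Q (orthonormal columns).
Q, _ = np.linalg.qr(rng.standard_normal((5, 2)))
P = Q @ Q.T                                   # P = P^2 = P^T

x = rng.standard_normal(5)
Px = P @ x

# x - Px is orthogonal to V, and Px is the closest point of V to x.
assert np.allclose(Q.T @ (x - Px), 0)
for _ in range(100):
    v = Q @ rng.standard_normal(2)            # random element of V
    assert np.linalg.norm(x - Px) <= np.linalg.norm(x - v) + 1e-12
print(np.linalg.norm(P, 2))                   # 1.0, as ||P|| = 1 for orthogonal P
```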

2 Interpolating projections
We next introduce the concept of interpolating projections.

Definition 2.23 Let $X$ be a Banach space and $V$ a finite-dimensional subspace of $X$. A subset $\{\ell_j : j \in \mathbb{N}_m\}$ of the dual space $X^*$ is $V$-unisolvent if for any $\{c_j : j \in \mathbb{N}_m\} \subseteq \mathbb{R}$ there exists a unique element $v \in V$ satisfying for all $j \in \mathbb{N}_m$ the equation
$$\ell_j(v) = c_j. \qquad (2.67)$$

To emphasize the pairing between $X$ and $X^*$, the value of a linear functional $\ell \in X^*$ at $x \in X$ will often be denoted by $\langle x, \ell\rangle$; that is, we define $\langle x, \ell\rangle = \ell(x)$. This convenient notation is standard, and the next proposition is elementary. To state it we use the Kronecker symbol $\delta_{ij}$, $i, j \in \mathbb{N}_m$; that is, $\delta_{ij} = 0$ except for $i = j$, in which case $\delta_{ij} = 1$.

Proposition 2.24 If $X$ is a Banach space and $V$ is an $m$-dimensional subspace of $X$, then $\{\ell_j : j \in \mathbb{N}_m\}$ is $V$-unisolvent if and only if there exists a linearly independent set $\{x_j : j \in \mathbb{N}_m\} \subseteq V$ which satisfies for all $i, j \in \mathbb{N}_m$ the equation
$$\ell_j(x_i) = \delta_{ij}. \qquad (2.68)$$

In this case the operator $P : X \to V$ defined for each $x \in X$ and $j \in \mathbb{N}_m$ by
$$\langle Px, \ell_j\rangle = \langle x, \ell_j\rangle$$
is a projection from $X$ onto $V$ and is given by the formula
$$Px = \sum_{j \in \mathbb{N}_m} \ell_j(x)x_j. \qquad (2.69)$$

According to the above result, any $m$-dimensional subspace $V$ of $X$ and any set of linear functionals $\{\ell_j : j \in \mathbb{N}_m\}$ in $X^*$ which is $V$-unisolvent determine the projection (2.69). An important special case occurs when the Banach space $X$ consists of real-valued functions on a compact set $\Omega \subseteq \mathbb{R}^d$. In this case, if there is a subset of points $\{t_j : j \in \mathbb{N}_m\}$ in $\Omega$ such that the linear functionals $\ell_j$, $j \in \mathbb{N}_m$, are defined for each $x \in X$ by the equation
$$\ell_j(x) = x(t_j), \qquad (2.70)$$
that is, $\ell_j$ is the point evaluation functional at $t_j$, then the operator $P : X \to V$ defined by (2.69) is called the Lagrange interpolation. If for some $j \in \mathbb{N}_m$, $\ell_j(x)$ is determined not only by the value of the function $x$ at some point of $\Omega$ but also by derivatives of $x$, then $P$ is called the Hermite interpolation. In the case that $P \in B(C(\Omega))$ is a Lagrange interpolation, its operator norm is given by
$$\|P\| = \max\left\{\sum_{j \in \mathbb{N}_m} |x_j(t)| : t \in \Omega\right\}. \qquad (2.71)$$
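The quantity (2.71) is the Lebesgue constant of the node set. A short experiment (our illustration) estimates it for polynomial Lagrange interpolation by evaluating the basis polynomials on a fine grid; Chebyshev nodes give a markedly smaller norm than equally spaced nodes:

```python
import numpy as np

def lebesgue_constant(nodes, grid):
    """Approximate ||P|| from (2.71): max over the grid of sum_j |l_j(t)|."""
    L = np.ones((len(nodes), len(grid)))
    for j, tj in enumerate(nodes):
        for k, tk in enumerate(nodes):
            if k != j:
                L[j] *= (grid - tk) / (tj - tk)   # Lagrange basis polynomial l_j
    return np.abs(L).sum(axis=0).max()

grid = np.linspace(-1, 1, 5001)
n = 10
equi = np.linspace(-1, 1, n)
cheb = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))

print(lebesgue_constant(equi, grid))   # equispaced: grows rapidly with n
print(lebesgue_constant(cheb, grid))   # Chebyshev: grows only logarithmically
```

For two nodes the norm is exactly 1, in line with $\|P\| \ge 1$ from Proposition 2.20.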

3 Generalized best approximation projections
As our final example we describe the generalized best approximation projections, which were introduced in [77]. Let $X$ be a Banach space and $X^*$ its dual space. For $n \in \mathbb{N}$ we assume that $X_n$ and $Y_n$ are two finite-dimensional subspaces of $X$ and $X^*$, respectively, with the same dimension.

Definition 2.25 For $x \in X$, an element $P_nx \in X_n$ is called a generalized best approximation to $x$ from $X_n$ with respect to $Y_n$ if for all $\ell \in Y_n$ it satisfies the equation
$$\langle x - P_nx, \ell\rangle = 0. \qquad (2.72)$$
Similarly, given $\ell \in X^*$, an element $P'_n\ell \in Y_n$ is called a generalized best approximation to $\ell$ from $Y_n$ with respect to $X_n$ if for all $x \in X_n$ it satisfies the equation
$$\langle x, \ell - P'_n\ell\rangle = 0.$$

Figure 2.1 displays schematically the generalized best approximation projection $P_nx$ of $x \in X$ from $X_n$ with respect to $Y_n$ in a Hilbert space. In this case equation (2.72) means $(x - P_nx) \perp Y_n$.

Figure 2.1 Generalized best approximation projections.

For further explanation we provide the following example. Let $I = [0, 1]$ and $X = L^2(I)$. We subdivide the interval $I$ into $n$ subintervals by the points $t_j = jh$, $j \in \mathbb{N}_{n-1}$, and set $t_{j-\frac12} = (j - \frac12)h$, $j \in \mathbb{N}_n$, where $h = 1/n$. We let $X_n$ be the space of continuous piecewise linear polynomials with knots at $t_j$, $j \in \mathbb{N}_{n-1}$, and $Y_n$ the space of piecewise constant functions with knots at $t_{j-\frac12}$, $j \in \mathbb{N}_n$. Clearly $\dim X_n = \dim Y_n = n + 1$. For $x \in X$ we define the generalized best approximation $P_nx$ to $x$ from $X_n$ with respect to $Y_n$ by the equation
$$\langle x - P_nx, y\rangle = 0 \quad \text{for all } y \in Y_n.$$
Let
$$\varphi_j(t) = \begin{cases} 1 - (t_j - t)/h, & t_{j-1} \le t \le t_j,\\ 1 - (t - t_j)/h, & t_j < t \le t_{j+1},\\ 0, & \text{elsewhere}, \end{cases} \qquad j \in \mathbb{Z}_{n+1},$$
and
$$\psi_j(t) = \begin{cases} 1, & t_{j-\frac12} \le t \le t_{j+\frac12},\\ 0, & \text{elsewhere}, \end{cases} \qquad j \in \mathbb{Z}_{n+1}.$$
The two families $\{\varphi_j : j \in \mathbb{Z}_{n+1}\}$ and $\{\psi_j : j \in \mathbb{Z}_{n+1}\}$ form bases for $X_n$ and $Y_n$, respectively. Thus $P_nx$ can be written in the form $P_nx = \sum_{j \in \mathbb{Z}_{n+1}} c_j\varphi_j$, where the vector $u_n = [c_j : j \in \mathbb{Z}_{n+1}]$ satisfies the linear system
$$A_nu_n = f_n,$$
in which $A_n = [\langle\varphi_j, \psi_i\rangle : i, j \in \mathbb{Z}_{n+1}]$ and $f_n = [\langle x, \psi_i\rangle : i \in \mathbb{Z}_{n+1}]$.

We now present a necessary and sufficient condition under which each $x \in X$ has a unique generalized best approximation from $X_n$ with respect to $Y_n$. In what follows we denote by $X_n^\perp$ the set of all linear functionals in $X^*$ which vanish on the subspace $X_n$, that is, the annihilator of $X_n$ in $X^*$.
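The piecewise linear/piecewise constant example above can be checked numerically (our sketch; quadrature grid and basis evaluations are our own choices). Since $P_n$ is a projection, taking $x(t) = t \in X_n$ must return its own hat-function coefficients, which are the nodal values $t_j$:

```python
import numpy as np

# The example: X_n = hat functions with knots t_j = j*h on I = [0, 1],
# Y_n = indicators of [t_j - h/2, t_j + h/2] intersected with I.
n = 8
h = 1.0 / n
knots = h * np.arange(n + 1)

def phi(j, t):    # hat function centered at t_j (basis of X_n)
    return np.clip(1.0 - np.abs(t - knots[j]) / h, 0.0, None)

def psi(j, t):    # indicator of [t_j - h/2, t_j + h/2] (basis of Y_n)
    return ((t >= knots[j] - h / 2) & (t <= knots[j] + h / 2)).astype(float)

# Assemble A_n = [<phi_j, psi_i>] and f_n = [<x, psi_i>] by a fine Riemann sum.
t = np.linspace(0.0, 1.0, 20001)
dt = t[1] - t[0]
x = t             # x(t) = t lies in X_n, so P_n x = x
A = np.array([[np.sum(psi(i, t) * phi(j, t)) * dt for j in range(n + 1)]
              for i in range(n + 1)])
f = np.array([np.sum(psi(i, t) * x) * dt for i in range(n + 1)])

c = np.linalg.solve(A, f)
print(np.max(np.abs(c - knots)))   # ~0: P_n reproduces elements of X_n
```

Note that $A_n$ is tridiagonal and diagonally dominant here, which is one concrete way of seeing that condition (2.73) below holds for this pair of spaces.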

Proposition 2.26 For each $x \in X$, the generalized best approximation $P_nx$ to $x$ from $X_n$ with respect to $Y_n$ exists and is unique if and only if
$$Y_n \cap X_n^\perp = \{0\}. \qquad (2.73)$$
When this is the case and $P_nx$ is the generalized best approximation of $x$, $P_n : X \to X_n$ is a projection.

Proof Let $x \in X$ be given and assume that the spaces $X_n$ and $Y_n$ have bases $\{x_j : j \in \mathbb{N}_m\}$ and $\{\ell_j : j \in \mathbb{N}_m\}$, respectively. The existence and uniqueness of
$$P_nx = \sum_{i \in \mathbb{N}_m} c_ix_i \in X_n$$
satisfying equation (2.72) means that the linear system
$$\sum_{i \in \mathbb{N}_m} c_i\langle x_i, \ell_j\rangle = \langle x, \ell_j\rangle, \quad j \in \mathbb{N}_m, \qquad (2.74)$$
has a unique solution $c = [c_j : j \in \mathbb{N}_m] \in \mathbb{R}^m$ for any $x \in X$. This is equivalent to the fact that the $m \times m$ matrix
$$A = [\langle x_j, \ell_i\rangle : i, j \in \mathbb{N}_m]$$
is nonsingular. Moreover, a vector $b = [b_j : j \in \mathbb{N}_m] \in \mathbb{R}^m$ is in the null space of $A$ if and only if the linear functional $\ell = \sum_{j \in \mathbb{N}_m} b_j\ell_j$ is in the subspace $Y_n \cap X_n^\perp$. This proves the first assertion. For the remaining claim, when $Y_n \cap X_n^\perp = \{0\}$ we solve (2.74) and define for $x \in X$
$$P_nx = \sum_{j \in \mathbb{N}_m} c_jx_j.$$
Clearly $P_n : X \to X_n$ is a linear operator that by construction satisfies (2.72). Let us show that $P_n$ is also a projection. For any $x \in X$, $P_n^2x \in X_n$ is a generalized best approximation to $P_nx$ from $X_n$ with respect to $Y_n$. By Definition 2.25 we conclude for all $\ell \in Y_n$ that
$$\langle P_nx - P_n^2x, \ell\rangle = 0.$$
This together with (2.72) implies for all $\ell \in Y_n$ that
$$\langle x - P_n^2x, \ell\rangle = 0.$$
By the uniqueness of the solution to equation (2.74), we obtain that $P_n^2x = P_nx$, and so $P_n$ is indeed a projection.


We state the corresponding result for the generalized best approximation to $\ell$ from $Y_n$ with respect to $X_n$. The proof is similar to that of Proposition 2.26, where in this case the transpose of the matrix $A$ is used.

Proposition 2.27 For each $\ell \in X^*$, the generalized best approximation to $\ell$ from $Y_n$ with respect to $X_n$ exists and is unique if and only if
$$X_n \cap Y_n^\perp = \{0\}. \qquad (2.75)$$
When this is the case and $P'_n\ell$ is the generalized best approximation of $\ell$ from $Y_n$ with respect to $X_n$, then $P'_n : X^* \to Y_n$ is a projection.

In view of Propositions 2.26 and 2.27, we shall always assume that condition (2.73) (resp. (2.75)) holds whenever we refer to $P_n$ (resp. $P'_n$) as the generalized best approximation projection from $X_n$ with respect to $Y_n$ (resp. the generalized best approximation projection from $Y_n$ with respect to $X_n$). We also remark that condition (2.73) or (2.75) implies that $\dim Y_n = \dim X_n$.

Proposition 2.26 allows us to connect the concept of the generalized best approximation to the familiar concept of dual bases. Indeed, by the Hahn–Banach theorem, every finite-dimensional subspace $X_n$ of a normed linear space $X$ has a dual basis (see Theorem A.32 in the Appendix). Specifically, if $\{x_j : j \in \mathbb{N}_m\}$ is a basis for $X_n$, there is a subset $\{\ell_j : j \in \mathbb{N}_m\} \subseteq X^*$ such that $\ell_j(x_i) = \delta_{ij}$ for $i, j \in \mathbb{N}_m$. According to Proposition 2.26, the generalized best approximation to $x \in X$ from $X_n$ with respect to $Y_n = \operatorname{span}\{\ell_j : j \in \mathbb{N}_m\}$ exists and is given by
$$P_nx = \sum_{j \in \mathbb{N}_m} \ell_j(x)x_j.$$
Conversely, if (2.73) holds, we can find a dual basis for $X_n$ in $Y_n$.

Proposition 2.28 If $Y_n \cap X_n^\perp = \{0\}$, then $P'_n = P_n^*$ and $Y_n = P_n^*X^*$.

Proof For all $x \in X$ and $\ell \in X^*$ we have that
$$\langle x, P'_n\ell\rangle = \langle P_nx, P'_n\ell\rangle = \langle P_nx, \ell\rangle = \langle x, P_n^*\ell\rangle,$$
from which the desired result follows.

The next proposition gives an alternative sufficient condition ensuring that every $x \in X$ has a unique generalized best approximation in $X_n$ with respect to $Y_n$.

Proposition 2.29 If there is a constant $c > 0$ and a linear operator $T_n : X_n \to Y_n$ with $T_nX_n = Y_n$ such that for all $x \in X_n$,
$$\|x\|^2 \le c\,\langle x, T_nx\rangle,$$
then (2.73) holds.

Downloaded from http://www.cambridge.org/core, Lund University Libraries, on 17 Oct 2016, subject to the Cambridge Core terms of use, available at http://www.cambridge.org/core/terms. http://dx.doi.org/10.1017/CBO9781316216637.004

Fredholm equations and projection theory

Proof Our hypothesis implies that for any ℓ ∈ Yn ∩ Xn⊥ there exists x ∈ Xn such that Tn x = ℓ, and so

‖x‖ ≤ √c ⟨x, Tn x⟩^{1/2} = √c ⟨x, ℓ⟩^{1/2} = 0.

Therefore we obtain that x = 0, and consequently we also have that ℓ = Tn x = 0.

The next issue that concerns us is the conditions which guarantee that a sequence of projections {Pn : n ∈ N} converges pointwise to the identity operator in X, that is, Pn →ˢ I. As we shall see later, this property is crucial for the analysis of projection methods. Generally, condition (2.73) is not sufficient to ensure that this is the case. Therefore we need to introduce the concept of a regular pair.

Definition 2.30 A pair of sequences of subspaces {Xn ⊆ X : n ∈ N} and {Yn ⊆ Y : n ∈ N} is called a regular pair if there is a positive constant c such that for all n ∈ N there are linear operators Tn : Xn → Yn with Tn Xn = Yn satisfying the conditions that for all x ∈ Xn:

(i) ‖x‖ ≤ c ⟨x, Tn x⟩^{1/2};
(ii) ‖Tn x‖ ≤ c‖x‖.

In this definition it is important to realize that the constant c appearing above is independent of n ∈ N.

If X is a Hilbert space, so that X* can be identified by the Riesz representation theorem (see Theorem A.31 in the Appendix) with X itself, and we also have for all n ∈ N that Xn = Yn, then conditions (i) and (ii) are satisfied with Tn = I, n ∈ N. Thus in this case we have a regular pair. Conversely, if {Xn}, {Yn} is a regular pair, then from Proposition 2.29 we conclude that (2.73) holds, and so Pn is well defined.

For the next proposition we find it appropriate to introduce the quantity

dist(x, Xn) = min{‖x − u‖ : u ∈ Xn}.

Proposition 2.31 If {Xn : n ∈ N} and {Yn : n ∈ N} form a regular pair and Pn : X → Xn is the corresponding generalized best approximation projection, then for each x ∈ X and n ∈ N:

(i) ‖Pn‖ ≤ c³;
(ii) dist(x, Xn) ≤ ‖x − Pn x‖ ≤ (1 + c³) dist(x, Xn).

Consequently, if for each x ∈ X, lim_{n→∞} dist(x, Xn) = 0, then lim_{n→∞} Pn x = x.


2.2 General theory of projection methods

Proof For each x ∈ X and u ∈ Xn we have that

‖u − Pn x‖² ≤ c² ⟨u − Pn x, Tn(u − Pn x)⟩
= c² ⟨u − x, Tn(u − Pn x)⟩
≤ c² ‖u − x‖ ‖Tn(u − Pn x)‖
≤ c³ ‖u − x‖ ‖u − Pn x‖.

Therefore we conclude that

‖u − Pn x‖ ≤ c³ ‖u − x‖.

The choice u = 0 in the above inequality establishes (i). As for (ii), the lower bound is obvious, while the upper bound is obtained by using the inequality, valid for any u ∈ Xn,

‖x − Pn x‖ ≤ ‖x − u‖ + ‖u − Pn x‖ ≤ (1 + c³)‖u − x‖.

2.2.2 Projection methods for operator equations

In this subsection we describe projection methods for solving operator equations, which include the Petrov–Galerkin method, the Galerkin method, the least-squares method and the collocation method.

Let X and Y be Banach spaces, A : X → Y be a bounded linear operator and f ∈ Y. We wish to find a u ∈ X such that

Au = f,

if it exists. Projection methods have the common feature of specifying a sequence {Xn : n ∈ N} of subspaces and choosing a un ∈ Xn for which the residual error

rn = A un − f

is "small", so that un is a good approximation to the desired u. How this is done depends on the method used. We shall review some of the principal strategies for making rn small.

We begin with a description of the Petrov–Galerkin method. The idea behind this method is to make the residual rn ∈ Y small by choosing finite-dimensional subspaces Ln, n ∈ N, in Y* with dim Ln = dim Xn and attempting to find un ∈ Xn so that for all ℓ ∈ Ln we have

⟨A un, ℓ⟩ = ⟨f, ℓ⟩.  (2.76)



Specifically, we choose bases Xn = span{xj : j ∈ Nm} and Ln = span{ℓj : j ∈ Nm} and write un in the form

un = Σ_{j∈Nm} cj xj,

where the vector un = [cj : j ∈ Nm] ∈ R^m must satisfy the linear equation

An un = fn,

where An = [⟨A xj, ℓi⟩ : i, j ∈ Nm] and fn = [⟨f, ℓi⟩ : i ∈ Nm].

For the purpose of theoretical analysis it is also useful to express un ∈ Xn as the solution of an operator equation. This can be done by specifying any sequence {Yn : n ∈ N} of subspaces for which there are generalized best approximation projections Pn : Y → Yn with respect to Ln. This means that for all ℓ ∈ Ln, not only ⟨A un − f, ℓ⟩ = 0 but also ⟨A un − Pn A un, ℓ⟩ = 0 and ⟨f − Pn f, ℓ⟩ = 0. From these three equations we conclude that Pn A un − Pn f ∈ Yn ∩ Ln⊥ = {0}, and so we obtain that

Pn A un = Pn f.

Therefore we conclude that equation (2.76) is equivalent to the operator equation

An un = Pn f,

where An = Pn A|Xn. Here the symbol A|Xn stands for the operator A restricted to the subspace Xn, and so An ∈ B(Xn, Yn). This means that the operator An can be realized as a square matrix because dim Xn = dim Ln = dim Yn.
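As a toy finite-dimensional illustration of this setup (all data below are our own assumptions, not from the text), one can assemble the matrix An = [⟨A xj, ℓi⟩] and the vector fn, solve the linear system, and verify the defining condition (2.76): the residual A un − f is annihilated by every test functional in Ln.

```python
import numpy as np

rng = np.random.default_rng(1)
d, m = 8, 4

A = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # a well-conditioned operator
f = rng.standard_normal(d)

X = rng.standard_normal((d, m))    # trial basis: columns span X_n
L = rng.standard_normal((m, d))    # test functionals: rows span L_n

An = L @ A @ X                     # An[i, j] = <A x_j, ell_i>
fn = L @ f                         # fn[i] = <f, ell_i>

c = np.linalg.solve(An, fn)
un = X @ c                         # Petrov-Galerkin approximation

print(np.allclose(L @ (A @ un - f), 0))   # (2.76): residual annihilated by L_n
```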

We remark that the commonly used Galerkin method and also the least-squares method are special cases of the Petrov–Galerkin method. Specifically, if X = Y is a Hilbert space, we identify X* with X and choose Xn = Yn, n ∈ N; then the projection Pn above is the orthogonal projection of X onto Xn, and equation (2.76) means that A un − f ∈ Xn⊥ with un ∈ Xn. Alternatively, the least-squares method chooses Yn = A(Xn) instead of the choice Xn = Yn specified for the Galerkin method, which yields the requirement A un − f ∈ A(Xn)⊥ with un ∈ Xn. This means that un satisfies the equation

‖f − A un‖ = dist(f, A(Xn)).

Equivalently, un has the property that A*(A un − f) ∈ Xn⊥. So, in particular, we see that the least-squares method is equivalent to the Galerkin method applied to the operator equation

A* A u = A* f.
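This equivalence is easy to check numerically. In the sketch below (a toy example of ours), the least-squares solution over Xn is computed directly and then shown to satisfy the Galerkin condition for the normal equations, namely that A*(A un − f) is orthogonal to Xn.

```python
import numpy as np

rng = np.random.default_rng(2)
d, m = 10, 3

A = rng.standard_normal((d, d))
f = rng.standard_normal(d)
X = rng.standard_normal((d, m))    # columns span the trial space X_n

# Least squares: minimize ||A(Xc) - f|| over the coefficient vector c.
c, *_ = np.linalg.lstsq(A @ X, f, rcond=None)
un = X @ c

# Galerkin condition for A*Au = A*f: A^T(A u_n - f) is orthogonal to X_n.
print(np.allclose(X.T @ (A.T @ (A @ un - f)), 0))
```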



Our final example is the collocation method. This approaches the task of making the residual rn small by making it zero on some finite set of points. Specifically, the setup requires that Y = C(Ω), where Ω is a compact subset of R^d. Choose a finite set T ⊆ Ω and demand that rn|T = 0, where rn|T denotes the restriction of the function rn to the finite set T. Again, to solve for un ∈ X we restrict our search to un ∈ Xn, where dim Xn = card T and card T denotes the number of distinct elements in T, that is, the cardinality of T. In terms of a basis for Xn we have, as before, un = Σ_{j∈Nm} cj xj, un = [cj : j ∈ Nm] and fn = [f(tj) : j ∈ Nm], where T = {tj : j ∈ Nm} and An = [(A xi)(tj) : i, j ∈ Nm]. These quantities are joined by the linear system of equations

An un = fn.

An operator version of this linear system follows by choosing a subspace Yn ⊆ C(Ω) with dim Yn = card T which admits an interpolation projection Pn : Y → Yn corresponding to the family of linear functionals {δt : t ∈ T}, where the linear functional δt is defined to be the "delta functional" at t; that is, for each f ∈ C(Ω) we have that δt(f) = f(t). Therefore (Pn f)|T = 0 if and only if f|T = 0, and so we get that

An un = Pn f,

where An = Pn A|Xn.
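A minimal collocation sketch for a second-kind equation u(t) − ∫₀¹ k(t, s) u(s) ds = f(t), with a polynomial trial space; the kernel, nodes, manufactured solution and quadrature rule below are our own toy choices, not from the text. By construction the residual vanishes on the collocation set T.

```python
import numpy as np

m = 6
t = np.linspace(0, 1, m)                  # collocation set T
s = np.linspace(0, 1, 201)                # quadrature grid
w = np.full(s.size, s[1] - s[0])          # trapezoid weights
w[0] = w[-1] = w[0] / 2

k = lambda t, s: 0.25 * np.exp(-np.abs(t - s))   # a smooth toy kernel

# Trial basis x_j(s) = s^j;  (A x_j)(t) = x_j(t) - int_0^1 k(t, s) x_j(s) ds.
X_at_t = np.vander(t, m, increasing=True)
K_at_t = np.array([[np.sum(w * k(ti, s) * s**j) for j in range(m)] for ti in t])
An = X_at_t - K_at_t                      # An[i, j] = (A x_j)(t_i)

u_true = lambda x: np.cos(np.pi * x)      # manufactured solution for the data f
fn = u_true(t) - np.array([np.sum(w * k(ti, s) * u_true(s)) for ti in t])

c = np.linalg.solve(An, fn)

# The residual r_n = A u_n - f vanishes at every collocation point.
print(np.max(np.abs(An @ c - fn)) < 1e-8)
```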

2.2.3 Convergence and stability

In this subsection we discuss the convergence and stability of projection methods for operator equations. The setup is as before, namely X and Y are normed linear spaces, A ∈ B(X, Y), {Xn : n ∈ N} and {Yn : n ∈ N} are two sequences of finite-dimensional subspaces of X and Y, respectively, with dim Xn = dim Yn, and Pn : Y → Yn is a projection. The projection equation for our approximate solution un ∈ Xn for a solution u ∈ X to the equation Au = f is

An un = Pn f,  (2.77)

where, as before,

An = Pn A|Xn  (2.78)

and An ∈ B(Xn, Yn). Our goal is to clarify to what extent the sequence {un : n ∈ N}, if it exists, approximates u ∈ X. We start with a definition.

Definition 2.32 The projection method above is said to be convergent if there exists an integer q ∈ N such that the operator An ∈ B(Xn, Yn) is invertible for n ≥ q, and for each f ∈ A(X) the unique solution of (2.77), which we call un = An⁻¹ Pn f, converges as n → ∞ to a u ∈ X that satisfies the operator equation Au = f.

We remark that convergence of the approximate solutions {un : n ∈ N} to u as defined above does not require that the operator equation Au = f have a unique solution, although this is often the case in applications.

We first describe a consequence of convergence

Theorem 2.33 If the projection method is convergent, then there is an integer q ∈ N and a constant c > 0 such that for all n ≥ q,

‖An⁻¹ Pn A‖ ≤ c.  (2.79)

If in addition A is onto, that is, A(X) = Y, then the q and c above can be chosen so that for all n ≥ q,

‖An⁻¹‖ ≤ c.  (2.80)

Proof The proof uses the uniform boundedness principle (Theorem A.25 in the Appendix). Specifically, since the projection method converges, we conclude for each u ∈ X that the sequence {An⁻¹ Pn A u : n ∈ N} converges in X as n → ∞ and hence is norm bounded for each u ∈ X. Thus an application of the uniform boundedness principle confirms (2.79). We note in passing that An⁻¹ Pn A ∈ B(X, Xn) and by equation (2.78) this operator is a projection of X onto Xn.

As for (2.80), we argue in a similar fashion, using in this case the sequence {An⁻¹ Pn f : n ≥ q}, where f can be chosen arbitrarily in Y because the operator A is assumed to be onto, to conclude, again by the uniform boundedness principle, that ‖An⁻¹ Pn‖ is bounded uniformly in n ∈ N. Here we have that An⁻¹ Pn ∈ B(Y, Xn). However, since

‖An⁻¹‖ = sup{‖An⁻¹ y‖ : y ∈ Yn, ‖y‖ = 1}
= sup{‖An⁻¹ Pn y‖ : y ∈ Yn, ‖y‖ = 1}
≤ sup{‖An⁻¹ Pn y‖ : y ∈ Y, ‖y‖ = 1} = ‖An⁻¹ Pn‖,  (2.81)

the claim is confirmed.

Next we explore to what extent (2.79) and (2.80) are also sufficient for convergence of a projection method. To this end we introduce the notion of denseness of the sequence of subspaces {Xn : n ∈ N} in X.



Definition 2.34 We say that the sequence of subspaces {Xn : n ∈ N} has the denseness property in X if for every x ∈ X,

lim_{n→∞} dist(x, Xn) = 0.

Theorem 2.35 If there exists a q ∈ N such that the operator An ∈ B(Xn, Yn) is invertible, a positive constant c such that for all n ≥ q, ‖An⁻¹ Pn A‖ ≤ c, and the sequence {Xn : n ∈ N} has the denseness property in X, then the operator A is one to one and for each f ∈ R(A) the projection method converges to u ∈ X, the unique solution to the equation Au = f.

Proof The proof uses the fact that the operator An⁻¹ Pn A is a projection of X onto Xn. Hence we have for any v ∈ X the inequality

‖An⁻¹ Pn A v − v‖ ≤ (1 + c) dist(v, Xn),  (2.82)

and in particular, for f ∈ R(A), we have that

‖un − u‖ ≤ (1 + c) dist(u, Xn).

Although Theorem 2.35 shows that condition (2.79) nearly guarantees convergence, it is hard to apply in practice. Nevertheless, further examination of the inequality (2.82) will lead to some improvements. Indeed, for any projection Pn : X → Xn we have for any x ∈ X that

‖Pn x − x‖ ≤ (1 + ‖Pn‖) dist(x, Xn)  (2.83)

and

dist(x, Xn) ≤ ‖Pn x − x‖.  (2.84)
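Both inequalities can be observed numerically for an oblique projection built from dual bases; in the toy sketch below (our own data), dist(x, Xn) is computed via the orthogonal projection onto Xn.

```python
import numpy as np

rng = np.random.default_rng(5)
d, m = 7, 3

B = rng.standard_normal((d, m))            # basis of X_n (columns)
L = rng.standard_normal((m, d))            # functionals defining an oblique P_n
P = B @ np.linalg.solve(L @ B, L)          # projection onto X_n via dual bases

x = rng.standard_normal(d)

Q, _ = np.linalg.qr(B)                     # orthonormal basis of X_n
dist = np.linalg.norm(x - Q @ (Q.T @ x))   # dist(x, X_n) in the 2-norm

err = np.linalg.norm(P @ x - x)
Pnorm = np.linalg.norm(P, 2)               # operator 2-norm of P_n

print(dist <= err + 1e-12)                 # inequality (2.84)
print(err <= (1 + Pnorm) * dist + 1e-12)   # inequality (2.83)
```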

These inequalities lead to a useful criterion to ensure that a sequence of projections has the property Pn →ˢ I.

Lemma 2.36 A sequence of projections {Pn : n ∈ N} ⊆ B(X, Xn) has the property Pn →ˢ I if and only if the sequence {‖Pn‖ : n ∈ N} is bounded and the sequence of subspaces {Xn : n ∈ N} has the denseness property in X.

Proof This result follows directly from the uniform boundedness principle and inequalities (2.83) and (2.84).

We next comment on the denseness property. For this purpose we introduce the subspace

lim sup_{n→∞} Xn = ⋃_{m∈N} ⋂_{n≥m} Xn.  (2.85)



Proposition 2.37 If the subspace lim sup_{n→∞} Xn is dense in X, then the sequence of subspaces {Xn : n ∈ N} has the denseness property in X.

Proof If lim sup_{n→∞} Xn is dense in X, then for any x ∈ X and any ε > 0 there exist a positive integer q ∈ N and a y ∈ X such that ‖x − y‖ < ε and y ∈ ⋂_{n≥q} Xn. Hence for all n ≥ q we have that dist(x, Xn) < ε; that is, {Xn : n ∈ N} has the denseness property in X.

The next proposition presents a necessary condition for the denseness property.

Proposition 2.38 If the sequence of subspaces {Xn : n ∈ N} has the denseness property, then ⋃_{n∈N} Xn is dense in X.

Proof This result follows from the fact that for all x ∈ X,

dist(x, ⋃_{m∈N} Xm) ≤ dist(x, Xn) for all n ∈ N,

and the definition of the denseness property.

Definition 2.39 We say that a sequence of subspaces {Xn : n ∈ N} is nested if for all n ∈ N, Xn ⊆ Xn+1.

Note that when the sequence of subspaces {Xn : n ∈ N} is nested, it follows that

⋃_{n∈N} Xn = lim sup_{n→∞} Xn.

Consequently, Lemma 2.36 gives us the following fact.

Proposition 2.40 If the sequence of subspaces {Xn : n ∈ N} is nested in X, then Pn →ˢ I if and only if the sequence {‖Pn‖ : n ∈ N} is bounded and the subspace ⋃_{n∈N} Xn is dense in X.

We may express the condition that the collection of subspaces {Xn : n ∈ N} is nested in terms of the corresponding collection of projections {Pn : n ∈ N} such that for each n ∈ N we have that R(Pn) = Xn. Indeed, {Xn : n ∈ N} is a nested collection of subspaces if

Pn Pm = Pn for m ≥ n.  (2.86)
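For orthogonal projections onto nested polynomial spaces, condition (2.86) can be checked directly; a small sketch under our own choice of spaces:

```python
import numpy as np

t = np.linspace(0, 1, 12)
V = np.vander(t, 6, increasing=True)   # columns 1, t, ..., t^5: nested spaces
Q, _ = np.linalg.qr(V)                 # Q[:, :n] spans polynomials of degree < n

def proj(n):
    Qn = Q[:, :n]
    return Qn @ Qn.T                   # orthogonal projection onto X_n

P2, P4 = proj(2), proj(4)
print(np.allclose(P2 @ P4, P2))        # (2.86): P_n P_m = P_n for m >= n
print(np.allclose(P4 @ P2, P2))        # and P_m P_n = P_n, since X_n lies in X_m
```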

We turn our attention to the adjoint projections Pn* : X* → X*. To this end, we choose a basis {xi : i ∈ Nm} for Xn and observe that there exists a unique collection of bounded linear functionals {ℓi : i ∈ Nm} ⊆ X* such that



ℓj(xi) = δij, i, j ∈ Nm, and for each x ∈ X we have that

Pn x = Σ_{i∈Nm} ⟨x, ℓi⟩ xi.  (2.87)

In fact, for any x ∈ Xn of the form x = Σ_{i∈Nm} ci xi, where c = [ci : i ∈ Nm] ∈ R^m, define bounded linear functionals ℓj, j ∈ Nm, on Xn by

ℓj(x) = cj,

which leads to ℓj(xi) = δij, i, j ∈ Nm, and (2.87). We then extend the functionals to the entire space X by the equation

⟨x, ℓj⟩ = ⟨Pn x, ℓj⟩ for all x ∈ X.

It can easily be verified that (2.87) is valid for the collection of extended functionals, and that this collection is the unique one meeting these requirements.

From equation (2.87) comes the formula for the adjoint projection Pn*, namely, for each ℓ ∈ X*,

Pn* ℓ = Σ_{j∈Nm} ⟨xj, ℓ⟩ ℓj.  (2.88)

So we see that Yn = R(Pn*) = span{ℓj : j ∈ Nm} and dim Yn = dim Xn. By definition, for each x ∈ X and ℓ ∈ X* we have that ⟨Pn x, ℓ⟩ = ⟨x, Pn* ℓ⟩. It follows that lim_{n→∞} Pn* ℓ = ℓ for all ℓ ∈ X* in the weak topology on X* if and only if for every x ∈ X, lim_{n→∞} Pn x = x in the weak topology on X ([276], p. 111).

The next lemma prepares us for a result from [47] (p. 14) which provides a sufficient condition to ensure that Pn* →ˢ I in the norm topology on X*. Recall that a normed linear space X is said to be reflexive if X may be identified with its second dual.

Lemma 2.41 If X is reflexive and for every x ∈ X, lim_{n→∞} Pn x = x in the weak topology on X, then span{⋃_{n∈N} R(Pn*)} is dense in X* in the norm topology.

Proof We consider the subspace of X* given by

W = span ⋃_{n∈N} R(Pn*)

and choose any F ∈ W⊥. Since X is reflexive, there is an x ∈ X such that for all ℓ ∈ X* we have that F(ℓ) = ⟨x, ℓ⟩. Choose any m ∈ N and observe that 0 = F(Pm* ℓ) = ⟨x, Pm* ℓ⟩ = ⟨Pm x, ℓ⟩. We now let m → ∞ and use our hypothesis to conclude that ⟨x, ℓ⟩ = 0. Since ℓ ∈ X* is arbitrary, we



conclude that x = 0, thereby establishing that F = 0. In other words, we have confirmed that W⊥ = {0}, and so W is dense in X* (see Corollary A.35 in the Appendix).

From this fact follows the next result

Proposition 2.42 If X is reflexive, for every x ∈ X, lim_{n→∞} Pn x = x in the weak topology on X, and the projections Pn satisfy (2.86), then Pn* →ˢ I in the norm topology on X*.

Proof According to equation (2.86), we conclude that Pm* Pn* = Pn*, which implies that the spaces Yn = R(Pn*) are nested for n ∈ N. Moreover, our hypothesis ensures that for each x ∈ X the set {Pn x : n ∈ N} is weakly bounded. Therefore Corollary A.39 in the Appendix implies that it is norm bounded, and so by the uniform boundedness principle (see Theorem A.25) the set {‖Pn‖ : n ∈ N} is bounded. But we know that ‖Pn‖ = ‖Pn*‖, and therefore the claim made in this proposition follows from Proposition 2.40 and Lemma 2.41 above.

Let us return to condition (2.80), which is usually more readily verifiable in applications. First we comment on the relationship of (2.80) to (2.79). Our comment here is based on the following norm inequalities, the first being

‖An⁻¹ Pn A‖ ≤ ‖An⁻¹‖ · ‖Pn‖ · ‖A‖,

which implies that (2.80) ensures (2.79) when {‖Pn‖ : n ∈ N} is bounded (where, of course, the constants in (2.79) and (2.80) will be different). Moreover, when A : X → Y is one to one and onto, then

‖An⁻¹ Pn‖ ≤ ‖An⁻¹ Pn A‖ · ‖A⁻¹‖,

and so, recalling inequality (2.81), we obtain the inequality

‖An⁻¹‖ ≤ ‖An⁻¹ Pn A‖ ‖A⁻¹‖,

which demonstrates that inequality (2.79) implies (2.80), at least when A is one to one and onto. We formalize this in the next proposition.

Proposition 2.43 Let A : X → Y be a bounded linear operator, {Xn : n ∈ N} and {Yn : n ∈ N} finite-dimensional subspaces of X and Y, respectively, and Pn : Y → Yn a projection. If {‖Pn‖ : n ∈ N} and {‖An⁻¹‖ : n ∈ N} are bounded, then so is {‖An⁻¹ Pn A‖ : n ∈ N}. If A is one to one and onto and {‖An⁻¹ Pn A‖ : n ∈ N} is bounded, then {‖An⁻¹‖ : n ∈ N} is bounded too.

Our final comments concerning projection methods demonstrate that under certain circumstances the existence of a unique solution of the projection equation (2.77) implies the same for the operator equation (2.63). In the next lemma we provide conditions on the projection method which imply that A is one to one.

Lemma 2.44 Let {Xn : n ∈ N} and {Yn : n ∈ N} be sequences of finite-dimensional subspaces of X and Y, respectively, and Pn : Y → Yn, Qn : X → Xn projections. If there is a q ∈ N and a positive constant c > 0 such that for any n ≥ q both ‖Pn‖ ≤ c and ‖An⁻¹‖ ≤ c hold, and also Qn →ˢ I, then A is one to one.

Proof If u ∈ X satisfies Au = 0, then for n ≥ q we have, since An = Pn A|Xn, that

‖Qn u‖_X ≤ c ‖An Qn u‖_Y
= c ‖Pn A Qn u − Pn A u‖_Y
≤ c² ‖A‖ ‖Qn u − u‖_X.

Letting n → ∞ on both sides of this inequality, we conclude that u = 0.

We now present a similar result that implies the operator A is onto

Lemma 2.45 Let X and Y be Banach spaces with X reflexive and A ∈ B(X, Y). Let Pn : Y → Yn be a projection such that Pn →ˢ I on Y and the sequence {‖An⁻¹‖ : n ∈ N} is bounded. If the sequence of subspaces {R(Pn*) : n ∈ N} is nested, then A is onto.

Proof Choose any f ∈ Y and recall that un = An⁻¹ Pn f. Since Pn →ˢ I, we conclude by the uniform boundedness principle (see Theorem A.25) that the sequence {‖Pn‖ : n ∈ N} is bounded, and so we obtain that {‖un‖ : n ∈ N} is also bounded. Moreover, our hypothesis that {R(Pn*) : n ∈ N} is nested guarantees, by the proof of Proposition 2.42, that Pn* →ˢ I on Y*. Now, since X is reflexive, we can extract a subsequence {u_{n_k} : k ∈ N} which converges weakly to an element u ∈ X (see, for example, [276], p. 126). We show that Au = f. To this end, we first observe for any ℓ ∈ Y* that

lim_{k→∞} ⟨A u_{n_k}, ℓ⟩ = lim_{k→∞} ⟨u_{n_k}, A* ℓ⟩ = ⟨u, A* ℓ⟩ = ⟨Au, ℓ⟩,

that is, lim_{k→∞} A u_{n_k} = Au weakly in Y. Therefore the right-hand side of the inequality

|⟨A u_{n_k}, P_{n_k}* ℓ⟩ − ⟨Au, ℓ⟩| ≤ ‖A u_{n_k}‖ ‖P_{n_k}* ℓ − ℓ‖ + |⟨A u_{n_k} − Au, ℓ⟩|

goes to zero as k → ∞, and so, with the formula An un = Pn A un, we obtain that lim_{k→∞} A_{n_k} u_{n_k} = Au weakly in Y. Moreover, by definition, for all n ∈ N there



holds the equation An un = Pn f, and also, by hypothesis, lim_{n→∞} Pn f = f (in norm), from which we conclude that lim_{k→∞} A_{n_k} u_{n_k} = f (in norm). Hence, indeed, we obtain the desired conclusion that Au = f.

Now we turn our attention to a discussion of the numerical stability of projection methods. The numerical stability of the approximate solution problem concerns how close the approximate solution of projection equation (2.77) is to that of a perturbed equation of the form

(An + En) ũn = Pn f + gn,  (2.89)

where En ∈ B(Xn, Yn) is a linear operator which effects a perturbation of An, and gn ∈ Yn effects a perturbation of Pn f, n ∈ N.

We begin with a formal definition of stability.

Definition 2.46 The projection method is said to be stable if there are non-negative constants μ and ν, a positive constant δ and a positive integer q such that for any n ≥ q the operator An is invertible, and for any vector gn ∈ Yn and any linear operator En ∈ B(Xn, Yn) with ‖En‖ ≤ δ, the perturbed equation (2.89) always has a unique solution ũn ∈ Xn satisfying the inequality

‖ũn − un‖ ≤ μ‖En un‖ + ν‖gn‖.  (2.90)

We next characterize the stability of the projection method

Theorem 2.47 If A ∈ B(X, Y), then the projection method is stable if and only if inequality (2.80) holds.

Proof Suppose that condition (2.80) is satisfied for n ≥ q; then for any x ∈ Xn, ‖An x‖ ≥ c⁻¹‖x‖, and for any f ∈ A(X) and n ≥ q the projection equation (2.77) has the unique solution un ∈ Xn. If n ≥ q and the perturbation satisfies the norm inequality ‖En‖ ≤ 1/(2c), then for any x ∈ Xn,

‖(An + En)x‖ ≥ ‖x‖/(2c).  (2.91)

Hence for any gn ∈ Yn and n ≥ q the perturbed equation (2.89) has a unique solution ũn ∈ Xn, and it gives us the formula

ũn − un = (An + En)⁻¹(gn − En un).

Hence with inequality (2.91) we get the stability estimate

‖ũn − un‖ ≤ 2c(‖En un‖ + ‖gn‖).



Conversely, suppose that the projection method is stable. In this case we choose the perturbation operator to be En = 0. Then for any f ∈ A(X) and gn ∈ Yn, when n ≥ q, the projection equation (2.77) and its perturbed equation (2.89) have unique solutions un, ũn ∈ Xn, respectively. We now let vn = ũn − un and observe for n ≥ q that An vn = gn, and so the stability inequality (2.90) gives us the desired inequality

‖An⁻¹ gn‖ = ‖vn‖ ≤ ν‖gn‖.

We now introduce an important concept in connection with the actual behavior of approximate methods, that is, the condition number of a linear operator, which is used to indicate how sensitive the solution of an equation may be to small relative changes in the input data.

Definition 2.48 Let X and Y be Banach spaces and A : X → Y be a bounded linear operator with bounded inverse A⁻¹ : Y → X. The condition number of A is defined as

cond(A) = ‖A‖ ‖A⁻¹‖.

It is clear that the inequality cond(A) ≥ 1 always holds. The following proposition shows that the condition number is a suitable tool for measuring stability.

Proposition 2.49 Suppose that X and Y are Banach spaces and A ∈ B(X, Y) has bounded inverse A⁻¹. Let δA ∈ B(X, Y), δf ∈ Y be perturbations of A and f, and u ∈ X, u + δu ∈ X be solutions of

Au = f  (2.92)

and

(A + δA)(u + δu) = f + δf,  (2.93)

respectively. If ‖δA‖ < 1/‖A⁻¹‖ and f ≠ 0, then

‖δu‖/‖u‖ ≤ cond(A)/(1 − ‖A⁻¹‖‖δA‖) · (‖δA‖/‖A‖ + ‖δf‖/‖f‖).

Proof It follows from (2.92) and (2.93) that

(A + δA)(u + δu) = Au + δf,

which leads to the equation

δu = (I + A⁻¹ δA)⁻¹ A⁻¹ (δf − δA u).  (2.94)



The inequality ‖δA‖ < 1/‖A⁻¹‖ ensures the existence of the linear operator (I + A⁻¹ δA)⁻¹ and also the estimate

‖(I + A⁻¹ δA)⁻¹‖ ≤ 1/(1 − ‖A⁻¹‖‖δA‖).

From this, with (2.94), we conclude that

‖δu‖/‖u‖ ≤ ‖A⁻¹‖/(1 − ‖A⁻¹‖‖δA‖) · (‖δf‖/‖u‖ + ‖δA‖)
≤ ‖A⁻¹‖‖A‖/(1 − ‖A⁻¹‖‖δA‖) · (‖δf‖/‖f‖ + ‖δA‖/‖A‖),

completing the proof.
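The bound of Proposition 2.49 can be verified numerically. The sketch below (toy data of ours) computes cond(A) = ‖A‖‖A⁻¹‖ in the spectral norm and checks the relative-error estimate for a randomly perturbed linear system.

```python
import numpy as np

rng = np.random.default_rng(3)
d = 8

A = np.eye(d) + 0.1 * rng.standard_normal((d, d))   # a well-conditioned operator
u = rng.standard_normal(d)
f = A @ u

dA = 1e-3 * rng.standard_normal((d, d))       # perturbation of A
df = 1e-3 * rng.standard_normal(d)            # perturbation of f

Ainv = np.linalg.inv(A)
opnorm = lambda M: np.linalg.norm(M, 2)       # operator 2-norm
cond = opnorm(A) * opnorm(Ainv)               # cond(A) = ||A|| ||A^-1||
assert opnorm(dA) < 1 / opnorm(Ainv)          # hypothesis of the proposition

du = np.linalg.solve(A + dA, f + df) - u      # solution of the perturbed system

bound = cond / (1 - opnorm(Ainv) * opnorm(dA)) * (
    opnorm(dA) / opnorm(A) + np.linalg.norm(df) / np.linalg.norm(f))
print(np.linalg.norm(du) / np.linalg.norm(u) <= bound)
```

The bound is a worst-case estimate, so the observed relative error is typically well below it.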

A simple fact for the condition number cond(An) = ‖An‖‖An⁻¹‖ of the projection equation (2.77) is that if An →ᵘ A and An⁻¹ →ᵘ A⁻¹, then

lim_{n→∞} cond(An) = cond(A).

We remark that if the projection method is convergent, then for some positive constant c the inequality

‖An⁻¹‖ ≤ c‖A⁻¹‖

holds. In fact, using Theorem 2.33, we have that

‖An⁻¹‖ = sup{‖An⁻¹ Pn y‖ : y ∈ Yn, ‖y‖ = 1}
≤ sup{‖An⁻¹ Pn y‖ : y ∈ Y, ‖y‖ = 1}
≤ sup{‖[An⁻¹ Pn A] A⁻¹ y‖ : y ∈ Y, ‖y‖ = 1}
≤ c‖A⁻¹‖.

2.2.4 An abstract framework for second-kind operator equations

In this subsection we change our perspective somewhat, to a context closer to the specific applications that we have in mind in later chapters. Specifically, in this subsection our operator A will have the form I − K, where K is compact. Projection methods for this case are well developed in the literature. Specifically, it is well known that the theory of collectively compact operators due to P. Anselone (as presented in [6, 15]) provides us with a convenient abstract setting for the analysis of many numerical schemes associated with the Fredholm operator I − K. This theory generally requires that the sequence of nested spaces {Xn : n ∈ N} is dense in X. For our purposes later, it is advantageous to improve upon this hypothesis.



We have in mind several concrete projection methods which will be introduced later. These include discrete Galerkin, Petrov–Galerkin, collocation and quadrature methods. We shall describe them collectively in the following context, which differs somewhat from the point of view of the previous sections.

We begin with a Banach space X and a subspace V of it. Let K ∈ B(X, V) be a compact operator and consider the Fredholm equation of the second kind

u − Ku = f.  (2.95)

By the Fredholm alternative (see Theorem A.48), this equation has a unique solution for all f ∈ V if and only if the null space of I − K is {0}, that is, as long as one is not an eigenvalue of K. We always assume that this condition holds.

To set up our approximation methods for (2.95), unlike in the discussions earlier, we need two sequences of operators {Kn : n ∈ N} ⊆ B(X, V) and {Pn : n ∈ N} ⊆ B(X, U), where we require that V ⊆ U ⊆ X. As before, Pn will approximate the identity, and in the present context Kn will approximate the operator K. The exact sense in which this is required will be explained below. Postponing this issue for the moment, we associate with these two sequences of operators the approximation scheme

(I − Pn Kn) un = Pn f  (2.96)

for solving (2.95).

For the analysis of the convergence properties of (2.96), we are led to consider the existence and uniform boundedness for n ∈ N of the inverse of the operator

An = I − Pn Kn.

Moreover, in this section we also prepare the tools to study the phenomenon of superconvergence. This means that we shall approximate the solution of equation (2.95) by the function

ũn = f + Kn un.  (2.97)

We refer to ũn as the iterated approximation to (2.96), which is also called the Sloan iterate, and it follows directly that ũn satisfies the equation

(I − Kn Pn) ũn = f.  (2.98)

Therefore we also consider in this section the existence and uniform boundedness for n ∈ N of the inverse of the operators

Ãn = I − Kn Pn.
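The algebra connecting (2.96)–(2.98) can be checked with matrices standing in for Kn and Pn (a toy sketch; the particular Kn and Pn below are our own choices):

```python
import numpy as np

rng = np.random.default_rng(4)
d, m = 10, 4

Kn = 0.3 * rng.standard_normal((d, d)) / d       # small norm: I - P_n K_n invertible
Q, _ = np.linalg.qr(rng.standard_normal((d, m)))
Pn = Q @ Q.T                                     # an orthogonal projection of rank m

f = rng.standard_normal(d)

un = np.linalg.solve(np.eye(d) - Pn @ Kn, Pn @ f)   # scheme (2.96)
un_sloan = f + Kn @ un                              # iterated approximation (2.97)

print(np.allclose((np.eye(d) - Kn @ Pn) @ un_sloan, f))  # (2.98) holds
print(np.allclose(Pn @ un_sloan, un))   # applying P_n to the iterate recovers u_n
```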



Our analysis of the linear operators An and Ãn requires several assumptions on Kn and Pn. To prepare for these conditions, we introduce the following terminology.

Definition 2.50 We say that a sequence of operators {Tn : n ∈ N} ⊆ B(X, Y) converges pointwise to an operator T ∈ B(X, Y) on the set S ⊆ X provided that for each x ∈ S we have lim_{n→∞} ‖Tn x − T x‖ = 0. Notationally, we indicate this by Tn →ˢ T on S. Similarly, we say that the sequence of operators {Tn : n ∈ N} converges to the operator T uniformly on S provided that lim_{n→∞} sup{‖Tn x − T x‖ : x ∈ S} = 0, and indicate this by Tn →ᵘ T on S.

Clearly, when S is the unit ball in X and Tn →ᵘ T on S, this means that Tn →ᵘ T on X.

Lemma 2.51 Let X be a Banach space and S a relatively compact subset of X. If the sequence of operators {Tn : n ∈ N} ⊆ B(X) has uniformly bounded operator norms and Tn →ˢ T, then Tn →ᵘ T on S.

Proof Since the set S is relatively compact, it is totally bounded (Theorem A.7), and so for a given ε > 0 there is a finite set W ⊆ S such that for each x ∈ S there is a w ∈ W with ‖x − w‖ < ε. Since W is finite, by hypothesis there is a q ∈ N such that for any v ∈ W and n ≥ q we have ‖Tn v − T v‖ ≤ ε. In particular, this inequality holds for the choice v = w. By hypothesis there is a constant c > 0 such that for any n ∈ N we have ‖Tn‖ ≤ c. We now estimate the error uniformly for all x ∈ S when n ≥ q:

‖Tn x − T x‖ ≤ ‖(Tn − T)(x − w)‖ + ‖Tn w − T w‖ ≤ (c + ‖T‖ + 1)ε.

Let us now return to our setup for approximate schemes for solving Fredholm equations. We list below several conditions that we shall assume, and investigate their consequences.

(H-1) The set of operators {Kn : n ∈ N} ⊆ B(X, V) is collectively compact; that is, for any bounded set B ⊆ X the set ⋃_{n∈N} Kn(B) is relatively compact in V.
(H-2) Kn →ˢ K on U.
(H-3) The operators {Pn : n ∈ N} ⊆ B(X, U) have norms which are uniformly bounded for n ∈ N.
(H-4) Pn →ˢ I on V.



As a first step, we modify Proposition 1.7 in [6] to fit our circumstances and obtain the following fact.

Lemma 2.52 If conditions (H-1)–(H-4) hold, then

(i) (Pn − I)Kn →ᵘ 0 on X;
(ii) (Kn − K)Pn Kn →ᵘ 0 on X;
(iii) (Kn Pn − K)Kn Pn →ᵘ 0 on X.

Proof (i) Let $B$ denote the closed unit ball in $X$, that is, $B = \{x \in X : \|x\| \le 1\}$, and also set $G = \{\mathcal{K}_n x : x \in B, n \in \mathbb{N}\}$. Condition (H-1) implies that $G$ is a relatively compact set in $V$, while hypotheses (H-3) and (H-4) coupled with Lemma 2.51 establish that $\mathcal{P}_n \xrightarrow{u} I$ on $G$. Consequently, the inequality
$$\|(\mathcal{P}_n - I)\mathcal{K}_n\| = \sup\{\|(\mathcal{P}_n - I)\mathcal{K}_n x\| : x \in B\} \le \sup\{\|(\mathcal{P}_n - I)x\| : x \in G\} \quad (2.99)$$
establishes (i).

(ii) For any $x \in V$, it follows from (H-4) that $\{\mathcal{P}_n x : n \in \mathbb{N}\}$ is a relatively compact subset of $X$. Therefore, by Lemma 2.51 and the hypotheses (H-1) and (H-2), we conclude that $(\mathcal{K}_n - \mathcal{K})\mathcal{P}_n \xrightarrow{s} 0$ on $V$. Moreover, using the inequality
$$\|(\mathcal{K}_n - \mathcal{K})\mathcal{P}_n\mathcal{K}_n\| = \sup\{\|(\mathcal{K}_n - \mathcal{K})\mathcal{P}_n\mathcal{K}_n y\| : y \in B\} \le \sup\{\|(\mathcal{K}_n - \mathcal{K})\mathcal{P}_n x\| : x \in G\} \quad (2.100)$$
and specializing Lemma 2.51 to the choices $\mathcal{T} = 0$, $\mathcal{T}_n = (\mathcal{K}_n - \mathcal{K})\mathcal{P}_n$ and $S = G$, we conclude the validity of (ii).

(iii) Hypotheses (H-1) and (H-3) guarantee that
$$G' = \{\mathcal{K}_n\mathcal{P}_n x : x \in B, n \in \mathbb{N}\}$$
is a relatively compact subset of $V$. Moreover, from the equation
$$\mathcal{K}_n\mathcal{P}_n - \mathcal{K} = (\mathcal{K}_n\mathcal{P}_n - \mathcal{K}\mathcal{P}_n) + (\mathcal{K}\mathcal{P}_n - \mathcal{K}), \quad (2.101)$$
statement (ii) and (H-4), we obtain that $\mathcal{K}_n\mathcal{P}_n - \mathcal{K} \xrightarrow{s} 0$ on $V$. Thus, statement (iii) follows directly from equation (2.101) and the relative compactness of the set $G'$.

We next study the existence of the inverse operators of $\mathcal{A}_n$ and $\tilde{\mathcal{A}}_n$. For this purpose, we recall a useful result about the existence and boundedness of inverse operators.

Lemma 2.53 If $X$ is a normed linear space with $\mathcal{S}$ and $\mathcal{E}$ in $\mathcal{B}(X)$ such that $\mathcal{S}^{-1}$ exists as a bounded linear operator on $\mathcal{S}(X)$ and $\|\mathcal{E}\| < \|\mathcal{S}^{-1}\|^{-1}$, then the linear operator $\mathcal{T} = \mathcal{S} - \mathcal{E}$ has an inverse $\mathcal{T}^{-1}$ as a bounded linear operator on $\mathcal{T}(X)$, and it has the property that
$$\|\mathcal{T}^{-1}\| \le \frac{1}{\|\mathcal{S}^{-1}\|^{-1} - \|\mathcal{E}\|}.$$

Proof For any $u \in X$ we have that
$$\mathcal{S}u = \mathcal{T}u + \mathcal{E}u,$$
and so
$$\|\mathcal{S}u\| \le \|\mathcal{T}u\| + \|\mathcal{E}u\|.$$
Thus, since $\|u\| \le \|\mathcal{S}^{-1}\|\,\|\mathcal{S}u\|$ and $\|\mathcal{E}u\| \le \|\mathcal{E}\|\,\|u\|$,
$$\left(\|\mathcal{S}^{-1}\|^{-1} - \|\mathcal{E}\|\right)\|u\| \le \|\mathcal{S}u\| - \|\mathcal{E}u\| \le \|\mathcal{T}u\|,$$
from which the desired result follows.
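The mechanics of Lemma 2.53 can be checked numerically. The following sketch (not from the book; the diagonal operators are an arbitrary choice for illustration) verifies the bound for diagonal operators on $\mathbb{R}^2$ under the sup norm, where the operator norm of $\mathrm{diag}(a,b)$ is simply $\max(|a|,|b|)$.

```python
# Illustration of Lemma 2.53 with diagonal operators on R^2 under the sup norm,
# where the operator norm of diag(a, b) is max(|a|, |b|).  The particular
# numbers below are arbitrary choices satisfying the lemma's hypothesis.
S = (2.0, 3.0)                      # S = diag(2, 3):  ||S^{-1}|| = 1/2
E = (0.5, 0.4)                      # E = diag(0.5, 0.4):  ||E|| = 0.5 < ||S^{-1}||^{-1} = 2
T = (S[0] - E[0], S[1] - E[1])      # T = S - E = diag(1.5, 2.6)

norm_S_inv = max(1.0 / abs(s) for s in S)      # ||S^{-1}|| = 0.5
norm_E = max(abs(e) for e in E)                # ||E|| = 0.5
norm_T_inv = max(1.0 / abs(t) for t in T)      # ||T^{-1}|| = 1/1.5

bound = 1.0 / (1.0 / norm_S_inv - norm_E)      # 1/(2 - 0.5) = 2/3
assert norm_E < 1.0 / norm_S_inv               # hypothesis of the lemma
assert norm_T_inv <= bound + 1e-12             # ||T^{-1}|| <= 1/(||S^{-1}||^{-1} - ||E||)
```

For this particular choice the bound is attained with equality, showing the estimate is sharp.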

We are now ready to prove the main result of this section.

Theorem 2.54 If $\mathcal{K} \in \mathcal{B}(X, X)$ is a compact operator not having one as an eigenvalue and conditions (H-1)–(H-4) hold, then there exist a positive integer $q$ and a positive constant $c$ such that for all $n \ge q$ both $(I - \mathcal{P}_n\mathcal{K}_n)^{-1}$ and $(I - \mathcal{K}_n\mathcal{P}_n)^{-1}$ are in $\mathcal{B}(X)$ and have norms which are uniformly bounded. Moreover, if $u$, $u_n$ and $\tilde u_n$ are the solutions of equations (2.95), (2.96) and (2.97), respectively, and a constant $p > 0$ is chosen so that $\|\mathcal{P}_n\| \le p$ for all $n \in \mathbb{N}$, then for all $n \ge q$,
$$\|u - u_n\| \le c\left(\|u - \mathcal{P}_n u\| + p\|\mathcal{K}u - \mathcal{K}_n u\|\right) \quad (2.102)$$
and
$$\|u - \tilde u_n\| \le c\left(\|\mathcal{K}(I - \mathcal{P}_n)u\| + \|(\mathcal{K} - \mathcal{K}_n)\mathcal{P}_n u\|\right). \quad (2.103)$$

Proof We first note that a straightforward computation leads to the formulas
$$[I + (I - \mathcal{K})^{-1}\mathcal{K}_n](I - \mathcal{P}_n\mathcal{K}_n) = I - (I - \mathcal{K})^{-1}\left[(\mathcal{P}_n - I)\mathcal{K}_n + (\mathcal{K}_n - \mathcal{K})\mathcal{P}_n\mathcal{K}_n\right]$$
and
$$[I + (I - \mathcal{K})^{-1}\mathcal{K}_n\mathcal{P}_n](I - \mathcal{K}_n\mathcal{P}_n) = I - (I - \mathcal{K})^{-1}(\mathcal{K}_n\mathcal{P}_n - \mathcal{K})\mathcal{K}_n\mathcal{P}_n.$$
By Lemma 2.52, there exists a $q > 0$ such that for all $n \ge q$ we have
$$\Delta_1 := \left\|(I - \mathcal{K})^{-1}\left[(\mathcal{P}_n - I)\mathcal{K}_n + (\mathcal{K}_n - \mathcal{K})\mathcal{P}_n\mathcal{K}_n\right]\right\| \le \frac{1}{2}$$
and
$$\Delta_2 := \left\|(I - \mathcal{K})^{-1}(\mathcal{K}_n\mathcal{P}_n - \mathcal{K})\mathcal{K}_n\mathcal{P}_n\right\| \le \frac{1}{2}.$$

It follows from Lemma 2.53, for all $n \ge q$, that the inverse operators $(I - \mathcal{P}_n\mathcal{K}_n)^{-1}$ and $(I - \mathcal{K}_n\mathcal{P}_n)^{-1}$ exist, and that the inequalities
$$\|(I - \mathcal{P}_n\mathcal{K}_n)^{-1}\| \le \frac{1 + \|(I - \mathcal{K})^{-1}\mathcal{K}_n\|}{1 - \Delta_1}$$
and
$$\|(I - \mathcal{K}_n\mathcal{P}_n)^{-1}\| \le \frac{1 + p\|(I - \mathcal{K})^{-1}\mathcal{K}_n\|}{1 - \Delta_2}$$
hold.

Since the set of operators $\{\mathcal{K}_n : n \in \mathbb{N}\}$ is collectively compact, it follows that the norms $\|\mathcal{K}_n\|$ are uniformly bounded for $n \in \mathbb{N}$, and so, by the above inequalities, the norms of both $(I - \mathcal{P}_n\mathcal{K}_n)^{-1}$ and $(I - \mathcal{K}_n\mathcal{P}_n)^{-1}$ are also uniformly bounded for $n \ge q$. Therefore, equations (2.96) and (2.98) have unique solutions for every $f \in X$.

It remains to prove the estimates (2.102) and (2.103). To this end, we note from equations (2.95), (2.96) and (2.98) that
$$(I - \mathcal{P}_n\mathcal{K}_n)u_n = \mathcal{P}_n(I - \mathcal{K})u$$
and
$$(I - \mathcal{K}_n\mathcal{P}_n)\tilde u_n = (I - \mathcal{K})u.$$
Using these equations, we obtain that
$$(I - \mathcal{P}_n\mathcal{K}_n)(u - u_n) = (u - \mathcal{P}_n u) + \mathcal{P}_n(\mathcal{K}u - \mathcal{K}_n u)$$
and
$$(I - \mathcal{K}_n\mathcal{P}_n)(u - \tilde u_n) = \mathcal{K}u - \mathcal{K}_n\mathcal{P}_n u = \mathcal{K}(u - \mathcal{P}_n u) + (\mathcal{K} - \mathcal{K}_n)\mathcal{P}_n u.$$
Therefore, from what we have already proved, we obtain the desired estimates.

We remark from the estimate (2.102) that the convergence rate of $u_n$ to $u$ depends only on the rates of approximation of $\mathcal{P}_n$ to the identity operator and of $\mathcal{K}_n$ to $\mathcal{K}$. Moreover, it is seen from (2.103) that if $\mathcal{K}_n$ approximates $\mathcal{K}$ faster than $\mathcal{P}_n$ converges to the identity, superconvergence of the iterated solution will result, since the first term on the right-hand side of (2.103) is more significant than the other. In fact, since for each $u \in X$ we have that
$$\|\mathcal{K}(I - \mathcal{P}_n)u\| = \|\mathcal{K}(I - \mathcal{P}_n)(I - \mathcal{P}_n)u\| \le \|\mathcal{K}(I - \mathcal{P}_n)\|\,\|(I - \mathcal{P}_n)u\|,$$


circumstances for which $\lim_{n\to\infty}\|\mathcal{K}(I - \mathcal{P}_n)\| = 0$ will lead to superconvergence. Examples of this phenomenon will be described in Section 4.1.

We also remark that when $X = V$ and $\mathcal{K}_n = \mathcal{K}$, Theorem 2.54 leads to the following well-known theorem.

Theorem 2.55 If $X$ is a Banach space, $\{X_n : n \in \mathbb{N}\}$ is a sequence of finite-dimensional subspaces of $X$, $\mathcal{K} : X \to X$ is a compact linear operator not having one as an eigenvalue, and $\mathcal{P}_n : X \to X_n$ is a sequence of linear projections that converges pointwise to the identity operator $I$ in $X$, then there exist an integer $q$ and a positive constant $c$ such that for all $n \ge q$ the equation
$$u_n - \mathcal{P}_n\mathcal{K}u_n = \mathcal{P}_n f$$
has a unique solution $u_n \in X_n$ and
$$\|u - u_n\| \le c\|u - \mathcal{P}_n u\|,$$
where $u$ is the solution of equation (2.95). Moreover, the iterated solution $\tilde u_n$ defined by (2.97) with $\mathcal{K}_n = \mathcal{K}$ satisfies the estimate
$$\|u - \tilde u_n\| \le c\|\mathcal{K}(I - \mathcal{P}_n)u\|.$$

Our final remark is that when $X = V$ and $\mathcal{P}_n = I$, Theorem 2.54 leads to the following theorem.

Theorem 2.56 If $X$ is a Banach space, $\mathcal{K} \in \mathcal{B}(X, X)$ is a compact operator not having one as an eigenvalue, and conditions (H-1) and (H-2) hold, then there exist an integer $q$ and a positive constant $c$ such that for all $n \ge q$ the equation
$$u_n - \mathcal{K}_n u_n = f$$
has a unique solution $u_n \in X$ and
$$\|u - u_n\| \le c\|\mathcal{K}u - \mathcal{K}_n u\|,$$
where $u$ is the solution of equation (2.95).

2.3 Bibliographical remarks

The basic concepts and results of Fredholm integral equations and of projection theory may be found in many well-written books on integral equations, such as [15, 110, 121, 177, 203, 253]. In particular, for the subjects of weakly singular integral operators and boundary integral equations, we recommend the books [15, 177, 203]. The function spaces used in this book are usually


covered in standard texts (for example, [1, 183, 236, 276]). Readers are referred to [15, 47, 177, 183, 203, 236] for additional information on the notions of compact operators and weakly singular integral operators, and to the Appendix of this book for the basic elements of functional analysis. Moreover, readers may find additional details on boundary integral equations in [12, 15, 22, 121, 144, 150–153, 177, 203, 267].

Regarding projection methods, readers may consult [15, 47, 175–177, 276]. In particular, the notion of generalized best approximation projections was originally introduced in [77]. For the approximate solvability of projection methods for operator equations, Theorems 2.33 and 2.35 provide, respectively, necessary and sufficient conditions, which can be compared with those in [47] and [177]. For the abstract framework for second-kind operator equations, the theory of collectively compact operators [6, 15] presents a convenient abstract setting for the analysis of many numerical schemes. In Section 2.2.4 we improve this framework to fit more general circumstances; for this point, readers are referred to the paper [80]. More information about superconvergence of the iterated scheme may be found in [60, 246, 247].


3

Conventional numerical methods

This chapter is designed to provide readers with a background on conventional methods for the numerical solution of the Fredholm integral equation of the second kind defined on a compact domain of a Euclidean space. Specifically, we discuss the degenerate kernel method, the quadrature method, the Galerkin method, the collocation method and the Petrov–Galerkin method.

Let $\Omega$ be a compact measurable domain in $\mathbb{R}^d$ having a piecewise smooth boundary. We present in this chapter several conventional numerical methods for solving the Fredholm integral equation of the second kind in the form
$$u - \mathcal{K}u = f, \quad (3.1)$$
where
$$(\mathcal{K}u)(s) = \int_\Omega K(s, t)u(t)\,dt, \quad s \in \Omega.$$
We describe the principles used in the development of the numerical methods and their convergence analysis.

3.1 Degenerate kernel methods

In this section we describe the degenerate kernel method for solving the Fredholm integral equation of the second kind. For this purpose, we assume that $X$ is either $C(\Omega)$ or $L^2(\Omega)$ with the appropriate norm $\|\cdot\|$. The integral operator $\mathcal{K}$ is assumed to be a compact operator from $X$ to $X$.

3.1.1 A general form of the degenerate kernel method

The degenerate kernel method approximates the original integral equation by replacing its kernel with a sequence of kernels having the form
$$K_n(s, t) = \sum_{j\in\mathbb{N}_n} K_j^1(s)K_j^2(t), \quad s, t \in \Omega, \quad (3.2)$$
where $K_j^1, K_j^2 \in X$ and may depend on $n$. A kernel of this type is called a degenerate kernel. We require that the integral operators $\mathcal{K}_n$ with kernels $K_n$

uniformly converge to the integral operator $\mathcal{K}$; that is, $\mathcal{K}_n \xrightarrow{u} \mathcal{K}$. The degenerate kernel method for solving (3.1) finds $u_n \in X$ such that
$$u_n - \mathcal{K}_n u_n = f. \quad (3.3)$$

For the unique existence and convergence of the approximate solution of the degenerate kernel method, we have the following theorem.

Theorem 3.1 Let $X$ be a Banach space and let $\mathcal{K} \in \mathcal{B}(X)$ be a compact operator not having one as its eigenvalue. If the operators $\mathcal{K}_n \in \mathcal{B}(X)$ uniformly converge to $\mathcal{K}$, then there exists a positive integer $q$ such that for all $n \ge q$ the inverse operators $(I - \mathcal{K}_n)^{-1}$ exist from $X$ to $X$ and
$$\|(I - \mathcal{K}_n)^{-1}\| \le \frac{\|(I - \mathcal{K})^{-1}\|}{1 - \|(I - \mathcal{K})^{-1}\|\,\|\mathcal{K} - \mathcal{K}_n\|}.$$
Moreover, the error estimate
$$\|u_n - u\| \le \|(I - \mathcal{K}_n)^{-1}\|\,\|(\mathcal{K} - \mathcal{K}_n)u\|$$
holds.

Proof Note that $(I - \mathcal{K})^{-1}$ exists as a bounded linear operator on $X$ (see Theorem A.48 in the Appendix). The first result of this theorem follows from Lemma 2.53 with $\mathcal{S} = I - \mathcal{K}$ and $\mathcal{E} = \mathcal{K}_n - \mathcal{K}$. For the second result, we have that
$$u_n - u = (I - \mathcal{K}_n)^{-1}f - (I - \mathcal{K})^{-1}f = (I - \mathcal{K}_n)^{-1}(\mathcal{K}_n - \mathcal{K})(I - \mathcal{K})^{-1}f = (I - \mathcal{K}_n)^{-1}(\mathcal{K}_n - \mathcal{K})u,$$
which yields the second estimate.

According to the second estimate of Theorem 3.1, we obtain that
$$\|u_n - u\| \le \|(I - \mathcal{K}_n)^{-1}\|\,\|\mathcal{K} - \mathcal{K}_n\|\,\|u\|.$$
This means that the speed of convergence of $\|u_n - u\|$ to zero depends on the speed of convergence of $\|\mathcal{K} - \mathcal{K}_n\|$ to zero. This is determined by the choice of


the kernels $K_n$ and is independent of the differentiability of $u$. It is clear that when $X = C(\Omega)$ we have that
$$\|\mathcal{K} - \mathcal{K}_n\| = \max_{s\in\Omega} \int_\Omega |K(s, t) - K_n(s, t)|\,dt, \quad (3.4)$$
and when $X = L^2(\Omega)$ we have that
$$\|\mathcal{K} - \mathcal{K}_n\| \le \left(\int_\Omega\int_\Omega |K(s, t) - K_n(s, t)|^2\,ds\,dt\right)^{1/2}. \quad (3.5)$$

We now discuss the algebraic aspects of the degenerate kernel method.

Proposition 3.2 If $u_n$ is the solution of the degenerate kernel method (3.3), then it can be given by
$$u_n = f + \sum_{j\in\mathbb{N}_n} v_j K_j^1, \quad (3.6)$$
in which $[v_j : j \in \mathbb{N}_n]$ is a solution of the linear system
$$v_i - \sum_{j\in\mathbb{N}_n} (K_j^1, K_i^2)v_j = (f, K_i^2), \quad i \in \mathbb{N}_n, \quad (3.7)$$
where $(\cdot, \cdot)$ denotes the $L^2(\Omega)$ inner product.

Proof If $u_n$ is the solution of equation (3.3), then by using (3.2) we have that
$$u_n(s) - \sum_{j\in\mathbb{N}_n} K_j^1(s)\int_\Omega K_j^2(t)u_n(t)\,dt = f(s), \quad s \in \Omega. \quad (3.8)$$
This means that the solution $u_n$ can be written as (3.6) with
$$v_j = \int_\Omega K_j^2(t)u_n(t)\,dt.$$
Multiplying (3.6) by $K_i^2(s)$ and integrating over $\Omega$, we find that $[v_j : j \in \mathbb{N}_n]$ must satisfy the linear system (3.7).

Linear system (3.7) may be written in matrix form. To this end, we define
$$\mathbf{K}_n = [(K_j^1, K_i^2) : i, j \in \mathbb{N}_n], \quad \mathbf{v}_n = [v_j : j \in \mathbb{N}_n], \quad \mathbf{f}_n = [(f, K_i^2) : i \in \mathbb{N}_n],$$
and let $\mathbf{I}_n$ denote the identity matrix of order $n$. Then (3.7) can be rewritten as
$$(\mathbf{I}_n - \mathbf{K}_n)\mathbf{v}_n = \mathbf{f}_n.$$
We next consider the invertibility of the matrix $\mathbf{I}_n - \mathbf{K}_n$.


Proposition 3.3 If $X$ is a Banach space, $\mathcal{K} \in \mathcal{B}(X)$ is a compact operator not having one as its eigenvalue and the operators $\mathcal{K}_n \in \mathcal{B}(X)$ uniformly converge to $\mathcal{K}$, then there exists a positive integer $q$ such that for all $n \ge q$ the coefficient matrix $\mathbf{I}_n - \mathbf{K}_n$ of the linear system (3.7) is nonsingular.

Proof It follows from Theorem 3.1 that there exists a positive integer $q$ such that for all $n \ge q$, $(I - \mathcal{K}_n)^{-1}$ exists. This with Proposition 3.2 leads to the conclusion that (3.7) is solvable for any right-hand side of the form $\mathbf{b} = [b_i : i \in \mathbb{N}_n] = [(f, K_i^2) : i \in \mathbb{N}_n]$ with $f \in X$. We proceed with this proof in two cases.

Case 1: $\{K_i^2 : i \in \mathbb{N}_n\}$ is a linearly independent set of functions. To prove that the coefficient matrix of (3.7) is nonsingular, it is sufficient to prove that (3.7) with any right-hand side $\mathbf{b} \in \mathbb{R}^n$ is solvable, or equivalently, that for any $\mathbf{b} \in \mathbb{R}^n$ there exists a function $f \in X$ such that $[(f, K_i^2) : i \in \mathbb{N}_n] = \mathbf{b}$. To do this, we let $f = \sum_{j\in\mathbb{N}_n} c_j K_j^2$ and consider the equation
$$\sum_{j\in\mathbb{N}_n} (K_j^2, K_i^2)c_j = b_i, \quad i \in \mathbb{N}_n. \quad (3.9)$$
The coefficient matrix $[(K_j^2, K_i^2) : i, j \in \mathbb{N}_n]$ is a Gram matrix, so it is positive semi-definite. Since $\{K_i^2 : i \in \mathbb{N}_n\}$ is linearly independent, this matrix is positive definite. This means that (3.9) is solvable and the function $f$ indeed exists. Thus, the coefficient matrix $\mathbf{I}_n - \mathbf{K}_n$ of (3.7) is nonsingular.

Case 2: $\{K_i^2 : i \in \mathbb{N}_n\}$ is a linearly dependent set of functions. In this case there is a nonsingular matrix $\mathbf{Q}_n$ such that
$$[K_1^2, \ldots, K_n^2]\mathbf{Q}_n^T = [\tilde K_1^2, \ldots, \tilde K_r^2, 0, \ldots, 0],$$
where $\{\tilde K_i^2 : i \in \mathbb{N}_r\}$, $0 < r < n$, is a linearly independent set of functions. Let
$$[\tilde K_1^1, \ldots, \tilde K_n^1] = [K_1^1, \ldots, K_n^1]\mathbf{Q}_n^{-1}$$
and
$$\tilde{\mathbf{K}}_r = [(\tilde K_j^1, \tilde K_i^2) : i, j \in \mathbb{N}_r].$$
We then have that
$$\mathbf{Q}_n(\mathbf{I}_n - \mathbf{K}_n)\mathbf{Q}_n^{-1} = \begin{bmatrix} \mathbf{I}_r - \tilde{\mathbf{K}}_r & * \\ 0 & \mathbf{I}_{n-r} \end{bmatrix}.$$
Noting that $\mathbf{I}_r - \tilde{\mathbf{K}}_r$ is the coefficient matrix associated with the degenerate kernel $\tilde K_r(s, t) = \sum_{j\in\mathbb{N}_r} \tilde K_j^1(s)\tilde K_j^2(t)$, and that $\{\tilde K_i^2 : i \in \mathbb{N}_r\}$ is a linearly independent set of functions, we conclude from Case 1 that $\mathbf{I}_r - \tilde{\mathbf{K}}_r$ is nonsingular, and thus $\mathbf{I}_n - \mathbf{K}_n$ is nonsingular.


When the hypothesis of the above proposition is satisfied, we solve (3.7) for $[v_j : j \in \mathbb{N}_n]$ and obtain from (3.6) the solution $u_n$ of the degenerate kernel method (3.3).
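As a concrete illustration of this procedure (a sketch, not from the book), the following code carries out the degenerate kernel method (3.6)–(3.7) for the separable kernel $K(s,t) = st$ on $[0,1]$, for which the degenerate representation is exact with $K_1^1(s) = s$, $K_1^2(t) = t$; the helper `inner` and the choice $f(s) = s$ are assumptions made for this example. With $f(s) = s$ the exact solution is $u(s) = \tfrac{3}{2}s$.

```python
# Sketch of the degenerate kernel method (3.6)-(3.7) on Omega = [0, 1].
# The kernel K(s,t) = s*t is already degenerate with K1_1(s) = s, K2_1(t) = t,
# so the method is exact here; f(s) = s gives the exact solution u(s) = (3/2)s.
from math import fsum

def inner(g1, g2, m=200):
    # L2([0,1]) inner product approximated by the composite trapezoidal rule
    h = 1.0 / m
    vals = [g1(i * h) * g2(i * h) for i in range(m + 1)]
    return h * (fsum(vals) - 0.5 * (vals[0] + vals[-1]))

K1 = [lambda s: s]          # the functions K1_j
K2 = [lambda t: t]          # the functions K2_j
f = lambda s: s

# linear system (3.7): v_i - sum_j (K1_j, K2_i) v_j = (f, K2_i); here it is 1x1
a = 1.0 - inner(K1[0], K2[0])      # 1 - \int_0^1 t^2 dt = 1 - 1/3
b = inner(f, K2[0])                # \int_0^1 t^2 dt = 1/3
v = b / a                          # = 1/2 (up to quadrature error)

u_n = lambda s: f(s) + v * K1[0](s)     # representation (3.6)
```

Evaluating `u_n(0.7)` returns approximately $1.05 = \tfrac{3}{2}\cdot 0.7$, as the exact solution predicts.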

3.1.2 Degenerate kernel approximations via interpolation

A natural way to construct degenerate kernel approximations is by interpolation. We may employ polynomials, piecewise polynomials, trigonometric polynomials and others as a basis to construct an interpolation of the kernel function.

We now consider Lagrange interpolation. Let $\{t_j : j \in \mathbb{N}_n\}$ be a finite subset of the domain $\Omega$ and let $\{L_j : j \in \mathbb{N}_n\}$ be the Lagrange basis functions satisfying
$$L_j(t_i) = \delta_{ij}, \quad i, j \in \mathbb{N}_n.$$
The kernel $K$ can be approximated by the kernel $K_n$ interpolating $K$ with respect to $s$ or $t$. That is, we have
$$K_n(s, t) = \sum_{j\in\mathbb{N}_n} L_j(s)K(t_j, t)$$
or
$$K_n(s, t) = \sum_{j\in\mathbb{N}_n} K(s, t_j)L_j(t).$$

Using the former, the linear system (3.7) becomes
$$v_i - \sum_{j\in\mathbb{N}_n} v_j \int_\Omega L_j(t)K(t_i, t)\,dt = \int_\Omega f(t)K(t_i, t)\,dt, \quad i \in \mathbb{N}_n, \quad (3.10)$$
and the solution is given by
$$u_n = f + \sum_{j\in\mathbb{N}_n} v_j L_j, \quad (3.11)$$
while using the latter, the linear system (3.7) becomes
$$v_i - \sum_{j\in\mathbb{N}_n} v_j \int_\Omega K(s, t_j)L_i(s)\,ds = \int_\Omega f(s)L_i(s)\,ds, \quad i \in \mathbb{N}_n, \quad (3.12)$$
and the solution is given by
$$u_n = f + \sum_{j\in\mathbb{N}_n} v_j K(\cdot, t_j). \quad (3.13)$$
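The interpolation property underlying these two constructions can be checked directly. The sketch below (an illustration, with a hypothetical node set and test kernel) verifies that $K_n(s,t) = \sum_j L_j(s)K(t_j,t)$ agrees with $K$ at the interpolation points, since $L_j(t_i) = \delta_{ij}$.

```python
# The interpolatory degenerate kernel K_n(s,t) = sum_j L_j(s) K(t_j, t)
# reproduces K exactly in s at the nodes t_i, since L_j(t_i) = delta_ij.
# The node set and the kernel exp(s*t) are arbitrary choices for illustration.
from math import exp, prod

nodes = [0.0, 0.5, 1.0]

def lagrange(j, s):
    # Lagrange basis polynomial L_j for the node set above
    return prod((s - t) / (nodes[j] - t) for k, t in enumerate(nodes) if k != j)

K = lambda s, t: exp(s * t)
K_n = lambda s, t: sum(lagrange(j, s) * K(nodes[j], t) for j in range(len(nodes)))

for ti in nodes:
    assert abs(K_n(ti, 0.3) - K(ti, 0.3)) < 1e-12    # K_n(t_i, .) = K(t_i, .)
```

Between the nodes, $K_n$ is only an approximation of $K$; its accuracy is governed by the usual Lagrange interpolation error in the $s$ variable.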


As an example of Lagrange interpolation, we consider continuous piecewise linear polynomials, that is, linear splines on $\Omega = [a, b]$. Let $t_j = a + jh$ with $h = (b - a)/n$, $j \in \mathbb{Z}_{n+1}$, $n \in \mathbb{N}$. The basis functions are chosen as
$$L_j(t) = \begin{cases} 1 - |t - t_j|/h, & t \in [t_{j-1}, t_{j+1}], \\ 0, & \text{otherwise}. \end{cases}$$
We obtain the degenerate kernel approximation by interpolating $K$ with respect to $s$. Specifically, for $t \in \Omega$ we have that
$$K_n(s, t) = \left[(t_j - s)K(t_{j-1}, t) + (s - t_{j-1})K(t_j, t)\right]/h, \quad s \in [t_{j-1}, t_j], \quad j \in \mathbb{N}_n.$$

For this example we have the following error estimate.

Proposition 3.4 If $K(\cdot, t) \in C^2(\Omega)$ for any $t \in \Omega$ and $\partial^2 K/\partial s^2 \in C(\Omega\times\Omega)$, then
$$\|\mathcal{K} - \mathcal{K}_n\| \le \frac{1}{8}h^2(b - a)\left\|\frac{\partial^2 K}{\partial s^2}\right\|_\infty.$$

Proof By using the Taylor formula, it can easily be derived that
$$|K(s, t) - K_n(s, t)| \le \frac{1}{8}h^2\left\|\frac{\partial^2 K(\cdot, t)}{\partial s^2}\right\|_\infty$$
for any $s \in [t_{j-1}, t_j]$, $t \in \Omega$. This with (3.4) leads to the desired result of this proposition.
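A hedged numerical check of Proposition 3.4 can be sketched as follows (not from the book; the kernel $K(s,t) = e^{st}$ on $[0,1]$ and the grid are arbitrary choices): since $\partial^2 K/\partial s^2 = t^2 e^{st}$, we have $\|\partial^2 K/\partial s^2\|_\infty = e$, and the sampled value of the norm (3.4) should stay below the stated bound.

```python
# Numerical check (a sketch, not from the book) of Proposition 3.4 for
# K(s,t) = exp(s t) on [a,b] = [0,1]; here d2K/ds2 = t^2 exp(s t), so
# ||d2K/ds2||_inf = e and the bound is (1/8) h^2 (b - a) e.
from math import exp, e

n = 4
h = 1.0 / n
t_nodes = [j * h for j in range(n + 1)]

def K(s, t):
    return exp(s * t)

def K_n(s, t):
    # piecewise linear interpolation of K in the s variable on t_nodes
    j = min(max(int(s / h), 0), n - 1)        # interval [t_j, t_{j+1}] containing s
    sl, sr = t_nodes[j], t_nodes[j + 1]
    return ((sr - s) * K(sl, t) + (s - sl) * K(sr, t)) / h

def err_at(s, m=400):
    # trapezoidal estimate of  \int_0^1 |K(s,t) - K_n(s,t)| dt
    hm = 1.0 / m
    vals = [abs(K(s, i * hm) - K_n(s, i * hm)) for i in range(m + 1)]
    return hm * (sum(vals) - 0.5 * (vals[0] + vals[-1]))

# sample the outer max over s; this underestimates the true sup in (3.4)
norm_est = max(err_at(i / 200.0) for i in range(201))
bound = h * h * (1.0 - 0.0) * e / 8.0
assert 0.0 < norm_est <= bound
```

Repeating the experiment with $n$ doubled shows the estimate shrinking by roughly a factor of four, consistent with the $O(h^2)$ rate.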

3.1.3 Degenerate kernel approximations via expansion

An alternative way to construct degenerate kernel approximations is by expansion, such as Taylor expansions and Fourier expansions of the kernel $K$. We now introduce the latter.

Let $X = L^2(\Omega)$ with inner product $(\cdot, \cdot)$, which may be defined with respect to a weight function. Let $\{F_j : j \in \mathbb{N}\}$ be a complete orthonormal sequence in $X$. Then for any $x \in X$ we have the Fourier expansion of $x$ with respect to $\{F_j : j \in \mathbb{N}\}$:
$$x = \sum_{j\in\mathbb{N}} (x, F_j)F_j.$$
This can be used for the construction of approximate degenerate kernels of $K$ with respect to either variable. For example, we may define
$$K_n(s, t) = \sum_{j\in\mathbb{N}_n} F_j(s)\left(K(\cdot, t), F_j(\cdot)\right).$$


Let $G_j(t) = (K(\cdot, t), F_j(\cdot))$. Then the linear system (3.7) becomes
$$v_i - \sum_{j\in\mathbb{N}_n} v_j(F_j, G_i) = (f, G_i), \quad i \in \mathbb{N}_n, \quad (3.14)$$
and the solution is given by
$$u_n = f + \sum_{j\in\mathbb{N}_n} v_j F_j. \quad (3.15)$$

Proposition 3.5 If $K \in L^2(\Omega\times\Omega)$ and $\{F_j : j \in \mathbb{N}\}$ is a complete orthonormal set in $L^2(\Omega)$, then
$$\|\mathcal{K} - \mathcal{K}_n\| \le \left(\sum_{j\in\mathbb{N}\setminus\mathbb{N}_n} \left\|\left(K(\cdot, \bullet), F_j(\cdot)\right)\right\|^2\right)^{1/2}.$$

Proof Note that
$$K(s, t) - K_n(s, t) = \sum_{j\in\mathbb{N}\setminus\mathbb{N}_n} F_j(s)\left(K(\cdot, t), F_j(\cdot)\right).$$
By employing the orthonormality of the sequence $\{F_j : j \in \mathbb{N}\}$, it follows from (3.5) that
$$\|\mathcal{K} - \mathcal{K}_n\| \le \left(\int_\Omega\int_\Omega |K(s, t) - K_n(s, t)|^2\,ds\,dt\right)^{1/2} = \left(\sum_{j\in\mathbb{N}\setminus\mathbb{N}_n} \left\|\left(K(\cdot, \bullet), F_j(\cdot)\right)\right\|^2\right)^{1/2}.$$

3.2 Quadrature methods

In this section we introduce the quadrature, or Nyström, method for solving Fredholm integral equations of the second kind. This method discretizes the integral equation by directly replacing the integral appearing in the equation by numerical quadratures.

3.2.1 Numerical quadratures

We begin by introducing numerical integration. Let $\Omega \subseteq \mathbb{R}^d$ be a compact set and assume that $g \in C(\Omega)$. To approximate the integral
$$Q(g) = \int_\Omega g(t)\,dt,$$
we consider numerical quadrature rules of the form
$$Q_n(g) = \sum_{j\in\mathbb{N}_n} w_{nj}g(t_{nj}),$$
where $t_{nj} \in \Omega$, $j \in \mathbb{N}_n$, are quadrature nodes and $w_{nj}$, $j \in \mathbb{N}_n$, are real quadrature weights.

The following are some examples of quadrature rules.

Example 3.6 Consider the trapezoidal quadrature rule
$$Q_n(g) = h\left[\tfrac{1}{2}g(t_0) + g(t_1) + \cdots + g(t_{n-1}) + \tfrac{1}{2}g(t_n)\right]$$
on $\Omega = [a, b]$, where $h = (b - a)/n$, $t_j = a + jh$, $j \in \mathbb{Z}_{n+1}$. When $g \in C^2(\Omega)$, the error of the trapezoidal rule has the estimate ([172], p. 481)
$$|Q(g) - Q_n(g)| \le \frac{b - a}{12}h^2\|g''\|_\infty.$$

Example 3.7 Consider the Simpson quadrature rule
$$Q_n(g) = \frac{h}{3}\left[g(t_0) + 4g(t_1) + 2g(t_2) + \cdots + 2g(t_{n-2}) + 4g(t_{n-1}) + g(t_n)\right]$$
on $\Omega = [a, b]$, where $h = (b - a)/n$, $t_j = a + jh$, $j \in \mathbb{Z}_{n+1}$, and $n$ is even. When $g \in C^4(\Omega)$, the error of the Simpson rule has the estimate ([172], p. 483)
$$|Q(g) - Q_n(g)| \le \frac{b - a}{180}h^4\|g^{(4)}\|_\infty.$$
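The two estimates above are easy to check numerically; the sketch below (an illustration, with $g(t) = e^t$ on $[0,1]$ as an arbitrary test integrand, for which $\|g''\|_\infty = \|g^{(4)}\|_\infty = e$) compares the composite trapezoidal and Simpson rules against their error bounds.

```python
# Hedged check of the error estimates in Examples 3.6 and 3.7 for
# g(t) = exp(t) on [a,b] = [0,1], where ||g''||_inf = ||g^(4)||_inf = e.
from math import exp, e

a, b, n = 0.0, 1.0, 8          # n even, as Simpson's rule requires
h = (b - a) / n
t = [a + j * h for j in range(n + 1)]
g = [exp(tj) for tj in t]
exact = exp(b) - exp(a)        # \int_0^1 e^t dt = e - 1

trap = h * (0.5 * g[0] + sum(g[1:n]) + 0.5 * g[n])
simp = (h / 3.0) * (g[0] + g[n]
                    + 4.0 * sum(g[j] for j in range(1, n, 2))
                    + 2.0 * sum(g[j] for j in range(2, n, 2)))

assert abs(exact - trap) <= (b - a) / 12.0 * h**2 * e     # Example 3.6 bound
assert abs(exact - simp) <= (b - a) / 180.0 * h**4 * e    # Example 3.7 bound
```

As expected, the Simpson error is several orders of magnitude smaller than the trapezoidal error at the same $h$, reflecting the $O(h^4)$ versus $O(h^2)$ rates.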

Example 3.8 Let $\{\psi_j : j \in \mathbb{Z}_{n+1}\}$ be a family of orthogonal polynomials of degree $\le n$ on $\Omega = [a, b]$ with respect to a non-negative weight function $\rho$, and let $\{t_j : j \in \mathbb{N}_n\}$ be the zeros of the function $\psi_n$. Consider the Gaussian quadrature rule
$$Q_n(g) = \sum_{j\in\mathbb{N}_n} w_j g(t_j) \quad (3.16)$$
for the integral $Q(g)$, where
$$w_j = \int_\Omega \rho(t)L_{nj}(t)\,dt$$
and $L_{nj}$, $j \in \mathbb{N}_n$, are the Lagrange interpolation polynomials of degree $n - 1$, which have the form
$$L_{nj}(t) = \frac{\prod_{i\in\mathbb{N}_n, i\ne j}(t - t_i)}{\prod_{i\in\mathbb{N}_n, i\ne j}(t_j - t_i)}, \quad j \in \mathbb{N}_n.$$


When $g \in C^{2n}(\Omega)$, the error of the Gaussian quadrature rule has the estimate ([172], p. 497)
$$Q(g) - Q_n(g) = \frac{g^{(2n)}(\eta)}{(2n)!}\int_\Omega \rho(t)\prod_{i\in\mathbb{N}_n}(t - t_i)^2\,dt$$
for some $\eta \in \Omega$.

When $\Omega = [-1, 1]$, $\rho = 1$ and $\{\psi_j : j \in \mathbb{Z}_{n+1}\}$ is chosen to be the set of Legendre polynomials
$$\psi_j(t) = \frac{1}{2^j j!}\frac{d^j}{dt^j}\left[(t^2 - 1)^j\right], \quad j \in \mathbb{Z}_{n+1},$$
formula (3.16) is called the Gauss–Legendre quadrature formula, in which
$$w_j = \frac{2}{n}\cdot\frac{1}{\psi_{n-1}(t_j)\psi_n'(t_j)}.$$

This example shows that the Nyström method using the Gaussian quadrature formula has rapid convergence. We now turn to considering the convergence of a sequence of general quadrature rules.
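The Gauss–Legendre nodes and weights can be generated from the three-term recurrence for the Legendre polynomials together with Newton's method, using the weight formula just quoted; the following is a sketch (not the book's code, and intended only for modest $n$).

```python
# Sketch: Gauss-Legendre nodes and weights on [-1,1] via the three-term
# recurrence and Newton's method, with weights from the formula
# w_j = 2 / (n * psi_{n-1}(t_j) * psi_n'(t_j)) quoted above.
from math import cos, pi

def legendre_pair(n, x):
    # returns (P_n(x), P_{n-1}(x)) via (j+1) P_{j+1} = (2j+1) x P_j - j P_{j-1}
    p_prev, p = 1.0, x
    for j in range(1, n):
        p_prev, p = p, ((2 * j + 1) * x * p - j * p_prev) / (j + 1)
    return p, p_prev

def gauss_legendre(n):
    nodes, weights = [], []
    for i in range(1, n + 1):
        x = cos(pi * (i - 0.25) / (n + 0.5))        # standard initial guess
        for _ in range(100):                        # Newton iterations for psi_n(x) = 0
            p, p_prev = legendre_pair(n, x)
            dp = n * (x * p - p_prev) / (x * x - 1.0)
            x -= p / dp
        p, p_prev = legendre_pair(n, x)
        dp = n * (x * p - p_prev) / (x * x - 1.0)
        nodes.append(x)
        weights.append(2.0 / (n * p_prev * dp))     # weight formula from the text
    return nodes, weights

nodes, weights = gauss_legendre(3)
assert abs(sum(weights) - 2.0) < 1e-12              # the rule integrates 1 exactly
quad = sum(w * x**4 for x, w in zip(nodes, weights))
assert abs(quad - 0.4) < 1e-12      # exact for degree <= 2n-1 = 5: \int x^4 = 2/5
```

For $n = 3$ this reproduces the classical nodes $\pm\sqrt{3/5}, 0$ with weights $5/9, 8/9, 5/9$.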

Definition 3.9 A sequence $\{Q_n : n \in \mathbb{N}\}$ of quadrature rules is called convergent if the sequence $Q_n$, $n \in \mathbb{N}$, converges pointwise to the functional $Q$ on $C(\Omega)$.

The next result characterizes convergent quadrature rules.

Proposition 3.10 A sequence $\{Q_n : n \in \mathbb{N}\}$ of quadrature rules with weights $w_{nj}$, $j \in \mathbb{N}_n$, converges if and only if
$$\sup_{n\in\mathbb{N}} \sum_{j\in\mathbb{N}_n} |w_{nj}| < \infty$$
and
$$Q_n(g) \to Q(g), \quad n \to \infty,$$
for all $g$ in a dense subset $U \subseteq C(\Omega)$.

Proof It can easily be verified that
$$\|Q_n\|_\infty = \sum_{j\in\mathbb{N}_n} |w_{nj}|.$$
Thus, the result of this proposition follows directly from the Banach–Steinhaus theorem (Corollary A.27 in the Appendix).


3.2.2 The Nyström method for continuous kernels

Using a sequence $\{Q_n : n \in \mathbb{N}\}$ of numerical quadrature rules, the integral operator
$$(\mathcal{K}u)(s) = \int_\Omega K(s, t)u(t)\,dt, \quad s \in \Omega,$$
with a continuous kernel $K \in C(\Omega\times\Omega)$ is approximated by a sequence of summation operators
$$(\mathcal{K}_n u)(s) = \sum_{j\in\mathbb{N}_n} w_j K(s, t_j)u(t_j), \quad s \in \Omega.$$

Accordingly, the integral equation (3.1) is approximated by a sequence of discrete equations
$$u_n - \mathcal{K}_n u_n = f, \quad (3.17)$$
or
$$u_n(s) - \sum_{j\in\mathbb{N}_n} w_j K(s, t_j)u_n(t_j) = f(s), \quad s \in \Omega. \quad (3.18)$$
We specify equation (3.18) at the quadrature points $t_i$, $i \in \mathbb{N}_n$, and obtain the linear system
$$u_n(t_i) - \sum_{j\in\mathbb{N}_n} w_j K(t_i, t_j)u_n(t_j) = f(t_i), \quad i \in \mathbb{N}_n, \quad (3.19)$$

where the unknown is the vector $[u_n(t_j) : j \in \mathbb{N}_n]$. We summarize the above discussion in the following proposition.

Proposition 3.11 For the solution $u_n$ of (3.18), let $u_{nj} = u_n(t_j)$, $j \in \mathbb{N}_n$. Then $[u_{nj} : j \in \mathbb{N}_n]$ satisfies the linear system
$$u_{ni} - \sum_{j\in\mathbb{N}_n} w_j K(t_i, t_j)u_{nj} = f(t_i), \quad i \in \mathbb{N}_n. \quad (3.20)$$
Conversely, if $[u_{nj} : j \in \mathbb{N}_n]$ is a solution of (3.20), then the function $u_n$ defined by
$$u_n(s) = f(s) + \sum_{j\in\mathbb{N}_n} w_j K(s, t_j)u_{nj}, \quad s \in \Omega, \quad (3.21)$$
solves equation (3.18).

Proof The first statement is trivial. Next, if $[u_{nj} : j \in \mathbb{N}_n]$ is a solution of (3.20), then we have from (3.21) and (3.20) that
$$u_n(t_i) = f(t_i) + \sum_{j\in\mathbb{N}_n} w_j K(t_i, t_j)u_{nj} = u_{ni}.$$
From (3.21) and the above equation, we find that $u_n$ satisfies (3.18).

Formula (3.21) can be viewed as an interpolation formula which extends the numerical solution of the linear system (3.19) to all points $s \in \Omega$; it is called the Nyström interpolation formula.
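The scheme (3.19) together with the interpolation formula (3.21) can be sketched as follows (an illustration, not the book's code): the kernel $K(s,t) = st$ and the manufactured right-hand side $f(s) = e^s - s$, chosen so that the exact solution is $u(s) = e^s$ (since $\int_0^1 t e^t\,dt = 1$), are assumptions of this example.

```python
# Sketch of the Nystrom method (3.19)/(3.21) with the trapezoidal rule on
# [0,1] for K(s,t) = s*t.  With u(s) = e^s one has (Ku)(s) = s \int_0^1 t e^t dt
# = s, so choosing f(s) = e^s - s makes u the exact solution of (3.1).
from math import exp

n = 100
h = 1.0 / n
t = [j * h for j in range(n + 1)]
w = [h] * (n + 1); w[0] = w[-1] = h / 2.0       # trapezoidal weights

K = lambda s, tt: s * tt
f = lambda s: exp(s) - s

# assemble and solve (I - K_n) u_vec = f_vec by naive Gaussian elimination
m = n + 1
A = [[(1.0 if i == j else 0.0) - w[j] * K(t[i], t[j]) for j in range(m)]
     for i in range(m)]
rhs = [f(ti) for ti in t]
for c in range(m):                              # forward elimination, no pivoting
    for r in range(c + 1, m):
        factor = A[r][c] / A[c][c]
        for k in range(c, m):
            A[r][k] -= factor * A[c][k]
        rhs[r] -= factor * rhs[c]
u_vec = [0.0] * m
for r in range(m - 1, -1, -1):                  # back substitution
    u_vec[r] = (rhs[r] - sum(A[r][k] * u_vec[k] for k in range(r + 1, m))) / A[r][r]

# Nystrom interpolation formula (3.21) extends the solution off the nodes
u_n = lambda s: f(s) + sum(w[j] * K(s, t[j]) * u_vec[j] for j in range(m))
assert abs(u_n(0.5) - exp(0.5)) < 1e-3
```

The observed error is of the order of the trapezoidal quadrature error applied to $t e^t$, in line with the estimate of Theorem 3.12 below.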

We now consider the error analysis of the Nyström method. Unlike the degenerate kernel method, we do not expect uniform convergence of the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ of approximate operators to the integral operator $\mathcal{K}$ in the Nyström method. In fact, $\|\mathcal{K}_n - \mathcal{K}\| \ge \|\mathcal{K}\|$. To see this, for any small positive constant $\varepsilon$ we can choose a function $\varphi_\varepsilon \in C(\Omega)$ such that $\|\varphi_\varepsilon\|_\infty = 1$, $\varphi_\varepsilon(t_j) = 0$ for all $j \in \mathbb{N}_n$ and $\varphi_\varepsilon(s) = 1$ for all $s \in \Omega$ with $\min_{j\in\mathbb{N}_n}|s - t_j| \ge \varepsilon$. For this choice of $\varphi_\varepsilon$, we have that
$$\begin{aligned}
\|\mathcal{K}_n - \mathcal{K}\| &= \sup\{\|(\mathcal{K}_n - \mathcal{K})v\|_\infty : v \in C(\Omega), \|v\|_\infty \le 1\} \\
&\ge \sup\{\|(\mathcal{K}_n - \mathcal{K})(v\varphi_\varepsilon)\|_\infty : v \in C(\Omega), \|v\|_\infty \le 1, \varepsilon > 0\} \\
&= \sup\{\|\mathcal{K}(v\varphi_\varepsilon)\|_\infty : v \in C(\Omega), \|v\|_\infty \le 1, \varepsilon > 0\} \\
&= \sup\{\|\mathcal{K}v\|_\infty : v \in C(\Omega), \|v\|_\infty \le 1\} = \|\mathcal{K}\|,
\end{aligned}$$
where the middle equality uses the fact that $\mathcal{K}_n(v\varphi_\varepsilon) = 0$, since $v\varphi_\varepsilon$ vanishes at all the quadrature points.

Although the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ is not uniformly convergent, it is pointwise convergent. Therefore, by using the theory of collectively compact operator approximation, we can obtain an error estimate for the Nyström method.

Theorem 3.12 If the sequence of quadrature rules is convergent, then the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ of quadrature operators is collectively compact and pointwise convergent on $C(\Omega)$. Moreover, if $u$ and $u_n$ are the solutions of equation (3.1) and of the Nyström method, respectively, then there exist a positive constant $c$ and a positive integer $q$ such that for all $n \ge q$,
$$\|u_n - u\|_\infty \le c\|(\mathcal{K}_n - \mathcal{K})u\|_\infty.$$

Proof It follows from Proposition 3.10 that
$$C = \sup_{n\in\mathbb{N}} \sum_{j\in\mathbb{N}_n} |w_{nj}| < \infty,$$
which leads to
$$\|\mathcal{K}_n v\|_\infty \le C\max_{s,t\in\Omega}|K(s, t)|\,\|v\|_\infty \quad (3.22)$$


and
$$|(\mathcal{K}_n v)(s_1) - (\mathcal{K}_n v)(s_2)| \le C\max_{t\in\Omega}|K(s_1, t) - K(s_2, t)|\,\|v\|_\infty, \quad s_1, s_2 \in \Omega. \quad (3.23)$$
Noting that $K$ is uniformly continuous on $\Omega\times\Omega$, we conclude that for any bounded set $B \subseteq C(\Omega)$, $\{\mathcal{K}_n v : v \in B, n \in \mathbb{N}\}$ is bounded and equicontinuous. Thus, by the Arzelà–Ascoli theorem, the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ is collectively compact.

Since the sequence of quadrature rules is convergent, for any $v \in C(\Omega)$,
$$(\mathcal{K}_n v)(s) \to (\mathcal{K}v)(s) \text{ as } n \to \infty, \text{ for all } s \in \Omega. \quad (3.24)$$
From (3.23) we see that $\{\mathcal{K}_n v : n \in \mathbb{N}\}$ is equicontinuous, which with (3.24) leads to the conclusion that $\{\mathcal{K}_n v : n \in \mathbb{N}\}$ is uniformly convergent; that is, $\{\mathcal{K}_n : n \in \mathbb{N}\}$ is pointwise convergent on $C(\Omega)$.

The last statement of the theorem follows from Theorem 2.56.

It follows from equations (3.1) and (3.17) that
$$(I - \mathcal{K}_n)(u_n - u) = (\mathcal{K}_n - \mathcal{K})u,$$
which yields
$$\|(\mathcal{K}_n - \mathcal{K})u\|_\infty \le \|I - \mathcal{K}_n\|\,\|u_n - u\|_\infty.$$
This with the estimate of Theorem 3.12 means that the error $\|u_n - u\|_\infty$ converges to zero in the same order as the numerical integration error
$$\|(\mathcal{K}_n - \mathcal{K})u\|_\infty = \max_{s\in\Omega}\left|\sum_{j\in\mathbb{N}_n} w_j K(s, t_j)u(t_j) - \int_\Omega K(s, t)u(t)\,dt\right|.$$

3.2.3 The Nyström method for weakly singular kernels

In this subsection we describe the Nyström method for the numerical solution of integral equation (3.1) with weakly singular operators defined by
$$(\mathcal{K}u)(s) = \int_\Omega K_1(s, t)K_2(s, t)u(t)\,dt,$$
where $K_1$ is a weakly singular kernel and $K_2$ is a smooth kernel. We consider the important cases
$$K_1(s, t) = \log|s - t|$$
or
$$K_1(s, t) = \frac{1}{|s - t|^\sigma}$$
for some $\sigma \in (0, d)$. The former is often regarded as a special case of the latter with $\sigma = 0$.

We consider a sequence $\{Q_n : n \in \mathbb{N}\}$ of numerical quadrature rules
$$(Q_n g)(s) = \sum_{j\in\mathbb{N}_n} w_{nj}(s)g(t_{nj}), \quad s \in \Omega, \quad (3.25)$$
for the integral
$$(Qg)(s) = \int_\Omega K_1(s, t)g(t)\,dt, \quad s \in \Omega,$$
where the quadrature weights depend on the function $K_1$ and the variable $s$. Then the integral operator $\mathcal{K}$ is approximated by a sequence of approximate operators defined by

$$(\mathcal{K}_n u)(s) = Q_n\left(K_2(s, \cdot)u(\cdot)\right)(s), \quad s \in \Omega,$$
in terms of the quadrature rules $Q_n$. Specifically, we have that
$$(\mathcal{K}_n u)(s) = \sum_{j\in\mathbb{N}_n} w_j(s)K_2(s, t_j)u(t_j),$$

where we use the simplified notations $w_j = w_{nj}$ and $t_j = t_{nj}$. The integral equation (3.1) is then approximated by a sequence of linear equations
$$u_{ni} - \sum_{j\in\mathbb{N}_n} w_j(t_i)K_2(t_i, t_j)u_{nj} = f(t_i), \quad i \in \mathbb{N}_n, \quad (3.26)$$
and the approximate solution $u_n$ is defined by
$$u_n(s) = f(s) + \sum_{j\in\mathbb{N}_n} w_j(s)K_2(s, t_j)u_{nj}, \quad s \in \Omega. \quad (3.27)$$

Example 3.13 Suppose that
$$(\mathcal{K}u)(s) = \int_\Omega \log|s - t|\,K_2(s, t)u(t)\,dt, \quad s \in \Omega = [a, b],$$
where $K_2$ is a smooth function. Let $h = (b - a)/n$, $t_j = a + jh$, $j \in \mathbb{Z}_{n+1}$. For a fixed $s \in \Omega$, we choose a piecewise linear interpolation of $K_2(s, \cdot)u(\cdot)$; that is, $K_2(s, \cdot)u(\cdot)$ is approximated by
$$\left[(t_j - t)K_2(s, t_{j-1})u(t_{j-1}) + (t - t_{j-1})K_2(s, t_j)u(t_j)\right]/h$$
for $t \in [t_{j-1}, t_j]$, $j \in \mathbb{N}_n$. By defining the weight functions

$$w_0(s) = \frac{1}{h}\int_{[t_0, t_1]} (t_1 - t)\log|s - t|\,dt,$$
$$w_j(s) = \frac{1}{h}\int_{[t_{j-1}, t_j]} (t - t_{j-1})\log|s - t|\,dt + \frac{1}{h}\int_{[t_j, t_{j+1}]} (t_{j+1} - t)\log|s - t|\,dt, \quad j \in \mathbb{N}_{n-1},$$
and
$$w_n(s) = \frac{1}{h}\int_{[t_{n-1}, t_n]} (t - t_{n-1})\log|s - t|\,dt,$$

we obtain the approximate operators
$$(\mathcal{K}_n u)(s) = \sum_{j\in\mathbb{Z}_{n+1}} w_j(s)K_2(s, t_j)u(t_j), \quad s \in \Omega.$$
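The weight functions of Example 3.13 can be evaluated in closed form, since the antiderivatives of $\log|s-t|$ and $t\log|s-t|$ are elementary. The sketch below (not the book's code; the grid and the point $s = 0.3$ are arbitrary choices) computes them on $[0,1]$ and checks that, because the hat functions sum to one, the weights sum to $\int_0^1 \log|s-t|\,dt$ exactly.

```python
# Sketch: closed-form product-integration weights of Example 3.13 for
# K1(s,t) = log|s-t| on [0,1].  Uses the antiderivatives of log|s-t| and
# t log|s-t|; xlx(0) is taken as 0 since u log|u| -> 0 as u -> 0.
from math import log

def xlx(u):
    return 0.0 if u == 0.0 else u * log(abs(u))

def I0(s, c, d):
    # \int_c^d log|s-t| dt = [ (t-s) log|t-s| - (t-s) ]_c^d
    F = lambda t: xlx(t - s) - (t - s)
    return F(d) - F(c)

def I1(s, c, d):
    # \int_c^d t log|s-t| dt via the substitution u = t - s
    F = lambda t: (0.5 * (t - s) * xlx(t - s) - 0.25 * (t - s) ** 2
                   + s * (xlx(t - s) - (t - s)))
    return F(d) - F(c)

def weights(s, t):
    # w_0, ..., w_n from Example 3.13 on an equally spaced grid t
    n = len(t) - 1
    h = t[1] - t[0]
    w = [0.0] * (n + 1)
    w[0] = (t[1] * I0(s, t[0], t[1]) - I1(s, t[0], t[1])) / h
    for j in range(1, n):
        w[j] = ((I1(s, t[j - 1], t[j]) - t[j - 1] * I0(s, t[j - 1], t[j]))
                + (t[j + 1] * I0(s, t[j], t[j + 1]) - I1(s, t[j], t[j + 1]))) / h
    w[n] = (I1(s, t[n - 1], t[n]) - t[n - 1] * I0(s, t[n - 1], t[n])) / h
    return w

t = [j / 4 for j in range(5)]
s = 0.3
# piecewise linear interpolation reproduces g = 1 exactly, so the weights
# must sum to \int_0^1 log|s-t| dt
exact = I0(s, 0.0, 1.0)
assert abs(sum(weights(s, t)) - exact) < 1e-12
```

Handling the singularity through exact antiderivatives, rather than a standard quadrature rule, is what makes the product-integration weights accurate even when $s$ falls inside a subinterval.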

The error analysis of the Nyström method for weakly singular kernels can be carried out in a way similar to that of Theorem 3.12 for continuous kernels. Noting that in this case the quadrature weights depend on $s$, we need to make appropriate modifications to Theorem 3.12 to fit the current case. We first modify Proposition 3.10 to the following result.

Proposition 3.14 The sequence $\{Q_n : n \in \mathbb{N}\}$ of quadrature rules defined as in (3.25) converges uniformly on $\Omega$ if and only if
$$\sup_{s\in\Omega}\sup_{n\in\mathbb{N}} \sum_{j\in\mathbb{N}_n} |w_{nj}(s)| < \infty$$
and
$$Q_n(g) \to Q(g), \quad n \to \infty,$$
uniformly on $\Omega$ for all $g$ in some dense subset $U \subseteq C(\Omega)$.

With this result, we describe the main result for the Nyström method in the case of weakly singular kernels.

Theorem 3.15 If the sequence of quadrature rules is uniformly convergent and
$$\lim_{t\to s}\sup_{n\in\mathbb{N}} \sum_{j\in\mathbb{N}_n} |w_{nj}(t) - w_{nj}(s)| = 0, \quad (3.28)$$
then the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ of corresponding approximate operators is collectively compact and pointwise convergent on $C(\Omega)$. Moreover, if $u$ and $u_n$ are the solutions of equation (3.1) and of the Nyström method, respectively, then there exist a positive constant $c$ and a positive integer $q$ such that for all $n \ge q$,
$$\|u_n - u\|_\infty \le c\|(\mathcal{K}_n - \mathcal{K})u\|_\infty.$$


Proof The proof is similar to that of Theorem 3.12. We only need to replace inequality (3.23) by the following inequality:

$$|(\mathcal{K}_nu)(s_1)-(\mathcal{K}_nu)(s_2)|\le\Bigl|\sum_{j\in\mathbb{N}_n}w_{nj}(s_1)[K_2(s_1,t_j)-K_2(s_2,t_j)]u(t_j)\Bigr|+\Bigl|\sum_{j\in\mathbb{N}_n}[w_{nj}(s_1)-w_{nj}(s_2)]K_2(s_2,t_j)u(t_j)\Bigr|$$

$$\le C\max_{t\in\Omega}|K_2(s_1,t)-K_2(s_2,t)|\,\|u\|_\infty+\sup_{n\in\mathbb{N}}\sum_{j\in\mathbb{N}_n}|w_{nj}(s_1)-w_{nj}(s_2)|\max_{s,t\in\Omega}|K_2(s,t)|\,\|u\|_\infty.$$

Then by a proof similar to that of Theorem 3.12 one can conclude the desired results.
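As a numerical illustration of the Nyström idea (not taken from the text), the sketch below applies the composite trapezoidal rule to a model equation of our own choosing with a smooth kernel, $u(s)-\int_0^1 st\,u(t)\,dt=e^s-s$, whose exact solution is $u(s)=e^s$; the weakly singular case above differs only in how the weights are computed.

```python
import numpy as np

# Nystrom method with the composite trapezoidal rule for the model equation
# u(s) - int_0^1 s*t u(t) dt = e^s - s, whose exact solution is u(s) = e^s.
n = 40
t = np.linspace(0.0, 1.0, n + 1)
w = np.full(n + 1, 1.0 / n)
w[0] = w[-1] = 0.5 / n                      # trapezoid weights
K = np.outer(t, t)                          # K(s, t) = s*t sampled on the grid
A = np.eye(n + 1) - K * w[None, :]          # (I - K_n) acting on nodal values
u = np.linalg.solve(A, np.exp(t) - t)
err = np.max(np.abs(u - np.exp(t)))         # O(h^2) for the trapezoidal rule
```

The nodal values converge at the rate of the underlying quadrature rule, consistent with the error bound of Theorem 3.15.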

3.3 Galerkin methods

We have discussed projection methods for solving operator equations in Section 1.3. In this section and what follows, we specialize the operator equations to Fredholm integral equations of the second kind and consider three major projection methods, namely the Galerkin method, the Petrov–Galerkin method and the collocation method, for solving the equations. Specifically, we present in this section the Galerkin method, the iterated Galerkin method and the discrete Galerkin method for solving Fredholm integral equations of the second kind.

As described in Section 1.3.2, for an operator equation in a Hilbert space the projection method via orthogonal projections from the Hilbert space onto finite-dimensional subspaces leads to the Galerkin method.

Let $\mathbb{X}=L^2(\Omega)$, let $\{\mathbb{X}_n: n\in\mathbb{N}\}$ be a sequence of subspaces of $\mathbb{X}$ satisfying $\overline{\bigcup_{n\in\mathbb{N}}\mathbb{X}_n}=\mathbb{X}$, and let $\{\mathcal{P}_n: n\in\mathbb{N}\}$ be a sequence of orthogonal projections from $\mathbb{X}$ onto $\mathbb{X}_n$. The Galerkin method for solving (3.1) is to find $u_n\in\mathbb{X}_n$ such that

$$(\mathcal{I}-\mathcal{P}_n\mathcal{K})u_n=\mathcal{P}_nf,\qquad(3.29)$$

or equivalently,

$$(u_n,v)-(\mathcal{K}u_n,v)=(f,v)\quad\text{for all }v\in\mathbb{X}_n.$$

Under the hypotheses that $s(n)=\dim\mathbb{X}_n$ and $\{\phi_j: j\in\mathbb{N}_{s(n)}\}$ is a basis for $\mathbb{X}_n$, the solution $u_n$ of equation (3.29) can be written in the form

$$u_n=\sum_{j\in\mathbb{N}_{s(n)}}u_j\phi_j,$$

where the vector $\mathbf{u}_n=[u_j: j\in\mathbb{N}_{s(n)}]$ satisfies the linear system

$$\sum_{j\in\mathbb{N}_{s(n)}}u_j\bigl[(\phi_j,\phi_i)-(\mathcal{K}\phi_j,\phi_i)\bigr]=(f,\phi_i),\quad i\in\mathbb{N}_{s(n)}.$$

Setting

$$\mathbf{E}_n=[(\phi_j,\phi_i): i,j\in\mathbb{N}_{s(n)}],\qquad \mathbf{K}_n=[(\mathcal{K}\phi_j,\phi_i): i,j\in\mathbb{N}_{s(n)}]$$

and

$$\mathbf{f}_n=[(f,\phi_j): j\in\mathbb{N}_{s(n)}],$$

equation (3.29) can be written in the matrix form

$$(\mathbf{E}_n-\mathbf{K}_n)\mathbf{u}_n=\mathbf{f}_n.\qquad(3.30)$$

We call $\mathbf{K}_n$ the Galerkin matrix. Note that since $\mathcal{P}_n:\mathbb{X}\to\mathbb{X}_n$, $n\in\mathbb{N}$, are orthogonal projections and $\overline{\bigcup_{n\in\mathbb{N}}\mathbb{X}_n}=\mathbb{X}$, we have $\|\mathcal{P}_n\|=1$ and $\mathcal{P}_n$ converges pointwise to the identity operator $\mathcal{I}$ in $\mathbb{X}$. According to Theorem 2.55, if $\mathcal{K}:\mathbb{X}\to\mathbb{X}$ is a compact linear operator not having one as an eigenvalue, then there exist an integer $q$ and a positive constant $c$ such that for all $n\ge q$ equation (3.29) has a unique solution $u_n\in\mathbb{X}_n$ and

$$\|u-u_n\|\le c\|u-\mathcal{P}_nu\|,$$

where $u$ is the solution of equation (3.1).

3.3.1 The Galerkin method with piecewise polynomials

Piecewise polynomial bases are often used in the Galerkin method for solving equation (3.1) due to their simplicity, flexibility and excellent approximation properties. In this subsection we present the standard piecewise polynomial Galerkin method for solving the Fredholm integral equation (3.1) of the second kind.

We begin with a description of piecewise polynomial subspaces of $L^2(\Omega)$. We assume that there is a partition $\{\Omega_i: i\in\mathbb{Z}_n\}$ of $\Omega$ for $n\in\mathbb{N}$ which satisfies the following conditions:

• $\Omega=\bigcup_{i\in\mathbb{Z}_n}\Omega_i$ and $\mathrm{meas}(\Omega_j\cap\Omega_{j'})=0$, $j\ne j'$.
• For each $i\in\mathbb{Z}_n$ there exists an invertible affine map $\phi_i$ such that $\phi_i(\Omega^0)=\Omega_i$, where $\Omega^0$ is a reference element.

For $\Omega_i\in\{\Omega_i: i\in\mathbb{Z}_n\}$ we define the parameters

$$h_i=\mathrm{diam}(\Omega_i),\qquad \rho_i=\text{the diameter of the largest circle inscribed in }\Omega_i.$$


We also assume that the partition is regular, in the sense that there exists a positive constant $c$ such that for all $i\in\mathbb{Z}_n$ and for all $n\in\mathbb{N}$,

$$\frac{h_i}{\rho_i}\le c.$$

Let $h_n=\max\{\mathrm{diam}(\Omega_i): i\in\mathbb{Z}_n\}$. For a positive integer $k$, we denote by $\mathbb{X}_n$ the space of piecewise polynomials of total degree $\le k-1$ with respect to the partition $\{\Omega_i: i\in\mathbb{Z}_n\}$. In other words, every element in $\mathbb{X}_n$ is a polynomial of total degree $\le k-1$ on each $\Omega_i$. Since the dimension of the space of polynomials of total degree $k-1$ in $d$ variables is given by $m=\binom{k+d-1}{d}$, we conclude that the dimension of $\mathbb{X}_n$ is $mn$.

We next construct a basis for the space $\mathbb{X}_n$. We choose a collection $\{\tau_j: j\in\mathbb{N}_m\}\subset\Omega^0$ in general position, that is, such that the Lagrange interpolation polynomial of total degree $k-1$ at these points is uniquely defined. We assume that a collection $\{p_j: j\in\mathbb{N}_m\}$ of polynomials of total degree $k-1$ is chosen to satisfy

$$p_j(\tau_i)=\delta_{ij},\quad i,j\in\mathbb{N}_m.$$

For each $i\in\mathbb{Z}_n$, the functions defined by

$$\rho_{ij}(t)=\begin{cases}(p_j\circ\phi_i^{-1})(t), & t\in\Omega_i,\\ 0, & t\notin\Omega_i,\end{cases}\qquad(3.31)$$

form a basis for the space $\mathbb{X}_n$, where "$\circ$" denotes functional composition. We let $\mathcal{P}_n: L^2(\Omega)\to\mathbb{X}_n$ be the orthogonal projection onto $\mathbb{X}_n$. Then $\mathcal{P}_n$ is self-adjoint and $\|\mathcal{P}_n\|=1$ for all $n$. Moreover, for each $x\in H^k(\Omega)$ (cf. Section A.1 in the Appendix) there exist a positive constant $c$ and a positive integer $q$ such that for all $n\ge q$,

$$\|x-\mathcal{P}_nx\|\le ch_n^k\|x\|_{H^k}.\qquad(3.32)$$

Associated with the projection $\mathcal{P}_n$, the piecewise polynomial Galerkin method for solving equation (3.1) is described as finding $u_n\in\mathbb{X}_n$ such that

$$(\mathcal{I}-\mathcal{P}_n\mathcal{K})u_n=\mathcal{P}_nf.\qquad(3.33)$$

In terms of the basis functions described in (3.31), the Galerkin equation (3.33) is equivalent to the linear system

$$(\mathbf{E}_n-\mathbf{K}_n)\mathbf{u}_n=\mathbf{f}_n,\qquad(3.34)$$


where $\mathbf{u}_n\in\mathbb{R}^{mn}$,

$$\mathbf{E}_n=[E_{ij,i'j'}: i,i'\in\mathbb{Z}_n,\ j,j'\in\mathbb{N}_m]\quad\text{with}\quad E_{ij,i'j'}=(\rho_{i'j'},\rho_{ij}),$$

$$\mathbf{K}_n=[K_{ij,i'j'}: i,i'\in\mathbb{Z}_n,\ j,j'\in\mathbb{N}_m]\quad\text{with}\quad K_{ij,i'j'}=(\mathcal{K}\rho_{i'j'},\rho_{ij})$$

and

$$\mathbf{f}_n=[f_{ij}: i\in\mathbb{Z}_n,\ j\in\mathbb{N}_m]\quad\text{with}\quad f_{ij}=(f,\rho_{ij}).$$

The following result is concerned with the convergence order of the Galerkin method (3.33).

Theorem 3.16 Suppose that $\mathcal{K}: L^2(\Omega)\to L^2(\Omega)$ is a compact operator not having one as its eigenvalue. If $u\in L^2(\Omega)$ is the solution of equation (3.1), then there exist a positive constant $c$ and a positive integer $q$ such that for each $n\ge q$ equation (3.33) has a unique solution $u_n\in\mathbb{X}_n$. Moreover, if $u\in H^k(\Omega)$, then $u_n$ satisfies the error bound

$$\|u-u_n\|\le ch_n^k\|u\|_{H^k}.$$

Proof By the hypothesis that one is not an eigenvalue of the compact operator $\mathcal{K}$, the operator $\mathcal{I}-\mathcal{K}$ is one-to-one and onto. Since the spaces $\mathbb{X}_n$ are dense in $L^2(\Omega)$, we have that for $x\in L^2(\Omega)$,

$$\lim_{h_n\to0}\|\mathcal{P}_nx-\mathcal{I}x\|=0.$$

This ensures that there exists a positive integer $q$ such that for each $n\ge q$ equation (3.33) has a unique solution $u_n\in\mathbb{X}_n$ and the inverse operators $(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}$ are uniformly bounded.

It follows from equation (3.33) that

$$u-u_n=u-\mathcal{P}_nf-\mathcal{P}_n\mathcal{K}u_n.\qquad(3.35)$$

Moreover, applying the projection $\mathcal{P}_n$ to both sides of equation (3.1) yields

$$\mathcal{P}_nu-\mathcal{P}_n\mathcal{K}u=\mathcal{P}_nf.$$

Substituting this equation into (3.35) leads to the equation

$$u-u_n=u-\mathcal{P}_nu+\mathcal{P}_n\mathcal{K}(u-u_n).$$

Solving for $u-u_n$ from the above equation gives

$$u-u_n=(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}(u-\mathcal{P}_nu).$$

Therefore the desired estimate follows from the above equation, the uniform boundedness of $(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}$ and estimate (3.32).


The last theorem shows that the Galerkin method has convergence of optimal order; that is, the order of convergence equals the order of approximation from the piecewise polynomial space.

We finally remark that if $\Omega$ is the boundary of a domain, the piecewise polynomial Galerkin method is called a boundary element method.

3.3.2 The Galerkin method with trigonometric polynomials

We now consider equation (3.1) with $\Omega=[0,2\pi]$ and the functions $K$ and $f$ being $2\pi$-periodic, that is, $K(s+2\pi,t)=K(s,t+2\pi)=K(s,t)$ and $f(s+2\pi)=f(s)$ for $s,t\in\Omega$. In this case trigonometric polynomials are often used as approximations for solving the equations, and projection methods of this type are referred to as spectral methods.

Let $\mathbb{X}=L^2(0,2\pi)$ be the space of all complex-valued, $2\pi$-periodic and square-integrable Lebesgue measurable functions on $\Omega$, with inner product

$$(x,y)=\int_\Omega x(t)\overline{y(t)}\,dt.$$

For each $n\in\mathbb{N}$ let $\mathbb{X}_n$ be the subspace of $\mathbb{X}$ of all trigonometric polynomials of degree $\le n$. That is, we set $\phi_j(s)=e^{ijs}$, $s\in\mathbb{R}$, and

$$\mathbb{X}_n=\mathrm{span}\{\phi_j: j\in\mathbb{Z}_{-n,n}\},$$

where $i=\sqrt{-1}$ and $\mathbb{Z}_{-n,n}=\{-n,\dots,0,\dots,n\}$. The orthogonal projection of $\mathbb{X}$ onto $\mathbb{X}_n$ is given by

$$\mathcal{P}_nx=\frac{1}{2\pi}\sum_{j\in\mathbb{Z}_{-n,n}}(x,\phi_j)\phi_j.$$

For $x\in\mathbb{X}$ it is well known that

$$\|x-\mathcal{P}_nx\|=\Biggl(\frac{1}{2\pi}\sum_{j\in\mathbb{Z}\setminus\mathbb{Z}_{-n,n}}|(x,\phi_j)|^2\Biggr)^{1/2}\to0\quad\text{as }n\to\infty.$$

To present the error analysis we introduce Sobolev spaces of $2\pi$-periodic functions, which are subspaces of $L^2(0,2\pi)$ and require of their elements a certain decay of their Fourier coefficients. That is, for $r\in[0,\infty)$,

$$H^r(0,2\pi)=\Biggl\{x\in L^2(0,2\pi): \sum_{j\in\mathbb{Z}}(1+j^2)^r|(x,\phi_j)|^2<\infty\Biggr\}.$$


Note that $H^0(0,2\pi)$ coincides with $L^2(0,2\pi)$, and $H^r(0,2\pi)$ is a Hilbert space with inner product given by

$$(x,y)_r=\frac{1}{2\pi}\sum_{j\in\mathbb{Z}}(1+j^2)^r(x,\phi_j)\overline{(y,\phi_j)}$$

and norm given by

$$\|x\|_{H^r}=\Biggl(\frac{1}{2\pi}\sum_{j\in\mathbb{Z}}(1+j^2)^r|(x,\phi_j)|^2\Biggr)^{1/2}.$$

We remark that when $r$ is an integer this norm is equivalent to the norm

$$\|x\|_r=\Biggl(\sum_{j\in\mathbb{Z}_{r+1}}\|x^{(j)}\|^2\Biggr)^{1/2}.$$

It can easily be seen that for $x\in H^r(0,2\pi)$,

$$\|x-\mathcal{P}_nx\|\le\frac{1}{n^r}\Biggl(\frac{1}{2\pi}\sum_{j\in\mathbb{Z}\setminus\mathbb{Z}_{-n,n}}(1+j^2)^r|(x,\phi_j)|^2\Biggr)^{1/2}\le\frac{1}{n^r}\|x\|_{H^r}.\qquad(3.36)$$

With the help of the above estimate we have the following error analysis.

Theorem 3.17 Suppose that $\mathcal{K}: L^2(0,2\pi)\to L^2(0,2\pi)$ is a compact operator not having one as its eigenvalue. If $u\in L^2(0,2\pi)$ is the solution of equation (3.1), then there exist an integer $N_0$ and a positive constant $c$ such that for each $n\ge N_0$ the Galerkin approximate equation (3.33) has a unique solution $u_n\in\mathbb{X}_n$. Moreover, if $u\in H^r(0,2\pi)$, then $u_n$ satisfies the error bound

$$\|u-u_n\|\le cn^{-r}\|u\|_{H^r}.$$

Proof The proof of this theorem is similar to that of Theorem 3.16, with (3.32) replaced by (3.36).
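The decay rate in (3.36) can be checked directly on Fourier coefficients. The sketch below (an illustration with coefficients of our own choosing) takes $|(x,\phi_j)|=(1+j^2)^{-2}$, so that $x\in H^r$ for every $r<7/2$, and compares the truncation errors for $n=8$ and $n=16$; halving the tail cut-off should scale the error by roughly $2^{7/2}\approx11.3$.

```python
import numpy as np

def trunc_error(n, jmax=2000):
    # ||x - P_n x|| = sqrt( (1/(2*pi)) * sum_{|j|>n} |(x, phi_j)|^2 )
    # for the symmetric coefficients |(x, phi_j)| = (1 + j^2)**-2.
    j = np.arange(n + 1, jmax + 1, dtype=float)
    c = (1.0 + j**2) ** -2
    return np.sqrt(2.0 * np.sum(c**2) / (2.0 * np.pi))

e8, e16 = trunc_error(8), trunc_error(16)   # ratio close to 2**3.5
```

The observed ratio matches the $n^{-r}$ prediction of Theorem 3.17 for the largest admissible $r$.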

3.3.3 The condition number for the Galerkin method

We discuss in this subsection the condition number of the linear system associated with the Galerkin equation, which depends on the choice of bases for the approximate subspace. To present the results we need the notion of the matrix norm induced by a vector norm. For an $n\times n$ matrix $\mathbf{A}$, the induced matrix norm is defined by

$$\|\mathbf{A}\|_p=\sup\{\|\mathbf{A}x\|_p: x\in\mathbb{R}^n,\ \|x\|_p=1\},\quad 1\le p\le\infty.$$


It is well known that for $\mathbf{A}=[a_{ij}: i,j\in\mathbb{Z}_n]$,

$$\|\mathbf{A}\|_\infty=\max_{i\in\mathbb{Z}_n}\sum_{j\in\mathbb{Z}_n}|a_{ij}|,\qquad \|\mathbf{A}\|_1=\max_{j\in\mathbb{Z}_n}\sum_{i\in\mathbb{Z}_n}|a_{ij}|$$

and

$$\|\mathbf{A}\|_2=[\rho(\mathbf{A}^T\mathbf{A})]^{1/2},$$

where $\rho(\mathbf{A})$ denotes the spectral radius of $\mathbf{A}$, defined as the largest of the moduli of the eigenvalues of $\mathbf{A}$.
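These three formulas are easy to check numerically; the sketch below (our own illustration) evaluates them for a small matrix and compares with NumPy's built-in norms.

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
row_sum = np.max(np.sum(np.abs(A), axis=1))          # ||A||_inf: max row sum
col_sum = np.max(np.sum(np.abs(A), axis=0))          # ||A||_1:  max column sum
spec = np.sqrt(np.max(np.linalg.eigvalsh(A.T @ A)))  # ||A||_2 = rho(A^T A)^(1/2)
```

Each value agrees with the corresponding `np.linalg.norm(A, p)`.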

To discuss the condition number of the coefficient matrix of the linear system (3.30) for the Galerkin method, we first provide a lemma on a change of bases for the subspace $\mathbb{X}_n$. Let $\{\phi_j: j\in\mathbb{N}_{s(n)}\}$ and $\{\psi_j: j\in\mathbb{N}_{s(n)}\}$ be two bases for the subspace $\mathbb{X}_n$, with the latter being orthonormal. These bases are related by

$$\Psi_n=\mathbf{C}_n\Phi_n\quad\text{and}\quad \Phi_n=\mathbf{D}_n\Psi_n,$$

where $\Psi_n=[\psi_j: j\in\mathbb{N}_{s(n)}]$, $\Phi_n=[\phi_j: j\in\mathbb{N}_{s(n)}]$, $\mathbf{D}_n=[(\phi_i,\psi_j): i,j\in\mathbb{N}_{s(n)}]$ and $\mathbf{C}_n=[c_{ij}: i,j\in\mathbb{N}_{s(n)}]$ is the matrix determined by the first equation.

Lemma 3.18 If $\{\psi_j: j\in\mathbb{N}_{s(n)}\}$ is orthonormal, then

$$\mathbf{C}_n\mathbf{D}_n=\mathbf{I}_n\quad\text{and}\quad \mathbf{D}_n\mathbf{D}_n^T=\mathbf{E}_n,$$

where $\mathbf{I}_n$ is the identity matrix and $\mathbf{E}_n=[(\phi_j,\phi_i): i,j\in\mathbb{N}_{s(n)}]$. Moreover,

$$\|\mathbf{D}_n\|_2=\|\mathbf{D}_n^T\|_2=\|\mathbf{E}_n\|_2^{1/2}\quad\text{and}\quad \|\mathbf{D}_n^{-1}\|_2=\|\mathbf{D}_n^{-T}\|_2=\|\mathbf{E}_n^{-1}\|_2^{1/2}.$$

Proof The first part of this lemma follows directly from computation. For the second part we have that

$$\|\mathbf{D}_n\|_2=\|\mathbf{D}_n^T\|_2=[\rho(\mathbf{D}_n\mathbf{D}_n^T)]^{1/2}=[\rho(\mathbf{E}_n)]^{1/2}=\|\mathbf{E}_n\|_2^{1/2}.$$

The second equation can be proved similarly.

The next lemma can easily be verified.

Lemma 3.19 For any $\mathbf{w}=[w_j: j\in\mathbb{N}_{s(n)}]\in\mathbb{R}^{s(n)}$, let $w=\mathbf{w}^T\Psi_n\in\mathbb{X}_n$. If the operator $\mathcal{Q}_n:\mathbb{R}^{s(n)}\to\mathbb{X}_n$ is defined by $\mathcal{Q}_n\mathbf{w}=w$, then $\mathcal{Q}_n$ is invertible and $\|\mathcal{Q}_n\|=\|\mathcal{Q}_n^{-1}\|=1$.

In the next theorem we estimate the condition number of the coefficient matrix of the Galerkin method.


Theorem 3.20 The condition number of the coefficient matrix of the linear system (3.30) for the Galerkin method has the bound

$$\mathrm{cond}(\mathbf{E}_n-\mathbf{K}_n)\le \mathrm{cond}(\mathbf{E}_n)\,\mathrm{cond}(\mathcal{I}-\mathcal{P}_n\mathcal{K}).$$

Proof For any $\mathbf{g}=[g_j: j\in\mathbb{N}_{s(n)}]$, let

$$\mathbf{v}=(\mathbf{E}_n-\mathbf{K}_n)^{-1}\mathbf{g}.$$

It can be verified that $g=\mathbf{g}^T\Phi_n$ and $v=\mathbf{v}^T\Phi_n$ satisfy the equation

$$g=(\mathcal{I}-\mathcal{P}_n\mathcal{K})v.$$

Noting that $v=(\mathbf{D}_n^T\mathbf{v})^T\Psi_n=\mathcal{Q}_n(\mathbf{D}_n^T\mathbf{v})$ and $g=(\mathbf{D}_n^T\mathbf{g})^T\Psi_n=\mathcal{Q}_n(\mathbf{D}_n^T\mathbf{g})$, we have that

$$(\mathbf{E}_n-\mathbf{K}_n)^{-1}\mathbf{g}=\mathbf{v}=\mathbf{D}_n^{-T}\mathcal{Q}_n^{-1}v=\mathbf{D}_n^{-T}\mathcal{Q}_n^{-1}(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}\mathcal{Q}_n(\mathbf{D}_n^T\mathbf{g}).$$

Thus, using Lemmas 3.18 and 3.19, we conclude that

$$\|(\mathbf{E}_n-\mathbf{K}_n)^{-1}\|\le\|\mathbf{D}_n^{-T}\|_2\,\|\mathcal{Q}_n^{-1}\|\,\|(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}\|\,\|\mathcal{Q}_n\|\,\|\mathbf{D}_n^T\|_2\le \mathrm{cond}(\mathbf{E}_n)^{1/2}\|(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}\|.\qquad(3.37)$$

Likewise, we conclude from

$$(\mathbf{E}_n-\mathbf{K}_n)\mathbf{v}=\mathbf{g}=\mathbf{D}_n^{-T}\mathcal{Q}_n^{-1}g=\mathbf{D}_n^{-T}\mathcal{Q}_n^{-1}(\mathcal{I}-\mathcal{P}_n\mathcal{K})\mathcal{Q}_n(\mathbf{D}_n^T\mathbf{v})$$

that

$$\|\mathbf{E}_n-\mathbf{K}_n\|\le \mathrm{cond}(\mathbf{E}_n)^{1/2}\|\mathcal{I}-\mathcal{P}_n\mathcal{K}\|.\qquad(3.38)$$

Combining estimates (3.37) and (3.38) yields the desired result of this theorem.

We remark that if $\mathcal{K}$ is a compact linear operator not having one as its eigenvalue and $\mathcal{P}_n\xrightarrow{s}\mathcal{I}$, then $\mathcal{P}_n\mathcal{K}\xrightarrow{u}\mathcal{K}$ and $(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}\xrightarrow{u}(\mathcal{I}-\mathcal{K})^{-1}$, which yields

$$\mathrm{cond}(\mathcal{I}-\mathcal{P}_n\mathcal{K})\to \mathrm{cond}(\mathcal{I}-\mathcal{K})\quad\text{as }n\to\infty.$$
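The bound in Theorem 3.20 makes the role of the basis explicit: $\mathrm{cond}(\mathbf{E}_n)$ can be enormous for a poorly chosen basis. As a quick illustration (our own example), the Gram matrix of the monomial basis $\{1,t,\dots,t^4\}$ on $[0,1]$ is the $5\times5$ Hilbert matrix, which is notoriously ill conditioned, whereas an orthonormal basis gives $\mathbf{E}_n=\mathbf{I}$ with $\mathrm{cond}(\mathbf{E}_n)=1$.

```python
import numpy as np

# Gram matrix E_n of the monomial basis {1, t, ..., t^4} on [0, 1]:
# (t^j, t^i) = 1/(i + j + 1), i.e. the 5x5 Hilbert matrix.
m = 5
H = np.array([[1.0 / (i + j + 1) for j in range(m)] for i in range(m)])
cond_monomial = np.linalg.cond(H)   # very large
cond_orthonormal = 1.0              # an orthonormal basis gives E_n = I
```

This is why orthonormal (e.g. Legendre-type) bases are preferred when assembling Galerkin systems.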

3.3.4 The iterated Galerkin method

We presented the iterated projection scheme (2.97) in Section 2.2.4. According to Theorem 2.55, if $\mathcal{K}:\mathbb{X}\to\mathbb{X}$ is a compact linear operator not having one as an eigenvalue, then there exist a positive constant $c$ and a positive integer $q$ such that for all $n\ge q$ the iterated solution $u_n'$ defined by (2.97) with $\mathcal{K}_n=\mathcal{K}$ satisfies the estimate

$$\|u-u_n'\|\le c\|\mathcal{K}(\mathcal{I}-\mathcal{P}_n)u\|.$$

Since $(\mathcal{I}-\mathcal{P}_n)^2=\mathcal{I}-\mathcal{P}_n$, we have that

$$\|\mathcal{K}(\mathcal{I}-\mathcal{P}_n)u\|\le\|\mathcal{K}(\mathcal{I}-\mathcal{P}_n)\|\,\|(\mathcal{I}-\mathcal{P}_n)u\|.$$

Since $\mathcal{K}$ is compact, its adjoint $\mathcal{K}^*$ is also compact. As a result, we obtain that

$$\|\mathcal{K}(\mathcal{I}-\mathcal{P}_n)\|=\|[\mathcal{K}(\mathcal{I}-\mathcal{P}_n)]^*\|=\|(\mathcal{I}-\mathcal{P}_n)\mathcal{K}^*\|\to0\quad\text{as }n\to\infty.$$

Thus we conclude that

$$\|u-u_n'\|\le c\|(\mathcal{I}-\mathcal{P}_n)\mathcal{K}^*\|\,\|(\mathcal{I}-\mathcal{P}_n)u\|,$$

and see that $\|u-u_n'\|$ converges to zero more rapidly than $\|u-u_n\|$ does. Moreover, we have that

$$\mathcal{K}(\mathcal{I}-\mathcal{P}_n)u=(K(s,\cdot),(\mathcal{I}-\mathcal{P}_n)u(\cdot))=((\mathcal{I}-\mathcal{P}_n)K(s,\cdot),(\mathcal{I}-\mathcal{P}_n)u(\cdot)),$$

which leads to

$$\|u-u_n'\|\le c\,\operatorname{ess\,sup}\{\|(\mathcal{I}-\mathcal{P}_n)K(s,\cdot)\|: s\in\Omega\}\,\|(\mathcal{I}-\mathcal{P}_n)u\|.$$

This shows that the additional order of convergence gained from iteration is attributed to the approximation of the integral kernel from the approximating subspace.
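The gain from one iteration is easy to observe numerically. The self-contained sketch below (our own illustration, not from the text) computes a piecewise constant Galerkin solution of the model equation $u(s)-\int_0^1 st\,u(t)\,dt=e^s-s$ (exact solution $u(s)=e^s$) and then applies the iterated (Sloan) step $u_n'=f+\mathcal{K}u_n$, which for this kernel is available in closed form.

```python
import numpy as np

# Piecewise constant Galerkin on [0, 1], then the iterated (Sloan) step
# u'_n = f + K u_n for u(s) - int_0^1 s*t u(t) dt = e^s - s, exact u = e^s.
n = 16
edges = np.linspace(0.0, 1.0, n + 1)
h = 1.0 / n
mom = (edges[1:]**2 - edges[:-1]**2) / 2.0           # int_{I_i} s ds
E = h * np.eye(n)
K = np.outer(mom, mom)
rhs = (np.exp(edges[1:]) - np.exp(edges[:-1])) - mom # int_{I_i} (e^s - s) ds
c = np.linalg.solve(E - K, rhs)                      # Galerkin coefficients

s = np.linspace(0.0, 1.0, 201)
u_exact = np.exp(s)
u_gal = c[np.minimum((s * n).astype(int), n - 1)]    # piecewise-constant u_n
u_iter = (np.exp(s) - s) + s * np.dot(c, mom)        # u'_n = f + K u_n
err_gal = np.max(np.abs(u_gal - u_exact))
err_iter = np.max(np.abs(u_iter - u_exact))          # markedly smaller
```

The iterated solution is visibly more accurate than the Galerkin solution itself, as the analysis above predicts.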

3.3.5 Discrete Galerkin methods

The implementation of the Galerkin method (3.29) requires evaluating the integrals involved in (3.30). There are two types of integral to evaluate: the integral that defines the operator $\mathcal{K}$ and the inner product $(\cdot,\cdot)$ of the space $L^2(\Omega)$. These integrals usually cannot be evaluated exactly, and thus they require numerical integration. The Galerkin method with its integrals computed by numerical quadrature is called the discrete Galerkin method.

We choose quadrature nodes $\tau_j$, $j\in\mathbb{N}_{q_n}$, with $q_n\ge s(n)$, and discrete operators $\mathcal{K}_n$ defined by

$$(\mathcal{K}_nx)(t)=\sum_{j\in\mathbb{N}_{q_n}}w_j(t)x(\tau_j),\quad t\in\Omega,$$

to approximate the operator $\mathcal{K}$. We require that the sequence $\{\mathcal{K}_n: n\in\mathbb{N}\}$ of discrete operators be collectively compact and pointwise convergent on $C(\Omega)$.


For sufficient conditions that ensure collective compactness, see Theorem 3.15. Suppose that we use a quadrature formula to approximate integrals, that is,

$$\int_\Omega x(t)\,dt\approx\sum_{j\in\mathbb{N}_{q_n}}\lambda_jx(\tau_j)\quad\text{with }\lambda_j>0,\ j\in\mathbb{N}_{q_n},$$

and define a discrete semi-definite inner product

$$(x,y)_n=\sum_{j\in\mathbb{N}_{q_n}}\lambda_jx(\tau_j)y(\tau_j),\quad x,y\in C(\Omega),$$

and the corresponding discrete semi-norm

$$\|x\|_n=(x,x)_n^{1/2},\quad x\in C(\Omega).$$

We require that the rank of the matrix $\Phi_n=[\phi_i(\tau_j): i\in\mathbb{N}_{s(n)},\ j\in\mathbb{N}_{q_n}]$ equals $s(n)$. It follows that there is a subset of quadrature nodes, say $\{\tau_j: j\in\mathbb{N}_{s(n)}\}$, such that the matrix $[\phi_i(\tau_j): i,j\in\mathbb{N}_{s(n)}]$ is nonsingular. Thus for any data $\{b_j: j\in\mathbb{N}_{s(n)}\}$ there exists a unique $\phi\in\mathbb{X}_n$ satisfying

$$\phi(\tau_j)=b_j,\quad j\in\mathbb{N}_{s(n)}.$$

We see that $\|\cdot\|_n$ is a semi-norm on $C(\Omega)$ and a norm on $\mathbb{X}_n$. The discrete Galerkin method for solving (3.1) is to find $u_n\in\mathbb{X}_n$ such that

$$(u_n,v)_n-(\mathcal{K}_nu_n,v)_n=(f,v)_n\quad\text{for all }v\in\mathbb{X}_n.\qquad(3.39)$$

The analysis of the discrete Galerkin method requires the notion of the discrete orthogonal projection.

Definition 3.21 Let $\mathbb{X}_n$ be a subspace of $C(\Omega)$. The operator $\mathcal{P}_n: C(\Omega)\to\mathbb{X}_n$ defined for $x\in C(\Omega)$ by

$$(\mathcal{P}_nx,y)_n=(x,y)_n\quad\text{for all }y\in\mathbb{X}_n$$

is called the discrete orthogonal projection from $C(\Omega)$ onto $\mathbb{X}_n$.

Proposition 3.22 If $\lambda_j>0$, $j\in\mathbb{N}_{q_n}$, and $\mathrm{rank}\,\Phi_n=s(n)$, then the discrete orthogonal projection $\mathcal{P}_n: C(\Omega)\to\mathbb{X}_n$ is well defined and is a linear projection from $C(\Omega)$ onto $\mathbb{X}_n$. If in addition $q_n=s(n)$, then $\mathcal{P}_n$ is an interpolating projection satisfying, for $x\in C(\Omega)$,

$$(\mathcal{P}_nx)(\tau_j)=x(\tau_j),\quad j\in\mathbb{N}_{s(n)}.$$

Proof For $x\in C(\Omega)$, let $\mathcal{P}_nx=\sum_{j\in\mathbb{N}_{s(n)}}x_j\phi_j$ and consider the linear system

$$\sum_{j\in\mathbb{N}_{s(n)}}x_j(\phi_j,\phi_i)_n=(x,\phi_i)_n,\quad i\in\mathbb{N}_{s(n)}.\qquad(3.40)$$

The coefficient matrix $\mathbf{G}_n=[(\phi_j,\phi_i)_n: i,j\in\mathbb{N}_{s(n)}]$ is a Gram matrix. Thus for any $\mathbf{x}=[x_j: j\in\mathbb{N}_{s(n)}]\in\mathbb{R}^{s(n)}$ we have that

$$\mathbf{x}^T\mathbf{G}_n\mathbf{x}=\Bigl\|\sum_{j\in\mathbb{N}_{s(n)}}x_j\phi_j\Bigr\|_n^2\ge0.$$

Since $\lambda_j>0$ for all $j\in\mathbb{N}_{q_n}$, $\mathbf{x}^T\mathbf{G}_n\mathbf{x}=0$ if and only if $\sum_{j\in\mathbb{N}_{s(n)}}x_j\phi_j(\tau_i)=0$, $i\in\mathbb{N}_{q_n}$. The latter is equivalent to $\mathbf{x}=0$, since $\mathrm{rank}\,\Phi_n=s(n)$. Thus we conclude that the matrix $\mathbf{G}_n$ is positive definite and equation (3.40) is uniquely solvable, that is, $\mathcal{P}_n$ is well defined. From Definition 3.21 we can easily verify that $\mathcal{P}_n: C(\Omega)\to\mathbb{X}_n$ is a linear projection.

When $q_n=s(n)$, we define, for $x\in C(\Omega)$, $\mathbf{I}_nx=[x(\tau_j): j\in\mathbb{N}_{s(n)}]\in\mathbb{R}^{s(n)}$. We next show that

$$\mathbf{I}_n(\mathcal{P}_nx)=\mathbf{I}_nx.$$

We first have that

$$\mathbf{I}_n(\mathcal{P}_nx)=\mathbf{I}_n\Bigl(\sum_{j\in\mathbb{N}_{s(n)}}x_j\phi_j\Bigr)=\Phi_n^T\mathbf{x}.$$

It follows from equation (3.40) that

$$\mathbf{G}_n\mathbf{x}=\Phi_n\Lambda_n\mathbf{I}_nx,$$

where $\Lambda_n$ is the diagonal matrix $\mathrm{diag}(\lambda_1,\dots,\lambda_{s(n)})$. The definition of $\mathbf{G}_n$ leads to $\mathbf{G}_n=\Phi_n\Lambda_n\Phi_n^T$. Thus, combining the above equations, we obtain that

$$\mathbf{I}_n(\mathcal{P}_nx)=\Phi_n^T\mathbf{G}_n^{-1}\Phi_n\Lambda_n\mathbf{I}_nx=\mathbf{I}_nx,$$

which completes the proof.
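The identity $\Phi_n^T\mathbf{G}_n^{-1}\Phi_n\Lambda_n=\mathbf{I}$ underlying the interpolation property can be verified numerically. The sketch below (our own check, using a monomial basis and random positive weights) builds $\mathbf{G}_n=\Phi_n\Lambda_n\Phi_n^T$ and confirms that the discrete projection reproduces nodal values when $q_n=s(n)$.

```python
import numpy as np

rng = np.random.default_rng(0)
s = 4                                        # s(n) = q_n
tau = np.sort(rng.uniform(0.0, 1.0, s))      # distinct quadrature nodes
lam = rng.uniform(0.5, 1.5, s)               # positive quadrature weights
Phi = np.vander(tau, s, increasing=True).T   # Phi[i, j] = phi_i(tau_j) = tau_j**i
Lam = np.diag(lam)
G = Phi @ Lam @ Phi.T                        # Gram matrix (phi_j, phi_i)_n
M = Phi.T @ np.linalg.solve(G, Phi @ Lam)    # nodal map I_n P_n: should be I
```

Any choice of positive weights and distinct nodes gives `M` equal to the identity, i.e. the discrete projection interpolates.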

The discrete orthogonal projection $\mathcal{P}_n$ is self-adjoint on $C(\Omega)$ with respect to the discrete inner product, that is,

$$(\mathcal{P}_nx,y)_n=(x,\mathcal{P}_ny)_n,\quad x,y\in C(\Omega),$$

and is bounded on $C(\Omega)$ with respect to the discrete semi-norm, that is,

$$\|\mathcal{P}_nx\|_n\le\|x\|_n,\quad x\in C(\Omega).$$

We remark that the latter does not imply the uniform boundedness of the operators $\mathcal{P}_n$, $n\in\mathbb{N}$. However, if the $\mathbb{X}_n$ are piecewise polynomial spaces and the quadrature nodes are obtained by affine mappings from a set of quadrature nodes in a reference element, then we do have the uniform boundedness $\sup\{\|\mathcal{P}_n\|_\infty: n\in\mathbb{N}\}<\infty$ (see Section 3.5.4).


The following proposition provides convergence of the discrete orthogonal projection.

Proposition 3.23 If the sequence of discrete orthogonal projections $\{\mathcal{P}_n: n\in\mathbb{N}\}$ is uniformly bounded on $C(\Omega)$, then there is a positive constant $c$ such that for all $x\in C(\Omega)$,

$$\|x-\mathcal{P}_nx\|_\infty\le c\inf\{\|x-v\|_\infty: v\in\mathbb{X}_n\}.$$

Proof Since $\mathcal{P}_n$ is a linear projection from $C(\Omega)$ onto $\mathbb{X}_n$, for any $v\in\mathbb{X}_n$,

$$x-\mathcal{P}_nx=x-v-\mathcal{P}_n(x-v).$$

This yields the estimate

$$\|x-\mathcal{P}_nx\|_\infty\le(1+\sup\{\|\mathcal{P}_n\|_\infty: n\in\mathbb{N}\})\|x-v\|_\infty.$$

Thus the desired result follows from the above estimate and the hypothesis that $\{\mathcal{P}_n\}$ is uniformly bounded.

With the help of discrete orthogonal projections, the discrete Galerkin method (3.39) for solving (3.1) can be rewritten in the form of (2.96), that is,

$$(\mathcal{I}-\mathcal{P}_n\mathcal{K}_n)u_n=\mathcal{P}_nf.$$

Thus the error analysis for this method follows from the same framework as Theorem 2.54.

3.4 Collocation methods

We consider in this section the collocation method for solving the Fredholm integral equation of the second kind. According to the description in Section 2.2.1, for an operator equation in the space $C(\Omega)$ the projection method via interpolation projections onto finite-dimensional subspaces leads to the collocation method.

Let $\{\mathbb{X}_n: n\in\mathbb{N}\}$ be a sequence of subspaces of $C(\Omega)$ with $s(n)=\dim\mathbb{X}_n$, and let $\{\mathcal{P}_n: n\in\mathbb{N}\}$ be a sequence of interpolation projections from $C(\Omega)$ onto $\mathbb{X}_n$ defined for $x\in C(\Omega)$ by

$$(\mathcal{P}_nx)(t_j)=x(t_j)\quad\text{for all }j\in\mathbb{N}_{s(n)},$$

where $\{t_j: j\in\mathbb{N}_{s(n)}\}$ is a set of distinct nodes in $\Omega$. The collocation method for solving (3.1) is to find $u_n\in\mathbb{X}_n$ such that

$$(\mathcal{I}-\mathcal{P}_n\mathcal{K})u_n=\mathcal{P}_nf,\qquad(3.41)$$


or equivalently,

$$u_n(t_i)-\int_\Omega K(t_i,t)u_n(t)\,dt=f(t_i)\quad\text{for all }i\in\mathbb{N}_{s(n)}.\qquad(3.42)$$

Suppose that $\{\phi_j: j\in\mathbb{N}_{s(n)}\}$ is a basis for $\mathbb{X}_n$ and let $u_n=\sum_{j\in\mathbb{N}_{s(n)}}u_j\phi_j$. Equation (3.42) can be written in the matrix form

$$(\mathbf{E}_n-\mathbf{K}_n)\mathbf{u}_n=\mathbf{f}_n,\qquad(3.43)$$

where

$$\mathbf{E}_n=[\phi_j(t_i): i,j\in\mathbb{N}_{s(n)}],\qquad \mathbf{K}_n=[(\mathcal{K}\phi_j)(t_i): i,j\in\mathbb{N}_{s(n)}],\qquad \mathbf{u}_n=[u_j: j\in\mathbb{N}_{s(n)}]$$

and $\mathbf{f}_n=[f(t_j): j\in\mathbb{N}_{s(n)}]$.

$\mathbf{K}_n$ is called the collocation matrix. We remark that the set of collocation points $\{t_j: j\in\mathbb{N}_{s(n)}\}$ should be chosen such that the subspace $\mathbb{X}_n$ is unisolvent, that is, the interpolating function is uniquely determined by its values at the interpolating points. Clearly this requirement is equivalent to the condition $\det(\mathbf{E}_n)\ne0$. Since the collocation method is interpreted as a projection method with the interpolating operator, the general convergence results for projection methods are applicable.
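As a concrete instance (a model problem of our own choosing, not from the text), the sketch below applies collocation with the monomial basis $\phi_j(t)=t^j$ at Chebyshev points to $u(s)-\int_0^1 st\,u(t)\,dt=e^s-s$, for which $(\mathcal{K}\phi_j)(t_i)=t_i/(j+2)$ is available in closed form; the exact solution is $u(s)=e^s$.

```python
import numpy as np

# Collocation for u(s) - int_0^1 s*t u(t) dt = e^s - s (exact u(s) = e^s)
# with the monomial basis phi_j(t) = t**j, j = 0..m-1, on [0, 1].
m = 8
ti = 0.5 * (1.0 - np.cos(np.pi * (2 * np.arange(m) + 1) / (2 * m)))  # Chebyshev nodes
j = np.arange(m)
E = ti[:, None] ** j[None, :]            # E[i, j] = phi_j(t_i)
K = ti[:, None] / (j[None, :] + 2.0)     # (K phi_j)(t_i) = t_i / (j + 2)
f = np.exp(ti) - ti
c = np.linalg.solve(E - K, f)            # coefficients of u_n
s = np.linspace(0.0, 1.0, 101)
u_n = np.polyval(c[::-1], s)             # evaluate sum_j c_j s**j
err = np.max(np.abs(u_n - np.exp(s)))    # rapid (spectral) accuracy for smooth u
```

Because $u$ is entire, even a degree-7 collocation solution is accurate to many digits.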

When the Lagrange basis $\{L_j: j\in\mathbb{N}_{s(n)}\}$ for $\mathbb{X}_n$ is used, then

$$u_n(t)=\sum_{j\in\mathbb{N}_{s(n)}}u_jL_j(t)\quad\text{with }u_j=u_n(t_j),$$

and the linear system (3.43) becomes

$$u_i-\sum_{j\in\mathbb{N}_{s(n)}}u_j\int_\Omega K(t_i,t)L_j(t)\,dt=f(t_i)\quad\text{for all }i\in\mathbb{N}_{s(n)}.$$

Note that the coefficient matrix is the same as that for the degenerate kernel method (3.10). In other words, the operator $\mathcal{P}_n\mathcal{K}$ is a degenerate kernel integral operator:

$$(\mathcal{P}_n\mathcal{K}u)(s)=\int_\Omega K_n(s,t)u(t)\,dt\quad\text{with}\quad K_n(s,t)=\sum_{j\in\mathbb{N}_{s(n)}}K(t_j,t)L_j(s).$$

We then have the estimate

$$\|\mathcal{K}-\mathcal{P}_n\mathcal{K}\|=\max_{s\in\Omega}\int_\Omega|K(s,t)-K_n(s,t)|\,dt.\qquad(3.44)$$

We observe that the computational cost of the collocation method is much lower than that of the Galerkin method, since it reduces the number of integrals to be calculated.


3.4.1 The collocation method with piecewise polynomials

In this subsection we consider the collocation method with the subspace $\mathbb{X}_n$ being a piecewise polynomial space. Suppose that there is a regular partition $\{\Omega_i: i\in\mathbb{N}_n\}$ of $\Omega$ for $n\in\mathbb{N}$ which satisfies

$$\Omega=\bigcup_{i\in\mathbb{N}_n}\Omega_i\quad\text{and}\quad \mathrm{meas}(\Omega_j\cap\Omega_{j'})=0,\ j\ne j',$$

and for each $i\in\mathbb{N}_n$ there exists an invertible affine map $\phi_i$ which maps a reference element $\Omega^0$ onto $\Omega_i$. For a positive integer $k$, let $\mathbb{X}_n$ be the space of piecewise polynomials of total degree $\le k-1$ with respect to the partition $\{\Omega_i: i\in\mathbb{N}_n\}$. We choose $m$ distinct points $\tau_j\in\Omega^0$, $j\in\mathbb{N}_m$, such that the Lagrange interpolation polynomial of total degree $k-1$ at these points is uniquely defined. Then we can find polynomials $p_j$ of total degree $k-1$ such that

$$p_j(\tau_i)=\delta_{ij},\quad i,j\in\mathbb{N}_m.$$

For each $i\in\mathbb{N}_n$, the functions defined by

$$\rho_{ij}(t)=\begin{cases}(p_j\circ\phi_i^{-1})(t), & t\in\Omega_i,\\ 0, & t\notin\Omega_i,\end{cases}\qquad(3.45)$$

form a basis for the space $\mathbb{X}_n$, and the points $t_{ij}=\phi_i(\tau_j)$, $i\in\mathbb{N}_n$, $j\in\mathbb{N}_m$, form a set of collocation nodes satisfying

$$\rho_{ij}(t_{i'j'})=\delta_{ii'}\delta_{jj'}.$$

We let $\mathcal{P}_n: C(\Omega)\to\mathbb{X}_n$ be the interpolating projection onto $\mathbb{X}_n$. We then have

$$\|\mathcal{P}_nx\|_\infty\le\max\{\|\mathcal{P}_nx\|_{\infty,\Omega_i}: i\in\mathbb{N}_n\}.$$

Noting that

$$(\mathcal{P}_nx)(t)=\sum_{j\in\mathbb{N}_m}x(t_{ij})\rho_{ij}(t)=\sum_{j\in\mathbb{N}_m}x(t_{ij})p_j(\tau),\quad t\in\Omega_i,\ \tau=\phi_i^{-1}(t)\in\Omega^0,$$

we conclude that

$$\|\mathcal{P}_n\|\le\sum_{j\in\mathbb{N}_m}\|p_j\|_\infty,$$

which means that the sequence of projections $\{\mathcal{P}_n\}$ is uniformly bounded. Moreover, for each $x\in W^{k,\infty}(\Omega)$ (cf. Section A.1 in the Appendix) there exist a positive constant $c$ and a positive integer $q$ such that for all $n\ge q$,

$$\|x-\mathcal{P}_nx\|_\infty\le c\inf\{\|x-v\|_\infty: v\in\mathbb{X}_n\}\le ch_n^k\|x\|_{W^{k,\infty}},\qquad(3.46)$$

where $h_n=\max\{\mathrm{diam}(\Omega_i): i\in\mathbb{N}_n\}$. We have the following theorem.


Theorem 3.24 Suppose that $\mathcal{K}: C(\Omega)\to C(\Omega)$ is a compact operator not having one as its eigenvalue. If $u\in C(\Omega)$ is the solution of equation (3.1), then there exists a positive integer $q$ such that for each $n\ge q$ equation (3.41) has a unique solution $u_n\in\mathbb{X}_n$. Moreover, if $u\in W^{k,\infty}(\Omega)$, then there exists a positive constant $c$ such that for all $n\ge q$,

$$\|u-u_n\|_\infty\le ch_n^k\|u\|_{W^{k,\infty}(\Omega)}.$$

3.4.2 The collocation method with trigonometric polynomials

We consider in this subsection equation (3.1) in which $\Omega=[0,2\pi]$ and $K$ and $f$ are $2\pi$-periodic, and describe the collocation method for solving the equation using trigonometric polynomials.

Let $\mathbb{X}=C_p(0,2\pi)$ be the space of all $2\pi$-periodic continuous functions on $\mathbb{R}$ with the uniform norm $\|\cdot\|_\infty$, and choose the approximate subspace as

$$\mathbb{X}_n=\mathrm{span}\{1,\cos t,\sin t,\dots,\cos nt,\sin nt\},\quad n\in\mathbb{N}.$$

To define an interpolating projection from $\mathbb{X}$ onto $\mathbb{X}_n$, we recall the Dirichlet kernel

$$D_n(t)=\frac{\sin\bigl(n+\frac12\bigr)t}{2\sin\frac t2}=\frac12+\sum_{j\in\mathbb{N}_n}\cos jt$$

and observe that for $t_j=\dfrac{j\pi}{n+\frac12}$, $j\in\mathbb{Z}_{2n+1}$,

$$D_n(t_j)=\begin{cases}\frac12+n, & j=0,\\ 0, & j\in\mathbb{Z}_{2n+1}\setminus\{0\}.\end{cases}$$

This means that the functions $\ell_j$ defined by

$$\ell_j(t)=\frac{2}{2n+1}D_n(t-t_j),\quad j\in\mathbb{Z}_{2n+1},$$

satisfy

$$\ell_j(t_i)=\delta_{ij}$$

and form a Lagrange basis for $\mathbb{X}_n$. We then define the interpolating projection $\mathcal{P}_n:\mathbb{X}\to\mathbb{X}_n$ for $x\in\mathbb{X}$ by

$$\mathcal{P}_nx=\sum_{j\in\mathbb{Z}_{2n+1}}x(t_j)\ell_j.$$

The following estimate is known (cf. [279]):

$$\|\mathcal{P}_n\|=O(\log n).\qquad(3.47)$$


Hence, from the principle of uniform boundedness, there exists $x\in\mathbb{X}$ for which $\mathcal{P}_nx$ does not converge to $x$. The bound (3.47) leads to the fact that for $x\in\mathbb{X}$,

$$\|\mathcal{P}_nx-x\|_\infty\le(1+\|\mathcal{P}_n\|)\inf\{\|x-v\|_\infty: v\in\mathbb{X}_n\}\le O(\log n)\inf\{\|x-v\|_\infty: v\in\mathbb{X}_n\}.$$

We next consider the estimate of $\|\mathcal{K}-\mathcal{P}_n\mathcal{K}\|$. Assume that the kernel satisfies the $\alpha$-Hölder continuity condition

$$|K(s_1,t)-K(s_2,t)|\le c|s_1-s_2|^\alpha\quad\text{for all }s_1,s_2,t\in\Omega,$$

for some positive constant $c$. Then using (3.44) we conclude that

$$\|\mathcal{K}-\mathcal{P}_n\mathcal{K}\|\le cn^{-\alpha}\log n.$$
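The Lagrange basis built from the Dirichlet kernel is straightforward to implement. The sketch below (our own illustration) interpolates the smooth $2\pi$-periodic function $x(t)=e^{\cos t}$ at the $2n+1$ equidistant nodes and observes the small uniform error expected for smooth periodic functions.

```python
import numpy as np

def dirichlet(n, t):
    # D_n(t) = 1/2 + sum_{j=1}^{n} cos(j t)
    t = np.asarray(t, dtype=float)
    k = np.arange(1, n + 1)
    return 0.5 + np.sum(np.cos(np.outer(t, k)), axis=1)

n = 8
tj = 2.0 * np.pi * np.arange(2 * n + 1) / (2 * n + 1)  # equidistant nodes
x = lambda t: np.exp(np.cos(t))                        # smooth 2*pi-periodic test
t = np.linspace(0.0, 2.0 * np.pi, 400)
# P_n x = sum_j x(t_j) * l_j, with l_j(t) = 2/(2n+1) * D_n(t - t_j)
Pnx = sum(x(tjj) * 2.0 / (2 * n + 1) * dirichlet(n, t - tjj) for tjj in tj)
err = np.max(np.abs(Pnx - x(t)))
```

For merely Hölder-continuous data the convergence degrades to the $n^{-\alpha}\log n$ rate stated above.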

3.4.3 The condition number for the collocation method

We now turn our attention to the condition number $\mathrm{cond}(\mathbf{E}_n-\mathbf{K}_n)$ of the coefficient matrix of the linear system (3.43) obtained from the collocation method. In this case the condition number is defined in terms of the infinity norm of a matrix, specifically

$$\mathrm{cond}(\mathbf{A})=\|\mathbf{A}\|_\infty\|\mathbf{A}^{-1}\|_\infty.$$

Theorem 3.25 If $\det(\mathbf{E}_n)\ne0$, then the condition number of the linear system (3.43) of the collocation method satisfies

$$\mathrm{cond}(\mathbf{E}_n-\mathbf{K}_n)\le\|\mathcal{P}_n\|_\infty^2\,\mathrm{cond}(\mathbf{E}_n)\,\mathrm{cond}(\mathcal{I}-\mathcal{P}_n\mathcal{K}).\qquad(3.48)$$

Proof For $\mathbf{g}=[g_j: j\in\mathbb{N}_{s(n)}]$, let

$$\mathbf{v}=[v_j: j\in\mathbb{N}_{s(n)}]=(\mathbf{E}_n-\mathbf{K}_n)^{-1}\mathbf{g}.$$

Choose $g\in C(\Omega)$ such that

$$\|g\|_\infty=\|\mathbf{g}\|_\infty\quad\text{and}\quad g(t_j)=g_j,\ j\in\mathbb{N}_{s(n)},$$

and set

$$v=(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}\mathcal{P}_ng.$$

Then we have that

$$v(t_i)=\sum_{j\in\mathbb{N}_{s(n)}}v_j\phi_j(t_i),\quad i\in\mathbb{N}_{s(n)}.$$


Letting $\tilde{\mathbf{v}}=[v(t_i): i\in\mathbb{N}_{s(n)}]$, in matrix notation the above equation can be rewritten as

$$\mathbf{E}_n\mathbf{v}=\tilde{\mathbf{v}}.\qquad(3.49)$$

We then conclude from

$$(\mathbf{E}_n-\mathbf{K}_n)^{-1}\mathbf{g}=\mathbf{v}=\mathbf{E}_n^{-1}\tilde{\mathbf{v}}$$

that

$$\|(\mathbf{E}_n-\mathbf{K}_n)^{-1}\mathbf{g}\|_\infty\le\|\mathbf{E}_n^{-1}\|_\infty\|\tilde{\mathbf{v}}\|_\infty\le\|\mathbf{E}_n^{-1}\|_\infty\|v\|_\infty.$$

Since

$$\|v\|_\infty=\|(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}\mathcal{P}_ng\|_\infty\le\|(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}\|\,\|\mathcal{P}_n\|\,\|g\|_\infty,$$

we conclude that

$$\|(\mathbf{E}_n-\mathbf{K}_n)^{-1}\|_\infty\le\|\mathcal{P}_n\|\,\|\mathbf{E}_n^{-1}\|_\infty\|(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}\|.\qquad(3.50)$$

Moreover, for $\mathbf{v}=[v_j: j\in\mathbb{N}_{s(n)}]$, let

$$\mathbf{g}=[g_j: j\in\mathbb{N}_{s(n)}]=(\mathbf{E}_n-\mathbf{K}_n)\mathbf{v}.$$

We choose $g\in C(\Omega)$ as before and again set $v=(\mathcal{I}-\mathcal{P}_n\mathcal{K})^{-1}\mathcal{P}_ng$. Noting that

$$(\mathcal{P}_ng)(t_j)=g(t_j)=g_j,\quad j\in\mathbb{N}_{s(n)},$$

we have $\|\mathbf{g}\|_\infty\le\|\mathcal{P}_ng\|_\infty$. Thus

$$\|(\mathbf{E}_n-\mathbf{K}_n)\mathbf{v}\|_\infty=\|\mathbf{g}\|_\infty\le\|\mathcal{P}_ng\|_\infty=\|(\mathcal{I}-\mathcal{P}_n\mathcal{K})v\|_\infty.\qquad(3.51)$$

Choose $\tilde v\in C(\Omega)$ such that

$$\|\tilde v\|_\infty=\|\tilde{\mathbf{v}}\|_\infty\quad\text{and}\quad \tilde v(t_j)=v(t_j),\ j\in\mathbb{N}_{s(n)}.$$

Then we have that $v=\mathcal{P}_n\tilde v$ and thus

$$\|v\|_\infty\le\|\mathcal{P}_n\|\,\|\tilde v\|_\infty=\|\mathcal{P}_n\|\,\|\mathbf{E}_n\mathbf{v}\|_\infty\le\|\mathcal{P}_n\|\,\|\mathbf{E}_n\|_\infty\|\mathbf{v}\|_\infty.\qquad(3.52)$$

Combining estimates (3.51) and (3.52) yields

$$\|\mathbf{E}_n-\mathbf{K}_n\|_\infty\le\|\mathcal{P}_n\|\,\|\mathbf{E}_n\|_\infty\|\mathcal{I}-\mathcal{P}_n\mathcal{K}\|,$$

which with (3.50) leads to the desired result of this theorem.


3.4.4 Discrete collocation methods

Before investigating discrete collocation methods, we remark on the iterated collocation method. It is known that the iterated collocation method may not lead to superconvergence: in contrast with the iterated Galerkin method, here

$$\|\mathcal{K}(\mathcal{I}-\mathcal{P}_n)\|\ge\|\mathcal{K}\|$$

holds. This means that the iterated collocation method converges more rapidly only when, for the solution $u$, $\mathcal{K}(\mathcal{I}-\mathcal{P}_n)u$ exhibits superconvergence (see Section 2.2.4). This is the case when the approximate subspaces $\mathbb{X}_n$ are chosen as piecewise polynomials of even degree and the kernel $K$ and solution $u$ are sufficiently smooth (cf. [15] for details).

We now begin to discuss discrete collocation methods. This approach replaces the integrals appearing in the collocation equation (3.42) by finite sums, chosen depending on the specific numerical method to be used. To this end we define

$$(\mathcal{K}_nu)(s)=\sum_{j\in\mathbb{N}_{q_n}}w_jK(s,\tau_j)u(\tau_j),\quad s\in\Omega.$$

Then the discrete collocation method for solving (3.1) is to find $u_n\in\mathbb{X}_n$ such that

$$(\mathcal{I}-\mathcal{P}_n\mathcal{K}_n)u_n=\mathcal{P}_nf,$$

or equivalently,

$$u_n(t_i)-\sum_{j\in\mathbb{N}_{q_n}}w_jK(t_i,\tau_j)u_n(\tau_j)=f(t_i)\quad\text{for all }i\in\mathbb{N}_{s(n)}.\qquad(3.53)$$

Some assumptions must be imposed to guarantee the unique solvability of the resulting system. The iterated discrete collocation solution is defined by

$$u_n'=f+\mathcal{K}_nu_n,$$

which is the solution of the equation

$$(\mathcal{I}-\mathcal{K}_n\mathcal{P}_n)u_n'=f.\qquad(3.54)$$

The analysis of the discrete collocation method can be carried out within the framework given in Section 2.2.4 with $\mathbb{X}=\mathbb{V}=C(\Omega)$.

We close this subsection by giving a relationship between the iterated discrete collocation solution and the Nyström solution: if $\{\tau_j: j\in\mathbb{N}_{q_n}\}\subseteq\{t_j: j\in\mathbb{N}_{s(n)}\}$, then the iterated discrete collocation solution $u_n'$ is the Nyström solution satisfying

$$(\mathcal{I}-\mathcal{K}_n)u_n'=f.\qquad(3.55)$$



In fact, by the definition of the interpolating projection, for x ∈ C(Ω),

(P_n x)(τ_j) = x(τ_j),  j ∈ N_{q_n}.

This leads to

(K_n P_n x)(s) = Σ_{j∈N_{q_n}} w_j K(s, τ_j) (P_n x)(τ_j) = Σ_{j∈N_{q_n}} w_j K(s, τ_j) x(τ_j) = (K_n x)(s),

which with (3.54) yields (3.55).
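The Nyström construction above is easy to exercise numerically. The sketch below solves (I − K_n)u = f on Ω = [0, 1] with trapezoidal quadrature; the kernel K(s, t) = ½e^{−st} and the manufactured solution u(s) = cos(πs) are illustrative choices, not taken from the text.

```python
import numpy as np

# Nystrom scheme (I - K_n)u = f on [0, 1]: quadrature points tau_j and
# weights w_j replace the integral operator by the matrix [w_j K(t_i, tau_j)].
m = 64
t = np.linspace(0.0, 1.0, m)
w = np.full(m, 1.0 / (m - 1)); w[0] *= 0.5; w[-1] *= 0.5   # trapezoidal weights

K = lambda s, tau: 0.5 * np.exp(-s * tau)     # smooth illustrative kernel
u_exact = lambda s: np.cos(np.pi * s)         # manufactured solution

# right-hand side f = u - K u, evaluated with a much finer quadrature
tf = np.linspace(0.0, 1.0, 2001)
wf = np.full(tf.size, 1.0 / (tf.size - 1)); wf[0] *= 0.5; wf[-1] *= 0.5
f = u_exact(t) - np.array([np.sum(wf * K(s, tf) * u_exact(tf)) for s in t])

A = np.eye(m) - K(t[:, None], t[None, :]) * w[None, :]
u_n = np.linalg.solve(A, f)                   # Nystrom solution at the nodes
err = np.max(np.abs(u_n - u_exact(t)))
```

For this smooth kernel the error inherits the O(h²) accuracy of the trapezoidal rule; a higher-order quadrature sharpens the rate correspondingly.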

3.5 Petrov–Galerkin methods

In this section we establish a theoretical framework for the analysis of convergence for the Petrov–Galerkin method, and of superconvergence for the iterated Petrov–Galerkin method, for Fredholm integral equations of the second kind.

Unlike the standard Galerkin method, the Petrov–Galerkin method employs one sequence of finite-dimensional subspaces to approximate the solution space of the equation (the trial spaces) and a different sequence to approximate the image space of the integral operator (the test spaces). This feature gives us great freedom in choosing a pair of space sequences so as to improve the computational efficiency of the standard Galerkin method while preserving its convergence order. However, the sequences of spaces cannot be chosen arbitrarily: they must be coupled properly. This motivates us to develop a theoretical framework for the convergence analysis of the Petrov–Galerkin method and the iterated Petrov–Galerkin method.

It is revealed in [77] that for the Petrov–Galerkin method the roles of the trial space and test space are to approximate, respectively, the solution space of the equation and the range of the integral operator (in other words, the image space). Therefore the convergence order of the Petrov–Galerkin method is the same as the approximation order of the trial space, and it is independent of the approximation order of the test space. This leads to the following strategy for choosing the trial and test spaces: we may choose the trial space as piecewise polynomials of a higher degree and the test space as piecewise polynomials of a lower degree, but keep them of the same dimension. This choice of the trial and test spaces results in a significantly less expensive numerical algorithm in comparison with the standard Galerkin method of the same convergence order, which uses the same piecewise polynomials as those for the trial space. The saving comes from computing the entries of the matrix and the right-hand-side vector of the linear system that results from the corresponding discretization. Note that an entry of the Galerkin matrix is the inner product of the integral operator applied to a basis function for the trial space against a basis function for the same space, which is a piecewise polynomial of a higher degree, while an entry of the Petrov–Galerkin matrix is that against a basis function for the test space, which is a piecewise polynomial of a lower degree. Computing the latter is less expensive than computing the former due to the use of lower-degree polynomials for the test space. In fact, the Petrov–Galerkin method interpolates between the Galerkin method and the collocation method.

3.5.1 Analysis of Petrov–Galerkin and iterated Petrov–Galerkin methods

1. The Petrov–Galerkin method
Let X be a Banach space with norm ‖·‖ and let X* denote its dual space. Assume that K : X → X is a compact linear operator. We consider the Fredholm equation of the second kind

u − Ku = f,  f ∈ X, (3.56)

where u ∈ X is the unknown to be determined.

We choose two sequences of finite-dimensional subspaces X_n ⊂ X, n ∈ N, and Y_n ⊂ X*, n ∈ N, and suppose that they satisfy condition (H): for each x ∈ X and y ∈ X* there exist x_n ∈ X_n and y_n ∈ Y_n such that ‖x_n − x‖ → 0 and ‖y_n − y‖ → 0 as n → ∞, and

s(n) := dim X_n = dim Y_n,  n ∈ N. (3.57)

The Petrov–Galerkin method for equation (3.56) is a numerical method for finding u_n ∈ X_n such that

(u_n − Ku_n, y) = (f, y) for all y ∈ Y_n. (3.58)

Let

X_n = span{φ_1, φ_2, ..., φ_{s(n)}},  Y_n = span{ψ_1, ψ_2, ..., ψ_{s(n)}},

and

u_n = Σ_{j∈N_{s(n)}} α_j φ_j.

Equation (3.58) can be written as

Σ_{j∈N_{s(n)}} α_j [(φ_j, ψ_i) − (Kφ_j, ψ_i)] = (f, ψ_i),  i ∈ N_{s(n)}.
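As a concrete illustration of this linear system, the following sketch assembles and solves the Petrov–Galerkin equations on X = L²(0, 1); the monomial trial basis, piecewise-constant test basis, rank-one kernel ½e^{s−t}, and right-hand side f = 1 are all illustrative assumptions, not taken from the text.

```python
import numpy as np

# Petrov-Galerkin system  sum_j a_j [(phi_j, psi_i) - (K phi_j, psi_i)] = (f, psi_i)
# with trial basis phi_j = monomials and test basis psi_i = subinterval indicators.
s_n = 3
t = np.linspace(0.0, 1.0, 401)                       # quadrature grid
w = np.full(t.size, 1.0 / (t.size - 1)); w[0] *= 0.5; w[-1] *= 0.5

phi = [t**j for j in range(s_n)]                     # trial: 1, t, t^2
psi = [((t >= i / s_n) & (t < (i + 1) / s_n)).astype(float) for i in range(s_n)]
psi[-1][t == 1.0] = 1.0                              # include the right endpoint

kern = lambda s, tt: 0.5 * np.exp(s - tt)            # illustrative kernel
Kphi = [np.array([np.sum(w * kern(si, t) * p) for si in t]) for p in phi]

inner = lambda x, y: np.sum(w * x * y)               # discretized L^2 inner product
A = np.array([[inner(phi[j], psi[i]) - inner(Kphi[j], psi[i])
               for j in range(s_n)] for i in range(s_n)])
f = np.ones_like(t)                                  # right-hand side f = 1
b = np.array([inner(f, psi[i]) for i in range(s_n)])
alpha = np.linalg.solve(A, b)                        # coefficients of u_n
```

Note how each row of the system tests the residual against one low-degree test function, which is the source of the cost saving discussed above.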



If {X_n, Y_n} is a regular pair (see Definition 2.30), in the sense that there is a linear operator Π_n : X_n → Y_n with Π_n X_n = Y_n satisfying the conditions

‖x‖ ≤ c_1 (x, Π_n x)^{1/2} and ‖Π_n x‖ ≤ c_2 ‖x‖ for all x ∈ X_n,

where c_1 and c_2 are positive constants independent of n, then equation (3.58) may be rewritten as

(u_n − Ku_n, Π_n x) = (f, Π_n x) for all x ∈ X_n.

Furthermore, using the generalized best approximation projection P_n : X → X_n (see Definition 2.25), which is defined by

(x − P_n x, y) = 0 for all y ∈ Y_n,

equation (3.58) is equivalent to the operator equation

u_n − P_n K u_n = P_n f. (3.59)

This equation indicates that the Petrov–Galerkin method is a projection method. Using Theorem 2.55, we obtain the following result.

Theorem 3.26  Let X be a Banach space and K : X → X a compact linear operator. Assume that one is not an eigenvalue of the operator K. Suppose that X_n and Y_n satisfy condition (H) and that {X_n, Y_n} is a regular pair. Then there exists an N_0 > 0 such that for n ≥ N_0 equation (3.59) has a unique solution u_n ∈ X_n for any given f ∈ X, and this solution satisfies

‖u_n − u‖ ≤ c ‖u − P_n u‖,  n ≥ N_0,

where u ∈ X is the unique solution of equation (3.56) and c > 0 is a constant independent of n.

2. The iterated Petrov–Galerkin method
We now turn our attention to studying the superconvergence of the iterated Petrov–Galerkin method for integral equations of the second kind.

Let X be a Banach space and let X_n ⊂ X, n ∈ N, and Y_n ⊂ X*, n ∈ N, be two sequences of finite-dimensional subspaces satisfying condition (H). Assume that P_n : X → X_n are the linear projections of the generalized best approximation from X to X_n with respect to Y_n. Consider the projection method (3.59), and suppose that u_n ∈ X_n is the unique solution of equation (3.59), which approximates the solution of equation (3.56). The iterated projection method is defined by

u′_n := f + K u_n. (3.60)



It can easily be verified that the iterated projection approximation u′_n satisfies the integral equation

u′_n − K P_n u′_n = f. (3.61)

In order to analyze u′_n as the solution of equation (3.61), we need to understand the convergence of the approximate operator K P_n. The next lemma is helpful in this regard.

Lemma 3.27  Suppose that X is a Banach space and X_n ⊂ X and Y_n ⊂ X* satisfy condition (H). Let P_n : X → X_n be the sequence of projections of the generalized best approximation from X to X_n with respect to Y_n, and suppose that it converges pointwise to the identity operator I on X. Then the sequence of dual operators P*_n converges pointwise to the identity operator I* on X*.

Proof  It follows from condition (H) that for any v ∈ X* there exists a sequence v_n ∈ Y_n such that ‖v_n − v‖ → 0 as n → ∞. Consequently,

‖P*_n v − v‖ ≤ ‖P*_n v − v_n‖ + ‖v_n − v‖ ≤ (‖P_n‖ + 1) ‖v_n − v‖ → 0,

where the second inequality holds because P*_n : X* → Y_n are also projections, so that P*_n v_n = v_n and ‖P*_n v − v_n‖ = ‖P*_n(v − v_n)‖ ≤ ‖P*_n‖ ‖v − v_n‖, together with the general result ‖P*_n‖ = ‖P_n‖. That is, P*_n → I* pointwise.

Theorem 3.28  Suppose that X is a Banach space and X_n ⊂ X and Y_n ⊂ X* satisfy condition (H). Assume that K is a compact operator on X. Let P_n : X → X_n be the projections of the generalized best approximation from X to X_n with respect to Y_n, converging pointwise to the identity operator I on X. Then

‖K P_n − K‖ → 0 as n → ∞.

Proof  Note that

‖K P_n − K‖ = ‖(K P_n − K)*‖ = ‖P*_n K* − K*‖.

Since K is compact, K* is also compact. Using Lemmas 3.27 and 2.52, we conclude the result of this theorem.

3.5.2 Equivalent conditions in Hilbert spaces for regular pairs

From the last subsection we know that the notion of regular pairs plays an essential role in the analysis of the Petrov–Galerkin method. Therefore it is necessary to re-examine this concept from different points of view. In this subsection we first study regular pairs from a geometric point of view, and second characterize them in terms of the uniform boundedness of the sequence of projections defined by the generalized best approximation.

In what follows we confine the space X to be a Hilbert space with inner product (·, ·), from which the norm ‖·‖ is induced. In this case X* is identified with X via the inner product. We assume that X_n, Y_n ⊂ X satisfy condition (H). The structure of Hilbert spaces allows us to define the angle between the spaces X_n and Y_n, which is done via the orthogonal projection from X onto Y_n. For each x ∈ X we define the best approximation y*_n from Y_n by

‖x − y*_n‖ = inf{‖x − y‖ : y ∈ Y_n}.

Since Y_n is a finite-dimensional subspace of the Hilbert space X, a best approximation from Y_n to each x ∈ X exists. We furthermore define the best approximation operator P_{Y_n} by

P_{Y_n} x := y*_n for each x ∈ X.

It is well known that for any x ∈ X, P_{Y_n} x satisfies the equation

(x − P_{Y_n} x, y) = 0 for all y ∈ Y_n. (3.62)

In other words, the operator P_{Y_n} is the orthogonal projection from X onto Y_n. To define the angle between the two spaces X_n and Y_n, we set

γ_n := inf{‖P_{Y_n} x‖ / ‖x‖ : x ∈ X_n}.

We call

θ_n := arccos γ_n

the angle between the spaces X_n and Y_n. The next theorem characterizes a regular pair {X_n, Y_n} in a Hilbert space X in terms of the angles between X_n and Y_n.

Theorem 3.29  Let X be a Hilbert space and let X_n and Y_n be two subspaces of X satisfying condition (H) and dim X_n = dim Y_n < ∞ for n ∈ N. Then {X_n, Y_n} is a regular pair if and only if there exists a positive number θ_0 < π/2 such that

θ_n ≤ θ_0,  n ∈ N.

Proof  We first prove the sufficiency. Assume that there exists a positive number θ_0 < π/2 such that θ_n ≤ θ_0 for all n ∈ N. Thus

γ_n = inf{‖P_{Y_n} x‖ / ‖x‖ : x ∈ X_n} ≥ cos θ_0 > 0.



Using the characterization of the best approximation, we have that

(x, P_{Y_n} x) = ‖P_{Y_n} x‖² ≥ cos²θ_0 ‖x‖² for all x ∈ X_n.

This implies that P_{Y_n} X_n = Y_n and that condition (H-1) holds with c_1 = 1/cos θ_0. Moreover, since the operator P_{Y_n} is an orthogonal projection, we conclude that

‖P_{Y_n} x‖ ≤ ‖x‖ for all x ∈ X_n.

Hence condition (H-2) holds with c_2 = 1.

We now show the necessity. It follows from the definition of a regular pair that

‖x‖² ≤ c_1² (x, Π_n x) ≤ c_1² ‖x‖ ‖Π_n x‖ ≤ c_1² c_2 ‖x‖² for all x ∈ X_n.

Thus we obtain

0 < 1/(c_1² c_2) ≤ 1.

It can be seen that there exists an x′ ∈ X_n with x′ ≠ 0 such that

‖P_{Y_n} x′‖ / ‖x′‖ = inf{‖P_{Y_n} x‖ / ‖x‖ : x ∈ X_n} = cos θ_n;

such an x′ exists because the unit sphere of the finite-dimensional space X_n is compact.

By the characterization of the best approximation we obtain that

‖x′‖² ≤ c_1² (x′, Π_n x′) = c_1² (P_{Y_n} x′, Π_n x′) ≤ c_1² ‖P_{Y_n} x′‖ ‖Π_n x′‖ ≤ c_1² c_2 cos θ_n ‖x′‖²,

where the equality uses the fact that Π_n x′ ∈ Y_n together with (3.62).

Therefore

cos θ_n ≥ 1/(c_1² c_2) > 0

and

θ_n ≤ arccos(1/(c_1² c_2)) < π/2.

The proof is complete
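In a finite-dimensional model of X the quantities γ_n and θ_n are easy to compute: if the columns of U and V are orthonormal bases of X_n and Y_n, then ‖P_{Y_n} x‖ for x = Uc equals ‖VᵀUc‖, so γ_n is the smallest singular value of VᵀU. A sketch with illustrative random subspaces of R^10 (an assumption made purely for demonstration):

```python
import numpy as np

# gamma_n = inf ||P_{Y_n} x|| / ||x|| over x in X_n equals the smallest
# singular value of V^T U when U, V have orthonormal columns.
rng = np.random.default_rng(0)
U, _ = np.linalg.qr(rng.standard_normal((10, 3)))   # orthonormal basis of X_n
V, _ = np.linalg.qr(rng.standard_normal((10, 3)))   # orthonormal basis of Y_n

gamma_n = np.linalg.svd(V.T @ U, compute_uv=False).min()
theta_n = np.arccos(np.clip(gamma_n, -1.0, 1.0))    # angle between X_n and Y_n

# sanity check against the definition, for one x in X_n:
x = U @ rng.standard_normal(3)
ratio = np.linalg.norm(V @ (V.T @ x)) / np.linalg.norm(x)
```

Here ratio ≥ γ_n by construction, and θ_n < π/2 exactly when the pair is non-degenerate, matching the characterization of Theorem 3.29.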

We now turn to establishing the equivalence of the regular pair property and the uniform boundedness of the projections P_n, when they are well defined. We need two preliminary results to prove this equivalence.

Lemma 3.30  Let X be a Hilbert space. Assume that X_n, Y_n ⊂ X with dim X_n = dim Y_n < ∞ satisfy condition (2.73). Then

P_{Y_n} = P_{Y_n} P_n.



Proof  Let x ∈ X. Then P_n x and P_{Y_n} x satisfy equations (2.72) and (3.62), respectively. It follows that

(P_n x − P_{Y_n} x, y) = 0 for all y ∈ Y_n.

By the definition of the best approximation from Y_n to P_n x, we conclude that

P_{Y_n} x = P_{Y_n} P_n x for all x ∈ X.

The proof of the lemma is complete

In Hilbert spaces we can interpret condition (2.73) from many different points of view. The next proposition lists eight equivalent statements.

Proposition 3.31  Let X be a Hilbert space and X_n, Y_n ⊂ X with dim X_n = dim Y_n < ∞. Then the following statements are equivalent:

(i) Y_n ∩ X_n^⊥ = {0};
(ii) det[(φ_i, ψ_j)] ≠ 0, where {φ_l : l ∈ N_m} and {ψ_l : l ∈ N_m} are bases for X_n and Y_n, respectively;
(iii) X_n ∩ Y_n^⊥ = {0};
(iv) if x ∈ X_n with x ≠ 0, then P_{Y_n} x ≠ 0;
(v) γ_n > 0;
(vi) P_{Y_n} X_n = Y_n;
(vii) P_n Y_n = X_n and P_n|_{Y_n} = (P_{Y_n}|_{X_n})^{-1};
(viii) if y ∈ Y_n with y ≠ 0, then P_n y ≠ 0.

Proof  The implications (i) ⇒ (ii) and (ii) ⇒ (iii) follow from the proof of Proposition 2.26.

We prove that (iii) implies (iv). Let x ∈ X_n with x ≠ 0. By (iii), x ∉ Y_n^⊥, so there exists y ∈ Y_n with y ≠ 0 such that, using the definition of P_{Y_n},

(P_{Y_n} x, y) = (x, y) ≠ 0.

Thus P_{Y_n} x ≠ 0.

To prove the implication (iv) ⇒ (v), we use (iv) to conclude that ‖P_{Y_n} x‖ > 0 on the closed unit sphere {x : x ∈ X_n, ‖x‖ = 1}, which is compact since X_n is finite-dimensional. Thus we have

γ_n = inf{‖P_{Y_n} x‖ : x ∈ X_n, ‖x‖ = 1} > 0,

and statement (v) is proved.

To establish (vi), it suffices to show that if {φ_l : l ∈ N_m} is a basis for X_n, then {P_{Y_n} φ_l : l ∈ N_m} is a basis for Y_n. Note that P_{Y_n} φ_i ∈ Y_n and dim Y_n = m. It remains to show that P_{Y_n} φ_1, ..., P_{Y_n} φ_m are linearly independent. To this end, assume that there are constants c_1, ..., c_m, not all zero, such that

Σ_{i∈N_m} c_i P_{Y_n} φ_i = 0.

Let x = Σ_{i∈N_m} c_i φ_i. Then x ∈ X_n with x ≠ 0 but P_{Y_n} x = 0. Hence γ_n = 0, a contradiction to (v).

We now show that (vi) implies (vii). Since P_n Y_n = P_n P_{Y_n} X_n, it is sufficient to prove that P_n P_{Y_n} X_n = X_n. For any x ∈ X_n, applying the definition of P_{Y_n} gives

(P_{Y_n} x − x, y) = 0 for all y ∈ Y_n.

The definition of P_n then implies that x = P_n P_{Y_n} x for all x ∈ X_n. Hence we conclude that P_n P_{Y_n} X_n = X_n, and (vii) is established.

The implication of (vii) to (viii) is obvious.

Finally, we prove that (viii) implies (i). Let y ∈ Y_n ∩ X_n^⊥. By the definition of P_n we find

‖y‖² = (y, y) = (P_n y, y) = 0,

since P_n y ∈ X_n and y ∈ X_n^⊥. This ensures that y = 0.

The next theorem shows that {X_n, Y_n} is a regular pair if and only if the sequence of projections {P_n} is uniformly bounded.

Theorem 3.32  Let X be a Hilbert space. Assume that X_n, Y_n ⊂ X satisfy condition (H) and equation (2.73). Let {P_n} be the sequence of projections defined by the generalized best approximation (2.72). Then {X_n, Y_n} is a regular pair if and only if there exists a positive constant c for which

‖P_n‖ ≤ c for all n ∈ N.

Proof  We proved in Proposition 2.31 that if {X_n, Y_n} is a regular pair, then {P_n} is uniformly bounded. It remains to prove the converse. For this purpose we let P_{Y_n} : X → Y_n be the orthogonal projection. By our convention, the spaces X_n and Y_n satisfy condition (2.73); thus Proposition 3.31 ensures that P_{Y_n} X_n = Y_n. The validity of condition (H-2) follows from a property of best approximation in Hilbert spaces. We now prove condition (H-1) by contradiction. Assume to the contrary that condition (H-1) is not valid. Then for each ε with 0 < ε < 1/c, where c is the constant that bounds ‖P_n‖, there exist n ∈ N and x ∈ X_n such that

(x, P_{Y_n} x) < ε² ‖x‖².



It follows from the characterization of best approximation in the Hilbert space X that

(x, P_{Y_n} x) = ‖P_{Y_n} x‖².

We then have

‖P_{Y_n} x‖ < ε ‖x‖.

Let x_0 = P_{Y_n} x. Clearly x_0 ∈ Y_n. Since x ∈ X_n satisfies the equation

(x_0 − x, y) = (P_{Y_n} x − x, y) = 0 for all y ∈ Y_n,

we conclude that x = P_n x_0. Consequently,

‖x_0‖ < ε ‖P_n x_0‖.

In other words,

‖P_n x_0‖ > (1/ε) ‖x_0‖ > c ‖x_0‖,

which contradicts the assumption that ‖P_n‖ ≤ c. This contradiction shows that condition (H-1) must hold.

In the remaining part of this subsection we discuss regular pairs from an algebraic point of view.

Definition 3.33  Let Φ = {φ_i : i ∈ Z_m} and Ψ = {ψ_i : i ∈ Z_m} be two finite (ordered) subsets of the Hilbert space X. The correlation matrix between Φ and Ψ is defined to be the m × m matrix

G(Φ, Ψ) := [(φ_i, ψ_j) : i, j ∈ Z_m].

Note that G^T(Φ, Ψ), the transpose of G(Φ, Ψ), is G(Ψ, Φ). In the special case Φ = Ψ we use G(Φ) for G(Φ, Φ), and recall that G(Φ) is the Gram matrix of the set Φ. The matrix G(Φ) is positive semi-definite. In general the matrix G(Φ, Ψ) is not symmetric. We use G_+(Φ, Ψ) to denote the symmetric part of G(Φ, Ψ); specifically, we set

G_+(Φ, Ψ) := (1/2) [G(Φ, Ψ) + G(Ψ, Φ)].

We use the standard ordering on m × m symmetric matrices A = [a_{ij} : i, j ∈ Z_m], B = [b_{ij} : i, j ∈ Z_m], and write A ≤ B provided that

Σ_{i∈Z_m} Σ_{j∈Z_m} x_i a_{ij} x_j = x^T A x ≤ x^T B x

for all x = [x_i : i ∈ Z_m] ∈ R^m. When the strict inequality holds above except for x = 0, we write A < B.



Definition 3.34  Let X be a Hilbert space. Suppose that for any n ∈ N, Φ_n = {φ_i : i ∈ Z_{s(n)}} and Ψ_n = {ψ_i : i ∈ Z_{s(n)}} are finite subsets of X, where s(n) denotes the cardinality of Φ_n. We say that {Φ_n, Ψ_n} forms a regular pair provided that there are constants σ > 0 and σ′ > 0 such that for all n ∈ N we have

0 < G(Φ_n) ≤ σ G_+(Φ_n, Ψ_n) (3.63)

and

0 < G(Ψ_n) ≤ σ′ G(Φ_n). (3.64)

Thus, given any finite sets Φ and Ψ of linearly independent elements of X of the same cardinality, the constant pair {Φ, Ψ} is regular if and only if G_+(Φ, Ψ) > 0. Moreover, when we only have det G(Φ, Ψ) ≠ 0, we can form from Φ and Ψ a constant regular pair by modifying either one of the sets Φ and Ψ. To explain this, suppose that Φ = {φ_i : i ∈ Z_n} and Ψ = {ψ_i : i ∈ Z_n}. Let W = {ω_i : i ∈ Z_n}, where the elements of this set are defined by the formula

ω_i = Σ_{j∈Z_n} (φ_j, ψ_i) φ_j,  i ∈ Z_n.

Then

G(W, Ψ) = G(Φ, Ψ)^T G(Φ, Ψ),

and so {W, Ψ} is a constant regular pair when det G(Φ, Ψ) ≠ 0 and the elements of Φ and Ψ are linearly independent. In the special case that the elements of Φ are orthonormal, ω_i = P_Φ ψ_i, i ∈ Z_n, where P_Φ is the orthogonal projection of X onto span Φ.
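The identity G(W, Ψ) = G(Φ, Ψ)ᵀ G(Φ, Ψ) can be checked directly in coordinates. The sketch below uses R^6 with the Euclidean inner product and an orthonormal Φ (purely illustrative data); orthonormality is needed only for the projection interpretation, the first identity holds for any Φ.

```python
import numpy as np

# Check G(W, Psi) = G(Phi, Psi)^T G(Phi, Psi) for the modified set
# omega_i = sum_j (phi_j, psi_i) phi_j.
rng = np.random.default_rng(1)
Phi, _ = np.linalg.qr(rng.standard_normal((6, 3)))  # columns phi_1..phi_3, orthonormal
Psi = rng.standard_normal((6, 3))                   # columns psi_1..psi_3

G = Phi.T @ Psi              # correlation matrix, G[i, j] = (phi_i, psi_j)
W = Phi @ G                  # column i is omega_i = sum_j (phi_j, psi_i) phi_j
GW = W.T @ Psi               # correlation matrix G(W, Psi)

# since Phi is orthonormal, omega_i is also the orthogonal projection
# of psi_i onto span(Phi):
Proj = Phi @ Phi.T
```

Here GW coincides with G.T @ G, and W with Proj @ Psi, confirming both observations above.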

Let X_n = span Φ_n and Y_n = span Ψ_n. When {Φ_n, Ψ_n} form a regular pair of finite sets, and for every x ∈ X, lim_{n→∞} dist(x, X_n) = 0, the subspaces {X_n, Y_n} form a regular pair of subspaces in the terminology of Definition 2.30. Conversely, whenever two subspaces {X_n, Y_n} form a regular pair, these subspaces have bases which, as sets, form a regular pair. The notion of regular pairs of subspaces from Definition 2.30 is independent of the bases of the subspaces. However, Definition 3.34 depends upon the specific sets used, and may fail to hold if these sets are transformed into others by linear transformations.

Let us observe that (3.63) and (3.64) imply

G(Ψ_n) ≤ σσ′ G_+(Φ_n, Ψ_n). (3.65)



Moreover, for any a = [a_i : i ∈ Z_{s(n)}] ∈ R^{s(n)}, by the Cauchy–Schwarz inequality and (3.63) we have that

a^T G_+(Φ_n, Ψ_n) a = ( Σ_{j∈Z_{s(n)}} a_j φ_j, Σ_{j∈Z_{s(n)}} a_j ψ_j )
  ≤ [a^T G(Φ_n) a]^{1/2} [a^T G(Ψ_n) a]^{1/2}
  ≤ σ^{1/2} [a^T G_+(Φ_n, Ψ_n) a]^{1/2} [a^T G(Ψ_n) a]^{1/2}.

This inequality implies that

G_+(Φ_n, Ψ_n) ≤ σ G(Ψ_n).

Using this inequality and (3.63), we conclude that

G(Φ_n) ≤ σ G_+(Φ_n, Ψ_n) ≤ σ² G(Ψ_n). (3.66)

Therefore it follows that whenever {Φ_n, Ψ_n} is a regular pair, then so is {Ψ_n, Φ_n}.

When the sets Φ_n, Ψ_n form a regular pair with constants σ, σ′, the generalized best approximation projection P_n : X → X_n with respect to Y_n enjoys the bound

‖P_n‖ ≤ p := σ (σ′)^{1/2}. (3.67)

To confirm this inequality, for each x ∈ X we write P_n x in the form

P_n x = Σ_{j∈Z_{s(n)}} a_j φ_j,

where the vector a = [a_j : j ∈ Z_{s(n)}] is the solution of the linear equations

(x, ψ_i) = ( Σ_{j∈Z_{s(n)}} a_j φ_j, ψ_i ),  i ∈ Z_{s(n)}.

Hence, multiplying both sides of these equations by a_i, summing over i ∈ Z_{s(n)}, and using (3.63) and (3.64), we get that

‖P_n x‖² = a^T G(Φ_n) a ≤ σ a^T G_+(Φ_n, Ψ_n) a
  = σ ( P_n x, Σ_{j∈Z_{s(n)}} a_j ψ_j ) = σ ( x, Σ_{j∈Z_{s(n)}} a_j ψ_j )
  ≤ σ ‖x‖ ‖Σ_{j∈Z_{s(n)}} a_j ψ_j‖ ≤ σ (σ′)^{1/2} ‖x‖ ‖Σ_{j∈Z_{s(n)}} a_j φ_j‖ = σ (σ′)^{1/2} ‖x‖ ‖P_n x‖.

Now we divide the first and last terms in the above inequality by ‖P_n x‖ to yield the desired inequality ‖P_n‖ ≤ p.
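The bound just derived can be exercised numerically: in R^8 with the Euclidean inner product, take σ and σ′ to be the smallest constants satisfying the two regular-pair inequalities (computed as largest generalized eigenvalues) and compare σ(σ′)^{1/2} with the operator norm of P_n = Φ(ΨᵀΦ)^{-1}Ψᵀ. The bases below are illustrative: Ψ is a small perturbation of an orthonormal Φ, which keeps the symmetric part positive definite.

```python
import numpy as np

# Check of ||P_n|| <= sigma * (sigma')^{1/2} in R^8.
rng = np.random.default_rng(3)
Phi, _ = np.linalg.qr(rng.standard_normal((8, 3)))   # orthonormal trial set
Psi = Phi + 0.05 * rng.standard_normal((8, 3))       # nearby test set

G_xx = Phi.T @ Phi                           # G(Phi)
G_yy = Psi.T @ Psi                           # G(Psi)
G_plus = 0.5 * (Phi.T @ Psi + Psi.T @ Phi)   # symmetric part G_+(Phi, Psi)

# smallest sigma with G(Phi) <= sigma G_+ : largest generalized eigenvalue
L = np.linalg.cholesky(G_plus)               # requires G_+ > 0 (regular pair)
Li = np.linalg.inv(L)
sigma = np.linalg.eigvalsh(Li @ G_xx @ Li.T).max()
# smallest sigma' with G(Psi) <= sigma' G(Phi)
Mi = np.linalg.inv(np.linalg.cholesky(G_xx))
sigma_p = np.linalg.eigvalsh(Mi @ G_yy @ Mi.T).max()

# generalized best approximation projection P_n = Phi (Psi^T Phi)^{-1} Psi^T
P = Phi @ np.linalg.solve(Psi.T @ Phi, Psi.T)
norm_P = np.linalg.norm(P, 2)
```

Since P is a nontrivial projection, norm_P ≥ 1, and by the derivation above it never exceeds σ(σ′)^{1/2}.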



Recall that

γ_n = inf{‖P_{Y_n} x‖ / ‖x‖ : x ∈ X_n}

and

θ_n = arccos γ_n,  n ∈ N.

Note that γ_n ∈ [0, 1] and θ_n ∈ [0, π/2]. By definition, θ_n is the angle between the two subspaces X_n and Y_n. Let us observe that for any x ∈ X_n the equation

P_n P_{Y_n} x = x

holds. This leads to

cos θ_n = inf{‖P_{Y_n} x‖ / ‖P_n P_{Y_n} x‖ : x ∈ X_n} ≥ inf{‖y‖ / ‖P_n y‖ : y ∈ Y_n} ≥ 1/‖P_n‖.

Therefore we conclude that when {X_n, Y_n} is a pair of subspaces with bases Φ_n, Ψ_n which form a regular pair with constants σ, σ′, the inequality

cos θ_n ≥ p^{-1} > 0

holds. In other words, in this case for all n ∈ N we have θ_n ∈ [0, θ*), where θ* < π/2.

3.5.3 The discrete Petrov–Galerkin method and its iterated scheme

The Petrov–Galerkin method for Fredholm integral equations of the second kind was studied in the preceding subsections. To use the Petrov–Galerkin method in practical computation, we must be able to compute efficiently the integrals occurring in the method. In this subsection we take the approach of discretizing a given integral equation by a discrete projection and a discrete inner product. The iterated solution suggested in this section is also fully discrete.

In this subsection we describe discrete Petrov–Galerkin methods for Fredholm integral equations of the second kind with weakly singular kernels. For this purpose we consider the equation

(I − K) u = f, (3.68)

where K : L^∞(Ω) → C(Ω) is a compact linear integral operator defined by

(Ku)(s) := ∫_Ω K(s, t) u(t) dt,  s ∈ Ω, (3.69)



Ω ⊂ R^d is a bounded closed domain, and K is a function defined on Ω × Ω which is allowed to have weak singularities. We assume that one is not an eigenvalue of the operator K, which guarantees the existence of a unique solution u ∈ L^∞(Ω). Some additional specific assumptions will be imposed later in this subsection.

We first recall the Petrov–Galerkin method for equation (3.68). In this description we let X = L²(Ω) with inner product (·, ·). Let X_n and Y_n be two sequences of finite-dimensional subspaces of X such that

dim X_n = dim Y_n = s(n),

X_n = span{φ_1, φ_2, ..., φ_{s(n)}}

and

Y_n = span{ψ_1, ψ_2, ..., ψ_{s(n)}}.

We assume that {X_n, Y_n} is a regular pair. It is known from the last subsection that the necessary and sufficient condition for the generalized best approximation from X_n to x ∈ X with respect to Y_n to exist uniquely is Y_n ∩ X_n^⊥ = {0}. If this condition holds, then P_n is a projection, and {X_n, Y_n} forms a regular pair if and only if {P_n} is uniformly bounded. The Petrov–Galerkin method for solving equation (3.68) is a numerical scheme to find a function

u_n(s) = Σ_{j∈N_{s(n)}} α_j φ_j(s) ∈ X_n

such that

((I − K) u_n, y) = (f, y) for all y ∈ Y_n, (3.70)

or equivalently,

Σ_{j∈N_{s(n)}} α_j [(φ_j, ψ_i) − (Kφ_j, ψ_i)] = (f, ψ_i),  i ∈ N_{s(n)}. (3.71)

Using the generalized best approximation P_n : X → X_n, we write equation (3.70) in operator form as

(I − P_n K) u_n = P_n f. (3.72)

It was also proved in the last subsection that if {X_n, Y_n} is a regular pair, then for sufficiently large n equation (3.72) has a unique solution u_n ∈ X_n, which satisfies the estimate

‖u_n − u‖ ≤ C inf{‖u − x‖ : x ∈ X_n}.



Solving equation (3.72) requires solving the linear system (3.71). Of course, the entries of the coefficient matrix of (3.71) involve the integrals (Kφ_j, ψ_i), which are normally evaluated by a numerical quadrature formula. Roughly speaking, the discrete Petrov–Galerkin method is the scheme (3.71) with the integrals appearing in the method computed by quadrature formulas. However, we shall develop our discrete Petrov–Galerkin method independently of the Petrov–Galerkin method (3.72). In other words, we do not assume that the Petrov–Galerkin method (3.72) has been previously constructed, in order to avoid the "regular pair" assumption, which is crucial for the solvability and convergence of the Petrov–Galerkin method. We take a one-step approach to fully discretize equation (3.68) directly. We first describe the method in "abstract" terms, without specifying the bases and the concrete quadrature formulas. Later we specialize them using piecewise polynomial spaces. The only assumption that we have to impose later to guarantee the solvability and convergence of the resulting concrete method is a local condition on the reference element, and it is thus easy to verify.

In our description we use function values f(t) at given points t ∈ Ω for an L^∞ function f. We follow [21] to define them precisely. Let C̄(Ω) denote the subspace of L^∞(Ω) consisting of those functions each of which is equal a.e. to an element of C(Ω). The point evaluation functional δ_t on the space C̄(Ω) is defined by

δ_t(f) = f(t),  t ∈ Ω,  f ∈ C̄(Ω),

where f on the right-hand side is chosen to be the representative of f which lies in C(Ω), that is, the continuous representative. By the Hahn–Banach theorem, the point evaluation functional δ_t can be extended from C̄(Ω) to the whole of L^∞(Ω) in such a way that the norm is preserved. We use d_t to denote such an extension and define

f(t) := d_t(f) for f ∈ L^∞(Ω).

We remark that the extension is not unique, but this is usually immaterial. What is important is that it exists and preserves many of the properties naturally associated with the point evaluation functional. For example, at a point of continuity of f the extended point evaluation is uniquely defined and takes the natural value; moreover, d_t is continuous at such points. The reader is referred to [21] for more details on this extension.

We now return to our description of the discrete Petrov–Galerkin method. As in the description of the (continuous) Petrov–Galerkin method, we choose two subspaces X_n = span{φ_j : j ∈ N_{s(n)}} and Y_n = span{ψ_j : j ∈ N_{s(n)}} of the space L^∞(Ω) such that dim X_n = dim Y_n = s(n). We choose m_n points t_i ∈ Ω, a set of quadrature weights {w_{1i} : i ∈ N_{m_n}} and a set of weight functions {w_{2i} : i ∈ N_{m_n}}. We define the discrete inner product

(x, y)_n := Σ_{i∈N_{m_n}} w_{1i} x(t_i) y(t_i),  x, y ∈ L^∞(Ω), (3.73)

which will be used to approximate the inner product (x, y) = ∫_Ω x(t) y(t) dt, and define the discrete operators

(K_n u)(s) := Σ_{i∈N_{m_n}} w_{2i}(s) u(t_i),  u ∈ L^∞(Ω), (3.74)

which will be used to approximate the operator K. With this notation, the discrete Petrov–Galerkin method for equation (3.68) is a numerical scheme to find

u_n(s) = Σ_{j∈N_{s(n)}} α_{nj} φ_j(s) (3.75)

such that

((I − K_n) u_n, y)_n = (f, y)_n for all y ∈ Y_n. (3.76)

In terms of basis functions, equation (3.76) is written as

Σ_{j∈N_{s(n)}} α_{nj} [ Σ_{ℓ∈N_{m_n}} w_{1ℓ} φ_j(t_ℓ) ψ_i(t_ℓ) − Σ_{ℓ∈N_{m_n}} w_{1ℓ} Σ_{m∈N_{m_n}} w_{2m}(t_ℓ) φ_j(t_m) ψ_i(t_ℓ) ]
  = Σ_{ℓ∈N_{m_n}} w_{1ℓ} f(t_ℓ) ψ_i(t_ℓ),  i ∈ N_{s(n)}. (3.77)

Upon solving the linear system (3.77) we obtain the s(n) values α_{nj}. Substituting them into (3.75) yields an approximation to the solution u of equation (3.68). Equation (3.76) can also be written in operator form via a discrete generalized best approximation Q_n, which we define next. Let Q_n : X → X_n be defined by

(Q_n x, y)_n = (x, y)_n for all y ∈ Y_n. (3.78)

If Q_n x is uniquely defined for every x ∈ X, equation (3.76) can be written in the form

(I − Q_n K_n) u_n = Q_n f. (3.79)
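A minimal sketch of the discrete projection Q_n defined by (3.78), assuming a trapezoidal discrete inner product on [0, 1] and low-degree polynomial trial and test bases (all illustrative choices, not from the text):

```python
import numpy as np

# Discrete generalized best approximation Q_n:
# (Q_n x, y)_n = (x, y)_n for all y in Y_n, with discrete inner product
# (x, y)_n = sum_i w_1i x(t_i) y(t_i).
m = 201
t = np.linspace(0.0, 1.0, m)
w1 = np.full(m, 1.0 / (m - 1)); w1[0] *= 0.5; w1[-1] *= 0.5   # trapezoidal weights

Phi = np.stack([np.ones_like(t), t, t**2], axis=1)                # trial basis values
Psi = np.stack([np.ones_like(t), 2*t - 1, (2*t - 1)**2], axis=1)  # test basis values

def Qn(x):
    # solve  sum_j a_j (phi_j, psi_i)_n = (x, psi_i)_n  for the coefficients a
    A = Psi.T @ (w1[:, None] * Phi)
    b = Psi.T @ (w1 * x)
    return Phi @ np.linalg.solve(A, b)

qx = Qn(np.exp(t))      # discrete projection of e^t onto span(Phi)
```

Since Q_n reproduces elements of X_n, applying it twice changes nothing; that projection property is what makes the operator form (3.79) meaningful.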

We postpone a discussion of the unique existence of Q_n x until later.

The iterated Petrov–Galerkin method has been shown to have a superconvergence property, where the additional order of convergence gained from an iteration is attributed to the approximation of the kernel from the test space. The convergence order of the iterated Petrov–Galerkin method is equal to the approximation order of the space X_n plus the approximation order of the space Y_n. It is of interest to study the superconvergence of the iterated discrete Petrov–Galerkin method, which we define by

u′_n = f + K_n u_n. (3.80)

Equation (3.80) is a fully discrete algorithm which can be implemented easily, involving only multiplications and additions. It can be shown that u′_n satisfies the operator equation

(I − K_n Q_n) u′_n = f. (3.81)

This form of the equation allows us to treat the iterated discrete Petrov–Galerkin method as an operator equation whose analysis is covered by the theory developed in Section 2.2.4.

Up to now the discrete Petrov–Galerkin method has been described in abstract terms, without specifying the spaces X_n and Y_n. In the remainder of this section we specialize the discrete Petrov–Galerkin method by specifying the spaces X_n and Y_n and defining the operators Q_n and K_n in terms of piecewise polynomials. We assume that Ω is a polyhedral region and construct a partition T_n for Ω by dividing it into N_n simplices Δ_{ni}, i ∈ N_{N_n}, such that

h := max{diam Δ_{ni} : i ∈ N_{N_n}} → 0 as n → ∞, (3.82)

Ω = ⋃_{i∈N_{N_n}} Δ_{ni},

and

meas(Δ_{ni} ∩ Δ_{nj}) = 0,  i ≠ j.

When the dependence of the simplex Δ_{ni} on n is well understood, we drop the first index n in the notation and simply write Δ_i. For each positive integer n the set T_n forms a partition of the domain Ω. We also require that the partition be regular, in the sense that any vertex of a simplex in T_n is not in the interior of an edge or a face of another simplex in the set. It is well known that for each simplex there exists a unique one-to-one and onto affine mapping which maps the simplex onto a unit simplex Δ_0, called the reference element.

Let F_i, i ∈ N_{N_n}, denote the invertible affine mappings that map the reference element Δ_0 one-to-one and onto the simplices Δ_i. Then the affine mappings F_i have the form

F_i(t) = B_i t + b_i,  t ∈ Δ_0, (3.83)



where $B_i$ is a $d \times d$ invertible matrix and $b_i$ a vector in $\mathbb{R}^d$, and they satisfy

$$\Delta_i = F_i(\Delta^0).$$
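As a small illustration of (3.83), the following Python sketch (the target triangle is a hypothetical example, not taken from the text) builds $B_i$ and $b_i$ from the vertices of a simplex and checks that $F_i$ maps the reference vertices onto them:

```python
import numpy as np

# Build the affine map F_i(t) = B_i t + b_i that sends the reference simplex
# (vertices 0, e_1, ..., e_d) onto a simplex with the given vertices.
def affine_map(vertices):
    v = np.asarray(vertices, dtype=float)   # shape (d+1, d)
    b = v[0]
    B = (v[1:] - v[0]).T                    # columns are edge vectors
    return B, b

# Example: map the unit triangle onto a (hypothetical) target triangle.
B, b = affine_map([[1.0, 1.0], [3.0, 1.0], [1.0, 2.0]])
ref_vertices = np.array([[0.0, 0.0], [1.0, 0.0], [0.0, 1.0]])
mapped = ref_vertices @ B.T + b
assert np.allclose(mapped, [[1, 1], [3, 1], [1, 2]])
assert np.linalg.det(B) > 0                 # orientation chosen so det(B_i) > 0
```

The determinant $\det(B_i)$ is exactly the volume-scaling factor used below in the quadrature weights.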

On the reference element $\Delta^0$ we choose two piecewise polynomial spaces $S^1_{k_1}(\Delta^0)$ and $S^2_{k_2}(\Delta^0)$, of total degree $k_1 - 1$ and $k_2 - 1$ respectively, such that

$$\dim S^1_{k_1}(\Delta^0) = \dim S^2_{k_2}(\Delta^0) = \mu.$$

The partitions $\Delta_1$ and $\Delta_2$ of $\Delta^0$, associated respectively with $S^1_{k_1}(\Delta^0)$ and $S^2_{k_2}(\Delta^0)$, may be different; they are arranged according to the integers $k_1$, $k_2$ and $d$. Let $\nu_1$ and $\nu_2$ denote the numbers of sub-simplices contained in the partitions $\Delta_1$ and $\Delta_2$. We have to choose the pairs of integers $k_1, \nu_1$ and $k_2, \nu_2$ such that

$$\binom{k_1 - 1 + d}{d}\nu_1 = \binom{k_2 - 1 + d}{d}\nu_2 = \mu,$$

because the dimension of the space of polynomials of total degree $k$ is $\binom{k+d}{d}$.
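As a quick sanity check of this counting identity, the sketch below uses hypothetical choices ($d = 2$, piecewise linears with $\nu_1 = 2$ sub-triangles paired with piecewise quadratics with $\nu_2 = 1$; these numbers are illustrative, not from the text) and confirms that both sides give the same $\mu$:

```python
from math import comb

# Dimension of the space of polynomials of total degree <= k - 1 in d variables.
def poly_dim(k, d):
    return comb(k - 1 + d, d)

# Hypothetical pairing on a triangle (d = 2): linears (k1 = 2) on nu1 = 2
# sub-simplices versus quadratics (k2 = 3) on nu2 = 1 sub-simplex.
d = 2
k1, nu1 = 2, 2
k2, nu2 = 3, 1
mu = poly_dim(k1, d) * nu1
assert mu == poly_dim(k2, d) * nu2 == 6   # both constructions give mu = 6
```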

We shall not provide a detailed discussion of how the partitions $\Delta_1$ and $\Delta_2$ are constructed. Instead, we assume that we have chosen bases for these two spaces so that

$$S^1_{k_1}(\Delta^0) = \mathrm{span}\{\phi_j : j \in \mathbb{N}_\mu\}$$

and

$$S^2_{k_2}(\Delta^0) = \mathrm{span}\{\psi_j : j \in \mathbb{N}_\mu\}.$$

We next map these piecewise polynomial spaces on $\Delta^0$ to each simplex $\Delta_i$ by letting

$$\phi_{ij}(t) = \begin{cases} \phi_j \circ F_i^{-1}(t), & t \in \Delta_i, \\ 0, & t \notin \Delta_i, \end{cases}$$

and

$$\psi_{ij}(t) = \begin{cases} \psi_j \circ F_i^{-1}(t), & t \in \Delta_i, \\ 0, & t \notin \Delta_i, \end{cases}$$

for $i \in \mathbb{N}_{N_n}$ and $j \in \mathbb{N}_\mu$. Using these functions as bases, we define the trial space and the test space respectively by

$$\mathbb{X}_n = \mathrm{span}\{\phi_{ij} : i \in \mathbb{N}_{N_n},\ j \in \mathbb{N}_\mu\}$$

and

$$\mathbb{Y}_n = \mathrm{span}\{\psi_{ij} : i \in \mathbb{N}_{N_n},\ j \in \mathbb{N}_\mu\}.$$


It follows from (3.82) that

$$C(\Omega) \subseteq \overline{\bigcup_n \mathbb{X}_n} \quad \text{and} \quad C(\Omega) \subseteq \overline{\bigcup_n \mathbb{Y}_n}.$$

Moreover, if $x \in W^{k_1,\infty}(\Omega)$ then there exists a constant $c > 0$ such that for all $n$

$$\inf\{\|x - \phi\|_\infty : \phi \in \mathbb{X}_n\} \le c h^{k_1},$$

and if $x \in W^{k_2,\infty}(\Omega)$ then likewise there exists a constant $c > 0$ such that for all $n$

$$\inf\{\|x - \phi\|_\infty : \phi \in \mathbb{Y}_n\} \le c h^{k_2}.$$

However, the space $\mathbb{X} = \overline{\bigcup_n \mathbb{X}_n}$ does not equal $L^\infty(\Omega)$: it is a proper subspace of $L^\infty(\Omega)$, because the space $L^\infty(\Omega)$ is not separable. Due to this fact, the existing theory of collectively compact operators (cf. [6]) does not apply directly to this setting; some modifications of the theory are required.

We next specialize the definition of the discrete inner product (3.73) and describe a concrete construction of the approximate operators $\mathcal{K}_n$. To this end we introduce a third piecewise polynomial space $S^3_{k_3}(\Delta^0)$, of total degree $k_3 - 1$, on $\Delta^0$. We divide the reference element $\Delta^0$ into $\nu_3$ sub-simplices,

$$\Delta_3 = \{e_i : i \in \mathbb{N}_{\nu_3}\},$$

and also assume that the partition $\Delta_3$ is regular. On each of the simplices $e_i$ we choose $m = \binom{k_3 - 1 + d}{d}$ points $\tau_{ij}$, $j \in \mathbb{N}_m$, such that they admit a unique Lagrange interpolating polynomial of total degree $k_3 - 1$ on $e_i$. For multivariate Lagrange interpolation by polynomials of total degree, see [83] and the references cited therein. Let $p_{ij}$ be the polynomial of total degree $k_3 - 1$ on $e_i$ satisfying the interpolation conditions

$$p_{ij}(\tau_{i'j'}) = \delta_{ii'}\delta_{jj'}, \quad i, i' \in \mathbb{N}_{\nu_3},\ j, j' \in \mathbb{N}_m.$$

We assemble these polynomials to form a basis for the space $S^3_{k_3}(\Delta^0)$ by letting

$$\zeta_{(i-1)m+j}(t) = \begin{cases} p_{ij}(t), & t \in e_i, \\ 0, & t \notin e_i, \end{cases} \quad i \in \mathbb{N}_{\nu_3},\ j \in \mathbb{N}_m.$$

Set $\gamma = m\nu_3$, which is equal to the dimension of $S^3_{k_3}(\Delta^0)$, and

$$t_{(i-1)m+j} = \tau_{ij}, \quad i \in \mathbb{N}_{\nu_3},\ j \in \mathbb{N}_m.$$

Then $\zeta_i \in S^3_{k_3}(\Delta^0)$ and these functions satisfy the interpolation conditions

$$\zeta_i(t_j) = \delta_{ij}, \quad i, j \in \mathbb{N}_\gamma.$$

This set of functions forms a basis for the space $S^3_{k_3}(\Delta^0)$. It can be used to introduce a piecewise polynomial space on $\Omega$ by mapping the basis $\{\zeta_j : j \in \mathbb{N}_\gamma\}$ for $S^3_{k_3}(\Delta^0)$ from $\Delta^0$ into each $\Delta_i$. Specifically, we define

$$\zeta_{ij}(t) = \begin{cases} \zeta_j \circ F_i^{-1}(t), & t \in \Delta_i, \\ 0, & t \notin \Delta_i, \end{cases}$$

where $F_i$ is the affine map defined by (3.83). Let

$$\mathbb{Z}_n = \mathrm{span}\{\zeta_{ij} : i \in \mathbb{N}_{N_n},\ j \in \mathbb{N}_\gamma\}.$$

Hence $\mathbb{Z}_n$ is a piecewise polynomial space of dimension $\gamma N_n$. For each $i$ we define

$$t_{ij} = F_i(t_j) = B_i t_j + b_i,$$

where $B_i$ and $b_i$ are, respectively, the matrix and vector appearing in the definition of the affine map $F_i$. Furthermore, we define the linear projection $\mathcal{Z}_n : \mathbb{X} \to \mathbb{Z}_n$ by

$$\mathcal{Z}_n g = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\gamma} d_{t_{ij}}(g)\,\zeta_{ij},$$

where $d_t$ is the extension of the point evaluation functional $\delta_t$ satisfying $\|d_t\| = 1$, which was discussed earlier, and which satisfies the condition

$$d_{t_{ij}}(\zeta_{i'j'}) = \delta_{ii'}\delta_{jj'}, \quad i, i' \in \mathbb{N}_{N_n},\ j, j' \in \mathbb{N}_\gamma.$$

Moreover, we have that

$$\|\mathcal{Z}_n\| = \operatorname*{ess\,sup}_{t \in \Omega}\sum_{i \in \mathbb{N}_{N_n}}\sum_{j \in \mathbb{N}_\gamma}|\zeta_{ij}(t)| = \operatorname*{ess\,sup}_{t \in \Delta^0}\sum_{j \in \mathbb{N}_\gamma}|\zeta_j(t)|.$$

That is, $\|\mathcal{Z}_n\|$ is uniformly bounded for all $n$. It follows from the uniform boundedness of $\mathcal{Z}_n$ that for any $y \in W^{k_3,\infty}(\Omega)$ there holds the estimate

$$\|y - \mathcal{Z}_n y\|_\infty \le C\inf\{\|y - \phi\|_\infty : \phi \in \mathbb{Z}_n\} \le C h^{k_3}. \tag{3.84}$$

Using the projection $\mathcal{Z}_n$ defined above, we have the quadrature formula

$$\int_\Omega g(t)\,dt = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\gamma} w_{ij}\, d_{t_{ij}}(g) + O(h^{k_3}),$$

where

$$w_{ij} = \int_\Omega \zeta_{ij}(t)\,dt.$$


If we set

$$w_i = \int_{\Delta^0} \zeta_i(t)\,dt, \quad i \in \mathbb{N}_\gamma,$$

then we have

$$w_{ij} = \int_{\Delta_i} \zeta_j(F_i^{-1}(t))\,dt = \det(B_i)\int_{\Delta^0} \zeta_j(t)\,dt = \det(B_i)\,w_j.$$

Without loss of generality we assume that

$$\det(B_i) > 0, \quad i \in \mathbb{N}_{N_n}.$$

Employing this formula, we introduce the following discrete inner product:

$$(x, y)_n = \sum_{i \in \mathbb{N}_{N_n}} \sum_{\ell \in \mathbb{N}_\gamma} w_{i\ell}\,x(t_{i\ell})\,y(t_{i\ell}). \tag{3.85}$$

Formula (3.85) is a concrete form of (3.73). When $x, y \in W^{k_3,\infty}(\Omega)$ we have the error estimate

$$|(x, y) - (x, y)_n| \le C h^{k_3}.$$
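The construction of the reference weights $w_j$ and of the discrete inner product $(\cdot,\cdot)_n$ can be sketched in one dimension. The concrete choices below ($d = 1$, $\Delta^0 = [0,1]$, $k_3 = 3$ equally spaced points, so the weights reduce to Simpson's rule) are illustrative assumptions, not the text's:

```python
import numpy as np

k3 = 3
ref_pts = np.array([0.0, 0.5, 1.0])

# Interpolatory weights on the reference element: w_j = int_0^1 zeta_j(t) dt,
# obtained by requiring exactness for the monomials t**p, p < k3.
V = np.vander(ref_pts, k3, increasing=True).T        # V[p, j] = t_j**p
moments = 1.0 / np.arange(1, k3 + 1)                 # int_0^1 t**p dt
w_ref = np.linalg.solve(V, moments)                  # Simpson: [1/6, 2/3, 1/6]

def discrete_inner(x, y, N):
    """(x, y)_n on [0, 1] with N subintervals; here w_ij = det(B_i)*w_j = h*w_j."""
    h = 1.0 / N
    total = 0.0
    for i in range(N):
        t = i * h + h * ref_pts                      # t_ij = F_i(t_j)
        total += h * np.sum(w_ref * x(t) * y(t))
    return total

# The rule integrates x*y exactly whenever x*y is a polynomial of degree <= k3 - 1.
assert abs(discrete_inner(lambda t: t, lambda t: t, 4) - 1/3) < 1e-12
```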

With this specific definition of the spaces $\mathbb{X}_n$, $\mathbb{Y}_n$ and the discrete inner product, we obtain a construction of the operators $\mathcal{Q}_n$ using equation (3.78).

Finally, to describe a concrete construction of the approximate operators $\mathcal{K}_n$, we impose a few additional assumptions on the kernel $K$ of the integral operator $\mathcal{K}$. Roughly speaking, we assume that $K$ is a product of two kernels: one of them is continuous but perhaps involves a complicated function, and the other has a simple form but has a singularity. In particular, we let

$$K(s,t) = K_1(s,t)K_2(s,t), \quad s, t \in \Omega,$$

where $K_1$ is continuous on $\Omega \times \Omega$ and $K_2$ has a singularity and satisfies the conditions

$$K_2(s,\cdot) \in L^1(\Omega),\ s \in \Omega, \qquad \sup_{s \in \Omega}\int_\Omega |K_2(s,t)|\,dt < +\infty, \tag{3.86}$$

$$\|K_2(s,\cdot) - K_2(s',\cdot)\|_1 \to 0 \quad \text{as } s' \to s. \tag{3.87}$$

Moreover, we assume that the integral of the product of $K_2(s,t)$ and a polynomial $p(t)$ with respect to the variable $t$ can be evaluated exactly. Many integral operators $\mathcal{K}$ that appear in practical applications are of this type.

Using the linear projection $\mathcal{Z}_n$, we define $\mathcal{K}_n : \mathbb{X} \to \mathbb{X}$ by

$$(\mathcal{K}_n x)(s) = \int_\Omega \mathcal{Z}_n(K_1(s,t)x(t))K_2(s,t)\,dt,$$


which approximates the operator $\mathcal{K}$. For $u_n \in \mathbb{X}_n$ we have that

$$(\mathcal{K}_n u_n)(s) = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\gamma} w_{ij}(s)\,K_1(s, t_{ij})\,u_n(t_{ij}),$$

where

$$w_{ij}(s) = \int_{\Delta_i} \zeta_{ij}(t)K_2(s,t)\,dt.$$

This concrete construction of the trial space $\mathbb{X}_n$, the test space $\mathbb{Y}_n$ and the operators $\mathcal{Q}_n$, $\mathcal{K}_n$ yields a specific discrete Petrov–Galerkin method, which is described by equation (3.79). This is the method that we shall analyze in the next subsection.

3.5.4 The convergence of the discrete Petrov–Galerkin method

In this subsection we follow the general theory developed in Section 2.2.4 to prove convergence results for the discrete Petrov–Galerkin method when a piecewise polynomial approximation is used. Throughout the remaining part of this subsection we let $\mathbb{X} = L^\infty(\Omega)$ and $\mathbb{V} = C(\Omega)$, let $\mathbb{X}_n$ and $\mathbb{Y}_n$ be the piecewise polynomial spaces defined in the last subsection, and set $\mathbb{X}_\infty = \overline{\cup_n \mathbb{X}_n}$. Our main task is to verify that the operators $\mathcal{Q}_n$ and $\mathcal{K}_n$, with the spaces $\mathbb{X}_n$, $\mathbb{Y}_n$ defined in the last subsection by the piecewise polynomials, satisfy the hypotheses (H-1)–(H-4) of Section 2.2.4, so that Theorem 2.54 can be applied. For this purpose we define the necessary notation. Let

$$\Phi = [\phi_i(t_j) : i \in \mathbb{N}_\mu,\ j \in \mathbb{N}_\gamma] \quad \text{and} \quad \Psi = [\psi_i(t_j) : i \in \mathbb{N}_\mu,\ j \in \mathbb{N}_\gamma],$$

where $\phi_i$ and $\psi_i$ are the bases we have chosen for the piecewise polynomial spaces $S^1_{k_1}(\Delta^0)$ and $S^2_{k_2}(\Delta^0)$, and $t_j$ are the interpolation points in the reference element $\Delta^0$ chosen in the last subsection. Noting that $w_i$ are the weights of the quadrature formula on the reference element developed in Section 4.2.1, we set

$$W = \mathrm{diag}(w_1, \dots, w_\gamma) \quad \text{and} \quad M = \Phi W \Psi^T.$$

The next proposition presents a necessary and sufficient condition for the discrete generalized best approximation to exist uniquely.

Proposition 3.35 For each $x \in L^\infty(\Omega)$, the discrete generalized best approximation $\mathcal{Q}_n x$ from $\mathbb{X}_n$ to $x$ with respect to $\mathbb{Y}_n$, defined by (3.78), exists uniquely if and only if

$$\det(M) \neq 0. \tag{3.88}$$

Under this condition, $\mathcal{Q}_n$ is a projection; that is, $\mathcal{Q}_n^2 = \mathcal{Q}_n$.


Proof Let $x \in L^\infty(\Omega)$ be given. Showing that there is a unique $\mathcal{Q}_n x \in \mathbb{X}_n$ satisfying equation (3.78) is equivalent to proving that the linear system

$$\sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\mu} c_{ij}(\phi_{ij}, \psi_{i'j'})_n = (x, \psi_{i'j'})_n, \quad i' \in \mathbb{N}_{N_n},\ j' \in \mathbb{N}_\mu, \tag{3.89}$$

has a unique solution $[c_{11}, \dots, c_{1\mu}, \dots, c_{N_n 1}, \dots, c_{N_n\mu}]$. This in turn is equivalent to the coefficient matrix $\mathbf{M}$ of this system being nonsingular. It is easily seen that

$$\mathbf{M} = \mathrm{diag}\big(\det(B_1)M, \dots, \det(B_{N_n})M\big).$$

Thus the first result of this proposition follows from hypothesis (3.88).

It remains to show that $\mathcal{Q}_n$ is a projection. By definition we have, for every $x \in L^\infty(\Omega)$, that

$$(\mathcal{Q}_n x, y)_n = (x, y)_n \quad \text{for all } y \in \mathbb{Y}_n.$$

In particular, this equation holds when $x$ is replaced by $\mathcal{Q}_n x$; that is,

$$(\mathcal{Q}_n^2 x, y)_n = (\mathcal{Q}_n x, y)_n \quad \text{for all } y \in \mathbb{Y}_n.$$

It follows for each $x \in \mathbb{X}$ that

$$\mathcal{Q}_n^2 x = \mathcal{Q}_n x.$$

That is, $\mathcal{Q}_n$ is a projection.

Condition (3.88) is a condition on the choice of the points $t_j$ on the reference element: they have to be selected carefully so that they match the choice of the bases $\phi_i$ and $\psi_i$. This condition has to be verified before a concrete construction of the projection $\mathcal{Q}_n$ is given. This is not a difficult task, since the condition is posed on the reference element: it is independent of $n$, and in practical applications the numbers $\mu$ and $\gamma$ are not too large.

The next proposition gives two useful properties of the projection $\mathcal{Q}_n$.

Proposition 3.36 Assume that condition (3.88) is satisfied. Let $\mathcal{Q}_n$ be defined by (3.78), with the spaces $\mathbb{X}_n$, $\mathbb{Y}_n$ and the discrete inner product constructed in terms of the piecewise polynomials described in the last subsection. Then the following statements hold.

(i) $\mathcal{Q}_n$ is uniformly bounded; that is, there exists a constant $c > 0$ such that $\|\mathcal{Q}_n\| \le c$ for all $n$.

(ii) There exists a constant $c > 0$ such that for all $n$,

$$\|\mathcal{Q}_n x - x\|_\infty \le c \inf\{\|x - \phi\|_\infty : \phi \in \mathbb{X}_n\}$$

holds for all $x \in L^\infty(\Omega)$. Thus for each $x \in C(\Omega)$, $\|\mathcal{Q}_n x - x\|_\infty \to 0$ as $n \to \infty$.


Proof (i) For any $x \in L^\infty(\Omega)$ we have the expression

$$\mathcal{Q}_n x = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\mu} c_{ij}\phi_{ij}, \tag{3.90}$$

where the coefficients $c_{ij}$ satisfy equation (3.89). It follows that

$$\|\mathcal{Q}_n x\|_\infty \le \|c\|_\infty \operatorname*{ess\,sup}_{s \in \Omega}\sum_{i \in \mathbb{N}_{N_n}}\sum_{j \in \mathbb{N}_\mu}|\phi_{ij}(s)| = \|c\|_\infty \max_{s \in \Delta^0}\sum_{j \in \mathbb{N}_\mu}|\phi_j(s)|, \tag{3.91}$$

where

$$c = [c_{11}, \dots, c_{1\mu}, \dots, c_{N_n 1}, \dots, c_{N_n\mu}]^T$$

and the discrete norm of $c$ is defined by $\|c\|_\infty = \max\{|c_{ij}| : i \in \mathbb{N}_{N_n},\ j \in \mathbb{N}_\mu\}$. By definition the vector $c$ depends on $n$, although we do not indicate this in the notation. However, we prove that $\|c\|_\infty$ is in fact bounded independently of $n$. To this end we use system (3.89) and hypothesis (3.88) to conclude that

$$\|c\|_\infty = \|\mathbf{M}^{-1}d\|_\infty, \tag{3.92}$$

where

$$d = [(x, \psi_{11})_n, \dots, (x, \psi_{1\mu})_n, \dots, (x, \psi_{N_n 1})_n, \dots, (x, \psi_{N_n\mu})_n]^T$$

and

$$\mathbf{M}^{-1} = \mathrm{diag}\big(\det(B_1)^{-1}M^{-1}, \dots, \det(B_{N_n})^{-1}M^{-1}\big).$$

Let

$$d_i = [(x, \psi_{i1})_n, \dots, (x, \psi_{i\mu})_n]^T \in \mathbb{R}^\mu.$$

Then it follows from (3.92) that the following estimate of $\|c\|_\infty$ holds in terms of the blocks $d_i$ and $M^{-1}$:

$$\|c\|_\infty \le \max_{i \in \mathbb{N}_{N_n}} \det(B_i)^{-1}\|M^{-1}d_i\|_\infty. \tag{3.93}$$

This inequality reduces estimating $\|c\|_\infty$ to bounding each block $d_i$. By the definition of the discrete inner product, we have the estimate for the norm of $d_i$:

$$\|d_i\|_\infty \le \|x\|_\infty \max_{j \in \mathbb{N}_\mu}\sum_{\ell \in \mathbb{N}_\gamma} w_{i\ell}|\psi_{ij}(t_{i\ell})| = \det(B_i)\|x\|_\infty \max_{j \in \mathbb{N}_\mu}\sum_{\ell \in \mathbb{N}_\gamma} w_\ell|\psi_j(t_\ell)|. \tag{3.94}$$

From (3.91)–(3.94) we conclude that

$$\|\mathcal{Q}_n x\|_\infty \le c\|x\|_\infty \quad \text{for all } n,$$


where $c$ is a constant independent of $n$, with the value

$$c = \|M^{-1}\|_\infty \max_{s \in \Delta^0}\sum_{j \in \mathbb{N}_\mu}|\phi_j(s)| \max_{j \in \mathbb{N}_\mu}\sum_{\ell \in \mathbb{N}_\gamma} w_\ell|\psi_j(t_\ell)|.$$

(ii) Let $\phi \in \mathbb{X}_n$. Since $\mathcal{Q}_n$ is a projection, we have for each $x \in L^\infty(\Omega)$ that

$$\|\mathcal{Q}_n x - x\|_\infty \le \|x - \phi\|_\infty + \|\mathcal{Q}_n\phi - \mathcal{Q}_n x\|_\infty \le (1 + c)\|x - \phi\|_\infty.$$

Thus we obtain the estimate

$$\|\mathcal{Q}_n x - x\|_\infty \le c\inf\{\|x - \phi\|_\infty : \phi \in \mathbb{X}_n\}.$$

This estimate, together with the relation $C(\Omega) \subseteq \overline{\cup_n \mathbb{X}_n}$, implies that $\|\mathcal{Q}_n x - x\|_\infty \to 0$ as $n \to \infty$ for each $x \in C(\Omega)$.

In the next proposition we verify that the operators $\mathcal{K}_n$ defined in the last subsection by the piecewise polynomial approximation satisfy hypotheses (H-1) and (H-2).

Proposition 3.37 Suppose that $\mathcal{K}_n$ is defined as in the last subsection by the piecewise polynomial approximation. Then the following statements hold.

(i) The set of operators $\{\mathcal{K}_n\}$ is collectively compact.
(ii) For each $x \in \mathbb{X}$, $\|\mathcal{K}_n x - \mathcal{K}x\|_\infty \to 0$ as $n \to \infty$.
(iii) If $x \in W^{k_3,\infty}(\Omega)$ and $K_1 \in C(\Omega) \times W^{k_3,\infty}(\Omega)$, then

$$\|\mathcal{K}x - \mathcal{K}_n x\|_\infty \le c h^{k_3}.$$

Proof (i) By the continuity of the kernel $K_1$ and condition (3.87), there exist constants $c_1$ and $c_2$ such that

$$\|K_1(s,\cdot)\|_\infty \le c_1 \quad \text{and} \quad \|K_2(s,\cdot)\|_1 \le c_2.$$

Thus we have that

$$|(\mathcal{K}_n x)(s)| = \left|\int_\Omega \mathcal{Z}_n(K_1(s,t)x(t))K_2(s,t)\,dt\right| \le c_0 c_1 c_2\|x\|_\infty. \tag{3.95}$$

Moreover,

$$|(\mathcal{K}_n x)(s) - (\mathcal{K}_n x)(s')| = \left|\int_\Omega \mathcal{Z}_n(K_1(s,t)x(t))K_2(s,t)\,dt - \int_\Omega \mathcal{Z}_n(K_1(s',t)x(t))K_2(s',t)\,dt\right|$$

$$\le \left|\int_\Omega \mathcal{Z}_n(K_1(s,t)x(t))[K_2(s,t) - K_2(s',t)]\,dt\right| + \left|\int_\Omega [\mathcal{Z}_n(K_1(s,t)x(t)) - \mathcal{Z}_n(K_1(s',t)x(t))]K_2(s',t)\,dt\right|$$

$$\le c_0\|x\|_\infty\big(c_1\|K_2(s,\cdot) - K_2(s',\cdot)\|_1 + c_2\|K_1(s,\cdot) - K_1(s',\cdot)\|_\infty\big).$$

Since $\|K_2(s,\cdot) - K_2(s',\cdot)\|_1$ and $\|K_1(s,\cdot) - K_1(s',\cdot)\|_\infty$ are uniformly continuous on $\Omega$, we observe that $\{\mathcal{K}_n x\}$ is equicontinuous on $\Omega$. By the Arzelà–Ascoli theorem we conclude that $\{\mathcal{K}_n\}$ is collectively compact.

(ii) For any $x \in \mathbb{X}$,

$$|(\mathcal{K}_n x)(s) - (\mathcal{K}x)(s)| = \left|\int_\Omega [\mathcal{Z}_n(K_1(s,t)x(t)) - K_1(s,t)x(t)]K_2(s,t)\,dt\right| \le c_2\|\mathcal{Z}_n(K_1(s,\cdot)x) - K_1(s,\cdot)x\|_\infty.$$

Note that $K_1 x$ is piecewise continuous, as is $x$. By the definition of $\mathcal{Z}_n$, the right-hand side of the above inequality converges to zero as $n \to \infty$. We conclude that the left-hand side converges uniformly to zero on the compact set $\Omega$; that is, $\|\mathcal{K}_n x - \mathcal{K}x\|_\infty \to 0$ as $n \to \infty$.

(iii) If $x \in W^{k_3,\infty}(\Omega)$, then by the approximation order of the interpolation projection $\mathcal{Z}_n$ we have

$$\|\mathcal{K}_n x - \mathcal{K}x\|_\infty \le c\sup_{s \in \Omega}\|(\mathcal{Z}_n(K_1(s,\cdot)x(\cdot)))(\cdot) - K_1(s,\cdot)x(\cdot)\|_\infty \le c h^{k_3}.$$

The estimate above follows immediately from the fact that $K_1 \in C(\Omega) \times W^{k_3,\infty}(\Omega)$ and inequality (3.84).

Using Propositions 3.36 and 3.37 and Theorem 2.54 we obtain the following theorem.

Theorem 3.38 The following statements are valid.

(i) There exists $N_0 > 0$ such that for all $n > N_0$ the discrete Petrov–Galerkin method using the piecewise polynomial approximation described in Section 4.2.1 has a unique solution $u_n \in \mathbb{X}_n$.
(ii) If $u \in W^{\alpha,\infty}(\Omega)$ with $\alpha = \min\{k_1, k_3\}$, then

$$\|u - u_n\|_\infty \le c h^\alpha.$$

Proof By Propositions 3.36 and 3.37 we conclude that conditions (H-1)–(H-4) are satisfied. Hence statement (i) follows immediately from Theorem 2.54, and the estimate

$$\|u - u_n\|_\infty \le c\big(\|u - \mathcal{Q}_n u\|_\infty + \|\mathcal{K}u - \mathcal{K}_n u\|_\infty\big) \tag{3.96}$$

holds. Now let $u \in W^{\alpha,\infty}(\Omega)$. Again, Proposition 3.36 ensures that

$$\|u - \mathcal{Q}_n u\|_\infty \le c\inf\{\|u - \phi\|_\infty : \phi \in \mathbb{X}_n\} \le c h^\alpha. \tag{3.97}$$


By (iii) of Proposition 3.37 we have that

$$\|\mathcal{K}u - \mathcal{K}_n u\|_\infty \le c h^\alpha. \tag{3.98}$$

Substituting estimates (3.97) and (3.98) into inequality (3.96) yields the estimate in (ii).

3.5.5 Superconvergence of the iterated approximation

We present in this subsection a superconvergence property of the iterated discrete Petrov–Galerkin method when the kernel is smooth.

To obtain superconvergence we require, furthermore, that the partitions $\Delta_1$ and $\Delta_3$ of $\Delta^0$, associated with the spaces $S^1_{k_1}(\Delta^0)$ and $S^3_{k_3}(\Delta^0)$ respectively, are exactly the same. In the main theorem of this section we prove that the corresponding iterated discrete Petrov–Galerkin approximation has a superconvergence property when the kernels are smooth. In particular, we assume that the kernel $K = K_1$ and $K_2 = 1$ in the notation of the last subsection. We first establish a technical lemma.

Lemma 3.39 Let $x \in L^\infty(\Omega)$ and $K_1 \in C(\Omega) \times W^{k_3,\infty}(\Omega)$. If $\Delta_1 = \Delta_3$, then there exists a positive constant $c$ such that for all $n$,

$$\|(\mathcal{K} - \mathcal{K}_n)\mathcal{Q}_n x\|_\infty \le c h^{k_3}.$$

Proof Since $\mathcal{Q}_n x$ is not a continuous function, Proposition 3.37(iii) does not apply to this case. However, it follows from the proof of Proposition 3.37(ii) that

$$|(\mathcal{K}_n\mathcal{Q}_n x)(s) - (\mathcal{K}\mathcal{Q}_n x)(s)| \le c\|r_s\|_\infty,$$

where

$$r_s(t) = (\mathcal{Z}_n(K_1(s,\cdot)(\mathcal{Q}_n x)(\cdot)))(t) - K_1(s,t)(\mathcal{Q}_n x)(t).$$

Hence it suffices to estimate $r_s(t)$.

Using the definition of the projection $\mathcal{Q}_n$, we write

$$(\mathcal{Q}_n x)(t) = \sum_{i \in \mathbb{N}_{N_n}}\sum_{j \in \mathbb{N}_\mu} c_{ij}\phi_{ij}(t), \quad t \in \Omega, \tag{3.99}$$

where $\phi_{ij}$ are the basis functions for $\mathbb{X}_n$ given in Section 3.5.3, and the coefficients $c_{ij}$ satisfy the linear system (3.89). Consequently, we have that

$$(\mathcal{Z}_n(K_1(s,\cdot)(\mathcal{Q}_n x)(\cdot)))(t) = \sum_{i \in \mathbb{N}_{N_n}}\sum_{j \in \mathbb{N}_\mu} c_{ij}(\mathcal{Z}_n(K_1(s,\cdot)\phi_{ij}(\cdot)))(t). \tag{3.100}$$


By the construction of the functions $\phi_{ij}$, we have that $\phi_{ij}(t_{i'j'}) = 0$ if $i \neq i'$. Thus it follows that

$$(\mathcal{Z}_n(K_1(s,\cdot)\phi_{ij}(\cdot)))(t) = \sum_{i' \in \mathbb{N}_{N_n}}\sum_{j' \in \mathbb{N}_\gamma} K_1(s, t_{i'j'})\phi_{ij}(t_{i'j'})\zeta_{i'j'}(t) = \sum_{j' \in \mathbb{N}_\gamma} K_1(s, t_{ij'})\phi_{ij}(t_{ij'})\zeta_{ij'}(t).$$

Substituting this equation into (3.100) yields

$$(\mathcal{Z}_n(K_1(s,\cdot)(\mathcal{Q}_n x)(\cdot)))(t) = \sum_{i \in \mathbb{N}_{N_n}}\sum_{j \in \mathbb{N}_\mu} c_{ij}\sum_{j' \in \mathbb{N}_\gamma} K_1(s, t_{ij'})\phi_{ij}(t_{ij'})\zeta_{ij'}(t), \quad t \in \Omega. \tag{3.101}$$

We now assume that $\|r_s\|_\infty = |r_s(t)|$ for some point $t \in \Delta_{i'}$. For this point $t$ there exists a point $\tau$ in the reference element $\Delta^0$ such that $t = F_{i'}(\tau)$. Hence

$$\|r_s\|_\infty = \left|\sum_{j \in \mathbb{N}_\mu} c_{i'j}\left[\sum_{j' \in \mathbb{N}_\gamma} K_1(s, F_{i'}(t_{j'}))\phi_j(t_{j'})\zeta_{j'}(\tau) - K_1(s, F_{i'}(\tau))\phi_j(\tau)\right]\right|.$$

Because $\Delta^0 = \cup_{i \in \mathbb{N}_{\nu_3}} e_i$, the point $\tau$ must be in some $e_i$. For each integer $j' \in \mathbb{N}_\gamma$, let $i_0$ and $j_0$, with $i_0 \in \mathbb{N}_{\nu_3}$ and $j_0 \in \mathbb{N}_m$, be the positive integers such that $(i_0 - 1)m + j_0 = j'$. Therefore we have

$$\zeta_{j'}(t) = \begin{cases} p_{i_0 j_0}(t), & t \in e_{i_0}, \\ 0, & t \notin e_{i_0}, \end{cases} \quad \text{and} \quad t_{j'} = \tau_{i_0 j_0},$$

so that

$$\|r_s\|_\infty = \left|\sum_{j \in \mathbb{N}_\mu} c_{i'j}\left[\sum_{i_0 \in \mathbb{N}_{\nu_3}}\sum_{j_0 \in \mathbb{N}_m} K_1(s, F_{i'}(\tau_{i_0 j_0}))\phi_j(\tau_{i_0 j_0})p_{i_0 j_0}(\tau) - K_1(s, F_{i'}(\tau))\phi_j(\tau)\right]\right|$$

$$= \left|\sum_{j \in \mathbb{N}_\mu} c_{i'j}\left[\sum_{j_0 \in \mathbb{N}_m} K_1(s, F_{i'}(\tau_{i j_0}))\phi_j(\tau_{i j_0})p_{i j_0}(\tau) - K_1(s, F_{i'}(\tau))\phi_j(\tau)\right]\right|.$$

We recognize that the quantity in the brackets of the last expression is the error of polynomial interpolation of the function $K_1(s, F_{i'}(\tau))\phi_j(\tau)$ on $e_i$, which we call the error term on $e_i$. Since $\Delta_1 = \Delta_3$, the function $K_1(s, F_{i'}(\tau))\phi_j(\tau)$, as a function of $\tau$, is in the space $W^{k_3,\infty}(e_i)$. We conclude that the error term on $e_i$ is bounded by a constant times the norm of the $k_3$th-order derivatives of $K_1(s, F_{i'}(\cdot))\phi_j(\cdot)$; the latter is bounded by a constant times $|\det(B_{i'})|^{k_3} \le c h^{k_3}$. Hence we obtain

$$\|r_s\|_\infty \le c\|c\|_\infty h^{k_3}.$$


By the proof of Proposition 3.36 we know that $\|c\|_\infty$ is bounded by a constant independent of $n$. Therefore we have $\|r_s\|_\infty \le c h^{k_3}$.

We are now ready to establish the main result of this subsection, concerning the superconvergence of the iterated solution.

Theorem 3.40 If $\beta = \min\{k_1 + k_2, k_3\}$, $u \in W^{\beta,\infty}(\Omega)$ and $K \in C(\Omega) \times W^{k_3,\infty}(\Omega)$, then there exists a constant $c > 0$ such that for all $n$,

$$\|u - u_n'\|_\infty \le c h^\beta.$$

Proof It follows from Theorem 2.54 that

$$\|u - u_n'\|_\infty \le c\big(\|(\mathcal{K} - \mathcal{K}_n)\mathcal{Q}_n u\|_\infty + \|\mathcal{K}(\mathcal{I} - \mathcal{Q}_n)u\|_\infty\big). \tag{3.102}$$

Because $\Delta_1 = \Delta_3$, by applying Lemma 3.39 we have that

$$\|(\mathcal{K} - \mathcal{K}_n)\mathcal{Q}_n u\|_\infty \le c h^{k_3}. \tag{3.103}$$

Moreover, since $K(s,\cdot) \in W^{k_3,\infty}(\Omega)$ and $\Delta_1 = \Delta_3$, we conclude that

$$\|\mathcal{K}(u - \mathcal{Q}_n u)\|_\infty \le \|(K(s,t), u(t) - (\mathcal{Q}_n u)(t))_n\|_\infty + c h^{k_3}. \tag{3.104}$$

It remains to establish an upper bound for $\|(K(s,t), u(t) - (\mathcal{Q}_n u)(t))_n\|_\infty$. For this purpose we note that for any $y \in \mathbb{Y}_n$,

$$(y, u - \mathcal{Q}_n u)_n = 0$$

holds. It follows that

$$|(K(s,t), u(t) - (\mathcal{Q}_n u)(t))_n| = |(K(s,t) - y(t), u(t) - (\mathcal{Q}_n u)(t))_n| \le \inf\{\|K(s,\cdot) - y\|_\infty : y \in \mathbb{Y}_n\}\,\|u - \mathcal{Q}_n u\|_\infty.$$

This implies that

$$\|(K(s,t), u(t) - (\mathcal{Q}_n u)(t))_n\|_\infty \le c h^{k_2}\,h^{k_1} = c h^{k_1 + k_2}. \tag{3.105}$$

Combining inequalities (3.102)–(3.105), we establish the estimate of this theorem.

We remark that when $k_1 < k_3 < k_1 + k_2$, the optimal order of convergence of $u_n$ is $O(h^{k_1})$, while the iterated solution $u_n'$ has order of convergence $O(h^{k_3})$. This phenomenon is called superconvergence.


3.5.6 Numerical examples

In this subsection we present two numerical examples to illustrate the theoretical estimates obtained in the previous subsections. The kernel in the first example is weakly singular, while the kernel in the second example is smooth; the second example is presented to show the superconvergence property of the iterated solution. We restrict ourselves to simple one-dimensional equations whose exact solutions are known.

In both examples we use piecewise linear functions and piecewise constant functions for the spaces $\mathbb{X}_n$ and $\mathbb{Y}_n$, respectively. Specifically, we define the trial space by

$$\mathbb{X}_n = \mathrm{span}\{\phi_j : j \in \mathbb{N}_{2n}\},$$

where

$$\phi_{2j+1}(t) = \begin{cases} nt - j, & \frac{j}{n} \le t \le \frac{j+1}{n}, \\ 0, & \text{otherwise}, \end{cases} \quad j \in \mathbb{Z}_n,$$

and

$$\phi_{2j+2}(t) = \begin{cases} j + 1 - nt, & \frac{j}{n} \le t \le \frac{j+1}{n}, \\ 0, & \text{otherwise}, \end{cases} \quad j \in \mathbb{Z}_n.$$

The test space is then defined by

$$\mathbb{Y}_n = \mathrm{span}\{\psi_j : j \in \mathbb{N}_{2n}\},$$

where

$$\psi_j(t) = \begin{cases} 1, & \frac{j-1}{2n} \le t \le \frac{j}{2n}, \\ 0, & \text{otherwise}, \end{cases} \quad j \in \mathbb{N}_{2n}.$$
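A direct transcription of these basis functions (the helper names `phi` and `psi` are ours, not the text's):

```python
# Piecewise linear trial functions and piecewise constant test functions on
# [0, 1]; n is the number of coarse cells [j/n, (j+1)/n].
def phi(n, k, t):
    """Trial basis: k = 2j+1 gives nt - j, k = 2j+2 gives j+1 - nt on [j/n, (j+1)/n]."""
    j, r = divmod(k - 1, 2)
    if not (j / n <= t <= (j + 1) / n):
        return 0.0
    return n * t - j if r == 0 else j + 1 - n * t

def psi(n, j, t):
    """Test basis: indicator of [(j-1)/(2n), j/(2n)]."""
    return 1.0 if (j - 1) / (2 * n) <= t <= j / (2 * n) else 0.0

# The two hats living on a cell sum to 1 there, so X_n contains the constants.
n = 4
t = 0.15   # lies in cell j = 0, i.e. [0, 0.25]
assert abs(phi(n, 1, t) + phi(n, 2, t) - 1.0) < 1e-12
```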

Example 1 Consider the integral equation with a weakly singular kernel

$$u(s) - \int_0^\pi \log|\cos s - \cos t|\,u(t)\,dt = 1, \quad 0 \le s \le \pi.$$

This equation is a reformulation of a third boundary value problem for the two-dimensional Laplace equation, and it has the exact solution

$$u(s) = \frac{1}{1 + \pi\log 2}.$$

See [12] for more details on this example. By the changes of variable $t = \pi t'$, $s = \pi s'$ we obtain the equivalent equation

$$u(\pi s) - \pi\int_0^1 \log|\cos(\pi s) - \cos(\pi t)|\,u(\pi t)\,dt = 1, \quad 0 \le s \le 1.$$


We write the kernel as

$$\log|\cos(\pi s) - \cos(\pi t)| = \sum_{i=1}^4 K_{i1}(s,t)K_{i2}(s,t),$$

where

$$K_{11}(s,t) = \log\left(\frac{\sin\big(\frac{\pi(t-s)}{2}\big)\sin\big(\frac{\pi(t+s)}{2}\big)}{\frac{\pi^3(t-s)}{2}(t+s)(2-t-s)}\right),$$

$$K_{12}(s,t) = K_{21}(s,t) = K_{31}(s,t) = K_{41}(s,t) = 1,$$

$$K_{22}(s,t) = \log|\pi(s-t)|, \quad K_{32}(s,t) = \log(\pi(2-s-t)),$$

and

$$K_{42}(s,t) = \log(\pi(s+t)).$$

In Table 3.1 we present the error $e_n$ of the approximate solution and the error $e_n'$ of the iterated approximate solution, where we use $q$ and $q'$ to denote the corresponding orders of approximation. In our computation we choose $k_3 = 2$.

The order of approximation agrees with our theoretical estimate. The iteration does not improve the accuracy of the approximate solution for this example, owing to the nonsmoothness of the kernel.

Example 2 We consider the integral equation with a smooth kernel

$$u(s) - \int_0^1 \sin s\,\cos t\,u(t)\,dt = \sin s\,(1 - e^{\sin 1}) + e^{\sin s}, \quad 0 \le s \le 1.$$

It is not difficult to verify that $u(s) = e^{\sin s}$ is the unique solution of this equation. In the notation of Section 3.5.3 we have $K_1(s,t) = \sin s\cos t$ and $K_2(s,t) = 1$. In this case we choose $k_3 = 3$ for the quadrature formula. The notation in Table 3.2 is the same as that in Table 3.1.

In this example the iteration improves the accuracy of the approximation by the order estimated in Theorem 3.40.

Table 3.1

n        4             8             16            32
e_n      1.504077e-6   3.879971e-7   9.877713e-8   2.957718e-8
q                      1.954761      1.973797      1.739639
e'_n     3.186220e-6   8.005914e-7   1.973337e-7   5.153006e-8
q'                     1.992708      2.020429      1.93715


Table 3.2

n        4             8             16            32
e_n      1.68156e-2    4.10275e-3    1.01615e-3    3.00353e-4
q                      2.035137      2.013478      1.75839
e'_n     6.78911e-5    4.16056e-6    2.58946e-7    1.61679e-8
q'                     4.028373      4.006054      4.001447
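As a sanity check on Example 2 (independent of the tables), one can verify numerically that $u(s) = e^{\sin s}$ satisfies the equation. The sketch below uses a plain composite Simpson rule; it is an illustration, not the Petrov–Galerkin solver of the text:

```python
import math

# Check that u(s) = exp(sin s) solves
#   u(s) - int_0^1 sin(s)cos(t) u(t) dt = sin(s)(1 - e^{sin 1}) + e^{sin s}.
def simpson(f, a, b, m=200):        # composite Simpson rule, m even
    h = (b - a) / m
    s = f(a) + f(b)
    for i in range(1, m):
        s += (4 if i % 2 else 2) * f(a + i * h)
    return s * h / 3

u = lambda t: math.exp(math.sin(t))
for s in (0.1, 0.5, 0.9):
    lhs = u(s) - simpson(lambda t: math.sin(s) * math.cos(t) * u(t), 0.0, 1.0)
    rhs = math.sin(s) * (1 - math.exp(math.sin(1))) + math.exp(math.sin(s))
    assert abs(lhs - rhs) < 1e-8    # residual is at quadrature-error level
```

The integral $\int_0^1 \cos t\, e^{\sin t}\,dt = e^{\sin 1} - 1$ can also be evaluated in closed form, which is what makes the right-hand side above exact.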

3.6 Bibliographical remarks

The material presented in this chapter on conventional numerical methods for solving Fredholm integral equations of the second kind is mainly taken from the books [15, 177, 178]. Analysis of the quadrature method may be found in [6]. For related issues of the collocation method, such as the evaluation of an $L^\infty$ function $f$ at a given point, the reader is referred to [21]; multivariate Lagrange interpolation by polynomials may be found in [83] and the references cited therein. For the theoretical framework for the analysis of the Petrov–Galerkin method, readers are referred to [64, 77]. Superconvergence of the iterated Petrov–Galerkin method was originally analyzed in [77]. The discrete Petrov–Galerkin method and its iterated scheme presented in Section 3.5.3 are taken from [68, 80]. The iterated Galerkin method, a special case of the iterated Petrov–Galerkin method for Fredholm integral equations of the second kind, was studied by many authors (see [23, 165, 246] and the references cited therein). Reference [241] gives a nice review of the iterated Galerkin method and the iterated collocation method.

We would like to mention other developments on this topic not included in this book. Boundary integral equations of the second kind with periodic logarithmic kernels were solved by a Nyström-scheme-based extrapolation method in [271], where asymptotic expansions for the approximate solutions obtained by the Nyström scheme were developed to analyze the extrapolation method. The generalized airfoil equation for an airfoil with a flap was solved numerically in [204]. In [49] it was shown that the dense coefficient matrix obtained from a quadrature rule for boundary integral equations with logarithmic kernels can be replaced by a sparse one if appropriate graded meshes are used in the quadrature rules. A fast numerical method was developed in [266] for solving the two-dimensional Fredholm integral equation of the second kind. More information about the Galerkin method using the Fourier basis for solving boundary integral equations may be found in [25, 81]; fast numerical algorithms for this method were developed recently in [37, 154, 155, 263]. A singularity-preserving Galerkin method was developed in [41] for the


Fredholm integral equation of the second kind with weakly singular kernels whose solutions have a singularity. The method was extended in [38, 229] to solve the Volterra integral equation of the second kind with weakly singular kernels, and was also used in [111, 112] to solve fractional differential equations.

A singularity-preserving collocation method for solving the Fredholm integral equation of the second kind with weakly singular kernels was developed in [39]. In [16] a discretized Galerkin method was obtained by using numerical integration for the evaluation of the integrals occurring in the Galerkin method, and in [23], by considering a discrete inner product and discrete projections, the authors treated more appropriately kernels with discontinuous derivatives. A discrete convergence theory and its applications to the numerical solution of weakly singular integral equations were presented in [256]. Finally, we remark that superconvergence of the iterated Galerkin method when the kernels are weakly singular may be obtained by an analysis similar to that provided in [165].


4

Multiscale basis functions

Since a large class of physical problems is defined on bounded domains, we focus on integral equations on bounded domains. As we know, a bounded domain in $\mathbb{R}^d$ may be well approximated by a polygonal domain, which is a union of simplices, cubes and perhaps L-shaped domains. To develop fast Galerkin, Petrov–Galerkin and collocation methods for solving the integral equations, we need multiscale bases and collocation functionals on polygonal domains. Simplices, cubes and L-shaped domains are typical examples of invariant sets. This chapter is devoted to a description of constructions of multiscale basis functions, including multiscale orthogonal bases, interpolating bases and multiscale collocation functionals. The multiscale basis functions that we construct here are discontinuous piecewise polynomials. For this reason we describe their construction on invariant sets; these constructions can then be turned into bases on a polygon.

To illustrate the idea of the construction we start with examples on $[0,1]$, which is the simplest example of an invariant set. This will be done in Section 4.1. Constructions of multiscale basis functions and collocation functionals on invariant sets are based on self-similar partitions of the sets; hence we discuss such partitions in Section 4.2. Based on such self-similar partitions, we describe constructions of multiscale orthogonal bases in Section 4.3. For the construction of the multiscale interpolating basis we require the availability of the multiscale interpolation points. Section 4.4 is devoted to the notion of refinable sets, which are a basis for the construction of the multiscale interpolation points. Finally, in Section 4.5 we present the construction of multiscale interpolating bases.


4.1 Multiscale functions on the unit interval

This section serves as an illustration of the idea behind the construction of orthogonal multiscale piecewise polynomial bases on an invariant set. We consider the simplest invariant set, $E = [0,1]$, in this section. The essential aspect of this construction is the recursive generation of partitions of $E$ and of the multiscale bases based on these partitions.

Let $L^2(E)$ denote the Hilbert space equipped with the inner product

$$(u, v) = \int_E u(t)v(t)\,dt, \quad u, v \in L^2(E),$$

and the induced norm $\|\cdot\| = \sqrt{(\cdot,\cdot)}$. We now describe a sequence of finite-dimensional subspaces of $L^2(E)$. For two positive integers $k$ and $m$, we let $S^k_m$ denote the linear space of all functions which are polynomials of degree at most $k - 1$ on each of the intervals

$$I_{m\ell} = \left[\frac{\ell}{m}, \frac{\ell + 1}{m}\right], \quad \ell \in \mathbb{Z}_m.$$

The functions in $S^k_m$ are allowed to be discontinuous at the knots $j/m$, $j \in \mathbb{N}_{m-1}$. Hence the dimension of the space $S^k_m$ is $km$. When $m'$ divides $m$, that is, $m = \ell m'$ for some positive integer $\ell$, then

$$S^k_{m'} \subseteq S^k_m,$$

since the knot sequence $\{\ell/m' : \ell \in \mathbb{Z}_{m'}\}$ for the space $S^k_{m'}$ is contained in the sequence $\{\ell/m : \ell \in \mathbb{Z}_m\}$ for the space $S^k_m$. In particular, for $m = 2^n$ we have that

$$S^k_1 \subseteq S^k_2 \subseteq \cdots \subseteq S^k_{2^n}. \tag{4.1}$$

In this context we reinterpret the unit interval and its partition. Recall that the unit interval is the invariant set with respect to the maps
$$\phi_\varepsilon(t) = \frac{\varepsilon + t}{2}, \quad t \in E, \ \varepsilon \in \mathbb{Z}_2,$$
in the sense that
$$E = \phi_0(E) \cup \phi_1(E) \quad \text{and} \quad \mathrm{meas}(\phi_0(E) \cap \phi_1(E)) = 0,$$
where $\mathrm{meas}(A)$ denotes the Lebesgue measure of the set $A$. Note that the maps $\phi_0$ and $\phi_1$ are contractive, and they map $E$ onto $[0,1/2]$ and $[1/2,1]$, respectively. The partition $\{I_{2^k,\iota} : \iota \in \mathbb{Z}_{2^k}\}$ of $E$ can be re-expressed in terms of the contractive maps $\phi_\varepsilon$, $\varepsilon \in \mathbb{Z}_2$, as
$$\{\phi_{\varepsilon_1} \circ \cdots \circ \phi_{\varepsilon_k}(E) : \varepsilon_j \in \mathbb{Z}_2\}.$$



Figure 4.1 $\phi_\varepsilon$ and $T_\varepsilon$: a function $f$ on $[0,1]$ and its images $T_0 f$ (supported on $\phi_0(E)$) and $T_1 f$ (supported on $\phi_1(E)$).

Associated with the contractive maps $\phi_\varepsilon$, $\varepsilon \in \mathbb{Z}_2$, we introduce two mutually orthogonal isometries on $L^2(E)$ that will be used to recursively generate bases for the spaces in the chain (4.1). For each $\varepsilon \in \mathbb{Z}_2$ we set $E_\varepsilon = [\varepsilon/2, (\varepsilon+1)/2]$ and define the isometry $T_\varepsilon$ by setting, for $f \in L^2(E)$,
$$T_\varepsilon f = \sqrt{2}\,(f \circ \phi_\varepsilon^{-1})\,\chi_{E_\varepsilon} = \begin{cases} \sqrt{2}\, f(2t - \varepsilon), & t \in E_\varepsilon, \\ 0, & t \notin E_\varepsilon, \end{cases} \tag{4.2}$$

where $\chi_A$ denotes the characteristic function of the set $A$. Figure 4.1 illustrates the results of applications of the operators $T_\varepsilon$ to a function.

For each $\varepsilon \in \mathbb{Z}_2$ we write $T^*_\varepsilon$ for the adjoint operator of $T_\varepsilon$. We have the following result concerning the adjoint operator $T^*_\varepsilon$.

Proposition 4.1 (1) If $f \in L^2(E)$, then
$$T^*_\varepsilon f = \frac{\sqrt{2}}{2}\, f \circ \phi_\varepsilon.$$
(2) For any $\varepsilon, \varepsilon' \in \mathbb{Z}_2$,
$$T^*_\varepsilon T_{\varepsilon'} = \delta_{\varepsilon,\varepsilon'} I. \tag{4.3}$$
(3) For any $\varepsilon, \varepsilon' \in \mathbb{Z}_2$ and $f, g \in L^2(E)$,
$$(T_\varepsilon f, T_{\varepsilon'} g) = \delta_{\varepsilon,\varepsilon'} (f, g).$$

Proof (1) For $f, g \in L^2(E)$, by the definition of $T_\varepsilon$, we have that
$$\int_E g(x)(T_\varepsilon f)(x)\,dx = \sqrt{2} \int_{E_\varepsilon} g(x)(f \circ \phi_\varepsilon^{-1})(x)\,dx.$$



We make the change of variables $t = \phi_\varepsilon^{-1}(x)$ to conclude that
$$\int_E g(x)(T_\varepsilon f)(x)\,dx = \frac{\sqrt{2}}{2} \int_E (g \circ \phi_\varepsilon)(t) f(t)\,dt.$$
The formula for $T^*_\varepsilon g$ follows.

(2) For $f \in L^2(E)$, by part (1) of this proposition, we observe that
$$(T^*_{\varepsilon'} T_\varepsilon f)(x) = \frac{\sqrt{2}}{2}(T_\varepsilon f)(\phi_{\varepsilon'}(x)).$$
By the definition of the operator $T_\varepsilon$, if $\varepsilon' \neq \varepsilon$ then $T^*_{\varepsilon'} T_\varepsilon = 0$, and if $\varepsilon' = \varepsilon$ then $T^*_{\varepsilon'} T_\varepsilon = I$.

(3) The formula in this part follows directly from (4.3).
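As a quick numerical sanity check of Proposition 4.1, the following Python sketch (my own illustration, not from the text) implements $T_\varepsilon$ and approximates the inner products by a midpoint rule; the grid size $N$ and the test functions are arbitrary choices.

```python
import numpy as np

N = 20000
t = (np.arange(N) + 0.5) / N                # midpoint grid; no node at t = 1/2
ip = lambda f, g: np.mean(f(t) * g(t))      # approximates the inner product (f, g)

def T(eps, f):
    """(T_eps f)(t) = sqrt(2) f(2t - eps) on [eps/2, (eps+1)/2), zero elsewhere."""
    return lambda s: np.where((s >= eps / 2) & (s < (eps + 1) / 2),
                              np.sqrt(2) * f(2 * s - eps), 0.0)

f = lambda s: s ** 2                        # two arbitrary test functions
g = lambda s: np.cos(3 * s)

# Proposition 4.1(3): (T_eps f, T_eps' g) = delta_{eps,eps'} (f, g)
d01 = ip(T(0, f), T(1, g))                  # disjoint supports
d00 = ip(T(0, f), T(0, g)) - ip(f, g)       # T_eps preserves inner products

# Proposition 4.1(1): (T_0 f, g) = (f, T*_0 g) with (T*_0 g)(t) = (sqrt(2)/2) g(t/2)
adj = ip(T(0, f), g) - np.mean(f(t) * (np.sqrt(2) / 2) * g(t / 2))
print(d01, d00, adj)                        # all close to zero
```

The midpoint rule is exact enough here because all integrands are smooth away from $t = 1/2$, which is a cell boundary of the grid.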

It is clear from their definitions that the operators $T_\varepsilon$, $\varepsilon \in \mathbb{Z}_2$, preserve the linear independence of a set of functions in $L^2(E)$. Moreover, it follows from Proposition 4.1(2) that functions resulting from applications of the operators $T_\varepsilon$ with different $\varepsilon$ are orthogonal. We next show how to use the operators $T_\varepsilon$, $\varepsilon \in \mathbb{Z}_2$, to generate recursively the bases for the spaces
$$X_n = S^k_{2^n}, \quad n \in \mathbb{N}_0.$$
To this end, when $S_1$ and $S_2$ are subsets of $L^2(E)$ such that $(u,v) = 0$ for all $u \in S_1$, $v \in S_2$, we introduce the notation $S_1 \cup^\perp S_2$, which denotes the union of $S_1$ and $S_2$.

Proposition 4.2 If $\mathbb{X}_0$ is an orthonormal basis for $S^k_1$, then
$$\mathbb{X}_n = \bigcup^\perp_{\varepsilon \in \mathbb{Z}_2} T_\varepsilon \mathbb{X}_{n-1}, \quad n \in \mathbb{N}, \tag{4.4}$$
is an orthonormal basis for $S^k_{2^n}$.

Proof We proceed by induction on $n$. Suppose that $f_j$, $j \in \mathbb{N}_{k2^{n-1}}$, form an orthonormal basis for $X_{n-1}$. By Proposition 4.1, the functions $T_\varepsilon f_j$, $j \in \mathbb{N}_{k2^{n-1}}$, $\varepsilon \in \mathbb{Z}_2$, are also orthonormal. It can be shown that these $k2^n$ functions are elements of $X_n$. Moreover, $\dim X_n = k2^n$, which equals the number of these elements. Therefore, they form an orthonormal basis for the space $X_n$.

We now turn to the construction of our multiscale basis for the space $X_n$. Recalling $X_{n-1} \subseteq X_n$, we have that
$$X_n = X_{n-1} \oplus^\perp W_n,$$



where $W_n$ is the orthogonal complement of $X_{n-1}$ in $X_n$. Since the dimension of $X_n$ is $k2^n$, the dimension of $W_n$ is given by
$$\dim W_n = k2^{n-1}.$$
Repeating this process leads to the decomposition
$$X_n = X_0 \oplus^\perp W_1 \oplus^\perp \cdots \oplus^\perp W_n \tag{4.5}$$

for the space $X_n$. In order to construct a multiscale orthonormal basis, it suffices to construct an orthonormal basis $\mathbb{W}_j$ for the space $W_j$ for each $j \in \mathbb{N}_n$. We first choose the Legendre polynomials of degree $\leq k-1$ on $E$ as an orthonormal basis for $X_0 = S^k_1$, and denote this basis by $\mathbb{X}_0$. We then use Proposition 4.2 to construct an orthonormal basis $\mathbb{X}_1$ for the space $X_1$, that is,
$$\mathbb{X}_1 = \bigcup^\perp_{\varepsilon \in \mathbb{Z}_2} T_\varepsilon \mathbb{X}_0.$$
Since both $X_0$ and $X_1$ are finite-dimensional, we can use the Gram–Schmidt process to find an orthonormal basis $\mathbb{W}_1$ for $W_1$. Specifically, we form linear combinations of the basis functions in $\mathbb{X}_1$ and require them to be orthogonal to all elements of $\mathbb{X}_0$. This gives us $k$ linearly independent elements which are orthogonal to $\mathbb{X}_0$. We then orthonormalize these $k$ functions, and they serve as an orthonormal basis for $W_1$. For the construction of the basis $\mathbb{W}_j$ when $j \geq 2$, we appeal to the following proposition.
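The Gram–Schmidt construction of $\mathbb{W}_1$ just described can be sketched numerically. The following Python fragment (an illustration of mine, under the assumption that a midpoint-rule inner product on a fine grid is accurate enough) carries it out for $k = 2$.

```python
import numpy as np

N = 20000
t = (np.arange(N) + 0.5) / N                    # midpoint grid on [0, 1]
ip = lambda u, v: float(np.mean(u * v))         # discrete L2(0,1) inner product

# X0: the orthonormal (shifted Legendre) basis of S^2_1, sampled on the grid
U = [np.ones_like(t), np.sqrt(3) * (2 * t - 1)]

def T(eps, f):
    """T_eps of (4.2), acting on a callable f."""
    return lambda s: np.where((s >= eps / 2) & (s < (eps + 1) / 2),
                              np.sqrt(2) * f(2 * s - eps), 0.0)

# X1 = T_0 X0 cup T_1 X0: an orthonormal basis of X1 (Proposition 4.2)
X0_fns = [lambda s: np.ones_like(s), lambda s: np.sqrt(3) * (2 * s - 1)]
X1 = [T(eps, f)(t) for eps in (0, 1) for f in X0_fns]

# Gram-Schmidt: remove the X0-components of each element of X1, then
# orthonormalize what is left; the two surviving directions span W1.
W1 = []
for v in X1:
    for u in U + W1:
        v = v - ip(v, u) * u
    norm = np.sqrt(ip(v, v))
    if norm > 1e-6:                             # discard numerically zero residues
        W1.append(v / norm)

print(len(W1))                                  # dim W1 = k = 2
```

The resulting pair is an orthonormal basis of $W_1$, though not necessarily the particular one listed later in this section (any orthogonal change of basis of $W_1$ is equally valid).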

Proposition 4.3 If $\mathbb{W}_1$ is given as an orthonormal basis for $W_1$, then
$$\mathbb{W}_{n+1} = \bigcup^\perp_{\varepsilon \in \mathbb{Z}_2} T_\varepsilon \mathbb{W}_n, \quad n \in \mathbb{N}, \tag{4.6}$$
is an orthonormal basis for $W_{n+1}$.

Proof We prove that $\mathbb{W}_n$ is an orthonormal basis for $W_n$ by induction on $n$. When $n = 1$, $\mathbb{W}_1$ is an orthonormal basis for $W_1$ by hypothesis. Assuming that $\mathbb{W}_j$ is an orthonormal basis for $W_j$ for some $j \geq 1$, we show that $\mathbb{W}_{j+1}$ is an orthonormal basis for $W_{j+1}$.

Let $\mathbb{W} = T_0\mathbb{W}_j \cup T_1\mathbb{W}_j$. Since $\mathbb{W}_j \subseteq X_j$, by Proposition 4.2 we conclude that $\mathbb{W} \subseteq X_{j+1}$. Proposition 4.1, together with the induction hypothesis that $\mathbb{W}_j \perp X_{j-1}$, ensures that $\mathbb{W} \perp (T_0\mathbb{X}_{j-1} \cup T_1\mathbb{X}_{j-1})$, whose span is $X_j$; this implies that $\mathbb{W} \subseteq W_{j+1}$. Because the elements of $\mathbb{W}_j$ are orthonormal, by Proposition 4.1 the elements of $\mathbb{W}$ are also orthonormal. Moreover,
$$\mathrm{card}\,\mathbb{W} = \dim W_{j+1}$$
holds. Therefore, $\mathbb{W}$ is an orthonormal basis for $W_{j+1}$.



The proposition above gives a recursive generation of the multiscale basis functions for the spaces $W_n$, once orthonormal basis functions for $W_1$ are available. It is useful in what follows to index the functions in the wavelet bases for $X_n$ and to have clearly in mind the intervals of their "support". To this end, we set $W_0 = X_0$ and define
$$w(n) = \dim W_n \quad \text{and} \quad s(n) = \dim X_n, \quad n \in \mathbb{N}_0.$$
Thus, we have that
$$w(0) = k, \quad w(n) = k2^{n-1} \ \text{for } n \in \mathbb{N}, \quad \text{and} \quad s(n) = k2^n, \quad n \in \mathbb{N}_0.$$
For $i \in \mathbb{N}_0$ we write $\mathbb{W}_i = \{w_{ij} : j \in \mathbb{Z}_{w(i)}\}$, where we use double subscripts for the basis functions, with the first representing the level of the scale of the subspaces and the second indicating the location of the support. There are two properties of the functions in the set $\{w_{ij} : (i,j) \in U\}$, where $U = \{(i,j) : i \in \mathbb{N}_0, \ j \in \mathbb{Z}_{w(i)}\}$, which are important to us. The first is that they form a complete orthonormal system for the space $L^2(E)$. In particular, we have that
$$(w_{ij}, w_{i'j'}) = \delta_{ii'}\delta_{jj'}, \quad (i,j), (i',j') \in U.$$
Embodied in this fact is the useful property that the wavelet basis $\{w_{ij} : (i,j) \in U\}$ has vanishing moments of order $k$, that is,
$$((\cdot)^r, w_{ij}) = 0 \quad \text{for } r \in \mathbb{Z}_k, \ j \in \mathbb{Z}_{w(i)}, \ i \in \mathbb{N}.$$

The second property is the "shrinking support" (as the level $i$ increases) of the multiscale basis functions. To pin down this fact, we take the point of view that the $k$ functions in $\mathbb{W}_1$ have "support" on $E$. Thereafter, the wavelet basis at level $i$ is grouped into $2^{i-1}$ sets of $k$ functions, each set having the same "support interval". For future reference, we identify a set off which $w_{ij}$ vanishes. For this purpose, we write $j \in \mathbb{Z}_{w(i)}$, $i \in \mathbb{N}$, uniquely in the form $j = vk + l$, where $l \in \mathbb{Z}_k$ and $v \in \mathbb{Z}_{2^{i-1}}$. Then
$$w_{ij}(t) = 0, \quad t \notin I_{2^{i-1},v}, \ j = vk + l, \tag{4.7}$$
and therefore we have, for $j \in \mathbb{Z}_{w(i)}$, that
$$\mathrm{meas}(\mathrm{supp}\, w_{ij}) \leq \frac{1}{2^{i-1}}.$$
To see this fact clearly, we express $v$ in its dyadic expansion
$$v = 2^{i-2}\varepsilon_1 + \cdots + 2\varepsilon_{i-2} + \varepsilon_{i-1},$$



where $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_{i-1} \in \mathbb{Z}_2$. The recursion (4.6) then gives the formula
$$w_{ij} = T_{\varepsilon_1} \cdots T_{\varepsilon_{i-1}} w_{1l},$$
which confirms (4.7).

We end this section by presenting bases for the spaces $X_0$ and $W_1$ for four concrete examples: piecewise constant, linear, quadratic and cubic polynomials.

Piecewise constant functions This case leads to the Haar wavelet. We have a basis for $X_0$ given by
$$w_{00}(t) = 1, \quad t \in [0,1],$$
and a basis for $W_1$ given by (see also Figure 4.2)
$$w_{10}(t) = \begin{cases} 1, & t \in [0, \tfrac{1}{2}], \\ -1, & t \in (\tfrac{1}{2}, 1]. \end{cases}$$
We illustrate in Figure 4.2 the graphs of the functions $w_{00}$ and $w_{10}$.
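The recursion (4.6) is easy to simulate in this Haar case. The sketch below (an illustration of mine, with an arbitrary grid size) generates a level-$i$ wavelet by composing the operators $T_\varepsilon$ of (4.2) and exhibits its shrinking support (4.7).

```python
import numpy as np

N = 2 ** 12
t = (np.arange(N) + 0.5) / N

def T(eps, f):
    """T_eps of (4.2), acting on callables."""
    return lambda s: np.where((s >= eps / 2) & (s < (eps + 1) / 2),
                              np.sqrt(2) * f(2 * s - eps), 0.0)

w10 = lambda s: np.where(s <= 0.5, 1.0, -1.0)     # the Haar wavelet (k = 1)

def wavelet(i, v):
    """w_{i,v} = T_{eps_1} ... T_{eps_{i-1}} w_{1,0}, v = 2^{i-2} eps_1 + ... + eps_{i-1}."""
    f = w10
    for m in range(i - 1, 0, -1):                 # innermost factor acts first
        f = T((v >> (i - 1 - m)) & 1, f)
    return f

w = wavelet(3, 2)(t)              # level i = 3, v = 2: support is I_{4,2} = [2/4, 3/4]
print(np.mean(w ** 2))            # ~1: the T_eps are isometries
print(np.all(w[(t < 0.5) | (t > 0.75)] == 0))     # True: w vanishes off I_{4,2}
```

The unit norm is preserved by each $T_\varepsilon$, and the support halves with each application, in agreement with $\mathrm{meas}(\mathrm{supp}\,w_{ij}) \leq 2^{-(i-1)}$.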

Piecewise linear polynomials In this case, $k = 2$ and $\dim X_0 = \dim W_1 = 2$. We have an orthonormal basis for $X_0$ given by
$$w_{00}(t) = 1, \quad w_{01}(t) = \sqrt{3}(2t - 1),$$
and an orthonormal basis for $W_1$ given by
$$w_{10}(t) = \begin{cases} 1 - 6t, & t \in [0, \tfrac{1}{2}], \\ 5 - 6t, & t \in (\tfrac{1}{2}, 1], \end{cases} \qquad w_{11}(t) = \begin{cases} \sqrt{3}(1 - 4t), & t \in [0, \tfrac{1}{2}], \\ \sqrt{3}(4t - 3), & t \in (\tfrac{1}{2}, 1]. \end{cases}$$
We illustrate in Figure 4.3 the graphs of the functions $w_{00}$, $w_{01}$, $w_{10}$ and $w_{11}$.

Figure 4.2 Basis functions for piecewise constant functions.



Figure 4.3 Basis functions for piecewise linear polynomials.

Piecewise quadratic polynomials In this case, $k = 3$ and $\dim X_0 = \dim W_1 = 3$. An orthonormal basis for $X_0$ is given by
$$w_{00}(t) = 1, \quad w_{01}(t) = \sqrt{3}(2t - 1), \quad w_{02}(t) = \sqrt{5}(6t^2 - 6t + 1),$$
and an orthonormal basis for $W_1$ is given by
$$w_{10}(t) = \begin{cases} 1 - 6t, & t \in [0, \tfrac{1}{2}], \\ 5 - 6t, & t \in (\tfrac{1}{2}, 1], \end{cases}$$
$$w_{11}(t) = \begin{cases} \frac{\sqrt{93}}{31}(240t^2 - 116t + 9), & t \in [0, \tfrac{1}{2}], \\ \frac{\sqrt{93}}{31}(3 - 4t), & t \in (\tfrac{1}{2}, 1], \end{cases}$$
$$w_{12}(t) = \begin{cases} \frac{\sqrt{93}}{31}(4t - 1), & t \in [0, \tfrac{1}{2}], \\ \frac{\sqrt{93}}{31}(240t^2 - 364t + 133), & t \in (\tfrac{1}{2}, 1]. \end{cases}$$
In Figure 4.4 we illustrate the graphs of the bases for $X_0$ and $W_1$ in this case.



Figure 4.4 Basis functions for piecewise quadratic polynomials.

Piecewise cubic polynomials In this case we have that $k = 4$ and $\dim X_0 = \dim W_1 = 4$. An orthonormal basis for $X_0$ is given by
$$w_{00}(t) = 1, \quad w_{01}(t) = \sqrt{3}(2t - 1),$$
$$w_{02}(t) = \sqrt{5}(6t^2 - 6t + 1), \quad w_{03}(t) = \sqrt{7}(20t^3 - 30t^2 + 12t - 1),$$
and a basis for $W_1$ is given by
$$w_{10}(t) = \begin{cases} \frac{\sqrt{5}}{15}(240t^2 - 90t + 5), & t \in [0, \tfrac{1}{2}], \\ -\frac{\sqrt{5}}{15}(240t^2 - 390t + 155), & t \in (\tfrac{1}{2}, 1], \end{cases}$$
$$w_{11}(t) = \begin{cases} \sqrt{3}(30t^2 - 14t + 1), & t \in [0, \tfrac{1}{2}], \\ \sqrt{3}(30t^2 - 46t + 17), & t \in (\tfrac{1}{2}, 1], \end{cases}$$
$$w_{12}(t) = \begin{cases} \sqrt{7}(160t^3 - 120t^2 + 24t - 1), & t \in [0, \tfrac{1}{2}], \\ -\sqrt{7}(160t^3 - 360t^2 + 264t - 63), & t \in (\tfrac{1}{2}, 1], \end{cases}$$



Figure 4.5 Basis functions for piecewise cubic polynomials.

$$w_{13}(t) = \begin{cases} \frac{14\sqrt{29}}{29}\left(160t^3 - 120t^2 + \frac{165}{7}t - \frac{13}{14}\right), & t \in [0, \tfrac{1}{2}], \\ \frac{14\sqrt{29}}{29}\left(160t^3 - 360t^2 + \frac{1845}{7}t - \frac{877}{14}\right), & t \in (\tfrac{1}{2}, 1]. \end{cases}$$
The bases for $X_0$ and $W_1$ are shown, respectively, in Figures 4.5 and 4.6.

4.2 Multiscale partitions

Because a polygonal domain in $\mathbb{R}^d$ is a union of a finite number of invariant sets, in this section we focus on multiscale partitioning of an invariant set in $\mathbb{R}^d$.

4.2.1 Invariant sets

We introduce the notion of invariant sets following [148]. Let $M$ be a complete metric space. For any subset $A$ of $M$ and $x \in M$, we define the distance of $x$ to



Figure 4.6 Basis functions for piecewise cubic polynomials.

$A$ and the diameter of $A$, respectively, by
$$\mathrm{dist}(x, A) = \inf\{d(x, y) : y \in A\}$$
and
$$\mathrm{diam}(A) = \sup\{d(x, y) : x, y \in A\}.$$
A family of mappings $\phi_\varepsilon$, $\varepsilon \in \mathbb{Z}_\mu$, from $M$ to $M$ is called contractive if there exists a $\gamma \in (0,1)$ such that for all subsets $A$ of $M$,
$$\mathrm{diam}(\phi_\varepsilon(A)) \leq \gamma\, \mathrm{diam}(A), \quad \varepsilon \in \mathbb{Z}_\mu. \tag{4.8}$$

For a positive integer $\mu > 1$, we suppose that $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ is a family of contractive mappings on $M$. For a subset $A$ of $M$ we define
$$\Phi(A) = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \phi_\varepsilon(A).$$



According to [148], there exists a unique compact subset $E$ of $M$ such that
$$\Phi(E) = E. \tag{4.9}$$
We call the set $E$ the invariant set relative to the family $\Phi$ of contractive mappings.

Generally, an invariant set has a complex fractal structure. For example, there are choices of $\Phi$ for which $E$ is the Cantor subset of the interval $[0,1]$, the Sierpinski gasket contained in an equilateral triangle, or the twin dragons from wavelet analysis. In Figures 4.7 and 4.8 we illustrate the generation of the Cantor set of $[0,1]$ and the Sierpinski gasket, respectively.

In the context of numerical solutions of integral equations, we are interested in the cases when $E$ has a simple structure, including, for example, the cube and the simplex in $\mathbb{R}^d$. With these cases in mind, we make the following additional restrictions on the family $\Phi$ of mappings.

Figure 4.7 Generation of the Cantor set.

Figure 4.8 Generation of the Sierpinski gasket.



(a) For every $\varepsilon \in \mathbb{Z}_\mu$, the mapping $\phi_\varepsilon$ has a continuous inverse on $E$.
(b) The set $E$ has nonempty interior and
$$\mathrm{meas}(\phi_\varepsilon(E) \cap \phi_{\varepsilon'}(E)) = 0, \quad \varepsilon, \varepsilon' \in \mathbb{Z}_\mu, \ \varepsilon \neq \varepsilon'.$$

We present several simple examples of invariant sets

Example 4.4 For the metric space $\mathbb{R}$ and an integer $\mu > 1$, consider the family of contractive mappings $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$, where
$$\phi_\varepsilon(t) = \frac{t + \varepsilon}{\mu}, \quad t \in \mathbb{R}, \ \varepsilon \in \mathbb{Z}_\mu.$$
The unit interval $E = [0,1]$ is the invariant set relative to $\Phi$, which satisfies
$$E = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \phi_\varepsilon(E).$$

When $\mu = 2$, this example is discussed in Section 4.1. Figure 4.9 illustrates the case when $\mu = 3$. Note that in this case
$$\phi_0(E) = \left[0, \frac{1}{3}\right], \quad \phi_1(E) = \left[\frac{1}{3}, \frac{2}{3}\right], \quad \phi_2(E) = \left[\frac{2}{3}, 1\right],$$
and clearly
$$[0,1] = \phi_0(E) \cup \phi_1(E) \cup \phi_2(E).$$
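The self-similar structure of this example is easy to reproduce in code. The following sketch (an illustration of mine) composes the maps $\phi_\varepsilon$ for $\mu = 3$ and level $n = 2$ and confirms that the resulting cells tile $[0,1]$.

```python
from itertools import product

mu, n = 3, 2
phi = [lambda t, e=e: (t + e) / mu for e in range(mu)]   # phi_e(t) = (t + e)/mu

def cell(e):
    """The level-n cell phi_{e_0} o ... o phi_{e_{n-1}}([0, 1])."""
    lo, hi = 0.0, 1.0
    for eps in reversed(e):                  # innermost map acts first
        lo, hi = phi[eps](lo), phi[eps](hi)
    return lo, hi

cells = sorted(cell(e) for e in product(range(mu), repeat=n))
print(len(cells))                            # mu**n = 9 cells, each of length 1/9
```

The cells are pairwise non-overlapping except at endpoints, in line with condition (b) above.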

Example 4.5 In the metric space $\mathbb{R}^2$, we consider four contractive affine mappings
$$\phi_0(s,t) = \frac{1}{2}(s, t), \quad \phi_1(s,t) = \frac{1}{2}(s+1, t),$$
$$\phi_2(s,t) = \frac{1}{2}(s, t+1), \quad \phi_3(s,t) = \frac{1}{2}(1-s, 1-t), \quad (s,t) \in \mathbb{R}^2.$$
The invariant set $E$ relative to these mappings is the unit triangle with vertices at $(0,0)$, $(1,0)$ and $(0,1)$, since
$$E = \phi_0(E) \cup \phi_1(E) \cup \phi_2(E) \cup \phi_3(E).$$
This is illustrated in Figure 4.10.

Figure 4.9 The invariant set in Example 4.4 with $\mu = 3$.



Figure 4.10 The unit triangle as an invariant set.

Figure 4.11 The unit L-shaped domain as an invariant set.

Example 4.6 In the metric space $\mathbb{R}^2$, we consider four contractive affine mappings
$$\phi_0(s,t) = \frac{1}{2}(s, t), \quad \phi_1(s,t) = \frac{1}{2}(2-s, t),$$
$$\phi_2(s,t) = \frac{1}{2}(s, 2-t), \quad \phi_3(s,t) = \frac{1}{2}\left(s+\frac{1}{2},\, t+\frac{1}{2}\right), \quad (s,t) \in \mathbb{R}^2.$$
The invariant set relative to these mappings is the L-shaped domain illustrated in Figure 4.11.

Example 4.7 As the last example, in the metric space $\mathbb{R}^3$ we consider eight contractive affine mappings
$$\phi_0(x,y,z) = \frac{1}{2}(x, y, z), \quad \phi_1(x,y,z) = \frac{1}{2}(y, z, x+1),$$
$$\phi_2(x,y,z) = \frac{1}{2}(x, z, y+1), \quad \phi_3(x,y,z) = \frac{1}{2}(x, y, z+1),$$
$$\phi_4(x,y,z) = \frac{1}{2}(x, y+1, z+1), \quad \phi_5(x,y,z) = \frac{1}{2}(y, x+1, z+1),$$
$$\phi_6(x,y,z) = \frac{1}{2}(z, x+1, y+1), \quad \phi_7(x,y,z) = \frac{1}{2}(x+1, y+1, z+1).$$



Figure 4.12 A three-dimensional unit simplex as an invariant set: the simplex $S$ and its eight images $S_{(000)}$, $S_{(100)}$, $S_{(010)}$, $S_{(001)}$, $S_{(011)}$, $S_{(101)}$, $S_{(110)}$, $S_{(111)}$ under the mappings $\phi_0, \ldots, \phi_7$.

The invariant set relative to these eight mappings is the simplex in $\mathbb{R}^3$ defined by
$$S = \{(x, y, z) : 0 \leq x \leq y \leq z \leq 1\}.$$
This is illustrated in Figure 4.12.

4.2.2 Multiscale partitions by contractive mappings

The contractive mappings that define the invariant set naturally form a partition of the invariant set. Repeatedly applying the mappings to the invariant set generates a sequence of multiscale partitions for the invariant set.

We next show how the contractive mappings are used to generate a sequence of multiscale partitions $\{E_n : n \in \mathbb{N}_0\}$ of the invariant set $E$, which



is defined by $\Phi$. For notational convenience, we introduce the notation
$$\mathbb{Z}^n_\mu = \underbrace{\mathbb{Z}_\mu \times \cdots \times \mathbb{Z}_\mu}_{n \text{ times}}.$$
For each $e = [e_j : j \in \mathbb{Z}_n] \in \mathbb{Z}^n_\mu$, we define the composite mapping
$$\phi_e = \phi_{e_0} \circ \phi_{e_1} \circ \cdots \circ \phi_{e_{n-1}}$$
and the number
$$\mu(e) = \mu^{n-1} e_0 + \cdots + \mu e_{n-2} + e_{n-1}.$$
Note that every $i \in \mathbb{Z}_{\mu^n}$ can be written uniquely as $i = \mu(e)$ for some $e \in \mathbb{Z}^n_\mu$.
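A minimal sketch of the correspondence $e \mapsto \mu(e)$ (it is simply the base-$\mu$ evaluation of the digits of $e$, here computed in Horner form):

```python
from itertools import product

mu, n = 3, 4

def mu_of(e):
    """mu(e) = mu^{n-1} e_0 + ... + mu e_{n-2} + e_{n-1}, in Horner form."""
    i = 0
    for ej in e:
        i = mu * i + ej
    return i

ok = sorted(mu_of(e) for e in product(range(mu), repeat=n)) == list(range(mu ** n))
print(ok)       # True: e -> mu(e) is a bijection from Z_mu^n onto Z_{mu^n}
```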

From equation (4.9) and conditions (a) and (b), it follows that the collection of sets
$$E_n = \{E_{n,e} : E_{n,e} = \phi_e(E), \ e \in \mathbb{Z}^n_\mu\} \tag{4.10}$$
forms a partition of $E$. We require that this partition have the following property.

(c) There exist positive constants $c_-$, $c_+$ such that for all $n \in \mathbb{N}_0$,
$$c_- \mu^{-n/d} \leq \max\{d(E_{n,e}) : e \in \mathbb{Z}^n_\mu\} \leq c_+ \mu^{-n/d}, \tag{4.11}$$
where $d(A)$ represents the diameter of the set $A$, that is, $d(A) = \sup\{|x - y| : x, y \in A\}$, with $|\cdot|$ being the Euclidean norm in the space $\mathbb{R}^d$.

If a sequence of partitions $\{E_n : n \in \mathbb{N}_0\}$ has property (c), we say that it forms a sequence of multiscale partitions for $E$.

Proposition 4.8 If the Jacobians of the contractive affine mappings $\phi_e$, $e \in \mathbb{Z}_\mu$, satisfy
$$|J_{\phi_e}| = O(\mu^{-1}),$$
then the sequence of partitions $\{E_n : n \in \mathbb{N}_0\}$ is multiscale.

Proof For any $s, t \in \phi_e(E)$ there exist $\bar{s}, \bar{t} \in E$ such that $s = \phi_e(\bar{s})$ and $t = \phi_e(\bar{t})$, and thus we have that
$$|s - t| = |J_{\phi_e}|^{1/d}\, |\bar{s} - \bar{t}|.$$
This, with the hypothesis on the Jacobians of the mappings, ensures that for any $e \in \mathbb{Z}_\mu$,
$$d(E_{1,e}) = O(\mu^{-1/d}).$$
By induction we find that for any $e \in \mathbb{Z}^n_\mu$,
$$d(E_{n,e}) = O(\mu^{-n/d}), \tag{4.12}$$
proving the result.



4.2.3 Multiscale partitions of a multidimensional simplex

For the purpose of solving integral equations on a polygonal domain in $\mathbb{R}^d$, we describe in this subsection multiscale partitions of a simplex in $\mathbb{R}^d$, for $d \geq 1$. For a vector $x \in \mathbb{R}^d$ we write $x = [x_j : x_j \in \mathbb{R}, \ j \in \mathbb{Z}_d]$. The unit simplex $S$ in $\mathbb{R}^d$ is the subset
$$S = \{x : x \in \mathbb{R}^d, \ 0 \leq x_0 \leq x_1 \leq \cdots \leq x_{d-1} \leq 1\}.$$
This set is the invariant set relative to a family of $\mu^d$ contractive mappings. In order to describe these contractive mappings, for a positive integer $\mu$ we define a family of counting functions $\chi_j : \mathbb{Z}^d_\mu \to \mathbb{Z}_{d+1}$, $j \in \mathbb{Z}_\mu$, for $e = [e_j : j \in \mathbb{Z}_d] \in \mathbb{Z}^d_\mu$, by
$$\chi_j(e) = \sum_{i \in \mathbb{Z}_d} \delta_j(e_i), \tag{4.13}$$
where $\delta_j(k) = 1$ when $j = k$ and otherwise $\delta_j(k) = 0$. Note that the value of $\chi_j(e)$ is exactly the number of components of $e$ that equal $j$. Given $e \in \mathbb{Z}^d_\mu$, we identify a vector $c(e) = [c_j : j \in \mathbb{Z}_{\mu+1}] \in \mathbb{Z}^{\mu+1}_{d+1}$ by
$$c_0 = 0, \quad c_j = \sum_{i \in \mathbb{Z}_j} \chi_i(e), \quad j \in \mathbb{N}_\mu. \tag{4.14}$$

We remark that $c(e)$ is always nondecreasing, since each $\chi_j$ takes a nonnegative value, and $c_\mu$ is always equal to $d$. For $e \in \mathbb{Z}^d_\mu$ and $j < k$, we define the index set $\Lambda^k_j = \{l : j \leq l < k, \ e_l = e_k\}$. Then we define the permutation vector $I_e = [i_k : k \in \mathbb{Z}_d] \in \mathbb{Z}^d_d$ of $e$ by
$$i_k = c_{e_k} + \mathrm{card}(\Lambda^k_0), \tag{4.15}$$
where we assume $\mathrm{card}(\emptyset) = 0$. We have the following lemma about $I_e$.

Lemma 4.9 For any $e \in \mathbb{Z}^d_\mu$, the permutation vector $I_e$ has the following properties.

(1) For $k \in \mathbb{Z}_d$, $c_m \leq i_k < c_{m+1}$ if and only if $m = e_k$.
(2) For any $j, k \in \mathbb{Z}_d$, $i_j < i_k$ if and only if $e_j < e_k$, or $e_j = e_k$ with $j < k$.
(3) The equality $i_j = i_k$ holds if and only if $j = k$.
(4) The vector $I_e$ is a permutation of $v_d = [j : j \in \mathbb{Z}_d]$.

Proof According to the definition of $I_e$, we have for any $k \in \mathbb{Z}_d$ that

$$c_{e_k} \leq i_k < c_{e_k} + \mathrm{card}(\{j \in \mathbb{Z}_d : e_j = e_k\}) = c_{e_k + 1}. \tag{4.16}$$
This implies that if $m = e_k$, then $c_m \leq i_k < c_{m+1}$. Conversely, if there is an $m$ such that $c_m \leq i_k < c_{m+1}$, it is unique, because the components of $c(e)$ are



nondecreasing. It follows from the uniqueness of $m$ and (4.16) that $m = e_k$. Thus property (1) is proved.

We now turn to proving property (2). If $e_j < e_k$, then $e_j + 1 \leq e_k$ and hence $c_{e_j+1} \leq c_{e_k}$, since the components of $c(e)$ form a nondecreasing sequence. By (4.16) we conclude that $i_j < c_{e_j+1} \leq c_{e_k} \leq i_k$. If $e_j = e_k$ with $j < k$, then $i_k - i_j = \mathrm{card}(\Lambda^k_j) \geq 1$; hence $i_j < i_k$. It remains to prove that if $i_j < i_k$, then $e_j < e_k$, or $e_j = e_k$ with $j < k$. Since, in general, for $j, k \in \mathbb{Z}_d$ one of the following cases holds: $e_j < e_k$; $e_j = e_k$ with $j < k$; $e_j = e_k$ with $j \geq k$; or $e_j > e_k$; it suffices to show that if $e_j > e_k$, or $e_j = e_k$ with $j \geq k$, then $i_j \geq i_k$. If $e_j > e_k$, by the argument earlier in this paragraph we conclude that $i_j > i_k$. If $e_j = e_k$ with $j \geq k$, we have that $i_j - i_k = \mathrm{card}(\Lambda^j_k) \geq 0$, that is, $i_j \geq i_k$. This completes the proof of property (2).

The above analysis also implies that the only possibility to have $i_j = i_k$ is $j = k$. This proves property (3).

Noticing that $e_k \in \mathbb{Z}_\mu$ for $k \in \mathbb{Z}_d$ and $0 \leq c_{e_k} \leq i_k < c_{e_k+1} \leq d$, we conclude that $I_e$ is a permutation of $v_d$.
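The definitions (4.13)–(4.15) translate directly into code. The sketch below (my own illustration) computes $I_e$ and checks Lemma 4.9(4) exhaustively for small parameters.

```python
from itertools import product

def perm_I(e, mu):
    """I_e via (4.13)-(4.15): i_k = c_{e_k} + card{l < k : e_l = e_k}."""
    chi = [sum(1 for x in e if x == j) for j in range(mu)]   # counting functions (4.13)
    c = [0]
    for j in range(mu):
        c.append(c[-1] + chi[j])                             # cumulative counts (4.14)
    seen = [0] * mu
    I = []
    for ek in e:
        I.append(c[ek] + seen[ek])                           # formula (4.15)
        seen[ek] += 1
    return I

print(perm_I([2, 0, 1, 0, 2], 3))        # [3, 0, 2, 1, 4]

# Lemma 4.9(4): every I_e is a permutation of v_d = [0, ..., d-1]
ok = all(sorted(perm_I(list(e), 2)) == [0, 1, 2, 3]
         for e in product(range(2), repeat=4))
print(ok)                                 # True
```

In effect, $i_k$ is the rank that position $k$ receives when $e$ is sorted stably by value, which is how Lemma 4.9(2) can be read.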

We also need conjugate permutations in order to define the contractive mappings. A permutation matrix has exactly one entry in each row and column equal to one, and all other entries zero; hence a permutation matrix is an orthogonal matrix. For any permutation $I_e$ of $v_d$ there is a unique permutation matrix $P_e$ such that $I_e = P_e v_d$. We call the vector
$$I^*_e = [i^*_j : j \in \mathbb{Z}_d] = P^T_e v_d$$
the conjugate permutation of $I_e$. Thus $I^*_e$ itself is also a permutation of $v_d$. It follows from the definition above that, for $l \in \mathbb{Z}_d$, $i^*_l = k$ if and only if $i_k = l$. We define the conjugate vector $e^* = [e^*_j : j \in \mathbb{Z}_d]$ of $e$ by setting $e^*_l = e_{i^*_l}$, $l \in \mathbb{Z}_d$. Utilizing the above notation, we define the mapping $G_e$ by
$$G_e(x) = \tilde{x} = \left[\tilde{x}_l = \frac{x_{i^*_l} + e^*_l}{\mu} : l \in \mathbb{Z}_d\right], \quad x \in S. \tag{4.17}$$
It is clear that the mappings $G_e$, $e \in \mathbb{Z}^d_\mu$, are affine and contractive.

We next identify the set $G_e(S)$. To this end, associated with each $e \in \mathbb{Z}^d_\mu$ we define a set in $\mathbb{R}^d$ by
$$S_e = \left\{\tilde{x} \in \mathbb{R}^d : 0 \leq \tilde{x}_{i_0} - \frac{e_0}{\mu} \leq \tilde{x}_{i_1} - \frac{e_1}{\mu} \leq \cdots \leq \tilde{x}_{i_{d-1}} - \frac{e_{d-1}}{\mu} \leq \frac{1}{\mu}\right\}, \tag{4.18}$$



where $i_k$, $k \in \mathbb{Z}_d$, are the components of the permutation vector $I_e$ of $e$. Since $I_e$ is a permutation of $v_d$, $S_e$ is a simplex in $\mathbb{R}^d$. In the next lemma we identify $G_e(S)$ with the simplex $S_e$.

Lemma 4.10 For all $e \in \mathbb{Z}^d_\mu$, there holds $G_e(S) = S_e$.

Proof For $k \in \mathbb{Z}_d$, we let $l = i_k$ and observe by definition that $i^*_l = k$ and $e^*_l = e_k$. Thus $\tilde{x}_l = \frac{x_k + e_k}{\mu}$, or
$$x_k = \mu \tilde{x}_l - e_k = \mu \tilde{x}_{i_k} - e_k. \tag{4.19}$$
If $x \in S$, then $0 \leq x_0 \leq x_1 \leq \cdots \leq x_{d-1} \leq 1$, which implies that
$$0 \leq \mu \tilde{x}_{i_0} - e_0 \leq \mu \tilde{x}_{i_1} - e_1 \leq \cdots \leq \mu \tilde{x}_{i_{d-1}} - e_{d-1} \leq 1,$$
or
$$0 \leq \tilde{x}_{i_0} - \frac{e_0}{\mu} \leq \tilde{x}_{i_1} - \frac{e_1}{\mu} \leq \cdots \leq \tilde{x}_{i_{d-1}} - \frac{e_{d-1}}{\mu} \leq \frac{1}{\mu},$$
so that $\tilde{x} \in S_e$. Moreover, given $\tilde{x} \in S_e$, we define $x = [x_k : k \in \mathbb{Z}_d]$ by equation (4.19). Then $x \in S$ and $\tilde{x} = G_e(x)$. Therefore, $G_e(S) = S_e$.
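The mapping $G_e$ of (4.17) and its inverse (4.19) can be implemented via the conjugate permutation. In the sketch below (my own illustration, using the observation that $i_k$ is the stable-sort rank of position $k$ in $e$), a round trip through $G_e$ and $G_e^{-1}$ reproduces the input point.

```python
mu = 2

def perm_I(e):
    """I_e of (4.15); equivalently, i_k is the stable-sort rank of position k."""
    order = sorted(range(len(e)), key=lambda k: (e[k], k))
    I = [0] * len(e)
    for rank, k in enumerate(order):
        I[k] = rank
    return I

def G(e, x):
    """G_e of (4.17): the l-th output is (x_{i*_l} + e*_l) / mu."""
    I = perm_I(e)
    inv = [0] * len(I)
    for k, l in enumerate(I):
        inv[l] = k                        # i*_l = k  iff  i_k = l
    return [(x[inv[l]] + e[inv[l]]) / mu for l in range(len(e))]

def G_inv(e, xt):
    """(4.19): x_k = mu * xt_{i_k} - e_k."""
    I = perm_I(e)
    return [mu * xt[I[k]] - ek for k, ek in enumerate(e)]

x = [0.2, 0.5, 0.9]                       # a point of the unit simplex S
e = [1, 0, 1]
xt = G(e, x)
print(xt)                                 # ~[0.25, 0.6, 0.95]: nondecreasing, so in S
print(G_inv(e, xt))                       # recovers x
```

That the image is again nondecreasing illustrates Lemma 4.11(2), $S_e \subset S$.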

In the following lemma we present properties of the simplices $S_e$, $e \in \mathbb{Z}^d_\mu$.

Lemma 4.11 The simplices $S_e$, $e \in \mathbb{Z}^d_\mu$, have the following properties.

(1) For any $\tilde{x} \in S_e$, there holds
$$\frac{k}{\mu} \leq \tilde{x}_{c_k} \leq \tilde{x}_{c_k+1} \leq \cdots \leq \tilde{x}_{c_{k+1}-1} \leq \frac{k+1}{\mu}, \quad k \in \mathbb{Z}_\mu. \tag{4.20}$$
(2) For any $e \in \mathbb{Z}^d_\mu$, $S_e \subset S$.
(3) If $e^1, e^2 \in \mathbb{Z}^d_\mu$ with $e^1 \neq e^2$, then $\mathrm{int}(S_{e^1}) \cap \mathrm{int}(S_{e^2}) = \emptyset$.
(4) For any $e \in \mathbb{Z}^d_\mu$, $\mathrm{meas}(S_e) = 1/(\mu^d d!)$, where $\mathrm{meas}(\cdot)$ denotes the Lebesgue measure of a set.

Proof In order to prove (4.20), it suffices to show that
$$0 \leq \tilde{x}_{c_k} - \frac{k}{\mu} \leq \tilde{x}_{c_k+1} - \frac{k}{\mu} \leq \cdots \leq \tilde{x}_{c_{k+1}-1} - \frac{k}{\mu} \leq \frac{1}{\mu}, \quad k \in \mathbb{Z}_\mu, \tag{4.21}$$
or, equivalently,
$$0 \leq \tilde{x}_p - \frac{k}{\mu} \leq \tilde{x}_q - \frac{k}{\mu} \leq \frac{1}{\mu}$$
for any $c_k \leq p < q < c_{k+1}$. In fact, since $I_e$ is a permutation of $v_d$, for any integers $c_k \leq p < q < c_{k+1}$ there exists a unique pair $p', q' \in \mathbb{Z}_d$ such that



$i_{p'} = p$, $i_{q'} = q$. It follows from Lemma 4.9 that $e_{p'} = e_{q'} = k$ and $p' < q'$. Thus (4.18) states that
$$0 \leq \tilde{x}_p - \frac{k}{\mu} = \tilde{x}_{i_{p'}} - \frac{e_{p'}}{\mu} \leq \tilde{x}_{i_{q'}} - \frac{e_{q'}}{\mu} = \tilde{x}_q - \frac{k}{\mu} \leq \frac{1}{\mu},$$

which concludes property (1).

Property (2) is a direct consequence of (1) and the definition of $S$.

For the proof of (3), we first notice that
$$\mathrm{int}(S_e) = \left\{\tilde{x} \in \mathbb{R}^d : 0 < \tilde{x}_{i_0} - \frac{e_0}{\mu} < \tilde{x}_{i_1} - \frac{e_1}{\mu} < \cdots < \tilde{x}_{i_{d-1}} - \frac{e_{d-1}}{\mu} < \frac{1}{\mu}\right\}. \tag{4.22}$$

Moreover, by a proof similar to that for (4.20), we utilize (4.22) to conclude, for any $\tilde{x} \in \mathrm{int}(S_e)$, that
$$\frac{k}{\mu} < \tilde{x}_{c_k} < \tilde{x}_{c_k+1} < \cdots < \tilde{x}_{c_{k+1}-1} < \frac{k+1}{\mu}, \quad k \in \mathbb{Z}_\mu. \tag{4.23}$$

For $j = 1, 2$, we let $e^j = [e^j_k : k \in \mathbb{Z}_d]$, $I_{e^j} = [i^j_k : k \in \mathbb{Z}_d]$ and $c(e^j) = [c^j_k : k \in \mathbb{Z}_{\mu+1}]$.

Assume, to the contrary, that $\mathrm{int}(S_{e^1}) \cap \mathrm{int}(S_{e^2})$ is not empty. We consider two cases. In case 1, in which $c(e^1) \neq c(e^2)$, we let $k$ be the smallest integer such that $c^1_k \neq c^2_k$, and assume $c^1_k < c^2_k$ without loss of generality. For any $\tilde{x} \in \mathrm{int}(S_{e^1}) \cap \mathrm{int}(S_{e^2})$, by (4.23) we have $\tilde{x}_{c^1_k} > \frac{k}{\mu}$ and $\tilde{x}_{c^2_k - 1} < \frac{k}{\mu}$. Moreover, because $\tilde{x} \in S$, we have that $\tilde{x}_{c^1_k} \leq \tilde{x}_{c^2_k - 1}$, a contradiction. In case 2, in which $c(e^1) = c(e^2)$, since $e^1 \neq e^2$ we let $k$ be the smallest integer such that $e^1_k \neq e^2_k$. Hence $e^1_j = e^2_j$ for $j < k$, and we assume that $e^1_k < e^2_k$ without loss of generality. Thus we have that $i^1_k < c^1_{e^1_k + 1} \leq c^2_{e^2_k} \leq i^2_k$. There exists a unique $p \in \mathbb{Z}_d$ such that $i^1_p = i^2_k$, since $I_{e^1}$ is a permutation, and $p \geq k$ because $i^1_j = i^2_j \neq i^2_k$ for all $j < k$. Furthermore, it follows from Lemma 4.9, $c(e^1) = c(e^2)$ and $i^1_p = i^2_k$ that $e^1_p = e^2_k \neq e^1_k$, which implies $p \neq k$, and hence $p > k$. Therefore, for any $\tilde{x} \in \mathrm{int}(S_{e^1})$, there holds
$$\tilde{x}_{i^1_k} - \frac{e^1_k}{\mu} < \tilde{x}_{i^1_p} - \frac{e^1_p}{\mu} = \tilde{x}_{i^2_k} - \frac{e^2_k}{\mu}.$$
However, there is a unique $q \in \mathbb{Z}_d$ such that $q > k$, $i^2_q = i^1_k$, and for any $\tilde{x} \in \mathrm{int}(S_{e^2})$,
$$\tilde{x}_{i^2_k} - \frac{e^2_k}{\mu} < \tilde{x}_{i^2_q} - \frac{e^2_q}{\mu} = \tilde{x}_{i^1_k} - \frac{e^1_k}{\mu},$$
again a contradiction. This completes the proof of property (3).



For property (4), we find by direct computation that $\mathrm{meas}(S'_e) = 1/(\mu^d d!)$, where
$$S'_e = \left\{\tilde{x} \in \mathbb{R}^d : 0 \leq \tilde{x}_{i_0} \leq \tilde{x}_{i_1} \leq \cdots \leq \tilde{x}_{i_{d-1}} \leq \frac{1}{\mu}\right\}.$$
Notice that $S_e$ is the translation of the simplex $S'_e$ by the vector $e^*/\mu$. Since the Lebesgue measure of a set is invariant under translation, we conclude property (4).

Theorem 4.12 The family $S(\mathbb{Z}^d_\mu) = \{S_e : e \in \mathbb{Z}^d_\mu\}$ is an equivolume partition of the unit simplex $S$.

Proof By Lemma 4.11 we see that, for any $e \in \mathbb{Z}^d_\mu$, $S_e \subset S$, and for $e^1, e^2 \in \mathbb{Z}^d_\mu$ with $e^1 \neq e^2$, $\mathrm{int}(S_{e^1}) \cap \mathrm{int}(S_{e^2}) = \emptyset$ and $\mathrm{meas}(S_{e^1}) = \mathrm{meas}(S_{e^2})$. It remains to prove that $S \subseteq \bigcup_{e \in \mathbb{Z}^d_\mu} S_e$.

To this end, for each $x \in S$ we find $e \in \mathbb{Z}^d_\mu$ such that $x \in S_e$. Note that for each $x \in S$ we have that $0 \leq x_0 \leq x_1 \leq \cdots \leq x_{d-1} \leq 1$. For each $k \in \mathbb{Z}_\mu$, we denote by $c_k$ the subscript of the smallest component $x_j$ greater than or equal to $\frac{k}{\mu}$. We order the elements of the set $\{x_j : j \in \mathbb{Z}_d\} \cup \{\frac{k}{\mu} : k \in \mathbb{Z}_{\mu+1}\}$ in increasing order. We then obtain that
$$0 \leq x_0 \leq \cdots \leq x_{c_1 - 1} < \frac{1}{\mu} \leq x_{c_1} \leq \cdots \leq x_{c_{\mu-1} - 1} < \frac{\mu - 1}{\mu} \leq x_{c_{\mu-1}} \leq \cdots \leq x_{c_\mu - 1} = x_{d-1} \leq 1.$$
In other words, we have that
$$0 \leq x_{c_k} - \frac{k}{\mu} \leq x_{c_k + 1} - \frac{k}{\mu} \leq \cdots \leq x_{c_{k+1} - 1} - \frac{k}{\mu} \leq \frac{1}{\mu}, \quad k \in \mathbb{Z}_\mu. \tag{4.24}$$

Let $p_j = \max\{k : c_k \leq j\}$. It follows from (4.24) that the set $\{x_j - \frac{p_j}{\mu} : j \in \mathbb{Z}_d\} \subset [0, \frac{1}{\mu}]$. We sort the elements of this set into
$$0 \leq x_{i_0} - \frac{p_{i_0}}{\mu} \leq x_{i_1} - \frac{p_{i_1}}{\mu} \leq \cdots \leq x_{i_{d-1}} - \frac{p_{i_{d-1}}}{\mu} \leq \frac{1}{\mu}. \tag{4.25}$$
Notice that the vector $I = [i_k : k \in \mathbb{Z}_d]$ is a permutation of $v_d$. Let $e = [e_k : k \in \mathbb{Z}_d]$ be the vector with $e_j = p_{i_j}$. It is easy to verify that $i_j = c_{e_j} + \mathrm{card}(\Lambda^j_0)$. Hence $I = I_e$, which together with (4.25) shows that $x \in S_e$.
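The proof is constructive, and the construction can be coded as a point-location routine. The sketch below (my own illustration) follows the steps of the proof — slab indices $p_j$, fractional parts, the sorting (4.25) — and verifies membership through the chain (4.18).

```python
import random

mu, d = 3, 4

def perm_I(e):
    """Stable-sort ranks of e, i.e. (4.15)."""
    order = sorted(range(len(e)), key=lambda k: (e[k], k))
    I = [0] * len(e)
    for rank, k in enumerate(order):
        I[k] = rank
    return I

def locate(x):
    """Given sorted x in S, produce e with x in S_e (proof of Theorem 4.12)."""
    p = [min(int(mu * xj), mu - 1) for xj in x]            # slab index of x_j
    frac = [x[j] - p[j] / mu for j in range(d)]            # x_j - p_j/mu, in [0, 1/mu]
    order = sorted(range(d), key=lambda j: (frac[j], j))   # the sorting (4.25)
    return [p[j] for j in order]                           # e_k = p_{i_k}

def in_Se(x, e, tol=1e-12):
    """Membership in S_e via the chain (4.18)."""
    I = perm_I(e)
    seq = [0.0] + [x[I[k]] - e[k] / mu for k in range(d)] + [1.0 / mu]
    return all(seq[i] <= seq[i + 1] + tol for i in range(len(seq) - 1))

random.seed(1)
x = sorted(random.random() for _ in range(d))
print(in_Se(x, locate(x)))        # True
```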

The expression for the inverse mapping $G^{-1}_e$ has been given by equation (4.19), which may be written formally as
$$x = G^{-1}_e(\tilde{x}) = [x_k = \mu \tilde{x}_{i_k} - e_k : k \in \mathbb{Z}_d], \quad \tilde{x} \in S_e. \tag{4.26}$$


For any $e \in \mathbb{Z}^d_\mu$ and $x', x'' \in \mathbb{R}^d$,

$$\|G_e(x') - G_e(x'')\|_p = \frac{1}{\mu}\|x' - x''\|_p \tag{4.27}$$

and

$$\|G_e^{-1}(x') - G_e^{-1}(x'')\|_p = \mu\|x' - x''\|_p \tag{4.28}$$

hold, where $\|\cdot\|_p$ is the standard $p$-norm on $\mathbb{R}^d$ for $1 \le p \le \infty$.

Proposition 4.13 The family $S(\mathbb{Z}^d_\mu)$ is a uniform partition of the unit simplex $S$ in the sense that all elements of $S(\mathbb{Z}^d_\mu)$ have an identical diameter.

Proof We let $\Delta = \max_{x',x'' \in S} \|x' - x''\|_p$. It suffices to prove for any $e \in \mathbb{Z}^d_\mu$ that

$$\max_{x'_e, x''_e \in S_e} \|x'_e - x''_e\|_p = \frac{\Delta}{\mu}.$$

It follows from formula (4.28) that for any $x'_e, x''_e \in S_e$,

$$\mu \|x'_e - x''_e\|_p = \|G_e^{-1}(x'_e) - G_e^{-1}(x''_e)\|_p \le \Delta.$$

Moreover, suppose that $x', x'' \in S$ are such that $\|x' - x''\|_p = \Delta$, and let $x'_e = G_e(x')$ and $x''_e = G_e(x'')$. By (4.27) we have that

$$\|x'_e - x''_e\|_p = \frac{1}{\mu}\|x' - x''\|_p = \frac{\Delta}{\mu},$$

which completes the proof.

When a partition of the unit simplex has been established, it is not difficult to obtain a corresponding partition of a general simplex in $\mathbb{R}^d$. For a nondegenerate simplex $S'$ in $\mathbb{R}^d$, in the sense that $\operatorname{Vol}(S') \ne 0$, there exists an affine mapping $F : \mathbb{R}^d \to \mathbb{R}^d$ such that $F(S') = S$. It can be shown that for $1 \le p \le \infty$ there are two positive constants $c_1$ and $c_2$ such that

$$c_1 \|x' - x''\|_p \le \|F(x') - F(x'')\|_p \le c_2 \|x' - x''\|_p \tag{4.29}$$

for any $x', x'' \in S'$. For any $e \in \mathbb{Z}^d_\mu$ we define $G'_e = F^{-1} \circ G_e \circ F$. Thus the family of simplices $\{G'_e(S') : e \in \mathbb{Z}^d_\mu\}$ is a partition of $S'$. Furthermore, for any $x', x'' \in \mathbb{R}^d$ and $e \in \mathbb{Z}^d_\mu$,

$$\frac{c_1}{c_2 \mu}\|x' - x''\|_p \le \|G'_e(x') - G'_e(x'')\|_p \le \frac{c_2}{c_1 \mu}\|x' - x''\|_p$$

holds. For $E = [e_j : j \in \mathbb{Z}_m] \in (\mathbb{Z}^d_\mu)^m$ we define the composite mappings

$$G_E = G_{e_0} \circ \cdots \circ G_{e_{m-1}} \quad \text{and} \quad G'_E = G'_{e_0} \circ \cdots \circ G'_{e_{m-1}}$$


and observe that $G'_E = F^{-1} \circ G_E \circ F$. In the next theorem we show that the partition $\{G'_e(S') : e \in \mathbb{Z}^d_\mu\}$ of $S'$ is uniform. To this end we let $S_E = G_E(S)$ and $S'_E = G'_E(S')$. Also, we use $\operatorname{diam}_p$ to denote the diameter of a domain in $\mathbb{R}^d$ with respect to the $p$-norm.

Theorem 4.14 For any $x', x'' \in \mathbb{R}^d$ and $E \in (\mathbb{Z}^d_\mu)^m$, there hold

$$\frac{c_1}{c_2}\left(\frac{1}{\mu}\right)^m \|x' - x''\|_p \le \|G'_E(x') - G'_E(x'')\|_p \le \frac{c_2}{c_1}\left(\frac{1}{\mu}\right)^m \|x' - x''\|_p,$$

$$\operatorname{diam}_p(S_E) = \left(\frac{1}{\mu}\right)^m \operatorname{diam}_p(S)$$

and

$$\frac{c_1}{c_2}\left(\frac{1}{\mu}\right)^m \operatorname{diam}_p(S') \le \operatorname{diam}_p(S'_E) \le \frac{c_2}{c_1}\left(\frac{1}{\mu}\right)^m \operatorname{diam}_p(S').$$

4.3 Multiscale orthogonal bases

In this section we describe the recursive construction of multiscale orthogonal bases for the space $L^2(\Omega)$ on the invariant set $\Omega$.

4.3.1 Piecewise polynomial spaces

On the partition $\{\phi_e(\Omega) : e \in \mathbb{Z}^n_\mu\}$ of the invariant set $\Omega$ we consider piecewise polynomials in a Banach space $X$ with norm $\|\cdot\|$. Choose a positive integer $k$ and let $X_n$ be the space of all functions whose restriction to any cell $\phi_e(\Omega)$, $e \in \mathbb{Z}^n_\mu$, is a polynomial of total degree $\le k-1$. Here we use the convention that for $n = 0$ the set $\Omega$ is the only cell in the partition, and so

$$m = \dim X_0 = \binom{k+d-1}{d}.$$

It is easily seen that

$$x(n) = \dim X_n = m\mu^n, \quad n \in \mathbb{N}_0.$$
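As a quick numerical illustration, the two dimension counts above can be checked directly; the helper names below are ours, not the book's.

```python
from math import comb

def dim_X0(k: int, d: int) -> int:
    # m = C(k + d - 1, d): polynomials of total degree <= k - 1 in d variables
    return comb(k + d - 1, d)

def dim_Xn(k: int, d: int, mu: int, n: int) -> int:
    # x(n) = m * mu**n: one polynomial patch on each of the mu**n cells
    return dim_X0(k, d) * mu**n

# piecewise-linear functions (k = 2) on a triangle (d = 2) refined by mu = 4 maps
print(dim_X0(2, 2), dim_Xn(2, 2, 4, 2))  # 3 48
```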

To generate the spaces $X_n$ by induction from $X_0$ we introduce linear operators $T_e : X \to X$, $e \in \mathbb{Z}_\mu$, defined by

$$(T_e v)(t) = c_0\, v(\phi_e^{-1}(t))\, \chi_{\phi_e(\Omega)}(t), \tag{4.30}$$

where $\chi_A$ denotes the characteristic function of the set $A$ and $c_0$ is the positive constant such that $\|T_e\| = 1$. Thus we have that


$$X_n = \bigoplus_{e \in \mathbb{Z}_\mu} T_e X_{n-1}, \quad n \in \mathbb{N}, \tag{4.31}$$

where $A \oplus B$ denotes the direct sum of the spaces $A$ and $B$. It is easily seen that the sequence of spaces has the property of nestedness, that is,

$$X_{n-1} \subset X_n, \quad n \in \mathbb{N}. \tag{4.32}$$

Assume that there exists a basis of elements in $X_0$, denoted by $\psi_0, \psi_1, \ldots, \psi_{m-1}$, such that

$$X_0 = \operatorname{span}\{\psi_j : j \in \mathbb{Z}_m\}.$$

It is clear that

$$X_n = \operatorname{span}\{T_e \psi_j : j \in \mathbb{Z}_m,\ e \in \mathbb{Z}^n_\mu\}. \tag{4.33}$$

4.3.2 A recursive construction

Noting that the subspace sequence $\{X_n : n \in \mathbb{N}_0\}$ is nested, we can define for each $n \in \mathbb{N}_0$ subspaces $W_{n+1} \subset X_{n+1}$ such that

$$X_{n+1} = X_n \oplus^\perp W_{n+1}, \quad n \in \mathbb{N}_0. \tag{4.34}$$

Thus, by setting $W_0 = X_0$, we have the multiscale space decomposition that for any $n \in \mathbb{N}$,

$$X_n = \bigoplus_{i \in \mathbb{Z}_{n+1}}^{\perp} W_i \tag{4.35}$$

and

$$L^2(\Omega) = \bigoplus_{i \in \mathbb{N}_0}^{\perp} W_i. \tag{4.36}$$

It can be computed that the dimension of $W_n$ is given by

$$w(n) = \dim W_n = x(n) - x(n-1) = m(\mu-1)\mu^{n-1}, \quad n \in \mathbb{N}. \tag{4.37}$$

Now the family of operators $T_e : L^2(\Omega) \to L^2(\Omega)$, $e \in \mathbb{Z}_\mu$, is defined by

$$T_e v = |J_{\phi_e}|^{-1/2}\, (v \circ \phi_e^{-1})\, \chi_{\phi_e(\Omega)}.$$

In the next proposition we see that the operators $T_e$, $e \in \mathbb{Z}_\mu$, are isometries from $L^2(\Omega)$ to $L^2(\Omega)$.

Proposition 4.15 If $e, e' \in \mathbb{Z}_\mu$, then for all $u, v \in L^2(\Omega)$,

$$(T_e u, T_{e'} v) = \delta_{e,e'}(u, v).$$


Proof When $e \ne e'$, the intersection of the support of $T_e u$ and that of $T_{e'} v$ has measure zero. Hence in this case we have that

$$(T_e u, T_{e'} v) = 0.$$

Now we consider the case $e = e'$. By the definition of the operator $T_e$ we have that

$$(T_e u, T_{e'} v) = |J_{\phi_e}|^{-1} \int_{\phi_e(\Omega)} (u \circ \phi_e^{-1})(t)\,(v \circ \phi_e^{-1})(t)\, dt.$$

Using a change of variable we conclude that

$$(T_e u, T_{e'} v) = \int_\Omega u(t)\,v(t)\, dt = (u, v),$$

which completes the proof.
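To make the isometry concrete, here is a small exact verification of Proposition 4.15 for the dyadic maps $\psi_\varepsilon(t) = (t+\varepsilon)/2$ on $\Omega = [0,1]$, where $|J_{\phi_\varepsilon}| = 1/2$, so $T_\varepsilon v(t) = \sqrt{2}\, v(2t-\varepsilon)$ on $[\varepsilon/2, (\varepsilon+1)/2]$. Polynomials are coefficient lists and all integrals are computed in rational arithmetic; the function names are ours, an illustrative sketch rather than the book's code.

```python
from fractions import Fraction as F

def polymul(p, q):
    r = [F(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += F(a) * F(b)
    return r

def polyadd(p, q):
    n = max(len(p), len(q))
    return [(p[i] if i < len(p) else F(0)) + (q[i] if i < len(q) else F(0))
            for i in range(n)]

def compose_affine(p, a, b):
    # q(t) = p(a*t + b), via Horner's scheme with polynomial coefficients
    q = [F(0)]
    for c in reversed(p):
        q = polyadd(polymul(q, [F(b), F(a)]), [F(c)])
    return q

def integrate(p, lo, hi):
    return sum(c * (F(hi)**(i + 1) - F(lo)**(i + 1)) / (i + 1)
               for i, c in enumerate(p))

def inner(u, v):
    return integrate(polymul(u, v), 0, 1)

MU = 2
def inner_T(u, v, e, ep):
    # (T_e u, T_e' v); for e != e' the supports meet in a set of measure zero
    if e != ep:
        return F(0)
    Tu = compose_affine(u, MU, -e)   # u(MU*t - e)
    Tv = compose_affine(v, MU, -e)
    return F(MU) * integrate(polymul(Tu, Tv), F(e, MU), F(e + 1, MU))

u, v = [1, 2, 3], [0, 1]                   # u(t) = 1 + 2t + 3t^2, v(t) = t
print(inner_T(u, v, 0, 0) == inner(u, v))  # True: T_0 is an isometry
print(inner_T(u, v, 0, 1) == 0)            # True: orthogonality across supports
```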

The following proposition shows that the spaces $W_n$ can be generated recursively from $W_1$.

Proposition 4.16 It holds that

$$W_{n+1} = \bigoplus_{e \in \mathbb{Z}_\mu}^{\perp} T_e W_n, \quad n \in \mathbb{N}.$$

Proof We remark first that by Proposition 4.15 the direct sum above satisfies the orthogonality property. It follows from (4.34) that

$$W_n \subset X_n \quad \text{and} \quad W_n \perp X_{n-1}.$$

Thus by (4.31) and Proposition 4.15 we conclude that

$$T_e W_n \subset X_{n+1} \quad \text{and} \quad T_e W_n \perp X_n.$$

These relations with (4.34) ensure that for any $e \in \mathbb{Z}_\mu$,

$$T_e W_n \subset W_{n+1}.$$

Since from (4.37) we have that

$$\dim \bigoplus_{e \in \mathbb{Z}_\mu}^{\perp} T_e W_n = \mu \dim W_n = m(\mu-1)\mu^n = \dim W_{n+1},$$

the result of this proposition holds.

It can easily be seen from the above proposition and its proof that the following proposition holds.


Proposition 4.17 If $\mathcal{W}_1$ is a basis of the space $W_1$, then for $n \in \mathbb{N}$ the recursively generated set

$$\mathcal{W}_{n+1} = \bigcup_{e \in \mathbb{Z}_\mu}{}^{\perp}\, T_e \mathcal{W}_n = \bigcup_{e \in \mathbb{Z}^n_\mu}{}^{\perp}\, T_e \mathcal{W}_1$$

is a basis of the space $W_{n+1}$.

Now it is clear that as soon as we choose an orthogonal basis of $W_0$, denoted by $\{w_{0j} : j \in \mathbb{Z}_m\}$, and get an orthogonal basis of $W_1$, denoted by $\{w_{1j} : j \in \mathbb{Z}_r\}$ where $r = w(1)$, then we can generate recursively an orthogonal basis $\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}$ of the space $W_i$ by

$$w_{ij} = T_e w_{1l}, \quad j = \mu(e)r + l, \quad e \in \mathbb{Z}^{i-1}_\mu, \ l \in \mathbb{Z}_r, \ i-1 \in \mathbb{N}. \tag{4.38}$$

The functions $w_{ij}$, $(i,j) \in U$, are wavelet-like functions, which are also called orthogonal wavelets.
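For the simplest concrete case, $\mu = 2$, $k = 1$ and $\Omega = [0,1]$ (so $m = 1$, $r = w(1) = 1$, and $w_1$ is the Haar function), the recursion (4.38) can be carried out numerically. Piecewise-constant functions are stored as value vectors on a dyadic grid; this is an illustrative sketch under these conventions, not code from the book.

```python
import math

def T(f, eps):
    # (T_eps f)(t) = sqrt(2) f(2t - eps) on [eps/2, (eps+1)/2] and 0 elsewhere;
    # f is a vector of cell values of a piecewise-constant function on [0, 1]
    s = [math.sqrt(2) * x for x in f]
    z = [0.0] * len(f)
    return s + z if eps == 0 else z + s

def refine(f):
    # the same piecewise-constant function on the doubled grid
    return [x for x in f for _ in range(2)]

def inner(f, g):
    n = max(len(f), len(g))
    while len(f) < n:
        f = refine(f)
    while len(g) < n:
        g = refine(g)
    return sum(a * b for a, b in zip(f, g)) / n

w0 = [1.0]          # orthonormal basis of W_0 = X_0 (constants)
w1 = [1.0, -1.0]    # the Haar function: orthonormal basis of W_1

W2 = [T(w1, e) for e in (0, 1)]             # level 2, via (4.38)
W3 = [T(f, e) for f in W2 for e in (0, 1)]  # level 3

basis = [w0, w1] + W2 + W3
gram = [[inner(f, g) for g in basis] for f in basis]
print(all(abs(gram[i][j] - (i == j)) < 1e-12
          for i in range(len(basis)) for j in range(len(basis))))  # True
```

The identity Gram matrix confirms that the recursively generated family is orthonormal, which is exactly what Propositions 4.15 and 4.16 predict.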

4.4 Refinable sets and set wavelets

In Section 4.3 we showed how to construct multiscale orthonormal bases on invariant sets. These wavelet-like functions are discontinuous; nonetheless, they have important applications to the numerical solution of integral equations. In the next section we explore similar recursive structures for multiscale function representation and approximation by focusing on the analogous situation for interpolation on an invariant set. Thus, in this section we seek a mechanism to generate sequences of points which have a multiscale structure that can then be used to efficiently generate interpolating functions and multiscale functionals.

We first develop a notion of refinable sets, give a complete characterization of refinable sets in a general metric space setting and illustrate the general characterization with several examples of practical importance. Next we show how refinable sets lead to a multiresolution structure relative to set inclusion which is analogous to the multiresolution analysis associated with refinable functions. This set-theoretic multiresolution structure leads us to what we call set wavelets, which are generated by a successive application of the contractive mappings to an initial set wavelet. These will lead us, in particular, in Section 4.5 to the construction of interpolation that has the desired multiscale structure and in Chapter 7 to the construction of multiscale functionals for developing fast multiscale collocation methods for solving integral equations.


4.4.1 Refinable sets

This subsection is devoted to a study of refinable sets relative to a family of contractive mappings. A complete characterization of refinable sets will be presented and illustrated by several examples of practical importance.

Assume that $X$ is a complete metric space, $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ is a family of contractive mappings on $X$ and $\Omega$ is the invariant set relative to the family $\Phi$ of mappings.

For a subset $V \subset X$ we let

$$\Phi(V) = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \phi_\varepsilon(V).$$

Definition 4.18 A subset $V$ of $X$ is said to be refinable relative to the mappings $\Phi$ if $V \subseteq \Phi(V)$.

For every $k \in \mathbb{N}$ and $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$ we define the contractive composition mapping $\phi_{e_k} = \phi_{\varepsilon_0} \circ \phi_{\varepsilon_1} \circ \cdots \circ \phi_{\varepsilon_{k-1}}$ and let $\Phi_k = \{\phi_{e_k} : e_k \in \mathbb{Z}^k_\mu\}$; in particular, $\Phi_1 = \Phi$. Observe that the union of any collection of refinable sets is likewise refinable. Moreover, if $V$ is a refinable subset of $X$ then $\Phi_k(V) = \bigcup_{e_k \in \mathbb{Z}^k_\mu} \phi_{e_k}(V)$ is also a refinable subset of $X$ for all $k \in \mathbb{N}$. One of our main objectives is to identify refinable sets of finite cardinality.

Before we present a characterization of these sets, we look at some examples on the real line which will be helpful to illuminate the general result.

on the real line which will be helpful to illuminate the general resultFor the metric space R and integer μ gt 1 we consider the mappings

ψε(t) = t + ε

μ t isin R ε isin Zμ (439)

The invariant set for this family of mappings is the unit interval [0 1] Our firstexample of a refinable set relative to the family of mappings = ψε ε isinZμ given in (439) comes next

Proposition 4.19 The set $U_0 = \{j/k : j \in \mathbb{Z}_{k+1}\}$ is refinable relative to the mappings $\Psi$.

Proof It is sufficient to consider $j > 0$, since $0 = \psi_0(0)$. For every $j \in \mathbb{Z}_{k+1}$ we write the integer $\mu j$ uniquely in the form $\mu j = k\varepsilon + \ell$, where $\ell - 1 \in \mathbb{Z}_k$ and $\varepsilon \in \mathbb{N}_0$. Since $\mu j \le \mu k$ we conclude that $\varepsilon \in \mathbb{Z}_\mu$. Moreover, we have that $j/k = \psi_\varepsilon(\ell/k)$, and so $U_0$ is refinable.
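Proposition 4.19 is easy to confirm by brute force for small $k$ and $\mu$; the test below simply applies Definition 4.18 to a finite set (an illustrative sketch, with our own function names).

```python
from fractions import Fraction as F

def is_refinable(U, mu):
    # Definition 4.18 for a finite set: every u in U must equal
    # psi_eps(u') = (u' + eps)/mu for some eps in Z_mu and some u' in U
    return all(any((F(up) + e) / mu == u for e in range(mu) for up in U)
               for u in U)

# Proposition 4.19: {j/k : j in Z_{k+1}} is refinable for every mu and k
print(all(is_refinable({F(j, k) for j in range(k + 1)}, mu)
          for k in range(1, 8) for mu in range(2, 6)))  # True
```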

In some applications the exclusion of the endpoints 0 and 1 from a refinable set is desirable. As an example of this case we present the following fact.


Proposition 4.20 The set $U_0 = \{j/(k+1) : j - 1 \in \mathbb{Z}_k\}$ is refinable relative to the mappings $\Psi$ if and only if $\mu$ and $k+1$ are relatively prime.

Proof Suppose that $\mu$ and $k+1$ have a common divisor $m > 1$, that is, $\mu = m\ell_1$ and $k+1 = m\ell_2$ for some integers $\ell_1$ and $\ell_2$, and that $U_0$ is refinable relative to the mappings $\Psi$. Then we have that $\ell_2 - 1 \in \mathbb{Z}_k$ and $\ell_1 \in \mathbb{Z}_\mu$. Moreover, we have that $\psi_{\ell_1}(0) = \ell_1/\mu = \ell_2/(k+1)$. This equation implies that $\psi_{\ell_1}(0) \in U_0$. Since $U_0$ is refinable, there exist $\varepsilon_0 \in \mathbb{Z}_\mu$ and $u \in U_0$ such that $\psi_{\ell_1}(0) = \psi_{\varepsilon_0}(u)$. It follows from the equation above that $\ell_1 = u + \varepsilon_0$. Thus either $\ell_1 = \varepsilon_0$ and $u = 0$, or $\varepsilon_0 + 1 = \ell_1$ and $u = 1$. In either case we conclude that either $0$ or $1 \in U_0$. But this is a contradiction, since $U_0$ contains neither $0$ nor $1$. Hence the integers $\mu$ and $k+1$ must be relatively prime.

Conversely, suppose $\mu$ and $k+1$ are relatively prime. For every $j - 1 \in \mathbb{Z}_k$ there exist integers $\varepsilon$ and $\ell$ such that $j\mu = (k+1)\varepsilon + \ell$, where $\ell - 1 \in \mathbb{Z}_{k+1}$. Since $j\mu \le (k+1)\mu$, it follows that $\varepsilon \in \mathbb{Z}_\mu$. Moreover, because $\mu$ and $k+1$ are relatively prime, it must also be the case that $\ell - 1 \in \mathbb{Z}_k$. Furthermore, since $j/(k+1) = \psi_\varepsilon(\ell/(k+1))$, we conclude that $U_0$ is refinable.
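The coprimality criterion of Proposition 4.20 can likewise be confirmed by enumeration for small parameters (again an illustrative sketch; names are ours).

```python
from fractions import Fraction as F
from math import gcd

def is_refinable(U, mu):
    return all(any((F(up) + e) / mu == u for e in range(mu) for up in U)
               for u in U)

def interior_set(k):
    # U0 = {j/(k+1) : j = 1, ..., k}: the endpoints 0 and 1 are excluded
    return {F(j, k + 1) for j in range(1, k + 1)}

# Proposition 4.20: refinable exactly when mu and k+1 are relatively prime
print(all(is_refinable(interior_set(k), mu) == (gcd(mu, k + 1) == 1)
          for k in range(1, 10) for mu in range(2, 7)))  # True
```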

Our third special construction of refinable sets $U_0$ in $[0,1]$ relative to the mappings $\Psi$ is formed from cyclic $\mu$-adic expansions. To describe this construction we introduce two additional mappings. The first mapping, $\pi : \mathbb{Z}^\infty_\mu \to [0,1]$, is defined by

$$\pi(e) = \sum_{j \in \mathbb{N}} \frac{\varepsilon_{j-1}}{\mu^j}, \quad e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu,$$

and we also write it as $\pi(e) = .\varepsilon_0\varepsilon_1\varepsilon_2\cdots$. This mapping takes an infinite vector $e \in \mathbb{Z}^\infty_\mu$ and associates it with a number in $[0,1]$ whose $\mu$-adic expansion is read off from the components of $e$. The mapping $\pi$ is not invertible. Referring back to the definition (4.39), we conclude for any $\varepsilon \in \mathbb{Z}_\mu$ and $e \in \mathbb{Z}^\infty_\mu$ that $\psi_\varepsilon(\pi(e)) = .\varepsilon\varepsilon_0\varepsilon_1\cdots$. We also make use of the "shift" map $\sigma : \mathbb{Z}^\infty_\mu \to \mathbb{Z}^\infty_\mu$. Specifically, for $e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu$ we set $\sigma(e) = [\varepsilon_j : j \in \mathbb{N}] \in \mathbb{Z}^\infty_\mu$. Thus the mapping $\sigma$ discards the first component of $e$, while the mapping $\psi_{\varepsilon_0}$ restores the corresponding digit, that is,

$$\psi_{\varepsilon_0}(\pi \circ \sigma(e)) = \pi(e). \tag{4.40}$$

For any $k \in \mathbb{N}$ and $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$ we let $.\overline{\varepsilon_0\varepsilon_1\cdots\varepsilon_{k-1}}$ denote the number $\pi(e)$, where $e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu$ and $\varepsilon_{i+k} = \varepsilon_i$, $i \in \mathbb{N}_0$. Note that for such an infinite vector $e$ we have that $\sigma^k(e) = e$, where $\sigma^k = \sigma \circ \cdots \circ \sigma$ is the $k$-fold composition of $\sigma$, and also that the number $.\overline{\varepsilon_0\varepsilon_1\cdots\varepsilon_{k-1}}$ is the unique fixed point of the mapping $\psi_{e_k}$.
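For purely periodic vectors the maps $\pi$ and $\sigma$ are easy to realize exactly: $\pi(e)$ is the base-$\mu$ value of the repeating block divided by $\mu^k - 1$, and $\sigma$ cyclically shifts the block. The snippet below checks identity (4.40) on an example (a sketch under these conventions).

```python
from fractions import Fraction as F

def pi_periodic(block, mu):
    # pi(e) for the periodic vector repeating `block`: the base-mu integer
    # value of the block divided by mu**len(block) - 1
    val = 0
    for d in block:
        val = val * mu + d
    return F(val, mu**len(block) - 1)

def sigma(block):
    # the shift sigma acts on a periodic vector as a cyclic shift of its block
    return block[1:] + block[:1]

def psi(eps, t, mu):
    return (F(t) + eps) / mu

mu, block = 2, (0, 1, 1)
print(pi_periodic(block, mu))  # 3/7, i.e. the binary number .011011011...
# identity (4.40): psi_{eps_0}(pi(sigma(e))) = pi(e)
print(psi(block[0], pi_periodic(sigma(block), mu), mu) == pi_periodic(block, mu))
```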


Proposition 4.21 Choose $k \in \mathbb{N}$ and $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$ such that at least two components of $e_k$ are different. Let $e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu$ with $\varepsilon_{i+k} = \varepsilon_i$, $i \in \mathbb{N}_0$. Then the set $U_0(\pi(e)) = \{\pi \circ \sigma^\ell(e) : \ell \in \mathbb{Z}_k\}$ is refinable relative to the mappings $\Psi$ and has cardinality $\le k$. Moreover, if $k$ is the smallest positive integer such that $\varepsilon_{i+k} = \varepsilon_i$, $i \in \mathbb{N}_0$, then $U_0(\pi(e))$ has cardinality $k$.

Proof If $.\overline{\varepsilon_0\varepsilon_1\cdots\varepsilon_{k-1}} = .\overline{\varepsilon'_0\varepsilon'_1\cdots\varepsilon'_{k-1}}$ for $\varepsilon_i, \varepsilon'_i \in \mathbb{Z}_\mu$, $i \in \mathbb{Z}_k$, then $\varepsilon_i = \varepsilon'_i$, $i \in \mathbb{Z}_k$. Hence it follows that all the elements of $U_0(\pi(e))$ are distinct. Also, by using (4.40), for any $\ell \in \mathbb{Z}_k$ we have that $\pi \circ \sigma^\ell(e) = \psi_{\varepsilon_\ell}(\pi \circ \sigma^{\ell+1}(e))$. Note that trivially $\pi \circ \sigma^{\ell+1}(e) \in U_0(\pi(e))$ for $\ell \in \mathbb{Z}_{k-1}$ and $\pi \circ \sigma^k(e) = \pi(e) \in U_0(\pi(e))$. Thus $U_0(\pi(e))$ is indeed refinable.

Various useful examples can be generated from this proposition. We mention the following possibilities for $\mu = 2$:

$$U_0(.\overline{01}) = \left\{\frac{1}{3}, \frac{2}{3}\right\}, \quad U_0(.\overline{001}) = \left\{\frac{1}{7}, \frac{2}{7}, \frac{4}{7}\right\}, \quad U_0(.\overline{0011}) = \left\{\frac{1}{5}, \frac{2}{5}, \frac{3}{5}, \frac{4}{5}\right\}.$$
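These three sets can be reproduced by observing that, for a purely periodic $e$, $\pi(\sigma(e)) = \mu\pi(e) \bmod 1$, so $U_0(\pi(e))$ is the orbit of $\pi(e)$ under multiplication by $\mu$ modulo 1 (for instance, $.\overline{0011} = 3/15 = 1/5$). A small exact computation:

```python
from fractions import Fraction as F

def orbit(u, mu=2):
    # for a purely periodic e, pi(sigma(e)) = mu*pi(e) mod 1, so U0(pi(e))
    # is the orbit of u = pi(e) under t -> mu*t (mod 1)
    seen, t = set(), F(u)
    while t not in seen:
        seen.add(t)
        t = (mu * t) % 1
    return seen

print(orbit(F(1, 3)) == {F(1, 3), F(2, 3)})                    # True
print(orbit(F(1, 7)) == {F(1, 7), F(2, 7), F(4, 7)})           # True
print(orbit(F(1, 5)) == {F(1, 5), F(2, 5), F(3, 5), F(4, 5)})  # True
```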

We now present a characterization of refinable sets relative to a given family $\Phi$ of contractive mappings on any metric space. To state this result, for every $k \in \mathbb{N} = \{1, 2, \ldots\}$ and $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$ we define the contractive mapping $\phi_{e_k} = \phi_{\varepsilon_0} \circ \phi_{\varepsilon_1} \circ \cdots \circ \phi_{\varepsilon_{k-1}}$ and let $\Phi_k = \{\phi_{e_k} : e_k \in \mathbb{Z}^k_\mu\}$; in particular, $\Phi_1 = \Phi$. We let $x_{e_k}$ be the unique fixed point of the mapping $\phi_{e_k}$, that is,

$$\phi_{e_k}(x_{e_k}) = x_{e_k},$$

and set

$$F_k = \{x_{e_k} : e_k \in \mathbb{Z}^k_\mu\}.$$

We also define $\mathbb{Z}^\infty_\mu$ to be the set of infinite vectors $e = [\varepsilon_j : j \in \mathbb{N}_0]$, $\varepsilon_i \in \mathbb{Z}_\mu$, $i \in \mathbb{N}_0$. With every such vector $e \in \mathbb{Z}^\infty_\mu$ and $k \in \mathbb{N}$ we associate $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$. It was shown in [148] that the limit of $x_{e_k}$ as $k \to \infty$ exists, and we denote this element of the metric space $X$ by $x_e$. In other words, we have that

$$\lim_{k \to \infty} x_{e_k} = x_e.$$

Moreover, we let $e_{r\ell} = [\varepsilon_j : j \in \mathbb{Z}_\ell \setminus \mathbb{Z}_r]$ for $r \in \mathbb{Z}_{\ell+1}$ and use $x_{e_{r\ell}}$ to denote the fixed point of the composition mapping $\phi_{e_{r\ell}}$, where $\phi_{e_{r\ell}} = \phi_{\varepsilon_r} \circ \phi_{\varepsilon_{r+1}} \circ \cdots \circ \phi_{\varepsilon_{\ell-1}}$ when $r \in \mathbb{Z}_\ell$ and $\phi_{e_{r\ell}}$ is the identity mapping when $r = \ell$.

Theorem 4.22 Let $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ be a family of contractive mappings on a complete metric space $X$ and let $V_0 \subseteq X$ be a nonempty set of cardinality $k \in \mathbb{N}$. Then $V_0$ is refinable relative to $\Phi$ if and only if $V_0$ has the following property: For every $v \in V_0$ there exist integers $\ell, m \in \mathbb{Z}_{k+1}$ with $\ell < m$ and $\varepsilon_i \in \mathbb{Z}_\mu$, $i \in \mathbb{Z}_m$, such that $v = \phi_{e_{0\ell}}(x_{e_{\ell m}})$ and the points

$$v_r = \phi_{e_{r\ell}}(x_{e_{\ell m}}) \in V_0, \quad r \in \mathbb{Z}_\ell,$$
$$v_{\ell+r} = \phi_{e_{\ell+r,m}}(x_{e_{\ell m}}) \in V_0, \quad r \in \mathbb{Z}_{m-\ell}. \tag{4.41}$$

Moreover, in this case we have that $V_i = \Phi_i(V_0) \subseteq \Omega$, $i \in \mathbb{N}$, and also

$$V_0 \subseteq \Phi_\ell(F_{m-\ell}). \tag{4.42}$$

Proof Assume that $V_0$ is refinable and $v \in V_0$. Let $v_0 = v$. By the refinability of $V_0$ there exist points $v_{j+1} \in V_0$ and $\varepsilon_j \in \mathbb{Z}_\mu$ for $j \in \mathbb{Z}_k$ such that $v_j = \phi_{\varepsilon_j}(v_{j+1})$, $j \in \mathbb{Z}_k$. Therefore we have that $v_r = \phi_{e_{rs}}(v_s)$, $r \in \mathbb{Z}_s$, $s \in \mathbb{Z}_{k+1}$. Since the cardinality of $V_0$ is $k$, there exist two integers $\ell, m \in \mathbb{Z}_{k+1}$ with $\ell < m$ for which $v_\ell = v_m$. Hence, in particular, we conclude that $v_\ell = v_m = x_{e_{\ell m}}$. It follows that

$$v_r = \phi_{e_{r\ell}}(v_\ell) = \phi_{e_{r\ell}}(x_{e_{\ell m}}), \quad r \in \mathbb{Z}_\ell,$$

and

$$v_{\ell+r} = \phi_{e_{\ell+r,m}}(v_m) = \phi_{e_{\ell+r,m}}(x_{e_{\ell m}}), \quad r \in \mathbb{Z}_{m-\ell}.$$

These remarks establish the necessity and also the fact that $v_0 \in \Phi_\ell(F_{m-\ell}) \subseteq \Omega$.

Conversely, let $V_0$ be a set of points with the property and let $v$ be a typical element of $V_0$. Then we have that either $v = \phi_{e_{0\ell}}(x_{e_{\ell m}})$ with $\ell > 0$, or $v = x_{e_{0m}}$ with $\ell = 0$. In the first case, since $v = \phi_{\varepsilon_0}(\phi_{e_{1\ell}}(x_{e_{\ell m}}))$ and $\phi_{e_{1\ell}}(x_{e_{\ell m}}) \in V_0$, we have that $v \in \Phi(V_0)$. In the second case, since $x_{e_{0m}}$ is the unique fixed point of the mapping $\phi_{e_{0m}}$, we write $v = \phi_{\varepsilon_0}(\phi_{e_{1m}}(x_{e_{0m}}))$. By our hypothesis $\phi_{e_{1m}}(x_{e_{0m}}) \in V_0$, and thus in this case we also have that $v \in \Phi(V_0)$. Therefore in either case $v \in \Phi(V_0)$, and so $V_0$ is refinable. These comments complete the proof of the theorem.

We next derive two consequences of this observation. To present the first, we go back to the definition of the point $x_e$ in the metric space $X$, where $e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu$, and observe that when the vector $e$ is $s$-periodic, that is, its coordinates have the property that $s$ is the smallest positive integer such that $\varepsilon_i = \varepsilon_{i+s}$, $i \in \mathbb{N}_0$, we have $x_e = x_{e_s}$, where $e_s = [\varepsilon_j : j \in \mathbb{Z}_s]$. Conversely, given any $e_s \in \mathbb{Z}^s_\mu$, we can extend it as an $s$-periodic vector $e \in \mathbb{Z}^\infty_\mu$ and conclude that $x_e = x_{e_s}$.

Let us observe that the powers of the shift operator $\sigma$ act on $s$-periodic vectors in $\mathbb{Z}^\infty_\mu$ as cyclic permutations of vectors in $\mathbb{Z}^s_\mu$. Also, the $s$-periodic orbits of $\sigma$, that is, vectors $e \in \mathbb{Z}^\infty_\mu$ such that $\sigma^s(e) = e$, are exactly the $s$-periodic vectors in $\mathbb{Z}^\infty_\mu$. With this viewpoint in mind we can draw the following conclusion from Theorem 4.22.

Theorem 4.23 A finite set $V_0$ in a metric space $X$ is refinable relative to the mappings $\Phi$ if and only if for every $v \in V_0$ there exists an $e \in \mathbb{Z}^\infty_\mu$ such that $v = x_e$ and $x_{\sigma^k(e)} \in V_0$ for all $k \in \mathbb{N}$.

Proof For convenience we define the notation $\pi^*(.\varepsilon_0\varepsilon_1\varepsilon_2\cdots) = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu$ for $\varepsilon_i \in \mathbb{Z}_\mu$. The proof requires a formula from [148] (p. 727), which in our notation takes the form

$$\phi_\varepsilon(x_e) = x_{\pi^*(\psi_\varepsilon(\pi(e)))}, \quad \varepsilon \in \mathbb{Z}_\mu, \ e \in \mathbb{Z}^\infty_\mu,$$

where $\psi_\varepsilon$ are the concrete mappings defined by (4.39). Using this formula, the number $\pi(e)$ associated with the vector $e$ in Theorem 4.22 is identified as

$$\pi(e) = .\varepsilon_0\varepsilon_1\cdots\varepsilon_{\ell-1}\overline{\varepsilon_\ell\cdots\varepsilon_{m-1}}.$$

An immediate corollary of this result characterizes refinable sets on $\mathbb{R}$ relative to the mappings $\Psi$ defined by (4.39).

Theorem 4.24 Let $U_0$ be a subset of $\mathbb{R}$ having cardinality $k$. Then $U_0$ is refinable relative to the mappings (4.39) if and only if for every point $u \in U_0$ there exist integers $\ell, m \in \mathbb{Z}_{k+1}$ with $\ell < m$ and $\varepsilon_i \in \mathbb{Z}_\mu$, $i \in \mathbb{Z}_m$, such that $u = .\varepsilon_0\cdots\varepsilon_{\ell-1}\overline{\varepsilon_\ell\cdots\varepsilon_{m-1}}$, and for any cyclic permutation $\eta_\ell, \ldots, \eta_{m-1}$ of $\varepsilon_\ell, \ldots, \varepsilon_{m-1}$ and $r \in \mathbb{Z}_\ell$, the point $.\varepsilon_r\cdots\varepsilon_{\ell-1}\overline{\eta_\ell\cdots\eta_{m-1}}$ is in $U_0$.

It is the vectors $e \in \mathbb{Z}^\infty_\mu$ which are pre-orbits of $\sigma$, that is, for some $k \in \mathbb{N}_0$ the vector $\sigma^k(e)$ is periodic, which characterize refinable sets. Thus there is an obvious way to build, from refinable sets $U_0$ relative to the mappings (4.39) on $\mathbb{R}$, refinable sets relative to any finite family of contractive mappings on a metric space. For example, let $U_0$ be a finite subset of cardinality $k$ in the interval $[0,1]$. We require for each number $u$ in this set that there is an $e \in \mathbb{Z}^\infty_\mu$ such that $u = \pi(e)$ and, for every $j \in \mathbb{N}_0$, $\pi(\sigma^j(e)) \in U_0$. In other words, $U_0$ is refinable relative to the mappings (4.39). We define a set $V_0$ in $X$ associated with $U_0$ by the formula

$$V_0 = \{x_e : \pi(e) \in U_0\},$$

where $x_e \in X$ is the limit of $x_{e_k}$. This set is a refinable subset of $X$ relative to the contractive mappings $\Phi$. We may use this association to construct examples of practical importance in the finite element method and boundary integral equation method.


Example Let $\Omega \subset \mathbb{R}^2$ be the triangle with vertices at $y_0 = (0,0)$, $y_1 = (1,0)$ and $y_2 = (0,1)$. Set $y_3 = (1,1)$ and consider four contractive affine mappings

$$\phi_\varepsilon(x) = \frac{1}{2}\left(y_\varepsilon + (-1)^{\tau(\varepsilon)} x\right), \quad \varepsilon \in \mathbb{Z}_4, \ x \in \mathbb{R}^2, \tag{4.43}$$

where $\tau(\varepsilon) = 0$, $\varepsilon \in \mathbb{Z}_3$, and $\tau(3) = 1$. The invariant subset of $\mathbb{R}^2$ relative to these mappings is the triangle $\Omega$, and the following sets are refinable with respect to these mappings:

$$\left\{\left(\frac{1}{3}, \frac{1}{3}\right)\right\}, \quad \left\{\left(\frac{1}{7}, \frac{4}{7}\right), \left(\frac{2}{7}, \frac{1}{7}\right), \left(\frac{4}{7}, \frac{2}{7}\right)\right\},$$
$$\left\{\left(\frac{1}{15}, \frac{2}{15}\right), \left(\frac{2}{15}, \frac{4}{15}\right), \left(\frac{4}{15}, \frac{8}{15}\right), \left(\frac{8}{15}, \frac{1}{15}\right)\right\}.$$

Also, we record for any $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$ and $x \in \mathbb{R}^2$ that

$$\phi_{e_k}(x) = \frac{1}{2^k}\left[(-1)^{\tau_k} x + \sum_{j \in \mathbb{Z}_k} (-1)^{\tau_j}\, 2^{k-j-1} y_{\varepsilon_j}\right],$$

where $\tau_j = \sum_{\ell=0}^{j-1} \tau(\varepsilon_\ell)$, $j \in \mathbb{Z}_{k+1}$. From this equation it follows that

$$x_{e_k} = \frac{1}{2^k + (-1)^{\tau_k+1}} \sum_{j \in \mathbb{Z}_k} (-1)^{\tau_j}\, 2^{k-j-1} y_{\varepsilon_j}.$$

These formulas can be used to generate the above sets.
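The fixed-point formula above is easy to implement exactly with rational arithmetic, and it confirms that the listed sets are refinable in the sense of Definition 4.18. The code below is an illustrative sketch: the helper names are ours, and the formula is transcribed from the display above.

```python
from fractions import Fraction as F

Y = [(F(0), F(0)), (F(1), F(0)), (F(0), F(1)), (F(1), F(1))]
TAU = [0, 0, 0, 1]

def phi(eps, x):
    # phi_eps(x) = (y_eps + (-1)**tau(eps) * x) / 2, as in (4.43)
    s = (-1) ** TAU[eps]
    return tuple((y + s * xi) / 2 for y, xi in zip(Y[eps], x))

def fixed_point(ek):
    # x_{e_k} = (sum_j (-1)**tau_j 2**(k-j-1) y_{eps_j}) / (2**k + (-1)**(tau_k+1))
    k, tau = len(ek), 0
    num = [F(0), F(0)]
    for j, eps in enumerate(ek):
        c = (-1) ** tau * 2 ** (k - j - 1)
        num = [n + c * y for n, y in zip(num, Y[eps])]
        tau += TAU[eps]
    den = 2 ** k + (-1) ** (tau + 1)
    return tuple(n / den for n in num)

def is_refinable(V):
    # Definition 4.18 for a finite set in the plane
    return all(any(phi(e, w) == v for e in range(4) for w in V) for v in V)

print(fixed_point([3]))        # (1/3, 1/3), the fixed point of phi_3
print(fixed_point([2, 0, 1]))  # (1/7, 4/7)
V7 = {(F(1, 7), F(4, 7)), (F(2, 7), F(1, 7)), (F(4, 7), F(2, 7))}
print(is_refinable({fixed_point([3])}), is_refinable(V7))  # True True
```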

4.4.2 Set wavelets

In this subsection we generate a sequence $\mathcal{W} = \{W_n : n \in \mathbb{N}_0\}$ of finite sets of a metric space $X$ which have a wavelet-like multiresolution structure. We call an element of $\mathcal{W}$ a set wavelet and demonstrate in subsequent sections that set wavelets are crucial for the construction of interpolating wavelets on certain compact subsets of $\mathbb{R}^d$.

The generation of set wavelets begins with an initial finite subset of distinct points $V_0 = \{v_j : j \in \mathbb{Z}_m\}$ in $X$. We use this subset and the finite set $\Phi$ of contractive mappings to define a sequence of subsets of $X$ given by

$$V_i = \Phi(V_{i-1}), \quad i \in \mathbb{N}. \tag{4.44}$$

Assume that a compact set $\Omega$ in $X$ is the unique invariant set relative to the mappings $\Phi$. When $V_0 \subseteq \Omega$, it follows for each $i \in \mathbb{N}$ that $V_i \subseteq \Omega$. Furthermore, using the set $\Phi_k = \{\phi_{e_k} : e_k \in \mathbb{Z}^k_\mu\}$ of contractive mappings introduced in the last subsection, for every subset $A$ of $X$ we define the set

$$\Phi_k(A) = \bigcup_{e_k \in \mathbb{Z}^k_\mu} \phi_{e_k}(A),$$

and so in particular $\Phi_1(A) = \Phi(A)$. Therefore equation (4.44) implies that $V_i = \Phi_i(V_0)$, $i \in \mathbb{N}$.

The next lemma is useful to us

Lemma 4.25 Let $\Phi$ be a finite family of contractive mappings on $X$. Assume that $\Omega \subseteq X$ is the invariant set relative to the mappings $\Phi$. If $V_0$ is a nonempty finite subset of $X$, then

$$\Omega \subseteq \overline{\bigcup_{i \in \mathbb{N}_0} V_i},$$

where $V_i$ is generated by the mappings $\Phi$ by (4.44).

Proof Let $x \in \Omega$ and $\delta > 0$. Since $\Omega$ is a compact set in $X$, we choose an integer $n > 0$ such that $\gamma^n \operatorname{diam}(\Omega \cup V_0) < \delta$, where $\gamma$ is the contraction parameter appearing in equation (4.8). According to the defining property (4.9) of the set $\Omega$, there exists an $e_n \in \mathbb{Z}^n_\mu$ such that $x \in \phi_{e_n}(\Omega) \subseteq \phi_{e_n}(\Omega \cup V_0)$. Since $V_0$ is a nonempty subset of $X$, there exists a $y \in \phi_{e_n}(V_0) \subseteq \phi_{e_n}(\Omega \cup V_0)$. Moreover, by the contractivity (4.8) of the family $\Phi$, we have that

$$d(x, y) \le \operatorname{diam} \phi_{e_n}(\Omega \cup V_0) \le \gamma^n \operatorname{diam}(\Omega \cup V_0) < \delta.$$

This inequality proves the result.

Proposition 4.26 Let $V_0$ be a nonempty refinable set of $X$ relative to a finite family $\Phi$ of contractive mappings and let $\{V_i : i \in \mathbb{N}_0\}$ be the collection of sets generated by definition (4.44). Then

$$\Omega = \overline{\bigcup_{i \in \mathbb{N}_0} V_i}.$$

Proof This result follows directly from Lemma 4.25 and Theorem 4.22.

Let us recall the construction of the invariant set $\Omega$ given a family $\Phi$ of contractive mappings (see [148]). The invariant set is given by either one of the formulas

$$\Omega = \{x_e : e \in \mathbb{Z}^\infty_\mu\}$$

or

$$\Omega = \overline{\bigcup_{k \in \mathbb{N}} F_k}.$$

The above proposition provides another way to construct the unique invariant set relative to a finite family $\Phi$ of contractive mappings. In other words, we start with a refinable set $V_0$ and then form $V_i$, $i \in \mathbb{N}$, recursively by (4.44).

We say that a sequence of sets $\{A_i : i \in \mathbb{N}_0\}$ is nested (resp. strictly nested) provided that $A_{i-1} \subseteq A_i$, $i \in \mathbb{N}$ (resp. $A_{i-1} \subset A_i$, $i \in \mathbb{N}$). The next lemma shows the importance of the notion of the refinable set.

Lemma 4.27 Let $\Omega$ be the invariant set in $X$ relative to a finite family $\Phi$ of contractive mappings. Suppose that $\Omega$ is not a finite set and $V_0$ is a nonempty finite subset of $X$. Then the collection of sets $\{V_i : i \in \mathbb{N}_0\}$ defined by (4.44) is strictly nested if and only if the set $V_0$ is refinable relative to $\Phi$.

Proof Suppose that $V_0$ is refinable relative to $\Phi$. Then it follows by induction on $i \in \mathbb{N}$ that $V_{i-1} \subseteq V_i$.

It remains to prove that this inclusion is strict for all $i \in \mathbb{N}$. Assume to the contrary that for some $i \in \mathbb{N}$ we have that $V_{i-1} = V_i$. By the definition of $V_i$ we conclude that $V_{i-1} = V_j$ for all $j \ge i$, and thus we have that

$$\bigcup_{j \in \mathbb{N}_0} V_j = V_{i-1}.$$

This conclusion contradicts Proposition 4.26 and the fact that $\Omega$ does not have finite cardinality.
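Lemma 4.27 can be watched in action for the dyadic maps on $[0,1]$: starting from the refinable set $\{1/3, 2/3\}$ the sets $V_i$ grow strictly, while starting from a non-refinable singleton such as $\{3/10\}$ even nestedness fails. This is an illustrative sketch; the choice of test sets is ours.

```python
from fractions import Fraction as F

def Phi(V, mu=2):
    # Phi(V) = union of psi_eps(V), with psi_eps(t) = (t + eps)/mu
    return {(F(t) + e) / mu for e in range(mu) for t in V}

def generations(V0, n):
    Vs = [set(V0)]
    for _ in range(n):
        Vs.append(Phi(Vs[-1]))   # V_i = Phi(V_{i-1}) as in (4.44)
    return Vs

refinable = generations({F(1, 3), F(2, 3)}, 4)   # a refinable V0
arbitrary = generations({F(3, 10)}, 4)           # not refinable

print(all(a < b for a, b in zip(refinable, refinable[1:])))   # True: strictly nested
print(any(a <= b for a, b in zip(arbitrary, arbitrary[1:])))  # False: not even nested
```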

When the sequence of sets $\{V_i : i \in \mathbb{N}_0\}$ is strictly nested, we let $W_i = V_i \setminus V_{i-1}$, $i \in \mathbb{N}$, that is, $V_i = V_{i-1} \cup^\perp W_i$, $i \in \mathbb{N}$, where we use the notation $A \cup^\perp B$ to denote $A \cup B$ when $A \cap B = \emptyset$. By Lemma 4.27, if the set $V_0$ is refinable relative to $\Phi$, we have that $W_i \ne \emptyset$, $i \in \mathbb{N}$. Similarly, we use the notation

$$\Phi^\perp(A) = \bigcup_{\varepsilon \in \mathbb{Z}_\mu}{}^{\perp}\, \phi_\varepsilon(A)$$

when $\phi_\varepsilon(A) \cap \phi_{\varepsilon'}(A) = \emptyset$, $\varepsilon, \varepsilon' \in \mathbb{Z}_\mu$, $\varepsilon \ne \varepsilon'$. The sets $W_i$, $i \in \mathbb{N}_0$, give us the decomposition

$$V_n = \bigcup_{i \in \mathbb{Z}_{n+1}}{}^{\perp}\, W_i, \tag{4.45}$$

where $W_0 = V_0$.

The next theorem shows that when the set $W_1$ is specified, the sets $W_i$, $i \in \mathbb{N}$, can be constructed recursively and the set $\Omega$ has a decomposition in terms of these sets. This result provides a multiresolution decomposition for the invariant set $\Omega$. For this reason we call the sets $W_i$, $i \in \mathbb{N}$, set wavelets, the set $W_1$ the initial set wavelet and the decomposition of $\Omega$ in terms of $W_i$, $i \in \mathbb{N}$, the set wavelet decomposition of $\Omega$.

Theorem 4.28 Let $\Omega$ be the invariant set in $X$ relative to a finite family $\Phi$ of contractive mappings. Suppose that each of the contractive mappings $\phi_\varepsilon$, $\varepsilon \in \mathbb{Z}_\mu$, in $\Phi$ has a continuous inverse on $X$ and that they have the property that

$$\phi_\varepsilon(\operatorname{int}\Omega) \cap \phi_{\varepsilon'}(\operatorname{int}\Omega) = \emptyset, \quad \varepsilon, \varepsilon' \in \mathbb{Z}_\mu, \ \varepsilon \ne \varepsilon'. \tag{4.46}$$

Let $V_0$ be refinable with respect to $\Phi$ and $W_1 \subset \operatorname{int}\Omega$. Then

$$W_{i+1} = \Phi^\perp(W_i), \quad i \in \mathbb{N}, \tag{4.47}$$

and the compact set $\Omega$ has the set wavelet decomposition

$$\Omega = \overline{\bigcup_{n \in \mathbb{N}_0}{}^{\perp}\, W_n}, \tag{4.48}$$

where $W_0 = V_0$.

Proof Our hypotheses on the contractive mappings $\phi_\varepsilon$, $\varepsilon \in \mathbb{Z}_\mu$, guarantee that they are topological mappings. Hence for any subsets $A$ and $B$ of $X$ we have for any $\varepsilon \in \mathbb{Z}_\mu$ that

$$\operatorname{int}\phi_\varepsilon(A) = \phi_\varepsilon(\operatorname{int} A) \tag{4.49}$$

and

$$\phi_\varepsilon(A) \cap \phi_\varepsilon(B) = \phi_\varepsilon(A \cap B). \tag{4.50}$$

Let us first establish that when $W_1 \subset \operatorname{int}\Omega$, the sets $W_i$, $i \in \mathbb{N}$, defined by the recursion (4.47) are all in $\operatorname{int}\Omega$. We prove this fact by induction on $i \in \mathbb{N}$. To this end, we suppose that $W_i \subseteq \operatorname{int}\Omega$, $i \in \mathbb{N}$. Then the invariance property (4.9) of $\Omega$ and (4.49) imply that

$$\Phi(W_i) \subseteq \Phi(\operatorname{int}\Omega) = \operatorname{int}\Phi(\Omega) = \operatorname{int}\Omega.$$

Therefore we have advanced the induction hypothesis and proved that $W_{i+1} \subseteq \operatorname{int}\Omega$ for all $i \in \mathbb{N}$. Using the fact that $W_{i+1} \subseteq \operatorname{int}\Omega$, we conclude from our hypothesis (4.46), for any $i \in \mathbb{N}$, $\varepsilon, \varepsilon' \in \mathbb{Z}_\mu$, $\varepsilon \ne \varepsilon'$, that $\phi_\varepsilon(W_i) \cap \phi_{\varepsilon'}(W_i) = \emptyset$, which justifies the "$\perp$" in formula (4.47). It follows from (4.44), (4.47) and $W_1 = V_1 \setminus V_0$ that $W_{i+1} \subseteq V_{i+1}$, $i \in \mathbb{N}_0$.

Next we wish to confirm that

$$V_i \setminus V_{i-1} = W_i, \quad i \in \mathbb{N}. \tag{4.51}$$


Again we rely upon induction on $i$ and assume that

$$V_i \setminus V_{i-1} = W_i. \tag{4.52}$$

Therefore we obtain that

$$V_i \cup W_{i+1} = \Phi(V_{i-1}) \cup \Phi(W_i) = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \left(\phi_\varepsilon(V_{i-1}) \cup \phi_\varepsilon(W_i)\right) = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \phi_\varepsilon(V_i) = V_{i+1},$$

which implies that $V_{i+1} \setminus V_i \subseteq W_{i+1}$. To confirm that equality holds, we observe that

$$V_i \cap W_{i+1} = \Phi(V_{i-1}) \cap \Phi(W_i) = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \bigcup_{\varepsilon' \in \mathbb{Z}_\mu} \left(\phi_\varepsilon(V_{i-1})\right) \cap \left(\phi_{\varepsilon'}(W_i)\right). \tag{4.53}$$

For $\varepsilon \ne \varepsilon'$ we can use (4.49) and hypothesis (4.46) to conclude that $\phi_\varepsilon(\Omega) \cap \phi_{\varepsilon'}(\operatorname{int}\Omega) = \emptyset$. To see this, we assume to the contrary that there exists $x \in \phi_\varepsilon(\Omega) \cap \phi_{\varepsilon'}(\operatorname{int}\Omega)$. Then there exist $y \in \Omega$ and $y' \in \operatorname{int}\Omega$ such that $x = \phi_\varepsilon(y) = \phi_{\varepsilon'}(y')$. Condition (4.46) ensures that $y \notin \operatorname{int}\Omega$. Hence, by equation (4.49), it follows from the first equality that $x \notin \operatorname{int}\Omega$ and from the second equality that $x \in \operatorname{int}\Omega$, a contradiction. Consequently, we have that

$$\left(\phi_\varepsilon(V_{i-1})\right) \cap \left(\phi_{\varepsilon'}(W_i)\right) = \emptyset. \tag{4.54}$$

When $\varepsilon = \varepsilon'$, we use (4.50) to obtain that (4.54) still holds. Hence equation (4.53) implies that $V_i \cap W_{i+1} = \emptyset$. This establishes (4.51), advances the induction hypothesis and proves the result.
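Theorem 4.28 can be illustrated with the dyadic maps on $\Omega = [0,1]$: $V_0 = \{0,1\}$ is refinable, $W_1 = \{1/2\}$ lies in $\operatorname{int}\Omega = (0,1)$, and the recursion (4.47) reproduces the successive differences $V_i \setminus V_{i-1}$. This is an illustrative sketch with our own choice of initial sets.

```python
from fractions import Fraction as F

def Phi(A, mu=2):
    # for A inside (0,1) the images psi_0(A) and psi_1(A) are disjoint,
    # so this union realizes Phi^perp(A)
    return {(F(t) + e) / mu for e in range(mu) for t in A}

V = [{F(0), F(1)}]        # V0 = {0, 1} is refinable: 0 = psi_0(0), 1 = psi_1(1)
for _ in range(4):
    V.append(Phi(V[-1]))

W = [V[0], V[1] - V[0]]   # W0 = V0 and W1 = V1 \ V0 = {1/2}, inside int[0,1]
for _ in range(3):
    W.append(Phi(W[-1]))  # recursion (4.47): W_{i+1} = Phi^perp(W_i)

# (4.51): the recursion reproduces the differences V_i \ V_{i-1}
print(all(W[i] == V[i] - V[i - 1] for i in range(1, 5)))  # True
print(sorted(W[2]))  # [Fraction(1, 4), Fraction(3, 4)]
```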

We end this section by considering the following converse question to the one we have considered so far: given a finite set in a metric space, is it refinable relative to some finite set of contractive mappings? The motivation for this question comes from practical considerations. As is often the case in certain numerical problems associated with interpolation and approximation, we begin on an interval of the real line with prescribed points, for example Gaussian points or the zeros of Chebyshev polynomials. We then want to find mappings that make these prescribed points refinable relative to them. We shall only address this question in the generality of the space Rd relative to the ∞-norm.

It is easy to see that, given any subset V0 = {vi : i ∈ Zk} of Rd, there is a family of contractive mappings on Rd such that V0 is refinable relative to them. For example, the mappings φi(x) = (x + vi)/2, i ∈ Zk, x ∈ Rd, will do, since clearly the fixed point of the mapping φi is vi for i ∈ Zk. However, almost surely the associated invariant set will have an empty interior, and therefore Theorem 4.28 will not apply. For instance, in the example of a triangle mentioned above, the general prescription described, applied to the vertices of the triangle, will yield the Sierpinski gasket. This invariant set is a Cantor set and is formed by successively applying the maps (4.43) to the triangles, throwing away the middle triangle, which is the image of the fourth map used in the example. To overcome this, we must add to the above family of mappings another set of contractive mappings "which fill the holes". To describe this process, we review some facts about parallelepipeds.
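The following sketch (ours, not from the book; the points are an arbitrary example) checks refinability for the maps φi(x) = (x + vi)/2 on the real line and illustrates why the resulting invariant set is too sparse:

```python
# Sketch (ours): refinability of V0 = {v_i} relative to the contractions
# phi_i(x) = (x + v_i)/2 on the real line; the points v_i are illustrative.
V0 = [0.1, 0.4, 0.9]
phis = [lambda x, v=v: 0.5 * (x + v) for v in V0]

# each phi_i is a contraction with factor 1/2 and fixed point v_i ...
for v, phi in zip(V0, phis):
    assert abs(phi(v) - v) < 1e-12

# ... hence V0 is contained in Phi(V0), i.e. V0 is refinable
V1 = {phi(v) for phi in phis for v in V0}
assert all(any(abs(v - w) < 1e-12 for w in V1) for v in V0)

# iterating Phi approximates the invariant set: a Cantor-type set of points,
# whose closure typically has empty interior
S = set(V0)
for _ in range(8):
    S = {phi(x) for phi in phis for x in S}
assert len(S) > 100
```

The iterates spread through [0, 1] but remain a sparse, Cantor-like collection; no open interval is filled.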

A finite set I = {ti : i ∈ Zn+1} with t0 < t1 < ··· < tn−1 < tn is called a partition of the interval I = [t0, tn] and divides it into subintervals Ii = [ti, ti+1], i ∈ Zn, where the points in I ∩ (t0, tn) appear as endpoints of two adjacent subintervals. For every finite subset U0 of (0, 1) there exists a partition I such that the points of U0 lie in the interiors of the corresponding subintervals. The lengths of these subintervals can be chosen as small as desired.

Likewise, for any two vectors x = [xi : i ∈ Zd], y = [yi : i ∈ Zd] in Rd with xi < yi, i ∈ Zd, which we denote by x ≺ y (also x ⪯ y when xi ≤ yi, i ∈ Zd), we can partition the set ⊗i∈Zd [xi, yi] – called a parallelepiped and denoted by 〈x, y〉 – into (sub)parallelepipeds formed from the partition

Id = ⊗i∈Zd Ii = {[ti : i ∈ Zd] : ti ∈ Ii, i ∈ Zd},     (4.55)

where each Ii is a partition of the interval [xi, yi], i ∈ Zd. If {Ii,j : j ∈ Zni} is the set of subintervals associated with the partition Ii, then a typical parallelepiped associated with the partition Id corresponds to a lattice point i = [ij : j ∈ Zd], where ij ∈ Znj, j ∈ Zd, defined by

Ii = ⊗j∈Zd Ij,ij.     (4.56)

Given any finite set V0 ⊂ Rd contained in the interior of a parallelepiped P, we can partition P as above so that the interiors of the subparallelepipeds contain the vectors of V0. We can, if required, choose the volumes of these subparallelepipeds to be as small as desired.

The set of all parallelepipeds is closed under translation, as the simple rule

〈x, y〉 + z = {w + z : w ∈ 〈x, y〉} = 〈x + z, y + z〉,

valid for any x, y, z ∈ Rd with x ≺ y, shows. For any x, y ∈ Rd we associate an affine mapping A on Rd defined by the formula

At = Xt + y, t ∈ Rd,     (4.57)



where X = diag(x0, x1, ..., xd−1). Such an affine map takes a parallelepiped one to one and onto a parallelepiped (as long as the vector x has no zero components). Conversely, given any two parallelepipeds P and P′, there exists an affine mapping of the form (4.57) which takes P one to one and onto P′. Moreover, if there exists a z ∈ Rd such that P′ + z ⊂ int P, then A is a contraction relative to the ∞-norm on Rd given by

‖[xi : i ∈ Zd]‖∞ = max{|xi| : i ∈ Zd}.

For any two parallelepipeds P = 〈x, y〉 and P′ = 〈x′, y′〉 with P′ ⊆ P, we can partition their set-theoretic difference into parallelepipeds in the following way. For each i ∈ Zd, we partition the interval [xi, yi] into three subintervals by using the partition Ii = {xi, x′i, y′i, yi}. The associated partition Id decomposes P into subparallelepipeds such that one and only one of them corresponds to P′ itself. In other words, if Pi, i ∈ ZN, with N = 3^d, are the subparallelepipeds which partition P and PN−1 = P′, then we have

P \ P′ = ⋃i∈ZN−1 Pi.
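This decomposition is easy to carry out in coordinates; the following sketch (ours, with an arbitrary pair of boxes in dimension d = 2) produces the N = 3^d subparallelepipeds and checks that exactly one of them is P′:

```python
# Sketch (ours, d = 2): partition P \ P' into 3^d - 1 boxes by splitting each
# coordinate interval [x_i, y_i] at x'_i and y'_i; the boxes are illustrative.
from itertools import product

x, y = (0.0, 0.0), (4.0, 4.0)        # outer parallelepiped P = <x, y>
xp, yp = (1.0, 2.0), (2.5, 3.0)      # inner parallelepiped P' contained in P

# per-coordinate partitions I_i = {x_i, x'_i, y'_i, y_i}: three subintervals each
subintervals = [[(x[i], xp[i]), (xp[i], yp[i]), (yp[i], y[i])] for i in range(2)]

boxes = list(product(*subintervals))  # the N = 3^d subparallelepipeds of P
assert len(boxes) == 3 ** 2

# exactly one of them is P' itself; the other 3^d - 1 boxes partition P \ P'
inner = tuple((xp[i], yp[i]) for i in range(2))
assert boxes.count(inner) == 1

# sanity check: the 3^d boxes tile P (their areas sum to the area of P)
area = sum((b[0][1] - b[0][0]) * (b[1][1] - b[1][0]) for b in boxes)
assert abs(area - 16.0) < 1e-12
```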

We can now state the theorem.

Theorem 4.29 Let m be a positive integer and V0 a finite subset of cardinality m in the metric space (Rd, ‖·‖∞). There exists a finite set Φ of contractive mappings of the form (4.57) such that V0 is refinable relative to Φ and the invariant set for Φ is a parallelepiped.

Proof First we put the set V0 into the interior of a parallelepiped P, which we partition as described above into subparallelepipeds so that the vectors in V0 lie in the union of the interiors of these subparallelepipeds. Specifically, we suppose that

V0 = {vi : i ∈ Zm}, P = ⋃i∈ZM Pi,

with m < M, vi ∈ int Pi, i ∈ Zm, and V0 ∩ int Pi = ∅, i ∈ ZM \ Zm.

For each i ∈ Zm we choose a vector zi = [zi,j : j ∈ Zd] ∈ (0, 1)^d with sufficiently small components zi,j so that the affine mapping

Ait = Qi(t − vi) + vi, t ∈ Rd,     (4.58)

where Qi = diag(zi,0, zi,1, ..., zi,d−1), has the property that the parallelepiped Qi = AiP is contained in Pi. Since Aivi = vi, i ∈ Zm, the set V0 is refinable relative to any set of mappings including those in (4.58). We wish to append to these m mappings another collection of one to one and onto contractive affine mappings of the type (4.57) so that the extended family has P as the invariant set.

To this end, for each i ∈ Zm we partition the difference set Pi \ Qi into parallelepipeds in the manner described above:

Pi \ Qi = ⋃j∈ZN−1 Pi,j,

where N = 3^d. Thus we have succeeded in decomposing P into subparallelepipeds so that exactly m of them are the subparallelepipeds Qi, i ∈ Zm. In other words, we have

P = ⋃i∈Zk Wi,

where m < k and Wi = Qi, i ∈ Zm. Finally, for every i ∈ Zk \ Zm, we choose a one to one and onto contractive affine mapping Ai such that AiP = Wi. This implies that

P = ⋃i∈Zk AiP,

and therefore P is the invariant set relative to the one to one and onto contractive mappings Ai, i ∈ Zk.

In the remainder of this section we look at the above result for the real line and try to economize on the number of affine mappings needed to make a given set V0 refinable.

Theorem 4.30 Let k be a positive integer and V0 = {vl : l ∈ Zk} a subset of distinct points in [0, 1] of cardinality k. Then there exists a family of one-to-one and onto contractive affine mappings {φε : ε ∈ Zμ} of the type (4.57), for some 2 ≤ μ ≤ 4 when k = 1, 2 and 3 ≤ μ ≤ 2k − 1 when k ≥ 3, such that V0 is refinable relative to these mappings.

Proof Since the mappings φ0(t) = t/2 and φ1(t) = (t + 1)/2 have the fixed points t = 0 and t = 1, respectively, we conclude that when k = 1 and V0 consists of either {0} or {1}, and when k = 2 and V0 = {0, 1}, these two mappings are the desired mappings. When V0 consists of one interior point v0, we need at least three mappings. For example, we choose

φ0(t) = (v0/2) t, φ1(t) = (1/2)(t − v0) + v0

and

φ2(t) = ((1 − v0)/2) t + (v0 + 1)/2, for t ∈ [0, 1].



When V0 consists of two interior points of (0, 1), we need four mappings, constructed by following the spirit of the construction for the case k ≥ 3, which is given below.

When k ≥ 3, regardless of the location of the points, there exist 2k − 1 mappings that do the job. We next construct these mappings specifically. Without loss of generality we assume that v0 < v1 < ··· < vk−1. We first choose a parameter γ1 such that

0 < γ1 < min{ (v1 − v0)/v1, (v2 − v1)/(1 − v1) }

and consider the mapping φ1(t) = γ1(t − v1) + v1, t ∈ [0, 1]. Therefore, if we let α1 = φ1(0) and β1 = φ1(1), then v0 < α1 < β1 < v2. Next we let γ0 = (α1 − v0)/(1 − v0) and introduce the mapping φ0(t) = γ0(t − v0) + v0, t ∈ [0, 1]. Clearly, by letting α0 = φ0(0) and β0 = φ0(1), we have 0 ≤ α0 < β0 = α1.

The remaining steps in the construction proceed inductively on k. For this purpose we assume that the affine mapping φj−2 has been constructed. We let βj−2 = φj−2(1) and define

φj−1(t) = γj−1(t − vj−1) + vj−1, t ∈ [0, 1], j = 3, 4, ..., k − 1,

where the parameters γj−1 are chosen to satisfy the conditions

0 < γj−1 < min{ (vj−1 − βj−2)/vj−1, (vj − vj−1)/(1 − vj−1) }, j = 3, 4, ..., k − 1.

It is not difficult to verify that φj−1([0, 1]) ⊂ (βj−2, vj) or, equivalently, βj−2 < αj−1 < βj−1 < vj, by letting αj−1 = φj−1(0) and βj−1 = φj−1(1). Next we let φk−1(t) = γk−1(t − vk−1) + vk−1, t ∈ [0, 1], where 0 < γk−1 = (vk−1 − βk−2)/vk−1, and let αk−1 = φk−1(0) and βk−1 = φk−1(1). Then we have that βk−2 = αk−1 < βk−1 ≤ 1. By the construction above we find two sets {αi : i ∈ Zk} and {βi : i ∈ Zk} of numbers that satisfy the condition

0 ≤ α0 < β0 = α1 < β1 < ··· < βk−2 = αk−1 < βk−1 ≤ 1,

and the union of the images of the interval [0, 1] under the mappings φj, j ∈ Zk, is

U = ⋃j∈Zk [αj, βj].

Notice that the set U is not the whole interval [0, 1]: there are at most k − 1 gaps which need to be covered. It is straightforward to construct these k − 1 additional mappings.
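The full construction, including the gap-filling maps, can be sketched as follows (ours; the points vj are an arbitrary example with k = 4, and the factor 0.5 is one admissible choice of the parameters γj strictly below their upper bounds):

```python
# Sketch (ours) of the construction in the proof of Theorem 4.30 for k >= 3:
# build maps phi_j fixing v_j, then cover the remaining gaps of [0, 1] \ U
# with additional affine maps. The points v_j are illustrative.
v = [0.2, 0.5, 0.7, 0.9]
k = len(v)

gamma = [0.0] * k
gamma[1] = 0.5 * min((v[1] - v[0]) / v[1], (v[2] - v[1]) / (1 - v[1]))
phi = {1: lambda t, g=gamma[1]: g * (t - v[1]) + v[1]}
alpha = {1: phi[1](0.0)}
beta = {1: phi[1](1.0)}

gamma[0] = (alpha[1] - v[0]) / (1 - v[0])
phi[0] = lambda t, g=gamma[0]: g * (t - v[0]) + v[0]
alpha[0], beta[0] = phi[0](0.0), phi[0](1.0)

for j in range(3, k):                       # defines phi_2, ..., phi_{k-2}
    g = 0.5 * min((v[j-1] - beta[j-2]) / v[j-1],
                  (v[j] - v[j-1]) / (1 - v[j-1]))
    gamma[j-1] = g
    phi[j-1] = lambda t, g=g, w=v[j-1]: g * (t - w) + w
    alpha[j-1], beta[j-1] = phi[j-1](0.0), phi[j-1](1.0)

gamma[k-1] = (v[k-1] - beta[k-2]) / v[k-1]
phi[k-1] = lambda t, g=gamma[k-1], w=v[k-1]: g * (t - w) + w
alpha[k-1], beta[k-1] = phi[k-1](0.0), phi[k-1](1.0)

# each phi_j fixes v_j, so V0 is refinable relative to the extended family
assert all(abs(phi[j](v[j]) - v[j]) < 1e-12 for j in range(k))
# the chain 0 <= a0 < b0 = a1 < b1 < ... < b_{k-2} = a_{k-1} < b_{k-1} <= 1
assert abs(beta[0] - alpha[1]) < 1e-12 and abs(beta[k-2] - alpha[k-1]) < 1e-12

# at most k - 1 gaps remain; map [0, 1] affinely onto each one to cover it
gaps = [(0.0, alpha[0])] \
     + [(beta[j], alpha[j + 1]) for j in range(1, k - 2)] \
     + [(beta[k-1], 1.0)]
gaps = [(a, b) for a, b in gaps if b - a > 1e-12]
fillers = [lambda t, a=a, b=b: (b - a) * t + a for (a, b) in gaps]
assert len(phi) + len(fillers) <= 2 * k - 1
total = sum(b - a for a, b in gaps) + sum(beta[j] - alpha[j] for j in range(k))
assert abs(total - 1.0) < 1e-10    # images of the full family tile [0, 1]
```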



The family of mappings of cardinality at most 2k − 1 that we have constructed above has [0, 1] as the invariant set, and V0 is a refinable set relative to them.

When the points in a given set V0 have special structure, the number of mappings may be reduced.

4.5 Multiscale interpolating bases

In this section we present the construction of multiscale bases for both Lagrange interpolation and Hermite interpolation, based on the refinable sets and set wavelets developed in the previous section.

4.5.1 Multiscale Lagrange interpolation

In this subsection we describe a construction of the Lagrange interpolating wavelet-like basis using the set wavelets constructed previously. For this purpose we let X = Rd and assume that Φ = {φε : ε ∈ Zμ} is a family of contractive mappings that satisfies the hypotheses of Theorem 4.49. We also assume that Ω ⊂ X is the invariant set relative to Φ, with meas(Ω \ int Ω) = 0. Let k be a positive integer and assume that

V0 = {vl : l ∈ Zk} ⊂ int Ω

is refinable relative to Φ. Note that in this construction of discontinuous wavelets we restrict the choice of the points in the set V0 to interior points of Ω.

As in [200, 201], we choose a refinable curve f = [fl : l ∈ Zk] : Ω → Rk which satisfies a refinement equation

f ∘ φi = Ai f, i ∈ Zμ,     (4.59)

for some prescribed k × k matrices Ai, i ∈ Zμ. We remark that if there is g : Ω → Rk and a k × k nonsingular matrix B such that f = Bg, then g is also a refinable curve. We let

F0 = span{fl : l ∈ Zk}

and suppose that dim F0 = k. Furthermore, we require that for any b = [bl : l ∈ Zk] ∈ Rk there exists a unique element f ∈ F0 such that f(vi) = bi, i ∈ Zk. In other words, there exist k elements in F0, which we also denote by f0, f1, ..., fk−1, such that fi(vj) = δij, i, j ∈ Zk. Refinable sets that admit a unique Lagrange interpolating polynomial were constructed in [198]. When this condition holds, we say that {fi : i ∈ Zk} interpolates on the set V0 and that fj interpolates at vj, j ∈ Zk. Under this condition, any element f ∈ F0 has a representation of the form

f = Σi∈Zk f(vi) fi.

A set V0 ⊆ X is called (Lagrange) admissible relative to (Φ, F0) if it is refinable relative to Φ and there is a basis of functions fi, i ∈ Zk, for F0 which interpolate on the set V0. In this subsection we shall always assume that V0 is (Lagrange) admissible. We record in the next proposition the simple fact of the Lagrange admissibility of any set of cardinality k for the special case when Φ = Ψ defined by (4.39), Ω = [0, 1] and F0 = Pk−1, the space of polynomials of degree ≤ k − 1.

Proposition 4.31 If V0 ⊂ [0, 1] is refinable relative to Ψ and has cardinality k, then V0 is Lagrange admissible relative to (Ψ, Pk−1).

Proof It is a well-known fact that the polynomial basis functions satisfy the refinement equation (4.59) with φi = ψi for some matrices Ai. Hence this result follows immediately from the unique solvability of the univariate Lagrange interpolation problem.
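To make this concrete, the following sketch (ours, not from the book) takes the dyadic maps ψ0(t) = t/2, ψ1(t) = (t + 1)/2 as a stand-in for the family (4.39), which is not reproduced in this excerpt, and computes the refinement matrices Ai for the monomial curve:

```python
# Sketch (ours): the monomial curve f(t) = [1, t, ..., t^{k-1}] satisfies the
# refinement equation f o phi_i = A_i f for the assumed dyadic maps
# phi_0(t) = t/2, phi_1(t) = (t + 1)/2.
from math import comb

k = 4
phi = [lambda t: t / 2, lambda t: (t + 1) / 2]

# f(t/2): t^n -> (t/2)^n, so A_0 is diagonal with entries 2^{-n};
# f((t+1)/2): ((t+1)/2)^n = 2^{-n} sum_m C(n, m) t^m (binomial theorem)
A0 = [[(1 / 2 ** n if m == n else 0.0) for m in range(k)] for n in range(k)]
A1 = [[comb(n, m) / 2 ** n for m in range(k)] for n in range(k)]

f = lambda t: [t ** n for n in range(k)]
matvec = lambda A, x: [sum(a * b for a, b in zip(row, x)) for row in A]

for t in [0.0, 0.3, 0.7, 1.0]:
    for i, A in enumerate((A0, A1)):
        lhs, rhs = f(phi[i](t)), matvec(A, f(t))
        assert all(abs(p - q) < 1e-12 for p, q in zip(lhs, rhs))
```

Any nonsingular change of basis B yields another refinable curve g = B^{-1} f, for instance the Lagrange basis at k nodes.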

In a manner similar to the construction of orthogonal wavelets in Section 4.3, we define linear operators Tε : L∞(Ω) → L∞(Ω), ε ∈ Zμ, by

(Tεx)(t) = x(φε^{−1}(t)) for t ∈ φε(Ω), (Tεx)(t) = 0 for t ∉ φε(Ω),

and set

Fi+1 = ⊕ε∈Zμ TεFi, i ∈ N0.

This sequence of spaces is nested, that is, Fi ⊆ Fi+1, i ∈ N0, and dim Fi = kμ^i, i ∈ N0.
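As an illustration (ours; the dyadic family below is an assumed concrete choice, not the general Φ of the text), the operators Tε simply transplant a function onto the subdomain φε(Ω) and set it to zero elsewhere:

```python
# Sketch (ours): the operators T_eps on Omega = [0, 1] for the assumed dyadic
# maps phi_0(t) = t/2, phi_1(t) = (t + 1)/2.
phi = [lambda t: t / 2, lambda t: (t + 1) / 2]
phi_inv = [lambda t: 2 * t, lambda t: 2 * t - 1]
ranges = [(0.0, 0.5), (0.5, 1.0)]           # phi_eps([0, 1])

def T(eps, x):
    """(T_eps x)(t) = x(phi_eps^{-1}(t)) on phi_eps([0, 1]), else 0."""
    lo, hi = ranges[eps]
    return lambda t: x(phi_inv[eps](t)) if lo <= t <= hi else 0.0

x = lambda t: t * t                          # any function on [0, 1]
y = T(1, x)
assert y(0.25) == 0.0                        # outside phi_1([0, 1])
assert abs(y(0.75) - x(0.5)) < 1e-12         # y(t) = x(2t - 1) on [1/2, 1]
```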

We next construct a convenient basis for each of the spaces Fi. For this purpose we let F0 = {fj : j ∈ Zk}, where fj, j ∈ Zk, interpolate the set V0, and

Fi = ⋃⊥ε∈Zμ TεFi−1 = {Tε0 ··· Tεi−1 fj : j ∈ Zk, [εl : l ∈ Zi] ∈ Zμ^i}, i ∈ N.     (4.60)

Since the functions fj, j ∈ Zk, interpolate the set V0, we conclude that the elements in Fi interpolate the set Vi. In other words, the functions in the set Fi satisfy the condition

(Tε0 ··· Tεi−1 fj)(φε′0 ∘ ··· ∘ φε′i−1(vj′)) = δ(ε0,...,εi−1,j),(ε′0,...,ε′i−1,j′),     (4.61)

where we use the notation δa,a′ = 1 if a = a′ and δa,a′ = 0 if a ≠ a′, for a, a′ ∈ N0^i, i ∈ N.

For ease of notation we let ei = [εj : j ∈ Zi] and Tei fj = Tε0 ··· Tεi−1 fj. By equation (4.61), this function interpolates at φei(vj). Moreover, the relation

Fi = span Fi, i ∈ N0,     (4.62)

holds. Now, for each n ∈ N0, we decompose the space Fn+1 as the direct sum of the space Fn and its complement space Gn+1, which consists of the elements in Fn+1 vanishing at all points in Vn; that is,

Fn+1 = Fn ⊕ Gn+1, n ∈ N0.     (4.63)

This decomposition is analogous to the orthogonal decomposition in the construction of the orthogonal wavelet-like basis in Section 4.3 and can be viewed as an interpolatory decomposition in the sense that we describe below.

We first label the points in the set Vn according to the set wavelet decomposition for Vn given in Section 4.4. We assume that the initial set wavelet is given by W1 = {wj : j ∈ Zr} with r = k(μ − 1), and we let

t0j = vj, j ∈ Zk; t1j = wj, j ∈ Zr;

tij = φe(wl), j = μ(e)r + l, e ∈ Zμ^{i−1}, l ∈ Zr, i = 2, 3, ..., n.

Then we conclude that Vn = {tij : (i, j) ∈ Un}, where Un = {(i, j) : i ∈ Zn+1, j ∈ Zw(i)}, with

w(i) = k for i = 0 and w(i) = k(μ − 1)μ^{i−1} for i ≥ 1.

The Lagrange interpolation problem for Fn relative to Vn is to find, for a vector b = [bij : (i, j) ∈ Un], an element f ∈ Fn such that

f(tij) = bij, (i, j) ∈ Un.     (4.64)

The following fact is useful in this regard.

Lemma 4.32 If V0 is Lagrange admissible relative to (Φ, F0), then for each n ∈ N0 the set Vn is also Lagrange admissible relative to (Φ, Fn).

Proof This result follows immediately from equation (4.61).



Lemma 4.32 insures that each f ∈ Fn+1 has the representation f = Pn f + gn, where Pn f is the Lagrange interpolant to f from Fn relative to Vn and gn = f − Pn f is the error of the interpolation. Therefore we have that

Gn+1 = {gn : gn = f − Pn f, f ∈ Fn+1}.     (4.65)

The fact that the subspace decomposition (4.63) is a direct sum also follows from equation (4.65) and the unique solvability of the Lagrange interpolation problem (4.64). For this reason the spaces Gn are called the interpolating wavelet spaces and, in particular, the space G1 is called the initial interpolating wavelet space. Direct computation yields the dimension of the wavelet space Gn: dim Gn = kμ^{n−1}(μ − 1). Also, we have an interpolating wavelet decomposition for Fn+1:

Fn+1 = F0 ⊕ G1 ⊕ ··· ⊕ Gn+1.     (4.66)

In the next theorem we describe a recursive construction for the wavelet spaces Gn. To establish the theorem we need the following lemma regarding the distributivity of the linear operators Tε, ε ∈ Zμ, relative to a direct sum of two subspaces of L∞(Ω).

Lemma 4.33 Let B, C ⊂ L∞(Ω) be two subspaces. If B ⊕ C is a direct sum, then for each ε ∈ Zμ, Tε(B ⊕ C) = (TεB) ⊕ (TεC).

Proof It is clear that Tε(B ⊕ C) = (TεB) + (TεC). Therefore it remains to verify that the sum on the right-hand side is a direct sum. To this end we let x ∈ (TεB) ∩ (TεC) and observe that there exist f ∈ B and g ∈ C such that

x = Tε f = Tεg.     (4.67)

By the definition of the operators Tε, we have that x(t) = 0 for t ∉ φε(Ω). Now, for each t ∈ φε(Ω), there exists τ ∈ Ω such that t = φε(τ), and thus, using equation (4.67), we observe that

x(t) = f(φε^{−1}(t)) = f(τ) with f ∈ B, x(t) = g(φε^{−1}(t)) = g(τ) with g ∈ C.

Since B ⊕ C is a direct sum, we conclude that x(t) = 0 for t ∈ φε(Ω). It follows that x = 0.

We also need the following fact for the proof of our main theorem.

Lemma 4.34 If Y ⊆ L∞(Ω), then there holds

TεY ∩ Tε′Y = {0}, ε, ε′ ∈ Zμ, ε ≠ ε′.

Proof Let x ∈ TεY ∩ Tε′Y. There exist y1, y2 ∈ Y such that x = Tεy1 = Tε′y2. By the definition of the operators Tε, we conclude from the first equality that x(t) = 0 for t ∉ φε(Ω) and from the second that x(t) = 0 for t ∉ φε′(Ω). Since ε ≠ ε′, we have that meas(φε(Ω) ∩ φε′(Ω)) = 0. This implies that x = 0 a.e. in Ω and therefore establishes the result of this lemma.

We are now ready to prove the main result of this section.

Theorem 4.35 Let V0 be Lagrange admissible relative to (Φ, F0) and Wn, n ∈ N, be the set wavelets generated from V0. Then

G1 = span{Tε fj : ε ∈ Zμ, j ∈ Zk, Tε fj interpolates at φε(vj) ∈ W1},

Gn+1 = ⊕ε∈Zμ TεGn, n ∈ N,     (4.68)

and Gn = span Gn, where

Gn = {Ten fj : en ∈ Zμ^n, j ∈ Zk, Ten fj interpolates at φen(vj) ∈ Wn}.

Proof Let Tε fj interpolate at φε(vj) ∈ W1. Then Tε fj has the property

(Tε fj)(φε′(vj′)) = 0, ε, ε′ ∈ Zμ, ε′ ≠ ε, or j, j′ ∈ Zk, j′ ≠ j.

By the definition of the set wavelet W1 = V1 \ V0, we conclude for all vj′ ∈ V0 that (Tε fj)(vj′) = 0. Thus, by the definition of G1, corresponding to each point φε(vj) ∈ W1 the basis function Tε fj is in G1. Note that the cardinality of W1 is given by the formula card W1 = card V1 − card V0 = k(μ − 1). It follows that the number of basis functions Tε fj which interpolate at some φε(vj) ∈ W1 is r, the dimension of G1. Because these r functions are linearly independent, they constitute a basis for G1.

We next prove equation (4.68) by induction on n. For this purpose we assume that equation (4.68) holds for n ≤ m and consider the case when n = m + 1. By the definition of Fm+1 and Gm, we have that

Fm+1 = ⊕ε∈Zμ TεFm = ⊕ε∈Zμ Tε(Fm−1 ⊕ Gm).

Using Lemma 4.33 we obtain that

Fm+1 = ⊕ε∈Zμ [(TεFm−1) ⊕ (TεGm)] = (⊕ε∈Zμ TεFm−1) ⊕ (⊕ε∈Zμ TεGm).

It then follows from the definition of Fm that

Fm+1 = Fm ⊕ (⊕ε∈Zμ TεGm).



Let

G = ⊕ε∈Zμ TεGm

and assume that f ∈ G. Then there exist g0, ..., gμ−1 ∈ Gm such that

f = Σε∈Zμ Tεgε.

For each v ∈ Vm there exist v′ ∈ Vm−1 and ε′ ∈ Zμ such that v = φε′(v′). By the definition of the linear operators Tε, ε ∈ Zμ, and the fact that gε ∈ Gm, ε ∈ Zμ, direct computation leads, for each v ∈ Vm, to the condition

f(v) = Σε∈Zμ Tεgε(φε′(v′)) = gε′(φε′^{−1} ∘ φε′(v′)) = gε′(v′) = 0.

Hence G ⊆ Gm+1. Moreover, it is easy to see that dim G = dim Gm+1, which implies that G = Gm+1.

To prove the second part of the theorem it suffices to establish the recursion

Gn+1 = ⋃⊥ε∈Zμ TεGn.

The “⊥” on the right-hand side of this equation is justified by Lemma 4.34. To establish its validity we let

G = ⋃⊥ε∈Zμ TεGn.

Hence the set G consists of the elements TεnTen fj, where Ten fj interpolates at φen(vj) ∈ Wn and εn ∈ Zμ. By Theorem 4.28 we have that

{φεn ∘ φen(vj) : εn ∈ Zμ, φen(vj) ∈ Wn} ⊆ Wn+1.

Hence G ⊆ Gn+1. Since card G = card Gn+1 = card Wn+1, we conclude that G = Gn+1.

Theorem 4.36 It holds that

L2(Ω) = cl ( ⋃n∈N0 Fn ) = F0 ⊕ ( ⊕n∈N Gn ),

where the closure is taken in L2(Ω).

Proof Since the mappings φε, ε ∈ Zμ, are contractive, the condition of Theorem 4.7 of [201] is satisfied. The finite-dimensional spaces Fn appearing here are the same as those generated by the family of mutually orthogonal isometries in [201] if we begin with the same initial space F1. Therefore the first equality holds. An examination of the proof of Theorem 4.7 of [201] shows that the same proof establishes the second equality.

As a result of the decomposition obtained in Theorems 4.35 and 4.36, we present a multiscale algorithm for the Lagrange interpolation. To this end we let {gj : j ∈ Zr} be a basis for G1 so that

gj(t0i) = 0, i ∈ Zk, j ∈ Zr; gj(t1j′) = δjj′, j, j′ ∈ Zr.

We label these functions according to the points in Vn in the following way. Let

g0j = fj, j ∈ Zk; g1j = gj, j ∈ Zr;

gij = Te gl, j = μ(e)r + l, e ∈ Zμ^{i−1}, l ∈ Zr, i = 2, 3, ..., n.

With this labeling we see that gij(ti′j′) = δii′δjj′, (i, j), (i′, j′) ∈ Un, i ≥ i′, and

Fn = span{gij : (i, j) ∈ Un}.

Functions gij, (i, j) ∈ U, are also called interpolating wavelets. Now we express the interpolation projection in terms of this basis. For each x ∈ C(Ω), the interpolation projection Pnx of x is given by

Pnx = Σ(i,j)∈Un xij gij.     (4.69)

The coefficients xij in (4.69) can be obtained from the recursive formula

x0j = x(t0j), j ∈ Zk,

xij = x(tij) − Σ(i′,j′)∈Ui−1 xi′j′ gi′j′(tij), (i, j) ∈ Un.

This recursive formula allows us to interpolate a given function efficiently by functions in Fn. When we increase the level from n to n + 1, we do not need to recompute the coefficients xij for 0 ≤ i ≤ n. We describe this important point with the formula Pn+1x = Pnx + Qn+1x, where Qn+1x ∈ Gn+1 and

Qn+1x = Σj∈Zw(n+1) xn+1,j gn+1,j.

The coefficients xn+1,j are computed by the previous recursive formula using the coefficients obtained for the previous levels; that is,

xn+1,j = x(tn+1,j) − Σ(i′,j′)∈Un xi′j′ gi′j′(tn+1,j).



4.5.2 Multiscale Hermite interpolation

In the last section we showed how refinable sets lead to a multiresolution structure and result in what we call set wavelets. In the last subsection we then used this recursive structure of the points to construct the Lagrange interpolation that has a much desired multiscale structure. In this subsection we describe a similar construction for Hermite piecewise polynomial interpolation on invariant sets.

Let X be the Euclidean space Rd and Φ = {φε : ε ∈ Zμ} be a family of contractive mappings on X, with Ω the invariant set relative to Φ. Let V0 be a nonempty finite subset of distinct points in X and recursively define

Vi = Φ(Vi−1), i ∈ N.

It was shown in the last section that the collection of sets {Vi : i ∈ N0} is strictly nested if and only if the set V0 is refinable relative to Φ. Denote

Wi = Vi \ Vi−1, i ∈ N.

When the contractive mappings have continuous inverses on X,

φε(int Ω) ∩ φε′(int Ω) = ∅, ε, ε′ ∈ Zμ, ε ≠ ε′,

and W1 is chosen to be a subset of int Ω, then the sets Wi+1, i ∈ N, can be generated recursively from W1 by the formula

Wi+1 = Φ(Wi) = ⋃⊥ε∈Zμ φε(Wi), i ∈ N,

and the invariant set Ω has the decomposition

Ω = V0 ∪⊥ ( ⋃⊥n∈N Wn ).

In the following we first describe a construction of multiscale discontinuous Hermite interpolation and then a construction of multiscale smooth Hermite interpolation on the interval [0, 1].

1. Multiscale discontinuous Hermite interpolation
We start with nonempty finite sets V ⊂ int Ω and U ⊂ N0^d. For u = [ui : i ∈ Zd] ∈ U we set

Du = ∂^{|u|} / (∂t0^{u0} ··· ∂td−1^{ud−1}),

where |u| = Σi∈Zd ui.



Let P be a linear space of functions on Ω. We say that (P, U, V) is Hermite admissible provided that V is refinable and for any given real numbers cr, r = (u, v) ∈ U × V, there exists a unique element p ∈ P such that

Dr(p) = (Du p)(v) = cr.     (4.70)

When this is the case, the dimension of P is (card V)(card U), and there exists a basis {pr : r ∈ U × V} for P such that for every r = (u, v) ∈ U × V and r′ = (u′, v′) ∈ U × V,

(Du′ pr)(v′) = δrr′.     (4.71)

Moreover, for any function p ∈ P, the representation

p = Σr∈U×V Dr(p) pr

holds. We call {pr : r ∈ U × V} a Hermite basis for P relative to U × V. To proceed further we must restrict the family Φ to have the form

φε(t) = aε t + bε, t ∈ Ω,     (4.72)

where aε t is the vector formed by the componentwise product of the vectors aε and t. We also require the linear operators Tε : L∞(Ω) → L∞(Ω), ε ∈ Zμ, defined by

Tε f = (f ∘ φε^{−1}) χφε(Ω),

and for every en = [εj : j ∈ Zn] ∈ Zμ^n we define constants

aen^{−u} = aε0^{−u} aε1^{−u} ··· aεn−1^{−u}.

Lemma 4.37 If Φ is a family of contractive mappings of the form (4.72), then for all en ∈ Zμ^n and u ∈ N0^d the following formula holds:

Du Ten = aen^{−u} Ten Du, n ∈ N.

Proof We prove this lemma by induction on n; the proof begins by first verifying the case n = 1 by the chain rule. The induction hypothesis is then advanced by again using the chain rule and the case n = 1.
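A quick numerical check of the lemma (ours, for d = 1, n = 1, u = 1, with an arbitrary affine map) illustrates the commutation relation:

```python
# Sketch (ours): checking (d/dt) T_eps = a_eps^{-1} T_eps (d/dt) in d = 1
# for one affine map phi(t) = a t + b; the values a, b, f are illustrative.
a, b = 0.5, 0.25                      # contractive for |a| < 1
f = lambda t: t ** 3
df = lambda t: 3 * t ** 2             # f'

Tf = lambda t: f((t - b) / a)         # (T f)(t) = f(phi^{-1}(t)) on phi(Omega)
Tdf = lambda t: df((t - b) / a)

h = 1e-6
for t in [0.3, 0.5, 0.7]:             # points inside phi([0, 1]) = [0.25, 0.75]
    numeric = (Tf(t + h) - Tf(t - h)) / (2 * h)      # (T f)'(t)
    assert abs(numeric - (1 / a) * Tdf(t)) < 1e-6    # equals a^{-1} (T f')(t)
```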

We suppose that (P, U, V0) is admissible, that P is a subspace of polynomials with Hermite basis

F0 = {pr : r ∈ U × V0}

relative to U × V0, and we use the operators Tε, ε ∈ Zμ, to recursively define the sets

Fn = {aen^u Ten pr : r = (u, v) ∈ U × V0, en ∈ Zμ^n}.



Since the polynomials pr, r ∈ U × V0, were chosen to be a Hermite basis for P relative to U × V0, the functions in Fn form a Hermite basis for Fn = span Fn relative to U × Vn. That is, the function aen^u Ten pr satisfies the condition

Du′(aen^u Ten pr)(φe′n(v′)) = δ(r,en),(r′,e′n),     (4.73)

where r′ = (u′, v′) ∈ U × V0.

Since Fn ⊆ Fn+1, we can decompose Fn+1 as the direct sum of the space Fn and Gn+1, defined to be the set of elements in Fn+1 whose uth derivatives, for u ∈ U, vanish at all points in Vn. We let Pn f ∈ Fn be uniquely defined by the conditions

(Du Pn f)(v) = Du f(v), v ∈ Vn, u ∈ U.     (4.74)

Hence each f ∈ Fn+1 has the representation f = Pn f + gn, where gn ∈ Gn+1 with Gn+1 = {f − Pn f : f ∈ Fn+1}, and we have the decomposition

Fn = F0 ⊕ G1 ⊕ ··· ⊕ Gn.

Most importantly, the spaces Gn can be generated recursively; the proof of this fact follows the pattern of those given for Theorems 4.35 and 4.36. To state the next result we make use of the following notation. For each n ∈ N0 and a subset A ⊆ Vn, we let Zμ^n(A) denote the subset of Zμ^n consisting of the indices en ∈ Zμ^n such that there exists r = (u, v) ∈ U × V0 for which equation (4.73) holds and φen(v) ∈ A.

Theorem 4.38 If P is a subspace of polynomials, (P, U, V0) is admissible and Wn, n ∈ N, are the set wavelets generated by V0, then

G1 = span{aε^u Tε pr : r = (u, v) ∈ U × V0, ε ∈ Zμ(W1)}

and

Gn+1 = ⊕ε∈Zμ TεGn, n ∈ N.

Moreover, we have that Gn = span Gn, where, for n ∈ N,

Gn = {aen^u Ten pr : r = (u, v) ∈ U × V0, en ∈ Zμ^n(Wn)},

and the formula

L2(Ω) = cl ( ⋃n∈N0 Fn ) = F0 ⊕ ( ⊕n∈N Gn )

holds.



2. Multiscale smooth Hermite interpolation on [0, 1]
In the following we focus on a construction of smooth multiscale Hermite interpolating polynomials on the interval [0, 1] which generate finite-dimensional spaces dense in the Sobolev space Wm,p[0, 1], where m is a positive integer and 1 ≤ p < ∞. To this end we choose affine mappings Φ = {φε : ε ∈ Zμ},

φε(t) = (tε+1 − tε)t + tε, ε ∈ Zμ,

where 0 = t0 < t1 < ··· < tμ−1 < tμ = 1 and μ > 1. The invariant set of Φ is [0, 1], and this family of mappings has all the properties delineated earlier. We let V0 be a refinable set containing the endpoints of [0, 1], that is, V0 = {v0, v1, ..., vk−1}, where 0 = v0 < v1 < ··· < vk−2 < vk−1 = 1. Since the endpoints are the fixed points of the first and last mappings, respectively, W1 = V1 \ V0 ⊂ (0, 1).

We let Fn be the space of piecewise polynomials of degree ≤ km − 1 in Wm,p[0, 1] with knots at φen({0, 1}), en ∈ Zμ^n. In particular, F0 is the space of polynomials of degree ≤ km − 1 on [0, 1], and

dim Fn = μ^n(k − 1)m + m, Fn ⊆ Fn+1, n ∈ N0.

This sequence of spaces is dense in Wm,p[0, 1] for 1 ≤ p < ∞.

We construct multiscale bases for these spaces Fn using the solution of the Hermite interpolation problem

p^{(i)}(φen(v)) = f^{(i)}(φen(v)), en ∈ Zμ^n, v ∈ V0, i ∈ Zm,     (4.75)

which has a unique solution p ∈ Fn for any f ∈ Wm,p[0, 1]. Hence, in this special case, the refinability of V0 insures that (Fn, Zm, Vn) is admissible.

Let Gn+1 be the space of all functions g in Fn+1 such that g^{(i)}, i ∈ Zm, vanish at all points in Vn. A basis for the space Gn can be constructed recursively, starting with interpolating bases for F0 and F1. To this end, for r = (i, j) ∈ Zm × Zk, we let pr ∈ F0 satisfy the conditions

pr^{(i′)}(vj′) = δrr′, r′ = (i′, j′) ∈ Zm × Zk.

Then the set of functions

F0 = {pr : r ∈ Zm × Zk}

constitutes a Hermite basis for the space F0 relative to Zm × Zk. To construct a basis for F1, we recall the linear operators Tε, which in this case have the special forms

(Tε f)(t) = f( (t − tε) / (tε+1 − tε) ) χ[tε,tε+1](t), t ∈ [0, 1].
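For the smallest smooth case m = 2, k = 2, V0 = {0, 1}, the Hermite basis of F0 described above is the classical cubic Hermite basis; the following sketch (ours) verifies the interpolation conditions numerically:

```python
# Sketch (ours): for m = 2, k = 2 the space F_0 of cubics has the classical
# Hermite basis p_{(i,j)} with p_{(i,j)}^{(i')}(v_{j'}) = delta_{(i,j),(i',j')}.
p = {
    (0, 0): lambda t: 2 * t**3 - 3 * t**2 + 1,   # value 1 at t = 0
    (0, 1): lambda t: -2 * t**3 + 3 * t**2,      # value 1 at t = 1
    (1, 0): lambda t: t**3 - 2 * t**2 + t,       # slope 1 at t = 0
    (1, 1): lambda t: t**3 - t**2,               # slope 1 at t = 1
}

def deriv(f, t, order, h=1e-5):
    """Central-difference derivative of the given order (order 0 or 1 here)."""
    if order == 0:
        return f(t)
    return (deriv(f, t + h, order - 1, h) - deriv(f, t - h, order - 1, h)) / (2 * h)

for (i, j), pr in p.items():
    for ip in range(2):
        for jp, v in enumerate([0.0, 1.0]):
            want = 1.0 if (i, j) == (ip, jp) else 0.0
            assert abs(deriv(pr, v, ip) - want) < 1e-4
```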



We remark that, in general, the range of the operators Tε is not contained in the Sobolev space Wm,p[0, 1]. However, we do have the following fact, whose statement uses, for 1 ≤ p < ∞, the spaces

Wm,p0−[0, 1] = {f ∈ Wm,p[0, 1] : f^{(i)}(0) = 0, i ∈ Zm+1},

Wm,p0+[0, 1] = {f ∈ Wm,p[0, 1] : f^{(i)}(1) = 0, i ∈ Zm+1}

and

Wm,p0[0, 1] = Wm,p0−[0, 1] ∩ Wm,p0+[0, 1].

Lemma 4.39  The following inclusions hold:
$$T_{\mu-1}\big(W^{m,p}_{0-}[0,1]\big) \subseteq W^{m,p}_{0-}[0,1], \qquad T_0\big(W^{m,p}_{0+}[0,1]\big) \subseteq W^{m,p}_{0+}[0,1]$$
and
$$T_\varepsilon\big(W^{m,p}_{0}[0,1]\big) \subseteq W^{m,p}_{0}[0,1], \qquad \varepsilon \in \mathbb{Z}_\mu.$$

Next we show how to use the functions $p_r$ in $\mathcal{F}_0$ and the operators $T_\varepsilon$ to construct a basis of $\mathbb{F}_1$. The operators $T_\varepsilon$ may introduce discontinuities when applied to a function in $\mathbb{F}_0$, thereby leading to an unacceptable basis for $\mathbb{F}_1$. Lemma 4.39 reveals exactly what happens when we apply $T_\varepsilon$ to $p_r$. Using the operators $T_\varepsilon$, $\varepsilon \in \mathbb{Z}_\mu$, we define, for $r = (i,\ell) \in \mathbb{Z}_m \times \mathbb{Z}_k$ and $j = \varepsilon(k-1) + \ell$,
$$q_{ij} = (t_{\varepsilon+1} - t_\varepsilon)^i\, T_\varepsilon p_r$$
when $\varepsilon = 0$, $\ell = 0$, or $\varepsilon \in \mathbb{Z}_\mu$, $\ell - 1 \in \mathbb{Z}_{k-2}$, or $\varepsilon = \mu-1$, $\ell = k-1$, and
$$q_{ij} = (t_\varepsilon - t_{\varepsilon-1})^i\, T_{\varepsilon-1} p_{(i,k-1)} + (t_{\varepsilon+1} - t_\varepsilon)^i\, T_\varepsilon p_{(i,0)}$$
when $\varepsilon - 1 \in \mathbb{Z}_{\mu-1}$, $\ell = 0$. Set
$$\mathcal{F}_1 = \{q_{ij} : i \in \mathbb{Z}_m,\ j \in \mathbb{Z}_{\mu(k-1)+1}\}.$$
In the next lemma we state some properties of this set of functions. To this end, we number the points in $V_1$ according to the scheme
$$z_j = \phi_\varepsilon(v_\ell), \qquad j = \varepsilon(k-1) + \ell, \quad \varepsilon \in \mathbb{Z}_\mu,\ \ell - 1 \in \mathbb{Z}_{k-1}.$$

Lemma 4.40  The set $\mathcal{F}_1$ forms a basis for $\mathbb{F}_1$ such that
$$q_{ij}^{(i')}(z_{j'}) = \delta_{r,r'}, \qquad r = (i,j),\ r' = (i',j') \in \mathbb{Z}_m \times \mathbb{Z}_{\mu(k-1)+1}. \tag{4.76}$$

Proof  Lemma 4.39 ensures that these functions are in $W^{m,p}[0,1]$. Hence it is clear that they are elements of $\mathbb{F}_1$. Moreover, a direct verification leads to the conclusion that these functions satisfy the conditions (4.76).



For a general $n$, we follow the same process as described earlier to construct a basis for the space $\mathbb{F}_n$ from the basis for $\mathbb{F}_{n-1}$. At each level it requires a procedure for eliminating the discontinuities introduced by the operators $T_\varepsilon$. However, we do not construct the basis for $\mathbb{F}_n$ directly for $n > 1$. Instead, we turn our attention to the construction of bases for the complement spaces $G_1, G_2, \ldots, G_n$. Surprisingly, the next theorem shows that we can choose $(\mu-1)(k-1)m$ functions from the set $\mathcal{F}_1$ to form a basis for $G_1$ and recursively generate bases for the spaces $G_n$ from this basis of $G_1$ by applying the operators $T_\varepsilon$. We see that the construction of bases for $G_n$ for $n \ge 2$ does not require the process of eliminating discontinuities, which is required for the direct construction of bases for $\mathbb{F}_n$, because $G_1 \subseteq W^{m,p}_0[0,1]$.

Theorem 4.41  If $V_0$ is refinable relative to the affine mappings $\Phi$ and
$$\mathcal{G}_1 = \{q_{ij} : j \in \mathbb{Z}_{\mu(k-1)+1},\ z_j \in W_1,\ i \in \mathbb{Z}_m\},$$
then
$$G_1 = \operatorname{span}\mathcal{G}_1.$$
Moreover, if
$$\mathcal{G}_{n+1} = \{a_{e_n}^i\, T_{e_n} q_{ij} : q_{ij} \in \mathcal{G}_1,\ i \in \mathbb{Z}_m,\ e_n \in \mathbb{Z}_\mu^n\},$$
where, for $e_n = (\varepsilon_0, \ldots, \varepsilon_{n-1})$,
$$a_{e_n} = \prod_{\iota \in \mathbb{Z}_n} (t_{\varepsilon_\iota+1} - t_{\varepsilon_\iota}),$$
then
$$G_{n+1} = \operatorname{span}\mathcal{G}_{n+1}, \qquad n \in \mathbb{N}.$$

Proof  Since the cardinality of $\mathcal{G}_1$ equals the dimension of the space $G_1$, the set $\mathcal{G}_1$ consists of $(\mu-1)(k-1)m$ functions, which are linearly independent by (4.76). It remains to show that $\mathcal{G}_1 \subset G_1$. To this end, we prove that the functions in $\mathcal{G}_1$ vanish at all points in $V_0$. Let $q_{ij} \in \mathcal{G}_1$. By (4.76) we obtain that $q_{ij}^{(i)}(z_j) = 1$ for $z_j \in W_1$ and $q_{ij}^{(i')}(z_{j'}) = 0$ for $(i',j') \ne (i,j)$. Since $V_0 \subset V_1$ and $z_j \notin V_0$, we conclude that $q_{ij}^{(i)}$, $i \in \mathbb{Z}_m$, vanish at all points in $V_0$, and thus $q_{ij} \in G_1$.

We now prove the second statement of the theorem. It follows from Lemma 4.39 that the functions $a_{e_n}^i T_{e_n} q_{ij} \in W^{m,p}_0[0,1]$, since $q_{ij} \in W^{m,p}_0[0,1]$. In addition, we have that
$$\big(a_{e_n}^i T_{e_n} q_{ij}\big)^{(i')}\big(\phi_{e_n'}(z_{j'})\big) = a_{e_n}^{i-i'}\,\big(T_{e_n} q_{ij}^{(i')}\big)\big(\phi_{e_n'}(z_{j'})\big) = a_{e_n}^{i-i'}\,\delta_{e_n,e_n'}\, q_{ij}^{(i')}(z_{j'}) = \delta_{e_n,e_n'}\,\delta_{i,i'}\,\delta_{j,j'}.$$



This equation implies that these functions are linearly independent in $G_{n+1}$. Next we observe that the cardinality of the set $\mathcal{G}_{n+1}$ is equal to the dimension of $G_{n+1}$. Consequently, we conclude that $G_{n+1} = \operatorname{span}\mathcal{G}_{n+1}$.

Since
$$\mathbb{F}_n = \mathbb{F}_0 \oplus G_1 \oplus \cdots \oplus G_n$$
and the sequence of spaces $\mathbb{F}_n$, $n \in \mathbb{N}_0$, is dense in the space $W^{m,p}[0,1]$ for $1 \le p < \infty$, we obtain the following result.

Theorem 4.42  The equation
$$\mathbb{F}_0 \oplus \left(\bigoplus_{n\in\mathbb{N}} G_n\right) = W^{m,p}[0,1]$$
holds for $1 \le p < \infty$.

In the finite element method the space $W^{m,p}_0[0,1]$ has a special importance. For this reason we define
$$F_0^0 = \{f \in \mathbb{F}_0 : f^{(i)}(0) = f^{(i)}(1) = 0,\ i \in \mathbb{Z}_m\}$$
and observe that $\dim F_0^0 = (k-2)m$ and $F_0^0 = \operatorname{span}\mathcal{F}_0^0$, where
$$\mathcal{F}_0^0 = \{p_{(i,j)} : i \in \mathbb{Z}_m,\ j - 1 \in \mathbb{Z}_{k-2}\}.$$

Corollary 4.43  The equation
$$F_0^0 \oplus \left(\bigoplus_{n\in\mathbb{N}} G_n\right) = W^{m,p}_0[0,1]$$
holds for $1 \le p < \infty$.

4.6 Bibliographical remarks

The material presented in this chapter regarding multiscale bases was mainly taken from [65, 196, 200, 201]. The construction of orthogonal wavelets on invariant sets was originally introduced in [200]. The construction was then extended in [201] to a general bounded domain and to bi-orthogonal wavelets. In particular, the construction of the initial wavelet space was formulated in [201] in terms of a general solution of a matrix completion problem. Later, [65] gave a construction of interpolating wavelets on invariant sets. The concept of a refinable set relative to a family of contractive mappings on a metric space, which define the invariant set, was introduced in [65], and a recursive structure was explored in that paper for multiscale function representation and approximation constructed by interpolation on invariant sets. For the notion of invariant sets, the reader is referred to [148]. The material about multiscale partitions of a multidimensional simplex was originally developed in [74]. Paper [198] constructed refinable sets that admit a unique Lagrange interpolating polynomial (see also [199]). The description of multiscale Hermite interpolation in Section 4.5.2 follows [66]. Moreover, [69] presented a construction of multiscale basis functions and the corresponding multiscale collocation functionals, both having vanishing moments (see also Section 7.1).

For wavelets on an unbounded domain, the reader is referred to [43, 48, 82–84, 92, 93, 97, 98, 100, 101, 232] and the references cited therein.


5

Multiscale Galerkin methods

The main purpose of this chapter is to present fast multiscale Galerkin methods for solving the second-kind Fredholm integral equation
$$u - \mathcal{K}u = f \tag{5.1}$$
defined on a compact domain $\Omega$ in $\mathbb{R}^d$. The classical Galerkin method using piecewise polynomials applied to equation (5.1) leads to a linear system of equations with a dense coefficient matrix. Hence the numerical solution of this equation is computationally costly. The multiscale Galerkin method to be described in this chapter makes use of the multiscale feature and the vanishing moment property of the multiscale piecewise polynomial basis, and results in a linear system with a numerically sparse coefficient matrix. As a result, fast algorithms may be designed based on a truncation of the coefficient matrix. Specifically, the multiscale Galerkin method uses the $L^2$-orthogonal projection as a discretization principle, with the multiscale basis functions whose construction is described in Chapter 4.

The fast multiscale Galerkin method is based on a matrix compression scheme. We show that the matrix compression scheme preserves almost the optimal convergence order of the standard Galerkin method, while it reduces the number of nonzero entries of its coefficient matrix from $O(N^2)$ to $O(N \log^\sigma N)$, where $N$ is the size of the matrix and $\sigma$ may be 1 or 2. We also prove that the condition number of the compressed matrix is uniformly bounded, independent of the size of the matrix.

The kernels of the integral operators in which we are interested in this chapter are weakly singular or smooth. We present theoretical results for the weakly singular case in detail and only give comments for the smooth case.




5.1 The multiscale Galerkin method

In this section we present the multiscale Galerkin method for solving equation (5.1). For this purpose, we first describe the properties of multiscale bases that are necessary for developing the multiscale Galerkin method. These properties are satisfied by the multiscale bases constructed in Chapter 3. However, the multiscale bases constructed in Chapter 3 have other properties that are not essential for developing the multiscale Galerkin method.

5.1.1 Multiscale bases

The multiscale basis requires a multiscale partition of the domain $\Omega$. We assume that there is a family of partitions $\{\Pi_i : i \in \mathbb{N}_0\}$ such that, for each scale $i \in \mathbb{N}_0$, $\Pi_i = \{\Omega_{ij} : j \in \mathbb{Z}_{e(i)}\}$, where $e(i)$ denotes the cardinality of $\Pi_i$, has the properties:

(1) $\bigcup_{j \in \mathbb{Z}_{e(i)}} \Omega_{ij} = \Omega$;
(2) $\operatorname{meas}(\Omega_{ij} \cap \Omega_{ij'}) = 0$, $j, j' \in \mathbb{Z}_{e(i)}$, $j \ne j'$;
(3) $\operatorname{meas}(\Omega_{ij}) \sim d_i^d$ for all $j \in \mathbb{Z}_{e(i)}$,

where $d_i = \max\{d(\Omega_{ij}) : j \in \mathbb{Z}_{e(i)}\}$. Here the notation $a_i \sim b_i$ for $i \in \mathbb{N}_0$ means that there are positive constants $c_1$ and $c_2$ such that $c_1 a_i \le b_i \le c_2 a_i$ for all $i \in \mathbb{N}_0$.

In addition, we assume that

(4) the sets $\Omega_{ij}$, $j \in \mathbb{Z}_{e(i)}$, are star-shaped.

We remark that a set $A \subset \mathbb{R}^d$ is called star-shaped if it contains a point for which the line segment connecting this point and any other point of the set is contained in the set. Such a point is called a center of the set.

We further suppose that there is a nested sequence of finite-dimensional subspaces $X_n$, $n \in \mathbb{N}_0$, of $X$, that is,
$$X_{n-1} \subset X_n, \qquad n \in \mathbb{N}.$$
Thus, for each $n \in \mathbb{N}_0$, a subspace $W_n \subset X_n$ can be defined such that $X_n$ is an orthogonal direct sum of $X_{n-1}$ and $W_n$. Moreover, we assume that $X_n$, $n \in \mathbb{N}_0$, is ultimately dense in $L^2(\Omega)$ in the sense that
$$\overline{\bigcup_{n\in\mathbb{N}_0} X_n} = L^2(\Omega).$$
We then have an orthogonal decomposition of the space $L^2(\Omega)$:
$$L^2(\Omega) = \bigoplus_{n\in\mathbb{N}_0}^{\perp} W_n, \tag{5.2}$$



where we have used the notation $W_0 = X_0$. Set $w(n) = \dim W_n$ and $s(n) = \dim X_n$ for $n \in \mathbb{N}_0$. It follows that
$$s(n) = \sum_{i\in\mathbb{Z}_{n+1}} w(i).$$
For each $n \in \mathbb{N}_0$ we introduce an index set $U_n = \{(i,j) : i \in \mathbb{Z}_{n+1},\ j \in \mathbb{Z}_{w(i)}\}$. We also use the notation $U = \{(i,j) : i \in \mathbb{N}_0,\ j \in \mathbb{Z}_{w(i)}\}$. We assume that there is a family of basis functions $\{w_{ij} : (i,j) \in U\} \subset X$ such that
$$W_n = \operatorname{span}\{w_{nj} : j \in \mathbb{Z}_{w(n)}\}, \qquad n \in \mathbb{N}_0.$$
Thus
$$X_n = \operatorname{span}\{w_{ij} : (i,j) \in U_n\}.$$
We require that the following multiscale properties hold for the partitions and the basis functions.

(I) There is a positive integer $\mu > 1$ such that, for $i \in \mathbb{N}_0$,
$$d_i \sim \mu^{-i/d}, \qquad w(i) \sim \mu^i \qquad \text{and} \qquad s(i) \sim \mu^i.$$

(II) There exist positive integers $\rho$ and $\gamma$ such that, for every $i > \gamma$ and $j \in \mathbb{Z}_{w(i)}$ written in the form $j = \nu\rho + s$, where $s \in \mathbb{Z}_\rho$ and $\nu \in \mathbb{N}_0$,
$$w_{ij}(t) = 0, \qquad t \in \Omega \setminus \Omega_{i-\gamma,\nu}.$$
Setting $S_{ij} = \Omega_{i-\gamma,\nu}$, we see that the support of $w_{ij}$ is contained in $S_{ij}$. It can easily be verified that
$$d_i \sim \max\{d(S_{ij}) : j \in \mathbb{Z}_{e(i)}\}.$$
Because of this property, we shall not distinguish $d_i$ from the right-hand side of the above equation.

(III) For any $(i,j) \in U$ with $i \ge 1$ and any polynomial $p$ of total degree less than a positive integer $k$,
$$(p, w_{ij}) = 0,$$
where $(\cdot,\cdot)$ denotes the $L^2$-inner product.

(IV) There is a constant $\theta_0$ such that, for any $(i,j) \in U$,
$$\|w_{ij}\| = 1 \qquad \text{and} \qquad \|w_{ij}\|_\infty \le \theta_0\,\mu^{i/2},$$
where $\|\cdot\|$ and $\|\cdot\|_\infty$ denote the $L^2$-norm and the $L^\infty$-norm, respectively.



(V) There is a positive constant $\theta_1$ such that, for all $n \in \mathbb{N}_0$ and $v = \sum_{(i,j)\in U_n} v_{ij} w_{ij}$,
$$\|\mathbf{E}_n\mathbf{v}\|_2 \sim \|\mathbf{v}\|_2 \qquad \text{and} \qquad \|\mathbf{v}\|_2 \le \theta_1\|v\|,$$
where $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$, $\mathbf{E}_n = \big[(w_{i'j'}, w_{ij}) : (i',j'), (i,j) \in U_n\big]$, and the notation $\|\mathbf{x}\|_p$, $1 \le p \le \infty$, for a vector $\mathbf{x} = [x_j : j \in \mathbb{Z}_n]$ denotes the $p$-norm, defined by
$$\|\mathbf{x}\|_p = \begin{cases} \left(\sum_{j\in\mathbb{Z}_n} |x_j|^p\right)^{1/p}, & 1 \le p < \infty,\\ \max\{|x_j| : j \in \mathbb{Z}_n\}, & p = \infty. \end{cases}$$

(VI) If $\mathcal{P}_n$ is the orthogonal projection from $X$ onto $X_n$, then there exists a positive constant $c$ such that, for any $u \in H^k(\Omega)$,
$$\|u - \mathcal{P}_n u\| \le c\, d_n^k\, \|u\|_{H^k}.$$

All of these properties are fulfilled by the multiscale basis functions constructed in Chapter 3. In general, the matrix $\mathbf{E}_n$ is a block diagonal matrix. Moreover, if $\{w_{ij} : (i,j) \in U\}$ is a sequence of orthonormal basis functions, then $\mathbf{E}_n$ is the identity matrix and property (V) holds with $\|\mathbf{v}\|_2 = \|v\|$. Furthermore, if $X_n$, $n \in \mathbb{N}_0$, are spaces of piecewise polynomials of total degree less than $k$, then the vanishing moment property (III) and the approximation property (VI) hold naturally.
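For a concrete feel for these properties, one can check them numerically on the simplest example we know of: the $L^2$-normalized Haar functions on $[0,1]$, which form an orthonormal multiscale basis with $\mu = 2$ and one vanishing moment ($k = 1$). The sketch below uses our own indexing and a midpoint quadrature; it is an illustration, not the book's construction.

```python
import math

def haar(i, j):
    """L2-normalized Haar function at scale i >= 1, position j in Z_{2^{i-1}}."""
    n = 2 ** (i - 1)
    def w(t):
        x = n * t - j          # map the support [j/n, (j+1)/n] to [0, 1)
        if 0 <= x < 0.5:
            return math.sqrt(n)
        if 0.5 <= x < 1:
            return -math.sqrt(n)
        return 0.0
    return w

def inner(f, g, m=20000):      # midpoint rule for the L2 inner product on [0, 1]
    h = 1.0 / m
    return sum(f((l + 0.5) * h) * g((l + 0.5) * h) for l in range(m)) * h

w = haar(3, 1)
print(inner(w, w))                  # close to 1: the normalization in (IV)
print(inner(lambda t: 1.0, w))      # close to 0: the vanishing moment in (III), k = 1
print(max(abs(w(l / 1000)) for l in range(1000)))  # = 2, a mu^{i/2}-type bound as in (IV)
```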

5.1.2 Formulation of the multiscale Galerkin method

As we have discussed in Section 3.1, the Galerkin method for equation (5.1) is to find $u_n \in X_n$ that satisfies the operator equation
$$(\mathcal{I} - \mathcal{K}_n) u_n = \mathcal{P}_n f, \tag{5.3}$$
where $\mathcal{K}_n = \mathcal{P}_n\mathcal{K}|_{X_n}$. It is clear that the following theoretical results hold for the Galerkin method (see Section 3.3).

Theorem 5.1  Let $\mathcal{K}$ be a linear compact operator not having one as an eigenvalue. Then there exists $N > 0$ such that, for all $n \ge N$, the Galerkin scheme (5.3) has a unique solution $u_n \in X_n$, and there is a constant $c > 0$ such that, for all $n \ge N$,
$$\|(\mathcal{I} - \mathcal{K}_n)^{-1}\| \le c.$$
Moreover, if the solution $u$ of equation (5.1) satisfies $u \in H^k(\Omega)$, then there exists a positive constant $c$ such that, for all $n \ge N$,
$$\|u - u_n\| \le c\,\mu^{-kn/d}\,\|u\|_{H^k}.$$



Using the multiscale bases for the spaces $X_n$ described in the last section, the Galerkin method (5.3) seeks
$$u_n = \sum_{(i,j)\in U_n} u_{ij} w_{ij} \in X_n$$
such that
$$\sum_{(i,j)\in U_n} u_{ij}\,(w_{i'j'},\, w_{ij} - \mathcal{K}w_{ij}) = (w_{i'j'}, f), \qquad (i',j') \in U_n. \tag{5.4}$$
Because the multiscale basis is used, in order to distinguish it from the traditional Galerkin method we call (5.4) the multiscale Galerkin method.

To write (5.4) in matrix form, we use the lexicographic ordering on $\mathbb{Z}_{n+1}\times\mathbb{Z}_{n+1}$ and define the matrix
$$\mathbf{K}_n = \big[(w_{i'j'}, \mathcal{K}w_{ij}) : (i',j'), (i,j) \in U_n\big]$$
and the vectors
$$\mathbf{f}_n = [(w_{i'j'}, f) : (i',j') \in U_n], \qquad \mathbf{u}_n = [u_{ij} : (i,j) \in U_n].$$
Note that these vectors have length $s(n)$. With this notation, equation (5.4) takes the equivalent matrix form
$$(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n. \tag{5.5}$$

Even though the coefficient matrix $\mathbf{K}_n$ is a full matrix, it differs significantly from the matrix $\mathbf{K}_n$ of Section 3.3.1. The use of multiscale basis functions makes the matrix $\mathbf{K}_n$ numerically sparse. By a numerically sparse matrix we mean a matrix in which a significantly large number of entries are very small in magnitude. This forms a basis for developing the fast multiscale Galerkin method. We illustrate this observation with the following example.

Example 5.2  Consider $\Omega = [0,1]$ and the compact integral operator with kernel
$$K(s,t) = \log|s-t|, \qquad s,t \in [0,1].$$
We choose $X_n$ as the space of piecewise linear functions with knots $\{j/2^n : j \in \mathbb{N}_{2^n-1}\}$. In this case $k = 2$. The Galerkin matrix of this operator with respect to the Lagrange interpolating basis is illustrated in Figure 5.1 with $n = 6$.

We can see that generating the full matrix and then solving the corresponding linear system requires a large computational cost when its order is large. The idea for overcoming this computational deficiency is to change the basis for the piecewise polynomial space so that the projection of the integral operator $\mathcal{K}$ onto the space has a numerically sparse Galerkin matrix under the new basis.
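The numerical sparsity can be observed directly. The following self-contained sketch is our own substitute experiment: it uses Haar functions (one vanishing moment) in place of the book's piecewise linear multiscale basis, and a crude midpoint product quadrature for the entries $(w_{i'j'}, \mathcal{K}w_{ij})$ of the log-kernel operator. An entry whose two supports are well separated comes out roughly two orders of magnitude smaller than a near-diagonal entry:

```python
import math

def haar(i, j):
    """L2-normalized Haar function at scale i >= 1, position j in Z_{2^{i-1}}."""
    n = 2 ** (i - 1)
    def w(t):
        x = n * t - j
        if 0 <= x < 0.5:
            return math.sqrt(n)
        if 0.5 <= x < 1:
            return -math.sqrt(n)
        return 0.0
    return w

def galerkin_entry(wa, wb, ms=301, mt=300):
    """Midpoint product quadrature for (wa, K wb) with K(s, t) = log|s - t|.
    Different grid sizes keep the quadrature nodes away from s = t."""
    total = 0.0
    for a in range(ms):
        s = (a + 0.5) / ms
        va = wa(s)
        if va == 0.0:
            continue
        for b in range(mt):
            t = (b + 0.5) / mt
            vb = wb(t)
            if vb != 0.0:
                total += va * math.log(abs(s - t)) * vb
    return total / (ms * mt)

near = abs(galerkin_entry(haar(3, 0), haar(3, 0)))  # supports coincide
far = abs(galerkin_entry(haar(3, 0), haar(3, 3)))   # supports [0, 1/4] and [3/4, 1]
print(near, far)  # the far entry is far smaller, as Lemma 5.3 predicts
```

The decay is exactly the mechanism behind the truncation strategy of the next section: the vanishing moments kill the smooth part of the kernel away from the diagonal.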



[Figure 5.1: The Galerkin matrix with respect to the piecewise linear polynomial basis.]

[Figure 5.2: The Galerkin matrix with respect to the piecewise linear polynomial multiscale basis.]

The Galerkin matrix of this operator with respect to the piecewise linear polynomial multiscale basis described in the last section is illustrated in Figure 5.2 with $n = 6$. It can be seen that the absolute values of the entries off the diagonals of the blocks corresponding to different scales of spaces are very small. We can set the entries that are small in magnitude to zero and obtain a sparse matrix, which leads to a fast Galerkin method. We present this fast method and its analysis in the next several sections.

5.2 The fast multiscale Galerkin method

In this section we develop the fast multiscale Galerkin method based on a matrix truncation strategy. We consider two classes of kernels. Class one consists of kernels having a weak singularity along the diagonal. Specifically, for $\sigma \in [0,d)$ and an integer $k \ge 1$, we define the class $S^{\sigma,k}$. We say $K \in S^{\sigma,k}$ if, for $s,t \in \Omega$, $s \ne t$, $K$ has continuous partial derivatives $D_s^\alpha D_t^\beta K(s,t)$ for $|\alpha| \le k$, $|\beta| \le k$, and there exists a constant $c > 0$ such that, for $|\alpha| = |\beta| = k$,
$$|D_s^\alpha D_t^\beta K(s,t)| \le \frac{c}{|s-t|^{\sigma+2k}}, \qquad s,t \in \Omega. \tag{5.6}$$
Related to the kernel bound on the right-hand side of (5.6), we remark that when $\sigma = 0$ the function $1/x^\sigma$ is understood as $\log x$. Class two consists of kernels $K \in C^k(\Omega\times\Omega)$. Kernels in this class are smooth.

Set
$$K_{i'j',ij} = (w_{i'j'}, \mathcal{K}w_{ij}), \qquad (i,j), (i',j') \in U_n,$$
and observe that the $K_{i'j',ij}$ are the entries of the matrix $\mathbf{K}_n$. In the next lemma we estimate bounds for $K_{i'j',ij}$.

Lemma 5.3  Suppose that conditions (I)–(IV) hold.

(1) If $K \in S^{\sigma,k}$ for some $\sigma \in [0,d)$ and a positive integer $k$, and there is a constant $r > 1$ such that
$$\operatorname{dist}(S_{ij}, S_{i'j'}) \ge \max\{r d_i,\, r d_{i'}\},$$
then there exists a positive constant $c$ such that, for all $(i,j), (i',j') \in U$, $i, i' \in \mathbb{N}$,
$$|K_{i'j',ij}| \le c\,(d_i d_{i'})^{k-\frac d2}\, \min\left\{ d_{i'}^d \max_{s\in S_{i'j'}} \int_{S_{ij}} \frac{dt}{|s-t|^{2k+\sigma}},\ \ d_i^d \max_{t\in S_{ij}} \int_{S_{i'j'}} \frac{ds}{|s-t|^{2k+\sigma}} \right\}.$$

(2) If $K \in C^k(\Omega\times\Omega)$, then there exists a positive constant $c$ such that, for all $i, i' \in \mathbb{N}$,
$$|K_{i'j',ij}| \le c\, d_{i'}^{k+d/2}\, d_i^{k+d/2}.$$



Proof  We present a proof for part (1) only, since the proof of part (2) is similar. It is done by using the Taylor theorem. By hypothesis, for each $(i,j) \in U$ the set $S_{ij}$ is star-shaped. Let $s_0$ and $t_0$ be centers of the sets $S_{i'j'}$ and $S_{ij}$, respectively. It follows from the Taylor theorem that
$$K(s,t) = p(s,t) + q(s,t) + \sum_{|\alpha|=k}\sum_{|\beta|=k} \frac{(s-s_0)^\alpha (t-t_0)^\beta}{\alpha!\,\beta!}\, r_{\alpha\beta}(s,t),$$
where $p(s,\cdot)$ and $q(\cdot,t)$ are polynomials of total degree less than $k$ in $t$ and in $s$, respectively, and
$$r_{\alpha\beta}(s,t) = \int_0^1\!\!\int_0^1 D_s^\alpha D_t^\beta K\big(s_0 + \theta_1(s-s_0),\ t_0 + \theta_2(t-t_0)\big)\,(1-\theta_1)^{k-1}(1-\theta_2)^{k-1}\, d\theta_1\, d\theta_2.$$
By conditions (II) and (III), we have that
$$K_{i'j',ij} = \sum_{|\alpha|=k}\sum_{|\beta|=k} \int_{S_{i'j'}}\int_{S_{ij}} \frac{(s-s_0)^\alpha (t-t_0)^\beta}{\alpha!\,\beta!}\, r_{\alpha\beta}(s,t)\, w_{i'j'}(s) w_{ij}(t)\, ds\, dt.$$
This, with conditions (I) and (IV), yields the bound
$$|K_{i'j',ij}| \le c\, d_i^{k-\frac d2} d_{i'}^{k-\frac d2} \sum_{|\alpha|=k}\sum_{|\beta|=k} \frac{1}{\alpha!\,\beta!} \int_{S_{i'j'}}\int_{S_{ij}} |r_{\alpha\beta}(s,t)|\, ds\, dt. \tag{5.7}$$
We conclude from the mean-value theorem and the hypothesis $K \in S^{\sigma,k}$ that there exist $s' \in S_{i'j'}$ and $t' \in S_{ij}$ such that
$$|r_{\alpha\beta}(s,t)| = k^{-2}\,|D_s^\alpha D_t^\beta K(s',t')| \le \frac{c}{|s'-t'|^{2k+\sigma}}.$$
The assumption of this lemma yields
$$|s'-t'| \ge |s'-t| - d_i \ge (1-r^{-1})\,|s'-t|.$$
Thus, for a new constant $c$,
$$|r_{\alpha\beta}(s,t)| \le \frac{c}{|s'-t|^{2k+\sigma}}.$$
This inequality, with (5.7) and the relation $\operatorname{meas}(S_{i'j'}) \sim d_{i'}^d$, leads to the desired estimate.

The above lemma shows that most of the entries are so small that they can be neglected without affecting the overall accuracy of the approximation scheme. This observation leads to a matrix truncation strategy. To present it, we partition the matrix $\mathbf{K}_n$ into a block matrix
$$\mathbf{K}_n = [\mathbf{K}_{i'i} : i', i \in \mathbb{Z}_{n+1}], \qquad \text{with} \quad \mathbf{K}_{i'i} = [K_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}].$$



For each $i', i \in \mathbb{Z}_{n+1}$ we choose a truncation parameter $\delta^n_{i'i}$, which will be specified later. We define, for the weakly singular case,
$$\tilde K_{i'j',ij} = \begin{cases} K_{i'j',ij}, & \operatorname{dist}(S_{i'j'}, S_{ij}) \le \delta^n_{i'i},\\ 0, & \text{otherwise}, \end{cases} \tag{5.8}$$
and obtain a truncation matrix
$$\tilde{\mathbf{K}}_n = [\tilde{\mathbf{K}}_{i'i} : i', i \in \mathbb{Z}_{n+1}],$$
where
$$\tilde{\mathbf{K}}_{i'i} = \mathbf{K}(\delta^n_{i'i})_{i'i} = [\tilde K_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}].$$
Likewise, for the smooth case, we define, for each $i', i \in \mathbb{N}$,
$$\tilde{\mathbf{K}}_{i'i} = \begin{cases} \mathbf{K}_{i'i}, & i + i' \le n,\\ \mathbf{0}, & \text{otherwise}. \end{cases} \tag{5.9}$$
This truncation strategy leads to the fast multiscale Galerkin method, which is to find $\tilde{\mathbf{u}}_n = [\tilde u_{ij} : (i,j) \in U_n] \in \mathbb{R}^{s(n)}$ such that
$$(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\tilde{\mathbf{u}}_n = \mathbf{f}_n. \tag{5.10}$$

Example 5.4  We again consider the compact integral operator with the kernel
$$K(s,t) = \log|s-t|, \qquad s,t \in [0,1],$$
and choose $X_n$ as the space of piecewise linear functions ($k = 2$) with knots $\{j/2^n : j \in \mathbb{N}_{2^n-1}\}$. The truncated Galerkin matrix of this operator with respect to the piecewise linear polynomial multiscale basis is illustrated in Figure 5.3 with $n = 6$.

The analysis of the fast multiscale Galerkin method requires the availability of an operator form of equation (5.10). To this end, we first introduce the concept of the matrix representation of an operator.

Definition 5.5  The matrix $\mathbf{B}$ is said to be the matrix representation of the linear operator $\mathcal{A}$ relative to the basis $\Phi = \{\phi_j : j \in \mathbb{N}_n\}$ if
$$\Phi^T\mathbf{B} = \mathcal{A}(\Phi^T).$$

Proposition 5.6  The matrix representation of the operator $\mathcal{K}$ relative to the basis $W_n = \{w_{ij} : (i,j) \in U_n\}$ is $\mathbf{B}_n = \mathbf{E}_n^{-1}\mathbf{K}_n$.

Proof  Let $\mathbf{B}_n = [b_{i'j',ij} : (i,j), (i',j') \in U_n]$ be the matrix representation of the operator $\mathcal{K}$ relative to the basis $W_n$. According to Definition 5.5, we have that
$$\mathcal{K}w_{ij} = \sum_{(k,l)\in U_n} b_{kl,ij}\, w_{kl} \qquad \text{for all } (i,j) \in U_n.$$



[Figure 5.3: (a) The truncated Galerkin matrix with respect to the piecewise linear polynomial multiscale basis; (b) the nonzero entries of the truncated matrix (nz = 1638).]

This leads to
$$(w_{i'j'}, \mathcal{K}w_{ij}) = \sum_{(k,l)\in U_n} b_{kl,ij}\,(w_{i'j'}, w_{kl}) \qquad \text{for all } (i,j), (i',j') \in U_n,$$
which means $\mathbf{K}_n = \mathbf{E}_n\mathbf{B}_n$ and completes the proof.
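Proposition 5.6 can be sanity-checked on a tiny example. We take the non-orthonormal basis $\{1, s\}$ on $[0,1]$ and the rank-one kernel $K(s,t) = st$ (our own choice, exactly computable), for which $\mathcal{K}$ maps $1 \mapsto s/2$ and $s \mapsto s/3$, so the columns of $\mathbf{B} = \mathbf{E}^{-1}\mathbf{K}$ should be $(0, 1/2)^T$ and $(0, 1/3)^T$:

```python
from fractions import Fraction as F

# Basis {1, s} on [0, 1]; kernel K(s, t) = s*t, so (Kw)(s) = s * int_0^1 t w(t) dt.
# All inner products below were computed exactly by hand.
E = [[F(1), F(1, 2)],     # Gram matrix E = [(w_i', w_i)]
     [F(1, 2), F(1, 3)]]
K = [[F(1, 4), F(1, 6)],  # Galerkin matrix K = [(w_i', K w_i)]
     [F(1, 6), F(1, 9)]]

det = E[0][0] * E[1][1] - E[0][1] * E[1][0]
Einv = [[ E[1][1] / det, -E[0][1] / det],
        [-E[1][0] / det,  E[0][0] / det]]
B = [[sum(Einv[r][k] * K[k][c] for k in range(2)) for c in range(2)]
     for r in range(2)]
print(B)  # columns express K1 = s/2 and Ks = s/3 in the basis {1, s}
```

Note that $\mathbf{K}$ itself is full even though the operator has rank one in this basis; it is $\mathbf{B} = \mathbf{E}^{-1}\mathbf{K}$, not $\mathbf{K}$, that carries the representation.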

We next convert the linear system (5.10) to an abstract operator equation form. Let $\beta_{i'j',ij}$, $(i,j), (i',j') \in U_n$, denote the entries of the matrix $\mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n\mathbf{E}_n^{-1}$ and let
$$\tilde K_n(s,t) = \sum_{(i,j),(i',j')\in U_n} \beta_{i'j',ij}\, w_{i'j'}(s)\, w_{ij}(t).$$
We denote by $\tilde{\mathcal{K}}_n$ the integral operator defined by the kernel $\tilde K_n(s,t)$.

Proposition 5.7  Solving the linear system (5.10) is equivalent to finding
$$\tilde u_n = \sum_{(i,j)\in U_n} \tilde u_{ij} w_{ij} \in X_n$$
such that
$$(\mathcal{I} - \tilde{\mathcal{K}}_n)\tilde u_n = \mathcal{P}_n f.$$



Proof  It follows, for $(i,j), (i',j') \in U_n$, that
$$(w_{i'j'}, \tilde{\mathcal{K}}_n w_{ij}) = \left(w_{i'j'},\, \int_\Omega \tilde K_n(\cdot, t)\, w_{ij}(t)\, dt\right) = \sum_{(k,l),(k',l')\in U_n} \beta_{k'l',kl}\,(w_{kl}, w_{ij})(w_{i'j'}, w_{k'l'})$$
$$= \sum_{(k,l),(k',l')\in U_n} (\mathbf{E}_n)_{i'j',k'l'}\,\beta_{k'l',kl}\,(\mathbf{E}_n)_{kl,ij} = (\tilde{\mathbf{K}}_n)_{i'j',ij}, \tag{5.11}$$
which means
$$\tilde{\mathbf{K}}_n = \big[(w_{i'j'}, \tilde{\mathcal{K}}_n w_{ij}) : (i,j), (i',j') \in U_n\big]$$
and leads to the desired result of this proposition.

The analysis of the fast multiscale Galerkin method, with an appropriate choice of the truncation parameters $\delta^n_{i'i}$, will be discussed in the next section.

5.3 Theoretical analysis

In this section we analyze the fast multiscale Galerkin method. Specifically, we show that the number of nonzero entries of the truncated matrix is of linear order up to a logarithmic factor, and we prove that the method is stable and that it gives an almost optimal order of convergence. We also prove that the condition number of the truncated matrix is uniformly bounded. We consider the weakly singular case in Sections 5.3.1–5.3.3. Special results for the smooth case will be presented in the last subsection without proof.

5.3.1 Computational complexity

The computational complexity of the fast multiscale Galerkin method is measured in terms of the number of nonzero entries of the truncated matrix. In this subsection we estimate the number of nonzero entries of the matrix $\tilde{\mathbf{K}}_n$. For a matrix $\mathbf{A}$, we denote by $\mathcal{N}(\mathbf{A})$ the number of its nonzero entries.

Lemma 5.8  If conditions (I) and (II) hold, then there exists a constant $c > 0$ such that, for all $i', i \in \mathbb{N}_0$ and all $n \in \mathbb{N}$,
$$\mathcal{N}(\tilde{\mathbf{K}}_{i'i}) \le c\,\mu^{i+i'}\big(d_i^d + d_{i'}^d + (\delta^n_{i'i})^d\big).$$



Proof  For fixed $i$, $i'$, $j'$ and an arbitrarily fixed point $s_0$ in $S_{i'j'}$, we let
$$S(i,i',j') = \{s \in \mathbb{R}^d : |s - s_0| \le d_i + d_{i'} + \delta^n_{i'i}\}.$$
If $\tilde K_{i'j',ij} \ne 0$, then $\operatorname{dist}(S_{i'j'}, S_{ij}) \le \delta^n_{i'i}$; thus $S_{ij} \subset S(i,i',j')$. Let $N_{i,i'j'}$ denote the number of indices $j$ such that $S_{ij}$ is contained in $S(i,i',j')$. Property (3) of the partition $\Pi_i$ and condition (I) imply that there exists a constant $c > 0$ such that
$$N_{i,i'j'} \le \frac{\operatorname{meas}(S(i,i',j'))}{\min\{\operatorname{meas}(S_{ij}) : S_{ij} \subset S(i,i',j')\}} \le c\,\mu^i (d_i + d_{i'} + \delta^n_{i'i})^d.$$
It follows from condition (II) that the number of functions $w_{ij}$ having supports contained in a given $S_{ij}$ is bounded by $\rho$. Since $w(i') \sim \mu^{i'}$,
$$\mathcal{N}(\tilde{\mathbf{K}}_{i'i}) \le \rho \sum_{j'\in\mathbb{Z}_{w(i')}} N_{i,i'j'} \le c\,\mu^{i+i'}(d_i + d_{i'} + \delta^n_{i'i})^d,$$
proving the desired result.

To continue estimating the number of nonzero entries of the matrix $\tilde{\mathbf{K}}_n$, we now specify choices of the truncation parameters $\delta^n_{i'i}$. Specifically, for each $i, i' \in \mathbb{Z}_{n+1}$ and for arbitrarily chosen constants $a > 0$ and $r > 1$, we choose the truncation parameter $\delta^n_{i'i}$ such that
$$\delta^n_{i'i} \le \max\big\{a\,\mu^{[-n+\alpha(n-i)+\alpha'(n-i')]/d},\ r d_i,\ r d_{i'}\big\}, \tag{5.12}$$
where $\alpha$ and $\alpha'$ are any numbers in $(-\infty, 1]$. The lemma above and the choice of truncation parameters lead to the following estimate of the number of nonzero entries of the matrix $\tilde{\mathbf{K}}_n$.

Theorem 5.9  If the truncation parameters $\delta^n_{i'i}$ are chosen according to (5.12) and conditions (I) and (II) hold, then
$$\mathcal{N}(\tilde{\mathbf{K}}_n) = \begin{cases} O(s(n)\log^2 s(n)), & \alpha = \alpha' = 1,\\ O(s(n)\log s(n)), & \text{otherwise.} \end{cases}$$

Proof  Because
$$\mathcal{N}(\tilde{\mathbf{K}}_n) = \sum_{i'\in\mathbb{Z}_{n+1}}\sum_{i\in\mathbb{Z}_{n+1}} \mathcal{N}(\tilde{\mathbf{K}}_{i'i}), \tag{5.13}$$
we use Lemma 5.8 to estimate $\mathcal{N}(\tilde{\mathbf{K}}_n)$. The choice (5.12) of the truncation parameters ensures that
$$\delta^n_{i'i} \le a\,\mu^{[-n+\alpha(n-i)+\alpha'(n-i')]/d} + r d_i + r d_{i'}.$$
Using (5.13) and substituting the above estimate into the inequality of Lemma 5.8, we have that
$$\mathcal{N}(\tilde{\mathbf{K}}_n) \le c \sum_{i\in\mathbb{Z}_{n+1}}\sum_{i'\in\mathbb{Z}_{n+1}} \mu^{i+i'}\Big(2\mu^{-i} + 2\mu^{-i'} + a^d\,\mu^{-n+\alpha(n-i)+\alpha'(n-i')}\Big)$$
$$= c\left[4(n+1)\sum_{i\in\mathbb{Z}_{n+1}}\mu^i + a^d\,\mu^n\left(\sum_{i\in\mathbb{Z}_{n+1}}\mu^{(\alpha-1)(n-i)}\right)\left(\sum_{i'\in\mathbb{Z}_{n+1}}\mu^{(\alpha'-1)(n-i')}\right)\right]$$
$$= \begin{cases} O(\mu^n(n+1)^2), & \alpha = \alpha' = 1,\\ O(\mu^n(n+1)), & \text{otherwise,}\end{cases}$$
as $n \to \infty$. This leads to the desired result of the theorem.
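Theorem 5.9 can be illustrated numerically. The sketch below is entirely our own setup: $d = 1$, $\mu = 2$, dyadic interval supports at levels $0, \ldots, n$, and the level-dependent cut-off $\delta^n_{i'i} = r\max\{d_i, d_{i'}\}$, a choice consistent with (5.12). The count of surviving entries grows far more slowly than the $s(n)^2$ entries of the full matrix:

```python
def count_nonzeros(n, r=2.0):
    """Count entries kept by dist(S_ij, S_i'j') <= r * max(d_i, d_i') for
    1-D dyadic supports S_ij = [j/2^i, (j+1)/2^i], levels 0..n."""
    kept = 0
    for i in range(n + 1):
        for ip in range(n + 1):
            hi, hp = 2.0 ** -i, 2.0 ** -ip
            delta = r * max(hi, hp)   # level-dependent truncation parameter
            for j in range(2 ** i):
                for jp in range(2 ** ip):
                    # distance between the two support intervals (0 if they overlap)
                    gap = max(0.0, max(j * hi, jp * hp)
                              - min((j + 1) * hi, (jp + 1) * hp))
                    if gap <= delta:
                        kept += 1
    return kept

for n in (4, 6, 8):
    s = 2 ** (n + 1) - 1                 # s(n) ~ mu^n with mu = 2
    print(n, count_nonzeros(n), s * s)   # kept grows roughly like s log s
```

Each fine-scale row interacts with only a bounded number of blocks per coarser level, which is exactly the counting mechanism in the proof above.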

5.3.2 Stability and convergence

In this subsection we show that the fast multiscale Galerkin method is stable and that it has an almost optimal convergence order.

The first lemma we present gives an estimate for the discrepancy between the block $\mathbf{K}_{i'i}$ and $\tilde{\mathbf{K}}_{i'i} = \mathbf{K}(\delta)_{i'i}$, where the latter is obtained by using the truncation strategy with parameter $\delta = \delta^n_{i'i}$.

Lemma 5.10  Suppose that $\tilde{\mathbf{K}}_{i'i}$ is obtained from the truncation strategy (5.8) with truncation parameter $\delta$. If conditions (I)–(IV) hold and $K \in S^{\sigma,k}$ for some $\sigma \in [0,d)$ and a positive integer $k$, then for any $r > 1$ and $\delta > 0$ there exists a constant $c$ such that, when $\delta \ge \max\{r d_i, r d_{i'}\}$,
$$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_2 \le c\,(d_i d_{i'})^k\,\delta^{-\eta},$$
where $\eta = 2k - d + \sigma > 0$.

Proof  By the definition of $\tilde{\mathbf{K}}_{i'i}$, we have that
$$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty = \max_{j'\in\mathbb{Z}_{w(i')}} \sum_{j\in Z_\delta} |K_{i'j',ij}|,$$
where
$$Z_\delta = \{j \in \mathbb{Z}_{w(i)} : \operatorname{dist}(S_{ij}, S_{i'j'}) > \delta\}.$$



It follows from Lemma 5.3 that
$$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty \le c\,(d_i d_{i'})^{k-\frac d2}\, d_{i'}^d \max_{j'\in\mathbb{Z}_{w(i')}} \max_{s\in S_{i'j'}} \sum_{j\in Z_\delta} \int_{S_{ij}} \frac{dt}{|s-t|^{2k+\sigma}} \le c\,(d_i d_{i'})^{k-\frac d2}\, d_{i'}^d \int_{|t|>\delta} \frac{dt}{|t|^{2k+\sigma}} \le c\,(d_i d_{i'})^{k-\frac d2}\, d_{i'}^d\,\delta^{-\eta}.$$
Likewise, we have that
$$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_1 \le c\,(d_i d_{i'})^{k-\frac d2}\, d_i^d\,\delta^{-\eta}.$$
Since the spectral radius of a matrix $\mathbf{A}$ is less than or equal to any of its matrix norms,
$$\|\mathbf{A}\|_2^2 = \rho(\mathbf{A}^T\mathbf{A}) \le \|\mathbf{A}^T\mathbf{A}\|_\infty \le \|\mathbf{A}^T\|_\infty\|\mathbf{A}\|_\infty = \|\mathbf{A}\|_1\|\mathbf{A}\|_\infty.$$
Using the above inequality, we have that
$$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_2^2 \le \|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_1\,\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty.$$
Substituting both estimates obtained earlier into the right-hand side of the above inequality proves the desired result.
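The inequality $\|\mathbf{A}\|_2^2 \le \|\mathbf{A}\|_1\|\mathbf{A}\|_\infty$ used above is easy to check numerically. The sketch below (the matrix is an arbitrary example of ours) estimates $\|\mathbf{A}\|_2^2 = \rho(\mathbf{A}^T\mathbf{A})$ by power iteration and compares it with the product of the 1-norm and the $\infty$-norm:

```python
# Numeric check of ||A||_2^2 <= ||A||_1 * ||A||_inf (proof of Lemma 5.10).
A = [[1.0, -2.0, 0.5],
     [0.0,  3.0, -1.0]]

norm_inf = max(sum(abs(x) for x in row) for row in A)            # max row sum
norm_one = max(sum(abs(row[c]) for row in A) for c in range(3))  # max column sum

# Largest eigenvalue of the PSD matrix A^T A via power iteration.
AtA = [[sum(A[r][a] * A[r][b] for r in range(2)) for b in range(3)]
       for a in range(3)]
v = [1.0, 1.0, 1.0]
for _ in range(200):
    w = [sum(AtA[r][c] * v[c] for c in range(3)) for r in range(3)]
    s = max(abs(x) for x in w)   # converges to rho(A^T A) = ||A||_2^2
    v = [x / s for x in w]
spec_sq = s
print(spec_sq, norm_one * norm_inf)   # spec_sq is the smaller of the two
```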

We now describe a second criterion for the choice of the truncation parameters $\delta^n_{i'i}$. For each $i, i' \in \mathbb{Z}_{n+1}$ and for arbitrarily chosen constants $a > 0$ and $r > 1$, we choose the truncation parameter $\delta^n_{i'i}$ such that
$$\delta^n_{i'i} \ge \max\big\{a\,\mu^{[-n+\alpha(n-i)+\alpha'(n-i')]/d},\ r d_i,\ r d_{i'}\big\}, \tag{5.14}$$
where $\alpha$ and $\alpha'$ are any numbers in $(-\infty, 1]$. For real numbers $a$ and $b$ we set
$$\mu[a, b; n] = \sum_{i\in\mathbb{Z}_{n+1}} \mu^{ai/d} \sum_{i'\in\mathbb{Z}_{n+1}} \mu^{bi'/d}.$$

We next estimate the error $\mathcal{R}_n = \mathcal{K}_n - \tilde{\mathcal{K}}_n$ of the truncation in terms of the function $\mu[\cdot,\cdot;n]$.

Lemma 5.11  Let $u \in H^m(\Omega)$ with $0 \le m \le k$ and $K \in S^{\sigma,k}$ for some $\sigma \in [0,d)$ and a positive integer $k$. If the truncation parameters $\delta^n_{i'i}$ are chosen according to (5.14) and conditions (I)–(VI) hold, then there exists a positive constant $c$ such that, for all $n \in \mathbb{N}_0$,
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u\| \le c\,\mu[k+m-\alpha\eta,\ k-\alpha'\eta;\ n]\,\mu^{-(m+d-\sigma)n/d}\,\|u\|_{H^m}.$$


53 Theoretical analysis 213

Proof  For any $u, v \in X$, we project them into the subspace $X_n$. Hence
$$\mathcal{P}_n u = \sum_{i\in\mathbb{Z}_{n+1}} (\mathcal{P}_i - \mathcal{P}_{i-1}) u = \sum_{(i,j)\in U_n} u_{ij} w_{ij}$$
for some constants $u_{ij}$, and
$$\mathcal{P}_n v = \sum_{i\in\mathbb{Z}_{n+1}} (\mathcal{P}_i - \mathcal{P}_{i-1}) v = \sum_{(i,j)\in U_n} v_{ij} w_{ij}$$
for some constants $v_{ij}$, where $\mathcal{P}_{-1} = 0$. By the definitions of the operators $\mathcal{K}_n$ and $\tilde{\mathcal{K}}_n$, we have that
$$\bigl((\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u, \mathcal{P}_n v\bigr) = \sum_{i,i'\in\mathbb{Z}_{n+1}} \bigl((\mathcal{K}_n - \tilde{\mathcal{K}}_n)(\mathcal{P}_i - \mathcal{P}_{i-1})u,\, (\mathcal{P}_{i'} - \mathcal{P}_{i'-1})v\bigr) = \sum_{i,i'\in\mathbb{Z}_{n+1}} \sum_{j\in\mathbb{Z}_{w(i)}} \sum_{j'\in\mathbb{Z}_{w(i')}} (K_{i'j',ij} - \tilde{K}_{i'j',ij})\, u_{ij} v_{i'j'}.$$
Set
$$e_n = \bigl|\bigl((\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u, \mathcal{P}_n v\bigr)\bigr|.$$
Using the Cauchy–Schwarz inequality and condition (V), we conclude that
$$e_n \le c \sum_{i,i'\in\mathbb{Z}_{n+1}} \bigl\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\bigr\|_2\, \|(\mathcal{P}_i - \mathcal{P}_{i-1})u\|\, \|(\mathcal{P}_{i'} - \mathcal{P}_{i'-1})v\|.$$
It follows from condition (VI) that for $u \in H^m(\Omega)$ with $0 \le m \le k$,
$$\|(\mathcal{P}_i - \mathcal{P}_{i-1})u\| \le c\, d_{i-1}^m\, \|u\|_{H^m}.$$
Combining the above estimates and using Lemma 5.10, we have for $u \in H^m(\Omega)$ and $v \in H^{m'}(\Omega)$ with $0 \le m, m' \le k$ that
$$e_n \le c \sum_{i,i'\in\mathbb{Z}_{n+1}} (d_i d_{i'})^k (\delta^n_{i'i})^{-\eta}\, d_{i-1}^m\, d_{i'-1}^{m'}\, \|u\|_{H^m} \|v\|_{H^{m'}}.$$
Using $d_i \sim \mu^{-i/d}$ and the choice of $\delta^n_{i'i}$, we conclude that
$$e_n \le c\, a^{-\eta} \sum_{i,i'\in\mathbb{Z}_{n+1}} \mu^{[(k+m-\alpha\eta)(n-i) + (k+m'-\alpha'\eta)(n-i')]/d}\, \mu^{-(m+m'+d-\sigma)n/d}\, \|u\|_{H^m} \|v\|_{H^{m'}} = c\, a^{-\eta}\, \mu[k+m-\alpha\eta,\, k+m'-\alpha'\eta,\, n]\, \mu^{-(m+m'+d-\sigma)n/d}\, \|u\|_{H^m} \|v\|_{H^{m'}}.$$
Since $\mathcal{K}_n - \tilde{\mathcal{K}}_n = \mathcal{P}_n(\mathcal{K}_n - \tilde{\mathcal{K}}_n)$, we have for $u \in X$ that
$$\bigl\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u\bigr\| = \sup_{v\in X,\, v\ne 0} \frac{\bigl|\bigl((\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u, \mathcal{P}_n v\bigr)\bigr|}{\|v\|}.$$



Combining this equation with the inequality above, with $m' = 0$, yields the desired result of this lemma.

The next theorem provides a stability estimate for the operator $\mathcal{I} - \tilde{\mathcal{K}}_n$. Recall that for the standard Galerkin method there exist positive constants $c_0$ and $N_0$ such that for all $n > N_0$,
$$\|(\mathcal{I} - \mathcal{K}_n)v\| \ge c_0 \|v\| \quad \text{for all } v \in X_n. \qquad (5.15)$$

Theorem 5.12  Let $\mathcal{K} \in \mathcal{S}^{\sigma,k}$ for some $\sigma \in [0, d)$ and a positive integer $k$. Suppose that the truncation parameters $\delta^n_{i'i}$ are chosen according to (5.14) with
$$\alpha > \frac{1}{2} - \frac{d-\sigma}{2\eta}, \qquad \alpha' > \frac{1}{2} - \frac{d-\sigma}{2\eta}, \qquad \alpha + \alpha' > 1.$$
If conditions (I)–(VI) hold, then there exist a positive constant $c$ and a positive integer $N$ such that for all $n \ge N$ and $v \in X_n$,
$$\|(\mathcal{I} - \tilde{\mathcal{K}}_n)v\| \ge c\|v\|.$$

Proof  Note that for any real numbers $a$, $b$ and $e$,
$$\lim_{n\to\infty} \mu[a, b, n]\, \mu^{-en/d} = 0$$
when $e > \max\{0, a, b, a+b\}$. Thus the choice of $\delta^n_{i'i}$ ensures that there exists a positive integer $N$ such that for all $n \ge N$,
$$c\, \mu[k-\alpha\eta,\, k-\alpha'\eta,\, n]\, \mu^{-(d-\sigma)n/d} \le \frac{c_0}{2}.$$
This, with the estimate in Lemma 5.11, leads to
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\| \le \frac{c_0}{2}\|v\| \quad \text{for all } v \in X_n. \qquad (5.16)$$
Combining (5.16) and the stability estimate (5.15) of the standard Galerkin method yields
$$\|(\mathcal{I} - \tilde{\mathcal{K}}_n)v\| \ge \|(\mathcal{I} - \mathcal{K}_n)v\| - \|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\| \ge \frac{c_0}{2}\|v\|$$
for any $v \in X_n$. This completes the proof.
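The limit used at the start of the proof above can also be observed numerically. The following sketch (with the illustrative toy choices $\mu = 2$, $d = 1$) confirms that $\mu[a,b,n]\,\mu^{-en/d}$ decays once $e$ exceeds $\max\{0, a, b, a+b\}$:

```python
# numerical illustration (mu = 2, d = 1 are toy choices): the sequence
# mu[a, b, n] * mu^{-e n / d} decays when e > max{0, a, b, a + b}
def mu_bracket(a, b, n, mu=2.0, d=1):
    return (sum(mu ** (a * i / d) for i in range(n + 1))
            * sum(mu ** (b * j / d) for j in range(n + 1)))

a, b = 0.5, -0.3
e = max(0.0, a, b, a + b) + 0.1
vals = [mu_bracket(a, b, n) * 2.0 ** (-e * n) for n in (10, 20, 40)]
assert vals[0] > vals[1] > vals[2]   # monotone decay toward zero
```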

The above stability estimate ensures that $(\mathcal{I} - \tilde{\mathcal{K}}_n)^{-1}$ exists and is uniformly bounded. As a result, the fast multiscale Galerkin method (5.10) has a unique solution for all sufficiently large $n$.

Theorem 5.13  Let $u \in H^k(\Omega)$ and $\mathcal{K} \in \mathcal{S}^{\sigma,k}$ for some $\sigma \in [0, d)$ and a positive integer $k$. Suppose that the truncation parameters $\delta^n_{i'i}$ are chosen according to (5.14) with $\alpha$ and $\alpha'$ satisfying one of the following conditions:

(i) $\alpha \ge 1$, $\alpha' > \frac{1}{2} - \frac{d-\sigma}{2\eta}$, $\alpha + \alpha' > 1 + \frac{k}{\eta}$; or $\alpha > 1$, $\alpha' \ge \frac{1}{2} - \frac{d-\sigma}{2\eta}$, $\alpha + \alpha' > 1 + \frac{k}{\eta}$; or $\alpha > 1$, $\alpha' > \frac{1}{2} - \frac{d-\sigma}{2\eta}$, $\alpha + \alpha' \ge 1 + \frac{k}{\eta}$;

(ii) $\alpha = 1$, $\alpha' = \frac{k}{\eta}$; or $\alpha = \frac{2k}{\eta}$, $\alpha' = \frac{1}{2} - \frac{d-\sigma}{2\eta}$.

If conditions (I)–(VI) hold, then there exist a positive constant $c$ and a positive integer $N$ such that for all $n \ge N$,
$$\|u - \tilde{u}_n\| \le c\, s(n)^{-k/d} (\log s(n))^\tau\, \|u\|_{H^k(\Omega)},$$
where $\tau = 0$ in case (i) and $\tau = 1$ in case (ii).

Proof  It follows from Theorem 5.12 that there exist a positive constant $c$ and a positive integer $N$ such that for all $n \ge N$,
$$\|\mathcal{P}_n u - \tilde{u}_n\| \le c\, \|(\mathcal{I} - \tilde{\mathcal{K}}_n)(\mathcal{P}_n u - \tilde{u}_n)\|. \qquad (5.17)$$
Since
$$\mathcal{P}_n(\mathcal{I} - \mathcal{K})u = (\mathcal{I} - \tilde{\mathcal{K}}_n)\tilde{u}_n = \mathcal{P}_n f,$$
we have that
$$(\mathcal{I} - \tilde{\mathcal{K}}_n)(\mathcal{P}_n u - \tilde{u}_n) = \mathcal{P}_n(\mathcal{I} - \mathcal{K})(\mathcal{P}_n u - u) + (\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u. \qquad (5.18)$$
Now, by the triangle inequality, we have that
$$\|u - \tilde{u}_n\| \le \|u - \mathcal{P}_n u\| + \|\mathcal{P}_n u - \tilde{u}_n\|. \qquad (5.19)$$
Using inequality (5.17) and equation (5.18), we obtain that
$$\|\mathcal{P}_n u - \tilde{u}_n\| \le c\,\|\mathcal{I} - \mathcal{K}\|\,\|\mathcal{P}_n u - u\| + c\,\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u\|.$$
Substituting this estimate into the right-hand side of (5.19) yields
$$\|u - \tilde{u}_n\| \le (1 + c\,\|\mathcal{I} - \mathcal{K}\|)\,\|\mathcal{P}_n u - u\| + c\,\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u\|.$$
It follows from Lemma 5.11 that
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u\| \le c\, \mu[2k-\alpha\eta,\, k-\alpha'\eta,\, n]\, \mu^{-(d-\sigma)n/d}\, \mu^{-kn/d}\, \|u\|_{H^k}.$$
Observing that, as $n \to \infty$,
$$\mu[a, b, n]\, \mu^{-en/d} = \begin{cases} O(1) & \text{if } e \ge a,\ e > b,\ e > a+b, \text{ or } e > a,\ e \ge b,\ e > a+b, \text{ or } e > a,\ e > b,\ e \ge a+b, \\ O(n) & \text{if } e = a,\ b = 0, \text{ or } e = b,\ a = 0, \end{cases}$$
we obtain, with $a = 2k - \alpha\eta$, $b = k - \alpha'\eta$ and $e = d - \sigma$, that
$$\mu[2k-\alpha\eta,\, k-\alpha'\eta,\, n]\, \mu^{-(d-\sigma)n/d} = \begin{cases} O(1) & \text{in case (i)}, \\ O(n) & \text{in case (ii)}. \end{cases}$$
This yields the desired result.

5.3.3 The condition number of the truncated matrix

We show in this subsection that the condition number of the truncated matrix is uniformly bounded. To this end, we need a norm equivalence result, which is presented below.

Lemma 5.14  If conditions (II), (IV) and (V) hold, then for any $n \in \mathbb{N}$ and $v = \sum_{(i,j)\in U_n} v_{ij} w_{ij}$,
$$\|v\| \sim \|\mathbf{v}\|_2,$$
where $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$.

Proof  Since condition (V) holds, it suffices to prove that there is a positive constant $\theta_2$ such that for all $\mathbf{v}$,
$$\|v\| \le \theta_2 \|\mathbf{v}\|_2.$$

It follows from the orthogonal decomposition (5.2) that
$$\|v\|^2 = \sum_{i\in\mathbb{Z}_{n+1}} \Bigl\| \sum_{j\in\mathbb{Z}_{w(i)}} v_{ij} w_{ij} \Bigr\|^2.$$
According to the construction of the partition of $\Omega$ and condition (II), for all $i > \gamma$,
$$\Bigl\| \sum_{j\in\mathbb{Z}_{w(i)}} v_{ij} w_{ij} \Bigr\|^2 = \sum_{\nu\in\mathbb{Z}_{e(i-\gamma)}} \Bigl\| \sum_{j\in Z(\nu)} v_{ij} w_{ij} \Bigr\|^2,$$
where $Z(\nu) = \{ j : \operatorname{supp} w_{ij} \subseteq S_{ij} = \Omega_{i-\gamma,\nu} \}$. Using the Cauchy–Schwarz inequality and condition (II), we have that
$$\Bigl\| \sum_{j\in Z(\nu)} v_{ij} w_{ij} \Bigr\|^2 \le \int_\Omega \sum_{j\in Z(\nu)} v_{ij}^2 \sum_{j\in Z(\nu)} w_{ij}^2(t)\, dt \le \rho \sum_{j\in Z(\nu)} v_{ij}^2.$$



The last inequality holds because the cardinality of $Z(\nu)$ is less than or equal to $\rho$ and the $L^2$-norm of $w_{ij}$ is equal to 1. Hence, we conclude that there is a positive constant $\theta_2$ such that
$$\|v\|^2 \le \theta_2^2 \sum_{(i,j)\in U_n} v_{ij}^2 = \theta_2^2\, \|\mathbf{v}\|_2^2.$$
This completes the proof.

With the help of the above lemma, we are ready to show that the condition number of the coefficient matrix
$$\tilde{\mathbf{A}}_n = \mathbf{E}_n - \tilde{\mathbf{K}}_n$$
is uniformly bounded.

Theorem 5.15  Suppose that $\mathcal{K} \in \mathcal{S}^{\sigma,k}$ for some $\sigma \in [0, d)$ and a positive integer $k$, and that the truncation parameters $\delta^n_{i'i}$ are chosen according to (5.14) with $\alpha$ and $\alpha'$ satisfying the following conditions:
$$\alpha > \frac{1}{2} - \frac{d-\sigma}{2\eta}, \qquad \alpha' > \frac{1}{2} - \frac{d-\sigma}{2\eta}, \qquad \alpha + \alpha' > 1.$$
If conditions (I)–(VI) hold, then the condition number of the coefficient matrix of the truncated approximate equation (5.10) is bounded; that is, there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,
$$\operatorname{cond}_2(\tilde{\mathbf{A}}_n) \le c.$$

Proof  For any $\mathbf{v} = [v_{ij} : (i,j) \in U_n] \in \mathbb{R}^{s(n)}$, let
$$v = \sum_{(i,j)\in U_n} v_{ij} w_{ij}$$
and
$$g = (\mathcal{I} - \tilde{\mathcal{K}}_n)v.$$
Thus $g \in X_n$, and it can be written as
$$g = \sum_{(i,j)\in U_n} g_{ij} w_{ij}.$$
Set
$$\mathbf{g} = [g_{ij} : (i,j) \in U_n].$$
It can be verified that
$$\mathbf{g} = (\mathbf{E}_n - \tilde{\mathbf{K}}_n)\mathbf{v}.$$



It follows from Theorem 5.12, Lemma 5.14 and the above equations that there exist a positive constant $c$ and a positive integer $N$ such that for all $n \ge N$,
$$\|\mathbf{v}\|_2 \le c\|v\| \le c\,\|(\mathcal{I} - \tilde{\mathcal{K}}_n)v\| = c\|g\| \le c\|\mathbf{g}\|_2 = c\,\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_2.$$
This means that
$$\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)^{-1}\|_2 \le c. \qquad (5.20)$$
Conversely, we have that
$$\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_2 = \|\mathbf{g}\|_2 \le c\|g\| = c\,\|(\mathcal{I} - \tilde{\mathcal{K}}_n)v\|.$$
Note that
$$\|(\mathcal{I} - \tilde{\mathcal{K}}_n)v\| \le \|(\mathcal{I} - \mathcal{K}_n)v\| + \|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\|.$$
This, with (5.16), implies that
$$\|(\mathcal{I} - \tilde{\mathcal{K}}_n)v\| \le (1 + \|\mathcal{K}\|)\|v\| + \frac{c_0}{2}\|v\| \le c\,\|\mathbf{v}\|_2.$$
Thus
$$\|\mathbf{E}_n - \tilde{\mathbf{K}}_n\|_2 \le c. \qquad (5.21)$$
The result of this theorem follows from (5.20) and (5.21).

To close this section, we would like to know whether we can choose appropriate truncation parameters such that the optimal order of convergence and the optimal computational complexity can both be attained. Combining Theorems 5.9, 5.12, 5.13 and 5.15 leads to the following.

Theorem 5.16  Let $u \in H^k(\Omega)$ and $\mathcal{K} \in \mathcal{S}^{\sigma,k}$ for some $\sigma \in [0, d)$ and a positive integer $k$. If conditions (I)–(VI) hold and the $\delta^n_{i'i}$ are chosen as
$$\delta^n_{i'i} = \max\bigl\{ a\,\mu^{[-n+\alpha(n-i)+\alpha'(n-i')]/d},\ r d_i,\ r d_{i'} \bigr\}$$
with $\alpha = 1$ and $1 - \frac{k}{\eta} < \alpha' \le 1$, then the following hold: the stability estimate
$$\|(\mathcal{I} - \tilde{\mathcal{K}}_n)v\| \ge c\|v\| \quad \text{for all } v \in X_n;$$
the boundedness of the condition number
$$\operatorname{cond}_2(\tilde{\mathbf{A}}_n) \le c;$$
the optimal convergence order
$$\|u - \tilde{u}_n\| \le c\, s(n)^{-k/d}\, \|u\|_{H^k(\Omega)};$$



and the optimal (up to a logarithmic factor) order of the complexity
$$\mathcal{N}(\tilde{\mathbf{K}}_n) = \begin{cases} O(s(n)\log^2 s(n)), & \alpha = \alpha' = 1, \\ O(s(n)\log s(n)), & \text{otherwise}. \end{cases}$$

5.3.4 Remarks on the smooth kernel case

In this subsection we present the corresponding results for the smooth kernel case. Since the proofs are similar to those for the weakly singular case, we omit the details except for Lemma 5.19, whose conclusions differ somewhat from those of Lemma 5.11.

Lemma 5.17  If conditions (I)–(IV) hold and $K \in C^k(\Omega\times\Omega)$, then there exists a positive constant $c$ such that for all $i, i' \in \mathbb{N}$,
$$\|\mathbf{K}_{i'i}\|_2 \le c\, d_i^k d_{i'}^k.$$

To avoid computing the entries whose values are nearly zero, we adopt a special block truncation strategy; that is, we set
$$\tilde{\mathbf{K}}_{i'i} = \begin{cases} \mathbf{K}_{i'i}, & i + i' \le n, \\ \mathbf{0}, & \text{otherwise}, \end{cases} \qquad i', i \in \mathbb{N}, \qquad (5.22)$$
to obtain a sparse truncated matrix
$$\tilde{\mathbf{K}}_n = [\tilde{\mathbf{K}}_{i'i} : i', i \in \mathbb{Z}_{n+1}].$$
The following theorems provide the computational complexity, the convergence estimate and the stability of the truncation scheme for integral equations with smooth kernels.

Theorem 5.18  Suppose that condition (I) holds and $K \in C^k(\Omega\times\Omega)$. If the truncated matrix $\tilde{\mathbf{K}}_n$ is chosen as in (5.22), then
$$\mathcal{N}(\tilde{\mathbf{K}}_n) = O(s(n)\log s(n)).$$
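The count behind this complexity bound can be illustrated with toy level dimensions. Assuming $d = 1$ and hypothetical level sizes $w(0) = 1$, $w(i) = \mu^{i-1}$ for $\mu = 2$ (chosen only so that $s(n) = 2^n$), the number of matrix entries kept by the cut-off $i + i' \le n$ of (5.22) grows like $s(n)\log s(n)$ rather than $s(n)^2$:

```python
import math

# toy level sizes: w(0) = 1 and w(i) = mu^{i-1} for mu = 2, so s(n) = 2^n;
# count the matrix entries kept by the block truncation (5.22): i + i' <= n
def w(i, mu=2):
    return 1 if i == 0 else (mu - 1) * mu ** (i - 1)

for n in (8, 12, 16):
    s_n = sum(w(i) for i in range(n + 1))                  # s(n) = dim X_n
    kept = sum(w(i) * w(ip)
               for i in range(n + 1) for ip in range(n + 1)
               if i + ip <= n)
    ratio = kept / (s_n * math.log2(s_n))
    assert 0.3 < ratio < 1.0        # O(s(n) log s(n)) nonzero entries
    assert kept < 0.05 * s_n ** 2   # far sparser than the full matrix
```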

Lemma 5.19  Suppose that conditions (I)–(VI) hold and $K \in C^k(\Omega\times\Omega)$. If the truncated matrix $\tilde{\mathbf{K}}_n$ is chosen as in (5.22), then there exists a constant $c$ such that for all $u \in H^k(\Omega)$ and all $n \in \mathbb{N}$,
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u\| \le c\, \mu^{-kn/d}\, \|u\|_{H^k},$$
and for $u \in L^2(\Omega)$,
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u\| \le c\,(n+1)\, \mu^{-kn/d}\, \|u\|.$$



Proof  As in the proof of Lemma 5.11, for any $u, v \in X$, writing
$$\mathcal{P}_n u = \sum_{(i,j)\in U_n} u_{ij} w_{ij}, \qquad \mathcal{P}_n v = \sum_{(i,j)\in U_n} v_{ij} w_{ij},$$
we have
$$\bigl((\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u, v\bigr) = \sum_{i,i'\in\mathbb{Z}_{n+1}} \sum_{j\in\mathbb{Z}_{w(i)}} \sum_{j'\in\mathbb{Z}_{w(i')}} (K_{i'j',ij} - \tilde{K}_{i'j',ij})\, u_{ij} v_{i'j'}.$$
Using the Cauchy–Schwarz inequality and condition (V), we conclude that its absolute value is bounded by
$$c \sum_{i,i'\in\mathbb{Z}_{n+1}} \bigl\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\bigr\|_2\, \|(\mathcal{P}_i - \mathcal{P}_{i-1})u\|\, \|(\mathcal{P}_{i'} - \mathcal{P}_{i'-1})v\|.$$
It follows from condition (VI) that for $u \in H^m(\Omega)$ with $0 \le m \le k$,
$$\|(\mathcal{P}_i - \mathcal{P}_{i-1})u\| \le c\, d_{i-1}^m\, \|u\|_{H^m}.$$
Denote $Z'(i) = \{ i' \in \mathbb{Z}_{n+1} : i' > n - i \}$. Combining the above estimates, using Lemma 5.17 and the truncation strategy (5.22), we have that for $u \in H^m(\Omega)$ and $v \in L^2(\Omega)$,
$$\bigl|\bigl((\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u, v\bigr)\bigr| \le c \sum_{i\in\mathbb{Z}_{n+1}} \sum_{i'\in Z'(i)} (d_i d_{i'})^k\, d_{i-1}^m\, \|u\|_{H^m} \|v\|. \qquad (5.23)$$
Since $d_i \sim \mu^{-i/d}$, a simple computation yields that
$$\sum_{i\in\mathbb{Z}_{n+1}} \sum_{i'\in Z'(i)} (d_i d_{i'})^k d_{i-1}^m \le c \sum_{i\in\mathbb{Z}_{n+1}} \sum_{i'\in Z'(i)} \mu^{-k(i+i')/d - m(i-1)/d} = c\, \mu^{-kn/d} \sum_{i\in\mathbb{Z}_{n+1}} \mu^{-m(i-1)/d} \sum_{i'\in Z'(i)} \mu^{-k(i+i'-n)/d}.$$
For any $i \in \mathbb{Z}_{n+1}$,
$$\sum_{i'\in Z'(i)} \mu^{-k(i+i'-n)/d} \le \sum_{l\in\mathbb{N}} \mu^{-kl/d} \le \frac{\mu^{-k/d}}{1 - \mu^{-k/d}},$$
which leads to the fact that there exists a constant $c$ such that
$$\sum_{i\in\mathbb{Z}_{n+1}} \sum_{i'\in Z'(i)} (d_i d_{i'})^k d_{i-1}^m \le \begin{cases} c\, \mu^{-kn/d}, & 0 < m \le k, \\ c\,(n+1)\, \mu^{-kn/d}, & m = 0. \end{cases} \qquad (5.24)$$
Combining the inequalities (5.23) and (5.24) with
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u\| = \sup_{v\in X,\, v\ne 0} \frac{\bigl|\bigl((\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u, v\bigr)\bigr|}{\|v\|},$$
we obtain the estimates of this lemma.



The compactness of $\mathcal{K}$ and the properties of the orthogonal projection $\mathcal{P}_n$ lead to the stability estimate for the operator equation. This, with the second estimate of Lemma 5.19, yields the following theorem about the stability of the truncated equation.

Theorem 5.20  Suppose that conditions (I)–(VI) hold and $K \in C^k(\Omega\times\Omega)$. If the truncated matrix $\tilde{\mathbf{K}}_n$ is chosen as in (5.22), then there exist a positive constant $c_0$ and an integer $N$ such that for all $n \ge N$ and $x \in X_n$,
$$\|(\mathcal{I} - \tilde{\mathcal{K}}_n)x\| \ge c_0\|x\|.$$

We have the following convergence estimate, similar to Theorem 5.13.

Theorem 5.21  Suppose that conditions (I)–(VI) hold and $K \in C^k(\Omega\times\Omega)$. If the truncated matrix $\tilde{\mathbf{K}}_n$ is chosen as in (5.22), then there exist a positive constant $c$ and an integer $N$ such that for all $n \ge N$,
$$\|u - \tilde{u}_n\| \le c\, s(n)^{-k/d}\, \|u\|_{H^k}.$$

We also have that the condition number of the coefficient matrix $\tilde{\mathbf{A}}_n = \mathbf{E}_n - \tilde{\mathbf{K}}_n$ of the truncated scheme is bounded by a constant independent of $n$.

Theorem 5.22  Suppose that conditions (I)–(VI) hold and $K \in C^k(\Omega\times\Omega)$. If the truncated matrix $\tilde{\mathbf{K}}_n$ is chosen as in (5.22), then the condition number of the coefficient matrix of the truncated approximate equation is bounded; that is, there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,
$$\operatorname{cond}_2(\tilde{\mathbf{A}}_n) \le c.$$

5.4 Bibliographical remarks

Since the 1990s, wavelet and multiscale methods have been developed for solving the Fredholm integral equation of the second kind. The history of fast multiscale solutions of the equation began with the remarkable discovery in [28] that the matrix representation of a singular Fredholm integral operator under a wavelet basis is numerically sparse. This fact was then used in developing the multiscale Galerkin (Petrov–Galerkin) method for solving the Fredholm integral equation; see [5, 64, 68, 88–91, 94, 95, 135, 136, 139, 140, 202, 251, 260, 261] and the references cited therein. Readers are referred to the Introduction of this book for more information. The multiscale piecewise polynomial Petrov–Galerkin, discrete multiscale Petrov–Galerkin and multiscale collocation methods were developed in [64, 68, 69]. We give an in-depth



discussion of these methods in the next two chapters. A numerical implementation issue of the multiscale Galerkin method was considered in [109].

The convergence results presented in this chapter are for smooth solutions. However, solutions of the Fredholm integral equation of the second kind with weakly singular kernels may not be smooth. For the case when the solution is not smooth, a fast singularity-preserving multiscale Galerkin method was developed in [46] for solving weakly singular Fredholm integral equations of the second kind. This method was designed based on the singularity-preserving Galerkin method introduced originally in [41] and a matrix truncation strategy similar to the one discussed in Section 5.2.

There are several fast methods in the literature for solving the Fredholm integral equation of the second kind which are closely related to the fast multiscale method. They include the fast multipole method, the panel clustering method and the method of sparse grids. The fast multipole method [114, 115, 235, 250] was originally introduced by V. Rokhlin and L. Greengard based on the multipole expansion. It effectively reduces the computational complexity involving a certain type of dense matrix which can arise out of many physical systems. The panel clustering method, proposed by W. Hackbusch and Z. Nowak, also significantly lessens the computational complexity (see, for example, [124, 125]). For the method of sparse grids, readers are referred to [36] and the references cited therein. Fast Fourier–Galerkin methods, developed in [37, 53, 154, 155, 263] for solving boundary integral equations, are special cases of the method of sparse grids. Fast methods for solving Fredholm integral equations of the second kind in high dimensions were developed in [272] and [102], respectively, based on a combination technique and lattice integration.


6

Multiscale Petrov–Galerkin methods

This chapter is devoted to presenting multiscale Petrov–Galerkin methods for solving Fredholm integral equations of the second kind. In a manner similar to the Galerkin method, the Petrov–Galerkin method also suffers from the density of the coefficient matrix of its resulting linear system. We show that, with the multiscale basis, the Petrov–Galerkin method leads to a linear system having a numerically sparse coefficient matrix. We propose a matrix compression scheme for solving the linear system and prove that it almost preserves the optimal convergence order of the numerical solution that the original Petrov–Galerkin method enjoys, while reducing the computational complexity from the square order to the quasi-linear order. We also present the discrete version of the multiscale Petrov–Galerkin method, which further treats the nonzero entries of the compressed coefficient matrix resulting from the multiscale Petrov–Galerkin method by using the product integration method. We call this method the discrete multiscale Petrov–Galerkin method.

In Section 6.1 we first present the development of the multiscale Petrov–Galerkin method and its analysis. We then discuss in Section 6.2 the discrete multiscale Petrov–Galerkin method.

6.1 Fast multiscale Petrov–Galerkin methods

In this section we describe the construction of two sequences of multiscale bases for trial and test spaces and use them to develop multiscale Petrov–Galerkin methods for solving second-kind integral equations.




6.1.1 Multiscale bases for Petrov–Galerkin methods

We first review a special case of the recursive construction given in Chapter 4 for piecewise polynomial spaces on $\Omega = [0, 1]$, which can be used to develop a multiscale Petrov–Galerkin scheme.

We start with positive integers $k$, $k'$, $\nu$ and $\mu$ which satisfy $k\nu = k'\mu$ and $k' \le k$. We choose our initial trial space and test space to be $X_0 = S^k_\nu$ and $Y_0 = S^{k'}_\mu$, and thereafter we recursively divide the corresponding subintervals into $\mu$ pieces to obtain two sequences of subspaces
$$X_n = S^k_{\nu\mu^n}, \qquad Y_n = S^{k'}_{\mu^{n+1}}, \qquad n \in \mathbb{N}_0.$$
These spaces are referred to as the $(k, k')$ element spaces. We have that
$$\dim X_n = \dim Y_n, \qquad X_n \subset X_{n+1}, \qquad Y_n \subset Y_{n+1}, \qquad n \in \mathbb{N}_0,$$
and
$$\overline{\bigcup_{n\in\mathbb{N}_0} X_n} = \overline{\bigcup_{n\in\mathbb{N}_0} Y_n} = L^2(\Omega).$$
Moreover, $\{X_n, Y_n\}$ forms a regular pair (see Definition 2.30).

We use $\{f_{ij} : (i,j) \in U_n\}$ and $\{h_{ij} : (i,j) \in U_n\}$ for the associated multiscale bases of $X_n$ and $Y_n$, respectively, where $U_n = \{(i,j) : i \in \mathbb{Z}_{n+1},\ j \in \mathbb{Z}_{w(i)}\}$ with $w(0) = k\nu = k'\mu$ and $w(i) = k\nu(\mu-1)\mu^{i-1} = k'(\mu-1)\mu^i$, $i \in \mathbb{N}$, for the given $k, \nu, k', \mu \in \mathbb{N}$. These bases can be constructed recursively by the method described in Section 4.1 such that both $\{f_{ij} : (i,j) \in U\}$ and $\{h_{ij} : (i,j) \in U\}$ are orthonormal bases of $X = L^2[0,1]$ having some important properties, such as the vanishing moment conditions
$$\int_0^1 t^\ell f_{ij}(t)\, dt = 0, \qquad \ell \in \mathbb{Z}_k,\ j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{N},$$
$$\int_0^1 t^\ell h_{ij}(t)\, dt = 0, \qquad \ell \in \mathbb{Z}_{k'},\ j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{N},$$
and the compact support properties
$$\operatorname{meas}(\operatorname{supp} f_{ij}) \le \frac{1}{\mu^{i-1}}, \qquad \operatorname{meas}(\operatorname{supp} h_{ij}) \le \frac{1}{\mu^{i-1}}, \qquad j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{N}.$$
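The relation $k\nu = k'\mu$ is exactly what makes the trial and test spaces match in dimension at every level. A small check, using the hypothetical example values $k=2$, $k'=1$, $\nu=2$, $\mu=4$ (any choice with $k\nu = k'\mu$ would do):

```python
# toy check of the (k, k') element-space dimensions; k = 2, k' = 1, nu = 2,
# mu = 4 is one example satisfying k * nu = k' * mu
k, kp, nu, mu = 2, 1, 2, 4
assert k * nu == kp * mu

def w(i):
    return k * nu if i == 0 else k * nu * (mu - 1) * mu ** (i - 1)

for n in range(6):
    dimX = k * nu * mu ** n        # dim S^k_{nu mu^n}
    dimY = kp * mu ** (n + 1)      # dim S^{k'}_{mu^{n+1}}
    assert dimX == dimY == sum(w(i) for i in range(n + 1))
    # the alternative form w(i) = k'(mu - 1) mu^i agrees as well
    assert all(w(i) == kp * (mu - 1) * mu ** i for i in range(1, n + 1))
```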

The vanishing moment conditions play an important role in developing truncated schemes (see Chapter 5). It is therefore desirable to raise the order of the vanishing moments of the $h_{ij}$ to $k$ when $k' < k$. This can be done as follows.



We first choose a basis $\{g_{0j} : j \in \mathbb{Z}_{w(0)}\}$ for $Y_0$ which is bi-orthogonal to $\{f_{0j} : j \in \mathbb{Z}_{w(0)}\}$, that is,
$$(f_{0j'}, g_{0j}) = \delta_{jj'}, \qquad j, j' \in \mathbb{Z}_{w(0)}.$$
Then, for $j \in \mathbb{Z}_{w(1)}$, we find a vector $[c_{js} : s \in \mathbb{Z}_{s(1)}] \in \mathbb{R}^{s(1)}$, where $s(i) = \dim Y_i$, $i \in \mathbb{N}_0$, such that
$$g_{1j} = \sum_{s\in\mathbb{Z}_{w(0)}} c_{js} h_{0s} + \sum_{s\in\mathbb{Z}_{w(1)}} c_{j,w(0)+s} h_{1s}, \qquad j \in \mathbb{Z}_{w(1)},$$
satisfies the equations
$$(f_{0j'}, g_{1j}) = 0, \qquad j' \in \mathbb{Z}_{w(0)},$$
and
$$(f_{1j'}, g_{1j}) = \delta_{jj'}, \qquad j' \in \mathbb{Z}_{w(1)}.$$
Noting that the matrix of order $s(1)$ of this linear system of equations is
$$\mathbf{H} = [(f_{i'j'}, h_{ij}) : (i',j'), (i,j) \in U_1]$$
and that $\{X_n, Y_n\}$ forms a regular pair, we conclude that $\mathbf{H}$ is nonsingular. Thus there exists a unique solution satisfying the above equations. It can easily be verified that the functions $g_{1j}$, $j \in \mathbb{Z}_{w(1)}$, are linearly independent and $Y_1 = \operatorname{span}\{g_{ij} : (i,j) \in U_1\}$. Using the isometry operator $T_\varepsilon$ (see (4.2)), we define recursively for $i \in \mathbb{N}$
$$g_{i+1,j} = T_\varepsilon g_{il},$$
where $j = \varepsilon w(i) + l$, $\varepsilon \in \mathbb{Z}_\mu$, $l \in \mathbb{Z}_{w(i)}$. Then we have that $Y_n = \operatorname{span}\{g_{ij} : (i,j) \in U_n\}$ for $n \in \mathbb{N}_0$. Defining
$$W_i = \operatorname{span}\{f_{ij} : j \in \mathbb{Z}_{w(i)}\} \quad \text{and} \quad V_i = \operatorname{span}\{g_{ij} : j \in \mathbb{Z}_{w(i)}\}, \qquad i \in \mathbb{N}_0,$$
we have that
$$X_n = \bigoplus_{i\in\mathbb{Z}_{n+1}}{}^{\!\perp}\, W_i \quad \text{and} \quad Y_n = \bigoplus_{i\in\mathbb{Z}_{n+1}} V_i, \qquad n \in \mathbb{N}_0.$$
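The construction above reduces to solving a linear system with the Gram matrix $\mathbf{H}$. The generic sketch below (all data random and purely illustrative, not the book's actual bases) mimics this step: given discretized bases $\{f_j\}$ and $\{h_s\}$, the coefficient vectors of the bi-orthogonal functions $g_j = \sum_s c_{js} h_s$ are obtained by inverting $H[j', s] = (f_{j'}, h_s)$.

```python
import numpy as np

# generic bi-orthogonalization sketch with random toy data: find g_j in
# span{h_s} with (f_{j'}, g_j) = delta_{j j'} by solving H c_j = e_j,
# where H[j', s] = (f_{j'}, h_s)
rng = np.random.default_rng(1)
m, npts = 5, 200
F = rng.standard_normal((m, npts))    # rows: discretized f_j
Hb = rng.standard_normal((m, npts))   # rows: discretized h_s
H = F @ Hb.T / npts                   # discrete inner products (f_{j'}, h_s)

C = np.linalg.inv(H)                  # column j holds the coefficients c_j
G = C.T @ Hb                          # rows: discretized g_j = sum_s c_{js} h_s
assert np.allclose(F @ G.T / npts, np.eye(m), atol=1e-8)
```

In the text, the nonsingularity of $\mathbf{H}$ is guaranteed by the regular-pair property; in this toy version it holds generically for random data.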

Proposition 6.1  The multiscale bases $\{f_{ij} : (i,j) \in U\}$ and $\{g_{ij} : (i,j) \in U\}$ have the following properties.

(I) There exist positive integers $\rho$ and $r$ such that for every $i > r$ and $j \in \mathbb{Z}_{w(i)}$, written in the form $j = \nu\rho + s$ where $s \in \mathbb{Z}_\rho$ and $\nu \in \mathbb{N}_0$,
$$f_{ij}(x) = 0, \quad g_{ij}(x) = 0, \qquad x \notin \Omega_{i-r,\nu}.$$
Setting $S_{ij} = \Omega_{i-r,\nu}$, the supports of $f_{ij}$ and $g_{ij}$ are then contained in $S_{ij}$.

(II) For any $(i,j), (i',j') \in U$,
$$(f_{ij}, f_{i'j'}) = \delta_{i'i}\delta_{j'j}.$$

(III) For any $(i,j), (i',j') \in U$ with $i' \ge i$,
$$(f_{ij}, g_{i'j'}) = \delta_{i'i}\delta_{j'j}.$$

(IV) For any $(i,j) \in U$ with $i \ge 1$ and any polynomial $p$ of total degree less than $k$,
$$(f_{ij}, p) = 0, \qquad (g_{ij}, p) = 0.$$

(V) There is a positive constant $c$ such that for any $(i,j) \in U$,
$$\|f_{ij}\|_\infty \le c\,\mu^{i/2} \quad \text{and} \quad \|g_{ij}\|_\infty \le c\,\mu^{i/2}.$$

Set
$$\mathbf{E}_n = [(g_{i'j'}, f_{ij}) : (i',j'), (i,j) \in U_n].$$
It is useful to make the structure of the matrix $\mathbf{E}_n$ explicit.

Lemma 6.2  For any $n \in \mathbb{N}$, the following statements hold.

(i) The matrix $\mathbf{E}_n$ has the form
$$\mathbf{E}_n = \begin{bmatrix} \mathbf{I}_0 & \mathbf{G}_0 & & & \\ & \mathbf{I}_1 & \mathbf{G}_1 & & \\ & & \ddots & \ddots & \\ & & & \ddots & \mathbf{G}_{n-1} \\ & & & & \mathbf{I}_n \end{bmatrix},$$
where $\mathbf{I}_i$, $i \in \mathbb{N}_0$, is the $w(i) \times w(i)$ identity matrix,
$$\mathbf{G}_0 = [(g_{0j'}, f_{1j}) : j' \in \mathbb{Z}_{w(0)},\ j \in \mathbb{Z}_{w(1)}], \qquad \mathbf{G}_1 = [(g_{1j'}, f_{2j}) : j' \in \mathbb{Z}_{w(1)},\ j \in \mathbb{Z}_{w(2)}],$$
and $\mathbf{G}_i$, $i \in \mathbb{N}$, is the block diagonal matrix $\operatorname{diag}(\mathbf{G}_1, \mathbf{G}_1, \ldots, \mathbf{G}_1)$ with $\mu^{i-1}$ diagonal blocks.

(ii) There exists a positive constant $c$ such that
$$\|\mathbf{E}_n\|_2 \le c.$$

Proof  (i) We first partition the matrix $\mathbf{E}_n$ into a block matrix
$$\mathbf{E}_n = [\mathbf{E}_{i'i} : i', i \in \mathbb{Z}_{n+1}],$$
where
$$\mathbf{E}_{i'i} = [(g_{i'j'}, f_{ij}) : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}].$$



It follows from property (III) that
$$\mathbf{E}_{i'i} = \begin{cases} \mathbf{I}_i, & i' = i, \\ \mathbf{0}, & i' > i. \end{cases}$$
When $i \ge i' + 2$, it follows from $g_{i'j'} \in Y_{i'}$, $f_{ij} \in W_i$ and the fact that $Y_{i'} \subseteq X_{i'+1} \perp W_i$ that
$$(g_{i'j'}, f_{ij}) = 0,$$
which means that
$$\mathbf{E}_{i'i} = \mathbf{0} \quad \text{for all } i \ge i' + 2.$$
We finally consider the case $i = i' + 1$. When $i' = 0$ and $i = 1$, it is clear that $\mathbf{E}_{01} = \mathbf{G}_0$. When $i' \ge 1$ and $i = i' + 1$, assume that $g_{i'j'} = T_{e'} g_{1l'}$ and $f_{ij} = T_e f_{2l}$, where $j' = \mu(e')w(1) + l'$ and $j = \mu(e)w(2) + l$ with $e', e \in \mathbb{Z}^{i'-1}_\mu$, $l' \in \mathbb{Z}_{w(1)}$ and $l \in \mathbb{Z}_{w(2)}$. Using Proposition 4.15, we conclude that
$$(g_{i'j'}, f_{ij}) = \delta_{e'e}\,(g_{1l'}, f_{2l}).$$
This means that for $i' \ge 1$, $\mathbf{E}_{i',i'+1} = \mathbf{G}_{i'}$ is the block diagonal matrix $\operatorname{diag}(\mathbf{G}_1, \mathbf{G}_1, \ldots, \mathbf{G}_1)$.

(ii) It is clear from (i) that
$$\|\mathbf{E}_n\|_\infty = \max\{\|\mathbf{G}_i\|_\infty + 1 : i \in \{0, 1\}\}$$
and
$$\|\mathbf{E}_n\|_1 = \max\{\|\mathbf{G}_i\|_1 + 1 : i \in \{0, 1\}\}.$$
Thus we obtain that
$$\|\mathbf{E}_n\|_2 \le c = \max\{\|\mathbf{E}_n\|_\infty,\, \|\mathbf{E}_n\|_1\} = \max\{\|\mathbf{G}_i\|_l + 1 : i \in \{0, 1\},\ l \in \{1, \infty\}\}.$$
This completes the proof.
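The block upper-bidiagonal structure in part (i) makes the norm identities in part (ii) transparent. A small numerical sketch with random toy blocks (hypothetical sizes and entries, used only to exercise the identity $\|\mathbf{E}_n\|_\infty = 1 + \max_i \|\mathbf{G}_i\|_\infty$):

```python
import numpy as np

# assemble a toy E_n with identity diagonal blocks and random superdiagonal
# blocks G_i, then check ||E_n||_inf = 1 + max_i ||G_i||_inf
rng = np.random.default_rng(2)
sizes = [2, 4, 8]                                   # toy values of w(i)
G = [rng.standard_normal((sizes[i], sizes[i + 1])) for i in range(2)]

blocks = [[np.zeros((si, sj)) for sj in sizes] for si in sizes]
for i, s in enumerate(sizes):
    blocks[i][i] = np.eye(s)
for i, g in enumerate(G):
    blocks[i][i + 1] = g
E = np.block(blocks)

expected = 1 + max(np.linalg.norm(g, np.inf) for g in G)
assert np.isclose(np.linalg.norm(E, np.inf), expected)
```

Each block row contributes a row sum of $1$ (from the identity) plus the corresponding row sum of $|\mathbf{G}_i|$, which is exactly the identity used in the proof.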

To estimate the norm of an element $u \in X_n$ or $v \in Y_n$, we introduce a sequence of functions $\{\xi_{ij} : (i,j) \in U\}$ which is bi-orthogonal to $\{g_{ij} : (i,j) \in U\}$. To obtain the sequence, we can find $\xi_{ij} \in X_i$, $(i,j) \in U_1$, such that
$$(g_{i'j'}, \xi_{ij}) = \delta_{i'i}\delta_{j'j}, \qquad (i',j'), (i,j) \in U_1,$$
and then set
$$\xi_{ij} = T_e \xi_{1l}, \qquad j = \mu(e)w(1) + l,\ e \in \mathbb{Z}^{i-1}_\mu,\ l \in \mathbb{Z}_{w(1)}.$$



Using this sequence, we have for $v \in Y_n$ that
$$v = \sum_{(i,j)\in U_n} v_{ij} g_{ij}$$
with $v_{ij} = \langle v, \xi_{ij} \rangle$. Let
$$\boldsymbol{\Xi}_n = [(\xi_{i'j'}, \xi_{ij}) : (i',j'), (i,j) \in U_n].$$

Lemma 6.3  There exists a positive constant $c$ such that
$$\|\boldsymbol{\Xi}_n\|_2 \le c.$$

Proof  We first estimate the entries of the matrix $\boldsymbol{\Xi}_n$. The fact that $\{\xi_{ij} : (i,j) \in U\}$ is bi-orthogonal to $\{g_{ij} : (i,j) \in U\}$ implies that $\xi_{ij}$, $i \in \mathbb{N}$, has vanishing moments of order $k'$. For $i' \ge i$ and $i \in \mathbb{Z}_2$, let $t_0$ be the center of the set $S_{i'j'}$ and write $\xi_{ij} = \sum_{m\in\mathbb{Z}_k} c_m (t - t_0)^m$ on $S_{i'j'}$. There exists a positive constant $c$ such that
$$|(\xi_{i'j'}, \xi_{ij})| \le c\, d(S_{i'j'})^{k'} \int_{S_{i'j'}} |\xi_{i'j'}(t)|\, dt \le c\, d(S_{i'j'})^{k'+\frac{1}{2}}\, \|\xi_{i'j'}\| \le c\, \mu^{-i'(k'+\frac{1}{2})}.$$
When $i' \ge i > 1$, there exist $e', e \in \mathbb{Z}^{i-1}_\mu$, $l' \in \mathbb{Z}_{w(i'-i+1)}$ and $l \in \mathbb{Z}_{w(1)}$ such that
$$j' = \mu(e')w(i'-i+1) + l', \qquad j = \mu(e)w(1) + l,$$
and
$$\xi_{i'j'} = T_{e'} \xi_{i'-i+1,l'}, \qquad \xi_{ij} = T_e \xi_{1l}.$$
Thus
$$|(\xi_{i'j'}, \xi_{ij})| = \delta_{e'e}\, |(\xi_{i'-i+1,l'}, \xi_{1l})| \le c\, \delta_{e'e}\, \mu^{-(i'-i+1)(k'+\frac{1}{2})}.$$
Combining the above estimates, we obtain for $(i,j), (i',j') \in U_n$ that
$$|(\xi_{i'j'}, \xi_{ij})| \le c\, \delta_{e'e}\, \mu^{-|i'-i|(k'+\frac{1}{2})},$$
where $e', e \in \mathbb{Z}^{|i'-i|}_\mu$.

We next partition $\boldsymbol{\Xi}_n$ into a block matrix
$$\boldsymbol{\Xi}_n = [\boldsymbol{\Xi}_{i'i} : i', i \in \mathbb{Z}_{n+1}]$$
with
$$\boldsymbol{\Xi}_{i'i} = [(\xi_{i'j'}, \xi_{ij}) : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}],$$
and estimate the norms of these blocks. It can be seen that
$$\|\boldsymbol{\Xi}_{i'i}\|_\infty = \max_{j'\in\mathbb{Z}_{w(i')}} \sum_{j\in\mathbb{Z}_{w(i)}} |(\xi_{i'j'}, \xi_{ij})| \le c\, w(|i'-i|)\, \mu^{-|i'-i|(k'+\frac{1}{2})} \le c\, \mu^{-|i'-i|(k'-\frac{1}{2})}.$$
We can now estimate the norm of the matrix $\boldsymbol{\Xi}_n$. Using the above inequality, we have that
$$\|\boldsymbol{\Xi}_n\|_1 = \|\boldsymbol{\Xi}_n\|_\infty \le \max_{i'\in\mathbb{Z}_{n+1}} \sum_{i\in\mathbb{Z}_{n+1}} \|\boldsymbol{\Xi}_{i'i}\|_\infty \le \frac{2c}{1 - \mu^{-(k'-\frac{1}{2})}},$$
which leads to the desired result of this lemma.

Using the above lemmas, we can verify the following proposition.

Proposition 6.4  There exist two positive constants $c_-$ and $c_+$ such that for all $n \in \mathbb{N}_0$, $u \in X_n$ of the form $u = \sum_{(i,j)\in U_n} u_{ij} f_{ij} = \sum_{(i,j)\in U_n} \hat{u}_{ij}\, \xi_{ij}$ and $v \in Y_n$ of the form $v = \sum_{(i,j)\in U_n} v_{ij} g_{ij}$,
$$\|u\| = \|\mathbf{u}\|_2, \qquad (6.1)$$
$$c_- \|u\| \le \|\hat{\mathbf{u}}\|_2 \le c_+ \|u\|, \qquad (6.2)$$
and
$$c_- \|v\| \le \|\mathbf{v}\|_2 \le c_+ \|v\|, \qquad (6.3)$$
where $\mathbf{u} = [u_{ij} : (i,j) \in U_n]$, $\hat{\mathbf{u}} = [\hat{u}_{ij} : (i,j) \in U_n]$ and $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$.

Proof  Recall that $\{f_{ij} : (i,j) \in U\}$ is an orthonormal basis of $X$ and $\{\xi_{ij} : (i,j) \in U\}$ is bi-orthogonal to $\{g_{ij} : (i,j) \in U\}$. Therefore, for $(i,j) \in U_n$, $u_{ij} = (u, f_{ij})$, $\hat{u}_{ij} = (u, g_{ij})$ and $v_{ij} = (v, \xi_{ij})$. Moreover, equation (6.1) holds. It can easily be verified that $\|u\|^2 = \hat{\mathbf{u}}^T \boldsymbol{\Xi}_n \hat{\mathbf{u}}$ and $\hat{\mathbf{u}} = \mathbf{E}_n \mathbf{u}$. Using Lemmas 6.2 and 6.3 and (6.1), we have that
$$\|u\| \le \bigl( \|\boldsymbol{\Xi}_n\|_2\, \|\hat{\mathbf{u}}\|_2^2 \bigr)^{1/2} \le c\,\|\hat{\mathbf{u}}\|_2 \quad \text{and} \quad \|\hat{\mathbf{u}}\|_2 \le \|\mathbf{E}_n\|_2\, \|\mathbf{u}\|_2 \le c\|u\|,$$
which yield (6.2).

Noting that $Y_n \subseteq X_{n+1}$, any $v \in Y_n$ can be expressed as
$$v = \sum_{(i,j)\in U_{n+1}} (v, g_{ij})\, \xi_{ij}.$$
Thus we have that
$$\mathbf{v} = \boldsymbol{\Xi}_{n+1} \bar{\mathbf{v}},$$
where $\mathbf{v} = [(v, \xi_{ij}) : (i,j) \in U_{n+1}]$ and $\bar{\mathbf{v}} = [(v, g_{ij}) : (i,j) \in U_{n+1}]$. By Lemma 6.3 and (6.2), we conclude that
$$\|\mathbf{v}\|_2 \le \|\boldsymbol{\Xi}_{n+1}\|_2\, \|\bar{\mathbf{v}}\|_2 \le c\|v\|. \qquad (6.4)$$
On the other hand,
$$\|v\|^2 = \Bigl( \sum_{(i,j)\in U_n} (v, \xi_{ij})\, g_{ij},\ \sum_{(i,j)\in U_{n+1}} (v, g_{ij})\, \xi_{ij} \Bigr) = \sum_{(i,j)\in U_n} (v, \xi_{ij})(v, g_{ij}) \le \|\mathbf{v}\|_2\, \|\bar{\mathbf{v}}\|_2 \le c\, \|\mathbf{v}\|_2\, \|v\|.$$
This, with (6.4), yields (6.3).

6.1.2 Multiscale Petrov–Galerkin methods

We now formulate the Petrov–Galerkin method using multiscale bases for Fredholm integral equations of the second kind, given in the form
$$u - \mathcal{K}u = f, \qquad (6.5)$$
where
$$(\mathcal{K}u)(s) = \int_\Omega K(s, t)\, u(t)\, dt,$$
the function $f \in X = L^2(\Omega)$ and the kernel $K \in L^2(\Omega\times\Omega)$ are given, and $u \in X$ is the unknown function to be determined.

We assume that there are two sequences of multiscale functions $\{f_{ij} : (i,j) \in U\}$ and $\{g_{ij} : (i,j) \in U\}$, where $U = \{(i,j) : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{N}_0\}$, such that the subspaces
$$X_n = \operatorname{span}\{f_{ij} : (i,j) \in U_n\} \quad \text{and} \quad Y_n = \operatorname{span}\{g_{ij} : (i,j) \in U_n\}$$
satisfy condition (H) and $\{X_n, Y_n\}$ forms a regular pair. These bases need not be those constructed in the preceding subsection, but they are required to satisfy the properties listed in Propositions 6.1 and 6.4.

The Petrov–Galerkin method for solving equation (6.5) seeks a vector $\mathbf{u}_n = [u_{ij} : (i,j) \in U_n]$ such that the function
$$u_n = \sum_{(i,j)\in U_n} u_{ij} f_{ij} \in X_n$$
satisfies
$$(g_{i'j'},\, u_n - \mathcal{K}u_n) = (g_{i'j'}, f), \qquad (i',j') \in U_n. \qquad (6.6)$$
Equivalently, we obtain the linear system of equations
$$(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n,$$
where
$$\mathbf{K}_n = [(g_{i'j'}, \mathcal{K}f_{ij}) : (i',j'), (i,j) \in U_n], \qquad \mathbf{E}_n = [(g_{i'j'}, f_{ij}) : (i',j'), (i,j) \in U_n]$$
and
$$\mathbf{f}_n = [(g_{ij}, f) : (i,j) \in U_n].$$
The truncated scheme and its analysis of convergence and computational complexity are nearly the same as for the multiscale Galerkin method; we leave them to the reader. Readers are also referred to the discrete version of the multiscale Petrov–Galerkin methods in the next section.
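As a concrete, deliberately simplified illustration of solving such a linear system, the sketch below uses a single-scale discretization — the degenerate case in which trial and test functions coincide, namely piecewise constants with midpoint quadrature — applied to a toy separable kernel $K(s,t) = st$ whose exact solution is known. The kernel, mesh and right-hand side are all hypothetical demonstration choices.

```python
import numpy as np

# toy second-kind equation u(s) - int_0^1 s*t*u(t) dt = f(s) on [0, 1];
# for u(s) = s we get (Ku)(s) = s/3, hence f(s) = 2s/3
N = 200
h = 1.0 / N
t = (np.arange(N) + 0.5) * h          # cell midpoints

K = np.outer(t, t)                    # kernel values K(t_i, t_j)
f = 2.0 * t / 3.0

A = np.eye(N) - K * h                 # discrete analogue of E_n - K_n
u = np.linalg.solve(A, f)
assert np.max(np.abs(u - t)) < 1e-3   # recovers the exact solution u(s) = s
```

The full multiscale method differs in that the bases are hierarchical and the matrix is compressed to quasi-linear size, but the final step — solving $(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n$ — has the same shape.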

6.2 Discrete multiscale Petrov–Galerkin methods

The compression strategy used in the design of the fast multiscale Petrov–Galerkin method is similar to that of the fast multiscale Galerkin method, and the practical use of the fast multiscale method requires the numerical computation of the integrals appearing in it. Therefore, in this section we turn our attention to discrete multiscale schemes. We develop a discrete multiscale Petrov–Galerkin (DMPG) method for integral equations of the second kind with weakly singular kernels. A compression strategy for designing fast algorithms is suggested, and estimates for the order of convergence and the computational complexity of the method are provided.

We consider in this section the Fredholm integral equation
\[
u - \mathcal{K}u = f, \tag{6.7}
\]
where $\mathcal{K}$ is an integral operator with a weakly singular kernel.

The idea that we use to develop our DMPG method is to combine the discrete Petrov–Galerkin (DPG) method with multiscale bases, so as to exploit both the vanishing-moment property of the multiscale bases and the computational algorithms of the DPG method for singular integrals. Note that the analysis of the DPG method was carried out in [80] in the $L^\infty$-norm, since this norm is natural for discrete methods which use interpolatory projections. However,


for our DMPG method, in order to make use of the vanishing-moment property of the multiscale bases, we have to switch back and forth between the $L^\infty$-norm and the $L^2$-norm to obtain the necessary estimates. We give special attention to this issue.

6.2.1 DPG methods and $L^p$-stability

We review the abstract framework outlined in [80] for the analysis of discrete numerical methods for Fredholm integral equations of the second kind with weakly singular kernels. To this end we let $\mathbb{X}$ be a Banach space with norm $\|\cdot\|$ and $\mathbb{V}$ a subspace of $\mathbb{X}$. We require that $\mathcal{K}:\mathbb{X}\to\mathbb{V}$ be a compact linear operator and that the integral equation (1.4) be uniquely solvable in $\mathbb{X}$ for all $f\in\mathbb{X}$. Note that whenever $f\in\mathbb{V}$, the unique solution of (1.4) is in $\mathbb{V}$. Let $\mathbb{X}_n$, $n\in\mathbb{N}$, be a sequence of finite-dimensional subspaces of $\mathbb{X}$ satisfying
\[
\mathbb{V} \subseteq \overline{\bigcup_{n\in\mathbb{N}} \mathbb{X}_n} \subseteq \mathbb{X}.
\]
Suppose that the operators $\mathcal{K}$ and $\mathcal{I}$ (the identity on $\mathbb{X}$) are approximated by operators $\mathcal{K}_n:\mathbb{X}\to\mathbb{V}$ and $\mathcal{Q}_n:\mathbb{X}\to\mathbb{X}_n$, respectively. Specifically, we assume that $\mathcal{K}_n$ and $\mathcal{Q}_n$ converge pointwise to $\mathcal{K}$ and $\mathcal{I}$, respectively. An approximation scheme for solving equation (1.4) is defined by the equation
\[
(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n)u_n = \mathcal{Q}_n f, \qquad n\in\mathbb{N}. \tag{6.8}
\]
This approximation scheme includes the discrete and nondiscrete versions of the Petrov–Galerkin method, the collocation method and the quadrature method as special cases. Under various conditions elucidated in [80], for $n$ large enough equation (6.8) has a unique solution; we discuss this issue later. Instead we turn to specifying the operators and other related quantities needed for the definition of the DPG method in our current context. In this section we fix $\mathbb{X}=L^\infty(\Omega)$ and $\mathbb{V}=C(\Omega)$ with $\Omega=[0,1]$, and use the following terminology for the singularity.

Definition 6.5 We say a kernel $K(s,t)$, $s,t\in\Omega=[0,1]$, is quasi-weakly singular provided that
\[
\sup_{s\in\Omega} \|K(s,\cdot)\|_1 < \infty
\]
and
\[
\lim_{s'\to s} \|K(s,\cdot) - K(s',\cdot)\|_1 = 0,
\]
where $\|\cdot\|_1$ is the $L^1(\Omega)$-norm on $\Omega$.
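As a concrete instance of Definition 6.5, $K(s,t)=|s-t|^{-\alpha}$ with $0\le\alpha<1$ satisfies the first condition: the closed form $\int_0^1 |s-t|^{-\alpha}\,dt = \bigl(s^{1-\alpha}+(1-s)^{1-\alpha}\bigr)/(1-\alpha)$ is bounded by $2/(1-\alpha)$. A quick numerical check (our own sketch, not from the text):

```python
import numpy as np

# Verify sup_s ||K(s, .)||_1 < infinity for K(s, t) = |s - t|^(-alpha),
# 0 <= alpha < 1, using the closed form
#   int_0^1 |s - t|^(-alpha) dt = (s^(1-a) + (1-s)^(1-a)) / (1 - a).
alpha = 0.5

def l1_norm(s, a=alpha):
    return (s ** (1.0 - a) + (1.0 - s) ** (1.0 - a)) / (1.0 - a)

s_grid = np.linspace(0.0, 1.0, 1001)
sup_norm = l1_norm(s_grid).max()
print(sup_norm)   # maximum 2^alpha/(1-alpha) at s = 1/2, below 2/(1-alpha)
```

The supremum is attained at $s=1/2$ and stays below the bound $2/(1-\alpha)$, so the kernel is quasi-weakly singular.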


It can easily be verified that a weakly singular kernel in the sense of Definition 2.4 with a continuous function $M$ is quasi-weakly singular. Every quasi-weakly singular kernel determines, by the formula
\[
(\mathcal{K}u)(s) = \int_\Omega K(s,t)u(t)\,dt, \qquad s\in\Omega,\ u\in L^\infty(\Omega), \tag{6.9}
\]
a compact operator from $L^\infty(\Omega)$ into $C(\Omega)$.

For $n\in\mathbb{N}$ we partition $\Omega$ into $N$ (depending on $n$) subintervals, $\Pi_0 = \{\Delta_i : i\in\mathbb{Z}_N\}$. That is, we have
\[
\Omega = \bigcup_{r\in\mathbb{Z}_N} \Delta_r, \qquad \mathrm{meas}(\Delta_i\cap\Delta_j)=0,\quad i\ne j,\ i,j\in\mathbb{Z}_N.
\]
Moreover, we assume that as $n\to\infty$ the sequence of partition lengths
\[
h = \max\{|\Delta_i| : i\in\mathbb{Z}_N\}
\]
goes to zero. For each $i\in\mathbb{Z}_N$, let $F_i$ denote the linear function that maps the interval $\Omega$ one to one and onto $\Delta_i$. Thus $F_i$ has the form
\[
F_i(t) = |\Delta_i|\,t + b_i, \qquad t\in\Omega,\ i\in\mathbb{Z}_N, \tag{6.10}
\]
for some constant $b_i$. For every partition $\Pi_0$ of $\Omega$ described above and any positive integer $k$, we let $S_k(\Pi_0)$ be the space of all functions defined on $\Omega$ which are continuous from the right and which on each subinterval $\Delta_i$ coincide with a polynomial of degree at most $k-1$ (at the right-most endpoint of $\Omega$ we require that the functions in $S_k(\Pi_0)$ be left continuous).

We use the following mechanism to refine a given fixed partition $\Pi = \{J_j : j\in\mathbb{Z}_\nu\}$ of $\Omega$, chosen independently of $n$. For any $i\in\mathbb{Z}_{N\nu}$, written in the form $i = k\nu + j$, $j\in\mathbb{Z}_\nu$, $k\in\mathbb{Z}_N$, we define the intervals
\[
H_i = F_k(J_j),
\]
which collectively determine the refined partition
\[
\{H_i : i\in\mathbb{Z}_{N\nu}\}.
\]
This partition consists of $N$ "copies" of $\Pi$, each of which is placed on one of the subintervals $\Delta_i$, $i\in\mathbb{Z}_N$. Given two partitions $\Pi_1$ and $\Pi_2$ of $\Omega$ (independent of $n$) and positive integers $k_1$ and $k_2$, we introduce the following trial and test spaces:
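The refinement mechanism, $N$ scaled copies of a fixed partition $\Pi$ placed on the subintervals $\Delta_k$, can be sketched in a few lines. This is our own illustration, assuming uniform $\Delta_k$ (so $F_k(t)=(t+k)/N$) and illustrative breakpoints for $\Pi$:

```python
import numpy as np

# Refine Pi_0 = {Delta_k} by placing a scaled copy of a fixed partition
# Pi = {J_j} on each Delta_k via the affine maps F_k: here Delta_k is
# uniform, F_k(t) = (t + k)/N, and H_{k*nu + j} = F_k(J_j).
def refine(pi_breaks, N):
    pi_breaks = np.asarray(pi_breaks)         # breakpoints of Pi in [0, 1]
    out = []
    for k in range(N):
        out.append((pi_breaks[:-1] + k) / N)  # left endpoints of F_k(J_j)
    return np.append(np.concatenate(out), 1.0)

breaks = refine([0.0, 0.3, 0.7, 1.0], N=4)    # nu = 3 intervals per copy
print(len(breaks))   # N*nu + 1 = 13 breakpoints tiling [0, 1]
```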

\[
\mathbb{X}_n = \{f : f\circ F_i \in S_{k_1}(\Pi_1),\ i\in\mathbb{Z}_N\} =: S_{k_1}(\Pi_0,\Pi_1)
\]
and
\[
\mathbb{Y}_n = S_{k_2}(\Pi_0,\Pi_2),
\]


respectively. These are spaces of piecewise polynomials of degrees $k_1-1$ and $k_2-1$ on the finer partitions induced by $\Pi_1$, $\Pi_2$ and $\Pi_0$. To ensure that the spaces $\mathbb{X}_n$ and $\mathbb{Y}_n$ have the same dimension, we require that
\[
\dim S_{k_1}(\Pi_1) = \dim S_{k_2}(\Pi_2) = \lambda.
\]
We choose bases for $\mathbb{X}_n$ and $\mathbb{Y}_n$ in the following manner. Starting with the spaces
\[
S_{k_1}(\Pi_1) = \mathrm{span}\{\xi_i : i\in\mathbb{Z}_\lambda\}
\]
and
\[
S_{k_2}(\Pi_2) = \mathrm{span}\{\eta_i : i\in\mathbb{Z}_\lambda\},
\]
for $j = \lambda i + \ell$, where $i\in\mathbb{Z}_N$ and $\ell\in\mathbb{Z}_\lambda$, we define the functions
\[
\xi_j = (\xi_\ell\circ F_i^{-1})\chi_{\Delta_i} \quad\text{and}\quad \eta_j = (\eta_\ell\circ F_i^{-1})\chi_{\Delta_i}.
\]
These functions form bases for the spaces $\mathbb{X}_n$ and $\mathbb{Y}_n$, respectively; that is, $\mathbb{X}_n = \mathrm{span}\{\xi_j : j\in\mathbb{Z}_{\lambda N}\}$ and $\mathbb{Y}_n = \mathrm{span}\{\eta_j : j\in\mathbb{Z}_{\lambda N}\}$.

To construct a quadrature formula we introduce a third piecewise polynomial space $S_{k_3}(\Pi_3)$ of dimension $\gamma$, where $\Pi_3$ is yet another partition of $\Omega$ (independent of $n$), and choose distinct points $t_j$, $j\in\mathbb{Z}_\gamma$, in $\Omega$ such that there exist unique functions $\zeta_i\in S_{k_3}(\Pi_3)$, $i\in\mathbb{Z}_\gamma$, satisfying the interpolation conditions $\zeta_i(t_j)=\delta_{ij}$, $i,j\in\mathbb{Z}_\gamma$. The functions $\zeta_i$, $i\in\mathbb{Z}_\gamma$, form a basis for the space $S_{k_3}(\Pi_3)$. As above, for $j=\gamma i+\ell$, where $i\in\mathbb{Z}_N$ and $\ell\in\mathbb{Z}_\gamma$, we define the functions
\[
\zeta_j = (\zeta_\ell\circ F_i^{-1})\chi_{\Delta_i}
\]
and the points
\[
t_j = F_i(t_\ell).
\]
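The cardinal interpolation property $\zeta_i(t_j)=\delta_{ij}$ can be realized concretely with Lagrange polynomials. The sketch below is our own illustration with an arbitrary choice of nodes:

```python
import numpy as np

# Lagrange cardinal functions zeta_i for distinct nodes tau_l: by
# construction zeta_i(tau_j) = delta_ij, the interpolation property
# required of the quadrature basis of S_{k3}(Pi_3).
def cardinal(taus, i):
    taus = np.asarray(taus)
    def zeta(t):
        num = np.prod([t - taus[m] for m in range(len(taus)) if m != i], axis=0)
        den = np.prod([taus[i] - taus[m] for m in range(len(taus)) if m != i])
        return num / den
    return zeta

taus = np.array([0.2, 0.5, 0.9])                 # illustrative nodes, k3 = 3
V = np.array([[cardinal(taus, i)(t) for t in taus] for i in range(3)])
print(np.round(V, 12))   # identity matrix: zeta_i(t_j) = delta_ij
```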

We also introduce the subspace
\[
\mathbb{Q}_n = S_{k_3}(\Pi_0,\Pi_3)
\]
and observe that
\[
\mathbb{Q}_n = \mathrm{span}\{\zeta_j : j\in\mathbb{Z}_{\gamma N}\}.
\]
We define the linear projection $\mathcal{Z}_n:\mathbb{X}\to\mathbb{Q}_n$ by
\[
\mathcal{Z}_n g = \sum_{j\in\mathbb{Z}_{\gamma N}} g(t_j)\zeta_j, \tag{6.11}
\]

where, for a function $g\in L^\infty(\Omega)$, $g(t_j)$ is defined in the sense described in Section 3.5.3, that is, via any norm-preserving bounded linear functional which extends point evaluation at $t_j$ from $C(\Omega)$ to $L^\infty(\Omega)$ (cf. [21]). For any $x,y\in\mathbb{X}$ we introduce the following discrete inner product:
\[
(x,y)_n = \sum_{j\in\mathbb{Z}_{\gamma N}} w_j\, x(t_j)\, y(t_j), \tag{6.12}
\]
where
\[
w_j = \int_\Omega \zeta_j(t)\,dt.
\]
Note that
\[
w_j = |\Delta_i| \int_\Omega \zeta_\ell(t)\,dt = |\Delta_i|\, w_\ell,
\]
where $j=\gamma i+\ell$ with $i\in\mathbb{Z}_N$ and $\ell\in\mathbb{Z}_\gamma$, and for every $\ell\in\mathbb{Z}_\gamma$, $w_\ell = \int_\Omega \zeta_\ell(t)\,dt$. Henceforth we assume that $w_\ell>0$, $\ell\in\mathbb{Z}_\gamma$. In this way $\|x\|_n = (x,x)_n^{1/2}$ is a semi-norm on $\mathbb{X}$.
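The weights $w_j=|\Delta_i|w_\ell$ make $(x,y)_n$ an interpolatory quadrature rule applied to the product $xy$. The sketch below (our own construction, with per-cell Gauss nodes as one valid choice with $w_\ell>0$) checks the scaling and the exactness of the rule for low-degree products:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Discrete inner product (x, y)_n = sum_j w_j x(t_j) y(t_j) on a uniform
# partition {Delta_i}, with Gauss nodes per cell: w_{gamma*i + l} = |Delta_i| w_l.
def discrete_inner(x, y, N, k3):
    g, w_ref = leggauss(k3)                    # nodes/weights on [-1, 1]
    tau = 0.5 * (g + 1.0)                      # reference nodes on [0, 1]
    w = 0.5 * w_ref                            # reference weights w_l > 0
    total = 0.0
    for i in range(N):
        t = (tau + i) / N                      # t_j = F_i(tau_l)
        total += (w / N * x(t) * y(t)).sum()   # w_j = |Delta_i| * w_l
    return total

# Gauss with k3 points integrates x*y exactly when deg(x*y) <= 2*k3 - 1.
val = discrete_inner(lambda t: t ** 2, lambda t: t ** 3, N=4, k3=3)
print(val)   # int_0^1 t^5 dt = 1/6
```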

We now define a pair of operators using the discrete inner product. Specifically, we define the operator $\mathcal{Q}_n: L^\infty(\Omega)\to\mathbb{X}_n$ by requiring
\[
(\mathcal{Q}_n x, y)_n = (x,y)_n, \qquad y\in\mathbb{Y}_n. \tag{6.13}
\]
An element $\mathcal{Q}_n x\in\mathbb{X}_n$ satisfying (6.13) is called the discrete generalized best approximation (DGBA) to $x$ from $\mathbb{X}_n$ with respect to $\mathbb{Y}_n$. Similarly, we let $\mathcal{Q}_n': L^\infty(\Omega)\to\mathbb{Y}_n$ be the discrete generalized best approximation projection from $L^\infty(\Omega)$ onto $\mathbb{Y}_n$ with respect to $\mathbb{X}_n$, defined by the equation
\[
(v, \mathcal{Q}_n' x)_n = (v,x)_n, \qquad v\in\mathbb{X}_n. \tag{6.14}
\]
The following lemma, proved in [80], presents a necessary and sufficient condition for $\mathcal{Q}_n$ and $\mathcal{Q}_n'$ to be well defined. To state this lemma we introduce some matrix notation. Let
\[
\Xi = [\xi_i(t_j) : i\in\mathbb{Z}_\lambda,\ j\in\mathbb{Z}_\gamma], \qquad
\mathrm{H} = [\eta_i(t_j) : i\in\mathbb{Z}_\lambda,\ j\in\mathbb{Z}_\gamma], \qquad
\mathbf{W} = \mathrm{diag}(w_j : j\in\mathbb{Z}_\gamma),
\]
and define the square matrix of order $\lambda$
\[
\mathbf{M} = \Xi\,\mathbf{W}\,\mathrm{H}^{\mathrm{T}}.
\]

Lemma 6.6 Let $x\in L^\infty(\Omega)$. Then the following statements are equivalent:

(i) The discrete generalized best approximation to $x$ from $\mathbb{X}_n$ with respect to $\mathbb{Y}_n$ is well defined.

(ii) The discrete generalized best approximation to $x$ from $\mathbb{Y}_n$ with respect to $\mathbb{X}_n$ is well defined.

(iii) The functions $\xi_i$, $i\in\mathbb{Z}_\lambda$, $\eta_i$, $i\in\mathbb{Z}_\lambda$, and the points $t_i$, $i\in\mathbb{Z}_\gamma$, have the property that $\mathbf{M}$ is nonsingular.

Moreover, under any one of these conditions the operators $\mathcal{Q}_n$ and $\mathcal{Q}_n'$ are uniformly bounded projections with $\mathcal{Q}_n(\mathbb{Y}_n)=\mathbb{X}_n$ and $\mathcal{Q}_n'(\mathbb{X}_n)=\mathbb{Y}_n$.
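Condition (iii) of Lemma 6.6 can be checked numerically for a small concrete choice of bases and points. The sketch below is our own illustration ($\lambda=\gamma=2$; trial basis $\{1,t\}$, test basis the indicators of the two half-intervals, 2-point Gauss nodes); none of these particular choices come from the text:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Check condition (iii) of Lemma 6.6: M = Xi * W * H^T nonsingular, for an
# illustrative choice with lambda = gamma = 2: trial basis {1, t}, test
# basis the indicators of [0, 1/2) and [1/2, 1], 2-point Gauss nodes on [0, 1].
g, w_ref = leggauss(2)
t = 0.5 * (g + 1.0)                        # quadrature points t_j
w = 0.5 * w_ref                            # positive weights w_j

Xi = np.array([np.ones_like(t), t])        # Xi[i, j] = xi_i(t_j)
H = np.array([(t < 0.5) * 1.0,             # H[i, j] = eta_i(t_j)
              (t >= 0.5) * 1.0])
W = np.diag(w)
M = Xi @ W @ H.T
print(np.linalg.det(M))   # nonzero: Q_n and Q_n' are well defined
```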

It remains to define the operator $\mathcal{K}_n$. For this purpose we express the quasi-weakly singular kernel $K$ as the product $K_1K_2$, where $K_1\in C(\Omega\times\Omega)$ and $K_2$ is quasi-weakly singular. Using this factorization we develop a product integration formula that discretizes the kernel $K_1$ but not the kernel $K_2$. Specifically, we define the operator $\mathcal{K}_n: L^\infty(\Omega)\to C(\Omega)$ by the formula
\[
(\mathcal{K}_n x)(s) = \int_\Omega \mathcal{Z}_n\bigl(K_1(s,\cdot)x(\cdot)\bigr)(t)\, K_2(s,t)\,dt, \qquad s\in\Omega,\ x\in L^\infty(\Omega), \tag{6.15}
\]
and note that
\[
(\mathcal{K}_n x)(s) = \sum_{j\in\mathbb{Z}_{\gamma N}} W_j(s)\, K_1(s,t_j)\, x(t_j), \qquad s\in\Omega,
\]
where for all $j\in\mathbb{Z}_{\gamma N}$ we define the function
\[
W_j(s) = \int_\Omega K_2(s,t)\zeta_j(t)\,dt, \qquad s\in\Omega.
\]
With the approximate operators $\mathcal{K}_n$ and $\mathcal{Q}_n$ defined above, equation (6.8) specifies a DPG scheme for solving (6.7).
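The product-integration formula can be made concrete for $K_2(s,t)=|s-t|^{-1/2}$ with piecewise-constant $\zeta_j$ ($k_3=1$), since the moments $W_j(s)=\int_{\Delta_j}|s-t|^{-1/2}\,dt$ have a closed form. This sketch is our own; it evaluates $\mathcal{K}_n x$ for $K_1\equiv 1$ and $x\equiv 1$, a case where the rule is exact:

```python
import numpy as np

# Product integration for (K_n x)(s) = sum_j W_j(s) K1(s, t_j) x(t_j) with
# K2(s, t) = |s - t|^(-1/2): the singular moments W_j(s) are computed
# exactly; only K1 * x is interpolated (here: piecewise constant, k3 = 1).
def W(s, a, b):
    # int_a^b |s - t|^(-1/2) dt, closed form for every position of s
    if s <= a:
        return 2.0 * (np.sqrt(b - s) - np.sqrt(a - s))
    if s >= b:
        return 2.0 * (np.sqrt(s - a) - np.sqrt(s - b))
    return 2.0 * (np.sqrt(s - a) + np.sqrt(b - s))

def Kn(x, K1, s, N):
    edges = np.linspace(0.0, 1.0, N + 1)
    mids = 0.5 * (edges[:-1] + edges[1:])      # nodes t_j for k3 = 1
    return sum(W(s, edges[j], edges[j + 1]) * K1(s, mids[j]) * x(mids[j])
               for j in range(N))

# For K1 = x = 1 the rule is exact:
#   int_0^1 |s - t|^(-1/2) dt = 2*(sqrt(s) + sqrt(1 - s)).
val = Kn(lambda t: 1.0, lambda s, t: 1.0, s=0.5, N=16)
print(val)   # 2*sqrt(2) ~ 2.8284271
```

The singularity never has to be sampled: it is absorbed into the exactly computed moments $W_j(s)$, which is the point of the factorization $K=K_1K_2$.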

We describe two special constructions of the triple of spaces $S_{k_i}(\Pi_i)$, $i=1,2,3$, such that the matrix $\mathbf{M}$ is nonsingular. In the first construction we assume that $k_1\le k_3$ and $k_1=rk_2$, where $r$ is a positive integer, and choose the partitions $\Pi_1=\Pi_3=\{\Omega\}$; that is, $S_{k_1}(\Pi_1)$ and $S_{k_3}(\Pi_3)$ are spaces of polynomials of degree $k_1-1$ and $k_3-1$, respectively. We then choose $k_3$ points $t_0<t_1<\cdots<t_{k_3-1}$ so that the weights $w_i>0$ for $i\in\mathbb{Z}_{k_3}$. Thus in this case $\lambda=k_1$ and $\gamma=k_3$. Now we define the partition $\Pi_2=\{[x_i,x_{i+1}] : i\in\mathbb{Z}_r\}$ of the interval $\Omega$ by letting $x_0=0$, $x_r=1$ and choosing $x_i\in(t_{ik_2-1}, t_{ik_2})$, $i-1\in\mathbb{Z}_{r-1}$. Thus, for $i\in\mathbb{Z}_{r-1}$, each of the first $r-1$ subintervals $[x_i,x_{i+1}]$ contains exactly $k_2$ points $t_{ik_2+j}$, $j\in\mathbb{Z}_{k_2}$, and the last subinterval $[x_{r-1},x_r]$ contains exactly $k_3-k_1+k_2$ points $t_{(r-1)k_2+j}$, $j\in\mathbb{Z}_{k_3-k_1+k_2}$. We call this a type I construction for the triple of spaces $S_{k_i}(\Pi_i)$, $i=1,2,3$.

Proposition 6.7 For type I spaces $S_{k_i}(\Pi_i)$, $i=1,2,3$, $\det(\mathbf{M})\ne 0$ holds.

Proof Let
\[
\Xi\begin{pmatrix}0,\ldots,k_1-1\\ j_1,\ldots,j_{k_1}\end{pmatrix}
\quad\text{and}\quad
\mathrm{H}\begin{pmatrix}0,\ldots,k_1-1\\ j_1,\ldots,j_{k_1}\end{pmatrix}
\]
denote the minors of the matrices $\Xi$ and $\mathrm{H}$ corresponding to the columns $j_1,\ldots,j_{k_1}$, respectively. We have by the Cauchy–Binet formula that
\[
\det(\mathbf{M}) = \sum_{0\le j_1<\cdots<j_{k_1}\le k_3-1} w_{j_1}\cdots w_{j_{k_1}}\,
\Xi\begin{pmatrix}0,\ldots,k_1-1\\ j_1,\ldots,j_{k_1}\end{pmatrix}
\mathrm{H}\begin{pmatrix}0,\ldots,k_1-1\\ j_1,\ldots,j_{k_1}\end{pmatrix}.
\]
Let $\Xi_1$ and $\mathrm{H}_1$ be the submatrices of $\Xi$ and $\mathrm{H}$ consisting of their first $k_1$ columns, respectively. Since the triple of spaces $S_{k_i}(\Pi_i)$, $i=1,2,3$, is a type I construction, both of these matrices are invertible, and hence it follows that
\[
\det(\mathbf{M}) \ge \det(\Xi_1)\det(\mathrm{H}_1)\, w_0\cdots w_{k_1-1} > 0.
\]

Note that for a type I construction the partition $\Pi_2$ may not be equally spaced. As our multiscale construction requires this property for both $\Pi_1$ and $\Pi_2$, we present a second construction which guarantees that this is the case. For this purpose we again assume that $k_1=rk_2$, where $r$ is a positive integer, and that $S_{k_1}(\Pi_1)$ is the space of polynomials of degree $k_1-1$ on $\Omega$. For the remaining spline spaces we divide the interval $\Omega$ into $r$ equally spaced subintervals, that is,
\[
\Pi_2 = \Pi_3 = \left\{\left[\frac{i}{r}, \frac{i+1}{r}\right] : i\in\mathbb{Z}_r\right\}.
\]
Hence in this case $\lambda=k_1$ and $\gamma=rk_3$. We choose the points $t_j$, $j\in\mathbb{Z}_{rk_3}$, in the following way. On each interval $[\frac{i}{r},\frac{i+1}{r}]$, $i\in\mathbb{Z}_r$, we choose the zeros of the Legendre polynomial of degree $k_3$ $({\ge}\,k_1/2)$ shifted from $[-1,1]$ to the interval $[\frac{i}{r},\frac{i+1}{r}]$, and we number these points increasingly by $t_i$, $i\in\mathbb{Z}_{rk_3}$. With this choice we have $w_i>0$, $i\in\mathbb{Z}_{rk_3}$. We call this a type II construction.
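The type II point sets can be generated from Gauss–Legendre nodes, since the zeros of the Legendre polynomial of degree $k_3$ on $[-1,1]$, shifted to $[\frac{i}{r},\frac{i+1}{r}]$, come with positive weights. A small sketch of our own using numpy:

```python
import numpy as np
from numpy.polynomial.legendre import leggauss

# Type II nodes: on each of the r equal subintervals [i/r, (i+1)/r], take
# the zeros of the Legendre polynomial of degree k3, shifted from [-1, 1].
def type2_nodes(r, k3):
    g, w_ref = leggauss(k3)                 # Legendre zeros and Gauss weights
    nodes, weights = [], []
    for i in range(r):
        a, b = i / r, (i + 1) / r
        nodes.append(0.5 * (b - a) * (g + 1.0) + a)
        weights.append(0.5 * (b - a) * w_ref)
    return np.concatenate(nodes), np.concatenate(weights)

t, w = type2_nodes(r=2, k3=3)
print(t.round(4))        # gamma = r*k3 = 6 increasing points in (0, 1)
print(bool((w > 0).all()))   # all weights positive, as required
```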

Proposition 6.8 For type II spaces $S_{k_i}(\Pi_i)$, $i=1,2,3$, $\det(\mathbf{M})\ne 0$ holds.

Proof By construction, we conclude by an argument similar to that used in the proof of Proposition 6.7 that the determinant of the matrix $\mathbf{M}$ is positive.

We say that the triple of spaces $S_{k_i}(\Pi_i)$, $i=1,2,3$, forms an acceptable triple provided that the matrix $\mathbf{M}$ is nonsingular.

According to Lemma 6.6, the discrete approximate operators $\mathcal{Q}_n$ and $\mathcal{Q}_n'$ are bounded on $L^\infty(\Omega)$ uniformly in $n\in\mathbb{N}$. For the analysis of the DMPG method we need similar properties on $L^2(\Omega)$. Our first comment is of a general nature. To this end, let $\mathbb{P} = S_k(\Pi_0,\Pi)$, where $\Pi$ is a fixed partition of $\Omega$ chosen independently of $n$. For any linear operator $\mathcal{A}$ such that $\mathcal{A}: L^p(\Omega)\to L^\infty(\Omega)$, $1\le p\le\infty$, we set
\[
\|\mathcal{A}\|_{p,\mathbb{P}} = \sup\{\|\mathcal{A}x\|_p : x\in\mathbb{P},\ \|x\|_p=1\}.
\]


The next lemma gives a condition on an operator $\mathcal{A}$ under which $\|\mathcal{A}\|_{p,\mathbb{P}}$ can be bounded by a constant independent of $n$ times $\|\mathcal{A}\|_{\infty,\mathbb{P}}$.

Lemma 6.9 Let $\mathcal{A}: L^\infty(\Omega)\to L^\infty(\Omega)$ be a linear operator such that for $x\in\mathbb{P}$ there holds
\[
(\mathcal{A}x)\chi_{\Delta_i} = \mathcal{A}(x\chi_{\Delta_i}), \qquad i\in\mathbb{Z}_N, \tag{6.16}
\]
where $\chi_{\Delta_i}$ is the characteristic function of the set $\Delta_i$. Then for any $1\le p\le\infty$ there exists a positive constant $\alpha$ depending only on $k$ and $\Pi$ such that
\[
\|\mathcal{A}\|_{p,\mathbb{P}} \le \alpha\|\mathcal{A}\|_{\infty,\mathbb{P}}.
\]

Proof For $v\in L^p(\Delta_i)$ we have that
\[
\|v\|_{L^p(\Delta_i)} = |\Delta_i|^{1/p}\,\|v\circ F_i\|_{L^p(\Omega)}. \tag{6.17}
\]
We conclude from (6.17) that for $x\in\mathbb{P}$,
\[
\|\mathcal{A}x\|_{L^p(\Delta_i)} = |\Delta_i|^{1/p}\|(\mathcal{A}x)\circ F_i\|_{L^p(\Omega)} \le |\Delta_i|^{1/p}\|(\mathcal{A}x)\circ F_i\|_{L^\infty(\Omega)}.
\]
Moreover, using assumption (6.16), we obtain that
\[
\|(\mathcal{A}x)\circ F_i\|_{L^\infty(\Omega)} = \|(\mathcal{A}x)\chi_{\Delta_i}\|_{L^\infty(\Omega)} = \|\mathcal{A}(x\chi_{\Delta_i})\|_{L^\infty(\Omega)}.
\]
Combining this fact with (6.17) gives us the inequality
\[
\|\mathcal{A}x\|_{L^p(\Delta_i)} \le \|\mathcal{A}\|_{\infty,\mathbb{P}}\,|\Delta_i|^{1/p}\,\|x\circ F_i\|_{L^\infty(\Omega)}.
\]
For any $i\in\mathbb{Z}_N$, $\{x\circ F_i : x\in\mathbb{P}\} = S_k(\Pi)$, and so there is a positive constant $\alpha$ depending only on $k$ and $\Pi$ such that
\[
\|x\circ F_i\|_{L^\infty(\Omega)} \le \alpha\|x\circ F_i\|_{L^p(\Omega)}.
\]
Therefore we have confirmed for $x\in\mathbb{P}$ and $i\in\mathbb{Z}_N$ that
\[
\|\mathcal{A}x\|_{L^p(\Delta_i)} \le \alpha\|\mathcal{A}\|_{\infty,\mathbb{P}}\,\|x\|_{L^p(\Delta_i)}.
\]
Summing both sides of the inequality over all $i\in\mathbb{Z}_N$ completes the proof.

Lemma 6.10 For any $1\le p\le\infty$ there exists a constant $c>0$ such that for $n\in\mathbb{N}$,
\[
\|\mathcal{Q}_n\|_{p,\mathbb{X}_n} + \|\mathcal{Q}_n\|_{p,\mathbb{Y}_n} \le c, \qquad
\|\mathcal{Q}_n'\|_{p,\mathbb{X}_n} + \|\mathcal{Q}_n'\|_{p,\mathbb{Y}_n} \le c.
\]

Proof We employ Lemma 6.9 to prove this result. In view of Lemma 6.6, the operators $\mathcal{Q}_n$ are uniformly bounded in $L^\infty$. Therefore it suffices to verify that the following conditions are satisfied:
\[
(\mathcal{Q}_n x)\chi_{\Delta_i} = \mathcal{Q}_n(x\chi_{\Delta_i}), \qquad x\in\mathbb{X}_n,
\]
and
\[
(\mathcal{Q}_n y)\chi_{\Delta_i} = \mathcal{Q}_n(y\chi_{\Delta_i}), \qquad y\in\mathbb{Y}_n.
\]
To prove this fact it is useful to introduce another discrete inner product by the formula
\[
[x,y] = \sum_{\ell\in\mathbb{Z}_\gamma} w_\ell\, x(t_\ell)\, y(t_\ell)
\]
and, corresponding to this inner product, the DGBA $\mathcal{Q}x$ to $x\in L^\infty(\Omega)$ from $S_{k_1}(\Pi_1)$ with respect to $S_{k_2}(\Pi_2)$. Then, by definition, we conclude for any $i\in\mathbb{Z}_N$ and $x\in L^\infty(\Omega)$ that
\[
(\mathcal{Q}_n x)\circ F_i = \mathcal{Q}(x\circ F_i).
\]
This proves the first estimate; the second estimate can be obtained similarly.

Let $\mathcal{X}_n: L^2(\Omega)\to\mathbb{X}_n$ and $\mathcal{Y}_n: L^2(\Omega)\to\mathbb{Y}_n$ be the orthogonal projections from $L^2(\Omega)$ onto $\mathbb{X}_n$ and $\mathbb{Y}_n$, respectively.

Lemma 6.11 Let $K$ be a quasi-weakly singular kernel in the factored form $K=K_1K_2$, where $K_2$ is quasi-weakly singular,
\[
\sup_{t\in\Omega}\|K_2(\cdot,t)\|_1 < \infty,
\]
and $K_1$ is continuous on $\Omega\times\Omega$. Then the set of operators $\{\mathcal{K}_n\mathcal{X}_n : n\in\mathbb{N}\}$ is uniformly bounded in the space $L^2(\Omega)$.

Proof To prove the uniform boundedness of the sequence of operators $\mathcal{K}_n\mathcal{X}_n$, for every $f\in L^2(\Omega)$ we introduce the function
\[
e(t) = \sum_{j\in\mathbb{Z}_{\gamma N}} |(\mathcal{X}_n f)(t_j)|\,|\zeta_j(t)|, \qquad t\in\Omega,
\]
which may be rewritten as
\[
e(t) = \sum_{i\in\mathbb{Z}_N}\sum_{\ell\in\mathbb{Z}_\gamma} |(\mathcal{X}_n f)(F_i(t_\ell))|\,|(\zeta_\ell\circ F_i^{-1})\chi_{\Delta_i}(t)|.
\]
We observe for any $s\in\Omega$ that
\[
|(\mathcal{K}_n\mathcal{X}_n f)(s)| \le \|K_1\|_\infty\,|(\mathcal{G}e)(s)|, \tag{6.18}
\]
where $\|K_1\|_\infty$ is the $L^\infty$-norm of $K_1$ on $\Omega\times\Omega$ and $\mathcal{G}$ is the integral operator with kernel $G=|K_2|$. By the Cauchy–Schwarz inequality we conclude that
\[
\|\mathcal{G}e\|_2 \le \beta\|e\|_2, \tag{6.19}
\]
where
\[
\beta = \frac{1}{2}\left[\sup_{s\in\Omega}\|K_2(s,\cdot)\|_1 + \sup_{t\in\Omega}\|K_2(\cdot,t)\|_1\right].
\]
It remains to bound $\|e\|_2$. To this end we note that
\[
(\zeta_\ell\circ F_i^{-1})\chi_{\Delta_i}\,(\zeta_{\ell'}\circ F_{i'}^{-1})\chi_{\Delta_{i'}} = 0 \quad\text{if } i\ne i'.
\]
It follows that
\[
(e(t))^2 = \sum_{i\in\mathbb{Z}_N}\left(\sum_{\ell\in\mathbb{Z}_\gamma} \bigl|(\mathcal{X}_n f)(F_i(t_\ell))\bigr|\,\bigl|(\zeta_\ell\circ F_i^{-1})\chi_{\Delta_i}(t)\bigr|\right)^2,
\]
and thus by the Cauchy–Schwarz inequality we have that
\[
\|e\|_2^2 \le \sum_{i\in\mathbb{Z}_N}\sum_{\ell\in\mathbb{Z}_\gamma} \bigl|(\mathcal{X}_n f)(F_i(t_\ell))\bigr|^2 \sum_{\ell\in\mathbb{Z}_\gamma}\int_\Omega \bigl|(\zeta_\ell\circ F_i^{-1})\chi_{\Delta_i}(t)\bigr|^2\,dt.
\]
Note that for each $i\in\mathbb{Z}_N$ and $f\in L^2(\Omega)$,
\[
(\mathcal{X}_n f)\circ F_i = \mathcal{X}(f\circ F_i),
\]
where $\mathcal{X}$ is the orthogonal projection of $L^2(\Omega)$ onto $S_{k_1}(\Pi_1)$. We conclude that
\[
\|e\|_2^2 \le \mu\sum_{i\in\mathbb{Z}_N}\sum_{j\in\mathbb{Z}_\gamma} |\mathcal{X}(f\circ F_i)(t_j)|^2\,|\Delta_i|,
\]
where
\[
\mu = \sum_{j\in\mathbb{Z}_\gamma}\int_\Omega |\zeta_j(t)|^2\,dt.
\]
We let $\|\mathcal{X}\|_{2,\infty}$ denote its norm as an operator from $L^2(\Omega)$ into $L^\infty(\Omega)$. Therefore
\[
\|e\|_2^2 \le \gamma\mu\|\mathcal{X}\|_{2,\infty}^2\sum_{i\in\mathbb{Z}_N}\|f\circ F_i\|_2^2\,|\Delta_i| = \gamma\mu\|\mathcal{X}\|_{2,\infty}^2\|f\|_2^2, \tag{6.20}
\]
which establishes the uniform boundedness of the operators $\mathcal{K}_n\mathcal{X}_n$, $n\in\mathbb{N}$.

We need to prove that the set of operators $\{\mathcal{K}_n\mathcal{X}_n : n\in\mathbb{N}\}$, under an additional requirement on the kernel $K_2$, is collectively compact on $L^2(\Omega)$. We recall that a set $\tau = \{\mathcal{A}\}$ of linear operators mapping a normed space $\mathbb{X}$ into a normed space $\mathbb{Y}$ is called collectively compact if for each bounded set $S\subseteq\mathbb{X}$ the image set $\{\mathcal{A}x : x\in S,\ \mathcal{A}\in\tau\}$ is relatively compact (see [6]).


We further require that the kernel $K_2$ have the $\alpha$-property, namely
\[
K_2(s,t) = \frac{A(s,t)}{|s-t|^\alpha}, \qquad s,t\in\Omega,
\]
where $A$ is a continuous function on $\Omega\times\Omega$ and $\alpha$ is a constant satisfying $0\le\alpha<1$.

We now prove the collective compactness of the set $\{\mathcal{K}_n\mathcal{X}_n : n\in\mathbb{N}\}$.

Theorem 6.12 Let $K$ be a quasi-weakly singular kernel in the factored form $K=K_1K_2$, where $K_2$ has the $\alpha$-property and $K_1$ is continuous on $\Omega\times\Omega$. Let $\mathcal{A}_n = \mathcal{K}_n\mathcal{X}_n$. Then the set of operators $\{\mathcal{A}_n : n\in\mathbb{N}\}$ is collectively compact on the space $L^2(\Omega)$.

Proof By the definition of collective compactness, we are required to confirm that for any bounded set $S=\{f\in L^2(\Omega) : \|f\|_2\le c_0\}$, where $c_0$ is a positive constant, the set $\{\mathcal{A}_n f : f\in S,\ n\in\mathbb{N}\}$ is relatively compact. This will be done using the Kolmogorov theorem (Theorem A.44). The first condition in that theorem, the uniform boundedness of the set $\{\mathcal{A}_n : n\in\mathbb{N}\}$, has been verified in Lemma 6.11. It remains to verify conditions (ii) and (iii) of Theorem A.44. We first show that the sequence of operators $\mathcal{A}_n$ has the property that
\[
\lim_{h\to 0}\left(\int_0^{1-h} |(\mathcal{A}_n f)(s+h) - (\mathcal{A}_n f)(s)|^2\,ds\right)^{1/2} = 0 \tag{6.21}
\]

uniformly in $n\in\mathbb{N}$ and $f\in S$. To this end we note that
\[
\left(\int_0^{1-h} |(\mathcal{A}_n f)(s+h)-(\mathcal{A}_n f)(s)|^2\,ds\right)^{1/2} \le r_1 + r_2,
\]
where
\[
r_1 = \left(\int_0^{1-h}\left|\int_0^1 \mathcal{Z}_n\bigl((K_1(s+h,\cdot)-K_1(s,\cdot))(\mathcal{X}_n f)(\cdot)\bigr)(t)\,K_2(s+h,t)\,dt\right|^2 ds\right)^{1/2}
\]
and
\[
r_2 = \left(\int_0^{1-h}\left|\int_0^1 \mathcal{Z}_n\bigl(K_1(s,\cdot)(\mathcal{X}_n f)(\cdot)\bigr)(t)\,[K_2(s+h,t)-K_2(s,t)]\,dt\right|^2 ds\right)^{1/2}.
\]
We first bound the term $r_1$. By the definition of $\mathcal{Z}_n$ we have that
\[
r_1 \le \sup_{t\in\Omega}\sup_{s\in\Omega_h} |K_1(s+h,t)-K_1(s,t)|\,\|\mathcal{G}e\|_2,
\]


where $\Omega_h = [0,1-h]$. Using (6.19) and (6.20) we conclude that
\[
r_1 \le \beta\gamma^{1/2}\mu^{1/2}\,\|\mathcal{X}\|_{2,\infty}\,\|f\|_2 \sup_{t\in\Omega}\sup_{s\in\Omega_h} |K_1(s+h,t)-K_1(s,t)|.
\]

To estimate the term $r_2$ we make use of the fact that the kernel $K_2$ is continuous off the diagonal. Specifically, for any $s\in(0,1)$ and $\delta>0$ we let $U(s,\delta)=(s-\delta,s+\delta)$ and $U'(s,\delta)=\Omega\setminus U(s,\delta)$, and write
\[
r_2 \le r' + r'' + r''',
\]

where
\[
r' = \left[\int_0^{1-h}\left(\int_{U(s,2\delta)} |\mathcal{Z}_n(K_1(s,\cdot)(\mathcal{X}_n f)(\cdot))(t)|\,|K_2(s,t)|\,dt\right)^2 ds\right]^{1/2},
\]
\[
r'' = \left[\int_0^{1-h}\left(\int_{U(s,2\delta)} |\mathcal{Z}_n(K_1(s,\cdot)(\mathcal{X}_n f)(\cdot))(t)|\,|K_2(s+h,t)|\,dt\right)^2 ds\right]^{1/2}
\]
and
\[
r''' = \left[\int_0^{1-h}\left|\int_{U'(s,2\delta)} \mathcal{Z}_n(K_1(s,\cdot)(\mathcal{X}_n f)(\cdot))(t)\,[K_2(s+h,t)-K_2(s,t)]\,dt\right|^2 ds\right]^{1/2}.
\]

We observe, by using the $\alpha$-property of the kernel $K_2$ and a straightforward computation, that
\[
r' \le \|K_1\|_\infty\left[\int_0^{1-h}\left(\int_{U(s,2\delta)} |K_2(s,t)|\,e(t)\,dt\right)^2 ds\right]^{1/2}
\le \|A\|_\infty\,\frac{2^{1-\alpha}}{1-\alpha}\,\|K_1\|_\infty\,\mu^{1/2}\beta^{1/2}\,\|\mathcal{X}\|_{2,\infty}\,\delta^{\frac{1-\alpha}{2}}\|f\|_2
\le C\,\delta^{\frac{1-\alpha}{2}}\|f\|_2,
\]
where
\[
\|A\|_\infty = \sup_{s,t\in\Omega}|A(s,t)|.
\]

To bound $r''$ we note that if $h<\delta$ then $U(s-h,2\delta)\subset U(s,3\delta)$, and thus by a change of variables we conclude that
\[
r'' \le \left[\int_0^1\left(\int_{U(s,3\delta)} |\mathcal{Z}_n(K_1(s,\cdot)(\mathcal{X}_n f)(\cdot))(t)|\,|K_2(s,t)|\,dt\right)^2 ds\right]^{1/2}.
\]


Likewise, we have that
\[
r'' \le c\,\delta^{\frac{1-\alpha}{2}}\|f\|_2.
\]
For any $\varepsilon>0$ we choose $\delta>0$ such that $c\,\delta^{\frac{1-\alpha}{2}} < \varepsilon/4$. Hence we observe that
\[
r' + r'' < \tfrac{1}{2}\varepsilon\|f\|_2.
\]

Finally, we deal with the term $r'''$. By the $\alpha$-property of the kernel $K_2$, $K_2$ is continuous off the diagonal, and hence
\[
r''' \le \|K_1\|_\infty\|e\|_2\left(\int_0^1\int_{U'(s,2\delta)} |K_2(s+h,t)-K_2(s,t)|^2\,dt\,ds\right)^{1/2}
\le c\|f\|_2\left(\int_0^1\int_{U'(s,2\delta)} |K_2(s+h,t)-K_2(s,t)|^2\,dt\,ds\right)^{1/2}.
\]
When $s\in(0,1)$, $t\in U'(s,2\delta)$ and $h<\delta$, we have $|s+h-t|\ge\delta$, so that both points $(s,t)$ and $(s+h,t)$ are contained in the set
\[
D_\delta = \{(s,t)\in\Omega\times\Omega : |s-t|\ge\delta\}.
\]
For a fixed number $\delta>0$ the function $K_2(s,t)$ is bounded and uniformly continuous jointly in $s$ and $t$ on $D_\delta$. Therefore there exists a constant $\sigma_1$ with $0<\sigma_1\le\delta$ such that when $h<\sigma_1$ and $(s,t),(s+h,t)\in D_\delta$,
\[
|K_2(s,t)-K_2(s+h,t)| < \frac{\varepsilon}{4c}.
\]
Thus
\[
\left(\int_\Omega \|K_2(s,\cdot)-K_2(s+h,\cdot)\|_{L^2(U'(s,2\delta))}^2\,ds\right)^{1/2} \le \frac{\varepsilon}{4c}.
\]

In summary, we have established the estimate
\[
\left(\int_0^{1-h} |(\mathcal{A}_n f)(s+h)-(\mathcal{A}_n f)(s)|^2\,ds\right)^{1/2} < \varepsilon\|f\|_2,
\]
and thus proved equation (6.21).

Finally, we verify that

\[
\lim_{h\to 0}\left(\int_{1-h}^1 |(\mathcal{A}_n f)(s)|^2\,ds\right)^{1/2} = 0 \tag{6.22}
\]


uniformly in $n\in\mathbb{N}$ and $f\in S$. To do this, note that (6.18) and (6.19) give
\[
\left(\int_{1-h}^1 |(\mathcal{A}_n f)(s)|^2\,ds\right)^{1/2} \le \|K_1\|_\infty\|\mathcal{G}e\|_{L^2[1-h,1]} \le \beta\|K_1\|_\infty\|e\|_{L^2[1-h,1]},
\]
where $\mathcal{G}$, $e$ and $\beta$ are all defined in the proof of Lemma 6.11. Arguments similar to those used to prove estimate (6.20) lead to
\[
\|e\|_{L^2[1-h,1]}^2 \le \gamma\|\mathcal{X}\|_{2,\infty}^2\|f\|_2^2\int_{1-h}^1 \sum_{j\in\mathbb{Z}_\gamma} |\zeta_j(t)|^2\,dt.
\]
Note that
\[
\lim_{h\to 0}\int_{1-h}^1 \sum_{j\in\mathbb{Z}_\gamma} |\zeta_j(t)|^2\,dt = 0
\]
uniformly in $n\in\mathbb{N}$. We then conclude from the estimate
\[
\left(\int_{1-h}^1 |(\mathcal{A}_n f)(s)|^2\,ds\right)^{1/2} \le \gamma^{1/2}\beta\|K_1\|_\infty\|\mathcal{X}\|_{2,\infty}\|f\|_2\left(\int_{1-h}^1 \sum_{j\in\mathbb{Z}_\gamma} |\zeta_j(t)|^2\,dt\right)^{1/2}
\]
that (6.22) holds uniformly in $n\in\mathbb{N}$ and $f\in S$.

The following result from [80] generalizes Proposition 1.7 of [6].

Lemma 6.13 Let $\mathbb{X}$ be a Banach space and $S\subset\mathbb{X}$ a relatively compact set. Assume that $\mathcal{T}$, $\mathcal{T}_n$ are bounded linear operators from $\mathbb{X}$ to $\mathbb{X}$ satisfying
\[
\|\mathcal{T}_n\| \le C \quad\text{for all } n,
\]
and for each $x\in S$,
\[
\|\mathcal{T}_n x - \mathcal{T}x\| \to 0 \quad\text{as } n\to\infty,
\]
where $C$ is a constant independent of $n$. Then $\|\mathcal{T}_n x - \mathcal{T}x\| \to 0$ uniformly for all $x\in S$.

Lemma 6.14 Let $K$ be a quasi-weakly singular kernel in the factored form $K=K_1K_2$, where $K_2$ has the $\alpha$-property and $K_1$ is continuous on $\Omega\times\Omega$. Then

(i) $\|(\mathcal{Q}_n-\mathcal{I})\mathcal{K}_n\mathcal{X}_n\|_2 \to 0$ as $n\to\infty$;

(ii) $\|(\mathcal{K}_n-\mathcal{K})\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n\|_2 \to 0$ as $n\to\infty$.

Proof (i) Let $B$ denote the unit ball in $L^2(\Omega)$, that is,
\[
B = \{x\in L^2(\Omega) : \|x\|_2\le 1\},
\]
and let
\[
A = \{\mathcal{K}_n\mathcal{X}_n x : x\in B,\ n\in\mathbb{N}\}.
\]
Since $\{\mathcal{K}_n\mathcal{X}_n\}$ is collectively compact on $L^2(\Omega)$, we conclude that $A$ is a relatively compact set in $L^2(\Omega)$. Note that $A\subset C(\Omega)$ and for each $x\in C(\Omega)$,
\[
\|\mathcal{Q}_n x - x\|_2 \le c\|\mathcal{Q}_n x - x\|_\infty \to 0 \quad\text{as } n\to\infty.
\]
Using Lemma 6.13 with $\mathcal{T}_n=\mathcal{Q}_n$ and $\mathcal{T}=\mathcal{I}$, we have that
\[
\|(\mathcal{Q}_n-\mathcal{I})\mathcal{K}_n\mathcal{X}_n\|_2 = \sup_{x\in B}\|(\mathcal{Q}_n-\mathcal{I})\mathcal{K}_n\mathcal{X}_n x\|_2 = \sup_{x\in A}\|(\mathcal{Q}_n-\mathcal{I})x\|_2 \to 0.
\]
(ii) For a fixed $x\in C(\Omega)$, $\{\mathcal{Q}_n x : n\in\mathbb{N}\}$ is a relatively compact set in $L^\infty(\Omega)$. Since $\{\mathcal{K}_n : n\in\mathbb{N}\}$ is collectively compact on $L^\infty(\Omega)$ and $\mathcal{K}_n$ converges pointwise to $\mathcal{K}$ on the set $\mathbb{X}$, by Lemma 6.13 we conclude that
\[
\|(\mathcal{K}_n-\mathcal{K})\mathcal{Q}_n x\|_2 \le C\|(\mathcal{K}_n-\mathcal{K})\mathcal{Q}_n x\|_\infty \to 0 \quad\text{as } n\to\infty.
\]
Using Lemma 6.13 with the bounded linear operators $\mathcal{T}_n=(\mathcal{K}_n-\mathcal{K})\mathcal{Q}_n$ and $\mathcal{T}=0$, we have that
\[
\|(\mathcal{K}_n-\mathcal{K})\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n\|_2 = \sup_{x\in B}\|(\mathcal{K}_n-\mathcal{K})\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n x\|_2 \le \sup_{x\in A}\|(\mathcal{K}_n-\mathcal{K})\mathcal{Q}_n x\|_2 \to 0 \quad\text{as } n\to\infty.
\]
This completes the proof.

Lemma 6.15 Let $K$ be a quasi-weakly singular kernel in the factored form $K=K_1K_2$, where $K_2$ has the $\alpha$-property and $K_1$ is continuous on $\Omega\times\Omega$. Then there exist positive constants $c$ and $N$ such that for all $n>N$ the inverses $(\mathcal{I}-\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)^{-1}$ exist as linear operators defined on $L^2(\Omega)$ and
\[
\|(\mathcal{I}-\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)^{-1}\|_2 \le c.
\]

Proof A straightforward computation yields
\[
[\mathcal{I}+(\mathcal{I}-\mathcal{K})^{-1}\mathcal{K}_n\mathcal{X}_n](\mathcal{I}-\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)
= \mathcal{I} - (\mathcal{I}-\mathcal{K})^{-1}[(\mathcal{Q}_n-\mathcal{I})\mathcal{K}_n\mathcal{X}_n + (\mathcal{K}_n-\mathcal{K})\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n].
\]
By Lemma 6.14 there exists $N>0$ such that for all $n>N$,
\[
\theta_n := \|(\mathcal{I}-\mathcal{K})^{-1}[(\mathcal{Q}_n-\mathcal{I})\mathcal{K}_n\mathcal{X}_n + (\mathcal{K}_n-\mathcal{K})\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n]\|_2 \le \frac{1}{2}.
\]
Therefore the inverse operators $(\mathcal{I}-\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)^{-1}$ exist, and it holds that
\[
\|(\mathcal{I}-\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)^{-1}\|_2 \le \frac{1}{1-\theta_n}\bigl(1 + \|(\mathcal{I}-\mathcal{K})^{-1}\|_2\|\mathcal{K}_n\mathcal{X}_n\|_2\bigr) \le 2\bigl(1 + \|(\mathcal{I}-\mathcal{K})^{-1}\|_2\|\mathcal{K}_n\mathcal{X}_n\|_2\bigr).
\]
Thus the result of this lemma follows from Lemma 6.11. Moreover, the Fredholm theory allows us to conclude that the inverse operators are defined on all of $L^2(\Omega)$.

Proposition 6.16 Let $K$ be a quasi-weakly singular kernel in the factored form $K=K_1K_2$, where $K_2$ has the $\alpha$-property and $K_1$ is continuous on $\Omega\times\Omega$. Then there exist positive constants $c_0$ and $N$ such that for $n>N$ and $x\in\mathbb{X}_n$,
\[
\|(\mathcal{I}-\mathcal{Q}_n\mathcal{K}_n)x\|_2 \ge c_0\|x\|_2.
\]

Proof For all $x\in\mathbb{X}_n$ it follows from Lemma 6.15 that
\[
\|x\|_2 = \|(\mathcal{I}-\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)^{-1}(\mathcal{I}-\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)x\|_2
\le C\|(\mathcal{I}-\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)x\|_2 = C\|(\mathcal{I}-\mathcal{Q}_n\mathcal{K}_n)x\|_2,
\]
and thus the proposition follows.

6.2.2 A truncated DMPG scheme

In this subsection we describe the DMPG method and a truncated scheme for equation (6.7).

We first use type II spaces $S_{k_i}(\Pi_i)$, $i=1,2,3$, to generate $\mathbb{X}_n$, $\mathbb{Y}_n$ and $\mathbb{Q}_n$. Specifically, we choose an integer $r>1$, define the partition $\Pi_0$ of $\Omega$ by
\[
\Pi_0 = \left\{\left[\frac{j}{r^n}, \frac{j+1}{r^n}\right] : j\in\mathbb{Z}_{r^n}\right\}
\]
and define $\mathbb{X}_n = S_{k_1}(\Pi_0,\Pi_1)$, $\mathbb{Y}_n = S_{k_2}(\Pi_0,\Pi_2)$ and $\mathbb{Q}_n = S_{k_3}(\Pi_0,\Pi_3)$, where $k_1=rk_2$ and $k_2\le k_1\le k_3$. We next describe our multiscale bases. For this purpose we denote by $S^k_m$ the space of piecewise polynomials of degree $\le k-1$ defined on $\Omega$ corresponding to the equally spaced partition $\{[j/m,(j+1)/m] : j\in\mathbb{Z}_m\}$. We assume that $k,k',\nu,\mu$ are positive integers with $\mu>1$ such that $\lambda=k\nu=k'\mu$, $k'\le k$ and $\mu/\nu$ is an integer. We define the trial space $\mathbb{X}_n$ as $S^k_{\nu\mu^n}$ and the test space $\mathbb{Y}_n$ as $S^{k'}_{\mu^{n+1}}$. For $i\in\mathbb{Z}_{n+1}$ we set
\[
w(i) = \begin{cases} \lambda, & i=0,\\ \lambda(\mu-1)\mu^{i-1}, & i\ge 1. \end{cases}
\]
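The level dimensions $w(i)$ telescope: $\sum_{i\in\mathbb{Z}_{n+1}} w(i) = \lambda + \lambda(\mu-1)(1+\mu+\cdots+\mu^{n-1}) = \lambda\mu^n$, the dimension of $\mathbb{X}_n$. A quick check of this identity:

```python
# Level dimensions of the multiscale decomposition: w(0) = lam and
# w(i) = lam*(mu - 1)*mu**(i - 1) for i >= 1; they telescope to
# dim X_n = lam * mu**n.
def w(i, lam, mu):
    return lam if i == 0 else lam * (mu - 1) * mu ** (i - 1)

lam, mu, n = 4, 3, 5
s_n = sum(w(i, lam, mu) for i in range(n + 1))
print(s_n, lam * mu ** n)   # both 972
```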


The orthonormal multiscale bases $\{\varphi_{ij} : (i,j)\in U_n\}$ for $\mathbb{X}_n$ and $\{\psi_{ij} : (i,j)\in U_n\}$ for $\mathbb{Y}_n$ can be constructed using the general construction described in Section 4.3 (see also [200, 201] and [64]). The orthonormal multiscale basis $\varphi_{ij}$ enjoys the following properties:
\[
\mathbb{X}_n = \mathrm{span}\{\varphi_{ij} : (i,j)\in U_n\}, \qquad
\int_\Omega t^\ell \varphi_{ij}(t)\,dt = 0, \quad \ell\in\mathbb{Z}_k,\ j\in\mathbb{Z}_{w(i)},\ i-1\in\mathbb{Z}_n,
\]
and the length of the support $\mathrm{supp}(\varphi_{ij})$ satisfies
\[
\mathrm{meas}(\mathrm{supp}\,\varphi_{ij}) \le \frac{1}{\mu^{i-1}}, \qquad i\ge 1.
\]
Similarly, the multiscale basis $\psi_{ij}$ has the properties
\[
\mathbb{Y}_n = \mathrm{span}\{\psi_{ij} : (i,j)\in U_n\}, \qquad
\int_\Omega t^\ell \psi_{ij}(t)\,dt = 0, \quad \ell\in\mathbb{Z}_{\tilde k},\ j\in\mathbb{Z}_{w(i)},\ i-1\in\mathbb{Z}_n,
\]
where $\tilde k$ is an integer between $k'$ and $k$, and there is a constant $c'$ depending only on $k$ for which
\[
\mathrm{meas}(\mathrm{supp}\,\psi_{ij}) \le \frac{c'}{\mu^{i-1}}, \qquad i\ge 1.
\]

Consequently, any function $x_n\in\mathbb{X}_n$ has the representation
\[
x_n = \sum_{(i,j)\in U_n} x_{ij}\varphi_{ij},
\]
where
\[
x_{ij} = (x_n, \varphi_{ij}), \qquad (i,j)\in U_n.
\]
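For the simplest instance $k=k'=1$, $\mu=2$, the orthonormal basis $\varphi_{ij}$ is the classical Haar basis, and the representation above can be computed exactly. The sketch below (our own normalization and naming, not the book's general construction) expands $x(t)=t$ and reconstructs it at the finest-level cell midpoints, where the piecewise-constant projection coincides with the cell averages:

```python
import numpy as np

# Haar multiscale basis (k = 1, mu = 2): phi_00 = 1; for i >= 1 the phi_ij
# are scaled Haar wavelets with one vanishing moment. Coefficients
# x_ij = (x, phi_ij) for x(t) = t via the exact integral int_a^b t dt.
def haar_coeffs(n):
    I = lambda a, b: 0.5 * (b * b - a * a)           # int_a^b t dt
    coeffs = [(0, 0, I(0.0, 1.0))]                   # (x, phi_00) = 1/2
    for i in range(1, n + 1):
        s = 2.0 ** (0.5 * (i - 1))                   # L2 normalization
        for j in range(2 ** (i - 1)):
            a = j / 2.0 ** (i - 1)
            b = (j + 1) / 2.0 ** (i - 1)
            m = 0.5 * (a + b)
            coeffs.append((i, j, s * (I(a, m) - I(m, b))))
    return coeffs

def evaluate(coeffs, t):
    val = coeffs[0][2]                               # constant part
    for i, j, c in coeffs[1:]:
        s = 2.0 ** (0.5 * (i - 1))
        a = j / 2.0 ** (i - 1)
        b = (j + 1) / 2.0 ** (i - 1)
        if a <= t < b:
            val += c * (s if t < 0.5 * (a + b) else -s)
    return val

n = 6
coeffs = haar_coeffs(n)
mids = (np.arange(2 ** n) + 0.5) / 2 ** n
err = max(abs(evaluate(coeffs, t) - t) for t in mids)
print(err)   # ~0: level-n projection of x(t) = t equals cell averages
```

The vanishing moment is visible here too: every coefficient of a constant function against a level-$i\ge 1$ basis function is zero, which is the mechanism behind the matrix compression discussed below.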

We have described multiscale bases for the trial and test spaces $\mathbb{X}_n$ and $\mathbb{Y}_n$. It remains to describe the third space $\mathbb{Q}_n$, which is used for integration. We do this next by choosing $k''\ge k$ and letting the space $\mathbb{Q}_n$ be $S^{k''}_{\mu^{n+1}}$. Specifically, we choose $k''$ points in $\Omega$, $0<\tau_0<\tau_1<\cdots<\tau_{k''-1}<1$, and let $q_0,q_1,\ldots,q_{k''-1}$ be the Lagrange interpolating polynomials satisfying $\deg q_j\le k''-1$ and $q_i(\tau_j)=\delta_{ij}$, $i,j\in\mathbb{Z}_{k''}$. We let $\varphi_{\varepsilon\mu}$ be the linear function mapping $\Omega$ bijectively onto $I_{\varepsilon\mu} := [\frac{\varepsilon}{\mu},\frac{\varepsilon+1}{\mu}]$ for $\varepsilon\in\mathbb{Z}_\mu$, and set
\[
t_j = \varphi_{\varepsilon\mu}(\tau_\ell), \qquad \zeta_j = \chi_{I_{\varepsilon\mu}}\,(q_\ell\circ\varphi_{\varepsilon\mu}^{-1}), \qquad j=\varepsilon k''+\ell,\ \varepsilon\in\mathbb{Z}_\mu,\ \ell\in\mathbb{Z}_{k''}.
\]
It can easily be seen that $\zeta_i(t_j)=\delta_{ij}$, $i,j\in\mathbb{Z}_\gamma$, where $\gamma=k''\mu\ge\lambda$. Following the last section, we use the affine mappings $F_i: [\frac{i}{\mu^n},\frac{i+1}{\mu^n}]\to\Omega$ defined by $F_i(t)=\mu^n t-i$, $i\in\mathbb{Z}_N$, where $N=\mu^n$, to define basis functions $\zeta_{ij}$ for the space $\mathbb{Q}_n$. The three spaces $\mathbb{X}_n$, $\mathbb{Y}_n$ and $\mathbb{Q}_n$ were chosen so that
\[
\mathbb{X}_n\subseteq\mathbb{Q}_n \quad\text{and}\quad \mathbb{Y}_n\subseteq\mathbb{Q}_n.
\]
These inclusions are crucial for developing a compressed scheme, which will be discussed in the next section. We also define the discrete inner product $(\cdot,\cdot)_n$ and the operators $\mathcal{Q}_n$ and $\mathcal{K}_n$ according to (6.13) and (6.15).

With the multiscale bases for the spaces $\mathbb{X}_n$ and $\mathbb{Y}_n$, the DMPG scheme for equation (6.7) seeks
\[
u_n = \sum_{(i,j)\in U_n} u_{ij}\varphi_{ij} \in \mathbb{X}_n,
\]
where the coefficients of the function $u_n$ satisfy
\[
\sum_{(i,j)\in U_n} u_{ij}(\psi_{i'j'},\, \varphi_{ij}-\mathcal{K}_n\varphi_{ij})_n = (\psi_{i'j'}, f)_n, \qquad (i',j')\in U_n. \tag{6.23}
\]
To write (6.23) in matrix form we use the lexicographic ordering on $\mathbb{Z}_{n+1}\times\mathbb{Z}_{n+1}$ and define the matrices
\[
\mathbf{E}_n = [(\psi_{ij},\varphi_{i'j'})_n : (i',j'),(i,j)\in U_n], \qquad
\mathbf{K}_n = [(\psi_{ij},\mathcal{K}_n\varphi_{i'j'})_n : (i',j'),(i,j)\in U_n],
\]
and the vectors
\[
\mathbf{f}_n = [(\psi_{ij}, f)_n : (i,j)\in U_n], \qquad \mathbf{u}_n = [u_{ij} : (i,j)\in U_n].
\]
Note that the vectors have length $s(n)=\lambda\mu^n$. With this notation, equation (6.23) takes the equivalent form
\[
(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n. \tag{6.24}
\]
To present a convergence result for the DMPG scheme we let
\[
\Xi = [\varphi_{0i}(t_j) : i\in\mathbb{Z}_\lambda,\ j\in\mathbb{Z}_\gamma], \qquad
\mathrm{H} = [\psi_{0i}(t_j) : i\in\mathbb{Z}_\lambda,\ j\in\mathbb{Z}_\gamma] \quad\text{and}\quad
\mathbf{M} = \Xi\,\mathbf{W}\,\mathrm{H}^{\mathrm{T}},
\]
where $\mathbf{W} = \mathrm{diag}(w_j : j\in\mathbb{Z}_\gamma)$ and $w_i = \int_\Omega \zeta_i(t)\,dt$, $i\in\mathbb{Z}_\gamma$.

Theorem 6.17 Let $K$ be a quasi-weakly singular kernel in the factored form $K = K_1K_2$, where $K_2$ has the $\alpha$-property and $K_1$ is continuous on $I \times I$. Then there exists $N > 0$ such that for all $n \ge N$ the DMPG scheme (6.23) has a unique solution $u_n \in X_n$. Moreover, if the solution $u$ of equation (6.7) satisfies $u \in W^{k,\infty}(I)$, then there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,

$$\|u - u_n\|_2 \le c\,\mu^{-kn}.$$


Proof By the construction of the spaces $X_n$ and $Y_n$ and the choice of the quadrature nodes $t_j$, we conclude from Proposition 6.8 that $\det(M) \neq 0$. Therefore the conclusion of this theorem follows directly from Theorem 3.38.

We now develop a matrix compression strategy for the DMPG method. This compression strategy will lead to a fast algorithm for the approximate solution of equation (6.7).

Throughout this section we suppose that the following additional conditions on the kernel $K$ hold: $K = K_1K_2$, and $K_1$ and $K_2$ have derivatives

$$K_r^{(l,m)}(s,t) = \frac{\partial^{l+m} K_r(s,t)}{\partial s^l\,\partial t^m}$$

for $l \in \mathbb{Z}_{\bar{k}+1}$, $m \in \mathbb{Z}_{\bar{k}'+1}$, when $r = 1$, $s,t \in I$, and $r = 2$, $s,t \in I$ with $s \neq t$, where $\bar{k} = \min\{k'' - k + 1, k\}$ and $\bar{k}' = \min\{k'' - k' + 1, k'\}$; moreover, there exists a positive constant $c_0$ such that for $s,t \in I$, $s \neq t$,

$$|K_2^{(l,m)}(s,t)| \le \frac{c_0}{|s-t|^{\alpha+l+m}}. \qquad (6.25)$$

We denote the entries of the matrix $\mathbf{K}_n$ by $K_{i'j',ij} = (\psi_{i'j'}, \mathcal{K}_n\phi_{ij})_n$, $(i,j), (i',j') \in U_n$. These entries are discrete inner products which, as we show in the next lemma, satisfy an estimate similar to the one for the continuous inner products presented in Lemma 7.1 of [64], with $k, k'$ replaced by $\bar{k}, \bar{k}'$ respectively; this reflects the influence of the full discretization. To this end, we let $S_{(i,j)}$ and $S'_{(i',j')}$ denote the supports of $\phi_{ij}$ and $\psi_{i'j'}$ respectively. Then

$$|S_{(i,j)}| = \mathrm{meas}(S_{(i,j)}) \le \frac{1}{\mu^{i-1}} \quad \text{for } i > 1$$

and

$$|S'_{(i',j')}| = \mathrm{meas}(S'_{(i',j')}) \le \frac{c'}{\mu^{i'-1}} \quad \text{for } i' > 1.$$

We first estimate the entries of the matrix $\mathbf{K}_n$ in the following lemma.

Lemma 6.18 The estimate

$$|K_{i'j',ij}| \le c\,\mu^{-(\bar{k}+\frac{1}{2})i - (\bar{k}'+\frac{1}{2})i'}\,\mathrm{dist}(S_{(i,j)}, S'_{(i',j')})^{-\alpha-\bar{k}-\bar{k}'}$$

holds, where $c$ is a positive constant independent of $n$, $i$, $i'$, $j$ and $j'$.

Proof The entries of the matrix $\mathbf{K}_n$ can be written in the following way:

$$K_{i'j',ij} = \int_I (Z_n g)(s)\,ds,$$

where the function $g$ is defined by

$$g(s) = \psi_{i'j'}(s)\int_I K_2(s,t)\,\big(Z_n(K_1(s,\cdot)\phi_{ij}(\cdot))\big)(t)\,dt.$$

Let $t_0$ and $s_0$ be the midpoints of the intervals $S_{(i,j)}$ and $S'_{(i',j')}$ respectively. By Taylor's theorem, for $(s,t) \in S'_{(i',j')} \times S_{(i,j)}$ and $r = 1,2$, we have

$$K_r(s,t) = \sum_{l\in\mathbb{Z}_{\bar{k}}} \frac{1}{l!}\,K_r^{(0,l)}(s,t_0)(t-t_0)^l + \frac{1}{(\bar{k}-1)!}\int_0^1 K_r^{(0,\bar{k})}(s, t_0+\theta_r(t-t_0))(t-t_0)^{\bar{k}}(1-\theta_r)^{\bar{k}-1}\,d\theta_r.$$

Likewise, we have

$$K_r^{(0,\bar{k})}(s,t') = \sum_{l'\in\mathbb{Z}_{\bar{k}'}} \frac{1}{l'!}\,K_r^{(l',\bar{k})}(s_0,t')(s-s_0)^{l'} + \frac{1}{(\bar{k}'-1)!}\int_0^1 K_r^{(\bar{k}',\bar{k})}(s',t')(s-s_0)^{\bar{k}'}(1-\theta_{k,r})^{\bar{k}'-1}\,d\theta_{k,r},$$

where $s' = s_0 + \theta_{k,r}(s-s_0)$ and $t' = t_0 + \theta_r(t-t_0)$. Thus we obtain

$$K_r(s,t) = \sum_{m\in\mathbb{Z}_4} T_{r,m},$$

where

$$T_{r,0} = \sum_{l\in\mathbb{Z}_{\bar{k}}}\sum_{l'\in\mathbb{Z}_{\bar{k}'}} \frac{1}{l!\,l'!}\,K_r^{(l',l)}(s_0,t_0)(s-s_0)^{l'}(t-t_0)^l,$$

$$T_{r,1} = \sum_{l\in\mathbb{Z}_{\bar{k}}} \frac{1}{l!\,(\bar{k}'-1)!}\,(s-s_0)^{\bar{k}'}(t-t_0)^l \int_0^1 K_r^{(\bar{k}',l)}(s_0+\theta_{l,r}(s-s_0), t_0)(1-\theta_{l,r})^{\bar{k}'-1}\,d\theta_{l,r},$$

$$T_{r,2} = \frac{1}{(\bar{k}-1)!}\sum_{l'\in\mathbb{Z}_{\bar{k}'}} \frac{1}{l'!}\,(s-s_0)^{l'}(t-t_0)^{\bar{k}} \int_0^1 K_r^{(l',\bar{k})}(s_0, t_0+\theta_r(t-t_0))(1-\theta_r)^{\bar{k}-1}\,d\theta_r$$

and

$$T_{r,3} = \frac{1}{(\bar{k}-1)!\,(\bar{k}'-1)!}\,(s-s_0)^{\bar{k}'}(t-t_0)^{\bar{k}} \int_0^1\!\!\int_0^1 K_r^{(\bar{k}',\bar{k})}(s',t')(1-\theta_{k,r})^{\bar{k}'-1}(1-\theta_r)^{\bar{k}-1}\,d\theta_{k,r}\,d\theta_r,$$


with $(s',t') = (s_0+\theta_{k,r}(s-s_0),\ t_0+\theta_r(t-t_0))$. Note that polynomials of degree $k''-1$ are invariant under the projector $Z_n$, and that $Z_n : L^\infty(I) \to L^\infty(I)$ is uniformly bounded. Using the vanishing moment conditions of the bases $\phi_{ij}$ and $\psi_{i'j'}$, that is,

$$\int_{S_{(i,j)}} (t-t_0)^l\,\phi_{ij}(t)\,dt = 0, \quad l \in \mathbb{Z}_k,\ i-1 \in \mathbb{Z}_n,\ j \in \mathbb{Z}_{w(i)},$$

and

$$\int_{S'_{(i',j')}} (s-s_0)^l\,\psi_{i'j'}(s)\,ds = 0, \quad l \in \mathbb{Z}_{k'},\ i'-1 \in \mathbb{Z}_n,\ j' \in \mathbb{Z}_{w(i')},$$

we conclude that

$$|K_{i'j',ij}| \le C\,\sup\big(|K_1^{(l',l)}(s,t)|, |K_2^{(l',l)}(s,t)|\big) \int_{S_{(i,j)}} |(t-t_0)^{\bar{k}}\phi_{ij}(t)|\,dt \int_{S'_{(i',j')}} |(s-s_0)^{\bar{k}'}\psi_{i'j'}(s)|\,ds,$$

where the supremum is taken over $(s,t) \in S'_{(i',j')} \times S_{(i,j)}$ and $(l',l) \in \mathbb{Z}_{\bar{k}'+1} \times \mathbb{Z}_{\bar{k}+1}$. In fact, to see the estimate it suffices to consider the cases $K_r(s,t) = T_{r,m}$, $r = 1,2$, $m \in \mathbb{Z}_4$. We assume without loss of generality that the sum in $T_{r,m}$ contains only one term, which means that $K_r(s,t)$ is the product of an integral and the two factors $(t-t_0)^{l_r}$ and $(s-s_0)^{l'_r}$. Obviously, if $l_1+l_2 < k$ (or $l'_1+l'_2 < k'$), the operator $Z_n$ can be discarded and, using the vanishing moment conditions of the bases $\phi_{ij}$ and $\psi_{i'j'}$, we obtain $K_{i'j',ij} = 0$. Otherwise $l_1+l_2 \ge k$ and $l'_1+l'_2 \ge k'$, and the desired estimate follows from the uniform boundedness of $Z_n$. Now, using $\|\phi_{ij}\|_2 = \|\psi_{ij}\|_2 = 1$, we conclude from the Cauchy–Schwarz inequality that

$$|K_{i'j',ij}| \le C\,\mathrm{dist}(S_{(i,j)}, S'_{(i',j')})^{-\alpha-\bar{k}-\bar{k}'}\,|S_{(i,j)}|^{\bar{k}+\frac{1}{2}}\,|S'_{(i',j')}|^{\bar{k}'+\frac{1}{2}} \le C\,\mu^{-(\bar{k}+\frac{1}{2})i-(\bar{k}'+\frac{1}{2})i'}\,\mathrm{dist}(S_{(i,j)}, S'_{(i',j')})^{-\alpha-\bar{k}-\bar{k}'}.$$

Lemma 6.18 leads us to a truncation strategy, which we describe below. To truncate the matrix $\mathbf{K}_n$, we first partition it into a block matrix $\mathbf{K}_n = [\mathbf{K}_{i',i} : (i',i) \in \mathbb{Z}_{n+1} \times \mathbb{Z}_{n+1}]$ according to the decomposition of the spaces $X_n$ and $Y_n$, where

$$\mathbf{K}_{i',i} = [K_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}].$$

For a given positive number $\delta^n_{i'i}$ we truncate the block $\mathbf{K}_{i',i}$ to obtain

$$\tilde{\mathbf{K}}_{i',i} = [\tilde{K}_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}],$$

where

$$\tilde{K}_{i'j',ij} = \begin{cases} K_{i'j',ij}, & \mathrm{dist}(S_{(i,j)}, S'_{(i',j')}) \le \delta^n_{i'i}, \\ 0, & \text{otherwise}, \end{cases}$$

and let $\tilde{\mathbf{K}}_n = [\tilde{\mathbf{K}}_{i',i} : (i',i) \in \mathbb{Z}_{n+1} \times \mathbb{Z}_{n+1}]$. Using the truncated matrix $\tilde{\mathbf{K}}_n$ in place of the matrix $\mathbf{K}_n$ in equation (6.24), we obtain a new linear system

$$(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\tilde{\mathbf{u}}_n = \mathbf{f}_n, \qquad (6.26)$$

where $\tilde{\mathbf{u}}_n = [\tilde{u}_{ij}] \in \mathbb{R}^{s(n)}$. Equation (6.26) is a compressed scheme, which provides us with a fast algorithm.
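The truncation rule is simple to realize in code: within each block, an entry survives only if the supports of its trial and test functions lie within the cutoff distance. The sketch below is an illustrative model, not the DMPG implementation itself: it assumes a dyadic partition of [0, 1] (μ = 2) with one function per cell and a generic dense block.

```python
import numpy as np

def support(level, j):
    # Illustrative assumption: level `level` carries 2**level functions,
    # the j-th supported on [j/2**level, (j+1)/2**level].
    h = 2.0 ** (-level)
    return (j * h, (j + 1) * h)

def dist(a, b):
    # Distance between the intervals a = (a0, a1) and b = (b0, b1).
    return max(0.0, a[0] - b[1], b[0] - a[1])

def truncate_block(block, level_test, level_trial, delta):
    # Zero every entry whose supports are farther apart than delta,
    # mimicking the definition of the truncated block above.
    out = block.copy()
    for jp in range(block.shape[0]):
        for j in range(block.shape[1]):
            if dist(support(level_trial, j), support(level_test, jp)) > delta:
                out[jp, j] = 0.0
    return out

block = np.ones((4, 8))               # test level 2 against trial level 3
compressed = truncate_block(block, 2, 3, 0.25)
```

With these sizes the rule keeps 24 of the 32 entries; as the levels grow, the surviving entries concentrate in a band around the diagonal, which is the source of the compression.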

It is convenient to work with a functional analytic approach in our development. For this purpose we convert the linear system (6.26) into an abstract operator equation. Let $b_{i'j',ij}$ denote the entries of the matrix $\mathbf{B}_n = \mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n\mathbf{E}_n^{-1}$ and let

$$\tilde{K}(s,t) = \sum_{(i,j),(i',j')\in U_n} b_{i'j',ij}\,\psi_{ij}(s)\,\phi_{i'j'}(t).$$

We denote by $\tilde{\mathcal{K}}_n$ the discrete integral operator defined by the kernel $\tilde{K}$. Then solving the linear system (6.26) is equivalent to finding

$$\tilde{u}_n = \sum_{(i,j)\in U_n} \tilde{u}_{ij}\,\phi_{ij} \in X_n$$

such that

$$(I - \tilde{\mathcal{K}}_n)\tilde{u}_n = Q_n f. \qquad (6.27)$$

The analysis of this truncated scheme and the choice of the truncation parameters $\delta^n_{i'i}$ will be discussed in the next subsection.

6.2.3 Analysis of convergence and complexity

In this subsection we analyze the order of convergence and the order of computational complexity of the compressed scheme developed in the last subsection.

We first estimate the truncated matrix $\tilde{\mathbf{K}}_n$. To do this, we set $\eta = \alpha + \bar{k} + \bar{k}' - 1$.

Lemma 6.19 Let $J$ be any interval contained in $I$. Suppose that $n$ and $s$ are positive integers and $s > 1$. Then

$$\sum_{j\in J_n} \mathrm{dist}(J, \Delta_{jn})^{-s} \le \frac{4n}{s-1}\,\delta^{-s}\max\Big\{\frac{s-1}{n},\ \delta\Big\},$$

where $\Delta_{jn} = [j/n, (j+1)/n]$, $j \in \mathbb{Z}_n$, and $J_n = \{j : j \in \mathbb{Z}_n,\ \mathrm{dist}(J, \Delta_{jn}) > \delta\}$.


Proof Suppose that $J = [a,b]$. Choose the greatest integer $p$ with $p+1 \in \mathbb{Z}_n$ such that $p/n < a-\delta \le (p+1)/n$, and the least integer $q$ with $q+1 \in \mathbb{Z}_{n+1}$ such that $q/n \le b+\delta < (q+1)/n$. Therefore

$$\sum_{j\in J_n} \mathrm{dist}(J, \Delta_{jn})^{-s} = \sum_{j=0}^{p-1}\Big(a - \frac{j+1}{n}\Big)^{-s} + \sum_{j=q+2}^{n-1}\Big(\frac{j}{n} - b\Big)^{-s}.$$

When $a-\delta \le 1/n$ the first sum is zero, and likewise when $b+\delta \ge 1-1/n$ the second sum is zero. In any case we have

$$\sum_{j\in J_n} \mathrm{dist}(J, \Delta_{jn})^{-s} \le 2\delta^{-s} + \sum_{j=0}^{p-2}\Big(a - \frac{j+1}{n}\Big)^{-s} + \sum_{j=q+2}^{n-1}\Big(\frac{j}{n} - b\Big)^{-s} \le 2\delta^{-s} + n\int_{a-p/n}^\infty t^{-s}\,dt + n\int_{(q+1)/n-b}^\infty t^{-s}\,dt.$$

Evaluating the integrals and using our choice of $p$ and $q$ bounds the right-hand side of the inequality above by

$$2\delta^{-s} + \frac{2n}{s-1}\,\delta^{-s+1} = \frac{2n\delta^{-s}}{s-1}\Big(\frac{s-1}{n} + \delta\Big) \le \frac{4n\delta^{-s}}{s-1}\max\Big\{\frac{s-1}{n},\ \delta\Big\}.$$
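Lemma 6.19 can be checked numerically in a few lines; the sketch below forms the left-hand sum and the right-hand bound for an arbitrary test configuration (the particular $J$, $n$, $s$ and $\delta$ are our choices, not the book's):

```python
def lemma_619_check(a, b, n, s, delta):
    # Sum dist(J, Delta_jn)^(-s) over the cells with dist > delta,
    # where J = [a, b] and Delta_jn = [j/n, (j+1)/n].
    total = 0.0
    for j in range(n):
        lo, hi = j / n, (j + 1) / n
        d = max(0.0, a - hi, lo - b)          # dist(J, Delta_jn)
        if d > delta:
            total += d ** (-s)
    bound = 4 * n / (s - 1) * delta ** (-s) * max((s - 1) / n, delta)
    return total, bound

total, bound = lemma_619_check(0.4, 0.5, 100, 2, 0.05)
```

For these values the sum (about 3.2e3) sits well under the bound (exactly 8e3).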

Lemma 6.20 For any $i, i' \in \mathbb{Z}_{n+1}$ and positive constant $\delta^n_{i'i}$, the estimates

$$\|\mathbf{K}_{i',i} - \tilde{\mathbf{K}}_{i',i}\|_\infty \le c\,\mu^{-(\bar{k}-\frac{1}{2})i-(\bar{k}'+\frac{1}{2})i'}(\delta^n_{i'i})^{-\alpha-\bar{k}-\bar{k}'}\max\Big\{\frac{\eta}{\mu^{i-1}},\ \delta^n_{i'i}\Big\}$$

and

$$\|\mathbf{K}_{i',i} - \tilde{\mathbf{K}}_{i',i}\|_1 \le c\,\mu^{-(\bar{k}+\frac{1}{2})i-(\bar{k}'-\frac{1}{2})i'}(\delta^n_{i'i})^{-\alpha-\bar{k}-\bar{k}'}\max\Big\{\frac{c'\eta}{\mu^{i'-1}},\ \delta^n_{i'i}\Big\}$$

hold, where $c$ is a positive constant independent of $n$, $i$ and $i'$.

Proof Using the estimate given in Lemma 6.18 for the entries $|K_{i'j',ij}|$ and noting the definition of the truncated entries $\tilde{K}_{i'j',ij}$, we have

$$\|\mathbf{K}_{i',i} - \tilde{\mathbf{K}}_{i',i}\|_\infty \le c\,\mu^{-(\bar{k}+\frac{1}{2})i-(\bar{k}'+\frac{1}{2})i'}\max_{j'\in\mathbb{Z}_{w(i')}}\sum_{j\in Z_{\delta^n_{i'i}}} \mathrm{dist}(S_{(i,j)}, S'_{(i',j')})^{-\alpha-\bar{k}-\bar{k}'},$$

where $Z_{\delta^n_{i'i}} = \{j : j \in \mathbb{Z}_{w(i)},\ \mathrm{dist}(S_{(i,j)}, S'_{(i',j')}) > \delta^n_{i'i}\}$. Using Lemma 6.19 (with $n$ replaced by $\mu^{i-2}$, $s$ replaced by $\alpha+\bar{k}+\bar{k}'$, $\Delta_{jn}$ replaced by $S_{(i,j)}$ and $J$ replaced by $S'_{(i',j')}$), we get the first estimate of this lemma. The second estimate follows likewise from Lemma 6.19.

We recall a useful result of Schur.

Lemma 6.21 (Schur's lemma) Let $\mathbf{A} = [a_{ij} : i,j \in \mathbb{Z}_n]$, $n \in \mathbb{N}$, be a matrix for which there exist positive constants $\gamma_i$, $i \in \mathbb{Z}_n$, and a positive constant $c$ independent of $n$ satisfying, for all $i, j \in \mathbb{Z}_n$,

$$\sum_{\ell\in\mathbb{Z}_n} |a_{\ell j}|\,\gamma_\ell \le c\,\gamma_j \quad \text{and} \quad \sum_{\ell\in\mathbb{Z}_n} |a_{i\ell}|\,\gamma_\ell \le c\,\gamma_i.$$

Then $\|\mathbf{A}\|_2 \le c$ for all $n \in \mathbb{N}$.
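Schur's lemma is easy to test empirically. With the weights $\gamma \equiv 1$ the two hypotheses reduce to bounds on the maximal column and row sums, and the larger of the two must dominate the spectral norm (the random matrix is, of course, only an illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.abs(rng.standard_normal((60, 60)))
gamma = np.ones(60)

# Weighted column and row sums, as in the hypotheses of Schur's lemma.
c_col = np.max((np.abs(A).T @ gamma) / gamma)   # sum_l |a_{lj}| gamma_l <= c gamma_j
c_row = np.max((np.abs(A) @ gamma) / gamma)     # sum_l |a_{il}| gamma_l <= c gamma_i
c = max(c_col, c_row)

spectral_norm = np.linalg.norm(A, 2)
```

In fact the sharper form $\|\mathbf{A}\|_2 \le \sqrt{c_{\mathrm{col}}\,c_{\mathrm{row}}}$ also holds, which is how the lemma is usually proved.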

We are now ready to estimate the truncation operator $R_n = Q_n\mathcal{K}_n - \tilde{\mathcal{K}}_n$ in terms of the function $\mu[\cdot,\cdot;n]$.

Lemma 6.22 Let $b, b'$ be real numbers and $m$ a non-negative integer. Choose the truncation parameters $\delta^n_{i'i}$, $i, i' \in \mathbb{Z}_{n+1}$, such that

$$\delta^n_{i'i} \ge \max\Big\{\frac{c'\eta}{\mu^{i-1}},\ \frac{\eta}{\mu^{i'-1}},\ \mu^{-n+b(n-i)+b'(n-i')}\Big\}. \qquad (6.28)$$

Then for any $u \in H^m(I)$ with $0 < m \le k$,

$$\|R_nX_nu\|_{L^2} \le c\,\mu[\bar{k}+m-b\eta, \bar{k}'-b'\eta; n]\,(n+1)^{1/2}\mu^{-(1-\alpha+m)n}\|u\|_{H^m},$$

and for $u \in L^2[0,1]$,

$$\|R_nX_nu\|_{L^2} \le c\,\mu[\bar{k}-b\eta, \bar{k}'-b'\eta; n]\,\mu^{-(1-\alpha)n}\|u\|_{L^2},$$

where $c$ is a positive constant independent of $n$.

Proof It is clear that for $\varphi \in X_n$,

$$\|\varphi\|_n = \sup_{v\in X_n} \frac{|(\varphi, v)_n|}{\|v\|_n}.$$

It follows that for $u \in H^m[0,1]$ with $0 \le m \le k$,

$$\|R_nX_nu\|_n = \sup_{v\in X_n} \frac{|(R_nX_nu, v)_n|}{\|v\|_n}. \qquad (6.29)$$

We need to pass from the estimate (6.29) in terms of the discrete norm to an estimate in terms of the $L^2$-norm. To this end we need a norm equivalence result: there exist positive constants $A$ and $B$ such that for all functions $x \in X_n$ and $n \in \mathbb{N}$ the inequality

$$A\|x\|_n \le \|x\|_{L^2} \le B\|x\|_n \qquad (6.30)$$


holds. A proof of (6.30) is in order. Because $X_n \subseteq Q_n$, any function $x \in X_n$ has the representation

$$x(t) = \sum_{j\in\mathbb{Z}_{\gamma N}} x(t_j)\,\zeta_j(t).$$

It follows that

$$\|x\|_{L^2}^2 = \sum_{i\in\mathbb{Z}_N} |\Delta_i|\,\mathbf{x}_i^T\bar{W}\mathbf{x}_i,$$

where $\mathbf{x}_i = [x(F_i^{-1}(t_j)) : j \in \mathbb{Z}_\gamma]$ and

$$\bar{W} = \Big[\int_I \zeta_\ell(\tau)\zeta_m(\tau)\,d\tau : \ell, m \in \mathbb{Z}_\gamma\Big].$$

Moreover, we have

$$\|x\|_n^2 = \sum_{j\in\mathbb{Z}_{\gamma N}} w_j\,x^2(t_j) = \sum_{i\in\mathbb{Z}_N} |\Delta_i|\,\mathbf{x}_i^TW\mathbf{x}_i.$$

Because both $W$ and $\bar{W}$ are positive definite matrices, (6.30) follows. Using (6.30) in (6.29), we have

$$\|R_nX_nu\|_2 \le c\sup_{v\in X_n} \frac{|(R_nX_nu, v)_n|}{\|v\|_2} = c\sup_{v\in X_n} \frac{|(R_nX_nu, Q'_nv)_n|}{\|v\|_2}.$$

Using the second estimate in Lemma 6.10, we conclude that

$$\|R_nX_nu\|_2 \le c\sup_{v\in X_n} \frac{|(R_nX_nu, Q'_nv)_n|}{\|Q'_nv\|_2}.$$

Since $Q'_n(X_n) = Y_n(X_n) = Y_n$, we obtain

$$\|R_nX_nu\|_2 \le c\sup_{v\in X_n} \frac{|(R_nX_nu, Y_nv)_n|}{\|Y_nv\|_2}. \qquad (6.31)$$

Now we estimate $|(R_nX_nu, Y_nv)_n|$. Note that

$$X_nu = \sum_{(i,j)\in U_n} (u, \phi_{ij})\,\phi_{ij} \quad \text{and} \quad Y_nv = \sum_{(i,j)\in U_n} (v, \psi_{ij})\,\psi_{ij}.$$

Let $R_{i'j',ij} = K_{i'j',ij} - \tilde{K}_{i'j',ij}$. We have

$$(R_nX_nu, Y_nv)_n = \sum_{(i,j),(i',j')\in U_n} R_{i'j',ij}\,(u, \phi_{ij})(v, \psi_{i'j'}).$$


Denoting $\Lambda = \mu[\bar{k}+m-b\eta, \bar{k}'-b'\eta; n]$ and $\Theta_{i'j',ij} = \Lambda^{-1}\mu^{(1-\alpha+m)n-mi}R_{i'j',ij}$, and defining the matrix $\Theta_n = [\Theta_{i'j',ij}]$, we see that

$$|(R_nX_nu, Y_nv)_n| \le \Lambda\,\mu^{-(1-\alpha+m)n}\Big(\sum_{i\in\mathbb{Z}_{n+1}}\mu^{2mi}\sum_{j\in\mathbb{Z}_{w(i)}} |(u, \phi_{ij})|^2\Big)^{1/2}\|\Theta_n\|_2\,\|Y_nv\|_2.$$

Now for $u \in H^m(I)$ we have

$$\mu^{2mi}\sum_{j\in\mathbb{Z}_{w(i)}} |(u, \phi_{ij})|^2 = \mu^{2mi}\|X_iu - X_{i-1}u\|_{L^2}^2 \le C\|u\|_{H^m}^2,$$

so

$$\sum_{i\in\mathbb{Z}_{n+1}}\mu^{2mi}\sum_{j\in\mathbb{Z}_{w(i)}} |(u, \phi_{ij})|^2 \le C(n+1)\|u\|_{H^m}^2.$$

Thus we conclude that for $u \in H^m(I)$ the estimate

$$|(R_nX_nu, Y_nv)_n| \le C(n+1)^{\sigma(m)}\Lambda\,\mu^{-(1-\alpha+m)n}\|\Theta_n\|_2\,\|u\|_{H^m}\|Y_nv\|_2 \qquad (6.32)$$

holds, where

$$\sigma(m) = \begin{cases} \tfrac{1}{2}, & 0 < m \le k, \\ 0, & m = 0. \end{cases}$$

We now estimate $\|\Theta_n\|_2$, using Lemma 6.21 with the choice $\gamma_{ij} = \mu^{-i/2}$. We have from Lemma 6.20 that

$$\sum_{(i,j)\in U_n} |\Theta_{i'j',ij}|\,\gamma_{ij} \le \sum_{i\in\mathbb{Z}_{n+1}} \Lambda^{-1}\mu^{(1-\alpha+m)n-mi}\|\mathbf{K}_{i',i} - \tilde{\mathbf{K}}_{i',i}\|_\infty\,\mu^{-i/2} \le c\,\Lambda^{-1}\mu^{(1-\alpha)n}\sum_{i\in\mathbb{Z}_{n+1}} \mu^{m(n-i)}\mu^{-(\bar{k}-\frac{1}{2})i-(\bar{k}'+\frac{1}{2})i'}(\delta^n_{i'i})^{-\eta}\mu^{-i/2}.$$

The right-hand side of this inequality can be bounded by

$$c\,\Lambda^{-1}\gamma_{i'j'}\sum_{i\in\mathbb{Z}_{n+1}} \mu^{(\bar{k}+m-b\eta)(n-i)}\mu^{(\bar{k}'-b'\eta)(n-i')}.$$

This inequality and our hypothesis imply that

$$\sum_{(i,j)\in U_n} |\Theta_{i'j',ij}|\,\gamma_{ij} \le c\,\gamma_{i'j'}.$$


Similarly, using the second estimate in Lemma 6.20, we have

$$\sum_{(i',j')\in U_n} |\Theta_{i'j',ij}|\,\gamma_{i'j'} \le \sum_{i'\in\mathbb{Z}_{n+1}} \Lambda^{-1}\mu^{(1-\alpha+m)n-mi}\|\mathbf{K}_{i',i} - \tilde{\mathbf{K}}_{i',i}\|_1\,\mu^{-i'/2} \le c\,\Lambda^{-1}\mu^{(1-\alpha)n}\sum_{i'\in\mathbb{Z}_{n+1}} \mu^{m(n-i)}\mu^{-(\bar{k}+\frac{1}{2})i-(\bar{k}'-\frac{1}{2})i'}(\delta^n_{i'i})^{-\eta}\mu^{-i'/2} \le c\,\Lambda^{-1}\gamma_{ij}\sum_{i'\in\mathbb{Z}_{n+1}} \mu^{(\bar{k}+m-b\eta)(n-i)}\mu^{(\bar{k}'-b'\eta)(n-i')},$$

which implies that

$$\sum_{(i',j')\in U_n} |\Theta_{i'j',ij}|\,\gamma_{i'j'} \le c\,\gamma_{ij}.$$

Hence Lemma 6.21 yields

$$\|\Theta_n\|_2 \le c. \qquad (6.33)$$

Combining the inequalities (6.31)–(6.33) yields the estimates of this lemma.

The next result shows that the truncated scheme is uniquely solvable and stable.

Theorem 6.23 Let $\delta^n_{i'i}$ be chosen according to (6.28) with $b > \frac{\bar{k}+\alpha-1}{\eta}$, $b' > \frac{\bar{k}'+\alpha-1}{\eta}$ and $b+b' > 1$. Then there exist a positive constant $c$ and a positive integer $N$ such that for $n \ge N$ and $x \in X_n$,

$$\|(I - \tilde{\mathcal{K}}_n)x\|_2 \ge c\|x\|_2.$$

In particular, equation (6.27) has a unique solution $\tilde{u}_n \in X_n$ for $n \ge N$, and the truncated scheme (6.27) is stable.

Proof By Proposition 6.16,

$$\|(I - Q_n\mathcal{K}_n)x\|_2 \ge c_0\|x\|_2 \quad \text{for all } x \in X_n.$$

Thus, using the second estimate in Lemma 6.22, we have

$$\|(I - \tilde{\mathcal{K}}_n)x\|_2 \ge \|(I - Q_n\mathcal{K}_n)x\|_2 - \|R_nx\|_2 \ge (c_0 - \varepsilon_n)\|x\|_2,$$

where $\varepsilon_n = \mu[\bar{k}-b\eta, \bar{k}'-b'\eta; n]\,\mu^{-(1-\alpha)n} \to 0$ as $n \to \infty$. The result of the theorem follows.


In the next theorem we present a result on the order of convergence of the approximate solution $\tilde{u}_n$ obtained from the truncated scheme. It is convenient to introduce an interpolation projection $I_n$ from $C(I)$ to $X_n$. For $x \in C(I)$ we let $I_nx$ be the interpolant of $x$ from $X_n$ such that $(I_nx)(t_{ij}) = x(t_{ij})$ for $j \in \mathbb{Z}_{w(i)}$, $i \in \mathbb{Z}_{n+1}$. We assume that the interpolation points $t_{ij}$ are chosen so that the interpolation problem has a unique solution. We also need the following error estimates: for $x \in W^{m,\infty}(I)$,

$$\|x - X_nx\|_2 \le c\,\mu^{-mn}\|x\|_{H^m}, \qquad \|x - I_nx\|_\infty \le c\,\mu^{-mn}\|x\|_{W^{m,\infty}}$$

and

$$\|(\mathcal{K} - \mathcal{K}_n)x\|_\infty \le c\,\mu^{-mn}\|x\|_{W^{m,\infty}}.$$

Theorem 6.24 Let $\delta^n_{i'i}$ be chosen according to (6.28) with $b \ge \frac{m+\bar{k}+\alpha-1}{\eta}$, $b' > \frac{\bar{k}'+\alpha-1}{\eta}$, $b+b' \ge 1+\frac{m}{\eta}$ and $0 < m \le k$. Then there exist a positive constant $c$ and a positive integer $N$ such that for $n \ge N$,

$$\|u - \tilde{u}_n\|_{L^2} \le c\,s(n)^{-m}(\log(s(n)))^{\tau+\frac{1}{2}}\|u\|_{W^{m,\infty}},$$

where $\tau = 0$ except for $(b, b') = \big(\frac{m+\bar{k}+\alpha-1}{\eta}, \frac{\bar{k}'}{\eta}\big)$, in which case $\tau = 1$.

Proof To estimate the error of the solution $\tilde{u}_n$ of the truncated equation (6.27), we observe that Theorem 6.23 yields

$$\|X_nu - \tilde{u}_n\|_2 \le c^{-1}\|(I - \tilde{\mathcal{K}}_n)(X_nu - \tilde{u}_n)\|_2 \le c^{-1}\big(\|R_nX_nu\|_2 + \|Q_n(\mathcal{K} - \mathcal{K}_n)u\|_2 + \|Q_n(I - \mathcal{K}_n)(X_nu - u)\|_2\big).$$

The first term has been estimated in Lemma 6.22; we deal with the last two. Recalling that $\|Q_n\|_\infty$ is uniformly bounded, we conclude that

$$\|Q_n(\mathcal{K} - \mathcal{K}_n)u\|_2 \le \|Q_n(\mathcal{K} - \mathcal{K}_n)u\|_\infty \le \|Q_n\|_\infty\|(\mathcal{K} - \mathcal{K}_n)u\|_\infty \le c\,\mu^{-mn}\|u\|_{W^{m,\infty}}.$$

We next estimate the last term. Note that

$$\|Q_n(I - \mathcal{K}_n)(X_nu - u)\|_2 \le \|Q_n\|_\infty(1 + \|\mathcal{K}_n\|_\infty)\|X_nu - u\|_\infty \le c(\|X_n(u - I_nu)\|_\infty + \|I_nu - u\|_\infty).$$

There is a point $t \in \Delta_j$ such that $t = F_j^{-1}(\tau)$, where $\tau \in I$, and

$$\|X_n(u - I_nu)\|_\infty = |X_n(u - I_nu)(t)|.$$


It follows that

$$\|X_n(u - I_nu)\|_\infty = |((X_n(u - I_nu)) \circ F_j^{-1})(\tau)| = |(X((u - I_nu) \circ F_j^{-1}))(\tau)|,$$

where the notation $X$ was introduced in the proof of Lemma 6.11. Therefore

$$\|X_n(u - I_nu)\|_\infty \le \|X((u - I_nu) \circ F_j^{-1})\|_\infty \le \|X\|_{2,\infty}\|(u - I_nu) \circ F_j^{-1}\|_2 \le \|X\|_{2,\infty}\|(u - I_nu) \circ F_j^{-1}\|_\infty \le \|X\|_{2,\infty}\|u - I_nu\|_\infty.$$

Consequently,

$$\|X_nu - u\|_\infty \le (\|X\|_{2,\infty} + 1)\|u - I_nu\|_\infty \le c\,\mu^{-mn}\|u\|_{W^{m,\infty}}.$$

Using these estimates and Lemma 6.22, we complete the proof.

The next result shows that the condition number of the coefficient matrix of the truncated scheme (6.27) is bounded by a constant independent of $n$.

Theorem 6.25 Let $\delta^n_{i'i}$ be chosen according to (6.28) with $b > \frac{\bar{k}+\alpha-1}{\eta}$, $b' > \frac{\bar{k}'+\alpha-1}{\eta}$ and $b+b' > 1$. Then the condition number of the coefficient matrix of the truncated approximate equation (6.27) is bounded; that is, there exists a positive constant $c$ such that for $n \in \mathbb{N}$,

$$\mathrm{cond}(\mathbf{E}_n - \tilde{\mathbf{K}}_n) \le c.$$

Proof For $\mathbf{e} = [e_{ij} : (i,j) \in U_n] \in \mathbb{R}^{s(n)}$ and $\mathbf{e}' = [e'_{ij} : (i,j) \in U_n] \in \mathbb{R}^{s(n)}$, set

$$x = \sum_{(i,j)\in U_n} e_{ij}\,\phi_{ij} \quad \text{and} \quad y = \sum_{(i,j)\in U_n} e'_{ij}\,\psi_{ij}.$$

It follows from (6.30) and $\|y\|_2 = \|\mathbf{e}'\|_2$ that

$$\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\mathbf{e}\|_2 = \sup_{\mathbf{e}'\in\mathbb{R}^{s(n)},\,\|\mathbf{e}'\|_2=1} |\mathbf{e}'^T(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\mathbf{e}| = \sup_{y\in Y_n,\,\|y\|_2=1} |(y, (I - \tilde{\mathcal{K}}_n)x)_n| \le c\big(\|(I - Q_n\mathcal{K}_nX_n)x\|_2 + \|R_nx\|_2\big).$$

From Lemma 6.11 we know that the operators $\mathcal{K}_nX_n : L^2(I) \to L^2(I)$ are uniformly bounded. By Lemma 6.14(i) we conclude that the norms $\|I - Q_n\mathcal{K}_nX_n\|_2$ are uniformly bounded. This fact, together with the second estimate in Lemma 6.22, yields

$$\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\mathbf{e}\|_2 \le c\|x\|_2 = c\|\mathbf{e}\|_2,$$

that is, $\|\mathbf{E}_n - \tilde{\mathbf{K}}_n\|_2 \le c$.


Moreover, for any $\mathbf{e}' \in \mathbb{R}^{s(n)}$ we find $\mathbf{e} \in \mathbb{R}^{s(n)}$ such that $\mathbf{e}' = (\mathbf{E}_n - \tilde{\mathbf{K}}_n)\mathbf{e}$, and choose the unique $x \in X_n$ and $y \in Y_n$ such that for all $(r,\ell) \in U_n$, $\langle x, \phi_{r\ell}\rangle = e_{r\ell}$ and $\langle y, \psi_{r\ell}\rangle = e'_{r\ell}$. Therefore we have

$$Q_ny = (I - \tilde{\mathcal{K}}_n)x.$$

Since $\|x\|_2 = \|\mathbf{e}\|_2$ and $\|y\|_2 = \|\mathbf{e}'\|_2$, we have by Theorem 6.23 that

$$c\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)^{-1}\mathbf{e}'\|_2 = c\|\mathbf{e}\|_2 = c\|x\|_2 \le \|(I - \tilde{\mathcal{K}}_n)x\|_2 = \|Q_ny\|_2 \le \|Q_n|_{Y_n}\|_2\,\|y\|_2,$$

which proves that

$$\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)^{-1}\|_2 \le c^{-1}\|Q_n|_{Y_n}\|_2.$$

It follows from Lemma 6.10 that $\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)^{-1}\|_2$ is bounded by a constant.

For any matrix $\mathbf{A}$ we use $\mathcal{N}(\mathbf{A})$ to denote the number of nonzero entries of $\mathbf{A}$. Employing a standard argument (see, for example, Section 5.3.1 and [64]), we conclude that when $b, b'$ are real numbers and $\delta^n_{i'i}$ is chosen according to (6.28) with $\eta = \alpha + \bar{k} + \bar{k}' - 1$, there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,

$$\mathcal{N}(\mathbf{E}_n - \tilde{\mathbf{K}}_n) \le c\,\mu^n(n + \mu[b-1, b'-1; n]).$$

The choice $b = 1$ and $b' \in \big(\frac{\bar{k}'+\alpha-1}{\eta}, 1\big)$ results in

$$\mu[b-1, b'-1; n] = O(n), \quad n \to \infty.$$

Theorem 6.26 Suppose that $k'' \ge 2k-1$, $m = k$ and $\bar{k} = k$. Choose $b = 1$ and $b' \in \big(\frac{\bar{k}'+\alpha-1}{\eta}, 1\big)$. Then the stability estimate in Theorem 6.23, the error estimate in Theorem 6.24 with $\tau = 0$ and the boundedness of the condition number in Theorem 6.25 hold, together with the following asymptotic estimate of the complexity:

$$\mathcal{N}(\mathbf{E}_n - \tilde{\mathbf{K}}_n) = O(s(n)\log(s(n))) \quad \text{as } n \to \infty.$$
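The near-linear complexity can be observed directly by counting, level pair by level pair, the entries that survive a distance cutoff of the form (6.28). The sketch below uses an illustrative dyadic model (μ = 2, one function per cell, b = 1 and an arbitrary b′ < 1), not the actual DMPG basis; the counts grow far more slowly than the full s² entries.

```python
import numpy as np

def nnz_after_truncation(n, bprime=0.8):
    # Count the entries kept by the cutoff delta = 2**(-n + (n-i) + b'(n-i')),
    # summed over all level pairs (i', i); level i has 2**i cell functions.
    total = 0
    for ip in range(n + 1):
        for i in range(n + 1):
            delta = 2.0 ** (-n + (n - i) + bprime * (n - ip))
            h, hp = 2.0 ** (-i), 2.0 ** (-ip)
            lo = np.arange(2 ** i) * h           # trial supports [lo, lo+h]
            lop = np.arange(2 ** ip) * hp        # test supports [lop, lop+hp]
            d = np.maximum(0.0,
                           np.maximum(lo[None, :] - (lop[:, None] + hp),
                                      lop[:, None] - (lo[None, :] + h)))
            total += int(np.count_nonzero(d <= delta))
    return total

# s = 2**(n+1) - 1 functions in total at resolution n
sizes = [(2 ** (n + 1) - 1, nnz_after_truncation(n)) for n in range(3, 9)]
```

Plotting the pairs in `sizes` against s log s shows a roughly constant ratio, in line with the theorem.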

6.2.4 A numerical example

In this subsection we present a numerical example in which the DMPG algorithm is applied to a boundary integral equation, to illustrate the convergence order, matrix compression and computational complexity of the method.


We consider the reformulation of the following boundary value problem for the Laplace equation as an integral equation. The boundary value problem is given by

$$\Delta u(P) = 0, \quad P \in D,$$

$$\frac{\partial u(P)}{\partial n_P} = -c\,u(P) + g(P), \quad P \in \Gamma = \partial D,$$

where $D$ is a bounded, simply connected open region in $\mathbb{R}^2$ with a smooth boundary $\Gamma$, $n_P$ is the exterior unit normal to $\Gamma$ at $P$, $g$ is a given continuous function on the boundary and $c$ is a positive constant. We seek a solution $u \in C^2(D) \cap C^1(\bar{D})$ of the boundary value problem. Following Section 2.1.3 (see also [12, 270]), we employ the Green representation formula for harmonic functions and rewrite the above problem as the boundary integral equation

$$u(P) - (Au)(P) - (Bu)(P) = -(Bg)(P), \quad P \in \Gamma,$$

where

$$(Au)(P) = \frac{1}{\pi}\int_\Gamma u(Q)\,\frac{\partial}{\partial n_Q}\log|P - Q|\,d\sigma(Q), \quad P \in \Gamma,$$

and

$$(Bu)(P) = \frac{1}{\pi}\int_\Gamma u(Q)\log|P - Q|\,d\sigma(Q), \quad P \in \Gamma.$$

To convert the above boundary integral equation into an integral equation on an interval, we introduce a parametrization $r(t) = (\xi(t), \eta(t))$, $0 \le t \le 2\pi$, of the boundary $\Gamma$. We assume that each component of $r$ is a $2\pi$-periodic function in $C^2$ and that $|r'(t)| = \sqrt{\xi'(t)^2 + \eta'(t)^2} \neq 0$ for $0 \le t \le 2\pi$. Using this parametrization, we convert the above equation into the following equivalent one:

$$(u \circ r)(t) - (K(u \circ r))(t) = -\frac{1}{c}(B(g \circ r))(t), \quad t \in [0, 2\pi],$$

where

$$(Kv)(t) = \int_0^{2\pi} k(t,s)\,v(s)\,ds,$$

$$k(t,s) = \frac{c}{\pi}|r'(s)|\log|r(t) - r(s)| + \frac{1}{\pi}\cdot\frac{\eta'(s)[\xi(s) - \xi(t)] - \xi'(s)[\eta(s) - \eta(t)]}{[\xi(s) - \xi(t)]^2 + [\eta(s) - \eta(t)]^2}$$

and "$\circ$" stands for function composition. This kernel has a weak singularity along its diagonal; a detailed discussion of its regularity is given in [270].
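The second term of $k(t,s)$ is the parametrized double-layer kernel, and it admits a classical sanity check: for the unit circle $r(t) = (\cos t, \sin t)$ it equals the constant $1/(2\pi)$ for every $s \neq t$. A small sketch (the function names and the callable interface are ours):

```python
import numpy as np

def double_layer_part(t, s, r, rprime):
    # (1/pi) (eta'(s)[xi(s)-xi(t)] - xi'(s)[eta(s)-eta(t)]) / |r(s)-r(t)|^2
    xs, ys = r(s)
    xt, yt = r(t)
    dxs, dys = rprime(s)
    num = dys * (xs - xt) - dxs * (ys - yt)
    den = (xs - xt) ** 2 + (ys - yt) ** 2
    return num / (np.pi * den)

circle = lambda t: (np.cos(t), np.sin(t))
circle_prime = lambda t: (-np.sin(t), np.cos(t))

values = [double_layer_part(t, s, circle, circle_prime)
          for (t, s) in [(0.3, 1.0), (2.0, 5.5), (0.1, 3.0)]]
```

On the circle the numerator reduces to $1 - \cos(s-t)$ and the denominator to $2(1 - \cos(s-t))$, so the quotient is exactly $1/(2\pi)$; this also makes the removability of the diagonal singularity of this term for smooth boundaries plausible.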


In the numerical example presented below we consider the boundary value problem described above with

$$D = \Big\{(x,y) : x^2 + \Big(\frac{y}{2}\Big)^2 < 1\Big\}.$$

For comparison purposes we consider the boundary value problem with the known exact solution $u(P) = e^x\cos y$, $P \in D$, and compute the function $g$ accordingly. This leads to

$$g(P) = \frac{\partial}{\partial n_P}(e^x\cos y) + e^x\cos y, \quad P = (x,y) \in \Gamma.$$
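The chosen exact solution $u(x,y) = e^x\cos y$ is indeed harmonic, since $u_{xx} = e^x\cos y$ and $u_{yy} = -e^x\cos y$ cancel; a five-point finite-difference check makes this concrete (the grid spacing and sample points are arbitrary):

```python
import numpy as np

def u(x, y):
    return np.exp(x) * np.cos(y)

def laplacian_fd(f, x, y, h=1e-4):
    # Standard five-point stencil approximation of u_xx + u_yy.
    return (f(x + h, y) + f(x - h, y) + f(x, y + h) + f(x, y - h)
            - 4.0 * f(x, y)) / h ** 2

residual = laplacian_fd(u, 0.3, -0.2)
```

The residual is at the level of discretization plus roundoff error, many orders of magnitude below the size of $u$ itself.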

The equivalent boundary integral equation is given by (for details see [270])

$$u(s) - \int_I K(s,t)\,u(t)\,dt = f(s), \quad s \in I = [0,1],$$

where $u(s)$ represents the value $u(P)$ at the point $P = (\cos 2\pi s, 2\sin 2\pi s)$, $s \in I$, the right-hand-side function $f$ is likewise determined by the exact solution $u$, and the kernel is given by

$$K(s,t) = \sqrt{1 + 3\cos^2 2\pi t}\,\log\big[(\cos 2\pi t - \cos 2\pi s)^2 + 4(\sin 2\pi t - \sin 2\pi s)^2\big] + \frac{8\sin^2\pi(t-s)}{4\sin^2\pi(t-s) + 3(\sin 2\pi t - \sin 2\pi s)^2}.$$

To apply the DMPG method we choose $k = \mu = 2$, $k' = \nu = 1$ and $k'' = 3$ (so that $\lambda = 2$), and choose the initial trial and test spaces as

$$X_0 = \mathrm{span}\{\phi_{00}, \phi_{01}\} \quad \text{and} \quad Y_0 = \mathrm{span}\{\psi_{00}, \psi_{01}\},$$

respectively, where

$$\phi_{00}(t) = 1, \qquad \phi_{01}(t) = \sqrt{3}(2t - 1)$$

and

$$\psi_{00}(t) = \begin{cases} \sqrt{2}, & 0 \le t \le \frac{1}{2}, \\ 0, & \frac{1}{2} < t \le 1, \end{cases} \qquad \psi_{01}(t) = \begin{cases} 0, & 0 \le t \le \frac{1}{2}, \\ \sqrt{2}, & \frac{1}{2} < t \le 1. \end{cases}$$
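These four functions are orthonormal in $L^2[0,1]$, as one can confirm with Gauss–Legendre quadrature applied piecewise (exact here, since every piece is a polynomial; the helper is ours):

```python
import numpy as np

def integrate(f, pieces=((0.0, 0.5), (0.5, 1.0)), deg=6):
    # Gauss-Legendre quadrature applied on each smooth piece.
    xg, wg = np.polynomial.legendre.leggauss(deg)
    total = 0.0
    for a, b in pieces:
        t = 0.5 * (b - a) * xg + 0.5 * (a + b)
        total += 0.5 * (b - a) * np.sum(wg * f(t))
    return total

phi00 = lambda t: np.ones_like(t)
phi01 = lambda t: np.sqrt(3.0) * (2.0 * t - 1.0)
psi00 = lambda t: np.where(t <= 0.5, np.sqrt(2.0), 0.0)
psi01 = lambda t: np.where(t <= 0.5, 0.0, np.sqrt(2.0))

pairs = [(phi00, phi00), (phi01, phi01), (psi00, psi00), (psi01, psi01),
         (phi00, phi01), (psi00, psi01)]
gram = [integrate(lambda t: f(t) * g(t)) for f, g in pairs]
```

The first four inner products are 1 (unit norms) and the last two are 0 (orthogonality within each space).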

The initial spaces

$$W_0 = \mathrm{span}\{\phi_{10}, \phi_{11}\}, \qquad W'_0 = \mathrm{span}\{\psi_{10}, \psi_{11}\}$$

are given for $t \in I$ by the equations

$$\phi_{10}(t) = \begin{cases} 6t - 1, & 0 \le t \le \frac{1}{2}, \\ 6t - 5, & \frac{1}{2} < t \le 1, \end{cases} \qquad \phi_{11}(t) = \begin{cases} \sqrt{3}(4t - 1), & 0 \le t \le \frac{1}{2}, \\ \sqrt{3}(3 - 4t), & \frac{1}{2} < t \le 1, \end{cases}$$

$$\psi_{10}(t) = \begin{cases} \sqrt{2}, & 0 \le t \le \frac{1}{4}, \\ -\sqrt{2}, & \frac{1}{4} < t \le \frac{1}{2}, \\ 0, & \frac{1}{2} < t \le 1, \end{cases} \qquad \psi_{11}(t) = \begin{cases} 0, & 0 \le t \le \frac{1}{2}, \\ \sqrt{2}, & \frac{1}{2} < t \le \frac{3}{4}, \\ -\sqrt{2}, & \frac{3}{4} < t \le 1. \end{cases}$$

The bases for $W_n$, $W'_n$, $n \in \mathbb{N}$, are generated recursively as described in Chapter 4.
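The defining property of these functions is their vanishing moments: $\phi_{10}$ and $\phi_{11}$ are orthogonal to all polynomials of degree $< k = 2$, while $\psi_{10}$ and $\psi_{11}$ have vanishing mean ($k' = 1$). A quick check by piecewise Gauss quadrature (the helper and breakpoints are ours):

```python
import numpy as np

def integrate(f, pieces, deg=6):
    # Gauss-Legendre quadrature on each smooth piece (exact for polynomials).
    xg, wg = np.polynomial.legendre.leggauss(deg)
    total = 0.0
    for a, b in pieces:
        t = 0.5 * (b - a) * xg + 0.5 * (a + b)
        total += 0.5 * (b - a) * np.sum(wg * f(t))
    return total

halves = ((0.0, 0.5), (0.5, 1.0))
quarters = ((0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0))

phi10 = lambda t: np.where(t <= 0.5, 6.0 * t - 1.0, 6.0 * t - 5.0)
phi11 = lambda t: np.sqrt(3.0) * np.where(t <= 0.5, 4.0 * t - 1.0, 3.0 - 4.0 * t)
psi10 = lambda t: np.where(t <= 0.25, np.sqrt(2.0),
                           np.where(t <= 0.5, -np.sqrt(2.0), 0.0))
psi11 = lambda t: np.where(t <= 0.5, 0.0,
                           np.where(t <= 0.75, np.sqrt(2.0), -np.sqrt(2.0)))

moments = [integrate(phi10, halves),
           integrate(lambda t: t * phi10(t), halves),
           integrate(phi11, halves),
           integrate(lambda t: t * phi11(t), halves),
           integrate(psi10, quarters),
           integrate(psi11, quarters)]
```

All six moments vanish (up to roundoff), which is exactly what drives the entry decay in Lemma 6.18.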

We report numerical results for the truncated DMPG scheme applied to the integral equation given above in the following table. We define the compression rate as the number of nonzero entries of the matrix $\tilde{\mathbf{K}}_n$ divided by the total number of entries of the matrix before truncation. In the table, $O_n$, $C_n$ and $R_n$ denote, respectively, the order of convergence in the $L^2$-norm, the condition number of the truncated coefficient matrix and the compression rate.

n      7       8       9
O_n    --      1.877   1.829
C_n    14.02   15.07   16.04
R_n    0.43    0.35    0.19
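For the record, the orders $O_n$ are computed in the standard way: since the meshes refine by the factor $\mu = 2$ from level to level, $O_n = \log(e_{n-1}/e_n)/\log\mu$ for consecutive $L^2$-errors $e_n$. A sketch with purely hypothetical error values (not those of the experiment):

```python
import math

mu = 2
errors = {7: 2.1e-4, 8: 5.7e-5, 9: 1.6e-5}   # hypothetical L2-errors

orders = {n: math.log(errors[n - 1] / errors[n]) / math.log(mu)
          for n in (8, 9)}
```

Values close to $k = 2$ are what Theorem 6.24 (with $m = k$) predicts, up to the logarithmic factor.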

6.3 Bibliographical remarks

The Petrov–Galerkin and iterated Petrov–Galerkin methods for Fredholm integral equations of the second kind were originally studied in [77] (cf. Section 3.5), where the useful notions of the generalized best approximation and of a regular pair of subspaces were introduced to analyze the convergence of the methods. One advantage of the Petrov–Galerkin method is that, by choosing the test spaces to be spaces of piecewise polynomials of degree lower than that of the trial space, it achieves the same order of convergence as the Galerkin method with the same trial space at much lower computational cost. However, the practical use of the Petrov–Galerkin method requires the numerical computation of the integrals appearing in the resulting linear system. In general, the entries of the coefficient matrix are singular integrals, and they may not be evaluated exactly. To overcome this difficulty, a theoretical framework for the analysis of a large class of numerical schemes, including the discrete


Galerkin, Petrov–Galerkin, collocation and quadrature methods, was developed in [80] (cf. Section 3.5.3).

The multiscale Petrov–Galerkin schemes presented in Section 6.1 were originally developed in [64] to solve weakly singular Fredholm integral equations of the second kind. They are based on the discontinuous orthogonal multiscale basis functions constructed in [200] and [201], and they lead to a fast algorithm for the numerical solution of such equations. Additional information on multiscale Petrov–Galerkin algorithms can be found in [146, 147].

The discrete multiscale Petrov–Galerkin method for the integral equations presented in Section 6.2 was introduced in [68] by employing the product integration method. The computational complexity, stability and convergence of the method based on the compression strategy are analyzed using the framework developed in [80]. For the classical product integration method for solving Fredholm integral equations of the second kind, the reader is referred to [15]. Some early work on the discrete Galerkin method can be found in [16] and [24].


7

Multiscale collocation methods

The purpose of this chapter is to present a multiscale collocation method for solving Fredholm integral equations of the second kind with weakly singular kernels. Among conventional numerical methods for solving integral equations, the collocation method receives the most favorable attention in engineering applications due to its lower computational cost in generating the coefficient matrix of the corresponding discrete equations. In comparison, the implementation of the Galerkin method requires much more computational effort for the evaluation of integrals (see, for example, [12, 19, 77]). Nonetheless, most attention in multiscale and wavelet methods for boundary integral equations has been paid to Galerkin methods or Petrov–Galerkin methods (see [28, 64, 95] and the references cited therein). These methods are amenable to $L^2$ analysis, and therefore the vanishing moments of the multiscale basis functions naturally lead to matrix truncation techniques. For collocation methods the appropriate setting is the $L^\infty$ space, and this presents challenging technical obstacles for the identification of good matrix truncation strategies. Following [69], we present a construction of multiscale basis functions and corresponding multiscale collocation functionals, both having vanishing moments. These basis functions and collocation functionals lead to a numerically sparse matrix representation of the Fredholm integral operator. A proper truncation of such a numerically sparse matrix results in a fast numerical algorithm for solving the equation which preserves the optimal convergence order up to a logarithmic factor.

In Section 7.1 we describe the multiscale basis functions and the corresponding collocation functionals needed for developing the fast algorithm. Section 7.2 is devoted to a presentation of the multiscale collocation method. We analyze the proposed method in Section 7.3, giving estimates of the convergence order and the computational cost, and a bound on the condition number of the related coefficient matrix.


7.1 Multiscale basis functions and collocation functionals

Multiscale collocation methods require the availability of multiscale basis functions and collocation functionals that have vanishing moments of certain degrees. In this section we present a construction of multiscale basis functions and corresponding collocation functionals for the multiscale collocation methods. We also study properties of these functions and functionals.

7.1.1 A construction of multiscale basis functions and functionals

The solution spaces of the integral equation will be piecewise polynomials on multiscale partitions. Let us first recall the method used in previous chapters to generate multiscale partitions of an invariant set in the $d$-dimensional Euclidean space $\mathbb{R}^d$. We start with a positive integer $\mu$ and a family $\Phi = \{\phi_e : e \in \mathbb{Z}_\mu\}$ of contractive affine mappings on $\mathbb{R}^d$. There exists a unique invariant set $\Omega \subset \mathbb{R}^d$ associated with the family $\Phi$ of mappings such that

$$\Phi(\Omega) = \Omega, \qquad (7.1)$$

where

$$\Phi(\Omega) = \bigcup_{e\in\mathbb{Z}_\mu} \phi_e(\Omega).$$

We are interested in the cases where has a simple structure including forexample the cube and simplex in Rd With these cases in mind we make thefollowing additional restriction on the family of mappings

(a) For every e isin Zμ the mapping φe has a continuous inverse on (b) The set has nonempty interior and

meas (φe() cap φeprime()) = 0 e eprime isin Zμ e = eprime

We use $\Phi$ to obtain a sequence of multiscale partitions $\{\mathcal{P}_n : n\in\mathbb{N}_0\}$ of the set $\Omega$ in the following way. Given any $e = (e_j : j\in\mathbb{Z}_n)\in\mathbb{Z}_\mu^n$, we define the mapping

$$\phi_e = \phi_{e_0}\circ\phi_{e_1}\circ\cdots\circ\phi_{e_{n-1}}$$

and the number

$$\mu(e) = \mu^{n-1}e_0 + \cdots + \mu e_{n-2} + e_{n-1}.$$

Note that every $i\in\mathbb{Z}_{\mu^n}$ can be written uniquely as $i = \mu(e)$ for some $e\in\mathbb{Z}_\mu^n$.

From equation (7.1) and conditions (a) and (b) it follows that the collection of sets

$$\mathcal{P}_n = \{\Omega_{n,e} : \Omega_{n,e} = \phi_e(\Omega),\ e\in\mathbb{Z}_\mu^n\}$$

forms a partition of $\Omega$. We require that this partition have the following property:

(c) There exist positive constants $c_-$, $c_+$ such that for all $n\in\mathbb{N}_0$,

$$c_-\mu^{-n/d} \le \max\{d(\Omega_{n,e}) : e\in\mathbb{Z}_\mu^n\} \le c_+\mu^{-n/d}. \tag{7.2}$$
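To make the construction tangible, here is a small sketch (an assumed illustration, not the book's code) for $\Omega = [0,1]$, $\mu = 2$ and $\phi_e(t) = (t+e)/2$: it builds the level-$n$ cells $\Omega_{n,e} = \phi_e(\Omega)$, the index map $\mu(e)$, and checks the diameter property (7.2) with $d = 1$.

```python
# Sketch (not from the book): multiscale partitions of the invariant set
# Omega = [0, 1] under the contractive affine maps phi_e(t) = (t + e)/2,
# e in Z_2, illustrating the index mu(e) and property (7.2) with d = 1.
from itertools import product

mu = 2
phi = [lambda t, e=e: (t + e) / mu for e in range(mu)]

def compose(e, t):
    # phi_e = phi_{e_0} o phi_{e_1} o ... o phi_{e_{n-1}}
    for ej in reversed(e):
        t = phi[ej](t)
    return t

def mu_index(e):
    # mu(e) = mu^{n-1} e_0 + ... + mu e_{n-2} + e_{n-1}
    i = 0
    for ej in e:
        i = mu * i + ej
    return i

n = 3
cells = {}
for e in product(range(mu), repeat=n):
    a, b = compose(e, 0.0), compose(e, 1.0)
    cells[mu_index(e)] = (a, b)

# The cells Omega_{n,e} tile [0,1] in the order given by mu(e), and every
# diameter equals mu^{-n}, consistent with (7.2) for d = 1.
assert sorted(cells) == list(range(mu ** n))
assert all(abs((b - a) - mu ** (-n)) < 1e-12 for a, b in cells.values())
print(cells[0], cells[7])
```

The same skeleton works for any $\mu$ and any family of affine maps; only `phi` changes.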

On this partition we consider piecewise polynomials. Choose a positive integer $k$, and for $n\in\mathbb{N}$ let $X_n$ be the space of all functions whose restriction to each cell $\Omega_{n,e}$, $e\in\mathbb{Z}_\mu^n$, is a polynomial of total degree $\le k-1$. Here we use the convention that for $n = 0$ the set $\Omega$ is the only cell in the partition, and so

$$m = \dim X_0 = \binom{k+d-1}{d}.$$

We must generate a suitable multiscale decomposition of $X_n$. To this end, let $G_0 = \{t_j : j\in\mathbb{Z}_m\}$ be a finite set of distinct points in $\Omega$ which is refinable relative to the mappings $\Phi$, that is, $G_0$ satisfies

$$G_0 \subseteq \Phi(G_0).$$

Set

$$G_1 = \Phi(G_0),\qquad V_1 = G_1\setminus G_0 = \{t_{m+j} : j\in\mathbb{Z}_r\},$$

with $r = (\mu-1)m$. Now we require that there exist a basis of elements in $X_0$, denoted by $\{\psi_j : j\in\mathbb{Z}_m\}$, such that

$$X_0 = \operatorname{span}\{\psi_j : j\in\mathbb{Z}_m\},$$

and that they satisfy the Lagrange interpolation conditions

$$\psi_i(t_j) = \delta_{ij},\quad i, j\in\mathbb{Z}_m. \tag{7.3}$$

A construction of refinable points $\{t_j : j\in\mathbb{Z}_m\}\subset\Omega$ that admit a unique $d$-dimensional Lagrange interpolation is presented in [198].
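As a concrete instance (an assumed example; the point set matches the choice $t_i = (i+1)/3$ discussed at the end of this section): for $d = 1$, $k = 2$, $\mu = 2$ and $\Omega = [0,1]$, the set $G_0 = \{1/3, 2/3\}$ is refinable under $\phi_e(t) = (t+e)/2$, and the Lagrange basis of $X_0$ satisfying (7.3) is explicit.

```python
# Illustration (assumed example, not from the text): d = 1, k = 2, mu = 2 on
# Omega = [0,1]. G0 = {1/3, 2/3} is refinable under phi_e(t) = (t + e)/2,
# and psi_0, psi_1 form the Lagrange basis of X0 (linear polynomials).
from fractions import Fraction as F

G0 = [F(1, 3), F(2, 3)]
phi = [lambda t, e=F(e): (t + e) / 2 for e in range(2)]

# Refinability: G0 is contained in Phi(G0) = {phi_e(t) : e in Z_2, t in G0}.
G1 = sorted({p(t) for p in phi for t in G0})
assert all(t in G1 for t in G0)

# Lagrange basis of X0 = span{1, t} on the points of G0: psi_i(t_j) = delta_ij.
psi = [lambda t: 2 - 3 * t, lambda t: 3 * t - 1]
for i in range(2):
    for j in range(2):
        assert psi[i](G0[j]) == (1 if i == j else 0)

# The r = (mu - 1)m = 2 new points V1 = G1 \ G0.
print([t for t in G1 if t not in G0])
```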

With this basis of $X_0$ at hand, we generate a multiscale basis for $X_n$ in the following way. For this purpose we introduce linear operators $T_e : X\to X$, $e\in\mathbb{Z}_\mu$, defined by

$$(T_ex)(t) = x(\phi_e^{-1}(t))\,\chi_{\phi_e(\Omega)}(t),\quad t\in\Omega,$$

where $\chi_S$ denotes the characteristic function of a set $S$. It follows that

$$X_n = \bigoplus_{e\in\mathbb{Z}_\mu}T_eX_{n-1},\quad n\in\mathbb{N},$$


where $A\oplus B$ denotes the direct sum of the spaces $A$ and $B$. We assume that the functions $\psi_{m+j}\in X_1$, $j\in\mathbb{Z}_r$, satisfy

$$\psi_{m+j}(t_i) = 0,\quad i\in\mathbb{Z}_m,\qquad \psi_{m+j}(t_{m+j'}) = \delta_{jj'},\quad j, j'\in\mathbb{Z}_r. \tag{7.4}$$

Then the functions $\psi_j$, $j\in\mathbb{Z}_q$, form a basis for $X_1$, where $q = m + r$.

We require another basis for $X_1$, consisting of functions with vanishing moments. To this end we set

$$w_{0j} = \psi_j,\quad j\in\mathbb{Z}_m.$$

For $j\in\mathbb{Z}_r$ we find a vector $[c_{js} : s\in\mathbb{Z}_q]\in\mathbb{R}^q$ such that

$$w_{1j} = \sum_{s\in\mathbb{Z}_q}c_{js}\psi_s,\quad j\in\mathbb{Z}_r, \tag{7.5}$$

satisfies the equation

$$(w_{1j}, \psi_{j'}) = 0,\quad j'\in\mathbb{Z}_m,\ j\in\mathbb{Z}_r, \tag{7.6}$$

where $(\cdot,\cdot)$ denotes the inner product in $L^2(\Omega)$. Since (7.6) is a linear system of rank $m$ with $m$ equations and $q$ unknowns, there exist $r$ linearly independent solutions of this system, which we denote by $w_{1j}$, $j\in\mathbb{Z}_r$. These functions form a basis for the orthogonal complement $W_1$ of $X_0$ in $X_1$.
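The system (7.6) can be solved numerically. The sketch below (an assumed example with $d = 1$, $k = 2$, $\mu = 2$ and $G_0 = \{1/3, 2/3\}$; the level-1 basis $\psi_2 = T_0\psi_0$, $\psi_3 = T_1\psi_1$ and the quadrature are illustrative choices, not the book's) computes the $r = 2$ vanishing-moment coefficient vectors of (7.5) as a nullspace.

```python
# Sketch (assumed example): build the level-1 Lagrange basis psi_0..psi_3 on
# [0,1] with cells [0,1/2), [1/2,1], and solve the rank-m system (7.6) for
# vanishing-moment functions w_{1j} = sum_s c_{js} psi_s orthogonal to X0.
import numpy as np

cells = [(0.0, 0.5), (0.5, 1.0)]
gauss = (0.5 - 0.5 / np.sqrt(3), 0.5 + 0.5 / np.sqrt(3))  # 2-pt Gauss on [0,1]

def psi(s, t):
    # psi_0, psi_1: Lagrange basis of X0 at 1/3, 2/3 (linear polynomials);
    # psi_2 = T_0 psi_0 and psi_3 = T_1 psi_1 vanish on G0, as in (7.4).
    if s == 0: return 2 - 3 * t
    if s == 1: return 3 * t - 1
    if s == 2: return 2 - 6 * t if t < 0.5 else 0.0
    return 6 * t - 4 if t >= 0.5 else 0.0

def inner(f, g):
    # 2-point Gauss per cell: exact for the piecewise-quadratic products here,
    # and the interior nodes avoid the discontinuity at t = 1/2.
    return sum((b - a) / 2 * f(a + (b - a) * x) * g(a + (b - a) * x)
               for a, b in cells for x in gauss)

# The m x q matrix of conditions (w, psi_j') = 0, j' in Z_m.
M = np.array([[inner(lambda t: psi(s, t), lambda t: psi(jp, t))
               for s in range(4)] for jp in range(2)])
_, _, Vt = np.linalg.svd(M)
C = Vt[2:]                 # r = 2 independent solutions: coefficients c_j in (7.5)
for c in C:
    w = lambda t, c=c: sum(cs * psi(s, t) for s, cs in enumerate(c))
    assert all(abs(inner(w, lambda t: psi(jp, t))) < 1e-12 for jp in range(2))
print("found", len(C), "vanishing-moment functions")
```

Any basis of the nullspace serves; the book's normalization is fixed later (e.g. $C_2 = I_r$ in the proof of Proposition 7.5).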

Let

$$W_{i+1} = \bigoplus_{e\in\mathbb{Z}_\mu}T_eW_i,\quad i\in\mathbb{N}.$$

Then $X_n$ has the decomposition (see Chapter 4)

$$X_n = X_0\oplus^\perp W_1\oplus^\perp\cdots\oplus^\perp W_n.$$

As before, we denote $W_0 = X_0$, $w(i) = \dim W_i$, $i\in\mathbb{N}_0$, and $s(n) = \dim X_n$, $n\in\mathbb{N}_0$.

Let us now turn our attention to the construction of multiscale collocation functionals. We start by recalling our notational conventions. For a compact subset $\Omega$ of the $d$-dimensional Euclidean space $\mathbb{R}^d$, we let $X = L^\infty(\Omega)$, $V = C(\Omega)$, and denote the dual space of $V$ by $V^*$. For any $s\in\Omega$ we use $\delta_s$ to denote the linear functional in $V^*$ defined for $v\in V$ by the equation $\langle\delta_s, v\rangle = v(s)$. We need to evaluate $\delta_s$ on functions in $X$. Therefore, as in [21], we take any norm-preserving extension of $\delta_s$ to $X$ and use the same notation for the extension. In particular, this convention allows us to evaluate piecewise polynomials anywhere on $\Omega$. We begin by defining

$$\ell_{0j} = \delta_{t_j},\quad j\in\mathbb{Z}_m,$$


and for $j'\in\mathbb{Z}_r$ we find the vector $[c'_{j's} : s\in\mathbb{Z}_q]$ such that

$$\ell_{1j'} = \sum_{s\in\mathbb{Z}_q}c'_{j's}\delta_{t_s},\quad j'\in\mathbb{Z}_r, \tag{7.7}$$

satisfies the equations

$$\langle \ell_{1j'}, w_{0j}\rangle = 0,\quad j\in\mathbb{Z}_m,\ j'\in\mathbb{Z}_r, \tag{7.8}$$

$$\langle \ell_{1j'}, w_{1j}\rangle = \delta_{jj'},\quad j\in\mathbb{Z}_r,\ j'\in\mathbb{Z}_r. \tag{7.9}$$

For each $j'\in\mathbb{Z}_r$, the $q\times q$ coefficient matrix of this linear system is given by

$$A = [\langle \delta_{t_{i'j'}}, w_{ij}\rangle : (i, j), (i', j')\in U_1],$$

where $t_{1j} = t_{m+j}$, $j\in\mathbb{Z}_r$, and $t_{0j} = t_j$, $j\in\mathbb{Z}_m$. Let us prove that the matrix $A$ is nonsingular. To this end, we assume that there are constants $a_{ij}$, $(i, j)\in U_1$, such that

$$\sum_{(i,j)\in U_1}a_{ij}\langle \delta_{t_{i'j'}}, w_{ij}\rangle = 0,\quad (i', j')\in U_1,$$

that is,

$$\Big\langle \delta_{t_{i'j'}},\ \sum_{(i,j)\in U_1}a_{ij}w_{ij}\Big\rangle = 0,\quad (i', j')\in U_1.$$

Since the set $G_1$ is Lagrange admissible relative to $(\Phi, X_1)$ (see Section 4.5.1), we conclude that

$$\sum_{(i,j)\in U_1}a_{ij}w_{ij} = 0,$$

and therefore $a_{ij} = 0$, $(i, j)\in U_1$. This proves that $A$ is nonsingular.

We find it convenient to write equations (7.8) and (7.9) in matrix form. For this purpose we introduce the matrices

$$B = [\langle \delta_{t_i}, \psi_j\rangle : i, j\in\mathbb{Z}_q],\qquad \tilde B = [\langle \delta_{t_{m+i}}, \psi_j\rangle : i\in\mathbb{Z}_r,\ j\in\mathbb{Z}_m],$$

$$C_1 = [c_{js} : j\in\mathbb{Z}_r,\ s\in\mathbb{Z}_m],\qquad C_2 = [c_{j,m+s} : j\in\mathbb{Z}_r,\ s\in\mathbb{Z}_r],$$

$$C'_1 = [c'_{js} : j\in\mathbb{Z}_r,\ s\in\mathbb{Z}_m],\qquad C'_2 = [c'_{j,m+s} : j\in\mathbb{Z}_r,\ s\in\mathbb{Z}_r]$$

and

$$C = [C_1\ C_2],\qquad C' = [C'_1\ C'_2].$$

The next lemma gives a relationship between the matrices $C$ and $C'$.


Lemma 7.1  The following relations hold:

$$C'_1 = -C'_2\tilde B,\qquad C'_2 = (C_2^{\mathsf T})^{-1}.$$

Proof  It follows from equations (7.8) and (7.9) that

$$C'B[I_m\ O_{m\times r}]^{\mathsf T} = O_{r\times m}$$

and

$$C'BC^{\mathsf T} = I_r,$$

where $O_{m\times r}$ denotes the $m\times r$ zero matrix and $I_m$ denotes the $m\times m$ identity matrix. The properties of the basis $\{\psi_j : j\in\mathbb{Z}_q\}$ and the functionals $\{\delta_{t_j} : j\in\mathbb{Z}_q\}$ described above in equations (7.3) and (7.4) imply that

$$B = \begin{bmatrix} I_m & O_{m\times r} \\ \tilde B & I_r \end{bmatrix}$$

and

$$C'_1 + C'_2\tilde B = O_{r\times m},\qquad (C'_1 + C'_2\tilde B)C_1^{\mathsf T} + C'_2C_2^{\mathsf T} = I_r,$$

from which the result follows.
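Lemma 7.1 can be checked by hand in the simplest Haar-type case (an assumed example with $k = 1$, $m = 1$, $\mu = 2$ on $[0,1]$ and $G_0 = \{0\}$, so that $t_1 = 1/2$, $\psi_0 = 1$, $\psi_1 = \chi_{[1/2,1]}$, the vanishing-moment function is the Haar function $w_{10} = 1 - 2\chi_{[1/2,1]}$, and $\ell_{10} = (\delta_0 - \delta_{1/2})/2$):

```python
# Assumed minimal Haar example verifying the relations of Lemma 7.1.
import numpy as np

# B_{ij} = psi_j(t_i) at t = (0, 1/2); Btilde = psi_0(1/2).
B = np.array([[1.0, 0.0],
              [1.0, 1.0]])
Btilde = np.array([[1.0]])

C1, C2 = np.array([[1.0]]), np.array([[-2.0]])    # w_10 = psi_0 - 2 psi_1
C = np.hstack([C1, C2])

Cp1, Cp2 = np.array([[0.5]]), np.array([[-0.5]])  # ell_10 = (delta_0 - delta_{1/2})/2
Cp = np.hstack([Cp1, Cp2])

# Conditions (7.8)-(7.9) in the matrix form used in the proof.
assert np.allclose(Cp @ B @ np.array([[1.0], [0.0]]), 0.0)  # <ell_10, w_00> = 0
assert np.allclose(Cp @ B @ C.T, np.eye(1))                 # <ell_10, w_10> = 1
# The relations of Lemma 7.1.
assert np.allclose(Cp1, -Cp2 @ Btilde)
assert np.allclose(Cp2, np.linalg.inv(C2.T))
print("Lemma 7.1 verified for the Haar example")
```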

We next describe the construction of a basis for $W_i$, $i\in\mathbb{N}$. To this end, for $e = [e_j : j\in\mathbb{Z}_n]\in\mathbb{Z}_\mu^n$ we introduce the composition operator

$$T_e = T_{e_0}\circ\cdots\circ T_{e_{n-1}}.$$

For $i = 2, 3, \ldots, n$ we let

$$w_{ij} = T_ew_{1l},\quad j = \mu(e)r + l,\ e\in\mathbb{Z}_\mu^{i-1},\ l\in\mathbb{Z}_r, \tag{7.10}$$

and

$$W_i = \operatorname{span}\{w_{ij} : j\in\mathbb{Z}_{w(i)}\}.$$

Observe that the support of $w_{ij}$ is contained in $S_{ij} = \phi_e(\Omega)$, $j\in\mathbb{Z}_{w(i)}$.

To generate the multiscale collocation functionals, we introduce for any $e\in\mathbb{Z}_\mu$ a linear operator $L_e : X^*\to X^*$ defined by the equation

$$\langle L_e\ell, v\rangle = \langle \ell, v\circ\phi_e\rangle,\quad v\in X,\ \ell\in X^*.$$

Moreover, for $e = [e_j : j\in\mathbb{Z}_n]\in\mathbb{Z}_\mu^n$ we define the composition operator

$$L_e = L_{e_0}\circ\cdots\circ L_{e_{n-1}}.$$

Consequently, for any $e, e'\in\mathbb{Z}_\mu^i$, $w\in X$ and $\ell\in X^*$ we have that

$$\langle L_e\ell, T_{e'}w\rangle = \langle \ell, w\rangle\,\delta_{e,e'}. \tag{7.11}$$


In addition, for $i > 1$, $j = \mu(e)r + l$, $e\in\mathbb{Z}_\mu^{i-1}$, $l\in\mathbb{Z}_r$, we define

$$\ell_{ij} = L_e\ell_{1l} \tag{7.12}$$

and observe that

$$\langle \ell_{ij}, v\rangle = \langle \ell_{1l}, v\circ\phi_e\rangle = \sum_{s\in\mathbb{Z}_q}c'_{ls}\,v(\phi_e(t_s)).$$

Note that the "support" of $\ell_{ij}$ is also contained in $S_{ij}$.

7.1.2 Properties of basis functions and collocation functionals

We discuss the properties of the multiscale basis functions and their corresponding collocation functionals constructed in the last subsection. To this end, we partition the matrix $E_n$ into a block matrix

$$E_n = [E_{i'i} : i', i\in\mathbb{Z}_{n+1}],$$

where

$$E_{i'i} = [E_{i'j',ij} : j'\in\mathbb{Z}_{w(i')},\ j\in\mathbb{Z}_{w(i)}],\qquad E_{i'j',ij} = \langle \ell_{i'j'}, w_{ij}\rangle,$$

and in the next lemma we relate the norm of the matrix $E_{i'i}$ to that of $E_{1,i-i'+1}$.

Lemma 7.2  If $i', i\in\mathbb{N}$ with $i > i'$, then

$$\|E_{i'i}\|_\infty = \|E_{1,i-i'+1}\|_\infty. \tag{7.13}$$

Proof  From the definition of $\ell_{i'j'}$ and $w_{ij}$, for $(i', j'), (i, j)\in U_n$ with $i > i'$ we obtain that there exist $e\in\mathbb{Z}_\mu^{i-1}$, $e'\in\mathbb{Z}_\mu^{i'-1}$ and $l, l'\in\mathbb{Z}_r$ such that

$$\langle \ell_{i'j'}, w_{ij}\rangle = \langle L_{e'}\ell_{1l'}, T_ew_{1l}\rangle.$$

We introduce the vectors

$$e^1 = [e_j : j\in\mathbb{Z}_{i'-1}],\qquad e^2 = [e_j : j\in\mathbb{Z}_{i-1}\setminus\mathbb{Z}_{i'-1}]$$

and conclude from (7.11) that

$$\langle \ell_{i'j'}, w_{ij}\rangle = \langle \ell_{1l'}, T_{e^2}w_{1l}\rangle\,\delta_{e',e^1}.$$

Let $j_0 = \mu(e^2)r + l$ and obtain that

$$\langle \ell_{i'j'}, w_{ij}\rangle = \langle \ell_{1l'}, w_{i-i'+1,j_0}\rangle\,\delta_{e',e^1}.$$

Consequently, we have that

$$\sum_{j\in\mathbb{Z}_{w(i)}}|\langle \ell_{i'j'}, w_{ij}\rangle| = \sum_{j\in\mathbb{Z}_{w(i-i'+1)}}|\langle \ell_{1l'}, w_{i-i'+1,j}\rangle|,$$

which proves the lemma.


Lemma 7.3  The condition

$$\sum_{m\in\mathbb{Z}_{w(n)}}|\langle \ell_{n'm'}, w_{nm}\rangle| \le \gamma,\quad (n', m')\in U,\ n > n',$$

is satisfied with

$$\gamma = \max\{\|C_1\|_1,\ \|C'\|_\infty\|C_1\|_1\}.$$

Proof  By Lemma 7.2, it suffices to prove for $i\in\mathbb{N}$ that

$$\|E_{0i}\|_\infty \le \|C_1\|_1 \tag{7.14}$$

and

$$\|E_{1,i+1}\|_\infty \le \|C'\|_\infty\|C_1\|_1. \tag{7.15}$$

Recall the definition

$$E_{1,i+1} = [\langle \ell_{1l'}, w_{i+1,j}\rangle : l'\in\mathbb{Z}_r,\ j\in\mathbb{Z}_{w(i+1)}].$$

We need to decompose this matrix. This is done by using equation (7.7) to write

$$\ell_{1l'} = \sum_{s'\in\mathbb{Z}_q}c'_{l's'}\delta_{t_{s'}},\quad l'\in\mathbb{Z}_r.$$

Thus it follows from (7.5) and (7.10), for any $j\in\mathbb{Z}_{w(i+1)}$, that there exists a unique pair $e_j\in\mathbb{Z}_\mu^i$ and $l\in\mathbb{Z}_r$ such that

$$w_{i+1,j} = \sum_{s\in\mathbb{Z}_q}c_{ls}T_{e_j}\psi_s.$$

Since for any $s'\in\mathbb{Z}_q$, $e_j\in\mathbb{Z}_\mu^i$, $i\in\mathbb{N}$ and $s = m, \ldots, q-1$,

$$\langle \delta_{t_{s'}}, T_{e_j}\psi_s\rangle = 0,$$

we conclude for $l'\in\mathbb{Z}_r$, $j = \mu(e_j)r + l$, $e_j\in\mathbb{Z}_\mu^i$, $l\in\mathbb{Z}_r$, that

$$\langle \ell_{1l'}, w_{i+1,j}\rangle = \sum_{s'\in\mathbb{Z}_q}\sum_{s\in\mathbb{Z}_m}c'_{l's'}c_{ls}\langle \delta_{t_{s'}}, T_{e_j}\psi_s\rangle.$$

We write this equation in matrix form by introducing for each $e\in\mathbb{Z}_\mu^i$ the matrix

$$D_e = [\langle \delta_{t_{s'}}, T_e\psi_s\rangle : s'\in\mathbb{Z}_q,\ s\in\mathbb{Z}_m],$$

and from these matrices we build the matrix

$$D = \big[D_{e^0}\ D_{e^1}\ \cdots\ D_{e^{\mu^i-1}}\big].$$


This notation allows us to write

$$E_{1,i+1} = C'D\operatorname{diag}(C_1^{\mathsf T}, \ldots, C_1^{\mathsf T}),$$

where the right-most matrix is a block diagonal matrix with $\mu^i$ identical blocks $C_1^{\mathsf T}$. This formula allows us to estimate the norm of the matrix $E_{1,i+1}$. Because the set $G_0$ is refinable relative to the contractive affine mappings $\Phi$, we may assume that for any $s'\in\mathbb{Z}_q$ there exist unique $e''\in\mathbb{Z}_\mu^i$ and $s''\in\mathbb{Z}_m$ such that $t_{s'} = \phi_{e''}(t_{s''})$. Thus

$$\langle \delta_{t_{s'}}, T_e\psi_s\rangle = \langle L_{e''}\delta_{t_{s''}}, T_e\psi_s\rangle = \langle \delta_{t_{s''}}, \psi_s\rangle\,\delta_{e''e} = \delta_{ss''}\delta_{e''e},$$

which implies that $\|D\|_\infty = 1$. Consequently, we conclude inequality (7.15). Similarly, inequality (7.14) follows from

$$E_{0i} = [\langle \ell_{0l'}, w_{ij}\rangle : l'\in\mathbb{Z}_m,\ j\in\mathbb{Z}_{w(i)}].$$

This completes the proof of the lemma.

We next show that the pair $(\mathcal{W}, \mathcal{L})$ of basis functions and collocation functionals constructed in the last subsection has some important properties. To this end, we let $P_n$ be the projection from $X$ onto $X_n$ defined by the requirement that

$$\langle \ell_{ij}, P_nx\rangle = \langle \ell_{ij}, x\rangle,\quad (i, j)\in U_n.$$

Proposition 7.4  The following properties hold:

(I) There exist positive integers $\rho$ and $h$ such that for every $n > h$ and every $m\in\mathbb{Z}_{w(n)}$, written in the form $m = j\rho + s$ where $s\in\mathbb{Z}_\rho$ and $j\in\mathbb{N}_0$,

$$w_{nm}(x) = 0,\quad x\in\Omega\setminus\Omega_{n-h,j}.$$

(II) For any $n, n'\in\mathbb{N}_0$,

$$\langle \ell_{n'm'}, w_{nm}\rangle = \delta_{nn'}\delta_{mm'},\quad (n, m), (n', m')\in U,\ n\le n',$$

and there exists a positive constant $\gamma$ for which

$$\sum_{m\in\mathbb{Z}_{w(n)}}|\langle \ell_{n'm'}, w_{nm}\rangle| \le \gamma,\quad (n', m')\in U,\ n > n'.$$

(III) There exists a positive integer $k$ such that for all $p\in\pi_k$, where $\pi_k$ denotes the space of polynomials of total degree less than $k$,

$$\langle \ell_{nm}, p\rangle = 0,\quad (w_{nm}, p) = 0,\quad (n, m)\in U.$$

(IV) There exists a positive constant $\theta_0$ such that for all $(n, m)\in U$,

$$\|\ell_{nm}\| + \|w_{nm}\|_\infty \le \theta_0.$$


(V) There exists a positive integer $\mu > 1$ such that

$$\dim X_n = O(\mu^n),\quad \dim W_n = O(\mu^n),\quad d_n = O(\mu^{-n/d}),\quad n\to\infty.$$

(VI) The operators $P_n$ are well defined and converge pointwise to the identity operator $I$ in $L^\infty(\Omega)$ as $n\to\infty$; in other words, for any $x\in L^\infty(\Omega)$,

$$\lim_{n\to\infty}\|P_nx - x\|_\infty = 0.$$

(VII) There exists a positive constant $c$ such that for $u\in W^{k,\infty}(\Omega)$,

$$\operatorname{dist}(u, X_n) \le c\,\mu^{-kn/d}\|u\|_{k,\infty}.$$

Proof  Property (I) is satisfied because for $(i, j)\in U$ with $i > 1$ the support of $w_{ij}$ is contained in $S_{ij} = \phi_e(\Omega) = \Omega_{i-1,e}$, where $j = \mu(e)r + l$, $l\in\mathbb{Z}_r$, $e\in\mathbb{Z}_\mu^{i-1}$.

We now prove that the pair $(\mathcal{W}, \mathcal{L})$ satisfies property (II). For $(i, j)\in U$ there exists a unique pair of $e\in\mathbb{Z}_\mu^{i-1}$ and $l\in\mathbb{Z}_r$ such that $j = \mu(e)r + l$ and $w_{ij} = T_ew_{1l}$. Likewise, for $(i', j')\in U$ there exists a unique pair of $e'\in\mathbb{Z}_\mu^{i'-1}$ and $l'\in\mathbb{Z}_r$ such that $j' = \mu(e')r + l'$ and $\ell_{i'j'} = L_{e'}\ell_{1l'}$. When $i = i'$, it follows from (7.11) and (7.9) that

$$\langle \ell_{i'j'}, w_{ij}\rangle = \langle L_{e'}\ell_{1l'}, T_ew_{1l}\rangle = \langle \ell_{1l'}, w_{1l}\rangle\,\delta_{e'e} = \delta_{l'l}\delta_{e'e} = \delta_{j'j}.$$

When $i < i'$, let $e'^1 = [e'_j : j\in\mathbb{Z}_{i-1}]$ and $e'^2 = [e'_j : j\in\mathbb{Z}_{i'-1}\setminus\mathbb{Z}_{i-1}]$. Then

$$\langle \ell_{i'j'}, w_{ij}\rangle = \langle L_{e'^2}\ell_{1l'}, w_{1l}\rangle\,\delta_{e'^1e} = \langle \ell_{1l'}, w_{1l}\circ\phi_{e'^2}\rangle\,\delta_{e'^1e}.$$

Since $\phi_{e'^2} : \Omega\to\phi_{e'^2}(\Omega)$ is an affine mapping, we conclude that $w_{1l}\circ\phi_{e'^2}$ is a polynomial of total degree $\le k-1$ in $X_0$. By using (7.8) we have that

$$\langle \ell_{i'j'}, w_{ij}\rangle = 0,\quad (i, j), (i', j')\in U,\ i < i'.$$

When $i > i'$, Lemma 7.3 ensures that the second equation of (II) is satisfied. This proves property (II).

Next we verify that property (III) is satisfied. Again it follows from (7.8) that

$$\langle \ell_{i'j'}, \psi_j\rangle = \langle \ell_{1l'}, \psi_j\circ\phi_{e'}\rangle = 0,\quad j\in\mathbb{Z}_m.$$

This proves the first equation of property (III). To prove the second equation, we consider $T_e$ as an operator from $L^2(\Omega)$ to $L^2(\Omega)$ and denote by $T_e^*$ the adjoint of $T_e$, which is defined by

$$(T_ex, y) = (x, T_e^*y).$$


It can be shown that for $y\in L^2(\Omega)$,

$$T_e^*y = J_{\phi_e}\,y\circ\phi_e,$$

where $J_{\phi_e}$ is the Jacobian of the mapping $\phi_e$. Therefore we have that

$$(w_{ij}, \psi_{j'}) = (T_ew_{1l}, \psi_{j'}) = (w_{1l}, T_e^*\psi_{j'}) = 0.$$

The last equality holds because $T_e^*\psi_{j'}$ is a polynomial of total degree $\le k-1$ and $w_{1l}$ satisfies condition (7.6).

From (7.12), (7.7), (7.10) and (7.5), we have for $(i, j)\in U$, $j = \mu(e)r + l$, that

$$|\langle \ell_{ij}, v\rangle| = |\langle \ell_{1l}, v\circ\phi_e\rangle| \le \|C'\|_\infty\|v\|_\infty$$

and

$$\|w_{ij}\|_\infty \le \|w_{1l}(\phi_e^{-1}(\cdot))\,\chi_{\phi_e(\Omega)}\|_\infty \le \|C\|_\infty\max_{j\in\mathbb{Z}_q}\|\psi_j\|_\infty,$$

which confirm property (IV).

By our construction it is the case that

$$\dim X_n = m\mu^n,\qquad \dim W_n = m(\mu-1)\mu^{n-1}.$$

These equations with (7.2) imply that property (V) is satisfied.

It follows from the first equation of (II) that $P_n$ is well defined. The pointwise convergence condition (VI) of the interpolating projections $P_n$ follows from a result of [21]. Finally, property (VII) holds since the $X_n$ are spaces of piecewise polynomials of total degree $\le k-1$.

Proposition 7.5  For any $k, d\in\mathbb{N}$ there exists an integer $\mu > 1$ such that the following property holds:

(VIII) The constant $\gamma$ in property (II) satisfies the condition

$$(1+\gamma)\mu^{-k/d} < 1.$$

Proof  We must show that there exists an integer $\mu > 1$ such that

$$1 + \gamma \le \mu^{k/d},$$

where $\gamma$ is defined in Lemma 7.3. This will be done by proving that $\gamma$ is bounded from above independently of $\mu$. For this purpose we consider the matrices

$$H_1 = [(\psi_i, \psi_j) : i\in\mathbb{Z}_m,\ j\in\mathbb{Z}_m],\qquad H_2 = [(\psi_i, \psi_{m+j}) : i\in\mathbb{Z}_m,\ j\in\mathbb{Z}_r],\qquad H = [H_1\ H_2].$$


Therefore, from equation (7.6) it follows that

$$CH^{\mathsf T} = C_1H_1^{\mathsf T} + C_2H_2^{\mathsf T} = 0,$$

where $C_2$ is an arbitrary $r\times r$ nonsingular matrix. We choose $C_2 = I_r$, from which we have that

$$C_1 = -H_2^{\mathsf T}(H_1^{\mathsf T})^{-1}. \tag{7.16}$$

Moreover, from Lemma 7.1 we have that

$$C' = [-\tilde B\ \ I_r],$$

and thus

$$\|C'\|_\infty = \|\tilde B\|_\infty + 1. \tag{7.17}$$

For $j\in\mathbb{Z}_m$ the functions $\psi_j$ are polynomials and therefore continuous. Thus there exists a positive constant $\rho$ such that

$$\max\{\|\psi_j\|_\infty : j\in\mathbb{Z}_m\} \le \rho.$$

Hence, recalling the definition of the matrix $\tilde B$ and equation (7.17), we have that

$$\|C'\|_\infty = 1 + \max_{i\in\mathbb{Z}_r}\sum_{j\in\mathbb{Z}_m}|\psi_j(t_{m+i})| \le 1 + m\max_{j\in\mathbb{Z}_m}\|\psi_j\|_\infty \le 1 + m\rho.$$

Moreover, we have by equation (7.16) that

$$\|C_1\|_1 = \|H_1^{-1}H_2\|_\infty \le \|H_1^{-1}\|_\infty\|H_2\|_\infty.$$

Since $\|H_1^{-1}\|_\infty$ is independent of $\mu$, it remains to estimate $\|H_2\|_\infty$ from above independently of $\mu$. To this end, we recall for $j\in\mathbb{Z}_r$ that

$$\psi_{m+j}(t) = \psi_l(\phi_e^{-1}(t))\,\chi_{\phi_e(\Omega)}(t),\quad t\in\Omega,$$

for some $l\in\mathbb{Z}_m$ and $e\in\mathbb{Z}_\mu$. Consequently, from (7.2) we conclude that

$$|(\psi_i, \psi_{m+j})| \le \int_{\phi_e(\Omega)}\big|\psi_i(t)\psi_l(\phi_e^{-1}(t))\big|\,dt \le \rho^2\operatorname{meas}(\phi_e(\Omega)) \le \frac{\rho^2}{\mu}.$$

Noting that $r = (\mu-1)m$, we obtain the desired estimate

$$\|H_2\|_\infty = \max_{i\in\mathbb{Z}_m}\sum_{j\in\mathbb{Z}_r}|(\psi_i, \psi_{m+j})| \le \frac{\rho^2}{\mu}(\mu-1)m \le \rho^2m,$$

thereby proving the result.
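The refinability requirement $G_0 \subseteq \Phi(G_0)$ behind such constructions can be checked mechanically. The sketch below (an illustration; the specific point sets are stated here as assumptions) verifies that several dyadic/triadic 1-D point sets are invariant under $t\mapsto\mu t \bmod 1$, which is exactly refinability under $\phi_e(t) = (t+e)/\mu$, and that each point of a 2-D set on the triangle is the image of another under one of four affine maps:

```python
# Refinability checks (assumed point sets, exact rational arithmetic).
from fractions import Fraction as F

# 1-D: G0 ⊆ Phi(G0) under phi_e(t) = (t+e)/mu iff mu*t mod 1 stays in G0
# (no point below is a multiple of 1/mu, so the endpoint case never occurs).
cases = {
    "k=2, mu=2": (2, [F(i + 1, 3) for i in range(2)]),   # {1/3, 2/3}
    "k=3, mu=2": (2, [F(2 ** i, 7) for i in range(3)]),  # {1/7, 2/7, 4/7}
    "k=3, mu=3": (3, [F(i + 1, 4) for i in range(3)]),   # {1/4, 1/2, 3/4}
    "k=4, mu=2": (2, [F(i + 1, 5) for i in range(4)]),   # {1/5, 2/5, 3/5, 4/5}
}
for name, (mu, G0) in cases.items():
    assert all((mu * t) % 1 in G0 for t in G0), name

# 2-D, on the triangle with vertices (0,0), (1,0), (0,1): each t_i must be
# phi_e(t_j) for some map phi_e and some t_j.
maps = [lambda x, y: (x / 2, y / 2),
        lambda x, y: ((x + 1) / 2, y / 2),
        lambda x, y: (x / 2, (y + 1) / 2),
        lambda x, y: ((1 - x) / 2, (1 - y) / 2)]
T0 = [(F(1, 7), F(4, 7)), (F(2, 7), F(1, 7)), (F(4, 7), F(2, 7))]
image = {m(*t) for m in maps for t in T0}
assert all(t in image for t in T0)
print("all listed point sets are refinable")
```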


We next examine property (VIII) in several cases of practical importance. We consider the cases when $d = 1$ and $\Omega = [0, 1]$, as well as $d = 2$ and $\Omega = \triangle$, where $\triangle$ is the triangle with vertices $(0,0)$, $(1,0)$ and $(0,1)$. When $d = 1$ and $\Omega = [0,1]$, property (VIII) is satisfied for the following choices:

(1) $k = 2$, $\mu = 2$,
$$\phi_e(t) = (t+e)/2,\quad t\in\Omega,\ e = 0, 1,$$
and $t_i = (i+1)/3$ for $i = 0, 1$;

(2) $k = 3$, $\mu = 2$,
$$\phi_e(t) = (t+e)/2,\quad t\in\Omega,\ e = 0, 1,$$
and $t_i = 2^i/7$ for $i = 0, 1, 2$;

(3) $k = 3$, $\mu = 3$,
$$\phi_e(t) = (t+e)/3,\quad t\in\Omega,\ e = 0, 1, 2,$$
and $t_i = (i+1)/4$ for $i = 0, 1, 2$;

(4) $k = 4$, $\mu = 2$,
$$\phi_e(t) = (t+e)/2,\quad t\in\Omega,\ e = 0, 1,$$
and $t_i = (i+1)/5$ for $i = 0, 1, 2, 3$.

In the other case, (VIII) is also satisfied when $k = 2$, $\mu = 4$, with, for $(x, y)\in\triangle$,

$$\phi_0(x, y) = (x/2,\ y/2),\qquad \phi_1(x, y) = ((x+1)/2,\ y/2),$$
$$\phi_2(x, y) = (x/2,\ (y+1)/2),\qquad \phi_3(x, y) = ((1-x)/2,\ (1-y)/2),$$

and $t_0 = (1/7, 4/7)$, $t_1 = (2/7, 1/7)$, $t_2 = (4/7, 2/7)$.

Finally, we turn our attention to another property:

(IX) There exist positive constants $\theta_1$ and $\theta_2$ such that for all $n\in\mathbb{N}_0$ and $v\in X_n$ having the form $v = \sum_{(i,j)\in U_n}v_{ij}w_{ij}$,

$$\theta_1\|\mathbf{v}\|_\infty \le \|v\|_\infty \le \theta_2(n+1)\|E_n\mathbf{v}\|_\infty,$$

where $\mathbf{v} = [v_{ij} : (i, j)\in U_n]$.

To show this, we consider a sequence of functions $\zeta_{ij}$, $(i, j)\in U$, bi-orthogonal to the linear functionals $\ell_{ij}$, $(i, j)\in U$, and having the property that for all $i\in\mathbb{N}_0$,

$$\sup_{t\in\Omega}\sum_{j\in\mathbb{Z}_{w(i)}}|\zeta_{ij}(t)| \le \theta_2. \tag{7.18}$$


Let

$$\zeta_{0j} = w_{0j},\quad j\in\mathbb{Z}_m,$$

and observe that

$$\langle \ell_{0j}, \zeta_{0j'}\rangle = \delta_{jj'},\quad j, j'\in\mathbb{Z}_m.$$

For each $j\in\mathbb{Z}_r$ we find a vector $c'' = [c''_{js} : s\in\mathbb{Z}_q]$ such that the function

$$\zeta_{1j} = \sum_{s\in\mathbb{Z}_q}c''_{js}\psi_s$$

satisfies the system of linear equations

$$\langle \ell_{0j'}, \zeta_{1j}\rangle = 0,\quad j'\in\mathbb{Z}_m, \tag{7.19}$$

and

$$\langle \ell_{1j'}, \zeta_{1j}\rangle = \delta_{jj'},\quad j'\in\mathbb{Z}_r. \tag{7.20}$$

Let us confirm that $c''$ exists and is unique. The coefficient matrix for equations (7.19) and (7.20) is

$$\tilde A = [\langle \ell_{i'j'}, \psi_j\rangle : j\in\mathbb{Z}_q,\ (i', j')\in U_1]. \tag{7.21}$$

Because $\{\psi_j : j\in\mathbb{Z}_q\}$ is a basis for the space $X_1$, we conclude that the matrix $\tilde A$ is nonsingular, since $A$ is nonsingular. Thus there exists a unique solution $c''$ of equations (7.19) and (7.20).

For $i > 1$, $j = \mu(e)r + l$, $e\in\mathbb{Z}_\mu^{i-1}$, $l\in\mathbb{Z}_r$, we define the functions

$$\zeta_{ij} = T_e\zeta_{1l}.$$

These functions will be used in the proof of the next result.

Proposition 7.6  The pair $(\mathcal{W}, \mathcal{L})$ has property (IX).

Proof  It suffices to verify that the functionals $\ell_{ij}$, $(i, j)\in U$, and the functions $\zeta_{ij}$, $(i, j)\in U$, are bi-orthogonal, that is, that they satisfy the condition

$$\langle \ell_{i'j'}, \zeta_{ij}\rangle = \delta_{ii'}\delta_{jj'},\quad (i, j), (i', j')\in U, \tag{7.22}$$

and that, in addition, there exists a positive constant $\theta_2$ such that for any $i\in\mathbb{N}_0$ condition (7.18) is satisfied.

The proof of (7.22) for the case $i\le i'$ is similar to that for (II) in Proposition 7.4. Hence we only present the proof for the case $i' < i$. In this case we have that

$$\langle \ell_{i'j'}, \zeta_{ij}\rangle = \langle L_{e'}\ell_{1l'}, T_e\zeta_{1l}\rangle = \langle \ell_{1l'}, T_{e^2}\zeta_{1l}\rangle\,\delta_{e',e^1},$$


where $j' = \mu(e')r + l'$, $j = \mu(e)r + l$, $e^1 = [e_j : j\in\mathbb{Z}_{i'-1}]$ and $e^2 = [e_j : j\in\mathbb{Z}_{i-1}\setminus\mathbb{Z}_{i'-1}]$. From this it follows that $\langle \ell_{i'j'}, \zeta_{ij}\rangle = 0$ except for $e' = e^1$, in which case

$$\langle \ell_{i'j'}, \zeta_{ij}\rangle = \langle \ell_{1l'},\ \zeta_{1l}\circ\phi_{e^2}^{-1}\,\chi_{\phi_{e^2}(\Omega)}\rangle. \tag{7.23}$$

Since $G_0$ is a refinable set, we have that $\phi_e^{-1}(t)\in G_0$ when $t\in G_1\cap\phi_e(\Omega)$, $e\in\mathbb{Z}_\mu$, and thus $\phi_{e^2}^{-1}(t_s)\in G_0$ when $t_s\in\phi_{e^2}(\Omega)$, $s\in\mathbb{Z}_q$. This observation with (7.19) yields the equation

$$\zeta_{1l}(\phi_{e^2}^{-1}(t_s)) = \big\langle \delta_{\phi_{e^2}^{-1}(t_s)}, \zeta_{1l}\big\rangle = 0 \tag{7.24}$$

whenever $t_s\in\phi_{e^2}(\Omega)$, $s\in\mathbb{Z}_q$. We appeal to (7.23) and (7.24) to conclude that $\langle \ell_{i'j'}, \zeta_{ij}\rangle = 0$.

Next we show that condition (7.18) is satisfied. Without loss of generality we consider only the case when $i\ge 1$. In this case the definition of $\zeta_{ij}$ guarantees

$$\sup_{t\in\Omega}\sum_{j\in\mathbb{Z}_{w(i)}}|\zeta_{ij}(t)| = \sup_{t\in\Omega}\sum_{e\in\mathbb{Z}_\mu^{i-1}}\sum_{l\in\mathbb{Z}_r}|T_e\zeta_{1l}(t)| = \sup_{t\in\Omega}\sum_{e\in\mathbb{Z}_\mu^{i-1}}\sum_{l\in\mathbb{Z}_r}\big|\zeta_{1l}(\phi_e^{-1}(t))\,\chi_{\phi_e(\Omega)}(t)\big| \le \sum_{l\in\mathbb{Z}_r}\|\zeta_{1l}\|_\infty,$$

and therefore (7.18) holds with $\theta_2 = \sum_{l\in\mathbb{Z}_r}\|\zeta_{1l}\|_\infty$.

Finally, we verify the first inequality in property (IX). To this end, we note that for $v = \sum_{(i,j)\in U_n}v_{ij}w_{ij}$ and $\mathbf{v} = [v_{ij} : (i, j)\in U_n]$ there exists $(i_0, j_0)\in U_n$, with $j_0 = \mu(e_0)r + l_0$, $e_0\in\mathbb{Z}_\mu^{i_0-1}$, $l_0\in\mathbb{Z}_r$, such that

$$\|\mathbf{v}\|_\infty = |v_{i_0j_0}|. \tag{7.25}$$

For $l\in\mathbb{Z}_r$ we denote $v_l = v_{i_0j}$ and $w_l = w_{i_0j}$, where $j = \mu(e_0)r + l$ and $e_0\in\mathbb{Z}_\mu^{i_0-1}$, and observe that

$$|v_{i_0j_0}| \le \Big(\sum_{l\in\mathbb{Z}_r}|v_l|^2\Big)^{1/2}. \tag{7.26}$$

Recalling that $w_{i_0j} = T_{e_0}w_{1l}$, $l\in\mathbb{Z}_r$, and that the mappings $\phi_e$, $e\in\mathbb{Z}_\mu$, are affine, we conclude that

$$(w_{l'}, w_l) = J_{\phi_{e_0}}(w_{1l'}, w_{1l}), \tag{7.27}$$


where $J_{\phi_{e_0}}$ denotes the Jacobian of the mapping $\phi_{e_0}$. We introduce the $r\times r$ matrix

$$W = [(w_{1l'}, w_{1l}) : l', l\in\mathbb{Z}_r]$$

and note that it is the Gram matrix of the basis $\{w_{1l} : l\in\mathbb{Z}_r\}$, and thus it is positive definite. It follows that there exists a positive constant $c_0$ such that, for $\tilde v = \sum_{l\in\mathbb{Z}_r}v_lw_l$ and $\tilde{\mathbf{v}} = [v_l : l\in\mathbb{Z}_r]$,

$$c_0\sum_{l\in\mathbb{Z}_r}|v_l|^2 \le \tilde{\mathbf{v}}^{\mathsf T}W\tilde{\mathbf{v}}. \tag{7.28}$$

By formula (7.27) we have that

$$\|\tilde v\|_2^2 = (\tilde v, \tilde v) = J_{\phi_{e_0}}\tilde{\mathbf{v}}^{\mathsf T}W\tilde{\mathbf{v}}.$$

Combining this equation with (7.28) yields

$$\sum_{l\in\mathbb{Z}_r}|v_l|^2 \le \frac{1}{c_0J_{\phi_{e_0}}}\|\tilde v\|_2^2. \tag{7.29}$$

Since the basis $\{w_{ij} : (i, j)\in U_n\}$ constructed in this section has property (III), we obtain that

$$\|\tilde v\|_2^2 = \int_{\phi_{e_0}(\Omega)}\tilde v(t)v(t)\,dt \le J_{\phi_{e_0}}\|\tilde v\|_\infty\|v\|_\infty \le J_{\phi_{e_0}}|v_{i_0j_0}|\sum_{l\in\mathbb{Z}_r}\|w_l\|_\infty\,\|v\|_\infty.$$

Using property (IV), we conclude that there exists a positive constant $\tilde c_0$ such that

$$\sum_{l\in\mathbb{Z}_r}\|w_l\|_\infty \le \tilde c_0,$$

which implies with the last inequality that

$$\|\tilde v\|_2^2 \le \tilde c_0J_{\phi_{e_0}}|v_{i_0j_0}|\,\|v\|_\infty. \tag{7.30}$$

Combining (7.25), (7.26), (7.29) and (7.30) yields that there exists a positive constant $c$ such that

$$\|\mathbf{v}\|_\infty \le c\,\|\mathbf{v}\|_\infty^{1/2}\|v\|_\infty^{1/2},$$

and thus

$$\|\mathbf{v}\|_\infty \le c^2\|v\|_\infty.$$

We have proved the first inequality of (IX) with $\theta_1 = 1/c^2$.


7.2 Multiscale collocation methods

In the last section we presented a concrete construction of multiscale bases on an invariant set in $\mathbb{R}^d$ and the multiscale collocation functionals needed for multiscale collocation methods. In this section we develop a general collocation scheme for solving Fredholm integral equations of the second kind, using multiscale basis functions and multiscale collocation functionals having the properties described in the last section.

7.2.1 The collocation scheme

For a set $A\subset\mathbb{R}^d$, $d(A)$ represents the diameter of $A$, that is,

$$d(A) = \sup\{|x-y| : x, y\in A\}, \tag{7.31}$$

where $|\cdot|$ denotes the Euclidean norm on the space $\mathbb{R}^d$. We use $\alpha = [\alpha_i\in\mathbb{N}_0 : i\in\mathbb{Z}_d]$ to denote a lattice point in $\mathbb{N}_0^d$ and, as is usually the case, we set $|\alpha| = \sum_{i\in\mathbb{Z}_d}\alpha_i$.

As usual, for a positive integer $k$, $W^{k,\infty}(\Omega)$ will denote the set of all functions $v$ on $\Omega$ such that $D^\alpha v\in X$ for $|\alpha|\le k$, where we use the standard multi-index notation for derivatives,

$$D^\alpha v(x) = \frac{\partial^{|\alpha|}v(x)}{\partial x_0^{\alpha_0}\cdots\partial x_{d-1}^{\alpha_{d-1}}},\quad x\in\mathbb{R}^d,$$

with the norm

$$\|v\|_{k,\infty} = \max\{\|D^\alpha v\|_\infty : |\alpha|\le k\}$$

on $W^{k,\infty}(\Omega)$. For a star-shaped set $\Omega$ it is easy to estimate the distance of a function $v\in W^{k,\infty}(\Omega)$ from the space $\pi_k$. Specifically, there is a positive constant $c$ such that

$$\operatorname{dist}(v, \pi_k) \le c\,(d(\Omega))^k\|v\|_{k,\infty}. \tag{7.32}$$

Throughout the following sections, $c$ will always stand for a generic constant whose value may change with the context; its meaning will be clear from the order of the qualifiers used to describe its role in our estimates.
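A quick numerical check of (7.32) (an assumed illustration): on intervals of shrinking diameter $d$, the distance of a smooth $v$ from $\pi_k$ should scale like $d^k$; a least-squares polynomial fit serves as a near-best proxy for the best approximation.

```python
# Numerical illustration (assumed example) of the scaling in (7.32):
# dist(v, pi_k) = O(d(Omega)^k) on intervals of diameter d.
import numpy as np

def dist_to_poly(v, a, b, k, samples=400):
    t = np.linspace(a, b, samples)
    coef = np.polyfit(t, v(t), k - 1)     # degree <= k-1, i.e. p in pi_k
    return np.max(np.abs(v(t) - np.polyval(coef, t)))

v = np.exp
k = 3
diam = [2.0 ** -j for j in range(6)]
errs = [dist_to_poly(v, 0.0, d, k) for d in diam]
# Halving the diameter should divide the error by roughly 2^k = 8.
ratios = [e1 / e0 for e0, e1 in zip(errs, errs[1:])]
assert all(0.06 < r < 0.25 for r in ratios)
print([round(r, 4) for r in ratios])
```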

There are several ingredients required in the development of fast collocation algorithms for solving the integral equation. First, we require a multiscale sequence $\{X_n : n\in\mathbb{N}_0\}$ of finite-dimensional subspaces of $X$ in which we make our approximation. These spaces are required to have the property that

$$X_n \subseteq X_{n+1},\quad n\in\mathbb{N}_0, \tag{7.33}$$


and

$$V \subseteq \overline{\bigcup_{n\in\mathbb{N}_0}X_n}. \tag{7.34}$$

For efficient computation relative to a scale of spaces, we express them as a direct sum of subspaces:

$$X_n = W_0\oplus W_1\oplus\cdots\oplus W_n. \tag{7.35}$$

These spaces serve as multiscale subspaces of $X$ and will be constructed as spaces of piecewise polynomial functions on $\Omega$.

We need a multiscale partition of the set $\Omega$. It consists of a family of partitions $\{\mathcal{P}_n : n\in\mathbb{N}_0\}$ of $\Omega$ such that, for each scale $n\in\mathbb{N}_0$, the partition $\mathcal{P}_n$ consists of a family of subsets $\{\Omega_{ni} : i\in\mathbb{Z}_{e(n)}\}$ of $\Omega$ with the properties that

$$\operatorname{meas}(\Omega_{ni}\cap\Omega_{ni'}) = 0,\quad i, i'\in\mathbb{Z}_{e(n)},\ i\neq i', \tag{7.36}$$

and

$$\bigcup_{i\in\mathbb{Z}_{e(n)}}\Omega_{ni} = \Omega. \tag{7.37}$$

At the appropriate time later we shall adjust the number $e(n)$ of elements and the maximum diameter of the cells in the $n$th partition to be commensurate with $\dim W_n$.

A construction of multiscale partitions for an invariant set has been described in Section 7.1.1. For unions of invariant sets, multiscale partitions can be constructed from multiscale partitions of the invariant sets which form the unions. For example, a polygonal domain in $\mathbb{R}^2$ is a union of triangles, which are invariant sets; hence multiscale partitions of each of the triangles constituting the polygon form a multiscale partition of the polygon.

The partitions $\{\mathcal{P}_n : n\in\mathbb{N}_0\}$ are used in two ways. First, we demand that there is a basis $\mathcal{W}_n = \{w_{nm} : m\in\mathbb{Z}_{w(n)}\}$ for the spaces

$$W_n = \operatorname{span}\mathcal{W}_n,\quad n\in\mathbb{N}_0, \tag{7.38}$$

having property (I) (stated in Proposition 7.4).

For $n > h$ we use the notation $S_{nm} = \Omega_{n-h,j}$, so that the support of the function $w_{nm}$ is contained in the set $S_{nm}$. Note that the supports of the basis functions at the $n$th level are not disjoint. However, for every $n > h$ and every function $w_{nm}$, there are at most $\rho$ other functions at level $n$ whose supports overlap the support of $w_{nm}$.


To define the collocation method, we need a set of linear functionals in $V^*$ given by

$$\mathcal{L}_n = \{\ell_{nm} : m\in\mathbb{Z}_{w(n)}\},\quad n\in\mathbb{N}_0.$$

The multiscale partitions $\{\mathcal{P}_n : n\in\mathbb{N}_0\}$ are also used to specify the supports of the linear functionals, through the requirement that the linear functional $\ell_{nm}$ be a finite sum of point evaluations,

$$\ell_{nm} = \sum_{s\in\Lambda_{n-h,j}}c_s\delta_s, \tag{7.39}$$

where the $c_s$ are constants and $\Lambda_{ni}$ is a finite subset of distinct points in $\Omega_{ni}$ whose cardinality is bounded independently of $n\in\mathbb{N}$ and $i\in\mathbb{Z}_{w(n)}$. We set $S_{nm} = \Omega_{n-h,j}$ and consider it as the "support" of the functional $\ell_{nm}$.

The linear functionals and the multiscale basis are tied together by the requirement that property (II) holds. We do not require the linear functionals and the multiscale basis functions to be bi-orthogonal. Instead, we require them to have a "semi-bi-orthogonality" property, imposed by the first equation of (II), with a controllable perturbation from bi-orthogonality, which is ensured by the second equation of (II). Specifically, the first equation means that a basis function vanishes when a collocation functional of a higher level is applied to it. We denote by $E$ the semi-infinite matrix with entries

$$E_{n'm',nm} = \langle \ell_{n'm'}, w_{nm}\rangle,\quad (n', m'), (n, m)\in U.$$

We note by the first equation of property (II) that the matrix $E$ can be viewed as a block upper triangular matrix whose diagonal blocks are identity matrices. Consequently, the infinite matrix $E$ has an inverse $E^{-1}$ of the same type; that is,

$$(E^{-1})_{n'm',nm} = \delta_{nn'}\delta_{mm'},\quad n\le n',\ m\in\mathbb{Z}_{w(n)},\ m'\in\mathbb{Z}_{w(n')}.$$
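The structure of $E$ is easy to see in a small case. The sketch below (an assumed Haar-type example continuing the $k = 1$, $\mu = 2$ construction; half-open cells and right-continuous point evaluation are illustrative conventions) assembles $E_n$ for $n = 2$ and exhibits the block upper triangular structure with identity diagonal blocks:

```python
# Assumed Haar example: assemble E_2 = [<ell_{i'j'}, w_{ij}>] and check its
# block upper triangular structure with unit diagonal.
import numpy as np

w00 = lambda t: 1.0
w10 = lambda t: 1.0 if t < 0.5 else -1.0
w20 = lambda t: w10(2 * t) if t < 0.5 else 0.0
w21 = lambda t: w10(2 * t - 1) if t >= 0.5 else 0.0
basis = [w00, w10, w20, w21]                    # levels 0, 1, 2, 2

# ell_10 = (delta_0 - delta_{1/2})/2; levels 2 via <L_e l, v> = <l, v o phi_e>.
functionals = [
    lambda v: v(0.0),                           # ell_00 = delta_0
    lambda v: 0.5 * (v(0.0) - v(0.5)),          # ell_10
    lambda v: 0.5 * (v(0.0) - v(0.25)),         # ell_20 = L_0 ell_10
    lambda v: 0.5 * (v(0.5) - v(0.75)),         # ell_21 = L_1 ell_10
]

E = np.array([[f(w) for w in basis] for f in functionals])
print(E)
# Zero below the diagonal, identity on the diagonal ...
assert np.allclose(np.tril(E, -1), 0.0)
assert np.allclose(np.diag(E), 1.0)
# ... hence invertible, and E^{-1} has the same structure.
assert np.allclose(np.tril(np.linalg.inv(E), -1), 0.0)
# The row sums of the off-diagonal blocks stay bounded (second part of (II)).
assert abs(E[1, 2]) + abs(E[1, 3]) <= 1.0
```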

To introduce the collocation method for solving integral equations of thesecond kind we suppose that K is a weakly singular kernel such that theoperator K Xrarr V defined by

(Ku)(s) =int

K(s t)u(t)dt s isin

is compact in X We consider Fredholm integral equations of the second kindin the form

uminusKu = f (740)

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637009Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 163132 subject to the Cambridge Core terms of

284 Multiscale collocation methods

where $f \in X$ is a given function and $u \in X$ is the unknown to be determined. When one is not an eigenvalue of $\mathcal{K}$, equation (7.40) has a unique solution in $X$. The collocation scheme for solving equation (7.40) seeks a vector $\mathbf{u}_n = [u_{ij} : (i,j) \in U_n]$, where $U_n$ is the set of lattice points in $\mathbb{R}^2$ defined as $\{(i,j) : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{Z}_{n+1}\}$, such that the function
\[
u_n = \sum_{(i,j)\in U_n} u_{ij} w_{ij}
\]
in $X_n$ has the property that
\[
\langle \ell_{i'j'}, u_n - \mathcal{K}u_n \rangle = \langle \ell_{i'j'}, f \rangle, \quad (i',j') \in U_n. \qquad (7.41)
\]

Equivalently, we obtain the linear system of equations
\[
(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n,
\]
where
\[
\mathbf{K}_n = [\langle \ell_{i'j'}, \mathcal{K}w_{ij}\rangle : (i',j'), (i,j) \in U_n], \quad
\mathbf{E}_n = [\langle \ell_{i'j'}, w_{ij}\rangle : (i',j'), (i,j) \in U_n]
\]
and
\[
\mathbf{f}_n = [\langle \ell_{ij}, f\rangle : (i,j) \in U_n].
\]
By definition we have that $(\mathbf{E}_n)_{i'j',ij} = E_{i'j',ij}$ for $(i',j'), (i,j) \in U_n$, and by (II) we see that
\[
(\mathbf{E}_n^{-1})_{i'j',ij} = (E^{-1})_{i'j',ij}, \quad (i',j'), (i,j) \in U_n. \qquad (7.42)
\]
Let us use condition (II) to estimate the inverse of the matrix $\mathbf{E}_n$. To this end we introduce a weighted norm on the vector $\mathbf{x} = [x_{ij} : (i,j) \in U_n]$. For any $i \in \mathbb{Z}_{n+1}$ we set $\mathbf{x}_i = [x_{ij} : j \in \mathbb{Z}_{w(i)}]$,
\[
\|\mathbf{x}_i\|_\infty = \max\{|x_{ij}| : j \in \mathbb{Z}_{w(i)}\},
\]
and whenever $\nu \in (0,1)$ we define
\[
\|\mathbf{x}\|_\nu = \max\{\|\mathbf{x}_i\|_\infty \nu^{-i} : i \in \mathbb{Z}_{n+1}\}.
\]
We also use the notation $\|\mathbf{x}\|_\infty = \max\{\|\mathbf{x}_i\|_\infty : i \in \mathbb{Z}_{n+1}\}$ for the maximum norm of the vector $\mathbf{x}$.
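For concreteness, the weighted norm can be computed directly from per-level coefficient blocks; a small Python sketch (with made-up level sizes and coefficients) follows:

```python
import numpy as np

def weighted_norm(x_levels, nu):
    """||x||_nu = max_i ||x_i||_inf * nu^(-i), where x_levels[i] holds the
    level-i coefficients [x_{ij} : j in Z_{w(i)}]."""
    return max(np.max(np.abs(xi)) * nu ** (-i) for i, xi in enumerate(x_levels))

# Hypothetical coefficients on levels 0, 1, 2.
x = [np.array([1.0]), np.array([0.5, -0.25]), np.array([0.125, 0.0625])]
print(weighted_norm(x, 0.5))   # levels contribute 1.0, 1.0, 0.5; the max is 1.0
```

With $\nu = 1$ the same routine returns the unweighted maximum norm $\|\mathbf{x}\|_\infty$.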

Lemma 7.7 If condition (II) holds, $0 < \nu < 1$ and $(1+\gamma)\nu < 1$, then for any integer $n \in \mathbb{N}_0$ and vector $\mathbf{x} \in \mathbb{R}^{s(n)}$,
\[
\|\mathbf{x}\|_\nu \le \frac{1-\nu}{1-(1+\gamma)\nu}\,\|\mathbf{E}_n\mathbf{x}\|_\nu.
\]


Proof Let $\mathbf{y} = \mathbf{E}_n\mathbf{x}$, so that
\[
y_{ij} = \sum_{(i',j')\in U_n} \langle \ell_{ij}, w_{i'j'}\rangle\, x_{i'j'}.
\]
In particular, for $i = n$ we have that $y_{nj} = x_{nj}$. For $0 \le l \le n-1$ we have from the first equation of (II) that
\[
x_{n-l-1,j} = y_{n-l-1,j} - \sum_{n-l \le i' \le n,\; j' \in \mathbb{Z}_{w(i')}} \langle \ell_{n-l-1,j}, w_{i'j'}\rangle\, x_{i'j'}, \quad j \in \mathbb{Z}_{w(n-l-1)}.
\]
Using the second equation of (II), we conclude that
\[
\|\mathbf{x}_{n-l-1}\|_\infty \le \|\mathbf{y}_{n-l-1}\|_\infty + \gamma \sum_{i=0}^{l} \|\mathbf{x}_{n-i}\|_\infty.
\]
By induction on $j$ it readily follows that
\[
\|\mathbf{x}_{n-j}\|_\infty \le \sum_{l=0}^{j-1} \gamma(1+\gamma)^l \|\mathbf{y}_{n-j+l+1}\|_\infty + \|\mathbf{y}_{n-j}\|_\infty.
\]
Thus we have that
\[
\|\mathbf{x}_{n-j}\|_\infty \nu^{-(n-j)} \le \gamma\nu \sum_{l=0}^{j-1} [(1+\gamma)\nu]^l \|\mathbf{y}_{n-j+l+1}\|_\infty \nu^{-(n-j+l+1)} + \|\mathbf{y}_{n-j}\|_\infty \nu^{-(n-j)}
\le \left[1 + \gamma\nu \sum_{l=0}^{j-1} [(1+\gamma)\nu]^l\right] \|\mathbf{E}_n\mathbf{x}\|_\nu.
\]
Since $(1+\gamma)\nu < 1$, summing the geometric series gives
\[
1 + \gamma\nu \sum_{l=0}^{j-1} [(1+\gamma)\nu]^l \le 1 + \frac{\gamma\nu}{1-(1+\gamma)\nu} = \frac{1-\nu}{1-(1+\gamma)\nu},
\]
from which the result is proved.

7.2.2 Estimates for matrices and a truncated scheme

In this subsection our goal is to obtain estimates for the entries of the matrix $\mathbf{K}_n$. This requires conditions on the supports of the basis functions for $W_n$ and vanishing moments for both the basis functions and the collocation functionals, which have been described in (III) and (IV). We also need regularity of the kernel $K$.


(X) For $s, t \in \Omega$, $s \ne t$, the kernel $K$ has continuous partial derivatives $D_s^\alpha D_t^\beta K(s,t)$ for $|\alpha| \le k$, $|\beta| \le k$. Moreover, there exist positive constants $\sigma$ and $\theta_3$ with $\sigma < d$ such that for $|\alpha| = |\beta| = k$,
\[
\left| D_s^\alpha D_t^\beta K(s,t) \right| \le \frac{\theta_3}{|s-t|^{\sigma+|\alpha|+|\beta|}}. \qquad (7.43)
\]

In the next lemma we present an estimate of the entries of the matrix $\mathbf{K}_n$. Such an estimate forms the basis for a truncation strategy. In the statement of the next lemma we use the quantities
\[
d_i = \max\{d(S_{ij}) : j \in \mathbb{Z}_{w(i)}\}, \quad i \in \mathbb{N}_0.
\]

Lemma 7.8 If (I), (III), (IV) and (X) hold and there is a constant $r > 1$ such that
\[
\mathrm{dist}(S_{ij}, S_{i'j'}) \ge r(d_i + d_{i'}), \qquad (7.44)
\]
then there exists a positive constant $c$ such that
\[
|K_{i'j',ij}| \le c\,(d_i d_{i'})^k \sum_{s \in \tilde{S}_{i'j'}} \int_{S_{ij}} \frac{1}{|s-t|^{2k+\sigma}}\,dt.
\]

Proof Let $s_0$, $t_0$ be centers of the sets $S_{i'j'}$ and $S_{ij}$, respectively. Using the Taylor theorem with remainder we write $K = K_1 + K_2 + K_3$, where $K_1(s,\cdot)$ and $K_2(\cdot,t)$ are polynomials of total degree $\le k-1$ in $t$ and $s$, respectively, and
\[
|K_3(s,t)| \le d_i^k d_{i'}^k\, v(s,t), \quad s \in S_{i'j'},\ t \in S_{ij},
\]
where
\[
v(s,t) = \sum_{|\alpha|=k} \sum_{|\beta|=k} \frac{|r_{\alpha\beta}(s,t)|}{\alpha!\,\beta!} \qquad (7.45)
\]
and
\[
r_{\alpha\beta}(s,t) = \int_0^1 \int_0^1 D_s^\alpha D_t^\beta K\bigl(s_0 + t_1(s-s_0),\, t_0 + t_2(t-t_0)\bigr)(1-t_1)^{k-1}(1-t_2)^{k-1}\,dt_1\,dt_2. \qquad (7.46)
\]
Applying the vanishing moment conditions yields the bound
\[
|K_{i'j',ij}| \le \|\ell_{i'j'}\|\,\|w_{ij}\|\, d_i^k d_{i'}^k \sum_{s \in \tilde{S}_{i'j'}} \int_{S_{ij}} |v(s,t)|\,dt. \qquad (7.47)
\]
It follows from the mean-value theorem and condition (X) that
\[
|r_{\alpha\beta}(s,t)| = k^{-2} \left| D_s^\alpha D_t^\beta K(s',t') \right| \le \frac{\theta_3}{k^2 |s'-t'|^{2k+\sigma}}
\]
holds for some $s' \in S_{i'j'}$, $t' \in S_{ij}$. For $s \in S_{i'j'}$, $t \in S_{ij}$, the assumption (7.44) yields
\[
|s'-t'| \ge |s-t| - d_i - d_{i'} \ge (1 - r^{-1})|s-t|,
\]
from which it follows that
\[
|r_{\alpha\beta}(s,t)| \le \frac{c_1}{|s-t|^{2k+\sigma}},
\]
where
\[
c_1 = \frac{\theta_3}{k^2 (1-r^{-1})^{2k+\sigma}}.
\]
Substituting the above inequality into (7.47) completes the proof, with
\[
c = \frac{\theta_3\,\theta_0^2\, e^{2d}}{k^2 (1-r^{-1})^{2k+\sigma}}.
\]

To present the truncation strategy, we partition the matrix $\mathbf{K}_n$ into a block matrix
\[
\mathbf{K}_n = [\mathbf{K}_{i'i} : i', i \in \mathbb{Z}_{n+1}]
\]
with
\[
\mathbf{K}_{i'i} = [K_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}].
\]
We truncate the block $\mathbf{K}_{i'i}$ by using a given positive number $\varepsilon$ to form a matrix
\[
\mathbf{K}^{(\varepsilon)}_{i'i} = [K^{(\varepsilon)}_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}]
\]
with
\[
K^{(\varepsilon)}_{i'j',ij} =
\begin{cases}
K_{i'j',ij}, & \mathrm{dist}(S_{i'j'}, S_{ij}) \le \varepsilon, \\
0, & \text{otherwise},
\end{cases}
\]
where $\varepsilon$ may depend on $i'$, $i$ and $n$. In the next lemma we use the estimate for the entries of $\mathbf{K}_n$ presented in Lemma 7.8 to obtain an estimate for the discrepancy between the blocks $\mathbf{K}^{(\varepsilon)}_{i'i}$ and $\mathbf{K}_{i'i}$.
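The compression step itself is mechanical once support distances are available. The following Python sketch (supports modeled as one-dimensional intervals, an assumption made purely for illustration) zeroes the far-field entries of one block:

```python
import numpy as np

def truncate_block(K_block, supp_out, supp_in, eps):
    """Return K^(eps): keep K_block[j', j] only if the supports
    supp_out[j'] = (a', b') and supp_in[j] = (a, b) are within distance eps."""
    Kt = np.zeros_like(K_block)
    for jp, (ap, bp) in enumerate(supp_out):
        for j, (a, b) in enumerate(supp_in):
            dist = max(a - bp, ap - b, 0.0)   # gap between the two intervals
            if dist <= eps:
                Kt[jp, j] = K_block[jp, j]
    return Kt

# Four abutting supports of length 1/4; only near-diagonal entries survive eps = 0.1.
supports = [(0.25 * j, 0.25 * (j + 1)) for j in range(4)]
K = np.ones((4, 4))
Kt = truncate_block(K, supports, supports, 0.1)
print(int(np.count_nonzero(Kt)))   # tridiagonal pattern: 10 nonzeros
```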

Lemma 7.9 If (I), (III), (IV) and (X) hold, then, given any constant $r > 1$ and $0 \le \sigma' < \min\{2k,\ d-\sigma\}$, there exists a positive constant $c$ such that, whenever $\varepsilon \ge r(d_i + d_{i'})$,
\[
\|\mathbf{K}_{i'i} - \mathbf{K}^{(\varepsilon)}_{i'i}\|_\infty \le c\,\varepsilon^{-\eta}(d_i d_{i'})^k,
\]
where $\eta = 2k - \sigma'$.


Proof We first note that
\[
\|\mathbf{K}_{i'i} - \mathbf{K}^{(\varepsilon)}_{i'i}\|_\infty = \max_{j' \in \mathbb{Z}_{w(i')}} \sum_{j \in Z_{i'j'}(\varepsilon)} |K_{i'j',ij}|,
\]
where
\[
Z_{i'j'}(\varepsilon) = \{j : j \in \mathbb{Z}_{w(i)},\ \mathrm{dist}(S_{ij}, S_{i'j'}) > \varepsilon\}.
\]
Therefore, by using Lemma 7.8, we have that
\[
\|\mathbf{K}_{i'i} - \mathbf{K}^{(\varepsilon)}_{i'i}\|_\infty \le c\,(d_i d_{i'})^k \max_{j' \in \mathbb{Z}_{w(i')}} \sum_{s \in \tilde{S}_{i'j'}} \sum_{j \in Z_{i'j'}(\varepsilon)} \int_{S_{ij}} \frac{1}{|s-t|^{2k+\sigma}}\,dt.
\]
Although the sets $S_{ij}$ are not disjoint, we can use property (I) to conclude that
\[
\sum_{j \in Z_{i'j'}(\varepsilon)} \int_{S_{ij}} \frac{1}{|s-t|^{2k+\sigma}}\,dt \le \rho\,\varepsilon^{-\eta} \int_\Omega \frac{1}{|s-t|^{\sigma+\sigma'}}\,dt.
\]
Since $\sigma + \sigma' < d$ and $\Omega$ is a compact set,
\[
\max_{s \in \Omega} \int_\Omega \frac{1}{|s-t|^{\sigma+\sigma'}}\,dt < \infty.
\]
We employ the above inequalities to obtain the desired estimate.
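The $\varepsilon^{-\eta}$ factor above comes from splitting off the far-field tail of the singular integral. In one dimension the effect can be checked directly (a numeric sanity check with illustrative exponents, not part of the proof):

```python
def tail(eps, beta):
    """Exact far-field integral: int_eps^1 t^(-beta) dt, for beta > 1."""
    return (eps ** (1.0 - beta) - 1.0) / (beta - 1.0)

# Halving eps multiplies the tail by about 2^(beta-1): the tail is O(eps^-(beta-1)).
beta, eps = 2.5, 1e-3
ratio = tail(eps / 2, beta) / tail(eps, beta)
print(round(ratio, 3))   # close to 2**1.5, i.e. about 2.828
```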

7.3 Analysis of the truncation scheme

In this section we discuss the truncation strategy for the collocation method proposed in the previous section. We analyze the order of convergence, stability and computational complexity of the truncation algorithm.

7.3.1 Stability and convergence

We introduce the operator from $X_n$ into itself defined by the equation $\mathcal{K}_n = \mathcal{P}_n\mathcal{K}|_{X_n}$ and note that its matrix representation relative to the basis $W_n$ is given by $\mathbf{E}_n^{-1}\mathbf{K}_n$. For each block $\mathbf{K}_{i'i}$, $i, i' \in \mathbb{Z}_{n+1}$, of $\mathbf{K}_n$ we shall specify later truncation parameters $\varepsilon^n_{i'i}$ and reassemble the blocks to form a truncation matrix
\[
\tilde{\mathbf{K}}_n = [\mathbf{K}^{(\varepsilon^n_{i'i})}_{i'i} : i', i \in \mathbb{Z}_{n+1}].
\]
Using this truncation matrix, we let $\tilde{\mathcal{K}}_n : X_n \to X_n$ be the linear operator from $X_n$ into itself having, relative to the basis $W_n$, the matrix representation $\mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n$.


Our goal is to provide an essential estimate for the difference of the two operators $\mathcal{K}_n$ and $\tilde{\mathcal{K}}_n$. To this end, for $v \in L^\infty(\Omega)$ we set
\[
\mathcal{P}_n v = \sum_{(i,j)\in U_n} v_{ij} w_{ij}.
\]

The quantities $v_{ij}$ are linear functionals of $v$. We estimate them in the next lemma.

Lemma 7.10 Suppose that conditions (I)–(V) and (VIII) hold. If $v \in W^{k,\infty}(\Omega)$, then there exists a positive constant $c$ such that
\[
|v_{ij}| \le c\,\mu^{-ki/d}\|v\|_{k,\infty}, \quad (i,j) \in U_n. \qquad (7.48)
\]

Proof For $v \in W^{k,\infty}(\Omega)$ we write
\[
\mathcal{P}_n v = \sum_{(i,j)\in U_n} v_{ij} w_{ij}
\]
and let $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$. By the definition of the projection $\mathcal{P}_n$ we have that
\[
\mathbf{E}_n\mathbf{v} = \left[\left\langle \ell_{ij}, \sum_{(i',j')\in U_n} v_{i'j'} w_{i'j'} \right\rangle : (i,j) \in U_n\right] = [\langle \ell_{ij}, v\rangle : (i,j) \in U_n].
\]
Meanwhile, using Lemma 7.7 with $\nu = \mu^{-k/d}$ and condition (VIII), we conclude that
\[
\|\mathbf{v}\|_{\mu^{-k/d}} \le c\,\|\mathbf{E}_n\mathbf{v}\|_{\mu^{-k/d}},
\]
where
\[
c = \frac{1-\mu^{-k/d}}{1-(1+\gamma)\mu^{-k/d}} > 0
\]
is a constant. Hence
\[
\|\mathbf{v}\|_{\mu^{-k/d}} \le c \max_{(i,j)\in U_n} \left|\mu^{ik/d}\langle \ell_{ij}, v\rangle\right|. \qquad (7.49)
\]
Moreover, recalling that the "support" of the functional $\ell_{ij}$ is the set $\tilde{S}_{ij} \subseteq S_{ij}$, we use the Taylor theorem with remainder on the set $S_{ij}$ for $v \in W^{k,\infty}(\Omega)$ and conditions (III)–(V) to conclude that there exists a positive constant $c$ such that
\[
|\langle \ell_{ij}, v\rangle| \le c\,d_i^k \|v\|_{k,\infty} \le c\,\mu^{-ki/d}\|v\|_{k,\infty}.
\]
Combining this inequality with (7.49), we obtain the estimate
\[
\|\mathbf{v}\|_{\mu^{-k/d}} \le c\,\|v\|_{k,\infty}.
\]
Again using the definition of the weighted norms, we have that
\[
\|\mathbf{v}_i\|_\infty \le c\,\mu^{-ki/d}\|v\|_{k,\infty},
\]
which proves the estimate of this lemma.

Lemma 7.10 ensures that, for a function $v \in W^{k,\infty}(\Omega)$, the coefficients of its expansion in the basis $W_n$ and the functionals $L_n$ decay at the rate $O(\mu^{-ik/d})$. This extends a well-known result for orthogonal multiscale bases to the multiscale interpolating piecewise polynomials constructed in this chapter.
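The decay rate can be observed with any functional having vanishing moments. As a stand-in (not the functionals constructed in this chapter), a second central difference annihilates polynomials of degree $\le 1$, so applied to a smooth function it decays like $d_i^2$, i.e. like $\mu^{-ki/d}$ with $\mu = 2$, $k = 2$, $d = 1$:

```python
import math

def second_difference(v, t, h):
    """Model collocation functional with two vanishing moments:
    it annihilates all polynomials of degree <= 1."""
    return v(t - h) - 2.0 * v(t) + v(t + h)

coeffs = [abs(second_difference(math.sin, 0.3, 2.0 ** (-i))) for i in range(2, 8)]
ratios = [coeffs[i] / coeffs[i + 1] for i in range(len(coeffs) - 1)]
# each halving of the support scale divides the coefficient by about mu^{k/d} = 4
assert all(3.9 < r < 4.1 for r in ratios)
```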

For positive numbers $\alpha$ and $\beta$ we make use of the notation
\[
\mu[\alpha,\beta : n] = \sum_{i\in\mathbb{Z}_{n+1}} \mu^{\alpha i/d} \sum_{i'\in\mathbb{Z}_{n+1}} \mu^{\beta i'/d}
\]
to state the next lemma, which will play an important role in the analysis of the order of convergence and stability of the multiscale collocation method. To prove the next lemma we need to estimate the $L^\infty$-norm of a typical element of $X_n$, given by
\[
v = \sum_{(i,j)\in U_n} v_{ij} w_{ij}, \qquad (7.50)
\]

in terms of the norm $\|\mathbf{v}\|_\infty$ of its coefficients $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$. Specifically, we require condition (IX). One way to satisfy the condition is to consider the sequence of functions $\zeta_{ij}$, $(i,j) \in U$, defined by the equation
\[
\zeta_{ij} = \sum_{(i',j')\in U} (E^{-1})_{i'j',ij}\, w_{i'j'}, \quad (i,j) \in U. \qquad (7.51)
\]
These functions are bi-orthogonal relative to the set of linear functionals $\{\ell_{ij} : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{N}_0\}$; that is,
\[
\langle \ell_{i'j'}, \zeta_{ij}\rangle = \delta_{ii'}\delta_{jj'}, \quad (i,j), (i',j') \in U.
\]
If, in addition, for all $i \in \mathbb{N}_0$,
\[
\sup_{t\in\Omega} \sum_{j \in \mathbb{Z}_{w(i)}} |\zeta_{ij}(t)| \le \theta_2, \qquad (7.52)
\]
then the second inequality of (IX) follows.

In the next lemma we estimate the difference of the operators $\mathcal{K}_n$ and $\tilde{\mathcal{K}}_n$ applied to $\mathcal{P}_n v$. It is an important step for both the stability analysis and the convergence estimate of the multiscale collocation method.

applying to Pnv It is an important step for both stability analysis and theconvergence estimate of the multiscale collocation method

Lemma 7.11 Suppose that conditions (I)–(V) and (VIII)–(X) hold, $0 < \sigma' < \min\{2k,\ d-\sigma\}$ and $\eta = 2k - \sigma'$. Let $b$ and $b'$ be real numbers and let the truncation parameters $\varepsilon^n_{i'i}$, $i', i \in \mathbb{Z}_{n+1}$, be chosen such that
\[
\varepsilon^n_{i'i} \ge \max\left\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\right\}, \quad i, i' \in \mathbb{Z}_{n+1},
\]
for some constants $a > 0$ and $r > 1$. Then there exists a positive constant $c$ such that for all $n \in \mathbb{N}$ and $v \in W^{k,\infty}(\Omega)$,
\[
\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n v\|_\infty \le c\,\mu[2k-b\eta,\ k-b'\eta : n]\,(n+1)\,\mu^{-(k+\sigma')n/d}\|v\|_{k,\infty} \qquad (7.53)
\]
and, for $v \in L^\infty(\Omega)$,
\[
\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n v\|_\infty \le c\,\mu[k-b\eta,\ k-b'\eta : n]\,(n+1)\,\mu^{-\sigma' n/d}\|v\|_\infty. \qquad (7.54)
\]

Proof Since
\[
\mathcal{P}_n v = \sum_{(i,j)\in U_n} v_{ij} w_{ij},
\]
we conclude that
\[
(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n v = \sum_{(i,j)\in U_n} h_{ij} w_{ij},
\]
where
\[
\mathbf{h} = \mathbf{E}_n^{-1}(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}.
\]
Thus, by hypothesis (IX), we conclude that
\[
\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n v\|_\infty \le \theta_2 (n+1)\|(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_\infty. \qquad (7.55)
\]
We next estimate $\|(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_\infty$. To this end we introduce the matrix
\[
\Phi_n = [\Phi_{i'j',ij} : (i,j), (i',j') \in U_n],
\]
whose elements are given by
\[
\Phi_{i'j',ij} = \nu\,\mu^{[k(n-i)+\sigma' n]/d}\,(K_{i'j',ij} - \tilde{K}_{i'j',ij}), \quad (i,j), (i',j') \in U_n,
\]
where $\nu = 1/\mu[2k-b\eta,\ k-b'\eta : n]$, and the vector
\[
\mathbf{v}' = [v'_{ij} : (i,j) \in U_n],
\]
whose components are
\[
v'_{ij} = \mu^{ki/d}\, v_{ij}, \quad (i,j) \in U_n.
\]
In this notation we observe that
\[
\|(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_\infty \le \nu^{-1}\mu^{-(k+\sigma')n/d}\,\|\Phi_n\|_\infty \|\mathbf{v}'\|_\infty. \qquad (7.56)
\]
By Lemma 7.10, there exists a positive constant $c$ such that for all $n \in \mathbb{N}$ and all $v \in W^{k,\infty}(\Omega)$,
\[
\|\mathbf{v}'\|_\infty \le c\,\|v\|_{k,\infty}. \qquad (7.57)
\]
Moreover, from Lemma 7.9 there exists a positive constant $c$ such that
\[
\sum_{(i,j)\in U_n} \left|\Phi_{i'j',ij}\right| \le \nu \sum_{i\in\mathbb{Z}_{n+1}} \mu^{[k(n-i)+\sigma' n]/d}\,\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty
\le c\,\nu \sum_{i\in\mathbb{Z}_{n+1}} \mu^{[k(n-i)+\sigma' n - k(i+i')]/d}\,(\varepsilon^n_{i'i})^{-\eta}.
\]
Consequently, by the choice of $\varepsilon^n_{i'i}$, we conclude that
\[
\|\Phi_n\|_\infty = \max_{(i',j')\in U_n} \sum_{(i,j)\in U_n} \left|\Phi_{i'j',ij}\right| \le c. \qquad (7.58)
\]
Combining (7.56)–(7.58) with (7.55) yields the first estimate.

To prove the second estimate we proceed similarly and introduce the matrix
\[
\Phi'_n = [\Phi'_{i'j',ij}]_{s(n)\times s(n)},
\]
whose entries are given by
\[
\Phi'_{i'j',ij} = \nu'\,\mu^{\sigma' n/d}(K_{i'j',ij} - \tilde{K}_{i'j',ij}), \quad (i,j), (i',j') \in U_n,
\]
where $\nu' = 1/\mu[k-b\eta,\ k-b'\eta : n]$. With these quantities we have the estimate
\[
\|(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_\infty \le (\nu')^{-1}\mu^{-\sigma' n/d}\,\|\Phi'_n\|_\infty \|\mathbf{v}\|_\infty. \qquad (7.59)
\]
Condition (IX) provides a positive constant $c$ such that, for $v \in L^\infty(\Omega)$,
\[
\|\mathbf{v}\|_\infty \le c\,\|v\|_\infty. \qquad (7.60)
\]
As before, Lemma 7.9 and the choice of $\varepsilon^n_{i'i}$, $i, i' \in \mathbb{Z}_{n+1}$, ensure that there exists a positive constant $c$ such that
\[
\|\Phi'_n\|_\infty \le c. \qquad (7.61)
\]
Combining this inequality with (7.59) and (7.60) yields the second estimate.

We now turn our attention to the stability of the multiscale collocation method. For this purpose we require property (VI). This property follows trivially if $X_n$ is a space of piecewise polynomials. Because of this property and the fact that $\mathcal{K}$ is compact, we conclude for sufficiently large $n$ that the operators $(\mathcal{I} - \mathcal{K}_n)^{-1}$ exist and are uniformly bounded in $L^\infty(\Omega)$ (see, for example, [6, 7]). From this fact follows the stability estimate: there exist a positive constant $\rho$ and a positive integer $m$ such that for $n \ge m$ and $x \in X_n$,
\[
\|(\mathcal{I} - \mathcal{K}_n)x\|_\infty \ge \rho\,\|x\|_\infty.
\]
We establish a similar estimate for $\mathcal{I} - \tilde{\mathcal{K}}_n$.

Theorem 7.12 Suppose that $0 < \sigma' < \min\{2k,\ d-\sigma\}$ and $\eta = 2k - \sigma'$. If conditions (I)–(VI) and (VIII)–(X) hold and $\varepsilon^n_{i'i}$, $i, i' \in \mathbb{Z}_{n+1}$, are chosen as in Lemma 7.11 with
\[
b > \frac{k-\sigma'}{\eta}, \quad b' > \frac{k-\sigma'}{\eta}, \quad b + b' > 1,
\]
then there exist a positive constant $c$ and a positive integer $m$ such that for all $n \ge m$ and $x \in X_n$,
\[
\|(\mathcal{I} - \tilde{\mathcal{K}}_n)x\|_\infty \ge c\,\|x\|_\infty.
\]

Proof Note that, for any real numbers $\alpha$, $\beta$ and $e$,
\[
\lim_{n\to\infty} \mu[\alpha,\beta : n]\,(n+1)\,\mu^{-en/d} = 0
\]
when $e > \max\{0, \alpha, \beta, \alpha+\beta\}$. Thus our hypothesis ensures that there exists a positive integer $m$ such that, when $n \ge m$,
\[
c\,\mu[k-b\eta,\ k-b'\eta : n]\,(n+1)\,\mu^{-\sigma' n/d} < \rho/2, \qquad (7.62)
\]
where the constant $c$ is that appearing in (7.54). The stability of the collocation scheme and the second estimate in Lemma 7.11, together with (7.62), yield, for $x \in X_n$, that
\[
\|(\mathcal{I} - \tilde{\mathcal{K}}_n)x\|_\infty \ge \|(\mathcal{I} - \mathcal{K}_n)x\|_\infty - \|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n x\|_\infty \ge \frac{\rho}{2}\|x\|_\infty.
\]
This completes the proof.
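The limit fact used at the start of the proof is easy to probe numerically; the following sketch evaluates $\mu[\alpha,\beta : n](n+1)\mu^{-en/d}$ for a choice of parameters with $e > \max\{0, \alpha, \beta, \alpha+\beta\}$ (all numerical values here are illustrative):

```python
def mu_bracket(alpha, beta, n, mu=2.0, d=1.0):
    """mu[alpha, beta : n] = (sum_{i<=n} mu^(alpha*i/d)) * (sum_{i'<=n} mu^(beta*i'/d))."""
    s1 = sum(mu ** (alpha * i / d) for i in range(n + 1))
    s2 = sum(mu ** (beta * i / d) for i in range(n + 1))
    return s1 * s2

def term(alpha, beta, e, n, mu=2.0, d=1.0):
    return mu_bracket(alpha, beta, n, mu, d) * (n + 1) * mu ** (-e * n / d)

# alpha = 0.5, beta = -1, e = 0.8 > max{0, 0.5, -1, -0.5}: the sequence decays.
vals = [term(0.5, -1.0, 0.8, n) for n in (10, 20, 40)]
assert vals[0] > vals[1] > vals[2]
```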

In particular, this theorem ensures for $n \ge m$ that the equation
\[
(\mathcal{I} - \tilde{\mathcal{K}}_n)\tilde{u}_n = \mathcal{P}_n f \qquad (7.63)
\]
has a unique solution, given by
\[
\tilde{u}_n = \sum_{(i,j)\in U_n} \tilde{u}_{ij} w_{ij}.
\]
This equation is equivalent to the matrix equation
\[
(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\tilde{\mathbf{u}}_n = \mathbf{f}_n,
\]
where $\tilde{\mathbf{u}}_n = [\tilde{u}_{ij} : (i,j) \in U_n]$. The next theorem provides an error bound for $\|u - \tilde{u}_n\|_\infty$.


Theorem 7.13 Suppose that conditions (I)–(X) hold and that $0 < \sigma' < \min\{2k,\ d-\sigma\}$ and $\eta = 2k - \sigma'$. Let $\varepsilon^n_{i'i}$, $i, i' \in \mathbb{Z}_{n+1}$, be chosen as in Lemma 7.11 with $b$ and $b'$ satisfying one of the following three conditions:

(i) $b > 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' > 1 + \frac{k}{\eta}$;

(ii) $b = 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' > 1 + \frac{k}{\eta}$; or $b > 1$, $b' = \frac{k-\sigma'}{\eta}$, $b + b' > 1 + \frac{k}{\eta}$; or $b > 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' = 1 + \frac{k}{\eta}$;

(iii) $b = 1$, $b' = \frac{k}{\eta}$; or $b = \frac{2k}{\eta}$, $b' = \frac{k-\sigma'}{\eta}$.

Then there exist a positive constant $c$ and a positive integer $m$ such that for all $n \ge m$,
\[
\|u - \tilde{u}_n\|_\infty \le c\,s(n)^{-k/d}(\log s(n))^\tau \|u\|_{k,\infty},
\]
where $\tau = 0$ in case (i), $\tau = 1$ in case (ii) and $\tau = 2$ in case (iii).

Proof It follows from Theorem 7.12 that there exists a positive constant $c$ such that
\[
\|u - \tilde{u}_n\|_\infty \le \|u - \mathcal{P}_n u\|_\infty + c\,\|(\mathcal{I} - \tilde{\mathcal{K}}_n)(\mathcal{P}_n u - \tilde{u}_n)\|_\infty. \qquad (7.64)
\]
Using the equation
\[
\mathcal{P}_n(\mathcal{I} - \mathcal{K})u = (\mathcal{I} - \tilde{\mathcal{K}}_n)\tilde{u}_n,
\]
we find that
\[
(\mathcal{I} - \tilde{\mathcal{K}}_n)(\mathcal{P}_n u - \tilde{u}_n) = \mathcal{P}_n(\mathcal{I} - \mathcal{K})(\mathcal{P}_n u - u) + (\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u. \qquad (7.65)
\]
From (7.64), (7.65), hypothesis (VI) and Lemma 7.11, there exist positive constants $c$, $p$ such that
\[
\|u - \tilde{u}_n\|_\infty \le (1 + p\|\mathcal{I} - \mathcal{K}\|)\,\|\mathcal{P}_n u - u\|_\infty + c\,\mu'\,\mu^{-kn/d}\|u\|_{k,\infty},
\]
where
\[
\mu' = \mu[2k-b\eta,\ k-b'\eta : n]\,(n+1)\,\mu^{-\sigma' n/d}.
\]
We estimate each term in the inequality above separately. For the first term, we note that conditions (VI) and (VII) provide a positive constant $c$ such that
\[
\|\mathcal{P}_n u - u\|_\infty \le c\,\mu^{-kn/d}\|u\|_{k,\infty}.
\]
Now we turn our attention to estimating the quantity $\mu'$. To this end we observe, for any real numbers $\alpha$, $\beta$ and $e$ with $e > 0$, the asymptotic order
\[
\mu[\alpha,\beta : n]\,(n+1)\,\mu^{-en/d} =
\begin{cases}
o(1) & \text{if } e > \max\{\alpha, \beta, \alpha+\beta\},\\
O(n) & \text{if } \alpha = e,\ \beta < e,\ \alpha+\beta < e,\\
& \text{or if } \alpha < e,\ \beta = e,\ \alpha+\beta < e,\\
& \text{or if } \alpha < e,\ \beta < e,\ \alpha+\beta = e,\\
O(n^2) & \text{if } \alpha = 0,\ \beta = e \text{ or } \alpha = e,\ \beta = 0,
\end{cases}
\]
as $n \to \infty$. Using this fact with $\alpha = 2k - b\eta$, $\beta = k - b'\eta$ and $e = \sigma'$, we conclude that
\[
\mu' =
\begin{cases}
o(1) & \text{in case (i)},\\
O(n) & \text{in case (ii)},\\
O(n^2) & \text{in case (iii)},
\end{cases}
\]
which establishes the result of this theorem, by noting that $n \sim \log s(n)$.

We see from this theorem that the convergence order of the approximate solution $\tilde{u}_n$ obtained from the truncated collocation method is optimal up to a logarithmic factor.

7.3.2 The condition number of the truncated matrix and complexity

We next estimate the condition number of the matrix $\tilde{\mathbf{A}}_n = \mathbf{E}_n - \tilde{\mathbf{K}}_n$.

Theorem 7.14 If the conditions of Theorem 7.12 hold, then there exists a positive constant $c$ such that the condition number of the matrix $\tilde{\mathbf{A}}_n$ satisfies the estimate
\[
\mathrm{cond}_\infty(\tilde{\mathbf{A}}_n) \le c\,\log^2(s(n)),
\]
where $\mathrm{cond}_\infty(\mathbf{A})$ denotes the condition number of a matrix $\mathbf{A}$ in the $\infty$ matrix norm.

Proof For any $\mathbf{v} = [v_{ij} : (i,j) \in U_n] \in \mathbb{R}^{s(n)}$ we define the vector $\mathbf{g} = [g_{ij} : (i,j) \in U_n] \in \mathbb{R}^{s(n)}$ by the equation
\[
\tilde{\mathbf{A}}_n\mathbf{v} = \mathbf{g} \qquad (7.66)
\]
and the function
\[
g = \sum_{(i,j)\in U_n} g_{ij}\,\zeta_{ij}.
\]
Therefore we have that
\[
g_{ij} = \langle \ell_{ij}, g\rangle = \langle \ell_{ij}, \mathcal{P}_n g\rangle, \quad (i,j) \in U_n.
\]
It follows from (IV) that
\[
\|\tilde{\mathbf{A}}_n\mathbf{v}\|_\infty \le \theta_0\,\|\mathcal{P}_n g\|_\infty. \qquad (7.67)
\]
Let
\[
v = \sum_{(i,j)\in U_n} v_{ij} w_{ij}
\]
and observe the equation
\[
(\mathcal{I} - \tilde{\mathcal{K}}_n)v = \mathcal{P}_n g. \qquad (7.68)
\]
We conclude from (7.67) and (7.68) that there exists a positive constant $c$ such that
\[
\|\tilde{\mathbf{A}}_n\mathbf{v}\|_\infty \le \theta_0\,\|(\mathcal{I} - \tilde{\mathcal{K}}_n)v\|_\infty
\le \theta_0\left(\|(\mathcal{I} - \mathcal{K}_n)v\|_\infty + \|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\|_\infty\right)
\le c\,\|v\|_\infty,
\]
where the last inequality holds because of (7.54) and (7.62). Next, appealing to hypotheses (I) and (IV), we observe for any $t \in \Omega$ and $i \in \mathbb{Z}_{n+1}$ that
\[
\left|\sum_{j\in\mathbb{Z}_{w(i)}} v_{ij} w_{ij}(t)\right| \le \rho\,\theta_0\,\|\mathbf{v}\|_\infty,
\]
because there are at most $\rho$ values of $j \in \mathbb{Z}_{w(i)}$ such that $w_{ij}(t) \ne 0$. Therefore we conclude that
\[
\|v\|_\infty \le \rho\,\theta_0\,(n+1)\,\|\mathbf{v}\|_\infty. \qquad (7.69)
\]
Consequently, there exists a positive constant $c$ such that
\[
\|\tilde{\mathbf{A}}_n\|_\infty \le c\,(n+1). \qquad (7.70)
\]
Conversely, for any $\mathbf{g} \in \mathbb{R}^{s(n)}$ there exists a vector $\mathbf{v} \in \mathbb{R}^{s(n)}$ such that equation (7.66) holds. We argue that there exists a positive constant $c$ such that
\[
\|g\|_\infty \le c\,(n+1)\,\|\mathbf{g}\|_\infty.
\]
Hence we obtain from condition (IX) the inequality
\[
\|\mathbf{v}\|_\infty \le c\,\|v\|_\infty \le c\,\|(\mathcal{I} - \tilde{\mathcal{K}}_n)v\|_\infty = c\,\|g\|_\infty \le c\,(n+1)\,\|\mathbf{g}\|_\infty,
\]
from which it follows that there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,
\[
\|\tilde{\mathbf{A}}_n^{-1}\|_\infty \le c\,(n+1). \qquad (7.71)
\]
Recalling hypothesis (V), we combine the estimates (7.70) and (7.71) to obtain the desired result, namely
\[
\mathrm{cond}_\infty(\tilde{\mathbf{A}}_n) = O\left((n+1)^2\right) = O\left(\log^2(s(n))\right), \quad n \to \infty.
\]

In the remainder of this section we estimate the number of nonzero entries of the matrix $\tilde{\mathbf{A}}_n = \mathbf{E}_n - \tilde{\mathbf{K}}_n$, which shows that the truncation strategy embodied in Lemma 7.11 can lead to a fast numerical algorithm for solving equation (7.40) while preserving a nearly optimal order of convergence. For any matrix $\mathbf{A}$ we denote by $\mathcal{N}(\mathbf{A})$ the number of nonzero entries in $\mathbf{A}$.

Theorem 7.15 Suppose that hypotheses (I) and (V) hold. Let $b$ and $b'$ be real numbers not larger than one, and let the truncation parameters $\varepsilon^n_{i'i}$, $i', i \in \mathbb{Z}_{n+1}$, be chosen such that
\[
\varepsilon^n_{i'i} \le \max\left\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\right\}, \quad i, i' \in \mathbb{Z}_{n+1},
\]
for some constants $a > 0$ and $r > 1$. Then
\[
\mathcal{N}(\tilde{\mathbf{A}}_n) = O(s(n)\log^\tau s(n)),
\]
where $\tau = 1$, except for $b = b' = 1$, in which case $\tau = 2$.

Proof We first estimate the number $\mathcal{N}(\tilde{\mathbf{A}}_{i'i})$. For fixed $i$, $i'$ and $j'$, if $\tilde{A}_{i'j',ij} \ne 0$ then $\mathrm{dist}(S_{i'j'}, S_{ij}) \le \varepsilon^n_{i'i}$, so that
\[
S_{ij} \subseteq S(i,i') = \{v : v \in \mathbb{R}^d,\ |v - v_0| \le d_i + d_{i'} + \varepsilon^n_{i'i}\},
\]
where $v_0$ is an arbitrary point in the set $S_{i'j'}$. Let $N_{ii'j'}$ be the number of such sets which are contained in $S(i,i')$. Using condition (V), we conclude that there exists a positive constant $c$ such that
\[
N_{ii'j'} \le \frac{\mathrm{meas}(S(i,i'))}{\min\{\mathrm{meas}(S_{ij}) : S_{ij} \subseteq S(i,i')\}} \le c\,\mu^i (d_i + d_{i'} + \varepsilon^n_{i'i})^d.
\]
Next we invoke condition (I) to conclude that the number of functions $w_{ij}$ having support contained in $S_{ij}$ is bounded by $\rho$, and, appealing to condition (V), we have that $w(i) = O(\mu^i)$, $i \to \infty$. Consequently, there exists a positive constant $c$ such that
\[
\mathcal{N}(\tilde{\mathbf{A}}_{i'i}) \le \rho \sum_{j'\in\mathbb{Z}_{w(i')}} N_{ii'j'} \le c\,\mu^{i+i'}(d_i + d_{i'} + \varepsilon^n_{i'i})^d, \quad i, i' \in \mathbb{Z}_{n+1},
\]
from which it follows that
\[
\mathcal{N}(\tilde{\mathbf{A}}_n) \le c \sum_{i,i'\in\mathbb{Z}_{n+1}} \mu^{i+i'}\left[(d_i)^d + (d_{i'})^d + (\varepsilon^n_{i'i})^d\right].
\]
This inequality and condition (I) imply that, if the truncation parameters have the bound
\[
\varepsilon^n_{i'i} \le a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},
\]
then
\[
\begin{aligned}
\mathcal{N}(\tilde{\mathbf{A}}_n) &\le c \sum_{i'\in\mathbb{Z}_{n+1}} \sum_{i\in\mathbb{Z}_{n+1}} \mu^{i+i'}\left(\mu^{-i} + \mu^{-i'} + a^d \mu^{-n+b(n-i)+b'(n-i')}\right)\\
&\le c\left[2(n+1)\sum_{i\in\mathbb{Z}_{n+1}} \mu^i + a^d \mu^n \left(\sum_{i\in\mathbb{Z}_{n+1}} \mu^{(b-1)(n-i)}\right)\left(\sum_{i'\in\mathbb{Z}_{n+1}} \mu^{(b'-1)(n-i')}\right)\right]\\
&= O\left(\mu^n (n+1)^\tau\right) = O\left(s(n)\log^\tau s(n)\right)
\end{aligned}
\]
as $n \to \infty$. If $\varepsilon^n_{i'i} \le r(d_i + d_{i'})$, a similar argument leads to
\[
\mathcal{N}(\tilde{\mathbf{A}}_n) = O(s(n)\log s(n)), \quad n \to \infty.
\]
This completes the proof.

It follows from Theorems 7.12–7.15 that, for the truncation scheme to have all the desired properties of stability, convergence and complexity, we have to choose the truncation parameters to satisfy the equation
\[
\varepsilon^n_{i'i} = \max\left\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\right\}, \quad i, i' \in \mathbb{Z}_{n+1},
\]
with $b = 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' \ge 1 + \frac{k}{\eta}$, or with $b = 1$, $b' = \frac{k}{\eta}$, $\sigma' < k$.
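In practice this parameter choice amounts to a few lines of code. The sketch below (the values of $\mu$, $a$, $r$, $b$, $b'$ and the diameters are placeholder assumptions) tabulates $\varepsilon^n_{i'i}$ for all level pairs:

```python
def truncation_parameters(n, mu, d, a, b, bp, r, diam):
    """eps[i'][i] = max{ a*mu^((-n + b(n-i) + b'(n-i'))/d), r*(d_i + d_{i'}) }."""
    return [[max(a * mu ** ((-n + b * (n - i) + bp * (n - ip)) / d),
                 r * (diam[i] + diam[ip]))
             for i in range(n + 1)]
            for ip in range(n + 1)]

n, mu = 3, 2.0
diam = [mu ** (-i) for i in range(n + 1)]   # hypothetical diameters d_i ~ mu^{-i/d}, d = 1
eps = truncation_parameters(n, mu, 1.0, 1.0, 1.0, 1.0, 2.0, diam)
print(eps[0][0], eps[n][n])   # the coarse pair keeps a wide band, the fine pair a narrow one
```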

7.4 Bibliographical remarks

This chapter lays out the foundation of the fast multiscale collocation method for solving the Fredholm integral equation of the second kind with a weakly singular kernel. Most of the material presented in this chapter is taken from the paper [69]. The construction of multiscale basis functions and the corresponding multiscale collocation functionals, both having vanishing moments, was described in [69], based on ideas developed in [65, 200, 201]. The analysis of the collocation method for solving the integral equation is based on the theory of collectively compact operators described in [6, 15]. For the definition of function values $f(t)$ at given points $t \in \Omega$ for an $L^\infty$ function $f$, readers are referred to [21]. We remark that the fast multiscale collocation method was realized for integral equations in one, two and higher dimensions, respectively, in [75], [264] and [74]. Another wavelet collocation method was presented in [105] for the solution of boundary integral equations of order $r = 0, 1$ over a closed and smooth boundary manifold, where the trial space is the space of all continuous and piecewise linear functions defined over a uniform triangular grid and the collocation points are the grid points. For more wavelet collocation methods for solving integral equations, readers are referred to [225, 226]. A quadrature algorithm for the piecewise linear wavelet collocation applied to boundary integral equations can be found in [227]. Wavelet collocation methods for a first-kind boundary integral equation in acoustic scattering were developed in [143].

Numerical integrations with error control strategies in the fast collocation methods described in this chapter were originally presented in [72]. An iterated fast collocation method was developed for solving integral equations of the second kind in [62]. Multiscale collocation methods were applied in [40] to solve stochastic integral equations, and in [52, 76, 158, 160] to solve Hammerstein equations and nonlinear boundary integral equations. Moreover, multiscale collocation methods were applied to solve ill-posed integral equations of the first kind and inverse boundary value problems in [56, 57, 79, 107], to identify Volterra kernels of high order in [33] and to solve eigen-problems of weakly singular integral operators in [70].


8 Numerical integrations and error control

In the last three chapters, multiscale Galerkin methods, multiscale Petrov–Galerkin methods and multiscale collocation methods were developed for solving the Fredholm integral equation of the second kind with a weakly singular kernel on a domain in $\mathbb{R}^d$. These methods, which use multiscale bases having vanishing moments, lead to compression strategies for the coefficient matrices of the resulting linear systems. They provide fast algorithms for solving the integral equations with an optimal order of convergence and quasi-linear (up to a logarithmic factor) order of computational complexity. However, it should be pointed out that a challenging problem remains: computation of the entries of the compressed coefficient matrix, which are weakly singular integrals. The purpose of this chapter is to introduce error control strategies for the numerical integrations used in generating the coefficient matrices of these multiscale methods. The error control techniques are designed so that quadrature errors will not ruin the overall convergence order of the approximate solution of the integral equation and will not increase the overall computational complexity order of the original multiscale method. Specifically, we discuss these problems in the setting of multiscale collocation methods. Two types of quadrature rule are used, and the corresponding error control techniques are discussed in this chapter. The numerical integration issue for the other two types of multiscale method can be handled similarly, and thus we leave it to the interested reader.

8.1 Discrete systems of the multiscale collocation method

We begin this chapter with a brief review of the discrete systems of linear equations resulting from the multiscale collocation method introduced in Chapter 7.


We consider solving the Fredholm integral equation of the second kind in the form
\[
u - \mathcal{K}u = f, \qquad (8.1)
\]
where $f \in L^\infty(\Omega)$ is a given function, $u \in L^\infty(\Omega)$ is the unknown to be determined, $\Omega = [0,1]$ and the operator $\mathcal{K} : L^\infty(\Omega) \to L^\infty(\Omega)$ is defined by
\[
(\mathcal{K}u)(s) = \int_\Omega K(s,t)u(t)\,dt, \quad s \in \Omega. \qquad (8.2)
\]
The multiscale collocation scheme for solving (8.1) seeks a vector $\mathbf{u}_n = [u_{ij} : (i,j) \in U_n]$ such that the function
\[
u_n = \sum_{(i,j)\in U_n} u_{ij} w_{ij}
\]
satisfies the equation
\[
\langle \ell_{i'j'}, u_n - \mathcal{K}u_n\rangle = \langle \ell_{i'j'}, f\rangle, \quad (i',j') \in U_n,
\]
or, equivalently,
\[
(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n, \qquad (8.3)
\]
where
\[
\mathbf{E}_n = [\langle \ell_{i'j'}, w_{ij}\rangle : (i',j'), (i,j) \in U_n], \qquad (8.4)
\]
\[
\mathbf{K}_n = [\langle \ell_{i'j'}, \mathcal{K}w_{ij}\rangle : (i',j'), (i,j) \in U_n] \qquad (8.5)
\]
and
\[
\mathbf{f}_n = [\langle \ell_{i'j'}, f\rangle : (i',j') \in U_n].
\]

The coefficient matrix $\mathbf{A}_n = \mathbf{E}_n - \mathbf{K}_n$ is a full matrix. However, due to the special properties of the multiscale bases and functionals, the matrix $\mathbf{E}_n$ is sparse and $\mathbf{K}_n$ is numerically sparse, in the sense that most of its entries have small absolute values. To avoid computing all of these entries, a truncation strategy is proposed so that the numerical solution obtained from the truncated sparse matrix is as accurate as that from the full matrix (see Chapter 7). Let
\[
\mathbf{K}_n = [\mathbf{K}_{i'i} : i', i \in \mathbb{Z}_{n+1}],
\]
where
\[
\mathbf{K}_{i'i} = [K_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}]
\]
with $K_{i'j',ij} = \langle \ell_{i'j'}, \mathcal{K}w_{ij}\rangle$.

The truncation parameters $\varepsilon^n_{i'i}$ are chosen for a pair of level indices $(i', i)$ by
\[
\varepsilon^n_{i'i} = \max\left\{\nu(d_i + d_{i'}),\ a\,\mu^{b'(n-i')-i}\right\}
\]


for some constants $a > 0$ and $\nu > 1$. The truncated matrix $\tilde{\mathbf{K}}_n$ is then defined by

$$\tilde{\mathbf{K}}_n = [\tilde{\mathbf{K}}_{i'i} : i',i \in \mathbb{Z}_{n+1}],$$

where

$$\tilde{\mathbf{K}}_{i'i} = [\tilde{K}_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\, j \in \mathbb{Z}_{w(i)}]$$

with

$$\tilde{K}_{i'j',ij} = \begin{cases} K_{i'j',ij}, & \mathrm{dist}(S_{i'j'}, S_{ij}) \le \varepsilon^n_{i'i},\\ 0, & \text{otherwise.} \end{cases}$$

We see from (8.5) that each entry of $\mathbf{K}_n$ is a weakly singular integral and has to be computed numerically. When an elementary quadrature rule is chosen for the evaluation of $K_{i'j',ij}$, a bound on the numerical error can be obtained; we can then use this bound to gauge how the accumulation of these errors influences the accuracy of the resulting numerical solution. The problem is how to choose the quadrature rules, together with their parameter values, so that the convergence order of the multiscale collocation scheme is preserved at low computational complexity.
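For orientation, the truncation rule just described can be sketched in a few lines of Python. The helper names (`truncation_parameter`, `truncate_block`, `dist`) and the toy distance matrix are illustrative assumptions, not the book's code; only the two formulas — the definition of $\varepsilon^n_{i'i}$ with $d_i = \mu^{-i+1}$ and the keep-or-zero rule — come from the text.

```python
import numpy as np

def truncation_parameter(ip, i, n, mu=2.0, nu=1.01, a=0.25, bprime=0.8):
    """eps^n_{i'i} = max{nu*(d_i + d_{i'}), a*mu^(b'(n-i') - i)},
    with d_i = mu^(-i+1) the mesh size on level i (illustrative values)."""
    d = lambda lvl: mu ** (-lvl + 1)
    return max(nu * (d(i) + d(ip)), a * mu ** (bprime * (n - ip) - i))

def truncate_block(K_block, dist_block, eps):
    """Zero every entry whose supports are farther apart than eps."""
    return np.where(dist_block <= eps, K_block, 0.0)

# toy 4x4 block: entry (p, q) is assigned the support distance |p - q| / 4
dist = np.abs(np.subtract.outer(np.arange(4), np.arange(4))) / 4.0
K = np.ones((4, 4))
eps = truncation_parameter(ip=2, i=2, n=3)
K_trunc = truncate_block(K, dist, eps)
print(eps, int(np.count_nonzero(K_trunc)))
```

In a real multiscale code the distance matrix comes from the supports of the basis functions and collocation functionals; here every support pair is close enough, so no entry is dropped.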

8.2 Quadrature rules with polynomial order of accuracy

We first present a class of quadrature rules with polynomial order of accuracy.

8.2.1 Quadrature rule I

The first quadrature rule that we present here was introduced in [164] for a class of weakly singular univariate functions. For a fixed positive integer $k'$, let $h \in C^{2k'}((0,1])$ satisfy the property that there exists a positive constant $c$ such that

$$|h^{(2k')}(t)| \le c\,t^{-\sigma-2k'}, \qquad t \in (0,1],$$

where $\sigma \in [0,1)$. Note that the function $h$ is integrable on $[0,1]$ but may have a singularity at $t = 0$. We wish to compute a numerical value of the integral

$$I(h) = \int_0^1 h(t)\,dt$$

with accuracy $O(m^{-2k'})$ using $O(m)$ functional evaluations. For this purpose we let $g_{k'}$ be the Legendre polynomial of degree $k'$ on


$[0,1]$; that is,

$$\int_0^1 g_{k'}(t)\,t^{\ell}\,dt = 0, \qquad \ell \in \mathbb{Z}_{k'},$$

and denote by $\tau_\ell$, $\ell \in \mathbb{Z}_{k'}$, the $k'$ zeros of $g_{k'}$, ordered so that $0 < \tau_0 < \cdots < \tau_{k'-1} < 1$. To compute $I(h)$ we set $q = \frac{2k'+1}{1-\sigma}$ and, according to this

parameter $q$, we choose the $m+1$ points

$$t_j = \left(\frac{j}{m}\right)^{q}, \qquad j \in \mathbb{Z}_{m+1}, \qquad (8.6)$$

so that the subintervals $\Delta_j = [t_j, t_{j+1}]$, $j \in \mathbb{Z}_m$, form a partition of the interval $[0,1]$. Let

$$\tau_{j\ell} = t_j + (t_{j+1}-t_j)\,\tau_\ell, \qquad \ell \in \mathbb{Z}_{k'},\ j \in \mathbb{Z}_m, \qquad (8.7)$$

and note that $\tau_{j\ell}$, $\ell \in \mathbb{Z}_{k'}$, are the $k'$ zeros of the Legendre polynomial of degree $k'$ on $\Delta_j$. We now use these points to define a piecewise polynomial $S(h)$ over $[0,1]$ with knots $t_j$, $j = 1,2,\dots,m-1$. Set $S(h)(t) = 0$ for $t \in [t_0,t_1)$, and on each interval $[t_j,t_{j+1})$, $j = 1,2,\dots,m-2$, as well as on $[t_{m-1},t_m]$, let $S(h)$ be the Lagrange interpolation polynomial of degree $k'-1$ to $h$ at the nodes $\tau_{j\ell}$, $\ell \in \mathbb{Z}_{k'}$. We use the

value

$$I(S(h)) = \sum_{j-1\in\mathbb{Z}_{m-1}}\ \sum_{\ell\in\mathbb{Z}_{k'}} \omega_{j\ell}\, h(\tau_{j\ell}),$$

where

$$\omega_{j\ell} = \int_{t_j}^{t_{j+1}}\ \prod_{i\in\mathbb{Z}_{k'},\, i\neq\ell} \frac{t-\tau_{ji}}{\tau_{j\ell}-\tau_{ji}}\,dt,$$

to approximate the integral $I(h)$. Let $E_{m,k'}(h) = I(h) - I(S(h))$ denote the error of the approximation; it is proved in [164] that there exists a positive constant $c'$, which may depend on $h$, such that

$$|E_{m,k'}(h)| \le c'\,m^{-2k'} \qquad \text{for all } m \in \mathbb{N}.$$

We next consider computing the entries of $\mathbf{K}_n$ using this integration method. The entries of the matrix $\mathbf{K}_n$ are integrals whose integrands have the form

$$h_{ij}(s,t) = K(s,t)\,w_{ij}(t) \qquad (8.8)$$

for some $s \in E$ and $(i,j) \in U_n$. Note that these functions $h_{ij}$ have a singularity at the point $s$, their supports are given by the supports of the $w_{ij}$, and they are piecewise smooth. To apply the integration method to these functions, we define a function class and extend the integration method to this class of functions $h$. A function $h$ is said to be in class A if it has the following properties:


(I) $\mathrm{supp}(h)$ is a subinterval of $E$;
(II) there exists a set of nodes $\pi(h) = \{s_j : j-1 \in \mathbb{Z}_{m'-1}\}$ such that $h \in C^{2k'}(E \setminus (\{s\} \cup \pi(h)))$; and
(III) there exists a positive constant $\theta'$ such that

$$|h^{(2k')}(t)| \le \theta'\,|t-s|^{-(\sigma+2k')}, \qquad t \in E \setminus (\{s\} \cup \pi(h)).$$

For a function in class A we choose the canonical partition of $E$ with respect to $m$ as described by (8.6), and, associated with the singular point $s$, we pick two collections of nodes

$$\pi^r_t = \{t^r_j = s + t_j : j \in \mathbb{Z}_{m+1}\} \qquad\text{and}\qquad \pi^l_t = \{t^l_j = s - t_j : j \in \mathbb{Z}_{m+1}\}.$$

Let $[q',q''] = \mathrm{supp}(h)$. We rearrange the elements of

$$(\pi(h) \cup \pi^r_t \cup \pi^l_t \cup \{q',q''\}) \cap \mathrm{supp}(h)$$

in increasing order and write them as a new sequence $q' = q_0 < q_1 < \cdots < q_{m''} = q''$, where the integer $m''$ depends on $m$ and satisfies $m'' \le 2m + m' + 1$. Define a partition $\Pi(h)$ of $\mathrm{supp}(h)$ by

$$\Pi(h) = \{Q_\alpha = [q_\alpha, q_{\alpha+1}) : \alpha \in \mathbb{Z}_{m''}\},$$

and define a piecewise polynomial $S(h)$ of order $k'$ on $\mathrm{supp}(h)$ by the rule that $S(h) = 0$ on $Q_\alpha$ if $Q_\alpha \subset [t^l_1, t^r_1)$, and otherwise, on $Q_\alpha$, $S(h)$ is the Lagrange interpolation polynomial of order $k'$ which interpolates $h$ at the $k'$ zeros $\tau^\alpha_\ell = q_\alpha + (q_{\alpha+1}-q_\alpha)\tau_\ell$, $\ell \in \mathbb{Z}_{k'}$, of the Legendre polynomial of degree $k'$ on $Q_\alpha$. We compute the value $I(S(h))$ and use it to approximate $I(h)$. In the next lemma we analyze the order of convergence of this integration method. For this purpose we set

$$E_{m,k'}(h) = \sum_{Q_\alpha\in\Pi(h)} \left|\int_{Q_\alpha} [h(t)-S(h)(t)]\,dt\right|.$$

Lemma 8.1 Let $h$ be a function in class A. Then there exists a positive constant $c_1$, independent of $h$, $s$ or $\Pi(h)$, such that

$$E_{m,k'}(h) \le c_1\,\theta'\,m^{-2k'}, \qquad (8.9)$$

where $\theta'$ is the constant appearing in the definition of class A.

Proof The proof is obtained by modifying the proof of Theorem 2.2 in [164]. For $j \in \mathbb{Z}_m$ we introduce two index sets

$$\Lambda^r_j = \{\alpha \in \mathbb{Z}_{m''} : Q_\alpha \in \Pi(h),\ Q_\alpha \subset [t^r_j, t^r_{j+1}]\}$$


and

$$\Lambda^l_j = \{\alpha \in \mathbb{Z}_{m''} : Q_\alpha \in \Pi(h),\ Q_\alpha \subset [t^l_{j+1}, t^l_j]\}.$$

Associated with these two index sets we set

$$E^r_{k',j}(h) = \sum_{\alpha\in\Lambda^r_j} \left|\int_{Q_\alpha}[h(t)-S(h)(t)]\,dt\right| \qquad\text{and}\qquad E^l_{k',j}(h) = \sum_{\alpha\in\Lambda^l_j} \left|\int_{Q_\alpha}[h(t)-S(h)(t)]\,dt\right|,$$

and observe that

$$E_{m,k'}(h) = \sum_{\Lambda^l_j\neq\emptyset} E^l_{k',j}(h) + \sum_{\Lambda^r_j\neq\emptyset} E^r_{k',j}(h).$$

We first estimate $E^r_{k',j}$. By the definition of $S(h)$ we have that

$$E^r_{k',0}(h) \le \int_{t^r_0}^{t^r_1} |h(t)|\,dt \le \theta'\int_{t^r_0}^{t^r_1} |s-t|^{-\sigma}\,dt = \frac{\theta'}{1-\sigma}\,m^{-(2k'+1)}. \qquad (8.10)$$

For $j \ge 1$, it follows from the error estimate of the Gaussian quadrature that there exist $\eta_\alpha \in Q_\alpha$ such that

$$E^r_{k',j}(h) = \sum_{\alpha\in\Lambda^r_j} \frac{|h^{(2k')}(\eta_\alpha)|}{(2k')!}\left|\int_{Q_\alpha}(t-\tau^\alpha_0)^2\cdots(t-\tau^\alpha_{k'-1})^2\,dt\right|.$$

Using condition (III) and noting that

$$|t-\tau^\alpha_\ell| < t^r_{j+1}-t^r_j \quad\text{for any } \alpha\in\Lambda^r_j,\ t\in Q_\alpha, \qquad\text{and}\qquad \sum_{\alpha\in\Lambda^r_j}\int_{Q_\alpha} dt \le t^r_{j+1}-t^r_j,$$

we conclude that

$$E^r_{k',j}(h) \le \frac{\theta'}{(2k')!}\,|s-t^r_j|^{-(\sigma+2k')}\,(t^r_{j+1}-t^r_j)^{2k'+1}.$$

By the definition of $t^r_j$ and (8.6), we observe that

$$E^r_{k',j}(h) \le \frac{\theta'\,m^{-(2k'+1)}}{(2k')!}\,j^{-q(\sigma+2k')}\,q^{2k'+1}\,(j+1)^{(q-1)(2k'+1)}.$$


Since $q(\sigma+2k') = (q-1)(2k'+1)$, it follows that

$$E^r_{k',j}(h) \le \frac{\theta'}{(2k')!}\,q^{2k'+1}\,2^{(q-1)(2k'+1)}\,m^{-(2k'+1)}. \qquad (8.11)$$

Likewise, we obtain estimates similar to (8.10) and (8.11) for $E^l_{k',0}$ and $E^l_{k',j}$. Summing over the at most $m$ nonempty index sets on each side, we conclude (8.9) with

$$c_1 = 2\max\left\{\frac{q^{2k'+1}\,2^{(q-1)(2k'+1)}}{(2k')!},\ \frac{1}{1-\sigma}\right\},$$

proving the lemma.

We now apply the integration method described above for functions in class A to the functions $h_{ij}$ defined by (8.8), which appear in the compressed matrix $\tilde{\mathbf{K}}_n$. Recall that the compressed matrix is obtained from the full matrix $\mathbf{K}_n$ by the truncation strategy defined with the truncation parameters $\varepsilon^n_{i'i}$, $i',i \in \mathbb{Z}_{n+1}$. For the given truncation parameters $\varepsilon^n_{i'i}$ we introduce the index set

$$Z_{i'j',i} = \{j \in \mathbb{Z}_{w(i)} : \mathrm{dist}(S_{i'j'}, S_{ij}) \le \varepsilon^n_{i'i}\}$$

for $(i',j') \in U_n$, and define, for $\ell \in \mathbb{Z}_r$,

$$Z^\ell_{i'j',i} = \{j \in Z_{i'j',i} : j = \mu(e)r + \ell\}.$$

We observe that $Z^\ell_{i'j',i} \subseteq Z_{i'j',i}$, that for $j_1, j_2 \in Z^\ell_{i'j',i}$ with $j_1 \neq j_2$,

$$\mathrm{meas}(\mathrm{supp}(w_{ij_1}) \cap \mathrm{supp}(w_{ij_2})) = 0,$$

and that for any $\ell \in \mathbb{Z}_r$,

$$\bigcup_{j\in Z^\ell_{i'j',i}} \mathrm{supp}(w_{ij}) \subset E.$$

Therefore we define, for $\ell \in \mathbb{Z}_r$ and $(i',j') \in U_n$,

$$w_{i'j',i,\ell}(t) = \begin{cases} w_{ij}(t), & t \in \mathrm{int}(\mathrm{supp}(w_{ij})) \text{ for some } j \in Z^\ell_{i'j',i},\\ 0, & \text{otherwise,} \end{cases}$$

and set

$$h_{i'j',i,\ell}(s,t) = K(s,t)\,w_{i'j',i,\ell}(t).$$

The next lemma presents an estimate for the error of the integration method applied to the functions $h_{i'j',i,\ell}$.


Lemma 8.2 Suppose that there exists a positive constant $\theta$ such that

$$|D^\beta_t K(s,t)| \le \theta\,|s-t|^{-(\sigma+\beta)}$$

for any $0 \le \beta \le 2k'$ and $s,t \in E$, $s \neq t$. Then there exists a positive constant $c_2$ such that for all $i \in \mathbb{Z}_{n+1}$, $\ell \in \mathbb{Z}_r$, $(i',j') \in U_n$ and $s \in E$,

$$E_{m,k'}(h_{i'j',i,\ell}) \le c_2\,m^{-2k'}\left[\left(\mu^{-i+1}+\mu^{-i'+1}+\varepsilon^n_{i'i}\right)\mu^{i-1}\right]^{k_0}, \qquad (8.12)$$

where $k_0 = \min\{k,\,2k'+1\}$.

Proof It suffices to prove that $h_{i'j',i,\ell}$ is in class A and to compute the constant $\theta'$ for this function.

It is clear that condition (I) is satisfied. According to the construction of the $w_{ij}$, for any $(i,j)\in U_n$ there exist $e\in\mathbb{Z}_{\mu^{i-1}}$ and $l\in\mathbb{Z}_r$ such that $j = \mu(e)r + l$ and

$$w_{ij} = T_e w_{1l} = (w_{1l}\circ\phi_e^{-1})\,\chi_{\phi_e(E)}.$$

Note that $w_{1l}$ is a piecewise polynomial with a finite set of knots. This set of knots induces the set $\pi(h_{ij})$ of knots for the function $h_{ij}$ required by condition (II) in the definition of class A. Observing that

$$\pi(h_{i'j',i,\ell}) = \bigcup_{j\in Z^\ell_{i'j',i}} \pi(h_{ij}), \qquad (8.13)$$

we confirm that $h_{i'j',i,\ell}$ satisfies condition (II) with the set $\pi(h_{i'j',i,\ell})$ of knots. It remains to show that it also satisfies condition (III). Noting that each $w_{i'j',i,\ell}$ is a piecewise polynomial of order $k$ with the knots $\pi(h_{i'j',i,\ell})$, it follows from the hypothesis on the kernel $K$ that for $t\in\mathrm{supp}(w_{ij})\setminus(\{s\}\cup\pi(h_{ij}))$,

we confirm that hiprimejprimei satisfies condition (II) with the set π(hiprimejprimei) of knots Itremains to show that it also satisfies condition (III) Again by noting that eachwiprimejprimei is a piecewise polynomial of order k with the knots π(hiprimejprimei) it followsfrom the hypothesis on kernel K that for t isin supp(wij) (s cup π(hij))

|D2kprimet hiprimejprimei(s t)| =

∣∣∣∣∣∣sumβisinZk0

2kprime)

D2kprimeminusβt K(s t)w(β)

ij (t)

∣∣∣∣∣∣le θ

sumβisinZk0

2kprime)|sminus t|minus(σ+2kprimeminusβ)μβ(iminus1)|w(β)

1l (φminus1e (t))|

Introducing a constant

= maxβisinZk0 lisinZr

sup|w(β)

1l (t)| t isin ltinfin

we obtain the estimate for t isin supp(wij) (s cup π(hij))

|D2kprimet hiprimejprimei(t)| le θ(2kprime)

sumβisinZk0

[|sminus t|μiminus1]β |sminus t|minus(σ+2kprime) (814)


We now compute the constant $\theta'$ associated with the function $h_{i'j',i,\ell}$. For any $j \in Z_{i'j',i}$ we have that

$$|s-t| \le d_i + d_{i'} + \varepsilon^n_{i'i} \qquad\text{for any } t \in \mathrm{supp}(w_{ij}).$$

This implies that for $t \in \mathrm{supp}(w_{ij})$,

$$\sum_{\beta\in\mathbb{Z}_{k_0}}\left[|s-t|\,\mu^{i-1}\right]^{\beta} \le k_0\left[\left(\mu^{-i+1}+\mu^{-i'+1}+\varepsilon^n_{i'i}\right)\mu^{i-1}\right]^{k_0}.$$

Noticing that

$$\mathrm{supp}(w_{i'j',i,\ell}) \subset \bigcup_{j\in Z^\ell_{i'j',i}} \mathrm{supp}(w_{ij}),$$

we observe that condition (III) holds for the function $h_{i'j',i,\ell}$ with the constant

$$\theta' = \theta M\,k_0\,(2k')!\left[\left(\mu^{-i+1}+\mu^{-i'+1}+\varepsilon^n_{i'i}\right)\mu^{i-1}\right]^{k_0}.$$

Finally, using Lemma 8.1, we conclude the estimate (8.12) with $c_2 = c_1\,\theta M\,k_0\,(2k')!$.

We now use the integration method described above to compute the integrals involved in the nonzero entries

$$K_{i'j',ij} = \sum_{s\in S_{i'j'}} c_s \int_{S_{ij}} h_{ij}(s,t)\,dt \qquad (8.15)$$

of $\mathbf{K}_{i'i}$. In other words, we use

$$\hat{K}_{i'j',ij} = \sum_{s\in S_{i'j'}} c_s\,I(S(h_{ij}(s,\cdot)))$$

to approximate $K_{i'j',ij}$ given by (8.15). For a given set of truncation parameters we let

$$\hat{\mathbf{K}}_{i'i} = \left[\hat{K}^{(\varepsilon)}_{i'j',ij} : j'\in\mathbb{Z}_{w(i')},\ j\in\mathbb{Z}_{w(i)}\right],$$

where

$$\hat{K}^{(\varepsilon)}_{i'j',ij} = \begin{cases} \hat{K}_{i'j',ij}, & \mathrm{dist}(S_{i'j'},S_{ij}) \le \varepsilon^n_{i'i},\\ 0, & \text{otherwise.} \end{cases} \qquad (8.16)$$

In the next lemma we estimate the $\infty$-norm of the error $\tilde{\mathbf{K}}_{i'i} - \hat{\mathbf{K}}_{i'i}$.


Lemma 8.3 Let $m$ be a positive integer. Then there exists a positive constant $c_3 > 0$ such that for all $i',i \in \mathbb{Z}_{n+1}$ and $n \in \mathbb{N}$,

$$\|\tilde{\mathbf{K}}_{i'i} - \hat{\mathbf{K}}_{i'i}\|_\infty \le c_3\left[\left(\mu^{-i+1}+\mu^{-i'+1}+\varepsilon^n_{i'i}\right)\mu^{i-1}\right]^{k_0} m^{-2k'}. \qquad (8.17)$$

Proof For $(i',j') \in U_n$ we set $C_{i'j'} = \max\{|c_s| : s \in S_{i'j'}\}$. By the construction of the collocation functionals there exist positive constants $c_2'$, $c_2''$ such that for all $(i',j') \in U_n$ and all $n \in \mathbb{N}$,

$$\mathrm{card}(S_{i'j'}) \le c_2' \qquad\text{and}\qquad \max\{C_{i'j'} : (i',j') \in U_n\} \le c_2''. \qquad (8.18)$$

Using (8.15) and (8.18), we see that there exists a positive constant $c$ such that for all $i',i \in \mathbb{Z}_{n+1}$,

$$\|\tilde{\mathbf{K}}_{i'i} - \hat{\mathbf{K}}_{i'i}\|_\infty \le c\max_{j'\in\mathbb{Z}_{w(i')}}\left\{\sum_{s\in S_{i'j'}}\ \sum_{j\in Z_{i'j',i}} E_{m,k'}(h_{ij})\right\}.$$

According to the definition of $E_{m,k'}(h)$, we conclude that

$$\sum_{j\in Z_{i'j',i}} E_{m,k'}(h_{ij}) = \sum_{\ell\in\mathbb{Z}_r}\ \sum_{j\in Z^\ell_{i'j',i}}\ \sum_{Q_\alpha\in\Pi(h_{ij})}\left|\int_{Q_\alpha}[h_{ij}(t)-S(h_{ij})(t)]\,dt\right|.$$

Recalling (8.13), we obtain that the right-hand side of the equation above is equal to

$$\sum_{\ell\in\mathbb{Z}_r}\ \sum_{Q_\alpha\in\Pi(h_{i'j',i,\ell})}\left|\int_{Q_\alpha}[h_{i'j',i,\ell}(t)-S(h_{i'j',i,\ell})(t)]\,dt\right| = \sum_{\ell\in\mathbb{Z}_r} E_{m,k'}(h_{i'j',i,\ell}).$$

It follows that

$$\|\tilde{\mathbf{K}}_{i'i}-\hat{\mathbf{K}}_{i'i}\|_\infty \le c\max\left\{\sum_{\ell\in\mathbb{Z}_r} E_{m,k'}(h_{i'j',i,\ell}) : j'\in\mathbb{Z}_{w(i')}\right\}. \qquad (8.19)$$

By (8.19) and Lemma 8.2 we obtain the desired estimate.

8.2.2 Convergence order and computational complexity

To ensure that the numerical integration does not ruin the convergence order of the collocation method, we are required to choose different integers $m$ — the numbers of functional evaluations used in the numerical integration of the integrals


involved in the entries of the different blocks $\tilde{\mathbf{K}}_{i'i}$. We now denote these integers by $m_{i'i}$, $i',i \in \mathbb{Z}_{n+1}$, to indicate their dependence on the blocks. Specifically, we choose $m_{i'i}$ to satisfy the inequality

$$m_{i'i} \ge c_0\,(\varepsilon^n_{i'i})^{\lambda}\,\mu(i',i), \qquad i',i\in\mathbb{Z}_{n+1}, \qquad (8.20)$$

for some positive constant $c_0$, where

$$\lambda = \frac{2k+k_0-\sigma'}{2k'} \qquad\text{and}\qquad \mu(i',i) = \mu^{\frac{k(i'+i)+k_0(i-1)}{2k'}}.$$
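For concreteness, the snippet below computes the smallest admissible $m_{i'i}$ from (8.20), combining it with the truncation parameter $\varepsilon^n_{i'i}$ of Section 8.1. The numerical values ($c_0 = 14$, $k = k_0 = 2$, $k' = 2$, $\sigma' = 0.8$, $\mu = 2$, $\nu = 1.01$, $a = 0.25$, $b' = 0.8$) are taken from the experiment in Section 8.4, and the helper name `m_block` is ours.

```python
import math

def m_block(ip, i, n, c0=14.0, k=2, kp=2, k0=2, sigma_p=0.8,
            mu=2.0, nu=1.01, a=0.25, bprime=0.8):
    """Smallest integer m_{i'i} satisfying (8.20), with the truncation
    parameter eps^n_{i'i} from Section 8.1 (parameter values follow the
    numerical experiment of Section 8.4; d_i = mu^(-i+1))."""
    d = lambda lvl: mu ** (-lvl + 1)
    eps = max(nu * (d(i) + d(ip)), a * mu ** (bprime * (n - ip) - i))
    lam = (2 * k + k0 - sigma_p) / (2 * kp)           # here lam = 1.3
    mu_factor = mu ** ((k * (ip + i) + k0 * (i - 1)) / (2 * kp))
    return math.ceil(c0 * eps ** lam * mu_factor)

print(m_block(ip=1, i=1, n=4))
```

Note how $m_{i'i}$ grows with the levels through the factor $\mu(i',i)$, so finer blocks are integrated with more points.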

Suppose that the numerical values of the blocks $\hat{\mathbf{K}}_{i'i}$ are computed accordingly. We solve the linear system

$$(\mathbf{E}_n - \hat{\mathbf{K}}_n)\,\hat{\mathbf{u}}_n = \mathbf{f}_n \qquad (8.21)$$

for $\hat{\mathbf{u}}_n = [\hat{u}_{ij} : (i,j)\in U_n]$ and denote

$$\hat{u}_n = \sum_{(i,j)\in U_n} \hat{u}_{ij}\,w_{ij}.$$

Our next theorem shows that the integers $m_{i'i}$ so chosen allow the approximate solution $\hat{u}_n$ to preserve the convergence order that $u_n$ has.

Theorem 8.4 Suppose that the condition of Lemma 8.2 holds, that $u \in W^{k,\infty}(E)$, that the integrals in $\hat{\mathbf{K}}_n$ are computed by the integration formula described above using $m_{i'i}$ functional evaluations with the $m_{i'i}$ satisfying (8.20), and that $\hat{u}_n$ is obtained accordingly. Then there exist a positive constant $c$ and a positive integer $N$ such that for all $n > N$,

$$\|u - \hat{u}_n\|_\infty \le c\,(s(n))^{-k}(\log s(n))^{\tau}\,\|u\|_{k,\infty}, \qquad (8.22)$$

where $\tau = 1$ if $b' > \frac{k}{2k-\sigma'}$ and $\tau = 2$ if $b' = \frac{k}{2k-\sigma'}$.

Proof By the proof of Theorem 7.13, the estimate (8.22) holds if there exists a positive constant $c$ such that for all $i',i\in\mathbb{Z}_{n+1}$ and all $n\in\mathbb{N}$,

$$\|\mathbf{K}_{i'i} - \hat{\mathbf{K}}_{i'i}\|_\infty \le c\,(\varepsilon^n_{i'i})^{-(2k-\sigma')}\,\mu^{-k(i'+i)}. \qquad (8.23)$$

Since the truncation error $\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty$ obeys a bound of this form, by the triangle inequality it suffices to prove that (8.23) holds with $\mathbf{K}_{i'i}$ replaced by $\tilde{\mathbf{K}}_{i'i}$. To this end, we recall that the definition of $\varepsilon^n_{i'i}$ ensures that

$$\mu^{-i'+1} + \mu^{-i+1} \le \varepsilon^n_{i'i}.$$


It follows from Lemma 8.3 and the choice (8.20) of $m_{i'i}$ that

$$\|\tilde{\mathbf{K}}_{i'i} - \hat{\mathbf{K}}_{i'i}\|_\infty \le 2^{k_0}\,c_0^{-2k'}\,c\,(\varepsilon^n_{i'i})^{-(2k-\sigma')}\,\mu^{-k(i'+i)},$$

proving the claim.

Now we turn to analyzing the computational complexity of generating the matrix $\hat{\mathbf{K}}_n$. For $i',i\in\mathbb{Z}_{n+1}$ we denote by $M_{i'i}$ the number of functional evaluations used for computing the entries of $\hat{\mathbf{K}}_{i'i}$. Thus

$$M^U_n = \sum_{i\in\mathbb{Z}_{n+1}}\ \sum_{i'\in\mathbb{Z}_{i+1}} M_{i'i} \qquad (8.24)$$

and

$$M^L_n = \sum_{i'\in\mathbb{Z}_{n+1}}\ \sum_{i\in\mathbb{Z}_{i'}} M_{i'i} \qquad (8.25)$$

are the numbers of functional evaluations used for computing the upper and lower triangular entries of $\hat{\mathbf{K}}_n$, respectively, and $M_n = M^U_n + M^L_n$ is the total number of functional evaluations used for computing all entries of $\hat{\mathbf{K}}_n$. In the next theorem we estimate $M^U_n$ and $M^L_n$.

Theorem 8.5 Suppose that the matrix $\hat{\mathbf{K}}_n$ is generated using the integration formula described above. Let $m_{i'i}$, $i',i\in\mathbb{Z}_{n+1}$, be the smallest integers satisfying (8.20). Choose

$$k' \ge k \qquad\text{and}\qquad \frac{2k'}{2k'+1}(1-\sigma) \le \sigma' < 1-\sigma. \qquad (8.26)$$

Then there exists a positive constant $c$ such that for all $n\in\mathbb{N}$,

$$M^U_n \le c\,(s(n))^{1+\lambda'} \qquad (8.27)$$

and

$$M^L_n \le c\,(s(n))^{1+\lambda''}, \qquad (8.28)$$

where $\lambda' = \frac{\sigma'}{2k'} - \frac{1-\sigma}{2k'+1}$ and $\lambda'' = \frac{k}{2k'}$.

Proof For $i',i\in\mathbb{Z}_{n+1}$, let $M_{i'j',i}$ denote the number of functional evaluations used in computing the $j'$th row of the block $\hat{\mathbf{K}}_{i'i}$. Recalling that the number of rows in this block is $w(i')$, we have that

$$M_{i'i} = w(i')\,M_{i'j',i}. \qquad (8.29)$$


To estimate $M_{i'j',i}$, we let $M(h)$ be the number of functional evaluations used in computing $I(S(h))$. Recalling the definition of the function $h_{i'j',i,\ell}$, we have that

$$M_{i'j',i} = \sum_{j\in Z_{i'j',i}} M(h_{ij}) = \sum_{\ell\in\mathbb{Z}_r}\ \sum_{j\in Z^\ell_{i'j',i}} M(h_{ij}) = \sum_{\ell\in\mathbb{Z}_r} M(h_{i'j',i,\ell}). \qquad (8.30)$$

Note that we actually integrate $S(h_{i'j',i,\ell})$ to obtain an approximate value of the integral $I(h_{i'j',i,\ell})$. Since $S(h_{i'j',i,\ell})$ is a piecewise polynomial of order $k'$, the number of functional evaluations used in integrating it between two consecutive knots is exactly $k'$. Setting

$$N_1 = \mathrm{card}(\pi(h_{i'j',i,\ell})) \qquad\text{and}\qquad N_2 = \mathrm{card}(\pi^r_s \cup \pi^l_s),$$

we find that

$$M(h_{i'j',i,\ell}) \le k'(N_1 + N_2). \qquad (8.31)$$

For $j = 1,2$ we let

$$M^U_{n,j} = \sum_{i\in\mathbb{Z}_{n+1}}\ \sum_{i'\in\mathbb{Z}_{i+1}} w(i')\sum_{\ell\in\mathbb{Z}_r} k'N_j \qquad (8.32)$$

and

$$M^L_{n,j} = \sum_{i'\in\mathbb{Z}_{n+1}}\ \sum_{i\in\mathbb{Z}_{i'}} w(i')\sum_{\ell\in\mathbb{Z}_r} k'N_j. \qquad (8.33)$$

From (8.24), (8.25) and (8.29)–(8.31) we conclude that $M^U_n \le M^U_{n,1} + M^U_{n,2}$ and $M^L_n \le M^L_{n,1} + M^L_{n,2}$.

We now estimate $M^U_{n,1}$. From the construction of the basis functions $w_{ij}$ it is clear that they are piecewise polynomials with sets of knots whose cardinality is uniformly bounded. As a result, according to the definition of $w_{i'j',i,\ell}$, there exists a positive constant $c'$ such that

$$N_1 \le c'\,\mathrm{card}(Z_{i'j',i}). \qquad (8.34)$$

By (8.34) and Theorem 7.15, we observe that there exists a positive constant $c$ such that for all $n\in\mathbb{N}$,

$$M^U_{n,1} \le c'k'\sum_{i\in\mathbb{Z}_{n+1}}\ \sum_{i'\in\mathbb{Z}_{i+1}} \mathcal{N}(\tilde{\mathbf{K}}_{i'i}) \le c\,k'\,s(n)\log^{\tau} s(n), \qquad (8.35)$$

where $\tau = 2$ if $b' = 1$ and $\tau = 1$ if $\frac{k-\sigma'}{2k-\sigma'} < b' < 1$. Likewise we have the estimate for $M^L_{n,1}$:

$$M^L_{n,1} \le c\,k'\,s(n)\log^{\tau} s(n).$$


Now we turn to estimating $M^U_{n,2}$ and $M^L_{n,2}$. By (8.6) and the truncation strategy, a sufficient condition for $t^r_\iota \notin \mathrm{supp}(w_{i'j',i,\ell})$ (or $t^l_\iota \notin \mathrm{supp}(w_{i'j',i,\ell})$) is

$$\left(\frac{\iota}{m_{i'i}}\right)^{\frac{2k'+1}{1-\sigma}} \ge d_i + d_{i'} + \varepsilon^n_{i'i}. \qquad (8.36)$$

The smallest $\iota$ that satisfies (8.36) is an upper bound for the number of elements of $\pi^r_s$ (or of $\pi^l_s$) which are located in $\mathrm{supp}(h_{i'j',i,\ell})$. Therefore, by the choice of $m_{i'i}$, there exists a positive constant $c$ such that for all $i',i\in\mathbb{Z}_{n+1}$ and all $n\in\mathbb{N}$,

$$N_2 \le 2c_0\left(d_i + d_{i'} + \varepsilon^n_{i'i}\right)^{\frac{1-\sigma}{2k'+1}}(\varepsilon^n_{i'i})^{\lambda}\,\mu(i',i) \le c\,(\varepsilon^n_{i'i})^{\lambda_0}\,\mu(i',i),$$

where $\lambda_0 = \frac{2k+k_0}{2k'} - \lambda'$. When $i' \le i$: if $\varepsilon^n_{i'i} = \nu(\mu^{-i'+1}+\mu^{-i+1})$, then

$$N_2 \le c\left(\nu(\mu^{-i'+1}+\mu^{-i+1})\right)^{\frac{2k+k_0}{2k'}-\lambda'}\mu(i',i) \le c\,\mu^{(\frac{k}{2k'}-\lambda_0)i' + \frac{k+k_0}{2k'}i},$$

and if $\varepsilon^n_{i'i} = a\mu^{b'(n-i')-i}$, then

$$N_2 \le c\,(a\mu^{b'(n-i')-i})^{\lambda_0}\,\mu(i',i) = c\,\mu^{b'\lambda_0 n}\,\mu^{(\frac{k}{2k'}-b'\lambda_0)i' - (\frac{k}{2k'}-\lambda')i}.$$

When $i' > i$: if $\varepsilon^n_{i'i} = \nu(\mu^{-i'+1}+\mu^{-i+1})$, then

$$N_2 \le c\left(\nu(\mu^{-i'+1}+\mu^{-i+1})\right)^{\lambda_0}\mu(i',i) \le c\,\mu^{\frac{k}{2k'}i' - (\frac{k}{2k'}-\lambda')i},$$

and if $\varepsilon^n_{i'i} = a\mu^{b'(n-i')-i}$, then

$$N_2 \le c\,(a\mu^{b'(n-i')-i})^{\lambda_0}\,\mu(i',i) = c\,\mu^{b'\lambda_0 n}\,\mu^{(\frac{k}{2k'}-b'\lambda_0)i' - (\frac{k}{2k'}-\lambda')i}.$$

Note that for $i-1\in\mathbb{Z}_n$, $w(i) = r\mu^{i-1}$. By the definition of $M^U_{n,2}$ we observe that if $\varepsilon^n_{i'i} = \nu(\mu^{-i'+1}+\mu^{-i+1})$, then

$$M^U_{n,2} \le c\sum_{i\in\mathbb{Z}_{n+1}}\ \sum_{i'\in\mathbb{Z}_{i+1}} \mu^{(1-\frac{k+k_0}{2k'}+\lambda')i' + \frac{k+k_0}{2k'}i}.$$

Hence

$$M^U_{n,2} \le \begin{cases} c\,\mu^{\frac{k+k_0}{2k'}n}, & \lambda' < \frac{k+k_0}{2k'} - 1,\\ c\,\mu^{(1+\lambda')n}, & \lambda' \ge \frac{k+k_0}{2k'} - 1. \end{cases} \qquad (8.37)$$

Similarly, if $\varepsilon^n_{i'i} = a\mu^{b'(n-i')-i}$, we introduce a new parameter $\bar\lambda = (1+\frac{k}{2k'})/\lambda_0$ and observe that

$$M^U_{n,2} \le \begin{cases} c\,\mu^{b'\lambda_0 n}, & b' > \bar\lambda,\\ c\,\mu^{(1+\lambda')n}, & b' \le \bar\lambda. \end{cases} \qquad (8.38)$$


For an estimate of $M^L_{n,2}$: if $\varepsilon^n_{i'i} = \nu(\mu^{-i'+1}+\mu^{-i+1})$, then, since $\lambda' < \frac{k}{2k'}$, we have

$$M^L_{n,2} \le c\,\mu^{(1+\frac{k}{2k'})n}, \qquad (8.39)$$

and if $\varepsilon^n_{i'i} = a\mu^{b'(n-i')-i}$, we see that

$$M^L_{n,2} \le \begin{cases} c\,\mu^{b'\lambda_0 n}, & b' > \bar\lambda,\\ c\,\mu^{(1+\frac{k}{2k'})n}, & b' \le \bar\lambda. \end{cases} \qquad (8.40)$$

Now, using the assumption (8.26), we have that $k_0 = k$ and $0 < \lambda' < \frac{\sigma'}{2k'}$, and thus $\bar\lambda > 1$. Noting that $b' \le 1$, we conclude the estimates (8.27) and (8.28) from (8.32), (8.33) and (8.37)–(8.40). The proof is complete.

8.3 Quadrature rules with exponential order of accuracy

In this section we study a class of quadrature rules which have an exponential order of accuracy.

8.3.1 Quadrature rule II

In this section we present another integration method, for the case when the kernel is a $C^\infty$ function off the diagonal. This stronger assumption on the kernel allows us to use polynomials of different orders on different subintervals so as to achieve an exponential order of convergence for the integration method; this idea was used in [242] in a different context. As a result, the computational complexity of the numerical integration is improved considerably. Specifically, we assume that for any $s \in E$, $K(s,\cdot) \in C^\infty(E\setminus\{s\})$ and that there exists a positive constant $\theta$ such that

$$|D^\beta_t K(s,t)| \le \theta\,|s-t|^{-(\sigma+\beta)} \qquad (8.41)$$

for any $\beta\in\mathbb{N}_0$ and $s,t\in E$, $s\neq t$. Instead of using the knots described by (8.6), for any $\gamma\in(0,1)$ we set

$$t_0 = 0, \qquad t_\iota = \gamma^{m-\iota}, \quad \iota = 1,2,\dots,m.$$

As before, we define the sets $\pi^r_t$, $\pi^l_t$ of knots, the subintervals $Q_\alpha$ and the partition $\Pi(h)$ for a function $h(s,\cdot)\in C^\infty(E\setminus(\{s\}\cup\pi(h)))$. In the present case we define the piecewise polynomial $S(h)$ by the following rule. Note that $\tau^\alpha_j = q_\alpha + (q_{\alpha+1}-q_\alpha)\tau_j$, $j\in\mathbb{Z}_{k_\iota}$, are the $k_\iota$ zeros of the Legendre polynomial of degree $k_\iota$ on $Q_\alpha$. If $Q_\alpha\subset[t^l_1,t^r_1)$, then $S(h)=0$ on $Q_\alpha$; and if $Q_\alpha\subset[t^r_\iota,t^r_{\iota+1})$ or $Q_\alpha\subset[t^l_{\iota+1},t^l_\iota)$, then on $Q_\alpha$, $S(h)$ is the Lagrange interpolating polynomial of order $k_\iota$


to $h$ at the $k_\iota$ zeros $\tau^\alpha_j$. Note that $k_\iota$ varies with $\iota$, and $S(h)$ depends on the vector $\mathbf{k} = [k_\iota : \iota = 1,2,\dots,m]$. We use $I(S(h))$ to approximate $I(h)$. For a constant $a\in\mathbb{R}$, $\lceil a\rceil$ denotes the smallest integer not less than $a$.
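For intuition, the sketch below applies this geometric-mesh, variable-order rule to the model integrand $h(t) = t^{-1/2}$: knots $t_\iota = \gamma^{m-\iota}$, the subinterval $[0,t_1)$ next to the singularity is dropped, and a Gauss rule of order $k_\iota = \lceil\iota\varepsilon\rceil$ is used on $[t_\iota, t_{\iota+1}]$. The model integrand and the parameter values $\gamma = 0.5$, $\varepsilon = 0.5$ are illustrative assumptions.

```python
import math
import numpy as np

def quad_rule_II(h, m, gamma=0.5, eps=0.5):
    """Geometric knots t_i = gamma^(m-i) with Gauss order k_i = ceil(i*eps)
    on [t_i, t_{i+1}]; the subinterval [0, t_1) next to the singularity
    is skipped, mirroring S(h) = 0 there."""
    t = [0.0] + [gamma ** (m - i) for i in range(1, m + 1)]
    total = 0.0
    for i in range(1, m):                  # integrate [t_i, t_{i+1}]
        k = max(1, math.ceil(i * eps))     # variable order k_iota
        x, w = np.polynomial.legendre.leggauss(k)
        a, b = t[i], t[i + 1]
        nodes = 0.5 * (b - a) * x + 0.5 * (a + b)
        total += 0.5 * (b - a) * np.dot(w, h(nodes))
    return total

approx = quad_rule_II(lambda u: u ** -0.5, m=40)
print(abs(approx - 2.0))
```

The error decays like $\gamma^{(1-\sigma)m}$, i.e. exponentially in $m$, at the cost of higher polynomial orders on the larger subintervals.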

Lemma 8.6 Let $\varepsilon > 0$ and $\gamma\in(0,1)$, and choose

$$k_\iota = \lceil \iota\varepsilon\rceil, \qquad \iota = 1,2,\dots,m. \qquad (8.42)$$

Then there exists a positive constant $c$ such that for all integers $m$ and for $\iota = 1,2,\dots,m$,

$$\left[(2k_\iota-k)!\,\gamma^{(1-\sigma)\iota+2k_\iota+1}\right]^{-1} \le c.$$

Proof Note that $n! \sim (\frac{n}{e})^{n+\frac12}$ (up to a bounded factor) as $n\to+\infty$. It suffices to prove that there exists a positive integer $n_0$ such that for $\iota \ge n_0$,

$$\left[\left(\frac{2k_\iota-k}{e}\right)^{2k_\iota-k+\frac12}\gamma^{(1-\sigma)\iota+2k_\iota+1}\right]^{-1} \le 1.$$

Let $\zeta = \max\{\frac{1-\sigma}{\varepsilon},\,4\}$. Since $\frac{2k_\iota-k}{e}\to+\infty$ as $\iota\to+\infty$, there exists a positive integer $n_0$ such that for all $\iota\ge n_0$,

$$\frac{2k_\iota-k}{e} \ge \gamma^{-\zeta} \qquad\text{and}\qquad \tfrac12 k_\iota > k.$$

Thus, for any $\gamma\in(0,1)$,

$$\left(\frac{2k_\iota-k}{e}\right)^{2k_\iota-k+\frac12} \ge \gamma^{-\zeta(2k_\iota-k+\frac12)} \ge \gamma^{-\frac{1-\sigma}{\varepsilon}k_\iota - 4(k_\iota-k+\frac12)} \ge \gamma^{-(1-\sigma)\iota-2k_\iota-1}.$$

This completes the proof.

To analyze the convergence of this integration method we let

$$E_{m,\mathbf{k}}(h) = \sum_{Q_\alpha\in\Pi(h)}\left|\int_{Q_\alpha}[h(t)-S(h)(t)]\,dt\right|.$$

Lemma 8.7 Suppose that the kernel $K$ satisfies (8.41). Then there exists a positive constant $c$ such that for any $i\in\mathbb{Z}_{n+1}$, $\ell\in\mathbb{Z}_r$, $(i',j')\in U_n$ and $s\in E$,

$$E_{m,\mathbf{k}}(h_{i'j',i,\ell}) \le c\left[\left(\mu^{-i+1}+\mu^{-i'+1}+\varepsilon^n_{i'i}\right)\mu^{i-1}\right]^{k}\gamma^{(1-\sigma)m}.$$

Proof Define $\Lambda^r_\iota$ and $\Lambda^l_\iota$ as in the proof of Lemma 8.1, with $E^r_{k',\iota}(h)$ and $E^l_{k',\iota}(h)$ replaced by $E^r_\iota(h)$ and $E^l_\iota(h)$, respectively. Now we have that

$$E_{m,\mathbf{k}}(h) = \sum_{\Lambda^l_\iota\neq\emptyset} E^l_\iota(h) + \sum_{\Lambda^r_\iota\neq\emptyset} E^r_\iota(h).$$


We first estimate $E^r_\iota(h_{i'j',i,\ell})$. According to the definition of $S(h)$ we have that

$$E^r_0(h_{i'j',i,\ell}) \le \int_{t^r_0}^{t^r_1}|h_{i'j',i,\ell}(s,t)|\,dt \le c\theta\int_{t^r_0}^{t^r_1}|s-t|^{-\sigma}\,dt \le \frac{c\theta}{(1-\sigma)\gamma^{1-\sigma}}\,\gamma^{(1-\sigma)m}.$$

For $\iota\ge1$, by the error estimate of the Gaussian quadrature there exist $\xi_\alpha\in Q_\alpha$ such that

$$E^r_\iota(h_{i'j',i,\ell}) = \sum_{\alpha\in\Lambda^r_\iota}\frac{|D^{2k_\iota}_t h_{i'j',i,\ell}(s,\xi_\alpha)|}{(2k_\iota)!}\left|\int_{Q_\alpha}(t-\tau^\alpha_0)^2\cdots(t-\tau^\alpha_{k_\iota-1})^2\,dt\right|.$$

Note that $w_{ij}$ is a piecewise polynomial of order $k$. Using assumption (8.41), we have for $t\in\mathrm{supp}(w_{ij})\setminus(\{s\}\cup\pi(h_{ij}))$ that

$$|D^{2k_\iota}_t h_{i'j',i,\ell}(s,t)| = \left|\sum_{\beta\in\mathbb{Z}_k}\binom{2k_\iota}{\beta}D^{2k_\iota-\beta}_t K(s,t)\,w^{(\beta)}_{ij}(t)\right| \le \theta\sum_{\beta\in\mathbb{Z}_k}\binom{2k_\iota}{\beta}|s-t|^{-(\sigma+2k_\iota-\beta)}\mu^{\beta(i-1)}\left|w^{(\beta)}_{1l}(\phi_e^{-1}(t))\right|.$$

Moreover, we have that $t_\iota = \gamma^{m-\iota} \le |\xi_\alpha-s| \le d_i + d_{i'} + \varepsilon^n_{i'i}$. Thus we conclude that

$$E^r_\iota(h_{i'j',i,\ell}) \le \theta\,\frac{(2k_\iota)(2k_\iota-1)\cdots(2k_\iota-k+1)}{(2k_\iota)!}\,\frac{(t_{\iota+1}-t_\iota)^{2k_\iota+1}}{t_\iota^{\sigma+2k_\iota}}\sum_{\beta\in\mathbb{Z}_k}\left[(d_i+d_{i'}+\varepsilon^n_{i'i})\mu^{i-1}\right]^{\beta}$$

$$\le \frac{k\theta\,(1-\gamma)^{2k_\iota+1}}{(2k_\iota-k)!\,\gamma^{(1-\sigma)\iota+2k_\iota+1}}\,\gamma^{(1-\sigma)m}\left[(d_i+d_{i'}+\varepsilon^n_{i'i})\mu^{i-1}\right]^{k}.$$

It follows from Lemma 8.6 that there exists a positive constant $c$ such that

$$E^r_\iota(h_{i'j',i,\ell}) \le c\,(1-\gamma)^{2\varepsilon\iota}\,\gamma^{(1-\sigma)m}\left[(d_i+d_{i'}+\varepsilon^n_{i'i})\mu^{i-1}\right]^{k}.$$

Therefore,

$$\sum_{\Lambda^r_\iota\neq\emptyset} E^r_\iota(h_{i'j',i,\ell}) \le c\,\gamma^{(1-\sigma)m}\left[(d_i+d_{i'}+\varepsilon^n_{i'i})\mu^{i-1}\right]^{k}.$$

Likewise, we obtain the same estimate for $\sum_{\Lambda^l_\iota\neq\emptyset} E^l_\iota(h_{i'j',i,\ell})$, which completes the proof of this lemma.


8.3.2 Convergence order and computational complexity

Using Lemma 8.7 and arguments similar to those used in the proofs of Lemma 8.3 and Theorem 8.4, we have the following results.

Lemma 8.8 Let $m$ be a positive integer. Then there exists a positive constant $c > 0$ such that for all $i',i\in\mathbb{Z}_{n+1}$ and $n\in\mathbb{N}$,

$$\|\tilde{\mathbf{K}}_{i'i}-\hat{\mathbf{K}}_{i'i}\|_\infty \le c\left[\left(\mu^{-i+1}+\mu^{-i'+1}+\varepsilon^n_{i'i}\right)\mu^{i-1}\right]^{k}\gamma^{(1-\sigma)m}.$$

Theorem 8.9 Let $u\in W^{k,\infty}(E)$. For $i',i\in\mathbb{Z}_{n+1}$ choose

$$m_{i'i} \ge \frac{-k\log\mu}{(1-\sigma)\log\gamma}\,(2i+i'). \qquad (8.43)$$

Suppose that the kernel $K$ satisfies (8.41), that the integrals $K_{i'j',ij}$ in the matrix $\hat{\mathbf{K}}_n$ are computed by the integration method described above with $m = m_{i'i}$ and $k_\iota$ determined by (8.42), and that $\hat{u}_n$ is obtained accordingly. Then there exist a positive constant $c$ and a positive integer $N$ such that for all $n>N$,

$$\|u-\hat{u}_n\|_\infty \le c\,(s(n))^{-k}(\log s(n))^{\tau}\,\|u\|_{k,\infty}, \qquad (8.44)$$

where $\tau=1$ if $b' > \frac{k}{2k-\sigma'}$ and $\tau=2$ if $b' = \frac{k}{2k-\sigma'}$.

Proof As in the proof of Theorem 8.4, it suffices to prove that

$$\left[\left(\mu^{-i+1}+\mu^{-i'+1}+\varepsilon^n_{i'i}\right)\mu^{i-1}\right]^{k}\gamma^{(1-\sigma)m_{i'i}} \le c\,(\varepsilon^n_{i'i})^{-(2k-\sigma')}\,\mu^{-k(i'+i)}.$$

Since $\mu^{-i'+1}+\mu^{-i+1} \le \varepsilon^n_{i'i}$, we only need to show that

$$\gamma^{(1-\sigma)m_{i'i}} \le c\,(\varepsilon^n_{i'i})^{-(3k-\sigma')}\,\mu^{-(2ki+ki')}.$$

This holds with the choice (8.43) of $m_{i'i}$, and thus the conclusion of the theorem follows.

In the next theorem we present an estimate of the number of functional evaluations used for computing the entries of $\hat{\mathbf{K}}_n$.

Theorem 8.10 Suppose that the $m_{i'i}$, $i',i\in\mathbb{Z}_{n+1}$, are chosen to be the smallest integers satisfying condition (8.43). Then there exists a positive constant $c$ such that for all $n\in\mathbb{N}$,

$$M_n \le c\,s(n)\,(\log s(n))^3.$$


Proof As in the proof of Theorem 8.5, we have that

$$M_n = \sum_{i\in\mathbb{Z}_{n+1}}\ \sum_{i'\in\mathbb{Z}_{n+1}} M_{i'i} = \sum_{i\in\mathbb{Z}_{n+1}}\ \sum_{i'\in\mathbb{Z}_{n+1}} w(i')\,M_{i'j',i},$$

where

$$M_{i'j',i} = \sum_{\ell\in\mathbb{Z}_r} M(h_{i'j',i,\ell}).$$

In the present case we obtain that

$$\mathrm{card}\{Q_\alpha : Q_\alpha \subset [\gamma^{m_{i'i}-\iota},\,\gamma^{m_{i'i}-\iota-1})\} \le \frac{\gamma^{m_{i'i}-\iota-1}-\gamma^{m_{i'i}-\iota}}{\mu^{-i+1}} + 2.$$

Thus there is a positive constant $c$ such that

$$M_{i'j',i} \le c\sum_{\iota=1}^{l} k_\iota\left(\frac{\gamma^{m_{i'i}-\iota-1}-\gamma^{m_{i'i}-\iota}}{\mu^{-i+1}} + 2\right),$$

where $l$ denotes the number of geometric subintervals meeting the support of $h_{i'j',i,\ell}$. Since $k_\iota < \varepsilon\iota + 1$,

$$M_{i'j',i} \le c\left(\frac{1}{\gamma}-1\right)\mu^{i-1}\left(\varepsilon\sum_{\iota=1}^{l}\iota\,\gamma^{m_{i'i}-\iota} + \sum_{\iota=1}^{l}\gamma^{m_{i'i}-\iota}\right) + 2c\sum_{\iota=1}^{l}(\varepsilon\iota+1).$$

According to the truncation strategy, we have that $t_\iota = \gamma^{m_{i'i}-\iota} \le \varepsilon^n_{i'i} + d_{i'} + d_i$. By the choice of $\varepsilon^n_{i'i}$ and $m_{i'i}$ we conclude that

$$M_{i'j',i} \le c\left[n\,\mu^{i}\left(\mu^{b'(n-i')-i} + \mu^{-i_0+1} + \mu^{-i'+1}\right) + n^2\right],$$

where $i_0 = \min\{i',i\}$. It follows from this inequality and $w(i') = k(\mu-1)\mu^{i'-1}$ that

$$M_n \le c\sum_{i\in\mathbb{Z}_{n+1}}\ \sum_{i'\in\mathbb{Z}_{n+1}}\left[n\,\mu^{b'n}\mu^{(1-b')i'} + \mu^{i} + \mu^{i'} + n^2\mu^{i'}\right].$$

A simple computation yields the estimate of the theorem.

8.4 Numerical experiments

In this section we use numerical experiments to verify our theoretical estimates. We consider equation (8.1) with

$$K(s,t) = \log|\cos(\pi s) - \cos(\pi t)|, \qquad t,s \in E,$$

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637010Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 163137 subject to the Cambridge Core terms of

84 Numerical experiments 319

and choose

f (s) = sin(πs)+ 1

π

[2minus (1minus cos(πs)) log (1minus cos(πs))

minus (1+ cos(πs)) log (1+ cos(πs))]

so that the exact solution is u(s) = sin(πs) for comparison purposes In theexperiment we apply linear bases to discretize the equation
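The consistency of this pair $(f, u)$ can be checked numerically. The sketch below assumes that equation (8.1) is the second-kind equation $u(s) - \int_0^1 K(s,t)u(t)\,dt = f(s)$ (an assumption, since (8.1) appears earlier in the chapter, but one that the stated $f$ and $u$ satisfy); the quadrature is Gauss–Legendre, split at the integrable logarithmic singularity $t = s$:

```python
import numpy as np

def K(s, t):
    return np.log(np.abs(np.cos(np.pi * s) - np.cos(np.pi * t)))

def u(s):                       # exact solution
    return np.sin(np.pi * s)

def f(s):                       # right-hand side as stated in the text
    c = np.cos(np.pi * s)
    return np.sin(np.pi * s) + (2 - (1 - c) * np.log(1 - c)
                                  - (1 + c) * np.log(1 + c)) / np.pi

def Ku(s, n=400):
    # integrate K(s,.)u over [0,1], splitting at the log singularity t = s;
    # Gauss-Legendre nodes are interior, so K is never evaluated at t = s
    x, w = np.polynomial.legendre.leggauss(n)
    total = 0.0
    for a, b in ((0.0, s), (s, 1.0)):
        t = 0.5 * (b - a) * x + 0.5 * (a + b)
        total += 0.5 * (b - a) * np.sum(w * K(s, t) * u(t))
    return total

for s in (0.17, 0.4, 0.73):
    assert abs(u(s) - Ku(s) - f(s)) < 1e-3   # residual of u - Ku = f
```

The closed form behind this check is $\int_0^1 K(s,t)\sin(\pi t)\,dt = \frac{1}{\pi}[(1-\cos\pi s)\log(1-\cos\pi s) + (1+\cos\pi s)\log(1+\cos\pi s) - 2]$, obtained by the substitution $x = \cos(\pi t)$.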

To verify the computational complexity of the quadrature rules, we report the time for establishing the matrix $\mathbf{K}_n$. In the first experiment we let $k' = 2$; that is, we use piecewise linear polynomials to approximate the integrands. The values of the other related parameters are identified as follows. Let $k' = 2$, $\sigma' = 0.8$, $a = 0.25$, $b' = 0.8$, $\nu = 1.01$ and
$$m_{i'i} \ge 1.4\,(\varepsilon^n_{i'i})^{1/3}\, 2^{(i'+2i-1)/2}.$$

We use the notations $T^U_n$ and $T^L_n$ for the time to evaluate the upper and lower triangles of $\mathbf{K}_n$. According to our theoretical estimates, there should hold
$$r^U_n = \log_2\left(\frac{T^U_{n+1}}{T^U_n}\right) = 1 + \frac{\sigma}{5} \quad\text{and}\quad r^L_n = \log_2\left(\frac{T^L_{n+1}}{T^L_n}\right) = 1.5.$$

The computed results are listed in Table 8.1. We see that most of the values of $r^L_n$ are around 1.5, and those of $r^U_n$ tend to be lower than 1.2 as $n$ increases. We also include the errors of the numerical solutions relative to the true solution, as well as the convergence order, which is shown to maintain the optimal convergence rate.

In order to observe the influence of the values of $k$ and $k'$ on the order of time complexity, we now choose $k = 2$, $k' = 4$; that is, we continue to use the linear basis to discretize the integral equation, while piecewise cubic polynomials are

Table 8.1  Numerical results for quadrature rule I: linear quadrature case

 n    s(n)    T_n^U    r_n^U    T_n^L    r_n^L    ||u - u_n||_inf    Conv. rate
 4      32     0.02              0.03             4.162345e-3
 5      64     0.05    1.32      0.07    1.22     1.302214e-3        1.6764
 6     128     0.14    1.48      0.23    1.72     3.324927e-4        1.9696
 7     256     0.33    1.24      0.69    1.59     6.300048e-5        2.3999
 8     512     0.82    1.31      1.99    1.53     1.481033e-5        2.0888
 9    1024     1.99    1.28      5.75    1.53     3.409346e-6        2.1190
10    2048     4.65    1.22     16.33    1.51     8.696977e-7        1.9709
11    4096    10.56    1.18     47.01    1.52     2.161024e-7        2.0088
12    8192    23.69    1.16    132.11    1.49     5.410341e-8        1.9979


Table 8.2  Numerical results for quadrature rule I: cubic quadrature case

 n    s(n)    T_n^U    r_n^U    T_n^L    r_n^L    ||u - u_n||_inf    Conv. rate
 4      32     0.02              0.02             3.911817e-3
 5      64     0.05    1.32      0.06    1.58     1.112090e-3        1.8146
 6     128     0.11    1.14      0.17    1.50     2.887823e-4        1.9452
 7     256     0.28    1.35      0.46    1.44     6.471414e-5        2.1578
 8     512     0.63    1.17      1.21    1.39     1.454796e-5        2.1533
 9    1024     1.44    1.19      3.08    1.35     3.578654e-6        2.0233
10    2048     3.29    1.19      7.65    1.31     8.812394e-7        2.0218
11    4096     7.39    1.17     19.01    1.31     2.195702e-7        2.0049
12    8192    16.62    1.17     46.79    1.30     5.795965e-8        1.9216

Table 8.3  Numerical results for quadrature rule II

 n    s(n)      CT     r_n    2(n/(n-1))^3    ||u - u_n||_inf    Conv. rate
 3      16     0.07                            1.520796e-2
 4      32     0.19    2.71       4.74         3.816222e-3        1.994610
 5      64     0.55    2.89       3.91         9.462863e-4        2.011796
 6     128     1.81    3.29       3.46         2.475222e-4        1.934719
 7     256     4.76    2.63       3.18         5.903598e-5        2.067892
 8     512    12.31    2.59       2.99         1.479396e-5        1.996586
 9    1024    31.57    2.56       2.85         3.686005e-6        2.004878
10    2048    84.45    2.68       2.74         8.944827e-7        2.042933

applied in the quadrature rule. The values of the other parameters remain the same, except that
$$m_{i'i} \ge 2.4\,(\varepsilon^n_{i'i})^{0.64}\, 2^{(i'+2i-1)/4}.$$

The corresponding numerical results are listed in Table 8.2.

In the last experiment we implement the quadrature rule described in Section 8.3.1, using the same parameters for truncation. The time for evaluation is listed in Table 8.3, in which "CT" stands for the time for the computation of the matrix $\mathbf{K}_n$, and $r_n$ is defined as the ratio of two successive times, that is,
$$r_n = \frac{CT_n}{CT_{n-1}}.$$
We also list in this table the theoretical value $2\left(\frac{n}{n-1}\right)^3$ for comparison, as well as the $L^\infty$-norms of the numerical errors $\|u - u_n\|_\infty$ and the convergence order.
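The theoretical column of Table 8.3 can be reproduced directly: if the cost model is $CT_n \approx c\,n^3 2^n$ (a cost of order $s(n)(\log s(n))^3$ with $s(n) \propto 2^n$; the constant $c$ is immaterial), then successive times satisfy $CT_n/CT_{n-1} = 2\left(\frac{n}{n-1}\right)^3$:

```python
# ratio of successive times predicted by CT_n ≈ c * n^3 * 2^n
predicted = [round(2 * (n / (n - 1)) ** 3, 2) for n in range(4, 11)]
print(predicted)   # matches the theoretical column of Table 8.3 for n = 4..10
```

The observed ratios $r_n$ in the table approach these values from below as the asymptotic regime sets in.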


8.5 Bibliographical remarks

The main developments along the direction of multiscale methods for solving Fredholm integral equations can be found in [28, 64, 68, 69, 71, 88–90, 95, 202, 226, 241, 243, 260, 261]. Appropriate bases for multiscale Galerkin, Petrov–Galerkin and collocation methods were constructed in [200, 201] and [65, 69]. To develop error control techniques for the numerical integrations that generate the coefficient matrix in the one-dimensional case, we propose using graded quadrature methods. When the integrand has only a polynomial order of smoothness except at the singular points, we use a quadrature method suggested in [164] having a polynomial order of accuracy. When the integrand has an infinite order of smoothness except at the singular points, we use the idea of [242] to develop a quadrature having an exponential order of accuracy. Readers are referred to [72] for more information on error control strategies for numerical integrations in one dimension, and to [75] for those in higher dimensions.


9

Fast solvers for discrete systems

The goal of this chapter is to develop efficient solvers for the discrete linear systems resulting from the discretization of the Fredholm integral equation of the second kind by the multiscale methods discussed in previous chapters. We introduce the multilevel augmentation method (MAM) and the multilevel iteration method (MIM) for solving operator equations based on multilevel decompositions of the approximate subspaces. Reflecting the direct sum decompositions of the subspaces, the coefficient matrix of the linear system has a special structure. Specifically, the matrix corresponding to a finer level of approximate spaces is obtained by augmenting the matrix corresponding to a coarser level with submatrices that correspond to the difference spaces between the spaces of the finer level and the coarser level. The main idea is to split the matrix into a sum of two matrices, one reflecting its lower frequency and the other reflecting its higher frequency. We are required to choose the splitting in such a way that the inverse of the lower-frequency matrix either has an explicit form or can easily be computed at a lower computational cost.

In this chapter we introduce the MAM and MIM and provide a completeanalysis of their convergence and stability

9.1 Multilevel augmentation methods

In this section we describe a general setting of the MAM for solving operator equations. This method is based on a standard approximation method at a coarse level, and updates the resulting approximate solutions by adding details corresponding to higher levels in a direct sum decomposition. We prove that this method provides the same order of convergence as the original approximation method.


9.1.1 Multilevel augmentation methods for solving operator equations

We begin with a description of the general setup for the operator equations under consideration. Let $X$ and $Y$ be two Banach spaces and $A: X \to Y$ be a bounded linear operator. For a function $f \in Y$, we consider the operator equation
$$Au = f, \qquad (9.1)$$
where $u \in X$ is the solution to be determined. We assume that equation (9.1) has a unique solution in $X$. To solve the equation, we choose two sequences of finite-dimensional subspaces $\{X_n : n \in \mathbb{N}_0 = \{0, 1, \ldots\}\}$ and $\{Y_n : n \in \mathbb{N}_0\}$ of $X$ and $Y$, respectively, such that
$$\overline{\bigcup_{n\in\mathbb{N}_0} X_n} = X, \qquad \overline{\bigcup_{n\in\mathbb{N}_0} Y_n} = Y$$
and
$$\dim X_n = \dim Y_n, \quad n \in \mathbb{N}_0.$$

We suppose that equation (9.1) has an approximate operator equation
$$A_nu_n = f_n, \qquad (9.2)$$
where $A_n: X_n \to Y_n$ is an approximation of the operator $A$, $u_n \in X_n$, and $f_n \in Y_n$ is an approximation of $f$. Examples of such equations include projection methods such as Galerkin methods and collocation methods. In particular, for solving integral equations they also include approximate operator equations obtained from quadrature methods and degenerate kernel methods. Wavelet compression schemes using both orthogonal projections (Galerkin methods) and interpolation projections (collocation methods) are also examples of this type.

Our method is based on the additional hypothesis that the subspaces are nested, that is,
$$X_n \subset X_{n+1}, \quad Y_n \subset Y_{n+1}, \quad n \in \mathbb{N}_0, \qquad (9.3)$$
so that we can define two subspaces $W_{n+1} \subset X_{n+1}$ and $\mathcal{Q}_{n+1} \subset Y_{n+1}$ such that $X_{n+1}$ becomes a direct sum of $X_n$ and $W_{n+1}$, and likewise $Y_{n+1}$ is a direct sum of $Y_n$ and $\mathcal{Q}_{n+1}$. Specifically, we assume that two direct sums $\oplus_1$ and $\oplus_2$ are defined so that we have the decompositions
$$X_{n+1} = X_n \oplus_1 W_{n+1} \quad\text{and}\quad Y_{n+1} = Y_n \oplus_2 \mathcal{Q}_{n+1}, \quad n \in \mathbb{N}_0. \qquad (9.4)$$


In practice, the finer-level subspaces $X_{n+1}$ and $Y_{n+1}$ are obtained respectively from the coarse-level subspaces $X_n$ and $Y_n$ by local or global subdivisions. It follows from (9.4), for a fixed $k \in \mathbb{N}_0$ and any $m \in \mathbb{N}_0$, that
$$X_{k+m} = X_k \oplus_1 W_{k+1} \oplus_1 \cdots \oplus_1 W_{k+m} \qquad (9.5)$$
and
$$Y_{k+m} = Y_k \oplus_2 \mathcal{Q}_{k+1} \oplus_2 \cdots \oplus_2 \mathcal{Q}_{k+m}. \qquad (9.6)$$

As in [67], for $g_0 \in X_k$ and $g_i \in W_{k+i}$, $i = 1, 2, \ldots, m$, we identify the vector $[g_0, g_1, \ldots, g_m]^T$ in $X_k \times W_{k+1} \times \cdots \times W_{k+m}$ with the sum $g_0 + g_1 + \cdots + g_m$ in $X_k \oplus_1 W_{k+1} \oplus_1 \cdots \oplus_1 W_{k+m}$. Similarly, for $g_0 \in Y_k$ and $g_i \in \mathcal{Q}_{k+i}$, $i = 1, 2, \ldots, m$, we identify the vector $[g_0, g_1, \ldots, g_m]^T$ in $Y_k \times \mathcal{Q}_{k+1} \times \cdots \times \mathcal{Q}_{k+m}$ with the sum $g_0 + g_1 + \cdots + g_m$ in $Y_k \oplus_2 \mathcal{Q}_{k+1} \oplus_2 \cdots \oplus_2 \mathcal{Q}_{k+m}$. In this notation, we describe the multilevel method for solving equation (9.2) with $n = k + m$, which has the form
$$A_{k+m}u_{k+m} = f_{k+m}. \qquad (9.7)$$

According to decomposition (9.5), we write the solution $u_{k+m} \in X_{k+m}$ as
$$u_{k+m} = u_{k0} + \sum_{i=1}^{m} v_{ki}, \qquad (9.8)$$
where $u_{k0} \in X_k$ and $v_{ki} \in W_{k+i}$ for $i = 1, 2, \ldots, m$. Hence $u_{k+m}$ is identified with $u_k(m) = [u_{k0}, v_{k1}, \ldots, v_{km}]^T$. We use these two notations interchangeably.

We let $F_{k,k+j}: W_{k+j} \to Y_k$, $G_{k+i,k}: X_k \to \mathcal{Q}_{k+i}$ and $H_{k+i,k+j}: W_{k+j} \to \mathcal{Q}_{k+i}$, $i, j = 1, 2, \ldots, m$, be given, and assume that the operator $A_{k+m}$ is identified with the matrix of operators
$$\mathcal{A}_{km} = \begin{bmatrix} A_k & F_{k,k+1} & \cdots & F_{k,k+m} \\ G_{k+1,k} & H_{k+1,k+1} & \cdots & H_{k+1,k+m} \\ \vdots & \vdots & & \vdots \\ G_{k+m,k} & H_{k+m,k+1} & \cdots & H_{k+m,k+m} \end{bmatrix}. \qquad (9.9)$$

Equation (9.7) is now equivalent to the equation
$$\mathcal{A}_{km}u_k(m) = f_{k+m}. \qquad (9.10)$$
We remark that the nestedness of the subspaces implies that the matrix $\mathcal{A}_{km}$ contains $\mathcal{A}_{k,m-1}$ as a submatrix. In other words, $\mathcal{A}_{km}$ is obtained by augmenting the matrix of the previous level, $\mathcal{A}_{k,m-1}$.


With this setup, one can design various iteration schemes to solve equation (9.10) by splitting the matrix $\mathcal{A}_{km}$ defined by (9.9) into a sum of two matrices and applying matrix iteration algorithms to $\mathcal{A}_{km}$.

We split the operator $\mathcal{A}_{km}$ into the sum of two operators $\mathcal{B}_{km}, \mathcal{C}_{km}: X_{k+m} \to Y_{k+m}$; that is, $\mathcal{A}_{km} = \mathcal{B}_{km} + \mathcal{C}_{km}$, $m \in \mathbb{N}_0$. Note that the matrices $\mathcal{B}_{km}$ and $\mathcal{C}_{km}$ are obtained by augmenting the matrices $\mathcal{B}_{k,m-1}$ and $\mathcal{C}_{k,m-1}$, respectively. Hence equation (9.10) becomes
$$\mathcal{B}_{km}u_k(m) = f_{k+m} - \mathcal{C}_{km}u_k(m), \quad m \in \mathbb{N}_0. \qquad (9.11)$$
Instead of solving (9.11) exactly, we solve it approximately by using the MAM described below.

Algorithm 1 (Operator form of the multilevel augmentation algorithm)  Let $k > 0$ be a fixed integer.

Step 1  Solve equation (9.2) with $n = k$ for $u_k \in X_k$ exactly.
Step 2  Set $u_{k0} = u_k$ and compute the splitting matrices $\mathcal{B}_{k0}$ and $\mathcal{C}_{k0}$.
Step 3  For $m \in \mathbb{N}$, suppose that $u_{k,m-1} \in X_{k+m-1}$ has been obtained, and do the following:
• Augment the matrices $\mathcal{B}_{k,m-1}$ and $\mathcal{C}_{k,m-1}$ to form $\mathcal{B}_{km}$ and $\mathcal{C}_{km}$, respectively.
• Augment $u_{k,m-1}$ by setting $\tilde{u}_{km} = \begin{bmatrix} u_{k,m-1} \\ 0 \end{bmatrix}$.
• Solve for $u_{km} \in X_{k+m}$ from the equation
$$\mathcal{B}_{km}u_{km} = f_{k+m} - \mathcal{C}_{km}\tilde{u}_{km}. \qquad (9.12)$$

For a fixed positive integer $k$, if this algorithm can be carried out, it generates a sequence of approximate solutions $u_{km} \in X_{k+m}$, $m \in \mathbb{N}_0$. Note that this algorithm is not an iteration method, since for different $m$ we are dealing with matrices of different order, and for each $m$ we compute the approximate solution $u_{km}$ only once. To ensure that the algorithm can be carried out, we have to guarantee that for $m \in \mathbb{N}_0$ the inverse of $\mathcal{B}_{km}$ exists and is uniformly bounded. Moreover, the approximate solutions generated by this augmentation algorithm neither necessarily have the same order of convergence as the approximation order of the subspaces $X_n$, nor are they necessarily more efficient to compute than solving equation (9.2) with $n = k + m$ directly, unless certain conditions on the splitting are satisfied. For this algorithm to be executable, accurate and efficient, we demand that the splitting of the operator $\mathcal{A}_{km}$ fulfills three requirements. Firstly, $\mathcal{B}_{km}^{-1}$ is uniformly bounded. Secondly, the approximate solution $u_{km}$ preserves the convergence order of $u_{k+m}$. That is, $u_{km}$ converges


to the exact solution $u$ at the approximation order of the subspaces $X_{k+m}$. Thirdly, the inverse of $\mathcal{B}_{km}$ is much easier to obtain than the inverse of $\mathcal{A}_{km}$.

We now address the first issue. To this end, we describe our hypotheses.

(I) There exist a positive integer $N_0$ and a positive constant $\alpha$ such that for all $n \ge N_0$,
$$\|A_n^{-1}\| \le \alpha^{-1}. \qquad (9.13)$$

(II) The limit
$$\lim_{n\to\infty}\|\mathcal{C}_{nm}\| = 0 \qquad (9.14)$$
holds uniformly for all $m \in \mathbb{N}$.

Under these two assumptions we have the following result

Proposition 9.1  Suppose that hypotheses (I) and (II) hold. Then there exists a positive integer $N > N_0$ such that for all $k \ge N$ and all $m \in \mathbb{N}$, equation (9.12) has a unique solution $u_{km} \in X_{k+m}$.

Proof  From hypothesis (I), whenever $k \ge N_0$, it holds for $x \in X_{k+m}$ that
$$\|\mathcal{B}_{km}x\| = \|(\mathcal{A}_{km} - \mathcal{C}_{km})x\| \ge (\alpha - \|\mathcal{C}_{km}\|)\|x\|. \qquad (9.15)$$
Moreover, it follows from (9.14) that there exists a positive integer $N > N_0$ such that for $k \ge N$ and $m \in \mathbb{N}_0$, $\|\mathcal{C}_{km}\| < \alpha/2$. Combining this inequality with (9.15), we find that for $k \ge N$ and $m \in \mathbb{N}_0$ the estimate
$$\|\mathcal{B}_{km}^{-1}\| \le \frac{1}{\alpha - \|\mathcal{C}_{km}\|} \le 2\alpha^{-1} \qquad (9.16)$$
holds. This ensures that for all $k \ge N$ and $m \in \mathbb{N}_0$, equation (9.12) has a unique solution.

We next consider the second issue. For $n \in \mathbb{N}_0$, we let $R_n$ denote the approximation error of the space $X_n$ for $u \in X$, namely $R_n = R_n(u) = \inf\{\|u - v\|_X : v \in X_n\}$. A sequence of non-negative numbers $\{\gamma_n : n \in \mathbb{N}_0\}$ is called a majorization sequence of $\{R_n\}$ if $\gamma_n \ge R_n$, $n \in \mathbb{N}_0$, and there exist a positive integer $N_0$ and a positive constant $\sigma$ such that for $n \ge N_0$,
$$\frac{\gamma_{n+1}}{\gamma_n} \ge \sigma.$$

We also need the following hypothesis.

(III) There exist a positive integer $N_0$ and a positive constant $\rho$ such that for $n \ge N_0$ and for the solution $u_n \in X_n$ of equation (9.2), $\|u - u_n\|_X \le \rho R_n$.

In the next theorem we show that, under the assumptions described above, $u_{km}$ approximates $u$ at an order comparable to $R_{k+m}$.


Theorem 9.2  Suppose that hypotheses (I)–(III) hold. Let $u \in X$ be the solution of equation (9.1), $\{\gamma_n : n \in \mathbb{N}_0\}$ be a majorization sequence of $\{R_n\}$, and $\rho$ be the constant appearing in hypothesis (III). Then there exists a positive integer $N$ such that for $k \ge N$ and $m \in \mathbb{N}_0$,
$$\|u - u_{km}\| \le (\rho + 1)\gamma_{k+m},$$
where $u_{km}$ is the solution of equation (9.12).

Proof  We prove this theorem by establishing an estimate on $u_{km} - u_{k+m}$. For this purpose we subtract (9.11) from (9.12) to obtain
$$\mathcal{B}_{km}(u_{km} - u_{k+m}) = \mathcal{C}_{km}(u_{k+m} - \tilde{u}_{km}),$$
where $\tilde{u}_{km} = [u_{k,m-1}, 0]^T$ is the augmented vector from Step 3 of Algorithm 1. The hypotheses of this theorem ensure that Proposition 9.1 holds. Hence, from the equation above and inequality (9.16), we have that
$$\|u_{km} - u_{k+m}\| \le \frac{\|\mathcal{C}_{km}\|}{\alpha - \|\mathcal{C}_{km}\|}\|u_{k+m} - \tilde{u}_{km}\|. \qquad (9.17)$$
We next prove by induction on $m$ that there exists a positive integer $N$ such that for $k \ge N$ and $m \in \mathbb{N}_0$,
$$\|u_{km} - u_{k+m}\| \le \gamma_{k+m}. \qquad (9.18)$$
When $m = 0$, since $u_{k0} = u_k$, estimate (9.18) holds trivially. Suppose that the claim holds for $m = r - 1$; we prove that it holds for $m = r$. To accomplish this, using the definition of $\tilde{u}_{kr}$, hypothesis (III), the induction hypothesis and the definition of majorization sequences, we obtain that
$$\|u_{k+r} - \tilde{u}_{kr}\| \le \|u_{k+r} - u\| + \|u - u_{k+r-1}\| + \|u_{k+r-1} - u_{k,r-1}\| \le \rho\gamma_{k+r} + (\rho + 1)\gamma_{k+r-1} \le \left(\rho + (\rho + 1)\frac{1}{\sigma}\right)\gamma_{k+r}.$$
Substituting this estimate into the right-hand side of (9.17) with $m = r$ yields
$$\|u_{kr} - u_{k+r}\| \le \frac{\|\mathcal{C}_{kr}\|}{\alpha - \|\mathcal{C}_{kr}\|}\left(\rho + (\rho + 1)\frac{1}{\sigma}\right)\gamma_{k+r}.$$
Again employing hypothesis (II), there exists a positive integer $N$ such that for $k \ge N$ and $r \in \mathbb{N}_0$,
$$\|\mathcal{C}_{kr}\| \le \frac{\alpha}{(\rho + 1)\left(1 + \frac{1}{\sigma}\right)}.$$
We then conclude that for $k \ge N$ and $r \in \mathbb{N}_0$,
$$\frac{\|\mathcal{C}_{kr}\|}{\alpha - \|\mathcal{C}_{kr}\|}\left(\rho + (\rho + 1)\frac{1}{\sigma}\right) \le 1.$$
Therefore, for $k \ge N$, estimate (9.18) holds for $m = r$. This advances the induction hypothesis, and thus estimate (9.18) holds for all $m \in \mathbb{N}_0$.


Finally, the estimate of this theorem follows directly from estimate (9.18) and hypothesis (III).

We remark that when the exact solution $u$ of equation (9.1) has certain Sobolev or Besov regularity and specific approximate subspaces $X_n$ are chosen, we may choose the majorization sequence $\gamma_n$ as the upper bound of $R_n$ which gives the order of approximation of the subspaces $X_n$ with respect to the regularity. For example, when $X_n$ is chosen to be the usual finite element space with mesh size $2^{-n}$ and the solution $u$ of equation (9.1) belongs to the Sobolev space $H^r$, we may choose $\gamma_n = c2^{-rn}\|u\|_{H^r}$. In this case the constant $\sigma$ in the definition of majorization sequences can be taken as $2^{-r}$. Therefore, Theorem 9.2 ensures that the approximate solution $u_{km}$ generated by the MAM has the same order of approximation as the subspaces $X_n$.

9.1.2 Second-kind equations

In this section we present special results for projection methods for solving operator equations of the second kind. Consider the equation
$$(I - \mathcal{K})u = f, \qquad (9.19)$$
where $\mathcal{K}: X \to X$ is a linear operator. We assume that equation (9.19) has a unique solution. In this special case we identify $A = I - \mathcal{K}$, $X = Y$ and $X_n = Y_n$. Suppose that $P_n: X \to X_n$ are linear projections, and we define the projection method for solving equation (9.19) by
$$P_n(I - \mathcal{K})u_n = P_nf, \qquad (9.20)$$
where $u_n \in X_n$. To develop a MAM we need another projection $P'_n: X \to X_n$. We define the operators $Q_n = P_n - P_{n-1}$ and $Q'_n = P'_n - P'_{n-1}$, and introduce the subspaces $W_n = Q'_nX_n$, $n \in \mathbb{N}$. We allow the projections $P_n$ and $P'_n$ to be different in order to have a wide range of applications. For example, for Galerkin methods $P_n$ and $P'_n$ are both identical to the orthogonal projection, and for the collocation method developed in [68], $P_n$ is the interpolatory projection and $P'_n$ is the orthogonal projection.

For $n \in \mathbb{N}$ we set $\mathcal{K}_n = P_n\mathcal{K}|_{X_n}$ and identify $A_n = P_n(I - \mathcal{K})|_{X_n}$. We further identify the operators in (9.9) with
$$F_{k,k+j} = P_k(I - \mathcal{K})|_{W_{k+j}}, \quad G_{k+i,k} = Q_{k+i}(I - \mathcal{K})|_{X_k}, \quad H_{k+i,k+j} = Q_{k+i}(I - \mathcal{K})|_{W_{k+j}}.$$
We split the operator $\mathcal{K}_{k+m}$ into the sum of two operators, $\mathcal{K}_{k+m} = \mathcal{K}^L_{km} + \mathcal{K}^H_{km}$, where $\mathcal{K}^L_{km} = P_k\mathcal{K}|_{X_{k+m}}$ and $\mathcal{K}^H_{km} = (P_{k+m} - P_k)\mathcal{K}|_{X_{k+m}}$. The operators


$\mathcal{K}^L_{km}$ and $\mathcal{K}^H_{km}$ correspond to the lower and higher frequencies of the operator $\mathcal{K}_{km}$, respectively. According to the decomposition of $\mathcal{K}_{k+m}$, we write the operator $\mathcal{A}_{km} = I_{k+m} - \mathcal{K}_{km}$ as a sum of its lower- and higher-frequency parts, $\mathcal{B}_{km} = I_{k+m} - \mathcal{K}^L_{km}$ and $\mathcal{C}_{km} = -\mathcal{K}^H_{km}$. Using this specific splitting in formula (9.12) of Algorithm 1, we have that
$$(I_{k+m} - \mathcal{K}^L_{km})u_{km} = f_{km} + \mathcal{K}^H_{km}\tilde{u}_{km}, \qquad (9.21)$$
where $\tilde{u}_{km} = [u_{k,m-1}, 0]^T$ is the augmented vector. The next theorem is concerned with the convergence order of the MAM for second-kind equations using projection methods.

Theorem 9.3  Suppose that $\mathcal{K}$ is a compact linear operator not having one as its eigenvalue, and that there exists a positive constant $p$ such that
$$\|P_n\| \le p, \quad \|P'_n\| \le p \quad\text{for all } n \in \mathbb{N}. \qquad (9.22)$$
Let $u \in X$ be the solution of equation (9.19) and $\{\gamma_n\}$ be a majorization sequence of $\{R_n\}$. Then there exist a positive integer $N$ and a positive constant $c_0$ such that for all $k \ge N$ and $m \in \mathbb{N}$,
$$\|u - u_{km}\| \le c_0\gamma_{k+m},$$
where $u_{km}$ is obtained from the augmentation algorithm with formula (9.21).

Proof  We prove that the hypotheses of Theorem 9.2 hold for the special choice of the operators $\mathcal{B}_{km}$ and $\mathcal{C}_{km}$ for second-kind equations. We first remark that the assumption on the operators $\mathcal{K}$ and $P_n$ ensures that hypotheses (I) and (III) hold with $A_n = I - \mathcal{K}_n$. It remains to verify hypothesis (II). To this end, we recall the definition of $\mathcal{C}_{nm}$, which has the form $\mathcal{C}_{nm} = -(P_{n+m} - P_n)\mathcal{K}|_{X_{n+m}}$. Since $P_{n+m}$ fixes $X_{n+m} \supset X_n$, we have $P_{n+m} - P_n = P_{n+m}(I - P_n)$, and it follows from (9.22) that
$$\|\mathcal{C}_{nm}\| = \|(P_{n+m} - P_n)\mathcal{K}|_{X_{n+m}}\| \le p\|(I - P_n)\mathcal{K}\|.$$
By (9.22) and the nestedness of the subspaces $X_n$, $P_n$ converges pointwise to the identity operator $I$ of the space $X$. Hence, since $\mathcal{K}$ is compact, the last term of the inequality above converges to zero as $n \to \infty$, uniformly for $m \in \mathbb{N}$. Therefore all hypotheses of Theorem 9.2 are satisfied, and thus we complete the proof of this theorem.

We next derive the matrix form of the MAM by choosing appropriate bases for the subspaces $X_n$. For this purpose, we let $X^*$ denote the dual space of $X$, and for $\ell \in X^*$, $x \in X$, we let $\langle \ell, x\rangle$ denote the value of the linear functional $\ell$ at $x$. Suppose that $\{L_n : n \in \mathbb{N}_0\}$ is a sequence of subspaces of $X^*$ which has


the property that $L_n \subset L_{n+1}$ and $\dim L_n = \dim X_n$, $n \in \mathbb{N}_0$. The operator $P_n: X \to X_n$ is defined for $x \in X$ by
$$\langle \ell, x - P_nx\rangle = 0 \quad\text{for all } \ell \in L_n. \qquad (9.23)$$
It is known (cf. [77]) that the operator $P_n: X \to X_n$ is uniquely determined and is a projection if and only if $L_n \cap X_n^\perp = \{0\}$, $n \in \mathbb{N}_0$, where $X_n^\perp$ denotes the annihilator of $X_n$ in $X^*$. Throughout the rest of this section we always assume that this condition is satisfied. We also assume that we have a decomposition of the space $L_{n+1}$, namely
$$L_{n+1} = L_n \oplus V_{n+1}, \quad n \in \mathbb{N}_0. \qquad (9.24)$$
Clearly, the spaces $W_i$ and $V_i$ have the same dimension. We specify the direct sum in (9.24) later.

Set $w(0) = \dim X_0$ and $w(i) = \dim W_i$ for $i \in \mathbb{N}$. Suppose that
$$X_0 = \operatorname{span}\{w_{0j} : j \in \mathbb{Z}_{w(0)}\}, \quad L_0 = \operatorname{span}\{\ell_{0j} : j \in \mathbb{Z}_{w(0)}\},$$
$$W_i = \operatorname{span}\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}, \quad V_i = \operatorname{span}\{\ell_{ij} : j \in \mathbb{Z}_{w(i)}\}, \quad i \in \mathbb{N}.$$
Using the index set $U_n = \{(i, j) : i \in \mathbb{Z}_{n+1}, j \in \mathbb{Z}_{w(i)}\}$, we have that for $n \in \mathbb{N}_0$,
$$X_n = \operatorname{span}\{w_{ij} : (i, j) \in U_n\} \quad\text{and}\quad L_n = \operatorname{span}\{\ell_{ij} : (i, j) \in U_n\}.$$
We remark that the index set $U_n$ has cardinality $d_n = \dim X_n$, and we assume that the elements of $U_n$ are ordered lexicographically.

We now present the matrix form of equation (9.20) using these bases. Note that for $v_n \in X_n$ there exist unique constants $v_{ij}$, $(i, j) \in U_n$, such that $v_n = \sum_{(i,j)\in U_n} v_{ij}w_{ij}$. It follows that the solution $u_n$, with $n = k + m$, of equation (9.20) has the vector representation $\mathbf{u}_n = [u_{ij} : (i, j) \in U_n]$ under the basis $\{w_{ij} : (i, j) \in U_n\}$. Using the bases for $X_n$ and $L_n$, we let $E_{i'j',ij} = \langle \ell_{i'j'}, w_{ij}\rangle$ and $K_{i'j',ij} = \langle \ell_{i'j'}, \mathcal{K}w_{ij}\rangle$, and introduce the matrices $\mathbf{E}_n = [E_{i'j',ij} : (i', j'), (i, j) \in U_n]$ and $\mathbf{K}_n = [K_{i'j',ij} : (i', j'), (i, j) \in U_n]$. We also introduce the column vector $\mathbf{f}_n = [\langle \ell_{i'j'}, f\rangle : (i', j') \in U_n]$. In these notations, equation (9.20) is written in matrix form as
$$(\mathbf{E}_{k+m} - \mathbf{K}_{k+m})\mathbf{u}_{k+m} = \mathbf{f}_{k+m}. \qquad (9.25)$$

We partition the matrices $\mathbf{K}_n$ and $\mathbf{E}_n$ into block matrices according to the decompositions of the spaces $X_n$ and $L_n$. Specifically, for $i', i \in \mathbb{Z}_{n+1}$ we introduce the blocks $\mathbf{K}_{i'i} = [K_{i'j',ij} : j' \in \mathbb{Z}_{w(i')}, j \in \mathbb{Z}_{w(i)}]$ and set $\mathbf{K}_n = [\mathbf{K}_{i'i} : i', i \in \mathbb{Z}_{n+1}]$. Moreover, for a fixed $k \in \mathbb{N}$, we define the blocks $\mathbf{K}^k_{00} = \mathbf{K}_k$ and, for $l', l \in \mathbb{N}$, $\mathbf{K}^k_{0l} = [\mathbf{K}_{i'i} : i' \in \mathbb{Z}_{k+1}, i = k + l]$, $\mathbf{K}^k_{l0} = [\mathbf{K}_{i'i} : i' = k + l, i \in \mathbb{Z}_{k+1}]$ and $\mathbf{K}^k_{l'l} = \mathbf{K}_{k+l',k+l}$. Using these block notations, for $n = k + m$ we write $\mathbf{K}_{k+m} = [\mathbf{K}^k_{i'i} : i', i \in \mathbb{Z}_{m+1}]$. Likewise, we partition the matrix $\mathbf{E}_n$ in exactly the same way.

The decomposition of the operator $\mathcal{K}_{k+m}$ suggests the matrix decomposition

$$\mathbf{K}_{k+m} = \mathbf{K}^L_{km} + \mathbf{K}^H_{km},$$
where
$$\mathbf{K}^L_{km} = \begin{bmatrix} \mathbf{K}^k_{00} & \mathbf{K}^k_{01} & \cdots & \mathbf{K}^k_{0m} \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} \quad\text{and}\quad \mathbf{K}^H_{km} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \mathbf{K}^k_{10} & \mathbf{K}^k_{11} & \cdots & \mathbf{K}^k_{1m} \\ \vdots & \vdots & & \vdots \\ \mathbf{K}^k_{m0} & \mathbf{K}^k_{m1} & \cdots & \mathbf{K}^k_{mm} \end{bmatrix}.$$
Note that the matrices $\mathbf{K}^L_{km}$ and $\mathbf{K}^H_{km}$ correspond to the lower and higher frequencies of the matrix $\mathbf{K}_{km}$. Moreover, we set $\mathbf{B}_{km} = \mathbf{E}_{k+m} - \mathbf{K}^L_{km}$ and $\mathbf{C}_{km} = -\mathbf{K}^H_{km}$. Next we describe the matrix form of the MAM for solving equation (9.25) using these two matrices.

Algorithm 2 (Matrix form of the multilevel augmentation algorithm)  Let $k > 0$ be a fixed integer.

Step 1  Solve $\mathbf{u}_k \in \mathbb{R}^{d_k}$ from the equation $(\mathbf{E}_k - \mathbf{K}_k)\mathbf{u}_k = \mathbf{f}_k$.
Step 2  Set $\mathbf{u}_{k0} = \mathbf{u}_k$ and compute the splitting matrices $\mathbf{K}^L_{k0}$ and $\mathbf{K}^H_{k0}$.
Step 3  For $m \in \mathbb{N}$, suppose that $\mathbf{u}_{k,m-1} \in \mathbb{R}^{d_{k+m-1}}$ has been obtained, and do the following:
• Augment the matrices $\mathbf{K}^L_{k,m-1}$ and $\mathbf{K}^H_{k,m-1}$ to form $\mathbf{K}^L_{km}$ and $\mathbf{K}^H_{km}$, respectively.
• Augment $\mathbf{u}_{k,m-1}$ by setting $\tilde{\mathbf{u}}_{km} = \begin{bmatrix} \mathbf{u}_{k,m-1} \\ 0 \end{bmatrix}$.
• Solve $\mathbf{u}_{km} \in \mathbb{R}^{d_{k+m}}$ from the algebraic equations
$$(\mathbf{E}_{k+m} - \mathbf{K}^L_{km})\mathbf{u}_{km} = \mathbf{f}_{k+m} + \mathbf{K}^H_{km}\tilde{\mathbf{u}}_{km}. \qquad (9.26)$$

It is important to know under what conditions the matrix form (9.26) is equivalent to the operator form (9.21). This issue is addressed in the next theorem. To prepare for the proof of this theorem, we consider an expression of the


identity operator $I$ in the subspace $X_{k+m}$. Note that for any $x \in X_{k+j}$, $j \in \mathbb{Z}_m$, $Q_{k+1+j}x = 0$. This is equivalent to the following equations:
$$Q_{k+1+j}I|_{X_k} = 0 \quad\text{and}\quad Q_{k+1+j}I|_{W_{k+1+i}} = 0, \quad i \in \mathbb{Z}_j. \qquad (9.27)$$
Using this fact, we express the identity operator $I$ in the subspace $X_{k+m}$ as
$$I_{k+m} = P_{k+m}I|_{X_{k+m}} = \begin{bmatrix} P_kI|_{X_k} & P_kI|_{W_{k+1}} & \cdots & P_kI|_{W_{k+m}} \\ 0 & Q_{k+1}I|_{W_{k+1}} & \cdots & Q_{k+1}I|_{W_{k+m}} \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & Q_{k+m}I|_{W_{k+m}} \end{bmatrix}.$$

Taking this into consideration, equation (9.21) becomes
$$\begin{bmatrix} P_k(I - \mathcal{K})|_{X_k} & P_k(I - \mathcal{K})|_{W_{k+1}} & \cdots & P_k(I - \mathcal{K})|_{W_{k+m}} \\ 0 & Q_{k+1}I|_{W_{k+1}} & \cdots & Q_{k+1}I|_{W_{k+m}} \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & Q_{k+m}I|_{W_{k+m}} \end{bmatrix} u_{km} = \begin{bmatrix} P_kf \\ Q_{k+1}f \\ \vdots \\ Q_{k+m}f \end{bmatrix} + \begin{bmatrix} 0 & 0 & \cdots & 0 \\ Q_{k+1}\mathcal{K}|_{X_k} & Q_{k+1}\mathcal{K}|_{W_{k+1}} & \cdots & Q_{k+1}\mathcal{K}|_{W_{k+m}} \\ \vdots & \vdots & & \vdots \\ Q_{k+m}\mathcal{K}|_{X_k} & Q_{k+m}\mathcal{K}|_{W_{k+1}} & \cdots & Q_{k+m}\mathcal{K}|_{W_{k+m}} \end{bmatrix} \begin{bmatrix} u_{k,m-1} \\ 0 \end{bmatrix}. \qquad (9.28)$$

To state the next theorem, we let $\mathbb{N}_k = \{k, k+1, \ldots\}$ and introduce the following notion. For finite-dimensional subspaces $A \subset X^*$ and $B \subset X$, we say $A \perp B$ if for any $\ell \in A$ and $x \in B$ it holds that $\langle \ell, x\rangle = 0$.

Theorem 9.4  The following statements are equivalent:

(i) The matrix form (9.26) and the operator form (9.21) are equivalent for any $f \in X$ and for any compact operator $\mathcal{K}: X \to X$.
(ii) For any $l \in \mathbb{N}_k$, $V_{l+1} \perp X_l$.
(iii) For any $l \in \mathbb{N}_k$ and any $j \in \mathbb{Z}_{m+1}\setminus\{0\}$, $V_{l+j} \perp X_l$.
(iv) For $i', i \in \mathbb{Z}_{m+1}$ with $i' > i$, $\mathbf{E}^k_{i'i} = 0$.
(v) For $i', i \in \mathbb{Z}_{m+1}$ with $i' > i$, $\mathbf{B}^k_{i'i} = 0$.


Proof  We first prove the equivalence of statements (i) and (ii). It is clear that for any $x \in X$ and for any $\ell \in V_n$, $\langle \ell, P_{n-1}x\rangle = 0$ if and only if $V_n \perp X_{n-1}$. Moreover, we observe from the definitions of $P_n$ and $Q_n$ that for $n \in \mathbb{N}$, $x \in X$ and $\ell \in V_n$,
$$\langle \ell, Q_nx\rangle = \langle \ell, P_nx - P_{n-1}x\rangle = \langle \ell, x\rangle - \langle \ell, P_{n-1}x\rangle.$$
Hence, for each $n \in \mathbb{N}$, the equation $\langle \ell, Q_nx\rangle = \langle \ell, x\rangle$ holds for all $x \in X$, $\ell \in V_n$, if and only if $V_n \perp X_{n-1}$. Therefore statement (ii) is equivalent to saying that for any $j \in \mathbb{Z}_m$,
$$\langle \ell, Q_{k+j+1}x\rangle = \langle \ell, x\rangle \quad\text{for all } x \in X, \ \ell \in V_{k+j+1}. \qquad (9.29)$$
Noting that for $x \in X$ and $n \in \mathbb{N}$, $Q_nx \in \mathcal{Q}_n = Q_nX_n \subset X_n$, we conclude from (9.23) and (9.27) that equation (9.28) is equivalent to
$$\langle \ell, P_k(I - \mathcal{K})u_{km}\rangle = \langle \ell, P_kf\rangle \quad\text{for all } \ell \in L_k \qquad (9.30)$$
and
$$\langle \ell, Q_{k+j+1}u_{km}\rangle = \langle \ell, Q_{k+j+1}f + Q_{k+j+1}\mathcal{K}u_{k,m-1}\rangle \quad\text{for all } \ell \in L_{k+j+1}, \ j \in \mathbb{Z}_m. \qquad (9.31)$$
Using (9.23), equation (9.30) is written as
$$\langle \ell, (I - \mathcal{K})u_{km}\rangle = \langle \ell, f\rangle \quad\text{for all } \ell \in L_k. \qquad (9.32)$$
Again from (9.23) we have that for $x \in X$ and $\ell \in L_{k+j}$, $j \in \mathbb{Z}_m$,
$$\langle \ell, Q_{k+j+1}x\rangle = \langle \ell, P_{k+j+1}x - P_{k+j}x\rangle = 0.$$
In particular, for all $\ell \in L_{k+j}$, $j \in \mathbb{Z}_m$, both sides of equation (9.31) are equal to zero. Noting that
$$L_{k+j+1} = L_{k+j} \oplus V_{k+j+1},$$
equation (9.31) is equivalent to
$$\langle \ell, Q_{k+j+1}u_{km}\rangle = \langle \ell, Q_{k+j+1}f + Q_{k+j+1}\mathcal{K}u_{k,m-1}\rangle \quad\text{for all } \ell \in V_{k+j+1}, \ j \in \mathbb{Z}_m.$$
Now suppose that statement (ii) holds. Using equation (9.29), the equation above is equivalent to
$$\langle \ell, u_{km}\rangle = \langle \ell, f + \mathcal{K}u_{k,m-1}\rangle \quad\text{for all } \ell \in V_{k+j+1}, \ j \in \mathbb{Z}_m. \qquad (9.33)$$
In terms of the bases of the spaces $X_k$, $L_k$, $W_{k+j+1}$ and $V_{k+j+1}$, $j \in \mathbb{Z}_m$, equations (9.32) and (9.33) are equivalent to the matrix equation (9.26). Conversely, if (i) holds, we can prove that equation (9.29) is satisfied, and thus (ii) holds.


The proof that (iii) implies (ii) is trivial. Statement (ii) and the nestedness assumption on $\{X_n\}$ ensure the validity of (iii). Statement (iv) is the discrete version of (iii), and hence they are equivalent. Finally, the equivalence of (iv) and (v) follows from the definition of the matrix $\mathbf{B}_{km}$.

Note that condition (ii) in Theorem 9.4 specifies the definition of the direct sum (9.24). In other words, the space $V_{n+1}$ is uniquely determined by condition (ii). From now on we always assume that condition (ii) is satisfied, to guarantee the equivalence of (9.21) and (9.26). Another way to write the equivalence conditions in Theorem 9.4 is
$$\langle \ell_{i'j'}, w_{ij}\rangle = 0, \quad i, i' \in \mathbb{N}_k, \ i < i', \ j \in \mathbb{Z}_{w(i)}, \ j' \in \mathbb{Z}_{w(i')}. \qquad (9.34)$$
When condition (9.34) is satisfied, we call the bases $\{\ell_{ij}\}$ and $\{w_{ij}\}$ semi-bi-orthogonal. Under this condition, when the solution $\mathbf{u}_{km}$ of equation (9.26) is computed, we conclude from Theorem 9.4 that the function defined by $u_{km} = \mathbf{u}_{km}^T\mathbf{w}_{k+m}$ is the solution of equation (9.21), where $\mathbf{w}_n = [w_{ij} : (i, j) \in U_n]$. We remark that condition (9.34) is satisfied for the multiscale Galerkin methods and multiscale collocation methods developed in Chapters 5 and 7, respectively. In fact, in the case of the Galerkin method using the orthogonal piecewise polynomial multiscale bases constructed in [200], the matrix $\mathbf{E}_n$ is the identity, and in the case of the collocation method using the interpolating piecewise polynomial multiscale bases and multiscale functionals constructed in [69], the matrix $\mathbf{E}_n$ is upper triangular with diagonal entries equal to one.

We now turn to a study of the computational complexity of Algorithm 2. Specifically, we estimate the number of multiplications used in the method. For this purpose we rewrite equation (9.26) in block form. Letting $n = k+m$, we partition the matrix $\mathbf{E}_{k+m}$ in the same way as we have done for the matrix $\mathbf{K}_{k+m}$, to obtain blocks $\mathbf{E}^k_{ii'}$, $i,i'\in\mathbb{Z}_{m+1}$. We also partition the vectors $\mathbf{u}_{k,m}$ and $\mathbf{f}_{k+m}$ accordingly, as $\mathbf{u}_{k,m} = [\mathbf{u}^m_i : i\in\mathbb{Z}_{m+1}]$ and $\mathbf{f}_{k+m} = [\mathbf{f}^k_i : i\in\mathbb{Z}_{m+1}]$. Here and in what follows we require that the appropriate bases are chosen so that $\mathbf{E}^k_{i'i} = \mathbf{0}$ for $0\le i < i'\le m$ and $\mathbf{E}^k_{ii} = \mathbf{I}$. With this assumption we express the matrix $\mathbf{B}_{k,m}$ as

$$\mathbf{B}_{k,m}=\begin{bmatrix}
\mathbf{I}-\mathbf{K}^k & \mathbf{E}^k_{01}-\mathbf{K}^k_{01} & \mathbf{E}^k_{02}-\mathbf{K}^k_{02} & \cdots & \mathbf{E}^k_{0,m-1}-\mathbf{K}^k_{0,m-1} & \mathbf{E}^k_{0m}-\mathbf{K}^k_{0m}\\
\mathbf{0} & \mathbf{I} & \mathbf{E}^k_{12} & \cdots & \mathbf{E}^k_{1,m-1} & \mathbf{E}^k_{1m}\\
\mathbf{0} & \mathbf{0} & \mathbf{I} & \cdots & \mathbf{E}^k_{2,m-1} & \mathbf{E}^k_{2m}\\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots\\
\mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{I} & \mathbf{E}^k_{m-1,m}\\
\mathbf{0} & \mathbf{0} & \mathbf{0} & \cdots & \mathbf{0} & \mathbf{I}
\end{bmatrix}.$$


It is clear from this matrix representation that inverting the matrices $\mathbf{B}_{k,m}$, $m\in\mathbb{N}_0$, is essentially equivalent to inverting $\mathbf{I}-\mathbf{K}^k$. The strength of Algorithm 2 is that it only requires inverting the matrix $\mathbf{I}-\mathbf{K}^k$. Using this block form of the matrix $\mathbf{B}_{k,m}$, equation (9.26) becomes

$$\mathbf{u}^m_i = \mathbf{f}^k_i + \sum_{j=0}^{m-1}\mathbf{K}^k_{ij}\mathbf{u}^{m-1}_j - \sum_{j=i+1}^{m}\mathbf{E}^k_{ij}\mathbf{u}^m_j, \quad i = m, m-1, \ldots, 1, \tag{9.35}$$

$$\bar{\mathbf{f}}^k_0 = \mathbf{f}^k_0 - \sum_{j=1}^{m}\bigl(\mathbf{E}^k_{0j}-\mathbf{K}^k_{0j}\bigr)\mathbf{u}^m_j \quad\text{and}\quad \mathbf{u}^m_0 = \bigl(\mathbf{I}-\mathbf{K}^k\bigr)^{-1}\bar{\mathbf{f}}^k_0. \tag{9.36}$$
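The back substitution (9.35)–(9.36) can be sketched in a few lines of NumPy. All block sizes and blocks below are hypothetical random stand-ins; the vector `g` plays the role of the already assembled right-hand side $\mathbf{f}^k_i + \sum_j \mathbf{K}^k_{ij}\mathbf{u}^{m-1}_j$, and `E[0, j]` stands for the combined first-row block $\mathbf{E}^k_{0j}-\mathbf{K}^k_{0j}$:

```python
import numpy as np

# Sketch: solve B_{k,m} u = g by the back substitution (9.35)-(9.36).
rng = np.random.default_rng(0)
sizes = [4, 3, 3]                       # dims of X_k, W_{k+1}, W_{k+2} (m = 2)
m = len(sizes) - 1
off = np.cumsum([0] + sizes)
Kk = 0.1 * rng.standard_normal((sizes[0], sizes[0]))    # head block K^k
E = {(i, j): 0.1 * rng.standard_normal((sizes[i], sizes[j]))
     for i in range(m + 1) for j in range(i + 1, m + 1)}

# assemble the full matrix B_{k,m} only as a reference for checking
d = off[-1]
B = np.eye(d)
B[:sizes[0], :sizes[0]] -= Kk
for (i, j), blk in E.items():
    B[off[i]:off[i + 1], off[j]:off[j + 1]] = blk
g = rng.standard_normal(d)
gb = [g[off[i]:off[i + 1]] for i in range(m + 1)]

# components i = m, ..., 1 cost only multiplications; only the head
# system with I - K^k has to be solved
u = [None] * (m + 1)
for i in range(m, 0, -1):
    u[i] = gb[i] - sum(E[i, j] @ u[j] for j in range(i + 1, m + 1))
u[0] = np.linalg.solve(np.eye(sizes[0]) - Kk,
                       gb[0] - sum(E[0, j] @ u[j] for j in range(1, m + 1)))
assert np.allclose(np.concatenate(u), np.linalg.solve(B, g))
```

The final assertion confirms that the block recursion reproduces the dense solve of $\mathbf{B}_{k,m}\mathbf{u} = \mathbf{g}$ exactly.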

For a matrix $\mathbf{A}$, we denote by $\mathcal{N}(\mathbf{A})$ the number of nonzero entries of $\mathbf{A}$. Note that we need $\mathcal{N}(\mathbf{K}^H_{k,m}) + \mathcal{N}(\mathbf{E}_{k+m})$ multiplications to obtain $\mathbf{u}^m_i$, $i = 1, 2, \ldots, m$, from equation (9.35). In addition, the computation of $\bar{\mathbf{f}}^k_0$ requires $\mathcal{N}(\mathbf{K}^L_{k,m})$ multiplications. We assume that computing $\mathbf{u}^m_0$ from the second equation of (9.36) needs $M(k)$ multiplications, a constant independent of $m$. Hence the number of multiplications for computing $\mathbf{u}_{k,m}$ from $\mathbf{u}_{k,m-1}$ is

$$N_{k,m} = \mathcal{N}(\mathbf{K}_{k+m}) + \mathcal{N}(\mathbf{E}_{k+m}) + M(k). \tag{9.37}$$

Recall that to compute $\mathbf{u}_{k,m}$, we first compute $\mathbf{u}_k$ and then use the algorithm of (9.35) and (9.36) to compute $\mathbf{u}_{k,i}$, $i = 1, 2, \ldots, m$, successively. By formula (9.37), the total number of multiplications required to obtain $\mathbf{u}_{k,m}$ is given by

$$M(k) + \sum_{i=1}^{m} N_{k,i} = (m+1)M(k) + \sum_{i=1}^{m}\bigl[\mathcal{N}(\mathbf{K}_{k+i}) + \mathcal{N}(\mathbf{E}_{k+i})\bigr].$$

We now summarize the discussion above in a proposition

Proposition 9.5 The total number of multiplications required for computing $\mathbf{u}_{k,m}$ from $\mathbf{u}_k$ is given by

$$(m+1)M(k) + \sum_{i=1}^{m}\bigl[\mathcal{N}(\mathbf{K}_{k+i}) + \mathcal{N}(\mathbf{E}_{k+i})\bigr].$$

To close this section, we analyze the stability of Algorithm 2. It can be shown that if the condition number cond($\mathbf{B}_{k,m}$) of the matrix $\mathbf{B}_{k,m}$ is small, then small perturbations of the matrices $\mathbf{B}_{k,m}$ and $\mathbf{C}_{k,m}$ and of the right-hand-side vector cause only a small perturbation in the solution $\mathbf{u}_{k,m}$. For this reason we study the condition number of the matrix $\mathbf{B}_{k,m}$. Our theorem will confirm that the condition numbers cond($\mathbf{B}_{k,m}$) and cond($\mathbf{A}_{k+m}$) have the same order. In other words, the augmentation method will not ruin the well-conditioning of the original multilevel method.

We first establish a result showing that the stability of $\mathbf{B}_{k,m}$ is inherited from that of $\mathbf{A}_{k+m}$.

Lemma 9.6 Suppose that the family of operators $\mathcal{A}_n$, $n\in\mathbb{N}_0$, has the property that there exist positive constants $c_1$ and $c_2$ and a positive integer $N_0$ such that for $n\ge N_0$, $\|\mathcal{A}_n\|\le c_1$ and $\|\mathcal{A}_n v\|\ge c_2\|v\|$ for all $v\in\mathbb{X}_n$. Moreover, suppose that for any $k,m\in\mathbb{N}_0$, $\mathcal{A}_{k+m} = \mathcal{B}_{k,m} + \mathcal{C}_{k,m}$, where $\mathcal{C}_{k,m}$ satisfies hypothesis (II). Then there exist positive constants $c_1'$ and $c_2'$ and a positive integer $N_1$ such that for $k > N_1$, $m\in\mathbb{N}_0$, $\|\mathcal{B}_{k,m}\|\le c_1'$ and $\|\mathcal{B}_{k,m}v\|\ge c_2'\|v\|$ for all $v\in\mathbb{X}_{k+m}$.

Proof By the triangle inequality we have, for any $k,m\in\mathbb{N}_0$, that

$$\|\mathcal{A}_{k+m}\| - \|\mathcal{C}_{k,m}\| \le \|\mathcal{B}_{k,m}\| \le \|\mathcal{A}_{k+m}\| + \|\mathcal{C}_{k,m}\|.$$

The hypotheses of this lemma ensure that there exists a positive integer $N'$ such that for $k > N'$ and $m\in\mathbb{N}_0$, $\|\mathcal{C}_{k,m}\|\le c_2/2$. We let $N_1 = \max\{N_0, N'\}$ and observe that for $k > N_1$, $m\in\mathbb{N}_0$, $\|\mathcal{B}_{k,m}\|\le c_1 + c_2/2$ and $\|\mathcal{B}_{k,m}v\|\ge \|\mathcal{A}_{k+m}v\| - \|\mathcal{C}_{k,m}v\|\ge \frac{c_2}{2}\|v\|$ for all $v\in\mathbb{X}_{k+m}$. By choosing $c_1' = c_1 + c_2/2$ and $c_2' = c_2/2$, we complete the proof of this lemma.

We now return to the discussion of the condition number of the matrix $\mathbf{B}_{k,m}$. To do this, we need auxiliary bases for $\mathbb{X}_0$ and $\mathbb{W}_i$, $i\in\mathbb{N}$, which are bi-orthogonal to $\ell_{ij}$, $j\in\mathbb{Z}_{w(i)}$, $i\in\mathbb{N}_0$; that is, $\mathbb{X}_0 = \mathrm{span}\{\zeta_{0j} : j\in\mathbb{Z}_{w(0)}\}$ and $\mathbb{W}_i = \mathrm{span}\{\zeta_{ij} : j\in\mathbb{Z}_{w(i)}\}$, with the bi-orthogonality property $\langle \ell_{i'j'}, \zeta_{ij}\rangle = \delta_{i'i}\delta_{j'j}$ for $i,i'\in\mathbb{N}_0$, $j\in\mathbb{Z}_{w(i)}$, $j'\in\mathbb{Z}_{w(i')}$. For any $v\in\mathbb{X}_n$ we have two representations of $v$, given by $v = \sum_{(i,j)\in\mathbb{U}_n} v_{ij}w_{ij}$ and $v = \sum_{(i,j)\in\mathbb{U}_n} v'_{ij}\zeta_{ij}$. We let $\mathbf{v}, \mathbf{v}'\in\mathbb{R}^{d_n}$ be the vectors of the coefficients in the two representations of $v$, respectively; that is, $\mathbf{v} = [v_{ij} : (i,j)\in\mathbb{U}_n]$ and $\mathbf{v}' = [v'_{ij} : (i,j)\in\mathbb{U}_n]$.

Theorem 9.7 Let $n\in\mathbb{N}_0$ and suppose that there exist functions $\mu_i(n)$, $\nu_i(n)$, $i = 1, 2$, such that for any $v\in\mathbb{X}_n$,

$$\mu_1(n)\|\mathbf{v}\| \le \|v\| \le \mu_2(n)\|\mathbf{v}\|, \qquad \nu_1(n)\|\mathbf{v}'\| \le \|v\| \le \nu_2(n)\|\mathbf{v}'\|. \tag{9.38}$$

Suppose that the hypothesis of Lemma 9.6 is satisfied. Then there exists a positive integer $N$ such that for any $k > N$ and any $m\in\mathbb{N}_0$,

$$\mathrm{cond}(\mathbf{A}_{k+m}) \le \frac{c_1\,\mu_2(k+m)\,\nu_2(k+m)}{c_2\,\mu_1(k+m)\,\nu_1(k+m)} \quad\text{and}\quad \mathrm{cond}(\mathbf{B}_{k,m}) \le \frac{c_1'\,\mu_2(k+m)\,\nu_2(k+m)}{c_2'\,\mu_1(k+m)\,\nu_1(k+m)}.$$


Proof We prove the bound for both matrices at the same time. To this end, for $n = k+m$, we let $\mathbf{H}_{k,m}$ denote either $\mathbf{A}_{k+m}$ or $\mathbf{B}_{k,m}$, and $\mathcal{H}_{k,m}$ the corresponding operator. For any $\mathbf{v} = [v_{ij} : (i,j)\in\mathbb{U}_{k+m}]\in\mathbb{R}^{d_{k+m}}$, we define a vector $\mathbf{g} = [g_{ij} : (i,j)\in\mathbb{U}_{k+m}]\in\mathbb{R}^{d_{k+m}}$ by letting $\mathbf{g} = \mathbf{H}_{k,m}\mathbf{v}$. Introducing $v = \sum_{(i,j)\in\mathbb{U}_{k+m}} v_{ij}w_{ij}$ and $g = \sum_{(i,j)\in\mathbb{U}_{k+m}} g_{ij}\zeta_{ij}$, we have the corresponding operator equation $\mathcal{H}_{k,m}v = g$. Since the hypotheses of Lemma 9.6 are satisfied, by Lemma 9.6 there exist positive constants $c_1'$ and $c_2'$ and a positive integer $N$ such that for $k > N$, $m\in\mathbb{N}_0$, $\|\mathcal{B}_{k,m}\|\le c_1'$ and $\|\mathcal{B}_{k,m}v\|\ge c_2'\|v\|$ for all $v\in\mathbb{X}_{k+m}$. Therefore, in either case there exist positive constants $c_1$ and $c_2$ and a positive integer $N$ such that for $k > N$, $m\in\mathbb{N}_0$, $\|\mathcal{H}_{k,m}\|\le c_1$ and $\|\mathcal{H}_{k,m}v\|\ge c_2\|v\|$ for all $v\in\mathbb{X}_{k+m}$. It follows from (9.38) that for any $k > N$ and $m\in\mathbb{N}_0$,

$$\|\mathbf{H}_{k,m}\mathbf{v}\| = \|\mathbf{g}\| \le \frac{\|g\|}{\nu_1(k+m)} = \frac{\|\mathcal{H}_{k,m}v\|}{\nu_1(k+m)} \le \frac{c_1\|v\|}{\nu_1(k+m)} \le \frac{c_1\,\mu_2(k+m)}{\nu_1(k+m)}\|\mathbf{v}\|,$$

which yields $\|\mathbf{H}_{k,m}\| \le c_1\mu_2(k+m)/\nu_1(k+m)$. Likewise, by (9.38) we have that for any $k > N$ and $m\in\mathbb{N}_0$,

$$\mu_1(k+m)\|\mathbf{v}\| \le \|v\| \le \frac{\|\mathcal{H}_{k,m}v\|}{c_2} = \frac{\|g\|}{c_2} \le \frac{\nu_2(k+m)}{c_2}\|\mathbf{g}\| = \frac{\nu_2(k+m)}{c_2}\|\mathbf{H}_{k,m}\mathbf{v}\|,$$

which ensures that $\|\mathbf{H}_{k,m}^{-1}\| \le \nu_2(k+m)/(c_2\,\mu_1(k+m))$. Combining the estimates for $\|\mathbf{H}_{k,m}\|$ and $\|\mathbf{H}_{k,m}^{-1}\|$, we confirm the bounds on the condition numbers of $\mathbf{A}_{k+m}$ and $\mathbf{B}_{k,m}$.

We next apply this theorem to two specific cases to obtain two useful specialresults

Corollary 9.8 For the multiscale Galerkin methods developed in Chapter 5 and the multiscale Petrov–Galerkin methods developed in Chapter 6, $\mathrm{cond}(\mathbf{A}_{k+m}) = O(1)$ and $\mathrm{cond}(\mathbf{B}_{k,m}) = O(1)$.

Proof In these multiscale methods we use orthogonal multiscale bases. Thus the quantities $\mu_i(n)$ and $\nu_i(n)$, $i = 1, 2$, appearing in (9.38) are constants independent of $n$. By Theorem 9.7, in these cases the condition numbers $\mathrm{cond}(\mathbf{A}_{k+m})$ and $\mathrm{cond}(\mathbf{B}_{k,m})$ are constants independent of $n$.

Corollary 9.9 For the multiscale collocation methods developed in Chapter 7,

$$\mathrm{cond}(\mathbf{A}_{k+m}) = O(\log^2 d_{k+m}) \quad\text{and}\quad \mathrm{cond}(\mathbf{B}_{k,m}) = O(\log^2 d_{k+m}),$$

where $d_{k+m}$ denotes the order of the matrices $\mathbf{A}_{k+m}$ and $\mathbf{B}_{k,m}$.

Proof In the multiscale collocation methods developed in Chapter 7, we have that $\mu_1(k+m) = O(1)$, $\nu_1(k+m) = O(1)$, $\mu_2(k+m) = O(\log d_{k+m})$ and $\nu_2(k+m) = O(\log d_{k+m})$. Therefore the result of this corollary follows from Theorem 9.7.

9.1.3 Compression schemes

This section is devoted to an application of the MAM for solving linear systems resulting from compression schemes derived from multiscale methods.

We assume that a compression strategy has been applied to compress the full matrix $\mathbf{K}_n$ to obtain a sparse matrix $\tilde{\mathbf{K}}_n$, where the number of nonzero entries of $\tilde{\mathbf{K}}_n$ is of order $d_n\log^\alpha d_n$ with $\alpha = 0, 1$ or $2$; that is,

$$\mathcal{N}(\tilde{\mathbf{K}}_n) = O(d_n\log^\alpha d_n). \tag{9.39}$$
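The effect of compressing a dense kernel matrix can be illustrated with a toy magnitude-based truncation (this is only a stand-in; it is not the distance-based truncation strategy of Chapters 5 and 7, and the kernel, grid size and threshold below are all hypothetical):

```python
import numpy as np

# Toy illustration: zero all entries of a model kernel matrix whose
# magnitude falls below a hypothetical threshold, then measure the
# resulting compression rate N(K_tilde) / d_n^2.
n = 256
s, t = np.meshgrid(np.linspace(0.0, 1.0, n), np.linspace(0.0, 1.0, n),
                   indexing="ij")
K = np.log(np.abs(s - t) + 1.0 / n)      # weakly singular model kernel
eps = 2.0                                 # hypothetical truncation threshold
K_tilde = np.where(np.abs(K) > eps, K, 0.0)
rate = np.count_nonzero(K_tilde) / K.size
assert 0.0 < rate < 0.5                   # most entries are dropped
```

Only a band near the singularity $s = t$ survives, so the nonzero count grows much more slowly than $d_n^2$, which is the point of estimate (9.39).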

Methods of this type were studied in [4, 28, 64, 68, 69, 88, 94, 95, 202, 226, 241, 260, 261]. In particular, when the orthogonal piecewise polynomial multiscale bases and the interpolating piecewise polynomial multiscale bases constructed, respectively, in [200] and [65] are used to develop the multiscale Galerkin method (see Chapter 5), the multiscale Petrov–Galerkin methods (see Chapter 6) and the multiscale collocation method (see Chapter 7), we have that $\mathcal{N}(\tilde{\mathbf{K}}_n) = O(n^\alpha\mu^n)$, where the corresponding multiscale bases are constructed with a $\mu$-adic subdivision of the domain, provided the kernel $K(s,t)$, $s,t\in\Omega\subseteq\mathbb{R}^d$, of the integral operator $\mathcal{K}$ satisfies the conditions described in these papers. The compression scheme for these methods has the form

$$(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\tilde{\mathbf{u}}_n = \mathbf{f}_n. \tag{9.40}$$

Equation (9.40) has an equivalent operator equation. Let $\tilde{\mathcal{K}}_n : \mathbb{X}_n\to\mathbb{X}_n$ be the linear operator which, relative to the basis $\{w_{ij} : (i,j)\in\mathbb{U}_n\}$, has the matrix representation $\mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n$. We have that

$$(\mathcal{I} - \tilde{\mathcal{K}}_n)\tilde{u}_n = \mathcal{P}_n f, \tag{9.41}$$

where $\tilde{u}_n\in\mathbb{X}_n$ is related to the solution $\tilde{\mathbf{u}}_n$ of equation (9.40) by the formula $\tilde{u}_n = \tilde{\mathbf{u}}_n^T\mathbf{w}_n$, where $\mathbf{w}_n = [w_{ij} : (i,j)\in\mathbb{U}_n]$. It is known that, under


certain conditions, for Galerkin methods and collocation methods we have that if $u\in W^{r,p}(\Omega)$, then

$$\|u - \tilde{u}_n\|_p \le c\,\mu^{-rn/d}n^\alpha\|u\|_{r,p}, \tag{9.42}$$

where $p = 2$ for the Galerkin method and $p = \infty$ for the collocation method, and $r$ denotes the order of the piecewise polynomials used in these methods.

To develop the MAM for solving the operator equation (9.41), we note that $\tilde{\mathcal{K}}_n = \mathcal{P}_n\tilde{\mathcal{K}}_n$, from which equation (9.41) is rewritten as

$$(\mathcal{I} - \mathcal{P}_n\tilde{\mathcal{K}}_n)\tilde{u}_n = \mathcal{P}_n f. \tag{9.43}$$

Hence, for $n = k+m$, we have that $\tilde{\mathcal{K}}_{k+m} = \mathcal{P}_k\tilde{\mathcal{K}}_{k+m} + (\mathcal{P}_{k+m} - \mathcal{P}_k)\tilde{\mathcal{K}}_{k+m}$, and from this equation we define $\tilde{\mathcal{B}}_{k,m} = \mathcal{I}_{k+m} - \mathcal{P}_k\tilde{\mathcal{K}}_{k+m}$ and $\tilde{\mathcal{C}}_{k,m} = -(\mathcal{P}_{k+m} - \mathcal{P}_k)\tilde{\mathcal{K}}_{k+m}$. As done in Section 9.1.2, we can define the MAM for equation (9.41).

We have the following convergence result

Theorem 9.10 Let $u\in W^{r,p}(\Omega)$. Suppose that the estimate (9.42) holds and that

$$\lim_{n\to\infty}\|\mathcal{K}_n - \tilde{\mathcal{K}}_n\| = 0. \tag{9.44}$$

Then there exist a positive integer $N$ and a positive constant $c$ such that for all $k > N$ and $m\in\mathbb{N}_0$,

$$\|u - \tilde{u}_{k,m}\| \le c\,\mu^{-r(k+m)/d}(k+m)^\alpha\|u\|_{r,p}.$$

Proof The proof employs Theorem 9.2 with the majorization sequence $\gamma_n = c\,\mu^{-rn/d}n^\alpha\|u\|_{r,p}$. We conclude that $\gamma_{n+1}/\gamma_n \ge \mu^{-r/d}$. In other words, $\{\gamma_n\}$ is a majorization sequence of $\{R_n\}$ with $\sigma = \mu^{-r/d}$. It is readily shown that

$$\|\tilde{\mathcal{C}}_{k,m}\| \le 2p\|\mathcal{K}_{k+m} - \tilde{\mathcal{K}}_{k+m}\| + \|\mathcal{C}_{k,m}\| + 2p^2\|(\mathcal{P}_{k+m} - \mathcal{I})\mathcal{K}\|.$$

By (9.44), we have that $\lim_{k\to\infty}\|\tilde{\mathcal{C}}_{k,m}\| = 0$ uniformly for $m\in\mathbb{N}_0$. Also, it can be verified that the other conditions of Theorem 9.2 are satisfied for both the multiscale Galerkin method and the collocation method using piecewise polynomial multiscale bases of order $r$. By applying Theorem 9.2, we complete the proof.

We now formulate the matrix form of the MAM directly from the compressed matrix. Because the compressed matrix $\tilde{\mathbf{K}}_n$ inherits the multilevel structure of the matrix $\mathbf{K}_n$, we may use the MAM to solve equation (9.40), as described in Section 7.1.2. Specifically, we partition the compressed matrix $\tilde{\mathbf{K}}_n$ as was done for the full matrix $\mathbf{K}_n$ in Section 7.1.2. Let

$$\tilde{\mathbf{K}}^L_{k,m} = \begin{bmatrix}
\tilde{\mathbf{K}}^k_{00} & \tilde{\mathbf{K}}^k_{01} & \cdots & \tilde{\mathbf{K}}^k_{0m}\\
\mathbf{0} & \mathbf{0} & \cdots & \mathbf{0}\\
\vdots & \vdots & & \vdots\\
\mathbf{0} & \mathbf{0} & \cdots & \mathbf{0}
\end{bmatrix}
\quad\text{and}\quad
\tilde{\mathbf{K}}^H_{k,m} = \begin{bmatrix}
\mathbf{0} & \mathbf{0} & \cdots & \mathbf{0}\\
\tilde{\mathbf{K}}^k_{10} & \tilde{\mathbf{K}}^k_{11} & \cdots & \tilde{\mathbf{K}}^k_{1m}\\
\vdots & \vdots & & \vdots\\
\tilde{\mathbf{K}}^k_{m0} & \tilde{\mathbf{K}}^k_{m1} & \cdots & \tilde{\mathbf{K}}^k_{mm}
\end{bmatrix},$$

and define $\tilde{\mathbf{B}}_{k,m} = \mathbf{E}_{k+m} - \tilde{\mathbf{K}}^L_{k,m}$ and $\tilde{\mathbf{C}}_{k,m} = -\tilde{\mathbf{K}}^H_{k,m}$.

Algorithm 3 (Matrix form of the augmentation algorithm for compression schemes) Let $k$ be a fixed positive integer.

Step 1 Solve $\tilde{\mathbf{u}}_k\in\mathbb{R}^{d_k}$ from the equation $(\mathbf{E}_k - \tilde{\mathbf{K}}_k)\tilde{\mathbf{u}}_k = \mathbf{f}_k$.

Step 2 Set $\tilde{\mathbf{u}}_{k,0} := \tilde{\mathbf{u}}_k$ and compute the splitting matrices $\tilde{\mathbf{K}}^L_{k,0}$ and $\tilde{\mathbf{K}}^H_{k,0}$.

Step 3 For $m\in\mathbb{N}$, suppose that $\tilde{\mathbf{u}}_{k,m-1}\in\mathbb{R}^{d_{k+m-1}}$ has been obtained, and do the following:

• Augment the matrices $\tilde{\mathbf{K}}^L_{k,m-1}$ and $\tilde{\mathbf{K}}^H_{k,m-1}$ to form $\tilde{\mathbf{K}}^L_{k,m}$ and $\tilde{\mathbf{K}}^H_{k,m}$, respectively.

• Augment $\tilde{\mathbf{u}}_{k,m-1}$ by setting $\tilde{\mathbf{u}}^H_{k,m} = \begin{bmatrix}\tilde{\mathbf{u}}_{k,m-1}\\ \mathbf{0}\end{bmatrix}$.

• Solve $\tilde{\mathbf{u}}_{k,m}\in\mathbb{R}^{d_{k+m}}$ from the algebraic equations

$$(\mathbf{E}_{k+m} - \tilde{\mathbf{K}}^L_{k,m})\tilde{\mathbf{u}}_{k,m} = \mathbf{f}_{k+m} + \tilde{\mathbf{K}}^H_{k,m}\tilde{\mathbf{u}}^H_{k,m}.$$
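The level-by-level structure of Algorithm 3 can be sketched with dense random stand-ins for the compressed blocks (purely illustrative: in practice the blocks of $\tilde{\mathbf{K}}_n$ come from the truncation strategies of Chapters 5 and 7 and are sparse, and the level dimensions below are hypothetical):

```python
import numpy as np

# Sketch of Algorithm 3 with dense random stand-ins for E_n and K-tilde_n.
rng = np.random.default_rng(2)
sizes = [8, 8, 16, 32]                 # hypothetical d_k and widths of W_{k+1},...
off = np.cumsum([0] + sizes)
d = off[-1]
E = np.triu(0.05 * rng.standard_normal((d, d)), 1) + np.eye(d)
Kt = 0.05 * rng.standard_normal((d, d))     # stand-in for the compressed matrix
f = rng.standard_normal(d)

# Step 1: solve the initial-level system (E_k - K_k) u_k = f_k
u = np.linalg.solve(E[:off[1], :off[1]] - Kt[:off[1], :off[1]], f[:off[1]])

# Steps 2 and 3: augment level by level
for m in range(1, len(sizes)):
    n = off[m + 1]
    KL = np.zeros((n, n))
    KL[:off[1], :] = Kt[:off[1], :n]        # lower-frequency part K^L_{k,m}
    KH = Kt[:n, :n] - KL                    # higher-frequency part K^H_{k,m}
    u_aug = np.concatenate([u, np.zeros(sizes[m])])     # [u_{k,m-1}; 0]
    u = np.linalg.solve(E[:n, :n] - KL, f[:n] + KH @ u_aug)

assert u.shape == (d,) and np.all(np.isfinite(u))
```

Note that only the leading $d_k\times d_k$ block ever has to be "really" inverted; the remaining rows of $\mathbf{E}_{k+m} - \tilde{\mathbf{K}}^L_{k,m}$ are unit upper triangular.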

Since $\mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n$ is the matrix representation of the operator $\tilde{\mathcal{K}}_n$ relative to the basis $\{w_{ij} : (i,j)\in\mathbb{U}_n\}$, we conclude that

$$\langle \ell_{i'j'}, \tilde{\mathcal{K}}_n w_{ij}\rangle = \sum_{(i'',j'')\in\mathbb{U}_n} (\mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n)_{i''j'',ij}\,\mathbf{E}_{i'j',i''j''} = (\tilde{\mathbf{K}}_n)_{i'j',ij},$$

and see that the matrix form of the MAM derived above is equivalent to the corresponding operator form.

In the next result we estimate the number of multiplications used in Algorithm 3.

Theorem 9.11 Let $k$ be a fixed positive integer and $m\in\mathbb{N}_0$. Suppose that for some $\alpha\in\{0,1,2\}$ and some integer $\mu > 1$, $\mathcal{N}(\tilde{\mathbf{K}}_n) = O(n^\alpha\mu^n)$, and suppose that for $n\in\mathbb{N}_0$, $\mathcal{N}(\mathbf{E}_n)\le \mathcal{N}(\tilde{\mathbf{K}}_n)$. Then the total number of multiplications required for computing $\tilde{\mathbf{u}}_{k,m}$ from $\tilde{\mathbf{u}}_k$ is $O((m+k)^\alpha\mu^{m+k})$.

Proof According to Proposition 9.5, the total number of multiplications required for computing $\tilde{\mathbf{u}}_{k,m}$ from $\tilde{\mathbf{u}}_k$ is given by

$$N_{k,m} = O(m) + \sum_{i\in\mathbb{N}_m}\bigl[\mathcal{N}(\tilde{\mathbf{K}}_{k+i}) + \mathcal{N}(\mathbf{E}_{k+i})\bigr].$$

Since $\mathcal{N}(\mathbf{E}_n)\le \mathcal{N}(\tilde{\mathbf{K}}_n)$, it suffices to estimate the quantity $\sum_{i\in\mathbb{N}_m}\mathcal{N}(\tilde{\mathbf{K}}_{k+i})$. To this end, we may show the identity

$$\sum_{i\in\mathbb{N}_m}(k+i)^\alpha\mu^{k+i} = O((k+m)^\alpha\mu^{k+m}).$$

Using this formula and the hypotheses of this theorem, we complete the proof.

It can be verified that for the compression schemes presented in Chapters 5, 6 and 7, condition (9.44) and the assumption of Theorem 9.11 are fulfilled. Therefore the conclusions of Theorems 9.10 and 9.11 hold for the class of methods proposed in these chapters.

9.1.4 Numerical experiments

We present in this subsection numerical examples to demonstrate the performance of the MAM associated with the multiscale Galerkin method and the multiscale collocation method. To focus on the main issues of the MAM, we first choose a second-kind integral equation on the unit interval, since the augmentation method is independent of the dimension of the domain of the integral equation. Consider equation (9.19) with the integral operator $\mathcal{K}$ defined by

$$(\mathcal{K}u)(s) = \int_0^1 \log|\cos(\pi s) - \cos(\pi t)|\,u(t)\,dt, \quad s\in\Omega := [0,1].$$

In our numerical experiments, for convenience of comparison, we choose the right-hand-side function in equation (9.19) as

$$f(s) = \sin(\pi s) + \frac{1}{\pi}\bigl[2 - (1-\cos(\pi s))\log(1-\cos(\pi s)) - (1+\cos(\pi s))\log(1+\cos(\pi s))\bigr],$$

so that $u(s) = \sin(\pi s)$, $s\in\Omega$, is the exact solution of the equation. We choose $\mathbb{X}_n$ as the space of piecewise linear polynomials on $\Omega$ with knots at the dyadic points $j/2^n$, $j = 1, 2, \ldots, 2^n - 1$. Note that the theoretical convergence order for piecewise linear approximation is 2. The following two numerical algorithms were run on a personal computer with a 600 MHz CPU and 256 MB of memory.

Example 1 (Multiscale Galerkin methods) In our first experiment we consider the multiscale Galerkin method for solving equation (9.19). Choose the orthonormal basis $w_{00}(t) = 1$ and $w_{01}(t) = \sqrt{3}(2t-1)$, $t\in\Omega$, for $\mathbb{X}_0$, and

$$w_{10}(t) = \begin{cases} 1-6t, & t\in[0,\tfrac12],\\ 5-6t, & t\in(\tfrac12,1], \end{cases} \qquad w_{11}(t) = \begin{cases} \sqrt{3}(1-4t), & t\in[0,\tfrac12],\\ \sqrt{3}(4t-3), & t\in(\tfrac12,1], \end{cases}$$

for $\mathbb{W}_1$. An orthonormal basis $\{w_{ij} : j = 0, 1, \ldots, 2^i - 1\}$ for $\mathbb{W}_i$ is constructed according to the construction given in Section 4.3. We choose $\mathcal{P}_n = P_n$, the orthogonal projection mapping $L^2(\Omega)$ onto $\mathbb{X}_n$, and $\ell_{ij} = w_{ij}$.
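As a quick check, the following sketch verifies numerically that the four functions above are orthonormal in $L^2(0,1)$; Gauss quadrature applied on each half-interval is exact here, since the products are piecewise polynomials of low degree:

```python
import numpy as np

# Verify orthonormality of {w00, w01, w10, w11} by piecewise Gauss quadrature.
def w00(t): return np.ones_like(t)
def w01(t): return np.sqrt(3.0) * (2.0 * t - 1.0)
def w10(t): return np.where(t <= 0.5, 1.0 - 6.0 * t, 5.0 - 6.0 * t)
def w11(t): return np.sqrt(3.0) * np.where(t <= 0.5, 1.0 - 4.0 * t, 4.0 * t - 3.0)

x, w = np.polynomial.legendre.leggauss(3)   # exact for polynomials up to degree 5
def inner(f, g):                            # integrate f*g over [0,1/2] and [1/2,1]
    total = 0.0
    for a, b in ((0.0, 0.5), (0.5, 1.0)):
        t = 0.5 * (b - a) * x + 0.5 * (a + b)
        total += 0.5 * (b - a) * np.sum(w * f(t) * g(t))
    return total

basis = [w00, w01, w10, w11]
G = np.array([[inner(f, g) for g in basis] for f in basis])   # Gram matrix
assert np.allclose(G, np.eye(4))
```

The Gram matrix comes out as the $4\times 4$ identity, confirming that $\mathbf{E}_n$ is the identity for this basis, as stated below.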

In this case $\mathbf{E}_n$ is the identity matrix, because of the orthogonality of the basis, and the matrix $\mathbf{K}_n$ is truncated to form the compressed matrix $\tilde{\mathbf{K}}_n$ according to a strategy presented in Chapter 5. The MAM is then applied to the compressed linear system.

In Table 9.1 we report the approximation error and convergence order of the numerical solution obtained from the MAM, and the computing time for solving the linear system in the Galerkin case, where we use the initial level $k = 5$. They confirm our theoretical estimates. In all tables presented here we use "Approx. order" for the computed approximation order of the numerical solution $\tilde{u}_{k,m}$ approximating the exact solution $u$, "Comp. rate" for the compression rate of the compressed coefficient matrix $\tilde{\mathbf{K}}_n$ and "CT" for the computing time (measured in seconds) used to solve the linear system using the augmentation method.

Table 9.1 Convergence and computational speed for the Galerkin method

m   ‖u − ũ_{k,m}‖_{L²}   Approx. order   Comp. rate   CT
0   1.02e-3              —               1.000        <0.01
1   2.56e-4              1.99            0.688        <0.01
2   6.36e-5              2.00            0.479        <0.01
3   1.59e-5              2.00            0.312        0.01
4   4.22e-6              1.91            0.193        0.02
5   1.11e-6              1.92            0.116        0.03


Table 9.2 Condition numbers of the matrices from the Galerkin method

m   cond(A_{5+m})   Δ₁         cond(B_{5,m})   Δ₂
0   1.9551          —          —               —
1   1.977           0.02231    2.0032          —
2   1.989           0.011255   2.003           0.0003283
3   1.994           0.005669   2.003           0.0000404
4   1.997           0.002843   2.003           0.0000065
5   1.999           0.001416   2.003           0.0000001

We report the spectral condition numbers of $\mathbf{A}_{5+m}$ and $\mathbf{B}_{5,m}$ in Table 9.2. We use the notations $\Delta_1 = \mathrm{cond}(\mathbf{A}_{k+m}) - \mathrm{cond}(\mathbf{A}_{k+m-1})$ and $\Delta_2 = \mathrm{cond}(\mathbf{B}_{k,m}) - \mathrm{cond}(\mathbf{B}_{k,m-1})$.

Example 2 (The multiscale collocation method) In this case we choose $\mathcal{P}_n$ and $P_n$, respectively, as the interpolatory and the orthogonal projections onto $\mathbb{X}_n$, and define the subspaces $\mathbb{W}_n$ by the orthogonal projection $P_n$. Specifically, we choose $\mathbb{X}_0 = \mathrm{span}\{w_{00}, w_{01}\}$, where $w_{00}(t) = 2-3t$ and $w_{01}(t) = -1+3t$, $t\in\Omega$, and $\mathbb{W}_1 = \mathrm{span}\{w_{10}, w_{11}\}$, where

$$w_{10}(t) = \begin{cases} 1-\tfrac{9}{2}t, & t\in[0,\tfrac12],\\[2pt] -1+\tfrac{3}{2}t, & t\in(\tfrac12,1], \end{cases} \qquad w_{11}(t) = \begin{cases} \tfrac12-\tfrac{3}{2}t, & t\in[0,\tfrac12],\\[2pt] -\tfrac{7}{2}+\tfrac{9}{2}t, & t\in(\tfrac12,1]. \end{cases}$$

We also need multiscale collocation functionals for the multiscale collocation method. To this end, for any $s\in\Omega$, we use $\delta_s$ to denote the linear functional in $(C(\Omega))^*$ defined for $x\in C(\Omega)$ by the equation $\langle\delta_s, x\rangle = x(s)$. We choose the spaces of functionals as $\mathbb{L}_0 = \mathrm{span}\{\ell_{00}, \ell_{01}\}$, where $\ell_{00} = \delta_{1/3}$ and $\ell_{01} = \delta_{2/3}$, and $\mathbb{V}_1 = \mathrm{span}\{\ell_{10}, \ell_{11}\}$, where

$$\ell_{10} = \delta_{1/6} - \tfrac{3}{2}\delta_{1/3} + \tfrac{1}{2}\delta_{2/3} \quad\text{and}\quad \ell_{11} = \tfrac{1}{2}\delta_{1/3} - \tfrac{3}{2}\delta_{2/3} + \delta_{5/6}.$$

Bases $\{w_{ij}\}$ for $\mathbb{W}_i$ and $\{\ell_{ij}\}$ for $\mathbb{V}_i$ are constructed according to the description in Chapter 7. We remark that by construction $\mathbb{V}_{l+1}\perp\mathbb{X}_l$ for all $l\in\mathbb{N}_0$. In this case $\mathbf{E}_n$ is upper triangular with diagonal entries all equal to one. The matrix $\mathbf{K}_n$ is truncated to form the compressed matrix $\tilde{\mathbf{K}}_n$ according to the strategy presented in Chapter 7. Again, the MAM is applied to the compressed linear system.
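The orthogonality $\mathbb{V}_1\perp\mathbb{X}_0$ can be checked directly by applying the two functionals $\ell_{10}$ and $\ell_{11}$ to $w_{00}$ and $w_{01}$; the sketch below does exactly this:

```python
# Check that the Example 2 functionals annihilate X_0, i.e. V_1 is
# orthogonal to X_0 (each point-evaluation combination applied to the
# linear basis functions of X_0 gives zero).
def w00(t): return 2.0 - 3.0 * t
def w01(t): return -1.0 + 3.0 * t

def ell10(x): return x(1 / 6) - 1.5 * x(1 / 3) + 0.5 * x(2 / 3)
def ell11(x): return 0.5 * x(1 / 3) - 1.5 * x(2 / 3) + x(5 / 6)

for ell in (ell10, ell11):
    for w in (w00, w01):
        assert abs(ell(w)) < 1e-14
```

All four pairings vanish, which is the semi-bi-orthogonality condition (9.34) at the coarsest level.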

The numerical results of the collocation method are shown in Table 9.3, where we again use the initial level $k = 5$.

We also use Tables 9.1 and 9.3 to demonstrate the applicability of the MAM to multiscale collocation methods. We begin with an initial approximate


Table 9.3 Convergence and computational speed for the collocation method

m    Matrix size   ‖u − ũ_{k,m}‖_{L∞}   Approx. order   Comp. rate   CT
0    64            3.91e-3              —               0.891        <0.01
1    128           1.11e-3              1.81            0.766        <0.01
2    256           2.89e-4              1.94            0.546        <0.01
3    512           6.47e-5              2.16            0.356        0.010
4    1024          1.45e-5              2.15            0.220        0.020
5    2048          3.58e-6              2.02            0.131        0.041
6    4096          8.81e-7              2.02            0.076        0.100
7    8192          2.20e-7              2.00            0.044        0.240
8    16384         5.80e-8              1.92            0.024        0.581
9    32768         1.42e-8              2.02            0.014        1.342
10   65536         3.66e-9              1.96            0.007        2.994

Table 9.4 Condition numbers of the matrices from the collocation method

m   cond(A_{5+m})   Δ₁          cond(B_{5,m})   Δ₂
0   77.422231       —           —               —
1   94.46764        17.04541    93.832852       —
2   110.96629       16.49865    110.03597       16.203123
3   126.95555       15.98926    125.89373       15.857764
4   142.56352       15.60797    141.51313       15.619405
5   157.89732       15.33380    156.90938       15.39625

solution $u_5$. The numerical results show that the accuracy is improved in exactly the same order as our theoretical result as we move from one level to a higher level using the augmentation method. In this experiment, at each level we perform a uniform subdivision, since the solution is smooth.

In Table 9.4 we list the spectral condition numbers of the matrices $\mathbf{A}_{k+m}$ and $\mathbf{B}_{k,m}$ for the collocation method. We also compute the differences of the condition numbers of these matrices between two consecutive levels, to observe the growth of the condition numbers. The purpose of this experiment is to confirm the stability analysis presented in Section 9.1.2. Note that theoretically we have that $\mathrm{cond}(\mathbf{A}_{k+m}) = O(\log^2 d_{k+m})$ and $\mathrm{cond}(\mathbf{B}_{k,m}) = O(\log^2 d_{k+m})$. Our numerical results coincide with these theoretical estimates. This experiment shows that the MAM does not ruin the stability of the original multilevel scheme (9.40). Normally, the original multilevel scheme is well conditioned, since the use of multiscale bases leads to a preconditioner. A good discrete method should preserve this nice feature of equation (9.40). From our numerical results we see that the condition number of $\mathbf{B}_{k,m}$ is smaller than that of $\mathbf{A}_{k+m}$.


Next we consider a boundary integral equation resulting from a reformulation of a boundary value problem for the Laplace equation.

Example 3 (The multiscale collocation method applied to boundary integral equations) In this experiment we apply the multiscale collocation method to solving the boundary integral equation reformulated from the boundary value problem

$$\begin{cases} \Delta u(x) = 0, & x\in\Omega,\\[4pt] \dfrac{\partial u(x)}{\partial n_x} = -u(x) + g_0(x), & x\in\partial\Omega, \end{cases} \tag{9.45}$$

where the domain

$$\Omega = \left\{x = (x_1, x_2) : x_1^2 + \frac{x_2^2}{4} < 1\right\},$$

with $\partial\Omega$ being its boundary. We choose the function $g_0$ so that

$$u_0(x) = e^{x_1}\cos(x_2), \quad x\in\overline{\Omega},$$

is the exact solution of equation (9.45). Here we note that the solution is analytic. By the fundamental solution of the equation, we reformulate the boundary value problem as the boundary integral equation (cf. Section 2.1.3)

$$u(x) - \frac{1}{\pi}\int_{\partial\Omega} u(y)\frac{\partial}{\partial n_y}\log|x-y|\,ds_y - \frac{1}{\pi}\int_{\partial\Omega} u(y)\log|x-y|\,ds_y = -\frac{1}{\pi}\int_{\partial\Omega} g_0(y)\log|x-y|\,ds_y, \quad x\in\partial\Omega. \tag{9.46}$$

Utilizing the parametrization $x(t) = (\cos(2\pi t), 2\sin(2\pi t))$, $t\in[0,1]$, of the boundary, we establish the boundary integral equation

$$u(t) - \int_0^1 K(t,\tau)u(\tau)\,d\tau - \int_0^1 L(t,\tau)u(\tau)\,d\tau = -\int_0^1 L(t,\tau)g_0(\tau)\,d\tau, \quad t\in[0,1), \tag{9.47}$$

where the kernels

$$K(t,\tau) = \frac{2}{1 + 3\cos^2(\pi(t+\tau))}$$

and

$$L(t,\tau) = 2\sqrt{1 + 3\cos^2(2\pi\tau)}\,\log\!\left(2\sin|\pi(t-\tau)|\sqrt{1 + 3\cos^2(\pi(t+\tau))}\right)$$
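The factorization of $|x(t)-x(\tau)|$ that produces the logarithmic argument in $L$ can be verified numerically; the sketch below checks the identity $|x(t)-x(\tau)| = 2|\sin(\pi(t-\tau))|\sqrt{1+3\cos^2(\pi(t+\tau))}$ at random parameter pairs:

```python
import numpy as np

# Numerical check of the distance factorization on the ellipse
# x(t) = (cos 2*pi*t, 2*sin 2*pi*t), which splits off the log singularity.
rng = np.random.default_rng(3)
t, tau = rng.random(1000), rng.random(1000)
x = np.stack([np.cos(2 * np.pi * t), 2 * np.sin(2 * np.pi * t)])
y = np.stack([np.cos(2 * np.pi * tau), 2 * np.sin(2 * np.pi * tau)])
lhs = np.linalg.norm(x - y, axis=0)
rhs = 2 * np.abs(np.sin(np.pi * (t - tau))) \
    * np.sqrt(1 + 3 * np.cos(np.pi * (t + tau)) ** 2)
assert np.allclose(lhs, rhs)
```

This is exactly why the singular part of the parametrized kernel is $\log(2\sin|\pi(t-\tau)|\cdot\sqrt{\,\cdot\,})$, with the remaining factor smooth.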


Table 9.5 Convergence and computational speed for the collocation method

m   d_{4+m}   ‖u − ũ_{4,m}‖_∞   Approx. order   Comp. rate 1   Comp. rate 2   CT
0   64        4.56e-2           —               0.63           0.75           —
1   128       1.39e-2           1.71            0.573          0.516          0.001
2   256       3.37e-3           2.04            0.343          0.266          0.002
3   512       8.09e-4           2.06            0.205          0.137          0.004
4   1024      1.82e-4           2.15            0.121          0.070          0.008
5   2048      4.46e-5           2.03            0.071          0.036          0.014
6   4096      1.08e-5           2.04            0.041          0.019          0.029
7   8192      2.60e-6           2.05            0.023          0.010          0.059
8   16384     5.91e-7           2.14            0.013          0.005          0.122
9   32768     1.19e-7           2.31            0.007          0.003          0.250

Table 9.6 Condition numbers of the matrices from the collocation method

m   d_{4+m}   cond(A_{4+m})   Δ₁    cond(B_{4,m})   Δ₂
0   64        10.2            —     —               —
1   128       12.0            1.8   11.9            —
2   256       13.7            1.7   13.5            1.6
3   512       15.2            1.6   15.0            1.5
4   1024      16.7            1.5   16.5            1.5
5   2048      18.2            1.4   18.0            1.5

are smooth and weakly singular, respectively. The solution of (9.45) corresponding to $u_0$ is

$$u(t) = e^{\cos(2\pi t)}\cos(2\sin(2\pi t)).$$

We use the multiscale collocation method described in Example 2 to solve the boundary integral equation (9.47). The programs were run on a personal computer with a 3.40 GHz CPU and 8.00 GB of memory. The numerical results are shown in Tables 9.5 and 9.6, where we use "Comp. rate 1" and "Comp. rate 2" for the compression rates of the compressed coefficient matrices for the weakly singular and smooth kernels, respectively. The numerical results confirm the theoretical estimates.

In Table 9.6 we list the spectral condition numbers of the matrices $\mathbf{A}_{k+m}$ and $\mathbf{B}_{k,m}$ for the collocation method. The differences of the condition numbers of these matrices between two consecutive levels are also computed, to show the growth of the condition numbers. The numerical results confirm the stability analysis presented in Section 9.1.2.


Both the theoretical analysis and the numerical experiments show that the MAM is particularly suitable for solving the large-scale linear systems that result from compression schemes applied to integral equations. It is a stable and fast algorithm which provides accurate numerical solutions of integral equations.

9.2 Multilevel iteration methods

In this section we develop MIMs for solving Fredholm integral equations of the second kind, based on the multiscale methods for which the approximation subspace has a multiresolution decomposition. To describe the multilevel iteration schemes and the ideas leading to these algorithms, we utilize multiscale Galerkin methods.

9.2.1 Multilevel iteration schemes

Let $\mathbb{X}$ be a Hilbert space and $\mathcal{K} : \mathbb{X}\to\mathbb{X}$ an operator such that $\mathcal{I}-\mathcal{K}$ is bijective on $\mathbb{X}$. Consider the Fredholm integral equation of the second kind given in the form

$$(\mathcal{I}-\mathcal{K})u = f, \tag{9.48}$$

where $f\in\mathbb{X}$ is given and $u\in\mathbb{X}$ is the solution to be determined.

To solve equation (9.48) by the Galerkin method, let $\{\mathbb{X}_n\}$ be a nested sequence of finite-dimensional subspaces of $\mathbb{X}$,

$$\mathbb{X}_n \subseteq \mathbb{X}_{n+1}, \quad n\in\mathbb{N}_0, \tag{9.49}$$

such that $\overline{\bigcup_{n\in\mathbb{N}_0}\mathbb{X}_n} = \mathbb{X}$. Let $\mathcal{P}_n : \mathbb{X}\to\mathbb{X}_n$ denote the sequence of orthogonal projections. Then $\|\mathcal{P}_n\| = 1$, $\mathbb{X}_n = \mathcal{P}_n\mathbb{X}$, $\mathcal{P}_n^* = \mathcal{P}_n$ and $\mathcal{P}_n\to\mathcal{I}$ pointwise in $\mathbb{X}$. The Galerkin method is to find $u_n\in\mathbb{X}_n$ satisfying the equation

$$(\mathcal{I}-\mathcal{P}_n\mathcal{K})u_n = \mathcal{P}_n f. \tag{9.50}$$

We assume that $\mathcal{K}$ is a compact operator on $\mathbb{X}$. In this case the pointwise convergence of $\mathcal{P}_n$ to $\mathcal{I}$ and the compactness of $\mathcal{K}$ and $\mathcal{K}^*$ lead to

$$\lim_{n\to\infty}\|\mathcal{K}-\mathcal{P}_n\mathcal{K}\| = \lim_{n\to\infty}\|\mathcal{K}-\mathcal{K}\mathcal{P}_n\| = 0,$$

which implies, for all $n$ large enough, that $\mathcal{I}-\mathcal{P}_n\mathcal{K}$ is invertible and its inverse is bounded by a constant independent of $n$. Therefore equation (9.50) has a unique solution $u_n\in\mathbb{X}_n$, which satisfies the estimates

$$\|u_n\| \le c\|f\|, \qquad \|u-u_n\| \le c\inf\{\|u-v\| : v\in\mathbb{X}_n\}, \tag{9.51}$$

where $u$ is the solution of equation (9.48).


Next we develop our MIMs. By the nestedness property (9.49) of the approximating subspace sequence, each $\mathbb{X}_n$ represents a level of resolution in approximating $\mathbb{X}$, and in solving for $u_n\in\mathbb{X}_n$ in (9.50) we seek an approximation, up to the $n$th level of resolution, of the exact solution $u\in\mathbb{X}$. This nestedness property also implies that there exists a subspace $\mathbb{W}_{n+1}$ of $\mathbb{X}_{n+1}$ such that

$$\mathbb{X}_{n+1} = \mathbb{X}_n \oplus^\perp \mathbb{W}_{n+1}, \quad n\in\mathbb{N}_0. \tag{9.52}$$

Let

$$\mathcal{Q}_{n+1} := \mathcal{P}_{n+1} - \mathcal{P}_n.$$

It is then straightforward to show that $\mathcal{Q}_n$ is a projection onto $\mathbb{W}_n$ and that $\mathbb{W}_{n+1} = \mathcal{Q}_{n+1}\mathbb{X}_{n+1}$, $n\in\mathbb{N}_0$. Repeatedly using equation (9.52) produces, for $k,m\in\mathbb{N}_0$, the following decomposition of the space $\mathbb{X}_n$ with $n = k+m$:

$$\mathbb{X}_{k+m} = \mathbb{X}_k \oplus^\perp \mathbb{W}_{k+1} \oplus^\perp \cdots \oplus^\perp \mathbb{W}_{k+m}. \tag{9.53}$$
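The claim that $\mathcal{Q}_{n+1} = \mathcal{P}_{n+1}-\mathcal{P}_n$ is itself an orthogonal projection, onto the orthogonal complement $\mathbb{W}_{n+1}$, is easy to verify numerically on random nested subspaces (the bases below are hypothetical stand-ins):

```python
import numpy as np

# For nested subspaces X_n in X_{n+1} (random stand-ins), the difference of
# the orthogonal projectors is again an orthogonal projector, and it
# annihilates X_n, so its range is the orthogonal complement W_{n+1}.
rng = np.random.default_rng(5)
A = rng.standard_normal((10, 3))                    # basis of X_n
B = np.hstack([A, rng.standard_normal((10, 2))])    # basis of X_{n+1}, contains X_n
P1 = A @ np.linalg.pinv(A)                          # orthogonal projector onto X_n
P2 = B @ np.linalg.pinv(B)                          # orthogonal projector onto X_{n+1}
Q = P2 - P1                                          # candidate projector onto W_{n+1}
assert np.allclose(Q @ Q, Q)                         # idempotent
assert np.allclose(Q.T, Q)                           # self-adjoint
assert np.allclose(Q @ P1, np.zeros((10, 10)))       # W_{n+1} orthogonal to X_n
```

The three assertions are exactly the algebraic facts (idempotence, self-adjointness, $\mathcal{Q}_{n+1}\mathcal{P}_n = 0$) used repeatedly in this section.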

For $u_{k+m}\in\mathbb{X}_{k+m}$ we write

$$u_{k+m} = u_{k,0} + v_{k,1} + \cdots + v_{k,m},$$

where $u_{k,0}\in\mathbb{X}_k$ and $v_{k,l}\in\mathbb{W}_{k+l}$, $l\in\mathbb{N}_m$, and

$$\mathcal{P}_{k+m} = \mathcal{P}_k + \mathcal{Q}_{k+1} + \cdots + \mathcal{Q}_{k+m}.$$

For $k,m\in\mathbb{N}_0$, we can identify the operator $\mathcal{K}_{k+m} = \mathcal{P}_{k+m}\mathcal{K}\mathcal{P}_{k+m}$ with the matrix of operators

$$\mathcal{K}_{k,m} = \begin{bmatrix}
\mathcal{P}_k\mathcal{K}\mathcal{P}_k & \mathcal{P}_k\mathcal{K}\mathcal{Q}_{k+1} & \cdots & \mathcal{P}_k\mathcal{K}\mathcal{Q}_{k+m}\\
\mathcal{Q}_{k+1}\mathcal{K}\mathcal{P}_k & \mathcal{Q}_{k+1}\mathcal{K}\mathcal{Q}_{k+1} & \cdots & \mathcal{Q}_{k+1}\mathcal{K}\mathcal{Q}_{k+m}\\
\vdots & \vdots & & \vdots\\
\mathcal{Q}_{k+m}\mathcal{K}\mathcal{P}_k & \mathcal{Q}_{k+m}\mathcal{K}\mathcal{Q}_{k+1} & \cdots & \mathcal{Q}_{k+m}\mathcal{K}\mathcal{Q}_{k+m}
\end{bmatrix}. \tag{9.54}$$

Thus we can express the operator equation (9.50) in the form

$$\mathbf{u}_{k,m} - \mathcal{K}_{k,m}\mathbf{u}_{k,m} = \mathbf{f}_{k,m}, \tag{9.55}$$

where $\mathbf{u}_{k,m} = [u_{k,0}, v_{k,1}, \ldots, v_{k,m}]^T$ and $\mathbf{f}_{k,m} = [\mathcal{P}_k f, \mathcal{Q}_{k+1}f, \ldots, \mathcal{Q}_{k+m}f]^T$. As in (9.50), we seek the solution of (9.55) in the approximation space $\mathbb{X}_{k+m}$. Once the bases for $\mathbb{X}_k$ and $\mathbb{W}_{k+l}$ are chosen, the matrix representation of $\mathcal{K}_{k+m}$ has exactly the same block structure as $\mathcal{K}_{k,m}$ in (9.54).


To develop multilevel iteration schemes for solving equation (9.55), we introduce five matrices of operators derived from $\mathcal{K}_{k,m}$: the strictly upper triangular matrix

$$\mathcal{U}_{k,m} = \begin{bmatrix}
\mathcal{O} & \mathcal{P}_k\mathcal{K}\mathcal{Q}_{k+1} & \mathcal{P}_k\mathcal{K}\mathcal{Q}_{k+2} & \cdots & \mathcal{P}_k\mathcal{K}\mathcal{Q}_{k+m}\\
 & \mathcal{O} & \mathcal{Q}_{k+1}\mathcal{K}\mathcal{Q}_{k+2} & \cdots & \mathcal{Q}_{k+1}\mathcal{K}\mathcal{Q}_{k+m}\\
 & & \ddots & \ddots & \vdots\\
 & & & \mathcal{O} & \mathcal{Q}_{k+m-1}\mathcal{K}\mathcal{Q}_{k+m}\\
 & & & & \mathcal{O}
\end{bmatrix},$$

the lower triangular matrix

$$\mathcal{L}_{k,m} = \begin{bmatrix}
\mathcal{O} & & & \\
\mathcal{Q}_{k+1}\mathcal{K}\mathcal{P}_k & \mathcal{Q}_{k+1}\mathcal{K}\mathcal{Q}_{k+1} & & \\
\vdots & \vdots & \ddots & \\
\mathcal{Q}_{k+m}\mathcal{K}\mathcal{P}_k & \mathcal{Q}_{k+m}\mathcal{K}\mathcal{Q}_{k+1} & \cdots & \mathcal{Q}_{k+m}\mathcal{K}\mathcal{Q}_{k+m}
\end{bmatrix},$$

the block diagonal matrix

$$\mathcal{D}_{k,m} = \begin{bmatrix}
\mathcal{I}-\mathcal{P}_k\mathcal{K}\mathcal{P}_k & & & \\
 & \mathcal{I} & & \\
 & & \ddots & \\
 & & & \mathcal{I}
\end{bmatrix},$$

and the matrices corresponding to the lower and higher frequencies of $\mathcal{K}_{k,m}$, respectively,

$$\mathcal{K}^L_{k,m} = \begin{bmatrix}
\mathcal{P}_k\mathcal{K}\mathcal{P}_k & \mathcal{P}_k\mathcal{K}\mathcal{Q}_{k+1} & \cdots & \mathcal{P}_k\mathcal{K}\mathcal{Q}_{k+m}\\
\mathcal{O} & \mathcal{O} & \cdots & \mathcal{O}\\
\vdots & \vdots & & \vdots\\
\mathcal{O} & \mathcal{O} & \cdots & \mathcal{O}
\end{bmatrix},$$

$$\mathcal{K}^H_{k,m} = \begin{bmatrix}
\mathcal{O} & \mathcal{O} & \cdots & \mathcal{O}\\
\mathcal{Q}_{k+1}\mathcal{K}\mathcal{P}_k & \mathcal{Q}_{k+1}\mathcal{K}\mathcal{Q}_{k+1} & \cdots & \mathcal{Q}_{k+1}\mathcal{K}\mathcal{Q}_{k+m}\\
\vdots & \vdots & & \vdots\\
\mathcal{Q}_{k+m}\mathcal{K}\mathcal{P}_k & \mathcal{Q}_{k+m}\mathcal{K}\mathcal{Q}_{k+1} & \cdots & \mathcal{Q}_{k+m}\mathcal{K}\mathcal{Q}_{k+m}
\end{bmatrix}.$$

Equation (9.55) can be rewritten as

$$(\mathcal{D}_{k,m} - \mathcal{U}_{k,m} - \mathcal{L}_{k,m})\mathbf{u}_{k,m} = \mathbf{f}_{k,m} \tag{9.56}$$

or

$$(\mathcal{I} - \mathcal{K}^L_{k,m} - \mathcal{K}^H_{k,m})\mathbf{u}_{k,m} = \mathbf{f}_{k,m}. \tag{9.57}$$


It is these matrix forms that lead us to the following multilevel iteration schemes for solving equation (9.50).

• Jacobi-type iteration:
$$
\mathcal{D}_{k,m}\mathbf{u}^{(l+1)}_{k,m} = (\mathcal{U}_{k,m} + \mathcal{L}_{k,m})\mathbf{u}^{(l)}_{k,m} + \mathbf{f}_{k,m}, \quad l \in \mathbb{N}_0, \tag{9.58}
$$
with any initial approximation $\mathbf{u}^{(0)}_{k,m}$.

Except for the first component, all the other components of $\mathbf{u}^{(l+1)}_{k,m}$ are already expressed in terms of $\mathbf{u}^{(l)}_{k,m}$, since the diagonal blocks in $\mathcal{D}_{k,m}$ are identity operators. Only the first component needs to be solved for, by inverting $I - P_kKP_k$; this is also the difference between this iteration scheme and the usual algebraic Jacobi scheme.

• Gauss–Seidel-type iteration:
$$
(\mathcal{D}_{k,m} - \mathcal{U}_{k,m})\mathbf{u}^{(l+1)}_{k,m} = \mathcal{L}_{k,m}\mathbf{u}^{(l)}_{k,m} + \mathbf{f}_{k,m}, \quad l \in \mathbb{N}_0, \tag{9.59}
$$
with any initial approximation $\mathbf{u}^{(0)}_{k,m}$.

Like the usual algebraic Gauss–Seidel iteration, (9.59) can easily be solved in the "backward substitution" fashion for the components of $\mathbf{u}^{(l+1)}_{k,m}$, and the only inverse we need to find is for the first component, that is, $(I - P_kKP_k)^{-1}$.

• L–H-type iteration:
$$
(\mathcal{I} - \mathcal{K}^L_{k,m})\mathbf{u}^{(l+1)}_{k,m} = \mathcal{K}^H_{k,m}\mathbf{u}^{(l)}_{k,m} + \mathbf{f}_{k,m}, \quad l \in \mathbb{N}_0. \tag{9.60}
$$

Like the Jacobi scheme, only the first component of $\mathbf{u}^{(l+1)}_{k,m}$ needs to be solved for, by inverting $I - P_kKP_k$, while the other components are computed directly.

It is clear that these three schemes are similar in the method of updating the components of $\mathbf{u}^{(l+1)}_{k,m}$; they differ only in that the Gauss–Seidel scheme uses newly available components within each iteration, the L–H scheme uses the components obtained in the last iteration except for the first component, while the Jacobi scheme waits until an iteration is complete. The three methods all require inverting the operator $I - P_kKP_k$ at the $k$th level. Hence we have to choose $k$ so that the inverse of $I - P_kKP_k$ exists and is relatively easy to find. In the meantime, we can take advantage of this to start the iteration with a good initial guess $\mathbf{u}^{(0)}_{k,m} = [(I - P_kKP_k)^{-1}P_kf, 0, \ldots, 0]^T$.
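As a concrete finite-dimensional illustration of the Jacobi-type scheme (9.58), the sketch below applies the same splitting to a generic matrix system $(I - K)u = f$: the coordinates are ordered so that a leading block plays the role of the coarse space, only the block $I - K_{cc}$ is ever inverted, and all other diagonal blocks are identities. The matrix $K$, its size and the block size are illustrative assumptions, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(0)
n, s = 40, 8                                  # total size; size of the coarse block (level k)
K = rng.standard_normal((n, n)) / (2 * n)     # stand-in for a discretized compact operator
f = rng.standard_normal(n)
A_cc = np.eye(s) - K[:s, :s]                  # the only block ever inverted: I - K_cc

def jacobi_step(u):
    """One step of (9.58): D u^{(l+1)} = (U + L) u^{(l)} + f.
    Rows with identity diagonal blocks update directly; only the
    coarse block requires a solve with I - K_cc."""
    Ku = K @ u
    u_new = f + Ku                            # fine components: direct update
    u_new[:s] = np.linalg.solve(A_cc, f[:s] + Ku[:s] - K[:s, :s] @ u[:s])
    return u_new

u = np.zeros(n)
u[:s] = np.linalg.solve(A_cc, f[:s])          # initial guess [(I - K_cc)^{-1} f_c, 0, ..., 0]
for _ in range(60):
    u = jacobi_step(u)

u_exact = np.linalg.solve(np.eye(n) - K, f)
print(np.linalg.norm(u - u_exact))            # small when the iteration operator has norm < 1
```

Because $K$ here has small norm, the iteration operator $\mathcal{D}^{-1}(\mathcal{U}+\mathcal{L})$ is a contraction and the iterates converge to the solution of the full system.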

9.2.2 Convergence analysis

We now provide a convergence analysis for the iteration schemes (9.58), (9.59) and (9.60) when the operator $K$ is compact on $X$. In this case we have that
$$
\lim_{n\to\infty}\|(P_{n+i} - P_{n+j})K\| = \lim_{n\to\infty}\|K(P_{n+i} - P_{n+j})\| = 0. \tag{9.61}
$$



Moreover, for $n$ large enough, $I - \mathcal{K}_n$ is invertible and the norm of the inverse operator is bounded uniformly for all $n$. In the following theorem we establish convergence of the schemes (9.58)–(9.60) for $k$ large enough, together with the rate of convergence.

Theorem 9.12 Suppose that the linear operator $K$ is compact on the Hilbert space $X$, $I - K$ is bijective on $X$, and the sequence of approximation subspaces possesses the nestedness (9.49) and has the decomposition (9.53). Then the following statements hold.

(i) For each $m \in \mathbb{N}$ and sufficiently large $k$, the iteration schemes (9.58), (9.59) and (9.60) for solving equation (9.50) or (9.55) are all convergent.

(ii) The convergence rates for the Jacobi iteration scheme (9.58) and the L–H iteration scheme (9.60) are independent of $m$ for all sufficiently large $k$. If the hypotheses
$$
\sum_{k\in\mathbb{N}_0}\|(I - P_k)K\| < \infty, \qquad \sum_{k\in\mathbb{N}_0}\|K(I - P_k)\| < \infty \tag{9.62}
$$
and
$$
\lim_{k\to\infty} k\|(I - P_k)K\| = \lim_{k\to\infty} k\|K(I - P_k)\| = 0 \tag{9.63}
$$
hold, the rate of convergence for the Gauss–Seidel iteration is also independent of $m$ for all sufficiently large $k$.

(iii) The convergence rates for the Jacobi iteration scheme (9.58) and the L–H iteration scheme (9.60) can be made small (by increasing $k$) in the same order as the approximation of $P_kK$ to $K$. Moreover, if hypotheses (9.62) and (9.63) are satisfied, the rate of convergence for the Gauss–Seidel scheme can be made smaller (by increasing $k$) in the same order as $r_k$ goes to zero, where $r_k = \sum_{i\in\mathbb{N}_0}\|(I - P_{k+i})K\|$.

Proof We first estimate the norms of the iteration operators $\mathcal{D}^{-1}_{k,m}(\mathcal{U}_{k,m} + \mathcal{L}_{k,m})$, $(\mathcal{D}_{k,m} - \mathcal{U}_{k,m})^{-1}\mathcal{L}_{k,m}$ and $(\mathcal{I} - \mathcal{K}^L_{k,m})^{-1}\mathcal{K}^H_{k,m}$ for the schemes (9.58), (9.59) and (9.60). From the definition of $\mathcal{U}_{k,m}$ we see that
$$
\mathcal{U}_{k,m}u = P_kK(P_{k+m} - P_k)u + \sum_{l\in\mathbb{N}_{m-1}} Q_{k+l}K(P_{k+m} - P_{k+l})u
$$
for all $u \in X$, which leads to
$$
\|\mathcal{U}_{k,m}\| \le \sum_{l\in\mathbb{Z}_m}\|K(P_{k+m} - P_{k+l})\| \to 0 \quad\text{as } k\to\infty, \tag{9.64}
$$



where we have used (9.61) and the fact that $\|P_n\| = \|Q_n\| = 1$ for all $n \in \mathbb{N}$. Similarly, we have that
$$
\mathcal{L}_{k,m}u = (P_{k+m} - P_k)KP_ku + \sum_{l\in\mathbb{N}_{m-1}}(P_{k+m} - P_{k+l})KQ_{k+l+1}u
$$
and
$$
\|\mathcal{L}_{k,m}\| \le \sum_{l\in\mathbb{Z}_m}\|(P_{k+m} - P_{k+l})K\| \to 0 \quad\text{as } k\to\infty.
$$
Moreover, from $\mathcal{U}_{k,m} + \mathcal{L}_{k,m} = \mathcal{K}_{k+m} - \mathcal{K}_k$ we conclude that
$$
\|\mathcal{U}_{k,m} + \mathcal{L}_{k,m}\| \le \|(P_{k+m} - P_k)K\| + \|K(P_{k+m} - P_k)\| \to 0 \quad\text{as } k\to\infty. \tag{9.65}
$$

Moreover, as an operator on $X$, $\mathcal{D}_{k,m} = I - \mathcal{K}_k$. Hence
$$
\mathcal{D}^{-1}_{k,m} = (I - \mathcal{K}_k)^{-1} \tag{9.66}
$$
and
$$
(\mathcal{D}_{k,m} - \mathcal{U}_{k,m})^{-1} = \bigl(I - (I - \mathcal{K}_k)^{-1}\mathcal{U}_{k,m}\bigr)^{-1}(I - \mathcal{K}_k)^{-1}.
$$
Therefore
$$
\|\mathcal{D}^{-1}_{k,m}\| = \|(I - \mathcal{K}_k)^{-1}\|
$$
and
$$
\|(\mathcal{D}_{k,m} - \mathcal{U}_{k,m})^{-1}\| \le \frac{\|(I - \mathcal{K}_k)^{-1}\|}{1 - \|(I - \mathcal{K}_k)^{-1}\|\,\|\mathcal{U}_{k,m}\|}.
$$
Since $(I - \mathcal{K}_k)^{-1}$ is uniformly bounded for large enough $k$, in view of (9.64) these two inverses are both bounded uniformly for large enough $k$.

Combining the above, we see that for each fixed $m \in \mathbb{N}$ the iteration operators for the schemes (9.58) and (9.59) both tend to zero in the operator norm:
$$
\|\mathcal{D}^{-1}_{k,m}(\mathcal{U}_{k,m} + \mathcal{L}_{k,m})\| \to 0 \quad\text{and}\quad \|(I - \mathcal{K}_k)^{-1}\mathcal{L}_{k,m}\| \to 0 \quad\text{as } k\to\infty.
$$
Hence their norms can be made less than one if $k$ is large enough, which in turn yields the convergence of the iteration schemes (9.58) and (9.59).

For the scheme (9.60), noting that $\mathcal{K}^L_{k,m} = P_kKP_{k+m}$ and $\mathcal{K}^H_{k,m} = (P_{k+m} - P_k)KP_{k+m}$, we have that
$$
\|\mathcal{K}^L_{k,m} - K\| \to 0 \quad\text{and}\quad \|\mathcal{K}^H_{k,m}\| \to 0 \quad\text{as } k\to\infty,
$$
uniformly for $m \in \mathbb{N}$. This leads to the fact that $\|(\mathcal{I} - \mathcal{K}^L_{k,m})^{-1}\|$ is uniformly bounded and, moreover,
$$
\|(\mathcal{I} - \mathcal{K}^L_{k,m})^{-1}\mathcal{K}^H_{k,m}\| \to 0 \quad\text{as } k\to\infty,
$$
which also yields the convergence of the iteration scheme (9.60).

Next, we examine the rate of convergence of the iteration schemes. Let
$$
q_{k,m} =
\begin{cases}
\|\mathcal{D}^{-1}_{k,m}(\mathcal{U}_{k,m} + \mathcal{L}_{k,m})\| & \text{for scheme (9.58)},\\
\|(\mathcal{D}_{k,m} - \mathcal{U}_{k,m})^{-1}\mathcal{L}_{k,m}\| & \text{for scheme (9.59)},\\
\|(\mathcal{I} - \mathcal{K}^L_{k,m})^{-1}\mathcal{K}^H_{k,m}\| & \text{for scheme (9.60)}.
\end{cases}
$$

This is roughly the factor between two consecutive errors in the iteration. More precisely, for each $(k, m)$ the quantity $q_{k,m}$ is the least upper bound for the ratios of two consecutive errors, and for the ratios of differences of two consecutive iterates:
$$
\frac{\|\mathbf{u}^{(l)}_{k,m} - \mathbf{u}_{k,m}\|}{\|\mathbf{u}^{(l-1)}_{k,m} - \mathbf{u}_{k,m}\|} \le q_{k,m}
\quad\text{and}\quad
\frac{\|\mathbf{u}^{(l)}_{k,m} - \mathbf{u}^{(l-1)}_{k,m}\|}{\|\mathbf{u}^{(l-1)}_{k,m} - \mathbf{u}^{(l-2)}_{k,m}\|} \le q_{k,m}.
$$
We have shown that, for fixed $m \in \mathbb{N}$, $q_{k,m} \to 0$ as $k\to\infty$, and thus $q_{k,m} < 1$ for large enough $k$.

For the Jacobi-type scheme (9.58) we can show that this rate is independent of all large enough $m$. Indeed, from (9.65) and (9.66) above,
$$
q_{k,m} \le \|\mathcal{D}^{-1}_{k,m}\|\,\|\mathcal{U}_{k,m} + \mathcal{L}_{k,m}\| \le \|(I - \mathcal{K}_k)^{-1}\|\bigl(\|(P_{k+m} - P_k)K\| + \|K(P_{k+m} - P_k)\|\bigr);
$$
thus
$$
\limsup_{m\to\infty} q_{k,m} \le \|(I - \mathcal{K}_k)^{-1}\|\bigl(\|K - P_kK\| + \|K - KP_k\|\bigr) < 1 \tag{9.67}
$$

if $k$ is chosen large enough.

For the L–H-type scheme (9.60), noting that $\|(\mathcal{I} - \mathcal{K}^L_{k,m})^{-1}\|$ is uniformly bounded and $\mathcal{K}^H_{k,m} = (P_{k+m} - P_k)KP_{k+m}$, we conclude that
$$
\limsup_{m\to\infty} q_{k,m} \le c\,\|K - P_kK\| < 1
$$
if $k$ is chosen large enough.

To derive an estimate for the rate of convergence of the Gauss–Seidel scheme (9.59), we need hypotheses (9.62) and (9.63). For any positive integer $k$ we introduce the two quantities
$$
r_k = \sum_{i\in\mathbb{N}_0}\|(I - P_{k+i})K\| \quad\text{and}\quad r'_k = \sum_{i\in\mathbb{N}_0}\|K(I - P_{k+i})\|.
$$
Under assumption (9.62) we see that
$$
\lim_{k\to\infty} r_k = \lim_{k\to\infty} r'_k = 0, \tag{9.68}
$$
and hypothesis (9.63) ensures that
$$
\limsup_{m\to\infty}\|\mathcal{L}_{k,m}\| \le r_k \quad\text{and}\quad \limsup_{m\to\infty}\|\mathcal{U}_{k,m}\| \le r'_k. \tag{9.69}
$$

Therefore we conclude from (9.68) and (9.69) that, for the Gauss–Seidel scheme,
$$
\limsup_{m\to\infty} q_{k,m} \le \limsup_{m\to\infty}\frac{\|(I - \mathcal{K}_k)^{-1}\|\,\|\mathcal{L}_{k,m}\|}{1 - \|(I - \mathcal{K}_k)^{-1}\|\,\|\mathcal{U}_{k,m}\|} \le \frac{\|(I - \mathcal{K}_k)^{-1}\|\,r_k}{1 - \|(I - \mathcal{K}_k)^{-1}\|\,r'_k} < 1 \tag{9.70}
$$
if $k$ is chosen large enough.

The discussions above lead to the results of this theorem.

We remark that conditions (9.62) and (9.63) are fulfilled in many applications when the $X_n$ are chosen as piecewise polynomial spaces. The reader is referred to [108] for details and numerical examples.

9.3 Bibliographical remarks

In Chapters 5–7, multiscale Galerkin, Petrov–Galerkin and collocation methods are developed for solving the Fredholm integral equation of the second kind with a weakly singular kernel. These methods yield linear systems having numerically sparse coefficient matrices which, with appropriate truncation strategies, lead to fast discretization of the integral equation (see, for example, [28, 64, 68, 94, 95, 202, 260, 261] and the references cited therein). This chapter describes fast solvers for the resulting discrete systems, including multilevel augmentation methods and multilevel iteration methods. The idea of the multilevel methods introduced in this chapter based on the Gauss–Seidel iteration was initiated in [67], in its early form, for solving the Fredholm integral equation of the second kind. The results in [67] were developed further into the MAM and the MIM in [71] and [108], respectively. The abstract framework of the MAM was established in [71]. Since then it has been used in various contexts (see [45, 51–59, 73, 76, 78, 187]).

We remark that other, different multilevel methods for solving integral equations were developed as early as the late 1970s (see, for example, [120] and the references cited therein). Methods for the data-sparse approximation of matrices were introduced in [120], resulting in the so-called hierarchical matrices (H-matrices for short). For more information on this subject the reader is referred to [30, 122, 123] as well as [168–170]. Other work on multilevel and associated iteration methods can be found in Chapter 6 of [15].


10 Multiscale methods for nonlinear integral equations

In this chapter we develop multiscale methods for solving the Hammerstein equation and the nonlinear boundary integral equation resulting from a reformulation of a boundary value problem of the Laplace equation with nonlinear boundary conditions. Fast algorithms are proposed using the MAM in conjunction with matrix truncation strategies and techniques of numerical integration for the integrals appearing in the process of solving the equations. We prove that the proposed methods require only linear (up to a logarithmic factor) computational complexity and have the optimal convergence order.

In the section that follows we discuss the critical issues in solving nonlinear integral equations. This will shine a light on the ideas developed later in this chapter. In Section 10.2 we introduce the MAM for solving Hammerstein equations and provide a complete convergence analysis for the proposed method. In Section 10.3 we develop the MAM for solving the nonlinear boundary integral equation resulting from a reformulation of a boundary value problem of the Laplace equation with nonlinear boundary conditions. We present numerical experiments in Section 10.4.

10.1 Critical issues in solving nonlinear equations

Nonlinear integral equations portray many mathematical physics problems. The Hammerstein equation is a typical kind of nonlinear integral equation. Moreover, boundary value problems of the Laplace equation serve as mathematical models for many important applications. Making use of the fundamental solutions of the equation, we can formulate the boundary value problems as integral equations defined on the boundary (see Section 2.2.3). For linear boundary conditions the resulting boundary integral equations are linear, the numerical methods of which have been studied extensively.




Nonlinear boundary conditions are also involved in various applications. In these cases the reformulation of the corresponding boundary value problems leads to nonlinear integral equations.

The nonlinearity introduces difficulties in the numerical solution of the equation, which normally requires an iteration scheme to solve it locally as a linearized integral equation. The Newton iteration method and the secant method are often used as bases for designing numerical schemes for equations of this type. In this case the Jacobian matrices associated with the nonlinear operators have to be computed, and possibly refreshed, at each iteration step. Since the Jacobian matrices are usually dense and their size is equal to the dimension $N$ of the approximate subspace of the solution, the computational complexity of the algorithms is at least of $O(N^2)$. In [13] several popular numerical schemes for solving nonlinear integral equations are reviewed, the computational complexities of which are all at least of $O(N^2)$. In particular, the multigrid method [120] has an operation count of $O(N^2)$ when the discretization of the system is included. When a high approximation accuracy is desired, an approximate subspace of high dimension must be used; thus a significantly large amount of computational effort is demanded. This becomes a bottleneck problem for the numerical solution of nonlinear integral equations.

In order to develop a fast numerical algorithm for solving nonlinear integral equations, we consider achieving the following two tasks. First, the Jacobian matrices involved in the algorithm should be approximated in a subspace of low dimension, which is fixed and much smaller than the whole approximate subspace. Second, all steps of the algorithm should be implemented with a number of calculations no more than $O(N \log N)$. Accomplishing these two tasks is comparable with using the truncation strategy and the MAM introduced in the previous chapter for solving a linear integral equation.

In this chapter we develop an MAM for solving the Hammerstein equation and the nonlinear boundary integral equation which accomplishes the two tasks discussed earlier. This method requires the availability of a multilevel decomposition of the solution space and a projection from the solution space onto a finite-dimensional subspace at a given level. Solving the equation with high-order approximation accuracy requires us to solve an approximate equation, which is the original equation projected onto a subspace at a high level. We observe that most of the computational effort is spent on inverting the nonlinear integral operator when solving the nonlinear equation. The higher the level of the subspace used, the more accurate the approximate solution obtained and, at the same time, the more computational effort is needed to invert the nonlinear integral operator at that level. To significantly reduce the computational cost, we propose not to invert the nonlinear integral operator directly at the high level, but instead to invert it at a much lower (fixed) level and to compensate for the error which may result from this modification by a high-frequency correction term that can be obtained by solving a linear system at the high level. The correction does not involve any inversion of the nonlinear integral operator. This method results in a fast algorithm for solving the nonlinear integral equation which gives the optimal convergence order, the same as the approximation order of the approximate subspace (of the highest level). At the same time, this method requires only linear computational complexity, in the sense that the number of multiplications needed is proportional to the dimension of the approximate subspace.

The proposed method is based on a traditional projection method for solving nonlinear integral equations and a multilevel decomposition of the solution space (see [76]). The main idea comes from treating the corresponding linear integral equation. Multilevel, or multiscale, numerical methods for solving linear integral equations were discussed in the previous chapter. Recall that in the MAM for solving the linear equation, the coefficient matrix of the linear system resulting from discretization of the linear integral equation via a multiscale decomposition of the solution space can be obtained from a small-sized matrix, which is a low-resolution representation of the integral operator, by augmenting it with new rows and columns representing the high frequency of the integral operator. Making use of this multiscale structure of the matrix, the MAM inverts only the small-sized, fixed matrix, with the compensation of matrix–vector multiplications. It was proved in Section 9.1 that this method gives an optimal order of convergence while it reduces the computational complexity significantly. Motivated by the MAM for the linear integral equation, we develop the MAM for the nonlinear equation. The MAM, which will be described in Section 10.2 for solving the Hammerstein equation and in Section 10.3 for solving the nonlinear boundary integral equation resulting from a reformulation of a boundary value problem of the Laplace equation with nonlinear boundary conditions, needs only to invert the nonlinear operator in a much smaller subspace. This significantly reduces the computational complexity, to a nearly linear order. We also develop a fully discrete MAM for solving the nonlinear integral equation, using numerical integration methods to compute the integrals appearing in the resulting nonlinear system.

To close this section, we briefly mention the fast Fourier–Galerkin method for solving the nonlinear boundary integral equation, developed recently in [53]. A fast algorithm for solving the resulting discrete nonlinear system was designed in that paper by integrating the techniques of matrix compression, numerical integration of oscillatory integrals and the multilevel augmentation method. It was proved there that the proposed method enjoys an optimal convergence order and a nearly linear computational complexity. Numerical experiments were presented to confirm the theoretical estimates and to demonstrate the efficiency and accuracy of the proposed method.

10.2 Multiscale methods for the Hammerstein equation

In this section we describe fast multiscale methods for solving the Hammerstein equation. We prove that the proposed methods require only linear (up to a logarithmic factor) computational complexity and have the optimal convergence order. Two specific fast methods, based on the Galerkin projection and the collocation projection, are presented.

10.2.1 The multilevel augmentation method

We introduce in this subsection the MAM for solving the Hammerstein equation. The method is described here in an operator form, only in terms of the projections and related spaces. The description of the method involving bases of the subspaces is postponed to Section 10.2.4. This subsection ends with a comparison of the proposed method with the well-known multigrid method.

For $d \in \mathbb{N}$ we let $\Omega \subseteq \mathbb{R}^d$ be a compact domain. Consider the Hammerstein equation
$$
u(s) - \int_\Omega K(s,t)\,\psi(t, u(t))\,dt = f(s), \quad s \in \Omega, \tag{10.1}
$$
where $K$, $f$ and $\psi$ are given functions and $u$ is the unknown to be determined. We assume that $f \in C(\Omega)$ and, for any $s \in \Omega$, we denote $K_s(t) = K(s,t)$. Throughout this section we assume, unless stated otherwise, that the following conditions on $K$ and $\psi$ are satisfied.

(H1) $\lim_{s\to\tau}\|K_s - K_\tau\|_1 = 0$ for any $\tau \in \Omega$, and $\sup_{s\in\Omega}\int_\Omega |K(s,t)|\,dt < \infty$.

(H2) $\psi(t,u)$ is continuous in $t \in \Omega$ and Lipschitz continuous in $u \in \mathbb{R}$; the partial derivative $D_u\psi$ of $\psi$ with respect to the variable $u$ exists and is Lipschitz continuous, that is, there exists a positive constant $L$ such that
$$
|D_u\psi(t,u_1) - D_u\psi(t,u_2)| \le L\,|u_1 - u_2|,
$$
and for any $u \in C(\Omega)$, $\psi(\cdot, u(\cdot)),\ D_u\psi(\cdot, u(\cdot)) \in C(\Omega)$.



We use $X$ to represent the Banach space $L^2(\Omega)$ or $L^\infty(\Omega)$. The linear integral operator $\mathcal{K}: X \to X$ is defined by
$$
(\mathcal{K}u)(s) = \int_\Omega K(s,t)\,u(t)\,dt, \quad s \in \Omega,
$$
and the nonlinear operator $\Psi: X \to X$ by
$$
(\Psi u)(t) = \psi(t, u(t)), \quad t \in \Omega.
$$
With these notations, equation (10.1) is written in the operator form as
$$
u - \mathcal{K}\Psi u = f. \tag{10.2}
$$
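For a sense of what (10.2) looks like after discretization, the sketch below sets up a Hammerstein equation on $[0,1]$ by simple midpoint quadrature and solves the resulting nonlinear system by Picard (fixed-point) iteration. The kernel, the nonlinearity $\psi$ and the right-hand side are illustrative choices, not taken from the text; the iteration converges here because the discretized operator has small norm and $\psi$ is Lipschitz.

```python
import numpy as np

m = 200                                   # quadrature points on [0, 1]
t = (np.arange(m) + 0.5) / m              # midpoint-rule nodes
w = 1.0 / m                               # uniform weights

kernel = 0.5 * np.exp(-np.abs(t[:, None] - t[None, :]))   # K(s, t), illustrative
psi = np.sin                              # psi(t, u) = sin(u), Lipschitz with constant 1
f = 1.0 + t**2                            # illustrative right-hand side

# Picard iteration for u = f + K psi(u),
# a direct discrete analogue of u - K Psi u = f in (10.2)
u = f.copy()
for _ in range(100):
    u = f + kernel @ (w * psi(u))

residual = np.linalg.norm(u - (f + kernel @ (w * psi(u))), np.inf)
print(residual)
```

With the chosen kernel the fixed-point map is a contraction (its Lipschitz constant is roughly the row sum of the weighted kernel, well below one), so the residual decays geometrically.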

We assume that $X_n$, $n \in \mathbb{N}_0$, is a sequence of finite-dimensional subspaces of $X$ having the property that
$$
Y \subseteq \overline{\bigcup_{n\in\mathbb{N}_0} X_n},
$$
where if $X = L^2(\Omega)$ then $Y = X$ and the notation "$\subseteq$" in the above relation is replaced by "$=$", and if $X = L^\infty(\Omega)$ then $Y = C(\Omega)$. For each $n \in \mathbb{N}_0$ let $P_n: X \to X_n$ be a projection. Throughout this section we assume that

(H3) the sequence $P_n$, $n \in \mathbb{N}_0$, converges pointwise to the identity operator in $Y$, that is, for any $x \in Y$,
$$
\lim_{n\to\infty}\|P_n x - x\| = 0.
$$

The projection method for solving equation (10.2) is to find $u_n \in X_n$ satisfying
$$
u_n - P_n\mathcal{K}\Psi u_n = P_n f. \tag{10.3}
$$
The solution $u_n$ of equation (10.3) is called the projection solution of equation (10.2). The following result is concerned with the existence of the projection solution $u_n$ and its convergence property.

Theorem 10.1 Let $u^* \in X$ be an isolated solution of (10.2). If one is not an eigenvalue of the linear operator $(\mathcal{K}\Psi)'(u^*)$, then for sufficiently large $n$ equation (10.3) has a unique solution $u_n \in B(u^*, \delta)$ for some $\delta > 0$, with the property
$$
c_1\|P_n u^* - u^*\| \le \|u_n - u^*\| \le c_2\|P_n u^* - u^*\| \tag{10.4}
$$
for some positive constants $c_1$, $c_2$.



The above theorem was established in Theorem 2 of [255] and used in [165].

As we have discussed in Section 10.1, the projection method (10.3) requires inverting the nonlinear operator $I - P_n\mathcal{K}\Psi$, which is computationally challenging when the dimension of the subspace $X_n$ is large. In fact, once a basis of the subspace $X_n$ is chosen, equation (10.3) is equivalent to a system of nonlinear algebraic equations. Standard methods for solving the nonlinear system, such as the Newton method and its variations, linearize the equation locally and solve the nonlinear system by iteration. At each iteration step we need to invert the Jacobian matrix of $I - P_n\mathcal{K}\Psi$ evaluated at the solution of the previous step. The Jacobian matrix, different at each step, is dense and has size equal to the dimension $s(n)$ of the space $X_n$. The computational cost of solving equation (10.3) by a standard method is $O(s(n)^2)$. When $s(n)$ is large, this becomes a bottleneck problem for the numerical solution of these equations.

Because of this, we propose not to solve equation (10.3) directly. Instead, we develop a multilevel method which requires inverting the nonlinear operator $I - P_k\mathcal{K}\Psi$ only at a fixed level $k$ much smaller than $n$. To this end we require that the space $X$ has a multiscale decomposition, that is, the subspaces $X_n$ are nested ($X_{n-1} \subset X_n$, $n \in \mathbb{N}$), so that $X_n$ is the direct sum of $X_{n-1}$ and its complement $W_n$. Specifically,
$$
X_n = X_{n-1} \oplus W_n, \quad n \in \mathbb{N}. \tag{10.5}
$$
Accordingly, we have for all $n \in \mathbb{N}_0$ that
$$
P_n P_{n+1} = P_n. \tag{10.6}
$$
The decomposition (10.5) can be applied repeatedly, so that for $n = k + m$, with $k \in \mathbb{N}_0$ fixed and $m \in \mathbb{N}_0$, we have the decomposition
$$
X_{k+m} = X_k \oplus W_{k,m}, \tag{10.7}
$$
where
$$
W_{k,m} = W_{k+1} \oplus \cdots \oplus W_{k+m}.
$$
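A minimal concrete instance of the nestedness (10.5)–(10.6): take $X_n$ to be the piecewise-constant functions on $2^n$ dyadic cells of $[0,1]$ and $P_n$ the cell-wise averaging (orthogonal) projection. The fine sampling grid and the test function below are illustrative choices.

```python
import numpy as np

N = 64                                     # fine sampling grid on [0, 1]
x = np.linspace(0, 1, N, endpoint=False)

def project(n, v):
    """Orthogonal projection P_n onto X_n: piecewise constants on 2**n dyadic
    cells, realized as cell-wise averaging of a fine-grid sample vector."""
    cells = v.reshape(2**n, -1)
    return np.repeat(cells.mean(axis=1), N // 2**n)

v = np.sin(2 * np.pi * x) + x              # arbitrary test function
p2 = project(2, v)                         # P_2 v
p3 = project(3, v)                         # P_3 v

# Nestedness X_2 ⊂ X_3 gives P_2 P_3 = P_2, as in (10.6)
err_nest = np.max(np.abs(project(2, p3) - p2))
# The complement W_3 carries the detail: P_3 v = P_2 v + (P_3 - P_2) v
detail = p3 - p2                           # component in W_3
err_detail = np.max(np.abs(project(2, detail)))   # coarse projection of detail vanishes
print(err_nest, err_detail)
```

Both printed quantities are zero up to rounding, confirming $P_2P_3 = P_2$ and that the detail component $(P_3 - P_2)v \in W_3$ is annihilated by the coarse projection.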

Under this hypothesis we describe the multilevel method for obtaining an approximation of the solution of equation (10.3). Our goal is to obtain an approximation of the solution of equation (10.3) with $n = k + m$, $k$ being small and fixed. We first solve equation (10.3) with $n = k$ exactly and obtain the solution $u_k$. Since $s(k)$ is very small in comparison with $s(k+m)$, the computational cost of inverting the nonlinear operator $I - P_k\mathcal{K}\Psi$ is much less than that of inverting $I - P_{k+m}\mathcal{K}\Psi$. The next step is to obtain an approximation of the solution $u_{k+1} \in X_{k+1}$ of equation (10.3) with $n = k + 1$. For this purpose we decompose
$$
u_{k+1} = u^L_{k+1} + u^H_{k+1}, \quad\text{with } u^L_{k+1} \in X_k \text{ and } u^H_{k+1} \in W_{k+1},
$$

using the decomposition (10.7), and rewrite equation (10.3) with $n = k + 1$ as
$$
P_k(I - \mathcal{K}\Psi)(u^L_{k+1} + u^H_{k+1}) = P_k f + (P_{k+1} - P_k)(f + \mathcal{K}\Psi u_{k+1}) - u^H_{k+1}. \tag{10.8}
$$

The second term on the right-hand side of equation (10.8) can be obtained approximately via the solution $u_k$ of the previous level. That is, we compute $u^H_{k,1} = (P_{k+1} - P_k)(f + \mathcal{K}\Psi u_k)$, where $u_{k,0} = u_k$, and note that $u^H_{k,1} \in W_{k+1}$. Observing that
$$
u^H_{k,1} = u_{k+1} - u_k - P_{k+1}\mathcal{K}(\Psi u_{k+1} - \Psi u_k),
$$

in equation (10.8) we replace $u^H_{k+1}$ and the second term on the right-hand side by $u^H_{k,1}$, to obtain an equation for $u^L_{k,1} \in X_k$:
$$
P_k(I - \mathcal{K}\Psi)(u^L_{k,1} + u^H_{k,1}) = P_k f. \tag{10.9}
$$

The function $u^L_{k,1}$ can be viewed as a good approximation to $u^L_{k+1}$. We then obtain an approximation to the solution $u_{k+1}$ of equation (10.3) by setting
$$
u_{k,1} = u^L_{k,1} + u^H_{k,1}.
$$
Note that $u^L_{k,1}$ and $u^H_{k,1}$, respectively, represent the lower- and higher-frequency components of $u_{k,1}$. This procedure is repeated $m$ times to obtain an approximation $u_{k,m}$ of the solution $u_{k+m}$ of equation (10.3) with $n = k + m$.

Note that at each step we invert only the same nonlinear operator $P_k(I - \mathcal{K}\Psi)$. This makes the method very efficient computationally. We summarize the method described above in the following algorithm.

Algorithm 10.2 (The multilevel augmentation method: an operator form) Let $k$ be a fixed positive integer.

Step 1 Find the solution $u_k \in X_k$ of equation (10.3) with $n = k$. Set $u_{k,0} = u_k$ and $l = 1$.

Step 2 Compute
$$
u^H_{k,l} = (P_{k+l} - P_k)(f + \mathcal{K}\Psi u_{k,l-1}). \tag{10.10}
$$

Step 3 Solve for $u^L_{k,l} \in X_k$ from the nonlinear equation
$$
P_k(I - \mathcal{K}\Psi)(u^L_{k,l} + u^H_{k,l}) = P_k f. \tag{10.11}
$$

Step 4 Let $u_{k,l} = u^L_{k,l} + u^H_{k,l}$. Set $l \leftarrow l + 1$ and go back to Step 2 until $l = m$.

The output of Algorithm 10.2 is an approximation $u_{k,m}$ of the solution $u_{k+m}$ of equation (10.3). The approximation of $u_{k+m}$ is obtained beginning with an initial approximation $u_k$ and repeatedly inverting the operator $P_k(I - \mathcal{K}\Psi)$ to update the approximation recursively. The procedure completes in $m$ steps, and no iteration is needed if the operator $P_k(I - \mathcal{K}\Psi)$ can be inverted exactly. Of course, the inversion of the nonlinear operator may itself require iterations. The key steps in this algorithm are Steps 2 and 3. In Step 2 we obtain the high-frequency component $u^H_{k,l}$ of the approximate solution $u_{k,l}$ from the approximation $u_{k,l-1}$ at the previous level by a functional evaluation. In Step 3 we solve for the low-frequency component $u^L_{k,l} \in X_k$ from (10.11) with the known high-frequency component $u^H_{k,l}$ obtained from Step 2. For all $l \in \mathbb{Z}_{m+1}$ we invert the same nonlinear operator $P_k(I - \mathcal{K}\Psi)$ at the initial coarse level $k$. The computational costs for this are significantly lower than those of inverting the nonlinear operator $P_{k+m}(I - \mathcal{K}\Psi)$ at the final fine level $k + m$. We call $u_{k,m}$ the multilevel solution of equation (10.3), and $u^L_{k,m}$ and $u^H_{k,m}$, respectively, the lower- and higher-frequency components of $u_{k,m}$. In the next subsection we show that $u_{k,m}$ approximates the exact solution $u^*$ in the same order as $u_{k+m}$ does.
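The following sketch runs Algorithm 10.2 on a toy discrete analogue: coefficient vectors stand in for multiscale expansions, each projection $P_l$ is modeled as truncation to the leading $s(l)$ coordinates, and the coarse-level inversion of $P_k(I - \mathcal{K}\Psi)$ in Step 3 is done by fixed-point iteration. The operator $K$, the nonlinearity $\psi$, the right-hand side and all dimensions are illustrative assumptions, not data from the text.

```python
import numpy as np

rng = np.random.default_rng(1)
n, sk, m = 64, 8, 3                        # fine dimension, coarse dimension s(k), m levels
K = 0.3 * rng.standard_normal((n, n)) / np.sqrt(n)   # small-norm stand-in for K
f = rng.standard_normal(n)
psi = np.tanh                              # Lipschitz nonlinearity

def P(s, v):
    """Projection P_l modeled as truncation to the leading s coordinates."""
    w = np.zeros_like(v)
    w[:s] = v[:s]
    return w

def coarse_solve(uH, iters=200):
    """Step 3: solve P_k(I - K psi)(uL + uH) = P_k f for uL in X_k
    by fixed-point iteration (the map is a contraction since ||K|| < 1)."""
    uL = P(sk, f)
    for _ in range(iters):
        uL = P(sk, f + K @ psi(uL + uH))
    return uL

u = coarse_solve(np.zeros(n))              # Step 1: u_{k,0} = u_k
for l in range(1, m + 1):                  # Steps 2-4
    s_l = min(n, sk * 2**l)                # s(k+l): dimensions double per level
    g = f + K @ psi(u)
    uH = P(s_l, g) - P(sk, g)              # Step 2: u^H_{k,l} = (P_{k+l} - P_k)(f + K psi u_{k,l-1})
    u = coarse_solve(uH) + uH              # Steps 3-4: u_{k,l} = u^L_{k,l} + u^H_{k,l}

# The coarse part of u satisfies (10.11) exactly (up to the fixed-point tolerance),
# so the coarse residual P_k(u - K psi(u) - f) vanishes.
coarse_res = np.linalg.norm(P(sk, u - K @ psi(u) - f))
print(coarse_res)
```

Only the $s(k)$-dimensional coarse equation is ever solved nonlinearly; each level above it contributes one operator application and a subtraction of projections, mirroring the linear cost claim in the text.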

The multilevel solution $u_{k,m}$ is in fact a solution of a nonlinear operator equation. We present this observation in the next proposition.

Proposition 10.3 If $u^H_{k,m}$ is obtained from formula (10.10) and $u^L_{k,m} \in X_k$ is a solution of equation (10.11), then $u_{k,m} = u^L_{k,m} + u^H_{k,m}$ is a solution of the equation
$$
(I - P_k\mathcal{K}\Psi)u_{k,m} = P_{k+m} f + (P_{k+m} - P_k)\mathcal{K}\Psi u_{k,m-1}. \tag{10.12}
$$
Conversely, for any solution $u_{k,m}$ of (10.12), $u^H_{k,m} = (P_{k+m} - P_k)u_{k,m}$ satisfies equation (10.10) and $u^L_{k,m} = P_k u_{k,m}$ is a solution of equation (10.11).

Proof Since $u^L_{k,m} \in X_k$, we have that $(P_{k+m} - P_k)u^L_{k,m} = 0$. It follows from (10.10) with $l = m$ that
$$
(P_{k+m} - P_k)(u^L_{k,m} + u^H_{k,m}) = (P_{k+m} - P_k)(f + \mathcal{K}\Psi u_{k,m-1}).
$$
Adding the above equation to equation (10.11) with $l = m$, and noticing that $P_{k+m}u_{k,m} = u_{k,m}$, yield equation (10.12).

Conversely, let $u_{k,m}$ be a solution of equation (10.12). We apply the operator $P_{k+m} - P_k$ to both sides of the equation and obtain (10.10) by utilizing
$$
(P_{k+m} - P_k)P_k = 0 \quad\text{and}\quad (P_{k+m} - P_k)P_{k+m} = P_{k+m} - P_k,
$$
where the second equation is a consequence of formula (10.6). We then apply $P_k$ to both sides of (10.12) and use $P_k(P_{k+m} - P_k) = 0$ to conclude that $u^L_{k,m} = P_k u_{k,m}$ is a solution of equation (10.11) with $l = m$.

Equation (10.12) differs from equation (10.3) with $n = k + m$ in two ways: (1) the right-hand sides of the two equations differ in the term $(P_{k+m} - P_k)\mathcal{K}\Psi u_{k,m-1}$; (2) the nonlinear operators on the left-hand sides are different, $I - P_{k+m}\mathcal{K}\Psi$ for equation (10.3) and $I - P_k\mathcal{K}\Psi$ for equation (10.12). It requires much less computational effort to invert the nonlinear operator $I - P_k\mathcal{K}\Psi$ than the nonlinear operator $I - P_{k+m}\mathcal{K}\Psi$. It is these differences that lead to fast solutions of the Hammerstein equation. Moreover, equation (10.12) connects multiple levels (levels $k$, $k+m-1$ and $k+m$) of the solution space and the range space. This is the basis on which the approximate solution $u_{k,m}$ has a good approximation property.

To close this subsection, we compare the MAM with the well-known multigrid method. From the point of view of Kress [177], the multigrid method for solving linear integral equations uses special techniques for residual correction, utilizing the information of the coarser levels to construct appropriate approximations of the inverse of the operator of the approximate equation to be solved. Specific choices of the approximate inverses lead to the V-cycle, W-cycle and cascadic multigrids. This idea was applied to the construction of iteration schemes for nonlinear integral equations; the related work was discussed in a master review paper [7]. Taking this point of view, the proposed MAM might be considered as a nonconventional cascadic multigrid method with significant differences from the traditional one.

In order to compare the proposed method with the multigrid method, we review the two-grid method, which was first introduced in [9] and reviewed in [7]. The two-grid method solves (10.3) by the Newton iteration
$$
u_n^{(\ell+1)} = u_n^{(\ell)} - \bigl[I - (P_n\mathcal{K}\Psi)'(u_n^{(\ell)})\bigr]^{-1}\bigl[u_n^{(\ell)} - P_n\mathcal{K}\Psi(u_n^{(\ell)}) - P_n f\bigr], \quad \ell = 1, 2, \ldots,
$$
combined with an approximation of $[I - (P_n\mathcal{K}\Psi)'(u_n^{(\ell)})]^{-1}$ using the information of coarser grids. In particular, one may choose a level $n' < n$ and use the following approximation:
$$
\bigl[I - (P_n\mathcal{K}\Psi)'(u_n^{(\ell)})\bigr]^{-1} \approx I + \bigl[I - (P_{n'}\mathcal{K}\Psi)'(u_{n'})\bigr]^{-1}(P_n\mathcal{K}\Psi)'(u_{n'}),
$$



where $u_{n'}$ is the solution of (10.3) with $n = n'$. Since the Jacobian matrix $(P_{n'}\mathcal{K}\Psi)'(u_{n'})$ was obtained after solving (10.3) at level $n'$, and $(P_n\mathcal{K}\Psi)'(u_{n'})$ remains unchanged during the iteration for level $n$, the above approximation avoids the computational cost of updating the Jacobian matrix. However, the need to establish $(P_n\mathcal{K}\Psi)'(u_{n'})$ still requires $O(s(n)^2)$ computational cost. The multigrid scheme uses information from more than one lower level to approximate $[I - (P_n\mathcal{K}\Psi)'(u_n^{(\ell)})]^{-1}$, and it also needs $O(s(n)^2)$ computational cost. In other words, the idea of the multigrid method is as follows: first establish the Newton iteration method for equation (10.3), and then approximate the Jacobian matrix appearing in the iteration process by using information in coarse grids. Our proposed method introduces a new approximation strategy, which directly approximates the nonlinear operator $I - P_n\mathcal{K}\Psi$ in (10.3), not the Jacobian matrix in the Newton iteration, by $I - P_k\mathcal{K}\Psi$ at a fixed level $k < n$. This point can clearly be observed in Proposition 10.3. It is possible only because the solution space has a built-in multilevel structure. The proposed method requires only $O(s(n))$ (linear) computational cost.
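To make the cost contrast concrete, the sketch below solves a toy discrete system $u - K\psi(u) = f$ two ways: full Newton, which rebuilds and refactors the dense Jacobian $I - K\,\mathrm{diag}(\psi'(u))$ at every step, and a chord (frozen-Jacobian) variant in the spirit of the two-grid idea, which forms one Jacobian inverse once and reuses it. All data are illustrative assumptions; the explicit inverse is used only to make the "factor once, reuse many times" point visible.

```python
import numpy as np

rng = np.random.default_rng(2)
n = 50
K = 0.1 * rng.standard_normal((n, n)) / np.sqrt(n)   # small-norm stand-in operator
f = rng.standard_normal(n)
psi = np.tanh
dpsi = lambda u: 1.0 / np.cosh(u) ** 2               # psi'(u)

def F(u):
    """Residual of the discrete Hammerstein system u - K psi(u) = f."""
    return u - K @ psi(u) - f

# Full Newton: a new dense Jacobian I - K diag(psi'(u)) at every iteration
u = f.copy()
for _ in range(8):
    J = np.eye(n) - K * dpsi(u)[None, :]   # column j of K scaled by psi'(u_j)
    u = u - np.linalg.solve(J, F(u))

# Chord iteration: freeze the Jacobian at the initial guess and reuse it,
# trading quadratic for linear convergence but avoiding repeated factorizations
u2 = f.copy()
J0_inv = np.linalg.inv(np.eye(n) - K * dpsi(u2)[None, :])
for _ in range(60):
    u2 = u2 - J0_inv @ F(u2)

print(np.linalg.norm(F(u)), np.linalg.norm(F(u2)))
```

Both variants drive the residual to near machine precision here; the difference is that the chord loop performs only matrix-vector products after the single setup, which is the saving the two-grid approximation aims for.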

10.2.2 Analysis of the multilevel algorithm

We analyze the MAM described in the last subsection. Specifically, we show that the multilevel solution $u_{k,m}$ exists and prove that it converges to the exact solution $u^*$ of equation (10.2) in the same order as the projection solution $u_{k+m}$ does.

We first present a result concerning the existence of the multilevel solution; the proof is similar to that for the existence of the projection solution, which was established in [255].

Theorem 10.4 Let $u^*$ be an isolated solution of (10.2). If one is not an eigenvalue of $\mathcal{K}'(u^*)$, then there exists an integer $N$ such that for each $k > N$, if $u_{k,m-1}$ is given, the operator equation (10.12) has a unique solution $u_{k,m} \in B(u^*, \delta)$ for some $\delta > 0$ and for all $m \in \mathbb{N}_0$.

Proof Let $\mathcal{L} = \mathcal{K}'(u^*)$. By hypothesis, there exist an integer $N$ and a positive constant $\nu$ such that for all $k > N$, $(\mathcal{I} - \mathcal{P}_k\mathcal{L})^{-1}$ exists and $\|(\mathcal{I} - \mathcal{P}_k\mathcal{L})^{-1}\| \le \nu$. For $u, v \in \mathbb{X}$ we define

$$\mathcal{R}(u, v) = \mathcal{K}(u) - \mathcal{K}(v) - \mathcal{L}(u - v).$$

We then obtain from (10.2) and (10.12) that

$$u_{k,m} - u^* = (\mathcal{I} - \mathcal{P}_k\mathcal{L})^{-1}\big[(\mathcal{P}_k - \mathcal{I})u^* + \mathcal{P}_k\mathcal{R}(u_{k,m}, u^*) + (\mathcal{P}_{k+m} - \mathcal{P}_k)(f + \mathcal{K}u_{k,m-1})\big]. \qquad (10.13)$$


Multiscale methods for nonlinear integral equations

We introduce the operator

$$\mathcal{F}_{k,m}(v) = (\mathcal{I} - \mathcal{P}_k\mathcal{L})^{-1}\big[(\mathcal{P}_k - \mathcal{I})u^* + \mathcal{P}_k\mathcal{R}(v + u^*, u^*) + (\mathcal{P}_{k+m} - \mathcal{P}_k)(f + \mathcal{K}u_{k,m-1})\big].$$

It follows from hypothesis (H2) that there exist two positive constants $M_1, M_2$ such that the estimates

$$\|\mathcal{R}(v, u^*)\| \le M_1\|v - u^*\|^2$$

and

$$\|\mathcal{R}(v_1, u^*) - \mathcal{R}(v_2, u^*)\| \le M_2\Big(\|v_1 - u^*\| + \frac{1}{2}\|v_1 - v_2\|\Big)\|v_1 - v_2\|$$

hold for all $v, v_1, v_2$ in a neighborhood of $u^*$. By utilizing this property and the pointwise convergence of the projections $\mathcal{P}_n$, we can show that there exists a positive constant $\delta$ such that $\mathcal{F}_{k,m}$ is a contractive mapping on the ball $B(0, \delta)$. The fixed-point theorem then ensures that the fixed-point equation

$$v = \mathcal{F}_{k,m}(v)$$

has a unique solution in $B(0, \delta)$, or equivalently, that equation (10.12) has a unique solution $u_{k,m} \in B(u^*, \delta)$.
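The contraction argument is constructive: once $\mathcal{F}_{k,m}$ is contractive on a ball, Picard iteration converges to its unique fixed point. A minimal numerical illustration of this mechanism (the map $\cos$ here is an arbitrary contraction, not the operator of the proof):

```python
import numpy as np

def fixed_point(F, v0, tol=1e-12, max_iter=200):
    """Picard iteration v <- F(v) for a contractive map F (Banach fixed-point theorem)."""
    v = v0
    for _ in range(max_iter):
        v_new = F(v)
        if abs(v_new - v) < tol:
            return v_new
        v = v_new
    raise RuntimeError("no convergence; F may not be contractive")

# F(v) = cos(v) is a contraction near its fixed point (|F'(v)| = |sin v| < 1 there).
v_star = fixed_point(np.cos, 0.5)
print(v_star)  # the Dottie number, approximately 0.7390851
```

The linear convergence rate is the contraction constant, which is exactly what the factor $\alpha_{k,m}$ quantifies for the multilevel iteration below.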

Next we turn to analyzing the convergence of the multilevel solution. We first prove a crucial technical lemma, which confirms that the error between $u_{k,m}$ and $u_{k+m}$ is bounded by the error between $u_{k,m-1}$ and $u_{k+m}$, with a factor depending on $k$ and $m$ which converges to zero uniformly in $m \in \mathbb{N}_0$ as $k \to \infty$. To this end, for $n \in \mathbb{N}_0$ we denote by $R_n$ the approximation error of $\mathbb{X}_n$ for $u^* \in \mathbb{X}$, namely

$$R_n = R_n(u^*) = \inf\{\|u^* - v\|_{\mathbb{X}} : v \in \mathbb{X}_n\}.$$

Lemma 10.5 Let $u^*$ be an isolated solution of (10.2). If one is not an eigenvalue of $\mathcal{K}'(u^*)$, then there exist a sequence of positive numbers $\{\alpha_{k,m} : k \in \mathbb{N}, m \in \mathbb{N}_0\}$, with $\lim_{k\to\infty}\alpha_{k,m} = 0$ uniformly for $m \in \mathbb{N}_0$, and a positive integer $N$ such that for all $k \ge N$ and $m \in \mathbb{N}_0$,

$$\|u_{k,m} - u_{k+m}\| \le \alpha_{k,m}\|u_{k,m-1} - u_{k+m}\|.$$

Proof It follows from (10.3) with $n = k + m$ and (10.12) that

$$(\mathcal{I} - \mathcal{P}_{k+m}\mathcal{L})(u_{k,m} - u_{k+m}) = (\mathcal{P}_{k+m} - \mathcal{P}_k)\big[\mathcal{K}(u_{k,m-1}) - \mathcal{K}(u_{k,m})\big] + \mathcal{P}_{k+m}\mathcal{R}(u_{k,m}, u_{k+m}).$$


We let $a_{k,m} = \|(\mathcal{P}_{k+m} - \mathcal{P}_k)\mathcal{K}\|$, and by $L$ we denote the Lipschitz constant of the derivative $D_u\psi$. Hypothesis (H3) ensures that there exists a positive constant $p$ such that $\|\mathcal{P}_n\| \le p$ for all $n \in \mathbb{N}$. By hypothesis (H2) we have that

$$\|u_{k,m} - u_{k+m}\| \le \frac{\nu L a_{k,m}}{1 - \nu\big(a_{k,m}L + pM_1\|u_{k,m} - u_{k+m}\|\big)}\,\|u_{k,m-1} - u_{k+m}\|.$$

Since $a_{k,m} \to 0$ as $k \to \infty$ uniformly for $m \in \mathbb{N}_0$, there exists a positive integer $N_1$ such that $a_{k,m}L\nu < \frac16$ for all $k > N_1$ and $m \in \mathbb{N}_0$. Since $R_n \to 0$ as $n \to \infty$, we can find a positive integer $N_2$ such that $\nu pM_1\rho R_k < \frac16$ for all $k > N_2$. We choose $\delta > 0$ as in Theorem 10.4 such that $\nu pM_1\delta < \frac16$ and (10.12) has a unique solution in $B(u^*, \delta)$ for all $k > N_3$, for some positive integer $N_3$. Consequently, for any $k > N = \max\{N_1, N_2, N_3\}$ we have that

$$\nu\big(a_{k,m}L + pM_1\|u_{k,m} - u_{k+m}\|\big) \le \frac{1}{2},$$

which implies that

$$\|u_{k,m} - u_{k+m}\| \le 2\nu L a_{k,m}\|u_{k,m-1} - u_{k+m}\|.$$

We conclude the desired result of this lemma with $\alpha_{k,m} = 2\nu L a_{k,m}$, recalling that $a_{k,m} \to 0$ as $k \to \infty$ uniformly for $m \in \mathbb{N}_0$.

Recall that a sequence of non-negative numbers $\{\gamma_n : n \in \mathbb{N}_0\}$ is called a majorization sequence of $\{R_n : n \in \mathbb{N}_0\}$ if $\gamma_n \ge R_n$ for all $n \in \mathbb{N}_0$ and there exist a positive integer $N_0$ and a positive constant $\sigma$ such that $\gamma_{n+1}/\gamma_n \ge \sigma$ for $n \ge N_0$.

Making use of the above lemma, we obtain the following important result on the convergence rate of the multilevel augmentation solution. The proof is similar to that of Theorem 9.2 for linear operator equations.

Theorem 10.6 Let $u^*$ be an isolated solution of (10.2) and let $\{\gamma_n : n \in \mathbb{N}_0\}$ be a majorization sequence of $\{R_n : n \in \mathbb{N}_0\}$. If one is not an eigenvalue of $\mathcal{K}'(u^*)$, then there exist a positive constant $\rho$ and a positive integer $N$ such that for all $k \ge N$ and $m \in \mathbb{N}_0$,

$$\|u^* - u_{k,m}\| \le (\rho + 1)\gamma_{k+m}. \qquad (10.14)$$

Proof We prove the estimate (10.14) by induction on $m$. When $m = 0$, it is clear that

$$\|u^* - u_{k,0}\| = \|u^* - u_k\| \le \rho R_k \le \rho\gamma_k$$


for all $k > N_0$. Suppose that the estimate (10.14) holds for $m - 1$. By the triangle inequality and the induction hypothesis,

$$\|u_{k,m-1} - u_{k+m}\| \le \|u_{k,m-1} - u^*\| + \|u_{k+m} - u^*\| \le (\rho + 1)\gamma_{k+m-1} + \rho\gamma_{k+m} \le \Big(\rho + \frac{\rho + 1}{\sigma}\Big)\gamma_{k+m}.$$

Choose $N$ such that for all $k > N$ the estimate in Lemma 10.5 holds and $\alpha_{k,m}\big(\rho + \frac{\rho+1}{\sigma}\big) < 1$. Combining the estimate above with the estimate in Lemma 10.5 yields the inequality

$$\|u_{k,m} - u_{k+m}\| \le \gamma_{k+m}.$$

Again by the triangle inequality, we obtain that

$$\|u_{k,m} - u^*\| \le \|u_{k,m} - u_{k+m}\| + \|u_{k+m} - u^*\| \le (\rho + 1)\gamma_{k+m},$$

which completes the induction.

When a specific projection method is given, the associated majorization sequence is known. In this case Theorem 10.6 leads to a convergence order estimate for the corresponding MAM. The convergence orders of the MAMs based on the Galerkin projection and on the collocation projection will be presented in Section 10.2.4.

10.2.3 The discrete multilevel augmentation method

In Section 10.2.1 we established an operator form of the MAM, and in Section 10.2.2 we proved the existence and convergence properties of the approximate solution obtained from the method. It was shown that the proposed method gives approximate solutions with the same order of accuracy as the classical projection methods. The purpose of this section is to describe a discrete version of the MAM, obtained when an appropriate basis of the approximate subspace is chosen, and to estimate the computational cost of the algorithm.

Suppose that $\{\mathbb{L}_n : n \in \mathbb{N}_0\}$ is a sequence of subspaces of $\mathbb{X}^*$ with the properties

$$\mathbb{L}_n \subset \mathbb{L}_{n+1}, \qquad \dim(\mathbb{L}_n) = \dim(\mathbb{X}_n), \qquad n \in \mathbb{N}_0.$$

It follows from the nestedness property that there is a decomposition

$$\mathbb{L}_{k+m} = \mathbb{L}_k \oplus \mathbb{V}_{k,m}, \quad\text{where}\quad \mathbb{V}_{k,m} = \mathbb{V}_{k+1}\oplus\cdots\oplus\mathbb{V}_{k+m}.$$


We let $w(0) = \dim(\mathbb{X}_0)$ and $w(i) = \dim(\mathbb{W}_i)$ for $i > 0$, and suppose that

$$\mathbb{X}_0 = \operatorname{span}\{w_{0j} : j \in \mathbb{Z}_{w(0)}\}, \qquad \mathbb{L}_0 = \operatorname{span}\{\ell_{0j} : j \in \mathbb{Z}_{w(0)}\}$$

and

$$\mathbb{W}_i = \operatorname{span}\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}, \qquad \mathbb{V}_i = \operatorname{span}\{\ell_{ij} : j \in \mathbb{Z}_{w(i)}\}, \qquad i > 0.$$

Using the index set $U_n = \{(i,j) : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{Z}_{n+1}\}$, we have that

$$\mathbb{X}_n = \operatorname{span}\{w_{ij} : (i,j) \in U_n\}, \qquad \mathbb{L}_n = \operatorname{span}\{\ell_{ij} : (i,j) \in U_n\}, \qquad n \in \mathbb{N}_0.$$

Recalling that $s(n) = \dim(\mathbb{X}_n)$, we observe that $U_n$ has cardinality $\operatorname{card}(U_n) = s(n)$. We further assume that the elements of $U_n$ are ordered lexicographically.

For any $v \in \mathbb{X}_{k+m}$ we have a unique expansion

$$v = \sum_{(i,j)\in U_{k+m}} v_{ij}\,w_{ij}.$$

The vector $\mathbf{v} = [v_{ij} : (i,j) \in U_{k+m}]$ is called the representation vector of $v$. Thus, for the solution $u_{k,m}$ of (10.12), the representation vector is $\mathbf{u}_{k,m} = [(u_{k,m})_{ij} : (i,j) \in U_{k+m}]$. Setting $U_{k,m} = U_{k+m}\setminus U_k$, we obtain that

$$U_{k,m} = \{(i,j) : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{Z}_{k+m+1}\setminus\mathbb{Z}_{k+1}\}.$$

Consequently, we have the representations

$$u^L_{k,m} = \sum_{(i,j)\in U_k} (u_{k,m})_{ij}\,w_{ij} \qquad\text{and}\qquad u^H_{k,m} = \sum_{(i,j)\in U_{k,m}} (u_{k,m})_{ij}\,w_{ij}.$$
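A concrete instance of these index sets and representation vectors is provided by the piecewise-constant (Haar) multiscale basis on $[0,1)$ with $\mu = 2$, where $w(0) = 1$ and $w(i) = 2^{i-1}$; this choice is an illustrative assumption, not the book's basis construction. Splitting the coefficient vector over $U_k$ and $U_{k,m}$ gives exactly the coarse part $u^L_{k,m}$ and the detail part $u^H_{k,m}$:

```python
import numpy as np

def haar_analysis(samples):
    """Coefficients, indexed by (i, j), of a piecewise-constant function on 2^n cells
    in the orthonormal Haar multiscale basis of L2[0,1)."""
    c = samples.copy() / np.sqrt(len(samples))   # scale so the c are L2 coefficients
    coeffs = {}
    while len(c) > 1:
        i = int(np.log2(len(c)))                 # current scale
        even, odd = c[0::2], c[1::2]
        for j, d in enumerate((even - odd) / np.sqrt(2)):
            coeffs[(i, j)] = d                   # W_i coefficients, j in Z_{2^{i-1}}
        c = (even + odd) / np.sqrt(2)
    coeffs[(0, 0)] = c[0]                        # X_0 coefficient
    return coeffs

k, m = 2, 3
s = 2 ** (k + m)                                 # s(k+m) = mu^{k+m} with mu = 2
u = np.sin(2 * np.pi * (np.arange(s) + 0.5) / s) # samples of a target function

coeffs = haar_analysis(u)
U_k  = [(i, j) for (i, j) in coeffs if i <= k]   # index set U_k, card = 2^k
U_km = [(i, j) for (i, j) in coeffs if i > k]    # U_{k,m} = U_{k+m} \ U_k

# The L2 energy splits between the coarse part u^L and the detail part u^H.
low  = sum(coeffs[idx] ** 2 for idx in U_k)
high = sum(coeffs[idx] ** 2 for idx in U_km)
print(len(U_k) + len(U_km) == s, np.isclose(low + high, np.sum(u**2) / s))
# prints: True True
```

The lexicographic ordering of $U_n$ corresponds to sorting the dictionary keys, which stacks the coarse coefficients ahead of the detail coefficients exactly as in $\mathbf{u}_{k,m} = [\mathbf{u}^L_{k,m}; \mathbf{u}^H_{k,m}]$.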

It follows from the property of $\mathbb{L}_n$ that equation (10.11) is equivalent to the nonlinear system

$$\Big\langle \ell_{i'j'},\, (\mathcal{I} - \mathcal{K})\Big(\sum_{(i,j)\in U_k}(u_{k,l})_{ij}\,w_{ij} + u^H_{k,l}\Big)\Big\rangle = \langle \ell_{i'j'}, f\rangle, \qquad (i',j') \in U_k. \qquad (10.15)$$

In order to convert (10.10) into its equivalent discrete form, we first prove the following lemma.

Lemma 10.7 For $v \in \mathbb{X}$, the equation

$$(\mathcal{P}_{k+l} - \mathcal{P}_k)v = 0 \qquad (10.16)$$

is equivalent to

$$\langle \ell_{i'j'}, v\rangle = 0, \qquad (i',j') \in U_{k,l}, \qquad (10.17)$$


if and only if, for all $v \in \mathbb{X}$,

$$\langle \ell_{i'j'}, \mathcal{P}_kv\rangle = 0, \qquad (i',j') \in U_{k,l}. \qquad (10.18)$$

Proof We observe that since $\ell_{i'j'} \in \mathbb{L}_{k+l}$ for $(i',j') \in U_{k,l}$, we have $\langle \ell_{i'j'}, v - \mathcal{P}_{k+l}v\rangle = 0$ for $(i',j') \in U_{k,l}$. Hence (10.18) is equivalent to

$$\langle \ell_{i'j'}, v - (\mathcal{P}_{k+l} - \mathcal{P}_k)v\rangle = 0, \qquad (i',j') \in U_{k,l}. \qquad (10.19)$$

We first prove the sufficiency, assuming that (10.18) holds for all $v \in \mathbb{X}$. It follows from $\mathbb{L}_{k+l}\cap\mathbb{X}_{k+l}^{\perp} = \{0\}$ and equation (10.18) that for any $(i',j') \in U_{k,l}$ there exists $w \in \mathbb{W}_{k,l}$ such that $\langle \ell_{i'j'}, w\rangle \ne 0$. Therefore $\mathbb{V}_{k,l}\cap\mathbb{W}_{k,l}^{\perp} = \{0\}$. Now suppose that $v \in \mathbb{X}$ is a solution of equation (10.16). Then it follows directly from equation (10.19) that $v$ satisfies equation (10.17). Conversely, if $v$ satisfies equation (10.17) but is not a solution of equation (10.16), then we can find $\ell \in \mathbb{V}_{k,l}$ such that $\langle \ell, (\mathcal{P}_{k+l} - \mathcal{P}_k)v\rangle \ne 0$, and thus $\langle \ell, v - (\mathcal{P}_{k+l} - \mathcal{P}_k)v\rangle \ne 0$, which contradicts equation (10.19).

It remains to prove the necessity. For any $v \in \mathbb{X}$ we verify directly that $v - (\mathcal{P}_{k+l} - \mathcal{P}_k)v$ is a solution of equation (10.16), and hence it also satisfies equation (10.17). Thus we obtain equation (10.19), and by the equivalence of (10.19) and (10.18), equation (10.18) follows.

The next theorem gives the condition under which equation (10.10) can be converted into its equivalent discrete form.

Theorem 10.8 The following statements are equivalent:

(i) Equation (10.10) is equivalent to

$$\langle \ell_{i'j'}, u^H_{k,l}\rangle = \langle \ell_{i'j'}, f + \mathcal{K}u_{k,l-1}\rangle, \qquad (i',j') \in U_{k,l}. \qquad (10.20)$$

(ii) $\mathbb{V}_p \subset \mathbb{X}_k^{\perp}$, $p > k$.

(iii) For any $v \in \mathbb{X}_k$,

$$\langle \ell_{i'j'}, v\rangle = 0, \qquad (i',j') \in U_{k,l}.$$

Proof The equivalence of statements (ii) and (iii) is clear. It suffices to prove the equivalence of statements (i) and (ii). Note that equation (10.10) is equivalent to

$$(\mathcal{P}_{k+l} - \mathcal{P}_k)\big[u^H_{k,l} - (f + \mathcal{K}u_{k,l-1})\big] = 0.$$

By Lemma 10.7, the above equation is equivalent to (10.20) if and only if statement (ii) holds.

We are now ready to present the discrete form of the MAM. For $(i',j'), (i,j) \in U_{k,l}$ we define the matrix

$$\mathbf{E}_{k,l} = [\langle \ell_{i'j'}, w_{ij}\rangle : (i',j'), (i,j) \in U_{k,l}].$$


Using this notation, equation (10.20) can be rewritten as

$$\mathbf{E}_{k,l}\mathbf{u}^H_{k,l} = \mathbf{f}_{k,l}, \qquad (10.21)$$

where $\mathbf{u}^H_{k,l} = [(u_{k,l})_{ij} : (i,j) \in U_{k,l}]$ and $\mathbf{f}_{k,l} = [\langle \ell_{i'j'}, f + \mathcal{K}u_{k,l-1}\rangle : (i',j') \in U_{k,l}]$.

Algorithm 10.9 (The multilevel augmentation method: a discrete form) Let $k$ be a fixed positive integer.

Step 1. Solve the nonlinear system

$$\Big\langle \ell_{i'j'},\, (\mathcal{I} - \mathcal{K})\Big(\sum_{(i,j)\in U_k}(u_k)_{ij}\,w_{ij}\Big)\Big\rangle = \langle \ell_{i'j'}, f\rangle, \qquad (i',j') \in U_k, \qquad (10.22)$$

and obtain the solution $\mathbf{u}_k = [(u_k)_{ij} : (i,j) \in U_k]$. Let $u_{k,0} = u_k$ and $l = 1$.

Step 2. Solve the linear system (10.21) to obtain $\mathbf{u}^H_{k,l}$ and define

$$u^H_{k,l} = \sum_{(i,j)\in U_{k,l}} (u_{k,l})_{ij}\,w_{ij}.$$

Step 3. Solve the nonlinear system (10.15) to obtain $\mathbf{u}^L_{k,l} = [(u_{k,l})_{ij} : (i,j) \in U_k]$. Define $u^L_{k,l} = \sum_{(i,j)\in U_k}(u_{k,l})_{ij}\,w_{ij}$ and $u_{k,l} = u^L_{k,l} + u^H_{k,l}$.

Step 4. Set $l \leftarrow l + 1$ and go back to Step 2.

A crucial procedure in Algorithm 10.9 is the repeated solution of the nonlinear system (10.15), typically by the Newton iteration or by the secant method. There are two strategies for implementing the Newton (or secant) iteration. The first is to update the Jacobian matrix of the nonlinear system (10.15) at each step. The drawback of this strategy is that updating the Jacobian matrix is time-consuming; this can be mitigated by reducing the frequency of the updates. The second strategy is, in Step 3 of Algorithm 10.9, to reuse the Jacobian matrix obtained in Step 1 when solving the nonlinear system (10.15). This modification avoids updating the Jacobian matrix altogether and thus significantly reduces the computational cost. It may affect the approximation accuracy, but this can be compensated for by a few additional iterations. The numerical results presented later show that this strategy indeed speeds up the computation significantly while preserving the approximation accuracy.
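The two strategies can be contrasted on any small nonlinear system. In the sketch below (the system is an illustrative assumption, not one arising from (10.15)), full Newton updates the Jacobian at each step, while the frozen-Jacobian ("chord") variant reuses the Jacobian of the initial point; both reach the same solution, the latter with a few extra, much cheaper, iterations:

```python
import numpy as np

def F(u):                      # a small nonlinear system (illustrative assumption)
    return u - 0.1 * np.array([np.sin(u[0] + u[1]), u[0] * u[1]]) - 1.0

def J(u):                      # its Jacobian
    c = 0.1 * np.cos(u[0] + u[1])
    return np.eye(2) - np.array([[c, c], [0.1 * u[1], 0.1 * u[0]]])

def newton(u0, freeze=False, tol=1e-12, max_iter=100):
    u, J0 = u0.copy(), J(u0)   # J0 is reused on every step when freeze=True
    for it in range(max_iter):
        step = np.linalg.solve(J0 if freeze else J(u), F(u))
        u -= step
        if np.linalg.norm(step) < tol:
            return u, it + 1
    raise RuntimeError("no convergence")

u0 = np.ones(2)
u_full, n_full = newton(u0)                   # Jacobian updated every step
u_frozen, n_frozen = newton(u0, freeze=True)  # Jacobian factored once
print(n_full, n_frozen, np.linalg.norm(u_full - u_frozen))
# Both converge to the same root; the frozen variant takes a few more iterations.
```

When the Jacobian factorization dominates the cost, as for the dense systems (10.15), the frozen strategy wins despite the extra iterations, which is exactly the trade-off described above.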

In the rest of this subsection we estimate the computational cost of Algorithm 10.9, measured by the number of multiplications used in the computation. We suppose that the initial approximate solution $u_k$ has been obtained and that we intend to find $u_{k,m}$ using Algorithm 10.9. According to the algorithm, we divide the computation into $m$ stages. At stage $i$, $i = 1, 2, \ldots, m$, we perform the following procedures:

(1) Generate the coefficient matrix $\mathbf{E}_{k,i}$.
(2) Compute the vector $\mathbf{f}_{k,i}$.
(3) Solve the linear system (10.21).
(4) Solve the nonlinear system (10.15).

The computational cost of Algorithm 10.9 is estimated for each procedure described above. We denote by $M_{k,i,j}$ the computational cost of procedure $j$ at stage $i$. We assume that the following hypothesis holds:

(A0) Computing the integrals that appear in $\mathbf{f}_{k,i}$ requires a constant computational cost per integral.

Specifically, we identify $M_{k,i,1}$ and $M_{k,i,2}$, respectively, with the number of entries of the matrix $\mathbf{E}_{k,i}$ and the number of components of the vector $\mathbf{f}_{k,i}$. Moreover, $M_{k,i,3}$ is the number of multiplications used for solving the linear system (10.21), and $M_{k,i,4}$ is the number of multiplications used for solving the nonlinear system (10.15). Since in procedure (4) we solve the same nonlinear system, only with different functions $u^H_{k,l}$, we conclude that $M_{k,i,4} = O(1)$. It remains to estimate $M_{k,i,3}$. For this purpose we make the following additional hypotheses:

(A1) There exists an integer $\mu > 1$ such that for any $n$ the dimension $s(n)$ of $\mathbb{X}_n$ is equivalent to $\mu^n$; that is, $s(n) \sim \mu^n$.

(A2) For any $i$, the matrix $\mathbf{E}_{k,i}$ is an upper triangular sparse matrix with $\mathcal{N}(\mathbf{E}_{k,i}) = O(s(k+i))$, where $\mathcal{N}(\mathbf{A})$ denotes the number of nonzero elements of the matrix $\mathbf{A}$.

We present estimates for $M_{k,i,j}$, $j = 1, 2, 3$, in the next proposition.

Proposition 10.10 If assumptions (A0), (A1) and (A2) hold, then for any $i > 0$ and $j = 1, 2, 3$,

$$M_{k,i,j} = O(s(k+i)).$$

Proof The estimates for $j = 1, 2$ are clear. The number of multiplications needed to solve the linear system (10.21) equals the number of nonzero entries of the coefficient matrix $\mathbf{E}_{k,i}$. According to assumption (A2), $\mathcal{N}(\mathbf{E}_{k,i}) = O(s(k+i))$. The proof is completed using assumption (A1).


Next we let $M_{k,m,j}$ denote the total computational cost related to procedure $j$ for obtaining the solution $u_{k,m}$.

Corollary 10.11 If assumptions (A0), (A1) and (A2) hold, then

$$M_{k,m,j} = O(s(k+m)), \quad j = 1, 2, 3, \qquad\text{and}\qquad M_{k,m,4} = O(m).$$

Proof For $j = 1, 2, 3, 4$ we have that

$$M_{k,m,j} = \sum_{i=1}^{m} M_{k,i,j}.$$

The result of the corollary follows directly from the equation above and Proposition 10.10.

Theorem 10.12 If assumptions (A0), (A1) and (A2) hold, then the total computational cost of obtaining the solution $u_{k,m}$ by Algorithm 10.9 is of order $O(s(k+m))$, where $s(k+m)$ is the dimension of the subspace $\mathbb{X}_{k+m}$.

Proof The total computational cost of obtaining the solution $u_{k,m}$ by Algorithm 10.9 is given by the sum of $M_{k,m,j}$ over $j = 1, 2, 3, 4$. The desired estimate follows from Corollary 10.11 and the equivalence of $s(k+m)$ and $\mu^{k+m}$.

The above theorem reveals that, under assumption (A0), the computational cost of Algorithm 10.9 is linear with respect to the dimension of the approximation space.

10.2.4 The Galerkin and collocation-based methods

In this subsection we present two specific MAMs: one based on the Galerkin method and the other based on the collocation method.

We first recall a multiscale partition $\{\Delta_n : n \in \mathbb{N}_0\}$ of the domain $\Omega$. For each scale $n$, the partition $\Delta_n$ consists of a family of subsets $\{\Omega_{ni} : i \in \mathbb{Z}_{e(n)}\}$, where $e(n)$ denotes the cardinality of $\Delta_n$, with the properties that

$$\operatorname{meas}(\Omega_{ni}\cap\Omega_{ni'}) = 0, \quad i, i' \in \mathbb{Z}_{e(n)},\ i \ne i', \qquad\text{and}\qquad \bigcup_{i\in\mathbb{Z}_{e(n)}}\Omega_{ni} = \Omega.$$

The multiscale property requires that for $n > n'$ and $i \in \mathbb{Z}_{e(n)}$ there exist a unique $i' \in \mathbb{Z}_{e(n')}$ such that $\Omega_{ni} \subset \Omega_{n'i'}$. We also demand that the partition "shrinks" at a proper rate; that is, there is a positive constant $\tau \in (0,1)$ such that for sufficiently large $n$, $d(n) \le \tau^n$, where $d(n)$ denotes the largest diameter of the subsets in $\Delta_n$. The number of elements of $\Delta_n$ is required to grow as $e(n) = O(\mu^n)$. When $\Omega$ can be decomposed into a union


of simplices, a multiscale partition of $\Omega$ that meets the above requirements is given in Sections 4.2, 5.1 and 7.1 (cf. [69, 75]).

We now describe the multilevel Galerkin scheme. In this case we choose $\mathbb{X} = L^2(\Omega)$ and take the subspaces $\mathbb{X}_n$ to be spaces of piecewise polynomials of order $r$ associated with a multiscale partition $\{\Delta_n : n \in \mathbb{N}_0\}$ of the domain $\Omega$. The multiscale partition described above guarantees the nestedness of the space sequence $\{\mathbb{X}_n : n \in \mathbb{N}_0\}$. The space $\mathbb{X}_{n+1}$ can be decomposed as the orthogonal sum of $\mathbb{X}_n$ and its orthogonal complement $\mathbb{W}_{n+1}$ in $\mathbb{X}_{n+1}$. We write $\mathbb{W}_0 = \mathbb{X}_0$. Then $\mathbb{X}_n$ can be written as the orthogonal direct sum of the $\mathbb{W}_i$ for $i \in \mathbb{Z}_{n+1}$.

The projection $\mathcal{P}_n$ is naturally chosen as the orthogonal projection from $\mathbb{X}$ onto $\mathbb{X}_n$. As a result, equation (10.3) becomes the Galerkin scheme for solving (10.2), and accordingly Algorithm 10.2 is a MAM based on the Galerkin scheme. In this case Theorem 10.6 takes the following form.

Theorem 10.13 Let $u^*$ be an isolated solution of (10.2). If one is not an eigenvalue of $\mathcal{K}'(u^*)$ and if $u^* \in W^{r,2}(\Omega)$, then there exist a positive constant $c$ and a positive integer $N$ such that for all $k \ge N$ and $m \in \mathbb{N}_0$,

$$\|u^* - u_{k,m}\|_2 \le c\,\tau^{r(k+m)}\|u^*\|_{r,2}.$$

Proof We may define a sequence $\{\gamma_n\}$ by

$$\gamma_n = c\,\tau^{rn}\|u^*\|_{r,2},$$

where $c$ is a positive constant, independent of $n$, such that $R_n \le \gamma_n$; recall that $R_n$ is the error of the best approximation to $u^*$ from $\mathbb{X}_n$. Since $u^* \in W^{r,2}(\Omega)$, we conclude that for all $n \in \mathbb{N}_0$,

$$\frac{\gamma_{n+1}}{\gamma_n} = \tau^r.$$

Hence the sequence $\{\gamma_n\}$ is a majorization sequence of $\{R_n : n \in \mathbb{N}_0\}$. The desired result therefore follows directly from Theorem 10.6.

We next comment on the discrete form of the MAM based on the Galerkin scheme. For any $i \ge 0$ we choose an orthonormal basis $\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}$ for the space $\mathbb{W}_i$, so that $\{w_{ij} : (i,j) \in U_n\}$ forms an orthonormal basis of $\mathbb{X}_n$. The construction of these bases may be found in Chapter 4. This choice of the approximation spaces and their bases ensures that hypotheses (A1) and (A2) described in Section 10.2.3 are satisfied. Since $\mathbb{X}^* = L^2(\Omega)$, for $i \ge 0$ and $j \in \mathbb{Z}_{w(i)}$ the functional $\ell_{ij}$ may be chosen as

$$\langle \ell_{ij}, x\rangle = (w_{ij}, x) \qquad\text{for } x \in L^2(\Omega),$$


where $(\cdot,\cdot)$ denotes the inner product in $L^2$. In this setting the nonlinear system (10.15) takes the form

$$\Big(w_{i'j'},\, (\mathcal{I} - \mathcal{K})\Big(\sum_{(i,j)\in U_k} u_{ij}\,w_{ij} + u^H_{k,m}\Big)\Big) = (w_{i'j'}, f), \qquad (i',j') \in U_k, \qquad (10.23)$$

and the matrix $\mathbf{E}_{k,m}$ becomes the identity matrix. Hence equation (10.21) reduces to

$$\mathbf{u}^H_{k,m} = \mathbf{f}_{k,m}. \qquad (10.24)$$

Algorithm 10.9 with the Galerkin method thus has a very simple form.

We now turn our attention to the MAM based on the collocation method.

The multiscale collocation method that we describe here was first introduced in [69]. We set $\mathbb{X} = L^\infty(\Omega)$ and, as in the Galerkin case, choose the subspaces $\mathbb{X}_n$ as spaces of piecewise polynomials of order $r$ associated with a multiscale partition $\{\Delta_n : n \in \mathbb{N}_0\}$ of the domain $\Omega$. Again we need the orthogonal complement $\mathbb{W}_n$ of $\mathbb{X}_{n-1}$ in $\mathbb{X}_n$ and the same orthogonal decomposition of $\mathbb{X}_n$ as in the Galerkin case. However, we do not demand the orthogonality of basis functions within the subspace $\mathbb{W}_n$.

We need to describe the collocation functionals. To this end, we recall that $\mathbb{Y} = C(\Omega)$ and that a collocation functional is chosen as an element of $\mathbb{Y}^*$. Specifically, the space $\mathbb{L}_n$ in this case is spanned by a basis whose elements are point-evaluation functionals. Note that $\mathbb{L}_n$ has a decomposition $\mathbb{L}_n = \mathbb{V}_0\oplus\cdots\oplus\mathbb{V}_n$ with $\mathbb{V}_0 = \mathbb{L}_0$, where $\mathbb{V}_i = \operatorname{span}\{\ell_{ij} : j \in \mathbb{Z}_{w(i)}\}$ is described below. We construct $\mathbb{V}_0$ from refinable sets of points in $\Omega$ with respect to families of contractive maps which define the refinement of the multiscale partitions of $\Omega$. The functionals $\ell_{0j}$ are the point-evaluation functionals associated with the points in the refinable sets. Each functional $\ell_{1j}$ is defined by a linear combination of point-evaluation functionals, with the number of such functionals bounded independently of $i$, and satisfies the "semi-bi-orthogonality" property with respect to $\{w_{ij} : i = 0, 1,\ j \in \mathbb{Z}_{w(i)}\}$; that is,

$$\langle \ell_{i'j'}, w_{ij}\rangle = \delta_{ii'}\delta_{jj'} \qquad\text{for } i \le i'.$$

The functionals $\ell_{ij}$, $i > 1$, $j \in \mathbb{Z}_{w(i)}$, are defined recursively from $\ell_{1j}$, $j \in \mathbb{Z}_{w(1)}$. The projection $\mathcal{P}_n$ in this case is naturally the interpolatory projection onto $\mathbb{X}_n$. It can readily be verified that assumptions (A1) and (A2) described in Section 10.2.3 hold.

With the interpolatory projection $\mathcal{P}_n$, equation (10.3) becomes the collocation scheme for solving (10.2), and accordingly Algorithm 10.2 is a MAM based on the collocation scheme. Similarly to Theorem 10.13, we have the following convergence result for the collocation-based MAM.


Theorem 10.14 Let $u^*$ be an isolated solution of (10.2). If one is not an eigenvalue of $\mathcal{K}'(u^*)$ and if $u^* \in W^{r,\infty}(\Omega)$, then there exist a positive constant $c$ and a positive integer $N$ such that for all $k \ge N$ and $m \in \mathbb{N}_0$,

$$\|u^* - u_{k,m}\|_\infty \le c\,\tau^{r(k+m)}\|u^*\|_{r,\infty}.$$

Since the proof of Theorem 10.14 is similar to that of Theorem 10.13, we omit it.

To close this subsection, we remark on the influence of numerical integration on the approximation errors of the numerical solutions and on the computational complexity of the algorithm for the multiscale collocation method. In the computational complexity analysis presented in Section 10.2.3 we imposed assumption (A0) for the estimate of $M_{k,i,2}$. However, this assumption may not be fulfilled in all cases, and additional computational effort may be needed to compute the vector $\mathbf{f}_{k,l}$. We take the multiscale bases formed by piecewise polynomials as an example. When the collocation methods are applied to discretize the integral equation, $M_{k,l,2}$ indicates the computational cost of computing

$$\langle \ell_{i'j'}, \mathcal{K}u_{k,l-1}\rangle, \qquad (i',j') \in U_{k,l}, \qquad (10.25)$$

where the functionals $\ell_{i'j'}$ are linear combinations of point evaluations. Therefore we need to evaluate numerically integrals of the form

$$\int_\Omega K(s,t)\,\psi(t, u_{k,l-1}(t))\,dt.$$

According to [96, 273], under suitable assumptions on the regularity of the nonlinear function $\psi$ we have an approximation

$$\psi(t, u_{k,l-1}(t)) \approx \sum_{(i,j)\in U_{k+l}} b_{ij}\,w_{ij}(t)$$

with an optimal order of convergence and computational complexity $O(s(k+l)\log s(k+l))$. Computing (10.25) then reduces to calculating

$$\langle \ell_{i'j'}, \mathcal{K}w_{ij}\rangle, \qquad (i',j'), (i,j) \in U_{k,l}.$$

When the kernel is smooth or weakly singular, we can establish truncation strategies for the matrix, together with error-control strategies for computing the remaining elements of the matrix, at a cost of $O(s(k+l)(\log s(k+l))^\nu)$, where the positive integer $\nu$ depends on the dimension $d$ of the domain: for $d = 1, 2, 3$ the value of $\nu$ is $3, 4, 5$, respectively. See [72, 75, 264] for details. Summarizing the above discussion, we observe that the total computational cost of computing $\mathbf{f}_{k,l}$ is of order $O(s(k+l)(\log s(k+l))^\nu)$. A similar approach applies to the Galerkin method.


10.3 Multiscale methods for nonlinear boundary integral equations

In this section we develop the MAM for solving a nonlinear boundary integral equation, propose a matrix compression strategy, and present accelerated quadratures and Newton iterations for speeding up the computation.

10.3.1 The multilevel augmentation method

We describe in this subsection the MAM for solving the nonlinear boundary integral equation. We begin by recalling the reformulation of a nonlinear boundary value problem as a nonlinear integral equation.

Let $\Omega$ be a simply connected bounded domain in $\mathbb{R}^2$ with a $C^2$ boundary $\partial\Omega$. We consider solving the following nonlinear boundary value problem:

$$\begin{cases} \Delta u(x) = 0, & x \in \Omega,\\[1ex] \dfrac{\partial u}{\partial n_x}(x) = -g(x, u(x)) + g_0(x), & x \in \partial\Omega, \end{cases} \qquad (10.26)$$

where $n_x$ denotes the exterior unit normal vector to $\partial\Omega$ at $x$. The numerical solution of the above problem has been studied in many papers (see, for example, [17, 239] and the references cited therein). The fundamental solution of the Laplace equation in $\mathbb{R}^2$ is given by

$$\Phi(x, y) = -\frac{1}{2\pi}\log|x - y|.$$

It is shown (cf. [17, 239]) that problem (10.26) can be reformulated as the following nonlinear integral equation defined on $\partial\Omega$:

$$u(x) - \frac{1}{\pi}\int_{\partial\Omega} u(y)\,\frac{\partial}{\partial n_y}\log|x-y|\,ds_y - \frac{1}{\pi}\int_{\partial\Omega} g(y, u(y))\log|x-y|\,ds_y = -\frac{1}{\pi}\int_{\partial\Omega} g_0(y)\log|x-y|\,ds_y, \qquad x \in \partial\Omega. \qquad (10.27)$$

Note that the first integral operator is linear, while the second is nonlinear. We assume that the boundary $\partial\Omega$ has a parametrization $x = (\xi(t), \eta(t))$, $t \in [0,1)$. With this representation, the functions appearing in (10.27), which are defined on $\partial\Omega$, are transformed into functions of the variable $t$. For simplicity we use the same notations; that is,

$$u(t) = u(\xi(t), \eta(t)), \qquad g(t, u(t)) = g((\xi(t), \eta(t)), u), \qquad g_0(t) = g_0(\xi(t), \eta(t)).$$


With these notations, according to [15, 17], equation (10.27) is rewritten as

$$u(t) - \int_0^1 K(t,\tau)\,u(\tau)\,d\tau - \int_0^1 L(t,\tau)\,g(\tau, u(\tau))\,\chi(\tau)\,d\tau = -\int_0^1 L(t,\tau)\,g_0(\tau)\,\chi(\tau)\,d\tau, \qquad t \in [0,1), \qquad (10.28)$$

where, for $t, \tau \in [0,1)$,

$$K(t,\tau) = \begin{cases} \dfrac{1}{\pi}\,\dfrac{\eta'(\tau)(\xi(\tau)-\xi(t)) - \xi'(\tau)(\eta(\tau)-\eta(t))}{(\xi(t)-\xi(\tau))^2 + (\eta(t)-\eta(\tau))^2}, & t \ne \tau,\\[2ex] \dfrac{1}{\pi}\,\dfrac{\xi'(t)\eta''(t) - \eta'(t)\xi''(t)}{2\big[\xi'(t)^2 + \eta'(t)^2\big]}, & t = \tau, \end{cases}$$

$$L(t,\tau) = \frac{1}{2\pi}\log\big[(\xi(t)-\xi(\tau))^2 + (\eta(t)-\eta(\tau))^2\big], \qquad t \ne \tau,$$

and

$$\chi(\tau) = \sqrt{\xi'(\tau)^2 + \eta'(\tau)^2}.$$

Note that the diagonal value of $K$ is the limit of the off-diagonal expression as $\tau \to t$, so $K$ is continuous.
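For a concrete parametrization these kernels are straightforward to evaluate. The sketch below uses the unit circle $\xi(t) = \cos 2\pi t$, $\eta(t) = \sin 2\pi t$ (an illustrative choice) and checks that the diagonal value of $K$ agrees with the limit of the off-diagonal formula; with the formulas above and this parametrization, $K$ is identically $1$.

```python
import numpy as np

# Unit-circle parametrization (illustrative; any smooth closed curve
# (xi(t), eta(t)), t in [0,1), works the same way).
xi    = lambda t: np.cos(2 * np.pi * t)
eta   = lambda t: np.sin(2 * np.pi * t)
dxi   = lambda t: -2 * np.pi * np.sin(2 * np.pi * t)
deta  = lambda t:  2 * np.pi * np.cos(2 * np.pi * t)
ddxi  = lambda t: -(2 * np.pi) ** 2 * np.cos(2 * np.pi * t)
ddeta = lambda t: -(2 * np.pi) ** 2 * np.sin(2 * np.pi * t)

def K(t, tau):
    if np.isclose(t, tau):   # diagonal value = limit of the off-diagonal formula
        num = dxi(t) * ddeta(t) - deta(t) * ddxi(t)
        return num / (2 * np.pi * (dxi(t) ** 2 + deta(t) ** 2))
    num = deta(tau) * (xi(tau) - xi(t)) - dxi(tau) * (eta(tau) - eta(t))
    return num / (np.pi * ((xi(t) - xi(tau)) ** 2 + (eta(t) - eta(tau)) ** 2))

def L(t, tau):               # weakly singular (logarithmic) kernel, t != tau
    return np.log((xi(t) - xi(tau)) ** 2 + (eta(t) - eta(tau)) ** 2) / (2 * np.pi)

def chi(tau):                # arc-length factor
    return np.hypot(dxi(tau), deta(tau))

t = 0.3
print(K(t, t), K(t, t + 1e-5))   # both approximately 1 for the unit circle
print(chi(t))                    # 2*pi, the circumference of the unit circle
```

This continuity of $K$ on the diagonal is what makes the first kernel smooth, in contrast with the logarithmic singularity of $L$ along $t - \tau = 0, \pm 1$.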

We introduce two linear integral operators $\mathcal{K}, \mathcal{L} : L^\infty(0,1) \to L^\infty(0,1)$, defined respectively by

$$(\mathcal{K}w)(t) = \int_0^1 K(t,\tau)\,w(\tau)\,d\tau, \qquad t \in [0,1),$$

and

$$(\mathcal{L}w)(t) = \int_0^1 L(t,\tau)\,w(\tau)\,d\tau, \qquad t \in [0,1),$$

and the nonlinear operator

$$(\Psi u)(t) = g(t, u(t))\,\chi(t), \qquad t \in [0,1).$$

By letting

$$\mathcal{T} = \mathcal{K} + \mathcal{L}\Psi,$$

we rewrite equation (10.28) as

$$u - \mathcal{T}u = f, \qquad (10.29)$$

in which the right-hand-side function $f = -\mathcal{L}(g_0\chi)$.

In passing, we comment on the regularity of the two kernels $K$ and $L$. It is easy to verify that when $\partial\Omega$ is of class $C^s$ with $s \ge 2$, $K$ has continuous derivatives up to order $s - 2$. Throughout this section we assume that $s$ is sufficiently large. Hence there exists a positive constant $c$ such that

$$|D_t^\alpha D_\tau^\beta K(t,\tau)| \le c, \qquad t, \tau \in (0,1), \qquad (10.30)$$

for positive integers $\alpha, \beta$ with $\alpha + \beta \le s - 2$. The expression for $L$ contains a logarithmic factor, which exhibits a weak singularity. The positions of the singular points are determined by the properties of the parametrization $\xi$ and $\eta$. Noting that $\partial\Omega$ is a closed curve, the singular points are located where $t - \tau = 0, -1, 1$. Accordingly, we require that $L(t,\cdot) \in C^\infty([0,1]\setminus\{t\})$ for any $t \in [0,1]$ and that there exist positive constants $\theta$ and $\sigma \in (0,1)$ such that

$$|D_t^\alpha D_\tau^\beta L(t,\tau)| \le \theta\,\max\big\{|t-\tau|^{-(\sigma+\alpha+\beta)},\ |t-\tau+1|^{-(\sigma+\alpha+\beta)},\ |t-\tau-1|^{-(\sigma+\alpha+\beta)}\big\} \qquad (10.31)$$

for any $\alpha, \beta \in \mathbb{N}_0$ and $t, \tau \in [0,1]$ with $t \ne \tau$ and $t - \tau \ne \pm 1$. We remark that the above setting of weak singularity includes not only the logarithmic singularity but also other kinds of singularity.

We now return to equation (10.29). The solvability of (10.29) has been considered in the literature (cf. [239]). Throughout the rest of this section we assume that (10.29) has an isolated solution $u^* \in C(0,1)$. Moreover, we suppose that the function $g(x, u)$ is continuous with respect to $x \in \partial\Omega$ and Lipschitz continuous with respect to $u \in \mathbb{R}$, that the partial derivative $D_ug$ of $g$ with respect to the variable $u$ exists and is Lipschitz continuous, and that for each $u \in C(\partial\Omega)$, $g(\cdot, u(\cdot)),\ D_ug(\cdot, u(\cdot)) \in C(\partial\Omega)$.

Next we describe the fast algorithm for (10.29) in light of the idea of the MAM. For $n \in \mathbb{N}_0$, let $\pi_n$ be the uniform mesh which divides the interval $[0,1]$ into $\mu^n$ pieces, for a given positive integer $\mu$, and let $\mathbb{X}_n$ be the piecewise polynomial space of order $r$ with respect to $\pi_n$. It is easily observed that the sequence $\{\mathbb{X}_n : n \in \mathbb{N}_0\}$ is nested, that is, $\mathbb{X}_n \subset \mathbb{X}_{n+1}$. For each $n \in \mathbb{N}_0$, let $\mathcal{P}_n$ be the interpolatory projection from $C(0,1)$ onto $\mathbb{X}_n$ with the set of interpolation points

$$\Big\{\frac{j+s}{\mu^n} : j \in \mathbb{Z}_{\mu^n},\ s \in G\Big\},$$

where $G$ is the set of initial interpolation points in $[0,1]$. We require $G$ to have two properties. One is that $G$ contains $r$ distinct points, so that the interpolation of polynomials of order $r$ on $G$ exists uniquely. The other is that $G$ is refinable with respect to the family of contractive affine mappings $\Phi_\mu = \{\varphi_e : e \in \mathbb{Z}_\mu\}$, where

$$\varphi_e(x) = \frac{x + e}{\mu}, \qquad e \in \mathbb{Z}_\mu,$$


in the sense that

$$G \subset \Phi_\mu(G) = \bigcup_{e \in \mathbb{Z}_\mu} \phi_e(G).$$
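The refinability requirement can be checked mechanically. The following sketch does so in Python for the illustrative choice $\mu = 2$ and $G = \{0, 1/2\}$, which are assumptions made for this example (not necessarily the set used later in the chapter); this $G$ has $r = 2$ distinct points, so linear interpolation on $G$ is uniquely solvable.

```python
from fractions import Fraction

# Contractive affine maps phi_e(x) = (x + e)/mu, e in Z_mu.
mu = 2
G = {Fraction(0), Fraction(1, 2)}   # illustrative initial interpolation points

def phi(e, x):
    return (x + e) / mu

# Phi_mu(G) = union over e in Z_mu of phi_e(G)
Phi_G = {phi(e, x) for e in range(mu) for x in G}

print(sorted(float(x) for x in Phi_G))  # [0.0, 0.25, 0.5, 0.75]
print(G.issubset(Phi_G))                # True: G is refinable
```

Exact rational arithmetic (`Fraction`) is used so that set membership is tested without floating-point surprises.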

The collocation method for solving (10.29) is to find $u_n \in X_n$ such that

$$u_n - P_n T u_n = P_n f. \qquad (10.32)$$

Making use of Theorem 2 of [255], we prove below that (10.32) is uniquely solvable.

Theorem 10.15 If $u^* \in C(0,1)$ is an isolated solution of (10.29) and one is not an eigenvalue of the linear operator $T'(u^*)$, then for sufficiently large $n$, (10.32) has a unique solution $u_n \in B(u^*, \delta)$ for some $\delta > 0$, and there exist positive constants $c_1, c_2$ such that

$$c_1 \|u^* - P_n u^*\|_\infty \le \|u^* - u_n\|_\infty \le c_2 \|u^* - P_n u^*\|_\infty.$$

We now describe the MAM for finding an approximate solution of equation (10.32). The nestedness of the subspace sequence allows us to decompose $X_{n+1}$ as the direct sum of $X_n$ and its orthogonal complement $W_{n+1}$. Thus, for a fixed $k \in \mathbb{N}_0$ and any $m \in \mathbb{N}_0$, we have

$$X_{k+m} = X_k \oplus W_{k,m}, \quad \text{where } W_{k,m} = W_{k+1} \oplus W_{k+2} \oplus \cdots \oplus W_{k+m}. \qquad (10.33)$$

We now solve equation (10.32) with $n = k + m$, $k$ being fixed and small relative to $n$. At the first step we solve equation (10.32) with $n = k$ exactly and obtain the solution $u_k$. Since $\dim(X_k)$ is small in comparison with $\dim(X_{k+m})$, the computational cost of inverting the nonlinear operator $P_k(I - T)$ is much less than that of inverting $P_{k+m}(I - T)$. The next step is to obtain an approximation of the solution $u_{k+1}$ of (10.32) with $n = k + 1$. For this purpose we decompose

$$u_{k+1} = u^L_{k+1} + u^H_{k+1}, \quad \text{with } u^L_{k+1} \in X_k \text{ and } u^H_{k+1} \in W_{k+1},$$

according to the decomposition (10.33), and rewrite equation (10.32) with $n = k + 1$ in its equivalent form

$$(P_{k+1} - P_k)(u^L_{k+1} + u^H_{k+1}) - (P_{k+1} - P_k) T u_{k+1} = (P_{k+1} - P_k) f,$$
$$P_k(I - T)(u^L_{k+1} + u^H_{k+1}) = P_k f. \qquad (10.34)$$

In view of

$$(P_{k+1} - P_k)(u^L_{k+1} + u^H_{k+1}) = u^H_{k+1},$$


10.3 Multiscale methods for nonlinear boundary integral equations

the first equation in (10.34) becomes

$$u^H_{k+1} = (P_{k+1} - P_k)(f + T u_{k+1}).$$

The right-hand side of the above equation can be obtained approximately via the solution $u_k$ at the previous level. That is, we compute

$$u^H_{k,1} = (P_{k+1} - P_k)(f + T u_{k,0}),$$

where $u_{k,0} = u_k$, and note that $u^H_{k,1} \in W_{k+1}$. We replace $u^H_{k+1}$ in the second equation of (10.34) by $u^H_{k,1}$ and solve for $u^L_{k,1} \in X_k$ from the equation

$$P_k(I - T)(u^L_{k,1} + u^H_{k,1}) = P_k f.$$

The solution $u^L_{k,1}$ of the above equation is a good approximation to $u^L_{k+1}$. We then obtain an approximation to the solution $u_{k+1}$ of (10.32) by letting

$$u_{k,1} = u^L_{k,1} + u^H_{k,1}.$$

Note that $u^L_{k,1}$ and $u^H_{k,1}$ represent, respectively, the lower- and higher-frequency components of $u_{k,1}$. This procedure is repeated $m$ times to obtain the approximation $u_{k,m}$ of the solution $u_{k+m}$ of (10.32) with $n = k + m$. At step $l$ of this procedure we do not invert the nonlinear operator $P_{k+l}(I - T)$ but invert only the same nonlinear operator $P_k(I - T)$. This makes the method very efficient computationally. We summarize this procedure in the following algorithm.

Algorithm 10.16 (The multilevel augmentation method in an operator form) Let $k$ be a fixed positive integer.

Step 1: Find the solution $u_k \in X_k$ of equation (10.32) with $n = k$. Set $u_{k,0} = u_k$ and $l = 1$.

Step 2: Compute

$$u^H_{k,l} = (P_{k+l} - P_k)(f + T u_{k,l-1}) \in W_{k,l}. \qquad (10.35)$$

Step 3: Solve for $u^L_{k,l} \in X_k$ from the equation

$$P_k(I - T)(u^L_{k,l} + u^H_{k,l}) = P_k f. \qquad (10.36)$$

Step 4: Let $u_{k,l} = u^L_{k,l} + u^H_{k,l}$. Set $l \leftarrow l + 1$ and go back to step 2 until $l = m$.

The output of the MAM is $u_{k,m} \in X_{k+m}$. By employing the analysis of the last section for the Hammerstein equations, we establish the following approximation result for $u_{k,m}$.
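The operator-form steps of Algorithm 10.16 can be sketched concretely in code. Everything problem-specific below is an assumption made for illustration only: piecewise-constant collocation ($r = 1$, $\mu = 2$) at left endpoints $j/2^n$, a toy smooth kernel $0.2\,ts$ with nonlinearity $\sin$, a rectangle rule on the finest grid for the integral, and fixed-point iteration as the coarse nonlinear solver. The point is the structure of steps 1–4, not the discretization.

```python
import math

# Toy problem (illustrative only): u(t) - \int_0^1 K(t,s) sin(u(s)) ds = f(t).
Kker = lambda t, s: 0.2 * t * s       # smooth kernel; strong contraction
f = lambda t: t * (1.0 - t)

k, m = 3, 3                           # coarse level k, number of MAM steps m
F = k + m                             # finest level k + m
N = 2 ** F                            # mu^(k+m) cells with mu = 2
h = 1.0 / N
pts = [j * h for j in range(N)]       # collocation points (left endpoints)
fv = [f(t) for t in pts]

def T(v):
    """(T v)(t_i) ~ h * sum_j K(t_i, t_j) sin(v_j): rectangle-rule discretization."""
    g = [math.sin(x) for x in v]
    return [h * sum(Kker(ti, tj) * gj for tj, gj in zip(pts, g)) for ti in pts]

def P(n, v):
    """Interpolatory projection onto X_n: sample at the level-n collocation points."""
    step = 2 ** (F - n)
    return [v[(i // step) * step] for i in range(N)]

def solve(n, uH):
    """Solve P_n(I - T)(u + uH) = P_n f for u in X_n by fixed-point iteration
    (uH vanishes at the nested collocation points, so P_n uH = 0)."""
    u = P(n, fv)
    for _ in range(100):
        s = [a + b for a, b in zip(u, uH)]
        new = P(n, [fi + ti for fi, ti in zip(fv, T(s))])
        done = max(abs(a - b) for a, b in zip(new, u)) < 1e-12
        u = new
        if done:
            break
    return u

zero = [0.0] * N
u = solve(k, zero)                    # step 1: u_{k,0}, the only coarse exact solve
for l in range(1, m + 1):             # steps 2-4
    w = [fi + ti for fi, ti in zip(fv, T(u))]
    uH = [a - b for a, b in zip(P(k + l, w), P(k, w))]   # (10.35)
    u = [a + b for a, b in zip(solve(k, uH), uH)]        # (10.36) and step 4

u_ref = solve(F, zero)                # reference: direct collocation solve at level k+m
err_mam = max(abs(a - b) for a, b in zip(u, u_ref))
err_coarse = max(abs(a - b) for a, b in zip(solve(k, zero), u_ref))
print(err_mam < err_coarse)           # True: MAM refines the coarse solution
```

Note that only the coarse nonlinear operator $P_k(I - T)$ is ever inverted; the higher levels contribute through the linear update (10.35).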


Theorem 10.17 If $u^* \in C(0,1)$ is an isolated solution of (10.29) and one is not an eigenvalue of the linear operator $T'(u^*)$, then there exists a positive integer $N$ such that for any $k > N$ the MAM solution $u_{k,m}$ exists uniquely for all $m > 0$, and $u_{k,m} \in B(u^*, \delta)$ for some $\delta > 0$. Moreover, if $u^* \in W^{r,\infty}(0,1)$, then there exists a positive constant $c$ such that for all $k > N$ and $m \in \mathbb{N}_0$,

$$\|u^* - u_{k,m}\|_\infty \le c\, \mu^{-r(k+m)} \|u^*\|_{r,\infty}.$$

As Proposition 10.3 states, the MAM solves for $u_{k,l}$ successively, for $l = 1, 2, \ldots, m$, from

$$(I - P_k T) u_{k,l} = P_{k+l} f + (P_{k+l} - P_k) T u_{k,l-1}. \qquad (10.37)$$

Let us briefly compare (10.37) with (10.32). If we solve (10.32) directly, we have to invert $I - P_n T$, which is a nonlinear operator on $X_n$. Linearizing this nonlinear operator usually leads to a linear equation whose solution requires high computational complexity. Therefore, solving equation (10.32) directly is not economical when the dimension of $X_n$ is large. Solving (10.37), however, requires inverting only $I - P_k T$ with $k \ll n$. Note that the nonlinear component of the operator is restricted to $X_k$, the dimension of which is fixed and much smaller than that of the space where $u_{k,m}$ resides for large $m$. To see this more clearly, we split (10.37) into (10.35) and (10.36) and observe that (10.35) is a linear equation.

We now recall the multiscale bases for $X_n$ and the corresponding multiscale collocation functionals introduced in Chapter 7. Suppose that $L_n$, $n \in \mathbb{N}_0$, is a sequence of subspaces of $(L^\infty(0,1))^*$ which possesses the properties $L_n \subset L_{n+1}$ and $\dim(L_n) = \dim(X_n)$, $n \in \mathbb{N}_0$. We remark that the elements of $L_n$ are point evaluations and their finite linear combinations, and the refinability of $G$ guarantees the above nestedness property. We utilize the nestedness property again to obtain the multiscale decomposition $L_{k+m} = L_k \oplus V_{k,m}$, for any fixed $k \in \mathbb{N}_0$ and any $m \in \mathbb{N}_0$, in which $V_{k,m} = V_{k+1} \oplus \cdots \oplus V_{k+m}$. Let $w(0) = \dim(X_0)$ and $w(i) = \dim(W_i)$ for $i > 0$. We suppose that

$$X_0 = \operatorname{span}\{w_{0j} : j \in \mathbb{Z}_{w(0)}\}, \quad L_0 = \operatorname{span}\{\ell_{0j} : j \in \mathbb{Z}_{w(0)}\}$$

and

$$W_i = \operatorname{span}\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}, \quad L_i = \operatorname{span}\{\ell_{ij} : j \in \mathbb{Z}_{w(i)}\}, \quad i > 0.$$

By introducing the index set $U_n = \{(i,j) : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{Z}_{n+1}\}$, we have

$$X_n = \operatorname{span}\{w_{ij} : (i,j) \in U_n\}, \quad L_n = \operatorname{span}\{\ell_{ij} : (i,j) \in U_n\}, \quad n \in \mathbb{N}_0.$$


For any $l \in \mathbb{Z}_{m+1}$, we express the solution $u_{k,l}$ of (10.37) as

$$u_{k,l} = \sum_{(i,j) \in U_{k+l}} (u_{k,l})_{ij}\, w_{ij}.$$

We use the notation $\mathbf{u}_{k,l} = [(u_{k,l})_{ij} : (i,j) \in U_{k+l}]$ to denote the representation vector of $u_{k,l}$. Using the index set $U_{k,l} = U_{k+l} \setminus U_k = \{(i,j) : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{Z}_{k+l+1} \setminus \mathbb{Z}_{k+1}\}$, we have the expansions

$$u^L_{k,l} = \sum_{(i,j) \in U_k} (u_{k,l})_{ij}\, w_{ij} \quad \text{and} \quad u^H_{k,l} = \sum_{(i,j) \in U_{k,l}} (u_{k,l})_{ij}\, w_{ij}.$$

Moreover, we define the matrix

$$\mathbf{E}^H_{k,l} = [\langle \ell_{i'j'}, w_{ij} \rangle : (i',j'), (i,j) \in U_{k,l}].$$
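The index bookkeeping above is easy to get wrong in an implementation, so here is a small sketch in Python. The dimension counts $w(0) = r$ and $w(i) = r(\mu-1)\mu^{i-1}$ are an assumption consistent with property (V) of the complexity section, giving $\dim X_n = r\mu^n$.

```python
mu, r = 2, 2

def w(i):
    """Dimension counts: w(0) = r, w(i) = r(mu-1)mu^(i-1) for i > 0 (assumed)."""
    return r if i == 0 else r * (mu - 1) * mu ** (i - 1)

def U(n):
    """U_n = {(i, j) : j in Z_w(i), i in Z_{n+1}}."""
    return [(i, j) for i in range(n + 1) for j in range(w(i))]

k, l = 2, 3
Uk = U(k)
Ukl_full = U(k + l)
Uk_set = set(Uk)
Ukl = [p for p in Ukl_full if p not in Uk_set]   # U_{k,l} = U_{k+l} \ U_k

print(len(Ukl_full) == r * mu ** (k + l))   # True: dim X_{k+l} = r mu^{k+l}
print(len(Uk) + len(Ukl) == len(Ukl_full))  # True: coefficients split into uL and uH
```

The split of `Ukl_full` into `Uk` and `Ukl` is exactly the split of the coefficient vector into the low-frequency part $u^L_{k,l}$ and the high-frequency part $u^H_{k,l}$.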

As in the last section, we make use of the properties of the bases to conclude that the nonlinear equation (10.36) is equivalent to the nonlinear system

$$\Big\langle \ell_{i'j'},\ (I - T)\Big(\sum_{(i,j) \in U_k} (u_{k,l})_{ij}\, w_{ij} + u^H_{k,l}\Big) \Big\rangle = \langle \ell_{i'j'}, f \rangle, \quad (i',j') \in U_k, \qquad (10.38)$$

and (10.35) is equivalent to

$$\mathbf{E}^H_{k,l} \mathbf{u}^H_{k,l} = \mathbf{f}_{k,l}, \qquad (10.39)$$

where $\mathbf{u}^H_{k,l} = [(u_{k,l})_{ij} : (i,j) \in U_{k,l}]$ and

$$\mathbf{f}_{k,l} = [\langle \ell_{i'j'}, f + T u_{k,l-1} \rangle : (i',j') \in U_{k,l}]. \qquad (10.40)$$

Computing $\mathbf{f}_{k,l}$ requires evaluating the integrals $\langle \ell_{i'j'}, K u_{k,l-1} + L\Psi(u_{k,l-1}) \rangle$ for $(i',j') \in U_{k,l}$, where $\Psi$ denotes the superposition operator $\Psi(u) = g(\cdot, u(\cdot))$. We separate the integral into its linear component

$$\langle \ell_{i'j'},\ K u_{k,l-1} \rangle, \quad (i',j') \in U_{k,l}, \qquad (10.41)$$

and its nonlinear component

$$\langle \ell_{i'j'},\ L \Psi(u_{k,l-1}) \rangle, \quad (i',j') \in U_{k,l}. \qquad (10.42)$$

We write

$$u_{k,l-1} = \sum_{(i,j) \in U_{k+l-1}} (u_{k,l-1})_{ij}\, w_{ij}$$

and define, for $k, l$, the matrix $\mathbf{K}^H_{k,l-1} = [K_{i'j',ij} : (i',j') \in U_{k,l},\ (i,j) \in U_{k+l-1}]$. Computing the quantities in (10.41) is equivalent to generating $\mathbf{K}^H_{k,l-1}$ and calculating

$$\mathbf{K}^H_{k,l-1} \mathbf{u}_{k,l-1}. \qquad (10.43)$$

The evaluation of the nonlinear component (10.42) will be done slightly differently. Since $\Psi(u_{k,l-1}) \notin X_{k+l}$ in general, we are not able to express $\Psi(u_{k,l-1})$ as a linear combination of the basis of $X_{k+l}$, as we do for computing (10.41). In order to establish a fast algorithm similar to that for evaluating (10.41), we approximate $\Psi(u_{k,l-1})$ by its projection onto $X_{k+l}$. In other words, we do not evaluate (10.42) exactly but compute its approximation

$$\langle \ell_{i'j'},\ L P_{k+l} \Psi(u_{k,l-1}) \rangle, \quad (i',j') \in U_{k,l}. \qquad (10.44)$$

Formally, (10.44) has a form similar to (10.41), where $L$ corresponds to $K$ and $P_{k+l}\Psi(u_{k,l-1})$ corresponds to $u_{k,l-1}$. Therefore the fast algorithm described above for (10.41) is applicable to (10.44). To see this, we write

$$P_{k+l}\Psi(u_{k,l-1}) = \sum_{(i,j) \in U_{k+l}} (u_{k+l})_{ij}\, w_{ij},$$

and thus we have

$$\langle \ell_{i'j'},\ P_{k+l}\Psi(u_{k,l-1}) \rangle = \sum_{(i,j) \in U_{k+l}} (u_{k+l})_{ij} \langle \ell_{i'j'}, w_{ij} \rangle, \quad (i',j') \in U_{k+l}.$$

Let

$$\mathbf{g}_{k+l} = \Big[ \Big\langle \ell_{i'j'},\ \Psi\Big(\sum_{(i,j) \in U_{k+l-1}} (u_{k,l-1})_{ij}\, w_{ij}\Big) \Big\rangle : (i',j') \in U_{k+l} \Big]$$

and, for $n \in \mathbb{N}_0$, define the matrix $\mathbf{E}_n = [\langle \ell_{i'j'}, w_{ij} \rangle : (i',j'), (i,j) \in U_n]$. Then the representation vector $\mathbf{u}_{k+l}$ of $P_{k+l}\Psi(u_{k,l-1})$ satisfies the linear system

$$\mathbf{E}_{k+l} \mathbf{u}_{k+l} = \mathbf{g}_{k+l}. \qquad (10.45)$$

Define, for $k, l$, the matrix $\mathbf{L}^H_{k,l} = [L_{i'j',ij} : (i',j') \in U_{k,l},\ (i,j) \in U_{k+l}]$. Computing (10.44) is equivalent to solving $\mathbf{u}_{k+l}$ from (10.45), generating $\mathbf{L}^H_{k,l}$ and then evaluating

$$\mathbf{L}^H_{k,l} \mathbf{u}_{k+l}. \qquad (10.46)$$

In the bases and collocation functionals described above, we have the matrix representations of the operators $K$ and $L$. Specifically, for $n \in \mathbb{N}_0$, we define the matrices

$$\mathbf{K}_n = [K_{i'j',ij} : (i',j'), (i,j) \in U_n] \quad \text{with} \quad K_{i'j',ij} = \langle \ell_{i'j'}, K w_{ij} \rangle$$

and

$$\mathbf{L}_n = [L_{i'j',ij} : (i',j'), (i,j) \in U_n] \quad \text{with} \quad L_{i'j',ij} = \langle \ell_{i'j'}, L w_{ij} \rangle.$$

The matrices $\mathbf{K}_n$ and $\mathbf{L}_n$ will be compressed according to the regularity of the kernels $K$ and $L$. Note that the kernel $K$ is smooth and $L$ is weakly singular; their regularities are described in (10.30) and (10.31), respectively. We adopt the following truncation strategies.

(T1) For each $n \in \mathbb{N}_0$, the matrix $\mathbf{K}_n$ is truncated to a sparse matrix

$$\tilde{\mathbf{K}}_n = [\tilde{K}_{i'j',ij} : (i',j'), (i,j) \in U_n],$$

where, for $(i',j'), (i,j) \in U_n$,

$$\tilde{K}_{i'j',ij} = \begin{cases} K_{i'j',ij}, & i' + i \le n, \\ 0, & \text{otherwise.} \end{cases}$$

(T2) For $(i,j) \in U_n$, we let $S_{ij} = \operatorname{supp}(w_{ij})$. For each $n \in \mathbb{N}_0$ and $(i',j'), (i,j) \in U_n$, we set

$$\tilde{L}_{i'j',ij} = \begin{cases} L_{i'j',ij}, & \operatorname{dist}(S_{i'j'}, S_{ij}) \le \varepsilon^n_{i'i} \ \text{or} \ \operatorname{dist}(S_{i'j'}, S_{ij}) \ge 1 - \varepsilon^n_{i'i}, \\ 0, & \text{otherwise,} \end{cases}$$

in which the truncation parameters $\varepsilon^n_{i'i}$ are chosen by

$$\varepsilon^n_{i'i} = \max\{a \mu^{-n + b(n-i) + b'(n-i')},\ \rho(\mu^{-i} + \mu^{-i'})\} \qquad (10.47)$$

for some constants $b, b', a > 0$ and $\rho > 1$. The truncated matrix of $\mathbf{L}_n$ is defined by

$$\tilde{\mathbf{L}}_n = [\tilde{L}_{i'j',ij} : (i',j'), (i,j) \in U_n].$$

For $n = 7$ and piecewise linear basis functions, we show in Figure 10.1 the block matrices $\tilde{\mathbf{K}}_7$ and $\tilde{\mathbf{L}}_7$. We now describe the MAM with the matrix truncations.

[Figure 10.1 The distribution of nonzero entries of $\tilde{\mathbf{K}}_7$ (left) and $\tilde{\mathbf{L}}_7$ (right).]
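The compression achieved by (T1) can be quantified by a blockwise count: the block $(i', i)$ survives only when $i' + i \le n$, so $\mathcal{N}(\tilde{\mathbf{K}}_n) = \sum_{i'+i \le n} w(i')w(i) = O(n\mu^n)$, versus $\mu^{2n}$ for the full matrix. A small Python sketch (the dimension counts $w(0) = r$, $w(i) = r(\mu-1)\mu^{i-1}$ are an assumption consistent with property (V)):

```python
mu, r = 2, 1

def w(i):
    """Assumed wavelet-block dimensions giving dim X_n = r mu^n."""
    return r if i == 0 else r * (mu - 1) * mu ** (i - 1)

def nnz_T1(n):
    """Nonzeros kept by (T1): whole blocks (i', i) with i' + i <= n."""
    return sum(w(ip) * w(i)
               for ip in range(n + 1) for i in range(n + 1) if ip + i <= n)

for n in (4, 6, 8):
    full = (r * mu ** n) ** 2
    print(n, nnz_T1(n), full, nnz_T1(n) / (n * mu ** n))
# last column stays bounded (O(n mu^n)) while `full` grows like mu^(2n)
```

For $\mu = 2$, $r = 1$ this gives 48, 256 and 1280 kept entries at $n = 4, 6, 8$, against 256, 4096 and 65536 entries in the full matrices.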


For a fixed $k \in \mathbb{N}_0$ and $l > 0$, we set

$$\tilde{\mathbf{K}}^H_{k,l-1} = [\tilde{K}_{i'j',ij} : (i',j') \in U_{k,l},\ (i,j) \in U_{k+l-1}]$$

and

$$\tilde{\mathbf{L}}^H_{k,l} = [\tilde{L}_{i'j',ij} : (i',j') \in U_{k,l},\ (i,j) \in U_{k+l}].$$

Algorithm 10.18 (The multilevel augmentation method with matrix truncations) Let $k$ be a fixed positive integer. Given $m \in \mathbb{N}_0$, we carry out the following computing steps.

Step 1: Solve the nonlinear system

$$\Big\langle \ell_{i'j'},\ (I - (K + L\Psi))\Big(\sum_{(i,j) \in U_k} (u_k)_{ij}\, w_{ij}\Big) \Big\rangle = \langle \ell_{i'j'}, f \rangle, \quad (i',j') \in U_k, \qquad (10.48)$$

for the solution $\mathbf{u}_k = [(u_k)_{ij} : (i,j) \in U_k]$. Set $\mathbf{u}_{k,0} = \mathbf{u}_k$ and $l = 1$.

Step 2: Compute the representation vector $\mathbf{u}_{k+l}$ of $u_{k+l} = P_{k+l}\Psi(u_{k,l-1})$ and generate the vector

$$\hat{\mathbf{f}}_{k,l} = [\langle \ell_{i'j'}, f \rangle : (i',j') \in U_{k,l}].$$

Compute

$$\mathbf{f}_{k,l} = \hat{\mathbf{f}}_{k,l} + \tilde{\mathbf{K}}^H_{k,l-1} \mathbf{u}_{k,l-1} + \tilde{\mathbf{L}}^H_{k,l} \mathbf{u}_{k+l}. \qquad (10.49)$$

Solve the linear system

$$\mathbf{E}^H_{k,l} \mathbf{u}^H_{k,l} = \mathbf{f}_{k,l} \qquad (10.50)$$

for $\mathbf{u}^H_{k,l} = [(u_{k,l})_{ij} : (i,j) \in U_{k,l}]$ and define $u^H_{k,l} = \sum_{(i,j) \in U_{k,l}} (u_{k,l})_{ij}\, w_{ij}$.

Step 3: Solve the nonlinear system

$$\Big\langle \ell_{i'j'},\ (I - (K + L\Psi))\Big(\sum_{(i,j) \in U_k} (u_{k,l})_{ij}\, w_{ij} + u^H_{k,l}\Big) \Big\rangle = \langle \ell_{i'j'}, f \rangle, \quad (i',j') \in U_k, \qquad (10.51)$$

for $\mathbf{u}^L_{k,l} = [(u_{k,l})_{ij} : (i,j) \in U_k]$, and define $u^L_{k,l} = \sum_{(i,j) \in U_k} (u_{k,l})_{ij}\, w_{ij}$ and $u_{k,l} = u^L_{k,l} + u^H_{k,l}$.

Step 4: Set $l \leftarrow l + 1$ and go back to step 2 until $l = m$.


Several remarks on the computational performance of Algorithm 10.18 are in order. Its computational costs can be divided into three parts: part 1 generates the matrices $\tilde{\mathbf{K}}_{k+l}$ and $\tilde{\mathbf{L}}_{k+l}$; part 2 computes (10.49) and solves (10.50); part 3 solves the resulting nonlinear systems, namely (10.48) and (10.51). Generating the matrix $\tilde{\mathbf{L}}_{k+l}$ takes much more time than generating $\tilde{\mathbf{K}}_{k+l}$, since the kernel $L$ is weakly singular. We observe from numerical experiments that although parts 1 and 3 both have high costs, part 3 dominates the total computing time when $l$ is small, while as $l$ increases, part 1 grows faster than part 3.

10.3.2 Accelerated quadratures and Newton iterations

In this subsection we address two computational issues of the MAM. We employ a product integration scheme for computing the singular integrals which appear in the matrices involved in the MAM, and we introduce an approximation technique in the Newton iteration for solving the resulting nonlinear systems, to avoid repeated computation in generating their Jacobian matrices. The use of these two techniques results in a modified MAM which speeds up the computation (cf. [52]).

1 Product integration of singular integrals

Numerical experiments show that the generation of $\mathbf{L}_n$ requires much more computing time than that of $\mathbf{K}_n$, due to the singularity of the kernel $L$. We observe that the kernel $L$ has a special structure, which allows us to develop a quadrature method more efficient than Gaussian quadrature. In this subsection we develop a special product integration method for the specific kernel $L$, so that the computing time for calculating the nonzero entries of the matrix $\mathbf{L}_n$ is significantly reduced.

The product integration method has been widely used in the literature for computing singular integrals. For example, it was used in [17, 138] to discretize singular integral operators. Along this line, concrete formulas of product integration were given in [15] (see pp. 116–119 therein). These formulas were developed in the context of single-scale approximation and were proved efficient for computation in that context. In the current multiscale approximation context, we establish product integration formulas suitable for use with multiscale bases.

We now study the typical integral involved in the entries of the matrix $\mathbf{L}_n$. The nonzero entries of $\mathbf{L}_n$ involve integrals of the form

$$I_{ij}(s) = \int_{S_{ij}} L(s,t)\, w_{ij}(t)\, dt, \quad \text{for } (i,j) \in U_n,\ s \in [0,1],$$

where the $w_{ij}$ are multiscale piecewise polynomial basis functions. As suggested by [15, 17, 138], the kernel $L$ can be decomposed as

$$L(s,t) = \frac{1}{\pi}\,[B_0(s,t) + B_1(s,t)] \qquad (10.52)$$

with

$$B_0(s,t) = \log \left| \frac{\sqrt{(\xi(s) - \xi(t))^2 + (\eta(s) - \eta(t))^2}}{(s-t)(s-t-1)(s-t+1)} \right|$$

and

$$B_1(s,t) = \log|s-t| + \log|s-t-1| + \log|s-t+1|.$$

The above decomposition in fact extracts the singularity of $L$. Specifically, $B_1$ possesses all the singularity features of $L$, while $B_0$ is smooth. The kernel $B_0$ is easy to integrate numerically at little computational cost, while the singularity of $B_1$ brings difficulty to its numerical integration. However, we observe that the expression of $B_1$ is very specific, which allows us to integrate it exactly with explicit formulas. The quantities $I_{ij}$ can be written as the sum of two terms:

$$I^\nu_{ij}(s) = \frac{1}{\pi} \int_{S_{ij}} B_\nu(s,t)\, w_{ij}(t)\, dt, \quad \nu = 0, 1. \qquad (10.53)$$
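The claim that $B_0$ is smooth while $B_1$ carries the singularities can be checked numerically for a concrete curve. The unit-circle parametrization $\xi(t) = \cos 2\pi t$, $\eta(t) = \sin 2\pi t$ below is an illustrative assumption; for it the chord length equals $2|\sin \pi(s-t)|$, so $B_0 \to \log 2\pi$ as $t \to s$ and $B_0 \to \log \pi$ as $|s - t| \to 1$.

```python
import math

xi = lambda t: math.cos(2.0 * math.pi * t)
eta = lambda t: math.sin(2.0 * math.pi * t)

def B0(s, t):
    # smooth factor of the splitting (10.52)
    chord = math.hypot(xi(s) - xi(t), eta(s) - eta(t))
    return math.log(chord / abs((s - t) * (s - t - 1.0) * (s - t + 1.0)))

def B1(s, t):
    # extracted logarithmic singularities at s - t = 0, 1, -1
    return (math.log(abs(s - t)) + math.log(abs(s - t - 1.0))
            + math.log(abs(s - t + 1.0)))

print(abs(B0(0.3, 0.3 + 1e-6) - math.log(2.0 * math.pi)) < 1e-3)  # True: bounded at t = s
print(abs(B0(1e-6, 1.0 - 1e-6) - math.log(math.pi)) < 1e-3)       # True: bounded at |s-t| = 1
print(B1(0.3, 0.3 + 1e-6) < -10.0)                                # True: B1 blows up
```

This is exactly why $B_0$ can be handled by a standard quadrature while $B_1$ must be integrated analytically.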

We first compute the term $I^1_{ij}$. To this end, we re-express the multiscale basis functions $w_{ij}$. Note that for $j \in \mathbb{Z}_{w(0)}$, $w_{0j}$ is a polynomial of order $r$. Hence it can be written as

$$w_{0j}(t) = \sum_{\gamma \in \mathbb{Z}_r} a_\gamma t^\gamma.$$

For $i > 0$ and $j \in \mathbb{Z}_{w(i)}$, $w_{ij}$ is a piecewise polynomial of order $r$. According to the construction of the basis functions $w_{ij}$, for all $(i,j) \in U_n$ the support $S_{ij}$ can be divided into $\mu$ pieces $\Delta_\kappa = (a_\kappa, b_\kappa)$, $\kappa \in \mathbb{Z}_\mu$, on each of which $w_{ij}$ is a polynomial of order $r$. Thus we write $w_{ij}$ as

$$w_{ij}(t) = \sum_{\gamma \in \mathbb{Z}_r} a_{\kappa,\gamma} t^\gamma, \quad t \in \Delta_\kappa,\ \kappa \in \mathbb{Z}_\mu.$$

The above discussion of the basis functions motivates us to define the following special integrals. For $\gamma \in \mathbb{Z}_r$, $\alpha \in \Lambda := \{0, 1, -1\}$ and $a, b \in [0,1]$ with $a < b$, we set

$$I(a, b; \alpha, \gamma; s) = \int_a^b \log|s - t - \alpha|\, t^\gamma\, dt.$$

In this notation we have

$$\int_{S_{0j}} B_1(s,t)\, w_{0j}(t)\, dt = \sum_{\alpha \in \Lambda} \sum_{\gamma \in \mathbb{Z}_r} a_\gamma\, I(0, 1; \alpha, \gamma; s) \qquad (10.54)$$

and

$$\int_{S_{ij}} B_1(s,t)\, w_{ij}(t)\, dt = \sum_{\kappa \in \mathbb{Z}_\mu} \sum_{\alpha \in \Lambda} \sum_{\gamma \in \mathbb{Z}_r} a_{\kappa,\gamma}\, I(a_\kappa, b_\kappa; \alpha, \gamma; s). \qquad (10.55)$$

The integral $I(a, b; \alpha, \gamma; s)$ can be computed exactly. We derive below the formula for the integral.

Lemma 10.19 If $\gamma \in \mathbb{Z}_r$, $\alpha \in \Lambda$ and $a, b \in [0,1]$ with $a < b$, then

$$I(a, b; \alpha, \gamma; s) = \frac{1}{\gamma + 1} \left[ \left(t^{\gamma+1} - (s - \alpha)^{\gamma+1}\right) \log|s - t - \alpha| - \sum_{j=1}^{\gamma+1} (s - \alpha)^{\gamma - j + 1}\, \frac{t^j}{j} \right]_a^b.$$

Proof The formula in this lemma may be proved using integration by parts.

Using the integration formula in Lemma 10.19, we are able to compute the integrals (10.54) and (10.55) exactly.
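Lemma 10.19 can be verified directly in code. The sketch below implements the antiderivative and checks it against a fine midpoint rule (with the singular point placed outside $[a,b]$) and against the hand-computable case $\int_0^1 \log(2-t)\, dt = 2\log 2 - 1$.

```python
import math

def I_exact(a, b, alpha, gamma, s):
    """I(a,b; alpha, gamma; s) = int_a^b log|s - t - alpha| t^gamma dt (Lemma 10.19)."""
    c = s - alpha
    def A(t):  # antiderivative of t^gamma log|c - t|
        poly = sum(c ** (gamma - j + 1) * t ** j / j for j in range(1, gamma + 2))
        return ((t ** (gamma + 1) - c ** (gamma + 1)) * math.log(abs(c - t)) - poly) / (gamma + 1)
    return A(b) - A(a)

def I_midpoint(a, b, alpha, gamma, s, n=100000):
    """Brute-force midpoint rule for comparison."""
    h = (b - a) / n
    return h * sum(math.log(abs(s - (a + (i + 0.5) * h) - alpha)) * (a + (i + 0.5) * h) ** gamma
                   for i in range(n))

print(abs(I_exact(0.2, 0.7, 0.0, 2, 0.9) - I_midpoint(0.2, 0.7, 0.0, 2, 0.9)) < 1e-6)  # True
print(abs(I_exact(0.0, 1.0, 0.0, 0, 2.0) - (2.0 * math.log(2.0) - 1.0)) < 1e-12)       # True
```

Because the formula is exact, each nonzero entry's $I^1_{ij}$ part costs only a fixed number of evaluations, which is what the complexity estimates below rely on.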

It remains to compute the term $I^0_{ij}$. For this purpose we describe a Gaussian quadrature rule on the interval $[a,b]$. For each positive integer $j$, we denote by $g_j$ the Legendre polynomial of degree $j$ and by $\hat{\tau}^j_\ell$, $\ell \in \mathbb{Z}_j$, the $j$ zeros of $g_j$, in the order $-1 < \hat{\tau}^j_0 < \cdots < \hat{\tau}^j_{j-1} < 1$. We transfer these zeros to the interval $[a,b]$ by letting

$$\tau^j_\ell = \frac{a+b}{2} + \frac{b-a}{2}\, \hat{\tau}^j_\ell, \quad \ell \in \mathbb{Z}_j.$$

The points $\tau^j_\ell$ are the $j$ zeros of the Legendre polynomial of degree $j$ on the interval $[a,b]$. Given a continuous function $h$ defined on $[a,b]$, the $j$-point Gaussian quadrature rule is given by

$$G(h; [a,b]; j) = \sum_{\ell \in \mathbb{Z}_j} \omega^j_\ell\, h(\tau^j_\ell),$$


where

$$\omega^j_\ell = \int_a^b \prod_{i \in \mathbb{Z}_j,\, i \ne \ell} \frac{t - \tau^j_i}{\tau^j_\ell - \tau^j_i}\, dt.$$

This quadrature formula will be used to compute $I^0_{ij}$. We summarize below the integration strategy for computing the nonzero entries $\tilde{L}_{i'j',ij}$ of $\tilde{\mathbf{L}}_n$.

(QL) For a nonzero entry $\tilde{L}_{i'j',ij}$ of $\tilde{\mathbf{L}}_n$, we compute $I^0_{ij}$ and $I^1_{ij}$ separately. To compute $I^0_{ij}$, we divide the support $S_{ij}$ of $w_{ij}$ uniformly into $N$ intervals $\Delta_\ell$, $\ell \in \mathbb{Z}_N$, where $N$ is a positive integer such that the diameter of each of the intervals is less than or equal to $\mu^{-\kappa r}$ and $w_{ij}$ is a polynomial on each $\Delta_\ell$. The integral $I^0_{ij}$ is computed by the formula

$$\sum_{\ell \in \mathbb{Z}_N} G\big(B_0(s,\cdot)\, w_{ij};\ \Delta_\ell;\ (2\kappa)^{-1} n\big).$$

The integral $I^1_{ij}$ is expressed in terms of equations (10.54) and (10.55) and computed using Lemma 10.19.
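The nodes $\tau^j_\ell$ and weights $\omega^j_\ell$ can be generated once per interval and reused. Below is a self-contained pure-Python sketch: Legendre zeros are found by Newton's method from the classical Chebyshev initial guesses, and the weights use the standard closed form $\omega = (b-a)/((1-x^2)\,g_j'(x)^2)$, which is equivalent to the Lagrange-integral definition above.

```python
import math

def legendre(j, x):
    """Value and derivative of the Legendre polynomial of degree j at x."""
    p0, p1 = 1.0, x
    for n in range(1, j):
        p0, p1 = p1, ((2 * n + 1) * x * p1 - n * p0) / (n + 1)
    return p1, j * (x * p1 - p0) / (x * x - 1.0)

def gauss(j, a, b):
    """Nodes and weights of the j-point Gauss-Legendre rule on [a, b]."""
    nodes, weights = [], []
    for i in range(j):
        x = math.cos(math.pi * (i + 0.75) / (j + 0.5))  # initial guess for the i-th zero
        for _ in range(50):
            p, dp = legendre(j, x)
            dx = p / dp
            x -= dx
            if abs(dx) < 1e-15:
                break
        _, dp = legendre(j, x)
        nodes.append(0.5 * (a + b) + 0.5 * (b - a) * x)
        weights.append((b - a) / ((1.0 - x * x) * dp * dp))
    return nodes, weights

def G(h, a, b, j):
    """G(h; [a,b]; j) = sum_l omega_l h(tau_l)."""
    return sum(wl * h(tl) for tl, wl in zip(*gauss(j, a, b)))

# a j-point rule is exact for polynomials of degree <= 2j - 1
print(abs(G(lambda t: t ** 3, 0.0, 1.0, 2) - 0.25) < 1e-12)  # True
print(abs(G(lambda t: t ** 9, 0.0, 1.0, 5) - 0.1) < 1e-12)   # True
```

Since $B_0(s,\cdot)\,w_{ij}$ is smooth on each $\Delta_\ell$, a modest number of Gauss points per subinterval suffices in (QL).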

2 Approximate iteration for solving nonlinear systems

Algorithm 10.18 requires solving two nonlinear systems, (10.48) and (10.51). These equations are solved using the Newton method. In each iteration step of the Newton method we need to compute the entries of the Jacobian matrix. Specifically, for equation (10.48), the Newton iteration scheme has the following steps.

• Choose an initial guess $\mathbf{u}^{(0)}_k$.

• For $m = 0, 1, \ldots$, compute

$$F(\mathbf{u}^{(m)}_k) = (\mathbf{E}_k - \mathbf{K}_k)\mathbf{u}^{(m)}_k - \mathbf{f}^{(m)}_k$$

with

$$\mathbf{f}^{(m)}_k = \big[\langle \ell_{i'j'},\ f + L\Psi(u^{(m)}_k) \rangle : (i',j') \in U_k\big], \quad u^{(m)}_k = \sum_{(i,j) \in U_k} (u^{(m)}_k)_{ij}\, w_{ij},$$

and compute the Jacobian matrix

$$J(\mathbf{u}^{(m)}_k) = [J_{i'j',ij} : (i',j'), (i,j) \in U_k]$$

with

$$J_{i'j',ij} = E_{i'j',ij} - K_{i'j',ij} - \langle \ell_{i'j'},\ L(w_{ij}\, \Psi'(u^{(m)}_k)) \rangle.$$

• Solve $\Delta^{(m)}_k$ from the equation $J(\mathbf{u}^{(m)}_k)\, \Delta^{(m)}_k = -F(\mathbf{u}^{(m)}_k)$.

• Compute $\mathbf{u}^{(m+1)}_k = \mathbf{u}^{(m)}_k + \Delta^{(m)}_k$.

It is easily seen that the evaluation of both $F(\mathbf{u}^{(m)}_k)$ and $J(\mathbf{u}^{(m)}_k)$ involves computing integrals, which requires high computational cost. Solving equation (10.51) numerically also involves computing such integrals.

When evaluating $F(\mathbf{u}^{(m)}_k)$, we need to compute the integrals

$$\langle \ell_{i'j'},\ L\Psi(u^{(m)}_k) \rangle. \qquad (10.56)$$

These integrals come from the integral operator $L$. At different steps of the iteration we are required to compute different integrals, and as a result the computational cost is large. Note that if $\Psi(u^{(m)}_k) \in X_k$, we can write it as

$$\Psi(u^{(m)}_k) = \sum_{(i,j) \in U_k} c_{ij}\, w_{ij}$$

and

$$\langle \ell_{i'j'},\ L\Psi(u^{(m)}_k) \rangle = \sum_{(i,j) \in U_k} c_{ij}\, L_{i'j',ij}. \qquad (10.57)$$

Comparing (10.57) with (10.56), we observe that although both involve integral evaluation, (10.57) makes use of the values of the entries of the matrix $\mathbf{L}_n$, which have been obtained previously, so we do not have to recompute them. However, in general $\Psi(u^{(m)}_k) \notin X_k$, and we cannot write $\Psi(u^{(m)}_k)$ as a linear combination of the basis functions $w_{ij}$. For this reason we propose to project $\Psi(u^{(m)}_k)$ into $X_k$. Specifically, we do not solve (10.48) directly; instead we solve $\mathbf{u}_k$ from the nonlinear system

$$\Big\langle \ell_{i'j'},\ (I - (K + L P_k \Psi))\Big(\sum_{(i,j) \in U_k} (u_k)_{ij}\, w_{ij}\Big) \Big\rangle = \langle \ell_{i'j'}, f \rangle, \quad (i',j') \in U_k. \qquad (10.58)$$

When we solve equation (10.58) by the Newton iteration method, we are required to compute the terms

$$\Big\langle \ell_{i'j'},\ L P_k \Psi\Big(\sum_{(i,j) \in U_k} (u_k)_{ij}\, w_{ij}\Big) \Big\rangle, \quad (i',j') \in U_k,$$


and their partial derivatives with respect to the variables $(u_k)_{ij}$, $(i,j) \in U_k$. To this end, we suppose that

$$P_k \Psi\Big(\sum_{(i,j) \in U_k} (u_k)_{ij}\, w_{ij}\Big) = \sum_{(i,j) \in U_k} (\tilde{u}_k)_{ij}\, w_{ij}.$$

Then each $(\tilde{u}_k)_{ij}$ is a function of the variables $(u_k)_{ij}$, $(i,j) \in U_k$. In fact, if we let

$$\mathbf{F} = \Big[ \Big\langle \ell_{i'j'},\ \Psi\Big(\sum_{(i,j) \in U_k} (u_k)_{ij}\, w_{ij}\Big) \Big\rangle : (i',j') \in U_k \Big],$$

then we have $\tilde{\mathbf{u}}_k = \mathbf{E}_k^{-1} \mathbf{F}$. Therefore it follows that

$$\Big\langle \ell_{i'j'},\ L P_k \Psi\Big(\sum_{(i,j) \in U_k} (u_k)_{ij}\, w_{ij}\Big) \Big\rangle = \sum_{(i,j) \in U_k} L_{i'j',ij}\, (\tilde{u}_k)_{ij}.$$

In a similar manner we compute the partial derivatives of the above quantities with respect to the variables $(u_k)_{ij}$, $(i,j) \in U_k$. Making use of the above observations, we describe the Newton iteration scheme for solving (10.58) as follows.

Algorithm 10.20 (The Newton iteration method for solving (10.58)) Set $m = 0$ and $\mathbf{f}_k = [\langle \ell_{ij}, f \rangle : (i,j) \in U_k]$, and choose an initial guess $\mathbf{u}^{(0)}_k$ and an iteration stopping threshold $\delta$.

Step 1: Let $u^{(m)}_k = \sum_{(i,j) \in U_k} (u^{(m)}_k)_{ij}\, w_{ij}$ and set

$$G(\mathbf{u}^{(m)}_k) = \big[\langle \ell_{i'j'},\ \Psi'(u^{(m)}_k)\, w_{ij} \rangle : (i',j'), (i,j) \in U_k\big].$$

Solve $\mathbf{F}^{(m)}_k$ from $\mathbf{E}_k \mathbf{F}^{(m)}_k = G(\mathbf{u}^{(m)}_k)$ and compute the Jacobian matrix

$$J(\mathbf{u}^{(m)}_k) = \mathbf{E}_k - \mathbf{K}_k - \mathbf{L}_k \mathbf{F}^{(m)}_k.$$

Step 2: For $\mathbf{g}^{(m)}_k = \big[\langle \ell_{ij},\ \Psi(u^{(m)}_k) \rangle : (i,j) \in U_k\big]$, solve $\tilde{\mathbf{u}}^{(m)}_k$ from $\mathbf{E}_k \tilde{\mathbf{u}}^{(m)}_k = \mathbf{g}^{(m)}_k$. Compute

$$F(\mathbf{u}^{(m)}_k) = (\mathbf{E}_k - \mathbf{K}_k)\mathbf{u}^{(m)}_k - \mathbf{L}_k \tilde{\mathbf{u}}^{(m)}_k - \mathbf{f}_k.$$

Step 3: Solve $\Delta^{(m)}_k$ from $J(\mathbf{u}^{(m)}_k)\, \Delta^{(m)}_k = -F(\mathbf{u}^{(m)}_k)$ and compute $\mathbf{u}^{(m+1)}_k = \mathbf{u}^{(m)}_k + \Delta^{(m)}_k$.

Step 4: Set $m \leftarrow m + 1$ and go back to step 1 until $\|\Delta^{(m)}_k\|_\infty < \delta$.


It is worth noticing that Algorithm 10.20 need not evaluate any additional integrals; it only makes use of the matrix $\mathbf{L}_k$. This saves tremendous computational effort and thus makes the algorithm very fast. We shall see from the numerical examples in Section 10.4.2 that (10.58) is solved much faster than (10.48).
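The payoff of reusing stored matrix entries can be seen on a small dense analogue. In the sketch below, the nodal discretization, kernels and nonlinearity are illustrative assumptions (with $\mathbf{E} = I$): the residual $F(u) = (E - K)u - L\,g(u) - f$ and the Jacobian $J = E - K - L\,\mathrm{diag}(g'(u))$ are built entirely from matrices $K$ and $L$ assembled once, mirroring the structure of Algorithm 10.20.

```python
import math

n = 16
h = 1.0 / n
t = [(i + 0.5) * h for i in range(n)]
# matrices assembled once: smooth kernel K, (regularized) log-singular kernel L
K = [[0.3 * h * math.exp(-abs(t[i] - t[j])) for j in range(n)] for i in range(n)]
L = [[0.2 * h * math.log(abs(t[i] - t[j]) + 1e-3) for j in range(n)] for i in range(n)]
fvec = [math.cos(ti) for ti in t]
g, dg = math.sin, math.cos

def lin_solve(A, rhs):
    """Gaussian elimination with partial pivoting."""
    A = [row[:] for row in A]
    rhs = rhs[:]
    for c in range(n):
        p = max(range(c, n), key=lambda row: abs(A[row][c]))
        A[c], A[p], rhs[c], rhs[p] = A[p], A[c], rhs[p], rhs[c]
        for row in range(c + 1, n):
            factor = A[row][c] / A[c][c]
            rhs[row] -= factor * rhs[c]
            for cc in range(c, n):
                A[row][cc] -= factor * A[c][cc]
    x = [0.0] * n
    for row in range(n - 1, -1, -1):
        x[row] = (rhs[row] - sum(A[row][cc] * x[cc] for cc in range(row + 1, n))) / A[row][row]
    return x

u = [0.0] * n
for _ in range(20):
    # residual F(u) = (I - K)u - L g(u) - f: only matrix-vector products
    F = [u[i] - sum(K[i][j] * u[j] + L[i][j] * g(u[j]) for j in range(n)) - fvec[i]
         for i in range(n)]
    # Jacobian J = I - K - L diag(g'(u)): reuses the stored entries of L
    J = [[(1.0 if i == j else 0.0) - K[i][j] - L[i][j] * dg(u[j]) for j in range(n)]
         for i in range(n)]
    step = lin_solve(J, [-x for x in F])
    u = [a + b for a, b in zip(u, step)]
    if max(abs(x) for x in step) < 1e-12:
        break

res = max(abs(u[i] - sum(K[i][j] * u[j] + L[i][j] * g(u[j]) for j in range(n)) - fvec[i])
          for i in range(n))
print(res < 1e-10)   # True: Newton converged without re-evaluating any integrals
```

Every Newton step here touches only the precomputed entries of $K$ and $L$; no quadrature is repeated inside the iteration, which is the essence of the acceleration.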

Equation (10.51) can be approximated in a similar manner. Specifically, in step 3 of Algorithm 10.18 we replace (10.51) by

$$\Big\langle \ell_{i'j'},\ (I - (K + L P_{k+l} \Psi))\Big(\sum_{(i,j) \in U_k} (u_{k,l})_{ij}\, w_{ij} + u^H_{k,l}\Big) \Big\rangle = \langle \ell_{i'j'}, f \rangle, \quad (i',j') \in U_k. \qquad (10.59)$$

The Newton iteration scheme for solving (10.59) can likewise be developed, and will be referred to as Algorithm 10.20′.

3 The modified MAM algorithm

We describe below the MAM algorithm employing the above two techniques

Algorithm 10.21 (MAM with an accelerated quadrature and the Newton method) Let $k$ be a fixed positive integer. Given $m \in \mathbb{N}_0$, we carry out the following computing steps.

Step 1: Use Algorithm 10.20 to solve the nonlinear system (10.58) and obtain the solution $\mathbf{u}_k = [(u_k)_{ij} : (i,j) \in U_k]$. Let $\mathbf{u}_{k,0} = \mathbf{u}_k$ and $l = 1$.

Step 2: Follow step 2 of Algorithm 10.18 to generate $\mathbf{f}_{k,l}$, solve $\mathbf{E}^H_{k,l} \mathbf{u}^H_{k,l} = \mathbf{f}_{k,l}$ to obtain $\mathbf{u}^H_{k,l} = [(u_{k,l})_{ij} : (i,j) \in U_{k,l}]^T$, and define $u^H_{k,l} = \sum_{(i,j) \in U_{k,l}} (u_{k,l})_{ij}\, w_{ij}$.

Step 3: Use Algorithm 10.20′ to solve the nonlinear system (10.59) and obtain the solution $\mathbf{u}^L_{k,l} = [(u_{k,l})_{ij} : (i,j) \in U_k]$. Define $u^L_{k,l} = \sum_{(i,j) \in U_k} (u_{k,l})_{ij}\, w_{ij}$ and $u_{k,l} = u^L_{k,l} + u^H_{k,l}$.

Step 4: Set $l \leftarrow l + 1$ and go back to step 2 until $l = m$.

10.3.3 Computational complexity

We now estimate the computational effort by counting the numbers of multiplications and functional evaluations. Before presenting the estimates, we review several properties of the multiscale bases and collocation functionals, established in Chapter 7, for later reference.

(I) For any $(i,j) \in U := \{(i,j) : i \in \mathbb{N}_0,\ j \in \mathbb{Z}_{w(i)}\}$, there are at most $(\mu - 1)r - 1$ functions $w_{ij'}$, $j' \in \mathbb{Z}_{w(i)}$, such that $\operatorname{meas}(S_{ij} \cap S_{ij'}) \ne 0$.


(II) For any $i', i \in \mathbb{N}_0$ with $i \le i'$, it holds that

$$\langle \ell_{i'j'}, w_{ij} \rangle = \delta_{i'i}\, \delta_{j'j}, \quad j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)},$$

where $\delta_{i'i}$ is the Kronecker delta.

(III) For any polynomial $p$ of order $\le r$ and $i \ge 1$,

$$\langle \ell_{ij}, p \rangle = 0, \quad (w_{ij}, p) = 0, \quad (i,j) \in U,$$

where $(\cdot,\cdot)$ denotes the inner product in $L^2(0,1)$.

(IV) There exists a positive constant $\theta_0$ such that $\|\ell_{ij}\| \le \theta_0$ and $\|w_{ij}\|_\infty \le \theta_0$ for all $(i,j) \in U$.

(V) There exist positive constants $c_-$ and $c_+$ such that for all $i \in \mathbb{N}_0$,

$$c_- \mu^i \le w(i) \le c_+ \mu^i, \quad c_- \mu^{-i} \le \max_{j \in \mathbb{Z}_{w(i)}} |\operatorname{supp}(w_{ij})| \le c_+ \mu^{-i}.$$

(VI) For any $v \in X_n$ we have a unique expansion

$$v = \sum_{(i,j) \in U_n} v_{ij}\, w_{ij},$$

and there exist positive constants $\theta_1$ and $\theta_2$ such that

$$\theta_1 \|\mathbf{E}_n \mathbf{v}\|_\infty \le \|v\|_\infty \le \theta_2 (n+1) \|\mathbf{E}_n \mathbf{v}\|_\infty,$$

in which $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$ and $\mathbf{E}_n = [\langle \ell_{i'j'}, w_{ij} \rangle : (i',j'), (i,j) \in U_n]$.

When solving the nonlinear systems (10.58) and (10.59), we are required to generate the matrices $\mathbf{K}_n$ and $\mathbf{L}_n$. The truncated matrix $\tilde{\mathbf{K}}_n$ is evaluated by the same strategies as those in Algorithm 10.18. The truncated matrix $\tilde{\mathbf{L}}_n$ is evaluated using the quadrature method (QL). The following lemma establishes estimates for the numbers of nonzero entries of the truncated matrices, as well as the numbers of multiplications and functional evaluations needed in computing these nonzero entries.

Lemma 10.22 For any $n \in \mathbb{N}_0$,

$$\mathcal{N}(\tilde{\mathbf{K}}_n) = O(n\mu^n), \quad \mathcal{N}(\tilde{\mathbf{L}}_n) = O(n\mu^n),$$

where $\mathcal{N}(\cdot)$ denotes the number of nonzero entries of a matrix. The numbers of functional evaluations and multiplications needed for generating $\tilde{\mathbf{K}}_n$ and $\tilde{\mathbf{L}}_n$ are both $O(n\mu^n)$.

Proof Define the matrix blocks

$$\mathbf{K}_{i'i} = [K_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}], \quad \tilde{\mathbf{K}}_{i'i} = [\tilde{K}_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}], \quad i', i \in \mathbb{Z}_{n+1}.$$

We obtain from property (V) that $\mathcal{N}(\tilde{\mathbf{K}}_{i'i}) = O(\mu^{i'+i})$ for $i', i \in \mathbb{Z}_{n+1}$. Direct calculation leads to

$$\sum_{i \in \mathbb{Z}_{n+1}} \sum_{i'=0}^{n-i} \mu^{i'+i} = O(n\mu^n).$$

In light of property (I), the number of functional evaluations in computing the entries $\tilde{K}_{i'j',ij}$, $j \in \mathbb{Z}_{w(i)}$, is $O((2\kappa)^{-1} n \mu^{\kappa r} + \mu^i)$ for any $j' \in \mathbb{Z}_{w(i')}$. Thus generating the whole block $\tilde{\mathbf{K}}_{i'i}$ requires $O([(2\kappa)^{-1} n \mu^{\kappa r} + \mu^i]\mu^{i'})$ functional evaluations. The total number of functional evaluations for generating $\tilde{\mathbf{K}}_n$ is then obtained by a simple summation. It is easily observed from the expression of the quadrature rules that the number of multiplications involved has the same order as that of functional evaluations.

For the estimates on $\tilde{\mathbf{L}}_n$, we let

$$\mathbf{L}_{i'i} = [L_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}], \quad i', i \in \mathbb{Z}_{n+1}.$$

For kernels whose singular points lie along the diagonal, the entries which have to be calculated concentrate along the diagonal of each block. For kernels whose singularity is described by (10.31), it is not difficult to observe that the entries that have to be computed concentrate along the diagonal and the corners at the top right and bottom left of each block. We obtain the same estimate of $\mathcal{N}(\tilde{\mathbf{L}}_n)$ as that in Theorem 7.15 of Chapter 7 (cf. Theorem 4.6 of [69]). We now estimate the computational costs for generating the matrix $\tilde{\mathbf{L}}_n$. According to the integration strategy (QL), for each pair $i$ and $j$, computing $I^1_{ij}$ or $I^0_{ij}$ requires only a fixed number of functional evaluations and multiplications. Thus the estimates of this lemma are proved.

Lemma 10.23 For a fixed $k$ and any $l \in \mathbb{N}_0$, the numbers of functional evaluations and multiplications for evaluating $\mathbf{f}_{k,l}$ are both $O((k+l)\mu^{k+l})$.

Proof Making use of the estimates of the numbers of nonzero entries of the truncated matrices, the number of multiplications used in obtaining $\mathbf{f}_{k,l}$ from (10.49) is $O((k+l)\mu^{k+l})$. It follows from Lemma 10.22 that the numbers of multiplications and functional evaluations for generating $\tilde{\mathbf{K}}^H_{k,l-1}$ are $O((k+l-1)\mu^{k+l-1})$, while those for generating $\tilde{\mathbf{L}}^H_{k,l}$ are $O((k+l)\mu^{k+l})$. Moreover, it is easily obtained from the definition of $\hat{\mathbf{f}}_{k,l}$ that the numbers of multiplications and functional evaluations for computing it are $O(\mu^{k+l})$. Therefore it remains to estimate the computational effort for $\mathbf{u}_{k+l}$.

Since for $i \in \mathbb{Z}_{k+l+1}$ and a point $P \in [0,1]$ there are a bounded number of $w_{ij}$ not vanishing at $P$, and there are a bounded number of point evaluations in each $\ell_{ij}$, the numbers of point evaluations and multiplications for calculating


each component of gk+l are O(k + l) Hence those for computing the vectorgk+l are O((k + l)μk+l) Noting that Ek+l is a sparse upper triangular matrixwith O((k + l)μk+l) number of nonzero entries the solution of the linearsystem (1045) needs O((k + l)μk+l) number of multiplications The resultof the lemma then follows by adding the above estimates together
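The key point in the last step is that back substitution on a sparse upper triangular system costs a number of multiplications proportional to the number of stored nonzeros, not to $n^2$. A minimal sketch of that (illustrative only; `rows` is an assumed row-wise sparse format, not the book's data structure):

```python
# Back substitution on a sparse upper triangular system.  The inner loop
# touches each stored nonzero exactly once, so the work is proportional to
# the nonzero count, matching the O((k+l) mu^(k+l)) estimate in the text.
def sparse_back_substitute(rows, b):
    # rows[i] = list of (j, value) pairs with j >= i, including the diagonal.
    n = len(b)
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        s = b[i]
        diag = None
        for j, v in rows[i]:
            if j == i:
                diag = v  # diagonal entry of row i
            else:
                s -= v * x[j]  # subtract already-known unknowns
        x[i] = s / diag
    return x
```

For instance, a 3-by-3 system with four nonzeros is solved with four multiplicative operations in total.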

Combining the estimate in the above lemma for $f_{k,l}$ with those for the other computing steps of Algorithm 10.18 obtained in the last section (see also [76]), we have the following lemma.

Lemma 10.24  If the truncated matrix $\tilde{\mathbf{L}}_n$ is evaluated using the quadrature method (QL), then the numbers of functional evaluations and multiplications in Algorithm 10.18 are $\mathcal{O}((k+m)\mu^{k+m})$.

The next theorem estimates the computational cost required by Algorithm 10.21.

Theorem 10.25  Let $k$ be a fixed positive integer. For any $m \in \mathbb{N}_0$, the numbers of functional evaluations and multiplications used in Algorithm 10.21 are $\mathcal{O}((k+m)\mu^{k+m})$.

Proof  The computational cost of Algorithm 10.21 consists of two parts: the cost of generating the matrices $\tilde{\mathbf{K}}_{k+m}$ and $\tilde{\mathbf{L}}_{k+m}$, and the cost of carrying out the computing steps listed in the algorithm. Lemma 10.22 has shown that the cost of generating both $\tilde{\mathbf{L}}_{k+m}$ and $\tilde{\mathbf{K}}_{k+m}$ is $\mathcal{O}((k+m)\mu^{k+m})$. To estimate the cost of the computing steps of Algorithm 10.21, we only need to compare Algorithm 10.21 with Algorithm 10.18. We observe that Algorithm 10.21 replaces (10.48) and (10.51) by (10.58) and (10.59), respectively, and we have shown that these modifications reduce the computational cost. Therefore the cost of carrying out the computing steps of Algorithm 10.21 is less than that of Algorithm 10.18. By Lemma 10.24 the numbers of functional evaluations and multiplications in Algorithm 10.18 are $\mathcal{O}((k+m)\mu^{k+m})$. Thus the theorem is proved. $\square$

We remark that if the quadrature method (QL) is not used for generating the truncated matrix $\tilde{\mathbf{L}}_n$, then the number of functional evaluations and multiplications used in Algorithm 10.21 becomes $\mathcal{O}((k+m)^3\mu^{k+m})$ (cf. [51]). Moreover, although Algorithm 10.21 has the same order of computational cost as Algorithm 10.18, the constant involved in that order is improved.
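To see what the nearly linear bound buys, it helps to compare $(k+m)\mu^{k+m}$ with the $\mu^{2(k+m)}$ cost of working with a dense matrix of the same dimension. A small illustrative calculation (with $\mu = 2$, $k = 4$ as in the numerical experiments below; not code from the book):

```python
# Growth of the nearly linear cost bound (k+m) * mu**(k+m) for the truncated
# multiscale matrix versus the mu**(2(k+m)) cost of a dense matrix-vector
# product at the same dimension.  Illustrative sketch only.
mu, k = 2, 4

def costs(m):
    n = k + m
    truncated = n * mu ** n   # O((k+m) mu^(k+m)) operations
    dense = mu ** (2 * n)     # dense matrix-vector product
    return truncated, dense

for m in range(7):
    t, d = costs(m)
    print(m, t, d, d // t)    # the dense/truncated ratio grows like mu**n / n
```

The ratio in the last column grows geometrically, which is why the truncated scheme dominates for even moderately large $m$.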


10.3.4 Convergence analysis

In the last subsection we proved that the computational cost of Algorithm 10.21 is nearly linear. In this subsection we establish the order of its approximation accuracy. More precisely, we show that the output $\tilde{u}_{k,m}$ of Algorithm 10.21 retains much of the convergence order of the approximation $u_{k,m}$ generated by Algorithm 10.18. Throughout the rest of this subsection we assume, without further mention, that the function $g$ in the nonlinear boundary condition is continuously differentiable with respect to the second variable.

We begin with some necessary preparation. For $n \in \mathbb{N}_0$ we define the operator $\tilde{\mathcal{K}}_n : X_n \to X_n$ by requiring
$$\tilde{K}_{i'j',ij} = \langle \ell_{i'j'}, \tilde{\mathcal{K}}_n w_{ij} \rangle, \qquad (i',j'), (i,j) \in U_n.$$

Clearly $\tilde{\mathcal{K}}_n$ is uniquely determined. Likewise, we define the operator $\tilde{\mathcal{L}}_n : X_n \to X_n$ by requiring
$$\tilde{L}_{i'j',ij} = \langle \ell_{i'j'}, \tilde{\mathcal{L}}_n w_{ij} \rangle, \qquad (i',j'), (i,j) \in U_n.$$

To estimate the error between $u_{k,m}$ and $\tilde{u}_{k,m}$, we need to estimate the errors $\mathcal{K}_n - \tilde{\mathcal{K}}_n$ and $\mathcal{L}_n - \tilde{\mathcal{L}}_n$, where $\mathcal{K}_n := \mathcal{P}_n\mathcal{K}|_{X_n}$ and $\mathcal{L}_n := \mathcal{P}_n\mathcal{L}|_{X_n}$.

In the following lemma we estimate the error introduced by the truncation and quadrature rules applied to the matrix $\mathbf{K}_n$.

Lemma 10.26  There exists a positive constant $c$ such that for all $i', i \in \mathbb{Z}_{n+1}$ and for all $n$,
$$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty \le c\,\mu^{-rn}.$$

Proof  We proceed in two cases. When $i' + i > n$, according to the truncation strategy (T1) we have
$$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty = \|\mathbf{K}_{i'i}\|_\infty. \tag{10.60}$$
It follows from property (III) and (10.30) that for $j' \in \mathbb{Z}_{w(i')}$ and $j \in \mathbb{Z}_{w(i)}$,
$$|K_{i'j',ij}| \le c\,\mu^{-r(i'+i)}|S_{ij}|.$$
We then make use of property (V) to obtain
$$\sum_{j \in \mathbb{Z}_{w(i)}} |K_{i'j',ij}| \le c\,\mu^{-r(i'+i)}.$$
Therefore
$$\|\mathbf{K}_{i'i}\|_\infty = \max_{j' \in \mathbb{Z}_{w(i')}} \sum_{j \in \mathbb{Z}_{w(i)}} |K_{i'j',ij}| \le c\,\mu^{-r(i'+i)} \le c\,\mu^{-rn}.$$


Substituting this estimate into (10.60) verifies the lemma in the case $i' + i > n$.

We now consider the case $i' + i \le n$. Noting that the Gaussian quadrature rule with $N$ points has algebraic accuracy of order $2N$, when $i' + i \le n$ we have
$$|K_{i'j',ij} - \tilde{K}_{i'j',ij}| \le c\,(\mu^{-\kappa r})^{n\kappa}|S_{ij}| \le c\,\mu^{-rn}|S_{ij}|.$$
Utilizing property (V), we conclude that
$$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty = \max_{j' \in \mathbb{Z}_{w(i')}} \sum_{j \in \mathbb{Z}_{w(i)}} |K_{i'j',ij} - \tilde{K}_{i'j',ij}| \le c\,\mu^{-rn}. \qquad \square$$

The estimate in Lemma 10.26 can be translated into operator form.

Lemma 10.27  There exists a positive constant $c_1$ such that for all $n \in \mathbb{N}_0$ and all $v \in X_n$,
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\|_\infty \le c_1(n+1)^2\mu^{-rn}\|v\|_\infty. \tag{10.61}$$
Moreover, if $u^* \in W^{r,\infty}(0,1)$ and if $\|v - u^*\|_\infty \le c\,\mu^{-r(n-1)}\|u^*\|_{r,\infty}$ for some positive constant $c$, then there exists a positive constant $c_2$ such that for all $n \in \mathbb{N}_0$,
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\|_\infty \le c_2(n+1)\mu^{-rn}\|u^*\|_{r,\infty}. \tag{10.62}$$

Proof  Let $\mathbf{h} = \mathbf{E}_n^{-1}(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}$, where $\mathbf{v}$ denotes the coefficient vector of $v$, and expand
$$(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v = \sum_{(i,j)\in U_n} h_{ij}w_{ij}.$$
Then property (VI) gives
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\|_\infty \le \theta_2(n+1)\|(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_\infty. \tag{10.63}$$

We next estimate $\|(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_\infty$ in two different cases. For (10.61), note that property (VI) leads to $\|\mathbf{v}\|_\infty \le \theta_1^{-1}\|v\|_\infty$. Moreover, by Lemma 10.26 there exists a positive constant $c$ such that for all $n \in \mathbb{N}_0$,
$$\|\mathbf{K}_n - \tilde{\mathbf{K}}_n\|_\infty \le \max_{i' \in \mathbb{Z}_{n+1}} \sum_{i \in \mathbb{Z}_{n+1}} \|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty \le c(n+1)\mu^{-rn}.$$
Combining the above estimates with (10.63) yields the estimate (10.61).


For the second inequality (10.62), we decompose $v$ as $v = \mathcal{P}_n u^* + (v - \mathcal{P}_n u^*)$ with
$$\mathcal{P}_n u^* = \sum_{(i,j)\in U_n} \xi_{ij}w_{ij} \quad\text{and}\quad v - \mathcal{P}_n u^* = \sum_{(i,j)\in U_n} \zeta_{ij}w_{ij}.$$

Since $u^* \in W^{r,\infty}(0,1)$, it holds that $|\xi_{ij}| \le c\,\mu^{-ri}\|u^*\|_{r,\infty}$. Moreover, it follows from property (VI) that
$$|\zeta_{ij}| \le \theta_1^{-1}\|v - \mathcal{P}_n u^*\|_\infty \le c\,\mu^{-r(n-1)}\|u^*\|_{r,\infty}.$$
These two estimates together imply that there exists a positive constant $c$ such that for all $(i,j) \in U_n$ and all $n \in \mathbb{N}_0$, $|v_{ij}| \le c\,\mu^{-ri}\|u^*\|_{r,\infty}$.

Define the matrix $\mathbf{\Delta}_n = [\Delta_{i'j',ij} : (i',j'), (i,j) \in U_n]$ with $\Delta_{i'j',ij} = \mu^{r(n-i)}(K_{i'j',ij} - \tilde{K}_{i'j',ij})$, and the vector $\mathbf{v}' = [v'_{ij} : (i,j) \in U_n]$ with $v'_{ij} = \mu^{ri}v_{ij}$. Then $\|\mathbf{v}'\|_\infty \le c\|u^*\|_{r,\infty}$ and
$$\|(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_\infty \le \mu^{-rn}\|\mathbf{\Delta}_n\|_\infty\|\mathbf{v}'\|_\infty.$$

For any $(i',j') \in U_n$, it follows from Lemma 10.26 that
$$\sum_{(i,j)\in U_n} |\Delta_{i'j',ij}| \le \sum_{i \in \mathbb{Z}_{n+1}} \mu^{r(n-i)}\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty \le c.$$

Therefore
$$\|\mathbf{\Delta}_n\|_\infty = \max_{(i',j')\in U_n} \sum_{(i,j)\in U_n} |\Delta_{i'j',ij}| \le c,$$

which leads to the estimate
$$\|(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_\infty \le c\,\mu^{-rn}\|u^*\|_{r,\infty}.$$
Combining this estimate with (10.63) proves the desired result (10.62). $\square$

To estimate the error $\mathcal{L}_n - \tilde{\mathcal{L}}_n$, we define the matrix blocks
$$\mathbf{L}_{i'i} = [L_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}], \quad \tilde{\mathbf{L}}_{i'i} = [\tilde{L}_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}], \qquad i', i \in \mathbb{Z}_{n+1}.$$

The difference between $\mathbf{L}_{i'i}$ and $\tilde{\mathbf{L}}_{i'i}$, which results from both the truncation strategy (T2) and the quadrature strategy (QL), has the bound
$$\|\mathbf{L}_{i'i} - \tilde{\mathbf{L}}_{i'i}\|_\infty \le c\max\{\mu^{-rn},\ (\varepsilon^n_{i'i})^{-(2r-\sigma')}\mu^{-r(i'+i)}\}.$$
By the definition (10.47) of the truncation parameters $\varepsilon^n_{i'i}$, we conclude that
$$\|\mathbf{L}_{i'i} - \tilde{\mathbf{L}}_{i'i}\|_\infty \le c\,(\varepsilon^n_{i'i})^{-(2r-\sigma')}\mu^{-r(i'+i)}.$$

This leads to the following lemma


Lemma 10.28  Let $\sigma' \in (0,1)$, $\eta = 2r - \sigma'$, and set $b = 1$, $b' \in (r/\eta, 1)$ in (10.47). Then there exists a positive constant $c_1$ such that for all $n \in \mathbb{N}_0$ and all $v \in X_n$,
$$\|(\mathcal{L}_n - \tilde{\mathcal{L}}_n)v\|_\infty \le c_1(n+1)\mu^{-\sigma' n}\|v\|_\infty. \tag{10.64}$$
Moreover, if $u^* \in W^{r,\infty}(0,1)$ and if $v$ satisfies $\|v - u^*\|_\infty \le c\,\mu^{-r(n-1)}\|u^*\|_{r,\infty}$ for some positive constant $c$, then there exists a positive constant $c_2$ such that
$$\|(\mathcal{L}_n - \tilde{\mathcal{L}}_n)\mathcal{P}_n v\|_\infty \le c_2(n+1)^2\mu^{-(r+\sigma')n}\|u^*\|_{r,\infty}. \tag{10.65}$$

Proof  Since most of the proof of this lemma is similar to that of Lemma 10.27, we omit the details. In the proof of (10.65), since $g$ is continuously differentiable with respect to the second variable, we may use the standard estimate for the superposition operator, $\|g(\cdot, u^*)\|_{r,\infty} \le c\,\|u^*\|_{r,\infty}$ (see, for example, [238]), to conclude the second estimate. $\square$

In the next lemma we estimate the error between the outputs of Algorithms 10.18 and 10.21.

Lemma 10.29  If $u^* \in W^{r,\infty}(0,1)$, then there exists a positive constant $c$ such that for all sufficiently large integers $k$ and for all $l \in \mathbb{Z}_{m+1}$,
$$\|u_{k,l} - \tilde{u}_{k,l}\|_\infty \le c\,(k+l+1)\mu^{-r(k+l)}\|u^*\|_{r,\infty}.$$

Proof  We prove this lemma by induction on $l$. For the case $l = 0$ we need to prove that the solution $\tilde{u}_k$ of
$$\tilde{u}_k - \mathcal{P}_k(\tilde{\mathcal{K}} + \tilde{\mathcal{L}}\mathcal{P}_k)\tilde{u}_k = \mathcal{P}_k f \tag{10.66}$$
satisfies
$$\|u_k - \tilde{u}_k\|_\infty \le c\,(k+1)\mu^{-kr}\|u^*\|_{r,\infty}. \tag{10.67}$$
Following a standard argument, we may show that the solution of equation (10.66) has the bound
$$\|\tilde{u}_k - u^*\|_\infty \le c\,\mu^{-kr}\|u^*\|_{r,\infty}.$$

Thus, by the triangle inequality, we establish the estimate (10.67).

We now assume that the result of the lemma holds for $l-1$ and consider the case $l$. Note that step 3 of Algorithm 10.21 is the same as step 3 of Algorithm 10.18. According to Algorithms 10.18 and 10.21, we obtain
$$u^H_{k,l} - \tilde{u}^H_{k,l} = u_A + u_B, \tag{10.68}$$
where
$$u_A = (\mathcal{P}_{k+l} - \mathcal{P}_k)\mathcal{K}_{k+l}(u_{k,l-1} - \tilde{u}_{k,l-1})$$


and
$$u_B = (\mathcal{P}_{k+l} - \mathcal{P}_k)\mathcal{L}_{k+l}\mathcal{P}_{k+l}(u_{k,l-1} - \tilde{u}_{k,l-1}).$$

Since the projections are uniformly bounded, it follows from (10.62) of Lemma 10.27 that
$$\|u_A\|_\infty \le c\,(k+l+1)\mu^{-r(k+l)}\|u^*\|_{r,\infty}. \tag{10.69}$$
By Lemma 10.28 and the induction hypothesis,
$$\|u_B\|_\infty \le c\,(k+l+1)\mu^{-r(k+l)}\|u^*\|_{r,\infty}. \tag{10.70}$$
From equations (10.68)-(10.70) we conclude that
$$\|u^H_{k,l} - \tilde{u}^H_{k,l}\|_\infty \le c\,(k+l+1)\mu^{-r(k+l)}\|u^*\|_{r,\infty}. \tag{10.71}$$

It remains to estimate $u^L_{k,l} - \tilde{u}^L_{k,l}$. Subtracting (10.51) from (10.59) yields the equation
$$u^L_{k,l} - \tilde{u}^L_{k,l} = \mathcal{P}_k\bigl[(\mathcal{K} + \mathcal{L}\mathcal{P}_{k+l})u_{k,l} - (\mathcal{K} + \mathcal{L})\tilde{u}_{k,l}\bigr].$$
Let $\mathcal{B} = (\mathcal{K} + \mathcal{L}\mathcal{P}_{k+l})'$ and
$$\mathcal{R}(u_{k,l}, \tilde{u}_{k,l}) = (\mathcal{K} + \mathcal{L}\mathcal{P}_{k+l})u_{k,l} - (\mathcal{K} + \mathcal{L}\mathcal{P}_{k+l})\tilde{u}_{k,l} - \mathcal{B}(u_{k,l} - \tilde{u}_{k,l}).$$

We then conclude that
$$u^L_{k,l} - \tilde{u}^L_{k,l} = (\mathcal{I} - \mathcal{P}_k\mathcal{B})^{-1}\mathcal{P}_k\bigl[\mathcal{B}(u^H_{k,l} - \tilde{u}^H_{k,l}) + \mathcal{R}(u_{k,l}, \tilde{u}_{k,l}) + \mathcal{L}(\mathcal{P}_{k+l} - \mathcal{I})\tilde{u}_{k,l}\bigr].$$
Note that there exist positive constants $c_1$ and $c_2$ such that for all $k, l \in \mathbb{N}_0$,
$$\|(\mathcal{I} - \mathcal{P}_k\mathcal{B})^{-1}\mathcal{P}_k\| \le c_1$$
and
$$\|\mathcal{R}(u_{k,l}, \tilde{u}_{k,l})\|_\infty \le c_2\|u_{k,l} - \tilde{u}_{k,l}\|_\infty^2.$$

From the last inequality and the fact that $\|u_{k,l} - \tilde{u}_{k,l}\|_\infty \to 0$ as $k \to \infty$ uniformly for $l \in \mathbb{N}_0$, we conclude that for sufficiently large integers $k$ and for all $l \in \mathbb{Z}_{m+1}$,
$$\|\mathcal{R}(u_{k,l}, \tilde{u}_{k,l})\|_\infty \le \frac{1}{2c_1}\bigl(\|u^L_{k,l} - \tilde{u}^L_{k,l}\|_\infty + \|u^H_{k,l} - \tilde{u}^H_{k,l}\|_\infty\bigr).$$

Moreover, there exists a positive constant $c_3$ such that for sufficiently large integers $k$ and for all $l \in \mathbb{Z}_{m+1}$,
$$\|\mathcal{L}(\mathcal{P}_{k+l} - \mathcal{I})\tilde{u}_{k,l}\|_\infty \le c_3\,\mu^{-r(k+l)}\|u^*\|_{r,\infty}.$$

Combining the above inequalities yields the estimate
$$\|u^L_{k,l} - \tilde{u}^L_{k,l}\|_\infty \le c'\bigl(\|u^H_{k,l} - \tilde{u}^H_{k,l}\|_\infty + \mu^{-r(k+l)}\|u^*\|_{r,\infty}\bigr)$$
for some positive constant $c'$, which together with (10.71) leads to the desired estimate of this lemma for the case $l$ and completes the proof. $\square$

The above lemma leads to the following error estimate for the approximate solutions generated by Algorithm 10.21.

Theorem 10.30  If $u^* \in W^{r,\infty}[0,1]$, then there exists a positive constant $c$ such that for sufficiently large $k$ and all $m \in \mathbb{N}_0$,
$$\|u^* - \tilde{u}_{k,m}\|_\infty \le c\,(k+m+1)\mu^{-r(k+m)}\|u^*\|_{r,\infty}.$$

10.4 Numerical experiments

In this section we present numerical examples to demonstrate the performance of the multiscale methods for solving the Hammerstein equation and nonlinear boundary integral equations.

10.4.1 Numerical examples for the Hammerstein equation

We present in this subsection four numerical experiments to verify the theoretical estimates obtained in Section 10.2. The computer programs are run on a personal computer with a 2.8-GHz CPU and 1 GB of memory.

Example 1  Consider the equation
$$u(s) - \int_0^1 \sin(\pi(s+t))\,u^2(t)\,dt = \sin(\pi s) - \frac{4}{3\pi}\cos(\pi s), \qquad s \in [0,1]. \tag{10.72}$$
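The right-hand side of (10.72) is constructed so that the residual of $u^*(s) = \sin(\pi s)$ vanishes; a quick quadrature check of this (an illustrative sketch, not code from the book) is:

```python
import math

# Residual check for (10.72): with u*(s) = sin(pi s), the integral term equals
# (4/(3 pi)) cos(pi s), so u* - integral - f should vanish up to the accuracy
# of the composite Simpson rule used below.
def integral_term(s, n=2000):
    h = 1.0 / n
    total = 0.0
    for i in range(n + 1):
        t = i * h
        w = 1.0 if i in (0, n) else (4.0 if i % 2 else 2.0)  # Simpson weights
        total += w * math.sin(math.pi * (s + t)) * math.sin(math.pi * t) ** 2
    return total * h / 3.0

def residual(s):
    u = math.sin(math.pi * s)
    f = math.sin(math.pi * s) - 4.0 / (3.0 * math.pi) * math.cos(math.pi * s)
    return u - integral_term(s) - f
```

Evaluating `residual` at any point of $[0,1]$ returns a value at the level of the quadrature error.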

The equation has an isolated solution $u^*(s) = \sin(\pi s)$.

We use the collocation method with the piecewise linear polynomial basis for the numerical solution of the equation. Specifically, we choose $X_n$ as the space of piecewise linear polynomials with knots at $j/2^n$, $j = 1, 2, \ldots, 2^n - 1$. Hence $\dim(X_n) = 2^{n+1}$. The basis functions of $X_0$ and $W_1$ are given, respectively, by $w_{00}(t) = -3t + 2$, $w_{01}(t) = 3t - 1$, $t \in [0,1]$, and

$$w_{10}(t) = \begin{cases} 1 - \frac{9}{2}t, & t \in [0, \frac{1}{2}), \\ \frac{3}{2}t - 1, & t \in [\frac{1}{2}, 1], \end{cases} \qquad w_{11}(t) = \begin{cases} \frac{1}{2} - \frac{3}{2}t, & t \in [0, \frac{1}{2}), \\ \frac{9}{2}t - \frac{7}{2}, & t \in [\frac{1}{2}, 1]. \end{cases}$$
The corresponding collocation functionals are

$$\ell_{00} = \delta_{\frac{1}{3}}, \qquad \ell_{01} = \delta_{\frac{2}{3}}, \qquad \ell_{10} = \delta_{\frac{1}{6}} - \tfrac{3}{2}\delta_{\frac{1}{3}} + \tfrac{1}{2}\delta_{\frac{2}{3}}, \qquad \ell_{11} = \tfrac{1}{2}\delta_{\frac{1}{3}} - \tfrac{3}{2}\delta_{\frac{2}{3}} + \delta_{\frac{5}{6}},$$


where $\langle \delta_t, f \rangle = f(t)$. The basis functions $w_{ij}$, $i > 1$, $j \in \mathbb{Z}_{w(i)}$, and the corresponding functionals $\ell_{ij}$, $i > 1$, $j \in \mathbb{Z}_{w(i)}$, are constructed recursively by applying linear operators related to the contractive mappings $\phi_0(t) = \frac{t}{2}$ and $\phi_1(t) = \frac{t+1}{2}$, $t \in [0,1]$.

We solve equation (10.72) by Algorithm 10.9 with initial level $k = 4$, based on the multiscale collocation method developed via these basis functions and collocation functionals. The nonlinear system (10.15) related to (10.72) is solved by the Newton iteration method. For comparison purposes we also solve the collocation equation (10.3) using the Newton iteration (which we call the direct Newton method) and the two-grid method.
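A useful property of the level-one collocation functionals defined above is that they annihilate every function in $X_0$: applied to any linear polynomial, $\ell_{10}$ and $\ell_{11}$ give zero. This vanishing-moment property is what makes the matrix entries decay across levels. A short illustrative check (not code from the book):

```python
# The level-one collocation functionals l_10 and l_11 from the text,
# applied to linear polynomials a + b t: both must return zero.
def l10(f):
    return f(1 / 6) - 1.5 * f(1 / 3) + 0.5 * f(2 / 3)

def l11(f):
    return 0.5 * f(1 / 3) - 1.5 * f(2 / 3) + f(5 / 6)

for a, b in [(1.0, 0.0), (0.0, 1.0), (2.0, -3.0)]:
    p = lambda t, a=a, b=b: a + b * t  # an arbitrary linear polynomial
    assert abs(l10(p)) < 1e-12 and abs(l11(p)) < 1e-12
```

The coefficients of each functional sum to zero (degree 0) and their first moments cancel as well (degree 1), which is exactly the annihilation of $X_0$.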

We report the numerical results in Table 10.1. Columns 3 and 4 of Table 10.1 list, respectively, the computed errors and the computed convergence orders (CO) of the approximate solution $u_{4,m}$ of equation (10.72) obtained by Algorithm 10.9, and column 5 presents the computing time $T_M$, measured in seconds, when Algorithm 10.9 is used. These numerical results confirm the theoretical estimates presented in the last subsection: the theoretical order of convergence for piecewise linear approximation is 2, and the computing time is linear with respect to the dimension of the subspace. In column 6 we list the computed errors of the approximate solutions $u_{4+m}$ obtained from the direct Newton method, and in column 7 the computing time $T_N$, in seconds, when the direct Newton method is used to solve (10.3). For the direct Newton method we only compute the results for $m \le 6$, since it becomes much more expensive for larger-scale problems and the data we obtained already illustrate clearly the computational efficiency of the proposed method. The numerical errors $\|u^* - u^G_{4+m}\|_\infty$ and the computing time $T_G$ of the two-grid method are listed in columns 8 and 9, respectively. In Figure 10.2 we plot the computing times of the proposed method, the direct Newton method and the

Table 10.1 Numerical results for Example 1

 m   s(4+m)   ||u*-u_{4,m}||_inf   CO     T_M     ||u*-u_{4+m}||_inf   T_N      ||u*-u^G_{4+m}||_inf   T_G
 0      32        5.142e-3         --      4.5        5.142e-3           4.6         5.142e-3            4.6
 1      64        1.282e-3        2.00     8.8        1.282e-3          12.8         1.285e-3            7.8
 2     128        3.190e-4        2.01    14.3        3.192e-4          35.5         3.192e-4           13.2
 3     256        7.962e-5        2.00    20.1        7.967e-5          96.0         7.972e-5           24.2
 4     512        1.994e-5        2.00    27.6        1.996e-5         277.3         1.995e-5           57.4
 5    1024        5.057e-6        1.98    38.2        5.061e-6         774.5         5.063e-6          142.8
 6    2048        1.240e-6        2.03    54.4        1.243e-6        2186.2         1.239e-6          406.2
 7    4096        3.094e-7        2.00    80.5
 8    8192        7.740e-8        2.00   130.5
 9   16384        1.932e-8        2.00   229.4


Table 10.2 Computing time for Example 1

 m    T_1      T_2      R(T_2)    T_3      T_4     Δ(T_4)    T_4'
 1   <0.01    0.062      --      <0.01    3.500     --      0.484
 2   <0.01    0.375     6.049    <0.01    7.813    4.31     1.047
 3   <0.01    1.172     3.125    <0.01   12.08     4.27     1.687
 4   <0.01    2.828     2.413    <0.01   17.11     5.03     2.375
 5   <0.01    6.859     2.425    <0.01   23.02     5.91     3.234
 6    0.015  15.83      2.308    <0.01   29.41     6.39     4.077
 7    0.031  34.61      2.186    <0.01   36.30     6.89     5.125
 8    0.078  76.08      2.198    <0.01   44.11     7.81     6.173
 9    0.187 166.6       2.190    <0.01   51.58     7.47     7.188

Figure 10.2 Growth of $T_M$ (∗), $T_G$ (+) and $T_N$ (◦) for Example 1 (computing time in seconds against the level $m$).

two-grid method. The figure shows that $T_N$ grows much faster than the other two as the level $m$ increases, and that $T_M$ grows the slowest.

We next verify numerically the estimates of $M_{k,m,j}$, $j = 1, 2, 3, 4$, established in Section 10.1.4. In Table 10.2 we list the computing times, all measured in seconds, for the four main procedures listed in Section 10.1.4. In the table, $T_1$ denotes the computing time for generating the coefficient matrix $\mathbf{E}_{k,m}$; $T_2$ is the total time for evaluating the vectors $f_{k,l}$, $l = 1, 2, \ldots, m$; $T_3$ is the total time for solving the linear systems (10.21) for $l = 1, 2, \ldots, m$; and $T_4$ stands for the total time spent in solving the nonlinear system (10.15) using the Newton iteration with the Jacobian matrix updated at each iteration step (strategy 1). We observe that $T_1$ and $T_3$ are small in comparison with $T_2$ and $T_4$. The column "R(T_2)" reports the growth ratio of $T_2$, where the data are obtained by calculating $T_2(m)/T_2(m-1)$.


The column "Δ(T_4)" lists the difference $T_4(m) - T_4(m-1)$ of two successive values of $T_4$. We observe that the values in the column "R(T_2)" are close to 2, which coincides with the estimate $M_{k,m,2} = \mathcal{O}(\mu^{k+m})$ with $\mu = 2$. The values in the column "Δ(T_4)" are nearly constant, and this verifies the estimate $M_{k,m,4} = \mathcal{O}(m)$.

We also use strategy 2 for the Newton iteration in step 3 of Algorithm 10.9 to solve the nonlinear system (10.15). Specifically, we keep the last Jacobian matrix obtained in step 1 for all steps of the Newton iteration, without updating it, and take a few more iteration steps to ensure that the same approximation accuracy is obtained. We list in the last column of Table 10.2, under "T_4'", the computing time for solving the nonlinear system (10.15) using the Newton iteration without updating the Jacobian matrix (strategy 2), with the numbers of iterations listed in Table 10.3. For comparison, Table 10.3 also lists the numbers of iterations used by strategy 1. Comparing the data of "T_4" and "T_4'" in the same row, we observe that the modification cuts down the computational cost remarkably. We plot the values of $T_2$, $T_4$ and $T_4'$ in Figure 10.3. Note that $T_2$ grows the fastest and occupies most of the running

Table 10.3 A comparison of the iteration numbers used in step 3 with strategies 1 and 2

 Level        1   2   3   4   5   6   7   8   9
 Strategy 1   3   2   2   2   2   2   2   1   1
 Strategy 2   5   5   5   4   4   4   4   3   3

Figure 10.3 Growth of $T_2$ (∗), $T_4$ (+) and $T_4'$ (◦) for Example 1 (computing time in seconds against the level $m$).


time of the algorithm when $m$ is large. Moreover, both $T_4$ and $T_4'$ grow nearly linearly with respect to the number of levels, and they satisfy the relation $T_4 \approx 7.2\,T_4'$.

This suggests that we should use strategy 2 in step 3 of Algorithm 10.9; hence, in the rest of the numerical experiments we always use strategy 2.
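The trade-off between the two strategies can be seen on a toy problem: freezing the Jacobian (a chord iteration) typically costs a few extra iterations, but each step avoids re-evaluating and refactoring the derivative. The following is an illustrative one-dimensional sketch of the two strategies, not the book's code:

```python
import math

# Strategy 1: full Newton, derivative re-evaluated at every step.
# Strategy 2: chord iteration, derivative evaluated once and reused.
def f(x):
    return x - 0.1 * math.sin(x) - 1.0

def fprime(x):
    return 1.0 - 0.1 * math.cos(x)

def newton(x0, tol=1e-12, max_it=100):
    x, it = x0, 0
    while abs(f(x)) > tol and it < max_it:
        x -= f(x) / fprime(x)   # Jacobian updated each step (strategy 1)
        it += 1
    return x, it

def chord(x0, tol=1e-12, max_it=100):
    x, it = x0, 0
    J = fprime(x0)              # Jacobian frozen at the start (strategy 2)
    while abs(f(x)) > tol and it < max_it:
        x -= f(x) / J
        it += 1
    return x, it
```

Both iterations converge to the same root; the frozen-Jacobian variant needs a few more (but much cheaper) steps, mirroring the iteration counts in Table 10.3.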

Example 2  We again consider the numerical solution of equation (10.72). This time we solve the equation using the Galerkin scheme with the spaces of piecewise linear polynomials as approximation subspaces. The orthonormal basis functions for $X_0$ and $W_1$ are, respectively, $w_{00}(t) = 1$, $w_{01}(t) = \sqrt{3}(2t-1)$ for $t \in [0,1]$, and
$$w_{10}(t) = \begin{cases} 1 - 6t, & t \in [0, \frac{1}{2}), \\ 5 - 6t, & t \in [\frac{1}{2}, 1], \end{cases} \qquad w_{11}(t) = \begin{cases} \sqrt{3}(1 - 4t), & t \in [0, \frac{1}{2}), \\ \sqrt{3}(4t - 3), & t \in [\frac{1}{2}, 1]. \end{cases}$$
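Orthonormality of these four functions in $L^2(0,1)$ can be confirmed directly with a fine quadrature; the following is an illustrative sketch, not the book's code:

```python
import math

# Gram matrix check for the four Galerkin basis functions above: the midpoint
# rule (with the breakpoint t = 1/2 falling on a cell edge) integrates the
# piecewise-polynomial products to high accuracy.
def w(i, t):
    r3 = math.sqrt(3.0)
    if i == 0:
        return 1.0
    if i == 1:
        return r3 * (2.0 * t - 1.0)
    if i == 2:
        return 1.0 - 6.0 * t if t < 0.5 else 5.0 - 6.0 * t
    return r3 * (1.0 - 4.0 * t) if t < 0.5 else r3 * (4.0 * t - 3.0)

def inner(i, j, n=20000):
    h = 1.0 / n
    return h * sum(w(i, (k + 0.5) * h) * w(j, (k + 0.5) * h) for k in range(n))
```

The Gram matrix $[\langle w_i, w_j \rangle]$ comes out as the identity up to quadrature error.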

The bases for the spaces $W_i$, $i > 1$, are obtained by applying the linear operators defined in terms of the contractive mappings $\phi_0$ and $\phi_1$.

We solve equation (10.72) by Algorithm 10.9 with initial level $k = 4$, based on the multiscale Galerkin method developed via the above basis functions. The nonlinear system (10.15) related to (10.72) is solved by the secant method. We present the numerical results in Table 10.4. It is clear from the table that the numerical solutions converge at approximately the theoretical order 2. Moreover, the growth of $T_2$ and $T_4'$ is consistent with our estimates. Note that for this one-dimensional equation the entries of the coefficient matrix involve double integrals; as a result, the computing time for this method is much greater than that for the corresponding collocation method. The numerical results of the two-grid method are listed in the last two columns for comparison.

Table 10.4 Numerical results for Example 2

 m   s(4+m)   ||u*-u_{4,m}||_2   CO     T_M     T_2     T_4'    ||u*-u^G_{4+m}||_2   T_G
 0      32       1.016e-3        --     7.98     --      --         1.016e-3          8.13
 1      64       2.547e-4       2.00   14.3     0.425    4.254      2.543e-4         36.1
 2     128       6.425e-5       1.99   23.6     1.57    10.48       6.431e-5         52.9
 3     256       1.661e-5       1.95   38.2     3.62    19.68       1.670e-5         72.3
 4     512       4.632e-6       1.84   53.8     7.02    31.63       4.623e-6        102.1
 5    1024       1.292e-6       1.84   86.6    12.7     42.24       1.263e-6        163.2
 6    2048       3.235e-7       2.00   98.1    23.4     55.59


Example 3  In this example we consider solving the equation
$$u(s) - \int_0^1 \log\frac{1}{16\,|\cos(\pi s) - \cos(\pi t)|}\,\bigl(2u^2(t) - 1\bigr)\,dt = f(s), \qquad s \in [0,1], \tag{10.73}$$

with
$$f(s) = \cos\Bigl(\frac{\pi}{2}\Bigl(\frac{1}{2} - s\Bigr)\Bigr) + \frac{1}{16\pi}\bigl[2 - (1 - \cos(\pi s))\log(1 - \cos(\pi s)) - (1 + \cos(\pi s))\log(1 + \cos(\pi s))\bigr].$$
Note that the kernel of the integral operator involved in this equation is weakly singular. The equation has an isolated solution $u^*(s) = \cos\bigl(\frac{\pi}{2}\bigl(\frac{1}{2} - s\bigr)\bigr)$, $s \in [0,1]$. We use the multiscale collocation scheme with the same bases as in the first example, and the nonlinear system (10.15) is solved by the MAM in conjunction with strategy 2, that is, with no updating of the Jacobian matrix.

The weakly singular integrals are computed numerically using the quadrature methods proposed in [242]. Taking the cost of numerical integration into account, in this case $M_{k,m,2} = \mathcal{O}\bigl(s(k+m)(\log s(k+m))^3\bigr)$; therefore implementing the entire algorithm requires $\mathcal{O}\bigl(s(k+m)(\log s(k+m))^3\bigr)$ multiplications. See also [113, 164, 269, 270] for related treatments of weakly singular integrals. The numerical results are listed in Table 10.5. Since the amounts $T_1$ and $T_3$ are insignificant in comparison with $T_2$ and $T_4'$, they are not included in the table for this experiment.

Example 4  In the last example we solve the two-dimensional equation
$$u(x,y) - \int_\Omega \sin(xy + x'y')\,u^2(x',y')\,dx'\,dy' = f(x,y), \qquad (x,y) \in \Omega, \tag{10.74}$$

Table 10.5 Numerical results for Example 3

 m   s(4+m)   ||u*-u_{4,m}||_inf   CO     T_M     T_2     T_4'   ||u*-u^G_{4+m}||_inf   T_G
 0      32        1.031e-3         --      7.6     --      --        1.031e-3            7.6
 1      64        2.607e-4        1.98     8.0     0.1     0.4       2.613e-4           11.8
 2     128        6.493e-5        2.01    11.0     0.9     1.5       6.495e-5           22.4
 3     256        1.613e-5        2.01    12.7     1.4     2.1       1.621e-5           48.2
 4     512        4.041e-6        2.00    18.0     3.7     3.3       4.043e-6          120.2
 5    1024        1.007e-6        2.00    24.4     9.7     3.7       1.010e-6          331.6
 6    2048        2.589e-7        1.96    41.4    21.7     6.4       2.593e-7          998.8
 7    4096        6.395e-8        2.02    74.9    53.5     6.9
 8    8192        1.577e-8        2.02   120.5    98.4     8.8
 9   16384        3.920e-9        2.01   262.8   236.1    10.5


where $\Omega = \{(x,y) : 0 \le x \le y \le 1\}$ and the function $f$ is chosen so that $u^*(x,y) = x^2 + y^2$ is an isolated solution of the equation.

We solve equation (10.74) by the MAM based on the multiscale collocation scheme. We choose $X_n$ as the space of bivariate piecewise linear polynomials with the multiscale partitions defined by the family of contractive mappings $\Phi = \{\phi_e : e \in \mathbb{Z}_4\}$, where
$$\phi_0(x,y) = \Bigl(\frac{x}{2},\ \frac{y}{2}\Bigr), \qquad \phi_1(x,y) = \Bigl(\frac{x}{2},\ \frac{y+1}{2}\Bigr),$$
$$\phi_2(x,y) = \Bigl(\frac{1-x}{2},\ 1 - \frac{y}{2}\Bigr), \qquad \phi_3(x,y) = \Bigl(\frac{x+1}{2},\ \frac{y+1}{2}\Bigr).$$
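These four maps subdivide the triangle into four congruent subtriangles (the middle one with reversed orientation). A small illustrative check, with $\phi_2$ written as reconstructed above, that each map sends $\Omega$ into itself and that the four images cover $\Omega$:

```python
# The four contractive mappings on Omega = {(x, y) : 0 <= x <= y <= 1}.
def phi(e, x, y):
    if e == 0:
        return x / 2.0, y / 2.0
    if e == 1:
        return x / 2.0, (y + 1.0) / 2.0
    if e == 2:
        return (1.0 - x) / 2.0, 1.0 - y / 2.0
    return (x + 1.0) / 2.0, (y + 1.0) / 2.0

def phi_inv(e, u, v):
    # Inverse affine maps; (u, v) lies in phi_e(Omega) iff its preimage is in Omega.
    if e == 0:
        return 2.0 * u, 2.0 * v
    if e == 1:
        return 2.0 * u, 2.0 * v - 1.0
    if e == 2:
        return 1.0 - 2.0 * u, 2.0 * (1.0 - v)
    return 2.0 * u - 1.0, 2.0 * v - 1.0

def in_omega(x, y, eps=1e-9):
    return -eps <= x and x <= y + eps and y <= 1.0 + eps
```

Every sample point of $\Omega$ lands back in $\Omega$ under each $\phi_e$, and every sample point is reached by at least one of the four images, so the images tile the triangle.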

For the space $X_n$ we have $\dim(X_n) = 3 \cdot 4^n$. Let $S_e = \phi_e(\Omega)$, $e \in \mathbb{Z}_4$. The basis functions for $X_0$ are given by
$$w_{00}(x,y) = -3x + 2y, \qquad w_{01}(x,y) = x - 3y + 2, \qquad w_{02}(x,y) = 2x + y - 1, \qquad (x,y) \in \Omega.$$

The corresponding collocation functionals are chosen as
$$\ell_{00} = \delta_{(\frac{2}{7}, \frac{3}{7})}, \qquad \ell_{01} = \delta_{(\frac{1}{7}, \frac{5}{7})}, \qquad \ell_{02} = \delta_{(\frac{4}{7}, \frac{6}{7})},$$

where $\langle \delta_{(x,y)}, g \rangle = g(x,y)$. The basis functions for $W_1$ are given by

$$w_{10}(x,y) = \begin{cases} -\frac{11}{8} - \frac{15}{8}x + \frac{41}{8}y, & (x,y) \in S_0, \\ \frac{5}{8} + \frac{1}{8}x - \frac{7}{8}y, & (x,y) \in \Omega \setminus S_0, \end{cases}$$
$$w_{11}(x,y) = \begin{cases} 1 - \frac{15}{4}x - \frac{7}{8}y, & (x,y) \in S_0, \\ -1 + \frac{1}{4}x + \frac{9}{8}y, & (x,y) \in \Omega \setminus S_0, \end{cases}$$
$$w_{12}(x,y) = \begin{cases} \frac{9}{8} + \frac{15}{8}x - \frac{29}{8}y, & (x,y) \in S_0, \\ -\frac{15}{8} - \frac{1}{8}x + \frac{19}{8}y, & (x,y) \in \Omega \setminus S_0, \end{cases}$$
$$w_{13}(x,y) = \begin{cases} -\frac{15}{8} - \frac{41}{8}x + \frac{13}{4}y, & (x,y) \in S_1, \\ \frac{1}{8} + \frac{7}{8}x - \frac{3}{4}y, & (x,y) \in \Omega \setminus S_1, \end{cases}$$
$$w_{14}(x,y) = \begin{cases} \frac{29}{8} + \frac{7}{8}x - \frac{37}{8}y, & (x,y) \in S_1, \\ -\frac{3}{8} - \frac{9}{8}x + \frac{11}{8}y, & (x,y) \in \Omega \setminus S_1, \end{cases}$$
$$w_{15}(x,y) = \begin{cases} -\frac{5}{8} - \frac{29}{8}x + \frac{7}{4}y, & (x,y) \in S_1, \\ \frac{3}{8} + \frac{19}{8}x - \frac{9}{4}y, & (x,y) \in \Omega \setminus S_1, \end{cases}$$
$$w_{16}(x,y) = \begin{cases} \frac{15}{4} - \frac{13}{4}x - \frac{15}{8}y, & (x,y) \in S_3, \\ -\frac{1}{4} + \frac{3}{4}x + \frac{1}{8}y, & (x,y) \in \Omega \setminus S_3, \end{cases}$$


$$w_{17}(x,y) = \begin{cases} -\frac{1}{8} - \frac{37}{8}x + \frac{15}{4}y, & (x,y) \in S_3, \\ -\frac{1}{8} + \frac{11}{8}x - \frac{1}{4}y, & (x,y) \in \Omega \setminus S_3, \end{cases}$$
$$w_{18}(x,y) = \begin{cases} -\frac{5}{2} + \frac{7}{4}x + \frac{15}{8}y, & (x,y) \in S_3, \\ \frac{1}{2} - \frac{9}{4}x - \frac{1}{8}y, & (x,y) \in \Omega \setminus S_3. \end{cases}$$

Their corresponding collocation functionals are chosen as
$$\ell_{10} = \delta_{(\frac{1}{14},\frac{5}{14})} - \delta_{(\frac{1}{7},\frac{3}{14})} + \delta_{(\frac{2}{7},\frac{3}{7})} - \delta_{(\frac{3}{14},\frac{4}{7})},$$
$$\ell_{11} = \delta_{(\frac{1}{14},\frac{5}{14})} - \delta_{(\frac{2}{7},\frac{3}{7})} + \delta_{(\frac{3}{7},\frac{9}{14})} - \delta_{(\frac{3}{14},\frac{4}{7})},$$
$$\ell_{12} = \delta_{(\frac{5}{14},\frac{11}{14})} - \delta_{(\frac{3}{7},\frac{9}{14})} + \delta_{(\frac{2}{7},\frac{3}{7})} - \delta_{(\frac{3}{14},\frac{4}{7})},$$
$$\ell_{13} = \delta_{(\frac{1}{14},\frac{6}{7})} - \delta_{(\frac{1}{7},\frac{5}{7})} + \delta_{(\frac{5}{14},\frac{11}{14})} - \delta_{(\frac{2}{7},\frac{13}{14})},$$
$$\ell_{14} = \delta_{(\frac{1}{7},\frac{5}{7})} - \delta_{(\frac{2}{7},\frac{13}{14})} + \delta_{(\frac{5}{14},\frac{11}{14})} - \delta_{(\frac{3}{14},\frac{4}{7})},$$
$$\ell_{15} = \delta_{(\frac{3}{7},\frac{9}{14})} - \delta_{(\frac{5}{14},\frac{11}{14})} + \delta_{(\frac{1}{7},\frac{5}{7})} - \delta_{(\frac{3}{14},\frac{4}{7})},$$
$$\ell_{16} = \delta_{(\frac{4}{7},\frac{6}{7})} - \delta_{(\frac{11}{14},\frac{13}{14})} + \delta_{(\frac{9}{14},\frac{5}{7})} - \delta_{(\frac{3}{7},\frac{9}{14})},$$
$$\ell_{17} = \delta_{(\frac{4}{7},\frac{6}{7})} - \delta_{(\frac{9}{14},\frac{5}{7})} + \delta_{(\frac{3}{7},\frac{9}{14})} - \delta_{(\frac{5}{14},\frac{11}{14})},$$
$$\ell_{18} = \delta_{(\frac{3}{14},\frac{4}{7})} - \delta_{(\frac{3}{7},\frac{9}{14})} + \delta_{(\frac{4}{7},\frac{6}{7})} - \delta_{(\frac{5}{14},\frac{11}{14})}.$$

The construction of the basis functions and collocation functionals at higher levels is described in Chapter 7 (cf. [69, 75]).

The above bases are used in our method for solving (10.74), and the numerical results are listed in Table 10.6. At each step the nonlinear system (10.15) is solved by the Newton iteration method. The computed convergence orders are all around 2, which confirms the theoretical estimate of the convergence order. Since $\dim(X_{n+1})/\dim(X_n) = 4$, the theoretical value of $R(T_2)$ is 4; Table 10.6 shows that the computed values of $R(T_2)$ are indeed close to 4.

For comparison we also solve the collocation equation (10.3) by the direct Newton method and the two-grid method. The numerical errors and computing times of the three methods are listed in Table 10.7. The data show that the

Table 10.6 Numerical results for Example 4

 m   s(2+m)   ||u*-u_{2,m}||   CO     T_M      T_2     R(T_2)    T_4'    Δ(T_4')
 0      48      3.269e-2       --      7.360     --      --       --       --
 1     192      8.159e-3      2.00    13.76     2.820    --      3.580     --
 2     768      2.035e-3      2.00    28.63    14.08    4.995    7.163    3.583
 3    3072      5.076e-4      2.00    77.25    59.15    4.201   10.74     3.577
 4   12288      1.268e-4      2.00   261.0    239.3     4.046   14.32     3.580


Table 10.7 Comparison of the methods for solving Example 4

 m   s(2+m)   ||u*-u_{2,m}||   T_M     ||u*-u^G_{2+m}||   T_G     ||u*-u_{2+m}||   T_N
 0      48      3.269e-2        7.360     3.269e-2         7.360     3.269e-2        7.360
 1     192      8.159e-3       13.76      8.162e-3        21.32      8.157e-3       35.55
 2     768      2.035e-3       28.63      2.035e-3        63.24      2.035e-3      272.4
 3    3072      5.076e-4       77.25      5.084e-4       361.3       5.083e-4     1749.2
 4   12288      1.268e-4      261.0

Figure 10.4 Comparison for Example 4. Left: growth of $T_2$ (∗) and $T_4'$ (◦). Right: growth of $T_M$ (∗), $T_G$ (+) and $T_N$ (◦).

proposed method has nearly the same accuracy as the direct Newton method and the two-grid method. To compare the computing times for this example visually, we plot them in Figure 10.4, from which we observe that the proposed method runs the fastest.

10.4.2 Numerical examples for nonlinear boundary integral equations

In this subsection we present numerical results that verify the approximation accuracy and computational efficiency of the proposed algorithm for solving nonlinear boundary integral equations. All programs are run on a workstation with a 3.38-GHz CPU and 96 GB of memory.

We consider the boundary value problem (10.26) with $g(x, u(x)) = u(x) + \sin(u(x))$. Let $\Omega$ be the elliptical region $\{x : x_1^2 + (\frac{x_2}{2})^2 < 1\}$. For comparison purposes we choose the solution of (10.26) as $u_0(x) = e^{x_1}\cos(x_2)$, with $x = (x_1, x_2)$. Correspondingly, we have
$$g_0(x) = g(x, u_0) + \frac{\partial u_0(x)}{\partial n_x}.$$


The corresponding solution of the boundary integral equation (10.29) is given by
$$u^*(t) = e^{\cos(2\pi t)}\cos(2\sin(2\pi t)).$$
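This expression is just the boundary trace of $u_0$: with the standard parametrization $x(t) = (\cos 2\pi t,\ 2\sin 2\pi t)$ of the ellipse (our assumption here; the book fixes some 1-periodic parametrization of the boundary), substituting into $u_0(x) = e^{x_1}\cos(x_2)$ gives $u^*(t)$ exactly. An illustrative check:

```python
import math

# Boundary trace check: the assumed parametrization x(t) = (cos 2*pi*t,
# 2*sin 2*pi*t) lies on the ellipse x1^2 + (x2/2)^2 = 1, and the trace of
# u0(x) = exp(x1) * cos(x2) along it reproduces u*(t).
def u_star(t):
    return math.exp(math.cos(2.0 * math.pi * t)) * math.cos(2.0 * math.sin(2.0 * math.pi * t))

def trace(t):
    x1 = math.cos(2.0 * math.pi * t)
    x2 = 2.0 * math.sin(2.0 * math.pi * t)
    assert abs(x1 ** 2 + (x2 / 2.0) ** 2 - 1.0) < 1e-12  # point is on the boundary
    return math.exp(x1) * math.cos(x2)
```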

In all experiments presented in this subsection we choose $\mu = 2$ and $X_n$ as the space of piecewise cubic polynomials with knots at $j/2^n$, $j = 1, 2, \ldots, 2^n - 1$. It is easy to see that $\dim(X_n) = 2^{n+2}$. The basis functions of $X_0$ and $W_1$ are given by
$$w_{00}(t) = -\tfrac{1}{6}(5t-2)(5t-3)(5t-4), \qquad w_{01}(t) = \tfrac{1}{2}(5t-1)(5t-3)(5t-4),$$
$$w_{02}(t) = -\tfrac{1}{2}(5t-1)(5t-2)(5t-4), \qquad w_{03}(t) = \tfrac{1}{6}(5t-1)(5t-2)(5t-3),$$

$$w_{10}(t) = \begin{cases} \frac{85}{32} - \frac{725}{12}t + \frac{575}{2}t^2 - \frac{1475}{4}t^3, & t \in [0, \frac{1}{2}], \\ -\frac{235}{32} + \frac{575}{12}t - \frac{175}{2}t^2 + \frac{575}{12}t^3, & t \in (\frac{1}{2}, 1], \end{cases}$$
$$w_{11}(t) = \begin{cases} \frac{1145}{288} - \frac{1775}{24}t + \frac{1675}{6}t^2 - \frac{4975}{18}t^3, & t \in [0, \frac{1}{2}], \\ -\frac{7495}{288} + \frac{3625}{24}t - \frac{525}{2}t^2 + \frac{2525}{18}t^3, & t \in (\frac{1}{2}, 1], \end{cases}$$
$$w_{12}(t) = \begin{cases} \frac{805}{288} - \frac{375}{8}t + \frac{475}{3}t^2 - \frac{2525}{18}t^3, & t \in [0, \frac{1}{2}], \\ -\frac{19355}{288} + \frac{8275}{24}t - 550t^2 + \frac{4975}{18}t^3, & t \in (\frac{1}{2}, 1], \end{cases}$$
$$w_{13}(t) = \begin{cases} \frac{95}{96} - \frac{50}{3}t + \frac{225}{4}t^2 - \frac{575}{12}t^3, & t \in [0, \frac{1}{2}], \\ -\frac{13345}{96} + \frac{1775}{3}t - \frac{3275}{4}t^2 + \frac{1475}{4}t^3, & t \in (\frac{1}{2}, 1]. \end{cases}$$

The corresponding collocation functionals are

$$\ell_{00} = \delta_{1/5}, \qquad \ell_{01} = \delta_{2/5}, \qquad \ell_{02} = \delta_{3/5}, \qquad \ell_{03} = \delta_{4/5},$$
$$\ell_{10} = \tfrac{2}{5}\delta_{1/10} - \tfrac{3}{2}\delta_{2/10} + 2\delta_{3/10} - \delta_{4/10} + \tfrac{1}{10}\delta_{6/10},$$
$$\ell_{11} = \tfrac{3}{10}\delta_{2/10} - \delta_{3/10} + \delta_{4/10} - \tfrac{1}{2}\delta_{6/10} + \tfrac{1}{5}\delta_{7/10},$$
$$\ell_{12} = \tfrac{1}{5}\delta_{3/10} - \tfrac{1}{2}\delta_{4/10} + \delta_{6/10} - \delta_{7/10} + \tfrac{3}{10}\delta_{8/10},$$
$$\ell_{13} = \tfrac{1}{10}\delta_{4/10} - \delta_{6/10} + 2\delta_{7/10} - \tfrac{3}{2}\delta_{8/10} + \tfrac{2}{5}\delta_{9/10}.$$

The basis functions $w_{ij}$ and collocation functionals $\ell_{ij}$ for $i > 1$ can be constructed recursively from $w_{1j}$ and $\ell_{1j}$.
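The level-0 functions above are just the cubic Lagrange polynomials for the nodes $1/5, \ldots, 4/5$, and the level-1 functionals are biorthogonal to the level-1 basis. A small numerical sanity check of the formulas as transcribed here (my own sketch, not part of the book):

```python
# Sketch: check that w_{0j}(t_i) = delta_{ij} at the nodes t_i = i/5, and
# spot-check the biorthogonality <ell_{10}, w_{1j}> for j = 0, 1.
w0 = [
    lambda t: -(5*t - 2)*(5*t - 3)*(5*t - 4)/6,
    lambda t:  (5*t - 1)*(5*t - 3)*(5*t - 4)/2,
    lambda t: -(5*t - 1)*(5*t - 2)*(5*t - 4)/2,
    lambda t:  (5*t - 1)*(5*t - 2)*(5*t - 3)/6,
]
for i, t in enumerate([1/5, 2/5, 3/5, 4/5]):
    for j, w in enumerate(w0):
        assert abs(w(t) - (1.0 if i == j else 0.0)) < 1e-12

def w10(t):
    if t <= 0.5:
        return 85/32 - 725/12*t + 575/2*t**2 - 1475/4*t**3
    return -235/32 + 575/12*t - 175/2*t**2 + 575/12*t**3

def w11(t):
    if t <= 0.5:
        return 1145/288 - 1775/24*t + 1675/6*t**2 - 4975/18*t**3
    return -7495/288 + 3625/24*t - 525/2*t**2 + 2525/18*t**3

def ell10(w):
    # ell_10 = (2/5)d_{0.1} - (3/2)d_{0.2} + 2 d_{0.3} - d_{0.4} + (1/10)d_{0.6}
    return 2/5*w(0.1) - 3/2*w(0.2) + 2*w(0.3) - w(0.4) + 1/10*w(0.6)

assert abs(ell10(w10) - 1.0) < 1e-9    # <ell_10, w_10> = 1
assert abs(ell10(w11)) < 1e-9          # <ell_10, w_11> = 0
print("Lagrange and biorthogonality checks passed")
```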



Table 10.8 Total time for solving the related nonlinear systems

m    Tm       T′m
0    2.49     0.016
1    3.70     0.046
2    4.47     0.078
3    5.37     0.093
4    6.38     0.140
5    7.47     0.203
6    8.52     0.382
7    9.91     0.757
8    11.32    1.592

Example 1 In this experiment we use the MAM with initial level k = 4 to solve the boundary integral equation (10.29). The purpose of this experiment is to test how much the proposed technique for solving the nonlinear systems speeds up the solution process. To this end, we run Algorithms 10.18 and 10.21 separately and add up the time used in solving the nonlinear systems. Note that in Algorithm 10.18 we need to solve (10.22) once and (10.51) m times, while in Algorithm 10.21 we need to solve (10.58) once and (10.59) m times. In Table 10.8, Tm denotes the total time spent in Algorithm 10.18 for solving the nonlinear systems, while T′m denotes that in Algorithm 10.21. We see clearly from Table 10.8 that T′m is much less than Tm.

Example 2 We illustrate in this experiment the approximation accuracy and the total computational effects of Algorithm 10.21, compared with those of Algorithm 10.18 and Algorithm AC (the algorithm of Atkinson and Chandler presented in [17]).

In Table 10.9 we report the numerical results of Algorithms 10.18 and 10.21. For any m, we denote by $u_{4,m}$ and $\tilde{u}_{4,m}$, respectively, the numerical solutions resulting from Algorithms 10.18 and 10.21. Moreover, we let TM and T′M denote the total times for implementing Algorithms 10.18 and 10.21, respectively. We observe in Table 10.9 that $u_{4,m}$ and $\tilde{u}_{4,m}$ have nearly the same accuracy, while T′M is significantly less than TM. This indicates that although Algorithms 10.18 and 10.21 have the same order of computational costs, the techniques proposed here effectively reduce the absolute computational costs.

For comparison, we list in Table 10.10 the numerical results of Algorithm AC, which is a Nyström method. We let N be the number of unknowns, uN the numerical solution and TN the running time of the program.

Note that we cannot compare Algorithms 10.18 and AC at the same discretization scale, since Algorithm 10.18 is a multiscale method and Algorithm



Table 10.9 Comparison of the accuracy and running time of Algorithms 10.18 and 10.21

m   s(4+m)   ‖u* − u4,m‖∞   ‖u* − ũ4,m‖∞   TM      T′M
0   64       4.107e-3       5.079e-3       8.5     0.2
1   128      3.079e-4       3.007e-4       8.9     0.4
2   256      2.152e-5       2.225e-5       11.2    0.8
3   512      1.491e-6       1.527e-6       14.6    1.7
4   1024     8.204e-8       8.624e-8       20.3    3.7
5   2048     5.041e-9       5.313e-9       30.9    7.9
6   4096     2.865e-10      2.970e-10      51.9    16.9
7   8192     1.795e-11      1.882e-11      88.0    36.1
8   16384    1.023e-12      1.101e-12      196.1   76.8

Table 10.10 Numerical results of Algorithm AC

N      ‖u* − uN‖∞   TN
128    1.780e-5     3
256    1.071e-6     11
512    6.559e-8     52
1024   4.076e-9     226
2048   2.695e-10    836
4096   9.870e-11    3877
8192   6.751e-11    16473

AC is a single-scale quadrature method. However, we may utilize a "numerical errors vs. computing time" figure for these methods. We use the data in Tables 10.9 and 10.10 to generate Figure 10.5. For convenience of display, we take logarithms of both the numerical errors and the computing times. It is seen that, for any accuracy level, Algorithm 10.21 uses the least time among the three algorithms. These numerical results confirm that the proposed methods outperform both the two-grid method and Algorithm AC.

10.5 Bibliographical remarks

The results presented in this chapter were mainly chosen from the original papers [51, 52, 76]. The reader is referred to the recent papers [50, 158] for additional developments, where a multiscale collocation method was applied to solving nonlinear Hammerstein equations. Specifically, in [158] the sparsity of the Jacobian matrices resulting from nonlinear solvers, such as the Newton method and the quasi-Newton method, was discussed, which leads to a fast



[Figure 10.5: a log–log plot of the numerical errors against the computing times for Algorithms 10.21, 10.18 and AC; the horizontal axis is the logarithm of computing time and the vertical axis is the logarithm of numerical errors.]

Figure 10.5 Comparison of Algorithms 10.18, 10.21 and AC on numerical performance

algorithm. Furthermore, a fast multilevel augmentation method was applied to a transformed nonlinear equation, which results in an additional saving in computing time.

Numerical solutions of nonlinear integral equations, especially the Hammerstein equation, have been studied extensively in the literature; see [20, 23, 34, 35, 131, 132, 159, 161, 163, 175, 180–182, 240, 255, 257, 258]. Specifically, a degenerate kernel scheme was introduced in [163], a variation of Nyström's method was proposed in [182] and a product integration method was discussed in [161]. Projection methods, including the Galerkin scheme and the collocation scheme, were discussed in [20, 23, 47, 131, 159, 180, 181, 240, 255, 258]. Fundamental work on the error analysis of projection methods may be found in Section 3.3 of [175]. Studies of superconvergence properties can be found in [160, 161, 165, 179, 247]. Moreover, the reader is referred to [13, 22] for general information on numerical treatments of the Hammerstein equations. Furthermore, the regularity of the solution of the Hammerstein equation with a weakly singular kernel was studied in [162]. Certain aspects of the theoretical analysis of the Hammerstein equation may be found in [99, 130, 184, 277]. The connection of a large class of Gaussian processes with the Hammerstein equation was studied in [218].



Boundary value problems of the Laplace equation serve as mathematical models for many important applications, such as acoustics, elasticity, electromagnetics and fluid dynamics (see [126, 149, 208, 234, 265] and the references cited therein). Nonlinear boundary conditions are also involved in various applications, such as heat radiation and heat transfer [29, 32, 42]. In some electromagnetic problems the boundary conditions may also contain nonlinearities [171]. In these cases the reformulation of the corresponding boundary value problems leads to nonlinear integral equations. The nonlinear boundary integral equation of the Laplace equation was treated numerically in [17]. Certain linearization approaches are used in the numerical treatment of nonlinear integral equations (see, for example, [13–15, 20, 23, 131, 132, 159–161, 163, 165, 182, 240, 255]).


11

Multiscale methods for ill-posed integral equations

In this chapter we present multiscale methods for solving ill-posed integral equations of the first kind. The ill-posed integral equations are converted into well-posed integral equations of the second kind by regularization methods, including the Lavrentiev regularization and the Tikhonov regularization. Multiscale Galerkin and collocation methods are introduced for solving the resulting well-posed equations. A priori and a posteriori regularization parameter choice strategies are proposed. Convergence rates of the regularized solutions are established.

11.1 Numerical solutions of regularization problems

We present a brief review of the regularization of ill-posed integral equations of the first kind and discuss the main idea used in this chapter in developing fast algorithms for solving these equations.

We consider the Fredholm integral equation of the first kind in the form

$$Ku = f, \qquad (11.1)$$

where

$$(Ku)(s) = \int_{\Omega} K(s, t)u(t)\,dt, \quad s \in \Omega. \qquad (11.2)$$

Noting that compact operators cannot have a continuous inverse, equation (11.1) with a compact operator K is an ill-posed problem in the sense of the following definition.

Definition 11.1 Let K be an operator from a normed linear space X to a normed linear space Y. Equation (11.1) is said to be well posed if for any




$f \in Y$ there exists a solution $u \in X$, the solution is unique, and the dependence of u upon f is continuous. Otherwise, the equation is called ill posed.

Among the three conditions for equation (11.1) to be well posed listed in Definition 11.1, continuous dependence of the solution on the given data is the most crucial and challenging, because the nonexistence or nonuniqueness of a solution may be well treated by an existing method such as the least-squares method. In this chapter we mainly treat the ill-posedness caused by the violation of this condition. The ill-posedness of the equation is often treated by regularization, which imposes certain a priori conditions on the solution. The most commonly used method of regularization of ill-posed problems is the Tikhonov regularization (named after Andrey Tikhonov) [252]. It is also called the Tikhonov–Phillips regularization because of the contribution of Phillips [217]. Another frequently used method is the Lavrentiev regularization [116, 220]. Recent developments in image science use the TV regularization [237].

Certain regularization methods may be formulated as the minimization of a fidelity term, which measures the error Ku − f, plus a regularization parameter times a norm of u determined by the a priori condition. The Tikhonov regularization [116, 117, 119, 217, 252] and the TV regularization are examples of this type. The optimal regularization parameter is usually unknown. In practical problems such as image/signal processing, it is often determined by an ad hoc method. Approaches include the Bayesian method, the discrepancy principle, cross-validation, the L-curve method, restricted maximum likelihood and the unbiased predictive risk estimator. In [262] a connection was established between the choice of the optimal parameter and leave-one-out cross-validation. There is recent interest in multiparameter regularization [63].

Although the theoretical development of solving ill-posed problems is nearly complete, developing efficient, stable, fast solvers for such problems remains an active, focused research area. A bottleneck problem for the numerical solution of the ill-posed Fredholm integral equation of the first kind is its demandingly large computational costs. By regularization, the solution of the ill-posed equation is obtained by solving a sequence of well-posed Fredholm integral equations of the second kind. As discussed in previous chapters, the discretization of such an equation leads to an algebraic system with a full matrix, and numerical solutions of the system are computationally costly. Aiming at overcoming this bottleneck problem, we consider in this chapter solving ill-posed operator equations of the first kind in a multiresolution framework. Although multiscale methods for solving well-posed operator equations have been well understood and widely used (see previous chapters



of this book), less attention has been paid to the development of multiscale methods for ill-posed problems. Solving ill-posed problems requires iterated computations, which demand huge computational costs, and therefore designing efficient numerical methods for solving problems of this type by making use of the underlying multiscale data structure is extremely beneficial. It is the attempt of this chapter to explore the multiscale matrix representation of the operator in developing efficient, fast algorithms for solving ill-posed problems.

To describe the main point of this chapter, we now elaborate on the multilevel augmentation method developed in Chapter 9 for solving well-posed operator equations. It is based on direct sum decompositions of the range space of the operator and of the solution space of the operator equation, together with a matrix splitting scheme. It allows us to develop fast, accurate and stable nonconventional numerical algorithms for solving operator equations. For second-kind equations, special splitting techniques were proposed to develop such algorithms. These algorithms were then applied to solve the linear systems resulting from matrix compression schemes using wavelet-like functions for solving Fredholm integral equations of the second kind. It was proved that the method has an optimal order of convergence and is numerically stable. With an appropriate matrix compression technique, it leads to fast algorithms. Basically, this method generates an approximate solution with convergence order $O(N^{-k/d})$ if piecewise polynomials of order k are used in the approximation and the spatial dimension of the integral equation is d, while solving the entire discrete equation requires computational complexity of order $O(N)$.

The main idea used in Section 11.2 is to combine the Lavrentiev regularization method and the MAM in solving the resulting regularized second-kind equations. Since the matrix compression issue for integral equations has been well explained in previous chapters, we do not discuss it in that section and suppose that an appropriate compression technique will be used in practical computation (cf. [55]). Instead, we focus on choosing regularization parameters by exploring the multiscale structure of the matrix representation of the operator. We present choices of a priori parameters and of a posteriori parameters in the context of the MAM.

Multilevel methods have been applied to solving ill-posed problems, and a priori and a posteriori parameter choice strategies have also been proposed (see, for example, [103, 137, 157, 193]). These methods are all based on the Galerkin discretization principle. Because of the computational advantages of the collocation method, which we discussed earlier, it is highly desirable to develop fast multiscale collocation methods. However, there are two challenging issues related to the development of the fast collocation method for solving the



ill-posed integral equation. First, it requires the availability of a Banach space setting for regularization, since the appropriate context for collocation methods is $L^\infty$. There is some effort at developing the collocation method in the $L^2$

space for solving ill-posed integral equations (see [190, 207, 211]). However, we feel that the traditional regularization analysis in Hilbert spaces (such as the $L^2$ space) is more suitable for Galerkin-type methods. In other words, it is more natural and interesting to analyze collocation-type methods in the $L^\infty$ space. Regularization analysis for collocation methods for ill-posed problems in a Banach space has not yet been completely understood in the literature, even though some interesting results were obtained in [117, 216, 224]. The mathematical analysis of the convergence and convergence rate in a Banach space is quite different from that in a Hilbert space, since many estimates and conclusions which hold in a Hilbert space may not hold in a Banach space. For example, in the Banach space setting little is known about the saturation phenomenon. The second challenging issue is that a posteriori parameter choice strategies for the fast collocation method demand certain estimates in the $L^\infty$-norm which are not available. In general, mathematical analysis is more difficult in a Banach space than in a Hilbert space. An optimal regularization parameter should give the best balance between well-posedness and approximation accuracy. This principle has been used by many researchers (for example, [118, 127–129, 206, 220, 222, 224, 228, 248, 275]) in developing a priori and a posteriori parameter choice strategies for regularized Galerkin methods. The focus of Section 11.3 is to develop a fast collocation algorithm based on the mathematical development presented in [117, 224], and a related a posteriori choice strategy for regularization parameters based on the idea used in [207, 216].

We remark that further development of this topic can be seen in [56]. In this paper, multiscale collocation methods are developed for solving a system of integral equations which is a reformulation of the Tikhonov-regularized second-kind equation of an ill-posed integral equation of the first kind. Direct numerical solution of the Tikhonov regularization equation requires generating a matrix representation of the composition of the conjugate operator with the original integral operator. Generating such a matrix is computationally costly. To overcome this difficulty, rather than solving the Tikhonov-regularized equation directly, it is proposed to solve an equivalent coupled system of integral equations. A multiscale collocation method is applied with a matrix compression strategy to discretize the system of integral equations, and the multilevel augmentation method is then used to solve the resulting discrete system. A priori and a posteriori parameter choice strategies are also developed for these methods.



To close this section, we remark that, as an application of multiscale Galerkin methods for solving the ill-posed integral equation, integral equation models for image restoration were considered in [189]. Discrete models are commonly used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models and hence inevitably impose bottleneck model errors. Paper [189] proposed working directly with continuous models for image restoration, aiming to suppress the model errors caused by the discrete models. A systematic study was conducted in that paper for continuous out-of-focus image models, which can be formulated as an integral equation of the first kind. The resulting integral equation was regularized by the Lavrentiev method and the Tikhonov method. Fast multiscale algorithms having high accuracy were developed there to solve the regularized integral equations of the second kind. Numerical experiments presented in the paper show that the methods based on the continuous model perform much better than those based on discrete models.

11.2 Multiscale Galerkin methods via the Lavrentiev regularization

In this section we first describe the MAM for numerical solutions of ill-posed equations of the first kind and then present convergence analysis for the approximate solution obtained from the MAM with an a priori regularization parameter. We also propose an a posteriori regularization parameter choice strategy in the MAM. The choice of the parameter is adapted to the context of the multilevel augmentation method. We establish an optimal order of convergence for the approximate solution obtained from the multilevel augmentation method using the a posteriori regularization parameter.

11.2.1 Multilevel augmentation methods

We present in this subsection the MAM for numerical solutions of ill-posed operator equations of the first kind. For this purpose, we recall the Lavrentiev regularization method for such equations. Suppose that X is a real Hilbert space with an inner product $(\cdot, \cdot)$ and the related norm $\|\cdot\|$. Let $K : X \to X$ be a linear and positive semi-definite compact operator, that is, $(Kx, x) \ge 0$ for all $x \in X$. Given $f \in X$, we consider the operator equation of the first kind having the form

$$Ku = f, \qquad (11.3)$$

where $u \in X$ is the unknown to be determined.



We assume that the range $R(K)$ of the operator K is infinite-dimensional, and thus equation (11.3) is ill posed [116, 221]. For $f \in R(K)$, we let $u^* \in X$ denote the unique minimum-norm solution of equation (11.3) in the sense that

$$Ku^* = f \quad\text{and}\quad \|u^*\| = \inf\{\|v\| : Kv = f,\ v \in X\}. \qquad (11.4)$$

In general, the solution of (11.3) does not depend continuously on the right-hand side f. Let $\delta > 0$ be a given small number and suppose that $f^\delta \in X$, satisfying the condition

$$\|f^\delta - f\| \le \delta, \qquad (11.5)$$

is the given noisy data actually used in computing the solution of equation (11.3). We apply the Lavrentiev regularization method to solve equation (11.3): for $\alpha > 0$, we solve the equation

$$(\alpha I + K)u^\delta_\alpha = f^\delta. \qquad (11.6)$$

It can be proved that

$$\|(\alpha I + K)^{-1}\| \le \alpha^{-1}, \qquad (11.7)$$

and thus, for any given $\alpha > 0$, equation (11.6) has a unique solution $u^\delta_\alpha \in X$, which we consider as an approximation of $u^*$. It is well known that

$$\lim_{\alpha \to 0,\ \delta\alpha^{-1} \to 0} \|u^\delta_\alpha - u^*\| = 0.$$
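A minimal numerical sketch of this regularization (my own toy construction, not from the book; the positive definite kernel $e^{-|s-t|}$ and the chosen solution are illustrative assumptions):

```python
# Sketch: Lavrentiev regularization (alpha*I + K)u = f_delta for a discretized
# first-kind equation with a symmetric positive definite kernel matrix K.
import numpy as np

n = 200
t = (np.arange(n) + 0.5) / n
K = np.exp(-np.abs(t[:, None] - t[None, :])) / n   # exp(-|s-t|) is positive definite
u_star = np.sin(np.pi * t)                         # chosen "exact" solution
f = K @ u_star

rng = np.random.default_rng(0)
delta = 1e-4
f_delta = f + delta * rng.standard_normal(n) / np.sqrt(n)   # ||f_delta - f|| ~ delta

errors = []
for alpha in (1e-1, 1e-2, 1e-3):
    # (11.7): the smallest eigenvalue of alpha*I + K is at least alpha,
    # so ||(alpha*I + K)^{-1}|| <= 1/alpha and the equation is well posed
    assert np.linalg.eigvalsh(alpha * np.eye(n) + K).min() >= alpha
    u = np.linalg.solve(alpha * np.eye(n) + K, f_delta)
    errors.append(np.linalg.norm(u - u_star) / np.sqrt(n))
    print(f"alpha = {alpha:.0e}   error = {errors[-1]:.3e}")
```

For small noise levels the error first decreases as α shrinks (the regularization bias fades) before noise amplification of size δ/α takes over, in line with the limit above.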

We now recall the projection method for solving the regularization equation (11.6). Suppose that $X_n$, $n \in \mathbb{N}_0$, is a sequence of finite-dimensional subspaces of X satisfying

$$\overline{\bigcup_{n \in \mathbb{N}_0} X_n} = X.$$

For each $n \in \mathbb{N}_0$, we let $P_n : X \to X_n$ denote the linear orthogonal projection; thus $P_n$ converges pointwise to the identity operator and $\|P_n\| = 1$. Set

$$K_n = P_nKP_n, \qquad f^\delta_n = P_nf^\delta.$$

The projection method is to find $u^\delta_{\alpha,n} \in X_n$ such that

$$(\alpha I + K_n)u^\delta_{\alpha,n} = f^\delta_n. \qquad (11.8)$$

Since $K_n$ is positive semi-definite,

$$\|(\alpha I + K_n)^{-1}\| \le \alpha^{-1}, \qquad (11.9)$$

and thus equation (11.8) has a unique solution.



To apply the MAM to equation (11.8), as we have done earlier, we assume that the subspaces $X_n$, $n \in \mathbb{N}_0$, are nested and let $W_{n+1}$ be the orthogonal complement of $X_n$ in $X_{n+1}$. For a fixed $k \in \mathbb{N}$ and any $m \in \mathbb{N}_0$, we have the decomposition

$$X_{k+m} = X_k \oplus^{\perp} W_{k+1} \oplus^{\perp} \cdots \oplus^{\perp} W_{k+m}. \qquad (11.10)$$

For $g_0 \in X_k$ and $g_i \in W_{k+i}$, $i = 1, 2, \ldots, m$, we identify the vector $[g_0, g_1, \ldots, g_m]^T$ in $X_k \times W_{k+1} \times \cdots \times W_{k+m}$ with the element $g_0 + g_1 + \cdots + g_m$ in $X_k \oplus^{\perp} W_{k+1} \oplus^{\perp} \cdots \oplus^{\perp} W_{k+m}$. We write the solution $u^\delta_{\alpha,k+m} \in X_{k+m}$ of equation (11.8) with $n = k + m$ as

$$u^\delta_{\alpha,k+m} = (u^\delta_{\alpha,k})_0 + \sum_{j=1}^{m}(u^\delta_{\alpha,k})_j = [(u^\delta_{\alpha,k})_0, (u^\delta_{\alpha,k})_1, \ldots, (u^\delta_{\alpha,k})_m]^T,$$

where $(u^\delta_{\alpha,k})_0 \in X_k$ and $(u^\delta_{\alpha,k})_j \in W_{k+j}$ for $j = 1, 2, \ldots, m$.

We next re-express the operator in equation (11.8) with $n = k + m$. Defining

$$Q_{n+1} = P_{n+1} - P_n, \quad n \in \mathbb{N}_0,$$

the function $f^\delta_{k+m}$ is identified as

$$f^\delta_{k+m} = [P_kf^\delta, Q_{k+1}f^\delta, \ldots, Q_{k+m}f^\delta]^T,$$

and the operator $K_{k+m}$ is identified as a matrix of operators

$$K_{k,m} = \begin{bmatrix} P_kKP_k & P_kKQ_{k+1} & \cdots & P_kKQ_{k+m} \\ Q_{k+1}KP_k & Q_{k+1}KQ_{k+1} & \cdots & Q_{k+1}KQ_{k+m} \\ \vdots & \vdots & & \vdots \\ Q_{k+m}KP_k & Q_{k+m}KQ_{k+1} & \cdots & Q_{k+m}KQ_{k+m} \end{bmatrix}.$$

We then split the operator $K_{k,m}$ into the sum of two operators

$$K_{k,m} = K^L_{k,m} + K^H_{k,m},$$

where

$$K^L_{k,m} = P_kKP_{k+m} \quad\text{and}\quad K^H_{k,m} = (P_{k+m} - P_k)KP_{k+m},$$

which correspond to the lower and higher resolutions of the operator $K_{k+m}$, respectively. In the matrix notation we have that

$$K^L_{k,m} = \begin{bmatrix} P_kKP_k & P_kKQ_{k+1} & \cdots & P_kKQ_{k+m} \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix}$$



and

$$K^H_{k,m} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ Q_{k+1}KP_k & Q_{k+1}KQ_{k+1} & \cdots & Q_{k+1}KQ_{k+m} \\ \vdots & \vdots & & \vdots \\ Q_{k+m}KP_k & Q_{k+m}KQ_{k+1} & \cdots & Q_{k+m}KQ_{k+m} \end{bmatrix}.$$

For a given parameter $\alpha > 0$, we set

$$B_{k,m}(\alpha) = I + \alpha^{-1}K^L_{k,m} \quad\text{and}\quad C_{k,m}(\alpha) = \alpha^{-1}K^H_{k,m}. \qquad (11.11)$$

Thus we have the decomposition

$$I + \alpha^{-1}K_{k,m} = B_{k,m}(\alpha) + C_{k,m}(\alpha), \quad m \in \mathbb{N}.$$

The MAM for solving (11.8) can be described as follows.

Algorithm 11.2 (Multilevel augmentation algorithm)

Step 1 For a fixed $k > 0$, solve (11.8) with $n = k$ exactly.
Step 2 Set $u^\delta_{\alpha,k,0} = u^\delta_{\alpha,k}$ and compute the matrices $B_{k,0}(\alpha)$ and $C_{k,0}(\alpha)$.
Step 3 For $m \in \mathbb{N}$, suppose that $u^\delta_{\alpha,k,m-1} \in X_{k+m-1}$ has been obtained, and do the following:
• Augment the matrices $B_{k,m-1}(\alpha)$ and $C_{k,m-1}(\alpha)$ to form $B_{k,m}(\alpha)$ and $C_{k,m}(\alpha)$, respectively.
• Augment $u^\delta_{\alpha,k,m-1}$ to form

$$\tilde{u}^\delta_{\alpha,k,m} = \begin{bmatrix} u^\delta_{\alpha,k,m-1} \\ 0 \end{bmatrix} \in X_{k+m}.$$

• Solve for $u^\delta_{\alpha,k,m} = [(u^\delta_{\alpha,k,m})_0, (u^\delta_{\alpha,k,m})_1, \ldots, (u^\delta_{\alpha,k,m})_m]^T$, with $(u^\delta_{\alpha,k,m})_0 \in X_k$ and $(u^\delta_{\alpha,k,m})_j \in W_{k+j}$, $j = 1, 2, \ldots, m$, from the equation

$$B_{k,m}(\alpha)u^\delta_{\alpha,k,m} = \alpha^{-1}f^\delta_{k+m} - C_{k,m}(\alpha)\tilde{u}^\delta_{\alpha,k,m}. \qquad (11.12)$$

The augmentation method begins with an initial approximate solution $u^\delta_{\alpha,k}$ and updates it from one level to another. Specifically, after the initial approximation $u^\delta_{\alpha,k}$ is obtained, for $m = 1, 2, \ldots$ we compute

$$(u^\delta_{\alpha,k,m})_j = \alpha^{-1}Q_{k+j}f^\delta - \alpha^{-1}Q_{k+j}Ku^\delta_{\alpha,k,m-1}, \quad j = 1, 2, \ldots, m,$$

solve

$$(I + \alpha^{-1}P_kK)(u^\delta_{\alpha,k,m})_0 = \alpha^{-1}P_kf^\delta - \alpha^{-1}P_kK\Bigg(\sum_{j=1}^m (u^\delta_{\alpha,k,m})_j\Bigg)$$



and obtain the approximate solution

$$u^\delta_{\alpha,k,m} = [(u^\delta_{\alpha,k,m})_0, (u^\delta_{\alpha,k,m})_1, \ldots, (u^\delta_{\alpha,k,m})_m]^T.$$

Note that in this algorithm we only need to find the inverse of $I + \alpha^{-1}P_kK$ at level k. This is the key point which leads to fast computation for the proposed method.
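The reason the splitting pays off is structural: $B_{k,m}(\alpha)$ differs from the identity only in its first block row, so applying its inverse costs one coarse-level solve plus a matrix–vector product. A finite-dimensional sketch (my own toy construction, with $P_n$ realized as coordinate truncation in a multiscale basis; all names are hypothetical):

```python
# Toy sketch of the splitting (11.11): identify X_{k+m} with R^n, P_k with
# truncation to the first n_c coordinates, and K with a symmetric PSD matrix.
import numpy as np

rng = np.random.default_rng(2)
n_c, n = 8, 32                      # dim X_k and dim X_{k+m}
A = rng.standard_normal((n, n))
K = A @ A.T / n                     # stand-in for the kernel matrix
alpha = 1.0

KL = np.zeros_like(K); KL[:n_c, :] = K[:n_c, :]    # K^L = P_k K P_{k+m}
B = np.eye(n) + KL / alpha                         # B_{k,m}(alpha)
C = (K - KL) / alpha                               # C_{k,m}(alpha) = alpha^{-1} K^H

# B is the identity outside its first block row:
assert np.allclose(B[n_c:, :n_c], 0.0)
assert np.allclose(B[n_c:, n_c:], np.eye(n - n_c))

# Hence B^{-1} b needs only the coarse inverse (I + alpha^{-1} P_k K P_k)^{-1}:
b = rng.standard_normal(n)
x = b.copy()                                       # wavelet block: x_W = b_W
rhs = b[:n_c] - (K[:n_c, n_c:] / alpha) @ b[n_c:]
x[:n_c] = np.linalg.solve(np.eye(n_c) + K[:n_c, :n_c] / alpha, rhs)
assert np.allclose(B @ x, b)
print("applied B^{-1} using only a coarse", n_c, "x", n_c, "solve")
```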

The parameter α in general needs to be chosen according to the level. There are two types of choice for the parameter: a priori and a posteriori. In the next two subsections we consider these choices.

11.2.2 A priori error analysis

The goal of this subsection is to propose a choice of a priori regularization parameters and to estimate the convergence order of the MAM with this choice. To this end, we define the fractional power operator (cf. [220]). Let $\lfloor\nu\rfloor$ denote the greatest integer not larger than ν. For $0 < \nu < 1$, we define the fractional power $K^\nu$ by

$$K^\nu = \frac{\sin \pi\nu}{\pi}\int_0^{+\infty} t^{\nu-1}(tI + K)^{-1}K\,dt, \qquad (11.13)$$

and for $\nu > 1$,

$$K^\nu = K^{\nu - \lfloor\nu\rfloor}K^{\lfloor\nu\rfloor}.$$

Since the operators K and $(\alpha I + K)^{-1}$ commute, by the definition of the power operator we conclude that the operators $K^\nu$ and $(\alpha I + K)^{-1}$ commute as well, that is,

$$K^\nu(\alpha I + K)^{-1} = (\alpha I + K)^{-1}K^\nu. \qquad (11.14)$$
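The spectral content of (11.13) can be seen on a single eigenvalue: for $\lambda > 0$ the integral reproduces $\lambda^\nu$. A quick numerical check of this classical identity (my own sketch, using SciPy quadrature; not taken from the book):

```python
# Sketch: for lambda > 0 and 0 < nu < 1,
#   (sin(pi*nu)/pi) * int_0^inf t^(nu-1) * lambda/(t+lambda) dt = lambda^nu,
# which is (11.13) applied to one eigenvalue of K.
import math
from scipy.integrate import quad

def frac_power(lam, nu):
    f = lambda t: t**(nu - 1) * lam / (t + lam)
    # split at 1 to isolate the integrable singularity at t = 0
    val = quad(f, 0.0, 1.0)[0] + quad(f, 1.0, math.inf)[0]
    return math.sin(math.pi * nu) / math.pi * val

for lam in (0.5, 2.0):
    for nu in (0.3, 0.7):
        assert abs(frac_power(lam, nu) - lam**nu) < 1e-6
print("integral representation matches lambda**nu")
```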

We impose the following hypothesis on $u^*$:

(H-1) For some $\nu \in (0, 1]$, $u^* \in R(K^\nu)$; that is, there is $\omega \in X$ such that $u^* = K^\nu\omega$.

Let $u_\alpha \in X$ denote the solution of the equation

$$(\alpha I + K)u_\alpha = f, \qquad (11.15)$$

and we need to compare $u_\alpha$ with $u^*$. It is well known that if hypothesis (H-1) is satisfied, then the estimate

$$\|u_\alpha - u^*\| \le c_\nu\|\omega\|\alpha^\nu \qquad (11.16)$$



holds, where

$$c_\nu = \begin{cases} \dfrac{\sin \nu\pi}{\nu(1-\nu)\pi}, & 0 < \nu < 1, \\[6pt] 1, & \nu = 1, \end{cases}$$

and

$$\|u^\delta_\alpha - u_\alpha\| \le \delta\alpha^{-1}. \qquad (11.17)$$

Hence from these estimates we have that

$$\|u^\delta_\alpha - u^*\| \le c_\nu\|\omega\|\alpha^\nu + \delta\alpha^{-1}. \qquad (11.18)$$
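The two terms of this bound move in opposite directions as α varies, and elementary calculus locates the minimizer of the bound. A short numerical illustration (my own sketch; the constant c stands for $c_\nu\|\omega\|$, and the values are illustrative):

```python
# Sketch: minimize the a priori bound c*alpha^nu + delta/alpha from (11.18).
# Setting the derivative to zero gives alpha* = (delta/(nu*c))^(1/(1+nu)),
# and substituting back yields the familiar O(delta^(nu/(1+nu))) rate.
import numpy as np

def bound(alpha, c, nu, delta):
    return c * alpha**nu + delta / alpha

c, nu = 1.0, 0.5
for delta in (1e-2, 1e-4, 1e-6):
    alpha_opt = (delta / (nu * c)) ** (1.0 / (1.0 + nu))
    grid = np.logspace(-8, 0, 2000)
    # the closed-form minimizer is at least as good as a fine grid search
    assert bound(alpha_opt, c, nu, delta) <= bound(grid, c, nu, delta).min() * (1 + 1e-9)
    print(f"delta={delta:.0e}  alpha*={alpha_opt:.2e}  "
          f"bound={bound(alpha_opt, c, nu, delta):.2e}")
```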

As in [220], we assume that the following hypothesis holds:

(H-2) There exists a sequence $\theta_n$, $n \in \mathbb{N}_0$, satisfying

$$\sigma_0 \le \frac{\theta_{n+1}}{\theta_n} \le 1 \quad\text{and}\quad \lim_{n\to+\infty}\theta_n = 0 \qquad (11.19)$$

for some constant $\sigma_0 \in (0, 1)$, such that when $n \ge N_0$,

$$\|(I - P_n)K^\nu\| \le a_\nu\theta_n^\nu, \quad 0 < \nu \le 2, \qquad (11.20)$$

and

$$\|K(I - P_n)\| \le a_1\theta_n, \qquad (11.21)$$

where $N_0$ is a positive integer and $a_\nu$, $0 < \nu \le 2$, are constants depending only on ν.

We next present a bound on the difference $u_\alpha - u^\delta_{\alpha,n}$.

Proposition 11.3 Let $u_\alpha$ and $u^\delta_{\alpha,n}$ denote the solutions of equations (11.15) and (11.8), respectively. If hypotheses (H-1) and (H-2) hold, then for $n \ge N_0$,

$$\|u_\alpha - u^\delta_{\alpha,n}\| \le \frac{\delta}{\alpha} + \frac{2a_{1+\nu} + a_1a_\nu}{\alpha}\|\omega\|\theta_n^{1+\nu}.$$

Proof It follows from equations (11.15) and (11.8) that

$$u_\alpha - u^\delta_{\alpha,n} = (\alpha I + K_n)^{-1}(f_n - f^\delta_n) + d_n,$$

where

$$f_n = P_nf \quad\text{and}\quad d_n = (\alpha I + K)^{-1}f - (\alpha I + K_n)^{-1}f_n.$$

By using condition (11.5), estimate (11.9) and the fact that $\|P_n\| = 1$, we have that

$$\|(\alpha I + K_n)^{-1}(f_n - f^\delta_n)\| \le \frac{\delta}{\alpha}.$$



Introducing the operator

$$E_n = (\alpha I + K)^{-1} - (\alpha I + K_n)^{-1},$$

we write $d_n$ as two terms:

$$d_n = (\alpha I + K_n)^{-1}(I - P_n)f + E_nf.$$

We next estimate the two terms in the last formula for $d_n$ separately. By hypothesis (H-1) and the definition of the power operator, we find that

$$f = Ku^* = K^{1+\nu}\omega. \qquad (11.22)$$

By using (11.22) and hypothesis (H-2), we have the estimate for the first term that

$$\|(\alpha I + K_n)^{-1}(I - P_n)f\| \le \frac{a_{1+\nu}}{\alpha}\|\omega\|\theta_n^{1+\nu}.$$

It remains to estimate the term $E_nf$. Again using (11.22), we conclude that

$$E_nf = e_1 + e_2,$$

where

$$e_1 = (\alpha I + K_n)^{-1}P_nK(P_n - I)(\alpha I + K)^{-1}K^{1+\nu}\omega$$

and

$$e_2 = (\alpha I + K_n)^{-1}(P_n - I)K(\alpha I + K)^{-1}K^{1+\nu}\omega.$$

By using the commutativity of $K^\nu$ and $(\alpha I + K)^{-1}$ and the identity

$$P_n - I = (P_n - I)(P_n - I),$$

we obtain that

$$e_1 = (\alpha I + K_n)^{-1}P_nK(P_n - I)(P_n - I)K^\nu(\alpha I + K)^{-1}K\omega.$$

Moreover, since K is positive semi-definite, for any $\alpha > 0$,

$$\|(\alpha I + K)^{-1}K\| \le 1. \qquad (11.23)$$

It follows from (11.9), (11.20), (11.21) and (11.23) that

$$\|e_1\| \le \alpha^{-1}a_1a_\nu\theta_n^{1+\nu}\|\omega\|.$$

Likewise, we have that

$$e_2 = (\alpha I + K_n)^{-1}(P_n - I)K^{1+\nu}(\alpha I + K)^{-1}K\omega.$$

By (11.9), (11.20) and (11.23), we see that

$$\|e_2\| \le \alpha^{-1}a_{1+\nu}\theta_n^{1+\nu}\|\omega\|.$$



Consequently, we have that

$$\|E_nf\| \le \|e_1\| + \|e_2\| \le \alpha^{-1}(a_1a_\nu + a_{1+\nu})\theta_n^{1+\nu}\|\omega\|.$$

This proves the result of the proposition.
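Inequality (11.23), used twice in the proof above, is easy to confirm numerically for a random positive semi-definite matrix (a toy sketch of mine, independent of the book):

```python
# Sketch: for symmetric PSD K, the eigenvalues of (alpha*I + K)^{-1} K are
# lambda/(alpha + lambda) < 1, so ||(alpha*I + K)^{-1} K|| <= 1 as in (11.23).
import numpy as np

rng = np.random.default_rng(3)
A = rng.standard_normal((50, 50))
K = A @ A.T                        # symmetric positive semi-definite
for alpha in (0.1, 1.0, 10.0):
    M = np.linalg.solve(alpha * np.eye(50) + K, K)   # (alpha*I + K)^{-1} K
    assert np.linalg.norm(M, 2) <= 1.0 + 1e-10
print("||(alpha I + K)^{-1} K|| <= 1 confirmed")
```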

We next estimate the distance of $u^\delta_\alpha$ from the subspace $X_n$. To do this, for $n \in \mathbb{N}_0$ we let

$$E^\delta_{\alpha,n} = \|(I - P_n)u^\delta_\alpha\|. \qquad (11.24)$$

We also set

$$\gamma^\delta_{\alpha,n} = \frac{\delta}{\alpha} + \frac{a_{1+\nu}}{\alpha}\|\omega\|\theta_n^{1+\nu}. \qquad (11.25)$$

We remark that if the sequence $\theta_n$ satisfies condition (11.19), then we have that

$$\frac{\gamma^\delta_{\alpha,n+1}}{\gamma^\delta_{\alpha,n}} \ge \sigma = \sigma_0^{1+\nu}. \qquad (11.26)$$

Proposition 11.4 If hypotheses (H-1) and (H-2) hold, then for $n \ge N_0$,

$$E^\delta_{\alpha,n} \le \gamma^\delta_{\alpha,n}. \qquad (11.27)$$

Proof By (11.4), (11.6) and (H-1), we have that

$$u^\delta_\alpha = (\alpha I + K)^{-1}(f^\delta - f) + (\alpha I + K)^{-1}K^{1+\nu}\omega. \qquad (11.28)$$

Since $P_n$ is an orthogonal projection, we have that $\|I - P_n\| \le 1$. Thus from (11.5), (11.28), (11.7) and (H-2), we conclude that

$$E^\delta_{\alpha,n} = \|(I - P_n)u^\delta_\alpha\| \le \frac{\delta}{\alpha} + \frac{a_{1+\nu}}{\alpha}\|\omega\|\theta_n^{1+\nu},$$

establishing the estimate.

The next proposition shows that $u^\delta_{\alpha,k,m}$ approximates $u^\delta_{\alpha,k+m}$ at a convergence rate comparable with $\gamma^\delta_{\alpha,k+m}$. The proof of this result follows the same idea as the proof of Theorem 2.2 in [71]. To make this chapter self-contained, we provide details of the proof for convenient reference.

Proposition 11.5 Let $u^\delta_{\alpha,k,m}$ and $u^\delta_{\alpha,k+m}$ be the solution of the MAM (11.12) and the solution of the projection method (11.8) with $n = k + m$, respectively. Suppose that hypotheses (H-1) and (H-2) hold. Then there exists a positive integer $N \ge N_0$ such that when $k \ge N$, $m \in \mathbb{N}_0$ and $\alpha$ satisfies the condition

\[
\alpha \ge \frac{a_1}{\rho}\,\theta_k, \tag{11.29}
\]


428 Multiscale methods for ill-posed integral equations

where

\[
\rho = \frac{1}{2}\left(-1 + \sqrt{1 + \frac{2\sigma}{1 + \sigma}}\right), \tag{11.30}
\]

the following estimate holds:

\[
\|u^\delta_{\alpha,k,m} - u^\delta_{\alpha,k+m}\| \le \gamma^\delta_{\alpha,k+m}.
\]

Proof From (11.8) and (11.12) we have that

\[
B_{k,m}(\alpha)\bigl(u^\delta_{\alpha,k,m} - u^\delta_{\alpha,k+m}\bigr) = C_{k,m}(\alpha)\bigl(u^\delta_{\alpha,k+m} - u^\delta_{\alpha,k,m-1}\bigr). \tag{11.31}
\]

Noting that $\|(I + \alpha^{-1}K_{k+m})x\| \ge \|x\|$, it holds that

\[
\|B_{k,m}(\alpha)x\| = \bigl\|\bigl((I + \alpha^{-1}K_{k+m}) - C_{k,m}(\alpha)\bigr)x\bigr\| \ge \bigl(1 - \|C_{k,m}(\alpha)\|\bigr)\|x\|. \tag{11.32}
\]

By the definition of $C_{k,m}(\alpha)$ we have that

\[
\|C_{k,m}(\alpha)\| = \alpha^{-1}\|(P_{k+m} - P_k)K P_{k+m}\| \le \alpha^{-1}\|(P_{k+m} - P_k)K\|.
\]

Thus, when $k \ge N$, $m \in \mathbb{N}_0$ and hypothesis (11.29) is satisfied, we have that

\[
\|C_{k,m}(\alpha)\| \le \frac{2a_1\theta_k}{\alpha} \le 2\rho.
\]

This with (11.30) ensures that

\[
\|C_{k,m}(\alpha)\| \le \frac{1}{(\rho + 1)(1 + 1/\sigma)}, \tag{11.33}
\]

which also implies $\|C_{k,m}(\alpha)\| < 1$. Therefore, by inequality (11.32), we conclude that when $k \ge N$, $m \in \mathbb{N}_0$ and hypothesis (11.29) is satisfied,

\[
\|B^{-1}_{k,m}(\alpha)\| \le \frac{1}{1 - \|C_{k,m}(\alpha)\|}.
\]

From this and (11.31) we obtain

\[
\|u^\delta_{\alpha,k,m} - u^\delta_{\alpha,k+m}\| \le \frac{\|C_{k,m}(\alpha)\|}{1 - \|C_{k,m}(\alpha)\|}\,\|u^\delta_{\alpha,k+m} - u^\delta_{\alpha,k,m-1}\|. \tag{11.34}
\]

Moreover, as in the proof of Theorem 9.2 we have that

\[
\|u^\delta_\alpha - u^\delta_{\alpha,k+m}\| \le \frac{a_1\theta_{k+m}}{\alpha}\,E^\delta_{\alpha,k+m} \le \frac{a_1\theta_k}{\alpha}\,E^\delta_{\alpha,k+m}.
\]

By condition (11.29) we find that

\[
\|u^\delta_\alpha - u^\delta_{\alpha,k+m}\| \le \rho\,E^\delta_{\alpha,k+m}. \tag{11.35}
\]


We next prove by induction on $m$ that when $k \ge N$, $m \in \mathbb{N}_0$ and hypothesis (11.29) is satisfied, we have the estimate

\[
\|u^\delta_{\alpha,k,m} - u^\delta_{\alpha,k+m}\| \le \gamma^\delta_{\alpha,k+m}. \tag{11.36}
\]

When $m = 0$, since $u^\delta_{\alpha,k,0} = u^\delta_{\alpha,k}$, estimate (11.36) holds in this case. Suppose that the claim holds for $m = r - 1$. To prove (11.36) with $m = r$, we use the definition of $u^\delta_{\alpha,k,r}$, estimates (11.35), (11.26), (11.27) and the induction hypothesis to obtain

\[
\begin{aligned}
\|u^\delta_{\alpha,k+r} - u^\delta_{\alpha,k,r-1}\|
&\le \|u^\delta_{\alpha,k+r} - u^\delta_\alpha\| + \|u^\delta_\alpha - u^\delta_{\alpha,k+r-1}\| + \|u^\delta_{\alpha,k+r-1} - u^\delta_{\alpha,k,r-1}\|\\
&\le \rho\,\gamma^\delta_{\alpha,k+r} + (\rho + 1)\,\gamma^\delta_{\alpha,k+r-1}\\
&\le \left(\rho + (\rho + 1)\frac{1}{\sigma}\right)\gamma^\delta_{\alpha,k+r}.
\end{aligned}
\]

Substituting this estimate into the right-hand side of (11.34) with $m = r$ yields

\[
\|u^\delta_{\alpha,k,r} - u^\delta_{\alpha,k+r}\| \le \frac{\|C_{k,r}(\alpha)\|}{1 - \|C_{k,r}(\alpha)\|}\left(\rho + (\rho + 1)\frac{1}{\sigma}\right)\gamma^\delta_{\alpha,k+r}.
\]

It follows from the estimate (11.33) that for $k \ge N$ and any non-negative integer $r$,

\[
\frac{\|C_{k,r}(\alpha)\|}{1 - \|C_{k,r}(\alpha)\|}\left(\rho + (\rho + 1)\frac{1}{\sigma}\right) \le 1.
\]

Therefore, for $k \ge N$, estimate (11.36) holds for $m = r$. The proof is complete.

In the next theorem we present the a priori error estimate for the approximate solution $u^\delta_{\alpha,k,m}$ obtained by the MAM.

Theorem 11.6 Let $u^*$ denote the minimum norm solution of equation (11.3) and $u^\delta_{\alpha,k,m}$ denote the solution of the MAM (11.12). If hypotheses (H-1) and (H-2) hold, then there exists a positive integer $N \ge N_0$ such that for all $k \ge N$, $m \in \mathbb{N}_0$ and $\alpha$ satisfying the condition (11.29), the following estimate holds:

\[
\|u^* - u^\delta_{\alpha,k,m}\| \le c_\nu\|\omega\|\,\alpha^\nu + \frac{2\delta}{\alpha} + (3a_{1+\nu} + a_1a_\nu)\|\omega\|\,\frac{\theta_{k+m}^{1+\nu}}{\alpha}. \tag{11.37}
\]

Proof Note that

\[
u^* - u^\delta_{\alpha,k,m} = (u^* - u_\alpha) + (u_\alpha - u^\delta_{\alpha,k+m}) + (u^\delta_{\alpha,k+m} - u^\delta_{\alpha,k,m}).
\]

The estimate (11.37) follows directly from the estimate (11.16) and Propositions 11.3, 11.4 and 11.5.


Motivated by Proposition 11.5, we now propose a choice of the regularization parameter $\alpha$ in the multilevel augmentation algorithm. For given $\delta > 0$, we choose a positive integer $k > N$ so that

\[
c_-\,\delta^{\frac{1}{\nu+1}} \le c_0\theta_k \le c_+\,\delta^{\frac{1}{\nu+1}} \tag{11.38}
\]

for some positive constants $c_-$, $c_+$ and $c_0 \ge a_1/\rho$, and choose

\[
\alpha = c_0\theta_k. \tag{11.39}
\]

This choice of $\alpha$ is called the a priori parameter, since (11.38) uses the information on $\nu$ which appears in the a priori assumption (H-1). In the next theorem we present the a priori error estimate of the MAM.
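The a priori rule just described can be sketched numerically. The fragment below assumes, purely for illustration, a geometric mesh sequence $\theta_k = \sigma_0^k$; the function name and default constants are hypothetical, not part of the text.

```python
def a_priori_alpha(delta, nu, c0, sigma0, c_minus=0.5, c_plus=2.0):
    """Sketch of the a priori choice (11.38)-(11.39), assuming the
    illustrative model theta_k = sigma0**k.  Picks the smallest level k
    with c0*theta_k <= c_plus*delta**(1/(nu+1)), checks the lower bound
    of the bracket (11.38), and returns (k, alpha) with alpha = c0*theta_k."""
    target = delta ** (1.0 / (nu + 1.0))
    k = 0
    while c0 * sigma0 ** k > c_plus * target:
        k += 1                      # refine until the bracket is entered
    theta_k = sigma0 ** k
    assert c_minus * target <= c0 * theta_k, "bracket (11.38) not satisfied"
    return k, c0 * theta_k
```

For example, with $\delta = 10^{-4}$, $\nu = 1$, $c_0 = 1$ and $\sigma_0 = 1/2$, the rule selects $k = 6$ and $\alpha = 2^{-6}$.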

Theorem 11.7 Let $u^*$ be the minimum norm solution of equation (11.3) and $u^\delta_{\alpha,k,m}$ be the solution of the MAM (11.12) with the choice of $\alpha$ satisfying (11.39). If hypotheses (H-1) and (H-2) hold, then there exists a positive integer $N \ge N_0$ such that when $k \ge N$, $m \in \mathbb{N}_0$ and (11.38) is satisfied, the following estimate holds:

\[
\|u^* - u^\delta_{\alpha,k,m}\| \le \left(c_\nu c_+^\nu\|\omega\| + \frac{2}{c_-}\right)\delta^{\frac{\nu}{\nu+1}} + (3a_{\nu+1} + a_1a_\nu)\|\omega\|\,\frac{\rho}{a_1}\,\theta_{k+m}^\nu. \tag{11.40}
\]

Proof The estimate (11.40) is obtained by substituting the choice of the regularization parameter $\alpha$ into the right-hand side of estimate (11.37).

The last theorem shows that the MAM improves the approximation error from $O(\theta_k^\nu)$ to $O(\theta_{k+m}^\nu)$.
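For orientation, the Lavrentiev-regularized solution $(\alpha I + K)^{-1}f$ that the MAM approximates acts coefficient-wise in an eigenbasis of $K$. The sketch below uses a diagonalized, finite-dimensional $K$ as an illustrative assumption; it is not the multiscale method itself.

```python
def lavrentiev(eigs, f_coeffs, alpha):
    """Spectral sketch of Lavrentiev regularization for a positive
    semi-definite operator K: in an eigenbasis of K with eigenvalues lam_i,
    (alpha*I + K)^{-1} f has coefficients f_i / (alpha + lam_i)."""
    return [fi / (alpha + lam) for lam, fi in zip(eigs, f_coeffs)]
```

Small eigenvalues of $K$ are damped by $\alpha$, which is the stabilizing effect that the regularization parameter controls.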

11.2.3 A posteriori choice strategy

In this subsection we develop a strategy for choosing an a posteriori regularization parameter which ensures the optimal convergence of the approximate solution obtained by the MAM with this parameter.

To introduce an a posteriori regularization parameter, we consider an auxiliary operator equation. For fixed $k, m \in \mathbb{N}$ we consider the equation

\[
(\alpha I + K)\tilde u^\delta_\alpha = u^\delta_{\alpha,k,m}, \tag{11.41}
\]

where $\tilde u^\delta_\alpha \in X$. That is, we consider equation (11.6) with the right-hand side $f^\delta$ replaced by $u^\delta_{\alpha,k,m}$, where $u^\delta_{\alpha,k,m}$ is the solution obtained using the MAM for equation (11.6). It is clear that equation (11.41) has a unique solution $\tilde u^\delta_\alpha$ and that the solution depends on $k$ and $m$. For simplicity of presentation we abuse the notation


without indicating the dependence. The projection method for equation (11.41) has the form

\[
(\alpha I + K_{k+i})\tilde u^\delta_{\alpha,k+i} = u^\delta_{\alpha,k,m}, \quad i = 0, 1, \dots, m. \tag{11.42}
\]

We denote by $\tilde u^\delta_{\alpha,k,i}$ the solution of the above equation obtained using the MAM described in Section 11.2.1, for an $\alpha$ to be determined.

Let

\[
\tilde E^\delta_{\alpha,n} = \|(I - P_n)\tilde u^\delta_\alpha\|
\]

and

\[
\tilde\gamma^\delta_{\alpha,n} = \frac{2\delta}{\alpha^2} + (4a_{1+\nu} + a_1a_\nu)\,\frac{\|\omega\|}{\alpha^2}\,\theta_n^{1+\nu}.
\]

In the next proposition we bound $\tilde E^\delta_{\alpha,n}$ by $\tilde\gamma^\delta_{\alpha,n}$.

Proposition 11.8 If hypotheses (H-1) and (H-2) hold, there exists a positive integer $N \ge N_0$ such that when $k \ge N$, $m \in \mathbb{N}_0$ and $\alpha$ satisfies the condition (11.29), the following estimate holds:

\[
\tilde E^\delta_{\alpha,k+m} \le \tilde\gamma^\delta_{\alpha,k+m},
\]

and for $i = 0, 1, \dots, m$,

\[
\|\tilde u^\delta_{\alpha,k,i} - \tilde u^\delta_{\alpha,k+i}\| \le \tilde\gamma^\delta_{\alpha,k+i}.
\]

Proof It follows from (11.41) that

\[
\tilde u^\delta_\alpha = (\alpha I + K)^{-1}u^\delta_{\alpha,k,m}.
\]

Using equation (11.15) and hypothesis (H-1) we have that

\[
u_\alpha = (\alpha I + K)^{-1}K^{1+\nu}\omega.
\]

Therefore

\[
\tilde u^\delta_\alpha = (\alpha I + K)^{-1}(u^\delta_{\alpha,k,m} - u_\alpha) + (\alpha I + K)^{-2}K^{1+\nu}\omega.
\]

Hence, by the fact that $\|I - P_n\| \le 1$, hypothesis (H-2) and the estimates obtained in the last section, we have that

\[
\tilde E^\delta_{\alpha,k+m} = \|\tilde u^\delta_\alpha - P_{k+m}\tilde u^\delta_\alpha\| \le \frac{2\delta}{\alpha^2} + (3a_{1+\nu} + a_1a_\nu)\,\frac{\|\omega\|}{\alpha^2}\,\theta_{k+m}^{1+\nu} + a_{1+\nu}\,\frac{\|\omega\|}{\alpha^2}\,\theta_{k+m}^{1+\nu},
\]

which leads to the first estimate of this proposition. The second estimate follows similarly from the proof of Proposition 11.5.


We define the quantities

\[
\Delta_\alpha = \alpha^2\|(\alpha I + K)^{-2}f\|, \qquad \Delta^\delta_{\alpha,k,m} = \alpha^2\|\tilde u^\delta_{\alpha,k,m}\|
\]

and

\[
D(\delta, \theta_{k+m}) = 4\delta + (8a_{1+\nu} + 3a_1a_\nu)\|\omega\|\,\theta_{k+m}^{1+\nu}.
\]

In the next result we estimate the difference between $\Delta_\alpha$ and $\Delta^\delta_{\alpha,k,m}$ in terms of $D(\delta, \theta_{k+m})$.

Proposition 11.9 If hypotheses (H-1) and (H-2) hold, there exists a positive integer $N \ge N_0$ such that when $k \ge N$, $m \in \mathbb{N}_0$ and $\alpha$ satisfies the condition (11.29), it holds that

\[
|\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| \le D(\delta, \theta_{k+m}) \tag{11.43}
\]

and

\[
\Delta_\alpha \le c_\nu^2\|\omega\|\,\alpha^{1+\nu}. \tag{11.44}
\]

Moreover, if

\[
\|f^\delta\| > D(\delta, \theta_{k+m}) + (b + 1)\delta \tag{11.45}
\]

for some constant $b > 0$, then

\[
\liminf_{\alpha\to+\infty} \Delta^\delta_{\alpha,k,m} > b\delta. \tag{11.46}
\]

Proof It follows from (11.42) and (11.15) that

\[
\begin{aligned}
\tilde u^\delta_{\alpha,k,m} - (\alpha I + K)^{-2}f
&= (\tilde u^\delta_{\alpha,k,m} - \tilde u^\delta_{\alpha,k+m}) + (\alpha I + K_{k+m})^{-1}(u^\delta_{\alpha,k,m} - u^\delta_{\alpha,k+m})\\
&\quad + (\alpha I + K_{k+m})^{-1}(u^\delta_{\alpha,k+m} - u_\alpha)\\
&\quad + \bigl[(\alpha I + K_{k+m})^{-1} - (\alpha I + K)^{-1}\bigr](\alpha I + K)^{-1}f.
\end{aligned}
\]

Thus, by Propositions 11.8 and 11.5,

\[
|\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| \le \alpha^2\tilde\gamma^\delta_{\alpha,k+m} + \alpha\,\gamma^\delta_{\alpha,k+m} + \alpha\|u^\delta_{\alpha,k+m} - u_\alpha\| + \alpha^2\bigl\|\bigl[(\alpha I + K_{k+m})^{-1} - (\alpha I + K)^{-1}\bigr](\alpha I + K)^{-1}f\bigr\|.
\]

Similar to the proof of Proposition 11.3, we find that the last term satisfies

\[
\alpha^2\bigl\|\bigl[(\alpha I + K_{k+m})^{-1} - (\alpha I + K)^{-1}\bigr](\alpha I + K)^{-1}K^{1+\nu}\omega\bigr\| \le (a_1a_\nu + a_{1+\nu})\,\theta_{k+m}^{1+\nu}\,\|\omega\|.
\]

Combining these inequalities and using Propositions 11.8, 11.4, 11.3 and (11.17), we conclude that

\[
|\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| \le 4\delta + (8a_{1+\nu} + 3a_1a_\nu)\|\omega\|\,\theta_{k+m}^{1+\nu}, \tag{11.47}
\]


which is the estimate (11.43). Note, by hypotheses, that

\[
\Delta_\alpha = \alpha^2\bigl\|(\alpha I + K)^{-2}K^{1+\nu}\omega\bigr\| = \alpha^{1+\nu}\bigl\|[\alpha(\alpha I + K)^{-1}]^{1-\nu}[(\alpha I + K)^{-1}K]^{1+\nu}\omega\bigr\| \le c_\nu^2\|\omega\|\,\alpha^{1+\nu},
\]

where we used the inequality $\|[\alpha(\alpha I + K)^{-1}]^\nu\| \le c_\nu$ for $\nu \in (0, 1]$, which can easily be verified from the definition (11.13) of the fractional power of an operator. Thus estimate (11.44) follows.

To prove the second statement, we note that

\[
\Delta^\delta_{\alpha,k,m} \ge \Delta_\alpha - |\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| \quad\text{and}\quad \lim_{\alpha\to+\infty}\Delta_\alpha = \|f\|.
\]

When condition (11.45) holds, we have that

\[
\|f\| \ge \|f^\delta\| - \|f - f^\delta\| > D(\delta, \theta_{k+m}) + b\delta.
\]

Thus

\[
\liminf_{\alpha\to+\infty} \Delta^\delta_{\alpha,k,m} > b\delta,
\]

which completes the proof.

We remark that if $\|f^\delta\| \ge c\delta$ with $c > 5$ and if $k + m$ is sufficiently large, then condition (11.45) holds. In fact, a simple computation yields

\[
\|f^\delta\| - D(\delta, \theta_{k+m}) - \delta \ge (c - 5)\delta - (8a_{1+\nu} + 3a_1a_\nu)\|\omega\|\,\theta_{k+m}^{1+\nu},
\]

which confirms that (11.45) holds for sufficiently large $k + m$.

Proposition 11.10 Suppose that $\alpha \ge \alpha' > 0$. Let $u_\alpha$ and $u^*$ denote the solution of equation (11.15) and the minimum norm solution of equation (11.4), respectively. Then

\[
\|u_\alpha - u^*\| \le \|u_{\alpha'} - u^*\| + \frac{\Delta_\alpha}{\alpha'}.
\]

Proof Direct computation leads to

\[
\begin{aligned}
u_\alpha - u_{\alpha'} &= (\alpha I + K)^{-1}f - (\alpha' I + K)^{-1}f\\
&= -\frac{1}{\alpha'}\left(1 - \frac{\alpha'}{\alpha}\right)\alpha'\alpha^{-1}(\alpha' I + K)^{-1}(\alpha I + K)\,\alpha^2(\alpha I + K)^{-2}f
\end{aligned}
\]


and

\[
\bigl\|\alpha'\alpha^{-1}(\alpha' I + K)^{-1}(\alpha I + K)\bigr\| = \frac{\alpha'}{\alpha}\left\|I + \left(\frac{\alpha}{\alpha'} - 1\right)(I + \alpha'^{-1}K)^{-1}\right\| \le \frac{\alpha'}{\alpha}\left(1 + \frac{\alpha}{\alpha'} - 1\right) = 1.
\]

Thus we conclude that

\[
\|u_\alpha - u_{\alpha'}\| \le \frac{1}{\alpha'}\left(1 - \frac{\alpha'}{\alpha}\right)\alpha^2\|(\alpha I + K)^{-2}f\| \le \frac{\Delta_\alpha}{\alpha'}.
\]

This with the inequality

\[
\|u_\alpha - u^*\| \le \|u_{\alpha'} - u^*\| + \|u_\alpha - u_{\alpha'}\|
\]

yields the estimate of the proposition.

Let $d > 1$ and $\tau = a_1/\rho$ be fixed. We choose a positive number $\alpha_0$ satisfying the condition

\[
\tau\theta_k \le \alpha_0 \le d\tau\theta_k \tag{11.48}
\]

and define an increasing sequence $\{\alpha_n\}$ by the recursive formula

\[
\alpha_n = d\alpha_{n-1}, \quad n = 1, 2, \dots.
\]

Clearly, the sequence $\{\alpha_n\}$ goes to infinity as $n \to \infty$. We present in the next lemma a property of this sequence.

Lemma 11.11 If condition (11.45) holds for some positive constant $b$, then there exists a non-negative integer $n_0$ such that

\[
\Delta^\delta_{\alpha_{n_0-1},k,m} \le b\delta \le \Delta^\delta_{\alpha_{n_0},k,m}, \tag{11.49}
\]

where $\Delta^\delta_{\alpha_{-1},k,m} = 0$.

Proof If $\alpha_0$ satisfies the condition

\[
\Delta^\delta_{\alpha_0,k,m} \ge b\delta, \tag{11.50}
\]

the proof is complete with $n_0 = 0$. We now assume that condition (11.50) is not satisfied. By the hypothesis of this lemma, Proposition 11.9 and the definition of the sequence $\{\alpha_n\}$, there exists a positive integer $p$ such that

\[
\Delta^\delta_{\alpha_p,k,m} \ge b\delta.
\]

Let $n_0$ be the smallest such integer $p$. Thus we obtain (11.49).

As suggested by Lemma 11.11, we present an algorithm which generates a choice of an a posteriori parameter.


Algorithm 11.12 (A posteriori choice of the regularization parameter) Let $\tau = a_1/\rho > 0$, $d > 1$ and $b > 4$ be fixed.

Step 1: For given $\delta > 0$, choose a positive integer $k > N_0$ and a constant $\alpha_0$ such that $\theta_k \le c\delta$ for some $c > 0$ and (11.48) holds.

Step 2: If $\alpha_n$ has been defined, use Algorithm 11.2 to compute $u^\delta_{\alpha_n,k,m}$ and $\Delta^\delta_{\alpha_n,k,m}$. If

\[
\Delta^\delta_{\alpha_n,k,m} < b\delta
\]

is satisfied, we set $\alpha_{n+1} = d\alpha_n$ and repeat this step. Otherwise, go to Step 3.

Step 3: Set $\alpha = \alpha_{n-1}$ and stop.

The output $\alpha$ of Algorithm 11.12 depends on $k$, $m$ and $\delta$, and satisfies the conditions

\[
\tau\theta_k \le \alpha \le d\tau\theta_k \quad\text{and}\quad b\delta \le \Delta^\delta_{\alpha,k,m} \tag{11.51}
\]

or

\[
\alpha \ge \tau\theta_k \quad\text{and}\quad \Delta^\delta_{\alpha,k,m} \le b\delta \le \Delta^\delta_{d\alpha,k,m}. \tag{11.52}
\]

The next proposition ensures that $\alpha = \alpha(k, m, \delta)$ converges to zero as $\delta \to 0$ and $k \to \infty$.

Proposition 11.13 Suppose that hypotheses (H-1) and (H-2) hold. If $\alpha = \alpha(k, m, \delta)$ is chosen according to Algorithm 11.12, then there exists a positive integer $N \ge N_0$ such that when $k \ge N$ and $m \in \mathbb{N}_0$,

\[
\lim_{\delta\to 0,\, k\to+\infty} \alpha(k, m, \delta) = 0. \tag{11.53}
\]

Proof If $\alpha \le d\tau\theta_k$, then (11.53) holds since $\lim_{k\to+\infty}\theta_k = 0$. Otherwise, inequality (11.52) must be satisfied. Thus

\[
\lim_{\delta\to 0,\, k\to+\infty} \Delta^\delta_{\alpha,k,m} = 0.
\]

Moreover, noting that $\alpha$ satisfies (11.29), it follows from (11.43) that

\[
\lim_{\delta\to 0,\, k\to+\infty} |\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| = 0.
\]

Combining the above two equations yields

\[
\lim_{\delta\to 0,\, k\to+\infty} \Delta_\alpha = 0.
\]


According to a well-known result (cf. [156, 274, 275]), we conclude (11.53) from the equation above.

In the next theorem we present an error estimate for the multilevel augmentation solution using the a posteriori parameter.

Theorem 11.14 Suppose that hypotheses (H-1) and (H-2) hold. Let $u^*$ and $u^\delta_{\alpha,k,m}$ be the minimum norm solution of equation (11.4) and the solution of the MAM (11.12), respectively, with $\alpha$ chosen according to Algorithm 11.12. If $\|f^\delta\| > c\delta$ with $c > 5$, then there exist a positive integer $N$ and constants $d_1$ and $d_2$ such that when $k \ge N$ and $m \in \mathbb{N}_0$,

\[
\|u^\delta_{\alpha,k,m} - u^*\| \le d_1\,\delta^{\frac{\nu}{1+\nu}} + d_2\,\theta_{k+m}^\nu.
\]

Proof We first note that, according to the remark after Proposition 11.9, condition (11.45) holds if $\|f^\delta\| > c\delta$ with $c > 5$ and if $k + m$ is sufficiently large. This with Lemma 11.11 ensures that the parameter $\alpha$ can be obtained by Algorithm 11.12. Noting that the parameter generated by Algorithm 11.12 satisfies the condition (11.29), it follows from Theorem 11.6 that there exists a positive integer $N$ such that when $k \ge N$ and $m \in \mathbb{N}_0$,

\[
\|u^* - u^\delta_{\alpha,k,m}\| \le \|u^* - u_\alpha\| + \frac{2\delta}{\alpha} + \frac{1}{\tau}(3a_{1+\nu} + a_1a_\nu)\|\omega\|\,\theta_{k+m}^\nu \tag{11.54}
\]

or

\[
\|u^* - u^\delta_{\alpha,k,m}\| \le c_\nu\|\omega\|\,\alpha^\nu + \frac{2\delta}{\alpha} + \frac{1}{\tau}(3a_{1+\nu} + a_1a_\nu)\|\omega\|\,\theta_{k+m}^\nu. \tag{11.55}
\]

In the case that (11.51) holds, it follows from Algorithm 11.12 that

\[
\alpha^\nu \le (d\tau\theta_k)^\nu \le (cd\tau)^\nu\delta^\nu. \tag{11.56}
\]

Using (11.43) and (11.44), we have

\[
b\delta \le \Delta^\delta_{\alpha,k,m} \le \Delta_\alpha + |\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| \le c_\nu^2\|\omega\|\,\alpha^{1+\nu} + D(\delta, \theta_{k+m}).
\]

This with (11.56) yields

\[
(b - 4)\frac{\delta}{\alpha} \le c_\nu^2\|\omega\|(cd\tau)^\nu\delta^\nu + (8a_{1+\nu} + 3a_1a_\nu)\|\omega\|\,\theta_{k+m}^\nu.
\]

Combining (11.55), (11.56) and the above inequality, we conclude the estimate of this theorem with

\[
d_1 = \left(1 + \frac{2c_\nu}{b - 4}\right)c_\nu(cd\tau)^\nu\|\omega\|
\]

and

\[
d_2 = \left[\left(\frac{16}{b - 4} + \frac{3}{\tau}\right)a_{1+\nu} + \left(\frac{6}{b - 4} + \frac{1}{\tau}\right)a_1a_\nu\right]\|\omega\|.
\]


In the case that (11.52) holds, we let $\alpha' = \delta^{\frac{1}{1+\nu}} + \theta_{k+m}$. Then we have that

\[
(\alpha')^\nu \le 2^\nu\left(\delta^{\frac{\nu}{1+\nu}} + \theta_{k+m}^\nu\right). \tag{11.57}
\]

If $\alpha \ge \alpha'$, it follows from Proposition 11.10 that

\[
\|u^* - u_\alpha\| \le \|u^* - u_{\alpha'}\| + \frac{\Delta_\alpha}{\alpha'}. \tag{11.58}
\]

Using estimates (11.16) and (11.57), we have that

\[
\|u^* - u_{\alpha'}\| \le c_\nu\|\omega\|(\alpha')^\nu \le 2^\nu c_\nu\|\omega\|\left(\delta^{\frac{\nu}{1+\nu}} + \theta_{k+m}^\nu\right). \tag{11.59}
\]

From Proposition 11.9 and (11.52) we observe that

\[
\Delta_\alpha \le |\Delta_\alpha - \Delta^\delta_{\alpha,k,m}| + \Delta^\delta_{\alpha,k,m} \le D(\delta, \theta_{k+m}) + b\delta,
\]

which leads to

\[
\frac{\Delta_\alpha}{\alpha'} \le (b + 4)\,\delta^{\frac{\nu}{1+\nu}} + (8a_{1+\nu} + 3a_1a_\nu)\|\omega\|\,\theta_{k+m}^\nu.
\]

This with (11.58) and (11.59) yields

\[
\|u^* - u_\alpha\| \le \left[2^\nu c_\nu\|\omega\| + (b + 4)\right]\delta^{\frac{\nu}{1+\nu}} + \left[2^\nu c_\nu + (8a_{1+\nu} + 3a_1a_\nu)\right]\|\omega\|\,\theta_{k+m}^\nu. \tag{11.60}
\]

It is clear that

\[
\frac{\delta}{\alpha} \le \frac{\delta}{\alpha'} \le \delta^{\frac{\nu}{1+\nu}}.
\]

Combining (11.54), (11.60) and the above inequality, we obtain the desired estimate with

\[
d_1 = 2^\nu c_\nu\|\omega\| + b + 6
\quad\text{and}\quad
d_2 = \left[2^\nu c_\nu + \left(8 + \frac{3}{\tau}\right)a_{1+\nu} + \left(3 + \frac{1}{\tau}\right)a_1a_\nu\right]\|\omega\|.
\]

If $\alpha \le \alpha'$, then using (11.57),

\[
\alpha^\nu \le 2^\nu\left(\delta^{\frac{\nu}{1+\nu}} + \theta_{k+m}^\nu\right). \tag{11.61}
\]

It follows from (11.52) and Proposition 11.9 that

\[
b\delta \le \Delta^\delta_{d\alpha,k,m} \le |\Delta^\delta_{d\alpha,k,m} - \Delta_{d\alpha}| + \Delta_{d\alpha} \le 4\delta + (8a_{1+\nu} + 3a_1a_\nu)\|\omega\|\,\theta_{k+m}^{1+\nu} + \Delta_{d\alpha}.
\]

Since $b > 4$ and $\tau\theta_k \le \alpha$, we obtain

\[
(b - 4)\frac{\delta}{\alpha} \le \frac{1}{\tau}(8a_{1+\nu} + 3a_1a_\nu)\|\omega\|\,\theta_{k+m}^\nu + \frac{\Delta_{d\alpha}}{\alpha},
\]


which with (11.44) yields

\[
\frac{\delta}{\alpha} \le \frac{1}{(b - 4)\tau}(8a_{1+\nu} + 3a_1a_\nu)\|\omega\|\,\theta_{k+m}^\nu + \frac{1}{b - 4}\,c_\nu^2\, d^{1+\nu}\|\omega\|\,\alpha^\nu. \tag{11.62}
\]

Combining (11.55), (11.61) and (11.62), we conclude the result of this theorem again, with

\[
d_1 = \left[2^\nu c_\nu + \frac{(2d)^{1+\nu}c_\nu^2}{b - 4}\right]\|\omega\|
\]

and

\[
d_2 = d_1 + \left[\left(\frac{16}{b - 4} + 3\right)a_{1+\nu} + \left(\frac{6}{b - 4} + 1\right)a_1a_\nu\right]\frac{\|\omega\|}{\tau}.
\]

This completes the proof.

11.3 Multiscale collocation methods via the Tikhonov regularization

In this section we introduce a fast piecewise polynomial collocation method for solving the second-kind integral equation obtained by applying Tikhonov regularization to the original ill-posed equation. The method is developed based on a matrix compression strategy resulting from the use of multiscale piecewise polynomial basis functions and their corresponding multiscale collocation functionals.

11.3.1 The polynomial collocation method for the Tikhonov regularization

We present in this subsection a polynomial collocation method for solving the Tikhonov regularization equation. For this purpose, we first describe the Tikhonov regularization in the $L^\infty$ space for ill-posed integral equations of the first kind. Suppose that $\Omega$ is a compact set of the $d$-dimensional Euclidean space $\mathbb{R}^d$ for $d \ge 1$. The Fredholm integral operator $\mathcal{K}$ is defined by

\[
(\mathcal{K}u)(s) = \int_\Omega K(s, t)\,u(t)\,dt, \quad s \in \Omega, \tag{11.63}
\]

where $K \in C(\Omega\times\Omega)$ is a nondegenerate kernel. We consider the Fredholm integral equation of the first kind in the form

\[
\mathcal{K}u = f, \tag{11.64}
\]


where $f \in L^\infty(\Omega)$ is a given function and $u$ is the unknown solution to be determined. The operator $\mathcal{K}$ can be considered as a compact operator from $L^\infty(\Omega)$ to $L^\infty(\Omega)$. In this case, equation (11.64) is an ill-posed problem.

Let $\mathcal{K}^*$ be the adjoint operator of $\mathcal{K}$, defined by

\[
(\mathcal{K}^*u)(s) = \int_\Omega K(t, s)\,u(t)\,dt, \quad s \in \Omega. \tag{11.65}
\]

Instead of solving equation (11.64), the Tikhonov regularization method is to solve the equation

\[
(\alpha I + \mathcal{A})u^\delta_\alpha = \mathcal{K}^*f^\delta, \tag{11.66}
\]

where $\mathcal{A} = \mathcal{K}^*\mathcal{K}$, $\alpha$ is a positive parameter and $f^\delta$ is the approximate data of $f$ with

\[
\|f^\delta - f\|_2 \le \delta \tag{11.67}
\]

for some positive constant $\delta$. We also denote by $u_\alpha$ the solution of the equation

\[
(\alpha I + \mathcal{A})u_\alpha = \mathcal{K}^*f, \tag{11.68}
\]

where we assume that the right-hand-side function $f$ contains no noise.

For $1 \le p \le \infty$ we use $\|u\|_p$ to denote the norm of the function $u \in L^p(\Omega)$, and use $\|\mathcal{A}\|_{X\to Y}$ to denote the norm of the operator $\mathcal{A}: X \to Y$. When $X = Y = L^p(\Omega)$ we simplify the notation by $\|\mathcal{A}\|_p = \|\mathcal{A}\|_{L^p(\Omega)\to L^p(\Omega)}$. Letting

\[
M = \sup_{s\in\Omega}\left(\int_\Omega |K(t, s)|^2\,dt\right)^{1/2},
\]

we have that $\|\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)} \le M$. We recall the following two useful estimates established in [224].

Lemma 11.15 For each $\alpha > 0$ the operator $\alpha I + \mathcal{A}$ is invertible from $L^\infty(\Omega)$ to $L^\infty(\Omega)$,

\[
\|(\alpha I + \mathcal{A})^{-1}\|_\infty \le \frac{\sqrt{\alpha + M^2}}{\alpha^{3/2}}
\quad\text{and}\quad
\|(\alpha I + \mathcal{A})^{-1}\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)} \le \frac{M}{\alpha}.
\]

It is well known that in the Hilbert space $L^2$ we have the estimates

\[
\|(\alpha I + \mathcal{A})^{-1}\|_{L^2(\Omega)} \le \frac{1}{\alpha}
\quad\text{and}\quad
\|(\alpha I + \mathcal{A})^{-1}\mathcal{K}^*\|_{L^2(\Omega)} \le \frac{1}{2\alpha^{1/2}}.
\]

These estimates are optimal in the $L^2$ space. However, to the best of our knowledge, the estimates stated in Lemma 11.15 are the only available estimates, although it is not clear whether they are optimal in the $L^\infty$ space.

In the sequel, for simplicity, we assume that $f \in \mathcal{R}(\mathcal{K})$, where $\mathcal{R}(\mathcal{K})$ denotes the range of $\mathcal{K}$. Moreover, we use $u = \mathcal{K}^\dagger f$ to denote the solution of (11.64), where $\mathcal{K}^\dagger$ is the Moore–Penrose generalized inverse of $\mathcal{K}$. We also need the following assumption.


(H1) $u \in \mathcal{R}(\mathcal{A}^\nu\mathcal{K}^*)$ with $0 < \nu \le 1$; that is, there exists an $\omega \in L^\infty(\Omega)$ such that $u = \mathcal{A}^\nu\mathcal{K}^*\omega$.

The next result was obtained in [117].

Lemma 11.16 If $u \in \mathcal{R}(\mathcal{K}^*)$, then $\|u - u_\alpha\|_\infty \to 0$ as $\alpha \to 0$. Moreover, if hypothesis (H1) holds, then

\[
\|u - u_\alpha\|_\infty \le c(\nu)\|\omega\|\,\alpha^\nu \quad\text{as } \alpha \to 0,
\]

where $c(\nu)$ is a constant defined by

\[
c(\nu) =
\begin{cases}
\dfrac{\sin \nu\pi}{\nu(1-\nu)\pi}, & 0 < \nu < 1,\\[2pt]
1, & \nu = 1.
\end{cases}
\]
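The constant $c(\nu)$ of Lemma 11.16 is straightforward to evaluate; a minimal sketch (the function name is ours, chosen for illustration):

```python
import math

def c_nu(nu):
    """Constant c(nu) from Lemma 11.16, for nu in (0, 1]."""
    if nu == 1:
        return 1.0
    return math.sin(nu * math.pi) / (nu * (1.0 - nu) * math.pi)
```

For instance, $c(1/2) = 4/\pi \approx 1.273$, so the constant stays of moderate size over the whole range of $\nu$.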

We remark that a somewhat different collocation method was studied recently in [207] in the $L^2$ space. The ill-posed equation was first discretized using a numerical integration formula, which leads to a finite-dimensional equation. A regularization is then applied to convert the discrete ill-posed equation to a discrete well-posed equation. In the paper [207], a weak assumption $x \in \mathcal{R}(\mathcal{A}^\nu)$, $0 < \nu \le 1$, was used to obtain the optimal convergence rate $O(\delta^{2\nu/(2\nu+1)})$ for the Tikhonov regularization solution in the $L^2$ space. Moreover, the papers [194, 195] prove the saturation of methods for solving linear ill-posed problems in Hilbert spaces for a wide class of regularization methods. We take a different approach in this section. We first regularize the ill-posed equation (11.64) to obtain the equation (11.66), and then apply the fast collocation method to solve the equation (11.66). For the convergence analysis of the proposed collocation method we feel that using the $L^\infty$-norm is more natural. For this reason, we adopt in this section the $L^\infty$ space for the analysis. Our analysis will be based on the estimates described in Lemmas 11.15 and 11.16, presented in [224] and [117], respectively.

Next we present the piecewise polynomial collocation method for solving the regularized equation (11.66). We need some necessary notation. We assume that there is a partition $E = \{\Omega_n : n \in \mathbb{Z}_N\}$ of $\Omega$ for $N \in \mathbb{N}$ which satisfies the following conditions:

(I) $\Omega = \bigcup_{n\in\mathbb{Z}_N}\Omega_n$ and $\mathrm{meas}(\Omega_i \cap \Omega_j) = 0$ for $i \ne j$.

(II) For each $n \in \mathbb{Z}_N$ there exists an invertible affine map $\phi_n: \Omega^0 \to \Omega$ such that $\phi_n(\Omega^0) = \Omega_n$, where $\Omega^0 \subseteq \mathbb{R}^d$ is a reference element.

We denote by $X_N^k$ the space of piecewise polynomials of total degree $\le k - 1$ with respect to the partition $E$. In other words, every element in $X_N^k$ is a polynomial of total degree $\le k - 1$ on each element $\Omega_n$. Since the dimension


of the space of polynomials of total degree $\le k - 1$ is given by $r_k = \binom{k + d - 1}{d}$, the dimension of $X_N^k$ is $N r_k$.
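As a quick check of this dimension count (the helper name below is ours, for illustration only):

```python
from math import comb

def dim_Xk_N(N, k, d):
    """Dimension of X^k_N: N elements times r_k = C(k+d-1, d), the
    dimension of polynomials of total degree <= k-1 in d variables."""
    return N * comb(k + d - 1, d)
```

For example, in $d = 2$ with quadratics ($k = 3$) one has $r_k = \binom{4}{2} = 6$ degrees of freedom per element, and in $d = 1$ the count reduces to $k$ coefficients per subinterval.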

Choose $r_k$ distinct points $x_j \in \Omega^0$, $j = 1, 2, \dots, r_k$, in a general position; that is, the Lagrange interpolation polynomial of total degree $k - 1$ at these points is uniquely defined (cf. [198]). We assume that the polynomials $p_j \in \mathbb{P}_k$ of total degree $k - 1$ satisfy the conditions $p_j(x_i) = \delta_{ij}$ for $i, j = 1, 2, \dots, r_k$. For each $n \in \mathbb{Z}_N$ we define basis functions by

\[
\rho_{nj}(x) =
\begin{cases}
(p_j \circ \phi_n^{-1})(x), & x \in \Omega_n,\\
0, & x \notin \Omega_n.
\end{cases}
\]

In what follows, we denote $h = \max\{\mathrm{diam}(\Omega_n) : n \in \mathbb{Z}_N\}$ and define the interpolation projection $P_h: C(\Omega) \to X_N^k$ first for $f \in C(\Omega)$ by

\[
(P_hf)(x) = \sum_{n\in\mathbb{Z}_N}\sum_{j=1}^{r_k} f(\phi_n(x_j))\,\rho_{nj}(x), \quad x \in \Omega.
\]

The operator $P_h: C(\Omega) \to X_N^k$ is an interpolation projection onto $X_N^k$. It can be verified that there exists a positive constant $c$ such that $\|P_h\|_\infty \le c$ for all $N$. The projection $P_h$ can be extended to $L^\infty(\Omega)$ by the Hahn–Banach theorem. We also need the orthogonal projection $Q_h: L^2(\Omega) \to X_N^k$.
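A one-dimensional sketch of this construction on $\Omega = [0, 1]$ with $N$ equal subintervals: each element is the affine image of the reference element $[0, 1]$, and $P_h f$ is built from the Lagrange polynomials at the reference nodes. The function names are ours, chosen for illustration.

```python
def lagrange_basis(nodes, j, x):
    """Lagrange polynomial p_j for the reference nodes: p_j(nodes[i]) = delta_ij."""
    v = 1.0
    for i, xi in enumerate(nodes):
        if i != j:
            v *= (x - xi) / (nodes[j] - xi)
    return v

def interp_projection(f, N, ref_nodes):
    """1-D sketch of the interpolation projection P_h onto piecewise
    polynomials: on each element [n/N, (n+1)/N], interpolate f at the
    images phi_n(x_j) of the reference nodes x_j in [0, 1]."""
    def Pf(x):
        n = min(int(x * N), N - 1)      # element containing x
        a, h = n / N, 1.0 / N
        t = (x - a) / h                 # pull back to the reference element
        pts = [a + h * xj for xj in ref_nodes]
        return sum(f(p) * lagrange_basis(ref_nodes, j, t)
                   for j, p in enumerate(pts))
    return Pf
```

By construction, $P_h$ reproduces piecewise polynomials of total degree $\le k - 1$ exactly, which is the projection property used throughout the analysis.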

In the next proposition we prove important properties of the projections $P_h$ and $Q_h$. These properties may have appeared in the literature in a different form; to make this section self-contained, we provide a complete proof for the convenience of the reader. For $1 \le p, q \le +\infty$ and a positive constant $r$, we let $W^{r,q}(\Omega)$ denote the Sobolev space of functions whose derivatives of order $r$ are in $L^q(\Omega)$, and set

\[
L^p(\Omega, W^{r,q}(\Omega)) = \bigl\{v : v(s, \cdot) \in W^{r,q}(\Omega) \text{ for almost all } s \in \Omega \text{ and } \|v(s, \cdot)\|_{W^{r,q}(\Omega)} \in L^p(\Omega)\bigr\}
\]

and

\[
L^p(W^{r,q}(\Omega), \Omega) = \bigl\{v : v(\cdot, t) \in W^{r,q}(\Omega) \text{ for almost all } t \in \Omega \text{ and } \|v(\cdot, t)\|_{W^{r,q}(\Omega)} \in L^p(\Omega)\bigr\}.
\]

In the rest of this section, unless stated otherwise, we use $c$ for a generic positive constant whose value may be different on different occasions.

Proposition 11.17 If $K \in C(\Omega\times\Omega)$ and $0 < r \le k$, then the following statements hold.

(1) For $K \in L^\infty(\Omega, W^{r,1}(\Omega))$, $\|\mathcal{K}(I - Q_h)\|_\infty \le ch^r\|K\|_{L^\infty(\Omega, W^{r,1}(\Omega))}$.


(2) For $K \in L^\infty(W^{r,1}(\Omega), \Omega)$, $\|(I - P_h)\mathcal{K}^*\|_\infty \le ch^r\|K\|_{L^\infty(W^{r,1}(\Omega), \Omega)}$.
(3) For $K \in L^2(\Omega, W^{r,2}(\Omega))$, $\|\mathcal{K}(I - Q_h)\|_2 \le ch^r\|K\|_{L^2(\Omega, W^{r,2}(\Omega))}$.
(4) For $K \in L^2(W^{r,2}(\Omega), \Omega)$, $\|(I - P_h)\mathcal{K}^*\|_2 \le ch^r\|K\|_{L^2(W^{r,2}(\Omega), \Omega)}$.
(5) For $K \in L^\infty(W^{r,2}(\Omega), \Omega)$, $\|(I - P_h)\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)} \le ch^r\|K\|_{L^\infty(W^{r,2}(\Omega), \Omega)}$.

Proof (1) For any $u \in L^\infty(\Omega)$, noting that $Q_h$ is self-adjoint, we have that

\[
(\mathcal{K}(I - Q_h)u)(s) = \int_\Omega \bigl((I - Q_h)K(s, \cdot)\bigr)(t)\,u(t)\,dt. \tag{11.69}
\]

Therefore

\[
\|\mathcal{K}(I - Q_h)u\|_\infty \le \sup_{s\in\Omega}\int_\Omega \bigl|\bigl((I - Q_h)K(s, \cdot)\bigr)(t)\bigr|\,dt\;\|u\|_\infty \le ch^r\|K\|_{L^\infty(\Omega, W^{r,1}(\Omega))}\|u\|_\infty,
\]

which implies that

\[
\|\mathcal{K}(I - Q_h)\|_\infty \le ch^r\|K\|_{L^\infty(\Omega, W^{r,1}(\Omega))}.
\]

(2) Likewise, we have that

\[
\bigl((I - P_h)\mathcal{K}^*u\bigr)(s) = \int_\Omega \bigl((I - P_h)K(\cdot, t)\bigr)(s)\,u(t)\,dt. \tag{11.70}
\]

This implies that

\[
\|(I - P_h)\mathcal{K}^*u\|_\infty \le \sup_{s\in\Omega}\int_\Omega \bigl|\bigl((I - P_h)K(\cdot, t)\bigr)(s)\bigr|\,dt\;\|u\|_\infty \le ch^r\|K\|_{L^\infty(W^{r,1}(\Omega), \Omega)}\|u\|_\infty,
\]

which leads to the estimate of (2).

(3) For any $u \in L^2(\Omega)$, it follows from (11.69) that

\[
\|\mathcal{K}(I - Q_h)u\|_2 \le \left(\int_\Omega \|(I - Q_h)K(s, \cdot)\|_2^2\,ds\right)^{1/2}\|u\|_2 \le ch^r\|K\|_{L^2(\Omega, W^{r,2}(\Omega))}\|u\|_2,
\]

which yields the result of (3).

(4) The estimate of (4) follows from (11.70) and a similar argument to that used in the proof of (3).

(5) For any $u \in L^2(\Omega)$, it follows that

\[
\|(I - P_h)\mathcal{K}^*u\|_\infty \le \sup_{s\in\Omega}\|(I - P_h)K(\cdot, s)\|_2\,\|u\|_2 \le ch^r\|K\|_{L^\infty(W^{r,2}(\Omega), \Omega)}\|u\|_2.
\]

This completes the proof.


The next corollary follows directly from Proposition 11.17.

Corollary 11.18 If $K(\cdot, \cdot) \in W^{r,\infty}(\Omega\times\Omega)$ with $0 < r \le k$, then there exists a positive constant $c$ such that for all $h > 0$,

\[
\max\bigl\{\|\mathcal{A} - P_h\mathcal{A}Q_h\|_\infty,\ \|P_h\mathcal{A}Q_h - P_h\mathcal{A}\|_\infty,\ \|(I - P_h)\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)}\bigr\} \le ch^r.
\]

We remark that the number $r$ on the right-hand side of the above estimate may be larger than that in the assumption of this corollary (see the example in Section 11.4.2).

Using the piecewise polynomial spaces and the projection operators introduced above, the piecewise polynomial collocation method for solving the regularized equation (11.66) is to find $u^\delta_{\alpha,h} \in X_N^k$ such that

\[
(\alpha I + P_h\mathcal{A}Q_h)u^\delta_{\alpha,h} = P_h\mathcal{K}^*f^\delta. \tag{11.71}
\]

Following the discussion in [224], we have the convergence and error estimate of the polynomial collocation method.

Theorem 11.19 Suppose that $K(\cdot, \cdot) \in W^{r,\infty}(\Omega\times\Omega)$ with $0 < r \le k$ and

\[
ch^r \le \min\left\{\frac{1}{2}\,\frac{\alpha^{3/2}}{\sqrt{\alpha + M^2}},\ \frac{\sqrt{\alpha}}{\sqrt{\alpha + M^2}}\right\}. \tag{11.72}
\]

(1) If $u \in \mathcal{R}(\mathcal{K}^*)$ and $h$, $\alpha$ are chosen such that $h^r = O(\sqrt{\alpha}\,\delta)$ and $\delta/\alpha \to 0$ as $\delta \to 0$, then

\[
\|u - u^\delta_{\alpha,h}\|_\infty \to 0 \quad\text{as } \delta \to 0. \tag{11.73}
\]

(2) If hypothesis (H1) holds and $h$, $\alpha$ are chosen such that $h^r = O(\sqrt{\alpha}\,\delta)$ and $\alpha \sim \delta^{\frac{1}{\nu+1}}$ as $\delta \to 0$, then

\[
\|u - u^\delta_{\alpha,h}\|_\infty = O\bigl(\delta^{\frac{\nu}{\nu+1}}\bigr) \quad\text{as } \delta \to 0. \tag{11.74}
\]

Proof Using Lemma 11.15 and a standard argument, we have that

\[
\|u - u^\delta_{\alpha,h}\|_\infty \le \|u - u_\alpha\|_\infty + c\left(\frac{\delta}{\alpha} + \frac{h^r}{\alpha^{3/2}}\right).
\]

Thus the results of this theorem follow from Theorems 2.5 and 2.6 of [224].


11.3.2 The fast multiscale collocation method

This subsection is devoted to the development of the fast multiscale piecewise polynomial collocation method for solving equation (11.66).

We recall the multiscale sequence of approximate subspaces. A sequence of partitions of the domain $\Omega$ is called multiscale if every partition in the sequence is obtained from a refinement of its previous partition. Let $X_n$, $n \in \mathbb{N}$, be a sequence of piecewise polynomial spaces of total degree $k - 1$ based on a sequence of multiscale partitions of $\Omega$, where $X_0$ is the space of piecewise polynomials of total degree $k - 1$ on an initial partition of $\Omega$ with $m = \dim X_0 = m_0 r_k$ for some positive integer $m_0$. Because of the multiscale partition, the spaces $X_n$, $n \in \mathbb{N}$, are nested; that is, $X_n \subseteq X_{n+1}$ for $n \in \mathbb{N}$. This leads to the decomposition

\[
X_n = W_0 \oplus^\perp W_1 \oplus^\perp \cdots \oplus^\perp W_n, \tag{11.75}
\]

where $W_0 = X_0$. For each $i \in \mathbb{N}_0$ we assume that $W_i$ has a basis $\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}$; that is, $W_i = \mathrm{span}\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}$. According to (11.75), we then have that $X_n = \mathrm{span}\{w_{ij} : (i, j) \in U_n\}$. For each $(i, j) \in U_n$ we denote by $S_{ij}$ the support of the basis function $w_{ij}$, and let $d(A)$ denote the diameter of the set $A \subset \Omega$. We define $s(n) = \dim X_n$ and, for each $i \in \mathbb{Z}_{n+1}$, we set $h_i = \max\{d(S_{ij}) : j \in \mathbb{Z}_{w(i)}\}$. We further require that the spaces and their bases have the properties that $s(n) \sim \mu^n$, $w(i) \sim \mu^i$ and $h_i \sim \mu^{-i/d}$, where $\mu > 1$ is an integer, and that there exists a positive constant $c$ such that $\|w_{ij}\|_\infty \le c$ for all $(i, j) \in U$. These properties are fulfilled by the spaces and bases constructed in [69, 75, 264].

Next we turn to describing a multiscale sequence of the corresponding collocation functional spaces. Associated with each basis function $w_{ij}$ we have a collocation functional $\ell_{ij}$, which is a sum of point evaluation functionals at a fixed number of points in $S_{ij}$. We demand that for each $(i, j) \in U_n$, $\langle \ell_{ij}, q\rangle = 0$ for any polynomial $q$ of total degree $k - 1$, and that there exists a positive constant $c$ such that $\|\ell_{ij}\| \le c$ for all $(i, j) \in U$. We also require that the basis functions and their corresponding collocation functionals have the semi-bi-orthogonal property

\[
\langle \ell_{i'j'}, w_{ij}\rangle = \delta_{i'i}\,\delta_{j'j}, \quad (i, j), (i', j') \in U, \quad i' \le i.
\]

These properties of the collocation functionals are satisfied by those constructed in [69, 75]. The multiscale collocation functionals constructed in these papers make use of the refinable sets introduced in [65], which admit the unique Lagrange interpolating polynomial (cf. [198]). These functionals were originally defined for continuous functions and then extended to functions in $L^\infty(\Omega)$ by the Hahn–Banach theorem. Corresponding to each subspace $W_i$ we have the collocation functional space $V_i = \mathrm{span}\{\ell_{ij} : j \in \mathbb{Z}_{w(i)}\}$, and corresponding to the space $X_n$ we have $L_n = \mathrm{span}\{\ell_{ij} : (i, j) \in U_n\}$. The


113 Multiscale collocation methods via the Tikhonov regularization 445

space $\mathcal{L}_n$ has the decomposition

$$\mathcal{L}_n = V_0 \oplus V_1 \oplus \cdots \oplus V_n.$$

We now formulate the collocation method for solving equation (11.66). To this end, for each $n \in \mathbb{N}_0$ we let $\mathcal{P}_n$ denote the interpolation projection from $L^\infty(\Omega)$ onto $X_n$, defined for $f \in L^\infty(\Omega)$ by requiring that $\mathcal{P}_n f \in X_n$ satisfy

$$\langle \ell_{ij}, f - \mathcal{P}_n f\rangle = 0, \qquad (i,j) \in U_n.$$

Moreover, for each $n \in \mathbb{N}_0$ we let $\mathcal{Q}_n$ denote the orthogonal projection from $L^2(\Omega)$ onto $X_n$. The collocation method for solving (11.66) is to find $u^\delta_{\alpha,n} \in X_n$ such that

$$(\alpha I + \mathcal{P}_n\mathcal{A}\mathcal{Q}_n)\, u^\delta_{\alpha,n} = \mathcal{P}_n\mathcal{K}^* f^\delta. \tag{11.76}$$

In the standard piecewise polynomial collocation method the orthogonal projection $\mathcal{Q}_n$ is not used; instead, its role is played by the interpolation projection $\mathcal{P}_n$. At the operator-equation level the two formulations are equivalent. However, the use of the orthogonal projection $\mathcal{Q}_n$ allows us to use the multiscale basis functions, which have vanishing moments. This is crucial for developing fast algorithms based on matrix compression.

The matrix representation of the operator $\alpha I + \mathcal{P}_n\mathcal{A}\mathcal{Q}_n$ with respect to the multiscale basis functions and the corresponding multiscale collocation functionals is a dense matrix. To compress this matrix we write it in block form. Let $\mathcal{A}_n := \mathcal{P}_n\mathcal{A}\mathcal{Q}_n$; then the operator $\mathcal{A}_n : X_n \to X_n$ is identified in matrix form with

$$\mathcal{A}_n = \begin{bmatrix}
\mathcal{P}_0\mathcal{A}\mathcal{Q}_0 & \mathcal{P}_0\mathcal{A}(\mathcal{Q}_1-\mathcal{Q}_0) & \cdots & \mathcal{P}_0\mathcal{A}(\mathcal{Q}_n-\mathcal{Q}_{n-1}) \\
(\mathcal{P}_1-\mathcal{P}_0)\mathcal{A}\mathcal{Q}_0 & (\mathcal{P}_1-\mathcal{P}_0)\mathcal{A}(\mathcal{Q}_1-\mathcal{Q}_0) & \cdots & (\mathcal{P}_1-\mathcal{P}_0)\mathcal{A}(\mathcal{Q}_n-\mathcal{Q}_{n-1}) \\
\vdots & \vdots & & \vdots \\
(\mathcal{P}_n-\mathcal{P}_{n-1})\mathcal{A}\mathcal{Q}_0 & (\mathcal{P}_n-\mathcal{P}_{n-1})\mathcal{A}(\mathcal{Q}_1-\mathcal{Q}_0) & \cdots & (\mathcal{P}_n-\mathcal{P}_{n-1})\mathcal{A}(\mathcal{Q}_n-\mathcal{Q}_{n-1})
\end{bmatrix} \tag{11.77}$$

If $i + j > n$, we replace the block $(\mathcal{P}_i - \mathcal{P}_{i-1})\mathcal{A}(\mathcal{Q}_j - \mathcal{Q}_{j-1})$ of (11.77) by zero, which leads to the compressed matrix

$$\tilde{\mathcal{A}}_n = \begin{bmatrix}
\mathcal{P}_0\mathcal{A}\mathcal{Q}_0 & \mathcal{P}_0\mathcal{A}(\mathcal{Q}_1-\mathcal{Q}_0) & \cdots & \mathcal{P}_0\mathcal{A}(\mathcal{Q}_n-\mathcal{Q}_{n-1}) \\
(\mathcal{P}_1-\mathcal{P}_0)\mathcal{A}\mathcal{Q}_0 & (\mathcal{P}_1-\mathcal{P}_0)\mathcal{A}(\mathcal{Q}_1-\mathcal{Q}_0) & \cdots & 0 \\
\vdots & \vdots & & \vdots \\
(\mathcal{P}_n-\mathcal{P}_{n-1})\mathcal{A}\mathcal{Q}_0 & 0 & \cdots & 0
\end{bmatrix} \tag{11.78}$$

We call this compression strategy the -shape compression. Setting $\mathcal{P}_{-1} = \mathcal{Q}_{-1} = 0$, the operator $\tilde{\mathcal{A}}_n : X_n \to X_n$ can be written as

$$\tilde{\mathcal{A}}_n = \sum_{\substack{i,j \in \mathbb{Z}_{n+1} \\ i+j \le n}} (\mathcal{P}_i - \mathcal{P}_{i-1})\,\mathcal{A}\,(\mathcal{Q}_j - \mathcal{Q}_{j-1}).$$


446 Multiscale methods for ill-posed integral equations

In the collocation method (11.76) we replace $\mathcal{A}_n$ by $\tilde{\mathcal{A}}_n$ and obtain a new approximation scheme for solving equation (11.66); that is, we find $u^\delta_{\alpha,n} \in X_n$ such that

$$(\alpha I + \tilde{\mathcal{A}}_n)\, u^\delta_{\alpha,n} = \mathcal{P}_n\mathcal{K}^* f^\delta. \tag{11.79}$$

We show that this modified collocation method leads to a fast algorithm and, at the same time, preserves the convergence rate obtained in [224].

To write equation (11.79) in its equivalent matrix form we make use of the multiscale basis functions and the corresponding collocation functionals. We write the solution $u^\delta_{\alpha,n} \in X_n$ as $u^\delta_{\alpha,n} = \sum_{(i,j)\in U_n} u_{ij}\, w_{ij}$ and introduce the solution vector $\mathbf{u}_n = [u_{ij} : (i,j) \in U_n]^T$. We introduce the matrices
$$\mathbf{E}_n = [\langle \ell_{i'j'}, w_{ij}\rangle : (i',j'),\,(i,j) \in U_n], \qquad \tilde{\mathbf{A}}_n = [\tilde{A}_{i'j',\,ij} : (i',j'),\,(i,j) \in U_n],$$
where
$$\tilde{A}_{i'j',\,ij} = \begin{cases} \langle \ell_{i'j'}, \mathcal{K}^*\mathcal{K}w_{ij}\rangle, & i' + i \le n, \\ 0, & \text{otherwise}, \end{cases} \tag{11.80}$$
and the vector $\mathbf{f}_n = [\langle \ell_{i'j'}, \mathcal{K}^* f^\delta\rangle : (i',j') \in U_n]^T$. With this notation, equation (11.79) is written in the matrix form

$$(\alpha \mathbf{E}_n + \tilde{\mathbf{A}}_n)\,\mathbf{u}_n = \mathbf{f}_n. \tag{11.81}$$

The semi-bi-orthogonality and the compact support of the basis functions $w_{ij}$ and of the corresponding collocation functionals $\ell_{ij}$, $(i,j) \in U$, ensure that $\mathbf{E}_n$ is a sparse upper triangular matrix, and according to the -shape compression strategy $\tilde{\mathbf{A}}_n$ is a sparse matrix. These facts lead to a fast algorithm for solving equation (11.81).

In the next theorem we analyze the number $\mathcal{N}(\tilde{\mathbf{A}}_n)$ of nonzero entries of the matrix $\tilde{\mathbf{A}}_n$.

Theorem 11.20 If the matrix $\tilde{\mathbf{A}}_n$ is obtained from the compression strategy (11.80), then
$$\mathcal{N}(\tilde{\mathbf{A}}_n) = O\big(s(n)\log s(n)\big), \quad n \to \infty.$$

Proof For $i, i' \in \mathbb{Z}_{n+1}$ we introduce the block matrix
$$\tilde{\mathbf{A}}_{i'i} = [\tilde{A}_{i'j',\,ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}].$$
According to the compression strategy (11.80), it is clear that
$$\mathcal{N}(\tilde{\mathbf{A}}_n) = \sum_{i'+i \le n} \mathcal{N}(\tilde{\mathbf{A}}_{i'i}).$$


Noting that $\tilde{\mathbf{A}}_{i'i}$ is a $w(i') \times w(i)$ matrix and that $w(i) \sim \mu^i$, we conclude from the equation above that
$$\mathcal{N}(\tilde{\mathbf{A}}_n) \sim \sum_{i'+i \le n} \mu^{i'+i} = \sum_{i \in \mathbb{Z}_{n+1}} \mu^{i} \sum_{i' \in \mathbb{Z}_{n-i+1}} \mu^{i'} = O\big(s(n)\log s(n)\big) \quad \text{as } n \to \infty,$$
proving the result of this theorem.
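The count in the theorem can be checked directly. For the dyadic block sizes $w(0) = 2$, $w(i) = 2^i$ of the concrete construction in Section 11.4 (so $s(n) = 2^{n+1}$, $\mu = 2$), summing the blocks kept by $i' + i \le n$ gives exactly $(n+2)\,2^{n+1} = s(n)\,(\log_2 s(n) + 1)$ nonzero entries; a small sketch:

```python
# Count the nonzero entries kept by the compression i' + i <= n for the
# dyadic block sizes w(0) = 2, w(i) = 2^i (so s(n) = 2^{n+1}).

def nnz_compressed(n):
    w = [2] + [2 ** i for i in range(1, n + 1)]
    return sum(w[i] * w[j]
               for i in range(n + 1) for j in range(n + 1) if i + j <= n)

for n in range(2, 9):
    s = 2 ** (n + 1)
    # density of the compressed matrix decays like (n + 2) / 2^{n+1}
    print(n, nnz_compressed(n), nnz_compressed(n) / s ** 2)
```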

A crucial step in generating the sparse coefficient matrix $\tilde{\mathbf{A}}_n$ is computing the nonzero entries $\langle \ell_{i'j'}, \mathcal{K}^*\mathcal{K}w_{ij}\rangle$ for $i' + i \le n$. This requires computing the values of the collocation functionals $\ell_{i'j'}$ at the functions defined by the integrals
$$\int_\Omega\!\int_\Omega K(\tau,s)K(\tau,t)\,d\tau\; w_{ij}(t)\,dt.$$

The same issue has been addressed in [33] in a different context (see Theorems 5 and 6 of that paper). The key idea is to develop a numerical quadrature strategy for computing the entries
$$\left\langle \ell_{i'j'},\ \int_\Omega\!\int_\Omega K(\tau,s)K(\tau,t)\,d\tau\; w_{ij}(t)\,dt \right\rangle$$
of the compressed matrix. Such a strategy requires only $O(s(n)\log s(n))$ functional evaluations while preserving the convergence order of the resulting approximate solution. See also [75] for a similar development for the second-kind Fredholm integral equation. Since this issue is understood in principle, and since the main focus of this section is the regularization, we omit the details of this development and leave them to the interested reader.

We briefly discuss an alternative idea, which may provide a fast algorithm for the collocation method proposed recently in [207]. The suggestion is first to discretize equation (11.64) directly by the collocation method described previously, which results in the linear system

$$\mathbf{K}_n\mathbf{u}_n = \mathbf{f}_n. \tag{11.82}$$

The collocation functionals $\ell_{i'j'}$ are used here, instead of the point-evaluation functionals of [207], to permit matrix compression. The second step is to compress the dense matrix $\mathbf{K}_n$ in equation (11.82) to obtain

$$\tilde{\mathbf{K}}_n\mathbf{u}_n = \mathbf{f}_n. \tag{11.83}$$

The resulting coefficient matrix has only $O(s(n)\log s(n))$ nonzero entries. This treatment is the same as that for the second-kind Fredholm integral equation developed in [69]. The third step is to apply the regularization to


equation (11.83), to obtain

$$(\alpha \mathbf{E}_n + \tilde{\mathbf{K}}_n^*\tilde{\mathbf{K}}_n)\,\mathbf{u}_{n,\alpha} = \tilde{\mathbf{K}}_n^*\mathbf{f}_n. \tag{11.84}$$

The matrix $\mathbf{E}_n$ may be replaced by the identity matrix $\mathbf{I}_n$. Since the matrices $\mathbf{E}_n$, $\tilde{\mathbf{K}}_n$ and $\tilde{\mathbf{K}}_n^*$ are all sparse, each having at most $O(s(n)\log s(n))$ nonzero entries, the product $\tilde{\mathbf{K}}_n^*\tilde{\mathbf{K}}_n\mathbf{u}_{n,\alpha}$ admits a fast computation, by first computing $\tilde{\mathbf{K}}_n\mathbf{u}_{n,\alpha}$ and then multiplying $\tilde{\mathbf{K}}_n^*$ by the resulting vector, whenever an iterative method is used to solve the discrete equation (11.84). A further description of this idea would be a detour from the main focus of this section; hence we leave the details to future investigation.
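A minimal sketch of this strategy, with a random sparse matrix standing in for $\tilde{\mathbf{K}}_n$ and $\mathbf{E}_n = \mathbf{I}$ (both assumptions made only for illustration): conjugate gradient is applied to the symmetric positive definite operator $\alpha I + \tilde{\mathbf{K}}_n^T\tilde{\mathbf{K}}_n$, realized by two matrix–vector products per iteration, so the product matrix is never formed.

```python
import numpy as np

# Solve (alpha*I + K^T K) u = K^T f by CG, applying K and K^T separately.
rng = np.random.default_rng(0)
m = 100
K = rng.standard_normal((m, m)) * (rng.random((m, m)) < 0.05)  # sparse stand-in
f = rng.standard_normal(m)
alpha = 1e-2

def normal_op(v):
    return alpha * v + K.T @ (K @ v)    # two matvecs; K^T K is never formed

b = K.T @ f
u = np.zeros(m)
r = b.copy()                             # residual for the zero initial guess
p = r.copy()
rs = float(r @ r)
for _ in range(2000):                    # CG for a symmetric positive definite op
    Ap = normal_op(p)
    step = rs / float(p @ Ap)
    u += step * p
    r -= step * Ap
    rs_new = float(r @ r)
    if rs_new ** 0.5 < 1e-12:
        break
    p = r + (rs_new / rs) * p
    rs = rs_new

print(np.linalg.norm(normal_op(u) - b))  # residual of the regularized system
```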

We now turn to estimating the convergence rate of the modified collocation method (11.79). We impose the following hypothesis.

(H2) There exists a positive constant $c$ such that
$$\|\mathcal{K}(I - \mathcal{Q}_j)\|_\infty \le c\,\mu^{-rj/d}, \qquad \|\mathcal{K}(I - \mathcal{Q}_j)\|_2 \le c\,\mu^{-rj/d},$$
$$\|(I - \mathcal{P}_j)\mathcal{K}^*\|_\infty \le c\,\mu^{-rj/d}, \qquad \|(I - \mathcal{P}_j)\mathcal{K}^*\|_2 \le c\,\mu^{-rj/d},$$
$$\|(I - \mathcal{P}_j)\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)} \le c\,\mu^{-rj/d}.$$

Proposition 11.17 gives various smoothness conditions on the kernel $K$ under which the estimates in hypothesis (H2) hold. We impose hypothesis (H2) rather than the smoothness conditions on the kernel $K$ because the smoothness conditions are sufficient but not necessary; examples of such cases will be shown in the last section.

In the following lemma we bound the error between $\tilde{\mathcal{A}}_n$ and $\mathcal{A}$ in the operator norm.

Lemma 11.21 If hypothesis (H2) holds, then there exists a positive constant $c_0$ such that
$$\|\mathcal{A} - \tilde{\mathcal{A}}_n\|_\infty \le c_0\, n\,\mu^{-rn/d}.$$

Proof We write
$$\mathcal{A} - \tilde{\mathcal{A}}_n = (I - \mathcal{P}_n)\mathcal{A} + \mathcal{P}_n\mathcal{A}(I - \mathcal{Q}_n) + (\mathcal{P}_n\mathcal{A}\mathcal{Q}_n - \tilde{\mathcal{A}}_n). \tag{11.85}$$
Recalling that $\mathcal{A} = \mathcal{K}^*\mathcal{K}$ and that $\|\mathcal{P}_n\|_\infty$ is uniformly bounded by a constant, it follows from hypothesis (H2) that there exists a positive constant $c$ such that
$$\|(I - \mathcal{P}_n)\mathcal{A} + \mathcal{P}_n\mathcal{A}(I - \mathcal{Q}_n)\|_\infty \le c\,\mu^{-rn/d}. \tag{11.86}$$
Moreover, noting for each $i \in \mathbb{Z}_{n+1}$ that
$$\mathcal{Q}_{n-i} = \sum_{j \in \mathbb{Z}_{n-i+1}} (\mathcal{Q}_j - \mathcal{Q}_{j-1}),$$


we conclude that
$$\tilde{\mathcal{A}}_n = \sum_{i \in \mathbb{Z}_{n+1}} (\mathcal{P}_i - \mathcal{P}_{i-1})\,\mathcal{A}\,\mathcal{Q}_{n-i}. \tag{11.87}$$
We conclude from (11.87) that
$$\mathcal{P}_n\mathcal{A}\mathcal{Q}_n - \tilde{\mathcal{A}}_n = \sum_{i \in \mathbb{Z}_{n+1}} (\mathcal{P}_i - \mathcal{P}_{i-1})\,\mathcal{K}^*\mathcal{K}\,(\mathcal{Q}_n - \mathcal{Q}_{n-i}). \tag{11.88}$$
Again using hypothesis (H2), there exists a positive constant $c$ such that
$$\|(\mathcal{P}_i - \mathcal{P}_{i-1})\mathcal{K}^*\|_\infty \le \|(\mathcal{P}_i - I)\mathcal{K}^*\|_\infty + \|(I - \mathcal{P}_{i-1})\mathcal{K}^*\|_\infty \le c\,\mu^{-ri/d}$$
and
$$\|\mathcal{K}(\mathcal{Q}_n - \mathcal{Q}_{n-i})\|_\infty \le \|\mathcal{K}(\mathcal{Q}_n - I)\|_\infty + \|\mathcal{K}(I - \mathcal{Q}_{n-i})\|_\infty \le c\,\mu^{-r(n-i)/d}.$$
These estimates together with (11.88) lead to
$$\|\mathcal{P}_n\mathcal{A}\mathcal{Q}_n - \tilde{\mathcal{A}}_n\|_\infty \le c\sum_{i=1}^{n} \mu^{-ri/d}\,\mu^{-r(n-i)/d} \le c_0\, n\,\mu^{-rn/d}. \tag{11.89}$$
Combining (11.86) and (11.89) yields the desired result of this lemma.

We establish below an estimate for the approximate operator $\tilde{\mathcal{A}}_n$ similar to the first estimate in Lemma 11.15, which concerns the operator $\mathcal{A}$.

Lemma 11.22 Suppose that hypothesis (H2) holds. If
$$n\,\mu^{-rn/d} \le \frac{1}{2c_0}\,\frac{\alpha^{3/2}}{\sqrt{\alpha} + M^2}, \tag{11.90}$$
where $c_0$ is the constant appearing in Lemma 11.21, then $\alpha I + \tilde{\mathcal{A}}_n : L^\infty(\Omega) \to L^\infty(\Omega)$ is invertible and
$$\|(\alpha I + \tilde{\mathcal{A}}_n)^{-1}\|_\infty \le \frac{2(\sqrt{\alpha} + M)}{\alpha^{3/2}}. \tag{11.91}$$

Proof By a basic result in functional analysis (cf. Theorem 1.5, p. 193 of [254]), we conclude that $\alpha I + \tilde{\mathcal{A}}_n : L^\infty(\Omega) \to L^\infty(\Omega)$ is invertible and
$$\|(\alpha I + \tilde{\mathcal{A}}_n)^{-1}\|_\infty \le \frac{\|(\alpha I + \mathcal{A})^{-1}\|_\infty}{1 - \|(\alpha I + \mathcal{A})^{-1}\|_\infty\,\|\mathcal{A} - \tilde{\mathcal{A}}_n\|_\infty}.$$
The estimate (11.91) follows from the above bound, condition (11.90), and Lemmas 11.15 and 11.21.


We next proceed to estimate the error $\|u - u^\delta_{\alpha,n}\|_\infty$. To this end we let $u_{\alpha,n}$ denote the solution of (11.79) with $f^\delta$ replaced by $f$. By the triangle inequality we have
$$\|u - u^\delta_{\alpha,n}\|_\infty \le \|u - u_\alpha\|_\infty + \|u_\alpha - u_{\alpha,n}\|_\infty + \|u_{\alpha,n} - u^\delta_{\alpha,n}\|_\infty. \tag{11.92}$$
Since Lemma 11.16 gives an estimate for $\|u - u_\alpha\|_\infty$, it remains to estimate the last two terms on the right-hand side of inequality (11.92). In the next lemma we estimate the second term.

Lemma 11.23 If hypothesis (H2) holds, the integer $n$ is chosen to satisfy inequality (11.90), and $u \in \mathcal{R}(\mathcal{K}^*)$, then there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,
$$\|u_\alpha - u_{\alpha,n}\|_\infty \le c\,\frac{n\,\mu^{-rn/d}}{\alpha^{3/2}}.$$

Proof By using equations (11.68) and (11.79) we have
$$u_\alpha - u_{\alpha,n} = (\alpha I + \mathcal{A})^{-1}\mathcal{K}^* f - (\alpha I + \tilde{\mathcal{A}}_n)^{-1}\mathcal{P}_n\mathcal{K}^* f,$$
which can be rewritten as
$$u_\alpha - u_{\alpha,n} = \big[(\alpha I + \mathcal{A})^{-1} - (\alpha I + \tilde{\mathcal{A}}_n)^{-1}\big]\mathcal{K}^* f + (\alpha I + \tilde{\mathcal{A}}_n)^{-1}(I - \mathcal{P}_n)\mathcal{K}^* f.$$
By employing the relation $f = \mathcal{K}u$, we derive that
$$u_\alpha - u_{\alpha,n} = (\alpha I + \tilde{\mathcal{A}}_n)^{-1}(\tilde{\mathcal{A}}_n - \mathcal{A})(\alpha I + \mathcal{A})^{-1}\mathcal{A}u + (\alpha I + \tilde{\mathcal{A}}_n)^{-1}(I - \mathcal{P}_n)\mathcal{A}u.$$
By hypothesis, $u \in \mathcal{R}(\mathcal{K}^*)$, and thus we write $u = \mathcal{K}^* v$ for some $v \in L^\infty(\Omega)$. Since for any positive number $\alpha$ the operator $\alpha I + \mathcal{K}\mathcal{K}^*$ is invertible and
$$\|(\alpha I + \mathcal{K}\mathcal{K}^*)^{-1}\mathcal{K}\mathcal{K}^*\|_2 \le 1,$$
it follows that
$$\|(\alpha I + \mathcal{A})^{-1}\mathcal{A}u\|_\infty = \|\mathcal{K}^*(\alpha I + \mathcal{K}\mathcal{K}^*)^{-1}\mathcal{K}\mathcal{K}^* v\|_\infty \le \|\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)}\,\|v\|_2.$$
Hence there exists a positive constant $c$ such that
$$\|(\alpha I + \mathcal{A})^{-1}\mathcal{A}u\|_\infty \le c.$$
This estimate together with Lemmas 11.21 and 11.22 ensures that
$$\|(\alpha I + \tilde{\mathcal{A}}_n)^{-1}(\tilde{\mathcal{A}}_n - \mathcal{A})(\alpha I + \mathcal{A})^{-1}\mathcal{A}u\|_\infty \le c\,\frac{n\,\mu^{-rn/d}}{\alpha^{3/2}}. \tag{11.93}$$


Moreover, using Lemma 11.22 and part (2) of Proposition 11.17, we conclude that
$$\|(\alpha I + \tilde{\mathcal{A}}_n)^{-1}(I - \mathcal{P}_n)\mathcal{A}u\|_\infty \le c\,\frac{\mu^{-rn/d}}{\alpha^{3/2}}. \tag{11.94}$$
Combining estimates (11.93) and (11.94) yields the desired estimate.

Next we estimate the third term on the right-hand side of inequality (11.92).

Lemma 11.24 If hypothesis (H2) holds and condition (11.90) is satisfied, then there exists a positive constant $c$ such that
$$\|u_{\alpha,n} - u^\delta_{\alpha,n}\|_\infty \le c\left(\frac{\delta}{\alpha} + \mu^{-rn/d}\,\frac{\delta}{\alpha^{3/2}}\right).$$

Proof From (11.79) we have
$$(\alpha I + \tilde{\mathcal{A}}_n)(u_{\alpha,n} - u^\delta_{\alpha,n}) = \mathcal{P}_n\mathcal{K}^*(f - f^\delta).$$
We rewrite this in the form
$$u_{\alpha,n} - u^\delta_{\alpha,n} = (\alpha I + \mathcal{A})^{-1}\big[\mathcal{K}^*(f - f^\delta) + (\mathcal{P}_n - I)\mathcal{K}^*(f - f^\delta) + (\mathcal{A} - \tilde{\mathcal{A}}_n)(u_{\alpha,n} - u^\delta_{\alpha,n})\big].$$
It follows from the second estimate of Lemma 11.15 and hypothesis (11.67) that
$$\|(\alpha I + \mathcal{A})^{-1}\mathcal{K}^*(f - f^\delta)\|_\infty \le \|(\alpha I + \mathcal{A})^{-1}\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)}\,\|f - f^\delta\|_2 \le M\,\frac{\delta}{\alpha}.$$
The first estimate of Lemma 11.15 and part (2) of Proposition 11.17 ensure that there exists a positive constant $c$ such that
$$\|(\alpha I + \mathcal{A})^{-1}(\mathcal{P}_n - I)\mathcal{K}^*(f - f^\delta)\|_\infty \le c\,\mu^{-rn/d}(\sqrt{\alpha} + M^2)\,\frac{\delta}{\alpha^{3/2}}.$$
By the first estimate of Lemma 11.15, Lemma 11.21 and condition (11.90), we obtain
$$\|(\alpha I + \mathcal{A})^{-1}(\mathcal{A} - \tilde{\mathcal{A}}_n)(u_{\alpha,n} - u^\delta_{\alpha,n})\|_\infty \le \frac12\,\|u_{\alpha,n} - u^\delta_{\alpha,n}\|_\infty.$$
Combining the above three estimates, we conclude that
$$\|u_{\alpha,n} - u^\delta_{\alpha,n}\|_\infty \le M\,\frac{\delta}{\alpha} + c\,\mu^{-rn/d}(\sqrt{\alpha} + M^2)\,\frac{\delta}{\alpha^{3/2}} + \frac12\,\|u_{\alpha,n} - u^\delta_{\alpha,n}\|_\infty.$$
Solving this inequality for $\|u_{\alpha,n} - u^\delta_{\alpha,n}\|_\infty$ yields the desired estimate.


We are now ready to present an error bound for $\|u - u^\delta_{\alpha,n}\|_\infty$.

Theorem 11.25 If hypothesis (H2) holds, the integer $n$ is chosen to satisfy inequality (11.90), and $u \in \mathcal{R}(\mathcal{K}^*)$, then there exists a positive constant $c$ such that
$$\|u - u^\delta_{\alpha,n}\|_\infty \le \|u - u_\alpha\|_\infty + c\left(\frac{\delta}{\alpha} + \frac{n\,\mu^{-rn/d}}{\alpha^{3/2}}\right). \tag{11.95}$$

Proof The estimate in this theorem follows directly from (11.92) and Lemmas 11.23 and 11.24.

We present in the next corollary two special results.

Corollary 11.26 Suppose that hypothesis (H2) holds and the integer $n$ is chosen to satisfy inequality (11.90).

(1) If $\alpha$ is chosen such that $\delta/\alpha \to 0$ as $\delta \to 0$, the integer $n$ is chosen so that $n\,\mu^{-rn/d} = O(\sqrt{\alpha}\,\delta)$ as $\delta \to 0$, and $u \in \mathcal{R}(\mathcal{K}^*)$, then
$$\|u - u^\delta_{\alpha,n}\|_\infty \to 0 \quad \text{as } \delta \to 0,\ \alpha \to 0. \tag{11.96}$$

(2) If hypothesis (H1) holds, then
$$\|u - u^\delta_{\alpha,n}\|_\infty = O\!\left(\alpha^\nu + \frac{\delta}{\alpha} + \frac{n\,\mu^{-rn/d}}{\alpha^{3/2}}\right) \quad \text{as } \delta \to 0. \tag{11.97}$$

Proof (1) Using the choices of $\alpha$ and $n$ in estimate (11.95) leads to
$$\|u - u^\delta_{\alpha,n}\|_\infty \le \|u - u_\alpha\|_\infty + c\,\frac{\delta}{\alpha}.$$
By the first result of Lemma 11.16, the first term on the right-hand side tends to zero as $\alpha \to 0$, proving the result.

(2) Since
$$(\mathcal{K}^*\mathcal{K})^\nu\mathcal{K}^* = \mathcal{K}^*(\mathcal{K}\mathcal{K}^*)^\nu$$
(cf. p. 16 of [116]), we obtain $\mathcal{R}(\mathcal{A}^\nu\mathcal{K}^*) \subseteq \mathcal{R}(\mathcal{K}^*)$. When hypothesis (H1) holds, we have $u \in \mathcal{R}(\mathcal{K}^*)$, so by Theorem 11.25 estimate (11.95) holds. Using the second result of Lemma 11.16, estimate (11.95) reduces to the desired result.

11.3.3 Regularization parameter choice strategies

Solving equation (11.79) (or (11.81)) numerically requires an appropriate choice of the regularization parameter $\alpha$. In this subsection we present a priori and


a posteriori strategies for the choice of this parameter and estimate the error bounds of the corresponding approximate solutions.

We first present an a priori parameter choice strategy. We propose to choose the parameter $\alpha$ so that the right-hand side of estimate (11.97) is minimized. Specifically, we suppose that $u \in \mathcal{R}(\mathcal{A}^\nu\mathcal{K}^*)$ with $0 < \nu \le 1$, where we call $\nu$ the smoothness order of the exact solution $u$, and propose the following rule for the choice of the parameter $\alpha$.

Rule 11.27 Given the noise bound $\delta > 0$ and the smoothness order $\nu \in (0,1]$ of the exact solution $u$, choose $\alpha$ to satisfy
$$\alpha \sim \delta^{\frac{1}{\nu+1}}$$
and a positive integer $n$ to satisfy
$$n\,\mu^{-rn/d} = O(\sqrt{\alpha}\,\delta) \quad \text{as } \delta \to 0.$$
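Rule 11.27 can be sketched with the concrete numbers used in the experiment of Section 11.4.2 ($\mu = 2$, $r = 3/2$, $d = 1$, $\nu = 1$, $\alpha = 0.005\delta^{1/2}$, threshold $15\delta^{5/4}$); the proportionality constants are the choices made there, not part of the rule itself.

```python
import math

# Rule 11.27 in the setting mu = 2, r = 3/2, d = 1, nu = 1: alpha ~ sqrt(delta),
# and n is the smallest integer with n * 2^{-3n/2} below the O(sqrt(alpha)*delta)
# threshold (here 15 * delta^{5/4}, as in Section 11.4.2).

def apriori_choice(delta, mu=2.0, r=1.5, d=1.0):
    alpha = 0.005 * math.sqrt(delta)
    n = 1
    while n * mu ** (-r * n / d) > 15.0 * delta ** 1.25:
        n += 1
    return alpha, n

for e in (1, 3, 5, 7, 9):
    delta = 0.09488 * e / 100       # the noise levels of Section 11.4.2
    print(e, apriori_choice(delta))
```

This reproduces the pairs $(\alpha, n)$ reported in Table 11.4, in particular $n = 8, 7, 6, 5, 5$ for $e = 1, 3, 5, 7, 9$.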

Rule 11.27 uses the a priori parameter $\nu$, which is normally not available; for this reason, Rule 11.27 is called an a priori strategy. If there is a way to obtain $\nu$, the parameter $\alpha$ and the integer $n$ chosen by Rule 11.27 are used in equation (11.81) to obtain an approximate solution $u^\delta_{\alpha,n}$. In the next theorem we present the convergence rate of the approximate solution corresponding to the above choice of $\alpha$.

Theorem 11.28 Suppose that $K \in W^{r,\infty}(\Omega\times\Omega)$ with $0 < r \le k$. If $\alpha$ and $n$ are chosen according to Rule 11.27 and hypothesis (H1) is satisfied, then
$$\|u - u^\delta_{\alpha,n}\|_\infty = O\!\left(\delta^{\frac{\nu}{\nu+1}}\right) \quad \text{as } \delta \to 0. \tag{11.98}$$

Proof When $\alpha$ and $n$ are chosen according to Rule 11.27, a straightforward computation confirms that inequality (11.90) is satisfied. The assumption on $K$ in this theorem, together with Proposition 11.17, ensures that hypothesis (H2) is satisfied. Hence the hypotheses of Corollary 11.26 hold. Substituting the choices of $\alpha$ and $n$ into the right-hand side of (11.97) in Corollary 11.26, we obtain estimate (11.98).

Because the smoothness order $\nu$ of the exact solution is normally not known, it is desirable to develop a strategy for choosing the parameter $\alpha$ without a priori information on $\nu$. Next we present an a posteriori parameter choice strategy. The idea of the strategy was first used in [216] and recently developed further in [207].

For a given $\alpha > 0$ we let $n(\alpha)$ be a positive integer associated with $\alpha$. Following [216], we assume that there exist two increasing continuous functions


$\varphi(\alpha)$ and $\lambda(\alpha)$ with $\varphi(0) = 0$ and $\lambda(0) = 0$ such that
$$\|u - u^\delta_{\alpha,n(\alpha)}\|_\infty \le \varphi(\alpha) + \frac{\delta}{\lambda(\alpha)}. \tag{11.99}$$
Assumption (11.99) leads us to the choice
$$\alpha = \alpha_{\mathrm{opt}} := (\varphi\lambda)^{-1}(\delta).$$
With this choice of the parameter we have
$$\|u - u^\delta_{\alpha_{\mathrm{opt}},\,n(\alpha_{\mathrm{opt}})}\|_\infty \le 2\varphi\big((\varphi\lambda)^{-1}(\delta)\big). \tag{11.100}$$

Observing that
$$\alpha_{\mathrm{opt}} = \max\left\{\alpha : \varphi(\alpha) \le \frac{\delta}{\lambda(\alpha)}\right\},$$
in practice we select the regularization parameter from a finite set
$$M(N) = \left\{\alpha_i : \alpha_i \in \mathcal{N},\ \varphi(\alpha_i) \le \frac{\delta}{\lambda(\alpha_i)}\right\},$$
where
$$\mathcal{N} = \{\alpha_i : 0 < \alpha_0 < \alpha_1 < \cdots < \alpha_N\}$$
is a given set of $N+1$ distinct positive numbers, to be specified later. We consider
$$\alpha^* = \max\{\alpha_i : \alpha_i \in M(N)\}$$
as an approximation of $\alpha_{\mathrm{opt}}$ under appropriate conditions.

When $\varphi$ is unknown, the above choice of the parameter is not feasible. It was suggested in [207, 216] to introduce the set
$$M^+(N) = \left\{\alpha_j : \alpha_j \in \mathcal{N},\ \|u^\delta_{\alpha_j,\,n(\alpha_j)} - u^\delta_{\alpha_i,\,n(\alpha_i)}\|_\infty \le 4\,\frac{\delta}{\lambda(\alpha_i)},\ i = 0,1,\dots,j\right\}$$
to replace $M(N)$, and to choose
$$\alpha^+ = \max\{\alpha_i : \alpha_i \in M^+(N)\}$$
as an approximation of $\alpha^*$ accordingly. We have the next lemma, which is essentially Theorem 2.1 in [216].

Lemma 11.29 Suppose that estimate (11.99) holds. If $M(N) \ne \emptyset$, $\mathcal{N}\setminus M(N) \ne \emptyset$, and for each $\alpha_i \in \mathcal{N}$, $i = 1,2,\dots,N$, the function $\lambda(\alpha)$ satisfies $\lambda(\alpha_i) \le q\,\lambda(\alpha_{i-1})$ for a fixed constant $q$, then
$$\|u - u^\delta_{\alpha^+,\,n(\alpha^+)}\|_\infty \le 6q\,\varphi\big((\varphi\lambda)^{-1}(\delta)\big). \tag{11.101}$$


The general result stated in the above lemma will be used to develop an a posteriori parameter choice strategy. To this end, for any fixed $\alpha$ we choose $n = n(\alpha)$ to be the smallest positive integer satisfying the condition
$$n\,\mu^{-rn/d} \le \min\left\{c_1\sqrt{\alpha}\,\delta,\ \frac{1}{2c_0}\,\frac{\alpha^{3/2}}{\sqrt{\alpha} + M^2}\right\}. \tag{11.102}$$

Note that, according to the condition above, $n(\alpha)$ is uniquely determined by the variable $\alpha$. If hypotheses (H1) and (H2) hold, then for such a pair $(\alpha, n)$ the estimate (11.97) holds. From (11.97), there exist positive constants $c_2, c_3$ such that
$$\|u - u^\delta_{\alpha,n(\alpha)}\|_\infty \le c_2\alpha^\nu + c_3\,\frac{\delta}{\alpha}. \tag{11.103}$$

We choose $\varphi(\alpha) = c_2\alpha^\nu$, $\lambda(\alpha) = \alpha/c_3$ and, for $q_0 > 1$ and $\rho > 0$, a positive integer $N$ determined by
$$\rho\delta q_0^{N-1} \le 1 < \rho\delta q_0^{N},$$
and, as in [207, 216], specify the finite set by
$$\mathcal{N} = \{\alpha_i = \rho\delta q_0^{i} : i = 0,1,\dots,N\}.$$
The introduction of the parameter $\rho$ allows a larger degree of freedom in the choice of the regularization parameter.

We now have the following rule for choosing the regularization parameter $\alpha = \alpha^+$.

Rule 11.30 Choose $\alpha = \alpha^+$ by
$$\alpha^+ = \max\left\{\alpha_j \in \mathcal{N} : \|u^\delta_{\alpha_j,\,n(\alpha_j)} - u^\delta_{\alpha_i,\,n(\alpha_i)}\|_\infty \le 4c_3\,\frac{\delta}{\alpha_i},\ i = 0,1,\dots,j\right\}. \tag{11.104}$$
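Rule 11.30 can be illustrated on a toy scalar model in which the deviation of the computed solution from the exact one is taken to be $c_2\alpha - c_3\delta/\alpha$ (a signed analogue of the bound (11.103) with $\nu = 1$), so that the norms of differences in (11.104) are exactly computable; the model and all constants are illustrative, not from the text.

```python
import math

# Rule 11.30 on a toy model: err(alpha) = c2*alpha - c3*delta/alpha plays the
# role of u^delta_{alpha,n(alpha)} - u, on the grid alpha_i = rho*delta*q0^i.

def choose_alpha_plus(delta, rho=1.0, q0=2.0, c2=1.0, c3=1.0):
    N = 0
    while rho * delta * q0 ** N <= 1.0:  # rho*delta*q0^(N-1) <= 1 < rho*delta*q0^N
        N += 1
    grid = [rho * delta * q0 ** i for i in range(N + 1)]
    err = lambda a: c2 * a - c3 * delta / a
    accepted = [a_j for j, a_j in enumerate(grid)
                if all(abs(err(a_j) - err(grid[i])) <= 4.0 * c3 * delta / grid[i]
                       for i in range(j + 1))]
    return max(accepted)

print(choose_alpha_plus(1e-3))  # 0.064, within a small factor of sqrt(delta)
```

For $\delta = 10^{-3}$ the rule selects $\alpha^+ = 0.064$, of the order of the optimal value $\sqrt{\delta} \approx 0.032$, as the theory predicts.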

Rule 11.30 uses neither the smoothness order $\nu$ of the exact solution nor any other a priori information; hence it is an a posteriori strategy for choosing the regularization parameter. In the next theorem we present the convergence order of the approximate solution corresponding to the above choice of $\alpha^+$.

Theorem 11.31 Suppose that hypotheses (H1) and (H2) hold. If $\alpha = \alpha^+$ is chosen according to Rule 11.30, then
$$\|u - u^\delta_{\alpha^+,\,n(\alpha^+)}\|_\infty = O\!\left(\delta^{\frac{\nu}{\nu+1}}\right) \quad \text{as } \delta \to 0. \tag{11.105}$$

Proof This theorem is a direct consequence of Lemma 11.29; it suffices to verify that the hypotheses of the lemma are satisfied.


First of all, according to (11.103), estimate (11.99) holds with $\varphi(\alpha) = c_2\alpha^\nu$ and $\lambda(\alpha) = \alpha/c_3$. It can be verified that $\lambda(\alpha_i) = q_0\lambda(\alpha_{i-1})$ for $i = 1,2,\dots,N$, and that, as $\delta \to 0$,
$$\frac{c_2}{c_3}\,\rho^{\nu+1}\delta^{\nu} \le 1 \quad \text{and} \quad \frac{c_2}{c_3} > \delta.$$
From this we have
$$\varphi(\alpha_0) \le \frac{\delta}{\lambda(\alpha_0)} \quad \text{and} \quad \varphi(\alpha_N) > \frac{\delta}{\lambda(\alpha_N)}.$$
We conclude that $\alpha_0 \in M(N)$ and $\alpha_N \notin M(N)$, and thus
$$M(N) \ne \emptyset \quad \text{and} \quad \mathcal{N}\setminus M(N) \ne \emptyset.$$
We have proved that all hypotheses of Lemma 11.29 are satisfied. Therefore, from (11.101) we have
$$\|u - u^\delta_{\alpha^+,\,n(\alpha^+)}\|_\infty \le 6q_0\,\varphi\big((\varphi\lambda)^{-1}(\delta)\big) = c_4\,\delta^{\frac{\nu}{\nu+1}}, \tag{11.106}$$
with $c_4 = 6q_0\,c_2^{\frac{1}{\nu+1}} c_3^{\frac{\nu}{\nu+1}}$.

11.4 Numerical experiments

In this section we present numerical results that verify the efficiency of the methods proposed in Sections 11.2 and 11.3.

11.4.1 A numerical example for the multiscale Galerkin method

We now present a numerical example to illustrate the method proposed in Section 11.2 and to confirm the theoretical estimates established in that section.

For this purpose we consider the integral operator $\mathcal{K} : L^2[0,1] \to L^2[0,1]$ defined by
$$(\mathcal{K}u)(x) = \int_0^1 K_1(x,t)\,u(t)\,dt, \quad x \in [0,1], \tag{11.107}$$
with the kernel
$$K_1(x,t) = \begin{cases} x(1-t), & 0 \le x \le t \le 1, \\ t(1-x), & 0 \le t < x \le 1. \end{cases}$$

The operator $\mathcal{K}$ is positive semi-definite on $L^2[0,1]$, and it is a linear, compact, self-adjoint operator from $L^2[0,1]$ to $L^2[0,1]$. We consider the


integral equation of the first kind (11.3) with
$$f(x) = \frac{5}{12}\,x(x-1)(x^2-x-1), \quad x \in [0,1].$$
Clearly, the unique solution of equation (11.3) is given by
$$u^*(x) = 5x(1-x), \quad x \in [0,1].$$
Moreover, $u^* = \mathcal{K}\omega \in \mathcal{R}(\mathcal{K})$ with $\omega = 10$. This means that condition (H-1) is satisfied with $\nu = 1$.
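These claims are easy to confirm by quadrature: $\mathcal{K}$ maps the constant $\omega = 10$ to $u^*$, and maps $u^*$ to $f$. A minimal sketch (composite Simpson on each smooth piece of the kernel; the grid sizes and tolerances are illustrative):

```python
# Verify K(10) = u* and K(u*) = f for the Green's-function kernel K1.

def K1(x, t):
    return x * (1.0 - t) if x <= t else t * (1.0 - x)

def simpson(g, a, b, m=200):           # composite Simpson, m even subintervals
    h = (b - a) / m
    s = g(a) + g(b) + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, m))
    return s * h / 3.0

def apply_K(v, x):                      # split at t = x, where K1 has a kink
    return simpson(lambda t: K1(x, t) * v(t), 0.0, x) + \
           simpson(lambda t: K1(x, t) * v(t), x, 1.0)

u_star = lambda x: 5.0 * x * (1.0 - x)
f = lambda x: 5.0 / 12.0 * x * (x - 1.0) * (x * x - x - 1.0)

for x in (0.25, 0.5, 0.7):
    print(x, apply_K(lambda t: 10.0, x) - u_star(x), apply_K(u_star, x) - f(x))
```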

Let $X_n$ be the space of piecewise linear functions on $[0,1]$ with knots at $j/2^n$, $j = 1,2,\dots,2^n-1$. We decompose $X_n$ into the orthogonal direct sum of subspaces
$$X_n = X_0 \oplus^{\perp} W_1 \oplus^{\perp} \cdots \oplus^{\perp} W_n,$$
where $X_0$ is the space of linear polynomials on $[0,1]$ and the subspaces $W_i$, $i = 2,3,\dots,n$, are constructed recursively once the initial space $W_1$ is given. We choose a basis for $X_0$,
$$w_{00}(t) = 2 - 3t \quad \text{and} \quad w_{01}(t) = -1 + 3t,$$

and a basis for the space $W_1$:
$$w_{10}(t) = \begin{cases} 1 - \frac{9}{2}t, & t \in [0,\frac12], \\[2pt] -1 + \frac{3}{2}t, & t \in (\frac12,1], \end{cases}
\qquad
w_{11}(t) = \begin{cases} \frac12 - \frac{3}{2}t, & t \in [0,\frac12], \\[2pt] -\frac{7}{2} + \frac{9}{2}t, & t \in (\frac12,1]. \end{cases}$$
It follows from a simple computation that

$$(\mathcal{K}^2u)(x) = \int_0^1 K_2(x,t)\,u(t)\,dt, \quad x \in [0,1], \tag{11.108}$$
where
$$K_2(x,t) = \begin{cases} x(t-1)(-2t+t^2+x^2)/6, & 0 \le x \le t \le 1, \\ t(x-1)(t^2-2x+x^2)/6, & 0 \le t < x \le 1. \end{cases}$$

We remark that $K_2 \in C^2([0,1]\times[0,1])$. For the orthogonal projection $\mathcal{P}_n : L^2[0,1] \to X_n$ we have
$$\|(I - \mathcal{P}_n)\mathcal{K}^2\| \le \frac{\sqrt{2}}{2^{2n+1}}.$$


Moreover, by the moment inequality in [220] we know that the approximation properties (11.20) and (11.21) are valid with
$$\theta_n = \frac{\sqrt{2\sqrt{2}}}{2^{n+1}}.$$

In the MAM we choose
$$n = k + m \quad \text{with } k = 3,\ m = 3.$$
Hence we use the decomposition
$$X_{3+3} = X_3 \oplus^{\perp} W_{3+1} \oplus^{\perp} W_{3+2} \oplus^{\perp} W_{3+3}$$
and
$$\theta_6 = \frac{\sqrt{2\sqrt{2}}}{2^{7}}.$$

We choose a perturbed right-hand side
$$f^\delta = f + \delta v, \tag{11.109}$$
where $v \in X$ has uniformly distributed random values with $\|v\| \le 1$, and where
$$\delta = \frac{e}{100}\,\|f\| \quad \text{with } e \in \{0.125,\ 0.25,\ 0.5,\ 1.0,\ 1.5,\ 2.0\}.$$

We now present the numerical results of our experiments in Tables 11.1–11.3. The numerical results in Table 11.1 are for an a priori parameter choice: according to Theorem 11.7, since $\nu = 1$ in this example, we choose the a priori parameter
$$\alpha = 0.01 * \theta_k \quad \text{with } k = 3.$$
Table 11.2 shows numerical results for an a priori parameter choice with the error level $\delta$ fixed, that is, $e = 0.25$; it illustrates that the MAM takes effect, since the absolute error $\|u^\delta_{\alpha,3,m} - u^*\|$ decreases as $m$ increases. In Tables 11.1 and 11.2 we also compare the MAM with the Galerkin method. Table 11.3 is associated with an

Table 11.1 Numerical results for the a priori parameter with $\alpha = 0.01 * \theta_3$

e      $\|u^\delta_{\alpha,3,3} - u^*\|$   $\|u^\delta_{\alpha,3+3} - u^*\|$   $\|u^\delta_{\alpha,3,3} - u^*\| - \|u^\delta_{\alpha,3+3} - u^*\|$
2.0    1.3412×10^{-2}                      1.3412×10^{-2}                      4.7452×10^{-7}
1.5    1.1230×10^{-2}                      1.1229×10^{-2}                      1.7932×10^{-7}
1.0    1.0549×10^{-2}                      1.0549×10^{-2}                      7.9820×10^{-8}
0.5    9.9053×10^{-3}                      9.9053×10^{-3}                      1.3435×10^{-8}
0.25   9.8348×10^{-3}                      9.8348×10^{-3}                      −3.9297×10^{-10}


Table 11.2 Numerical results for the a priori parameter with $e = 0.25$ and $\alpha = 0.01 * \theta_3$

m   $\|u^\delta_{\alpha,3,m} - u^*\|$   $\|u^\delta_{\alpha,3+m} - u^*\|$   $\|u^\delta_{\alpha,3,m} - u^*\| - \|u^\delta_{\alpha,3+m} - u^*\|$
0   1.1447×10^{-2}                      1.1447×10^{-2}                      0
1   1.0241×10^{-2}                      1.0059×10^{-2}                      1.3841×10^{-2}
2   9.9703×10^{-3}                      9.9698×10^{-3}                      5.7395×10^{-7}
3   9.9645×10^{-3}                      9.9645×10^{-3}                      5.5529×10^{-9}

Table 11.3 Numerical results for the a posteriori parameter with $\theta_{k+m} = \theta_6$

e       $\alpha$          $\|u^\delta_{\alpha,3,3} - u^*\|$
2.0     9.0117×10^{-3}    7.5321×10^{-2}
1.5     9.0117×10^{-3}    7.5578×10^{-2}
1.0     3.0039×10^{-3}    2.7045×10^{-2}
0.5     3.0039×10^{-3}    2.7188×10^{-2}
0.25    3.0039×10^{-3}    2.7470×10^{-2}
0.125   1.0013×10^{-3}    9.5639×10^{-3}

a posteriori parameter choice, for which we use Algorithm 11.12 to determine the parameter $\alpha$. The numerical results in these three tables demonstrate that the MAM is an efficient method for solving ill-posed problems.

11.4.2 Numerical examples for the multiscale collocation method

In this subsection we present numerical results that demonstrate the efficiency and accuracy of the method proposed in Section 11.3.

We consider the integral operator $\mathcal{K} : L^\infty[0,1] \to L^\infty[0,1]$ defined by
$$(\mathcal{K}u)(s) = \int_0^1 K(s,t)\,u(t)\,dt, \quad s \in [0,1], \tag{11.110}$$
where
$$K(s,t) = \begin{cases} s(1-t), & 0 \le s \le t \le 1, \\ t(1-s), & 0 \le t < s \le 1. \end{cases}$$

Since the kernel $K$ is continuous on $[0,1]\times[0,1]$, the operator $\mathcal{K}$ is compact on $L^\infty[0,1]$. Hence the integral equation (11.64), that is, $\mathcal{K}u = f$ with this kernel, is ill posed. We solve this equation using the fast piecewise linear collocation method and present the numerical results produced with both a priori and a posteriori regularization parameter choices.


Now we describe the approximation spaces $X_n$ and the corresponding collocation functional spaces $\mathcal{L}_n$. Specifically, for each positive integer $n$ we choose $X_n$ to be the space of piecewise linear polynomials on $[0,1]$ with knots at $j/2^n$, $j = 1,2,\dots,2^n-1$. Here $\dim X_n = 2^{n+1}$, and $\mu = 2$ in the notation of the general case. We then decompose $X_n$ into an orthogonal direct sum of subspaces of different scales,
$$X_n = X_0 \oplus^{\perp} W_1 \oplus^{\perp} \cdots \oplus^{\perp} W_n,$$
where $X_0$ has the basis
$$w_{00}(s) = 2 - 3s \quad \text{and} \quad w_{01}(s) = -1 + 3s,$$
and $W_1$ has the basis
$$w_{10}(s) = \begin{cases} 1 - \frac{9}{2}s, & s \in [0,\frac12], \\[2pt] -1 + \frac{3}{2}s, & s \in (\frac12,1], \end{cases}
\qquad
w_{11}(s) = \begin{cases} \frac12 - \frac{3}{2}s, & s \in [0,\frac12], \\[2pt] -\frac{7}{2} + \frac{9}{2}s, & s \in (\frac12,1]. \end{cases}$$

The spaces $W_i = \mathrm{span}\{w_{ij} : j \in \mathbb{Z}_{2^i}\}$ are generated recursively from $W_1$, following the general construction developed in [66, 72, 200, 201]. The collocation functional space $\mathcal{L}_n$ corresponding to $X_n$ is constructed

likewise. For any $s \in [0,1]$ we use $\delta_s$ to denote the linear functional defined for functions $v \in C[0,1]$ by $\langle \delta_s, v\rangle = v(s)$ and extended to all functions in $L^\infty[0,1]$ by the Hahn–Banach theorem. We let $\mathcal{L}_0 = \mathrm{span}\{\ell_{00}, \ell_{01}\}$ with $\ell_{00} = \delta_{1/3}$, $\ell_{01} = \delta_{2/3}$, and decompose $\mathcal{L}_n$ as
$$\mathcal{L}_n = \mathcal{L}_0 \oplus V_1 \oplus \cdots \oplus V_n,$$
where $V_1 = \mathrm{span}\{\ell_{10}, \ell_{11}\}$ with
$$\ell_{10} = -\frac32\,\delta_{1/3} + \frac12\,\delta_{2/3} + \delta_{1/6}, \qquad \ell_{11} = \frac12\,\delta_{1/3} - \frac32\,\delta_{2/3} + \delta_{5/6},$$

and the spaces
$$V_i = \mathrm{span}\{\ell_{ij} : j \in \mathbb{Z}_{2^i}\}$$
are constructed recursively from $V_1$. The approximation spaces and the corresponding collocation functionals have the properties outlined in Section 11.3.2.
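The level-0 and level-1 objects given above can be checked directly: the matrix $[\langle \ell_{i'j'}, w_{ij}\rangle]$ is upper triangular with unit diagonal (the pattern behind the sparsity of $\mathbf{E}_n$), and $\ell_{10}$, $\ell_{11}$ annihilate linear polynomials. A small sketch:

```python
# Check semi-bi-orthogonality of the level-0/level-1 basis functions and
# collocation functionals, and the vanishing of l_10, l_11 on linear
# polynomials.

def w00(s): return 2.0 - 3.0 * s
def w01(s): return -1.0 + 3.0 * s
def w10(s): return 1.0 - 4.5 * s if s <= 0.5 else -1.0 + 1.5 * s
def w11(s): return 0.5 - 1.5 * s if s <= 0.5 else -3.5 + 4.5 * s

def l00(v): return v(1.0 / 3.0)
def l01(v): return v(2.0 / 3.0)
def l10(v): return -1.5 * v(1.0 / 3.0) + 0.5 * v(2.0 / 3.0) + v(1.0 / 6.0)
def l11(v): return 0.5 * v(1.0 / 3.0) - 1.5 * v(2.0 / 3.0) + v(5.0 / 6.0)

# Gram matrix <l_{i'j'}, w_{ij}>: rows are functionals, columns basis functions.
G = [[l(w) for w in (w00, w01, w10, w11)] for l in (l00, l01, l10, l11)]
for row in G:
    print([round(x, 12) for x in row])
```

The printed matrix has ones on the diagonal and zeros below it, matching the claim that $\mathbf{E}_n$ is sparse and upper triangular.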

By direct computation we obtain
$$\|\mathcal{K}\|_\infty = \|\mathcal{K}^*\|_\infty \le \max_{0\le t\le 1}\int_0^1 |K(t,s)|\,ds = \frac18.$$
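The bound follows because $\int_0^1 K(s,t)\,dt = s(1-s)/2$, which attains its maximum $1/8$ at $s = 1/2$; a short numerical confirmation:

```python
# max over s of int_0^1 K(s,t) dt = s(1-s)/2 for the Green's-function kernel.

def row_integral(s):
    # the two smooth pieces of K(s, .) integrate in closed form
    return (1.0 - s) * s * s / 2.0 + s * (1.0 - s) ** 2 / 2.0  # = s(1-s)/2

vals = [row_integral(j / 1000.0) for j in range(1001)]
print(max(vals))  # 0.125, attained at s = 1/2
```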


Moreover, since $X_n$ consists of the piecewise linear polynomials on $[0,1]$ with knots at $j/2^n$, $j = 1,2,\dots,2^n-1$, it is easily verified that
$$\|\mathcal{K}(I - \mathcal{Q}_n)\|_\infty \le \frac{\sqrt{2}}{3}\,2^{-\frac32 n}, \qquad \|(I - \mathcal{P}_n)\mathcal{K}^*\|_\infty \le \frac18\,2^{-2n},$$
$$\|\mathcal{K}(I - \mathcal{Q}_n)\|_2 \le \frac{\sqrt{2}}{3}\,2^{-\frac32 n}, \qquad \|(I - \mathcal{P}_n)\mathcal{K}^*\|_2 \le \frac{2}{\sqrt{3}}\,2^{-2n},$$
and
$$\|(I - \mathcal{P}_n)\mathcal{K}^*\|_{L^2[0,1]\to L^\infty[0,1]} \le \frac{\sqrt{2}}{3}\,2^{-\frac32 n}.$$
Therefore hypothesis (H2) holds with $r = \frac32$.

For comparison purposes, in our numerical experiment we choose the right-hand side of the equation as
$$f(s) = \frac{1000}{40320}\,s(s-1)\,(-17-17s+11s^2+11s^3-3s^4-3s^5+s^6), \quad s \in [0,1],$$
so that we have the exact solution
$$u(s) = -\frac{1000}{720}\,s(s-1)\,(3+3s-2s^2-2s^3+s^4), \quad s \in [0,1].$$
Moreover,
$$u = \mathcal{A}\mathcal{K}^*\omega \in \mathcal{R}(\mathcal{A}\mathcal{K}^*) \quad \text{with } \omega = 1000.$$

Hence hypothesis (H1) is satisfied with $\nu = 1$. The perturbed right-hand side is chosen as $f^\delta = f + \delta v$, where $v \in L^2[0,1]$ has uniformly distributed random values with $\|v\|_2 = 1$, and $\delta = \|f\|_2 \cdot e/100 = 0.09488 \cdot e/100$ with $e \in \{1, 3, 5, 7, 9\}$. The linear system of the multiscale regularized equation is solved by the augmentation method described in [71].
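The pair $(u, f)$ can be checked by quadrature: $\mathcal{K}u = f$, and $\|f\|_2 \approx 0.09488$, the value used above to calibrate the noise. (The scalings $1000/40320$ and $1000/720$, consistent with $\omega = 1000$, are a reconstruction of the garbled source; the sketch below confirms them numerically.)

```python
# Verify K u = f and ||f||_2 ~ 0.09488 for the collocation example.

def K(s, t):
    return s * (1.0 - t) if s <= t else t * (1.0 - s)

def u(s):
    return -1000.0 / 720.0 * s * (s - 1.0) * (3 + 3*s - 2*s**2 - 2*s**3 + s**4)

def f(s):
    return 1000.0 / 40320.0 * s * (s - 1.0) * (
        -17 - 17*s + 11*s**2 + 11*s**3 - 3*s**4 - 3*s**5 + s**6)

def simpson(g, a, b, m=400):
    h = (b - a) / m
    return (g(a) + g(b)
            + sum((4 if k % 2 else 2) * g(a + k * h) for k in range(1, m))) * h / 3.0

def Ku(s):  # split the integral at t = s, where the kernel has a kink
    return simpson(lambda t: K(s, t) * u(t), 0.0, s) + \
           simpson(lambda t: K(s, t) * u(t), s, 1.0)

print(max(abs(Ku(s) - f(s)) for s in (0.2, 0.5, 0.8)))      # ~ 0 (quadrature error)
print(simpson(lambda s: f(s) ** 2, 0.0, 1.0, 2000) ** 0.5)  # ~ 0.0949
```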

We present the results of two numerical experiments. The goal of the first experiment is to confirm the convergence result for the a priori parameter choice. For a given $\delta$, following Rule 11.27 we choose $\alpha = 0.005\,\delta^{1/2}$ and $n$ such that $n2^{-3n/2} \le 15\delta^{5/4}$ and $(n-1)2^{-3(n-1)/2} > 15\delta^{5/4}$. The numerical results are listed in Table 11.4, in which the "compression rates" are computed by dividing the number of nonzero entries of the matrix by the total number of entries of the matrix. Table 11.4 shows that the computed convergence rate is $O(\delta^{1/2})$ for $\nu = 1$, which is consistent with the theoretical estimate given in Theorem 11.28. In Figure 11.1 we illustrate the approximate solutions for different parameters.



Table 11.4 A priori parameter choices

e   alpha = 0.005*delta^(1/2)   n   Compression rate   ||u - u^delta_{alpha,n}||_inf   ||u - u^delta_{alpha,n}||_inf / delta^(1/2)
1   1.5401e-4                   8   0.0145             0.0260                          0.8434
3   2.6676e-4                   7   0.0286             0.0395                          0.7398
5   3.4435e-4                   6   0.0557             0.0547                          0.7949
7   4.0749e-4                   5   0.1055             0.0707                          0.8680
9   4.6204e-4                   5   0.1055             0.0892                          0.9657
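The a priori parameter choices in Table 11.4 can be recomputed directly from Rule 11.27 as stated in the text. The following short Python sketch (an illustration of ours; the constants 0.005 and 15 and the noise model are taken from the text above) reproduces the $\alpha$ and $n$ columns of the table.

```python
import math

def a_priori_parameters(e):
    """Rule 11.27 as stated in the text: alpha = 0.005*delta**(1/2) and the
    smallest n with n*2**(-3n/2) <= 15*delta**(5/4)."""
    delta = 0.09488 * e / 100          # noise level delta = ||f||_2 * e/100
    alpha = 0.005 * math.sqrt(delta)
    n = 1
    while n * 2.0 ** (-1.5 * n) > 15 * delta ** 1.25:
        n += 1
    return alpha, n

for e in (1, 3, 5, 7, 9):
    alpha, n = a_priori_parameters(e)
    print(f"e = {e}: alpha = {alpha:.4e}, n = {n}")
```

The computed values of $n$ agree with Table 11.4 exactly, and the computed $\alpha$ agrees with the tabulated values to roughly four significant digits (small last-digit differences are due to the rounded value 0.09488 used for $\|f\|_2$).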

Figure 11.1 (a) The original function; (b) the restored function with n = 8, e = 1 and alpha = 0.005*delta^(1/2); (c) the restored function with n = 6, e = 5 and alpha = 0.005*delta^(1/2); (d) the restored function with n = 5, e = 9 and alpha = 0.005*delta^(1/2). [Four panels plotted over 0 <= s <= 1 with vertical range 0 to 1.4; plots not reproduced here.]

In the second experiment we choose $\alpha = \alpha_+$ following Rule 11.30, with $\rho = 0.005$ and $q_0 = 3$. The numerical results are shown in Table 11.5 and illustrated in Figure 11.2. These results show that the a posteriori parameter choice gives results comparable with those of the a priori parameter choice. The computed convergence rate $O(\delta^{1/2})$ confirms the theoretical estimate given in Theorem 11.31 with $\nu = 1$.



Table 11.5 A posteriori parameter choices

e   alpha       n   Compression rate   ||u - u^delta_{alpha,n}||_inf   ||u - u^delta_{alpha,n}||_inf / delta^(1/2)
1   1.2809e-4   8   0.0145             0.0235                          0.7907
3   3.8426e-4   7   0.0286             0.0576                          0.7199
5   6.4044e-4   6   0.0557             0.0410                          0.7681
7   8.9662e-4   5   0.1055             0.0608                          0.7457
9   1.1528e-3   5   0.1055             0.0779                          0.8428

Figure 11.2 (a) The original function; (b) the restored function with n = 8, e = 1 and alpha = 1.2809e-4; (c) the restored function with n = 6, e = 5 and alpha = 6.4044e-4; (d) the restored function with n = 5, e = 9 and alpha = 1.1528e-3. [Four panels plotted over 0 <= s <= 1 with vertical range 0 to 1.4; plots not reproduced here.]

11.5 Bibliographical remarks

The material presented in this chapter was mainly chosen from the two papers [78, 79]. The a posteriori choice strategy for regularization parameters described in this chapter is based on the ideas developed in [207, 216]. For other developments of a priori and a posteriori parameter choice strategies, the reader is referred to [58, 118, 127–129, 191, 192, 206, 219, 220, 222, 224, 228, 248, 275].

We review several other recent developments in the numerical solution of ill-posed operator equations closely related to the research presented in this chapter. A class of regularization methods for discretized ill-posed problems was proposed in [222], together with a method to determine a priori and a posteriori parameters for the regularization. The papers [127–129, 220] studied numerical algorithms for regularization of the Galerkin method. In [231] an additive Schwarz iteration was presented for the fast solution of Tikhonov-regularized ill-posed problems. A two-level preconditioner was developed in [133, 134] for solving the discretized operator equation, and was extended in [233] to a general case. The work in [193] designed an adaptive discretization for Tikhonov regularization which has a sparse matrix structure. Multilevel methods were applied to solve ill-posed problems, and a priori and a posteriori parameter choice strategies were also established, in [103, 137, 177]. In particular, a wavelet-based matrix compression technique was used in [137]. Moreover, wavelet-based multilevel methods and cascadic multilevel methods for solving ill-posed problems were developed in [174] and [230], respectively. A multiscale analysis for ill-posed problems with semi-discrete Tikhonov regularization was established in [278]. For more information on fast solvers for ill-posed integral equations, the reader is referred to [54, 78, 103, 104, 133, 137, 157, 193, 231].

Finally, applications of ill-posed problems in science and engineering may be found in [106, 116, 173]. In particular, for applications of regularization in image processing, system identification and machine learning, the reader is referred to [188, 189, 237, 259], [33, 223] and [87, 195, 268], respectively.


12

Eigen-problems of weakly singular integral operators

In this chapter we consider solving the eigen-problem of a weakly singular integral operator $K$. As we know, the spectrum of a compact integral operator $K$ consists of a countable number of eigenvalues whose only accumulation point is zero, which may or may not itself be an eigenvalue. We explain in this chapter how multiscale methods can be used to compute the nonzero eigenvalues of $K$ rapidly and efficiently. We begin with a brief introduction to the subject.

12.1 Introduction

Many practical problems in science and engineering are formulated as eigen-problems of compact linear integral operators (cf. [44]). Standard numerical treatments of the eigen-problem normally discretize the compact integral operator into a matrix and then solve the eigen-problem of the resulting matrix. The computed eigenvalues and associated eigenvectors of the matrix are considered approximations of the corresponding eigenvalues and eigenvectors of the compact integral operator. In particular, the Galerkin, Petrov–Galerkin, collocation, Nyström and degenerate kernel methods are commonly used for the approximation of eigenvalues and eigenvectors of compact integral operators. It is well known that the matrix which results from a discretization of a compact integral operator is dense. Solving the eigen-problem of a dense matrix requires a significant amount of computational effort. Hence, fast algorithms for solving such problems are highly desirable.

We are interested in developing a fast collocation method for solving the eigen-problem of a compact linear integral operator $K$ on the space $L^\infty(\Omega)$ with a weakly singular kernel. Wavelet and multiscale methods were recently developed (see, for example, [28, 64, 67, 68, 95, 108, 202] and the references




cited therein) for the numerical solution of weakly singular Fredholm integral equations of the second kind. Some of them were discussed in the previous chapter. The essence of these methods is to approximate the dense matrix that results from the discretization of the integral operator by a sparse matrix and to solve the linear system with the sparse matrix. It has been proved that these methods have nearly linear computational complexity and optimal convergence (see Chapters 5–7). Among these methods, the fast collocation method receives favorable attention due to the lower computational cost of generating its sparse coefficient matrix (cf. [69] and Chapter 7).

The specific goal of this chapter is to develop a fast collocation method for finding eigenvalues and eigenvectors of a compact integral operator $K$ with a weakly singular kernel, based on the matrix compression technique. For the analysis of the fast method, we extend the classical spectral approximation theory [3, 44, 212] to a somewhat more general setting so that it is applicable to the scenario where the matrix compression technique is used.

We organize this chapter into seven sections. In Section 12.2 we present an abstract framework for a compact integral operator to be approximated by a sequence of $\nu$-convergent bounded linear operators. We develop in Section 12.3 the fast multiscale collocation method for solving the eigen-problem of a compact integral operator $K$ with a weakly singular kernel. We establish in Section 12.4 the optimal convergence rate for the approximate eigenvalues and generalized eigenvectors. In Section 12.5 we describe a power method for solving the eigen-problem of the compressed matrix which makes use of the sparsity of the compressed matrix. We present in Section 12.6 numerical results to confirm the convergence estimates. Finally, in Section 12.7 we make bibliographical remarks.

12.2 An abstract framework

We describe in this section an abstract framework for eigenvalue approximation of a compact operator in a Banach space. The results presented here are basically the classical spectral approximation theory [3, 44, 212]. However, we present them in a somewhat more general form so that they are applicable to the sparse matrix resulting from the multiscale collocation method with matrix compression.

Suppose that $X$ is a complex Banach space and $B(X)$ is the space of bounded linear operators from $X$ to $X$. For an operator $T \in B(X)$ we define its resolvent set by

$$\rho(T) = \{z : z \in \mathbb{C},\ (T - zI)^{-1} \in B(X)\}$$



and its spectrum by

$$\sigma(T) = \mathbb{C} \setminus \rho(T).$$

The resolvent operator $(T - zI)^{-1}$ is denoted by $R(T)$ and, when we wish to show its dependence on $z$, we write $R(T, z)$. Clearly, by definition, $R(T, z) \in B(X)$ on $\rho(T)$. The range space and null space of $T$ are defined, respectively, by

$$R(T) = \{Tx : x \in X\} \quad \text{and} \quad N(T) = \{x : Tx = 0,\ x \in X\}.$$

The central problem considered in this chapter is that, for a compact operator $T$ defined on a Banach space $X$, we wish to find $\lambda \in \sigma(T) \setminus \{0\}$ and $\phi \in X$ with $\|\phi\| = 1$ such that

$$T\phi = \lambda\phi. \tag{12.1}$$

As the eigenvalue problem (12.1) in general cannot be solved exactly, we consider solving the problem approximately. To this end, we require a sequence of subspaces $X_n$, $n \in \mathbb{N}$, which approximate the space $X$, and a sequence of operators $T_n$, $n \in \mathbb{N}$, which approximate the operator $T$, and consider the approximate eigen-problem: find $\lambda_n \in \sigma(T_n) \setminus \{0\}$ and $\phi_n \in X_n$ with $\|\phi_n\| = 1$ such that

$$T_n\phi_n = \lambda_n\phi_n. \tag{12.2}$$

We start with a discussion of the spectral approximation. For any closed (rectifiable) Jordan curve $\Gamma$ in $\rho(T)$, we use the notation $\|R(T)\|_\Gamma$ for the quantity $\max\{\|R(T, z)\| : z \in \Gamma\}$. We first propose an ancillary lemma.

Lemma 12.1 If $T, S \in B(X)$, $z \in \Gamma$, where $\Gamma$ is a closed Jordan curve in $\rho(T) \setminus \{\zeta : \zeta \in \mathbb{C},\ |\zeta| \le r\}$ for some $r > 0$, and

$$\mu = r^{-1}\|R(T)\|_\Gamma(1 + \|T - S\|) + 1, \tag{12.3}$$

then

$$\left\|[(T - S)R(T, z)]^2\right\| \le \mu\left(\|(T - S)T\| + \|(T - S)S\|\right). \tag{12.4}$$

Proof We compute

$$[(T - S)R(T, z)]^2 = (T - S)R(T, z)(T - S)R(T, z). \tag{12.5}$$

For any $z \in \Gamma$, direct computation confirms that

$$R(T, z) = z^{-1}[TR(T, z) - I]. \tag{12.6}$$

We substitute this formula and obtain the identity

$$[(T - S)R(T, z)]^2 = z^{-1}\left[(T - S)TR(T, z)(T - S) - (T - S)T + (T - S)S\right]R(T, z).$$

Using the triangle inequality proves the result.

The next lemma demonstrates how we shall use Lemma 12.1.

Lemma 12.2 If the hypotheses of Lemma 12.1 are satisfied and neither of the quantities $\|(T - S)T\|$ and $\|(T - S)S\|$ exceeds $(4\mu)^{-1}$, then $\Gamma \subseteq \rho(S) \setminus \{\zeta : |\zeta| \le r\}$ and the resolvent of $S$ satisfies the bound

$$\|R(S)\|_\Gamma \le 2\left(\|R(T)\|_\Gamma + 1\right)^2(1 + \|T - S\|). \tag{12.7}$$

Proof We choose $z \in \Gamma$ and introduce the operator

$$Q = [(T - S)R(T, z)]^2. \tag{12.8}$$

Our hypotheses and Lemma 12.1 imply that $\|Q\| \le 1/2$. Next we use the formula

$$R(S, z) = R(T, z)[I - (T - S)R(T, z)]^{-1} = R(T, z)[I + (T - S)R(T, z)]\sum_{j\in\mathbb{N}_0} Q^j$$

to obtain the estimate

$$\|R(S, z)\| \le 2\|R(T)\|_\Gamma\left[1 + \|T - S\|\,\|R(T)\|_\Gamma\right],$$

which proves the lemma.

The above lemmas are used in conjunction with the following notion of $\nu$-convergence of operators in $B(X)$.

Definition 12.3 Let $X$ be a Banach space and $T$, $T_n$, $n \in \mathbb{N}$, be in $B(X)$. The sequence $\{T_n : n \in \mathbb{N}\}$ is said to $\nu$-converge to $T$, denoted by $T_n \xrightarrow{\nu} T$, if

(i) the sequence $\{\|T_n\| : n \in \mathbb{N}\}$ is bounded;
(ii) $\|(T - T_n)T\| \to 0$;
(iii) $\|(T - T_n)T_n\| \to 0$.
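The three conditions of this definition are easy to compute in a finite-dimensional toy setting (an illustration of ours, not from the text): take $T$ to be the compact diagonal operator with entries $1/k$ and $T_n = P_nTP_n$ its finite section, where $P_n$ projects onto the first $n$ coordinates. In this simple example even $\|T - T_n\| \to 0$ holds; $\nu$-convergence matters precisely in settings, such as collocation, where norm convergence fails while (i)–(iii) still hold.

```python
import numpy as np

N = 400                                   # truncation used to emulate l^2
T = np.diag(1.0 / np.arange(1, N + 1))    # compact diagonal operator

def finite_section(n):
    """T_n = P_n T P_n, with P_n the projection onto the first n coordinates."""
    Tn = np.zeros_like(T)
    Tn[:n, :n] = T[:n, :n]
    return Tn

norm = lambda A: np.linalg.norm(A, 2)
for n in (10, 20, 40, 80):
    Tn = finite_section(n)
    # the three quantities of Definition 12.3: ||T_n||, ||(T-T_n)T||, ||(T-T_n)T_n||
    print(n, norm(Tn), norm((T - Tn) @ T), norm((T - Tn) @ Tn))
```

Here $\|(T - T_n)T\| = 1/(n+1)^2$ decays quadratically, and $(T - T_n)T_n = 0$ exactly, while $\|T_n\| = 1$ for all $n$.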

Lemmas 12.1 and 12.2 give us the following well-known fact (see, for example, [3]).



Proposition 12.4 If $X$ is a Banach space, $T_n \xrightarrow{\nu} T$ in $B(X)$, and $\Gamma$ is a closed Jordan curve in $\rho(T) \setminus \{\zeta : |\zeta| \le r\}$, $r > 0$, then there is a constant $c > 0$ such that, for all sufficiently large $n$,

$$\Gamma \subseteq \rho(T_n) \setminus \{\zeta : |\zeta| \le r\} \quad \text{and} \quad \|R(T_n)\|_\Gamma \le c.$$

Proof We apply Lemma 12.1 to $T$ and $S = T_n$ and see that the constant $\mu$ in (12.3) corresponding to this choice is bounded independently of $n$. Now we choose $n$ large enough so that both $\|(T - T_n)T_n\|$ and $\|(T - T_n)T\|$ do not exceed $(4\mu)^{-1}$. Therefore Lemma 12.2 gives us the desired conclusion.

For the next proposition we review the notion of the spectral projection. We assume that the spectrum of $T \in B(X)$ has an isolated point $\lambda$ and define the spectral projection associated with $T$ and $\lambda$ by

$$P = -\frac{1}{2\pi i}\int_\Gamma R(z)\,dz, \tag{12.9}$$

where $\Gamma$ is a closed Jordan curve in $\rho(T)$ enclosing $\lambda$ but no other point of $\sigma(T)$, and we simplify the notation to $R(z) = R(T, z)$. Clearly, $P$ does not depend on $\Gamma$ but on $\lambda$ only. It is also known that $P \in B(X)$ and, moreover, $P$ is a projection. Indeed, following the proof of Theorem 2.27 on p. 105 of [44], for example, we have that

$$P^2 = \frac{1}{(2\pi i)^2}\int_\Gamma\int_{\Gamma'} R(z)R(\zeta)\,dz\,d\zeta, \tag{12.10}$$

where $\Gamma'$ is a closed Jordan curve enclosing $\lambda$ and completely contained in the domain bounded by $\Gamma$. We rewrite the integrand in (12.10) in an equivalent form, using the resolvent identity, to obtain the formula

$$P^2 = \frac{1}{(2\pi i)^2}\int_\Gamma\int_{\Gamma'} \frac{R(\zeta) - R(z)}{\zeta - z}\,dz\,d\zeta. \tag{12.11}$$

From the choice of $\Gamma'$ we have, by the Cauchy integral formula, for $\zeta \in \Gamma'$ and $z \in \Gamma$, that

$$\int_\Gamma \frac{d\eta}{\eta - \zeta} = 2\pi i, \qquad \int_{\Gamma'} \frac{d\eta}{\eta - z} = 0.$$

Hence, by interchanging the order of integration in (12.11), we obtain

$$P^2 = -\frac{1}{2\pi i}\int_{\Gamma'} R(z)\,dz,$$

which proves $P$ is a projection.

When $T$ is a compact operator in $B(X)$ and $\lambda$ is a nonzero eigenvalue of $T$, a typical choice for the Jordan curve $\Gamma$ in (12.9) is one that includes only



$\lambda$ in its interior but not the origin. In that case $m = \dim R(P) < \infty$, and the $m \times m$ matrix obtained by restricting the range and domain of $T$ to $R(P)$ has $\lambda$ as an eigenvalue of algebraic multiplicity $m$. Moreover, the least integer $\ell$ for which $(T - \lambda I)^\ell = 0$ on $R(P)$ is called the ascent of $\lambda$. In other words, $\ell$ is the smallest positive integer such that $(T - \lambda I)^\ell P = 0$. Generally, the eigenspace $N(T - \lambda I)$ is a proper subspace of $R(P)$, and is equal to it if and only if $\ell = 1$.
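Formula (12.9) is directly computable for a small matrix (a sanity check of ours, using the convention $R(z) = (T - zI)^{-1}$ fixed above): approximating the contour integral by the trapezoidal rule on a circle around $\lambda$ produces a matrix $P$ with $P^2 = P$ and $\mathrm{trace}(P) = m$, and the ascent can be read off from $(T - \lambda I)^\ell P$.

```python
import numpy as np

# T has the eigenvalue 2 with algebraic multiplicity m = 2 (a Jordan block,
# so the ascent is also 2) and a second eigenvalue 0.5 left outside the contour.
T = np.array([[2.0, 1.0, 0.0],
              [0.0, 2.0, 0.0],
              [0.0, 0.0, 0.5]])
I = np.eye(3)

def spectral_projection(T, center, radius, m=200):
    """P = -(1/(2*pi*i)) * contour integral of R(z) = (T - zI)^{-1},
    approximated by the trapezoidal rule on a circle of the given radius."""
    theta = 2 * np.pi * np.arange(m) / m
    z = center + radius * np.exp(1j * theta)
    # dz = i*r*exp(i*theta)*dtheta, and -(1/(2*pi*i)) * i * (2*pi/m) = -1/m
    S = sum(np.linalg.inv(T - z[k] * I) * np.exp(1j * theta[k]) for k in range(m))
    return -(radius / m) * S

P = spectral_projection(T, center=2.0, radius=0.5)
print(np.round(P.real, 8))          # the projection onto the first two coordinates
print(np.trace(P).real)             # trace = algebraic multiplicity = 2
```

Here $(T - 2I)P \ne 0$ while $(T - 2I)^2P = 0$, exhibiting ascent $\ell = 2$.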

We need to compare the spectral projections of two distinct operators which are close in the sense of $\nu$-convergence. This will be accomplished with the next lemma.

Lemma 12.5 If $X$ is a Banach space, $T_1, T_2 \in B(X)$, $\Gamma$ is a closed Jordan curve in $\rho(T_1) \cap \rho(T_2) \setminus \{\zeta : \zeta \in \mathbb{C},\ |\zeta| \le r\}$ for some $r > 0$, and $P_1$, $P_2$ are the corresponding spectral projections of $T_1$ and $T_2$ with the same Jordan curve $\Gamma$, respectively, then

$$\|(P_1 - P_2)P_1\| \le \alpha_1\|(T_1 - T_2)|_{R(P_1)}\| \le \alpha_2\|(T_1 - T_2)T_1\|, \tag{12.12}$$

where

$$\alpha_1 = \|R(T_1)\|_\Gamma\|R(T_2)\|_\Gamma, \qquad \alpha_2 = \|R(T_1)\|_\Gamma^2\,\|R(T_2)\|_\Gamma\,\ell(\Gamma)/(2\pi r)$$

and $\ell(\Gamma)$ is the length of $\Gamma$.

Proof For $k = 1, 2$ we have that

$$P_k = -\frac{1}{2\pi i}\int_\Gamma R(T_k, z)\,dz.$$

Hence we obtain that

$$(P_1 - P_2)P_1 = -\frac{1}{2\pi i}\int_\Gamma [R(T_1, z) - R(T_2, z)]P_1\,dz = -\frac{1}{2\pi i}\int_\Gamma R(T_2, z)(T_2 - T_1)R(T_1, z)P_1\,dz = -\frac{1}{2\pi i}\int_\Gamma R(T_2, z)(T_2 - T_1)P_1R(T_1, z)\,dz.$$

Taking the norms of both sides of this equation leads to the inequality

$$\|(P_1 - P_2)P_1\| \le \|R(T_1)\|_\Gamma\|R(T_2)\|_\Gamma\|(T_2 - T_1)P_1\|, \tag{12.13}$$

which leads to the first inequality of (12.12). Next we estimate the last term on the right-hand side of this inequality. To this end we note that

$$(T_2 - T_1)P_1 = -\frac{1}{2\pi i}\int_\Gamma (T_2 - T_1)R(T_1, z)\,dz = -\frac{1}{2\pi i}\int_\Gamma (T_2 - T_1)(T_1R(T_1, z) - I)\,\frac{dz}{z}.$$

Since zero lies outside the domain bounded by $\Gamma$, the Cauchy integral formula says that

$$\int_\Gamma \frac{dz}{z} = 0,$$

and so the above equation simplifies to

$$(T_2 - T_1)P_1 = -\frac{1}{2\pi i}\int_\Gamma (T_2 - T_1)T_1R(T_1, z)\,\frac{dz}{z}.$$

Therefore, taking norms of both sides gives the inequality

$$\|(T_2 - T_1)P_1\| \le \frac{\ell(\Gamma)}{2\pi r}\|R(T_1)\|_\Gamma\|(T_2 - T_1)T_1\|. \tag{12.14}$$

Combining this inequality with inequality (12.13) proves the result.

From the above lemma we obtain the following proposition (see, for example, [3, 44, 167, 205]).

Proposition 12.6 Suppose $T_n$, $n \in \mathbb{N}$, and $T$ are in $B(X)$, $\Gamma$ is a closed Jordan curve contained in $\rho(T) \setminus \{\zeta : \zeta \in \mathbb{C},\ |\zeta| \le r\}$ for some $r > 0$, and $T_n \xrightarrow{\nu} T$ in $B(X)$. Then there exist positive constants $c_1$, $c_2$ and an $N \in \mathbb{N}$ such that for all $n \ge N$ we have that

(i) $\|(P - P_n)P\| \le c_1\|(T - T_n)|_{R(P)}\| \le c_2\|(T - T_n)T\|$;
(ii) $\|(P - P_n)P_n\| \le c_1\|(T - T_n)|_{R(P_n)}\| \le c_2\|(T - T_n)T_n\|$.

Proof The proof uses Lemma 12.5 applied to $P$ and $P_n$, together with Proposition 12.4.

Propositions 12.4 and 12.6 supply the well-known tools for the presentation in this chapter. Besides the above facts on spectral projections, we need the notion of the gap between subspaces and some of its useful properties.

Definition 12.7 Let $Y_1$, $Y_2$ be two closed subspaces of a Banach space $X$, and set

$$\delta(Y_1, Y_2) = \sup\{\mathrm{dist}(y, Y_2) : y \in Y_1,\ \|y\| = 1\}.$$

The gap between $Y_1$ and $Y_2$ is defined to be

$$\theta(Y_1, Y_2) = \max\{\delta(Y_1, Y_2),\ \delta(Y_2, Y_1)\}.$$

We make use of the following lemma of Kato (see [44], p. 87, and [166], pp. 264–269).

Lemma 12.8 If $\dim Y_1 = \dim Y_2 < \infty$, then

$$\delta(Y_2, Y_1) \le \frac{\delta(Y_1, Y_2)}{1 - \delta(Y_1, Y_2)}. \tag{12.15}$$



The next lemma is of a different character. Its proof requires the Borsuk antipodal mapping theorem and is given in [186], p. 199, and in [167], p. 385.

Lemma 12.9 If $\dim Y_1 < \infty$ and $\dim Y_2 > \dim Y_1$, then there is an $x \in Y_2 \setminus \{0\}$ such that $\mathrm{dist}(x, Y_1) = \|x\|$ and, consequently,

$$\delta(Y_2, Y_1) = 1.$$

From this lemma we have the following fact.

Lemma 12.10 If $\dim Y_1 < \infty$ and $\theta(Y_2, Y_1) < 1$, then $\dim Y_2 = \dim Y_1$.

Proof Since our hypothesis implies $\delta(Y_2, Y_1) < 1$, we conclude by Lemma 12.9 that $\dim Y_2 \le \dim Y_1$. But our hypothesis also implies that $\delta(Y_1, Y_2) < 1$, and so indeed $\dim Y_2 = \dim Y_1$.
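In a Euclidean (hence Hilbert) setting these quantities are directly computable: if $Q_1$, $Q_2$ hold orthonormal bases of $Y_1$, $Y_2$ as columns, then $\delta(Y_1, Y_2) = \|(I - Q_2Q_2^{\mathsf T})Q_1\|_2$. The sketch below (a toy illustration of ours, not from the text) exhibits the dimension statement of Lemma 12.9 and Kato's inequality (12.15).

```python
import numpy as np

def delta(Q1, Q2):
    """delta(Y1, Y2) for subspaces spanned by the orthonormal columns of
    Q1, Q2: the largest distance from a unit vector of Y1 to Y2."""
    return np.linalg.norm(Q1 - Q2 @ (Q2.T @ Q1), 2)

e1 = np.eye(3)[:, :1]                  # Y1 = span{e1}
e12 = np.eye(3)[:, :2]                 # Y2 = span{e1, e2}, dim Y2 > dim Y1
print(delta(e1, e12), delta(e12, e1))  # 0.0 and 1.0, as Lemma 12.9 predicts

t = 0.3                                # a rotated one-dimensional subspace
y = np.array([[np.cos(t)], [np.sin(t)], [0.0]])
d12, d21 = delta(e1, y), delta(y, e1)  # equal dimensions: both equal sin(t)
print(d12, d21)
```

For the two one-dimensional subspaces the gap is $\sin t$, and (12.15) holds comfortably since $\delta(Y_2, Y_1) \le \delta(Y_1, Y_2)/(1 - \delta(Y_1, Y_2))$.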

We combine this lemma with Proposition 12.6 to conclude the following result.

Proposition 12.11 If the hypotheses of Proposition 12.6 hold, then there is a positive integer $N$ such that for all $n \ge N$,

$$\dim R(P) = \dim R(P_n).$$

Proof For any two projections $P_1, P_2 \in B(X)$ we have that

$$\delta(R(P_1), R(P_2)) \le \sup\{\|x - P_2x\| : x \in R(P_1),\ \|x\| = 1\} = \sup\{\|P_1x - P_2P_1x\| : x \in R(P_1),\ \|x\| = 1\} \le \|(P_1 - P_2)P_1\|.$$

Using this inequality on the operators $P$ and $P_n$, it follows from Proposition 12.6 that there is some $N$ such that whenever $n \ge N$, $\theta(R(P), R(P_n)) < 1$, because $\dim R(P) < \infty$ and $T_n \xrightarrow{\nu} T$ in $B(X)$. The result of this proposition then follows from Lemma 12.10.

The following theorem estimates the gap between the spectral subspaces, that is, the nearness of the eigenfunctions.

Theorem 12.12 If the hypotheses of Proposition 12.6 hold, then there exist a positive constant $c$ and an integer $N$ such that for all $n \ge N$,

$$\theta(R(P_n), R(P)) \le c\|(T - T_n)|_{R(P)}\|.$$

In particular, if $\phi \in R(P_n)$ with $\|\phi\| = 1$, then

$$\mathrm{dist}(\phi, R(P)) \le c\|(T - T_n)|_{R(P)}\|.$$



Proof We know from Proposition 12.11 that $\dim R(P) = \dim R(P_n)$ for $n$ sufficiently large. Hence, by Lemma 12.8, we have that

$$\delta(R(P_n), R(P)) \le \frac{\delta(R(P), R(P_n))}{1 - \delta(R(P), R(P_n))} \le 2\delta(R(P), R(P_n)).$$

Moreover, as in the proof of Proposition 12.11 and using Proposition 12.6, we conclude that

$$\theta(R(P_n), R(P)) \le 2\delta(R(P), R(P_n)) \le 2\|(P - P_n)P\| \le c\|(T - T_n)|_{R(P)}\|.$$

Suppose that $\lambda$ is an eigenvalue of the operator $T$ with algebraic multiplicity $m$ and ascent $\ell$, isolated by a closed rectifiable curve $\Gamma$ in $\rho(T)$ from the rest of the spectrum of $T$ and from zero. We assume that $T_n \xrightarrow{\nu} T$. Then the spectrum of $T_n$ inside $\Gamma$ consists of $m$ eigenvalues, say $\lambda_{n,1}, \lambda_{n,2}, \ldots, \lambda_{n,m}$, counted according to their algebraic multiplicities (see [3, 44]). We define the arithmetic mean of these eigenvalues by letting

$$\hat{\lambda}_n = \frac{\lambda_{n,1} + \lambda_{n,2} + \cdots + \lambda_{n,m}}{m}$$

and we approximate $\lambda$ by $\hat{\lambda}_n$.

The next theorem concerns the approximation of eigenvalues.

Theorem 12.13 If $X$ is a Banach space, $T, T_n \in B(X)$ for $n \in \mathbb{N}$ and $T_n \xrightarrow{\nu} T$, then there exist a constant $c$ and an integer $N \in \mathbb{N}$ such that for all $n \ge N$,

$$|\lambda - \hat{\lambda}_n| \le c\|(T - T_n)|_{R(P)}\|$$

and

$$|\lambda - \lambda_{n,j}|^\ell \le c\|(T - T_n)|_{R(P)}\|.$$

Proof Let $\hat{P}_n = P_n|_{R(P)}$. Note that there exists $N$ such that for all $n \in \mathbb{N}$, $n \ge N$, $\hat{P}_n$ is a surjective isomorphism from $R(P)$ to $R(P_n)$, and $\{\|\hat{P}_n^{-1}\| : n \in \mathbb{N}\}$ is bounded. Let $\hat{T} = T|_{R(P)}$ and $\hat{T}_n = \hat{P}_n^{-1}T_n\hat{P}_n$. Then $\sigma(\hat{T}) = \{\lambda\}$ and $\sigma(\hat{T}_n) = \{\lambda_{n,1}, \lambda_{n,2}, \ldots, \lambda_{n,m}\}$. Thus we have that

$$|\lambda - \hat{\lambda}_n| = \frac{1}{m}\left|\mathrm{trace}(\hat{T} - \hat{T}_n)\right| \le \|\hat{T} - \hat{T}_n\| = \|\hat{P}_n^{-1}\hat{P}_n(T - T_n)|_{R(P)}\| \le c\|(T - T_n)|_{R(P)}\|.$$

To prove the second estimate, we note that the ascent of $\lambda$ is $\ell$, which ensures that $(\lambda I|_{R(P)} - \hat{T})^\ell = 0$. By virtue of this equation we observe that

$$|\lambda - \lambda_{n,j}|^\ell \le \|(\lambda I|_{R(P)} - \hat{T}_n)^\ell\| = \|(\lambda I|_{R(P)} - \hat{T}_n)^\ell - (\lambda I|_{R(P)} - \hat{T})^\ell\| \le \alpha_n\|\hat{T} - \hat{T}_n\|,$$



where

$$\alpha_n = \sum_{k\in\mathbb{Z}_\ell}\left\|\lambda I_{R(P)} - \hat{P}_n^{-1}T_n\hat{P}_n\right\|^{\ell-1-k}\left\|\lambda I_{R(P)} - T|_{R(P)}\right\|^{k}.$$

Since $\hat{P}_n$, $\hat{P}_n^{-1}$ and $T_n$ are all bounded, $\alpha_n$ can be bounded by a positive constant $c$ independent of $n$. This leads to the desired estimate and completes the proof.

12.3 A multiscale collocation method

In this section we develop fast multiscale collocation methods for solving the eigen-problem of a compact integral operator with a weakly singular kernel. Specifically, we let $d$ be a positive integer and assume that $\Omega$ is a compact set in the $d$-dimensional Euclidean space $\mathbb{R}^d$. For a compact linear integral operator $K$ on $L^\infty(\Omega)$ defined by

$$(K\phi)(s) = \int_\Omega K(s, t)\phi(t)\,dt, \quad s \in \Omega,$$

with a weakly singular kernel $K$, we consider the following eigen-problem: find $\lambda \in \sigma(K) \setminus \{0\}$ and $\phi \in L^\infty(\Omega)$ with $\|\phi\| = 1$ such that

$$K\phi = \lambda\phi. \tag{12.16}$$

In this case the Banach space $X$ in the abstract setting described in the last section is chosen as the space $L^\infty(\Omega)$. Hence, in the rest of this section we always have $X = L^\infty(\Omega)$ and $V = C(\Omega)$. By $V^*$ we denote the dual space of $V$. For $\ell \in V^*$ and $v \in V$ we use $\langle\ell, v\rangle$ to stand for the value of the linear functional $\ell$ at the function $v$, and $\|\ell\|$, $\|v\|$ for their respective norms. We also use $(\cdot, \cdot)$ to denote the inner product in $L^2(\Omega)$. For $s \in \Omega$, by $\delta_s$ we denote the linear functional in $V^*$ defined for $v \in V$ by the equation $\langle\delta_s, v\rangle = v(s)$. We need to evaluate $\delta_s$ on functions in $X$. As in [21], we take the norm-preserving extension of $\delta_s$ to $X$ and use the same notation for the extension.

A multiscale scheme is based on a multiscale partition of the set $\Omega$, a multiscale subspace decomposition of the space $X$ and a multiscale basis of the space (cf. [69]). We first require that there is a family of partitions $\{\Delta_n : n \in \mathbb{N}_0\}$ of $\Omega$ satisfying

$$\Delta_n = \{\Omega_{n,i} : i \in \mathbb{Z}_{e(n)}\}, \qquad \bigcup_{i\in\mathbb{Z}_{e(n)}}\Omega_{n,i} = \Omega,$$

$$\mathrm{meas}(\Omega_{n,i} \cap \Omega_{n,i'}) = 0, \quad i, i' \in \mathbb{Z}_{e(n)},\ i \ne i',$$

where the sets $\Omega_{n,i}$ are star-shaped and $e(n)$ denotes the cardinality of $\Delta_n$. We then assume that there is a sequence of finite-dimensional subspaces $X_n$,



$n \in \mathbb{N}_0$, of $X$ which have the nested property, namely $X_{n-1} \subset X_n$, $n \in \mathbb{N}$. Thus a subspace $W_n \subset X_n$ can be defined such that $X_n$ is an orthogonal direct sum of $X_{n-1}$ and $W_n$. As a result, for each $n \in \mathbb{N}_0$ we have a multiscale decomposition

$$X_n = X_0 \oplus W_1 \oplus W_2 \oplus \cdots \oplus W_n,$$

where $W_0 = X_0$. Let $w(i) = \dim(W_i)$, $i \in \mathbb{N}_0$, and $s(n) = \dim(X_n)$, $n \in \mathbb{N}_0$. Then $s(n) = \sum_{i\in\mathbb{Z}_{n+1}} w(i)$. We also assume that there is a basis $\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}$ for each of the spaces $W_i$, that is,

$$W_i = \mathrm{span}\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}, \quad i \in \mathbb{N}_0,$$

and that there exist positive integers $h$ and $r$ such that, for any $i > h$ and $j \in \mathbb{Z}_{w(i)}$ with $j = \nu r + l$ for some $\nu \in \mathbb{N}_0$ and $l \in \mathbb{Z}_r$,

$$w_{ij}(x) = 0, \quad x \notin \Omega_{i-h,\nu}. \tag{12.17}$$

Letting $S_{ij} = \Omega_{i-h,\nu}$, condition (12.17) means that the support of the basis function $w_{ij}$ is contained in $S_{ij}$. The multiscale property demands that there are an integer $\mu > 1$ and positive constants $c_1$, $c_2$ such that for $n \in \mathbb{N}_0$,

$$c_1\mu^{-n/d} \le d_n \le c_2\mu^{-n/d}, \quad c_1\mu^n \le \dim X_n \le c_2\mu^n \quad \text{and} \quad c_1\mu^n \le \dim W_n \le c_2\mu^n, \tag{12.18}$$

where $d_n = \max\{d(\Omega_{n,i}) : i \in \mathbb{Z}_{e(n)}\}$ and the notation $d(A)$ denotes the diameter of the set $A$.

To define a multiscale collocation scheme we also need linear functionals $\ell_{i'j'} \in V^*$, $j' \in \mathbb{Z}_{w(i')}$, $i' \in \mathbb{Z}_{n+1}$. Each $\ell_{i'j'}$ is a finite sum of point evaluations,

$$\ell_{i'j'} = \sum_{s\in\mathbb{S}_{i'j'}} c_s\delta_s,$$

where the $c_s$ are constants and $\mathbb{S}_{i'j'}$ is a finite subset of distinct points in $S_{i'j'}$ having a constant cardinality. Let

$$U_n = \{(i, j) : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{Z}_{n+1}\}.$$

The linear functionals and multiscale bases are required to satisfy the vanishing moment condition that, for any polynomial $p$ of total degree less than or equal to $k - 1$,

$$\langle\ell_{ij}, p\rangle = 0, \quad (w_{ij}, p) = 0, \quad (i, j) \in U_n,\ i \ge 1, \tag{12.19}$$

the boundedness property

$$\|\ell_{ij}\| + \|w_{ij}\| \le c, \quad (i, j) \in U_n, \tag{12.20}$$



where $c$ is a positive constant independent of $i$, $j$ and $n$, and the requirement that, for any $i, i' \in \mathbb{N}_0$,

$$\langle\ell_{i'j'}, w_{ij}\rangle = \delta_{ii'}\delta_{jj'}, \quad (i, j), (i', j') \in U_n,\ i \le i', \tag{12.21}$$

with $\delta_{ii'}$ being the Kronecker delta.

The collocation scheme for solving the eigen-problem (12.16) is to find $\lambda_n \in \mathbb{C}$ and $\phi_n \in X_n$ with $\|\phi_n\| = 1$ such that, for any $(i', j') \in U_n$,

$$\langle\ell_{i'j'}, K\phi_n\rangle = \lambda_n\langle\ell_{i'j'}, \phi_n\rangle. \tag{12.22}$$
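Conditions (12.17)–(12.21) are easy to realize concretely. The sketch below is a hypothetical minimal instance of ours (not the construction of [69]) with $\Omega = [0, 1]$, piecewise constants, $k = 1$ and $\mu = 2$: the $w_{ij}$ are Haar functions and the $\ell_{ij}$ are differences of two point evaluations, which annihilate constants. The script checks the duality condition (12.21) for all pairs with $i \le i'$ on the first few levels (the case $i = 0$ also exhibits the vanishing moment (12.19) of the functionals).

```python
# Haar-type multiscale basis on [0,1): level 0 is the constant function;
# level i >= 1 contains 2**(i-1) dilated Haar wavelets.
def w(i, j):
    if i == 0:
        return lambda x: 1.0
    a, h = j * 2.0 ** (1 - i), 2.0 ** (1 - i)      # support [a, a+h)
    return lambda x: (1.0 if a <= x < a + h / 2 else
                      -1.0 if a + h / 2 <= x < a + h else 0.0)

# Collocation functionals: a point value at level 0 and, above, a difference
# of two point values inside the support (so constants are annihilated: k = 1).
def ell(i, j, v):
    if i == 0:
        return v(0.5)
    a, h = j * 2.0 ** (1 - i), 2.0 ** (1 - i)
    return 0.5 * (v(a + h / 4) - v(a + 3 * h / 4))

pairs = [(0, 0)] + [(i, j) for i in (1, 2, 3) for j in range(2 ** (i - 1))]
for (ip, jp) in pairs:
    for (i, j) in pairs:
        if i <= ip:                       # (12.21) is required only for i <= i'
            val = ell(ip, jp, w(i, j))
            expected = 1.0 if (i, j) == (ip, jp) else 0.0
            assert abs(val - expected) < 1e-14
print("duality (12.21) verified on levels 0-3")
```

The key geometric fact used is that any coarser Haar function is constant on the (dyadic) support of a finer functional, so the zero-mean functional annihilates it.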

Since $\phi_n \in X_n$, we write

$$\phi_n = \sum_{(i,j)\in U_n}\phi_{ij}w_{ij} \tag{12.23}$$

and substitute (12.23) into (12.22) to obtain the linear system

$$\sum_{(i,j)\in U_n}\phi_{ij}\langle\ell_{i'j'}, Kw_{ij}\rangle = \lambda_n\sum_{(i,j)\in U_n}\phi_{ij}\langle\ell_{i'j'}, w_{ij}\rangle, \quad (i', j') \in U_n. \tag{12.24}$$

Using the notations

$$\mathbf{E}_n = \left[\langle\ell_{i'j'}, w_{ij}\rangle : (i', j'), (i, j) \in U_n\right], \qquad \mathbf{K}_n = \left[\langle\ell_{i'j'}, Kw_{ij}\rangle : (i', j'), (i, j) \in U_n\right]$$

and $\Phi_n = [\phi_{ij} : (i, j) \in U_n]$, the system (12.24) is written in the matrix form

$$\mathbf{K}_n\Phi_n = \lambda_n\mathbf{E}_n\Phi_n. \tag{12.25}$$

In the above generalized eigen-problem, the matrix $\mathbf{E}_n$ is an upper triangular sparse matrix having only $O(s(n)\log s(n))$ nonzero entries, due to the construction of the basis functions and their corresponding collocation functionals. Specifically, when the sets $\mathbb{S}_{i'j'}$ and $S_{ij}$ are disjoint, the entry $\langle\ell_{i'j'}, w_{ij}\rangle$ of $\mathbf{E}_n$ is zero. The matrix $\mathbf{K}_n$ is still a full matrix, but it is numerically sparse. Our next task is to use the truncation strategy developed in [69] for the matrix $\mathbf{K}_n$ to formulate a fast algorithm for solving the eigen-problem (12.25).

For analysis purposes it is convenient to work with the functional analytic form of the eigen-problem (12.25). For this purpose we let $\pi_n : X \to X_n$ denote the bounded linear projection operator defined by

$$\langle\ell_{ij}, \pi_nx\rangle = \langle\ell_{ij}, x\rangle, \quad (i, j) \in U_n. \tag{12.26}$$

With this operator we define $K_n : X \to X$ by $K_n = \pi_nK\pi_n$. Clearly, $K_n$ is a bounded linear operator. Thus the eigen-problem (12.25) is written in the operator form

$$K_n\phi_n = \lambda_n\phi_n. \tag{12.27}$$



We remark that, as in Chapter 7 (cf. [69]), the projection operator $\pi_n$ is required to satisfy the property that there exists a positive constant $c$ such that for $v \in W^{k,\infty}(\Omega)$ and for all $n \in \mathbb{N}$,

$$\|v - \pi_nv\| \le c\inf_{v_n\in X_n}\|v - v_n\| \le c\mu^{-kn/d}\|v\|_{k,\infty}, \tag{12.28}$$

where $\|\cdot\|$ and $\|\cdot\|_{k,\infty}$ denote the norms in $X$ and in $W^{k,\infty}(\Omega)$, respectively. This condition is fulfilled when $X_n$ is chosen as a space of piecewise polynomials of total degree $k - 1$.

We now assume that the integral kernel of the operator $K$ is weakly singular in the sense that, for $s, t \in \Omega$, $s \ne t$, and a positive integer $k > 0$, the kernel $K$ has continuous partial derivatives $D_s^\alpha D_t^\beta K(s, t)$ for $|\alpha| \le k$, $|\beta| \le k$, and there exist positive constants $\sigma$ and $c$ with $\sigma < d$ such that, for $|\alpha| = |\beta| = k$,

$$\left|D_s^\alpha D_t^\beta K(s, t)\right| \le \frac{c}{|s - t|^{\sigma + |\alpha| + |\beta|}}.$$

We truncate the matrix $\mathbf{K}_n$ according to the singularity of the kernel. As in Chapter 7 (cf. [69]), we partition the matrix $\mathbf{K}_n$ into a block matrix

$$\mathbf{K}_n = [\mathbf{K}_{i'i} : i', i \in \mathbb{Z}_{n+1}] \quad \text{with} \quad \mathbf{K}_{i'i} = \left[\langle\ell_{i'j'}, Kw_{ij}\rangle : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}\right],$$

and truncate the block $\mathbf{K}_{i'i}$ by using a truncation parameter $\varepsilon = \varepsilon^n_{i'i}$, which will be described later. Specifically, we define a truncated matrix

$$\widetilde{\mathbf{K}}_{i'i} = \left[\widetilde{K}_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}\right] \tag{12.29}$$

by setting

$$\widetilde{K}_{i'j',ij} = \begin{cases}\langle\ell_{i'j'}, Kw_{ij}\rangle, & \mathrm{dist}(S_{i'j'}, S_{ij}) \le \varepsilon,\\ 0, & \text{otherwise.}\end{cases} \tag{12.30}$$

Using the truncation (12.30), problem (12.24) becomes

$$\sum_{(i,j)\in U_n}\tilde{\phi}_{ij}\widetilde{K}_{i'j',ij} = \tilde{\lambda}_n\sum_{(i,j)\in U_n}\tilde{\phi}_{ij}\langle\ell_{i'j'}, w_{ij}\rangle, \quad (i', j') \in U_n. \tag{12.31}$$

Let

$$\widetilde{\mathbf{K}}_n = [\widetilde{\mathbf{K}}_{i'i} : i', i \in \mathbb{Z}_{n+1}], \qquad \widetilde{\Phi}_n = [\tilde{\phi}_{ij} : (i, j) \in U_n].$$

The eigen-problem (12.31) can then be written as the eigen-problem of the truncated matrix

$$\widetilde{\mathbf{K}}_n\widetilde{\Phi}_n = \tilde{\lambda}_n\mathbf{E}_n\widetilde{\Phi}_n. \tag{12.32}$$

There is no need to compress the matrix $\mathbf{E}_n$, since it is already sparse. In fact, if an entry of the matrix $\mathbf{K}_n$ is truncated to zero according to our proposed truncation strategy, then the corresponding entry of the matrix $\mathbf{E}_n$ is already zero.
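The effect of a distance truncation such as (12.30) can be seen in a small self-contained experiment. The sketch below is an illustration of ours, in a Galerkin rather than collocation setting, with the kernel $K(s, t) = \min(s, t)$ on $[0, 1]$ (whose largest eigenvalue is $4/\pi^2$): in an orthonormal Haar basis, the one vanishing moment makes every entry whose two supports are disjoint vanish, so dropping those entries leaves the computed eigenvalues unchanged up to roundoff while only a small fraction of entries is kept.

```python
import numpy as np

m = 6                      # levels; N = 2**m cells
N, h = 2 ** m, 1.0 / 2 ** m
a = np.arange(N) * h       # left endpoints of the cells

# Exact Galerkin matrix of K(s,t) = min(s,t) on cell indicator functions.
A = np.empty((N, N))
for i in range(N):
    for j in range(N):
        if i == j:
            A[i, j] = a[i] * h ** 2 + h ** 3 / 3
        else:
            A[i, j] = h ** 2 * (a[min(i, j)] + h / 2)

# Orthonormal Haar basis vectors of R^N, with their dyadic supports.
basis, supports = [np.full(N, N ** -0.5)], [(0, N)]
for lev in range(m):
    size = N // 2 ** lev
    for b in range(2 ** lev):
        v = np.zeros(N)
        v[b * size: b * size + size // 2] = 1.0
        v[b * size + size // 2: (b + 1) * size] = -1.0
        basis.append(v / np.linalg.norm(v))
        supports.append((b * size, (b + 1) * size))
H = np.array(basis)
B = H @ (A / h) @ H.T      # the discrete operator in the Haar basis

def disjoint(s1, s2):
    return s1[1] <= s2[0] or s2[1] <= s1[0]

Bt = B.copy()              # distance truncation: drop disjoint-support entries
for p in range(N):
    for q in range(N):
        if disjoint(supports[p], supports[q]):
            Bt[p, q] = 0.0

kept = np.count_nonzero(Bt) / N ** 2
lam_full = np.linalg.eigvalsh(B).max()
lam_trunc = np.linalg.eigvalsh(Bt).max()
print(f"kept {kept:.1%} of entries; eigenvalues {lam_full:.6f} vs {lam_trunc:.6f}")
```

For this particular kernel, the dropped entries are analytically exactly zero (the zero-mean basis function meets a function that is affine on the far side of the diagonal), which is the cleanest possible instance of the numerical sparsity exploited by (12.30).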



Eigen-problem (12.32) may be expressed in operator form. Let $\widetilde{\mathcal{K}}_n$ be the bounded linear operator from $X_n$ to $X_n$ having the matrix representation $\mathbf{E}_n^{-1}\widetilde{\mathbf{K}}_n$ in the basis $\{w_{ij} : (i, j) \in U_n\}$. We then define a bounded linear operator $\widetilde{K}_n : X \to X$ by $\widetilde{K}_n = \widetilde{\mathcal{K}}_n\pi_n$. Solving eigen-problem (12.32) is now equivalent to finding $\tilde{\lambda}_n \in \mathbb{C}$ and $\tilde{\phi}_n = \sum_{(i,j)\in U_n}\tilde{\phi}_{ij}w_{ij} \in X_n$ with $\|\tilde{\phi}_n\| = 1$ such that

$$\widetilde{K}_n\tilde{\phi}_n = \tilde{\lambda}_n\tilde{\phi}_n. \tag{12.33}$$

Eigen-problem (12.32) thus leads to a fast method for solving the original eigen-problem. We study the convergence order and computational complexity of this fast method next.

12.4 Analysis of the fast algorithm

We provide in this section analysis for the convergence order and com-putational complexity of the fast algorithm described in the last sectionSpecifically we show that the proposed method has the optimal convergenceorder (up to a logarithmic factor) and almost linear computational complexity

For real numbers $b$ and $b'$, we choose the truncation parameters $\varepsilon^n_{i'i}$, $i', i \in \mathbb{Z}_{n+1}$, to satisfy the condition
$$\varepsilon^n_{i'i} \ge \max\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\}, \quad i, i' \in \mathbb{Z}_{n+1}, \qquad (12.34)$$
for some constants $a > 0$ and $r > 1$. For any real numbers $\alpha$ and $\beta$ and a positive integer $n$, we define a function
$$\mu[\alpha, \beta; n] = \sum_{i \in \mathbb{Z}_{n+1}} \mu^{\alpha i/d} \sum_{i' \in \mathbb{Z}_{n+1}} \mu^{\beta i'/d}.$$
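The quantity $\mu[\alpha,\beta;n]$ is a plain double geometric sum, so it is easy to tabulate numerically. The sketch below also illustrates the decay property $\mu[\alpha,\beta;n](n+1)\mu^{-en/d} \to 0$ for $e > \max\{0,\alpha,\beta,\alpha+\beta\}$, which is used in the proof of Lemma 12.15; the parameter values are arbitrary test values satisfying that condition.

```python
import numpy as np

def mu_bracket(mu, alpha, beta, n, d=1):
    """mu[alpha, beta; n] as the double geometric sum defined in the text."""
    i = np.arange(n + 1)
    return np.sum(mu ** (alpha * i / d)) * np.sum(mu ** (beta * i / d))

# Decay check with test values satisfying e > max{0, alpha, beta, alpha+beta}.
mu, d = 2.0, 1
alpha, beta, e = 0.5, -1.0, 1.0
vals = [mu_bracket(mu, alpha, beta, n, d) * (n + 1) * mu ** (-e * n / d)
        for n in (10, 20, 40)]   # should decrease towards 0
```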

We recall a result from Lemma 7.11 (cf. Lemma 4.2 of [69]) which estimates the difference between the operators $\mathcal{K}_n$ and $\tilde{\mathcal{K}}_n$.

Lemma 12.14 If $0 < \sigma' < \min\{2k, d - \sigma\}$, $\eta = 2k - \sigma'$ and the truncation parameter $\varepsilon^n_{i'i}$ is chosen according to (12.34) for any real numbers $b$, $b'$, then there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,
$$\|\mathcal{K}_n - \tilde{\mathcal{K}}_n\| \le c\,\mu[k - b\eta, k - b'\eta; n](n+1)\mu^{-\sigma' n/d}$$
and
$$\|\mathcal{K}_n - \tilde{\mathcal{K}}_n\|_{W^{k,\infty}\to X} \le c\,\mu[2k - b\eta, k - b'\eta; n](n+1)\mu^{-(k+\sigma')n/d},$$
where $\|\cdot\|_{W^{k,\infty}\to X}$ denotes the norm of the operator from $W^{k,\infty}(\Omega)$ to $X$.



The following lemma concerns the $\nu$-convergence of the truncated operators $\tilde{\mathcal{K}}_n$ to $\mathcal{K}$ on $X$.

Lemma 12.15 If $0 < \sigma' < \min\{2k, d - \sigma\}$, $\eta = 2k - \sigma'$ and the truncation parameter $\varepsilon^n_{i'i}$ is chosen according to (12.34) with $b > \frac{k - \sigma'}{\eta}$, $b' > \frac{k - \sigma'}{\eta}$ and $b + b' > 1$, then $\tilde{\mathcal{K}}_n \xrightarrow{\nu} \mathcal{K}$ on $X$ as $n \to \infty$.

Proof We prove that $\tilde{\mathcal{K}}_n$ satisfies the three conditions in the definition of $\nu$-convergence. Note that for any real numbers $\alpha$, $\beta$ and $e$ with $e > \max\{0, \alpha, \beta, \alpha + \beta\}$,
$$\lim_{n\to\infty} \mu[\alpha, \beta; n](n+1)\mu^{-en/d} = 0.$$
The choice of parameters in this lemma ensures that
$$\lim_{n\to\infty} \mu[k - b\eta, k - b'\eta; n](n+1)\mu^{-\sigma' n/d} = 0. \qquad (12.35)$$
This with Lemma 12.14 yields that
$$\lim_{n\to\infty} \|\mathcal{K}_n - \tilde{\mathcal{K}}_n\| = 0, \qquad (12.36)$$
which with the boundedness of $\{\|\mathcal{K}_n\| : n \in \mathbb{N}\}$ ensures that $\{\|\tilde{\mathcal{K}}_n\| : n \in \mathbb{N}\}$ is also bounded. That is, there is a constant $c$ such that for any $n \in \mathbb{N}$,
$$\|\tilde{\mathcal{K}}_n\| \le c. \qquad (12.37)$$
It is known that for any $v \in X$,
$$\lim_{n\to\infty} \|\mathcal{K}_n v - \mathcal{K}v\| = 0.$$
This with (12.36) leads to
$$\lim_{n\to\infty} \|\tilde{\mathcal{K}}_n v - \mathcal{K}v\| = 0.$$
By using the compactness of the operator $\mathcal{K}$, we conclude that
$$\lim_{n\to\infty} \|(\tilde{\mathcal{K}}_n - \mathcal{K})\mathcal{K}\| = 0. \qquad (12.38)$$
Moreover, we have that
$$\|(\tilde{\mathcal{K}}_n - \mathcal{K})\tilde{\mathcal{K}}_n\| \le \|(\tilde{\mathcal{K}}_n - \mathcal{K}_n)\tilde{\mathcal{K}}_n\| + \|(\pi_n - I)\mathcal{K}\tilde{\mathcal{K}}_n\| \le \left[\|\tilde{\mathcal{K}}_n - \mathcal{K}_n\| + \|(\pi_n - I)\mathcal{K}\|\right]\|\tilde{\mathcal{K}}_n\|.$$
Since $\lim_{n\to\infty} \|\pi_n v - v\| = 0$ for any $v \in X$ and since $\mathcal{K}$ is compact, $\lim_{n\to\infty} \|(\pi_n - I)\mathcal{K}\| = 0$. From this, with (12.36), (12.37) and the inequality above, we conclude that
$$\lim_{n\to\infty} \|(\tilde{\mathcal{K}}_n - \mathcal{K})\tilde{\mathcal{K}}_n\| = 0. \qquad (12.39)$$



Combining (12.37)–(12.39) yields the result of this lemma.

We now consider the spectral projection associated with $\mathcal{K}$ and $\lambda \in \sigma(\mathcal{K})$:
$$\mathcal{P} = -\frac{1}{2\pi i}\int_{\Gamma} (\mathcal{K} - zI)^{-1}\,dz,$$
where $\Gamma$ is a closed rectifiable curve in $\rho(\mathcal{K})$ enclosing $\lambda$ but no other point of $\sigma(\mathcal{K})$.

Lemma 12.16 If $R(\mathcal{P}) \subset W^{k,\infty}(\Omega)$, then there exists a positive constant $c$ such that for all $n \in \mathbb{N}$ and for all $v \in R(\mathcal{P})$,
$$\|(I - \pi_n)\mathcal{K}v\| \le c\,\mu^{-kn/d}\,\|v\|_{k,\infty}.$$

Proof Since $R(\mathcal{P})$ is invariant under $\mathcal{K}$, we have that for any $v \in R(\mathcal{P})$, $\mathcal{K}v \in R(\mathcal{P}) \subset W^{k,\infty}(\Omega)$. Because all norms are equivalent on the finite-dimensional space $R(\mathcal{P})$, there is a positive constant $c$ such that for any $v \in R(\mathcal{P})$,
$$\|\mathcal{K}v\|_{k,\infty} \le c\,\|\mathcal{K}v\| \le c\,\|\mathcal{K}\|\,\|v\|_{k,\infty}.$$
Thus the desired result follows from the above inequality and (12.28).

Lemma 12.17 Let $R(\mathcal{P}) \subset W^{k,\infty}(\Omega)$. If $0 < \sigma' < \min\{2k, d - \sigma\}$, $\eta = 2k - \sigma'$ and the truncation parameter $\varepsilon^n_{i'i}$ is chosen according to (12.34) for some constants $a > 0$ and $r > 1$, with $b$ and $b'$ satisfying one of the following conditions:

(i) $b > 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' > 1 + \frac{k}{\eta}$;
(ii) $b = 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' > 1 + \frac{k}{\eta}$; or $b > 1$, $b' = \frac{k-\sigma'}{\eta}$, $b + b' > 1 + \frac{k}{\eta}$; or $b > 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' = 1 + \frac{k}{\eta}$;
(iii) $b = 1$, $b' = \frac{k}{\eta}$; or $b = \frac{2k}{\eta}$, $b' = \frac{k-\sigma'}{\eta}$;

then there exists a positive constant $c$ such that for all $n \in \mathbb{N}$ and for all $v \in R(\mathcal{P})$,
$$\|(\tilde{\mathcal{K}}_n - \mathcal{K})v\| \le c\,(s(n))^{-k/d}\log^{\tau} s(n)\,\|v\|_{k,\infty},$$
where $\tau = 0$ in case (i), $\tau = 1$ in case (ii) and $\tau = 2$ in case (iii).

Proof We prove this result by using the inequality
$$\|(\tilde{\mathcal{K}}_n - \mathcal{K})v\| \le \|(\tilde{\mathcal{K}}_n - \mathcal{K}_n)v\| + \|(\mathcal{K}_n - \mathcal{K})v\|. \qquad (12.40)$$
It follows from Lemma 12.14 that
$$\|\tilde{\mathcal{K}}_n - \mathcal{K}_n\|_{W^{k,\infty}\to X} \le c\,\mu[2k - b\eta, k - b'\eta; n](n+1)\mu^{-\sigma' n/d}\,\mu^{-kn/d}.$$



Note that for any real numbers $\alpha$, $\beta$ and $e$ with $e > 0$,
$$\mu[\alpha, \beta; n](n+1)\mu^{-en/d} = \begin{cases} o(1) & \text{if } e > \max\{\alpha, \beta, \alpha+\beta\},\\ O(n) & \text{if } \alpha = e,\ \beta < e,\ \alpha+\beta < e,\\ & \text{or } \alpha < e,\ \beta = e,\ \alpha+\beta < e,\\ & \text{or } \alpha < e,\ \beta < e,\ \alpha+\beta = e,\\ O(n^2) & \text{if } \alpha = 0,\ \beta = e \text{ or } \alpha = e,\ \beta = 0, \end{cases}$$
as $n \to \infty$. We then conclude, by choosing $\alpha = 2k - b\eta$, $\beta = k - b'\eta$ and $e = \sigma'$, that
$$\mu[2k - b\eta, k - b'\eta; n](n+1)\mu^{-\sigma' n/d} = \begin{cases} o(1) & \text{in case (i)},\\ O(n) & \text{in case (ii)},\\ O(n^2) & \text{in case (iii)}. \end{cases}$$
Hence we see that there exist a constant $c_1$ and a positive integer $N$ such that for all $n \ge N$,
$$\|\tilde{\mathcal{K}}_n - \mathcal{K}_n\|_{W^{k,\infty}\to X} \le c_1\,\mu^{-kn/d}\,n^{\tau},$$
where $\tau = 0$ in case (i), $\tau = 1$ in case (ii) and $\tau = 2$ in case (iii). That is,
$$\|(\tilde{\mathcal{K}}_n - \mathcal{K}_n)v\| \le c_1\,\mu^{-kn/d}\,n^{\tau}\,\|v\|_{k,\infty}. \qquad (12.41)$$
Moreover, it follows from (12.28) and Lemma 12.16 that there exists a positive constant $c_2$ such that for all $n \in \mathbb{N}$,
$$\|(\mathcal{K}_n - \mathcal{K})v\| \le \|\pi_n \mathcal{K}(\pi_n - I)v\| + \|(\pi_n - I)\mathcal{K}v\| \le c_2\,\mu^{-kn/d}\,\|v\|_{k,\infty}. \qquad (12.42)$$
Combining estimates (12.40)–(12.42) and the relation $s(n) \sim \mu^n$, we obtain the estimate of this theorem.

Suppose that $\operatorname{rank} \mathcal{P} = m < \infty$. Note that $\tilde{\mathcal{K}}_n \xrightarrow{\nu} \mathcal{K}$ on $X$ as $n \to \infty$. As described in Section 12.2, when $n$ is sufficiently large the spectrum of $\tilde{\mathcal{K}}_n$ inside $\Gamma$ consists of $m$ eigenvalues $\lambda_{in}$, $i = 1, 2, \ldots, m$, counting algebraic multiplicities. Let
$$\mathcal{P}_n = -\frac{1}{2\pi i}\int_{\Gamma} (\tilde{\mathcal{K}}_n - zI)^{-1}\,dz$$
be the spectral projection associated with $\tilde{\mathcal{K}}_n$ and its spectrum inside $\Gamma$. Thus $\dim R(\mathcal{P}_n) = \dim R(\mathcal{P}) = m$. We define the quantity
$$C(\mathcal{P}) = \sup\{\|\phi\|_{k,\infty} : \phi \in R(\mathcal{P}),\ \|\phi\|_{\infty} = 1\}.$$



Theorem 12.18 If the assumptions of Lemma 12.17 hold, then there exist a positive constant $c$ and a positive integer $N$ such that for all $n \ge N$,
$$\delta(R(\mathcal{P}_n), R(\mathcal{P})) \le c\,(s(n))^{-k/d}\log^{\tau} s(n)\,C(\mathcal{P}). \qquad (12.43)$$
In particular, for any $\phi_n \in R(\mathcal{P}_n)$ with $\|\phi_n\| = 1$, we have that
$$\operatorname{dist}(\phi_n, R(\mathcal{P})) \le c\,(s(n))^{-k/d}\log^{\tau} s(n)\,C(\mathcal{P}),$$
where $\tau = 0$ in case (i), $\tau = 1$ in case (ii) and $\tau = 2$ in case (iii).

Proof It is easy to verify that the choice of truncation parameters in cases (i), (ii) or (iii) satisfies the hypothesis of Lemma 12.15. It follows from Lemma 12.15 that $\tilde{\mathcal{K}}_n \xrightarrow{\nu} \mathcal{K}$ as $n \to \infty$. Using Theorem 12.12, we see that
$$\delta(R(\mathcal{P}_n), R(\mathcal{P})) \le c\,\sup\{\|(\tilde{\mathcal{K}}_n - \mathcal{K})\phi\| : \phi \in R(\mathcal{P}),\ \|\phi\| = 1\}.$$
Now, by using Lemma 12.17, we conclude the desired estimate (12.43).

We define the arithmetic mean of the eigenvalues $\lambda_{in}$, $i = 1, 2, \ldots, m$, by
$$\hat{\lambda}_n = \frac{\lambda_{1n} + \cdots + \lambda_{mn}}{m}.$$

Theorem 12.19 If the assumptions of Lemma 12.17 hold, then there exist a positive constant $c$ and a positive integer $N$ such that for all $n \ge N$,
$$|\lambda - \hat{\lambda}_n| \le c\,(s(n))^{-k/d}(\log s(n))^{\tau}\,C(\mathcal{P}),$$
$$|\lambda - \lambda_{jn}| \le c\,(s(n))^{-k/d}(\log s(n))^{\tau}\,C(\mathcal{P}), \quad j = 1, 2, \ldots, m.$$
In particular, if $\lambda$ is simple, that is, $m = 1$ and the ascent $\ell = 1$, then
$$|\lambda - \lambda_n| \le c\,(s(n))^{-k/d}(\log s(n))^{\tau}\,C(\mathcal{P}),$$
where $\tau = 0$ in case (i), $\tau = 1$ in case (ii) and $\tau = 2$ in case (iii).

Proof The results of this theorem follow from Theorem 12.13 and Lemma 12.17.

Theorem 12.20 Let $R(\mathcal{P}) \subset W^{k,\infty}(\Omega)$. Suppose that $0 < \sigma' < \min\{2k, d - \sigma\}$ and $\eta = 2k - \sigma'$, and that the truncation parameters $\varepsilon^n_{i'i}$, $i', i \in \mathbb{Z}_{n+1}$, are chosen such that
$$\varepsilon^n_{i'i} = \max\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\}, \quad i, i' \in \mathbb{Z}_{n+1}, \qquad (12.44)$$
for some constants $a > 0$ and $r > 1$ with $b = 1$ and $\frac{k}{\eta} \le b' \le 1$. Then the number of nonzero entries of the matrix $\tilde{\mathbf{K}}_n$ is
$$\mathcal{N}(\tilde{\mathbf{K}}_n) = O(s(n)\log^{\tau} s(n)),$$



where $\tau = 1$, except for $b' = 1$, in which case $\tau = 2$; and the estimates in Theorems 12.18 and 12.19 hold with $\tau = 1$, except for $b' = \frac{k}{\eta}$, in which case $\tau = 2$.

Proof It is shown in Theorem 7.15 (cf. Theorem 4.6 of [69]) that if the parameters are chosen such that
$$\varepsilon^n_{i'i} \le \max\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\}, \quad i, i' \in \mathbb{Z}_{n+1},$$
for some constants $a > 0$ and $r > 1$, with $b$ and $b'$ not larger than one, then the number of nonzero entries of the matrix $\tilde{\mathbf{K}}_n$ is of order $O(s(n)\log^{\tau} s(n))$, where $\tau = 1$ except for $b = b' = 1$, in which case $\tau = 2$. This, together with Theorems 12.18 and 12.19, yields the results of this theorem.

The above theorem means that the scheme (12.31) (or, equivalently, (12.32) and (12.33)) leads to a fast numerical algorithm for solving the eigen-problem (12.16), which has both optimal order of convergence (up to a logarithmic factor) and almost linear computational complexity.

12.5 A power iteration algorithm

The matrix compression technique described and analyzed in the previous sections provides a basis for developing various fast numerical solvers for the eigen-problem (12.32). Once the eigen-problem (12.32) is set up, standard numerical methods may be applied to it, and the sparsity of its coefficient matrix leads to fast methods for solving the problem. As an example to illustrate this point, in this section we apply the power iteration algorithm to eigen-problem (12.32) and provide a computational complexity result for the algorithm.

For convenience, we rewrite eigen-problem (12.32) in the form
$$\mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n \mathbf{\Phi}_n = \lambda_n \mathbf{\Phi}_n. \qquad (12.45)$$

The power iteration method finds the largest (in magnitude) eigenvalue and a corresponding eigenvector of a matrix. We describe below the power iteration algorithm applied to eigen-problem (12.45).

Algorithm 12.21 (Power iteration algorithm)

Step 1. For fixed $n \in \mathbb{N}$, choose $\mathbf{\Phi}_n^{(0)} \ne 0$ satisfying that there is an $(l, m) \in U_n$ such that $(\mathbf{\Phi}_n^{(0)})_{(l,m)} = \|\mathbf{\Phi}_n^{(0)}\|_{\infty} = 1$.

Step 2. For $j \in \mathbb{N}_0$, suppose that $\mathbf{\Phi}_n^{(j)}$ has been obtained and do the following:



• Compute $\mathbf{\Phi}_1^{(j)} = \tilde{\mathbf{K}}_n \mathbf{\Phi}_n^{(j)}$.
• Solve $\mathbf{\Phi}_2^{(j)}$ from the equation $\mathbf{E}_n \mathbf{\Phi}_2^{(j)} = \mathbf{\Phi}_1^{(j)}$.
• Compute $\lambda_n^{(j)} = (\mathbf{\Phi}_2^{(j)})_{(l,m)}$.

Step 3. Find an $(l, m) \in U_n$ such that $|(\mathbf{\Phi}_2^{(j)})_{(l,m)}| = \|\mathbf{\Phi}_2^{(j)}\|_{\infty}$, let $\mathbf{\Phi}_n^{(j+1)} = \mathbf{\Phi}_2^{(j)}/(\mathbf{\Phi}_2^{(j)})_{(l,m)}$, and go to Step 2.

The sequences $\{\lambda_n^{(j)} : j \in \mathbb{N}_0\}$ and $\{\mathbf{\Phi}_n^{(j)} : j \in \mathbb{N}_0\}$ converge, respectively, to the largest (in magnitude) eigenvalue and a corresponding eigenvector of the eigen-problem (12.45). Since the number of nonzero entries of the matrices $\tilde{\mathbf{K}}_n$ and $\mathbf{E}_n$ is of order $O(s(n)\log^{\tau} s(n))$, and since Algorithm 12.21 uses basically matrix-vector multiplications, the algorithm is fast. In the next proposition we provide an estimate of the number of multiplications needed in each iteration step.
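The steps of the algorithm can be sketched compactly in code. The sparse matrices below are small stand-ins for $\tilde{\mathbf{K}}_n$ and $\mathbf{E}_n$ ($\mathbf{E}_n$ is taken to be the identity purely for illustration; in the multiscale setting it is sparse and its systems are solved by direct backward substitution).

```python
import numpy as np
from scipy.sparse import csr_matrix, identity
from scipy.sparse.linalg import splu

# Toy stand-ins for the truncated matrix and E_n.
K = csr_matrix(np.array([[2.0, 0.5, 0.0],
                         [0.5, 1.0, 0.0],
                         [0.0, 0.0, 0.3]]))
E = identity(3, format="csc")
solve_E = splu(E).solve          # factor E_n once, reuse in every iteration

phi = np.array([1.0, 0.0, 0.0])  # Step 1: pivot entry equals the inf-norm, 1
lm = 0
for _ in range(100):
    y = solve_E(K @ phi)            # Step 2: Phi_1 = K phi, then solve E Phi_2 = Phi_1
    lam = y[lm]                     # eigenvalue estimate at the current pivot
    lm = int(np.argmax(np.abs(y)))  # Step 3: new pivot index
    phi = y / y[lm]                 # normalize so the pivot entry equals 1
```

Here the dominant eigenvalue is $(3+\sqrt{2})/2 \approx 2.207$, and with the ratio of the two leading eigenvalues about $0.36$, a hundred iterations are far more than enough for full accuracy.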

Proposition 12.22 Suppose that $0 < \sigma' < \min\{2k, d - \sigma\}$ and $\eta = 2k - \sigma'$. If the truncation parameters $\varepsilon^n_{i'i}$, $i', i \in \mathbb{Z}_{n+1}$, are chosen such that
$$\varepsilon^n_{i'i} = \max\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\}, \quad i, i' \in \mathbb{Z}_{n+1}, \qquad (12.46)$$
for some constants $a > 0$ and $r > 1$ with $b = 1$ and $\frac{k}{\eta} \le b' \le 1$, then the number of multiplications needed in a single iteration of Algorithm 12.21 is of order $O(s(n)\log^{\tau} s(n))$, where $\tau = 1$ except for $b = b' = 1$, in which case $\tau = 2$.

Proof This result is a direct consequence of (12.21) and the estimates of the number of nonzero entries of the matrices $\tilde{\mathbf{K}}_n$ and $\mathbf{E}_n$. The major computational effort is spent in Step 2. The matrix-vector multiplication $\tilde{\mathbf{K}}_n \mathbf{\Phi}_n^{(j)}$ needs $O(s(n)\log^{\tau} s(n))$ multiplications. Owing to the special structure of the matrix $\mathbf{E}_n$, the equation $\mathbf{E}_n \mathbf{\Phi}_2^{(j)} = \mathbf{\Phi}_1^{(j)}$ can be solved by direct backward substitution, which requires $O(s(n)\log^{\tau} s(n))$ multiplications. Hence the total number of multiplications needed in a single iteration is of order $O(s(n)\log^{\tau} s(n))$.

12.6 A numerical example

We present a numerical example in this section to confirm the theoretical estimates for the convergence order and computational complexity. Consider the eigen-problem
$$\mathcal{K}\phi(s) = \lambda\phi(s), \quad s \in \Omega = [0, 1], \qquad (12.47)$$



where $\mathcal{K}$ is the integral operator with the weakly singular kernel
$$K(s, t) = \log|\cos(\pi s) - \cos(\pi t)|, \quad s, t \in [0, 1].$$
Let $X_n$ be the space of piecewise linear polynomials having a multiscale basis $\{w_{ij} : (i, j) \in U_n\}$. In this case $k = 2$, $\mu = 2$ and $s(n) = \dim X_n = 2^{n+1}$. We choose the basis for $X_0$:
$$w_{00}(t) = -3t + 2, \quad w_{01}(t) = 3t - 1, \quad t \in [0, 1],$$
and the basis for $W_1$:
$$w_{10}(t) = \begin{cases} -\frac{9}{2}t + 1, & t \in [0, \frac{1}{2}],\\ \frac{3}{2}t - 1, & t \in (\frac{1}{2}, 1], \end{cases} \qquad w_{11}(t) = \begin{cases} -\frac{3}{2}t + \frac{1}{2}, & t \in [0, \frac{1}{2}],\\ \frac{9}{2}t - \frac{7}{2}, & t \in (\frac{1}{2}, 1]. \end{cases}$$

The bases for $W_i$, $i > 1$, are generated recursively by
$$w_{ij}(t) = \begin{cases} \sqrt{2}\,w_{i-1,j}(2t), & t \in [0, \frac{1}{2}],\\ 0, & t \in (\frac{1}{2}, 1], \end{cases} \quad j \in \mathbb{Z}_{2^{i-1}},$$
and
$$w_{i,2^{i-1}+j}(t) = \begin{cases} 0, & t \in [0, \frac{1}{2}],\\ \sqrt{2}\,w_{i-1,j}(2t - 1), & t \in (\frac{1}{2}, 1], \end{cases} \quad j \in \mathbb{Z}_{2^{i-1}}.$$
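The two recursions above translate directly into code. A Python sketch evaluating $w_{ij}$ on $[0,1]$, using the explicit level-0 and level-1 formulas given above and recursing for $i > 1$ (the function name and interface are ours):

```python
import numpy as np

def w(i, j, t):
    """Evaluate the multiscale piecewise linear basis function w_{ij} on
    [0, 1]: explicit formulas at levels 0 and 1, recursion for i > 1."""
    t = np.asarray(t, dtype=float)
    if i == 0:
        return -3.0 * t + 2.0 if j == 0 else 3.0 * t - 1.0
    if i == 1:
        if j == 0:
            return np.where(t <= 0.5, -4.5 * t + 1.0, 1.5 * t - 1.0)
        return np.where(t <= 0.5, -1.5 * t + 0.5, 4.5 * t - 3.5)
    m = 2 ** (i - 1)   # W_{i-1} has 2^{i-1} basis functions
    if j < m:          # first half: supported on [0, 1/2]
        return np.where(t <= 0.5, np.sqrt(2.0) * w(i - 1, j, 2.0 * t), 0.0)
    # second half: supported on (1/2, 1]
    return np.where(t > 0.5, np.sqrt(2.0) * w(i - 1, j - m, 2.0 * t - 1.0), 0.0)
```

Each recursion step halves the support, so the level-$i$ functions live on dyadic subintervals of length $2^{-(i-1)}$, which is what makes the distance-based truncation of Section 12.3 effective.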

The multiscale functionals $\ell_{ij}$, $(i, j) \in U_n$, are chosen as follows:
$$\ell_{00} = \delta_{1/3}, \qquad \ell_{01} = \delta_{2/3},$$
$$\ell_{10} = \delta_{1/6} - \tfrac{3}{2}\delta_{1/3} + \tfrac{1}{2}\delta_{2/3}, \qquad \ell_{11} = \tfrac{1}{2}\delta_{1/3} - \tfrac{3}{2}\delta_{2/3} + \delta_{5/6},$$
and $\ell_{ij}$, $(i, j) \in U_n$, $i > 1$, are also generated recursively. See [69] for the recursive generation of the functionals of higher levels.
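The level-1 functionals above annihilate all linear polynomials; this vanishing-moment property is what makes the entries $\langle \ell_{i'j'}, \mathcal{K}w_{ij}\rangle$ small away from the diagonal and hence drives the matrix compression. A quick numerical check (the helper names are ours):

```python
import numpy as np

def ell10(u):
    return u(1/6) - 1.5 * u(1/3) + 0.5 * u(2/3)

def ell11(u):
    return 0.5 * u(1/3) - 1.5 * u(2/3) + u(5/6)

# Moments against the monomial basis {1, t} of the linear polynomials:
# all four values should vanish (up to rounding).
m_vals = [f(u) for f in (ell10, ell11)
          for u in (lambda t: 1.0, lambda t: t)]
```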

As the exact eigenvalues and eigenfunctions are not known, for comparison purposes we use the approximate eigenvalues and eigenvectors of $\mathcal{K}_n$ with $n = 15$ in place of those of $\mathcal{K}$. Let $\lambda$ denote the largest (in magnitude) simple eigenvalue of $\mathcal{K}_n$ ($n = 15$), $\mathcal{P}$ the spectral projection associated with $\mathcal{K}_n$ ($n = 15$) and $\lambda$, and $\lambda_n$, $\phi_n$ the largest (in magnitude) simple eigenvalue and the associated eigenfunction of the truncated operator $\tilde{\mathcal{K}}_n$, respectively. It is computed that $\lambda \approx -0.99999982793163$.

We apply the fast collocation method to eigen-problem (12.47). The corresponding matrix is truncated with the parameter choice (12.46), with $a = 1$, $b = 1$ and $b' = 0.78$. The numerical algorithm is run on a PC with an Intel Core2 T5600 1.83-GHz CPU and 2 GB RAM, and the programs are compiled using Visual C++ 2005 with a single thread.

The numerical results are listed in Tables 12.1 and 12.2. In Table 12.1, "Comp. rate" denotes the compression rate, which is defined as the ratio of



Table 12.1 Numerical results for eigenvalue computation (1)

 n   Comp. rate   |λ − λn|    r1     ‖φn − Pφn‖∞   r2
 1   --           4.7362e-2   --     5.7466e-4     --
 2   --           1.4659e-2   1.69   1.2057e-4     2.25
 3   0.875        3.9724e-3   1.88   2.2607e-5     2.42
 4   0.672        1.0293e-3   1.95   4.0572e-6     2.48
 5   0.469        2.5869e-4   1.99   7.3048e-7     2.47
 6   0.306        6.4714e-5   2.00   1.3101e-7     2.48
 7   0.190        1.6102e-5   2.01   2.4330e-8     2.43
 8   0.115        3.8976e-6   2.05   4.5452e-9     2.42
 9   0.067        9.6189e-7   2.02   8.3415e-10    2.45
10   0.039        2.3518e-7   2.03   1.9261e-10    2.11
11   0.022        5.7356e-8   2.03   5.2633e-11    1.87
12   0.012        1.3001e-8   2.14   1.2906e-11    2.03
13   0.007        2.8441e-9   2.19   3.9536e-12    1.71

Table 12.2 Numerical results for eigenvalue computation (2)

 n   N = μ^(n+1)   t1 (s)   t2 (s)   N_it
 1   4             0.016    0.015    172
 2   8             0.032    0.031    160
 3   16            0.079    0.047    169
 4   32            0.234    0.031    167
 5   64            0.656    0.047    163
 6   128           1.703    0.063    160
 7   256           4.266    0.125    161
 8   512           10.31    0.281    167
 9   1024          24.27    0.579    156
10   2048          56.69    1.343    158
11   4096          135.3    2.859    153
12   8192          332.1    5.953    154
13   16384         711.1    12.65    154

the number of nonzero entries in $\tilde{\mathbf{K}}_n$ to that of the full matrix $\mathbf{K}_n$, that is, $\mathcal{N}(\tilde{\mathbf{K}}_n)/s(n)^2$, and $r_1$, $r_2$ denote the convergence orders of the eigenvalues and eigenfunctions, respectively. The numerical results show that the truncation does not ruin the convergence order, which agrees with the theoretical estimates presented in this chapter.

In Table 12.2, $t_1$ records the time in seconds for generating the matrix $\tilde{\mathbf{K}}_n$, while $t_2$ and $N_{it}$ record, respectively, the time for solving the resulting discrete eigen-problem (12.45) and the number of iterations used in Algorithm 12.21 to obtain the corresponding results listed in Table 12.1. It can clearly be seen from the numerical results that most of the computing time is spent on generating the



coefficient matrix. The computing time spent on generating the matrix and on finding the solution shows a linear increase with the size of the matrix. Indeed, the power iteration method based on the matrix compression technique is a fast method.

12.7 Bibliographical remarks

The abstract framework for the eigenvalue approximation presented in Section 12.2 is an extension of the classical spectral approximation theory [3, 44, 212]. For the eigen-problems of compact linear operators, [44] is a good reference (see also Section A.2.7 in the Appendix). For the classical spectral approximation theory and the notion of spectral projection, the reader is referred to [3, 44, 145, 166, 167, 186, 212]. The material of the multiscale method for solving the eigen-problem is mainly chosen from the paper [70]. See [214, 215] for further developments along this line.

For more about the numerical solutions of the eigen-problem of compact integral operators, see [3, 8, 11, 26, 27, 31, 61, 212, 213, 245]. Analysis of numerical methods for the approximation of eigenvalues and eigenvectors of compact integral operators is well documented in the literature (see, for example, [8, 11, 15, 44, 47, 185, 209, 210, 213, 245]).


Appendix

Basic results from functional analysis

In this appendix we summarize some of the standard concepts and results from functional analysis in a form that is used throughout the book. Therefore this section provides the reader with a convenient source for the background material needed to follow the ideas and arguments presented in previous chapters. Further discussion and detailed proofs of the concepts we review here can be found in standard texts on functional analysis, for example [1, 86, 183, 236, 276].

A.1 Metric spaces

A.1.1 Metric spaces

Definition A.1 Let $X$ be a nonempty set and $\rho$ a real-valued function defined on $X \times X$ satisfying the following properties:

(i) $\rho(x, y) \ge 0$, and $\rho(x, y) = 0$ if and only if $x = y$;
(ii) $\rho(x, y) = \rho(y, x)$;
(iii) $\rho(x, y) \le \rho(x, z) + \rho(z, y)$.

In this case $\rho$ is called a metric function (or distance function) defined on $X$, and $(X, \rho)$ (or $X$) is called a metric space.

Definition A.2 A sequence $\{x_j : j \in \mathbb{N}\}$ in a metric space $X$ is said to converge to $x \in X$ as $j \to \infty$ if
$$\lim_{j\to\infty} \rho(x_j, x) = 0.$$
A sequence $\{x_j : j \in \mathbb{N}\} \subseteq X$ is called a Cauchy sequence if
$$\lim_{i,j\to\infty} \rho(x_i, x_j) = 0.$$




A subset $A$ of a metric space $X$ is said to be complete if every Cauchy sequence in $A$ converges to some element in $A$.

Definition A.3 Let $X$ be a metric space and let $S$ and $S'$ be subsets of $X$. If every neighborhood of any point $x$ in $S$ contains an element of $S'$, then $S'$ is said to be dense in $S$. If $S$ has a countable dense subset, then $S$ is said to be a separable set. If $X$ itself is separable, then $X$ is called a separable space.

Definition A.4 The ball with center $x \in X$ and radius $r$ is denoted by $B(x, r) = \{y \in X : \rho(x, y) < r\}$.

Of course, a set $A$ in $X$ is bounded if it is contained in some ball $B(x, r)$. Moreover, if $A$ and $B$ are bounded, then $A \cup B$ is bounded.

Definition A.5 A subset $A$ of a metric space $X$ is totally bounded if for every $\varepsilon > 0$ there is a finite set of points $\{x_j : j \in \mathbb{N}_m\} \subseteq X$ such that $A \subseteq \bigcup\{B(x_j, \varepsilon) : j \in \mathbb{N}_m\}$.

Definition A.6 Let $S$ be a subset of a metric space $X$. If every sequence $\{x_j : j \in \mathbb{N}\}$ in $S$ has a convergent subsequence $\{x_{k_j} : j \in \mathbb{N}\}$, that is, there is an $x \in X$ such that $\lim_{j\to\infty} \rho(x_{k_j}, x) = 0$, then the subset $S$ is said to be relatively compact. Moreover, if the limit $x$ is always in $S$, then $S$ is said to be compact. The space $X$ is called a compact space if it has this property.

Certainly every compact set is closed and bounded (but not conversely). However, we have the following useful fact.

Theorem A.7 A subset $A$ of a metric space $X$ is compact if and only if it is totally bounded and complete.

A.1.2 Normed linear spaces

Definition A.8 Suppose that every pair of elements $x, y \in X$ can be combined by an operation called addition to yield a new element in $X$, denoted by $x + y$. Suppose also that for every complex (or real) number $a$ and every element $x \in X$ there is an operation called scalar multiplication which yields a new element in $X$, denoted by $ax$. The set $X$ is said to be a linear space if it satisfies the following axioms:

(i) $x + y = y + x$;
(ii) $x + (y + z) = (x + y) + z$;
(iii) $X$ contains a unique element, denoted $0$, which satisfies $x + 0 = x$ for all $x \in X$;
(iv) to each $x \in X$ there corresponds an element of $X$, denoted $-x$, such that $x + (-x) = 0$;
(v) $a(x + y) = ax + ay$;
(vi) $(a + b)x = ax + bx$;
(vii) $a(bx) = (ab)x$;
(viii) $1x = x$;
(ix) $0x = 0$;

where $a$ and $b$ are complex (or real) numbers.

Definition A.9 A norm on a linear space $X$ over a complex (or real) field is a real-valued function, denoted $\|\cdot\|$, satisfying the requirements that

(i) $\|x\| \ge 0$, where the equality holds if and only if $x = 0$;
(ii) $\|ax\| = |a|\,\|x\|$;
(iii) $\|x + y\| \le \|x\| + \|y\|$.

A linear space with a norm is called a normed linear space.

When we wish to indicate the connection of a norm to the space $X$ on which it is defined, we denote it by $\|\cdot\|_X$. A normed linear space $X$ is a metric space with metric function $\rho(x, y) = \|x - y\|_X$.

Definition A.10 A normed linear space $X$ is called a Banach space if it is a complete metric space in the metric induced by its norm, $\rho(x, y) = \|x - y\|_X$.

A.1.3 Inner product spaces

Definition A.11 A linear space $X$ over a complex (or real) field is called an inner product space if for any two elements $x$ and $y$ of $X$ there is a uniquely associated complex (or real) number, called the inner product of $x$ and $y$ and denoted by $(x, y)$ (or, when needed, by $(x, y)_X$), which satisfies the requirements that

(i) $(x, x) \ge 0$, and the equality holds if and only if $x = 0$;
(ii) $(x, y) = \overline{(y, x)}$;
(iii) $(ax + by, z) = a(x, z) + b(y, z)$.

Every inner product satisfies the Cauchy-Schwarz inequality, namely, for every $x, y \in X$ we have that
$$|(x, y)| \le \|x\| \cdot \|y\|.$$
An inner product space is also a normed linear space. Specifically, we associate with each $x \in X$ the non-negative number $\|x\| = (x, x)^{1/2}$, and by the



Minkowski inequality
$$\|x + y\| \le \|x\| + \|y\|,$$
we immediately conclude that $\|\cdot\|_X$ is a norm on $X$. A characteristic feature of the norm on an inner product space is the following result.

Proposition A.12 (Parallelogram identity) If $X$ is an inner product space with norm $\|\cdot\|$, then for any $x, y \in X$,
$$\|x + y\|^2 + \|x - y\|^2 = 2\|x\|^2 + 2\|y\|^2.$$

Definition A.13 An inner product space $X$ is called a Hilbert space if it is a complete metric space under the metric induced by its inner product, $\rho(x, y) = (x - y, x - y)^{1/2}$.

A.1.4 Function spaces $C(\Omega)$ and $L^p(\Omega)$ ($1 \le p \le \infty$)

Let $\Omega$ be a bounded domain in a $d$-dimensional Euclidean space $\mathbb{R}^d$, where $d$ is a positive integer. We use $x = [x_j : j \in \mathbb{N}_d]$ to denote a vector in $\mathbb{R}^d$ and define $C(\Omega)$ to be the linear space (under pointwise addition and scalar multiplication) of uniformly continuous complex-valued functions on $\Omega$. Moreover, the notation
$$\operatorname{supp}(u) = \text{closure of } \{x : x \in \Omega,\ u(x) \ne 0\}$$
stands for the support of the function $u$ on $\Omega$. The symbol $C_0(\Omega)$ indicates the linear subspace of $C(\Omega)$ consisting of functions with support contained inside $\Omega$. Also, $C(\Omega)$ equipped with the norm
$$\|u\|_{0,\infty} = \max\{|u(x)| : x \in \Omega\}$$
is a Banach space. The symbol $L^{\infty}(\Omega)$ is used to denote the linear space of complex-valued measurable functions $u$ which are essentially bounded, that is, there is a set $E \subseteq \Omega$ of measure zero such that $u$ is bounded on $\Omega \setminus E$. Equipped with the norm
$$\|u\|_{0,\infty} = \operatorname{ess\,sup}_{x\in\Omega} |u(x)| = \inf\{\sup\{|u(x)| : x \in \Omega \setminus E\} : \operatorname{meas}(E) = 0\},$$
$L^{\infty}(\Omega)$ is a Banach space. Let $L^p(\Omega)$, $1 \le p < \infty$, denote the linear space of complex-valued measurable functions $u$ such that $|u|^p$ is Lebesgue-integrable on $\Omega$. For any $u \in L^p(\Omega)$ we define
$$\|u\|_{0,p} = \left(\int_{\Omega} |u(x)|^p\,dx\right)^{1/p}$$



and recall that in this norm $L^p(\Omega)$ becomes a Banach space. In particular, $L^2(\Omega)$ is a Hilbert space with inner product
$$(u, v)_0 = \int_{\Omega} u(x)\overline{v(x)}\,dx$$
and corresponding norm $\|\cdot\|_{0,2}$ as defined above. An important and useful fact is that $C_0(\Omega)$ is dense in $L^p(\Omega)$ for $1 \le p < \infty$.

Theorem A.14 (Fubini theorem) If $u$ is a measurable function defined on $\mathbb{R}^{n+m}$ and at least one of the integrals
$$I_1 = \int_{\mathbb{R}^{n+m}} |u(x, y)|\,dx\,dy, \qquad I_2 = \int_{\mathbb{R}^m}\left(\int_{\mathbb{R}^n} |u(x, y)|\,dx\right)dy$$
and
$$I_3 = \int_{\mathbb{R}^n}\left(\int_{\mathbb{R}^m} |u(x, y)|\,dy\right)dx$$
exists and is finite, then

(i) for almost all $y \in \mathbb{R}^m$, $u(\cdot, y) \in L^1(\mathbb{R}^n)$ and $\int_{\mathbb{R}^n} u(x, \cdot)\,dx \in L^1(\mathbb{R}^m)$;
(ii) for almost all $x \in \mathbb{R}^n$, $u(x, \cdot) \in L^1(\mathbb{R}^m)$ and $\int_{\mathbb{R}^m} u(\cdot, y)\,dy \in L^1(\mathbb{R}^n)$;
(iii) $I_1 = I_2 = I_3$.
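A numerical illustration of the theorem: for an integrable function, the two iterated integrals agree with the double integral. Here $u(x, y) = e^{-(x+y)}$ on $[0,1]^2$ with midpoint-rule quadrature; the discretization is for illustration only.

```python
import numpy as np

n = 2000
x = (np.arange(n) + 0.5) / n            # midpoint-rule nodes in each variable
y = (np.arange(n) + 0.5) / n
U = np.exp(-(x[:, None] + y[None, :]))  # samples of u(x, y) = exp(-(x + y))

I2 = np.sum(np.sum(U, axis=0)) / n**2   # integrate in x first, then in y
I3 = np.sum(np.sum(U, axis=1)) / n**2   # integrate in y first, then in x
exact = (1.0 - np.exp(-1.0)) ** 2       # the double integral in closed form
```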

A.1.5 Function spaces $C^m(\Omega)$ and $W^{m,p}(\Omega)$ ($m \ge 1$, $1 \le p \le \infty$)

We generally use $\alpha = [\alpha_j : j \in \mathbb{N}_d]$ for a vector in $\mathbb{R}^d$ with non-negative integer components. The set of all such lattice vectors is denoted by $\mathbb{N}^d$, and with each such vector we define $|\alpha| = \alpha_1 + \cdots + \alpha_d$. We find convenient the notation $\mathbb{Z}^d_m = \{\alpha : |\alpha| \le m - 1\}$. The $\alpha$th derivative operator is denoted by
$$D^{\alpha} = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1} \cdots \partial x_d^{\alpha_d}},$$
and $|\alpha|$ is called the (total) order of the derivative. When $\alpha$ is the zero vector, we interpret $D^{\alpha}$ as the identity operator. Let $C^m(\Omega)$ be the closed subspace of $C(\Omega)$ of all the functions which have continuous derivatives up to and including the $m$th-order derivatives, that is, $D^{\alpha}u \in C(\Omega)$ for $\alpha \in \mathbb{Z}^d_{m+1}$. $C^m(\Omega)$ is a Banach space with the norm
$$\|u\|_{m,\infty} = \sum_{\alpha \in \mathbb{Z}^d_{m+1}} \|D^{\alpha}u\|_{0,\infty}. \qquad (A.1)$$
In a similar manner, the linear space $C^m(\bar{\Omega})$ is defined. We also set
$$C^{\infty}(\Omega) = \bigcap_{m \in \mathbb{N}_0} C^m(\Omega)$$



and
$$C_0^{\infty}(\Omega) = \{u \in C^{\infty}(\Omega) : \text{the support of } u \text{ is inside } \Omega \text{ and bounded}\}.$$
Neither of these linear spaces is a Banach space. However, the family of norms given in (A.1), as $m$ varies over $\mathbb{N}_0$, determines a Fréchet topology on these spaces.

We now describe Sobolev spaces. For a non-negative integer $m \in \mathbb{N}$ and $1 < p < \infty$ we define the linear space
$$W^{m,p}(\Omega) = \{u : u \in L^p(\Omega),\ D^{\alpha}u \in L^p(\Omega),\ \alpha \in \mathbb{Z}^d_{m+1}\},$$
and when $m = 0$ set
$$W^{0,p}(\Omega) = L^p(\Omega).$$
When $1 < p < \infty$ we define on $W^{m,p}(\Omega)$ the norm
$$\|u\|_{m,p} = \left(\sum_{\alpha \in \mathbb{Z}^d_{m+1}} \|D^{\alpha}u\|_{0,p}^p\right)^{1/p},$$
and for $p = \infty$ set
$$\|u\|_{m,\infty} = \max\{\|D^{\alpha}u\|_{0,\infty} : \alpha \in \mathbb{Z}^d_{m+1}\}.$$
The space $W^{m,p}(\Omega)$ is a Banach space with norm $\|\cdot\|_{m,p}$.

Let $W^{m,p}_0(\Omega)$ be the closure of $C_0^{\infty}(\Omega)$ in $W^{m,p}(\Omega)$. In particular, for $p = 2$ it is standard notation to use $H^m(\Omega)$ and $H^m_0(\Omega)$ for $W^{m,2}(\Omega)$ and $W^{m,2}_0(\Omega)$, respectively, and likewise to denote the norm $\|\cdot\|_{m,2}$ by $\|\cdot\|_m$. Both $W^{m,p}(\Omega)$ and $W^{m,p}_0(\Omega)$ are called Sobolev spaces.

Theorem A.15 (Properties of Sobolev spaces)

(i) For any $1 \le p \le \infty$, $W^{m,p}(\Omega)$ is a Banach space. In particular, the space $H^m(\Omega)$ is a Hilbert space with inner product
$$(u, v)_m = \sum_{|\alpha| \le m} (D^{\alpha}u, D^{\alpha}v)_0, \quad u, v \in H^m(\Omega).$$
(ii) For any $1 \le p < \infty$, the space $W^{m,p}(\Omega)$ is the closure of the set $\{u : u \in C^{\infty}(\Omega),\ \|u\|_{m,p} < \infty\}$.
(iii) For any $1 \le p < \infty$, the space $W^{m,p}(\Omega)$ is separable.
(iv) For any $1 < p < \infty$, the space $W^{m,p}(\Omega)$ is reflexive.

Definition A.16 Let $(\cdot,\cdot)$ and $(\cdot,\cdot)'$ be two inner products on the same linear space $X$. If the two norms $\|\cdot\|$ and $\|\cdot\|'$ induced by these inner products



are equivalent, namely, there are positive constants $c_1$ and $c_2$ such that for all $u \in X$,
$$c_1\|u\| \le \|u\|' \le c_2\|u\|,$$
then the two inner products are said to be equivalent.

With equivalent inner products, all results based on convergence in $X$ are the same, as both norms determine the same metric topology on $X$.

Proposition A.17 (Poincaré inequality) Let $\rho$ be the diameter of a bounded domain $\Omega$ in a $d$-dimensional Euclidean space $\mathbb{R}^d$. Then the following Poincaré inequality holds for all $u \in H^1_0(\Omega)$:
$$\|u\|_0 \le \frac{\rho}{\sqrt{d}}\,\|\nabla u\|_0.$$

It follows from the Poincaré inequality that
$$(u, v)'_1 = \sum_{k \in \mathbb{N}_d}\left(\frac{\partial u}{\partial x_k}, \frac{\partial v}{\partial x_k}\right)_0$$
is an inner product on $H^1_0(\Omega)$ satisfying, for all $u \in H^1_0(\Omega)$, the inequality
$$(u, u)'_1 \le \|u\|_1^2 \le \left(1 + d^{-1}\rho^2\right)(u, u)'_1.$$
Hence the new inner product $(\cdot,\cdot)'_1$ and the original inner product $(\cdot,\cdot)_1$ are equivalent.
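A quick numerical check of the Poincaré inequality for $u(x) = \sin(\pi x)$ on $\Omega = (0, 1)$, where $\rho = 1$ and $d = 1$, so the bound reads $\|u\|_0 \le \|u'\|_0$; the exact norms are $1/\sqrt{2}$ and $\pi/\sqrt{2}$.

```python
import numpy as np

n = 100000
x = (np.arange(n) + 0.5) / n             # midpoint quadrature nodes on (0, 1)
dx = 1.0 / n
u = np.sin(np.pi * x)
du = np.pi * np.cos(np.pi * x)

norm_u = np.sqrt(np.sum(u ** 2) * dx)    # approx. 1/sqrt(2)
norm_du = np.sqrt(np.sum(du ** 2) * dx)  # approx. pi/sqrt(2)
```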

When $m > 1$, observe that if $u \in H^m_0(\Omega)$ then $D^{\alpha}u \in H^1_0(\Omega)$, $|\alpha| \le m - 1$, so that by repeatedly applying the Poincaré inequality we verify that $\sum_{|\alpha|=m}(D^{\alpha}u, D^{\alpha}v)_0$ is an inner product on $H^m_0(\Omega)$ which is equivalent to the original inner product. This leads to the following proposition.

Proposition A.18 In $H^m_0(\Omega)$, the semi-norm defined by
$$|u|_m = \left(\sum_{|\alpha|=m} \|D^{\alpha}u\|_0^2\right)^{1/2}$$
is equivalent to the original norm $\|u\|_m$.

A.2 Linear operator theory

A.2.1 Linear operators

Definition A.19 Let $X$ and $Y$ be normed linear spaces and $T$ an operator from $X$ into $Y$. $T$ is called a linear operator if, for any $x, y \in X$ and any complex



numbers $a$ and $b$,
$$T(ax + by) = aT x + bT y.$$
In particular, if $Y$ is the complex (or real) number field, then $T$ is called a linear functional.

Definition A.20 A linear operator is said to be a bounded operator if there is a positive constant $c$ such that for any $x \in X$,
$$\|T x\|_Y \le c\,\|x\|_X.$$
The infimum of the values of $c$ that satisfy the above inequality is called the norm of the linear operator, denoted $\|T\|$.

A linear operator is bounded if and only if it is continuous. We denote by $B(X, Y)$ the set of all bounded linear operators from $X$ to $Y$. If $Y$ is a Banach space, then $B(X, Y)$ is a Banach space relative to the operator norm. When $Y = \mathbb{R}$, the elements of $B(X, \mathbb{R})$ are bounded linear functionals on $X$. The space $B(X, \mathbb{R})$ is called the dual space of $X$ and is denoted by $X^*$; see also Definition A.36.

Definition A21 Let $T: X \to Y$ be a linear operator. We denote its domain by $D(T)$, a subspace of $X$, and its range by $R(T)$, which is a subspace of $Y$. The linear operator $T$ is said to be a closed operator if for any sequence $\{x_j : j \in \mathbb{N}\} \subseteq D(T)$ satisfying $\lim_{j\to\infty} x_j = x$ in $X$ and $\lim_{j\to\infty} Tx_j = y$ in $Y$, it follows that $x \in D(T)$ and $y = Tx$.

We note that a linear operator $T$ is closed if and only if its graph
$$G(T) = \{(x, Tx) : x \in D(T)\}$$
is a closed subspace of $X \times Y$. In general, a closed operator is not necessarily continuous. However, under certain conditions this is true (see the closed graph theorem below). Moreover, if the domain $D(T)$ of a continuous linear operator $T: D(T) \to Y$ is a closed subspace of $X$, then it is a closed operator.

A22 Open mapping theorems and uniform boundedness theorem

Theorem A22 (Open mapping theorem) If $T \in \mathcal{B}(X, Y)$, where $X, Y$ are Banach spaces, $R(T) = Y$ and $\varepsilon > 0$, then there exists an $\eta > 0$ such that
$$\{y : y \in Y,\ \|y\|_Y < \eta\} \subseteq \{Tx : x \in X,\ \|x\|_X < \varepsilon\}.$$
This theorem states that for any open set $G \subseteq X$, the operator $T$ maps the set $D(T) \cap G$ to an open set in $Y$.


Theorem A23 (Inverse operator theorem) If the operator $T$ satisfies all the properties stated in Theorem A22 and in addition is one-to-one, then the inverse operator $T^{-1}$ is a continuous linear operator.

Theorem A24 (Closed graph theorem) If $X$ and $Y$ are Banach spaces and $T: X \to Y$ is a linear operator such that the domain $D(T)$ and the graph of $T$ are closed sets of $X$ and $X \times Y$, respectively, then $T$ is continuous.

A statement equivalent to the closed graph theorem is that if $X$ and $Y$ are Banach spaces and $T: X \to Y$ is a closed linear operator such that its domain $D(T)$ is a closed subspace of $X$, then $T$ is continuous.

Theorem A25 (Uniform boundedness theorem) If $X$ is a Banach space, $Y$ a normed linear space, and for any $x \in X$ the sequence of operators $\{T_j : j \in \mathbb{N}\} \subseteq \mathcal{B}(X, Y)$ has the property that $\sup\{\|T_j x\|_Y : j \in \mathbb{N}\} < \infty$, then
$$\sup\{\|T_j\| : j \in \mathbb{N}\} < \infty.$$

Corollary A26 If $X$ is a Banach space, $Y$ a normed linear space, and the sequence of operators $\{T_n : n \in \mathbb{N}\} \subseteq \mathcal{B}(X, Y)$ has the property that $\lim_{n\to\infty} T_n x = Tx$ for all $x \in X$, then $T \in \mathcal{B}(X, Y)$.

Corollary A27 (Banach–Steinhaus theorem) If $X$ and $Y$ are Banach spaces, $U$ a dense subset of $X$, $\{T_n : n \in \mathbb{N}\} \subseteq \mathcal{B}(X, Y)$ and $T \in \mathcal{B}(X, Y)$, then $\lim_{n\to\infty} T_n x = Tx$ for all $x \in X$ if and only if
$$\lim_{n\to\infty} T_n x = Tx \quad \text{for all } x \in U$$
and
$$\sup\{\|T_n\| : n \in \mathbb{N}\} < \infty.$$

A23 Orthogonal projection

Let $K$ be a closed convex set in a Hilbert space $X$ and $x \in X$. The shortest distance problem is to find $y \in K$ such that
$$\|x - y\| = \min\{\|x - w\| : w \in K\}.$$
It can be shown that this problem is equivalent to finding a $y \in K$ which solves the variational inequality that for all $w \in K$,
$$(x - y, w - y) \le 0.$$

Theorem A28 The above shortest distance problem is uniquely solvable.
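As a concrete finite-dimensional illustration (ours, not from the book), the sketch below projects a point onto the box $[0,1]^3$, a closed convex set on which the projection is a coordinate-wise clip, and checks the variational inequality $(x - y, w - y) \le 0$ numerically. The function names `project_box` and `inner` are our own.

```python
import random

def project_box(x, lo=0.0, hi=1.0):
    # Orthogonal projection onto the closed convex set K = [lo, hi]^d:
    # in the Euclidean inner product this is a coordinate-wise clip.
    return [min(max(t, lo), hi) for t in x]

def inner(u, v):
    return sum(a * b for a, b in zip(u, v))

random.seed(0)
x = [random.uniform(-2.0, 2.0) for _ in range(3)]
y = project_box(x)                       # nearest point of K to x

# Variational inequality: (x - y, w - y) <= 0 for every w in K.
for _ in range(100):
    w = [random.uniform(0.0, 1.0) for _ in range(3)]
    assert inner([a - b for a, b in zip(x, y)],
                 [c - d for c, d in zip(w, y)]) <= 1e-12
```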


Theorem A29 (Orthogonal projection theorem) Let $M$ be a closed subspace of the Hilbert space $X$ and let $M^\perp$ be the orthogonal complement of $M$, that is,
$$M^\perp = \{y \in X : (y, x) = 0 \text{ for all } x \in M\}.$$
Then $X$ can be decomposed as a direct sum of $M$ and $M^\perp$; that is, $X = M \oplus M^\perp$.

Corollary A30 In a Hilbert space $X$, a subspace $M$ is dense in $X$ if and only if $M^\perp = \{0\}$.

A24 Riesz representation theorem

If $X$ is a Hilbert space and $x \in X$, then the linear functional $\ell$ defined for all $y \in X$ by
$$\ell(y) = (y, x)$$
is a bounded linear functional on $X$ with norm $\|x\|$. The converse is given by the Riesz representation theorem.

Theorem A31 (Riesz representation theorem) If $X$ is a Hilbert space and $\ell$ is a bounded linear functional defined on $X$, then there is a unique $x \in X$ such that for all $y \in X$,
$$\ell(y) = (y, x),$$
and in that case $\|\ell\| = \|x\|$.

Given a real Hilbert space $X$ and $\ell \in X^*$, a method of finding the corresponding unique $x \in X$ follows by establishing that the variational problem
$$\inf\{\|w\|_X^2 - 2\ell(w) : w \in X\}$$
has a solution.
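In $\mathbb{R}^3$ with the standard inner product this variational problem can be minimized directly. The following sketch (our illustration, with a hypothetical representer `x` chosen for the example) recovers the Riesz representer by gradient descent on $w \mapsto \|w\|^2 - 2\ell(w)$.

```python
# ℓ(w) = (w, x) on R^3; the variational problem inf{ ||w||^2 - 2ℓ(w) }
# is minimized exactly at the representer x. "x" here is a hypothetical
# representer chosen for illustration.
x = [1.0, -2.0, 0.5]

def grad(w):
    # gradient of w -> ||w||^2 - 2(w, x) is 2(w - x)
    return [2.0 * (wi - xi) for wi, xi in zip(w, x)]

w = [0.0, 0.0, 0.0]
for _ in range(200):                      # plain gradient descent
    w = [wi - 0.25 * gi for wi, gi in zip(w, grad(w))]

# the minimizer coincides with the Riesz representer x
assert all(abs(wi - xi) < 1e-9 for wi, xi in zip(w, x))
```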

A25 HahnndashBanach extension theorem

Theorem A32 (Hahn–Banach extension theorem) If $\ell_0$ is a bounded linear functional defined on a subspace $M$ of a normed linear space $X$, then there is a norm-preserving extension $\ell \in X^*$; that is, $\ell(x) = \ell_0(x)$ for all $x \in M$ and $\|\ell\| = \|\ell_0\|$.

We note several useful corollaries of the Hahn–Banach theorem.

Corollary A33 For any nonzero $x \in X$, there exists a bounded linear functional $\ell \in X^*$ of norm one such that $\ell(x) = \|x\|_X$.


Corollary A34 For any closed subspace $M \subseteq X$ and $x \in X$,
$$\max\{|\ell(x)| : \ell \in X^*,\ \|\ell\| \le 1,\ \ell(y) = 0 \text{ for } y \in M\} = \inf\{\|x - y\|_X : y \in M\}.$$

Corollary A35 A linear subspace $M \subseteq X$ is dense in $X$ if and only if there is no nonzero bounded linear functional which vanishes on $M$.

A26 Compactness

Definition A36 The dual space of a normed linear space $X$, denoted $X^*$, is a Banach space with norm
$$\|\ell\| = \sup\{|\ell(x)| : x \in X,\ \|x\| \le 1\}.$$
For each $x \in X$ we define an element $\hat{x} \in X^{**}$ by setting, for each $\ell \in X^*$,
$$\langle \ell, \hat{x} \rangle = \langle x, \ell \rangle,$$
thereby defining a mapping $\tau: X \to X^{**}$ as $\tau(x) = \hat{x}$.

Proposition A37 If $X$ is a normed linear space and $X^{**}$ its double dual, then the mapping $\tau$ defined above is a one-to-one norm-preserving linear operator.

This proposition implies, from the perspective of the normed linear space structure, that $X$ can be considered as the subspace $\tau(X)$ of $X^{**}$. In general, $\tau(X)$ is a proper subspace of $X^{**}$. When $\tau(X) = X^{**}$, $X$ is said to be reflexive.

Definition A38 Let $X$ be a normed linear space. A sequence $\{x_j : j \in \mathbb{N}\} \subseteq X$ is said to be a weak Cauchy sequence if for each $\ell \in X^*$ the sequence of scalars $\{\ell(x_j) : j \in \mathbb{N}\}$ is a Cauchy sequence, and a set $S \subseteq X$ is said to be weakly bounded if for each $\ell \in X^*$ the set $\ell(S)$ is bounded. The sequence $\{x_j : j \in \mathbb{N}\} \subseteq X$ is said to be weakly convergent in $X$ if there exists an $x \in X$ such that for all $\ell \in X^*$,
$$\lim_{j\to\infty} \ell(x_j) = \ell(x).$$
In this case $x$ is called the weak limit of the sequence. Moreover, a sequence $\{\ell_j : j \in \mathbb{N}\} \subseteq X^*$ is said to be weak* convergent if there exists an $\ell \in X^*$ such that for all $x \in X$,
$$\lim_{j\to\infty} \ell_j(x) = \ell(x),$$
where $\ell$ is called the weak* limit of the sequence.

From Theorem A25 we have the following useful fact.


Corollary A39 Every weakly bounded subset of a normed linear space is norm bounded.

We also recall the following results.

Theorem A40 A reflexive Banach space is weakly complete. In particular, every Hilbert space is weakly complete. Moreover, the dual space $X^*$ of a Banach space $X$ is weak* complete.

Definition A41 Let $X$ be a normed linear space. A subset $S \subseteq X$ is said to be weak-relatively (sequentially) compact if every sequence in $S$ contains a weakly convergent subsequence.

Theorem A42 Let $X$ be a separable and reflexive Banach space. A subset $S \subseteq X$ is weak-relatively (sequentially) compact if and only if $S$ is bounded.

This theorem implies that a subset $S \subseteq W^{m,p}(\Omega)$, where $m \in \mathbb{N}$ and $1 < p < \infty$, is weak-relatively (sequentially) compact if and only if $S$ is bounded.

Theorem A43 (Arzelà–Ascoli theorem) A subset $S \subseteq C(\Omega)$ is relatively (sequentially) compact if $S$ is pointwise bounded and equicontinuous, that is:

(i) for any $x \in \Omega$, $\sup\{|u(x)| : u \in S\} < \infty$;
(ii) $\lim_{\delta\to 0^+} \sup\{|u(x_1) - u(x_2)| : |x_1 - x_2| \le \delta,\ u \in S\} = 0$.

Theorem A44 (Kolmogorov theorem) A subset $U \subseteq L^p[0, 1]$, $1 \le p < \infty$, is relatively compact if and only if the following conditions are satisfied:

(i) $\sup_{f \in U} \|f\|_p < \infty$;
(ii) $\lim_{t\to 0} \int_0^{1-t} |f(t+s) - f(s)|^p\,ds = 0$ uniformly in $f \in U$;
(iii) $\lim_{t\to 0} \int_{1-t}^1 |f(s)|^p\,ds = 0$ uniformly in $f \in U$.

A27 Compact operators and Fredholm theorems

Definition A45 Let $X$ and $Y$ be Banach spaces with a subset $D \subseteq X$, and let $K: D \to Y$ be an operator. $K$ is said to be a relatively compact operator if for any bounded set $S \subseteq D$, $K(S)$ is a relatively compact set in $Y$. $K$ is said to be a compact operator if it is continuous and relatively compact. $K$ is said to be a completely continuous operator if for any sequence $\{x_n : n \in \mathbb{N}\}$ in $D$ weakly converging to $x$, it always follows that $\{Kx_n : n \in \mathbb{N}\}$ converges to $Kx$ in $Y$.

Proposition A46 Let $X$ and $Y$ be Banach spaces and $K: X \to Y$ an operator.


(i) If $K$ is a compact linear operator, then $K$ is completely continuous.
(ii) If $X$ is reflexive and $K$ (not necessarily linear) is completely continuous, then $K$ is compact.

This proposition implies that on a reflexive Banach space, a compact linear operator and a completely continuous linear operator are the same.

Proposition A47 Let $X$ and $Y$ be Banach spaces and $K: X \to Y$ a linear operator.

(i) If $K$ is relatively compact, then $K$ is continuous.
(ii) If $K$ is bounded and the range $R(K)$ is finite dimensional, then $K$ is compact.
(iii) If $K$ is compact, then the range $R(K)$ is separable.
(iv) If $K$ is compact, then the adjoint operator $K^*: Y^* \to X^*$ is also compact.
(v) Let $\{K_j : j \in \mathbb{N}\}$ be a sequence of compact operators from $X$ to $Y$ satisfying $K_j \xrightarrow{u} K$ on $X$; then $K$ is compact.

Let $X$ be a Hilbert space with inner product $(\cdot, \cdot)_X$ and norm $\|\cdot\|_X$, respectively, and let $K: X \to X$ be a linear compact operator. Consider the linear operator equation
$$(I - K)u = f, \quad f \in X, \qquad \text{(a)}$$
and its adjoint operator equation
$$(I - K^*)v = g, \quad g \in X, \qquad \text{(b)}$$
where $I$ is the identity operator and $K^*$ is the adjoint operator of $K$. We have the following basic results.

Theorem A48 (First Fredholm theorem) If $K: X \to X$ is a compact linear operator, then the following statements hold.

(i) Equation (a) is uniquely solvable for an arbitrarily given $f \in X$ if and only if equation (b) is so for an arbitrarily given $g \in X$.
(ii) Equation (a) (or equation (b)) is uniquely solvable for an arbitrarily given $f \in X$ (or $g \in X$) if and only if the corresponding homogeneous equation
$$(I - K)u = 0 \quad (\text{or } (I - K^*)v = 0)$$
has only the zero solution.
(iii) In the case that equation $(I - K)u = 0$ has only the zero solution, the inverse operator $(I - K)^{-1}$ exists on the entire space $X$ and is bounded, so that the solution $u \in X$ of equation (a) satisfies
$$\|u\|_X \le c\,\|f\|_X$$
for a constant $c > 0$ independent of $f \in X$.


Theorem A49 (Second Fredholm theorem) If $K: X \to X$ is a compact linear operator and equation $(I - K)u = 0$ has nonzero solutions, then among all its solutions there are only finitely many that are linearly independent. In this case equation $(I - K^*)v = 0$ has the same number of linearly independent solutions.

Theorem A50 (Third Fredholm theorem) If $K: X \to X$ is a compact linear operator, then equation (a) is solvable if and only if the given $f \in X$ is orthogonal to all the solutions of equation $(I - K^*)v = 0$. In this case equation (a) has one and only one solution $u \in X$ that is orthogonal to all the solutions of equation $(I - K)u = 0$, which satisfies $\|u\|_X \le c\,\|f\|_X$ for a constant $c > 0$ independent of the given $f \in X$.

We now consider the following operator equation:
$$(\lambda I - K)u = f, \quad f \in X, \qquad \text{(c)}$$
where $\lambda$ is a complex parameter. If for an arbitrarily given $f \in X$ equation (c) has a unique solution corresponding to a complex value of $\lambda$, and $(\lambda I - K)^{-1}$ is bounded, then this $\lambda$ is called a regular value of $K$. If, corresponding to a complex value of $\lambda$, the homogeneous equation $(\lambda I - K)u = 0$ has nonzero solutions, then this $\lambda$ is called an eigenvalue of $K$, while each corresponding nonzero solution is called an eigenfunction associated with this $\lambda$. The maximum number of linearly independent eigenfunctions is called the geometric multiplicity of the corresponding eigenvalue $\lambda$, which can be either finite or infinite.

In general, it is quite possible that there exists a number $\lambda$ for which $(\lambda I - K)^{-1}$ exists but is not everywhere defined in $X$. However, if $K$ is a compact linear operator, this is impossible. In this case it is also impossible to have an infinite geometric multiplicity (as shown in the following theorem).

Theorem A51 Let $K: X \to X$ be a compact linear operator.

(i) If $\lambda \ne 0$, then $\lambda$ is a regular value of $K$ if and only if $\lambda$ is not an eigenvalue of $K$.
(ii) If $\lambda \ne 0$ is an eigenvalue of $K$, then the geometric multiplicity of $\lambda$ is finite; $\bar{\lambda}$ is an eigenvalue of the adjoint operator $K^*$ of $K$ with the same geometric multiplicity as $\lambda$.
(iii) If $\lambda \ne 0$ is an eigenvalue of $K$, then equation (c) is solvable if and only if the given $f \in X$ is orthogonal to all eigenfunctions associated with the eigenvalue $\bar{\lambda}$ of $K^*$; in this case equation (c) has one and only one solution that is orthogonal to all eigenfunctions of $K$ associated with $\lambda$.
(iv) The family of eigenfunctions corresponding to different eigenvalues of $K$ is linearly independent.


Theorem A52 (Fourth Fredholm theorem) Let $K: X \to X$ be a compact linear operator. For any constant $c > 0$, there are only finitely many eigenvalues of $K$ that are located outside the disc of radius $c$ centered at zero. Consequently, $K$ either has finitely many eigenvalues or has countably many eigenvalues converging to zero.

According to this theorem, if a compact linear operator $K$ has nonzero eigenvalues, then they can be arranged in order of decreasing absolute value (i.e. $|\lambda_n| \ge |\lambda_{n+1}|$, $n \in \mathbb{N}$), which can be a finite or infinite ordering, and the number of appearances of any particular eigenvalue in this ordering is equal to its geometric multiplicity. If this ordering is infinite, then $\lim_{n\to\infty} \lambda_n = 0$. Also, the eigenfunctions associated with the above ordering can be chosen such that they are all linearly independent.

The above results can be extended directly to Banach spaces. As a special case, if the compact linear operator is self-adjoint and $K \ne 0$, then the above ordering of eigenvalues and their corresponding eigenfunctions is nonempty, all the eigenvalues are real, and the eigenfunctions can be chosen so that they constitute an orthonormal basis of the Hilbert space $X$.

Theorem A53 (Hilbert–Schmidt theorem) If $K: X \to X$ is a nonzero self-adjoint compact linear operator, then for any $u \in X$, $Ku$ has a convergent Fourier series expansion with respect to an orthonormal basis $\{e_n : n \in \mathbb{N}\}$ of $X$:
$$Ku = \sum_{n\in\mathbb{N}} (Ku, e_n)_X\, e_n = \sum_{n\in\mathbb{N}} \lambda_n (u, e_n)_X\, e_n.$$
Moreover, if $\lambda = 0$ is not an eigenvalue of $K$, then every $u \in X$ has a convergent Fourier series expansion with respect to this orthonormal basis:
$$u = \sum_{n\in\mathbb{N}} (u, e_n)_X\, e_n.$$
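In finite dimensions a self-adjoint operator is simply a symmetric matrix (every such operator is compact there), and the Hilbert–Schmidt expansion is its spectral decomposition. A minimal numerical check of both expansions (our illustration, with a randomly generated matrix):

```python
import numpy as np

# A symmetric matrix is a self-adjoint operator on R^5.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
A = (A + A.T) / 2.0
lam, E = np.linalg.eigh(A)        # real eigenvalues, orthonormal eigenbasis e_n

u = rng.standard_normal(5)

# Hilbert–Schmidt expansion: Ku = Σ λ_n (u, e_n) e_n
Ku = sum(lam[k] * (u @ E[:, k]) * E[:, k] for k in range(5))
assert np.allclose(Ku, A @ u)

# since 0 is (generically) not an eigenvalue here, u itself expands as well
u_rec = sum((u @ E[:, k]) * E[:, k] for k in range(5))
assert np.allclose(u_rec, u)
```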

A3 Invariant sets

In this section we present a proof of the existence of an invariant set associated with a finite number of contractions on a complete metric space, as this result is important in several chapters of this book. We follow closely the presentation of the fundamental paper of Hutchinson [148].

Let $X = (X, d)$ be a metric space and $F$ a function such that $F: X \to X$. If there is a constant $c \in (0, 1)$ such that


$$d(F(x), F(y)) \le c\,d(x, y)$$

for all $x, y \in X$, then $F$ is called a contraction mapping on $X$. A basic fact is that any contraction on a complete metric space has a unique fixed point. We state and prove this fact next.

Theorem A54 If $X$ is a complete metric space and $F$ is a contraction on $X$, then there exists one and only one $x \in X$ such that $F(x) = x$.

Proof First we establish the uniqueness of $x \in X$. To this end we assume that there are $x_1, x_2 \in X$ such that $F(x_1) = x_1$ and $F(x_2) = x_2$. Since $F$ is a contraction, it follows that $d(x_1, x_2) \le c\,d(x_1, x_2)$, from which it follows that $d(x_1, x_2) = 0$, since $c \in (0, 1)$.

To establish the existence of the fixed point for the function $F$, we choose any $x_0 \in X$, define recursively the sequence of points
$$x_{n+1} = F(x_n), \quad n \in \mathbb{N}_0,$$
and observe that
$$d(x_{n+1}, x_n) = d(F(x_n), F(x_{n-1})) \le c\,d(x_n, x_{n-1}),$$
from which it follows that
$$d(x_{n+1}, x_n) \le c^n\,d(x_1, x_0).$$

Consequently, for $n < m$ we have that
$$d(x_m, x_n) \le \sum_{\ell\in\mathbb{N}_m\setminus\mathbb{N}_n} d(x_\ell, x_{\ell-1}) \le \sum_{\ell\in\mathbb{N}_m\setminus\mathbb{N}_n} c^\ell\,d(x_1, x_0),$$
which yields the inequality
$$d(x_m, x_n) \le \frac{c^n}{1-c}\,d(x_1, x_0).$$

This means that $\{x_n : n \in \mathbb{N}\}$ is a Cauchy sequence and hence must converge to some $x \in X$; that is, $\lim_{n\to\infty} x_n = x \in X$. Since $F$ is a continuous function, we conclude that
$$F(x) = \lim_{n\to\infty} F(x_n) = \lim_{n\to\infty} x_{n+1} = x.$$
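The iteration in the proof is directly computable. The sketch below (our example, using the contraction $F(x) = \cos(x)/2$ on $\mathbb{R}$, for which $\operatorname{Lip} F \le 1/2$) runs the Picard iteration $x_{n+1} = F(x_n)$ and checks the a priori bound $d(x_m, x_n) \le c^n (1-c)^{-1}\, d(x_1, x_0)$ against the computed fixed point:

```python
import math

F = lambda x: math.cos(x) / 2.0    # contraction on R with Lip F <= 1/2
c = 0.5

x0 = 0.0
x1 = F(x0)

x = x0
for _ in range(60):                # Picard iteration x_{n+1} = F(x_n)
    x = F(x)
assert abs(F(x) - x) < 1e-12       # x is (numerically) the fixed point

# a priori bound: d(x, x_n) <= c^n / (1 - c) * d(x_1, x_0), checked for n = 10
xn = x0
for _ in range(10):
    xn = F(xn)
assert abs(x - xn) <= c**10 / (1.0 - c) * abs(x1 - x0) + 1e-12
```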


Definition A55 Let $X$ be a complete metric space. The Lipschitz constant of a function $F: X \to X$ is defined by
$$\operatorname{Lip} F = \sup\left\{\frac{d(F(x), F(y))}{d(x, y)} : x, y \in X,\ x \ne y\right\}.$$
We say $F$ is Lipschitz continuous when $\operatorname{Lip} F < \infty$. A Lipschitz continuous mapping takes bounded sets into bounded sets, but not necessarily closed sets into closed sets.

According to the definition,
$$d(F(x), F(y)) \le (\operatorname{Lip} F)\,d(x, y),$$
and to say that $F$ is a contraction means that $\operatorname{Lip} F < 1$. In addition, if $F, G: X \to X$, then
$$\operatorname{Lip}(F \circ G) \le \operatorname{Lip} F \cdot \operatorname{Lip} G.$$
Indeed, it follows from Definition A55 that
$$\operatorname{Lip}(F \circ G) = \sup\left\{\frac{d(F(G(x)), F(G(y)))}{d(x, y)} : x, y \in X,\ x \ne y\right\} \le \operatorname{Lip} F \cdot \sup\left\{\frac{d(G(x), G(y))}{d(x, y)} : x, y \in X,\ x \ne y\right\} \le \operatorname{Lip} F \cdot \operatorname{Lip} G.$$

For $x \in X$ and $A \subseteq X$, the distance from $x$ to $A$ is defined by the equation
$$d(x, A) = \inf\{d(x, a) : a \in A\}.$$
Let $\mathcal{B}$ be the class of nonempty closed bounded subsets of $X$. The least closed set containing $A$, that is, the closure of $A$, is denoted by $\bar{A}$. Certainly, for any $x \in X$ and $A \subseteq X$ we have that $d(x, A) = d(x, \bar{A})$.

Definition A56 The Hausdorff metric $\delta$ on $\mathcal{B}$ is defined for any $A, B \in \mathcal{B}$ by the equation
$$\delta(A, B) = \sup\{d(a, B), d(b, A) : a \in A,\ b \in B\}.$$

Lemma A57 If $\delta$ is the Hausdorff metric, then $\delta$ is a metric on $\mathcal{B}$.

Proof For any $A, B \in \mathcal{B}$ we certainly have that $\delta(A, B) \ge 0$. If $\delta(A, B) = 0$, then any $a_0 \in A$ must have the property that $d(a_0, B) = 0$. But then, since $B$ is closed, it must be the case that $a_0 \in B$. In other words, we have verified that $A \subseteq B$, and likewise $B \subseteq A$; that is, $A = B$. Conversely, if $A = B$, then $d(a, B) = 0$ when $a \in A$


and $d(b, A) = 0$ when $b \in B$, from which it follows that $\delta(A, B) = 0$. We next point out that
$$\delta(B, A) = \sup\{d(a, A), d(b, B) : a \in B,\ b \in A\} = \sup\{d(a, B), d(b, A) : a \in A,\ b \in B\} = \delta(A, B).$$

Finally, we prove the triangle inequality. To this end we suppose that $A, B, C \in \mathcal{B}$. The triangle inequality for the metric space $(X, d)$ says that for any $a \in A$, $b \in B$ and $c \in C$,
$$d(a, B) \le d(a, b) \le d(a, c) + d(c, b),$$
which yields
$$d(a, B) \le d(a, C) + \sup\{d(c, B) : c \in C\} \le \delta(A, C) + \delta(B, C),$$
and likewise we conclude that
$$d(b, A) \le \delta(A, C) + \delta(B, C).$$
Therefore we obtain the desired fact that
$$\delta(A, B) \le \delta(A, C) + \delta(C, B).$$

This completes the proof of this lemma.
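For finite subsets of $\mathbb{R}$ the Hausdorff metric of Definition A56 can be computed directly, since the infima and suprema are over finite sets. The short sketch below (our illustration; the names `dist` and `hausdorff` are ours) checks the metric axioms just proved on small examples:

```python
def dist(x, A):
    # distance from the point x to the finite set A
    return min(abs(x - a) for a in A)

def hausdorff(A, B):
    # δ(A, B) = sup{ d(a, B), d(b, A) : a ∈ A, b ∈ B }
    return max(max(dist(a, B) for a in A), max(dist(b, A) for b in B))

A, B, C = {0.0, 1.0}, {0.0, 2.0}, {0.5}
assert hausdorff(A, A) == 0.0                                  # identity
assert hausdorff(A, B) == hausdorff(B, A) == 1.0               # symmetry
assert hausdorff(A, B) <= hausdorff(A, C) + hausdorff(C, B)    # triangle
```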

Lemma A58 If $(X, d)$ is a complete metric space, then $(\mathcal{B}, \delta)$ is a complete metric space.

For the proof of this fact we find the following additional notation convenient. A neighborhood of a subset $A$ of $X$ is denoted by
$$N_r(A) = \{x : x \in X,\ d(x, A) < r\} = \bigcup\{B(a, r) : a \in A\}.$$
According to this definition, we conclude that $A \subseteq N_r(B)$ if and only if for all $a \in A$ there exists a $b \in B$ such that $d(a, b) < r$, or equivalently $d(a, B) < r$. Another way of expressing this condition is to merely say that
$$\sup\{d(a, B) : a \in A\} < r.$$

A fact relating these two simple metric concepts is a useful set inclusion relationship, which says that for any $x \in X$ and positive numbers $r_1, r_2$,
$$N_{r_1}(B(x, r_2)) \subseteq B(x, r_1 + r_2). \qquad (A2)$$
A related notion is the concept of the upper Hausdorff hemi-metric, given by the formula
$$\delta^*(A, B) = \inf\{r : r > 0,\ A \subseteq N_r(B)\}, \qquad (A3)$$


and by our comments above we obtain that
$$\delta^*(A, B) = \sup\{d(a, B) : a \in A\},$$
as well as that
$$\delta(A, B) = \max\{\delta^*(A, B), \delta^*(B, A)\}. \qquad (A4)$$
We also point out, by what we have already said, that
$$\delta(A, B) = \inf\{r : r > 0,\ A \subseteq N_r(B),\ B \subseteq N_r(A)\}.$$
In the next lemma we list some additional basic properties concerning these concepts.

Lemma A59
(a) If $A$ is open, then for any $r > 0$ the set $N_r(A)$ is open.
(b) If $A \subseteq B$, then $N_r(A) \subseteq N_r(B)$.
(c) $N_r(A \cap B) \subseteq N_r(A) \cap N_r(B)$.
(d) For any collection $\{A_\gamma : \gamma \in \Gamma\}$ of subsets of $X$ we have that
$$N_r\Big(\bigcup_{\gamma\in\Gamma} A_\gamma\Big) = \bigcup_{\gamma\in\Gamma} N_r(A_\gamma).$$
(e) If $r_1 \le r_2$, then $N_{r_1}(A) \subseteq N_{r_2}(A)$.
(f) For every $r_1, r_2$ we have that $N_{r_1}(N_{r_2}(A)) \subseteq N_{r_1+r_2}(A)$.
(g) $\delta(A, B) < \varepsilon$ if and only if there are positive numbers $r_1, r_2 < \varepsilon$ such that $A \subseteq N_{r_1}(B)$ and $B \subseteq N_{r_2}(A)$.

Proof Part (a) follows directly from the definition of the set $N_r(A)$ as the union of open balls. The remaining assertions are also straightforward. Here are some of the details. For (b) and (c) we use the facts that for any $x \in X$ and any subsets $A, B$ of $X$ the following equation holds:
$$d(x, A \cup B) = \min\{d(x, A), d(x, B)\},$$
and when $A \subseteq B$ we have additionally $d(x, B) \le d(x, A)$. For (d) we compute
$$N_r\Big(\bigcup_{\gamma\in\Gamma} A_\gamma\Big) = \bigcup\Big\{B(a, r) : a \in \bigcup_{\gamma\in\Gamma} A_\gamma\Big\} = \bigcup\{B(a, r) : a \in A_\gamma,\ \gamma \in \Gamma\} = \bigcup\{N_r(A_\gamma) : \gamma \in \Gamma\}.$$
Part (e) is obvious, and (f) follows from the set inclusion (A2). The final claim, which is especially useful, follows from equations (A3) and (A4).

The next fact we use is stated in an ancillary lemma.


Lemma A60 If $\mathcal{A} = \{A_n : n \in \mathbb{N}\}$ is a Cauchy sequence in $\mathcal{B}$, then there is a ball $B$ in $X$ such that for all $n \in \mathbb{N}$ we have that $A_n \subseteq B$. That is, the sets $A_n$ are uniformly bounded over $n \in \mathbb{N}$.

Proof There is a positive integer $k$ such that for any $n \ge k$ we have that $\delta(A_k, A_n) < \frac{1}{2}$, and so, by using Lemma A59 part (g), we conclude for $n \ge k$ that $A_n \subseteq N_{1/2}(A_k)$. Now we choose a positive number $r$ and $y \in X$ such that $\bigcup_{j\in\mathbb{N}_k} A_j \subseteq B(y, r)$. Therefore the ball $B = B(y, r + \frac{1}{2})$ has the desired property.

Definition A61 For every sequence $\mathcal{A} = \{A_n : n \in \mathbb{N}\}$ in $\mathcal{B}$ we define the set
$$C(\mathcal{A}) = \{x \in X : \text{there is a subsequence } \{n_k : k \in \mathbb{N}\} \text{ and } x_{n_k} \in A_{n_k} \text{ with } \lim_{k\to\infty} x_{n_k} = x\}.$$

Note that when $\mathcal{A}$ consists of one set $A$, the set $C(\mathcal{A})$ is the set of accumulation points of $A$ and is called the derived set of $A$.

We are now ready to prove Lemma A58. To this end we let $\mathcal{A} = \{A_n : n \in \mathbb{N}\}$ be a Cauchy sequence in $\mathcal{B}$. We prove it converges to $C(\mathcal{A})$.

Proof We divide the proof into several steps and begin with the statement that

(a) $C(\mathcal{A})$ is bounded.

Indeed, by Lemma A60 there is a ball $B$ such that for all $n \in \mathbb{N}$ we have that $A_n \subseteq B$. Now let $x \in C(\mathcal{A})$, so there is some subsequence $\{n_k : k \in \mathbb{N}\}$ and $x_{n_k} \in A_{n_k}$ for which $\lim_{k\to\infty} x_{n_k} = x$. Therefore we conclude that $x \in \bar{B}$; that is, $C(\mathcal{A}) \subseteq \bar{B}$.

(b) $C(\mathcal{A})$ is closed.

If $\{y_\ell : \ell \in \mathbb{N}\}$ is a sequence in $C(\mathcal{A})$ which converges to some $y \in X$, then there are subsequences $\{n_k : k \in \mathbb{N}\}$ and $\{m_k : k \in \mathbb{N}\}$ such that for all $k \in \mathbb{N}$ we have that $d(y, y_{n_k}) < 2^{-k}$, $x_{m_k} \in A_{m_k}$ and $d(y_{n_k}, x_{m_k}) < 2^{-k}$. Hence we obtain that $\lim_{k\to\infty} x_{m_k} = y$, which means that $y \in C(\mathcal{A})$.

(c) $C(\mathcal{A})$ is not empty.

Since $\{A_n : n \in \mathbb{N}\}$ is a Cauchy sequence, there is a subsequence $\{n_k : k \in \mathbb{N}\}$ such that for all $k \in \mathbb{N}$ the inequality $\delta(A_{n_k}, A_{n_{k+1}}) < 2^{-k}$ holds. In particular, we obtain for all $a \in A_{n_k}$ that $d(a, A_{n_{k+1}}) < 2^{-k}$, and so we can construct inductively a sequence $x_{n_k} \in A_{n_k}$ such that $d(x_{n_k}, x_{n_{k+1}}) < 2^{-k}$. This means that $\{x_{n_k} : k \in \mathbb{N}\}$ is a Cauchy sequence in $X$, and so $\lim_{k\to\infty} x_{n_k} = x_0$ exists. That is, we conclude that $x_0 \in C(\mathcal{A})$.


So far we have demonstrated that $C(\mathcal{A}) \in \mathcal{B}$. Now we prove that $\lim_{n\to\infty} A_n = C(\mathcal{A})$ in $(\mathcal{B}, \delta)$. For this purpose we choose $\varepsilon > 0$ and demonstrate that there is a $p \in \mathbb{N}$ such that for $m > p$ we have $\delta(C(\mathcal{A}), A_m) < \varepsilon$. By hypothesis, corresponding to this $\varepsilon$ there is an integer $k$ such that for all $k < n < m$ we have
$$\delta(A_m, A_n) < \varepsilon/2. \qquad (A5)$$
Using this fact, we first show for every $x \in C(\mathcal{A})$ and $n > k$ that $d(x, A_n) < \varepsilon$. Indeed, by the definition of the set $C(\mathcal{A})$ there is a subsequence $\{n_\ell : \ell \in \mathbb{N}\}$ with $x_{n_\ell} \in A_{n_\ell}$ and $d(x, x_{n_\ell}) < \varepsilon/2$ for sufficiently large $\ell$. We choose $\ell$ even larger, say $\ell > q$, so that not only does this inequality hold but also $n_\ell > n$, thereby guaranteeing by (A5), for $n > k$ and $\ell > q$, that $\delta(A_{n_\ell}, A_n) < \varepsilon/2$. Consequently we have for $n > k$ that $d(x, A_n) < \varepsilon$, which is the desired inequality.

Next we show that for all $n > k$ and $y \in A_n$, $d(y, C(\mathcal{A})) < \varepsilon$. We do this by constructing an $x \in C(\mathcal{A})$ so that $d(y, x) < \varepsilon$. Here we again appeal to (A5) to define inductively a subsequence $\{m_\ell : \ell \in \mathbb{Z}_+\}$ with $x_{m_\ell} \in A_{m_\ell}$, $x_{m_0} = y$, and for all $\ell \in \mathbb{Z}_+$,
$$d(x_{m_\ell}, x_{m_{\ell+1}}) < \frac{\varepsilon}{2^{\ell+1}}.$$
Clearly, $\{x_{m_\ell} : \ell \in \mathbb{N}\}$ is a Cauchy sequence and hence converges to some $x \in X$, which by construction lies in $C(\mathcal{A})$. Moreover, we observe for all $\ell \in \mathbb{Z}_+$ that
$$d(y, x_{m_{\ell+1}}) < \varepsilon\left(\frac{1}{2} + \frac{1}{2^2} + \cdots\right) = \varepsilon,$$
and so $d(y, x) < \varepsilon$.

For the next lemma we let $\mathcal{C}$ be the collection of all compact subsets of the complete metric space $X$. Clearly, the set $\mathcal{C}$ is a subset of $\mathcal{B}$. In the next lemma we prove that it is a closed subset of $\mathcal{B}$ in the Hausdorff metric on $\mathcal{B}$.

Lemma A62 If $X$ is a complete metric space, then the set $\mathcal{C}$ is a closed subset of $(\mathcal{B}, \delta)$. Therefore, in particular, $(\mathcal{C}, \delta)$ is a complete metric space.

Proof Suppose $\mathcal{A} = \{A_n : n \in \mathbb{N}\}$ is a sequence in $\mathcal{C}$ such that $\lim_{n\to\infty} A_n = A$ in the metric $\delta$. We show that the set $A$ is compact by establishing that it is complete and totally bounded (see Theorem A7). Since $(\mathcal{B}, \delta)$ is complete, we can then be assured that $A$ is in $\mathcal{B}$ too. Certainly, we already know by the proof of Lemma A58 that $A = C(\mathcal{A})$, and so is in $\mathcal{B}$. Recall that a closed subset of a complete metric space must be complete. In particular, we conclude that $A$ is complete. Next we show that it is totally bounded. To this end we choose any $\varepsilon > 0$. By our hypothesis there is an $m \in \mathbb{N}$ such that $\delta(A, A_m) < \varepsilon/2$. Using part (g) of Lemma A59, we conclude that there is an $r \in (0, \varepsilon/2)$ such that $A \subseteq N_r(A_m)$. However, the set $A_m$ is compact, and consequently there is a


finite set of points $\{x_j : j \in \mathbb{N}_k\} \subseteq X$ such that
$$A_m \subseteq \bigcup\{B(x_j, \varepsilon/2) : j \in \mathbb{N}_k\}.$$
Again using Lemma A59, we get that
$$A \subseteq \bigcup\{B(x_j, \varepsilon) : j \in \mathbb{N}_k\},$$
and so $A$ is totally bounded. This proves that $A \in \mathcal{C}$, and we conclude that indeed $\mathcal{C}$ is a closed subset of $\mathcal{B}$ in the metric $\delta$.

For later purposes we require the next lemma.

Lemma A63 For any Lipschitz continuous mapping $F: X \to X$ and subsets $A, B \subseteq X$ we have that
$$\delta\big(\overline{F(A)}, \overline{F(B)}\big) \le (\operatorname{Lip} F) \cdot \delta(A, B). \qquad (A6)$$

Proof First we observe, for any subsets $A, B$ of $X$, that $\delta(\bar{A}, \bar{B}) \le \delta(A, B)$. To see this, we choose any $\varepsilon > \delta(A, B)$ and a positive $\sigma$. By Lemma A59 part (g) there are positive constants $r_1, r_2 < \varepsilon$ such that $A \subseteq N_{r_1}(B)$ and $B \subseteq N_{r_2}(A)$. Consequently, we get that $\bar{A} \subseteq N_{r_1+\sigma}(\bar{B})$ and $\bar{B} \subseteq N_{r_2+\sigma}(\bar{A})$. Therefore, again by Lemma A59 part (g), we have that $\delta(\bar{A}, \bar{B}) < \varepsilon + \sigma$. Now we let $\sigma \to 0^+$ and $\varepsilon \to \delta(A, B)$ from above, and conclude as claimed that $\delta(\bar{A}, \bar{B}) \le \delta(A, B)$.

Applying this preliminary observation to the inequality (A6), we reduce the proof to establishing that
$$\delta(F(A), F(B)) \le (\operatorname{Lip} F)\,\delta(A, B).$$
To this end, for a given $u \in F(A)$ there is $v \in A$ with $u = F(v)$. Now choose any $x \in B$ and observe that
$$d(u, F(B)) \le d(u, F(x)) = d(F(v), F(x)) \le (\operatorname{Lip} F)\,d(v, x).$$
Since $x$ was chosen arbitrarily in $B$, we get that
$$d(u, F(B)) \le (\operatorname{Lip} F)\,d(v, B) \le (\operatorname{Lip} F)\,\delta(A, B).$$
Similarly, for $u \in F(B)$ with $u = F(v)$, where $v \in B$, we conclude as above for all $x \in A$ that
$$d(F(A), u) \le d(F(x), F(v)) \le (\operatorname{Lip} F)\,d(x, v).$$
Consequently we obtain that
$$d(F(A), u) \le (\operatorname{Lip} F)\,d(A, v) \le (\operatorname{Lip} F)\,\delta(A, B),$$


and also
$$\delta(F(A), F(B)) \le (\operatorname{Lip} F)\,\delta(A, B).$$

The next lemma is also needed.

Lemma A64 If $\{A_\gamma : \gamma \in \Gamma\}$ and $\{B_\gamma : \gamma \in \Gamma\}$ are families of subsets of $X$, then
$$\delta\Big(\bigcup_{\gamma\in\Gamma} A_\gamma,\ \bigcup_{\gamma\in\Gamma} B_\gamma\Big) \le \sup\{\delta(A_\gamma, B_\gamma) : \gamma \in \Gamma\}.$$

Proof For any $x \in \bigcup_{\gamma\in\Gamma} A_\gamma$ there is a $\mu \in \Gamma$ such that $x \in A_\mu$, and so we see that
$$d\Big(x,\ \bigcup_{\gamma\in\Gamma} B_\gamma\Big) \le d(x, B_\mu) \le \delta(A_\mu, B_\mu) \le \sup\{\delta(A_\gamma, B_\gamma) : \gamma \in \Gamma\}.$$
Similarly, for any $y \in \bigcup_{\gamma\in\Gamma} B_\gamma$ there corresponds a $\mu \in \Gamma$ with $y \in B_\mu$, and so
$$d\Big(\bigcup_{\gamma\in\Gamma} A_\gamma,\ y\Big) \le d(A_\mu, y) \le \delta(A_\mu, B_\mu) \le \sup\{\delta(A_\gamma, B_\gamma) : \gamma \in \Gamma\}.$$
Combining these two inequalities proves the lemma.

If $A$ is nonempty, then the sets whose union defines $S(A)$ are nonempty, and so is $S(A)$; therefore $S: \mathcal{B} \to \mathcal{B}$, where $\mathcal{B}$ is the class of closed bounded nonempty sets. We know that $(\mathcal{B}, \delta)$ is a complete metric space.

We have now prepared the basic facts needed about the Hausdorff metric and can address the essential issue of this section, namely the construction of invariant sets. We start with a complete metric space $X$ and a finite family of contractive mappings
$$\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}.$$
Corresponding to the family $\Phi$ is a set-valued mapping, which is introduced in the next definition.

Definition A65 Let $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ be a finite family of contraction mappings on a metric space $X$. For every subset $A$ of $X$ we define the set-valued mapping $\Phi^+$ at $A$ by the formula
$$\Phi^+(A) = \overline{\bigcup\{\phi_\varepsilon(A) : \varepsilon \in \mathbb{Z}_\mu\}}.$$


Clearly, by what has already been said, we conclude that $\Phi^+: \mathcal{B} \to \mathcal{B}$, and also we have that $\Phi^+: \mathcal{C} \to \mathcal{C}$. Moreover, for $A \in \mathcal{C}$ the set $\Phi^+(A)$ reduces to
$$\Phi^+(A) = \bigcup\{\phi_\varepsilon(A) : \varepsilon \in \mathbb{Z}_\mu\}.$$
The essential property of the set-valued mapping $\Phi^+$ is given in the next lemma. For its statement we introduce the constant
$$\lambda(\Phi) = \max\{\operatorname{Lip} \phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\},$$
which is certainly less than one.

Lemma A66 If $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ is a finite family of contraction mappings on a metric space, then the set-valued mapping $\Phi^+: \mathcal{B} \to \mathcal{B}$ is a contraction relative to the Hausdorff metric. In fact, we have the inequality
$$\operatorname{Lip} \Phi^+ \le \lambda(\Phi)$$
for its Lipschitz constant.

Proof Let $A, B \in \mathcal{B}$ and use the two previous lemmas to conclude that
$$\delta(\Phi^+(A), \Phi^+(B)) \le \max\{\delta(\phi_\varepsilon(A), \phi_\varepsilon(B)) : \varepsilon \in \mathbb{Z}_\mu\} \le \lambda(\Phi)\,\delta(A, B).$$

Theorem A67 If $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ is a finite family of contraction mappings on a complete metric space $X$, then there is a unique $K \in \mathcal{C}$ such that $K = \Phi^+(K)$. Moreover, there is at most one $K \in \mathcal{B}$ which satisfies this equation.

Proof This result follows from the contraction mapping principle. Specifically, the uniqueness of $K \in \mathcal{B}$ follows from the fact that $\Phi^+$ is a contraction on $(\mathcal{B}, \delta)$, while the existence of $K \in \mathcal{C}$ follows from the fact that $(\mathcal{C}, \delta)$ is also a complete metric space.

Remark Any subset $K$ of $X$ which satisfies the equation $K = \Phi^+(K)$ is called an invariant set of the collection of mappings $\Phi$.

We end this appendix with an alternative construction of K which is very relevant to the presentation in Chapter 4. The additional information provided is that an invariant set K ∈ C can be obtained from the fixed points of a finite number of compositions of mappings chosen from the collection Φ. For an explanation of this procedure we review the appropriate notation used in Chapter 4. We choose Z_μ^p to denote all ordered sequences of p integers, p ∈ ℕ, selected from Z_μ. Every e ∈ Z_μ^p is written in vector form e = [ε_j : j ∈ Z_p],


512 Basic results from functional analysis

where each ε_j is in Z_μ, and associated with this vector is the composition mapping

φ_e = φ_{ε_0} ∘ φ_{ε_1} ∘ ··· ∘ φ_{ε_{p−1}}.

Associated with the family Φ of contraction mappings is the p-compound family of mappings

Φ_p = {φ_e : e ∈ Z_μ^p}.

We show next that Φ and Φ_p share the same invariant set. We start with a preliminary lemma.

Lemma A.68 If K is an invariant set of the family of contractions Φ = {φ_ε : ε ∈ Z_μ} and e = [ε_i : i ∈ Z_p] ∈ Z_μ^p, then

φ_e(K) = ⋃{(φ_e ∘ φ_ε)(K) : ε ∈ Z_μ}.

Proof In the above equation the case p = 0 just means that K is an invariant set for the collection Φ of mappings. Now, for p ∈ ℕ and e ∈ Z_μ^p, we merely compute that

φ_e(K) = φ_e(⋃{φ_ε(K) : ε ∈ Z_μ}) = ⋃{(φ_e ∘ φ_ε)(K) : ε ∈ Z_μ}.

Proposition A.69 If K = Φ⁺(K) and p ∈ ℕ, then

K = ⋃{φ_e(K) : e ∈ Z_μ^p}.

Proof The case p = 1 is immediate. The remaining cases follow by induction on p. Specifically, if p > 1 and the result is valid for p − 1, we compute

K = ⋃{φ_e(K) : e ∈ Z_μ^{p−1}} = ⋃{(φ_e ∘ φ_ε)(K) : e ∈ Z_μ^{p−1}, ε ∈ Z_μ} = ⋃{φ_e(K) : e ∈ Z_μ^p}.
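The identity behind Proposition A.69 can be exercised numerically: one application of the p-compound maps φ_e coincides with p applications of Φ⁺. A sketch for the hypothetical two-map Cantor family and p = 2:

```python
# Check, for the hypothetical two-map family phi_0(x) = x/3 and
# phi_1(x) = x/3 + 2/3, that one application of the 2-compound family
# Phi_2 coincides with two applications of Phi^+.
from itertools import product

def phi(eps, x):
    return x / 3 if eps == 0 else x / 3 + 2 / 3

def phi_e(e, x):
    """The composition phi_{eps_0} o phi_{eps_1} o ... o phi_{eps_{p-1}}."""
    for eps in reversed(e):
        x = phi(eps, x)
    return x

def phi_plus(A):
    return {phi(eps, x) for eps in (0, 1) for x in A}

A = {0.0, 1.0}
compound = {phi_e(e, x) for e in product((0, 1), repeat=2) for x in A}
assert compound == phi_plus(phi_plus(A))
```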

For the next series of lemmas we use the notation Z_μ^∞ for all ordered sequences of integers selected from Z_μ. Every e ∈ Z_μ^∞ is written in vector form e = [ε_j : j ∈ ℕ_0], where each component ε_j is in Z_μ. There is a natural association of any e ∈ Z_μ^∞ with an element in Z_μ^p by truncating it to its first p components, namely e_p = [ε_j : j ∈ Z_p]; that is to say, every e ∈ Z_μ^∞ gives rise to the sequence of vectors {e_p : p ∈ ℕ}. Of course, every e ∈ Z_μ^p may be realized as e_p for many choices of e ∈ Z_μ^∞. For every subset A in X we set

A_e = φ_e(A)


so that A_e ∈ B. We use the convention that for p = 0 the mapping φ_e, e ∈ Z_μ^0, is the identity mapping on X. Hence, if A ∈ B, then in this case A_e = A.

The next sequence of lemmas studies what happens when p → ∞. For this purpose we recall that the diameter of a subset A of a metric space X is defined as

diam A = sup{d(x, y) : x, y ∈ A}.

It follows readily that a bounded set has a finite diameter and that the diameter of a set and its closure are the same.

Lemma A.70 If A is a bounded subset of a metric space X, Φ a finite collection of contractive mappings on X and e ∈ Z_μ^∞, then

lim_{p→∞} diam A_{e_p} = 0.

Proof For any x, y ∈ φ_{e_p}(A) there are u, v ∈ A such that x = φ_{e_p}u and y = φ_{e_p}v. Therefore we conclude that

d(x, y) ≤ λ(Φ)^p d(u, v) ≤ λ(Φ)^p diam A.

That is, we have that

diam A_{e_p} ≤ λ(Φ)^p diam A, (A.7)

from which the lemma follows.
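Inequality (A.7) is easy to watch in action. A sketch for the hypothetical family φ_0(x) = x/3, φ_1(x) = x/3 + 2/3, where λ(Φ) = 1/3 and the maps are affine, so the image of the endpoints of A = [0, 1] determines diam A_{e_p} exactly:

```python
# Inequality (A.7) for a hypothetical family of two affine contractions
# phi_0(x) = x/3, phi_1(x) = x/3 + 2/3 with lambda(Phi) = 1/3: the
# diameter of A_{e_p} = phi_{e_p}(A) shrinks at least geometrically.

def phi(eps, x):
    return x / 3 if eps == 0 else x / 3 + 2 / 3

def phi_word(word, x):
    for eps in reversed(word):
        x = phi(eps, x)
    return x

e = [0, 1, 1, 0, 1, 0]          # an arbitrary finite code
for p in range(len(e) + 1):
    # A = [0, 1]; for affine maps the images of the endpoints give diam
    lo, hi = phi_word(e[:p], 0.0), phi_word(e[:p], 1.0)
    assert abs(hi - lo) <= (1 / 3) ** p + 1e-12   # diam A_{e_p} <= lambda^p diam A
```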

Lemma A.71 If K is an invariant set in B of a finite number Φ of contractive mappings on a complete metric space X and e ∈ Z_μ^∞, then the family of subsets {K_{e_p} : p ∈ ℕ} of K is nested and

K_e = ⋂{K_{e_p} : p ∈ ℕ}

consists of exactly one point k_e ∈ X.

Proof For each p ∈ ℕ and e ∈ Z_μ^∞ we have by Proposition A.69 that

K_{e_p} = φ_{e_p}(K) = φ_{e_p}(⋃{φ_ε(K) : ε ∈ Z_μ}) = ⋃{(φ_{e_p} ∘ φ_ε)(K) : ε ∈ Z_μ} ⊇ K_{e_{p+1}}.

Moreover, since K is an invariant set in B, it follows from the above inclusion that K_{e_p} ⊆ K.

Now, if x, y ∈ K_e, then by inequality (A.7), for any p ∈ ℕ we have that

d(x, y) ≤ λ(Φ)^p diam K.


We now let p → ∞ and conclude that x = y. That is, the set K_e consists of at most one point.

It remains to prove that K_e is nonempty. Indeed, since K is nonempty, for any p ∈ ℕ there is an x_p ∈ K_{e_p}, and we claim that the sequence {x_p : p ∈ ℕ} is a Cauchy sequence in X. To this end, we suppose that p ≤ q; then x_p, x_q ∈ K_{e_p}. So we conclude that

d(x_p, x_q) ≤ λ(Φ)^p diam K,

from which it follows that {x_p : p ∈ ℕ} is a Cauchy sequence. Therefore there is an x ∈ X for which x = lim_{q→∞} x_q. However, x_q ∈ K_{e_p} for all q ≥ p, from which we conclude that x ∈ K_{e_p} because K_{e_p} is a closed subset of X. Consequently, we have that x ∈ K_e, which proves the result.

Lemma A.72 If K is an invariant set in B of a finite number Φ of contractive mappings, then

K = {k_e : e ∈ Z_μ^∞}.

Proof By Lemma A.71 we have that

{k_e : e ∈ Z_μ^∞} ⊆ K.

Now we choose an x_0 ∈ K. Since K is an invariant set corresponding to the family Φ, there is an ε_0 ∈ Z_μ and an x_1 ∈ K such that x_0 = φ_{ε_0}x_1. Repeating this process, we create an e ∈ Z_μ^∞ such that x_0 = φ_{e_p}x_p. In particular, we conclude that x_0 ∈ K_{e_p}, and so also x_0 ∈ K_e. This means, by Lemma A.71, that x_0 = k_e, thereby establishing the claim.

Our next observation demonstrates that for any e ∈ Z_μ^∞ the point k_e ∈ K can be constructed as a limit of the fixed points x_{e_p} of the contraction mappings φ_{e_p}. We first demonstrate that x_{e_p} ∈ K for any e ∈ Z_μ^∞ and for any p ∈ ℕ. To this end we introduce a map from Z_μ^p to Z_μ^∞. Specifically, corresponding to an e ∈ Z_μ^p expressed as e = [ε_l : l ∈ Z_p], we let e^p ∈ Z_μ^∞ be the infinite vector obtained by repeating the components of the vector e ∈ Z_μ^p infinitely often. That is, we define

e^p = [ε_0, . . . , ε_{p−1}, ε_0, . . . , ε_{p−1}, . . .]^T.

Proposition A.73 For every p ∈ ℕ and e ∈ Z_μ^p we have that

k_{e^p} = x_e.

Proof We need to show that

φ_e(k_{e^p}) = k_{e^p}


for any e ∈ Z_μ^p. As a first step, we have that

φ_e(k_{e^p}) ∈ φ_e(⋂{K_{(e^p)_q} : q ∈ ℕ})

and so we get

φ_e(k_{e^p}) ∈ ⋂{(φ_e ∘ φ_{(e^p)_q})(K) : q ∈ ℕ}.

However, by our notational convention,

φ_e ∘ φ_{(e^p)_q} = φ_{(e^p)_{p+q}}.

But then Lemma A.71 implies that

φ_e(k_{e^p}) ∈ ⋂{K_{(e^p)_q} : q ∈ ℕ} = {k_{e^p}}.

Since φ_e has a unique fixed point, the proof of the proposition is complete.

We also require the next result.

Lemma A.74 For each e ∈ Z_μ^∞ we have that

k_e = lim_{p→∞} x_{e_p}.

Proof For any e ∈ Z_μ^∞ and p ∈ ℕ we have by Proposition A.73 that x_{e_p} ∈ K_{e_p}. But we also have that k_e ∈ K_{e_p}. So we conclude by inequality (A.7) that

d(x_{e_p}, k_e) ≤ λ(Φ)^p diam K.

Letting p → ∞ in this inequality proves the result.
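Proposition A.73 and Lemma A.74 can be made tangible with the hypothetical Cantor family φ_0(x) = x/3, φ_1(x) = x/3 + 2/3. For e = [0, 1] we have φ_e(x) = φ_0(φ_1(x)) = x/9 + 2/9, whose unique fixed point is x_e = 1/4; this is also k_{e^p}, the point coded by the periodic sequence 0, 1, 0, 1, . . .:

```python
# Illustration of Proposition A.73 and Lemma A.74 for the hypothetical
# family phi_0(x) = x/3, phi_1(x) = x/3 + 2/3 and the code e = [0, 1],
# for which phi_e(x) = x/9 + 2/9 has the fixed point x_e = 1/4.

def phi(eps, x):
    return x / 3 if eps == 0 else x / 3 + 2 / 3

def phi_word(word, x):
    """phi_{eps_0} o phi_{eps_1} o ... applied to x."""
    for eps in reversed(word):
        x = phi(eps, x)
    return x

# x_e: iterate the contraction phi_e = phi_0 o phi_1 to its fixed point
x = 0.0
for _ in range(40):
    x = phi_word([0, 1], x)
assert abs(x - 0.25) < 1e-12

# k_{e^p}: apply a long truncation of the periodic code to any start point
assert abs(phi_word([0, 1] * 20, 0.5) - 0.25) < 1e-12
```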

As a corollary, we obtain the following result, part of which was proved earlier in an even stronger form. To start with, we let W be the smallest closed set containing all fixed points x_e, where e ∈ Z_μ^p and p ∈ ℕ.

Corollary A.75 If K ∈ B is an invariant set for the family Φ of contractive mappings, then K = W.

Proof By Proposition A.73 it follows that W ⊆ K, while Lemma A.74 implies that K ⊆ W.

This corollary is the key to constructing an invariant set for Φ. We continue to explain further properties of K and specifically comment on its representation given in Lemma A.72.

This requires putting the Tychonoff topology on Z_μ^∞. Specifically, we let Z_μ have the discrete topology, that is, all sets are open. We view each element of Z_μ^∞ as a function from ℕ_0 into Z_μ and give Z_μ^∞ the weakest topology so that all component maps are continuous. In other words, for each i ∈ ℕ_0, the map which takes e = [ε_k : k ∈ ℕ_0] into ε_i is continuous in this topology on Z_μ^∞. Therefore, as a special case of the Tychonoff theorem, see [183, 276], we conclude that Z_μ^∞ is compact in this topology.

Definition A.76 We define the map ψ : Z_μ^∞ → K at e ∈ Z_μ^∞ by the equation ψ(e) = k_e.

Lemma A.77 The map ψ defined above is continuous.

Remark A.78 According to Lemma A.72, the mapping ψ is onto. Hence Lemma A.77 implies that any invariant set K ∈ B must be compact, being the continuous image of the compact space Z_μ^∞, confirming in an alternative manner a part of Theorem A.67.

Proof We show that ψ is continuous at any e ∈ Z_μ^∞. Thus we must show, for any ε > 0, that the inverse image of the ball B(k_e, ε) in X contains an open neighborhood of e ∈ Z_μ^∞. Corresponding to this ε, we choose a positive integer q such that diam K_{e_q} ≤ ε. The existence of such an integer is guaranteed by Lemma A.70 applied to the set K. The set

O = {e′ : e′ ∈ Z_μ^∞, e′_q = e_q}

is an open neighborhood of e in the topology on Z_μ^∞. Moreover, if e′ ∈ O, then it follows that k_{e′} ∈ K_{e_q}. That is, if e′ ∈ O, then d(k_{e′}, k_e) ≤ ε. This means that O ⊆ ψ^{−1}(B(k_e, ε)), which proves the lemma.

We now turn to the iterates of the set-valued map Φ⁺. We denote them by (Φ⁺)^p, defined on any subset A iteratively by the formula

(Φ⁺)^p(A) = Φ⁺((Φ⁺)^{p−1}(A))

for p ≥ 1, and (Φ⁺)^0(A) = A. According to Lemma A.66 we have that

δ((Φ⁺)^p(A), K) ≤ λ(Φ)^p δ(A, K),

and so for any nonempty bounded subset A of X we conclude that

lim_{p→∞} (Φ⁺)^p(A) = K.
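This geometric convergence of the iterates is observable numerically. A sketch for the hypothetical Cantor family, starting from A = {0, 1} (a subset of the invariant set, so the iterates are nested and successive Hausdorff distances decay like λ(Φ)^p):

```python
# Geometric convergence of the iterates (Phi^+)^p(A) for the hypothetical
# family phi_0(x) = x/3, phi_1(x) = x/3 + 2/3, starting from A = {0, 1}.

def hausdorff(A, B):
    dev = lambda S, T: max(min(abs(s - t) for t in T) for s in S)
    return max(dev(A, B), dev(B, A))

def phi_plus(A):
    return {x / 3 for x in A} | {x / 3 + 2 / 3 for x in A}

prev = {0.0, 1.0}
for p in range(6):
    nxt = phi_plus(prev)
    # delta((Phi^+)^{p+1}(A), (Phi^+)^p(A)) <= (1/3)^{p+1} here
    assert hausdorff(nxt, prev) <= (1 / 3) ** (p + 1) + 1e-12
    prev = nxt
```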

We are now going to establish the existence of the limit in Lemma A.74 without assuming the existence of an invariant set. We start from the fixed point x_ε of the map φ_ε for each ε ∈ Z_μ, define the constants ρ = max{d(x_ε, x_θ) : ε, θ ∈ Z_μ} and r = ρ(1 − λ(Φ))^{−1}, and a set V ∈ B given as

V = ⋂{B(x_ε, r) : ε ∈ Z_μ}.


Lemma A.79 If Φ is a finite family of contraction maps on a complete metric space X and e ∈ Z_μ^∞, then lim_{p→∞} x_{e_p} exists and is the unique point in the set

V_e = ⋂{V_{e_p} : p ∈ ℕ}.

Proof The proof proceeds as before. First we establish that the sets {V_{e_p} : p ∈ ℕ} are nested in a decreasing manner.

By the choice of the constants r and ρ, we have, for λ = λ(Φ), that

⋃{B(x_ε, λr) : ε ∈ Z_μ} ⊆ V. (A.8)

In fact, if d(x, x_θ) ≤ λr for some θ ∈ Z_μ, we have, for every ε ∈ Z_μ, that

d(x, x_ε) ≤ d(x, x_θ) + d(x_θ, x_ε) ≤ λr + ρ = r.

Now, if u ∈ φ_ε(V), then there is a v ∈ V such that u = φ_ε v. Since v ∈ V, we know for any ε ∈ Z_μ that d(v, x_ε) ≤ r, and so it follows that

d(u, x_ε) = d(φ_ε(v), φ_ε(x_ε)) ≤ λ d(v, x_ε) ≤ λr.

Consequently, by the set inclusion (A.8), we conclude that u ∈ V. In other words, we have established that φ_ε(V) ⊆ V for any ε ∈ Z_μ.

From this observation it follows directly, for any p ∈ ℕ, that V_{e_p} ⊇ V_{e_{p+1}}. This implies that the contraction map φ_{e_p} has the property that φ_{e_p} : V → V. Since V is closed, the unique fixed point of φ_{e_p} lies in V, that is, x_{e_p} ∈ V. As before, we argue as in Lemma A.70 that diam V_{e_p} ≤ λ^p diam V. Thus, as in the proof of Lemma A.71, we conclude that V_e consists of at most one point; that is, the sequence {x_{e_p} : p ∈ ℕ} is a Cauchy sequence and lim_{p→∞} x_{e_p} = x_e, the unique point in V_e. This proves the lemma.

We can now define the subset K = {x_e : e ∈ Z_μ^∞} of X. As in the remark following Lemma A.77, it follows that K is compact. Indeed, the map Z_μ^∞ ∋ e → x_e ∈ K is continuous, by the same argument used to prove Lemma A.77. It remains only to establish that K is an invariant set of the collection Φ of contraction mappings. For this purpose, for each ε ∈ Z_μ and e = [ε_j : j ∈ ℕ_0] ∈ Z_μ^∞, we define

εe = [ε, ε_0, ε_1, . . .],

and likewise for e ∈ Z_μ^p. We claim that φ_ε(x_e) = x_{εe} for each e ∈ Z_μ^∞.


To see this, we observe for each e ∈ Z_μ^∞ and ε ∈ Z_μ that

φ_ε(x_e) ∈ φ_ε(⋂{V_{e_p} : p ∈ ℕ}) ⊆ ⋂{φ_ε(V_{e_p}) : p ∈ ℕ} = ⋂{V_{(εe)_{p+1}} : p ∈ ℕ} = {x_{εe}}.

In other words, we do indeed have that φ_ε(x_e) = x_{εe}. Now, if x_e ∈ K, we write e = ε_0 e′ for some ε_0 ∈ Z_μ and e′ ∈ Z_μ^∞ and conclude that

x_e = x_{ε_0 e′} = φ_{ε_0}(x_{e′}) ∈ ⋃{φ_ε(K) : ε ∈ Z_μ}.

Similarly, if u ∈ ⋃{φ_ε(K) : ε ∈ Z_μ}, we get that u = φ_ε x_e for some ε ∈ Z_μ and e ∈ Z_μ^∞. This implies that u = x_{εe} ∈ K. So K is an invariant set for the collection Φ of contraction mappings on a complete metric space.


References

[1] R. A. Adams, Sobolev Spaces, Academic Press, New York, 1975.
[2] L. V. Ahlfors, Complex Analysis, 3rd edn, McGraw-Hill, New York, 1985.
[3] M. Ahues, A. Largillier and B. V. Limaye, Spectral Computations for Bounded Operators, Chapman and Hall/CRC, London, 2001.
[4] B. K. Alpert, A class of bases in L2 for the sparse representation of integral operators, SIAM Journal on Mathematical Analysis 24 (1993), 246–262.
[5] B. Alpert, G. Beylkin, R. Coifman and V. Rokhlin, Wavelet-like bases for the fast solution of second-kind integral equations, SIAM Journal on Scientific Computing 14 (1993), 159–184.

[6] P. M. Anselone, Collectively Compact Operator Approximation Theory and Applications to Integral Equations, Prentice-Hall, Englewood Cliffs, NJ, 1971.
[7] K. E. Atkinson, Numerical solution of Fredholm integral equations of the second kind, SIAM Journal on Numerical Analysis 4 (1967), 337–348.
[8] K. E. Atkinson, The numerical solution of the eigenvalue problem for compact integral operators, Transactions of the American Mathematical Society 129 (1967), 458–465.
[9] K. E. Atkinson, Iterative variants of the Nyström method for the numerical solution of integral equations, Numerische Mathematik 22 (1973), 17–31.
[10] K. E. Atkinson, The numerical evaluation of fixed points for completely continuous operators, SIAM Journal on Numerical Analysis 10 (1973), 799–807.
[11] K. E. Atkinson, Convergence rates for approximate eigenvalues of compact integral operators, SIAM Journal on Numerical Analysis 12 (1975), 213–222.
[12] K. E. Atkinson, A survey of boundary integral equation methods for the numerical solution of Laplace's equation in three dimensions, in M. Golberg (ed.), Numerical Solution of Integral Equations, Plenum Press, New York, 1990.
[13] K. E. Atkinson, A survey of numerical methods for solving nonlinear integral equations, Journal of Integral Equations and Applications 4 (1992), 15–46.
[14] K. E. Atkinson, The numerical solution of a nonlinear boundary integral equation on smooth surfaces, IMA Journal of Numerical Analysis 14 (1994), 461–483.
[15] K. E. Atkinson, The Numerical Solution of Integral Equations of the Second Kind, Cambridge University Press, Cambridge, 1997.
[16] K. E. Atkinson and A. Bogomolny, The discrete Galerkin method for integral equations, Mathematics of Computation 48 (1987), 595–616.
[17] K. E. Atkinson and G. Chandler, Boundary integral equation methods for solving Laplace's equation with nonlinear boundary conditions: the smooth boundary case, Mathematics of Computation 55 (1990), 451–472.


[18] K. E. Atkinson and G. Chandler, The collocation method for solving the radiosity equation for unoccluded surfaces, Journal of Integral Equations and Applications 10 (1998), 253–290.
[19] K. E. Atkinson and D. Chien, Piecewise polynomial collocation for boundary integral equations, SIAM Journal on Scientific Computing 16 (1995), 651–681.
[20] K. E. Atkinson and J. Flores, The discrete collocation method for nonlinear integral equations, IMA Journal of Numerical Analysis 13 (1993), 195–213.
[21] K. E. Atkinson, I. Graham and I. Sloan, Piecewise continuous collocation for integral equations, SIAM Journal on Numerical Analysis 20 (1987), 172–186.
[22] K. E. Atkinson and W. Han, Theoretical Numerical Analysis, Springer-Verlag, New York, 2001.
[23] K. E. Atkinson and F. A. Potra, Projection and iterated projection methods for nonlinear integral equations, SIAM Journal on Numerical Analysis 24 (1987), 1352–1373.
[24] K. E. Atkinson and F. A. Potra, On the discrete Galerkin method for Fredholm integral equations of the second kind, IMA Journal of Numerical Analysis 9 (1989), 385–403.
[25] K. E. Atkinson and I. H. Sloan, The numerical solution of the first kind logarithmic kernel integral equations on smooth open curves, Mathematics of Computation 56 (1991), 119–139.
[26] I. Babuška and J. E. Osborn, Estimates for the errors in eigenvalue and eigenvector approximation by Galerkin methods, with particular attention to the case of multiple eigenvalues, SIAM Journal on Numerical Analysis 24 (1987), 1249–1276.
[27] I. Babuška and J. E. Osborn, Finite element-Galerkin approximation of the eigenvalues and eigenvectors of selfadjoint problems, Mathematics of Computation 52 (1989), 275–297.
[28] G. Beylkin, R. Coifman and V. Rokhlin, Fast wavelet transforms and numerical algorithms I, Communications on Pure and Applied Mathematics 44 (1991), 141–183.
[29] R. Bialecki and A. Nowak, Boundary value problems in heat conduction with nonlinear material and nonlinear boundary conditions, Applied Mathematical Modelling 5 (1981), 417–421.
[30] S. Börm, L. Grasedyck and W. Hackbusch, Introduction to hierarchical matrices with applications, Engineering Analysis with Boundary Elements 27 (2003), 405–422.
[31] J. H. Bramble and J. E. Osborn, Rate of convergence estimates for nonselfadjoint eigenvalue approximations, Mathematics of Computation 27 (1973), 525–549.
[32] C. Brebbia, J. Telles and L. Wrobel, Boundary Element Techniques: Theory and Applications in Engineering, Springer-Verlag, Berlin, 1984.
[33] M. Brenner, Y. Jiang and Y. Xu, Multiparameter regularization for Volterra kernel identification via multiscale collocation methods, Advances in Computational Mathematics 31 (2009), 421–455.
[34] H. Brunner, On the numerical solution of nonlinear Volterra–Fredholm integral equations by collocation methods, SIAM Journal on Numerical Analysis 27 (1990), 987–1000.
[35] H. Brunner, On implicitly linear and iterated collocation methods for Hammerstein integral equations, Journal of Integral Equations and Applications 3 (1991), 475–488.
[36] H. J. Bungartz and M. Griebel, Sparse grids, Acta Numerica 13 (2004), 147–269.


[37] H. Cai and Y. Xu, A fast Fourier–Galerkin method for solving singular boundary integral equations, SIAM Journal on Numerical Analysis 46 (2008), 1965–1984.
[38] Y. Cao, T. Herdman and Y. Xu, A hybrid collocation method for Volterra integral equations with weakly singular kernels, SIAM Journal on Numerical Analysis 41 (2003), 264–281.
[39] Y. Cao, M. Huang, L. Liu and Y. Xu, Hybrid collocation methods for Fredholm integral equations with weakly singular kernels, Applied Numerical Mathematics 57 (2007), 549–561.
[40] Y. Cao, B. Wu and Y. Xu, A fast collocation method for solving stochastic integral equations, SIAM Journal on Numerical Analysis 47 (2009), 3744–3767.
[41] Y. Cao and Y. Xu, Singularity preserving Galerkin methods for weakly singular Fredholm integral equations, Journal of Integral Equations and Applications 6 (1994), 303–334.
[42] T. Carleman, Über eine nichtlineare Randwertaufgabe bei der Gleichung Δu = 0, Mathematische Zeitschrift 9 (1921), 35–43.
[43] A. Cavaretta, W. Dahmen and C. A. Micchelli, Stationary subdivision, Memoirs of the American Mathematical Society No. 453, 1991.
[44] F. Chatelin, Spectral Approximation of Linear Operators, Academic Press, New York, 1983.
[45] J. Chen, Z. Chen and S. Cheng, Multilevel augmentation methods for solving the sine-Gordon equation, Journal of Mathematical Analysis and Applications 375 (2011), 706–724.
[46] J. Chen, Z. Chen and Y. Zhang, Fast singularity preserving methods for integral equations with non-smooth solutions, Journal of Integral Equations and Applications 24 (2012), 213–240.
[47] M. Chen, Z. Chen and G. Chen, Approximate Solutions of Operator Equations, World Scientific, Singapore, 1997.
[48] Q. Chen, C. A. Micchelli and Y. Xu, On the matrix completion problem for multivariate filter bank construction, Advances in Computational Mathematics 26 (2007), 173–204.
[49] Q. Chen, T. Tang and Z. Teng, A fast numerical method for integral equations of the first kind with logarithmic kernel using mesh grading, Journal of Computational Mathematics 22 (2004), 287–298.
[50] X. Chen, Z. Chen and B. Wu, Multilevel augmentation methods with matrix compression for solving reformulated Hammerstein equations, Journal of Integral Equations and Applications 24 (2012), 513–544.
[51] X. Chen, Z. Chen, B. Wu and Y. Xu, Fast multilevel augmentation methods for nonlinear boundary integral equations, SIAM Journal on Numerical Analysis 49 (2011), 2231–2255.
[52] X. Chen, Z. Chen, B. Wu and Y. Xu, Fast multilevel augmentation methods for nonlinear boundary integral equations II: efficient implementation, Journal of Integral Equations and Applications 24 (2012), 545–574.
[53] X. Chen, R. Wang and Y. Xu, Fast Fourier–Galerkin methods for nonlinear boundary integral equations, Journal of Scientific Computing 56 (2013), 494–514.
[54] Z. Chen, S. Cheng, G. Nelakanti and H. Yang, A fast multiscale Galerkin method for the first kind ill-posed integral equations via Tikhonov regularization, International Journal of Computer Mathematics 87 (2010), 565–582.
[55] Z. Chen, S. Cheng and H. Yang, Fast multilevel augmentation methods with compression technique for solving ill-posed integral equations, Journal of Integral Equations and Applications 23 (2011), 39–70.


[56] Z. Chen, S. Ding, Y. Xu and H. Yang, Multiscale collocation methods for ill-posed integral equations via a coupled system, Inverse Problems 28 (2012), 025006.
[57] Z. Chen, S. Ding and H. Yang, Multilevel augmentation algorithms based on fast collocation methods for solving ill-posed integral equations, Computers & Mathematics with Applications 62 (2011), 2071–2082.
[58] Z. Chen, Y. Jiang, L. Song and H. Yang, A parameter choice strategy for a multi-level augmentation method solving ill-posed operator equations, Journal of Integral Equations and Applications 20 (2008), 569–590.
[59] Z. Chen, J. Li and Y. Zhang, A fast multiscale solver for modified Hammerstein equations, Applied Mathematics and Computation 218 (2011), 3057–3067.
[60] Z. Chen, G. Long and G. Nelakanti, The discrete multi-projection method for Fredholm integral equations of the second kind, Journal of Integral Equations and Applications 19 (2007), 143–162.
[61] Z. Chen, G. Long and G. Nelakanti, Richardson extrapolation of iterated discrete projection methods for eigenvalue approximation, Journal of Computational and Applied Mathematics 223 (2009), 48–61.
[62] Z. Chen, G. Long, G. Nelakanti and Y. Zhang, Iterated fast collocation methods for integral equations of the second kind, Journal of Scientific Computing 57 (2013), 502–517.
[63] Z. Chen, Y. Lu, Y. Xu and H. Yang, Multi-parameter Tikhonov regularization for linear ill-posed operator equations, Journal of Computational Mathematics 26 (2008), 37–55.
[64] Z. Chen, C. A. Micchelli and Y. Xu, The Petrov–Galerkin methods for second kind integral equations II: multiwavelet scheme, Advances in Computational Mathematics 7 (1997), 199–233.
[65] Z. Chen, C. A. Micchelli and Y. Xu, A construction of interpolating wavelets on invariant sets, Mathematics of Computation 68 (1999), 1569–1587.
[66] Z. Chen, C. A. Micchelli and Y. Xu, Hermite interpolating wavelets, in L. Li, Z. Chen and Y. Zhang (eds), Lecture Notes in Scientific Computation, International Culture Publishing, Beijing, 2000, pp. 31–39.
[67] Z. Chen, C. A. Micchelli and Y. Xu, A multilevel method for solving operator equations, Journal of Mathematical Analysis and Applications 262 (2001), 688–699.
[68] Z. Chen, C. A. Micchelli and Y. Xu, Discrete wavelet Petrov–Galerkin methods, Advances in Computational Mathematics 16 (2002), 1–28.
[69] Z. Chen, C. A. Micchelli and Y. Xu, Fast collocation methods for second kind integral equations, SIAM Journal on Numerical Analysis 40 (2002), 344–375.
[70] Z. Chen, G. Nelakanti, Y. Xu and Y. Zhang, A fast collocation method for eigen-problems of weakly singular integral operators, Journal of Scientific Computing 41 (2009), 256–272.
[71] Z. Chen, B. Wu and Y. Xu, Multilevel augmentation methods for solving operator equations, Numerical Mathematics: A Journal of Chinese Universities 14 (2005), 31–55.
[72] Z. Chen, B. Wu and Y. Xu, Error control strategies for numerical integrations in fast collocation methods, Northeastern Mathematical Journal 21(2) (2005), 233–252.
[73] Z. Chen, B. Wu and Y. Xu, Multilevel augmentation methods for differential equations, Advances in Computational Mathematics 24 (2006), 213–238.


[74] Z. Chen, B. Wu and Y. Xu, Fast numerical collocation solutions of integral equations, Communications on Pure and Applied Mathematics 6 (2007), 649–666.
[75] Z. Chen, B. Wu and Y. Xu, Fast collocation methods for high-dimensional weakly singular integral equations, Journal of Integral Equations and Applications 20 (2008), 49–92.
[76] Z. Chen, B. Wu and Y. Xu, Fast multilevel augmentation methods for solving Hammerstein equations, SIAM Journal on Numerical Analysis 47 (2009), 2321–2346.
[77] Z. Chen and Y. Xu, The Petrov–Galerkin and iterated Petrov–Galerkin methods for second kind integral equations, SIAM Journal on Numerical Analysis 35 (1998), 406–434.
[78] Z. Chen, Y. Xu and H. Yang, A multilevel augmentation method for solving ill-posed operator equations, Inverse Problems 22 (2006), 155–174.
[79] Z. Chen, Y. Xu and H. Yang, Fast collocation methods for solving ill-posed integral equations of the first kind, Inverse Problems 24 (2008), 065007.
[80] Z. Chen, Y. Xu and J. Zhao, The discrete Petrov–Galerkin method for weakly singular integral equations, Journal of Integral Equations and Applications 11 (1999), 1–35.
[81] D. Chien and K. E. Atkinson, A discrete Galerkin method for hypersingular boundary integral equations, IMA Journal of Numerical Analysis 17 (1997), 463–478.
[82] C. K. Chui and J. Z. Wang, A cardinal spline approach to wavelets, Proceedings of the American Mathematical Society 113 (1991), 785–793.
[83] K. C. Chung and T. H. Yao, On lattices admitting unique Lagrange interpolation, SIAM Journal on Numerical Analysis 14 (1977), 735–743.
[84] A. Cohen, W. Dahmen and R. DeVore, Multiscale decomposition on bounded domains, Transactions of the American Mathematical Society 352 (2000), 3651–3685.
[85] M. Cohen and J. Wallace, Radiosity and Realistic Image Synthesis, Academic Press, New York, 1993.
[86] J. B. Conway, A Course in Functional Analysis, Springer-Verlag, New York, 1990.
[87] F. Cucker and S. Smale, On the mathematical foundations of learning, Bulletin of the American Mathematical Society 39 (2002), 1–49.
[88] W. Dahmen, Wavelet and multiscale methods for operator equations, Acta Numerica 6 (1997), 55–228.
[89] W. Dahmen, H. Harbrecht and R. Schneider, Compression techniques for boundary integral equations – asymptotically optimal complexity estimates, SIAM Journal on Numerical Analysis 43 (2006), 2251–2271.
[90] W. Dahmen, H. Harbrecht and R. Schneider, Adaptive methods for boundary integral equations: complexity and convergence estimates, Mathematics of Computation 76 (2007), 1243–1274.
[91] W. Dahmen, A. Kunoth and R. Schneider, Operator equations, multiscale concepts and complexity, in The Mathematics of Numerical Analysis (Park City, UT, 1995), pp. 225–261 [Lectures in Applied Mathematics No. 32, American Mathematical Society, Providence, RI, 1996].
[92] W. Dahmen and C. A. Micchelli, Using the refinement equation for evaluating integrals of wavelets, SIAM Journal on Numerical Analysis 30 (1993), 507–537.
[93] W. Dahmen and C. A. Micchelli, Biorthogonal wavelet expansions, Constructive Approximation 13 (1997), 293–328.


[94] W. Dahmen, S. Prössdorf and R. Schneider, Wavelet approximation methods for pseudodifferential equations I: stability and convergence, Mathematische Zeitschrift 215 (1994), 583–620.
[95] W. Dahmen, S. Prössdorf and R. Schneider, Wavelet approximation methods for pseudodifferential equations II: matrix compression and fast solution, Advances in Computational Mathematics 1 (1993), 259–335.
[96] W. Dahmen, R. Schneider and Y. Xu, Nonlinear functionals of wavelet expansions – adaptive reconstruction and fast evaluation, Numerische Mathematik 86 (2000), 49–101.
[97] I. Daubechies, Orthonormal bases of compactly supported wavelets, Communications on Pure and Applied Mathematics 41 (1988), 909–996.
[98] I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics No. 61, SIAM, Philadelphia, PA, 1992.
[99] K. Deimling, Nonlinear Functional Analysis, Springer-Verlag, Berlin, 1985.
[100] R. DeVore, B. Jawerth and V. Popov, Compression of wavelet decompositions, American Journal of Mathematics 114 (1992), 737–785.
[101] R. DeVore and B. Lucier, Wavelets, Acta Numerica 1 (1991), 1–56.
[102] J. Dick, P. Kritzer, F. Y. Kuo and I. H. Sloan, Lattice–Nyström method for Fredholm integral equations of the second kind with convolution type kernels, Journal of Complexity 23 (2007), 752–772.
[103] V. Dicken and P. Maass, Wavelet–Galerkin methods for ill-posed problems, Journal of Inverse and Ill-Posed Problems 4 (1996), 203–221.
[104] S. Ding and H. Yang, Multilevel augmentation methods for nonlinear ill-posed problems, International Journal of Computer Mathematics 88 (2011), 3685–3701.
[105] S. Ehrich and A. Rathsfeld, Piecewise linear wavelet collocation, approximation of the boundary manifold and quadrature, Electronic Transactions on Numerical Analysis 12 (2001), 149–192.
[106] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.
[107] W. Fang and M. Lu, A fast collocation method for an inverse boundary value problem, International Journal for Numerical Methods in Engineering 59 (2004), 1563–1585.
[108] W. Fang, F. Ma and Y. Xu, Multilevel iteration methods for solving integral equations of the second kind, Journal of Integral Equations and Applications 14 (2002), 355–376.
[109] W. Fang, Y. Wang and Y. Xu, An implementation of fast wavelet Galerkin methods for integral equations of the second kind, Journal of Scientific Computing 20, 277–302.
[110] I. Fenyő and H. Stolle, Theorie und Praxis der linearen Integralgleichungen, Vols 1–4, Birkhäuser-Verlag, Berlin, 1981–84.
[111] N. J. Ford, M. L. Morgado and M. Rebelo, Nonpolynomial collocation approximation of solutions to fractional differential equations, Fractional Calculus and Applied Analysis 16 (2013), 874–891.
[112] N. J. Ford, M. L. Morgado and M. Rebelo, High order numerical methods for fractional terminal value problems, Computational Methods in Applied Mathematics 14 (2014), 55–70.
[113] W. F. Ford, Y. Xu and Y. Zhao, Derivative correction for quadrature formulas, Advances in Computational Mathematics 6 (1996), 139–157.


[114] L. Greengard, The Rapid Evaluation of Potential Fields in Particle Systems, MIT Press, Cambridge, MA, 1988.

[115] L. Greengard and V. Rokhlin, A fast algorithm for particle simulation, Journal of Computational Physics 73 (1987), 325–348.

[116] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Research Notes in Mathematics No. 105, Pitman, Boston, MA, 1984.

[117] C. W. Groetsch, Uniform convergence of regularization methods for Fredholm equations of the first kind, Journal of the Australian Mathematical Society, Series A 39 (1985), 282–286.

[118] C. W. Groetsch, Convergence analysis of a regularized degenerate kernel method for Fredholm equations of the first kind, Integral Equations and Operator Theory 13 (1990), 67–75.

[119] C. W. Groetsch, Linear inverse problems, in O. Scherzer (ed.), Handbook of Mathematical Methods in Imaging, pp. 3–41, Springer-Verlag, New York, 2011.

[120] W. Hackbusch, Multi-grid Methods and Applications, Springer-Verlag, Berlin, 1985.

[121] W. Hackbusch, Integral Equations: Theory and Numerical Treatment [translated and revised by the author from the 1989 German original], International Series of Numerical Mathematics No. 120, Birkhäuser-Verlag, Basel, 1995.

[122] W. Hackbusch, A sparse matrix arithmetic based on H-matrices. I. Introduction to H-matrices, Computing 62 (1999), 89–108.

[123] W. Hackbusch and B. Khoromskij, A sparse H-matrix arithmetic: general complexity estimates, Numerical Analysis 2000, Vol. VI: Ordinary Differential Equations and Integral Equations, Journal of Computational and Applied Mathematics 125 (2000), 479–501.

[124] W. Hackbusch and Z. Nowak, A multilevel discretization and solution method for potential flow problems in three dimensions, in E. H. Hirschel (ed.), Finite Approximations in Fluid Mechanics, Notes on Numerical Fluid Mechanics No. 14, Vieweg, Braunschweig, 1986.

[125] W. Hackbusch and Z. Nowak, On the fast matrix multiplication in the boundary element method by panel clustering, Numerische Mathematik 54 (1989), 463–491.

[126] T. Ha-Duong, La méthode de Schenck pour la résolution numérique du problème de radiation acoustique, Bull. Dir. Études Recherches, Sér. C: Math. Inf. Service Inf. Math. Appl. 2 (1979), 15–50.

[127] U. Hämarik, On the discretization error in regularized projection methods with parameter choice by discrepancy principle, in A. N. Tikhonov (ed.), Ill-Posed Problems in Natural Sciences, VSP, Utrecht / TVP, Moscow, 1992, pp. 24–29.

[128] U. Hämarik, Quasioptimal error estimate for the regularized Ritz–Galerkin method with the a-posteriori choice of the parameter, Acta et Commentationes Universitatis Tartuensis 937 (1992), 63–76.

[129] U. Hämarik, On the parameter choice in the regularized Ritz–Galerkin method, Eesti Teaduste Akadeemia Toimetised. Füüsika. Matemaatika 42 (1993), 133–143.

[130] A. Hammerstein, Nichtlineare Integralgleichungen nebst Anwendungen, Acta Mathematica 54 (1930), 117–176.

[131] G. Han, Extrapolation of a discrete collocation-type method of Hammerstein equations, Journal of Computational and Applied Mathematics 61 (1995), 73–86.


[132] G. Han and J. Wang, Extrapolation of Nyström solution for two-dimensional nonlinear Fredholm integral equations, Journal of Scientific Computing 14 (1999), 197–209.

[133] M. Hanke and C. R. Vogel, Two-level preconditioners for regularized inverse problems I: Theory, Numerische Mathematik 83 (1999), 385–402.

[134] M. Hanke and C. R. Vogel, Two-level preconditioners for regularized inverse problems II: Implementation and numerical results, preprint.

[135] H. Harbrecht, U. Kähler and R. Schneider, Wavelet matrix compression for boundary integral equations, in Parallel Algorithms and Cluster Computing, pp. 129–149, Lecture Notes in Computational Science and Engineering No. 52, Springer-Verlag, Berlin, 2006.

[136] H. Harbrecht, M. Konik and R. Schneider, Fully discrete wavelet Galerkin schemes, Engineering Analysis with Boundary Elements 27 (2003), 423–437.

[137] H. Harbrecht, S. Pereverzev and R. Schneider, Self-regularization by projection for noisy pseudodifferential equations of negative order, Numerische Mathematik 95 (2003), 123–143.

[138] H. Harbrecht and R. Schneider, Wavelet Galerkin schemes for 2D-BEM, in Operator Theory: Advances and Applications, Vol. 121, Birkhäuser-Verlag, Berlin, 2001.

[139] H. Harbrecht and R. Schneider, Wavelet Galerkin schemes for boundary integral equations – implementation and quadrature, SIAM Journal on Scientific Computing 27 (2006), 1347–1370.

[140] H. Harbrecht and R. Schneider, Rapid solution of boundary integral equations by wavelet Galerkin schemes, in Multiscale, Nonlinear and Adaptive Approximation, pp. 249–294, Springer-Verlag, Berlin, 2009.

[141] E. Hille and J. Tamarkin, On the characteristic values of linear integral equations, Acta Mathematica 57 (1931), 1–76.

[142] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.

[143] G. C. Hsiao and A. Rathsfeld, Wavelet collocation methods for a first kind boundary integral equation in acoustic scattering, Advances in Computational Mathematics 17 (2002), 281–308.

[144] G. C. Hsiao and W. L. Wendland, Boundary Integral Equations, Springer-Verlag, Berlin, 2008.

[145] C. Huang, H. Guo and Z. Zhang, A spectral collocation method for eigenvalue problems of compact integral operators, Journal of Integral Equations and Applications 25 (2013), 79–101.

[146] M. Huang, Wavelet Petrov–Galerkin algorithms for Fredholm integral equations of the second kind, PhD thesis, Academia Sinica (in Chinese), 2003.

[147] M. Huang, A construction of multiscale bases for Petrov–Galerkin methods for integral equations, Advances in Computational Mathematics 25 (2006), 7–22.

[148] J. E. Hutchinson, Fractals and self similarity, Indiana University Mathematics Journal 30 (1981), 713–747.

[149] M. Jaswon and G. Symm, Integral Equation Methods in Potential Theory and Elastostatics, Academic Press, London, 1977.

[150] Y. Jeon, An indirect boundary integral equation method for the biharmonic equation, SIAM Journal on Numerical Analysis 31 (1994), 461–476.

[151] Y. Jeon, New boundary element formulas for the biharmonic equation, Advances in Computational Mathematics 9 (1998), 97–115.


[152] Y. Jeon, New indirect scalar boundary integral equation formulas for the biharmonic equation, Journal of Computational and Applied Mathematics 135 (2001), 313–324.

[153] Y. Jeon and W. McLean, A new boundary element method for the biharmonic equation with Dirichlet boundary conditions, Advances in Computational Mathematics 19 (2003), 339–354.

[154] Y. Jiang, B. Wang and Y. Xu, A fast Fourier–Galerkin method solving a boundary integral equation for the biharmonic equation, SIAM Journal on Numerical Analysis 52 (2014), 2530–2554.

[155] Y. Jiang and Y. Xu, Fast Fourier Galerkin methods for solving singular boundary integral equations: numerical integration and precondition, Journal of Computational and Applied Mathematics 234 (2010), 2792–2807.

[156] Q. Jin and Z. Hou, On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems, Numerische Mathematik 83 (1999), 139–159.

[157] B. Kaltenbacher, On the regularizing properties of a full multigrid method for ill-posed problems, Inverse Problems 17 (2001), 767–788.

[158] H. Kaneko, K. Neamprem and B. Novaprateep, Wavelet collocation method and multilevel augmentation method for Hammerstein equations, SIAM Journal on Scientific Computing 34 (2012), A309–A338.

[159] H. Kaneko, R. Noren and B. Novaprateep, Wavelet applications to the Petrov–Galerkin method for Hammerstein equations, Applied Numerical Mathematics 45 (2003), 255–273.

[160] H. Kaneko, R. D. Noren and P. A. Padilla, Superconvergence of the iterated collocation methods for Hammerstein equations, Journal of Computational and Applied Mathematics 80 (1997), 335–349.

[161] H. Kaneko, R. Noren and Y. Xu, Numerical solutions for weakly singular Hammerstein equations and their superconvergence, Journal of Integral Equations and Applications 4 (1992), 391–407.

[162] H. Kaneko, R. Noren and Y. Xu, Regularity of the solution of Hammerstein equations with weakly singular kernel, Integral Equations and Operator Theory 13 (1990), 660–670.

[163] H. Kaneko and Y. Xu, Degenerate kernel method for Hammerstein equations, Mathematics of Computation 56 (1991), 141–148.

[164] H. Kaneko and Y. Xu, Gauss-type quadratures for weakly singular integrals and their application to Fredholm integral equations of the second kind, Mathematics of Computation 62 (1994), 739–753.

[165] H. Kaneko and Y. Xu, Superconvergence of the iterated Galerkin methods for Hammerstein equations, SIAM Journal on Numerical Analysis 33 (1996), 1048–1064.

[166] T. Kato, Perturbation theory for nullity, deficiency and other quantities of linear operators, Journal d'Analyse Mathématique 6 (1958), 261–322.

[167] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, Berlin, 1976.

[168] C. T. Kelley, A fast two-grid method for matrix H-equations, Transport Theory and Statistical Physics 18 (1989), 185–203.

[169] C. T. Kelley, A fast multilevel algorithm for integral equations, SIAM Journal on Numerical Analysis 32 (1995), 501–513.

[170] C. T. Kelley and E. W. Sachs, Fast algorithms for compact fixed point problems with inexact function evaluations, SIAM Journal on Scientific and Statistical Computing 12 (1991), 725–742.


[171] M. Kelmanson, Solution of nonlinear elliptic equations with boundary singularities by an integral equation method, Journal of Computational Physics 56 (1984), 244–283.

[172] D. Kincaid and W. Cheney, Numerical Analysis: Mathematics of Scientific Computing, 3rd edn, American Mathematical Society, Providence, RI, 2002.

[173] A. Kirsch, An Introduction to the Mathematical Theory of Inverse Problems, 2nd edn, Applied Mathematical Sciences No. 120, Springer-Verlag, New York, 2011.

[174] E. Klann, R. Ramlau and L. Reichel, Wavelet-based multilevel methods for linear ill-posed problems, BIT Numerical Mathematics 51 (2011), 669–694.

[175] M. A. Krasnosel'skii, Topological Methods in the Theory of Nonlinear Integral Equations, Pergamon Press, New York, 1964.

[176] M. A. Krasnosel'skii, G. M. Vainikko, P. P. Zabreiko, Ya. B. Rutitskii and V. Ya. Stetsenko, Approximate Solution of Operator Equations, Wolters-Noordhoff Publishing, Groningen, 1972.

[177] R. Kress, Linear Integral Equations, Springer-Verlag, Berlin, 1989.

[178] R. Kress, Numerical Analysis, Graduate Texts in Mathematics No. 181, Springer-Verlag, New York, 1998.

[179] S. Kumar, Superconvergence of a collocation-type method for Hammerstein equations, IMA Journal of Numerical Analysis 7 (1987), 313–325.

[180] S. Kumar, A discrete collocation-type method for Hammerstein equations, SIAM Journal on Numerical Analysis 25 (1988), 328–341.

[181] S. Kumar and I. H. Sloan, A new collocation-type method for Hammerstein integral equations, Mathematics of Computation 48 (1987), 585–593.

[182] L. J. Lardy, A variation of Nyström's method for Hammerstein equations, Journal of Integral Equations 3 (1981), 43–60.

[183] P. D. Lax, Functional Analysis, Wiley-Interscience, New York, 2002.

[184] F. Li, Y. Li and Z. Li, Existence of solutions to nonlinear Hammerstein integral equations and applications, Journal of Mathematical Analysis and Applications 323 (2006), 209–227.

[185] E. Lin, Multiscale approximation for eigenvalue problems of Fredholm integral equations, Journal of Applied Functional Analysis 2 (2007), 461–469.

[186] G. G. Lorentz, Approximation Theory and Functional Analysis, Academic Press, Boston, MA, 1991.

[187] Y. Lu, L. Shen and Y. Xu, Shadow block iteration for solving linear systems obtained from wavelet transforms, Applied and Computational Harmonic Analysis 19 (2005), 359–385.

[188] Y. Lu, L. Shen and Y. Xu, Multi-parameter regularization methods for high-resolution image reconstruction with displacement errors, IEEE Transactions on Circuits and Systems I 54 (2007), 1788–1799.

[189] Y. Lu, L. Shen and Y. Xu, Integral equation models for image restoration: high accuracy methods and fast algorithms, Inverse Problems 26 (2010), 045006.

[190] M. A. Lukas, Comparisons of parameter choice methods for regularization with discrete noisy data, Inverse Problems 14 (1998), 161–184.

[191] X. Luo, L. Fan, Y. Wu and F. Li, Fast multilevel iteration methods with compression technique for solving ill-posed integral equations, Journal of Computational and Applied Mathematics 256 (2014), 131–151.

[192] X. Luo, F. Li and S. Yang, A posteriori parameter choice strategy for fast multiscale methods solving ill-posed integral equations, Advances in Computational Mathematics 36 (2012), 299–314.


[193] P. Maass, S. V. Pereverzev, R. Ramlau and S. G. Solodky, An adaptive discretization for Tikhonov–Phillips regularization with a posteriori parameter selection, Numerische Mathematik 87 (2001), 485–502.

[194] P. Mathé, Saturation of regularization methods for linear ill-posed problems in Hilbert spaces, SIAM Journal on Numerical Analysis 42 (2004), 968–973.

[195] P. Mathé and S. V. Pereverzev, Discretization strategy for linear ill-posed problems in variable Hilbert scales, Inverse Problems 19 (2003), 1263–1277.

[196] C. A. Micchelli, Using the refinement equation for the construction of pre-wavelets, Numerical Algorithms 1 (1991), 75–116.

[197] C. A. Micchelli and M. Pontil, Learning the kernel function via regularization, Journal of Machine Learning Research 6 (2005), 1099–1125.

[198] C. A. Micchelli, T. Sauer and Y. Xu, A construction of refinable sets for interpolating wavelets, Results in Mathematics 34 (1998), 359–372.

[199] C. A. Micchelli, T. Sauer and Y. Xu, Subdivision schemes for iterated function systems, Proceedings of the American Mathematical Society 129 (2001), 1861–1872.

[200] C. A. Micchelli and Y. Xu, Using the matrix refinement equation for the construction of wavelets on invariant sets, Applied and Computational Harmonic Analysis 1 (1994), 391–401.

[201] C. A. Micchelli and Y. Xu, Reconstruction and decomposition algorithms for biorthogonal multiwavelets, Multidimensional Systems and Signal Processing 8 (1997), 31–69.

[202] C. A. Micchelli, Y. Xu and Y. Zhao, Wavelet Galerkin methods for second-kind integral equations, Journal of Computational and Applied Mathematics 86 (1997), 251–270.

[203] S. G. Mikhlin, Mathematical Physics: An Advanced Course, North-Holland, Amsterdam, 1970.

[204] G. Monegato and I. H. Sloan, Numerical solution of the generalized airfoil equation for an airfoil with a flap, SIAM Journal on Numerical Analysis 34 (1997), 2288–2305.

[205] M. T. Nair, On strongly stable approximations, Journal of the Australian Mathematical Society, Series A 52 (1992), 251–260.

[206] M. T. Nair, A unified approach for regularized approximation methods for Fredholm integral equations of the first kind, Numerical Functional Analysis and Optimization 15 (1994), 381–389.

[207] M. T. Nair and S. V. Pereverzev, Regularized collocation method for Fredholm integral equations of the first kind, Journal of Complexity 23 (2007), 454–467.

[208] J. Nédélec, Approximation des Équations Intégrales en Mécanique et en Physique, Lecture Notes, Centre Math. Appl., École Polytechnique, Palaiseau, France, 1977.

[209] G. Nelakanti, A degenerate kernel method for eigenvalue problems of compact integral operators, Advances in Computational Mathematics 27 (2007), 339–354.

[210] G. Nelakanti, Spectral Approximation for Integral Operators, PhD thesis, Indian Institute of Technology Bombay, 2003.

[211] D. W. Nychka and D. D. Cox, Convergence rates for regularized solutions of integral equations from discrete noisy data, Annals of Statistics 17 (1989), 556–572.

[212] J. E. Osborn, Spectral approximation for compact operators, Mathematics of Computation 29 (1975), 712–725.


[213] R. Pallav and A. Pedas, Quadratic spline collocation method for weakly singular integral equations and corresponding eigenvalue problem, Mathematical Modelling and Analysis 7 (2002), 285–296.

[214] B. L. Panigrahi and G. Nelakanti, Legendre Galerkin method for weakly singular Fredholm integral equations and the corresponding eigenvalue problem, Journal of Applied Mathematics and Computing 43 (2013), 175–197.

[215] B. L. Panigrahi and G. Nelakanti, Richardson extrapolation of iterated discrete Galerkin method for eigenvalue problem of a two dimensional compact integral operator, Journal of Scientific Computing 51 (2012), 421–448.

[216] S. V. Pereverzev and E. Schock, On the adaptive selection of the parameter in regularization of ill-posed problems, SIAM Journal on Numerical Analysis 43 (2005), 2060–2076.

[217] D. L. Phillips, A technique for the numerical solution of certain integral equations of the first kind, Journal of the Association for Computing Machinery 9 (1962), 84–97.

[218] M. Pincus, Gaussian processes and Hammerstein integral equations, Transactions of the American Mathematical Society 134 (1968), 193–214.

[219] R. Plato, On the discrepancy principle for iterative and parametric methods to solve linear ill-posed equations, Numerische Mathematik 75 (1996), 99–120.

[220] R. Plato, The Galerkin scheme for Lavrentiev's m-times iterated method to solve linear accretive Volterra integral equations of the first kind, BIT Numerical Mathematics 37 (1997), 404–423.

[221] R. Plato and U. Hämarik, On pseudo-optimal parameter choices and stopping rules for regularization methods in Banach spaces, Numerical Functional Analysis and Optimization 17 (1996), 181–195.

[222] R. Plato and G. Vainikko, On the regularization of projection methods for solving ill-posed problems, Numerische Mathematik 57 (1990), 63–79.

[223] R. Prazenica, R. Lind and A. Kurdila, Uncertainty estimation from Volterra kernels for robust flutter analysis, Journal of Guidance, Control, and Dynamics 26 (2003), 331–339.

[224] M. P. Rajan, Convergence analysis of a regularized approximation for solving Fredholm integral equations of the first kind, Journal of Mathematical Analysis and Applications 279 (2003), 522–530.

[225] A. Rathsfeld, A wavelet algorithm for the solution of the double layer potential equation over polygonal boundaries, Journal of Integral Equations and Applications 7 (1995), 47–98.

[226] A. Rathsfeld, A wavelet algorithm for the solution of a singular integral equation over a smooth two-dimensional manifold, Journal of Integral Equations and Applications 10 (1998), 445–501.

[227] A. Rathsfeld and R. Schneider, On a quadrature algorithm for the piecewise linear wavelet collocation applied to boundary integral equations, Mathematical Methods in the Applied Sciences 26 (2003), 937–979.

[228] T. Raus, About regularization parameter choice in case of approximately given errors of data, Acta et Commentationes Universitatis Tartuensis 937 (1992), 77–89.

[229] M. Rebelo and T. Diogo, A hybrid collocation method for a nonlinear Volterra integral equation with weakly singular kernel, Journal of Computational and Applied Mathematics 234 (2010), 2859–2869.

[230] L. Reichel and A. Shyshkov, Cascadic multilevel methods for ill-posed problems, Journal of Computational and Applied Mathematics 233 (2010), 1314–1325.


[231] A. Rieder, A wavelet multilevel method for ill-posed problems stabilized by Tikhonov regularization, Numerische Mathematik 75 (1997), 501–522.

[232] S. D. Riemenschneider and Z. Shen, Wavelets and pre-wavelets in low dimensions, Journal of Approximation Theory 71 (1992), 18–38.

[233] K. Riley, Two-level preconditioners for regularized ill-posed problems, PhD thesis, Montana State University, 1999.

[234] F. Rizzo, An integral equation approach to boundary value problems of classical elastostatics, Quarterly of Applied Mathematics 25 (1967), 83–95.

[235] V. Rokhlin, Rapid solution of integral equations of classical potential theory, Journal of Computational Physics 60 (1985), 187–207.

[236] H. L. Royden, Real Analysis, Macmillan, New York, 1963.

[237] L. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60 (1992), 259–268.

[238] T. Runst and W. Sickel, Sobolev Spaces of Fractional Order, Nemytskij Operators, and Nonlinear Partial Differential Equations, de Gruyter, Berlin, 1996.

[239] K. Ruotsalainen and W. Wendland, On the boundary element method for some nonlinear boundary value problems, Numerische Mathematik 53 (1988), 299–314.

[240] J. Saranen, Projection methods for a class of Hammerstein equations, SIAM Journal on Numerical Analysis 27 (1990), 1445–1449.

[241] R. Schneider, Multiskalen- und Wavelet-Matrixkompression: analysisbasierte Methoden zur effizienten Lösung großer vollbesetzter Gleichungssysteme, Habilitationsschrift, Technische Hochschule Darmstadt, 1995.

[242] C. Schwab, Variable order composite quadrature of singular and nearly singular integrals, Computing 53 (1994), 173–194.

[243] Y. Shen and W. Lin, Collocation method for the natural boundary integral equation, Applied Mathematics Letters 19 (2006), 1278–1285.

[244] F. Sillion and C. Puech, Radiosity and Global Illumination, Morgan Kaufmann, San Francisco, CA, 1994.

[245] I. H. Sloan, Iterated Galerkin method for eigenvalue problems, SIAM Journal on Numerical Analysis 13 (1976), 753–760.

[246] I. H. Sloan, Superconvergence, in M. Golberg (ed.), Numerical Solution of Integral Equations, Plenum, New York, 1990, pp. 35–70.

[247] I. H. Sloan and V. Thomée, Superconvergence of the Galerkin iterates for integral equations of the second kind, Journal of Integral Equations 9 (1985), 1–23.

[248] S. G. Solodky, On a quasi-optimal regularized projection method for solving operator equations of the first kind, Inverse Problems 21 (2005), 1473–1485.

[249] G. W. Stewart, Fredholm, Hilbert, Schmidt: Three Fundamental Papers on Integral Equations, translated with commentary by G. W. Stewart, 2011. Available at www.cs.umd.edu/~stewart/FHS.pdf.

[250] J. Tausch, The variable order fast multipole method for boundary integral equations of the second kind, Computing 72 (2004), 267–291.

[251] J. Tausch and J. White, Multiscale bases for the sparse representation of boundary integral operators on complex geometry, SIAM Journal on Scientific Computing 24 (2003), 1610–1629.

[252] A. N. Tikhonov, Solution of incorrectly formulated problems and the regularization method, Doklady Akademii Nauk SSSR 151 (1963), 501–504 [translated in Soviet Mathematics 4, 1035–1038].

[253] F. G. Tricomi, Integral Equations, Dover Publications, New York, 1985.

[254] A. E. Taylor and D. C. Lay, Introduction to Functional Analysis, 2nd edn, John Wiley & Sons, New York, 1980.


[255] G. Vainikko, A perturbed Galerkin method and the general theory of approximate methods for nonlinear equations, Ž. Vyčisl. Mat. i Mat. Fiz. 7 (1967), 723–751 [English translation: USSR Computational Mathematics and Mathematical Physics 7 (1967), 723–751].

[256] G. Vainikko, Multidimensional Weakly Singular Integral Equations, Springer-Verlag, Berlin, 1993.

[257] G. Vainikko, A. Pedas and P. Uba, Methods of Solving Weakly Singular Integral Equations (in Russian), Tartu University, 1984.

[258] G. Vainikko and P. Uba, A piecewise polynomial approximation to the solution of an integral equation with weakly singular kernel, Journal of the Australian Mathematical Society, Series B 22 (1981), 431–438.

[259] C. R. Vogel and M. E. Oman, Fast, robust total variation-based reconstruction of noisy, blurred images, IEEE Transactions on Image Processing 7 (1998), 813–824.

[260] T. von Petersdorff and C. Schwab, Wavelet approximation of first kind integral equations in a polygon, Numerische Mathematik 74 (1996), 479–516.

[261] T. von Petersdorff, R. Schneider and C. Schwab, Multiwavelets for second kind integral equations, SIAM Journal on Numerical Analysis 34 (1997), 2212–2227.

[262] G. Wahba, Spline Models for Observational Data, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1990.

[263] B. Wang, R. Wang and Y. Xu, Fast Fourier–Galerkin methods for first-kind logarithmic-kernel integral equations on open arcs, Science China Mathematics 53 (2010), 1–22.

[264] Y. Wang and Y. Xu, A fast wavelet collocation method for integral equations on polygons, Journal of Integral Equations and Applications 17 (2005), 277–330.

[265] W. Wendland, On some mathematical aspects of boundary element methods for elliptic problems, in J. Whiteman (ed.), The Mathematics of Finite Elements and Applications, Academic Press, London, 1985, pp. 230–257.

[266] W.-J. Xie and F.-R. Lin, A fast numerical solution method for two dimensional Fredholm integral equations of the second kind, Applied Numerical Mathematics 59 (2009), 1709–1719.

[267] Y. Xu, H. L. Chen and Q. Zou, Limit values of derivatives of the Cauchy integrals and computation of the logarithmic potentials, Computing 73 (2004), 295–327.

[268] Y. Xu and H. Zhang, Refinable kernels, Journal of Machine Learning Research 8 (2007), 2083–2120.

[269] Y. Xu and Y. Zhao, Quadratures for improper integrals and their applications in integral equations, Proceedings of Symposia in Applied Mathematics 48 (1994), 409–413.

[270] Y. Xu and Y. Zhao, Quadratures for boundary integral equations of the first kind with logarithmic kernels, Journal of Integral Equations and Applications 8 (1996), 239–268.

[271] Y. Xu and Y. Zhao, An extrapolation method for a class of boundary integral equations, Mathematics of Computation 65 (1996), 587–610.

[272] Y. Xu and A. Zhou, Fast Boolean approximation methods for solving integral equations in high dimensions, Journal of Integral Equations and Applications 16 (2004), 83–110.

[273] Y. Xu and Q. Zou, Adaptive wavelet methods for elliptic operator equations with nonlinear terms, Advances in Computational Mathematics 19 (2003), 99–146.

[274] H. Yang and Z. Hou, Convergence rates of regularized solutions and parameter choice strategy for positive semi-definite operator equation, Numerical Mathematics: A Journal of Chinese Universities (Chinese Edition) 20 (1998), 245–251.

[275] H. Yang and Z. Hou, A posteriori parameter choice strategy for nonlinear monotone operator equations, Acta Mathematicae Applicatae Sinica 18 (2002), 289–294.

[276] K. Yosida, Functional Analysis, Springer-Verlag, Berlin, 1965.

[277] E. Zeidler, Nonlinear Functional Analysis and its Applications I, II/B, Springer-Verlag, New York, 1990.

[278] M. Zhong, S. Lu and J. Cheng, Multiscale analysis for ill-posed problems with semi-discrete Tikhonov regularization, Inverse Problems 28 (2012), 065019.

[279] A. Zygmund, Trigonometric Series, Cambridge University Press, New York, 1959.


Index

(k kprime) element 224α-property 241ν-convergence 468

adjoint identity 43adjoint operator 34ArzelandashAscoli theorem 499ascent 470

Banach space 490BanachndashSteinhaus theorem 496boundary integral equation 42 356

Cauchy sequence 488closed graph theorem 496collectively compact 74 240collocation matrix 106collocation method 63 105compact operator 34condition number 71contraction mapping 503converges pointwise 60 74converges uniformly 74correlation matrix 120cyclic μ-adic expansions 171

degenerate kernel 36 81degenerate kernel method 80derived set 507discrete orthogonal projection 103distance function 488dual space 495 498

eigenfunction 501eigenvalue 501

equationboundary integral 42Hammerstein 356ill-posed 416integral 5nonlinear boundary integral 356nonlinear integral 356

equation integral 32

Fredholm determinant 20Fredholm function 11 12Fredholm integral equation 32Fredholm minor 11Fredholm operator

continuous kernel 35Schmidt kernel 35

Fredholm theorem 499Fubini theorem 492fundamental solution 43

of the Laplace operator 44

Galerkin matrix 95Galerkin method 62 94gap between subspaces 471generalized best approximation 56

Holder continuous 24Hadamard inequality 12HahnndashBanach extension theorem

497Hammerstein equation 356 359harmonic 44Hausdorff metric 504Hermite admissible 192Hermite interpolation 56

534

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 163339 subject to the Cambridge Core terms of

Index

Hilbert space 491
Hilbert–Schmidt theorem 502

ill-posed 417
ill-posed integral equation 416
inequality
  Hadamard 12
inner product 490
inner product space 490
integral equation
  of the first kind 416
  of the second kind 5, 32
integral operator 5
  compact 34
  Fredholm 32, 35
  weakly singular 37
interpolating wavelet spaces 187
interpolation projection 55
invariant set 155
inverse operator theorem 496

Jensen's formula 27

kernel 5, 32
  continuous 35
  degenerate 36
  quasi-weakly singular 232
  Schmidt 35
  weak singularity 37
Kolmogorov theorem 499
Kronecker symbol 55

Lagrange admissible 185
Lagrange interpolation 56
Laplace equation 42
Laplace expansion 9
lattice vector 5
Lavrentiev regularization 416, 421
least-squares method 62
linear functional 495
linear operator 494
linear space 489
Lipschitz constant 504

majorization sequence 326
MAM 322, 325, 331, 359, 377, 420
matrix norm 99
matrix representation 207
metric function 488
metric space 488
MIM 322, 347
minimum norm solution 421
minor 7
modulus of continuity 14
multilevel augmentation method 322, 325, 331, 359, 377, 420
multilevel iteration method 322, 347
multiscale basis function 144
multiscale collocation method 265
multiscale Galerkin method 199
multiscale Hermite interpolation 191
multiscale interpolating bases 184
multiscale Lagrange interpolation 184
multiscale orthogonal bases 166
multiscale partitions 153
multiscale Petrov–Galerkin method 223

nested 66
nonlinear boundary value problem 377
nonlinear integral equation 356
  boundary integral equation 356
  Hammerstein 356
norm 490
normed linear space 490
numerically sparse 203
Nyström method 86

open mapping theorem 495
operator
  adjoint 34, 43
  bounded 495
  closed 495
  compact 34, 499
  completely continuous 499
  elliptic partial differential 43
  integral 5
  interpolation projection 55
  Laplace 42
  orthogonal projection 54
  projection 54
  relatively compact 499
  spectral projection 469
operator equation 53
orthogonal projection 54
orthogonal projection theorem 497
orthogonal wavelets 169

parallelepiped 180
parallelogram identity 491
Petrov–Galerkin matrix 113
Petrov–Galerkin method 61, 112
Poincaré inequality 494
pointwise convergence 34


Poisson–Jensen formula 28
power iteration algorithm 483
principal minor 7
projection 54
  generalized best approximation 56
  interpolation 55
  orthogonal 54
  spectral 469
projection method 53, 94

quadrature method 86
quadrature rule 87, 302, 314
  convergent 88
  Gaussian 87
  Simpson 87
  trapezoidal 87
quasi-weakly singular 232

refinable set 169, 170
regular pair 60, 121
regular partition 96
regular value 501
resolvent kernel 17
resolvent set 466
Riesz representation theorem 497

Schmidt kernel 35
Schur lemma 254
set
  center 200
  compact 489
  derived 507
  invariant 155
  refinable 169
  relatively compact 489
  separable 489
  star-shaped 200
  weak-relatively compact 499
set wavelet 169, 175
set-valued mapping 510
Sloan iterate 73
Sobolev spaces 493
space
  compact 489
  complete 489
  dual 495, 498
  metric 488
  reflexive 498
  separable 489
spectral projection 469
spectrum 467
stability 70
stable 70
star-shaped set 200
superconvergence 77
support 491

Tikhonov regularization 416
totally bounded 489
trace formula 24
truncation matrix 207, 219
truncation parameter 207
truncation strategy 205, 219

uniform boundedness theorem 496
uniform convergence 34
unisolvent 55
upper Hausdorff hemi-metric 505

vanishing moments 149

wavelet bases 149
wavelets
  set 175
  interpolating 190
  orthogonal 169
weak Cauchy sequence 498
weakly convergent 498
weakly singular 37
well-posed 416


CAMBRIDGE MONOGRAPHS ON APPLIED AND COMPUTATIONAL MATHEMATICS

The Cambridge Monographs on Applied and Computational Mathematics series reflects the crucial role of mathematical and computational techniques in contemporary science. The series publishes expositions on all aspects of applicable and numerical mathematics, with an emphasis on new developments in this fast-moving area of research.

State-of-the-art methods and algorithms as well as modern mathematical descriptions of physical and mechanical ideas are presented in a manner suited to graduate research students and professionals alike. Sound pedagogical presentation is a prerequisite. It is intended that books in the series will serve to inform a new generation of researchers.

A complete list of books in the series can be found at www.cambridge.org/mathematics. Recent titles include the following:

14 Simulating Hamiltonian dynamics, Benedict Leimkuhler & Sebastian Reich
15 Collocation methods for Volterra integral and related functional differential equations, Hermann Brunner
16 Topology for computing, Afra J. Zomorodian
17 Scattered data approximation, Holger Wendland
18 Modern computer arithmetic, Richard Brent & Paul Zimmermann
19 Matrix preconditioning techniques and applications, Ke Chen
20 Greedy approximation, Vladimir Temlyakov
21 Spectral methods for time-dependent problems, Jan Hesthaven, Sigal Gottlieb & David Gottlieb
22 The mathematical foundations of mixing, Rob Sturman, Julio M. Ottino & Stephen Wiggins
23 Curve and surface reconstruction, Tamal K. Dey
24 Learning theory, Felipe Cucker & Ding Xuan Zhou
25 Algebraic geometry and statistical learning theory, Sumio Watanabe
26 A practical guide to the invariant calculus, Elizabeth Louise Mansfield
27 Difference equations by differential equation methods, Peter E. Hydon
28 Multiscale methods for Fredholm integral equations, Zhongying Chen, Charles A. Micchelli & Yuesheng Xu
29 Partial differential equation methods for image inpainting, Carola-Bibiane Schönlieb


Multiscale Methods for Fredholm Integral Equations

ZHONGYING CHEN
Sun Yat-Sen University, Guangzhou, China

CHARLES A. MICCHELLI
State University of New York, Albany

YUESHENG XU
Sun Yat-Sen University, Guangzhou, China


University Printing House, Cambridge CB2 8BS, United Kingdom

Cambridge University Press is part of the University of Cambridge.

It furthers the University's mission by disseminating knowledge in the pursuit of education, learning and research at the highest international levels of excellence.

www.cambridge.org
Information on this title: www.cambridge.org/9781107103474

© Zhongying Chen, Charles A. Micchelli and Yuesheng Xu 2015

This publication is in copyright. Subject to statutory exception and to the provisions of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press.

First published 2015

A catalogue record for this publication is available from the British Library.

Library of Congress Cataloguing in Publication data
Chen, Zhongying, 1946–
Multiscale methods for Fredholm integral equations / Zhongying Chen, Sun Yat-Sen University, Guangzhou, China; Charles A. Micchelli, State University of New York, Albany; Yuesheng Xu, Sun Yat-Sen University, Guangzhou, China.
pages cm – (The Cambridge monographs on applied and computational mathematics series)
Includes bibliographical references and index.
ISBN 978-1-107-10347-4 (Hardback)
1. Fredholm equations. 2. Integral equations. I. Micchelli, Charles A. II. Xu, Yuesheng. III. Title.
QA431.C4634 2015
515′.45–dc23    2014050239

ISBN 978-1-107-10347-4 Hardback

Cambridge University Press has no responsibility for the persistence or accuracy of URLs for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate.


Contents

Preface page ix
List of symbols xi

Introduction 1

1 A review of the Fredholm approach 5
1.1 Introduction 5
1.2 Second-kind matrix Fredholm equations 7
1.3 Fredholm functions 11
1.4 Resolvent kernels 17
1.5 Fredholm determinants 20
1.6 Eigenvalue estimates and a trace formula 24
1.7 Bibliographical remarks 31

2 Fredholm equations and projection theory 32
2.1 Fredholm integral equations 32
2.2 General theory of projection methods 53
2.3 Bibliographical remarks 78

3 Conventional numerical methods 80
3.1 Degenerate kernel methods 80
3.2 Quadrature methods 86
3.3 Galerkin methods 94
3.4 Collocation methods 105
3.5 Petrov–Galerkin methods 112
3.6 Bibliographical remarks 142

4 Multiscale basis functions 144
4.1 Multiscale functions on the unit interval 145
4.2 Multiscale partitions 153
4.3 Multiscale orthogonal bases 166
4.4 Refinable sets and set wavelets 169
4.5 Multiscale interpolating bases 184
4.6 Bibliographical remarks 197

5 Multiscale Galerkin methods 199
5.1 The multiscale Galerkin method 200
5.2 The fast multiscale Galerkin method 205
5.3 Theoretical analysis 209
5.4 Bibliographical remarks 221

6 Multiscale Petrov–Galerkin methods 223
6.1 Fast multiscale Petrov–Galerkin methods 223
6.2 Discrete multiscale Petrov–Galerkin methods 231
6.3 Bibliographical remarks 263

7 Multiscale collocation methods 265
7.1 Multiscale basis functions and collocation functionals 266
7.2 Multiscale collocation methods 281
7.3 Analysis of the truncation scheme 288
7.4 Bibliographical remarks 298

8 Numerical integrations and error control 300
8.1 Discrete systems of the multiscale collocation method 300
8.2 Quadrature rules with polynomial order of accuracy 302
8.3 Quadrature rules with exponential order of accuracy 314
8.4 Numerical experiments 318
8.5 Bibliographical remarks 321

9 Fast solvers for discrete systems 322
9.1 Multilevel augmentation methods 322
9.2 Multilevel iteration methods 347
9.3 Bibliographical remarks 354

10 Multiscale methods for nonlinear integral equations 356
10.1 Critical issues in solving nonlinear equations 356
10.2 Multiscale methods for the Hammerstein equation 359
10.3 Multiscale methods for nonlinear boundary integral equations 377
10.4 Numerical experiments 402
10.5 Bibliographical remarks 413

11 Multiscale methods for ill-posed integral equations 416
11.1 Numerical solutions of regularization problems 416
11.2 Multiscale Galerkin methods via the Lavrentiev regularization 420
11.3 Multiscale collocation methods via the Tikhonov regularization 438
11.4 Numerical experiments 456
11.5 Bibliographical remarks 463

12 Eigen-problems of weakly singular integral operators 465
12.1 Introduction 465
12.2 An abstract framework 466
12.3 A multiscale collocation method 474
12.4 Analysis of the fast algorithm 478
12.5 A power iteration algorithm 483
12.6 A numerical example 484
12.7 Bibliographical remarks 487

Appendix Basic results from functional analysis 488
A.1 Metric spaces 488
A.2 Linear operator theory 494
A.3 Invariant sets 502

References 519
Index 534


Preface

Fredholm equations arise in many areas of science and engineering. Consequently, they occupy a central topic in applied mathematics. Traditional numerical methods developed during the period prior to the mid-1980s include mainly quadrature, collocation and Galerkin methods. Unfortunately, all of these approaches suffer from the fact that the resulting discretization matrices are dense; that is, they have a large number of nonzero entries. This bottleneck leads to significant computational costs for the solution of the corresponding integral equations.

The recent appearance of wavelets as a new computational tool in applied mathematics has given a new direction to the area of the numerical solution of Fredholm integral equations. Shortly after their introduction, it was discovered that using a wavelet basis for a singular integral equation led to a numerically sparse matrix discretization. This observation, combined with a truncation strategy, then led to a fast numerical solution of this class of integral equations.

Approximately 20 years ago, the authors of this book began a systematic study of the construction of wavelet bases suitable for solving Fredholm integral equations and explored their usefulness for developing fast multiscale Galerkin, Petrov–Galerkin and collocation methods. The purpose of this book is to provide a self-contained account of these ideas, as well as some traditional material on Fredholm equations, to make this book accessible to as large an audience as possible.

The goal of this book is twofold. It can be used as a reference text for practitioners who need to solve integral equations numerically and wish to use the new techniques presented here. At the same time, portions of this book can be used as a modern text treating the subject of the numerical solution of integral equations, suitable for upper-level undergraduate students as well as graduate students. Specifically, the first five chapters of this book are designed for a one-semester course which provides students with a solid background in integral equations and fast multiscale methods for their numerical solutions.

An early version of this book was used in a summer school on applied mathematics sponsored by the Ministry of Education of the People's Republic of China. Subsequently, the authors used revised versions of this book for courses on integral equations at our respective institutions. These teaching experiences led us to make many changes in presentation resulting from our interactions with our many students.

We are indebted to our many colleagues who gave freely of their time and advice concerning the material in this book, and whose expertise on the subject of the numerical solution of Fredholm equations collectively far exceeds ours. We mention here that a preliminary version of the book was provided to Kendall Atkinson, Uday Banerjee, Hermann Brunner, Yanzhao Cao, Wolfgang Dahmen, Leslie Greengard, Weimin Han, George Hsiao, Hideaki Kaneko, Rainer Kress, Wayne Lawton, Qun Lin, Paul Martin, Richard Noren, Sergei Pereverzyev, Reinhold Schneider, Johannes Tausch, Ezio Venturino and Aihui Zhou. We are grateful to them all for their constructive comments, which improved our presentation.

Our special thanks go to Kendall Atkinson for his encouragement and support in writing this book. We would also like to thank our colleagues at Sun Yat-Sen University, including Bin Wu, Sirui Cheng and Xianglin Chen, as well as the graduate student Jieyang Chen, for their assistance in preparing this book.

Finally, we are deeply indebted to our families for their understanding, patience and continued support throughout our efforts to complete this project.


Symbols

a.e. almost everywhere §1.1
A* adjoint operator of A §2.1.1
A[i, j] minor of matrix A with lattice vectors i and j §1.2
B(X, Y) normed linear space of all bounded linear operators from X into Y §2.1.1
C set of complex numbers §1.1
C(D) linear space of all real-valued continuous functions on D §2.1
C^m(D) linear space of all real-valued m-times continuously differentiable functions on D §2.1
C^∞(D) linear space of all real-valued infinitely differentiable functions on D §2.1
C_0(D) subspace of C(D) consisting of functions with support contained inside D §A.1
C_0^∞(D) subspace of C^∞(D) consisting of functions with support contained inside D and bounded §A.1
c_σ(D) positive constant defined in §2.1.2
card T cardinality of T §2.2.2
cond(A) condition number of A §2.2.3
D(λ) complex-valued function at λ defined by (1.18)
det(A) determinant of matrix A §1.2
diag(·) diagonal matrix §1.2
diam(S) diameter of set S §1.1
H^m(D) Sobolev space §A.1
H_0^m(D) Sobolev space §A.1
L^p(D) linear space of all real-valued pth-power integrable functions (1 ≤ p < ∞) §2.1


L^∞(D) linear space of all real-valued essentially bounded measurable functions §2.1
m(D) positive constant defined in §2.1.2
N set of positive integers {1, 2, 3, ...} §1.1
N_0 set of integers {0, 1, 2, ...} §1.1
N_n set of positive integers {1, 2, ..., n} for n ∈ N §1.1
P_A characteristic polynomial of matrix A §1.2
R set of real numbers §1.1
R^d d-dimensional Euclidean space §1.1
Re f real part of f §1.6
r_q(A) minor equation of A, (1.4)
R_λ resolvent kernel §1.4
rank A rank of matrix A §3.3.5
s(n) dimension of space X_n §3.3
span S span of set S §3.3.1
U index set {(i, j) : i ∈ N_0, j ∈ Z_{w(i)}} §4.1
U_n index set {(i, j) : i ∈ Z_{n+1}, j ∈ Z_{w(i)}} §4.5.1
vol(S) volume of set S §1.1
W^{m,p}(D) Sobolev space §A.1
W_0^{m,p}(D) Sobolev space §A.1
w(n) dimension of space W_n §4.1
Z set of integers {0, ±1, ±2, ...} §1.1
Z_n set of integers {0, 1, 2, ..., n − 1} for n ∈ N §1.1

Γ(·) gamma function §2.1.2
∇ gradient operator §2.1.3
Δ Laplace operator §2.1.3
ρ(T) resolvent set of operator T §11.2
σ(T) spectrum of operator T §11.2
ω_{d−1} surface area of the unit sphere in R^d §2.1.3
ω(K, h) modulus of continuity of K §1.3
! factorial, for example in (1.4)
∪⊥ union of orthogonal sets §4.1
⊕ direct sum of spaces §4.1
⊗ tensor product (direct product) §1.3
◦ functional composition §3.3.1
|α| sum of components of lattice vector α §2.1
|s − t| Euclidean distance between s and t §2.1.2


‖A‖ norm of operator A §2.1.1
‖·‖_{m,∞} norm of C^m(D) §2.1
‖·‖_p norm of L^p(D) (1 ≤ p ≤ ∞) §2.1
(·, ·) inner product §2.1
⟨·, ·⟩ value of a linear functional at a function §2.1.1
∼ same order §5.1.1
−s→ pointwise convergence §2.1.1
−u→ uniform convergence §2.1.1


Introduction

The equations we consider in this book are primarily Fredholm integral equations of the second kind on bounded domains in the Euclidean space. These equations are used as mathematical models for a multitude of physical problems and cover many important applications, such as radiosity equations for realistic image synthesis [18, 85, 244] and especially boundary integral equations [12, 177, 203], which themselves occur as reformulations of other problems, typically originating as partial differential equations. In practice, Fredholm integral equations are solved numerically using piecewise polynomial collocation or Galerkin methods, and when the order of the coefficient matrix (which is typically full) is large, the computational cost of generating the matrix as well as solving the corresponding linear system is large. Therefore, to enhance the range of applicability of the Fredholm equation methodology, it is critical to provide alternate algorithms which are fast, efficient and accurate. This book is concerned with this challenge: designing fast multiscale methods for the numerical solution of Fredholm integral equations.

The development and use of multiscale methods for solving integral equations is a subject of recent intense study. The history of fast multiscale solutions of integral equations began with the introduction of multiscale Galerkin (Petrov–Galerkin) methods for solving integral equations, as presented in [28, 64, 68, 88, 94, 95, 202, 260, 261] and the references cited therein. Most noteworthy is the discovery in [28] that the representation of a singular integral operator by compactly supported orthonormal wavelets produces numerically sparse matrices. In other words, most of their entries are so small in absolute value that, to some degree of precision, they can be neglected without affecting the overall accuracy of the approximation. Later, the papers [94, 95] studied Petrov–Galerkin methods using periodic multiscale bases constructed from refinement equations for periodic elliptic pseudodifferential equations, and in this restricted environment stability, convergence and matrix compression were investigated. For a first-kind boundary integral equation, a truncation strategy for the Galerkin method using spline-based multiscale basis functions of low degree was proposed in [260]. Also, in [261], for elliptic pseudodifferential equations of order zero on a three-dimensional manifold, a Galerkin method using discontinuous piecewise linear multiscale basis functions on triangles was studied.

In another direction, a general construction of multidimensional discontinuous orthogonal and bi-orthogonal wavelets on invariant sets was presented in [200, 201]. Invariant sets include, among others, the important cases of simplices and cubes and, in the two-dimensional case, L-shaped domains. A similar recursive structure was explored in [65] for multiscale function representation and approximation constructed by interpolation on invariant sets. In this regard, an essential advantage of this approach is the existence of efficient schemes for generating recursively multilevel partitions of invariant sets and their associated multiscale functions. All of these methods even extend to domains which are a finite union of invariant sets, thereby significantly expanding the range of their applicability. Therefore, the constructions given in [65, 200, 201] led to a wide variety of multiscale basis functions which, on the one hand, have a desirable simple recursive structure and, on the other hand, can be used in the diverse areas in which the Fredholm methodology is applied.

Subsequently, the papers [64, 68, 202] developed multiscale piecewise polynomial Galerkin, Petrov–Galerkin and discrete multiscale Petrov–Galerkin methods. An important advantage of multiscale piecewise polynomials is that their closed-form expressions are very convenient for computation. Moreover, they can easily be related to standard bases used in the conventional numerical methods, thereby providing an advantage for theoretical analysis as well. Among conventional numerical methods for solving integral equations, the collocation method has received the most favorable attention in the engineering community due to its lower computational cost in generating the coefficient matrix of the corresponding discrete equations. In comparison, the implementation of the Galerkin method requires much more computational effort for the evaluation of integrals (see, for example, [19, 77] for a discussion of this point). Motivated by this issue, [69] proposed and analyzed a fast collocation algorithm for solving general multidimensional integral equations. Moreover, a matrix truncation strategy was introduced there by making a careful choice of basis functions and collocation functionals, the end result being fast multiscale algorithms for solving the integral equations.

The development of stable, efficient and fast numerical algorithms for solving operator equations, including differential equations and integral equations, is a main focus of research in numerical analysis and scientific computation, since such algorithms are particularly important for large-scale computation. We review the three main steps in solving an operator equation. The first is at the level of approximation theory. Here we must choose appropriate subspaces and suitable bases for them. The second step is to discretize the operator equations using these bases and to analyze the convergence properties of the approximate solutions. The end result of this step of processing is a discrete linear system, and its construction is considered as a main task for the numerical solution of operator equations. The third step employs methods of numerical linear algebra to design an efficient solver for the discrete linear system. The ultimate goal is, of course, to solve the discrete linear system efficiently and obtain an accurate approximate solution to the original operator equation. Theoretical considerations and practical implementations in the numerical solution of operator equations show that these three steps of processing are closely related. Therefore, designing efficient algorithms for the discrete linear system should take into consideration the choice of subspaces and their bases, the methodologies of discretization of the operator equations, and the specific characteristics and advantages of the numerical solvers used to solve the resulting discrete linear system. In this book we describe how these three steps are integrated in a multiscale environment, and thereby achieve our goal of providing a wide selection of fast and accurate algorithms for second-kind integral equations. We also describe work in progress addressing related issues of eigenvalue and eigenfunction computation, as well as the solution of Fredholm equations of the first kind.

This book is organized into 12 chapters plus an appendix. Chapter 1 is devoted to a review of the Fredholm approach to solving an integral equation of the second kind. In Chapter 2 we introduce essential concepts from Fredholm integral equations of the second kind and describe a general setup of projection methods for solving operator equations, which will be used in later chapters. The purpose of Chapter 3 is to describe conventional numerical methods for solving Fredholm integral equations of the second kind, including the degenerate kernel method, the quadrature method, the Galerkin method, the Petrov–Galerkin method and the collocation method. In Chapter 4 a general construction of multiscale bases of piecewise polynomial spaces, including multiscale orthogonal and interpolating bases, is presented. Chapters 5, 6 and 7 use the material from Chapter 4 to construct multiscale Galerkin, Petrov–Galerkin and collocation methods. We study the discretization schemes resulting from these methods, propose truncation strategies for building fast and accurate algorithms, and give a complete analysis of the order of convergence, computational complexity, stability and condition numbers for the truncated schemes. In Chapter 8, two types of quadrature rule for the numerical integration required to generate the coefficient matrix are introduced, and error control strategies are designed so that the quadrature errors will neither ruin the overall convergence order nor increase the overall computational complexity of the original multiscale methods. The goal of Chapter 9 is to investigate fast solvers for the discrete linear systems resulting from multiscale methods. We introduce multilevel augmentation methods and multilevel iteration methods based on direct sum decompositions of the range and domain of the operator equation. In Chapters 10, 11 and 12, the fast algorithms are applied to solving nonlinear integral equations of the second kind, ill-posed integral equations of the first kind, and eigen-problems of compact integral operators, respectively. We summarize in the Appendix some of the standard concepts and results from functional analysis in a form which is used throughout the book. The appendix provides the reader with a convenient source of the background material needed to follow the ideas and arguments presented in the other chapters of this book.

Most of the material in this book can only be found in research papers. This is the first time that it has been assembled into a book. Although this book is pronouncedly a research monograph, selected material from the initial chapters can be used in a semester course on numerical methods for integral equations which presents the multiscale point of view.


1

A review of the Fredholm approach

In this chapter we pay homage to Ivar Fredholm (April 7, 1866 – August 17, 1927) and review his approach to solving an integral equation of the second kind. The methods employed in this chapter are classical and differ from the approach taken in the rest of the book. We include it here because those readers inexperienced in integral equations should be familiar with these important ideas. The basic tools of matrix theory and some complex analysis are needed, and we shall provide a reasonably self-contained discussion of the required material.

1.1 Introduction

We start by introducing the notation that will be used throughout this book. Let C, R, Z and N denote, respectively, the set of complex numbers, the set of real numbers, the set of integers and the set of positive integers. We also let N_0 = {0} ∪ N. For the purpose of enumerating a nonempty finite set of objects we use the sets N_d = {1, 2, ..., d} and Z_d = {0, 1, ..., d − 1}, both of which consist of d distinct integers. For d ∈ N, let R^d denote the d-dimensional Euclidean space and Ω a subset of R^d. By C(Ω) we mean the linear space of all continuous real-valued functions defined on Ω. We usually denote matrices or vectors over R in boldface, for example A = [A_{ij} : i, j ∈ N_d] ∈ R^{d×d} and u = [u_j : j ∈ N_d] ∈ R^d. When the vector has all integer coordinates, that is, u ∈ Z^d, we sometimes call it a lattice vector. Moreover, we usually denote integral operators by calligraphic letters. In particular, the integral operator with a kernel K will be denoted by $\mathcal{K}$; that is, for the kernel K defined on Ω × Ω and the function u defined on Ω, we define

$$(\mathcal{K}u)(s) = \int_{\Omega} K(s,t)\,u(t)\,dt, \qquad s \in \Omega.$$


The most direct approach to solving a second-kind integral equation merely replaces integrals by sums and thereby obtains a linear system of equations whose solution approximates the solution of the original equation. The study of the resulting linear system of equations leads naturally to the important notions of the Fredholm function and determinant, which remain a central tool in the theory of second-kind integral equations; see, for example, [183, 253]. We consider this direct approach when we are given a continuous kernel K ∈ C(Ω × Ω) on a compact subset Ω of R^d with positive Borel measure, a continuous function f ∈ C(Ω) and a nonzero complex number λ ∈ C. The task is to find a function u ∈ C(Ω) such that, for s ∈ Ω,

$$u(s) - \lambda \int_{\Omega} K(s,t)\,u(t)\,dt = f(s). \qquad (1.1)$$

To this end, for each h > 0 we partition Ω into nonempty compact subsets {Ω_i : i ∈ N_n},

$$\Omega = \bigcup_{i \in N_n} \Omega_i,$$

such that different subsets have no overlapping interior and

$$\operatorname{diam} \Omega_i = \max\{|x - y| : x, y \in \Omega_i\} \le h,$$

where |x| is the 2-norm of the vector x ∈ R^d. This partition can be constructed by first putting a large "box" around the set Ω and then decomposing this box into cubes, each of which has diameter less than or equal to h. The sets Ω_i are then formed by intersecting the set Ω with the cubes, where we discard sets of zero Borel measure. Therefore, the partition of Ω constructed in this manner is defined a.e. Next we choose any finite set of points T = {t_i : i ∈ N_n} such that for any i ∈ N_n we have t_i ∈ Ω_i. With these points we now replace our integral equation (1.1) with a linear system of equations. Specifically, we choose the number ρ = −λ and the n × n matrix A defined by

A = [vol(j)K(ti tj) i j isin Nn]where vol(j) denotes the volume of the set j and replace (11) with thesystem of linear equations

(I+ ρA)u = f (12)

Here f is the vector obtained by evaluating the function f on the set T Of course the point of view we take here is that the vector u isin Rn

which solves equation (12) is an approximation to the function u on theset T Therefore the problem of determining the function u is replaced by thesimpler one of numerically solving for the vector u when h is small Certainly

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637003Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 010447 subject to the Cambridge Core terms of

12 Second-kind matrix Fredholm equations 7

an important role is played by the determinant of the coefficient matrix ofthe linear system (12) Its properties especially as h rarr 0+ will be ourmain concern for a significant part of this chapter We start by studying thedeterminant of the coefficient matrix of the linear system (12) and then derivea formula for the entries of the inverse of the matrix I + ρA in terms of thematrix A
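As a concrete illustration of this direct approach, here is a minimal Nyström-style sketch on $\Omega = [0,1]$, assuming a uniform partition with midpoint sample points; the separable test kernel $K(s,t) = st$, the right-hand side $f(s) = s$ and $\lambda = 1$ are our own choices, for which the exact solution of (1.1) is $u(s) = 3s/2$.

```python
import numpy as np

def nystrom_solve(kernel, f, lam, n=200):
    """Solve u(s) - lam * int_0^1 K(s,t) u(t) dt = f(s) approximately.

    Uniform partition of [0,1] into n cells of width h with midpoints t_i,
    so vol(Omega_j) = h for every cell.  With rho = -lam this is the linear
    system (1.2): (I + rho*A) u = f, where A_ij = vol(Omega_j) * K(t_i, t_j).
    """
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h                  # midpoints of the cells
    A = h * kernel(t[:, None], t[None, :])        # A_ij = h * K(t_i, t_j)
    u = np.linalg.solve(np.eye(n) - lam * A, f(t))  # I + rho*A = I - lam*A
    return t, u

# Separable test kernel K(s,t) = s*t with f(s) = s and lam = 1:
# the exact solution is u(s) = 1.5*s.
t, u = nystrom_solve(lambda s, tt: s * tt, lambda s: s, lam=1.0)
err = np.max(np.abs(u - 1.5 * t))
print(err)  # small midpoint-rule discretization error
```

Because the midpoint rule is second-order accurate, the error decreases like $h^2$ as the partition is refined.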

1.2 Second-kind matrix Fredholm equations

We first define the minors of an $n \times n$ matrix. If $\mathbf{A} = [A_{ij} : i, j \in \mathbb{N}_n]$ is an $n \times n$ matrix, $q$ is a non-negative integer in $\mathbb{Z}_{n+1}$, and $\mathbf{i} = [i_l : l \in \mathbb{N}_q]$, $\mathbf{j} = [j_l : l \in \mathbb{N}_q]$ are lattice vectors in $\mathbb{N}_n^q$, we define the corresponding minor by

\[
\mathbf{A}[\mathbf{i}, \mathbf{j}] = \det[A_{i_r j_s} : r, s \in \mathbb{N}_q].
\]

Sometimes the more elaborate notation

\[
\mathbf{A}\begin{pmatrix} i_1, i_2, \dots, i_q \\ j_1, j_2, \dots, j_q \end{pmatrix} \tag{1.3}
\]

is used for $\mathbf{A}[\mathbf{i}, \mathbf{j}]$. When $\mathbf{i} = \mathbf{j}$, that is, for a principal minor of $\mathbf{A}$, we use the simplified notation $\mathbf{A}[\mathbf{i}]$ in place of $\mathbf{A}[\mathbf{i}, \mathbf{i}]$. For a positive integer $q \in \mathbb{N}_n$ we set

\[
r_q(\mathbf{A}) = \frac{1}{q!} \sum_{\mathbf{i} \in \mathbb{N}_n^q} \mathbf{A}[\mathbf{i}] \tag{1.4}
\]

and also choose $r_0(\mathbf{A}) = 1$.

Lemma 1.1  If $\mathbf{A}$ is an $n \times n$ matrix and $\rho \in \mathbb{C}$, then

\[
\det(\mathbf{I} + \rho \mathbf{A}) = \sum_{q \in \mathbb{Z}_{n+1}} r_q(\mathbf{A}) \rho^q. \tag{1.5}
\]

Before proving this lemma, we make two remarks.

Remark 1.2  Using the extended notation for a minor indicated in (1.3), we see that equation (1.4) is equivalent to the formula

\[
r_q(\mathbf{A}) = \frac{1}{q!} \sum_{[i_l : l \in \mathbb{N}_q] \in \mathbb{N}_n^q} \mathbf{A}\begin{pmatrix} i_1, \dots, i_q \\ i_1, \dots, i_q \end{pmatrix}. \tag{1.6}
\]

Certainly, if any two components of the vector $\mathbf{i} = [i_l : l \in \mathbb{N}_q]$ are equal, then the corresponding minor has a repeated row (and column) and so is zero. These terms may be neglected. Moreover, any permutation of the components of the vector $\mathbf{i}$ effects both a row and a column exchange of the determinant appearing in (1.6), and so does not affect the value of the determinant. Since there are $q!$ such permutations, we get that

\[
r_q(\mathbf{A}) = \sum_{1 \le i_1 < i_2 < \dots < i_q \le n} \mathbf{A}\begin{pmatrix} i_1, \dots, i_q \\ i_1, \dots, i_q \end{pmatrix}. \tag{1.7}
\]

Remark 1.3  If the characteristic values of the matrix $\mathbf{A}$ are denoted by $\lambda_j$, $j \in \mathbb{N}_n$, then

\[
r_q(\mathbf{A}) = \sum_{1 \le i_1 < i_2 < \dots < i_q \le n} \lambda_{i_1} \lambda_{i_2} \cdots \lambda_{i_q}. \tag{1.8}
\]

The right-hand side of this equation is an elementary symmetric function of the eigenvalues of $\mathbf{A}$, which is invariant under a similarity transformation of the matrix $\mathbf{A}$. This fact will be the basis of the proof of Lemma 1.1 presented below.

We next present the proof of Lemma 1.1.

Proof  By Schur's upper-triangular factorization theorem (see, for example, [142], p. 79), the matrix $\mathbf{A}$ can be factored in the form

\[
\mathbf{A} = \mathbf{P}^{-1} \mathbf{T} \mathbf{P}, \tag{1.9}
\]

where $\mathbf{P}$ is a unitary matrix and $\mathbf{T}$ is an upper-triangular matrix whose diagonal entries are the eigenvalues of $\mathbf{A}$ (chosen in any prescribed order). For an upper-triangular matrix $\mathbf{T}$ we observe from (1.8) that

\[
\det(\mathbf{I} + \rho \mathbf{T}) = \prod_{j \in \mathbb{N}_n} (1 + \rho \lambda_j)
= \sum_{q \in \mathbb{Z}_{n+1}} \rho^q \sum_{1 \le j_1 < j_2 < \dots < j_q \le n} \lambda_{j_1} \lambda_{j_2} \cdots \lambda_{j_q}
= \sum_{q \in \mathbb{Z}_{n+1}} r_q(\mathbf{T}) \rho^q,
\]

thereby verifying that equation (1.5) is valid, at least for an upper-triangular matrix. For a general matrix we reduce to the upper-triangular case by the similarity (1.9). Since both $\det(\mathbf{I} + \rho\mathbf{A})$ and, by Remark 1.3, each $r_q(\mathbf{A})$ are unchanged under a similarity transformation, this proves the general case. □
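Lemma 1.1 is easy to check numerically. The sketch below computes $r_q(\mathbf{A})$ as the sum of the $q \times q$ principal minors, as in (1.7), and compares $\sum_q r_q(\mathbf{A})\rho^q$ with $\det(\mathbf{I} + \rho\mathbf{A})$; the random matrix and the value of $\rho$ are arbitrary choices of ours.

```python
import numpy as np
from itertools import combinations

def r_q(A, q):
    """r_q(A): the sum of all q-by-q principal minors of A, as in (1.7)."""
    if q == 0:
        return 1.0
    n = A.shape[0]
    return sum(np.linalg.det(A[np.ix_(idx, idx)])
               for idx in combinations(range(n), q))

rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
rho = 0.7
lhs = np.linalg.det(np.eye(5) + rho * A)
rhs = sum(r_q(A, q) * rho**q for q in range(6))
print(abs(lhs - rhs))  # agreement to machine precision
```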

We now add a more difficult computation, which provides an expansion of the elements of the matrix $(\mathbf{I} + \rho \mathbf{A})^{-1}$ as a rational function of $\rho$. To this end we introduce some needed constants. We define, for any $k, l \in \mathbb{N}_n$ and $q \in \mathbb{Z}_{n+1}$, the constants

\[
u_{lk,q} = \frac{1}{q!} \sum_{\mathbf{i} \in \mathbb{N}_n^q} \mathbf{A}\begin{bmatrix} l, \mathbf{i} \\ k, \mathbf{i} \end{bmatrix}. \tag{1.10}
\]

In (1.10) there appear (not necessarily principal) minors of order $q + 1$. Moreover, when $q = n - 1$ and $k, l \in \mathbb{N}_n$ with $k \neq l$, we have that $u_{lk,q} = 0$, since $\mathbb{N}_n$ has only $n$ distinct elements. Also, for the same reason, $u_{lk,n} = 0$ for all $k, l \in \mathbb{N}_n$.

We shall relate these constants. But first we introduce, for $k, l \in \mathbb{N}_n$, the polynomials $U_{lk}$ defined at $\rho$ to be

\[
U_{lk}(\rho) = \sum_{q \in \mathbb{Z}_{n+1}} u_{lk,q}\, \rho^q.
\]

Now, to relate all of these quantities, we start with the minor

\[
\mathbf{A}\begin{pmatrix} l, i_1, \dots, i_q \\ k, i_1, \dots, i_q \end{pmatrix}, \tag{1.11}
\]

where $l, k \in \mathbb{N}_n$, and expand it by the Laplace expansion by minors across its first row to obtain the formula

\[
\mathbf{A}\begin{pmatrix} l, i_1, \dots, i_q \\ k, i_1, \dots, i_q \end{pmatrix}
= A_{lk}\, \mathbf{A}\begin{pmatrix} i_1, \dots, i_q \\ i_1, \dots, i_q \end{pmatrix}
+ \sum_{m \in \mathbb{N}_q} (-1)^m A_{l i_m}\, \mathbf{A}\begin{pmatrix} i_1, i_2, \dots, i_q \\ k, i_1, \dots, \hat{i}_m, \dots, i_q \end{pmatrix}. \tag{1.12}
\]

The symbol $\hat{i}_m$ appearing in the minor above is to be interpreted to mean that the $i_m$th column of $\mathbf{A}$ is deleted in that minor. We now sum both sides of this formula over all integers $i_1, i_2, \dots, i_q$ in $\mathbb{N}_n$ and interchange the summations in the second term on the right-hand side of the resulting equation, to yield the formula

\[
\sum_{[i_p : p \in \mathbb{N}_q] \in \mathbb{N}_n^q} \mathbf{A}\begin{pmatrix} l, i_1, \dots, i_q \\ k, i_1, \dots, i_q \end{pmatrix}
= \sum_{[i_p : p \in \mathbb{N}_q] \in \mathbb{N}_n^q} A_{lk}\, \mathbf{A}\begin{pmatrix} i_1, \dots, i_q \\ i_1, \dots, i_q \end{pmatrix}
+ \sum_{m \in \mathbb{N}_q} \sum_{[i_p : p \in \mathbb{N}_q] \in \mathbb{N}_n^q} (-1)^m A_{l i_m}\, \mathbf{A}\begin{pmatrix} i_1, i_2, \dots, i_q \\ k, i_1, \dots, \hat{i}_m, \dots, i_q \end{pmatrix}. \tag{1.13}
\]

The first term on the right-hand side of equation (1.13) is clearly $A_{lk}\, q!\, r_q(\mathbf{A})$, which follows from our definition (1.6). The value of the second term requires more explanation.

For the second term we point out that there are two sums over $i_m \in \mathbb{N}_n$. The outer sum already appears on the right-hand side of equation (1.13), and the inner one appears in the sum over all indices $i_1, i_2, \dots, i_q$ in $\mathbb{N}_n$. In the inner sum we first fix $i_m$ and then sum over all the other indices $i_1, \dots, i_{m-1}, i_{m+1}, \dots, i_q \in \mathbb{N}_n$. This leads us to the expression

\[
\sum_{[i_1, \dots, i_{m-1}, i_{m+1}, \dots, i_q] \in \mathbb{N}_n^{q-1}} (-1)^m\, \mathbf{A}\begin{pmatrix} i_1, \dots, i_q \\ k, i_1, \dots, i_{m-1}, i_{m+1}, \dots, i_q \end{pmatrix}.
\]

We now locate the $i_m$th row in the minor of $\mathbf{A}$ and move it forward to the first row. This requires $m - 1$ row exchanges and gives us the expression

\[
- \sum_{[i_1, \dots, i_{m-1}, i_{m+1}, \dots, i_q] \in \mathbb{N}_n^{q-1}} \mathbf{A}\begin{pmatrix} i_m, i_1, \dots, i_{m-1}, i_{m+1}, \dots, i_q \\ k, i_1, \dots, i_{m-1}, i_{m+1}, \dots, i_q \end{pmatrix}.
\]

Next we multiply this expression by $A_{l i_m}$ and compute the (first) sum of it over $i_m \in \mathbb{N}_n$. This yields the quantity

\[
- \sum_{r \in \mathbb{N}_n} A_{lr} \sum_{[i_p : p \in \mathbb{N}_{q-1}] \in \mathbb{N}_n^{q-1}} \mathbf{A}\begin{pmatrix} r, i_1, \dots, i_{q-1} \\ k, i_1, \dots, i_{q-1} \end{pmatrix}.
\]

But this quantity is independent of $m$, and so appears $q$ times in the remaining (outer) sum over $m \in \mathbb{N}_q$. So, in summary, we get the equation

\[
\sum_{[i_p : p \in \mathbb{N}_q] \in \mathbb{N}_n^q} \mathbf{A}\begin{pmatrix} l, i_1, \dots, i_q \\ k, i_1, \dots, i_q \end{pmatrix}
= A_{lk} \sum_{[i_p : p \in \mathbb{N}_q] \in \mathbb{N}_n^q} \mathbf{A}\begin{pmatrix} i_1, \dots, i_q \\ i_1, \dots, i_q \end{pmatrix}
- q \sum_{r \in \mathbb{N}_n} A_{lr} \sum_{[i_p : p \in \mathbb{N}_{q-1}] \in \mathbb{N}_n^{q-1}} \mathbf{A}\begin{pmatrix} r, i_1, \dots, i_{q-1} \\ k, i_1, \dots, i_{q-1} \end{pmatrix},
\]

which is equivalent to the formula

\[
\sum_{\mathbf{i} \in \mathbb{N}_n^q} \mathbf{A}\begin{bmatrix} l, \mathbf{i} \\ k, \mathbf{i} \end{bmatrix}
= A_{lk} \sum_{\mathbf{i} \in \mathbb{N}_n^q} \mathbf{A}[\mathbf{i}]
- q \sum_{r \in \mathbb{N}_n} A_{lr} \sum_{\mathbf{j} \in \mathbb{N}_n^{q-1}} \mathbf{A}\begin{bmatrix} r, \mathbf{j} \\ k, \mathbf{j} \end{bmatrix}.
\]

When $q = 0$ the second term on the right is zero, while the first sum on the right is set to one. Likewise, the expression on the left is set to $A_{lk}$, and so this formula remains true when $q = 0$. We now multiply both sides by $\rho^q / q!$ and sum over $q \in \mathbb{Z}_{n+1}$. Upon simplification we conclude, for $l, k \in \mathbb{N}_n$, that

\[
U_{lk}(\rho) = A_{lk} \det(\mathbf{I} + \rho \mathbf{A}) - \rho \sum_{r \in \mathbb{N}_n} A_{lr} U_{rk}(\rho). \tag{1.14}
\]

Let $\mathbf{U}$ be the $n \times n$ matrix defined by

\[
\mathbf{U} = (U_{lk}(\rho))_{l,k \in \mathbb{N}_n}.
\]

Using this matrix, we re-express equation (1.14) in the equivalent matrix form

\[
\mathbf{U} - \mathbf{A} \det(\mathbf{I} + \rho \mathbf{A}) + \rho \mathbf{A} \mathbf{U} = 0. \tag{1.15}
\]

Likewise, by performing the Laplace expansion of (1.11) by minors down its first column, we get the equation

\[
\mathbf{U} - \mathbf{A} \det(\mathbf{I} + \rho \mathbf{A}) + \rho \mathbf{U} \mathbf{A} = 0. \tag{1.16}
\]

From these formulas we get that

\[
(\mathbf{I} + \rho \mathbf{A})^{-1} = \mathbf{I} - \frac{\rho \mathbf{U}}{\det(\mathbf{I} + \rho \mathbf{A})}. \tag{1.17}
\]

Indeed, this can be obtained by checking the equations

\[
(\mathbf{I} + \rho \mathbf{A})\left(\mathbf{I} - \frac{\rho \mathbf{U}}{\det(\mathbf{I} + \rho \mathbf{A})}\right)
= \left(\mathbf{I} - \frac{\rho \mathbf{U}}{\det(\mathbf{I} + \rho \mathbf{A})}\right)(\mathbf{I} + \rho \mathbf{A}) = \mathbf{I},
\]

which may be confirmed by employing equations (1.15) and (1.16).
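For a small matrix, formula (1.17) can be verified directly by assembling $\mathbf{U}$ from the minors in (1.10). The brute-force enumeration below over all index tuples $\mathbf{i} \in \mathbb{N}_n^q$ is our own illustrative check; tuples with repeated indices contribute zero determinants automatically.

```python
import numpy as np
from itertools import product
from math import factorial

def U_matrix(A, rho):
    """Assemble U = (U_lk(rho)) from the order-(q+1) minors in (1.10)."""
    n = A.shape[0]
    U = np.zeros((n, n))
    for l in range(n):
        for k in range(n):
            for q in range(n + 1):
                # minor with rows (l, i_1..i_q) and columns (k, i_1..i_q)
                s = sum(np.linalg.det(A[np.ix_((l,) + i, (k,) + i)])
                        for i in product(range(n), repeat=q))
                U[l, k] += rho**q * s / factorial(q)
    return U

rng = np.random.default_rng(1)
A = rng.standard_normal((3, 3))
rho = 0.4
det = np.linalg.det(np.eye(3) + rho * A)
lhs = np.linalg.inv(np.eye(3) + rho * A)
rhs = np.eye(3) - rho * U_matrix(A, rho) / det   # formula (1.17)
print(np.max(np.abs(lhs - rhs)))  # agreement to machine precision
```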

1.3 Fredholm functions

We now apply Lemma 1.1 to Fredholm's discretization given in equation (1.2). For this purpose we introduce notation for the Fredholm minors of a continuous kernel $K \in C(\Omega \times \Omega)$.

Definition 1.4  For any two vectors $\mathbf{x} = [x_l : l \in \mathbb{N}_q]$ and $\mathbf{y} = [y_l : l \in \mathbb{N}_q]$ in $\Omega^q$, we define the corresponding $q$th-order minor of a continuous kernel $K$ as

\[
K\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix} = \det[K(x_l, y_r) : l, r \in \mathbb{N}_q].
\]

If $\mathbf{x} = \mathbf{y}$ we use the simplified notation $K[\mathbf{x}]$. Sometimes, to avoid confusion, we may use the extended notation

\[
K\begin{pmatrix} x_1, \dots, x_q \\ y_1, \dots, y_q \end{pmatrix}
\]

for the minor $K\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix}$, and similarly we might use

\[
K\begin{pmatrix} x_1, \dots, x_q \\ x_1, \dots, x_q \end{pmatrix}
\]

for $K[\mathbf{x}]$.

Now, returning to equation (1.2), we recall that in that case the matrix $\mathbf{A}$ is given by the formula

\[
\mathbf{A} = [\operatorname{vol}(\Omega_j) K(t_i, t_j) : i, j \in \mathbb{N}_n].
\]

For each lattice vector $\mathbf{i} = [i_l : l \in \mathbb{N}_q]$ in $\mathbb{N}_n^q$ we form the vector $\mathbf{s}_{\mathbf{i}} = [t_{i_l} : l \in \mathbb{N}_q]$ in $\Omega^q$ from the points in the set $T = \{t_i : i \in \mathbb{N}_n\}$, and the subset

\[
\Omega_{\mathbf{i}} = \bigotimes_{l \in \mathbb{N}_q} \Omega_{i_l}
\]

of $\Omega^q$, and obtain a formula for the determinant of the coefficient matrix of the linear system (1.2), namely

\[
D_h(\lambda) = \sum_{q \in \mathbb{Z}_{n+1}} \frac{(-1)^q}{q!} \lambda^q \sum_{\mathbf{i} \in \mathbb{N}_n^q} \operatorname{vol}(\Omega_{\mathbf{i}}) K[\mathbf{s}_{\mathbf{i}}].
\]

Note that the vector $\mathbf{s}_{\mathbf{i}} = [t_{i_l} : l \in \mathbb{N}_q]$ is in $\Omega_{\mathbf{i}}$ and that

\[
\Omega^q = \bigcup_{\mathbf{i} \in \mathbb{N}_n^q} \Omega_{\mathbf{i}},
\]

thereby forming a partition of the set $\Omega^q$ a.e. In our expanded notation the determinant becomes

\[
D_h(\lambda) = \sum_{q \in \mathbb{Z}_{n+1}} (-1)^q \lambda^q \sum_{1 \le i_1 < i_2 < \dots < i_q \le n} \operatorname{vol}(\Omega_{\mathbf{i}})\, K\begin{pmatrix} t_{i_1}, \dots, t_{i_q} \\ t_{i_1}, \dots, t_{i_q} \end{pmatrix}.
\]

We now begin the task of identifying the limit of the polynomials $D_h$ as $h \to 0^+$.

Definition 1.5  If $K$ is a continuous kernel on $\Omega \times \Omega$, we define the complex-valued function $D$ at $\lambda \in \mathbb{C}$ by

\[
D(\lambda) = \sum_{q \in \mathbb{N}_0} \frac{(-1)^q \lambda^q}{q!} \int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x}. \tag{1.18}
\]

Next we point out that $D$ is an entire function of $\lambda$; we shall refer to $D$ as the Fredholm function. To prove that the Fredholm function $D$ is entire, we recall the Hadamard inequality for the determinant of an $n \times n$ matrix (see, for example, [142]). When we use the Hadamard inequality it is convenient to express an $n \times n$ matrix $\mathbf{A}$ in terms of its columns. Thus we write $\mathbf{A} = [\mathbf{a}_i : i \in \mathbb{N}_n]$, in which each $\mathbf{a}_i \in \mathbb{R}^n$ is the $i$th column of $\mathbf{A}$.

Lemma 1.6  If $\mathbf{A} = [\mathbf{a}_i : i \in \mathbb{N}_n]$ is an $n \times n$ matrix, then

\[
|\det \mathbf{A}| \le \prod_{i \in \mathbb{N}_n} |\mathbf{a}_i|,
\]

where $|\mathbf{a}_i|$ denotes the 2-norm of $\mathbf{a}_i$.

We now set

\[
\|K\|_\infty = \max\{|K(x, y)| : x, y \in \Omega\}
\]

and derive the next lemma as an immediate application of the Hadamard inequality.

Lemma 1.7  If $K \in C(\Omega \times \Omega)$ and $\mathbf{x}, \mathbf{y} \in \Omega^q$, then

\[
\left| K\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix} \right| \le \|K\|_\infty^q\, q^{q/2}.
\]

Proof  Since for each $r \in \mathbb{N}_q$ and $x_r, y \in \Omega$ we have that

\[
\sum_{r \in \mathbb{N}_q} K^2(x_r, y) \le q \|K\|_\infty^2,
\]

the result of this lemma follows immediately from the Hadamard inequality. □
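The bound of Lemma 1.7 is easy to spot-check numerically; the kernel $K(s,t) = \cos(3(s-t))$ below, with $\|K\|_\infty = 1$, is an arbitrary choice of ours.

```python
import numpy as np

# Spot-check |K[x; y]| <= ||K||_inf^q * q^(q/2) (Lemma 1.7) for the
# illustrative kernel K(s,t) = cos(3*(s - t)), which has ||K||_inf = 1.
rng = np.random.default_rng(2)
q = 6
for _ in range(200):
    x, y = rng.random(q), rng.random(q)
    minor = np.linalg.det(np.cos(3.0 * (x[:, None] - y[None, :])))
    assert abs(minor) <= q ** (q / 2)
print("bound holds on all samples")
```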

Lemma 1.8  The function $D$ defined by equation (1.18) is an entire function.

Proof  For $q \in \mathbb{N}_0$ and $\lambda \in \mathbb{C}$ we have, by Lemma 1.7, that

\[
\left| \frac{(-1)^q \lambda^q}{q!} \int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x} \right|
\le \frac{\operatorname{vol}(\Omega)^q |\lambda|^q}{q!} \|K\|_\infty^q\, q^{q/2}.
\]

By using the inequality

\[
\frac{q^q}{q!} < e^q, \tag{1.19}
\]

valid for all $q \in \mathbb{N}$, we obtain that

\[
\frac{\operatorname{vol}(\Omega)^q |\lambda|^q}{q!} \|K\|_\infty^q\, q^{q/2}
= \left( \frac{\operatorname{vol}(\Omega)\, |\lambda|\, \|K\|_\infty}{\sqrt{q}} \right)^q \frac{q^q}{q!}
\le \left( \frac{\operatorname{vol}(\Omega)\, |\lambda|\, \|K\|_\infty\, e}{\sqrt{q}} \right)^q.
\]

Hence, when $q$ is chosen to satisfy the condition

\[
q > (2 \operatorname{vol}(\Omega)\, |\lambda|\, \|K\|_\infty\, e)^2,
\]

we conclude that

\[
\left| \frac{(-1)^q \lambda^q}{q!} \int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x} \right| < 2^{-q}.
\]

Thus the series on the right-hand side of equation (1.18) converges absolutely. □

Theorem 1.9  For $\lambda \in \mathbb{C}$,

\[
\lim_{h \to 0^+} D_h(\lambda) = D(\lambda),
\]

uniformly and absolutely on any bounded subset of $\mathbb{C}$.
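Theorem 1.9 can be observed numerically. For the rank-one test kernel $K(s,t) = st$ on $[0,1]$ (our own choice), all minors $K[\mathbf{x}]$ with $q \ge 2$ vanish, so $D(\lambda) = 1 - \lambda/3$ exactly, while $D_h(\lambda) = \det(\mathbf{I} - \lambda \mathbf{A})$ for the midpoint discretization:

```python
import numpy as np

def D_h(lam, n):
    """Discretized Fredholm determinant det(I + rho*A) with rho = -lam."""
    h = 1.0 / n
    t = (np.arange(n) + 0.5) * h
    A = h * np.outer(t, t)            # A_ij = vol(Omega_j) * K(t_i, t_j)
    return np.linalg.det(np.eye(n) - lam * A)

lam = 2.0
exact = 1.0 - lam / 3.0               # D(lambda) for K(s,t) = s*t
errors = [abs(D_h(lam, n) - exact) for n in (10, 100, 1000)]
print(errors)  # decreasing as h -> 0+
```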

For the proof of Theorem 1.9 we need some ancillary lemmas. The first concerns $n \times n$ matrices $\mathbf{A} = [\mathbf{a}_i : i \in \mathbb{N}_n]$ and $\mathbf{B} = [\mathbf{b}_i : i \in \mathbb{N}_n]$.

Lemma 1.10  If there exists a constant $\theta$ such that $|\mathbf{a}_i| \le \theta$ and $|\mathbf{b}_i| \le \theta$ for all $i \in \mathbb{N}_n$, then

\[
|\det \mathbf{A} - \det \mathbf{B}| \le \theta^{n-1} \sum_{i \in \mathbb{N}_n} |\mathbf{a}_i - \mathbf{b}_i|.
\]

Proof  The proof follows from the Hadamard inequality applied to the formula

\[
\det \mathbf{A} - \det \mathbf{B} = \sum_{i \in \mathbb{N}_n} \det[\mathbf{b}_1, \dots, \mathbf{b}_{i-1}, \mathbf{a}_i - \mathbf{b}_i, \mathbf{a}_{i+1}, \dots, \mathbf{a}_n]. \;\square
\]

For the next fact we need to recall the definition of the modulus of continuity of the kernel $K$, namely, for $h > 0$,

\[
\omega(K, h) = \max\{|K(x, y) - K(x', y')| : |x - x'| \le h,\ |y - y'| \le h\}.
\]

Lemma 1.11  For any $\mathbf{x}, \mathbf{y} \in \Omega^q$ with $|\mathbf{x} - \mathbf{y}| \le h$ there holds the estimate

\[
|K[\mathbf{x}] - K[\mathbf{y}]| \le \frac{q}{\|K\|_\infty} (\|K\|_\infty \sqrt{q})^q\, \omega(K, h).
\]

Proof  We apply Lemma 1.10 to obtain the inequalities

\[
|K[\mathbf{x}] - K[\mathbf{y}]|
\le (\|K\|_\infty \sqrt{q})^{q-1} \sum_{i \in \mathbb{N}_q} \left( \sum_{r \in \mathbb{N}_q} (K(x_r, x_i) - K(y_r, y_i))^2 \right)^{1/2}
\le (\|K\|_\infty \sqrt{q})^{q-1} q^{3/2}\, \omega(K, h),
\]

which gives the desired estimate. □

The next lemma estimates the difference between a multivariate integral and a finite sum which approximates it.

Lemma 1.12  If $f \in C(\Omega^q)$, $\Omega^q$ is partitioned a.e. as $\Omega^q = \bigcup_{\mathbf{i} \in \mathbb{N}_n^q} \Omega_{\mathbf{i}}$, and $\mathbf{s}_{\mathbf{i}} \in \Omega_{\mathbf{i}}$ for $\mathbf{i} \in \mathbb{N}_n^q$, then

\[
\left| \int_{\Omega^q} f(\mathbf{x})\,d\mathbf{x} - \sum_{\mathbf{i} \in \mathbb{N}_n^q} \operatorname{vol}(\Omega_{\mathbf{i}}) f(\mathbf{s}_{\mathbf{i}}) \right| \le (\operatorname{vol}(\Omega))^q\, \omega(f, h),
\]

where $h$ is chosen so that $\operatorname{diam} \Omega_{\mathbf{i}} \le h$ for all $\mathbf{i} \in \mathbb{N}_n^q$.

Proof  We have that

\[
\sum_{\mathbf{i} \in \mathbb{N}_n^q} \operatorname{vol}(\Omega_{\mathbf{i}}) f(\mathbf{s}_{\mathbf{i}}) - \int_{\Omega^q} f(\mathbf{x})\,d\mathbf{x}
= \sum_{\mathbf{i} \in \mathbb{N}_n^q} \left( \operatorname{vol}(\Omega_{\mathbf{i}}) f(\mathbf{s}_{\mathbf{i}}) - \int_{\Omega_{\mathbf{i}}} f(\mathbf{x})\,d\mathbf{x} \right)
= \sum_{\mathbf{i} \in \mathbb{N}_n^q} \int_{\Omega_{\mathbf{i}}} (f(\mathbf{s}_{\mathbf{i}}) - f(\mathbf{x}))\,d\mathbf{x}.
\]

Hence, taking absolute values of both sides of the equation gives the inequality

\[
\left| \sum_{\mathbf{i} \in \mathbb{N}_n^q} \operatorname{vol}(\Omega_{\mathbf{i}}) f(\mathbf{s}_{\mathbf{i}}) - \int_{\Omega^q} f(\mathbf{x})\,d\mathbf{x} \right|
\le \sum_{\mathbf{i} \in \mathbb{N}_n^q} \int_{\Omega_{\mathbf{i}}} |f(\mathbf{s}_{\mathbf{i}}) - f(\mathbf{x})|\,d\mathbf{x}
\le \omega(f, h) \sum_{\mathbf{i} \in \mathbb{N}_n^q} \operatorname{vol}(\Omega_{\mathbf{i}}) \le (\operatorname{vol}(\Omega))^q\, \omega(f, h). \;\square
\]

We are now ready to prove Theorem 1.9.

Proof  Note that

\[
|D_h(\lambda) - D(\lambda)| \le \Delta_1 + \Delta_2,
\]

where

\[
\Delta_1 = \left| D_h(\lambda) - \sum_{q \in \mathbb{Z}_{n+1}} \frac{(-1)^q \lambda^q}{q!} \int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x} \right|
\]

and

\[
\Delta_2 = \left| \sum_{q \in \mathbb{N}_0 \setminus \mathbb{Z}_{n+1}} \frac{(-1)^q \lambda^q}{q!} \int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x} \right|.
\]

We estimate $\Delta_1$ and $\Delta_2$ separately.

We apply the above two lemmas to the function $f : \Omega^q \to \mathbb{R}$ defined for $\mathbf{x} \in \Omega^q$ as $f(\mathbf{x}) = K[\mathbf{x}]$ and obtain the inequality

\[
\left| \sum_{\mathbf{i} \in \mathbb{N}_n^q} \operatorname{vol}(\Omega_{\mathbf{i}}) K[\mathbf{s}_{\mathbf{i}}] - \int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x} \right|
\le \frac{q}{\|K\|_\infty} (\operatorname{vol}(\Omega) \|K\|_\infty \sqrt{q})^q\, \omega(K, h).
\]

Therefore we obtain that

\[
\begin{aligned}
\Delta_1 &\le \sum_{q \in \mathbb{Z}_{n+1}} \frac{|\lambda|^q}{q!} \left| \sum_{\mathbf{i} \in \mathbb{N}_n^q} \operatorname{vol}(\Omega_{\mathbf{i}}) K[\mathbf{s}_{\mathbf{i}}] - \int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x} \right| \\
&\le \left\{ \sum_{q \in \mathbb{Z}_{n+1}} \frac{|\lambda|^q}{q!} \frac{q}{\|K\|_\infty} (\operatorname{vol}(\Omega) \|K\|_\infty \sqrt{q})^q \right\} \omega(K, h) \\
&\le \left\{ \sum_{q \in \mathbb{N}_0} \frac{q}{\|K\|_\infty} \left( \frac{\operatorname{vol}(\Omega)\, |\lambda|\, \|K\|_\infty\, e}{\sqrt{q}} \right)^q \right\} \omega(K, h),
\end{aligned}
\]

where we have again used inequality (1.19) from the proof of Lemma 1.8, and where for $q$ sufficiently large the inequality

\[
\frac{\operatorname{vol}(\Omega)\, |\lambda|\, \|K\|_\infty\, e}{\sqrt{q}} < \frac{1}{2}
\]

holds. Therefore the series above converges, uniformly and absolutely on any bounded set. In fact, if we introduce the constant

\[
c = \sum_{q \in \mathbb{N}_0} \frac{q}{\|K\|_\infty} \left( \frac{\operatorname{vol}(\Omega)\, |\lambda|\, \|K\|_\infty\, e}{\sqrt{q}} \right)^q,
\]

we can write the above observation as $\Delta_1 \le c\, \omega(K, h)$. For $\Delta_2$ we also have that

\[
\begin{aligned}
\Delta_2 &\le \sum_{q \in \mathbb{N}_0 \setminus \mathbb{Z}_{n+1}} \frac{|\lambda|^q}{q!} \int_{\Omega^q} |K[\mathbf{x}]|\,d\mathbf{x}
\le \sum_{q \in \mathbb{N}_0 \setminus \mathbb{Z}_{n+1}} \frac{|\lambda|^q}{q!} (\|K\|_\infty \sqrt{q})^q \operatorname{vol}(\Omega^q) \\
&\le \sum_{q \in \mathbb{N}_0 \setminus \mathbb{Z}_{n+1}} \left( \frac{\operatorname{vol}(\Omega)\, |\lambda|\, \|K\|_\infty\, e}{\sqrt{q}} \right)^q
\le \sum_{q \in \mathbb{N}_0 \setminus \mathbb{Z}_{n+1}} \frac{1}{2^q}.
\end{aligned}
\]

Since $n \ge \operatorname{vol}(\Omega) / (2^d h^d)$, we see that $h \to 0^+$ implies both $n \to \infty$ and $q \to \infty$, so that the right-hand side of the above inequality goes to zero, uniformly for $\lambda$ in any bounded subset of the complex plane. These two inequalities prove the desired result. Indeed, when $n$ is sufficiently large we have

\[
|D_h(\lambda) - D(\lambda)| \le c\, \omega(K, h) + \sum_{q \in \mathbb{N}_0 \setminus \mathbb{Z}_{n+1}} \frac{1}{2^q}.
\]

Since $\omega(K, h) \to 0$ as $h \to 0^+$, the right-hand side of the above inequality tends to zero uniformly. □

1.4 Resolvent kernels

We now explain the central role that the Fredholm determinant plays in solving a second-kind integral equation with a continuous kernel.

To this end, for each $\lambda \in \mathbb{C}$, we introduce the resolvent kernel $R_\lambda$, defined for $s, t \in \Omega$ as

\[
R_\lambda(s, t) = \sum_{q \in \mathbb{N}_0} \frac{(-1)^{q+1}}{q!} \lambda^{q+1} \int_{\Omega^q} K\begin{bmatrix} s, \mathbf{x} \\ t, \mathbf{x} \end{bmatrix} d\mathbf{x}.
\]

Here, of course, the expression $K\begin{bmatrix} s, \mathbf{x} \\ t, \mathbf{x} \end{bmatrix}$ stands for the Fredholm minor

\[
K\begin{pmatrix} s, x_1, \dots, x_q \\ t, x_1, \dots, x_q \end{pmatrix}.
\]

Alternatively, we may write the resolvent kernel in the form

\[
R_\lambda(s, t) = -\lambda \left\{ K(s, t) + \sum_{q \in \mathbb{N}} \frac{(-1)^q}{q!} \lambda^q \int_{\Omega^q} K\begin{bmatrix} s, \mathbf{x} \\ t, \mathbf{x} \end{bmatrix} d\mathbf{x} \right\}.
\]

As in the proof of Lemma 1.8, we see for fixed $s, t \in \Omega$ that $R_\lambda(s, t)$ is an entire function of $\lambda \in \mathbb{C}$, and for fixed $\lambda \in \mathbb{C}$ it is continuous in $s, t \in \Omega$. The next observation gives an indication of the role of the resolvent kernel in solving a second-kind integral equation. Below, $\mathcal{R}_\lambda$ denotes the integral operator with kernel $R_\lambda$.

Proposition 1.13  For any $\lambda \in \mathbb{C}$ we have that

\[
\mathcal{R}_\lambda - \lambda \mathcal{K} \mathcal{R}_\lambda + \lambda D(\lambda) \mathcal{K} = 0 \tag{1.20}
\]

and

\[
\mathcal{R}_\lambda - \lambda \mathcal{R}_\lambda \mathcal{K} + \lambda D(\lambda) \mathcal{K} = 0. \tag{1.21}
\]

Proof  We first prove equation (1.20). For $\mathbf{x} \in \Omega^q$ and $s, t \in \Omega$, we expand the determinant $K\begin{bmatrix} s, \mathbf{x} \\ t, \mathbf{x} \end{bmatrix}$ along its first row to obtain the formula

\[
K\begin{pmatrix} s, x_1, \dots, x_q \\ t, x_1, \dots, x_q \end{pmatrix}
= K(s, t)\, K\begin{pmatrix} x_1, \dots, x_q \\ x_1, \dots, x_q \end{pmatrix}
+ \sum_{j \in \mathbb{N}_q} (-1)^j K(s, x_j)\, K\begin{pmatrix} x_1, x_2, \dots, x_q \\ t, x_1, \dots, \hat{x}_j, \dots, x_q \end{pmatrix},
\]

where, as before, $\hat{x}_j$ indicates that the column containing $x_j$ is deleted. Now we integrate both sides of this equation over $\mathbf{x} \in \Omega^q$ and note that the integral of each summand is independent of the summation index $j$. Hence we conclude that

\[
\int_{\Omega^q} K\begin{bmatrix} s, \mathbf{x} \\ t, \mathbf{x} \end{bmatrix} d\mathbf{x}
= K(s, t) \int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x}
- q \int_\Omega K(s, r) \left( \int_{\Omega^{q-1}} K\begin{bmatrix} r, \mathbf{x} \\ t, \mathbf{x} \end{bmatrix} d\mathbf{x} \right) dr,
\]

and this equation is also, by definition, valid for $q = 0$. We now substitute this equation into the power series for the resolvent kernel to obtain the equation

\[
R_\lambda(s, t) = \lambda \int_\Omega K(s, r) R_\lambda(r, t)\,dr - \lambda D(\lambda) K(s, t). \tag{1.22}
\]

Next we choose any $u \in C(\Omega)$, multiply both sides of equation (1.22) by $u(t)$, and integrate both sides of the resulting equation over $t \in \Omega$ to obtain equation (1.20). The second equation, (1.21), is proved in an analogous fashion by expanding the minor $K\begin{bmatrix} s, \mathbf{x} \\ t, \mathbf{x} \end{bmatrix}$ along its first column. □

The next result is concerned with the unique solvability of equation (1.1).

Theorem 1.14  If $D(\lambda) \neq 0$, then $(I - \lambda \mathcal{K})^{-1}$ is given by the formula

\[
(I - \lambda \mathcal{K})^{-1} = I - D(\lambda)^{-1} \mathcal{R}_\lambda.
\]

Proof  We compute

\[
(I - \lambda \mathcal{K})(I - D(\lambda)^{-1} \mathcal{R}_\lambda) = I - D(\lambda)^{-1} [\mathcal{R}_\lambda + \lambda D(\lambda) \mathcal{K} - \lambda \mathcal{K} \mathcal{R}_\lambda]
\]

and now use equation (1.20) to conclude that the term inside the brackets is zero. Similarly, we verify that

\[
(I - D(\lambda)^{-1} \mathcal{R}_\lambda)(I - \lambda \mathcal{K}) = I
\]

by using equation (1.21). □
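Theorem 1.14 gives the solution of (1.1) in closed form whenever the resolvent kernel is computable. For the rank-one test kernel $K(s,t) = st$ on $[0,1]$ (again our own choice), all minors with $q \ge 1$ vanish, so $R_\lambda(s,t) = -\lambda st$ and $D(\lambda) = 1 - \lambda/3$; the sketch below applies $u = f - D(\lambda)^{-1} \mathcal{R}_\lambda f$ and checks the residual of (1.1) with midpoint quadrature.

```python
import numpy as np

lam = 1.0
n = 400
h = 1.0 / n
s = (np.arange(n) + 0.5) * h
f = s**2                                 # an arbitrary right-hand side
D = 1.0 - lam / 3.0                      # D(lambda) for K(s,t) = s*t
Rf = -lam * s * (h * np.sum(s * f))      # (R_lam f)(s) = -lam*s*int t f(t) dt
u = f - Rf / D                           # u = (I - D(lam)^{-1} R_lam) f
# residual of u(s) - lam * int_0^1 s*t*u(t) dt = f(s), by midpoint quadrature
residual = u - lam * s * (h * np.sum(s * u)) - f
print(np.max(np.abs(residual)))  # small: quadrature error only
```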

It follows immediately from Theorem 1.14 that if $\lambda$ is not a zero of the function $D$, then equation (1.1) has a unique solution in $C(\Omega)$ for any $f \in C(\Omega)$. We now prove the converse of Theorem 1.14, namely, if $D(\lambda) = 0$ then $I - \lambda \mathcal{K}$ is not invertible. To this end we note the following ancillary fact.

Lemma 1.15  For any $\lambda \in \mathbb{C}$ there holds the equation

\[
\int_\Omega R_\lambda(s, s)\,ds = \lambda D'(\lambda). \tag{1.23}
\]

The proof of this formula follows directly from a term-by-term integration and differentiation of the relevant power series.

Corollary 1.16  If $D$ has a zero of order $m$ at some $\mu \neq 0$, then there is a $y \in \Omega$ for which $\lambda \mapsto R_\lambda(y, y)$ has a zero at $\mu$ of order strictly less than $m$.

Proof  From Lemma 1.15 above and our hypothesis we have that

\[
\int_\Omega \frac{\partial^{m-1}}{\partial \lambda^{m-1}} \big( R_\lambda(s, s) \big) \Big|_{\lambda=\mu}\,ds = \mu D^{(m)}(\mu) \neq 0. \;\square
\]

Theorem 1.17  If $K$ is a continuous kernel such that $D(\lambda) = 0$, then $I - \lambda \mathcal{K}$ is not invertible.

Proof  Choose a $y \in \Omega$ and consider the function $u_y$ defined to be $R_\lambda(\cdot, y)$. Since $D(\lambda) = 0$ by hypothesis, equation (1.20) implies that $u_y - \lambda \mathcal{K} u_y = 0$. We would be finished with the proof if there were a $y \in \Omega$ for which $u_y \neq 0$. Otherwise we must argue differently.

Suppose that $D$ vanishes at $\mu$ and the multiplicity of this zero is $m$ (note that $D(0) = 1$, and so each zero of $D$ must be of finite multiplicity). Let $l$ be the smallest of the orders of the zeros of the functions $\lambda \mapsto R_\lambda(x, y)$ at $\mu$ as $x, y$ range over $\Omega$. Corollary 1.16 says that $l < m$. We now expand $R_\lambda(x, y)$ near $\lambda = \mu$ and have that

\[
R_\lambda = (\lambda - \mu)^l g + O((\lambda - \mu)^{l+1}),
\]

where the function $g$ is continuous on $\Omega \times \Omega$ and does not vanish identically. Substituting this equation into equation (1.22), we obtain that

\[
(\lambda - \mu)^l g(s, t) + O((\lambda - \mu)^{l+1})
= \lambda \int_\Omega K(s, t') \left[ (\lambda - \mu)^l g(t', t) + O((\lambda - \mu)^{l+1}) \right] dt' - \lambda D(\lambda) K(s, t).
\]

Dividing both sides of the above equation by $(\lambda - \mu)^l$, we get that

\[
g(s, t) + O(\lambda - \mu)
= \lambda \int_\Omega K(s, t') \left[ g(t', t) + O(\lambda - \mu) \right] dt' - \lambda \frac{D(\lambda)}{(\lambda - \mu)^l} K(s, t).
\]

Letting $\lambda \to \mu$, and noting that $D(\lambda)/(\lambda - \mu)^l \to 0$ since $l < m$, we have that

\[
g(s, t) = \mu \int_\Omega K(s, t') g(t', t)\,dt'. \tag{1.24}
\]

However, we may rewrite equation (1.24) in the equivalent form

\[
(I - \mu \mathcal{K}) g = 0,
\]

which, together with the fact that $g$ is not identically zero, establishes that $I - \mu \mathcal{K}$ is not invertible, thereby proving the theorem. □

We see from the above theorems the importance of the zeros of $D$ in the study of second-kind integral equations. Since $D(0) = 1$, the zeros of $D$ are nonzero, countable, and have infinity as their only possible accumulation point. The reciprocals of the zeros of $D$ correspond to the nonzero eigenvalues of the operator $\mathcal{K}$. These observations prove many of the general statements we have made about compact operators in the Appendix; see Section A.2.7. We shall say more about the zeros of $D$ later. First we want to prove Fredholm's remarkable formula for the determinant of a product of two operators.

1.5 Fredholm determinants

In this section we consider the Fredholm determinant and prove a product formula for it.

Definition 1.18  Let $K \in C(\Omega \times \Omega)$. The linear operator $I + \mathcal{K} : C(\Omega) \to C(\Omega)$, defined for $s \in \Omega$ and $u \in C(\Omega)$ by

\[
((I + \mathcal{K}) u)(s) = u(s) + \int_\Omega K(s, t) u(t)\,dt,
\]

has a Fredholm determinant, which is given by

\[
\det(I + \mathcal{K}) = \sum_{q \in \mathbb{N}_0} \frac{1}{q!} \int_{\Omega^q} K[\mathbf{x}]\,d\mathbf{x}.
\]

Alternatively, we may express the Fredholm determinant directly in terms of the Fredholm function, namely,

\[
\det(I + \mathcal{K}) = D(-1).
\]

Note that the Fredholm determinant of the operator $\mathcal{K}$ is the same as the Fredholm determinant of the operator $\mathcal{K}^*$ corresponding to the adjoint kernel, which is defined for $s, t \in \Omega$ by $K^*(s, t) = K(t, s)$. That is, for any continuous kernel $K$ on the compact set $\Omega$ we have the equation

\[
\det(I + \mathcal{K}) = \det(I + \mathcal{K}^*). \tag{1.25}
\]

The main result of this section is the Fredholm determinant product formula.

Theorem 1.19  If $K, H \in C(\Omega \times \Omega)$, then

\[
\det((I + \mathcal{H})(I + \mathcal{K})) = \det(I + \mathcal{H}) \det(I + \mathcal{K}).
\]

We start our discussion of this product formula by introducing a continuous kernel $L$ associated with two prescribed continuous kernels $K$ and $H$.

Definition 1.20  If $K$ and $H$ are continuous kernels, we define $L$ to be the kernel of the operator

\[
\mathcal{L} = \mathcal{K} + \mathcal{H} + \mathcal{H}\mathcal{K}.
\]

Remark 1.21  The kernel $L$ has the characteristic property that

\[
I + \mathcal{L} = (I + \mathcal{H})(I + \mathcal{K}). \tag{1.26}
\]

There are some special cases of Theorem 1.19 that are readily proven. For example, if $\det(I + \mathcal{K}) = 0$, then $I + \mathcal{K}$ is not one to one. Therefore it follows that $I + \mathcal{L}$ is also not one to one. Consequently, we conclude that $\det(I + \mathcal{L}) = 0$. Similarly, if $\det(I + \mathcal{H}) = 0$, then $\det(I + \mathcal{H}^*) = 0$, from which it follows that $I + \mathcal{H}$ is not onto. Hence the operator $I + \mathcal{L}$ is also not onto, and so $\det(I + \mathcal{L}) = 0$. In other words, in the proof of the above theorem we may assume that the operators $I + \mathcal{L}$, $I + \mathcal{H}$ and $I + \mathcal{K}$ are all invertible.
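The product formula can already be seen at the discrete level: if $\mathbf{A}_K$ and $\mathbf{A}_H$ are the Nyström matrices of $K$ and $H$, then $\mathbf{A}_H + \mathbf{A}_K + \mathbf{A}_H \mathbf{A}_K$ is the discrete analogue of the kernel $L$, and determinant multiplicativity for matrices mirrors Theorem 1.19. The kernels below are illustrative choices of ours.

```python
import numpy as np

n = 300
h = 1.0 / n
t = (np.arange(n) + 0.5) * h
S, T = np.meshgrid(t, t, indexing="ij")
AK = h * np.sin(S + T)          # Nystrom matrix of K(s,t) = sin(s+t)
AH = h * np.exp(-S * T)         # Nystrom matrix of H(s,t) = exp(-s*t)
AL = AH + AK + AH @ AK          # discrete analogue of L = H + K + H*K
lhs = np.linalg.det(np.eye(n) + AL)
rhs = np.linalg.det(np.eye(n) + AH) * np.linalg.det(np.eye(n) + AK)
print(abs(lhs - rhs))  # zero up to rounding
```

Here the identity $(\mathbf{I} + \mathbf{A}_H)(\mathbf{I} + \mathbf{A}_K) = \mathbf{I} + \mathbf{A}_L$ holds exactly, which is the matrix counterpart of (1.26).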

To proceed further we need the following differentiation formula. To this end we define, for $\tau \in \mathbb{R}$ and a continuous kernel $G$, the one-parameter family of kernels $K_\tau = K + \tau G$.

Proposition 1.22  If $K$ and $G$ are continuous kernels, then

\[
\frac{d}{d\tau} \det(I + \mathcal{K}_\tau) \Big|_{\tau=0}
= \det(I + \mathcal{K}) \int_\Omega G(s, s)\,ds - \int_\Omega \left( \int_\Omega R_{-1}(s, t) G(t, s)\,dt \right) ds.
\]

Proof  For each $\mathbf{x} \in \Omega^q$, a straightforward differentiation of the requisite determinant and a Laplace expansion yield the formula

\[
\frac{d}{d\tau} (K_\tau[\mathbf{x}]) \Big|_{\tau=0}
= \sum_{l, k \in \mathbb{N}_q} (-1)^{l+k}\, K\begin{pmatrix} x_1, \dots, x_{k-1}, x_{k+1}, \dots, x_q \\ x_1, \dots, x_{l-1}, x_{l+1}, \dots, x_q \end{pmatrix} G(x_k, x_l). \tag{1.27}
\]

We now integrate both sides of this equation over $\mathbf{x} \in \Omega^q$. The integral on the left-hand side of the equation is clear. As for the right-hand side, we distinguish the summands corresponding to $k = l$ from the remaining summands. All the summands corresponding to $k = l$ have the same integral, given by

\[
\int_{\Omega^{q-1}} K[\mathbf{x}]\,d\mathbf{x} \cdot \int_\Omega G(s, s)\,ds. \tag{1.28}
\]

There are $q(q-1)$ remaining terms, corresponding to the case $k \neq l$. They also have a common integral, which is independent of $k$ and $l$, so that their total contribution is computed to be

\[
- q(q-1) \int_{\Omega^2} \left( \int_{\Omega^{q-2}} K\begin{bmatrix} s, \mathbf{x} \\ t, \mathbf{x} \end{bmatrix} d\mathbf{x} \right) G(t, s)\,ds\,dt.
\]

Therefore, in summary, we obtain that

\[
\int_{\Omega^q} \frac{d}{d\tau} (K_\tau[\mathbf{x}]) \Big|_{\tau=0}\,d\mathbf{x}
= q \int_{\Omega^{q-1}} K[\mathbf{x}]\,d\mathbf{x} \cdot \int_\Omega G(s, s)\,ds
- q(q-1) \int_\Omega \left[ \int_\Omega \left( \int_{\Omega^{q-2}} K\begin{bmatrix} s, \mathbf{x} \\ t, \mathbf{x} \end{bmatrix} d\mathbf{x} \right) G(t, s)\,dt \right] ds. \tag{1.29}
\]

Substituting equation (1.29) into the series expansion

\[
\det(I + \mathcal{K}_\tau) = \sum_{q \in \mathbb{N}_0} \frac{1}{q!} \int_{\Omega^q} K_\tau[\mathbf{x}]\,d\mathbf{x}
\]

and rearranging terms yields the desired formula

\[
\frac{d}{d\tau} \det(I + \mathcal{K}_\tau) \Big|_{\tau=0}
= D(-1) \int_\Omega G(s, s)\,ds - \int_\Omega \left( \int_\Omega R_{-1}(s, t) G(t, s)\,dt \right) ds. \;\square
\]

Remark 1.23  When $\det(I + \mathcal{K}) \neq 0$, we may use equation (1.20) to express the formula in Proposition 1.22 in the equivalent form

\[
\frac{d}{d\tau} \log \det(I + \mathcal{K}_\tau) \Big|_{\tau=0} = \int_\Omega W(s, s)\,ds, \tag{1.30}
\]

where $W$ is the continuous kernel corresponding to the integral operator

\[
\left( I - D^{-1}(-1)\, \mathcal{R}_{-1} \right) \mathcal{G}.
\]

Indeed, by Proposition 1.22 we have that

\[
\begin{aligned}
\frac{d}{d\tau} \log \det(I + \mathcal{K}_\tau) \Big|_{\tau=0}
&= \frac{\frac{d}{d\tau} \det(I + \mathcal{K}_\tau) \big|_{\tau=0}}{\det(I + \mathcal{K})} \\
&= \int_\Omega G(s, s)\,ds - D^{-1}(-1) \int_\Omega \left( \int_\Omega R_{-1}(s, t) G(t, s)\,dt \right) ds \\
&= \int_\Omega W(s, s)\,ds,
\end{aligned}
\]

where

\[
W(s, s) = G(s, s) - D^{-1}(-1) \int_\Omega R_{-1}(s, t) G(t, s)\,dt.
\]

Moreover, for any $u \in C(\Omega)$ and $s \in \Omega$, we have that

\[
\begin{aligned}
\int_\Omega W(s, t) u(t)\,dt
&= \int_\Omega \left\{ G(s, t) - D^{-1}(-1) \int_\Omega R_{-1}(s, t') G(t', t)\,dt' \right\} u(t)\,dt \\
&= \int_\Omega G(s, t) u(t)\,dt - D^{-1}(-1) \int_\Omega \int_\Omega R_{-1}(s, t') G(t', t) u(t)\,dt'\,dt \\
&= \left( \left( (I - D^{-1}(-1)\, \mathcal{R}_{-1}) \mathcal{G} \right) u \right)(s),
\end{aligned}
\]

which ensures the desired result. By Theorem 1.14 we can rewrite this integral operator in the equivalent, simpler form $(I + \mathcal{K})^{-1} \mathcal{G}$. In order to make it easy to recall the association of the kernel $W$ with this integral operator, we express equation (1.30) in the notationally suggestive form

\[
\frac{d}{d\tau} \log \det(I + \mathcal{K}_\tau) \Big|_{\tau=0} = \int_\Omega \left[ (I + \mathcal{K})^{-1} \mathcal{G} \right](s, s)\,ds. \tag{1.31}
\]

As we already pointed out in equation (1.25), the Fredholm determinants of $\mathcal{K}$ and $\mathcal{K}^*$ are the same. Hence, since $\mathcal{K}_\tau^* = \mathcal{K}^* + \tau \mathcal{G}^*$, we also have that

\[
\frac{d}{d\tau} \log \det(I + \mathcal{K}_\tau) \Big|_{\tau=0} = \int_\Omega \left[ (I + \mathcal{K}^*)^{-1} \mathcal{G}^* \right](s, s)\,ds. \tag{1.32}
\]

We are now ready to prove Theorem 1.19.

Proof  We consider the two one-parameter perturbations $\mathcal{K}_\tau = \mathcal{K} + \tau \mathcal{K}_0$ and $\mathcal{H}_\tau = \mathcal{H} + \tau \mathcal{H}_0$, and set

\[
I + \mathcal{L}_\tau = (I + \mathcal{H}_\tau)(I + \mathcal{K}_\tau).
\]

Note that

\[
\mathcal{L}_\tau = \mathcal{L} + \tau \mathcal{L}_0 + O(\tau^2), \qquad \tau \to 0^+, \tag{1.33}
\]

where $\mathcal{L}$ is defined in Definition 1.20 and $\mathcal{L}_0$ is given by the formula

\[
\mathcal{L}_0 = (I + \mathcal{H}) \mathcal{K}_0 + \mathcal{H}_0 (I + \mathcal{K}). \tag{1.34}
\]

Combining equations (1.31), (1.32), (1.33) and (1.34), we obtain that

\[
\frac{d}{d\tau} \log \det(I + \mathcal{L}_\tau) \Big|_{\tau=0}
= \int_\Omega \left[ (I + \mathcal{L})^{-1} (I + \mathcal{H}) \mathcal{K}_0 \right](s, s)\,ds
+ \int_\Omega \left[ (I + \mathcal{L}^*)^{-1} (I + \mathcal{K}^*) \mathcal{H}_0^* \right](s, s)\,ds.
\]

But we also have that

\[
(I + \mathcal{L})^{-1} = (I + \mathcal{K})^{-1} (I + \mathcal{H})^{-1}
\]

and

\[
(I + \mathcal{L}^*)^{-1} = (I + \mathcal{H}^*)^{-1} (I + \mathcal{K}^*)^{-1}.
\]

Therefore we get that

\[
\begin{aligned}
\frac{d}{d\tau} \log \det(I + \mathcal{L}_\tau) \Big|_{\tau=0}
&= \int_\Omega \left[ (I + \mathcal{H}^*)^{-1} \mathcal{H}_0^* \right](s, s)\,ds + \int_\Omega \left[ (I + \mathcal{K})^{-1} \mathcal{K}_0 \right](s, s)\,ds \\
&= \frac{d}{d\tau} \log \det(I + \mathcal{H}_\tau) \Big|_{\tau=0} + \frac{d}{d\tau} \log \det(I + \mathcal{K}_\tau) \Big|_{\tau=0}.
\end{aligned}
\]

Let $\gamma : [0, 1] \to \mathbb{R}$ be the function defined at $\tau \in [0, 1]$ as

\[
\gamma(\tau) = \log \det(I + \mathcal{L}_\tau) - \log\{\det(I + \mathcal{H}_\tau) \det(I + \mathcal{K}_\tau)\}.
\]

The above formula means that $\gamma'(0) = 0$. Clearly, we can derive the same conclusion along any continuously varying paths $\mathcal{H}(\tau)$ and $\mathcal{K}(\tau)$, as long as $\mathcal{H}(0) = \mathcal{H}$ and $\mathcal{K}(0) = \mathcal{K}$. Moreover, we can also show that the derivative of $\gamma$ is zero at any point $\mu$ in $(0, 1)$, provided that both the integral operators $I + \mathcal{H}(\mu)$ and $I + \mathcal{K}(\mu)$ have inverses.

Next we choose any continuously differentiable path $\sigma : [0, 1] \to [0, 1]$ such that $\sigma(0) = 0$ and $\sigma(1) = 1$, and, for $\tau \in [0, 1]$ satisfying $\det(I + \tau \mathcal{H}) \neq 0$ and $\det(I + \tau \mathcal{K}) \neq 0$, we introduce the operators $\mathcal{H}_\tau = \sigma(\tau) \mathcal{H}$ and $\mathcal{K}_\tau = \sigma(\tau) \mathcal{K}$. Consequently, we conclude by our previous remarks that the function $\gamma$ defined above must be constant on the interval $[0, 1]$. This, together with the facts that $\gamma(0) = 0$ and $\gamma(1) = \log \det(I + \mathcal{L}) - \log\{\det(I + \mathcal{H}) \det(I + \mathcal{K})\}$, completes the proof. □

1.6 Eigenvalue estimates and a trace formula

We now turn our attention to a new topic and provide an estimate of the growth of the zeros of $D$ in the complex plane. We accomplish this only when $\Omega = [0, 1]$ and the kernel $K$ has the property that there are constants $\rho > 0$ and $\alpha \in (0, 1]$ such that for all $u, s, t \in [0, 1]$,

\[
|K(u, s) - K(u, t)| \le \rho |s - t|^\alpha.
\]

When this is the case, we say that $K$ is Hölder continuous of order $\alpha$ with constant $\rho$ (with respect to the second variable). For Hölder continuous kernels of order $\alpha$ we can provide a better estimate for the Fredholm minor than that given in Lemma 1.7.

Lemma 1.24  If $K$ is Hölder continuous of order $\alpha$ with constant $\rho$, then for any $\mathbf{x}, \mathbf{y} \in [0, 1]^q$,

\[
\left| K\begin{bmatrix} \mathbf{x} \\ \mathbf{y} \end{bmatrix} \right| \le 4^\alpha (q^q)^{1/2 - \alpha} \rho^q.
\]

Proof First we reorder the columns of the minor appearing on the left-hand side of the inequality so that the components of the vector $y = [y_i : i \in \mathbb{N}_q]$ are increasing. Next, for each $i \in \mathbb{N}_{q-1}$, we subtract the $(i+1)$th column from the $i$th column in the above minor (which does not alter its value), use the Hadamard inequality and then the hypothesis on the kernel to obtain the inequality
\[ \left| K\!\begin{bmatrix} x \\ y \end{bmatrix} \right| \le (\rho\sqrt{q})^q \prod_{i\in\mathbb{N}_{q-1}} |y_{i+1}-y_i|^\alpha. \]

We now apply the arithmetic--geometric mean inequality to obtain the inequalities
\[ \left| K\!\begin{bmatrix} x \\ y \end{bmatrix} \right| \le (\rho\sqrt{q})^q \left( \frac{1}{q-1} \sum_{i\in\mathbb{N}_{q-1}} (y_{i+1}-y_i) \right)^{(q-1)\alpha} \le 4^\alpha (q^q)^{1/2-\alpha}\rho^q, \]
thereby confirming the desired bound.
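The Hadamard inequality invoked in this proof, namely that $|\det A|$ is at most the product of the Euclidean norms of the columns of $A$, is easy to test numerically. The sketch below (plain Python with a naive Gaussian-elimination determinant; the model kernel $K(s,t) = |s-t|^\alpha$ and the sizes are illustrative choices, not taken from the text) checks the inequality for a small minor.

```python
import random

def det(a):
    """Determinant by Gaussian elimination with partial pivoting."""
    a = [row[:] for row in a]
    n, d = len(a), 1.0
    for k in range(n):
        p = max(range(k, n), key=lambda i: abs(a[i][k]))  # pivot row
        if abs(a[p][k]) < 1e-300:
            return 0.0
        if p != k:
            a[k], a[p] = a[p], a[k]
            d = -d
        d *= a[k][k]
        for i in range(k + 1, n):
            m = a[i][k] / a[k][k]
            for j in range(k, n):
                a[i][j] -= m * a[k][j]
    return d

random.seed(0)
alpha, q = 0.7, 4
K = lambda s, t: abs(s - t) ** alpha            # Hoelder-continuous model kernel
x = [random.random() for _ in range(q)]
y = sorted(random.random() for _ in range(q))
minor = [[K(xi, yj) for yj in y] for xi in x]   # the q x q minor K[x; y]
hadamard = 1.0
for j in range(q):                              # product of column norms
    hadamard *= sum(minor[i][j] ** 2 for i in range(q)) ** 0.5
print(abs(det(minor)) <= hadamard + 1e-12)
```

The column-difference step in the proof sharpens this generic bound by making $q-1$ of the columns small when the kernel is Hölder continuous.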

Remark 1.25 The above lemma holds if $K$ is a Hölder continuous kernel of order $\alpha$ with respect to the first variable rather than the second variable.

Note that for the next result we introduce the constant
\[ \mu = \frac{2}{1+2\alpha}. \]
For $\alpha \in (\tfrac12, 1]$ we have that $\mu \in [\tfrac23, 1)$.

Theorem 1.26 If $K(s,t)$ is a Hölder continuous kernel of order $\alpha \in (\tfrac12, 1]$ with respect to either $s$ or $t$ in $[0,1]$, then for every $\varepsilon > 0$ there is a constant $a > 0$ such that for all $\lambda \in \mathbb{C}$,
\[ |D(\lambda)| \le a\, e^{|\lambda|^{\mu+\varepsilon}}. \]

Proof To prove this theorem, we first note that any polynomial appearing in the proof of this result can be neglected, because any polynomial is surely dominated in the complex plane by an exponential.

For every $\lambda \in \mathbb{C}$ we have that
\[ |D(\lambda)| \le 4^\alpha \sum_{m\in\mathbb{N}_0} a_m |\lambda|^m, \]
where we define the constant
\[ a_m = \frac{\rho^m (m^m)^{1/2-\alpha}}{m!}. \]


This constant satisfies, by the inequality $m! \ge (m/e)^m$ and the identity $1/\mu = 1/2 + \alpha$, the inequality
\[ a_m \le \left( \frac{\rho e}{m^{1/\mu - 1/(\mu+\varepsilon)}} \right)^m \frac{1}{m^{m/(\mu+\varepsilon)}}. \tag{1.35} \]

Now, for $m$ large enough, the expression in parentheses on the right-hand side of inequality (1.35) can be made less than one. Hence, as we can ignore polynomials of a fixed degree in $|\lambda|$, we assume without loss of generality that it suffices to bound from above the power series
\[ \sum_{m\in\mathbb{N}} \frac{|\lambda|^m}{m^{m/(\mu+\varepsilon)}}. \]

Certainly the series above is bounded for $|\lambda| \le 1$, and so we only have to consider it for $|\lambda| > 1$. In that case we break the sum above into two parts: for the first part we sum over positive integers $m < (2|\lambda|)^{\mu+\varepsilon}$, and for the second part over the remaining integers. For the first sum we have that
\[ \sum_{m < (2|\lambda|)^{\mu+\varepsilon}} \frac{|\lambda|^m}{m^{m/(\mu+\varepsilon)}} \le c\,|\lambda|^{(2|\lambda|)^{\mu+\varepsilon}}, \]
where
\[ c = \sum_{m\in\mathbb{N}} m^{-m/(\mu+\varepsilon)} < \infty. \]

Since
\[ \lim_{|\lambda|\to\infty} |\lambda|^{-\varepsilon} \log|\lambda| = 0, \]
we conclude that there is a constant $a > 0$ such that
\[ |\lambda|^{(2|\lambda|)^{\mu+\varepsilon}} \le a\, e^{|\lambda|^{\mu+2\varepsilon}}. \]

For the other summands, corresponding to $m \ge (2|\lambda|)^{\mu+\varepsilon}$, we have that
\[ \frac{|\lambda|^m}{m^{m/(\mu+\varepsilon)}} \le 2^{-m}, \]
and so we conclude that
\[ \sum_{m \ge (2|\lambda|)^{\mu+\varepsilon}} \frac{|\lambda|^m}{m^{m/(\mu+\varepsilon)}} \le \sum_{m\in\mathbb{N}} 2^{-m} < \infty. \]

This theorem says, under the hypothesis on the kernel $K$, that $D$ is an entire function of order less than or equal to $\mu$.


Corollary 1.27 If $\lambda_n$, $n \in \mathbb{N}$, are the zeros of $D$ in $\mathbb{C}$, ordered so that $0 < |\lambda_1| \le |\lambda_2| \le \cdots \le |\lambda_n| \le \cdots$, and $K(s,t)$ is a Hölder continuous kernel in either $s$ or $t$ on $[0,1]$ with exponent $\alpha \in (\tfrac12, 1]$, then
\[ \sum_{n\in\mathbb{N}} |\lambda_n|^{-1} < \infty. \]

The proof of this corollary relies on standard techniques in the study of entire functions. We briefly review some of the details.

The first step is to recall Jensen's formula (see, for example, [2], p. 208). If $\rho > 0$, $f$ is a function analytic in the disc $\{z : |z| < \rho\}$ and continuous on the boundary, with a finite number of zeros $a_j$, $j \in \mathbb{N}_m$, in the closed disc $\{z : |z| \le \rho\}$, then
\[ \log|f(0)| + \sum_{j\in\mathbb{N}_m} \log\frac{\rho}{|a_j|} = \frac{1}{2\pi} \int_0^{2\pi} \log|f(\rho e^{i\theta})|\, d\theta. \tag{1.36} \]
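Jensen's formula (1.36) is easy to verify numerically for a polynomial whose zeros are known. The sketch below (plain Python; the polynomial, its zeros and the radius are arbitrary illustrative choices) compares the left-hand side with a Riemann-sum approximation of the boundary integral.

```python
import cmath, math

zeros = [0.3 + 0.4j, -0.5 + 0.2j, 0.8j]      # the zeros a_j of f, all inside |z| < rho
rho = 2.0

def f(z):                                    # f(z) = prod (z - a_j); note f(0) != 0
    w = 1.0 + 0.0j
    for a in zeros:
        w *= (z - a)
    return w

# Left-hand side of (1.36): log|f(0)| + sum_j log(rho / |a_j|).
lhs = math.log(abs(f(0))) + sum(math.log(rho / abs(a)) for a in zeros)

# Right-hand side: average of log|f| over the circle |z| = rho; for a smooth
# periodic integrand the equispaced Riemann sum converges very fast.
n = 20000
rhs = sum(math.log(abs(f(rho * cmath.exp(1j * 2 * math.pi * k / n))))
          for k in range(n)) / n
print(abs(lhs - rhs) < 1e-6)
```

The same check fails, as it should, if a zero with $|a_j| > \rho$ is included in the sum, which is why only the zeros inside the disc enter (1.36).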

We apply Jensen's formula to the function $D$ in the following way. We let $\nu(\rho)$ be the number of zeros of $D$ (counting multiplicities) in the disc $\{z : |z| < \rho\}$. Recall that $|\lambda_1| \le |\lambda_2| \le \cdots$, and so it follows that
\[ n \le \nu(|\lambda_n|). \tag{1.37} \]

Now, according to Jensen's formula applied to the function $D$ on the disc $\{z : |z| < 2\rho\}$ (the term $\log|D(0)|$ vanishes because $D(0) = 1$), we have that
\[ \sum_{j\in\mathbb{N}_k} \log\frac{2\rho}{|\lambda_j|} = \frac{1}{2\pi} \int_0^{2\pi} \log|D(2\rho e^{i\theta})|\, d\theta, \tag{1.38} \]

where $\lambda_j$, $j \in \mathbb{N}_k$, are the zeros of $D$ in that disc. Since $|\lambda_j| \le 2\rho$ for $j \in \mathbb{N}_k$, each summand on the left-hand side of equation (1.38) is non-negative. We neglect the terms corresponding to the zeros of $D$ with $\rho < |\lambda_j| \le 2\rho$, thereby obtaining the inequality
\[ \nu(\rho)\log 2 < \frac{1}{2\pi} \int_0^{2\pi} \log|D(2\rho e^{i\theta})|\, d\theta, \tag{1.39} \]
since each of the $\nu(\rho)$ remaining summands is at least $\log 2$.

Now choose $\varepsilon > 0$ so that $\mu + \varepsilon < 1$ (recall that when $\alpha \in (\tfrac12, 1]$ we have that $\mu < 1$). Next we use the estimate for $D(\lambda)$ in Theorem 1.26 and get that
\[ \nu(\rho)\log 2 \le \log a + (2\rho)^{\mu+\varepsilon}. \tag{1.40} \]

Consequently, for sufficiently large $m$ there is a positive constant $c$ such that for $n \ge m$ we have $|\lambda_n| \ge 1$ and
\[ \nu(|\lambda_n|) \le c\,|\lambda_n|^{\mu+\varepsilon}. \]


According to the inequality (1.37), this inequality implies for $n \ge m$ that
\[ n^{1/(\mu+\varepsilon)} \le c\,|\lambda_n|. \]
In other words, for some other positive constant $b > 0$ we have that
\[ \sum_{n\in\mathbb{N}} |\lambda_n|^{-1} \le b \sum_{n\in\mathbb{N}} n^{-1/(\mu+\varepsilon)} < \infty, \]
because $1/(\mu+\varepsilon) > 1$.

Theorem 1.28 If $K(s,t)$ is a Hölder continuous kernel on $[0,1]$ in either $s$ or $t$ with exponent $\alpha \in (\tfrac12, 1]$, then
\[ D(\lambda) = \prod_{n\in\mathbb{N}} \left( 1 - \frac{\lambda}{\lambda_n} \right). \]

Proof According to the above corollary, we see that the right-hand side of the above equation is an entire function of $\lambda \in \mathbb{C}$. Moreover, the function $h$ defined at $\lambda$ as
\[ h(\lambda) = \frac{D(\lambda)}{\prod_{n\in\mathbb{N}} \left( 1 - \dfrac{\lambda}{\lambda_n} \right)} \]
is free of zeros in $\mathbb{C}$. Therefore, by the Weierstrass factorization theorem, there is an entire function $g$ such that for all $\lambda \in \mathbb{C}$,
\[ D(\lambda) = e^{g(\lambda)} \prod_{n\in\mathbb{N}} \left( 1 - \frac{\lambda}{\lambda_n} \right). \tag{1.41} \]

We now show that $g$ is a constant; since $D(0) = 1$, the result will then follow. Therefore the last remaining task is to show that $g$ is constant. To this end we need the following version of the Poisson--Jensen formula (see, for example, [2], p. 208). Specifically, if $f$ is a function analytic in the disc $\{w : |w| < \rho\}$ with only the zeros $a_j$, $j \in \mathbb{N}_m$, there, and $|z| < \rho$, then
\[ \log|f(z)| = -\sum_{j\in\mathbb{N}_m} \log\left| \frac{\rho^2 - \bar{a}_j z}{\rho(z - a_j)} \right| + \frac{1}{2\pi} \int_0^{2\pi} \mathrm{Re}\,\frac{\rho e^{i\theta} + z}{\rho e^{i\theta} - z}\, \log|f(\rho e^{i\theta})|\, d\theta. \tag{1.42} \]

We shall first differentiate both sides of this equation with respect to $z$ appropriately, and then examine the behavior of each resulting term on the right-hand side when $f = D$ and $\rho \to \infty$. For the process of differentiation we recall the following lemma.


Lemma 1.29 If $f$ is analytic in a neighborhood of $z = x + iy$, then, with $\partial/\partial z$ denoting the operator $\partial/\partial x - i\,\partial/\partial y$,
\[ \frac{\partial}{\partial z}\log|f(z)| = \frac{f'(z)}{f(z)} \quad\text{and}\quad \frac{\partial}{\partial z}\,\mathrm{Re}\,f(z) = f'(z). \]

To prove this lemma, we write $f(z) = u + iv$; then, by a direct application of the Cauchy--Riemann equations, the result may be verified.
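Both identities of the lemma can be checked by central finite differences, taking the operator to be $\partial/\partial x - i\,\partial/\partial y$ (the convention under which they hold without extra constant factors). The test function below is an arbitrary illustrative choice.

```python
import cmath, math

f  = lambda z: cmath.exp(z) * (z - 2)            # analytic test function
fp = lambda z: cmath.exp(z) * (z - 1)            # its derivative f'(z)
z, h = 0.3 + 0.4j, 1e-6

def dz(g):
    """Apply (d/dx - i d/dy) to a real-valued g at the point z."""
    gx = (g(z + h) - g(z - h)) / (2 * h)
    gy = (g(z + 1j * h) - g(z - 1j * h)) / (2 * h)
    return gx - 1j * gy

lhs1 = dz(lambda w: math.log(abs(f(w))))         # d/dz of log|f|
lhs2 = dz(lambda w: f(w).real)                   # d/dz of Re f
print(abs(lhs1 - fp(z) / f(z)) < 1e-4,
      abs(lhs2 - fp(z)) < 1e-4)
```

The same computation with the Wirtinger convention $\tfrac12(\partial_x - i\partial_y)$ would produce an extra factor $\tfrac12$ on the left-hand sides, which is why the convention matters for matching (1.43) below.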

We now fix $z$, choose $\rho > 2|z|$, and apply the derivative operator $\partial/\partial z$ to both sides of equation (1.42) for the choice $f = D$ to get the equation
\[ \frac{D'(z)}{D(z)} = \sum_{j\in\mathbb{N}_n} \frac{\bar{\lambda}_j}{\rho^2 - \bar{\lambda}_j z} + \sum_{j\in\mathbb{N}_n} \frac{1}{z - \lambda_j} + \frac{1}{\pi} \int_0^{2\pi} \rho e^{i\theta} (\rho e^{i\theta} - z)^{-2} \log|D(\rho e^{i\theta})|\, d\theta. \tag{1.43} \]

We first estimate the integral. Since $\rho > 2|z|$, we know that $|\rho e^{i\theta} - z| \ge \rho/2$, which together with Theorem 1.26 yields that
\[ \left| \rho e^{i\theta} (\rho e^{i\theta} - z)^{-2} \log|D(\rho e^{i\theta})| \right| \le \rho\,|\rho e^{i\theta} - z|^{-2} \left( \log a + |\rho e^{i\theta}|^{\mu+\varepsilon} \right) \le \frac{4}{\rho} \left( \log a + \rho^{\mu+\varepsilon} \right), \]

and thus the absolute value of the third term on the right-hand side of equation (1.43) is bounded by $\frac{8}{\rho}\left( \log a + \rho^{\mu+\varepsilon} \right)$, which tends to zero as $\rho \to \infty$. For the first sum on the right-hand side of equation (1.43), we note that
\[ |\rho^2 - \bar{\lambda}_j z| \ge \rho^2 - \rho|z| \ge \frac{\rho^2}{2}. \]
Therefore we see that the first sum on the right-hand side of equation (1.43) is bounded by $2\rho^{-1}\nu(\rho)$, which according to inequality (1.40) also goes to zero as $\rho \to \infty$. Consequently, equation (1.43) yields, as $\rho \to \infty$, the equation
\[ \frac{D'(z)}{D(z)} = \sum_{j\in\mathbb{N}} \frac{1}{z - \lambda_j}. \tag{1.44} \]

However, we also have that
\[ D(z) = e^{g(z)} \prod_{j\in\mathbb{N}} \left( 1 - \frac{z}{\lambda_j} \right). \tag{1.45} \]


Now we differentiate both sides of equation (1.45) to obtain that
\[ \frac{D'(z)}{D(z)} = g'(z) + \sum_{j\in\mathbb{N}} \frac{1}{z - \lambda_j}. \tag{1.46} \]

Comparing equations (1.44) and (1.46) leads to the desired conclusion that $g$ is constant, thereby completing the proof of the theorem.

Our discussion in this chapter indicates that a Hölder continuous kernel $K(s,t)$ on $[0,1]$ in either $s$ or $t$ with exponent $\alpha \in (\tfrac12, 1]$ has eigenvalues which are $\ell^1$-summable. We let $\mu_n$, $n \in \mathbb{N}$, be the nonzero eigenvalues of $\mathcal{K}$, so that $\lambda_n = 1/\mu_n$. We then have the following result.

Theorem 1.30 If $K(s,t)$ is a Hölder continuous kernel on $[0,1]$ in either $s$ or $t$ with exponent $\alpha \in (\tfrac12, 1]$, then
\[ \int_{[0,1]} K(s,s)\, ds = \sum_{n\in\mathbb{N}} \mu_n \]
and
\[ \det(I + \mathcal{K}) = \prod_{n\in\mathbb{N}} (1 + \mu_n). \]

Proof First we prove the second equality. Indeed, we have that
\[ \det(I + \mathcal{K}) = D(-1) = \prod_{n\in\mathbb{N}} \left( 1 + \lambda_n^{-1} \right) = \prod_{n\in\mathbb{N}} (1 + \mu_n). \]

Next we show the first equality. On the one hand, we know that
\[ \frac{D'(z)}{D(z)} = \sum_{j\in\mathbb{N}} \frac{1}{z - \lambda_j}, \]
which in turn yields
\[ D'(0) = D(0) \sum_{j\in\mathbb{N}} \left( -\frac{1}{\lambda_j} \right) = -\sum_{j\in\mathbb{N}} \mu_j. \]

On the other hand, by the power series expansion of $D$, we have that
\[ D'(\lambda) = \sum_{q\in\mathbb{N}_0} (-1)^{q+1} \frac{\lambda^q}{q!} \int_{[0,1]^{q+1}} K[x]\, dx, \]
and hence it follows that
\[ D'(0) = -\int_{[0,1]} K(s,s)\, ds. \]
Combining the above two aspects, we obtain the desired result.
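The identities of Theorem 1.30 have exact finite-dimensional counterparts: for a Nyström (quadrature) discretization $A$ of $\mathcal{K}$, the sum of the eigenvalues of $A$ equals its trace, which in turn approximates $\int K(s,s)\,ds$. The sketch below (plain Python, midpoint rule; the smooth kernel $e^{s+t}$ is an illustrative choice, not from the text) checks the trace part without computing any eigenvalues.

```python
import math

K = lambda s, t: math.exp(s + t)           # smooth model kernel on [0,1]^2
n = 400
nodes = [(i + 0.5) / n for i in range(n)]  # midpoint-rule quadrature points

# The Nystrom matrix is A[i][j] = w_j * K(s_i, s_j) with weights w_j = 1/n.
# Its eigenvalues approximate the eigenvalues mu_n of the integral operator,
# and sum(eigenvalues) = trace(A) exactly, so the discrete trace formula is
# trace(A) = (1/n) * sum_i K(s_i, s_i) ~ integral of K(s,s) ds.
trace_A = sum(K(s, s) / n for s in nodes)
exact = (math.exp(2) - 1) / 2              # integral of e^{2s} over [0,1]
print(abs(trace_A - exact) < 1e-4)
```

Note that this only illustrates the plausibility of the trace formula; the theorem itself requires the Hölder hypothesis to guarantee that the eigenvalue series converges.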


This completes our brief discussion of the classical theory of Fredholm integral equations. We next turn our attention to further essential background material on integral equations, and postpone until Chapter 4 the main subject of this book, namely multiscale methods for the numerical solution of Fredholm integral equations.

1.7 Bibliographical remarks

Most of the material presented in this chapter is taken from the book [183]. For the important notion of the Fredholm function and the Fredholm determinant, readers are referred to [183, 253]. Readers may find a discussion of the distribution of the eigenvalues of the Fredholm integral operator in [141]. In addition, [249] is a nice reference for the topic of integral equations: three fundamental papers on integral equations, written by three outstanding mathematicians (Ivar Fredholm (1903), David Hilbert (1904) and Erhard Schmidt (1907)) and published in the first decade of the twentieth century, are translated there into English with commentaries.


2

Fredholm equations and projection theory

In this chapter we provide concepts and results useful for the study of the Fredholm integral equation of the second kind, especially the theory of projection methods needed for the development of the multiscale methods.

2.1 Fredholm integral equations

The main concern of this book is the Fredholm integral equation of the second kind. In this section we review concepts relevant to the study of this class of integral equations. The general form for a Fredholm integral equation of the second kind is
\[ u - \mathcal{K}u = f, \tag{2.1} \]

where the linear operator $\mathcal{K}$ is defined on a normed linear space with values in another such space, the function $f$ is given, and $u$ is a solution to be determined. Typically these spaces consist of real- or complex-valued functions on a measurable subset $\Omega$ of the $d$-dimensional Euclidean space $\mathbb{R}^d$. The function $\mathcal{K}u$ is determined by a kernel $K(s,t)$, $s,t \in \Omega$, and the Fredholm integral operator is defined by the formula
\[ (\mathcal{K}u)(s) = \int_\Omega K(s,t)u(t)\, dt, \quad s \in \Omega, \tag{2.2} \]
or, in short, by
\[ (\mathcal{K}u)(s) = \int_\Omega K(s,\cdot)u(\cdot), \quad s \in \Omega. \]
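For computation, the action (2.2) is typically approximated by a quadrature rule. A minimal sketch in plain Python (midpoint rule on $\Omega = [0,1]$; the separable kernel $K(s,t) = st$ and the function $u \equiv 1$ are illustrative choices, for which $(\mathcal{K}u)(s) = s/2$ exactly):

```python
def apply_K(K, u, s, n=1000):
    """Midpoint-rule approximation of (Ku)(s) = integral_0^1 K(s,t) u(t) dt."""
    return sum(K(s, (j + 0.5) / n) * u((j + 0.5) / n) for j in range(n)) / n

K = lambda s, t: s * t      # illustrative kernel
u = lambda t: 1.0
s = 0.7
print(abs(apply_K(K, u, s) - s / 2) < 1e-6)   # exact value is s/2
```

Discretizations of this type (Nyström methods and their multiscale refinements) are the computational theme of the later chapters.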


Properties of the operator $\mathcal{K}$ are inherited from those of the kernel $K$. We shall restrict ourselves to kernels for which $\mathcal{K}$ is a compact operator, that is, it maps bounded subsets to relatively compact ones. Let us postpone the discussion of operators of this type until we describe our preferred notation and terminology.

For $d \in \mathbb{N}$, let $\mathbb{R}^d$ be the $d$-dimensional Euclidean space and $\Omega$ a domain (open set) in $\mathbb{R}^d$. By $C^m(\Omega)$, $m \in \mathbb{N}_0$, we mean the linear space of all real-valued functions defined on $\Omega$ which are $m$-times continuously differentiable there; that is, all derivatives up to and including all those of total order $m$ are continuous on $\Omega$. Therefore the space of infinitely differentiable functions on $\Omega$ is given by $C^\infty(\Omega) = \bigcap_{m\in\mathbb{N}_0} C^m(\Omega)$. We use $\bar{\Omega}$ for the closure of $\Omega$, and denote by $C^m(\bar{\Omega})$ the subspace of all functions which, together with all their derivatives up to order $m$, are bounded and uniformly continuous on the closure of $\Omega$. The simplified notational conventions $C(\Omega)$ and $C(\bar{\Omega})$ for the spaces $C^0(\Omega)$ and $C^0(\bar{\Omega})$, respectively, will be used throughout the book. For a lattice vector $\alpha \in \mathbb{N}_0^d$ with non-negative coordinates, we use the notation $|\alpha| = \sum_{i\in\mathbb{N}_d} \alpha_i$. Corresponding to any vector $t = [t_j : j \in \mathbb{N}_d] \in \mathbb{R}^d$, we denote the $|\alpha|$-derivative of a function $u$ at $t$ (when it exists) by
\[ D^\alpha u(t) = \frac{\partial^{|\alpha|} u(t)}{\partial t_1^{\alpha_1} \cdots \partial t_d^{\alpha_d}}. \]
When the set $\bar{\Omega}$ is compact, the linear space $C^m(\bar{\Omega})$, $m \in \mathbb{N}_0$, is a Banach space with the norm
\[ \|u\|_{m,\infty} = \max\{ |D^\alpha u(t)| : t \in \bar{\Omega},\ \alpha \in \mathbb{Z}_{m+1}^d,\ |\alpha| \le m \}, \]

and for $m = 0$ we simply denote it by $\|u\|_\infty$.

For a Lebesgue measurable subset $\Omega$ of $\mathbb{R}^d$, the linear space of all real-valued functions defined on $\Omega$ for which their $p$th powers, $1 \le p < \infty$, are integrable is denoted by $L^p(\Omega)$. Unless stated otherwise, all integrals are taken in the sense of Lebesgue integration. Likewise, we use $L^\infty(\Omega)$ for the linear space of all real-valued essentially bounded (that is, bounded except on a set of measure zero) measurable functions. Moreover, $L^p(\Omega)$ is a Banach space with the norm
\[ \|u\|_p = \begin{cases} \left( \displaystyle\int_\Omega |u(t)|^p\, dt \right)^{1/p}, & 1 \le p < \infty, \\[6pt] \inf\{ \sup\{ |u(t)| : t \in \Omega\setminus E \} : \mathrm{meas}(E) = 0 \}, & p = \infty. \end{cases} \]
In the special case $p = 2$, $L^2(\Omega)$ is a Hilbert space equipped with the inner product
\[ (u,v) = \int_\Omega u(t)v(t)\, dt, \quad u,v \in L^2(\Omega). \]


We remark that the integral of an integrable function $f$ over a measurable set $\Omega \subseteq \mathbb{R}^d$ with respect to the Lebesgue measure will be denoted by $\int_\Omega f(t)\, dt$, or in short by $\int_\Omega f$ when there is no independent variable indicated and no confusion can be expected. Additional details about the spaces can be found in standard texts, for example [1, 183, 236, 276].

2.1.1 Compact integral operators

Compact operators play a central role in the theory of integral equations and hence are frequently discussed in that context. We review some of their important properties here, which from time to time we shall refer to in the text. Readers are referred to [10, 15, 47, 177, 183, 203, 236] for additional details on the material reviewed here, and to the Appendix of this book for basic elements of functional analysis.

We use the symbol $B(X,Y)$ for the normed linear space of all bounded linear operators from a normed linear space $X$ into a normed linear space $Y$, with operator norm $\|A\| = \sup\{ \|Ax\| : x \in X,\ \|x\| \le 1 \}$. When $Y$ is a Banach space then so is $B(X,Y)$, and in the case that $X = Y$ we denote it simply by $B(X)$. Convergence of a sequence of operators $A_n$, $n \in \mathbb{N}$, in $B(X,Y)$ to an operator $A \in B(X,Y)$ relative to the operator norm is said to be uniform convergence and will be denoted by $A_n \xrightarrow{u} A$. This notion of convergence differs from the weaker requirement that for all $x \in X$ we have that $\lim_{n\to\infty} A_n x = Ax$. This is called pointwise convergence and denoted by $A_n \xrightarrow{s} A$.

The normed linear space $B(X,\mathbb{R})$ is called the dual space of $X$ and is denoted by $X^*$. The dual space of a normed linear space is always a Banach space and consists of all continuous linear functionals on $X$. For every $x \in X$ and $\ell \in X^*$ we use the familiar bracket notation $\langle \ell, x \rangle$ (or $\langle x, \ell \rangle$) $= \ell(x)$, and associated with any operator $A \in B(X,Y)$ its adjoint operator $A^* \in B(Y^*, X^*)$ is defined for all $x \in X$, $\ell \in Y^*$ by
\[ \langle \ell, Ax \rangle = \langle A^*\ell, x \rangle. \]
An operator and its adjoint have the same norm, that is, $\|A\| = \|A^*\|$. When $X$ is a Hilbert space, the Riesz representation theorem (see Theorem A.31 in the Appendix) can be used to identify $X$ with its dual space. In that case the adjoint of any $A \in B(X)$ is likewise in $B(X)$, and $A$ is said to be self-adjoint whenever $A = A^*$.

Definition 2.1 A linear operator $\mathcal{K}$ from a normed linear space $X$ to a normed linear space $Y$ is said to be compact if it maps each bounded set in $X$ to a relatively compact set in $Y$.


It follows from the definition of a compact operator that $A \in B(X,Y)$ is compact if there is a sequence of compact operators $A_n$, $n \in \mathbb{N}$, in $B(X,Y)$ which converges uniformly to it, that is, $A_n \xrightarrow{u} A$.

We now describe some examples of compact operators which are relevant. To do this, we first recall the Arzelà--Ascoli theorem, which states that a set $S$ is relatively compact in $C(\Omega)$, when $\Omega$ is compact, if and only if $S$ is uniformly bounded and equicontinuous (for more details see Theorem A.43 in the Appendix). This classical result leads us to our first example.

1 The Fredholm operator defined by a continuous kernel

Proposition 2.2 If $\Omega \subseteq \mathbb{R}^d$ is a compact set and $K \in C(\Omega\times\Omega)$, then the corresponding integral operator $\mathcal{K}$ given in (2.2) is a compact operator in $B(C(\Omega))$.

Proof First note, by the Lebesgue dominated convergence theorem (cf. [236]), that $\mathcal{K} \in B(C(\Omega))$, since its operator norm satisfies the inequality
\[ \|\mathcal{K}\| \le \max\left\{ \int_\Omega |K(s,t)|\, dt : s \in \Omega \right\}. \]
Next, let $S \subseteq C(\Omega)$ be a bounded subset. Choose a constant $c > 0$ such that for any $v \in S$ we have that $\|v\|_\infty \le c$. Therefore, for any $v \in S$ and $s, s_1, s_2 \in \Omega$, we conclude that

\[ |(\mathcal{K}v)(s)| \le c\, \mathrm{meas}(\Omega) \max\{ |K(s,t)| : (s,t) \in \Omega\times\Omega \} \]
and
\[ |(\mathcal{K}v)(s_1) - (\mathcal{K}v)(s_2)| \le c \int_\Omega |K(s_1,t) - K(s_2,t)|\, dt \le c\, \mathrm{meas}(\Omega) \max\{ |K(s_1,t) - K(s_2,t)| : t \in \Omega \}, \]
where $\mathrm{meas}(\Omega)$ denotes the Lebesgue measure of the domain $\Omega$. Since the kernel $K$ is bounded and uniformly continuous on $\Omega\times\Omega$, the right-hand side of the first inequality is finite, and that of the second inequality can be made as small as desired provided that $|s_1 - s_2|$ is small enough. Thus the image of the set $S$ under $\mathcal{K}$, namely $\mathcal{K}(S)$, is uniformly bounded and equicontinuous, and an appeal to the Arzelà--Ascoli theorem completes the proof.

2 The Fredholm operator defined by a Schmidt kernel

Recall that a kernel $K$ is called a Schmidt kernel provided that $K \in L^2(\Omega\times\Omega)$.

Proposition 2.3 If $\Omega \subseteq \mathbb{R}^d$ is a measurable set and $K$ is a Schmidt kernel, then the integral operator $\mathcal{K}$ defined by (2.2) is a compact operator in $B(L^2(\Omega))$.


Proof We first show that this linear operator $\mathcal{K}$ is in $B(L^2(\Omega))$. Indeed, for any $v \in L^2(\Omega)$ and any compact subset $\Omega_0$ of $\Omega$, by the Fubini theorem (Theorem A.14) and the Cauchy--Schwarz inequality (Section A.13) we have that
\[ \int_{\Omega_0} |(\mathcal{K}v)(s)|\, ds \le \int_{\Omega_0} \int_\Omega |K(s,t)v(t)|\, ds\, dt \le (\mathrm{meas}(\Omega_0))^{1/2} \left( \int_{\Omega_0} \int_\Omega |K(s,t)|^2\, ds\, dt \right)^{1/2} \|v\|_{L^2(\Omega)}. \]

Therefore, again by the Fubini theorem, we obtain that $(\mathcal{K}v)(s)$ exists almost everywhere for $s \in \Omega_0$ and is measurable; hence it will be so on the entire $\Omega$. Once again using the Fubini theorem and the Cauchy--Schwarz inequality, we obtain that
\[ \|\mathcal{K}v\|_{L^2(\Omega)} = \left( \int_\Omega \left| \int_\Omega K(s,t)v(t)\, dt \right|^2 ds \right)^{1/2} \le \left( \int_\Omega \int_\Omega |K(s,t)|^2\, ds\, dt \right)^{1/2} \left( \int_\Omega |v(t)|^2\, dt \right)^{1/2} = \|K\|_{L^2(\Omega\times\Omega)} \|v\|_{L^2(\Omega)}, \tag{2.3} \]
which proves that $\mathcal{K} \in B(L^2(\Omega))$.
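Inequality (2.3) survives discretization exactly: replacing the integrals by any fixed quadrature rule, the same Cauchy--Schwarz argument gives the corresponding discrete bound. A sketch in plain Python (uniform grid on $[0,1]$; the kernel and the function $v$ are illustrative choices):

```python
import math

n = 200
pts = [(i + 0.5) / n for i in range(n)]
w = 1.0 / n                                     # uniform quadrature weight

K = lambda s, t: math.sin(s + t)                # illustrative Schmidt kernel
v = [t * t for t in pts]

# Discrete (Kv)(s_i) = sum_j w K(s_i, t_j) v(t_j).
Kv = [sum(w * K(s, t) * vt for t, vt in zip(pts, v)) for s in pts]

norm_Kv = math.sqrt(sum(w * y * y for y in Kv))
norm_K = math.sqrt(sum(w * w * K(s, t) ** 2 for s in pts for t in pts))
norm_v = math.sqrt(sum(w * x * x for x in v))
print(norm_Kv <= norm_K * norm_v + 1e-12)       # discrete analogue of (2.3)
```

Here the inequality holds exactly (up to rounding) because the discrete sums satisfy the Cauchy--Schwarz inequality in their own right.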

which proves that K isin B(L2())We next show that K is actually a compact operator First we consider the

case that K is a degenerate kernel That is there exist K1j K2j isin L2() j isin Nn

such that the kernel K for any s t isin has the form

K(s t) =sumjisinNn

K1j(s)K2j(t)

In this case we have for v isin L2() that

Kv =sumjisinNn

K1j

int

K2j(t)v(t)dt

which implies that the range of K is finite-dimensional and so K is indeed acompact operator Next we show that for any K isin L2(times) there is alwaysa sequence of degenerate kernels Kj j isin N sube L2(times) such that

limjrarrinfinKj minus KL2(times) = 0 (24)

From this fact it will follow that K is a compact operator Indeed let Kj be thecompact operator corresponding to the kernel Kj thus Kj is compact Frominequality (23) we obtain that

Kj minusK le Kj minus KL2(times)


and consequently we conclude that $\mathcal{K}_j \xrightarrow{u} \mathcal{K}$. Therefore, by Proposition A.47, part (v), we see that $\mathcal{K}$ is compact.

Now, the existence of a sequence $K_j$, $j \in \mathbb{N}$, of degenerate kernels in $L^2(\Omega\times\Omega)$ which satisfy (2.4) follows from Theorem A.31 and Corollary A.30. Specifically, we use the Fubini theorem and conclude that the only function $h \in L^2(\Omega\times\Omega)$ with the property that
\[ \int_\Omega \int_\Omega f(s)g(t)h(s,t)\, ds\, dt = 0 \]
for all $f,g \in L^2(\Omega)$ is the zero function. Therefore the set of degenerate kernels forms a dense subset of $L^2(\Omega\times\Omega)$.

We remark that a constructive approximation argument can also be used to establish the existence of a sequence of degenerate kernels $K_j$, $j \in \mathbb{N}$, which satisfy (2.4) when $\Omega$ is compact. For example, first we approximate the Schmidt kernel $K$ by a kernel in $C(\Omega\times\Omega)$, and then approximate the continuous kernel uniformly on $\Omega\times\Omega$ by bivariate polynomials. In particular, when $K \in C(\Omega\times\Omega)$, the degenerate kernels $K_j$, $j \in \mathbb{N}$, can also be chosen in $C(\Omega\times\Omega)$ so that
\[ \lim_{j\to\infty} \|K_j - K\|_{C(\Omega\times\Omega)} = 0, \]
thereby giving an alternate proof of Proposition 2.3.
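The constructive remark above can be illustrated with the kernel $K(s,t) = e^{st}$ on $[0,1]^2$ (an illustrative choice), whose truncated Taylor expansion $\sum_{j<n}(st)^j/j!$ is degenerate: each term is a product of a function of $s$ and a function of $t$. A plain-Python sketch estimating the $L^2$ error on a grid:

```python
import math

K = lambda s, t: math.exp(s * t)

def K_n(s, t, n):
    """Degenerate kernel: a finite sum of products K1j(s) * K2j(t)."""
    return sum((s ** j) * (t ** j) / math.factorial(j) for j in range(n))

def l2_err(n, m=100):
    """Grid estimate of the L2([0,1]^2) norm of K_n - K."""
    pts = [(i + 0.5) / m for i in range(m)]
    return math.sqrt(sum((K_n(s, t, n) - K(s, t)) ** 2
                         for s in pts for t in pts) / (m * m))

errs = [l2_err(n) for n in (1, 2, 4, 8)]
print(all(a > b for a, b in zip(errs, errs[1:])), errs[-1] < 1e-4)
```

The monotone decay of the errors mirrors (2.4), and by the inequality preceding it the corresponding finite-rank operators converge uniformly to the integral operator.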

2.1.2 Weakly singular integral operators

We now turn our attention to a class of weakly singular integral operators, which are among the most important compact integral operators.

Definition 2.4 Let $\Omega$ be a bounded and measurable subset of $\mathbb{R}^d$. If there exists a bounded and measurable function $M$ defined on $\Omega\times\Omega$ such that for $s,t \in \Omega$ with $s \ne t$,
\[ K(s,t) = M(s,t) \log|s-t| \tag{2.5} \]
or
\[ K(s,t) = \frac{M(s,t)}{|s-t|^\sigma}, \tag{2.6} \]
where $\sigma$ is a constant in the interval $(0,d)$ and $|s-t|$ is the Euclidean distance between $s,t \in \Omega$, then the function $K$ is called a kernel with a weak singularity, and the operator $\mathcal{K}$ defined by (2.2) is called a weakly singular integral operator. The case that the kernel $K$ has a logarithmic singularity (2.5) is sometimes referred to merely by saying that $\sigma = 0$.


We introduce several constants which are convenient for our discussion of weakly singular integral operators, namely
\[ c_\sigma(\Omega) = \sup\left\{ \int_\Omega \frac{dt}{|s-t|^\sigma} : s \in \Omega \right\}, \quad m(\Omega) = \sup\{ |M(s,t)| : s,t \in \Omega \} \]
and
\[ \mathrm{diam}(\Omega) = \sup\{ |s-t| : s,t \in \Omega \}. \]
Also, we let $S_d$ be the unit sphere in $\mathbb{R}^d$ and recall that
\[ \mathrm{vol}(S_d) = \frac{d\,\pi^{d/2}}{\Gamma(d/2+1)}, \]
where $\Gamma$ is the gamma function. In the next lemma we estimate an upper bound of $c_\sigma(\Omega)$.

Lemma 2.5 If $\Omega \subseteq \mathbb{R}^d$ is a bounded and measurable set and $\sigma \in [0,d)$, then
\[ c_\sigma(\Omega) \le \frac{\mathrm{vol}(S_d)\, \mathrm{diam}(\Omega)^{d-\sigma}}{d-\sigma}. \tag{2.7} \]

Proof Fix a choice of $s \in \Omega$. We use spherical coordinates with center at $s$ to estimate the integral in the definition of $c_\sigma(\Omega)$. Specifically, for any $t \in \Omega$ we have that $dt = r^{d-1}\, dr\, d\omega_d$, where $r \in [0, \mathrm{diam}(\Omega)]$ and $\omega_d$ is the Lebesgue measure on the unit sphere $S_d$. Consequently, we obtain the estimate
\[ \int_\Omega \frac{dt}{|s-t|^\sigma} \le \int_{S_d} \left( \int_0^{\mathrm{diam}(\Omega)} r^{d-1-\sigma}\, dr \right) d\omega_d = \frac{\mathrm{vol}(S_d)\, \mathrm{diam}(\Omega)^{d-\sigma}}{d-\sigma}. \tag{2.8} \]
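For $d = 1$, $\Omega = [0,1]$ and $\sigma = 1/2$ (illustrative choices), the integral in $c_\sigma(\Omega)$ has the closed form $\int_0^1 |s-t|^{-1/2}\,dt = 2(\sqrt{s} + \sqrt{1-s})$, so (2.7) can be checked directly; note that $\mathrm{vol}(S_1) = 2$. A plain-Python sketch:

```python
import math

d, sigma = 1, 0.5
vol_S1 = d * math.pi ** (d / 2) / math.gamma(d / 2 + 1)    # equals 2
exact = lambda s: 2 * (math.sqrt(s) + math.sqrt(1 - s))    # closed-form integral

c_sigma = max(exact(i / 1000) for i in range(1001))        # sup over a grid of s
bound = vol_S1 * 1.0 ** (d - sigma) / (d - sigma)          # right-hand side of (2.7)
print(abs(vol_S1 - 2) < 1e-12,
      abs(c_sigma - 2 * math.sqrt(2)) < 1e-3,              # sup attained at s = 1/2
      c_sigma <= bound)
```

The bound $4$ is not tight here (the supremum is $2\sqrt{2}$), which is typical of the crude spherical-coordinates estimate in the proof.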

Note that from inequality (2.8) it follows that every weakly singular kernel $K$ is in $L^1(\Omega\times\Omega)$, and when $\sigma \in [0, d/2)$, $K$ is likewise a Schmidt kernel. We consider the general weakly singular integral operator in the next result.

Proposition 2.6 The integral operator $\mathcal{K}$ defined by (2.2) with a weakly singular kernel (2.6) is in $B(L^2(\Omega))$, with norm satisfying the inequality $\|\mathcal{K}\| \le m(\Omega)\, c_\sigma(\Omega)$.


Proof We first observe, by the Fubini theorem, for any $u \in L^2(\Omega)$ that
\[ \int_\Omega \int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\, ds\, dt = \int_\Omega u^2(t) \left( \int_\Omega \frac{ds}{|s-t|^\sigma} \right) dt \le c_\sigma(\Omega)\, \|u\|_{L^2(\Omega)}^2. \tag{2.9} \]
Therefore the function $v$ defined for all $s \in \Omega$ as
\[ v(s) = \int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\, dt \]
exists at almost every $s \in \Omega$ and is integrable. Next we point out, for each $s,t \in \Omega$, that
\[ |K(s,t)u(t)| \le m(\Omega)\, \frac{1}{|s-t|^{\sigma/2}} \cdot \frac{|u(t)|}{|s-t|^{\sigma/2}} \le \frac{m(\Omega)}{2}\, \frac{1}{|s-t|^\sigma} + \frac{m(\Omega)}{2}\, \frac{u^2(t)}{|s-t|^\sigma}. \]

For any $s \in \Omega$, both terms on the right-hand side of this inequality are integrable with respect to $t \in \Omega$, and so we conclude that $|K(s,t)u(t)|$ is finite for almost every $t \in \Omega$. Moreover, by the Cauchy--Schwarz inequality we have that
\[ [(\mathcal{K}u)(s)]^2 = \left[ \int_\Omega K(s,t)u(t)\, dt \right]^2 \le m^2(\Omega) \int_\Omega \frac{dt}{|s-t|^\sigma} \int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\, dt \le m^2(\Omega)\, c_\sigma(\Omega) \int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\, dt, \]
which implies that $\mathcal{K}u$ is square-integrable; that is, $\mathcal{K}$ is defined on $L^2(\Omega)$ and maps $L^2(\Omega)$ to $L^2(\Omega)$. Moreover, integrating both sides of the last inequality with respect to $s \in \Omega$ and employing estimate (2.9) yields the inequality
\[ \|\mathcal{K}u\|_{L^2(\Omega)}^2 \le m^2(\Omega)\, c_\sigma(\Omega)^2\, \|u\|_{L^2(\Omega)}^2, \]
which completes the proof.

We next establish the compactness of a weakly singular integral operator on $L^2(\Omega)$.

Theorem 2.7 The integral operator $\mathcal{K}$ with a weakly singular kernel (2.6) is a compact operator in $B(L^2(\Omega))$.

Proof For $\varepsilon > 0$, let $\mathcal{K}_\varepsilon$ and $\mathcal{K}'_\varepsilon$ be the integral operators whose kernels $K_\varepsilon$, $K'_\varepsilon$ are defined, respectively, for $s,t \in \Omega$ by the equations
\[ K_\varepsilon(s,t) = \begin{cases} K(s,t), & |s-t| \ge \varepsilon, \\ 0, & |s-t| < \varepsilon, \end{cases} \]


and
\[ K'_\varepsilon(s,t) = \begin{cases} 0, & |s-t| \ge \varepsilon, \\ K(s,t), & |s-t| < \varepsilon. \end{cases} \]
These kernels were chosen to provide the decomposition
\[ \mathcal{K} = \mathcal{K}_\varepsilon + \mathcal{K}'_\varepsilon. \tag{2.10} \]
Since $|K_\varepsilon(s,t)| \le m(\Omega)\varepsilon^{-\sigma}$ for $s,t \in \Omega$ and $\Omega$ is bounded, it follows that $K_\varepsilon \in L^2(\Omega\times\Omega)$. Consequently, we conclude from Proposition 2.3 that $\mathcal{K}_\varepsilon$ is a compact operator in $B(L^2(\Omega))$. Moreover, setting $\Omega_{s,\varepsilon} = \{ t : t \in \Omega,\ |s-t| < \varepsilon \}$ for each $s \in \Omega$, the Cauchy--Schwarz inequality yields the inequality

\[ |(\mathcal{K}'_\varepsilon u)(s)| = \left| \int_{\Omega_{s,\varepsilon}} \frac{M(s,t)}{|s-t|^\sigma}\, u(t)\, dt \right| \le m(\Omega) \left( \int_{\Omega_{s,\varepsilon}} \frac{dt}{|s-t|^\sigma} \right)^{1/2} \left( \int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\, dt \right)^{1/2}. \]

We bound the first integral on the right-hand side of this inequality by the method of proof used for Lemma 2.5, and then integrate both sides of the resulting inequality over $s \in \Omega$ to obtain the inequality
\[ \|\mathcal{K}'_\varepsilon u\|_{L^2(\Omega)} \le \left[ \frac{m^2(\Omega)\, \mathrm{vol}(S_d)\, (2\varepsilon)^{d-\sigma}}{d-\sigma} \int_\Omega \int_\Omega \frac{u^2(t)}{|s-t|^\sigma}\, ds\, dt \right]^{1/2}. \]
This inequality, combined with (2.9) and Lemma 2.5, leads to the estimate
\[ \|\mathcal{K}'_\varepsilon u\|_{L^2(\Omega)} \le \frac{m(\Omega)\, \mathrm{vol}(S_d)\, [2\varepsilon\, \mathrm{diam}(\Omega)]^{(d-\sigma)/2}}{d-\sigma}\, \|u\|_{L^2(\Omega)}. \]

In other words, we have proved that $\lim_{\varepsilon\to 0} \|\mathcal{K}'_\varepsilon\| = 0$, and so, being the uniform limit of the compact operators $\mathcal{K}_\varepsilon$, $\varepsilon > 0$, $\mathcal{K}$ is compact.

The next result establishes a similar fact for $\mathcal{K}$ to be a compact operator in $B(C(\Omega))$ under a modified hypothesis.

Proposition 2.8 If $\Omega \subseteq \mathbb{R}^d$ is a compact set of positive measure and the function $M$ in (2.6) is in $C(\Omega\times\Omega)$, then the integral operator $\mathcal{K}$ with a weakly singular kernel (2.6) is a compact operator in $B(C(\Omega))$.

Proof Let $S$ be a bounded subset of $C(\Omega)$. By the Arzelà--Ascoli theorem, it suffices to prove that the set $\mathcal{K}(S)$ is uniformly bounded and equicontinuous.


We choose a positive constant $c$ which ensures for any $u \in S$ that $\|u\|_\infty \le c$, and so we have for any $u \in S$ that
\[ |(\mathcal{K}u)(s)| = \left| \int_\Omega \frac{M(s,t)}{|s-t|^\sigma}\, u(t)\, dt \right| \le c\, m(\Omega)\, c_\sigma(\Omega), \quad s \in \Omega. \]
Therefore, using Lemma 2.5, we conclude that the set $\mathcal{K}(S)$ is uniformly bounded.

Next we shall show not only that $\mathcal{K}u \in C(\Omega)$ but also that the set $\mathcal{K}(S)$ is equicontinuous. To this end, we choose $\varepsilon > 0$, points $s + h, s \in \Omega$, and obtain the equation
\[ (\mathcal{K}u)(s+h) - (\mathcal{K}u)(s) = \int_\Omega \left[ \frac{M(s+h,t)}{|s+h-t|^\sigma} - \frac{M(s,t)}{|s-t|^\sigma} \right] u(t)\, dt. \]
Let $B(s, 2\varepsilon)$ be the ball with center at $s$ and radius $2\varepsilon$, and set $\Omega(s) = \Omega \setminus B(s, 2\varepsilon)$. We have that
\[ |(\mathcal{K}u)(s+h) - (\mathcal{K}u)(s)| \le c\, m(\Omega) \left( \int_{B(s,2\varepsilon)} \frac{dt}{|s+h-t|^\sigma} + \int_{B(s,2\varepsilon)} \frac{dt}{|s-t|^\sigma} \right) + c \int_{\Omega(s)} \left| \frac{M(s+h,t)}{|s+h-t|^\sigma} - \frac{M(s,t)}{|s-t|^\sigma} \right| dt. \tag{2.11} \]

For every $s \in \Omega$, it follows from the method used to prove Lemma 2.5 that
\[ \int_{B(s,2\varepsilon)} \frac{dt}{|s-t|^\sigma} \le \frac{\mathrm{vol}(S_d)(4\varepsilon)^{d-\sigma}}{d-\sigma}. \]
When $|h| < 2\varepsilon$, we are assured that $B(s, 2\varepsilon) \subseteq B(s+h, 4\varepsilon)$, and consequently we obtain the next inequality:
\[ \int_{B(s,2\varepsilon)} \frac{dt}{|s+h-t|^\sigma} \le \int_{B(s+h,4\varepsilon)} \frac{dt}{|s+h-t|^\sigma} \le \frac{\mathrm{vol}(S_d)(16\varepsilon)^{d-\sigma}}{d-\sigma}, \]
where in the last step we again employ the method of proof for Lemma 2.5. These two inequalities demonstrate that the first two quantities on the right-hand side of (2.11) do not exceed
\[ \frac{2c\, m(\Omega)\, \mathrm{vol}(S_d)(16\varepsilon)^{d-\sigma}}{d-\sigma}. \]

We now estimate the third integral appearing on the right-hand side of (2.11). To this end, we observe that on the compact set $W = \{ (s,t) : s,t \in \Omega,\ |s-t| \ge \varepsilon \}$ the function defined as $H(s,t) = M(s,t)/|s-t|^\sigma$ is uniformly continuous. Hence there exists a $\delta > 0$ such that whenever $(s',t'), (s,t) \in W$ with $|s'-s| \le \delta$, $|t'-t| \le \delta$, we have that
\[ \left| \frac{M(s',t')}{|s'-t'|^\sigma} - \frac{M(s,t)}{|s-t|^\sigma} \right| \le \varepsilon. \]
Now assume that $|h| \le \min(\varepsilon, \delta)$ and $t \in \Omega(s)$. Then $(s+h,t), (s,t) \in W$, and so the third term on the right-hand side of (2.11) is bounded by $c\, \mathrm{meas}(\Omega)\, \varepsilon$ for any $s \in \Omega$ and $|h| \le \min(\varepsilon, \delta)$. Therefore not only is the function $\mathcal{K}u$ continuous on $\Omega$, but also the set $\mathcal{K}(S)$ is equicontinuous. An application of the Arzelà--Ascoli theorem establishes that the set $\mathcal{K}(S)$ is relatively compact, and so $\mathcal{K}$ is a compact operator.

2.1.3 Boundary integral equations

Some important boundary value problems of partial differential equations over a prescribed domain can be reformulated as equivalent integral equations over the boundary of the domain. The resulting integral equations on the boundary are called boundary integral equations (BIEs). The superiority of the BIE methodology for solving boundary value problems rests on the fact that the dimension of the domain of the functions appearing in the BIE is one lower than in the original partial differential equation. This means that the computational effort required to solve the partial differential equations can be reduced significantly by using an efficient numerical method to solve the associated BIE. In this subsection we briefly review the BIE reformulation of boundary value problems for the Laplace equation. This material will provide the reader with some concrete integral equations that supplement the general theory described throughout the book.

We begin with a discussion of the Green identities and integral representations of harmonic functions. To this end, we let $\Omega \subseteq \mathbb{R}^d$ be a bounded open domain with piecewise smooth boundary $\partial\Omega$. Throughout our discussion we use the standard notation $\nabla = \left[\frac{\partial}{\partial x_l} : l \in \mathbb{N}_d\right]$ for the gradient, and the Laplace operator is defined by $\Delta u = \nabla\cdot\nabla u$. Let $A = [a_{ij} : i,j \in \mathbb{N}_d]$ be a $d \times d$ symmetric matrix with entries in $C^2(\bar\Omega)$, $b = [b_i : i \in \mathbb{N}_d]$ a vector field with coordinates in $C^1(\bar\Omega)$ and $c$ a scalar-valued function in $C(\bar\Omega)$.

The proof of the lemma below is straightforward.

Lemma 2.9 If $u : \mathbb{R}^d \to \mathbb{R}$ and $a : \mathbb{R}^d \to \mathbb{R}^d$ are in $C^1(\bar\Omega)$, then
\[
\nabla\cdot(u a) = u\,\nabla\cdot a + a\cdot\nabla u.
\]


For the next lemma we introduce the vector field $P : \mathbb{R}^d \to \mathbb{R}^d$,
\[
P = A(v\nabla u - u\nabla v) + b\,uv,
\]
the second-order elliptic partial differential operator
\[
Mu = \nabla\cdot A\nabla u + b\cdot\nabla u + cu,
\]
and its formal adjoint operator
\[
M^{*}v = \nabla\cdot A\nabla v - \nabla\cdot(bv) + cv.
\]

Lemma 2.10 If $u, v : \mathbb{R}^d \to \mathbb{R}$ are in $C^2(\bar\Omega)$ and $A$, $b$, $c$ are as above, then
\[
vMu - uM^{*}v = \nabla\cdot P.
\]

Proof By direct computation using Lemma 2.9, we have that
\[
vMu - uM^{*}v = v\nabla\cdot(A\nabla u) - u\nabla\cdot(A\nabla v) + vb\cdot\nabla u + u\nabla\cdot(bv),
\]
while the definition of $P$ shows that the right-hand side of this equation equals $\nabla\cdot P$.

The formula appearing in the above lemma is often referred to as the adjoint identity.

Next we write the adjoint identity in an alternative form. For this purpose we introduce the notation
\[
Pu = A\nabla u\cdot n \quad\text{and}\quad Qv = A\nabla v\cdot n - v\,b\cdot n, \qquad (2.12)
\]
where $n$ denotes the unit outer normal along $\partial\Omega$. It follows from the Gauss formula and Lemma 2.10 that
\[
\int_{\Omega} (vMu - uM^{*}v) = \int_{\partial\Omega} (vPu - uQv). \qquad (2.13)
\]

We are interested in obtaining a representation for the solution $u$ of the homogeneous elliptic equation $Mu = 0$ in terms of its values on the boundary of the domain. The standard method for doing this employs the fundamental solution of the inhomogeneous problem corresponding to the adjoint operator. We briefly describe this process. Recall that the fundamental solution of the linear partial differential operator $M$ is a function $U$ defined on $\Omega \times \Omega$ such that for each $x \in \Omega$ the solution $u$ of the equation $Mu = f$ is given by the integral representation
\[
u(x) = \int_{\mathbb{R}^d} U(x,y) f(y)\,dy. \qquad (2.14)
\]


We assume that the fundamental solution of the adjoint operator $M^{*}$ is available and is denoted by $G^{*}$. Therefore we are ensured that the solution $v$ of the adjoint equation $M^{*}v = f$ is given for each $x \in \Omega$ as
\[
v(x) = \int_{\mathbb{R}^d} G^{*}(x,y) f(y)\,dy. \qquad (2.15)
\]

The function $G^{*}$ leads us to the following basic result.

Proposition 2.11 If $u$ is the solution of the homogeneous equation
\[
Mu = 0, \qquad (2.16)
\]
then for each $x \in \Omega$,
\[
u(x) = \int_{\partial\Omega} \bigl[ u(y)\,(QG^{*}(x,\cdot))(y) - G^{*}(x,y)\,(Pu)(y) \bigr]\,dy, \qquad (2.17)
\]
where $P$, $Q$ are defined by (2.12) and $G^{*}$ is the fundamental solution of the operator $M^{*}$.

Proof It follows from (2.13) that
\[
\int_{\Omega} \bigl[v(y)(Mu)(y) - u(y)(M^{*}v)(y)\bigr]\,dy = \int_{\partial\Omega} \bigl[v(y)(Pu)(y) - u(y)(Qv)(y)\bigr]\,dy.
\]
We choose $v = G^{*}(x,\cdot)$ in this formula and use the definition of $G^{*}$ to get the desired conclusion.

Note that (2.17) expresses $u$ over $\Omega$ in terms of the boundary values of $u$ and its normal derivative on $\partial\Omega$. Let us specialize this result to the Laplace operator. Thus we choose $M$ to be the Laplace operator and observe in this case that $M = M^{*} = \Delta$ and $Pu = Qu = \nabla u\cdot n$. Therefore we obtain from (2.13) the following theorem.

Theorem 2.12 (Green theorem) If $u, v \in C^2(\bar\Omega)$, then
\[
\int_{\Omega} (v\Delta u - u\Delta v) = \int_{\partial\Omega} \left( v\frac{\partial u}{\partial n} - u\frac{\partial v}{\partial n} \right). \qquad (2.18)
\]

The fundamental solution of the Laplace operator is given by the formula
\[
G(x,y) =
\begin{cases}
-\dfrac{1}{2\pi}\log|x-y|, & d = 2,\\[1.5ex]
-\dfrac{1}{(d-2)\,\omega_{d-1}}\,\dfrac{1}{|x-y|^{d-2}}, & d \ge 3,
\end{cases}
\qquad (2.19)
\]
where $\omega_{d-1}$ denotes the surface area of the unit sphere in $\mathbb{R}^d$ (cf. [7, 177]).

Definition 2.13 If $u \in C^2(\Omega)$ satisfies the Laplace equation $\Delta u = 0$ on $\Omega$, then $u$ is called harmonic on $\Omega$.

It follows from direct computation that $u = G(\cdot,y)$ is harmonic on $\mathbb{R}^d \setminus \{y\}$.
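As a quick numerical sanity check (an illustrative sketch, not part of the text), one can verify that $G(\cdot,y)$ is harmonic away from $y$ by applying a five-point finite-difference Laplacian to the $d = 2$ kernel at a point off the singularity:

```python
import numpy as np

def G(x1, x2, y=(0.0, 0.0)):
    # Fundamental solution of the Laplacian for d = 2: -(1/2π) log|x - y|.
    return -np.log(np.hypot(x1 - y[0], x2 - y[1])) / (2 * np.pi)

# Five-point finite-difference Laplacian of G at a point away from y = 0.
x1, x2, h = 0.7, 0.4, 1e-4
lap = (G(x1 + h, x2) + G(x1 - h, x2) + G(x1, x2 + h) + G(x1, x2 - h)
       - 4 * G(x1, x2)) / h**2
# lap is O(h^2) ≈ 0, consistent with ΔG(·, y) = 0 off the singularity.
```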


Corollary 2.14 If $u$ is harmonic on $\bar\Omega$, then
\[
\int_{\partial\Omega} \frac{\partial u}{\partial n} = 0.
\]

Proof This result follows from Theorem 2.12 by choosing $v = 1$.

In what follows we review techniques for finding solutions of the Laplace equation under various circumstances. In particular, we discuss the direct method for solving the Dirichlet and Neumann problems in both the interior and the exterior of the domain. Moreover, we shall also review both single- and double-layer representations for the solutions of these problems. All of our remarks pertain to the practically important cases of two and three dimensions. The two-dimensional case will be presented in some detail, while the corresponding three-dimensional results will be stated without the benefit of a detailed explanation, since they follow the pattern of proof of the two-dimensional case. We begin with a proposition which shows that harmonic functions have boundary integral representations.

Proposition 2.15 If $\Omega \subseteq \mathbb{R}^2$ is a bounded open domain with smooth boundary $\partial\Omega$, $\Omega' = \mathbb{R}^2 \setminus \bar\Omega$ and $u$ is a harmonic function on $\bar\Omega$, then
\[
u(x) = \frac{1}{2\pi}\int_{\partial\Omega}\left( u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\,\frac{\partial u(y)}{\partial n} \right) dy, \quad x \in \Omega, \qquad (2.20)
\]
\[
u(x) = \frac{1}{\pi}\int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\,\frac{\partial u(y)}{\partial n} \right] dy, \quad x \in \partial\Omega, \qquad (2.21)
\]
\[
0 = \int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\,\frac{\partial u(y)}{\partial n} \right] dy, \quad x \in \Omega'. \qquad (2.22)
\]

Proof The first equation (2.20) follows from Proposition 2.11. We now turn to the case that $x \in \partial\Omega$. To deal with the singularity at $x = y$ of the integrand on the right-hand side of equation (2.21), we choose a positive $\epsilon$ and denote by $\Omega_\epsilon$ the domain obtained from $\Omega$ after removing the small disc $B(x,\epsilon) = \{y : |x-y| \le \epsilon\}$. On this punctured domain we can use the Green identity (2.18) to obtain the equation
\[
\int_{\Omega_\epsilon} \bigl(v(y)\Delta u(y) - u(y)\Delta v(y)\bigr)\,dy = \int_{\partial\Omega_\epsilon} \left( v(y)\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial v(y)}{\partial n} \right) dy. \qquad (2.23)
\]
Let $v$ be the fundamental solution (2.19) of the Laplace operator. Both of the functions $u$ and $v$ are harmonic on $\Omega_\epsilon$. Therefore the above equation can be written as
\[
\frac{1}{2\pi}\int_{\partial\Omega_\epsilon} \left( \log|x-y|\,\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial}{\partial n}\log|x-y| \right) dy = 0. \qquad (2.24)
\]
To evaluate the limit as $\epsilon \to 0$ of the integral on the left-hand side of the equation above, we split it into a sum of the following two integrals:
\[
I_{1,\epsilon} = \frac{1}{2\pi}\int_{\Gamma_\epsilon} \left( \log|x-y|\,\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial}{\partial n}\log|x-y| \right) dy
\]
and
\[
I_{2,\epsilon} = \frac{1}{2\pi}\int_{\partial\Omega_\epsilon\setminus\Gamma_\epsilon} \left( \log|x-y|\,\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial}{\partial n}\log|x-y| \right) dy,
\]
with $\Gamma_\epsilon = \partial B(x,\epsilon)\cap\bar\Omega$. It follows that
\[
I_{1,\epsilon} = \frac{1}{2\pi}\int_{\Gamma_\epsilon} \left( \log\epsilon\,\frac{\partial u(y)}{\partial n} + \epsilon^{-1}u(y) \right) dy, \qquad (2.25)
\]
from which we obtain that
\[
\lim_{\epsilon\to 0} I_{1,\epsilon} = \frac{1}{2}u(x). \qquad (2.26)
\]
Moreover, we have that
\[
\lim_{\epsilon\to 0} I_{2,\epsilon} = \frac{1}{2\pi}\int_{\partial\Omega} \left( \log|x-y|\,\frac{\partial u(y)}{\partial n} - u(y)\frac{\partial}{\partial n}\log|x-y| \right) dy. \qquad (2.27)
\]
Combining equations (2.24)–(2.27) yields (2.21). Finally, we note that when $x \in \Omega'$, both functions $u$ and $v = \frac{1}{2\pi}\log|x-\cdot|$ are harmonic functions on $\bar\Omega$. Thus (2.22) follows from the Green identity (2.18).
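The interior representation (2.20) is easy to test numerically (an illustrative sketch, not from the text): take the harmonic function $u(x) = x_1^2 - x_2^2$ on the unit disc, discretize the boundary integral with the trapezoidal rule (which is spectrally accurate for smooth periodic integrands) and compare with the exact value at an interior point.

```python
import numpy as np

# Verify (2.20) on the unit disc for the harmonic function u = x1^2 - x2^2.
m = 256
th = 2 * np.pi * np.arange(m) / m
y = np.column_stack([np.cos(th), np.sin(th)])  # boundary nodes, |y| = 1
u_b = y[:, 0] ** 2 - y[:, 1] ** 2              # u on the boundary
du_dn = 2 * u_b                                # ∂u/∂n = ∇u·y = 2u on |y| = 1

x = np.array([0.3, 0.2])                       # interior evaluation point
d = y - x
r2 = np.einsum("ij,ij->i", d, d)               # |x - y|^2
dn_log = np.einsum("ij,ij->i", d, y) / r2      # ∂/∂n_y log|x-y| = (y-x)·n_y / |x-y|^2
log_r = 0.5 * np.log(r2)

# Trapezoidal rule: ds = dθ on the unit circle, so (1/2π)(2π/m)Σ = (1/m)Σ.
u_x = np.sum(u_b * dn_log - log_r * du_dn) / m
# u_x should match u(x) = 0.3^2 - 0.2^2 = 0.05 to near machine precision.
```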

We state the corresponding result for the three-dimensional case. The proof follows the pattern of the proof of Proposition 2.15.

Proposition 2.16 If $\Omega \subseteq \mathbb{R}^3$ is a bounded open domain with smooth boundary $\partial\Omega$, $\Omega' = \mathbb{R}^3 \setminus \bar\Omega$ and $u$ is a harmonic function on $\bar\Omega$, then
\[
u(x) = -\frac{1}{4\pi}\int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|} - \frac{1}{|x-y|}\,\frac{\partial u(y)}{\partial n} \right] dy, \quad x \in \Omega, \qquad (2.28)
\]
\[
u(x) = -\frac{1}{2\pi}\int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|} - \frac{1}{|x-y|}\,\frac{\partial u(y)}{\partial n} \right] dy, \quad x \in \partial\Omega, \qquad (2.29)
\]
\[
0 = \int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|} - \frac{1}{|x-y|}\,\frac{\partial u(y)}{\partial n} \right] dy, \quad x \in \Omega'. \qquad (2.30)
\]


Next we make use of these boundary value formulas for harmonic functions to rewrite several boundary value problems as integral equations. We start with a description of methods for obtaining boundary integral equations of the direct type. Later we turn our attention to the indirect methods of single- and double-layer potentials. First we consider the following interior boundary value problems.

The interior Dirichlet problem: find $u \in C(\bar\Omega)\cap C^2(\Omega)$ such that
\[
\begin{cases}
\Delta u(x) = 0, & x \in \Omega,\\
u(x) = u_0(x), & x \in \partial\Omega,
\end{cases}
\qquad (2.31)
\]
where $u_0 \in C(\partial\Omega)$ is a given boundary function.

The interior Neumann problem: find $u \in C(\bar\Omega)\cap C^2(\Omega)$ such that
\[
\begin{cases}
\Delta u(x) = 0, & x \in \Omega,\\
\partial u(x)/\partial n = u_1(x), & x \in \partial\Omega,
\end{cases}
\qquad (2.32)
\]
where $u_1 \in C(\partial\Omega)$ is a given boundary function satisfying $\int_{\partial\Omega} u_1(x)\,dx = 0$.

It is known that both of these problems have unique solutions (see, for example, Chapter 6 of [177]). We now use (2.21) and (2.29) with the boundary condition $u = u_0$ and reformulate the interior Dirichlet problem (2.31) for $d = 2$ as the BIE of the first kind
\[
\frac{1}{\pi}\int_{\partial\Omega} \log|x-y|\,\rho(y)\,dy = f(x), \quad x \in \partial\Omega, \qquad (2.33)
\]
where
\[
\rho = \frac{\partial u}{\partial n} \quad\text{and}\quad f = -u_0 + \frac{1}{\pi}\int_{\partial\Omega} u_0(y)\frac{\partial}{\partial n}\log|\cdot - y|\,dy.
\]

For the case $d = 3$ we choose
\[
\rho = \frac{\partial u}{\partial n} \quad\text{and}\quad f = u_0 + \frac{1}{2\pi}\int_{\partial\Omega} u_0(y)\frac{\partial}{\partial n}\frac{1}{|\cdot - y|}\,dy
\]
and obtain the reformulation of the interior Dirichlet problem (2.31) with $d = 3$ as the BIE of the first kind
\[
\frac{1}{2\pi}\int_{\partial\Omega} \frac{1}{|x-y|}\,\rho(y)\,dy = f(x), \quad x \in \partial\Omega. \qquad (2.34)
\]

In a similar manner we treat the interior Neumann problem. First, for $d = 2$, we use (2.21) and (2.29) with the boundary condition $\partial u/\partial n = u_1$ and convert the interior Neumann problem (2.32) to the equivalent BIE of the second kind
\[
u(x) - \frac{1}{\pi}\int_{\partial\Omega} u(y)\frac{\partial}{\partial n}\log|x-y|\,dy = g(x), \quad x \in \partial\Omega, \qquad (2.35)
\]
where
\[
g = -\frac{1}{\pi}\int_{\partial\Omega} u_1(y)\log|\cdot - y|\,dy.
\]

For the corresponding three-dimensional case $d = 3$ we set
\[
g = \frac{1}{2\pi}\int_{\partial\Omega} \frac{u_1(y)}{|\cdot - y|}\,dy
\]
and obtain the BIE of the second kind
\[
u(x) + \frac{1}{2\pi}\int_{\partial\Omega} u(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy = g(x), \quad x \in \partial\Omega. \qquad (2.36)
\]

Let us now consider the Dirichlet and Neumann problems in the exterior domain $\Omega' = \mathbb{R}^d \setminus \bar\Omega$ for $d = 2, 3$. Specifically, we reformulate the following two problems as BIEs.

The exterior Dirichlet problem: find $u \in C(\bar{\Omega}')\cap C^2(\Omega')$ such that
\[
\begin{cases}
\Delta u(x) = 0, & x \in \Omega',\\
u(x) = u_0(x), & x \in \partial\Omega,
\end{cases}
\qquad (2.37)
\]
where $u_0 \in C(\partial\Omega)$ is a given boundary function.

The exterior Neumann problem: find $u \in C(\bar{\Omega}')\cap C^2(\Omega')$ such that
\[
\begin{cases}
\Delta u(x) = 0, & x \in \Omega',\\
\partial u(x)/\partial n' = u_1(x), & x \in \partial\Omega,
\end{cases}
\qquad (2.38)
\]
where $u_1 \in C(\partial\Omega)$ is a given boundary function satisfying $\int_{\partial\Omega} u_1(x)\,dx = 0$, and $n'$ is the outer unit normal to $\partial\Omega\,(= \partial\Omega')$ with respect to $\Omega'$.

It is known (see, for example, [177]) that both problems have unique solutions under the condition that
\[
u(x) = O(|x|^{-1}), \qquad |\nabla u(x)| = O(|x|^{-2}), \qquad |x| \to \infty. \qquad (2.39)
\]

Below we state the analogs of Propositions 2.15 and 2.16 for the exterior domain $\Omega'$. We begin with the case $d = 2$.


Proposition 2.17 If $\Omega \subseteq \mathbb{R}^2$ is a bounded open domain with smooth boundary $\partial\Omega$, $\Omega' = \mathbb{R}^2 \setminus \bar\Omega$ and $u$ is a harmonic function on $\bar{\Omega}'$, then there holds
\[
u(x) = \frac{1}{2\pi}\int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n'}\log|x-y| - \log|x-y|\,\frac{\partial u(y)}{\partial n'} \right] dy, \quad x \in \Omega', \qquad (2.40)
\]
\[
u(x) = \frac{1}{\pi}\int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n'}\log|x-y| - \log|x-y|\,\frac{\partial u(y)}{\partial n'} \right] dy, \quad x \in \partial\Omega, \qquad (2.41)
\]
\[
0 = \int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n'}\log|x-y| - \log|x-y|\,\frac{\partial u(y)}{\partial n'} \right] dy, \quad x \in \Omega. \qquad (2.42)
\]

Proof We only prove (2.40); the other two equations are obtained similarly. Let the ball $B_R = \{x : |x| < R\}$ be chosen such that $\bar\Omega \subset B_R$, and let $\Omega'_R = \Omega' \cap B_R$. Consequently, it follows from (2.20) with $\Omega = \Omega'_R$ that
\[
u(x) = \frac{1}{2\pi}\int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n'}\log|x-y| - \log|x-y|\,\frac{\partial u(y)}{\partial n'} \right] dy + I_R, \quad x \in \Omega'_R, \qquad (2.43)
\]
where
\[
I_R = \frac{1}{2\pi}\int_{\partial B_R}\left[ u(y)\frac{\partial}{\partial n}\log|x-y| - \log|x-y|\,\frac{\partial u(y)}{\partial n} \right] dy, \quad x \in \Omega'_R,
\]
and $n$ is the outer unit normal to $\partial B_R$. Using the condition (2.39), we have that there exists a positive constant $c$ such that
\[
|I_R| \le \frac{1}{2\pi}\int_{\partial B_R}\left[ |u(y)|\left|\frac{\partial}{\partial n}\log|x-y|\right| + \bigl|\log|x-y|\bigr|\left|\frac{\partial u(y)}{\partial n}\right| \right] dy \le c\,\frac{\log R}{2\pi R}.
\]
Note that the upper bound tends to zero as $R$ tends to infinity. Therefore this estimate combined with (2.43) yields (2.40).

The three-dimensional version of Proposition 2.16 for the exterior domain is described next. The proof is similar to that of Proposition 2.17 and so is omitted.


Proposition 2.18 If $\Omega \subseteq \mathbb{R}^3$ is a bounded open domain with smooth boundary $\partial\Omega$, $\Omega' = \mathbb{R}^3 \setminus \bar\Omega$ and $u$ is a harmonic function on $\bar{\Omega}'$, then
\[
u(x) = -\frac{1}{4\pi}\int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n'}\frac{1}{|x-y|} - \frac{1}{|x-y|}\,\frac{\partial u(y)}{\partial n'} \right] dy, \quad x \in \Omega', \qquad (2.44)
\]
\[
u(x) = -\frac{1}{2\pi}\int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n'}\frac{1}{|x-y|} - \frac{1}{|x-y|}\,\frac{\partial u(y)}{\partial n'} \right] dy, \quad x \in \partial\Omega, \qquad (2.45)
\]
\[
0 = \int_{\partial\Omega}\left[ u(y)\frac{\partial}{\partial n'}\frac{1}{|x-y|} - \frac{1}{|x-y|}\,\frac{\partial u(y)}{\partial n'} \right] dy, \quad x \in \Omega. \qquad (2.46)
\]

We now make use of (2.41) and (2.45) to rewrite the exterior Dirichlet problem (2.37) for $d = 2$ as the BIE
\[
\frac{1}{\pi}\int_{\partial\Omega} \log|x-y|\,\rho(y)\,dy = f(x), \quad x \in \partial\Omega, \qquad (2.47)
\]
where
\[
\rho = \frac{\partial u}{\partial n'} \quad\text{and}\quad f = -u_0 + \frac{1}{\pi}\int_{\partial\Omega} u_0(y)\frac{\partial}{\partial n'}\log|\cdot - y|\,dy,
\]

while for $d = 3$ we have the equation
\[
\frac{1}{2\pi}\int_{\partial\Omega} \frac{1}{|x-y|}\,\rho(y)\,dy = f(x), \quad x \in \partial\Omega, \qquad (2.48)
\]
where
\[
\rho = \frac{\partial u}{\partial n'} \quad\text{and}\quad f = u_0 + \frac{1}{2\pi}\int_{\partial\Omega} u_0(y)\frac{\partial}{\partial n'}\frac{1}{|\cdot - y|}\,dy.
\]

For the exterior Neumann problem (2.38), the BIE is of the second kind, and it is given explicitly for $d = 2$ as
\[
u(x) - \frac{1}{\pi}\int_{\partial\Omega} u(y)\frac{\partial}{\partial n'}\log|x-y|\,dy = g(x), \quad x \in \partial\Omega, \qquad (2.49)
\]
where
\[
g(x) = -\frac{1}{\pi}\int_{\partial\Omega} u_1(y)\log|x-y|\,dy.
\]

The case $d = 3$ is covered by the following BIE:
\[
u(x) + \frac{1}{2\pi}\int_{\partial\Omega} u(y)\frac{\partial}{\partial n'}\frac{1}{|x-y|}\,dy = g(x), \quad x \in \partial\Omega, \qquad (2.50)
\]
where
\[
g(x) = \frac{1}{2\pi}\int_{\partial\Omega} \frac{u_1(y)}{|x-y|}\,dy.
\]


In the remaining part of this section our goal is to describe BIEs for the Laplace equation of the indirect type. First we consider representing the unknown harmonic function $u$ as a single-layer potential,
\[
u(x) = \int_{\partial\Omega} \rho(y)\,G(x,y)\,dy, \quad x \in \mathbb{R}^d \setminus \partial\Omega, \qquad (2.51)
\]
and then later as a double-layer potential,
\[
u(x) = \int_{\partial\Omega} \rho(y)\,\frac{\partial G(x,y)}{\partial n}\,dy, \quad x \in \mathbb{R}^d \setminus \partial\Omega, \qquad (2.52)
\]
where $G$ is the fundamental solution of the Laplace operator and $\rho \in C(\partial\Omega)$ is a function to be determined, depending on the nature of the boundary conditions. We show that single- and double-layer potentials can be used to solve both the interior and the exterior Dirichlet and Neumann problems.

Let us start with the single-layer method. For the interior or exterior Dirichlet problem, the boundary condition and the continuity of $u$ on $\partial\Omega$ lead to the demand that the function $\rho$ satisfy the first-kind Fredholm integral equation
\[
\int_{\partial\Omega} \rho(y)\,G(x,y)\,dy = u_0(x), \quad x \in \partial\Omega.
\]
In particular, for $d = 2$ the solution $u$ of the interior (resp. exterior) Dirichlet problem has the single-layer representation given by the equation
\[
u(x) = \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\log|x-y|\,dy, \quad x \in \Omega \ (\text{resp. } \Omega'), \qquad (2.53)
\]
with $\rho$ satisfying the requirement that
\[
\frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\log|x-y|\,dy = u_0(x), \quad x \in \partial\Omega. \qquad (2.54)
\]
In the three-dimensional case we get the equation
\[
u(x) = -\frac{1}{4\pi}\int_{\partial\Omega} \frac{\rho(y)}{|x-y|}\,dy, \quad x \in \Omega \ (\text{resp. } \Omega'), \qquad (2.55)
\]
where $\rho$ satisfies the equation
\[
-\frac{1}{4\pi}\int_{\partial\Omega} \frac{\rho(y)}{|x-y|}\,dy = u_0(x), \quad x \in \partial\Omega. \qquad (2.56)
\]

For the two-dimensional interior Neumann problem we consider the equation
\[
\int_{\partial\Omega_\epsilon} \rho(y)\log|x-y|\,dy = \int_{\Gamma_\epsilon} \rho(y)\log|x-y|\,dy + \int_{\partial\Omega_\epsilon\setminus\Gamma_\epsilon} \rho(y)\log|x-y|\,dy,
\]
where $\partial\Omega_\epsilon$ is the boundary of the domain $\Omega_\epsilon = \Omega \setminus B(x,\epsilon)$ and $\Gamma_\epsilon = \partial B(x,\epsilon)\cap\bar\Omega$. By taking the directional derivative in the normal direction $n$ of both sides of this equation, letting $\epsilon \to 0$, using the boundary condition $\partial u/\partial n = u_1$ and arguments similar to those used in the proof of Proposition 2.15, we conclude that $u$ is represented as the single-layer potential (2.53) with $\rho$ satisfying the second-kind Fredholm integral equation
\[
-\frac{\rho(x)}{2} + \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy = u_1(x), \quad x \in \partial\Omega. \qquad (2.57)
\]

In a similar manner, the solution $u$ of the three-dimensional interior Neumann problem is represented as the single-layer potential (2.55) with $\rho$ satisfying the second-kind Fredholm integral equation
\[
-\frac{\rho(x)}{2} - \frac{1}{4\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy = u_1(x), \quad x \in \partial\Omega. \qquad (2.58)
\]

For the exterior Neumann problem, a similar argument leads to the result that the solution $u$ in the two- and three-dimensional cases is represented as (2.53) and (2.55), respectively, with $\rho$ satisfying the second-kind Fredholm integral equations
\[
\frac{\rho(x)}{2} + \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy = u_1(x), \quad x \in \partial\Omega, \qquad (2.59)
\]
and
\[
\frac{\rho(x)}{2} - \frac{1}{4\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy = u_1(x), \quad x \in \partial\Omega. \qquad (2.60)
\]

We close this section with a review of double-layer potentials for harmonic functions. We start with the two-dimensional interior Dirichlet problem. Suppose that $u = u^{+} \in C(\bar\Omega)$ is a harmonic function in $\Omega$. Let $u^{-}$ be the solution of the exterior Neumann problem with
\[
\frac{\partial u^{-}(y)}{\partial n} = \frac{\partial u^{+}(y)}{\partial n}, \quad y \in \partial\Omega,
\]
which satisfies (2.39). It follows from Propositions 2.15 and 2.17 that
\[
u(x) = \frac{1}{2\pi}\int_{\partial\Omega} \bigl(u^{+}(y) - u^{-}(y)\bigr)\frac{\partial}{\partial n}\log|x-y|\,dy, \quad x \in \Omega.
\]
This equation can be written as a double-layer potential,
\[
u(x) = \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy, \quad x \in \Omega, \qquad (2.61)
\]


with $\rho = u^{+} - u^{-}$. According to the proof of Proposition 2.15, we have for $x \in \partial\Omega$ that
\[
\lim_{\tilde{x}\to x,\ \tilde{x}\in\Omega} \int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|\tilde{x}-y|\,dy = -\pi\rho(x) + \int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy.
\]
This, together with the boundary condition of the Dirichlet problem, shows that $\rho$ satisfies the second-kind Fredholm integral equation
\[
-\frac{\rho(x)}{2} + \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy = u_0(x), \quad x \in \partial\Omega. \qquad (2.62)
\]

Similarly, the solution $u$ of the three-dimensional interior Dirichlet problem is represented as the double-layer potential
\[
u(x) = -\frac{1}{4\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy, \quad x \in \Omega,
\]
with $\rho$ satisfying the second-kind Fredholm integral equation
\[
\frac{\rho(x)}{2} - \frac{1}{4\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy = u_0(x), \quad x \in \partial\Omega.
\]

For the exterior Dirichlet problem, the corresponding integral equations in the two- and three-dimensional cases are
\[
\frac{\rho(x)}{2} + \frac{1}{2\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\log|x-y|\,dy = u_0(x), \quad x \in \partial\Omega,
\]
and
\[
-\frac{\rho(x)}{2} - \frac{1}{4\pi}\int_{\partial\Omega} \rho(y)\frac{\partial}{\partial n}\frac{1}{|x-y|}\,dy = u_0(x), \quad x \in \partial\Omega,
\]
respectively.

2.2 General theory of projection methods

The main concern of the present section is the general theory of projection methods for the approximate solution of operator equations of the form
\[
Au = f, \qquad (2.63)
\]
where $A \in B(X,Y)$ and $f \in Y$ are given and $u \in X$ is the solution to be determined. The case of central importance to us takes the form $A = I - K$, where $I$ is the identity operator in $B(X)$ and $K$ is a compact operator in $B(X)$. In this case (2.63) becomes
\[
(I - K)u = f, \qquad (2.64)
\]
a Fredholm equation of the second kind.
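As a concrete illustration (a made-up example, not from the text), a second-kind equation $(I-K)u = f$ with the smooth separable kernel $k(s,t) = st$ on $[0,1]$ can be discretized by a standard Nyström-type scheme: replace the integral by a quadrature rule and solve the resulting linear system. Choosing $u(s) = s$ as the exact solution gives $Ku(s) = s\int_0^1 t^2\,dt = s/3$, hence $f(s) = \tfrac{2}{3}s$:

```python
import numpy as np

# Nyström discretization of u(s) - ∫_0^1 s*t*u(t) dt = f(s) on [0,1],
# using the midpoint rule with m nodes. Exact solution: u(s) = s when
# f(s) = (2/3) s, since K u(s) = s ∫_0^1 t^2 dt = s/3.
m = 400
t = (np.arange(m) + 0.5) / m        # midpoint nodes
w = 1.0 / m                         # midpoint weights
K = np.outer(t, t) * w              # K[i, j] = k(t_i, t_j) * w_j
f = (2.0 / 3.0) * t

u = np.linalg.solve(np.eye(m) - K, f)
err = np.max(np.abs(u - t))         # compare with the exact solution u(s) = s
```

The error is governed by the quadrature rule, here $O(m^{-2})$ for the midpoint rule.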


2.2.1 Projection operators

We begin with a description of various projections, an essential tool for the development of approximation schemes for (2.63). The following notation will be used throughout the book. For a linear operator $A$ not defined on all of the linear space $X$, we denote by $D(A)$ its domain and by $N(A) = \{x : x \in X,\ Ax = 0\}$ its null space. For the range of $A$ we use $R(A) = \{Ax : x \in D(A)\}$. Alternatively, we may sometimes write $A(U)$ for the range of $A$, where $U$ is the domain of $A$. We start with the following definition.

Definition 2.19 Let $X$ be a normed linear space and $V$ a closed linear subspace of $X$. A bounded linear operator $P : X \to V$ is called a projection from $X$ onto $V$ if for all $v \in V$,
\[
Pv = v. \qquad (2.65)
\]

Note that a projection $P : X \to V$ necessarily has the property that $V = R(P)$. For later use we make the following remark.

Proposition 2.20 Let $X$ be a normed linear space and $P \in B(X)$. Then $P$ is a projection on $X$ if and only if $P^2 = P$. Moreover, in this case, if $P \ne 0$ then $\|P\| \ge 1$.

Proof If $P : X \to R(P)$ is a projection, then for all $x \in X$ we have that $P^2x = P(Px) = Px$. Conversely, if $P^2 = P$, then any $v \in R(P)$, written as $v = Px$ for some $x \in X$, satisfies the equation $Pv = P^2x = Px = v$. Finally, it follows from the equation $P^2 = P$ that $\|P\|^2 \ge \|P^2\| = \|P\|$, which implies that $\|P\| \ge 1$ when $P \ne 0$.
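A small numerical illustration (with a made-up matrix, not from the text): an oblique, i.e. non-orthogonal, projection is idempotent, and its norm can strictly exceed 1, consistent with the bound $\|P\| \ge 1$:

```python
import numpy as np

# Oblique projection onto span{(1,0)} along span{(1,-1)}: P maps (x,y) to (x+y, 0).
P = np.array([[1.0, 1.0],
              [0.0, 0.0]])
idempotent = np.allclose(P @ P, P)   # P^2 = P, so P is a projection
norm_P = np.linalg.norm(P, 2)        # spectral norm; here sqrt(2) > 1
```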

Now we describe the three kinds of projections that are most important for the practical development of approximation schemes for solving operator equations. We have in mind the well-known orthogonal and interpolation projections, and the perhaps less familiar concept of a projection defined by a generalized best approximation.

1. Orthogonal projections
We recall the standard definition of the orthogonal projection on a Hilbert space.

Definition 2.21 Let $X$ be a Hilbert space with inner product $(\cdot,\cdot)$. Two vectors $x, y \in X$ are said to be orthogonal provided that $(x,y) = 0$. If $V$ is a nontrivial closed linear subspace of $X$, then a linear operator $P$ from $X$ onto $V$ is called the orthogonal projection if for all $x \in X$, $y \in V$, it satisfies the equation
\[
(Px, y) = (x, y). \qquad (2.66)
\]


In other words, the orthogonal projection onto $V$ has the property that $x - Px$ is orthogonal to all $y \in V$. The orthogonal projection satisfies $\|P\| = 1$ and is self-adjoint, that is, $P^{*} = P$. Moreover, we have the following well-known extremal characterization of the orthogonal projection.

Proposition 2.22 If $X$ is a Hilbert space and $V$ a nontrivial closed linear subspace of $X$, then there exists an orthogonal projection $P$ from $X$ onto $V$, and for all $x \in X$,
\[
\|x - Px\| = \min\{\|x - v\| : v \in V\}.
\]
Moreover, the last equation uniquely characterizes $Px \in V$.

Proof The existence of $Px$ follows from the completeness of $X$ and the parallelogram law. The remaining claim follows from the definition of the orthogonal projection, which gives, for $x \in X$ and $v \in V$, that
\[
\|x - v\|^2 = \|x - Px\|^2 + \|Px - v\|^2.
\]
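Numerically (an illustrative sketch with a made-up random subspace), the orthogonal projection onto the column span of a matrix $Q$ with orthonormal columns is $P = QQ^{\mathsf T}$; the residual $x - Px$ is orthogonal to $V$, and $Px$ is the closest point of $V$ to $x$, matching the extremal characterization above:

```python
import numpy as np

rng = np.random.default_rng(0)
A = rng.standard_normal((6, 2))
Q, _ = np.linalg.qr(A)               # orthonormal basis of V = span(A)
P = Q @ Q.T                          # orthogonal projection onto V

x = rng.standard_normal(6)
r = x - P @ x
orth = np.abs(Q.T @ r).max()         # residual is orthogonal to V (≈ 0)

# Px minimizes ||x - v|| over v in V: compare with random competitors in V.
dist_P = np.linalg.norm(x - P @ x)
dists = [np.linalg.norm(x - A @ rng.standard_normal(2)) for _ in range(100)]
```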

2. Interpolating projections
We next introduce the concept of interpolating projections.

Definition 2.23 Let $X$ be a Banach space and $V$ a finite-dimensional subspace of $X$. A subset $\{\ell_j : j \in \mathbb{N}_m\}$ of the dual space $X^{*}$ is V-unisolvent if for any $\{c_j : j \in \mathbb{N}_m\} \subseteq \mathbb{R}$ there exists a unique element $v \in V$ satisfying, for all $j \in \mathbb{N}_m$, the equation
\[
\ell_j(v) = c_j. \qquad (2.67)
\]

To emphasize the pairing between $X$ and $X^{*}$, the value of a linear functional $\ell \in X^{*}$ at $x \in X$ will often be denoted by $\langle x, \ell\rangle$; that is, we define $\langle x, \ell\rangle = \ell(x)$. This convenient notation is standard, and the next proposition is elementary. To state it we use the Kronecker symbol $\delta_{ij}$, $i,j \in \mathbb{N}_m$; that is, $\delta_{ij} = 0$ except for $i = j$, in which case $\delta_{ij} = 1$.

Proposition 2.24 If $X$ is a Banach space and $V$ is an $m$-dimensional subspace of $X$, then $\{\ell_j : j \in \mathbb{N}_m\}$ is V-unisolvent if and only if there exists a linearly independent set $\{x_j : j \in \mathbb{N}_m\} \subseteq V$ which satisfies, for all $i,j \in \mathbb{N}_m$, the equation
\[
\ell_j(x_i) = \delta_{ij}. \qquad (2.68)
\]
In this case, the operator $P : X \to V$ defined for each $x \in X$ and $j \in \mathbb{N}_m$ by
\[
\langle Px, \ell_j\rangle = \langle x, \ell_j\rangle
\]


is a projection from $X$ onto $V$ and is given by the formula
\[
Px = \sum_{j\in\mathbb{N}_m} \ell_j(x)\,x_j. \qquad (2.69)
\]

According to the above result, any $m$-dimensional subspace $V$ of $X$ and any set of linear functionals $\{\ell_j : j \in \mathbb{N}_m\}$ in $X^{*}$ which is V-unisolvent determine the projection (2.69). An important special case occurs when the Banach space $X$ consists of real-valued functions on a compact set $\Omega$ of $\mathbb{R}^d$. In this case, if there is a subset of points $\{t_j : j \in \mathbb{N}_m\}$ in $\Omega$ such that the linear functionals $\ell_j$, $j \in \mathbb{N}_m$, are defined for each $x \in X$ by the equation
\[
\ell_j(x) = x(t_j), \qquad (2.70)
\]
that is, $\ell_j$ is the point evaluation functional at $t_j$, then the operator $P : X \to V$ defined by (2.69) is called the Lagrange interpolation. If for some $j \in \mathbb{N}_m$, $\ell_j(x)$ is determined not only by the value of the function $x$ at some point of $\Omega$ but also by derivatives of $x$, then $P$ is called the Hermite interpolation. In the case that $P \in B(C(\Omega))$ is a Lagrange interpolation, its operator norm is given by
\[
\|P\| = \max\left\{ \sum_{j\in\mathbb{N}_m} |x_j(t)| : t \in \Omega \right\}. \qquad (2.71)
\]
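The quantity (2.71) is the Lebesgue constant of the interpolation scheme. A quick sketch (illustrative, not from the text) computes it for 11-point polynomial interpolation on $[-1,1]$, where the $x_j$ are the Lagrange cardinal polynomials: Chebyshev points give a far smaller operator norm than equally spaced points.

```python
import numpy as np

def lebesgue_constant(nodes, m=5001):
    # ||P|| = max_t Σ_j |x_j(t)|, where x_j are the Lagrange cardinal functions.
    t = np.linspace(-1.0, 1.0, m)
    s = np.zeros(m)
    for j, tj in enumerate(nodes):
        others = np.delete(nodes, j)
        # Cardinal function x_j(t) = Π_{i≠j} (t - t_i) / (t_j - t_i).
        card = np.prod((t[:, None] - others) / (tj - others), axis=1)
        s += np.abs(card)
    return s.max()

n = 11                                   # number of nodes (degree 10)
equi = np.linspace(-1.0, 1.0, n)
cheb = np.cos((2 * np.arange(n) + 1) * np.pi / (2 * n))
L_equi = lebesgue_constant(equi)         # grows exponentially in n (≈ 30 here)
L_cheb = lebesgue_constant(cheb)         # grows only logarithmically (≈ 2.5 here)
```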

3. Generalized best approximation projections
As our final example we describe the generalized best approximation projections, which were introduced in [77]. Let $X$ be a Banach space and $X^{*}$ its dual space. For $n \in \mathbb{N}$ we assume that $X_n$ and $Y_n$ are two finite-dimensional subspaces of $X$ and $X^{*}$, respectively, with the same dimension.

Definition 2.25 For $x \in X$, an element $P_nx \in X_n$ is called a generalized best approximation to $x$ from $X_n$ with respect to $Y_n$ if for all $\ell \in Y_n$ it satisfies the equation
\[
\langle x - P_nx, \ell\rangle = 0. \qquad (2.72)
\]
Similarly, given $\ell \in X^{*}$, an element $P_n'\ell \in Y_n$ is called a generalized best approximation to $\ell$ from $Y_n$ with respect to $X_n$ if for all $x \in X_n$ it satisfies the equation
\[
\langle x, \ell - P_n'\ell\rangle = 0.
\]

Figure 2.1 displays schematically the generalized best approximation projection $P_nx$ to $x \in X$ from $X_n$ with respect to $Y_n$ in a Hilbert space. In this case equation (2.72) means $(x - P_nx) \perp Y_n$.


[Figure 2.1 Generalized best approximation projections: the subspaces $X_n$ and $Y_n$, the point $x$ and its projection $P_nx$.]

For further explanation we provide the following example. Let $I = [0,1]$ and $X = L^2(I)$. We subdivide the interval $I$ into $n$ subintervals by the points $t_j = jh$, $j \in \mathbb{N}_{n-1}$, and set $t_{j-\frac12} = (j - \frac12)h$, $j \in \mathbb{N}_n$, where $h = 1/n$. We let $X_n$ be the space of continuous piecewise linear polynomials with knots at $t_j$, $j \in \mathbb{N}_{n-1}$, and $Y_n$ the space of piecewise constant functions with knots at $t_{j-\frac12}$, $j \in \mathbb{N}_n$. Clearly, $\dim X_n = \dim Y_n = n + 1$. For $x \in X$ we define the generalized best approximation $P_nx$ to $x$ from $X_n$ with respect to $Y_n$ by the equation
\[
\langle x - P_nx, y\rangle = 0 \quad\text{for all } y \in Y_n.
\]
Let
\[
\phi_j(t) =
\begin{cases}
1 - (t_j - t)/h, & t_{j-1} \le t \le t_j,\\
1 - (t - t_j)/h, & t_j < t \le t_{j+1},\\
0, & \text{elsewhere},
\end{cases}
\qquad j \in \mathbb{Z}_{n+1},
\]
and
\[
\psi_j(t) =
\begin{cases}
1, & t_{j-\frac12} \le t \le t_{j+\frac12},\\
0, & \text{elsewhere},
\end{cases}
\qquad j \in \mathbb{Z}_{n+1}.
\]

The two groups of functions $\{\phi_j : j \in \mathbb{Z}_{n+1}\}$ and $\{\psi_j : j \in \mathbb{Z}_{n+1}\}$ form bases for $X_n$ and $Y_n$, respectively. Thus $P_nx$ can be written in the form $P_nx = \sum_{j\in\mathbb{Z}_{n+1}} c_j\phi_j$, where the vector $u_n = [c_j : j \in \mathbb{Z}_{n+1}]$ satisfies the linear equation
\[
A_n u_n = f_n,
\]
in which $A_n = [\langle\phi_j, \psi_i\rangle : i,j \in \mathbb{Z}_{n+1}]$ and $f_n = [\langle x, \psi_i\rangle : i \in \mathbb{Z}_{n+1}]$.

We now present a necessary and sufficient condition under which each $x \in X$ has a unique generalized best approximation from $X_n$ with respect to $Y_n$. In what follows we denote by $X_n^{\perp}$ the set of all linear functionals in $X^{*}$ which vanish on the subspace $X_n$, that is, the annihilator of $X_n$ in $X^{*}$.
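The linear system $A_nu_n = f_n$ from the piecewise-linear/piecewise-constant example above can be assembled directly. The sketch below is illustrative, with made-up parameters; it assumes the boundary functions $\phi_0, \phi_n$ and the intervals for $\psi_0, \psi_n$ are truncated to $[0,1]$. Since $P_n$ is a projection, an input already in $X_n$, such as $x(t) = t$, is reproduced exactly, so the computed coefficients equal the nodal values $t_j$:

```python
import numpy as np

n = 4
h = 1.0 / n
knots = h * np.arange(n + 1)                     # t_j = j h, j = 0, ..., n

# Fine midpoint grid for quadrature; the cell edges align with all breakpoints
# (multiples of h/2), so the piecewise-linear integrands are integrated exactly.
m = 32 * n
t = (np.arange(m) + 0.5) / m
w = 1.0 / m

def phi(j, t):                                   # hat function centered at t_j
    return np.clip(1.0 - np.abs(t - knots[j]) / h, 0.0, None)

def psi(i, t):                                   # indicator of [t_{i-1/2}, t_{i+1/2}] ∩ [0,1]
    lo, hi = max((i - 0.5) * h, 0.0), min((i + 0.5) * h, 1.0)
    return ((t >= lo) & (t <= hi)).astype(float)

A = np.array([[w * np.sum(phi(j, t) * psi(i, t)) for j in range(n + 1)]
              for i in range(n + 1)])            # A_n[i, j] = <phi_j, psi_i>
x = t                                            # x(t) = t lies in X_n
f = np.array([w * np.sum(x * psi(i, t)) for i in range(n + 1)])
c = np.linalg.solve(A, f)                        # coefficients of P_n x
# c should equal the nodal values t_j, because P_n reproduces elements of X_n.
```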

Proposition 2.26 For each $x \in X$ the generalized best approximation $P_nx$ to $x$ from $X_n$ with respect to $Y_n$ exists and is unique if and only if
\[
Y_n \cap X_n^{\perp} = \{0\}. \qquad (2.73)
\]
When this is the case and $P_nx$ is the generalized best approximation of $x$, $P_n : X \to X_n$ is a projection.

Proof Let $x \in X$ be given and assume that the spaces $X_n$ and $Y_n$ have bases $\{x_j : j \in \mathbb{N}_m\}$ and $\{\ell_j : j \in \mathbb{N}_m\}$, respectively. The existence and uniqueness of
\[
P_nx = \sum_{i\in\mathbb{N}_m} c_i x_i \in X_n
\]
satisfying equation (2.72) means that the linear system
\[
\sum_{i\in\mathbb{N}_m} c_i \langle x_i, \ell_j\rangle = \langle x, \ell_j\rangle, \quad j \in \mathbb{N}_m, \qquad (2.74)
\]
has a unique solution $c = [c_j : j \in \mathbb{N}_m] \in \mathbb{R}^m$ for any $x \in X$. This is equivalent to the fact that the $m \times m$ matrix
\[
A = [\langle x_j, \ell_i\rangle : i,j \in \mathbb{N}_m]
\]
is nonsingular. Moreover, a vector $b = [b_j : j \in \mathbb{N}_m] \in \mathbb{R}^m$ is in the null space of $A$ if and only if the linear functional $\ell = \sum_{j\in\mathbb{N}_m} b_j\ell_j$ is in the subspace $Y_n \cap X_n^{\perp}$. This proves the first assertion. For the remaining claim, when $Y_n \cap X_n^{\perp} = \{0\}$, we solve (2.74) and define, for $x \in X$,
\[
P_nx = \sum_{j\in\mathbb{N}_m} c_j x_j.
\]
Clearly, $P_n : X \to X_n$ is a linear operator that by construction satisfies (2.72). Let us show that $P_n$ is also a projection. For any $x \in X$, $P_n^2x \in X_n$ is a generalized best approximation to $P_nx$ from $X_n$ with respect to $Y_n$. By Definition 2.25 we conclude, for all $\ell \in Y_n$, that
\[
\langle P_nx - P_n^2x, \ell\rangle = 0.
\]
This together with (2.72) implies, for all $\ell \in Y_n$, that
\[
\langle x - P_n^2x, \ell\rangle = 0.
\]
By the uniqueness of the solution to equation (2.74) we obtain that $P_n^2x = P_nx$, and so $P_n$ is indeed a projection.


We state the corresponding result for the generalized best approximation to from Yn with respect to Xn The proof is similar to that of Proposition 226where in this case the transpose of the matrix A is used

Proposition 2.27 For each $\ell \in Y$, the generalized best approximation to $\ell$ from $Y_n$ with respect to $X_n$ exists and is unique if and only if
$$X_n \cap Y_n^\perp = \{0\}. \tag{2.75}$$
When this is the case and $P'_n \ell$ is the generalized best approximation of $\ell$ from $Y_n$ with respect to $X_n$, then $P'_n : Y \to Y_n$ is a projection.

In view of Propositions 2.26 and 2.27, we shall always assume that condition (2.73) (resp. (2.75)) holds whenever we refer to $P_n$ (resp. $P'_n$) as the generalized best approximation projection from $X_n$ with respect to $Y_n$ (resp. the generalized best approximation projection from $Y_n$ with respect to $X_n$). We also remark that condition (2.73) or (2.75) implies that $\dim Y_n = \dim X_n$.

Proposition 2.26 allows us to connect the concept of the generalized best approximation to the familiar concept of dual bases. Indeed, by the Hahn–Banach theorem, every finite-dimensional subspace $X_n$ of a normed linear space $X$ has a dual basis (see Theorem A.32 in the Appendix). Specifically, if $\{x_j : j \in \mathbb{N}_m\}$ is a basis for $X_n$, there is a subset $\{\ell_j : j \in \mathbb{N}_m\} \subseteq X^*$ such that for $i, j \in \mathbb{N}_m$, $\ell_j(x_i) = \delta_{ij}$. According to Proposition 2.26, the generalized best approximation to $x \in X$ from $X_n$ with respect to $Y_n = \operatorname{span}\{\ell_j : j \in \mathbb{N}_m\}$ exists and is given by
$$P_n x = \sum_{j \in \mathbb{N}_m} \ell_j(x)\, x_j.$$

Conversely, if (2.73) holds, we can find a dual basis for $X_n$ in $Y_n$.

Proposition 2.28 If $Y_n \cap X_n^\perp = \{0\}$, then $P'_n = P_n^*$ and $Y_n = P_n^* X^*$.

Proof. For all $x \in X$ and $\ell \in X^*$ we have that
$$\langle x, P'_n \ell \rangle = \langle P_n x, P'_n \ell \rangle = \langle P_n x, \ell \rangle = \langle x, P_n^* \ell \rangle,$$
from which the desired result follows.

The next proposition gives an alternative sufficient condition to ensure that every $x \in X$ has a unique generalized best approximation in $X_n$ with respect to $Y_n$.

Proposition 2.29 If there is a constant $c > 0$ and a linear operator $T_n : X_n \to Y_n$ with $T_n X_n = Y_n$ such that for all $x \in X_n$,
$$\|x\|^2 \le c\, \langle x, T_n x \rangle,$$
then (2.73) holds.


Fredholm equations and projection theory

Proof. Our hypothesis implies that for any $\ell \in Y_n \cap X_n^\perp$ there exists $x \in X_n$ such that $T_n x = \ell$, and so
$$\|x\| \le \sqrt{c}\, \langle x, T_n x \rangle^{1/2} = \sqrt{c}\, \langle x, \ell \rangle^{1/2} = 0.$$
Therefore we obtain that $x = 0$, and consequently we also have that $\ell = T_n x = 0$.

The next issue that concerns us is the conditions which guarantee that a sequence of projections $\{P_n : n \in \mathbb{N}\}$ converges pointwise to the identity operator in $X$, that is, $P_n \xrightarrow{s} I$. As we shall see later, this property is crucial for the analysis of projection methods. Generally, condition (2.73) is not sufficient to ensure that this is the case. Therefore we need to introduce the concept of a regular pair.

Definition 2.30 A pair of sequences of subspaces $\{X_n \subseteq X : n \in \mathbb{N}\}$ and $\{Y_n \subseteq Y : n \in \mathbb{N}\}$ is called a regular pair if there is a positive constant $c$ such that for all $n \in \mathbb{N}$ there are linear operators $T_n : X_n \to Y_n$ with $T_n X_n = Y_n$ satisfying the conditions that for all $x \in X_n$:

(i) $\|x\| \le c\, \langle x, T_n x \rangle^{1/2}$;
(ii) $\|T_n x\| \le c \|x\|$.

In this definition it is important to realize that the constant $c$ appearing above is independent of $n \in \mathbb{N}$.

If $X$ is a Hilbert space, so that $X^*$ can be identified by the Riesz representation theorem (see Theorem A.31 in the Appendix) with $X$ itself, and we also have for all $n \in \mathbb{N}$ that $X_n = Y_n$, then conditions (i) and (ii) are satisfied with $T_n = I$, $n \in \mathbb{N}$. Thus, in this case, we have a regular pair. On the other hand, if $\{X_n, Y_n\}$ is a regular pair, then from Proposition 2.29 we conclude that (2.73) holds, and so $P_n$ is well defined.

For the next proposition we find it appropriate to introduce the quantity
$$\operatorname{dist}(x, X_n) = \min\{\|x - u\| : u \in X_n\}.$$

Proposition 2.31 If $\{X_n : n \in \mathbb{N}\}$ and $\{Y_n : n \in \mathbb{N}\}$ form a regular pair and $P_n : X \to X_n$ is the corresponding generalized best approximation projection, then for each $x \in X$ and $n \in \mathbb{N}$:

(i) $\|P_n\| \le c^3$;
(ii) $\operatorname{dist}(x, X_n) \le \|x - P_n x\| \le (1 + c^3)\operatorname{dist}(x, X_n)$.

Consequently, if for each $x \in X$, $\lim_{n\to\infty} \operatorname{dist}(x, X_n) = 0$, then $\lim_{n\to\infty} P_n x = x$.


Proof. For each $x \in X$ and $u \in X_n$ we have that
$$\begin{aligned}
\|u - P_n x\|^2 &\le c^2 \langle u - P_n x, T_n(u - P_n x) \rangle = c^2 \langle u - x, T_n(u - P_n x) \rangle \\
&\le c^2 \|u - x\| \, \|T_n(u - P_n x)\| \le c^3 \|u - x\| \, \|u - P_n x\|.
\end{aligned}$$
Therefore we conclude that
$$\|u - P_n x\| \le c^3 \|u - x\|.$$
The choice $u = 0$ in the above inequality establishes (i). As for (ii), the lower bound is obvious, while the upper bound is obtained by using the inequality that for any $u \in X_n$,
$$\|x - P_n x\| \le \|x - u\| + \|u - P_n x\| \le (1 + c^3)\|u - x\|.$$
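In the Hilbert-space case with $X_n = Y_n$ and $T_n = I$, the projection $P_n$ of Proposition 2.31 is the orthogonal projection. The following sketch (discrete spaces and the target vector are assumptions for illustration) checks that $\|x - P_n x\| = \operatorname{dist}(x, X_n)$ is nonincreasing as nested subspaces grow:

```python
import numpy as np

# Sketch (Hilbert case, T_n = I): orthogonal projection of a vector in R^N
# onto nested subspaces X_n spanned by the first n discrete cosine modes.
N = 256
s = np.linspace(0.0, 1.0, N)
x = np.exp(s) * np.sin(3 * s)            # the element to approximate

# Orthonormalize the cosine modes; column spans are nested by construction.
Q, _ = np.linalg.qr(np.cos(np.outer(s, np.arange(N)) * np.pi))

errs = []
for n in [2, 4, 8, 16, 32]:
    Pn_x = Q[:, :n] @ (Q[:, :n].T @ x)   # orthogonal projection onto X_n
    errs.append(np.linalg.norm(x - Pn_x))

# Nested spaces: dist(x, X_n) = ||x - P_n x|| is nonincreasing in n.
assert all(e1 >= e2 - 1e-12 for e1, e2 in zip(errs, errs[1:]))
```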

2.2.2 Projection methods for operator equations

In this subsection we describe projection methods for solving operator equations, which include the Petrov–Galerkin method, the Galerkin method, the least-squares method and the collocation method.

Let $X$ and $Y$ be Banach spaces, $A : X \to Y$ a bounded linear operator and $f \in Y$. We wish to find a $u \in X$ such that
$$Au = f,$$
if it exists. Projection methods have the common feature of specifying a sequence $\{X_n : n \in \mathbb{N}\}$ of subspaces and choosing a $u_n \in X_n$ for which the residual error
$$r_n = A u_n - f$$
is "small," so that $u_n$ is a good approximation to the desired $u$. How this is done depends on the method used. We shall review some of the principal strategies for making $r_n$ small.

We begin with a description of the Petrov–Galerkin method. The idea behind this method is to make the residual $r_n \in Y$ small by choosing finite-dimensional subspaces $L_n$, $n \in \mathbb{N}$, in $Y^*$ with $\dim L_n = \dim X_n$ and attempting to find $u_n \in X_n$ so that for all $\ell \in L_n$ we have
$$\langle A u_n, \ell \rangle = \langle f, \ell \rangle. \tag{2.76}$$


Specifically, we choose bases $X_n = \operatorname{span}\{x_j : j \in \mathbb{N}_m\}$ and $L_n = \operatorname{span}\{\ell_j : j \in \mathbb{N}_m\}$ and write $u_n$ in the form
$$u_n = \sum_{j \in \mathbb{N}_m} c_j x_j,$$
where the vector $\mathbf{u}_n = [c_j : j \in \mathbb{N}_m] \in \mathbb{R}^m$ must satisfy the linear equation
$$\mathbf{A}_n \mathbf{u}_n = \mathbf{f}_n,$$
where $\mathbf{A}_n = [\langle A x_j, \ell_i \rangle : i, j \in \mathbb{N}_m]$ and $\mathbf{f}_n = [\langle f, \ell_i \rangle : i \in \mathbb{N}_m]$.

For the purpose of theoretical analysis it is also useful to express $u_n \in X_n$ as the solution of an operator equation. This can be done by specifying any sequence $\{Y_n : n \in \mathbb{N}\}$ of subspaces for which there are generalized best approximation projections $P_n : Y \to Y_n$ with respect to $L_n$. This means that for all $\ell \in L_n$, not only $\langle A u_n - f, \ell \rangle = 0$ but also $\langle A u_n - P_n A u_n, \ell \rangle = 0$ and $\langle f - P_n f, \ell \rangle = 0$. From these three equations we conclude that $P_n A u_n - P_n f \in Y_n \cap L_n^\perp = \{0\}$, and so we obtain that
$$P_n A u_n = P_n f.$$
Therefore we conclude that equation (2.76) is equivalent to the operator equation
$$A_n u_n = P_n f,$$
where $A_n = P_n A|_{X_n}$. Here the symbol $A|_{X_n}$ stands for the operator $A$ restricted to the subspace $X_n$, and so $A_n \in \mathcal{B}(X_n, Y_n)$. This means that the operator $A_n$ can be realized as a square matrix, because $\dim X_n = \dim L_n = \dim Y_n$.
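To make the assembly concrete, here is a minimal sketch of the Petrov–Galerkin system $\mathbf{A}_n\mathbf{u}_n = \mathbf{f}_n$ for a second-kind operator $A = I - K$; the kernel, the piecewise-constant trial space, the matching test functionals and the midpoint quadrature are all assumptions for illustration, not choices made in the text:

```python
import numpy as np

# Petrov-Galerkin sketch for (Au)(s) = u(s) - int_0^1 k(s,t) u(t) dt = f(s),
# with an assumed smooth kernel, piecewise-constant trial functions x_j
# (indicators of subintervals), and test functionals l_i = integration
# against the same indicators; integrals by midpoint quadrature.
m = 64
edges = np.linspace(0.0, 1.0, m + 1)
mid = 0.5 * (edges[:-1] + edges[1:])
h = 1.0 / m

k = lambda s, t: 0.5 * np.exp(-np.abs(s - t))     # assumed kernel
u_true = lambda s: np.cos(2 * np.pi * s)          # manufactured solution

# f = u_true - K u_true, with the integral done by the same quadrature.
Ku = (k(mid[:, None], mid[None, :]) * (h * u_true(mid))).sum(axis=1)
f = u_true(mid) - Ku

# A_n[i, j] = <A x_j, l_i>: the identity part gives h on the diagonal,
# the kernel part collapses to h^2 * k(mid_i, mid_j) under midpoint rule.
An = h * np.eye(m) - h * h * k(mid[:, None], mid[None, :])
fn = h * f
c = np.linalg.solve(An, fn)                       # coefficients of u_n

assert np.max(np.abs(c - u_true(mid))) < 1e-8     # consistent discretization
```

Because $f$ is manufactured with the same quadrature, the discrete system recovers the grid values of the manufactured solution essentially exactly.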

We remark that the commonly used Galerkin method and also the least-squares method are special cases of the Petrov–Galerkin method. Specifically, if $X = Y$ is a Hilbert space, we identify $X^*$ with $X$ and choose $X_n = Y_n$, $n \in \mathbb{N}$; then the projection $P_n$ above is the orthogonal projection of $X$ onto $X_n$, and equation (2.76) means that $A u_n - f \in X_n^\perp$ with $u_n \in X_n$. Alternatively, the least-squares method chooses $Y_n = A(X_n)$ instead of the choice $X_n = Y_n$ specified for the Galerkin method, which yields the requirement $A u_n - f \in A(X_n)^\perp$ with $u_n \in X_n$. This means that $u_n$ satisfies the equation
$$\|f - A u_n\| = \operatorname{dist}(f, A(X_n)).$$
Equivalently, $u_n$ has the property that $A^*(A u_n - f) \in X_n^\perp$. So, in particular, we see that the least-squares method is equivalent to the Galerkin method applied to the operator equation
$$A^* A u = A^* f.$$


Our final example is the collocation method. This approaches the task of making the residual $r_n$ small by making it zero on some finite set of points. Specifically, the setup requires that $Y = C(\Omega)$, where $\Omega$ is a compact subset of $\mathbb{R}^d$. Choose a finite set $T \subseteq \Omega$ and demand that $r_n|_T = 0$, where $r_n|_T$ denotes the restriction of the function $r_n$ to the finite set $T$. Again, to solve for $u_n \in X$ we restrict our search to $u_n \in X_n$, where $\dim X_n = \operatorname{card}\, T$, and $\operatorname{card}\, T$ denotes the number of distinct elements in $T$, that is, the cardinality of $T$. In terms of a basis for $X_n$ we have, as before, $u_n = \sum_{j \in \mathbb{N}_m} c_j x_j$, $\mathbf{u}_n = [c_j : j \in \mathbb{N}_m]$ and $\mathbf{f}_n = [f(t_j) : j \in \mathbb{N}_m]$, where $T = \{t_j : j \in \mathbb{N}_m\}$, and $\mathbf{A}_n = [(A x_i)(t_j) : i, j \in \mathbb{N}_m]$. These quantities are joined by the linear system of equations
$$\mathbf{A}_n \mathbf{u}_n = \mathbf{f}_n.$$
An operator version of this linear system follows by choosing a subspace $Y_n \subseteq C(\Omega)$ with $\dim Y_n = \operatorname{card}\, T$ which admits an interpolation projection $P_n : Y \to Y_n$ corresponding to the family of linear functionals $\{\delta_t : t \in T\}$, where the linear functional $\delta_t$ is defined to be the "delta functional" at $t$; that is, for each $f \in C(\Omega)$ we have that $\delta_t(f) = f(t)$. Therefore $(P_n f)|_T = 0$ if and only if $f|_T = 0$, and so we get that
$$A_n u_n = P_n f,$$
where $A_n = P_n A|_{X_n}$.
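A matching sketch of the collocation system (kernel, nodes, manufactured solution and quadrature are again assumed choices for illustration):

```python
import numpy as np

# Collocation sketch for u(s) - int_0^1 k(s,t) u(t) dt = f(s): make the
# residual r_n vanish on a finite point set T. Trial space X_n = polynomials
# of degree < m (monomial basis); integrals by Gauss-Legendre quadrature.
m = 8
T = 0.5 * (1.0 - np.cos(np.pi * (np.arange(m) + 0.5) / m))  # nodes in (0, 1)
q, w = np.polynomial.legendre.leggauss(30)
q = 0.5 * (q + 1.0); w = 0.5 * w                            # quadrature on [0, 1]

k = lambda s, t: 0.3 * np.cos(s + t)                        # assumed kernel, ||K|| < 1
u_true = lambda s: 1.0 / (1.0 + s)                          # manufactured solution
f = lambda s: u_true(s) - (w * k(s[:, None], q[None, :]) * u_true(q)).sum(axis=1)

# Row j, column i: (A x_i)(t_j) = t_j^i - int_0^1 k(t_j, t) t^i dt.
V = T[:, None] ** np.arange(m)[None, :]
KV = np.column_stack([(w * k(T[:, None], q[None, :]) * q**i).sum(axis=1)
                      for i in range(m)])
c = np.linalg.solve(V - KV, f(T))

s = np.linspace(0.0, 1.0, 101)
u_n = (s[:, None] ** np.arange(m)[None, :]) @ c
assert np.max(np.abs(u_n - u_true(s))) < 1e-3   # small error even off the nodes
```

The residual is zero exactly at the nodes $t_j$; the final check confirms that, for this smooth problem, the error stays small between them as well.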

2.2.3 Convergence and stability

In this subsection we discuss the convergence and stability of projection methods for operator equations. The setup is as before; namely, $X$ and $Y$ are normed linear spaces, $A \in \mathcal{B}(X, Y)$, $\{X_n : n \in \mathbb{N}\}$ and $\{Y_n : n \in \mathbb{N}\}$ are two sequences of finite-dimensional subspaces of $X$ and $Y$, respectively, with $\dim X_n = \dim Y_n$, and $P_n : Y \to Y_n$ is a projection. The projection equation for our approximate solution $u_n \in X_n$ for a solution $u \in X$ to the equation $Au = f$ is
$$A_n u_n = P_n f, \tag{2.77}$$
where, as before,
$$A_n = P_n A|_{X_n}, \tag{2.78}$$
and $A_n \in \mathcal{B}(X_n, Y_n)$. Our goal is to clarify to what extent the sequence $\{u_n : n \in \mathbb{N}\}$, if it exists, approximates $u \in X$. We start with a definition.

Definition 2.32 The projection method above is said to be convergent if there exists an integer $q \in \mathbb{N}$ such that the operator $A_n \in \mathcal{B}(X_n, Y_n)$ is invertible for $n \ge q$, and for each $f \in A(X)$ the unique solution of (2.77), which we call $u_n = A_n^{-1} P_n f$, converges as $n \to \infty$ to a $u \in X$ that satisfies the operator equation $Au = f$.

We remark that convergence of the approximate solutions $\{u_n : n \in \mathbb{N}\}$ to $u$ as defined above does not require that the operator equation $Au = f$ have a unique solution, although this is often the case in applications.

We first describe a consequence of convergence.

Theorem 2.33 If the projection method is convergent, then there is an integer $q \in \mathbb{N}$ and a constant $c > 0$ such that for all $n \ge q$,
$$\|A_n^{-1} P_n A\| \le c. \tag{2.79}$$
If in addition $A$ is onto, that is, $A(X) = Y$, then the $q$ and $c$ above can be chosen so that for all $n \ge q$,
$$\|A_n^{-1}\| \le c. \tag{2.80}$$

Proof. The proof uses the uniform boundedness principle (Theorem A.25 in the Appendix). Specifically, since the projection method converges, we conclude for each $u \in X$ that the sequence $\{A_n^{-1} P_n A u : n \in \mathbb{N}\}$ converges in $X$ as $n \to \infty$, and hence is norm bounded for each $u \in X$. Thus an application of the uniform boundedness principle confirms (2.79). We note in passing that $A_n^{-1} P_n A \in \mathcal{B}(X, X_n)$ and, by equation (2.78), this operator is a projection of $X$ onto $X_n$.

As for (2.80), we argue in a similar fashion, using in this case the sequence $\{A_n^{-1} P_n f : n \ge q\}$, where $f$ can be chosen arbitrarily in $Y$ because the operator $A$ is assumed to be onto, to conclude, again by the uniform boundedness principle, that $\|A_n^{-1} P_n\|$ is bounded uniformly in $n \in \mathbb{N}$. Here we have that $A_n^{-1} P_n \in \mathcal{B}(Y, X_n)$. However, since
$$\begin{aligned}
\|A_n^{-1}\| &= \sup\{\|A_n^{-1} y\| : y \in Y_n,\ \|y\| = 1\} = \sup\{\|A_n^{-1} P_n y\| : y \in Y_n,\ \|y\| = 1\} \\
&\le \sup\{\|A_n^{-1} P_n y\| : y \in Y,\ \|y\| = 1\} = \|A_n^{-1} P_n\|,
\end{aligned} \tag{2.81}$$
the claim is confirmed.

Next we explore to what extent (2.79) and (2.80) are also sufficient for convergence of a projection method. To this end we introduce the notion of denseness of the sequence of subspaces $\{X_n : n \in \mathbb{N}\}$ in $X$.


Definition 2.34 We say that the sequence of subspaces $\{X_n : n \in \mathbb{N}\}$ has the denseness property in $X$ if for every $x \in X$,
$$\lim_{n\to\infty} \operatorname{dist}(x, X_n) = 0.$$

Theorem 2.35 If there exists a $q \in \mathbb{N}$ such that the operator $A_n \in \mathcal{B}(X_n, Y_n)$ is invertible for $n \ge q$, a positive constant $c$ such that for all $n \ge q$, $\|A_n^{-1} P_n A\| \le c$, and the sequence $\{X_n : n \in \mathbb{N}\}$ has the denseness property in $X$, then the operator $A$ is one to one and, for each $f \in \mathcal{R}(A)$, the projection method converges to $u \in X$, the unique solution of the equation $Au = f$.

Proof. The proof uses the fact that the operator $A_n^{-1} P_n A$ is a projection of $X$ onto $X_n$. Hence we have, for any $v \in X$, the inequality
$$\|A_n^{-1} P_n A v - v\| \le (1 + c)\operatorname{dist}(v, X_n), \tag{2.82}$$
and in particular, for $f \in \mathcal{R}(A)$, we have that
$$\|u_n - u\| \le (1 + c)\operatorname{dist}(u, X_n),$$
where the right-hand side tends to zero by the denseness property. Moreover, if $Av = 0$, then $A_n^{-1} P_n A v = 0$ and (2.82) gives $\|v\| \le (1 + c)\operatorname{dist}(v, X_n) \to 0$, so $v = 0$ and $A$ is one to one.

Although Theorem 2.35 shows that condition (2.79) nearly guarantees convergence, it is hard to apply in practice. Nevertheless, further exploitation of the inequality (2.82) will lead to some improvements. Indeed, for any projection $P_n : X \to X_n$ we have, for any $x \in X$,
$$\|P_n x - x\| \le (1 + \|P_n\|)\operatorname{dist}(x, X_n) \tag{2.83}$$
and
$$\operatorname{dist}(x, X_n) \le \|P_n x - x\|. \tag{2.84}$$
These inequalities lead to a useful criterion to ensure that a sequence of projections has the property $P_n \xrightarrow{s} I$.

Lemma 2.36 A sequence of projections $\{P_n : n \in \mathbb{N}\} \subseteq \mathcal{B}(X, X_n)$ has the property $P_n \xrightarrow{s} I$ if and only if the sequence $\{\|P_n\| : n \in \mathbb{N}\}$ is bounded and the sequence of subspaces $\{X_n : n \in \mathbb{N}\}$ has the denseness property in $X$.

Proof. This result follows directly from the uniform boundedness principle and inequalities (2.83) and (2.84).

We next comment on the denseness property. For this purpose we introduce the subspace
$$\limsup_{n\to\infty} X_n = \bigcup_{m \in \mathbb{N}} \bigcap_{n \ge m} X_n. \tag{2.85}$$


Proposition 2.37 If the subspace $\limsup_{n\to\infty} X_n$ is dense in $X$, then the sequence of subspaces $\{X_n : n \in \mathbb{N}\}$ has the denseness property in $X$.

Proof. If $\limsup_{n\to\infty} X_n$ is dense in $X$, then for any $x \in X$ and any $\varepsilon > 0$ there exist a positive integer $q \in \mathbb{N}$ and a $y \in X$ such that $\|x - y\| < \varepsilon$ and $y \in \bigcap_{n \ge q} X_n$. Hence for all $n \ge q$ we have that $\operatorname{dist}(x, X_n) < \varepsilon$; that is, $\{X_n : n \in \mathbb{N}\}$ has the denseness property in $X$.

The next proposition presents a necessary condition for the denseness property.

Proposition 2.38 If the sequence of subspaces $\{X_n : n \in \mathbb{N}\}$ has the denseness property, then $\bigcup_{n \in \mathbb{N}} X_n$ is dense in $X$.

Proof. This result follows from the fact that for all $x \in X$,
$$\operatorname{dist}\bigl(x, \cup_{m \in \mathbb{N}} X_m\bigr) \le \operatorname{dist}(x, X_n) \quad \text{for all } n \in \mathbb{N},$$
and the definition of the denseness property.

Definition 2.39 We say that a sequence of subspaces $\{X_n : n \in \mathbb{N}\}$ is nested if for all $n \in \mathbb{N}$, $X_n \subseteq X_{n+1}$.

Note that when the sequence of subspaces $\{X_n : n \in \mathbb{N}\}$ is nested, it follows that
$$\bigcup_{n \in \mathbb{N}} X_n = \limsup_{n\to\infty} X_n.$$

Consequently, Lemma 2.36 gives us the following fact.

Proposition 2.40 If the sequence of subspaces $\{X_n : n \in \mathbb{N}\}$ is nested in $X$, then $P_n \xrightarrow{s} I$ if and only if the sequence $\{\|P_n\| : n \in \mathbb{N}\}$ is bounded and the subspace $\bigcup_{n \in \mathbb{N}} X_n$ is dense in $X$.

We may express the condition that the collection of subspaces $\{X_n : n \in \mathbb{N}\}$ is nested in terms of the corresponding collection of projections $\{P_n : n \in \mathbb{N}\}$ such that for each $n \in \mathbb{N}$ we have $\mathcal{R}(P_n) = X_n$. Indeed, $\{X_n : n \in \mathbb{N}\}$ is a nested collection of subspaces if
$$P_n P_m = P_n \quad \text{for } m \ge n. \tag{2.86}$$

We turn our attention to the adjoint projections $P_n^* : X^* \to X^*$. To this end we choose a basis $\{x_i : i \in \mathbb{N}_m\}$ for $X_n$ and observe that there exists a unique collection of bounded linear functionals $\{\ell_i : i \in \mathbb{N}_m\} \subseteq X^*$ such that $\ell_j(x_i) = \delta_{ij}$, $i, j \in \mathbb{N}_m$, and for each $x \in X$ we have that
$$P_n x = \sum_{i \in \mathbb{N}_m} \langle x, \ell_i \rangle\, x_i. \tag{2.87}$$
In fact, for any $x \in X_n$ in the form $x = \sum_{i \in \mathbb{N}_m} c_i x_i$, where $c = [c_i : i \in \mathbb{N}_m] \in \mathbb{R}^m$, define bounded linear functionals $\ell_j$, $j \in \mathbb{N}_m$, on $X_n$ by
$$\ell_j(x) = c_j,$$
which leads to $\ell_j(x_i) = \delta_{ij}$, $i, j \in \mathbb{N}_m$, and (2.87). We then extend the functionals to the entire space $X$ by the equation
$$\langle x, \ell_j \rangle = \langle P_n x, \ell_j \rangle \quad \text{for all } x \in X.$$
It can easily be verified that (2.87) is valid for the collection of extended functionals, and that this collection is the unique one satisfying these requirements.

From equation (2.87) comes the formula for the adjoint projection $P_n^*$, namely, for each $\ell \in X^*$,
$$P_n^* \ell = \sum_{j \in \mathbb{N}_m} \langle x_j, \ell \rangle\, \ell_j. \tag{2.88}$$
So we see that $Y_n = \mathcal{R}(P_n^*) = \operatorname{span}\{\ell_j : j \in \mathbb{N}_m\}$ and $\dim Y_n = \dim X_n$. By definition, for each $x \in X$ and $\ell \in X^*$ we have that $\langle P_n x, \ell \rangle = \langle x, P_n^* \ell \rangle$. It follows that $\lim_{n\to\infty} P_n^* \ell = \ell$ for all $\ell \in X^*$ in the weak-$*$ topology on $X^*$ if and only if for every $x \in X$, $\lim_{n\to\infty} P_n x = x$ in the weak topology on $X$ ([276], p. 111).

The next lemma prepares us for a result from [47] (p. 14) which provides a sufficient condition to ensure that $P_n^* \xrightarrow{s} I$ in the norm topology on $X^*$. Recall that a normed linear space $X$ is said to be reflexive if $X$ may be identified with its second dual.

Lemma 2.41 If $X$ is reflexive and for every $x \in X$, $\lim_{n\to\infty} P_n x = x$ in the weak topology on $X$, then $\operatorname{span}\{\cup_{n \in \mathbb{N}} \mathcal{R}(P_n^*)\}$ is dense in $X^*$ in the norm topology.

Proof. We consider the subspace of $X^*$ given by
$$W = \operatorname{span} \bigcup_{n \in \mathbb{N}} \mathcal{R}(P_n^*)$$
and choose any $F \in W^\perp$. Since $X$ is reflexive, there is an $x \in X$ such that for all $\ell \in X^*$ we have $F(\ell) = \langle x, \ell \rangle$. Choose any $m \in \mathbb{N}$ and observe that $0 = F(P_m^* \ell) = \langle x, P_m^* \ell \rangle = \langle P_m x, \ell \rangle$. We now let $m \to \infty$ and use our hypothesis to conclude that $\langle x, \ell \rangle = 0$. Since $\ell \in X^*$ is arbitrary, we conclude that $x = 0$, thereby establishing that $F = 0$. In other words, we have confirmed that $W^\perp = \{0\}$, and so $W$ is dense in $X^*$ (see Corollary A.35 in the Appendix).

From this fact follows the next result.

Proposition 2.42 If $X$ is reflexive, for every $x \in X$, $\lim_{n\to\infty} P_n x = x$ in the weak topology on $X$, and the projections $P_n$ satisfy (2.86), then $P_n^* \xrightarrow{s} I$ in the norm topology on $X^*$.

Proof. According to equation (2.86), we conclude that $P_m^* P_n^* = P_n^*$, which implies that the spaces $Y_n = \mathcal{R}(P_n^*)$ are nested for $n \in \mathbb{N}$. Moreover, our hypothesis ensures that for each $x \in X$ the set $\{P_n x : n \in \mathbb{N}\}$ is weakly bounded. Therefore, Corollary A.39 in the Appendix implies that it is norm bounded, and so, by the uniform boundedness principle (see Theorem A.25), the set $\{\|P_n\| : n \in \mathbb{N}\}$ is bounded. But we know that $\|P_n\| = \|P_n^*\|$, and therefore the claim made in this proposition follows from Proposition 2.40 and Lemma 2.41 above.

Let us return to condition (2.80), which is usually more readily verifiable in applications. First we comment on the relationship of (2.80) to (2.79). Our comment here is based on the following norm inequalities, the first being
$$\|A_n^{-1} P_n A\| \le \|A_n^{-1}\| \cdot \|P_n\| \cdot \|A\|,$$
which implies that (2.80) ensures (2.79) when $\{\|P_n\| : n \in \mathbb{N}\}$ is bounded (where, of course, the constants in (2.79) and (2.80) will be different). Moreover, when $A : X \to Y$ is one to one and onto, then
$$\|A_n^{-1} P_n\| \le \|A_n^{-1} P_n A\| \cdot \|A^{-1}\|,$$
and so, recalling inequality (2.81), we obtain the inequality
$$\|A_n^{-1}\| \le \|A_n^{-1} P_n A\| \, \|A^{-1}\|,$$
which demonstrates that inequality (2.79) implies (2.80), at least when $A$ is one to one and onto. We formalize this in the next proposition.

Proposition 2.43 Let $A : X \to Y$ be a bounded linear operator, $\{X_n : n \in \mathbb{N}\}$ and $\{Y_n : n \in \mathbb{N}\}$ finite-dimensional subspaces of $X$ and $Y$, respectively, and $P_n : Y \to Y_n$ a projection. If $\{\|P_n\| : n \in \mathbb{N}\}$ and $\{\|A_n^{-1}\| : n \in \mathbb{N}\}$ are bounded, then so is $\{\|A_n^{-1} P_n A\| : n \in \mathbb{N}\}$. If $A$ is one to one and onto, and $\{\|A_n^{-1} P_n A\| : n \in \mathbb{N}\}$ is bounded, then $\{\|A_n^{-1}\| : n \in \mathbb{N}\}$ is bounded too.

Our final comments concerning projection methods demonstrate that, under certain circumstances, the existence of a unique solution of the projection equation (2.77) implies the same for the operator equation (2.63). In the next lemma we provide conditions on the projection method which imply that $A$ is one to one.

Lemma 2.44 Let $\{X_n : n \in \mathbb{N}\}$ and $\{Y_n : n \in \mathbb{N}\}$ be sequences of finite-dimensional subspaces of $X$ and $Y$, respectively, and $P_n : Y \to Y_n$, $Q_n : X \to X_n$ projections. If there are a $q \in \mathbb{N}$ and a positive constant $c > 0$ such that for any $n \ge q$ both $\|P_n\| \le c$ and $\|A_n^{-1}\| \le c$ hold, and also $Q_n \xrightarrow{s} I$, then $A$ is one to one.

Proof. If $u \in X$ satisfies $Au = 0$, then for $n \ge q$ we have, since $A_n = P_n A|_{X_n}$, that
$$\|Q_n u\|_X \le c \|A_n Q_n u\|_Y = c \|P_n A Q_n u - P_n A u\|_Y \le c^2 \|A(Q_n u - u)\|_Y.$$
Letting $n \to \infty$ on both sides of this inequality, and using $Q_n \xrightarrow{s} I$ together with the boundedness of $A$, we conclude that $u = 0$.

We now present a similar result that implies the operator $A$ is onto.

Lemma 2.45 Let $X$ and $Y$ be Banach spaces with $X$ reflexive, and $A \in \mathcal{B}(X, Y)$. Let $P_n : Y \to Y_n$ be projections such that $P_n \xrightarrow{s} I$ on $Y$ and the sequence $\{\|A_n^{-1}\| : n \in \mathbb{N}\}$ is bounded. If the sequence of subspaces $\{\mathcal{R}(P_n^*) : n \in \mathbb{N}\}$ is nested, then $A$ is onto.

Proof. Choose any $f \in Y$ and recall that $u_n = A_n^{-1} P_n f$. Since $P_n \xrightarrow{s} I$, we conclude by the uniform boundedness principle (see Theorem A.25) that the sequence $\{\|P_n\| : n \in \mathbb{N}\}$ is bounded, and so we obtain that $\{\|u_n\| : n \in \mathbb{N}\}$ is also bounded. Moreover, our hypothesis that $\{\mathcal{R}(P_n^*) : n \in \mathbb{N}\}$ is nested guarantees, by the proof of Proposition 2.42, that $P_n^* \xrightarrow{s} I$ on $Y^*$. Now, since $X$ is reflexive, we can extract a subsequence $\{u_{n_k} : k \in \mathbb{N}\}$ which converges weakly to an element $u \in X$ (see, for example, [276], p. 126). We show that $Au = f$. To this end, we first observe, for any $\ell \in Y^*$, that
$$\lim_{k\to\infty} \langle A u_{n_k}, \ell \rangle = \lim_{k\to\infty} \langle u_{n_k}, A^* \ell \rangle = \langle u, A^* \ell \rangle = \langle Au, \ell \rangle,$$
that is, $\lim_{k\to\infty} A u_{n_k} = Au$ weakly in $Y$. Therefore the right-hand side of the inequality
$$\bigl| \langle A u_{n_k}, P_{n_k}^* \ell \rangle - \langle Au, \ell \rangle \bigr| \le \|A u_{n_k}\| \, \|P_{n_k}^* \ell - \ell\| + \bigl| \langle A u_{n_k} - Au, \ell \rangle \bigr|$$
goes to zero as $k \to \infty$, and so, with the formula $A_n u_n = P_n A u_n$, we obtain that $\lim_{k\to\infty} A_{n_k} u_{n_k} = Au$ weakly in $Y$. Moreover, by definition, for all $n \in \mathbb{N}$ there holds the equation $A_n u_n = P_n f$, and also, by hypothesis, $\lim_{n\to\infty} P_n f = f$ (in norm), from which we conclude that $\lim_{k\to\infty} A_{n_k} u_{n_k} = f$ (in norm). Hence, indeed, we obtain the desired conclusion that $Au = f$.

Now we turn our attention to a discussion of the numerical stability of projection methods. The numerical stability of the approximate solution problem concerns how close the approximate solution of the projection equation (2.77) is to that of a perturbed equation of the form
$$(A_n + E_n)\tilde u_n = P_n f + g_n, \tag{2.89}$$
where $E_n \in \mathcal{B}(X_n, Y_n)$ is a linear operator which effects a perturbation of $A_n$ and $g_n \in Y_n$ effects a perturbation of $P_n f$, $n \in \mathbb{N}$.

We begin with a formal definition of stability.

Definition 2.46 The projection method is said to be stable if there are nonnegative constants $\mu$ and $\nu$, a positive constant $\delta$ and a positive integer $q$ such that for any $n \ge q$ the operator $A_n$ is invertible, and for any vector $g_n \in Y_n$ and any linear operator $E_n \in \mathcal{B}(X_n, Y_n)$ with $\|E_n\| \le \delta$, the perturbed equation (2.89) always has a unique solution $\tilde u_n \in X_n$ satisfying the inequality
$$\|\tilde u_n - u_n\| \le \mu \|E_n\| \|u_n\| + \nu \|g_n\|. \tag{2.90}$$

We next characterize the stability of the projection method.

Theorem 2.47 If $A \in \mathcal{B}(X, Y)$, then the projection method is stable if and only if inequality (2.80) holds.

Proof. Suppose that condition (2.80) is satisfied for $n \ge q$. Then for any $f \in A(X)$ and $n \ge q$ the projection equation (2.77) has the unique solution $u_n \in X_n$. If $n \ge q$ and the perturbation satisfies the norm inequality $\|E_n\| \le \frac{1}{2c}$, then, since $\|A_n x\| \ge \|x\|/c$, for any $x \in X_n$,
$$\|(A_n + E_n)x\| \ge \frac{1}{2c}\|x\|. \tag{2.91}$$
Hence for any $g_n \in Y_n$ and $n \ge q$ the perturbed equation (2.89) has a unique solution $\tilde u_n \in X_n$, and it gives us the formula
$$\tilde u_n - u_n = (A_n + E_n)^{-1}(g_n - E_n u_n).$$
Hence, with inequality (2.91), we get the stability estimate
$$\|\tilde u_n - u_n\| \le 2c\,\bigl(\|E_n\| \|u_n\| + \|g_n\|\bigr).$$

Conversely, suppose that the projection method is stable. In this case we choose the perturbation operator to be $E_n = 0$. Then for any $f \in A(X)$ and $g_n \in Y_n$, when $n \ge q$, the projection equation (2.77) and its perturbed equation (2.89) have unique solutions $u_n, \tilde u_n \in X_n$, respectively. We now let $v_n = \tilde u_n - u_n$ and observe, for $n \ge q$, that $A_n v_n = g_n$, and so the stability inequality (2.90) gives us the desired inequality
$$\|A_n^{-1} g_n\| = \|v_n\| \le \nu \|g_n\|.$$

We now introduce an important concept in connection with the actual behavior of approximate methods, namely, the condition number of a linear operator, which is used to indicate how sensitive the solution of an equation may be to small relative changes in the input data.

Definition 2.48 Let $X$ and $Y$ be Banach spaces and $A : X \to Y$ a bounded linear operator with bounded inverse $A^{-1} : Y \to X$. The condition number of $A$ is defined as
$$\operatorname{cond}(A) = \|A\| \, \|A^{-1}\|.$$
It is clear that the inequality $\operatorname{cond}(A) \ge 1$ always holds. The following proposition shows that the condition number is a suitable tool for measuring stability.

Proposition 2.49 Suppose that $X$ and $Y$ are Banach spaces and $A \in \mathcal{B}(X, Y)$ has bounded inverse $A^{-1}$. Let $\delta A \in \mathcal{B}(X, Y)$ and $\delta f \in Y$ be perturbations of $A$ and $f$, and let $u \in X$ and $u + \delta u \in X$ be solutions of
$$Au = f \tag{2.92}$$
and
$$(A + \delta A)(u + \delta u) = f + \delta f, \tag{2.93}$$
respectively. If $\|\delta A\| < 1/\|A^{-1}\|$ and $f \ne 0$, then
$$\frac{\|\delta u\|}{\|u\|} \le \frac{\operatorname{cond}(A)}{1 - \|A^{-1}\| \|\delta A\|} \left( \frac{\|\delta A\|}{\|A\|} + \frac{\|\delta f\|}{\|f\|} \right).$$

Proof. It follows from (2.92) and (2.93) that
$$(A + \delta A)(u + \delta u) = Au + \delta f,$$
which leads to the equation
$$\delta u = (I + A^{-1}\,\delta A)^{-1} A^{-1}(\delta f - \delta A\, u). \tag{2.94}$$
The inequality $\|\delta A\| < 1/\|A^{-1}\|$ ensures the existence of the linear operator $(I + A^{-1}\,\delta A)^{-1}$ and also the estimate
$$\|(I + A^{-1}\,\delta A)^{-1}\| \le \frac{1}{1 - \|A^{-1}\| \|\delta A\|}.$$
From this, with (2.94), we conclude that
$$\frac{\|\delta u\|}{\|u\|} \le \frac{\|A^{-1}\|}{1 - \|A^{-1}\| \|\delta A\|} \left( \frac{\|\delta f\|}{\|u\|} + \|\delta A\| \right) \le \frac{\|A^{-1}\| \|A\|}{1 - \|A^{-1}\| \|\delta A\|} \left( \frac{\|\delta f\|}{\|f\|} + \frac{\|\delta A\|}{\|A\|} \right),$$
completing the proof.
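The bound of Proposition 2.49 is easy to test numerically for a matrix operator (the matrix, data and perturbation sizes below are arbitrary assumed examples):

```python
import numpy as np

# Numerical check of the perturbation bound for a small assumed matrix.
rng = np.random.default_rng(0)
A = np.array([[3.0, 1.0], [1.0, 2.0]])
f = np.array([1.0, 1.0])
u = np.linalg.solve(A, f)

dA = 1e-3 * rng.standard_normal((2, 2))   # perturbation of A
df = 1e-3 * rng.standard_normal(2)        # perturbation of f
du = np.linalg.solve(A + dA, f + df) - u

norm = lambda M: np.linalg.norm(M, 2)     # operator 2-norm for matrices
condA = norm(A) * norm(np.linalg.inv(A))
lhs = np.linalg.norm(du) / np.linalg.norm(u)
rhs = condA / (1 - norm(np.linalg.inv(A)) * norm(dA)) \
      * (norm(dA) / norm(A) + np.linalg.norm(df) / np.linalg.norm(f))

assert norm(dA) < 1.0 / norm(np.linalg.inv(A))  # hypothesis of the proposition
assert lhs <= rhs                               # the bound of Proposition 2.49
```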

A simple fact for the condition number $\operatorname{cond}(A_n) = \|A_n\| \|A_n^{-1}\|$ of the projection equation (2.77) is that if $A_n \xrightarrow{u} A$ and $A_n^{-1} \xrightarrow{u} A^{-1}$, then
$$\lim_{n\to\infty} \operatorname{cond}(A_n) = \operatorname{cond}(A).$$

We remark that if the projection method is convergent, then for some positive constant $c$ the inequality
$$\|A_n^{-1}\| \le c \, \|A^{-1}\|$$
holds. In fact, using Theorem 2.33, we have that
$$\begin{aligned}
\|A_n^{-1}\| &= \sup\{\|A_n^{-1} P_n y\| : y \in Y_n,\ \|y\| = 1\} \\
&\le \sup\{\|A_n^{-1} P_n y\| : y \in Y,\ \|y\| = 1\} \\
&= \sup\{\|[A_n^{-1} P_n A] A^{-1} y\| : y \in Y,\ \|y\| = 1\} \le c \, \|A^{-1}\|.
\end{aligned}$$

2.2.4 An abstract framework for second-kind operator equations

In this subsection we change our perspective somewhat, to a context closer to the specific applications that we have in mind in later chapters. Specifically, in this subsection our operator $A$ will have the form $I - K$, where $K$ is compact. Projection methods for this case are well developed in the literature. Specifically, it is well known that the theory of collectively compact operators due to P. Anselone (as presented in [6, 15]) provides us with a convenient abstract setting for the analysis of many numerical schemes associated with the Fredholm operator $I - K$. This theory generally requires that the sequence of nested spaces $\{X_n : n \in \mathbb{N}\}$ is dense in $X$. For our purposes later, it is advantageous to improve upon this hypothesis.


We have in mind several concrete projection methods which will be introduced later. These include discrete Galerkin, Petrov–Galerkin, collocation and quadrature methods. We shall describe them collectively in the following context, which differs somewhat from the point of view of the previous sections.

We begin with a Banach space $X$ and a subspace $V$ of it. Let $K \in \mathcal{B}(X, V)$ be a compact operator and consider the Fredholm equation of the second kind
$$u - K u = f. \tag{2.95}$$
By the Fredholm alternative (see Theorem A.48), this equation has a unique solution for all $f \in V$ if and only if the null space of $I - K$ is $\{0\}$, that is, as long as one is not an eigenvalue of $K$. We always assume that this condition holds.

To set up our approximation methods for (2.95), unlike in the discussions earlier, we need two sequences of operators $\{K_n : n \in \mathbb{N}\} \subseteq \mathcal{B}(X, V)$ and $\{P_n : n \in \mathbb{N}\} \subseteq \mathcal{B}(X, U)$, where we require that $V \subseteq U \subseteq X$. As before, $P_n$ will approximate the identity and, in the present context, $K_n$ will approximate the operator $K$. The exact sense in which this is required will be explained below. Postponing this issue for the moment, we associate with these two sequences of operators the approximation scheme
$$(I - P_n K_n) u_n = P_n f \tag{2.96}$$
for solving (2.95).

For the analysis of the convergence properties of (2.96), we are led to consider the existence and uniform boundedness, for $n \in \mathbb{N}$, of the inverse of the operator
$$A_n = I - P_n K_n.$$
Moreover, in this section we also prepare the tools to study the phenomenon of superconvergence. This means that we shall approximate the solution of equation (2.95) by the function
$$\tilde u_n = f + K_n u_n. \tag{2.97}$$
We refer to $\tilde u_n$ as the iterated approximation to (2.96), which is also called the Sloan iterate, and it follows directly that $\tilde u_n$ satisfies the equation
$$(I - K_n P_n)\tilde u_n = f. \tag{2.98}$$
Therefore we also consider in this section the existence and uniform boundedness, for $n \in \mathbb{N}$, of the inverse of the operators
$$\tilde A_n = I - K_n P_n.$$
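In a discrete sketch (grid, kernel and projection are assumed choices), the algebra behind (2.97) and (2.98) can be verified directly: once $u_n$ solves (2.96), the iterate $\tilde u_n = f + K_n u_n$ satisfies $(I - K_n P_n)\tilde u_n = f$ and $P_n \tilde u_n = u_n$:

```python
import numpy as np

# Discrete sketch of the Sloan iterate with assumed grid, kernel, projection.
N, n = 200, 12
s = np.linspace(0.0, 1.0, N)
h = 1.0 / N
K = 0.4 * np.exp(-np.abs(s[:, None] - s[None, :])) * h   # Nystrom-type matrix K_n
Q, _ = np.linalg.qr(np.cos(np.outer(s, np.arange(n)) * np.pi))
P = Q @ Q.T                                              # orthogonal projection P_n

f = np.sin(2 * np.pi * s)
u_n = np.linalg.solve(np.eye(N) - P @ K, P @ f)          # (I - P_n K_n) u_n = P_n f
u_tilde = f + K @ u_n                                    # Sloan iterate (2.97)

# u_tilde solves (I - K_n P_n) u_tilde = f, and P_n u_tilde = u_n.
assert np.allclose((np.eye(N) - K @ P) @ u_tilde, f)
assert np.allclose(P @ u_tilde, u_n)
```

Both checks are exact algebraic identities of the scheme, independent of the particular kernel or projection chosen.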


Our analysis of the linear operators $A_n$ and $\tilde A_n$ requires several assumptions on $K_n$ and $P_n$. To prepare for these conditions we introduce the following terminology.

Definition 2.50 We say that a sequence of operators $\{T_n : n \in \mathbb{N}\} \subseteq \mathcal{B}(X, Y)$ converges pointwise to an operator $T \in \mathcal{B}(X, Y)$ on the set $S \subseteq X$ provided that for each $x \in S$ we have $\lim_{n\to\infty} \|T_n x - T x\| = 0$. Notationally, we indicate this by $T_n \xrightarrow{s} T$ on $S$. Similarly, we say that the sequence of operators $\{T_n : n \in \mathbb{N}\}$ converges to the operator $T$ uniformly on $S$ provided that $\lim_{n\to\infty} \sup\{\|T_n x - T x\| : x \in S\} = 0$, and indicate this by $T_n \xrightarrow{u} T$ on $S$.

Clearly, when $S$ is the unit ball in $X$ and $T_n \xrightarrow{u} T$ on $S$, this means that $T_n \xrightarrow{u} T$ on $X$.

Lemma 2.51 Let $X$ be a Banach space and $S$ a relatively compact subset of $X$. If the sequence of operators $\{T_n : n \in \mathbb{N}\} \subseteq \mathcal{B}(X)$ has uniformly bounded operator norms and $T_n \xrightarrow{s} T$, then $T_n \xrightarrow{u} T$ on $S$.

Proof  Since the set $S$ is relatively compact, it is totally bounded (Theorem A.7), and so, for a given $\varepsilon > 0$, there is a finite set $W \subseteq S$ such that for each $x \in S$ there is a $w \in W$ with $\|x - w\| < \varepsilon$. Since $W$ is finite, by hypothesis there is a $q \in \mathbb{N}$ such that for any $v \in W$ and $n \ge q$ we have $\|\mathcal{T}_n v - \mathcal{T}v\| \le \varepsilon$. In particular, this inequality holds for the choice $v = w$. By hypothesis, there is a constant $c > 0$ such that for any $n \in \mathbb{N}$ we have $\|\mathcal{T}_n\| \le c$. We now estimate the error uniformly for all $x \in S$ when $n \ge q$:
$$\|\mathcal{T}_n x - \mathcal{T}x\| \le \|(\mathcal{T}_n - \mathcal{T})(x - w)\| + \|\mathcal{T}_n w - \mathcal{T}w\| \le (c + \|\mathcal{T}\| + 1)\varepsilon.$$

Let us now return to our setup of approximation schemes for solving Fredholm equations. We list below several conditions that we shall assume, and we investigate their consequences.

(H-1) The set of operators $\{\mathcal{K}_n : n \in \mathbb{N}\} \subseteq \mathcal{B}(X, V)$ is collectively compact; that is, for any bounded set $B \subseteq X$, the set $\cup_{n\in\mathbb{N}}\mathcal{K}_n(B)$ is relatively compact in $V$.

(H-2) $\mathcal{K}_n \xrightarrow{s} \mathcal{K}$ on $U$.

(H-3) The set of operators $\{\mathcal{P}_n : n \in \mathbb{N}\} \subseteq \mathcal{B}(X, U)$ is compact, with norms which are uniformly bounded for $n \in \mathbb{N}$.

(H-4) $\mathcal{P}_n \xrightarrow{s} I$ on $V$.


22 General theory of projection methods 75

As a first step, we modify Proposition 1.7 in [6] to fit our circumstances and obtain the following fact.

Lemma 2.52  If conditions (H-1)–(H-4) hold, then

(i) $(\mathcal{P}_n - I)\mathcal{K}_n \xrightarrow{u} 0$ on $X$;

(ii) $(\mathcal{K}_n - \mathcal{K})\mathcal{P}_n\mathcal{K}_n \xrightarrow{u} 0$ on $X$;

(iii) $(\mathcal{K}_n\mathcal{P}_n - \mathcal{K})\mathcal{K}_n\mathcal{P}_n \xrightarrow{u} 0$ on $X$.

Proof  (i) Let $B$ denote the closed unit ball in $X$, that is, $B = \{x : x \in X, \|x\| \le 1\}$, and also set $G = \{\mathcal{K}_n x : x \in B, n \in \mathbb{N}\}$. Condition (H-1) implies that $G$ is a relatively compact set in $V$, while hypotheses (H-3) and (H-4), coupled with Lemma 2.51, establish that $\mathcal{P}_n \xrightarrow{u} I$ on $G$. Consequently, the inequality
$$\|(\mathcal{P}_n - I)\mathcal{K}_n\| = \sup\{\|(\mathcal{P}_n - I)\mathcal{K}_n x\| : x \in B\} \le \sup\{\|(\mathcal{P}_n - I)x\| : x \in G\} \qquad (2.99)$$

establishes (i).

(ii) For any $x \in V$, it follows from (H-4) that $\{\mathcal{P}_n x : n \in \mathbb{N}\}$ is a relatively compact subset of $X$. Therefore, by Lemma 2.51 and the hypotheses (H-1) and (H-2), we conclude that $(\mathcal{K}_n - \mathcal{K})\mathcal{P}_n \xrightarrow{s} 0$ on $V$. Moreover, using the inequality
$$\|(\mathcal{K}_n - \mathcal{K})\mathcal{P}_n\mathcal{K}_n\| = \sup\{\|(\mathcal{K}_n - \mathcal{K})\mathcal{P}_n\mathcal{K}_n y\| : y \in B\} \le \sup\{\|(\mathcal{K}_n - \mathcal{K})\mathcal{P}_n x\| : x \in G\} \qquad (2.100)$$
and specializing Lemma 2.51 to the choices $\mathcal{T} = 0$, $\mathcal{T}_n = (\mathcal{K}_n - \mathcal{K})\mathcal{P}_n$ and $S = G$, we conclude the validity of (ii).

(iii) Hypotheses (H-1) and (H-3) guarantee that
$$G' = \{\mathcal{K}_n\mathcal{P}_n x : x \in B, n \in \mathbb{N}\}$$
is a relatively compact subset of $V$. Moreover, from the equation
$$\mathcal{K}_n\mathcal{P}_n - \mathcal{K} = (\mathcal{K}_n\mathcal{P}_n - \mathcal{K}\mathcal{P}_n) + (\mathcal{K}\mathcal{P}_n - \mathcal{K}), \qquad (2.101)$$
statement (ii) and (H-4), we obtain that $\mathcal{K}_n\mathcal{P}_n - \mathcal{K} \xrightarrow{s} 0$ on $V$. Thus statement (iii) follows directly from equation (2.101) and the relative compactness of the set $G'$.

We next study the existence of the inverses of the operators $\mathcal{A}_n$ and $\mathcal{A}_n'$. For this purpose, we recall a useful result about the existence and boundedness of inverse operators.

Lemma 2.53  If $X$ is a normed linear space with $\mathcal{S}$ and $\mathcal{E}$ in $\mathcal{B}(X)$ such that $\mathcal{S}^{-1}$ exists as a bounded linear operator on $\mathcal{S}(X)$ and $\|\mathcal{E}\| < \|\mathcal{S}^{-1}\|^{-1}$, then the linear operator $\mathcal{T} = \mathcal{S} - \mathcal{E}$ has an inverse $\mathcal{T}^{-1}$ as a bounded linear operator on $\mathcal{T}(X)$, with the property that
$$\|\mathcal{T}^{-1}\| \le \frac{1}{\|\mathcal{S}^{-1}\|^{-1} - \|\mathcal{E}\|}.$$

Proof  For any $u \in X$ we have that
$$\mathcal{S}u = \mathcal{T}u + \mathcal{E}u,$$
and so
$$\|\mathcal{S}u\| \le \|\mathcal{T}u\| + \|\mathcal{E}u\|.$$
Thus
$$\big(\|\mathcal{S}^{-1}\|^{-1} - \|\mathcal{E}\|\big)\|u\| \le \|\mathcal{S}u\| - \|\mathcal{E}u\| \le \|\mathcal{T}u\|,$$
from which the desired result follows.
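Lemma 2.53 can be checked numerically in finite dimensions. The following sketch (our illustration, not from the text: the particular matrices, the choice of the spectral norm and the use of NumPy are our own assumptions) verifies the hypothesis $\|\mathcal{E}\| < \|\mathcal{S}^{-1}\|^{-1}$ and the resulting bound for a $2\times 2$ example:

```python
import numpy as np

# Illustration of Lemma 2.53 on 2x2 matrices with the spectral norm:
# S is invertible, E is a small perturbation, and T = S - E.
S = np.array([[2.0, 0.3], [0.1, 1.5]])
E = np.array([[0.05, -0.02], [0.01, 0.04]])

inv_norm = 1.0 / np.linalg.norm(np.linalg.inv(S), 2)   # ||S^{-1}||^{-1}
assert np.linalg.norm(E, 2) < inv_norm                 # hypothesis of the lemma

T = S - E
bound = 1.0 / (inv_norm - np.linalg.norm(E, 2))        # right-hand side of the lemma
print(np.linalg.norm(np.linalg.inv(T), 2) <= bound)    # True
```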

We are now ready to prove the main result of this section.

Theorem 2.54  If $\mathcal{K} \in \mathcal{B}(X, X)$ is a compact operator not having one as an eigenvalue and conditions (H-1)–(H-4) hold, then there exists a positive integer $q$ such that for all $n \ge q$ both $(I - \mathcal{P}_n\mathcal{K}_n)^{-1}$ and $(I - \mathcal{K}_n\mathcal{P}_n)^{-1}$ are in $\mathcal{B}(X)$ and have norms which are uniformly bounded. Moreover, if $u$, $u_n$ and $u_n'$ are the solutions of equations (2.95), (2.96) and (2.97), respectively, and a constant $p > 0$ is chosen so that $\|\mathcal{P}_n\| \le p$ for all $n \in \mathbb{N}$, then there is a constant $c > 0$ such that for all $n \ge q$,
$$\|u - u_n\| \le c\big(\|u - \mathcal{P}_n u\| + p\|\mathcal{K}u - \mathcal{K}_n u\|\big) \qquad (2.102)$$
and
$$\|u - u_n'\| \le c\big(\|\mathcal{K}(I - \mathcal{P}_n)u\| + \|(\mathcal{K} - \mathcal{K}_n)\mathcal{P}_n u\|\big). \qquad (2.103)$$

Proof  We first note that a straightforward computation leads to the formulas
$$[I + (I - \mathcal{K})^{-1}\mathcal{K}_n](I - \mathcal{P}_n\mathcal{K}_n) = I - (I - \mathcal{K})^{-1}\big[(\mathcal{P}_n - I)\mathcal{K}_n + (\mathcal{K}_n - \mathcal{K})\mathcal{P}_n\mathcal{K}_n\big]$$
and
$$[I + (I - \mathcal{K})^{-1}\mathcal{K}_n\mathcal{P}_n](I - \mathcal{K}_n\mathcal{P}_n) = I - (I - \mathcal{K})^{-1}(\mathcal{K}_n\mathcal{P}_n - \mathcal{K})\mathcal{K}_n\mathcal{P}_n.$$

By Lemma 2.52, there exists a $q > 0$ such that for all $n \ge q$ we have
$$\Delta_1 = \big\|(I - \mathcal{K})^{-1}\big[(\mathcal{P}_n - I)\mathcal{K}_n + (\mathcal{K}_n - \mathcal{K})\mathcal{P}_n\mathcal{K}_n\big]\big\| \le \frac{1}{2}$$
and
$$\Delta_2 = \big\|(I - \mathcal{K})^{-1}(\mathcal{K}_n\mathcal{P}_n - \mathcal{K})\mathcal{K}_n\mathcal{P}_n\big\| \le \frac{1}{2}.$$

It follows from Lemma 2.53 that, for all $n \ge q$, the inverse operators $(I - \mathcal{P}_n\mathcal{K}_n)^{-1}$ and $(I - \mathcal{K}_n\mathcal{P}_n)^{-1}$ exist and that the inequalities
$$\|(I - \mathcal{P}_n\mathcal{K}_n)^{-1}\| \le \frac{1}{1 - \Delta_1}\big(1 + \|(I - \mathcal{K})^{-1}\mathcal{K}_n\|\big)$$
and
$$\|(I - \mathcal{K}_n\mathcal{P}_n)^{-1}\| \le \frac{1}{1 - \Delta_2}\big(1 + p\|(I - \mathcal{K})^{-1}\mathcal{K}_n\|\big)$$
hold.

Since the set of operators $\{\mathcal{K}_n : n \in \mathbb{N}\}$ is collectively compact, the norms $\|\mathcal{K}_n\|$ are uniformly bounded for $n \in \mathbb{N}$, and so, by the above inequalities, the norms of both $(I - \mathcal{P}_n\mathcal{K}_n)^{-1}$ and $(I - \mathcal{K}_n\mathcal{P}_n)^{-1}$ are also uniformly bounded for $n \ge q$. Therefore, equations (2.96) and (2.98) have unique solutions for every $f \in X$.

It remains to prove the estimates (2.102) and (2.103). To this end, we note from equations (2.95), (2.96) and (2.98) that

$$(I - \mathcal{P}_n\mathcal{K}_n)u_n = \mathcal{P}_n(I - \mathcal{K})u$$

and

$$(I - \mathcal{K}_n\mathcal{P}_n)u_n' = (I - \mathcal{K})u.$$

Using these equations, we obtain that
$$(I - \mathcal{P}_n\mathcal{K}_n)(u - u_n) = (u - \mathcal{P}_n u) + \mathcal{P}_n(\mathcal{K}u - \mathcal{K}_n u)$$
and
$$(I - \mathcal{K}_n\mathcal{P}_n)(u - u_n') = \mathcal{K}u - \mathcal{K}_n\mathcal{P}_n u = \mathcal{K}(u - \mathcal{P}_n u) + (\mathcal{K} - \mathcal{K}_n)\mathcal{P}_n u.$$
Therefore, from what we have already proved, we obtain the desired estimates.

We remark from the estimate (2.102) that the convergence rate of $u_n$ to $u$ depends only on the rates of approximation of $\mathcal{P}_n$ to the identity operator and of $\mathcal{K}_n$ to $\mathcal{K}$. Moreover, it is seen from (2.103) that if $\mathcal{K}_n$ approximates $\mathcal{K}$ faster than $\mathcal{P}_n$ converges to the identity, then superconvergence of the iterated solution will result, since the first term on the right-hand side of (2.103) is more significant than the other. In fact, since for each $u \in X$ we have that
$$\|\mathcal{K}(I - \mathcal{P}_n)u\| = \|\mathcal{K}(I - \mathcal{P}_n)(I - \mathcal{P}_n)u\| \le \|\mathcal{K}(I - \mathcal{P}_n)\|\,\|(I - \mathcal{P}_n)u\|,$$
circumstances for which $\lim_{n\to\infty}\|\mathcal{K}(I - \mathcal{P}_n)\| = 0$ will lead to superconvergence. Examples of this phenomenon will be described in Section 4.1.

We also remark that when $X = V$ and $\mathcal{K}_n = \mathcal{K}$, Theorem 2.54 leads to the following well-known theorem.

Theorem 2.55  If $X$ is a Banach space, $\{X_n : n \in \mathbb{N}\}$ is a sequence of finite-dimensional subspaces of $X$, $\mathcal{K} : X \to X$ is a compact linear operator not having one as an eigenvalue, and $\mathcal{P}_n : X \to X_n$ is a sequence of linear projections that converges pointwise to the identity operator $I$ in $X$, then there exist an integer $q$ and a positive constant $c$ such that for all $n \ge q$ the equation
$$u_n - \mathcal{P}_n\mathcal{K}u_n = \mathcal{P}_n f$$
has a unique solution $u_n \in X_n$ and
$$\|u - u_n\| \le c\|u - \mathcal{P}_n u\|,$$
where $u$ is the solution of equation (2.95). Moreover, the iterated solution $u_n'$ defined by (2.97) with $\mathcal{K}_n = \mathcal{K}$ satisfies the estimate
$$\|u - u_n'\| \le c\|\mathcal{K}(I - \mathcal{P}_n)u\|.$$

Our final remark is that when $X = V$ and $\mathcal{P}_n = I$, Theorem 2.54 leads to

the following theorem

Theorem 2.56  If $X$ is a Banach space, $\mathcal{K} \in \mathcal{B}(X, X)$ is a compact operator not having one as an eigenvalue, and conditions (H-1) and (H-2) hold, then there exist an integer $q$ and a positive constant $c$ such that for all $n \ge q$ the equation
$$u_n - \mathcal{K}_n u_n = f$$
has a unique solution $u_n \in X$ and
$$\|u - u_n\| \le c\|\mathcal{K}u - \mathcal{K}_n u\|,$$
where $u$ is the solution of equation (2.95).

2.3 Bibliographical remarks

The basic concepts and results of Fredholm integral equations and projection theory may be found in many well-written books on integral equations, such as [15, 110, 121, 177, 203, 253]. In particular, for the subjects of weakly singular integral operators and boundary integral equations, we recommend the books [15, 177, 203]. The function spaces used in this book are usually covered in standard texts (for example, [1, 183, 236, 276]). Readers are referred to [15, 47, 177, 183, 203, 236] for additional information on the notions of compact operators and weakly singular integral operators, and to the Appendix of this book for basic elements of functional analysis. Moreover, readers may find additional details on boundary integral equations in [12, 15, 22, 121, 144, 150–153, 177, 203, 267].

Regarding projection methods, readers may consult [15, 47, 175–177, 276]. In particular, the notion of generalized best approximation projections was originally introduced in [77]. For the approximate solvability of projection methods for operator equations, Theorems 2.33 and 2.35 provide, respectively, necessary and sufficient conditions, which can be compared with those in [47] and [177]. For the abstract framework for second-kind operator equations, the theory of collectively compact operators [6, 15] presents a convenient abstract setting for the analysis of many numerical schemes; in Section 2.2.4 we improve this framework to fit more general circumstances, and for this point readers are referred to the paper [80]. More information about superconvergence of the iterated scheme may be found in [60, 246, 247].


3

Conventional numerical methods

This chapter is designed to provide readers with a background on conventional methods for the numerical solution of the Fredholm integral equation of the second kind defined on a compact domain of a Euclidean space. Specifically, we discuss the degenerate kernel method, the quadrature method, the Galerkin method, the collocation method and the Petrov–Galerkin method.

Let $\Omega$ be a compact measurable domain in $\mathbb{R}^d$ having a piecewise smooth boundary. We present in this chapter several conventional numerical methods for solving the Fredholm integral equation of the second kind in the form
$$u - \mathcal{K}u = f, \qquad (3.1)$$
where
$$(\mathcal{K}u)(s) = \int_\Omega K(s, t)u(t)\,dt, \quad s \in \Omega.$$

We describe the principles used in the development of the numerical methods and their convergence analysis.

3.1 Degenerate kernel methods

In this section we describe the degenerate kernel method for solving the Fredholm integral equation of the second kind. For this purpose, we assume that $X$ is either $C(\Omega)$ or $L^2(\Omega)$ with the appropriate norm $\|\cdot\|$. The integral operator $\mathcal{K}$ is assumed to be a compact operator from $X$ to $X$.

3.1.1 A general form of the degenerate kernel method

The degenerate kernel method approximates the original integral equation by replacing its kernel with a sequence of kernels having the form



$$K_n(s, t) = \sum_{j\in\mathbb{N}_n} K_j^1(s)K_j^2(t), \quad s, t \in \Omega, \qquad (3.2)$$
where $K_j^1, K_j^2 \in X$ and may depend on $n$. A kernel of this type is called a degenerate kernel. We require that the integral operators $\mathcal{K}_n$ with kernels $K_n$ converge uniformly to the integral operator $\mathcal{K}$, that is, $\mathcal{K}_n \xrightarrow{u} \mathcal{K}$. The degenerate kernel method for solving (3.1) finds $u_n \in X$ such that
$$u_n - \mathcal{K}_n u_n = f. \qquad (3.3)$$

For the unique existence and convergence of the approximate solution of the degenerate kernel method, we have the following theorem.

Theorem 3.1  Let $X$ be a Banach space and let $\mathcal{K} \in \mathcal{B}(X)$ be a compact operator not having one as an eigenvalue. If the operators $\mathcal{K}_n \in \mathcal{B}(X)$ converge uniformly to $\mathcal{K}$, then there exists a positive integer $q$ such that for all $n \ge q$ the inverse operators $(I - \mathcal{K}_n)^{-1}$ exist from $X$ to $X$ and
$$\|(I - \mathcal{K}_n)^{-1}\| \le \frac{\|(I - \mathcal{K})^{-1}\|}{1 - \|(I - \mathcal{K})^{-1}\|\,\|\mathcal{K} - \mathcal{K}_n\|}.$$
Moreover, the error estimate
$$\|u_n - u\| \le \|(I - \mathcal{K}_n)^{-1}\|\,\|(\mathcal{K} - \mathcal{K}_n)u\|$$
holds.

Proof  Note that $(I - \mathcal{K})^{-1}$ exists as a bounded linear operator on $X$ (see Theorem A.48 in the Appendix). The first result of this theorem follows from Lemma 2.53 with $\mathcal{S} = I - \mathcal{K}$ and $\mathcal{E} = \mathcal{K}_n - \mathcal{K}$. For the second result, we have that
$$u_n - u = (I - \mathcal{K}_n)^{-1}f - (I - \mathcal{K})^{-1}f = (I - \mathcal{K}_n)^{-1}(\mathcal{K}_n - \mathcal{K})(I - \mathcal{K})^{-1}f = (I - \mathcal{K}_n)^{-1}(\mathcal{K}_n - \mathcal{K})u,$$
which yields the second estimate.

According to the second estimate of Theorem 3.1, we obtain that
$$\|u_n - u\| \le \|(I - \mathcal{K}_n)^{-1}\|\,\|\mathcal{K} - \mathcal{K}_n\|\,\|u\|.$$
This means that the speed of convergence of $\|u_n - u\|$ to zero depends on the speed of convergence of $\|\mathcal{K} - \mathcal{K}_n\|$ to zero, which is determined by the choice of the kernels $K_n$ and is independent of the differentiability of $u$. It is clear that when $X = C(\Omega)$ we have that
$$\|\mathcal{K} - \mathcal{K}_n\| = \max_{s\in\Omega}\int_\Omega |K(s,t) - K_n(s,t)|\,dt, \qquad (3.4)$$

and when $X = L^2(\Omega)$ we have that
$$\|\mathcal{K} - \mathcal{K}_n\| \le \left(\int_\Omega\int_\Omega |K(s,t) - K_n(s,t)|^2\,ds\,dt\right)^{1/2}. \qquad (3.5)$$

We now discuss the algebraic aspects of the degenerate kernel method.

Proposition 3.2  If $u_n$ is the solution of the degenerate kernel method (3.3), then it can be given by
$$u_n = f + \sum_{j\in\mathbb{N}_n} v_j K_j^1, \qquad (3.6)$$
in which $[v_j : j \in \mathbb{N}_n]$ is a solution of the linear system
$$v_i - \sum_{j\in\mathbb{N}_n} (K_j^1, K_i^2)v_j = (f, K_i^2), \quad i \in \mathbb{N}_n, \qquad (3.7)$$
where $(\cdot,\cdot)$ denotes the $L^2(\Omega)$ inner product.

Proof  If $u_n$ is the solution of equation (3.3), then by using (3.2) we have that
$$u_n(s) - \sum_{j\in\mathbb{N}_n} K_j^1(s)\int_\Omega K_j^2(t)u_n(t)\,dt = f(s), \quad s \in \Omega. \qquad (3.8)$$
This means that the solution $u_n$ can be written as (3.6) with
$$v_j = \int_\Omega K_j^2(t)u_n(t)\,dt.$$
Multiplying (3.6) by $K_i^2(s)$ and integrating over $\Omega$, we find that $[v_j : j \in \mathbb{N}_n]$ must satisfy the linear system (3.7).

The linear system (3.7) may be written in matrix form. To this end, we define
$$\mathbf{K}_n = [(K_j^1, K_i^2) : i, j \in \mathbb{N}_n], \quad \mathbf{v}_n = [v_j : j \in \mathbb{N}_n] \quad \text{and} \quad \mathbf{f}_n = [(f, K_i^2) : i \in \mathbb{N}_n],$$
and let $\mathbf{I}_n$ denote the identity matrix of order $n$. Then (3.7) can be rewritten as
$$(\mathbf{I}_n - \mathbf{K}_n)\mathbf{v}_n = \mathbf{f}_n.$$
We next consider the invertibility of the matrix $\mathbf{I}_n - \mathbf{K}_n$.



Proposition 3.3  If $X$ is a Banach space, $\mathcal{K} \in \mathcal{B}(X)$ is a compact operator not having one as an eigenvalue, and the operators $\mathcal{K}_n \in \mathcal{B}(X)$ converge uniformly to $\mathcal{K}$, then there exists a positive integer $q$ such that for all $n \ge q$ the coefficient matrix $\mathbf{I}_n - \mathbf{K}_n$ of the linear system (3.7) is nonsingular.

Proof  It follows from Theorem 3.1 that there exists a positive integer $q$ such that for all $n \ge q$, $(I - \mathcal{K}_n)^{-1}$ exists. This, with Proposition 3.2, leads to the conclusion that (3.7) is solvable for any right-hand side of the form $\mathbf{b} = [b_i : i \in \mathbb{N}_n] = [(f, K_i^2) : i \in \mathbb{N}_n]$ with $f \in X$. We proceed with this proof in two cases.

Case 1: $\{K_i^2 : i \in \mathbb{N}_n\}$ is a linearly independent set of functions. To prove that the coefficient matrix of (3.7) is nonsingular, it is sufficient to prove that (3.7) with any right-hand side $\mathbf{b} \in \mathbb{R}^n$ is solvable or, equivalently, that for any $\mathbf{b} \in \mathbb{R}^n$ there exists a function $f \in X$ such that $[(f, K_i^2) : i \in \mathbb{N}_n] = \mathbf{b}$. To do this, we let $f = \sum_{j\in\mathbb{N}_n} c_j K_j^2$ and consider the equation
$$\sum_{j\in\mathbb{N}_n} (K_j^2, K_i^2)c_j = b_i, \quad i \in \mathbb{N}_n. \qquad (3.9)$$
The coefficient matrix $[(K_j^2, K_i^2) : i, j \in \mathbb{N}_n]$ is a Gram matrix, so it is positive semi-definite. Since $\{K_i^2 : i \in \mathbb{N}_n\}$ is linearly independent, this matrix is positive definite. This means that (3.9) is solvable and the function $f$ indeed exists. Thus, the coefficient matrix $\mathbf{I}_n - \mathbf{K}_n$ of (3.7) is nonsingular.

Case 2: $\{K_i^2 : i \in \mathbb{N}_n\}$ is a linearly dependent set of functions. In this case there is a nonsingular matrix $\mathbf{Q}_n$ such that
$$[K_1^2, \ldots, K_n^2]\,\mathbf{Q}_n^T = [\tilde{K}_1^2, \ldots, \tilde{K}_r^2, 0, \ldots, 0],$$
where $\{\tilde{K}_i^2 : i \in \mathbb{N}_r\}$, $0 < r < n$, is a linearly independent set of functions. Let
$$[\tilde{K}_1^1, \ldots, \tilde{K}_n^1] = [K_1^1, \ldots, K_n^1]\,\mathbf{Q}_n^{-1}$$
and
$$\tilde{\mathbf{K}}_r = [(\tilde{K}_j^1, \tilde{K}_i^2) : i, j \in \mathbb{N}_r].$$
We then have that
$$\mathbf{Q}_n(\mathbf{I}_n - \mathbf{K}_n)\mathbf{Q}_n^{-1} = \begin{bmatrix} \mathbf{I}_r - \tilde{\mathbf{K}}_r & * \\ 0 & \mathbf{I}_{n-r} \end{bmatrix}.$$
Noting that $\mathbf{I}_r - \tilde{\mathbf{K}}_r$ is the coefficient matrix associated with the degenerate kernel $\tilde{K}_r(s,t) = \sum_{j\in\mathbb{N}_r} \tilde{K}_j^1(s)\tilde{K}_j^2(t)$ and that $\{\tilde{K}_i^2 : i \in \mathbb{N}_r\}$ is a linearly independent set of functions, we conclude from Case 1 that $\mathbf{I}_r - \tilde{\mathbf{K}}_r$ is nonsingular, and thus $\mathbf{I}_n - \mathbf{K}_n$ is nonsingular.



When the hypothesis of the above proposition is satisfied, we solve (3.7) for $[v_j : j \in \mathbb{N}_n]$ and obtain from (3.6) the solution $u_n$ of the degenerate kernel method (3.3).
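The procedure just described can be sketched in a few lines of Python (an illustration under our own assumptions: the helper name `degenerate_kernel_solve`, the use of `scipy.integrate.quad` for the inner products, and the rank-one test kernel are not from the text). It assembles the system (3.7) and evaluates the solution via (3.6):

```python
import numpy as np
from scipy.integrate import quad

def degenerate_kernel_solve(K1, K2, f, a, b):
    """Solve u - K_n u = f for K_n(s,t) = sum_j K1[j](s) K2[j](t):
    assemble the linear system (3.7), then return u_n from (3.6)."""
    n = len(K1)
    A = np.eye(n)
    rhs = np.empty(n)
    for i in range(n):
        rhs[i] = quad(lambda t: f(t) * K2[i](t), a, b)[0]
        for j in range(n):
            A[i, j] -= quad(lambda t: K1[j](t) * K2[i](t), a, b)[0]
    v = np.linalg.solve(A, rhs)
    return lambda s: f(s) + sum(v[j] * K1[j](s) for j in range(n))

# Rank-one test: K(s,t) = s*t on [0,1] is already degenerate, and with
# f(s) = (2/3)s the equation u - Ku = f has the exact solution u(s) = s.
u = degenerate_kernel_solve([lambda s: s], [lambda t: t],
                            lambda s: 2.0 * s / 3.0, 0.0, 1.0)
print(abs(u(0.5) - 0.5))   # near machine precision
```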

3.1.2 Degenerate kernel approximations via interpolation

A natural way to construct degenerate kernel approximations is by interpolation. We may employ polynomials, piecewise polynomials, trigonometric polynomials and others as a basis to construct an interpolation of the kernel function.

We now consider Lagrange interpolation. Let $\{t_j : j \in \mathbb{N}_n\}$ be a finite subset of the domain $\Omega$ and let $\{L_j : j \in \mathbb{N}_n\}$ be the Lagrange basis functions satisfying
$$L_j(t_i) = \delta_{ij}, \quad i, j \in \mathbb{N}_n.$$

The kernel $K$ can be approximated by a kernel $K_n$ interpolating $K$ with respect to $s$ or $t$. That is, we have
$$K_n(s,t) = \sum_{j\in\mathbb{N}_n} L_j(s)K(t_j, t)$$
or
$$K_n(s,t) = \sum_{j\in\mathbb{N}_n} K(s, t_j)L_j(t).$$

Using the former, the linear system (3.7) becomes
$$v_i - \sum_{j\in\mathbb{N}_n} v_j \int_\Omega L_j(t)K(t_i, t)\,dt = \int_\Omega f(t)K(t_i, t)\,dt, \quad i \in \mathbb{N}_n, \qquad (3.10)$$
and the solution is given by
$$u_n = f + \sum_{j\in\mathbb{N}_n} v_j L_j, \qquad (3.11)$$
while using the latter, the linear system (3.7) becomes
$$v_i - \sum_{j\in\mathbb{N}_n} v_j \int_\Omega K(s, t_j)L_i(s)\,ds = \int_\Omega f(s)L_i(s)\,ds, \quad i \in \mathbb{N}_n, \qquad (3.12)$$
and the solution is given by
$$u_n = f + \sum_{j\in\mathbb{N}_n} v_j K(\cdot, t_j). \qquad (3.13)$$



As an example of Lagrange interpolation, we consider continuous piecewise linear polynomials, that is, linear splines on $\Omega = [a, b]$. Let $t_j = a + jh$ with $h = (b-a)/n$, $j \in \mathbb{Z}_{n+1}$, $n \in \mathbb{N}$. The basis functions are chosen as
$$L_j(t) = \begin{cases} 1 - |t - t_j|/h, & t \in [t_{j-1}, t_{j+1}], \\ 0, & \text{otherwise.} \end{cases}$$
We obtain a degenerate kernel approximation by interpolating $K$ with respect to $s$. Specifically, for $t \in \Omega$ we have that
$$K_n(s,t) = \big[(t_j - s)K(t_{j-1}, t) + (s - t_{j-1})K(t_j, t)\big]/h, \quad s \in [t_{j-1}, t_j], \ j \in \mathbb{N}_n.$$

For this example we have the following error estimate.

Proposition 3.4  If $K(\cdot, t) \in C^2(\Omega)$ for any $t \in \Omega$ and $\partial^2 K/\partial s^2 \in C(\Omega\times\Omega)$, then
$$\|\mathcal{K} - \mathcal{K}_n\| \le \frac{1}{8}h^2(b-a)\left\|\frac{\partial^2 K}{\partial s^2}\right\|_\infty.$$

Proof  It can easily be derived by using the Taylor formula that
$$|K(s,t) - K_n(s,t)| \le \frac{1}{8}h^2\left\|\frac{\partial^2 K(\cdot, t)}{\partial s^2}\right\|_\infty$$
for any $s \in [t_{j-1}, t_j]$, $t \in \Omega$. This, with (3.4), leads to the desired result of this proposition.
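The $O(h^2)$ behavior of Proposition 3.4 can be observed numerically. In the sketch below (our illustration; the kernel $e^{st}$, the fixed value $t = 0.9$ and the sampling grid are arbitrary choices, not from the text), halving $h$ divides the maximal interpolation error by roughly four:

```python
import numpy as np

# Piecewise linear interpolation of K(s, t) in the s variable, as in the
# construction preceding Proposition 3.4.
def interp_kernel(K, a, b, n):
    nodes = np.linspace(a, b, n + 1)
    h = (b - a) / n
    def Kn(s, t):
        j = min(int((s - a) / h), n - 1)      # s lies in [nodes[j], nodes[j+1]]
        lam = (s - nodes[j]) / h
        return (1 - lam) * K(nodes[j], t) + lam * K(nodes[j + 1], t)
    return Kn

K = lambda s, t: np.exp(s * t)                # smooth test kernel on [0, 1]^2
ss = np.linspace(0.0, 1.0, 401)
errs = []
for n in (10, 20, 40):
    Kn = interp_kernel(K, 0.0, 1.0, n)
    errs.append(max(abs(K(s, 0.9) - Kn(s, 0.9)) for s in ss))
print(errs)    # each error is roughly a quarter of the previous one
```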

3.1.3 Degenerate kernel approximations via expansion

An alternative way to construct degenerate kernel approximations is by expansion, such as Taylor expansions and Fourier expansions of the kernel $K$. We now introduce the latter.

Let $X = L^2(\Omega)$ with inner product $(\cdot,\cdot)$, which may be defined with respect to a weight function. Let $\{F_j : j \in \mathbb{N}\}$ be a complete orthonormal sequence in $X$. Then for any $x \in X$ we have the Fourier expansion of $x$ with respect to $\{F_j : j \in \mathbb{N}\}$:
$$x = \sum_{j\in\mathbb{N}} (x, F_j)F_j.$$
This can be used for the construction of approximate degenerate kernels of $K$ with respect to either variable. For example, we may define
$$K_n(s,t) = \sum_{j\in\mathbb{N}_n} F_j(s)\,(K(\cdot, t), F_j(\cdot)).$$



Let $G_j(t) = (K(\cdot, t), F_j(\cdot))$. Then the linear system (3.7) becomes
$$v_i - \sum_{j\in\mathbb{N}_n} v_j(F_j, G_i) = (f, G_i), \quad i \in \mathbb{N}_n, \qquad (3.14)$$
and the solution is given by
$$u_n = f + \sum_{j\in\mathbb{N}_n} v_j F_j. \qquad (3.15)$$

Proposition 3.5  If $K \in L^2(\Omega\times\Omega)$ and $\{F_j : j \in \mathbb{N}\}$ is a complete orthonormal set in $L^2(\Omega)$, then
$$\|\mathcal{K} - \mathcal{K}_n\| \le \Bigg(\sum_{j\in\mathbb{N}\setminus\mathbb{N}_n} \big\|(K(\cdot,\,\bullet), F_j(\cdot))\big\|^2\Bigg)^{1/2}.$$

Proof  Note that
$$K(s,t) - K_n(s,t) = \sum_{j\in\mathbb{N}\setminus\mathbb{N}_n} F_j(s)\,(K(\cdot, t), F_j(\cdot)).$$
By employing the orthonormality of the sequence $\{F_j : j \in \mathbb{N}\}$, it follows from (3.5) that
$$\|\mathcal{K} - \mathcal{K}_n\| \le \left(\int_\Omega\int_\Omega |K(s,t) - K_n(s,t)|^2\,ds\,dt\right)^{1/2} = \Bigg(\sum_{j\in\mathbb{N}\setminus\mathbb{N}_n} \big\|(K(\cdot,\,\bullet), F_j(\cdot))\big\|^2\Bigg)^{1/2}.$$

3.2 Quadrature methods

In this section we introduce the quadrature, or Nyström, method for solving Fredholm integral equations of the second kind. This method discretizes the integral equation by directly replacing the integral appearing in the equation by a numerical quadrature.

3.2.1 Numerical quadratures

We begin by introducing numerical integration. Let $\Omega \subseteq \mathbb{R}^d$ be a compact set and assume that $g \in C(\Omega)$. To approximate the integral
$$Q(g) = \int_\Omega g(t)\,dt,$$



we consider numerical quadrature rules of the form
$$Q_n(g) = \sum_{j\in\mathbb{N}_n} w_{nj}\,g(t_{nj}),$$
where $t_{nj} \in \Omega$, $j \in \mathbb{N}_n$, are quadrature nodes and $w_{nj}$, $j \in \mathbb{N}_n$, are real quadrature weights.

The following are some examples of quadrature rules.

Example 3.6  Consider the trapezoidal quadrature rule
$$Q_n(g) = h\left[\tfrac{1}{2}g(t_0) + g(t_1) + \cdots + g(t_{n-1}) + \tfrac{1}{2}g(t_n)\right]$$
on $\Omega = [a,b]$, where $h = (b-a)/n$ and $t_j = a + jh$, $j \in \mathbb{Z}_{n+1}$. When $g \in C^2(\Omega)$, the error of the trapezoidal rule has the estimate ([172], p. 481)
$$|Q(g) - Q_n(g)| \le \frac{b-a}{12}h^2\|g''\|_\infty.$$

Example 3.7  Consider the Simpson quadrature rule
$$Q_n(g) = \frac{h}{3}\big[g(t_0) + 4g(t_1) + 2g(t_2) + \cdots + 2g(t_{n-2}) + 4g(t_{n-1}) + g(t_n)\big]$$
on $\Omega = [a,b]$, where $h = (b-a)/n$, $t_j = a + jh$, $j \in \mathbb{Z}_{n+1}$, and $n$ is even. When $g \in C^4(\Omega)$, the error of the Simpson rule has the estimate ([172], p. 483)
$$|Q(g) - Q_n(g)| \le \frac{b-a}{180}h^4\|g^{(4)}\|_\infty.$$
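The two error estimates above are easy to observe numerically. The following sketch (our illustration; the integrand $e^t$ on $[0,1]$ is an arbitrary smooth choice) implements both composite rules and records their errors at two step sizes:

```python
import numpy as np

def trapezoid(g, a, b, n):
    """Composite trapezoidal rule of Example 3.6 on n subintervals."""
    t = np.linspace(a, b, n + 1)
    w = np.full(n + 1, (b - a) / n)
    w[0] = w[-1] = (b - a) / (2 * n)
    return w @ g(t)

def simpson(g, a, b, n):
    """Composite Simpson rule of Example 3.7; n must be even."""
    t = np.linspace(a, b, n + 1)
    w = np.ones(n + 1)
    w[1:-1:2] = 4.0
    w[2:-1:2] = 2.0
    return (b - a) / (3 * n) * (w @ g(t))

g, exact = np.exp, np.e - 1.0     # integral of e^t over [0, 1]
trap_err = [abs(trapezoid(g, 0.0, 1.0, n) - exact) for n in (8, 16)]
simp_err = [abs(simpson(g, 0.0, 1.0, n) - exact) for n in (8, 16)]
print(trap_err, simp_err)
# halving h divides the trapezoidal error by ~4 and the Simpson error by ~16
```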

Example 3.8  Let $\{\psi_j : j \in \mathbb{Z}_{n+1}\}$ be a family of orthogonal polynomials of degree $\le n$ on $\Omega = [a,b]$ with respect to a non-negative weight function $\rho$, and let $\{t_j : j \in \mathbb{N}_n\}$ be the zeros of the function $\psi_n$. Consider the Gaussian quadrature rule
$$Q_n(g) = \sum_{j\in\mathbb{N}_n} w_j\,g(t_j) \qquad (3.16)$$
for the integral $Q(g)$, where
$$w_j = \int_\Omega \rho(t)L_{nj}(t)\,dt$$
and $L_{nj}$, $j \in \mathbb{N}_n$, are the Lagrange interpolation polynomials of degree $n-1$, which have the form
$$L_{nj}(t) = \frac{\prod_{i\in\mathbb{N}_n,\, i\ne j}(t - t_i)}{\prod_{i\in\mathbb{N}_n,\, i\ne j}(t_j - t_i)}, \quad j \in \mathbb{N}_n.$$



When $g \in C^{2n}(\Omega)$, the error of the Gaussian quadrature rule has the estimate ([172], p. 497)
$$Q(g) - Q_n(g) = \frac{g^{(2n)}(\eta)}{(2n)!}\int_\Omega \rho(t)\prod_{i\in\mathbb{N}_n}(t - t_i)^2\,dt$$
for some $\eta \in \Omega$. When $\Omega = [-1, 1]$, $\rho = 1$ and $\{\psi_j : j \in \mathbb{Z}_{n+1}\}$ is chosen to be the set of Legendre polynomials
$$\psi_j(t) = \frac{1}{2^j j!}\,\frac{d^j}{dt^j}\big[(t^2 - 1)^j\big], \quad j \in \mathbb{Z}_{n+1},$$
formula (3.16) is called the Gauss–Legendre quadrature formula, in which
$$w_j = \frac{2}{n}\cdot\frac{1}{\psi_{n-1}(t_j)\psi_n'(t_j)}.$$
This example shows that the Nyström method using the Gaussian quadrature formula can converge rapidly. We now turn to the convergence of a sequence of general quadrature rules.
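As a quick numerical confirmation of this rapid convergence (our illustration; it assumes NumPy, whose `leggauss` routine supplies the Gauss–Legendre nodes and weights of (3.16)), compare a few rule sizes on a smooth integrand:

```python
import numpy as np

# Gauss-Legendre quadrature on [-1, 1]: nodes are the zeros of the
# degree-n Legendre polynomial, weights as in (3.16).
exact = np.e - 1.0 / np.e          # integral of e^t over [-1, 1]
errors = {}
for n in (2, 4, 8):
    t, w = np.polynomial.legendre.leggauss(n)
    errors[n] = abs(w @ np.exp(t) - exact)
print(errors)   # the n = 8 rule is already accurate to near machine precision
```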

Definition 3.9  A sequence $\{Q_n : n \in \mathbb{N}\}$ of quadrature rules is called convergent if it converges pointwise to the functional $Q$ on $C(\Omega)$.

The next result characterizes convergent quadrature rules.

Proposition 3.10  A sequence $\{Q_n : n \in \mathbb{N}\}$ of quadrature rules with weights $w_{nj}$, $j \in \mathbb{N}_n$, converges if and only if
$$\sup_{n\in\mathbb{N}} \sum_{j\in\mathbb{N}_n} |w_{nj}| < \infty$$
and
$$Q_n(g) \to Q(g), \quad n \to \infty,$$
for all $g$ in a dense subset $U \subset C(\Omega)$.

Proof  It can easily be verified that
$$\|Q_n\|_\infty = \sum_{j\in\mathbb{N}_n} |w_{nj}|.$$
Thus, the result of this proposition follows directly from the Banach–Steinhaus theorem (Corollary A.27 in the Appendix).



3.2.2 The Nyström method for continuous kernels

Using a sequence $\{Q_n : n \in \mathbb{N}\}$ of numerical quadrature rules, the integral operator
$$(\mathcal{K}u)(s) = \int_\Omega K(s,t)u(t)\,dt, \quad s \in \Omega,$$
with a continuous kernel $K \in C(\Omega\times\Omega)$ is approximated by a sequence of summation operators
$$(\mathcal{K}_n u)(s) = \sum_{j\in\mathbb{N}_n} w_j K(s, t_j)u(t_j), \quad s \in \Omega.$$

Accordingly, the integral equation (3.1) is approximated by a sequence of discrete equations
$$u_n - \mathcal{K}_n u_n = f \qquad (3.17)$$
or
$$u_n(s) - \sum_{j\in\mathbb{N}_n} w_j K(s, t_j)u_n(t_j) = f(s), \quad s \in \Omega. \qquad (3.18)$$
We specify equation (3.18) at the quadrature points $t_i$, $i \in \mathbb{N}_n$, and obtain the linear system
$$u_n(t_i) - \sum_{j\in\mathbb{N}_n} w_j K(t_i, t_j)u_n(t_j) = f(t_i), \quad i \in \mathbb{N}_n, \qquad (3.19)$$
where the unknown is the vector $[u_n(t_j) : j \in \mathbb{N}_n]$. We summarize the above discussion in the following proposition.

Proposition 3.11  For the solution $u_n$ of (3.18), let $u_{nj} = u_n(t_j)$, $j \in \mathbb{N}_n$. Then $[u_{nj} : j \in \mathbb{N}_n]$ satisfies the linear system
$$u_{ni} - \sum_{j\in\mathbb{N}_n} w_j K(t_i, t_j)u_{nj} = f(t_i), \quad i \in \mathbb{N}_n. \qquad (3.20)$$
Conversely, if $[u_{nj} : j \in \mathbb{N}_n]$ is a solution of (3.20), then the function $u_n$ defined by
$$u_n(s) = f(s) + \sum_{j\in\mathbb{N}_n} w_j K(s, t_j)u_{nj}, \quad s \in \Omega, \qquad (3.21)$$
solves equation (3.18).

Proof  The first statement is trivial. Next, if $[u_{nj} : j \in \mathbb{N}_n]$ is a solution of (3.20), then we have from (3.21) and (3.20) that
$$u_n(t_i) = f(t_i) + \sum_{j\in\mathbb{N}_n} w_j K(t_i, t_j)u_{nj} = u_{ni}.$$
From (3.21) and the above equation, we find that $u_n$ satisfies (3.18).

Formula (3.21) can be viewed as an interpolation formula which extends the numerical solution of the linear system (3.19) to all points $s \in \Omega$; it is called the Nyström interpolation formula.
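The complete procedure, the linear system (3.19) followed by the Nyström interpolation formula (3.21), can be sketched as follows (our illustration: the trapezoidal rule, the rank-one test kernel and the helper name `nystrom_solve` are our own choices, not from the text):

```python
import numpy as np

def nystrom_solve(K, f, a, b, n):
    """Nystrom method with the composite trapezoidal rule: solve (3.19)
    for the nodal values, then extend by the interpolation formula (3.21)."""
    t = np.linspace(a, b, n + 1)
    w = np.full(n + 1, (b - a) / n)
    w[0] = w[-1] = (b - a) / (2 * n)
    # matrix of (3.19): entries delta_ij - w_j K(t_i, t_j)
    A = np.eye(n + 1) - K(t[:, None], t[None, :]) * w
    u_nodes = np.linalg.solve(A, f(t))
    return lambda s: f(s) + (w * K(s, t)) @ u_nodes   # Nystrom interpolation

K = lambda s, t: s * t              # smooth test kernel on [0, 1]
f = lambda s: 2.0 * s / 3.0         # chosen so that the exact solution is u(s) = s
u = nystrom_solve(K, f, 0.0, 1.0, 64)
print(abs(u(0.3) - 0.3))            # small; limited only by the quadrature error
```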

We now consider the error analysis of the Nyström method. Unlike the degenerate kernel method, we do not expect uniform convergence of the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ of approximate operators to the integral operator $\mathcal{K}$ in the Nyström method. In fact, $\|\mathcal{K}_n - \mathcal{K}\| \ge \|\mathcal{K}\|$. To see this, for any small positive constant $\varepsilon$ we can choose a function $\varphi_\varepsilon \in C(\Omega)$ such that $\|\varphi_\varepsilon\|_\infty = 1$, $\varphi_\varepsilon(t_j) = 0$ for all $j \in \mathbb{N}_n$, and $\varphi_\varepsilon(s) = 1$ for all $s \in \Omega$ with $\min_{j\in\mathbb{N}_n} |s - t_j| \ge \varepsilon$. For this choice of $\varphi_\varepsilon$ we have that
$$\begin{aligned} \|\mathcal{K}_n - \mathcal{K}\| &= \sup\{\|(\mathcal{K}_n - \mathcal{K})v\|_\infty : v \in C(\Omega),\ \|v\|_\infty \le 1\} \\ &\ge \sup\{\|(\mathcal{K}_n - \mathcal{K})(v\varphi_\varepsilon)\|_\infty : v \in C(\Omega),\ \|v\|_\infty \le 1,\ \varepsilon > 0\} \\ &= \sup\{\|\mathcal{K}(v\varphi_\varepsilon)\|_\infty : v \in C(\Omega),\ \|v\|_\infty \le 1,\ \varepsilon > 0\} \\ &= \sup\{\|\mathcal{K}v\|_\infty : v \in C(\Omega),\ \|v\|_\infty \le 1\} = \|\mathcal{K}\|. \end{aligned}$$

Although the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ is not uniformly convergent, it is pointwise convergent. Therefore, by using the theory of collectively compact operator approximation, we can obtain an error estimate for the Nyström method.

Theorem 3.12  If the sequence of quadrature rules is convergent, then the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ of quadrature operators is collectively compact and pointwise convergent on $C(\Omega)$. Moreover, if $u$ and $u_n$ are the solutions of equation (3.1) and of the Nyström method, respectively, then there exist a positive constant $c$ and a positive integer $q$ such that for all $n \ge q$,
$$\|u_n - u\|_\infty \le c\|(\mathcal{K}_n - \mathcal{K})u\|_\infty.$$

Proof  It follows from Proposition 3.10 that
$$C = \sup_{n\in\mathbb{N}} \sum_{j\in\mathbb{N}_n} |w_{nj}| < \infty,$$
which leads to
$$\|\mathcal{K}_n v\|_\infty \le C \max_{s,t\in\Omega} |K(s,t)|\,\|v\|_\infty \qquad (3.22)$$



and
$$|(\mathcal{K}_n v)(s_1) - (\mathcal{K}_n v)(s_2)| \le C \max_{t\in\Omega} |K(s_1, t) - K(s_2, t)|\,\|v\|_\infty, \quad s_1, s_2 \in \Omega. \qquad (3.23)$$
Noting that $K$ is uniformly continuous on $\Omega\times\Omega$, we conclude that for any bounded set $B \subset C(\Omega)$, $\{\mathcal{K}_n v : v \in B, n \in \mathbb{N}\}$ is bounded and equicontinuous. Thus, by the Arzelà–Ascoli theorem, the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ is collectively compact.

Since the sequence of quadrature rules is convergent, for any $v \in C(\Omega)$,
$$(\mathcal{K}_n v)(s) \to (\mathcal{K}v)(s) \quad \text{as } n\to\infty, \text{ for all } s \in \Omega. \qquad (3.24)$$
From (3.23) we see that $\{\mathcal{K}_n v : n \in \mathbb{N}\}$ is equicontinuous, which with (3.24) leads to the conclusion that $\{\mathcal{K}_n v : n \in \mathbb{N}\}$ is uniformly convergent; that is, $\{\mathcal{K}_n : n \in \mathbb{N}\}$ is pointwise convergent on $C(\Omega)$.

The last statement of the theorem follows from Theorem 2.56.

It follows from equations (3.1) and (3.17) that
$$(I - \mathcal{K}_n)(u_n - u) = (\mathcal{K}_n - \mathcal{K})u,$$
which yields
$$\|(\mathcal{K}_n - \mathcal{K})u\|_\infty \le \|I - \mathcal{K}_n\|\,\|u_n - u\|_\infty.$$
This, with the estimate of Theorem 3.12, means that the error $\|u_n - u\|_\infty$ converges to zero at the same order as the numerical integration error
$$\|(\mathcal{K}_n - \mathcal{K})u\|_\infty = \max_{s\in\Omega}\Bigg|\sum_{j\in\mathbb{N}_n} w_j K(s, t_j)u(t_j) - \int_\Omega K(s,t)u(t)\,dt\Bigg|.$$

3.2.3 The Nyström method for weakly singular kernels

In this subsection we describe the Nyström method for the numerical solution of integral equation (3.1) with weakly singular operators defined by
$$(\mathcal{K}u)(s) = \int_\Omega K_1(s,t)K_2(s,t)u(t)\,dt,$$
where $K_1$ is a weakly singular kernel and $K_2$ is a smooth kernel. We consider the important cases
$$K_1(s,t) = \log|s - t|$$
or
$$K_1(s,t) = \frac{1}{|s-t|^\sigma}$$



for some $\sigma \in (0, d)$. The former is often regarded as a special case of the latter with $\sigma = 0$.

We consider a sequence $\{Q_n : n \in \mathbb{N}\}$ of numerical quadrature rules
$$(Q_n g)(s) = \sum_{j\in\mathbb{N}_n} w_{nj}(s)\,g(t_{nj}), \quad s \in \Omega, \qquad (3.25)$$
for the integral
$$(Qg)(s) = \int_\Omega K_1(s,t)g(t)\,dt, \quad s \in \Omega,$$

where the quadrature weights depend on the function $K_1$ and the variable $s$. Then the integral operator $\mathcal{K}$ is approximated by a sequence of approximate operators defined by
$$(\mathcal{K}_n u)(s) = Q_n\big(K_2(s,\cdot)u(\cdot)\big)(s), \quad s \in \Omega,$$
in terms of the quadrature rules $Q_n$. Specifically, we have that
$$(\mathcal{K}_n u)(s) = \sum_{j\in\mathbb{N}_n} w_j(s)K_2(s, t_j)u(t_j),$$
where we use the simplified notations $w_j = w_{nj}$ and $t_j = t_{nj}$. The integral equation (3.1) is then approximated by a sequence of linear equations
$$u_{ni} - \sum_{j\in\mathbb{N}_n} w_j(t_i)K_2(t_i, t_j)u_{nj} = f(t_i), \quad i \in \mathbb{N}_n, \qquad (3.26)$$
and the approximate solution $u_n$ is defined by
$$u_n(s) = f(s) + \sum_{j\in\mathbb{N}_n} w_j(s)K_2(s, t_j)u_{nj}, \quad s \in \Omega. \qquad (3.27)$$

Example 3.13  Suppose that
$$(\mathcal{K}u)(s) = \int_\Omega \log|s - t|\,K_2(s,t)u(t)\,dt, \quad s \in \Omega = [a,b],$$
where $K_2$ is a smooth function. Let $h = (b-a)/n$ and $t_j = a + jh$, $j \in \mathbb{Z}_{n+1}$. For a fixed $s \in \Omega$, we choose a piecewise linear interpolation for $K_2(s,\cdot)u(\cdot)$; that is, $K_2(s,\cdot)u(\cdot)$ is approximated by
$$\big[(t_j - t)K_2(s, t_{j-1})u(t_{j-1}) + (t - t_{j-1})K_2(s, t_j)u(t_j)\big]/h$$
for $t \in [t_{j-1}, t_j]$, $j \in \mathbb{N}_n$. By defining the weight functions

w0(s) = 1

h

int[t0t1]

(t1 minus t) log |sminus t|dt

wj(s) = 1

h

int[tjminus1tj]

(t minus tjminus1) log |sminus t|dt + 1

h

int[tjtj+1]

(tj minus t)) log |sminus t|dt j isin Nnminus1


and
$$w_n(s) = \frac{1}{h}\int_{[t_{n-1},t_n]} (t - t_{n-1})\log|s-t|\,dt,$$
we obtain the approximate operators
$$(\mathcal{K}_n u)(s) = \sum_{j \in \mathbb{Z}_{n+1}} w_j(s)\, K_2(s,t_j)\, u(t_j), \quad s \in E.$$
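The construction of Example 3.13 can be sketched numerically. In the following Python sketch the weight integrals $w_j(s)$ are evaluated by a fine composite midpoint rule rather than in closed form, and all function and parameter names are our own illustrative choices, not the book's.

```python
import numpy as np

def nystrom_log(K2, f, a, b, n, m=400):
    """Product-integration Nystrom method of Example 3.13 for
    u(s) - int_a^b log|s-t| K2(s,t) u(t) dt = f(s),
    with K2(s,.)u(.) replaced by its piecewise linear interpolant."""
    h = (b - a) / n
    t = a + h * np.arange(n + 1)

    def weights(s):
        # w_j(s): integrals of the hat functions against log|s-t|,
        # accumulated cell by cell with an m-point midpoint rule
        W = np.zeros(n + 1)
        for j in range(n):                              # cell [t_j, t_{j+1}]
            tau = t[j] + h * (np.arange(m) + 0.5) / m   # midpoints avoid t = s
            ker = np.log(np.abs(s - tau))
            W[j] += np.dot(t[j + 1] - tau, ker) / m     # (1/h) * (h/m) = 1/m
            W[j + 1] += np.dot(tau - t[j], ker) / m
        return W

    # linear system (3.26): u_i - sum_j w_j(t_i) K2(t_i, t_j) u_j = f(t_i)
    A = np.eye(n + 1)
    for i in range(n + 1):
        A[i, :] -= weights(t[i]) * K2(t[i], t)
    return t, np.linalg.solve(A, f(t))
```

A quick sanity check is a manufactured problem with $K_2 \equiv 1$ and exact solution $u \equiv 1$, for which $f(s) = 1 - \int_a^b \log|s-t|\,dt$ is known in closed form.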

The error analysis for the Nyström method for weakly singular kernels can be obtained in a way similar to Theorem 3.12 for continuous kernels. Noting that in this case the quadrature weights depend on $s$, we need to make appropriate modifications in Theorem 3.12 to fit the current case. We first modify Proposition 3.10 to the following result.

Proposition 3.14 The sequence $\{Q_n : n \in \mathbb{N}\}$ of quadrature rules defined as in (3.25) converges uniformly on $E$ if and only if
$$\sup_{s \in E}\,\sup_{n \in \mathbb{N}} \sum_{j \in \mathbb{N}_n} |w_{nj}(s)| < \infty$$
and
$$Q_n(g) \to Q(g), \quad n \to \infty,$$
uniformly on $E$ for all $g$ in some dense subset $U \subset C(E)$.

With this result we describe the main result for the Nyström method in the case of weakly singular kernels.

Theorem 3.15 If the sequence of quadrature rules is uniformly convergent and
$$\lim_{t \to s}\,\sup_{n \in \mathbb{N}} \sum_{j \in \mathbb{N}_n} |w_{nj}(t) - w_{nj}(s)| = 0, \qquad (3.28)$$
then the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ of corresponding approximate operators is collectively compact and pointwise convergent on $C(E)$. Moreover, if $u$ and $u_n$ are the solutions of equation (3.1) and the Nyström method, respectively, then there exist a positive constant $c$ and a positive integer $q$ such that for all $n \ge q$,
$$\|u_n - u\|_\infty \le c\,\|(\mathcal{K}_n - \mathcal{K})u\|_\infty.$$


Proof The proof is similar to that of Theorem 3.12. We only need to replace inequality (3.23) by the following inequality:
$$\begin{aligned}
|(\mathcal{K}_n u)(s_1) - (\mathcal{K}_n u)(s_2)| &\le \Bigl|\sum_{j \in \mathbb{N}_n} w_{nj}(s_1)\bigl[K_2(s_1,t_j) - K_2(s_2,t_j)\bigr]u(t_j)\Bigr| + \Bigl|\sum_{j \in \mathbb{N}_n} \bigl[w_{nj}(s_1) - w_{nj}(s_2)\bigr]K_2(s_2,t_j)u(t_j)\Bigr| \\
&\le C \max_{t \in E} |K_2(s_1,t) - K_2(s_2,t)|\,\|u\|_\infty + \sup_{n \in \mathbb{N}} \sum_{j \in \mathbb{N}_n} |w_{nj}(s_1) - w_{nj}(s_2)| \max_{s,t \in E} |K_2(s,t)|\,\|u\|_\infty.
\end{aligned}$$
Then, by a proof similar to that of Theorem 3.12, one can conclude the desired results.

3.3 Galerkin methods

We have discussed projection methods for solving operator equations in Section 1.3. In this section and in what follows, we specialize the operator equations to Fredholm integral equations of the second kind and consider three major projection methods, namely the Galerkin method, the Petrov–Galerkin method and the collocation method, for solving the equations. Specifically, we present in this section the Galerkin method, the iterated Galerkin method and the discrete Galerkin method for solving Fredholm integral equations of the second kind.

As described in Section 1.3.2, for the operator equation in a Hilbert space, the projection method via orthogonal projections from the Hilbert space onto finite-dimensional subspaces leads to the Galerkin method.

Let $X = L^2(E)$, let $\{X_n : n \in \mathbb{N}\}$ be a sequence of subspaces of $X$ satisfying $\overline{\bigcup_{n \in \mathbb{N}} X_n} = X$ and let $\{\mathcal{P}_n : n \in \mathbb{N}\}$ be a sequence of orthogonal projections from $X$ onto $X_n$. The Galerkin method for solving (3.1) is to find $u_n \in X_n$ such that
$$(\mathcal{I} - \mathcal{P}_n\mathcal{K})u_n = \mathcal{P}_n f, \qquad (3.29)$$
or equivalently
$$(u_n, v) - (\mathcal{K}u_n, v) = (f, v) \quad \text{for all } v \in X_n.$$
Under the hypotheses that $s(n) = \dim X_n$ and $\{\varphi_j : j \in \mathbb{N}_{s(n)}\}$ is a basis for $X_n$, the solution $u_n$ of equation (3.29) can be written in the form
$$u_n = \sum_{j \in \mathbb{N}_{s(n)}} u_j \varphi_j,$$


where the vector $\mathbf{u}_n = [u_j : j \in \mathbb{N}_{s(n)}]$ satisfies the linear system
$$\sum_{j \in \mathbb{N}_{s(n)}} u_j\bigl[(\varphi_j, \varphi_i) - (\mathcal{K}\varphi_j, \varphi_i)\bigr] = (f, \varphi_i), \quad i \in \mathbb{N}_{s(n)}.$$
Setting
$$\mathbf{E}_n = [(\varphi_j, \varphi_i) : i,j \in \mathbb{N}_{s(n)}], \qquad \mathbf{K}_n = [(\mathcal{K}\varphi_j, \varphi_i) : i,j \in \mathbb{N}_{s(n)}]$$
and
$$\mathbf{f}_n = [(f, \varphi_j) : j \in \mathbb{N}_{s(n)}],$$
equation (3.29) can be written in the matrix form
$$(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n. \qquad (3.30)$$
We call $\mathbf{K}_n$ the Galerkin matrix.

Note that since $\mathcal{P}_n : X \to X_n$, $n \in \mathbb{N}$, are orthogonal projections and $\overline{\bigcup_{n \in \mathbb{N}} X_n} = X$, we have $\|\mathcal{P}_n\| = 1$ and $\mathcal{P}_n$ converges pointwise to the identity operator $\mathcal{I}$ in $X$. According to Theorem 2.55, if $\mathcal{K} : X \to X$ is a compact linear operator not having one as an eigenvalue, then there exist an integer $q$ and a positive constant $c$ such that for all $n \ge q$, equation (3.29) has a unique solution $u_n \in X_n$ and
$$\|u - u_n\| \le c\,\|u - \mathcal{P}_n u\|,$$
where $u$ is the solution of equation (3.1).

3.3.1 The Galerkin method with piecewise polynomials

Piecewise polynomial bases are often used in the Galerkin method for solving equation (3.1) because of their simplicity, flexibility and excellent approximation properties. In this subsection we present the standard piecewise polynomial Galerkin method for solving the Fredholm integral equation (3.1) of the second kind.

We begin with a description of piecewise polynomial subspaces of $L^2(E)$. We assume that there is a partition $\{E_i : i \in \mathbb{Z}_n\}$ of $E$ for $n \in \mathbb{N}$ which satisfies the following conditions:

• $E = \bigcup_{i \in \mathbb{Z}_n} E_i$ and $\mathrm{meas}(E_j \cap E_{j'}) = 0$, $j \ne j'$.
• For each $i \in \mathbb{Z}_n$ there exists an invertible affine map $\varphi_i$ such that $\varphi_i(E_0) = E_i$, where $E_0$ is a reference element.

For $E_i$, $i \in \mathbb{Z}_n$, we define the parameters
$$h_i = \mathrm{diam}(E_i), \qquad \rho_i = \text{the diameter of the largest circle inscribed in } E_i.$$


We also assume that the partition is regular, in the sense that there exists a positive constant $c$ such that for all $i \in \mathbb{Z}_n$ and for all $n \in \mathbb{N}$,
$$\frac{h_i}{\rho_i} \le c.$$
Let $h_n = \max\{\mathrm{diam}(E_i) : i \in \mathbb{Z}_n\}$. For a positive integer $k$, we denote by $X_n$ the space of piecewise polynomials of total degree $\le k-1$ with respect to the partition $\{E_i : i \in \mathbb{Z}_n\}$. In other words, every element in $X_n$ is a polynomial of total degree $\le k-1$ on each $E_i$. Since the dimension of the space of polynomials of total degree $\le k-1$ is given by $m = \binom{k+d-1}{d}$, we conclude that the dimension of $X_n$ is $mn$.

We next construct a basis for the space $X_n$. We choose a collection $\{\tau_j : j \in \mathbb{N}_m\} \subset E_0$ in general position, that is, the Lagrange interpolation polynomial of total degree $k-1$ at these points is uniquely defined. We assume that a collection $\{p_j : j \in \mathbb{N}_m\}$ of polynomials of total degree $k-1$ is chosen to satisfy the equation
$$p_j(\tau_i) = \delta_{ij}, \quad i,j \in \mathbb{N}_m.$$
For each $i \in \mathbb{Z}_n$ the functions defined by
$$\rho_{ij}(t) = \begin{cases} (p_j \circ \varphi_i^{-1})(t), & t \in E_i, \\ 0, & t \notin E_i, \end{cases} \qquad (3.31)$$
form a basis for the space $X_n$, where $\circ$ denotes functional composition. We let $\mathcal{P}_n : L^2(E) \to X_n$ be the orthogonal projection onto $X_n$. Then $\mathcal{P}_n$ is self-adjoint and $\|\mathcal{P}_n\| = 1$ for all $n$. Moreover, for each $x \in H^k(E)$ (cf. Section A.1 in the Appendix) there exist a positive constant $c$ and a positive integer $q$ such that for all $n \ge q$,
$$\|x - \mathcal{P}_n x\| \le c\, h_n^k\, \|x\|_{H^k}. \qquad (3.32)$$

Associated with the projection $\mathcal{P}_n$, the piecewise polynomial Galerkin method for solving equation (3.1) is described as finding $u_n \in X_n$ such that
$$(\mathcal{I} - \mathcal{P}_n\mathcal{K})u_n = \mathcal{P}_n f. \qquad (3.33)$$
In terms of the basis functions described in (3.31), the Galerkin equation (3.33) is equivalent to the linear system
$$(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n, \qquad (3.34)$$


where $\mathbf{u}_n \in \mathbb{R}^{mn}$,
$$\mathbf{E}_n = [E_{ij,i'j'} : i,i' \in \mathbb{Z}_n,\ j,j' \in \mathbb{N}_m] \quad \text{with } E_{ij,i'j'} = (\rho_{i'j'}, \rho_{ij}),$$
$$\mathbf{K}_n = [K_{ij,i'j'} : i,i' \in \mathbb{Z}_n,\ j,j' \in \mathbb{N}_m] \quad \text{with } K_{ij,i'j'} = (\mathcal{K}\rho_{i'j'}, \rho_{ij})$$
and
$$\mathbf{f}_n = [f_{ij} : i \in \mathbb{Z}_n,\ j \in \mathbb{N}_m] \quad \text{with } f_{ij} = (f, \rho_{ij}).$$

The following result is concerned with the convergence order of the Galerkin method (3.33).

Theorem 3.16 Suppose that $\mathcal{K} : L^2(E) \to L^2(E)$ is a compact operator not having one as its eigenvalue. If $u \in L^2(E)$ is the solution of equation (3.1), then there exist a positive constant $c$ and a positive integer $q$ such that for each $n \ge q$, equation (3.33) has a unique solution $u_n \in X_n$. Moreover, if $u \in H^k(E)$, then $u_n$ satisfies the error bound
$$\|u - u_n\| \le c\, h_n^k\, \|u\|_{H^k}.$$

Proof By the hypothesis that one is not an eigenvalue of the compact operator $\mathcal{K}$, the operator $\mathcal{I} - \mathcal{K}$ is one to one and onto. Since the spaces $X_n$ are dense in $L^2(E)$, we have that for $x \in L^2(E)$,
$$\lim_{n \to \infty} \|\mathcal{P}_n x - x\| = 0.$$
This ensures that there exists a positive integer $q$ such that for each $n \ge q$, equation (3.33) has a unique solution $u_n \in X_n$ and the inverse operators $(\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}$ are uniformly bounded.

It follows from equation (3.33) that
$$u - u_n = u - \mathcal{P}_n f - \mathcal{P}_n\mathcal{K}u_n. \qquad (3.35)$$
Moreover, applying the projection $\mathcal{P}_n$ to both sides of equation (3.1) yields
$$\mathcal{P}_n u - \mathcal{P}_n\mathcal{K}u = \mathcal{P}_n f.$$
Substituting this equation into (3.35) leads to the equation
$$u - u_n = u - \mathcal{P}_n u + \mathcal{P}_n\mathcal{K}(u - u_n).$$
Solving for $u - u_n$ from the above equation gives
$$u - u_n = (\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}(u - \mathcal{P}_n u).$$
Therefore the desired estimate follows from the above equation, the uniform boundedness of $(\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}$ and estimate (3.32).


The last theorem shows that the Galerkin method has convergence of optimal order; that is, the order of convergence is equal to the order of approximation from the piecewise polynomial space.

We finally remark that if $E$ is the boundary of a domain, the piecewise polynomial Galerkin method is called a boundary element method.
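To make the construction above concrete, here is a minimal Python sketch of the Galerkin method with piecewise constant basis functions ($k = 1$, $d = 1$) on a uniform partition of an interval. The function names and the Gauss quadrature used for the inner products are our own choices, not the book's.

```python
import numpy as np

def pw_const_galerkin(K, f, n, a=0.0, b=1.0, q=4):
    """Galerkin method for u - Ku = f on [a, b] with piecewise constant
    basis functions on n equal cells; inner products are computed with
    q-point Gauss-Legendre quadrature on each cell."""
    h = (b - a) / n
    edges = a + h * np.arange(n + 1)
    x, w = np.polynomial.legendre.leggauss(q)          # nodes/weights on [-1, 1]
    pts = edges[:-1, None] + h * (x[None, :] + 1) / 2  # (n, q) points per cell
    wts = h * w / 2                                    # weights per cell
    fn = (f(pts) * wts).sum(axis=1)                    # f_i = (f, phi_i)
    En = h * np.eye(n)                                 # E_ij = (phi_j, phi_i)
    # K_ij = (K phi_j, phi_i): double integral of K over cell_i x cell_j
    Kvals = K(pts[:, :, None, None], pts[None, None, :, :])
    Kn = np.einsum('p,q,ipjq->ij', wts, wts, Kvals)
    u = np.linalg.solve(En - Kn, fn)                   # system (3.34)
    mid = edges[:-1] + h / 2                           # cell midpoints
    return mid, u
```

For the degenerate kernel $K(s,t) = st$ on $[0,1]$ and $f(s) = 2s/3$, the exact solution is $u(s) = s$, and the piecewise constant Galerkin solution matches its cell averages to $O(h^2)$ at the midpoints.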

3.3.2 The Galerkin method with trigonometric polynomials

We now consider equation (3.1) with $E = [0, 2\pi]$ and the functions $K$ and $f$ being $2\pi$-periodic, that is, $K(s + 2\pi, t) = K(s, t + 2\pi) = K(s,t)$ and $f(s + 2\pi) = f(s)$ for $s,t \in E$. In this case trigonometric polynomials are often used as approximations for solving the equations, and projection methods of this type are referred to as spectral methods.

Let $X = L^2(0, 2\pi)$ be the space of all complex-valued, $2\pi$-periodic, square-integrable Lebesgue measurable functions on $E$ with inner product
$$(x, y) = \int_E x(t)\overline{y(t)}\,dt.$$
For each $n \in \mathbb{N}$, let $X_n$ be the subspace of $X$ of all trigonometric polynomials of degree $\le n$. That is, we set $\varphi_j(s) = e^{ijs}$, $s \in \mathbb{R}$, and
$$X_n = \mathrm{span}\{\varphi_j : j \in \mathbb{Z}_{-n,n}\},$$
where $i = \sqrt{-1}$ and $\mathbb{Z}_{-n,n} = \{-n, \ldots, 0, \ldots, n\}$. The orthogonal projection of $X$ onto $X_n$ is given by
$$\mathcal{P}_n x = \frac{1}{2\pi} \sum_{j \in \mathbb{Z}_{-n,n}} (x, \varphi_j)\varphi_j.$$
For $x \in X$ it is well known that
$$\|x - \mathcal{P}_n x\| = \Biggl(\frac{1}{2\pi} \sum_{j \in \mathbb{Z} \setminus \mathbb{Z}_{-n,n}} |(x, \varphi_j)|^2\Biggr)^{1/2} \to 0 \quad \text{as } n \to \infty.$$

To present the error analysis we introduce Sobolev spaces of $2\pi$-periodic functions, which are subspaces of $L^2(0, 2\pi)$ and require for their elements a certain decay of their Fourier coefficients. That is, for $r \in [0, \infty)$,
$$H^r(0, 2\pi) = \Biggl\{x \in L^2(0, 2\pi) : \sum_{j \in \mathbb{Z}} (1 + j^2)^r |(x, \varphi_j)|^2 < \infty\Biggr\}.$$


Note that $H^0(0, 2\pi)$ coincides with $L^2(0, 2\pi)$, and $H^r(0, 2\pi)$ is a Hilbert space with inner product given by
$$(x, y)_r = \frac{1}{2\pi} \sum_{j \in \mathbb{Z}} (1 + j^2)^r (x, \varphi_j)\overline{(y, \varphi_j)}$$
and norm given by
$$\|x\|_{H^r} = \Biggl(\frac{1}{2\pi} \sum_{j \in \mathbb{Z}} (1 + j^2)^r |(x, \varphi_j)|^2\Biggr)^{1/2}.$$
We remark that when $r$ is an integer, this norm is equivalent to the norm
$$\|x\|_r = \Biggl(\sum_{j \in \mathbb{Z}_{r+1}} \|x^{(j)}\|^2\Biggr)^{1/2}.$$

It can easily be seen that for $x \in H^r(0, 2\pi)$,
$$\|x - \mathcal{P}_n x\| \le \frac{1}{n^r} \Biggl(\frac{1}{2\pi} \sum_{j \in \mathbb{Z} \setminus \mathbb{Z}_{-n,n}} (1 + j^2)^r |(x, \varphi_j)|^2\Biggr)^{1/2} \le \frac{1}{n^r} \|x\|_{H^r}. \qquad (3.36)$$

With the help of the above estimate we have the following error analysis.

Theorem 3.17 Suppose that $\mathcal{K} : L^2(0, 2\pi) \to L^2(0, 2\pi)$ is a compact operator not having one as its eigenvalue. If $u \in L^2(0, 2\pi)$ is the solution of equation (3.1), then there exist an integer $N_0$ and a positive constant $c$ such that for each $n \ge N_0$, the Galerkin approximate equation (3.33) has a unique solution $u_n \in X_n$. Moreover, if $u \in H^r(0, 2\pi)$, then $u_n$ satisfies the error bound
$$\|u - u_n\| \le c\, n^{-r}\, \|u\|_{H^r}.$$

Proof The proof of this theorem is similar to that of Theorem 3.16, with (3.32) being replaced by (3.36).
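For a convolution kernel $K(s,t) = k(s-t)$ the trigonometric basis diagonalizes $\mathcal{K}$ ($\mathcal{K}\varphi_j = 2\pi c_j \varphi_j$, with $c_j$ the Fourier coefficients of $k$), so the Galerkin system decouples mode by mode. The following Python sketch exploits this; it is our illustration under that convolution assumption, not a general-purpose solver from the book.

```python
import numpy as np

def trig_galerkin_conv(k, f, n, M=256):
    """Trigonometric Galerkin method for u(s) - int_0^{2pi} k(s-t) u(t) dt = f(s).
    Since K e^{ijs} = 2*pi*c_j*e^{ijs} for the Fourier coefficients c_j of k,
    the Galerkin solution keeps the modes |j| <= n of f divided by 1 - 2*pi*c_j."""
    s = 2 * np.pi * np.arange(M) / M
    c = np.fft.fft(k(s)) / M                 # Fourier coefficients of k
    fhat = np.fft.fft(f(s)) / M              # Fourier coefficients of f
    uhat = np.zeros(M, dtype=complex)
    idx = np.r_[0:n + 1, M - n:M]            # FFT indices of modes |j| <= n
    uhat[idx] = fhat[idx] / (1 - 2 * np.pi * c[idx])
    return s, (np.fft.ifft(uhat) * M).real
```

For example, with $k(t) = \cos(t)/(2\pi)$ one has $\mathcal{K}\cos = \tfrac12\cos$, so $u(s) = \cos s$ solves $u - \mathcal{K}u = \tfrac12\cos s$, and the sketch recovers it to machine precision.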

3.3.3 The condition number for the Galerkin method

We discuss in this subsection the condition number of the linear system associated with the Galerkin equation, which depends on the choice of bases of the approximate subspace. To present the results we need the notion of the matrix norm induced by a vector norm. For an $n \times n$ matrix $\mathbf{A}$, the matrix norm is defined by
$$\|\mathbf{A}\|_p = \sup\{\|\mathbf{A}x\|_p : x \in \mathbb{R}^n,\ \|x\|_p = 1\}, \quad 1 \le p \le \infty.$$


It is well known that for $\mathbf{A} = [a_{ij} : i,j \in \mathbb{Z}_n]$,
$$\|\mathbf{A}\|_\infty = \max_{i \in \mathbb{Z}_n} \sum_{j \in \mathbb{Z}_n} |a_{ij}|, \qquad \|\mathbf{A}\|_1 = \max_{j \in \mathbb{Z}_n} \sum_{i \in \mathbb{Z}_n} |a_{ij}|$$
and
$$\|\mathbf{A}\|_2 = [\rho(\mathbf{A}^T\mathbf{A})]^{1/2},$$
where $\rho(\mathbf{A})$, the spectral radius of $\mathbf{A}$, is the largest of the absolute values of the eigenvalues of $\mathbf{A}$.
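These three formulas are easy to check numerically; the small demo below compares them with NumPy's built-in induced norms.

```python
import numpy as np

A = np.array([[1.0, -2.0],
              [3.0,  4.0]])
# max absolute row sum, max absolute column sum, and spectral norm
assert np.isclose(np.linalg.norm(A, np.inf), np.abs(A).sum(axis=1).max())
assert np.isclose(np.linalg.norm(A, 1), np.abs(A).sum(axis=0).max())
assert np.isclose(np.linalg.norm(A, 2),
                  np.sqrt(np.linalg.eigvalsh(A.T @ A).max()))
```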

To discuss the condition number of the coefficient matrix of the linear system (3.30) for the Galerkin method, we first provide a lemma on a change of bases for the subspace $X_n$. Let $\{\varphi_j : j \in \mathbb{N}_{s(n)}\}$ and $\{\psi_j : j \in \mathbb{N}_{s(n)}\}$ be two bases for the subspace $X_n$, with the latter being orthonormal. These bases are related by
$$\mathbf{\Psi}_n = \mathbf{C}_n\mathbf{\Phi}_n \quad \text{and} \quad \mathbf{\Phi}_n = \mathbf{D}_n\mathbf{\Psi}_n,$$
where $\mathbf{\Psi}_n = [\psi_j : j \in \mathbb{N}_{s(n)}]$, $\mathbf{\Phi}_n = [\varphi_j : j \in \mathbb{N}_{s(n)}]$, $\mathbf{D}_n = [(\varphi_i, \psi_j) : i,j \in \mathbb{N}_{s(n)}]$ and $\mathbf{C}_n = [c_{ij} : i,j \in \mathbb{N}_{s(n)}]$ is the matrix determined by the first equation.

Lemma 3.18 If $\{\psi_j : j \in \mathbb{N}_{s(n)}\}$ is orthonormal, then
$$\mathbf{C}_n\mathbf{D}_n = \mathbf{I}_n \quad \text{and} \quad \mathbf{D}_n\mathbf{D}_n^T = \mathbf{E}_n,$$
where $\mathbf{I}_n$ is the identity matrix and $\mathbf{E}_n = [(\varphi_j, \varphi_i) : i,j \in \mathbb{N}_{s(n)}]$. Moreover,
$$\|\mathbf{D}_n\|_2 = \|\mathbf{D}_n^T\|_2 = \|\mathbf{E}_n\|_2^{1/2} \quad \text{and} \quad \|\mathbf{D}_n^{-1}\|_2 = \|\mathbf{D}_n^{-T}\|_2 = \|\mathbf{E}_n^{-1}\|_2^{1/2}.$$

Proof The first part of this lemma follows directly from computation. For the second part we have that
$$\|\mathbf{D}_n\|_2 = \|\mathbf{D}_n^T\|_2 = [\rho(\mathbf{D}_n\mathbf{D}_n^T)]^{1/2} = [\rho(\mathbf{E}_n)]^{1/2} = \|\mathbf{E}_n\|_2^{1/2}.$$
The second equation can be proved similarly.
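The norm identities of Lemma 3.18 can be verified numerically for a random change of basis (a quick sanity check of ours, not part of the book):

```python
import numpy as np

rng = np.random.default_rng(0)
s = 6
D = rng.standard_normal((s, s))   # phi_i = sum_j (D)_{ij} psi_j, psi orthonormal
E = D @ D.T                       # Gram matrix E_n of the phi basis
# ||D||_2 = ||E||_2^{1/2} and ||D^{-1}||_2 = ||E^{-1}||_2^{1/2}
assert np.isclose(np.linalg.norm(D, 2), np.sqrt(np.linalg.norm(E, 2)))
assert np.isclose(np.linalg.norm(np.linalg.inv(D), 2),
                  np.sqrt(np.linalg.norm(np.linalg.inv(E), 2)))
```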

The next lemma can easily be verified.

Lemma 3.19 For any $\mathbf{w} = [w_j : j \in \mathbb{N}_{s(n)}] \in \mathbb{R}^{s(n)}$, let $w = \mathbf{w}^T\mathbf{\Psi}_n \in X_n$. If the operator $\mathcal{Q}_n : \mathbb{R}^{s(n)} \to X_n$ is defined by $\mathcal{Q}_n\mathbf{w} = w$, then $\mathcal{Q}_n$ is invertible and $\|\mathcal{Q}_n\| = \|\mathcal{Q}_n^{-1}\| = 1$.

In the next theorem we estimate the condition number of the coefficient matrix of the Galerkin method.


Theorem 3.20 The condition number of the coefficient matrix of the linear system (3.30) for the Galerkin method has the bound
$$\mathrm{cond}(\mathbf{E}_n - \mathbf{K}_n) \le \mathrm{cond}(\mathbf{E}_n)\,\mathrm{cond}(\mathcal{I} - \mathcal{P}_n\mathcal{K}).$$

Proof For any $\mathbf{g} = [g_j : j \in \mathbb{N}_{s(n)}]$, let
$$\mathbf{v} = (\mathbf{E}_n - \mathbf{K}_n)^{-1}\mathbf{g}.$$
It can be verified that $g = \mathbf{g}^T\mathbf{\Phi}_n$ and $v = \mathbf{v}^T\mathbf{\Phi}_n$ satisfy the equation
$$g = (\mathcal{I} - \mathcal{P}_n\mathcal{K})v.$$
Noting that $v = (\mathbf{D}_n^T\mathbf{v})^T\mathbf{\Psi}_n = \mathcal{Q}_n(\mathbf{D}_n^T\mathbf{v})$ and $g = (\mathbf{D}_n^T\mathbf{g})^T\mathbf{\Psi}_n = \mathcal{Q}_n(\mathbf{D}_n^T\mathbf{g})$, we have that
$$(\mathbf{E}_n - \mathbf{K}_n)^{-1}\mathbf{g} = \mathbf{v} = \mathbf{D}_n^{-T}\mathcal{Q}_n^{-1}v = \mathbf{D}_n^{-T}\mathcal{Q}_n^{-1}(\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}\mathcal{Q}_n(\mathbf{D}_n^T\mathbf{g}).$$
Thus, using Lemmas 3.18 and 3.19, we conclude that
$$\|(\mathbf{E}_n - \mathbf{K}_n)^{-1}\| \le \|\mathbf{D}_n^{-T}\|\,\|\mathcal{Q}_n^{-1}\|\,\|(\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}\|\,\|\mathcal{Q}_n\|\,\|\mathbf{D}_n^T\| \le \mathrm{cond}(\mathbf{E}_n)^{1/2}\,\|(\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}\|. \qquad (3.37)$$
Likewise, we conclude from
$$(\mathbf{E}_n - \mathbf{K}_n)\mathbf{v} = \mathbf{g} = \mathbf{D}_n^{-T}\mathcal{Q}_n^{-1}g = \mathbf{D}_n^{-T}\mathcal{Q}_n^{-1}(\mathcal{I} - \mathcal{P}_n\mathcal{K})\mathcal{Q}_n(\mathbf{D}_n^T\mathbf{v})$$
that
$$\|\mathbf{E}_n - \mathbf{K}_n\| \le \mathrm{cond}(\mathbf{E}_n)^{1/2}\,\|\mathcal{I} - \mathcal{P}_n\mathcal{K}\|. \qquad (3.38)$$
Combining estimates (3.37) and (3.38) yields the desired result of this theorem.

We remark that if $\mathcal{K}$ is a compact linear operator not having one as its eigenvalue and $\mathcal{P}_n \xrightarrow{s} \mathcal{I}$, then $\mathcal{P}_n\mathcal{K} \to \mathcal{K}$ and $(\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1} \to (\mathcal{I} - \mathcal{K})^{-1}$ in the operator norm, which yields
$$\mathrm{cond}(\mathcal{I} - \mathcal{P}_n\mathcal{K}) \to \mathrm{cond}(\mathcal{I} - \mathcal{K}) \quad \text{as } n \to \infty.$$

3.3.4 The iterated Galerkin method

We presented the iterated projection scheme (2.97) in Section 2.2.4. According to Theorem 2.55, if $\mathcal{K} : X \to X$ is a compact linear operator not having one as an eigenvalue, then there exist a positive constant $c$ and a positive integer $q$


such that for all $n \ge q$, the iterated solution $u_n'$ defined by (2.97) with $\mathcal{K}_n = \mathcal{K}$ satisfies the estimate
$$\|u - u_n'\| \le c\,\|\mathcal{K}(\mathcal{I} - \mathcal{P}_n)u\|.$$
Since $(\mathcal{I} - \mathcal{P}_n)^2 = \mathcal{I} - \mathcal{P}_n$, we have that
$$\|\mathcal{K}(\mathcal{I} - \mathcal{P}_n)u\| \le \|\mathcal{K}(\mathcal{I} - \mathcal{P}_n)\|\,\|(\mathcal{I} - \mathcal{P}_n)u\|.$$
Since $\mathcal{K}$ is compact, its adjoint $\mathcal{K}^*$ is also compact. As a result, we obtain that
$$\|\mathcal{K}(\mathcal{I} - \mathcal{P}_n)\| = \|[\mathcal{K}(\mathcal{I} - \mathcal{P}_n)]^*\| = \|(\mathcal{I} - \mathcal{P}_n)\mathcal{K}^*\| \to 0 \quad \text{as } n \to \infty.$$
Thus we conclude that
$$\|u - u_n'\| \le c\,\|(\mathcal{I} - \mathcal{P}_n)\mathcal{K}^*\|\,\|(\mathcal{I} - \mathcal{P}_n)u\|$$
and see that $u - u_n'$ converges to zero more rapidly than $u - u_n$ does. Moreover, we have that
$$\mathcal{K}(\mathcal{I} - \mathcal{P}_n)u = \bigl(K(s, \cdot),\ (\mathcal{I} - \mathcal{P}_n)u(\cdot)\bigr) = \bigl((\mathcal{I} - \mathcal{P}_n)K(s, \cdot),\ (\mathcal{I} - \mathcal{P}_n)u(\cdot)\bigr),$$
which leads to
$$\|u - u_n'\| \le c\, \operatorname{ess\,sup}\{\|(\mathcal{I} - \mathcal{P}_n)K(s, \cdot)\| : s \in E\}\,\|(\mathcal{I} - \mathcal{P}_n)u\|.$$
This shows that the additional order of convergence gained from iteration is attributed to approximation of the integral kernel from the approximate subspace.

3.3.5 Discrete Galerkin methods

The implementation of the Galerkin method (3.29) requires evaluating the integrals involved in (3.30). There are two types of integral required for evaluation: the integral that defines the operator $\mathcal{K}$ and the inner product $(\cdot,\cdot)$ of the space $L^2(E)$. These integrals usually cannot be evaluated exactly, and thus they require numerical integration. The Galerkin method with integrals computed using numerical quadrature is called the discrete Galerkin method.

We choose quadrature nodes $\{\tau_j : j \in \mathbb{N}_{q_n}\}$ with $q_n \ge s(n)$ and discrete operators $\mathcal{K}_n$ defined by
$$(\mathcal{K}_n x)(t) = \sum_{j \in \mathbb{N}_{q_n}} w_j(t)\, x(\tau_j), \quad t \in E,$$
to approximate the operator $\mathcal{K}$. We require that the sequence $\{\mathcal{K}_n : n \in \mathbb{N}\}$ of discrete operators is collectively compact and pointwise convergent on $C(E)$.


For sufficient conditions to ensure collective compactness, see Theorem 3.15. Suppose that we use a quadrature formula to approximate integrals, that is,
$$\int_E x(t)\,dt \approx \sum_{j \in \mathbb{N}_{q_n}} \lambda_j x(\tau_j) \quad \text{with } \lambda_j > 0,\ j \in \mathbb{N}_{q_n},$$
and define a discrete semi-definite inner product
$$(x, y)_n = \sum_{j \in \mathbb{N}_{q_n}} \lambda_j x(\tau_j)\overline{y(\tau_j)}, \quad x, y \in C(E),$$
and the corresponding discrete semi-norm
$$\|x\|_n = (x, x)_n^{1/2}, \quad x \in C(E).$$
We require that the rank of the matrix $\mathbf{\Phi}_n = [\varphi_i(\tau_j) : i \in \mathbb{N}_{s(n)},\ j \in \mathbb{N}_{q_n}]$ is equal to $s(n)$. It follows that there is a subset of quadrature nodes, say $\{\tau_j : j \in \mathbb{N}_{s(n)}\}$, such that the matrix $[\varphi_i(\tau_j) : i,j \in \mathbb{N}_{s(n)}]$ is nonsingular. Thus for any data $\{b_j : j \in \mathbb{N}_{s(n)}\}$ there exists a unique $\varphi \in X_n$ satisfying
$$\varphi(\tau_j) = b_j, \quad j \in \mathbb{N}_{s(n)}.$$
We see that $\|\cdot\|_n$ is a semi-norm on $C(E)$ and is a norm on $X_n$.

The discrete Galerkin method for solving (3.1) is to find $u_n \in X_n$ such that
$$(u_n, v)_n - (\mathcal{K}_n u_n, v)_n = (f, v)_n \quad \text{for all } v \in X_n. \qquad (3.39)$$

The analysis of the discrete Galerkin method requires the notion of the discrete orthogonal projection.

Definition 3.21 Let $X_n$ be a subspace of $C(E)$. The operator $\mathcal{P}_n : C(E) \to X_n$ defined for $x \in C(E)$ by
$$(\mathcal{P}_n x, y)_n = (x, y)_n \quad \text{for all } y \in X_n$$
is called the discrete orthogonal projection from $C(E)$ onto $X_n$.

Proposition 3.22 If $\lambda_j > 0$, $j \in \mathbb{N}_{q_n}$, and $\mathrm{rank}\,\mathbf{\Phi}_n = s(n)$, then the discrete orthogonal projection $\mathcal{P}_n : C(E) \to X_n$ is well defined and is a linear projection from $C(E)$ onto $X_n$. If in addition $q_n = s(n)$, then $\mathcal{P}_n$ is an interpolating projection, satisfying for $x \in C(E)$
$$(\mathcal{P}_n x)(\tau_j) = x(\tau_j), \quad j \in \mathbb{N}_{s(n)}.$$

Proof For $x \in C(E)$, let $\mathcal{P}_n x = \sum_{j \in \mathbb{N}_{s(n)}} x_j\varphi_j$ and consider the linear system
$$\sum_{j \in \mathbb{N}_{s(n)}} x_j(\varphi_j, \varphi_i)_n = (x, \varphi_i)_n, \quad i \in \mathbb{N}_{s(n)}. \qquad (3.40)$$
The coefficient matrix $\mathbf{G}_n = [(\varphi_j, \varphi_i)_n : i,j \in \mathbb{N}_{s(n)}]$ is a Gram matrix. Thus for any $\mathbf{x} = [x_j : j \in \mathbb{N}_{s(n)}] \in \mathbb{R}^{s(n)}$ we have that
$$\mathbf{x}^T\mathbf{G}_n\mathbf{x} = \Bigl\|\sum_{j \in \mathbb{N}_{s(n)}} x_j\varphi_j\Bigr\|_n^2 \ge 0.$$
Since $\lambda_j > 0$ for all $j \in \mathbb{N}_{q_n}$, $\mathbf{x}^T\mathbf{G}_n\mathbf{x} = 0$ if and only if $\sum_{j \in \mathbb{N}_{s(n)}} x_j\varphi_j(\tau_i) = 0$, $i \in \mathbb{N}_{q_n}$. The latter is equivalent to $\mathbf{x} = 0$, since $\mathrm{rank}\,\mathbf{\Phi}_n = s(n)$. Thus we conclude that the matrix $\mathbf{G}_n$ is positive definite and equation (3.40) is uniquely solvable, that is, $\mathcal{P}_n$ is well defined. From Definition 3.21 we can easily verify that $\mathcal{P}_n : C(E) \to X_n$ is a linear projection.

When $q_n = s(n)$, we define for $x \in C(E)$, $\mathcal{I}_n x = [x(\tau_j) : j \in \mathbb{N}_{s(n)}] \in \mathbb{R}^{s(n)}$. We next show that
$$\mathcal{I}_n(\mathcal{P}_n x) = \mathcal{I}_n x.$$
We first have that
$$\mathcal{I}_n(\mathcal{P}_n x) = \mathcal{I}_n\Bigl(\sum_{j \in \mathbb{N}_{s(n)}} x_j\varphi_j\Bigr) = \mathbf{\Phi}_n^T\mathbf{x}.$$
It follows from equation (3.40) that
$$\mathbf{G}_n\mathbf{x} = \mathbf{\Phi}_n\mathbf{\Lambda}_n\mathcal{I}_n x,$$
where $\mathbf{\Lambda}_n$ is the diagonal matrix $\mathrm{diag}(\lambda_1, \ldots, \lambda_{s(n)})$. The definition of the matrix $\mathbf{G}_n$ leads to $\mathbf{G}_n = \mathbf{\Phi}_n\mathbf{\Lambda}_n\mathbf{\Phi}_n^T$. Thus, combining the above equations, we obtain that
$$\mathcal{I}_n(\mathcal{P}_n x) = \mathbf{\Phi}_n^T\mathbf{G}_n^{-1}\mathbf{\Phi}_n\mathbf{\Lambda}_n\mathcal{I}_n x = \mathcal{I}_n x,$$
which completes the proof.
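The interpolating property asserted in Proposition 3.22 for $q_n = s(n)$ can be checked directly. In the small Python demo below, the monomial basis, the nodes and the positive weights are our own illustrative choices.

```python
import numpy as np

s = 4
tau = np.linspace(0.1, 0.9, s)                 # q_n = s(n) quadrature nodes
lam = np.array([0.2, 0.3, 0.3, 0.2])           # positive weights lambda_j
Phi = np.vander(tau, s, increasing=True).T     # Phi[i, j] = phi_i(tau_j), phi_i = t^i
G = (Phi * lam) @ Phi.T                        # Gram matrix (phi_j, phi_i)_n
x = np.cos(tau)                                # samples of x at the nodes
coef = np.linalg.solve(G, (Phi * lam) @ x)     # solve (3.40) for P_n x
assert np.allclose(Phi.T @ coef, x)            # (P_n x)(tau_j) = x(tau_j)
```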

The discrete orthogonal projection $\mathcal{P}_n$ is self-adjoint on $C(E)$ with respect to the discrete inner product, that is,
$$(\mathcal{P}_n x, y)_n = (x, \mathcal{P}_n y)_n, \quad x, y \in C(E),$$
and is bounded on $C(E)$ with respect to the discrete semi-norm, that is,
$$\|\mathcal{P}_n x\|_n \le \|x\|_n, \quad x \in C(E).$$
We remark that the latter does not imply the uniform boundedness of the operators $\mathcal{P}_n$, $n \in \mathbb{N}$. However, if the $X_n$ are piecewise polynomial spaces and the quadrature nodes are obtained by using affine mappings from a set of quadrature nodes in a reference element, then we have the uniform boundedness $\sup\{\|\mathcal{P}_n\|_\infty : n \in \mathbb{N}\} < \infty$ (see Section 3.5.4).


The following proposition provides convergence of the discrete orthogonal projection.

Proposition 3.23 If the sequence $\{\mathcal{P}_n : n \in \mathbb{N}\}$ of discrete orthogonal projections is uniformly bounded on $C(E)$, then there is a positive constant $c$ such that for all $x \in C(E)$,
$$\|x - \mathcal{P}_n x\|_\infty \le c\,\inf\{\|x - v\|_\infty : v \in X_n\}.$$

Proof Since $\mathcal{P}_n$ is a linear projection from $C(E)$ onto $X_n$, for any $v \in X_n$,
$$x - \mathcal{P}_n x = x - v - \mathcal{P}_n(x - v).$$
This yields the estimate
$$\|x - \mathcal{P}_n x\|_\infty \le \bigl(1 + \sup\{\|\mathcal{P}_n\|_\infty : n \in \mathbb{N}\}\bigr)\|x - v\|_\infty.$$
Thus the desired result follows from the above estimate and the hypothesis that $\{\mathcal{P}_n\}$ is uniformly bounded.

With the help of discrete orthogonal projections, the discrete Galerkin method (3.39) for solving (3.1) can be rewritten in the form of (2.96), that is,
$$(\mathcal{I} - \mathcal{P}_n\mathcal{K}_n)u_n = \mathcal{P}_n f.$$
Thus the error analysis for this method follows from the same framework as Theorem 2.54.

3.4 Collocation methods

We consider in this section the collocation method for solving the Fredholm integral equation of the second kind. According to the description in Section 2.2.1, for the operator equation in the space $C(E)$, the projection method via interpolation projections into finite-dimensional subspaces leads to the collocation method.

Let $\{X_n : n \in \mathbb{N}\}$ be a sequence of subspaces of $C(E)$ with $s(n) = \dim X_n$, and let $\{\mathcal{P}_n : n \in \mathbb{N}\}$ be a sequence of interpolation projections from $C(E)$ onto $X_n$ defined for $x \in C(E)$ by
$$(\mathcal{P}_n x)(t_j) = x(t_j) \quad \text{for all } j \in \mathbb{N}_{s(n)},$$
where $\{t_j : j \in \mathbb{N}_{s(n)}\}$ is a set of distinct nodes in $E$. The collocation method for solving (3.1) is to find $u_n \in X_n$ such that
$$(\mathcal{I} - \mathcal{P}_n\mathcal{K})u_n = \mathcal{P}_n f, \qquad (3.41)$$


or equivalently,
$$u_n(t_i) - \int_E K(t_i, t)u_n(t)\,dt = f(t_i) \quad \text{for all } i \in \mathbb{N}_{s(n)}. \qquad (3.42)$$
Suppose that $\{\varphi_j : j \in \mathbb{N}_{s(n)}\}$ is a basis for $X_n$ and let $u_n = \sum_{j \in \mathbb{N}_{s(n)}} u_j\varphi_j$. Equation (3.42) can be written in the matrix form
$$(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n, \qquad (3.43)$$
where
$$\mathbf{E}_n = [\varphi_j(t_i) : i,j \in \mathbb{N}_{s(n)}], \qquad \mathbf{K}_n = [(\mathcal{K}\varphi_j)(t_i) : i,j \in \mathbb{N}_{s(n)}],$$
$$\mathbf{u}_n = [u_j : j \in \mathbb{N}_{s(n)}] \quad \text{and} \quad \mathbf{f}_n = [f(t_j) : j \in \mathbb{N}_{s(n)}].$$
$\mathbf{K}_n$ is called the collocation matrix.

We remark that the set of collocation points $\{t_j : j \in \mathbb{N}_{s(n)}\}$ should be chosen such that the subspace $X_n$ is unisolvent, that is, the interpolating function is uniquely determined by its values at the interpolating points. It is clear that this requirement is equivalent to the condition $\det(\mathbf{E}_n) \ne 0$. Since the collocation method is interpreted as a projection method with the interpolating operator, the general convergence results for projection methods are applicable.

When the Lagrange basis $\{L_j : j \in \mathbb{N}_{s(n)}\}$ for $X_n$ is used, then
$$u_n(t) = \sum_{j \in \mathbb{N}_{s(n)}} u_j L_j(t) \quad \text{with } u_j = u_n(t_j),$$
and the linear system (3.43) becomes
$$u_i - \sum_{j \in \mathbb{N}_{s(n)}} u_j \int_E K(t_i, t)L_j(t)\,dt = f(t_i) \quad \text{for all } i \in \mathbb{N}_{s(n)}.$$
Note that the coefficient matrix is the same as that for the degenerate kernel method (3.10). In other words, the operator $\mathcal{P}_n\mathcal{K}$ is a degenerate kernel integral operator:
$$\mathcal{P}_n\mathcal{K}u(s) = \int_E K_n(s, t)u(t)\,dt \quad \text{with } K_n(s, t) = \sum_{j \in \mathbb{N}_{s(n)}} K(t_j, t)L_j(s).$$
We then have the estimate
$$\|\mathcal{K} - \mathcal{P}_n\mathcal{K}\| = \max_{s \in E} \int_E |K(s, t) - K_n(s, t)|\,dt. \qquad (3.44)$$
We observe that the computational cost of the collocation method is much lower than that of the Galerkin method, since it reduces the number of integrals that must be calculated.


3.4.1 The collocation method with piecewise polynomials

In this subsection we consider the collocation method with the subspace $X_n$ being a piecewise polynomial space. Suppose that there is a regular partition $\{E_i : i \in \mathbb{N}_n\}$ of $E$ for $n \in \mathbb{N}$ which satisfies
$$E = \bigcup_{i \in \mathbb{N}_n} E_i \quad \text{and} \quad \mathrm{meas}(E_j \cap E_{j'}) = 0, \quad j \ne j',$$
and for each $i \in \mathbb{N}_n$ there exists an invertible affine map $\varphi_i$ which maps a reference element $E_0$ onto $E_i$. For a positive integer $k$, let $X_n$ be the space of piecewise polynomials of total degree $\le k-1$ with respect to the partition $\{E_i : i \in \mathbb{N}_n\}$. We choose $m$ distinct points $\tau_j \in E_0$, $j \in \mathbb{N}_m$, such that the Lagrange interpolation polynomial of total degree $k-1$ at these points is uniquely defined. Then we can find polynomials $p_j$ of total degree $k-1$ such that
$$p_j(\tau_i) = \delta_{ij}, \quad i,j \in \mathbb{N}_m.$$
For each $i \in \mathbb{N}_n$ the functions defined by
$$\rho_{ij}(t) = \begin{cases} (p_j \circ \varphi_i^{-1})(t), & t \in E_i, \\ 0, & t \notin E_i, \end{cases} \qquad (3.45)$$
form a basis for the space $X_n$, and the points $t_{ij} = \varphi_i(\tau_j)$, $i \in \mathbb{N}_n$, $j \in \mathbb{N}_m$, form a set of collocation nodes satisfying
$$\rho_{ij}(t_{i'j'}) = \delta_{ii'}\delta_{jj'}.$$

We let $\mathcal{P}_n : C(E) \to X_n$ be the interpolating projection onto $X_n$. We then have
$$\|\mathcal{P}_n x\|_\infty \le \max\{\|\mathcal{P}_n x\|_{\infty, E_i} : i \in \mathbb{N}_n\}.$$
Noting that
$$\mathcal{P}_n x(t) = \sum_{j \in \mathbb{N}_m} x(t_{ij})\rho_{ij}(t) = \sum_{j \in \mathbb{N}_m} x(t_{ij})p_j(\tau), \quad t \in E_i,\ \tau = \varphi_i^{-1}(t) \in E_0,$$
we conclude that
$$\|\mathcal{P}_n\| \le \sum_{j \in \mathbb{N}_m} \|p_j\|_\infty,$$
which means that the sequence of projections $\{\mathcal{P}_n\}$ is uniformly bounded. Moreover, for each $x \in W^{k,\infty}(E)$ (cf. Section A.1 in the Appendix) there exist a positive constant $c$ and a positive integer $q$ such that for all $n \ge q$,
$$\|x - \mathcal{P}_n x\|_\infty \le c\,\inf\{\|x - v\|_\infty : v \in X_n\} \le c\, h_n^k\, \|x\|_{W^{k,\infty}}, \qquad (3.46)$$
where $h_n = \max\{\mathrm{diam}(E_i) : i \in \mathbb{N}_n\}$. We have the following theorem.


Theorem 3.24 Suppose that $\mathcal{K} : C(E) \to C(E)$ is a compact operator not having one as its eigenvalue. If $u \in C(E)$ is the solution of equation (3.1), then there exists a positive integer $q$ such that for each $n \ge q$, equation (3.41) has a unique solution $u_n \in X_n$. Moreover, if $u \in W^{k,\infty}(E)$, then there exists a positive constant $c$ such that for all $n \ge q$,
$$\|u - u_n\|_\infty \le c\, h_n^k\, \|u\|_{W^{k,\infty}(E)}.$$
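A minimal Python sketch of piecewise linear collocation ($k = 2$, $d = 1$) at the grid nodes follows. The moment integrals against the hat functions are computed by Gauss quadrature per cell; the function names and quadrature choices are ours, not the book's.

```python
import numpy as np

def pw_linear_collocation(K, f, n, a=0.0, b=1.0, q=4):
    """Piecewise linear collocation for u - Ku = f on [a, b]: collocate at
    the n+1 grid nodes and expand u_n in the hat-function Lagrange basis."""
    h = (b - a) / n
    t = a + h * np.arange(n + 1)
    x, w = np.polynomial.legendre.leggauss(q)
    pts = (t[:-1, None] + h * (x[None, :] + 1) / 2).ravel()  # quad points
    wts = np.tile(h * w / 2, n)                              # quad weights
    # hat functions L_j evaluated at all quadrature points
    L = np.maximum(0.0, 1.0 - np.abs(pts[:, None] - t[None, :]) / h)
    # A_ij = delta_ij - int K(t_i, t) L_j(t) dt
    A = np.eye(n + 1) - (wts[None, :] * K(t[:, None], pts[None, :])) @ L
    return t, np.linalg.solve(A, f(t))
```

For the degenerate kernel $K(s,t) = st$ on $[0,1]$ with $f(s) = 2s/3$, the exact solution $u(s) = s$ lies in $X_n$, so collocation reproduces it up to rounding.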

3.4.2 The collocation method with trigonometric polynomials

We consider in this subsection equation (3.1) in which $E = [0, 2\pi]$ and $K$ and $f$ are $2\pi$-periodic, and describe the collocation method for solving the equation using trigonometric polynomials.

Let $X = C_p(0, 2\pi)$ be the space of all $2\pi$-periodic continuous functions on $\mathbb{R}$ with the uniform norm $\|\cdot\|_\infty$, and choose the approximate subspace as
$$X_n = \mathrm{span}\{1, \cos t, \sin t, \ldots, \cos nt, \sin nt\}, \quad n \in \mathbb{N}.$$

To define an interpolating projection from $X$ onto $X_n$, we recall the Dirichlet kernel
$$D_n(t) = \frac{\sin\bigl(n + \frac{1}{2}\bigr)t}{2\sin\frac{t}{2}} = \frac{1}{2} + \sum_{j \in \mathbb{N}_n} \cos jt,$$
and observe that for $t_j = \dfrac{2j\pi}{2n+1}$, $j \in \mathbb{Z}_{2n+1}$,
$$D_n(t_j) = \begin{cases} \dfrac{1}{2} + n, & j = 0, \\ 0, & j \in \mathbb{Z}_{2n+1} \setminus \{0\}. \end{cases}$$
This means that the functions $\ell_j$ defined by
$$\ell_j(t) = \frac{2}{2n+1} D_n(t - t_j), \quad j \in \mathbb{Z}_{2n+1},$$
satisfy
$$\ell_j(t_i) = \delta_{ij}$$
and form a Lagrange basis for $X_n$. We then define the interpolating projection $\mathcal{P}_n : X \to X_n$ for $x \in X$ by
$$\mathcal{P}_n x = \sum_{j \in \mathbb{Z}_{2n+1}} x(t_j)\ell_j.$$
The following estimate is known (cf. [279]):
$$\|\mathcal{P}_n\| = O(\log n). \qquad (3.47)$$


Hence, by the principle of uniform boundedness, there exists $x \in X$ for which $\mathcal{P}_n x$ does not converge to $x$. The bound (3.47) leads to the fact that for $x \in X$,
$$\|\mathcal{P}_n x - x\|_\infty \le (1 + \|\mathcal{P}_n\|)\inf\{\|x - v\|_\infty : v \in X_n\} = O(\log n)\inf\{\|x - v\|_\infty : v \in X_n\}.$$
We next consider the estimate of $\|\mathcal{K} - \mathcal{P}_n\mathcal{K}\|$. Assume that the kernel satisfies the $\alpha$-Hölder continuity condition
$$|K(s_1, t) - K(s_2, t)| \le c|s_1 - s_2|^\alpha \quad \text{for all } s_1, s_2, t \in E,$$
for some positive constant $c$. Then, using (3.44), we conclude that
$$\|\mathcal{K} - \mathcal{P}_n\mathcal{K}\| \le c\, n^{-\alpha}\log n.$$
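The Lagrange property $\ell_j(t_i) = \delta_{ij}$ of the scaled Dirichlet kernels can be confirmed directly. A quick numerical check:

```python
import numpy as np

n = 5
M = 2 * n + 1
tj = 2 * np.pi * np.arange(M) / M              # t_j = 2*pi*j/(2n+1)

def dirichlet(t):
    # D_n(t) = 1/2 + sum_{k=1}^{n} cos(k t)
    return 0.5 + sum(np.cos(k * t) for k in range(1, n + 1))

L = (2.0 / M) * dirichlet(tj[:, None] - tj[None, :])   # L[i, j] = ell_j(t_i)
assert np.allclose(L, np.eye(M), atol=1e-12)
```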

3.4.3 The condition number for the collocation method

We now turn our attention to the condition number $\mathrm{cond}(\mathbf{E}_n - \mathbf{K}_n)$ of the coefficient matrix of the linear system (3.43) obtained from the collocation method. In this case the condition number is defined in terms of the infinity norm of a matrix $\mathbf{A}$; specifically,
$$\mathrm{cond}(\mathbf{A}) = \|\mathbf{A}\|_\infty \|\mathbf{A}^{-1}\|_\infty.$$

Theorem 3.25 If $\det(\mathbf{E}_n) \ne 0$, then the condition number of the linear system (3.43) of the collocation method satisfies
$$\mathrm{cond}(\mathbf{E}_n - \mathbf{K}_n) \le \|\mathcal{P}_n\|_\infty^2\, \mathrm{cond}(\mathbf{E}_n)\, \mathrm{cond}(\mathcal{I} - \mathcal{P}_n\mathcal{K}). \qquad (3.48)$$

Proof For $\mathbf{g} = [g_j : j \in \mathbb{N}_{s(n)}]$, let
$$\mathbf{v} = [v_j : j \in \mathbb{N}_{s(n)}] = (\mathbf{E}_n - \mathbf{K}_n)^{-1}\mathbf{g}.$$
Choose $g \in C(E)$ such that
$$\|g\|_\infty = \|\mathbf{g}\|_\infty \quad \text{and} \quad g(t_j) = g_j, \quad j \in \mathbb{N}_{s(n)}.$$
Set
$$v = (\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}\mathcal{P}_n g.$$
Then we have that
$$v(t_i) = \sum_{j \in \mathbb{N}_{s(n)}} v_j\varphi_j(t_i), \quad i \in \mathbb{N}_{s(n)}.$$
Letting $\tilde{\mathbf{v}} = [v(t_i) : i \in \mathbb{N}_{s(n)}]$, in matrix notation the above equation can be rewritten as
$$\mathbf{E}_n\mathbf{v} = \tilde{\mathbf{v}}. \qquad (3.49)$$
We then conclude from
$$(\mathbf{E}_n - \mathbf{K}_n)^{-1}\mathbf{g} = \mathbf{v} = \mathbf{E}_n^{-1}\tilde{\mathbf{v}}$$
that
$$\|(\mathbf{E}_n - \mathbf{K}_n)^{-1}\mathbf{g}\|_\infty \le \|\mathbf{E}_n^{-1}\|_\infty \|\tilde{\mathbf{v}}\|_\infty \le \|\mathbf{E}_n^{-1}\|_\infty \|v\|_\infty.$$
Since
$$\|v\|_\infty = \|(\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}\mathcal{P}_n g\|_\infty \le \|(\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}\|\,\|\mathcal{P}_n\|\,\|g\|_\infty,$$
we conclude that
$$\|(\mathbf{E}_n - \mathbf{K}_n)^{-1}\|_\infty \le \|\mathcal{P}_n\|\,\|\mathbf{E}_n^{-1}\|_\infty\,\|(\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}\|. \qquad (3.50)$$
Moreover, for $\mathbf{v} = [v_j : j \in \mathbb{N}_{s(n)}]$, let
$$\mathbf{g} = [g_j : j \in \mathbb{N}_{s(n)}] = (\mathbf{E}_n - \mathbf{K}_n)\mathbf{v}.$$
We choose $g \in C(E)$ as before and also set $v = (\mathcal{I} - \mathcal{P}_n\mathcal{K})^{-1}\mathcal{P}_n g$. Noting that
$$\mathcal{P}_n g(t_j) = g(t_j) = g_j, \quad j \in \mathbb{N}_{s(n)},$$
we have $\|\mathbf{g}\|_\infty \le \|\mathcal{P}_n g\|_\infty$. Thus
$$\|(\mathbf{E}_n - \mathbf{K}_n)\mathbf{v}\|_\infty = \|\mathbf{g}\|_\infty \le \|\mathcal{P}_n g\|_\infty = \|(\mathcal{I} - \mathcal{P}_n\mathcal{K})v\|_\infty. \qquad (3.51)$$
Choose $\tilde{v} \in C(E)$ such that
$$\|\tilde{v}\|_\infty = \|\tilde{\mathbf{v}}\|_\infty \quad \text{and} \quad \tilde{v}(t_j) = v(t_j), \quad j \in \mathbb{N}_{s(n)}.$$
Then we have that $v = \mathcal{P}_n\tilde{v}$, and thus
$$\|v\|_\infty \le \|\mathcal{P}_n\|\,\|\tilde{v}\|_\infty = \|\mathcal{P}_n\|_\infty\|\tilde{\mathbf{v}}\|_\infty \le \|\mathcal{P}_n\|\,\|\mathbf{E}_n\|_\infty\,\|\mathbf{v}\|_\infty. \qquad (3.52)$$
Combining estimates (3.51) and (3.52) yields
$$\|\mathbf{E}_n - \mathbf{K}_n\|_\infty \le \|\mathcal{P}_n\|\,\|\mathbf{E}_n\|_\infty\,\|\mathcal{I} - \mathcal{P}_n\mathcal{K}\|,$$
which with (3.50) leads to the desired result of this theorem.


3.4.4 Discrete collocation methods

Before investigating discrete collocation methods, we remark on the iterated collocation method. It is known that the iterated collocation method may not lead to superconvergence: in contrast with the iterated Galerkin method, the bound
$$\|\mathcal{K}(\mathcal{I} - \mathcal{P}_n)\| \ge \|\mathcal{K}\|$$
holds. This means that the iterated collocation method converges more rapidly only in the case that, for the solution $u$, $\mathcal{K}(\mathcal{I} - \mathcal{P}_n)u$ has superconvergence (see Section 2.2.4). This is the case when the approximate subspaces $X_n$ are chosen as piecewise polynomials of even degree and the kernel $K$ and solution $u$ are sufficiently smooth (cf. [15] for details).

We now begin to discuss discrete collocation methods. This approach replaces the integrals appearing in the collocation equation (3.42) by finite sums, to be chosen depending on the specific numerical methods to be used. To this end we define

$$(\mathcal{K}_n u)(s) = \sum_{j\in\mathbb{N}_{q_n}} w_j K(s,\tau_j)u(\tau_j), \quad s \in \Omega.$$

Then the discrete collocation method for solving (3.1) is to find $u_n \in X_n$ such that

$$(I - P_n\mathcal{K}_n)u_n = P_n f,$$

or equivalently,

$$u_n(t_i) - \sum_{j\in\mathbb{N}_{q_n}} w_j K(t_i,\tau_j)u_n(\tau_j) = f(t_i) \quad\text{for all } i \in \mathbb{N}_{s(n)}. \qquad (3.53)$$

Some assumptions should be imposed to guarantee the unique solvability of the resulting system. The iterated discrete collocation solution is defined by

$$u_n' = f + \mathcal{K}_n u_n,$$

which is the solution of the equation

$$(I - \mathcal{K}_n P_n)u_n' = f. \qquad (3.54)$$

The analysis of the discrete collocation method can be carried out using the framework given in Section 2.2.4 with $X = V = C(\Omega)$.

We close this subsection by giving a relationship between the iterated discrete collocation solution and the Nyström solution: if $\{\tau_j : j \in \mathbb{N}_{q_n}\} \subseteq \{t_j : j \in \mathbb{N}_{s(n)}\}$, then the iterated discrete collocation solution $u_n'$ is the Nyström solution satisfying

$$(I - \mathcal{K}_n)u_n' = f. \qquad (3.55)$$


In fact, by the definition of the interpolating projection, for $x \in C(\Omega)$,

$$(P_n x)(\tau_j) = x(\tau_j), \quad j \in \mathbb{N}_{q_n}.$$

This leads to

$$(\mathcal{K}_n P_n x)(s) = \sum_{j\in\mathbb{N}_{q_n}} w_j K(s,\tau_j)(P_n x)(\tau_j) = \sum_{j\in\mathbb{N}_{q_n}} w_j K(s,\tau_j)x(\tau_j) = (\mathcal{K}_n x)(s),$$

which with (3.54) yields (3.55).
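The Nyström construction above is easy to realize numerically. The sketch below is our own illustration (not code from the text): it solves a second-kind equation $u(s) - \int_0^1 K(s,t)u(t)\,dt = f(s)$ with the smooth kernel $K(s,t) = st$, using trapezoidal nodes as both quadrature points and evaluation points; $f$ is chosen so that the exact solution is $u(s) = e^s$.

```python
import numpy as np

def nystrom_solve(kernel, f, n):
    """Solve u(s) - int_0^1 K(s,t) u(t) dt = f(s) by the Nystrom method
    with n+1 trapezoidal nodes; returns the nodes and nodal values of u_n."""
    t = np.linspace(0.0, 1.0, n + 1)
    w = np.full(n + 1, 1.0 / n)
    w[0] = w[-1] = 0.5 / n                 # trapezoidal weights w_j
    # (I - K_n) collocated at the nodes: A[i, j] = delta_ij - K(t_i, t_j) w_j
    A = np.eye(n + 1) - kernel(t[:, None], t[None, :]) * w
    return t, np.linalg.solve(A, f(t))

# Example with known solution u(s) = exp(s): for K(s,t) = s*t,
# int_0^1 t e^t dt = 1, so f(s) = exp(s) - s.
t, u = nystrom_solve(lambda s, tau: s * tau, lambda s: np.exp(s) - s, 200)
err = np.max(np.abs(u - np.exp(t)))        # O(h^2) from the trapezoidal rule
```

The accuracy is dictated by the quadrature rule: replacing the trapezoidal weights by Gauss weights would raise the order accordingly.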

3.5 Petrov–Galerkin methods

In this section we establish a theoretical framework for the analysis of convergence for the Petrov–Galerkin method and superconvergence for the iterated Petrov–Galerkin method for Fredholm integral equations of the second kind.

Unlike the standard Galerkin method, the Petrov–Galerkin method employs a sequence of finite-dimensional subspaces to approximate the solution space of the equation (the trial space) and a different sequence to approximate the image space of the integral operator (the test space). This feature provides us with great freedom in choosing a pair of space sequences in order to improve the computational efficiency of the standard Galerkin method while preserving its convergence order. However, the sequences of spaces cannot be chosen arbitrarily: they must be coupled properly. This motivates us to develop a theoretical framework for convergence analysis of the Petrov–Galerkin method and the iterated Petrov–Galerkin method.

It is revealed in [77] that for the Petrov–Galerkin method the roles of the trial space and test space are to approximate the solution space of the equation and the range of the integral operator (in other words, the image space), respectively. Therefore the convergence order of the Petrov–Galerkin method is the same as the approximation order of the trial space, and it is independent of the approximation order of the test space. This leads to the following strategy for choosing the trial and test spaces. We may choose the trial space as piecewise polynomials of a higher degree and the test space as piecewise polynomials of a lower degree, but keep them of the same dimension. This choice of the trial and test spaces results in a significantly less expensive numerical algorithm in comparison with the standard Galerkin method of the same convergence order, which uses the same piecewise polynomials as those for the trial space. The saving comes from computing the entries of the matrix and the right-hand-side vector of the linear system that results from the corresponding discretization. Note that an entry of the Galerkin matrix is the inner product of the integral operator applied to a basis function for the trial space against a basis function for the same space, which is a piecewise polynomial of a higher degree, while an entry of the Petrov–Galerkin matrix is that against a basis function for the test space, which is a piecewise polynomial of a lower degree. Computing the latter is less expensive than computing the former due to the use of lower-degree polynomials for the test space. In fact, the Petrov–Galerkin method interpolates between the Galerkin method and the collocation method.

3.5.1 Analysis of Petrov–Galerkin and iterated Petrov–Galerkin methods

1. The Petrov–Galerkin method

Let $X$ be a Banach space with the norm $\|\cdot\|$ and let $X^*$ denote its dual space. Assume that $\mathcal{K} : X \to X$ is a compact linear operator. We consider the Fredholm equation of the second kind

$$u - \mathcal{K}u = f, \quad f \in X, \qquad (3.56)$$

where $u \in X$ is the unknown to be determined.

We choose two sequences of finite-dimensional subspaces $X_n \subset X$, $n \in \mathbb{N}$, and $Y_n \subset X^*$, $n \in \mathbb{N}$, and suppose that they satisfy condition (H): for each $x \in X$ and $y \in X^*$ there exist $x_n \in X_n$ and $y_n \in Y_n$ such that $\|x_n - x\| \to 0$ and $\|y_n - y\| \to 0$ as $n \to \infty$, and

$$s(n) = \dim X_n = \dim Y_n, \quad n \in \mathbb{N}. \qquad (3.57)$$

The Petrov–Galerkin method for equation (3.56) is a numerical method for finding $u_n \in X_n$ such that

$$(u_n - \mathcal{K}u_n, y) = (f, y) \quad\text{for all } y \in Y_n. \qquad (3.58)$$

Let

$$X_n = \operatorname{span}\{\phi_1,\phi_2,\dots,\phi_{s(n)}\}, \quad Y_n = \operatorname{span}\{\psi_1,\psi_2,\dots,\psi_{s(n)}\}$$

and

$$u_n = \sum_{j\in\mathbb{N}_{s(n)}} \alpha_j\phi_j.$$

Equation (3.58) can be written as

$$\sum_{j\in\mathbb{N}_{s(n)}} \alpha_j\left[(\phi_j,\psi_i) - (\mathcal{K}\phi_j,\psi_i)\right] = (f,\psi_i), \quad i \in \mathbb{N}_{s(n)}.$$
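A minimal numerical sketch of this linear system, under simple assumed choices that are ours and not the text's: $X = L^2(0,1)$, kernel $K(s,t) = st$, trial basis $1, s, s^2$, and test space the piecewise constants on three subintervals. The right-hand side $f(s) = 1 + s/6$ makes $u(s) = 1 + s$ the exact solution; since $u$ lies in the trial space, the computed coefficients should be close to $\alpha = (1, 1, 0)$.

```python
import numpy as np

phi = [lambda s: np.ones_like(s), lambda s: s, lambda s: s**2]  # trial basis
breaks = np.linspace(0.0, 1.0, 4)        # supports of the piecewise-constant psi_i
K = lambda s, t: s * t                   # kernel
f = lambda s: 1.0 + s / 6.0              # data; exact solution u(s) = 1 + s

gx, gw = np.polynomial.legendre.leggauss(20)
t, wt = 0.5 * (gx + 1.0), 0.5 * gw       # Gauss rule mapped to [0, 1]

def against_test(g, i):
    """(g, psi_i): integral of g over the i-th subinterval."""
    a, b = breaks[i], breaks[i + 1]
    s = a + (b - a) * t
    return (b - a) * np.sum(wt * g(s))

A = np.zeros((3, 3)); rhs = np.zeros(3)
for i in range(3):
    for j in range(3):
        # (K phi_j)(s) = int_0^1 K(s, t) phi_j(t) dt, evaluated by quadrature
        Kphi = lambda s, j=j: np.sum(wt * K(s[:, None], t[None, :]) * phi[j](t), axis=1)
        A[i, j] = against_test(phi[j], i) - against_test(Kphi, i)
    rhs[i] = against_test(f, i)
alpha = np.linalg.solve(A, rhs)          # expect approximately (1, 1, 0)
```

All integrands here are low-degree polynomials, so the quadrature is exact and the Petrov–Galerkin solution reproduces $u$ up to rounding.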


If $\{X_n, Y_n\}$ is a regular pair (see Definition 2.30), in the sense that there is a linear operator $\Pi_n : X_n \to Y_n$ with $\Pi_n X_n = Y_n$ satisfying the conditions

$$\|x\| \le c_1(x, \Pi_n x)^{1/2} \quad\text{and}\quad \|\Pi_n x\| \le c_2\|x\| \quad\text{for all } x \in X_n,$$

where $c_1$ and $c_2$ are positive constants independent of $n$, then equation (3.58) may be rewritten as

$$(u_n - \mathcal{K}u_n, \Pi_n x) = (f, \Pi_n x) \quad\text{for all } x \in X_n.$$

Furthermore, using the generalized best approximation projection $P_n : X \to X_n$ (see Definition 2.25), which is defined by

$$(x - P_n x, y) = 0 \quad\text{for all } y \in Y_n,$$

equation (3.58) is equivalent to the operator equation

$$u_n - P_n\mathcal{K}u_n = P_n f. \qquad (3.59)$$

This equation indicates that the Petrov–Galerkin method is a projection method. Using Theorem 2.55 we obtain the following result.

Theorem 3.26 Let $X$ be a Banach space and $\mathcal{K} : X \to X$ be a compact linear operator. Assume that one is not an eigenvalue of the operator $\mathcal{K}$. Suppose that $X_n$ and $Y_n$ satisfy condition (H) and $\{X_n, Y_n\}$ is a regular pair. Then there exists an $N_0 > 0$ such that for $n \ge N_0$ equation (3.59) has a unique solution $u_n \in X_n$ for any given $f \in X$, which satisfies

$$\|u_n - u\| \le c\|u - P_n u\|, \quad n \ge N_0,$$

where $u \in X$ is the unique solution of equation (3.56) and $c > 0$ is a constant independent of $n$.

2. The iterated Petrov–Galerkin method

We now turn our attention to studying superconvergence of the iterated Petrov–Galerkin method for integral equations of the second kind.

Let $X$ be a Banach space and let $X_n \subset X$, $n \in \mathbb{N}$, and $Y_n \subset X^*$, $n \in \mathbb{N}$, be two sequences of finite-dimensional subspaces satisfying condition (H). Assume that $P_n : X \to X_n$ are the linear projections of the generalized best approximation from $X$ to $X_n$ with respect to $Y_n$. Consider the projection method (3.59). Suppose that $u_n \in X_n$ is the unique solution of equation (3.59), which approximates the solution of equation (3.56). The iterated projection method is defined by

$$u_n' = f + \mathcal{K}u_n. \qquad (3.60)$$


It can easily be verified that the iterated projection approximation $u_n'$ satisfies the integral equation

$$u_n' - \mathcal{K}P_n u_n' = f. \qquad (3.61)$$

In order to analyze $u_n'$ as the solution of equation (3.61), we need to understand the convergence of the approximate operator $\mathcal{K}P_n$. The next lemma is helpful in this regard.

Lemma 3.27 Suppose that $X$ is a Banach space and $X_n \subset X$ and $Y_n \subset X^*$ satisfy condition (H). Let $P_n : X \to X_n$ be the sequence of projections of the generalized best approximation from $X$ to $X_n$ with respect to $Y_n$, and suppose it converges pointwise to the identity operator $I$ on $X$. Then the sequence of dual operators $P_n^*$ converges pointwise to the identity operator $I^*$ on $X^*$.

Proof It follows from condition (H) that for any $v \in X^*$ there exists a sequence $v_n \in Y_n$ such that $\|v_n - v\| \to 0$ as $n \to \infty$. Consequently,

$$\|P_n^* v - v\| \le \|P_n^* v - v_n\| + \|v_n - v\| \le (\|P_n\| + 1)\|v_n - v\| \to 0,$$

where the second inequality holds because $P_n^* : X^* \to Y_n$ are also projections (so that $P_n^* v_n = v_n$) and uses the general result $\|P_n^*\| = \|P_n\|$. That is, $P_n^* \to I^*$ pointwise.

Theorem 3.28 Suppose that $X$ is a Banach space and $X_n \subset X$ and $Y_n \subset X^*$ satisfy condition (H). Assume that $\mathcal{K}$ is a compact operator on $X$. Let $P_n : X \to X_n$ be the projections of the generalized best approximation from $X$ to $X_n$ with respect to $Y_n$, converging pointwise to the identity operator $I$ on $X$. Then

$$\|\mathcal{K}P_n - \mathcal{K}\| \to 0 \quad\text{as } n \to \infty.$$

Proof Note that

$$\|\mathcal{K}P_n - \mathcal{K}\| = \|(\mathcal{K}P_n - \mathcal{K})^*\| = \|P_n^*\mathcal{K}^* - \mathcal{K}^*\|.$$

Since $\mathcal{K}$ is compact, we also have that $\mathcal{K}^*$ is compact. Using Lemmas 3.27 and 2.52 we conclude the result of this theorem.

3.5.2 Equivalent conditions in Hilbert spaces for regular pairs

From the last section we know that the notion of regular pairs plays an essential role in the analysis of the Petrov–Galerkin method. Therefore it is necessary to re-examine this concept from different points of view. In this subsection we first study regular pairs from a geometric point of view and second characterize them in terms of the uniform boundedness of the sequence of projections defined by the generalized best approximation.

In what follows we confine the space $X$ to be a Hilbert space with inner product $(\cdot,\cdot)$, from which a norm $\|\cdot\|$ is induced. In this case $X^*$ is identified with $X$ via the inner product. We assume $X_n, Y_n \subset X$ satisfy condition (H). The structure of Hilbert spaces allows us to define the angle between the spaces $X_n$ and $Y_n$, which is done by means of the orthogonal projection from $X$ onto $Y_n$. For each $x \in X$ we define the best approximation $y_n^*$ from $Y_n$ by

$$\|x - y_n^*\| = \inf\{\|x - y\| : y \in Y_n\}.$$

Since $Y_n$ is a finite-dimensional subspace of the Hilbert space $X$, there exists a best approximation from $Y_n$ to $x \in X$. We furthermore define the best approximation operator $P_{Y_n}$ by

$$P_{Y_n}x = y_n^* \quad\text{for each } x \in X.$$

It is well known that for any $x \in X$, $P_{Y_n}x$ satisfies the equation

$$(x - P_{Y_n}x, y) = 0 \quad\text{for all } y \in Y_n. \qquad (3.62)$$

In other words, the operator $P_{Y_n}$ is the orthogonal projection from $X$ onto $Y_n$. To define the angle between the two spaces $X_n$ and $Y_n$, we denote

$$\gamma_n = \inf\left\{\frac{\|P_{Y_n}x\|}{\|x\|} : x \in X_n,\ x \neq 0\right\}.$$

We call

$$\theta_n = \arccos\gamma_n$$

the angle between the spaces $X_n$ and $Y_n$. The next theorem characterizes a regular pair $\{X_n, Y_n\}$ in a Hilbert space $X$ in terms of the angles between $X_n$ and $Y_n$.

Theorem 3.29 Let $X$ be a Hilbert space and let $X_n$ and $Y_n$ be two subspaces of $X$ satisfying condition (H) and $\dim X_n = \dim Y_n < \infty$ for $n \in \mathbb{N}$. Then $\{X_n, Y_n\}$ is a regular pair if and only if there exists a positive number $\theta_0 < \pi/2$ such that

$$\theta_n \le \theta_0, \quad n \in \mathbb{N}.$$

Proof We first prove the sufficiency. Assume that there exists a positive number $\theta_0 < \pi/2$ such that $\theta_n \le \theta_0$ for all $n \in \mathbb{N}$. Thus

$$\gamma_n = \inf\left\{\frac{\|P_{Y_n}x\|}{\|x\|} : x \in X_n,\ x \neq 0\right\} \ge \cos\theta_0 > 0.$$


Using the characterization of the best approximation we have that

$$(x, P_{Y_n}x) = \|P_{Y_n}x\|^2 \ge \cos^2\theta_0\,\|x\|^2 \quad\text{for all } x \in X_n.$$

This implies that $P_{Y_n}X_n = Y_n$ and condition (H-1) holds with $c_1 = 1/\cos\theta_0$. Moreover, since the operator $P_{Y_n}$ is the orthogonal projection, we conclude that

$$\|P_{Y_n}x\| \le \|x\| \quad\text{for all } x \in X_n.$$

Hence condition (H-2) holds with $c_2 = 1$.

We now show the necessity. It follows from the definition of a regular pair that

$$\|x\|^2 \le c_1^2(x, \Pi_n x) \le c_1^2\|x\|\,\|\Pi_n x\| \le c_1^2 c_2\|x\|^2 \quad\text{for all } x \in X_n.$$

Thus we obtain

$$0 < \frac{1}{c_1^2 c_2} \le 1.$$

It can be seen that there exists an $x' \in X_n$ with $x' \neq 0$ such that

$$\frac{\|P_{Y_n}x'\|}{\|x'\|} = \inf\left\{\frac{\|P_{Y_n}x\|}{\|x\|} : x \in X_n,\ x \neq 0\right\} = \cos\theta_n.$$

By the characterization of the best approximation we obtain that

$$\|x'\|^2 \le c_1^2(x', \Pi_n x') = c_1^2(P_{Y_n}x', \Pi_n x') \le c_1^2\|P_{Y_n}x'\|\,\|\Pi_n x'\| \le c_1^2 c_2 \cos\theta_n\,\|x'\|^2.$$

Therefore

$$\cos\theta_n \ge \frac{1}{c_1^2 c_2} > 0$$

and

$$\theta_n \le \arccos\frac{1}{c_1^2 c_2} < \frac{\pi}{2}.$$

The proof is complete.
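In coordinates the angle $\theta_n$ is straightforward to compute: if the columns of $U$ and $V$ are orthonormal bases of $X_n$ and $Y_n$, then for $x = Uc$ one has $\|P_{Y_n}x\| = \|V^{\mathrm{T}}Uc\|$, so $\gamma_n$ is the smallest singular value of $V^{\mathrm{T}}U$. The finite-dimensional ambient space and the example subspaces below are assumptions of this sketch, not taken from the text.

```python
import numpy as np

def subspace_angle(X, Y):
    """Angle theta = arccos(gamma) between span(X) and span(Y), where the
    columns of X and Y are bases and gamma = inf ||P_Y x|| / ||x|| over
    nonzero x in span(X).  With orthonormal bases U, V (via QR) one has
    ||P_Y x|| = ||V^T U c|| for x = U c, hence gamma = sigma_min(V^T U)."""
    U, _ = np.linalg.qr(X)
    V, _ = np.linalg.qr(Y)
    gamma = np.linalg.svd(V.T @ U, compute_uv=False).min()
    return np.arccos(np.clip(gamma, 0.0, 1.0))

# A line in R^3 rotated by pi/6 away from the x-axis:
theta = subspace_angle(np.array([[1.0], [0.0], [0.0]]),
                       np.array([[np.cos(np.pi / 6)], [np.sin(np.pi / 6)], [0.0]]))
# theta is pi/6
```

This is the standard principal-angle computation; $\theta_n$ is the largest principal angle between the two subspaces.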

We now turn to establishing the equivalence of the regular pair and the uniform boundedness of the projections $P_n$ when they are well defined. We need two preliminary results to prove this equivalence.

Lemma 3.30 Let $X$ be a Hilbert space. Assume that $X_n, Y_n \subset X$ with $\dim X_n = \dim Y_n < \infty$ satisfy condition (2.73). Then

$$P_{Y_n} = P_{Y_n}P_n.$$


Proof Let $x \in X$. Then $P_n x$ and $P_{Y_n}x$ satisfy equations (2.72) and (3.62), respectively. It follows that

$$(P_n x - P_{Y_n}x, y) = 0 \quad\text{for all } y \in Y_n.$$

By the definition of the best approximation from $Y_n$ to $P_n x$, we conclude that

$$P_{Y_n}x = P_{Y_n}P_n x \quad\text{for all } x \in X.$$

The proof of the lemma is complete.

In Hilbert spaces we can interpret condition (2.73) from many different points of view. The next proposition lists eight equivalent statements.

Proposition 3.31 Let $X$ be a Hilbert space and $X_n, Y_n \subset X$ with $\dim X_n = \dim Y_n < \infty$. Then the following statements are equivalent:

(i) $Y_n \cap X_n^\perp = \{0\}$;
(ii) $\det[(\phi_i, \psi_j)] \neq 0$, where $\{\phi_l : l \in \mathbb{N}_m\}$ and $\{\psi_l : l \in \mathbb{N}_m\}$ are bases for $X_n$ and $Y_n$, respectively;
(iii) $X_n \cap Y_n^\perp = \{0\}$;
(iv) if $x \in X_n$ with $x \neq 0$, then $P_{Y_n}x \neq 0$;
(v) $\gamma_n > 0$;
(vi) $P_{Y_n}X_n = Y_n$;
(vii) $P_n Y_n = X_n$ and $P_n|_{Y_n} = (P_{Y_n}|_{X_n})^{-1}$;
(viii) if $y \in Y_n$ with $y \neq 0$, then $P_n y \neq 0$.

Proof The implications that (i) implies (ii) and (ii) implies (iii) follow from the proof of Proposition 2.26.

We prove that (iii) implies (iv). Let $x \in X_n$ with $x \neq 0$. By (iii), $x \notin Y_n^\perp$, so using the definition of $P_{Y_n}$ we conclude that

$$(P_{Y_n}x, y) = (x, y) \neq 0 \quad\text{for some } y \in Y_n \text{ with } y \neq 0.$$

Thus $P_{Y_n}x \neq 0$.

To prove the implication that (iv) implies (v), we use (iv) to conclude that on the closed unit sphere $\{x : x \in X_n,\ \|x\| = 1\}$, $\|P_{Y_n}x\| > 0$. Thus we have

$$\gamma_n = \inf\{\|P_{Y_n}x\| : x \in X_n,\ \|x\| = 1\} > 0,$$

and statement (v) is proved.

To establish (vi) it suffices to show that if $\{\phi_l : l \in \mathbb{N}_m\}$ is a basis for $X_n$, then $\{P_{Y_n}\phi_l : l \in \mathbb{N}_m\}$ is a basis for $Y_n$. Note that $P_{Y_n}\phi_i \in Y_n$ and $\dim Y_n = m$. It remains to show that $P_{Y_n}\phi_1, \dots, P_{Y_n}\phi_m$ are linearly independent. To this end, assume that there are constants $c_1, \dots, c_m$, not all zero, such that

$$\sum_{i\in\mathbb{N}_m} c_i P_{Y_n}\phi_i = 0.$$

Let $x = \sum_{i\in\mathbb{N}_m} c_i\phi_i$. Then $x \in X_n$ with $x \neq 0$, but $P_{Y_n}x = 0$. Hence $\gamma_n = 0$, a contradiction to (v).

We now show that (vi) implies (vii). Since $P_n Y_n = P_n P_{Y_n}X_n$, it is sufficient to prove $P_n P_{Y_n}X_n = X_n$. For any $x \in X_n$, applying the definition of $P_{Y_n}$ gives

$$(P_{Y_n}x - x, y) = 0 \quad\text{for all } y \in Y_n.$$

The definition of $P_n$ implies that $x = P_n P_{Y_n}x$ for all $x \in X_n$. Hence we conclude that $P_n P_{Y_n}X_n = X_n$, and (vii) is established.

The implication of (vii) to (viii) is obvious.

Finally, we prove that (viii) implies (i). Let $y \in Y_n \cap X_n^\perp$. By the definition of $P_n$ we find

$$\|y\|^2 = (y, y) = (P_n y, y) = 0.$$

This ensures that $y = 0$.

The next theorem shows that $\{X_n, Y_n\}$ is a regular pair if and only if the sequence of projections $\{P_n\}$ is uniformly bounded.

Theorem 3.32 Let $X$ be a Hilbert space. Assume that $X_n, Y_n \subset X$ satisfy condition (H) and equation (2.73). Let $\{P_n\}$ be the sequence of projections defined by the generalized best approximation (2.72). Then $\{X_n, Y_n\}$ is a regular pair if and only if there exists a positive constant $c$ for which

$$\|P_n\| \le c \quad\text{for all } n \in \mathbb{N}.$$

Proof We have proved in Proposition 2.31 that if $\{X_n, Y_n\}$ is a regular pair then $\{P_n\}$ is uniformly bounded. It remains to prove the converse. For this purpose we let $P_{Y_n} : X \to Y_n$ be the orthogonal projection. By our convention, the spaces $X_n$ and $Y_n$ satisfy condition (2.73). Thus Proposition 3.31 ensures that $P_{Y_n}X_n = Y_n$. The validity of condition (H-2) follows from a property of best approximation in Hilbert spaces. We now prove condition (H-1) by contradiction. Assume to the contrary that condition (H-1) is not valid. Then for each $\varepsilon$ with $0 < \varepsilon < 1/c$, where $c$ is the constant that gives the bound for $\|P_n\|$, there exist $n \in \mathbb{N}$ and $x \in X_n$ such that

$$(x, P_{Y_n}x) < \varepsilon^2\|x\|^2.$$


It follows from the characterization of best approximation in the Hilbert space $X$ that

$$(x, P_{Y_n}x) = \|P_{Y_n}x\|^2.$$

We then have

$$\|P_{Y_n}x\| < \varepsilon\|x\|.$$

Let $x_0 = P_{Y_n}x$. Clearly $x_0 \in Y_n$. Since $x \in X_n$ satisfies the equation

$$(x_0 - x, y) = (P_{Y_n}x - x, y) = 0 \quad\text{for all } y \in Y_n,$$

we conclude that $x = P_n x_0$. Consequently,

$$\|x_0\| < \varepsilon\|P_n x_0\|.$$

In other words,

$$\|P_n x_0\| > \frac{1}{\varepsilon}\|x_0\| > c\|x_0\|,$$

which contradicts the assumption that $\|P_n\| \le c$. This contradiction shows that condition (H-1) must hold.

In the remaining part of this subsection we discuss regular pairs from an algebraic point of view.

Definition 3.33 Let $\mathbb{X} = \{\phi_i : i \in \mathbb{Z}_m\}$ and $\mathbb{Y} = \{\psi_i : i \in \mathbb{Z}_m\}$ be two finite (ordered) subsets of the Hilbert space $X$. The correlation matrix between $\mathbb{X}$ and $\mathbb{Y}$ is defined to be the $m \times m$ matrix

$$G(\mathbb{X}, \mathbb{Y}) = [(\phi_i, \psi_j) : i, j \in \mathbb{Z}_m].$$

Note that $G^{\mathrm{T}}(\mathbb{X}, \mathbb{Y})$, the transpose of $G(\mathbb{X}, \mathbb{Y})$, is $G(\mathbb{Y}, \mathbb{X})$. For the special case $\mathbb{X} = \mathbb{Y}$ we use $G(\mathbb{X})$ for $G(\mathbb{X}, \mathbb{X})$ and recall that $G(\mathbb{X})$ is the Gram matrix for the set $\mathbb{X}$. The matrix $G(\mathbb{X})$ is positive semi-definite. Generally, the matrix $G(\mathbb{X}, \mathbb{Y})$ is not symmetric. We use $G_+(\mathbb{X}, \mathbb{Y})$ to denote the symmetric part of $G(\mathbb{X}, \mathbb{Y})$; specifically, we set

$$G_+(\mathbb{X}, \mathbb{Y}) = \tfrac{1}{2}\left[G(\mathbb{X}, \mathbb{Y}) + G(\mathbb{Y}, \mathbb{X})\right].$$

We use the standard ordering on $m \times m$ symmetric matrices $A = [a_{ij} : i, j \in \mathbb{Z}_m]$, $B = [b_{ij} : i, j \in \mathbb{Z}_m]$, and write $A \le B$ provided that

$$\sum_{i\in\mathbb{Z}_m}\sum_{j\in\mathbb{Z}_m} x_i a_{ij} x_j = \mathbf{x}^{\mathrm{T}}A\mathbf{x} \le \mathbf{x}^{\mathrm{T}}B\mathbf{x}$$

for all $\mathbf{x} = [x_i : i \in \mathbb{Z}_m] \in \mathbb{R}^m$. When the strict inequality holds above except for $\mathbf{x} = 0$, we write $A < B$.


Definition 3.34 Let $X$ be a Hilbert space. Suppose that for any $n \in \mathbb{N}$, $\mathbb{X}_n = \{\phi_i : i \in \mathbb{Z}_{s(n)}\}$ and $\mathbb{Y}_n = \{\psi_i : i \in \mathbb{Z}_{s(n)}\}$ are finite subsets of $X$, where $s(n)$ denotes the cardinality of $\mathbb{X}_n$. We say that $\{\mathbb{X}_n, \mathbb{Y}_n\}$ forms a regular pair provided that there are constants $\sigma > 0$ and $\sigma' > 0$ such that for all $n \in \mathbb{N}$ we have

$$0 < G(\mathbb{X}_n) \le \sigma G_+(\mathbb{X}_n, \mathbb{Y}_n) \qquad (3.63)$$

and

$$0 < G(\mathbb{Y}_n) \le \sigma' G(\mathbb{X}_n). \qquad (3.64)$$
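Conditions (3.63) and (3.64) can be checked numerically: for symmetric matrices, $A \le \sigma B$ with $B > 0$ holds exactly when every generalized eigenvalue of the pencil $(A, B)$ is at most $\sigma$, so the smallest admissible constants are the largest generalized eigenvalues. The following sketch is our own illustration (the inner product and the example sets are assumptions):

```python
import numpy as np

def regular_pair_constants(Phi, Psi, inner):
    """Return the smallest (sigma, sigma') with 0 < G(X) <= sigma*G+(X,Y)
    and 0 < G(Y) <= sigma'*G(X) -- conditions (3.63)-(3.64) -- or None if
    G+(X,Y) is not positive definite.  Phi, Psi are lists of (assumed
    linearly independent) elements; `inner` is the inner product."""
    GX  = np.array([[inner(p, q) for q in Phi] for p in Phi])
    GY  = np.array([[inner(p, q) for q in Psi] for p in Psi])
    GXY = np.array([[inner(p, q) for q in Psi] for p in Phi])
    Gp  = 0.5 * (GXY + GXY.T)                  # symmetric part G+(X, Y)
    if np.linalg.eigvalsh(Gp).min() <= 0:
        return None                            # not a regular pair
    # A <= s*B (with B > 0)  <=>  s >= largest generalized eigenvalue of (A, B)
    sigma  = np.linalg.eigvals(np.linalg.solve(Gp, GX)).real.max()
    sigmap = np.linalg.eigvals(np.linalg.solve(GX, GY)).real.max()
    return sigma, sigmap

# Orthonormal sets X = Y in R^2 give sigma = sigma' = 1:
e1, e2 = np.array([1.0, 0.0]), np.array([0.0, 1.0])
sigma, sigmap = regular_pair_constants([e1, e2], [e1, e2], np.dot)
```

For constant (n-independent) sets this decides regularity outright; for a sequence of sets one must additionally verify that the computed constants stay bounded in $n$.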

Thus, given any finite sets $\mathbb{X}$ and $\mathbb{Y}$ of linearly independent elements in $X$ of the same cardinality, the constant pair $\{\mathbb{X}, \mathbb{Y}\}$ is regular if and only if $G_+(\mathbb{X}, \mathbb{Y}) > 0$. Moreover, when we only have $\det G(\mathbb{X}, \mathbb{Y}) \neq 0$, we can form from $\mathbb{X}$ and $\mathbb{Y}$ a constant regular pair by modifying either one of the sets $\mathbb{X}$ and $\mathbb{Y}$. To explain this, we suppose that $\mathbb{X} = \{\phi_i : i \in \mathbb{Z}_n\}$ and $\mathbb{Y} = \{\psi_i : i \in \mathbb{Z}_n\}$. Let $\mathbb{W} = \{\omega_i : i \in \mathbb{Z}_n\}$, where the elements of this set are defined by the formula

$$\omega_i = \sum_{j\in\mathbb{Z}_n} (\phi_j, \psi_i)\phi_j, \quad i \in \mathbb{Z}_n.$$

Then

$$G(\mathbb{W}, \mathbb{Y}) = G(\mathbb{X}, \mathbb{Y})^{\mathrm{T}}G(\mathbb{X}, \mathbb{Y}),$$

and so $\{\mathbb{W}, \mathbb{Y}\}$ is a constant regular pair when $\det G(\mathbb{X}, \mathbb{Y}) \neq 0$ and the elements of $\mathbb{X}$ and $\mathbb{Y}$ are linearly independent. In the special case that the elements of $\mathbb{X}$ are orthonormal, $\omega_i = P_{\mathbb{X}}\psi_i$, $i \in \mathbb{Z}_n$, where $P_{\mathbb{X}}$ is the orthogonal projection of $X$ onto $\operatorname{span}\mathbb{X}$.

Let $X_n = \operatorname{span}\mathbb{X}_n$ and $Y_n = \operatorname{span}\mathbb{Y}_n$. When $\{\mathbb{X}_n, \mathbb{Y}_n\}$ form a regular pair of finite sets and for every $x \in X$, $\lim_{n\to\infty}\operatorname{dist}(x, X_n) = 0$, the subspaces $\{X_n, Y_n\}$ form a regular pair of subspaces in the terminology of Definition 2.30. Conversely, whenever two subspaces $\{X_n, Y_n\}$ form a regular pair, these subspaces have bases which, as sets, form a regular pair. The notion of regular pairs of subspaces from Definition 2.30 is independent of the bases of the subspaces. However, Definition 3.34 is dependent upon the specific sets used and may fail to hold if these sets are transformed into others by linear transformations.

Let us observe that (3.63) and (3.64) imply

$$G(\mathbb{Y}_n) \le \sigma\sigma' G_+(\mathbb{X}_n, \mathbb{Y}_n). \qquad (3.65)$$


Moreover, for any $\mathbf{a} = [a_i : i \in \mathbb{Z}_{s(n)}] \in \mathbb{R}^{s(n)}$, by the Cauchy–Schwarz inequality and (3.63) we have that

$$\mathbf{a}^{\mathrm{T}}G_+(\mathbb{X}_n, \mathbb{Y}_n)\mathbf{a} = \Big(\sum_{j\in\mathbb{Z}_{s(n)}} a_j\phi_j,\ \sum_{j\in\mathbb{Z}_{s(n)}} a_j\psi_j\Big) \le [\mathbf{a}^{\mathrm{T}}G(\mathbb{X}_n)\mathbf{a}]^{1/2}[\mathbf{a}^{\mathrm{T}}G(\mathbb{Y}_n)\mathbf{a}]^{1/2} \le \sigma^{1/2}[\mathbf{a}^{\mathrm{T}}G_+(\mathbb{X}_n, \mathbb{Y}_n)\mathbf{a}]^{1/2}[\mathbf{a}^{\mathrm{T}}G(\mathbb{Y}_n)\mathbf{a}]^{1/2}.$$

This inequality implies that

$$G_+(\mathbb{X}_n, \mathbb{Y}_n) \le \sigma G(\mathbb{Y}_n).$$

Using this inequality and (3.63) we conclude that

$$G(\mathbb{X}_n) \le \sigma G_+(\mathbb{X}_n, \mathbb{Y}_n) \le \sigma^2 G(\mathbb{Y}_n). \qquad (3.66)$$

Therefore it follows that whenever $\{\mathbb{X}_n, \mathbb{Y}_n\}$ is a regular pair, then so is $\{\mathbb{Y}_n, \mathbb{X}_n\}$.

When the sets $\mathbb{X}_n, \mathbb{Y}_n$ form a regular pair with constants $\sigma, \sigma'$, the generalized best approximation projection $P_n : X \to X_n$ with respect to $Y_n$ enjoys the bound

$$\|P_n\| \le p = \sigma(\sigma')^{1/2}. \qquad (3.67)$$

To confirm this inequality, for each $x \in X$ we write $P_n x$ in the form

$$P_n x = \sum_{j\in\mathbb{Z}_{s(n)}} a_j\phi_j,$$

where the vector $\mathbf{a} = [a_j : j \in \mathbb{Z}_{s(n)}]$ is the solution of the linear equations

$$(x, \psi_i) = \Big(\sum_{j\in\mathbb{Z}_{s(n)}} a_j\phi_j,\ \psi_i\Big), \quad i \in \mathbb{Z}_{s(n)}.$$

Hence, multiplying both sides of these equations by $a_i$, summing over $i \in \mathbb{Z}_{s(n)}$ and using (3.63) and (3.64), we get that

$$\|P_n x\|^2 = \mathbf{a}^{\mathrm{T}}G(\mathbb{X}_n)\mathbf{a} \le \sigma\,\mathbf{a}^{\mathrm{T}}G_+(\mathbb{X}_n, \mathbb{Y}_n)\mathbf{a} = \sigma\Big(P_n x,\ \sum_{j\in\mathbb{Z}_{s(n)}} a_j\psi_j\Big) = \sigma\Big(x,\ \sum_{j\in\mathbb{Z}_{s(n)}} a_j\psi_j\Big) \le \sigma\|x\|\Big\|\sum_{j\in\mathbb{Z}_{s(n)}} a_j\psi_j\Big\| \le \sigma(\sigma')^{1/2}\|x\|\Big\|\sum_{j\in\mathbb{Z}_{s(n)}} a_j\phi_j\Big\| = \sigma(\sigma')^{1/2}\|x\|\,\|P_n x\|.$$

Now we divide the first and last terms in the above inequality by $\|P_n x\|$ to yield the desired inequality $\|P_n\| \le p$.


Recall that

$$\gamma_n = \inf\left\{\frac{\|P_{Y_n}x\|}{\|x\|} : x \in X_n,\ x \neq 0\right\}$$

and

$$\theta_n = \arccos\gamma_n, \quad n \in \mathbb{N}.$$

Note that $\gamma_n \in [0, 1]$ and $\theta_n \in [0, \pi/2]$. By definition, $\theta_n$ is the angle between the two subspaces $X_n$ and $Y_n$. Let us observe for any $x \in X_n$ that the equation

$$P_n P_{Y_n}x = x$$

holds. This leads to

$$\cos\theta_n = \inf\left\{\frac{\|P_{Y_n}x\|}{\|P_n P_{Y_n}x\|} : x \in X_n,\ x \neq 0\right\} \ge \inf\left\{\frac{\|y\|}{\|P_n y\|} : y \in Y_n,\ y \neq 0\right\} \ge \frac{1}{\|P_n\|}.$$

Therefore we conclude that when $\{X_n, Y_n\}$ is a pair of subspaces with bases $\mathbb{X}_n, \mathbb{Y}_n$ which form a regular pair with constants $\sigma, \sigma'$, the inequality

$$\cos\theta_n \ge p^{-1} > 0$$

holds. In other words, in this case for all $n \in \mathbb{N}$ we have $\theta_n \in [0, \theta^*)$, where $\theta^* < \pi/2$.

3.5.3 The discrete Petrov–Galerkin method and its iterated scheme

The Petrov–Galerkin method for Fredholm integral equations of the second kind was studied in the last sections. To use the Petrov–Galerkin method in practical computation, we have to be able to efficiently compute the integrals occurring in the method. In this subsection we take an approach to discretizing a given integral equation by a discrete projection and a discrete inner product. The iterated solution suggested in this section is also fully discrete.

In this subsection we describe discrete Petrov–Galerkin methods for Fredholm integral equations of the second kind with weakly singular kernels. For this purpose we consider the equation

$$(I - \mathcal{K})u = f, \qquad (3.68)$$

where $\mathcal{K} : L^\infty(\Omega) \to C(\Omega)$ is a compact linear integral operator defined by

$$(\mathcal{K}u)(s) = \int_\Omega K(s, t)u(t)\,dt, \quad s \in \Omega. \qquad (3.69)$$


Here $\Omega \subset \mathbb{R}^d$ is a bounded closed domain and $K$ is a function defined on $\Omega \times \Omega$ which is allowed to have weak singularities. We assume that one is not an eigenvalue of the operator $\mathcal{K}$, to guarantee the existence of a unique solution $u \in L^\infty(\Omega)$. Some additional specific assumptions will be imposed later in this subsection.

We first recall the Petrov–Galerkin method for equation (3.68). In this description we let $X = L^2(\Omega)$ with an inner product $(\cdot,\cdot)$. Let $X_n$ and $Y_n$ be two sequences of finite-dimensional subspaces of $X$ such that

$$\dim X_n = \dim Y_n = s(n),$$

$$X_n = \operatorname{span}\{\phi_1, \phi_2, \dots, \phi_{s(n)}\}$$

and

$$Y_n = \operatorname{span}\{\psi_1, \psi_2, \dots, \psi_{s(n)}\}.$$

We assume that $\{X_n, Y_n\}$ is a regular pair. It is known from the last section that the necessary and sufficient condition for a generalized best approximation from $X_n$ to $x \in X$ with respect to $Y_n$ to exist uniquely is $Y_n \cap X_n^\perp = \{0\}$. If this condition holds, then $P_n$ is a projection, and $\{X_n, Y_n\}$ forms a regular pair if and only if $\{P_n\}$ is uniformly bounded. The Petrov–Galerkin method for solving equation (3.68) is a numerical scheme to find a function

$$u_n(s) = \sum_{j\in\mathbb{N}_{s(n)}} \alpha_j\phi_j(s) \in X_n$$

such that

$$((I - \mathcal{K})u_n, y) = (f, y) \quad\text{for all } y \in Y_n, \qquad (3.70)$$

or equivalently,

$$\sum_{j\in\mathbb{N}_{s(n)}} \alpha_j\left[(\phi_j, \psi_i) - (\mathcal{K}\phi_j, \psi_i)\right] = (f, \psi_i), \quad i \in \mathbb{N}_{s(n)}. \qquad (3.71)$$

Using the generalized best approximation $P_n : X \to X_n$, we write equation (3.70) in operator form as

$$(I - P_n\mathcal{K})u_n = P_n f. \qquad (3.72)$$

It is also proved in the last section that if $\{X_n, Y_n\}$ is a regular pair, then for sufficiently large $n$ equation (3.72) has a unique solution $u_n \in X_n$ which satisfies the estimate

$$\|u_n - u\| \le C\inf\{\|u - x\| : x \in X_n\}.$$


Solving equation (3.72) requires solving the linear system (3.71). Of course, the entries of the coefficient matrix of (3.71) involve the integrals $(\mathcal{K}\phi_j, \psi_i)$, which are normally evaluated by a numerical quadrature formula. Roughly speaking, the discrete Petrov–Galerkin method is the scheme (3.71) with the integrals appearing in the method computed by quadrature formulas. However, we shall develop our discrete Petrov–Galerkin method independent of the Petrov–Galerkin method (3.72). In other words, we do not assume that the Petrov–Galerkin method (3.72) has been previously constructed, in order to avoid the "regular pair" assumption, which is crucial for the solvability and convergence of the Petrov–Galerkin method. We take a one-step approach to fully discretize equation (3.68) directly. We first describe the method in "abstract" terms, without specifying the bases and the concrete quadrature formulas. Later we specialize them using the piecewise polynomial spaces. The only assumption that we have to impose later to guarantee the solvability and convergence of the resulting concrete method is a local condition on the reference element, and thus it is easy to verify.

In our description we use function values $f(t)$ at given points $t \in \Omega$ for an $L^\infty$ function $f$. We follow [21] to define them precisely. Let $\overline{C}(\Omega)$ denote the subspace of $L^\infty(\Omega)$ which consists of functions each of which is equal to an element in $C(\Omega)$ a.e. The point evaluation functional $\delta_t$ on the space $\overline{C}(\Omega)$ is defined by

$$\delta_t(f) = \tilde{f}(t), \quad t \in \Omega,\ f \in \overline{C}(\Omega),$$

where $\tilde{f}$ on the right-hand side is chosen to be the representative of $f$ which is continuous. By the Hahn–Banach theorem, the point evaluation functional $\delta_t$ can be extended from $\overline{C}(\Omega)$ to the whole of $L^\infty(\Omega)$ in such a way that the norm is preserved. We use $d_t$ to denote such an extension and define

$$f(t) = d_t(f) \quad\text{for } f \in L^\infty(\Omega).$$

We remark that the extension is not unique, but that is usually immaterial. What is important is that it exists and preserves many of the properties naturally associated with the point evaluation functional. For example, at a point of continuity of $f$ the extended point evaluation is uniquely defined and has the natural value, and moreover $d_t$ is continuous at such points. The reader is referred to [21] for more details on this extension.

We now return to our description of the discrete Petrov–Galerkin method. As in the description of the (continuous) Petrov–Galerkin method, we choose two subspaces $X_n = \operatorname{span}\{\phi_j : j \in \mathbb{N}_{s(n)}\}$ and $Y_n = \operatorname{span}\{\psi_j : j \in \mathbb{N}_{s(n)}\}$ of the space $L^\infty(\Omega)$ such that $\dim X_n = \dim Y_n = s(n)$. We choose $m_n$ points $t_i \in \Omega$, a set of quadrature weights $\{w_{1i} : i \in \mathbb{N}_{m_n}\}$ and a set of weight functions $\{w_{2i} : i \in \mathbb{N}_{m_n}\}$. We define the


discrete inner product

$$(x, y)_n = \sum_{i\in\mathbb{N}_{m_n}} w_{1i}x(t_i)y(t_i), \quad x, y \in L^\infty(\Omega), \qquad (3.73)$$

which will be used to approximate the inner product $(x, y) = \int_\Omega x(t)y(t)\,dt$, and define discrete operators by

$$(\mathcal{K}_n u)(s) = \sum_{i\in\mathbb{N}_{m_n}} w_{2i}(s)u(t_i), \quad u \in L^\infty(\Omega), \qquad (3.74)$$

which will be used to approximate the operator $\mathcal{K}$. With these notations, the discrete Petrov–Galerkin method for equation (3.68) is a numerical scheme to find

$$u_n(s) = \sum_{j\in\mathbb{N}_{s(n)}} \alpha_{nj}\phi_j(s) \qquad (3.75)$$

such that

$$((I - \mathcal{K}_n)u_n, y)_n = (f, y)_n \quad\text{for all } y \in Y_n. \qquad (3.76)$$

In terms of basis functions, equation (3.76) is written as

$$\sum_{j\in\mathbb{N}_{s(n)}} \alpha_{nj}\Big[\sum_{\ell\in\mathbb{N}_{m_n}} w_{1\ell}\phi_j(t_\ell)\psi_i(t_\ell) - \sum_{\ell\in\mathbb{N}_{m_n}} w_{1\ell}\sum_{m\in\mathbb{N}_{m_n}} w_{2m}(t_\ell)\phi_j(t_m)\psi_i(t_\ell)\Big] = \sum_{\ell\in\mathbb{N}_{m_n}} w_{1\ell}f(t_\ell)\psi_i(t_\ell), \quad i \in \mathbb{N}_{s(n)}. \qquad (3.77)$$

Upon solving the linear system (3.77) we obtain the $s(n)$ values $\alpha_{nj}$. Substituting them into (3.75) yields an approximation to the solution $u$ of equation (3.68). Equation (3.76) can also be written in operator form by means of a discrete generalized best approximation $Q_n$, which we define next. Let $Q_n : X \to X_n$ be defined by

$$(Q_n x, y)_n = (x, y)_n \quad\text{for all } y \in Y_n. \qquad (3.78)$$

If $Q_n x$ is uniquely defined for every $x \in X$, equation (3.76) can be written in the form

$$(I - Q_n\mathcal{K}_n)u_n = Q_n f. \qquad (3.79)$$
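To make (3.75)–(3.79) concrete, here is a fully discrete sketch on $\Omega = [0, 1]$. Everything specific is our own assumption, not the text's: kernel $K(s,t) = st$, data $f(s) = 1 + s/6$ (so the exact solution is $u(s) = 1 + s$), trial basis $1, s, s^2$, test basis the piecewise constants on three subintervals, a 20-point Gauss rule for the weights $w_{1\ell}$, and the Nyström-type choice $w_{2m}(s) = w_{1m}K(s, t_m)$ for the weight functions in (3.74).

```python
import numpy as np

# quadrature nodes/weights on [0,1], used for both (3.73) and (3.74)
gx, gw = np.polynomial.legendre.leggauss(20)
t, w = 0.5 * (gx + 1.0), 0.5 * gw

f = lambda s: 1.0 + s / 6.0                 # data; exact solution u(s) = 1 + s
Kmat = t[:, None] * t[None, :]              # kernel K(s,t) = s*t at the nodes

Phi = np.stack([np.ones_like(t), t, t ** 2], axis=1)      # trial basis at nodes
b = np.linspace(0.0, 1.0, 4)                              # breakpoints
Psi = np.stack([((b[i] <= t) & (t < b[i + 1])).astype(float)
                for i in range(3)], axis=1)               # test basis at nodes

# (K_n phi_j)(t_l) = sum_m w_m K(t_l, t_m) phi_j(t_m)   -- equation (3.74)
KnPhi = (Kmat * w) @ Phi
A = Psi.T @ (w[:, None] * (Phi - KnPhi))    # coefficient matrix of (3.77)
rhs = Psi.T @ (w * f(t))                    # right-hand side of (3.77)
alpha = np.linalg.solve(A, rhs)             # expect approximately (1, 1, 0)
```

Since the 20-point Gauss rule integrates all the (polynomial) quantities here exactly, the discrete solution coincides with $u$ up to rounding; for genuinely weakly singular kernels the choice of the $w_{2m}$ is where the real work lies.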

We postpone a discussion of the unique existence of $Q_n x$ until later.

The iterated Petrov–Galerkin method has been shown to have a superconvergence property, where the additional order of convergence gained from an iteration is attributed to the approximation of the kernel from the test space. The convergence order of the iterated Petrov–Galerkin method is equal to the approximation order of the space $X_n$ plus the approximation order of the space $Y_n$. It is of interest to study the superconvergence of the iterated discrete Petrov–Galerkin method, which we define by

$$u_n' = f + \mathcal{K}_n u_n. \qquad (3.80)$$

Equation (3.80) is a fully discrete algorithm which can be implemented easily, involving only multiplications and additions. It can be shown that $u_n'$ satisfies the operator equation

$$(I - \mathcal{K}_n Q_n)u_n' = f. \qquad (3.81)$$

This form of the equation allows us to treat the iterated discrete Petrov–Galerkin method as an operator equation whose analysis is covered by the theory developed in Section 2.2.4.

Up to now, the discrete Petrov–Galerkin method has been described in abstract terms without specifying the spaces $X_n$ and $Y_n$. In the remainder of this section we specialize the discrete Petrov–Galerkin method by specifying the spaces $X_n$ and $Y_n$ and defining the operators $Q_n$ and $\mathcal{K}_n$ in terms of piecewise polynomials. We assume that $\Omega$ is a polyhedral region and construct a partition $T_n$ of $\Omega$ by dividing it into $N_n$ simplices $\Delta_{ni}$, $i \in \mathbb{N}_{N_n}$, such that

$$h = \max\{\operatorname{diam}\Delta_{ni} : i \in \mathbb{N}_{N_n}\} \to 0 \quad\text{as } n \to \infty, \qquad (3.82)$$

$$\Omega = \bigcup_{i\in\mathbb{N}_{N_n}} \Delta_{ni},$$

and

$$\operatorname{meas}(\Delta_{ni} \cap \Delta_{nj}) = 0, \quad i \neq j.$$

When the dependence of the simplex Δ_{ni} on n is well understood, we drop the first index n in the notation and simply write Δ_i. For each positive integer n, the set T_n forms a partition of the domain Ω. We also require that the partition is regular, in the sense that any vertex of a simplex in T_n is not in the interior of an edge or a face of another simplex in the set. It is well known that for each simplex there exists a unique one-to-one and onto affine mapping which maps the simplex onto a unit simplex Δ_0, called the reference element.

Let F_i, i ∈ ℕ_{N_n}, denote the invertible affine mappings that map the reference element Δ_0 one-to-one and onto the simplices Δ_i. Then the affine mappings F_i have the form

$$F_i(t) = B_i t + b_i, \quad t \in \Delta_0, \tag{3.83}$$


where B_i is a d × d invertible matrix and b_i is a vector in ℝ^d, and they satisfy

$$\Delta_i = F_i(\Delta_0).$$
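The affine change of variables (3.83) is the workhorse of the construction: every quantity on a simplex Δ_i is pulled back to the reference element through F_i. A minimal Python sketch (the 2-D triangle and its vertices are purely illustrative, not from the text):

```python
import numpy as np

def affine_map(B, b):
    """Return F_i(t) = B_i t + b_i as a callable, mapping the
    reference simplex onto a physical simplex (notation of (3.83))."""
    B = np.asarray(B, dtype=float)
    b = np.asarray(b, dtype=float)
    return lambda t: B @ np.asarray(t, dtype=float) + b

# Hypothetical 2-D example: map the reference triangle with vertices
# (0,0), (1,0), (0,1) onto the triangle with vertices v0, v1, v2.
# The columns of B are the edge vectors v1 - v0 and v2 - v0.
v0, v1, v2 = np.array([1.0, 1.0]), np.array([3.0, 1.0]), np.array([1.0, 2.0])
B = np.column_stack([v1 - v0, v2 - v0])
F = affine_map(B, v0)

assert np.allclose(F([0, 0]), v0)
assert np.allclose(F([1, 0]), v1)
assert np.allclose(F([0, 1]), v2)
# det(B) > 0: the map preserves orientation, as assumed later in the text.
assert np.linalg.det(B) > 0
```

The quadrature nodes used below are obtained the same way, as t_{ij} = F_i(t_j).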

On the reference element Δ_0 we choose two piecewise polynomial spaces S¹_{k₁}(Δ_0) and S²_{k₂}(Δ_0), of total degree k₁ − 1 and k₂ − 1 respectively, such that

$$\dim S^1_{k_1}(\Delta_0) = \dim S^2_{k_2}(\Delta_0) = \mu.$$

The partitions Π_1 and Π_2 of Δ_0, associated respectively with S¹_{k₁}(Δ_0) and S²_{k₂}(Δ_0), may be different; they are arranged according to the integers k₁, k₂ and d. Denote by ν₁ and ν₂ the numbers of sub-simplices contained in the partitions Π_1 and Π_2. We have to choose these pairs of integers k₁, ν₁ and k₂, ν₂ such that

$$\binom{k_1 - 1 + d}{d}\nu_1 = \binom{k_2 - 1 + d}{d}\nu_2 = \mu,$$

because the dimension of the space of polynomials of total degree k is $\binom{k+d}{d}$.

We shall not provide a detailed discussion on how the partitions Π_1 and Π_2 are constructed. Instead we assume that we have chosen bases for these two spaces, so that

$$S^1_{k_1}(\Delta_0) = \operatorname{span}\{\varphi_j : j \in \mathbb{N}_\mu\}$$

and

$$S^2_{k_2}(\Delta_0) = \operatorname{span}\{\psi_j : j \in \mathbb{N}_\mu\}.$$

We next map these piecewise polynomial spaces on Δ_0 to each simplex Δ_i by letting

$$\varphi_{ij}(t) = \begin{cases} (\varphi_j \circ F_i^{-1})(t), & t \in \Delta_i, \\ 0, & t \notin \Delta_i, \end{cases}$$

and

$$\psi_{ij}(t) = \begin{cases} (\psi_j \circ F_i^{-1})(t), & t \in \Delta_i, \\ 0, & t \notin \Delta_i, \end{cases}$$

for i ∈ ℕ_{N_n} and j ∈ ℕ_μ. Using these functions as bases, we define the trial space and the test space respectively by

$$X_n = \operatorname{span}\{\varphi_{ij} : i \in \mathbb{N}_{N_n},\ j \in \mathbb{N}_\mu\}$$

and

$$Y_n = \operatorname{span}\{\psi_{ij} : i \in \mathbb{N}_{N_n},\ j \in \mathbb{N}_\mu\}.$$


It follows from (3.82) that

$$C(\Omega) \subseteq \overline{\bigcup_n X_n} \quad \text{and} \quad C(\Omega) \subseteq \overline{\bigcup_n Y_n}.$$

Moreover, we have that if x ∈ W^{k₁,∞}(Ω), then there exists a constant c > 0 such that for all n,

$$\inf\{\|x - \varphi\| : \varphi \in X_n\} \le c h^{k_1},$$

and if x ∈ W^{k₂,∞}(Ω), then likewise there exists a constant c > 0 such that for all n,

$$\inf\{\|x - \varphi\| : \varphi \in Y_n\} \le c h^{k_2}.$$

However, the space $\overline{X} = \overline{\bigcup_n X_n}$ does not equal L^∞(Ω); it is a proper subspace of L^∞(Ω), because the space L^∞(Ω) is not separable. Due to this fact, the existing theory of collectively compact operators (cf. [6]) does not apply directly to this setting; some modifications of the theory are required.

We next specialize the definition of the discrete inner product (3.73) and describe a concrete construction of the approximate operators K_n. To this end, we introduce a third piecewise polynomial space S³_{k₃}(Δ_0) of total degree k₃ − 1 on Δ_0. We divide the reference element Δ_0 into ν₃ sub-simplices,

$$\Pi_3 = \{e_i : i \in \mathbb{N}_{\nu_3}\},$$

and also assume that the partition Π_3 is regular. On each of the simplices e_i we choose m = $\binom{k_3-1+d}{d}$ points τ_{ij}, j ∈ ℕ_m, such that they admit a unique Lagrange interpolating polynomial of total degree k₃ − 1 on e_i. For multivariate Lagrange interpolation by polynomials of total degree, see [83] and the references cited therein. Let p_{ij} be the polynomial of total degree k₃ − 1 on e_i satisfying the interpolation conditions

$$p_{ij}(\tau_{i'j'}) = \delta_{ii'}\delta_{jj'}, \quad i, i' \in \mathbb{N}_{\nu_3},\ j, j' \in \mathbb{N}_m.$$

We assemble these polynomials to form a basis for the space S³_{k₃}(Δ_0) by letting

$$\zeta_{(i-1)m+j}(t) = \begin{cases} p_{ij}(t), & t \in e_i, \\ 0, & t \notin e_i, \end{cases} \quad i \in \mathbb{N}_{\nu_3},\ j \in \mathbb{N}_m.$$

Set γ = mν₃, which is equal to the dimension of S³_{k₃}(Δ_0), and

$$t_{(i-1)m+j} = \tau_{ij}, \quad i \in \mathbb{N}_{\nu_3},\ j \in \mathbb{N}_m.$$


Then ζ_i ∈ S³_{k₃}(Δ_0) and they satisfy the interpolation conditions

$$\zeta_i(t_j) = \delta_{ij}, \quad i, j \in \mathbb{N}_\gamma.$$

This set of functions forms a basis for the space S³_{k₃}(Δ_0). It can be used to introduce a piecewise polynomial space on Ω by mapping the basis {ζ_j : j ∈ ℕ_γ} for S³_{k₃}(Δ_0) from Δ_0 into each Δ_i. Specifically, we define

$$\zeta_{ij}(t) = \begin{cases} (\zeta_j \circ F_i^{-1})(t), & t \in \Delta_i, \\ 0, & t \notin \Delta_i, \end{cases}$$

where F_i is the affine map defined by (3.83). Let

$$Z_n = \operatorname{span}\{\zeta_{ij} : i \in \mathbb{N}_{N_n},\ j \in \mathbb{N}_\gamma\}.$$

Hence Z_n is a piecewise polynomial space of dimension γN_n. For each i we define

$$t_{ij} = F_i(t_j) = B_i t_j + b_i,$$

where B_i and b_i are respectively the matrix and the vector appearing in the definition of the affine map F_i. Furthermore, we define the linear projection Z_n : X → Z_n by

$$Z_n g = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\gamma} d_{t_{ij}}(g)\,\zeta_{ij},$$

where d_t is the extension of the point evaluation functional δ_t satisfying ‖d_t‖ = 1, which was discussed earlier; it satisfies the condition

$$d_{t_{ij}}(\zeta_{i'j'}) = \delta_{ii'}\delta_{jj'}, \quad i, i' \in \mathbb{N}_{N_n},\ j, j' \in \mathbb{N}_\gamma.$$

Moreover, we have that

$$\|Z_n\| = \operatorname{ess\,sup}_{t \in \Omega} \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\gamma} |\zeta_{ij}(t)| = \operatorname{ess\,sup}_{t \in \Delta_0} \sum_{j \in \mathbb{N}_\gamma} |\zeta_j(t)|.$$

That is, ‖Z_n‖ is uniformly bounded for all n. It follows from the uniform boundedness of Z_n that for any y ∈ W^{k₃,∞}(Ω) there holds the estimate

$$\|y - Z_n y\| \le C \inf\{\|y - \varphi\| : \varphi \in Z_n\} \le C h^{k_3}. \tag{3.84}$$

Using the projection Z_n defined above, we have the quadrature formula

$$\int_\Omega g(t)\,dt = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\gamma} w_{ij}\, d_{t_{ij}}(g) + O(h^{k_3}),$$

where

$$w_{ij} = \int_\Omega \zeta_{ij}(t)\,dt.$$


If we set

$$w_i = \int_{\Delta_0} \zeta_i(t)\,dt, \quad i \in \mathbb{N}_\gamma,$$

then we have

$$w_{ij} = \int_{\Delta_i} \zeta_j(F_i^{-1}(t))\,dt = \det(B_i) \int_{\Delta_0} \zeta_j(t)\,dt = \det(B_i)\,w_j.$$

Without loss of generality we assume that

$$\det(B_i) > 0, \quad i \in \mathbb{N}_{N_n}.$$

Employing this formula, we introduce the following discrete inner product:

$$(x, y)_n = \sum_{i \in \mathbb{N}_{N_n}} \sum_{\ell \in \mathbb{N}_\gamma} w_{i\ell}\, x(t_{i\ell})\, y(t_{i\ell}). \tag{3.85}$$

Formula (3.85) is a concrete form of (3.73). When x, y ∈ W^{k₃,∞}(Ω) we have the error estimate

$$|(x, y) - (x, y)_n| \le C h^{k_3}.$$

With this specific definition of the spaces X_n, Y_n and the discrete inner product, we obtain a construction of the operators Q_n using equation (3.78).
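As a concrete illustration of (3.85), the following sketch evaluates the discrete inner product on a uniform partition of [0, 1], using the two-point (trapezoid-type) reference rule that corresponds to k₃ = 2; the helper name and the 1-D setting are our own choices, not the book's:

```python
import numpy as np

def discrete_inner_product(x, y, n_cells, ref_pts, ref_wts):
    """Discrete inner product (3.85) on [0, 1]:
    (x, y)_n = sum_i sum_l w_{il} x(t_{il}) y(t_{il}),
    with nodes t_{il} = F_i(t_l) and weights w_{il} = det(B_i) * w_l,
    where F_i maps the reference interval [0, 1] onto the i-th cell."""
    h = 1.0 / n_cells                        # det(B_i) = h on a uniform mesh
    total = 0.0
    for i in range(n_cells):
        t = i * h + h * np.asarray(ref_pts)  # t_{il} = F_i(t_l)
        total += h * np.sum(np.asarray(ref_wts) * x(t) * y(t))
    return total

# Reference rule for k_3 = 2: Lagrange interpolation at {0, 1},
# weights w_l = integral of each Lagrange basis = {1/2, 1/2}.
ref_pts, ref_wts = [0.0, 1.0], [0.5, 0.5]

# The rule integrates x*y exactly whenever the product is linear per cell.
val = discrete_inner_product(lambda t: np.ones_like(t), lambda t: t,
                             n_cells=8, ref_pts=ref_pts, ref_wts=ref_wts)
assert abs(val - 0.5) < 1e-14
```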

Finally, to describe a concrete construction of the approximate operators K_n, we impose a few additional assumptions on the kernel K of the integral operator 𝒦. Roughly speaking, we assume that K is a product of two kernels: one of them is continuous but perhaps involves a complicated function, and the other has a simple form but has a singularity. In particular, we let

$$K(s, t) = K_1(s, t)\,K_2(s, t), \quad s, t \in \Omega,$$

where K₁ is continuous on Ω × Ω, and K₂ has a singularity and satisfies the conditions

$$K_2(s, \cdot) \in L^1(\Omega),\ s \in \Omega, \qquad \sup_{s \in \Omega} \int_\Omega |K_2(s, t)|\,dt < +\infty, \tag{3.86}$$

$$\|K_2(s, \cdot) - K_2(s', \cdot)\|_1 \to 0 \quad \text{as } s' \to s. \tag{3.87}$$

Moreover, we assume that the integral of the product of K₂(s, t) and a polynomial p(t) with respect to the variable t can be evaluated exactly. Many integral operators 𝒦 that appear in practical applications are of this type.

Using the linear projection Z_n, we define K_n : X → X by

$$(\mathcal{K}_n x)(s) = \int_\Omega Z_n(K_1(s, t)x(t))\,K_2(s, t)\,dt,$$


which approximates the operator 𝒦. For u_n ∈ X_n we have that

$$(\mathcal{K}_n u_n)(s) = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\gamma} w_{ij}(s)\,K_1(s, t_{ij})\,u_n(t_{ij}),$$

where

$$w_{ij}(s) = \int_{\Delta_i} \zeta_{ij}(t)\,K_2(s, t)\,dt.$$

This concrete construction of the trial space X_n, the test space Y_n and the operators Q_n, K_n yields a specific discrete Petrov–Galerkin method, which is described by equation (3.79). This is the method that we shall analyze in the next subsection.
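When K₂ ≡ 1, the weight functions w_{ij}(s) reduce to the constant quadrature weights w_{ij} = det(B_i)w_j, and (K_n u_n)(s) becomes a plain weighted sum. A hedged 1-D sketch (uniform mesh on [0, 1], per-cell trapezoid rule, and the smooth kernel of Example 2 below used as test data):

```python
import numpy as np

def apply_Kn(K1, u_vals, t_pts, w):
    """Evaluate (K_n u)(s) = sum_ij w_ij * K1(s, t_ij) * u(t_ij)
    for K2 == 1, vectorized over an array of evaluation points s."""
    def Kn_u(s):
        s = np.atleast_1d(np.asarray(s, dtype=float))
        return (w * K1(s[:, None], t_pts[None, :]) * u_vals[None, :]).sum(axis=1)
    return Kn_u

# Uniform mesh of n cells on [0, 1]; per-cell trapezoid nodes/weights (k_3 = 2).
n = 200
h = 1.0 / n
t_pts = np.concatenate([[i * h, (i + 1) * h] for i in range(n)])
w = np.full(2 * n, h / 2)

# Smooth data: K1(s, t) = sin(s) cos(t), u(t) = exp(sin(t)).
K1 = lambda s, t: np.sin(s) * np.cos(t)
u = np.exp(np.sin(t_pts))
Kn_u = apply_Kn(K1, u, t_pts, w)

# Compare with (Ku)(s) = sin(s) * int_0^1 cos(t) e^{sin t} dt
#                      = sin(s) * (e^{sin 1} - 1).
s = np.array([0.3, 0.7])
exact = np.sin(s) * (np.exp(np.sin(1.0)) - 1.0)
assert np.allclose(Kn_u(s), exact, atol=1e-4)
```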

3.5.4 The convergence of the discrete Petrov–Galerkin method

In this subsection we follow the general theory developed in Section 2.2.4 to prove convergence results for the discrete Petrov–Galerkin method when a piecewise polynomial approximation is used. Throughout the remaining part of this subsection we let X = L^∞(Ω), V = C(Ω), let X_n and Y_n be the piecewise polynomial spaces defined in the last subsection, and let $\overline{X} = \overline{\bigcup_n X_n}$. Our main task is to verify that the operators Q_n and K_n, with the spaces X_n, Y_n defined in the last subsection by piecewise polynomials, satisfy the hypotheses (H-1)–(H-4) of Section 2.2.4, so that Theorem 2.54 can be applied. For this purpose we define the necessary notation. Let

$$\Phi = [\varphi_i(t_j) : i \in \mathbb{N}_\mu,\ j \in \mathbb{N}_\gamma] \quad \text{and} \quad \Psi = [\psi_i(t_j) : i \in \mathbb{N}_\mu,\ j \in \mathbb{N}_\gamma],$$

where φ_i and ψ_i are the bases we have chosen for the piecewise polynomial spaces S¹_{k₁}(Δ_0) and S²_{k₂}(Δ_0), and t_j are the interpolation points in the reference element Δ_0 chosen in the last subsection. Noting that w_i are the weights of the quadrature formula on the reference element developed in Section 4.2.1, we set

$$W = \operatorname{diag}(w_1, \ldots, w_\gamma) \quad \text{and} \quad M = \Phi W \Psi^T.$$

The next proposition presents a necessary and sufficient condition for the discrete generalized best approximation to exist uniquely.

Proposition 3.35 For each x ∈ L^∞(Ω), the discrete generalized best approximation Q_n x from X_n to x with respect to Y_n defined by (3.78) exists uniquely if and only if

$$\det(M) \neq 0. \tag{3.88}$$

Under this condition, Q_n is a projection; that is, Q_n² = Q_n.


Proof Let x ∈ L^∞(Ω) be given. Showing that there is a unique Q_n x ∈ X_n satisfying equation (3.78) is equivalent to proving that the linear system

$$\sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\mu} c_{ij}\,(\varphi_{ij}, \psi_{i'j'})_n = (x, \psi_{i'j'})_n, \quad i' \in \mathbb{N}_{N_n},\ j' \in \mathbb{N}_\mu, \tag{3.89}$$

has a unique solution [c_{11}, …, c_{1μ}, …, c_{N_n 1}, …, c_{N_n μ}]. This in turn is equivalent to the coefficient matrix 𝐌 of this system being nonsingular. It is easily seen that

$$\mathbf{M} = \operatorname{diag}(\det(B_1)M, \ldots, \det(B_{N_n})M).$$

Thus the first result of this proposition follows from hypothesis (3.88).

It remains to show that Q_n is a projection. By definition we have, for every x ∈ L^∞(Ω), that

$$(Q_n x, y)_n = (x, y)_n \quad \text{for all } y \in Y_n.$$

In particular, this equation holds when x is replaced by Q_n x. That is,

$$(Q_n^2 x, y)_n = (Q_n x, y)_n \quad \text{for all } y \in Y_n.$$

It follows for each x ∈ X that

$$Q_n^2 x = Q_n x.$$

That is, Q_n is a projection. □

Condition (3.88) is a condition on the choice of the points t_j on the reference element. They have to be selected carefully so that they match the choice of the bases φ_i and ψ_i. This condition has to be verified before a concrete construction of the projection Q_n is given. This is not a difficult task: since the condition is posed on the reference element, it is independent of n, and in practical applications the numbers μ and γ are not too large.
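As an illustration of how condition (3.88) can be checked on the reference element, the following sketch assembles M = ΦWΨ^T, with Φ = [φ_i(t_j)] and Ψ = [ψ_i(t_j)], for one simple and purely illustrative choice on Δ_0 = [0, 1]: linear trial functions {1, t} (μ = 2), piecewise constant test functions on the two halves, and the two-point rule t_j ∈ {0, 1} with weights w_j = 1/2 (γ = 2):

```python
import numpy as np

# Reference-element data (illustrative choices, not the book's):
# trial basis {1, t}, test basis the indicators of [0, 1/2] and [1/2, 1],
# quadrature points t_j = {0, 1} with trapezoid weights w_j = {1/2, 1/2}.
t = np.array([0.0, 1.0])
Phi = np.array([np.ones_like(t), t])                 # Phi[i, j] = phi_i(t_j)
Psi = np.array([(t < 0.5) * 1.0, (t >= 0.5) * 1.0])  # Psi[i, j] = psi_i(t_j)
W = np.diag([0.5, 0.5])

M = Phi @ W @ Psi.T
# Condition (3.88): det(M) != 0, so Q_n is well defined for this choice.
assert abs(np.linalg.det(M)) > 1e-12
```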

The next proposition gives two useful properties of the projection Q_n.

Proposition 3.36 Assume that condition (3.88) is satisfied. Let Q_n be defined by (3.78), with the spaces X_n, Y_n and the discrete inner product constructed in terms of the piecewise polynomials described in the last subsection. Then the following statements hold.

(i) Q_n is uniformly bounded; that is, there exists a constant c > 0 such that ‖Q_n‖ ≤ c for all n.
(ii) There exists a constant c > 0 such that for all n,

$$\|Q_n x - x\|_\infty \le c \inf\{\|x - \varphi\|_\infty : \varphi \in X_n\}$$

holds for all x ∈ L^∞(Ω). Thus, for each x ∈ C(Ω), ‖Q_n x − x‖_∞ → 0 as n → ∞.


Proof (i) For any x ∈ L^∞(Ω) we have the expression

$$Q_n x = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\mu} c_{ij}\varphi_{ij}, \tag{3.90}$$

where the coefficients c_{ij} satisfy equation (3.89). It follows that

$$\|Q_n x\|_\infty \le \|c\|_\infty \operatorname{ess\,sup}_{s \in \Omega} \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\mu} |\varphi_{ij}(s)| = \|c\|_\infty \max_{s \in \Delta_0} \sum_{j \in \mathbb{N}_\mu} |\varphi_j(s)|, \tag{3.91}$$

where

$$c = [c_{11}, \ldots, c_{1\mu}, \ldots, c_{N_n 1}, \ldots, c_{N_n \mu}]^T$$

and the discrete norm of c is defined by ‖c‖_∞ = max{|c_{ij}| : i ∈ ℕ_{N_n}, j ∈ ℕ_μ}. By definition the vector c depends on n, although we do not indicate this in the notation. However, we prove that ‖c‖_∞ is in fact bounded independently of n. To this end, we use system (3.89) and hypothesis (3.88) to conclude that

$$\|c\|_\infty = \|\mathbf{M}^{-1} d\|_\infty, \tag{3.92}$$

where

$$d = [(x, \psi_{11})_n, \ldots, (x, \psi_{1\mu})_n, \ldots, (x, \psi_{N_n 1})_n, \ldots, (x, \psi_{N_n \mu})_n]^T$$

and

$$\mathbf{M}^{-1} = \operatorname{diag}\left(\det(B_1)^{-1}M^{-1}, \ldots, \det(B_{N_n})^{-1}M^{-1}\right).$$

Let

$$d_i = [(x, \psi_{i1})_n, \ldots, (x, \psi_{i\mu})_n]^T \in \mathbb{R}^\mu.$$

Then it follows from (3.92) that the following estimate of ‖c‖_∞ holds in terms of the blocks d_i and M^{-1}:

$$\|c\|_\infty \le \max_{i \in \mathbb{N}_{N_n}} \det(B_i)^{-1}\|M^{-1} d_i\|_\infty. \tag{3.93}$$

This inequality reduces estimating ‖c‖_∞ to bounding each block d_i. By the definition of the discrete inner product we have the estimate for the norm of d_i:

$$\|d_i\|_\infty \le \|x\|_\infty \max_{j \in \mathbb{N}_\mu} \sum_{\ell \in \mathbb{N}_\gamma} w_{i\ell}|\psi_{ij}(t_{i\ell})| = \det(B_i)\|x\|_\infty \max_{j \in \mathbb{N}_\mu} \sum_{\ell \in \mathbb{N}_\gamma} w_\ell|\psi_j(t_\ell)|. \tag{3.94}$$

From (3.91)–(3.94) we conclude that

$$\|Q_n x\|_\infty \le c\|x\|_\infty \quad \text{for all } n,$$


where c is a constant independent of n, with the value

$$c = \|M^{-1}\|_\infty \max_{s \in \Delta_0} \sum_{j \in \mathbb{N}_\mu} |\varphi_j(s)|\ \max_{j \in \mathbb{N}_\mu} \sum_{\ell \in \mathbb{N}_\gamma} w_\ell|\psi_j(t_\ell)|.$$

(ii) Let φ ∈ X_n. Since Q_n is a projection, we have for each x ∈ L^∞(Ω) that

$$\|Q_n x - x\|_\infty \le \|x - \varphi\|_\infty + \|Q_n \varphi - Q_n x\|_\infty \le (1 + c)\|x - \varphi\|_\infty.$$

Thus we obtain the estimate

$$\|Q_n x - x\|_\infty \le c \inf\{\|x - \varphi\|_\infty : \varphi \in X_n\}.$$

This estimate, together with the relation C(Ω) ⊆ $\overline{\bigcup_n X_n}$, implies that ‖Q_n x − x‖_∞ → 0 as n → ∞ for each x ∈ C(Ω). □

In the next proposition we verify that the operators K_n defined in the last subsection by the piecewise polynomial approximation satisfy hypotheses (H-1) and (H-2).

Proposition 3.37 Suppose that K_n is defined as in the last subsection by the piecewise polynomial approximation. Then the following statements hold.

(i) The set of operators {K_n} is collectively compact.
(ii) For each x ∈ X, ‖K_n x − 𝒦x‖_∞ → 0 as n → ∞.
(iii) If x ∈ W^{k₃,∞}(Ω) and K₁ ∈ C(Ω) × W^{k₃,∞}(Ω), then

$$\|\mathcal{K}x - \mathcal{K}_n x\|_\infty \le c h^{k_3}.$$

Proof (i) By the continuity of the kernel K₁ and condition (3.87), there exist constants c₁ and c₂ such that

$$\|K_1(s, \cdot)\|_\infty \le c_1 \quad \text{and} \quad \|K_2(s, \cdot)\|_1 \le c_2.$$

Thus we have that

$$|(\mathcal{K}_n x)(s)| = \left|\int_\Omega Z_n(K_1(s, t)x(t))\,K_2(s, t)\,dt\right| \le c_0 c_1 c_2 \|x\|_\infty. \tag{3.95}$$

Moreover,

$$\begin{aligned}
|(\mathcal{K}_n x)(s) - (\mathcal{K}_n x)(s')|
&= \left|\int_\Omega Z_n(K_1(s, t)x(t))K_2(s, t)\,dt - \int_\Omega Z_n(K_1(s', t)x(t))K_2(s', t)\,dt\right| \\
&\le \left|\int_\Omega Z_n(K_1(s, t)x(t))\,[K_2(s, t) - K_2(s', t)]\,dt\right| \\
&\quad + \left|\int_\Omega [Z_n(K_1(s, t)x(t)) - Z_n(K_1(s', t)x(t))]\,K_2(s', t)\,dt\right| \\
&\le c_0\|x\|_\infty\left(c_1\|K_2(s, \cdot) - K_2(s', \cdot)\|_1 + c_2\|K_1(s, \cdot) - K_1(s', \cdot)\|_\infty\right).
\end{aligned}$$

Since ‖K₂(s, ·) − K₂(s′, ·)‖₁ and ‖K₁(s, ·) − K₁(s′, ·)‖_∞ are uniformly continuous on Ω, we observe that {K_n x} is equicontinuous on Ω. By the Arzelà–Ascoli theorem, we conclude that {K_n} is collectively compact.

(ii) For any x ∈ X,

$$|(\mathcal{K}_n x)(s) - (\mathcal{K}x)(s)| = \left|\int_\Omega [Z_n(K_1(s, t)x(t)) - K_1(s, t)x(t)]\,K_2(s, t)\,dt\right| \le c_2\|Z_n(K_1(s, \cdot)x) - K_1(s, \cdot)x\|_\infty.$$

Note that K₁x is piecewise continuous, as is x. By the definition of Z_n, the right-hand side of the above inequality converges to zero as n → ∞. We conclude that the left-hand side converges uniformly to zero on the compact set Ω; that is, ‖K_n x − 𝒦x‖_∞ → 0 as n → ∞.

(iii) If x ∈ W^{k₃,∞}(Ω), by the approximation order of the interpolation projection Z_n we have

$$\|\mathcal{K}_n x - \mathcal{K}x\|_\infty \le c \sup_{s \in \Omega}\|Z_n(K_1(s, \cdot)x(\cdot)) - K_1(s, \cdot)x(\cdot)\|_\infty \le c h^{k_3}.$$

The estimate above follows immediately from the fact that K₁ ∈ C(Ω) × W^{k₃,∞}(Ω) and inequality (3.84). □

Using Propositions 3.36 and 3.37 and Theorem 2.54, we obtain the following theorem.

Theorem 3.38 The following statements are valid.

(i) There exists N₀ > 0 such that for all n > N₀ the discrete Petrov–Galerkin method using the piecewise polynomial approximation described in Section 4.2.1 has a unique solution u_n ∈ X_n.
(ii) If u ∈ W^{α,∞}(Ω) with α = min{k₁, k₃}, then

$$\|u - u_n\|_\infty \le c h^\alpha.$$

Proof By Propositions 3.36 and 3.37 we conclude that conditions (H-1)–(H-4) are satisfied. Hence, from Theorem 2.54, statement (i) follows immediately, and the estimate

$$\|u - u_n\|_\infty \le c\left(\|u - Q_n u\|_\infty + \|\mathcal{K}u - \mathcal{K}_n u\|_\infty\right) \tag{3.96}$$

holds. Now let u ∈ W^{α,∞}(Ω). Again, Proposition 3.36 ensures that

$$\|u - Q_n u\|_\infty \le c \inf\{\|u - \varphi\|_\infty : \varphi \in X_n\} \le c h^\alpha. \tag{3.97}$$


By (iii) of Proposition 3.37 we have that

$$\|\mathcal{K}u - \mathcal{K}_n u\|_\infty \le c h^\alpha. \tag{3.98}$$

Substituting estimates (3.97) and (3.98) into inequality (3.96) yields the estimate in (ii). □

3.5.5 Superconvergence of the iterated approximation

We present in this subsection a superconvergence property of the iterated discrete Petrov–Galerkin method when the kernel is smooth.

To obtain superconvergence we require, furthermore, that the partitions Π_1 and Π_3 of Δ_0, associated with the spaces S¹_{k₁}(Δ_0) and S³_{k₃}(Δ_0) respectively, are exactly the same. In the main theorem of this section we prove that the corresponding iterated discrete Petrov–Galerkin approximation has a superconvergence property when the kernels are smooth. In particular, we assume that K = K₁ and K₂ = 1 in the notation of the last subsection. We first establish a technical lemma.

Lemma 3.39 Let x ∈ L^∞(Ω) and K₁ ∈ C(Ω) × W^{k₃,∞}(Ω). If Π_1 = Π_3, then there exists a positive constant c such that for all n,

$$\|(\mathcal{K} - \mathcal{K}_n)Q_n x\|_\infty \le c h^{k_3}.$$

Proof Since Q_n x is not a continuous function, Proposition 3.37(iii) does not apply to this case. However, it follows from the proof of Proposition 3.37(ii) that

$$|(\mathcal{K}_n Q_n x)(s) - (\mathcal{K}Q_n x)(s)| \le c\|r_s\|_\infty,$$

where

$$r_s(t) = Z_n(K_1(s, \cdot)(Q_n x)(\cdot))(t) - K_1(s, t)(Q_n x)(t).$$

Hence it suffices to estimate r_s(t). Using the definition of the projection Q_n, we write

$$(Q_n x)(t) = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\mu} c_{ij}\varphi_{ij}(t), \quad t \in \Omega, \tag{3.99}$$

where φ_{ij} are the basis functions for X_n given in Subsection 3.5.4 and the coefficients c_{ij} satisfy the linear system (3.89). Consequently, we have that

$$Z_n(K_1(s, \cdot)(Q_n x)(\cdot))(t) = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\mu} c_{ij}\,Z_n(K_1(s, \cdot)\varphi_{ij}(\cdot))(t). \tag{3.100}$$


By the construction of the functions φ_{ij}, we have that φ_{ij}(t_{i'j'}) = 0 if i ≠ i′. Thus it follows that

$$Z_n(K_1(s, \cdot)\varphi_{ij}(\cdot))(t) = \sum_{i' \in \mathbb{N}_{N_n}} \sum_{j' \in \mathbb{N}_\gamma} K_1(s, t_{i'j'})\varphi_{ij}(t_{i'j'})\zeta_{i'j'}(t) = \sum_{j' \in \mathbb{N}_\gamma} K_1(s, t_{ij'})\varphi_{ij}(t_{ij'})\zeta_{ij'}(t).$$

Substituting this equation into (3.100) yields

$$Z_n(K_1(s, \cdot)(Q_n x)(\cdot))(t) = \sum_{i \in \mathbb{N}_{N_n}} \sum_{j \in \mathbb{N}_\mu} c_{ij} \sum_{j' \in \mathbb{N}_\gamma} K_1(s, t_{ij'})\varphi_{ij}(t_{ij'})\zeta_{ij'}(t), \quad t \in \Omega. \tag{3.101}$$

We now assume that for some point t ∈ Δ_{i'} we have ‖r_s‖_∞ = |r_s(t)|. For this point t there exists a point τ in the reference element Δ_0 such that t = F_{i'}(τ). Hence

$$\|r_s\|_\infty = \left|\sum_{j \in \mathbb{N}_\mu} c_{i'j}\left[\sum_{j' \in \mathbb{N}_\gamma} K_1(s, F_{i'}(t_{j'}))\varphi_j(t_{j'})\zeta_{j'}(\tau) - K_1(s, F_{i'}(\tau))\varphi_j(\tau)\right]\right|.$$

Because Δ_0 = $\bigcup_{i \in \mathbb{N}_{\nu_3}} e_i$, the point τ must be in some e_i. For each integer j′ ∈ ℕ_γ, let the positive integers i₀ and j₀, with i₀ ∈ ℕ_{ν₃} and j₀ ∈ ℕ_m, be such that (i₀ − 1)m + j₀ = j′. Therefore we have

$$\zeta_{j'}(t) = \begin{cases} p_{i_0 j_0}(t), & t \in e_{i_0}, \\ 0, & t \notin e_{i_0}, \end{cases} \qquad \text{and} \qquad t_{j'} = \tau_{i_0 j_0},$$

so that

$$\begin{aligned}
\|r_s\|_\infty &= \left|\sum_{j \in \mathbb{N}_\mu} c_{i'j}\left[\sum_{i_0 \in \mathbb{N}_{\nu_3}}\sum_{j_0 \in \mathbb{N}_m} K_1(s, F_{i'}(\tau_{i_0 j_0}))\varphi_j(\tau_{i_0 j_0})p_{i_0 j_0}(\tau) - K_1(s, F_{i'}(\tau))\varphi_j(\tau)\right]\right| \\
&= \left|\sum_{j \in \mathbb{N}_\mu} c_{i'j}\left[\sum_{j_0 \in \mathbb{N}_m} K_1(s, F_{i'}(\tau_{i j_0}))\varphi_j(\tau_{i j_0})p_{i j_0}(\tau) - K_1(s, F_{i'}(\tau))\varphi_j(\tau)\right]\right|.
\end{aligned}$$

We recognize that the expression in brackets in the last term is the error of polynomial interpolation of the function K₁(s, F_{i'}(τ))φ_j(τ) on e_i, which we call the error term on e_i. Since Π_1 = Π_3, the function K₁(s, F_{i'}(τ))φ_j(τ), as a function of τ, is in the space W^{k₃,∞}(e_i). We conclude that the error term on e_i is bounded by a constant times the norm of the k₃-th order derivatives of K₁(s, F_{i'}(·))φ_j(·); the latter is bounded by a constant times ‖B_{i'}‖^{k₃} ≤ c h^{k₃}. Hence we obtain

$$\|r_s\|_\infty \le c\|c\|_\infty h^{k_3}.$$


By the proof of Proposition 3.36 we know that ‖c‖_∞ ≤ c. Therefore we have ‖r_s‖_∞ ≤ c h^{k₃}. □

We are now ready to establish the main result of this subsection, concerning the superconvergence of the iterated solution.

Theorem 3.40 If β = min{k₁ + k₂, k₃}, u ∈ W^{β,∞}(Ω) and K ∈ C(Ω) × W^{k₃,∞}(Ω), then there exists a constant c > 0 such that for all n,

$$\|u - u'_n\|_\infty \le c h^\beta.$$

Proof It follows from Theorem 2.54 that

$$\|u - u'_n\|_\infty \le c\left(\|(\mathcal{K} - \mathcal{K}_n)Q_n u\|_\infty + \|\mathcal{K}(I - Q_n)u\|_\infty\right). \tag{3.102}$$

Because Π_1 = Π_3, by applying Lemma 3.39 we have that

$$\|(\mathcal{K} - \mathcal{K}_n)Q_n u\|_\infty \le c h^{k_3}. \tag{3.103}$$

Moreover, since K(s, ·) ∈ W^{k₃,∞}(Ω) and Π_1 = Π_3, we conclude that

$$\|\mathcal{K}(u - Q_n u)\|_\infty \le \|(K(s, t), u(t) - (Q_n u)(t))_n\|_\infty + c h^{k_3}. \tag{3.104}$$

It remains to establish an upper bound for ‖(K(s, t), u(t) − (Q_n u)(t))_n‖_∞. For this purpose we note that, for any y ∈ Y_n,

$$(y, u - Q_n u)_n = 0$$

holds. It follows that

$$|(K(s, t), u(t) - (Q_n u)(t))_n| = |(K(s, t) - y(t), u(t) - (Q_n u)(t))_n| \le \inf\{\|K(s, t) - y(t)\|_\infty : y \in Y_n\}\,\|u - Q_n u\|_\infty.$$

This implies that

$$\|(K(s, t), u(t) - (Q_n u)(t))_n\|_\infty \le c h^{k_2} h^{k_1} = c h^{k_1 + k_2}. \tag{3.105}$$

Combining inequalities (3.102)–(3.105), we establish the estimate of this theorem. □

We remark that when k₁ < k₃ < k₁ + k₂, the optimal order of convergence of u_n is O(h^{k₁}), while the iterated solution u′_n has an order of convergence O(h^{k₃}). This phenomenon is called superconvergence.


3.5.6 Numerical examples

In this subsection we present two numerical examples to illustrate the theoretical estimates obtained in the previous subsections. The kernel in the first example is weakly singular, while the kernel in the second example is smooth. The second example is presented to show the superconvergence property of the iterated solution. We restrict ourselves to simple one-dimensional equations whose exact solutions are known.

In both examples we use piecewise linear functions and piecewise constant functions for the spaces X_n and Y_n, respectively. Specifically, we define the trial space by

$$X_n = \operatorname{span}\{\varphi_j : j \in \mathbb{N}_{2n}\},$$

where

$$\varphi_{2j+1}(t) = \begin{cases} nt - j, & \frac{j}{n} \le t \le \frac{j+1}{n}, \\ 0, & \text{otherwise}, \end{cases} \quad j \in \mathbb{Z}_n,$$

and

$$\varphi_{2j+2}(t) = \begin{cases} j + 1 - nt, & \frac{j}{n} \le t \le \frac{j+1}{n}, \\ 0, & \text{otherwise}, \end{cases} \quad j \in \mathbb{Z}_n.$$

The test space is then defined by

$$Y_n = \operatorname{span}\{\psi_j : j \in \mathbb{N}_{2n}\},$$

where

$$\psi_j(t) = \begin{cases} 1, & \frac{j-1}{2n} \le t \le \frac{j}{2n}, \\ 0, & \text{otherwise}, \end{cases} \quad j \in \mathbb{N}_{2n}.$$

Example 1 Consider the integral equation with a weakly singular kernel

$$u(s) - \int_0^\pi \log|\cos s - \cos t|\,u(t)\,dt = 1, \quad 0 \le s \le \pi.$$

This equation is a reformulation of a third boundary value problem of the two-dimensional Laplace equation, and it has the exact solution

$$u(s) = \frac{1}{1 + \pi \log 2}.$$

See [12] for more details on this example. By the changes of variable t = πt′, s = πs′, we have the equivalent equation

$$u(\pi s) - \pi\int_0^1 \log|\cos(\pi s) - \cos(\pi t)|\,u(\pi t)\,dt = 1, \quad 0 \le s \le 1.$$


We write the kernel as

$$\log|\cos(\pi s) - \cos(\pi t)| = \sum_{i=1}^4 K_{i1}(s, t)K_{i2}(s, t),$$

where

$$K_{11}(s, t) = \log\left(\frac{\sin\left(\frac{\pi(t-s)}{2}\right)\sin\left(\frac{\pi(t+s)}{2}\right)}{\pi^3\,\frac{t-s}{2}\,(t+s)(2-t-s)}\right),$$

$$K_{12}(s, t) = K_{21}(s, t) = K_{31}(s, t) = K_{41}(s, t) = 1,$$

$$K_{22}(s, t) = \log|\pi(s-t)|, \qquad K_{32}(s, t) = \log(\pi(2-s-t)),$$

and

$$K_{42}(s, t) = \log(\pi(s+t)).$$
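The splitting isolates the three logarithmic singularities (at t = s, t + s = 0 and t + s = 2) in the simple factors K₂₂, K₃₂, K₄₂, leaving the smooth factor K₁₁; this can be checked numerically:

```python
import numpy as np

def lhs(s, t):
    return np.log(np.abs(np.cos(np.pi * s) - np.cos(np.pi * t)))

def rhs(s, t):
    # K11 collects the smooth part; K22, K32, K42 carry the
    # logarithmic singularities at t = s, t + s = 2 and t + s = 0.
    K11 = np.log(np.sin(np.pi * (t - s) / 2) * np.sin(np.pi * (t + s) / 2)
                 / (np.pi**3 * (t - s) / 2 * (t + s) * (2 - t - s)))
    K22 = np.log(np.abs(np.pi * (s - t)))
    K32 = np.log(np.pi * (2 - s - t))
    K42 = np.log(np.pi * (s + t))
    return K11 + K22 + K32 + K42

# The identity rests on cos(pi*s) - cos(pi*t) = 2 sin(pi(t+s)/2) sin(pi(t-s)/2).
s, t = 0.3, 0.8
assert abs(lhs(s, t) - rhs(s, t)) < 1e-12
```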

In Table 3.1 we present the error e_n of the approximate solution and the error e′_n of the iterated approximate solution, where we use q and q′ to represent the corresponding orders of approximation, respectively. In our computation we choose k₃ = 2.

The order of approximation agrees with our theoretical estimate. The iteration does not improve the accuracy of the approximate solution for this example, due to the nonsmoothness of the kernel.

Example 2 We consider the integral equation with a smooth kernel

$$u(s) - \int_0^1 \sin s \cos t\,u(t)\,dt = \sin s\,(1 - e^{\sin 1}) + e^{\sin s}, \quad 0 \le s \le 1.$$

It is not difficult to verify that u(s) = e^{sin s} is the unique solution of this equation. In the notation of Section 3.5.3 we have K₁(s, t) = sin s cos t and K₂(s, t) = 1. In this case we choose k₃ = 3 for the quadrature formula. The notation in Table 3.2 is the same as that in Table 3.1.

In this example the iteration improves the accuracy of approximation by the order as estimated in Theorem 3.40.

Table 3.1

n      4             8             16            32
e_n    1.504077e-6   3.879971e-7   9.877713e-8   2.957718e-8
q      —             1.954761      1.973797      1.739639
e′_n   3.186220e-6   8.005914e-7   1.973337e-7   5.153006e-8
q′     —             1.992708      2.020429      1.93715


Table 3.2

n      4            8            16           32
e_n    1.68156e-2   4.10275e-3   1.01615e-3   3.00353e-4
q      —            2.035137     2.013478     1.75839
e′_n   6.78911e-5   4.16056e-6   2.58946e-7   1.61679e-8
q′     —            4.028373     4.006054     4.001447
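The orders q and q′ in the tables are consistent with the usual estimate from successive mesh halvings, q ≈ log₂(e_n / e_{2n}); a quick check against the entries of Table 3.2 (the helper is ours, not the book's):

```python
from math import log2

def orders(errors):
    """Estimated convergence orders q = log2(e_n / e_{2n}) for errors
    computed on a sequence of meshes n, 2n, 4n, ..."""
    return [log2(a / b) for a, b in zip(errors, errors[1:])]

# Errors from Table 3.2 (smooth kernel, Example 2), n = 4, 8, 16, 32.
e = [1.68156e-2, 4.10275e-3, 1.01615e-3, 3.00353e-4]
ep = [6.78911e-5, 4.16056e-6, 2.58946e-7, 1.61679e-8]

q = orders(e)    # roughly 2: the order of u_n
qp = orders(ep)  # roughly 4: the superconvergent order of the iterated u'_n

assert abs(q[0] - 2.035137) < 1e-3
assert abs(qp[0] - 4.028373) < 1e-3
```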

3.6 Bibliographical remarks

The material presented in this chapter on conventional numerical methods for solving Fredholm integral equations of the second kind is mainly taken from the books [15, 177, 178]. Analysis of the quadrature method may be found in [6]. For issues related to the collocation method, such as the evaluation of an L^∞ function f at a given point, the reader is referred to [21], and multivariate Lagrange interpolation by polynomials may be found in [83] and the references cited therein. For the theoretical framework for analysis of the Petrov–Galerkin method, readers are referred to [64, 77]. Superconvergence of the iterated Petrov–Galerkin method was originally analyzed in [77]. The discrete Petrov–Galerkin method and its iterated scheme presented in Section 3.5.3 are taken from [68, 80]. The iterated Galerkin method, a special case of the iterated Petrov–Galerkin method for Fredholm integral equations of the second kind, was studied by many authors (see [23, 165, 246] and the references cited therein). Reference [241] gives a nice review of the iterated Galerkin method and the iterated collocation method.

We would like to mention other developments on this topic not included in this book. Boundary integral equations of the second kind with periodic logarithmic kernels were solved by a Nyström scheme-based extrapolation method in [271], where asymptotic expansions for the approximate solutions obtained by the Nyström scheme were developed to analyze the extrapolation method. The generalized airfoil equation for an airfoil with a flap was solved numerically in [204]. In [49] it was shown that the dense coefficient matrix obtained from a quadrature rule for boundary integral equations with logarithmic kernels can be replaced by a sparse one if appropriate graded meshes are used in the quadrature rules. A fast numerical method was developed in [266] for solving the two-dimensional Fredholm integral equation of the second kind. More information about the Galerkin method using the Fourier basis for solving boundary integral equations may be found in [25, 81]; fast numerical algorithms for this method were developed recently in [37, 154, 155, 263]. A singularity-preserving Galerkin method was developed in [41] for the


Fredholm integral equation of the second kind with weakly singular kernels whose solutions have singularities. The method was extended in [38, 229] to solve the Volterra integral equation of the second kind with weakly singular kernels, and was also used in [111, 112] to solve fractional differential equations.

A singularity-preserving collocation method for solving the Fredholm integral equation of the second kind with weakly singular kernels was developed in [39]. In [16] a discretized Galerkin method was obtained using numerical integration for the evaluation of the integrals occurring in the Galerkin method, and in [23], by considering discrete inner products and discrete projections, the authors treated more appropriately kernels with discontinuous derivatives. A discrete convergence theory and its applications to the numerical solution of weakly singular integral equations were presented in [256]. Finally, we remark that the superconvergence of the iterated Galerkin method when the kernels are weakly singular may be obtained by an analysis similar to that provided in [165].


4

Multiscale basis functions

Since a large class of physical problems is defined on bounded domains, we focus on integral equations on bounded domains. As we know, a bounded domain in ℝ^d may be well approximated by a polygonal domain, which is a union of simplices, cubes and perhaps L-shaped domains. To develop fast Galerkin, Petrov–Galerkin and collocation methods for solving the integral equations, we need multiscale bases and collocation functionals on polygonal domains. Simplices, cubes and L-shaped domains are typical examples of invariant sets. This chapter is devoted to a description of constructions of multiscale basis functions, including multiscale orthogonal bases, interpolating bases and multiscale collocation functionals. The multiscale basis functions that we construct here are discontinuous piecewise polynomials. For this reason we describe their construction on invariant sets, which can be turned into bases on a polygon.

To illustrate the idea of the construction, we start with examples on [0, 1], which is the simplest example of an invariant set. This will be done in Section 4.1. Constructions of multiscale basis functions and collocation functionals on invariant sets are based on self-similar partitions of the sets; hence we discuss such partitions in Section 4.2. Based on such self-similar partitions, we describe constructions of multiscale orthogonal bases in Section 4.3. For the construction of the multiscale interpolating basis we require the availability of the multiscale interpolation points. Section 4.4 is devoted to the notion of refinable sets, which are the basis for the construction of the multiscale interpolation points. Finally, in Section 4.5 we present the construction of multiscale interpolating bases.




4.1 Multiscale functions on the unit interval

This section serves as an illustration of the idea for the construction of orthogonal multiscale piecewise polynomial bases on an invariant set. We consider the simplest invariant set, $E = [0, 1]$, in this section. The essential aspect of this construction is the recursive generation of partitions of $E$ and of the multiscale bases based on the partitions.

Let $L^2(E)$ denote the Hilbert space equipped with inner product

$$(u, v) = \int_E u(t)v(t)\,dt, \quad u, v \in L^2(E),$$

and the induced norm $\|\cdot\| = \sqrt{(\cdot,\cdot)}$. We now describe a sequence of finite-dimensional subspaces of $L^2(E)$. For two positive integers $k$ and $m$, we let $S^k_m$ denote the linear space of all functions which are polynomials of degree at most $k-1$ on each of the intervals

$$I_{m,j} = \left[\frac{j}{m}, \frac{j+1}{m}\right], \quad j \in \mathbb{Z}_m.$$

The functions in $S^k_m$ are allowed to be discontinuous at the knots $j/m$ for $j \in \mathbb{N}_{m-1}$. Hence the dimension of the space $S^k_m$ is $km$. When $m'$ divides $m$, that is, $m = \lambda m'$ for some positive integer $\lambda$, then

$$S^k_{m'} \subseteq S^k_m,$$

since the knot sequence $\{j/m' : j \in \mathbb{Z}_{m'}\}$ for the space $S^k_{m'}$ is contained in the sequence $\{j/m : j \in \mathbb{Z}_m\}$ for the space $S^k_m$. In particular, choosing $m = 2^n$ we have that

$$S^k_1 \subseteq S^k_2 \subseteq \cdots \subseteq S^k_{2^n}. \tag{4.1}$$

In this context we reinterpret the unit interval and its partition. Recall that the unit interval is the invariant set with respect to the maps

$$\varphi_\varepsilon(t) = \frac{\varepsilon + t}{2}, \quad t \in E, \ \varepsilon \in \mathbb{Z}_2,$$

in the sense that

$$E = \varphi_0(E) \cup \varphi_1(E) \quad\text{and}\quad \mathrm{meas}(\varphi_0(E) \cap \varphi_1(E)) = 0,$$

where $\mathrm{meas}(A)$ denotes the Lebesgue measure of the set $A$. Note that the maps $\varphi_0$ and $\varphi_1$ are contractive, and they map $E$ onto $[0, 1/2]$ and $[1/2, 1]$, respectively. The partition $\{I_{2^k,j} : j \in \mathbb{Z}_{2^k}\}$ of $E$ can be re-expressed in terms of the contractive maps $\varphi_\varepsilon$, $\varepsilon \in \mathbb{Z}_2$, as

$$\{\varphi_{\varepsilon_1} \circ \cdots \circ \varphi_{\varepsilon_k}(E) : \varepsilon_j \in \mathbb{Z}_2\}.$$



Figure 4.1 The maps $\varphi_\varepsilon$ and operators $T_\varepsilon$: a function $f$ on $E$, $T_0 f$ supported on $\varphi_0(E)$, and $T_1 f$ supported on $\varphi_1(E)$.

Associated with the contractive maps $\varphi_\varepsilon$, $\varepsilon \in \mathbb{Z}_2$, we introduce two mutually orthogonal isometries on $L^2(E)$ that will be used to recursively generate bases for the spaces in the chain (4.1). For each $\varepsilon \in \mathbb{Z}_2$ we set $E_\varepsilon = [\varepsilon/2, (\varepsilon+1)/2]$ and define the isometry $T_\varepsilon$ by setting, for $f \in L^2(E)$,

$$T_\varepsilon f = \sqrt{2}\,(f \circ \varphi_\varepsilon^{-1})\,\chi_{E_\varepsilon} = \begin{cases} \sqrt{2}\,f(2t-\varepsilon), & t \in E_\varepsilon, \\ 0, & t \notin E_\varepsilon, \end{cases} \tag{4.2}$$

where $\chi_A$ denotes the characteristic function of the set $A$. Figure 4.1 illustrates the results of applications of the operators $T_\varepsilon$ to a function.
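The action of $T_\varepsilon$ in (4.2) is easy to check numerically. The following sketch is our illustration, not the book's code; the test function $f$ and the quadrature size are arbitrary choices. It verifies that $T_0$ and $T_1$ are isometries with mutually orthogonal ranges:

```python
import numpy as np

def T(eps, f):
    """T_eps f = sqrt(2) * f(2t - eps) on [eps/2, (eps+1)/2], zero elsewhere (4.2)."""
    def g(t):
        t = np.asarray(t, dtype=float)
        inside = (t >= eps / 2) & (t <= (eps + 1) / 2)
        # clip keeps the argument of f inside [0, 1]; the mask zeroes the rest
        return np.where(inside, np.sqrt(2.0) * f(np.clip(2 * t - eps, 0.0, 1.0)), 0.0)
    return g

def inner(f, g, n=20000):
    """Composite midpoint rule for the L2([0,1]) inner product."""
    t = (np.arange(n) + 0.5) / n
    return float(np.sum(f(t) * g(t)) / n)

f = lambda t: 3 * t**2          # an arbitrary test function in L2([0,1])
T0f, T1f = T(0, f), T(1, f)

print(inner(f, f), inner(T0f, T0f), inner(T0f, T1f))
```

Up to quadrature error, $(T_0 f, T_0 f) = (f, f)$ and $(T_0 f, T_1 f) = 0$, matching Proposition 4.1 (3) below.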

For each $\varepsilon \in \mathbb{Z}_2$ we use $T^*_\varepsilon$ for the adjoint operator of $T_\varepsilon$. We have the following result concerning the adjoint operator $T^*_\varepsilon$.

Proposition 4.1
(1) If $f \in L^2(E)$, then

$$T^*_\varepsilon f = \frac{\sqrt{2}}{2}\, f \circ \varphi_\varepsilon.$$

(2) For any $\varepsilon, \varepsilon' \in \mathbb{Z}_2$,

$$T^*_\varepsilon T_{\varepsilon'} = \delta_{\varepsilon,\varepsilon'}\, I. \tag{4.3}$$

(3) For any $\varepsilon, \varepsilon' \in \mathbb{Z}_2$ and $f, g \in L^2(E)$,

$$(T_\varepsilon f, T_{\varepsilon'} g) = \delta_{\varepsilon,\varepsilon'}\,(f, g).$$

Proof (1) For $f, g \in L^2(E)$, by the definition of $T_\varepsilon$ we have that

$$\int_E g(x)(T_\varepsilon f)(x)\,dx = \sqrt{2} \int_{E_\varepsilon} g(x)(f \circ \varphi_\varepsilon^{-1})(x)\,dx.$$



We make a change of variables $t = \varphi_\varepsilon^{-1}(x)$ to conclude that

$$\int_E g(x)(T_\varepsilon f)(x)\,dx = \frac{\sqrt{2}}{2} \int_E (g \circ \varphi_\varepsilon)(x) f(x)\,dx.$$

The formula for $T^*_\varepsilon g$ follows.
(2) For $f \in L^2(E)$, by (1) of this proposition we observe that

$$(T^*_{\varepsilon'} T_\varepsilon f)(x) = \frac{\sqrt{2}}{2}\,(T_\varepsilon f)(\varphi_{\varepsilon'}(x)).$$

By the definition of the operator $T_\varepsilon$, if $\varepsilon' \neq \varepsilon$ then $T^*_{\varepsilon'} T_\varepsilon = 0$, and if $\varepsilon' = \varepsilon$ then $T^*_{\varepsilon'} T_\varepsilon = I$.
(3) The formula in this part follows directly from (4.3).

It is clear from their definitions that the operators $T_\varepsilon$, $\varepsilon \in \mathbb{Z}_2$, preserve the linear independence of a set of functions in $L^2(E)$. Moreover, it follows from Proposition 4.1 (2) that functions resulting from applications of the operators $T_\varepsilon$ with different $\varepsilon$ are orthogonal. We next show how to use the operators $T_\varepsilon$, $\varepsilon \in \mathbb{Z}_2$, to generate recursively the bases for the spaces

$$X_n = S^k_{2^n}, \quad n \in \mathbb{N}_0.$$

To this end, when $S_1$ and $S_2$ are subsets of $L^2(E)$ such that $(u, v) = 0$ for all $u \in S_1$, $v \in S_2$, we introduce the notation $S_1 \cup^\perp S_2$, which denotes the union of $S_1$ and $S_2$.

Proposition 4.2 If $\mathbb{X}_0$ is an orthonormal basis for $S^k_1$, then

$$\mathbb{X}_n = \bigcup^\perp_{\varepsilon \in \mathbb{Z}_2} T_\varepsilon \mathbb{X}_{n-1}, \quad n \in \mathbb{N}, \tag{4.4}$$

is an orthonormal basis for $S^k_{2^n}$.

Proof We prove by induction on $n$. Suppose that $f_j$, $j \in \mathbb{N}_{k2^{n-1}}$, form an orthonormal basis for $X_{n-1}$. By Proposition 4.1, the functions $T_\varepsilon f_j$, $j \in \mathbb{N}_{k2^{n-1}}$, $\varepsilon \in \mathbb{Z}_2$, are also orthonormal. It can be shown that these $k2^n$ functions are elements of $X_n$. Moreover, $\dim X_n = k2^n$, which equals the number of these elements. Therefore they form an orthonormal basis for the space $X_n$.

We now turn to the construction of our multiscale basis for the space $X_n$. Recalling $X_{n-1} \subseteq X_n$, we have that

$$X_n = X_{n-1} \oplus^\perp W_n,$$



where $W_n$ is the orthogonal complement of $X_{n-1}$ in $X_n$. Since the dimension of $X_n$ is $k2^n$, the dimension of $W_n$ is given by

$$\dim W_n = k2^{n-1}.$$

Repeating this process leads to the decomposition

$$X_n = X_0 \oplus^\perp W_1 \oplus^\perp \cdots \oplus^\perp W_n \tag{4.5}$$

for the space $X_n$. In order to construct a multiscale orthonormal basis, it suffices to construct an orthonormal basis $\mathbb{W}_j$ for the space $W_j$ for each $j \in \mathbb{N}_n$. We first choose the Legendre polynomials of degree $\le k-1$ on $E$ as an orthonormal basis for $X_0 = S^k_1$ and denote this basis by $\mathbb{X}_0$. We then use Proposition 4.2 to construct an orthonormal basis $\mathbb{X}_1$ for the space $X_1$, that is,

$$\mathbb{X}_1 = \bigcup^\perp_{\varepsilon \in \mathbb{Z}_2} T_\varepsilon \mathbb{X}_0.$$

Since both $X_0$ and $X_1$ are finite-dimensional, we can use the Gram–Schmidt process to find an orthonormal basis $\mathbb{W}_1$ for $W_1$. Specifically, we form linear combinations of the basis functions in $\mathbb{X}_1$ and require them to be orthogonal to all elements of $\mathbb{X}_0$. This gives us $k$ linearly independent elements which are orthogonal to $\mathbb{X}_0$. We then orthonormalize these $k$ functions, and they serve as an orthonormal basis for $W_1$. For the construction of the basis $\mathbb{W}_j$ when $j \ge 2$, we appeal to the following proposition.

Proposition 4.3 If $\mathbb{W}_1$ is given as an orthonormal basis for $W_1$, then

$$\mathbb{W}_{n+1} = \bigcup^\perp_{\varepsilon \in \mathbb{Z}_2} T_\varepsilon \mathbb{W}_n, \quad n \in \mathbb{N}, \tag{4.6}$$

is an orthonormal basis for $W_{n+1}$.

Proof We prove that $\mathbb{W}_n$ is an orthonormal basis for $W_n$ by induction on $n$. When $n = 1$, $\mathbb{W}_1$ is an orthonormal basis for $W_1$ by hypothesis. Assume that $\mathbb{W}_j$ is an orthonormal basis for $W_j$ for some $j \ge 1$; we show that $\mathbb{W}_{j+1}$ is an orthonormal basis for $W_{j+1}$.

Let $\mathbb{W} = T_0\mathbb{W}_j \cup T_1\mathbb{W}_j$. Since $\mathbb{W}_j \subseteq X_j$, by Proposition 4.2 we conclude that $\mathbb{W} \subseteq X_{j+1}$. Proposition 4.1, with the induction hypothesis that $\mathbb{W}_j \perp X_{j-1}$, ensures that $\mathbb{W} \perp (T_0\mathbb{X}_{j-1} \cup T_1\mathbb{X}_{j-1}) = \mathbb{X}_j$, which implies that $\mathbb{W} \subseteq W_{j+1}$. Because the elements of $\mathbb{W}_j$ are orthonormal, by Proposition 4.1 the elements of $\mathbb{W}$ are also orthonormal. Moreover,

$$\mathrm{card}\,\mathbb{W} = \dim W_{j+1}$$

holds. Therefore $\mathbb{W}$ is a basis for $W_{j+1}$.



The proposition above gives a recursive generation of the multiscale basis functions for the spaces $W_n$ once orthonormal basis functions for $W_1$ are available. It is useful for us in what follows to index the functions in the wavelet bases for $X_n$ and to clearly have in mind the interval of their "support". To this end we set $W_0 = X_0$ and define

$$w(n) = \dim W_n \quad\text{and}\quad s(n) = \dim X_n, \quad n \in \mathbb{N}_0.$$

Thus we have that

$$w(0) = k, \quad w(n) = k2^{n-1}, \ n \in \mathbb{N}, \quad\text{and}\quad s(n) = k2^n, \ n \in \mathbb{N}_0.$$

For $i \in \mathbb{N}_0$ we write $\mathbb{W}_i = \{w_{ij} : j \in \mathbb{Z}_{w(i)}\}$, where we use double subscripts for the basis functions, with the first representing the level of the scale of the subspaces and the second indicating the location of its support. There are two properties of the functions in the set $\{w_{ij} : (i,j) \in U\}$, where $U = \{(i,j) : i \in \mathbb{N}_0, j \in \mathbb{Z}_{w(i)}\}$, which are important to us. The first is that they form a complete orthonormal system for the space $L^2(E)$. In particular, we have that

$$(w_{ij}, w_{i'j'}) = \delta_{ii'}\delta_{jj'}, \quad (i,j), (i',j') \in U.$$

Embodied in this fact is the useful property that the wavelet basis $\{w_{ij} : (i,j) \in U\}$ has vanishing moments of order $k$, that is,

$$((\cdot)^r, w_{ij}) = 0 \quad\text{for } r \in \mathbb{Z}_k, \ j \in \mathbb{Z}_{w(i)}, \ i \in \mathbb{N}.$$

The second property is the "shrinking support" (as the level $i$ increases) of the multiscale basis functions. To pin down this fact, we take the point of view that the $k$ functions in $\mathbb{W}_1$ have "support" on $E$. Thereafter, the wavelet basis at level $i$ is grouped into $2^{i-1}$ sets of $k$ functions, each set having the same "support interval". For future reference we identify a set off which $w_{ij}$ vanishes. For this purpose we write $j \in \mathbb{Z}_{w(i)}$, $i \in \mathbb{N}$, uniquely in the form $j = vk + l$, where $l \in \mathbb{Z}_k$ and $v \in \mathbb{Z}_{2^{i-1}}$. Then

$$w_{ij}(t) = 0, \quad t \notin I_{2^{i-1},v}, \ j = vk + l, \tag{4.7}$$

and therefore we have, for $j \in \mathbb{Z}_{w(i)}$, that

$$\mathrm{meas}(\mathrm{supp}\,w_{ij}) \le \frac{1}{2^{i-1}}.$$

To see this fact clearly, we express $v$ in its dyadic expansion

$$v = 2^{i-2}\varepsilon_1 + \cdots + 2\varepsilon_{i-2} + \varepsilon_{i-1},$$



where $\varepsilon_1, \varepsilon_2, \ldots, \varepsilon_{i-1} \in \mathbb{Z}_2$. The recursion (4.6) then gives the formula

$$w_{ij} = T_{\varepsilon_1} \cdots T_{\varepsilon_{i-1}} w_{1l},$$

which confirms (4.7).

We end this section by presenting bases for the spaces $X_0$ and $W_1$ for four concrete examples: piecewise constant, linear, quadratic and cubic polynomials.
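For the Haar case ($k = 1$, $l = 0$) the recursion $w_{ij} = T_{\varepsilon_1}\cdots T_{\varepsilon_{i-1}} w_{1l}$ can be sketched directly. The code below is our illustration (the grid size is an arbitrary choice; the helper assumes $i \ge 2$); it checks the support statement (4.7) and the vanishing moment at level $i = 3$:

```python
import numpy as np

def T(eps, f):
    """T_eps from (4.2), acting on callables."""
    return lambda t: np.where((t >= eps / 2) & (t <= (eps + 1) / 2),
                              np.sqrt(2.0) * f(np.clip(2 * t - eps, 0.0, 1.0)), 0.0)

w10 = lambda t: np.where(t <= 0.5, 1.0, -1.0)        # Haar wavelet, k = 1

def w(i, v):
    """w_{i,v} = T_{eps_1} ... T_{eps_{i-1}} w_{1,0}, where eps_1 ... eps_{i-1}
    are the binary digits of v = 2^{i-2} eps_1 + ... + eps_{i-1} (assumes i >= 2)."""
    digits = [(v >> (i - 2 - m)) & 1 for m in range(i - 1)]
    f = w10
    for eps in reversed(digits):                     # innermost map applied first
        f = T(eps, f)
    return f

n = 4096
t = (np.arange(n) + 0.5) / n
vals = w(3, 2)(t)                                    # level i = 3, v = 2

support = (t > 2 / 4) & (t < 3 / 4)                  # I_{2^{i-1},v} = [2/4, 3/4]
print(np.max(np.abs(vals[~support])), np.sum(vals) / n, np.sum(vals**2) / n)
```

The printed values confirm that $w_{3,2}$ vanishes off $[1/2, 3/4]$, has zero integral, and has unit norm.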

Piecewise constant functions This case leads to the Haar wavelet. We have a basis for $X_0$ given by

$$w_{00}(t) = 1, \quad t \in [0, 1],$$

and a basis for $W_1$ given by

$$w_{10}(t) = \begin{cases} 1, & t \in [0, 1/2], \\ -1, & t \in (1/2, 1]. \end{cases}$$

We illustrate in Figure 4.2 the graphs of the functions $w_{00}$ and $w_{10}$.

Piecewise linear polynomials In this case $k = 2$ and $\dim X_0 = \dim W_1 = 2$. We have an orthonormal basis for $X_0$ given by

$$w_{00}(t) = 1, \quad w_{01}(t) = \sqrt{3}(2t-1),$$

and an orthonormal basis for $W_1$ given by

$$w_{10}(t) = \begin{cases} 1-6t, & t \in [0, \tfrac12], \\ 5-6t, & t \in (\tfrac12, 1], \end{cases} \qquad w_{11}(t) = \begin{cases} \sqrt{3}(1-4t), & t \in [0, \tfrac12], \\ \sqrt{3}(4t-3), & t \in (\tfrac12, 1]. \end{cases}$$

We illustrate in Figure 4.3 the graphs of the functions $w_{00}$, $w_{01}$, $w_{10}$ and $w_{11}$.

Figure 4.2 Basis functions for piecewise constant functions.



Figure 4.3 Basis functions for piecewise linear polynomials.

Piecewise quadratic polynomials In this case $k = 3$ and $\dim X_0 = \dim W_1 = 3$. An orthonormal basis for $X_0$ is given by

$$w_{00}(t) = 1, \quad w_{01}(t) = \sqrt{3}(2t-1), \quad w_{02}(t) = \sqrt{5}(6t^2-6t+1),$$

and an orthonormal basis for $W_1$ is given by

$$w_{10}(t) = \begin{cases} 1-6t, & t \in [0, \tfrac12], \\ 5-6t, & t \in (\tfrac12, 1], \end{cases}$$

$$w_{11}(t) = \begin{cases} \dfrac{\sqrt{93}}{31}(240t^2-116t+9), & t \in [0, \tfrac12], \\[2pt] \dfrac{\sqrt{93}}{31}(3-4t), & t \in (\tfrac12, 1], \end{cases}$$

$$w_{12}(t) = \begin{cases} \dfrac{\sqrt{93}}{31}(4t-1), & t \in [0, \tfrac12], \\[2pt] \dfrac{\sqrt{93}}{31}(240t^2-364t+133), & t \in (\tfrac12, 1]. \end{cases}$$

In Figure 4.4 we illustrate the graphs of the bases for $X_0$ and $W_1$ in this case.



Figure 4.4 Basis functions for piecewise quadratic polynomials.

Piecewise cubic polynomials In this case we have that $k = 4$ and $\dim X_0 = \dim W_1 = 4$. An orthonormal basis for $X_0$ is given by

$$w_{00}(t) = 1, \quad w_{01}(t) = \sqrt{3}(2t-1),$$
$$w_{02}(t) = \sqrt{5}(6t^2-6t+1), \quad w_{03}(t) = \sqrt{7}(20t^3-30t^2+12t-1),$$

and a basis for $W_1$ is given by

$$w_{10}(t) = \begin{cases} \dfrac{\sqrt{5}}{15}(240t^2-90t+5), & t \in [0, \tfrac12], \\[2pt] -\dfrac{\sqrt{5}}{15}(240t^2-390t+155), & t \in (\tfrac12, 1], \end{cases}$$

$$w_{11}(t) = \begin{cases} \sqrt{3}(30t^2-14t+1), & t \in [0, \tfrac12], \\ \sqrt{3}(30t^2-46t+17), & t \in (\tfrac12, 1], \end{cases}$$

$$w_{12}(t) = \begin{cases} \sqrt{7}(160t^3-120t^2+24t-1), & t \in [0, \tfrac12], \\ -\sqrt{7}(160t^3-360t^2+264t-63), & t \in (\tfrac12, 1], \end{cases}$$



Figure 4.5 Basis functions for piecewise cubic polynomials.

$$w_{13}(t) = \begin{cases} \dfrac{14\sqrt{29}}{29}\left(160t^3-120t^2+\dfrac{165}{7}t-\dfrac{13}{14}\right), & t \in [0, \tfrac12], \\[4pt] \dfrac{14\sqrt{29}}{29}\left(160t^3-360t^2+\dfrac{1845}{7}t-\dfrac{877}{14}\right), & t \in (\tfrac12, 1]. \end{cases}$$

The bases for $X_0$ and $W_1$ are shown, respectively, in Figures 4.5 and 4.6.

4.2 Multiscale partitions

Because a polygonal domain in $\mathbb{R}^d$ is a union of a finite number of invariant sets, in this section we focus on multiscale partitioning of an invariant set in $\mathbb{R}^d$.

4.2.1 Invariant sets

We introduce the notion of invariant sets following [148]. Let $M$ be a complete metric space. For any subset $A$ of $M$ and $x \in M$, we define the distance of $x$ to



Figure 4.6 Basis functions for piecewise cubic polynomials.

$A$ and the diameter of $A$, respectively, by

$$\mathrm{dist}(x, A) = \inf\{d(x, y) : y \in A\}$$

and

$$\mathrm{diam}(A) = \sup\{d(x, y) : x, y \in A\}.$$

A mapping $\varphi_\varepsilon$ from $M$ to $M$ is called contractive if there exists a $\gamma \in (0, 1)$ such that for all subsets $A$ of $M$,

$$\mathrm{diam}(\varphi_\varepsilon(A)) \le \gamma\,\mathrm{diam}(A), \quad \varepsilon \in \mathbb{Z}_\mu. \tag{4.8}$$

For a positive integer $\mu > 1$, we suppose that $\Phi = \{\varphi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ is a family of contractive mappings on $M$. We define the subset $\Phi(A)$ of $M$ by

$$\Phi(A) = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \varphi_\varepsilon(A).$$



According to [148], there exists a unique compact subset $E$ of $M$ such that

$$\Phi(E) = E. \tag{4.9}$$

We call the set $E$ the invariant set relative to the family $\Phi$ of contractive mappings.

Generally, an invariant set has a complex fractal structure. For example, there are choices of $\Phi$ for which $E$ is the Cantor subset of the interval $[0, 1]$, the Sierpinski gasket contained in an equilateral triangle, or the twin dragons from wavelet analysis. In Figures 4.7 and 4.8 we illustrate the generation of the Cantor set of $[0, 1]$ and the Sierpinski gasket, respectively.

In the context of numerical solutions of integral equations we are interested in the cases when $E$ has a simple structure, including, for example, the cube and simplex in $\mathbb{R}^d$. With these cases in mind we make the following additional restrictions on the family $\Phi$ of mappings.

Figure 4.7 Generation of the Cantor set.

Figure 4.8 Generation of the Sierpinski gasket.



(a) For every $\varepsilon \in \mathbb{Z}_\mu$, the mapping $\varphi_\varepsilon$ has a continuous inverse on $E$.
(b) The set $E$ has nonempty interior and

$$\mathrm{meas}(\varphi_\varepsilon(E) \cap \varphi_{\varepsilon'}(E)) = 0, \quad \varepsilon, \varepsilon' \in \mathbb{Z}_\mu, \ \varepsilon \neq \varepsilon'.$$

We present several simple examples of invariant sets.

Example 4.4 For the metric space $\mathbb{R}$ and an integer $\mu > 1$, consider the family of contractive mappings $\Phi = \{\varphi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$, where

$$\varphi_\varepsilon(t) = \frac{t + \varepsilon}{\mu}, \quad t \in \mathbb{R}, \ \varepsilon \in \mathbb{Z}_\mu.$$

The unit interval $E = [0, 1]$ is the invariant set relative to $\Phi$, which satisfies

$$E = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \varphi_\varepsilon(E).$$

When $\mu = 2$, this example is discussed in Section 4.1. Figure 4.9 illustrates the case when $\mu = 3$. Note that in this case

$$\varphi_0(E) = \left[0, \tfrac13\right], \quad \varphi_1(E) = \left[\tfrac13, \tfrac23\right], \quad \varphi_2(E) = \left[\tfrac23, 1\right],$$

and clearly

$$[0, 1] = \varphi_0(E) \cup \varphi_1(E) \cup \varphi_2(E).$$

Example 4.5 In the metric space $\mathbb{R}^2$ we consider four contractive affine mappings

$$\varphi_0(s, t) = \tfrac12(s, t), \quad \varphi_1(s, t) = \tfrac12(s+1, t),$$
$$\varphi_2(s, t) = \tfrac12(s, t+1), \quad \varphi_3(s, t) = \tfrac12(1-s, 1-t), \quad (s, t) \in \mathbb{R}^2.$$

The invariant set $E$ relative to these mappings is the unit triangle with vertices at $(0, 0)$, $(1, 0)$ and $(0, 1)$, since

$$E = \varphi_0(E) \cup \varphi_1(E) \cup \varphi_2(E) \cup \varphi_3(E).$$

This is illustrated in Figure 4.10.
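One can check numerically that the four images $\varphi_j(E)$ tile the triangle. The sketch below is our illustration (sample count and random seed are arbitrary choices): it inverts each affine map and verifies that a random interior point lies in exactly one image, consistent with condition (b).

```python
import numpy as np

in_tri = lambda s, t: (s >= 0) and (t >= 0) and (s + t <= 1)

# preimages under the four maps of Example 4.5
inverses = [
    lambda s, t: (2 * s, 2 * t),              # phi_0^{-1}
    lambda s, t: (2 * s - 1, 2 * t),          # phi_1^{-1}
    lambda s, t: (2 * s, 2 * t - 1),          # phi_2^{-1}
    lambda s, t: (1 - 2 * s, 1 - 2 * t),      # phi_3^{-1}
]

rng = np.random.default_rng(0)
counts = []
for _ in range(2000):
    u, v = rng.random(2)
    if u + v > 1:                             # fold to get a uniform point of the triangle
        u, v = 1 - u, 1 - v
    counts.append(sum(in_tri(*inv(u, v)) for inv in inverses))

print(min(counts), max(counts))   # each random point lies in exactly one image
```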

Figure 4.9 The invariant set in Example 4.4 with $\mu = 3$.



Figure 4.10 The unit triangle as an invariant set.

Figure 4.11 The unit L-shaped domain as an invariant set.

Example 4.6 In the metric space $\mathbb{R}^2$ we consider four contractive affine mappings

$$\varphi_0(s, t) = \tfrac12(s, t), \quad \varphi_1(s, t) = \tfrac12(2-s, t),$$
$$\varphi_2(s, t) = \tfrac12(s, 2-t), \quad \varphi_3(s, t) = \tfrac12\left(s+\tfrac12, t+\tfrac12\right), \quad (s, t) \in \mathbb{R}^2.$$

The invariant set relative to these mappings is the L-shaped domain illustrated in Figure 4.11.

Example 4.7 As the last example, in the metric space $\mathbb{R}^3$ we consider eight contractive affine mappings

$$\varphi_0(x, y, z) = \tfrac12(x, y, z), \quad \varphi_1(x, y, z) = \tfrac12(y, z, x+1),$$
$$\varphi_2(x, y, z) = \tfrac12(x, z, y+1), \quad \varphi_3(x, y, z) = \tfrac12(x, y, z+1),$$
$$\varphi_4(x, y, z) = \tfrac12(x, y+1, z+1), \quad \varphi_5(x, y, z) = \tfrac12(y, x+1, z+1),$$
$$\varphi_6(x, y, z) = \tfrac12(z, x+1, y+1), \quad \varphi_7(x, y, z) = \tfrac12(x+1, y+1, z+1).$$



Figure 4.12 A three-dimensional unit simplex as an invariant set.

The invariant set relative to these eight mappings is the simplex in $\mathbb{R}^3$ defined by

$$S = \{(x, y, z) : 0 \le x \le y \le z \le 1\}.$$

This is illustrated in Figure 4.12.
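As a numerical sanity check (our sketch; the point count and seed are arbitrary choices), each of the eight maps can be written as $\varphi(u) = Au + b$ and inverted, confirming that a random point of $S$ has a preimage in $S$ under exactly one map:

```python
import numpy as np

I = np.eye(3) / 2
P = lambda *rows: np.array(rows, dtype=float) / 2
maps = [  # (A, b) with phi(u) = A u + b, in the order phi_0, ..., phi_7
    (I, [0, 0, 0]),
    (P([0, 1, 0], [0, 0, 1], [1, 0, 0]), [0, 0, .5]),   # (y, z, x+1)/2
    (P([1, 0, 0], [0, 0, 1], [0, 1, 0]), [0, 0, .5]),   # (x, z, y+1)/2
    (I, [0, 0, .5]),                                    # (x, y, z+1)/2
    (I, [0, .5, .5]),                                   # (x, y+1, z+1)/2
    (P([0, 1, 0], [1, 0, 0], [0, 0, 1]), [0, .5, .5]),  # (y, x+1, z+1)/2
    (P([0, 0, 1], [1, 0, 0], [0, 1, 0]), [0, .5, .5]),  # (z, x+1, y+1)/2
    (I, [.5, .5, .5]),                                  # (x+1, y+1, z+1)/2
]

in_S = lambda u: 0 <= u[0] <= u[1] <= u[2] <= 1

rng = np.random.default_rng(1)
counts = []
for _ in range(2000):
    p = np.sort(rng.random(3))            # a uniform random point of S
    counts.append(sum(in_S(np.linalg.solve(A, p - b)) for A, b in maps))
print(min(counts), max(counts))
```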

4.2.2 Multiscale partitions by contractive mappings

The contractive mappings that define the invariant set naturally form a partition of the invariant set. Repeatedly applying the mappings to the invariant set generates a sequence of multiscale partitions for the invariant set.

We next show how the contractive mappings are used to generate a sequence of multiscale partitions $\{\mathcal{E}_n : n \in \mathbb{N}_0\}$ of the invariant set $E$, which



is defined by $\Phi$. For notational convenience we introduce the notation

$$\mathbb{Z}^n_\mu = \mathbb{Z}_\mu \times \cdots \times \mathbb{Z}_\mu \quad (n \text{ times}).$$

For each $e = [e_j : j \in \mathbb{Z}_n] \in \mathbb{Z}^n_\mu$ we define the composition mapping

$$\varphi_e = \varphi_{e_0} \circ \varphi_{e_1} \circ \cdots \circ \varphi_{e_{n-1}}$$

and the number

$$\mu(e) = \mu^{n-1} e_0 + \cdots + \mu\, e_{n-2} + e_{n-1}.$$

Note that every $i \in \mathbb{Z}_{\mu^n}$ can be written uniquely as $i = \mu(e)$ for some $e \in \mathbb{Z}^n_\mu$. From equation (4.9) and conditions (a) and (b) it follows that the collection of sets

$$\mathcal{E}_n = \{E_{n,e} : E_{n,e} = \varphi_e(E), \ e \in \mathbb{Z}^n_\mu\} \tag{4.10}$$

forms a partition of $E$. We require that this partition has the following property:

(c) There exist positive constants $c_-$, $c_+$ such that for all $n \in \mathbb{N}_0$,

$$c_-\, \mu^{-n/d} \le \max\{d(E_{n,e}) : e \in \mathbb{Z}^n_\mu\} \le c_+\, \mu^{-n/d}, \tag{4.11}$$

where $d(A)$ represents the diameter of the set $A$, that is, $d(A) = \sup\{|x - y| : x, y \in A\}$, with $|\cdot|$ being the Euclidean norm in the space $\mathbb{R}^d$.

If a sequence of partitions $\{\mathcal{E}_n : n \in \mathbb{N}_0\}$ has property (c), we say that it forms a sequence of multiscale partitions for $E$.
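For the interval maps of Example 4.4 this construction is easy to run. The sketch below is ours ($\mu$ and $n$ are arbitrary choices): the compositions $\varphi_e$ produce the cells $E_{n,e} = [\mu(e)/\mu^n, (\mu(e)+1)/\mu^n]$, all with diameter exactly $\mu^{-n}$, so property (c) holds with $d = 1$ and $c_- = c_+ = 1$.

```python
from itertools import product

mu, n = 3, 2

def phi_e(e, t):
    """phi_e = phi_{e_0} o ... o phi_{e_{n-1}}, with phi_eps(t) = (t + eps)/mu."""
    for eps in reversed(e):          # innermost map is applied first
        t = (t + eps) / mu
    return t

def mu_of(e):
    """mu(e) = mu^{n-1} e_0 + ... + mu e_{n-2} + e_{n-1} (Horner form)."""
    m = 0
    for eps in e:
        m = mu * m + eps
    return m

cells = {mu_of(e): (phi_e(e, 0.0), phi_e(e, 1.0)) for e in product(range(mu), repeat=n)}
print(sorted(cells)[:4], max(b - a for a, b in cells.values()))
```

The dictionary keys confirm that $e \mapsto \mu(e)$ is a bijection onto $\mathbb{Z}_{\mu^n}$.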

Proposition 4.8 If the Jacobian of the contractive affine mappings $\varphi_e$, $e \in \mathbb{Z}_\mu$, satisfies

$$|J_{\varphi_e}| = O(\mu^{-1}),$$

then the sequence of partitions $\{\mathcal{E}_n : n \in \mathbb{N}_0\}$ is of multiscale type.

Proof For any $s, t \in \varphi_e(E)$ there exist $\tilde{s}, \tilde{t} \in E$ such that $s = \varphi_e(\tilde{s})$ and $t = \varphi_e(\tilde{t})$, and thus we have that

$$|s - t| = |J_{\varphi_e}|^{1/d}\,|\tilde{s} - \tilde{t}|.$$

This, with the hypothesis on the Jacobian of the mappings, ensures that for any $e \in \mathbb{Z}_\mu$,

$$d(E_{1,e}) = O(\mu^{-1/d}).$$

By induction we may find that for any $e \in \mathbb{Z}^n_\mu$,

$$d(E_{n,e}) = O(\mu^{-n/d}), \tag{4.12}$$

proving the result.



4.2.3 Multiscale partitions of a multidimensional simplex

For the purpose of solving integral equations on a polygonal domain in $\mathbb{R}^d$, we describe in this subsection multiscale partitions of a simplex in $\mathbb{R}^d$ for $d \ge 1$. For a vector $x \in \mathbb{R}^d$ we write $x = [x_j : x_j \in \mathbb{R}, j \in \mathbb{Z}_d]$. The unit simplex $S$ in $\mathbb{R}^d$ is the subset

$$S = \{x \in \mathbb{R}^d : 0 \le x_0 \le x_1 \le \cdots \le x_{d-1} \le 1\}.$$

This set is the invariant set relative to a family of $\mu^d$ contractive mappings. In order to describe these contractive mappings, for a positive integer $\mu$ we define a family of counting functions $\chi_j : \mathbb{Z}^d_\mu \to \mathbb{Z}_{d+1}$, $j \in \mathbb{Z}_\mu$, for $e = [e_j : j \in \mathbb{Z}_d] \in \mathbb{Z}^d_\mu$, by

$$\chi_j(e) = \sum_{i \in \mathbb{Z}_d} \delta_j(e_i), \tag{4.13}$$

where $\delta_j(k) = 1$ when $j = k$ and otherwise $\delta_j(k) = 0$. Note that the value of $\chi_j(e)$ is exactly the number of components of $e$ that equal $j$. Given $e \in \mathbb{Z}^d_\mu$, we identify a vector $c(e) = [c_j : j \in \mathbb{Z}_{\mu+1}] \in \mathbb{Z}^{\mu+1}_{d+1}$ by

$$c_0 = 0, \quad c_j = \sum_{i \in \mathbb{Z}_j} \chi_i(e), \quad j \in \mathbb{N}_\mu. \tag{4.14}$$

We remark that $c(e)$ is always nondecreasing, since each $\chi_j$ takes a non-negative value, and $c_\mu$ is always equal to $d$. For $e \in \mathbb{Z}^d_\mu$ and $j < k$ we define the index set $\Lambda^k_j = \{l : j \le l < k, \ e_l = e_k\}$. Then we define the permutation vector $I_e = [i_k : k \in \mathbb{Z}_d] \in \mathbb{Z}^d_d$ of $e$ by

$$i_k = c_{e_k} + \mathrm{card}(\Lambda^k_0), \tag{4.15}$$

where we assume $\mathrm{card}(\emptyset) = 0$. We have the following lemma about $I_e$.
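Definitions (4.13)–(4.15) translate directly into code. The following sketch is ours, not the book's; it computes $I_e$ and checks Lemma 4.9 (4) exhaustively for small $d$ and $\mu$:

```python
from itertools import product

def perm_vector(e, mu):
    """I_e = [i_k] with i_k = c_{e_k} + card({l < k : e_l = e_k}), per (4.13)-(4.15)."""
    d = len(e)
    chi = [sum(1 for ei in e if ei == j) for j in range(mu)]   # counting functions (4.13)
    c = [0]
    for j in range(mu):
        c.append(c[-1] + chi[j])                               # cumulative counts c(e), (4.14)
    return [c[e[k]] + sum(1 for l in range(k) if e[l] == e[k]) for k in range(d)]

# Lemma 4.9 (4): I_e is a permutation of v_d = [0, ..., d-1]
for d, mu in [(3, 2), (2, 3), (4, 2)]:
    for e in product(range(mu), repeat=d):
        assert sorted(perm_vector(e, mu)) == list(range(d))

print(perm_vector((1, 0, 1), 2))
```

For example, $e = (1, 0, 1)$ with $\mu = 2$ gives $I_e = [1, 0, 2]$: the single entry equal to 0 is placed first, and the two entries equal to 1 follow in index order.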

Lemma 4.9 For any $e \in \mathbb{Z}^d_\mu$, the permutation vector $I_e$ has the following properties:

(1) For $k \in \mathbb{Z}_d$, $c_m \le i_k < c_{m+1}$ if and only if $m = e_k$.
(2) For any $j, k \in \mathbb{Z}_d$, $i_j < i_k$ if and only if $e_j < e_k$, or $e_j = e_k$ with $j < k$.
(3) The equality $i_j = i_k$ holds if and only if $j = k$.
(4) The vector $I_e$ is a permutation of $v_d = [j : j \in \mathbb{Z}_d]$.

Proof According to the definition of $I_e$, we have for any $k \in \mathbb{Z}_d$ that

$$c_{e_k} \le i_k < c_{e_k} + \mathrm{card}(\{j \in \mathbb{Z}_d : e_j = e_k\}) = c_{e_k+1}. \tag{4.16}$$

This implies that if $m = e_k$ then $c_m \le i_k < c_{m+1}$. Conversely, if there is an $m$ such that $c_m \le i_k < c_{m+1}$, it is unique because the components of $c(e)$ are



nondecreasing. It follows from the uniqueness of $m$ and (4.16) that $m = e_k$. Thus property (1) is proved.

We now turn to proving property (2). If $e_j < e_k$, then $e_j + 1 \le e_k$, and hence $c_{e_j+1} \le c_{e_k}$, since the components of $c(e)$ form a nondecreasing sequence. By (4.16) we conclude that $i_j < c_{e_j+1} \le c_{e_k} \le i_k$. If $e_j = e_k$ with $j < k$, then $i_k - i_j = \mathrm{card}(\Lambda^k_j) \ge 1$; hence $i_j < i_k$. It remains to prove that if $i_j < i_k$, then $e_j < e_k$ or $e_j = e_k$ with $j < k$. Since in general for $j, k \in \mathbb{Z}_d$ one of the following cases holds: $e_j < e_k$; $e_j = e_k$ with $j < k$; $e_j = e_k$ with $j \ge k$; or $e_j > e_k$, it suffices to show that if $e_j > e_k$, or $e_j = e_k$ with $j \ge k$, then $i_j \ge i_k$. If $e_j > e_k$, by the argument given earlier in this paragraph we conclude that $i_j > i_k$. If $e_j = e_k$ with $j \ge k$, we have that $i_j - i_k = \mathrm{card}(\Lambda^j_k) \ge 0$, that is, $i_j \ge i_k$. Thus we complete the proof of property (2).

The above analysis also implies that the only possibility to have $i_j = i_k$ is $j = k$. This proves property (3).

Noticing that $e_k \in \mathbb{Z}_\mu$ for $k \in \mathbb{Z}_d$ and $0 \le c_{e_k} \le i_k < c_{e_k+1} \le d$, we conclude that $I_e$ is a permutation of $v_d$.

We also need conjugate permutations in order to define the contractive mappings. A permutation matrix has exactly one entry in each row and column equal to one and all other entries zero; hence a permutation matrix is an orthogonal matrix. For any permutation $I_e$ of $v_d$ there is a unique permutation matrix $P_e$ such that $I_e = P_e v_d$. We call the vector

$$I^*_e = [i^*_j : j \in \mathbb{Z}_d] = P^T_e v_d$$

the conjugate permutation of $I_e$. Thus $I^*_e$ itself is also a permutation of $v_d$. It follows from the definition above that for $l \in \mathbb{Z}_d$, $i^*_l = k$ if and only if $i_k = l$. We define the conjugate vector $e^* = [e^*_j : j \in \mathbb{Z}_d]$ of $e$ by setting $e^*_l = e_{i^*_l}$, $l \in \mathbb{Z}_d$. Utilizing the above notation, we define the mapping $G_e$ by

$$G_e(x) = \tilde{x} = \left[\tilde{x}_l = \frac{x_{i^*_l} + e^*_l}{\mu} : l \in \mathbb{Z}_d\right], \quad x \in S. \tag{4.17}$$

It is clear that the mappings $G_e$, $e \in \mathbb{Z}^d_\mu$, are affine and contractive.
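The mapping (4.17) is also easy to sketch in code (ours; the example vectors are arbitrary choices). The sketch below exhibits the contraction property with factor $1/\mu$, here in the sup-norm, and checks that $G_e$ maps $S$ into $S$, in accordance with Lemma 4.11 (2) below:

```python
from itertools import product

def perm_vector(e, mu):
    """I_e from (4.15)."""
    chi = [sum(1 for ei in e if ei == j) for j in range(mu)]
    c = [0]
    for j in range(mu):
        c.append(c[-1] + chi[j])
    return [c[e[k]] + sum(1 for l in range(k) if e[l] == e[k]) for k in range(len(e))]

def G(e, x, mu):
    """G_e(x)_l = (x_{i*_l} + e*_l) / mu, with i*_l = k iff i_k = l (4.17)."""
    d = len(e)
    I = perm_vector(e, mu)
    Istar = [0] * d
    for k, l in enumerate(I):
        Istar[l] = k                         # conjugate permutation
    estar = [e[Istar[l]] for l in range(d)]  # conjugate vector e*
    return [(x[Istar[l]] + estar[l]) / mu for l in range(d)]

mu, d = 2, 3
x1, x2 = [0.1, 0.4, 0.9], [0.2, 0.5, 0.7]    # two points of the unit simplex S
for e in product(range(mu), repeat=d):
    y1, y2 = G(e, x1, mu), G(e, x2, mu)
    assert y1 == sorted(y1) and 0 <= y1[0] and y1[-1] <= 1      # G_e(S) subset of S
    dist = max(abs(a - b) for a, b in zip(x1, x2))
    assert abs(max(abs(a - b) for a, b in zip(y1, y2)) - dist / mu) < 1e-12

print(G((1, 0, 1), x1, mu))
```

Since $G_e$ permutes coordinates, adds a constant and scales by $1/\mu$, the same contraction factor holds in every $p$-norm; see (4.27) at the end of this section.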

We next identify the set $G_e(S)$. To this end, associated with each $e \in \mathbb{Z}^d_\mu$ we define a set in $\mathbb{R}^d$ by

$$S_e = \left\{x \in \mathbb{R}^d : 0 \le x_{i_0} - \frac{e_0}{\mu} \le x_{i_1} - \frac{e_1}{\mu} \le \cdots \le x_{i_{d-1}} - \frac{e_{d-1}}{\mu} \le \frac{1}{\mu}\right\}, \tag{4.18}$$



where $i_k$, $k \in \mathbb{Z}_d$, are the components of the permutation vector $I_e$ of $e$. Since $I_e$ is a permutation of $v_d$, $S_e$ is a simplex in $\mathbb{R}^d$. In the next lemma we identify $G_e(S)$ with the simplex $S_e$.

Lemma 4.10 For all $e \in \mathbb{Z}^d_\mu$ there holds $G_e(S) = S_e$.

Proof For $k \in \mathbb{Z}_d$ we let $l = i_k$ and observe by definition that $i^*_l = k$, $e^*_l = e_k$. Thus $\tilde{x}_l = \frac{x_k + e_k}{\mu}$, or

$$x_k = \mu \tilde{x}_l - e_k = \mu \tilde{x}_{i_k} - e_k. \tag{4.19}$$

If $x \in S$, then $0 \le x_0 \le x_1 \le \cdots \le x_{d-1} \le 1$, which implies that

$$0 \le \mu \tilde{x}_{i_0} - e_0 \le \mu \tilde{x}_{i_1} - e_1 \le \cdots \le \mu \tilde{x}_{i_{d-1}} - e_{d-1} \le 1,$$

or

$$0 \le \tilde{x}_{i_0} - \frac{e_0}{\mu} \le \tilde{x}_{i_1} - \frac{e_1}{\mu} \le \cdots \le \tilde{x}_{i_{d-1}} - \frac{e_{d-1}}{\mu} \le \frac{1}{\mu},$$

so that $\tilde{x} \in S_e$. Moreover, given $\tilde{x} \in S_e$, we define $x = [x_k : k \in \mathbb{Z}_d]$ by equation (4.19). Then $x \in S$ and $\tilde{x} = G_e(x)$. Therefore $G_e(S) = S_e$.

In the following lemma we present properties of the simplices $S_e$, $e \in \mathbb{Z}^d_\mu$.

Lemma 4.11 The simplices $S_e$, $e \in \mathbb{Z}^d_\mu$, have the following properties:

(1) For any $x \in S_e$ there holds

$$\frac{k}{\mu} \le x_{c_k} \le x_{c_k+1} \le \cdots \le x_{c_{k+1}-1} \le \frac{k+1}{\mu}, \quad k \in \mathbb{Z}_\mu. \tag{4.20}$$

(2) For any $e \in \mathbb{Z}^d_\mu$, $S_e \subset S$.
(3) If $e^1, e^2 \in \mathbb{Z}^d_\mu$ with $e^1 \neq e^2$, then $\mathrm{int}(S_{e^1}) \cap \mathrm{int}(S_{e^2}) = \emptyset$.
(4) For any $e \in \mathbb{Z}^d_\mu$, $\mathrm{meas}(S_e) = 1/(\mu^d\, d!)$, where $\mathrm{meas}(\Lambda)$ denotes the Lebesgue measure of a set $\Lambda$.

Proof In order to prove (4.20) it suffices to show

$$0 \le x_{c_k} - \frac{k}{\mu} \le x_{c_k+1} - \frac{k}{\mu} \le \cdots \le x_{c_{k+1}-1} - \frac{k}{\mu} \le \frac{1}{\mu}, \quad k \in \mathbb{Z}_\mu, \tag{4.21}$$

or, equivalently,

$$0 \le x_p - \frac{k}{\mu} \le x_q - \frac{k}{\mu} \le \frac{1}{\mu}$$

for any $c_k \le p < q < c_{k+1}$. In fact, since $I_e$ is a permutation of $v_d$, for any integers $c_k \le p < q < c_{k+1}$ there exists a unique pair $p', q' \in \mathbb{Z}_d$ such that



$i_{p'} = p$, $i_{q'} = q$. It follows from Lemma 4.9 that $e_{p'} = e_{q'} = k$ and $p' < q'$. Thus (4.18) states that

$$0 \le x_p - \frac{k}{\mu} = x_{i_{p'}} - \frac{e_{p'}}{\mu} \le x_{i_{q'}} - \frac{e_{q'}}{\mu} = x_q - \frac{k}{\mu} \le \frac{1}{\mu},$$

which concludes property (1).

Property (2) is a direct consequence of (1) and the definition of $S$.

For the proof of (3), we first notice that

$$\mathrm{int}(S_e) = \left\{x \in \mathbb{R}^d : 0 < x_{i_0} - \frac{e_0}{\mu} < x_{i_1} - \frac{e_1}{\mu} < \cdots < x_{i_{d-1}} - \frac{e_{d-1}}{\mu} < \frac{1}{\mu}\right\}. \tag{4.22}$$

Moreover, by a proof similar to that for (4.20), we utilize (4.22) to conclude for any $x \in \mathrm{int}(S_e)$ that

$$\frac{k}{\mu} < x_{c_k} < x_{c_k+1} < \cdots < x_{c_{k+1}-1} < \frac{k+1}{\mu}, \quad k \in \mathbb{Z}_\mu. \tag{4.23}$$

For $j = 1, 2$ we let $e^j = [e^j_k : k \in \mathbb{Z}_d]$, $I_{e^j} = [i^j_k : k \in \mathbb{Z}_d]$ and $c(e^j) = [c^j_k : k \in \mathbb{Z}_{\mu+1}]$.

Assume to the contrary that $\mathrm{int}(S_{e^1}) \cap \mathrm{int}(S_{e^2})$ is not empty. We consider two cases. In case 1, $c(e^1) \neq c(e^2)$: we let $k$ be the smallest integer such that $c^1_k \neq c^2_k$ and assume $c^1_k < c^2_k$ without loss of generality. For any $x \in \mathrm{int}(S_{e^1}) \cap \mathrm{int}(S_{e^2})$, by (4.23) we have $x_{c^1_k} > k/\mu$ and $x_{c^2_k - 1} < k/\mu$. Moreover, because $x \in S$, we have that $x_{c^1_k} \le x_{c^2_k - 1}$, a contradiction. In case 2, $c(e^1) = c(e^2)$: since $e^1 \neq e^2$, we let $k$ be the smallest integer such that $e^1_k \neq e^2_k$. Hence $e^1_j = e^2_j$ for $j < k$, and we assume that $e^1_k < e^2_k$ without loss of generality. Thus we have that $i^1_k < c^1_{e^1_k+1} \le c^2_{e^2_k} \le i^2_k$. There exists a unique $p \in \mathbb{Z}_d$ such that $i^1_p = i^2_k$, since $I_{e^1}$ is a permutation, and $p \ge k$ because $i^1_j = i^2_j \neq i^2_k$ for all $j < k$. Furthermore, it follows from Lemma 4.9, $c(e^1) = c(e^2)$ and $i^1_p = i^2_k$ that $e^1_p = e^2_k \neq e^1_k$, which implies $p \neq k$. Therefore, for any $x \in \mathrm{int}(S_{e^1})$, there holds

$$x_{i^1_k} - \frac{e^1_k}{\mu} < x_{i^1_p} - \frac{e^1_p}{\mu} = x_{i^2_k} - \frac{e^2_k}{\mu}.$$

However, there is a unique $q \in \mathbb{Z}_d$ such that $q > k$, $i^2_q = i^1_k$, and for any $x \in \mathrm{int}(S_{e^2})$,

$$x_{i^2_k} - \frac{e^2_k}{\mu} < x_{i^2_q} - \frac{e^2_q}{\mu} = x_{i^1_k} - \frac{e^1_k}{\mu},$$

again a contradiction. This completes the proof of property (3).



For property (4), we find by direct computation that $\mathrm{meas}(S'_e) = 1/(\mu^d\, d!)$, where

$$S'_e = \left\{x \in \mathbb{R}^d : 0 \le x_{i_0} \le x_{i_1} \le \cdots \le x_{i_{d-1}} \le \frac{1}{\mu}\right\}.$$

Notice that $S_e$ is the translation of the simplex $S'_e$ by the vector $e/\mu$. Since the Lebesgue measure of a set is invariant under translation, we conclude property (4).

Theorem 4.12 The family $S(\mathbb{Z}^d_\mu) = \{S_e : e \in \mathbb{Z}^d_\mu\}$ is an equivolume partition of the unit simplex $S$.

Proof By Lemma 4.11 we see that for any $e \in \mathbb{Z}^d_\mu$, $S_e \subset S$, and for $e^1, e^2 \in \mathbb{Z}^d_\mu$ with $e^1 \neq e^2$, $\mathrm{int}(S_{e^1}) \cap \mathrm{int}(S_{e^2}) = \emptyset$ and $\mathrm{meas}(S_{e^1}) = \mathrm{meas}(S_{e^2})$. It remains to prove that $S \subseteq \bigcup_{e \in \mathbb{Z}^d_\mu} S_e$.

To this end, for each $x \in S$ we find $e \in \mathbb{Z}^d_\mu$ such that $x \in S_e$. Note that for each $x \in S$ we have that $0 \le x_0 \le x_1 \le \cdots \le x_{d-1} \le 1$. For each $k \in \mathbb{Z}_\mu$ we denote by $c_k$ the subscript of the smallest component $x_j$ greater than or equal to $k/\mu$. We order the elements in the set $\{x_j : j \in \mathbb{Z}_d\} \cup \{k/\mu : k \in \mathbb{Z}_{\mu+1}\}$ in increasing order. We then obtain that

$$0 \le x_0 \le \cdots \le x_{c_1-1} < \frac{1}{\mu} \le x_{c_1} \le \cdots \le x_{c_{\mu-1}-1} < \frac{\mu-1}{\mu} \le x_{c_{\mu-1}} \le \cdots \le x_{c_\mu - 1} = x_{d-1} \le 1.$$

In other words, we have that

$$0 \le x_{c_k} - \frac{k}{\mu} \le x_{c_k+1} - \frac{k}{\mu} \le \cdots \le x_{c_{k+1}-1} - \frac{k}{\mu} \le \frac{1}{\mu}, \quad k \in \mathbb{Z}_\mu. \tag{4.24}$$

Let $p_j = \max\{k : c_k \le j\}$. It follows from (4.24) that the set $\{x_j - p_j/\mu : j \in \mathbb{Z}_d\} \subset [0, 1/\mu]$. We sort the elements of this set into

$$0 \le x_{i_0} - \frac{p_{i_0}}{\mu} \le x_{i_1} - \frac{p_{i_1}}{\mu} \le \cdots \le x_{i_{d-1}} - \frac{p_{i_{d-1}}}{\mu} \le \frac{1}{\mu}. \tag{4.25}$$

Notice that the vector $I = [i_k : k \in \mathbb{Z}_d]$ is a permutation of $v_d$. Let $e = [e_k : k \in \mathbb{Z}_d]$ be the vector such that $e_j = p_{i_j}$. It is easy to verify that $i_j = c_{e_j} + \mathrm{card}(\Lambda^j_0)$. Hence $I = I_e$, which together with (4.25) shows $x \in S_e$.

The expression of the inverse mapping Gminus1e has been given by equa-

tion (419) which is written formally as

x = Gminus1e (x) = [xk = μxik minus ek k isin Zd] x isin Se (426)
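The constructive proof above doubles as an algorithm for locating, for a given point $x \in S$, a cell $S_e$ containing it. The following Python sketch is our own illustration of that procedure (the parameter choices $\mu = 3$, $d = 4$ and all function names are ours, not the book's):

```python
import random

mu, d = 3, 4  # illustrative subdivision parameter and dimension

def find_cell(x):
    """Given x with 0 <= x_0 <= ... <= x_{d-1} <= 1, return (e, i) with x in S_e,
    following the constructive proof of Theorem 4.12."""
    # p_j = max{k : c_k <= j}, i.e. the mu-adic level of the component x_j
    p = [min(int(mu * xj), mu - 1) for xj in x]
    # sort the fractional parts x_j - p_j/mu into increasing order, as in (4.25)
    i = sorted(range(d), key=lambda j: (x[j] - p[j] / mu, j))
    e = [p[ij] for ij in i]
    return e, i

def G_inv(x, e, i):
    # the inverse mapping (4.26): it carries S_e back onto the unit simplex S
    return [mu * x[i[k]] - e[k] for k in range(d)]

random.seed(0)
for _ in range(1000):
    x = sorted(random.random() for _ in range(d))
    e, i = find_cell(x)
    y = G_inv(x, e, i)
    # y must again lie in the unit simplex S
    assert all(y[k] <= y[k + 1] + 1e-12 for k in range(d - 1))
    assert -1e-12 <= y[0] and y[-1] <= 1 + 1e-12
print("every sampled point was assigned to a cell S_e")
```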


For any $e \in \mathbb{Z}^d_\mu$ and $x', x'' \in \mathbb{R}^d$, the identities
$$\|G_e(x') - G_e(x'')\|_p = \frac{1}{\mu}\|x' - x''\|_p \qquad (4.27)$$
and
$$\|G^{-1}_e(x') - G^{-1}_e(x'')\|_p = \mu\|x' - x''\|_p \qquad (4.28)$$
hold, where $\|\cdot\|_p$ is the standard $p$-norm on $\mathbb{R}^d$ for $1 \le p \le \infty$.

Proposition 4.13 The family $\mathcal{S}(\mathbb{Z}^d_\mu)$ is a uniform partition of the unit simplex $S$, in the sense that all elements of $\mathcal{S}(\mathbb{Z}^d_\mu)$ have an identical diameter.

Proof We let $\Delta := \max_{x', x'' \in S} \|x' - x''\|_p$. It suffices to prove, for any $e \in \mathbb{Z}^d_\mu$, that
$$\max_{x'_e, x''_e \in S_e} \|x'_e - x''_e\|_p = \frac{\Delta}{\mu}.$$
It follows from formula (4.28) that for any $x'_e, x''_e \in S_e$,
$$\mu \|x'_e - x''_e\|_p = \|G^{-1}_e(x'_e) - G^{-1}_e(x''_e)\|_p \le \Delta.$$
Moreover, suppose that $x', x'' \in S$ are such that $\|x' - x''\|_p = \Delta$, and let $x'_e := G_e(x')$ and $x''_e := G_e(x'')$. By (4.27) we have that
$$\|x'_e - x''_e\|_p = \frac{1}{\mu}\|x' - x''\|_p = \frac{\Delta}{\mu},$$
which completes the proof.

When a partition of the unit simplex has been established, it is not difficult to obtain a corresponding partition of a general simplex in $\mathbb{R}^d$. For a nondegenerate simplex $S'$ in $\mathbb{R}^d$, in the sense that $\operatorname{Vol}(S') \ne 0$, there exists an affine mapping $F: \mathbb{R}^d \to \mathbb{R}^d$ such that $F(S') = S$. It can be shown that for $1 \le p \le \infty$ there are two positive constants $c_1$ and $c_2$ such that
$$c_1 \|x' - x''\|_p \le \|F(x') - F(x'')\|_p \le c_2 \|x' - x''\|_p \qquad (4.29)$$
for any $x', x'' \in S'$. For any $e \in \mathbb{Z}^d_\mu$ we define $G'_e := F^{-1} \circ G_e \circ F$. Thus the family of simplices $\{G'_e(S') : e \in \mathbb{Z}^d_\mu\}$ is a partition of $S'$. Furthermore, for any $x', x'' \in \mathbb{R}^d$ and $e \in \mathbb{Z}^d_\mu$,
$$\frac{c_1}{c_2 \mu}\|x' - x''\|_p \le \|G'_e(x') - G'_e(x'')\|_p \le \frac{c_2}{c_1 \mu}\|x' - x''\|_p$$
holds. For $E = [e_j : j \in \mathbb{Z}_m] \in (\mathbb{Z}^d_\mu)^m$ we define the composite mappings
$$G_E := G_{e_0} \circ \cdots \circ G_{e_{m-1}} \quad \text{and} \quad G'_E := G'_{e_0} \circ \cdots \circ G'_{e_{m-1}},$$


and observe that $G'_E = F^{-1} \circ G_E \circ F$. In the next theorem we show that the partition $\{G'_e(S') : e \in \mathbb{Z}^d_\mu\}$ of $S'$ is uniform. To this end, we let $S_E := G_E(S)$ and $S'_E := G'_E(S')$. Also, we use $\operatorname{diam}_p$ to denote the diameter of a domain in $\mathbb{R}^d$ with respect to the $p$-norm.

Theorem 4.14 For any $x', x'' \in \mathbb{R}^d$ and $E \in (\mathbb{Z}^d_\mu)^m$, there hold
$$\frac{c_1}{c_2}\left(\frac{1}{\mu}\right)^m \|x' - x''\|_p \le \|G'_E(x') - G'_E(x'')\|_p \le \frac{c_2}{c_1}\left(\frac{1}{\mu}\right)^m \|x' - x''\|_p,$$
$$\operatorname{diam}_p(S_E) = \left(\frac{1}{\mu}\right)^m \operatorname{diam}_p(S)$$
and
$$\frac{c_1}{c_2}\left(\frac{1}{\mu}\right)^m \operatorname{diam}_p(S') \le \operatorname{diam}_p(S'_E) \le \frac{c_2}{c_1}\left(\frac{1}{\mu}\right)^m \operatorname{diam}_p(S').$$

4.3 Multiscale orthogonal bases

In this section we describe the recursive construction of multiscale orthogonal bases for the space $L^2(\Omega)$ on the invariant set $\Omega$.

4.3.1 Piecewise polynomial spaces

On the $n$th multiscale partition of $\Omega$ we consider piecewise polynomials in a Banach space $X$ with norm $\|\cdot\|$. Choose a positive integer $k$ and let $X_n$ be the space of all functions whose restriction to any cell $\phi_e(\Omega)$, $e \in \mathbb{Z}^n_\mu$, is a polynomial of total degree $\le k-1$. Here we use the convention that for $n = 0$ the set $\Omega$ is the only cell in the partition, so that
$$m := \dim X_0 = \binom{k+d-1}{d}.$$
It is easily seen that
$$x(n) := \dim X_n = m\mu^n, \quad n \in \mathbb{N}_0.$$
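The two dimension counts are easy to check directly; the snippet below is a small illustration of our own (the function names are ours):

```python
from math import comb

def dim_X0(k, d):
    # m = dim X_0 = C(k+d-1, d): polynomials of total degree <= k-1 in d variables
    return comb(k + d - 1, d)

def dim_Xn(n, k, d, mu):
    # x(n) = m * mu^n: one polynomial patch of dimension m per cell of the partition
    return dim_X0(k, d) * mu**n

print(dim_X0(2, 2))        # 3: the linear polynomials on a triangle
print(dim_Xn(3, 2, 2, 4))  # 192 = 3 * 4^3 for mu = 4, n = 3
```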

To generate the spaces $X_n$ by induction from $X_0$, we introduce linear operators $T_e: X \to X$, $e \in \mathbb{Z}_\mu$, defined by
$$(T_e v)(t) := c_0\, v(\phi_e^{-1}(t))\, \chi_{\phi_e(\Omega)}(t), \qquad (4.30)$$
where $\chi_A$ denotes the characteristic function of a set $A$ and $c_0$ is the positive constant such that $\|T_e\| = 1$. Thus we have that


$$X_n = \bigoplus_{e \in \mathbb{Z}_\mu} T_e X_{n-1}, \quad n \in \mathbb{N}, \qquad (4.31)$$
where $A \oplus B$ denotes the direct sum of the spaces $A$ and $B$. It is easily seen that the sequence of spaces has the property of nestedness, that is,
$$X_{n-1} \subset X_n, \quad n \in \mathbb{N}. \qquad (4.32)$$
Assume that there exists a basis of elements in $X_0$, denoted by $\psi_0, \psi_1, \ldots, \psi_{m-1}$, such that
$$X_0 = \operatorname{span}\{\psi_j : j \in \mathbb{Z}_m\}.$$
It is clear that
$$X_n = \operatorname{span}\{T_e \psi_j : j \in \mathbb{Z}_m,\ e \in \mathbb{Z}^n_\mu\}. \qquad (4.33)$$

4.3.2 A recursive construction

Noting that the subspace sequence $\{X_n : n \in \mathbb{N}_0\}$ is nested, we can define for each $n \in \mathbb{N}_0$ subspaces $W_{n+1} \subset X_{n+1}$ such that
$$X_{n+1} = X_n \oplus^\perp W_{n+1}, \quad n \in \mathbb{N}_0. \qquad (4.34)$$
Thus, setting $W_0 := X_0$, we have the multiscale space decomposition: for any $n \in \mathbb{N}$,
$$X_n = \bigoplus^\perp_{i \in \mathbb{Z}_{n+1}} W_i \qquad (4.35)$$
and
$$L^2(\Omega) = \bigoplus^\perp_{i \in \mathbb{N}_0} W_i. \qquad (4.36)$$
It can be computed that the dimension of $W_n$ is given by
$$w(n) := \dim W_n = x(n) - x(n-1) = m(\mu - 1)\mu^{n-1}, \quad n \in \mathbb{N}. \qquad (4.37)$$
Now the family of operators $T_e: L^2(\Omega) \to L^2(\Omega)$, $e \in \mathbb{Z}_\mu$, is defined by
$$T_e v := |J_{\phi_e}|^{-1/2}\, (v \circ \phi_e^{-1})\, \chi_{\phi_e(\Omega)}.$$
In the next proposition we see that the operators $T_e$, $e \in \mathbb{Z}_\mu$, are isometries from $L^2(\Omega)$ to $L^2(\Omega)$.

Proposition 4.15 If $e, e' \in \mathbb{Z}_\mu$, then for all $u, v \in L^2(\Omega)$,
$$(T_e u, T_{e'} v) = \delta_{e,e'}(u, v).$$


Proof When $e \ne e'$, the intersection of the support of $T_e u$ and that of $T_{e'} v$ has measure zero. Hence, in this case, we have that
$$(T_e u, T_{e'} v) = 0.$$
Now we consider the case $e = e'$. By the definition of the operator $T_e$, we have that
$$(T_e u, T_e v) = |J_{\phi_e}|^{-1} \int_{\phi_e(\Omega)} (u \circ \phi_e^{-1})(t)\, (v \circ \phi_e^{-1})(t)\, dt.$$
Using a change of variable, we conclude that
$$(T_e u, T_e v) = \int_\Omega u(t)\, v(t)\, dt = (u, v),$$
which completes the proof.
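Proposition 4.15 can also be tested numerically. The sketch below is our own illustration using the interval maps $\psi_\varepsilon$ of (4.39) with $\mu = 2$, for which $|J_{\phi_e}| = 1/\mu$; the inner products on $L^2[0,1]$ are approximated by a midpoint rule:

```python
import numpy as np

mu = 2
N = 200000
t = (np.arange(N) + 0.5) / N  # midpoint grid on [0, 1]

def T(eps, v):
    # T_e v = |J|^{-1/2} (v o phi_e^{-1}) chi_{phi_e([0,1])}, phi_e(t) = (t + eps)/mu
    supp = (t >= eps / mu) & (t < (eps + 1) / mu)
    return np.where(supp, np.sqrt(mu) * v(mu * t - eps), 0.0)

def ip(f, g):  # discrete inner product approximating (f, g)
    return np.sum(f * g) / N

u = lambda s: np.sin(np.pi * s)
v = lambda s: s**2

lhs_same = ip(T(0, u), T(0, v))   # should approximate (u, v)
lhs_diff = ip(T(0, u), T(1, v))   # disjoint supports: exactly 0
rhs = ip(u(t), v(t))
print(abs(lhs_same - rhs) < 1e-6, lhs_diff == 0.0)  # True True
```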

The following proposition shows that the spaces $W_n$ can be generated recursively from $W_1$.

Proposition 4.16 It holds that
$$W_{n+1} = \bigoplus^\perp_{e \in \mathbb{Z}_\mu} T_e W_n, \quad n \in \mathbb{N}.$$

Proof We remark first that, by Proposition 4.15, the direct sum above satisfies the orthogonality property. It follows from (4.34) that
$$W_n \subset X_n \quad \text{and} \quad W_n \perp X_{n-1}.$$
Thus, by (4.31) and Proposition 4.15, we conclude that
$$T_e W_n \subset X_{n+1} \quad \text{and} \quad T_e W_n \perp X_n.$$
These relations with (4.34) ensure that for any $e \in \mathbb{Z}_\mu$,
$$T_e W_n \subset W_{n+1}.$$
Since from (4.37) we have that
$$\dim \bigoplus^\perp_{e \in \mathbb{Z}_\mu} T_e W_n = \mu \dim W_n = m(\mu - 1)\mu^n = \dim W_{n+1},$$
the result of this proposition holds.

It can easily be seen from the above proposition and its proof that the following proposition holds.

Proposition 4.17 If $\mathbb{W}_1$ is a basis of the space $W_1$, then for $n \in \mathbb{N}$ the recursively generated set
$$\mathbb{W}_{n+1} = \bigcup^\perp_{e \in \mathbb{Z}_\mu} T_e \mathbb{W}_n = \bigcup^\perp_{e \in \mathbb{Z}^n_\mu} T_e \mathbb{W}_1$$
is a basis of the space $W_{n+1}$.

Now it is clear that, as soon as we choose an orthogonal basis of $W_0$, denoted by $\{w_{0j} : j \in \mathbb{Z}_m\}$, and obtain an orthogonal basis of $W_1$, denoted by $\{w_{1j} : j \in \mathbb{Z}_r\}$ where $r := w(1)$, we can generate recursively an orthogonal basis $\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}$ of the space $W_i$ by
$$w_{ij} := T_e w_{1l}, \quad j = \mu(e)r + l, \quad e \in \mathbb{Z}^{i-1}_\mu,\ l \in \mathbb{Z}_r,\ i - 1 \in \mathbb{N}. \qquad (4.38)$$
The functions $w_{ij}$, $(i,j) \in U$, are wavelet-like functions, which are also called orthogonal wavelets.
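In the simplest instance $\Omega = [0,1]$, $\mu = 2$ and $k = 1$ (piecewise constants, so $m = 1$ and $r = w(1) = 1$), the recursion (4.38) reproduces the classical Haar system. A small numerical sketch of this case (our own illustration, not the book's):

```python
import numpy as np

N = 1024
t = (np.arange(N) + 0.5) / N  # midpoint grid on [0, 1]

def T(eps, f):
    # the L2 isometry T_e from the text, for phi_eps(t) = (t + eps)/2
    supp = (t >= eps / 2) & (t < (eps + 1) / 2)
    return np.where(supp, np.sqrt(2.0) * f(2 * t - eps), 0.0)

haar = lambda s: np.where(s < 0.5, 1.0, -1.0)
w00 = np.ones(N)       # basis of W_0 = X_0
w10 = haar(t)          # basis of W_1 (the Haar wavelet)
w20 = T(0, haar)       # level-2 wavelets generated via (4.38)
w21 = T(1, haar)

basis = [w00, w10, w20, w21]
G = np.array([[np.sum(f * g) / N for g in basis] for f in basis])
print(np.allclose(G, np.eye(4)))  # True: an orthonormal multiscale basis
```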

4.4 Refinable sets and set wavelets

In Section 4.3 we showed how to construct multiscale orthonormal bases on invariant sets. These wavelet-like functions are discontinuous; nonetheless, they have important applications to the numerical solution of integral equations. In the next section we explore similar recursive structures for multiscale function representation and approximation by focusing on the analogous situation for interpolation on an invariant set. Thus, in this section we seek a mechanism to generate sequences of points which have a multiscale structure that can then be used to efficiently generate interpolating functions and multiscale functionals.

We first develop a notion of refinable sets, give a complete characterization of refinable sets in a general metric space setting and illustrate the general characterization with several examples of practical importance. Next we show how refinable sets lead to a multiresolution structure relative to set inclusion which is analogous to the multiresolution analysis associated with refinable functions. This set-theoretic multiresolution structure leads us to what we call set wavelets, which are generated by successive application of the contractive mappings to an initial set wavelet. These will lead us, in particular in Section 4.5, to the construction of interpolation that has the desired multiscale structure and, in Chapter 7, to the construction of multiscale functionals for developing fast multiscale collocation methods for solving integral equations.


4.4.1 Refinable sets

This subsection is devoted to a study of refinable sets relative to a family of contractive mappings. A complete characterization of refinable sets will be presented, illustrated by several examples of practical importance.

Assume that $X$ is a complete metric space, $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ is a family of contractive mappings on $X$ and $\Omega$ is the invariant set relative to the family $\Phi$ of mappings.

For a subset $V \subset X$, we let
$$\Phi(V) := \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \phi_\varepsilon(V).$$

Definition 4.18 A subset $V$ of $X$ is said to be refinable relative to the mappings $\Phi$ if $V \subseteq \Phi(V)$.

For every $k \in \mathbb{N}$ and $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$, we define the contractive composition mapping $\phi_{e_k} := \phi_{\varepsilon_0} \circ \phi_{\varepsilon_1} \circ \cdots \circ \phi_{\varepsilon_{k-1}}$ and let $\Phi_k := \{\phi_{e_k} : e_k \in \mathbb{Z}^k_\mu\}$; in particular, $\Phi_1 = \Phi$. Observe that the union of any collection of refinable sets is likewise refinable. Moreover, if $V$ is a refinable subset of $X$, then $\Phi_k(V) = \bigcup_{e_k \in \mathbb{Z}^k_\mu} \phi_{e_k}(V)$ is also a refinable subset of $X$ for all $k \in \mathbb{N}$. One of our main objectives is to identify refinable sets of finite cardinality.

Before we present a characterization of these sets, we look at some examples on the real line which will be helpful to illuminate the general result. For the metric space $\mathbb{R}$ and an integer $\mu > 1$, we consider the mappings
$$\psi_\varepsilon(t) := \frac{t + \varepsilon}{\mu}, \quad t \in \mathbb{R},\ \varepsilon \in \mathbb{Z}_\mu. \qquad (4.39)$$
The invariant set for this family of mappings is the unit interval $[0,1]$. Our first example of a refinable set relative to the family of mappings $\Psi := \{\psi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ given in (4.39) comes next.

Proposition 4.19 The set $U_0 := \{j/k : j \in \mathbb{Z}_{k+1}\}$ is refinable relative to the mappings $\Psi$.

Proof It is sufficient to consider $j > 0$, since $0 = \psi_0(0)$. For every $j \in \mathbb{Z}_{k+1}$ we write the integer $\mu j$ uniquely in the form $\mu j = k\varepsilon + \ell$, where $\ell - 1 \in \mathbb{Z}_k$ and $\varepsilon \in \mathbb{N}_0$. Since $\mu j \le \mu k$, we conclude that $\varepsilon \in \mathbb{Z}_\mu$. Moreover, we have that $j/k = \psi_\varepsilon(\ell/k)$, and so $U_0$ is refinable.
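Refinability here is a finite, exactly checkable condition: $U_0 \subseteq \Psi(U_0)$. A rational-arithmetic sketch of this check (our own, with illustrative $k = 5$, $\mu = 3$):

```python
from fractions import Fraction

def psi_image(U, mu):
    # Psi(U) = { (u + eps)/mu : u in U, eps in Z_mu }
    return {(u + eps) / mu for u in U for eps in range(mu)}

def is_refinable(U, mu):
    return set(U) <= psi_image(U, mu)

k, mu = 5, 3
U0 = {Fraction(j, k) for j in range(k + 1)}  # { j/k : j in Z_{k+1} }
print(is_refinable(U0, mu))  # True, as Proposition 4.19 asserts
```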

In some applications the exclusion of the endpoints $0$ and $1$ from a refinable set is desirable. As an example of this case, we present the following fact.


Proposition 4.20 The set $U_0 := \{j/(k+1) : j - 1 \in \mathbb{Z}_k\}$ is refinable relative to the mappings $\Psi$ if and only if $\mu$ and $k+1$ are relatively prime.

Proof Suppose that $\mu$ and $k+1$ have a common factor $m > 1$, that is, $\mu = m\ell_1$ and $k+1 = m\ell_2$ for some integers $\ell_1$ and $\ell_2$, and that $U_0$ is refinable relative to the mappings $\Psi$. Then we have that $\ell_2 - 1 \in \mathbb{Z}_k$ and $\ell_1 \in \mathbb{Z}_\mu$. Moreover, we have that $\psi_{\ell_1}(0) = \ell_1/\mu = \ell_2/(k+1)$. This equation implies that $\psi_{\ell_1}(0) \in U_0$. Since $U_0$ is refinable, there exist $\varepsilon_0 \in \mathbb{Z}_\mu$ and $u \in U_0$ such that $\psi_{\ell_1}(0) = \psi_{\varepsilon_0}(u)$. It follows from the equation above that $\ell_1 = u + \varepsilon_0$. Thus either $\ell_1 = \varepsilon_0$ and $u = 0$, or $\varepsilon_0 + 1 = \ell_1$ and $u = 1$. In either case we conclude that $0$ or $1 \in U_0$. But this is a contradiction, since $U_0$ contains neither $0$ nor $1$. Hence the integers $\mu$ and $k+1$ must be relatively prime.

Conversely, suppose $\mu$ and $k+1$ are relatively prime. For every $j - 1 \in \mathbb{Z}_k$ there exist integers $\varepsilon$ and $\ell$ such that $j\mu = (k+1)\varepsilon + \ell$, where $\ell - 1 \in \mathbb{Z}_{k+1}$. Since $j\mu \le (k+1)\mu$, it follows that $\varepsilon \in \mathbb{Z}_\mu$. Moreover, because $\mu$ and $k+1$ are relatively prime, it must also be the case that $\ell - 1 \in \mathbb{Z}_k$. Furthermore, since $j/(k+1) = \psi_\varepsilon(\ell/(k+1))$, we conclude that $U_0$ is refinable.
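The coprimality criterion can likewise be confirmed exhaustively for small parameters (our sketch; the helper name is ours):

```python
from fractions import Fraction
from math import gcd

def interior_refinable(k, mu):
    # the interior points { j/(k+1) : 1 <= j <= k } of Proposition 4.20
    U = {Fraction(j, k + 1) for j in range(1, k + 1)}
    image = {(u + eps) / mu for u in U for eps in range(mu)}
    return U <= image

for mu in (2, 3, 4):
    for k in range(1, 7):
        assert interior_refinable(k, mu) == (gcd(mu, k + 1) == 1)
print("Proposition 4.20 confirmed for mu <= 4, k <= 6")
```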

Our third special construction of refinable sets $U_0$ in $[0,1]$ relative to the mappings $\Psi$ is formed from cyclic $\mu$-adic expansions. To describe this construction we introduce two additional mappings. The first mapping $\pi: \mathbb{Z}^\infty_\mu \to [0,1]$ is defined by
$$\pi(e) := \sum_{j \in \mathbb{N}} \frac{\varepsilon_{j-1}}{\mu^j}, \qquad e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu,$$
and we also write it as $\pi(e) = .\varepsilon_0\varepsilon_1\varepsilon_2\cdots$. This mapping takes an infinite vector $e \in \mathbb{Z}^\infty_\mu$ and associates with it a number in $[0,1]$ whose $\mu$-adic expansion is read off from the components of $e$. The mapping $\pi$ is not invertible. Referring back to the definition (4.39), we conclude for any $\varepsilon \in \mathbb{Z}_\mu$ and $e \in \mathbb{Z}^\infty_\mu$ that $\psi_\varepsilon(\pi(e)) = .\varepsilon\varepsilon_0\varepsilon_1\cdots$. We also make use of the "shift" map $\sigma: \mathbb{Z}^\infty_\mu \to \mathbb{Z}^\infty_\mu$. Specifically, for $e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu$ we set $\sigma(e) := [\varepsilon_j : j \in \mathbb{N}] \in \mathbb{Z}^\infty_\mu$. Thus the mapping $\sigma$ discards the first component of $e$, while the mapping $\psi_\varepsilon$ restores the corresponding digit; that is,
$$\psi_{\varepsilon_0}(\pi \circ \sigma(e)) = \pi(e). \qquad (4.40)$$
For any $k \in \mathbb{N}$ and $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$, we let $.\overline{\varepsilon_0\varepsilon_1\cdots\varepsilon_{k-1}}$ denote the number $\pi(e)$, where $e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu$ and $\varepsilon_{i+k} = \varepsilon_i$, $i \in \mathbb{N}_0$. Note that for such an infinite vector $e$ we have that $\sigma^k(e) = e$, where $\sigma^k = \sigma \circ \cdots \circ \sigma$ is the $k$-fold composition of $\sigma$, and also that the number $.\overline{\varepsilon_0\varepsilon_1\cdots\varepsilon_{k-1}}$ is the unique fixed point of the mapping $\psi_{e_k}$.


Proposition 4.21 Choose $k \in \mathbb{N}$ and $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$ such that at least two components of $e_k$ are different. Let $e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu$ with $\varepsilon_{i+k} = \varepsilon_i$, $i \in \mathbb{N}_0$. Then the set $U_0(\pi(e)) := \{\pi \circ \sigma^\ell(e) : \ell \in \mathbb{Z}_k\}$ is refinable relative to the mappings $\Psi$ and has cardinality $\le k$. Moreover, if $k$ is the smallest positive integer such that $\varepsilon_{i+k} = \varepsilon_i$, $i \in \mathbb{N}_0$, then $U_0(\pi(e))$ has cardinality $k$.

Proof If $.\overline{\varepsilon_0\varepsilon_1\cdots\varepsilon_{k-1}} = .\overline{\varepsilon'_0\varepsilon'_1\cdots\varepsilon'_{k-1}}$ for $\varepsilon_i, \varepsilon'_i \in \mathbb{Z}_\mu$, $i \in \mathbb{Z}_k$, then $\varepsilon_i = \varepsilon'_i$, $i \in \mathbb{Z}_k$. Hence it follows that all the elements of $U_0(\pi(e))$ are distinct. Also, by using (4.40), for any $\ell \in \mathbb{Z}_k$ we have that $\pi \circ \sigma^\ell(e) = \psi_{\varepsilon_\ell}(\pi \circ \sigma^{\ell+1}(e))$. Note that trivially $\pi \circ \sigma^{\ell+1}(e) \in U_0(\pi(e))$ for $\ell \in \mathbb{Z}_{k-1}$, and $\pi \circ \sigma^k(e) = \pi(e) \in U_0(\pi(e))$. Thus $U_0(\pi(e))$ is indeed refinable.

Various useful examples can be generated from this proposition. We mention the following possibilities for $\mu = 2$:
$$U_0(.\overline{01}) = \left\{\frac{1}{3}, \frac{2}{3}\right\}, \qquad U_0(.\overline{001}) = \left\{\frac{1}{7}, \frac{2}{7}, \frac{4}{7}\right\}, \qquad U_0(.\overline{0011}) = \left\{\frac{1}{5}, \frac{2}{5}, \frac{3}{5}, \frac{4}{5}\right\}.$$
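These sets are simply the shift orbits of periodic $\mu$-adic expansions, which makes them easy to generate exactly (our sketch; `orbit` is our helper name):

```python
from fractions import Fraction

def orbit(digits, mu):
    # U_0 determined by the periodic expansion .overline{d_0 d_1 ... d_{k-1}}
    k = len(digits)
    def val(ds):
        # .overline{d_0...d_{k-1}} = (sum_j d_j mu^{k-1-j}) / (mu^k - 1)
        return Fraction(sum(d * mu**(k - 1 - j) for j, d in enumerate(ds)),
                        mu**k - 1)
    return {val(digits[l:] + digits[:l]) for l in range(k)}

print(sorted(orbit([0, 1], 2)))        # the set {1/3, 2/3}
print(sorted(orbit([0, 0, 1], 2)))     # the set {1/7, 2/7, 4/7}
print(sorted(orbit([0, 0, 1, 1], 2)))  # the set {1/5, 2/5, 3/5, 4/5}
```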

We now present a characterization of refinable sets relative to a given family $\Phi$ of contractive mappings on any complete metric space $X$. To state this result, for every $k \in \mathbb{N} = \{1, 2, \ldots\}$ and $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$ we define the contractive mapping $\phi_{e_k} := \phi_{\varepsilon_0} \circ \phi_{\varepsilon_1} \circ \cdots \circ \phi_{\varepsilon_{k-1}}$ and let $\Phi_k := \{\phi_{e_k} : e_k \in \mathbb{Z}^k_\mu\}$; in particular, $\Phi_1 = \Phi$. We let $x_{e_k}$ be the unique fixed point of the mapping $\phi_{e_k}$, that is,
$$\phi_{e_k}(x_{e_k}) = x_{e_k},$$
and set
$$F_k := \{x_{e_k} : e_k \in \mathbb{Z}^k_\mu\}.$$
We also define $\mathbb{Z}^\infty_\mu$ to be the set of infinite vectors $e = [\varepsilon_j : j \in \mathbb{N}_0]$, $\varepsilon_i \in \mathbb{Z}_\mu$, $i \in \mathbb{N}_0$. With every such vector $e \in \mathbb{Z}^\infty_\mu$ and $k \in \mathbb{N}$ we associate $e_k := [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$. It was shown in [148] that the limit of $x_{e_k}$ as $k \to \infty$ exists, and we denote this element of the metric space $X$ by $x_e$; in other words, we have that
$$\lim_{k \to \infty} x_{e_k} = x_e.$$
Moreover, we let $e_{r,\ell} := [\varepsilon_j : j \in \mathbb{Z}_\ell \setminus \mathbb{Z}_r]$ for $r \in \mathbb{Z}_{\ell+1}$ and use $x_{e_{r,\ell}}$ to denote the fixed point of the composition mapping $\phi_{e_{r,\ell}}$, where $\phi_{e_{r,\ell}} := \phi_{\varepsilon_r} \circ \phi_{\varepsilon_{r+1}} \circ \cdots \circ \phi_{\varepsilon_{\ell-1}}$ when $r \in \mathbb{Z}_\ell$, and $\phi_{e_{r,\ell}}$ is the identity mapping when $r = \ell$.

Theorem 4.22 Let $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ be a family of contractive mappings on a complete metric space $X$ and let $V_0 \subseteq X$ be a nonempty set of cardinality $k \in \mathbb{N}$. Then $V_0$ is refinable relative to $\Phi$ if and only if $V_0$ has the following


property: for every $v \in V_0$ there exist integers $\ell, m \in \mathbb{Z}_{k+1}$ with $\ell < m$ and $\varepsilon_i \in \mathbb{Z}_\mu$, $i \in \mathbb{Z}_m$, such that $v = \phi_{e_{0,\ell}}(x_{e_{\ell,m}})$ and the points
$$v_r := \phi_{e_{r,\ell}}(x_{e_{\ell,m}}) \in V_0, \quad r \in \mathbb{Z}_\ell,$$
$$v_{\ell+r} := \phi_{e_{\ell+r,m}}(x_{e_{\ell,m}}) \in V_0, \quad r \in \mathbb{Z}_{m-\ell}. \qquad (4.41)$$
Moreover, in this case we have that $V_i := \Phi_i(V_0) \subseteq \Omega$, $i \in \mathbb{N}$, and also
$$V_0 \subseteq \Phi_\ell(F_{m-\ell}). \qquad (4.42)$$

Proof Assume that $V_0$ is refinable and $v \in V_0$. Let $v_0 := v$. By the refinability of $V_0$ there exist points $v_{j+1} \in V_0$ and $\varepsilon_j \in \mathbb{Z}_\mu$ for $j \in \mathbb{Z}_k$ such that $v_j = \phi_{\varepsilon_j}(v_{j+1})$, $j \in \mathbb{Z}_k$. Therefore we have that $v_r = \phi_{e_{r,s}}(v_s)$, $r \in \mathbb{Z}_s$, $s \in \mathbb{Z}_{k+1}$. Since the cardinality of $V_0$ is $k$, there exist two integers $\ell, m \in \mathbb{Z}_{k+1}$ with $\ell < m$ for which $v_\ell = v_m$. Hence, in particular, we conclude that $v_\ell = v_m = x_{e_{\ell,m}}$. It follows that
$$v_r = \phi_{e_{r,\ell}}(v_\ell) = \phi_{e_{r,\ell}}(x_{e_{\ell,m}}), \quad r \in \mathbb{Z}_\ell,$$
and
$$v_{\ell+r} = \phi_{e_{\ell+r,m}}(v_m) = \phi_{e_{\ell+r,m}}(x_{e_{\ell,m}}), \quad r \in \mathbb{Z}_{m-\ell}.$$
These remarks establish the necessity, and also the fact that $v_0 \in \Phi_\ell(F_{m-\ell}) \subseteq \Omega$.

Conversely, let $V_0$ be a set of points with the property, and let $v$ be a typical element of $V_0$. Then we have that either $v = \phi_{e_{0,\ell}}(x_{e_{\ell,m}})$ with $\ell > 0$, or $v = x_{e_{0,m}}$ with $\ell = 0$. In the first case, since $v = \phi_{\varepsilon_0}(\phi_{e_{1,\ell}}(x_{e_{\ell,m}}))$ and $\phi_{e_{1,\ell}}(x_{e_{\ell,m}}) \in V_0$, we have that $v \in \Phi(V_0)$. In the second case, since $x_{e_{0,m}}$ is the unique fixed point of the mapping $\phi_{e_{0,m}}$, we write $v = \phi_{\varepsilon_0}(\phi_{e_{1,m}}(x_{e_{0,m}}))$. By our hypothesis, $\phi_{e_{1,m}}(x_{e_{0,m}}) \in V_0$, and thus in this case we also have that $v \in \Phi(V_0)$. Therefore, in either case, $v \in \Phi(V_0)$, and so $V_0$ is refinable. These comments complete the proof of the theorem.

We next derive two consequences of this observation. To present the first, we go back to the definition of the point $x_e$ in the metric space $X$, where $e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu$, and observe that when the vector $e$ is $s$-periodic, that is, its coordinates have the property that $s$ is the smallest positive integer such that $\varepsilon_i = \varepsilon_{i+s}$, $i \in \mathbb{N}_0$, we have $x_e = x_{e_s}$, where $e_s = [\varepsilon_j : j \in \mathbb{Z}_s]$. Conversely, given any $e_s \in \mathbb{Z}^s_\mu$, we can extend it to an $s$-periodic vector $e \in \mathbb{Z}^\infty_\mu$ and conclude that $x_e = x_{e_s}$.

Let us observe that the powers of the shift operator $\sigma$ act on $s$-periodic vectors in $\mathbb{Z}^\infty_\mu$ as cyclic permutations of vectors in $\mathbb{Z}^s_\mu$. Also, the $s$-periodic orbits of $\sigma$, that is, the vectors $e \in \mathbb{Z}^\infty_\mu$ such that $\sigma^s(e) = e$, are exactly


the $s$-periodic vectors in $\mathbb{Z}^\infty_\mu$. With this viewpoint in mind, we can draw the following conclusion from Theorem 4.22.

Theorem 4.23 A finite set $V_0$ in a metric space $X$ is refinable relative to the mappings $\Phi$ if and only if for every $v \in V_0$ there exists an $e \in \mathbb{Z}^\infty_\mu$ such that $v = x_e$ and $x_{\sigma^k(e)} \in V_0$ for all $k \in \mathbb{N}$.

Proof For convenience, we define the notation $\pi^*(.\varepsilon_0\varepsilon_1\varepsilon_2\cdots) := [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}^\infty_\mu$ for $\varepsilon_i \in \mathbb{Z}_\mu$. The proof requires a formula from [148] (p. 727), which in our notation takes the form
$$\phi_\varepsilon(x_e) = x_{\pi^*(\psi_\varepsilon(\pi(e)))}, \quad \varepsilon \in \mathbb{Z}_\mu,\ e \in \mathbb{Z}^\infty_\mu,$$
where the $\psi_\varepsilon$ are the concrete mappings defined by (4.39). Using this formula, the number $\pi(e)$ associated with the vector $e$ in Theorem 4.22 is identified as
$$\pi(e) = .\varepsilon_0\varepsilon_1\cdots\varepsilon_{\ell-1}\overline{\varepsilon_\ell\cdots\varepsilon_{m-1}}.$$

An immediate corollary of this result characterizes refinable sets on $\mathbb{R}$ relative to the mappings defined by (4.39).

Theorem 4.24 Let $U_0$ be a subset of $\mathbb{R}$ having cardinality $k$. Then $U_0$ is refinable relative to the mappings (4.39) if and only if for every point $u \in U_0$ there exist integers $\ell, m \in \mathbb{Z}_{k+1}$ with $\ell < m$ and $\varepsilon_i \in \mathbb{Z}_\mu$, $i \in \mathbb{Z}_m$, such that $u = .\varepsilon_0\cdots\varepsilon_{\ell-1}\overline{\varepsilon_\ell\cdots\varepsilon_{m-1}}$ and, for any cyclic permutation $\eta_\ell, \ldots, \eta_{m-1}$ of $\varepsilon_\ell, \ldots, \varepsilon_{m-1}$ and $r \in \mathbb{Z}_\ell$, the point $.\varepsilon_r\cdots\varepsilon_{\ell-1}\overline{\eta_\ell\cdots\eta_{m-1}}$ is in $U_0$.

It is the vectors $e \in \mathbb{Z}^\infty_\mu$ which are pre-orbits of $\sigma$, that is, such that for some $k \in \mathbb{N}_0$ the vector $\sigma^k(e)$ is periodic, which characterize refinable sets. Thus there is an obvious way to build, from refinable sets $U_0$ relative to the mappings (4.39) on $\mathbb{R}$, refinable sets relative to any finite family of contractive mappings on a metric space. For example, let $U_0$ be a finite subset of cardinality $k$ in the interval $[0,1]$. We require for each number $u$ in this set that there is an $e \in \mathbb{Z}^\infty_\mu$ such that $u = \pi(e)$ and, for every $j \in \mathbb{N}_0$, $\pi(\sigma^j(e)) \in U_0$; in other words, $U_0$ is refinable relative to the mappings (4.39). We define a set $V_0$ in $X$ associated with $U_0$ by the formula
$$V_0 := \{x_e : \pi(e) \in U_0\},$$
where $x_e \in X$ is the limit of the $x_{e_k}$. This set is a refinable subset of $X$ relative to the contractive mappings $\Phi$. We may use this association to construct examples of practical importance in the finite element method and the boundary integral equation method.


Example Let $\Omega \subset \mathbb{R}^2$ be the triangle with vertices at $y_0 = (0,0)$, $y_1 = (1,0)$ and $y_2 = (0,1)$. Set $y_3 = (1,1)$ and consider the four contractive affine mappings
$$\phi_\varepsilon(x) := \frac{1}{2}\left(y_\varepsilon + (-1)^{\tau(\varepsilon)} x\right), \quad \varepsilon \in \mathbb{Z}_4,\ x \in \mathbb{R}^2, \qquad (4.43)$$
where $\tau(\varepsilon) := 0$, $\varepsilon \in \mathbb{Z}_3$, and $\tau(3) := 1$. The invariant subset of $\mathbb{R}^2$ relative to these mappings is the triangle $\Omega$, and the following sets are refinable with respect to these mappings:
$$\left\{\left(\tfrac{1}{3}, \tfrac{1}{3}\right)\right\}, \qquad \left\{\left(\tfrac{1}{7}, \tfrac{4}{7}\right), \left(\tfrac{2}{7}, \tfrac{1}{7}\right), \left(\tfrac{4}{7}, \tfrac{2}{7}\right)\right\},$$
$$\left\{\left(\tfrac{1}{15}, \tfrac{2}{15}\right), \left(\tfrac{2}{15}, \tfrac{4}{15}\right), \left(\tfrac{4}{15}, \tfrac{8}{15}\right), \left(\tfrac{8}{15}, \tfrac{1}{15}\right)\right\}.$$
Also, we record for any $e_k = [\varepsilon_j : j \in \mathbb{Z}_k] \in \mathbb{Z}^k_\mu$ and $x \in \mathbb{R}^2$ that
$$\phi_{e_k}(x) = \frac{1}{2^k}\left[(-1)^{\tau_k} x + \sum_{j \in \mathbb{Z}_k} (-1)^{\tau_j} 2^{k-j-1} y_{\varepsilon_j}\right],$$
where $\tau_j := \sum_{\ell \in \mathbb{Z}_j} \tau(\varepsilon_\ell)$, $j \in \mathbb{Z}_{k+1}$. From this equation it follows that
$$x_{e_k} = \frac{1}{2^k - (-1)^{\tau_k}} \sum_{j \in \mathbb{Z}_k} (-1)^{\tau_j} 2^{k-j-1} y_{\varepsilon_j}.$$
These formulas can be used to generate the above sets.
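As a concrete check, the three sets listed above can be verified to satisfy $V \subseteq \Phi(V)$ with exact rational arithmetic. The following is our own sketch of the mappings (4.43):

```python
from fractions import Fraction as F

Y = [(F(0), F(0)), (F(1), F(0)), (F(0), F(1)), (F(1), F(1))]  # y_0..y_3
TAU = [0, 0, 0, 1]                                            # tau(eps)

def phi(eps, x):
    # phi_eps(x) = (y_eps + (-1)^{tau(eps)} x) / 2, as in (4.43)
    s = -1 if TAU[eps] else 1
    return ((Y[eps][0] + s * x[0]) / 2, (Y[eps][1] + s * x[1]) / 2)

def refinable(V):
    image = {phi(eps, v) for eps in range(4) for v in V}
    return set(V) <= image

sets = [
    {(F(1, 3), F(1, 3))},
    {(F(1, 7), F(4, 7)), (F(2, 7), F(1, 7)), (F(4, 7), F(2, 7))},
    {(F(1, 15), F(2, 15)), (F(2, 15), F(4, 15)),
     (F(4, 15), F(8, 15)), (F(8, 15), F(1, 15))},
]
print([refinable(V) for V in sets])  # [True, True, True]
```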

4.4.2 Set wavelets

In this subsection we generate a sequence $\mathcal{W} = \{W_n : n \in \mathbb{N}_0\}$ of finite subsets of a metric space $X$ which has a wavelet-like multiresolution structure. We call an element of $\mathcal{W}$ a set wavelet and demonstrate in subsequent sections that set wavelets are crucial for the construction of interpolating wavelets on certain compact subsets of $\mathbb{R}^d$.

The generation of set wavelets begins with an initial finite subset of distinct points $V_0 = \{v_j : j \in \mathbb{Z}_m\}$ in $X$. We use this subset and the finite set $\Phi$ of contractive mappings to define a sequence of subsets of $X$ given by
$$V_i := \Phi(V_{i-1}), \quad i \in \mathbb{N}. \qquad (4.44)$$
Assume that a compact set $\Omega$ in $X$ is the unique invariant set relative to the mappings $\Phi$. When $V_0 \subseteq \Omega$, it follows for each $i \in \mathbb{N}$ that $V_i \subseteq \Omega$. Furthermore, using the set $\Phi_k = \{\phi_{e_k} : e_k \in \mathbb{Z}^k_\mu\}$ of contractive mappings


introduced in the last subsection, for every subset $A$ of $X$ we define the set
$$\Phi_k(A) := \bigcup_{e_k \in \mathbb{Z}^k_\mu} \phi_{e_k}(A),$$
so that, in particular, $\Phi_1(A) = \Phi(A)$. Therefore equation (4.44) implies that $V_i = \Phi_i(V_0)$, $i \in \mathbb{N}$.

The next lemma is useful to us.

Lemma 4.25 Let $\Phi$ be a finite family of contractive mappings on $X$. Assume that $\Omega \subseteq X$ is the invariant set relative to the mappings $\Phi$. If $V_0$ is a nonempty finite subset of $X$, then
$$\Omega \subseteq \overline{\bigcup_{i \in \mathbb{N}_0} V_i},$$
where the $V_i$ are generated from $V_0$ by the mappings $\Phi$ via (4.44).

Proof Let $x \in \Omega$ and $\delta > 0$. Since $\Omega$ is a compact set in $X$, we may choose an integer $n > 0$ such that $\gamma^n \operatorname{diam}(\Omega \cup V_0) < \delta$, where $\gamma$ is the contraction parameter appearing in equation (4.8). According to the defining property (4.9) of the set $\Omega$, there exists an $e_n \in \mathbb{Z}^n_\mu$ such that $x \in \phi_{e_n}(\Omega) \subseteq \phi_{e_n}(\Omega \cup V_0)$. Since $V_0$ is a nonempty subset of $X$, there exists a $y \in \phi_{e_n}(V_0) \subseteq \phi_{e_n}(\Omega \cup V_0)$. Moreover, by the contractivity (4.8) of the family $\Phi$, we have that
$$d(x, y) \le \operatorname{diam} \phi_{e_n}(\Omega \cup V_0) \le \gamma^n \operatorname{diam}(\Omega \cup V_0) < \delta.$$
This inequality proves the result.

Proposition 4.26 Let $V_0$ be a nonempty refinable subset of $X$ relative to a finite family $\Phi$ of contractive mappings, and let $V_i$, $i \in \mathbb{N}_0$, be the collection of sets generated by definition (4.44). Then
$$\Omega = \overline{\bigcup_{i \in \mathbb{N}_0} V_i}.$$

Proof This result follows directly from Lemma 4.25 and Theorem 4.22.

Let us recall the construction of the invariant set $\Omega$ given a family $\Phi$ of contractive mappings (see [148]). The invariant set is given by either one of the formulas
$$\Omega = \{x_e : e \in \mathbb{Z}^\infty_\mu\}$$


or
$$\Omega = \overline{\bigcup_{k \in \mathbb{N}} F_k}.$$
The above proposition provides another way to construct the unique invariant set $\Omega$ relative to a finite family of contractive mappings: we start with a refinable set $V_0$ and then form the $V_i$, $i \in \mathbb{N}$, recursively by (4.44).
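For the interval maps (4.39) this recipe is easy to watch in action: starting from the refinable set $U_0(.\overline{01}) = \{1/3, 2/3\}$, the iterates $V_i = \Psi(V_{i-1})$ grow into a set that fills out $[0,1]$. The following sketch is our own illustration:

```python
from fractions import Fraction

mu = 2
def Phi(V):
    # the interval maps (4.39): Psi(V) = { (v + eps)/2 : v in V, eps in Z_2 }
    return {(v + eps) / mu for v in V for eps in range(mu)}

V = {Fraction(1, 3), Fraction(2, 3)}  # refinable initial set
for _ in range(8):
    assert V <= Phi(V)  # refinability persists, so the V_i are nested
    V = Phi(V)

pts = sorted(V)
gap = max(b - a for a, b in zip(pts, pts[1:]))
print(len(pts), gap)  # 512 points; the largest gap is 1/384 < 2**-8
```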

We say that a sequence of sets $\{A_i : i \in \mathbb{N}_0\}$ is nested (resp. strictly nested) provided that $A_{i-1} \subseteq A_i$, $i \in \mathbb{N}$ (resp. $A_{i-1} \subset A_i$, $i \in \mathbb{N}$). The next lemma shows the importance of the notion of a refinable set.

Lemma 4.27 Let $\Omega$ be the invariant set in $X$ relative to a finite family $\Phi$ of contractive mappings. Suppose that $\Omega$ is not a finite set and that $V_0$ is a nonempty finite subset of $X$. Then the collection of sets $\{V_i : i \in \mathbb{N}_0\}$ defined by (4.44) is strictly nested if and only if the set $V_0$ is refinable relative to $\Phi$.

Proof Suppose that $V_0$ is refinable relative to $\Phi$. Then it follows by induction on $i \in \mathbb{N}$ that $V_{i-1} \subseteq V_i$.

It remains to prove that this inclusion is strict for all $i \in \mathbb{N}$. Assume to the contrary that for some $i \in \mathbb{N}$ we have $V_{i-1} = V_i$. By the definition of $V_i$ we conclude that $V_{i-1} = V_j$ for all $j \ge i$, and thus we have that
$$\bigcup_{j \in \mathbb{N}_0} V_j = V_{i-1}.$$
This conclusion contradicts Proposition 4.26 and the fact that $\Omega$ does not have finite cardinality.

When the sequence of sets $\{V_i : i \in \mathbb{N}_0\}$ is strictly nested, we let $W_i := V_i \setminus V_{i-1}$, $i \in \mathbb{N}$; that is, $V_i = V_{i-1} \cup^\perp W_i$, $i \in \mathbb{N}$, where we use the notation $A \cup^\perp B$ to denote $A \cup B$ when $A \cap B = \emptyset$. By Lemma 4.27, if the set $V_0$ is refinable relative to $\Phi$, we have that $W_i \ne \emptyset$, $i \in \mathbb{N}$. Similarly, we use the notation
$$\Phi^\perp(A) := \bigcup^\perp_{\varepsilon \in \mathbb{Z}_\mu} \phi_\varepsilon(A)$$
when $\phi_\varepsilon(A) \cap \phi_{\varepsilon'}(A) = \emptyset$ for $\varepsilon, \varepsilon' \in \mathbb{Z}_\mu$, $\varepsilon \ne \varepsilon'$. The sets $W_i$, $i \in \mathbb{N}_0$, give us the decomposition
$$V_n = \bigcup^\perp_{i \in \mathbb{Z}_{n+1}} W_i, \qquad (4.45)$$

where $W_0 := V_0$. The next theorem shows that when the set $W_1$ is specified, the sets $W_i$, $i \in \mathbb{N}$, can be constructed recursively, and that the set $\Omega$ has a


decomposition in terms of these sets. This result provides a multiresolution decomposition for the invariant set $\Omega$. For this reason we call the sets $W_i$, $i \in \mathbb{N}$, set wavelets, the set $W_1$ the initial set wavelet and the decomposition of $\Omega$ in terms of the $W_i$, $i \in \mathbb{N}$, the set wavelet decomposition of $\Omega$.

Theorem 4.28 Let $\Omega$ be the invariant set in $X$ relative to a finite family $\Phi$ of contractive mappings. Suppose that each of the contractive mappings $\phi_\varepsilon$, $\varepsilon \in \mathbb{Z}_\mu$, in $\Phi$ has a continuous inverse on $X$ and that they have the property that
$$\phi_\varepsilon(\operatorname{int} \Omega) \cap \phi_{\varepsilon'}(\operatorname{int} \Omega) = \emptyset, \quad \varepsilon, \varepsilon' \in \mathbb{Z}_\mu,\ \varepsilon \ne \varepsilon'. \qquad (4.46)$$
Let $V_0$ be refinable with respect to $\Phi$ and $W_1 \subset \operatorname{int} \Omega$. Then
$$W_{i+1} = \Phi^\perp(W_i), \quad i \in \mathbb{N}, \qquad (4.47)$$
and the compact set $\Omega$ has the set wavelet decomposition
$$\Omega = \overline{\bigcup^\perp_{n \in \mathbb{N}_0} W_n}, \qquad (4.48)$$
where $W_0 = V_0$.

Proof Our hypotheses on the contractive mappings $\phi_\varepsilon$, $\varepsilon \in \mathbb{Z}_\mu$, guarantee that they are topological mappings. Hence, for any subsets $A$ and $B$ of $X$ and any $\varepsilon \in \mathbb{Z}_\mu$, we have that
$$\operatorname{int} \phi_\varepsilon(A) = \phi_\varepsilon(\operatorname{int} A) \qquad (4.49)$$
and
$$\phi_\varepsilon(A) \cap \phi_\varepsilon(B) = \phi_\varepsilon(A \cap B). \qquad (4.50)$$
Let us first establish that when $W_1 \subset \operatorname{int} \Omega$, the sets $W_i$, $i \in \mathbb{N}$, defined by the recursion (4.47) all lie in $\operatorname{int} \Omega$. We prove this fact by induction on $i \in \mathbb{N}$. To this end, we suppose that $W_i \subseteq \operatorname{int} \Omega$. Then the invariance property (4.9) of $\Omega$ and (4.49) imply that
$$\Phi(W_i) \subseteq \Phi(\operatorname{int} \Omega) = \operatorname{int} \Phi(\Omega) = \operatorname{int} \Omega.$$
Therefore we have advanced the induction hypothesis and proved that $W_{i+1} \subseteq \operatorname{int} \Omega$ for all $i \in \mathbb{N}$. Using the fact that $W_{i+1} \subseteq \operatorname{int} \Omega$, we conclude from our hypothesis (4.46), for any $i \in \mathbb{N}$ and $\varepsilon, \varepsilon' \in \mathbb{Z}_\mu$ with $\varepsilon \ne \varepsilon'$, that $\phi_\varepsilon(W_i) \cap \phi_{\varepsilon'}(W_i) = \emptyset$, which justifies the "$\perp$" in formula (4.47). It follows from (4.44), (4.47) and $W_1 = V_1 \setminus V_0$ that $W_{i+1} \subseteq V_{i+1}$, $i \in \mathbb{N}_0$.

Next we wish to confirm that
$$V_i \setminus V_{i-1} = W_i, \quad i \in \mathbb{N}. \qquad (4.51)$$


Again we rely upon induction on $i$ and assume that
$$V_i \setminus V_{i-1} = W_i. \qquad (4.52)$$
Therefore we obtain that
$$V_i \cup W_{i+1} = \Phi(V_{i-1}) \cup \Phi(W_i) = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \big(\phi_\varepsilon(V_{i-1}) \cup \phi_\varepsilon(W_i)\big) = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \phi_\varepsilon(V_i) = V_{i+1},$$
which implies that $V_{i+1} \setminus V_i \subseteq W_{i+1}$. To confirm that equality holds, we observe that
$$V_i \cap W_{i+1} = \Phi(V_{i-1}) \cap \Phi(W_i) = \bigcup_{\varepsilon \in \mathbb{Z}_\mu} \bigcup_{\varepsilon' \in \mathbb{Z}_\mu} \phi_\varepsilon(V_{i-1}) \cap \phi_{\varepsilon'}(W_i). \qquad (4.53)$$
For $\varepsilon \ne \varepsilon'$ we can use (4.49) and hypothesis (4.46) to conclude that $\phi_\varepsilon(\Omega) \cap \phi_{\varepsilon'}(\operatorname{int} \Omega) = \emptyset$. To see this, we assume to the contrary that there exists $x \in \phi_\varepsilon(\Omega) \cap \phi_{\varepsilon'}(\operatorname{int} \Omega)$. Then there exist $y \in \Omega$ and $y' \in \operatorname{int} \Omega$ such that $x = \phi_\varepsilon(y) = \phi_{\varepsilon'}(y')$. Condition (4.46) ensures that $y \notin \operatorname{int} \Omega$. Hence, by equation (4.49), it follows from the first equality that $x \notin \operatorname{int} \Omega$ and from the second equality that $x \in \operatorname{int} \Omega$, a contradiction. Consequently, we have that
$$\phi_\varepsilon(V_{i-1}) \cap \phi_{\varepsilon'}(W_i) = \emptyset. \qquad (4.54)$$
When $\varepsilon = \varepsilon'$, we use (4.50) and the induction hypothesis (4.52) to obtain that (4.54) still holds. Hence equation (4.53) implies that $V_i \cap W_{i+1} = \emptyset$. This establishes (4.51), advances the induction hypothesis and proves the result.
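Theorem 4.28 can be watched concretely for $\Omega = [0,1]$ and the two maps $\psi_0, \psi_1$ of (4.39): with $V_0 = \{1/3, 2/3\}$ (refinable, and with $W_1 \subset (0,1) = \operatorname{int} \Omega$), the set wavelets obey the recursion (4.47). The following sketch is our own illustration:

```python
from fractions import Fraction

def Phi(S):
    # Psi(S) for mu = 2: images under psi_0 and psi_1
    return {(v + eps) / 2 for v in S for eps in range(2)}

V = [{Fraction(1, 3), Fraction(2, 3)}]       # V_0, refinable, inside (0, 1)
for _ in range(6):
    V.append(Phi(V[-1]))                     # V_{i+1} = Phi(V_i), see (4.44)

W = [V[0]] + [V[i] - V[i - 1] for i in range(1, 7)]  # set wavelets W_i
# the recursion (4.47): W_{i+1} = Phi(W_i) for i >= 1
assert all(W[i + 1] == Phi(W[i]) for i in range(1, 6))
# and the unions V_n = W_0 u ... u W_n are disjoint, as in (4.45)
assert all(len(V[i]) == len(V[i - 1]) + len(W[i]) for i in range(1, 7))
print([len(w) for w in W])  # [2, 2, 4, 8, 16, 32, 64]
```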

We end this section by considering the following converse question to the one we have considered so far: given a finite set in a metric space, is it refinable relative to some finite set of contractive mappings? The motivation for this question comes from practical considerations. As is often the case in certain numerical problems associated with interpolation and approximation, we begin on an interval of the real line with prescribed points, for example Gaussian points or the zeros of Chebyshev polynomials. We then want to find mappings that make these prescribed points refinable relative to them. We shall address this question only in the generality of the space $\mathbb{R}^d$ relative to the $\infty$-norm. It is easy to see that, given any subset $V_0 = \{v_i : i \in \mathbb{Z}_k\}$ of $\mathbb{R}^d$, there is a family of contractive mappings on $\mathbb{R}^d$ such that $V_0$ is refinable relative to them. For example, the mappings $\phi_i(x) := \frac{1}{2}(x + v_i)$, $i \in \mathbb{Z}_k$, $x \in \mathbb{R}^d$, will do, since clearly the fixed point of the mapping $\phi_i$ is $v_i$ for $i \in \mathbb{Z}_k$. However, almost surely the associated invariant set will have an empty interior


and therefore Theorem 4.28 will not apply. For instance, in the example of a triangle mentioned above, the general prescription described, applied to the vertices of the triangle, will yield the Sierpinski gasket. This invariant set is a Cantor set and is formed by successively applying the maps (4.43) to the triangles, throwing away the middle triangle, which is the image of the fourth map used in the example. To overcome this, we must add to the above family of mappings another set of contractive mappings "which fill the holes." To describe this process we review some facts about parallelepipeds.
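As a quick sanity check of the fixed-point prescription above, the following sketch (the triangle vertices are an illustrative choice of $V_0$, not taken from the text) verifies that each $v_i$ is the fixed point of $\phi_i(x) = \frac{1}{2}(x + v_i)$, and hence that $V_0 \subseteq \Phi(V_0)$, i.e. $V_0$ is refinable relative to these maps:

```python
# phi_i(x) = (x + v_i)/2 fixes v_i, so V0 is refinable: V0 is a subset of Phi(V0).
V0 = [(0.0, 0.0), (1.0, 0.0), (0.5, 1.0)]    # vertices of a triangle in R^2

def phi(v):
    # the contraction with fixed point v, acting componentwise
    return lambda x: tuple((xj + vj) / 2 for xj, vj in zip(x, v))

maps = [phi(v) for v in V0]
for v, m in zip(V0, maps):
    assert m(v) == v                          # v_i is the fixed point of phi_i
Phi_V0 = {m(v) for m in maps for v in V0}     # Phi(V0): all images of V0
assert set(V0) <= Phi_V0                      # V0 is refinable relative to the maps
```

The invariant set of these three maps is exactly the Sierpinski gasket mentioned above, which is why additional "hole-filling" maps are needed for Theorem 4.28 to apply.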

A finite set $\mathcal{I} = \{t_i : i \in \mathbb{Z}_{n+1}\}$ with $t_0 < t_1 < \cdots < t_{n-1} < t_n$ is called a partition of the interval $I = [t_0, t_n]$ and divides it into subintervals $I_i = [t_i, t_{i+1}]$, $i \in \mathbb{Z}_n$, where the points in $\mathcal{I} \cap (t_0, t_n)$ appear as endpoints of two adjacent subintervals. For every finite subset $U_0$ of $(0,1)$ there exists a partition $\mathcal{I}$ such that the points of $U_0$ lie in the interior of the corresponding subintervals. The lengths of these subintervals can be chosen as small as desired.

Likewise, for any two vectors $x = [x_i : i \in \mathbb{Z}_d]$, $y = [y_i : i \in \mathbb{Z}_d]$ in $\mathbb{R}^d$ with $x_i < y_i$, $i \in \mathbb{Z}_d$, which we denote by $x \prec y$ (also $x \preceq y$ when $x_i \le y_i$, $i \in \mathbb{Z}_d$), we can partition the set $\bigotimes_{i\in\mathbb{Z}_d}[x_i, y_i]$ (called a parallelepiped and denoted by $\langle x, y\rangle$) into (sub)parallelepipeds formed from the partition

$\mathcal{I}^d = \bigotimes_{i\in\mathbb{Z}_d} \mathcal{I}_i = \{[t_i : i \in \mathbb{Z}_d] : t_i \in \mathcal{I}_i, \ i \in \mathbb{Z}_d\},$   (4.55)

where each $\mathcal{I}_i$ is a partition of the interval $[x_i, y_i]$, $i \in \mathbb{Z}_d$. If $\{I_{i,j} : j \in \mathbb{Z}_{n_i}\}$ is the set of subintervals associated with the partition $\mathcal{I}_i$, then a typical parallelepiped associated with the partition $\mathcal{I}^d$ corresponds to a lattice point $\mathbf{i} = [i_j : j \in \mathbb{Z}_d]$, where $i_j \in \mathbb{Z}_{n_j}$, $j \in \mathbb{Z}_d$, defined by

$I_{\mathbf{i}} = \bigotimes_{j\in\mathbb{Z}_d} I_{i_j, j}.$   (4.56)

Given any finite set $V_0 \subset \mathbb{R}^d$ contained in the interior of a parallelepiped $P$, we can partition $P$ as above so that the interiors of the subparallelepipeds contain the vectors of $V_0$. We can, if required, choose the volume of these subparallelepipeds to be as small as desired.

The set of all parallelepipeds is closed under translation, via the simple rule

$\langle x, y\rangle + z = \{w + z : w \in \langle x, y\rangle\} = \langle x+z, y+z\rangle,$

valid for any $x, y, z \in \mathbb{R}^d$ with $x \prec y$. For any $x, y \in \mathbb{R}^d$ we associate an affine mapping $A$ on $\mathbb{R}^d$ defined by the formula

$At = Xt + y, \quad t \in \mathbb{R}^d,$   (4.57)


where $X = \operatorname{diag}(x_0, x_1, \ldots, x_{d-1})$. Such an affine map takes a parallelepiped one to one and onto a parallelepiped (as long as the vector $x$ has no zero components). Conversely, given any two parallelepipeds $P$ and $P'$, there exists an affine mapping of the form (4.57) which takes $P$ one to one and onto $P'$. Moreover, if there exists a $z \in \mathbb{R}^d$ such that $P' + z \subset \operatorname{int} P$, then $A$ is a contraction relative to the $\infty$-norm on $\mathbb{R}^d$ given by

$\|[x_i : i \in \mathbb{Z}_d]\|_\infty = \max\{|x_i| : i \in \mathbb{Z}_d\}.$

For any two parallelepipeds $P = \langle x, y\rangle$ and $P' = \langle x', y'\rangle$ with $P' \subseteq P$, we can partition their set-theoretic difference into parallelepipeds in the following way. For each $i \in \mathbb{Z}_d$ we partition the interval $[x_i, y_i]$ into three subintervals by using the partition $\mathcal{I}_i = \{x_i, x'_i, y'_i, y_i\}$. The associated partition $\mathcal{I}^d$ decomposes $P$ into subparallelepipeds such that one and only one of them corresponds to $P'$ itself. In other words, if $P_i$, $i \in \mathbb{Z}_N$, with $N = 3^d$, are the subparallelepipeds which partition $P$ and $P_{N-1} = P'$, then we have

$P \setminus P' = \bigcup_{i\in\mathbb{Z}_{N-1}} P_i.$
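The partition of $P \setminus P'$ just described is easy to carry out explicitly. The following sketch does so for $d = 2$ with illustrative corner vectors (an arbitrary choice), producing $3^d = 9$ boxes, exactly one of which is $P'$ itself:

```python
from itertools import product

x, y = (0.0, 0.0), (4.0, 4.0)          # P  = <x, y>
xp, yp = (1.0, 2.0), (2.0, 3.0)        # P' = <x', y'>, with P' inside int P

# each axis is split by {x_i, x'_i, y'_i, y_i} into three subintervals
axes = [[(x[i], xp[i]), (xp[i], yp[i]), (yp[i], y[i])] for i in range(2)]
boxes = list(product(*axes))           # 3^2 = 9 subparallelepipeds
assert len(boxes) == 9
assert ((xp[0], yp[0]), (xp[1], yp[1])) in boxes   # one of them is P' itself

# their volumes add up to vol(P) = 16, so the other 8 partition P \ P'
vol = lambda b: (b[0][1] - b[0][0]) * (b[1][1] - b[1][0])
assert abs(sum(map(vol, boxes)) - 16.0) < 1e-12
```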

We can now state the theorem

Theorem 4.29 Let $m$ be a positive integer and $V_0$ a finite subset of cardinality $m$ in the metric space $(\mathbb{R}^d, \|\cdot\|_\infty)$. There exists a finite set $\Phi$ of contractive mappings of the form (4.57) such that $V_0$ is refinable relative to $\Phi$ and the invariant set for $\Phi$ is a parallelepiped.

Proof First we put the set $V_0$ into the interior of a parallelepiped $P$, which we partition as described above into subparallelepipeds so that the vectors in $V_0$ are in the union of the interiors of these subparallelepipeds. Specifically, we suppose that

$V_0 = \{v_i : i \in \mathbb{Z}_m\}, \quad P = \bigcup_{i\in\mathbb{Z}_M} P_i,$

with $m < M$, $v_i \in \operatorname{int} P_i$, $i \in \mathbb{Z}_m$, and $V_0 \cap \operatorname{int} P_i = \emptyset$, $i \in \mathbb{Z}_M \setminus \mathbb{Z}_m$.

For each $i \in \mathbb{Z}_m$ we choose a vector $z_i = [z_{i,j} : j \in \mathbb{Z}_d] \in (0,1)^d$ with sufficiently small components $z_{i,j}$ so that the affine mapping

$A_i t = Q_i(t - v_i) + v_i, \quad t \in \mathbb{R}^d,$   (4.58)

where $Q_i = \operatorname{diag}(z_{i,0}, z_{i,1}, \ldots, z_{i,d-1})$, has the property that the parallelepiped $Q_i = A_i P$ is contained in $P_i$. Since $A_i v_i = v_i$, $i \in \mathbb{Z}_m$, the set $V_0$ is refinable relative to any set of mappings including those in (4.58). We wish to append to these $m$ mappings another collection of one to one and onto contractive


affine mappings of the type (4.57), so that the extended family has $P$ as the invariant set.

To this end, for each $i \in \mathbb{Z}_m$ we partition the difference set $P_i \setminus Q_i$ into parallelepipeds in the manner described above:

$P_i \setminus Q_i = \bigcup_{j\in\mathbb{Z}_{N-1}} P_{i,j},$

where $N = 3^d$. Thus we have succeeded in decomposing $P$ into subparallelepipeds so that exactly $m$ of them are the subparallelepipeds $Q_i$, $i \in \mathbb{Z}_m$. In other words, we have

$P = \bigcup_{i\in\mathbb{Z}_k} W_i,$

where $m < k$ and $W_i = Q_i$, $i \in \mathbb{Z}_m$. Finally, for every $i \in \mathbb{Z}_k \setminus \mathbb{Z}_m$ we choose a one to one and onto contractive affine mapping $A_i$ such that $A_i P = W_i$. This implies that

$P = \bigcup_{i\in\mathbb{Z}_k} A_i P,$

and therefore $P$ is the invariant set relative to the one to one and onto contractive mappings $A_i$, $i \in \mathbb{Z}_k$.

In the remainder of this section we look at the above result for the real line and try to economize on the number of affine mappings needed to make a given set $V_0$ refinable.

Theorem 4.30 Let $k$ be a positive integer and $V_0 = \{v_l : l \in \mathbb{Z}_k\}$ a subset of distinct points in $[0,1]$ of cardinality $k$. Then there exists a family of one-to-one and onto contractive affine mappings $\{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ of the type (4.57), for some $2 \le \mu \le 4$ when $k = 1, 2$ and $3 \le \mu \le 2k-1$ when $k \ge 3$, such that $V_0$ is refinable relative to these mappings.

Proof Since the mappings $\phi_0(t) = t/2$ and $\phi_1(t) = (t+1)/2$ have the fixed points $t = 0$ and $t = 1$, respectively, we conclude that when $k = 1$ and $V_0$ consists of either $0$ or $1$, and when $k = 2$ and $V_0$ consists of $0$ and $1$, these two mappings are the desired mappings. When $V_0$ consists of one interior point $v_0$, we need at least three mappings. For example, we choose

$\phi_0(t) = \frac{v_0}{2}\, t, \quad \phi_1(t) = \frac{1}{2}(t - v_0) + v_0$

and

$\phi_2(t) = \frac{1 - v_0}{2}\, t + \frac{v_0 + 1}{2}, \quad t \in [0,1].$


When $V_0$ consists of two interior points of $(0,1)$, we need four mappings, constructed by following the spirit of the construction for the case $k \ge 3$ which is given below.

When $k \ge 3$, regardless of the location of the points, there exist $2k-1$ mappings that do the job. We next construct these mappings specifically. Without loss of generality, we assume that $v_0 < v_1 < \cdots < v_{k-1}$. We first choose a parameter $\gamma_1$ such that

$0 < \gamma_1 < \min\left\{\frac{v_1 - v_0}{v_1}, \frac{v_2 - v_1}{1 - v_1}\right\}$

and consider the mapping $\phi_1(t) = \gamma_1(t - v_1) + v_1$, $t \in [0,1]$. Therefore, if we let $\alpha_1 = \phi_1(0)$ and $\beta_1 = \phi_1(1)$, then $v_0 < \alpha_1 < \beta_1 < v_2$. Next we let $\gamma_0 = (\alpha_1 - v_0)/(1 - v_0)$ and introduce the mapping $\phi_0(t) = \gamma_0(t - v_0) + v_0$, $t \in [0,1]$. Clearly, by letting $\alpha_0 = \phi_0(0)$ and $\beta_0 = \phi_0(1)$, we have $0 \le \alpha_0 < \beta_0 = \alpha_1$.

The remaining steps in the construction proceed inductively on $k$. For this purpose, we assume that the affine mapping $\phi_{j-2}$ has been constructed. We let $\beta_{j-2} = \phi_{j-2}(1)$ and define

$\phi_{j-1}(t) = \gamma_{j-1}(t - v_{j-1}) + v_{j-1}, \quad t \in [0,1], \quad j = 3, 4, \ldots, k-1,$

where the parameters $\gamma_{j-1}$ are chosen to satisfy the conditions

$0 < \gamma_{j-1} < \min\left\{\frac{v_{j-1} - \beta_{j-2}}{v_{j-1}}, \frac{v_j - v_{j-1}}{1 - v_{j-1}}\right\}, \quad j = 3, 4, \ldots, k-1.$

It is not difficult to verify that $\phi_{j-1}([0,1]) \subset (\beta_{j-2}, v_j)$, or equivalently $\beta_{j-2} < \alpha_{j-1} < \beta_{j-1} < v_j$, by letting $\alpha_{j-1} = \phi_{j-1}(0)$ and $\beta_{j-1} = \phi_{j-1}(1)$. Next we let $\phi_{k-1}(t) = \gamma_{k-1}(t - v_{k-1}) + v_{k-1}$, $t \in [0,1]$, where $0 < \gamma_{k-1} = (v_{k-1} - \beta_{k-2})/v_{k-1}$, and let $\alpha_{k-1} = \phi_{k-1}(0)$ and $\beta_{k-1} = \phi_{k-1}(1)$. Then we have that $\beta_{k-2} = \alpha_{k-1} < \beta_{k-1} \le 1$. By the construction above we find two sets $\{\alpha_i : i \in \mathbb{Z}_k\}$ and $\{\beta_i : i \in \mathbb{Z}_k\}$ of numbers that satisfy the condition

$0 \le \alpha_0 < \beta_0 = \alpha_1 < \beta_1 < \cdots < \beta_{k-2} = \alpha_{k-1} < \beta_{k-1} \le 1,$

and the union of the images of the interval $[0,1]$ under the mappings $\phi_j$, $j \in \mathbb{Z}_k$, is

$U = \bigcup_{j\in\mathbb{Z}_k} [\alpha_j, \beta_j].$

Notice that the set $U$ is not the whole interval $[0,1]$: there are at most $k-1$ gaps which need to be covered. It is straightforward to construct these $k-1$ additional mappings.


The family of mappings of cardinality at most $2k-1$ that we have constructed above has $[0,1]$ as the invariant set, and $V_0$ is a refinable set relative to them.

When the points in a given set V0 have special structure the number ofmappings may be reduced
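The construction in the proof of Theorem 4.30 can be traced numerically. In the sketch below, the points $v_l$ are an illustrative choice with $k = 4$, and each free parameter $\gamma_j$ is taken to be half of its admissible upper bound (the proof only requires some value in the open interval); the assertions check the chain $0 \le \alpha_0 < \beta_0 = \alpha_1 < \beta_1 < \alpha_2 < \beta_2 = \alpha_3 < \beta_3 \le 1$:

```python
v = [0.1, 0.35, 0.6, 0.9]                        # k = 4 ordered points in [0, 1]
k = len(v)
aff = lambda g, c: (lambda t: g * (t - c) + c)   # phi(t) = gamma*(t - v) + v

gamma, alpha, beta = [0.0] * k, [0.0] * k, [0.0] * k
gamma[1] = 0.5 * min((v[1] - v[0]) / v[1], (v[2] - v[1]) / (1 - v[1]))
alpha[1], beta[1] = aff(gamma[1], v[1])(0), aff(gamma[1], v[1])(1)
gamma[0] = (alpha[1] - v[0]) / (1 - v[0])        # forces beta_0 = alpha_1
alpha[0], beta[0] = aff(gamma[0], v[0])(0), aff(gamma[0], v[0])(1)
for j in range(3, k):                            # builds phi_{j-1}, j = 3, ..., k-1
    gamma[j - 1] = 0.5 * min((v[j - 1] - beta[j - 2]) / v[j - 1],
                             (v[j] - v[j - 1]) / (1 - v[j - 1]))
    alpha[j - 1] = aff(gamma[j - 1], v[j - 1])(0)
    beta[j - 1] = aff(gamma[j - 1], v[j - 1])(1)
gamma[k - 1] = (v[k - 1] - beta[k - 2]) / v[k - 1]   # forces beta_{k-2} = alpha_{k-1}
alpha[k - 1], beta[k - 1] = aff(gamma[k - 1], v[k - 1])(0), aff(gamma[k - 1], v[k - 1])(1)

assert abs(beta[0] - alpha[1]) < 1e-12 and abs(beta[k - 2] - alpha[k - 1]) < 1e-12
assert 0 <= alpha[0] < beta[0] < beta[1] < alpha[2] < beta[2] < beta[3] <= 1
```

The uncovered gaps $(\beta_j, \alpha_{j+1})$ between non-adjacent images are exactly the ones the $k-1$ additional mappings must fill.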

4.5 Multiscale interpolating bases

In this section we present the construction of multiscale bases for both Lagrange interpolation and Hermite interpolation, based on the refinable sets and set wavelets developed in the previous section.

4.5.1 Multiscale Lagrange interpolation

In this subsection we describe a construction of the Lagrange interpolating wavelet-like basis using the set wavelets constructed previously. For this purpose, we let $X = \mathbb{R}^d$ and assume that $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ is a family of contractive mappings that satisfies the hypotheses of Theorem 4.28. We also assume that $\Omega \subset X$ is the invariant set relative to $\Phi$, with $\operatorname{meas}(\Omega \setminus \operatorname{int}\Omega) = 0$. Let $k$ be a positive integer and assume that

$V_0 = \{v_l : l \in \mathbb{Z}_k\} \subset \operatorname{int}\Omega$

is refinable relative to $\Phi$. Note that in this construction of discontinuous wavelets we restrict the choice of the points in the set $V_0$ to interior points of $\Omega$.

As in [200, 201], we choose a refinable curve $f = [f_l : l \in \mathbb{Z}_k] : \Omega \to \mathbb{R}^k$ which satisfies a refinement equation

$f \circ \phi_i = A_i f, \quad i \in \mathbb{Z}_\mu,$   (4.59)

for some prescribed $k \times k$ matrices $A_i$, $i \in \mathbb{Z}_\mu$. We remark that if there is $g : \Omega \to \mathbb{R}^k$ and a $k \times k$ nonsingular matrix $B$ such that $f = Bg$, then $g$ is also a refinable curve. We let

$F_0 = \operatorname{span}\{f_l : l \in \mathbb{Z}_k\}$

and suppose that $\dim F_0 = k$. Furthermore, we require that for any $b = [b_l : l \in \mathbb{Z}_k] \in \mathbb{R}^k$ there exists a unique element $f \in F_0$ such that $f(v_i) = b_i$, $i \in \mathbb{Z}_k$. In other words, there exist $k$ elements in $F_0$, which we also denote by $f_0, f_1, \ldots, f_{k-1}$, such that $f_i(v_j) = \delta_{i,j}$, $i, j \in \mathbb{Z}_k$. Refinable sets that admit


a unique Lagrange interpolating polynomial were constructed in [198]. When this condition holds, we say $\{f_i : i \in \mathbb{Z}_k\}$ interpolates on the set $V_0$ and that $f_j$ interpolates at $v_j$, $j \in \mathbb{Z}_k$. Under this condition, any element $f \in F_0$ has a representation of the form

$f = \sum_{i\in\mathbb{Z}_k} f(v_i) f_i.$

A set $V_0 \subseteq X$ is called (Lagrange) admissible relative to $(\Phi, F_0)$ if it is refinable relative to $\Phi$ and there is a basis of functions $f_i$, $i \in \mathbb{Z}_k$, for $F_0$ which interpolate on the set $V_0$. In this subsection we shall always assume that $V_0$ is (Lagrange) admissible. We record in the next proposition the simple fact of the Lagrange admissibility of any set of cardinality $k$ for the special case when $\Phi = \Psi$ defined by (4.39), $\Omega = [0,1]$ and $F_0 = P_{k-1}$, the space of polynomials of degree $\le k-1$.

Proposition 4.31 If $V_0 \subset [0,1]$ is refinable relative to $\Psi$ and has cardinality $k$, then $V_0$ is Lagrange admissible relative to $(\Psi, P_{k-1})$.

Proof It is a well-known fact that the polynomial basis functions satisfy the refinement equation (4.59) with $\phi_i = \psi_i$ for some matrices $A_i$. Hence this result follows immediately from the unique solvability of the univariate Lagrange interpolation problem.

In a manner similar to the construction of orthogonal wavelets in Section 4.3, we define linear operators $T_\varepsilon : L^\infty(\Omega) \to L^\infty(\Omega)$, $\varepsilon \in \mathbb{Z}_\mu$, by

$(T_\varepsilon x)(t) = \begin{cases} x(\phi_\varepsilon^{-1}(t)), & t \in \phi_\varepsilon(\Omega), \\ 0, & t \notin \phi_\varepsilon(\Omega), \end{cases}$

and set

$F_{i+1} = \bigoplus_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon F_i, \quad i \in \mathbb{N}_0.$

This sequence of spaces is nested, that is, $F_i \subseteq F_{i+1}$, $i \in \mathbb{N}_0$, and $\dim F_i = k\mu^i$, $i \in \mathbb{N}_0$.

We next construct a convenient basis for each of the spaces $F_i$. For this purpose we let $\mathbb{F}_0 = \{f_j : j \in \mathbb{Z}_k\}$, where $f_j$, $j \in \mathbb{Z}_k$, interpolate the set $V_0$, and

$\mathbb{F}_i = \bigcup^{\perp}_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon \mathbb{F}_{i-1} = \{T_{\varepsilon_0} \cdots T_{\varepsilon_{i-1}} f_j : j \in \mathbb{Z}_k, \ [\varepsilon_l : l \in \mathbb{Z}_i] \in \mathbb{Z}_\mu^i\}, \quad i \in \mathbb{N}.$   (4.60)

Since the functions $f_j$, $j \in \mathbb{Z}_k$, interpolate the set $V_0$, we conclude that the elements in $\mathbb{F}_i$ interpolate the set $V_i$. In other words, the functions in the set $\mathbb{F}_i$


satisfy the condition

$(T_{\varepsilon_0} \cdots T_{\varepsilon_{i-1}} f_j)\bigl(\phi_{\varepsilon'_0} \circ \cdots \circ \phi_{\varepsilon'_{i-1}}(v_{j'})\bigr) = \delta_{(\varepsilon_0,\ldots,\varepsilon_{i-1},j),(\varepsilon'_0,\ldots,\varepsilon'_{i-1},j')},$   (4.61)

where we use the notation

$\delta_{a,a'} = \begin{cases} 1, & a = a', \\ 0, & a \neq a', \end{cases} \quad a, a' \in \mathbb{N}^i, \ i \in \mathbb{N}.$

For ease of notation, we let $e_i = [\varepsilon_j : j \in \mathbb{Z}_i]$ and $T_{e_i} f_j = T_{\varepsilon_0} \cdots T_{\varepsilon_{i-1}} f_j$. By equation (4.61), this function interpolates at $\phi_{e_i}(v_j)$. Moreover, the relation

$F_i = \operatorname{span} \mathbb{F}_i, \quad i \in \mathbb{N}_0,$   (4.62)

holds. Now for each $n \in \mathbb{N}_0$ we decompose the space $F_{n+1}$ as the direct sum of the space $F_n$ and its complement space $G_{n+1}$, which consists of the elements in $F_{n+1}$ vanishing at all points in $V_n$; that is,

$F_{n+1} = F_n \oplus G_{n+1}, \quad n \in \mathbb{N}_0.$   (4.63)

This decomposition is analogous to the orthogonal decomposition in the construction of the orthogonal wavelet-like basis in Section 4.3, and can be viewed as an interpolatory decomposition in the sense that we describe below.

We first label the points in the set $V_n$ according to the set wavelet decomposition for $V_n$ given in Section 4.4. We assume that the initial set wavelet is given by $W_1 = \{w_j : j \in \mathbb{Z}_r\}$ with $r = k(\mu - 1)$, and we let

$t_{0,j} = v_j, \ j \in \mathbb{Z}_k, \quad t_{1,j} = w_j, \ j \in \mathbb{Z}_r,$

$t_{i,j} = \phi_e w_\ell, \quad j = \mu(e)\, r + \ell, \ e \in \mathbb{Z}_\mu^{i-1}, \ \ell \in \mathbb{Z}_r, \ i = 2, 3, \ldots, n.$

Then we conclude that $V_n = \{t_{i,j} : (i,j) \in U_n\}$, where $U_n = \{(i,j) : i \in \mathbb{Z}_{n+1}, \ j \in \mathbb{Z}_{w(i)}\}$ with

$w(i) = \begin{cases} k, & i = 0, \\ k(\mu - 1)\mu^{i-1}, & i \ge 1. \end{cases}$

The Lagrange interpolation problem for $F_n$ relative to $V_n$ is to find, for a given vector $b = [b_{i,j} : (i,j) \in U_n]$, an element $f \in F_n$ such that

$f(t_{i,j}) = b_{i,j}, \quad (i,j) \in U_n.$   (4.64)

The following fact is useful in this regard

Lemma 4.32 If $V_0$ is Lagrange admissible relative to $(\Phi, F_0)$, then for each $n \in \mathbb{N}_0$ the set $V_n$ is also Lagrange admissible relative to $(\Phi, F_n)$.

Proof This result follows immediately from equation (4.61).


Lemma 4.32 insures that each $f \in F_{n+1}$ has the representation $f = P_n f + g_n$, where $P_n f$ is the Lagrange interpolant to $f$ from $F_n$ relative to $V_n$ and $g_n = f - P_n f$ is the error of the interpolation. Therefore we have that

$G_{n+1} = \{g_n : g_n = f - P_n f, \ f \in F_{n+1}\}.$   (4.65)

The fact that the subspace decomposition (4.63) is a direct sum also follows from equation (4.65) and the unique solvability of the Lagrange interpolation problem (4.64). For this reason the spaces $G_n$ are called the interpolating wavelet spaces and, in particular, the space $G_1$ is called the initial interpolating wavelet space. Direct computation yields the dimension of the wavelet space $G_n$: $\dim G_n = k\mu^{n-1}(\mu - 1)$. Also, we have an interpolating wavelet decomposition for $F_{n+1}$:

$F_{n+1} = F_0 \oplus G_1 \oplus \cdots \oplus G_{n+1}.$   (4.66)

In the next theorem we describe a recursive construction for the wavelet spaces $G_n$. To establish the theorem, we need the following lemma regarding the distributivity of the linear operators $T_\varepsilon$, $\varepsilon \in \mathbb{Z}_\mu$, relative to a direct sum of two subspaces of $L^\infty(\Omega)$.

Lemma 4.33 Let $B, C \subset L^\infty(\Omega)$ be two subspaces. If $B \oplus C$ is a direct sum, then for each $\varepsilon \in \mathbb{Z}_\mu$, $T_\varepsilon(B \oplus C) = (T_\varepsilon B) \oplus (T_\varepsilon C)$.

Proof It is clear that $T_\varepsilon(B \oplus C) = (T_\varepsilon B) + (T_\varepsilon C)$. Therefore it remains to verify that the sum on the right-hand side is a direct sum. To this end, we let $x \in (T_\varepsilon B) \cap (T_\varepsilon C)$ and observe that there exist $f \in B$ and $g \in C$ such that

$x = T_\varepsilon f = T_\varepsilon g.$   (4.67)

By the definition of the operators $T_\varepsilon$, we have that $x(t) = 0$ for $t \notin \phi_\varepsilon(\Omega)$. Now, for each $t \in \phi_\varepsilon(\Omega)$ there exists $\tau \in \Omega$ such that $t = \phi_\varepsilon(\tau)$, and thus, using equation (4.67), we observe that

$x(t) = f(\phi_\varepsilon^{-1}(t)) = f(\tau), \quad x(t) = g(\phi_\varepsilon^{-1}(t)) = g(\tau).$

Since $B \oplus C$ is a direct sum, we conclude that $x(t) = 0$ for $t \in \phi_\varepsilon(\Omega)$. It follows that $x = 0$.

We also need the following fact for the proof of our main theorem

Lemma 4.34 If $Y \subseteq L^\infty(\Omega)$, then there holds

$T_\varepsilon Y \cap T_{\varepsilon'} Y = \{0\}, \quad \varepsilon, \varepsilon' \in \mathbb{Z}_\mu, \ \varepsilon \neq \varepsilon'.$

Proof Let $x \in T_\varepsilon Y \cap T_{\varepsilon'} Y$. There exist $y_1, y_2 \in Y$ such that $x = T_\varepsilon y_1 = T_{\varepsilon'} y_2$. By the definition of the operators $T_\varepsilon$, we conclude from the first equality that $x(t) = 0$ for $t \notin \phi_\varepsilon(\Omega)$, and from the second that $x(t) = 0$ for $t \notin \phi_{\varepsilon'}(\Omega)$. Since $\varepsilon \neq \varepsilon'$, we have that $\operatorname{meas}(\phi_\varepsilon(\Omega) \cap \phi_{\varepsilon'}(\Omega)) = 0$. This implies that $x = 0$ a.e. in $\Omega$ and therefore establishes the result of this lemma.

We are now ready to prove the main result of this section

Theorem 4.35 Let $V_0$ be Lagrange admissible relative to $(\Phi, F_0)$ and $\{W_n : n \in \mathbb{N}\}$ be the set wavelets generated from $V_0$. Then

$G_1 = \operatorname{span}\{T_\varepsilon f_j : \varepsilon \in \mathbb{Z}_\mu, \ j \in \mathbb{Z}_k, \ T_\varepsilon f_j \text{ interpolates at } \phi_\varepsilon(v_j) \in W_1\},$

$G_{n+1} = \bigoplus_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon G_n, \quad n \in \mathbb{N},$   (4.68)

and $G_n = \operatorname{span} \mathbb{G}_n$, where

$\mathbb{G}_n = \{T_{e_n} f_j : e_n \in \mathbb{Z}_\mu^n, \ j \in \mathbb{Z}_k, \ T_{e_n} f_j \text{ interpolates at } \phi_{e_n}(v_j) \in W_n\}.$

Proof Let $T_\varepsilon f_j$ interpolate at $\phi_\varepsilon(v_j) \in W_1$. Then $T_\varepsilon f_j$ has the property

$(T_\varepsilon f_j)(\phi_{\varepsilon'}(v_{j'})) = 0, \quad \varepsilon, \varepsilon' \in \mathbb{Z}_\mu, \ \varepsilon' \neq \varepsilon \ \text{or} \ j, j' \in \mathbb{Z}_k, \ j' \neq j.$

By the definition of the set wavelet $W_1 = V_1 \setminus V_0$, we conclude for all $v_{j'} \in V_0$ that $(T_\varepsilon f_j)(v_{j'}) = 0$. Thus, by the definition of $G_1$, we have that, corresponding to each point $\phi_\varepsilon(v_j) \in W_1$, the basis function $T_\varepsilon f_j$ is in $G_1$. Note that the cardinality of $W_1$ is given by the formula $\operatorname{card} W_1 = \operatorname{card} V_1 - \operatorname{card} V_0 = k(\mu - 1)$. It follows that the number of basis functions $T_\varepsilon f_j$ which interpolate at $\phi_\varepsilon(v_j) \in W_1$ is $r$, the dimension of $G_1$. Because these $r$ functions are linearly independent, they constitute a basis for $G_1$.

We next prove equation (4.68) by induction on $n$. For this purpose, we assume that equation (4.68) holds for $n \le m$ and consider the case when $n = m+1$. By the definition of $F_{m+1}$ and $G_m$, we have that

$F_{m+1} = \bigoplus_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon F_m = \bigoplus_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon(F_{m-1} \oplus G_m).$

Using Lemma 4.33, we obtain that

$F_{m+1} = \bigoplus_{\varepsilon\in\mathbb{Z}_\mu} \bigl[(T_\varepsilon F_{m-1}) \oplus (T_\varepsilon G_m)\bigr] = \Bigl(\bigoplus_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon F_{m-1}\Bigr) \oplus \Bigl(\bigoplus_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon G_m\Bigr).$

It then follows from the definition of $F_m$ that

$F_{m+1} = F_m \oplus \Bigl(\bigoplus_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon G_m\Bigr).$


Let

$G = \bigoplus_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon G_m$

and assume that $f \in G$. Then there exist $g_0, \ldots, g_{\mu-1} \in G_m$ such that

$f = \sum_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon g_\varepsilon.$

For each $v \in V_m$ there exist $v' \in V_{m-1}$ and $\varepsilon' \in \mathbb{Z}_\mu$ such that $v = \phi_{\varepsilon'}(v')$. By the definition of the linear operators $T_\varepsilon$, $\varepsilon \in \mathbb{Z}_\mu$, and the fact that $g_\varepsilon \in G_m$, $\varepsilon \in \mathbb{Z}_\mu$, direct computation leads to the condition, for each $v \in V_m$, that

$f(v) = \sum_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon g_\varepsilon(\phi_{\varepsilon'}(v')) = g_{\varepsilon'}(\phi_{\varepsilon'}^{-1} \circ \phi_{\varepsilon'}(v')) = g_{\varepsilon'}(v') = 0.$

Hence $G \subseteq G_{m+1}$. Moreover, it is easy to see that $\dim G = \dim G_{m+1}$, which implies that $G = G_{m+1}$.

To prove the second part of the theorem, it suffices to establish the recursion

$\mathbb{G}_{n+1} = \bigcup^{\perp}_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon \mathbb{G}_n.$

The "$\perp$" on the right-hand side of this equation is justified by Lemma 4.34. To establish its validity, we let

$\mathbb{G} = \bigcup^{\perp}_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon \mathbb{G}_n.$

Hence the set $\mathbb{G}$ consists of the elements $T_{\varepsilon_n} T_{e_n} f_j$, where $T_{e_n} f_j$ interpolates at $\phi_{e_n}(v_j) \in W_n$ and $\varepsilon_n \in \mathbb{Z}_\mu$. By Theorem 4.28 we have that

$\{\phi_{e_{n+1}}(v_j) = \phi_{\varepsilon_n} \circ \phi_{e_n}(v_j) : \varepsilon_n \in \mathbb{Z}_\mu, \ \phi_{e_n}(v_j) \in W_n\} \subseteq \{\phi_{e_{n+1}}(v_j) \in W_{n+1}\}.$

Hence $\mathbb{G} \subseteq \mathbb{G}_{n+1}$. Since $\operatorname{card} \mathbb{G} = \operatorname{card} \mathbb{G}_{n+1} = \operatorname{card} W_{n+1}$, we conclude that $\mathbb{G} = \mathbb{G}_{n+1}$.

Theorem 4.36 It holds that

$L^2(\Omega) = \overline{\bigcup_{n\in\mathbb{N}_0} F_n} = F_0 \oplus \Bigl(\bigoplus_{n\in\mathbb{N}} G_n\Bigr).$

Proof Since the mappings $\phi_\varepsilon$, $\varepsilon \in \mathbb{Z}_\mu$, are contractive, the condition of Theorem 4.7 of [201] is satisfied. The finite-dimensional spaces $F_n$ appearing here are the same as those generated by the family of mutually orthogonal isometries in [201] if we begin with the same initial space $F_1$. Therefore the first equality holds. An examination of the proof of Theorem 4.7 of [201] shows that the same proof establishes the second equality.

As a result of the decompositions obtained in Theorems 4.35 and 4.36, we present a multiscale algorithm for the Lagrange interpolation. To this end, we let $g_j$, $j \in \mathbb{Z}_r$, be a basis for $G_1$ such that

$g_j(t_{0,i}) = 0, \ i \in \mathbb{Z}_k, \ j \in \mathbb{Z}_r, \quad g_j(t_{1,j'}) = \delta_{j,j'}, \ j, j' \in \mathbb{Z}_r.$

We label these functions according to the points in $V_n$ in the following way. Let

$g_{0,j} = f_j, \ j \in \mathbb{Z}_k, \quad g_{1,j} = g_j, \ j \in \mathbb{Z}_r,$

$g_{i,j} = T_e g_\ell, \quad j = \mu(e)\, r + \ell, \ e \in \mathbb{Z}_\mu^{i-1}, \ \ell \in \mathbb{Z}_r, \ i = 2, 3, \ldots, n.$

With this labeling we see that $g_{i,j}(t_{i',j'}) = \delta_{i,i'}\delta_{j,j'}$, $(i,j), (i',j') \in U_n$, $i \ge i'$, and

$F_n = \operatorname{span}\{g_{i,j} : (i,j) \in U_n\}.$

The functions $g_{i,j}$, $(i,j) \in U_n$, are also called interpolating wavelets. Now we express the interpolation projection in terms of this basis. For each $x \in C(\Omega)$, the interpolation projection $P_n x$ of $x$ is given by

$P_n x = \sum_{(i,j)\in U_n} x_{i,j}\, g_{i,j}.$   (4.69)

The coefficients $x_{i,j}$ in (4.69) can be obtained from the recursive formula

$x_{0,j} = x(t_{0,j}), \quad j \in \mathbb{Z}_k,$

$x_{i,j} = x(t_{i,j}) - \sum_{(i',j')\in U_{i-1}} x_{i',j'}\, g_{i',j'}(t_{i,j}), \quad (i,j) \in U_n.$

This recursive formula allows us to interpolate a given function efficiently by functions in $F_n$. When we increase the level from $n$ to $n+1$, we do not need to recompute the coefficients $x_{i,j}$ for $0 \le i \le n$. We describe this important point with the formula $P_{n+1} x = P_n x + Q_{n+1} x$, where $Q_{n+1} x \in G_{n+1}$ and

$Q_{n+1} x = \sum_{j\in\mathbb{Z}_{w(n+1)}} x_{n+1,j}\, g_{n+1,j}.$

The coefficients $x_{n+1,j}$ are computed by the previous recursive formula using the coefficients obtained for the previous levels; that is,

$x_{n+1,j} = x(t_{n+1,j}) - \sum_{(i',j')\in U_n} x_{i',j'}\, g_{i',j'}(t_{n+1,j}).$
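The whole multiscale Lagrange machinery can be exercised in the simplest one-dimensional setting. In the sketch below, $\Omega = [0,1]$ with the two dyadic maps, $V_0 = \{1/3, 2/3\}$ (refinable, since $\phi_0(2/3) = 1/3$ and $\phi_1(1/3) = 2/3$) and $F_0$ the linear polynomials are illustrative choices, not taken from the text. The code builds the interpolating wavelets $g_{i,j} = T_e g_\ell$ and computes the coefficients $x_{i,j}$ by the recursive formula above:

```python
import math

# Dyadic contractions on [0, 1] and the refinable set V0 = {1/3, 2/3}.
phi = [lambda t: t / 2, lambda t: (t + 1) / 2]
V0 = [1 / 3, 2 / 3]
f_basis = [lambda t: 2 - 3 * t, lambda t: 3 * t - 1]   # f_j(v_j') = delta_{j,j'}

def T(eps, f):
    """(T_eps f)(t) = f(phi_eps^{-1}(t)) on phi_eps([0,1]), zero elsewhere."""
    return lambda t: f(2 * t - eps) if eps / 2 <= t <= (eps + 1) / 2 else 0.0

# Level 0 holds (t_{0,j}, g_{0,j}) = (v_j, f_j); level 1 holds the basis of G_1:
# T_0 f_0 interpolates at phi_0(v_0) = 1/6 in W_1, and T_1 f_1 at phi_1(v_1) = 5/6
# (the other images phi_0(v_1), phi_1(v_0) fall back into V0).
levels = [list(zip(V0, f_basis)),
          [(phi[0](V0[0]), T(0, f_basis[0])), (phi[1](V0[1]), T(1, f_basis[1]))]]
n = 4
for i in range(2, n + 1):          # G_i is spanned by T_0 G_{i-1} and T_1 G_{i-1}
    levels.append([(phi[eps](t), T(eps, g))
                   for eps in (0, 1) for (t, g) in levels[i - 1]])

def interpolate(x):
    """Multiscale coefficients x_{ij} via the recursion of the text."""
    coeffs = []
    for level in levels:
        # subtract only lower levels: same-level terms satisfy g_{ij}(t_{ij'}) = delta
        new = [(x(t) - sum(c * g(t) for (c, g) in coeffs), g) for (t, g) in level]
        coeffs.extend(new)
    return coeffs

x = lambda t: math.sin(3 * t)
coeffs = interpolate(x)
Px = lambda t: sum(c * g(t) for (c, g) in coeffs)
# P_n x reproduces x at every point of V_n
assert all(abs(Px(t) - x(t)) < 1e-10 for level in levels for (t, _) in level)
```

Because the wavelets of later levels vanish on the earlier point sets, raising the level only appends new coefficients, exactly as in the update $P_{n+1}x = P_n x + Q_{n+1}x$.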


4.5.2 Multiscale Hermite interpolation

In the last section we showed how refinable sets lead to a multiresolution structure and result in what we call set wavelets. In the last subsection we then used this recursive structure of the points to construct a Lagrange interpolation that has a much desired multiscale structure. In this subsection we describe a similar construction for Hermite piecewise polynomial interpolation on invariant sets.

Let $X$ be the Euclidean space $\mathbb{R}^d$ and $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ be a family of contractive mappings on $X$; $\Omega$ is the invariant set relative to the family of contractive mappings $\Phi$. Let $V_0$ be a nonempty finite subset of distinct points in $X$ and recursively define

$V_i = \Phi(V_{i-1}), \quad i \in \mathbb{N}.$

It was shown in the last section that the collection of sets $V_i$, $i \in \mathbb{N}_0$, is strictly nested if and only if the set $V_0$ is refinable relative to $\Phi$. Denote

$W_i = V_i \setminus V_{i-1}, \quad i \in \mathbb{N}.$

When the contractive mappings have a continuous inverse on $X$,

$\phi_\varepsilon(\operatorname{int}\Omega) \cap \phi_{\varepsilon'}(\operatorname{int}\Omega) = \emptyset, \quad \varepsilon, \varepsilon' \in \mathbb{Z}_\mu, \ \varepsilon \neq \varepsilon',$

and $W_1$ is chosen to be a subset of $\operatorname{int}\Omega$, then the sets $W_{i+1}$, $i \in \mathbb{N}$, can be generated recursively from $W_1$ by the formula

$W_{i+1} = \Phi^{\perp}(W_i) = \bigcup^{\perp}_{\varepsilon\in\mathbb{Z}_\mu} \phi_\varepsilon(W_i), \quad i \in \mathbb{N},$

and the invariant set $\Omega$ has the decomposition

$\Omega = \overline{V_0 \cup^{\perp} \Bigl(\bigcup^{\perp}_{n\in\mathbb{N}} W_n\Bigr)}.$

In the following we first describe a construction of multiscale discontinuous Hermite interpolation and then a construction of multiscale smooth Hermite interpolation on the interval $[0,1]$.

1. Multiscale discontinuous Hermite interpolation

We start with nonempty finite sets $V \subset \operatorname{int}\Omega$ and $U \subset \mathbb{N}_0^d$. For $u = [u_i : i \in \mathbb{Z}_d] \in U$ we set

$D^u = \frac{\partial^{|u|}}{\partial t_1^{u_0} \cdots \partial t_d^{u_{d-1}}},$

where $|u| = \sum_{i\in\mathbb{Z}_d} u_i$.


Let $P$ be a linear space of functions on $\Omega$. We say that $(P, U, V)$ is Hermite admissible provided that $V$ is refinable and, for any given real numbers $c_r$, $r = (u,v) \in U \times V$, there exists a unique element $p \in P$ such that

$D_r(p) = (D^u p)(v) = c_r.$   (4.70)

When this is the case, the dimension of $P$ is $(\operatorname{card} V)(\operatorname{card} U)$ and there exists a basis $\{p_r : r \in U \times V\}$ for $P$ such that for every $r = (u,v) \in U \times V$ and $r' = (u',v') \in U \times V$,

$(D^{u'} p_r)(v') = \delta_{r,r'}.$   (4.71)

Moreover, for any function $p \in P$, the representation

$p = \sum_{r\in U\times V} D_r(p)\, p_r$

holds. We call $\{p_r : r \in U \times V\}$ a Hermite basis for $P$ relative to $U \times V$. To proceed further, we must restrict the family $\Phi$ to have the form

$\phi_\varepsilon(t) = a_\varepsilon \odot t + b_\varepsilon, \quad t \in \Omega,$   (4.72)

where $a_\varepsilon \odot t$ is the vector formed by the componentwise product of the vectors $a_\varepsilon$ and $t$. We also require linear operators $T_\varepsilon : L^\infty(\Omega) \to L^\infty(\Omega)$, $\varepsilon \in \mathbb{Z}_\mu$, defined by

$T_\varepsilon f = f \circ \phi_\varepsilon^{-1}\, \chi_{\phi_\varepsilon(\Omega)},$

and for every $e_n = [\varepsilon_j : j \in \mathbb{Z}_n] \in \mathbb{Z}_\mu^n$ we define the constants

$a_{e_n}^{-u} = a_{\varepsilon_0}^{-u}\, a_{\varepsilon_1}^{-u} \cdots a_{\varepsilon_{n-1}}^{-u}.$

Lemma 4.37 If $\Phi$ is a family of contractive mappings of the form (4.72), then for all $e_n \in \mathbb{Z}_\mu^n$ and $u \in \mathbb{N}_0^d$ the following formula holds:

$D^u T_{e_n} = a_{e_n}^{-u}\, T_{e_n} D^u, \quad n \in \mathbb{N}.$

Proof We prove this lemma by induction on $n$; the proof begins by first verifying the case $n = 1$ by the chain rule. The induction hypothesis is then advanced by again using the chain rule and the case $n = 1$.
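In one dimension with $\phi_\varepsilon(t) = at + b$, the $n = 1$ case of the lemma reads $(T_\varepsilon f)' = a^{-1} T_\varepsilon f'$ on $\phi_\varepsilon(\Omega)$. A numerical sketch (the values of $a$, $b$ and the test function are illustrative assumptions):

```python
a, b = 0.5, 0.25                      # one affine contraction on [0, 1]
f = lambda t: t ** 3 - t              # a smooth test function
df = lambda t: 3 * t ** 2 - 1

# T f = (f o phi^{-1}) * indicator of phi([0,1]) = [b, a + b]
T = lambda g: (lambda t: g((t - b) / a) if b <= t <= a + b else 0.0)

t, h = 0.4, 1e-6                      # a point inside phi([0,1]) = [0.25, 0.75]
numeric = (T(f)(t + h) - T(f)(t - h)) / (2 * h)   # derivative of T f at t
assert abs(numeric - (1 / a) * T(df)(t)) < 1e-6   # D T f = a^{-1} T D f
```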

We suppose that $(P, U, V_0)$ is admissible, $P$ is a subspace of polynomials with Hermite basis

$\mathbb{F}_0 = \{p_r : r \in U \times V_0\}$

relative to $U \times V_0$, and use the operators $T_\varepsilon$, $\varepsilon \in \mathbb{Z}_\mu$, to recursively define the sets

$\mathbb{F}_n = \{a_{e_n}^{u}\, T_{e_n} p_r : r = (u,v) \in U \times V_0, \ e_n \in \mathbb{Z}_\mu^n\}.$


Since the polynomials $p_r$, $r \in U \times V_0$, were chosen to be a Hermite basis for $P$ relative to $U \times V_0$, the functions in $\mathbb{F}_n$ form a Hermite basis for $F_n = \operatorname{span} \mathbb{F}_n$ relative to $U \times V_n$. That is, the function $a_{e_n}^u T_{e_n} p_r$ satisfies the condition

$D^{u'}\bigl(a_{e_n}^u T_{e_n} p_r\bigr)\bigl(\phi_{e'_n}(v')\bigr) = \delta_{(r,e_n),(r',e'_n)},$   (4.73)

where $r' = (u',v') \in U \times V_0$.

Since $F_n \subseteq F_{n+1}$, we can decompose $F_{n+1}$ as the direct sum of the space $F_n$ and $G_{n+1}$, defined to be the set of elements in $F_{n+1}$ whose $u$th derivatives, for $u \in U$, vanish at all points in $V_n$. We let $P_n f \in F_n$ be uniquely defined by the conditions

$(D^u P_n f)(v) = D^u f(v), \quad v \in V_n, \ u \in U.$   (4.74)

Hence each $f \in F_{n+1}$ has the representation $f = P_n f + g_n$, where $g_n \in G_{n+1}$ with $G_{n+1} = \{f - P_n f : f \in F_{n+1}\}$, and we have the decomposition

$F_n = F_0 \oplus G_1 \oplus \cdots \oplus G_n.$

Most importantly, the spaces $G_n$ can be generated recursively; the proof follows the pattern of those given for Theorems 4.35 and 4.36. To state the next result we make use of the following notation. For each $n \in \mathbb{N}_0$ and a subset $A \subseteq V_n$, we let $\mathbb{Z}_\mu^n(A)$ denote the subset of $\mathbb{Z}_\mu^n$ consisting of the indices $e_n \in \mathbb{Z}_\mu^n$ such that there exists $r = (u,v) \in U \times V_0$ for which equation (4.73) holds and $\phi_{e_n}(v) \in A$.

Theorem 4.38 If $P$ is a subspace of polynomials, $(P, U, V_0)$ is admissible and $\{W_n : n \in \mathbb{N}\}$ are the set wavelets generated by $V_0$, then

$G_1 = \operatorname{span}\{a_\varepsilon^u\, T_\varepsilon p_r : r = (u,v) \in U \times V_0, \ \varepsilon \in \mathbb{Z}_\mu(W_1)\}$

and

$G_{n+1} = \bigoplus_{\varepsilon\in\mathbb{Z}_\mu} T_\varepsilon G_n, \quad n \in \mathbb{N}.$

Moreover, we have that $G_n = \operatorname{span} \mathbb{G}_n$, where, for $n \in \mathbb{N}$,

$\mathbb{G}_n = \{a_{e_n}^u\, T_{e_n} p_r : r = (u,v) \in U \times V_0, \ e_n \in \mathbb{Z}_\mu^n(W_n)\},$

and the formula

$L^2(\Omega) = \overline{\bigcup_{n\in\mathbb{N}_0} F_n} = F_0 \oplus \Bigl(\bigoplus_{n\in\mathbb{N}} G_n\Bigr)$

holds.


2. Multiscale smooth Hermite interpolation on $[0,1]$

In the following we focus on a construction of smooth multiscale Hermite interpolating polynomials on the interval $[0,1]$ which generate finite-dimensional spaces dense in the Sobolev space $W^{m,p}[0,1]$, where $m$ is a positive integer and $1 \le p < \infty$. To this end, we choose affine mappings $\Phi = \{\phi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$,

$\phi_\varepsilon(t) = (t_{\varepsilon+1} - t_\varepsilon)\, t + t_\varepsilon, \quad \varepsilon \in \mathbb{Z}_\mu,$

where $0 = t_0 < t_1 < \cdots < t_{\mu-1} < t_\mu = 1$ and $\mu > 1$. The invariant set of $\Phi$ is $[0,1]$, and this family of mappings has all the properties delineated earlier. We let $V_0$ be a refinable set containing the endpoints of $[0,1]$, that is, $V_0 = \{v_0, v_1, \ldots, v_{k-1}\}$, where $0 = v_0 < v_1 < \cdots < v_{k-2} < v_{k-1} = 1$. Since the endpoints are the fixed points of the first and last mappings, respectively, $W_1 = V_1 \setminus V_0 \subset (0,1)$.

We let $F_n$ be the space of piecewise polynomials of degree $\le km - 1$ in $W^{m,p}[0,1]$ with knots at $\phi_{e_n}(\{0,1\})$, $e_n \in \mathbb{Z}_\mu^n$. In particular, $F_0$ is the space of polynomials of degree $\le km - 1$ on $[0,1]$, and

$\dim F_n = \mu^n (k-1) m + m, \quad F_n \subseteq F_{n+1}, \ n \in \mathbb{N}_0.$
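The dimension formula can be recovered by the usual spline count: each of the $\mu^n$ subintervals carries $km$ polynomial coefficients, and each of the $\mu^n - 1$ interior knots imposes $m$ smoothness conditions. A sketch of this count (the parameter values are illustrative):

```python
def dim_Fn(n, k, m, mu):
    pieces = mu ** n              # number of subintervals at level n
    free = pieces * k * m         # km coefficients per polynomial piece
    glue = (pieces - 1) * m       # m smoothness conditions per interior knot
    return free - glue

for (n, k, m, mu) in [(0, 3, 2, 2), (3, 3, 2, 2), (2, 4, 1, 3)]:
    assert dim_Fn(n, k, m, mu) == mu ** n * (k - 1) * m + m
```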

This sequence of spaces is dense in $W^{m,p}[0,1]$ for $1 \le p < \infty$.

We construct multiscale bases for these spaces $F_n$ using the solution of the Hermite interpolation problem

$p^{(i)}(\phi_{e_n}(v)) = f^{(i)}(\phi_{e_n}(v)), \quad e_n \in \mathbb{Z}_\mu^n, \ v \in V_0, \ i \in \mathbb{Z}_m,$   (4.75)

which has a unique solution $p \in F_n$ for any $f \in W^{m,p}[0,1]$. Hence, in this special case, the refinability of $V_0$ insures that $(F_n, \mathbb{Z}_m, V_n)$ is admissible.

Let $G_{n+1}$ be the space of all functions $g$ in $F_{n+1}$ such that $g^{(i)}$, $i \in \mathbb{Z}_m$, vanish at all points in $V_n$. A basis for the space $G_n$ can be constructed recursively, starting with interpolating bases for $F_0$ and $F_1$. To this end, for $r = (i,j) \in \mathbb{Z}_m \times \mathbb{Z}_k$ we let $p_r \in F_0$ satisfy the conditions

$p_r^{(i')}(v_{j'}) = \delta_{r,r'}, \quad r' = (i',j') \in \mathbb{Z}_m \times \mathbb{Z}_k.$

Then the set of functions

$\mathbb{F}_0 = \{p_r : r \in \mathbb{Z}_m \times \mathbb{Z}_k\}$

constitutes a Hermite basis for the space $F_0$ relative to $\mathbb{Z}_m \times \mathbb{Z}_k$. To construct a basis for $F_1$, we recall the linear operators $T_\varepsilon$, which in this case have the special form

$$(T_\varepsilon f)(t)=f\!\left(\frac{t-t_\varepsilon}{t_{\varepsilon+1}-t_\varepsilon}\right)\chi_{[t_\varepsilon,t_{\varepsilon+1}]}(t),\quad t\in[0,1].$$
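For intuition, consider the classical case $m=2$, $k=2$, $V_0=\{0,1\}$ (an assumption chosen purely for illustration), so that $km-1=3$ and $\mathbb{F}_0$ consists of cubics; the cardinal functions $p_r$ are then the standard cubic Hermite basis. A minimal sketch that builds them by solving the interpolation conditions:

```python
# Sketch: the cardinal Hermite basis of F0 in the assumed case m = 2, k = 2,
# V0 = {0, 1}: km - 1 = 3, so F0 = cubics and the p_{(i,j)} are the standard
# cubic Hermite basis. We solve p^{(i')}(v_{j'}) = delta for the monomial
# coefficients of each basis function.

def poly_eval(c, x):                    # evaluate sum_q c[q] * x**q
    return sum(cq * x ** q for q, cq in enumerate(c))

def poly_deriv(c):                      # coefficients of the derivative
    return [q * c[q] for q in range(1, len(c))]

def solve(A, b):                        # small Gauss-Jordan elimination
    n = len(A)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(M[r][col]))
        M[col], M[piv] = M[piv], M[col]
        for r in range(n):
            if r != col:
                f = M[r][col] / M[col][col]
                M[r] = [x - f * y for x, y in zip(M[r], M[col])]
    return [M[i][n] / M[i][i] for i in range(n)]

V0 = [0.0, 1.0]
conditions = [(i, j) for i in range(2) for j in range(2)]    # r' = (i', j')
A = []
for ip, jp in conditions:               # row: i'-th derivative of x^q at v_{j'}
    row = []
    for q in range(4):
        c = [0.0] * 4
        c[q] = 1.0
        for _ in range(ip):
            c = poly_deriv(c)
        row.append(poly_eval(c, V0[jp]))
    A.append(row)

basis = {r: solve(A, [1.0 if col == pos else 0.0 for pos in range(4)])
         for col, r in enumerate(conditions)}

# p_{(0,0)} is the familiar Hermite cubic 1 - 3x^2 + 2x^3:
assert all(abs(u - v) < 1e-9 for u, v in zip(basis[(0, 0)], [1, 0, -3, 2]))
```

Applying $T_\varepsilon$ then rescales these cardinal functions to the subinterval $[t_\varepsilon,t_{\varepsilon+1}]$, which is how the level-one functions are assembled in the text.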

Downloaded from http://www.cambridge.org/core, Lund University Libraries, on 17 Oct 2016, subject to the Cambridge Core terms of use, available at http://www.cambridge.org/core/terms. http://dx.doi.org/10.1017/CBO9781316216637


We remark that, in general, the range of the operators $T_\varepsilon$ is not contained in the Sobolev space $W^{m,p}[0,1]$. However, we do have the following fact, whose statement uses, for $1\le p<\infty$, the spaces

$$W^{m,p}_{0-}[0,1]=\{f\in W^{m,p}[0,1] : f^{(i)}(0)=0,\ i\in\mathbb{Z}_{m+1}\},$$

$$W^{m,p}_{0+}[0,1]=\{f\in W^{m,p}[0,1] : f^{(i)}(1)=0,\ i\in\mathbb{Z}_{m+1}\}$$

and

$$W^{m,p}_{0}[0,1]=W^{m,p}_{0-}[0,1]\cap W^{m,p}_{0+}[0,1].$$

Lemma 4.39 The following inclusions hold:

$$T_{\mu-1}\bigl(W^{m,p}_{0-}[0,1]\bigr)\subseteq W^{m,p}_{0-}[0,1],\qquad T_0\bigl(W^{m,p}_{0+}[0,1]\bigr)\subseteq W^{m,p}_{0+}[0,1]$$

and

$$T_\varepsilon\bigl(W^{m,p}_{0}[0,1]\bigr)\subseteq W^{m,p}_{0}[0,1],\quad \varepsilon\in\mathbb{Z}_\mu.$$

Next we show how to use the functions $p_r$ in $F_0$ and the operators $T_\varepsilon$ to construct a basis of $\mathbb{F}_1$. The operators $T_\varepsilon$ may introduce discontinuities when applied to a function in $F_0$, thereby leading to an unacceptable basis for $\mathbb{F}_1$; Lemma 4.39 reveals exactly what happens when we apply $T_\varepsilon$ to $p_r$. Using the operators $T_\varepsilon$, $\varepsilon\in\mathbb{Z}_\mu$, we define, for $r=(i,\ell)\in\mathbb{Z}_m\times\mathbb{Z}_k$ and $j=\varepsilon(k-1)+\ell$,

$$q_{ij}=(t_{\varepsilon+1}-t_\varepsilon)^i\,T_\varepsilon p_r$$

when $\varepsilon=0$, $\ell=0$, or $\varepsilon\in\mathbb{Z}_\mu$, $\ell-1\in\mathbb{Z}_{k-2}$, or $\varepsilon=\mu-1$, $\ell=k-1$, and

$$q_{ij}=(t_\varepsilon-t_{\varepsilon-1})^i\,T_{\varepsilon-1}p_{(i,k-1)}+(t_{\varepsilon+1}-t_\varepsilon)^i\,T_\varepsilon p_{(i,0)}$$

when $\varepsilon-1\in\mathbb{Z}_{\mu-1}$, $\ell=0$. Set

$$F_1=\{q_{ij} : i\in\mathbb{Z}_m,\ j\in\mathbb{Z}_{\mu(k-1)+1}\}.$$

In the next lemma we state some properties of this set of functions. To this end, we number the points in $V_1$ according to the scheme

$$z_j=\phi_\varepsilon(v_\ell),\quad j=\varepsilon(k-1)+\ell,\ \varepsilon\in\mathbb{Z}_\mu,\ \ell\in\mathbb{Z}_k.$$

Lemma 4.40 The set $F_1$ forms a basis for $\mathbb{F}_1$ such that

$$q^{(i')}_{ij}(z_{j'})=\delta_{rr'},\quad r=(i,j),\ r'=(i',j')\in\mathbb{Z}_m\times\mathbb{Z}_{\mu(k-1)+1}. \tag{4.76}$$

Proof Lemma 4.39 ensures that these functions are in $W^{m,p}[0,1]$. Hence it is clear that they are elements of $\mathbb{F}_1$. Moreover, a direct verification leads to the conclusion that these functions satisfy the conditions (4.76).


For a general $n$, we follow the same process as described earlier to construct a basis for the space $\mathbb{F}_n$ from the basis for $\mathbb{F}_{n-1}$. At each level this requires a procedure for eliminating the discontinuities introduced by the operators $T_\varepsilon$. However, we do not construct the basis for $\mathbb{F}_n$ directly for $n>1$. Instead, we turn our attention to the construction of bases for the complement spaces $\mathbb{G}_1,\mathbb{G}_2,\ldots,\mathbb{G}_n$. Surprisingly, the next theorem shows that we can choose $(\mu-1)(k-1)m$ functions from the set $F_1$ to form a basis for $\mathbb{G}_1$, and recursively generate bases for the spaces $\mathbb{G}_n$ from this basis of $\mathbb{G}_1$ by applying the operators $T_\varepsilon$. We see that the construction of bases for $\mathbb{G}_n$ for $n\ge2$ does not require the process of eliminating discontinuities, which is required for the direct construction of bases for $\mathbb{F}_n$, because $\mathbb{G}_1\subseteq W^{m,p}_0[0,1]$.

Theorem 4.41 If $V_0$ is refinable relative to the affine mappings $\Phi$ and

$$G_1=\{q_{ij} : j\in\mathbb{Z}_{\mu(k-1)+1}(W_1),\ i\in\mathbb{Z}_m\},$$

then

$$\mathbb{G}_1=\operatorname{span}G_1.$$

Moreover, if

$$G_{n+1}=\{a^i_{e_n}T_{e_n}q_{ij} : q_{ij}\in G_1,\ i\in\mathbb{Z}_m,\ e_n\in\mathbb{Z}^n_\mu\},$$

where, for $e_n=(\varepsilon_0,\ldots,\varepsilon_{n-1})$,

$$a_{e_n}=\prod_{\iota\in\mathbb{Z}_n}\bigl(t_{\varepsilon_\iota+1}-t_{\varepsilon_\iota}\bigr),$$

then

$$\mathbb{G}_{n+1}=\operatorname{span}G_{n+1},\quad n\in\mathbb{N}.$$

Proof Since $m$ times the cardinality of $W_1$ equals the dimension of the space $\mathbb{G}_1$, the set $G_1$ consists of $(\mu-1)(k-1)m$ linearly independent functions. It remains to show that $G_1\subset\mathbb{G}_1$. To this end, we prove that the derivatives $q^{(i)}_{ij}$, $i\in\mathbb{Z}_m$, of the functions in $G_1$ vanish at all points in $V_0$. Let $q_{ij}\in G_1$. By (4.76) we obtain that $q^{(i)}_{ij}(z_j)=1$ for $z_j\in W_1$, and $q^{(i')}_{ij}(z_{j'})=0$ for $(i',j')\ne(i,j)$. Since $V_0\subset V_1$ and $z_j\notin V_0$, we conclude that $q^{(i)}_{ij}$, $i\in\mathbb{Z}_m$, vanish at all points in $V_0$, and thus $q_{ij}\in\mathbb{G}_1$.

We now prove the second statement of the theorem. It follows from Lemma 4.39 that the functions $a^i_{e_n}T_{e_n}q_{ij}$ belong to $W^{m,p}_0[0,1]$, since $q_{ij}\in W^{m,p}_0[0,1]$. In addition, we have that

$$\bigl(a^i_{e_n}T_{e_n}q_{ij}\bigr)^{(i')}\bigl(\phi_{e'_n}(z_{j'})\bigr)=a^{i-i'}_{e_n}\bigl(T_{e_n}q^{(i')}_{ij}\bigr)\bigl(\phi_{e'_n}(z_{j'})\bigr)=a^{i-i'}_{e_n}\,\delta_{e_ne'_n}\,q^{(i')}_{ij}(z_{j'})=\delta_{e_ne'_n}\delta_{ii'}\delta_{jj'}.$$


This equation implies that these functions are linearly independent in $\mathbb{G}_{n+1}$. Next we observe that the cardinality of the set $G_{n+1}$ is equal to the dimension of $\mathbb{G}_{n+1}$. Consequently, we conclude that $\mathbb{G}_{n+1}=\operatorname{span}G_{n+1}$.

Since

$$\mathbb{F}_n=\mathbb{F}_0\oplus\mathbb{G}_1\oplus\cdots\oplus\mathbb{G}_n$$

and the sequence of spaces $\mathbb{F}_n$, $n\in\mathbb{N}_0$, is dense in the space $W^{m,p}[0,1]$ for $1\le p<\infty$, we obtain the following result.

Theorem 4.42 The equation

$$\mathbb{F}_0\oplus\biggl(\bigoplus_{n\in\mathbb{N}}\mathbb{G}_n\biggr)=W^{m,p}[0,1]$$

holds for $1\le p<\infty$.

In the finite element method, the space $W^{m,p}_0[0,1]$ has a special importance. For this reason we define

$$\mathbb{F}^0_0=\{f\in\mathbb{F}_0 : f^{(i)}(0)=f^{(i)}(1)=0,\ i\in\mathbb{Z}_m\}$$

and observe that $\dim\mathbb{F}^0_0=(k-2)m$, where $\mathbb{F}^0_0=\operatorname{span}F^0_0$ and

$$F^0_0=\{p_{(i,j)} : i\in\mathbb{Z}_m,\ j-1\in\mathbb{Z}_{k-2}\}.$$

Corollary 4.43 The equation

$$\mathbb{F}^0_0\oplus\biggl(\bigoplus_{n\in\mathbb{N}}\mathbb{G}_n\biggr)=W^{m,p}_0[0,1]$$

holds for $1\le p<\infty$.

4.6 Bibliographical remarks

The material presented in this chapter regarding the multiscale bases was mainly taken from [65, 196, 200, 201]. The construction of orthogonal wavelets on invariant sets was originally introduced in [200]. The construction was then extended in [201] to a general bounded domain and to bi-orthogonal wavelets. In particular, the construction of the initial wavelet space was formulated in [201] in terms of a general solution of a matrix completion problem. Later, [65] gave a construction of interpolating wavelets on invariant sets. The concept of a refinable set relative to a family of contractive mappings on a metric space, which define the invariant set, was introduced in [65], and a recursive structure was explored in that paper for multiscale function representation and approximation constructed by interpolation on invariant sets. For the notion of invariant sets, the reader is referred to [148]. The material about multiscale partitions of a multidimensional simplex was originally developed in [74]. Paper [198] constructed refinable sets that admit a unique Lagrange interpolating polynomial (see also [199]). The description of multiscale Hermite interpolation in Section 4.5.2 follows [66]. Moreover, [69] presented a construction of multiscale basis functions and the corresponding multiscale collocation functionals, both having vanishing moments (see also Section 7.1).

For wavelets on an unbounded domain, the reader is referred to [43, 48, 82–84, 92, 93, 97, 98, 100, 101, 232] and the references cited therein.


5

Multiscale Galerkin methods

The main purpose of this chapter is to present fast multiscale Galerkin methods for solving second-kind Fredholm integral equations

$$u-\mathcal{K}u=f \tag{5.1}$$

defined on a compact domain $\Omega\subset\mathbb{R}^d$. The classical Galerkin method using piecewise polynomials, applied to equation (5.1), leads to a linear system of equations with a dense coefficient matrix. Hence the numerical solution of this equation is computationally costly. The multiscale Galerkin method to be described in this chapter makes use of the multiscale feature and the vanishing-moment property of the multiscale piecewise polynomial basis, and results in a linear system with a numerically sparse coefficient matrix. As a result, fast algorithms may be designed based on a truncation of the coefficient matrix. Specifically, the multiscale Galerkin method uses the $L^2$-orthogonal projection as a discretization principle with the multiscale basis functions whose construction is described in Chapter 4.

The fast multiscale Galerkin method is based on a matrix compression scheme. We show that the matrix compression scheme preserves almost the optimal convergence order of the standard Galerkin method, while it reduces the number of nonzero entries of its coefficient matrix from $O(N^2)$ to $O(N\log^\sigma N)$, where $N$ is the size of the matrix and $\sigma$ may be 1 or 2. We also prove that the condition number of the compressed matrix is uniformly bounded, independent of the size of the matrix.

The kernels of the integral operators in which we are interested in this chapter are weakly singular or smooth. We present theoretical results for the weakly singular case in detail and only give comments for the smooth case.


5.1 The multiscale Galerkin method

In this section we present the multiscale Galerkin method for solving equation (5.1). For this purpose, we first describe the properties of multiscale bases necessary for developing the multiscale Galerkin method. These properties are satisfied by the multiscale bases constructed in Chapter 3. However, the multiscale bases constructed in Chapter 3 have other properties that are not essential for developing the multiscale Galerkin method.

5.1.1 Multiscale bases

The multiscale basis requires a multiscale partition of the domain $\Omega$. We assume that there is a family of partitions $\{\Delta_i : i\in\mathbb{N}_0\}$ such that for each scale $i\in\mathbb{N}_0$, $\Delta_i=\{\Delta_{ij} : j\in\mathbb{Z}_{e(i)}\}$, where $e(i)$ denotes the cardinality of $\Delta_i$, has the properties

(1) $\bigcup_{j\in\mathbb{Z}_{e(i)}}\Delta_{ij}=\Omega$;

(2) $\operatorname{meas}(\Delta_{ij}\cap\Delta_{ij'})=0$ for $j,j'\in\mathbb{Z}_{e(i)}$, $j\ne j'$;

(3) $\operatorname{meas}(\Delta_{ij})\sim d_i^d$ for all $j\in\mathbb{Z}_{e(i)}$,

where $d_i=\max\{d(\Delta_{ij}) : j\in\mathbb{Z}_{e(i)}\}$. Here the notation $a_i\sim b_i$ for $i\in\mathbb{N}_0$ means that there are positive constants $c_1$ and $c_2$ such that $c_1a_i\le b_i\le c_2a_i$ for all $i\in\mathbb{N}_0$.

In addition, we assume that

(4) the sets $\Delta_{ij}$, $j\in\mathbb{Z}_{e(i)}$, are star-shaped.

We remark that a set $A\subset\mathbb{R}^d$ is called star-shaped if it contains a point for which the line segment connecting this point and any other point in the set is contained in the set. Such a point is called a center of the set.
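A minimal concrete family satisfying (1)–(3) is uniform dyadic subdivision of $\Omega=[0,1]$ (an assumed example with $d=1$, not taken from the text):

```python
# Sketch: the simplest partition family satisfying (1)-(3), assuming
# Omega = [0, 1] (so d = 1) and uniform dyadic subdivision (illustrative only).

def partition(i):
    """Level-i partition: e(i) = 2**i intervals of equal length."""
    e = 2 ** i
    return [(j / e, (j + 1) / e) for j in range(e)]

for i in range(5):
    cells = partition(i)
    d_i = max(b - a for a, b in cells)                        # maximal diameter
    # (1): the cells cover [0, 1]
    assert abs(sum(b - a for a, b in cells) - 1.0) < 1e-12
    # (2): neighbouring cells overlap only in an endpoint (measure zero)
    assert all(cells[j][1] == cells[j + 1][0] for j in range(len(cells) - 1))
    # (3): meas(cell) ~ d_i ** d with d = 1 (here even with c1 = c2 = 1)
    assert all(abs((b - a) - d_i) < 1e-12 for a, b in cells)
```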

We further suppose that there is a nested sequence of finite-dimensional subspaces $X_n$, $n\in\mathbb{N}_0$, of $X$, that is,

$$X_{n-1}\subset X_n,\quad n\in\mathbb{N}.$$

Thus, for each $n\in\mathbb{N}_0$, a subspace $W_n\subset X_n$ can be defined such that $X_n$ is the orthogonal direct sum of $X_{n-1}$ and $W_n$. Moreover, we assume that $X_n$, $n\in\mathbb{N}_0$, is ultimately dense in $L^2(\Omega)$ in the sense that

$$\overline{\bigcup_{n\in\mathbb{N}_0}X_n}=L^2(\Omega).$$

We then have an orthogonal decomposition of the space $L^2(\Omega)$:

$$L^2(\Omega)=\bigoplus^{\perp}_{n\in\mathbb{N}_0}W_n, \tag{5.2}$$


where we have used the notation $W_0=X_0$. Set $w(n)=\dim W_n$ and $s(n)=\dim X_n$ for $n\in\mathbb{N}_0$. It follows that

$$s(n)=\sum_{i\in\mathbb{Z}_{n+1}}w(i).$$

For each $n\in\mathbb{N}_0$ we introduce an index set $U_n=\{(i,j) : i\in\mathbb{Z}_{n+1},\ j\in\mathbb{Z}_{w(i)}\}$. We also use the notation $U=\{(i,j) : i\in\mathbb{N}_0,\ j\in\mathbb{Z}_{w(i)}\}$. We assume that there is a family of basis functions $\{w_{ij} : (i,j)\in U\}\subset X$ such that

$$W_n=\operatorname{span}\{w_{nj} : j\in\mathbb{Z}_{w(n)}\},\quad n\in\mathbb{N}_0.$$

Thus

$$X_n=\operatorname{span}\{w_{ij} : (i,j)\in U_n\}.$$

We require that the following multiscale properties hold for the partitions and the basis functions.

(I) There is a positive integer $\mu>1$ such that, for $i\in\mathbb{N}_0$,

$$d_i\sim\mu^{-i/d},\qquad w(i)\sim\mu^i\quad\text{and}\quad s(i)\sim\mu^i.$$

(II) There exist positive integers $\rho$ and $\gamma$ such that, for every $i>\gamma$ and $j\in\mathbb{Z}_{w(i)}$ written in the form $j=\nu\rho+s$, where $s\in\mathbb{Z}_\rho$ and $\nu\in\mathbb{N}_0$,

$$w_{ij}(t)=0,\quad t\notin\Delta_{i-\gamma,\nu}.$$

Setting $S_{ij}=\Delta_{i-\gamma,\nu}$, we see that the support of $w_{ij}$ is contained in $S_{ij}$. It can easily be verified that

$$d_i\sim\max\{d(S_{ij}) : j\in\mathbb{Z}_{e(i)}\}.$$

Because of this property, we shall not distinguish $d_i$ from the right-hand side of the above equation.

(III) For any $(i,j)\in U$ with $i\ge1$ and any polynomial $p$ of total degree less than a positive integer $k$,

$$(p,w_{ij})=0,$$

where $(\cdot,\cdot)$ denotes the $L^2$-inner product.

(IV) There is a constant $\theta_0$ such that, for any $(i,j)\in U$,

$$\|w_{ij}\|=1\quad\text{and}\quad\|w_{ij}\|_\infty\le\theta_0\,\mu^{i/2},$$

where $\|\cdot\|$ and $\|\cdot\|_\infty$ denote the $L^2$-norm and the $L^\infty$-norm, respectively.


(V) There is a positive constant $\theta_1$ such that, for all $n\in\mathbb{N}_0$ and $v=\sum_{(i,j)\in U_n}v_{ij}w_{ij}$,

$$\|\mathbf{E}_n\mathbf{v}\|_2\sim\|\mathbf{v}\|_2\quad\text{and}\quad\|\mathbf{v}\|_2\le\theta_1\|v\|,$$

where $\mathbf{v}=[v_{ij} : (i,j)\in U_n]$, $\mathbf{E}_n=\bigl[(w_{i'j'},w_{ij}) : (i',j'),(i,j)\in U_n\bigr]$, and the notation $\|\mathbf{x}\|_p$, $1\le p\le\infty$, for a vector $\mathbf{x}=[x_j : j\in\mathbb{Z}_n]$ denotes the $p$-norm defined by

$$\|\mathbf{x}\|_p=\begin{cases}\Bigl(\sum_{j\in\mathbb{Z}_n}|x_j|^p\Bigr)^{1/p}, & 1\le p<\infty,\\ \max\{|x_j| : j\in\mathbb{Z}_n\}, & p=\infty.\end{cases}$$

(VI) If $P_n$ is the orthogonal projection from $X$ onto $X_n$, then there exists a positive constant $c$ such that, for any $u\in H^k(\Omega)$,

$$\|u-P_nu\|\le c\,d_n^k\,\|u\|_{H^k}.$$

All of these properties are fulfilled by the multiscale basis functions constructed in Chapter 3. In general, the matrix $\mathbf{E}_n$ is a block diagonal matrix. Moreover, if $\{w_{ij} : (i,j)\in U\}$ is a sequence of orthonormal basis functions, then $\mathbf{E}_n$ is the identity matrix and property (V) holds with $\|\mathbf{v}\|_2=\|v\|$. Furthermore, if $X_n$, $n\in\mathbb{N}_0$, are spaces of piecewise polynomials of total degree less than $k$, then the vanishing moment property (III) and the approximation property (VI) hold naturally.
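As a sanity check of properties (III) and (IV), one can take the simplest such basis, the $L^2$-normalized Haar system on $[0,1]$ (an assumed example with $\mu=2$, $k=1$, not the book's piecewise polynomial construction), and verify the vanishing moment and the unit norm by quadrature:

```python
# Sketch: checking properties (III) and (IV) for the simplest multiscale basis,
# the L2-normalized Haar system on [0, 1] (assumed example: mu = 2, k = 1).

def haar(i, j):
    """Haar function w_ij at scale i >= 1, position j, with ||w_ij|| = 1."""
    def w(t):
        a = j / 2 ** (i - 1)
        b = (j + 1) / 2 ** (i - 1)
        if not (a <= t < b):
            return 0.0
        return 2 ** ((i - 1) / 2) * (1.0 if t < (a + b) / 2 else -1.0)
    return w

# Midpoint quadrature on a fine grid.
N = 2 ** 12
ts = [(q + 0.5) / N for q in range(N)]
w = haar(3, 2)
moment = sum(w(t) for t in ts) / N       # (p, w_ij) with p = 1: vanishing moment
norm2 = sum(w(t) ** 2 for t in ts) / N   # ||w_ij||^2
assert abs(moment) < 1e-12               # property (III) with k = 1
assert abs(norm2 - 1.0) < 1e-12          # property (IV): unit L2-norm
```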

5.1.2 Formulation of the multiscale Galerkin method

As we have discussed in Section 3.1, the Galerkin method for equation (5.1) is to find $u_n\in X_n$ that satisfies the operator equation

$$(\mathcal{I}-\mathcal{K}_n)u_n=P_nf, \tag{5.3}$$

where $\mathcal{K}_n=P_n\mathcal{K}|_{X_n}$. It is clear that the following theoretical results hold for the Galerkin method (see Section 3.3).

Theorem 5.1 Let $\mathcal{K}$ be a linear compact operator not having one as its eigenvalue. Then there exists $N>0$ such that for all $n\ge N$ the Galerkin scheme (5.3) has a unique solution $u_n\in X_n$, and there is a constant $c>0$ such that for all $n\ge N$,

$$\|(\mathcal{I}-\mathcal{K}_n)^{-1}\|\le c.$$

Moreover, if the solution $u$ of equation (5.1) satisfies $u\in H^k(\Omega)$, then there exists a positive constant $c$ such that for all $n\ge N$,

$$\|u-u_n\|\le c\,\mu^{-kn/d}\,\|u\|_{H^k}.$$


Using the multiscale bases for the spaces $X_n$ described in the last section, the Galerkin method (5.3) seeks

$$u_n=\sum_{(i,j)\in U_n}u_{ij}w_{ij}\in X_n$$

such that

$$\sum_{(i,j)\in U_n}u_{ij}\,(w_{i'j'},\,w_{ij}-\mathcal{K}w_{ij})=(w_{i'j'},f),\quad (i',j')\in U_n. \tag{5.4}$$

Because the multiscale basis is used, we call (5.4) the multiscale Galerkin method, in order to distinguish it from the traditional Galerkin method.

To write (5.4) in matrix form, we use the lexicographic ordering on $\mathbb{Z}_{n+1}\times\mathbb{Z}_{n+1}$ and define the matrix

$$\mathbf{K}_n=[(w_{i'j'},\mathcal{K}w_{ij}) : (i',j'),(i,j)\in U_n]$$

and the vectors

$$\mathbf{f}_n=[(w_{i'j'},f) : (i',j')\in U_n],\qquad \mathbf{u}_n=[u_{ij} : (i,j)\in U_n].$$

Note that these vectors have length $s(n)$. With this notation, equation (5.4) takes the equivalent matrix form

$$(\mathbf{E}_n-\mathbf{K}_n)\mathbf{u}_n=\mathbf{f}_n. \tag{5.5}$$

Even though the coefficient matrix $\mathbf{K}_n$ is a full matrix, it differs significantly from the matrix $\mathbf{K}_n$ of Section 3.3.1. The use of multiscale basis functions makes the matrix $\mathbf{K}_n$ numerically sparse. By a numerically sparse matrix we mean a matrix in which a significantly large number of entries are very small in magnitude. This forms the basis for developing the fast multiscale Galerkin method. We illustrate this observation with the following example.

Example 5.2 Consider $\Omega=[0,1]$ and the compact integral operator with kernel

$$K(s,t)=\log|s-t|,\quad s,t\in[0,1].$$

We choose $X_n$ as the space of piecewise linear functions with knots $j/2^n$, $j\in\mathbb{N}_{2^n-1}$. In this case $k=2$. The Galerkin matrix of this operator with respect to the Lagrange interpolating basis is illustrated in Figure 5.1, with $n=6$.

We can see that generating the full matrix and then solving the corresponding linear system incurs a large computational cost when its order is large. The idea for overcoming this computational deficiency is to change the basis for the piecewise polynomial space so that the projection of the integral operator $\mathcal{K}$ onto the space has a numerically sparse Galerkin matrix under the new basis.


Figure 5.1 The Galerkin matrix with respect to the piecewise linear polynomial basis.

Figure 5.2 The Galerkin matrix with respect to the piecewise linear polynomial multiscale basis.

The Galerkin matrix of this operator with respect to the piecewise linear polynomial multiscale basis described in the last section is illustrated in Figure 5.2, with $n=6$. It can be seen that the absolute values of the entries off the diagonals of the blocks corresponding to different scales of spaces are very small. We can set the entries that are small in magnitude to zero and obtain a sparse matrix, which leads to a fast Galerkin method. We present this fast method and its analysis in the next several sections.
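The numerical sparsity can be reproduced in a few lines. The sketch below (assumptions: the Haar system instead of the piecewise linear multiscale basis, and a tensor-product midpoint rule with offset $s$- and $t$-nodes so the singularity $s=t$ is never sampled) computes the Galerkin matrix for $K(s,t)=\log|s-t|$ and counts the entries that are not negligible:

```python
# Sketch: numerical sparsity of the Galerkin matrix of K(s,t) = log|s - t|
# in a multiscale basis. Assumptions (for simplicity, not the book's setup):
# the Haar system on [0,1] instead of piecewise linears, and a tensor-product
# midpoint rule whose s- and t-nodes are offset so s = t is never sampled.
import math

N = 128
s_nodes = [(q + 0.25) / N for q in range(N)]
t_nodes = [(q + 0.75) / N for q in range(N)]      # offset avoids the singularity
L = [[math.log(abs(s - t)) for t in t_nodes] for s in s_nodes]

def haar_vals(i, j, nodes):
    """L2-normalized Haar function w_ij sampled at `nodes` (i = 0: constant)."""
    if i == 0:
        return [1.0] * len(nodes)
    a, h = j / 2 ** (i - 1), 1.0 / 2 ** (i - 1)
    amp = 2 ** ((i - 1) / 2)
    return [amp if a <= x < a + h / 2 else (-amp if a + h / 2 <= x < a + h else 0.0)
            for x in nodes]

idx = [(i, j) for i in range(5) for j in range(max(1, 2 ** (i - 1)))]
Ws = [haar_vals(i, j, s_nodes) for i, j in idx]
Wt = [haar_vals(i, j, t_nodes) for i, j in idx]
n = len(idx)                                      # dim X_4 = 16

# Entries (w_a, K w_b) via the quadrature: K = Ws . L . Wt^T / N^2.
M = [[sum(w[q] * L[q][r] for q in range(N)) for r in range(N)] for w in Ws]
K = [[sum(M[a][r] * Wt[b][r] for r in range(N)) / N ** 2 for b in range(n)]
     for a in range(n)]

big = sum(1 for row in K for x in row if abs(x) > 1e-3)
print(big, "of", n * n, "entries exceed 1e-3 in magnitude")
```

The vanishing moment of the Haar functions makes entries between well-separated fine-scale pairs tiny, which is exactly the decay quantified by Lemma 5.3 below in the text.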

5.2 The fast multiscale Galerkin method

In this section we develop the fast multiscale Galerkin method based on a matrix truncation strategy. We consider two classes of kernels. Class one consists of kernels having a weak singularity along the diagonal. Specifically, for $\sigma\in[0,d)$ and an integer $k\ge1$ we define the class $S^{\sigma,k}$: we say that $K\in S^{\sigma,k}$ if, for $s,t\in\Omega$ with $s\ne t$, $K$ has continuous partial derivatives $D^\alpha_sD^\beta_tK(s,t)$ for $|\alpha|\le k$, $|\beta|\le k$, and there exists a constant $c>0$ such that, for $|\alpha|=|\beta|=k$,

$$|D^\alpha_sD^\beta_tK(s,t)|\le\frac{c}{|s-t|^{\sigma+2k}},\quad s,t\in\Omega. \tag{5.6}$$

Related to the kernel bound on the right-hand side of (5.6), we remark that when $\sigma=0$ the function $1/x^\sigma$ is understood as $\log x$. Class two consists of kernels $K\in C^k(\Omega\times\Omega)$; kernels in this class are smooth.

Set

$$K_{i'j',ij}=(w_{i'j'},\mathcal{K}w_{ij}),\quad (i,j),(i',j')\in U_n,$$

and observe that the $K_{i'j',ij}$ are the entries of the matrix $\mathbf{K}_n$. In the next lemma we estimate bounds for $K_{i'j',ij}$.

Lemma 5.3 Suppose that conditions (I)–(IV) hold.

(1) If $K\in S^{\sigma,k}$ for some $\sigma\in[0,d)$ and a positive integer $k$, and there is a constant $r>1$ such that

$$\operatorname{dist}(S_{ij},S_{i'j'})\ge\max\{rd_i,rd_{i'}\},$$

then there exists a positive constant $c$ such that for all $(i,j),(i',j')\in U$, $i,i'\in\mathbb{N}$,

$$|K_{i'j',ij}|\le c\,(d_id_{i'})^{k-\frac d2}\min\left\{d_{i'}^{d}\max_{s\in S_{i'j'}}\int_{S_{ij}}\frac{dt}{|s-t|^{2k+\sigma}},\ \ d_i^{d}\max_{t\in S_{ij}}\int_{S_{i'j'}}\frac{ds}{|s-t|^{2k+\sigma}}\right\}.$$

(2) If $K\in C^k(\Omega\times\Omega)$, then there exists a positive constant $c$ such that for all $i,i'\in\mathbb{N}$,

$$|K_{i'j',ij}|\le c\,d_{i'}^{k+d/2}\,d_i^{k+d/2}.$$


Proof We present a proof for part (1) only, since the proof for part (2) is similar. This is done by using the Taylor theorem. By hypothesis, for each $(i,j)\in U$ the set $S_{ij}$ is star-shaped. Let $s_0$ and $t_0$ be centers of the sets $S_{i'j'}$ and $S_{ij}$, respectively. It follows from the Taylor theorem that

$$K(s,t)=p(s,t)+q(s,t)+\sum_{|\alpha|=k}\sum_{|\beta|=k}\frac{(s-s_0)^\alpha(t-t_0)^\beta}{\alpha!\,\beta!}\,r_{\alpha\beta}(s,t),$$

where $p(s,\cdot)$ and $q(\cdot,t)$ are polynomials of total degree less than $k$ in $t$ and in $s$, respectively, and

$$r_{\alpha\beta}(s,t)=\int_0^1\!\!\int_0^1 D^\alpha_sD^\beta_tK\bigl(s_0+\theta_1(s-s_0),\,t_0+\theta_2(t-t_0)\bigr)(1-\theta_1)^{k-1}(1-\theta_2)^{k-1}\,d\theta_1\,d\theta_2.$$

By conditions (II) and (III) we have that

$$K_{i'j',ij}=\sum_{|\alpha|=k}\sum_{|\beta|=k}\int_{S_{i'j'}}\int_{S_{ij}}\frac{(s-s_0)^\alpha(t-t_0)^\beta}{\alpha!\,\beta!}\,r_{\alpha\beta}(s,t)\,w_{i'j'}(s)\,w_{ij}(t)\,ds\,dt.$$

This, with conditions (I) and (IV), yields the bound

$$|K_{i'j',ij}|\le c\,d_i^{k-\frac d2}d_{i'}^{k-\frac d2}\sum_{|\alpha|=k}\sum_{|\beta|=k}\frac{1}{\alpha!\,\beta!}\int_{S_{i'j'}}\int_{S_{ij}}|r_{\alpha\beta}(s,t)|\,ds\,dt. \tag{5.7}$$

We conclude from the mean-value theorem and the hypothesis $K\in S^{\sigma,k}$ that there exist $s'\in S_{i'j'}$ and $t'\in S_{ij}$ such that

$$|r_{\alpha\beta}(s,t)|=k^{-2}\,|D^\alpha_sD^\beta_tK(s',t')|\le\frac{c}{|s'-t'|^{2k+\sigma}}.$$

The assumption of this lemma yields

$$|s'-t'|\ge|s'-t|-d_i\ge(1-r^{-1})\,|s'-t|.$$

Thus, for a new constant $c$,

$$|r_{\alpha\beta}(s,t)|\le\frac{c}{|s'-t|^{2k+\sigma}}.$$

This inequality, with (5.7) and the relationship $\operatorname{meas}(S_{i'j'})\sim d_{i'}^d$, leads to the desired estimate.

The above lemma shows that most of the entries are so small that they can be neglected without affecting the overall accuracy of the approximation scheme. This observation leads to a matrix truncation strategy. To present it, we partition the matrix $\mathbf{K}_n$ into a block matrix

$$\mathbf{K}_n=[\mathbf{K}_{i'i} : i',i\in\mathbb{Z}_{n+1}]\quad\text{with}\quad \mathbf{K}_{i'i}=[K_{i'j',ij} : j'\in\mathbb{Z}_{w(i')},\ j\in\mathbb{Z}_{w(i)}].$$


For each $i',i\in\mathbb{Z}_{n+1}$ we choose a truncation parameter $\delta^n_{i'i}$, which will be specified later. For the weakly singular case we define

$$\tilde K_{i'j',ij}=\begin{cases}K_{i'j',ij}, & \operatorname{dist}(S_{i'j'},S_{ij})\le\delta^n_{i'i},\\ 0, & \text{otherwise},\end{cases} \tag{5.8}$$

and obtain the truncated matrix

$$\tilde{\mathbf{K}}_n=[\tilde{\mathbf{K}}_{i'i} : i',i\in\mathbb{Z}_{n+1}],$$

where

$$\tilde{\mathbf{K}}_{i'i}=\tilde{\mathbf{K}}(\delta^n_{i'i})_{i'i}=[\tilde K_{i'j',ij} : j'\in\mathbb{Z}_{w(i')},\ j\in\mathbb{Z}_{w(i)}].$$

Likewise, for the smooth case we define, for each $i',i\in\mathbb{N}$,

$$\tilde{\mathbf{K}}_{i'i}=\begin{cases}\mathbf{K}_{i'i}, & i+i'\le n,\\ \mathbf{0}, & \text{otherwise}.\end{cases} \tag{5.9}$$

This truncation strategy leads to the fast multiscale Galerkin method, which is to find $\tilde{\mathbf{u}}_n=[\tilde u_{ij} : (i,j)\in U_n]\in\mathbb{R}^{s(n)}$ such that

$$(\mathbf{E}_n-\tilde{\mathbf{K}}_n)\tilde{\mathbf{u}}_n=\mathbf{f}_n. \tag{5.10}$$

Example 5.4 We again consider the compact integral operator with the kernel

$$K(s,t)=\log|s-t|,\quad s,t\in[0,1],$$

and choose $X_n$ as the space of piecewise linear functions ($k=2$) with knots $j/2^n$, $j\in\mathbb{N}_{2^n-1}$. The truncated Galerkin matrix of this operator with respect to the piecewise linear polynomial multiscale basis is illustrated in Figure 5.3, with $n=6$.

The analysis of the fast multiscale Galerkin method requires the availability of an operator form of equation (5.10). To this end, we first introduce the concept of the matrix representation of an operator.

Definition 5.5 The matrix $\mathbf{B}$ is said to be the matrix representation of the linear operator $\mathcal{A}$ relative to the basis $\Phi=\{\phi_j : j\in\mathbb{N}_n\}$ if

$$\Phi^T\mathbf{B}=\mathcal{A}(\Phi^T).$$

Proposition 5.6 The matrix representation of the operator $\mathcal{K}$ relative to the basis $W_n=\{w_{ij} : (i,j)\in U_n\}$ is $\mathbf{B}_n=\mathbf{E}_n^{-1}\mathbf{K}_n$.

Proof Let $\mathbf{B}_n=[b_{i'j',ij} : (i,j),(i',j')\in U_n]$ be the matrix representation of the operator $\mathcal{K}$ relative to the basis $W_n$. According to Definition 5.5, we have that

$$\mathcal{K}w_{ij}=\sum_{(k,l)\in U_n}b_{kl,ij}\,w_{kl}\quad\text{for all }(i,j)\in U_n.$$


Figure 5.3 (a) The truncated Galerkin matrix with respect to the piecewise linear polynomial multiscale basis. (b) The nonzero entries of the truncated matrix (nz = 1638).

This leads to

$$(w_{i'j'},\mathcal{K}w_{ij})=\sum_{(k,l)\in U_n}b_{kl,ij}\,(w_{i'j'},w_{kl})\quad\text{for all }(i,j),(i',j')\in U_n,$$

which means $\mathbf{K}_n=\mathbf{E}_n\mathbf{B}_n$ and completes the proof.
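Proposition 5.6 can be checked on a tiny concrete example (assumed data, not from the text): the non-orthogonal basis $\{1,x\}$ on $[0,1]$ and the differentiation operator $\mathcal{A}$, in exact rational arithmetic:

```python
# Sketch verifying K_n = E_n B_n (Proposition 5.6) for assumed toy data:
# basis {1, x} on [0, 1] with Gram matrix E, operator A = d/dx.
from fractions import Fraction as F

E = [[F(1), F(1, 2)],          # E[p][q] = (phi_p, phi_q), phi_0 = 1, phi_1 = x
     [F(1, 2), F(1, 3)]]
K = [[F(0), F(1)],             # K[p][q] = (phi_p, A phi_q); A1 = 0, Ax = 1
     [F(0), F(1, 2)]]
B = [[F(0), F(1)],             # representation: A phi_q = sum_p B[p][q] phi_p
     [F(0), F(0)]]             # (Ax = 1 = 1*phi_0 + 0*phi_1)

def matmul(X, Y):
    return [[sum(X[i][k] * Y[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

assert matmul(E, B) == K       # K_n = E_n B_n, i.e. B_n = E_n^{-1} K_n
```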

We next convert the linear system (5.10) to an abstract operator equation form. Let $\beta_{i'j',ij}$, $(i,j),(i',j')\in U_n$, denote the entries of the matrix $\mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n\mathbf{E}_n^{-1}$, and let

$$\tilde K_n(s,t)=\sum_{(i,j),(i',j')\in U_n}\beta_{i'j',ij}\,w_{i'j'}(s)\,w_{ij}(t).$$

We denote by $\tilde{\mathcal{K}}_n$ the integral operator defined by the kernel $\tilde K_n(s,t)$.

Proposition 5.7 Solving the linear system (5.10) is equivalent to finding

$$\tilde u_n=\sum_{(i,j)\in U_n}\tilde u_{ij}w_{ij}\in X_n$$

such that

$$(\mathcal{I}-\tilde{\mathcal{K}}_n)\tilde u_n=P_nf.$$


Proof It follows, for $(i,j),(i',j')\in U_n$, that

$$(w_{i'j'},\tilde{\mathcal{K}}_nw_{ij})=\Bigl(w_{i'j'},\int_\Omega\tilde K_n(\cdot,t)\,w_{ij}(t)\,dt\Bigr)=\sum_{(k,l),(k',l')\in U_n}\beta_{k'l',kl}\,(w_{kl},w_{ij})(w_{i'j'},w_{k'l'})$$

$$=\sum_{(k,l),(k',l')\in U_n}(\mathbf{E}_n)_{i'j',k'l'}\,\beta_{k'l',kl}\,(\mathbf{E}_n)_{kl,ij}=(\tilde{\mathbf{K}}_n)_{i'j',ij}, \tag{5.11}$$

which means

$$\tilde{\mathbf{K}}_n=[(w_{i'j'},\tilde{\mathcal{K}}_nw_{ij}) : (i,j),(i',j')\in U_n]$$

and leads to the desired result of this proposition.

The analysis of the fast multiscale Galerkin method, with an appropriate choice of the truncation parameters $\delta^n_{i'i}$, will be discussed in the next section.

5.3 Theoretical analysis

In this section we analyze the fast multiscale Galerkin method. Specifically, we show that the number of nonzero entries of the truncated matrix is of linear order up to a logarithmic factor, prove that the method is stable, and show that it gives an almost optimal order of convergence. We also prove that the condition number of the truncated matrix is uniformly bounded. We consider the weakly singular case in Sections 5.3.1–5.3.3; special results for the smooth case will be presented in the last subsection without proof.

5.3.1 Computational complexity

The computational complexity of the fast multiscale Galerkin method is measured in terms of the number of nonzero entries of the truncated matrix. In this subsection we estimate the number of nonzero entries of the matrix $\tilde{\mathbf{K}}_n$. For a matrix $\mathbf{A}$, we denote by $\mathcal{N}(\mathbf{A})$ the number of its nonzero entries.

Lemma 5.8 If conditions (I) and (II) hold, then there exists a constant $c>0$ such that for all $i',i\in\mathbb{N}_0$ and for all $n\in\mathbb{N}$,

$$\mathcal{N}(\tilde{\mathbf{K}}_{i'i})\le c\,\mu^{i+i'}\bigl(d_i^d+d_{i'}^d+(\delta^n_{i'i})^d\bigr).$$


Proof For fixed $i$, $i'$, $j'$ and an arbitrarily fixed point $s_0$ in $S_{i'j'}$, we let

$$S(i,i',j')=\{s\in\mathbb{R}^d : |s-s_0|\le d_i+d_{i'}+\delta^n_{i'i}\}.$$

If $\tilde K_{i'j',ij}\ne0$, then $\operatorname{dist}(S_{i'j'},S_{ij})\le\delta^n_{i'i}$; thus $S_{ij}\subset S(i,i',j')$. Let $N_{ii'j'}$ denote the number of indices $(i,j)$ such that $S_{ij}$ is contained in $S(i,i',j')$. Property (3) of the partition $\Delta_i$ and condition (I) imply that there exists a constant $c>0$ such that

$$N_{ii'j'}\le\frac{\operatorname{meas}(S(i,i',j'))}{\min\{\operatorname{meas}(S_{ij}) : S_{ij}\subset S(i,i',j')\}}\le c\,\mu^i\bigl(d_i+d_{i'}+\delta^n_{i'i}\bigr)^d.$$

It follows from condition (II) that the number of functions $w_{ij}$ having support contained in a given $S_{ij}$ is bounded by $\rho$. Since $w(i')\sim\mu^{i'}$,

$$\mathcal{N}(\tilde{\mathbf{K}}_{i'i})\le\rho\sum_{j'\in\mathbb{Z}_{w(i')}}N_{ii'j'}\le c\,\mu^{i+i'}\bigl(d_i+d_{i'}+\delta^n_{i'i}\bigr)^d,$$

proving the desired result.

To continue estimating the number of nonzero entries of the matrix $\tilde{\mathbf{K}}_n$, we now specify choices of the truncation parameters $\delta^n_{i'i}$. Specifically, for each $i,i'\in\mathbb{Z}_{n+1}$ and for arbitrarily chosen constants $a>0$ and $r>1$, we choose the truncation parameter $\delta^n_{i'i}$ such that

$$\delta^n_{i'i}\le\max\bigl\{a\mu^{[-n+\alpha(n-i)+\alpha'(n-i')]/d},\ rd_i,\ rd_{i'}\bigr\}, \tag{5.12}$$

where $\alpha$ and $\alpha'$ are any numbers in $(-\infty,1]$. The lemma above and this choice of truncation parameters lead to the following estimate of the number of nonzero entries of the matrix $\tilde{\mathbf{K}}_n$.

Theorem 5.9 If the truncation parameters $\delta^n_{i'i}$ are chosen according to (5.12), and if conditions (I) and (II) hold, then

$$\mathcal{N}(\tilde{\mathbf{K}}_n)=\begin{cases}O(s(n)\log^2s(n)), & \alpha=\alpha'=1,\\ O(s(n)\log s(n)), & \text{otherwise}.\end{cases}$$

Proof Because

$$\mathcal{N}(\tilde{\mathbf{K}}_n)=\sum_{i'\in\mathbb{Z}_{n+1}}\sum_{i\in\mathbb{Z}_{n+1}}\mathcal{N}(\tilde{\mathbf{K}}_{i'i}), \tag{5.13}$$

we use Lemma 5.8 to estimate $\mathcal{N}(\tilde{\mathbf{K}}_n)$. The choice (5.12) of the truncation parameters ensures that

$$\delta^n_{i'i}\le a\mu^{[-n+\alpha(n-i)+\alpha'(n-i')]/d}+rd_i+rd_{i'}.$$


Using (5.13) and substituting the above estimate into the inequality of Lemma 5.8, we have that

$$\mathcal{N}(\tilde{\mathbf{K}}_n)\le c\sum_{i\in\mathbb{Z}_{n+1}}\sum_{i'\in\mathbb{Z}_{n+1}}\mu^{i+i'}\Bigl(2\mu^{-i}+2\mu^{-i'}+a^d\mu^{-n+\alpha(n-i)+\alpha'(n-i')}\Bigr)$$

$$=c\Biggl[4(n+1)\sum_{i\in\mathbb{Z}_{n+1}}\mu^{i}+a^d\mu^{n}\Biggl(\sum_{i\in\mathbb{Z}_{n+1}}\mu^{(\alpha-1)(n-i)}\Biggr)\Biggl(\sum_{i'\in\mathbb{Z}_{n+1}}\mu^{(\alpha'-1)(n-i')}\Biggr)\Biggr]$$

$$=\begin{cases}O\bigl(\mu^n(n+1)^2\bigr), & \alpha=\alpha'=1,\\ O\bigl(\mu^n(n+1)\bigr), & \text{otherwise},\end{cases}$$

as $n\to\infty$. This leads to the desired result of the theorem.
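The growth rate in Theorem 5.9 can be observed empirically by counting, for a dyadic one-dimensional multiscale family (assumptions: $\mu=2$, $d=1$, $\delta^n_{i'i}=r\max\{d_i,d_{i'}\}$ with $r=2$, that is, the $a$-term in (5.12) taken negligible), the pairs of basis functions whose supports are within the cut-off; the kept count per basis function grows roughly linearly in $n$:

```python
# Sketch: empirical nonzero count for the cut-off dist <= delta with
# delta = r * max(d_i, d_i') in a dyadic 1-D multiscale family.
# Assumed parameters (not from the text): mu = 2, d = 1, r = 2; this mimics
# (5.12) with the a-term taken negligible.

def supports(level):
    """Supports of the level-`level` basis functions on [0, 1]."""
    if level == 0:
        return [(0.0, 1.0)]
    h = 2.0 ** (1 - level)
    return [(j * h, (j + 1) * h) for j in range(2 ** (level - 1))]

def dist(a, b):
    return max(0.0, max(a[0], b[0]) - min(a[1], b[1]))

r = 2.0
for n in (4, 6, 8):
    sup = [(i, s) for i in range(n + 1) for s in supports(i)]
    s_n = len(sup)                                  # s(n) = 2**n
    kept = sum(1 for i, a in sup for ip, b in sup
               if dist(a, b) <= r * max(2.0 ** -i, 2.0 ** -ip))
    # kept / (s(n) * (n + 1)) stays roughly constant: N ~ s(n) log s(n)
    print(n, s_n, kept, round(kept / (s_n * (n + 1)), 2))
```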

5.3.2 Stability and convergence

In this subsection we show that the fast multiscale Galerkin method is stable and that it has an almost optimal convergence order.

The first lemma that we present gives an estimate for the discrepancy between the block $\mathbf{K}_{i'i}$ and $\tilde{\mathbf{K}}_{i'i}=\tilde{\mathbf{K}}(\delta)_{i'i}$, where the latter is obtained by using the truncation strategy with parameter $\delta=\delta^n_{i'i}$.

Lemma 5.10 Suppose that $\tilde{\mathbf{K}}_{i'i}$ is obtained from the truncation strategy (5.8) with truncation parameter $\delta$. If conditions (I)–(IV) hold and $K\in S^{\sigma,k}$ for some $\sigma\in[0,d)$ and a positive integer $k$, then for any $r>1$ and $\delta>0$ there exists a constant $c$ such that, when $\delta\ge\max\{rd_i,rd_{i'}\}$,

$$\|\mathbf{K}_{i'i}-\tilde{\mathbf{K}}_{i'i}\|_2\le c\,(d_id_{i'})^k\,\delta^{-\eta},$$

where $\eta=2k-d+\sigma>0$.

Proof By the definition of $\tilde{\mathbf{K}}_{i'i}$ we have that

$$\|\mathbf{K}_{i'i}-\tilde{\mathbf{K}}_{i'i}\|_\infty=\max_{j'\in\mathbb{Z}_{w(i')}}\sum_{j\in Z_\delta}|K_{i'j',ij}|,$$

where

$$Z_\delta=\{j\in\mathbb{Z}_{w(i)} : \operatorname{dist}(S_{ij},S_{i'j'})>\delta\}.$$


It follows from Lemma 5.3 that

$$\|\mathbf{K}_{i'i}-\tilde{\mathbf{K}}_{i'i}\|_\infty\le c\,(d_id_{i'})^{k-\frac d2}d_{i'}^{d}\max_{j'\in\mathbb{Z}_{w(i')}}\max_{s\in S_{i'j'}}\sum_{j\in Z_\delta}\int_{S_{ij}}\frac{dt}{|s-t|^{2k+\sigma}}\le c\,(d_id_{i'})^{k-\frac d2}d_{i'}^{d}\int_{|t|>\delta}\frac{dt}{|t|^{2k+\sigma}}\le c\,(d_id_{i'})^{k-\frac d2}d_{i'}^{d}\,\delta^{-\eta}.$$

Likewise, we have that

$$\|\mathbf{K}_{i'i}-\tilde{\mathbf{K}}_{i'i}\|_1\le c\,(d_id_{i'})^{k-\frac d2}d_i^{d}\,\delta^{-\eta}.$$

Since the spectral radius of a matrix $\mathbf{A}$ is less than or equal to any of its matrix norms,

$$\|\mathbf{A}\|_2^2=\rho(\mathbf{A}^T\mathbf{A})\le\|\mathbf{A}^T\mathbf{A}\|_\infty\le\|\mathbf{A}^T\|_\infty\|\mathbf{A}\|_\infty=\|\mathbf{A}\|_1\,\|\mathbf{A}\|_\infty.$$

Using the above inequality, we have that

$$\|\mathbf{K}_{i'i}-\tilde{\mathbf{K}}_{i'i}\|_2^2\le\|\mathbf{K}_{i'i}-\tilde{\mathbf{K}}_{i'i}\|_1\,\|\mathbf{K}_{i'i}-\tilde{\mathbf{K}}_{i'i}\|_\infty.$$

Substituting both estimates obtained earlier into the right-hand side of the above inequality proves the desired result.

We now describe a second criterion for the choice of the truncation parameters $\delta^n_{i'i}$. For each $i, i' \in Z_{n+1}$ and for arbitrarily chosen constants $a > 0$ and $r > 1$, we choose the truncation parameter $\delta^n_{i'i}$ such that
$$\delta^n_{i'i} \ge \max\left\{ a\,\mu^{[-n + \alpha(n-i) + \alpha'(n-i')]/d},\ r d_i,\ r d_{i'} \right\}, \qquad (5.14)$$
where $\alpha$ and $\alpha'$ are any numbers in $(-\infty, 1]$. For real numbers $a$ and $b$ we set
$$\mu[a, b; n] = \sum_{i \in Z_{n+1}} \mu^{ai/d} \sum_{i' \in Z_{n+1}} \mu^{bi'/d}.$$
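The behavior of $\mu[a,b;n]$ against a factor $\mu^{-en/d}$ drives the estimates that follow. A quick numerical sketch (hypothetical parameter values, $\mu = 2$, $d = 1$) illustrates the decay when $e > \max\{0, a, b, a+b\}$ and the linear growth when $e = a$ and $b = 0$:

```python
def mu_bracket(a, b, n, mu=2.0, d=1.0):
    # mu[a, b; n] = (sum_{i in Z_{n+1}} mu^(a*i/d)) * (sum_{i' in Z_{n+1}} mu^(b*i'/d))
    return (sum(mu ** (a * i / d) for i in range(n + 1))
            * sum(mu ** (b * i / d) for i in range(n + 1)))

def scaled(a, b, e, n, mu=2.0, d=1.0):
    return mu_bracket(a, b, n, mu, d) * mu ** (-e * n / d)

# e > max{0, a, b, a+b}: the scaled quantity tends to 0
assert scaled(1.0, 0.5, 2.0, 40) < scaled(1.0, 0.5, 2.0, 20) < 1.0

# e = a, b = 0: the scaled quantity grows linearly in n (ratio close to 41/21)
ratio = scaled(1.0, 0.0, 1.0, 40) / scaled(1.0, 0.0, 1.0, 20)
assert 1.8 < ratio < 2.1
```

These two regimes are exactly the $O(1)$ and $O(n)$ cases invoked later in the proofs of Theorems 5.12 and 5.13.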

We next estimate the error $\mathcal{R}_n = \mathcal{K}_n - \tilde{\mathcal{K}}_n$ of the truncation operator in terms of the function $\mu[\cdot,\cdot;n]$.

Lemma 5.11 Let $u \in H^m(\Omega)$ with $0 \le m \le k$ and $K \in S^{\sigma,k}$ for some $\sigma \in [0,d)$ and a positive integer $k$. If the truncation parameters $\delta^n_{i'i}$ are chosen according to (5.14) and conditions (I)–(VI) hold, then there exists a positive constant $c$ such that for all $n \in \mathbb{N}_0$,
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u\| \le c\, \mu[k+m-\alpha\eta,\, k-\alpha'\eta;\, n]\, \mu^{-(m+d-\sigma)n/d}\, \|u\|_{H^m}.$$


53 Theoretical analysis 213

Proof For any $u, v \in X$, we project them onto the subspace $X_n$. Hence
$$P_n u = \sum_{i \in Z_{n+1}} (P_i - P_{i-1})u = \sum_{(i,j) \in U_n} u_{ij} w_{ij}$$
for some constants $u_{ij}$, and
$$P_n v = \sum_{i \in Z_{n+1}} (P_i - P_{i-1})v = \sum_{(i,j) \in U_n} v_{ij} w_{ij}$$
for some constants $v_{ij}$, where $P_{-1} = 0$. By the definitions of the operators $\mathcal{K}_n$ and $\tilde{\mathcal{K}}_n$ we have that
$$\big((\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u,\, P_n v\big) = \sum_{i,i' \in Z_{n+1}} \big((\mathcal{K}_n - \tilde{\mathcal{K}}_n)(P_i - P_{i-1})u,\ (P_{i'} - P_{i'-1})v\big) = \sum_{i,i' \in Z_{n+1}} \sum_{j \in Z_{w(i)}} \sum_{j' \in Z_{w(i')}} (K_{i'j',ij} - \tilde{K}_{i'j',ij})\, u_{ij} v_{i'j'}.$$
Set
$$e_n = \big|\big((\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u,\, P_n v\big)\big|.$$
Using the Cauchy–Schwarz inequality and condition (V), we conclude that
$$e_n \le c \sum_{i,i' \in Z_{n+1}} \|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_2\, \|(P_i - P_{i-1})u\|\, \|(P_{i'} - P_{i'-1})v\|.$$
It follows from condition (VI) that for $u \in H^m(\Omega)$ with $0 \le m \le k$,
$$\|(P_i - P_{i-1})u\| \le c\, d_{i-1}^m \|u\|_{H^m}.$$
Combining the above estimates and using Lemma 5.10, we have for $u \in H^m(\Omega)$ and $v \in H^{m'}(\Omega)$ with $0 \le m, m' \le k$ that
$$e_n \le c \sum_{i,i' \in Z_{n+1}} (d_i d_{i'})^k (\delta^n_{i'i})^{-\eta}\, d_{i-1}^m d_{i'-1}^{m'}\, \|u\|_{H^m} \|v\|_{H^{m'}}.$$
Using $d_i \sim \mu^{-i/d}$ and the choice of $\delta^n_{i'i}$, we conclude that
$$e_n \le c\, a^{-\eta} \sum_{i,i' \in Z_{n+1}} \mu^{(k+m-\alpha\eta)(n-i)/d + (k+m'-\alpha'\eta)(n-i')/d}\, \mu^{-(m+m'+d-\sigma)n/d}\, \|u\|_{H^m} \|v\|_{H^{m'}} = c\, a^{-\eta}\, \mu[k+m-\alpha\eta,\, k+m'-\alpha'\eta;\, n] \cdot \mu^{-(m+m'+d-\sigma)n/d}\, \|u\|_{H^m} \|v\|_{H^{m'}}.$$
Since $\mathcal{K}_n - \tilde{\mathcal{K}}_n = P_n(\mathcal{K}_n - \tilde{\mathcal{K}}_n)$, we have for $u \in X$ that
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u\| = \sup_{v \in X,\, v \ne 0} \frac{\big|\big((\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u,\, P_n v\big)\big|}{\|v\|}.$$



Combining this equation with the inequality above, taking $m' = 0$, yields the desired result of this lemma.

The next theorem provides a stability estimate for the operator $I - \tilde{\mathcal{K}}_n$. Recall that for the standard Galerkin method there exist positive constants $c_0$ and $N_0$ such that for all $n > N_0$,
$$\|(I - \mathcal{K}_n)v\| \ge c_0 \|v\| \quad \text{for all } v \in X_n. \qquad (5.15)$$

Theorem 5.12 Let $K \in S^{\sigma,k}$ for some $\sigma \in [0,d)$ and a positive integer $k$. Suppose that the truncation parameters $\delta^n_{i'i}$ are chosen according to (5.14) with
$$\alpha > \frac{1}{2} - \frac{d-\sigma}{2\eta}, \qquad \alpha' > \frac{1}{2} - \frac{d-\sigma}{2\eta}, \qquad \alpha + \alpha' > 1.$$
If conditions (I)–(VI) hold, then there exist a positive constant $c$ and a positive integer $N$ such that for all $n \ge N$ and $v \in X_n$,
$$\|(I - \tilde{\mathcal{K}}_n)v\| \ge c\|v\|.$$

Proof Note that for any real numbers $a$, $b$ and $e$,
$$\lim_{n \to \infty} \mu[a, b; n]\, \mu^{-en/d} = 0$$
when $e > \max\{0, a, b, a+b\}$. Thus the choice of $\delta^n_{i'i}$ ensures that there exists a positive integer $N$ such that for all $n \ge N$,
$$c\, \mu[k-\alpha\eta,\, k-\alpha'\eta;\, n]\, \mu^{-(d-\sigma)n/d} \le \frac{c_0}{2}.$$
This, with the estimate in Lemma 5.11, leads to
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\| \le \frac{c_0}{2}\|v\| \quad \text{for all } v \in X_n. \qquad (5.16)$$
Combining (5.16) and the stability estimate (5.15) of the standard Galerkin method yields
$$\|(I - \tilde{\mathcal{K}}_n)v\| \ge \|(I - \mathcal{K}_n)v\| - \|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\| \ge \frac{c_0}{2}\|v\|$$
for any $v \in X_n$. This completes the proof.

The above stability estimate ensures that $(I - \tilde{\mathcal{K}}_n)^{-1}$ exists and is uniformly bounded. As a result, the fast multiscale Galerkin method (5.10) has a unique solution for all sufficiently large $n$.

Theorem 5.13 Let $u \in H^k(\Omega)$ and $K \in S^{\sigma,k}$ for some $\sigma \in [0,d)$ and a positive integer $k$. Suppose that the truncation parameters $\delta^n_{i'i}$ are chosen according to (5.14) with $\alpha$ and $\alpha'$ satisfying one of the following conditions:

(i) $\alpha \ge 1$, $\alpha' > \frac{1}{2} - \frac{d-\sigma}{2\eta}$, $\alpha + \alpha' > 1 + \frac{k}{\eta}$; or $\alpha > 1$, $\alpha' \ge \frac{1}{2} - \frac{d-\sigma}{2\eta}$, $\alpha + \alpha' > 1 + \frac{k}{\eta}$; or $\alpha > 1$, $\alpha' > \frac{1}{2} - \frac{d-\sigma}{2\eta}$, $\alpha + \alpha' \ge 1 + \frac{k}{\eta}$;
(ii) $\alpha = 1$, $\alpha' = \frac{k}{\eta}$; or $\alpha = \frac{2k}{\eta}$, $\alpha' = \frac{1}{2} - \frac{d-\sigma}{2\eta}$.

If conditions (I)–(VI) hold, then there exist a positive constant $c$ and a positive integer $N$ such that for all $n \ge N$,
$$\|u - \tilde{u}_n\| \le c\, s(n)^{-k/d} (\log s(n))^{\tau}\, \|u\|_{H^k(\Omega)},$$
where $\tau = 0$ in case (i) and $\tau = 1$ in case (ii).

Proof It follows from Theorem 5.12 that there exist a positive constant $c$ and a positive integer $N$ such that for all $n \ge N$,
$$\|P_n u - \tilde{u}_n\| \le c\, \|(I - \tilde{\mathcal{K}}_n)(P_n u - \tilde{u}_n)\|. \qquad (5.17)$$
Since
$$P_n(I - \mathcal{K})u = (I - \tilde{\mathcal{K}}_n)\tilde{u}_n = P_n f,$$
we have that
$$(I - \tilde{\mathcal{K}}_n)(P_n u - \tilde{u}_n) = P_n(I - \mathcal{K})(P_n u - u) + (\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u. \qquad (5.18)$$
Now, by the triangle inequality we have that
$$\|u - \tilde{u}_n\| \le \|u - P_n u\| + \|P_n u - \tilde{u}_n\|. \qquad (5.19)$$
Using inequality (5.17) and equation (5.18), we obtain that
$$\|P_n u - \tilde{u}_n\| \le c\, \|I - \mathcal{K}\|\, \|P_n u - u\| + c\, \|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u\|.$$
Substituting this estimate into the right-hand side of (5.19) yields
$$\|u - \tilde{u}_n\| \le (1 + c\|I - \mathcal{K}\|)\, \|P_n u - u\| + c\, \|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u\|.$$
It follows from Lemma 5.11 that
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u\| \le c\, \mu[2k - \alpha\eta,\, k - \alpha'\eta;\, n]\, \mu^{-(d-\sigma)n/d}\, \mu^{-kn/d}\, \|u\|_{H^k}.$$
Observing that, as $n \to \infty$,
$$\mu[a, b; n]\, \mu^{-en/d} = \begin{cases} O(1) & \text{if } e \ge a,\ e > b,\ e > a+b,\\ & \text{or } e > a,\ e \ge b,\ e > a+b,\\ & \text{or } e > a,\ e > b,\ e \ge a+b,\\ O(n) & \text{if } e = a,\ b = 0,\ \text{or } e = b,\ a = 0, \end{cases}$$
we obtain, with $a = 2k - \alpha\eta$, $b = k - \alpha'\eta$ and $e = d - \sigma$, that
$$\mu[2k - \alpha\eta,\, k - \alpha'\eta;\, n]\, \mu^{-(d-\sigma)n/d} = \begin{cases} O(1) & \text{in case (i)},\\ O(n) & \text{in case (ii)}. \end{cases}$$
This yields the desired result.

5.3.3 The condition number of the truncated matrix

We show in this subsection that the condition number of the truncated matrix is uniformly bounded. To this end we need a norm equivalence result, which is presented below.

Lemma 5.14 If conditions (II), (IV) and (V) hold, then for any $n \in \mathbb{N}$ and $v = \sum_{(i,j) \in U_n} v_{ij} w_{ij}$,
$$\|v\| \sim \|\mathbf{v}\|_2,$$
where $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$.

Proof Since condition (V) holds, it suffices to prove that there is a positive constant $\theta_2$ such that for all $v$,
$$\|v\| \le \theta_2 \|\mathbf{v}\|_2.$$
It follows from the orthogonal decomposition (5.2) that
$$\|v\|^2 = \sum_{i \in Z_{n+1}} \Big\| \sum_{j \in Z_{w(i)}} v_{ij} w_{ij} \Big\|^2.$$
According to the construction of the partition of $\Omega$ and condition (II), for all $i > \gamma$,
$$\Big\| \sum_{j \in Z_{w(i)}} v_{ij} w_{ij} \Big\|^2 = \sum_{\nu \in Z_{e(i-\gamma)}} \Big\| \sum_{j \in Z(\nu)} v_{ij} w_{ij} \Big\|^2,$$
where $Z(\nu) = \{ j : \mathrm{supp}\, w_{ij} \subseteq S_{ij} = \Omega_{i-\gamma,\nu} \}$. Using the Cauchy–Schwarz inequality and condition (II), we have that
$$\Big\| \sum_{j \in Z(\nu)} v_{ij} w_{ij} \Big\|^2 \le \int_{\Omega} \sum_{j \in Z(\nu)} v_{ij}^2 \sum_{j \in Z(\nu)} w_{ij}^2(t)\, dt \le \rho \sum_{j \in Z(\nu)} v_{ij}^2.$$



The last inequality holds because the cardinality of $Z(\nu)$ is at most $\rho$ and the $L^2$-norm of $w_{ij}$ is equal to 1. Hence we conclude that there is a positive constant $\theta_2$ such that
$$\|v\|^2 \le \theta_2^2 \sum_{(i,j) \in U_n} v_{ij}^2 = \theta_2^2 \|\mathbf{v}\|_2^2.$$
This completes the proof.

With the help of the above lemma, we are ready to show that the condition number of the coefficient matrix
$$\mathbf{A}_n = \mathbf{E}_n - \tilde{\mathbf{K}}_n$$
is uniformly bounded.

Theorem 5.15 Suppose that $K \in S^{\sigma,k}$ for some $\sigma \in [0,d)$ and a positive integer $k$, and that the truncation parameters $\delta^n_{i'i}$ are chosen according to (5.14) with $\alpha$ and $\alpha'$ satisfying the following conditions:
$$\alpha > \frac{1}{2} - \frac{d-\sigma}{2\eta}, \qquad \alpha' > \frac{1}{2} - \frac{d-\sigma}{2\eta}, \qquad \alpha + \alpha' > 1.$$
If conditions (I)–(VI) hold, then the condition number of the coefficient matrix of the truncated approximate equation (5.10) is bounded; that is, there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,
$$\mathrm{cond}_2(\mathbf{A}_n) \le c.$$

Proof For any $\mathbf{v} = [v_{ij} : (i,j) \in U_n] \in \mathbb{R}^{s(n)}$, let
$$v = \sum_{(i,j) \in U_n} v_{ij} w_{ij}$$
and
$$g = (I - \tilde{\mathcal{K}}_n)v.$$
Thus $g \in X_n$ and it can be written as
$$g = \sum_{(i,j) \in U_n} g_{ij} w_{ij}.$$
Set
$$\mathbf{g} = [g_{ij} : (i,j) \in U_n].$$
It can be verified that
$$\mathbf{g} = (\mathbf{E}_n - \tilde{\mathbf{K}}_n)\mathbf{v}.$$



It follows from Theorem 5.12, Lemma 5.14 and the above equations that there exist a positive constant $c$ and a positive integer $N$ such that for all $n \ge N$,
$$\|\mathbf{v}\|_2 \le c\|v\| \le c\|(I - \tilde{\mathcal{K}}_n)v\| = c\|g\| \le c\|\mathbf{g}\|_2 = c\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_2.$$
This means that
$$\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)^{-1}\|_2 \le c. \qquad (5.20)$$
Conversely, we have that
$$\|(\mathbf{E}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_2 = \|\mathbf{g}\|_2 \le c\|g\| = c\|(I - \tilde{\mathcal{K}}_n)v\|.$$
Note that
$$\|(I - \tilde{\mathcal{K}}_n)v\| \le \|(I - \mathcal{K}_n)v\| + \|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\|.$$
This, with (5.16), implies that
$$\|(I - \tilde{\mathcal{K}}_n)v\| \le (1 + \|\mathcal{K}\|)\|v\| + \frac{c_0}{2}\|v\| \le c\|\mathbf{v}\|_2.$$
Thus
$$\|\mathbf{E}_n - \tilde{\mathbf{K}}_n\|_2 \le c. \qquad (5.21)$$
The result of this theorem follows from (5.20) and (5.21).

To close this section, we would like to know whether we can choose appropriate truncation parameters such that the optimal order of convergence and the optimal computational complexity can both be achieved. Combining Theorems 5.9, 5.12, 5.13 and 5.15 leads to the following.

Theorem 5.16 Let $u \in H^k(\Omega)$ and $K \in S^{\sigma,k}$ for some $\sigma \in [0,d)$ and a positive integer $k$. If conditions (I)–(VI) hold and the $\delta^n_{i'i}$ are chosen as
$$\delta^n_{i'i} = \max\left\{ a\,\mu^{[-n + \alpha(n-i) + \alpha'(n-i')]/d},\ r d_i,\ r d_{i'} \right\}$$
with $\alpha = 1$ and $1 - \frac{k}{\eta} < \alpha' \le 1$, then the following hold: the stability estimate
$$\|(I - \tilde{\mathcal{K}}_n)v\| \ge c\|v\| \quad \text{for all } v \in X_n,$$
the boundedness of the condition number
$$\mathrm{cond}_2(\mathbf{A}_n) \le c,$$
the optimal convergence order
$$\|u - \tilde{u}_n\| \le c\, s(n)^{-k/d}\, \|u\|_{H^k(\Omega)},$$



and the optimal (up to a logarithmic factor) order of complexity
$$\mathcal{N}(\tilde{\mathbf{K}}_n) = \begin{cases} O(s(n) \log^2 s(n)), & \alpha = \alpha' = 1,\\ O(s(n) \log s(n)), & \text{otherwise}. \end{cases}$$

5.3.4 Remarks on the smooth kernel case

In this subsection we present special results for the smooth kernel case. Since the proofs are similar to those for the weakly singular case, we omit the details, except for Lemma 5.19, whose results differ somewhat from those of Lemma 5.11.

Lemma 5.17 If conditions (I)–(IV) hold and $K \in C^k(\Omega \times \Omega)$, then there exists a positive constant $c$ such that for $i, i' \in \mathbb{N}$ and for all $n \in \mathbb{N}$,
$$\|\mathbf{K}_{i'i}\|_2 \le c\, d_i^k d_{i'}^k.$$

To avoid computing the entries whose values are nearly zero, we adopt a special block truncation strategy: we set
$$\tilde{\mathbf{K}}_{i'i} = \begin{cases} \mathbf{K}_{i'i}, & i + i' \le n,\\ \mathbf{0}, & \text{otherwise}, \end{cases} \qquad i', i \in \mathbb{N}, \qquad (5.22)$$
to obtain a sparse truncated matrix
$$\tilde{\mathbf{K}}_n = [\tilde{\mathbf{K}}_{i'i} : i', i \in Z_{n+1}].$$
The following theorems provide the computational complexity, the convergence estimate and the stability of the truncation scheme for integral equations with smooth kernels.

Theorem 5.18 Suppose that condition (I) holds and $K \in C^k(\Omega \times \Omega)$. If the truncated matrix $\tilde{\mathbf{K}}_n$ is chosen as in (5.22), then
$$\mathcal{N}(\tilde{\mathbf{K}}_n) = O(s(n) \log s(n)).$$
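The count behind Theorem 5.18 can be checked directly: with $w(i) \sim \mu^i$ unknowns on level $i$ and $s(n) \sim \mu^n$ unknowns in total, keeping only the blocks with $i + i' \le n$ leaves $O(s(n)\log s(n))$ entries. A small sketch (with hypothetical dimensions $w(0) = 2$, $\mu = 2$, chosen only for illustration):

```python
import math

def w(i, mu=2, w0=2):
    # level dimensions: w(0) = w0, w(i) = w0*(mu-1)*mu**(i-1) ~ mu**i
    return w0 if i == 0 else w0 * (mu - 1) * mu ** (i - 1)

def s(n, mu=2, w0=2):
    return sum(w(i, mu, w0) for i in range(n + 1))

def kept_entries(n, mu=2, w0=2):
    # nonzeros of the block-truncated matrix: keep K_{i'i} only when i + i' <= n
    return sum(w(i, mu, w0) * w(ip, mu, w0)
               for i in range(n + 1) for ip in range(n + 1) if i + ip <= n)

for n in (8, 12, 16):
    kept, full = kept_entries(n), s(n) ** 2
    assert kept < full                                   # genuinely sparse
    assert kept <= 4 * s(n) * (math.log2(s(n)) + 1)      # quasi-linear growth
```

The full matrix has $s(n)^2$ entries, so the saving grows rapidly with $n$.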

Lemma 5.19 Suppose that conditions (I)–(VI) hold and $K \in C^k(\Omega \times \Omega)$. If the truncated matrix $\tilde{\mathbf{K}}_n$ is chosen as in (5.22), then there exists a constant $c$ such that for all $u \in H^k(\Omega)$ and for all $n \in \mathbb{N}$,
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u\| \le c\, \mu^{-kn/d}\, \|u\|_{H^k},$$
and for $u \in L^2(\Omega)$,
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u\| \le c\,(n+1)\, \mu^{-kn/d}\, \|u\|.$$



Proof Similar to the proof of Lemma 5.11, for any $u, v \in X$ with
$$P_n u = \sum_{(i,j) \in U_n} u_{ij} w_{ij}, \qquad P_n v = \sum_{(i,j) \in U_n} v_{ij} w_{ij},$$
we have
$$\big((\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u,\, v\big) = \sum_{i,i' \in Z_{n+1}} \sum_{j \in Z_{w(i)}} \sum_{j' \in Z_{w(i')}} (K_{i'j',ij} - \tilde{K}_{i'j',ij})\, u_{ij} v_{i'j'}.$$
Using the Cauchy–Schwarz inequality and condition (V), we conclude that its absolute value is bounded by
$$c \sum_{i,i' \in Z_{n+1}} \|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_2\, \|(P_i - P_{i-1})u\|\, \|(P_{i'} - P_{i'-1})v\|.$$
It follows from condition (VI) that for $u \in H^m(\Omega)$ with $0 \le m \le k$,
$$\|(P_i - P_{i-1})u\| \le c\, d_{i-1}^m \|u\|_{H^m}.$$
Denote $Z'(i) = \{ i' \in Z_{n+1} : i' > n - i \}$. Combining the above estimates, and using Lemma 5.17 and the truncation strategy (5.22), we have for $u \in H^m(\Omega)$ and $v \in L^2(\Omega)$ that
$$\big|\big((\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u,\, v\big)\big| \le c \sum_{i \in Z_{n+1}} \sum_{i' \in Z'(i)} (d_i d_{i'})^k d_{i-1}^m\, \|u\|_{H^m} \|v\|. \qquad (5.23)$$
Since $d_i \sim \mu^{-i/d}$, a simple computation yields that
$$\sum_{i \in Z_{n+1}} \sum_{i' \in Z'(i)} (d_i d_{i'})^k d_{i-1}^m \le c \sum_{i \in Z_{n+1}} \sum_{i' \in Z'(i)} \mu^{-k(i+i')/d - m(i-1)/d} = c\, \mu^{-kn/d} \sum_{i \in Z_{n+1}} \mu^{-m(i-1)/d} \sum_{i' \in Z'(i)} \mu^{-k(i+i'-n)/d}.$$
For any $i \in Z_{n+1}$,
$$\sum_{i' \in Z'(i)} \mu^{-k(i+i'-n)/d} \le \sum_{l \in \mathbb{N}} \mu^{-kl/d} \le \frac{\mu^{-k/d}}{1 - \mu^{-k/d}},$$
which leads to the fact that there exists a constant $c$ such that
$$\sum_{i \in Z_{n+1}} \sum_{i' \in Z'(i)} (d_i d_{i'})^k d_{i-1}^m \le \begin{cases} c\, \mu^{-kn/d}, & 0 < m \le k,\\ c\,(n+1)\, \mu^{-kn/d}, & m = 0. \end{cases} \qquad (5.24)$$
Combining the inequalities (5.23) and (5.24) with
$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u\| = \sup_{v \in X,\, v \ne 0} \frac{\big|\big((\mathcal{K}_n - \tilde{\mathcal{K}}_n)P_n u,\, v\big)\big|}{\|v\|},$$
we obtain the estimates of this lemma.



The compactness of $\mathcal{K}$ and the properties of the orthogonal projections $P_n$ lead to the stability estimate for the operator equation. This, with the second estimate of Lemma 5.19, yields the following theorem about the stability of the truncated equation.

Theorem 5.20 Suppose that conditions (I)–(VI) hold and $K \in C^k(\Omega \times \Omega)$. If the truncated matrix $\tilde{\mathbf{K}}_n$ is chosen as in (5.22), then there exist a positive constant $c_0$ and an integer $N$ such that for all $n \ge N$ and $x \in X_n$,
$$\|(I - \tilde{\mathcal{K}}_n)x\| \ge c_0 \|x\|.$$

We have the following convergence estimate, similar to Theorem 5.13.

Theorem 5.21 Suppose that conditions (I)–(VI) hold and $K \in C^k(\Omega \times \Omega)$. If the truncated matrix $\tilde{\mathbf{K}}_n$ is chosen as in (5.22), then there exist a positive constant $c$ and an integer $N$ such that for all $n \ge N$,
$$\|u - \tilde{u}_n\| \le c\, s(n)^{-k/d}\, \|u\|_{H^k}.$$

We also have that the condition number of the coefficient matrix $\mathbf{A}_n = \mathbf{E}_n - \tilde{\mathbf{K}}_n$ of the truncated scheme is bounded by a constant independent of $n$.

Theorem 5.22 Suppose that conditions (I)–(VI) hold and $K \in C^k(\Omega \times \Omega)$. If the truncated matrix $\tilde{\mathbf{K}}_n$ is chosen as in (5.22), then the condition number of the coefficient matrix of the truncated approximate equation is bounded; that is, there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,
$$\mathrm{cond}_2(\mathbf{A}_n) \le c.$$

5.4 Bibliographical remarks

Since the 1990s, wavelet and multiscale methods have been developed for solving the Fredholm integral equation of the second kind. The history of fast multiscale solutions of the equation began with the remarkable discovery in [28] that the matrix representation of a singular Fredholm integral operator under a wavelet basis is numerically sparse. This fact was then used in developing the multiscale Galerkin (Petrov–Galerkin) method for solving the Fredholm integral equation; see [5, 64, 68, 88–91, 94, 95, 135, 136, 139, 140, 202, 251, 260, 261] and the references cited therein. Readers are referred to the Introduction of this book for more information. The multiscale piecewise polynomial Petrov–Galerkin, discrete multiscale Petrov–Galerkin and multiscale collocation methods were developed in [64, 68, 69]. We give an in-depth discussion of these methods in the next two chapters. A numerical implementation issue of the multiscale Galerkin method was considered in [109].

The convergence results presented in this chapter are for smooth solutions. However, solutions of the Fredholm integral equation of the second kind with weakly singular kernels may not be smooth. For the case when the solution is not smooth, a fast singularity-preserving multiscale Galerkin method was developed in [46] for solving weakly singular Fredholm integral equations of the second kind. This method was designed based on the singularity-preserving Galerkin method introduced originally in [41] and a matrix truncation strategy similar to the one discussed in Section 5.2.

There are several fast methods in the literature for solving the Fredholm integral equation of the second kind which are closely related to the fast multiscale method. They include the fast multipole method, the panel clustering method and the method of sparse grids. The fast multipole method [114, 115, 235, 250] was originally introduced by V. Rokhlin and L. Greengard based on the multipole expansion. It effectively reduces the computational complexity for a certain type of dense matrix which can arise from many physical systems. The panel clustering method, proposed by W. Hackbusch and Z. Nowak, also significantly lessens the computational complexity (see, for example, [124, 125]). For the method of sparse grids, readers are referred to [36] and the references cited therein. Fast Fourier–Galerkin methods, developed in [37, 53, 154, 155, 263] for solving boundary integral equations, are special cases of the method of sparse grids. Fast methods for solving Fredholm integral equations of the second kind in high dimensions were developed in [272] and [102], based on a combination technique and on lattice integration, respectively.


6

Multiscale Petrov–Galerkin methods

This chapter is devoted to presenting multiscale Petrov–Galerkin methods for solving Fredholm integral equations of the second kind. In a manner similar to the Galerkin method, the Petrov–Galerkin method also suffers from the density of the coefficient matrix of its resulting linear system. We show that, with the multiscale bases, the Petrov–Galerkin method leads to a linear system having a numerically sparse coefficient matrix. We propose a matrix compression scheme for solving the linear system and prove that it almost preserves the optimal convergence order of the numerical solution that the original Petrov–Galerkin method enjoys, while reducing the computational complexity from the square order to the quasi-linear order. We also present the discrete version of the multiscale Petrov–Galerkin method, which further treats the nonzero entries of the compressed coefficient matrix resulting from the multiscale Petrov–Galerkin method by using the product integration method. We call this method the discrete multiscale Petrov–Galerkin method.

In Section 6.1 we first present the development of the multiscale Petrov–Galerkin method and its analysis. We then discuss in Section 6.2 the discrete multiscale Petrov–Galerkin method.

6.1 Fast multiscale Petrov–Galerkin methods

In this section we describe the construction of two sequences of multiscale bases for the trial and test spaces, and use them to develop multiscale Petrov–Galerkin methods for solving second-kind integral equations.




6.1.1 Multiscale bases for Petrov–Galerkin methods

We first review a special case of the recursive construction given in Chapter 4 for piecewise polynomial spaces on $\Omega = [0,1]$, which can be used to develop a multiscale Petrov–Galerkin scheme.

We start with positive integers $k$, $k'$, $\nu$ and $\mu$ which satisfy $k\nu = k'\mu$ and $k' \le k$. We choose our initial trial space and test space to be $X_0 = S^k_\nu$ and $Y_0 = S^{k'}_\mu$, and thereafter we recursively divide the corresponding subintervals into $\mu$ pieces to obtain two sequences of subspaces
$$X_n = S^k_{\nu\mu^n}, \qquad Y_n = S^{k'}_{\mu^{n+1}}, \qquad n \in \mathbb{N}_0.$$
These spaces are referred to as the $(k, k')$ element spaces. We have that
$$\dim X_n = \dim Y_n, \qquad X_n \subset X_{n+1}, \qquad Y_n \subset Y_{n+1}, \qquad n \in \mathbb{N}_0,$$
and
$$\overline{\bigcup_{n \in \mathbb{N}_0} X_n} = \overline{\bigcup_{n \in \mathbb{N}_0} Y_n} = L^2(\Omega).$$
Moreover, $\{X_n, Y_n\}$ forms a regular pair (see Definition 2.30).

We use $\{f_{ij} : (i,j) \in U_n\}$ and $\{h_{ij} : (i,j) \in U_n\}$ for the associated multiscale bases of $X_n$ and $Y_n$, respectively, where $U_n = \{(i,j) : i \in Z_{n+1},\ j \in Z_{w(i)}\}$ with $w(0) = k\nu = k'\mu$ and $w(i) = k\nu(\mu-1)\mu^{i-1} = k'(\mu-1)\mu^i$, $i \in \mathbb{N}$, for the given $k, \nu, k', \mu \in \mathbb{N}$. These bases can be constructed recursively by the method described in Section 4.1 such that both $\{f_{ij} : (i,j) \in U\}$ and $\{h_{ij} : (i,j) \in U\}$ are orthonormal bases of $X = L^2[0,1]$ having some important properties, such as the vanishing moment conditions
$$\int_0^1 t^{\ell} f_{ij}(t)\, dt = 0, \qquad \ell \in Z_k,\ j \in Z_{w(i)},\ i \in \mathbb{N},$$
$$\int_0^1 t^{\ell} h_{ij}(t)\, dt = 0, \qquad \ell \in Z_{k'},\ j \in Z_{w(i)},\ i \in \mathbb{N},$$
and the compact support properties
$$\mathrm{meas}(\mathrm{supp}\, f_{ij}) \le \frac{1}{\mu^{i-1}}, \qquad \mathrm{meas}(\mathrm{supp}\, h_{ij}) \le \frac{1}{\mu^{i-1}}, \qquad j \in Z_{w(i)},\ i \in \mathbb{N}.$$
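For the simplest case $k = k' = 1$, $\mu = 2$ (the Haar family), both properties can be verified directly. The following sketch (illustrative only; the grid size and indexing conventions are our own) represents each $L^2$-normalized Haar function by its values on a dyadic grid and checks orthonormality and the vanishing zeroth moment:

```python
import itertools

L = 6
N = 2 ** L            # fine dyadic grid on [0, 1]
h = 1.0 / N

def haar(i, j):
    """Grid values of the L2-normalized Haar function;
    i = 0 is the scaling function, level i >= 1 carries 2**(i-1) wavelets."""
    if i == 0:
        return [1.0] * N
    m = 2 ** (i - 1)          # number of wavelets on level i
    amp = m ** 0.5            # support length 1/m, amplitude sqrt(m)
    half = N // (2 * m)       # fine cells per half-support
    vals = [0.0] * N
    start = 2 * half * j
    for c in range(half):
        vals[start + c] = amp
        vals[start + half + c] = -amp
    return vals

def inner(u, v):
    return h * sum(a * b for a, b in zip(u, v))

basis = [(0, 0)] + [(i, j) for i in range(1, L + 1) for j in range(2 ** (i - 1))]
for p, q in itertools.combinations(basis[:10], 2):
    assert abs(inner(haar(*p), haar(*q))) < 1e-12        # orthogonality
assert abs(inner(haar(2, 1), haar(2, 1)) - 1.0) < 1e-12  # normalization
assert abs(inner(haar(3, 0), [1.0] * N)) < 1e-12         # vanishing moment (k' = 1)
```

Note that the support of a level-$i$ Haar function has measure $2^{-(i-1)}$, matching the compact support property above with $\mu = 2$.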

The vanishing moment conditions play an important role in developing truncated schemes (see Chapter 5). It is therefore desirable to raise the order of the vanishing moments of the $h_{ij}$ to $k$ when $k' < k$. This can be done as follows.



We first choose a basis $\{g_{0j} : j \in Z_{w(0)}\}$ for $Y_0$ which is bi-orthogonal to $\{f_{0j} : j \in Z_{w(0)}\}$, that is,
$$(f_{0j'}, g_{0j}) = \delta_{jj'}, \qquad j, j' \in Z_{w(0)}.$$
Then, for $j \in Z_{w(1)}$, we find a vector $[c_{js} : s \in Z_{s(1)}] \in \mathbb{R}^{s(1)}$, where $s(i) = \dim Y_i$, $i \in \mathbb{N}_0$, such that
$$g_{1j} = \sum_{s \in Z_{w(0)}} c_{js} h_{0s} + \sum_{s \in Z_{w(1)}} c_{j,w(0)+s} h_{1s}, \qquad j \in Z_{w(1)},$$
satisfies the equations
$$(f_{0j'}, g_{1j}) = 0, \qquad j' \in Z_{w(0)},$$
and
$$(f_{1j'}, g_{1j}) = \delta_{jj'}, \qquad j' \in Z_{w(1)}.$$
Noting that the matrix of order $s(1)$ for this linear system of equations is
$$\mathbf{H} = [(f_{i'j'}, h_{ij}) : (i',j'), (i,j) \in U_1]$$
and that $\{X_n, Y_n\}$ forms a regular pair, we conclude that $\mathbf{H}$ is nonsingular. Thus there exists a unique solution of the above equations. It can easily be verified that the functions $g_{1j}$, $j \in Z_{w(1)}$, are linearly independent and $Y_1 = \mathrm{span}\{g_{ij} : (i,j) \in U_1\}$. Using the isometry operator $T_\varepsilon$ (see (4.2)), we define recursively for $i \in \mathbb{N}$
$$g_{i+1,j} = T_\varepsilon g_{il},$$
where $j = \varepsilon w(i) + l$, $\varepsilon \in Z_\mu$, $l \in Z_{w(i)}$. Then we have that $Y_n = \mathrm{span}\{g_{ij} : (i,j) \in U_n\}$ for $n \in \mathbb{N}_0$. Defining
$$W_i = \mathrm{span}\{f_{ij} : j \in Z_{w(i)}\} \quad \text{and} \quad V_i = \mathrm{span}\{g_{ij} : j \in Z_{w(i)}\}, \qquad i \in \mathbb{N}_0,$$
we have that
$$X_n = \bigoplus_{i \in Z_{n+1}}^{\perp} W_i \quad \text{and} \quad Y_n = \bigoplus_{i \in Z_{n+1}} V_i, \qquad n \in \mathbb{N}_0.$$

Proposition 6.1 The multiscale bases $\{f_{ij} : (i,j) \in U\}$ and $\{g_{ij} : (i,j) \in U\}$ have the following properties.

(I) There exist positive integers $\rho$ and $r$ such that for every $i > r$ and $j \in Z_{w(i)}$, written in the form $j = \nu\rho + s$ where $s \in Z_\rho$ and $\nu \in \mathbb{N}_0$,
$$f_{ij}(x) = 0, \quad g_{ij}(x) = 0, \qquad x \notin \Omega_{i-r,\nu}.$$
Setting $S_{ij} = \Omega_{i-r,\nu}$, the supports of $f_{ij}$ and $g_{ij}$ are then contained in $S_{ij}$.
(II) For any $(i,j), (i',j') \in U$,
$$(f_{ij}, f_{i'j'}) = \delta_{i'i}\delta_{j'j}.$$
(III) For any $(i,j), (i',j') \in U$ with $i' \ge i$,
$$(f_{ij}, g_{i'j'}) = \delta_{i'i}\delta_{j'j}.$$
(IV) For any $(i,j) \in U$ with $i \ge 1$ and any polynomial $p$ of total degree less than $k$,
$$(f_{ij}, p) = 0, \quad (g_{ij}, p) = 0.$$
(V) There is a positive constant $c$ such that for any $(i,j) \in U$,
$$\|f_{ij}\|_\infty \le c\,\mu^{i/2} \quad \text{and} \quad \|g_{ij}\|_\infty \le c\,\mu^{i/2}.$$

Set
$$\mathbf{E}_n = [(g_{i'j'}, f_{ij}) : (i',j'), (i,j) \in U_n].$$
It is useful to make the construction of the matrix $\mathbf{E}_n$ clear.

Lemma 6.2 For any $n \in \mathbb{N}$ the following statements hold.

(i) The matrix $\mathbf{E}_n$ has the block upper-bidiagonal form
$$\mathbf{E}_n = \begin{bmatrix} \mathbf{I}_0 & \mathbf{G}_0 & & & \\ & \mathbf{I}_1 & \mathbf{G}_1 & & \\ & & \ddots & \ddots & \\ & & & \ddots & \mathbf{G}_{n-1} \\ & & & & \mathbf{I}_n \end{bmatrix},$$
where $\mathbf{I}_i$, $i \in \mathbb{N}_0$, is the $w(i) \times w(i)$ identity matrix,
$$\mathbf{G}_0 = [(g_{0j'}, f_{1j}) : j' \in Z_{w(0)},\ j \in Z_{w(1)}], \qquad \mathbf{G}_1 = [(g_{1j'}, f_{2j}) : j' \in Z_{w(1)},\ j \in Z_{w(2)}],$$
and $\mathbf{G}_i$, $i \in \mathbb{N}$, is the block diagonal matrix $\mathrm{diag}(\mathbf{G}_1, \ldots, \mathbf{G}_1)$ with $\mu^{i-1}$ diagonal blocks.
(ii) There exists a positive constant $c$ such that
$$\|\mathbf{E}_n\|_2 \le c.$$

Proof (i) We first partition the matrix $\mathbf{E}_n$ into a block matrix
$$\mathbf{E}_n = [\mathbf{E}_{i'i} : i', i \in Z_{n+1}],$$
where
$$\mathbf{E}_{i'i} = [(g_{i'j'}, f_{ij}) : j' \in Z_{w(i')},\ j \in Z_{w(i)}].$$



It follows from property (III) that
$$\mathbf{E}_{i'i} = \begin{cases} \mathbf{I}_i, & i' = i,\\ \mathbf{0}, & i' > i. \end{cases}$$
When $i \ge i' + 2$, it follows from $g_{i'j'} \in Y_{i'}$, $f_{ij} \in W_i$ and the fact that $Y_{i'} \subseteq X_{i'+1} \perp W_i$ that
$$(g_{i'j'}, f_{ij}) = 0,$$
which means that
$$\mathbf{E}_{i'i} = \mathbf{0} \quad \text{for all } i \ge i' + 2.$$
We finally consider the case $i = i' + 1$. When $i' = 0$ and $i = 1$, it is clear that $\mathbf{E}_{01} = \mathbf{G}_0$. When $i' \ge 1$ and $i = i' + 1$, assume that $g_{i'j'} = T_{e'} g_{1l'}$ and $f_{ij} = T_e f_{2l}$, where $j' = \mu(e')w(1) + l'$ and $j = \mu(e)w(2) + l$ with $e', e \in Z^{i'-1}_\mu$, $l' \in Z_{w(1)}$ and $l \in Z_{w(2)}$. Using Proposition 4.15, we conclude that
$$(g_{i'j'}, f_{ij}) = \delta_{e'e}\, (g_{1l'}, f_{2l}).$$
This means that for $i' \ge 1$, $\mathbf{E}_{i',i'+1} = \mathbf{G}_{i'}$ is the block diagonal matrix $\mathrm{diag}(\mathbf{G}_1, \ldots, \mathbf{G}_1)$.

(ii) It is clear from (i) that
$$\|\mathbf{E}_n\|_\infty = \max\{\|\mathbf{G}_i\|_\infty + 1 : i \in \{0, 1\}\}$$
and
$$\|\mathbf{E}_n\|_1 = \max\{\|\mathbf{G}_i\|_1 + 1 : i \in \{0, 1\}\}.$$
Thus we obtain that
$$\|\mathbf{E}_n\|_2 \le \max\{\|\mathbf{E}_n\|_\infty, \|\mathbf{E}_n\|_1\} \le c := \max\{\|\mathbf{G}_i\|_l + 1 : i \in \{0, 1\},\ l \in \{1, \infty\}\}.$$
This completes the proof.
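The block structure of $\mathbf{E}_n$ is easy to exercise numerically. In the sketch below, $\mathbf{G}_0$ and $\mathbf{G}_1$ are filled with random entries (stand-ins for the true inner products, with hypothetical dimensions $w(0) = w(1) = 2$ and $\mu = 2$), and the norm identities used in part (ii) of the proof are checked:

```python
import random
random.seed(1)

def w(i):
    return 2 if i == 0 else 2 * 2 ** (i - 1)   # w(0) = 2, w(i) = 2^i for i >= 1

def zeros(r, c):
    return [[0.0] * c for _ in range(r)]

def block_diag(B, copies):
    r, c = len(B), len(B[0])
    M = zeros(r * copies, c * copies)
    for t in range(copies):
        for a in range(r):
            for b in range(c):
                M[t * r + a][t * c + b] = B[a][b]
    return M

G0 = [[random.uniform(-1, 1) for _ in range(w(1))] for _ in range(w(0))]
G1 = [[random.uniform(-1, 1) for _ in range(w(2))] for _ in range(w(1))]

n = 3
dims = [w(i) for i in range(n + 1)]
offs = [sum(dims[:i]) for i in range(n + 2)]
E = zeros(offs[-1], offs[-1])
for i in range(n + 1):                         # identity blocks I_i on the diagonal
    for a in range(dims[i]):
        E[offs[i] + a][offs[i] + a] = 1.0
for i in range(n):                             # super-diagonal blocks G_i
    Gi = G0 if i == 0 else block_diag(G1, 2 ** (i - 1))
    for a, row in enumerate(Gi):
        for b, val in enumerate(row):
            E[offs[i] + a][offs[i + 1] + b] = val

def norm_inf(M):
    return max(sum(abs(x) for x in row) for row in M)

def norm_1(M):
    return max(sum(abs(row[c]) for row in M) for c in range(len(M[0])))

assert abs(norm_inf(E) - (1 + max(norm_inf(G0), norm_inf(G1)))) < 1e-12
assert abs(norm_1(E) - (1 + max(norm_1(G0), norm_1(G1)))) < 1e-12
```

The point of the lemma is precisely this: the norms of $\mathbf{E}_n$ are controlled by the two fixed blocks $\mathbf{G}_0$ and $\mathbf{G}_1$ and therefore do not grow with $n$.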

To estimate the norm of an element $u \in X_n$ or $v \in Y_n$, we introduce a sequence of functions $\{\xi_{ij} : (i,j) \in U\}$ which is bi-orthogonal to $\{g_{ij} : (i,j) \in U\}$. To obtain the sequence, we can find $\xi_{ij} \in X_i$, $(i,j) \in U_1$, such that
$$(g_{i'j'}, \xi_{ij}) = \delta_{i'i}\delta_{j'j}, \qquad (i',j'), (i,j) \in U_1,$$
and then set
$$\xi_{ij} = T_e \xi_{1l}, \qquad j = \mu(e)w(1) + l,\ e \in Z^{i-1}_\mu,\ l \in Z_{w(1)}.$$
Using this sequence, we have for $v \in Y_n$ that
$$v = \sum_{(i,j) \in U_n} v_{ij} g_{ij}$$
with $v_{ij} = \langle v, \xi_{ij} \rangle$. Let
$$\mathbf{\Xi}_n = [(\xi_{i'j'}, \xi_{ij}) : (i',j'), (i,j) \in U_n].$$

Lemma 6.3 There exists a positive constant $c$ such that
$$\|\mathbf{\Xi}_n\|_2 \le c.$$

Proof We first estimate the entries of the matrix $\mathbf{\Xi}_n$. The fact that $\{\xi_{ij} : (i,j) \in U\}$ is bi-orthogonal to $\{g_{ij} : (i,j) \in U\}$ implies that the $\xi_{ij}$, $i \in \mathbb{N}$, have vanishing moments of order $k'$. For $i' \ge i$ and $i \in Z_2$, let $t_0$ be the center of the set $S_{i'j'}$ and write $\xi_{ij} = \sum_{m \in Z_k} c_m (t - t_0)^m$ on $S_{i'j'}$. There exists a positive constant $c$ such that
$$|(\xi_{i'j'}, \xi_{ij})| \le c\, d(S_{i'j'})^{k'} \int_{S_{i'j'}} |\xi_{i'j'}(t)|\, dt \le c\, d(S_{i'j'})^{k'+1/2} \|\xi_{i'j'}\| \le c\, \mu^{-i'(k'+1/2)}.$$
When $i' \ge i > 1$, there exist $e', e \in Z^{i-1}_\mu$, $l' \in Z_{w(i'-i+1)}$ and $l \in Z_{w(1)}$ such that
$$j' = \mu(e')w(i'-i+1) + l', \qquad j = \mu(e)w(1) + l,$$
and
$$\xi_{i'j'} = T_{e'}\xi_{i'-i+1,l'}, \qquad \xi_{ij} = T_e \xi_{1l}.$$
Thus
$$|(\xi_{i'j'}, \xi_{ij})| = \delta_{e'e}\, |(\xi_{i'-i+1,l'}, \xi_{1l})| \le c\, \delta_{e'e}\, \mu^{-(i'-i+1)(k'+1/2)}.$$
Combining the above estimates, we obtain for $(i,j), (i',j') \in U_n$ that
$$|(\xi_{i'j'}, \xi_{ij})| \le c\, \delta_{e'e}\, \mu^{-|i'-i|(k'+1/2)},$$
where $e', e \in Z^{|i'-i|}_\mu$.

We next partition $\mathbf{\Xi}_n$ into a block matrix
$$\mathbf{\Xi}_n = [\mathbf{\Xi}_{i'i} : i', i \in Z_{n+1}]$$
with
$$\mathbf{\Xi}_{i'i} = [(\xi_{i'j'}, \xi_{ij}) : j' \in Z_{w(i')},\ j \in Z_{w(i)}],$$



and estimate the norms of these blocks. It can be seen that
$$\|\mathbf{\Xi}_{i'i}\|_\infty = \max_{j' \in Z_{w(i')}} \sum_{j \in Z_{w(i)}} |(\xi_{i'j'}, \xi_{ij})| \le c\, w(|i'-i|)\, \mu^{-|i'-i|(k'+1/2)} \le c\, \mu^{-|i'-i|(k'-1/2)}.$$
Using the above inequality, we have that
$$\|\mathbf{\Xi}_n\|_1 = \|\mathbf{\Xi}_n\|_\infty \le \max_{i' \in Z_{n+1}} \sum_{i \in Z_{n+1}} \|\mathbf{\Xi}_{i'i}\|_\infty \le \frac{2c}{1 - \mu^{-(k'-1/2)}},$$
which leads to the desired result of this lemma.

Using the above lemmas, we can verify the following proposition.

Proposition 6.4 There exist two positive constants $c_-$ and $c_+$ such that for all $n \in \mathbb{N}_0$, all $u \in X_n$ of the form $u = \sum_{(i,j) \in U_n} u_{ij} f_{ij} = \sum_{(i,j) \in U_n} \tilde{u}_{ij} \xi_{ij}$ and all $v \in Y_n$ of the form $v = \sum_{(i,j) \in U_n} v_{ij} g_{ij}$,
$$\|u\| = \|\mathbf{u}\|_2, \qquad (6.1)$$
$$c_- \|u\| \le \|\tilde{\mathbf{u}}\|_2 \le c_+ \|u\|, \qquad (6.2)$$
and
$$c_- \|v\| \le \|\mathbf{v}\|_2 \le c_+ \|v\|, \qquad (6.3)$$
where $\mathbf{u} = [u_{ij} : (i,j) \in U_n]$, $\tilde{\mathbf{u}} = [\tilde{u}_{ij} : (i,j) \in U_n]$ and $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$.

Proof Recall that $\{f_{ij} : (i,j) \in U\}$ is an orthonormal basis of $X$ and $\{\xi_{ij} : (i,j) \in U\}$ is bi-orthogonal to $\{g_{ij} : (i,j) \in U\}$. Therefore, for $(i,j) \in U_n$, $u_{ij} = (u, f_{ij})$, $\tilde{u}_{ij} = (u, g_{ij})$ and $v_{ij} = (v, \xi_{ij})$. Moreover, equation (6.1) holds. It can easily be verified that $\|u\|^2 = \tilde{\mathbf{u}}^T \mathbf{\Xi}_n \tilde{\mathbf{u}}$ and $\tilde{\mathbf{u}} = \mathbf{E}_n \mathbf{u}$. Using Lemmas 6.2, 6.3 and (6.1), we have that
$$\|u\| \le \big(\|\mathbf{\Xi}_n\|_2 \|\tilde{\mathbf{u}}\|_2^2\big)^{1/2} \le c \|\tilde{\mathbf{u}}\|_2 \quad \text{and} \quad \|\tilde{\mathbf{u}}\|_2 \le \|\mathbf{E}_n\|_2 \|\mathbf{u}\|_2 \le c\|u\|,$$
which yield (6.2).

Noting that $Y_n \subseteq X_{n+1}$, any $v \in Y_n$ can be expressed as
$$v = \sum_{(i,j) \in U_{n+1}} (v, g_{ij})\, \xi_{ij}.$$
Thus we have that
$$\hat{\mathbf{v}} = \mathbf{\Xi}_{n+1} \bar{\mathbf{v}},$$



where $\hat{\mathbf{v}} = [(v, \xi_{ij}) : (i,j) \in U_{n+1}]$ and $\bar{\mathbf{v}} = [(v, g_{ij}) : (i,j) \in U_{n+1}]$. By Lemma 3.37 and (6.2), we conclude that
$$\|\hat{\mathbf{v}}\|_2 \le \|\mathbf{\Xi}_{n+1}\|_2 \|\bar{\mathbf{v}}\|_2 \le c\|v\|. \qquad (6.4)$$
On the other hand,
$$\|v\|^2 = \Big( \sum_{(i,j) \in U_n} (v, \xi_{ij})\, g_{ij},\ \sum_{(i,j) \in U_{n+1}} (v, g_{ij})\, \xi_{ij} \Big) = \sum_{(i,j) \in U_n} (v, \xi_{ij})(v, g_{ij}) \le \|\mathbf{v}\|_2 \|\hat{\mathbf{v}}\|_2 \le c\, \|\mathbf{v}\|_2 \|v\|.$$
This, with (6.4), yields (6.3).

6.1.2 Multiscale Petrov–Galerkin methods

We now formulate the Petrov–Galerkin method using multiscale bases for Fredholm integral equations of the second kind, given in the form
$$u - \mathcal{K}u = f, \qquad (6.5)$$
where
$$(\mathcal{K}u)(s) = \int_{\Omega} K(s,t)\, u(t)\, dt,$$
the function $f \in X = L^2(\Omega)$ and the kernel $K \in L^2(\Omega \times \Omega)$ are given, and $u \in X$ is the unknown function to be determined.

We assume that there are two sequences of multiscale functions $\{f_{ij} : (i,j) \in U\}$ and $\{g_{ij} : (i,j) \in U\}$, where $U = \{(i,j) : j \in Z_{w(i)},\ i \in \mathbb{N}_0\}$, such that the subspaces
$$X_n = \mathrm{span}\{f_{ij} : (i,j) \in U_n\} \quad \text{and} \quad Y_n = \mathrm{span}\{g_{ij} : (i,j) \in U_n\}$$
satisfy condition (H) and $\{X_n, Y_n\}$ forms a regular pair. These bases need not be those constructed in the preceding subsection, but they are required to satisfy the properties listed in Propositions 6.1 and 6.4.

The PetrovndashGalerkin method for solving equation (65) seeks a vector un =[uij (i j) isin Un] such that the function

un =sum

(ij)isinUn

uij fij isin Xn


satisfies
$$(g_{i'j'},\ u_n - \mathcal{K}u_n) = (g_{i'j'},\ f), \quad (i',j') \in U_n. \qquad (6.6)$$
Equivalently, we obtain the linear system of equations
$$(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n,$$
where
$$\mathbf{K}_n = [(g_{i'j'}, \mathcal{K}f_{ij}) : (i',j'),(i,j) \in U_n], \quad \mathbf{E}_n = [(g_{i'j'}, f_{ij}) : (i',j'),(i,j) \in U_n]$$
and
$$\mathbf{f}_n = [(g_{ij}, f) : (i,j) \in U_n].$$
The truncated scheme, and the analysis of its convergence and computational complexity, are nearly the same as for the multiscale Galerkin method; we leave them to the reader. Readers are also referred to the discrete version of the multiscale Petrov–Galerkin methods in the next section.

6.2 Discrete multiscale Petrov–Galerkin methods

One finds that the compression strategy for the design of the fast multiscale Petrov–Galerkin method is similar to that of the fast multiscale Galerkin method, and that the practical use of the fast multiscale method requires the numerical computation of the integrals appearing in it. Therefore, in this section we turn our attention to discrete multiscale schemes. We develop a discrete multiscale Petrov–Galerkin (DMPG) method for integral equations of the second kind with weakly singular kernels. A compression strategy for designing fast algorithms is suggested, and estimates for the order of convergence and the computational complexity of the method are provided.

We consider in this section the following Fredholm integral equation:
$$u - \mathcal{K}u = f, \qquad (6.7)$$
where $\mathcal{K}$ is an integral operator with a weakly singular kernel.

The idea we use to develop our DMPG method is to combine the discrete Petrov–Galerkin (DPG) method with multiscale bases, so as to exploit both the vanishing-moment property of the multiscale bases and the algorithms of the DPG method for computing singular integrals. Note that the analysis of the DPG method was carried out in [80] in the $L^\infty$-norm, since this norm is natural for discrete methods which use interpolatory projections. However,


for our DMPG method, in order to make use of the vanishing-moment property of the multiscale bases, we have to switch back and forth between the $L^\infty$-norm and the $L^2$-norm to obtain the necessary estimates. We give special attention to this issue.

6.2.1 DPG methods and $L^p$-stability

We review the abstract framework outlined in [80] for the analysis of discrete numerical methods for Fredholm integral equations of the second kind with weakly singular kernels. To this end, we let $\mathbb{X}$ be a Banach space with norm $\|\cdot\|$ and $\mathbb{V}$ be a subspace of $\mathbb{X}$. We require that $\mathcal{K}: \mathbb{X} \to \mathbb{V}$ be a compact linear operator and that the integral equation (1.4) be uniquely solvable in $\mathbb{X}$ for all $f \in \mathbb{X}$. Note that whenever $f \in \mathbb{V}$, the unique solution of (1.4) is in $\mathbb{V}$. Let $\mathbb{X}_n$, $n \in \mathbb{N}$, be a sequence of finite-dimensional subspaces of $\mathbb{X}$ satisfying
$$\mathbb{V} \subseteq \overline{\bigcup_{n\in\mathbb{N}}\mathbb{X}_n} \subseteq \mathbb{X}.$$

Suppose that the operators $\mathcal{K}$ and $\mathcal{I}$ (the identity on $\mathbb{X}$) are approximated by operators $\mathcal{K}_n: \mathbb{X}\to\mathbb{V}$ and $\mathcal{Q}_n: \mathbb{X}\to\mathbb{X}_n$, respectively. Specifically, we assume that $\mathcal{K}_n$ and $\mathcal{Q}_n$ converge pointwise to $\mathcal{K}$ and $\mathcal{I}$, respectively. An approximation scheme for solving equation (1.4) is defined by the equation
$$(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n)u_n = \mathcal{Q}_n f, \quad n \in \mathbb{N}. \qquad (6.8)$$
This approximation scheme includes the discrete and nondiscrete versions of the Petrov–Galerkin method, the collocation method and the quadrature method as special cases. Under various conditions elucidated in [80], for $n$ large enough equation (6.8) has a unique solution; we discuss this issue later. For now we turn to specifying the operators and other related quantities needed for the definition of the DPG method in our current context. In this section we fix $\mathbb{X} = L^\infty(E)$ and $\mathbb{V} = C(E)$, with $E = [0,1]$, and use the following terminology for the singularity.

Definition 6.5 We say a kernel $K(s,t)$, $s,t \in E = [0,1]$, is quasi-weakly singular provided that
$$\sup_{s\in E}\|K(s,\cdot)\|_1 < \infty$$
and
$$\lim_{s'\to s}\|K(s,\cdot) - K(s',\cdot)\|_1 = 0,$$
where $\|\cdot\|_1$ is the $L^1(E)$-norm on $E$.


It can easily be verified that a weakly singular kernel in the sense of Definition 2.4 with a continuous function $M$ is quasi-weakly singular. Every quasi-weakly singular kernel determines, by the formula
$$(\mathcal{K}u)(s) = \int_E K(s,t)u(t)\,dt, \quad s \in E,\ u \in L^\infty(E), \qquad (6.9)$$
a compact operator from $L^\infty(E)$ into $C(E)$.

For $n \in \mathbb{N}$ we partition $E$ into $N$ (depending on $n$) subintervals, $\pi_0 := \{E_i : i \in \mathbb{Z}_N\}$. That is, we have
$$E = \bigcup_{r\in\mathbb{Z}_N} E_r, \quad \mathrm{meas}(E_i \cap E_j) = 0,\ i \ne j,\ i,j \in \mathbb{Z}_N.$$
Moreover, we assume that as $n \to \infty$ the sequence of partition lengths
$$h := \max\{|E_i| : i \in \mathbb{Z}_N\}$$
goes to zero. For each $i \in \mathbb{Z}_N$, let $F_i$ denote the linear function that maps the interval $E$ one to one and onto $E_i$. Thus $F_i$ has the form
$$F_i(t) = |E_i|t + b_i, \quad t \in E,\ i \in \mathbb{Z}_N, \qquad (6.10)$$
for some constant $b_i$. For every partition $\pi_0$ of $E$ described above and any positive integer $k$, we let $S_k(\pi_0)$ be the space of all functions defined on $E$ which are continuous from the right and on each subinterval $E_i$ are polynomials of degree at most $k-1$ (at the right-most endpoint of $E$ we require that the functions in $S_k(\pi_0)$ be left continuous).

We use the following mechanism to refine a given fixed partition $\pi = \{J_j : j \in \mathbb{Z}_m\}$ of $E$, chosen independently of $n$. For any $i \in \mathbb{Z}_{mN}$, written in the form $i = km + j$, $j \in \mathbb{Z}_m$, $k \in \mathbb{Z}_N$, we define the intervals
$$H_i = F_k(J_j),$$
which collectively determine the partition
$$\pi_0' := \{H_i : i \in \mathbb{Z}_{mN}\}.$$
This partition consists of $N$ "copies" of $\pi$, each of which is placed on one of the subintervals $E_i$, $i \in \mathbb{Z}_N$. Given two partitions $\pi_1$ and $\pi_2$ of $E$ (independent of $n$) and positive integers $k_1$ and $k_2$, we introduce the following trial and test spaces:
$$\mathbb{X}_n = \{f : f \circ F_i \in S_{k_1}(\pi_1),\ i \in \mathbb{Z}_N\} =: S_{k_1}(\pi_0, \pi_1)$$
and
$$\mathbb{Y}_n = S_{k_2}(\pi_0, \pi_2),$$


respectively. These are spaces of piecewise polynomials of degrees $k_1 - 1$ and $k_2 - 1$ on the finer partitions induced by $\pi_1$, $\pi_2$ and $\pi_0$, respectively. To ensure that the spaces $\mathbb{X}_n$ and $\mathbb{Y}_n$ have the same dimension, we require that
$$\dim S_{k_1}(\pi_1) = \dim S_{k_2}(\pi_2) =: \lambda.$$

We choose bases in $\mathbb{X}_n$ and $\mathbb{Y}_n$ in the following manner. Starting with the spaces
$$S_{k_1}(\pi_1) = \mathrm{span}\{\xi_i : i \in \mathbb{Z}_\lambda\}$$
and
$$S_{k_2}(\pi_2) = \mathrm{span}\{\eta_i : i \in \mathbb{Z}_\lambda\},$$
for $j = \lambda i + \ell$, where $i \in \mathbb{Z}_N$ and $\ell \in \mathbb{Z}_\lambda$, we define the functions
$$\xi_j = (\xi_\ell \circ F_i^{-1})\chi_{E_i} \quad\text{and}\quad \eta_j = (\eta_\ell \circ F_i^{-1})\chi_{E_i}.$$
These functions form bases for the spaces $\mathbb{X}_n$ and $\mathbb{Y}_n$, respectively; that is, $\mathbb{X}_n = \mathrm{span}\{\xi_j : j \in \mathbb{Z}_{\lambda N}\}$ and $\mathbb{Y}_n = \mathrm{span}\{\eta_j : j \in \mathbb{Z}_{\lambda N}\}$.
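A minimal sketch of the basis construction $\xi_j = (\xi_\ell \circ F_i^{-1})\chi_{E_i}$, $j = \lambda i + \ell$: a fixed reference basis on $[0,1]$ (here simply $\{1, t\}$, an assumption chosen only for illustration) is pushed onto each panel by the affine map $F_i$ and cut off by the characteristic function:

```python
import numpy as np

# reference basis on E = [0,1]: lambda = 2 with xi_0 = 1, xi_1 = t (illustrative)
ref_basis = [lambda t: np.ones_like(t), lambda t: t]

def local_basis(j, edges, lam=2):
    """xi_j = (xi_l o F_i^{-1}) chi_{E_i} for j = lam*i + l, where
    F_i(t) = |E_i| t + b_i maps E = [0, 1] one-to-one onto panel E_i."""
    i, l = divmod(j, lam)
    a, b = edges[i], edges[i + 1]
    def xi(s):
        s = np.asarray(s, dtype=float)
        inside = (s >= a) & (s < b)                    # chi_{E_i}
        t = np.where(inside, (s - a) / (b - a), 0.0)   # F_i^{-1}(s)
        return np.where(inside, ref_basis[l](t), 0.0)
    return xi

edges = np.linspace(0.0, 1.0, 5)   # N = 4 equal panels
xi5 = local_basis(5, edges)        # j = 5 -> panel i = 2, local index l = 1
vals = xi5(np.array([0.40, 0.625, 0.80]))  # only 0.625 lies in E_2 = [0.5, 0.75)
```

The same push-forward is reused below for the quadrature basis $\zeta_j$ and the nodes $t_j = F_i(t_\ell)$.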

To construct a quadrature formula, we introduce a third piecewise polynomial space $S_{k_3}(\pi_3)$ of dimension $\gamma$, where $\pi_3$ is yet another partition of $E$ (independent of $n$), and choose distinct points $t_j$, $j \in \mathbb{Z}_\gamma$, in $E$ such that there exist unique functions $\zeta_i \in S_{k_3}(\pi_3)$, $i \in \mathbb{Z}_\gamma$, satisfying the interpolation conditions $\zeta_i(t_j) = \delta_{ij}$, $i,j \in \mathbb{Z}_\gamma$. The functions $\zeta_i$, $i \in \mathbb{Z}_\gamma$, form a basis for the space $S_{k_3}(\pi_3)$. As above, for $j = \gamma i + \ell$, where $i \in \mathbb{Z}_N$ and $\ell \in \mathbb{Z}_\gamma$, we define the functions
$$\zeta_j = (\zeta_\ell \circ F_i^{-1})\chi_{E_i}$$
and the points
$$t_j = F_i(t_\ell).$$
We also introduce the subspace
$$\mathbb{Q}_n = S_{k_3}(\pi_0, \pi_3)$$
and observe that
$$\mathbb{Q}_n = \mathrm{span}\{\zeta_j : j \in \mathbb{Z}_{\gamma N}\}.$$
We define the linear projection $\mathcal{Z}_n: \mathbb{X} \to \mathbb{Q}_n$ by
$$\mathcal{Z}_n g = \sum_{j\in\mathbb{Z}_{\gamma N}} g(t_j)\zeta_j, \qquad (6.11)$$
where, for a function $g \in L^\infty(E)$, $g(t)$ is defined in the sense described in Section 3.5.3, that is, by any norm-preserving bounded linear functional which


extends point evaluation at $t$ from $C(E)$ to $L^\infty(E)$ (cf. [21]). For any $x, y \in \mathbb{X}$ we introduce the following discrete inner product:
$$(x,y)_n = \sum_{j\in\mathbb{Z}_{\gamma N}} w_j\,x(t_j)y(t_j), \qquad (6.12)$$
where
$$w_j = \int_E \zeta_j(t)\,dt.$$
Note that
$$w_j = |E_i|\int_E \zeta_\ell(t)\,dt = |E_i|w_\ell,$$
where $j = \gamma i + \ell$ with $i \in \mathbb{Z}_N$ and $\ell \in \mathbb{Z}_\gamma$, and, for every $\ell \in \mathbb{Z}_\gamma$, $w_\ell = \int_E \zeta_\ell(t)\,dt$. Henceforth we assume that $w_\ell > 0$, $\ell \in \mathbb{Z}_\gamma$. In this way, $\|x\|_n := (x,x)_n^{1/2}$ is a semi-norm on $\mathbb{X}$.
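The weights $w_j = \int \zeta_j$ make $(x,y)_n$ an interpolatory quadrature rule applied to the product $xy$. A small sketch (the node locations are chosen arbitrarily, for illustration only):

```python
import numpy as np

def interp_quad_weights(nodes):
    """w_j = int_0^1 zeta_j(t) dt for the Lagrange basis zeta_j at the
    given nodes (zeta_i(t_j) = delta_ij), obtained by requiring the rule
    sum_j w_j p(t_j) to be exact on the monomials 1, t, ..., t^(m-1)."""
    m = len(nodes)
    V = np.vander(nodes, m, increasing=True).T    # V[i, j] = t_j ** i
    moments = 1.0 / np.arange(1, m + 1)           # int_0^1 t^i dt
    return np.linalg.solve(V, moments)

def discrete_inner(x, y, nodes, w):
    """(x, y)_n = sum_j w_j x(t_j) y(t_j), cf. (6.12) with N = 1."""
    return float(np.sum(w * x(nodes) * y(nodes)))

nodes = np.array([0.2, 0.5, 0.9])     # gamma = 3 arbitrary distinct points
w = interp_quad_weights(nodes)
# exact whenever deg(x*y) <= 2, e.g. (t, 1)_n = int_0^1 t dt = 1/2
val = discrete_inner(lambda t: t, lambda t: np.ones_like(t), nodes, w)
```

For general node choices some $w_\ell$ can be negative; the positivity assumption $w_\ell > 0$ is what the type II construction below guarantees.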

We now define a pair of operators using the discrete inner product. Specifically, we define the operator $\mathcal{Q}_n: L^\infty(E) \to \mathbb{X}_n$ by requiring
$$(\mathcal{Q}_n x, y)_n = (x, y)_n, \quad y \in \mathbb{Y}_n. \qquad (6.13)$$
An element $\mathcal{Q}_n x \in \mathbb{X}_n$ satisfying (6.13) is called the discrete generalized best approximation (DGBA) to $x$ from $\mathbb{X}_n$ with respect to $\mathbb{Y}_n$. Similarly, we let $\mathcal{Q}_n': L^\infty(E) \to \mathbb{Y}_n$ be the discrete generalized best approximation projection from $L^\infty(E)$ onto $\mathbb{Y}_n$ with respect to $\mathbb{X}_n$, defined by the equation
$$(v, \mathcal{Q}_n' x)_n = (v, x)_n, \quad v \in \mathbb{X}_n. \qquad (6.14)$$
The following lemma, proved in [80], presents a necessary and sufficient condition for $\mathcal{Q}_n$ and $\mathcal{Q}_n'$ to be well defined. To state this lemma we introduce some matrix notation. Let
$$\Phi = [\xi_i(t_j) : i \in \mathbb{Z}_\lambda,\ j \in \mathbb{Z}_\gamma], \quad \Psi = [\eta_i(t_j) : i \in \mathbb{Z}_\lambda,\ j \in \mathbb{Z}_\gamma], \quad \mathbf{W} = \mathrm{diag}(w_j : j \in \mathbb{Z}_\gamma)$$
and define the square matrix of order $\lambda$
$$\mathbf{M} = \Phi\mathbf{W}\Psi^T.$$

Lemma 6.6 Let $x \in L^\infty(E)$. Then the following statements are equivalent:

(i) the discrete generalized best approximation to $x$ from $\mathbb{X}_n$ with respect to $\mathbb{Y}_n$ is well defined;

(ii) the discrete generalized best approximation to $x$ from $\mathbb{Y}_n$ with respect to $\mathbb{X}_n$ is well defined;


(iii) the functions $\xi_i$, $i \in \mathbb{Z}_\lambda$, $\eta_i$, $i \in \mathbb{Z}_\lambda$, and the points $t_i$, $i \in \mathbb{Z}_\gamma$, have the property that $\mathbf{M}$ is nonsingular.

Moreover, under any one of these conditions, the operators $\mathcal{Q}_n$ and $\mathcal{Q}_n'$ are uniformly bounded projections with $\mathcal{Q}_n(\mathbb{Y}_n) = \mathbb{X}_n$ and $\mathcal{Q}_n'(\mathbb{X}_n) = \mathbb{Y}_n$.
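Condition (iii) is easy to test numerically: form $\mathbf{M} = \Phi\mathbf{W}\Psi^T$ and check $\det\mathbf{M} \ne 0$. A sketch for the Galerkin-like case $\{\xi_i\} = \{\eta_i\} = \{1, t\}$ with the two-point Gauss rule on $[0,1]$ (these choices are made only for illustration):

```python
import numpy as np

# two-point Gauss-Legendre rule shifted to [0, 1]
g = 1.0 / (2.0 * np.sqrt(3.0))
nodes = np.array([0.5 - g, 0.5 + g])
w = np.array([0.5, 0.5])

# Phi[i, j] = xi_i(t_j), Psi[i, j] = eta_i(t_j); here xi = eta = {1, t}
Phi = np.vstack([np.ones_like(nodes), nodes])
Psi = Phi.copy()
M = Phi @ np.diag(w) @ Psi.T    # M = Phi W Psi^T, order lambda = 2
detM = np.linalg.det(M)         # exact moments give M = [[1, 1/2], [1/2, 1/3]]
```

Here $\det\mathbf{M} = 1/12 \ne 0$, so by Lemma 6.6 the DGBA operators are well defined for this pair of spaces.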

It remains to define the operator $\mathcal{K}_n$. For this purpose we express the quasi-weakly singular kernel $K$ as $K_1K_2$, where $K_1 \in C(E\times E)$ and $K_2$ is quasi-weakly singular. Using this factorization, we develop a product integration formula that discretizes the kernel $K_1$ but not the kernel $K_2$. Specifically, we define the operator $\mathcal{K}_n: L^\infty(E) \to C(E)$ by the formula
$$(\mathcal{K}_n x)(s) = \int_E \mathcal{Z}_n\big(K_1(s,\cdot)x(\cdot)\big)(t)\,K_2(s,t)\,dt, \quad s \in E,\ x \in L^\infty(E), \qquad (6.15)$$
and note that
$$(\mathcal{K}_n x)(s) = \sum_{j\in\mathbb{Z}_{\gamma N}} W_j(s)K_1(s,t_j)x(t_j), \quad s \in E,$$
where for all $j \in \mathbb{Z}_{\gamma N}$ we define the function
$$W_j(s) = \int_E K_2(s,t)\zeta_j(t)\,dt, \quad s \in E.$$

With the approximate operators $\mathcal{K}_n$ and $\mathcal{Q}_n$ defined as above, equation (6.8) specifies a DPG scheme for solving (6.7).
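The point of the product integration formula (6.15) is that the singular factor $K_2$ is integrated exactly against the local basis $\zeta_j$, while $K_1$ and $x$ are only sampled. A sketch with $K_2(s,t) = |s-t|^{-\alpha}$ and piecewise-constant $\zeta_j$, so that the moments $W_j(s)$ have a closed form (the midpoint nodes and the test values are illustrative assumptions, not the text's choice of $\zeta_j$):

```python
import numpy as np

ALPHA = 0.5   # weak singularity |s - t|^(-alpha), 0 <= alpha < 1

def G(u):
    """Antiderivative of |u|^(-alpha): sign(u) |u|^(1-alpha) / (1-alpha)."""
    return np.sign(u) * np.abs(u) ** (1.0 - ALPHA) / (1.0 - ALPHA)

def Kn_apply(x, K1, s, edges):
    """(K_n x)(s) = sum_j W_j(s) K1(s, t_j) x(t_j), where the moments
    W_j(s) = int_{I_j} |s - t|^(-alpha) dt are computed in closed form
    and t_j is the midpoint of panel I_j (piecewise-constant zeta_j)."""
    a, b = edges[:-1], edges[1:]
    t = 0.5 * (a + b)                 # interpolation nodes
    W = G(b - s) - G(a - s)           # exact panel integrals of K2(s, .)
    return float(np.sum(W * K1(s, t) * x(t)))

edges = np.linspace(0.0, 1.0, 33)
s = 0.3
val = Kn_apply(lambda t: np.ones_like(t),
               lambda s, t: np.ones_like(t), s, edges)
# with K1 = x = 1 the rule is exact: int_0^1 |s-t|^(-a) dt in closed form
exact = (s ** (1 - ALPHA) + (1 - s) ** (1 - ALPHA)) / (1 - ALPHA)
```

No quadrature node ever meets the singularity: only the smooth data $K_1(s,t_j)x(t_j)$ are sampled, which is exactly why the scheme tolerates the weakly singular factor.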

We now describe two special constructions of the triple of spaces $S_{k_i}(\pi_i)$, $i = 1,2,3$, such that the matrix $\mathbf{M}$ is nonsingular. In the first construction we assume that $k_1 \le k_3$ and $k_1 = rk_2$, where $r$ is a positive integer, and choose the partitions $\pi_1 = \pi_3 = \{E\}$; that is, $S_{k_1}(\pi_1)$ and $S_{k_3}(\pi_3)$ are spaces of polynomials of degrees $k_1 - 1$ and $k_3 - 1$, respectively. We then choose $k_3$ points $t_0 < t_1 < \cdots < t_{k_3-1}$ so that the weights $w_i > 0$ for $i \in \mathbb{Z}_{k_3}$. Thus in this case $\lambda = k_1$ and $\gamma = k_3$. Now we define the partition $\pi_2 = \{[x_i, x_{i+1}] : i \in \mathbb{Z}_r\}$ of the interval $E$ by letting $x_0 = 0$, $x_r = 1$ and choosing $x_i \in (t_{ik_2-1}, t_{ik_2})$, $i - 1 \in \mathbb{Z}_{r-1}$. Thus, for $i \in \mathbb{Z}_{r-1}$, each of the first $r-1$ subintervals $[x_i, x_{i+1}]$ contains exactly $k_2$ points $t_{ik_2+j}$, $j \in \mathbb{Z}_{k_2}$, and the last subinterval $[x_{r-1}, x_r]$ contains exactly $k_3 - k_1 + k_2$ points $t_{(r-1)k_2+j}$, $j \in \mathbb{Z}_{k_3-k_1+k_2}$. We call this a type I construction for the triple of spaces $S_{k_i}(\pi_i)$, $i = 1,2,3$.

Proposition 6.7 For type I spaces $S_{k_i}(\pi_i)$, $i = 1,2,3$, $\det(\mathbf{M}) \ne 0$ holds.

Proof Let
$$\Phi\begin{pmatrix}0, \ldots, k_1-1\\ j_1, \ldots, j_{k_1}\end{pmatrix} \quad\text{and}\quad \Psi\begin{pmatrix}0, \ldots, k_1-1\\ j_1, \ldots, j_{k_1}\end{pmatrix}$$


denote the minors of the matrices $\Phi$ and $\Psi$ corresponding to the columns $j_1, \ldots, j_{k_1}$, respectively. By the Cauchy–Binet formula we have that
$$\det(\mathbf{M}) = \sum_{0\le j_1<\cdots<j_{k_1}\le k_3-1} w_{j_1}\cdots w_{j_{k_1}}\,\Phi\begin{pmatrix}0, \ldots, k_1-1\\ j_1, \ldots, j_{k_1}\end{pmatrix}\Psi\begin{pmatrix}0, \ldots, k_1-1\\ j_1, \ldots, j_{k_1}\end{pmatrix}.$$
Let $\Phi_1$ and $\Psi_1$ be the submatrices of $\Phi$ and $\Psi$ consisting of the first $k_1$ columns, respectively. Since the triple of spaces $S_{k_i}(\pi_i)$, $i = 1,2,3$, is a type I construction, both of these matrices are invertible, and hence it follows that
$$\det(\mathbf{M}) \ge \det(\Phi_1)\det(\Psi_1)\,w_1\cdots w_{k_1} > 0. \qquad\square$$

Note that for a type I construction the partition $\pi_2$ may not be equally spaced. As our multiscale construction requires this property for both $\pi_1$ and $\pi_2$, we present a second construction which guarantees that this is the case. For this purpose we again assume that $k_1 = rk_2$, where $r$ is a positive integer, and that $S_{k_1}(\pi_1)$ is the space of polynomials of degree $k_1 - 1$ on $E$. For the remaining spline spaces we divide the interval $E$ into $r$ equally spaced subintervals, that is,
$$\pi_2 = \pi_3 = \left\{\left[\frac{i}{r}, \frac{i+1}{r}\right] : i \in \mathbb{Z}_r\right\}.$$
Hence in this case $\lambda = k_1$ and $\gamma = rk_3$. We choose the points $t_j$, $j \in \mathbb{Z}_{rk_3}$, in the following way: on each interval $[\frac{i}{r}, \frac{i+1}{r}]$, $i \in \mathbb{Z}_r$, we choose the zeros of the Legendre polynomial of degree $k_3$ ($\ge k_1/2$), shifted from $[-1,1]$ to the interval $[\frac{i}{r}, \frac{i+1}{r}]$, and we number these points increasingly as $t_i$, $i \in \mathbb{Z}_{rk_3}$. With this choice we have $w_i > 0$, $i \in \mathbb{Z}_{rk_3}$. We call this a type II construction.
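A sketch of the type II node choice: shifted Gauss–Legendre zeros on each of the $r$ equal subintervals. For Gauss nodes the interpolatory weights coincide with the shifted Gauss weights, hence are automatically positive (the particular $r$ and $k_3$ below are illustrative):

```python
import numpy as np

def type_II_points(r, k3):
    """On each of the r equal subintervals [i/r, (i+1)/r] place the zeros
    of the degree-k3 Legendre polynomial shifted from [-1, 1]; the
    interpolatory weights are then the shifted Gauss-Legendre weights."""
    x, w = np.polynomial.legendre.leggauss(k3)   # rule on [-1, 1]
    pts, wts = [], []
    for i in range(r):
        a, b = i / r, (i + 1) / r
        pts.append(0.5 * (b - a) * (x + 1.0) + a)
        wts.append(0.5 * (b - a) * w)
    return np.concatenate(pts), np.concatenate(wts)

t, w = type_II_points(r=3, k3=4)   # gamma = r * k3 = 12 points in (0, 1)
```

Positivity of the Gauss weights gives $w_\ell > 0$, which is precisely the hypothesis needed for $\|\cdot\|_n$ to be a semi-norm and for Proposition 6.8.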

Proposition 6.8 For type II spaces $S_{k_i}(\pi_i)$, $i = 1,2,3$, $\det(\mathbf{M}) \ne 0$ holds.

Proof By construction, we conclude by an argument similar to that used in the proof of Proposition 6.7 that the determinant of the matrix $\mathbf{M}$ is positive. $\square$

We say that the triple of spaces $S_{k_i}(\pi_i)$, $i = 1,2,3$, forms an acceptable triple provided that the matrix $\mathbf{M}$ is nonsingular.

According to Lemma 6.6, the discrete approximate operators $\mathcal{Q}_n$ and $\mathcal{Q}_n'$ are bounded uniformly in $n \in \mathbb{N}$ on $L^\infty(E)$. For the analysis of the DMPG method we need similar properties on $L^2(E)$. Our first observation is of a general nature. To this end, let $\mathbb{P} = S_k(\pi_0, \pi)$, where $\pi$ is a fixed partition of $E$ chosen independently of $n$. For any linear operator $\mathcal{A}$ such that $\mathcal{A}: L^p(E) \to L^\infty(E)$, $1 \le p \le \infty$, we set
$$\|\mathcal{A}\|_{p,\mathbb{P}} = \sup\{\|\mathcal{A}x\|_p : x \in \mathbb{P},\ \|x\|_p = 1\}.$$


The next lemma gives a condition on an operator $\mathcal{A}$ under which $\|\mathcal{A}\|_{p,\mathbb{P}}$ can be bounded by a constant, independent of $n$, times $\|\mathcal{A}\|_{\infty,\mathbb{P}}$.

Lemma 6.9 Let $\mathcal{A}: L^\infty(E) \to L^\infty(E)$ be a linear operator such that, for $x \in \mathbb{P}$, there holds
$$(\mathcal{A}x)\chi_{E_i} = \mathcal{A}(x\chi_{E_i}), \quad i \in \mathbb{Z}_N, \qquad (6.16)$$
where $\chi_{E_i}$ is the characteristic function of the set $E_i$. Then for any $1 \le p \le \infty$ there exists a positive constant $\alpha$, depending only on $k$ and $\pi$, such that
$$\|\mathcal{A}\|_{p,\mathbb{P}} \le \alpha\|\mathcal{A}\|_{\infty,\mathbb{P}}.$$

Proof For $v \in L^p(E_i)$ we have that
$$\|v\|_{L^p(E_i)} = |E_i|^{1/p}\|v \circ F_i\|_{L^p(E)}. \qquad (6.17)$$
We conclude from (6.17) that, for $x \in \mathbb{P}$,
$$\|\mathcal{A}x\|_{L^p(E_i)} = |E_i|^{1/p}\|(\mathcal{A}x)\circ F_i\|_{L^p(E)} \le |E_i|^{1/p}\|(\mathcal{A}x)\circ F_i\|_{L^\infty(E)}.$$
Moreover, using assumption (6.16), we obtain that
$$\|(\mathcal{A}x)\circ F_i\|_{L^\infty(E)} = \|(\mathcal{A}x)\chi_{E_i}\|_{L^\infty(E)} = \|\mathcal{A}(x\chi_{E_i})\|_{L^\infty(E)}.$$
Combining this fact with (6.17) gives us the inequality
$$\|\mathcal{A}x\|_{L^p(E_i)} \le \|\mathcal{A}\|_{\infty,\mathbb{P}}\,|E_i|^{1/p}\|x\circ F_i\|_{L^\infty(E)}.$$
For any $i \in \mathbb{Z}_N$, $\{x\circ F_i : x \in \mathbb{P}\} = S_k(\pi)$, and so there is a positive constant $\alpha$, depending only on $k$ and $\pi$, such that
$$\|x\circ F_i\|_{L^\infty(E)} \le \alpha\|x\circ F_i\|_{L^p(E)}.$$
Therefore we have confirmed, for $x \in \mathbb{P}$ and $i \in \mathbb{Z}_N$, that
$$\|\mathcal{A}x\|_{L^p(E_i)} \le \alpha\|\mathcal{A}\|_{\infty,\mathbb{P}}\|x\|_{L^p(E_i)}.$$
Summing both sides of the inequality over all $i \in \mathbb{Z}_N$ completes the proof. $\square$
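The proof rests on the inverse inequality $\|y\|_{L^\infty} \le \alpha\|y\|_{L^p}$ on the fixed finite-dimensional space $S_k(\pi)$. For $p = 2$ and polynomials of degree $\le 3$ on $[0,1]$ one may take $\alpha = (\sum_{i<4}(2i+1))^{1/2} = 4$, which the following sketch checks numerically (the random trial polynomial and the sampling grid are illustrative assumptions):

```python
import numpy as np
from numpy.polynomial.legendre import legval

def eval_orthonormal(c, t):
    """p(t) = sum_i c_i sqrt(2i+1) P_i(2t-1): the c_i are coefficients in
    an orthonormal basis of polynomials of degree <= len(c)-1 on [0, 1],
    so ||p||_{L2[0,1]} = ||c||_2 exactly."""
    scaled = np.asarray(c) * np.sqrt(2 * np.arange(len(c)) + 1)
    return legval(2 * np.asarray(t) - 1, scaled)

rng = np.random.default_rng(0)
c = rng.standard_normal(4)          # a random polynomial of degree <= 3
grid = np.linspace(0.0, 1.0, 2001)
sup = np.max(np.abs(eval_orthonormal(c, grid)))   # approximates ||p||_inf
bound = 4.0 * np.linalg.norm(c)     # alpha = sqrt(1+3+5+7) = 4 times ||p||_2
```

The constant depends only on the degree $k$ and the reference partition, never on $n$: this scale-invariance is exactly what makes Lemma 6.9 uniform in $n$.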

Lemma 6.10 For any $1 \le p \le \infty$ there exists a constant $c > 0$ such that, for $n \in \mathbb{N}$,
$$\|\mathcal{Q}_n\|_{p,\mathbb{X}_n} + \|\mathcal{Q}_n\|_{p,\mathbb{Y}_n} \le c, \quad \|\mathcal{Q}_n'\|_{p,\mathbb{X}_n} + \|\mathcal{Q}_n'\|_{p,\mathbb{Y}_n} \le c.$$

Proof We employ Lemma 6.9 to prove this result. In view of Lemma 6.6, the operators $\mathcal{Q}_n$ are uniformly bounded in $L^\infty$. Therefore it suffices to verify that the following conditions are satisfied:
$$(\mathcal{Q}_n x)\chi_{E_i} = \mathcal{Q}_n(x\chi_{E_i}), \quad x \in \mathbb{X}_n,$$


and
$$(\mathcal{Q}_n y)\chi_{E_i} = \mathcal{Q}_n(y\chi_{E_i}), \quad y \in \mathbb{Y}_n.$$
To prove this fact it is useful to introduce another discrete inner product by the formula
$$[x,y] = \sum_{\ell\in\mathbb{Z}_\gamma} w_\ell\,x(t_\ell)y(t_\ell)$$
and, corresponding to this inner product, the DGBA $\mathcal{Q}x$ to $x \in L^\infty(E)$ from $S_{k_1}(\pi_1)$ with respect to $S_{k_2}(\pi_2)$. By definition we then conclude, for any $i \in \mathbb{Z}_N$ and $x \in L^\infty(E)$, that
$$(\mathcal{Q}_n x)\circ F_i = \mathcal{Q}(x\circ F_i).$$
This proves the first estimate; the second can be obtained similarly. $\square$

Let $\mathcal{X}_n: L^2(E) \to \mathbb{X}_n$ and $\mathcal{Y}_n: L^2(E) \to \mathbb{Y}_n$ be the orthogonal projections from $L^2(E)$ onto $\mathbb{X}_n$ and $\mathbb{Y}_n$, respectively.

Lemma 6.11 Let $K$ be a quasi-weakly singular kernel in the factored form $K = K_1K_2$, where $K_2$ is quasi-weakly singular,
$$\sup_{t\in E}\|K_2(\cdot,t)\|_1 < \infty,$$
and $K_1$ is continuous on $E\times E$. Then the set of operators $\{\mathcal{K}_n\mathcal{X}_n : n \in \mathbb{N}\}$ is uniformly bounded in the space $L^2(E)$.

Proof To prove the uniform boundedness of the sequence of operators $\mathcal{K}_n\mathcal{X}_n$, for every $f \in L^2(E)$ we introduce the function
$$e(t) = \sum_{j\in\mathbb{Z}_{\gamma N}} |(\mathcal{X}_n f)(t_j)||\zeta_j(t)|, \quad t \in E,$$
which may be rewritten as
$$e(t) = \sum_{i\in\mathbb{Z}_N}\sum_{\ell\in\mathbb{Z}_\gamma} |(\mathcal{X}_n f)(F_i(t_\ell))|\,|(\zeta_\ell\circ F_i^{-1})\chi_{E_i}(t)|.$$
We observe, for any $s \in E$, that
$$|(\mathcal{K}_n\mathcal{X}_n f)(s)| \le \|K_1\|_\infty|(\mathcal{G}e)(s)|, \qquad (6.18)$$
where $\|K_1\|_\infty$ is the $L^\infty$-norm of $K_1$ on $E\times E$ and $\mathcal{G}$ is the integral operator with kernel $G = |K_2|$. By the Cauchy–Schwarz inequality we conclude that
$$\|\mathcal{G}e\|_2 \le \beta\|e\|_2, \qquad (6.19)$$


where
$$\beta = \frac{1}{2}\left[\sup_{s\in E}\|K_2(s,\cdot)\|_1 + \sup_{t\in E}\|K_2(\cdot,t)\|_1\right].$$

It remains to bound $\|e\|_2$. To this end we note that
$$(\zeta_\ell\circ F_i^{-1})\chi_{E_i}\,(\zeta_{\ell'}\circ F_{i'}^{-1})\chi_{E_{i'}} = 0 \quad\text{if } i \ne i'.$$
It follows that
$$(e(t))^2 = \sum_{i\in\mathbb{Z}_N}\left(\sum_{\ell\in\mathbb{Z}_\gamma}\big|(\mathcal{X}_nf)(F_i(t_\ell))\big|\,\big|(\zeta_\ell\circ F_i^{-1})\chi_{E_i}(t)\big|\right)^2,$$
and thus by the Cauchy–Schwarz inequality we have that
$$\|e\|_2^2 \le \sum_{i\in\mathbb{Z}_N}\sum_{\ell\in\mathbb{Z}_\gamma}\big|(\mathcal{X}_nf)(F_i(t_\ell))\big|^2\,\sum_{\ell\in\mathbb{Z}_\gamma}\int_E\big|(\zeta_\ell\circ F_i^{-1})\chi_{E_i}(t)\big|^2\,dt.$$

Note that, for each $i \in \mathbb{Z}_N$ and $f \in L^2(E)$,
$$(\mathcal{X}_n f)\circ F_i = \mathcal{X}(f\circ F_i),$$
where $\mathcal{X}$ is the orthogonal projection of $L^2(E)$ onto $S_{k_1}(\pi_1)$. We conclude that
$$\|e\|_2^2 \le \mu\sum_{i\in\mathbb{Z}_N}\sum_{j\in\mathbb{Z}_\gamma}|\mathcal{X}(f\circ F_i)(t_j)|^2|E_i|,$$
where
$$\mu = \sum_{j\in\mathbb{Z}_\gamma}\int_E|\zeta_j(t)|^2\,dt.$$
We let $\|\mathcal{X}\|_{2,\infty}$ denote the norm of $\mathcal{X}$ as an operator from $L^2(E)$ into $L^\infty(E)$. Therefore
$$\|e\|_2^2 \le \gamma\mu\|\mathcal{X}\|_{2,\infty}^2\sum_{i\in\mathbb{Z}_N}\|f\circ F_i\|_2^2|E_i| = \gamma\mu\|\mathcal{X}\|_{2,\infty}^2\|f\|_2^2, \qquad (6.20)$$
which establishes the uniform boundedness of the operators $\mathcal{K}_n\mathcal{X}_n$, $n \in \mathbb{N}$. $\square$

We also need to prove that the set of operators $\{\mathcal{K}_n\mathcal{X}_n : n \in \mathbb{N}\}$, under an additional requirement on the kernel $K_2$, is collectively compact on $L^2(E)$. We recall that a set $\tau = \{\mathcal{A}\}$ of linear operators mapping a normed space $\mathbb{X}$ into a normed space $\mathbb{Y}$ is called collectively compact if, for each bounded set $S \subseteq \mathbb{X}$, the image set $\{\mathcal{A}x : x \in S,\ \mathcal{A} \in \tau\}$ is relatively compact (see [6]).


We further require that the kernel $K_2$ have the $\alpha$-property, namely
$$K_2(s,t) = \frac{A(s,t)}{|s-t|^\alpha}, \quad s,t \in E,$$
where $A$ is a continuous function on $E\times E$ and $\alpha$ is a constant satisfying $0 \le \alpha < 1$.

We now prove the collective compactness of the set $\{\mathcal{K}_n\mathcal{X}_n : n \in \mathbb{N}\}$.

Theorem 6.12 Let $K$ be a quasi-weakly singular kernel in the factored form $K = K_1K_2$, where $K_2$ has the $\alpha$-property and $K_1$ is continuous on $E\times E$. Let $\mathcal{A}_n = \mathcal{K}_n\mathcal{X}_n$. Then the set of operators $\{\mathcal{A}_n : n \in \mathbb{N}\}$ is collectively compact on the space $L^2(E)$.

Proof By the definition of collective compactness, we are required to confirm that, for any bounded set $S = \{f \in L^2(E) : \|f\|_2 \le c_0\}$, where $c_0$ is a positive constant, the set $\{\mathcal{A}_n f : f \in S,\ n \in \mathbb{N}\}$ is relatively compact. This will be done using the Kolmogorov theorem (Theorem A.44). The first condition in that theorem, namely the uniform boundedness of the set $\{\mathcal{A}_n : n \in \mathbb{N}\}$, has been verified in Lemma 6.11. It remains to verify conditions (ii) and (iii) of Theorem A.44. We first show that the sequence of operators $\mathcal{A}_n$ has the property that
$$\lim_{h\to 0}\left(\int_0^{1-h}|(\mathcal{A}_nf)(s+h) - (\mathcal{A}_nf)(s)|^2\,ds\right)^{1/2} = 0 \qquad (6.21)$$
uniformly in $n \in \mathbb{N}$ and $f \in S$. To this end we note that
$$\left(\int_0^{1-h}|(\mathcal{A}_nf)(s+h) - (\mathcal{A}_nf)(s)|^2\,ds\right)^{1/2} \le r_1 + r_2,$$

where
$$r_1 = \left(\int_0^{1-h}\left|\int_0^1 \mathcal{Z}_n\big((K_1(s+h,\cdot)-K_1(s,\cdot))(\mathcal{X}_nf)(\cdot)\big)(t)\,K_2(s+h,t)\,dt\right|^2 ds\right)^{1/2}$$
and
$$r_2 = \left(\int_0^{1-h}\left|\int_0^1 \mathcal{Z}_n\big(K_1(s,\cdot)(\mathcal{X}_nf)(\cdot)\big)(t)\,[K_2(s+h,t)-K_2(s,t)]\,dt\right|^2 ds\right)^{1/2}.$$
We first bound the term $r_1$. By the definition of $\mathcal{Z}_n$ we have that
$$r_1 \le \sup_{t\in E}\sup_{s\in E_h}|K_1(s+h,t)-K_1(s,t)|\,\|\mathcal{G}e\|_2,$$


where $E_h = [0, 1-h]$. Using (6.19) and (6.20) we conclude that
$$r_1 \le \beta\gamma^{1/2}\mu^{1/2}\|\mathcal{X}\|_{2,\infty}\|f\|_2\,\sup_{t\in E}\sup_{s\in E_h}|K_1(s+h,t)-K_1(s,t)|.$$

To estimate the term $r_2$ we make use of the fact that the kernel $K_2$ is continuous off the diagonal. Specifically, for any $s \in (0,1)$ and $\delta > 0$ we let $U(s,\delta) = (s-\delta, s+\delta)$ and $U'(s,\delta) = E \setminus U(s,\delta)$, and write
$$r_2 \le r' + r'' + r''',$$

where
$$r' = \left[\int_0^{1-h}\left(\int_{U(s,2\delta)}|\mathcal{Z}_n(K_1(s,\cdot)(\mathcal{X}_nf)(\cdot))(t)||K_2(s,t)|\,dt\right)^2 ds\right]^{1/2},$$
$$r'' = \left[\int_0^{1-h}\left(\int_{U(s,2\delta)}|\mathcal{Z}_n(K_1(s,\cdot)(\mathcal{X}_nf)(\cdot))(t)||K_2(s+h,t)|\,dt\right)^2 ds\right]^{1/2}$$
and
$$r''' = \left[\int_0^{1-h}\left|\int_{U'(s,2\delta)}\mathcal{Z}_n(K_1(s,\cdot)(\mathcal{X}_nf)(\cdot))(t)\,[K_2(s+h,t)-K_2(s,t)]\,dt\right|^2 ds\right]^{1/2}.$$

We observe, by using the $\alpha$-property of the kernel $K_2$ and a straightforward computation, that
$$r' \le \|K_1\|_\infty\left[\int_0^{1-h}\left(\int_{U(s,2\delta)}|K_2(s,t)|e(t)\,dt\right)^2 ds\right]^{1/2} \le c\,\delta^{(1-\alpha)/2}\|f\|_2,$$
where $c$ is a positive constant depending only on $\alpha$, $\|K_1\|_\infty$, $\mu$, $\beta$, $\|\mathcal{X}\|_{2,\infty}$ and
$$\|A\|_\infty = \sup_{s,t\in E}|A(s,t)|.$$

To bound $r''$ we note that if $h < \delta$ then $U(s-h, 2\delta) \subset U(s, 3\delta)$, and thus by a change of variables we conclude that
$$r'' \le \left[\int_0^1\left(\int_{U(s,3\delta)}|\mathcal{Z}_n(K_1(s,\cdot)(\mathcal{X}_nf)(\cdot))(t)||K_2(s,t)|\,dt\right)^2 ds\right]^{1/2}.$$


Likewise, we have that
$$r'' \le c\,\delta^{(1-\alpha)/2}\|f\|_2.$$
For any $\varepsilon > 0$ we find a $\delta > 0$ such that $c\delta^{(1-\alpha)/2} < \varepsilon/4$. Hence we observe that
$$r' + r'' < \tfrac{1}{2}\varepsilon\|f\|_2.$$

Finally we deal with the term $r'''$. By the $\alpha$-property of the kernel $K_2$, $K_2$ is continuous off the diagonal, and hence
$$r''' \le \|K_1\|_\infty\|e\|_2\left(\int_0^1\int_{U'(s,2\delta)}|K_2(s+h,t)-K_2(s,t)|^2\,dt\,ds\right)^{1/2} \le c\|f\|_2\left(\int_0^1\int_{U'(s,2\delta)}|K_2(s+h,t)-K_2(s,t)|^2\,dt\,ds\right)^{1/2}.$$

When $s \in (0,1)$, $t \in U'(s,2\delta)$ and $h < \delta$, we have $|s+h-t| \ge \delta$, so that both points $(s,t)$ and $(s+h,t)$ are contained in the set
$$D_\delta = \{(s,t) \in E\times E : |s-t| \ge \delta\}.$$
For a fixed number $\delta > 0$, the function $K_2(s,t)$ is bounded and uniformly continuous, jointly in $s$ and $t$, on $D_\delta$. Therefore there exists a constant $\sigma_1$ with $0 < \sigma_1 \le \delta$ such that, when $h < \sigma_1$ and $(s,t), (s+h,t) \in D_\delta$,
$$|K_2(s,t) - K_2(s+h,t)| < \frac{\varepsilon}{4c}.$$
Thus
$$\left(\int_E\|K_2(s,\cdot)-K_2(s+h,\cdot)\|_{L^2(U'(s,2\delta))}^2\,ds\right)^{1/2} \le \frac{\varepsilon}{4c}.$$

In summary, we have established the estimate
$$\left(\int_0^{1-h}|(\mathcal{A}_nf)(s+h)-(\mathcal{A}_nf)(s)|^2\,ds\right)^{1/2} < \varepsilon\|f\|_2,$$
and thus proved equation (6.21).

Finally, we verify that
$$\lim_{h\to 0}\left(\int_{1-h}^1|(\mathcal{A}_nf)(s)|^2\,ds\right)^{1/2} = 0 \qquad (6.22)$$


uniformly in $n \in \mathbb{N}$ and $f \in S$. To do this, note that (6.18) and (6.19) give
$$\left(\int_{1-h}^1|(\mathcal{A}_nf)(s)|^2\,ds\right)^{1/2} \le \|K_1\|_\infty\|\mathcal{G}e\|_{L^2[1-h,1]} \le \beta\|K_1\|_\infty\|e\|_{L^2[1-h,1]},$$
where $\mathcal{G}$, $e$ and $\beta$ are all defined in the proof of Lemma 6.11. Arguments similar to those used to prove estimate (6.20) lead to
$$\|e\|_{L^2[1-h,1]}^2 \le \gamma\|\mathcal{X}\|_{2,\infty}^2\|f\|_2^2\int_{1-h}^1\sum_{j\in\mathbb{Z}_\gamma}|\eta_j(t)|^2\,dt.$$
Note that
$$\lim_{h\to 0}\int_{1-h}^1\sum_{j\in\mathbb{Z}_\gamma}|\eta_j(t)|^2\,dt = 0$$
uniformly in $n \in \mathbb{N}$ and $f \in S$. We then conclude from the estimate
$$\left(\int_{1-h}^1|(\mathcal{A}_nf)(s)|^2\,ds\right)^{1/2} \le \gamma\beta\|K_1\|_\infty\|\mathcal{X}\|_{2,\infty}\|f\|_2\left(\int_{1-h}^1\sum_{j\in\mathbb{Z}_\gamma}|\eta_j(t)|^2\,dt\right)^{1/2}$$
that (6.22) holds uniformly in $n \in \mathbb{N}$ and $f \in S$. $\square$

The following is a result from [80], which generalizes Proposition 1.7 of [6].

Lemma 6.13 Let $\mathbb{X}$ be a Banach space and $S \subset \mathbb{X}$ a relatively compact set. Assume that $\mathcal{T}$ and $\mathcal{T}_n$ are bounded linear operators from $\mathbb{X}$ to $\mathbb{X}$ satisfying
$$\|\mathcal{T}_n\| \le C \quad\text{for all } n$$
and, for each $x \in S$,
$$\|\mathcal{T}_nx - \mathcal{T}x\| \to 0 \quad\text{as } n \to \infty,$$
where $C$ is a constant independent of $n$. Then $\|\mathcal{T}_nx - \mathcal{T}x\| \to 0$ uniformly for all $x \in S$.

Lemma 6.14 Let $K$ be a quasi-weakly singular kernel in the factored form $K = K_1K_2$, where $K_2$ has the $\alpha$-property and $K_1$ is continuous on $E\times E$. Then

(i) $\|(\mathcal{Q}_n - \mathcal{I})\mathcal{K}_n\mathcal{X}_n\|_2 \to 0$ as $n \to \infty$;
(ii) $\|(\mathcal{K}_n - \mathcal{K})\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n\|_2 \to 0$ as $n \to \infty$.

Proof (i) Let $B$ denote the unit ball in $L^2(E)$, that is,
$$B = \{x \in L^2(E) : \|x\|_2 \le 1\},$$


and let
$$A = \{\mathcal{K}_n\mathcal{X}_nx : x \in B,\ n \in \mathbb{N}\}.$$
Since $\{\mathcal{K}_n\mathcal{X}_n\}$ is collectively compact on $L^2(E)$, we conclude that $A$ is a relatively compact set in $L^2(E)$. Note that $A \subset C(E)$ and, for each $x \in C(E)$,
$$\|\mathcal{Q}_nx - x\|_2 \le c\|\mathcal{Q}_nx - x\|_\infty \to 0 \quad\text{as } n \to \infty.$$
Using Lemma 6.13 with $\mathcal{T}_n = \mathcal{Q}_n$ and $\mathcal{T} = \mathcal{I}$, we have that
$$\|(\mathcal{Q}_n - \mathcal{I})\mathcal{K}_n\mathcal{X}_n\|_2 = \sup_{x\in B}\|(\mathcal{Q}_n - \mathcal{I})\mathcal{K}_n\mathcal{X}_nx\|_2 = \sup_{x\in A}\|(\mathcal{Q}_n - \mathcal{I})x\|_2 \to 0.$$
(ii) For a fixed $x \in C(E)$, $\{\mathcal{Q}_nx : n \in \mathbb{N}\}$ is a relatively compact set in $L^\infty(E)$. Since $\{\mathcal{K}_n : n \in \mathbb{N}\}$ is collectively compact on $L^\infty(E)$ and $\mathcal{K}_n$ converges pointwise to $\mathcal{K}$ on the set $\mathbb{X}$, by Lemma 6.13 we conclude that
$$\|(\mathcal{K}_n - \mathcal{K})\mathcal{Q}_nx\|_2 \le C\|(\mathcal{K}_n - \mathcal{K})\mathcal{Q}_nx\|_\infty \to 0 \quad\text{as } n \to \infty.$$
Using Lemma 6.13 with the bounded linear operators $\mathcal{T}_n = (\mathcal{K}_n - \mathcal{K})\mathcal{Q}_n$ and $\mathcal{T} = 0$, we have that
$$\|(\mathcal{K}_n - \mathcal{K})\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n\|_2 = \sup_{x\in B}\|(\mathcal{K}_n - \mathcal{K})\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_nx\|_2 \le \sup_{x\in A}\|(\mathcal{K}_n - \mathcal{K})\mathcal{Q}_nx\|_2 \to 0 \quad\text{as } n \to \infty.$$
This completes the proof. $\square$

Lemma 6.15 Let $K$ be a quasi-weakly singular kernel in the factored form $K = K_1K_2$, where $K_2$ has the $\alpha$-property and $K_1$ is continuous on $E\times E$. Then there exist positive constants $c$ and $N$ such that for all $n > N$ the inverses $(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)^{-1}$ exist as linear operators defined on $L^2(E)$, and
$$\|(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)^{-1}\|_2 \le c.$$

Proof A straightforward computation yields
$$[\mathcal{I} + (\mathcal{I} - \mathcal{K})^{-1}\mathcal{K}_n\mathcal{X}_n](\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n) = \mathcal{I} - (\mathcal{I} - \mathcal{K})^{-1}\big[(\mathcal{Q}_n - \mathcal{I})\mathcal{K}_n\mathcal{X}_n + (\mathcal{K}_n - \mathcal{K})\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n\big].$$
By Lemma 6.14, there exists $N > 0$ such that, for all $n > N$,
$$\big\|(\mathcal{I} - \mathcal{K})^{-1}\big[(\mathcal{Q}_n - \mathcal{I})\mathcal{K}_n\mathcal{X}_n + (\mathcal{K}_n - \mathcal{K})\mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n\big]\big\|_2 \le \frac{1}{2}.$$


Therefore the inverse operators $(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)^{-1}$ exist, and it holds that
$$\|(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)^{-1}\|_2 \le 2\big(1 + \|(\mathcal{I} - \mathcal{K})^{-1}\|_2\|\mathcal{K}_n\mathcal{X}_n\|_2\big).$$
Thus the results of this lemma follow from Lemma 6.11. Moreover, the Fredholm theory allows us to conclude that the inverse operators are defined on all of $L^2(E)$. $\square$

Proposition 6.16 Let $K$ be a quasi-weakly singular kernel in the factored form $K = K_1K_2$, where $K_2$ has the $\alpha$-property and $K_1$ is continuous on $E\times E$. Then there exist positive constants $c_0$ and $N$ such that, for $n > N$ and $x \in \mathbb{X}_n$,
$$\|(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n)x\|_2 \ge c_0\|x\|_2.$$

Proof For all $x \in \mathbb{X}_n$ it follows from Lemma 6.15 that
$$\|x\|_2 = \|(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)^{-1}(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)x\|_2 \le C\|(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n\mathcal{X}_n)x\|_2 = C\|(\mathcal{I} - \mathcal{Q}_n\mathcal{K}_n)x\|_2,$$
and thus the proposition follows. $\square$

6.2.2 A truncated DMPG scheme

In this subsection we describe the DMPG method and a truncated scheme for equation (6.7).

We first use type II spaces $S_{k_i}(\pi_i)$, $i = 1,2,3$, to generate $\mathbb{X}_n$, $\mathbb{Y}_n$ and $\mathbb{Q}_n$. Specifically, we choose an integer $r > 1$, define the partition $\pi_0$ of $E$ by
$$\pi_0 = \left\{\left[\frac{j}{r^n}, \frac{j+1}{r^n}\right] : j \in \mathbb{Z}_{r^n}\right\}$$
and define $\mathbb{X}_n = S_{k_1}(\pi_0,\pi_1)$, $\mathbb{Y}_n = S_{k_2}(\pi_0,\pi_2)$ and $\mathbb{Q}_n = S_{k_3}(\pi_0,\pi_3)$, where $k_1 = rk_2$ and $k_2 \le k_1 \le k_3$. We next describe our multiscale bases. For this purpose we denote by $S^k_m$ the space of piecewise polynomials of degree $\le k-1$ defined on $E$ corresponding to the equally spaced partition $\{[j/m, (j+1)/m] : j \in \mathbb{Z}_m\}$. We assume that $k, k', \nu, \mu$ are positive integers with $\mu > 1$ such that $\lambda := k\nu = k'\mu$, $k' \le k$ and $\mu/\nu$ is an integer. We define the trial space $\mathbb{X}_n$ as $S^k_{\nu\mu^n}$ and the test space $\mathbb{Y}_n$ as $S^{k'}_{\mu^{n+1}}$. For $i \in \mathbb{Z}_{n+1}$ we set
$$w(i) = \begin{cases}\lambda, & i = 0,\\ \lambda(\mu-1)\mu^{i-1}, & i \ge 1.\end{cases}$$

The orthogonal multiscale bases $\{\phi_{ij} : (i,j) \in U_n\}$ for $\mathbb{X}_n$ and $\{\psi_{ij} : (i,j) \in U_n\}$ for $\mathbb{Y}_n$ can be constructed using the general construction described in Section 4.3 (see also [200, 201] and [64]). The orthonormal multiscale basis $\{\phi_{ij}\}$ enjoys the following properties:
$$\mathbb{X}_n = \mathrm{span}\{\phi_{ij} : (i,j) \in U_n\}, \quad \int_E t^\ell\phi_{ij}(t)\,dt = 0,\ \ell \in \mathbb{Z}_k,\ j \in \mathbb{Z}_{w(i)},\ i - 1 \in \mathbb{Z}_n,$$
and the length of the support $\mathrm{supp}(\phi_{ij})$ satisfies
$$\mathrm{meas}(\mathrm{supp}\,\phi_{ij}) \le \frac{1}{\mu^{i-1}}, \quad i \ge 1.$$
Similarly, the multiscale basis $\{\psi_{ij}\}$ has the properties
$$\mathbb{Y}_n = \mathrm{span}\{\psi_{ij} : (i,j) \in U_n\}, \quad \int_E t^\ell\psi_{ij}(t)\,dt = 0,\ \ell \in \mathbb{Z}_{\tilde{k}},\ j \in \mathbb{Z}_{w(i)},$$
where $\tilde{k}$ is an integer between $k'$ and $k$, and there is a constant $c'$ depending only on $k$ for which
$$\mathrm{meas}(\mathrm{supp}\,\psi_{ij}) \le \frac{c'}{\mu^{i-1}}, \quad i \ge 1.$$

Consequently, any function $x_n \in \mathbb{X}_n$ has the representation
$$x_n = \sum_{(i,j)\in U_n} x_{ij}\phi_{ij},$$
where
$$x_{ij} = (x_n, \phi_{ij}), \quad (i,j) \in U_n.$$

We have now described multiscale bases for the trial and test spaces $\mathbb{X}_n$ and $\mathbb{Y}_n$. It remains to describe the third space $\mathbb{Q}_n$, which is used for integration. We do this next by choosing $k'' \ge k$ and letting the space $\mathbb{Q}_n$ be $S^{k''}_{\mu^{n+1}}$. Specifically, we choose $k''$ points in $E$, $0 < \tau_0 < \tau_1 < \cdots < \tau_{k''-1} < 1$, and let $q_0, q_1, \ldots, q_{k''-1}$ be the Lagrange interpolating polynomials satisfying $\deg q_j \le k'' - 1$ and $q_i(\tau_j) = \delta_{ij}$, $i,j \in \mathbb{Z}_{k''}$. We let $\varphi_{\varepsilon\mu}$ be the linear function mapping $E$ bijectively onto $I_{\varepsilon\mu} := [\frac{\varepsilon}{\mu}, \frac{\varepsilon+1}{\mu}]$ for $\varepsilon \in \mathbb{Z}_\mu$, and set
$$t_j = \varphi_{\varepsilon\mu}(\tau_\ell), \quad \zeta_j = \chi_{I_{\varepsilon\mu}}\,(q_\ell\circ\varphi_{\varepsilon\mu}^{-1}), \quad j = \varepsilon k'' + \ell,\ \varepsilon \in \mathbb{Z}_\mu,\ \ell \in \mathbb{Z}_{k''}.$$
It can easily be seen that $\zeta_i(t_j) = \delta_{ij}$, $i,j \in \mathbb{Z}_\gamma$, where $\gamma = k''\mu \ge \lambda$. Following the last section, we use the affine mappings $F_i: [\frac{i}{\mu^n}, \frac{i+1}{\mu^n}] \to E$ defined by


$F_i(t) = \mu^n t - i$, $i \in \mathbb{Z}_N$, where $N = \mu^n$, to define basis functions $\zeta_{ij}$ for the space $\mathbb{Q}_n$. The three spaces $\mathbb{X}_n$, $\mathbb{Y}_n$ and $\mathbb{Q}_n$ were chosen so that
$$\mathbb{X}_n \subseteq \mathbb{Q}_n \quad\text{and}\quad \mathbb{Y}_n \subseteq \mathbb{Q}_n.$$
These inclusions are crucial for developing a compressed scheme, which will be discussed in the next section. We also define the discrete inner product $(\cdot,\cdot)_n$ and the operators $\mathcal{Q}_n$ and $\mathcal{K}_n$ according to (6.13) and (6.15).

With the multiscale bases for the spaces $\mathbb{X}_n$ and $\mathbb{Y}_n$, the DMPG scheme for equation (6.7) seeks
$$u_n = \sum_{(i,j)\in U_n} u_{ij}\phi_{ij} \in \mathbb{X}_n,$$
where the coefficients of the function $u_n$ satisfy
$$\sum_{(i,j)\in U_n} u_{ij}(\psi_{i'j'},\ \phi_{ij} - \mathcal{K}_n\phi_{ij})_n = (\psi_{i'j'}, f)_n, \quad (i',j') \in U_n. \qquad (6.23)$$
To write (6.23) in matrix form we use the lexicographic ordering on $\mathbb{Z}_{n+1}\times\mathbb{Z}_{n+1}$ and define the matrices
$$\mathbf{E}_n = [(\psi_{ij}, \phi_{i'j'})_n : (i',j'),(i,j) \in U_n], \quad \mathbf{K}_n = [(\psi_{ij}, \mathcal{K}_n\phi_{i'j'})_n : (i',j'),(i,j) \in U_n]$$
and the vectors
$$\mathbf{f}_n = [(\psi_{ij}, f)_n : (i,j) \in U_n], \quad \mathbf{u}_n = [u_{ij} : (i,j) \in U_n].$$
Note that the vectors have length $s(n) = \lambda\mu^n$. With this notation, equation (6.23) takes the equivalent form
$$(\mathbf{E}_n - \mathbf{K}_n)\mathbf{u}_n = \mathbf{f}_n. \qquad (6.24)$$

To present a convergence result for the DMPG scheme we let
$$\Phi=[\varphi_{0i}(t_j):i\in\mathbb Z_\lambda,\ j\in\mathbb Z_\gamma],\qquad \Psi=[\psi_{0i}(t_j):i\in\mathbb Z_\lambda,\ j\in\mathbb Z_\gamma] \quad\text{and}\quad \mathbf M=\Psi\mathbf W\Phi^{\mathsf T},$$
where $\mathbf W=\mathrm{diag}(w_j:j\in\mathbb Z_\gamma)$ and $w_i=\int_I\zeta_i(t)\,dt$, $i\in\mathbb Z_\gamma$.

Theorem 6.17 Let $K$ be a quasi-weakly singular kernel in the factored form $K=K_1K_2$, where $K_2$ has the $\alpha$-property and $K_1$ is continuous on $I\times I$. Then there exists $N>0$ such that for all $n\ge N$ the DMPG scheme (6.23) has a unique solution $u_n\in X_n$. Moreover, if the solution $u$ of equation (6.7) satisfies $u\in W^{k,\infty}(I)$, then there exists a positive constant $c$ such that for all $n\in\mathbb N$,
$$\|u-u_n\|_2\le c\,\mu^{-kn}.$$


6.2 Discrete multiscale Petrov–Galerkin methods

Proof By the construction of the spaces $X_n$ and $Y_n$ and the choice of the quadrature nodes $t_j$, we conclude from Proposition 6.8 that $\det(\mathbf M)\ne 0$. Therefore the conclusion of this theorem follows directly from Theorem 3.38.

We now develop a matrix compression strategy for the DMPG method. This compression strategy will lead us to a fast algorithm for the approximate solution of equation (6.7).

Throughout this section we suppose that the following additional conditions on the kernel $K$ hold: $K=K_1K_2$, and $K_1$ and $K_2$ have derivatives
$$K_r^{(l,m)}(s,t)=\frac{\partial^{l+m}K_r(s,t)}{\partial s^l\,\partial t^m}$$
for $l\in\mathbb Z_{\bar k+1}$, $m\in\mathbb Z_{\bar k'+1}$, when $r=1$, $s,t\in I$, and when $r=2$, $s,t\in I$ with $s\ne t$, where $\bar k=\min\{k''-k+1,\,k\}$ and $\bar k'=\min\{k''-k'+1,\,k'\}$; and there exists a positive constant $c_0$ such that for $s,t\in I$, $s\ne t$,
$$|K_2^{(l,m)}(s,t)|\le\frac{c_0}{|s-t|^{\alpha+l+m}}. \tag{6.25}$$

We denote the entries of the matrix $\mathbf K_n$ by $K_{i'j',ij}=(\psi_{i'j'},\mathcal K_n\varphi_{ij})_n$, $(i,j),(i',j')\in U_n$. The entries of $\mathbf K_n$ are discrete inner products which, as we show in the next lemma, satisfy an estimate similar to that for the continuous inner products presented in Lemma 7.1 of [64], with $k$, $k'$ replaced by $\bar k$, $\bar k'$, respectively; this shows the influence of the full discretization. To this end we let $S_{(i,j)}$ and $S'_{(i',j')}$ denote the supports of $\varphi_{ij}$ and $\psi_{i'j'}$, respectively. Then
$$|S_{(i,j)}|=\mathrm{meas}(S_{(i,j)})\le\frac{1}{\mu^{i-1}}\quad\text{for } i>1$$
and
$$|S'_{(i',j')}|=\mathrm{meas}(S'_{(i',j')})\le\frac{c'}{\mu^{i'-1}}\quad\text{for } i'>1.$$

We first estimate the entries of the matrix $\mathbf K_n$ in the following lemma.

Lemma 6.18 The estimate
$$|K_{i'j',ij}|\le c\,\mu^{-(\bar k+\frac12)i-(\bar k'+\frac12)i'}\,\mathrm{dist}(S_{(i,j)},S'_{(i',j')})^{-\alpha-\bar k-\bar k'}$$
holds, where $c$ is a positive constant independent of $n$, $i$, $i'$, $j$ and $j'$.

Proof The entries of the matrix $\mathbf K_n$ can be written in the following way:
$$K_{i'j',ij}=\int_I(\mathcal Z_n g)(s)\,ds,$$
where the function $g$ is defined by
$$g(s)=\psi_{i'j'}(s)\int_I K_2(s,t)\,\big(\mathcal Z_n(K_1(s,\cdot)\varphi_{ij}(\cdot))\big)(t)\,dt.$$

Let $t_0$ and $s_0$ be the midpoints of the intervals $S_{(i,j)}$ and $S'_{(i',j')}$, respectively. By Taylor's theorem, for $(s,t)\in S'_{(i',j')}\times S_{(i,j)}$ and $r=1,2$, we have that
$$K_r(s,t)=\sum_{l\in\mathbb Z_{\bar k}}\frac{1}{l!}K_r^{(0,l)}(s,t_0)(t-t_0)^l+\frac{1}{(\bar k-1)!}\int_0^1 K_r^{(0,\bar k)}\big(s,t_0+\theta_r(t-t_0)\big)(t-t_0)^{\bar k}(1-\theta_r)^{\bar k-1}\,d\theta_r.$$

Likewise, we have that
$$K_r^{(0,\bar k)}(s,t')=\sum_{l'\in\mathbb Z_{\bar k'}}\frac{1}{l'!}K_r^{(l',\bar k)}(s_0,t')(s-s_0)^{l'}+\frac{1}{(\bar k'-1)!}\int_0^1 K_r^{(\bar k',\bar k)}(s',t')(s-s_0)^{\bar k'}(1-\theta_{kr})^{\bar k'-1}\,d\theta_{kr},$$
where $s'=s_0+\theta_{kr}(s-s_0)$ and $t'=t_0+\theta_r(t-t_0)$. Thus we obtain that
$$K_r(s,t)=\sum_{m\in\mathbb Z_4}T_{rm},$$

where
$$T_{r0}=\sum_{l\in\mathbb Z_{\bar k}}\sum_{l'\in\mathbb Z_{\bar k'}}\frac{1}{l!\,l'!}K_r^{(l',l)}(s_0,t_0)(s-s_0)^{l'}(t-t_0)^l,$$
$$T_{r1}=\sum_{l\in\mathbb Z_{\bar k}}\frac{1}{l!\,(\bar k'-1)!}(s-s_0)^{\bar k'}(t-t_0)^l\int_0^1 K_r^{(\bar k',l)}\big(s_0+\theta_{lr}(s-s_0),t_0\big)(1-\theta_{lr})^{\bar k'-1}\,d\theta_{lr},$$
$$T_{r2}=\frac{1}{(\bar k-1)!}\sum_{l'\in\mathbb Z_{\bar k'}}\frac{1}{l'!}(s-s_0)^{l'}(t-t_0)^{\bar k}\int_0^1 K_r^{(l',\bar k)}\big(s_0,t_0+\theta_r(t-t_0)\big)(1-\theta_r)^{\bar k-1}\,d\theta_r$$
and
$$T_{r3}=\frac{1}{(\bar k-1)!\,(\bar k'-1)!}(s-s_0)^{\bar k'}(t-t_0)^{\bar k}\int_0^1\!\!\int_0^1 K_r^{(\bar k',\bar k)}(s',t')(1-\theta_{kr})^{\bar k'-1}(1-\theta_r)^{\bar k-1}\,d\theta_{kr}\,d\theta_r,$$



with $(s',t')=\big(s_0+\theta_{kr}(s-s_0),\,t_0+\theta_r(t-t_0)\big)$. Note that polynomials of degree $k''-1$ are invariant under the projections $\mathcal Z_n$, and $\mathcal Z_n:L^\infty(I)\to L^\infty(I)$ is uniformly bounded. Using the vanishing moment conditions of the bases $\varphi_{ij}$ and $\psi_{i'j'}$, that is,
$$\int_{S_{(i,j)}}(t-t_0)^l\varphi_{ij}(t)\,dt=0,\qquad l\in\mathbb Z_k,\ i-1\in\mathbb Z_n,\ j\in\mathbb Z_{w(i)},$$
and
$$\int_{S'_{(i',j')}}(s-s_0)^l\psi_{i'j'}(s)\,ds=0,\qquad l\in\mathbb Z_{k'},\ i'-1\in\mathbb Z_n,\ j'\in\mathbb Z_{w(i')},$$

we conclude that
$$|K_{i'j',ij}|\le C\,\sup\big(|K_1^{(l',l)}(s,t)|\,|K_2^{(l',l)}(s,t)|\big)\int_{S_{(i,j)}}\big|(t-t_0)^{\bar k}\varphi_{ij}(t)\big|\,dt\int_{S'_{(i',j')}}\big|(s-s_0)^{\bar k'}\psi_{i'j'}(s)\big|\,ds,$$
where the supremum is taken over $(s,t)\in S'_{(i',j')}\times S_{(i,j)}$ and $(l',l)\in\mathbb Z_{\bar k'+1}\times\mathbb Z_{\bar k+1}$. In fact, to see the estimate it is sufficient to consider the cases $K_r(s,t)=T_{rm}$, $r=1,2$, $m\in\mathbb Z_4$. We assume without loss of generality that in the sum of $T_{rm}$ there is only one term, which means that $K_r(s,t)$ is a product of an integral and the two factors $(t-t_0)^{l_r}$ and $(s-s_0)^{l'_r}$. Obviously, if $l_1+l_2<\bar k$ (or $l'_1+l'_2<\bar k'$), the operator $\mathcal Z_n$ can be discarded, and by using the vanishing moment conditions of the bases $\varphi_{ij}$ and $\psi_{i'j'}$ we obtain $K_{i'j',ij}=0$. Otherwise $l_1+l_2\ge\bar k$ and $l'_1+l'_2\ge\bar k'$, and then the desired estimate follows from the uniform boundedness of $\mathcal Z_n$. Now, using $\|\varphi_{ij}\|_2=\|\psi_{ij}\|_2=1$, we conclude from the Cauchy–Schwarz inequality that
$$|K_{i'j',ij}|\le C\,\mathrm{dist}(S_{(i,j)},S'_{(i',j')})^{-\alpha-\bar k-\bar k'}\,|S_{(i,j)}|^{\bar k+\frac12}\,|S'_{(i',j')}|^{\bar k'+\frac12}\le C\,\mu^{-(\bar k+\frac12)i-(\bar k'+\frac12)i'}\,\mathrm{dist}(S_{(i,j)},S'_{(i',j')})^{-\alpha-\bar k-\bar k'}.$$

Lemma 6.18 leads us to a truncation strategy, which we describe below. To truncate the matrix $\mathbf K_n$, we first partition it into a block matrix $\mathbf K_n=[\mathbf K_{i'i}:(i',i)\in\mathbb Z_{n+1}\times\mathbb Z_{n+1}]$ according to the decomposition of the spaces $X_n$ and $Y_n$, where
$$\mathbf K_{i'i}=[K_{i'j',ij}:j'\in\mathbb Z_{w(i')},\ j\in\mathbb Z_{w(i)}].$$
For a given positive number $\delta^n_{i'i}$ we truncate the block $\mathbf K_{i'i}$ to obtain
$$\tilde{\mathbf K}_{i'i}=[\tilde K_{i'j',ij}:j'\in\mathbb Z_{w(i')},\ j\in\mathbb Z_{w(i)}],$$



where
$$\tilde K_{i'j',ij}=\begin{cases}K_{i'j',ij}, & \mathrm{dist}(S_{(i,j)},S'_{(i',j')})\le\delta^n_{i'i},\\[2pt] 0, & \text{otherwise},\end{cases}$$
and let $\tilde{\mathbf K}_n=[\tilde{\mathbf K}_{i'i}:(i',i)\in\mathbb Z_{n+1}\times\mathbb Z_{n+1}]$. Using the truncated matrix $\tilde{\mathbf K}_n$ in place of $\mathbf K_n$ in equation (6.24), we obtain a new linear system
$$(\mathbf E_n-\tilde{\mathbf K}_n)\tilde{\mathbf u}_n=\mathbf f_n, \tag{6.26}$$
where $\tilde{\mathbf u}_n=[\tilde u_{ij}]\in\mathbb R^{s(n)}$. Equation (6.26) is a compressed scheme, which provides us with a fast algorithm.
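The support-distance truncation rule is easy to prototype. The sketch below is a model, not the book's implementation: each level $i$ simply carries $2^i$ functions with dyadic supports, and $\delta$ is taken as the single exponential rule $\mu^{-n+b(n-i)+b'(n-i')}$ with sample values $b=1$, $b'=0.8$. It counts how many entries of the block matrix survive truncation.

```python
def supports(level):
    """model supports: level i carries 2^i functions on [j/2^i, (j+1)/2^i]."""
    m = 2 ** level
    return [(j / m, (j + 1) / m) for j in range(m)]

def dist(I1, I2):
    """distance between two closed intervals (0 if they overlap)."""
    return max(0.0, max(I1[0], I2[0]) - min(I1[1], I2[1]))

def kept_fraction(n, b=1.0, bp=0.8, mu=2.0):
    """fraction of entries kept by the rule dist <= mu^(-n + b(n-i) + b'(n-i'))."""
    lv = [(i, S) for i in range(n + 1) for S in supports(i)]
    s = len(lv)
    kept = 0
    for i, S in lv:
        for ip, Sp in lv:
            delta = mu ** (-n + b * (n - i) + bp * (n - ip))
            if dist(S, Sp) <= delta:
                kept += 1
    return kept / s ** 2, kept, s

for n in (3, 4, 5, 6):
    frac, kept, s = kept_fraction(n)
    print(f"n={n}: s(n)={s}, kept {kept} of {s * s} entries ({100 * frac:.1f}%)")
```

The kept fraction shrinks as $n$ grows: coarse-level blocks are kept in full while fine-level blocks are reduced to a band near the diagonal, which is the source of the fast algorithm.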

It is convenient to work with a functional analytic approach in our development. For this purpose we convert the linear system (6.26) to an abstract operator equation. Let $b_{i'j',ij}$ denote the entries of the matrix $\mathbf B_n=\mathbf E_n^{-1}\tilde{\mathbf K}_n\mathbf E_n^{-1}$ and let
$$\tilde K(s,t)=\sum_{(i,j),(i',j')\in U_n}b_{i'j',ij}\,\psi_{ij}(s)\,\varphi_{i'j'}(t).$$
We denote by $\tilde{\mathcal K}_n$ the discrete integral operator defined by the kernel $\tilde K$. Then solving the linear system (6.26) is equivalent to finding
$$\tilde u_n=\sum_{(i,j)\in U_n}\tilde u_{ij}\varphi_{ij}\in X_n$$
such that
$$(\mathcal I-\tilde{\mathcal K}_n)\tilde u_n=\mathcal Q_n f. \tag{6.27}$$
The analysis of this truncated scheme and the choice of the truncation parameters $\delta^n_{i'i}$ will be discussed in the next subsection.

6.2.3 Analysis of convergence and complexity

In this subsection we present the analysis of the order of convergence and the order of computational complexity of the compressed scheme developed in the last subsection.

We first estimate the truncated matrix $\tilde{\mathbf K}_n$. To do this, we set $\eta=\alpha+\bar k+\bar k'-1$.

Lemma 6.19 Let $J$ be any interval contained in $I$. Suppose that $n$ is a positive integer and $s>1$. Then
$$\sum_{j\in J_n}\mathrm{dist}(J,I_{jn})^{-s}\le\frac{4n}{s-1}\,\delta^{-s}\max\Big\{\frac{s-1}{n},\,\delta\Big\},$$
where $I_{jn}=[j/n,(j+1)/n]$, $j\in\mathbb Z_n$, and $J_n=\{j:j\in\mathbb Z_n,\ \mathrm{dist}(J,I_{jn})>\delta\}$.



Proof Suppose that $J=[a,b]$. Choose the greatest integer $p$ with $p+1\in\mathbb Z_n$ such that $p/n<a-\delta\le(p+1)/n$, and the least integer $q$ with $q+1\in\mathbb Z_{n+1}$ such that $q/n\le b+\delta<(q+1)/n$. Therefore
$$\sum_{j\in J_n}\mathrm{dist}(J,I_{jn})^{-s}=\sum_{j=0}^{p-1}\Big(a-\frac{j+1}{n}\Big)^{-s}+\sum_{j=q+2}^{n-1}\Big(\frac{j}{n}-b\Big)^{-s}.$$
When $a-\delta\le 1/n$ the first sum is zero, and likewise when $b+\delta\ge 1-1/n$ the second sum is zero. In any case we have that
$$\sum_{j\in J_n}\mathrm{dist}(J,I_{jn})^{-s}\le 2\delta^{-s}+\sum_{j=0}^{p-2}\Big(a-\frac{j+1}{n}\Big)^{-s}+\sum_{j=q+2}^{n-1}\Big(\frac{j}{n}-b\Big)^{-s}\le 2\delta^{-s}+n\int_{a-p/n}^{\infty}t^{-s}\,dt+n\int_{(q+1)/n-b}^{\infty}t^{-s}\,dt.$$
Evaluating the integrals and using our choice of $p$ and $q$ gives the bound for the right-hand side of the inequality above:
$$2\delta^{-s}+\frac{2n}{s-1}\delta^{-s+1}=\frac{2n\delta^{-s}}{s-1}\Big(\frac{s-1}{n}+\delta\Big)\le\frac{4n\delta^{-s}}{s-1}\max\Big\{\frac{s-1}{n},\,\delta\Big\}.$$

Lemma 6.20 For any $i,i'\in\mathbb Z_{n+1}$ and positive constants $\delta^n_{i'i}$, the estimates
$$\|\mathbf K_{i'i}-\tilde{\mathbf K}_{i'i}\|_\infty\le c\,\mu^{-(\bar k-\frac12)i-(\bar k'+\frac12)i'}(\delta^n_{i'i})^{-\alpha-\bar k-\bar k'}\max\Big\{\frac{\eta}{\mu^{i-1}},\,\delta^n_{i'i}\Big\}$$
and
$$\|\mathbf K_{i'i}-\tilde{\mathbf K}_{i'i}\|_1\le c\,\mu^{-(\bar k+\frac12)i-(\bar k'-\frac12)i'}(\delta^n_{i'i})^{-\alpha-\bar k-\bar k'}\max\Big\{\frac{c'\eta}{\mu^{i'-1}},\,\delta^n_{i'i}\Big\}$$
hold, where $c$ is a positive constant independent of $n$, $i$ and $i'$.

Proof Using the estimate given in Lemma 6.18 on the entries $|K_{i'j',ij}|$ and noting the definition of the truncated matrix $\tilde{\mathbf K}_{i'i}$, we have that
$$\|\mathbf K_{i'i}-\tilde{\mathbf K}_{i'i}\|_\infty\le c\,\mu^{-(\bar k+\frac12)i-(\bar k'+\frac12)i'}\max_{j'\in\mathbb Z_{w(i')}}\sum_{j\in Z_{\delta^n_{i'i}}}\mathrm{dist}(S_{(i,j)},S'_{(i',j')})^{-\alpha-\bar k-\bar k'},$$
where $Z_{\delta^n_{i'i}}=\{j:j\in\mathbb Z_{w(i)},\ \mathrm{dist}(S_{(i,j)},S'_{(i',j')})>\delta^n_{i'i}\}$. Using Lemma 6.19 (with $n$ replaced by $\mu^{i-2}$, $s$ replaced by $\alpha+\bar k+\bar k'$, $I_{jn}$ replaced by $S_{(i,j)}$ and $J$ replaced by $S'_{(i',j')}$), we get the first estimate of this lemma. The second estimate follows likewise from Lemma 6.19.

We recall a useful result of Schur.



Lemma 6.21 (Schur's lemma) Let $\mathbf A=[a_{ij}:i,j\in\mathbb Z_n]$, $n\in\mathbb N$, be a matrix such that there exist positive constants $\gamma_i$, $i\in\mathbb Z_n$, and a positive constant $c$ independent of $n$ satisfying, for all $i,j\in\mathbb Z_n$,
$$\sum_{\ell\in\mathbb Z_n}|a_{j\ell}|\gamma_\ell\le c\gamma_j \quad\text{and}\quad \sum_{\ell\in\mathbb Z_n}|a_{\ell i}|\gamma_\ell\le c\gamma_i.$$
Then $\|\mathbf A\|_2\le c$ for all $n\in\mathbb N$.
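Schur's lemma is also easy to verify experimentally. The sketch below (illustrative: a symmetric matrix with algebraic off-diagonal decay and constant weights $\gamma_i=1$) computes the weighted row and column sums and confirms that their maximum dominates the spectral norm.

```python
import numpy as np

n = 200
idx = np.arange(n)
# model matrix with off-diagonal decay, as produced by vanishing-moment bases
A = 1.0 / (1.0 + np.abs(idx[:, None] - idx[None, :])) ** 2

gamma = np.ones(n)                               # weights gamma_i > 0 (constant here)
c_row = np.max((np.abs(A) @ gamma) / gamma)      # sup_j sum_l |a_{jl}| gamma_l / gamma_j
c_col = np.max((np.abs(A.T) @ gamma) / gamma)    # sup_i sum_l |a_{li}| gamma_l / gamma_i
c = max(c_row, c_col)

spec = np.linalg.norm(A, 2)                      # spectral norm ||A||_2
print(f"Schur bound c = {c:.3f},  ||A||_2 = {spec:.3f}")
```

In applications the freedom to choose non-constant weights $\gamma_i$ (as in the proof of Lemma 6.22 below, with $\gamma_{ij}=\mu^{-i/2}$) is what makes the test sharp for multiscale matrices.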

We are now ready to estimate the truncation operator $\mathcal R_n=\mathcal Q_n\mathcal K_n-\tilde{\mathcal K}_n$ in terms of the function $\mu[\cdot,\cdot,n]$.

Lemma 6.22 Let $b$, $b'$ be real numbers and $m$ a non-negative integer. Choose the truncation parameters $\delta^n_{i'i}$, $i,i'\in\mathbb Z_{n+1}$, such that
$$\delta^n_{i'i}\ge\max\Big\{\frac{c'\eta}{\mu^{i-1}},\,\frac{\eta}{\mu^{i'-1}},\,\mu^{-n+b(n-i)+b'(n-i')}\Big\}. \tag{6.28}$$
Then for any $u\in H^m(I)$ with $0<m\le k$,
$$\|\mathcal R_nX_nu\|_{L^2}\le c\,\mu[\bar k+m-b\eta,\,\bar k'-b'\eta,\,n]\,(n+1)^{1/2}\mu^{-(1-\alpha+m)n}\|u\|_{H^m},$$
and for $u\in L^2[0,1]$,
$$\|\mathcal R_nX_nu\|_{L^2}\le c\,\mu[\bar k-b\eta,\,\bar k'-b'\eta,\,n]\,\mu^{-(1-\alpha)n}\|u\|_{L^2},$$
where $c$ is a positive constant independent of $n$.

Proof It is clear that for $\varphi\in X_n$,
$$\|\varphi\|_n=\sup_{v\in X_n}\frac{|(\varphi,v)_n|}{\|v\|_n}.$$
It follows that for $u\in H^m[0,1]$ with $0\le m\le k$,
$$\|\mathcal R_nX_nu\|_n=\sup_{v\in X_n}\frac{|(\mathcal R_nX_nu,v)_n|}{\|v\|_n}. \tag{6.29}$$
We need to pass from the estimate (6.29), in terms of the discrete norm, to an estimate in terms of the $L^2$-norm. To this end we need a norm equivalence result; that is, there exist positive constants $A$ and $B$ such that for all functions $x\in X_n$ and $n\in\mathbb N$ the inequality
$$A\|x\|_n\le\|x\|_{L^2}\le B\|x\|_n \tag{6.30}$$



holds. A proof of (6.30) is in order. Because $X_n\subseteq Q_n$, any function $x\in X_n$ has the representation
$$x(t)=\sum_{j\in\mathbb Z_{\gamma N}}x(t_j)\zeta_j(t).$$
It follows that
$$\|x\|^2_{L^2}=\sum_{i\in\mathbb Z_N}|\Delta_i|\,\mathbf x_i^{\mathsf T}\overline{\mathbf W}\mathbf x_i,$$
where $\Delta_i=[i/\mu^n,(i+1)/\mu^n]$, $\mathbf x_i=[x(F_i(t_j)):j\in\mathbb Z_\gamma]$ and
$$\overline{\mathbf W}=\Big[\int_I\zeta_\ell(\tau)\zeta_m(\tau)\,d\tau:\ell,m\in\mathbb Z_\gamma\Big].$$
Moreover, we have that
$$\|x\|_n^2=\sum_{j\in\mathbb Z_{\gamma N}}w_jx^2(t_j)=\sum_{i\in\mathbb Z_N}|\Delta_i|\,\mathbf x_i^{\mathsf T}\mathbf W\mathbf x_i.$$

Because both $\mathbf W$ and $\overline{\mathbf W}$ are positive definite matrices, (6.30) follows. Using (6.30) in (6.29), we have that
$$\|\mathcal R_nX_nu\|_2\le c\sup_{v\in X_n}\frac{|(\mathcal R_nX_nu,v)_n|}{\|v\|_2}=c\sup_{v\in X_n}\frac{|(\mathcal R_nX_nu,\mathcal Q'_nv)_n|}{\|v\|_2}.$$
Using the second estimate in Lemma 6.10, we conclude that
$$\|\mathcal R_nX_nu\|_2\le c\sup_{v\in X_n}\frac{|(\mathcal R_nX_nu,\mathcal Q'_nv)_n|}{\|\mathcal Q'_nv\|_2}.$$
Since $\mathcal Q'_n(X_n)=\mathcal Y_n(X_n)=Y_n$, we obtain that
$$\|\mathcal R_nX_nu\|_2\le c\sup_{v\in X_n}\frac{|(\mathcal R_nX_nu,\mathcal Y_nv)_n|}{\|\mathcal Y_nv\|_2}. \tag{6.31}$$

We now estimate $|(\mathcal R_nX_nu,\mathcal Y_nv)_n|$. Note that
$$X_nu=\sum_{(i,j)\in U_n}(u,\varphi_{ij})\varphi_{ij} \quad\text{and}\quad \mathcal Y_nv=\sum_{(i,j)\in U_n}(v,\psi_{ij})\psi_{ij}.$$
Let $R_{i'j',ij}=K_{i'j',ij}-\tilde K_{i'j',ij}$. We have that
$$(\mathcal R_nX_nu,\mathcal Y_nv)_n=\sum_{(i,j),(i',j')\in U_n}R_{i'j',ij}\,(u,\varphi_{ij})(v,\psi_{i'j'}).$$



Denoting $\Gamma=\mu[\bar k+m-b\eta,\,\bar k'-b'\eta,\,n]$ and $\Theta_{i'j',ij}=\Gamma^{-1}\mu^{(1-\alpha+m)n-mi}R_{i'j',ij}$, and defining the matrix $\boldsymbol\Theta_n=[\Theta_{i'j',ij}]$, we see that
$$|(\mathcal R_nX_nu,\mathcal Y_nv)_n|\le\Gamma\,\mu^{-(1-\alpha+m)n}\Big(\sum_{i\in\mathbb Z_{n+1}}\mu^{2mi}\sum_{j\in\mathbb Z_{w(i)}}|(u,\varphi_{ij})|^2\Big)^{1/2}\|\boldsymbol\Theta_n\|_2\,\|\mathcal Y_nv\|_2.$$

Now, for $u\in H^m(I)$ we have that
$$\mu^{2mi}\sum_{j\in\mathbb Z_{w(i)}}|(u,\varphi_{ij})|^2=\mu^{2mi}\|X_iu-X_{i-1}u\|^2_{L^2}\le C\|u\|^2_{H^m},$$
so
$$\sum_{i\in\mathbb Z_{n+1}}\mu^{2mi}\sum_{j\in\mathbb Z_{w(i)}}|(u,\varphi_{ij})|^2\le C(n+1)\|u\|^2_{H^m}.$$

Thus we conclude that for $u\in H^m(I)$,
$$|(\mathcal R_nX_nu,\mathcal Y_nv)_n|\le C\,\Gamma\,(n+1)^{\sigma(m)}\mu^{-(1-\alpha+m)n}\|\boldsymbol\Theta_n\|_2\,\|u\|_{H^m}\|\mathcal Y_nv\|_2 \tag{6.32}$$
holds, where
$$\sigma(m)=\begin{cases}\tfrac12, & 0<m\le k,\\[2pt] 0, & m=0.\end{cases}$$

We now estimate $\|\boldsymbol\Theta_n\|_2$, using Lemma 6.21 with the choice $\gamma_{ij}=\mu^{-i/2}$. We have from Lemma 6.20 that
$$\sum_{(i,j)\in U_n}|\Theta_{i'j',ij}|\gamma_{ij}\le\sum_{i\in\mathbb Z_{n+1}}\Gamma^{-1}\mu^{(1-\alpha+m)n-mi}\|\mathbf K_{i'i}-\tilde{\mathbf K}_{i'i}\|_\infty\,\mu^{-i/2}\le c\,\Gamma^{-1}\mu^{(1-\alpha)n}\sum_{i\in\mathbb Z_{n+1}}\mu^{m(n-i)}\mu^{-(\bar k-\frac12)i-(\bar k'+\frac12)i'}(\delta^n_{i'i})^{-\eta}\mu^{-i/2}.$$
The right-hand side of this inequality can be bounded by
$$c\,\Gamma^{-1}\gamma_{i'j'}\sum_{i\in\mathbb Z_{n+1}}\mu^{(\bar k+m-b\eta)(n-i)}\mu^{(\bar k'-b'\eta)(n-i')}.$$
This inequality and our hypothesis imply that
$$\sum_{(i,j)\in U_n}|\Theta_{i'j',ij}|\gamma_{ij}\le c\gamma_{i'j'}.$$



Similarly, using the second estimate in Lemma 6.20, we have that
$$\sum_{(i',j')\in U_n}|\Theta_{i'j',ij}|\gamma_{i'j'}\le\sum_{i'\in\mathbb Z_{n+1}}\Gamma^{-1}\mu^{(1-\alpha+m)n-mi}\|\mathbf K_{i'i}-\tilde{\mathbf K}_{i'i}\|_1\,\mu^{-i'/2}\le c\,\Gamma^{-1}\mu^{(1-\alpha)n}\sum_{i'\in\mathbb Z_{n+1}}\mu^{m(n-i)}\mu^{-(\bar k+\frac12)i-(\bar k'-\frac12)i'}(\delta^n_{i'i})^{-\eta}\mu^{-i'/2}\le c\,\Gamma^{-1}\gamma_{ij}\sum_{i'\in\mathbb Z_{n+1}}\mu^{(\bar k+m-b\eta)(n-i)}\mu^{(\bar k'-b'\eta)(n-i')},$$
which implies that
$$\sum_{(i',j')\in U_n}|\Theta_{i'j',ij}|\gamma_{i'j'}\le c\gamma_{ij}.$$

Hence Lemma 6.21 yields
$$\|\boldsymbol\Theta_n\|_2\le c. \tag{6.33}$$
Combining the inequalities (6.31)–(6.33) yields the estimates of this lemma.

The next result shows that the truncated scheme is uniquely solvable and stable.

Theorem 6.23 Let $\delta^n_{i'i}$ be chosen according to (6.28) with $b>\frac{\bar k+\alpha-1}{\eta}$, $b'>\frac{\bar k'+\alpha-1}{\eta}$ and $b+b'>1$. Then there exist a positive constant $c$ and a positive integer $N$ such that for $n\ge N$ and $x\in X_n$,
$$\|(\mathcal I-\tilde{\mathcal K}_n)x\|_2\ge c\|x\|_2.$$
In particular, equation (6.27) has a unique solution $\tilde u_n\in X_n$ for $n\ge N$, and the truncated scheme (6.27) is stable.

Proof By Proposition 6.16,
$$\|(\mathcal I-\mathcal Q_n\mathcal K_n)x\|_2\ge c_0\|x\|_2 \quad\text{for all } x\in X_n.$$
Thus, using the second estimate in Lemma 6.22, we have that
$$\|(\mathcal I-\tilde{\mathcal K}_n)x\|_2\ge\|(\mathcal I-\mathcal Q_n\mathcal K_n)x\|_2-\|\mathcal R_nx\|_2\ge(c_0-\varepsilon_n)\|x\|_2,$$
where $\varepsilon_n=\mu[\bar k-b\eta,\,\bar k'-b'\eta,\,n]\,\mu^{-(1-\alpha)n}\to 0$ as $n\to\infty$. The result of this theorem follows.



In the next theorem we present a result on the order of convergence of the approximate solution $\tilde u_n$ obtained from the truncated scheme. It is convenient to introduce an interpolation projection $\mathcal I_n$ from $C(I)$ to $X_n$: for $x\in C(I)$ we let $\mathcal I_nx$ be the interpolant of $x$ from $X_n$ such that $(\mathcal I_nx)(t_{ij})=x(t_{ij})$ for $j\in\mathbb Z_{w(i)}$, $i\in\mathbb Z_{n+1}$. We assume that the interpolation points $t_{ij}$ are chosen so that the interpolation problem has a unique solution. We also need the following error estimates: for $x\in W^{m,\infty}(I)$,
$$\|x-X_nx\|_2\le c\mu^{-mn}\|x\|_{H^m},$$
$$\|x-\mathcal I_nx\|_\infty\le c\mu^{-mn}\|x\|_{W^{m,\infty}}$$
and
$$\|(\mathcal K-\mathcal K_n)x\|_\infty\le c\mu^{-mn}\|x\|_{W^{m,\infty}}.$$

Theorem 6.24 Let $\delta^n_{i'i}$ be chosen according to (6.28) with $b\ge\frac{m+\bar k+\alpha-1}{\eta}$, $b'>\frac{\bar k'+\alpha-1}{\eta}$, $b+b'\ge 1+\frac{m}{\eta}$ and $0<m\le k$. Then there exist a positive constant $c$ and a positive integer $N$ such that for $n\ge N$,
$$\|u-\tilde u_n\|_{L^2}\le c\,s(n)^{-m}\big(\log(s(n))\big)^{\tau+\frac12}\|u\|_{W^{m,\infty}},$$
where $\tau=0$ except for $(b,b')=\big(\frac{m+\bar k+\alpha-1}{\eta},\frac{\bar k'}{\eta}\big)$, in which case $\tau=1$.

Proof To estimate the error of the solution $\tilde u_n$ of the truncated equation (6.27), we observe that Theorem 6.23 yields
$$\|X_nu-\tilde u_n\|_2\le c^{-1}\|(\mathcal I-\tilde{\mathcal K}_n)(X_nu-\tilde u_n)\|_2\le c^{-1}\big(\|\mathcal R_nX_nu\|_2+\|\mathcal Q_n(\mathcal K-\mathcal K_n)u\|_2+\|\mathcal Q_n(\mathcal I-\mathcal K_n)(X_nu-u)\|_2\big).$$
The first term has been estimated in Lemma 6.22. We deal with the last two terms. Recalling that $\|\mathcal Q_n\|_\infty$ is uniformly bounded, we conclude that
$$\|\mathcal Q_n(\mathcal K-\mathcal K_n)u\|_2\le\|\mathcal Q_n(\mathcal K-\mathcal K_n)u\|_\infty\le\|\mathcal Q_n\|_\infty\|(\mathcal K-\mathcal K_n)u\|_\infty\le c\mu^{-mn}\|u\|_{W^{m,\infty}}.$$
We next estimate the last term. Note that
$$\|\mathcal Q_n(\mathcal I-\mathcal K_n)(X_nu-u)\|_2\le\|\mathcal Q_n\|_\infty(1+\|\mathcal K_n\|_\infty)\|X_nu-u\|_\infty\le c\big(\|X_n(u-\mathcal I_nu)\|_\infty+\|\mathcal I_nu-u\|_\infty\big).$$

There is a point $t\in\Delta_j$ such that $t=F_j(\tau)$, where $\tau\in I$ and
$$\|X_n(u-\mathcal I_nu)\|_\infty=|X_n(u-\mathcal I_nu)(t)|.$$
It follows that
$$\|X_n(u-\mathcal I_nu)\|_\infty=\big|\big((X_n(u-\mathcal I_nu))\circ F_j\big)(\tau)\big|=\big|\big(\mathcal X((u-\mathcal I_nu)\circ F_j)\big)(\tau)\big|,$$
where the notation $\mathcal X$ was introduced in the proof of Lemma 6.11. Therefore
$$\|X_n(u-\mathcal I_nu)\|_\infty\le\|\mathcal X((u-\mathcal I_nu)\circ F_j)\|_\infty\le\|\mathcal X\|_{2,\infty}\|(u-\mathcal I_nu)\circ F_j\|_2\le\|\mathcal X\|_{2,\infty}\|(u-\mathcal I_nu)\circ F_j\|_\infty\le\|\mathcal X\|_{2,\infty}\|u-\mathcal I_nu\|_\infty,$$
where $\|\mathcal X\|_{2,\infty}$ denotes the operator norm of $\mathcal X$ from $L^2$ to $L^\infty$. Consequently,
$$\|X_nu-u\|_\infty\le(\|\mathcal X\|_{2,\infty}+1)\|u-\mathcal I_nu\|_\infty\le c\mu^{-mn}\|u\|_{W^{m,\infty}}.$$
Using these estimates and Lemma 6.22 we complete the proof.

The next result shows that the condition number of the coefficient matrix of the truncated scheme (6.27) is bounded by a constant independent of $n$.

Theorem 6.25 Let $\delta^n_{i'i}$ be chosen according to (6.28) with $b>\frac{\bar k+\alpha-1}{\eta}$, $b'>\frac{\bar k'+\alpha-1}{\eta}$ and $b+b'>1$. Then the condition number of the coefficient matrix of the truncated approximate equation (6.27) is bounded; that is, there exists a positive constant $c$ such that for $n\in\mathbb N$,
$$\mathrm{cond}(\mathbf E_n-\tilde{\mathbf K}_n)\le c.$$

Proof For $\mathbf e=[e_{ij}:(i,j)\in U_n]\in\mathbb R^{s(n)}$ and $\mathbf e'=[e'_{ij}:(i,j)\in U_n]\in\mathbb R^{s(n)}$, set
$$x=\sum_{(i,j)\in U_n}e_{ij}\varphi_{ij} \quad\text{and}\quad y=\sum_{(i,j)\in U_n}e'_{ij}\psi_{ij}.$$
It follows from (6.30) and $\|y\|_2=\|\mathbf e'\|_2$ that
$$\|(\mathbf E_n-\tilde{\mathbf K}_n)\mathbf e\|_2=\sup_{\mathbf e'\in\mathbb R^{s(n)},\,\|\mathbf e'\|_2=1}\big|\mathbf e'^{\mathsf T}(\mathbf E_n-\tilde{\mathbf K}_n)\mathbf e\big|=\sup_{y\in Y_n,\,\|y\|_2=1}\big|(y,(\mathcal I-\tilde{\mathcal K}_n)x)_n\big|\le c\big(\|(\mathcal I-\mathcal Q_n\mathcal K_nX_n)x\|_2+\|\mathcal R_nx\|_2\big).$$
From Lemma 6.11 we know that the operators $\mathcal K_nX_n:L^2(I)\to L^2(I)$ are uniformly bounded. By Lemma 6.14(i) we conclude that the norms $\|\mathcal I-\mathcal Q_n\mathcal K_nX_n\|_2$ are uniformly bounded. This fact, with the second estimate in Lemma 6.22, yields
$$\|(\mathbf E_n-\tilde{\mathbf K}_n)\mathbf e\|_2\le c\|x\|_2=c\|\mathbf e\|_2,$$
that is, $\|\mathbf E_n-\tilde{\mathbf K}_n\|\le c$.



Moreover, for any $\mathbf e'\in\mathbb R^{s(n)}$ we find $\mathbf e\in\mathbb R^{s(n)}$ such that $\mathbf e'=(\mathbf E_n-\tilde{\mathbf K}_n)\mathbf e$, and choose the unique $x\in X_n$ and $y\in Y_n$ such that for all $(r,\ell)\in U_n$, $\langle x,\varphi_{r\ell}\rangle=e_{r\ell}$ and $\langle y,\psi_{r\ell}\rangle=e'_{r\ell}$. Therefore we have that
$$\mathcal Q_ny=(\mathcal I-\tilde{\mathcal K}_n)x.$$
Since $\|x\|_2=\|\mathbf e\|_2$ and $\|y\|_2=\|\mathbf e'\|_2$, we have by Theorem 6.23 that
$$c\|(\mathbf E_n-\tilde{\mathbf K}_n)^{-1}\mathbf e'\|_2=c\|\mathbf e\|_2=c\|x\|_2\le\|(\mathcal I-\tilde{\mathcal K}_n)x\|_2=\|\mathcal Q_ny\|_2\le\|\mathcal Q_n\|_{2,Y_n}\|y\|_2,$$
which proves that
$$\|(\mathbf E_n-\tilde{\mathbf K}_n)^{-1}\|_2\le c^{-1}\|\mathcal Q_n\|_{2,Y_n},$$
where $\|\mathcal Q_n\|_{2,Y_n}$ denotes the norm of $\mathcal Q_n$ restricted to $Y_n$. It follows from Lemma 6.10 that $\|(\mathbf E_n-\tilde{\mathbf K}_n)^{-1}\|_2$ is bounded by a constant.

For any matrix $\mathbf A$ we use $\mathcal N(\mathbf A)$ to denote the number of nonzero entries of $\mathbf A$. Employing a standard argument (see, for example, Section 5.3.1 and [64]), we conclude that when $b$, $b'$ are real numbers and $\delta^n_{i'i}$ is chosen according to (6.28) with $\eta=\alpha+\bar k+\bar k'-1$, there exists a positive constant $c$ such that for all $n\in\mathbb N$,
$$\mathcal N(\mathbf E_n-\tilde{\mathbf K}_n)\le c\,\mu^n\big(n+\mu[b-1,\,b'-1,\,n]\big).$$
The choice $b=1$ and $b'\in\big(\frac{\bar k'+\alpha-1}{\eta},1\big)$ results in
$$\mu[b-1,\,b'-1,\,n]=O(n),\qquad n\to\infty.$$

Theorem 6.26 Suppose that $k''\ge 2k-1$, $m=k$ and $\bar k=k$. Choose $b=1$ and $b'\in\big(\frac{\bar k'+\alpha-1}{\eta},1\big)$. Then there hold the stability estimate in Theorem 6.23, the error estimate in Theorem 6.24 with $\tau=0$, the boundedness of the condition number in Theorem 6.25 and the following asymptotic estimate of the complexity:
$$\mathcal N(\mathbf E_n-\tilde{\mathbf K}_n)=O\big(s(n)\log(s(n))\big)$$
as $n\to\infty$.

6.2.4 A numerical example

In this subsection we present a numerical example of the DMPG algorithm applied to a boundary integral equation, to illustrate the convergence order, matrix compression and computational complexity of the method.



We consider the reformulation of the following boundary value problem for the Laplace equation as an integral equation. The boundary value problem is given by
$$\Delta u(P)=0,\qquad P\in D,$$
$$\frac{\partial u(P)}{\partial n_P}=-cu(P)+g(P),\qquad P\in\Gamma:=\partial D,$$

where $D$ is a bounded, simply connected, open region in $\mathbb R^2$ with a smooth boundary $\Gamma$, $n_P$ is the exterior unit normal to $\Gamma$ at $P$, $g$ is a given continuous function on the boundary and $c$ is a positive constant. We seek a solution $u\in C^2(D)\cap C^1(\bar D)$ of the boundary value problem. Following Section 2.1.3 (see also [12, 270]), we employ the Green representation formula for harmonic functions and rewrite the above problem as a boundary integral equation:
$$u(P)-(\mathcal Au)(P)-(\mathcal Bu)(P)=-(\mathcal Bg)(P),\qquad P\in\Gamma,$$
where
$$(\mathcal Au)(P)=\frac{1}{\pi}\int_\Gamma u(Q)\frac{\partial}{\partial n_Q}\log|P-Q|\,d\sigma(Q),\qquad P\in\Gamma,$$
and
$$(\mathcal Bu)(P)=\frac{1}{\pi}\int_\Gamma u(Q)\log|P-Q|\,d\sigma(Q),\qquad P\in\Gamma.$$

To convert the above boundary integral equation to an integral equation on an interval, we introduce a parametrization $r(t)=(\xi(t),\eta(t))$, $0\le t\le 2\pi$, of the boundary $\Gamma$. We assume that each component of $r$ is a $2\pi$-periodic function in $C^2$ and that $|r'(t)|=\sqrt{\xi'(t)^2+\eta'(t)^2}\ne 0$ for $0\le t\le 2\pi$. Using this parametrization, we convert the above equation to the following equivalent one:
$$(u\circ r)(t)-\big(\mathcal K(u\circ r)\big)(t)=-\frac{1}{c}\big(\mathcal B(g\circ r)\big)(t),\qquad t\in[0,2\pi],$$
where
$$(\mathcal Kv)(t)=\int_0^{2\pi}k(t,s)v(s)\,ds,$$
$$k(t,s)=\frac{c}{\pi}|r'(s)|\log|r(t)-r(s)|+\frac{1}{\pi}\,\frac{\eta'(s)[\xi(s)-\xi(t)]-\xi'(s)[\eta(s)-\eta(t)]}{[\xi(s)-\xi(t)]^2+[\eta(s)-\eta(t)]^2}$$
and "$\circ$" stands for function composition. This kernel has a weak singularity along its diagonal; a detailed discussion of its regularity is given in [270].
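For the elliptical boundary used in the next subsection, $r(t)=(\cos t,\,2\sin t)$, the two parts of $k(t,s)$ behave very differently near the diagonal: the double-layer quotient has a finite limit as $s\to t$ (its value at $t=0$ works out to $1/(4\pi)$), while the logarithmic part diverges. A small numerical check (illustrative only; $c=1$ is assumed, and the constant factors are as reconstructed above):

```python
import numpy as np

xi   = lambda t: np.cos(t)        # parametrization r(t) = (xi(t), eta(t)) of the ellipse
eta  = lambda t: 2 * np.sin(t)
dxi  = lambda t: -np.sin(t)
deta = lambda t: 2 * np.cos(t)

def double_layer(t, s):
    """the second (double-layer) part of k(t, s); bounded near the diagonal."""
    num = deta(s) * (xi(s) - xi(t)) - dxi(s) * (eta(s) - eta(t))
    den = (xi(s) - xi(t)) ** 2 + (eta(s) - eta(t)) ** 2
    return num / den / np.pi

def log_part(t, s, c=1.0):
    """the logarithmic part of k(t, s); diverges as s -> t."""
    rp = np.hypot(dxi(s), deta(s))
    return c / np.pi * rp * np.log(np.hypot(xi(t) - xi(s), eta(t) - eta(s)))

t = 0.0
for h in (1e-2, 1e-3, 1e-4):
    print(f"h={h:.0e}:  double-layer={double_layer(t, t + h):.6f}  "
          f"log term={log_part(t, t + h):.3f}")
```

The double-layer values settle to about $1/(4\pi)\approx 0.0796$, while the log term keeps growing in magnitude; the singularity of $k$ is thus purely logarithmic, i.e. weak.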



In the numerical example presented below we consider the boundary value problem described above with
$$D=\Big\{(x,y):x^2+\Big(\frac{y}{2}\Big)^2<1\Big\}.$$
For comparison purposes we consider the boundary value problem with the known exact solution $u(P)=e^x\cos y$, $P\in D$, and compute the function $g$ accordingly. This leads to
$$g(P)=\frac{\partial}{\partial n_P}(e^x\cos y)+e^x\cos y,\qquad P=(x,y)\in\Gamma.$$

The equivalent boundary integral equation is given by (for details see [270])
$$u(s)-\int_I K(s,t)u(t)\,dt=f(s),\qquad s\in I=[0,1],$$
where $u(s)$ represents the value $u(P)$ at the point $P=(\cos 2\pi s,\,2\sin 2\pi s)$, $s\in I$, the right-hand-side function $f$ is also determined by the exact solution $u$, and the kernel is given by
$$K(s,t)=\sqrt{1+3\cos^2 2\pi t}\,\log\big[(\cos 2\pi t-\cos 2\pi s)^2+4(\sin 2\pi t-\sin 2\pi s)^2\big]+\frac{8\sin^2\pi(t-s)}{4\sin^2\pi(t-s)+3(\sin 2\pi t-\sin 2\pi s)^2}.$$

To apply the DMPG method we choose $k=\mu=2$, $k'=\nu=1$ and $k''=3$ (so that $\bar k=2$), and choose the initial trial and test spaces as
$$X_0=\mathrm{span}\{\varphi_{00},\varphi_{01}\} \quad\text{and}\quad Y_0=\mathrm{span}\{\psi_{00},\psi_{01}\},$$
respectively, where
$$\varphi_{00}(t)=1,\qquad \varphi_{01}(t)=\sqrt3\,(2t-1)$$
and
$$\psi_{00}(t)=\begin{cases}\sqrt2, & 0\le t\le\tfrac12,\\[2pt] 0, & \tfrac12<t\le 1,\end{cases}\qquad \psi_{01}(t)=\begin{cases}0, & 0\le t\le\tfrac12,\\[2pt] \sqrt2, & \tfrac12<t\le 1.\end{cases}$$

The initial spaces
$$W_0=\mathrm{span}\{\varphi_{10},\varphi_{11}\},\qquad W'_0=\mathrm{span}\{\psi_{10},\psi_{11}\}$$
are given for $t\in I$ by the equations
$$\varphi_{10}(t)=\begin{cases}6t-1, & 0\le t\le\tfrac12,\\[2pt] 6t-5, & \tfrac12<t\le 1,\end{cases}\qquad \varphi_{11}(t)=\begin{cases}\sqrt3\,(4t-1), & 0\le t\le\tfrac12,\\[2pt] \sqrt3\,(3-4t), & \tfrac12<t\le 1,\end{cases}$$
$$\psi_{10}(t)=\begin{cases}\sqrt2, & 0\le t\le\tfrac14,\\[2pt] -\sqrt2, & \tfrac14<t\le\tfrac12,\\[2pt] 0, & \tfrac12<t\le 1,\end{cases}\qquad \psi_{11}(t)=\begin{cases}0, & 0\le t\le\tfrac12,\\[2pt] \sqrt2, & \tfrac12<t\le\tfrac34,\\[2pt] -\sqrt2, & \tfrac34<t\le 1.\end{cases}$$
The bases for $W_n$, $W'_n$, $n\in\mathbb N$, are generated recursively as described in Chapter 4.
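As a consistency check (not part of the book), the following snippet verifies numerically that $\{\varphi_{00},\varphi_{01},\varphi_{10},\varphi_{11}\}$ is an orthonormal system on $[0,1]$ and that the level-one functions have $k=2$ vanishing moments.

```python
import numpy as np

xg, wg = np.polynomial.legendre.leggauss(8)
panels = [(a / 4, (a + 1) / 4) for a in range(4)]   # quarter panels respect all breakpoints

def integrate(f):
    """Gauss quadrature of f over [0,1], exact for the piecewise polynomials below."""
    total = 0.0
    for a, b in panels:
        x = a + (b - a) * (xg + 1) / 2
        total += (b - a) / 2 * np.sum(wg * f(x))
    return total

phi00 = lambda t: np.ones_like(t)
phi01 = lambda t: np.sqrt(3) * (2 * t - 1)
phi10 = lambda t: np.where(t <= 0.5, 6 * t - 1, 6 * t - 5)
phi11 = lambda t: np.sqrt(3) * np.where(t <= 0.5, 4 * t - 1, 3 - 4 * t)
basis = [phi00, phi01, phi10, phi11]

G = np.array([[integrate(lambda t: f(t) * g(t)) for g in basis] for f in basis])
print(np.allclose(G, np.eye(4)))       # orthonormal system
moments = [integrate(lambda t: t ** l * w(t)) for w in (phi10, phi11) for l in (0, 1)]
print(np.allclose(moments, 0))         # two vanishing moments (k = 2) of phi_10, phi_11
```

The same style of check, with the breakpoints $\tfrac14$ and $\tfrac34$ included in the panels, confirms that $\psi_{10}$ and $\psi_{11}$ are normalized and have one vanishing moment ($k'=1$).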

We report numerical results of the truncated DMPG scheme applied to the integral equation given above in the following table. We define the compression rate as the number of nonzero entries of the matrix $\tilde{\mathbf K}_n$ divided by the total number of entries of the matrix before truncation. In the table we use $O_n$, $C_n$ and $R_n$ to denote, respectively, the order of convergence in the $L^2$-norm, the condition number of the truncated coefficient matrix and the compression rate.

n      7       8       9
O_n            1.877   1.829
C_n    14.02   15.07   16.04
R_n    0.43    0.35    0.19

6.3 Bibliographical remarks

The Petrov–Galerkin and iterated Petrov–Galerkin methods for Fredholm integral equations of the second kind were originally studied in [77] (cf. Section 3.5), where the useful notions of the generalized best approximation and of a regular pair of subspaces were introduced to analyze the convergence of the methods. One advantage of the Petrov–Galerkin method is that it allows us to achieve the same order of convergence as the Galerkin method for the same trial space, with much less computational cost, by choosing the test spaces as spaces of piecewise polynomials of degree lower than that for the trial space. However, the practical use of the Petrov–Galerkin method requires the numerical computation of the integrals appearing in the resulting linear system. In general, the entries of the coefficient matrix are singular integrals and they may not be evaluated exactly. To overcome this difficulty, a theoretical framework for the analysis of a large class of numerical schemes, including the discrete



Galerkin, Petrov–Galerkin, collocation and quadrature methods, was developed in [80] (cf. Section 3.5.3).

The multiscale Petrov–Galerkin schemes presented in Section 6.1 were originally developed in [64] to solve weakly singular Fredholm integral equations of the second kind. They are based on the discontinuous orthogonal multiscale basis functions constructed in [200] and [201], and lead to a fast algorithm for the numerical solution of the equations. Additional information on multiscale Petrov–Galerkin algorithms can be found in [146, 147].

The discrete multiscale Petrov–Galerkin method for integral equations presented in Section 6.2 was introduced in [68] by employing the product integration method. The computational complexity, stability and convergence of the method based on the compression strategy are analyzed using the framework developed in [80]. For the classical product integration method for solving Fredholm integral equations of the second kind, the reader is referred to [15]. Some early work on the discrete Galerkin method can be found in [16] and [24].


7

Multiscale collocation methods

The purpose of this chapter is to present a multiscale collocation method for solving Fredholm integral equations of the second kind with weakly singular kernels. Among conventional numerical methods for solving integral equations, the collocation method receives the most favorable attention in engineering applications due to its lower computational cost in generating the coefficient matrix of the corresponding discrete equations. In comparison, the implementation of the Galerkin method requires much more computational effort for the evaluation of integrals (see, for example, [12, 19, 77]). Nonetheless, it seems that most attention in multiscale and wavelet methods for boundary integral equations has been paid to Galerkin methods or Petrov–Galerkin methods (see [28, 64, 95] and the references cited therein). These methods are amenable to $L^2$ analysis, and therefore the vanishing moments of the multiscale basis functions naturally lead to matrix truncation techniques. For collocation methods the appropriate context to work in is the $L^\infty$ space, and this provides challenging technical obstacles for the identification of good matrix truncation strategies. Following [69], we present a construction of multiscale basis functions and corresponding multiscale collocation functionals, both having vanishing moments. These basis functions and collocation functionals lead to a numerically sparse matrix representation of the Fredholm integral operator. A proper truncation of such a numerically sparse matrix results in a fast numerical algorithm for solving the equation which preserves the optimal convergence order up to a logarithmic factor.

In Section 7.1 we describe the multiscale basis functions and the corresponding functionals needed for developing the fast algorithm. Section 7.2 is devoted to a presentation of the multiscale collocation method. We analyze the proposed method in Section 7.3, giving estimates for the convergence order and the computational cost, and a bound for the condition number of the related coefficient matrix.

Downloaded from www.cambridge.org/core, Lund University Libraries, on 17 Oct 2016, subject to the Cambridge Core terms of use, available at www.cambridge.org/core/terms. http://dx.doi.org/10.1017/CBO9781316216637.009

Multiscale collocation methods

7.1 Multiscale basis functions and collocation functionals

Multiscale collocation methods require the availability of multiscale basis functions and collocation functionals which have vanishing moments of certain degrees. In this section we present a construction of multiscale basis functions and the corresponding collocation functionals for the multiscale collocation methods. We also study properties of these functions and functionals.

7.1.1 A construction of multiscale basis functions and functionals

The solution spaces of the integral equation will be piecewise polynomials on multiscale partitions. Let us first recall the method we used in previous chapters to generate multiscale partitions of an invariant set in d-dimensional Euclidean space Rd. We start with a positive integer μ and a family Φ = {φe : e ∈ Zμ} of contractive affine mappings on Rd. There exists a unique invariant set Ω ⊂ Rd associated with this family of mappings such that

Φ(Ω) = Ω, (7.1)

where

Φ(Ω) = ⋃_{e∈Zμ} φe(Ω).

We are interested in the cases where Ω has a simple structure, including, for example, the cube and the simplex in Rd. With these cases in mind, we make the following additional restrictions on the family of mappings.

(a) For every e ∈ Zμ, the mapping φe has a continuous inverse on Ω.
(b) The set Ω has nonempty interior and

meas(φe(Ω) ∩ φe′(Ω)) = 0, e, e′ ∈ Zμ, e ≠ e′.

We use Φ to obtain a sequence of multiscale partitions {Ωn : n ∈ N0} of the set Ω in the following way. Given any e = (ej : j ∈ Zn) ∈ Zμ^n, we define the mapping

φe = φ_{e0} ∘ φ_{e1} ∘ ··· ∘ φ_{e_{n−1}}

and the number

μ(e) = μ^{n−1} e0 + ··· + μ e_{n−2} + e_{n−1}.

Note that every i ∈ Zμ^n can be written uniquely as i = μ(e) for some e ∈ Zμ^n.

From equation (7.1) and conditions (a) and (b) it follows that the collection


of sets

Ωn = {Ω_{n,e} : Ω_{n,e} = φe(Ω), e ∈ Zμ^n}

forms a partition of Ω. We require that this partition has the following property:

(c) There exist positive constants c− and c+ such that for all n ∈ N0,

c− μ^{−n/d} ≤ max{d(Ω_{n,e}) : e ∈ Zμ^n} ≤ c+ μ^{−n/d}. (7.2)
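To make the index map μ(e) and the cells φe(Ω) concrete, here is a small sketch (our own illustration under the assumptions d = 1, Ω = [0, 1], μ = 2, φe(t) = (t + e)/2, the choices of case (1) later in this section; it is not a general implementation). At level n, the cell with index μ(e) is exactly [i/μ^n, (i + 1)/μ^n], so (7.2) holds with c− = c+ = 1.

```python
from itertools import product

mu, n = 2, 4  # two contractive maps phi_e(t) = (t + e)/2 on [0, 1], level n

def phi_e(e, t):
    # phi_e = phi_{e_0} o phi_{e_1} o ... o phi_{e_{n-1}} applied to t
    for ej in reversed(e):
        t = (t + ej) / mu
    return t

def mu_index(e):
    # mu(e) = mu^{n-1} e_0 + ... + mu e_{n-2} + e_{n-1}
    i = 0
    for ej in e:
        i = mu * i + ej
    return i

cells = {mu_index(e): (phi_e(e, 0.0), phi_e(e, 1.0))
         for e in product(range(mu), repeat=n)}

# e -> mu(e) enumerates Z_{mu^n}, and cell mu(e) is [i/mu^n, (i+1)/mu^n]
for i, (a, b) in sorted(cells.items()):
    assert abs(a - i / mu**n) < 1e-12
    assert abs(b - (i + 1) / mu**n) < 1e-12
    assert abs((b - a) - mu**(-n)) < 1e-12   # diameter mu^{-n/d} with d = 1
```

Each composition is applied right to left, matching φe = φ_{e0} ∘ ··· ∘ φ_{e_{n−1}}, and μ(e) is simply the base-μ number with digits e0, . . . , e_{n−1}.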

On this partition we consider piecewise polynomials. Choose a positive integer k and, for n ∈ N, let Xn be the space of all functions whose restrictions to any cell Ω_{n,e}, e ∈ Zμ^n, are polynomials of total degree ≤ k − 1. Here we use the convention that for n = 0 the set Ω is the only cell in the partition, so that

m = dim X0 = C(k + d − 1, d),

the binomial coefficient.

We must generate a suitable multiscale decomposition of Xn. To this end, let G0 = {tj : j ∈ Zm} be a finite set of distinct points in Ω which is refinable relative to the mappings Φ, that is, G0 satisfies

G0 ⊆ Φ(G0).

Set

G1 = Φ(G0), V1 = G1 \ G0 = {tm+j : j ∈ Zr},

with r = (μ − 1)m. Now we require that there exists a basis {ψj : j ∈ Zm} of X0,

X0 = span{ψj : j ∈ Zm},

satisfying the Lagrange interpolation conditions

ψi(tj) = δij, i, j ∈ Zm. (7.3)

A construction of refinable points {tj : j ∈ Zm} ⊂ Ω that admit unique d-dimensional Lagrange interpolation is presented in [198].
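The requirement G0 ⊆ Φ(G0) together with (7.3) is easy to check by hand in the simplest setting. The sketch below (our own illustration for d = 1, k = 2, μ = 2 and the points ti = (i + 1)/3 that appear in case (1) later in this section; it is not the general construction of [198]) verifies refinability and the Lagrange conditions.

```python
t = [1/3, 2/3]                       # G0; refinable: 1/3 = phi_0(2/3), 2/3 = phi_1(1/3)
phi = [lambda s: s / 2, lambda s: (s + 1) / 2]

# linear Lagrange basis on G0: psi_i(t_j) = delta_ij, as in (7.3)
psi = [lambda s: (t[1] - s) / (t[1] - t[0]),
       lambda s: (s - t[0]) / (t[1] - t[0])]

# refinability: G0 is contained in Phi(G0) = {phi_e(t_j)}
G1 = sorted(p(tj) for p in phi for tj in t)       # {1/6, 1/3, 2/3, 5/6}
assert all(any(abs(tj - g) < 1e-12 for g in G1) for tj in t)

# Lagrange interpolation conditions (7.3)
for i in range(2):
    for j in range(2):
        assert abs(psi[i](t[j]) - (1.0 if i == j else 0.0)) < 1e-12
```

Here V1 = G1 \ G0 = {1/6, 5/6} supplies the r = (μ − 1)m = 2 new points of level one.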

With this basis of X0 at hand, we generate a multiscale basis for Xn in the following way. For this purpose we introduce linear operators Te : X → X, e ∈ Zμ, defined by

(Te x)(t) = x(φe^{−1}(t)) χ_{φe(Ω)}(t), t ∈ Ω,

where χS denotes the characteristic function of a set S. It follows that

Xn = ⊕_{e∈Zμ} Te X_{n−1}, n ∈ N,


where A ⊕ B denotes the direct sum of the spaces A and B. We assume that the functions ψm+j ∈ X1, j ∈ Zr, satisfy

ψm+j(ti) = 0, i ∈ Zm, j ∈ Zr; ψm+j(tm+j′) = δjj′, j, j′ ∈ Zr. (7.4)

Then the functions ψj, j ∈ Zq, form a basis for X1, where q = m + r.

We require another basis for X1, consisting of functions with vanishing moments. To this end we set

w0j = ψj, j ∈ Zm.

For j ∈ Zr we find a vector [cjs : s ∈ Zq] ∈ Rq such that

w1j = Σ_{s∈Zq} cjs ψs, j ∈ Zr, (7.5)

satisfies the equations

(w1j, ψj′) = 0, j′ ∈ Zm, j ∈ Zr, (7.6)

where (·, ·) denotes the inner product in L2(Ω). Since (7.6) is a linear system of rank m with m equations and q unknowns, there exist r linearly independent solutions of this system, which we denote by w1j, j ∈ Zr. These functions form a basis for the orthogonal complement of X0 in X1, denoted by W1. Let

W_{i+1} = ⊕_{e∈Zμ} Te Wi, i ∈ N.

Then Xn has the decomposition (see Chapter 4)

Xn = X0 ⊕⊥ W1 ⊕⊥ ··· ⊕⊥ Wn.

We denote W0 = X0, w(i) = dim Wi, i ∈ N0, and s(n) = dim Xn, n ∈ N0, as before.
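Numerically, the passage from X0 to the vanishing-moment functions w1j is a null-space computation: the coefficients in (7.5) are the solutions of the m-equation system (7.6). The sketch below (our own illustration for k = 2, μ = 2 on [0, 1], using a convenient nodal basis for X1 rather than the ψj constructed above) assembles the m × q moment matrix exactly and extracts r = q − m independent null vectors.

```python
import numpy as np

# Nodal basis for X1: piecewise linears with a break at 1/2,
#   b0 = 1, b1 = t on [0, 1/2);  b2 = 1, b3 = t on [1/2, 1].
# Exact moments (b_s, p) in L2(0, 1) for p = 1 and p = t.
M = np.array([[1/2, 1/8,  1/2, 3/8 ],     # against p(t) = 1
              [1/8, 1/24, 3/8, 7/24]])    # against p(t) = t

# Null space of the m x q moment matrix gives r = q - m = 2 vectors;
# each c yields w = sum_s c_s b_s with vanishing moments, as in (7.5)-(7.6).
_, _, Vt = np.linalg.svd(M)
null = Vt[2:]
assert np.allclose(M @ null.T, 0)

# check the vanishing moments by midpoint quadrature
N = 200_000
tm = (np.arange(N) + 0.5) / N
for c in null:
    w = np.where(tm < 0.5, c[0] + c[1] * tm, c[2] + c[3] * tm)
    assert abs(w.sum() / N) < 1e-7           # (w, 1) = 0
    assert abs((w * tm).sum() / N) < 1e-7    # (w, t) = 0
```

Any basis of the null space serves; the particular w1j of the text are one such choice, selected so that the functionals of the next subsection can be paired with them.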

Let us now turn our attention to the construction of multiscale collocation functionals. We start by recalling our notational conventions. For a compact subset Ω of the d-dimensional Euclidean space Rd, we let X = L∞(Ω), V = C(Ω), and denote the dual space of V by V∗. For any s ∈ Ω we use δs to denote the linear functional in V∗ defined for v ∈ V by the equation ⟨δs, v⟩ = v(s). We need to evaluate δs on functions in X. Therefore, as in [21], we take any norm-preserving extension of δs to X and use the same notation for the extension. In particular, this convention allows us to evaluate piecewise polynomials anywhere on Ω. We begin by defining

ℓ0j = δtj, j ∈ Zm,


and for j′ ∈ Zr we find the vector [c′j′s : s ∈ Zq] such that

ℓ1j′ = Σ_{s∈Zq} c′j′s δts, j′ ∈ Zr, (7.7)

satisfies the equations

⟨ℓ1j′, w0j⟩ = 0, j ∈ Zm, j′ ∈ Zr, (7.8)
⟨ℓ1j′, w1j⟩ = δjj′, j ∈ Zr, j′ ∈ Zr. (7.9)

For each j′ ∈ Zr, the q × q coefficient matrix of this linear system is given by

A = [⟨δ_{ti′j′}, wij⟩ : (i, j), (i′, j′) ∈ U1],

where t1j = tm+j, j ∈ Zr. Let us prove that the matrix A is nonsingular. To this end, we assume that there are constants aij, (i, j) ∈ U1, such that

Σ_{(i,j)∈U1} aij ⟨δ_{ti′j′}, wij⟩ = 0, (i′, j′) ∈ U1,

that is,

⟨δ_{ti′j′}, Σ_{(i,j)∈U1} aij wij⟩ = 0, (i′, j′) ∈ U1.

Since the set G1 is Lagrange admissible relative to (Ω, X1) (see Section 4.5.1), we conclude that

Σ_{(i,j)∈U1} aij wij = 0,

and therefore aij = 0, (i, j) ∈ U1. This proves that A is nonsingular.

We find it convenient to write equations (7.8) and (7.9) in matrix form. For this purpose we introduce the matrices

B = [⟨δti, ψj⟩ : i, j ∈ Zq],
B̃ = [⟨δtm+i, ψj⟩ : i ∈ Zr, j ∈ Zm],
C1 = [cjs : j ∈ Zr, s ∈ Zm], C2 = [c_{j,m+s} : j ∈ Zr, s ∈ Zr],
C′1 = [c′js : j ∈ Zr, s ∈ Zm], C′2 = [c′_{j,m+s} : j ∈ Zr, s ∈ Zr],

and

C = [C1, C2], C′ = [C′1, C′2].

The next lemma gives a relationship between the matrices C and C′.


Lemma 7.1 The following relations hold:

C′1 = −C′2 B̃, C′2 = (C2^T)^{−1}.

Proof It follows from equations (7.8) and (7.9) that

C′ B [Im, O_{m×r}]^T = O_{r×m}

and

C′ B C^T = Ir,

where O_{m×r} denotes the m × r zero matrix and Im denotes the m × m identity matrix. The properties of the basis {ψj : j ∈ Zq} and the functionals δtj, j ∈ Zq, described above in equations (7.3) and (7.4) imply that

B = [ Im   O_{m×r}
      B̃    Ir     ],

and hence

C′1 + C′2 B̃ = O_{r×m}, (C′1 + C′2 B̃) C1^T + C′2 C2^T = Ir,

from which the result follows.
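Lemma 7.1 can be confirmed numerically. The sketch below (a generic check with random matrices, not the book's actual B̃, C and C′) builds B with the block structure forced by (7.3)-(7.4), defines C′ by the formulas of the lemma, and verifies that it satisfies the matrix forms of (7.8) and (7.9).

```python
import numpy as np

rng = np.random.default_rng(0)
m, r = 3, 4
q = m + r

Btil = rng.standard_normal((r, m))            # the r x m block \tilde{B}
B = np.block([[np.eye(m), np.zeros((m, r))],
              [Btil,      np.eye(r)]])        # structure forced by (7.3)-(7.4)

C1 = rng.standard_normal((r, m))
C2 = rng.standard_normal((r, r))              # nonsingular with probability 1
C = np.hstack([C1, C2])

# Lemma 7.1: C'_2 = (C_2^T)^{-1} and C'_1 = -C'_2 * Btil
C2p = np.linalg.inv(C2.T)
C1p = -C2p @ Btil
Cp = np.hstack([C1p, C2p])

# matrix form of (7.8): the first m columns of C'B vanish
assert np.allclose((Cp @ B)[:, :m], 0)
# matrix form of (7.9): C' B C^T = I_r
assert np.allclose(Cp @ B @ C.T, np.eye(r))
```

Note that the check succeeds for any C1, reflecting the fact that only C2 enters the formula for C′2.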

We next describe the construction of a basis for Wi, i ∈ N. To this end, for e = [ej : j ∈ Zn] ∈ Zμ^n we introduce the composition operator Te by

Te = T_{e0} ∘ ··· ∘ T_{e_{n−1}}.

For i = 2, 3, . . . , n we let

wij = Te w1l, j = μ(e)r + l, e ∈ Zμ^{i−1}, l ∈ Zr, (7.10)

and

Wi = span{wij : j ∈ Zw(i)}.

Observe that the support of wij is contained in Sij = φe(Ω), j ∈ Zw(i).

To generate the multiscale collocation functionals, we introduce for any e ∈ Zμ a linear operator Le : X∗ → X∗ defined by the equation

⟨Le ℓ, v⟩ = ⟨ℓ, v ∘ φe⟩, v ∈ X, ℓ ∈ X∗.

Moreover, for e = [ej : j ∈ Zn] ∈ Zμ^n we define the composition operator

Le = L_{e0} ∘ ··· ∘ L_{e_{n−1}}.

Consequently, for any e, e′ ∈ Zμ^i, w ∈ X and ℓ ∈ X∗, we have that

⟨Le ℓ, T_{e′} w⟩ = ⟨ℓ, w⟩ δ_{ee′}. (7.11)


In addition, for i > 1, j = μ(e)r + l, e ∈ Zμ^{i−1}, l ∈ Zr, we define

ℓij = Le ℓ1l (7.12)

and observe that

⟨ℓij, v⟩ = ⟨ℓ1l, v ∘ φe⟩ = Σ_{s∈Zq} c′ls v(φe(ts)).

Note that the “support” of ℓij is also contained in Sij.

7.1.2 Properties of basis functions and collocation functionals

We discuss the properties of the multiscale basis functions and their corresponding collocation functionals constructed in the last subsection. To this end, we partition the matrix En into a block matrix

En = [E_{i′i} : i′, i ∈ Z_{n+1}],

where

E_{i′i} = [E_{i′j′,ij} : j′ ∈ Zw(i′), j ∈ Zw(i)], E_{i′j′,ij} = ⟨ℓi′j′, wij⟩,

and in the next lemma we relate the norm of the matrix E_{i′i} to that of E_{1,i−i′+1}.

Lemma 7.2 If i′, i ∈ N with i > i′, then

‖E_{i′i}‖∞ = ‖E_{1,i−i′+1}‖∞. (7.13)

Proof From the definitions of ℓi′j′ and wij, for (i′, j′), (i, j) ∈ Un with i > i′ we obtain that there exist e ∈ Zμ^{i−1}, e′ ∈ Zμ^{i′−1} and l, l′ ∈ Zr such that

⟨ℓi′j′, wij⟩ = ⟨Le′ ℓ1l′, Te w1l⟩.

We introduce the vectors

e1 = [ej : j ∈ Z_{i′−1}], e2 = [ej : j ∈ Z_{i−1} \ Z_{i′−1}],

and conclude from (7.11) that

⟨ℓi′j′, wij⟩ = ⟨ℓ1l′, T_{e2} w1l⟩ δ_{e′e1}.

Let j0 = μ(e2)r + l; we then obtain that

⟨ℓi′j′, wij⟩ = ⟨ℓ1l′, w_{i−i′+1,j0}⟩ δ_{e′e1}.

Consequently, we have that

Σ_{j∈Zw(i)} |⟨ℓi′j′, wij⟩| = Σ_{j∈Zw(i−i′+1)} |⟨ℓ1l′, w_{i−i′+1,j}⟩|,

which proves the lemma.


Lemma 7.3 The condition

Σ_{m∈Zw(n)} |⟨ℓn′m′, wnm⟩| ≤ γ, (n′, m′) ∈ U, n > n′,

is satisfied with

γ = max{‖C1‖1, ‖C′‖∞ ‖C1‖1}.

Proof By Lemma 7.2, it suffices to prove for i ∈ N that

‖E0i‖∞ ≤ ‖C1‖1 (7.14)

and

‖E_{1,i+1}‖∞ ≤ ‖C′‖∞ ‖C1‖1. (7.15)

Recall the definition

E_{1,i+1} = [⟨ℓ1l′, w_{i+1,j}⟩ : l′ ∈ Zr, j ∈ Zw(i+1)].

We need to decompose this matrix. This is done by using equation (7.7) to write

ℓ1l′ = Σ_{s′∈Zq} c′l′s′ δ_{ts′}, l′ ∈ Zr.

Thus it follows from (7.5) and (7.10), for any j ∈ Zw(i+1), that there exists a unique pair ej ∈ Zμ^i and l ∈ Zr such that

w_{i+1,j} = Σ_{s∈Zq} cls T_{ej} ψs.

Since for any s′ ∈ Zq, ej ∈ Zμ^i, i ∈ N and s = m, . . . , q − 1,

⟨δ_{ts′}, T_{ej} ψs⟩ = 0,

we conclude for l′ ∈ Zr, j = μ(ej)r + l, ej ∈ Zμ^i, l ∈ Zr, that

⟨ℓ1l′, w_{i+1,j}⟩ = Σ_{s′∈Zq} Σ_{s∈Zm} c′l′s′ cls ⟨δ_{ts′}, T_{ej} ψs⟩.

We write this equation in matrix form by introducing, for each e ∈ Zμ^i, the matrix

De = [⟨δ_{ts′}, Te ψs⟩ : s′ ∈ Zq, s ∈ Zm],

and from these matrices we build the matrix

D = [D_{e0}, D_{e1}, . . . , D_{e_{μ^i−1}}].


This notation allows us to write

E_{1,i+1} = C′ D diag(C1^T, . . . , C1^T),

where the right-most matrix is a block diagonal matrix with μ^i identical blocks C1^T. This formula will allow us to estimate the norm of the matrix E_{1,i+1}.

Because the set G0 is refinable relative to the contractive affine mappings Φ, we may assume that for any s′ ∈ Zm there exist unique e″ ∈ Zμ^i and s″ ∈ Zm such that ts′ = φe″(ts″). Thus

⟨δ_{ts′}, Te ψs⟩ = ⟨L_{e″} δ_{ts″}, Te ψs⟩ = ⟨δ_{ts″}, ψs⟩ δ_{e″e} = δ_{ss″} δ_{e″e},

which implies that ‖D‖∞ = 1. Consequently, we conclude inequality (7.15). Similarly, inequality (7.14) follows from

E0i = [⟨ℓ0l′, wij⟩ : l′ ∈ Zm, j ∈ Zw(i)].

This completes the proof of the lemma.

We next show that the pair (W, L) of basis functions and collocation functionals constructed in the last subsection has some important properties. To this end, we let Pn be the projection from X onto Xn defined by the requirement that

⟨ℓij, Pn x⟩ = ⟨ℓij, x⟩, (i, j) ∈ Un.

Proposition 7.4 The following properties hold:

(I) There exist positive integers ρ and h such that for every n > h and m ∈ Zw(n), written in the form m = jρ + s with s ∈ Zρ and j ∈ N0,

wnm(x) = 0, x ∉ Ω_{n−h,j}.

(II) For any n, n′ ∈ N0,

⟨ℓn′m′, wnm⟩ = δnn′ δmm′, (n, m), (n′, m′) ∈ U, n ≤ n′,

and there exists a positive constant γ for which

Σ_{m∈Zw(n)} |⟨ℓn′m′, wnm⟩| ≤ γ, (n′, m′) ∈ U, n > n′.

(III) There exists a positive integer k such that for all p ∈ πk, where πk denotes the space of polynomials of total degree less than k,

⟨ℓnm, p⟩ = 0, (wnm, p) = 0, (n, m) ∈ U.

(IV) There exists a positive constant θ0 such that for all (n, m) ∈ U,

‖ℓnm‖ + ‖wnm‖∞ ≤ θ0.


(V) There exists a positive integer μ > 1 such that

dim Xn = O(μ^n), dim Wn = O(μ^n), dn = O(μ^{−n/d}), n → ∞.

(VI) The operators Pn are well defined and converge pointwise to the identity operator I in L∞(Ω) as n → ∞. In other words, for any x ∈ L∞(Ω),

lim_{n→∞} ‖Pn x − x‖∞ = 0.

(VII) There exists a positive constant c such that for u ∈ W^{k,∞}(Ω),

dist(u, Xn) ≤ c μ^{−kn/d} ‖u‖_{k,∞}.

Proof Property (I) is satisfied because for (i, j) ∈ U with i > 1, the support of wij is contained in Sij = φe(Ω) = Ω_{i−1,e}, where j = μ(e)r + l, l ∈ Zr, e ∈ Zμ^{i−1}.

We now prove that the pair (W, L) satisfies property (II). For (i, j) ∈ U there exists a unique pair e ∈ Zμ^{i−1} and l ∈ Zr such that j = μ(e)r + l and wij = Te w1l. Likewise, for (i′, j′) ∈ U there exists a unique pair e′ ∈ Zμ^{i′−1} and l′ ∈ Zr such that j′ = μ(e′)r + l′ and ℓi′j′ = Le′ ℓ1l′. When i = i′, it follows from (7.11) and (7.9) that

⟨ℓi′j′, wij⟩ = ⟨Le′ ℓ1l′, Te w1l⟩ = ⟨ℓ1l′, w1l⟩ δ_{e′e} = δl′l δ_{e′e} = δj′j.

When i < i′, let e′1 = [e′j : j ∈ Z_{i−1}] and e′2 = [e′j : j ∈ Z_{i′−1} \ Z_{i−1}]. Then

⟨ℓi′j′, wij⟩ = ⟨L_{e′2} ℓ1l′, w1l⟩ δ_{e′1 e} = ⟨ℓ1l′, w1l ∘ φ_{e′2}⟩ δ_{e′1 e}.

Since φ_{e′2} : Ω → φ_{e′2}(Ω) is an affine mapping, we conclude that w1l ∘ φ_{e′2} is a polynomial of total degree ≤ k − 1, that is, an element of X0. By using (7.8) we have that

⟨ℓi′j′, wij⟩ = 0, (i, j), (i′, j′) ∈ U, i < i′.

When i > i′, Lemma 7.3 ensures that the second equation of (II) is satisfied. This proves property (II).

Next we verify that property (III) is satisfied. Again, it follows from (7.8) that

⟨ℓi′j′, ψj⟩ = ⟨ℓ1l′, ψj ∘ φe′⟩ = 0, j ∈ Zm.

This proves the first equation of property (III). To prove the second equation, we consider Te as an operator from L2(Ω) to L2(Ω) and denote by Te∗ the adjoint operator of Te, defined by

(Te x, y) = (x, Te∗ y).


It can be shown that for y ∈ L2(Ω),

Te∗ y = J_{φe} (y ∘ φe),

where J_{φe} is the Jacobian of the mapping φe. Therefore, we have that

(wij, ψj′) = (Te w1l, ψj′) = (w1l, Te∗ ψj′) = 0.

The last equality holds because Te∗ ψj′ is a polynomial of total degree ≤ k − 1 and w1l satisfies condition (7.6).

From (7.12), (7.7), (7.10) and (7.5) we have, for (i, j) ∈ U with j = μ(e)r + l, that

|⟨ℓij, v⟩| = |⟨ℓ1l, v ∘ φe⟩| ≤ ‖C′‖∞ ‖v‖∞

and

‖wij‖∞ ≤ ‖w1l(φe^{−1}(·)) χ_{φe(Ω)}‖∞ ≤ ‖C‖∞ max_{j∈Zq} ‖ψj‖∞,

which confirm property (IV).

By our construction, it is the case that

dim Xn = m μ^n, dim Wn = m(μ − 1) μ^{n−1}.

These equations with (7.2) imply that property (V) is satisfied.

It follows from the first equation of (II) that Pn is well defined. The pointwise convergence condition (VI) of the interpolating projections Pn follows from a result of [21]. Finally, property (VII) holds since the Xn are spaces of piecewise polynomials of total degree ≤ k − 1.

Proposition 7.5 For any k, d ∈ N there exists an integer μ > 1 such that the following property holds:

(VIII) The constant γ in property (II) satisfies the condition

(1 + γ) μ^{−k/d} < 1.

Proof We must show that there exists an integer μ > 1 such that

1 + γ ≤ μ^{k/d},

where γ is defined in Lemma 7.3. This will be done by proving that γ is bounded from above independently of μ. For this purpose we consider the matrices

H1 = [(ψi, ψj) : i ∈ Zm, j ∈ Zm],
H2 = [(ψi, ψm+j) : i ∈ Zm, j ∈ Zr], H = [H1, H2].


Therefore, from equation (7.6) it follows that

C H^T = C1 H1^T + C2 H2^T = 0,

where C2 is an arbitrary r × r nonsingular matrix. We choose C2 = Ir, from which we have that

C1 = −H2^T (H1^T)^{−1}. (7.16)

Moreover, from Lemma 7.1 we have that

C′ = [−B̃, Ir],

and thus

‖C′‖∞ = ‖B̃‖∞ + 1. (7.17)

For j ∈ Zm the functions ψj are polynomials and therefore continuous. Thus there exists a positive constant ρ such that

max_{j∈Zm} ‖ψj‖∞ ≤ ρ.

Hence, recalling the definition of the matrix B̃ and equation (7.17), we have that

‖C′‖∞ = 1 + max_{i∈Zr} Σ_{j∈Zm} |ψj(tm+i)| ≤ 1 + m max_{j∈Zm} ‖ψj‖∞ ≤ 1 + mρ.

Moreover, we have by equation (7.16) that

‖C1‖1 = ‖H1^{−1} H2‖∞ ≤ ‖H1^{−1}‖∞ ‖H2‖∞.

Since ‖H1^{−1}‖∞ is independent of μ, it remains to estimate ‖H2‖∞ from above independently of μ. To this end, we recall for j ∈ Zr that

ψm+j(t) = ψl(φe^{−1}(t)) χ_{φe(Ω)}(t), t ∈ Ω,

for some l ∈ Zm and e ∈ Zμ. Consequently, from (7.2) we conclude that

|(ψi, ψm+j)| ≤ ∫_{φe(Ω)} |ψi(t) ψl(φe^{−1}(t))| dt ≤ ρ² meas(φe(Ω)) ≤ ρ²/μ.

Noting that r = (μ − 1)m, we obtain the desired estimate

‖H2‖∞ = max_{i∈Zm} Σ_{j∈Zr} |(ψi, ψm+j)| ≤ (ρ²/μ)(μ − 1)m ≤ ρ² m,

thereby proving the result.


We next examine property (VIII) in several cases of practical importance. We consider the cases when d = 1 and Ω = [0, 1], as well as d = 2 and Ω = Δ, where Δ is the triangle with vertices (0, 0), (1, 0) and (0, 1). When d = 1 and Ω = [0, 1], property (VIII) is satisfied for the following choices:

(1) k = 2, μ = 2,

φe(t) = (t + e)/2, t ∈ Ω, e = 0, 1,

and ti = (i + 1)/3 for i = 0, 1;

(2) k = 3, μ = 2,

φe(t) = (t + e)/2, t ∈ Ω, e = 0, 1,

and ti = 2i/7 for i = 0, 1, 2;

(3) k = 3, μ = 3,

φe(t) = (t + e)/3, t ∈ Ω, e = 0, 1, 2,

and ti = (i + 1)/4 for i = 0, 1, 2;

(4) k = 4, μ = 2,

φe(t) = (t + e)/2, t ∈ Ω, e = 0, 1,

and ti = (i + 1)/5 for i = 0, 1, 2, 3.

In the other case, (VIII) is also satisfied when k = 2, μ = 4, for (x, y) ∈ Δ, with

φ0(x, y) = (x/2, y/2), φ1(x, y) = ((x + 1)/2, y/2),
φ2(x, y) = (x/2, (y + 1)/2), φ3(x, y) = ((1 − x)/2, (1 − y)/2),

and t0 = (1/7, 4/7), t1 = (2/7, 1/7), t2 = (4/7, 2/7).

Finally, we turn our attention to another property:

(IX) There exist positive constants θ1 and θ2 such that for all n ∈ N0 and v ∈ Xn having the form v = Σ_{(i,j)∈Un} vij wij,

θ1 ‖v‖∞ ≤ ‖v‖∞ ≤ θ2 (n + 1) ‖En v‖∞,

where v = [vij : (i, j) ∈ Un] is the coefficient vector; in the outer two norms, v and En v are vectors with the max-norm, while the middle ‖v‖∞ is the sup-norm of the function v.

To show this, we consider a sequence of functions ζij, (i, j) ∈ U, bi-orthogonal to the linear functionals ℓij, (i, j) ∈ U, and having the property that for all i ∈ N0,

sup_{t∈Ω} Σ_{j∈Zw(i)} |ζij(t)| ≤ θ2. (7.18)


Let

ζ0j = w0j, j ∈ Zm,

and observe that

⟨ℓ0j, ζ0j′⟩ = δjj′, j, j′ ∈ Zm.

For each j ∈ Zr we find a vector c″ = [c″js : s ∈ Zq] such that the function

ζ1j = Σ_{s∈Zq} c″js ψs

satisfies the system of linear equations

⟨ℓ0j′, ζ1j⟩ = 0, j′ ∈ Zm, (7.19)

and

⟨ℓ1j′, ζ1j⟩ = δjj′, j′ ∈ Zr. (7.20)

Let us confirm that c″ exists and is unique. The coefficient matrix of equations (7.19) and (7.20) is

Ã = [⟨ℓi′j′, ψj⟩ : j ∈ Zq, (i′, j′) ∈ U1]. (7.21)

Because {ψj : j ∈ Zq} is a basis for the space X1 and the matrix A is nonsingular, we conclude that the matrix Ã is nonsingular. Thus there exists a unique solution c″ of equations (7.19) and (7.20).

For i > 1, j = μ(e)r + l, e ∈ Zμ^{i−1}, l ∈ Zr, we define the functions

ζij = Te ζ1l.

These functions will be used in the proof of the next result.

Proposition 7.6 The pair (W, L) has property (IX).

Proof It suffices to verify that the functionals ℓij, (i, j) ∈ U, and the functions ζij, (i, j) ∈ U, are bi-orthogonal, that is, that they satisfy the condition

⟨ℓi′j′, ζij⟩ = δii′ δjj′, (i, j), (i′, j′) ∈ U, (7.22)

and, in addition, that there exists a positive constant θ2 such that for any i ∈ N0 condition (7.18) is satisfied.

The proof of (7.22) for the case i ≤ i′ is similar to that of (II) in Proposition 7.4. Hence we only present the proof for the case i′ < i. In this case we have that

⟨ℓi′j′, ζij⟩ = ⟨Le′ ℓ1l′, Te ζ1l⟩ = ⟨ℓ1l′, T_{e2} ζ1l⟩ δ_{e′e1},


where j′ = μ(e′)r + l′, j = μ(e)r + l, e1 = [ej : j ∈ Z_{i′−1}] and e2 = [ej : j ∈ Z_{i−1} \ Z_{i′−1}]. From this it follows that ⟨ℓi′j′, ζij⟩ = 0 except for e′ = e1, in which case

⟨ℓi′j′, ζij⟩ = ⟨ℓ1l′, (ζ1l ∘ φ_{e2}^{−1}) χ_{φe2(Ω)}⟩. (7.23)

Since G0 is a refinable set, we have that φe^{−1}(t) ∈ G0 when t ∈ G1 ∩ φe(Ω), e ∈ Zμ, and thus φ_{e2}^{−1}(ts) ∈ G0 when ts ∈ φe2(Ω), s ∈ Zq. This observation with (7.19) yields the equation

ζ1l(φ_{e2}^{−1}(ts)) = ⟨δ_{φ_{e2}^{−1}(ts)}, ζ1l⟩ = 0 (7.24)

whenever ts ∈ φe2(Ω), s ∈ Zq. We appeal to (7.23) and (7.24) to conclude that ⟨ℓi′j′, ζij⟩ = 0.

Next we show that condition (7.18) is satisfied. Without loss of generality, we consider only the case i ≥ 1. In this case, the definition of ζij for i ≥ 1 guarantees

sup_{t∈Ω} Σ_{j∈Zw(i)} |ζij(t)| = sup_{t∈Ω} Σ_{e∈Zμ^{i−1}} Σ_{l∈Zr} |Te ζ1l(t)|
= sup_{t∈Ω} Σ_{e∈Zμ^{i−1}} Σ_{l∈Zr} |ζ1l(φe^{−1}(t)) χ_{φe(Ω)}(t)|
≤ Σ_{l∈Zr} ‖ζ1l‖∞,

and therefore (7.18) holds with θ2 = Σ_{l∈Zr} ‖ζ1l‖∞.

Finally, we verify the first inequality in property (IX). To this end, we note that for v = Σ_{(i,j)∈Un} vij wij and v = [vij : (i, j) ∈ Un] there exists (i0, j0) ∈ Un with j0 = μ(e0)r + l0, e0 ∈ Zμ^{i0−1}, l0 ∈ Zr, such that the coefficient vector satisfies

‖v‖∞ = |v_{i0j0}|. (7.25)

For l ∈ Zr we denote vl = v_{i0j} and wl = w_{i0j}, where j = μ(e0)r + l and e0 ∈ Zμ^{i0−1}, and observe that

|v_{i0j0}| ≤ (Σ_{l∈Zr} |vl|²)^{1/2}. (7.26)

Recalling that w_{i0j} = T_{e0} w1l, l ∈ Zr, and that the mappings φe, e ∈ Zμ, are affine, we conclude that

(wl′, wl) = J_{φe0} (w1l′, w1l), (7.27)


where J_{φe0} denotes the Jacobian of the mapping φe0. We introduce the r × r matrix

W = [(w1l′, w1l) : l′, l ∈ Zr]

and note that it is the Gram matrix of the basis w1l, l ∈ Zr, and thus positive definite. It follows that there exists a positive constant c0 such that for v = Σ_{l∈Zr} vl wl and v = [vl : l ∈ Zr],

c0 Σ_{l∈Zr} |vl|² ≤ v^T W v. (7.28)

By formula (7.27) we have that

‖v‖2² = (v, v) = J_{φe0} v^T W v.

Combining this equation with (7.28) yields

Σ_{l∈Zr} |vl|² ≤ (1 / (c0 J_{φe0})) ‖v‖2². (7.29)

Since the basis wij, (i, j) ∈ Un, constructed in this section has property (III), we obtain that

‖v‖2² = ∫_{φe0(Ω)} v(t) v(t) dt ≤ J_{φe0} ‖v‖∞ ‖v‖∞ ≤ J_{φe0} |v_{i0j0}| Σ_{l∈Zr} ‖wl‖∞ ‖v‖∞.

Using property (IV), we conclude that there exists a positive constant c1 such that

Σ_{l∈Zr} ‖wl‖∞ ≤ c1,

which with the last inequality implies that

‖v‖2² ≤ c1 J_{φe0} ‖v‖∞ ‖v‖∞. (7.30)

Combining (7.25), (7.26), (7.29) and (7.30) yields that there exists a positive constant c such that

‖v‖∞ ≤ c ‖v‖∞^{1/2} ‖v‖∞^{1/2}

(the vector norm on the left, and one factor of each norm on the right), and thus

‖v‖∞ ≤ c ‖v‖∞.

We have proved the first inequality of (IX) with θ1 = 1/c.


7.2 Multiscale collocation methods

In the last section we presented a concrete construction of multiscale bases on an invariant set in Rd, together with the multiscale collocation functionals needed for multiscale collocation methods. In this section we develop a general collocation scheme for solving Fredholm integral equations of the second kind, using multiscale basis functions and multiscale collocation functionals having the properties described in the last section.

7.2.1 The collocation scheme

For a set A ⊂ Rd, d(A) represents the diameter of A, that is,

d(A) = sup{|x − y| : x, y ∈ A}, (7.31)

where | · | denotes the Euclidean norm on the space Rd. We use α = [αi ∈ N0 : i ∈ Zd] to denote a lattice point in N0^d. As is usually the case, we set |α| = Σ_{i∈Zd} αi.

As usual, for a positive integer k, W^{k,∞}(Ω) will denote the set of all functions v on Ω such that Dα v ∈ X for |α| ≤ k, where we use the standard multi-index notation for derivatives,

Dα v(x) = ∂^{|α|} v(x) / (∂x0^{α0} ··· ∂x_{d−1}^{α_{d−1}}), x ∈ Rd,

and the norm

‖v‖_{k,∞} = max{‖Dα v‖∞ : |α| ≤ k}

on W^{k,∞}(Ω). For a star-shaped set Ω it is easy to estimate the distance of a function v ∈ W^{k,∞}(Ω) from the space πk. Specifically, there is a positive constant c such that

dist(v, πk) ≤ c (d(Ω))^k ‖v‖_{k,∞}. (7.32)

Throughout the following sections, c will always stand for a generic constant whose value will change with the context. Its meaning will be clear from the order of the qualifiers used to describe its role in our estimates.

There are several ingredients required in the development of the fast collocation algorithms for solving the integral equation. First, we require a multiscale family {Xn : n ∈ N0} of finite-dimensional subspaces of X in which we make our approximation. These spaces are required to have the property that

Xn ⊆ Xn+1, n ∈ N0, (7.33)


and

V ⊆ closure(⋃_{n∈N0} Xn). (7.34)

For efficient computation relative to a scale of spaces, we express them as a direct sum of subspaces,

Xn = W0 ⊕ W1 ⊕ ··· ⊕ Wn. (7.35)

These spaces serve as multiscale subspaces of X and will be constructed as spaces of piecewise polynomial functions on Ω.

We need a multiscale partition of the set Ω. It consists of a family of partitions {Ωn : n ∈ N0} of Ω such that for each scale n ∈ N0 the partition Ωn consists of a family of subsets {Ωni : i ∈ Ze(n)} of Ω with the properties that

meas(Ωni ∩ Ωni′) = 0, i, i′ ∈ Ze(n), i ≠ i′, (7.36)

and

⋃_{i∈Ze(n)} Ωni = Ω. (7.37)

At the appropriate time later, we shall adjust the number e(n) of elements and the maximum diameter of the cells in the nth partition to be commensurate with dim Wn.

A construction of multiscale partitions for an invariant set has been described in Section 7.1.1. For unions of invariant sets, multiscale partitions can be constructed from the multiscale partitions of the invariant sets forming the union. For example, a polygonal domain in R2 is a union of triangles, which are invariant sets; hence multiscale partitions of the triangles constituting the polygon form a multiscale partition of the polygon. The partitions {Ωn : n ∈ N0} are used in two ways. First, we demand that there is a basis Wn = {wnm : m ∈ Zw(n)} for the spaces

Wn = span Wn, n ∈ N0, (7.38)

having property (I) (stated in Proposition 7.4).

For n > h we use the notation Snm = Ω_{n−h,j}, so that the support of the function wnm is contained in the set Snm. Note that the supports of the basis functions at the nth level are not disjoint. However, for every n > h and every function wnm, there are at most ρ other functions at level n whose supports overlap the support of wnm.


To define the collocation method, we need a set of linear functionals in V∗ given by

Ln = {ℓnm : m ∈ Zw(n)}, n ∈ N0.

The multiscale partitions {Ωn : n ∈ N0} are also used to specify the supports of the linear functionals, by the requirement that the linear functional ℓnm is a finite sum of point evaluations,

ℓnm = Σ_{s∈Λ_{n−h,j}} cs δs, (7.39)

where the cs are constants and Λni is a finite subset of distinct points in Ωni with cardinality bounded independently of n ∈ N and i ∈ Zw(n). We set Snm = Ω_{n−h,j} and consider it as the “support” of the functional ℓnm.

and consider it as the ldquosupportrdquo of the functionals nmThe linear functionals and multiscale basis are tied together by the require-

ment that property (II) holds We do not require the linear functionals and themultiscale basis functions to be bi-orthogonal Instead we require them to havea ldquosemi-bi-orthogonalityrdquo property imposed by the first equation of (II) witha controllable perturbation from the bi-orthogonality which is ensured by thesecond equation of (II) Specifically the first one means that the basis functionsvanish when they are applied by collocation functionals of higher levels Wedenote by E the semi-infinite matrix with entries

Enprimemprimenm = 〈nprimemprime wnm〉 (nprime mprime) (n m) isin U

We note by the first equation of property (II) that the matrix E can be viewedas a block upper triangular matrix with the diagonal blocks equal to identitymatrices Consequently the infinite matrix E has an inverse Eminus1 of the sametype that is

(Eminus1)nprimemprimenm = δnnprimeδmmprime n le nprime m isin Zw(n) mprime isin Zw(nprime)
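As a small numerical illustration of this structural fact (a sketch with hypothetical block sizes, not the matrix $E$ of the text): the inverse of a block upper triangular matrix with identity diagonal blocks is again of exactly the same type.

```python
import numpy as np

# Three "levels" with hypothetical block sizes 1, 2, 4.
rng = np.random.default_rng(0)
sizes = [1, 2, 4]
offsets = np.cumsum([0] + sizes)
N = offsets[-1]

# Build E: identity diagonal blocks, arbitrary strictly-upper blocks.
E = np.eye(N)
for a in range(len(sizes)):
    for b in range(a + 1, len(sizes)):
        E[offsets[a]:offsets[a + 1], offsets[b]:offsets[b + 1]] = \
            rng.normal(size=(sizes[a], sizes[b]))

Einv = np.linalg.inv(E)

# The inverse has identity diagonal blocks and zero blocks below the diagonal.
for a in range(len(sizes)):
    da = slice(offsets[a], offsets[a + 1])
    assert np.allclose(Einv[da, da], np.eye(sizes[a]))
    for b in range(a):
        assert np.allclose(Einv[da, offsets[b]:offsets[b + 1]], 0.0)
```

Since the determinant of such a matrix is 1, the inverse always exists, which is why the semi-infinite matrix $E$ is invertible level by level.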

To introduce the collocation method for solving integral equations of the second kind, we suppose that $K$ is a weakly singular kernel such that the operator $\mathcal{K} : \mathbb{X} \to \mathbb{V}$ defined by

$$(\mathcal{K}u)(s) = \int_\Omega K(s,t)\,u(t)\,dt, \qquad s \in \Omega,$$

is compact in $\mathbb{X}$. We consider Fredholm integral equations of the second kind in the form

$$u - \mathcal{K}u = f, \tag{7.40}$$


where $f \in \mathbb{X}$ is a given function and $u \in \mathbb{X}$ is the unknown to be determined. When one is not an eigenvalue of $\mathcal{K}$, equation (7.40) has a unique solution in $\mathbb{X}$. The collocation scheme for solving equation (7.40) seeks a vector $\mathbf{u}_n = [u_{ij} : (i,j) \in U_n]$, where $U_n$ is the set of lattice points in $\mathbb{R}^2$ defined as $\{(i,j) : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{Z}_{n+1}\}$, such that the function

$$u_n = \sum_{(i,j) \in U_n} u_{ij} w_{ij}$$

in $\mathbb{X}_n$ has the property that

$$\langle \ell_{i'j'}, u_n - \mathcal{K}u_n \rangle = \langle \ell_{i'j'}, f \rangle, \qquad (i',j') \in U_n. \tag{7.41}$$

Equivalently, we obtain the linear system of equations

$$(E_n - K_n)\mathbf{u}_n = \mathbf{f}_n,$$

where

$$K_n = [\langle \ell_{i'j'}, \mathcal{K}w_{ij} \rangle : (i',j'), (i,j) \in U_n], \qquad E_n = [\langle \ell_{i'j'}, w_{ij} \rangle : (i',j'), (i,j) \in U_n]$$

and

$$\mathbf{f}_n = [\langle \ell_{ij}, f \rangle : (i,j) \in U_n].$$

By definition we have that $(E_n)_{i'j',ij} = E_{i'j',ij}$ for $(i',j'), (i,j) \in U_n$, and by (II) we see that

$$(E_n^{-1})_{i'j',ij} = (E^{-1})_{i'j',ij}, \qquad (i',j'), (i,j) \in U_n. \tag{7.42}$$

Let us use condition (II) to estimate the inverse of the matrix $E_n$. To this end, we introduce a weighted norm on the vector $\mathbf{x} = [x_{ij} : (i,j) \in U_n]$. For any $i \in \mathbb{Z}_{n+1}$ we set $\mathbf{x}_i = [x_{ij} : j \in \mathbb{Z}_{w(i)}]$,

$$\|\mathbf{x}_i\|_\infty = \max\{|x_{ij}| : j \in \mathbb{Z}_{w(i)}\},$$

and whenever $\nu \in (0,1)$ we define

$$\|\mathbf{x}\|_\nu = \max\{\|\mathbf{x}_i\|_\infty\,\nu^{-i} : i \in \mathbb{Z}_{n+1}\}.$$

We also use the notation $\|\mathbf{x}\|_\infty = \max\{\|\mathbf{x}_i\|_\infty : i \in \mathbb{Z}_{n+1}\}$ for the maximum norm of the vector $\mathbf{x}$.
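These norms are straightforward to compute from the level blocks of a coefficient vector; a direct transcription (the three-level vector used below is illustrative):

```python
import numpy as np

def level_norms(x):
    # ||x_i||_inf for each level block x_i of the coefficient vector
    return np.array([np.max(np.abs(xi)) for xi in x])

def weighted_norm(x, nu):
    # ||x||_nu = max_i ||x_i||_inf * nu^{-i}, for 0 < nu < 1
    assert 0 < nu < 1
    return float(np.max(level_norms(x) * nu ** -np.arange(len(x), dtype=float)))

def max_norm(x):
    # ||x||_inf = max_i ||x_i||_inf
    return float(np.max(level_norms(x)))

x = [np.array([1.0]),
     np.array([0.5, -0.25]),
     np.array([0.125, 0.0625, 0.0, -0.03125])]
print(weighted_norm(x, 0.5), max_norm(x))   # -> 1.0 1.0
```

The weight $\nu^{-i}$ amplifies the fine levels, so the weighted norm is sensitive to precisely the coefficients whose decay Lemma 7.10 below quantifies.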

Lemma 7.7 If condition (II) holds, $0 < \nu < 1$ and $(1+\gamma)\nu < 1$, then for any integer $n \in \mathbb{N}_0$ and vector $\mathbf{x} \in \mathbb{R}^{s(n)}$,

$$\|\mathbf{x}\|_\nu \le \frac{1-\nu}{1-(1+\gamma)\nu}\,\|E_n\mathbf{x}\|_\nu.$$


Proof Let $\mathbf{y} = E_n\mathbf{x}$, so that

$$y_{ij} = \sum_{(i',j') \in U_n} \langle \ell_{ij}, w_{i'j'} \rangle\, x_{i'j'}.$$

In particular, for $i = n$ we have that $y_{nj} = x_{nj}$. For $0 \le l \le n-1$, we have from the first equation of (II) that

$$x_{n-l-1,j} = y_{n-l-1,j} - \sum_{n-l \le i' \le n,\ j' \in \mathbb{Z}_{w(i')}} \langle \ell_{n-l-1,j}, w_{i'j'} \rangle\, x_{i'j'}, \qquad j \in \mathbb{Z}_{w(n-l-1)}.$$

Using the second equation of (II), we conclude that

$$\|\mathbf{x}_{n-l-1}\|_\infty \le \|\mathbf{y}_{n-l-1}\|_\infty + \gamma \sum_{i=0}^{l} \|\mathbf{x}_{n-i}\|_\infty.$$

By induction on $j$, it readily follows that

$$\|\mathbf{x}_{n-j}\|_\infty \le \sum_{l=0}^{j-1} \gamma(1+\gamma)^l \|\mathbf{y}_{n-j+l+1}\|_\infty + \|\mathbf{y}_{n-j}\|_\infty.$$

Thus we have that

$$\begin{aligned}
\|\mathbf{x}_{n-j}\|_\infty\,\nu^{-(n-j)} &\le \gamma\nu \sum_{l=0}^{j-1} [(1+\gamma)\nu]^l\, \|\mathbf{y}_{n-j+l+1}\|_\infty\,\nu^{-(n-j+l+1)} + \|\mathbf{y}_{n-j}\|_\infty\,\nu^{-(n-j)} \\
&\le \left[ 1 + \gamma\nu \sum_{l=0}^{j-1} [(1+\gamma)\nu]^l \right] \|E_n\mathbf{x}\|_\nu \\
&\le \frac{1-\nu}{1-(1+\gamma)\nu}\,\|E_n\mathbf{x}\|_\nu,
\end{aligned}$$

from which the result follows.

7.2.2 Estimates for matrices and a truncated scheme

In this subsection, our goal is to obtain estimates for the entries of the matrix $K_n$. This requires conditions on the supports of the basis functions for $W_n$ and vanishing moments for both the basis functions and the collocation functionals, which have been described in (III) and (IV). We also need regularity of the kernel $K$.


(X) For $s, t \in \Omega$, $s \ne t$, the kernel $K$ has continuous partial derivatives $D_s^\alpha D_t^\beta K(s,t)$ for $|\alpha| \le k$, $|\beta| \le k$. Moreover, there exist positive constants $\sigma$ and $\theta_3$ with $\sigma < d$ such that for $|\alpha| = |\beta| = k$,

$$\left| D_s^\alpha D_t^\beta K(s,t) \right| \le \frac{\theta_3}{|s-t|^{\sigma+|\alpha|+|\beta|}}. \tag{7.43}$$

In the next lemma we present an estimate of the entries of the matrix $K_n$. Such an estimate forms the basis for a truncation strategy. In the statement of the next lemma we use the quantities

$$d_i = \max\{d(S_{ij}) : j \in \mathbb{Z}_{w(i)}\}, \qquad i \in \mathbb{N}_0.$$

Lemma 7.8 If (I), (III), (IV) and (X) hold, and there is a constant $r > 1$ such that

$$\operatorname{dist}(S_{ij}, \tilde S_{i'j'}) \ge r(d_i + d_{i'}), \tag{7.44}$$

then there exists a positive constant $c$ such that

$$|K_{i'j',ij}| \le c\,(d_i d_{i'})^k \sum_{s \in \tilde S_{i'j'}} \int_{S_{ij}} \frac{1}{|s-t|^{2k+\sigma}}\,dt.$$

Proof Let $s_0$, $t_0$ be centers of the sets $S_{i'j'}$ and $S_{ij}$, respectively. Using the Taylor theorem with remainder, we write $K = K_1 + K_2 + K_3$, where $K_1(s,\cdot)$ and $K_2(\cdot,t)$ are polynomials of total degree $\le k-1$ in $t$ and $s$, respectively, and

$$|K_3(s,t)| \le d_i^k d_{i'}^k\, v(s,t), \qquad s \in \tilde S_{i'j'},\ t \in S_{ij},$$

where

$$v(s,t) = \sum_{|\alpha|=k} \sum_{|\beta|=k} \frac{|r_{\alpha\beta}(s,t)|}{\alpha!\,\beta!} \tag{7.45}$$

and

$$r_{\alpha\beta}(s,t) = \int_0^1 \int_0^1 D_s^\alpha D_t^\beta K\big(s_0 + t_1(s-s_0),\ t_0 + t_2(t-t_0)\big)\,(1-t_1)^{k-1}(1-t_2)^{k-1}\,dt_1\,dt_2. \tag{7.46}$$

Applying the vanishing moment conditions yields the bound

$$|K_{i'j',ij}| \le \|\ell_{i'j'}\|\,\|w_{ij}\|_\infty\, d_i^k d_{i'}^k \sum_{s \in \tilde S_{i'j'}} \int_{S_{ij}} |v(s,t)|\,dt. \tag{7.47}$$

It follows from the mean-value theorem and condition (X) that

$$|r_{\alpha\beta}(s,t)| = k^{-2} \left| D_s^\alpha D_t^\beta K(s',t') \right| \le \frac{\theta_3}{k^2\,|s'-t'|^{2k+\sigma}}$$


holds for some $s' \in S_{i'j'}$, $t' \in S_{ij}$. For $s \in \tilde S_{i'j'}$, $t \in S_{ij}$, the assumption (7.44) yields

$$|s' - t'| \ge |s - t| - d_i - d_{i'} \ge (1 - r^{-1})|s - t|,$$

from which it follows that

$$|r_{\alpha\beta}(s,t)| \le \frac{c_1}{|s-t|^{2k+\sigma}},$$

where

$$c_1 = \frac{\theta_3}{k^2\,(1-r^{-1})^{2k+\sigma}}.$$

Substituting the above inequality into (7.47) completes the proof, with

$$c = \frac{\theta_3\,\theta_0^2\, e^{2d/(1-r^{-1})}}{k^2\,(1-r^{-1})^{\sigma}}.$$

To present the truncation strategy, we partition the matrix $K_n$ into a block matrix

$$K_n = [K_{i'i} : i', i \in \mathbb{Z}_{n+1}]$$

with

$$K_{i'i} = [K_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}].$$

We truncate the block $K_{i'i}$ by using a given positive number $\varepsilon$ to form a matrix

$$K_{i'i}(\varepsilon) = [K_{i'j',ij}(\varepsilon) : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}]$$

with

$$K_{i'j',ij}(\varepsilon) = \begin{cases} K_{i'j',ij}, & \operatorname{dist}(\tilde S_{i'j'}, S_{ij}) \le \varepsilon, \\ 0, & \text{otherwise}, \end{cases}$$

where $\varepsilon$ may depend on $i'$, $i$ and $n$. In the next lemma we use the estimate for the entries of $K_n$ presented in Lemma 7.8 to obtain an estimate for the discrepancy between the truncated blocks $K_{i'i}(\varepsilon)$ and the blocks $K_{i'i}$ of $K_n$.
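For intuition, the entrywise truncation rule can be sketched in a few lines. Here the supports are represented simply as intervals on $[0,1]$; the interval data, names and the value of $\varepsilon$ are illustrative, not the book's implementation.

```python
import numpy as np

def interval_dist(a, b):
    # distance between closed intervals a = (lo, hi), b = (lo, hi); 0 if they overlap
    return max(0.0, max(a[0], b[0]) - min(a[1], b[1]))

def truncate_block(K, supp_rows, supp_cols, eps):
    # zero every entry whose row/column supports are farther apart than eps
    K_eps = K.copy()
    for jp, sr in enumerate(supp_rows):
        for j, sc in enumerate(supp_cols):
            if interval_dist(sr, sc) > eps:
                K_eps[jp, j] = 0.0
    return K_eps

supp_rows = [(0.0, 0.25), (0.5, 0.75)]          # "supports" of two functionals
supp_cols = [(0.0, 0.25), (0.25, 0.5), (0.75, 1.0)]  # supports of three basis functions
Kt = truncate_block(np.ones((2, 3)), supp_rows, supp_cols, eps=0.3)
print(Kt)   # only the (row 0, col 2) entry is dropped
```

Lemma 7.9 below quantifies how much of the block is lost by this rule as a function of $\varepsilon$.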

Lemma 7.9 If (I), (III), (IV) and (X) hold, then, given any constant $r > 1$ and $0 \le \sigma' < \min\{2k, d-\sigma\}$, there exists a positive constant $c$ such that whenever $\varepsilon \ge r(d_i + d_{i'})$,

$$\|K_{i'i} - K_{i'i}(\varepsilon)\|_\infty \le c\,\varepsilon^{-\eta}\,(d_i d_{i'})^k,$$

where $\eta = 2k - \sigma'$.


Proof We first note that

$$\|K_{i'i} - K_{i'i}(\varepsilon)\|_\infty = \max_{j' \in \mathbb{Z}_{w(i')}} \sum_{j \in Z_{i'j'}(\varepsilon)} |K_{i'j',ij}|,$$

where

$$Z_{i'j'}(\varepsilon) = \{ j : j \in \mathbb{Z}_{w(i)},\ \operatorname{dist}(S_{ij}, \tilde S_{i'j'}) > \varepsilon \}.$$

Therefore, by using Lemma 7.8, we have that

$$\|K_{i'i} - K_{i'i}(\varepsilon)\|_\infty \le c\,(d_i d_{i'})^k \max_{j' \in \mathbb{Z}_{w(i')}} \sum_{s \in \tilde S_{i'j'}} \sum_{j \in Z_{i'j'}(\varepsilon)} \int_{S_{ij}} \frac{1}{|s-t|^{2k+\sigma}}\,dt.$$

Although the sets $S_{ij}$ are not disjoint, we can use property (I) to conclude that

$$\sum_{j \in Z_{i'j'}(\varepsilon)} \int_{S_{ij}} \frac{1}{|s-t|^{2k+\sigma}}\,dt \le \rho\,\varepsilon^{-\eta} \int_\Omega \frac{1}{|s-t|^{\sigma+\sigma'}}\,dt.$$

Since $\sigma + \sigma' < d$ and $\Omega$ is a compact set,

$$\max_{s \in \Omega} \int_\Omega \frac{1}{|s-t|^{\sigma+\sigma'}}\,dt < \infty.$$

We employ the above inequalities to obtain the desired estimate.

7.3 Analysis of the truncation scheme

In this section we discuss the truncation strategy for the collocation method proposed in the previous section. We analyze the order of convergence, stability and computational complexity of the truncation algorithm.

7.3.1 Stability and convergence

We introduce the operator from $\mathbb{X}_n$ into itself defined by the equation $\mathcal{K}_n = \mathcal{P}_n \mathcal{K}|_{\mathbb{X}_n}$, and note that its matrix representation relative to the basis $\mathbb{W}_n$ is given by $E_n^{-1} K_n$. For each block $K_{i'i}$, $i, i' \in \mathbb{Z}_{n+1}$, of $K_n$ we shall specify later truncation parameters $\varepsilon^n_{i'i}$ and reassemble the blocks to form a truncated matrix

$$\tilde K_n = [K_{i'i}(\varepsilon^n_{i'i}) : i', i \in \mathbb{Z}_{n+1}].$$

Using this truncated matrix, we let $\tilde{\mathcal{K}}_n : \mathbb{X}_n \to \mathbb{X}_n$ be the linear operator from $\mathbb{X}_n$ into itself which, relative to the basis $\mathbb{W}_n$, has the matrix representation $E_n^{-1} \tilde K_n$.


Our goal is to provide an essential estimate for the difference of the two operators $\mathcal{K}_n$ and $\tilde{\mathcal{K}}_n$. To this end, for $v \in L^\infty(\Omega)$ we set

$$\mathcal{P}_n v = \sum_{(i,j) \in U_n} v_{ij} w_{ij}.$$

The quantities $v_{ij}$ are linear functionals of $v$. We estimate them in the next lemma.

Lemma 7.10 Suppose that conditions (I)–(V) and (VIII) hold. If $v \in W^{k,\infty}(\Omega)$, then there exists a positive constant $c$ such that

$$|v_{ij}| \le c\,\mu^{-ki/d}\,\|v\|_{k,\infty}, \qquad (i,j) \in U_n. \tag{7.48}$$

Proof For $v \in W^{k,\infty}(\Omega)$ we write

$$\mathcal{P}_n v = \sum_{(i,j) \in U_n} v_{ij} w_{ij}$$

and let $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$. By the definition of the projection $\mathcal{P}_n$, we have that

$$E_n \mathbf{v} = \left[ \left\langle \ell_{ij}, \sum_{(i',j') \in U_n} v_{i'j'} w_{i'j'} \right\rangle : (i,j) \in U_n \right] = [\langle \ell_{ij}, v \rangle : (i,j) \in U_n].$$

Meanwhile, using Lemma 7.7 with $\nu = \mu^{-k/d}$ and condition (VIII), we conclude that

$$\|\mathbf{v}\|_{\mu^{-k/d}} \le c\,\|E_n \mathbf{v}\|_{\mu^{-k/d}},$$

where

$$c = \frac{1 - \mu^{-k/d}}{1 - (1+\gamma)\mu^{-k/d}} > 0$$

is a constant. Hence

$$\|\mathbf{v}\|_{\mu^{-k/d}} \le c \max_{(i,j) \in U_n} \left| \mu^{ki/d} \langle \ell_{ij}, v \rangle \right|. \tag{7.49}$$

Moreover, recalling that the "support" of the functional $\ell_{ij}$ is the set $\tilde S_{ij} \subseteq S_{ij}$, we use the Taylor theorem with remainder on the set $S_{ij}$ for $v \in W^{k,\infty}(\Omega)$ and conditions (III)–(V) to conclude that there exists a positive constant $c$ such that

$$|\langle \ell_{ij}, v \rangle| \le c\,d_i^k\,\|v\|_{k,\infty} \le c\,\mu^{-ki/d}\,\|v\|_{k,\infty}.$$

Combining this inequality with (7.49), we obtain the estimate

$$\|\mathbf{v}\|_{\mu^{-k/d}} \le c\,\|v\|_{k,\infty}.$$


Again using the definition of the weighted norms, we have that

$$\|\mathbf{v}_i\|_\infty \le c\,\mu^{-ki/d}\,\|v\|_{k,\infty},$$

which proves the estimate of this lemma.

Lemma 7.10 ensures that, for a function $v \in W^{k,\infty}(\Omega)$, the coefficients of its expansion in the basis $\mathbb{W}_n$ and functionals $\mathbb{L}_n$ decay at the rate $O(\mu^{-ki/d})$. This is an extension of a well-known result for orthogonal multiscale bases to the multiscale interpolating piecewise polynomials constructed in this chapter.
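The decay can be observed numerically in a simple analogue (an illustration, not the book's construction): for the hierarchical piecewise-linear basis on $[0,1]$ with point-evaluation functionals, the coefficients are the interpolation surpluses $v(m) - \tfrac12\big(v(m - h/2) + v(m + h/2)\big)$ at the level-$i$ midpoints, and for smooth $v$ they decay like $\mu^{-ki/d}$ with $\mu = 2$, $k = 2$, $d = 1$, i.e. by a factor of about 4 per level.

```python
import numpy as np

def max_surplus(v, i):
    # largest level-i hierarchical coefficient of v on [0, 1]:
    # surplus = v(m) - (v(m - h/2) + v(m + h/2)) / 2 at the 2^i midpoints m
    h = 2.0 ** -i
    m = (np.arange(2 ** i) + 0.5) * h
    return np.max(np.abs(v(m) - 0.5 * (v(m - 0.5 * h) + v(m + 0.5 * h))))

# ratio of the largest coefficient between consecutive levels, for v = sin
rates = [max_surplus(np.sin, i) / max_surplus(np.sin, i + 1) for i in range(3, 8)]
print(rates)   # each refinement shrinks the largest coefficient by about mu^{k/d} = 4
```

The observed ratios cluster near 4, matching the $O(\mu^{-ki/d})$ rate of Lemma 7.10 in this setting.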

For positive numbers $\alpha$ and $\beta$ we make use of the notation

$$\mu[\alpha,\beta,n] = \sum_{i \in \mathbb{Z}_{n+1}} \mu^{\alpha i/d} \sum_{i' \in \mathbb{Z}_{n+1}} \mu^{\beta i'/d}$$

to state the next lemma, which will play an important role in the analysis of the order of convergence and stability of the multiscale collocation method. To prove the next lemma, we need to estimate the $L^\infty$-norm of a typical element in $\mathbb{X}_n$ given by

$$v = \sum_{(i,j) \in U_n} v_{ij} w_{ij} \tag{7.50}$$

in terms of the norm $\|\mathbf{v}\|_\infty$ of its coefficients $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$. Specifically, we require condition (IX). One way to satisfy the condition is to consider the sequence of functions $\zeta_{ij}$, $(i,j) \in U$, defined by the equation

$$\zeta_{ij} = \sum_{(i',j') \in U} (E^{-1})_{i'j',ij}\, w_{i'j'}, \qquad (i,j) \in U. \tag{7.51}$$

These functions are bi-orthogonal relative to the set of linear functionals $\{\ell_{ij} : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{N}_0\}$; that is,

$$\langle \ell_{i'j'}, \zeta_{ij} \rangle = \delta_{ii'}\delta_{jj'}, \qquad (i,j), (i',j') \in U.$$

If, in addition, for all $i \in \mathbb{N}_0$,

$$\sup_{t \in \Omega} \sum_{j \in \mathbb{Z}_{w(i)}} |\zeta_{ij}(t)| \le \theta_2, \tag{7.52}$$

then the second inequality of (IX) follows.

In the next lemma we estimate the difference of the operators $\mathcal{K}_n$ and $\tilde{\mathcal{K}}_n$ applied to $\mathcal{P}_n v$. It is an important step for both the stability analysis and the convergence estimate of the multiscale collocation method.

Lemma 7.11 Suppose that conditions (I)–(V) and (VIII)–(X) hold, $0 < \sigma' < \min\{2k, d-\sigma\}$ and $\eta = 2k - \sigma'$. Let $b$ and $b'$ be real numbers and let the


truncation parameters $\varepsilon^n_{i'i}$, $i', i \in \mathbb{Z}_{n+1}$, be chosen such that

$$\varepsilon^n_{i'i} \ge \max\left\{ a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'}) \right\}, \qquad i, i' \in \mathbb{Z}_{n+1},$$

for some constants $a > 0$ and $r > 1$. Then there exists a positive constant $c$ such that for all $n \in \mathbb{N}$ and $v \in W^{k,\infty}(\Omega)$,

$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n v\|_\infty \le c\,\mu[2k-b\eta,\ k-b'\eta,\ n]\,(n+1)\,\mu^{-(k+\sigma')n/d}\,\|v\|_{k,\infty} \tag{7.53}$$

and for $v \in L^\infty(\Omega)$,

$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n v\|_\infty \le c\,\mu[k-b\eta,\ k-b'\eta,\ n]\,(n+1)\,\mu^{-\sigma' n/d}\,\|v\|_\infty. \tag{7.54}$$

Proof Since

$$\mathcal{P}_n v = \sum_{(i,j) \in U_n} v_{ij} w_{ij},$$

we conclude that

$$(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n v = \sum_{(i,j) \in U_n} h_{ij} w_{ij},$$

where

$$\mathbf{h} = E_n^{-1}(K_n - \tilde K_n)\mathbf{v}.$$

Thus, by hypothesis (IX), we conclude that

$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n v\|_\infty \le \theta_2\,(n+1)\,\|(K_n - \tilde K_n)\mathbf{v}\|_\infty. \tag{7.55}$$

We next estimate $\|(K_n - \tilde K_n)\mathbf{v}\|_\infty$. To this end, we introduce the matrix

$$\Phi_n = [\Phi_{i'j',ij} : (i,j), (i',j') \in U_n],$$

whose elements are given by

$$\Phi_{i'j',ij} = \nu\,\mu^{[k(n-i)+\sigma' n]/d}\,(K_{i'j',ij} - \tilde K_{i'j',ij}), \qquad (i,j), (i',j') \in U_n,$$

where $\nu = 1/\mu[2k-b\eta,\ k-b'\eta,\ n]$, and the vector

$$\mathbf{v}' = [v'_{ij} : (i,j) \in U_n],$$

whose components are

$$v'_{ij} = \mu^{ki/d}\,v_{ij}, \qquad (i,j) \in U_n.$$

In this notation we observe that

$$\|(K_n - \tilde K_n)\mathbf{v}\|_\infty \le \nu^{-1}\,\mu^{-(k+\sigma')n/d}\,\|\Phi_n\|_\infty\,\|\mathbf{v}'\|_\infty. \tag{7.56}$$


By Lemma 7.10, there exists a positive constant $c$ such that for all $n \in \mathbb{N}$ and all $v \in W^{k,\infty}(\Omega)$,

$$\|\mathbf{v}'\|_\infty \le c\,\|v\|_{k,\infty}. \tag{7.57}$$

Moreover, from Lemma 7.9 there exists a positive constant $c$ such that

$$\sum_{(i,j) \in U_n} \left| \Phi_{i'j',ij} \right| \le \nu \sum_{i \in \mathbb{Z}_{n+1}} \mu^{[k(n-i)+\sigma' n]/d}\,\|K_{i'i} - \tilde K_{i'i}\|_\infty \le c\,\nu \sum_{i \in \mathbb{Z}_{n+1}} \mu^{[k(n-i)+\sigma' n - k(i+i')]/d}\,(\varepsilon^n_{i'i})^{-\eta}.$$

Consequently, by the choice of $\varepsilon^n_{i'i}$, we conclude that

$$\|\Phi_n\|_\infty = \max_{(i',j') \in U_n} \sum_{(i,j) \in U_n} \left| \Phi_{i'j',ij} \right| \le c. \tag{7.58}$$

Combining this inequality with (7.56)–(7.58) yields the first estimate.

To prove the second estimate, we proceed similarly and introduce the matrix

$$\Phi'_n = [\Phi'_{i'j',ij}]_{s(n) \times s(n)},$$

whose entries are given by

$$\Phi'_{i'j',ij} = \nu'\,\mu^{\sigma' n/d}\,(K_{i'j',ij} - \tilde K_{i'j',ij}), \qquad (i,j), (i',j') \in U_n,$$

where $\nu' = 1/\mu[k-b\eta,\ k-b'\eta,\ n]$. With these quantities we have the estimate

$$\|(K_n - \tilde K_n)\mathbf{v}\|_\infty \le (\nu')^{-1}\,\mu^{-\sigma' n/d}\,\|\Phi'_n\|_\infty\,\|\mathbf{v}\|_\infty. \tag{7.59}$$

Condition (IX) provides a positive constant $c$ such that for $v \in L^\infty(\Omega)$,

$$\|\mathbf{v}\|_\infty \le c\,\|v\|_\infty. \tag{7.60}$$

As before, Lemma 7.9 and the choice of $\varepsilon^n_{i'i}$, $i, i' \in \mathbb{Z}_{n+1}$, ensure that there exists a positive constant $c$ such that

$$\|\Phi'_n\|_\infty \le c. \tag{7.61}$$

Combining this inequality with (7.59) and (7.60) yields the second estimate.

We now turn our attention to the stability of the multiscale collocation method. For this purpose we require property (VI). This property follows trivially if $\mathbb{X}_n$ is a space of piecewise polynomials. Because of this property and the fact that $\mathcal{K}$ is compact, we conclude for sufficiently large $n$ that the operators $(I - \mathcal{K}_n)^{-1}$ exist and are uniformly bounded in $L^\infty(\Omega)$ (see, for example, [6, 7]). From this fact follows the stability estimate; that is, there exists a positive constant $\rho$ and a positive integer $m$ such that for $n \ge m$ and $x \in \mathbb{X}_n$,

$$\|(I - \mathcal{K}_n)x\|_\infty \ge \rho\,\|x\|_\infty.$$

We establish a similar estimate for $I - \tilde{\mathcal{K}}_n$.

Theorem 7.12 Suppose that $0 < \sigma' < \min\{2k, d-\sigma\}$ and $\eta = 2k - \sigma'$. If conditions (I)–(VI) and (VIII)–(X) hold and $\varepsilon^n_{i'i}$, $i, i' \in \mathbb{Z}_{n+1}$, are chosen as in Lemma 7.11 with

$$b > \frac{k-\sigma'}{\eta}, \qquad b' > \frac{k-\sigma'}{\eta}, \qquad b + b' > 1,$$

then there exist a positive constant $c$ and a positive integer $m$ such that for all $n \ge m$ and $x \in \mathbb{X}_n$,

$$\|(I - \tilde{\mathcal{K}}_n)x\|_\infty \ge c\,\|x\|_\infty.$$

Proof Note that for any real numbers $\alpha$, $\beta$ and $e$,

$$\lim_{n \to \infty} \mu[\alpha,\beta,n]\,(n+1)\,\mu^{-en/d} = 0$$

when $e > \max\{0, \alpha, \beta, \alpha+\beta\}$. Thus our hypothesis ensures that there exists a positive integer $m$ such that when $n \ge m$,

$$c\,\mu[k-b\eta,\ k-b'\eta,\ n]\,(n+1)\,\mu^{-\sigma' n/d} < \rho/2, \tag{7.62}$$

where the constant $c$ is that appearing in (7.54). The stability of the collocation scheme and the second estimate in Lemma 7.11, together with (7.62), yield for $x \in \mathbb{X}_n$ that

$$\|(I - \tilde{\mathcal{K}}_n)x\|_\infty \ge \|(I - \mathcal{K}_n)x\|_\infty - \|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n x\|_\infty \ge \frac{\rho}{2}\,\|x\|_\infty.$$

This completes the proof.

In particular, this theorem ensures for $n \ge m$ that the equation

$$(I - \tilde{\mathcal{K}}_n)u_n = \mathcal{P}_n f \tag{7.63}$$

has a unique solution, given by

$$u_n = \sum_{(i,j) \in U_n} u_{ij} w_{ij}.$$

This equation is equivalent to the matrix equation

$$(E_n - \tilde K_n)\mathbf{u}_n = \mathbf{f}_n,$$

where $\mathbf{u}_n = [u_{ij} : (i,j) \in U_n]$. The next theorem provides an error bound for $\|u - u_n\|_\infty$.


Theorem 7.13 Suppose that conditions (I)–(X) hold and that $0 < \sigma' < \min\{2k, d-\sigma\}$ and $\eta = 2k - \sigma'$. Let $\varepsilon^n_{i'i}$, $i, i' \in \mathbb{Z}_{n+1}$, be chosen as in Lemma 7.11 with $b$ and $b'$ satisfying one of the following three conditions:

(i) $b > 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' > 1 + \frac{k}{\eta}$;

(ii) $b = 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' > 1 + \frac{k}{\eta}$; or $b > 1$, $b' = \frac{k-\sigma'}{\eta}$, $b + b' > 1 + \frac{k}{\eta}$; or $b > 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' = 1 + \frac{k}{\eta}$;

(iii) $b = 1$, $b' = \frac{k}{\eta}$; or $b = \frac{2k}{\eta}$, $b' = \frac{k-\sigma'}{\eta}$.

Then there exist a positive constant $c$ and a positive integer $m$ such that for all $n \ge m$,

$$\|u - u_n\|_\infty \le c\,s(n)^{-k/d}\,(\log s(n))^\tau\,\|u\|_{k,\infty},$$

where $\tau = 0$ in case (i), $\tau = 1$ in case (ii) and $\tau = 2$ in case (iii).

Proof It follows from Theorem 7.12 that there exists a positive constant $c$ such that

$$\|u - u_n\|_\infty \le \|u - \mathcal{P}_n u\|_\infty + c\,\|(I - \tilde{\mathcal{K}}_n)(\mathcal{P}_n u - u_n)\|_\infty. \tag{7.64}$$

Using the equation

$$\mathcal{P}_n(I - \mathcal{K})u = (I - \tilde{\mathcal{K}}_n)u_n,$$

we find that

$$(I - \tilde{\mathcal{K}}_n)(\mathcal{P}_n u - u_n) = \mathcal{P}_n(I - \mathcal{K})(\mathcal{P}_n u - u) + (\mathcal{K}_n - \tilde{\mathcal{K}}_n)\mathcal{P}_n u. \tag{7.65}$$

From (7.64), (7.65), hypothesis (VI) and Lemma 7.11, there exist positive constants $c$, $p$ such that

$$\|u - u_n\|_\infty \le (1 + p\,\|I - \mathcal{K}\|)\,\|\mathcal{P}_n u - u\|_\infty + c\,\mu'\,\mu^{-kn/d}\,\|u\|_{k,\infty},$$

where

$$\mu' = \mu[2k-b\eta,\ k-b'\eta,\ n]\,(n+1)\,\mu^{-\sigma' n/d}.$$

We estimate each term in the inequality above separately. For the first term, we note that conditions (VI) and (VII) provide a positive constant $c$ such that

$$\|\mathcal{P}_n u - u\|_\infty \le c\,\mu^{-kn/d}\,\|u\|_{k,\infty}.$$

Now we turn our attention to estimating the quantity $\mu'$. To this end, we observe for any real numbers $\alpha$, $\beta$ and $e$ with $e > 0$ the asymptotic order

$$\mu[\alpha,\beta,n]\,(n+1)\,\mu^{-en/d} = \begin{cases} o(1) & \text{if } e > \max\{\alpha, \beta, \alpha+\beta\}, \\ O(n) & \text{if } \alpha = e,\ \beta < e,\ \alpha+\beta < e, \\ & \text{or if } \alpha < e,\ \beta = e,\ \alpha+\beta < e, \\ & \text{or if } \alpha < e,\ \beta < e,\ \alpha+\beta = e, \\ O(n^2) & \text{if } \alpha = 0,\ \beta = e \text{ or } \alpha = e,\ \beta = 0, \end{cases}$$

as $n \to \infty$. Using this fact with $\alpha = 2k - b\eta$, $\beta = k - b'\eta$ and $e = \sigma'$, we conclude that

$$\mu' = \begin{cases} o(1) & \text{in case (i)}, \\ O(n) & \text{in case (ii)}, \\ O(n^2) & \text{in case (iii)}, \end{cases}$$

which establishes the result of this theorem by noting that $n \sim \log s(n)$.

We see from this theorem that the convergence order of the approximate solution $u_n$ obtained from the truncated collocation method is optimal up to a logarithmic factor.

7.3.2 The condition number of the truncated matrix and complexity

We next estimate the condition number of the matrix $A_n = E_n - \tilde K_n$.

Theorem 7.14 If the conditions of Theorem 7.12 hold, then there exists a positive constant $c$ such that the condition number of the matrix $A_n$ satisfies the estimate

$$\operatorname{cond}_\infty(A_n) \le c\,\log^2(s(n)),$$

where $\operatorname{cond}_\infty(A)$ denotes the condition number of a matrix $A$ in the $\infty$ matrix norm.

Proof For any $\mathbf{v} = [v_{ij} : (i,j) \in U_n] \in \mathbb{R}^{s(n)}$, we define the vector $\mathbf{g} = [g_{ij} : (i,j) \in U_n] \in \mathbb{R}^{s(n)}$ by the equation

$$A_n \mathbf{v} = \mathbf{g} \tag{7.66}$$

and the function

$$g = \sum_{(i,j) \in U_n} g_{ij}\,\zeta_{ij}.$$

Therefore we have that

$$g_{ij} = \langle \ell_{ij}, g \rangle = \langle \ell_{ij}, \mathcal{P}_n g \rangle, \qquad (i,j) \in U_n.$$

It follows from (IV) that

$$\|A_n \mathbf{v}\|_\infty \le \theta_0\,\|\mathcal{P}_n g\|_\infty. \tag{7.67}$$

Let

$$v = \sum_{(i,j) \in U_n} v_{ij} w_{ij}$$

and observe the equation

$$(I - \tilde{\mathcal{K}}_n)v = \mathcal{P}_n g. \tag{7.68}$$

We conclude from (7.67) and (7.68) that there exists a positive constant $c$ such that

$$\|A_n \mathbf{v}\|_\infty \le \theta_0\,\|(I - \tilde{\mathcal{K}}_n)v\|_\infty \le \theta_0 \left( \|(I - \mathcal{K}_n)v\|_\infty + \|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\|_\infty \right) \le c\,\|v\|_\infty,$$

where the last inequality holds because of (7.54) and (7.62). Next, appealing to hypotheses (I) and (IV), we observe for any $t \in \Omega$ and $i \in \mathbb{Z}_{n+1}$ that

$$\left| \sum_{j \in \mathbb{Z}_{w(i)}} v_{ij} w_{ij}(t) \right| \le \rho\,\theta_0\,\|\mathbf{v}\|_\infty,$$

because there are at most $\rho$ values of $j \in \mathbb{Z}_{w(i)}$ such that $w_{ij}(t) \ne 0$. Therefore we conclude that

$$\|v\|_\infty \le \rho\,\theta_0\,(n+1)\,\|\mathbf{v}\|_\infty. \tag{7.69}$$

Consequently, there exists a positive constant $c$ such that

$$\|A_n\|_\infty \le c\,(n+1). \tag{7.70}$$

Conversely, for any $\mathbf{g} \in \mathbb{R}^{s(n)}$ there exists a vector $\mathbf{v} \in \mathbb{R}^{s(n)}$ such that equation (7.66) holds. We argue that there exists a positive constant $c$ such that

$$\|g\|_\infty \le c\,(n+1)\,\|\mathbf{g}\|_\infty.$$

Hence we obtain from condition (IX) the inequality

$$\|\mathbf{v}\|_\infty \le c\,\|v\|_\infty \le c\,\|(I - \tilde{\mathcal{K}}_n)v\|_\infty = c\,\|\mathcal{P}_n g\|_\infty \le c\,(n+1)\,\|\mathbf{g}\|_\infty,$$


from which it follows that there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,

$$\|A_n^{-1}\|_\infty \le c\,(n+1). \tag{7.71}$$

Recalling hypothesis (V), we combine the estimates (7.70) and (7.71) to obtain the desired result, namely

$$\operatorname{cond}_\infty(A_n) = O\big((n+1)^2\big) = O\big(\log^2(s(n))\big), \qquad n \to \infty.$$

In the remainder of this section we estimate the number of nonzero entries of the matrix $A_n = E_n - \tilde K_n$, which shows that the truncation strategy embodied in Lemma 7.11 can lead to a fast numerical algorithm for solving equation (7.40) while preserving a nearly optimal order of convergence. For any matrix $A$ we denote by $\mathcal{N}(A)$ the number of nonzero entries in $A$.

Theorem 7.15 Suppose that hypotheses (I) and (V) hold. Let $b$ and $b'$ be real numbers not larger than one, and let the truncation parameters $\varepsilon^n_{i'i}$, $i', i \in \mathbb{Z}_{n+1}$, be chosen such that

$$\varepsilon^n_{i'i} \le \max\left\{ a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'}) \right\}, \qquad i, i' \in \mathbb{Z}_{n+1},$$

for some constants $a > 0$ and $r > 1$. Then

$$\mathcal{N}(A_n) = O(s(n)\,\log^\tau s(n)),$$

where $\tau = 1$, except for $b = b' = 1$, in which case $\tau = 2$.

Proof We first estimate the number $\mathcal{N}(A_{i'i})$. For fixed $i$, $i'$ and $j'$, if $A_{i'j',ij} \ne 0$ then $\operatorname{dist}(\tilde S_{i'j'}, S_{ij}) \le \varepsilon^n_{i'i}$, so that

$$S_{ij} \subseteq S(i,i') = \{ v : v \in \mathbb{R}^d,\ |v - v_0| \le d_i + d_{i'} + \varepsilon^n_{i'i} \},$$

where $v_0$ is an arbitrary point in the set $\tilde S_{i'j'}$. Let $N_{i,i',j'}$ be the number of such sets which are contained in $S(i,i')$. Using condition (V), we conclude that there exists a positive constant $c$ such that

$$N_{i,i',j'} \le \frac{\operatorname{meas}(S(i,i'))}{\min\{\operatorname{meas}(S_{ij}) : S_{ij} \subseteq S(i,i')\}} \le c\,\mu^i\,(d_i + d_{i'} + \varepsilon^n_{i'i})^d.$$

Next we invoke condition (I) to conclude that the number of functions $w_{ij}$ having support contained in $S_{ij}$ is bounded by $\rho$, and appealing to condition (V), we have that $w(i) = O(\mu^i)$, $i \to \infty$. Consequently, there exists a positive constant $c$ such that

$$\mathcal{N}(A_{i'i}) \le \rho \sum_{j' \in \mathbb{Z}_{w(i')}} N_{i,i',j'} \le c\,\mu^{i+i'}\,(d_i + d_{i'} + \varepsilon^n_{i'i})^d, \qquad i, i' \in \mathbb{Z}_{n+1},$$


from which it follows that

$$\mathcal{N}(A_n) \le c \sum_{i,i' \in \mathbb{Z}_{n+1}} \mu^{i+i'} \left[ (d_i)^d + (d_{i'})^d + (\varepsilon^n_{i'i})^d \right].$$

This inequality and condition (I) imply that if the truncation parameters have the bound

$$\varepsilon^n_{i'i} \le a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},$$

then

$$\begin{aligned}
\mathcal{N}(A_n) &\le c \sum_{i' \in \mathbb{Z}_{n+1}} \sum_{i \in \mathbb{Z}_{n+1}} \mu^{i+i'} \left( \mu^{-i} + \mu^{-i'} + a^d \mu^{-n+b(n-i)+b'(n-i')} \right) \\
&\le c \left[ 2(n+1) \sum_{i \in \mathbb{Z}_{n+1}} \mu^i + a^d \mu^n \left( \sum_{i \in \mathbb{Z}_{n+1}} \mu^{(b-1)(n-i)} \right) \left( \sum_{i' \in \mathbb{Z}_{n+1}} \mu^{(b'-1)(n-i')} \right) \right] \\
&= O\big(\mu^n (n+1)^\tau\big) = O\big(s(n)\,\log^\tau s(n)\big)
\end{aligned}$$

as $n \to \infty$. If $\varepsilon^n_{i'i} \le r(d_i + d_{i'})$, a similar argument leads to

$$\mathcal{N}(A_n) = O(s(n)\,\log s(n)), \qquad n \to \infty.$$

This completes the proof.
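The quasi-linear growth of $\mathcal{N}(A_n)$ can be checked empirically in a toy setting ($d = 1$, $\mu = 2$, dyadic interval supports, $b = b' = 1$; the values of $a$ and $r$ are illustrative):

```python
import numpy as np

def nnz_truncated(n, a=1.0, r=1.1):
    # count entries kept by the truncation rule in d = 1, mu = 2, b = b' = 1;
    # level i carries 2^i dyadic interval supports of length d_i = 2^{-i}
    total = 0
    for i in range(n + 1):
        for ip in range(n + 1):
            di, dip = 2.0 ** -i, 2.0 ** -ip
            eps = max(a * 2.0 ** (-n + (n - i) + (n - ip)), r * (di + dip))
            lo_i = np.arange(2 ** i) * di          # support left endpoints, level i
            lo_p = np.arange(2 ** ip) * dip        # support left endpoints, level i'
            gap = np.maximum(0.0,
                             np.maximum.outer(lo_p, lo_i)
                             - np.minimum.outer(lo_p + dip, lo_i + di))
            total += int(np.count_nonzero(gap <= eps))
    return total, 2 ** (n + 1) - 1                 # (kept entries, s(n))

for n in (4, 6, 8):
    nnz, s = nnz_truncated(n)
    print(n, nnz, s * s, nnz / (s * np.log2(s) ** 2))
```

The last column, $\mathcal{N}(A_n)/\big(s(n)\log_2^2 s(n)\big)$, stays of moderate size as $n$ grows, while the full matrix would have $s(n)^2$ entries.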

It follows from Theorems 7.12–7.15 that, for the truncation scheme to have all the desired properties of stability, convergence and complexity, we have to choose the truncation parameters to satisfy the equation

$$\varepsilon^n_{i'i} = \max\left\{ a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'}) \right\}, \qquad i, i' \in \mathbb{Z}_{n+1},$$

with $b = 1$, $b' > \frac{k-\sigma'}{\eta}$, $b + b' \ge 1 + \frac{k}{\eta}$, or with $b = 1$, $b' = \frac{k}{\eta}$, $\sigma' < k$.

7.4 Bibliographical remarks

This chapter lays out the foundation of the fast multiscale collocation method for solving the Fredholm integral equation of the second kind with a weakly singular kernel. Most of the material presented in this chapter is taken from the paper [69]. The construction of multiscale basis functions and the corresponding multiscale collocation functionals, both having vanishing moments, was described in [69], based on ideas developed in [65, 200, 201]. The analysis of


the collocation method for solving the integral equation is based on the theory of collectively compact operators described in [6, 15]. For the definition of function values $f(t)$ at given points $t \in \Omega$ for an $L^\infty$ function $f$, readers are referred to [21]. We remark that the fast multiscale collocation method was realized for integral equations in one dimension, two dimensions and higher dimensions, respectively, in [75], [264] and [74]. Another wavelet collocation method was presented in [105] for the solution of boundary integral equations of order $r = 0, 1$ over a closed and smooth boundary manifold, where the trial space is the space of all continuous and piecewise linear functions defined over a uniform triangular grid and the collocation points are the grid points. For more wavelet collocation methods for solving integral equations, readers are referred to [225, 226]. A quadrature algorithm for the piecewise linear wavelet collocation applied to boundary integral equations can be found in [227]. Wavelet collocation methods for a first-kind boundary integral equation in acoustic scattering were developed in [143].

Numerical integrations with error control strategies in the fast collocation methods described in this chapter were originally presented in [72]. An iterated fast collocation method was developed for solving integral equations of the second kind in [62]. Multiscale collocation methods were applied in [40] to solve stochastic integral equations, and in [52, 76, 158, 160] to solve Hammerstein equations and nonlinear boundary integral equations. Moreover, multiscale collocation methods were applied to solve ill-posed integral equations of the first kind and inverse boundary value problems in [56, 57, 79, 107], to identify Volterra kernels of high order in [33], and to solve eigen-problems of weakly singular integral operators in [70].


8

Numerical integrations and error control

In the last three chapters, multiscale Galerkin methods, multiscale Petrov–Galerkin methods and multiscale collocation methods were developed for solving the Fredholm integral equation of the second kind with a weakly singular kernel on a domain in $\mathbb{R}^d$. These methods, which use multiscale bases having vanishing moments, lead to compression strategies for the coefficient matrices of the resulting linear systems. They provide fast algorithms for solving the integral equations with an optimal order of convergence and quasi-linear (up to a logarithmic factor) order of computational complexity. However, it should be pointed out that there is still a challenging problem to solve: computation of the entries of the compressed coefficient matrix, which are weakly singular integrals. The purpose of this chapter is to introduce error control strategies for numerical integrations in generating the coefficient matrix of these multiscale methods. The error control techniques are so designed that quadrature errors will not ruin the overall convergence order of the approximate solution of the integral equation and will not increase the overall computational complexity order of the original multiscale method. Specifically, we discuss the problems in the setting of multiscale collocation methods. Two types of quadrature rule are used, and the corresponding error control techniques are discussed in this chapter. The numerical integration issue for the other two types of multiscale method can be handled similarly, and thus we leave it to the interested reader.

8.1 Discrete systems of the multiscale collocation method

We begin this chapter with a brief review of the discrete systems of linear equations resulting from the multiscale collocation method introduced in Chapter 7.


We consider solving the Fredholm integral equation of the second kind in the form

$$u - \mathcal{K}u = f, \tag{8.1}$$

where $f \in L^\infty(\Omega)$ is a given function, $u \in L^\infty(\Omega)$ is the unknown to be determined, $\Omega = [0,1]$ and the operator $\mathcal{K} : L^\infty(\Omega) \to L^\infty(\Omega)$ is defined by

$$(\mathcal{K}u)(s) = \int_\Omega K(s,t)\,u(t)\,dt, \qquad s \in \Omega. \tag{8.2}$$

The multiscale collocation scheme for solving (8.1) seeks a vector $\mathbf{u}_n = [u_{ij} : (i,j) \in U_n]$ such that the function

$$u_n = \sum_{(i,j) \in U_n} u_{ij} w_{ij}$$

satisfies the equation

$$\langle \ell_{i'j'}, u_n - \mathcal{K}u_n \rangle = \langle \ell_{i'j'}, f \rangle, \qquad (i',j') \in U_n,$$

or, equivalently,

$$(E_n - K_n)\mathbf{u}_n = \mathbf{f}_n, \tag{8.3}$$

where

$$E_n = [\langle \ell_{i'j'}, w_{ij} \rangle : (i',j'), (i,j) \in U_n], \tag{8.4}$$

$$K_n = [\langle \ell_{i'j'}, \mathcal{K}w_{ij} \rangle : (i',j'), (i,j) \in U_n] \tag{8.5}$$

and

$$\mathbf{f}_n = [\langle \ell_{i'j'}, f \rangle : (i',j') \in U_n].$$

The coefficient matrix An = En minus Kn is a full matrix However due tothe special properties of the multiscale bases and functionals the matrix En issparse and Kn is numerically sparse in the sense that most of its entries havesmall absolute values To avoid computing all of these entries a truncationstrategy is proposed so that the numerical solution obtained from the truncatedsparse matrix is as accurate as that from the full matrix (see Chapter 7) Let

Kn = [Kiprimei iprime i isin Zn+1]

where

Kiprimei = [Kiprimejprimeij jprime isin Zw(iprime) j isin Zw(i)]

with Kiprimejprimeij = langiprimejprime Kwijrang

The truncation parameters εniprimei are chosen for a pair of level indices (iprime i) by

εniprimei = maxν(di + diprime) aμbprime(nminusiprime)minusi

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637010Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 163137 subject to the Cambridge Core terms of

302 Numerical integrations and error control

for some constants agt 0 and ν gt 1 The truncated matrix Kn is then defined by

\[
\overline{\mathbf{K}}_n = [\overline{\mathbf{K}}_{i'i} : i', i \in \mathbb{Z}_{n+1}],
\]

where

\[
\overline{\mathbf{K}}_{i'i} = [\overline{K}_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}],
\]

with

\[
\overline{K}_{i'j',ij} = \begin{cases} K_{i'j',ij}, & \mathrm{dist}(S_{i'j'}, S_{ij}) \le \varepsilon^n_{i'i}, \\ 0, & \text{otherwise.} \end{cases}
\]

We see from (8.5) that each entry of $\mathbf{K}_n$ is a weakly singular integral and has to be computed numerically. When an elementary quadrature rule is chosen for the evaluation of $K_{i'j',ij}$, a bound on the numerical error can be obtained. We are then able to use this bound to gauge how the accumulation of these errors influences the accuracy of the resulting numerical solution. The problem is how to choose the quadrature rules, with their parameter values, so that the convergence order of the multiscale collocation scheme is preserved at low computational complexity.
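To make the truncation strategy concrete, the following Python sketch tabulates the parameters $\varepsilon^n_{i'i}$ for a dyadic multiscale basis on $[0,1]$. The choices $\mu = 2$ and $d_i = \mu^{-i+1}$ (as a stand-in for the diameter of a level-$i$ support), together with the numerical values $a = 0.25$, $b' = 0.8$, $\nu = 1.01$ borrowed from Section 8.4, are assumptions for illustration only.

```python
import numpy as np

def truncation_parameters(n, mu=2.0, a=0.25, bp=0.8, nu=1.01):
    """Tabulate eps^n_{i'i} = max{ nu*(d_i + d_{i'}), a*mu^(b'(n-i') - i) }
    for all level pairs.  Here d_i = mu^(-i+1) stands in for the diameter
    of a level-i basis support -- an assumption for illustration."""
    d = mu ** (1.0 - np.arange(n + 1))          # d_0, ..., d_n
    eps = np.empty((n + 1, n + 1))
    for ip in range(n + 1):                     # ip plays the role of i'
        for i in range(n + 1):
            eps[ip, i] = max(nu * (d[i] + d[ip]),
                             a * mu ** (bp * (n - ip) - i))
    return eps

eps = truncation_parameters(8)
# An entry K_{i'j',ij} is kept iff dist(S_{i'j'}, S_{ij}) <= eps[ip, i];
# the geometric decay of eps along the diagonal is what yields sparsity.
print(eps[0, 0], eps[8, 8])
```

With these (assumed) values the retained band shrinks geometrically with the level, which is what keeps the truncated matrix quasi-sparse.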

8.2 Quadrature rules with polynomial order of accuracy

We first present a class of quadrature rules with polynomial order of accuracy.

8.2.1 Quadrature rule I

The first quadrature rule that we present here was introduced in [164] for a class of weakly singular univariate functions. For a fixed positive integer $k'$, let $h \in C^{2k'}((0,1])$ satisfy the property that there exists a positive constant $c$ such that

\[
|h^{(2k')}(t)| \le c\, t^{-\sigma - 2k'}, \quad t \in (0,1],
\]

where $\sigma \in [0,1)$. Note that the function $h$ is integrable on $[0,1]$ but may have a singularity at $t = 0$. We wish to compute a numerical value of the integral

\[
I(h) = \int_0^1 h(t)\,dt
\]

with accuracy $O(m^{-2k'})$ using $O(m)$ functional evaluations. For this purpose we assume that $g_{k'}$ is the Legendre polynomial of degree $k'$ on $[0,1]$, that is,

\[
\int_0^1 g_{k'}(t)\, t^{\ell}\, dt = 0, \quad \ell \in \mathbb{Z}_{k'},
\]

and denote by $\tau_\ell$, $\ell \in \mathbb{Z}_{k'}$, the $k'$ zeros of $g_{k'}$, ordered so that $0 < \tau_0 < \cdots < \tau_{k'-1} < 1$. To compute $I(h)$ we set $q = \frac{2k'+1}{1-\sigma}$ and, according to this parameter $q$, choose the points

\[
t_j = \left( \frac{j}{m} \right)^q, \quad j \in \mathbb{Z}_{m+1}, \tag{8.6}
\]

so that the subintervals $\Delta_j = [t_j, t_{j+1}]$, $j \in \mathbb{Z}_m$, form a partition of the interval $[0,1]$. Let

\[
\tau^j_\ell = t_j + (t_{j+1} - t_j)\tau_\ell, \quad \ell \in \mathbb{Z}_{k'},\ j \in \mathbb{Z}_m, \tag{8.7}
\]

and note that $\tau^j_\ell$, $\ell \in \mathbb{Z}_{k'}$, are the $k'$ zeros of the Legendre polynomial of degree $k'$ on $\Delta_j$. We now use these points to define a piecewise polynomial $S(h)$ over $[0,1]$ with knots $t_j$, $j = 1, 2, \dots, m-1$. Set $S(h)(t) = 0$ for $t \in [t_0, t_1)$, and let $S(h)$ be the Lagrange interpolation polynomial of degree $k'-1$ to $h$ at the nodes $\tau^j_\ell$, $\ell \in \mathbb{Z}_{k'}$, for $t \in [t_j, t_{j+1})$, $j = 1, 2, \dots, m-2$, and for $t \in [t_{m-1}, t_m]$. We use the value

\[
I(S(h)) = \sum_{j-1 \in \mathbb{Z}_{m-1}} \sum_{\ell \in \mathbb{Z}_{k'}} \omega^j_\ell\, h(\tau^j_\ell),
\]

where

\[
\omega^j_\ell = \int_{t_j}^{t_{j+1}} \prod_{i \in \mathbb{Z}_{k'},\, i \ne \ell} \frac{t - \tau^j_i}{\tau^j_\ell - \tau^j_i}\, dt,
\]

to approximate the integral $I(h)$. Let $E_{m,k'}(h) = I(h) - I(S(h))$ denote the error of the approximation; it is proved in [164] that there exists a positive constant $c'$, which might depend on $h$, such that

\[
|E_{m,k'}(h)| \le c' m^{-2k'} \quad \text{for all } m \in \mathbb{N}.
\]
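As a concrete illustration (not the book's code), the rule can be sketched in Python with NumPy's Gauss–Legendre nodes: graded knots $t_j = (j/m)^q$, the interpolant set to zero on $[t_0, t_1)$, and a $k'$-point Gauss rule on every other subinterval, which integrates $S(h)$ exactly. The test integrand $h(t) = t^{-1/2}$ (so $\sigma = 1/2$) and the values $m = 200$, $k' = 2$ are illustrative choices.

```python
import numpy as np

def graded_gauss(h, m, kp, sigma):
    """Approximate I(h) = int_0^1 h(t) dt for h with a t = 0 singularity,
    |h^(2k')(t)| <= c*t^(-sigma-2k'), by the graded-mesh composite Gauss
    rule: knots t_j = (j/m)^q with q = (2k'+1)/(1-sigma); the first
    subinterval [t_0, t_1) is skipped, since S(h) = 0 there."""
    q = (2 * kp + 1) / (1 - sigma)
    t = (np.arange(m + 1) / m) ** q
    x, w = np.polynomial.legendre.leggauss(kp)     # nodes/weights on [-1, 1]
    total = 0.0
    for j in range(1, m):                          # skip [t_0, t_1)
        a, b = t[j], t[j + 1]
        nodes = 0.5 * (b - a) * x + 0.5 * (a + b)  # map to [t_j, t_{j+1}]
        total += 0.5 * (b - a) * np.dot(w, h(nodes))
    return total

# h(t) = t^{-1/2}: sigma = 1/2, exact integral 2; error should be O(m^{-2k'}).
approx = graded_gauss(lambda t: t ** -0.5, m=200, kp=2, sigma=0.5)
print(abs(approx - 2.0))
```

Doubling $m$ should reduce the error by roughly $2^{2k'} = 16$ here, in line with $|E_{m,k'}(h)| \le c' m^{-2k'}$.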

We next consider computing the entries of $\overline{\mathbf{K}}_n$ using this integration method. The entries of the matrix $\mathbf{K}_n$ are integrals whose integrands have the form

\[
h_{ij}(s,t) = K(s,t)\, w_{ij}(t) \tag{8.8}
\]

for some $s \in E$ and $(i,j) \in U_n$. Note that each function $h_{ij}(s,\cdot)$ has a singularity at the point $s$, has support given by the support of $w_{ij}$, and is piecewise smooth. To apply the integration method to these functions, we define a function class and extend the integration method to this class. A function $h$ is said to be in class $\mathcal{A}$ if it has the following properties:


(I) $\mathrm{supp}(h)$ is a subinterval of $E$;
(II) there exists a set of nodes $\pi(h) = \{s_j : j - 1 \in \mathbb{Z}_{m'-1}\}$ such that $h \in C^{2k'}(E \setminus (\{s\} \cup \pi(h)))$; and
(III) there exists a positive constant $\theta'$ such that

\[
|h^{(2k')}(t)| \le \theta' |t - s|^{-(\sigma + 2k')}, \quad t \in E \setminus (\{s\} \cup \pi(h)).
\]

For a function in class $\mathcal{A}$ we choose the canonical partition of $E$ with respect to $m$ described by (8.6) and, associated with the singular point $s$, pick two collections of nodes

\[
\pi^r_t = \{ t^r_j = s + t_j : j \in \mathbb{Z}_{m+1} \} \quad\text{and}\quad \pi^l_t = \{ t^l_j = s - t_j : j \in \mathbb{Z}_{m+1} \}.
\]

Let $[q', q''] = \mathrm{supp}(h)$. We rearrange the elements of

\[
(\pi(h) \cup \pi^r_t \cup \pi^l_t \cup \{q', q''\}) \cap \mathrm{supp}(h)
\]

in increasing order and write them as a new sequence $q' = q_0 < q_1 < \cdots < q_{m''} = q''$, with an integer $m''$ that depends on $m$ and satisfies the bound $m'' \le 2m + m' + 1$. Define a partition $\Pi(h)$ of $\mathrm{supp}(h)$ by

\[
\Pi(h) = \{ Q_\alpha = [q_\alpha, q_{\alpha+1}) : \alpha \in \mathbb{Z}_{m''} \},
\]

and define a piecewise polynomial $S(h)$ of order $k'$ on $\mathrm{supp}(h)$ by the rule that $S(h) = 0$ on $Q_\alpha$ if $Q_\alpha \subset [t^l_1, t^r_1)$, and otherwise on $Q_\alpha$, $S(h)$ is the Lagrange interpolation polynomial of order $k'$ which interpolates $h$ at the $k'$ zeros $\tau^\alpha_\ell = q_\alpha + (q_{\alpha+1} - q_\alpha)\tau_\ell$, $\ell \in \mathbb{Z}_{k'}$, of the Legendre polynomial of degree $k'$ on $Q_\alpha$. We compute the value $I(S(h))$ and use it to approximate $I(h)$. In the next lemma we analyze the order of convergence of this integration method. For this purpose we set

\[
E_{m,k'}(h) = \sum_{Q_\alpha \in \Pi(h)} \left| \int_{Q_\alpha} [h(t) - S(h)(t)]\,dt \right|.
\]

Lemma 8.1 Let $h$ be a function in class $\mathcal{A}$. Then there exists a positive constant $c_1$, independent of $h$, $s$ or $\Pi(h)$, such that

\[
E_{m,k'}(h) \le c_1 \theta' m^{-2k'}, \tag{8.9}
\]

where $\theta'$ is the constant appearing in the definition of class $\mathcal{A}$.

Proof The proof is obtained by modifying the proof of Theorem 2.2 in [164]. For $j \in \mathbb{Z}_m$ we introduce two index sets

\[
\Lambda^r_j = \{ \alpha \in \mathbb{Z}_{m''} : Q_\alpha \in \Pi(h),\ Q_\alpha \subset [t^r_j, t^r_{j+1}] \}
\]

and

\[
\Lambda^l_j = \{ \alpha \in \mathbb{Z}_{m''} : Q_\alpha \in \Pi(h),\ Q_\alpha \subset [t^l_{j+1}, t^l_j] \}.
\]

Associated with these two index sets we set

\[
E^r_{k',j}(h) = \sum_{\alpha \in \Lambda^r_j} \left| \int_{Q_\alpha} [h(t) - S(h)(t)]\,dt \right|
\quad\text{and}\quad
E^l_{k',j}(h) = \sum_{\alpha \in \Lambda^l_j} \left| \int_{Q_\alpha} [h(t) - S(h)(t)]\,dt \right|,
\]

and observe that

\[
E_{m,k'}(h) = \sum_{\Lambda^l_j \ne \emptyset} E^l_{k',j}(h) + \sum_{\Lambda^r_j \ne \emptyset} E^r_{k',j}(h).
\]

We first estimate $E^r_{k',j}$. By the definition of $S(h)$ we have that

\[
E^r_{k',0}(h) \le \int_{t^r_0}^{t^r_1} |h(t)|\,dt \le \theta' \int_{t^r_0}^{t^r_1} |s-t|^{-\sigma}\,dt = \frac{\theta'}{1-\sigma}\, m^{-(2k'+1)}. \tag{8.10}
\]

For $j \ge 1$, it follows from the error estimate of the Gaussian quadrature that there exist $\eta_\alpha \in Q_\alpha$ such that

\[
E^r_{k',j}(h) = \sum_{\alpha \in \Lambda^r_j} \frac{|h^{(2k')}(\eta_\alpha)|}{(2k')!} \left| \int_{Q_\alpha} (t - \tau^\alpha_0)^2 \cdots (t - \tau^\alpha_{k'-1})^2\, dt \right|.
\]

Using condition (III) and noting that

\[
|t - \tau^\alpha_\ell| < t^r_{j+1} - t^r_j \quad\text{for any } \alpha \in \Lambda^r_j,\ t \in Q_\alpha,
\quad\text{and}\quad
\sum_{\alpha \in \Lambda^r_j} \int_{Q_\alpha} dt \le t^r_{j+1} - t^r_j,
\]

we conclude that

\[
E^r_{k',j}(h) \le \frac{\theta'}{(2k')!}\, |s - t^r_j|^{-(\sigma + 2k')} (t^r_{j+1} - t^r_j)^{2k'+1}.
\]

By the definition of $t^r_j$ and (8.6), we observe that

\[
E^r_{k',j}(h) \le \frac{\theta' m^{-(2k'+1)}}{(2k')!}\, j^{-q(\sigma + 2k')} q^{2k'+1} (j+1)^{(q-1)(2k'+1)}.
\]

Since $q(\sigma + 2k') = (q-1)(2k'+1)$, it follows that

\[
E^r_{k',j}(h) \le \frac{\theta'}{(2k')!}\, q^{2k'+1} 2^{(q-1)(2k'+1)} m^{-(2k'+1)}. \tag{8.11}
\]

Likewise, we obtain estimates similar to (8.10) and (8.11) for $E^l_{k',0}$ and $E^l_{k',j}$. Therefore we conclude (8.9) with

\[
c_1 = 2 \max\left\{ \frac{q^{2k'+1} 2^{(q-1)(2k'+1)}}{(2k')!},\ \frac{1}{1-\sigma} \right\},
\]

proving the lemma.

We now apply the integration method described above for functions in class $\mathcal{A}$ to the functions $h_{ij}$ defined by (8.8), which appear in the compressed matrix $\overline{\mathbf{K}}_n$. Recall that the compressed matrix is obtained from the full matrix $\mathbf{K}_n$ by the truncation strategy defined with the truncation parameters $\varepsilon^n_{i'i}$, $i', i \in \mathbb{Z}_{n+1}$. For the given truncation parameters $\varepsilon^n_{i'i}$ we introduce the index set

\[
Z_{i'j'i} = \{ j \in \mathbb{Z}_{w(i)} : \mathrm{dist}(S_{i'j'}, S_{ij}) \le \varepsilon^n_{i'i} \}
\]

for $(i',j') \in U_n$, and define, for $\ell \in \mathbb{Z}_r$,

\[
Z^\ell_{i'j'i} = \{ j \in Z_{i'j'i} : j = \mu(e)r + \ell \text{ for some } e \in \mathbb{Z}^{i-1}_\mu \}.
\]

We observe that $Z^\ell_{i'j'i} \subseteq Z_{i'j'i}$, that for $j_1, j_2 \in Z^\ell_{i'j'i}$ with $j_1 \ne j_2$,

\[
\mathrm{meas}(\mathrm{supp}(w_{ij_1}) \cap \mathrm{supp}(w_{ij_2})) = 0,
\]

and that for any $\ell \in \mathbb{Z}_r$,

\[
\bigcup_{j \in Z^\ell_{i'j'i}} \mathrm{supp}(w_{ij}) \subset E.
\]

Therefore we define, for $\ell \in \mathbb{Z}_r$ and $(i',j') \in U_n$,

\[
w_{i'j'\ell i}(t) = \begin{cases} w_{ij}(t) & \text{if } t \in \mathrm{int}(\mathrm{supp}(w_{ij})) \text{ for some } j \in Z^\ell_{i'j'i}, \\ 0 & \text{otherwise,} \end{cases}
\]

and set

\[
h_{i'j'\ell i}(s,t) = K(s,t)\, w_{i'j'\ell i}(t).
\]

The next lemma presents estimates for the error of the integration method applied to the functions $h_{i'j'\ell i}$.


Lemma 8.2 Suppose that there exists a positive constant $\theta$ such that

\[
|D^\beta_t K(s,t)| \le \theta |s-t|^{-(\sigma+\beta)}
\]

for any $0 \le \beta \le 2k'$ and $s, t \in E$, $s \ne t$. Then there exists a positive constant $c_2$ such that for all $i \in \mathbb{Z}_{n+1}$, $\ell \in \mathbb{Z}_r$, $(i',j') \in U_n$ and $s \in E$,

\[
E_{m,k'}(h_{i'j'\ell i}) \le c_2 m^{-2k'} \left[ \left( \mu^{-i+1} + \mu^{-i'+1} + \varepsilon^n_{i'i} \right) \mu^{i-1} \right]^{k_0}, \tag{8.12}
\]

where $k_0 = \min\{k,\ 2k'+1\}$.

Proof It suffices to prove that $h_{i'j'\ell i}$ is in class $\mathcal{A}$ and to compute the constant $\theta'$ for this function.

It is clear that condition (I) is satisfied. According to the construction of $w_{ij}$, for any $(i,j) \in U_n$ there exist $e \in \mathbb{Z}^{i-1}_\mu$ and $l \in \mathbb{Z}_r$ such that $j = \mu(e)r + l$ and

\[
w_{ij} = T_e w_{1l} = w_{1l} \circ \phi^{-1}_e\, \chi_{\phi_e(E)}.
\]

Note that $w_{1l}$ is a piecewise polynomial with a finite set of knots. This set of knots forms a set $\pi(h_{ij})$ of knots for the function $h_{ij}$ required by condition (II) in the definition of class $\mathcal{A}$. Observing that

\[
\pi(h_{i'j'\ell i}) = \bigcup_{j \in Z^\ell_{i'j'i}} \pi(h_{ij}), \tag{8.13}
\]

we confirm that $h_{i'j'\ell i}$ satisfies condition (II) with the set $\pi(h_{i'j'\ell i})$ of knots. It remains to show that it also satisfies condition (III). Again noting that each $w_{i'j'\ell i}$ is a piecewise polynomial of order $k$ with the knots $\pi(h_{i'j'\ell i})$, it follows from the hypothesis on the kernel $K$ that for $t \in \mathrm{supp}(w_{ij}) \setminus (\{s\} \cup \pi(h_{ij}))$,

\[
|D^{2k'}_t h_{i'j'\ell i}(s,t)| = \left| \sum_{\beta \in \mathbb{Z}_{k_0}} \binom{2k'}{\beta} D^{2k'-\beta}_t K(s,t)\, w^{(\beta)}_{ij}(t) \right|
\le \theta \sum_{\beta \in \mathbb{Z}_{k_0}} \binom{2k'}{\beta} |s-t|^{-(\sigma + 2k' - \beta)} \mu^{\beta(i-1)} \left| w^{(\beta)}_{1l}(\phi^{-1}_e(t)) \right|.
\]

Introducing the constant

\[
\Theta = \max_{\beta \in \mathbb{Z}_{k_0},\, l \in \mathbb{Z}_r} \sup\{ |w^{(\beta)}_{1l}(t)| : t \in E \} < \infty,
\]

we obtain the estimate, for $t \in \mathrm{supp}(w_{ij}) \setminus (\{s\} \cup \pi(h_{ij}))$,

\[
|D^{2k'}_t h_{i'j'\ell i}(s,t)| \le \theta \Theta (2k')! \sum_{\beta \in \mathbb{Z}_{k_0}} \left[ |s-t| \mu^{i-1} \right]^\beta |s-t|^{-(\sigma + 2k')}. \tag{8.14}
\]

We now compute the constant $\theta'$ associated with the function $h_{i'j'\ell i}$. For any $j \in Z_{i'j'i}$ we have that

\[
|s-t| \le d_i + d_{i'} + \varepsilon^n_{i'i} \quad \text{for any } t \in \mathrm{supp}(w_{ij}).
\]

This implies that for $t \in \mathrm{supp}(w_{ij})$,

\[
\sum_{\beta \in \mathbb{Z}_{k_0}} \left[ |s-t| \mu^{i-1} \right]^\beta \le k_0 \left[ \left( \mu^{-i+1} + \mu^{-i'+1} + \varepsilon^n_{i'i} \right) \mu^{i-1} \right]^{k_0}.
\]

Noticing that

\[
\mathrm{supp}(w_{i'j'\ell i}) \subset \bigcup_{j \in Z^\ell_{i'j'i}} \mathrm{supp}(w_{ij}),
\]

we observe that condition (III) holds for the function $h_{i'j'\ell i}$ with the constant

\[
\theta' = \theta \Theta k_0 (2k')! \left[ \left( \mu^{-i+1} + \mu^{-i'+1} + \varepsilon^n_{i'i} \right) \mu^{i-1} \right]^{k_0}.
\]

Finally, using Lemma 8.1, we conclude the estimate (8.12) with $c_2 = c_1 \theta \Theta k_0 (2k')!$.

We now use the integration method described above to compute the integrals involved in the nonzero entries

\[
K_{i'j',ij} = \sum_{s \in S_{i'j'}} c_s \int_{S_{ij}} h_{ij}(s,t)\,dt \tag{8.15}
\]

of $\overline{\mathbf{K}}_{i'i}$. In other words, we use

\[
\widetilde{K}_{i'j',ij} = \sum_{s \in S_{i'j'}} c_s\, I(S(h_{ij}(s,\cdot)))
\]

to approximate $K_{i'j',ij}$ given by (8.15). For a given set of truncation parameters we let

\[
\widetilde{\mathbf{K}}_{i'i} = \left[ \widetilde{K}^{(\varepsilon)}_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)} \right],
\]

where

\[
\widetilde{K}^{(\varepsilon)}_{i'j',ij} = \begin{cases} \widetilde{K}_{i'j',ij}, & \mathrm{dist}(S_{i'j'}, S_{ij}) \le \varepsilon^n_{i'i}, \\ 0, & \text{otherwise.} \end{cases} \tag{8.16}
\]

In the next lemma we estimate the $\infty$-norm of the error $\overline{\mathbf{K}}_{i'i} - \widetilde{\mathbf{K}}_{i'i}$.


Lemma 8.3 Let $m$ be a positive integer. Then there exists a positive constant $c_3 > 0$ such that for all $i', i \in \mathbb{Z}_{n+1}$ and $n \in \mathbb{N}$,

\[
\|\overline{\mathbf{K}}_{i'i} - \widetilde{\mathbf{K}}_{i'i}\|_\infty \le c_3 \left[ \left( \mu^{-i+1} + \mu^{-i'+1} + \varepsilon^n_{i'i} \right) \mu^{i-1} \right]^{k_0} m^{-2k'}. \tag{8.17}
\]

Proof For $(i',j') \in U_n$ we set $C_{i'j'} = \max\{ |c_s| : s \in S_{i'j'} \}$. By the construction of the collocation functionals, there exist positive integers $c'_2, c''_2$ such that for all $(i',j') \in U_n$ and for all $n \in \mathbb{N}$,

\[
\mathrm{card}(S_{i'j'}) \le c'_2 \quad\text{and}\quad \max\{ C_{i'j'} : (i',j') \in U_n \} \le c''_2. \tag{8.18}
\]

Using (8.15) and (8.18), we see that there exists a positive constant $c$ such that for all $i', i \in \mathbb{Z}_{n+1}$,

\[
\|\overline{\mathbf{K}}_{i'i} - \widetilde{\mathbf{K}}_{i'i}\|_\infty \le c \max_{j' \in \mathbb{Z}_{w(i')}} \left\{ \sum_{s \in S_{i'j'}} \sum_{j \in Z_{i'j'i}} E_{m,k'}(h_{ij}) \right\}.
\]

According to the definition of $E_{m,k'}(h)$, we conclude that

\[
\sum_{j \in Z_{i'j'i}} E_{m,k'}(h_{ij}) = \sum_{\ell \in \mathbb{Z}_r} \sum_{j \in Z^\ell_{i'j'i}} \sum_{Q_\alpha \in \Pi(h_{ij})} \left| \int_{Q_\alpha} [h_{ij}(t) - (S(h_{ij}))(t)]\,dt \right|.
\]

Recalling (8.13), we obtain that the right-hand side of the equation above equals

\[
\sum_{\ell \in \mathbb{Z}_r} \sum_{Q_\alpha \in \Pi(h_{i'j'\ell i})} \left| \int_{Q_\alpha} [h_{i'j'\ell i}(t) - (S(h_{i'j'\ell i}))(t)]\,dt \right| = \sum_{\ell \in \mathbb{Z}_r} E_{m,k'}(h_{i'j'\ell i}).
\]

It follows that

\[
\|\overline{\mathbf{K}}_{i'i} - \widetilde{\mathbf{K}}_{i'i}\|_\infty \le c \max\left\{ \sum_{\ell \in \mathbb{Z}_r} E_{m,k'}(h_{i'j'\ell i}) : j' \in \mathbb{Z}_{w(i')} \right\}. \tag{8.19}
\]

By (8.19) and Lemma 8.2 we obtain the desired estimate.

8.2.2 Convergence order and computational complexity

To ensure that the numerical integration does not ruin the convergence order of the collocation method, we are required to choose different integers $m$ (the number of functional evaluations used in the numerical integration of the integrals involved in the entries) for the different blocks $\overline{\mathbf{K}}_{i'i}$. We now denote these integers by $m_{i'i}$, $i', i \in \mathbb{Z}_{n+1}$, to indicate their dependence on the blocks. Specifically, we choose $m_{i'i}$ to satisfy the inequality

\[
m_{i'i} \ge c_0 (\varepsilon^n_{i'i})^{\lambda} \mu(i',i), \quad i', i \in \mathbb{Z}_{n+1}, \tag{8.20}
\]

for some positive constant $c_0$, where

\[
\lambda = \frac{2k + k_0 - \sigma'}{2k'} \quad\text{and}\quad \mu(i',i) = \mu^{\frac{k(i'+i) + k_0(i-1)}{2k'}}.
\]
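As a sketch of how (8.20) translates into concrete quadrature orders, the following Python function computes the smallest admissible $m_{i'i}$ for hypothetical parameter values ($c_0 = 1$, $k = k' = 2$, $\sigma' = 0.8$, $\mu = 2$, and $d_i = \mu^{-i+1}$ for the support diameter); all numerical values are assumptions for illustration.

```python
import math

def quadrature_orders(n, k=2, kp=2, sigma_p=0.8, mu=2.0,
                      a=0.25, bp=0.8, nu=1.01, c0=1.0):
    """Smallest m_{i'i} with m_{i'i} >= c0 * (eps^n_{i'i})^lam * mu(i', i),
    where lam = (2k + k0 - sigma')/(2k') and
    mu(i', i) = mu^((k(i'+i) + k0(i-1))/(2k')).  Here k0 = min(k, 2k'+1)
    and d_i = mu^(-i+1) stands in for the support diameter (an assumption)."""
    k0 = min(k, 2 * kp + 1)
    lam = (2 * k + k0 - sigma_p) / (2 * kp)
    m = [[0] * (n + 1) for _ in range(n + 1)]
    for ip in range(n + 1):                      # ip plays the role of i'
        for i in range(n + 1):
            d_i, d_ip = mu ** (1 - i), mu ** (1 - ip)
            eps = max(nu * (d_i + d_ip), a * mu ** (bp * (n - ip) - i))
            mu_ipi = mu ** ((k * (ip + i) + k0 * (i - 1)) / (2 * kp))
            m[ip][i] = max(1, math.ceil(c0 * eps ** lam * mu_ipi))
    return m

m = quadrature_orders(8)
print(m[0][0], m[8][8])
```

Coarse-level blocks receive the largest quadrature orders, while fine-level blocks, whose entries are small and heavily truncated, need only a few points; this balance underlies the complexity bounds of Theorem 8.5.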

Suppose that the numerical values of the blocks $\widetilde{\mathbf{K}}_{i'i}$ are computed accordingly. We solve the linear system

\[
(\mathbf{E}_n - \widetilde{\mathbf{K}}_n)\,\widetilde{\mathbf{u}}_n = \mathbf{f}_n \tag{8.21}
\]

for $\widetilde{\mathbf{u}}_n = [\widetilde{u}_{ij} : (i,j) \in U_n]$ and denote

\[
\widetilde{u}_n = \sum_{(i,j) \in U_n} \widetilde{u}_{ij} w_{ij}.
\]

Our next theorem shows that the integers $m_{i'i}$ so chosen allow the approximate solution $\widetilde{u}_n$ to preserve the convergence order that $u_n$ has.

Theorem 8.4 Suppose that the condition of Lemma 8.2 holds, that $u \in W^{k,\infty}(E)$, that the integrals in $\widetilde{\mathbf{K}}_n$ are computed by the integration formula described above using $m_{i'i}$ functional evaluations, where the $m_{i'i}$ satisfy (8.20), and that $\widetilde{u}_n$ is solved accordingly. Then there exist a positive constant $c$ and a positive integer $N$ such that for all $n > N$,

\[
\|u - \widetilde{u}_n\|_\infty \le c (s(n))^{-k} (\log s(n))^\tau \|u\|_{k,\infty}, \tag{8.22}
\]

where $\tau = 1$ if $b' > \frac{k}{2k-\sigma'}$ and $\tau = 2$ if $b' = \frac{k}{2k-\sigma'}$.

Proof By the proof of Theorem 7.13, the estimate (8.22) holds if there exists a positive constant $c$ such that for all $i', i \in \mathbb{Z}_{n+1}$ and for all $n \in \mathbb{N}$,

\[
\|\mathbf{K}_{i'i} - \widetilde{\mathbf{K}}_{i'i}\|_\infty \le c (\varepsilon^n_{i'i})^{-(2k-\sigma')} \mu^{-k(i'+i)}. \tag{8.23}
\]

By estimate (8.23) and the triangle inequality, it suffices to prove that there exists a positive constant $c$ such that for all $i', i \in \mathbb{Z}_{n+1}$ and for all $n \in \mathbb{N}$, (8.23) holds with $\mathbf{K}_{i'i}$ replaced by $\overline{\mathbf{K}}_{i'i}$. To this end, we recall that the definition of $\varepsilon^n_{i'i}$ ensures that

\[
\mu^{-i'+1} + \mu^{-i+1} \le \varepsilon^n_{i'i}.
\]

It follows from Lemma 8.3 and the choice (8.20) of $m_{i'i}$ that

\[
\|\overline{\mathbf{K}}_{i'i} - \widetilde{\mathbf{K}}_{i'i}\|_\infty \le 2^{k_0} c_0^{-2k'} c\, (\varepsilon^n_{i'i})^{-(2k-\sigma')} \mu^{-k(i'+i)},
\]

proving the claim.

Now we turn to analyzing the computational complexity of generating the matrix $\widetilde{\mathbf{K}}_n$. For $i', i \in \mathbb{Z}_{n+1}$ we denote by $M_{i'i}$ the number of functional evaluations needed for computing the entries of $\widetilde{\mathbf{K}}_{i'i}$. Thus

\[
M^U_n = \sum_{i \in \mathbb{Z}_{n+1}} \sum_{i' \in \mathbb{Z}_{i+1}} M_{i'i} \tag{8.24}
\]

and

\[
M^L_n = \sum_{i' \in \mathbb{Z}_{n+1}} \sum_{i \in \mathbb{Z}_{i'}} M_{i'i} \tag{8.25}
\]

are the numbers of functional evaluations used for computing the upper and lower triangular entries of $\widetilde{\mathbf{K}}_n$, respectively, and $M_n = M^U_n + M^L_n$ is the total number of functional evaluations used for computing all entries of $\widetilde{\mathbf{K}}_n$. In the next theorem we estimate $M^U_n$ and $M^L_n$.

Theorem 8.5 Suppose that the matrix $\widetilde{\mathbf{K}}_n$ is generated by using the integration formula described above. Let $m_{i'i}$, $i', i \in \mathbb{Z}_{n+1}$, be the smallest integers satisfying (8.20). Choose

\[
k' \ge k \quad\text{and}\quad \frac{2k'}{2k'+1}(1-\sigma) \le \sigma' < 1-\sigma. \tag{8.26}
\]

Then there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,

\[
M^U_n \le c\,(s(n))^{1+\lambda'} \tag{8.27}
\]

and

\[
M^L_n \le c\,(s(n))^{1+\lambda''}, \tag{8.28}
\]

where $\lambda' = \frac{\sigma'}{2k'} - \frac{1-\sigma}{2k'+1}$ and $\lambda'' = \frac{k}{2k'}$.

Proof For $i', i \in \mathbb{Z}_{n+1}$ we let $M_{i'j'i}$ denote the number of functional evaluations used in computing the $j'$th row of the block $\widetilde{\mathbf{K}}_{i'i}$. Recalling that the number of rows in this block is $w(i')$, we have that

\[
M_{i'i} = w(i') M_{i'j'i}. \tag{8.29}
\]

To estimate $M_{i'j'i}$ we let $M(h)$ be the number of functional evaluations used in computing $I(S(h))$. Recalling the definition of the function $h_{i'j'\ell i}$, we have that

\[
M_{i'j'i} = \sum_{j \in Z_{i'j'i}} M(h_{ij}) = \sum_{\ell \in \mathbb{Z}_r} \sum_{j \in Z^\ell_{i'j'i}} M(h_{ij}) = \sum_{\ell \in \mathbb{Z}_r} M(h_{i'j'\ell i}). \tag{8.30}
\]

Note that we actually integrate $S(h_{i'j'\ell i})$ to obtain an approximate value of the integral $I(h_{i'j'\ell i})$. Since $S(h_{i'j'\ell i})$ is a piecewise polynomial of order $k'$, the number of functional evaluations used in integrating it between two consecutive knots is exactly $k'$. Setting

\[
N_1 = \mathrm{card}(\pi(h_{i'j'\ell i})) \quad\text{and}\quad N_2 = \mathrm{card}(\pi^r_t \cup \pi^l_t),
\]

we find that

\[
M(h_{i'j'\ell i}) \le k'(N_1 + N_2). \tag{8.31}
\]

For $j = 1, 2$ we let

\[
M^U_{n,j} = \sum_{i \in \mathbb{Z}_{n+1}} \sum_{i' \in \mathbb{Z}_{i+1}} w(i') \sum_{\ell \in \mathbb{Z}_r} k' N_j \tag{8.32}
\]

and

\[
M^L_{n,j} = \sum_{i' \in \mathbb{Z}_{n+1}} \sum_{i \in \mathbb{Z}_{i'}} w(i') \sum_{\ell \in \mathbb{Z}_r} k' N_j. \tag{8.33}
\]

From (8.24), (8.25) and (8.29)–(8.31) we conclude that $M^U_n \le M^U_{n,1} + M^U_{n,2}$ and $M^L_n \le M^L_{n,1} + M^L_{n,2}$.

We first estimate $M^U_{n,1}$. From the construction of the basis functions $w_{ij}$ it is clear that the functions $w_{ij}$ are piecewise polynomials with sets of knots whose cardinality is uniformly bounded. As a result, according to the definition of $w_{i'j'\ell i}$, there exists a positive constant $c'$ such that

\[
N_1 \le c'\,\mathrm{card}(Z^\ell_{i'j'i}). \tag{8.34}
\]

By (8.34) and Theorem 7.15 we observe that there exists a positive constant $c$ such that for all $n \in \mathbb{N}$,

\[
M^U_{n,1} \le c' k' \sum_{i \in \mathbb{Z}_{n+1}} \sum_{i' \in \mathbb{Z}_{i+1}} \mathcal{N}(\overline{\mathbf{K}}_{i'i}) \le c k' s(n) \log^\tau s(n), \tag{8.35}
\]

where $\mathcal{N}(\overline{\mathbf{K}}_{i'i})$ denotes the number of nonzero entries of $\overline{\mathbf{K}}_{i'i}$, $\tau = 2$ if $b' = 1$ and $\tau = 1$ if $\frac{k-\sigma'}{2k-\sigma'} < b' < 1$. Likewise, we have the estimate for $M^L_{n,1}$:

\[
M^L_{n,1} \le c k' s(n) \log^\tau s(n).
\]

Now we turn to estimating $M^U_{n,2}$ and $M^L_{n,2}$. By (8.6) and the truncation strategy, a sufficient condition for $t^r_\iota \notin \mathrm{supp}(w_{i'j'\ell i})$ and $t^l_\iota \notin \mathrm{supp}(w_{i'j'\ell i})$ is

\[
\left( \frac{\iota}{m_{i'i}} \right)^{\frac{2k'+1}{1-\sigma}} \ge d_i + d_{i'} + \varepsilon^n_{i'i}. \tag{8.36}
\]

The smallest $\iota$ satisfying (8.36) is an upper bound for the number of elements of $\pi^r_t$ or $\pi^l_t$ which are located in $\mathrm{supp}(h_{i'j'\ell i})$. Therefore, by the choice of $m_{i'i}$, there exists a positive constant $c$ such that for all $i', i \in \mathbb{Z}_{n+1}$ and for all $n \in \mathbb{N}$,

\[
N_2 \le 2c_0 \left( d_i + d_{i'} + \varepsilon^n_{i'i} \right)^{\frac{1-\sigma}{2k'+1}} (\varepsilon^n_{i'i})^{\lambda} \mu(i',i) \le c (\varepsilon^n_{i'i})^{\lambda_0} \mu(i',i),
\]

where $\lambda_0 = \frac{2k+k_0}{2k'} - \lambda'$. When $i' \le i$, if $\varepsilon^n_{i'i} = \nu(\mu^{-i'+1} + \mu^{-i+1})$, then

\[
N_2 \le c \left( \nu(\mu^{-i'+1} + \mu^{-i+1}) \right)^{\frac{2k+k_0}{2k'} - \lambda'} \mu(i',i) \le c\, \mu^{-\lambda_0 i' + \frac{k+k_0}{2k'} i},
\]

and if $\varepsilon^n_{i'i} = a\mu^{b'(n-i')-i}$, then

\[
N_2 \le c \left( a\mu^{b'(n-i')-i} \right)^{\lambda_0} \mu(i',i) = c\, \mu^{b'\lambda_0 n} \mu^{\left(\frac{k}{2k'} - b'\lambda_0\right) i' - \left(\frac{k}{2k'} - \lambda'\right) i}.
\]

When $i' > i$, if $\varepsilon^n_{i'i} = \nu(\mu^{-i'+1} + \mu^{-i+1})$, then

\[
N_2 \le c \left( \nu(\mu^{-i'+1} + \mu^{-i+1}) \right)^{\lambda_0} \mu(i',i) \le c\, \mu^{\frac{k}{2k'} i' - \left(\frac{k}{2k'} - \lambda'\right) i},
\]

and if $\varepsilon^n_{i'i} = a\mu^{b'(n-i')-i}$, then

\[
N_2 \le c \left( a\mu^{b'(n-i')-i} \right)^{\lambda_0} \mu(i',i) = c\, \mu^{b'\lambda_0 n} \mu^{\left(\frac{k}{2k'} - b'\lambda_0\right) i' - \left(\frac{k}{2k'} - \lambda'\right) i}.
\]

Note that for $i - 1 \in \mathbb{Z}_n$, $w(i) = r\mu^{i-1}$. By the definition of $M^U_{n,2}$ we observe that if $\varepsilon^n_{i'i} = \nu(\mu^{-i'+1} + \mu^{-i+1})$, then

\[
M^U_{n,2} \le c \sum_{i \in \mathbb{Z}_{n+1}} \sum_{i' \in \mathbb{Z}_{i+1}} \mu^{\left(1 - \frac{k+k_0}{2k'} + \lambda'\right) i' + \frac{k+k_0}{2k'} i}.
\]

Hence,

\[
M^U_{n,2} \le \begin{cases} c\,\mu^{\frac{k+k_0}{2k'} n}, & \lambda' < \frac{k+k_0}{2k'} - 1, \\ c\,\mu^{(1+\lambda')n}, & \lambda' \ge \frac{k+k_0}{2k'} - 1. \end{cases} \tag{8.37}
\]

Similarly, if $\varepsilon^n_{i'i} = a\mu^{b'(n-i')-i}$, we introduce a new parameter $\bar\lambda = \left(1 + \frac{k}{2k'}\right)/\lambda_0$ and observe that

\[
M^U_{n,2} \le \begin{cases} c\,\mu^{b'\lambda_0 n}, & b' > \bar\lambda, \\ c\,\mu^{(1+\lambda')n}, & b' \le \bar\lambda. \end{cases} \tag{8.38}
\]

For an estimate of $M^L_{n,2}$: if $\varepsilon^n_{i'i} = \nu(\mu^{-i'+1} + \mu^{-i+1})$, then since $\lambda' < \frac{k}{2k'}$ we have

\[
M^L_{n,2} \le c\,\mu^{\left(1 + \frac{k}{2k'}\right)n}, \tag{8.39}
\]

and if $\varepsilon^n_{i'i} = a\mu^{b'(n-i')-i}$, we see that

\[
M^L_{n,2} \le \begin{cases} c\,\mu^{b'\lambda_0 n}, & b' > \bar\lambda, \\ c\,\mu^{\left(1 + \frac{k}{2k'}\right)n}, & b' \le \bar\lambda. \end{cases} \tag{8.40}
\]

Now, using assumption (8.26), we have that $k_0 = k$ and $0 < \lambda' < \frac{\sigma'}{2k'}$, and thus $\bar\lambda > 1$. Noting that $b' \le 1$, we conclude the estimates (8.27) and (8.28) from (8.32), (8.33) and (8.37)–(8.40). The proof is complete.

8.3 Quadrature rules with exponential order of accuracy

In this section we study a class of quadrature rules which have an exponential order of accuracy.

8.3.1 Quadrature rule II

In this section we present another integration method for the case when the kernel is a $C^\infty$ function off the diagonal. This stronger assumption on the kernel allows us to use polynomials of different orders on different subintervals, which yields an exponential order of convergence for the integration method; this idea was used in [242] in a different context. As a result, the computational complexity of the numerical integration is improved considerably. Specifically, we assume that for any $s \in E$, $K(s,\cdot) \in C^\infty(E \setminus \{s\})$ and there exists a positive constant $\theta$ such that

\[
|D^\beta_t K(s,t)| \le \theta |s-t|^{-(\sigma+\beta)} \tag{8.41}
\]

for any $\beta \in \mathbb{N}_0$ and $s, t \in E$, $s \ne t$. Instead of using the knots described by (8.6), for any $\gamma \in (0,1)$ we set

\[
t_0 = 0, \quad t_\iota = \gamma^{m-\iota}, \quad \iota = 1, 2, \dots, m.
\]

As before, we define the sets $\pi^r_t$, $\pi^l_t$ of knots, the subintervals $Q_\alpha$ and the partition $\Pi(h)$ for a function $h(s,\cdot) \in C^\infty(E \setminus (\{s\} \cup \pi(h)))$. In the present case we define the piecewise polynomial $S(h)$ by the following rule. Note that $\tau^\alpha_j = q_\alpha + (q_{\alpha+1} - q_\alpha)\tau_j$, $j \in \mathbb{Z}_{k_\iota}$, are the $k_\iota$ zeros of the Legendre polynomial of degree $k_\iota$ on $Q_\alpha$. If $Q_\alpha \subset [t^l_1, t^r_1)$, then $S(h) = 0$ on $Q_\alpha$; and if $Q_\alpha \subset [t^r_\iota, t^r_{\iota+1})$ or $Q_\alpha \subset [t^l_{\iota+1}, t^l_\iota)$, then on $Q_\alpha$, $S(h)$ is the Lagrange interpolating polynomial of order $k_\iota$ to $h$ at the $k_\iota$ zeros $\tau^\alpha_j$. Note that $k_\iota$ varies with $\iota$, so that $S(h)$ depends on the vector $\mathbf{k} = [k_\iota : \iota = 1, 2, \dots, m]$. We use $I(S(h))$ to approximate $I(h)$. For a constant $a \in \mathbb{R}$, $\lceil a \rceil$ denotes the smallest integer not less than $a$.
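For the model integrand $h(t) = t^{-1/2}$ with its singularity at $t = 0$ (so only the right-hand family of knots matters), quadrature rule II can be sketched in Python as follows; the values $\gamma = 0.5$, $\epsilon = 1$ and $m = 50$ are illustrative assumptions.

```python
import math
import numpy as np

def geometric_gauss(h, m, gamma=0.5, eps=1.0):
    """Rule II sketch: integrate h over [0,1] when h is C-infinity away from
    a singularity at t = 0.  Knots t_0 = 0, t_i = gamma^(m-i); the
    subinterval [0, t_1) is skipped (S(h) = 0 there) and a k_i-point Gauss
    rule with k_i = ceil(i*eps) is used on [t_i, t_{i+1}] -- low order near
    the singularity, increasing order away from it."""
    t = np.concatenate(([0.0], gamma ** (m - np.arange(1, m + 1))))
    total = 0.0
    for i in range(1, m):
        k_i = math.ceil(i * eps)
        x, w = np.polynomial.legendre.leggauss(k_i)
        a, b = t[i], t[i + 1]
        nodes = 0.5 * (b - a) * x + 0.5 * (a + b)
        total += 0.5 * (b - a) * np.dot(w, h(nodes))
    return total

# h(t) = t^{-1/2}: exact integral 2; the error decays like gamma^((1-sigma)m).
approx = geometric_gauss(lambda t: t ** -0.5, m=50)
print(abs(approx - 2.0))
```

The total number of nodes is $\sum_\iota k_\iota = O(m^2)$, while the error decays like $\gamma^{(1-\sigma)m}$: an exponential order of accuracy in the number of subintervals.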

Lemma 8.6 Let $\epsilon > 0$, $\gamma \in (0,1)$ and choose

\[
k_\iota = \lceil \iota\epsilon \rceil, \quad \iota = 1, 2, \dots, m. \tag{8.42}
\]

Then there exists a positive constant $c$ such that for all integers $m$ and for $\iota = 1, 2, \dots, m$,

\[
\left[ (2k_\iota - k)!\, \gamma^{(1-\sigma)\iota + 2k_\iota + 1} \right]^{-1} \le c.
\]

Proof Note that $n! \sim \left( \frac{n}{e} \right)^{n + \frac{1}{2}}$ (up to a bounded factor) as $n \to +\infty$. It therefore suffices to prove that there exists a positive integer $n_0$ such that when $\iota \ge n_0$,

\[
\left[ \left( \frac{2k_\iota - k}{e} \right)^{2k_\iota - k + \frac{1}{2}} \gamma^{(1-\sigma)\iota + 2k_\iota + 1} \right]^{-1} \le 1.
\]

Let $\zeta = \max\left\{ \frac{1-\sigma}{\epsilon},\ 4 \right\}$. Since $\frac{2k_\iota - k}{e} \to +\infty$ as $\iota \to +\infty$, there exists a positive integer $n_0$ such that for all $\iota \ge n_0$,

\[
\frac{2k_\iota - k}{e} \ge \gamma^{-\zeta} \quad\text{and}\quad 2k_\iota \ge 4k - 1.
\]

Thus, for any $\gamma \in (0,1)$,

\[
\left( \frac{2k_\iota - k}{e} \right)^{2k_\iota - k + \frac{1}{2}} \ge \gamma^{-\zeta\left(2k_\iota - k + \frac{1}{2}\right)} \ge \gamma^{-\frac{1-\sigma}{\epsilon} k_\iota - 4\left(k_\iota - k + \frac{1}{2}\right)} \ge \gamma^{-(1-\sigma)\iota - 2k_\iota - 1}.
\]

This completes the proof.

To analyze the convergence of this integration method we let

\[
E_{m,\mathbf{k}}(h) = \sum_{Q_\alpha \in \Pi(h)} \left| \int_{Q_\alpha} [h(t) - S(h)(t)]\,dt \right|.
\]

Lemma 8.7 Suppose that the kernel $K$ satisfies (8.41). Then there exists a positive constant $c$ such that for any $i \in \mathbb{Z}_{n+1}$, $\ell \in \mathbb{Z}_r$, $(i',j') \in U_n$ and $s \in E$,

\[
E_{m,\mathbf{k}}(h_{i'j'\ell i}) \le c \left[ \left( \mu^{-i+1} + \mu^{-i'+1} + \varepsilon^n_{i'i} \right) \mu^{i-1} \right]^{k} \gamma^{(1-\sigma)m}.
\]

Proof Define $\Lambda^r_\iota$, $\Lambda^l_\iota$ as in the proof of Lemma 8.1, with $E^r_{k',\iota}(h)$ and $E^l_{k',\iota}(h)$ replaced by $E^r_\iota(h)$ and $E^l_\iota(h)$, respectively. Now we have that

\[
E_{m,\mathbf{k}}(h) = \sum_{\Lambda^l_\iota \ne \emptyset} E^l_\iota(h) + \sum_{\Lambda^r_\iota \ne \emptyset} E^r_\iota(h).
\]

We first estimate $E^r_\iota(h_{i'j'\ell i})$. According to the definition of $S(h)$ we have that

\[
E^r_0(h_{i'j'\ell i}) \le \int_{t^r_0}^{t^r_1} |h_{i'j'\ell i}(s,t)|\,dt \le c\theta \int_{t^r_0}^{t^r_1} |s-t|^{-\sigma}\,dt \le \frac{c\theta}{(1-\sigma)\gamma^{1-\sigma}}\, \gamma^{(1-\sigma)m}.
\]

For $\iota \ge 1$, by the error estimate of the Gaussian quadrature there exist $\xi_\alpha \in Q_\alpha$ such that

\[
E^r_\iota(h_{i'j'\ell i}) = \sum_{\alpha \in \Lambda^r_\iota} \frac{|D^{2k_\iota}_t h_{i'j'\ell i}(s,\xi_\alpha)|}{(2k_\iota)!} \left| \int_{Q_\alpha} (t - \tau^\alpha_0)^2 \cdots (t - \tau^\alpha_{k_\iota - 1})^2\, dt \right|.
\]

Note that $w_{ij}$ is a piecewise polynomial of order $k$. By using assumption (8.41) we have, for $t \in \mathrm{supp}(w_{ij}) \setminus (\{s\} \cup \pi(h_{ij}))$, that

\[
|D^{2k_\iota}_t h_{i'j'\ell i}(s,t)| = \left| \sum_{\beta \in \mathbb{Z}_k} \binom{2k_\iota}{\beta} D^{2k_\iota - \beta}_t K(s,t)\, w^{(\beta)}_{ij}(t) \right|
\le \theta \sum_{\beta \in \mathbb{Z}_k} \binom{2k_\iota}{\beta} |s-t|^{-(\sigma + 2k_\iota - \beta)} \mu^{\beta(i-1)} \left| w^{(\beta)}_{1l}\left( \phi^{-1}_e(t) \right) \right|.
\]

Moreover, we have that $t_\iota = \gamma^{m-\iota} \le |\xi_\alpha - s| \le d_i + d_{i'} + \varepsilon^n_{i'i}$. Thus we conclude that

\[
E^r_\iota(h_{i'j'\ell i}) \le \theta\, \frac{(2k_\iota)(2k_\iota - 1) \cdots (2k_\iota - k + 1)}{(2k_\iota)!}\, t_\iota^{-(\sigma + 2k_\iota)} (t_{\iota+1} - t_\iota)^{2k_\iota + 1} \sum_{\beta \in \mathbb{Z}_k} \left[ (d_i + d_{i'} + \varepsilon^n_{i'i}) \mu^{i-1} \right]^\beta
\le \frac{k\theta (1-\gamma)^{2k_\iota + 1}}{(2k_\iota - k)!\, \gamma^{(1-\sigma)\iota + 2k_\iota + 1}}\, \gamma^{(1-\sigma)m} \left[ (d_i + d_{i'} + \varepsilon^n_{i'i}) \mu^{i-1} \right]^k.
\]

It follows from Lemma 8.6 that there exists a positive constant $c$ such that

\[
E^r_\iota(h_{i'j'\ell i}) \le c (1-\gamma)^{2\epsilon\iota} \gamma^{(1-\sigma)m} \left[ (d_i + d_{i'} + \varepsilon^n_{i'i}) \mu^{i-1} \right]^k.
\]

Therefore,

\[
\sum_{\Lambda^r_\iota \ne \emptyset} E^r_\iota(h_{i'j'\ell i}) \le c \gamma^{(1-\sigma)m} \left[ (d_i + d_{i'} + \varepsilon^n_{i'i}) \mu^{i-1} \right]^k.
\]

Likewise we obtain the same estimate for $\sum_{\Lambda^l_\iota \ne \emptyset} E^l_\iota(h_{i'j'\ell i})$, which completes the proof of the lemma.


8.3.2 Convergence order and computational complexity

Using Lemma 8.7 and arguments similar to those used in the proofs of Lemma 8.3 and Theorem 8.4, we obtain the following results.

Lemma 8.8 Let $m$ be a positive integer. Then there exists a positive constant $c > 0$ such that for all $i', i \in \mathbb{Z}_{n+1}$ and $n \in \mathbb{N}$,

\[
\|\overline{\mathbf{K}}_{i'i} - \widetilde{\mathbf{K}}_{i'i}\|_\infty \le c \left[ \left( \mu^{-i+1} + \mu^{-i'+1} + \varepsilon^n_{i'i} \right) \mu^{i-1} \right]^{k} \gamma^{(1-\sigma)m}.
\]

Theorem 8.9 Let $u \in W^{k,\infty}(E)$. For $i', i \in \mathbb{Z}_{n+1}$, choose

\[
m_{i'i} \ge \frac{-k \log\mu}{(1-\sigma)\log\gamma}\,(2i + i'). \tag{8.43}
\]

Suppose that the kernel $K$ satisfies (8.41), that the integrals $K_{i'j',ij}$ in the matrix $\widetilde{\mathbf{K}}_n$ are computed by the integration method described above with $m = m_{i'i}$ and with $k_\iota$ determined by (8.42), and that $\widetilde{u}_n$ is solved accordingly. Then there exist a positive constant $c$ and a positive integer $N$ such that for all $n > N$,

\[
\|u - \widetilde{u}_n\|_\infty \le c (s(n))^{-k} (\log s(n))^\tau \|u\|_{k,\infty}, \tag{8.44}
\]

where $\tau = 1$ if $b' > \frac{k}{2k-\sigma'}$ and $\tau = 2$ if $b' = \frac{k}{2k-\sigma'}$.

Proof As in the proof of Theorem 8.4, it suffices to prove that

\[
\left[ \left( \mu^{-i+1} + \mu^{-i'+1} + \varepsilon^n_{i'i} \right) \mu^{i-1} \right]^{k} \gamma^{(1-\sigma)m_{i'i}} \le c (\varepsilon^n_{i'i})^{-(2k-\sigma')} \mu^{-k(i'+i)}.
\]

Since $\mu^{-i'+1} + \mu^{-i+1} \le \varepsilon^n_{i'i}$, we only need to show that

\[
\gamma^{(1-\sigma)m_{i'i}} \le c (\varepsilon^n_{i'i})^{-(3k-\sigma')} \mu^{-(2ki + ki')}.
\]

This holds with the choice of $m_{i'i}$, and thus the conclusion of the theorem follows.

In the next theorem we present an estimate of the number of functional

evaluations used for computing the entries of ˜Kn

Theorem 810 Suppose that miprimei iprime i isin Zn+1 are chosen to be the smallestinteger satisfying condition (843) Then there exists a positive constant c suchthat for all n isin N

Mn le cs(n) (log s(n))3

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637010Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 163137 subject to the Cambridge Core terms of

318 Numerical integrations and error control

Proof As in the proof of Theorem 8.5, we have that

$$\mathcal{M}_n = \sum_{i\in\mathbb{Z}_{n+1}}\sum_{i'\in\mathbb{Z}_{n+1}}\mathcal{M}_{i'i} = \sum_{i\in\mathbb{Z}_{n+1}}\sum_{i'\in\mathbb{Z}_{n+1}} w(i')\,\mathcal{M}_{i'j',i},$$

where

$$\mathcal{M}_{i'j',i} = \sum_{\ell\in\mathbb{Z}_r}\mathcal{M}(h_{i'j',i\ell}).$$

In the present case we obtain that

$$\operatorname{card}\{Q_\alpha : Q_\alpha \in [\gamma^{m_{i'i}-\iota}, \gamma^{m_{i'i}-\iota-1})\} \le \frac{\gamma^{m_{i'i}-\iota-1} - \gamma^{m_{i'i}-\iota}}{\mu^{-i+1}} + 2.$$

Thus there is a positive constant $c$ such that

$$\mathcal{M}_{i'j',i} \le c\sum_{\iota=1}^{l}\left(\frac{\gamma^{m_{i'i}-\iota-1} - \gamma^{m_{i'i}-\iota}}{\mu^{-i+1}} + 2\right)k_\iota.$$

Since $k_\iota < \varepsilon\iota + 1$,

$$\mathcal{M}_{i'j',i} \le c\left(\frac{1}{\gamma} - 1\right)\mu^{i-1}\left(\sum_{\iota=1}^{l}\iota\,\gamma^{m_{i'i}-\iota} + \sum_{\iota=1}^{l}\gamma^{m_{i'i}-\iota}\right) + 2c\sum_{\iota=1}^{l}(\varepsilon\iota + 1).$$

According to the truncation strategy we have that $t_\iota = \gamma^{m_{i'i}-\iota} \le \varepsilon^n_{i'i} + d_{i'} + d_i$, where $i_0 = \min\{i', i\}$. By the choice of $\varepsilon^n_{i'i}$ and $m_{i'i}$ we conclude that

$$\mathcal{M}_{i'j',i} \le c\big[n\mu^{i}\big(\mu^{b'(n-i')-i} + \mu^{-i+1} + \mu^{-i'+1}\big) + n^2\big].$$

It follows from this inequality and $w(i') = k(\mu - 1)\mu^{i'-1}$ that

$$\mathcal{M}_n \le c\sum_{i\in\mathbb{Z}_{n+1}}\sum_{i'\in\mathbb{Z}_{n+1}}\big(n\mu^{b'n}\mu^{(1-b')i'} + \mu^{i} + \mu^{i'} + n^2\mu^{i'}\big).$$

A simple computation yields the estimate of the theorem.

8.4 Numerical experiments

In this section we use numerical experiments to verify our theoretical estimates. We consider equation (8.1) with the kernel

$$K(s,t) = \log|\cos(\pi s) - \cos(\pi t)|$$


and choose

$$f(s) = \sin(\pi s) + \frac{1}{\pi}\big[2 - (1 - \cos(\pi s))\log(1 - \cos(\pi s)) - (1 + \cos(\pi s))\log(1 + \cos(\pi s))\big],$$

so that the exact solution is $u(s) = \sin(\pi s)$ for comparison purposes. In the experiments we apply linear bases to discretize the equation.
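As a quick sanity check on this setup, the sketch below verifies numerically that $u(s) = \sin(\pi s)$ satisfies a second-kind equation $u - \mathcal{K}u = f$ with this kernel and right-hand side. The specific form of equation (8.1) as a second-kind equation on $[0,1]$ is our assumption here, and the composite midpoint rule is only a rough stand-in for the graded quadratures of this chapter; the logarithmic singularity at $t = s$ is integrable, so the plain rule still converges, just slowly.

```python
import numpy as np

# Kernel, exact solution and right-hand side from the experiment above.
def Kker(s, t):
    return np.log(np.abs(np.cos(np.pi * s) - np.cos(np.pi * t)))

def u(s):
    return np.sin(np.pi * s)

def f(s):
    c = np.cos(np.pi * s)
    return np.sin(np.pi * s) + (2 - (1 - c) * np.log(1 - c)
                                  - (1 + c) * np.log(1 + c)) / np.pi

# Composite midpoint rule on [0,1]; no special treatment of the weak
# (logarithmic) singularity, so the residual is small but not machine-zero.
N = 20000
t = (np.arange(N) + 0.5) / N
for s in (0.17, 0.30, 0.62):
    Ku = np.sum(Kker(s, t) * u(t)) / N
    print(f"s = {s:.2f}   residual u - Ku - f = {u(s) - Ku - f(s):+.2e}")
```

The residuals are at roundoff-of-the-quadrature level (about $10^{-4}$ with this many nodes), consistent with $u(s) = \sin(\pi s)$ being the exact solution.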

To verify the computational complexity of the quadrature rules, we report the time for establishing the matrix $\mathbf{K}_n$. In the first experiment we let $k' = 2$; that is, we use piecewise linear polynomials to approximate the integrands. The values of the other related parameters are chosen as follows: $\sigma' = 0.8$, $a = 0.25$, $b' = 0.8$, $\nu = 1.01$ and

$$m_{i'i} \ge 1.4\,(\varepsilon^n_{i'i})^{1/3}\,2^{(i'+2i-1)/2}.$$

We use the notation $T^U_n$ and $T^L_n$ for the time to evaluate the upper and lower triangles of $\mathbf{K}_n$. According to our theoretical estimates there should hold

$$r^U_n = \log_2\big(T^U_{n+1}/T^U_n\big) = 1 + \frac{\sigma}{5} \quad\text{and}\quad r^L_n = \log_2\big(T^L_{n+1}/T^L_n\big) = 1.5.$$

The computed results are listed in Table 8.1. We see that most of the values of $r^L_n$ are around 1.5, and that those of $r^U_n$ tend to be lower than 1.2 as $n$ increases. We also include the errors of the numerical solutions relative to the true solution, as well as the convergence order, which is seen to maintain the optimal convergence rate.

In order to observe the influence of the values of $k$ and $k'$ on the order of time complexity, we now choose $k = 2$, $k' = 4$; that is, we continue to use the linear basis to discretize the integral equation, while piecewise cubic polynomials are

Table 8.1 Numerical results for quadrature rule I: linear quadrature case

 n   s(n)    T^U_n   r^U_n    T^L_n   r^L_n   $\|u-u_n\|_\infty$   Conv. rate
 4     32     0.02    --       0.03    --      4.162345e-3            --
 5     64     0.05    1.32     0.07    1.22    1.302214e-3          1.6764
 6    128     0.14    1.48     0.23    1.72    3.324927e-4          1.9696
 7    256     0.33    1.24     0.69    1.59    6.300048e-5          2.3999
 8    512     0.82    1.31     1.99    1.53    1.481033e-5          2.0888
 9   1024     1.99    1.28     5.75    1.53    3.409346e-6          2.1190
10   2048     4.65    1.22    16.33    1.51    8.696977e-7          1.9709
11   4096    10.56    1.18    47.01    1.52    2.161024e-7          2.0088
12   8192    23.69    1.16   132.11    1.49    5.410341e-8          1.9979
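The "Conv. rate" column is the base-2 logarithm of the ratio of successive errors, $\log_2(\|u-u_{n-1}\|_\infty/\|u-u_n\|_\infty)$; values near 2 confirm the optimal second-order rate for linear bases. A small sketch reproducing the first entries of that column from the error column of Table 8.1:

```python
import numpy as np

# Errors ||u - u_n||_inf for n = 4..8, copied from Table 8.1.
errors = [4.162345e-3, 1.302214e-3, 3.324927e-4, 6.300048e-5, 1.481033e-5]

# Empirical convergence order between consecutive levels (mesh ratio 2).
rates = [np.log2(errors[i - 1] / errors[i]) for i in range(1, len(errors))]
print([f"{r:.4f}" for r in rates])   # matches 1.6764, 1.9696, 2.3999, 2.0888
```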


Table 8.2 Numerical results for quadrature rule I: cubic quadrature case

 n   s(n)    T^U_n   r^U_n    T^L_n   r^L_n   $\|u-u_n\|_\infty$   Conv. rate
 4     32     0.02    --       0.02    --      3.911817e-3            --
 5     64     0.05    1.32     0.06    1.58    1.112090e-3          1.8146
 6    128     0.11    1.14     0.17    1.50    2.887823e-4          1.9452
 7    256     0.28    1.35     0.46    1.44    6.471414e-5          2.1578
 8    512     0.63    1.17     1.21    1.39    1.454796e-5          2.1533
 9   1024     1.44    1.19     3.08    1.35    3.578654e-6          2.0233
10   2048     3.29    1.19     7.65    1.31    8.812394e-7          2.0218
11   4096     7.39    1.17    19.01    1.31    2.195702e-7          2.0049
12   8192    16.62    1.17    46.79    1.30    5.795965e-8          1.9216

Table 8.3 Numerical results for quadrature rule II

 n   s(n)      CT     r_n    2(n/(n-1))^3   $\|u-u_n\|_\infty$   Conv. rate
 3     16     0.07    --         --          1.520796e-2             --
 4     32     0.19    2.71      4.74         3.816222e-3          1.994610
 5     64     0.55    2.89      3.91         9.462863e-4          2.011796
 6    128     1.81    3.29      3.46         2.475222e-4          1.934719
 7    256     4.76    2.63      3.18         5.903598e-5          2.067892
 8    512    12.31    2.59      2.99         1.479396e-5          1.996586
 9   1024    31.57    2.56      2.85         3.686005e-6          2.004878
10   2048    84.45    2.68      2.74         8.944827e-7          2.042933

applied in the quadrature rule. The values of the other parameters remain the same, except that

$$m_{i'i} \ge 2.4\,(\varepsilon^n_{i'i})^{0.64}\,2^{(i'+2i-1)/4}.$$

The corresponding numerical results are listed in Table 8.2.

In the last experiment we implement the quadrature rule described in Section 8.3.1, using the same parameters for truncation. The time for evaluation is listed in Table 8.3, in which "CT" stands for the time for the computation of the matrix $\mathbf{K}_n$, and $r_n$ is defined as the ratio of two successive times, that is,

$$r_n = \frac{\mathrm{CT}_n}{\mathrm{CT}_{n-1}}.$$

We also list in this table the theoretical value $2\big(\frac{n}{n-1}\big)^3$ for comparison, as well as the $L^\infty$-norms of the numerical errors $\|u - u_n\|_\infty$ and the convergence order.


8.5 Bibliographical remarks

The main developments along the direction of multiscale methods for solving Fredholm integral equations can be found in [28, 64, 68, 69, 71, 88-90, 95, 202, 226, 241, 243, 260, 261]. Appropriate bases for multiscale Galerkin, Petrov-Galerkin and collocation methods were constructed in [200, 201] and [65, 69]. To develop error control techniques for the numerical integrations used in generating the coefficient matrix in the one-dimensional case, we propose using graded quadrature methods. When the integrand has only a polynomial order of smoothness except at the singular points, we use a quadrature method suggested in [164], which has a polynomial order of accuracy. When the integrand has an infinite order of smoothness except at the singular points, we use the idea of [242] to develop a quadrature with an exponential order of accuracy. Readers are referred to [72] for more information on error control strategies for numerical integrations in one dimension, and to [75] for those in higher dimensions.


9 Fast solvers for discrete systems

The goal of this chapter is to develop efficient solvers for the discrete linear systems resulting from the discretization of the Fredholm integral equation of the second kind by the multiscale methods discussed in previous chapters. We introduce the multilevel augmentation method (MAM) and the multilevel iteration method (MIM) for solving operator equations based on multilevel decompositions of the approximate subspaces. Reflecting the direct sum decompositions of the subspaces, the coefficient matrix of the linear system has a special structure. Specifically, the matrix corresponding to a finer level of approximate spaces is obtained by augmenting the matrix corresponding to a coarser level with submatrices that correspond to the difference spaces between the spaces of the finer level and the coarser level. The main idea is to split the matrix into a sum of two matrices, one reflecting its lower frequency and the other its higher frequency. We are required to choose the splitting in such a way that the inverse of the lower-frequency matrix either has an explicit form or can easily be computed at a lower computational cost.

In this chapter we introduce the MAM and MIM and provide a complete analysis of their convergence and stability.

9.1 Multilevel augmentation methods

In this section we describe a general setting of the MAM for solving operator equations. This method is based on a standard approximation method at a coarse level and updates the resulting approximate solutions by adding details corresponding to higher levels in a direct sum decomposition. We prove that this method provides the same order of convergence as the original approximation method.


9.1.1 Multilevel augmentation methods for solving operator equations

We begin with a description of the general setup for the operator equations under consideration. Let $X$ and $Y$ be two Banach spaces and $A: X \to Y$ be a bounded linear operator. For a function $f \in Y$, we consider the operator equation

$$Au = f, \tag{9.1}$$

where $u \in X$ is the solution to be determined. We assume that equation (9.1) has a unique solution in $X$. To solve the equation, we choose two sequences of finite-dimensional subspaces $\{X_n : n \in \mathbb{N}_0 := \{0, 1, \dots\}\}$ of $X$ and $\{Y_n : n \in \mathbb{N}_0\}$ of $Y$ such that

$$\overline{\bigcup_{n\in\mathbb{N}_0} X_n} = X, \qquad \overline{\bigcup_{n\in\mathbb{N}_0} Y_n} = Y$$

and

$$\dim X_n = \dim Y_n, \quad n \in \mathbb{N}_0.$$

We suppose that equation (9.1) has an approximate operator equation

$$A_n u_n = f_n, \tag{9.2}$$

where $A_n: X_n \to Y_n$ is an approximation of the operator $A$, $u_n \in X_n$, and $f_n \in Y_n$ is an approximation of $f$. Examples of such equations include projection methods such as Galerkin methods and collocation methods. In particular, for solving integral equations they also include approximate operator equations obtained from quadrature methods and degenerate kernel methods. Wavelet compression schemes using both orthogonal projections (Galerkin methods) and interpolation projections (collocation methods) are also examples of this type.

Our method is based on the additional hypothesis that the subspaces are nested, that is,

$$X_n \subset X_{n+1}, \quad Y_n \subset Y_{n+1}, \quad n \in \mathbb{N}_0, \tag{9.3}$$

so that we can define two subspaces $W_{n+1} \subset X_{n+1}$ and $Q_{n+1} \subset Y_{n+1}$ such that $X_{n+1}$ becomes a direct sum of $X_n$ and $W_{n+1}$, and likewise $Y_{n+1}$ is a direct sum of $Y_n$ and $Q_{n+1}$. Specifically, we assume that two direct sums $\oplus_1$ and $\oplus_2$ are defined so that we have the decompositions

$$X_{n+1} = X_n \oplus_1 W_{n+1} \quad\text{and}\quad Y_{n+1} = Y_n \oplus_2 Q_{n+1}, \quad n \in \mathbb{N}_0. \tag{9.4}$$


In practice the finer-level subspaces $X_{n+1}$ and $Y_{n+1}$ are obtained from the coarser-level subspaces $X_n$ and $Y_n$, respectively, by local or global subdivisions. It follows from (9.4), for a fixed $k \in \mathbb{N}_0$ and any $m \in \mathbb{N}_0$, that

$$X_{k+m} = X_k \oplus_1 W_{k+1} \oplus_1 \cdots \oplus_1 W_{k+m} \tag{9.5}$$

and

$$Y_{k+m} = Y_k \oplus_2 Q_{k+1} \oplus_2 \cdots \oplus_2 Q_{k+m}. \tag{9.6}$$

As in [67], for $g_0 \in X_k$ and $g_i \in W_{k+i}$, $i = 1, 2, \dots, m$, we identify the vector $[g_0, g_1, \dots, g_m]^T$ in $X_k \times W_{k+1} \times \cdots \times W_{k+m}$ with the sum $g_0 + g_1 + \cdots + g_m$ in $X_k \oplus_1 W_{k+1} \oplus_1 \cdots \oplus_1 W_{k+m}$. Similarly, for $g_0 \in Y_k$ and $g_i \in Q_{k+i}$, $i = 1, 2, \dots, m$, we identify the vector $[g_0, g_1, \dots, g_m]^T$ in $Y_k \times Q_{k+1} \times \cdots \times Q_{k+m}$ with the sum $g_0 + g_1 + \cdots + g_m$ in $Y_k \oplus_2 Q_{k+1} \oplus_2 \cdots \oplus_2 Q_{k+m}$. In this notation we describe the multilevel method for solving equation (9.2) with $n = k + m$, which has the form

$$A_{k+m}u_{k+m} = f_{k+m}. \tag{9.7}$$

According to decomposition (9.5), we write the solution $u_{k+m} \in X_{k+m}$ as

$$u_{k+m} = u_{k,0} + \sum_{i=1}^{m} v_{k,i}, \tag{9.8}$$

where $u_{k,0} \in X_k$ and $v_{k,i} \in W_{k+i}$ for $i = 1, 2, \dots, m$. Hence $u_{k+m}$ is identified with $u_k(m) = [u_{k,0}, v_{k,1}, \dots, v_{k,m}]^T$. We use both of these notations interchangeably.

We let $F_{k,k+j}: W_{k+j} \to Y_k$, $G_{k+i,k}: X_k \to Q_{k+i}$ and $H_{k+i,k+j}: W_{k+j} \to Q_{k+i}$, $i, j = 1, 2, \dots, m$, be given, and assume that the operator $A_{k+m}$ is identified with the matrix of operators

$$A_{k,m} = \begin{bmatrix} A_k & F_{k,k+1} & \cdots & F_{k,k+m} \\ G_{k+1,k} & H_{k+1,k+1} & \cdots & H_{k+1,k+m} \\ \vdots & \vdots & & \vdots \\ G_{k+m,k} & H_{k+m,k+1} & \cdots & H_{k+m,k+m} \end{bmatrix}. \tag{9.9}$$

Equation (9.7) is now equivalent to the equation

$$A_{k,m}u_k(m) = f_{k+m}. \tag{9.10}$$

We remark that the nestedness of the subspaces implies that the matrix $A_{k,m}$ contains $A_{k,m-1}$ as a submatrix. In other words, $A_{k,m}$ is obtained by augmenting the matrix of the previous level, $A_{k,m-1}$.


With this setup, one can design various iteration schemes to solve equation (9.10) by splitting the matrix $A_{k,m}$ defined by (9.9) into a sum of two matrices and applying matrix iteration algorithms to $A_{k,m}$.

We split the operator $A_{k,m}$ as the sum of two operators $B_{k,m}, C_{k,m}: X_{k+m} \to Y_{k+m}$, that is, $A_{k,m} = B_{k,m} + C_{k,m}$, $m \in \mathbb{N}_0$. Note that the matrices $B_{k,m}$ and $C_{k,m}$ are obtained by augmenting the matrices $B_{k,m-1}$ and $C_{k,m-1}$, respectively. Hence equation (9.10) becomes

$$B_{k,m}u_k(m) = f_{k+m} - C_{k,m}u_k(m), \quad m \in \mathbb{N}_0. \tag{9.11}$$

Instead of solving (9.11) exactly, we solve it approximately by using the MAM described below.

Algorithm 1 (Operator form of the multilevel augmentation algorithm) Let $k > 0$ be a fixed integer.

Step 1 Solve equation (9.2) with $n = k$ for $u_k \in X_k$ exactly.
Step 2 Set $u_{k,0} = u_k$ and compute the splitting matrices $B_{k,0}$ and $C_{k,0}$.
Step 3 For $m \in \mathbb{N}$, suppose that $u_{k,m-1} \in X_{k+m-1}$ has been obtained, and do the following:

• Augment the matrices $B_{k,m-1}$ and $C_{k,m-1}$ to form $B_{k,m}$ and $C_{k,m}$, respectively.

• Augment $u_{k,m-1}$ by setting $\tilde{u}_{k,m} = \begin{bmatrix} u_{k,m-1} \\ 0 \end{bmatrix}$.

• Solve for $u_{k,m} \in X_{k+m}$ from the equation

$$B_{k,m}u_{k,m} = f_{k+m} - C_{k,m}\tilde{u}_{k,m}. \tag{9.12}$$

For a fixed positive integer $k$, if this algorithm can be carried out, it generates a sequence of approximate solutions $u_{k,m} \in X_{k+m}$, $m \in \mathbb{N}_0$. Note that this algorithm is not an iteration method, since for different $m$ we are dealing with matrices of different orders, and for each $m$ we compute the approximate solution $u_{k,m}$ only once. To ensure that the algorithm can be carried out, we have to guarantee that for $m \in \mathbb{N}_0$ the inverse of $B_{k,m}$ exists and is uniformly bounded. Moreover, the approximate solutions generated by this augmentation algorithm neither necessarily have the same order of convergence as the approximation order of the subspaces $X_n$, nor are they necessarily more efficient to compute than solving equation (9.2) with $n = k + m$ directly, unless certain conditions on the splitting are satisfied. For this algorithm to be executable, accurate and efficient, we demand that the splitting of the operator $A_{k,m}$ fulfill three requirements. Firstly, $B^{-1}_{k,m}$ is uniformly bounded. Secondly, the approximate solution $u_{k,m}$ preserves the convergence order of $u_{k+m}$; that is, $u_{k,m}$ converges


to the exact solution $u$ at the approximation order of the subspaces $X_{k+m}$. Thirdly, the inverse of $B_{k,m}$ is much easier to obtain than the inverse of $A_{k,m}$.

We now address the first issue. To this end we describe our hypotheses.

(I) There exist a positive integer $N_0$ and a positive constant $\alpha$ such that for all $n \ge N_0$,

$$\|A_n^{-1}\| \le \alpha^{-1}. \tag{9.13}$$

(II) The limit

$$\lim_{n\to\infty}\|C_{n,m}\| = 0 \tag{9.14}$$

holds uniformly for all $m \in \mathbb{N}$.

Under these two assumptions we have the following result.

Proposition 9.1 Suppose that hypotheses (I) and (II) hold. Then there exists a positive integer $N > N_0$ such that for all $k \ge N$ and all $m \in \mathbb{N}$, equation (9.12) has a unique solution $u_{k,m} \in X_{k+m}$.

Proof From hypothesis (I), whenever $k \ge N_0$ it holds that for $x \in X_{k+m}$,

$$\|B_{k,m}x\| = \|(A_{k,m} - C_{k,m})x\| \ge (\alpha - \|C_{k,m}\|)\|x\|. \tag{9.15}$$

Moreover, it follows from (9.14) that there exists a positive integer $N > N_0$ such that for $k \ge N$ and $m \in \mathbb{N}_0$, $\|C_{k,m}\| < \alpha/2$. Combining this inequality with (9.15), we find that for $k \ge N$ and $m \in \mathbb{N}_0$ the estimate

$$\|B^{-1}_{k,m}\| \le \frac{1}{\alpha - \|C_{k,m}\|} \le 2\alpha^{-1} \tag{9.16}$$

holds. This ensures that for all $k \ge N$ and $m \in \mathbb{N}_0$, equation (9.12) has a unique solution.
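The bound (9.16) is the standard perturbation estimate: $\|(A - C)x\| \ge (\alpha - \|C\|)\|x\|$ gives $\|(A - C)^{-1}\| \le (\alpha - \|C\|)^{-1}$. A quick numerical illustration with the spectral norm (the sizes and scaling here are hypothetical, chosen only so that hypothesis (I) holds with equality):

```python
import numpy as np

rng = np.random.default_rng(1)
n, alpha = 50, 2.0

# A = alpha * Q for an orthogonal Q gives ||A^{-1}||_2 = 1/alpha exactly,
# i.e. hypothesis (I) with equality.
Q, _ = np.linalg.qr(rng.standard_normal((n, n)))
A = alpha * Q

C = rng.standard_normal((n, n))
C *= 0.4 * alpha / np.linalg.norm(C, 2)     # ||C||_2 = 0.4*alpha < alpha/2

inv_norm = np.linalg.norm(np.linalg.inv(A - C), 2)
bound = 1.0 / (alpha - np.linalg.norm(C, 2))
print(inv_norm, "<=", bound, "<=", 2 / alpha)
assert inv_norm <= bound <= 2 / alpha + 1e-12
```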

We next consider the second issue. For $n \in \mathbb{N}_0$ we let $R_n$ denote the approximation error of the space $X_n$ for $u \in X$, namely $R_n = R_n(u) = \inf\{\|u - v\|_X : v \in X_n\}$. A sequence of non-negative numbers $\{\gamma_n : n \in \mathbb{N}_0\}$ is called a majorization sequence of $\{R_n\}$ if $\gamma_n \ge R_n$, $n \in \mathbb{N}_0$, and there exist a positive integer $N_0$ and a positive constant $\sigma$ such that for $n \ge N_0$, $\frac{\gamma_{n+1}}{\gamma_n} \ge \sigma$.

We also need the following hypothesis.

(III) There exist a positive integer $N_0$ and a positive constant $\rho$ such that for $n \ge N_0$ and for the solution $u_n \in X_n$ of equation (9.2), $\|u - u_n\|_X \le \rho R_n$.

In the next theorem we show that, under the assumptions described above, $u_{k,m}$ approximates $u$ at an order comparable to $R_{k+m}$.


Theorem 9.2 Suppose that hypotheses (I)-(III) hold. Let $u \in X$ be the solution of equation (9.1), $\{\gamma_n : n \in \mathbb{N}_0\}$ be a majorization sequence of $\{R_n\}$ and $\rho$ be the constant appearing in hypothesis (III). Then there exists a positive integer $N$ such that for $k \ge N$ and $m \in \mathbb{N}_0$,

$$\|u - u_{k,m}\| \le (\rho + 1)\gamma_{k+m},$$

where $u_{k,m}$ is the solution of equation (9.12).

Proof We prove this theorem by establishing an estimate on $u_{k,m} - u_{k+m}$. For this purpose we subtract (9.11) from (9.12) to obtain

$$B_{k,m}(u_{k,m} - u_{k+m}) = C_{k,m}(u_{k+m} - \tilde{u}_{k,m}),$$

where $\tilde{u}_{k,m}$ denotes the augmented vector $[u_{k,m-1}^T, 0]^T$. The hypotheses of this theorem ensure that Proposition 9.1 holds. Hence, from the equation above and inequality (9.16), we have that

$$\|u_{k,m} - u_{k+m}\| \le \frac{\|C_{k,m}\|}{\alpha - \|C_{k,m}\|}\|u_{k+m} - \tilde{u}_{k,m}\|. \tag{9.17}$$

We next prove by induction on $m$ that there exists a positive integer $N$ such that for $k \ge N$ and $m \in \mathbb{N}_0$,

$$\|u_{k,m} - u_{k+m}\| \le \gamma_{k+m}. \tag{9.18}$$

When $m = 0$, since $u_{k,0} = u_k$, estimate (9.18) holds trivially. Suppose that the claim holds for $m = r - 1$; we prove that it holds for $m = r$. To accomplish this, using hypothesis (III), the induction hypothesis, the definition of majorization sequences and the fact that $\tilde{u}_{k,r}$ coincides with $u_{k,r-1}$ as an element of $X_{k+r}$, we obtain that

$$\|u_{k+r} - \tilde{u}_{k,r}\| \le \|u_{k+r} - u\| + \|u - u_{k+r-1}\| + \|u_{k+r-1} - u_{k,r-1}\| \le \rho\gamma_{k+r} + (\rho + 1)\gamma_{k+r-1} \le \Big(\rho + (\rho + 1)\frac{1}{\sigma}\Big)\gamma_{k+r}.$$

Substituting this estimate into the right-hand side of (9.17) with $m = r$ yields

$$\|u_{k,r} - u_{k+r}\| \le \frac{\|C_{k,r}\|}{\alpha - \|C_{k,r}\|}\Big(\rho + (\rho + 1)\frac{1}{\sigma}\Big)\gamma_{k+r}.$$

Again employing hypothesis (II), there exists a positive integer $N$ such that for $k \ge N$ and $r \in \mathbb{N}_0$, $\|C_{k,r}\| \le \frac{\alpha}{(\rho+1)(1+\frac{1}{\sigma})}$. We then conclude that for $k \ge N$ and $r \in \mathbb{N}_0$,

$$\frac{\|C_{k,r}\|}{\alpha - \|C_{k,r}\|}\Big(\rho + (\rho + 1)\frac{1}{\sigma}\Big) \le 1.$$

Therefore, for $k \ge N$, estimate (9.18) holds for $m = r$. This advances the induction hypothesis, and thus estimate (9.18) holds for all $m \in \mathbb{N}_0$.


Finally, the estimate of the theorem follows directly from estimate (9.18) and hypothesis (III).

We remark that when the exact solution $u$ of equation (9.1) has certain Sobolev or Besov regularity and specific approximate subspaces $X_n$ are chosen, we may choose the majorization sequence $\gamma_n$ as the upper bound of $R_n$ which gives the order of approximation of the subspaces $X_n$ with respect to the regularity. For example, when $X_n$ is chosen to be the usual finite element space with mesh size $2^{-n}$, and when the solution $u$ of equation (9.1) belongs to the Sobolev space $H^r$, we may choose $\gamma_n = c2^{-rn}\|u\|_{H^r}$. In this case the constant $\sigma$ in the definition of majorization sequences can be taken as $2^{-r}$. Therefore Theorem 9.2 ensures that the approximate solution $u_{k,m}$ generated by the MAM has the same order of approximation as the subspaces $X_n$.

9.1.2 Second-kind equations

In this section we present special results for projection methods for solving operator equations of the second kind. Consider the equation

$$(I - \mathcal{K})u = f, \tag{9.19}$$

where $\mathcal{K}: X \to X$ is a linear operator. We assume that equation (9.19) has a unique solution. In this special case we identify $A = I - \mathcal{K}$, $X = Y$ and $X_n = Y_n$. Suppose that $P_n: X \to X_n$ are linear projections, and we define the projection method for solving equation (9.19) by

$$P_n(I - \mathcal{K})u_n = P_n f, \tag{9.20}$$

where $u_n \in X_n$. To develop a MAM we need another family of projections $P'_n: X \to X_n$. We define the operators $Q_n = P_n - P_{n-1}$ and $Q'_n = P'_n - P'_{n-1}$, and introduce the subspaces $W_n = Q'_n X_n$, $n \in \mathbb{N}$. We allow the projections $P_n$ and $P'_n$ to be different in order to have a wide range of applications. For example, for Galerkin methods $P_n$ and $P'_n$ are both identical to the orthogonal projection, while for the collocation method developed in [68], $P_n$ is the interpolatory projection and $P'_n$ is the orthogonal projection.

For $n \in \mathbb{N}$ we set $\mathcal{K}_n = P_n\mathcal{K}|_{X_n}$ and identify $A_n = P_n(I - \mathcal{K})|_{X_n}$. We further identify the operators in (9.9) with

$$F_{k,k+j} = P_k(I - \mathcal{K})|_{W_{k+j}}, \quad G_{k+i,k} = Q_{k+i}(I - \mathcal{K})|_{X_k}, \quad H_{k+i,k+j} = Q_{k+i}(I - \mathcal{K})|_{W_{k+j}}.$$

We split the operator $\mathcal{K}_{k+m}$ into the sum of two operators, $\mathcal{K}_{k+m} = \mathcal{K}^L_{k,m} + \mathcal{K}^H_{k,m}$, where $\mathcal{K}^L_{k,m} = P_k\mathcal{K}|_{X_{k+m}}$ and $\mathcal{K}^H_{k,m} = (P_{k+m} - P_k)\mathcal{K}|_{X_{k+m}}$. The operators


$\mathcal{K}^L_{k,m}$ and $\mathcal{K}^H_{k,m}$ correspond to the lower and higher frequencies of the operator $\mathcal{K}_{k,m}$, respectively. According to the decomposition of $\mathcal{K}_{k+m}$, we write the operator $A_{k,m} = I_{k+m} - \mathcal{K}_{k,m}$ as a sum of its lower- and higher-frequency parts, $B_{k,m} = I_{k+m} - \mathcal{K}^L_{k,m}$ and $C_{k,m} = -\mathcal{K}^H_{k,m}$. Using this specific splitting in formula (9.12) of Algorithm 1, we have that

$$(I_{k+m} - \mathcal{K}^L_{k,m})u_{k,m} = f_{k+m} + \mathcal{K}^H_{k,m}\tilde{u}_{k,m}, \tag{9.21}$$

where $\tilde{u}_{k,m}$ denotes the zero extension of $u_{k,m-1}$ to $X_{k+m}$.

The next theorem is concerned with the convergence order of the MAM for second-kind equations using projection methods.

Theorem 9.3 Suppose that $\mathcal{K}$ is a compact linear operator not having one as its eigenvalue, and that there exists a positive constant $p$ such that

$$\|P_n\| \le p, \quad \|P'_n\| \le p \quad\text{for all } n \in \mathbb{N}. \tag{9.22}$$

Let $u \in X$ be the solution of equation (9.19) and $\{\gamma_n\}$ be a majorization sequence of $\{R_n\}$. Then there exist a positive integer $N$ and a positive constant $c_0$ such that for all $k \ge N$ and $m \in \mathbb{N}$,

$$\|u - u_{k,m}\| \le c_0\gamma_{k+m},$$

where $u_{k,m}$ is obtained from the augmentation algorithm with formula (9.21).

Proof We prove that the hypotheses of Theorem 9.2 hold for the special choice of the operators $B_{k,m}$ and $C_{k,m}$ for second-kind equations. We first remark that the assumption on the operators $\mathcal{K}$ and $P_n$ ensures that hypotheses (I) and (III) hold with $A_n = I - \mathcal{K}_n$. It remains to verify hypothesis (II). To this end we recall the definition of $C_{n,m}$, which has the form $C_{n,m} = -(P_{n+m} - P_n)\mathcal{K}|_{X_{n+m}}$. Since the subspaces are nested, $P_{n+m}P_n = P_n$, and it follows from (9.22) that

$$\|C_{n,m}\| = \|P_{n+m}(I - P_n)\mathcal{K}|_{X_{n+m}}\| \le p\,\|(I - P_n)\mathcal{K}\|.$$

By (9.22) and the nestedness of the subspaces $X_n$, we conclude that $P_n$ converges pointwise to the identity operator $I$ of the space $X$. Hence, since $\mathcal{K}$ is compact, the last term of the inequality above converges to zero as $n \to \infty$, uniformly for $m \in \mathbb{N}$. Therefore all hypotheses of Theorem 9.2 are satisfied, and thus we complete the proof of this theorem.

We next derive the matrix form of the MAM by choosing appropriate bases for the subspaces $X_n$. For this purpose we let $X^*$ denote the dual space of $X$, and for $\ell \in X^*$ and $x \in X$ we let $\langle \ell, x\rangle$ denote the value of the linear functional $\ell$ at $x$. Suppose that $\{L_n : n \in \mathbb{N}_0\}$ is a sequence of subspaces of $X^*$ which has


the property that $L_n \subset L_{n+1}$ and $\dim L_n = \dim X_n$, $n \in \mathbb{N}_0$. The operator $P_n: X \to X_n$ is defined for $x \in X$ by

$$\langle \ell, x - P_n x\rangle = 0 \quad\text{for all } \ell \in L_n. \tag{9.23}$$

It is known (cf. [77]) that the operator $P_n: X \to X_n$ is uniquely determined and is a projection if and only if $L_n \cap X_n^{\perp} = \{0\}$, $n \in \mathbb{N}_0$, where $X_n^{\perp}$ denotes the annihilator of $X_n$ in $X^*$. Throughout the rest of this section we always assume that this condition is satisfied. We also assume that we have a decomposition of the space $L_{n+1}$, namely

$$L_{n+1} = L_n \oplus V_{n+1}, \quad n \in \mathbb{N}_0. \tag{9.24}$$

Clearly, the spaces $W_i$ and $V_i$ have the same dimension. We specify the direct sum in (9.24) later.

Set $w(0) = \dim X_0$ and $w(i) = \dim W_i$ for $i \in \mathbb{N}$. Suppose that

$$X_0 = \operatorname{span}\{w_{0j} : j \in \mathbb{Z}_{w(0)}\}, \qquad L_0 = \operatorname{span}\{\ell_{0j} : j \in \mathbb{Z}_{w(0)}\},$$
$$W_i = \operatorname{span}\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}, \qquad V_i = \operatorname{span}\{\ell_{ij} : j \in \mathbb{Z}_{w(i)}\}, \quad i \in \mathbb{N}.$$

Using the index set $U_n := \{(i,j) : i \in \mathbb{Z}_{n+1},\ j \in \mathbb{Z}_{w(i)}\}$, we have that for $n \in \mathbb{N}_0$,

$$X_n = \operatorname{span}\{w_{ij} : (i,j) \in U_n\} \quad\text{and}\quad L_n = \operatorname{span}\{\ell_{ij} : (i,j) \in U_n\}.$$

We remark that the index set $U_n$ has cardinality $d_n = \dim X_n$, and we assume that the elements of $U_n$ are ordered lexicographically.

We now present the matrix form of equation (9.20) using these bases. Note that for $v_n \in X_n$ there exist unique constants $v_{ij}$, $(i,j) \in U_n$, such that $v_n = \sum_{(i,j)\in U_n} v_{ij}w_{ij}$. It follows that the solution $u_n$, with $n = k + m$, of equation (9.20) has the vector representation $\mathbf{u}_n = [u_{ij} : (i,j) \in U_n]$ under the basis $\{w_{ij} : (i,j) \in U_n\}$. Using the bases for $X_n$ and $L_n$, we let $E_{i'j',ij} = \langle \ell_{i'j'}, w_{ij}\rangle$ and $K_{i'j',ij} = \langle \ell_{i'j'}, \mathcal{K}w_{ij}\rangle$, and introduce the matrices $\mathbf{E}_n = [E_{i'j',ij} : (i',j'), (i,j) \in U_n]$ and $\mathbf{K}_n = [K_{i'j',ij} : (i',j'), (i,j) \in U_n]$. We also introduce the column vector $\mathbf{f}_n = [\langle \ell_{i'j'}, f\rangle : (i',j') \in U_n]$. In this notation, equation (9.20) is written in matrix form as

$$(\mathbf{E}_{k+m} - \mathbf{K}_{k+m})\,\mathbf{u}_{k+m} = \mathbf{f}_{k+m}. \tag{9.25}$$

We partition the matrices $\mathbf{K}_n$ and $\mathbf{E}_n$ into block matrices according to the decompositions of the spaces $X_n$ and $L_n$. Specifically, for $i', i \in \mathbb{Z}_{n+1}$ we introduce the blocks $\mathbf{K}_{i'i} = [K_{i'j',ij} : j' \in \mathbb{Z}_{w(i')},\ j \in \mathbb{Z}_{w(i)}]$ and set $\mathbf{K}_n = [\mathbf{K}_{i'i} : i', i \in \mathbb{Z}_{n+1}]$. Moreover, for a fixed $k \in \mathbb{N}$ we define the blocks $\mathbf{K}^k_{00} = \mathbf{K}_k$ and, for $l', l \in \mathbb{N}$, $\mathbf{K}^k_{0l} = [\mathbf{K}_{i'i} : i' \in \mathbb{Z}_{k+1},\ i = k + l]$, $\mathbf{K}^k_{l0} = [\mathbf{K}_{i'i} : i' = k + l,\ i \in \mathbb{Z}_{k+1}]$ and $\mathbf{K}^k_{l'l} = \mathbf{K}_{k+l',k+l}$. Using these block notations, for $n = k + m$ we write $\mathbf{K}_{k+m} = [\mathbf{K}^k_{i'i} : i', i \in \mathbb{Z}_{m+1}]$. Likewise, we partition the matrix $\mathbf{E}_n$ in exactly the same way.

The decomposition of the operator $\mathcal{K}_{k+m}$ suggests the matrix decomposition

$\mathbf{K}_{k+m} = \mathbf{K}^L_{k,m} + \mathbf{K}^H_{k,m}$, where

$$\mathbf{K}^L_{k,m} = \begin{bmatrix} \mathbf{K}^k_{00} & \mathbf{K}^k_{01} & \cdots & \mathbf{K}^k_{0m} \\ 0 & 0 & \cdots & 0 \\ \vdots & \vdots & & \vdots \\ 0 & 0 & \cdots & 0 \end{bmatrix} \quad\text{and}\quad \mathbf{K}^H_{k,m} = \begin{bmatrix} 0 & 0 & \cdots & 0 \\ \mathbf{K}^k_{10} & \mathbf{K}^k_{11} & \cdots & \mathbf{K}^k_{1m} \\ \vdots & \vdots & & \vdots \\ \mathbf{K}^k_{m0} & \mathbf{K}^k_{m1} & \cdots & \mathbf{K}^k_{mm} \end{bmatrix}.$$

Note that the matrices $\mathbf{K}^L_{k,m}$ and $\mathbf{K}^H_{k,m}$ correspond to the lower and higher frequencies of the matrix $\mathbf{K}_{k,m}$. Moreover, we set $\mathbf{B}_{k,m} = \mathbf{E}_{k+m} - \mathbf{K}^L_{k,m}$ and $\mathbf{C}_{k,m} = -\mathbf{K}^H_{k,m}$. Next we describe the matrix form of the MAM for solving equation (9.25) using these two matrices.

Algorithm 2 (Matrix form of the multilevel augmentation algorithm) Let $k > 0$ be a fixed integer.

Step 1 Solve $\mathbf{u}_k \in \mathbb{R}^{d_k}$ from the equation $(\mathbf{E}_k - \mathbf{K}_k)\mathbf{u}_k = \mathbf{f}_k$.
Step 2 Set $\mathbf{u}_{k,0} = \mathbf{u}_k$ and compute the splitting matrices $\mathbf{K}^L_{k,0}$ and $\mathbf{K}^H_{k,0}$.
Step 3 For $m \in \mathbb{N}$, suppose that $\mathbf{u}_{k,m-1} \in \mathbb{R}^{d_{k+m-1}}$ has been obtained, and do the following:

• Augment the matrices $\mathbf{K}^L_{k,m-1}$ and $\mathbf{K}^H_{k,m-1}$ to form $\mathbf{K}^L_{k,m}$ and $\mathbf{K}^H_{k,m}$, respectively.

• Augment $\mathbf{u}_{k,m-1}$ by setting $\tilde{\mathbf{u}}_{k,m} = \begin{bmatrix} \mathbf{u}_{k,m-1} \\ 0 \end{bmatrix}$.

• Solve $\mathbf{u}_{k,m} \in \mathbb{R}^{d_{k+m}}$ from the algebraic equations

$$(\mathbf{E}_{k+m} - \mathbf{K}^L_{k,m})\mathbf{u}_{k,m} = \mathbf{f}_{k+m} + \mathbf{K}^H_{k,m}\tilde{\mathbf{u}}_{k,m}. \tag{9.26}$$

It is important to know under what condition the matrix form (9.26) is equivalent to the operator form (9.21). This issue is addressed in the next theorem. To prepare a proof of this theorem, we consider an expression of the


identity operator $I$ in the subspace $X_{k+m}$. Note that for any $x \in X_{k+j}$, $j \in \mathbb{Z}_m$, $Q_{k+1+j}x = 0$. This is equivalent to the following equations:

$$Q_{k+1+j}I|_{X_k} = 0 \quad\text{and}\quad Q_{k+1+j}I|_{W_{k+1+i}} = 0, \quad i \in \mathbb{Z}_j. \tag{9.27}$$

Using this fact, we express the identity operator $I$ in the subspace $X_{k+m}$ as

$$I_{k+m} = P_{k+m}I|_{X_{k+m}} = \begin{bmatrix} P_k I|_{X_k} & P_k I|_{W_{k+1}} & \cdots & P_k I|_{W_{k+m}} \\ 0 & Q_{k+1}I|_{W_{k+1}} & \cdots & Q_{k+1}I|_{W_{k+m}} \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & Q_{k+m}I|_{W_{k+m}} \end{bmatrix}.$$

Taking this into consideration, equation (9.21) becomes

$$\begin{bmatrix} P_k(I-\mathcal{K})|_{X_k} & P_k(I-\mathcal{K})|_{W_{k+1}} & \cdots & P_k(I-\mathcal{K})|_{W_{k+m}} \\ 0 & Q_{k+1}I|_{W_{k+1}} & \cdots & Q_{k+1}I|_{W_{k+m}} \\ \vdots & & \ddots & \vdots \\ 0 & \cdots & 0 & Q_{k+m}I|_{W_{k+m}} \end{bmatrix} u_{k,m} = \begin{bmatrix} P_k f \\ Q_{k+1} f \\ \vdots \\ Q_{k+m} f \end{bmatrix} + \begin{bmatrix} 0 & 0 & \cdots & 0 \\ Q_{k+1}\mathcal{K}|_{X_k} & Q_{k+1}\mathcal{K}|_{W_{k+1}} & \cdots & Q_{k+1}\mathcal{K}|_{W_{k+m}} \\ \vdots & \vdots & & \vdots \\ Q_{k+m}\mathcal{K}|_{X_k} & Q_{k+m}\mathcal{K}|_{W_{k+1}} & \cdots & Q_{k+m}\mathcal{K}|_{W_{k+m}} \end{bmatrix}\begin{bmatrix} u_{k,m-1} \\ 0 \\ \vdots \\ 0 \end{bmatrix}. \tag{9.28}$$

To state the next theorem we let $\mathbb{N}_k = \{k, k+1, \dots\}$ and introduce the following notion. For finite-dimensional subspaces $A \subset X^*$ and $B \subset X$, we say $A \perp B$ if for any $\ell \in A$ and $x \in B$ it holds that $\langle \ell, x\rangle = 0$.

Theorem 9.4 The following statements are equivalent:

(i) The matrix form (9.26) and the operator form (9.21) are equivalent for any $f \in X$ and for any compact operator $\mathcal{K}: X \to X$.
(ii) For any $l \in \mathbb{N}_k$, $V_{l+1} \perp X_l$.
(iii) For any $l \in \mathbb{N}_k$ and any $j \in \mathbb{Z}_{m+1}\setminus\{0\}$, $V_{l+j} \perp X_l$.
(iv) For $i', i \in \mathbb{Z}_{m+1}$ with $i' > i$, $\mathbf{E}^k_{i'i} = 0$.
(v) For $i', i \in \mathbb{Z}_{m+1}$ with $i' > i$, $\mathbf{B}^k_{i'i} = 0$.


Proof We first prove the equivalence of statements (i) and (ii). It is clear that for any $x \in X$ and any $\ell \in V_n$, $\langle \ell, P_{n-1}x\rangle = 0$ if and only if $V_n \perp X_{n-1}$. Moreover, we observe from the definitions of $P_n$ and $Q_n$ that for $n \in \mathbb{N}$, $x \in X$ and $\ell \in V_n$,

$$\langle \ell, Q_n x\rangle = \langle \ell, P_n x - P_{n-1}x\rangle = \langle \ell, x\rangle - \langle \ell, P_{n-1}x\rangle.$$

Hence, for each $n \in \mathbb{N}$, the equation $\langle \ell, Q_n x\rangle = \langle \ell, x\rangle$ holds for all $x \in X$ and $\ell \in V_n$ if and only if $V_n \perp X_{n-1}$. Therefore statement (ii) is equivalent to saying that for any $j \in \mathbb{Z}_m$,

$$\langle \ell, Q_{k+j+1}x\rangle = \langle \ell, x\rangle \quad\text{for all } x \in X,\ \ell \in V_{k+j+1}. \tag{9.29}$$

Noting that for $x \in X$ and $n \in \mathbb{N}$, $Q_n x \in Q_n X_n \subset X_n$, we conclude from (9.23) and (9.27) that equation (9.28) is equivalent to

$$\langle \ell, P_k(I - \mathcal{K})u_{k,m}\rangle = \langle \ell, P_k f\rangle \quad\text{for all } \ell \in L_k \tag{9.30}$$

and

$$\langle \ell, Q_{k+j+1}u_{k,m}\rangle = \langle \ell, Q_{k+j+1}f + Q_{k+j+1}\mathcal{K}u_{k,m-1}\rangle \quad\text{for all } \ell \in L_{k+j+1},\ j \in \mathbb{Z}_m. \tag{9.31}$$

Using (9.23), equation (9.30) is written as

$$\langle \ell, (I - \mathcal{K})u_{k,m}\rangle = \langle \ell, f\rangle \quad\text{for all } \ell \in L_k. \tag{9.32}$$

Again from (9.23) we have that for $x \in X$ and $\ell \in L_{k+j}$, $j \in \mathbb{Z}_m$,

$$\langle \ell, Q_{k+j+1}x\rangle = \langle \ell, P_{k+j+1}x - P_{k+j}x\rangle = 0.$$

In particular, for all $\ell \in L_{k+j}$, $j \in \mathbb{Z}_m$, both sides of equation (9.31) are equal to zero. Noting that

$$L_{k+j+1} = L_{k+j} \oplus V_{k+j+1},$$

equation (9.31) is equivalent to

$$\langle \ell, Q_{k+j+1}u_{k,m}\rangle = \langle \ell, Q_{k+j+1}f + Q_{k+j+1}\mathcal{K}u_{k,m-1}\rangle \quad\text{for all } \ell \in V_{k+j+1},\ j \in \mathbb{Z}_m.$$

Now suppose that statement (ii) holds. Using equation (9.29), the equation above is equivalent to

$$\langle \ell, u_{k,m}\rangle = \langle \ell, f + \mathcal{K}u_{k,m-1}\rangle \quad\text{for all } \ell \in V_{k+j+1},\ j \in \mathbb{Z}_m. \tag{9.33}$$

In terms of the bases of the spaces $X_k$, $L_k$, $W_{k+j+1}$ and $V_{k+j+1}$, $j \in \mathbb{Z}_m$, equations (9.32) and (9.33) are equivalent to the matrix equation (9.26). Conversely, if (i) holds, we can prove that equation (9.29) is satisfied and thus (ii) holds.

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637011Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 163144 subject to the Cambridge Core terms of

Fast solvers for discrete systems

The proof that (iii) implies (ii) is trivial. Statement (ii) and the nestedness assumption on $\{\mathbb{X}_n\}$ ensure the validity of (iii). Statement (iv) is the discrete version of (iii), and hence they are equivalent. Finally, the equivalence of (iv) and (v) follows from the definition of the matrix $\mathsf{B}_{k,m}$. $\square$

Note that condition (ii) in Theorem 9.4 specifies the definition of the direct sum (9.24). In other words, the space $\mathbb{V}_{n+1}$ is uniquely determined by condition (ii). From now on we always assume that condition (ii) is satisfied, to guarantee the equivalence of (9.21) and (9.26). Another way to write the equivalence conditions in Theorem 9.4 is

$$\bigl\langle \ell_{i'j'}, w_{ij}\bigr\rangle = 0, \quad i, i' \in \mathbb{N}_k,\ i < i',\ j \in \mathbb{Z}_{w(i)},\ j' \in \mathbb{Z}_{w(i')}. \tag{9.34}$$

When condition (9.34) is satisfied we call the bases $\ell_{ij}$ and $w_{ij}$ semi-bi-orthogonal. Under this condition, when the solution $\mathbf{u}_{k,m}$ of equation (9.26) is computed, we conclude from Theorem 9.4 that the function defined by $u_{k,m} = \mathbf{u}_{k,m}^T\mathbf{w}_{k+m}$ is the solution of equation (9.21), where $\mathbf{w}_n = [w_{ij} : (i,j) \in U_n]$. We remark that condition (9.34) is satisfied for the multiscale Galerkin methods and multiscale collocation methods developed in Chapters 5 and 7, respectively. In fact, in the case of the Galerkin method using the orthogonal piecewise polynomial multiscale bases constructed in [200], the matrix $\mathsf{E}_n$ is the identity, and in the case of the collocation method using the interpolating piecewise polynomial multiscale bases and multiscale functionals constructed in [69], the matrix $\mathsf{E}_n$ is upper triangular with diagonal entries equal to one.

We now turn to a study of the computational complexity of Algorithm 2. Specifically, we estimate the number of multiplications used in the method. For this purpose we rewrite equation (9.26) in block form. Letting $n = k+m$, we partition the matrix $\mathsf{E}_{k+m}$ in the same way as we have done for the matrix $\mathsf{K}_{k+m}$, to obtain blocks $\mathsf{E}^k_{ii'}$, $i, i' \in \mathbb{Z}_{m+1}$. We also partition the vectors $\mathbf{u}_{k,m}$ and $\mathbf{f}_{k+m}$ accordingly as $\mathbf{u}_{k,m} = [\mathbf{u}^m_i : i \in \mathbb{Z}_{m+1}]$ and $\mathbf{f}_{k+m} = [\mathbf{f}^k_i : i \in \mathbb{Z}_{m+1}]$. Here and in what follows we require that the appropriate bases are chosen so that $\mathsf{E}^k_{i'i} = \mathsf{0}$ for $0 \le i < i' \le m$ and $\mathsf{E}^k_{ii} = \mathsf{I}$. With this assumption we express the matrix $\mathsf{B}_{k,m}$ as

$$\mathsf{B}_{k,m} = \begin{bmatrix}
\mathsf{I}-\mathsf{K}^k & \mathsf{E}^k_{01}-\mathsf{K}^k_{01} & \mathsf{E}^k_{02}-\mathsf{K}^k_{02} & \cdots & \mathsf{E}^k_{0,m-1}-\mathsf{K}^k_{0,m-1} & \mathsf{E}^k_{0m}-\mathsf{K}^k_{0m} \\
\mathsf{0} & \mathsf{I} & \mathsf{E}^k_{12} & \cdots & \mathsf{E}^k_{1,m-1} & \mathsf{E}^k_{1m} \\
\mathsf{0} & \mathsf{0} & \mathsf{I} & \cdots & \mathsf{E}^k_{2,m-1} & \mathsf{E}^k_{2m} \\
\vdots & \vdots & \vdots & \ddots & \vdots & \vdots \\
\mathsf{0} & \mathsf{0} & \mathsf{0} & \cdots & \mathsf{I} & \mathsf{E}^k_{m-1,m} \\
\mathsf{0} & \mathsf{0} & \mathsf{0} & \cdots & \mathsf{0} & \mathsf{I}
\end{bmatrix}.$$


It is clear from this matrix representation that inverting the matrices $\mathsf{B}_{k,m}$, $m \in \mathbb{N}_0$, is basically equivalent to inverting $\mathsf{I} - \mathsf{K}^k$. The strength of Algorithm 2 is that it only requires inverting the matrix $\mathsf{I} - \mathsf{K}^k$. Using this block form of the matrix $\mathsf{B}_{k,m}$, equation (9.26) becomes

$$\mathbf{u}^m_i = \mathbf{f}^k_i + \sum_{j=0}^{m-1}\mathsf{K}^k_{ij}\mathbf{u}^{m-1}_j - \sum_{j=i+1}^{m}\mathsf{E}^k_{ij}\mathbf{u}^m_j, \quad i = m, m-1, \ldots, 1, \tag{9.35}$$

and

$$\tilde{\mathbf{f}}^k_0 = \mathbf{f}^k_0 - \sum_{j=1}^{m}\bigl(\mathsf{E}^k_{0j} - \mathsf{K}^k_{0j}\bigr)\mathbf{u}^m_j \quad\text{and}\quad \mathbf{u}^m_0 = \bigl(\mathsf{I} - \mathsf{K}^k\bigr)^{-1}\tilde{\mathbf{f}}^k_0. \tag{9.36}$$

For a matrix $\mathsf{A}$, we denote by $\mathcal{N}(\mathsf{A})$ the number of nonzero entries of $\mathsf{A}$. Note that we need $\mathcal{N}(\mathsf{K}^H_{k,m}) + \mathcal{N}(\mathsf{E}_{k+m})$ multiplications to obtain $\mathbf{u}^m_i$, $i = 1, 2, \ldots, m$, from equation (9.35). In addition, the computation of $\tilde{\mathbf{f}}^k_0$ requires $\mathcal{N}(\mathsf{K}^L_{k,m})$ multiplications. We assume that computing $\mathbf{u}^m_0$ from the second equation of (9.36) needs $M(k)$ multiplications, a constant independent of $m$. Hence the number of multiplications for computing $\mathbf{u}_{k,m}$ from $\mathbf{u}_{k,m-1}$ is

$$N_{k,m} = \mathcal{N}(\mathsf{K}_{k+m}) + \mathcal{N}(\mathsf{E}_{k+m}) + M(k). \tag{9.37}$$

Recall that to compute $\mathbf{u}_{k,m}$ we first compute $\mathbf{u}_k$ and then use the algorithm of (9.35) and (9.36) to compute $\mathbf{u}_{k,i}$, $i = 1, 2, \ldots, m$, successively. By formula (9.37), the total number of multiplications required to obtain $\mathbf{u}_{k,m}$ is given by

$$M(k) + \sum_{i=1}^{m} N_{k,i} = (m+1)M(k) + \sum_{i=1}^{m}\bigl[\mathcal{N}(\mathsf{K}_{k+i}) + \mathcal{N}(\mathsf{E}_{k+i})\bigr].$$

We now summarize the discussion above in a proposition.

Proposition 9.5 The total number of multiplications required for computing $\mathbf{u}_{k,m}$ from $\mathbf{u}_k$ is given by

$$(m+1)M(k) + \sum_{i=1}^{m}\bigl[\mathcal{N}(\mathsf{K}_{k+i}) + \mathcal{N}(\mathsf{E}_{k+i})\bigr].$$

To close this section we analyze the stability of Algorithm 2. It can be shown that if the condition number $\operatorname{cond}(\mathsf{B}_{k,m})$ of the matrix $\mathsf{B}_{k,m}$ is small, then small perturbations of the matrices $\mathsf{B}_{k,m}$ and $\mathsf{C}_{k,m}$ and of the right-hand-side vector only cause a small perturbation in the solution $\mathbf{u}_{k,m}$. For this reason we study the condition number of the matrix $\mathsf{B}_{k,m}$. Our theorem will confirm that the condition numbers $\operatorname{cond}(\mathsf{B}_{k,m})$ and $\operatorname{cond}(\mathsf{A}_{k+m})$ have the same order. In other words, the augmentation method will not ruin the well-conditioning of the original multilevel method.

We first establish a result showing that the stability of $\mathsf{B}_{k,m}$ is inherited from that of $\mathsf{A}_{k+m}$.

Lemma 9.6 Suppose that the family of operators $\{\mathcal{A}_n : n \in \mathbb{N}_0\}$ has the property that there exist positive constants $c_1$ and $c_2$ and a positive integer $N_0$ such that for $n \ge N_0$, $\|\mathcal{A}_n\| \le c_1$ and $\|\mathcal{A}_n v\| \ge c_2\|v\|$ for all $v \in \mathbb{X}_n$. Moreover, suppose that for any $k, m \in \mathbb{N}_0$, $\mathcal{A}_{k+m} = \mathcal{B}_{k,m} + \mathcal{C}_{k,m}$, where $\mathcal{C}_{k,m}$ satisfies hypothesis (II). Then there exist positive constants $c'_1$ and $c'_2$ and a positive integer $N_1$ such that for $k > N_1$ and $m \in \mathbb{N}_0$, $\|\mathcal{B}_{k,m}\| \le c'_1$ and $\|\mathcal{B}_{k,m}v\| \ge c'_2\|v\|$ for all $v \in \mathbb{X}_{k+m}$.

Proof By the triangle inequality we have for any $k, m \in \mathbb{N}_0$ that

$$\|\mathcal{A}_{k+m}\| - \|\mathcal{C}_{k,m}\| \le \|\mathcal{B}_{k,m}\| \le \|\mathcal{A}_{k+m}\| + \|\mathcal{C}_{k,m}\|.$$

The hypotheses of this lemma ensure that there exists a positive integer $N'$ such that for $k > N'$ and $m \in \mathbb{N}_0$, $\|\mathcal{C}_{k,m}\| \le c_2/2$. We let $N_1 = \max\{N_0, N'\}$ and observe that for $k > N_1$ and $m \in \mathbb{N}_0$, $\|\mathcal{B}_{k,m}\| \le c_1 + c_2/2$ and $\|\mathcal{B}_{k,m}v\| \ge \|\mathcal{A}_{k+m}v\| - \|\mathcal{C}_{k,m}v\| \ge \frac{c_2}{2}\|v\|$ for all $v \in \mathbb{X}_{k+m}$. Choosing $c'_1 = c_1 + c_2/2$ and $c'_2 = c_2/2$ completes the proof of this lemma. $\square$

We now return to the discussion of the condition number of the matrix $\mathsf{B}_{k,m}$. To do this, we need auxiliary bases for $\mathbb{X}_0$ and $\mathbb{W}_i$, $i \in \mathbb{N}$, which are bi-orthogonal to $\ell_{ij}$, $j \in \mathbb{Z}_{w(i)}$, $i \in \mathbb{N}_0$; that is, $\mathbb{X}_0 = \operatorname{span}\{\zeta_{0j} : j \in \mathbb{Z}_{w(0)}\}$ and $\mathbb{W}_i = \operatorname{span}\{\zeta_{ij} : j \in \mathbb{Z}_{w(i)}\}$, with the bi-orthogonality property $\langle \ell_{i'j'}, \zeta_{ij}\rangle = \delta_{i'i}\delta_{j'j}$ for $i, i' \in \mathbb{N}_0$, $j \in \mathbb{Z}_{w(i)}$, $j' \in \mathbb{Z}_{w(i')}$. For any $v \in \mathbb{X}_n$ we have two representations of $v$, given by $v = \sum_{(i,j)\in U_n} v_{ij}w_{ij}$ and $v = \sum_{(i,j)\in U_n} v'_{ij}\zeta_{ij}$. We let $\mathbf{v}, \mathbf{v}' \in \mathbb{R}^{d_n}$ be the vectors of the coefficients in the two representations of $v$, respectively; that is, $\mathbf{v} = [v_{ij} : (i,j) \in U_n]$ and $\mathbf{v}' = [v'_{ij} : (i,j) \in U_n]$.

Theorem 9.7 Let $n \in \mathbb{N}_0$ and suppose that there exist functions $\mu_i(n)$, $\nu_i(n)$, $i = 1, 2$, such that for any $v \in \mathbb{X}_n$,

$$\mu_1(n)\|\mathbf{v}\| \le \|v\| \le \mu_2(n)\|\mathbf{v}\|, \qquad \nu_1(n)\|\mathbf{v}'\| \le \|v\| \le \nu_2(n)\|\mathbf{v}'\|. \tag{9.38}$$

Suppose that the hypothesis of Lemma 9.6 is satisfied. Then there exists a positive integer $N$ such that for any $k > N$ and any $m \in \mathbb{N}_0$,

$$\operatorname{cond}(\mathsf{A}_{k+m}) \le \frac{c_1\,\mu_2(k+m)\,\nu_2(k+m)}{c_2\,\mu_1(k+m)\,\nu_1(k+m)} \quad\text{and}\quad \operatorname{cond}(\mathsf{B}_{k,m}) \le \frac{c'_1\,\mu_2(k+m)\,\nu_2(k+m)}{c'_2\,\mu_1(k+m)\,\nu_1(k+m)}.$$


Proof We prove the bounds for both matrices at the same time. To this end, for $n = k+m$, we let $\mathsf{H}_{k,m}$ denote either $\mathsf{A}_{k+m}$ or $\mathsf{B}_{k,m}$, and $\mathcal{H}_{k,m}$ the corresponding operator. For any $\mathbf{v} = [v_{ij} : (i,j) \in U_{k+m}] \in \mathbb{R}^{d_{k+m}}$ we define a vector $\mathbf{g} = [g_{ij} : (i,j) \in U_{k+m}] \in \mathbb{R}^{d_{k+m}}$ by letting $\mathbf{g} = \mathsf{H}_{k,m}\mathbf{v}$. Introducing $v = \sum_{(i,j)\in U_{k+m}} v_{ij}w_{ij}$ and $g = \sum_{(i,j)\in U_{k+m}} g_{ij}\zeta_{ij}$, we have the corresponding operator equation $\mathcal{H}_{k,m}v = g$. Since the hypotheses of Lemma 9.6 are satisfied, by Lemma 9.6 there exist positive constants $c'_1$ and $c'_2$ and a positive integer $N$ such that for $k > N$ and $m \in \mathbb{N}_0$, $\|\mathcal{B}_{k,m}\| \le c'_1$ and $\|\mathcal{B}_{k,m}v\| \ge c'_2\|v\|$ for all $v \in \mathbb{X}_{k+m}$. Therefore, in either case there exist positive constants $c_1$ and $c_2$ and a positive integer $N$ such that for $k > N$ and $m \in \mathbb{N}_0$, $\|\mathcal{H}_{k,m}\| \le c_1$ and $\|\mathcal{H}_{k,m}v\| \ge c_2\|v\|$ for all $v \in \mathbb{X}_{k+m}$. It follows from (9.38) that for any $k > N$ and $m \in \mathbb{N}_0$,

$$\|\mathsf{H}_{k,m}\mathbf{v}\| = \|\mathbf{g}\| \le \frac{\|g\|}{\nu_1(k+m)} = \frac{\|\mathcal{H}_{k,m}v\|}{\nu_1(k+m)} \le \frac{c_1\|v\|}{\nu_1(k+m)} \le \frac{c_1\,\mu_2(k+m)}{\nu_1(k+m)}\|\mathbf{v}\|,$$

which yields $\|\mathsf{H}_{k,m}\| \le \frac{c_1\mu_2(k+m)}{\nu_1(k+m)}$. Likewise, by (9.38) we have that for any $k > N$ and $m \in \mathbb{N}_0$,

$$\mu_1(k+m)\|\mathbf{v}\| \le \|v\| \le \frac{\|\mathcal{H}_{k,m}v\|}{c_2} = \frac{\|g\|}{c_2} \le \frac{\nu_2(k+m)}{c_2}\|\mathbf{g}\| = \frac{\nu_2(k+m)}{c_2}\|\mathsf{H}_{k,m}\mathbf{v}\|,$$

which ensures that $\|\mathsf{H}^{-1}_{k,m}\| \le \frac{\nu_2(k+m)}{c_2\mu_1(k+m)}$. Combining the estimates for $\|\mathsf{H}_{k,m}\|$ and $\|\mathsf{H}^{-1}_{k,m}\|$, we confirm the bounds of the condition numbers of $\mathsf{A}_{k+m}$ and $\mathsf{B}_{k,m}$. $\square$

We next apply this theorem to two specific cases to obtain two useful special results.

Corollary 9.8 For the multiscale Galerkin methods developed in Chapter 5 and the multiscale Petrov–Galerkin methods developed in Chapter 6, $\operatorname{cond}(\mathsf{A}_{k+m}) = O(1)$ and $\operatorname{cond}(\mathsf{B}_{k,m}) = O(1)$.

Proof In these multiscale methods we use orthogonal multiscale bases. Thus the quantities $\mu_i(n)$ and $\nu_i(n)$, $i = 1, 2$, appearing in (9.38) are constants independent of $n$. By Theorem 9.7, in these cases the condition numbers $\operatorname{cond}(\mathsf{A}_{k+m})$ and $\operatorname{cond}(\mathsf{B}_{k,m})$ are constants independent of $n$. $\square$

Corollary 9.9 For the multiscale collocation methods developed in Chapter 7,

$$\operatorname{cond}(\mathsf{A}_{k+m}) = O(\log^2 d_{k+m}) \quad\text{and}\quad \operatorname{cond}(\mathsf{B}_{k,m}) = O(\log^2 d_{k+m}),$$

where $d_{k+m}$ denotes the order of the matrices $\mathsf{A}_{k+m}$ and $\mathsf{B}_{k,m}$.

Proof In the multiscale collocation methods developed in Chapter 7 we have that $\mu_1(k+m) = O(1)$, $\nu_1(k+m) = O(1)$, $\mu_2(k+m) = O(\log d_{k+m})$ and $\nu_2(k+m) = O(\log d_{k+m})$. Therefore the result of this corollary follows from Theorem 9.7. $\square$

9.1.3 Compression schemes

This section is devoted to an application of the MAM for solving linear systems resulting from compression schemes derived from multiscale methods.

We assume that a compression strategy has been applied to compress the full matrix $\mathsf{K}_n$ to obtain a sparse matrix $\tilde{\mathsf{K}}_n$, where the number of nonzero entries of $\tilde{\mathsf{K}}_n$ is of order $d_n\log^\alpha d_n$ with $\alpha = 0$, $1$ or $2$; that is,

$$\mathcal{N}(\tilde{\mathsf{K}}_n) = O(d_n\log^\alpha d_n). \tag{9.39}$$

Methods of this type were studied in [4, 28, 64, 68, 69, 88, 94, 95, 202, 226, 241, 260, 261]. In particular, when the orthogonal piecewise polynomial multiscale bases and the interpolating piecewise polynomial multiscale bases constructed, respectively, in [200] and [65] are used to develop the multiscale Galerkin method (see Chapter 5), the multiscale Petrov–Galerkin methods (see Chapter 6) and the multiscale collocation method (see Chapter 7), we have that $\mathcal{N}(\tilde{\mathsf{K}}_n) = O(n^\alpha\mu^n)$, where the corresponding multiscale bases are constructed with a $\mu$-adic subdivision of the domain, and the kernel $K(s,t)$, $s, t \in \Omega \subseteq \mathbb{R}^d$, of the integral operator $\mathcal{K}$ satisfies the conditions described in these papers. The compression scheme for these methods has the form

$$(\mathsf{E}_n - \tilde{\mathsf{K}}_n)\tilde{\mathbf{u}}_n = \mathbf{f}_n. \tag{9.40}$$

Equation (9.40) has an equivalent operator equation. Let $\tilde{\mathcal{K}}_n : \mathbb{X}_n \to \mathbb{X}_n$ be the linear operator which, relative to the basis $\{w_{ij} : (i,j) \in U_n\}$, has the matrix representation $\mathsf{E}_n^{-1}\tilde{\mathsf{K}}_n$. We have that

$$(\mathcal{I} - \tilde{\mathcal{K}}_n)\tilde{u}_n = \mathcal{P}_n f, \tag{9.41}$$

where $\tilde{u}_n \in \mathbb{X}_n$ is related to the solution $\tilde{\mathbf{u}}_n$ of equation (9.40) by the formula $\tilde{u}_n = \tilde{\mathbf{u}}_n^T\mathbf{w}_n$, where $\mathbf{w}_n = [w_{ij} : (i,j) \in U_n]$. It is known that, under


certain conditions, for Galerkin methods and collocation methods we have that if $u \in W^{r,p}(\Omega)$, then

$$\|u - \tilde{u}_n\|_p \le c\,\mu^{-rn/d}\,n^\alpha\,\|u\|_{r,p}, \tag{9.42}$$

where $p = 2$ for the Galerkin method and $p = \infty$ for the collocation method, and $r$ denotes the order of the piecewise polynomials used in these methods.
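The sparsity budget (9.39) can be illustrated with a toy truncation. The model below is our own (algebraic off-diagonal decay with a plain magnitude threshold, not the distance-based strategy of Chapters 5–7): zeroing the small entries leaves only a narrow band, a small fraction of the $d_n^2$ entries of the full matrix.

```python
import numpy as np

def truncate(K, tol):
    """Zero all entries of magnitude below tol (illustrative compression)."""
    K_tilde = K.copy()
    K_tilde[np.abs(K_tilde) < tol] = 0.0
    return K_tilde

n = 512
i, j = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
K = 1.0 / (1.0 + np.abs(i - j)) ** 2          # model off-diagonal decay
K_tilde = truncate(K, 1e-4)
print(np.count_nonzero(K_tilde), n * n)       # a band of width ~200 vs 512^2 entries
```

The surviving entries lie in the band $|i - j| \le 99$, so the count grows linearly in $n$ rather than quadratically, which is the point of estimates such as (9.39).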

To develop the MAM for solving the operator equation (9.41), we note that $\tilde{\mathcal{K}}_n = \mathcal{P}_n\tilde{\mathcal{K}}_n$, from which equation (9.41) is rewritten as

$$(\mathcal{I} - \mathcal{P}_n\tilde{\mathcal{K}}_n)\tilde{u}_n = \mathcal{P}_n f. \tag{9.43}$$

Hence, for $n = k + m$, we have that $\tilde{\mathcal{K}}_{k+m} = \mathcal{P}_k\tilde{\mathcal{K}}_{k+m} + (\mathcal{P}_{k+m} - \mathcal{P}_k)\tilde{\mathcal{K}}_{k+m}$, and from this equation we define $\tilde{\mathcal{B}}_{k,m} = \mathcal{I}_{k+m} - \mathcal{P}_k\tilde{\mathcal{K}}_{k+m}$ and $\tilde{\mathcal{C}}_{k,m} = -(\mathcal{P}_{k+m} - \mathcal{P}_k)\tilde{\mathcal{K}}_{k+m}$. As done in Section 9.1.2, we can define the MAM for equation (9.41).

We have the following convergence result.

Theorem 9.10 Let $u \in W^{r,p}(\Omega)$. Suppose that the estimate (9.42) holds and

$$\lim_{n\to\infty}\|\tilde{\mathcal{K}}_n - \mathcal{K}_n\| = 0. \tag{9.44}$$

Then there exist a positive integer $N$ and a positive constant $c$ such that for all $k > N$ and $m \in \mathbb{N}_0$,

$$\|u - \tilde{u}_{k,m}\| \le c\,\mu^{-r(k+m)/d}(k+m)^\alpha\,\|u\|_{r,p}.$$

Proof The proof employs Theorem 9.2 with the majorization sequence $\gamma_n = c\mu^{-rn/d}n^\alpha\|u\|_{r,p}$. We conclude that $\gamma_{n+1}/\gamma_n \ge \mu^{-r/d}$. In other words, $\{\gamma_n\}$ is a majorization sequence of $\{R_n\}$ with $\sigma = \mu^{-r/d}$. It is readily shown that

$$\|\tilde{\mathcal{C}}_{k,m}\| \le 2p\|\tilde{\mathcal{K}}_{k+m} - \mathcal{K}_{k+m}\| + \|\mathcal{C}_{k,m}\| + 2p^2\|(\mathcal{P}_{k+m} - \mathcal{I})\mathcal{K}\|.$$

By (9.44), we have that $\lim_{k\to\infty}\|\tilde{\mathcal{C}}_{k,m}\| = 0$ uniformly for $m \in \mathbb{N}_0$. Also, it can be verified that the other conditions of Theorem 9.2 are satisfied by both the multiscale Galerkin method and the collocation method using piecewise polynomial multiscale bases of order $r$. By applying Theorem 9.2 we complete the proof. $\square$

We now formulate the matrix form of the MAM directly from the compressed matrix. Because the compressed matrix $\tilde{\mathsf{K}}_n$ inherits the multilevel structure of the matrix $\mathsf{K}_n$, we may use the MAM to solve equation (9.40) as described in Section 7.1.2. Specifically, we partition the compressed matrix $\tilde{\mathsf{K}}_n$ as done for the full matrix $\mathsf{K}_n$ in Section 7.1.2. Let

$$\tilde{\mathsf{K}}^L_{k,m} = \begin{bmatrix}
\tilde{\mathsf{K}}^k_{00} & \tilde{\mathsf{K}}^k_{01} & \cdots & \tilde{\mathsf{K}}^k_{0m} \\
\mathsf{0} & \mathsf{0} & \cdots & \mathsf{0} \\
\vdots & \vdots & & \vdots \\
\mathsf{0} & \mathsf{0} & \cdots & \mathsf{0}
\end{bmatrix} \quad\text{and}\quad
\tilde{\mathsf{K}}^H_{k,m} = \begin{bmatrix}
\mathsf{0} & \mathsf{0} & \cdots & \mathsf{0} \\
\tilde{\mathsf{K}}^k_{10} & \tilde{\mathsf{K}}^k_{11} & \cdots & \tilde{\mathsf{K}}^k_{1m} \\
\vdots & \vdots & & \vdots \\
\tilde{\mathsf{K}}^k_{m0} & \tilde{\mathsf{K}}^k_{m1} & \cdots & \tilde{\mathsf{K}}^k_{mm}
\end{bmatrix},$$

and define $\tilde{\mathsf{B}}_{k,m} = \mathsf{E}_{k+m} - \tilde{\mathsf{K}}^L_{k,m}$ and $\tilde{\mathsf{C}}_{k,m} = -\tilde{\mathsf{K}}^H_{k,m}$.

Algorithm 3 (Matrix form of the augmentation algorithm for compression schemes) Let $k$ be a fixed positive integer.

Step 1: Solve $\tilde{\mathbf{u}}_k \in \mathbb{R}^{d_k}$ from the equation $(\mathsf{E}_k - \tilde{\mathsf{K}}_k)\tilde{\mathbf{u}}_k = \mathbf{f}_k$.

Step 2: Set $\tilde{\mathbf{u}}_{k,0} = \tilde{\mathbf{u}}_k$ and compute the splitting matrices $\tilde{\mathsf{K}}^L_{k,0}$ and $\tilde{\mathsf{K}}^H_{k,0}$.

Step 3: For $m \in \mathbb{N}$, suppose that $\tilde{\mathbf{u}}_{k,m-1} \in \mathbb{R}^{d_{k+m-1}}$ has been obtained, and do the following:

• Augment the matrices $\tilde{\mathsf{K}}^L_{k,m-1}$ and $\tilde{\mathsf{K}}^H_{k,m-1}$ to form $\tilde{\mathsf{K}}^L_{k,m}$ and $\tilde{\mathsf{K}}^H_{k,m}$, respectively.

• Augment $\tilde{\mathbf{u}}_{k,m-1}$ by setting $\hat{\mathbf{u}}_{k,m} = \begin{bmatrix}\tilde{\mathbf{u}}_{k,m-1}\\ \mathbf{0}\end{bmatrix}$.

• Solve $\tilde{\mathbf{u}}_{k,m} \in \mathbb{R}^{d_{k+m}}$ from the algebraic equations

$$(\mathsf{E}_{k+m} - \tilde{\mathsf{K}}^L_{k,m})\tilde{\mathbf{u}}_{k,m} = \mathbf{f}_{k+m} + \tilde{\mathsf{K}}^H_{k,m}\hat{\mathbf{u}}_{k,m}.$$

Since $\mathsf{E}_n^{-1}\tilde{\mathsf{K}}_n$ is the matrix representation of the operator $\tilde{\mathcal{K}}_n$ relative to the basis $\{w_{ij} : (i,j) \in U_n\}$, we conclude that

$$\bigl\langle \ell_{i'j'}, \tilde{\mathcal{K}}_n w_{ij}\bigr\rangle = \sum_{(i'',j'')\in U_n}\bigl(\mathsf{E}_n^{-1}\tilde{\mathsf{K}}_n\bigr)_{i''j'',ij}\,\mathsf{E}_{i'j',i''j''} = \bigl(\tilde{\mathsf{K}}_n\bigr)_{i'j',ij},$$

and see that the matrix form of the MAM derived above is equivalent to the corresponding operator form.

In the next result we estimate the number of multiplications used in Algorithm 3.

Theorem 9.11 Let $k$ be a fixed positive integer and $m \in \mathbb{N}_0$. Suppose that for some $\alpha \in \{0, 1, 2\}$ and some integer $\mu > 1$, $\mathcal{N}(\tilde{\mathsf{K}}_n) = O(n^\alpha\mu^n)$, and suppose that for $n \in \mathbb{N}_0$, $\mathcal{N}(\mathsf{E}_n) \le \mathcal{N}(\tilde{\mathsf{K}}_n)$. Then the total number of multiplications required for computing $\tilde{\mathbf{u}}_{k,m}$ from $\tilde{\mathbf{u}}_k$ is $O((m+k)^\alpha\mu^{m+k})$.

Proof According to Proposition 9.5, the total number of multiplications required for computing $\tilde{\mathbf{u}}_{k,m}$ from $\tilde{\mathbf{u}}_k$ is given by

$$N_{k,m} = O(m) + \sum_{i\in\mathbb{N}_m}\bigl[\mathcal{N}(\tilde{\mathsf{K}}_{k+i}) + \mathcal{N}(\mathsf{E}_{k+i})\bigr].$$

Since $\mathcal{N}(\mathsf{E}_n) \le \mathcal{N}(\tilde{\mathsf{K}}_n)$, it suffices to estimate the quantity $\sum_{i\in\mathbb{N}_m}\mathcal{N}(\tilde{\mathsf{K}}_{k+i})$. To this end we may show the identity

$$\sum_{i\in\mathbb{N}_m}(k+i)^\alpha\mu^{k+i} = O\bigl((k+m)^\alpha\mu^{k+m}\bigr).$$

Using this formula and the hypotheses of this theorem, we complete the proof of this result. $\square$

It can be verified that for the compression schemes presented in Chapters 5, 6 and 7, condition (9.44) and the assumptions of Theorem 9.11 are fulfilled. Therefore the conclusions of Theorems 9.10 and 9.11 hold for the class of methods proposed in these chapters.

9.1.4 Numerical experiments

We present in this subsection numerical examples to demonstrate the performance of the MAM associated with the multiscale Galerkin method and the multiscale collocation method. To focus on the main issue of the MAM, we first choose a second-kind integral equation on the unit interval, since the augmentation method is independent of the dimension of the domain of the integral equation. Consider equation (9.19) with the integral operator $\mathcal{K}$ defined by

$$(\mathcal{K}u)(s) = \int_0^1 \log|\cos(\pi s) - \cos(\pi t)|\,u(t)\,dt, \quad s \in \Omega = [0,1].$$

In our numerical experiments, for convenience of comparison, we choose the right-hand-side function in equation (9.19) as

$$f(s) = \sin(\pi s) + \frac{1}{\pi}\bigl[2 - (1 - \cos(\pi s))\log(1 - \cos(\pi s)) - (1 + \cos(\pi s))\log(1 + \cos(\pi s))\bigr],$$

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637011Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 163144 subject to the Cambridge Core terms of

342 Fast solvers for discrete systems

so that $u(s) = \sin(\pi s)$, $s \in \Omega$, is the exact solution of the equation. We choose $\mathbb{X}_n$ as the space of piecewise linear polynomials on $\Omega$ with knots at the dyadic points $j/2^n$, $j = 1, 2, \ldots, 2^n - 1$. Note that the theoretical convergence order for piecewise linear approximation is 2. The following two numerical algorithms are run on a personal computer with a 600-MHz CPU and 256-MB memory.

Example 1 Multiscale Galerkin methods. In our first experiment we consider the multiscale Galerkin method for solving equation (9.19). Choose the orthonormal basis $w_{00}(t) = 1$ and $w_{01}(t) = \sqrt{3}(2t-1)$, $t \in \Omega$, for $\mathbb{X}_0$, and

$$w_{10}(t) = \begin{cases} 1 - 6t, & t \in [0, \frac12], \\ 5 - 6t, & t \in [\frac12, 1], \end{cases} \qquad
w_{11}(t) = \begin{cases} \sqrt{3}(1 - 4t), & t \in [0, \frac12], \\ \sqrt{3}(4t - 3), & t \in [\frac12, 1], \end{cases}$$

for $\mathbb{W}_1$. An orthonormal basis $\{w_{ij} : j = 0, 1, \ldots, 2^i - 1\}$ for $\mathbb{W}_i$ is constructed according to the construction given in Section 4.3. We choose $\mathcal{P}_n = P_n$ as the orthogonal projection mapping $L^2(\Omega)$ onto $\mathbb{X}_n$, and $\ell_{ij} = w_{ij}$.
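The orthonormality of these four functions is easy to verify: each pairwise product is a polynomial of degree at most two on each half of $[0,1]$, so two-point Gauss–Legendre quadrature per half interval computes the Gram matrix exactly (a check added here only for illustration):

```python
import numpy as np

w00 = lambda t: np.ones_like(t)
w01 = lambda t: np.sqrt(3) * (2 * t - 1)
w10 = lambda t: np.where(t <= 0.5, 1 - 6 * t, 5 - 6 * t)
w11 = lambda t: np.where(t <= 0.5, np.sqrt(3) * (1 - 4 * t),
                                   np.sqrt(3) * (4 * t - 3))

def inner(u, v):
    """L^2(0,1) inner product, exact for piecewise quadratics with a
    breakpoint at 1/2 (2-point Gauss-Legendre on each half interval)."""
    x, w = np.polynomial.legendre.leggauss(2)     # nodes/weights on [-1, 1]
    total = 0.0
    for a, b in ((0.0, 0.5), (0.5, 1.0)):
        t = 0.5 * (b - a) * x + 0.5 * (b + a)     # map nodes to [a, b]
        total += 0.5 * (b - a) * np.sum(w * u(t) * v(t))
    return total

basis = [w00, w01, w10, w11]
G = np.array([[inner(u, v) for v in basis] for u in basis])
print(np.round(G, 10))   # approximately the 4 x 4 identity
```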

In this case $\mathsf{E}_n$ is an identity matrix because of the orthogonality of the basis, and the matrix $\mathsf{K}_n$ is truncated to form the compressed matrix $\tilde{\mathsf{K}}_n$ according to a strategy presented in Chapter 5. The MAM is then applied to the compressed linear system.

In Table 9.1 we report the approximation error and convergence order of the numerical solution obtained from the MAM, and the computing time for solving the linear system in the Galerkin case, where we use the initial level k = 5. They confirm our theoretical estimates. In all tables presented here we use "Approx. order" for the computed approximation order of the numerical solution $\tilde{u}_{k,m}$ approximating the exact solution $u$, "Comp. rate" for the compression rate of the compressed coefficient matrix $\tilde{\mathsf{K}}_n$ and "CT" for the computing time (measured in seconds) used to solve the linear system using the augmentation method.

Table 9.1 Convergence and computational speed for the Galerkin method

m   $\|u - \tilde{u}_{k,m}\|_{L^2}$   Approx. order   Comp. rate   CT
0   1.02e-3                            —               1.000        <0.01
1   2.56e-4                            1.99            0.688        <0.01
2   6.36e-5                            2.00            0.479        <0.01
3   1.59e-5                            2.00            0.312        0.01
4   4.22e-6                            1.91            0.193        0.02
5   1.11e-6                            1.92            0.116        0.03


Table 9.2 Condition numbers of the matrices from the Galerkin method

m   cond($\mathsf{A}_{5+m}$)   $\Delta_1$   cond($\mathsf{B}_{5,m}$)   $\Delta_2$
0   1.9551                     —            —                          —
1   1.977                      0.022310     2.0032                     —
2   1.989                      0.011255     2.003                      0.0003283
3   1.994                      0.005669     2.003                      0.0000404
4   1.997                      0.002843     2.003                      0.0000065
5   1.999                      0.001416     2.003                      0.0000001

We report the spectral condition numbers of $\mathsf{A}_{5+m}$ and $\mathsf{B}_{5,m}$ in Table 9.2. We use the notations $\Delta_1 = \operatorname{cond}(\mathsf{A}_{k+m}) - \operatorname{cond}(\mathsf{A}_{k+m-1})$ and $\Delta_2 = \operatorname{cond}(\mathsf{B}_{k,m}) - \operatorname{cond}(\mathsf{B}_{k,m-1})$.

Example 2 The multiscale collocation method. In this case we choose $\mathcal{P}_n$ and $P_n$, respectively, as the interpolatory and the orthogonal projections onto $\mathbb{X}_n$, and define the subspaces $\mathbb{W}_n$ by the orthogonal projection $P_n$. Specifically, we choose $\mathbb{X}_0 = \operatorname{span}\{w_{00}, w_{01}\}$, where $w_{00}(t) = 2 - 3t$ and $w_{01}(t) = -1 + 3t$, $t \in \Omega$, and $\mathbb{W}_1 = \operatorname{span}\{w_{10}, w_{11}\}$, where

$$w_{10}(t) = \begin{cases} 1 - \frac{9}{2}t, & t \in [0, \frac12], \\[2pt] -1 + \frac{3}{2}t, & t \in [\frac12, 1], \end{cases} \qquad
w_{11}(t) = \begin{cases} \frac12 - \frac{3}{2}t, & t \in [0, \frac12], \\[2pt] -\frac{7}{2} + \frac{9}{2}t, & t \in [\frac12, 1]. \end{cases}$$

We also need multiscale collocation functionals for the multiscale collocation method. To this end, for any $s \in \Omega$ we use $\delta_s$ to denote the linear functional in $(C(\Omega))^*$ defined for $x \in C(\Omega)$ by the equation $\langle \delta_s, x\rangle = x(s)$. We choose the spaces of functionals as $\mathbb{L}_0 = \operatorname{span}\{\ell_{00}, \ell_{01}\}$, where $\ell_{00} = \delta_{1/3}$ and $\ell_{01} = \delta_{2/3}$, and $\mathbb{V}_1 = \operatorname{span}\{\ell_{10}, \ell_{11}\}$, where

$$\ell_{10} = \delta_{1/6} - \tfrac{3}{2}\delta_{1/3} + \tfrac{1}{2}\delta_{2/3} \quad\text{and}\quad \ell_{11} = \tfrac{1}{2}\delta_{1/3} - \tfrac{3}{2}\delta_{2/3} + \delta_{5/6}.$$

Bases $w_{ij}$ for $\mathbb{W}_i$ and $\ell_{ij}$ for $\mathbb{V}_i$ are constructed according to the description in Chapter 7. We remark that, by construction, $\mathbb{V}_{l+1} \perp \mathbb{X}_l$ for all $l \in \mathbb{N}_0$. In this case $\mathsf{E}_n$ is upper triangular with diagonal entries all one. The matrix $\mathsf{K}_n$ is truncated to form the compressed matrix $\tilde{\mathsf{K}}_n$ according to the strategy presented in Chapter 7. Again, the MAM is applied to the compressed linear system.
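The remark $\mathbb{V}_{l+1} \perp \mathbb{X}_l$ can be checked directly at the first level: applying $\ell_{10}$ and $\ell_{11}$ to $w_{00}$ and $w_{01}$ gives zero (a verification added here only for illustration):

```python
def w00(t): return 2 - 3 * t
def w01(t): return -1 + 3 * t

def l10(x):     # delta_{1/6} - (3/2) delta_{1/3} + (1/2) delta_{2/3}
    return x(1 / 6) - 1.5 * x(1 / 3) + 0.5 * x(2 / 3)

def l11(x):     # (1/2) delta_{1/3} - (3/2) delta_{2/3} + delta_{5/6}
    return 0.5 * x(1 / 3) - 1.5 * x(2 / 3) + x(5 / 6)

values = [l(x) for l in (l10, l11) for x in (w00, w01)]
print(values)   # zero up to rounding, confirming V_1 annihilates X_0
```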

The numerical results of the collocation method are shown in Table 9.3, where we again use the initial level k = 5.

We also use Tables 9.1 and 9.3 to demonstrate the applicability of the MAM to multiscale collocation methods. We begin with an initial approximate


Table 9.3 Convergence and computational speed for the collocation method

m    Matrix size   $\|u - \tilde{u}_{k,m}\|_{L^\infty}$   Approx. order   Comp. rate   CT
0    64            3.91e-3                                 —               0.891        <0.01
1    128           1.11e-3                                 1.81            0.766        <0.01
2    256           2.89e-4                                 1.94            0.546        <0.01
3    512           6.47e-5                                 2.16            0.356        0.010
4    1024          1.45e-5                                 2.15            0.220        0.020
5    2048          3.58e-6                                 2.02            0.131        0.041
6    4096          8.81e-7                                 2.02            0.076        0.100
7    8192          2.20e-7                                 2.00            0.044        0.240
8    16384         5.80e-8                                 1.92            0.024        0.581
9    32768         1.42e-8                                 2.02            0.014        1.342
10   65536         3.66e-9                                 1.96            0.007        2.994

Table 9.4 Condition numbers of the matrices from the collocation method

m   cond($\mathsf{A}_{5+m}$)   $\Delta_1$   cond($\mathsf{B}_{5,m}$)   $\Delta_2$
0   7.7422231                  —            —                          —
1   9.446764                   1.704541     9.3832852                  —
2   11.096629                  1.649865     11.003597                  1.6203123
3   12.695555                  1.598926     12.589373                  1.5857764
4   14.256352                  1.560797     14.151313                  1.5619405
5   15.789732                  1.533380     15.690938                  1.5396250

solution $\tilde{u}_5$. The numerical results show that the accuracy is improved in exactly the same order as our theoretical result as we move from one level to a higher level using the augmentation method. In this experiment, at each level we perform a uniform subdivision, since the solution is smooth.

In Table 9.4 we list the spectral condition numbers of the matrices $\mathsf{A}_{5+m}$ and $\mathsf{B}_{5,m}$ for the collocation method. We also compute the differences of the condition numbers of these matrices between two consecutive levels to observe the growth of the condition numbers. The purpose of this experiment is to confirm the stability analysis presented in Section 9.1.2. Note that theoretically we have that $\operatorname{cond}(\mathsf{A}_{k+m}) = O(\log^2 d_{k+m})$ and $\operatorname{cond}(\mathsf{B}_{k,m}) = O(\log^2 d_{k+m})$. Our numerical results coincide with the theoretical estimates obtained in Section 9.1.3. This experiment shows that the MAM does not ruin the stability of the original multilevel scheme (9.40). Normally the original multilevel scheme is well conditioned, since the use of multiscale bases leads to a preconditioner. A good discrete method should preserve this nice feature of equation (9.40). From our numerical results we see that the condition number of $\mathsf{B}_{k,m}$ is smaller than that of $\mathsf{A}_{k+m}$.


Next we consider a boundary integral equation resulting from a reformulation of a boundary value problem for the Laplace equation.

Example 3 The multiscale collocation method applied to boundary integral equations. In this experiment we apply the multiscale collocation method to solving the boundary integral equation reformulated from the boundary value problem

$$\begin{cases} \Delta u(x) = 0, & x \in \Omega, \\[4pt] \dfrac{\partial u(x)}{\partial n_x} = -u(x) + g_0(x), & x \in \partial\Omega, \end{cases} \tag{9.45}$$

where the domain

$$\Omega = \Bigl\{x = (x_1, x_2) : x_1^2 + \frac{x_2^2}{4} < 1\Bigr\},$$

with $\partial\Omega$ being its boundary. We choose the function $g_0$ so that

$$u_0(x) = e^{x_1}\cos(x_2), \quad x \in \Omega,$$

is the exact solution of equation (9.45). Here we note that the solution $u_0$ is analytic. By the fundamental solution of the equation, we reformulate the boundary value problem as the boundary integral equation (cf. Section 2.1.3)

$$u(x) - \frac{1}{\pi}\int_{\partial\Omega} u(y)\frac{\partial}{\partial n_y}\log|x - y|\,ds_y - \frac{1}{\pi}\int_{\partial\Omega} u(y)\log|x - y|\,ds_y = -\frac{1}{\pi}\int_{\partial\Omega} g_0(y)\log|x - y|\,ds_y, \quad x \in \partial\Omega. \tag{9.46}$$

Utilizing the parametrization $x(t) = (\cos(2\pi t), 2\sin(2\pi t))$, $t \in [0,1]$, of the boundary, we establish the boundary integral equation

$$u(t) - \int_0^1 K(t,\tau)u(\tau)\,d\tau - \int_0^1 L(t,\tau)u(\tau)\,d\tau = -\int_0^1 L(t,\tau)g_0(\tau)\,d\tau, \quad t \in [0,1), \tag{9.47}$$

where the kernels

$$K(t,\tau) = \frac{2}{1 + 3\cos^2(\pi(t+\tau))}$$

and

$$L(t,\tau) = 2\sqrt{1 + 3\cos^2(2\pi\tau)}\,\log\Biggl(\frac{2\sin|\pi(t-\tau)|}{\sqrt{1 + 3\cos^2(\pi(t+\tau))}}\Biggr)$$
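The $1 + 3\cos^2$ factors in $K$ and $L$ come from the chord-length identity on the ellipse, $|x(t) - x(\tau)| = 2|\sin(\pi(t-\tau))|\sqrt{1 + 3\cos^2(\pi(t+\tau))}$, which follows from the sum-to-product formulas. A quick numerical confirmation:

```python
import math
import random

def x(t):
    """The parametrization of the ellipse boundary."""
    return (math.cos(2 * math.pi * t), 2 * math.sin(2 * math.pi * t))

def chord(t, tau):
    (a1, a2), (b1, b2) = x(t), x(tau)
    return math.hypot(a1 - b1, a2 - b2)

def factored(t, tau):
    return 2 * abs(math.sin(math.pi * (t - tau))) * math.sqrt(
        1 + 3 * math.cos(math.pi * (t + tau)) ** 2)

random.seed(0)
for _ in range(100):
    t, tau = random.random(), random.random()
    assert math.isclose(chord(t, tau), factored(t, tau), abs_tol=1e-12)
```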


Table 9.5 Convergence and computational speed for the collocation method

m   $d_{4+m}$   $\|u - \tilde{u}_{4,m}\|_\infty$   Approx. order   Comp. rate 1   Comp. rate 2   CT
0   64          4.56e-2                             —               0.63           0.75           —
1   128         1.39e-2                             1.71            0.573          0.516          0.001
2   256         3.37e-3                             2.04            0.343          0.266          0.002
3   512         8.09e-4                             2.06            0.205          0.137          0.004
4   1024        1.82e-4                             2.15            0.121          0.070          0.008
5   2048        4.46e-5                             2.03            0.071          0.036          0.014
6   4096        1.08e-5                             2.04            0.041          0.019          0.029
7   8192        2.60e-6                             2.05            0.023          0.010          0.059
8   16384       5.91e-7                             2.14            0.013          0.005          0.122
9   32768       1.19e-7                             2.31            0.007          0.003          0.250

Table 9.6 Condition numbers of the matrices from the collocation method

m   $d_{4+m}$   cond($\mathsf{A}_{4+m}$)   $\Delta_1$   cond($\mathsf{B}_{4,m}$)   $\Delta_2$
0   64          10.2                       —            —                          —
1   128         12.0                       1.8          11.9                       —
2   256         13.7                       1.7          13.5                       1.6
3   512         15.2                       1.6          15.0                       1.5
4   1024        16.7                       1.5          16.5                       1.5
5   2048        18.2                       1.4          18.0                       1.5

are smooth and weakly singular, respectively. The solution of (9.45) corresponding to $u_0$ is

$$u(t) = e^{\cos(2\pi t)}\cos(2\sin(2\pi t)).$$

We use the multiscale collocation method described in Example 2 to solve the boundary integral equation (9.47). The programs are run on a personal computer with a 3.40-GHz CPU and 8.00-GB memory. The numerical results are shown in Tables 9.5 and 9.6, where we use "Comp. rate 1" and "Comp. rate 2" for the compression rates of the compressed coefficient matrices for the weakly singular and smooth kernels, respectively. The numerical results confirm the theoretical estimates.

In Table 9.6 we list the spectral condition numbers of the matrices $\mathsf{A}_{k+m}$ and $\mathsf{B}_{k,m}$ for the collocation method. The differences of the condition numbers of these matrices between two consecutive levels are also computed to show the growth of the condition numbers. The numerical results confirm the stability analysis presented in Section 9.1.2.


Both theoretical analysis and numerical experiments show that the MAM is particularly suitable for solving the large-scale linear systems that result from compression schemes applied to integral equations. It is a stable and fast algorithm which provides accurate numerical solutions of integral equations.

9.2 Multilevel iteration methods

In this section we develop MIMs for solving Fredholm integral equations of the second kind, based on multiscale methods for which the subspace has a multiresolution decomposition. To describe the multilevel iteration schemes and the ideas leading to these algorithms, we utilize multiscale Galerkin methods.

9.2.1 Multilevel iteration schemes

Let $\mathbb{X}$ be a Hilbert space and $\mathcal{K} : \mathbb{X} \to \mathbb{X}$ an operator such that $\mathcal{I} - \mathcal{K}$ is bijective on $\mathbb{X}$. Consider the Fredholm integral equation of the second kind given in the form

$$(\mathcal{I} - \mathcal{K})u = f, \tag{9.48}$$

where $f \in \mathbb{X}$ is given and $u \in \mathbb{X}$ is the solution to be determined.

To solve equation (9.48) by the Galerkin method, let $\{\mathbb{X}_n\}$ be a nested sequence of finite-dimensional subspaces of $\mathbb{X}$,

$$\mathbb{X}_n \subseteq \mathbb{X}_{n+1}, \quad n \in \mathbb{N}_0, \tag{9.49}$$

such that $\overline{\bigcup_{n\in\mathbb{N}_0}\mathbb{X}_n} = \mathbb{X}$. Let $P_n : \mathbb{X} \to \mathbb{X}_n$ denote the sequence of orthogonal projections. Then $\|P_n\| = 1$, $\mathbb{X}_n = P_n\mathbb{X}$, $P_n^* = P_n$ and $P_n \to \mathcal{I}$ pointwise in $\mathbb{X}$. The Galerkin method is to find $u_n \in \mathbb{X}_n$ satisfying the equation

$$(\mathcal{I} - P_n\mathcal{K})u_n = P_n f. \tag{9.50}$$

We assume that $\mathcal{K}$ is a compact operator on $\mathbb{X}$. In this case the pointwise convergence of $P_n$ to $\mathcal{I}$ and the compactness of $\mathcal{K}$ and $\mathcal{K}^*$ lead to

$$\lim_{n\to\infty}\|\mathcal{K} - P_n\mathcal{K}\| = \lim_{n\to\infty}\|\mathcal{K} - \mathcal{K}P_n\| = 0,$$

which implies, for all $n$ large enough, that $\mathcal{I} - P_n\mathcal{K}$ is invertible and its inverse is bounded by a constant independent of $n$. Therefore equation (9.50) has a unique solution $u_n \in \mathbb{X}_n$, which satisfies the estimates

$$\|u_n\| \le c\|f\|, \qquad \|u - u_n\| \le c\inf\{\|u - v\| : v \in \mathbb{X}_n\}, \tag{9.51}$$

where $u$ is the solution of equation (9.48).


Next we develop our MIMs. By the nestedness property (9.49) of the approximating subspace sequence, each $\mathbb{X}_n$ represents a level of resolution in approximating $\mathbb{X}$, and in solving for $u_n \in \mathbb{X}_n$ in (9.50) we seek an approximation, up to the $n$th level of resolution, of the exact solution $u \in \mathbb{X}$. This nestedness property also implies that there exists a subspace $\mathbb{W}_{n+1}$ of $\mathbb{X}_{n+1}$ such that

$$\mathbb{X}_{n+1} = \mathbb{X}_n \oplus^\perp \mathbb{W}_{n+1}, \quad n \in \mathbb{N}_0. \tag{9.52}$$

Let

$$Q_{n+1} = P_{n+1} - P_n.$$

It is then straightforward to show that $Q_n$ is a projection onto $\mathbb{W}_n$ and $\mathbb{W}_{n+1} = Q_{n+1}\mathbb{X}_{n+1}$, $n \in \mathbb{N}_0$. Repeatedly using equation (9.52) produces, for $k, m \in \mathbb{N}_0$, the following decomposition of the space $\mathbb{X}_n$ with $n = k + m$:

$$\mathbb{X}_{k+m} = \mathbb{X}_k \oplus^\perp \mathbb{W}_{k+1} \oplus^\perp \cdots \oplus^\perp \mathbb{W}_{k+m}. \tag{9.53}$$

For $u_{k+m} \in \mathbb{X}_{k+m}$ we write

$$u_{k+m} = u_{k,0} + v_{k,1} + \cdots + v_{k,m},$$

where $u_{k,0} \in \mathbb{X}_k$ and $v_{k,l} \in \mathbb{W}_{k+l}$, $l \in \mathbb{N}_m$, and

$$P_{k+m} = P_k + Q_{k+1} + \cdots + Q_{k+m}.$$

For $k, m \in \mathbb{N}_0$ we can identify the operator $\mathcal{K}_{k+m} = P_{k+m}\mathcal{K}P_{k+m}$ with the matrix of operators

$$\mathcal{K}_{k,m} = \begin{bmatrix}
P_k\mathcal{K}P_k & P_k\mathcal{K}Q_{k+1} & \cdots & P_k\mathcal{K}Q_{k+m} \\
Q_{k+1}\mathcal{K}P_k & Q_{k+1}\mathcal{K}Q_{k+1} & \cdots & Q_{k+1}\mathcal{K}Q_{k+m} \\
\vdots & \vdots & & \vdots \\
Q_{k+m}\mathcal{K}P_k & Q_{k+m}\mathcal{K}Q_{k+1} & \cdots & Q_{k+m}\mathcal{K}Q_{k+m}
\end{bmatrix}. \tag{9.54}$$

Thus we can express the operator equation (9.50) in the form

$$\mathbf{u}_{k,m} - \mathcal{K}_{k,m}\mathbf{u}_{k,m} = \mathbf{f}_{k,m}, \tag{9.55}$$

where $\mathbf{u}_{k,m} = [u_{k,0}, v_{k,1}, \ldots, v_{k,m}]^T$ and $\mathbf{f}_{k,m} = [P_k f, Q_{k+1}f, \ldots, Q_{k+m}f]^T$. As in (9.50), we seek the solution to (9.55) in the approximation space $\mathbb{X}_{k+m}$. Once the bases for $\mathbb{X}_k$ and $\mathbb{W}_{k+l}$ are chosen, the matrix representation of $\mathcal{K}_{k+m}$ has exactly the same block structure as $\mathcal{K}_{k,m}$ in (9.54).


To develop multilevel iteration schemes for solving equation (9.55), we introduce five matrices of operators derived from $K_{k,m}$: the strictly upper triangular matrix
$$U_{k,m} = \begin{bmatrix} O & P_kKQ_{k+1} & P_kKQ_{k+2} & \cdots & P_kKQ_{k+m}\\ & O & Q_{k+1}KQ_{k+2} & \cdots & Q_{k+1}KQ_{k+m}\\ & & \ddots & \ddots & \vdots\\ & & & O & Q_{k+m-1}KQ_{k+m}\\ & & & & O \end{bmatrix},$$
the lower triangular matrix
$$L_{k,m} = \begin{bmatrix} O & & &\\ Q_{k+1}KP_k & Q_{k+1}KQ_{k+1} & &\\ \vdots & \vdots & \ddots &\\ Q_{k+m}KP_k & Q_{k+m}KQ_{k+1} & \cdots & Q_{k+m}KQ_{k+m} \end{bmatrix},$$
the block diagonal matrix
$$D_{k,m} = \begin{bmatrix} I - P_kKP_k & & &\\ & I & &\\ & & \ddots &\\ & & & I \end{bmatrix}$$
and the matrices corresponding to the lower and higher frequencies of $K_{k,m}$, respectively,
$$K^L_{k,m} = \begin{bmatrix} P_kKP_k & P_kKQ_{k+1} & \cdots & P_kKQ_{k+m}\\ O & O & \cdots & O\\ \vdots & \vdots & & \vdots\\ O & O & \cdots & O \end{bmatrix},
\qquad
K^H_{k,m} = \begin{bmatrix} O & O & \cdots & O\\ Q_{k+1}KP_k & Q_{k+1}KQ_{k+1} & \cdots & Q_{k+1}KQ_{k+m}\\ \vdots & \vdots & & \vdots\\ Q_{k+m}KP_k & Q_{k+m}KQ_{k+1} & \cdots & Q_{k+m}KQ_{k+m} \end{bmatrix}.$$
Equation (9.55) can then be rewritten as
$$(D_{k,m} - U_{k,m} - L_{k,m})u_{k,m} = f_{k,m} \qquad(9.56)$$
or
$$(I - K^L_{k,m} - K^H_{k,m})u_{k,m} = f_{k,m}. \qquad(9.57)$$
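The splittings (9.56) and (9.57) can likewise be verified numerically. The sketch below builds the five operator matrices in a simple coordinate model ($X_n$ is the span of the first $2^n$ coordinates of $\mathbb{R}^N$, so $P_{k+m}$ is the identity when $N = 2^{k+m}$); the matrix `K` and the level choices are our own illustrative assumptions.

```python
import numpy as np

def P(level, N):
    """Orthogonal projection onto the first 2**level coordinates."""
    d = np.zeros(N)
    d[:2**level] = 1.0
    return np.diag(d)

k, m = 2, 3
N = 2**(k + m)                      # finest level, so P_{k+m} = I
rng = np.random.default_rng(1)
decay = 1.0 / np.arange(1, N + 1)
K = rng.standard_normal((N, N)) * np.outer(decay, decay)

Pk = P(k, N)
ops = [Pk] + [P(k + l, N) - P(k + l - 1, N) for l in range(1, m + 1)]

I = np.eye(N)
U = sum(ops[i] @ K @ ops[j]         # strictly upper triangular blocks
        for i in range(m + 1) for j in range(m + 1) if i < j)
L = (K - U) - Pk @ K @ Pk           # lower part, diagonal Q-blocks included
D = I - Pk @ K @ Pk                 # block diagonal: I - P_k K P_k, then identities
KL = Pk @ K                         # lower-frequency part  P_k K P_{k+m}
KH = (I - Pk) @ K                   # higher-frequency part (P_{k+m} - P_k) K P_{k+m}

assert np.allclose(D - U - L, I - K)    # form (9.56) of the equation operator
assert np.allclose(KL + KH, K)          # form (9.57)
print("splittings (9.56) and (9.57) verified")
```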


It is these matrix forms that lead us to the following multilevel iteration schemes for solving equation (9.50).

• Jacobi-type iteration:
$$D_{k,m}u^{(l+1)}_{k,m} = (U_{k,m} + L_{k,m})u^{(l)}_{k,m} + f_{k,m}, \quad l \in \mathbb{N}_0, \qquad(9.58)$$
with any initial approximation $u^{(0)}_{k,m}$. Except for the first component, all the components of $u^{(l+1)}_{k,m}$ are already expressed in terms of $u^{(l)}_{k,m}$, since the diagonal blocks of $D_{k,m}$ other than the first are identity operators. Only the first component needs to be solved for, by inverting $I - P_kKP_k$; this is also what distinguishes this scheme from the usual algebraic Jacobi scheme.

• Gauss–Seidel-type iteration:
$$(D_{k,m} - U_{k,m})u^{(l+1)}_{k,m} = L_{k,m}u^{(l)}_{k,m} + f_{k,m}, \quad l \in \mathbb{N}_0, \qquad(9.59)$$
with any initial approximation $u^{(0)}_{k,m}$. Like the usual algebraic Gauss–Seidel iteration, (9.59) can easily be solved in the "backward substitution" fashion for the components of $u^{(l+1)}_{k,m}$, and the only inverse we need to find is that for the first component, namely $(I - P_kKP_k)^{-1}$.

• L–H-type iteration:
$$(I - K^L_{k,m})u^{(l+1)}_{k,m} = K^H_{k,m}u^{(l)}_{k,m} + f_{k,m}, \quad l \in \mathbb{N}_0. \qquad(9.60)$$
Like the Jacobi scheme, only the first component of $u^{(l+1)}_{k,m}$ needs to be solved for, by inverting $I - P_kKP_k$, while the other components are computed directly.

It is clear that these three schemes are similar in the way they update the components of $u^{(l+1)}_{k,m}$; they differ only in that the Gauss–Seidel scheme uses newly available components within each iteration, the L–H scheme uses the components obtained in the previous iteration except for the first component, while the Jacobi scheme waits until an iteration is complete. All three methods require inverting the operator $I - P_kKP_k$ at the $k$th level. Hence we have to choose $k$ so that the inverse of $I - P_kKP_k$ exists and is relatively easy to find. In the meantime, we can take advantage of this to start the iteration with the good initial guess
$$u^{(0)}_{k,m} = [(I - P_kKP_k)^{-1}P_kf, 0, \ldots, 0]^T.$$

9.2.2 Convergence analysis

We now provide a convergence analysis for the iteration schemes (9.58), (9.59) and (9.60) when the operator $K$ is compact on $X$. In this case we have that
$$\lim_{n\to\infty}\|(P_{n+i} - P_{n+j})K\| = \lim_{n\to\infty}\|K(P_{n+i} - P_{n+j})\| = 0 \qquad(9.61)$$
for $i, j \in \mathbb{N}_0$.


Moreover, for $n$ large enough, $I - K_n$ is invertible and the norm of the inverse operator is bounded uniformly for all $n$. In the following theorem we establish the convergence of schemes (9.58)–(9.60) for $k$ large enough, as well as the rate of convergence.

Theorem 9.12 Suppose that the linear operator $K$ is compact on the Hilbert space $X$, $I - K$ is bijective on $X$, and the sequence of approximation subspaces possesses the nestedness (9.49) and has the decomposition (9.53). Then the following statements hold.

(i) For each $m \in \mathbb{N}$ and sufficiently large $k$, the iteration schemes (9.58), (9.59) and (9.60) for solving equation (9.50), or (9.55), are all convergent.

(ii) The convergence rates for the Jacobi iteration scheme (9.58) and the L–H iteration scheme (9.60) are independent of $m$ for all sufficiently large $k$. If the hypotheses
$$\sum_{k\in\mathbb{N}_0}\|(I - P_k)K\| < \infty, \qquad \sum_{k\in\mathbb{N}_0}\|K(I - P_k)\| < \infty \qquad(9.62)$$
and
$$\lim_{k\to\infty} k\|(I - P_k)K\| = \lim_{k\to\infty} k\|K(I - P_k)\| = 0 \qquad(9.63)$$
hold, the rate of convergence for the Gauss–Seidel iteration is also independent of $m$ for all sufficiently large $k$.

(iii) The convergence rates for the Jacobi iteration scheme (9.58) and the L–H iteration scheme (9.60) can be made small (by increasing $k$) in the same order as the approximation of $K$ by $P_kK$. Moreover, if hypotheses (9.62) and (9.63) are satisfied, the rate of convergence for the Gauss–Seidel scheme can be made smaller (by increasing $k$) in the same order as $r_k$ goes to zero, where $r_k = \sum_{i\in\mathbb{N}_0}\|(I - P_{k+i})K\|$.

Proof We first estimate the norms of the iteration operators $D_{k,m}^{-1}(U_{k,m} + L_{k,m})$, $(D_{k,m} - U_{k,m})^{-1}L_{k,m}$ and $(I - K^L_{k,m})^{-1}K^H_{k,m}$ for schemes (9.58), (9.59) and (9.60), respectively. From the definition of $U_{k,m}$ we see that
$$U_{k,m}u = P_kK(P_{k+m} - P_k)u + \sum_{l\in\mathbb{N}_{m-1}} Q_{k+l}K(P_{k+m} - P_{k+l})u$$
for all $u \in X$, which leads to
$$\|U_{k,m}\| \le \sum_{l\in\mathbb{Z}_m}\|K(P_{k+m} - P_{k+l})\| \to 0 \quad\text{as } k\to\infty, \qquad(9.64)$$


where we have used (9.61) and the fact that $\|P_n\| = \|Q_n\| = 1$ for all $n \in \mathbb{N}$. Similarly, we have that
$$L_{k,m}u = (P_{k+m} - P_k)KP_ku + \sum_{l\in\mathbb{Z}_m}(P_{k+m} - P_{k+l})KQ_{k+l+1}u$$
and
$$\|L_{k,m}\| \le \sum_{l\in\mathbb{Z}_m}\|(P_{k+m} - P_{k+l})K\| \to 0 \quad\text{as } k\to\infty.$$
Moreover, from $U_{k,m} + L_{k,m} = K_{k,m} - K_k$ we conclude that
$$\|U_{k,m} + L_{k,m}\| \le \|(P_{k+m} - P_k)K\| + \|K(P_{k+m} - P_k)\| \to 0 \quad\text{as } k\to\infty. \qquad(9.65)$$
Moreover, as an operator on $X$, $D_{k,m} = I - K_k$. Hence
$$D_{k,m}^{-1} = (I - K_k)^{-1} \qquad(9.66)$$
and
$$(D_{k,m} - U_{k,m})^{-1} = (I - (I - K_k)^{-1}U_{k,m})^{-1}(I - K_k)^{-1}.$$
Therefore
$$\|D_{k,m}^{-1}\| = \|(I - K_k)^{-1}\|$$
and
$$\|(D_{k,m} - U_{k,m})^{-1}\| \le \frac{\|(I - K_k)^{-1}\|}{1 - \|(I - K_k)^{-1}U_{k,m}\|}.$$
Since $\|(I - K_k)^{-1}\|$ is uniformly bounded for large enough $k$, in view of (9.64) these two inverses are both bounded uniformly for large enough $k$.

Combining the above, we see that for each fixed $m \in \mathbb{N}$ the iteration operators for schemes (9.58) and (9.59) both converge to $0$ in the operator norm:
$$D_{k,m}^{-1}(U_{k,m} + L_{k,m}) \to 0 \quad\text{and}\quad (D_{k,m} - U_{k,m})^{-1}L_{k,m} \to 0 \quad\text{as } k\to\infty.$$
Hence their norms can be made less than one if $k$ is large enough, which in turn yields the convergence of the iteration schemes (9.58) and (9.59).

For the scheme (9.60), noting that $K^L_{k,m} = P_kKP_{k+m}$ and $K^H_{k,m} = (P_{k+m} - P_k)KP_{k+m}$, we have that
$$\|K^L_{k,m} - K\| \to 0 \quad\text{and}\quad \|K^H_{k,m}\| \to 0 \quad\text{as } k\to\infty$$


uniformly for $m \in \mathbb{N}$. This leads to the fact that $\|(I - K^L_{k,m})^{-1}\|$ is uniformly bounded, and moreover
$$\|(I - K^L_{k,m})^{-1}K^H_{k,m}\| \to 0 \quad\text{as } k\to\infty,$$
which also yields the convergence of the iteration scheme (9.60).

Next we examine the rate of convergence of the iteration schemes. Let
$$q_{k,m} = \begin{cases} \|D_{k,m}^{-1}(U_{k,m} + L_{k,m})\| & \text{for scheme (9.58)},\\ \|(D_{k,m} - U_{k,m})^{-1}L_{k,m}\| & \text{for scheme (9.59)},\\ \|(I - K^L_{k,m})^{-1}K^H_{k,m}\| & \text{for scheme (9.60)}. \end{cases}$$
This is roughly the factor between two consecutive errors in the iteration. More precisely, for each $(k, m)$ the quantity $q_{k,m}$ is the least upper bound for the ratios of two consecutive errors, and for the ratios of differences of two consecutive iterates:
$$\frac{\|u^{(l)}_{k,m} - u_{k,m}\|}{\|u^{(l-1)}_{k,m} - u_{k,m}\|} \le q_{k,m} \quad\text{and}\quad \frac{\|u^{(l)}_{k,m} - u^{(l-1)}_{k,m}\|}{\|u^{(l-1)}_{k,m} - u^{(l-2)}_{k,m}\|} \le q_{k,m}.$$
We have shown that, for fixed $m \in \mathbb{N}$, $q_{k,m} \to 0$ as $k\to\infty$, and thus $q_{k,m} < 1$ for large enough $k$.

For the Jacobi-type scheme (9.58) we can show that this rate is independent of all large enough $m$. Indeed, from (9.65) and (9.66) above,
$$q_{k,m} \le \|D_{k,m}^{-1}\|\,\|U_{k,m} + L_{k,m}\| \le \|(I - K_k)^{-1}\|\bigl(\|(P_{k+m} - P_k)K\| + \|K(P_{k+m} - P_k)\|\bigr);$$
thus
$$\limsup_{m\to\infty} q_{k,m} \le \|(I - K_k)^{-1}\|\bigl(\|K - P_kK\| + \|K - KP_k\|\bigr) < 1 \qquad(9.67)$$
if $k$ is chosen large enough.

For the L–H-type scheme (9.60), noting that $\|(I - K^L_{k,m})^{-1}\|$ is uniformly bounded and $K^H_{k,m} = (P_{k+m} - P_k)KP_{k+m}$, we conclude that
$$\limsup_{m\to\infty} q_{k,m} \le c\|K - P_kK\| < 1$$
if $k$ is chosen large enough.

To derive an estimate for the rate of convergence of the Gauss–Seidel scheme (9.59) we need hypotheses (9.62) and (9.63). For any positive


integer $k$ we introduce the two quantities
$$r_k = \sum_{i\in\mathbb{N}_0}\|(I - P_{k+i})K\| \quad\text{and}\quad r'_k = \sum_{i\in\mathbb{N}_0}\|K(I - P_{k+i})\|.$$
Under assumption (9.62) we see that
$$\lim_{k\to\infty} r_k = \lim_{k\to\infty} r'_k = 0, \qquad(9.68)$$
and hypothesis (9.63) ensures that
$$\limsup_{m\to\infty}\|L_{k,m}\| \le r_k \quad\text{and}\quad \limsup_{m\to\infty}\|U_{k,m}\| \le r'_k. \qquad(9.69)$$
Therefore we conclude from (9.68) and (9.69) that, for the Gauss–Seidel scheme,
$$\limsup_{m\to\infty} q_{k,m} \le \limsup_{m\to\infty}\frac{\|(I - K_k)^{-1}\|\,\|L_{k,m}\|}{1 - \|(I - K_k)^{-1}\|\,\|U_{k,m}\|} \le \frac{\|(I - K_k)^{-1}\|\,r_k}{1 - \|(I - K_k)^{-1}\|\,r'_k} < 1 \qquad(9.70)$$
if $k$ is chosen large enough.

The discussion above leads to the results of the theorem.

We remark that conditions (9.62) and (9.63) are fulfilled in many applications when the spaces $X_n$ are chosen as piecewise polynomial spaces. The reader is referred to [108] for details and numerical examples.
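The behavior of the contraction factors $q_{k,m}$ in Theorem 9.12 can be observed numerically. The sketch below (a toy model, not the book's software: $X_n$ is the span of the first $2^n$ coordinates, and a randomly generated matrix with decaying entries stands in for a compact operator) computes the Jacobi and L–H factors for increasing $k$ at fixed $m$; they decrease toward zero, as statement (iii) predicts.

```python
import numpy as np

def P(level, N):
    """Orthogonal projection onto the first 2**level coordinates of R**N."""
    d = np.zeros(N)
    d[:2**level] = 1.0
    return np.diag(d)

m = 3
N = 2**(5 + m)                      # large enough to hold levels k = 1,...,5 plus m
rng = np.random.default_rng(3)
decay = 1.0 / np.arange(1, N + 1)
K = 0.3 * rng.standard_normal((N, N)) * np.outer(decay, decay)
I = np.eye(N)

q_jacobi, q_lh = [], []
for k in range(1, 6):
    Pk, Pkm = P(k, N), P(k + m, N)
    Kk = Pk @ K @ Pk
    UL = Pkm @ K @ Pkm - Kk                     # U_{k,m} + L_{k,m}
    KL, KH = Pk @ K @ Pkm, (Pkm - Pk) @ K @ Pkm
    q_jacobi.append(np.linalg.norm(np.linalg.solve(I - Kk, UL), 2))
    q_lh.append(np.linalg.norm(np.linalg.solve(I - KL, KH), 2))

print("Jacobi factors:", np.round(q_jacobi, 3))
print("L-H factors:   ", np.round(q_lh, 3))
```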

9.3 Bibliographical remarks

In Chapters 5–7, multiscale Galerkin, Petrov–Galerkin and collocation methods are developed for solving the Fredholm integral equation of the second kind with a weakly singular kernel. These methods yield linear systems having numerically sparse coefficient matrices, which, with appropriate truncation strategies, lead to fast discretizations of the integral equation (see, for example, [28, 64, 68, 94, 95, 202, 260, 261] and the references cited therein). This chapter describes fast solvers for the resulting discrete systems, including multilevel augmentation methods and multilevel iteration methods. The idea of the multilevel methods introduced in this chapter, based on the Gauss–Seidel iteration, was initiated in [67] in its early form for solving the Fredholm integral equation of the second kind. The results in [67] were developed further into the MAM and the MIM in [71] and [108], respectively. The abstract


framework of the MAM was established in [71]. Since then it has been used in various contexts (see [45, 51–59, 73, 76, 78, 187]).

We remark that other multilevel methods for solving integral equations were developed as early as the late 1970s (see, for example, [120] and the references cited therein). Methods for the data-sparse approximation of matrices were introduced in [120], resulting in the so-called hierarchical matrices (or, for short, H-matrices). For more information on this subject the reader is referred to [30, 122, 123] as well as [168–170]. Other work on multilevel and associated iteration methods can be found in Chapter 6 of [15].


10

Multiscale methods for nonlinear integral equations

In this chapter we develop multiscale methods for solving the Hammerstein equation and the nonlinear boundary integral equation resulting from a reformulation of a boundary value problem of the Laplace equation with nonlinear boundary conditions. Fast algorithms are proposed using the MAM in conjunction with matrix truncation strategies and techniques of numerical integration for the integrals appearing in the process of solving the equations. We prove that the proposed methods require only linear (up to a logarithmic factor) computational complexity and have the optimal convergence order.

In the section that follows we discuss the critical issues in solving nonlinear integral equations. This will shed light on the ideas developed later in this chapter. In Section 10.2 we introduce the MAM for solving Hammerstein equations and provide a complete convergence analysis for the proposed method. In Section 10.3 we develop the MAM for solving the nonlinear boundary integral equation that results from a reformulation of a boundary value problem of the Laplace equation with nonlinear boundary conditions. We present numerical experiments in Section 10.4.

10.1 Critical issues in solving nonlinear equations

Nonlinear integral equations model many problems of mathematical physics. The Hammerstein equation is a typical example of a nonlinear integral equation. Moreover, boundary value problems of the Laplace equation serve as mathematical models for many important applications. Making use of the fundamental solution of the equation, we can reformulate such boundary value problems as integral equations defined on the boundary (see Section 2.2.3). For linear boundary conditions the resulting boundary integral equations are linear, and their numerical solution has been studied extensively.


Nonlinear boundary conditions also arise in various applications. In these cases the reformulation of the corresponding boundary value problems leads to nonlinear integral equations.

The nonlinearity introduces difficulties into the numerical solution of the equation, which normally requires an iteration scheme that locally solves a linearized integral equation. The Newton iteration method and the secant method are often used as bases for designing numerical schemes for equations of this type. In this case the Jacobian matrices associated with the nonlinear operators have to be computed, and possibly refreshed, at each iteration step. Since the Jacobian matrices are usually dense and their size equals the dimension N of the approximate subspace of the solution, the computational complexity of such algorithms is at least O(N²). In [13] several popular numerical schemes for solving nonlinear integral equations are reviewed, all of which have computational complexity of at least O(N²). In particular, the multigrid method [120] has an operation count of O(N²) when the discretization of the system is included. When high approximation accuracy is desired, an approximate subspace of high dimension must be used, which demands a significantly large amount of computational effort. This is a bottleneck for the numerical solution of nonlinear integral equations.

In order to develop a fast numerical algorithm for solving nonlinear integral equations, we aim to achieve the following two goals. First, the Jacobian matrices involved in the algorithm should be approximated in a subspace of low dimension, which is fixed and much smaller than the whole approximate subspace. Second, all steps of the algorithm should be implemented with a number of calculations of no more than O(N log N). Accomplishing these two tasks is comparable to using the truncation strategy and the MAM introduced in the previous chapter for solving a linear integral equation.

In this chapter we develop an MAM for solving the Hammerstein equation and the nonlinear boundary integral equation which accomplishes the two tasks discussed earlier. This method requires the availability of a multilevel decomposition of the solution space and a projection from the solution space onto a finite-dimensional subspace at a given level. Solving the equation with high-order approximation accuracy requires us to solve an approximate equation, which is the original equation projected onto a subspace at a high level. We observe that most of the computational effort is spent on inverting the nonlinear integral operator when solving the nonlinear equation. The higher the level of the subspace used, the more accurate the approximate solution obtained and, at the same time, the more computational effort needed to invert the nonlinear integral operator at that level. To significantly reduce the computational cost, we propose not to invert directly the nonlinear integral


operator at the high level, but instead to invert it at a much lower (fixed) level and to compensate for the error which may result from this modification by a high-frequency correction term, obtained by solving a linear system at the high level. The correction does not involve any inversion of the nonlinear integral operator. This method results in a fast algorithm for solving the nonlinear integral equation which gives the optimal convergence order, the same as the approximation order of the approximate subspace (of the highest level). At the same time, the method requires only linear computational complexity, in the sense that the number of multiplications needed is proportional to the dimension of the approximate subspace.

The proposed method is based on a traditional projection method for solving nonlinear integral equations and a multilevel decomposition of the solution space (see [76]). The main idea comes from the treatment of the corresponding linear integral equation. Multilevel, or multiscale, numerical methods for solving linear integral equations were discussed in the previous chapter. Recall that in the MAM for the linear equation, the coefficient matrix of the linear system resulting from discretization of the linear integral equation via a multiscale decomposition of the solution space can be obtained from a small-sized matrix, a low-resolution representation of the integral operator, by augmenting it with new rows and columns representing the high frequencies of the integral operator. Making use of this multiscale structure of the matrix, the MAM inverts only the small-sized fixed matrix, with the compensation of matrix–vector multiplications. It was proved in Section 9.1 that this method gives an optimal order of convergence while reducing the computational complexity significantly. Motivated by the MAM for the linear integral equation, we develop an MAM for the nonlinear equation. The MAM, which will be described in Section 10.2 for solving the Hammerstein equation and in Section 10.3 for solving the nonlinear boundary integral equation resulting from a reformulation of a boundary value problem of the Laplace equation with nonlinear boundary conditions, needs only to invert the nonlinear operator in a much smaller subspace. This significantly reduces the computational complexity, to a nearly linear order. We also develop a fully discrete MAM for solving the nonlinear integral equation, using numerical integration methods to compute the integrals appearing in the resulting nonlinear system.

To close this section, we briefly mention the fast Fourier–Galerkin method for solving the nonlinear boundary integral equation developed recently in [53]. A fast algorithm for solving the resulting discrete nonlinear system was designed in that paper by integrating the techniques of matrix compression, numerical integration of oscillatory integrals and the multilevel augmentation method. It was proved there that the proposed method enjoys


an optimal convergence order and a nearly linear computational complexity. Numerical experiments were presented to confirm the theoretical estimates and to demonstrate the efficiency and accuracy of the proposed method.

10.2 Multiscale methods for the Hammerstein equation

In this section we describe fast multiscale methods for solving the Hammerstein equation. We prove that the proposed methods require only linear (up to a logarithmic factor) computational complexity and have the optimal convergence order. Two specific fast methods, based on the Galerkin projection and the collocation projection, are presented.

10.2.1 The multilevel augmentation method

We introduce in this subsection the MAM for solving the Hammerstein equation. The method is described here in operator form, only in terms of the projections and related spaces. The description of the method in terms of bases of the subspaces is postponed to Section 10.2.4. This subsection ends with a comparison of the proposed method with the well-known multigrid method.

For $d \in \mathbb{N}$ we let $\Omega \subseteq \mathbb{R}^d$ be a compact domain. Consider the Hammerstein equation
$$u(s) - \int_{\Omega} K(s, t)\,\psi(t, u(t))\,dt = f(s), \quad s \in \Omega, \qquad(10.1)$$
where $K$, $f$ and $\psi$ are given functions and $u$ is the unknown to be determined. We assume that $f \in C(\Omega)$, and for any $s \in \Omega$ we denote $K_s(t) = K(s, t)$. Throughout this section we assume, unless stated otherwise, that the following conditions on $K$ and $\psi$ are satisfied.

(H1) $\lim_{s\to\tau}\|K_s - K_\tau\|_1 = 0$ for any $\tau \in \Omega$, and $\sup_{s\in\Omega}\int_{\Omega}|K(s, t)|\,dt < \infty$.

(H2) $\psi(t, u)$ is continuous in $t \in \Omega$ and Lipschitz continuous in $u \in \mathbb{R}$; the partial derivative $D_u\psi$ of $\psi$ with respect to the variable $u$ exists and is Lipschitz continuous, that is, there exists a positive constant $L$ such that
$$|D_u\psi(t, u_1) - D_u\psi(t, u_2)| \le L|u_1 - u_2|;$$
and for any $u \in C(\Omega)$, $\psi(\cdot, u(\cdot)),\, D_u\psi(\cdot, u(\cdot)) \in C(\Omega)$.


We use $X$ to represent the Banach space $L^2(\Omega)$ or $L^\infty(\Omega)$. The linear integral operator $\mathcal{K}: X \to X$ is defined by
$$(\mathcal{K}u)(s) = \int_{\Omega} K(s, t)u(t)\,dt, \quad s \in \Omega,$$
and the nonlinear operator $\Psi: X \to X$ by
$$(\Psi u)(t) = \psi(t, u(t)), \quad t \in \Omega.$$
With these notations, equation (10.1) is written in operator form as
$$u - \mathcal{K}\Psi u = f. \qquad(10.2)$$

We assume that $\{X_n : n \in \mathbb{N}_0\}$ is a sequence of finite-dimensional subspaces of $X$ having the property that
$$Y \subseteq \overline{\bigcup_{n\in\mathbb{N}_0} X_n},$$
where, if $X = L^2(\Omega)$, then $Y = X$ and the notation "$\subseteq$" in the above relation is replaced by "$=$", and if $X = L^\infty(\Omega)$, then $Y = C(\Omega)$. For each $n \in \mathbb{N}_0$ let $P_n: X \to X_n$ be a projection. Throughout this section we assume that

(H3) the sequence $\{P_n : n \in \mathbb{N}_0\}$ converges pointwise to the identity operator on $Y$; that is, for any $x \in Y$,
$$\lim_{n\to\infty}\|P_nx - x\| = 0.$$

The projection method for solving equation (10.2) is to find $u_n \in X_n$ satisfying
$$u_n - P_n\mathcal{K}\Psi u_n = P_nf. \qquad(10.3)$$
The solution $u_n$ of equation (10.3) is called the projection solution of equation (10.2). The following result is concerned with the existence of the projection solution $u_n$ and its convergence property.

Theorem 10.1 Let $u^* \in X$ be an isolated solution of (10.2). If one is not an eigenvalue of the linear operator $(\mathcal{K}\Psi)'(u^*)$, then for sufficiently large $n$ equation (10.3) has a unique solution $u_n \in B(u^*, \delta)$ for some $\delta > 0$, with the property
$$c_1\|P_nu^* - u^*\| \le \|u_n - u^*\| \le c_2\|P_nu^* - u^*\| \qquad(10.4)$$
for some positive constants $c_1, c_2$.
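Once a basis is chosen, the projection equation (10.3) becomes a finite nonlinear system, typically solved by a Newton-type iteration. The following is a minimal collocation sketch on a toy problem; the kernel $e^{-|s-t|}$, the nonlinearity $\psi(t, u) = \sin u$ (which satisfies (H2)), the midpoint grid and the manufactured right-hand side are all our own choices for illustration, not the book's.

```python
import numpy as np

n = 64                                    # number of collocation points
t = (np.arange(n) + 0.5) / n              # midpoints of a uniform grid on [0, 1]
Kmat = (0.5 / n) * np.exp(-np.abs(t[:, None] - t[None, :]))   # discretized kernel

psi  = np.sin                             # psi(t, u) = sin u
dpsi = np.cos                             # its derivative with respect to u

u_star = np.cos(np.pi * t)                # manufactured discrete solution
f = u_star - Kmat @ psi(u_star)           # so u_star solves u - K psi(u) = f exactly

# Newton iteration for F(u) = u - K psi(u) - f,  F'(u) = I - K diag(psi'(u))
u = f.copy()
for _ in range(20):
    F = u - Kmat @ psi(u) - f
    J = np.eye(n) - Kmat * dpsi(u)[None, :]
    u = u - np.linalg.solve(J, F)

print("Newton error:", np.abs(u - u_star).max())
```

Each Newton step requires forming and inverting the dense Jacobian, which is exactly the O(N²) cost per step discussed in Section 10.1 and which the MAM below is designed to avoid.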


The above theorem was established in Theorem 2 of [255] and used in [165].

As we have discussed in Section 10.1, the projection method (10.3) requires inverting the nonlinear operator
$$I - P_n\mathcal{K}\Psi,$$
which is computationally challenging when the dimension of the subspace $X_n$ is large. In fact, once a basis of the subspace $X_n$ is chosen, equation (10.3) is equivalent to a system of nonlinear algebraic equations. Standard methods for solving the nonlinear system, such as the Newton method and its variations, linearize the equation locally and solve the nonlinear system by iteration. At each iteration step we need to invert the Jacobian matrix of $I - P_n\mathcal{K}\Psi$ evaluated at the solution of the previous step. The Jacobian matrix, different at each step, is dense and has size equal to the dimension $s(n)$ of the space $X_n$. The computational cost of solving equation (10.3) by a standard method is therefore $O(s(n)^2)$. When $s(n)$ is large, this becomes a bottleneck for the numerical solution of these equations.

Because of this, we propose not to solve equation (10.3) directly. Instead, we develop a multilevel method which requires inverting the nonlinear operator $I - P_k\mathcal{K}\Psi$ for a fixed $k$ much smaller than $n$. To this end we require that the space $X$ have a multiscale decomposition, that is, the subspaces $X_n$ are nested ($X_{n-1} \subset X_n$, $n \in \mathbb{N}$), so that $X_n$ is the direct sum of $X_{n-1}$ and its complement $W_n$. Specifically,
$$X_n = X_{n-1} \oplus W_n, \quad n \in \mathbb{N}. \qquad(10.5)$$
Accordingly, we have for all $n \in \mathbb{N}_0$ that
$$P_nP_{n+1} = P_n. \qquad(10.6)$$
The decomposition (10.5) can be applied repeatedly, so that for $n = k + m$, with $k \in \mathbb{N}_0$ fixed and $m \in \mathbb{N}_0$, we have the decomposition
$$X_{k+m} = X_k \oplus W_{k,m}, \qquad(10.7)$$
where
$$W_{k,m} = W_{k+1} \oplus \cdots \oplus W_{k+m}.$$

Under this hypothesis, we describe the multilevel method for obtaining an approximation of the solution of equation (10.3). Our goal is to obtain an approximation of the solution of equation (10.3) with $n = k + m$, where $k$ is small and fixed. We first solve equation (10.3) with $n = k$ exactly and obtain the solution $u_k$. Since $s(k)$ is very small in comparison with $s(k + m)$, the


computational cost of inverting the nonlinear operator $I - P_k\mathcal{K}\Psi$ is much less than that of inverting $I - P_{k+m}\mathcal{K}\Psi$. The next step is to obtain an approximation of the solution $u_{k+1} \in X_{k+1}$ of equation (10.3) with $n = k + 1$. For this purpose we decompose
$$u_{k+1} = u^L_{k+1} + u^H_{k+1}, \quad\text{with } u^L_{k+1} \in X_k \text{ and } u^H_{k+1} \in W_{k+1},$$
using the decomposition (10.7), and rewrite equation (10.3) with $n = k + 1$ as
$$P_k(I - \mathcal{K}\Psi)(u^L_{k+1} + u^H_{k+1}) = P_kf + (P_{k+1} - P_k)(f + \mathcal{K}\Psi u_{k+1}) - u^H_{k+1}. \qquad(10.8)$$

The second term on the right-hand side of equation (10.8) can be obtained approximately via the solution $u_k$ of the previous level. That is, we compute $u^H_{k,1} = (P_{k+1} - P_k)(f + \mathcal{K}\Psi u_k)$, where $u_{k,0} = u_k$, and note that $u^H_{k,1} \in W_{k+1}$. Observing that
$$u^H_{k,1} = u_{k+1} - u_k - P_{k+1}\mathcal{K}(\Psi u_{k+1} - \Psi u_k),$$
in equation (10.8) we replace $u^H_{k+1}$ and the second term on the right-hand side by $u^H_{k,1}$, to obtain an equation for $u^L_{k,1} \in X_k$:
$$P_k(I - \mathcal{K}\Psi)(u^L_{k,1} + u^H_{k,1}) = P_kf. \qquad(10.9)$$
The function $u^L_{k,1}$ can be viewed as a good approximation of $u^L_{k+1}$. We then obtain an approximation to the solution $u_{k+1}$ of equation (10.3) by setting
$$u_{k,1} = u^L_{k,1} + u^H_{k,1}.$$
Note that $u^L_{k,1}$ and $u^H_{k,1}$, respectively, represent the lower- and higher-frequency components of $u_{k,1}$. This procedure is repeated $m$ times to obtain an approximation $u_{k,m}$ of the solution $u_{k+m}$ of equation (10.3) with $n = k + m$.

Note that at each step we invert only the same nonlinear operator $P_k(I - \mathcal{K}\Psi)$. This makes the method very efficient computationally. We summarize the method described above in the following algorithm.

Algorithm 10.2 (The multilevel augmentation method: an operator form) Let $k$ be a fixed positive integer.

Step 1: Find the solution $u_k \in X_k$ of equation (10.3) with $n = k$. Set $u_{k,0} = u_k$ and $l = 1$.

Step 2: Compute
$$u^H_{k,l} = (P_{k+l} - P_k)(f + \mathcal{K}\Psi u_{k,l-1}). \qquad(10.10)$$


Step 3: Solve for $u^L_{k,l} \in X_k$ from the nonlinear equation
$$P_k(I - \mathcal{K}\Psi)(u^L_{k,l} + u^H_{k,l}) = P_kf. \qquad(10.11)$$

Step 4: Let $u_{k,l} = u^L_{k,l} + u^H_{k,l}$. Set $l \leftarrow l + 1$ and go back to Step 2 until $l = m$.
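Algorithm 10.2 can be sketched concretely with nested piecewise-constant spaces on $[0, 1]$, where each $P_n$ is the averaging projection onto $2^n$ equal subintervals. In the toy implementation below, the coarse-level equation (10.11) is solved by a simple fixed-point (Picard) iteration, which converges here because the toy kernel makes $P_k\mathcal{K}\Psi$ a contraction; Newton's method could equally be used, as in the surrounding text. The kernel, the nonlinearity, the levels and the manufactured right-hand side are all our own illustrative assumptions.

```python
import numpy as np

k, m = 3, 3
N = 2**(k + m)                           # finest grid, so P_{k+m} = I
t = (np.arange(N) + 0.5) / N
Kmat = (0.5 / N) * np.exp(-np.abs(t[:, None] - t[None, :]))
psi = np.sin                             # psi(t, u) = sin u

u_star = np.cos(np.pi * t)               # manufactured solution on the finest grid
f = u_star - Kmat @ psi(u_star)          # finest-level equation is solved by u_star

def proj(n):
    """Averaging projection onto piecewise constants with 2**n pieces."""
    b = N // 2**n
    return np.kron(np.eye(2**n), np.full((b, b), 1.0 / b))

Pk = proj(k)

def solve_coarse(uH):
    """Picard iteration for (10.11): uL = P_k f + P_k K psi(uL + uH)."""
    uL = np.zeros(N)
    for _ in range(100):
        uL = Pk @ (f + Kmat @ psi(uL + uH))
    return uL

# Step 1: coarse solution u_{k,0} = u_k
u = solve_coarse(np.zeros(N))
err_coarse = np.abs(u - u_star).max()

# Steps 2-4: augment level by level; only the level-k operator is inverted
for l in range(1, m + 1):
    uH = (proj(k + l) - Pk) @ (f + Kmat @ psi(u))    # (10.10)
    u = solve_coarse(uH) + uH                        # (10.11), then Step 4

err_mam = np.abs(u - u_star).max()
print("coarse error:", err_coarse, " MAM error:", err_mam)
```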

The output of Algorithm 10.2 is an approximation $u_{k,m}$ of the solution $u_{k+m}$ of equation (10.3). The approximation of $u_{k+m}$ is obtained beginning with an initial approximation $u_k$ and repeatedly inverting the operator $P_k(I - \mathcal{K}\Psi)$ to update the approximation recursively. The procedure completes in $m$ steps, and no iteration is needed if the operator $P_k(I - \mathcal{K}\Psi)$ can be inverted exactly; of course, the inversion of the nonlinear operator may itself require iterations. The key steps in this algorithm are Steps 2 and 3. In Step 2 we obtain the high-frequency component $u^H_{k,l}$ of the approximate solution $u_{k,l}$ from the approximation $u_{k,l-1}$ at the previous level by a functional evaluation. In Step 3 we solve for the low-frequency component $u^L_{k,l} \in X_k$ from (10.11), with the known high-frequency component $u^H_{k,l} \in W_{k,l}$ obtained from Step 2. For all $l \in \mathbb{Z}_{m+1}$ we invert the same nonlinear operator $P_k(I - \mathcal{K}\Psi)$ at the initial coarse level $k$. The computational costs for this are significantly lower than those of inverting the nonlinear operator $P_{k+m}(I - \mathcal{K}\Psi)$ at the final fine level $k + m$. We call $u_{k,m}$ the multilevel solution of equation (10.3), and $u^L_{k,m}$ and $u^H_{k,m}$, respectively, the lower- and higher-frequency components of $u_{k,m}$. In the next subsection we show that $u_{k,m}$ approximates the exact solution $u$ in the same order as $u_{k+m}$ does.

The multilevel solution $u_{k,m}$ is in fact the solution of a nonlinear operator equation. We present this observation in the next proposition.

Proposition 10.3 If $u^H_{k,m}$ is obtained from formula (10.10) and $u^L_{k,m} \in X_k$ is a solution of equation (10.11), then $u_{k,m} = u^L_{k,m} + u^H_{k,m}$ is a solution of the equation
$$(I - P_k\mathcal{K}\Psi)u_{k,m} = P_{k+m}f + (P_{k+m} - P_k)\mathcal{K}\Psi u_{k,m-1}. \qquad(10.12)$$
Conversely, for any solution $u_{k,m}$ of (10.12), $u^H_{k,m} = (P_{k+m} - P_k)u_{k,m}$ satisfies equation (10.10) and $u^L_{k,m} = P_ku_{k,m}$ is a solution of equation (10.11).

Proof Since $u^L_{k,m} \in X_k$, we have that $(P_{k+m} - P_k)u^L_{k,m} = 0$. It follows from (10.10) with $l = m$ that
$$(P_{k+m} - P_k)(u^L_{k,m} + u^H_{k,m}) = (P_{k+m} - P_k)(f + \mathcal{K}\Psi u_{k,m-1}).$$
Adding the above equation to equation (10.11) with $l = m$, and noticing that $P_{k+m}u_{k,m} = u_{k,m}$, yields equation (10.12).


Conversely, let $u_{k,m}$ be a solution of equation (10.12). We apply the operator $P_{k+m} - P_k$ to both sides of the equation and obtain (10.10) by utilizing
$$(P_{k+m} - P_k)P_k = 0 \quad\text{and}\quad (P_{k+m} - P_k)P_{k+m} = P_{k+m} - P_k,$$
where the second equation is a consequence of formula (10.6). We then apply $P_k$ to both sides of (10.12) and use $P_k(P_{k+m} - P_k) = 0$ to conclude that $u^L_{k,m} = P_ku_{k,m}$ is a solution of equation (10.11) with $l = m$.

Equation (10.12) differs from equation (10.3) with $n = k + m$ in two ways. (1) The right-hand sides of the two equations differ by the term $(P_{k+m} - P_k)\mathcal{K}\Psi u_{k,m-1}$. (2) The nonlinear operators on the left-hand sides of the two equations are different: $I - P_{k+m}\mathcal{K}\Psi$ for equation (10.3) and $I - P_k\mathcal{K}\Psi$ for equation (10.12). It requires much less computational effort to invert the nonlinear operator $I - P_k\mathcal{K}\Psi$ than the nonlinear operator $I - P_{k+m}\mathcal{K}\Psi$. It is these differences that lead to fast solutions of the Hammerstein equation. Moreover, equation (10.12) connects multiple levels (levels $k$, $k + m - 1$ and $k + m$) of the solution space and the range space. This is the basis on which the approximate solution $u_{k,m}$ has a good approximation property.

To close this subsection, we compare the MAM with the well-known multigrid method. From the point of view of Kress [177], the multigrid method for solving linear integral equations uses special techniques for residual correction, utilizing the information of the coarser levels to construct appropriate approximations of the inverse of the operator of the approximate equation to be solved. Specific choices of the approximate inverses lead to the V-cycle, W-cycle and cascadic multigrids. This idea has also been applied to the construction of iteration schemes for nonlinear integral equations; the related work is discussed in the review paper [7]. From this point of view, the proposed MAM might be considered as a nonconventional cascadic multigrid method, with significant differences from the traditional one.

In order to compare the proposed method with the multigrid method, we review the two-grid method, which was first introduced in [9] and reviewed in [7]. The two-grid method solves (10.3) by the Newton iteration

\[
u_n^{(\ell+1)} = u_n^{(\ell)} - \left[\mathcal{I} - (\mathcal{P}_n\mathcal{K}\Psi)'(u_n^{(\ell)})\right]^{-1}\left[u_n^{(\ell)} - \mathcal{P}_n\mathcal{K}\Psi(u_n^{(\ell)}) - \mathcal{P}_n f\right], \quad \ell = 1, 2, \ldots,
\]

combined with an approximation of $[\mathcal{I} - (\mathcal{P}_n\mathcal{K}\Psi)'(u_n^{(\ell)})]^{-1}$ using the information of coarser grids. In particular, one may choose a level $n' < n$ and use the following approximation:

\[
[\mathcal{I} - (\mathcal{P}_n\mathcal{K}\Psi)'(u_n^{(\ell)})]^{-1} \approx \mathcal{I} + [\mathcal{I} - (\mathcal{P}_{n'}\mathcal{K}\Psi)'(u_{n'})]^{-1}(\mathcal{P}_n\mathcal{K}\Psi)'(u_{n'}),
\]
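The quality of this coarse-level approximate inverse can be illustrated numerically. The sketch below is only illustrative: the matrices $A$ and $A_c$ are random stand-ins for the fine- and coarse-level Jacobians (the operators, sizes and perturbation are assumptions, not taken from the text). It relies on the exact identity $[\mathcal{I}-A]^{-1} = \mathcal{I} + [\mathcal{I}-A]^{-1}A$, in which the inner factors are replaced by their coarse-level surrogates:

```python
import numpy as np

# Coarse-grid approximate inverse, in the spirit of the two-grid
# approximation above.  A plays the role of (P_n K Psi)'(u_n) and Ac of a
# nearby coarse-level Jacobian (P_{n'} K Psi)'(u_{n'}); both are random
# stand-ins chosen only for illustration.
rng = np.random.default_rng(1)
n = 6
A = 0.1 * rng.standard_normal((n, n))
Ac = A + 0.001 * rng.standard_normal((n, n))  # coarse-level surrogate
I = np.eye(n)

exact = np.linalg.inv(I - A)
approx = I + np.linalg.inv(I - Ac) @ Ac       # two-grid approximate inverse

err = np.linalg.norm(exact - approx)
print(err)
```

Since the identity $[\mathcal{I}-A]^{-1} = \mathcal{I} + [\mathcal{I}-A]^{-1}A$ holds exactly, the error of the approximation is of the order of the distance between the fine- and coarse-level Jacobians.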

Downloaded from http://www.cambridge.org/core, Lund University Libraries, on 17 Oct 2016 at 16:31:53, subject to the Cambridge Core terms of use, available at http://www.cambridge.org/core/terms. http://dx.doi.org/10.1017/CBO9781316216637.012

10.2 Multiscale methods for the Hammerstein equation 365

where $u_{n'}$ is the solution of (10.3) with $n = n'$. Since the Jacobian matrix $(\mathcal{P}_{n'}\mathcal{K}\Psi)'(u_{n'})$ was obtained after solving (10.3) at level $n'$, and $(\mathcal{P}_n\mathcal{K}\Psi)'(u_{n'})$ remains unchanged during the iteration for level $n$, the above approximation avoids the computational cost of updating the Jacobian matrix. However, the need to establish $(\mathcal{P}_n\mathcal{K}\Psi)'(u_{n'})$ still requires $O(s(n)^2)$ computational cost. The multigrid scheme uses information from more than one lower level to approximate $[\mathcal{I}-(\mathcal{P}_n\mathcal{K}\Psi)'(u_n^{(\ell)})]^{-1}$, and it also needs $O(s(n)^2)$ computational cost. In other words, the idea of the multigrid method is as follows: first establish the Newton iteration method for equation (10.3), and then approximate the Jacobian matrix appearing in the iteration process by using information from coarse grids. Our proposed method introduces a new approximation strategy, which directly approximates the nonlinear operator $\mathcal{I} - \mathcal{P}_n\mathcal{K}\Psi$ in (10.3), not the Jacobian matrix in the Newton iteration, by $\mathcal{I} - \mathcal{P}_k\mathcal{K}\Psi$ at a fixed level $k < n$. This point can clearly be observed in Proposition 10.3. It is possible only because the solution space has a built-in multilevel structure. The proposed method requires only $O(s(n))$ (linear) computational cost.

10.2.2 Analysis of the multilevel algorithm

We analyze the MAM described in the last subsection. Specifically, we show that the multilevel solution $u_{k,m}$ exists and prove that it converges to the exact solution $u^*$ of equation (10.2) in the same order as the projection solution $u_{k+m}$ does.

We first present a result concerning the existence of the multilevel solution. The proof of this result is similar to that for the existence of the projection solution, which was established in [255].

Theorem 10.4 Let $u^*$ be an isolated solution of (10.2). If one is not an eigenvalue of $(\mathcal{K}\Psi)'(u^*)$, then there exists an integer $N$ such that for each $k > N$, if $u_{k,m-1}$ is given, the operator equation (10.12) has a unique solution $u_{k,m} \in B(u^*, \delta)$ for some $\delta > 0$ and for all $m \in \mathbb{N}_0$.

Proof Let $\mathcal{L} = (\mathcal{K}\Psi)'(u^*)$. By hypothesis, there exist an integer $N$ and a positive constant $\nu$ such that for all $k > N$, $(\mathcal{I} - \mathcal{P}_k\mathcal{L})^{-1}$ exists and $\|(\mathcal{I} - \mathcal{P}_k\mathcal{L})^{-1}\| \le \nu$. For $u, v \in \mathbb{X}$ we define

\[
\mathcal{R}(u, v) = \mathcal{K}\Psi(u) - \mathcal{K}\Psi(v) - \mathcal{L}(u - v).
\]

We then obtain from (10.2) and (10.12) that

\[
u_{k,m} - u^* = (\mathcal{I} - \mathcal{P}_k\mathcal{L})^{-1}\big[(\mathcal{P}_k - \mathcal{I})u^* + \mathcal{P}_k\mathcal{R}(u_{k,m}, u^*) + (\mathcal{P}_{k+m} - \mathcal{P}_k)(f + \mathcal{K}\Psi u_{k,m-1})\big]. \quad (10.13)
\]


We introduce the operator

\[
\mathcal{F}_{k,m}(v) = (\mathcal{I} - \mathcal{P}_k\mathcal{L})^{-1}\big[(\mathcal{P}_k - \mathcal{I})u^* + \mathcal{P}_k\mathcal{R}(v + u^*, u^*) + (\mathcal{P}_{k+m} - \mathcal{P}_k)(f + \mathcal{K}\Psi u_{k,m-1})\big].
\]

It follows from hypothesis (H2) that there exist two positive constants $M_1$, $M_2$ such that the estimates

\[
\|\mathcal{R}(v, u^*)\| \le M_1\|v - u^*\|^2
\]

and

\[
\|\mathcal{R}(v_1, u^*) - \mathcal{R}(v_2, u^*)\| \le M_2\left(\|v_1 - u^*\| + \frac{1}{2}\|v_1 - v_2\|\right)\|v_1 - v_2\|
\]

hold for all $v, v_1, v_2$ in a neighborhood of $u^*$. By utilizing this property and the pointwise convergence of the projections $\mathcal{P}_n$, we can show that there exists a positive constant $\delta$ such that $\mathcal{F}_{k,m}$ is a contractive mapping on the ball $B(0, \delta)$. The fixed-point theorem ensures that the fixed-point equation

\[
v = \mathcal{F}_{k,m}(v)
\]

has a unique solution in $B(0, \delta)$, or equivalently, that equation (10.12) has a unique solution $u_{k,m} \in B(u^*, \delta)$.

Next we turn to analyzing the convergence of the multilevel solution. We first prove a crucial technical lemma, which confirms that the error between $u_{k,m}$ and $u_{k+m}$ is bounded by the error between $u_{k,m-1}$ and $u_{k+m}$, with a factor depending on $k$ and $m$ which converges to zero uniformly for $m \in \mathbb{N}_0$ as $k \to \infty$. To this end, for $n \in \mathbb{N}_0$ we denote by $R_n$ the approximation error of $\mathbb{X}_n$ for $u^* \in \mathbb{X}$, namely

\[
R_n = R_n(u^*) = \inf\{\|u^* - v\|_{\mathbb{X}} : v \in \mathbb{X}_n\}.
\]

Lemma 10.5 Let $u^*$ be an isolated solution of (10.2). If one is not an eigenvalue of $(\mathcal{K}\Psi)'(u^*)$, then there exist a sequence of positive numbers $\{\alpha_{k,m} : k \in \mathbb{N}, m \in \mathbb{N}_0\}$ with $\lim_{k\to\infty} \alpha_{k,m} = 0$ uniformly for $m \in \mathbb{N}_0$, and a positive integer $N$, such that for all $k \ge N$ and $m \in \mathbb{N}_0$,

\[
\|u_{k,m} - u_{k+m}\| \le \alpha_{k,m}\|u_{k,m-1} - u_{k+m}\|.
\]

Proof It follows from (10.3) with $n = k + m$ and (10.12) that

\[
(\mathcal{I} - \mathcal{P}_{k+m}\mathcal{L})(u_{k,m} - u_{k+m}) = (\mathcal{P}_{k+m} - \mathcal{P}_k)\big[\mathcal{K}\Psi(u_{k,m-1}) - \mathcal{K}\Psi(u_{k,m})\big] + \mathcal{P}_{k+m}\mathcal{R}(u_{k,m}, u_{k+m}).
\]


We let $a_{k,m} = \|(\mathcal{P}_{k+m} - \mathcal{P}_k)\mathcal{K}\|$ and denote by $L$ the Lipschitz constant of the derivative $D_u\psi$. Hypothesis (H3) ensures that there exists a positive constant $p$ such that $\|\mathcal{P}_n\| \le p$ for all $n \in \mathbb{N}$. By hypothesis (H2) we have that

\[
\|u_{k,m} - u_{k+m}\| \le \frac{\nu L a_{k,m}}{1 - \nu\big(a_{k,m}L + pM_1\|u_{k,m} - u_{k+m}\|\big)}\,\|u_{k,m-1} - u_{k+m}\|.
\]

Since $a_{k,m} \to 0$ as $k \to \infty$, uniformly for $m \in \mathbb{N}_0$, there exists a positive integer $N_1$ such that $a_{k,m}L\nu < \frac{1}{6}$ for all $k > N_1$ and $m \in \mathbb{N}_0$. Since $R_n \to 0$ as $n \to \infty$, we can find a positive integer $N_2$ such that $\nu p M_1 \rho R_k < \frac{1}{6}$ for all $k > N_2$. We choose $\delta > 0$ as in Theorem 10.4 such that $\nu p M_1 \delta < \frac{1}{6}$ and (10.12) has a unique solution in $B(u^*, \delta)$ for all $k > N_3$, for some positive integer $N_3$. Consequently, for any $k > N = \max\{N_1, N_2, N_3\}$ we have that

\[
\nu\big(a_{k,m}L + pM_1\|u_{k,m} - u_{k+m}\|\big) \le \frac{1}{2},
\]

which implies that

\[
\|u_{k,m} - u_{k+m}\| \le 2\nu L a_{k,m}\|u_{k,m-1} - u_{k+m}\|.
\]

We conclude the desired result of this lemma with $\alpha_{k,m} = 2\nu L a_{k,m}$, by recalling that $a_{k,m} \to 0$ as $k \to \infty$, uniformly for $m \in \mathbb{N}_0$.

Recall that a sequence of non-negative numbers $\{\gamma_n : n \in \mathbb{N}_0\}$ is called a majorization sequence of $\{R_n : n \in \mathbb{N}_0\}$ if $\gamma_n \ge R_n$ for all $n \in \mathbb{N}_0$, and there exist a positive integer $N_0$ and a positive constant $\sigma$ such that for $n \ge N_0$,

\[
\frac{\gamma_{n+1}}{\gamma_n} \ge \sigma.
\]

Making use of the above lemma, we obtain the following important result on the convergence rate of the multilevel augmentation solution. The proof is similar to that of Theorem 9.2 for linear operator equations.

Theorem 10.6 Let $u^*$ be an isolated solution of (10.2) and let $\{\gamma_n : n \in \mathbb{N}_0\}$ be a majorization sequence of $\{R_n : n \in \mathbb{N}_0\}$. If one is not an eigenvalue of $(\mathcal{K}\Psi)'(u^*)$, then there exist a positive constant $\rho$ and a positive integer $N$ such that for all $k \ge N$ and $m \in \mathbb{N}_0$,

\[
\|u^* - u_{k,m}\| \le (\rho + 1)\gamma_{k+m}. \quad (10.14)
\]

Proof We prove the estimate (10.14) by induction on $m$. When $m = 0$, it is clear that

\[
\|u^* - u_{k,0}\| = \|u^* - u_k\| \le \rho R_k \le \rho\gamma_k
\]


for all $k > N_0$. Suppose that the estimate (10.14) holds for $m - 1$. By the triangle inequality and the induction hypothesis,

\[
\|u_{k,m-1} - u_{k+m}\| \le \|u_{k,m-1} - u^*\| + \|u_{k+m} - u^*\| \le (\rho + 1)\gamma_{k+m-1} + \rho\gamma_{k+m} \le \left(\rho + \frac{\rho + 1}{\sigma}\right)\gamma_{k+m}.
\]

Choose $N$ such that for all $k > N$ the estimate in Lemma 10.5 holds and $\alpha_{k,m}\left(\rho + \frac{\rho + 1}{\sigma}\right) < 1$. Combining the estimate above with the estimate in Lemma 10.5 yields the inequality

\[
\|u_{k,m} - u_{k+m}\| \le \gamma_{k+m}.
\]

Again by using the triangle inequality, we obtain that

\[
\|u_{k,m} - u^*\| \le \|u_{k,m} - u_{k+m}\| + \|u_{k+m} - u^*\| \le (\rho + 1)\gamma_{k+m},
\]

which completes the induction procedure.

When a specific projection method is given, the associated majorization sequence is known. In this case, Theorem 10.6 may lead to a convergence order estimate for the corresponding MAM. The convergence orders of the MAMs based on the Galerkin projection and on the collocation projection will be presented in Section 10.2.4.

10.2.3 The discrete multilevel augmentation method

In Section 10.2.1 we established an operator form of the MAM, and in Section 10.2.2 we proved the existence and convergence properties of the approximate solution obtained from the method. It was shown that the proposed method gives approximate solutions with the same order of accuracy as the classical projection methods. The purpose of this section is to describe a discrete version of the MAM when an appropriate basis of the approximate subspace is chosen, and to estimate the computational cost of the algorithm.

Suppose that $\{\mathbb{L}_n : n \in \mathbb{N}_0\}$ is a sequence of subspaces of $\mathbb{X}^*$ which has the properties

\[
\mathbb{L}_n \subset \mathbb{L}_{n+1}, \quad \dim(\mathbb{L}_n) = \dim(\mathbb{X}_n), \quad n \in \mathbb{N}_0.
\]

It follows from the nestedness property that there is a decomposition

\[
\mathbb{L}_{k+m} = \mathbb{L}_k \oplus \mathbb{V}_{k,m}, \quad\text{where } \mathbb{V}_{k,m} = \mathbb{V}_{k+1} \oplus \cdots \oplus \mathbb{V}_{k+m}.
\]


We let $w(0) = \dim(\mathbb{X}_0)$ and $w(i) = \dim(\mathbb{W}_i)$ for $i > 0$, and suppose that

\[
\mathbb{X}_0 = \operatorname{span}\{w_{0j} : j \in \mathbb{Z}_{w(0)}\}, \quad \mathbb{L}_0 = \operatorname{span}\{\ell_{0j} : j \in \mathbb{Z}_{w(0)}\}
\]

and

\[
\mathbb{W}_i = \operatorname{span}\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}, \quad \mathbb{V}_i = \operatorname{span}\{\ell_{ij} : j \in \mathbb{Z}_{w(i)}\}, \quad i > 0.
\]

By using the index set $U_n = \{(i, j) : j \in \mathbb{Z}_{w(i)}, i \in \mathbb{Z}_{n+1}\}$, we have that

\[
\mathbb{X}_n = \operatorname{span}\{w_{ij} : (i, j) \in U_n\}, \quad \mathbb{L}_n = \operatorname{span}\{\ell_{ij} : (i, j) \in U_n\}, \quad n \in \mathbb{N}_0.
\]

Recalling $s(n) = \dim(\mathbb{X}_n)$, we then observe that $U_n$ has cardinality $\operatorname{card}(U_n) = s(n)$. We further assume that the elements of $U_n$ are ordered lexicographically.

For any $v \in \mathbb{X}_{k+m}$ we have a unique expansion

\[
v = \sum_{(i,j)\in U_{k+m}} v_{ij} w_{ij}.
\]

The vector $\mathbf{v} = [v_{ij} : (i, j) \in U_{k+m}]$ is called the representation vector of $v$. Thus, for the solution $u_{k,m}$ of (10.12), its representation vector is given by $\mathbf{u}_{k,m} = [(u_{k,m})_{ij} : (i, j) \in U_{k+m}]$. Setting $U_{k,m} = U_{k+m}\setminus U_k$, we obtain that

\[
U_{k,m} = \{(i, j) : j \in \mathbb{Z}_{w(i)}, \ i \in \mathbb{Z}_{k+m+1}\setminus\mathbb{Z}_{k+1}\}.
\]

Consequently, we have the representations

\[
u^L_{k,m} = \sum_{(i,j)\in U_k} (u_{k,m})_{ij} w_{ij} \quad\text{and}\quad u^H_{k,m} = \sum_{(i,j)\in U_{k,m}} (u_{k,m})_{ij} w_{ij}.
\]
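As a small concrete illustration, the index sets $U_n$, $U_k$ and $U_{k,m}$ can be generated mechanically once the dimensions $w(i)$ are known. The sketch below uses an assumed Haar-type setting ($\mu = 2$, $r = 1$, so $w(0) = 1$ and $w(i) = 2^{i-1}$ for $i > 0$); this particular choice is for illustration only:

```python
# Index sets U_n = {(i, j) : j in Z_{w(i)}, i in Z_{n+1}} for an assumed
# Haar-type setting: mu = 2, r = 1, w(0) = 1, w(i) = 2**(i-1) for i > 0.
def w(i):
    return 1 if i == 0 else 2 ** (i - 1)

def U(n):
    return [(i, j) for i in range(n + 1) for j in range(w(i))]

k, m = 2, 3
U_k = U(k)
U_full = U(k + m)
U_km = [p for p in U_full if p not in U_k]   # U_{k,m} = U_{k+m} \ U_k

# card(U_n) = s(n) = 2**n in this setting
print(len(U_k), len(U_full), len(U_km))     # 4 32 28
```

The lexicographic ordering of the list comprehensions matches the ordering assumed in the text, and the split of $U_{k+m}$ into $U_k$ and $U_{k,m}$ mirrors the decomposition $u_{k,m} = u^L_{k,m} + u^H_{k,m}$.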

It follows from the property of $\mathbb{L}_n$ that equation (10.11) is equivalent to the nonlinear system

\[
\left\langle \ell_{i'j'}, (\mathcal{I} - \mathcal{K}\Psi)\Big(\sum_{(i,j)\in U_k} (u_{k,l})_{ij} w_{ij} + u^H_{k,l}\Big)\right\rangle = \langle \ell_{i'j'}, f\rangle, \quad (i', j') \in U_k. \quad (10.15)
\]

In order to convert (10.10) into its equivalent discrete form, we first prove the following lemma.

Lemma 10.7 For $v \in \mathbb{X}$, the equation

\[
(\mathcal{P}_{k+l} - \mathcal{P}_k)v = 0 \quad (10.16)
\]

is equivalent to

\[
\langle \ell_{i'j'}, v\rangle = 0, \quad (i', j') \in U_{k,l}, \quad (10.17)
\]

if and only if for all $v \in \mathbb{X}$,

\[
\langle \ell_{i'j'}, \mathcal{P}_k v\rangle = 0, \quad (i', j') \in U_{k,l}. \quad (10.18)
\]

Proof We observe that, since $\ell_{i'j'} \in \mathbb{L}_{k+l}$ for $(i', j') \in U_{k,l}$, we have that $\langle \ell_{i'j'}, v - \mathcal{P}_{k+l}v\rangle = 0$ for $(i', j') \in U_{k,l}$. Hence (10.18) is equivalent to

\[
\langle \ell_{i'j'}, v - (\mathcal{P}_{k+l} - \mathcal{P}_k)v\rangle = 0, \quad (i', j') \in U_{k,l}. \quad (10.19)
\]

We first show the sufficient condition. To this end, we assume that $v \in \mathbb{X}$ satisfies equation (10.18). It follows from $\mathbb{L}_{k+l} \cap \mathbb{X}_{k+l}^\perp = \{0\}$ and equation (10.18) that for any $(i', j') \in U_{k,l}$ there exists $w \in \mathbb{W}_{k,l}$ such that $\langle \ell_{i'j'}, w\rangle \neq 0$. Therefore, we observe that $\mathbb{V}_{k,l} \cap \mathbb{W}_{k,l}^\perp = \{0\}$. Now we suppose $v \in \mathbb{X}$ is a solution of equation (10.16). Then it follows directly from equation (10.19) that $v$ satisfies equation (10.17). Conversely, if $v$ satisfies equation (10.17) but is not a solution of equation (10.16), then we can find $\ell \in \mathbb{V}_{k,l}$ such that $\langle \ell, (\mathcal{P}_{k+l} - \mathcal{P}_k)v\rangle \neq 0$, and thus $\langle \ell, v - (\mathcal{P}_{k+l} - \mathcal{P}_k)v\rangle \neq 0$, which contradicts equation (10.19).

It remains to prove the necessary condition. For any $v \in \mathbb{X}$, we verify directly that $v - (\mathcal{P}_{k+l} - \mathcal{P}_k)v$ is a solution of equation (10.16), and hence it is also a solution of equation (10.17). Thus we obtain equation (10.19). By the equivalence of conditions (10.18) and (10.19), we prove equation (10.18).

The next theorem describes the condition under which equation (10.10) is converted into its equivalent discrete form.

Theorem 10.8 The following statements are equivalent:

(i) Equation (10.10) is equivalent to

\[
\langle \ell_{i'j'}, u^H_{k,l}\rangle = \langle \ell_{i'j'}, f + \mathcal{K}\Psi u_{k,l-1}\rangle, \quad (i', j') \in U_{k,l}. \quad (10.20)
\]

(ii) $\mathbb{V}_p \subset \mathbb{X}_k^\perp$, $p > k$.
(iii) For any $v \in \mathbb{X}_k$, $\langle \ell_{i'j'}, v\rangle = 0$, $(i', j') \in U_{k,l}$.

Proof The equivalence of statements (ii) and (iii) is clear. It suffices to prove the equivalence of statements (i) and (ii). Note that equation (10.10) is equivalent to

\[
(\mathcal{P}_{k+l} - \mathcal{P}_k)\big[u^H_{k,l} - (f + \mathcal{K}\Psi u_{k,l-1})\big] = 0.
\]

By Lemma 10.7, the above equation is equivalent to (10.20) if and only if statement (ii) holds.

We are now ready to present the discrete form of the MAM. For $(i', j'), (i, j) \in U_{k,l}$ we define the matrix $\mathbf{E}_{k,l} = [\langle \ell_{i'j'}, w_{ij}\rangle : (i', j'), (i, j) \in U_{k,l}]$. Using this notation, equation (10.20) can be rewritten as

\[
\mathbf{E}_{k,l}\mathbf{u}^H_{k,l} = \mathbf{f}_{k,l}, \quad (10.21)
\]

where $\mathbf{u}^H_{k,l} = [(u_{k,l})_{ij} : (i, j) \in U_{k,l}]$ and $\mathbf{f}_{k,l} = [\langle \ell_{i'j'}, f + \mathcal{K}\Psi u_{k,l-1}\rangle : (i', j') \in U_{k,l}]$.

Algorithm 10.9 (The multilevel augmentation method: a discrete form) Let $k$ be a fixed positive integer.

Step 1: Solve the nonlinear system

\[
\left\langle \ell_{i'j'}, (\mathcal{I} - \mathcal{K}\Psi)\Big(\sum_{(i,j)\in U_k} (u_k)_{ij} w_{ij}\Big)\right\rangle = \langle \ell_{i'j'}, f\rangle, \quad (i', j') \in U_k, \quad (10.22)
\]

and obtain the solution $\mathbf{u}_k = [(u_k)_{ij} : (i, j) \in U_k]$. Let $u_{k,0} = u_k$ and $l = 1$.

Step 2: Solve the linear system (10.21) to obtain $\mathbf{u}^H_{k,l}$ and define

\[
u^H_{k,l} = \sum_{(i,j)\in U_{k,l}} (u_{k,l})_{ij} w_{ij}.
\]

Step 3: Solve the nonlinear system (10.15) to obtain $\mathbf{u}^L_{k,l} = [(u_{k,l})_{ij} : (i, j) \in U_k]$. Define $u^L_{k,l} = \sum_{(i,j)\in U_k} (u_{k,l})_{ij} w_{ij}$ and $u_{k,l} = u^L_{k,l} + u^H_{k,l}$.

Step 4: Set $l \leftarrow l + 1$ and go back to Step 2.
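To make the flow of the algorithm concrete, the following toy sketch applies the multilevel update (10.12) to a one-dimensional Hammerstein equation discretized on a fine grid, with the projections realized as block averaging (piecewise-constant $L^2$ projection) and the coarse-level nonlinear solves done by simple fixed-point iteration. The kernel, nonlinearity and right-hand side are illustrative assumptions, not taken from the text:

```python
import numpy as np

# Toy MAM for u - K psi(u) = f on [0,1] (kernel, psi and f are assumed
# examples).  The fine grid has N = 2**(k+m) cells; P_j is the L2
# projection onto piecewise constants over 2**j cells (block averaging).
def block_average(N, cells):
    P = np.zeros((N, N))
    size = N // cells
    for b in range(cells):
        P[b*size:(b+1)*size, b*size:(b+1)*size] = 1.0 / size
    return P

k, m = 2, 4
N = 2 ** (k + m)
t = (np.arange(N) + 0.5) / N
K = 0.3 * np.exp(-np.abs(t[:, None] - t[None, :])) / N   # midpoint weights 1/N
psi = np.tanh
f = np.sin(2 * np.pi * t)

def solve_fixed_point(P, rhs, u0, iters=200):
    # Picard iteration for u = rhs + P K psi(u); it converges here
    # because ||K|| * Lip(psi) < 1.
    u = u0.copy()
    for _ in range(iters):
        u = rhs + P @ (K @ psi(u))
    return u

Pk = block_average(N, 2 ** k)
u = solve_fixed_point(Pk, Pk @ f, np.zeros(N))       # Step 1: u_{k,0}
for l in range(1, m + 1):
    Pl = block_average(N, 2 ** (k + l))
    # (10.12): u - P_k K psi(u) = P_k f + (P_{k+l} - P_k)(f + K psi(u_prev))
    rhs = Pk @ f + (Pl - Pk) @ (f + K @ psi(u))      # Step 2 (linear part)
    u = solve_fixed_point(Pk, rhs, u)                # Step 3 (coarse solve)

u_ref = solve_fixed_point(np.eye(N), f, np.zeros(N)) # full fine-level solution
print(np.max(np.abs(u - u_ref)))
```

The printed error is small because each augmentation step only re-solves the fixed coarse-level nonlinear operator, as in Steps 2 and 3 above, while the high-frequency component is obtained from a linear update.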

A crucial procedure in Algorithm 10.9 is repeatedly solving the nonlinear system (10.15). A typical approach to solving this system is the Newton iteration or the secant method. There are two strategies for implementing the Newton iteration or secant method. The first strategy is to update the Jacobian matrix of the nonlinear system (10.15) at each step. The drawback of this strategy is that updating the Jacobian matrix is time-consuming; this can be mitigated by reducing the frequency of the updates. The second strategy is that, in Step 3 of Algorithm 10.9, we use the same Jacobian matrix obtained in Step 1 when solving the nonlinear system (10.15). This modification avoids updating the Jacobian matrix, and thus it significantly reduces the computational cost. It may affect the approximation accuracy; however, this can be compensated for by a few more iterations. The numerical results to be presented later show that this strategy indeed speeds up the computation significantly while preserving the approximation accuracy.
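The frozen-Jacobian (chord) strategy can be illustrated on a small nonlinear system. The system below is an illustrative stand-in for (10.15): $F(u) = u - A\tanh(u) - b$ with random $A$ and $b$ (assumptions, not data from the text). The chord iteration factors the Jacobian once and compensates with extra iterations:

```python
import numpy as np

# Chord (frozen-Jacobian) iteration vs. full Newton for a small
# nonlinear system F(u) = u - A tanh(u) - b = 0; A and b are
# illustrative random data, not taken from the text.
rng = np.random.default_rng(0)
n = 8
A = 0.05 * rng.standard_normal((n, n))
b = rng.standard_normal(n)
F = lambda u: u - A @ np.tanh(u) - b
J = lambda u: np.eye(n) - A / np.cosh(u) ** 2   # I - A diag(sech^2 u)

u_chord = np.zeros(n)
J0 = J(u_chord)                      # Jacobian formed once and reused
for _ in range(60):                  # a few extra iterations compensate
    u_chord -= np.linalg.solve(J0, F(u_chord))

u_newton = np.zeros(n)
for _ in range(10):                  # Jacobian updated every step
    u_newton -= np.linalg.solve(J(u_newton), F(u_newton))

print(np.max(np.abs(F(u_chord))), np.max(np.abs(F(u_newton))))
```

Both iterations drive the residual to machine precision; the chord variant trades its slower (linear) convergence for never having to rebuild or refactor the Jacobian, which is the point of the second strategy.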

In the rest of this subsection we estimate the computational cost of Algorithm 10.9, which is measured by the number of multiplications used in the computation. We suppose that the initial approximate solution $u_k$ has been obtained and we intend to find $u_{k,m}$ by using Algorithm 10.9. According to the algorithm, we divide the computation into $m$ stages. For stage $i$, with $i = 1, 2, \ldots, m$, we perform the following procedures:

(1) Generate the coefficient matrix $\mathbf{E}_{k,i}$.
(2) Compute the vector $\mathbf{f}_{k,i}$.
(3) Solve the linear system (10.21).
(4) Solve the nonlinear system (10.15).

The computational cost of Algorithm 10.9 is estimated according to each procedure described above. We denote by $M_{k,i,j}$ the computational cost of procedure $j$ at stage $i$. We assume that the following hypothesis holds:

(A0) Computing the integrals that appear in $\mathbf{f}_{k,i}$ requires a constant computational cost per integral.

Specifically, we identify $M_{k,i,1}$ and $M_{k,i,2}$, respectively, with the number of entries of the matrix $\mathbf{E}_{k,i}$ and the number of components of the vector $\mathbf{f}_{k,i}$. Moreover, $M_{k,i,3}$ is the number of multiplications used for solving the linear system (10.21), and $M_{k,i,4}$ is the number of multiplications used for solving the nonlinear system (10.15). Since in procedure (4) we solve the same nonlinear system, only with a different function $u^H_{k,l}$, we conclude that $M_{k,i,4} = O(1)$. It remains to estimate $M_{k,i,3}$. For this purpose we make the following additional hypotheses:

(A1) There exists a positive integer $\mu > 1$ such that for any $n$ the dimension $s(n)$ of $\mathbb{X}_n$ is equivalent to $\mu^n$, that is, $s(n) \sim \mu^n$.

(A2) For any $i$, the matrix $\mathbf{E}_{k,i}$ is an upper triangular sparse matrix with $\mathcal{N}(\mathbf{E}_{k,i}) = O(s(k+i))$, where $\mathcal{N}(\mathbf{A})$ denotes the number of nonzero elements of the matrix $\mathbf{A}$.

We present estimates for $M_{k,i,j}$, $j = 1, 2, 3$, in the next proposition.

Proposition 10.10 If assumptions (A0), (A1) and (A2) hold, then for any $i > 0$ and $j = 1, 2, 3$,

\[
M_{k,i,j} = O(s(k+i)).
\]

Proof The estimates for $j = 1, 2$ are clear. The number of multiplications in finding the solution of the linear system (10.21) equals the number of nonzero entries of the coefficient matrix $\mathbf{E}_{k,i}$. According to assumption (A2), $\mathcal{N}(\mathbf{E}_{k,i}) = O(s(k+i))$. The proof is completed using assumption (A1).


Next we let $M_{k,m,j}$ denote the total computational cost related to procedure $j$ for obtaining the solution $u_{k,m}$.

Corollary 10.11 If assumptions (A0), (A1) and (A2) hold, then

\[
M_{k,m,j} = O(s(k+m)), \quad j = 1, 2, 3, \quad\text{and}\quad M_{k,m,4} = O(m).
\]

Proof For $j = 1, 2, 3, 4$ we have that

\[
M_{k,m,j} = \sum_{i=1}^{m} M_{k,i,j}.
\]

The result of this corollary follows directly from the equation above and Proposition 10.10.

Theorem 10.12 If assumptions (A0), (A1) and (A2) hold, then the total computational cost of obtaining the solution $u_{k,m}$ by Algorithm 10.9 is of order $O(s(k+m))$, where $s(k+m)$ is the dimension of the subspace $\mathbb{X}_{k+m}$.

Proof The total computational cost of obtaining the solution $u_{k,m}$ by Algorithm 10.9 is given by the sum of $M_{k,m,j}$ over $j = 1, 2, 3, 4$. The desired estimate follows from Corollary 10.11 and the equivalence of $s(k+m)$ and $\mu^{k+m}$.
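The reason the stage costs sum to $O(s(k+m))$ is the geometric growth of the level dimensions: with $s(k+i) \sim \mu^{k+i}$, the bound $\sum_{i=1}^{m}\mu^{k+i} \le \frac{\mu}{\mu-1}\mu^{k+m}$ holds. A quick numeric check of this elementary bound (the values of $\mu$, $k$ and $m$ are arbitrary illustrations):

```python
# Geometric-series bound behind Theorem 10.12: the summed stage costs
# stay within a constant factor mu/(mu-1) of the finest-level dimension.
mu, k = 2, 3
for m in range(1, 12):
    total = sum(mu ** (k + i) for i in range(1, m + 1))
    assert total <= mu / (mu - 1) * mu ** (k + m)
print("total cost within a constant factor of s(k+m)")
```

This is exactly why the finest level dominates and the overall cost is linear in the dimension of the approximation space.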

The above theorem reveals that the computational cost of Algorithm 10.9 is linear with respect to the dimension of the approximation space under assumptions (A0)–(A2).

10.2.4 The Galerkin and collocation-based methods

In this subsection we present two specific MAMs: one based on the Galerkin method and the other based on the collocation method.

We first recall a multiscale partition $\{\Delta_n : n \in \mathbb{N}_0\}$ of the domain $\Omega$. For each scale $n$, the partition $\Delta_n$ consists of a family of subsets $\{\Omega_{n,i} : i \in \mathbb{Z}_{e(n)}\}$, where $e(n)$ denotes the cardinality of $\Delta_n$, with the properties that

\[
\operatorname{meas}(\Omega_{n,i} \cap \Omega_{n,i'}) = 0, \quad i, i' \in \mathbb{Z}_{e(n)}, \ i \neq i', \quad\text{and}\quad \bigcup_{i\in\mathbb{Z}_{e(n)}} \Omega_{n,i} = \Omega.
\]

The multiscale property requires that for $n > n'$ and $i \in \mathbb{Z}_{e(n)}$ there exists a unique $i' \in \mathbb{Z}_{e(n')}$ such that $\Omega_{n,i} \subset \Omega_{n',i'}$. We also demand that the partition is "shrinking" at a proper rate, that is, there is a positive constant $\tau \in (0, 1)$ such that for sufficiently large $n$, $d(\Delta_n) \le \tau^n$, where $d(\Delta_n)$ denotes the largest diameter of the subsets in $\Delta_n$. The growth of the number of elements in $\Delta_n$ is required to satisfy $e(n) = O(\mu^n)$. When $\Omega$ can be decomposed into a union of simplices, a multiscale partition of $\Omega$ that meets the above requirements is given in Sections 4.2, 5.1 and 7.1 (cf. [69, 75]).

We now describe the multilevel Galerkin scheme. In this case we choose $\mathbb{X} = L^2(\Omega)$ and the subspaces $\mathbb{X}_n$ as spaces of piecewise polynomials of order $r$ associated with a multiscale partition $\{\Delta_n : n \in \mathbb{N}_0\}$ of the domain $\Omega$. The multiscale partition described above guarantees the nestedness of the space sequence $\{\mathbb{X}_n : n \in \mathbb{N}_0\}$. The space $\mathbb{X}_{n+1}$ can be decomposed as the orthogonal sum of $\mathbb{X}_n$ and its orthogonal complement $\mathbb{W}_{n+1}$ in $\mathbb{X}_{n+1}$. We write $\mathbb{W}_0 = \mathbb{X}_0$. Then $\mathbb{X}_n$ can be written as the orthogonal direct sum of the $\mathbb{W}_i$ for $i \in \mathbb{Z}_{n+1}$.

The projection $\mathcal{P}_n$ is naturally chosen as the orthogonal projection from $\mathbb{X}$ onto $\mathbb{X}_n$. As a result, equation (10.3) becomes the Galerkin scheme for solving (10.2), and accordingly Algorithm 10.2 is a MAM based on the Galerkin scheme. In this case, Theorem 10.6 takes the following form.

Theorem 10.13 Let $u^*$ be an isolated solution of (10.2). If one is not an eigenvalue of $(\mathcal{K}\Psi)'(u^*)$ and if $u^* \in W^{r,2}(\Omega)$, then there exist a positive constant $c$ and a positive integer $N$ such that for all $k \ge N$ and $m \in \mathbb{N}_0$,

\[
\|u^* - u_{k,m}\|_2 \le c\tau^{r(k+m)}\|u^*\|_{r,2}.
\]

Proof We may define a sequence $\{\gamma_n\}$ by

\[
\gamma_n = c\tau^{rn}\|u^*\|_{r,2},
\]

where $c$ is a positive constant, independent of $n$, such that $R_n \le \gamma_n$, recalling that $R_n$ is the error of the best approximation to $u^*$ from $\mathbb{X}_n$. Since $u^* \in W^{r,2}(\Omega)$, we conclude that for all $n \in \mathbb{N}_0$,

\[
\frac{\gamma_{n+1}}{\gamma_n} = \tau^r.
\]

Hence the sequence $\{\gamma_n\}$ is a majorization sequence of $\{R_n : n \in \mathbb{N}_0\}$. The desired result therefore follows directly from Theorem 10.6.

We next comment on the discrete form of the MAM based on the Galerkin scheme. For any $i \ge 0$ we choose an orthonormal basis $\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}$ for the space $\mathbb{W}_i$, so that $\{w_{ij} : (i, j) \in U_n\}$ forms an orthonormal basis of $\mathbb{X}_n$. The construction of these bases may be found in Chapter 4. This choice of the approximation spaces and their bases permits the hypotheses (A1) and (A2) described in Section 10.2.3 to be satisfied. Since $\mathbb{X}^* = L^2(\Omega)$, for $i \ge 0$ and $j \in \mathbb{Z}_{w(i)}$ the functional $\ell_{ij}$ may be chosen as

\[
\langle \ell_{ij}, x\rangle = (w_{ij}, x) \quad\text{for } x \in L^2(\Omega),
\]


where $(\cdot, \cdot)$ denotes the inner product in $L^2$. In this setting, the nonlinear system (10.15) has the form

\[
\left(w_{i'j'}, (\mathcal{I} - \mathcal{K}\Psi)\Big(\sum_{(i,j)\in U_k} u_{ij} w_{ij} + u^H_{k,m}\Big)\right) = (w_{i'j'}, f), \quad (i', j') \in U_k, \quad (10.23)
\]

and the matrix $\mathbf{E}_{k,m}$ becomes the identity matrix. Hence equation (10.21) reduces to

\[
\mathbf{u}^H_{k,m} = \mathbf{f}_{k,m}. \quad (10.24)
\]

Algorithm 10.9 with the Galerkin method thus has a very simple form.

We now turn our attention to the MAM based on the collocation method.

The multiscale collocation method that we describe here was first introduced in [69]. We set $\mathbb{X} = L^\infty(\Omega)$ and, as in the Galerkin case, we choose the subspaces $\mathbb{X}_n$ as spaces of piecewise polynomials of order $r$ associated with a multiscale partition $\{\Delta_n : n \in \mathbb{N}_0\}$ of the domain $\Omega$. Again we need the orthogonal complement $\mathbb{W}_n$ of $\mathbb{X}_{n-1}$ in $\mathbb{X}_n$ and the same orthogonal decomposition of $\mathbb{X}_n$ as in the Galerkin case. However, we do not demand the orthogonality of basis functions within the subspace $\mathbb{W}_n$.

We also need to describe the collocation functionals. To this end, we recall that $\mathbb{Y} = C(\Omega)$ and a collocation functional is chosen as an element of $\mathbb{Y}^*$. Specifically, the space $\mathbb{L}_n$ in this case is spanned by a basis whose elements are point-evaluation functionals. Note that $\mathbb{L}_n$ has a decomposition $\mathbb{L}_n = \mathbb{V}_0 \oplus \cdots \oplus \mathbb{V}_n$ with $\mathbb{V}_0 = \mathbb{L}_0$, where $\mathbb{V}_i = \operatorname{span}\{\ell_{ij} : j \in \mathbb{Z}_{w(i)}\}$ will be described below. We construct $\mathbb{V}_0$ from refinable sets of points in $\Omega$ with respect to families of contractive maps which define the refinement of the multiscale partitions of $\Omega$. The functionals $\ell_{0j}$ are the point-evaluation functionals associated with the points in the refinable sets. Each functional $\ell_{1j}$ is defined by a linear combination of point-evaluation functionals, with the number of such functionals bounded independent of $i$, and satisfies the "semi-bi-orthogonality" property with respect to $\{w_{ij} : i = 0, 1, \ j \in \mathbb{Z}_{w(i)}\}$, that is,

\[
\langle \ell_{i'j'}, w_{ij}\rangle = \delta_{ii'}\delta_{jj'} \quad\text{for } i \le i'.
\]

The functionals $\ell_{ij}$, $i > 1$, $j \in \mathbb{Z}_{w(i)}$, are defined recursively from $\ell_{1j}$, $j \in \mathbb{Z}_{w(1)}$. The projection $\mathcal{P}_n$ in this case is naturally the interpolatory projection onto $\mathbb{X}_n$. It can readily be verified that the assumptions (A1) and (A2) described in Section 10.2.3 hold.

With the interpolatory projection $\mathcal{P}_n$, equation (10.3) becomes the collocation scheme for solving (10.2), and accordingly Algorithm 10.2 is a MAM based on the collocation scheme. Similarly to Theorem 10.13, we have the following convergence result for the collocation-based MAM.


Theorem 10.14 Let $u^*$ be an isolated solution of (10.2). If one is not an eigenvalue of $(\mathcal{K}\Psi)'(u^*)$ and if $u^* \in W^{r,\infty}(\Omega)$, then there exist a positive constant $c$ and a positive integer $N$ such that for all $k \ge N$ and $m \in \mathbb{N}_0$,

\[
\|u^* - u_{k,m}\|_\infty \le c\tau^{r(k+m)}\|u^*\|_{r,\infty}.
\]

Since the proof of Theorem 10.14 is similar to that of Theorem 10.13, we omit it.

To close the subsection, we remark on the influence of numerical integration on the approximation errors of numerical solutions and on the computational complexity of the algorithm for the multiscale collocation method. In the computational complexity analysis presented in Section 10.2.3 we imposed assumption (A0) for the estimate of $M_{k,i,2}$. However, this assumption may not be fulfilled in all cases, and additional computational effort may be needed to compute the vector $\mathbf{f}_{k,l}$. We take the multiscale bases formed by piecewise polynomials as an example. When the collocation methods are applied to discretize the integral equation, $M_{k,l,2}$ indicates the computational cost of computing

\[
\langle \ell_{i'j'}, \mathcal{K}\Psi u_{k,l-1}\rangle, \quad (i', j') \in U_{k,l}, \quad (10.25)
\]

where the functionals $\ell_{i'j'}$ are linear combinations of point evaluations. Therefore, we need to evaluate numerically integrals of the form

\[
\int_\Omega K(s, t)\psi(t, u_{k,l-1}(t))\,dt.
\]

According to [96, 273], under suitable assumptions on the regularity of the nonlinear function $\psi$, we have an approximation

\[
\psi(t, u_{k,l-1}(t)) \approx \sum_{(i,j)\in U_{k+l}} b_{ij} w_{ij}(t)
\]

with an optimal order of convergence and computational complexity $O(s(k+l)\log s(k+l))$. Then computing (10.25) reduces to calculating

\[
\langle \ell_{i'j'}, \mathcal{K}w_{ij}\rangle, \quad (i', j') \in U_{k,l}, \ (i, j) \in U_{k+l}.
\]

When the kernel is smooth or weakly singular, we can establish truncation strategies for the matrix and error-control strategies for computing the remaining elements of the matrix, the cost of which is $O(s(k+l)(\log s(k+l))^\nu)$, where the positive integer $\nu$ depends on the dimension $d$ of the domain: when $d = 1, 2, 3$, the value of $\nu$ is $3, 4, 5$, respectively. See [72, 75, 264] for details. Summarizing the above discussion, we observe that the total computational cost for computing $\mathbf{f}_{k,l}$ is of $O(s(k+l)(\log s(k+l))^\nu)$. A similar approach applies to the Galerkin method.


10.3 Multiscale methods for nonlinear boundary integral equations

In this section we develop the MAM for solving nonlinear boundary integral equations, propose a matrix compression strategy, and present accelerated quadratures and Newton iterations for speeding up the computation.

10.3.1 The multilevel augmentation method

We describe in this subsection the MAM for solving the nonlinear boundary integral equation. We begin by recalling the reformulation of the nonlinear boundary value problem as a nonlinear integral equation.

Let $\Omega$ be a simply connected bounded domain in $\mathbb{R}^2$ with a $C^2$ boundary $\Gamma$. We consider solving the following nonlinear boundary value problem:

\[
\begin{cases}
\Delta u(x) = 0, & x \in \Omega,\\
\dfrac{\partial u}{\partial n_x}(x) = -g(x, u(x)) + g_0(x), & x \in \Gamma,
\end{cases} \quad (10.26)
\]

where $n_x$ denotes the exterior unit normal vector to $\Gamma$ at $x$. The numerical solution of the above problem was studied in many papers (see, for example, [17, 239] and the references cited therein). The fundamental solution of the Laplace equation in $\mathbb{R}^2$ is given by

\[
\Phi(x, y) = -\frac{1}{2\pi}\log|x - y|.
\]

It is shown (cf. [17, 239]) that problem (10.26) can be reformulated as the following nonlinear integral equation defined on $\Gamma$:

\[
u(x) - \frac{1}{\pi}\int_\Gamma u(y)\frac{\partial}{\partial n_y}\log|x - y|\,ds_y - \frac{1}{\pi}\int_\Gamma g(y, u(y))\log|x - y|\,ds_y = -\frac{1}{\pi}\int_\Gamma g_0(y)\log|x - y|\,ds_y, \quad x \in \Gamma. \quad (10.27)
\]

Note that the first integral operator is linear, while the second is nonlinear. We assume that the boundary $\Gamma$ has a parametrization $x = (\xi(t), \eta(t))$, $t \in [0, 1)$. With this representation, the functions that appeared in (10.27), which are defined on $\Gamma$, are transformed into functions of the variable $t$. For simplicity we use the same notations, that is,

\[
u(t) = u(\xi(t), \eta(t)), \quad g(t, u(t)) = g((\xi(t), \eta(t)), u), \quad g_0(t) = g_0(\xi(t), \eta(t)).
\]


With these notations, according to [15, 17], equation (10.27) is rewritten as

\[
u(t) - \int_0^1 K(t, \tau)u(\tau)\,d\tau - \int_0^1 L(t, \tau)g(\tau, u(\tau))\chi(\tau)\,d\tau = -\int_0^1 L(t, \tau)g_0(\tau)\chi(\tau)\,d\tau, \quad t \in [0, 1), \quad (10.28)
\]

where, for $t, \tau \in [0, 1)$,

\[
K(t, \tau) = \begin{cases}
\dfrac{1}{\pi}\,\dfrac{\eta'(\tau)(\xi(\tau) - \xi(t)) - \xi'(\tau)(\eta(\tau) - \eta(t))}{(\xi(t) - \xi(\tau))^2 + (\eta(t) - \eta(\tau))^2}, & t \neq \tau,\\[10pt]
\dfrac{1}{\pi}\,\dfrac{\xi'(t)\eta''(t) - \eta'(t)\xi''(t)}{2[\xi'(t)^2 + \eta'(t)^2]}, & t = \tau,
\end{cases}
\]

\[
L(t, \tau) = \frac{1}{2\pi}\log\big[(\xi(t) - \xi(\tau))^2 + (\eta(t) - \eta(\tau))^2\big], \quad t \neq \tau,
\]

and

\[
\chi(\tau) = \sqrt{\xi'(\tau)^2 + \eta'(\tau)^2}.
\]
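The diagonal value of $K$ is the Taylor-expansion limit of the off-diagonal expression as $\tau \to t$. The sketch below checks this numerically on a sample smooth closed curve (an ellipse; the curve and its parameters are illustrative assumptions, not taken from the text):

```python
import numpy as np

# Numerical check that K(t, tau) -> K(t, t) as tau -> t, for an assumed
# ellipse parametrized over [0, 1).
a, b = 2.0, 1.0
xi     = lambda t: a * np.cos(2 * np.pi * t)
eta    = lambda t: b * np.sin(2 * np.pi * t)
xi_p   = lambda t: -2 * np.pi * a * np.sin(2 * np.pi * t)
eta_p  = lambda t:  2 * np.pi * b * np.cos(2 * np.pi * t)
xi_pp  = lambda t: -(2 * np.pi) ** 2 * a * np.cos(2 * np.pi * t)
eta_pp = lambda t: -(2 * np.pi) ** 2 * b * np.sin(2 * np.pi * t)

def K_off(t, tau):
    # off-diagonal branch of K
    num = eta_p(tau) * (xi(tau) - xi(t)) - xi_p(tau) * (eta(tau) - eta(t))
    den = (xi(t) - xi(tau)) ** 2 + (eta(t) - eta(tau)) ** 2
    return num / (np.pi * den)

def K_diag(t):
    # diagonal branch of K
    num = xi_p(t) * eta_pp(t) - eta_p(t) * xi_pp(t)
    return num / (2 * np.pi * (xi_p(t) ** 2 + eta_p(t) ** 2))

t = 0.3
print(K_off(t, t + 1e-6), K_diag(t))
```

Since the numerator of the off-diagonal branch vanishes to second order as $\tau \to t$ while the denominator does too, the quotient tends to the stated smooth diagonal value; this is why $K$ is a smooth kernel despite its apparently singular form.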

We introduce two linear integral operators $\mathcal{K}, \mathcal{L} : L^\infty(0, 1) \to L^\infty(0, 1)$, defined respectively by

\[
(\mathcal{K}w)(t) = \int_0^1 K(t, \tau)w(\tau)\,d\tau, \quad t \in [0, 1),
\]

and

\[
(\mathcal{L}w)(t) = \int_0^1 L(t, \tau)w(\tau)\,d\tau, \quad t \in [0, 1),
\]

and the nonlinear operator

\[
(\Psi u)(t) = g(t, u(t))\chi(t), \quad t \in [0, 1).
\]

By letting

\[
\mathcal{T} = \mathcal{K} + \mathcal{L}\Psi,
\]

we rewrite equation (10.28) as

\[
u - \mathcal{T}u = f, \quad (10.29)
\]

in which the right-hand-side function $f = -\mathcal{L}(g_0\chi)$.

In passing, we comment on the regularity of the two kernels $K$ and $L$. It is easy to verify that, when $\Gamma$ is of class $C^s$ with $s \ge 2$, $K$ has continuous derivatives


up to order $s - 2$. Throughout this section we assume that $s$ is sufficiently large. Hence there exists a positive constant $\theta_0$ such that

\[
|D_t^\alpha D_\tau^\beta K(t, \tau)| \le \theta_0, \quad t, \tau \in (0, 1), \quad (10.30)
\]

for positive integers $\alpha, \beta$ with $\alpha + \beta \le s - 2$. The expression for $L$ contains a logarithmic factor, which exhibits a weak singularity. The positions of the singular points are determined by the properties of the parametrization $\xi$ and $\eta$. Noting that $\Gamma$ is a closed curve, the singular points are located where $t - \tau = 0, -1, 1$. Accordingly, we require that $L(t, \cdot) \in C^\infty([0, 1]\setminus\{t\})$ for any $t \in [0, 1]$, and that there exist positive constants $\theta$ and $\sigma \in (0, 1)$ such that

\[
|D_t^\alpha D_\tau^\beta L(t, \tau)| \le \theta \cdot \max\big\{|t - \tau|^{-(\sigma+\alpha+\beta)}, |t - \tau + 1|^{-(\sigma+\alpha+\beta)}, |t - \tau - 1|^{-(\sigma+\alpha+\beta)}\big\} \quad (10.31)
\]

for any $\alpha, \beta \in \mathbb{N}_0$ and $t, \tau \in [0, 1]$ with $t \neq \tau$ and $t - \tau \neq \pm 1$. We remark that the above setting of weak singularity includes not only the logarithmic singularity but also other kinds of singularity.

We now return to equation (10.29). The solvability of (10.29) was considered in the literature (cf. [239]). Throughout the rest of this section we assume that (10.29) has an isolated solution $u^*\in C(0,1)$. Moreover, we suppose that the function $g(x,u)$ is continuous with respect to $x\in\Gamma$ and Lipschitz continuous with respect to $u\in\mathbb R$; that the partial derivative $D_u g$ of $g$ with respect to the variable $u$ exists and is Lipschitz continuous; and that, for each $u\in C(\Gamma)$, $g(\cdot,u(\cdot)),\,D_u g(\cdot,u(\cdot))\in C(\Gamma)$.

Next we describe the fast algorithm for (10.29) in light of the idea of the MAM. For $n\in\mathbb N_0$, let $\pi_n$ be the uniform mesh which divides the interval $[0,1]$ into $\mu^n$ pieces, for a given positive integer $\mu$, and let $X_n$ be the piecewise polynomial space of order $r$ with respect to $\pi_n$. It is easily observed that the sequence $X_n$, $n\in\mathbb N_0$, is nested, that is, $X_n\subset X_{n+1}$. For each $n\in\mathbb N_0$, let $\mathcal P_n$ be the interpolatory projection from $C(0,1)$ onto $X_n$ with the set of interpolation points
$$
\Big\{\frac{j+s}{\mu^n}:\ j\in\mathbb Z_{\mu^n},\ s\in G\Big\},
$$

where $G$ is the set of initial interpolation points in $[0,1]$. We require that $G$ have two properties. One is that $G$ contains $r$ distinct points, so that the interpolation of polynomials of order $r$ on $G$ exists uniquely. The other is that $G$ is refinable with respect to the family of contractive affine mappings $\Phi_\mu=\{\phi_e:\ e\in\mathbb Z_\mu\}$, where
$$
\phi_e(x)=\frac{x+e}{\mu},\quad e\in\mathbb Z_\mu,
$$


in the sense that
$$
G\subset\Phi_\mu(G)=\bigcup_{e\in\mathbb Z_\mu}\phi_e(G).
$$
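As a toy illustration of refinability (our own choice of parameters, not from the text), take $\mu=2$ and $G=\{0,\tfrac12\}$, which contains $r=2$ distinct points: then $\Phi_2(G)=\{0,\tfrac14,\tfrac12,\tfrac34\}\supset G$, and the level-$n$ interpolation point sets are nested.

```python
from fractions import Fraction

mu = 2
G = [Fraction(0), Fraction(1, 2)]   # initial interpolation points (our toy choice, r = 2)

def phi(e, x):
    """Contractive affine mapping phi_e(x) = (x + e)/mu."""
    return (x + e) / mu

def refined(G):
    """Phi_mu(G): the union of phi_e(G) over e in Z_mu."""
    return {phi(e, s) for e in range(mu) for s in G}

def interpolation_points(n):
    """Level-n collocation points {(j + s)/mu**n : j in Z_{mu**n}, s in G}."""
    return sorted({(Fraction(j) + s) / mu**n for j in range(mu**n) for s in G})

# Refinability G <= Phi_mu(G) is exactly what makes the point sets nested.
assert set(G) <= refined(G)
```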

The collocation method for solving (10.29) is to find $u_n\in X_n$ such that
$$
u_n-\mathcal P_n\mathcal Tu_n=\mathcal P_nf.\qquad(10.32)
$$
Making use of Theorem 2 of [255], we prove below that (10.32) is uniquely solvable.

Theorem 10.15 If $u^*\in C(0,1)$ is an isolated solution of (10.29) and one is not an eigenvalue of the linear operator $\mathcal T'(u^*)$, then for sufficiently large $n$, (10.32) has a unique solution $u_n\in B(u^*,\delta)$ for some $\delta>0$, and there exist positive constants $c_1,c_2$ such that
$$
c_1\|u^*-\mathcal P_nu^*\|_\infty\le\|u^*-u_n\|_\infty\le c_2\|u^*-\mathcal P_nu^*\|_\infty.
$$

We now describe the MAM for finding an approximate solution of equation (10.32). The nestedness of the subspace sequence allows us to decompose $X_{n+1}$ as the direct sum of $X_n$ and its orthogonal complement $W_{n+1}$. Thus, for a fixed $k\in\mathbb N_0$ and any $m\in\mathbb N_0$, we have that
$$
X_{k+m}=X_k\oplus W_{k,m},\quad\text{where}\quad W_{k,m}=W_{k+1}\oplus W_{k+2}\oplus\cdots\oplus W_{k+m}.\qquad(10.33)
$$

We now solve equation (10.32) with $n=k+m$, $k$ being fixed and small relative to $n$. At the first step, we solve equation (10.32) with $n=k$ exactly and obtain the solution $u_k$. Since $\dim(X_k)$ is small in comparison with $\dim(X_{k+m})$, the computational cost of inverting the nonlinear operator $\mathcal P_k(I-\mathcal T)$ is much less than that of inverting $\mathcal P_{k+m}(I-\mathcal T)$. The next step is to obtain an approximation of the solution $u_{k+1}$ of (10.32) with $n=k+1$. For this purpose, we decompose
$$
u_{k+1}=u^L_{k+1}+u^H_{k+1},\quad\text{with}\ u^L_{k+1}\in X_k\ \text{and}\ u^H_{k+1}\in W_{k+1},
$$

according to the decomposition (10.33), and rewrite equation (10.32) with $n=k+1$ in the equivalent form
$$
\begin{aligned}
(\mathcal P_{k+1}-\mathcal P_k)(u^L_{k+1}+u^H_{k+1})-(\mathcal P_{k+1}-\mathcal P_k)\mathcal Tu_{k+1}&=(\mathcal P_{k+1}-\mathcal P_k)f,\\
\mathcal P_k(I-\mathcal T)(u^L_{k+1}+u^H_{k+1})&=\mathcal P_kf.
\end{aligned}\qquad(10.34)
$$

In view of
$$
(\mathcal P_{k+1}-\mathcal P_k)(u^L_{k+1}+u^H_{k+1})=u^H_{k+1},
$$


the first equation in (10.34) becomes
$$
u^H_{k+1}=(\mathcal P_{k+1}-\mathcal P_k)(f+\mathcal Tu_{k+1}).
$$
The right-hand side of the above equation can be obtained approximately via the solution $u_k$ at the previous level. That is, we compute
$$
u^H_{k,1}=(\mathcal P_{k+1}-\mathcal P_k)(f+\mathcal Tu_{k,0}),
$$

where $u_{k,0}=u_k$, and note that $u^H_{k,1}\in W_{k+1}$. We replace $u^H_{k+1}$ in the second equation of (10.34) by $u^H_{k,1}$ and solve for $u^L_{k,1}\in X_k$ from the equation
$$
\mathcal P_k(I-\mathcal T)(u^L_{k,1}+u^H_{k,1})=\mathcal P_kf.
$$

The solution $u^L_{k,1}$ of the above equation is a good approximation to $u^L_{k+1}$. We then obtain an approximation to the solution $u_{k+1}$ of (10.32) by letting
$$
u_{k,1}=u^L_{k,1}+u^H_{k,1}.
$$

Note that $u^L_{k,1}$ and $u^H_{k,1}$ represent, respectively, the lower- and higher-frequency components of $u_{k,1}$. This procedure is repeated $m$ times to obtain the approximation $u_{k,m}$ of the solution $u_{k+m}$ of (10.32) with $n=k+m$. At step $\ell$ of this procedure, we do not invert the nonlinear operator $\mathcal P_{k+\ell}(I-\mathcal T)$ but invert only the same nonlinear operator $\mathcal P_k(I-\mathcal T)$. This makes the method computationally very efficient. We summarize this procedure in the following algorithm.

Algorithm 10.16 (The multilevel augmentation method in operator form) Let $k$ be a fixed positive integer.

Step 1: Find the solution $u_k\in X_k$ of equation (10.32) with $n=k$. Set $u_{k,0}=u_k$ and $l=1$.

Step 2: Compute
$$
u^H_{k,l}=(\mathcal P_{k+l}-\mathcal P_k)(f+\mathcal Tu_{k,l-1})\in W_{k,l}.\qquad(10.35)
$$

Step 3: Solve for $u^L_{k,l}\in X_k$ from the equation
$$
\mathcal P_k(I-\mathcal T)(u^L_{k,l}+u^H_{k,l})=\mathcal P_kf.\qquad(10.36)
$$

Step 4: Let $u_{k,l}=u^L_{k,l}+u^H_{k,l}$. Set $l\leftarrow l+1$ and go back to Step 2 until $l=m$.
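To make the level bookkeeping concrete, here is a minimal numerical sketch of the augmentation loop. All choices are ours, not the book's: $\mu=2$, piecewise constants ($r=1$) collocated at left mesh points, a *linear* operator $\mathcal T$ with the degenerate kernel $K(t,s)=(t+s)/2$ (so each coarse-level solve is a dense linear solve rather than a Newton iteration), and the manufactured solution $u^*(t)=t$, for which $f=u^*-\mathcal Tu^*=\tfrac34 t-\tfrac16$ in closed form.

```python
import numpy as np

mu, k, m = 2, 3, 3                        # coarse level k, m augmentation steps (toy sizes)
Kfun = lambda t, s: 0.5*(t + s)           # smooth degenerate kernel (our choice, ||T|| < 1)
ustar = lambda t: t                       # manufactured solution
f = lambda t: 0.75*t - 1/6.0              # f = u* - T u*, computed in closed form

def nodes(n):                             # left-endpoint collocation points of level n
    return np.arange(mu**n)/mu**n

def apply_T(t, u_vals, n):
    """(T u)(t) for piecewise-constant u on the level-n mesh (cell-midpoint rule)."""
    h = 1.0/mu**n
    return h*np.sum(Kfun(np.asarray(t)[..., None], nodes(n) + h/2)*u_vals, axis=-1)

def coarse_solve(rhs):                    # invert I - P_k T on X_k (dense linear solve)
    tk, h = nodes(k), 1.0/mu**k
    A = h*Kfun(tk[:, None], tk[None, :] + h/2)
    return np.linalg.solve(np.eye(mu**k) - A, rhs)

tk = nodes(k)
u = coarse_solve(f(tk))                   # Step 1: solve exactly at the coarse level
err_coarse = np.max(np.abs(u - ustar(tk)))
for l in range(1, m + 1):                 # Steps 2-4, repeated m times
    n = k + l
    tf = nodes(n)
    v = f(tf) + apply_T(tf, u, n - 1)     # values of f + T u_{k,l-1} at fine points
    # u^H_{k,l} = (P_{k+l} - P_k)(f + T u_{k,l-1}); P_k interpolates at coarse points
    uH = v - v[(np.arange(mu**n) >> l)*mu**l]
    # u^L_{k,l} from P_k(I - T)(u^L + u^H) = P_k f  (u^H vanishes at coarse points)
    uL = coarse_solve(f(tk) + apply_T(tk, uH, n))
    u = uL[np.arange(mu**n) >> l] + uH    # u_{k,l} = u^L_{k,l} + u^H_{k,l} on the fine mesh

err_mam = np.max(np.abs(u - ustar(nodes(k + m))))
```

Only the fixed coarse operator $I-\mathcal P_k\mathcal T$ is ever inverted; the fine levels contribute through interpolation differences, which is exactly the point of the algorithm.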

The output of the MAM is $u_{k,m}\in X_{k+m}$. By employing the analysis of the last section for the Hammerstein equations, we establish the following approximation result for $u_{k,m}$.


Theorem 10.17 If $u^*\in C(0,1)$ is an isolated solution of (10.29) and one is not an eigenvalue of the linear operator $\mathcal T'(u^*)$, then there exists a positive integer $N$ such that for any $k>N$, the MAM solution $u_{k,m}$ exists uniquely for all $m>0$ and $u_{k,m}\in B(u^*,\delta)$ for some $\delta>0$. Moreover, if $u^*\in W^{r,\infty}(0,1)$, then there exists a positive constant $c$ such that for all $k>N$ and $m\in\mathbb N_0$,
$$
\|u^*-u_{k,m}\|_\infty\le c\,\mu^{-r(k+m)}\|u^*\|_{r,\infty}.
$$

As Proposition 10.3 states, the MAM solves for $u_{k,l}$ successively, for $l=1,2,\dots,m$, from
$$
(I-\mathcal P_k\mathcal T)u_{k,l}=\mathcal P_{k+l}f+(\mathcal P_{k+l}-\mathcal P_k)\mathcal Tu_{k,l-1}.\qquad(10.37)
$$
Let us briefly compare (10.37) with (10.32). If we solve (10.32) directly, we have to invert $I-\mathcal P_n\mathcal T$, which is a nonlinear operator on $X_n$. The linearization of this nonlinear operator usually leads to a linear equation that is computationally expensive to solve. Therefore this is not an economical way to solve equation (10.32) when the dimension of $X_n$ is large. The solution of (10.37), however, requires inverting only $I-\mathcal P_k\mathcal T$ with $k\ll n$. Note that the nonlinear component of the operator is restricted to $X_k$, the dimension of which is fixed and much smaller than that of the space in which $u_{k,m}$ lies for large $m$. To see this more clearly, we split (10.37) into (10.35) and (10.36) and observe that (10.35) is a linear equation.

We now recall the multiscale bases for $X_n$ and the corresponding multiscale collocation functionals introduced in Chapter 7. Suppose that $\mathcal L_n$, $n\in\mathbb N_0$, is a sequence of subspaces of $(L^\infty(0,1))^*$ which possesses the properties $\mathcal L_n\subset\mathcal L_{n+1}$, $\dim(\mathcal L_n)=\dim(X_n)$, $n\in\mathbb N_0$. We remark that the elements of $\mathcal L_n$ are point evaluations and their finite linear combinations, and the refinability of $G$ guarantees the above nestedness property. We utilize the nestedness property again to obtain the multiscale decomposition $\mathcal L_{k+m}=\mathcal L_k\oplus V_{k,m}$ for any fixed $k\in\mathbb N_0$ and any $m\in\mathbb N_0$, in which $V_{k,m}=V_{k+1}\oplus\cdots\oplus V_{k+m}$. Let $w(0)=\dim(X_0)$ and $w(i)=\dim(W_i)$ for $i>0$. We suppose that
$$
X_0=\mathrm{span}\{w_{0j}:\ j\in\mathbb Z_{w(0)}\},\qquad \mathcal L_0=\mathrm{span}\{\ell_{0j}:\ j\in\mathbb Z_{w(0)}\}
$$
and
$$
W_i=\mathrm{span}\{w_{ij}:\ j\in\mathbb Z_{w(i)}\},\qquad V_i=\mathrm{span}\{\ell_{ij}:\ j\in\mathbb Z_{w(i)}\},\quad i>0.
$$
By introducing the index set $U_n=\{(i,j):\ j\in\mathbb Z_{w(i)},\ i\in\mathbb Z_{n+1}\}$, we have
$$
X_n=\mathrm{span}\{w_{ij}:\ (i,j)\in U_n\},\qquad \mathcal L_n=\mathrm{span}\{\ell_{ij}:\ (i,j)\in U_n\},\quad n\in\mathbb N_0.
$$


For any $l\in\mathbb Z_{m+1}$, we express the solution $u_{k,l}$ of (10.37) as
$$
u_{k,l}=\sum_{(i,j)\in U_{k+l}}(u_{k,l})_{ij}\,w_{ij}.
$$
We use the notation $\mathbf u_{k,l}=[(u_{k,l})_{ij}:\ (i,j)\in U_{k+l}]$ to denote the representation vector of $u_{k,l}$. Using the index set $U_{k,l}=U_{k+l}\setminus U_k=\{(i,j):\ j\in\mathbb Z_{w(i)},\ i\in\mathbb Z_{k+l+1}\setminus\mathbb Z_{k+1}\}$, we have the expansions
$$
u^L_{k,l}=\sum_{(i,j)\in U_k}(u_{k,l})_{ij}\,w_{ij}\quad\text{and}\quad u^H_{k,l}=\sum_{(i,j)\in U_{k,l}}(u_{k,l})_{ij}\,w_{ij}.
$$
Moreover, we define the matrix
$$
\mathbf E^H_{k,l}=\big[\langle\ell_{i'j'},w_{ij}\rangle:\ (i',j'),(i,j)\in U_{k,l}\big].
$$

As in the last section, we make use of the properties of the bases to conclude that the nonlinear equation (10.36) is equivalent to the nonlinear system
$$
\Big\langle\ell_{i'j'},\,(I-\mathcal T)\Big(\sum_{(i,j)\in U_k}(u_{k,l})_{ij}w_{ij}+u^H_{k,l}\Big)\Big\rangle=\langle\ell_{i'j'},f\rangle,\quad(i',j')\in U_k,\qquad(10.38)
$$
and (10.35) is equivalent to
$$
\mathbf E^H_{k,l}\,\mathbf u^H_{k,l}=\mathbf f_{k,l},\qquad(10.39)
$$
where $\mathbf u^H_{k,l}=[(u_{k,l})_{ij}:\ (i,j)\in U_{k,l}]$ and
$$
\mathbf f_{k,l}=\big[\langle\ell_{i'j'},\,f+\mathcal Tu_{k,l-1}\rangle:\ (i',j')\in U_{k,l}\big].\qquad(10.40)
$$

Computing $\mathbf f_{k,l}$ requires evaluating the integrals $\langle\ell_{i'j'},(\mathcal K+\mathcal L\Psi)u_{k,l-1}\rangle$ for $(i',j')\in U_{k,l}$. We separate the integral into its linear and nonlinear components
$$
\langle\ell_{i'j'},\,\mathcal Ku_{k,l-1}\rangle,\quad(i',j')\in U_{k,l},\qquad(10.41)
$$
and
$$
\langle\ell_{i'j'},\,\mathcal L\Psi u_{k,l-1}\rangle,\quad(i',j')\in U_{k,l}.\qquad(10.42)
$$
We write
$$
u_{k,l-1}=\sum_{(i,j)\in U_{k+l-1}}(u_{k,l-1})_{ij}\,w_{ij}
$$
and define, for $k,l$, the matrix $\mathbf K^H_{k,l-1}=[K_{i'j',ij}:\ (i',j')\in U_{k,l},\ (i,j)\in U_{k+l-1}]$. Computing the quantities in (10.41) is equivalent to generating $\mathbf K^H_{k,l-1}$ and calculating
$$
\mathbf K^H_{k,l-1}\,\mathbf u_{k,l-1}.\qquad(10.43)
$$

The evaluation of the nonlinear component (10.42) is done slightly differently. Since $\Psi u_{k,l-1}\notin X_{k+l}$ in general, we are not able to express $\Psi u_{k,l-1}$ as a linear


combination of the basis of $X_{k+l}$, as we do for computing (10.41). In order to establish a fast algorithm similar to that for evaluating (10.41), we approximate $\Psi u_{k,l-1}$ by its projection into $X_{k+l}$. In other words, we do not evaluate (10.42) exactly but compute its approximation
$$
\langle\ell_{i'j'},\,\mathcal L\mathcal P_{k+l}\Psi u_{k,l-1}\rangle,\quad(i',j')\in U_{k,l}.\qquad(10.44)
$$

Formally, (10.44) has a form similar to (10.41), where $\mathcal L$ corresponds to $\mathcal K$ and $\mathcal P_{k+l}\Psi u_{k,l-1}$ corresponds to $u_{k,l-1}$. Therefore the fast algorithm described above for (10.41) is applicable to (10.44). To see this, we write
$$
\mathcal P_{k+l}\Psi u_{k,l-1}=\sum_{(i,j)\in U_{k+l}}(u_{k+l})_{ij}\,w_{ij},
$$
and thus we have that
$$
\langle\ell_{i'j'},\,\mathcal P_{k+l}\Psi u_{k,l-1}\rangle=\sum_{(i,j)\in U_{k+l}}(u_{k+l})_{ij}\,\langle\ell_{i'j'},w_{ij}\rangle,\quad(i',j')\in U_{k+l}.
$$

Let
$$
\mathbf g_{k+l}=\Big[\Big\langle\ell_{i'j'},\,\Psi\Big(\sum_{(i,j)\in U_{k+l-1}}(u_{k,l-1})_{ij}w_{ij}\Big)\Big\rangle:\ (i',j')\in U_{k+l}\Big]
$$
and, for $n\in\mathbb N_0$, define the matrix $\mathbf E_n=[\langle\ell_{i'j'},w_{ij}\rangle:\ (i',j'),(i,j)\in U_n]$. Then the representation vector $\mathbf u_{k+l}$ of $\mathcal P_{k+l}\Psi u_{k,l-1}$ satisfies the linear system
$$
\mathbf E_{k+l}\,\mathbf u_{k+l}=\mathbf g_{k+l}.\qquad(10.45)
$$

Define, for $k,l$, the matrix $\mathbf L^H_{k,l}=[L_{i'j',ij}:\ (i',j')\in U_{k,l},\ (i,j)\in U_{k+l}]$. Computing (10.44) is equivalent to solving $\mathbf u_{k+l}$ from (10.45), generating $\mathbf L^H_{k,l}$ and then evaluating
$$
\mathbf L^H_{k,l}\,\mathbf u_{k+l}.\qquad(10.46)
$$

In the bases and collocation functionals described above, we have the matrix representations of the operators $\mathcal K$ and $\mathcal L$. Specifically, for $n\in\mathbb N_0$, we define the matrices
$$
\mathbf K_n=[K_{i'j',ij}:\ (i',j'),(i,j)\in U_n]\quad\text{with}\quad K_{i'j',ij}=\langle\ell_{i'j'},\mathcal Kw_{ij}\rangle
$$
and
$$
\mathbf L_n=[L_{i'j',ij}:\ (i',j'),(i,j)\in U_n]\quad\text{with}\quad L_{i'j',ij}=\langle\ell_{i'j'},\mathcal Lw_{ij}\rangle.
$$

The matrices $\mathbf K_n$ and $\mathbf L_n$ will be compressed according to the regularity of the kernels $K$ and $L$. Note that the kernel $K$ is smooth and $L$ is weakly singular, and


their regularities are described in (10.30) and (10.31), respectively. We adopt the following truncation strategies.

(T1) For each $n\in\mathbb N_0$, the matrix $\mathbf K_n$ is truncated to a sparse matrix
$$
\tilde{\mathbf K}_n=[\tilde K_{i'j',ij}:\ (i',j'),(i,j)\in U_n],
$$
where, for $(i',j'),(i,j)\in U_n$,
$$
\tilde K_{i'j',ij}=\begin{cases}K_{i'j',ij}, & i'+i\le n,\\ 0, & \text{otherwise.}\end{cases}
$$

(T2) For $(i,j)\in U_n$, we let $S_{ij}=\mathrm{supp}(w_{ij})$. For each $n\in\mathbb N_0$ and $(i',j'),(i,j)\in U_n$, we set
$$
\tilde L_{i'j',ij}=\begin{cases}L_{i'j',ij}, & \mathrm{dist}(S_{i'j'},S_{ij})\le\varepsilon^n_{i'i}\ \text{or}\ \mathrm{dist}(S_{i'j'},S_{ij})\ge 1-\varepsilon^n_{i'i},\\ 0, & \text{otherwise,}\end{cases}
$$
in which the truncation parameters $\varepsilon^n_{i'i}$ are chosen as
$$
\varepsilon^n_{i'i}=\max\big\{a\,\mu^{-n+b(n-i)+b'(n-i')},\ \rho\,(\mu^{-i}+\mu^{-i'})\big\}\qquad(10.47)
$$
for some constants $b,b',a>0$ and $\rho>1$. The truncated matrix of $\mathbf L_n$ is defined by

$$
\tilde{\mathbf L}_n=[\tilde L_{i'j',ij}:\ (i',j'),(i,j)\in U_n].
$$
For $n=7$ and piecewise linear basis functions, Figure 10.1 shows the block matrices $\tilde{\mathbf K}_7$ and $\tilde{\mathbf L}_7$. We now describe the MAM with the matrix truncations.


Figure 10.1 The distribution of nonzero entries of $\tilde{\mathbf K}_7$ (left) and $\tilde{\mathbf L}_7$ (right).
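The block structure behind Figure 10.1 is easy to reproduce in outline. The sketch below is our own; the dimension formulas $w(0)=r$ and $w(i)=r(\mu-1)\mu^{i-1}$ are an assumption consistent with property (V) of Section 10.3.3 (for $r=2$, $\mu=2$ they give $\sum_{i\le 7}w(i)=256$, matching the $256\times 256$ matrices in the figure), and only the (T1) block rule is implemented.

```python
mu, r = 2, 2                                   # assumed parameters (not from the text)

def w(i):
    """dim W_i; w(0)=r, w(i)=r*(mu-1)*mu**(i-1) is our assumption, consistent with (V)."""
    return r if i == 0 else r*(mu - 1)*mu**(i - 1)

def nnz_T1(n):
    """Entries kept by truncation (T1): block (i', i) survives iff i' + i <= n."""
    return sum(w(ip)*w(i)
               for ip in range(n + 1) for i in range(n + 1) if ip + i <= n)

def full(n):
    """Size of the untruncated matrix K_n."""
    return sum(w(i) for i in range(n + 1))**2
```

The kept-entry count grows like $O(n\mu^n)$, as Lemma 10.22 states, while the full matrix has $O(\mu^{2n})$ entries.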


For a fixed $k\in\mathbb N_0$ and $l>0$, we set
$$
\tilde{\mathbf K}^H_{k,l-1}=[\tilde K_{i'j',ij}:\ (i',j')\in U_{k,l},\ (i,j)\in U_{k+l-1}]
$$
and
$$
\tilde{\mathbf L}^H_{k,l}=[\tilde L_{i'j',ij}:\ (i',j')\in U_{k,l},\ (i,j)\in U_{k+l}].
$$

Algorithm 10.18 (The multilevel augmentation method with matrix truncations) Let $k$ be a fixed positive integer. Given $m\in\mathbb N_0$, we carry out the following computing steps.

Step 1: Solve the nonlinear system
$$
\Big\langle\ell_{i'j'},\,(I-(\mathcal K+\mathcal L\Psi))\Big(\sum_{(i,j)\in U_k}(u_k)_{ij}w_{ij}\Big)\Big\rangle=\langle\ell_{i'j'},f\rangle,\quad(i',j')\in U_k,\qquad(10.48)
$$
for the solution $\mathbf u_k=[(u_k)_{ij}:\ (i,j)\in U_k]$. Set $\mathbf u_{k,0}=\mathbf u_k$ and $l=1$.

Step 2: Compute the representation vector $\mathbf u_{k+l}$ of $\mathcal P_{k+l}\Psi u_{k,l-1}$ and generate the vector
$$
\bar{\mathbf f}_{k,l}=[\langle\ell_{i'j'},f\rangle:\ (i',j')\in U_{k,l}].
$$
Compute
$$
\tilde{\mathbf f}_{k,l}=\bar{\mathbf f}_{k,l}+\tilde{\mathbf K}^H_{k,l-1}\,\mathbf u_{k,l-1}+\tilde{\mathbf L}^H_{k,l}\,\mathbf u_{k+l}.\qquad(10.49)
$$
Solve the linear system
$$
\mathbf E^H_{k,l}\,\mathbf u^H_{k,l}=\tilde{\mathbf f}_{k,l}\qquad(10.50)
$$
for $\mathbf u^H_{k,l}=[(u_{k,l})_{ij}:\ (i,j)\in U_{k,l}]$, and define $u^H_{k,l}=\sum_{(i,j)\in U_{k,l}}(u_{k,l})_{ij}w_{ij}$.

Step 3: Solve the nonlinear system
$$
\Big\langle\ell_{i'j'},\,(I-(\mathcal K+\mathcal L\Psi))\Big(\sum_{(i,j)\in U_k}(u_{k,l})_{ij}w_{ij}+u^H_{k,l}\Big)\Big\rangle=\langle\ell_{i'j'},f\rangle,\quad(i',j')\in U_k,\qquad(10.51)
$$
for $\mathbf u^L_{k,l}=[(u_{k,l})_{ij}:\ (i,j)\in U_k]$, and define $u^L_{k,l}=\sum_{(i,j)\in U_k}(u_{k,l})_{ij}w_{ij}$ and $u_{k,l}=u^L_{k,l}+u^H_{k,l}$.

Step 4: Set $l\leftarrow l+1$ and go back to Step 2 until $l=m$.


Several remarks on the computational performance of Algorithm 10.18 are in order. Its computational costs can be divided into three parts: Part 1 generates the matrices $\tilde{\mathbf K}_{k+l}$ and $\tilde{\mathbf L}_{k+l}$; Part 2 computes (10.49) and solves (10.50); Part 3 solves the resulting nonlinear systems, including (10.48) and (10.51). It takes much more time to generate the matrix $\tilde{\mathbf L}_{k+l}$ than the matrix $\tilde{\mathbf K}_{k+l}$, since the kernel $L$ is weakly singular. We observe from numerical experiments that, although Parts 1 and 3 both have high costs, when $l$ is small Part 3 dominates the total computing time, while as $l$ increases Part 1 grows faster than Part 3.

10.3.2 Accelerated quadratures and Newton iterations

In this subsection we address two computational issues of the MAM. We employ a product integration scheme for computing the singular integrals that appear in the matrices involved in the MAM, and we introduce an approximation technique in the Newton iteration for solving the resulting nonlinear systems, to avoid repeated computation in generating their Jacobian matrices. The use of these two techniques results in a modified MAM with significantly faster computation (cf. [52]).

1 Product integration of singular integrals

Numerical experiments show that the generation of $\tilde{\mathbf L}_n$ requires much more computing time than that of $\tilde{\mathbf K}_n$, due to the singularity of the kernel $L$. We observe, however, that $L$ has a special structure which allows us to develop a quadrature method more efficient than Gaussian quadrature. In this subsection we develop a special product integration method for the kernel $L$ so that the computing time for calculating the nonzero entries of the matrix $\tilde{\mathbf L}_n$ is significantly reduced.

The product integration method has been widely used in the literature for computing singular integrals. For example, it was used in [17, 138] to discretize singular integral operators. Along this line, concrete formulas of product integration were given in [15] (see pp. 116–119 therein). These formulas were developed in the context of single-scale approximation and proved efficient for computation in that context. In the current multiscale approximation context, we establish product integration formulas suitable for multiscale bases.

We now study the typical integral involved in the entries of the matrix $\tilde{\mathbf L}_n$. The nonzero entries of $\tilde{\mathbf L}_n$ involve integrals of the form
$$
I_{ij}(s)=\int_{S_{ij}}L(s,t)\,w_{ij}(t)\,dt,\quad (i,j)\in U_n,\ s\in[0,1],
$$

where the $w_{ij}$ are multiscale piecewise polynomial basis functions. As suggested by [15, 17, 138], the kernel $L$ can be decomposed as
$$
L(s,t)=\frac{1}{\pi}\,[B_0(s,t)+B_1(s,t)]\qquad(10.52)
$$
with
$$
B_0(s,t)=\log\left|\frac{\sqrt{(\xi(s)-\xi(t))^2+(\eta(s)-\eta(t))^2}}{(s-t)(s-t-1)(s-t+1)}\right|
$$
and
$$
B_1(s,t)=\log|s-t|+\log|s-t-1|+\log|s-t+1|.
$$
The above decomposition in fact extracts the singularity of $L$: $B_1$ possesses all the singularity features of $L$, while $B_0$ is smooth. The kernel $B_0$ is easy to integrate numerically at little computational cost, whereas the singularity of $B_1$ makes its numerical integration difficult. However, the expression of $B_1$ is very specific, which allows us to integrate it exactly with explicit formulas. The quantity $I_{ij}$ can be written as the sum of the two terms

$$
I^\nu_{ij}(s)=\frac{1}{\pi}\int_{S_{ij}}B_\nu(s,t)\,w_{ij}(t)\,dt,\quad \nu=0,1.\qquad(10.53)
$$
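The splitting (10.52) can be checked numerically: $(1/\pi)[B_0+B_1]$ collapses back to $(1/2\pi)\log[(\xi(s)-\xi(t))^2+(\eta(s)-\eta(t))^2]$, since $B_1$ restores exactly the three logarithms divided out of $B_0$. A short sketch for the unit-circle parametrization (our own choice of curve):

```python
import numpy as np

xi  = lambda t: np.cos(2*np.pi*t)
eta = lambda t: np.sin(2*np.pi*t)

def D(s, t):
    """Squared chord length between the curve points at s and t."""
    return (xi(s) - xi(t))**2 + (eta(s) - eta(t))**2

def L_direct(s, t):
    """L(s,t) = (1/(2 pi)) log D(s,t), the original logarithmic kernel."""
    return np.log(D(s, t))/(2*np.pi)

def B0(s, t):
    """Smooth part: the chord length divided by the three distance-like factors."""
    return np.log(abs(np.sqrt(D(s, t))/((s - t)*(s - t - 1)*(s - t + 1))))

def B1(s, t):
    """Singular part: the three logarithms, integrable in closed form."""
    return np.log(abs(s - t)) + np.log(abs(s - t - 1)) + np.log(abs(s - t + 1))
```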

We first compute the term $I^1_{ij}$. To this end we re-express the multiscale basis functions $w_{ij}$. Note that for $j\in\mathbb Z_{w(0)}$, $w_{0j}$ is a polynomial of order $r$; hence it can be written as
$$
w_{0j}(t)=\sum_{\gamma\in\mathbb Z_r}a_\gamma\,t^\gamma.
$$
For $i>0$ and $j\in\mathbb Z_{w(i)}$, $w_{ij}$ is a piecewise polynomial of order $r$. According to the construction of the basis functions, for all $(i,j)\in U_n$ the support $S_{ij}$ can be divided into $\mu$ pieces $\Delta_\kappa=(a_\kappa,b_\kappa)$, $\kappa\in\mathbb Z_\mu$, on each of which $w_{ij}$ is a polynomial of order $r$. Thus we write $w_{ij}$ as
$$
w_{ij}(t)=\sum_{\gamma\in\mathbb Z_r}a_{\kappa,\gamma}\,t^\gamma,\quad t\in\Delta_\kappa,\ \kappa\in\mathbb Z_\mu.
$$

The above discussion of the basis functions motivates us to define the following special integrals. For $\gamma\in\mathbb Z_r$, $\alpha\in\Lambda:=\{0,1,-1\}$ and $a,b\in[0,1]$ with $a<b$, we set
$$
I(a,b;\alpha,\gamma,s)=\int_a^b\log|s-t-\alpha|\,t^\gamma\,dt.
$$

In this notation we have that
$$
\int_{S_{0j}}B_1(s,t)\,w_{0j}(t)\,dt=\sum_{\alpha\in\Lambda}\sum_{\gamma\in\mathbb Z_r}a_\gamma\,I(0,1;\alpha,\gamma,s)\qquad(10.54)
$$
and
$$
\int_{S_{ij}}B_1(s,t)\,w_{ij}(t)\,dt=\sum_{\kappa\in\mathbb Z_\mu}\sum_{\alpha\in\Lambda}\sum_{\gamma\in\mathbb Z_r}a_{\kappa,\gamma}\,I(a_\kappa,b_\kappa;\alpha,\gamma,s).\qquad(10.55)
$$

The integral $I(a,b;\alpha,\gamma,s)$ can be computed exactly. We derive below the formula for this integral.

Lemma 10.19 If $\gamma\in\mathbb Z_r$, $\alpha\in\Lambda$ and $a,b\in[0,1]$ with $a<b$, then
$$
I(a,b;\alpha,\gamma,s)=\frac{1}{\gamma+1}\left[\big(t^{\gamma+1}-(s-\alpha)^{\gamma+1}\big)\log|s-t-\alpha|-\sum_{j=1}^{\gamma+1}(s-\alpha)^{\gamma-j+1}\,\frac{t^j}{j}\right]\Bigg|_a^b.
$$

Proof The formula in this lemma may be proved using integration by parts; it is also verified directly by differentiating the bracketed expression with respect to $t$.

Using the integration formula in Lemma 10.19, we are able to compute the integrals (10.54) and (10.55) exactly.
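Because the closed form in Lemma 10.19 is easy to mistype (note the $t^j/j$ factors in the sum), a numerical spot check is worthwhile. The sketch below (our own) compares it with high-order Gauss–Legendre quadrature in configurations where the singular point $s-\alpha$ lies outside $[a,b]$, so that the reference quadrature is accurate:

```python
import numpy as np

def I_closed(a, b, alpha, gamma, s):
    """Closed form of I(a,b;alpha,gamma,s) from Lemma 10.19."""
    c = s - alpha
    def F(t):
        tail = sum(c**(gamma - j + 1)*t**j/j for j in range(1, gamma + 2))
        return ((t**(gamma + 1) - c**(gamma + 1))*np.log(abs(s - t - alpha)) - tail)/(gamma + 1)
    return F(b) - F(a)

def I_quad(a, b, alpha, gamma, s, npts=60):
    """Reference value by Gauss-Legendre quadrature (only valid for a smooth integrand)."""
    x, w = np.polynomial.legendre.leggauss(npts)
    t = (a + b)/2 + (b - a)/2*x
    return (b - a)/2*np.sum(w*np.log(np.abs(s - t - alpha))*t**gamma)
```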

It remains to compute the term $I^0_{ij}$. For this purpose we describe a Gaussian quadrature rule on the interval $[a,b]$. For each positive integer $j$, we denote by $g_j$ the Legendre polynomial of degree $j$ and by $\tau^j_\ell$, $\ell\in\mathbb Z_j$, the $j$ zeros of $g_j$, in the order $-1<\tau^j_0<\cdots<\tau^j_{j-1}<1$. We transfer these zeros to the interval $[a,b]$ by letting
$$
\hat\tau^j_\ell=\frac{a+b}{2}+\frac{b-a}{2}\,\tau^j_\ell,\quad \ell\in\mathbb Z_j.
$$
The points $\hat\tau^j_\ell$ are the $j$ zeros of the Legendre polynomial of degree $j$ on the interval $[a,b]$. Given a continuous function $h$ defined on $[a,b]$, the $j$-point Gaussian quadrature rule is given by
$$
G(h;[a,b],j)=\sum_{\ell\in\mathbb Z_j}\omega^j_\ell\,h(\hat\tau^j_\ell),
$$


where
$$
\omega^j_\ell=\int_a^b\prod_{\substack{i\in\mathbb Z_j\\ i\neq\ell}}\frac{t-\hat\tau^j_i}{\hat\tau^j_\ell-\hat\tau^j_i}\,dt.
$$
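In code, the nodes and weights of this rule are usually obtained from a standard Legendre routine rather than by integrating the Lagrange factors directly; the two definitions coincide. A short sketch (our implementation choice) using NumPy:

```python
import numpy as np

def gauss(h, a, b, j):
    """j-point Gaussian rule G(h;[a,b],j); exact for polynomials of degree <= 2j - 1."""
    x, w = np.polynomial.legendre.leggauss(j)   # nodes and weights on [-1, 1]
    tau = (a + b)/2 + (b - a)/2*x               # transferred zeros on [a, b]
    return (b - a)/2*np.sum(w*h(tau))
```

For example, the 3-point rule integrates $t^3$ on $[0.2,0.9]$ exactly, since $3\le 2\cdot 3-1$.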

This quadrature formula will be used to compute $I^0_{ij}$. We summarize below the integration strategy for computing the nonzero entries $\tilde L_{i'j',ij}$ of $\tilde{\mathbf L}_n$.

(QL) For a nonzero entry $\tilde L_{i'j',ij}$ of $\tilde{\mathbf L}_n$, we compute $I^0_{ij}$ and $I^1_{ij}$ separately. To compute $I^0_{ij}$, we divide the support $S_{ij}$ of $w_{ij}$ uniformly into $N$ intervals $I_\ell$, $\ell\in\mathbb Z_N$, where $N$ is a positive integer chosen so that the diameter of each interval is at most $\mu^{-\kappa r}$ and $w_{ij}$ is a polynomial on each $I_\ell$. The integral $I^0_{ij}$ is computed by the formula
$$
\sum_{\ell\in\mathbb Z_N}G\big(B_0(s,\cdot)\,w_{ij};\ I_\ell,\ (2\kappa)^{-1}n\big).
$$
The integral $I^1_{ij}$ is expressed in terms of equations (10.54) and (10.55) and computed using Lemma 10.19.

2 Approximate iteration for solving nonlinear systems

Algorithm 10.18 requires solving the two nonlinear systems (10.48) and (10.51). These equations are solved using the Newton method. In each iteration step of the Newton method we need to compute the entries of the Jacobian matrix. Specifically, for equation (10.48) the Newton iteration scheme has the following steps.

• Choose an initial guess $\mathbf u^{(0)}_k$.
• For $m=0,1,\dots$, compute
$$
F(\mathbf u^{(m)}_k)=(\mathbf E_k-\mathbf K_k)\,\mathbf u^{(m)}_k-\mathbf f^{(m)}_k,
$$
with
$$
\mathbf f^{(m)}_k=\big[\langle\ell_{i'j'},\,f+\mathcal L\Psi(u^{(m)}_k)\rangle:\ (i',j')\in U_k\big],\qquad u^{(m)}_k=\sum_{(i,j)\in U_k}(u^{(m)}_k)_{ij}w_{ij},
$$
and compute the Jacobian matrix
$$
J(\mathbf u^{(m)}_k)=[J_{i'j',ij}:\ (i',j'),(i,j)\in U_k]
$$
with
$$
J_{i'j',ij}=E_{i'j',ij}-K_{i'j',ij}-\langle\ell_{i'j'},\,\mathcal L(w_{ij}\,\Psi'(u^{(m)}_k))\rangle.
$$


• Solve $\Delta^{(m)}_k$ from the equation $J(\mathbf u^{(m)}_k)\,\Delta^{(m)}_k=-F(\mathbf u^{(m)}_k)$.
• Compute $\mathbf u^{(m+1)}_k=\mathbf u^{(m)}_k+\Delta^{(m)}_k$.

It is easily seen that the evaluation of both $F(\mathbf u^{(m)}_k)$ and $J(\mathbf u^{(m)}_k)$ involves computing integrals, which requires high computational cost. Solving equation (10.51) numerically also involves computing such integrals.

When evaluating $F(\mathbf u^{(m)}_k)$, we need to compute the integrals
$$
\langle\ell_{i'j'},\,\mathcal L\Psi(u^{(m)}_k)\rangle.\qquad(10.56)
$$

These integrals come from the integral operator $\mathcal L$. At different steps of the iteration we are required to compute different integrals, and as a result the computational cost is large. Note that if $\Psi(u^{(m)}_k)\in X_k$, then we can write it as
$$
\Psi(u^{(m)}_k)=\sum_{(i,j)\in U_k}c_{ij}\,w_{ij}
$$
and
$$
\langle\ell_{i'j'},\,\mathcal L\Psi(u^{(m)}_k)\rangle=\sum_{(i,j)\in U_k}c_{ij}\,L_{i'j',ij}.\qquad(10.57)
$$

Comparing (10.57) with (10.56), we observe that although both involve integral evaluation, (10.57) makes use of the previously computed entries of the matrix $\mathbf L_n$, so we do not have to recompute them. However, in general $\Psi(u^{(m)}_k)\notin X_k$, and we cannot write $\Psi(u^{(m)}_k)$ as a linear combination of the basis functions $w_{ij}$. For this reason we propose to project $\Psi(u^{(m)}_k)$ into $X_k$. Specifically, we do not solve (10.48) directly; instead, we solve $\mathbf u_k$ from the nonlinear system
$$
\Big\langle\ell_{i'j'},\,(I-(\mathcal K+\mathcal L\mathcal P_k\Psi))\Big(\sum_{(i,j)\in U_k}(u_k)_{ij}w_{ij}\Big)\Big\rangle=\langle\ell_{i'j'},f\rangle,\quad(i',j')\in U_k.\qquad(10.58)
$$

When we solve equation (10.58) by the Newton iteration method, we are required to compute the terms
$$
\Big\langle\ell_{i'j'},\,\mathcal L\mathcal P_k\Psi\Big(\sum_{(i,j)\in U_k}(u_k)_{ij}w_{ij}\Big)\Big\rangle,\quad(i',j')\in U_k,
$$


and their partial derivatives with respect to the variables $(u_k)_{ij}$, $(i,j)\in U_k$. To this end, we suppose that
$$
\mathcal P_k\Psi\Big(\sum_{(i,j)\in U_k}(u_k)_{ij}w_{ij}\Big)=\sum_{(i,j)\in U_k}(\tilde u_k)_{ij}\,w_{ij}.
$$
Then each $(\tilde u_k)_{ij}$ is a function of the variables $(u_k)_{ij}$, $(i,j)\in U_k$. In fact, if we let
$$
\mathbf F=\Big[\Big\langle\ell_{i'j'},\,\Psi\Big(\sum_{(i,j)\in U_k}(u_k)_{ij}w_{ij}\Big)\Big\rangle:\ (i',j')\in U_k\Big],
$$
then $\tilde{\mathbf u}_k=\mathbf E_k^{-1}\mathbf F$. Therefore it follows that
$$
\Big\langle\ell_{i'j'},\,\mathcal L\mathcal P_k\Psi\Big(\sum_{(i,j)\in U_k}(u_k)_{ij}w_{ij}\Big)\Big\rangle=\sum_{(i,j)\in U_k}L_{i'j',ij}\,(\tilde u_k)_{ij}.
$$

In a similar manner, we compute the partial derivatives of the above quantities with respect to the variables $(u_k)_{ij}$, $(i,j)\in U_k$. Making use of these observations, we describe the Newton iteration scheme for solving (10.58) as follows.

Algorithm 10.20 (The Newton iteration method for solving (10.58)) Set $m=0$ and $\mathbf f_k=[\langle\ell_{ij},f\rangle:\ (i,j)\in U_k]$, and choose an initial guess $\mathbf u^{(0)}_k$ and an iteration stopping threshold $\delta$.

Step 1: Let $u^{(m)}_k=\sum_{(i,j)\in U_k}(u^{(m)}_k)_{ij}w_{ij}$ and set
$$
G(\mathbf u^{(m)}_k)=\big[\langle\ell_{i'j'},\,\Psi'(u^{(m)}_k)\,w_{ij}\rangle:\ (i',j'),(i,j)\in U_k\big].
$$
Solve $\mathbf F^{(m)}_k$ from $\mathbf E_k\mathbf F^{(m)}_k=G(\mathbf u^{(m)}_k)$ and compute the Jacobian matrix
$$
J(\mathbf u^{(m)}_k)=\mathbf E_k-\mathbf K_k-\mathbf L_k\mathbf F^{(m)}_k.
$$

Step 2: For $\mathbf g^{(m)}_k=\big[\langle\ell_{ij},\,\Psi(u^{(m)}_k)\rangle:\ (i,j)\in U_k\big]$, solve $\tilde{\mathbf u}^{(m)}_k$ from $\mathbf E_k\tilde{\mathbf u}^{(m)}_k=\mathbf g^{(m)}_k$. Compute
$$
F(\mathbf u^{(m)}_k)=(\mathbf E_k-\mathbf K_k)\,\mathbf u^{(m)}_k-\mathbf L_k\tilde{\mathbf u}^{(m)}_k-\mathbf f_k.
$$

Step 3: Solve $\Delta^{(m)}_k$ from $J(\mathbf u^{(m)}_k)\,\Delta^{(m)}_k=-F(\mathbf u^{(m)}_k)$ and compute $\mathbf u^{(m+1)}_k=\mathbf u^{(m)}_k+\Delta^{(m)}_k$.

Step 4: Set $m\leftarrow m+1$ and go back to Step 1 until $\|\Delta^{(m)}_k\|_\infty<\delta$.
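The structure of this iteration can be exercised on a small finite-dimensional analogue. Everything in the sketch below is our own toy instantiation, not the book's discretization: $\mathbf E_k=I$, random stand-ins for the matrices $\mathbf K_k$ and $\mathbf L_k$, the pointwise nonlinearity $\psi(u)=u^3$ in place of $\Psi$, and a manufactured solution. The point illustrated is that the Jacobian $I-\mathbf K-\mathbf L\,\mathrm{diag}(\psi'(u))$ reuses the stored matrix $\mathbf L$ at every Newton step instead of evaluating new integrals.

```python
import numpy as np

rng = np.random.default_rng(1)
n = 16
K = 0.1*rng.standard_normal((n, n))/np.sqrt(n)   # stand-ins for the compressed kernel matrices
L = 0.1*rng.standard_normal((n, n))/np.sqrt(n)
psi  = lambda u: u**3                            # toy nonlinearity in place of Psi
dpsi = lambda u: 3*u**2

u_star = np.sin(2*np.pi*np.arange(n)/n)/2        # manufactured solution
f = u_star - K @ u_star - L @ psi(u_star)        # so that u - K u - L psi(u) = f

u = f.copy()                                     # initial guess
for _ in range(30):
    F = u - K @ u - L @ psi(u) - f               # residual
    J = np.eye(n) - K - L @ np.diag(dpsi(u))     # Jacobian reuses the stored matrix L
    step = np.linalg.solve(J, -F)
    u = u + step
    if np.max(np.abs(step)) < 1e-13:
        break
```

With the mild nonlinearity and small matrix norms, the iteration settles in a handful of steps; no quantity other than the diagonal factor changes between Jacobian assemblies.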


It is worth noticing that in Algorithm 10.20 we need not evaluate any additional integrals but instead make use of the matrix $\mathbf L_k$. This saves tremendous computational effort and thus makes the algorithm very fast. We shall see from the numerical examples in Section 10.4.2 that (10.58) is solved much faster than (10.48).

Equation (10.51) can be approximated in a similar manner. Specifically, in Step 3 of Algorithm 10.18 we replace (10.51) by
$$
\Big\langle\ell_{i'j'},\,(I-(\mathcal K+\mathcal L\mathcal P_{k+l}\Psi))\Big(\sum_{(i,j)\in U_k}(u_{k,l})_{ij}w_{ij}+u^H_{k,l}\Big)\Big\rangle=\langle\ell_{i'j'},f\rangle,\quad(i',j')\in U_k.\qquad(10.59)
$$

The Newton iteration scheme for solving (10.59) can likewise be developed; it will be referred to as Algorithm 10.20′.

3 The modified MAM algorithm

We describe below the MAM algorithm employing the above two techniques.

Algorithm 10.21 (MAM with an accelerated quadrature and the Newton method) Let $k$ be a fixed positive integer. Given $m\in\mathbb N_0$, we carry out the following computing steps.

Step 1: Use Algorithm 10.20 to solve the nonlinear system (10.58) and obtain the solution $\mathbf u_k=[(u_k)_{ij}:\ (i,j)\in U_k]$. Set $\mathbf u_{k,0}=\mathbf u_k$ and $l=1$.

Step 2: Follow Step 2 of Algorithm 10.18 to generate $\tilde{\mathbf f}_{k,l}$ and solve $\mathbf E^H_{k,l}\mathbf u^H_{k,l}=\tilde{\mathbf f}_{k,l}$ to obtain $\mathbf u^H_{k,l}=[(u_{k,l})_{ij}:\ (i,j)\in U_{k,l}]$, and define $u^H_{k,l}=\sum_{(i,j)\in U_{k,l}}(u_{k,l})_{ij}w_{ij}$.

Step 3: Use Algorithm 10.20′ to solve the nonlinear system (10.59) and obtain the solution $\mathbf u^L_{k,l}=[(u_{k,l})_{ij}:\ (i,j)\in U_k]$. Define $u^L_{k,l}=\sum_{(i,j)\in U_k}(u_{k,l})_{ij}w_{ij}$ and $u_{k,l}=u^L_{k,l}+u^H_{k,l}$.

Step 4: Set $l\leftarrow l+1$ and go back to Step 2 until $l=m$.

10.3.3 Computational complexity

We now estimate the computational effort by the number of multiplications and functional evaluations. Before presenting the estimates, we review several properties of the multiscale bases and collocation functionals established in Chapter 7, for later reference.

(I) For any $(i,j)\in U:=\{(i,j):\ i\in\mathbb N_0,\ j\in\mathbb Z_{w(i)}\}$, there are at most $(\mu-1)r-1$ functions $w_{ij'}$, $j'\in\mathbb Z_{w(i)}$, such that $\mathrm{meas}(S_{ij}\cap S_{ij'})\neq 0$.


(II) For any $i',i\in\mathbb N_0$ with $i\le i'$, it holds that
$$
\langle\ell_{i'j'},w_{ij}\rangle=\delta_{i'i}\,\delta_{j'j},\quad j'\in\mathbb Z_{w(i')},\ j\in\mathbb Z_{w(i)},
$$
where $\delta_{i'i}$ is the Kronecker delta.

(III) For any polynomial $p$ of order $\le r$ and any $(i,j)\in U$ with $i\ge 1$,
$$
\langle\ell_{ij},p\rangle=0,\qquad (w_{ij},p)=0,
$$
where $(\cdot,\cdot)$ denotes the inner product in $L^2(0,1)$.

(IV) There exists a positive constant $\theta_0$ such that $\|\ell_{ij}\|\le\theta_0$ and $\|w_{ij}\|_\infty\le\theta_0$ for all $(i,j)\in U$.

(V) There exist positive constants $c_-$ and $c_+$ such that for all $i\in\mathbb N_0$,
$$
c_-\mu^i\le w(i)\le c_+\mu^i,\qquad c_-\mu^{-i}\le\max_{j\in\mathbb Z_{w(i)}}|\mathrm{supp}(w_{ij})|\le c_+\mu^{-i}.
$$

(VI) Any $v\in X_n$ has a unique expansion
$$
v=\sum_{(i,j)\in U_n}v_{ij}\,w_{ij},
$$
and there exist positive constants $\theta_1$ and $\theta_2$ such that
$$
\theta_1\|\mathbf v\|_\infty\le\|v\|_\infty\le\theta_2(n+1)\,\|\mathbf E_n\mathbf v\|_\infty,
$$
in which $\mathbf v=[v_{ij}:\ (i,j)\in U_n]$ and $\mathbf E_n=[\langle\ell_{i'j'},w_{ij}\rangle:\ (i',j'),(i,j)\in U_n]$.

When solving the nonlinear systems (10.58) and (10.59), we are required

to generate the matrices $\tilde{\mathbf K}_n$ and $\tilde{\mathbf L}_n$. The truncated matrix $\tilde{\mathbf K}_n$ is evaluated by the same strategies as in Algorithm 10.18, while the truncated matrix $\tilde{\mathbf L}_n$ is evaluated using the quadrature method (QL). The following lemma establishes estimates for the numbers of nonzero entries of the truncated matrices, as well as for the numbers of multiplications and functional evaluations needed to compute these nonzero entries.

Lemma 10.22 For any $n\in\mathbb N_0$,
$$
\mathcal N(\tilde{\mathbf K}_n)=O(n\mu^n),\qquad \mathcal N(\tilde{\mathbf L}_n)=O(n\mu^n),
$$
where $\mathcal N(\cdot)$ denotes the number of nonzero entries of a matrix. The numbers of functional evaluations and multiplications needed for generating $\tilde{\mathbf K}_n$ and $\tilde{\mathbf L}_n$ are both $O(n\mu^n)$.

Proof Define the matrix blocks
$$
\mathbf K_{i'i}=[K_{i'j',ij}:\ j'\in\mathbb Z_{w(i')},\ j\in\mathbb Z_{w(i)}],\qquad
\tilde{\mathbf K}_{i'i}=[\tilde K_{i'j',ij}:\ j'\in\mathbb Z_{w(i')},\ j\in\mathbb Z_{w(i)}],\quad i',i\in\mathbb Z_{n+1}.
$$


We obtain from property (V) that $\mathcal N(\tilde{\mathbf K}_{i'i})=O(\mu^{i'+i})$ for $i',i\in\mathbb Z_{n+1}$. Direct calculation leads to
$$
\sum_{i\in\mathbb Z_{n+1}}\ \sum_{i'=0}^{n-i}\mu^{i'+i}=O(n\mu^n).
$$
In light of property (I), the number of functional evaluations in computing the entries $\tilde K_{i'j',ij}$, $j\in\mathbb Z_{w(i)}$, is $O((2\kappa)^{-1}n\mu^{\kappa r}+\mu^i)$ for any $j'\in\mathbb Z_{w(i')}$. Thus, generating the whole block $\tilde{\mathbf K}_{i'i}$ requires $O([(2\kappa)^{-1}n\mu^{\kappa r}+\mu^i]\mu^{i'})$ functional evaluations. The total number of functional evaluations for generating $\tilde{\mathbf K}_n$ is then obtained by a simple summation. It is easily observed from the expression of the quadrature rules that the number of multiplications involved is of the same order as that of functional evaluations.

For the estimates on $\tilde{\mathbf{L}}_n$, we let

$\mathbf{L}_{i'i} = [L_{i'j',ij} : j'\in\mathbb{Z}_{w(i')},\, j\in\mathbb{Z}_{w(i)}], \quad i',i\in\mathbb{Z}_{n+1}.$

For kernels whose singularity points lie along the diagonal, the entries that must be calculated concentrate along the diagonal of each block. For kernels whose singularity is described by (10.31), it is not difficult to observe that the entries that have to be computed concentrate along the diagonal and in the corners at the top right and bottom left of each block. We obtain the same estimate of $\mathcal{N}(\tilde{\mathbf{L}}_n)$ as that in Theorem 7.15 of Chapter 7 (cf. Theorem 4.6 of [69]). We now estimate the computational cost of generating the matrix $\tilde{\mathbf{L}}_n$. According to the integration strategy (QL), for each pair $i$ and $j$, computing $I^1_{ij}$ or $I^0_{ij}$ requires only a fixed number of functional evaluations and multiplications. Thus the estimates of this lemma are proved.
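The counting argument above is easy to check numerically. The following sketch (not from the book) brute-forces the double sum for $\mu = 2$, the value used in the numerical experiments later in this chapter, and compares it with the closed form $n\,2^{n+1}+1$ obtained by collapsing the sum along the diagonals $i + i' = k$:

```python
# Sanity check of the entry-count estimate in Lemma 10.22: the double sum
#   sum_{i in Z_{n+1}} sum_{i'=0}^{n-i} mu^(i'+i)
# grows like O(n mu^n).  We take mu = 2, an illustrative choice matching
# the experiments in Section 10.4.

def block_entry_count(n, mu=2):
    """Brute-force evaluation of the double sum over block indices with i + i' <= n."""
    return sum(mu ** (i + ip) for i in range(n + 1) for ip in range(n - i + 1))

# Grouping the terms by diagonal i + i' = k gives sum_{k=0}^{n} (k+1) mu^k,
# which for mu = 2 equals n * 2^(n+1) + 1, visibly of order n * mu^n.
for n in range(1, 12):
    assert block_entry_count(n) == n * 2 ** (n + 1) + 1
```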

Lemma 10.23  For a fixed $k$ and any $l\in\mathbb{N}_0$, the numbers of functional evaluations and multiplications for evaluating $\mathbf{f}_{k,l}$ are both $\mathcal{O}((k+l)\mu^{k+l})$.

Proof  Making use of the estimates of the numbers of nonzero entries of the truncated matrices, the number of multiplications used in obtaining $\mathbf{f}_{k,l}$ from (10.49) is $\mathcal{O}((k+l)\mu^{k+l})$. It follows from Lemma 10.22 that the numbers of multiplications and functional evaluations for generating $\tilde{\mathbf{K}}^H_{k,l-1}$ are $\mathcal{O}((k+l-1)\mu^{k+l-1})$, while those for generating $\tilde{\mathbf{L}}^H_{k,l}$ are $\mathcal{O}((k+l)\mu^{k+l})$. Moreover, it is easily obtained from the definition of $\mathbf{f}_{k,l}$ that the numbers of multiplications and functional evaluations for computing it are $\mathcal{O}(\mu^{k+l})$. Therefore it remains to estimate the computational effort for $\mathbf{u}_{k+l}$.

Since, for $i\in\mathbb{Z}_{k+l+1}$ and a point $P\in[0,1]$, there is a bounded number of functions $w_{ij}$ not vanishing at $P$, and there is a bounded number of point evaluations in each $\ell_{ij}$, the numbers of point evaluations and multiplications for calculating


each component of $\mathbf{g}_{k+l}$ are $\mathcal{O}(k+l)$. Hence those for computing the vector $\mathbf{g}_{k+l}$ are $\mathcal{O}((k+l)\mu^{k+l})$. Noting that $\mathbf{E}_{k+l}$ is a sparse upper triangular matrix with $\mathcal{O}((k+l)\mu^{k+l})$ nonzero entries, the solution of the linear system (10.45) needs $\mathcal{O}((k+l)\mu^{k+l})$ multiplications. The result of the lemma then follows by adding the above estimates together.

Combining the estimate in the above lemma for $\mathbf{f}_{k,l}$ with those for the other computing steps in Algorithm 10.18 obtained in the last section (see also [76]), we have the following lemma.

Lemma 10.24  If the truncated matrix $\tilde{\mathbf{L}}_n$ is evaluated using the quadrature method (QL), then the numbers of functional evaluations and multiplications in Algorithm 10.18 are $\mathcal{O}((k+m)\mu^{k+m})$.

The next theorem gives an estimate of the computational cost required by Algorithm 10.21.

Theorem 10.25  Let $k$ be a fixed positive integer. For any $m\in\mathbb{N}_0$, the numbers of functional evaluations and multiplications used in Algorithm 10.21 are $\mathcal{O}((k+m)\mu^{k+m})$.

Proof  The computational cost of Algorithm 10.21 is composed of two parts: the cost of generating the matrices $\tilde{\mathbf{K}}_{k+m}$ and $\tilde{\mathbf{L}}_{k+m}$, and the cost of carrying out the computing steps listed in the algorithm. Lemma 10.22 has shown that the computational cost of generating both $\tilde{\mathbf{L}}_{k+m}$ and $\tilde{\mathbf{K}}_{k+m}$ is $\mathcal{O}((k+m)\mu^{k+m})$. To estimate the effort of the computing steps in Algorithm 10.21, we need only compare Algorithm 10.21 with Algorithm 10.18. We observe that Algorithm 10.21 replaces (10.48) and (10.51) by (10.58) and (10.59), respectively, and we have shown that these modifications reduce the computational cost. Therefore the computational cost of carrying out the computing steps of Algorithm 10.21 is less than that of Algorithm 10.18. It is stated in Lemma 10.24 that the numbers of functional evaluations and multiplications in Algorithm 10.18 are $\mathcal{O}((k+m)\mu^{k+m})$. Thus the theorem is proved.

We remark that if the quadrature method (QL) is not used for generating the truncated matrix $\tilde{\mathbf{L}}_n$, then the number of functional evaluations and multiplications used in Algorithm 10.21 is $\mathcal{O}((k+m)^3\mu^{k+m})$ (cf. [51]). Moreover, although Algorithm 10.21 has the same order of computational cost as Algorithm 10.18, the constant involved in the order is improved.


10.3.4 Convergence analysis

In the last subsection we proved that the computational cost of Algorithm 10.21 is nearly linear. In this subsection we establish the order of its approximation accuracy. More precisely, we show that the output $\tilde{u}_{k,m}$ of Algorithm 10.21 maintains much of the convergence order of the approximation $u_{k,m}$ generated by Algorithm 10.18. Throughout the rest of this subsection we assume, without further mention, that the function $g$ in the nonlinear boundary condition is continuously differentiable with respect to its second variable.

We begin with some necessary preparation. For $n\in\mathbb{N}_0$, we define the operator $\tilde{\mathcal{K}}_n : X_n \to X_n$ by requiring

$\tilde{K}_{i'j',ij} = \langle \ell_{i'j'}, \tilde{\mathcal{K}}_n w_{ij}\rangle, \quad (i',j'),(i,j)\in U_n.$

Clearly, $\tilde{\mathcal{K}}_n$ is uniquely determined. Likewise, we define the operator $\tilde{\mathcal{L}}_n : X_n \to X_n$ by requiring

$\tilde{L}_{i'j',ij} = \langle \ell_{i'j'}, \tilde{\mathcal{L}}_n w_{ij}\rangle, \quad (i',j'),(i,j)\in U_n.$

To estimate the error between $\tilde{u}_{k,m}$ and $u_{k,m}$, we need to estimate the errors $\tilde{\mathcal{K}}_n - \mathcal{K}_n$ and $\tilde{\mathcal{L}}_n - \mathcal{L}_n$, where $\mathcal{K}_n = \mathcal{P}_n\mathcal{K}|_{X_n}$ and $\mathcal{L}_n = \mathcal{P}_n\mathcal{L}|_{X_n}$.

In the following lemma we estimate the error introduced by the truncation and quadrature rules applied to the matrix $\mathbf{K}_n$.

Lemma 10.26  There exists a positive constant $c$ such that for all $i',i\in\mathbb{Z}_{n+1}$ and for all $n$,

$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty \le c\mu^{-rn}.$

Proof  We proceed in two cases. When $i'+i > n$, according to the strategy (T1) we have

$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty = \|\mathbf{K}_{i'i}\|_\infty.$  (10.60)

It follows from property (III) and (10.30) that, for $j'\in\mathbb{Z}_{w(i')}$ and $j\in\mathbb{Z}_{w(i)}$,

$|K_{i'j',ij}| \le c\mu^{-r(i'+i)}|S_{ij}|.$

We then make use of property (V) to obtain

$\sum_{j\in\mathbb{Z}_{w(i)}} |K_{i'j',ij}| \le c\mu^{-r(i'+i)}.$

Therefore

$\|\mathbf{K}_{i'i}\|_\infty = \max_{j'\in\mathbb{Z}_{w(i')}} \sum_{j\in\mathbb{Z}_{w(i)}} |K_{i'j',ij}| \le c\mu^{-r(i'+i)} \le c\mu^{-rn}.$


Substituting this estimate into (10.60) verifies the lemma for the case $i'+i > n$.

We now consider the case $i'+i \le n$. Noting that the Gaussian quadrature rule with $N$ points has algebraic accuracy of order $2N$, when $i'+i \le n$ we have

$|K_{i'j',ij} - \tilde{K}_{i'j',ij}| \le c(\mu^{-\kappa r})^{n/\kappa}|S_{ij}| \le c\mu^{-rn}|S_{ij}|.$

Utilizing property (V), we conclude that

$\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty = \max_{j'\in\mathbb{Z}_{w(i')}} \sum_{j\in\mathbb{Z}_{w(i)}} |K_{i'j',ij} - \tilde{K}_{i'j',ij}| \le c\mu^{-rn}.$

The estimate in Lemma 10.26 can be translated into operator form.

Lemma 10.27  There exists a positive constant $c_1$ such that for all $n\in\mathbb{N}_0$ and all $v\in X_n$,

$\|(\tilde{\mathcal{K}}_n - \mathcal{K}_n)v\|_\infty \le c_1(n+1)^2\mu^{-rn}\|v\|_\infty.$  (10.61)

Moreover, if $u^*\in W^{r,\infty}(0,1)$ and if $\|v - u^*\|_\infty \le c\mu^{-r(n-1)}\|u^*\|_{r,\infty}$ for some positive constant $c$, then there exists a positive constant $c_2$ such that for all $n\in\mathbb{N}_0$,

$\|(\tilde{\mathcal{K}}_n - \mathcal{K}_n)v\|_\infty \le c_2(n+1)\mu^{-rn}\|u^*\|_{r,\infty}.$  (10.62)

Proof  Let $\mathbf{h} = \mathbf{E}_n^{-1}(\tilde{\mathbf{K}}_n - \mathbf{K}_n)\mathbf{v}$. We expand $(\tilde{\mathcal{K}}_n - \mathcal{K}_n)v$ as

$(\tilde{\mathcal{K}}_n - \mathcal{K}_n)v = \sum_{(i,j)\in U_n} h_{ij}w_{ij}.$

Then property (VI) gives

$\|(\tilde{\mathcal{K}}_n - \mathcal{K}_n)v\|_\infty \le \theta_2(n+1)\|(\tilde{\mathbf{K}}_n - \mathbf{K}_n)\mathbf{v}\|_\infty.$  (10.63)

We next estimate $\|(\tilde{\mathbf{K}}_n - \mathbf{K}_n)\mathbf{v}\|_\infty$ in two different cases.

For (10.61), note that property (VI) leads to $\|\mathbf{v}\|_\infty \le \theta_1^{-1}\|v\|_\infty$. Moreover, by Lemma 10.26 there exists a positive constant $c$ such that for all $n\in\mathbb{N}_0$,

$\|\tilde{\mathbf{K}}_n - \mathbf{K}_n\|_\infty \le \max_{i'\in\mathbb{Z}_{n+1}} \sum_{i\in\mathbb{Z}_{n+1}} \|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty \le c(n+1)\mu^{-rn}.$

Combining the above estimates with (10.63) yields the estimate (10.61).


For the second inequality (10.62), we decompose $v$ into $v = \mathcal{P}_n u^* + (v - \mathcal{P}_n u^*)$, with

$\mathcal{P}_n u^* = \sum_{(i,j)\in U_n} \xi_{ij}w_{ij} \quad\text{and}\quad v - \mathcal{P}_n u^* = \sum_{(i,j)\in U_n} \zeta_{ij}w_{ij}.$

Since $u^*\in W^{r,\infty}(0,1)$, it holds that $|\xi_{ij}| \le c\mu^{-ri}\|u^*\|_{r,\infty}$. Moreover, it follows from property (VI) that

$|\zeta_{ij}| \le \theta_1^{-1}\|v - \mathcal{P}_n u^*\|_\infty \le c\mu^{-r(n-1)}\|u^*\|_{r,\infty}.$

These two estimates together imply that there exists a positive constant $c$ such that for all $(i,j)\in U_n$ and all $n\in\mathbb{N}_0$, $|v_{ij}| \le c\mu^{-ri}\|u^*\|_{r,\infty}$.

Define the matrix $\Delta_n = [\Delta_{i'j',ij} : (i',j'),(i,j)\in U_n]$ with $\Delta_{i'j',ij} = \mu^{r(n-i)}(K_{i'j',ij} - \tilde{K}_{i'j',ij})$, and the vector $\mathbf{v}' = [v'_{ij} : (i,j)\in U_n]$ with $v'_{ij} = \mu^{ri}v_{ij}$. Then $\|\mathbf{v}'\|_\infty \le c\|u^*\|_{r,\infty}$ and

$\|(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_\infty \le \mu^{-rn}\|\Delta_n\|_\infty\|\mathbf{v}'\|_\infty.$

For any $(i',j')\in U_n$, it follows from Lemma 10.26 that

$\sum_{(i,j)\in U_n} |\Delta_{i'j',ij}| \le \sum_{i\in\mathbb{Z}_{n+1}} \mu^{r(n-i)}\|\mathbf{K}_{i'i} - \tilde{\mathbf{K}}_{i'i}\|_\infty \le c.$

Therefore

$\|\Delta_n\|_\infty = \max_{(i',j')\in U_n} \sum_{(i,j)\in U_n} |\Delta_{i'j',ij}| \le c,$

which leads to the estimate

$\|(\mathbf{K}_n - \tilde{\mathbf{K}}_n)\mathbf{v}\|_\infty \le c\mu^{-rn}\|u^*\|_{r,\infty}.$

Combining this estimate with (10.63) proves the desired result (10.62).

To estimate the error $\tilde{\mathcal{L}}_n - \mathcal{L}_n$, we define the matrix blocks

$\mathbf{L}_{i'i} = [L_{i'j',ij} : j'\in\mathbb{Z}_{w(i')},\, j\in\mathbb{Z}_{w(i)}], \quad \tilde{\mathbf{L}}_{i'i} = [\tilde{L}_{i'j',ij} : j'\in\mathbb{Z}_{w(i')},\, j\in\mathbb{Z}_{w(i)}], \quad i',i\in\mathbb{Z}_{n+1}.$

The difference between $\mathbf{L}_{i'i}$ and $\tilde{\mathbf{L}}_{i'i}$ that results from both the truncation strategy (T2) and the quadrature strategy (QL) has the bound

$\|\mathbf{L}_{i'i} - \tilde{\mathbf{L}}_{i'i}\|_\infty \le c\max\{\mu^{-rn},\ (\varepsilon^n_{i'i})^{-(2r-\sigma')}\mu^{-r(i'+i)}\}.$

By the definition (10.47) of the truncation parameters $\varepsilon^n_{i'i}$, we conclude that

$\|\mathbf{L}_{i'i} - \tilde{\mathbf{L}}_{i'i}\|_\infty \le c(\varepsilon^n_{i'i})^{-(2r-\sigma')}\mu^{-r(i'+i)}.$

This leads to the following lemma.


Lemma 10.28  Let $\sigma'\in(0,1)$, $\eta = 2r - \sigma'$, and set $b = 1$, $b'\in(r/\eta, 1)$ in (10.47). Then there exists a positive constant $c_1$ such that for all $n\in\mathbb{N}_0$ and all $v\in X_n$,

$\|(\tilde{\mathcal{L}}_n - \mathcal{L}_n)v\|_\infty \le c_1(n+1)\mu^{-\sigma'n}\|v\|_\infty.$  (10.64)

Moreover, if $u^*\in W^{r,\infty}(0,1)$ and if $v$ satisfies $\|v - u^*\|_\infty \le c\mu^{-r(n-1)}\|u^*\|_{r,\infty}$ for some positive constant $c$, then there exists a positive constant $c_2$ such that

$\|(\tilde{\mathcal{L}}_n - \mathcal{L}_n)\mathcal{P}_n v\|_\infty \le c_2(n+1)^2\mu^{-(r+\sigma')n}\|u^*\|_{r,\infty}.$  (10.65)

Proof  Since most of the proof of this lemma is similar to that of Lemma 10.27, we omit the details. In the proof of (10.65), since $g$ is continuously differentiable with respect to its second variable, we may use the standard estimate for the superposition operator, $\|g(\cdot, u^*(\cdot))\|_{r,\infty} \le c\|u^*\|_{r,\infty}$ (see, for example, [238]), to conclude the second estimate.

In the next lemma we estimate the error between the outputs of Algorithms 10.18 and 10.21.

Lemma 10.29  If $u^*\in W^{r,\infty}(0,1)$, then there exists a positive constant $c$ such that, for all sufficiently large integers $k$ and for all $l\in\mathbb{Z}_{m+1}$,

$\|\tilde{u}_{k,l} - u_{k,l}\|_\infty \le c(k+l+1)\mu^{-r(k+l)}\|u^*\|_{r,\infty}.$

Proof  We prove this lemma by induction on $l$. For the case $l = 0$, we need to prove that the solution $\tilde{u}_k$ of

$\tilde{u}_k - \mathcal{P}_k(\tilde{\mathcal{K}} + \tilde{\mathcal{L}}\mathcal{P}_k)\tilde{u}_k = \mathcal{P}_k f$  (10.66)

satisfies

$\|\tilde{u}_k - u_k\|_\infty \le c(k+1)\mu^{-kr}\|u^*\|_{r,\infty}.$  (10.67)

Following a standard argument, we may show that the solution of equation (10.66) has the bound

$\|\tilde{u}_k - u^*\|_\infty \le c\mu^{-kr}\|u^*\|_{r,\infty}.$

Thus, by the triangle inequality, we establish the estimate (10.67).

We now assume that the result of this lemma holds for $l-1$ and consider the case $l$. Note that step 3 of Algorithm 10.21 is the same as step 3 of Algorithm 10.18. According to Algorithms 10.18 and 10.21, we obtain

$\tilde{u}^H_{k,l} - u^H_{k,l} = u_A + u_B,$  (10.68)

where

$u_A = (\mathcal{P}_{k+l} - \mathcal{P}_k)\bigl(\tilde{\mathcal{K}}_{k+l}\tilde{u}_{k,l-1} - \mathcal{K}_{k+l}u_{k,l-1}\bigr)$


and

$u_B = (\mathcal{P}_{k+l} - \mathcal{P}_k)\bigl(\tilde{\mathcal{L}}_{k+l}\mathcal{P}_{k+l}\tilde{u}_{k,l-1} - \mathcal{L}_{k+l}\mathcal{P}_{k+l}u_{k,l-1}\bigr).$

Since the projections are uniformly bounded, it follows from (10.62) of Lemma 10.27 that

$\|u_A\|_\infty \le c(k+l+1)\mu^{-r(k+l)}\|u^*\|_{r,\infty}.$  (10.69)

By Lemma 10.28 and the induction hypothesis,

$\|u_B\|_\infty \le c(k+l+1)\mu^{-r(k+l)}\|u^*\|_{r,\infty}.$  (10.70)

From equations (10.68)–(10.70) we conclude that

$\|\tilde{u}^H_{k,l} - u^H_{k,l}\|_\infty \le c(k+l+1)\mu^{-r(k+l)}\|u^*\|_{r,\infty}.$  (10.71)

It remains to estimate $\tilde{u}^L_{k,l} - u^L_{k,l}$. Subtracting (10.51) from (10.59) yields the equation

$\tilde{u}^L_{k,l} - u^L_{k,l} = \mathcal{P}_k\bigl[(\mathcal{K} + \mathcal{L}\mathcal{P}_{k+l})\tilde{u}_{k,l} - (\mathcal{K} + \mathcal{L})u_{k,l}\bigr].$

Let $\mathcal{B} = (\mathcal{K} + \mathcal{L}\mathcal{P}_{k+l})'$ and

$\mathcal{R}(\tilde{u}_{k,l}, u_{k,l}) = (\mathcal{K} + \mathcal{L}\mathcal{P}_{k+l})\tilde{u}_{k,l} - (\mathcal{K} + \mathcal{L}\mathcal{P}_{k+l})u_{k,l} - \mathcal{B}(\tilde{u}_{k,l} - u_{k,l}).$

We then conclude that

$\tilde{u}^L_{k,l} - u^L_{k,l} = (\mathcal{I} - \mathcal{P}_k\mathcal{B})^{-1}\mathcal{P}_k\bigl[\mathcal{B}(\tilde{u}^H_{k,l} - u^H_{k,l}) + \mathcal{R}(\tilde{u}_{k,l}, u_{k,l}) + \mathcal{L}(\mathcal{P}_{k+l} - \mathcal{I})u_{k,l}\bigr].$

Note that there exist positive constants $c_1$ and $c_2$ such that for all $k,l\in\mathbb{N}_0$,

$\|(\mathcal{I} - \mathcal{P}_k\mathcal{B})^{-1}\mathcal{P}_k\| \le c_1$

and

$\|\mathcal{R}(\tilde{u}_{k,l}, u_{k,l})\|_\infty \le c_2\|\tilde{u}_{k,l} - u_{k,l}\|^2_\infty.$

From the last inequality and the fact that $\|\tilde{u}_{k,l} - u_{k,l}\|_\infty \to 0$ as $k\to\infty$, uniformly for $l\in\mathbb{N}_0$, we conclude that for all sufficiently large integers $k$ and for all $l\in\mathbb{Z}_{m+1}$,

$\|\mathcal{R}(\tilde{u}_{k,l}, u_{k,l})\|_\infty \le \frac{1}{2c_1}\bigl(\|\tilde{u}^L_{k,l} - u^L_{k,l}\|_\infty + \|\tilde{u}^H_{k,l} - u^H_{k,l}\|_\infty\bigr).$

Moreover, there exists a positive constant $c_3$ such that for all sufficiently large integers $k$ and for all $l\in\mathbb{Z}_{m+1}$,

$\|\mathcal{L}(\mathcal{P}_{k+l} - \mathcal{I})u_{k,l}\|_\infty \le c_3\mu^{-r(k+l)}\|u^*\|_{r,\infty}.$

Combining the above inequalities yields the estimate

$\|\tilde{u}^L_{k,l} - u^L_{k,l}\|_\infty \le c'\bigl(\|\tilde{u}^H_{k,l} - u^H_{k,l}\|_\infty + \mu^{-r(k+l)}\|u^*\|_{r,\infty}\bigr)$


for some positive constant $c'$, which, together with (10.71), leads to the desired estimate of this lemma for the case $l$ and completes the proof.

The above lemma leads to the following error estimate for the approximate solutions generated by Algorithm 10.21.

Theorem 10.30  If $u^*\in W^{r,\infty}[0,1]$, then there exists a positive constant $c$ such that for all sufficiently large $k$ and all $m\in\mathbb{N}_0$,

$\|u^* - \tilde{u}_{k,m}\|_\infty \le c(k+m+1)\mu^{-r(k+m)}\|u^*\|_{r,\infty}.$

10.4 Numerical experiments

In this section we present numerical examples to demonstrate the performance of the multiscale methods for solving the Hammerstein equation and nonlinear boundary integral equations.

10.4.1 Numerical examples of the Hammerstein equation

We present in this subsection four numerical experiments to verify the theoretical estimates obtained in Section 10.2. The computer programs were run on a personal computer with a 2.8-GHz CPU and 1 GB of memory.

Example 1  Consider the equation

$u(s) - \int_0^1 \sin(\pi(s+t))\,u^2(t)\,dt = \sin(\pi s) - \frac{4}{3\pi}\cos(\pi s), \quad s\in[0,1].$  (10.72)

The equation has an isolated solution $u^*(s) = \sin(\pi s)$.

We use the collocation method via the piecewise linear polynomial basis for the numerical solution of the equation. Specifically, we choose $X_n$ as the space of piecewise linear polynomials with knots at $j/2^n$, $j = 1,2,\dots,2^n-1$. Hence $\dim(X_n) = 2^{n+1}$. The basis functions of $X_0$ and $W_1$ are given respectively by $w_{00}(t) = -3t+2$, $w_{01}(t) = 3t-1$, $t\in[0,1]$, and

$w_{10}(t) = \begin{cases} 1 - \frac{9}{2}t, & t\in[0,\frac12),\\[2pt] \frac{3}{2}t - 1, & t\in[\frac12,1],\end{cases} \qquad w_{11}(t) = \begin{cases} \frac12 - \frac{3}{2}t, & t\in[0,\frac12),\\[2pt] \frac{9}{2}t - \frac{7}{2}, & t\in[\frac12,1].\end{cases}$

The corresponding collocation functionals are

$\ell_{00} = \delta_{1/3}, \quad \ell_{01} = \delta_{2/3}, \quad \ell_{10} = \delta_{1/6} - \tfrac{3}{2}\delta_{1/3} + \tfrac12\delta_{2/3}, \quad \ell_{11} = \tfrac12\delta_{1/3} - \tfrac{3}{2}\delta_{2/3} + \delta_{5/6},$


where $\langle\delta_t, f\rangle = f(t)$. The basis functions $w_{ij}$, $i > 1$, $j\in\mathbb{Z}_{w(i)}$, and the corresponding functionals $\ell_{ij}$, $i > 1$, $j\in\mathbb{Z}_{w(i)}$, are constructed recursively by applying linear operators related to the contractive mappings $\phi_0(t) = \frac{t}{2}$ and $\phi_1(t) = \frac{t+1}{2}$, $t\in[0,1]$.

We solve equation (10.72) by Algorithm 10.9 with the initial level $k = 4$, based on the multiscale collocation method developed via these basis functions and corresponding collocation functionals. The nonlinear system (10.15) related to (10.72) is solved by the Newton iteration method. For comparison purposes, we also solve the collocation equation (10.3) using the Newton iteration, which will be called the direct Newton method, and the two-grid method.
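Two quick sanity checks for Example 1, sketched in Python (hypothetical helper names, not the book's code): first, that $u^*(s) = \sin(\pi s)$ indeed satisfies (10.72), using a simple midpoint rule for the integral; second, that the second-level collocation functionals $\ell_{10}$ and $\ell_{11}$ listed above annihilate linear polynomials.

```python
import math

def residual(s, n=4000):
    """Residual of (10.72) at s for u(t) = sin(pi t), midpoint quadrature."""
    h = 1.0 / n
    integral = sum(
        math.sin(math.pi * (s + t)) * math.sin(math.pi * t) ** 2 * h
        for t in ((j + 0.5) * h for j in range(n))
    )
    rhs = math.sin(math.pi * s) - 4.0 / (3.0 * math.pi) * math.cos(math.pi * s)
    return math.sin(math.pi * s) - integral - rhs

assert all(abs(residual(s)) < 1e-6 for s in (0.0, 0.17, 0.5, 0.83))

# l_10 = d_{1/6} - (3/2) d_{1/3} + (1/2) d_{2/3};
# l_11 = (1/2) d_{1/3} - (3/2) d_{2/3} + d_{5/6}  (points and weights as listed above)
L10 = [(1 / 6, 1.0), (1 / 3, -1.5), (2 / 3, 0.5)]
L11 = [(1 / 3, 0.5), (2 / 3, -1.5), (5 / 6, 1.0)]

def apply(functional, f):
    return sum(c * f(t) for t, c in functional)

for f in (lambda t: 1.0, lambda t: t):          # the linear polynomials 1 and t
    assert abs(apply(L10, f)) < 1e-14
    assert abs(apply(L11, f)) < 1e-14
```

Both checks pass: the residual of (10.72) is at quadrature-error level, and both functionals vanish on linears, consistent with the piecewise linear (order $r = 2$) construction.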

We report the numerical results in Table 10.1. Columns 3 and 4 of Table 10.1 list, respectively, the computed errors and the computed convergence orders (CO) of the approximate solution $\tilde{u}_{4,m}$ of equation (10.72) obtained by Algorithm 10.9. In column 5 we present the computing time TM, measured in seconds, when Algorithm 10.9 is used. These numerical results confirm the theoretical estimates presented in the last subsection, noting that the theoretical order of convergence for the piecewise linear approximation is 2, and that the computing time is linear with respect to the dimension of the subspace. In column 6 we list the computed errors of the approximate solution $u_{4+m}$ obtained by the direct Newton method, and in column 7 the computing time TN, in seconds, when the direct Newton method is used to solve (10.3). For the direct Newton method we only compute the results for $m \le 5$, since the computing time becomes excessive for larger-scale problems and the data we obtained already illustrate clearly the computational efficiency of the proposed method. The numerical errors $\|u^* - u^G_{4+m}\|_\infty$ and the computing time TG of the two-grid method are listed in columns 8 and 9, respectively. In Figure 10.2 we plot the computing times of the proposed method, the direct Newton method and the

Table 10.1  Numerical results for Example 1

m | s(4+m) | ‖u*−ũ_{4,m}‖∞ | CO | TM | ‖u*−u_{4+m}‖∞ | TN | ‖u*−u^G_{4+m}‖∞ | TG
0 | 32 | 5.142e-3 |  | 4.5 | 5.142e-3 | 4.6 | 5.142e-3 | 4.6
1 | 64 | 1.282e-3 | 2.00 | 8.8 | 1.282e-3 | 12.8 | 1.285e-3 | 7.8
2 | 128 | 3.190e-4 | 2.01 | 14.3 | 3.192e-4 | 35.5 | 3.192e-4 | 13.2
3 | 256 | 7.962e-5 | 2.00 | 20.1 | 7.967e-5 | 96.0 | 7.972e-5 | 24.2
4 | 512 | 1.994e-5 | 2.00 | 27.6 | 1.996e-5 | 277.3 | 1.995e-5 | 57.4
5 | 1024 | 5.057e-6 | 1.98 | 38.2 | 5.061e-6 | 774.5 | 5.063e-6 | 142.8
6 | 2048 | 1.240e-6 | 2.03 | 54.4 | 1.243e-6 | 2186.2 | 1.239e-6 | 406.2
7 | 4096 | 3.094e-7 | 2.00 | 80.5 |  |  |  |
8 | 8192 | 7.740e-8 | 2.00 | 130.5 |  |  |  |
9 | 16384 | 1.932e-8 | 2.00 | 229.4 |  |  |  |


Table 10.2  Computing time for Example 1

m | T1 | T2 | R(T2) | T3 | T4 | Δ(T4) | T′4
1 | <0.01 | 0.062 |  | <0.01 | 3.500 |  | 0.484
2 | <0.01 | 0.375 | 6.049 | <0.01 | 7.813 | 4.31 | 1.047
3 | <0.01 | 1.172 | 3.125 | <0.01 | 12.08 | 4.27 | 1.687
4 | <0.01 | 2.828 | 2.413 | <0.01 | 17.11 | 5.03 | 2.375
5 | <0.01 | 6.859 | 2.425 | <0.01 | 23.02 | 5.91 | 3.234
6 | 0.015 | 15.83 | 2.308 | <0.01 | 29.41 | 6.39 | 4.077
7 | 0.031 | 34.61 | 2.186 | <0.01 | 36.30 | 6.89 | 5.125
8 | 0.078 | 76.08 | 2.198 | <0.01 | 44.11 | 7.81 | 6.173
9 | 0.187 | 166.6 | 2.190 | <0.01 | 51.58 | 7.47 | 7.188

Figure 10.2  Growth of TM (∗), TG (+) and TN for Example 1: computational time plotted against the level m.

two-grid method. The figure shows that TN grows much faster than the other two as the level $m$ increases, and that TM grows the most slowly.

We next verify numerically the estimates of $M_{km,j}$, $j = 1,2,3,4$, established in Section 10.1.4. In Table 10.2 we list the computing times for the four main procedures listed in Section 10.1.4, all measured in seconds. In the table, T1 denotes the computing time for generating the coefficient matrix $\mathbf{E}_{k,m}$; T2 is the total time for evaluating the vectors $\mathbf{f}_{k,l}$, $l = 1,2,\dots,m$; T3 is the total time for solving the linear systems (10.21) for $l = 1,2,\dots,m$; and T4 stands for the total time spent in solving the nonlinear system (10.15) using the Newton iteration, updating the Jacobian matrix at each iteration step (strategy 1). We observe that T1 and T3 are small in comparison with T2 and T4. The column "R(T2)" reports the growth ratio of T2, where the data are obtained by calculating $T2(m)/T2(m-1)$.


The column "Δ(T4)" lists the difference $T4(m) - T4(m-1)$ of two successive values of T4. We observe that the values in the column "R(T2)" are close to 2, which coincides with the estimate $M_{km,2} = \mathcal{O}(\mu^{k+m})$ with $\mu = 2$. The values in the column "Δ(T4)" are nearly constant, and this verifies the estimate $M_{km,4} = \mathcal{O}(m)$.

We next use strategy 2 for the Newton iteration in step 3 of Algorithm 10.9 to solve the nonlinear system (10.15). Specifically, we use the last Jacobian matrix obtained in step 1 for all steps of the Newton iteration, without updating it, taking a few more iteration steps to ensure that the same approximation accuracy is obtained. We list in the last column of Table 10.2, under "T′4", the computing time for solving the nonlinear system (10.15) by the Newton iteration without updating the Jacobian matrix (strategy 2), with the numbers of iterations listed in Table 10.3. For comparison, Table 10.3 also lists the numbers of iterations used by strategy 1. Comparing the data of "T4" and "T′4" in the same row, we observe that the modification cuts down the computational cost remarkably. We plot the values of T2, T4 and T′4 in Figure 10.3. Note that T2 grows the fastest and occupies most of the running

Table 10.3  A comparison of the iteration numbers used in step 3 with strategies 1 and 2

Level      | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9
Strategy 1 | 3 | 2 | 2 | 2 | 2 | 2 | 2 | 1 | 1
Strategy 2 | 5 | 5 | 5 | 4 | 4 | 4 | 4 | 3 | 3

Figure 10.3  Growth of T2 (∗), T4 (+) and T′4 for Example 1.


time of the algorithm when $m$ is large. Moreover, both T4 and T′4 grow nearly linearly with respect to the number of levels, and they satisfy the relation

T4 ≈ 7.2 T′4.

This suggests that we should use strategy 2 in step 3 of Algorithm 10.9. Hence, in the rest of the numerical experiments we always use strategy 2.

Example 2  We again consider the numerical solution of equation (10.72). This time we solve the equation using the Galerkin scheme with the spaces of piecewise linear polynomials as approximation subspaces. The orthonormal basis functions for $X_0$ and $W_1$ are, respectively, $w_{00}(t) = 1$, $w_{01}(t) = \sqrt{3}(2t-1)$ for $t\in[0,1]$, and

$w_{10}(t) = \begin{cases} 1 - 6t, & t\in[0,\frac12),\\[2pt] 5 - 6t, & t\in[\frac12,1],\end{cases} \qquad w_{11}(t) = \begin{cases} \sqrt{3}(1 - 4t), & t\in[0,\frac12),\\[2pt] \sqrt{3}(4t - 3), & t\in[\frac12,1].\end{cases}$
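As a sanity check, the following sketch (hypothetical code, using a fine midpoint rule) verifies that $w_{00}$, $w_{01}$, $w_{10}$, $w_{11}$ as given above form an orthonormal set in $L^2(0,1)$:

```python
import math

def w00(t): return 1.0
def w01(t): return math.sqrt(3.0) * (2.0 * t - 1.0)
def w10(t): return 1.0 - 6.0 * t if t < 0.5 else 5.0 - 6.0 * t
def w11(t): return math.sqrt(3.0) * ((1.0 - 4.0 * t) if t < 0.5 else (4.0 * t - 3.0))

def inner(f, g, n=20000):
    """L^2(0,1) inner product by the composite midpoint rule; the breakpoint
    t = 1/2 is a grid node, so each subinterval lies on one polynomial piece."""
    h = 1.0 / n
    return sum(f((j + 0.5) * h) * g((j + 0.5) * h) * h for j in range(n))

basis = [w00, w01, w10, w11]
for a in range(4):
    for b in range(4):
        expected = 1.0 if a == b else 0.0
        assert abs(inner(basis[a], basis[b]) - expected) < 1e-6
```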

The bases for the spaces $W_i$, $i > 1$, are obtained by using the linear operators defined in terms of the contractive mappings $\phi_0$ and $\phi_1$.

We solve equation (10.72) by Algorithm 10.9 with the initial level $k = 4$, based on the multiscale Galerkin method developed via the above basis functions. The nonlinear system (10.15) related to (10.72) is solved by the secant method. We present the numerical results in Table 10.4. It is clear from the table that the numerical solutions converge at approximately the theoretical order 2. Moreover, the growth of T2 and T′4 is consistent with our estimates. Note that for this one-dimensional equation the entries of the coefficient matrix involve double integrals; as a result, the computing time for this method is much greater than that for the corresponding collocation method. The numerical results obtained with the two-grid method are listed in the last two columns for comparison.

Table 10.4  Numerical results for Example 2

m | s(4+m) | ‖u*−ũ_{4,m}‖₂ | CO | TM | T2 | T′4 | ‖u*−u^G_{4+m}‖₂ | TG
0 | 32 | 1.016e-3 |  | 79.8 |  |  | 1.016e-3 | 81.3
1 | 64 | 2.547e-4 | 2.00 | 143 | 4.25 | 4.254 | 2.543e-4 | 361
2 | 128 | 6.425e-5 | 1.99 | 236 | 15.7 | 10.48 | 6.431e-5 | 529
3 | 256 | 1.661e-5 | 1.95 | 382 | 36.2 | 19.68 | 1.670e-5 | 723
4 | 512 | 4.632e-6 | 1.84 | 538 | 70.2 | 31.63 | 4.623e-6 | 1021
5 | 1024 | 1.292e-6 | 1.84 | 866 | 127 | 42.24 | 1.263e-6 | 1632
6 | 2048 | 3.235e-7 | 2.00 | 981 | 234 | 55.59 |  |


Example 3  In this example we consider solving the equation

$u(s) - \int_0^1 \log\frac{1}{16|\cos(\pi s) - \cos(\pi t)|}\,\bigl(2u^2(t) - 1\bigr)\,dt = f(s), \quad s\in[0,1],$  (10.73)

with

$f(s) = \cos\Bigl(\frac{\pi}{2}\Bigl(\frac12 - s\Bigr)\Bigr) + \frac{1}{16\pi}\bigl[2 - (1 - \cos(\pi s))\log(1 - \cos(\pi s)) - (1 + \cos(\pi s))\log(1 + \cos(\pi s))\bigr].$

Note that the kernel of the integral operator involved in this equation is weakly singular. The equation has an isolated solution $u^*(s) = \cos\bigl(\frac{\pi}{2}\bigl(\frac12 - s\bigr)\bigr)$, $s\in[0,1]$. We use the multiscale collocation scheme with the same bases as in the first example, and the nonlinear system (10.15) is solved by the MAM in conjunction with strategy 2, that is, without updating the Jacobian matrix.

The weakly singular integrals are computed numerically using the quadrature methods proposed in [242]. Taking the cost of this numerical integration into account, in this case $M_{km,2} = \mathcal{O}(s(k+m)(\log s(k+m))^3)$, and therefore implementing the entire algorithm requires $\mathcal{O}(s(k+m)(\log s(k+m))^3)$ multiplications. See also [113, 164, 269, 270] for related treatments of weakly singular integrals. The numerical results are listed in Table 10.5. Since the quantities T1 and T3 are insignificant in comparison with T2 and T′4, they are not included in the table for this experiment.
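For the exact solution of Example 3 the nonlinearity simplifies, since the double-angle formula $2\cos^2\theta - 1 = \cos 2\theta$ gives $2u^{*2}(t) - 1 = \cos(\pi(\frac12 - t)) = \sin(\pi t)$. A quick numerical confirmation of this identity:

```python
import math

# With u*(t) = cos((pi/2)(1/2 - t)), the term 2 u*(t)^2 - 1 appearing in
# (10.73) reduces to sin(pi t) by the double-angle formula.

def u_star(t):
    return math.cos(0.5 * math.pi * (0.5 - t))

for j in range(101):
    t = j / 100.0
    assert abs(2.0 * u_star(t) ** 2 - 1.0 - math.sin(math.pi * t)) < 1e-12
```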

Example 4  In the last example we solve the two-dimensional equation

$u(x,y) - \int_\Omega \sin(xy + x'y')\,u^2(x',y')\,dx'dy' = f(x,y), \quad (x,y)\in\Omega,$  (10.74)

Table 10.5  Numerical results for Example 3

m | s(4+m) | ‖u*−ũ_{4,m}‖∞ | CO | TM | T2 | T′4 | ‖u*−u^G_{4+m}‖∞ | TG
0 | 32 | 1.031e-3 |  | 7.6 |  |  | 1.031e-3 | 7.6
1 | 64 | 2.607e-4 | 1.98 | 8.0 | 0.1 | 0.4 | 2.613e-4 | 11.8
2 | 128 | 6.493e-5 | 2.01 | 11.0 | 0.9 | 1.5 | 6.495e-5 | 22.4
3 | 256 | 1.613e-5 | 2.01 | 12.7 | 1.4 | 2.1 | 1.621e-5 | 48.2
4 | 512 | 4.041e-6 | 2.00 | 18.0 | 3.7 | 3.3 | 4.043e-6 | 120.2
5 | 1024 | 1.007e-6 | 2.00 | 24.4 | 9.7 | 3.7 | 1.010e-6 | 331.6
6 | 2048 | 2.589e-7 | 1.96 | 41.4 | 21.7 | 6.4 | 2.593e-7 | 998.8
7 | 4096 | 6.395e-8 | 2.02 | 74.9 | 53.5 | 6.9 |  |
8 | 8192 | 1.577e-8 | 2.02 | 120.5 | 98.4 | 8.8 |  |
9 | 16384 | 3.920e-9 | 2.01 | 262.8 | 236.1 | 10.5 |  |


where $\Omega = \{(x,y) : 0 \le x \le y \le 1\}$ and the function $f$ is chosen so that $u^*(x,y) = x^2 + y^2$ is an isolated solution of the equation.

We solve equation (10.74) by the MAM based on the multiscale collocation scheme. We choose $X_n$ as the space of bivariate piecewise linear polynomials with the multiscale partitions defined by the family of contractive mappings $\Phi = \{\phi_e : e\in\mathbb{Z}_4\}$, where

$\phi_0(x,y) = \Bigl(\frac{x}{2},\, \frac{y}{2}\Bigr), \quad \phi_1(x,y) = \Bigl(\frac{x}{2},\, \frac{y+1}{2}\Bigr), \quad \phi_2(x,y) = \Bigl(\frac{1-x}{2},\, 1 - \frac{y}{2}\Bigr), \quad \phi_3(x,y) = \Bigl(\frac{x+1}{2},\, \frac{y+1}{2}\Bigr).$
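A quick check that the four mappings are contractive self-maps of $\Omega$ (a sketch; the middle map $\phi_2$ is written here as $((1-x)/2,\,1-y/2)$, a reconstruction chosen so that it carries $\Omega$ onto the middle subtriangle of the uniform refinement, and should be treated as an assumption):

```python
import random

# Omega = {(x, y) : 0 <= x <= y <= 1}.  Each phi_e halves distances, so the
# four images should be the four half-size subtriangles of Omega.
def phi0(x, y): return (x / 2, y / 2)
def phi1(x, y): return (x / 2, (y + 1) / 2)
def phi2(x, y): return ((1 - x) / 2, 1 - y / 2)   # reconstructed form (assumption)
def phi3(x, y): return ((x + 1) / 2, (y + 1) / 2)

def in_omega(x, y, eps=1e-12):
    return -eps <= x <= y + eps and y <= 1 + eps

random.seed(0)
samples = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0)]   # the vertices of Omega
while len(samples) < 200:
    x, y = random.random(), random.random()
    if x <= y:
        samples.append((x, y))

for phi in (phi0, phi1, phi2, phi3):
    assert all(in_omega(*phi(x, y)) for x, y in samples)
```

For instance, $\phi_2$ sends the vertices $(0,0)$, $(0,1)$, $(1,1)$ to $(\frac12,1)$, $(\frac12,\frac12)$, $(0,\frac12)$, the midpoint triangle of $\Omega$.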

For the space $X_n$ we have $\dim(X_n) = 3(4^n)$. Let $S_e = \phi_e(\Omega)$, $e\in\mathbb{Z}_4$. The basis functions for $X_0$ are given by

$w_{00}(x,y) = -3x + 2y, \quad w_{01}(x,y) = x - 3y + 2, \quad w_{02}(x,y) = 2x + y - 1, \quad (x,y)\in\Omega.$

The corresponding collocation functionals are chosen as

$\ell_{00} = \delta_{(2/7,\,3/7)}, \quad \ell_{01} = \delta_{(1/7,\,5/7)}, \quad \ell_{02} = \delta_{(4/7,\,6/7)},$

where $\langle\delta_{(x,y)}, g\rangle = g(x,y)$. The basis functions for $W_1$ are given by

$w_{10}(x,y) = \begin{cases} -\frac{11}{8} - \frac{15}{8}x + \frac{41}{8}y, & (x,y)\in S_0,\\[2pt] \frac{5}{8} + \frac{1}{8}x - \frac{7}{8}y, & (x,y)\in\Omega\setminus S_0,\end{cases}$

$w_{11}(x,y) = \begin{cases} 1 - \frac{15}{4}x - \frac{7}{8}y, & (x,y)\in S_0,\\[2pt] -1 + \frac{1}{4}x + \frac{9}{8}y, & (x,y)\in\Omega\setminus S_0,\end{cases}$

$w_{12}(x,y) = \begin{cases} \frac{9}{8} + \frac{15}{8}x - \frac{29}{8}y, & (x,y)\in S_0,\\[2pt] -\frac{15}{8} - \frac{1}{8}x + \frac{19}{8}y, & (x,y)\in\Omega\setminus S_0,\end{cases}$

$w_{13}(x,y) = \begin{cases} -\frac{15}{8} - \frac{41}{8}x + \frac{13}{4}y, & (x,y)\in S_1,\\[2pt] \frac{1}{8} + \frac{7}{8}x - \frac{3}{4}y, & (x,y)\in\Omega\setminus S_1,\end{cases}$

$w_{14}(x,y) = \begin{cases} \frac{29}{8} + \frac{7}{8}x - \frac{37}{8}y, & (x,y)\in S_1,\\[2pt] -\frac{3}{8} - \frac{9}{8}x + \frac{11}{8}y, & (x,y)\in\Omega\setminus S_1,\end{cases}$

$w_{15}(x,y) = \begin{cases} -\frac{5}{8} - \frac{29}{8}x + \frac{7}{4}y, & (x,y)\in S_1,\\[2pt] \frac{3}{8} + \frac{19}{8}x - \frac{9}{4}y, & (x,y)\in\Omega\setminus S_1,\end{cases}$

$w_{16}(x,y) = \begin{cases} \frac{15}{4} - \frac{13}{4}x - \frac{15}{8}y, & (x,y)\in S_3,\\[2pt] -\frac{1}{4} + \frac{3}{4}x + \frac{1}{8}y, & (x,y)\in\Omega\setminus S_3,\end{cases}$


$w_{17}(x,y) = \begin{cases} -\frac{1}{8} - \frac{37}{8}x + \frac{15}{4}y, & (x,y)\in S_3,\\[2pt] -\frac{1}{8} + \frac{11}{8}x - \frac{1}{4}y, & (x,y)\in\Omega\setminus S_3,\end{cases}$

$w_{18}(x,y) = \begin{cases} -\frac{5}{2} + \frac{7}{4}x + \frac{15}{8}y, & (x,y)\in S_3,\\[2pt] \frac{1}{2} - \frac{9}{4}x - \frac{1}{8}y, & (x,y)\in\Omega\setminus S_3.\end{cases}$

Their corresponding collocation functionals are chosen as

$\ell_{10} = \delta_{(1/14,\,5/14)} - \delta_{(1/7,\,3/14)} + \delta_{(2/7,\,3/7)} - \delta_{(3/14,\,4/7)},$
$\ell_{11} = \delta_{(1/14,\,5/14)} - \delta_{(2/7,\,3/7)} + \delta_{(3/7,\,9/14)} - \delta_{(3/14,\,4/7)},$
$\ell_{12} = \delta_{(5/14,\,11/14)} - \delta_{(3/7,\,9/14)} + \delta_{(2/7,\,3/7)} - \delta_{(3/14,\,4/7)},$
$\ell_{13} = \delta_{(1/14,\,6/7)} - \delta_{(1/7,\,5/7)} + \delta_{(5/14,\,11/14)} - \delta_{(2/7,\,13/14)},$
$\ell_{14} = \delta_{(1/7,\,5/7)} - \delta_{(2/7,\,13/14)} + \delta_{(5/14,\,11/14)} - \delta_{(3/14,\,4/7)},$
$\ell_{15} = \delta_{(3/7,\,9/14)} - \delta_{(5/14,\,11/14)} + \delta_{(1/7,\,5/7)} - \delta_{(3/14,\,4/7)},$
$\ell_{16} = \delta_{(4/7,\,6/7)} - \delta_{(11/14,\,13/14)} + \delta_{(9/14,\,5/7)} - \delta_{(3/7,\,9/14)},$
$\ell_{17} = \delta_{(4/7,\,6/7)} - \delta_{(9/14,\,5/7)} + \delta_{(3/7,\,9/14)} - \delta_{(5/14,\,11/14)},$
$\ell_{18} = \delta_{(3/14,\,4/7)} - \delta_{(3/7,\,9/14)} + \delta_{(4/7,\,6/7)} - \delta_{(5/14,\,11/14)}.$
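Since each $\ell_{1j}$ is an alternating sum of four point evaluations, one can verify exactly, with rational arithmetic, that all nine functionals annihilate the bivariate linear polynomials $1$, $x$ and $y$, a property one expects from second-level functionals in a piecewise linear multiscale construction, and which indeed holds for the points listed above. A sketch:

```python
from fractions import Fraction as F

def d(px, py, sign=1):
    """A signed point evaluation: sign * delta at (px[0]/px[1], py[0]/py[1])."""
    return (sign, F(px[0], px[1]), F(py[0], py[1]))

functionals = [
    [d((1, 14), (5, 14)), d((1, 7), (3, 14), -1), d((2, 7), (3, 7)), d((3, 14), (4, 7), -1)],
    [d((1, 14), (5, 14)), d((2, 7), (3, 7), -1), d((3, 7), (9, 14)), d((3, 14), (4, 7), -1)],
    [d((5, 14), (11, 14)), d((3, 7), (9, 14), -1), d((2, 7), (3, 7)), d((3, 14), (4, 7), -1)],
    [d((1, 14), (6, 7)), d((1, 7), (5, 7), -1), d((5, 14), (11, 14)), d((2, 7), (13, 14), -1)],
    [d((1, 7), (5, 7)), d((2, 7), (13, 14), -1), d((5, 14), (11, 14)), d((3, 14), (4, 7), -1)],
    [d((3, 7), (9, 14)), d((5, 14), (11, 14), -1), d((1, 7), (5, 7)), d((3, 14), (4, 7), -1)],
    [d((4, 7), (6, 7)), d((11, 14), (13, 14), -1), d((9, 14), (5, 7)), d((3, 7), (9, 14), -1)],
    [d((4, 7), (6, 7)), d((9, 14), (5, 7), -1), d((3, 7), (9, 14)), d((5, 14), (11, 14), -1)],
    [d((3, 14), (4, 7)), d((3, 7), (9, 14), -1), d((4, 7), (6, 7)), d((5, 14), (11, 14), -1)],
]

for ell in functionals:
    for g in (lambda x, y: F(1), lambda x, y: x, lambda x, y: y):
        assert sum(s * g(x, y) for s, x, y in ell) == 0   # exact, no rounding
```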

The construction of the basis functions and collocation functionals of higher levels is described in Chapter 7 (cf. [69, 75]).

The above bases are used in our method for solving (10.74), and the numerical results are listed in Table 10.6. At each step, the nonlinear system (10.15) is solved by the Newton iteration method. The computed convergence orders are all around 2, which confirms the theoretical estimate of the convergence order. Since $\dim(X_{n+1})/\dim(X_n) = 4$, the theoretical value of R(T2) is 4, and Table 10.6 shows that the computed values of R(T2) are indeed close to 4.

For comparison, we also solve the collocation equation (10.3) by the direct Newton method and the two-grid method. The numerical errors and computing times of the three methods are listed in Table 10.7. The data show that the

Table 10.6  Numerical results for Example 4

m | s(2+m) | ‖u*−ũ_{2,m}‖ | CO | TM | T2 | R(T2) | T′4 | Δ(T′4)
0 | 48 | 3.269e-2 |  | 73.60 |  |  |  |
1 | 192 | 8.159e-3 | 2.00 | 137.6 | 28.20 |  | 35.80 |
2 | 768 | 2.035e-3 | 2.00 | 286.3 | 140.8 | 4.995 | 71.63 | 35.83
3 | 3072 | 5.076e-4 | 2.00 | 772.5 | 591.5 | 4.201 | 107.4 | 35.77
4 | 12288 | 1.268e-4 | 2.00 | 2610 | 2393 | 4.046 | 143.2 | 35.80


Table 10.7  Comparison of the methods for solving Example 4

m | s(2+m) | ‖u*−ũ_{2,m}‖ | TM | ‖u*−u^G_{2+m}‖ | TG | ‖u*−u_{2+m}‖ | TN
0 | 48 | 3.269e-2 | 73.60 | 3.269e-2 | 73.60 | 3.269e-2 | 73.60
1 | 192 | 8.159e-3 | 137.6 | 8.162e-3 | 213.2 | 8.157e-3 | 355.5
2 | 768 | 2.035e-3 | 286.3 | 2.035e-3 | 632.4 | 2.035e-3 | 2724
3 | 3072 | 5.076e-4 | 772.5 | 5.084e-4 | 3613 | 5.083e-4 | 17492
4 | 12288 | 1.268e-4 | 2610 |  |  |  |

Figure 10.4  Comparison for Example 4. Left: growth of T2 (∗) and T′4. Right: growth of TM (∗), TG (+) and TN.

proposed method has nearly the same accuracy as the direct Newton method and the two-grid method. To compare the computing times for this example visually, we plot them in Figure 10.4, from which we observe that the proposed method runs the fastest.

10.4.2 Numerical examples of nonlinear boundary integral equations

In this subsection we present numerical results which verify the approximation accuracy and computational efficiency of the proposed algorithm for solving nonlinear boundary integral equations. All programs are run on a workstation with a 3.38-GHz CPU and 96G memory.

We consider the boundary value problem (10.26) with g(x, u(x)) = u(x) + sin(u(x)). Let Ω be the elliptical region x_1^2 + (x_2/2)^2 < 1. For comparison purposes we choose the solution of (10.26) as u_0(x) = e^{x_1} cos(x_2) with x = (x_1, x_2). Correspondingly, we have that

g_0(x) = g(x, u_0) + ∂u_0(x)/∂n_x.


The corresponding solution of the boundary integral equation (10.29) is given by

u∗(t) = e^{cos(2πt)} cos(2 sin(2πt)).

In all experiments presented in this subsection we choose μ = 2 and X_n as the space of piecewise cubic polynomials with knots at j/2^n, j = 1, 2, ..., 2^n − 1. It is easy to compute that dim(X_n) = 2^{n+2}. The basis functions of X_0 and W_1 are given by

w_{00}(t) = −(1/6)(5t − 2)(5t − 3)(5t − 4),
w_{01}(t) = (1/2)(5t − 1)(5t − 3)(5t − 4),
w_{02}(t) = −(1/2)(5t − 1)(5t − 2)(5t − 4),
w_{03}(t) = (1/6)(5t − 1)(5t − 2)(5t − 3),

w_{10}(t) = { 85/32 − (725/12)t + (575/2)t² − (1475/4)t³,        t ∈ [0, 1/2],
            { −235/32 + (575/12)t − (175/2)t² + (575/12)t³,      t ∈ (1/2, 1],

w_{11}(t) = { 1145/288 − (1775/24)t + (1675/6)t² − (4975/18)t³,  t ∈ [0, 1/2],
            { −7495/288 + (3625/24)t − (525/2)t² + (2525/18)t³,  t ∈ (1/2, 1],

w_{12}(t) = { 805/288 − (375/8)t + (475/3)t² − (2525/18)t³,      t ∈ [0, 1/2],
            { −19355/288 + (8275/24)t − 550t² + (4975/18)t³,     t ∈ (1/2, 1],

w_{13}(t) = { 95/96 − (50/3)t + (225/4)t² − (575/12)t³,          t ∈ [0, 1/2],
            { −13345/96 + (1775/3)t − (3275/4)t² + (1475/4)t³,   t ∈ (1/2, 1].

The corresponding collocation functionals are

ℓ_{00} = δ_{1/5},  ℓ_{01} = δ_{2/5},  ℓ_{02} = δ_{3/5},  ℓ_{03} = δ_{4/5},

ℓ_{10} = (2/5)δ_{1/10} − (3/2)δ_{2/10} + 2δ_{3/10} − δ_{4/10} + (1/10)δ_{6/10},
ℓ_{11} = (3/10)δ_{2/10} − δ_{3/10} + δ_{4/10} − (1/2)δ_{6/10} + (1/5)δ_{7/10},
ℓ_{12} = (1/5)δ_{3/10} − (1/2)δ_{4/10} + δ_{6/10} − δ_{7/10} + (3/10)δ_{8/10},
ℓ_{13} = (1/10)δ_{4/10} − δ_{6/10} + 2δ_{7/10} − (3/2)δ_{8/10} + (2/5)δ_{9/10}.

The basis functions w_{ij} and collocation functionals ℓ_{ij} for i > 1 can be constructed recursively from w_{1j} and ℓ_{1j}.
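These formulas can be checked numerically: the level-0 point functionals are dual to the level-0 basis, and the level-1 functionals annihilate every cubic polynomial (the vanishing-moment property behind matrix compression). A small sketch, using the coefficients as reconstructed above (the check itself is ours, not from the text):

```python
import numpy as np

# Level-0 basis of X_0 (piecewise cubic case, mu = 2)
w0 = [
    lambda t: -(5*t - 2)*(5*t - 3)*(5*t - 4)/6,
    lambda t:  (5*t - 1)*(5*t - 3)*(5*t - 4)/2,
    lambda t: -(5*t - 1)*(5*t - 2)*(5*t - 4)/2,
    lambda t:  (5*t - 1)*(5*t - 2)*(5*t - 3)/6,
]

# ell_{0i} = delta_{(i+1)/5}: the point functionals are dual to the basis,
# i.e. ell_{0i}(w_{0j}) = delta_{ij}
G = np.array([[w(t) for w in w0] for t in (0.2, 0.4, 0.6, 0.8)])
assert np.allclose(G, np.eye(4))

# ell_{10} = (2/5)d_{1/10} - (3/2)d_{2/10} + 2 d_{3/10} - d_{4/10} + (1/10)d_{6/10}
# has vanishing moments: it annihilates 1, t, t^2, t^3
pts = [0.1, 0.2, 0.3, 0.4, 0.6]
cfs = [0.4, -1.5, 2.0, -1.0, 0.1]
moments = [sum(c * t**p for c, t in zip(cfs, pts)) for p in range(4)]
```

All four moments vanish (up to rounding), confirming that the level-1 functionals kill cubics.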


Table 10.8 Total time for solving the related nonlinear system

m   Tm      T′m
0   2.49    0.016
1   3.70    0.046
2   4.47    0.078
3   5.37    0.093
4   6.38    0.140
5   7.47    0.203
6   8.52    0.382
7   9.91    0.757
8   11.32   1.592

Example 1 In this experiment we use the MAMs with initial level k = 4 to solve the boundary integral equation (10.29). The purpose of this experiment is to test how much the proposed technique for solving the nonlinear systems speeds up the solution process. To this end we run Algorithms 10.18 and 10.21 separately and add up the time used in solving the nonlinear systems. Note that in Algorithm 10.18 we need to solve (10.22) once and (10.51) m times, while in Algorithm 10.21 we need to solve (10.58) once and (10.59) m times. In Table 10.8, Tm denotes the total time spent in Algorithm 10.18 for solving the nonlinear systems, while T′m denotes that in Algorithm 10.21. We see in Table 10.8 that T′m is much less than Tm.

Example 2 We illustrate in this experiment the approximation accuracy and the total computational effects of Algorithm 10.21, compared with those of Algorithm 10.18 and Algorithm AC (the algorithm of Atkinson and Chandler presented in [17]).

In Table 10.9 we report the numerical results of Algorithms 10.18 and 10.21. For any m we denote by u_{4,m} and ũ_{4,m}, respectively, the numerical solutions resulting from Algorithms 10.18 and 10.21. Moreover, we let TM and T′M denote the total times for implementing Algorithms 10.18 and 10.21, respectively. We observe in Table 10.9 that u_{4,m} and ũ_{4,m} have nearly the same accuracy, while T′M is significantly less than TM. This indicates that although Algorithms 10.18 and 10.21 have the same order of computational costs, the techniques proposed in this paper effectively reduce the absolute computational costs.

For comparison we list in Table 10.10 the numerical results of Algorithm AC, which is a Nyström method. We let N be the number of unknowns, uN the numerical solution and TN the running time of the program.

Note that we cannot compare Algorithms 10.18 and AC at the same discretization scale, since Algorithm 10.18 is a multiscale method and Algorithm


Table 10.9 Comparison of the accuracy and running time of Algorithms 10.18 and 10.21

m   s(4+m)   ‖u∗ − u_{4,m}‖∞   ‖u∗ − ũ_{4,m}‖∞   TM      T′M
0   64       4.107e-3          5.079e-3          8.5     0.2
1   128      3.079e-4          3.007e-4          8.9     0.4
2   256      2.152e-5          2.225e-5          11.2    0.8
3   512      1.491e-6          1.527e-6          14.6    1.7
4   1024     8.204e-8          8.624e-8          20.3    3.7
5   2048     5.041e-9          5.313e-9          30.9    7.9
6   4096     2.865e-10         2.970e-10         51.9    16.9
7   8192     1.795e-11         1.882e-11         88.0    36.1
8   16384    1.023e-12         1.101e-12         196.1   76.8

Table 10.10 Numerical results of Algorithm AC

N      ‖u∗ − uN‖∞   TN
128    1.780e-5     3
256    1.071e-6     11
512    6.559e-8     52
1024   4.076e-9     226
2048   2.695e-10    836
4096   9.870e-11    3877
8192   6.751e-11    16473

AC is a single-scale quadrature method. However, we may use a "numerical errors vs. computing time" figure for these methods. We use the data in Tables 10.9 and 10.10 to generate Figure 10.5. For convenience of display we take logarithms of both the numerical errors and the computing times. It is seen that for any accuracy level Algorithm 10.21 uses the least time among the three algorithms. These numerical results confirm that the proposed methods outperform both the two-grid method and Algorithm AC.

10.5 Bibliographical remarks

The results presented in this chapter were mainly chosen from the original papers [51, 52, 76]. The reader is referred to the recent papers [50, 158] for additional developments, where a multiscale collocation method was applied to solving nonlinear Hammerstein equations. Specifically, in [158] a sparsity in the Jacobian matrix resulting from nonlinear solvers such as the Newton method and the quasi-Newton method was discussed, which leads to a fast


[Figure 10.5 Comparison of Algorithms 10.18, 10.21 and AC on numerical performance: logarithm of numerical errors against logarithm of computing time.]

algorithm. Furthermore, a fast multilevel augmentation method was applied to a transformed nonlinear equation, which results in an additional saving in computing time.

Numerical solutions of nonlinear integral equations, especially the Hammerstein equation, have been studied extensively in the literature; see [20, 23, 34, 35, 131, 132, 159, 161, 163, 175, 180–182, 240, 255, 257, 258]. Specifically, a degenerate kernel scheme was introduced in [163], a variation of Nyström's method was proposed in [182] and a product integration method was discussed in [161]. Projection methods, including the Galerkin scheme and the collocation scheme, were discussed in [20, 23, 47, 131, 159, 180, 181, 240, 255, 258]. Fundamental work on the error analysis of projection methods may be found in Section 3.3 of [175]. Studies of superconvergence properties can be found in [160, 161, 165, 179, 247]. Moreover, the reader is referred to [13, 22] for general information on numerical treatments of the Hammerstein equations. Furthermore, the regularity of the solution of the Hammerstein equation with a weakly singular kernel was studied in [162]. Certain aspects of theoretical analysis of the Hammerstein equation may be found in [99, 130, 184, 277]. The connection of a large class of Gaussian processes with the Hammerstein equation was studied in [218].


Boundary value problems of the Laplace equation serve as mathematical models for many important applications such as acoustics, elasticity, electromagnetics and fluid dynamics (see [126, 149, 208, 234, 265] and the references cited therein). Nonlinear boundary conditions are also involved in various applications such as heat radiation and heat transfer [29, 32, 42]. In some electromagnetic problems the boundary conditions may also contain nonlinearities [171]. In these cases the reformulation of the corresponding boundary value problems leads to nonlinear integral equations. The nonlinear boundary integral equation of the Laplace equation was treated numerically in [17]. Certain linearization approaches are used in the numerical treatment of nonlinear integral equations (see, for example, [13–15, 20, 23, 131, 132, 159–161, 163, 165, 182, 240, 255]).


11

Multiscale methods for ill-posed integral equations

In this chapter we present multiscale methods for solving ill-posed integral equations of the first kind. The ill-posed integral equations are converted into well-posed integral equations of the second kind by regularization methods, including the Lavrentiev regularization and the Tikhonov regularization. Multiscale Galerkin and collocation methods are introduced for solving the resulting well-posed equations. A priori and a posteriori regularization parameter choice strategies are proposed. Convergence rates of the regularized solutions are established.

11.1 Numerical solutions of regularization problems

We present a brief review of the regularization of ill-posed integral equations of the first kind and discuss the main idea used in this chapter in developing fast algorithms for solving such equations.

We consider the Fredholm integral equation of the first kind in the form

Ku = f,  (11.1)

where

(Ku)(s) = ∫_Ω K(s, t) u(t) dt,  s ∈ Ω.  (11.2)

Noting that compact operators cannot have a continuous inverse, equation (11.1) with a compact operator K is an ill-posed problem in the sense of the following definition.

Definition 11.1 Let K be an operator from a normed linear space X to a normed linear space Y. Equation (11.1) is said to be well posed if for any f ∈ Y there exists a solution u ∈ X, the solution is unique and the dependence of u upon f is continuous. Otherwise the equation is called ill posed.
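In discrete form the ill-posedness shows up as severe ill-conditioning: refining the discretization of a smooth-kernel first-kind equation makes the resulting linear system's condition number explode. A small illustration (the Gaussian kernel and the midpoint rule are our own choices, not from the text):

```python
import numpy as np

def first_kind_matrix(n):
    """Midpoint discretization of (Ku)(s) = int_0^1 exp(-(s-t)^2) u(t) dt."""
    t = (np.arange(n) + 0.5) / n
    return np.exp(-(t[:, None] - t[None, :])**2) / n

conds = [np.linalg.cond(first_kind_matrix(n)) for n in (8, 16, 32)]
# The condition number explodes under refinement: the discretized
# first-kind equation inherits the ill-posedness of the continuous one.
```

No amount of mesh refinement helps here; this is exactly what regularization addresses.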

Among the three conditions listed in Definition 11.1 for equation (11.1) to be well posed, continuous dependence of the solution on the given data is the most crucial and challenging, because the nonexistence or nonuniqueness of a solution may be well treated by an existing method such as the least-squares method. In this chapter we mainly treat the ill-posedness caused by the violation of this condition. The ill-posedness of the equation is often treated by regularization, which imposes certain a priori conditions on the solution. The most commonly used method of regularization of ill-posed problems is the Tikhonov regularization (named after Andrey Tikhonov) [252]. It is also called the Tikhonov–Phillips regularization because of the contribution of Phillips [217]. Another frequently used method is the Lavrentiev regularization [116, 220]. Recent developments in image science use the TV regularization [237].

Certain regularization methods may be formulated as the minimization of a fidelity term, which measures the error Ku − f, plus a regularization parameter times a norm of u determined by the a priori condition. The Tikhonov regularization [116, 117, 119, 217, 252] and the TV regularization are examples of this type. The optimal regularization parameter is usually unknown. In practical problems such as image/signal processing it is often determined by an ad hoc method. Approaches include the Bayesian method, the discrepancy principle, cross-validation, the L-curve method, restricted maximum likelihood and the unbiased predictive risk estimator. In [262] a connection was established between the choice of the optimal parameter and leave-one-out cross-validation. There is recent interest in multiparameter regularization [63].

Although the theoretical development of solving ill-posed problems is nearly complete, developing efficient, stable, fast solvers for such problems remains an active, focused research area. A bottleneck problem for the numerical solution of the ill-posed Fredholm integral equation of the first kind is its demandingly large computational cost. By regularization, the solution of the ill-posed equation is obtained by solving a sequence of well-posed Fredholm integral equations of the second kind. As discussed in previous chapters, the discretization of such an equation leads to an algebraic system with a full matrix, and numerical solutions of the system are computationally costly. Aiming at overcoming this bottleneck problem, we consider in this chapter solving ill-posed operator equations of the first kind in a multiresolution framework. Although multiscale methods for solving well-posed operator equations have been well understood and widely used (see previous chapters


of this book), less attention has been paid to the development of multiscale methods for ill-posed problems. Solving ill-posed problems requires iterated computations which demand huge computational costs, and therefore designing efficient numerical methods for solving problems of this type by making use of the underlying multiscale data structure is extremely beneficial. It is the attempt of this chapter to explore the multiscale matrix representation of the operator in developing efficient fast algorithms for solving ill-posed problems.

To describe the main point of this chapter, we now elaborate on the multilevel augmentation method developed in Chapter 9 for solving well-posed operator equations. It is based on direct sum decompositions of the range space of the operator and the solution space of the operator equation, and a matrix splitting scheme. It allows us to develop fast, accurate and stable nonconventional numerical algorithms for solving operator equations. For second-kind equations, special splitting techniques were proposed to develop such algorithms. These algorithms were then applied to solve the linear systems resulting from matrix compression schemes using wavelet-like functions for solving Fredholm integral equations of the second kind. It was proved that the method has an optimal order of convergence and is numerically stable. With an appropriate matrix compression technique it leads to fast algorithms. Basically, this method generates an approximate solution with convergence order O(N^{−k/d}) if piecewise polynomials of order k are used in the approximation and the spatial dimension of the integral equation is d, while solving the entire discrete equation requires a computational complexity of order O(N).

The main idea used in Section 11.2 is to combine the Lavrentiev regularization method and the MAM in solving the resulting regularized second-kind equations. Since the matrix compression issue for integral equations has been well explained in previous chapters, we do not discuss this issue in the section and suppose that an appropriate compression technique will be used in practical computation (cf. [55]). Instead, we focus on choosing regularization parameters by exploring the multiscale structure of the matrix representation of the operator. We present choices of a priori parameters and of a posteriori parameters in the context of the MAM.

Multilevel methods were applied to solving ill-posed problems, and a priori and a posteriori parameter choice strategies were also proposed (see, for example, [103, 137, 157, 193]). These methods are all based on the Galerkin discretization principle. Because of the computational advantages of the collocation method, which we discussed earlier, it is highly desirable to develop fast multiscale collocation methods. However, there are two challenging issues related to the development of the fast collocation method for solving the


ill-posed integral equation. First, it requires the availability of a Banach space setting for regularization, since the appropriate context for collocation methods is L∞. There is some effort at developing the collocation method in the L2

space for solving ill-posed integral equations (see [190, 207, 211]). However, we feel that the traditional regularization analysis in Hilbert spaces (such as the L2 space) is more suitable for Galerkin-type methods. In other words, it is more natural and interesting to analyze collocation-type methods in the L∞ space. Regularization analysis for collocation methods for ill-posed problems in a Banach space has not yet been completely understood in the literature, even though some interesting results were obtained in [117, 216, 224]. The mathematical analysis of the convergence and convergence rate in a Banach space is quite different from that in a Hilbert space, since many estimation results and conclusions which hold in a Hilbert space may not hold in a Banach space. For example, in the Banach space little is known about the saturation phenomenon. The second challenging issue is that a posteriori parameter choice strategies for the fast collocation method demand certain estimates in the L∞-norm which are not available. In general, mathematical analysis is more difficult in a Banach space than in a Hilbert space. An optimal regularization parameter should give the best balance between the well-posedness and the approximation accuracy. This principle has been used by many researchers (for example, [118, 127–129, 206, 220, 222, 224, 228, 248, 275]) in developing a priori and a posteriori parameter choice strategies for regularized Galerkin methods. The focus of Section 11.3 is to develop a fast collocation algorithm based on the mathematical development presented in [117, 224], and a related a posteriori choice strategy of regularization parameters based on the idea used in [207, 216].

We remark that further development in this topic can be seen in [56]. In this paper, multiscale collocation methods are developed for solving a system of integral equations which is a reformulation of the Tikhonov-regularized second-kind equation of an ill-posed integral equation of the first kind. Direct numerical solution of the Tikhonov regularization equation requires generating a matrix representation of the composition of the conjugate operator with the original integral operator. Generating such a matrix is computationally costly. To overcome this difficulty, rather than solving the Tikhonov-regularized equation directly, it is proposed to solve an equivalent coupled system of integral equations. A multiscale collocation method is applied with a matrix compression strategy to discretize the system of integral equations, and the multilevel augmentation method is then used to solve the resulting discrete system. A priori and a posteriori parameter choice strategies are also developed for these methods.


To close this section, we remark that, as an application of multiscale Galerkin methods for solving the ill-posed integral equation, integral equation models for image restoration were considered in [189]. Discrete models are consistently used as practical models for image restoration. They are piecewise constant approximations of true physical (continuous) models and hence inevitably impose bottleneck model errors. Paper [189] proposed working directly with continuous models for image restoration, aiming to suppress the model errors caused by the discrete models. A systematic study was conducted in that paper for the continuous out-of-focus image models, which can be formulated as an integral equation of the first kind. The resulting integral equation was regularized by the Lavrentiev method and the Tikhonov method. Fast multiscale algorithms having high accuracy were developed there to solve the regularized integral equations of the second kind. Numerical experiments presented in the paper show that the methods based on the continuous model perform much better than those based on discrete models.

11.2 Multiscale Galerkin methods via the Lavrentiev regularization

In this section we first describe the MAM for numerical solutions of ill-posed equations of the first kind and then present a convergence analysis for the approximate solution obtained from the MAM with an a priori regularization parameter. We propose an a posteriori regularization parameter choice strategy in the MAM. The choice of the parameter is adapted to the context of the multilevel augmentation method. We establish an optimal order of convergence for the approximate solution obtained from the multilevel augmentation method using the a posteriori regularization parameter.

11.2.1 Multilevel augmentation methods

We present in this subsection the MAM for numerical solutions of ill-posed operator equations of the first kind. For this purpose we recall the Lavrentiev regularization method for such equations. Suppose that X is a real Hilbert space with an inner product (·, ·) and the related norm ‖·‖. Let K: X → X be a linear and positive semi-definite compact operator, that is, (Kx, x) ≥ 0 for all x ∈ X. Given f ∈ X, we consider the operator equation of the first kind having the form

Ku = f,  (11.3)

where u ∈ X is the unknown to be determined.


We assume that the range R(K) of the operator K is infinite-dimensional, and thus equation (11.3) is ill posed [116, 221]. For f ∈ R(K) we let u∗ ∈ X denote the unique minimum-norm solution of equation (11.3) in the sense that

Ku∗ = f and ‖u∗‖ = inf{‖v‖ : Kv = f, v ∈ X}.  (11.4)

In general the solution of (11.3) does not depend continuously on the right-hand side f. Let δ > 0 be a given small number and suppose that f^δ ∈ X, satisfying the condition

‖f^δ − f‖ ≤ δ,  (11.5)

is the given noisy data actually used in computing the solution of equation (11.3). We apply the Lavrentiev regularization method to solve equation (11.3): for α > 0 we solve the equation

(αI + K)u^δ_α = f^δ.  (11.6)

It can be proved that

‖(αI + K)^{−1}‖ ≤ α^{−1},  (11.7)

and thus for any given α > 0 equation (11.6) has a unique solution u^δ_α ∈ X. We consider this unique solution u^δ_α ∈ X of (11.6) as an approximation of u∗. It is well known that

lim_{α→0, δα^{−1}→0} ‖u^δ_α − u∗‖ = 0.
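A minimal discrete illustration of (11.6) and the bound (11.7), with a symmetric positive semi-definite matrix standing in for K (the kernel and data are our own choices, not from the text):

```python
import numpy as np

n = 32
t = (np.arange(n) + 0.5) / n
K = np.exp(-(t[:, None] - t[None, :])**2) / n   # PSD: the Gaussian kernel is positive definite
u_star = np.sin(2*np.pi*t)
f = K @ u_star
delta = 1e-8
f_delta = f + delta * np.ones(n)/np.sqrt(n)     # ||f_delta - f|| = delta

def lavrentiev(alpha):
    # solve the regularized second-kind equation (alpha I + K) u = f_delta
    return np.linalg.solve(alpha*np.eye(n) + K, f_delta)

# (11.7): ||(alpha I + K)^{-1}|| <= 1/alpha for positive semi-definite K
alpha = 1e-3
inv_norm = np.linalg.norm(np.linalg.inv(alpha*np.eye(n) + K), 2)

# While delta/alpha stays small, the error decreases as alpha decreases
errs = [np.linalg.norm(lavrentiev(a) - u_star) for a in (1e-1, 1e-2, 1e-3)]
```

The regularized system is well conditioned for each fixed α, and the approximation improves as α decreases so long as the noise term δ/α remains negligible.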

We now recall the projection method for solving the regularization equation (11.6). Suppose that {X_n : n ∈ N_0} is a sequence of finite-dimensional subspaces of X whose union is dense in X, that is,

closure( ⋃_{n∈N_0} X_n ) = X.

For each n ∈ N_0 we let P_n: X → X_n denote the linear orthogonal projection; thus P_n converges pointwise to the identity operator and ‖P_n‖ = 1. Set

K_n = P_n K P_n,  f^δ_n = P_n f^δ.

The projection method is to find u^δ_{α,n} ∈ X_n such that

(αI + K_n) u^δ_{α,n} = f^δ_n.  (11.8)

Since K_n is positive semi-definite,

‖(αI + K_n)^{−1}‖ ≤ α^{−1},  (11.9)

and thus equation (11.8) has a unique solution.


To apply the MAM to equation (11.8), as we have done earlier we assume that the subspaces X_n, n ∈ N_0, are nested and let W_{n+1} be the orthogonal complement of X_n in X_{n+1}. For a fixed k ∈ N and any m ∈ N_0 we have the decomposition

X_{k+m} = X_k ⊕⊥ W_{k+1} ⊕⊥ ··· ⊕⊥ W_{k+m}.  (11.10)

For g_0 ∈ X_k and g_i ∈ W_{k+i}, i = 1, 2, ..., m, we identify the vector [g_0, g_1, ..., g_m]^T in X_k × W_{k+1} × ··· × W_{k+m} with the vector g_0 + g_1 + ··· + g_m in X_k ⊕⊥ W_{k+1} ⊕⊥ ··· ⊕⊥ W_{k+m}. We write the solution u^δ_{α,k+m} ∈ X_{k+m} of equation (11.8) with n = k + m as

u^δ_{α,k+m} = (u^δ_{α,k})_0 + ∑_{j=1}^{m} (u^δ_{α,k})_j = [(u^δ_{α,k})_0, (u^δ_{α,k})_1, ..., (u^δ_{α,k})_m]^T,

where (u^δ_{α,k})_0 ∈ X_k and (u^δ_{α,k})_j ∈ W_{k+j} for j = 1, 2, ..., m.

We next re-express the operator in equation (11.8) with n = k + m. Defining

Q_{n+1} = P_{n+1} − P_n,  n ∈ N_0,

the function f^δ_{k+m} is identified as

f^δ_{k+m} = [P_k f^δ, Q_{k+1} f^δ, ..., Q_{k+m} f^δ]^T,

and the operator K_{k+m} is identified as a matrix of operators

K_{k,m} =
[ P_k K P_k       P_k K Q_{k+1}       ···   P_k K Q_{k+m}     ]
[ Q_{k+1} K P_k   Q_{k+1} K Q_{k+1}   ···   Q_{k+1} K Q_{k+m} ]
[       ⋮                ⋮                         ⋮          ]
[ Q_{k+m} K P_k   Q_{k+m} K Q_{k+1}   ···   Q_{k+m} K Q_{k+m} ].

We then split the operator K_{k,m} into the sum of two operators,

K_{k,m} = K^L_{k,m} + K^H_{k,m},

where

K^L_{k,m} = P_k K P_{k+m} and K^H_{k,m} = (P_{k+m} − P_k) K P_{k+m},

which correspond to the lower and higher resolutions of the operator K_{k+m}, respectively. In the matrix notation we have that

K^L_{k,m} =
[ P_k K P_k   P_k K Q_{k+1}   ···   P_k K Q_{k+m} ]
[     0             0         ···         0       ]
[     ⋮             ⋮                     ⋮       ]
[     0             0         ···         0       ]


and

KHkm =

⎡⎢⎢⎢⎣0 0 middot middot middot 0

Qk+1KPk Qk+1KQk+1 middot middot middot Qk+1KQk+m

Qk+mKPk Qk+mKQk+1 middot middot middot Qk+mKQk+m

⎤⎥⎥⎥⎦

For a given parameter α > 0 we set

B_{k,m}(α) = I + α^{−1} K^L_{k,m} and C_{k,m}(α) = α^{−1} K^H_{k,m}.  (11.11)

Thus we have the decomposition

I + α^{−1} K_{k,m} = B_{k,m}(α) + C_{k,m}(α),  m ∈ N.

The MAM for solving (11.8) can be described as follows.

Algorithm 11.2 (Multilevel augmentation algorithm)

Step 1 For a fixed k > 0, solve (11.8) with n = k exactly.
Step 2 Set u^δ_{α,k,0} := u^δ_{α,k} and compute the matrices B_{k,0}(α) and C_{k,0}(α).
Step 3 For m ∈ N, suppose that u^δ_{α,k,m−1} ∈ X_{k+m−1} has been obtained, and do the following:
• Augment the matrices B_{k,m−1}(α) and C_{k,m−1}(α) to form B_{k,m}(α) and C_{k,m}(α), respectively.
• Augment u^δ_{α,k,m−1} by zero components to form

  ũ^δ_{α,k,m} = [u^δ_{α,k,m−1}, 0]^T ∈ X_{k+m}.

• Solve for u^δ_{α,k,m} = [(u^δ_{α,k,m})_0, (u^δ_{α,k,m})_1, ..., (u^δ_{α,k,m})_m]^T, with (u^δ_{α,k,m})_0 ∈ X_k and (u^δ_{α,k,m})_j ∈ W_{k+j}, j = 1, 2, ..., m, from the equation

  B_{k,m}(α) u^δ_{α,k,m} = α^{−1} f^δ_{k+m} − C_{k,m}(α) ũ^δ_{α,k,m}.  (11.12)

(The tilde distinguishes the zero-augmented vector from the previous level from the new solution.)

The augmentation method begins with an initial approximate solution u^δ_{α,k} and updates it from one level to another. Specifically, after the initial approximation u^δ_{α,k} is obtained, for m = 1, 2, ... we compute

(u^δ_{α,k,m})_j = α^{−1} Q_{k+j} f^δ − α^{−1} Q_{k+j} K u^δ_{α,k,m−1},  j = 1, 2, ..., m,

solve

(I + α^{−1} P_k K)(u^δ_{α,k,m})_0 = α^{−1} P_k f^δ − α^{−1} P_k K ( ∑_{j=1}^{m} (u^δ_{α,k,m})_j )


and obtain the approximate solution

u^δ_{α,k,m} = [(u^δ_{α,k,m})_0, (u^δ_{α,k,m})_1, ..., (u^δ_{α,k,m})_m]^T.

Note that in this algorithm we only need to invert I + α^{−1} P_k K at the initial level k. This is the key point which leads to fast computation for the proposed method.

The parameter α in general needs to be chosen according to the level. There are two types of choice for the parameter: a priori and a posteriori. In the next two subsections we consider these choices of the parameters.

11.2.2 A priori error analysis

The goal of this subsection is to propose a choice of a priori regularization parameters and to estimate the convergence order of the MAM with this choice. To this end we define the fractional power operator (cf. [220]). Let ⌊ν⌋ denote the greatest integer not larger than ν. For 0 < ν < 1 we define the fractional power K^ν by

K^ν = (sin(πν)/π) ∫_0^{+∞} t^{ν−1} (tI + K)^{−1} K dt,  (11.13)

and for ν > 1,

K^ν = K^{ν−⌊ν⌋} K^{⌊ν⌋}.
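Definition (11.13) can be checked numerically against the spectral definition K^ν = V diag(λ^ν) V^T for a small symmetric positive definite matrix. Substituting t = e^s gives an integrand with exponentially decaying tails, which a plain trapezoid rule handles well (the toy matrix is our own choice):

```python
import numpy as np

rng = np.random.default_rng(2)
A = rng.standard_normal((4, 4))
K = A @ A.T + np.eye(4)        # symmetric positive definite toy operator
nu = 0.5

# Spectral definition: K^nu = V diag(lambda^nu) V^T
lam, V = np.linalg.eigh(K)
K_nu_spec = (V * lam**nu) @ V.T

# (11.13) with the substitution t = exp(s): the integrand decays
# exponentially in both directions, so a trapezoid rule converges fast.
s = np.linspace(-40.0, 40.0, 4001)
ds = s[1] - s[0]
I4 = np.eye(4)
vals = np.array([np.exp(nu*si) * np.linalg.solve(np.exp(si)*I4 + K, K)
                 for si in s])
trap = vals.sum(axis=0) - 0.5*(vals[0] + vals[-1])
K_nu_int = np.sin(np.pi*nu)/np.pi * ds * trap
```

The two computations agree to quadrature accuracy, and for ν = 1/2 the square of the result recovers K.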

Since the operators K and (αI + K)^{−1} commute, by the definition of the power operator we conclude that the operators K^ν and (αI + K)^{−1} commute as well, that is,

K^ν (αI + K)^{−1} = (αI + K)^{−1} K^ν.  (11.14)

We impose the following hypothesis on u∗:

(H-1) For some ν ∈ (0, 1], u∗ ∈ R(K^ν), that is, there is ω ∈ X such that u∗ = K^ν ω.

Let uα ∈ X denote the solution of the equation

(αI + K)uα = f,  (11.15)

and we need to compare uα with u∗. It is well known that if hypothesis (H-1) is satisfied then the estimates

‖uα − u∗‖ ≤ c_ν ‖ω‖ α^ν  (11.16)


hold, where

c_ν = { sin(νπ)/(ν(1−ν)π),  0 < ν < 1,
      { 1,                  ν = 1,

and

‖u^δ_α − uα‖ ≤ δ α^{−1}.  (11.17)

Hence from these estimates we have that

‖u^δ_α − u∗‖ ≤ c_ν ‖ω‖ α^ν + δ α^{−1}.  (11.18)
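The two terms on the right-hand side of (11.18) move in opposite directions as α varies; minimizing the bound over α gives α∗ = (δ/(ν c_ν‖ω‖))^{1/(1+ν)}, which yields the familiar a priori rate O(δ^{ν/(1+ν)}). A quick numerical check of this balance (the constants are our own choices):

```python
import math

nu, c_omega, delta = 0.5, 1.0, 1e-6   # c_omega stands for c_nu * ||omega||

def bound(alpha):
    # right-hand side of (11.18): bias term + noise term
    return c_omega*alpha**nu + delta/alpha

# Closed-form minimizer, from d/dalpha [c a^nu + delta/a] = 0
alpha_star = (delta/(nu*c_omega))**(1.0/(1.0 + nu))

# A crude grid search over alpha agrees with the closed form
grid = [10**(-k/100.0) for k in range(100, 800)]
alpha_grid = min(grid, key=bound)
```

At α∗ the bound is of order δ^{ν/(1+ν)}, the optimal a priori rate for this smoothness class.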

As in [220] we assume that the following hypothesis holds:

(H-2) There exists a sequence {θ_n : n ∈ N_0} satisfying

σ_0 ≤ θ_{n+1}/θ_n ≤ 1 and lim_{n→+∞} θ_n = 0,  (11.19)

for some constant σ_0 ∈ (0, 1), such that when n ≥ N_0,

‖(I − P_n)K^ν‖ ≤ a_ν θ_n^ν,  0 < ν ≤ 2,  (11.20)

and

‖K(I − P_n)‖ ≤ a_1 θ_n,  (11.21)

where N_0 is a positive integer and a_ν, 0 < ν ≤ 2, are constants depending only on ν.

We next present a bound on the difference uα − u^δ_{α,n}.

Proposition 11.3 Let uα and u^δ_{α,n} denote the solutions of equations (11.15) and (11.8), respectively. If hypotheses (H-1) and (H-2) hold, then for n ≥ N_0,

‖uα − u^δ_{α,n}‖ ≤ δ/α + ((2a_{1+ν} + a_1 a_ν)/α) ‖ω‖ θ_n^{1+ν}.

Proof It follows from equations (11.15) and (11.8) that

uα − u^δ_{α,n} = (αI + K_n)^{−1}(f_n − f^δ_n) + d_n,

where

f_n = P_n f and d_n = (αI + K)^{−1} f − (αI + K_n)^{−1} f_n.

By using condition (11.5), estimate (11.9) and the fact that ‖P_n‖ = 1, we have that

‖(αI + K_n)^{−1}(f_n − f^δ_n)‖ ≤ δ/α.


Introducing the operator

$$E_n = (\alpha I + K)^{-1} - (\alpha I + K_n)^{-1},$$

we write $d_n$ as a sum of two terms:

$$d_n = (\alpha I + K_n)^{-1}(I - P_n)f + E_n f.$$

We next estimate the two terms in the last formula for $d_n$ separately. By hypothesis (H-1) and the definition of the power operator, we find that

$$f = Ku^* = K^{1+\nu}\omega. \tag{11.22}$$

By using (11.22) and hypothesis (H-2), we have for the first term the estimate

$$\|(\alpha I + K_n)^{-1}(I - P_n)f\| \le \frac{a_{1+\nu}}{\alpha}\|\omega\|\theta_n^{1+\nu}.$$

It remains to estimate the term $E_n f$. Again using (11.22), we conclude that $E_n f = e_1 + e_2$, where

$$e_1 = (\alpha I + K_n)^{-1}P_n K(P_n - I)(\alpha I + K)^{-1}K^{1+\nu}\omega$$

and

$$e_2 = (\alpha I + K_n)^{-1}(P_n - I)K(\alpha I + K)^{-1}K^{1+\nu}\omega.$$

By using the commutativity of $K^\nu$ and $(\alpha I + K)^{-1}$ and the identity

$$P_n - I = (P_n - I)(I - P_n),$$

we obtain that

$$e_1 = (\alpha I + K_n)^{-1}P_n K(P_n - I)(I - P_n)K^\nu(\alpha I + K)^{-1}K\omega.$$

Moreover, since $K$ is positive semi-definite, for any $\alpha > 0$,

$$\|(\alpha I + K)^{-1}K\| \le 1. \tag{11.23}$$

It follows from (11.9), (11.20), (11.21) and (11.23) that

$$\|e_1\| \le \alpha^{-1}a_1 a_\nu\theta_n^{1+\nu}\|\omega\|.$$

Likewise, we have that

$$e_2 = (\alpha I + K_n)^{-1}(P_n - I)K^{1+\nu}(\alpha I + K)^{-1}K\omega.$$

By (11.9), (11.20) and (11.23) we see that

$$\|e_2\| \le \alpha^{-1}a_{1+\nu}\theta_n^{1+\nu}\|\omega\|.$$


Consequently, we have that

$$\|E_n f\| \le \|e_1\| + \|e_2\| \le \alpha^{-1}(a_1 a_\nu + a_{1+\nu})\theta_n^{1+\nu}\|\omega\|.$$

This proves the result of the proposition.

We next estimate the distance of $u^\delta_\alpha$ from the subspace $X_n$. To do this, for $n \in \mathbb{N}_0$ we let

$$E^\delta_{\alpha,n} = \|(I - P_n)u^\delta_\alpha\|. \tag{11.24}$$

We also set

$$\gamma^\delta_{\alpha,n} = \frac{\delta}{\alpha} + \frac{a_{1+\nu}}{\alpha}\|\omega\|\theta_n^{1+\nu}. \tag{11.25}$$

We remark that if the sequence $\theta_n$ satisfies condition (11.19), then we have that

$$\frac{\gamma^\delta_{\alpha,n+1}}{\gamma^\delta_{\alpha,n}} \ge \sigma = \sigma_0^{1+\nu}. \tag{11.26}$$

Proposition 11.4 If hypotheses (H-1) and (H-2) hold, then for $n \ge N_0$,

$$E^\delta_{\alpha,n} \le \gamma^\delta_{\alpha,n}. \tag{11.27}$$

Proof By (11.4), (11.6) and (H-1) we have that

$$u^\delta_\alpha = (\alpha I + K)^{-1}(f^\delta - f) + (\alpha I + K)^{-1}K^{1+\nu}\omega. \tag{11.28}$$

Since $P_n$ is an orthogonal projection, we have that $\|I - P_n\| \le 1$. Thus, from (11.5), (11.28), (11.7) and (H-2), we conclude that

$$E^\delta_{\alpha,n} = \|(I - P_n)u^\delta_\alpha\| \le \frac{\delta}{\alpha} + \frac{a_{1+\nu}}{\alpha}\|\omega\|\theta_n^{1+\nu},$$

establishing the estimate.

The next proposition shows that $u^\delta_{\alpha,k,m}$ approximates $u^\delta_{\alpha,k+m}$ at a convergence rate comparable with $\gamma^\delta_{\alpha,k+m}$. The proof of this result follows the same idea as the proof of Theorem 2.2 in [71]. To make this chapter self-contained, we provide the details of the proof for convenient reference.

Proposition 11.5 Let $u^\delta_{\alpha,k,m}$ and $u^\delta_{\alpha,k+m}$ be the solution of the MAM (11.12) and the solution of the projection method (11.8) with $n = k + m$, respectively. Suppose that hypotheses (H-1) and (H-2) hold. Then there exists a positive integer $N \ge N_0$ such that, when $k \ge N$, $m \in \mathbb{N}_0$ and $\alpha$ satisfies the condition

$$\alpha \ge \frac{a_1}{\rho}\theta_k, \tag{11.29}$$

where

$$\rho = \frac{1}{2}\left(-1 + \sqrt{1 + \frac{2\sigma}{1+\sigma}}\right), \tag{11.30}$$

the following estimate holds:

$$\|u^\delta_{\alpha,k,m} - u^\delta_{\alpha,k+m}\| \le \gamma^\delta_{\alpha,k+m}.$$

Proof From (11.8) and (11.12) we have that

$$B_{k,m}(\alpha)(u^\delta_{\alpha,k,m} - u^\delta_{\alpha,k+m}) = C_{k,m}(\alpha)(u^\delta_{\alpha,k+m} - u^\delta_{\alpha,k,m-1}). \tag{11.31}$$

Noting that $\|(I + \alpha^{-1}K_{k+m})x\| \ge \|x\|$, it holds that

$$\|B_{k,m}(\alpha)x\| = \|((I + \alpha^{-1}K_{k+m}) - C_{k,m}(\alpha))x\| \ge (1 - \|C_{k,m}(\alpha)\|)\|x\|. \tag{11.32}$$

By the definition of $C_{k,m}(\alpha)$ we have that

$$\|C_{k,m}(\alpha)\| = \alpha^{-1}\|(P_{k+m} - P_k)KP_{k+m}\| \le \alpha^{-1}\|(P_{k+m} - P_k)K\|.$$

Thus, when $k \ge N$, $m \in \mathbb{N}_0$ and hypothesis (11.29) is satisfied, we have that

$$\|C_{k,m}(\alpha)\| \le \frac{2a_1\theta_k}{\alpha} \le 2\rho.$$

This with (11.30) ensures that

$$\|C_{k,m}(\alpha)\| \le \frac{1}{(\rho + 1)(1 + 1/\sigma)}, \tag{11.33}$$

which also implies $\|C_{k,m}(\alpha)\| < 1$. Therefore, by inequality (11.32), we conclude that when $k \ge N$, $m \in \mathbb{N}_0$ and hypothesis (11.29) is satisfied,

$$\|B^{-1}_{k,m}(\alpha)\| \le \frac{1}{1 - \|C_{k,m}(\alpha)\|}.$$

From this and (11.31) we obtain

$$\|u^\delta_{\alpha,k,m} - u^\delta_{\alpha,k+m}\| \le \frac{\|C_{k,m}(\alpha)\|}{1 - \|C_{k,m}(\alpha)\|}\|u^\delta_{\alpha,k+m} - u^\delta_{\alpha,k,m-1}\|. \tag{11.34}$$

Moreover, as in the proof of Theorem 9.2, we have that

$$\|u^\delta_\alpha - u^\delta_{\alpha,k+m}\| \le \frac{a_1\theta_{k+m}}{\alpha}E^\delta_{\alpha,k+m} \le \frac{a_1\theta_k}{\alpha}E^\delta_{\alpha,k+m}.$$

By condition (11.29) we find that

$$\|u^\delta_\alpha - u^\delta_{\alpha,k+m}\| \le \rho E^\delta_{\alpha,k+m}. \tag{11.35}$$


We next prove by induction on $m$ that, when $k \ge N$, $m \in \mathbb{N}_0$ and hypothesis (11.29) is satisfied, we have the estimate

$$\|u^\delta_{\alpha,k,m} - u^\delta_{\alpha,k+m}\| \le \gamma^\delta_{\alpha,k+m}. \tag{11.36}$$

When $m = 0$, since $u^\delta_{\alpha,k,0} = u^\delta_{\alpha,k}$, estimate (11.36) holds in this case. Suppose that the claim holds for $m = r - 1$. To prove (11.36) with $m = r$, we use the definition of $u^\delta_{\alpha,k,r}$, estimates (11.35), (11.26), (11.27) and the induction hypothesis to obtain

$$\begin{aligned}
\|u^\delta_{\alpha,k+r} - u^\delta_{\alpha,k,r-1}\| &\le \|u^\delta_{\alpha,k+r} - u^\delta_\alpha\| + \|u^\delta_\alpha - u^\delta_{\alpha,k+r-1}\| + \|u^\delta_{\alpha,k+r-1} - u^\delta_{\alpha,k,r-1}\|\\
&\le \rho\gamma^\delta_{\alpha,k+r} + (\rho + 1)\gamma^\delta_{\alpha,k+r-1} \le \left(\rho + (\rho + 1)\frac{1}{\sigma}\right)\gamma^\delta_{\alpha,k+r}.
\end{aligned}$$

Substituting this estimate into the right-hand side of (11.34) with $m = r$ yields

$$\|u^\delta_{\alpha,k,r} - u^\delta_{\alpha,k+r}\| \le \frac{\|C_{k,r}(\alpha)\|}{1 - \|C_{k,r}(\alpha)\|}\left(\rho + (\rho + 1)\frac{1}{\sigma}\right)\gamma^\delta_{\alpha,k+r}.$$

It follows from the estimate (11.33) that, for $k \ge N$ and any non-negative integer $r$,

$$\frac{\|C_{k,r}(\alpha)\|}{1 - \|C_{k,r}(\alpha)\|}\left(\rho + (\rho + 1)\frac{1}{\sigma}\right) \le 1.$$

Therefore, for $k \ge N$, estimate (11.36) holds for $m = r$. The proof is complete.

In the next theorem we present the a priori error estimate for the approximate solution $u^\delta_{\alpha,k,m}$ obtained by the MAM.

Theorem 11.6 Let $u^*$ denote the minimum norm solution of equation (11.3) and $u^\delta_{\alpha,k,m}$ denote the solution of the MAM (11.12). If hypotheses (H-1) and (H-2) hold, then there exists a positive integer $N \ge N_0$ such that for all $k \ge N$, $m \in \mathbb{N}_0$ and $\alpha$ satisfying condition (11.29), the following estimate holds:

$$\|u^* - u^\delta_{\alpha,k,m}\| \le c_\nu\|\omega\|\alpha^\nu + \frac{2\delta}{\alpha} + (3a_{1+\nu} + a_1 a_\nu)\|\omega\|\frac{\theta^{1+\nu}_{k+m}}{\alpha}. \tag{11.37}$$

Proof Note that

$$u^* - u^\delta_{\alpha,k,m} = (u^* - u_\alpha) + (u_\alpha - u^\delta_{\alpha,k+m}) + (u^\delta_{\alpha,k+m} - u^\delta_{\alpha,k,m}).$$

The estimate (11.37) follows directly from the estimate (11.16) and Propositions 11.3, 11.4 and 11.5.


Motivated by Proposition 11.5, we now propose a choice of the regularization parameter $\alpha$ in the multilevel augmentation algorithm. For given $\delta > 0$, we choose a positive integer $k > N$ so that

$$c_-\delta^{\frac{1}{\nu+1}} \le c_0\theta_k \le c_+\delta^{\frac{1}{\nu+1}} \tag{11.38}$$

for some positive constants $c_-$, $c_+$ and $c_0 \ge a_1/\rho$, and choose

$$\alpha = c_0\theta_k. \tag{11.39}$$

This choice of $\alpha$ is called the a priori parameter, since (11.38) uses information on $\nu$, which appears in the a priori assumption (H-1). In the next theorem we present the a priori error estimate of the MAM.

Theorem 11.7 Let $u^*$ be the minimum norm solution of equation (11.3) and $u^\delta_{\alpha,k,m}$ be the solution of the MAM (11.12) with the choice of $\alpha$ satisfying (11.39). If hypotheses (H-1) and (H-2) hold, then there exists a positive integer $N \ge N_0$ such that, when $k \ge N$, $m \in \mathbb{N}_0$ and (11.38) is satisfied, the following estimate holds:

$$\|u^* - u^\delta_{\alpha,k,m}\| \le \left(c_\nu c_+^\nu\|\omega\| + \frac{2}{c_-}\right)\delta^{\frac{\nu}{\nu+1}} + (3a_{\nu+1} + a_1 a_\nu)\|\omega\|\frac{\rho}{a_1}\theta^\nu_{k+m}. \tag{11.40}$$

Proof The estimate (11.40) is obtained by substituting the choice of the regularization parameter $\alpha$ into the right-hand side of estimate (11.37).

The last theorem shows that the MAM improves the approximation order from $\theta_k^\nu$ to $\theta_{k+m}^\nu$.
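For concreteness, the a priori choice (11.38)–(11.39) can be sketched as a small search over levels; here the mesh sequence $\theta_k = 2^{-k}$ and the constants $c_0$, $c_-$, $c_+$ are illustrative assumptions consistent with (11.19) and (11.38), not values from the text:

```python
# A priori parameter choice (11.38)-(11.39), assuming theta_k = 2**-k.
def a_priori_alpha(delta, nu, c0=1.0, c_minus=0.5, c_plus=1.0):
    target = delta ** (1.0 / (nu + 1.0))          # delta^(1/(nu+1))
    k = 0
    while c0 * 2.0**-k > c_plus * target:         # shrink theta_k until the
        k += 1                                    # upper bound in (11.38) holds
    assert c_minus * target <= c0 * 2.0**-k       # lower bound in (11.38)
    return k, c0 * 2.0**-k                        # alpha = c0*theta_k, (11.39)

k, alpha = a_priori_alpha(1e-6, nu=1.0)
print(k, alpha)
```

With $c_+/c_- = 2$ equal to the mesh ratio, the first level satisfying the upper bound in (11.38) automatically satisfies the lower bound as well.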

11.2.3 A posteriori choice strategy

In this subsection we develop a strategy for choosing an a posteriori regularization parameter, which ensures the optimal convergence of the approximate solution obtained by the MAM with this parameter.

To introduce an a posteriori regularization parameter, we consider an auxiliary operator equation. For fixed $k, m \in \mathbb{N}$, we consider the equation

$$(\alpha I + K)\tilde{u}^\delta_\alpha = u^\delta_{\alpha,k,m}, \tag{11.41}$$

where $\tilde{u}^\delta_\alpha \in X$. That is, we consider equation (11.6) with the right-hand side $f^\delta$ replaced by $u^\delta_{\alpha,k,m}$, where $u^\delta_{\alpha,k,m}$ is the solution of equation (11.6) obtained by the MAM. It is clear that equation (11.41) has a unique solution $\tilde{u}^\delta_\alpha$, and this solution depends on $k$ and $m$. For simplicity of presentation we abuse the notation,


without indicating this dependence. The projection method for equation (11.41) has the form

$$(\alpha I + K_{k+i})\tilde{u}^\delta_{\alpha,k+i} = u^\delta_{\alpha,k,m}, \quad i = 0, 1, \dots, m. \tag{11.42}$$

We denote by $\tilde{u}^\delta_{\alpha,k,i}$ the solution of the above equation obtained using the MAM described in Section 11.2.1, for an $\alpha$ to be determined.

Let

$$\tilde{E}^\delta_{\alpha,n} = \|(I - P_n)\tilde{u}^\delta_\alpha\|$$

and

$$\tilde{\gamma}^\delta_{\alpha,n} = \frac{2\delta}{\alpha^2} + \frac{(4a_{1+\nu} + a_1 a_\nu)\|\omega\|}{\alpha^2}\theta_n^{1+\nu}.$$

In the next proposition we bound $\tilde{E}^\delta_{\alpha,n}$ by $\tilde{\gamma}^\delta_{\alpha,n}$.

Proposition 11.8 If hypotheses (H-1) and (H-2) hold, there exists a positive integer $N \ge N_0$ such that when $k \ge N$, $m \in \mathbb{N}_0$ and $\alpha$ satisfies condition (11.29), the following estimate holds:

$$\tilde{E}^\delta_{\alpha,k+m} \le \tilde{\gamma}^\delta_{\alpha,k+m},$$

and, for $i = 0, 1, \dots, m$,

$$\|\tilde{u}^\delta_{\alpha,k,i} - \tilde{u}^\delta_{\alpha,k+i}\| \le \tilde{\gamma}^\delta_{\alpha,k+i}.$$

Proof It follows from (11.41) that

$$\tilde{u}^\delta_\alpha = (\alpha I + K)^{-1}u^\delta_{\alpha,k,m}.$$

Using equation (11.15) and hypothesis (H-1), we have that

$$u_\alpha = (\alpha I + K)^{-1}K^{1+\nu}\omega.$$

Therefore,

$$\tilde{u}^\delta_\alpha = (\alpha I + K)^{-1}(u^\delta_{\alpha,k,m} - u_\alpha) + (\alpha I + K)^{-2}K^{1+\nu}\omega.$$

Hence, by the fact that $\|I - P_n\| \le 1$, hypothesis (H-2) and the estimates obtained in the last section, we have that

$$\tilde{E}^\delta_{\alpha,k+m} = \|\tilde{u}^\delta_\alpha - P_{k+m}\tilde{u}^\delta_\alpha\| \le \frac{2\delta}{\alpha^2} + \frac{(3a_{1+\nu} + a_1 a_\nu)\|\omega\|}{\alpha^2}\theta^{1+\nu}_{k+m} + \frac{a_{1+\nu}\|\omega\|}{\alpha^2}\theta^{1+\nu}_{k+m},$$

which leads to the first estimate of this proposition. The second estimate follows similarly from the proof of Proposition 11.5.


We define the quantities

$$\Delta_\alpha = \alpha^2\|(\alpha I + K)^{-2}f\|, \qquad \Delta^\delta_{\alpha,k,m} = \alpha^2\|\tilde{u}^\delta_{\alpha,k,m}\|,$$

and

$$D(\delta, \theta_{k+m}) = 4\delta + (8a_{1+\nu} + 3a_1 a_\nu)\|\omega\|\theta^{1+\nu}_{k+m}.$$

In the next result we estimate the difference between $\Delta_\alpha$ and $\Delta^\delta_{\alpha,k,m}$ in terms of $D(\delta, \theta_{k+m})$.

Proposition 11.9 If hypotheses (H-1) and (H-2) hold, there exists a positive integer $N \ge N_0$ such that when $k \ge N$, $m \in \mathbb{N}_0$ and $\alpha$ satisfies condition (11.29), it holds that

$$|\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| \le D(\delta, \theta_{k+m}) \tag{11.43}$$

and

$$\Delta_\alpha \le c^2_\nu\|\omega\|\alpha^{1+\nu}. \tag{11.44}$$

Moreover, if

$$\|f^\delta\| > D(\delta, \theta_{k+m}) + (b + 1)\delta \tag{11.45}$$

for some constant $b > 0$, then

$$\liminf_{\alpha\to+\infty}\Delta^\delta_{\alpha,k,m} > b\delta. \tag{11.46}$$

Proof It follows from (11.42) and (11.15) that

$$\begin{aligned}
\tilde{u}^\delta_{\alpha,k,m} - (\alpha I + K)^{-2}f = {}& (\tilde{u}^\delta_{\alpha,k,m} - \tilde{u}^\delta_{\alpha,k+m}) + (\alpha I + K_{k+m})^{-1}(u^\delta_{\alpha,k,m} - u^\delta_{\alpha,k+m})\\
&+ (\alpha I + K_{k+m})^{-1}(u^\delta_{\alpha,k+m} - u_\alpha) + [(\alpha I + K_{k+m})^{-1} - (\alpha I + K)^{-1}](\alpha I + K)^{-1}f.
\end{aligned}$$

Thus, by Propositions 11.8 and 11.5,

$$|\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| \le \alpha^2\tilde{\gamma}^\delta_{\alpha,k+m} + \alpha\gamma^\delta_{\alpha,k+m} + \alpha\|u^\delta_{\alpha,k+m} - u_\alpha\| + \alpha^2\|[(\alpha I + K_{k+m})^{-1} - (\alpha I + K)^{-1}](\alpha I + K)^{-1}f\|.$$

Similarly to the proof of Proposition 11.3, we find that the last term satisfies

$$\alpha^2\|[(\alpha I + K_{k+m})^{-1} - (\alpha I + K)^{-1}](\alpha I + K)^{-1}K^{1+\nu}\omega\| \le (a_1 a_\nu + a_{1+\nu})\theta^{1+\nu}_{k+m}\|\omega\|.$$

Combining these inequalities and using Propositions 11.8, 11.4, 11.3 and (11.17), we conclude that

$$|\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| \le 4\delta + (8a_{1+\nu} + 3a_1 a_\nu)\|\omega\|\theta^{1+\nu}_{k+m}, \tag{11.47}$$


which is the estimate (11.43). Note, by hypothesis, that

$$\Delta_\alpha = \alpha^2\left\|(\alpha I + K)^{-2}K^{1+\nu}\omega\right\| = \alpha^{1+\nu}\left\|[\alpha(\alpha I + K)^{-1}]^{1-\nu}[(\alpha I + K)^{-1}K]^{1+\nu}\omega\right\| \le c^2_\nu\|\omega\|\alpha^{1+\nu},$$

where we used the inequality $\|[\alpha(\alpha I + K)^{-1}]^\nu\| \le c_\nu$ for $\nu \in (0,1]$, which can easily be verified from the definition (11.13) of the fractional power of an operator. Thus estimate (11.44) follows.

To prove the second statement, we note that

$$\Delta^\delta_{\alpha,k,m} \ge \Delta_\alpha - |\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| \quad\text{and}\quad \lim_{\alpha\to+\infty}\Delta_\alpha = \|f\|.$$

When condition (11.45) holds, we have that

$$\|f\| \ge \|f^\delta\| - \|f - f^\delta\| > D(\delta, \theta_{k+m}) + b\delta.$$

Thus

$$\liminf_{\alpha\to+\infty}\Delta^\delta_{\alpha,k,m} > b\delta,$$

which completes the proof.

We remark that if $\|f^\delta\| \ge c\delta$ with $c > 5$ and if $k + m$ is sufficiently large, then condition (11.45) holds. In fact, a simple computation yields

$$\|f^\delta\| - D(\delta, \theta_{k+m}) - \delta \ge (c - 5)\delta - (8a_{1+\nu} + 3a_1 a_\nu)\|\omega\|\theta^{1+\nu}_{k+m},$$

which confirms that (11.45) holds for sufficiently large $k + m$.

Proposition 11.10 Suppose that $\alpha \ge \alpha' > 0$. Let $u_\alpha$ and $u^*$ denote the solution of equation (11.15) and the minimum norm solution of equation (11.4), respectively. Then

$$\|u_\alpha - u^*\| \le \|u_{\alpha'} - u^*\| + \frac{\Delta_\alpha}{\alpha'}.$$

Proof Direct computation leads to

$$u_\alpha - u_{\alpha'} = (\alpha I + K)^{-1}f - (\alpha' I + K)^{-1}f = -\frac{1}{\alpha'}\left(1 - \frac{\alpha'}{\alpha}\right)\alpha'\alpha^{-1}(\alpha' I + K)^{-1}(\alpha I + K)\,\alpha^2(\alpha I + K)^{-2}f,$$


and

$$\|\alpha'\alpha^{-1}(\alpha' I + K)^{-1}(\alpha I + K)\| = \frac{\alpha'}{\alpha}\left\|I + \left(\frac{\alpha}{\alpha'} - 1\right)(I + \alpha'^{-1}K)^{-1}\right\| \le \frac{\alpha'}{\alpha}\left(1 + \frac{\alpha}{\alpha'} - 1\right) = 1.$$

Thus we conclude that

$$\|u_\alpha - u_{\alpha'}\| \le \frac{1}{\alpha'}\left(1 - \frac{\alpha'}{\alpha}\right)\alpha^2\|(\alpha I + K)^{-2}f\| \le \frac{\Delta_\alpha}{\alpha'}.$$

This, with the inequality

$$\|u_\alpha - u^*\| \le \|u_{\alpha'} - u^*\| + \|u_\alpha - u_{\alpha'}\|,$$

yields the estimate of the proposition.

Let $d > 1$ and $\tau = a_1/\rho$ be fixed. We choose a positive number $\alpha_0$ satisfying the condition

$$\tau\theta_k \le \alpha_0 \le d\tau\theta_k \tag{11.48}$$

and define an increasing sequence $\{\alpha_n\}$ by the recursive formula

$$\alpha_n = d\alpha_{n-1}, \quad n = 1, 2, \dots.$$

Clearly, the sequence $\{\alpha_n\}$ tends to infinity as $n \to \infty$. We present in the next lemma a property of this sequence.

Lemma 11.11 If condition (11.45) holds for some positive constant $b$, then there exists a non-negative integer $n_0$ such that

$$\Delta^\delta_{\alpha_{n_0-1},k,m} \le b\delta \le \Delta^\delta_{\alpha_{n_0},k,m}, \tag{11.49}$$

where $\Delta^\delta_{\alpha_{-1},k,m} = 0$.

Proof If $\alpha_0$ satisfies the condition

$$\Delta^\delta_{\alpha_0,k,m} \ge b\delta, \tag{11.50}$$

the proof is complete with $n_0 = 0$. We now assume that condition (11.50) is not satisfied. By the hypothesis of this lemma, Proposition 11.9 and the definition of the sequence $\{\alpha_n\}$, there exists a positive integer $p$ such that

$$\Delta^\delta_{\alpha_p,k,m} \ge b\delta.$$

Let $n_0$ be the smallest such integer $p$. Thus we obtain (11.49).

As suggested by Lemma 11.11, we present an algorithm which generates a choice of the a posteriori parameter.


Algorithm 11.12 (A posteriori choice of the regularization parameter) Let $\tau = a_1/\rho > 0$, $d > 1$ and $b > 4$ be fixed.

Step 1: For given $\delta > 0$, choose a positive integer $k > N_0$ and a constant $\alpha_0$ such that $\theta_k \le c\delta$ for some $c > 0$ and (11.48) holds.

Step 2: If $\alpha_n$ has been defined, use Algorithm 11.2 to compute $u^\delta_{\alpha_n,k,m}$ and $\Delta^\delta_{\alpha_n,k,m}$. If

$$\Delta^\delta_{\alpha_n,k,m} < b\delta$$

is satisfied, set $\alpha_{n+1} = d\alpha_n$ and repeat this step. Otherwise, go to Step 3.

Step 3: Set $\alpha = \alpha_{n-1}$ and stop.
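The loop of Algorithm 11.12 can be sketched as follows. Computing $\Delta^\delta_{\alpha,k,m}$ requires the MAM solver for (11.42), which is outside this sketch, so it is abstracted as a callable `delta_functional`; the toy surrogate below only mimics the monotone behavior $\Delta_\alpha \to \|f\|$ as $\alpha \to +\infty$ used in Proposition 11.9, and all numeric values are illustrative assumptions:

```python
def a_posteriori_alpha(delta_functional, alpha0, delta, d=2.0, b=5.0, max_steps=200):
    """Geometric parameter search of Algorithm 11.12 (a sketch).

    delta_functional(alpha) stands for Delta^delta_{alpha,k,m}; alpha0 is
    assumed to satisfy (11.48).  Returns alpha_{n-1}, the last alpha for
    which Delta stayed below b*delta (alpha0 itself if the first test fails)."""
    alpha_prev, alpha = None, alpha0
    for _ in range(max_steps):
        if delta_functional(alpha) < b * delta:    # Step 2: keep increasing alpha
            alpha_prev, alpha = alpha, d * alpha
        else:                                      # Step 3: stop at alpha_{n-1}
            return alpha_prev if alpha_prev is not None else alpha0
    raise RuntimeError("Delta never reached b*delta")

# Toy monotone surrogate for Delta (its limit as alpha -> infinity is ||f||).
f_norm, noise = 1.0, 1e-3
surrogate = lambda a: f_norm * a / (a + 1.0)
alpha = a_posteriori_alpha(surrogate, alpha0=1e-4, delta=noise)
print(alpha)
```

The returned value brackets the crossing of $b\delta$ from below, matching the output characterization in (11.51)–(11.52).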

The output $\alpha$ of Algorithm 11.12 depends on $k$, $m$ and $\delta$, and satisfies either the conditions

$$\tau\theta_k \le \alpha \le d\tau\theta_k \quad\text{and}\quad b\delta \le \Delta^\delta_{\alpha,k,m} \tag{11.51}$$

or

$$\alpha \ge \tau\theta_k \quad\text{and}\quad \Delta^\delta_{\alpha,k,m} \le b\delta \le \Delta^\delta_{d\alpha,k,m}. \tag{11.52}$$

The next proposition ensures that $\alpha = \alpha(k, m, \delta)$ converges to zero as $\delta \to 0$ and $k \to \infty$.

Proposition 11.13 Suppose that hypotheses (H-1) and (H-2) hold. If $\alpha = \alpha(k, m, \delta)$ is chosen according to Algorithm 11.12, then there exists a positive integer $N \ge N_0$ such that when $k \ge N$ and $m \in \mathbb{N}_0$,

$$\lim_{\delta\to 0,\, k\to+\infty}\alpha(k, m, \delta) = 0. \tag{11.53}$$

Proof If $\alpha \le d\tau\theta_k$, then (11.53) holds since $\lim_{k\to+\infty}\theta_k = 0$. Otherwise, inequality (11.52) must be satisfied. Thus

$$\lim_{\delta\to 0,\, k\to+\infty}\Delta^\delta_{\alpha,k,m} = 0.$$

Moreover, noting that $\alpha$ satisfies (11.29), it follows from (11.43) that

$$\lim_{\delta\to 0,\, k\to+\infty}|\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| = 0.$$

Combining the above two equations yields

$$\lim_{\delta\to 0,\, k\to+\infty}\Delta_\alpha = 0.$$


According to a well-known result (cf. [156, 274, 275]), we conclude (11.53) from the equation above.

In the next theorem we present an error estimate for the multilevel augmentation solution using the a posteriori parameter.

Theorem 11.14 Suppose that hypotheses (H-1) and (H-2) hold. Let $u^*$ and $u^\delta_{\alpha,k,m}$ be the minimum norm solution of equation (11.4) and the solution of the MAM (11.12), respectively, with $\alpha$ chosen according to Algorithm 11.12. If $\|f^\delta\| > c\delta$ with $c > 5$, then there exist a positive integer $N$ and constants $d_1$ and $d_2$ such that when $k \ge N$ and $m \in \mathbb{N}_0$,

$$\|u^\delta_{\alpha,k,m} - u^*\| \le d_1\delta^{\frac{\nu}{1+\nu}} + d_2\theta^\nu_{k+m}.$$

Proof We first note that, according to the remark after Proposition 11.9, condition (11.45) holds if $\|f^\delta\| > c\delta$ with $c > 5$ and if $k + m$ is sufficiently large. This with Lemma 11.11 ensures that the parameter $\alpha$ can be obtained by Algorithm 11.12. Noting that the parameter generated by Algorithm 11.12 satisfies condition (11.29), it follows from Theorem 11.6 that there exists a positive integer $N$ such that, when $k \ge N$ and $m \in \mathbb{N}_0$,

$$\|u^* - u^\delta_{\alpha,k,m}\| \le \|u^* - u_\alpha\| + \frac{2\delta}{\alpha} + \frac{1}{\tau}(3a_{1+\nu} + a_1 a_\nu)\|\omega\|\theta^\nu_{k+m} \tag{11.54}$$

or

$$\|u^* - u^\delta_{\alpha,k,m}\| \le c_\nu\|\omega\|\alpha^\nu + \frac{2\delta}{\alpha} + \frac{1}{\tau}(3a_{1+\nu} + a_1 a_\nu)\|\omega\|\theta^\nu_{k+m}. \tag{11.55}$$

In the case that (11.51) holds, it follows from Algorithm 11.12 that

$$\alpha^\nu \le (d\tau\theta_k)^\nu \le (cd\tau)^\nu\delta^\nu. \tag{11.56}$$

Using (11.43) and (11.44), we have

$$b\delta \le \Delta^\delta_{\alpha,k,m} \le \Delta_\alpha + |\Delta^\delta_{\alpha,k,m} - \Delta_\alpha| \le c^2_\nu\|\omega\|\alpha^{1+\nu} + D(\delta, \theta_{k+m}).$$

This with (11.56) yields

$$(b - 4)\frac{\delta}{\alpha} \le c^2_\nu\|\omega\|(cd\tau)^\nu\delta^\nu + (8a_{1+\nu} + 3a_1 a_\nu)\|\omega\|\theta^\nu_{k+m}.$$

Combining (11.55), (11.56) and the above inequality, we conclude the estimate of this theorem with

$$d_1 = \left(1 + \frac{2c_\nu}{b - 4}\right)c_\nu(cd\tau)^\nu\|\omega\|$$

and

$$d_2 = \left[\left(\frac{16}{b - 4} + \frac{3}{\tau}\right)a_{1+\nu} + \left(\frac{6}{b - 4} + \frac{1}{\tau}\right)a_1 a_\nu\right]\|\omega\|.$$


In the case that (11.52) holds, we let $\alpha' = \delta^{\frac{1}{1+\nu}} + \theta_{k+m}$. Then we have that

$$(\alpha')^\nu \le 2^\nu\left(\delta^{\frac{\nu}{1+\nu}} + \theta^\nu_{k+m}\right). \tag{11.57}$$

If $\alpha \ge \alpha'$, it follows from Proposition 11.10 that

$$\|u^* - u_\alpha\| \le \|u^* - u_{\alpha'}\| + \frac{\Delta_\alpha}{\alpha'}. \tag{11.58}$$

Using estimates (11.16) and (11.57), we have that

$$\|u^* - u_{\alpha'}\| \le c_\nu\|\omega\|(\alpha')^\nu \le 2^\nu c_\nu\|\omega\|\left(\delta^{\frac{\nu}{1+\nu}} + \theta^\nu_{k+m}\right). \tag{11.59}$$

From Proposition 11.9 and (11.52) we observe that

$$\Delta_\alpha \le |\Delta_\alpha - \Delta^\delta_{\alpha,k,m}| + \Delta^\delta_{\alpha,k,m} \le D(\delta, \theta_{k+m}) + b\delta,$$

which leads to

$$\frac{\Delta_\alpha}{\alpha'} \le (b + 4)\delta^{\frac{\nu}{1+\nu}} + (8a_{1+\nu} + 3a_1 a_\nu)\|\omega\|\theta^\nu_{k+m}.$$

This with (11.58) and (11.59) yields

$$\|u^* - u_\alpha\| \le [2^\nu c_\nu\|\omega\| + (b + 4)]\delta^{\frac{\nu}{1+\nu}} + [2^\nu c_\nu + (8a_{1+\nu} + 3a_1 a_\nu)]\|\omega\|\theta^\nu_{k+m}. \tag{11.60}$$

It is clear that

$$\frac{\delta}{\alpha} \le \frac{\delta}{\alpha'} \le \delta^{\frac{\nu}{1+\nu}}.$$

Combining (11.54), (11.60) and the above inequality, we obtain the estimate with

$$d_1 = 2^\nu c_\nu\|\omega\| + b + 6$$

and

$$d_2 = \left[2^\nu c_\nu + \left(8 + \frac{3}{\tau}\right)a_{1+\nu} + \left(3 + \frac{1}{\tau}\right)a_1 a_\nu\right]\|\omega\|.$$

If $\alpha \le \alpha'$, then, using (11.57),

$$\alpha^\nu \le 2^\nu\left(\delta^{\frac{\nu}{1+\nu}} + \theta^\nu_{k+m}\right). \tag{11.61}$$

It follows from (11.52) and Proposition 11.9 that

$$b\delta \le \Delta^\delta_{d\alpha,k,m} \le |\Delta^\delta_{d\alpha,k,m} - \Delta_{d\alpha}| + \Delta_{d\alpha} \le 4\delta + (8a_{1+\nu} + 3a_1 a_\nu)\|\omega\|\theta^{1+\nu}_{k+m} + \Delta_{d\alpha}.$$

Since $b > 4$ and $\tau\theta_k \le \alpha$, we obtain

$$(b - 4)\frac{\delta}{\alpha} \le \frac{1}{\tau}(8a_{1+\nu} + 3a_1 a_\nu)\|\omega\|\theta^\nu_{k+m} + \frac{\Delta_{d\alpha}}{\alpha},$$


which with (11.44) yields

$$\frac{\delta}{\alpha} \le \frac{1}{(b - 4)\tau}(8a_{1+\nu} + 3a_1 a_\nu)\|\omega\|\theta^\nu_{k+m} + \frac{1}{b - 4}c^2_\nu d^{1+\nu}\|\omega\|\alpha^\nu. \tag{11.62}$$

Combining (11.55), (11.61) and (11.62), we conclude the result of this theorem, again with

$$d_1 = \left[2^\nu c_\nu + \frac{(2d)^{1+\nu}c^2_\nu}{b - 4}\right]\|\omega\|$$

and

$$d_2 = d_1 + \left[\left(\frac{16}{b - 4} + 3\right)a_{1+\nu} + \left(\frac{6}{b - 4} + 1\right)a_1 a_\nu\right]\frac{\|\omega\|}{\tau}.$$

This completes the proof.

11.3 Multiscale collocation methods via the Tikhonov regularization

In this section we introduce a fast piecewise polynomial collocation method for solving the second-kind integral equation obtained by applying Tikhonov regularization to the original ill-posed equation. The method is based on a matrix compression strategy resulting from the use of multiscale piecewise polynomial basis functions and their corresponding multiscale collocation functionals.

11.3.1 The polynomial collocation method for the Tikhonov regularization

We present in this subsection a polynomial collocation method for solving the Tikhonov regularization equation. For this purpose we first describe the Tikhonov regularization in the $L^\infty$ space for ill-posed integral equations of the first kind. Suppose that $\Omega$ is a compact subset of the $d$-dimensional Euclidean space $\mathbb{R}^d$, $d \ge 1$. The Fredholm integral operator $\mathcal{K}$ is defined by

$$(\mathcal{K}u)(s) = \int_\Omega K(s, t)u(t)\,dt, \quad s \in \Omega, \tag{11.63}$$

where $K \in C(\Omega\times\Omega)$ is a nondegenerate kernel. We consider the Fredholm integral equation of the first kind in the form

$$\mathcal{K}u = f, \tag{11.64}$$


where $f \in L^\infty(\Omega)$ is a given function and $u$ is the unknown solution to be determined. The operator $\mathcal{K}$ can be considered as a compact operator from $L^\infty(\Omega)$ to $L^\infty(\Omega)$. In this case equation (11.64) is an ill-posed problem.

Let $\mathcal{K}^*$ be the adjoint operator of $\mathcal{K}$, defined by

$$(\mathcal{K}^*u)(s) = \int_\Omega K(t, s)u(t)\,dt, \quad s \in \Omega. \tag{11.65}$$

Instead of solving equation (11.64), the Tikhonov regularization method solves the equation

$$(\alpha I + \mathcal{A})u^\delta_\alpha = \mathcal{K}^*f^\delta, \tag{11.66}$$

where $\mathcal{A} = \mathcal{K}^*\mathcal{K}$, $\alpha$ is a positive parameter and $f^\delta$ is the approximate data of $f$ with

$$\|f^\delta - f\|_2 \le \delta \tag{11.67}$$

for some positive constant $\delta$. We also denote by $u_\alpha$ the solution of the equation

$$(\alpha I + \mathcal{A})u_\alpha = \mathcal{K}^*f, \tag{11.68}$$

where we assume that the right-hand-side function $f$ contains no noise.

For $1 \le p \le \infty$ we use $\|u\|_p$ to denote the norm of a function $u \in L^p(\Omega)$, and $\|A\|_{X\to Y}$ to denote the norm of an operator $A: X \to Y$. When $X = Y = L^p(\Omega)$ we simplify the notation to $\|A\|_p = \|A\|_{L^p(\Omega)\to L^p(\Omega)}$. Letting

$$M = \sup_{s\in\Omega}\left(\int_\Omega |K(t, s)|^2\,dt\right)^{1/2},$$

we have that $\|\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)} \le M$. We recall the following two useful estimates established in [224].

Lemma 11.15 For each $\alpha > 0$, the operator $\alpha I + \mathcal{A}$ is invertible from $L^\infty(\Omega)$ to $L^\infty(\Omega)$,

$$\|(\alpha I + \mathcal{A})^{-1}\|_\infty \le \frac{\sqrt{\alpha} + M^2}{\alpha^{3/2}} \quad\text{and}\quad \|(\alpha I + \mathcal{A})^{-1}\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)} \le \frac{M}{\alpha}.$$

It is well known that in the Hilbert space $L^2$ we have the estimates

$$\|(\alpha I + \mathcal{A})^{-1}\|_{L^2(\Omega)} \le \frac{1}{\alpha} \quad\text{and}\quad \|(\alpha I + \mathcal{A})^{-1}\mathcal{K}^*\|_{L^2(\Omega)} \le \frac{1}{2\alpha^{1/2}}.$$

These estimates are optimal in the $L^2$ space. However, to the best of our knowledge, the estimates stated in Lemma 11.15 are the only available estimates in the $L^\infty$ space, although it is not clear whether they are optimal there.

In the sequel, for simplicity, we assume that $f \in \mathcal{R}(\mathcal{K})$, where $\mathcal{R}(\mathcal{K})$ denotes the range of $\mathcal{K}$. Moreover, we use $u = \mathcal{K}^\dagger f$ to denote the solution of (11.64), where $\mathcal{K}^\dagger$ is the Moore–Penrose generalized inverse of $\mathcal{K}$. We also need the following assumption.


(H1) $u \in \mathcal{R}(\mathcal{A}^\nu\mathcal{K}^*)$ with $0 < \nu \le 1$; that is, there exists an $\omega \in L^\infty(\Omega)$ such that $u = \mathcal{A}^\nu\mathcal{K}^*\omega$.

The next result was obtained in [117].

Lemma 11.16 If $u \in \mathcal{R}(\mathcal{K}^*)$, then $\|u - u_\alpha\|_\infty \to 0$ as $\alpha \to 0$. Moreover, if hypothesis (H1) holds, then

$$\|u - u_\alpha\|_\infty \le c(\nu)\|\omega\|\alpha^\nu \quad\text{as } \alpha \to 0,$$

where $c(\nu)$ is a constant defined by

$$c(\nu) = \begin{cases} \dfrac{\sin\nu\pi}{\nu(1-\nu)\pi}, & 0 < \nu < 1,\\[2pt] 1, & \nu = 1. \end{cases}$$

We remark that a somewhat different collocation method was recently studied in [207] in the $L^2$ space. There, the ill-posed equation was first discretized using a numerical integration formula, which leads to a finite-dimensional equation; a regularization is then applied to convert the discrete ill-posed equation into a discrete well-posed equation. In [207] a weak assumption, $x \in \mathcal{R}(\mathcal{A}^\nu)$, $0 < \nu \le 1$, was used to obtain the optimal convergence rate $O(\delta^{2\nu/(2\nu+1)})$ for the Tikhonov regularization solution in the $L^2$ space. Moreover, papers [194, 195] prove the saturation of methods for solving linear ill-posed problems in Hilbert spaces for a wide class of regularization methods. We take a different approach in this section: we first regularize the ill-posed equation (11.64) to obtain equation (11.66), and then apply the fast collocation method to solve equation (11.66). For the convergence analysis of the proposed method, we feel that for the collocation method the use of the $L^\infty$-norm is more natural. For this reason we adopt the $L^\infty$ space in this section for the analysis of the proposed collocation method. Our analysis is based on the estimates described in Lemmas 11.15 and 11.16, presented in [224] and [117], respectively.

Next we present the piecewise polynomial collocation method for solving the regularized equation (11.66). We need some notation. We assume that there is a partition $E = \{\Omega_n : n \in \mathbb{Z}_N\}$ of $\Omega$, for $N \in \mathbb{N}$, which satisfies the following conditions:

(I) $\Omega = \bigcup_{n\in\mathbb{Z}_N}\Omega_n$ and $\mathrm{meas}(\Omega_i\cap\Omega_j) = 0$ for $i \ne j$.

(II) For each $n \in \mathbb{Z}_N$ there exists an invertible affine map $\varphi_n: \Omega_0 \to \Omega$ such that $\varphi_n(\Omega_0) = \Omega_n$, where $\Omega_0 \subseteq \mathbb{R}^d$ is a reference element.

We denote by $X^k_N$ the space of piecewise polynomials of total degree $\le k - 1$ with respect to the partition $E$; in other words, every element of $X^k_N$ is a polynomial of total degree $\le k - 1$ on each element $\Omega_n$. Since the dimension of the space of polynomials of total degree $\le k - 1$ is given by $r_k = \binom{k + d - 1}{d}$, the dimension of $X^k_N$ is $Nr_k$.

Choose $r_k$ distinct points $x_j \in \Omega_0$, $j = 1, 2, \dots, r_k$, in general position; that is, the Lagrange interpolation polynomial of total degree $k - 1$ at these points is uniquely defined (cf. [198]). We assume that the polynomials $p_j \in \mathbb{P}_k$ of total degree $k - 1$ satisfy the conditions $p_j(x_i) = \delta_{ij}$ for $i, j = 1, 2, \dots, r_k$. For each $n \in \mathbb{Z}_N$ we define basis functions by

$$\rho_{nj}(x) = \begin{cases} (p_j\circ\varphi_n^{-1})(x), & x \in \Omega_n,\\ 0, & x \notin \Omega_n. \end{cases}$$

In what follows we denote $h = \max\{\mathrm{diam}(\Omega_n) : n \in \mathbb{Z}_N\}$ and define the interpolation projection $\mathcal{P}_h: C(\Omega) \to X^k_N$. The projection $\mathcal{P}_h$ is first defined for $f \in C(\Omega)$ by

$$(\mathcal{P}_h f)(x) = \sum_{n\in\mathbb{Z}_N}\sum_{j=1}^{r_k} f(\varphi_n(x_j))\rho_{nj}(x), \quad x \in \Omega.$$

The operator $\mathcal{P}_h: C(\Omega) \to X^k_N$ is an interpolation projection onto $X^k_N$. It can be verified that there exists a positive constant $c$ such that $\|\mathcal{P}_h\|_\infty \le c$ for all $N$. The projection $\mathcal{P}_h$ can be extended to $L^\infty(\Omega)$ by the Hahn–Banach theorem. We also need the orthogonal projection $\mathcal{Q}_h: L^2(\Omega) \to X^k_N$.

N In the next proposition we prove important properties of the projections Ph

and Qh These properties may have appeared in the literature in a differentform To make this section self-contained we provide a complete proof for theconvenience of the reader For 1 le p q le +infin and a positive constant r welet Wrq() denote the Sobolev space of functions whose r-derivatives are inLq() and set

Lp( Wrq()) = v v(s middot) isin Wrq() for almost all s isin and v(s middot)Wrq() isin Lp()

and

Lp(Wrq()) = v v(middot t) isin Wrq() for almost all t isin and v(middot t)Wrq() isin Lp()

In the rest of this section unless stated otherwise we use c for a genericpositive constant whose values may be different on different occasions

Proposition 11.17 If $K \in C(\Omega\times\Omega)$ and $0 < r \le k$, then the following statements hold.

(1) For $K \in L^\infty(\Omega, W^{r,1}(\Omega))$: $\|\mathcal{K}(I - \mathcal{Q}_h)\|_\infty \le ch^r\|K\|_{L^\infty(\Omega, W^{r,1}(\Omega))}$.

(2) For $K \in L^\infty(W^{r,1}(\Omega), \Omega)$: $\|(I - \mathcal{P}_h)\mathcal{K}^*\|_\infty \le ch^r\|K\|_{L^\infty(W^{r,1}(\Omega), \Omega)}$.

(3) For $K \in L^2(\Omega, W^{r,2}(\Omega))$: $\|\mathcal{K}(I - \mathcal{Q}_h)\|_2 \le ch^r\|K\|_{L^2(\Omega, W^{r,2}(\Omega))}$.

(4) For $K \in L^2(W^{r,2}(\Omega), \Omega)$: $\|(I - \mathcal{P}_h)\mathcal{K}^*\|_2 \le ch^r\|K\|_{L^2(W^{r,2}(\Omega), \Omega)}$.

(5) For $K \in L^\infty(W^{r,2}(\Omega), \Omega)$: $\|(I - \mathcal{P}_h)\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)} \le ch^r\|K\|_{L^\infty(W^{r,2}(\Omega), \Omega)}$.

Proof (1) For any $u \in L^\infty(\Omega)$, noting that $\mathcal{Q}_h$ is self-adjoint, we have that

$$(\mathcal{K}(I - \mathcal{Q}_h)u)(s) = \int_\Omega ((I - \mathcal{Q}_h)K(s,\cdot))(t)\,u(t)\,dt. \tag{11.69}$$

Therefore,

$$\|\mathcal{K}(I - \mathcal{Q}_h)u\|_\infty \le \sup_{s\in\Omega}\int_\Omega |((I - \mathcal{Q}_h)K(s,\cdot))(t)|\,dt\,\|u\|_\infty \le ch^r\|K\|_{L^\infty(\Omega, W^{r,1}(\Omega))}\|u\|_\infty,$$

which implies that

$$\|\mathcal{K}(I - \mathcal{Q}_h)\|_\infty \le ch^r\|K\|_{L^\infty(\Omega, W^{r,1}(\Omega))}.$$

(2) Likewise, we have that

$$((I - \mathcal{P}_h)\mathcal{K}^*u)(s) = \int_\Omega ((I - \mathcal{P}_h)K(\cdot, t))(s)\,u(t)\,dt. \tag{11.70}$$

This implies that

$$\|(I - \mathcal{P}_h)\mathcal{K}^*u\|_\infty \le \sup_{s\in\Omega}\int_\Omega |((I - \mathcal{P}_h)K(\cdot, t))(s)|\,dt\,\|u\|_\infty \le ch^r\|K\|_{L^\infty(W^{r,1}(\Omega), \Omega)}\|u\|_\infty,$$

which leads to the estimate of (2).

(3) For any $u \in L^2(\Omega)$, it follows from (11.69) that

$$\|\mathcal{K}(I - \mathcal{Q}_h)u\|_2 \le \left(\int_\Omega \|(I - \mathcal{Q}_h)K(s,\cdot)\|_2^2\,ds\right)^{1/2}\|u\|_2 \le ch^r\|K\|_{L^2(\Omega, W^{r,2}(\Omega))}\|u\|_2,$$

which yields the result of (3).

(4) The estimate of (4) follows from (11.70) and an argument similar to that used in the proof of (3).

(5) For any $u \in L^2(\Omega)$, it follows that

$$\|(I - \mathcal{P}_h)\mathcal{K}^*u\|_\infty \le \sup_{s\in\Omega}\|((I - \mathcal{P}_h)K)(\cdot, s)\|_2\|u\|_2 \le ch^r\|K\|_{L^\infty(W^{r,2}(\Omega), \Omega)}\|u\|_2.$$

This completes the proof.


The next corollary follows directly from Proposition 11.17.

Corollary 11.18 If $K(\cdot,\cdot) \in W^{r,\infty}(\Omega\times\Omega)$ with $0 < r \le k$, then there exists a positive constant $c$ such that for all $h > 0$,

$$\max\left\{\|\mathcal{A} - \mathcal{P}_h\mathcal{A}\mathcal{Q}_h\|_\infty,\ \|\mathcal{P}_h\mathcal{A}\mathcal{Q}_h - \mathcal{P}_h\mathcal{A}\|_\infty,\ \|(I - \mathcal{P}_h)\mathcal{K}^*\|_{L^2(\Omega)\to L^\infty(\Omega)}\right\} \le ch^r.$$

We remark that the number $r$ on the right-hand side of the above estimate may be larger than that in the assumption of this corollary (see the example in Section 11.4.2).

Using the piecewise polynomial spaces and the projection operators introduced above, the piecewise polynomial collocation method for solving the regularized equation (11.66) is to find $u^\delta_{\alpha,h} \in X^k_N$ such that

$$(\alpha I + \mathcal{P}_h\mathcal{A}\mathcal{Q}_h)u^\delta_{\alpha,h} = \mathcal{P}_h\mathcal{K}^*f^\delta. \tag{11.71}$$

Following the discussion in [224], we have the convergence and error estimate of the polynomial collocation method.

Theorem 11.19 Suppose that $K(\cdot,\cdot) \in W^{r,\infty}(\Omega\times\Omega)$ with $0 < r \le k$ and

$$ch^r \le \min\left\{\frac{1}{2}\,\frac{\alpha^{3/2}}{\sqrt{\alpha} + M^2},\ \frac{\sqrt{\alpha}}{\sqrt{\alpha} + M^2}\right\}. \tag{11.72}$$

(1) If $u \in \mathcal{R}(\mathcal{K}^*)$ and $h$, $\alpha$ are chosen such that $h^r = O(\sqrt{\alpha}\,\delta)$ and $\delta/\alpha \to 0$ as $\delta \to 0$, then

$$\|u - u^\delta_{\alpha,h}\|_\infty \to 0 \quad\text{as } \delta \to 0. \tag{11.73}$$

(2) If hypothesis (H1) holds and $h$, $\alpha$ are chosen such that $h^r = O(\sqrt{\alpha}\,\delta)$ and $\alpha \sim \delta^{\frac{1}{\nu+1}}$ as $\delta \to 0$, then

$$\|u - u^\delta_{\alpha,h}\|_\infty = O\left(\delta^{\frac{\nu}{\nu+1}}\right) \quad\text{as } \delta \to 0. \tag{11.74}$$

Proof Using Lemma 11.15 and a standard argument, we have that

$$\|u - u^\delta_{\alpha,h}\|_\infty \le \|u - u_\alpha\|_\infty + c\left(\frac{\delta}{\alpha} + \frac{h^r}{\alpha^{3/2}}\right).$$

Thus the results of this theorem follow from Theorems 2.5 and 2.6 of [224].


11.3.2 The fast multiscale collocation method

This subsection is devoted to the development of the fast multiscale piecewise polynomial collocation method for solving equation (11.66).

We recall the multiscale sequence of approximate subspaces A sequence ofpartitions of domain is called multiscale if every partition in the sequenceis obtained from a refinement of its previous partition Let Xn n isin N be asequence of piecewise polynomial spaces of total degree k minus 1 based on asequence of multiscale partitions of where X0 is the space of piecewisepolynomials of total degree k minus 1 on an initial partition of with m =dimX0 = m0rk for some positive integer m0 Because of the multiscalepartition we have that the spaces Xn n isin N are nested that is Xn sube Xn+1 forn isin N This leads to the decomposition

$$X_{n}=W_{0}\oplus^{\perp}W_{1}\oplus^{\perp}\cdots\oplus^{\perp}W_{n},\qquad(11.75)$$

where $W_0=X_0$. For each $i\in\mathbb{N}_0$ we assume that $W_i$ has a basis $\{w_{ij}:j\in\mathbb{Z}_{w(i)}\}$, that is, $W_i=\operatorname{span}\{w_{ij}:j\in\mathbb{Z}_{w(i)}\}$. According to (11.75) we then have that $X_n=\operatorname{span}\{w_{ij}:(i,j)\in U_n\}$. For each $(i,j)\in U_n$ we denote by $S_{ij}$ the support of the basis function $w_{ij}$, and let $d(A)$ denote the diameter of the set $A\subset\Omega$. We define $s(n)=\dim X_n$ and, for each $i\in\mathbb{Z}_{n+1}$, we set $h_i=\max\{d(S_{ij}):j\in\mathbb{Z}_{w(i)}\}$. We further require that the spaces and their bases have the properties that $s(n)\sim\mu^{n}$, $w(i)\sim\mu^{i}$ and $h_i\sim\mu^{-i/d}$, where $\mu>1$ is an integer, and that there exists a positive constant $c$ such that $\|w_{ij}\|_{\infty}\le c$ for all $(i,j)\in U$. These properties are fulfilled for the spaces and bases constructed in [69, 75, 264].
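These growth properties can be made concrete in the dyadic setting used later in Section 11.4.2 ($\mu=2$, $d=1$, $\dim X_0=2$, $\dim W_i=2^{i}$); the following sketch is illustrative bookkeeping only:

```python
# Illustrative sketch only: the dyadic case mu = 2, d = 1 with
# dim X_0 = 2 and dim W_i = 2**i (the concrete spaces of Section 11.4.2).
mu = 2

def w(i):           # w(i) ~ mu**i: dimension of W_i
    return 2 if i == 0 else mu ** i

def s(n):           # s(n) = dim X_n, the sum of the W_i dimensions
    return sum(w(i) for i in range(n + 1))

def h(i):           # h_i ~ mu**(-i/d): mesh size at level i (here d = 1)
    return mu ** (-i)

# nestedness gives s(n) = 2**(n+1), so indeed s(n) ~ mu**n
assert all(s(n) == 2 ** (n + 1) for n in range(12))
```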

Next we turn to describing a multiscale sequence of the corresponding collocation functionals. Associated with each basis function $w_{ij}$ we have a collocation functional $\ell_{ij}$, which is a sum of point-evaluation functionals at a fixed number of points in $S_{ij}$. We demand that, for each $(i,j)\in U_n$, $\langle\ell_{ij},q\rangle=0$ for any polynomial $q$ of total degree $k-1$, and that there exists a positive constant $c$ such that $\|\ell_{ij}\|\le c$ for all $(i,j)\in U$. We also require that the basis functions and their corresponding collocation functionals have the semi-bi-orthogonality property

$$\langle\ell_{i'j'},w_{ij}\rangle=\delta_{i'i}\delta_{j'j},\qquad(i,j),(i',j')\in U,\ i\le i'.$$

These properties of the collocation functionals are satisfied for those constructed in [69, 75]. The multiscale collocation functionals constructed in these papers make use of the refinable sets introduced in [65], which admit the unique Lagrange interpolating polynomial (cf. [198]). These functionals were originally defined for continuous functions and then extended to functions in $L^{\infty}(\Omega)$ by the Hahn–Banach theorem. Corresponding to each subspace $W_i$

we have the collocation functional space $V_i=\operatorname{span}\{\ell_{ij}:j\in\mathbb{Z}_{w(i)}\}$, and corresponding to the space $X_n$ we have $\mathcal{L}_n=\operatorname{span}\{\ell_{ij}:(i,j)\in U_n\}$. The space $\mathcal{L}_n$ has the decomposition

$$\mathcal{L}_{n}=V_{0}\oplus V_{1}\oplus\cdots\oplus V_{n}.$$

We now formulate the collocation method for solving equation (11.66). To this end, for each $n\in\mathbb{N}_0$ we let $\mathcal{P}_n$ denote the interpolation projection from $L^{\infty}(\Omega)$ onto $X_n$, defined for $f\in L^{\infty}(\Omega)$ by $\mathcal{P}_nf\in X_n$ satisfying

$$\langle\ell_{ij},f-\mathcal{P}_nf\rangle=0,\qquad(i,j)\in U_n.$$

Moreover, for each $n\in\mathbb{N}_0$ we let $\mathcal{Q}_n$ denote the orthogonal projection from $L^{2}(\Omega)$ onto $X_n$. The collocation method for solving (11.66) is to find $u^{\delta}_{\alpha,n}\in X_n$ such that

$$(\alpha I+\mathcal{P}_n\mathcal{A}\mathcal{Q}_n)u^{\delta}_{\alpha,n}=\mathcal{P}_n\mathcal{K}^{*}f^{\delta}.\qquad(11.76)$$

In the standard piecewise polynomial collocation method the orthogonal projection $\mathcal{Q}_n$ is not used; instead, its role is taken by the interpolation projection $\mathcal{P}_n$. At the operator-equation level the two formulations are equivalent. However, the use of the orthogonal projection $\mathcal{Q}_n$ allows us to use the multiscale basis functions, which have vanishing moments. This is crucial for developing fast algorithms based on matrix compression.

The matrix representation of the operator $\alpha I+\mathcal{P}_n\mathcal{A}\mathcal{Q}_n$ under the basis functions and the corresponding multiscale collocation functionals is a dense matrix. To compress this matrix we write it in block form. Let $\mathcal{A}_n=\mathcal{P}_n\mathcal{A}\mathcal{Q}_n$; then the operator $\mathcal{A}_n:X_n\to X_n$ is identified in matrix form with

$$\mathcal{A}_n=\begin{bmatrix}\mathcal{P}_0\mathcal{A}\mathcal{Q}_0&\mathcal{P}_0\mathcal{A}(\mathcal{Q}_1-\mathcal{Q}_0)&\cdots&\mathcal{P}_0\mathcal{A}(\mathcal{Q}_n-\mathcal{Q}_{n-1})\\(\mathcal{P}_1-\mathcal{P}_0)\mathcal{A}\mathcal{Q}_0&(\mathcal{P}_1-\mathcal{P}_0)\mathcal{A}(\mathcal{Q}_1-\mathcal{Q}_0)&\cdots&(\mathcal{P}_1-\mathcal{P}_0)\mathcal{A}(\mathcal{Q}_n-\mathcal{Q}_{n-1})\\\vdots&\vdots&&\vdots\\(\mathcal{P}_n-\mathcal{P}_{n-1})\mathcal{A}\mathcal{Q}_0&(\mathcal{P}_n-\mathcal{P}_{n-1})\mathcal{A}(\mathcal{Q}_1-\mathcal{Q}_0)&\cdots&(\mathcal{P}_n-\mathcal{P}_{n-1})\mathcal{A}(\mathcal{Q}_n-\mathcal{Q}_{n-1})\end{bmatrix}.\qquad(11.77)$$

If $i+j>n$, we replace the block $(\mathcal{P}_i-\mathcal{P}_{i-1})\mathcal{A}(\mathcal{Q}_j-\mathcal{Q}_{j-1})$ of (11.77) by zero, which leads to a compressed matrix

$$\widetilde{\mathcal{A}}_n=\begin{bmatrix}\mathcal{P}_0\mathcal{A}\mathcal{Q}_0&\mathcal{P}_0\mathcal{A}(\mathcal{Q}_1-\mathcal{Q}_0)&\cdots&\mathcal{P}_0\mathcal{A}(\mathcal{Q}_n-\mathcal{Q}_{n-1})\\(\mathcal{P}_1-\mathcal{P}_0)\mathcal{A}\mathcal{Q}_0&(\mathcal{P}_1-\mathcal{P}_0)\mathcal{A}(\mathcal{Q}_1-\mathcal{Q}_0)&\cdots&0\\\vdots&\vdots&&\vdots\\(\mathcal{P}_n-\mathcal{P}_{n-1})\mathcal{A}\mathcal{Q}_0&0&\cdots&0\end{bmatrix}.\qquad(11.78)$$

We call this compression strategy the $\Lambda$-shape compression. Letting $\mathcal{P}_{-1}=\mathcal{Q}_{-1}=0$, the operator $\widetilde{\mathcal{A}}_n:X_n\to X_n$ can be written as

$$\widetilde{\mathcal{A}}_n=\sum_{i,j\in\mathbb{Z}_{n+1},\ i+j\le n}(\mathcal{P}_i-\mathcal{P}_{i-1})\mathcal{A}(\mathcal{Q}_j-\mathcal{Q}_{j-1}).$$
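In matrix terms the strategy keeps only the blocks $(i',i)$ with $i'+i\le n$ and zeroes the rest. A sketch of this masking, with hypothetical block sizes standing in for the dimensions of the $W_i$:

```python
import numpy as np

def lambda_shape_mask(block_dims, dense):
    # Zero out the (i', i) block of a block matrix whenever i' + i > n,
    # mirroring the compressed matrix above; block_dims[i] plays the role
    # of dim W_i (hypothetical sizes for illustration).
    n = len(block_dims) - 1
    offs = np.concatenate(([0], np.cumsum(block_dims)))
    out = dense.copy()
    for ip in range(n + 1):
        for i in range(n + 1):
            if ip + i > n:
                out[offs[ip]:offs[ip + 1], offs[i]:offs[i + 1]] = 0.0
    return out
```

For example, with block sizes `[2, 2, 4]` applied to an all-ones 8×8 matrix, only the six blocks with $i'+i\le2$ survive.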


In the collocation method (11.76) we replace $\mathcal{A}_n$ by $\widetilde{\mathcal{A}}_n$ and obtain a new approximation scheme for solving equation (11.66). That is, we find $u^{\delta}_{\alpha,n}\in X_n$ such that

$$(\alpha I+\widetilde{\mathcal{A}}_n)u^{\delta}_{\alpha,n}=\mathcal{P}_n\mathcal{K}^{*}f^{\delta}.\qquad(11.79)$$

We show that this modified collocation method leads to a fast algorithm and, at the same time, preserves the convergence rate obtained in [224].

To write equation (11.79) in its equivalent matrix form we make use of the multiscale basis functions and the corresponding collocation functionals. We write the solution $u^{\delta}_{\alpha,n}\in X_n$ as $u^{\delta}_{\alpha,n}=\sum_{(i,j)\in U_n}u_{ij}w_{ij}$ and introduce the solution vector $\mathbf{u}_n=[u_{ij}:(i,j)\in U_n]^{T}$. We introduce the matrices

$$\mathbf{E}_n=[\langle\ell_{i'j'},w_{ij}\rangle:(i',j'),(i,j)\in U_n],\qquad\widetilde{\mathbf{A}}_n=[\widetilde{A}_{i'j',ij}:(i',j'),(i,j)\in U_n],$$

where

$$\widetilde{A}_{i'j',ij}=\begin{cases}\langle\ell_{i'j'},\mathcal{K}^{*}\mathcal{K}w_{ij}\rangle,&i'+i\le n,\\0,&\text{otherwise},\end{cases}\qquad(11.80)$$

and the vector $\mathbf{f}_n=[\langle\ell_{i'j'},\mathcal{K}^{*}f^{\delta}\rangle:(i',j')\in U_n]^{T}$. Upon using these notations, equation (11.79) is written in the matrix form

$$(\alpha\mathbf{E}_n+\widetilde{\mathbf{A}}_n)\mathbf{u}_n=\mathbf{f}_n.\qquad(11.81)$$

The semi-bi-orthogonality and the compact-support property of the basis functions $w_{ij}$ and the corresponding collocation functionals $\ell_{ij}$, $(i,j)\in U$, ensure that $\mathbf{E}_n$ is a sparse upper triangular matrix. According to the $\Lambda$-shape compression strategy, $\widetilde{\mathbf{A}}_n$ is a sparse matrix. These facts lead to a fast algorithm for solving equation (11.81).

In the next theorem we analyze the number $\mathcal{N}(\widetilde{\mathbf{A}}_n)$ of nonzero entries of the matrix $\widetilde{\mathbf{A}}_n$.

Theorem 11.20 If the matrix $\widetilde{\mathbf{A}}_n$ is obtained from the compression strategy (11.80), then

$$\mathcal{N}(\widetilde{\mathbf{A}}_n)=O(s(n)\log(s(n))),\quad n\to\infty.$$

Proof For $i,i'\in\mathbb{Z}_{n+1}$ we introduce the block matrix

$$\widetilde{\mathbf{A}}_{i'i}=[\widetilde{A}_{i'j',ij}:j'\in\mathbb{Z}_{w(i')},\,j\in\mathbb{Z}_{w(i)}].$$

According to the compression strategy (11.80), it is clear that

$$\mathcal{N}(\widetilde{\mathbf{A}}_n)=\sum_{i'+i\le n}\mathcal{N}(\widetilde{\mathbf{A}}_{i'i}).$$


Noting that $\widetilde{\mathbf{A}}_{i'i}$ is a $w(i')\times w(i)$ matrix and that $w(i)\sim\mu^{i}$, from the equation above we conclude that

$$\mathcal{N}(\widetilde{\mathbf{A}}_n)\sim\sum_{i'+i\le n}\mu^{i'+i}=\sum_{i\in\mathbb{Z}_{n+1}}\mu^{i}\sum_{i'\in\mathbb{Z}_{n-i+1}}\mu^{i'}=O(s(n)\log(s(n)))\quad\text{as }n\to\infty,$$

proving the result of this theorem.
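The count in Theorem 11.20 can be checked numerically. The following sketch assumes the dyadic dimensions of Section 11.4.2 ($w(0)=2$, $w(i)=2^{i}$) and tabulates the ratio of the retained-entry count to $s(n)\log s(n)$:

```python
import math

def w(i, mu=2):
    # dim W_i in the dyadic piecewise linear case (w(0) = dim X_0 = 2)
    return 2 if i == 0 else mu ** i

def s(n, mu=2):
    # s(n) = dim X_n
    return sum(w(i, mu) for i in range(n + 1))

def nnz_compressed(n, mu=2):
    # N(A~_n) = sum over blocks with i' + i <= n of w(i') * w(i)
    return sum(w(ip, mu) * w(i, mu)
               for ip in range(n + 1) for i in range(n + 1) if ip + i <= n)

ratios = [nnz_compressed(n) / (s(n) * math.log(s(n))) for n in range(4, 14)]
# the ratios stay bounded, illustrating N(A~_n) = O(s(n) log s(n))
```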

A crucial step in generating the sparse coefficient matrix $\widetilde{\mathbf{A}}_n$ is computing the nonzero entries $\langle\ell_{i'j'},\mathcal{K}^{*}\mathcal{K}w_{ij}\rangle$ for $i'+i\le n$. This requires computing the values of the collocation functionals $\ell_{i'j'}$ at the functions defined by the integrals

$$\int_{\Omega}\int_{\Omega}K(\tau,s)K(\tau,t)\,d\tau\,w_{ij}(t)\,dt.$$

The same issue has been addressed in [33] in a different context (see Theorems 5 and 6 in that paper). The key idea is to develop a numerical quadrature strategy for computing the entries

$$\left\langle\ell_{i'j'},\int_{\Omega}\int_{\Omega}K(\tau,s)K(\tau,t)\,d\tau\,w_{ij}(t)\,dt\right\rangle$$

of the compressed matrix. Such a strategy requires only $O(s(n)\log(s(n)))$ functional evaluations while preserving the convergence order of the resulting approximate solution. See also [75] for a similar development for the second-kind Fredholm integral equation. Since this issue is understood in principle, and since the main focus of this section is the regularization, we omit the details of this development and leave them to the interested reader.

We briefly discuss an alternative idea, which may provide a fast algorithm for the collocation method proposed recently in [207]. The suggestion is that we first discretize equation (11.64) directly by using the collocation method described previously, which results in the linear system

$$\mathbf{K}_n\mathbf{u}_n=\mathbf{f}_n.\qquad(11.82)$$

The use of the collocation functionals $\ell_{i'j'}$, instead of the point-evaluation functionals as in [207], is for the matrix compression. The second step is to compress the dense matrix $\mathbf{K}_n$ in equation (11.82) to obtain

$$\widetilde{\mathbf{K}}_n\mathbf{u}_n=\mathbf{f}_n.\qquad(11.83)$$

The resulting coefficient matrix has only $O(s(n)\log(s(n)))$ nonzero entries. This treatment is the same as that for the second-kind Fredholm integral equation developed in [69]. The third step is to apply the regularization to


equation (11.83), to obtain

$$(\alpha\mathbf{E}_n+\widetilde{\mathbf{K}}_n^{*}\widetilde{\mathbf{K}}_n)\mathbf{u}_{n,\alpha}=\widetilde{\mathbf{K}}_n^{*}\mathbf{f}_n.\qquad(11.84)$$

The matrix $\mathbf{E}_n$ may be replaced by the identity matrix $\mathbf{I}_n$. Since the matrices $\mathbf{E}_n$, $\widetilde{\mathbf{K}}_n$ and $\widetilde{\mathbf{K}}_n^{*}$ are all sparse, having at most $O(s(n)\log(s(n)))$ nonzero entries, the computation of $\widetilde{\mathbf{K}}_n^{*}\widetilde{\mathbf{K}}_n\mathbf{u}_{n,\alpha}$ admits a fast algorithm when certain iterative methods are used to solve the discrete equation (11.84): one first computes $\widetilde{\mathbf{K}}_n\mathbf{u}_{n,\alpha}$ and then multiplies the resulting vector by $\widetilde{\mathbf{K}}_n^{*}$. A further description of this idea would be a detour from the main focus of this section; hence we leave the details for future investigation.
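A minimal sketch of this two-stage multiplication, with a toy sparse matrix in coordinate (COO) form (the triplets below are hypothetical) and $\mathbf{E}_n$ replaced by the identity, as the text notes is permissible:

```python
import numpy as np

def coo_matvec(rows, cols, vals, x, out_dim):
    # y = A x for a sparse A stored as COO triplets; cost O(nnz(A))
    y = np.zeros(out_dim)
    np.add.at(y, rows, vals * x[cols])   # unbuffered accumulation
    return y

# a small hypothetical sparse matrix playing the role of the compressed K_n
rows = np.array([0, 1, 2, 3, 0])
cols = np.array([0, 1, 2, 3, 3])
vals = np.array([2.0, 1.0, 1.0, 3.0, 0.5])
alpha, m = 0.1, 4

def regularized_matvec(u):
    # u -> alpha*u + K^T (K u): two O(nnz) sparse products; the dense
    # product K^T K is never formed (E_n taken as the identity here)
    Ku = coo_matvec(rows, cols, vals, u, m)
    KtKu = coo_matvec(cols, rows, vals, Ku, m)  # transpose = swap row/col indices
    return alpha * u + KtKu
```

This matvec is exactly the operation an iterative solver would apply repeatedly.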

We now turn to estimating the convergence rate of the modified collocation method (11.79). We impose the following hypothesis.

(H2) There exists a positive constant $c$ such that

$$\|\mathcal{K}(I-\mathcal{Q}_j)\|_{\infty}\le c\mu^{-rj/d},\qquad\|\mathcal{K}(I-\mathcal{Q}_j)\|_{2}\le c\mu^{-rj/d},$$

$$\|(I-\mathcal{P}_j)\mathcal{K}^{*}\|_{\infty}\le c\mu^{-rj/d},\qquad\|(I-\mathcal{P}_j)\mathcal{K}^{*}\|_{2}\le c\mu^{-rj/d},$$

$$\|(I-\mathcal{P}_j)\mathcal{K}^{*}\|_{L^{2}(\Omega)\to L^{\infty}(\Omega)}\le c\mu^{-rj/d}.$$

Proposition 11.17 gives various smoothness conditions on the kernel $K$ under which the estimates in hypothesis (H2) hold. We impose hypothesis (H2) instead of the smoothness conditions on the kernel $K$ because the smoothness conditions are sufficient but not necessary. Examples of such cases will be shown in the last section.

In the following lemma we present the error between $\widetilde{\mathcal{A}}_n$ and $\mathcal{A}$ in the operator norm.

Lemma 11.21 If hypothesis (H2) holds, then there exists a positive constant $c_0$ such that

$$\|\mathcal{A}-\widetilde{\mathcal{A}}_n\|_{\infty}\le c_{0}\,n\,\mu^{-rn/d}.$$

Proof We write

$$\mathcal{A}-\widetilde{\mathcal{A}}_n=(I-\mathcal{P}_n)\mathcal{A}+\mathcal{P}_n\mathcal{A}(I-\mathcal{Q}_n)+(\mathcal{P}_n\mathcal{A}\mathcal{Q}_n-\widetilde{\mathcal{A}}_n).\qquad(11.85)$$

Recalling that $\mathcal{A}=\mathcal{K}^{*}\mathcal{K}$ and that $\|\mathcal{P}_n\|_{\infty}$ is uniformly bounded by a constant, it follows from hypothesis (H2) that there exists a positive constant $c$ such that

$$\|(I-\mathcal{P}_n)\mathcal{A}+\mathcal{P}_n\mathcal{A}(I-\mathcal{Q}_n)\|_{\infty}\le c\mu^{-rn/d}.\qquad(11.86)$$

Moreover, noting for each $i\in\mathbb{Z}_{n+1}$ that

$$\mathcal{Q}_{n-i}=\sum_{j\in\mathbb{Z}_{n-i+1}}(\mathcal{Q}_j-\mathcal{Q}_{j-1}),$$


we conclude that

$$\widetilde{\mathcal{A}}_n=\sum_{i\in\mathbb{Z}_{n+1}}(\mathcal{P}_i-\mathcal{P}_{i-1})\mathcal{A}\mathcal{Q}_{n-i}.\qquad(11.87)$$

We conclude from (11.87) that

$$\mathcal{P}_n\mathcal{A}\mathcal{Q}_n-\widetilde{\mathcal{A}}_n=\sum_{i\in\mathbb{Z}_{n+1}}(\mathcal{P}_i-\mathcal{P}_{i-1})\mathcal{K}^{*}\mathcal{K}(\mathcal{Q}_n-\mathcal{Q}_{n-i}).\qquad(11.88)$$

Again using hypothesis (H2), there exists a positive constant $c$ such that

$$\|(\mathcal{P}_i-\mathcal{P}_{i-1})\mathcal{K}^{*}\|_{\infty}\le\|(\mathcal{P}_i-I)\mathcal{K}^{*}\|_{\infty}+\|(I-\mathcal{P}_{i-1})\mathcal{K}^{*}\|_{\infty}\le c\mu^{-ri/d}$$

and

$$\|\mathcal{K}(\mathcal{Q}_n-\mathcal{Q}_{n-i})\|_{\infty}\le\|\mathcal{K}(\mathcal{Q}_n-I)\|_{\infty}+\|\mathcal{K}(I-\mathcal{Q}_{n-i})\|_{\infty}\le c\mu^{-r(n-i)/d}.$$

These estimates together with (11.88) lead to

$$\|\mathcal{P}_n\mathcal{A}\mathcal{Q}_n-\widetilde{\mathcal{A}}_n\|_{\infty}\le c\sum_{i=1}^{n}\mu^{-ri/d}\mu^{-r(n-i)/d}\le c_{0}\,n\,\mu^{-rn/d}.\qquad(11.89)$$

Combining (11.86) and (11.89) yields the desired result of this lemma.

We establish below an estimate for the approximate operator $\widetilde{\mathcal{A}}_n$, similar to the first estimate in Lemma 11.15, which is for the operator $\mathcal{A}$.

Lemma 11.22 Suppose that hypothesis (H2) holds. If

$$n\,\mu^{-rn/d}\le\frac{1}{2c_{0}}\,\frac{\alpha^{3/2}}{\sqrt{\alpha+M^{2}}},\qquad(11.90)$$

where $c_0$ is the constant appearing in Lemma 11.21, then $\alpha I+\widetilde{\mathcal{A}}_n:L^{\infty}(\Omega)\to L^{\infty}(\Omega)$ is invertible and

$$\|(\alpha I+\widetilde{\mathcal{A}}_n)^{-1}\|_{\infty}\le\frac{2(\sqrt{\alpha}+M)}{\alpha^{3/2}}.\qquad(11.91)$$

Proof By a basic result in functional analysis (cf. Theorem 1.5, p. 193 of [254]), we conclude that $\alpha I+\widetilde{\mathcal{A}}_n:L^{\infty}(\Omega)\to L^{\infty}(\Omega)$ is invertible and

$$\|(\alpha I+\widetilde{\mathcal{A}}_n)^{-1}\|_{\infty}\le\frac{\|(\alpha I+\mathcal{A})^{-1}\|_{\infty}}{1-\|(\alpha I+\mathcal{A})^{-1}\|_{\infty}\|\mathcal{A}-\widetilde{\mathcal{A}}_n\|_{\infty}}.$$

The estimate (11.91) follows from the above bound, condition (11.90) and Lemmas 11.15 and 11.21.


We next proceed to estimate the error $\|u-u^{\delta}_{\alpha,n}\|_{\infty}$. To this end we let $u_{\alpha,n}$ denote the solution of (11.79) with $f^{\delta}$ replaced by $f$. By the triangle inequality we have that

$$\|u-u^{\delta}_{\alpha,n}\|_{\infty}\le\|u-u_{\alpha}\|_{\infty}+\|u_{\alpha}-u_{\alpha,n}\|_{\infty}+\|u_{\alpha,n}-u^{\delta}_{\alpha,n}\|_{\infty}.\qquad(11.92)$$

Since Lemma 11.16 has given an estimate for $\|u-u_{\alpha}\|_{\infty}$, it remains to estimate the last two terms on the right-hand side of inequality (11.92). In the next lemma we estimate the second term.

Lemma 11.23 If hypothesis (H2) holds, the integer $n$ is chosen to satisfy inequality (11.90) and $u\in\mathcal{R}(\mathcal{K}^{*})$, then there exists a positive constant $c$ such that for all $n\in\mathbb{N}$

$$\|u_{\alpha}-u_{\alpha,n}\|_{\infty}\le c\,\frac{n\,\mu^{-rn/d}}{\alpha^{3/2}}.$$

Proof By using equations (11.68) and (11.79), we have that

$$u_{\alpha}-u_{\alpha,n}=(\alpha I+\mathcal{A})^{-1}\mathcal{K}^{*}f-(\alpha I+\widetilde{\mathcal{A}}_n)^{-1}\mathcal{P}_n\mathcal{K}^{*}f.$$

This can be rewritten as

$$u_{\alpha}-u_{\alpha,n}=\big[(\alpha I+\mathcal{A})^{-1}-(\alpha I+\widetilde{\mathcal{A}}_n)^{-1}\big]\mathcal{K}^{*}f+(\alpha I+\widetilde{\mathcal{A}}_n)^{-1}(I-\mathcal{P}_n)\mathcal{K}^{*}f.$$

By employing the relation $f=\mathcal{K}u$, we derive that

$$u_{\alpha}-u_{\alpha,n}=(\alpha I+\widetilde{\mathcal{A}}_n)^{-1}(\widetilde{\mathcal{A}}_n-\mathcal{A})(\alpha I+\mathcal{A})^{-1}\mathcal{A}u+(\alpha I+\widetilde{\mathcal{A}}_n)^{-1}(I-\mathcal{P}_n)\mathcal{A}u.$$

By hypothesis, $u\in\mathcal{R}(\mathcal{K}^{*})$, and thus we write $u=\mathcal{K}^{*}v$ for some $v\in L^{\infty}(\Omega)$. Since for any positive number $\alpha$ the operator $\alpha I+\mathcal{K}\mathcal{K}^{*}$ is invertible and

$$\|(\alpha I+\mathcal{K}\mathcal{K}^{*})^{-1}\mathcal{K}\mathcal{K}^{*}\|_{2}\le1,$$

it follows that

$$\|(\alpha I+\mathcal{A})^{-1}\mathcal{A}u\|_{\infty}=\|\mathcal{K}^{*}(\alpha I+\mathcal{K}\mathcal{K}^{*})^{-1}\mathcal{K}\mathcal{K}^{*}v\|_{\infty}\le\|\mathcal{K}^{*}\|_{L^{2}(\Omega)\to L^{\infty}(\Omega)}\|v\|_{2}.$$

Hence there exists a positive constant $c$ such that

$$\|(\alpha I+\mathcal{A})^{-1}\mathcal{A}u\|_{\infty}\le c.$$

This estimate together with Lemmas 11.21 and 11.22 ensures that

$$\|(\alpha I+\widetilde{\mathcal{A}}_n)^{-1}(\widetilde{\mathcal{A}}_n-\mathcal{A})(\alpha I+\mathcal{A})^{-1}\mathcal{A}u\|_{\infty}\le c\,\frac{n\,\mu^{-rn/d}}{\alpha^{3/2}}.\qquad(11.93)$$


Moreover, using Lemma 11.22 and part (2) of Proposition 11.17, we conclude that

$$\|(\alpha I+\widetilde{\mathcal{A}}_n)^{-1}(I-\mathcal{P}_n)\mathcal{A}u\|_{\infty}\le c\,\frac{\mu^{-rn/d}}{\alpha^{3/2}}.\qquad(11.94)$$

Combining estimates (11.93) and (11.94) yields the desired estimate.

Next we estimate the third term on the right-hand side of inequality (11.92).

Lemma 11.24 If hypothesis (H2) holds and condition (11.90) is satisfied, then there exists a positive constant $c$ such that

$$\|u_{\alpha,n}-u^{\delta}_{\alpha,n}\|_{\infty}\le c\left(\frac{\delta}{\alpha}+\frac{\mu^{-rn/d}\,\delta}{\alpha^{3/2}}\right).$$

Proof From (11.79) we have that

$$(\alpha I+\widetilde{\mathcal{A}}_n)(u_{\alpha,n}-u^{\delta}_{\alpha,n})=\mathcal{P}_n\mathcal{K}^{*}(f-f^{\delta}).$$

We rewrite this in the form

$$u_{\alpha,n}-u^{\delta}_{\alpha,n}=(\alpha I+\mathcal{A})^{-1}\big[\mathcal{K}^{*}(f-f^{\delta})+(\mathcal{P}_n-I)\mathcal{K}^{*}(f-f^{\delta})+(\mathcal{A}-\widetilde{\mathcal{A}}_n)(u_{\alpha,n}-u^{\delta}_{\alpha,n})\big].$$

It follows from the second estimate of Lemma 11.15 and hypothesis (11.67) that

$$\|(\alpha I+\mathcal{A})^{-1}\mathcal{K}^{*}(f-f^{\delta})\|_{\infty}\le\|(\alpha I+\mathcal{A})^{-1}\mathcal{K}^{*}\|_{L^{2}(\Omega)\to L^{\infty}(\Omega)}\|f-f^{\delta}\|_{2}\le\frac{M\,\delta}{\alpha}.$$

The first estimate of Lemma 11.15 and part (2) of Proposition 11.17 ensure that there exists a positive constant $c$ such that

$$\|(\alpha I+\mathcal{A})^{-1}(\mathcal{P}_n-I)\mathcal{K}^{*}(f-f^{\delta})\|_{\infty}\le c\,\mu^{-rn/d}\,\sqrt{\alpha+M^{2}}\,\frac{\delta}{\alpha^{3/2}}.$$

By the first estimate of Lemma 11.15, Lemma 11.21 and condition (11.90), we obtain

$$\|(\alpha I+\mathcal{A})^{-1}(\mathcal{A}-\widetilde{\mathcal{A}}_n)(u_{\alpha,n}-u^{\delta}_{\alpha,n})\|_{\infty}\le\frac{1}{2}\,\|u_{\alpha,n}-u^{\delta}_{\alpha,n}\|_{\infty}.$$

Combining the above three estimates, we conclude that

$$\|u_{\alpha,n}-u^{\delta}_{\alpha,n}\|_{\infty}\le\frac{M\,\delta}{\alpha}+c\,\mu^{-rn/d}\,\sqrt{\alpha+M^{2}}\,\frac{\delta}{\alpha^{3/2}}+\frac{1}{2}\,\|u_{\alpha,n}-u^{\delta}_{\alpha,n}\|_{\infty}.$$

Solving this inequality for $\|u_{\alpha,n}-u^{\delta}_{\alpha,n}\|_{\infty}$ yields the desired estimate.


We are now ready to present an error bound for $\|u-u^{\delta}_{\alpha,n}\|_{\infty}$.

Theorem 11.25 If hypothesis (H2) holds, the integer $n$ is chosen to satisfy inequality (11.90) and $u\in\mathcal{R}(\mathcal{K}^{*})$, then there exists a positive constant $c$ such that

$$\|u-u^{\delta}_{\alpha,n}\|_{\infty}\le\|u-u_{\alpha}\|_{\infty}+c\left(\frac{\delta}{\alpha}+\frac{n\,\mu^{-rn/d}}{\alpha^{3/2}}\right).\qquad(11.95)$$

Proof The estimate in this theorem follows directly from (11.92) and Lemmas 11.23 and 11.24.

We present in the next corollary two special results.

Corollary 11.26 Suppose that hypothesis (H2) holds and the integer $n$ is chosen to satisfy inequality (11.90).

(1) If $\alpha$ is chosen such that $\delta/\alpha\to0$ as $\delta\to0$, the integer $n$ is chosen so that $n\mu^{-rn/d}=O(\sqrt{\alpha}\,\delta)$ as $\delta\to0$, and $u\in\mathcal{R}(\mathcal{K}^{*})$, then

$$\|u-u^{\delta}_{\alpha,n}\|_{\infty}\to0\quad\text{as }\delta\to0,\ \alpha\to0.\qquad(11.96)$$

(2) If hypothesis (H1) holds, then

$$\|u-u^{\delta}_{\alpha,n}\|_{\infty}=O\left(\alpha^{\nu}+\frac{\delta}{\alpha}+\frac{n\,\mu^{-rn/d}}{\alpha^{3/2}}\right)\quad\text{as }\delta\to0.\qquad(11.97)$$

Proof (1) Using the choices of $\alpha$ and $n$ in estimate (11.95) leads to

$$\|u-u^{\delta}_{\alpha,n}\|_{\infty}\le\|u-u_{\alpha}\|_{\infty}+c\,\frac{\delta}{\alpha}.$$

By the first result of Lemma 11.16, as $\alpha\to0$ the first term on the right-hand side goes to zero, proving the result.

(2) Since

$$(\mathcal{K}^{*}\mathcal{K})^{\nu}\mathcal{K}^{*}=\mathcal{K}^{*}(\mathcal{K}\mathcal{K}^{*})^{\nu}$$

(cf. p. 16 of [116]), we obtain that $\mathcal{R}(\mathcal{A}^{\nu}\mathcal{K}^{*})\subseteq\mathcal{R}(\mathcal{K}^{*})$. When hypothesis (H1) holds, we have that $u\in\mathcal{R}(\mathcal{K}^{*})$. By Theorem 11.25, estimate (11.95) holds. Using the second result of Lemma 11.16, estimate (11.95) reduces to the desired result.

11.3.3 Regularization parameter choice strategies

Solving equation (11.79) (or (11.81)) numerically requires appropriate choices of the regularization parameter $\alpha$. In this subsection we present a priori and a posteriori strategies for the choice of this parameter and estimate the error bounds of the corresponding approximate solutions.

We first present an a priori parameter choice strategy. We propose to choose the parameter $\alpha$ such that the right-hand side of estimate (11.97) is minimized. Specifically, we suppose that $u\in\mathcal{R}(\mathcal{A}^{\nu}\mathcal{K}^{*})$ with $0<\nu\le1$, where we call $\nu$ the smoothness order of the exact solution $u$, and propose the following rule for the choice of the parameter $\alpha$.

Rule 11.27 Given the noise bound $\delta>0$ and the smoothness order $\nu\in(0,1]$ of the exact solution $u$, we choose $\alpha$ to satisfy

$$\alpha\sim\delta^{\frac{1}{\nu+1}}$$

and a positive integer $n$ to satisfy

$$n\,\mu^{-rn/d}=O(\sqrt{\alpha}\,\delta)\quad\text{as }\delta\to0.$$
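A direct transcription of this rule might look as follows; the proportionality constants `c_alpha` and `c_n` are hypothetical stand-ins for the $\sim$ and $O(\cdot)$ relations:

```python
import math

def a_priori_parameters(delta, nu, r=1.0, d=1, mu=2, c_alpha=1.0, c_n=1.0):
    # alpha ~ delta**(1/(nu+1)); c_alpha is an assumed proportionality constant
    alpha = c_alpha * delta ** (1.0 / (nu + 1.0))
    # smallest n with n * mu**(-r*n/d) <= c_n * sqrt(alpha) * delta
    n = 1
    while n * mu ** (-r * n / d) > c_n * math.sqrt(alpha) * delta:
        n += 1
    return alpha, n
```

For example, with $\delta=10^{-3}$, $\nu=1$ and the default constants this yields $\alpha=\delta^{1/2}\approx0.0316$ and $n=17$.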

Rule 11.27 uses the a priori parameter $\nu$, which is normally not available; for this reason Rule 11.27 is called an a priori strategy. If there is a way to obtain $\nu$, the parameter $\alpha$ and integer $n$ chosen in Rule 11.27 are then used in equation (11.81) to obtain an approximate solution $u^{\delta}_{\alpha,n}$. In the next theorem we present the convergence rate of the approximate solution corresponding to the above choice of $\alpha$.

Theorem 11.28 Suppose that $K\in W^{r,\infty}(\Omega\times\Omega)$ with $0<r\le k$. If $\alpha$ and $n$ are chosen according to Rule 11.27 and hypothesis (H1) is satisfied, then

$$\|u-u^{\delta}_{\alpha,n}\|_{\infty}=O\big(\delta^{\frac{\nu}{\nu+1}}\big)\quad\text{as }\delta\to0.\qquad(11.98)$$

Proof When $\alpha$ and $n$ are chosen according to Rule 11.27, a straightforward computation confirms that inequality (11.90) is satisfied. The assumption on $K$ of this theorem, together with Proposition 11.17, ensures that hypothesis (H2) is satisfied. Hence the hypothesis of Corollary 11.26 holds. Substituting the choices of $\alpha$ and $n$ into the right-hand side of (11.97) in Corollary 11.26, we obtain estimate (11.98).

Because the smoothness order $\nu$ of the exact solution is normally not known, it is desirable to develop a strategy for choosing the parameter $\alpha$ without a priori information on $\nu$. Next we present an a posteriori parameter choice strategy. The idea of the strategy was first used in [216] and recently developed further in [207].

For a given $\alpha>0$ we let $n(\alpha)$ be a positive integer associated with $\alpha$. Following [216], we assume that there exist two increasing continuous functions $\varphi(\alpha)$ and $\lambda(\alpha)$ with $\varphi(0)=0$ and $\lambda(0)=0$ such that

$$\|u-u^{\delta}_{\alpha,n(\alpha)}\|_{\infty}\le\varphi(\alpha)+\frac{\delta}{\lambda(\alpha)}.\qquad(11.99)$$

The assumption (11.99) leads us to the choice

$$\alpha=\alpha_{\mathrm{opt}}=(\varphi\lambda)^{-1}(\delta).$$

With this choice of the parameter we have that

$$\|u-u^{\delta}_{\alpha_{\mathrm{opt}},n(\alpha_{\mathrm{opt}})}\|_{\infty}\le2\varphi\big((\varphi\lambda)^{-1}(\delta)\big).\qquad(11.100)$$

Observing that

$$\alpha_{\mathrm{opt}}=\max\left\{\alpha:\varphi(\alpha)\le\frac{\delta}{\lambda(\alpha)}\right\},$$

in practice we select the regularization parameter from a finite set

$$M(N)=\left\{\alpha_i:\alpha_i\in\Delta_N,\ \varphi(\alpha_i)\le\frac{\delta}{\lambda(\alpha_i)}\right\},$$

where

$$\Delta_N=\{\alpha_i:0<\alpha_0<\alpha_1<\cdots<\alpha_N\}$$

is a given set of distinct positive numbers, which will be specified later. We consider

$$\alpha^{*}=\max\{\alpha_i:\alpha_i\in M(N)\}$$

as an approximation of $\alpha_{\mathrm{opt}}$ under appropriate conditions.

When $\varphi$ is unknown, the above choice of the parameter is not feasible. It was suggested in [207, 216] to introduce the set

$$M^{+}(N)=\left\{\alpha_j:\alpha_j\in\Delta_N,\ \|u^{\delta}_{\alpha_j,n(\alpha_j)}-u^{\delta}_{\alpha_i,n(\alpha_i)}\|_{\infty}\le\frac{4\delta}{\lambda(\alpha_i)},\ i=0,1,\dots,j\right\}$$

to replace $M(N)$, and to choose

$$\alpha^{+}=\max\{\alpha_i:\alpha_i\in M^{+}(N)\}$$

as an approximation of $\alpha^{*}$ accordingly. We have the next lemma, which is essentially Theorem 2.1 in [216].

Lemma 11.29 Suppose that estimate (11.99) holds. If $M(N)\ne\emptyset$, $\Delta_N\setminus M(N)\ne\emptyset$ and, for any $\alpha_i\in\Delta_N$, $i=1,2,\dots,N$, the function $\lambda(\alpha)$ satisfies $\lambda(\alpha_i)\le q\lambda(\alpha_{i-1})$ for a fixed constant $q$, then

$$\|u-u^{\delta}_{\alpha^{+},n(\alpha^{+})}\|_{\infty}\le6q\,\varphi\big((\varphi\lambda)^{-1}(\delta)\big).\qquad(11.101)$$


The general result stated in the above lemma will be used to develop an a posteriori parameter choice strategy. To this end, for any fixed $\alpha$ we choose $n=n(\alpha)$ to be the smallest positive integer satisfying the condition

$$n\,\mu^{-rn/d}\le\min\left\{c_{1}\sqrt{\alpha}\,\delta,\ \frac{1}{2c_{0}}\,\frac{\alpha^{3/2}}{\sqrt{\alpha+M^{2}}}\right\}.\qquad(11.102)$$

Note that, according to the condition above, $n(\alpha)$ is uniquely determined by the variable $\alpha$. If hypotheses (H1) and (H2) hold, then for such a pair $(\alpha,n)$ the estimate (11.97) holds. From (11.97) there exist positive constants $c_2$, $c_3$ such that

$$\|u-u^{\delta}_{\alpha,n(\alpha)}\|_{\infty}\le c_{2}\alpha^{\nu}+c_{3}\,\frac{\delta}{\alpha}.\qquad(11.103)$$

We choose $\varphi(\alpha)=c_{2}\alpha^{\nu}$, $\lambda(\alpha)=\frac{\alpha}{c_{3}}$ and, for $q_{0}>1$ and $\rho>0$, a positive integer $N$ determined by

$$\rho\,\delta\,q_{0}^{N-1}\le1<\rho\,\delta\,q_{0}^{N},$$

and, as in [207, 216], specify the finite set by

$$\Delta_N=\{\alpha_i=\rho\,\delta\,q_{0}^{i}:i=0,1,\dots,N\}.$$

The introduction of the parameter $\rho$ allows a larger degree of freedom in the choice of the regularization parameter.

Now we have the following rule for choosing the regularization parameter $\alpha=\alpha^{+}$.

Rule 11.30 Choose $\alpha=\alpha^{+}$ by

$$\alpha^{+}=\max\left\{\alpha_j\in\Delta_N:\|u^{\delta}_{\alpha_j,n(\alpha_j)}-u^{\delta}_{\alpha_i,n(\alpha_i)}\|_{\infty}\le\frac{4c_{3}\,\delta}{\alpha_i},\ i=0,1,\dots,j\right\}.\qquad(11.104)$$
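A sketch of this selection, treating the computed solutions $u^{\delta}_{\alpha_j,n(\alpha_j)}$ as given inputs (here represented by arrays on a common grid; the solver producing them is outside the sketch):

```python
import numpy as np

def choose_alpha_plus(solutions, alphas, delta, c3=1.0):
    # Rule 11.30: alpha+ = max{alpha_j : ||u_j - u_i||_inf <= 4*c3*delta/alpha_i
    # for all i = 0, ..., j}; alphas is the increasing grid alpha_i = rho*delta*q0**i.
    chosen = alphas[0]
    for j in range(len(alphas)):
        if all(np.max(np.abs(solutions[j] - solutions[i])) <= 4.0 * c3 * delta / alphas[i]
               for i in range(j + 1)):
            chosen = alphas[j]
    return chosen
```

Note that the rule needs only pairwise sup-norm distances of already-computed solutions, no knowledge of $\nu$.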

Rule 11.30 does not use the smoothness order $\nu$ of the exact solution or any other a priori information; hence it is an a posteriori choice strategy for the regularization parameter. In the next theorem we present the convergence order of the approximate solution corresponding to the above choice of $\alpha^{+}$.

Theorem 11.31 Suppose that hypotheses (H1) and (H2) hold. If $\alpha=\alpha^{+}$ is chosen according to Rule 11.30, then

$$\|u-u^{\delta}_{\alpha^{+},n(\alpha^{+})}\|_{\infty}=O\big(\delta^{\frac{\nu}{\nu+1}}\big)\quad\text{as }\delta\to0.\qquad(11.105)$$

Proof This theorem is a direct consequence of Lemma 11.29. It suffices to verify that the hypotheses of the lemma are satisfied.


First of all, according to (11.103), estimate (11.99) holds with $\varphi(\alpha)=c_{2}\alpha^{\nu}$ and $\lambda(\alpha)=\frac{\alpha}{c_{3}}$. It can be verified that $\lambda(\alpha_i)=q_{0}\lambda(\alpha_{i-1})$ for $i=1,\dots,N$, and that

$$\frac{c_{2}}{c_{3}}\,\rho^{\nu+1}\delta^{\nu}\le1\quad\text{and}\quad\frac{c_{2}}{c_{3}}>\delta\quad\text{as }\delta\to0.$$

From this we have that

$$\varphi(\alpha_0)\le\frac{\delta}{\lambda(\alpha_0)}\quad\text{and}\quad\varphi(\alpha_N)>\frac{\delta}{\lambda(\alpha_N)}.$$

We conclude that $\alpha_0\in M(N)$ and $\alpha_N\notin M(N)$, and thus

$$M(N)\ne\emptyset\quad\text{and}\quad\Delta_N\setminus M(N)\ne\emptyset.$$

We have proved that all hypotheses of Lemma 11.29 are satisfied. Therefore, from (11.101), we have that

$$\|u-u^{\delta}_{\alpha^{+},n(\alpha^{+})}\|_{\infty}\le6q_{0}\,\varphi\big((\varphi\lambda)^{-1}(\delta)\big)=c_{4}\,\delta^{\frac{\nu}{\nu+1}},\qquad(11.106)$$

with $c_{4}=6q_{0}\,c_{2}^{\frac{1}{\nu+1}}c_{3}^{\frac{\nu}{\nu+1}}$.

11.4 Numerical experiments

In this section we present numerical results which verify the efficiency of the methods proposed in Sections 11.2 and 11.3.

11.4.1 A numerical example for the multiscale Galerkin method

We now present a numerical example to illustrate the method proposed in Section 11.2 and to confirm the theoretical estimates established in that section.

For this purpose we consider the integral operator $\mathcal{K}:L^{2}[0,1]\to L^{2}[0,1]$ defined by

$$(\mathcal{K}u)(x)=\int_{0}^{1}K_{1}(x,t)u(t)\,dt,\quad x\in[0,1],\qquad(11.107)$$

with the kernel

$$K_{1}(x,t)=\begin{cases}x(1-t),&0\le x\le t\le1,\\t(1-x),&0\le t<x\le1.\end{cases}$$

The operator $\mathcal{K}$ is positive semi-definite with respect to $L^{2}[0,1]$, and it is a linear compact self-adjoint operator from $L^{2}[0,1]$ to $L^{2}[0,1]$. We consider the integral equation of the first kind (11.3) with

$$f(x)=\frac{5}{12}\,x(x-1)(x^{2}-x-1),\quad x\in[0,1].$$

Clearly, the unique solution of equation (11.3) is given by

$$u^{*}(x)=5x(1-x),\quad x\in[0,1].$$

Moreover, $u^{*}=\mathcal{K}\omega\in\mathcal{R}(\mathcal{K})$ with $\omega=10$. This means that condition (H1) is satisfied with $\nu=1$.
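Both identities can be checked by quadrature; the following sketch (midpoint rule, grid sizes chosen arbitrarily) verifies that $\mathcal{K}$ applied to the constant $\omega\equiv10$ reproduces $u^{*}$ and that $\mathcal{K}u^{*}=f$:

```python
import numpy as np

def K1(x, t):
    # Green's-function kernel of (11.107)
    return np.where(x <= t, x * (1.0 - t), t * (1.0 - x))

def apply_K(u, xs, m=2000):
    # midpoint-rule approximation of (Ku)(x) = int_0^1 K1(x,t) u(t) dt
    t = (np.arange(m) + 0.5) / m
    return np.array([np.sum(K1(x, t) * u(t)) / m for x in xs])

xs = np.linspace(0.0, 1.0, 11)
u_star = lambda s: 5.0 * s * (1.0 - s)
f = lambda s: (5.0 / 12.0) * s * (s - 1.0) * (s ** 2 - s - 1.0)

# u* = K(omega) with omega = 10, and K u* = f
err_source = np.max(np.abs(apply_K(lambda t: 10.0 + 0.0 * t, xs) - u_star(xs)))
err_data = np.max(np.abs(apply_K(u_star, xs) - f(xs)))
```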

Let $X_n$ be the space of piecewise linear functions on $[0,1]$ with knots at $j/2^{n}$, $j=1,2,\dots,2^{n}-1$. We decompose $X_n$ into the orthogonal direct sum of subspaces

$$X_{n}=X_{0}\oplus^{\perp}W_{1}\oplus^{\perp}\cdots\oplus^{\perp}W_{n},$$

where $X_0$ is the space of linear polynomials on $[0,1]$ and $W_i$, $i=2,3,\dots,n$, are constructed recursively once the initial space $W_1$ is given. We choose a basis for $X_0$,

$$w_{00}(t)=2-3t\quad\text{and}\quad w_{01}(t)=-1+3t,$$

and a basis for the space $W_1$,

$$w_{10}(t)=\begin{cases}1-\frac{9}{2}t,&t\in[0,\frac12],\\[2pt]-1+\frac{3}{2}t,&t\in(\frac12,1],\end{cases}\qquad w_{11}(t)=\begin{cases}\frac12-\frac{3}{2}t,&t\in[0,\frac12],\\[2pt]-\frac{7}{2}+\frac{9}{2}t,&t\in(\frac12,1].\end{cases}$$

It follows from a simple computation that

$$(\mathcal{K}^{2}u)(x)=\int_{0}^{1}K_{2}(x,t)u(t)\,dt,\quad x\in[0,1],\qquad(11.108)$$

where

$$K_{2}(x,t)=\begin{cases}x(t-1)(-2t+t^{2}+x^{2})/6,&0\le x\le t\le1,\\t(x-1)(t^{2}-2x+x^{2})/6,&0\le t<x\le1.\end{cases}$$

We remark that $K_{2}\in C^{2}([0,1]\times[0,1])$. For the orthogonal projection $\mathcal{P}_n:L^{2}[0,1]\to X_n$ we have that

$$\|(I-\mathcal{P}_n)\mathcal{K}^{2}\|\le\frac{\sqrt{2}}{2^{2n+1}}.$$


Moreover, by the moment inequality in [220], we know that the approximation properties (11.20) and (11.21) are valid with

$$\theta_{n}=\frac{\sqrt{2\sqrt{2}}}{2^{n+1}}.$$

In the MAM we choose

$$n=k+m\quad\text{with }k=3,\ m=3.$$

Hence we use the decomposition

$$X_{3+3}=X_{3}\oplus^{\perp}W_{3+1}\oplus^{\perp}W_{3+2}\oplus^{\perp}W_{3+3}$$

and

$$\theta_{6}=\frac{\sqrt{2\sqrt{2}}}{2^{7}}.$$

We choose a perturbed right-hand side

$$f^{\delta}=f+\delta v,\qquad(11.109)$$

where $v\in X$ has uniformly distributed random values with $\|v\|\le1$, and where

$$\delta=\frac{e}{100}\,\|f\|\quad\text{with }e\in\{0.125,\,0.25,\,0.5,\,1.0,\,1.5,\,2.0\}.$$
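A sketch of generating such a perturbation, with a seeded random generator as an assumption and the Euclidean norm of the sample vector standing in for $\|\cdot\|$:

```python
import numpy as np

rng = np.random.default_rng(0)  # seeded for reproducibility (an assumption)

def perturbed_rhs(f_vals, e):
    # f_delta = f + delta*v with delta = (e/100)*||f|| and a random v
    # normalized so that ||v|| = 1 (hence ||f_delta - f|| = delta exactly)
    delta = (e / 100.0) * np.linalg.norm(f_vals)
    v = rng.uniform(-1.0, 1.0, size=f_vals.shape)
    v /= np.linalg.norm(v)
    return f_vals + delta * v, delta
```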

We now present the numerical results of our experiments in Tables 11.1–11.3. The numerical results in Table 11.1 are for an a priori parameter choice. According to Theorem 11.7, we choose the a priori parameter

$$\alpha=0.01\,\theta_{k}\quad\text{with }k=3,$$

since $\nu=1$ in this example. Table 11.2 shows numerical results for an a priori parameter choice with the error level $\delta$ fixed, that is, $e=0.25$. It illustrates that the MAM takes effect, since the absolute error $\|u^{\delta}_{\alpha,3,m}-u^{*}\|$ decreases as $m$ increases in this case. In Tables 11.1 and 11.2 we also compare the MAM with the Galerkin method. Table 11.3 is associated with an

Table 11.1 Numerical results for the a priori parameter with α = 0.01 θ₃

e      ‖u^δ_{α,3,3} − u*‖   ‖u^δ_{α,3+3} − u*‖   ‖u^δ_{α,3,3} − u*‖ − ‖u^δ_{α,3+3} − u*‖
2.0    1.3412×10^−2         1.3412×10^−2         4.7452×10^−7
1.5    1.1230×10^−2         1.1229×10^−2         1.7932×10^−7
1.0    1.0549×10^−2         1.0549×10^−2         7.9820×10^−8
0.5    9.9053×10^−3         9.9053×10^−3         1.3435×10^−8
0.25   9.8348×10^−3         9.8348×10^−3         −3.9297×10^−10


Table 11.2 Numerical results for the a priori parameter with e = 0.25 and α = 0.01 θ₃

m    ‖u^δ_{α,3,m} − u*‖   ‖u^δ_{α,3+m} − u*‖   ‖u^δ_{α,3,m} − u*‖ − ‖u^δ_{α,3+m} − u*‖
0    1.1447×10^−2         1.1447×10^−2         0
1    1.0241×10^−2         1.0059×10^−2         1.3841×10^−2
2    9.9703×10^−3         9.9698×10^−3         5.7395×10^−7
3    9.9645×10^−3         9.9645×10^−3         5.5529×10^−9

Table 11.3 Numerical results for the a posteriori parameter with θ_{k+m} = θ₆

e       α              ‖u^δ_{α,3,3} − u*‖
2.0     9.0117×10^−3   7.5321×10^−2
1.5     9.0117×10^−3   7.5578×10^−2
1.0     3.0039×10^−3   2.7045×10^−2
0.5     3.0039×10^−3   2.7188×10^−2
0.25    3.0039×10^−3   2.7470×10^−2
0.125   1.0013×10^−3   9.5639×10^−3

a posteriori parameter choice, and we use Algorithm 11.12 to determine the parameter $\alpha$. The numerical results in these three tables demonstrate that the MAM is an efficient method for solving ill-posed problems.

11.4.2 Numerical examples for the multiscale collocation method

In this subsection we present numerical results to demonstrate the efficiency and accuracy of the method proposed in Section 11.3.

We consider the integral operator $\mathcal{K}:L^{\infty}[0,1]\to L^{\infty}[0,1]$ defined by

$$(\mathcal{K}u)(s)=\int_{0}^{1}K(s,t)u(t)\,dt,\quad s\in[0,1],\qquad(11.110)$$

where

$$K(s,t)=\begin{cases}s(1-t),&0\le s\le t\le1,\\t(1-s),&0\le t<s\le1.\end{cases}$$

Since the kernel $K$ is continuous on $[0,1]\times[0,1]$, the operator $\mathcal{K}$ is compact on $L^{\infty}[0,1]$. Hence the integral equation (11.64), that is, $\mathcal{K}u=f$ with this kernel, is ill posed. We solve this equation using the fast piecewise linear collocation method. The numerical results produced by this method with both a priori and a posteriori regularization parameter choices will be presented.


We now describe the approximation spaces $X_n$ and the corresponding collocation functional spaces $\mathcal{L}_n$. Specifically, for each positive integer $n$ we choose $X_n$ to be the space of piecewise linear polynomials on $[0,1]$ with knots at $j/2^{n}$, $j=1,2,\dots,2^{n}-1$. Here $\dim X_{n}=2^{n+1}$, and $\mu=2$ corresponds to the general case. We then decompose $X_n$ into an orthogonal direct sum of subspaces of different scales,

$$X_{n}=X_{0}\oplus^{\perp}W_{1}\oplus^{\perp}\cdots\oplus^{\perp}W_{n},$$

where $X_0$ has the basis

$$w_{00}(s)=2-3s\quad\text{and}\quad w_{01}(s)=-1+3s,$$

and $W_1$ has the basis

$$w_{10}(s)=\begin{cases}1-\frac{9}{2}s,&s\in[0,\frac12],\\[2pt]-1+\frac{3}{2}s,&s\in(\frac12,1],\end{cases}\qquad w_{11}(s)=\begin{cases}\frac12-\frac{3}{2}s,&s\in[0,\frac12],\\[2pt]-\frac{7}{2}+\frac{9}{2}s,&s\in(\frac12,1].\end{cases}$$

The spaces $W_{i}=\operatorname{span}\{w_{ij}:j\in\mathbb{Z}_{2^{i}}\}$ are generated recursively from $W_1$, following the general construction developed in [66, 72, 200, 201].

The collocation functional space $\mathcal{L}_n$ corresponding to $X_n$ is constructed

likewise For any s isin [0 1] we use δs to denote the linear functional defined forfunctions v isin C[0 1] by 〈δs v〉 = v(s) and extended to all functions in Linfin[0 1]by the HahnndashBanach theorem We let L0 = span01 02 with 00 = δ 1

3

01 = δ 23

and decompose \(\mathcal{L}_n\) as
\[
\mathcal{L}_n = \mathcal{L}_0 \oplus V_1 \oplus \cdots \oplus V_n,
\]
where \(V_1 = \operatorname{span}\{\ell_{10}, \ell_{11}\}\) with
\[
\ell_{10} = -\tfrac{3}{2}\delta_{1/3} + \tfrac{1}{2}\delta_{2/3} + \delta_{1/6}, \qquad
\ell_{11} = \tfrac{1}{2}\delta_{1/3} - \tfrac{3}{2}\delta_{2/3} + \delta_{5/6},
\]
and the spaces \(V_i = \operatorname{span}\{\ell_{ij} : j \in \mathbb{Z}_{2^i}\}\) are constructed recursively from \(V_1\). The approximation spaces and the corresponding collocation functionals have the properties outlined in Section 11.3.2.
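Both the basis functions of \(W_1\) and the functionals of \(V_1\) above must annihilate linear polynomials (vanishing moments of order 2). As a quick sanity check, the sketch below verifies these conditions in exact rational arithmetic; the names `w10_coef`, `l10`, etc. are ours, not the book's.

```python
from fractions import Fraction as F

# Each wavelet is linear on [0, 1/2] and on (1/2, 1]; store (a0, b0, a1, b1)
# for the two pieces a0 + b0*s and a1 + b1*s, read off the formulas above.
w10_coef = (F(1), F(-9, 2), F(-1), F(3, 2))
w11_coef = (F(1, 2), F(-3, 2), F(-7, 2), F(9, 2))

def moment(a0, b0, a1, b1, m):
    """Exact integral of w(s) * s^m over [0, 1] for a two-piece linear w."""
    def piece(a, b, lo, hi):
        return a * (hi**(m + 1) - lo**(m + 1)) / (m + 1) \
             + b * (hi**(m + 2) - lo**(m + 2)) / (m + 2)
    return piece(a0, b0, F(0), F(1, 2)) + piece(a1, b1, F(1, 2), F(1))

for coef in (w10_coef, w11_coef):
    for m in (0, 1):                     # against p(s) = 1 and p(s) = s
        assert moment(*coef, m) == 0     # (w_{1j}, p) = 0

# Collocation functionals of V1 applied to a function v.
def l10(v): return -F(3, 2) * v(F(1, 3)) + F(1, 2) * v(F(2, 3)) + v(F(1, 6))
def l11(v): return  F(1, 2) * v(F(1, 3)) - F(3, 2) * v(F(2, 3)) + v(F(5, 6))

for ell in (l10, l11):
    assert ell(lambda s: F(1)) == 0      # <l_{1j}, 1> = 0
    assert ell(lambda s: s) == 0         # <l_{1j}, s> = 0
```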

By direct computation we obtain that
\[
\|\mathcal{K}\|_\infty = \|\mathcal{K}^*\|_\infty \le \max_{0\le t\le 1}\int_0^1 |K(t,s)|\,ds = \frac18.
\]
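The bound 1/8 is easy to confirm numerically: for this kernel \(\int_0^1 K(t,s)\,ds = t(1-t)/2\), whose maximum over [0,1] is 1/8, attained at t = 1/2. A short check (plain NumPy with a composite trapezoidal rule, our own illustration):

```python
import numpy as np

def kernel(s, t):
    # K(s,t) = s(1-t) for s <= t, and t(1-s) for t < s
    return np.where(s <= t, s * (1.0 - t), t * (1.0 - s))

tt = np.linspace(0.0, 1.0, 4001)                 # quadrature nodes in t
h = tt[1] - tt[0]
w = np.full_like(tt, h); w[0] = w[-1] = h / 2    # trapezoid weights

ss = np.linspace(0.0, 1.0, 501)
row_integrals = (kernel(ss[:, None], tt[None, :]) * w).sum(axis=1)

assert np.allclose(row_integrals, ss * (1.0 - ss) / 2.0, atol=1e-6)
assert abs(row_integrals.max() - 0.125) < 1e-6   # the bound 1/8 is attained
```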


Moreover, since \(X_n\) consists of the piecewise linear polynomials on [0,1] with knots at \(j/2^n\), \(j = 1, 2, \ldots, 2^n - 1\), it is easily verified that
\[
\|\mathcal{K}(\mathcal{I}-\mathcal{Q}_n)\|_\infty \le \frac{\sqrt2}{3}\,2^{-\frac32 n}, \qquad
\|(\mathcal{I}-\mathcal{A}_n)\mathcal{K}^*\|_\infty \le \frac18\,2^{-2n},
\]
\[
\|\mathcal{K}(\mathcal{I}-\mathcal{Q}_n)\|_2 \le \frac{\sqrt2}{3}\,2^{-\frac32 n}, \qquad
\|(\mathcal{I}-\mathcal{A}_n)\mathcal{K}^*\|_2 \le \frac{2}{\sqrt3}\,2^{-2n},
\]
and
\[
\|(\mathcal{I}-\mathcal{A}_n)\mathcal{K}^*\|_{L^2[0,1]\to L^\infty[0,1]} \le \frac{\sqrt2}{3}\,2^{-\frac32 n}.
\]
Therefore hypothesis (H2) holds with \(r = \frac32\).

For comparison purposes, in our numerical experiment we choose the right-hand side of the equation as
\[
f(s) = \frac{1000}{40320}\,s(s-1)\bigl(-17 - 17s + 11s^2 + 11s^3 - 3s^4 - 3s^5 + s^6\bigr), \quad s \in [0,1],
\]
so that we have the exact solution
\[
u(s) = -\frac{1000}{720}\,s(s-1)\bigl(3 + 3s - 2s^2 - 2s^3 + s^4\bigr), \quad s \in [0,1].
\]
Moreover,
\[
u = \mathcal{A}\mathcal{K}^*\omega \in R(\mathcal{A}\mathcal{K}^*) \quad\text{with}\quad \omega = 1000.
\]
Hence hypothesis (H1) is satisfied with \(\nu = 1\). The perturbed right-hand side is chosen as \(f^\delta = f + \delta v\), where \(v \in L^2[0,1]\) has uniformly distributed random values with \(\|v\|_2 = 1\), and \(\delta = \|f\|_2 \cdot e/100 = 0.09488 \cdot e/100\) with \(e \in \{1, 3, 5, 7, 9\}\). The linear system of the multiscale regularized equation is solved by the augmentation method described in [71].
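The stated pair can be cross-checked by quadrature: applying \(\mathcal{K}\) to u should reproduce f, and \(\|f\|_2\) should come out near 0.09488. The sketch below is our own consistency check, not code from the book.

```python
import numpy as np

def kernel(s, t):
    return np.where(s <= t, s * (1.0 - t), t * (1.0 - s))

def f(s):
    return (1000.0 / 40320.0) * s * (s - 1) * (
        -17 - 17 * s + 11 * s**2 + 11 * s**3 - 3 * s**4 - 3 * s**5 + s**6)

def u(s):
    return -(1000.0 / 720.0) * s * (s - 1) * (3 + 3 * s - 2 * s**2 - 2 * s**3 + s**4)

t = np.linspace(0.0, 1.0, 8001)
h = t[1] - t[0]
w = np.full_like(t, h); w[0] = w[-1] = h / 2      # trapezoid weights

for s0 in (0.1, 0.25, 0.5, 0.8):                  # (Ku)(s0) should equal f(s0)
    Ku = np.sum(w * kernel(s0, t) * u(t))
    assert abs(Ku - f(s0)) < 1e-6

f_norm = np.sqrt(np.sum(w * f(t)**2))             # ||f||_2, used in the noise level
assert abs(f_norm - 0.09488) < 1e-4
```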

We present the results of two numerical experiments. The goal of the first experiment is to confirm the convergence result for the a priori parameter choice. For given \(\delta\), following Rule 11.27, we choose \(\alpha = 0.005\,\delta^{1/2}\) and n such that \(n2^{-3n/2} \le 15\,\delta^{5/4}\) and \((n-1)2^{-3(n-1)/2} > 15\,\delta^{5/4}\). The numerical results are listed in Table 11.4, where the "compression rates" are computed by dividing the number of nonzero entries of the matrix by the total number of entries of the matrix. Table 11.4 shows that the computed convergence rate is \(O(\delta^{1/2})\) for \(\nu = 1\), which is consistent with the theoretical estimate given in Theorem 11.28. In Figure 11.1 we illustrate the approximate solutions with different parameters.
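The parameter choices above are easy to reproduce. The sketch below implements our reading of Rule 11.27 (with the stated value \(\|f\|_2 = 0.09488\)) and recovers the n column of Table 11.4.

```python
def a_priori_parameters(e, f_norm=0.09488):
    """A priori choice per Rule 11.27: alpha = 0.005 * sqrt(delta), and the
    smallest n with n * 2**(-3n/2) <= 15 * delta**(5/4)."""
    delta = f_norm * e / 100.0
    alpha = 0.005 * delta**0.5
    n = 1
    while n * 2.0**(-1.5 * n) > 15.0 * delta**1.25:
        n += 1                      # n * 2^{-3n/2} is decreasing for n >= 1
    return alpha, n

# Reproduces the n column of Table 11.4 for e in {1, 3, 5, 7, 9}.
assert [a_priori_parameters(e)[1] for e in (1, 3, 5, 7, 9)] == [8, 7, 6, 5, 5]
# And the first alpha value of the table.
assert abs(a_priori_parameters(1)[0] - 1.5401e-4) < 1e-8
```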


Table 11.4  A priori parameter choices

  e   α = 0.005 δ^{1/2}   n   Compression rate   ‖u − u^δ_{α,n}‖_∞   ‖u − u^δ_{α,n}‖_∞ / δ^{1/2}
  1   1.5401e-4           8   0.0145             0.0260              0.8434
  3   2.6676e-4           7   0.0286             0.0395              0.7398
  5   3.4435e-4           6   0.0557             0.0547              0.7949
  7   4.0749e-4           5   0.1055             0.0707              0.8680
  9   4.6204e-4           5   0.1055             0.0892              0.9657

[Four plot panels omitted; each plots over [0, 1] with vertical axis from 0 to 1.4.]

Figure 11.1 (a) The original function; (b) the restored function with n = 8, e = 1 and α = 0.005 δ^{1/2}; (c) the restored function with n = 6, e = 5 and α = 0.005 δ^{1/2}; (d) the restored function with n = 5, e = 9 and α = 0.005 δ^{1/2}.

In the second experiment we choose \(\alpha = \alpha_+\) following Rule 11.30, with \(\rho = 0.005\) and \(q_0 = 3\). The numerical results are shown in Table 11.5 and illustrated in Figure 11.2. These results show that the a posteriori parameter choice gives results comparable with those of the a priori parameter choice. The computed convergence rate \(O(\delta^{1/2})\) confirms the theoretical estimate given in Theorem 11.31 with \(\nu = 1\).


Table 11.5  A posteriori parameter choices

  e   α           n   Compression rate   ‖u − u^δ_{α,n}‖_∞   ‖u − u^δ_{α,n}‖_∞ / δ^{1/2}
  1   1.2809e-4   8   0.0145             0.0235              0.7907
  3   3.8426e-4   7   0.0286             0.0576              0.7199
  5   6.4044e-4   6   0.0557             0.0410              0.7681
  7   8.9662e-4   5   0.1055             0.0608              0.7457
  9   1.1528e-3   5   0.1055             0.0779              0.8428

[Four plot panels omitted; each plots over [0, 1] with vertical axis from 0 to 1.4.]

Figure 11.2 (a) The original function; (b) the restored function with n = 8, e = 1 and α = 1.2809e-4; (c) the restored function with n = 6, e = 5 and α = 6.4044e-4; (d) the restored function with n = 5, e = 9 and α = 1.1528e-3.

11.5 Bibliographical remarks

The material presented in this chapter was mainly chosen from two papers [78, 79]. The a posteriori choice strategy of regularization parameters described in this chapter is based on the ideas developed in [207, 216]. For other developments of a priori and a posteriori parameter choice strategies, the


reader is referred to [58, 118, 127–129, 191, 192, 206, 219, 220, 222, 224, 228, 248, 275].

We review several other recent developments in the numerical solution of ill-posed operator equations closely related to the research presented in this chapter. A class of regularization methods for discretized ill-posed problems was proposed in [222], together with a method for determining a priori and a posteriori parameters for the regularization. The papers [127–129, 220] studied numerical algorithms for regularization of the Galerkin method. In [231], an additive Schwarz iteration was presented for the fast solution of Tikhonov-regularized ill-posed problems. A two-level preconditioner was developed in [133, 134] for solving the discretized operator equation and extended in [233] to a general case. The work in [193] designed an adaptive discretization for Tikhonov regularization which has a sparse matrix structure. Multilevel methods were applied to solve ill-posed problems, and a priori and a posteriori parameter choice strategies were also established, in [103, 137, 177]; in particular, a wavelet-based matrix compression technique was used in [137]. Moreover, wavelet-based multilevel methods and cascadic multilevel methods for solving ill-posed problems were developed in [174] and [230], respectively. A multiscale analysis for ill-posed problems with semi-discrete Tikhonov regularization was established in [278]. For more information on fast solvers for ill-posed integral equations, the reader is referred to [54, 78, 103, 104, 133, 137, 157, 193, 231].

Finally, applications of ill-posed problems in science and engineering may be found in [106, 116, 173]. In particular, for applications of regularization in image processing, system identification and machine learning, the reader is referred to [188, 189, 237, 259], [33, 223] and [87, 195, 268], respectively.


12 Eigen-problems of weakly singular integral operators

In this chapter we consider solving the eigen-problem of a weakly singular integral operator \(\mathcal{K}\). As we know, the spectrum of a compact integral operator \(\mathcal{K}\) consists of a countable number of eigenvalues whose only accumulation point is zero, which may or may not itself be an eigenvalue. We explain in this chapter how multiscale methods can be used to compute the nonzero eigenvalues of \(\mathcal{K}\) rapidly and efficiently. We begin with a brief introduction to the subject.

12.1 Introduction

Many practical problems in science and engineering are formulated as eigen-problems of compact linear integral operators (cf. [44]). Standard numerical treatments of the eigen-problem normally discretize the compact integral operator into a matrix and then solve the eigen-problem of the resulting matrix. The computed eigenvalues and associated eigenvectors of the matrix are considered approximations of the corresponding eigenvalues and eigenvectors of the compact integral operator. In particular, the Galerkin, Petrov–Galerkin, collocation, Nyström and degenerate kernel methods are commonly used for the approximation of eigenvalues and eigenvectors of compact integral operators. It is well known that the matrix which results from a discretization of a compact integral operator is dense, and solving the eigen-problem of a dense matrix requires a significant amount of computational effort. Hence, fast algorithms for solving such a problem are highly desirable.

We are interested in developing a fast collocation method for solving the eigen-problem of a compact linear integral operator \(\mathcal{K}\) on the space \(L^\infty(\Omega)\) with a weakly singular kernel. Wavelet and multiscale methods were recently developed (see, for example, [28, 64, 67, 68, 95, 108, 202] and the references


cited therein) for numerical solutions of weakly singular Fredholm integral equations of the second kind; some of them were discussed in the previous chapter. The essence of these methods is to approximate the dense matrix that results from the discretization of the integral operator by a sparse matrix and solve the linear system of the sparse matrix. It has been proved that the methods have nearly linear computational complexity and optimal convergence (see Chapters 5–7). Among these methods, the fast collocation method receives favorable attention due to the lower computational cost of generating its sparse coefficient matrix (cf. [69] and Chapter 7).

The specific goal of this chapter is to develop the fast collocation method for finding eigenvalues and eigenvectors of a compact integral operator \(\mathcal{K}\) with a weakly singular kernel, based on the matrix compression technique. For the analysis of the fast method we extend the classical spectral approximation theory [3, 44, 212] to a somewhat more general setting, so that it is applicable to the scenario where the matrix compression technique is used.

We organize this chapter into seven sections. In Section 12.2 we present an abstract framework for a compact integral operator to be approximated by a sequence of ν-convergent bounded linear operators. We develop in Section 12.3 the fast multiscale collocation method for solving the eigen-problem of a compact integral operator \(\mathcal{K}\) with a weakly singular kernel. We establish in Section 12.4 the optimal convergence rate for the approximate eigenvalues and generalized eigenvectors. In Section 12.5 we describe a power method for solving the eigen-problem of the compressed matrix, which makes use of the sparsity of the compressed matrix. We present in Section 12.6 numerical results to confirm the convergence estimates. Finally, in Section 12.7 we make bibliographical remarks.

12.2 An abstract framework

We describe in this section an abstract framework for eigenvalue approximation of a compact operator in a Banach space. The results presented here are basically the classical spectral approximation theory [3, 44, 212]; however, we present them in a somewhat more general form, so that they are applicable to the sparse matrix resulting from the multiscale collocation method with matrix compression.

Suppose that X is a complex Banach space and B(X) is the space of bounded linear operators from X to X. For an operator \(T \in B(X)\) we define its resolvent set by
\[
\rho(T) = \{z : z \in \mathbb{C},\ (T - zI)^{-1} \in B(X)\}
\]


and its spectrum by
\[
\sigma(T) = \mathbb{C} \setminus \rho(T).
\]
The resolvent operator \((T - zI)^{-1}\) is denoted by \(R(T)\) and, when we wish to show its dependence on z, we write \(R(T,z)\). Clearly, by definition, \(R(T,z) \in B(X)\) on \(\rho(T)\). The range space and null space of T are defined, respectively, by
\[
R(T) = \{Tx : x \in X\} \quad\text{and}\quad N(T) = \{x : Tx = 0,\ x \in X\}.
\]
The central problem considered in this chapter is the following: for a compact operator T defined on a Banach space X, we wish to find \(\lambda \in \sigma(T) \setminus \{0\}\) and \(\varphi \in X\) with \(\|\varphi\| = 1\) such that
\[
T\varphi = \lambda\varphi. \tag{12.1}
\]
As the eigenvalue problem (12.1) in general cannot be solved exactly, we consider solving the problem approximately. To this end, we require a sequence of subspaces \(X_n\), \(n \in \mathbb{N}\), which approximate the space X, and a sequence of operators \(T_n\), \(n \in \mathbb{N}\), which approximate the operator T, and consider the approximate eigen-problem: find \(\lambda_n \in \sigma(T_n) \setminus \{0\}\) and \(\varphi_n \in X_n\) with \(\|\varphi_n\| = 1\) such that
\[
T_n\varphi_n = \lambda_n\varphi_n. \tag{12.2}
\]

We start with a discussion of the spectral approximation. For any closed Jordan (rectifiable) curve \(\Gamma\) in \(\rho(T)\), we use the notation \(\|R(T)\|_\Gamma\) for the quantity \(\max\{\|R(T,z)\| : z \in \Gamma\}\). We first present an ancillary lemma.

Lemma 12.1 If \(T, S \in B(X)\), \(z \in \Gamma\), where \(\Gamma\) is a closed Jordan curve in \(\rho(T) \setminus \{\zeta : \zeta \in \mathbb{C},\ |\zeta| \le r\}\) for some r > 0, and
\[
\mu = r^{-1}\|R(T)\|_\Gamma(1 + \|T - S\|) + 1, \tag{12.3}
\]
then
\[
\bigl\|[(T - S)R(T,z)]^2\bigr\| \le \mu\bigl(\|(T - S)T\| + \|(T - S)S\|\bigr). \tag{12.4}
\]

Proof We compute
\[
[(T - S)R(T,z)]^2 = (T - S)R(T,z)(T - S)R(T,z). \tag{12.5}
\]
For any \(z \in \Gamma\), direct computation confirms that
\[
R(T,z) = z^{-1}[TR(T,z) - I]. \tag{12.6}
\]


We substitute this formula and obtain the identity
\[
[(T - S)R(T,z)]^2
= z^{-1}\bigl[(T - S)TR(T,z)(T - S) - (T - S)T + (T - S)S\bigr]R(T,z).
\]
Using the triangle inequality proves the result. □

The next lemma demonstrates how we shall use Lemma 12.1.

Lemma 12.2 If the hypotheses of Lemma 12.1 are satisfied and both of the quantities \(\|(T - S)T\|\) and \(\|(T - S)S\|\) do not exceed \((4\mu)^{-1}\), then \(\Gamma \subseteq \rho(S) \setminus \{\zeta : |\zeta| \le r\}\) and the resolvent of S satisfies the bound
\[
\|R(S)\|_\Gamma \le 2\bigl(\|R(T)\|_\Gamma + 1\bigr)^2\bigl(1 + \|T - S\|\bigr). \tag{12.7}
\]

Proof We choose \(z \in \Gamma\) and introduce the operator
\[
Q = [(T - S)R(T,z)]^2. \tag{12.8}
\]
Our hypotheses and Lemma 12.1 imply that \(\|Q\| \le \frac12\). Next we use the formula
\[
R(S,z) = R(T,z)[I - (T - S)R(T,z)]^{-1}
= R(T,z)[I + (T - S)R(T,z)]\sum_{j\in\mathbb{N}_0} Q^j
\]
to obtain the estimate
\[
\|R(S,z)\| \le 2\|R(T)\|_\Gamma\bigl[1 + \|T - S\|\cdot\|R(T)\|_\Gamma\bigr],
\]
which proves the lemma. □

The above lemmas are used in conjunction with the following notion of ν-convergence of operators in B(X).

Definition 12.3 Let X be a Banach space and \(T, T_n\), \(n \in \mathbb{N}\), in B(X). The sequence \(\{T_n : n \in \mathbb{N}\}\) is said to ν-converge to T, denoted by \(T_n \xrightarrow{\nu} T\), if

(i) the sequence \(\{\|T_n\| : n \in \mathbb{N}\}\) is bounded;
(ii) \(\|(T - T_n)T\| \to 0\);
(iii) \(\|(T - T_n)T_n\| \to 0\).

Lemmas 12.1 and 12.2 give us the following well-known fact (see, for example, [3]).


Proposition 12.4 If X is a Banach space, \(T_n \xrightarrow{\nu} T\) in B(X), and \(\Gamma\) is a closed Jordan curve in \(\rho(T) \setminus \{\zeta : |\zeta| \le r\}\), r > 0, then there is a constant c > 0 such that, for all sufficiently large n,
\[
\Gamma \subseteq \rho(T_n) \setminus \{\zeta : |\zeta| \le r\} \quad\text{and}\quad \|R(T_n)\|_\Gamma \le c.
\]

Proof We apply Lemma 12.1 to T and \(S = T_n\), and see that the constant μ in (12.3) corresponding to this choice is bounded independent of n. Now we choose n large enough so that both \(\|(T - T_n)T_n\|\) and \(\|(T - T_n)T\|\) do not exceed \((4\mu)^{-1}\). Therefore, Lemma 12.2 gives us the desired conclusion. □

For the next proposition we review the notion of the spectral projection. We assume that the spectrum of \(T \in B(X)\) has an isolated point λ and define the spectral projection associated with T and λ by
\[
P = -\frac{1}{2\pi i}\int_\Gamma R(z)\,dz, \tag{12.9}
\]

where \(\Gamma\) is a closed Jordan curve in \(\rho(T)\) enclosing λ but not any other point of \(\sigma(T)\), and we simplify the notation to \(R(z) = R(T,z)\). Clearly, P does not depend on \(\Gamma\) but on λ only. It is also known that \(P \in B(X)\) and, moreover, that P is a projection. Indeed, following the proof of Theorem 2.27 on p. 105 of [44], for example, we have that
\[
P^2 = \frac{1}{(2\pi i)^2}\int_\Gamma\int_{\Gamma'} R(z)R(\zeta)\,dz\,d\zeta, \tag{12.10}
\]

where \(\Gamma'\) is a closed Jordan curve enclosing λ and completely contained in the domain bounded by \(\Gamma\). We rewrite the integrand in (12.10) in an equivalent form, using the resolvent identity, to obtain the formula
\[
P^2 = \frac{1}{(2\pi i)^2}\int_\Gamma\int_{\Gamma'} \frac{R(\zeta) - R(z)}{\zeta - z}\,dz\,d\zeta. \tag{12.11}
\]

From the choice of \(\Gamma'\) we have, by the Cauchy integral formula, for \(\zeta \in \Gamma'\) and \(z \in \Gamma\), that
\[
\int_\Gamma \frac{d\eta}{\eta - \zeta} = 2\pi i \quad\text{and}\quad \int_{\Gamma'} \frac{d\eta}{\eta - z} = 0.
\]

Hence, by interchanging the order of integration in (12.11), we obtain
\[
P^2 = -\frac{1}{2\pi i}\int_{\Gamma'} R(\zeta)\,d\zeta,
\]
which proves that P is a projection.

When T is a compact operator in B(X) and λ is a nonzero eigenvalue of T, a typical choice for the Jordan curve \(\Gamma\) in (12.9) is one that includes only


λ in its interior but not the origin. In that case, \(m = \dim R(P) < \infty\), and the m × m matrix obtained by restricting the range and domain of T to R(P) has λ as an eigenvalue of algebraic multiplicity m. Moreover, the least integer ℓ for which \((T - \lambda I)^\ell = 0\) on R(P) is called the ascent of λ; in other words, ℓ is the smallest positive integer such that \((T - \lambda I)^\ell P = 0\). In general the eigenspace \(N(T - \lambda I)\) is a subspace of R(P), and it is equal to R(P) if and only if ℓ = 1.
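The contour integral (12.9) can be evaluated numerically in finite dimensions. The sketch below (a toy 3 × 3 matrix of our own choosing, not an operator from the book) approximates P for an isolated eigenvalue by trapezoidal quadrature on a circle and confirms that P² = P and that trace P equals the algebraic multiplicity m.

```python
import numpy as np

A = np.diag([2.0, 0.5, -0.3])        # isolated eigenvalue 2, multiplicity m = 1
center, radius, N = 2.0, 0.5, 256     # circle Gamma enclosing only the eigenvalue 2
I = np.eye(3)

P = np.zeros((3, 3), dtype=complex)
for k in range(N):
    z = center + radius * np.exp(2j * np.pi * k / N)
    Rz = np.linalg.inv(A - z * I)                    # resolvent R(z) = (A - zI)^{-1}
    P += Rz * 1j * radius * np.exp(2j * np.pi * k / N)
P *= -(1.0 / (2j * np.pi)) * (2 * np.pi / N)         # -(1/2 pi i) times the contour sum

assert np.allclose(P, np.diag([1.0, 0.0, 0.0]), atol=1e-10)  # projector onto the eigenspace
assert np.allclose(P @ P, P, atol=1e-10)                     # P^2 = P
assert abs(np.trace(P).real - 1.0) < 1e-10                   # trace P = m = 1
```

The trapezoidal rule on a circle converges geometrically for this analytic integrand, so a few hundred nodes already give machine-precision results.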

We need to compare two spectral projections of two distinct operators which are close in the sense of ν-convergence. This will be accomplished with the next lemma.

Lemma 12.5 If X is a Banach space, \(T_1, T_2 \in B(X)\), \(\Gamma\) is a closed Jordan curve in \(\rho(T_1) \cap \rho(T_2) \setminus \{\zeta : \zeta \in \mathbb{C},\ |\zeta| \le r\}\) for some r > 0, and \(P_1, P_2\) are the corresponding spectral projections of \(T_1\) and \(T_2\) with the same Jordan curve \(\Gamma\), respectively, then
\[
\|(P_1 - P_2)P_1\| \le \alpha_1\|(T_1 - T_2)|_{R(P_1)}\| \le \alpha_2\|(T_1 - T_2)T_1\|, \tag{12.12}
\]
where
\[
\alpha_1 = \|R(T_1)\|_\Gamma\|R(T_2)\|_\Gamma, \qquad
\alpha_2 = \|R(T_1)\|_\Gamma^2\|R(T_2)\|_\Gamma\,\ell(\Gamma)/(2\pi r),
\]
and \(\ell(\Gamma)\) is the length of \(\Gamma\).

Proof For k = 1, 2 we have that
\[
P_k = -\frac{1}{2\pi i}\int_\Gamma R(T_k,z)\,dz.
\]
Hence we obtain that
\[
(P_1 - P_2)P_1 = -\frac{1}{2\pi i}\int_\Gamma \bigl[R(T_1,z) - R(T_2,z)\bigr]P_1\,dz
= -\frac{1}{2\pi i}\int_\Gamma R(T_2,z)(T_2 - T_1)R(T_1,z)P_1\,dz
= -\frac{1}{2\pi i}\int_\Gamma R(T_2,z)(T_2 - T_1)P_1R(T_1,z)\,dz.
\]
Taking the norms of both sides of this equation leads to the inequality
\[
\|(P_1 - P_2)P_1\| \le \|R(T_1)\|_\Gamma\|R(T_2)\|_\Gamma\|(T_2 - T_1)P_1\|, \tag{12.13}
\]

which leads to the first inequality of (12.12). Next we estimate the last term on the right-hand side of this inequality. To this end, we note that
\[
(T_2 - T_1)P_1 = -\frac{1}{2\pi i}\int_\Gamma (T_2 - T_1)R(T_1,z)\,dz
= -\frac{1}{2\pi i}\int_\Gamma (T_2 - T_1)\bigl(T_1R(T_1,z) - I\bigr)\,\frac{dz}{z}.
\]


Since zero lies outside the domain bounded by \(\Gamma\), the Cauchy integral formula says that
\[
\int_\Gamma \frac{dz}{z} = 0,
\]
and so the above equation simplifies to
\[
(T_2 - T_1)P_1 = -\frac{1}{2\pi i}\int_\Gamma (T_2 - T_1)T_1R(T_1,z)\,\frac{dz}{z}.
\]
Therefore, taking norms of both sides of this equation gives the inequality
\[
\|(T_2 - T_1)P_1\| \le \frac{\ell(\Gamma)}{2\pi r}\|R(T_1)\|_\Gamma\|(T_2 - T_1)T_1\|. \tag{12.14}
\]
Combining this inequality with inequality (12.13) proves the result. □

From the above lemma we obtain the following proposition (see, for example, [3, 44, 167, 205]).

Proposition 12.6 Suppose \(T_n\), \(n \in \mathbb{N}\), and T are in B(X), \(\Gamma\) is a closed Jordan curve contained in \(\rho(T) \setminus \{\zeta : \zeta \in \mathbb{C},\ |\zeta| \le r\}\) for some r > 0, and \(T_n \xrightarrow{\nu} T\) in B(X). Then there exist positive constants \(c_1, c_2\) and an \(N \in \mathbb{N}\) such that for all \(n \ge N\) we have that

(i) \(\|(P - P_n)P\| \le c_1\|(T - T_n)|_{R(P)}\| \le c_2\|(T - T_n)T\|\);
(ii) \(\|(P - P_n)P_n\| \le c_1\|(T - T_n)|_{R(P_n)}\| \le c_2\|(T - T_n)T_n\|\).

Proof The proof uses Lemma 12.5, applied to P and \(P_n\), and also Proposition 12.4. □

Propositions 12.4 and 12.6 supply the well-known tools for the presentation in this chapter. Besides the above facts on spectral projections, we need the notion of the gap between subspaces and some of its useful properties.

Definition 12.7 Let \(Y_1, Y_2\) be two closed subspaces of a Banach space X, and set
\[
\delta(Y_1, Y_2) = \sup\{\operatorname{dist}(y, Y_2) : y \in Y_1,\ \|y\| = 1\}.
\]
The gap between \(Y_1\) and \(Y_2\) is defined to be
\[
\theta(Y_1, Y_2) = \max\{\delta(Y_1, Y_2),\ \delta(Y_2, Y_1)\}.
\]
We make use of the following lemma of Kato (see [44], p. 87, or [166], pp. 264–269).

Lemma 12.8 If \(\dim Y_1 = \dim Y_2 < \infty\), then
\[
\delta(Y_2, Y_1) \le \frac{\delta(Y_1, Y_2)}{1 - \delta(Y_1, Y_2)}. \tag{12.15}
\]
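In a Euclidean space the quantity \(\delta(Y_1, Y_2)\) has a concrete form: it is the largest singular value of \((I - Q_2)Q_1\), where \(Q_k\) is the orthogonal projector onto \(Y_k\). The sketch below is our own finite-dimensional illustration (it does not cover the general Banach-space case); it checks this formula and inequality (12.15) for a pair of lines in \(\mathbb{R}^3\).

```python
import numpy as np

def proj(Y):
    """Orthogonal projector onto the column span of Y."""
    Q, _ = np.linalg.qr(Y)
    return Q @ Q.T

def delta(Y1, Y2):
    """delta(Y1, Y2) = sup over unit y in Y1 of dist(y, Y2)."""
    P1, P2 = proj(Y1), proj(Y2)
    return np.linalg.norm((np.eye(len(P1)) - P2) @ P1, 2)  # largest singular value

Y1 = np.array([[1.0], [0.0], [0.0]])                  # span{e1}
Y2 = np.array([[1.0], [1.0], [0.0]]) / np.sqrt(2.0)   # line at 45 degrees to e1

d12, d21 = delta(Y1, Y2), delta(Y2, Y1)
assert abs(d12 - np.sqrt(0.5)) < 1e-12    # sin of the angle between the lines
assert abs(d21 - d12) < 1e-12             # equal dimensions: symmetric here
assert d21 <= d12 / (1 - d12) + 1e-12     # Kato's inequality (12.15)
```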


The next lemma is of a different character. Its proof requires the Borsuk antipodal mapping theorem and is given in [186], p. 199, or in [167], p. 385.

Lemma 12.9 If \(\dim Y_1 < \infty\) and \(\dim Y_2 > \dim Y_1\), then there is an \(x \in Y_2 \setminus \{0\}\) such that \(\operatorname{dist}(x, Y_1) = \|x\|\) and, consequently,
\[
\delta(Y_2, Y_1) = 1.
\]

From this lemma we have the following fact.

Lemma 12.10 If \(\dim Y_1 < \infty\) and \(\theta(Y_2, Y_1) < 1\), then \(\dim Y_2 = \dim Y_1\).

Proof Since our hypothesis implies \(\delta(Y_2, Y_1) < 1\), we conclude by Lemma 12.9 that \(\dim Y_2 \le \dim Y_1\). But our hypothesis also implies that \(\delta(Y_1, Y_2) < 1\), and so indeed \(\dim Y_2 = \dim Y_1\). □

We combine this lemma with Proposition 12.6 and conclude the following result.

Proposition 12.11 If the hypotheses of Proposition 12.6 hold, then there is a positive integer N such that for all \(n \ge N\),
\[
\dim R(P) = \dim R(P_n).
\]

Proof For any two projections \(P_1\) and \(P_2 \in B(X)\) we have that
\[
\delta(R(P_1), R(P_2)) \le \sup\{\|x - P_2x\| : x \in R(P_1),\ \|x\| = 1\}
= \sup\{\|P_1x - P_2P_1x\| : x \in R(P_1),\ \|x\| = 1\}
\le \|P_1 - P_2P_1\| = \|(P_1 - P_2)P_1\|.
\]
Using this inequality on the operators P and \(P_n\), it follows from Proposition 12.6 that, for some N, whenever \(n \ge N\) we have \(\theta(R(P), R(P_n)) < 1\), because \(\dim R(P) < \infty\) and \(T_n \xrightarrow{\nu} T\) in B(X). The result of this proposition then follows from Lemma 12.10. □

The following theorem bounds the gap between the spectral subspaces, that is, the nearness of the eigenfunctions.

Theorem 12.12 If the hypotheses of Proposition 12.6 hold, then there exist a positive constant c and an integer N such that for all \(n \ge N\),
\[
\theta(R(P_n), R(P)) \le c\|(T - T_n)|_{R(P)}\|.
\]
In particular, if \(\varphi \in R(P_n)\) with \(\|\varphi\| = 1\), then
\[
\operatorname{dist}(\varphi, R(P)) \le c\|(T - T_n)|_{R(P)}\|.
\]


Proof We know from Proposition 12.11 that \(\dim R(P) = \dim R(P_n)\) for n sufficiently large. Hence, by Lemma 12.8, we have that
\[
\delta(R(P_n), R(P)) \le \frac{\delta(R(P), R(P_n))}{1 - \delta(R(P), R(P_n))} \le 2\delta(R(P), R(P_n)).
\]
Moreover, as in the proof of Proposition 12.11 and using Proposition 12.6, we conclude that
\[
\theta(R(P_n), R(P)) \le 2\delta(R(P), R(P_n)) \le 2\|(P - P_n)P\| \le c\|(T - T_n)|_{R(P)}\|. \qquad\Box
\]

Suppose that λ is an eigenvalue of the operator T with algebraic multiplicity m and ascent ℓ, isolated by a closed rectifiable curve \(\Gamma\) in \(\rho(T)\) from the rest of the spectrum of T and from zero. We assume that \(T_n \xrightarrow{\nu} T\). Then the spectrum of \(T_n\) inside \(\Gamma\) consists of m eigenvalues, say \(\lambda_{n1}, \lambda_{n2}, \ldots, \lambda_{nm}\), counted according to their algebraic multiplicities (see [3, 44]). We define the arithmetic mean of these eigenvalues by letting
\[
\hat\lambda_n = \frac{\lambda_{n1} + \lambda_{n2} + \cdots + \lambda_{nm}}{m},
\]
and we approximate λ by \(\hat\lambda_n\).

The next theorem concerns the approximation of eigenvalues.

Theorem 12.13 If X is a Banach space, \(T, T_n \in B(X)\) for \(n \in \mathbb{N}\), and \(T_n \xrightarrow{\nu} T\), then there exist a constant c and an integer \(N \in \mathbb{N}\) such that for all \(n \ge N\),
\[
|\lambda - \hat\lambda_n| \le c\|(T - T_n)|_{R(P)}\|
\]
and
\[
|\lambda - \lambda_{nj}|^{\ell} \le c\|(T - T_n)|_{R(P)}\|.
\]

Proof Let \(\hat P_n = P_n|_{R(P)}\). Note that there exists N such that for all \(n \in \mathbb{N}\), \(n \ge N\), \(\hat P_n\) is a surjective isomorphism from R(P) to \(R(P_n)\), and \(\{\|\hat P_n^{-1}\| : n \ge N\}\) is bounded. Let \(\hat T = T|_{R(P)}\) and \(\hat T_n = \hat P_n^{-1}T_n\hat P_n\). Then \(\sigma(\hat T) = \{\lambda\}\) and \(\sigma(\hat T_n) = \{\lambda_{n1}, \lambda_{n2}, \ldots, \lambda_{nm}\}\). Thus we have that
\[
|\lambda - \hat\lambda_n| = \frac{1}{m}\bigl|\operatorname{trace}(\hat T - \hat T_n)\bigr| \le \|\hat T - \hat T_n\|
= \|\hat P_n^{-1}\hat P_n(T - T_n)|_{R(P)}\| \le c\|(T - T_n)|_{R(P)}\|.
\]
To prove the second estimate, we note that the ascent of λ is ℓ, which ensures that \((\lambda I|_{R(P)} - \hat T)^{\ell} = 0\). By virtue of this equation we observe that
\[
|\lambda - \lambda_{nj}|^{\ell} \le \bigl\|(\lambda I|_{R(P)} - \hat T_n)^{\ell}\bigr\|
= \bigl\|(\lambda I|_{R(P)} - \hat T_n)^{\ell} - (\lambda I|_{R(P)} - \hat T)^{\ell}\bigr\|
\le \alpha_n\|\hat T - \hat T_n\|,
\]


where
\[
\alpha_n = \sum_{k\in\mathbb{Z}_{\ell}} \bigl\|\lambda I_{R(P)} - \hat P_n^{-1}T_n\hat P_n\bigr\|^{\ell-1-k}\,\bigl\|\lambda I_{R(P)} - \hat T\bigr\|^{k}.
\]
Since \(\hat P_n\), \(\hat P_n^{-1}\) and \(T_n\) are all bounded, \(\alpha_n\) can be bounded by a positive constant c independent of n. This leads to the desired estimate and completes the proof. □

12.3 A multiscale collocation method

In this section we develop fast multiscale collocation methods for solving the eigen-problem of a compact integral operator with a weakly singular kernel. Specifically, we let d be a positive integer and assume that \(\Omega\) is a compact set of the d-dimensional Euclidean space \(\mathbb{R}^d\). For a compact linear integral operator \(\mathcal{K}\) on \(L^\infty(\Omega)\) defined by
\[
(\mathcal{K}\varphi)(s) = \int_\Omega K(s,t)\varphi(t)\,dt, \quad s \in \Omega,
\]
with a weakly singular kernel K, we consider the following eigen-problem: find \(\lambda \in \sigma(\mathcal{K}) \setminus \{0\}\) and \(\varphi \in L^\infty(\Omega)\) with \(\|\varphi\| = 1\) such that
\[
\mathcal{K}\varphi = \lambda\varphi. \tag{12.16}
\]

In this case the Banach space X in the abstract setting described in the last section is chosen as the space \(L^\infty(\Omega)\). Hence, in the rest of this section we always have \(X = L^\infty(\Omega)\) and \(V = C(\Omega)\). By \(V^*\) we denote the dual space of V. For \(\ell \in V^*\) and \(v \in V\), we use \(\langle\ell, v\rangle\) to stand for the value of the linear functional ℓ evaluated at the function v, and \(\|\ell\|\), \(\|v\|\) for their respective norms. We also use \((\cdot,\cdot)\) to denote the inner product in \(L^2(\Omega)\). For \(s \in \Omega\), by \(\delta_s\) we denote the linear functional in \(V^*\) defined for \(v \in V\) by the equation \(\langle\delta_s, v\rangle = v(s)\). We need to evaluate \(\delta_s\) on functions in X; as in [21], we take the norm-preserving extension of \(\delta_s\) to X and use the same notation for the extension.

A multiscale scheme is based on a multiscale partition of the set \(\Omega\), a multiscale subspace decomposition of the space X and a multiscale basis of the space (cf. [69]). We first require that there is a family of partitions \(\{\Delta_n : n \in \mathbb{N}_0\}\) of \(\Omega\) satisfying
\[
\Delta_n = \{\Omega_{ni} : i \in \mathbb{Z}_{e(n)}\}, \qquad \bigcup_{i\in\mathbb{Z}_{e(n)}} \Omega_{ni} = \Omega,
\]
\[
\operatorname{meas}(\Omega_{ni} \cap \Omega_{ni'}) = 0, \quad i, i' \in \mathbb{Z}_{e(n)},\ i \ne i',
\]
where the sets \(\Omega_{ni}\) are star-shaped and e(n) denotes the cardinality of \(\Delta_n\). We then assume that there is a sequence of finite-dimensional subspaces \(X_n\),


\(n \in \mathbb{N}_0\), of X which have the nested property, namely \(X_{n-1} \subset X_n\), \(n \in \mathbb{N}\). Thus a subspace \(W_n \subset X_n\) can be defined such that \(X_n\) is an orthogonal direct sum of \(X_{n-1}\) and \(W_n\). As a result, for each \(n \in \mathbb{N}_0\) we have a multiscale decomposition
\[
X_n = X_0 \oplus W_1 \oplus W_2 \oplus \cdots \oplus W_n,
\]
where \(W_0 = X_0\). Let \(w(i) = \dim(W_i)\), \(i \in \mathbb{N}_0\), and \(s(n) = \dim(X_n)\), \(n \in \mathbb{N}_0\). Then \(s(n) = \sum_{i\in\mathbb{Z}_{n+1}} w(i)\). We also assume that there is a basis \(\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}\) for each of the spaces \(W_i\), that is,
\[
W_i = \operatorname{span}\{w_{ij} : j \in \mathbb{Z}_{w(i)}\}, \quad i \in \mathbb{N}_0,
\]

and that there exist positive integers h and r such that for any \(i > h\) and \(j \in \mathbb{Z}_{w(i)}\) with \(j = \nu r + l\) for some \(\nu \in \mathbb{N}_0\) and \(l \in \mathbb{Z}_r\),
\[
w_{ij}(x) = 0, \quad x \notin \Omega_{i-h,\nu}. \tag{12.17}
\]
Letting \(S_{ij} = \Omega_{i-h,\nu}\), condition (12.17) means that the support of the basis function \(w_{ij}\) is contained in \(S_{ij}\). The multiscale property demands that there is a positive integer \(\mu > 1\) and positive constants \(c_1, c_2\) such that for \(n \in \mathbb{N}_0\),
\[
c_1\mu^{-n/d} \le d_n \le c_2\mu^{-n/d}, \qquad
c_1\mu^{n} \le \dim X_n \le c_2\mu^{n}, \qquad
c_1\mu^{n} \le \dim W_n \le c_2\mu^{n}, \tag{12.18}
\]
where \(d_n = \max\{d(\Omega_{ni}) : i \in \mathbb{Z}_{e(n)}\}\) and the notation d(A) denotes the diameter of the set A.

To define a multiscale collocation scheme we also need linear functionals \(\ell_{i'j'} \in V^*\), \(j' \in \mathbb{Z}_{w(i')}\), \(i' \in \mathbb{Z}_{n+1}\). Each \(\ell_{i'j'}\) is a finite sum of point evaluations,
\[
\ell_{i'j'} = \sum_{s} c_s\delta_s,
\]
where the \(c_s\) are constants and the sum runs over a finite subset of distinct points in \(S_{i'j'}\) having a constant cardinality. Let
\[
U_n = \{(i,j) : j \in \mathbb{Z}_{w(i)},\ i \in \mathbb{Z}_{n+1}\}.
\]
The linear functionals and multiscale bases are required to satisfy the vanishing moment condition, that for any polynomial p of total degree less than or equal to k − 1,

\[
\langle\ell_{ij}, p\rangle = 0, \quad (w_{ij}, p) = 0, \quad (i,j) \in U_n,\ i \ge 1, \tag{12.19}
\]
the boundedness property
\[
\|\ell_{ij}\| + \|w_{ij}\| \le c, \quad (i,j) \in U_n, \tag{12.20}
\]


where c is a positive constant independent of i, j and n, and the requirement that for any \(i, i' \in \mathbb{N}_0\),
\[
\langle\ell_{i'j'}, w_{ij}\rangle = \delta_{ii'}\delta_{jj'}, \quad (i,j), (i',j') \in U_n,\ i \le i', \tag{12.21}
\]

with \(\delta_{ii'}\) being the Kronecker delta.

The collocation scheme for solving the eigen-problem (12.16) is to find \(\lambda_n \in \mathbb{C}\) and \(\varphi_n \in X_n\) with \(\|\varphi_n\| = 1\) such that for any \((i',j') \in U_n\),
\[
\langle\ell_{i'j'}, \mathcal{K}\varphi_n\rangle = \lambda_n\langle\ell_{i'j'}, \varphi_n\rangle. \tag{12.22}
\]

Since \(\varphi_n \in X_n\), we write
\[
\varphi_n = \sum_{(i,j)\in U_n} \varphi_{ij}w_{ij} \tag{12.23}
\]
and substitute (12.23) into (12.22) to obtain the linear system
\[
\sum_{(i,j)\in U_n} \varphi_{ij}\langle\ell_{i'j'}, \mathcal{K}w_{ij}\rangle = \lambda_n\sum_{(i,j)\in U_n} \varphi_{ij}\langle\ell_{i'j'}, w_{ij}\rangle, \quad (i',j') \in U_n. \tag{12.24}
\]

Using the notations
\[
\mathbf{E}_n = [\langle\ell_{i'j'}, w_{ij}\rangle : (i',j'), (i,j) \in U_n], \qquad
\mathbf{K}_n = [\langle\ell_{i'j'}, \mathcal{K}w_{ij}\rangle : (i',j'), (i,j) \in U_n]
\]
and \(\Phi_n = [\varphi_{ij} : (i,j) \in U_n]\), the system (12.24) is written in the matrix form
\[
\mathbf{K}_n\Phi_n = \lambda_n\mathbf{E}_n\Phi_n. \tag{12.25}
\]

langiprimejprime wij

rangof

En is zero The matrix Kn is still a full matrix but it is numerically sparse Ournext task is to use the truncation strategy developed in [69] for the matrix Kn

to formulate a fast algorithm for solving the eigen-problem (1225)For analysis purposes it is convenient to work with the functional analytic

form of the eigen-problem (1225) For this purpose we let πn X rarr Xn

denote the bounded linear projection operator defined by

〈ij πnx〉 = 〈ij x〉 (i j) isin Un (1226)

With this operator we define Kn X rarr X by Kn = πnKπn Clearly Kn isa bounded linear operator Thus the eigen-problem (1225) is written in theoperator form

Knφn = λnφn (1227)

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637014Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 163256 subject to the Cambridge Core terms of

123 A multiscale collocation method 477

We remark that as in Chapter 7 (cf [69]) the projection operator πn is requiredto satisfy the property that there exists a positive constant c such that for v isinWkinfin() and for all n isin N

vminus πnv le c infvnisinXn

vminus vn le cμminuskndvkinfin (1228)

where middot and middotkinfin denote the norm in X and in Wkinfin() respectively Thiscondition is fulfilled when Xn is chosen as the space of piecewise polynomialsof total degree k minus 1

We now assume that the integral kernel of the operator $\mathcal{K}$ is weakly singular in the sense that for $s,t\in[0,1]$, $s\ne t$, and a positive integer $k>0$, the kernel $K$ has continuous partial derivatives $D_s^\alpha D_t^\beta K(s,t)$ for $|\alpha|\le k$, $|\beta|\le k$, and there exist positive constants $\sigma$ and $c$ with $\sigma<d$ such that for $|\alpha|=|\beta|=k$,

$$\left| D_s^\alpha D_t^\beta K(s,t) \right| \le \frac{c}{|s-t|^{\sigma+|\alpha|+|\beta|}}.$$

We truncate the matrix $\mathbf{K}_n$ according to the singularity of the kernel. As in

Chapter 7 (cf. [69]), we partition the matrix $\mathbf{K}_n$ into a block matrix $\mathbf{K}_n = [\mathbf{K}_{i'i} : i',i\in\mathbb{Z}_{n+1}]$ with $\mathbf{K}_{i'i} = [\langle \ell_{i'j'}, \mathcal{K}w_{ij}\rangle : j'\in\mathbb{Z}_{w(i')},\ j\in\mathbb{Z}_{w(i)}]$, and truncate the block $\mathbf{K}_{i'i}$ by using a truncation parameter $\epsilon = \epsilon^n_{i'i}$, which will be described later. Specifically, we define a truncated matrix

$$\tilde{\mathbf{K}}_{i'i} = [\tilde{K}_{i'j',ij} : j'\in\mathbb{Z}_{w(i')},\ j\in\mathbb{Z}_{w(i)}] \tag{12.29}$$

by setting

$$\tilde{K}_{i'j',ij} = \begin{cases} \langle \ell_{i'j'}, \mathcal{K}w_{ij}\rangle, & \operatorname{dist}(S_{i'j'}, S_{ij}) \le \epsilon,\\ 0, & \text{otherwise}. \end{cases} \tag{12.30}$$

Using the truncation (12.30), problem (12.24) becomes

$$\sum_{(i,j)\in U_n} \phi_{ij}\tilde{K}_{i'j',ij} = \lambda_n \sum_{(i,j)\in U_n} \phi_{ij}\,\langle \ell_{i'j'}, w_{ij}\rangle, \qquad (i',j')\in U_n. \tag{12.31}$$

Let

$$\tilde{\mathbf{K}}_n := [\tilde{\mathbf{K}}_{i'i} : i',i\in\mathbb{Z}_{n+1}], \qquad \Phi_n := [\phi_{ij} : (i,j)\in U_n].$$

The eigen-problem (12.31) can then be written as the eigen-problem of the truncated matrix

$$\tilde{\mathbf{K}}_n \Phi_n = \lambda_n \mathbf{E}_n \Phi_n. \tag{12.32}$$

There is no need to compress the matrix $\mathbf{E}_n$, since it is already sparse. In fact, if an entry of matrix $\mathbf{K}_n$ is truncated to zero according to our proposed truncation strategy, the corresponding entry of the matrix $\mathbf{E}_n$ is already zero.
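The mechanics of the truncation (12.29)-(12.30) can be sketched as follows. The supports, entry values and level structure below are toy stand-ins (dyadic intervals and a synthetic `entry` function), not the actual multiscale basis; the point is only that an entry survives when the two supports are within $\epsilon$ of each other, which makes each block banded.

```python
import numpy as np

def interval_dist(A, B):
    """Distance between closed intervals A = (a0, a1) and B = (b0, b1)."""
    return max(0.0, max(A[0], B[0]) - min(A[1], B[1]))

def supports(i):
    """Toy dyadic supports at level i: 2**i intervals covering [0, 1]."""
    h = 1.0 / 2**i
    return [(j * h, (j + 1) * h) for j in range(2**i)]

def truncated_block(entry, i_p, i, eps):
    """Assemble one block of (12.29)-(12.30): keep an entry only when the
    supports of the test functional and the trial function are within eps."""
    S_p, S = supports(i_p), supports(i)
    B = np.zeros((len(S_p), len(S)))
    for jp, sp in enumerate(S_p):
        for j, s in enumerate(S):
            if interval_dist(sp, s) <= eps:
                B[jp, j] = entry(i_p, jp, i, j)
    return B

# synthetic positive 'entries' standing in for <l_{i'j'}, K w_{ij}>
entry = lambda ip, jp, i, j: 1.0 / (1.0 + abs(jp / 2**ip - j / 2**i))
B_full = truncated_block(entry, 4, 4, eps=1.0)    # large eps keeps all 16 x 16 entries
B_trunc = truncated_block(entry, 4, 4, eps=0.1)   # small eps leaves a banded block
```

With the small parameter, only entries whose supports nearly touch are kept, which is the source of the $\mathcal{O}(s(n)\log^\tau s(n))$ nonzero count established later in Theorem 12.20.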


Eigen-problem (12.32) may be expressed in operator form. Let $\tilde{K}_n$ be the bounded linear operator from $X_n$ to $X_n$ having the matrix representation $\mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n$ in the basis $\{w_{ij} : (i,j)\in U_n\}$. We then define a bounded linear operator $\tilde{\mathcal{K}}_n : X\to X$ by $\tilde{\mathcal{K}}_n := \tilde{K}_n\pi_n$. Solving eigen-problem (12.32) is now equivalent to finding $\lambda_n\in\mathbb{C}$ and $\phi_n = \sum_{(i,j)\in U_n}\phi_{ij}w_{ij}\in X_n$ with $\|\phi_n\| = 1$ such that

$$\tilde{\mathcal{K}}_n\phi_n = \lambda_n\phi_n. \tag{12.33}$$

Eigen-problem (12.32) leads to a fast method for solving the original eigen-problem. We study the convergence order and computational complexity of the fast method.

12.4 Analysis of the fast algorithm

We provide in this section an analysis of the convergence order and computational complexity of the fast algorithm described in the last section. Specifically, we show that the proposed method has the optimal convergence order (up to a logarithmic factor) and almost linear computational complexity.

For real numbers $b$ and $b'$, we choose the truncation parameters $\epsilon^n_{i'i}$, $i',i\in\mathbb{Z}_{n+1}$, to satisfy the condition

$$\epsilon^n_{i'i} \ge \max\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\}, \qquad i',i\in\mathbb{Z}_{n+1}, \tag{12.34}$$

for some constants $a>0$ and $r>1$. For any real numbers $\alpha$ and $\beta$ and a positive integer $n$, we define a function

$$\mu[\alpha,\beta; n] := \sum_{i\in\mathbb{Z}_{n+1}} \mu^{\alpha i/d} \sum_{i'\in\mathbb{Z}_{n+1}} \mu^{\beta i'/d}.$$
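The two quantities just introduced are simple to compute. The sketch below evaluates $\epsilon^n_{i'i}$ with equality in (12.34) and the function $\mu[\alpha,\beta;n]$; the constants `a`, `r`, `b`, `b_p` and the identification $d_i \approx \mu^{-i/d}$ (mesh size at level $i$) are illustrative assumptions, not values prescribed by the text.

```python
import numpy as np

def eps_param(n, i_p, i, mu=2.0, d=1, a=1.0, r=1.5, b=1.0, b_p=0.78):
    """Truncation parameter chosen with equality in (12.34); d_i ~ mu**(-i/d)
    stands in for the mesh size at level i (an assumption of this sketch)."""
    d_i, d_ip = mu ** (-i / d), mu ** (-i_p / d)
    return max(a * mu ** ((-n + b * (n - i) + b_p * (n - i_p)) / d),
               r * (d_i + d_ip))

def mu_bracket(alpha, beta, n, mu=2.0, d=1):
    """mu[alpha, beta; n] = (sum_i mu^{alpha i/d}) * (sum_i' mu^{beta i'/d})."""
    i = np.arange(n + 1)
    return float(np.sum(mu ** (alpha * i / d)) * np.sum(mu ** (beta * i / d)))

eps = eps_param(n=8, i_p=3, i=5)
# for negative alpha and beta both sums converge, so mu[alpha, beta; n] stays
# bounded as n grows; this is what drives the limits used in the lemmas below
```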

We recall a result from Lemma 7.11 (cf. Lemma 4.2 of [69]) which estimates the difference between the operators $\mathcal{K}_n$ and $\tilde{\mathcal{K}}_n$.

Lemma 12.14 If $0<\sigma'<\min\{2k,\ d-\sigma\}$, $\eta = 2k-\sigma'$, and the truncation parameter $\epsilon^n_{i'i}$ is chosen according to (12.34) for any real numbers $b$, $b'$, then there exists a positive constant $c$ such that for all $n\in\mathbb{N}$,

$$\|\mathcal{K}_n - \tilde{\mathcal{K}}_n\| \le c\,\mu[k-b\eta,\ k-b'\eta;\ n]\,(n+1)\,\mu^{-\sigma' n/d}$$

and

$$\|\mathcal{K}_n - \tilde{\mathcal{K}}_n\|_{W^{k,\infty}\to X} \le c\,\mu[2k-b\eta,\ k-b'\eta;\ n]\,(n+1)\,\mu^{-(k+\sigma')n/d},$$

where $\|\cdot\|_{W^{k,\infty}\to X}$ denotes the norm of the operator from $W^{k,\infty}(\Omega)$ to $X$.


The following lemma concerns the $\nu$-convergence of the truncated operators $\tilde{\mathcal{K}}_n$ to $\mathcal{K}$ on $X$.

Lemma 12.15 If $0<\sigma'<\min\{2k,\ d-\sigma\}$, $\eta = 2k-\sigma'$, and the truncation parameter $\epsilon^n_{i'i}$ is chosen according to (12.34) with $b > \frac{k-\sigma'}{\eta}$, $b' > \frac{k-\sigma'}{\eta}$ and $b+b' > 1$, then $\tilde{\mathcal{K}}_n \xrightarrow{\nu} \mathcal{K}$ on $X$ as $n\to\infty$.

Proof We prove that $\{\tilde{\mathcal{K}}_n\}$ satisfies the three conditions in the definition of $\nu$-convergence. Note that for any real numbers $\alpha$, $\beta$ and $e$ with $e > \max\{0,\alpha,\beta,\alpha+\beta\}$,

$$\lim_{n\to\infty} \mu[\alpha,\beta;\ n]\,(n+1)\,\mu^{-en/d} = 0.$$

The choice of parameters in this lemma ensures that

$$\lim_{n\to\infty} \mu[k-b\eta,\ k-b'\eta;\ n]\,(n+1)\,\mu^{-\sigma' n/d} = 0. \tag{12.35}$$

This with Lemma 12.14 yields that

$$\lim_{n\to\infty}\|\mathcal{K}_n - \tilde{\mathcal{K}}_n\| = 0, \tag{12.36}$$

which with the boundedness of $\{\|\mathcal{K}_n\| : n\in\mathbb{N}\}$ ensures that $\{\|\tilde{\mathcal{K}}_n\| : n\in\mathbb{N}\}$ is also bounded. That is, there is a constant $c$ such that for any $n\in\mathbb{N}$,

$$\|\tilde{\mathcal{K}}_n\| \le c. \tag{12.37}$$

It is known that for any $v\in X$,

$$\lim_{n\to\infty}\|\mathcal{K}_n v - \mathcal{K}v\| = 0.$$

This with (12.36) leads to

$$\lim_{n\to\infty}\|\tilde{\mathcal{K}}_n v - \mathcal{K}v\| = 0.$$

By using the compactness of the operator $\mathcal{K}$, we conclude that

$$\lim_{n\to\infty}\|(\tilde{\mathcal{K}}_n - \mathcal{K})\mathcal{K}\| = 0. \tag{12.38}$$

Moreover, we have that

$$\|(\tilde{\mathcal{K}}_n - \mathcal{K})\tilde{\mathcal{K}}_n\| \le \|(\tilde{\mathcal{K}}_n - \mathcal{K}_n)\tilde{\mathcal{K}}_n\| + \|(\pi_n - I)\mathcal{K}\tilde{\mathcal{K}}_n\| \le \left[\|\tilde{\mathcal{K}}_n - \mathcal{K}_n\| + \|(\pi_n - I)\mathcal{K}\|\right]\|\tilde{\mathcal{K}}_n\|.$$

Since $\lim_{n\to\infty}\|\pi_n v - v\| = 0$ for any $v\in X$ and since $\mathcal{K}$ is compact, $\lim_{n\to\infty}\|(\pi_n - I)\mathcal{K}\| = 0$. From this, together with (12.36), (12.37) and the inequality above, we conclude that

$$\lim_{n\to\infty}\|(\tilde{\mathcal{K}}_n - \mathcal{K})\tilde{\mathcal{K}}_n\| = 0. \tag{12.39}$$


Combining (12.37)-(12.39) yields the result of this lemma.

We now consider the spectral projection associated with $\mathcal{K}$ and $\lambda\in\sigma(\mathcal{K})$:

$$P = -\frac{1}{2\pi i}\int_\Gamma (\mathcal{K} - zI)^{-1}\,dz,$$

where $\Gamma$ is a closed rectifiable curve in $\rho(\mathcal{K})$ enclosing $\lambda$ but no other point of $\sigma(\mathcal{K})$.

Lemma 12.16 If $R(P)\subset W^{k,\infty}(\Omega)$, then there exists a positive constant $c$ such that for all $n\in\mathbb{N}$ and for all $v\in R(P)$,

$$\|(I - \pi_n)\mathcal{K}v\| \le c\,\mu^{-kn/d}\,\|v\|_{k,\infty}.$$

Proof Since $R(P)$ is invariant under $\mathcal{K}$, we have that for any $v\in R(P)$, $\mathcal{K}v\in R(P)\subset W^{k,\infty}(\Omega)$. Because all norms are equivalent on a finite-dimensional space, we see that there is a positive constant $c$ such that for any $v\in R(P)$,

$$\|\mathcal{K}v\|_{k,\infty} \le c\,\|\mathcal{K}v\| \le c\,\|\mathcal{K}\|\,\|v\|_{k,\infty}.$$

Thus the desired result follows from the above inequality and (12.28).

Lemma 12.17 Let $R(P)\subset W^{k,\infty}(\Omega)$. If $0<\sigma'<\min\{2k,\ d-\sigma\}$, $\eta = 2k-\sigma'$, and the truncation parameter $\epsilon^n_{i'i}$ is chosen according to (12.34) for some constants $a>0$ and $r>1$, with $b$ and $b'$ satisfying one of the following conditions:

(i) $b>1$, $b'>\frac{k-\sigma'}{\eta}$, $b+b'>1+\frac{k}{\eta}$;
(ii) $b=1$, $b'>\frac{k-\sigma'}{\eta}$, $b+b'>1+\frac{k}{\eta}$; $b>1$, $b'=\frac{k-\sigma'}{\eta}$, $b+b'>1+\frac{k}{\eta}$; or $b>1$, $b'>\frac{k-\sigma'}{\eta}$, $b+b'=1+\frac{k}{\eta}$;
(iii) $b=1$, $b'=\frac{k}{\eta}$, or $b=\frac{2k}{\eta}$, $b'=\frac{k-\sigma'}{\eta}$;

then there exists a positive constant $c$ such that for all $n\in\mathbb{N}$ and for all $v\in R(P)$,

$$\|(\tilde{\mathcal{K}}_n - \mathcal{K})v\| \le c\,(s(n))^{-k/d}\log^\tau s(n)\,\|v\|_{k,\infty},$$

where $\tau=0$ in case (i), $\tau=1$ in case (ii) and $\tau=2$ in case (iii).

Proof We prove this result by using the inequality

$$\|(\tilde{\mathcal{K}}_n - \mathcal{K})v\| \le \|(\tilde{\mathcal{K}}_n - \mathcal{K}_n)v\| + \|(\mathcal{K}_n - \mathcal{K})v\|. \tag{12.40}$$

It follows from Lemma 12.14 that

$$\|\mathcal{K}_n - \tilde{\mathcal{K}}_n\|_{W^{k,\infty}\to X} \le c\,\mu[2k-b\eta,\ k-b'\eta;\ n]\,(n+1)\,\mu^{-\sigma' n/d}\,\mu^{-kn/d}.$$


Note that for any real numbers $\alpha$, $\beta$ and $e$ with $e>0$,

$$\mu[\alpha,\beta;\ n]\,(n+1)\,\mu^{-en/d} = \begin{cases} o(1) & \text{if } e > \max\{\alpha,\beta,\alpha+\beta\},\\ O(n) & \text{if } \alpha=e,\ \beta<e,\ \alpha+\beta<e,\\ & \text{or } \alpha<e,\ \beta=e,\ \alpha+\beta<e,\\ & \text{or } \alpha<e,\ \beta<e,\ \alpha+\beta=e,\\ O(n^2) & \text{if } \alpha=0,\ \beta=e \text{ or } \alpha=e,\ \beta=0, \end{cases}$$

as $n\to\infty$. We then conclude, by choosing $\alpha=2k-b\eta$, $\beta=k-b'\eta$ and $e=\sigma'$, that

$$\mu[2k-b\eta,\ k-b'\eta;\ n]\,(n+1)\,\mu^{-\sigma' n/d} = \begin{cases} o(1) & \text{in case (i)},\\ O(n) & \text{in case (ii)},\\ O(n^2) & \text{in case (iii)}. \end{cases}$$

Hence we see that there exist a constant $c_1$ and a positive integer $N$ such that for all $n\ge N$,

$$\|\mathcal{K}_n - \tilde{\mathcal{K}}_n\|_{W^{k,\infty}\to X} \le c_1\,\mu^{-kn/d}\,n^\tau,$$

where $\tau=0$ in case (i), $\tau=1$ in case (ii) and $\tau=2$ in case (iii). That is,

$$\|(\mathcal{K}_n - \tilde{\mathcal{K}}_n)v\| \le c_1\,\mu^{-kn/d}\,n^\tau\,\|v\|_{k,\infty}. \tag{12.41}$$

Moreover, it follows from (12.28) and Lemma 12.16 that there exists a positive constant $c_2$ such that for all $n\in\mathbb{N}$,

$$\|(\mathcal{K}_n - \mathcal{K})v\| \le \|\pi_n\mathcal{K}(\pi_n - I)v\| + \|(\pi_n - I)\mathcal{K}v\| \le c_2\,\mu^{-kn/d}\,\|v\|_{k,\infty}. \tag{12.42}$$

Combining estimates (12.40)-(12.42) and the relation $s(n)\sim\mu^n$, we obtain the estimate of this theorem.

Suppose that $\operatorname{rank} P = m < \infty$. Note that $\tilde{\mathcal{K}}_n \xrightarrow{\nu} \mathcal{K}$ on $X$ as $n\to\infty$. As described in Section 12.2, when $n$ is sufficiently large the spectrum of $\tilde{\mathcal{K}}_n$ inside $\Gamma$ consists of $m$ eigenvalues $\lambda_{in}$, $i=1,2,\ldots,m$, counting algebraic multiplicities. Let

$$P_n = -\frac{1}{2\pi i}\int_\Gamma (\tilde{\mathcal{K}}_n - zI)^{-1}\,dz$$

be the spectral projection associated with $\tilde{\mathcal{K}}_n$ and its spectra inside $\Gamma$. Thus $\dim R(P_n) = \dim R(P) = m$. We define the quantity

$$C(P) := \sup\{\|\phi\|_{k,\infty} : \phi\in R(P),\ \|\phi\|_\infty = 1\}.$$


Theorem 12.18 If the assumptions of Lemma 12.17 hold, then there exist a positive constant $c$ and a positive integer $N$ such that for all $n\ge N$,

$$\delta(R(P_n), R(P)) \le c\,(s(n))^{-k/d}\log^\tau s(n)\,C(P). \tag{12.43}$$

In particular, for any $\phi_n\in R(P_n)$ with $\|\phi_n\|=1$, we have that

$$\operatorname{dist}(\phi_n, R(P)) \le c\,(s(n))^{-k/d}\log^\tau s(n)\,C(P),$$

where $\tau=0$ in case (i), $\tau=1$ in case (ii) and $\tau=2$ in case (iii).

Proof It is easy to verify that the choice of truncation parameters in case (i), (ii) or (iii) satisfies the hypothesis of Lemma 12.15. It follows from Lemma 12.15 that $\tilde{\mathcal{K}}_n \xrightarrow{\nu} \mathcal{K}$ as $n\to\infty$. Using Theorem 12.12, we see that

$$\delta(R(P_n), R(P)) \le c\,\sup\{\|(\tilde{\mathcal{K}}_n - \mathcal{K})\phi\| : \phi\in R(P),\ \|\phi\| = 1\}.$$

Now, by using Lemma 12.17, we conclude the desired estimate (12.43).

We define the arithmetic mean of the eigenvalues $\lambda_{in}$, $i=1,2,\ldots,m$, by

$$\hat{\lambda}_n := \frac{\lambda_{1n} + \cdots + \lambda_{mn}}{m}.$$

Theorem 12.19 If the assumptions of Lemma 12.17 hold, then there exist a positive constant $c$ and a positive integer $N$ such that for all $n\ge N$,

$$|\lambda - \hat{\lambda}_n| \le c\,(s(n))^{-k/d}(\log s(n))^\tau\,C(P),$$

$$|\lambda - \lambda_{jn}| \le c\,\big((s(n))^{-k/d}(\log s(n))^\tau\,C(P)\big)^{1/\ell}, \qquad j = 1,2,\ldots,m,$$

where $\ell$ denotes the ascent of the eigenvalue $\lambda$. In particular, if $\lambda$ is simple, that is, $m=1$ and $\ell=1$, then

$$|\lambda - \lambda_n| \le c\,(s(n))^{-k/d}(\log s(n))^\tau\,C(P),$$

where $\tau=0$ in case (i), $\tau=1$ in case (ii) and $\tau=2$ in case (iii).

Proof The results of this theorem follow from Theorem 12.13 and Lemma 12.17.

Theorem 12.20 Let $R(P)\subset W^{k,\infty}(\Omega)$. Suppose that $0<\sigma'<\min\{2k,\ d-\sigma\}$ and $\eta = 2k-\sigma'$, and the truncation parameters $\epsilon^n_{i'i}$, $i',i\in\mathbb{Z}_{n+1}$, are chosen such that

$$\epsilon^n_{i'i} = \max\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\}, \qquad i',i\in\mathbb{Z}_{n+1}, \tag{12.44}$$

for some constants $a>0$ and $r>1$ with $b=1$ and $\frac{k}{\eta}\le b'\le 1$. Then the number of nonzero entries of matrix $\tilde{\mathbf{K}}_n$ is

$$\mathcal{N}(\tilde{\mathbf{K}}_n) = \mathcal{O}(s(n)\log^\tau s(n)),$$


where $\tau=1$ except for $b'=1$, in which case $\tau=2$; and the estimates in Theorems 12.18 and 12.19 hold with $\tau=1$, except for $b'=\frac{k}{\eta}$, in which case $\tau=2$.

Proof It is shown in Theorem 7.15 (cf. Theorem 4.6 of [69]) that if the parameters are chosen such that

$$\epsilon^n_{i'i} \le \max\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\}, \qquad i',i\in\mathbb{Z}_{n+1},$$

for some constants $a>0$ and $r>1$ with $b$ and $b'$ not larger than one, then the number of nonzero entries of matrix $\tilde{\mathbf{K}}_n$ is of order $\mathcal{O}(s(n)\log^\tau s(n))$, where $\tau=1$ except for $b=b'=1$, in which case $\tau=2$. This, together with Theorems 12.18 and 12.19, yields the results of this theorem.

The above theorem means that the scheme (12.31) (or (12.32), (12.33)) leads to a fast numerical algorithm for solving the eigen-problem (12.16), which has both optimal order (up to a logarithmic factor) of convergence and computational complexity.

12.5 A power iteration algorithm

The matrix compression technique described and analyzed in the previous sections provides a basis for developing various fast numerical solvers for the eigen-problem (12.32). Once the eigen-problem (12.32) is set up, standard numerical methods may be applied to it, and the sparsity of its coefficient matrix leads to fast methods for solving the problem. As an example to illustrate this point, in this section we apply the power iteration algorithm to eigen-problem (12.32) and provide a computational complexity result for the algorithm.

For convenience we rewrite eigen-problem (12.32) in the form

$$\mathbf{E}_n^{-1}\tilde{\mathbf{K}}_n\Phi_n = \lambda_n\Phi_n. \tag{12.45}$$

The power iteration method is used to find the largest (in magnitude) eigenvalue and a corresponding eigenvector of a matrix. We describe below the power iteration algorithm applied to eigen-problem (12.45).

Algorithm 12.21 (Power iteration algorithm)

Step 1. For fixed $n\in\mathbb{N}$, choose $\Phi_n^{(0)}\ne 0$ satisfying that there is an $(l,m)\in U_n$ such that $(\Phi_n^{(0)})_{(l,m)} = \|\Phi_n^{(0)}\|_\infty = 1$.

Step 2. For $j\in\mathbb{N}_0$, suppose that $\Phi_n^{(j)}$ has been obtained, and do the following:

• Compute $\Phi_1^{(j)} = \tilde{\mathbf{K}}_n\Phi_n^{(j)}$.
• Solve $\Phi_2^{(j)}$ from the equation $\mathbf{E}_n\Phi_2^{(j)} = \Phi_1^{(j)}$.
• Compute $\lambda_n^{(j)} = (\Phi_2^{(j)})_{(l,m)}$.

Step 3. Find an $(l,m)\in U_n$ such that $|(\Phi_2^{(j)})_{(l,m)}| = \|\Phi_2^{(j)}\|_\infty$, let $\Phi_n^{(j+1)} = \Phi_2^{(j)}/(\Phi_2^{(j)})_{(l,m)}$, and go to step 2.

The sequences $\{\lambda_n^{(j)} : j\in\mathbb{N}_0\}$ and $\{\Phi_n^{(j)} : j\in\mathbb{N}_0\}$ converge to the largest (in magnitude) eigenvalue and its corresponding eigenvector, respectively, of the eigen-problem (12.45). Since the number of nonzero entries of the matrices $\tilde{\mathbf{K}}_n$ and $\mathbf{E}_n$ is of order $\mathcal{O}(s(n)\log^\tau s(n))$, and since Algorithm 12.21 basically uses matrix-vector multiplications, the algorithm is fast. In the next proposition we provide an estimate of the number of multiplications needed in each iteration step.

Proposition 12.22 Suppose that $0<\sigma'<\min\{2k,\ d-\sigma\}$ and $\eta = 2k-\sigma'$. If the truncation parameters $\epsilon^n_{i'i}$, $i',i\in\mathbb{Z}_{n+1}$, are chosen such that

$$\epsilon^n_{i'i} = \max\{a\,\mu^{[-n+b(n-i)+b'(n-i')]/d},\ r(d_i + d_{i'})\}, \qquad i',i\in\mathbb{Z}_{n+1}, \tag{12.46}$$

for some constants $a>0$ and $r>1$ with $b=1$ and $\frac{k}{\eta}\le b'\le 1$, then the number of multiplications needed in a single iteration of Algorithm 12.21 is of order $\mathcal{O}(s(n)\log^\tau s(n))$, where $\tau=1$ except for $b=b'=1$, in which case $\tau=2$.

Proof This result is a direct consequence of (12.21) and the estimates of the number of nonzero entries of the matrices $\tilde{\mathbf{K}}_n$ and $\mathbf{E}_n$. The major computational effort is spent in step 2. The matrix-vector multiplication $\tilde{\mathbf{K}}_n\Phi_n^{(j)}$ needs $\mathcal{O}(s(n)\log^\tau s(n))$ multiplications. Owing to the special structure of the matrix $\mathbf{E}_n$, the equation $\mathbf{E}_n\Phi_2^{(j)} = \Phi_1^{(j)}$ can be solved by direct backward substitution, which requires $\mathcal{O}(s(n)\log^\tau s(n))$ multiplications. Hence the total number of multiplications needed in a single iteration is of order $\mathcal{O}(s(n)\log^\tau s(n))$.

12.6 A numerical example

We present a numerical example in this section to confirm the theoretical estimates of the convergence order and computational complexity.

Consider the eigen-problem

$$\mathcal{K}\phi(s) = \lambda\phi(s), \qquad s\in\Omega = [0,1], \tag{12.47}$$


where $\mathcal{K}$ is the integral operator with the weakly singular kernel

$$K(s,t) = \log|\cos(\pi s) - \cos(\pi t)|, \qquad s,t\in[0,1].$$

Let $X_n$ be the space of piecewise linear polynomials having a multiscale basis $\{w_{ij} : (i,j)\in U_n\}$. In this case $k=2$, $\mu=2$ and $s(n) = \dim X_n = 2^{n+1}$. We choose the basis for $X_0$:

$$w_{00}(t) = -3t + 2, \qquad w_{01}(t) = 3t - 1, \qquad t\in[0,1],$$

and the basis for $W_1$:

$$w_{10}(t) = \begin{cases} -\tfrac{9}{2}t + 1, & t\in[0,\tfrac12],\\[2pt] \tfrac{3}{2}t - 1, & t\in(\tfrac12,1], \end{cases} \qquad w_{11}(t) = \begin{cases} -\tfrac{3}{2}t + \tfrac12, & t\in[0,\tfrac12],\\[2pt] \tfrac{9}{2}t - \tfrac{7}{2}, & t\in(\tfrac12,1]. \end{cases}$$

The bases for $W_i$, $i>1$, are generated recursively by

$$w_{ij}(t) = \begin{cases} \sqrt{2}\,w_{i-1,j}(2t), & t\in[0,\tfrac12],\\ 0, & t\in(\tfrac12,1], \end{cases} \qquad j\in\mathbb{Z}_{2^{i-1}},$$

and

$$w_{i,2^{i-1}+j}(t) = \begin{cases} 0, & t\in[0,\tfrac12],\\ \sqrt{2}\,w_{i-1,j}(2t-1), & t\in(\tfrac12,1], \end{cases} \qquad j\in\mathbb{Z}_{2^{i-1}}.$$
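The level-0 and level-1 formulas together with the two-scale recursion above can be turned into a direct evaluator. The following sketch is an illustrative implementation (vectorized over $t$), not the code used for the experiments reported below; the midpoint quadrature at the end checks the vanishing zeroth moment of $w_{10}$.

```python
import numpy as np

def w(i, j, t):
    """Evaluate the multiscale basis function w_ij on [0, 1] (vectorized in t),
    using the level-0 and level-1 formulas and the two-scale recursion."""
    t = np.asarray(t, dtype=float)
    if i == 0:
        return -3.0 * t + 2.0 if j == 0 else 3.0 * t - 1.0
    if i == 1:
        if j == 0:
            return np.where(t <= 0.5, -4.5 * t + 1.0, 1.5 * t - 1.0)
        return np.where(t <= 0.5, -1.5 * t + 0.5, 4.5 * t - 3.5)
    half = 2 ** (i - 1)
    if j < half:                      # supported on the left half of [0, 1]
        return np.where(t <= 0.5, np.sqrt(2.0) * w(i - 1, j, 2.0 * t), 0.0)
    return np.where(t > 0.5, np.sqrt(2.0) * w(i - 1, j - half, 2.0 * t - 1.0), 0.0)

t = np.linspace(0.0, 1.0, 1001)
tm = 0.5 * (t[:-1] + t[1:])          # midpoint rule, exact for piecewise linears
moment0 = float(np.sum(w(1, 0, tm)) * (t[1] - t[0]))   # integral of w_10 is 0
```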

The multiscale functionals $\ell_{ij}$, $(i,j)\in U_n$, are chosen as follows:

$$\ell_{00} = \delta_{1/3}, \qquad \ell_{01} = \delta_{2/3},$$

$$\ell_{10} = \delta_{1/6} - \tfrac{3}{2}\delta_{1/3} + \tfrac{1}{2}\delta_{2/3}, \qquad \ell_{11} = \tfrac{1}{2}\delta_{1/3} - \tfrac{3}{2}\delta_{2/3} + \delta_{5/6},$$

and $\ell_{ij}$, $(i,j)\in U_n$, $i>1$, are also generated recursively. See [69] for the recursive generation of the functionals of higher levels.

As the exact eigenvalues and eigenfunctions are not known, for comparison purposes we use the approximate eigenvalues and eigenvectors of $\mathcal{K}_n$ with $n=15$ in place of those of $\mathcal{K}$. Let $\lambda$ denote the largest simple eigenvalue of $\mathcal{K}_n$ ($n=15$), let $P$ be the spectral projection associated with $\mathcal{K}_n$ ($n=15$) and $\lambda$, and let $\lambda_n$, $\phi_n$ be the largest (in magnitude) simple eigenvalue and associated eigenfunction of the truncated operator $\tilde{\mathcal{K}}_n$, respectively. It can be computed that $\lambda \approx -0.99999982793163$.

We apply the fast collocation method to eigen-problem (12.47). The corresponding matrix is truncated with the parameter choice (12.46) with $a=1$, $b=1$ and $b'=0.78$. The numerical algorithm is run on a PC with an Intel Core2 T5600 1.83-GHz CPU and 2GB RAM, and the programs are compiled using Visual C++ 2005 with a single thread.

The numerical results are listed in Tables 12.1 and 12.2. In Table 12.1, "Comp. rate" denotes the compression rate, which is defined as the ratio of


Table 12.1 Numerical results for eigenvalue computation (1)

 n   Comp. rate   |λ − λ_n|    r_1    ‖φ_n − Pφ_n‖_∞    r_2
 1      -         4.7362e-2     -       5.7466e-4        -
 2      -         1.4659e-2    1.69     1.2057e-4       2.25
 3    0.875       3.9724e-3    1.88     2.2607e-5       2.42
 4    0.672       1.0293e-3    1.95     4.0572e-6       2.48
 5    0.469       2.5869e-4    1.99     7.3048e-7       2.47
 6    0.306       6.4714e-5    2.00     1.3101e-7       2.48
 7    0.190       1.6102e-5    2.01     2.4330e-8       2.43
 8    0.115       3.8976e-6    2.05     4.5452e-9       2.42
 9    0.067       9.6189e-7    2.02     8.3415e-10      2.45
10    0.039       2.3518e-7    2.03     1.9261e-10      2.11
11    0.022       5.7356e-8    2.03     5.2633e-11      1.87
12    0.012       1.3001e-8    2.14     1.2906e-11      2.03
13    0.007       2.8441e-9    2.19     3.9536e-12      1.71

Table 12.2 Numerical results for eigenvalue computation (2)

 n   N = μ^{n+1}   t_1 (s)   t_2 (s)   N_A
 1        4         0.016     0.015    172
 2        8         0.032     0.031    160
 3       16         0.079     0.047    169
 4       32         0.234     0.031    167
 5       64         0.656     0.047    163
 6      128         1.703     0.063    160
 7      256         4.266     0.125    161
 8      512        10.31      0.281    167
 9     1024        24.27      0.579    156
10     2048        56.69      1.343    158
11     4096       135.3       2.859    153
12     8192       332.1       5.953    154
13    16384       711.1      12.65     154

the number of nonzero entries in $\tilde{\mathbf{K}}_n$ to that of the full matrix $\mathbf{K}_n$, that is, $\mathcal{N}(\tilde{\mathbf{K}}_n)/s(n)^2$; and $r_1$, $r_2$ denote the convergence orders of the eigenvalues and eigenfunctions, respectively. The numerical results show that the truncation does not ruin the convergence order, which agrees with the theoretical estimates presented in this chapter.
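The observed orders $r_1$ in Table 12.1 are simply ratios of consecutive errors on a $\log_2$ scale; since $s(n) = 2^{n+1}$, doubling the dimension should roughly quarter the eigenvalue error when $k=2$. The short check below recomputes $r_1$ from the tabulated errors.

```python
import numpy as np

# eigenvalue errors |lambda - lambda_n| copied from Table 12.1
errors = np.array([4.7362e-2, 1.4659e-2, 3.9724e-3, 1.0293e-3, 2.5869e-4,
                   6.4714e-5, 1.6102e-5, 3.8976e-6, 9.6189e-7, 2.3518e-7,
                   5.7356e-8, 1.3001e-8, 2.8441e-9])
# observed order between consecutive levels; values settle near k = 2
r1 = np.log2(errors[:-1] / errors[1:])
```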

In Table 12.2, $t_1$ records the time in seconds for generating the matrix $\tilde{\mathbf{K}}_n$, while $t_2$ and $N_A$ record, respectively, the time for solving the resulting discrete eigen-problem (12.45) and the number of iterations used in Algorithm 12.21 to obtain the corresponding results listed in Table 12.1. It can clearly be seen from the numerical results that most of the computing time is spent on generating the


coefficient matrix. The computing time spent on generating the matrix and on finding the solution shows a linear increase corresponding to the growth of the size of the matrix. Indeed, the power iteration method based on the matrix compression technique is a fast method.

12.7 Bibliographical remarks

The abstract framework for the eigenvalue approximation presented in Section 12.2 is an extension of the classical spectral approximation theory [3, 44, 212]. For the eigen-problems of compact linear operators, [44] is a good reference (see also Section A.2.7 in the Appendix). For the classical spectral approximation theory and the notion of spectral projection, the reader is referred to [3, 44, 145, 166, 167, 186, 212]. The material of the multiscale method for solving the eigen-problem is mainly chosen from the paper [70]. See [214, 215] for further developments along this line.

For more about the numerical solutions of the eigen-problem of compact integral operators, see [3, 8, 11, 26, 27, 31, 61, 212, 213, 245]. Analysis of numerical methods for the approximation of eigenvalues and eigenvectors of compact integral operators is well documented in the literature (see, for example, [8, 11, 15, 44, 47, 185, 209, 210, 213, 245]).


Appendix

Basic results from functional analysis

In this appendix we summarize some of the standard concepts and results from functional analysis in a form that is used throughout the book. Therefore, this section provides the reader with a convenient source for the background material needed to follow the ideas and arguments presented in previous chapters. Further discussion and detailed proofs of the concepts we review here can be found in standard texts on functional analysis, for example [1, 86, 183, 236, 276].

A.1 Metric spaces

A.1.1 Metric spaces

Definition A.1 Let $X$ be a nonempty set and $\rho$ a real-valued function defined on $X\times X$ satisfying the following properties:

(i) $\rho(x,y)\ge 0$, and $\rho(x,y)=0$ if and only if $x=y$;
(ii) $\rho(x,y) = \rho(y,x)$;
(iii) $\rho(x,y) \le \rho(x,z) + \rho(z,y)$.

In this case $\rho$ is called a metric function (or distance function) defined on $X$, and $(X,\rho)$ (or $X$) is called a metric space.

Definition A.2 A sequence $\{x_j : j\in\mathbb{N}\}$ in a metric space $X$ is said to converge to $x\in X$ as $j\to\infty$ if

$$\lim_{j\to\infty}\rho(x_j, x) = 0.$$

A sequence $\{x_j : j\in\mathbb{N}\}\subseteq X$ is called a Cauchy sequence if

$$\lim_{i,j\to\infty}\rho(x_i, x_j) = 0.$$


A subset $A$ of a metric space $X$ is said to be complete if every Cauchy sequence in $A$ converges to some element in $A$.

Definition A.3 Let $X$ be a metric space and let $S$ and $S'$ be subsets of $X$. If every neighborhood of any point $x$ in $S$ contains an element of $S'$, then $S'$ is said to be dense in $S$. If $S$ has a countable dense subset, then $S$ is said to be a separable set. If $X$ itself is separable, then $X$ is called a separable space.

Definition A.4 The ball with center $x\in X$ and radius $r$ is denoted by $B(x,r) = \{y : y\in X,\ \rho(x,y) < r\}$.

Of course, a set $A$ in $X$ is bounded if it is contained in some ball $B(x,r)$. Moreover, if $A$ and $B$ are bounded, then $A\cup B$ is bounded.

Definition A.5 A subset $A$ of a metric space $X$ is totally bounded if for every $\epsilon>0$ there is a finite set of points $\{x_j : j\in\mathbb{N}_m\}\subseteq X$ such that $A\subseteq\bigcup\{B(x_j,\epsilon) : j\in\mathbb{N}_m\}$.

Definition A.6 Let $S$ be a subset of a metric space $X$. If every sequence $\{x_j : j\in\mathbb{N}\}$ in $S$ has a convergent subsequence $\{x_{k_j} : j\in\mathbb{N}\}$, that is, there is an $x\in X$ such that $\lim_{j\to\infty}\rho(x_{k_j}, x) = 0$, then the subset $S$ is said to be relatively compact. Moreover, if the limit $x$ is always in $S$, then $S$ is said to be compact. The space $X$ is called a compact space if it has this property.

Certainly every compact set is closed and bounded (but not conversely). However, we have the following useful fact.

Theorem A.7 A subset $A$ of a metric space $X$ is compact if and only if it is totally bounded and complete.

A.1.2 Normed linear spaces

Definition A.8 Suppose that every pair of elements $x,y\in X$ can be combined by an operation called addition to yield a new element in $X$ denoted by $x+y$. Suppose also that for every complex (or real) number $a$ and every element $x\in X$ there is an operation called scalar multiplication, which yields a new element in $X$ denoted by $ax$. The set $X$ is said to be a linear space if it satisfies the following axioms:

(i) $x+y = y+x$;
(ii) $x+(y+z) = (x+y)+z$;
(iii) $X$ contains a unique element, denoted $0$, which satisfies $x+0 = x$ for all $x\in X$;
(iv) to each $x\in X$ there corresponds an element of $X$, denoted $-x$, such that $x+(-x) = 0$;
(v) $a(x+y) = ax+ay$;
(vi) $(a+b)x = ax+bx$;
(vii) $a(bx) = (ab)x$;
(viii) $1x = x$;
(ix) $0x = 0$;

where $a$ and $b$ are complex (or real) numbers.

Definition A.9 A norm on a linear space $X$ over a complex (or real) field is a real-valued function, denoted $\|\cdot\|$, satisfying the requirements that:

(i) $\|x\|\ge 0$, where the equality holds if and only if $x=0$;
(ii) $\|ax\| = |a|\,\|x\|$;
(iii) $\|x+y\| \le \|x\| + \|y\|$.

A linear space with a norm is called a normed linear space.

When we wish to indicate the connection of a norm to the space $X$ on which it is defined, we indicate it with the symbol $\|\cdot\|_X$. A normed linear space $X$ is a metric space with metric function $\rho(x,y) = \|x-y\|_X$.

Definition A.10 A normed linear space $X$ is called a Banach space if it is a complete metric space in the metric induced by its norm, $\rho(x,y) = \|x-y\|_X$.

A.1.3 Inner product spaces

Definition A.11 A linear space $X$ over a complex (or real) field is called an inner product space if for any two elements $x$ and $y$ of $X$ there is a uniquely associated complex (or real) number, called the inner product of $x$ and $y$ and denoted by $(x,y)$ (or, when needed, by $(x,y)_X$), which satisfies the requirements that:

(i) $(x,x)\ge 0$, and the equality holds if and only if $x=0$;
(ii) $(x,y) = \overline{(y,x)}$;
(iii) $(ax+by, z) = a(x,z) + b(y,z)$.

Every inner product satisfies the Cauchy-Schwarz inequality, namely, for every $x,y\in X$ we have that

$$|(x,y)| \le \|x\|\cdot\|y\|.$$

An inner product space is also a normed linear space. Specifically, we associate with each $x\in X$ the non-negative number $\|x\| = (x,x)^{1/2}$, and by the


Minkowski inequality

$$\|x+y\| \le \|x\| + \|y\|,$$

we immediately conclude that $\|\cdot\|_X$ is a norm on $X$. A characteristic feature of the norm on an inner product space is the following result.

Proposition A.12 (Parallelogram identity) If $X$ is an inner product space with norm $\|\cdot\|$, then for any $x,y\in X$,

$$\|x+y\|^2 + \|x-y\|^2 = 2\|x\|^2 + 2\|y\|^2.$$
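A quick numerical illustration (the vectors and dimension are arbitrary choices): the identity holds to rounding error for the Euclidean norm, which is induced by the standard inner product.

```python
import numpy as np

rng = np.random.default_rng(1)
x, y = rng.standard_normal(8), rng.standard_normal(8)   # arbitrary vectors
lhs = np.linalg.norm(x + y) ** 2 + np.linalg.norm(x - y) ** 2
rhs = 2.0 * np.linalg.norm(x) ** 2 + 2.0 * np.linalg.norm(y) ** 2
# lhs == rhs up to rounding, because the Euclidean norm comes from an inner
# product; for a norm like the sup norm the two sides typically differ
```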

Definition A.13 An inner product space $X$ is called a Hilbert space if it is a complete metric space under the metric induced by its inner product, $\rho(x,y) = (x-y,\ x-y)^{1/2}$.

A.1.4 Function spaces $C(\Omega)$ and $L^p(\Omega)$ ($1\le p\le\infty$)

Let $\Omega$ be a bounded domain in a $d$-dimensional Euclidean space $\mathbb{R}^d$, where $d$ is a positive integer. We use $x = [x_j : j\in\mathbb{N}_d]$ to denote a vector in $\mathbb{R}^d$ and define $C(\Omega)$ to be the linear space (under pointwise addition and scalar multiplication) of uniformly continuous complex-valued functions on $\Omega$. Moreover, the notation

$$\operatorname{supp}(u) = \text{closure of } \{x : x\in\Omega,\ u(x)\ne 0\}$$

stands for the support of the function $u$ on $\Omega$. The symbol $C_0(\Omega)$ indicates the linear subspace of $C(\Omega)$ consisting of functions with support contained inside $\Omega$. Also, $C(\Omega)$ equipped with the norm

$$\|u\|_{0,\infty} = \max\{|u(x)| : x\in\Omega\}$$

is a Banach space. The symbol $L^\infty(\Omega)$ is used to denote the linear space of complex-valued measurable functions $u$ which are essentially bounded; that is, there is a set $E\subseteq\Omega$ of measure zero such that $u$ is bounded on $\Omega\setminus E$. Equipped with the norm

$$\|u\|_{0,\infty} = \operatorname*{ess\,sup}_{x\in\Omega}|u(x)| = \inf\big\{\sup\{|u(x)| : x\in\Omega\setminus E\} : \operatorname{meas}(E) = 0\big\},$$

$L^\infty(\Omega)$ is a Banach space. Let $L^p(\Omega)$, $1\le p<\infty$, denote the linear space of complex-valued measurable functions $u$ such that $|u|^p$ is Lebesgue-integrable on $\Omega$. For any $u\in L^p(\Omega)$ we define

$$\|u\|_{0,p} = \left(\int_\Omega |u(x)|^p\,dx\right)^{1/p}$$


and recall that in this norm $L^p(\Omega)$ becomes a Banach space. In particular, $L^2(\Omega)$ is a Hilbert space with inner product

$$(u,v)_0 = \int_\Omega u(x)\overline{v(x)}\,dx$$

and corresponding norm $\|\cdot\|_{0,2}$ as defined above. An important and useful fact is that $C_0(\Omega)$ is dense in $L^p(\Omega)$ for $1\le p<\infty$.

Theorem A.14 (Fubini theorem) If $u$ is a measurable function defined on $\mathbb{R}^{n+m}$ and at least one of the integrals

$$I_1 = \int_{\mathbb{R}^{n+m}} |u(x,y)|\,dx\,dy, \qquad I_2 = \int_{\mathbb{R}^m}\left(\int_{\mathbb{R}^n} |u(x,y)|\,dx\right)dy$$

and

$$I_3 = \int_{\mathbb{R}^n}\left(\int_{\mathbb{R}^m} |u(x,y)|\,dy\right)dx$$

exists and is finite, then

(i) for almost all $y\in\mathbb{R}^m$, $u(\cdot,y)\in L^1(\mathbb{R}^n)$ and $\int_{\mathbb{R}^n} u(x,\cdot)\,dx\in L^1(\mathbb{R}^m)$;
(ii) for almost all $x\in\mathbb{R}^n$, $u(x,\cdot)\in L^1(\mathbb{R}^m)$ and $\int_{\mathbb{R}^m} u(\cdot,y)\,dy\in L^1(\mathbb{R}^n)$;
(iii) $I_1 = I_2 = I_3$.
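Conclusion (iii) is easy to check numerically for an integrable function such as $u(x,y) = x\,e^{-y}$ on $[0,1]^2$, where both iterated integrals equal $\tfrac12(1-e^{-1})$; the trapezoidal discretization below is an illustrative choice.

```python
import numpy as np

# u(x, y) = x * exp(-y) on [0, 1]^2; both iterated integrals should agree
g = np.linspace(0.0, 1.0, 2001)      # shared 1-D grid for x and y
h = g[1] - g[0]

def trap(v):
    """Composite trapezoidal rule on the uniform grid above."""
    return h * (v.sum() - 0.5 * (v[0] + v[-1]))

I2 = trap(trap(g) * np.exp(-g))      # inner integral over x gives 1/2
I3 = trap(g * trap(np.exp(-g)))      # inner integral over y gives 1 - 1/e
# I2 and I3 both approximate (1/2)(1 - e^{-1})
```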

A.1.5 Function spaces $C^m(\Omega)$ and $W^{m,p}(\Omega)$ ($m\ge 1$, $1\le p\le\infty$)

We generally use $\alpha = [\alpha_j : j\in\mathbb{N}_d]$ for a vector in $\mathbb{R}^d$ with non-negative integer components. The set of all such lattice vectors is denoted by $\mathbb{N}^d$, and with each such vector we associate $|\alpha| = \alpha_1+\cdots+\alpha_d$. We find convenient the notation $\mathbb{Z}^d_m := \{\alpha : |\alpha|\le m-1\}$. The $\alpha$th derivative operator is denoted by

$$D^\alpha = \frac{\partial^{|\alpha|}}{\partial x_1^{\alpha_1}\cdots\partial x_d^{\alpha_d}},$$

and $|\alpha|$ is called the (total) order of the derivative. When $\alpha$ is the zero vector, we interpret $D^\alpha$ as the identity operator. Let $C^m(\Omega)$ be the closed subspace of $C(\Omega)$ of all the functions which have continuous derivatives up to and including the $m$th-order derivatives, that is, $D^\alpha u\in C(\Omega)$ for $\alpha\in\mathbb{Z}^d_{m+1}$. $C^m(\Omega)$ is a Banach space with the norm

$$\|u\|_{m,\infty} = \sum_{\alpha\in\mathbb{Z}^d_{m+1}} \|D^\alpha u\|_{0,\infty}. \tag{A.1}$$

In a similar manner the linear space $C^m(\bar{\Omega})$ is defined. We also set

$$C^\infty(\Omega) = \bigcap_{m\in\mathbb{N}_0} C^m(\Omega)$$


and

$$C_0^\infty(\Omega) = \{u\in C^\infty(\Omega) : \text{the support of } u \text{ is inside } \Omega \text{ and bounded}\}.$$

Neither of these linear spaces is a Banach space. However, the family of norms given in (A.1), as $m$ varies over $\mathbb{N}_0$, determines a Fréchet topology on these spaces.

We now describe Sobolev spaces. For a non-negative integer $m$ and $1\le p\le\infty$, we define the linear space

$$W^{m,p}(\Omega) = \{u : u\in L^p(\Omega),\ D^\alpha u\in L^p(\Omega),\ \alpha\in\mathbb{Z}^d_{m+1}\},$$

and when $m=0$ we set

$$W^{0,p}(\Omega) = L^p(\Omega).$$

When $1\le p<\infty$, we define on $W^{m,p}(\Omega)$ the norm

$$\|u\|_{m,p} = \left(\sum_{\alpha\in\mathbb{Z}^d_{m+1}} \|D^\alpha u\|_{0,p}^p\right)^{1/p},$$

and for $p=\infty$ we set

$$\|u\|_{m,\infty} = \max\{\|D^\alpha u\|_{0,\infty} : \alpha\in\mathbb{Z}^d_{m+1}\}.$$

The space $W^{m,p}(\Omega)$ is a Banach space with norm $\|\cdot\|_{m,p}$. Let $W_0^{m,p}(\Omega)$ be the closure of $C_0^\infty(\Omega)$ in $W^{m,p}(\Omega)$. In particular, for $p=2$ it is standard notation to use $H^m(\Omega)$ and $H_0^m(\Omega)$ for $W^{m,2}(\Omega)$ and $W_0^{m,2}(\Omega)$, respectively, and likewise to denote the norm $\|\cdot\|_{m,2}$ by $\|\cdot\|_m$. Both $W^{m,p}(\Omega)$ and $W_0^{m,p}(\Omega)$ are called Sobolev spaces.

Theorem A.15 (Properties of Sobolev spaces)

(i) For any $p$ ($1\le p\le\infty$), $W^{m,p}(\Omega)$ is a Banach space. In particular, the space $H^m(\Omega)$ is a Hilbert space with inner product

$$(u,v)_m = \sum_{|\alpha|\le m} (D^\alpha u,\ D^\alpha v)_0, \qquad u,v\in H^m(\Omega).$$

(ii) For any $1\le p<\infty$, the space $W^{m,p}(\Omega)$ is the closure of the set $\{u : u\in C^\infty(\Omega),\ \|u\|_{m,p}<\infty\}$.
(iii) For any $1\le p<\infty$, the space $W^{m,p}(\Omega)$ is separable.
(iv) For any $1<p<\infty$, the space $W^{m,p}(\Omega)$ is reflexive.

Definition A.16 Let $(\cdot,\cdot)$ and $(\cdot,\cdot)'$ be two inner products on the same linear space $X$. If the two norms $\|\cdot\|$ and $\|\cdot\|'$ induced by these inner products

use available at httpwwwcambridgeorgcoreterms httpdxdoiorg101017CBO9781316216637015Downloaded from httpwwwcambridgeorgcore Lund University Libraries on 17 Oct 2016 at 163316 subject to the Cambridge Core terms of

494 Basic results from functional analysis

are equivalent namely there are positive constants c1 and c2 such that for allu isin X

c1u le uprime le c2uthen the two inner products are said to be equivalent

With equivalent inner products all results based on convergence in X are thesame as both norms determine the same metric topology on X

Proposition A17 (Poincaré inequality) Let $\rho$ be the diameter of a bounded domain $\Omega$ in the $d$-dimensional Euclidean space $\mathbb{R}^d$. Then the following Poincaré inequality holds for all $u \in H^1_0(\Omega)$:
$$\|u\|_0 \le \frac{\rho}{\sqrt{d}}\,\|\nabla u\|_0.$$

It follows from the Poincaré inequality that
$$(u, v)'_1 = \sum_{k \in \mathbb{N}_d} \Big(\frac{\partial u}{\partial x_k}, \frac{\partial v}{\partial x_k}\Big)_0$$
is an inner product on $H^1_0(\Omega)$ satisfying, for all $u \in H^1_0(\Omega)$, the inequality
$$(u, u)'_1 \le \|u\|_1^2 \le \big(1 + d^{-1}\rho^2\big)(u, u)'_1.$$
Hence the new inner product $(\cdot,\cdot)'_1$ and the original inner product $(\cdot,\cdot)_1$ are equivalent.
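As an illustrative numerical sketch (an assumed toy setting, not part of the text), both the Poincaré inequality and this equivalence can be checked for $\Omega = (0, 1) \subset \mathbb{R}^1$, where $d = 1$ and $\rho = 1$, with the test function $u(x) = \sin(\pi x) \in H^1_0(0, 1)$:

```python
import numpy as np

# Toy check (assumed setting, not from the text): Omega = (0,1), d = 1, rho = 1,
# u(x) = sin(pi x) in H^1_0(0,1). We verify ||u||_0 <= (rho/sqrt(d))||grad u||_0
# and (u,u)'_1 <= ||u||_1^2 <= (1 + rho^2/d)(u,u)'_1 by trapezoidal quadrature.
x = np.linspace(0.0, 1.0, 20001)
u = np.sin(np.pi * x)
du = np.pi * np.cos(np.pi * x)

def sq_l2(v):
    # trapezoidal rule for the integral of v^2 over (0, 1)
    return float(np.sum((v[:-1] ** 2 + v[1:] ** 2) / 2.0 * np.diff(x)))

a, b = sq_l2(u), sq_l2(du)      # ||u||_0^2 and ||grad u||_0^2
print(a <= b)                   # Poincare inequality with rho/sqrt(d) = 1: True
print(b <= a + b <= 2.0 * b)    # norm equivalence with 1 + rho^2/d = 2: True
```

Here $a \approx 1/2$ and $b \approx \pi^2/2$, so both inequalities hold with room to spare.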

When $m > 1$, observe that if $u \in H^m_0(\Omega)$ then $D^\alpha u \in H^1_0(\Omega)$ for $|\alpha| \le m - 1$, so that by repeatedly applying the Poincaré inequality we verify that $\sum_{|\alpha| = m}(D^\alpha u, D^\alpha v)_0$ is an inner product on $H^m_0(\Omega)$ which is equivalent to the original inner product. This leads to the following proposition.

Proposition A18 In $H^m_0(\Omega)$ the semi-norm defined by
$$|u|_m = \Bigg(\sum_{|\alpha| = m} \|D^\alpha u\|_0^2\Bigg)^{1/2}$$
is equivalent to the original norm $\|u\|_m$.

A2 Linear operator theory

A2.1 Linear operators

Definition A19 Let $X$ and $Y$ be normed linear spaces and $T$ an operator from $X$ into $Y$. $T$ is called a linear operator if for any $x, y \in X$ and any complex numbers $a$ and $b$,
$$T(ax + by) = aTx + bTy.$$
In particular, if $Y$ is the complex (or real) number field, then $T$ is called a linear functional.

Definition A20 A linear operator is said to be a bounded operator if there is a positive constant $c$ such that for any $x \in X$,
$$\|Tx\|_Y \le c\|x\|_X.$$
The infimum of the values of $c$ satisfying the above inequality is called the norm of the linear operator, denoted $\|T\|$.

A linear operator is bounded if and only if it is continuous. We denote by $B(X, Y)$ the set of all bounded linear operators from $X$ to $Y$. If $Y$ is a Banach space, then $B(X, Y)$ is a Banach space relative to the operator norm. When $Y = \mathbb{R}$, the elements of $B(X, \mathbb{R})$ are bounded linear functionals on $X$. The space $B(X, \mathbb{R})$ is called the dual space of $X$ and is denoted by $X^*$; see also Definition A36.

Definition A21 Let $T : X \to Y$ be a linear operator. We denote its domain by $D(T)$, a subspace of $X$, and its range by $R(T)$, which is a subspace of $Y$. The linear operator $T$ is said to be a closed operator if for any sequence $\{x_j : j \in \mathbb{N}\} \subseteq D(T)$ satisfying $\lim_{j\to\infty} x_j = x$ in $X$ and $\lim_{j\to\infty} Tx_j = y$ in $Y$, it follows that $x \in D(T)$ and $y = Tx$.

We note that a linear operator $T$ is closed if and only if its graph
$$G(T) = \{(x, Tx) : x \in D(T)\}$$
is a closed subspace of $X \times Y$. In general, a closed operator is not necessarily continuous. However, under certain conditions this is true (see the closed graph theorem below). Moreover, if the domain $D(T)$ of a continuous linear operator $T : D(T) \to Y$ is a closed subspace of $X$, then $T$ is a closed operator.

A2.2 Open mapping theorems and uniform boundedness theorem

Theorem A22 (Open mapping theorem) If $T \in B(X, Y)$, where $X, Y$ are Banach spaces, $R(T) = Y$ and $\varepsilon > 0$, then there exists an $\eta > 0$ such that
$$\{y : y \in Y,\ \|y\|_Y < \eta\} \subseteq \{Tx : x \in X,\ \|x\|_X < \varepsilon\}.$$
This theorem states that for any open set $G \subseteq X$, the operator $T$ maps the set $D(T) \cap G$ to an open set in $Y$.


Theorem A23 (Inverse operator theorem) If the operator $T$ satisfies all the properties stated in Theorem A22 and in addition is one-to-one, then the inverse operator $T^{-1}$ is a continuous linear operator.

Theorem A24 (Closed graph theorem) If $X$ and $Y$ are Banach spaces and $T : X \to Y$ is a linear operator such that the domain $D(T)$ and the graph of $T$ are closed sets of $X$ and $X \times Y$, respectively, then $T$ is continuous.

A statement equivalent to the closed graph theorem is that if $X$ and $Y$ are Banach spaces and $T : X \to Y$ is a closed linear operator such that its domain $D(T)$ is a closed subspace of $X$, then $T$ is continuous.

Theorem A25 (Uniform boundedness theorem) If $X$ is a Banach space, $Y$ a normed linear space, and the sequence of operators $\{T_j : j \in \mathbb{N}\} \subseteq B(X, Y)$ has the property that for any $x \in X$, $\sup\{\|T_j x\|_Y : j \in \mathbb{N}\} < \infty$, then
$$\sup\{\|T_j\| : j \in \mathbb{N}\} < \infty.$$

Corollary A26 If $X$ is a Banach space, $Y$ a normed linear space, and the sequence of operators $\{T_n : n \in \mathbb{N}\} \subseteq B(X, Y)$ has the property that $\lim_{n\to\infty} T_n x = Tx$ for all $x \in X$, then $T \in B(X, Y)$.

Corollary A27 (Banach–Steinhaus theorem) If $X$ and $Y$ are Banach spaces, $U$ a dense subset of $X$, $\{T_n : n \in \mathbb{N}\} \subseteq B(X, Y)$ and $T \in B(X, Y)$, then $\lim_{n\to\infty} T_n x = Tx$ for all $x \in X$ if and only if
$$\lim_{n\to\infty} T_n x = Tx \ \text{ for all } x \in U$$
and
$$\sup\{\|T_n\| : n \in \mathbb{N}\} < \infty.$$

A2.3 Orthogonal projection

Let $K$ be a closed convex set in a Hilbert space $X$ and $x \in X$. The shortest distance problem is to find $y \in K$ such that
$$\|x - y\| = \min\{\|x - w\| : w \in K\}.$$
It can be shown that this problem is equivalent to finding a $y \in K$ which solves the variational inequality that for all $w \in K$,
$$(x - y, w - y) \le 0.$$

Theorem A28 The above shortest distance problem is uniquely solvable.
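As a concrete sketch (the box $K = [0, 1]^3$ and the componentwise clipping formula for its projection are assumed illustrative choices, not taken from the text), the variational inequality can be tested numerically:

```python
import numpy as np

# Sketch: for K = [0,1]^3 (closed and convex) the shortest-distance problem is
# solved by componentwise clipping; the solution y must satisfy the variational
# inequality (x - y, w - y) <= 0 for every w in K.
def project_box(x):
    return np.clip(x, 0.0, 1.0)

x = np.array([1.7, -0.4, 0.6])
y = project_box(x)               # y = [1.0, 0.0, 0.6]

rng = np.random.default_rng(1)
ws = rng.random((1000, 3))       # random points w in K
print(np.all((ws - y) @ (x - y) <= 1e-12))  # True
```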


Theorem A29 (Orthogonal projection theorem) Let $M$ be a closed subspace of the Hilbert space $X$ and $M^\perp$ be the orthogonal complement of $M$, that is,
$$M^\perp = \{y \in X : (y, x) = 0,\ x \in M\}.$$
Then $X$ can be decomposed as a direct sum of $M$ and $M^\perp$, that is, $X = M \oplus M^\perp$.

Corollary A30 In a Hilbert space $X$, a subspace $M$ is dense in $X$ if and only if $M^\perp = \{0\}$.

A2.4 Riesz representation theorem

If $X$ is a Hilbert space and $x \in X$, then the linear functional defined for all $y \in X$ by
$$\ell(y) = (y, x)$$
is a bounded linear functional on $X$ with norm $\|x\|$. The converse is given by the Riesz representation theorem.

Theorem A31 (Riesz representation theorem) If $X$ is a Hilbert space and $\ell$ is a bounded linear functional defined on $X$, then there is a unique $x \in X$ such that for all $y \in X$,
$$\ell(y) = (y, x),$$
and in that case $\|\ell\| = \|x\|$.

Given a real Hilbert space $X$ and $\ell \in X^*$, a method of finding the corresponding unique $x \in X$ follows by establishing that the variational problem
$$\inf\{\|w\|_X^2 - 2\ell(w) : w \in X\}$$
has a solution.

A2.5 Hahn–Banach extension theorem

Theorem A32 (Hahn–Banach extension theorem) If $\ell_0$ is a bounded linear functional defined on a subspace $M$ of a normed linear space $X$, then there is a norm-preserving extension $\ell \in X^*$, that is, $\ell(x) = \ell_0(x)$ for all $x \in M$ and $\|\ell\| = \|\ell_0\|$.

We note several useful corollaries of the Hahn–Banach theorem.

Corollary A33 For any nonzero $x \in X$ there exists a bounded linear functional $\ell \in X^*$ of norm one such that $\ell(x) = \|x\|_X$.


Corollary A34 For any closed subspace $M \subseteq X$ and $x \in X$,
$$\max\{\ell(x) : \ell \in X^*,\ \|\ell\| \le 1,\ \ell(y) = 0,\ y \in M\} = \inf\{\|x - y\|_X : y \in M\}.$$

Corollary A35 A linear subspace $M \subseteq X$ is dense in $X$ if and only if there is no nonzero bounded linear functional which vanishes on $M$.

A2.6 Compactness

Definition A36 The dual space of a normed linear space $X$, denoted $X^*$, is a Banach space with norm
$$\|\ell\| = \sup\{|\ell(x)| : x \in X,\ \|x\| \le 1\}.$$
For each $x \in X$ we define an $\hat{x} \in X^{**}$ by setting, for each $\ell \in X^*$,
$$\langle \ell, \hat{x}\rangle = \langle x, \ell\rangle,$$
thereby defining a mapping $\tau : X \to X^{**}$ as $\tau(x) = \hat{x}$.

Proposition A37 If $X$ is a normed linear space and $X^{**}$ its double dual, then the mapping $\tau$ defined above is a one-to-one, norm-preserving linear operator.

This proposition implies, from the perspective of the normed linear space structure, that $X$ can be considered as the subspace $\tau(X)$ of $X^{**}$. In general, $\tau(X)$ is a proper subspace of $X^{**}$. When $\tau(X) = X^{**}$, $X$ is said to be reflexive.

Definition A38 Let $X$ be a normed linear space. A sequence $\{x_j : j \in \mathbb{N}\} \subseteq X$ is said to be a weak Cauchy sequence if for each $\ell \in X^*$ the sequence of scalars $\{\ell(x_j) : j \in \mathbb{N}\}$ is a Cauchy sequence, and a set $S \subseteq X$ is said to be weakly bounded if for each $\ell \in X^*$ the set $\ell(S)$ is bounded. The sequence $\{x_j : j \in \mathbb{N}\} \subseteq X$ is said to be weakly convergent in $X$ if there exists an $x \in X$ such that for all $\ell \in X^*$,
$$\lim_{j\to\infty} \ell(x_j) = \ell(x).$$
In this case $x$ is called the weak limit of the sequence. Moreover, a sequence $\{\ell_j : j \in \mathbb{N}\} \subseteq X^*$ is said to be weak* convergent if there exists an $\ell \in X^*$ such that for all $x \in X$,
$$\lim_{j\to\infty} \ell_j(x) = \ell(x),$$
where $\ell$ is called the weak* limit of the sequence.

From Theorem A25 we have the following useful fact


Corollary A39 Every weakly bounded subset of a normed linear space is norm bounded.

We also recall the following results

Theorem A40 A reflexive Banach space is weakly complete. In particular, every Hilbert space is weakly complete. Moreover, the dual space $X^*$ of a Banach space $X$ is weak* complete.

Definition A41 Let $X$ be a normed linear space. A subset $S \subseteq X$ is said to be weak-relatively (sequentially) compact if every sequence in $S$ contains a weakly convergent subsequence.

Theorem A42 Let $X$ be a separable and reflexive Banach space. A subset $S \subseteq X$ is weak-relatively (sequentially) compact if and only if $S$ is bounded.

This theorem implies that a subset $S \subseteq W^{m,p}(\Omega)$, where $m \in \mathbb{N}$ and $1 < p < \infty$, is weak-relatively (sequentially) compact if and only if $S$ is bounded.

Theorem A43 (Arzelà–Ascoli theorem) A subset $S \subseteq C(\Omega)$ is relatively (sequentially) compact if $S$ is pointwise bounded and equicontinuous, that is,

(i) for any $x \in \Omega$, $\sup\{|u(x)| : u \in S\} < \infty$;
(ii) $\lim_{\delta \to 0^+} \sup\{|u(x_1) - u(x_2)| : |x_1 - x_2| \le \delta,\ u \in S\} = 0$.

Theorem A44 (Kolmogorov theorem) A subset $U \subseteq L^p[0, 1]$, $1 \le p < \infty$, is relatively compact if and only if the following conditions are satisfied:

(i) $\sup_{f \in U} \|f\|_p < \infty$;
(ii) $\lim_{t\to 0} \int_0^{1-t} |f(t + s) - f(s)|^p\, ds = 0$ uniformly in $f \in U$;
(iii) $\lim_{t\to 0} \int_{1-t}^1 |f(s)|^p\, ds = 0$ uniformly in $f \in U$.

A2.7 Compact operators and Fredholm theorems

Definition A45 Let $X$ and $Y$ be Banach spaces, $D \subseteq X$, and $K : D \to Y$ an operator. $K$ is said to be a relatively compact operator if for any bounded set $S \subseteq D$, $K(S)$ is a relatively compact set in $Y$. $K$ is said to be a compact operator if it is continuous and relatively compact. $K$ is said to be a completely continuous operator if for any sequence $\{x_n : n \in \mathbb{N}\}$ in $D$ weakly converging to $x$, it always follows that $\{Kx_n : n \in \mathbb{N}\}$ converges to $Kx$ in $Y$.

Proposition A46 Let $X$ and $Y$ be Banach spaces and $K : X \to Y$ an operator.

(i) If $K$ is a compact linear operator, then $K$ is completely continuous.
(ii) If $X$ is reflexive and $K$ (not necessarily linear) is completely continuous, then $K$ is compact.

This proposition implies that on a reflexive Banach space a compact linear operator and a completely continuous linear operator are the same.

Proposition A47 Let $X$ and $Y$ be Banach spaces and $K : X \to Y$ a linear operator.

(i) If $K$ is relatively compact, then $K$ is continuous.
(ii) If $K$ is bounded and the range $R(K)$ is finite dimensional, then $K$ is compact.
(iii) If $K$ is compact, then the range $R(K)$ is separable.
(iv) If $K$ is compact, then the adjoint operator $K^* : Y^* \to X^*$ is also compact.
(v) If $\{K_j : j \in \mathbb{N}\}$ is a sequence of compact operators from $X$ to $Y$ converging uniformly (in the operator norm) to $K$, then $K$ is compact.

Let $X$ be a Hilbert space with inner product $(\cdot,\cdot)_X$ and norm $\|\cdot\|_X$, and let $K : X \to X$ be a linear compact operator. Consider the linear operator equation
$$(I - K)u = f, \quad f \in X, \qquad \text{(a)}$$
and its adjoint operator equation
$$(I - K^*)v = g, \quad g \in X, \qquad \text{(b)}$$
where $I$ is the identity operator and $K^*$ is the adjoint operator of $K$. We have the following basic results.

Theorem A48 (First Fredholm theorem) If $K : X \to X$ is a compact linear operator, then the following statements hold.

(i) Equation (a) is uniquely solvable for an arbitrarily given $f \in X$ if and only if equation (b) is so for an arbitrarily given $g \in X$.
(ii) Equation (a) (or equation (b)) is uniquely solvable for an arbitrarily given $f \in X$ (or $g \in X$) if and only if the corresponding homogeneous equation
$$(I - K)u = 0 \quad (\text{or } (I - K^*)v = 0)$$
has only the zero solution.
(iii) In the case that equation $(I - K)u = 0$ has only the zero solution, the inverse operator $(I - K)^{-1}$ exists on the entire space $X$ and is bounded, so that the solution $u \in X$ of equation (a) satisfies
$$\|u\|_X \le c\|f\|_X$$
for a constant $c > 0$ independent of $f \in X$.
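The bound in part (iii) is what makes discretizations of second-kind equations such as (a) stable. As an illustrative sketch (the kernel $k(s,t) = st$, the right-hand side, and this particular Nyström scheme are assumed examples, not taken from the text), one can solve $(I - K)u = f$ numerically by replacing the integral operator with a quadrature sum:

```python
import numpy as np

# Nystrom sketch for (I - K)u = f on [0,1], (Ku)(s) = int_0^1 k(s,t) u(t) dt,
# with the illustrative kernel k(s,t) = s*t. Choosing f(s) = 2s/3 makes
# u(s) = s the exact solution, since K maps u(t) = t to s/3.
def nystrom_solve(kernel, f, n=10):
    xg, wg = np.polynomial.legendre.leggauss(n)  # Gauss-Legendre on [-1, 1]
    t = 0.5 * (xg + 1.0)                         # nodes mapped to [0, 1]
    w = 0.5 * wg                                 # weights rescaled
    A = np.eye(n) - kernel(t[:, None], t[None, :]) * w[None, :]
    return t, np.linalg.solve(A, f(t))           # u at the quadrature nodes

t, u = nystrom_solve(lambda s, tt: s * tt, lambda s: 2.0 * s / 3.0)
print(np.max(np.abs(u - t)))  # error against the exact solution u(s) = s
```

Since the quadrature integrates the low-degree integrand exactly here, the discrete solution matches $u(s) = s$ to machine precision.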


Theorem A49 (Second Fredholm theorem) If $K : X \to X$ is a compact linear operator and equation $(I - K)u = 0$ has nonzero solutions, then among all its solutions there are only finitely many that are linearly independent. In this case equation $(I - K^*)v = 0$ has the same number of linearly independent solutions.

Theorem A50 (Third Fredholm theorem) If $K : X \to X$ is a compact linear operator, then equation (a) is solvable if and only if the given $f \in X$ is orthogonal to all the solutions of the equation $(I - K^*)v = 0$. In this case equation (a) has one and only one solution $u \in X$ that is orthogonal to all the solutions of the equation $(I - K)u = 0$, and it satisfies $\|u\|_X \le c\|f\|_X$ for a constant $c > 0$ independent of the given $f \in X$.

We now consider the following operator equation:
$$(\lambda I - K)u = f, \quad f \in X, \qquad \text{(c)}$$
where $\lambda$ is a complex parameter. If for an arbitrarily given $f \in X$ equation (c) has a unique solution corresponding to a complex value of $\lambda$, and $(\lambda I - K)^{-1}$ is bounded, then this $\lambda$ is called a regular value of $K$. If, corresponding to a complex value of $\lambda$, the homogeneous equation $(\lambda I - K)u = 0$ has nonzero solutions, then this $\lambda$ is called an eigenvalue of $K$, while each corresponding nonzero solution is called an eigenfunction associated with this $\lambda$. The maximum number of linearly independent eigenfunctions is called the geometric multiplicity of the corresponding eigenvalue $\lambda$, which can be either finite or infinite.

In general it is quite possible that there exists a number $\lambda$ for which $(\lambda I - K)^{-1}$ exists but is not everywhere defined in $X$. However, if $K$ is a compact linear operator this is impossible. In this case it is also impossible to have an infinite geometric multiplicity (as shown in the following theorem).

Theorem A51 Let $K : X \to X$ be a compact linear operator.

(i) If $\lambda \neq 0$, then $\lambda$ is a regular value of $K$ if and only if $\lambda$ is not an eigenvalue of $K$.
(ii) If $\lambda \neq 0$ is an eigenvalue of $K$, then the geometric multiplicity of $\lambda$ is finite, and $\bar{\lambda}$ is an eigenvalue of the adjoint operator $K^*$ of $K$ with the same geometric multiplicity as $\lambda$.
(iii) If $\lambda \neq 0$ is an eigenvalue of $K$, then equation (c) is solvable if and only if the given $f \in X$ is orthogonal to all eigenfunctions associated with the eigenvalue $\bar{\lambda}$ of $K^*$; in this case equation (c) has one and only one solution that is orthogonal to all eigenfunctions of $K$ associated with $\lambda$.
(iv) The family of eigenfunctions corresponding to different eigenvalues of $K$ is linearly independent.


Theorem A52 (Fourth Fredholm theorem) Let $K : X \to X$ be a compact linear operator. For any constant $c > 0$ there are only finitely many eigenvalues of $K$ located outside the disc of radius $c$ centered at zero. Consequently, $K$ either has finitely many eigenvalues or has countably many eigenvalues converging to zero.

According to this theorem, if a compact linear operator $K$ has nonzero eigenvalues, then they can be arranged in decreasing order of absolute value (i.e., $|\lambda_n| \ge |\lambda_{n+1}|$, $n \in \mathbb{N}$), in a finite or infinite sequence, where the number of appearances of any particular eigenvalue in this ordering equals its geometric multiplicity. If this sequence is infinite, then $\lim_{n\to\infty} \lambda_n = 0$. Also, the eigenfunctions associated with this ordering can be chosen so that they are all linearly independent.

The above results can be extended directly to Banach spaces. As a special case, if the compact linear operator $K$ is self-adjoint and $K \neq 0$, then the above ordering of eigenvalues and their corresponding eigenfunctions is nonempty, all the eigenvalues are real, and the eigenfunctions can be chosen so that they constitute an orthonormal basis of the Hilbert space $X$.

Theorem A53 (Hilbert–Schmidt theorem) If $K : X \to X$ is a nonzero self-adjoint compact linear operator, then for any $u \in X$, $Ku$ has a convergent Fourier series expansion in an orthonormal basis $\{e_n : n \in \mathbb{N}\}$ of $X$:
$$Ku = \sum_{n\in\mathbb{N}} (Ku, e_n)_X e_n = \sum_{n\in\mathbb{N}} \lambda_n (u, e_n)_X e_n.$$
Moreover, if $\lambda = 0$ is not an eigenvalue of $K$, then every $u \in X$ has a convergent Fourier series expansion in this orthonormal basis:
$$u = \sum_{n\in\mathbb{N}} (u, e_n)_X e_n.$$
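A finite-dimensional sketch of this expansion (an illustration with an assumed random symmetric matrix, not an example from the text): a real symmetric matrix is a self-adjoint operator on $\mathbb{R}^n$, and the identity $Ku = \sum_n \lambda_n (u, e_n) e_n$ can be checked directly against its orthonormal eigenbasis:

```python
import numpy as np

# Finite-dimensional illustration of the Hilbert-Schmidt expansion: eigh
# returns an orthonormal eigenbasis {e_n} with real eigenvalues {lam_n}
# for the symmetric matrix K, so Ku = sum_n lam_n (u, e_n) e_n.
rng = np.random.default_rng(0)
A = rng.standard_normal((5, 5))
K = (A + A.T) / 2.0              # symmetric, hence self-adjoint on R^5
lam, E = np.linalg.eigh(K)       # columns of E are orthonormal eigenvectors

u = rng.standard_normal(5)
expansion = sum(lam[n] * (u @ E[:, n]) * E[:, n] for n in range(5))
print(np.allclose(K @ u, expansion))  # True
```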

A3 Invariant sets

In this section we present a proof of the existence of an invariant set associated with a finite number of contractions on a complete metric space, as this result is important in several chapters of this book. We follow closely the presentation of the fundamental paper of Hutchinson [148].

Let $X = (X, d)$ be a metric space and $F$ a function such that $F : X \to X$. If there is a constant $c \in (0, 1)$ such that
$$d(F(x), F(y)) \le c\, d(x, y)$$
for all $x, y \in X$, then $F$ is called a contraction mapping on $X$. A basic fact is that any contraction on a complete metric space has a unique fixed point. We state and prove this fact next.

Theorem A54 If $X$ is a complete metric space and $F$ is a contraction on $X$, then there exists one and only one $x \in X$ such that $F(x) = x$.

Proof First we establish the uniqueness of $x \in X$. To this end we assume that there are $x_1, x_2 \in X$ such that $F(x_1) = x_1$ and $F(x_2) = x_2$. Since $F$ is a contraction, it follows that $d(x_1, x_2) \le c\, d(x_1, x_2)$, from which it follows that $d(x_1, x_2) = 0$, since $c \in (0, 1)$.

To establish the existence of the fixed point for the function $F$, we choose any $x_0 \in X$, define recursively the sequence of points
$$x_{n+1} = F(x_n), \quad n \in \mathbb{N}_0,$$
and observe that
$$d(x_{n+1}, x_n) = d(F(x_n), F(x_{n-1})) \le c\, d(x_n, x_{n-1}),$$
from which it follows that
$$d(x_{n+1}, x_n) \le c^n d(x_1, x_0).$$
Consequently, for $n < m$ we have that
$$d(x_m, x_n) \le \sum_{\ell \in \mathbb{N}_m \setminus \mathbb{N}_n} d(x_\ell, x_{\ell-1}) \le \sum_{\ell \in \mathbb{N}_m \setminus \mathbb{N}_n} c^{\ell-1} d(x_1, x_0),$$
which yields the inequality
$$d(x_m, x_n) \le \frac{c^n}{1 - c}\, d(x_1, x_0).$$

This means that $\{x_n : n \in \mathbb{N}\}$ is a Cauchy sequence and hence must converge to some $x \in X$, that is, $\lim_{n\to\infty} x_n = x \in X$. Since $F$ is a continuous function, we conclude that
$$F(x) = \lim_{n\to\infty} F(x_n) = \lim_{n\to\infty} x_{n+1} = x.$$
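The proof is constructive, and the iteration it uses is easily sketched in code (the stopping rule and the example map $F = \cos$, a contraction on $[0, 1]$ since its Lipschitz constant there is $\sin(1) < 1$, are assumed illustrative choices, not from the text):

```python
import math

# Picard iteration from the proof of Theorem A54: iterate x_{n+1} = F(x_n)
# until successive iterates agree to within a tolerance.
def fixed_point(F, x0, tol=1e-12, max_iter=10_000):
    x = x0
    for _ in range(max_iter):
        x_next = F(x)
        if abs(x_next - x) < tol:
            return x_next
        x = x_next
    raise RuntimeError("iteration did not converge")

x_star = fixed_point(math.cos, 0.5)
print(x_star)  # approximately 0.739085, the unique fixed point of cos
```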


Definition A55 Let $X$ be a complete metric space. The Lipschitz constant of a function $F : X \to X$ is defined by
$$\mathrm{Lip}\, F = \sup\Big\{\frac{d(F(x), F(y))}{d(x, y)} : x, y \in X,\ x \neq y\Big\}.$$
We say $F$ is Lipschitz continuous when $\mathrm{Lip}\, F < \infty$. A Lipschitz continuous mapping takes bounded sets into bounded sets, but not necessarily closed sets into closed sets.

According to the definition,
$$d(F(x), F(y)) \le (\mathrm{Lip}\, F)\, d(x, y),$$
and $F$ being a contraction means that $\mathrm{Lip}\, F < 1$. In addition, if $F, G : X \to X$, then
$$\mathrm{Lip}\, F \circ G \le \mathrm{Lip}\, F \cdot \mathrm{Lip}\, G.$$

Indeed, it follows from Definition A55 that
$$\mathrm{Lip}\, F \circ G = \sup\Big\{\frac{d(F(G(x)), F(G(y)))}{d(x, y)} : x, y \in X,\ x \neq y\Big\}
\le \mathrm{Lip}\, F \cdot \sup\Big\{\frac{d(G(x), G(y))}{d(x, y)} : x, y \in X,\ x \neq y\Big\}
\le \mathrm{Lip}\, F \cdot \mathrm{Lip}\, G.$$

For $x \in X$ and $A \subseteq X$, the distance from $x$ to $A$ is defined by the equation
$$d(x, A) = \inf\{d(x, a) : a \in A\}.$$
Let $\mathcal{B}$ be the class of nonempty closed bounded subsets of $X$. The least closed set containing $A$, that is, the closure of $A$, is denoted by $\bar{A}$. Certainly for any $x \in X$ and $A \subseteq X$ we have that $d(x, A) = d(x, \bar{A})$.

Definition A56 The Hausdorff metric $\delta$ on $\mathcal{B}$ is defined for any $A, B \in \mathcal{B}$ by the equation
$$\delta(A, B) = \sup\{d(a, B), d(b, A) : a \in A,\ b \in B\}.$$

Lemma A57 If $\delta$ is the Hausdorff metric, then $\delta$ is a metric on $\mathcal{B}$.

Proof For any $A, B \in \mathcal{B}$ we certainly have that $\delta(A, B) \ge 0$. If $\delta(A, B) = 0$, then any $a_0 \in A$ must have the property that $d(a_0, B) = 0$. But then, since $B$ is closed, it must be the case that $a_0 \in B$. In other words, we have verified that $A \subseteq B$, and likewise $B \subseteq A$, that is, $A = B$. Conversely, if $A = B$, then $d(a, B) = 0$ when $a \in A$ and $d(b, A) = 0$ when $b \in B$, from which it follows that $\delta(A, B) = 0$. We next point out that
$$\delta(B, A) = \sup\{d(a, A), d(b, B) : a \in B,\ b \in A\} = \sup\{d(a, B), d(b, A) : a \in A,\ b \in B\} = \delta(A, B).$$
Finally, we prove the triangle inequality. To this end we suppose that $A, B, C \in \mathcal{B}$. The triangle inequality for the metric space $(X, d)$ says that for any $a \in A$, $b \in B$ and $c \in C$,
$$d(a, B) \le d(a, b) \le d(a, c) + d(c, b),$$
which yields
$$d(a, B) \le d(a, C) + \delta(C, B) \le \delta(A, C) + \delta(B, C),$$
and likewise we conclude that
$$d(b, A) \le \delta(A, C) + \delta(B, C).$$
Therefore we obtain the desired fact that
$$\delta(A, B) \le \delta(A, C) + \delta(C, B).$$
This completes the proof of this lemma.
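For finite subsets of $\mathbb{R}^d$ the Hausdorff metric of Definition A56 is directly computable; the following sketch (an illustration with assumed example sets, not from the text) evaluates $\delta(A, B) = \max\{\sup_a d(a, B),\ \sup_b d(b, A)\}$ from the pairwise distance matrix:

```python
import numpy as np

# Hausdorff metric between finite point sets A, B in R^d (Definition A56).
def hausdorff(A, B):
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)  # pairwise distances
    return max(D.min(axis=1).max(),   # sup over a in A of d(a, B)
               D.min(axis=0).max())   # sup over b in B of d(b, A)

A = np.array([[0.0, 0.0], [1.0, 0.0]])
B = np.array([[0.0, 0.0], [0.0, 2.0]])
print(hausdorff(A, B))  # 2.0: the point (0, 2) lies at distance 2 from A
```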

Lemma A58 If $(X, d)$ is a complete metric space, then $(\mathcal{B}, \delta)$ is a complete metric space.

For the proof of this fact we find the following additional notation convenient. A neighborhood of a subset $A$ of $X$ is denoted by
$$N_r(A) = \{x : x \in X,\ d(x, A) < r\} = \bigcup\{B(a, r) : a \in A\}.$$
According to this definition, we conclude that $A \subseteq N_r(B)$ if and only if for all $a \in A$ there exists a $b \in B$ such that $d(a, b) < r$, or equivalently $d(a, B) < r$. Another way of expressing this condition is to merely say that
$$\sup\{d(a, B) : a \in A\} < r.$$
A fact relating these two simple metric concepts is a useful set inclusion relationship, which says that for any $x \in X$ and positive numbers $r_1, r_2$,
$$N_{r_1}(B(x, r_2)) \subseteq B(x, r_1 + r_2). \tag{A.2}$$
A related notion is the concept of the upper Hausdorff hemi-metric, given by the formula
$$\delta^*(A, B) = \inf\{r : r > 0,\ A \subseteq N_r(B)\}. \tag{A.3}$$


and by our comments above we obtain that
$$\delta^*(A, B) = \sup\{d(a, B) : a \in A\},$$
as well as that
$$\delta(A, B) = \max\{\delta^*(A, B), \delta^*(B, A)\}. \tag{A.4}$$
We also point out, by what we have already said, that
$$\delta(A, B) = \inf\{r : r > 0,\ A \subseteq N_r(B),\ B \subseteq N_r(A)\}.$$
In the next lemma we list some additional basic properties concerning these concepts.

Lemma A59

(a) If $A$ is open, then for any $r > 0$ the set $N_r(A)$ is open.
(b) If $A \subseteq B$, then $N_r(A) \subseteq N_r(B)$.
(c) $N_r(A \cap B) \subseteq N_r(A) \cap N_r(B)$.
(d) For any collection $\{A_\gamma : \gamma \in \Gamma\}$ of subsets of $X$ we have that
$$N_r\Big(\bigcup_{\gamma\in\Gamma} A_\gamma\Big) = \bigcup_{\gamma\in\Gamma} N_r(A_\gamma).$$
(e) If $r_1 \le r_2$, then $N_{r_1}(A) \subseteq N_{r_2}(A)$.
(f) For every $r_1, r_2$ we have that $N_{r_1}(N_{r_2}(A)) \subseteq N_{r_1+r_2}(A)$.
(g) $\delta(A, B) < \varepsilon$ if and only if there are positive numbers $r_1, r_2 < \varepsilon$ such that $A \subseteq N_{r_1}(B)$ and $B \subseteq N_{r_2}(A)$.

Proof Part (a) follows directly from the definition of the set $N_r(A)$ as the union of open balls. The remaining assertions are also straightforward; here are some of the details. For (b) we use the fact that when $A \subseteq B$ we have $d(x, B) \le d(x, A)$ for any $x \in X$, and (c) follows by applying (b) to the inclusions $A \cap B \subseteq A$ and $A \cap B \subseteq B$. For (d) we compute
$$N_r\Big(\bigcup_{\gamma\in\Gamma} A_\gamma\Big) = \bigcup\Big\{B(a, r) : a \in \bigcup_{\gamma\in\Gamma} A_\gamma\Big\} = \bigcup\{B(a, r) : a \in A_\gamma,\ \gamma \in \Gamma\} = \bigcup\{N_r(A_\gamma) : \gamma \in \Gamma\}.$$
Part (e) is obvious, and (f) follows from the set inclusion (A.2). The final claim, which is especially useful, follows from equations (A.3) and (A.4).

The next fact we use is stated in an ancillary lemma


Lemma A60 If $\mathcal{A} = \{A_n : n \in \mathbb{N}\}$ is a Cauchy sequence in $\mathcal{B}$, then there is a ball $B$ in $X$ such that for all $n \in \mathbb{N}$ we have that $A_n \subseteq B$; that is, the sets $A_n$, $n \in \mathbb{N}$, are uniformly bounded.

Proof There is a positive integer $k$ such that for any $n \ge k$ we have that $\delta(A_k, A_n) < \frac{1}{2}$, and so, by using Lemma A59 part (g), we conclude for $n \ge k$ that $A_n \subseteq N_{1/2}(A_k)$. Now we choose a positive number $r$ and $y \in X$ such that $\bigcup_{j\in\mathbb{N}_k} A_j \subseteq B(y, r)$. Therefore the ball $B = B(y, r + \frac{1}{2})$ has the desired property.

Definition A61 For every sequence $\mathcal{A} = \{A_n : n \in \mathbb{N}\}$ in $\mathcal{B}$ we define the set
$$C(\mathcal{A}) = \{x \in X : \text{there is a subsequence } \{n_k : k \in \mathbb{N}\} \text{ and } x_{n_k} \in A_{n_k} \text{ with } \lim_{k\to\infty} x_{n_k} = x\}.$$
Note that when $\mathcal{A}$ consists of one set $A$, the set $C(\mathcal{A})$ is the set of accumulation points of $A$, and is called the derived set of $A$.

We are now ready to prove Lemma A58. To this end we let $\mathcal{A} = \{A_n : n \in \mathbb{N}\}$ be a Cauchy sequence in $\mathcal{B}$. We prove that it converges to $C(\mathcal{A})$.

Proof We divide the proof into several steps and begin with the statement that

(a) $C(\mathcal{A})$ is bounded.

Indeed, by Lemma A60 there is a ball $B$ such that for all $n \in \mathbb{N}$ we have that $A_n \subseteq B$. Now let $x \in C(\mathcal{A})$, so there is some subsequence $\{n_k : k \in \mathbb{N}\}$ and $x_{n_k} \in A_{n_k}$ for which $\lim_{k\to\infty} x_{n_k} = x$. Therefore we conclude that $x \in \bar{B}$, that is, $C(\mathcal{A}) \subseteq \bar{B}$.

(b) $C(\mathcal{A})$ is closed.

If $\{y_\ell : \ell \in \mathbb{N}\}$ is a sequence in $C(\mathcal{A})$ which converges to some $y \in X$, then there are subsequences $\{n_k : k \in \mathbb{N}\}$ and $\{m_k : k \in \mathbb{N}\}$ such that for all $k \in \mathbb{N}$ we have that $d(y, y_{n_k}) < 2^{-k}$, $x_{m_k} \in A_{m_k}$ and $d(y_{n_k}, x_{m_k}) < 2^{-k}$. Hence we obtain that $\lim_{k\to\infty} x_{m_k} = y$, which means that $y \in C(\mathcal{A})$.

(c) $C(\mathcal{A})$ is not empty.

Since $\{A_n : n \in \mathbb{N}\}$ is a Cauchy sequence, there is a subsequence $\{n_k\}$ such that for all $k \in \mathbb{N}$ the inequality $\delta(A_{n_k}, A_{n_{k+1}}) < 2^{-k}$ holds. In particular, we obtain for all $a \in A_{n_k}$ that $d(a, A_{n_{k+1}}) < 2^{-k}$, and so we can construct inductively a sequence $x_{n_k} \in A_{n_k}$ such that $d(x_{n_k}, x_{n_{k+1}}) < 2^{-k}$. This means that $\{x_{n_k} : k \in \mathbb{N}\}$ is a Cauchy sequence in $X$, and so $\lim_{k\to\infty} x_{n_k} = x_0$ for some $x_0 \in X$. That is, we conclude that $x_0 \in C(\mathcal{A})$.


So far we have demonstrated that $C(\mathcal{A}) \in \mathcal{B}$. Now we prove that $\lim_{n\to\infty} A_n = C(\mathcal{A})$ in $(\mathcal{B}, \delta)$. For this purpose we choose $\varepsilon > 0$ and demonstrate that there is a $p \in \mathbb{N}$ such that for $m > p$ we have $\delta(C(\mathcal{A}), A_m) < \varepsilon$. By hypothesis, corresponding to this $\varepsilon$ there is an integer $k$ such that for all $k < n < m$ we have
$$\delta(A_m, A_n) < \varepsilon/2. \tag{A.5}$$
Using this fact we first show for every $x \in C(\mathcal{A})$ and $n > k$ that $d(x, A_n) < \varepsilon$. Indeed, by the definition of the set $C(\mathcal{A})$, there is a subsequence $\{n_\ell : \ell \in \mathbb{N}\}$ with $x_{n_\ell} \in A_{n_\ell}$ and $d(x, x_{n_\ell}) < \varepsilon/2$ for sufficiently large $\ell$. We choose $\ell$ even larger, say $\ell > q$, so that not only does this inequality hold but also $n_\ell > n$, thereby guaranteeing by (A.5), for $n > k$ and $\ell > q$, that $\delta(A_{n_\ell}, A_n) < \varepsilon/2$. Consequently we have for $n > k$ that $d(x, A_n) < \varepsilon$, which is the desired inequality.

Next we show that for all $n > k$ and $y \in A_n$, $d(y, C(\mathcal{A})) \le \varepsilon$. We do this by constructing an $x \in C(\mathcal{A})$ so that $d(y, x) \le \varepsilon$. Here we again appeal to (A.5) to define inductively a subsequence $\{m_\ell : \ell \in \mathbb{Z}_+\}$ with $x_{m_\ell} \in A_{m_\ell}$, $x_{m_0} = y$, and for all $\ell \in \mathbb{Z}_+$, $d(x_{m_\ell}, x_{m_{\ell+1}}) < \varepsilon/2^{\ell+1}$. Clearly $\{x_{m_\ell} : \ell \in \mathbb{N}\}$ is a Cauchy sequence and hence converges to some $x \in X$, which by construction belongs to $C(\mathcal{A})$. Moreover, we observe for all $\ell \in \mathbb{Z}_+$ that
$$d(y, x_{m_{\ell+1}}) < \varepsilon\Big(\frac{1}{2} + \frac{1}{2^2} + \cdots\Big) = \varepsilon,$$
and so $d(y, x) \le \varepsilon$.

For the next lemma we let $\mathcal{C}$ be the collection of all compact subsets of the complete metric space $X$. Clearly the set $\mathcal{C}$ is a subset of $\mathcal{B}$. In the next lemma we prove that it is a closed subset of $\mathcal{B}$ in the Hausdorff metric on $\mathcal{B}$.

Lemma A62 If $X$ is a complete metric space, then the set $\mathcal{C}$ is a closed subset of $(\mathcal{B}, \delta)$. Therefore, in particular, $(\mathcal{C}, \delta)$ is a complete metric space.

Proof Suppose $\{A_n : n \in \mathbb{N}\}$ is a sequence in $\mathcal{C}$ such that $\lim_{n\to\infty} A_n = A$ in the metric $\delta$. We show that the set $A$ is compact by establishing that it is complete and totally bounded (see Theorem A7). Since $(\mathcal{B}, \delta)$ is complete, we can be assured that $A$ is in $\mathcal{B}$ too; indeed, we already know by the proof of Lemma A58 that $A = C(\{A_n : n \in \mathbb{N}\})$ and so is in $\mathcal{B}$. Recall that a closed subset of a complete metric space must be complete; in particular, we conclude that $A$ is complete. Next we show that it is totally bounded. To this end we choose any $\varepsilon > 0$. By our hypothesis there is an $m \in \mathbb{N}$ such that $\delta(A, A_m) < \varepsilon/2$. Using part (g) of Lemma A59, we conclude that there is an $r \in (0, \varepsilon/2)$ such that $A \subseteq N_r(A_m)$. However, the set $A_m$ is compact, and consequently there is a finite set of points $\{x_j : j \in \mathbb{N}_k\} \subseteq X$ such that
$$A_m \subseteq \bigcup\{B(x_j, \varepsilon/2) : j \in \mathbb{N}_k\}.$$
Again using Lemma A59, we get that
$$A \subseteq \bigcup\{B(x_j, \varepsilon) : j \in \mathbb{N}_k\},$$
and so $A$ is totally bounded. This proves that $A \in \mathcal{C}$, and we conclude that indeed $\mathcal{C}$ is a closed subset of $\mathcal{B}$ in the metric $\delta$.

For later purposes we require the next lemma.

Lemma A63 For any Lipschitz continuous mapping $F : X \to X$ and sets $A, B \in \mathcal{B}$ we have that
$$\delta\big(\overline{F(A)}, \overline{F(B)}\big) \le (\mathrm{Lip}\, F) \cdot \delta(A, B). \tag{A.6}$$

Proof First we observe for any subsets $A, B$ of $X$ that $\delta(\bar{A}, \bar{B}) \le \delta(A, B)$. To see this we choose any $\varepsilon > \delta(A, B)$ and a positive $\eta$. By Lemma A59 part (g) there are positive constants $r_1, r_2 < \varepsilon$ such that $A \subseteq N_{r_1}(B)$ and $B \subseteq N_{r_2}(A)$. Consequently we get that $\bar{A} \subseteq N_{r_1+\eta}(\bar{B})$ and $\bar{B} \subseteq N_{r_2+\eta}(\bar{A})$. Therefore, again by Lemma A59 part (g), we have that $\delta(\bar{A}, \bar{B}) < \varepsilon + \eta$. Now we let $\eta \to 0^+$ and $\varepsilon \to \delta(A, B)$ from above, and conclude, as claimed, that $\delta(\bar{A}, \bar{B}) \le \delta(A, B)$.

Applying this preliminary observation to the inequality (A.6), we reduce the proof to establishing that
$$\delta(F(A), F(B)) \le (\mathrm{Lip}\, F)\, \delta(A, B).$$

To this end, for a given $u \in F(A)$ there is a $v \in A$ with $u = F(v)$. Now choose any $x \in B$ and observe that
\[ d(u, F(B)) \le d(u, F(x)) = d(F(v), F(x)) \le (\operatorname{Lip} F)\, d(v, x). \]
Since $x$ was chosen arbitrarily in $B$, we get that
\[ d(u, F(B)) \le (\operatorname{Lip} F)\, d(v, B) \le (\operatorname{Lip} F)\,\delta(A, B). \]
Similarly, for $u \in F(B)$ with $u = F(v)$, where $v \in B$, we conclude, as above, for all $x \in A$ that
\[ d(F(A), u) \le d(F(x), F(v)) \le (\operatorname{Lip} F)\, d(x, v). \]
Consequently, we obtain that
\[ d(F(A), u) \le (\operatorname{Lip} F)\, d(A, v) \le (\operatorname{Lip} F)\,\delta(A, B), \]


Basic results from functional analysis

and therefore
\[ \delta(F(A), F(B)) \le (\operatorname{Lip} F)\,\delta(A, B). \]
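For finite point sets the Hausdorff distance can be computed directly, so the contraction estimate of Lemma A.63 can be checked numerically. The following Python sketch is our own illustration (the helper names `hausdorff` and `F` are not from the book); it verifies $\delta(F(A), F(B)) \le (\operatorname{Lip} F)\,\delta(A,B)$ for an affine contraction on the real line.

```python
# Numerical check of Lemma A.63 on finite subsets of R (illustrative sketch).

def hausdorff(A, B):
    """Hausdorff distance between two finite nonempty subsets of R."""
    d = lambda x, S: min(abs(x - y) for y in S)   # point-to-set distance
    return max(max(d(a, B) for a in A), max(d(b, A) for b in B))

def F(x):
    """An affine contraction with Lipschitz constant Lip F = 1/2."""
    return 0.5 * x + 1.0

A = [0.0, 1.0, 3.0]
B = [0.5, 2.0]

lhs = hausdorff([F(a) for a in A], [F(b) for b in B])
rhs = 0.5 * hausdorff(A, B)
assert lhs <= rhs + 1e-12   # delta(F(A), F(B)) <= (Lip F) * delta(A, B)
```

For an affine map the inequality is attained with equality, since distances scale exactly by the Lipschitz constant.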

The next lemma is also needed.

Lemma A.64 If $\{A_\gamma : \gamma \in \Gamma\}$ and $\{B_\gamma : \gamma \in \Gamma\}$ are families of subsets of $X$, then
\[ \delta\Bigl(\bigcup_{\gamma\in\Gamma} A_\gamma,\ \bigcup_{\gamma\in\Gamma} B_\gamma\Bigr) \le \sup\{\delta(A_\gamma, B_\gamma) : \gamma \in \Gamma\}. \]

Proof For any $x \in \bigcup_{\gamma\in\Gamma} A_\gamma$ there is a $\mu \in \Gamma$ such that $x \in A_\mu$, and so we see that
\[ d\Bigl(x, \bigcup_{\gamma\in\Gamma} B_\gamma\Bigr) \le d(x, B_\mu) \le \delta(A_\mu, B_\mu) \le \sup\{\delta(A_\gamma, B_\gamma) : \gamma \in \Gamma\}. \]

Similarly, for any $y \in \bigcup_{\gamma\in\Gamma} B_\gamma$ there corresponds a $\mu \in \Gamma$ with $y \in B_\mu$, and so
\[ d\Bigl(\bigcup_{\gamma\in\Gamma} A_\gamma, y\Bigr) \le d(A_\mu, y) \le \delta(A_\mu, B_\mu) \le \sup\{\delta(A_\gamma, B_\gamma) : \gamma \in \Gamma\}. \]

Combining these two inequalities proves the lemma.

If $A$ is nonempty then the sets $S_\varepsilon(A)$ are nonempty, and so is $S(A)$. Therefore $S : \mathcal{B} \to \mathcal{B}$, where $\mathcal{B}$ is the set of closed, bounded, nonempty sets. We know that $(\mathcal{B}, \delta)$ is a complete metric space.

We have now prepared the basic facts needed about the Hausdorff metric and can address the essential issue of this section, namely the construction of invariant sets. We start with a complete metric space $X$ and a finite family of contractive mappings
\[ \Phi = \{\varphi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}. \]
Corresponding to the family $\Phi$ is a set-valued mapping, which is introduced in the next definition.

Definition A.65 Let $\Phi = \{\varphi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ be a finite family of contraction mappings on a metric space $X$. For every subset $A$ of $X$ we define the set-valued mapping $\Phi_+$ at $A$ by the formula
\[ \Phi_+(A) = \overline{\bigcup\{\varphi_\varepsilon(A) : \varepsilon \in \mathbb{Z}_\mu\}}. \]


Clearly, by what has already been said, we conclude that $\Phi_+ : \mathcal{B} \to \mathcal{B}$, and also we have that $\Phi_+ : \mathcal{C} \to \mathcal{C}$. Moreover, for $A \in \mathcal{C}$ the set $\Phi_+(A)$ reduces to
\[ \Phi_+(A) = \bigcup\{\varphi_\varepsilon(A) : \varepsilon \in \mathbb{Z}_\mu\}. \]

The essential property of the set-valued mapping $\Phi_+$ is given in the next lemma. For its statement we introduce the constant
\[ \lambda(\Phi) = \max\{\operatorname{Lip} \varphi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}, \]
which is certainly less than one.

Lemma A.66 If $\Phi = \{\varphi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ is a finite family of contraction mappings on a metric space, then the set-valued mapping $\Phi_+ : \mathcal{B} \to \mathcal{B}$ is a contraction relative to the Hausdorff metric. In fact, we have the inequality
\[ \operatorname{Lip} \Phi_+ \le \lambda(\Phi) \]
for its Lipschitz constant.

Proof Let $A, B \in \mathcal{B}$ and use the two previous lemmas to conclude that
\[ \delta(\Phi_+(A), \Phi_+(B)) \le \max\{\delta(\varphi_\varepsilon(A), \varphi_\varepsilon(B)) : \varepsilon \in \mathbb{Z}_\mu\} \le \lambda(\Phi)\,\delta(A, B). \]

Theorem A.67 If $\Phi = \{\varphi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ is a finite family of contraction mappings on a complete metric space $X$, then there is a unique $K \in \mathcal{C}$ such that $K = \Phi_+(K)$. Moreover, there is at most one $K \in \mathcal{B}$ which satisfies this equation.

Proof This result follows from the contraction mapping principle. Specifically, the uniqueness of $K \in \mathcal{B}$ follows from the fact that $\Phi_+$ is a contraction on $(\mathcal{B}, \delta)$, while the existence of $K \in \mathcal{C}$ follows from the fact that $(\mathcal{C}, \delta)$ is also a complete metric space.

Remark Any subset $K$ of $X$ which satisfies the equation $K = \Phi_+(K)$ is called an invariant set of the collection of mappings $\Phi$.
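As a concrete illustration of Theorem A.67 (our own example, not taken from the text), the classical Cantor set is the invariant set of the two contractions $\varphi_0(x) = x/3$ and $\varphi_1(x) = x/3 + 2/3$ on $[0, 1]$, each with Lipschitz constant $1/3$. The sketch below iterates the set-valued map $\Phi_+$ on a finite set of points standing in for an element of $\mathcal{B}$; no closure is needed since finite sets are compact.

```python
# Sketch of Theorem A.67: the Cantor set as the invariant set of
# phi_0(x) = x/3 and phi_1(x) = x/3 + 2/3, so mu = 2 and lambda(Phi) = 1/3.
# A finite set of points stands in for an element of B (illustrative only).

def phi_plus(A):
    """Set-valued map Phi_+(A) = phi_0(A) union phi_1(A) for a finite set A."""
    return sorted({x / 3 for x in A} | {x / 3 + 2 / 3 for x in A})

A = [0.0, 1.0]
for _ in range(10):          # Lip Phi_+ <= 1/3, so the iterates converge fast
    A = phi_plus(A)

assert len(A) == 2 ** 11                     # the two images never collide
assert all(not (1/3 < x < 2/3) for x in A)   # the middle third is removed
```

Each iteration contracts the Hausdorff distance to the Cantor set by a factor $\lambda(\Phi) = 1/3$, in line with Lemma A.66.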

We end this appendix with an alternative construction of $K$, which is very relevant to the presentation in Chapter 4. The additional information provided is that an invariant set $K \in \mathcal{C}$ can be obtained from the fixed points of a finite number of compositions of mappings chosen from the collection $\Phi$. For an explanation of this procedure we review the appropriate notation used in Chapter 4. We choose $\mathbb{Z}_\mu^p$ to denote all ordered sequences of $p$ integers, $p \in \mathbb{N}$, selected from $\mathbb{Z}_\mu$. Every $e \in \mathbb{Z}_\mu^p$ is written in vector form $e = [\varepsilon_j : j \in \mathbb{Z}_p]$,


where each $\varepsilon_j$ is in $\mathbb{Z}_\mu$, and associated with this vector is the composition mapping
\[ \varphi_e = \varphi_{\varepsilon_0} \circ \varphi_{\varepsilon_1} \circ \cdots \circ \varphi_{\varepsilon_{p-1}}. \]

Associated with the family $\Phi$ of contraction mappings is the $p$-compound family of mappings
\[ \Phi^p = \{\varphi_e : e \in \mathbb{Z}_\mu^p\}. \]

We show next that $\Phi$ and $\Phi^p$ share the same invariant set. We start with a preliminary lemma.

Lemma A.68 If $K$ is an invariant set of the family of contractions $\Phi = \{\varphi_\varepsilon : \varepsilon \in \mathbb{Z}_\mu\}$ and $e = [\varepsilon_i : i \in \mathbb{Z}_p] \in \mathbb{Z}_\mu^p$, then
\[ \varphi_e(K) = \bigcup\{(\varphi_e \circ \varphi_\varepsilon)(K) : \varepsilon \in \mathbb{Z}_\mu\}. \]

Proof In the above equation the case $p = 0$ just means that $K$ is an invariant set for the collection of mappings $\Phi$. Now, for $p \in \mathbb{N}$ and $e \in \mathbb{Z}_\mu^p$, we merely compute that
\[ \varphi_e(K) = \varphi_e\bigl(\cup\{\varphi_\varepsilon(K) : \varepsilon \in \mathbb{Z}_\mu\}\bigr) = \bigcup\{(\varphi_e \circ \varphi_\varepsilon)(K) : \varepsilon \in \mathbb{Z}_\mu\}. \]

Proposition A.69 If $K = \Phi_+(K)$ and $p \in \mathbb{N}$, then
\[ K = (\Phi^p)_+(K). \]

Proof The case $p = 1$ is immediate. The remaining cases follow by induction on $p$. Specifically, if $p > 1$ and the result is valid for $p - 1$, we compute, using Lemma A.68, that
\[ K = \bigcup\{\varphi_e(K) : e \in \mathbb{Z}_\mu^{p-1}\} = \bigcup\{(\varphi_e \circ \varphi_\varepsilon)(K) : e \in \mathbb{Z}_\mu^{p-1},\ \varepsilon \in \mathbb{Z}_\mu\} = \bigcup\{\varphi_e(K) : e \in \mathbb{Z}_\mu^p\}. \]

For the next series of lemmas we use the notation $\mathbb{Z}_\mu^\infty$ for all ordered sequences of integers selected from $\mathbb{Z}_\mu$. Every $e \in \mathbb{Z}_\mu^\infty$ is written in vector form $e = [\varepsilon_j : j \in \mathbb{N}_0]$, where each component $\varepsilon_j$ is in $\mathbb{Z}_\mu$. There is a natural association of any $e \in \mathbb{Z}_\mu^\infty$ with an element in $\mathbb{Z}_\mu^p$ by truncating it to its first $p$ components, namely $e_p = [\varepsilon_j : j \in \mathbb{Z}_p]$; that is to say, every $e \in \mathbb{Z}_\mu^\infty$ gives rise to the sequence of vectors $e_p$, $p \in \mathbb{N}$. Of course, every $e \in \mathbb{Z}_\mu^p$ may be realized as $e_p$ for many choices of $e \in \mathbb{Z}_\mu^\infty$. For every subset $A$ in $X$ we set
\[ A_e = \varphi_e(A), \]


so that $A_e \in \mathcal{B}$. We use the convention that for $p = 0$ the mapping $\varphi_e$, $e \in \mathbb{Z}_\mu^p$, is the identity mapping on $X$. Hence, if $A \in \mathcal{B}$, then in this case $A_e = A$.

The next sequence of lemmas studies what happens when $p \to \infty$. For this purpose we recall that the diameter of a subset $A$ of a metric space $X$ is defined as
\[ \operatorname{diam} A = \sup\{d(x, y) : x, y \in A\}. \]
It follows readily that a bounded set has a finite diameter, and that the diameter of a set and its closure are the same.

Lemma A.70 If $A$ is a bounded subset of a metric space $X$, $\Phi$ a finite collection of contractive mappings on $X$, and $e \in \mathbb{Z}_\mu^\infty$, then
\[ \lim_{p\to\infty} \operatorname{diam} A_{e_p} = 0. \]

Proof For any $x, y \in \varphi_{e_p}(A)$ there are $u, v \in A$ such that $x = \varphi_{e_p}u$ and $y = \varphi_{e_p}v$. Therefore we conclude that
\[ d(x, y) \le \lambda(\Phi)^p\, d(u, v) \le \lambda(\Phi)^p \operatorname{diam} A. \]
That is, we have that
\[ \operatorname{diam} A_{e_p} \le \lambda(\Phi)^p \operatorname{diam} A, \tag{A.7} \]

from which the lemma follows.

Lemma A.71 If $K$ is an invariant set in $\mathcal{B}$ of a finite number of contractive mappings $\Phi$ on a complete metric space $X$ and $e \in \mathbb{Z}_\mu^\infty$, then the family of subsets $\{K_{e_p} : p \in \mathbb{N}\}$ of $K$ is nested and
\[ K_e = \bigcap\{K_{e_p} : p \in \mathbb{N}\} \]
consists of exactly one point $k_e \in X$.

Proof For each $p \in \mathbb{N}$ and $e \in \mathbb{Z}_\mu^\infty$ we have, by Proposition A.69, that
\[ K_{e_p} = \varphi_{e_p}(K) = \varphi_{e_p}\bigl(\cup\{\varphi_\varepsilon(K) : \varepsilon \in \mathbb{Z}_\mu\}\bigr) = \cup\{(\varphi_{e_p} \circ \varphi_\varepsilon)(K) : \varepsilon \in \mathbb{Z}_\mu\} \supseteq K_{e_{p+1}}. \]
Moreover, since $K$ is an invariant set in $\mathcal{B}$, it follows from the above inclusion that $K_{e_p} \subseteq K$.

Now, if $x, y \in K_e$, then, by inequality (A.7), for any $p \in \mathbb{N}$ we have that
\[ d(x, y) \le \lambda(\Phi)^p \operatorname{diam} K. \]


We now let $p \to \infty$ and conclude that $x = y$. That is, the set $K_e$ consists of at most one point.

It remains to prove that $K_e$ is nonempty. Indeed, since $K$ is nonempty, for any $p \in \mathbb{N}$ there is an $x_p \in K_{e_p}$, and we claim that $\{x_p : p \in \mathbb{N}\}$ is a Cauchy sequence in $X$. To this end, we suppose that $p \le q$; then $x_p, x_q \in K_{e_p}$, so we conclude that
\[ d(x_p, x_q) \le \lambda(\Phi)^p \operatorname{diam} K, \]
from which it follows that $\{x_p : p \in \mathbb{N}\}$ is a Cauchy sequence. Therefore there is an $x \in X$ for which $x = \lim_{q\to\infty} x_q$. However, $x_q \in K_{e_p}$ for all $q \ge p$, from which we conclude that $x \in K_{e_p}$, because $K_{e_p}$ is a closed subset of $X$. Consequently, we have that $x \in K_e$, which proves the result.

Lemma A.72 If $K$ is an invariant set in $\mathcal{B}$ of a finite number of contractive mappings $\Phi$, then
\[ K = \{k_e : e \in \mathbb{Z}_\mu^\infty\}. \]

Proof By Lemma A.71 we have that
\[ \{k_e : e \in \mathbb{Z}_\mu^\infty\} \subseteq K. \]
Now we choose an $x_0 \in K$. Since $K$ is an invariant set corresponding to the family $\Phi$, there is an $\varepsilon_0 \in \mathbb{Z}_\mu$ and an $x_1 \in K$ such that $x_0 = \varphi_{\varepsilon_0} x_1$. Repeating this process, we create an $e \in \mathbb{Z}_\mu^\infty$ such that $x_0 = \varphi_{e_p} x_p$ for every $p \in \mathbb{N}$. In particular, we conclude that $x_0 \in K_{e_p}$, and so also $x_0 \in K_e$. This means, by Lemma A.71, that $x_0 = k_e$, thereby establishing the claim.

Our next observation demonstrates that, for any $e \in \mathbb{Z}_\mu^\infty$, the point $k_e \in K$ can be constructed as a limit of the fixed points $x_{e_p}$ of the contraction mappings $\varphi_{e_p}$. We first demonstrate that $x_{e_p} \in K$ for any $e \in \mathbb{Z}_\mu^\infty$ and for any $p \in \mathbb{N}$. To this end, we introduce a map from $\mathbb{Z}_\mu^p$ to $\mathbb{Z}_\mu^\infty$. Specifically, corresponding to an $e \in \mathbb{Z}_\mu^p$ expressed as $e = [\varepsilon_l : l \in \mathbb{Z}_p]$, we let $e^p \in \mathbb{Z}_\mu^\infty$ be the infinite vector obtained by repeating the components of the vector $e \in \mathbb{Z}_\mu^p$ infinitely often. That is, we define
\[ e^p = [\varepsilon_0, \ldots, \varepsilon_{p-1}, \varepsilon_0, \ldots, \varepsilon_{p-1}, \ldots]^T. \]

Proposition A.73 For every $p \in \mathbb{N}$ and $e \in \mathbb{Z}_\mu^p$ we have that
\[ k_{e^p} = x_e. \]

Proof We need to show that
\[ \varphi_e(k_{e^p}) = k_{e^p} \]


for any $e \in \mathbb{Z}_\mu^p$. As a first step, we have that
\[ \varphi_e(k_{e^p}) \in \varphi_e\Bigl(\bigcap\{K_{(e^p)_q} : q \in \mathbb{N}\}\Bigr), \]
and so we get
\[ \varphi_e(k_{e^p}) \in \bigcap\{(\varphi_e \circ \varphi_{(e^p)_q})(K) : q \in \mathbb{N}\}. \]
However, by our notational convention,
\[ \varphi_e \circ \varphi_{(e^p)_q} = \varphi_{(e^p)_{p+q}}. \]
But then Lemma A.71 implies that
\[ \varphi_e(k_{e^p}) \in \bigcap\{K_{(e^p)_q} : q \in \mathbb{N}\} = \{k_{e^p}\}. \]

Since $\varphi_e$ has a unique fixed point, the proof of the proposition is complete.

We also require the next result.

Lemma A.74 For each $e \in \mathbb{Z}_\mu^\infty$ we have that
\[ k_e = \lim_{p\to\infty} x_{e_p}. \]

Proof For any $e \in \mathbb{Z}_\mu^\infty$ and $p \in \mathbb{N}$ we have, by Proposition A.73, that $x_{e_p} \in K_{e_p}$. But we also have that $k_e \in K_{e_p}$. So we conclude, by inequality (A.7), that
\[ d(x_{e_p}, k_e) \le \lambda(\Phi)^p \operatorname{diam} K. \]

Letting $p \to \infty$ in this inequality proves the result.

As a corollary we obtain the following result, part of which was proved earlier in an even stronger form. To start with, we let $W$ be the smallest closed set containing all fixed points $x_e$, where $e \in \mathbb{Z}_\mu^p$ and $p \in \mathbb{N}$.

Corollary A.75 If $K \in \mathcal{B}$ is an invariant set for the family of contractive mappings $\Phi$, then $K = W$.

Proof By Proposition A.73 it follows that $W \subseteq K$, while Lemma A.74 implies that $K \subseteq W$.
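Corollary A.75 suggests a practical way to generate points of an invariant set: compute the fixed points $x_e$ of all compositions $\varphi_e$ of a given length. The Python sketch below is our own illustration, again using the Cantor maps $\varphi_0(x) = x/3$ and $\varphi_1(x) = x/3 + 2/3$; each fixed point is approximated by contraction iteration.

```python
from itertools import product

# Sketch of Corollary A.75 (illustrative): the invariant set K of the
# Cantor maps is the closure of the fixed points x_e of the compositions
# phi_e over all finite codes e in Z_2^p.
phi = [lambda x: x / 3, lambda x: x / 3 + 2 / 3]

def fixed_point(code, iters=60):
    """Approximate the fixed point x_e of phi_e by contraction iteration."""
    x = 0.0
    for _ in range(iters):
        for eps in reversed(code):   # innermost factor phi_{e_{p-1}} acts first
            x = phi[eps](x)
    return x

# fixed points of all 2^4 compositions of length p = 4
pts = sorted({round(fixed_point(e), 12) for e in product((0, 1), repeat=4)})

assert len(pts) == 16                          # distinct codes, distinct points
assert all(not (1/3 < x < 2/3) for x in pts)   # none lies in the removed gap
```

Increasing $p$ fills out $K$: the closure of these fixed points over all $p \in \mathbb{N}$ is exactly the invariant set, which is the content of the corollary.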

This corollary is the key to constructing an invariant set for $\Phi$. We continue to explain further properties of $K$, and specifically comment on its representation given in Lemma A.72.

This requires putting the Tychonoff topology on $\mathbb{Z}_\mu^\infty$. Specifically, we let $\mathbb{Z}_\mu$ have the discrete topology, that is, all sets are open. We view each $e \in \mathbb{Z}_\mu^\infty$ as a function from $\mathbb{N}_0$ into $\mathbb{Z}_\mu$ and give $\mathbb{Z}_\mu^\infty$ the weakest topology so that all component maps are continuous. In other words, for each $i \in \mathbb{N}_0$, the map which takes


$e = [\varepsilon_k : k \in \mathbb{N}_0]$ into $\varepsilon_i$ is continuous in this topology on $\mathbb{Z}_\mu^\infty$. Therefore, as a special case of the Tychonoff theorem (see [183, 276]), we conclude that $\mathbb{Z}_\mu^\infty$ is compact in this topology.

Definition A.76 We define the map $\psi : \mathbb{Z}_\mu^\infty \to K$ at $e \in \mathbb{Z}_\mu^\infty$ by the equation $\psi(e) = k_e$.

Lemma A.77 The map $\psi$ defined above is continuous.

Remark A.78 According to Lemma A.72, the mapping $\psi$ is onto. Hence Lemma A.77 implies that any invariant set $K \in \mathcal{B}$ must be compact, confirming in an alternative manner a part of Theorem A.67.

Proof We show that $\psi$ is continuous at any $e \in \mathbb{Z}_\mu^\infty$. Thus we must show, for any $\varepsilon > 0$, that the inverse image of the ball $B(k_e, \varepsilon)$ in $X$ contains an open neighborhood of $e$ in $\mathbb{Z}_\mu^\infty$. Corresponding to this $\varepsilon$, we choose a positive integer $q$ such that $\operatorname{diam} K_{e_q} \le \varepsilon$. The existence of such an integer is guaranteed by Lemma A.70 applied to the set $K$. The set
\[ O = \{e' \in \mathbb{Z}_\mu^\infty : e'_q = e_q\} \]
is an open neighborhood of $e$ in the topology on $\mathbb{Z}_\mu^\infty$. Moreover, if $e' \in O$, then it follows that $k_{e'} \in K_{e_q}$; that is, if $e' \in O$ then $d(k_{e'}, k_e) \le \varepsilon$. This means that $O \subseteq \psi^{-1}(B(k_e, \varepsilon))$, which proves the lemma.

We now turn to the iterates of the set-valued map $\Phi_+$. We denote them by $\Phi_+^p$, defined on any subset $A$ iteratively by the formula
\[ \Phi_+^p(A) = \Phi_+(\Phi_+^{p-1}(A)) \]
for $p \ge 1$, with $\Phi_+^0(A) = A$.

According to Lemma A.66, we have that
\[ \delta(\Phi_+^p(A), K) \le \lambda(\Phi)^p\, \delta(A, K), \]
and so for any nonempty bounded subset $A$ of $X$ we conclude that
\[ \lim_{p\to\infty} \Phi_+^p(A) = K. \]

We are now going to establish the existence of the limit in Lemma A.74 without assuming the existence of an invariant set.

We start from the fixed point $x_\varepsilon$ of the map $\varphi_\varepsilon$ for each $\varepsilon \in \mathbb{Z}_\mu$, define the constants $\rho = \max\{d(x_\varepsilon, x_\theta) : \varepsilon, \theta \in \mathbb{Z}_\mu\}$ and $r = \rho(1 - \lambda(\Phi))^{-1}$, and a set $V \in \mathcal{B}$ given as
\[ V = \bigcap\{B(x_\varepsilon, r) : \varepsilon \in \mathbb{Z}_\mu\}. \]


Lemma A.79 If $\Phi$ is a finite family of contraction maps on a complete metric space $X$ and $e \in \mathbb{Z}_\mu^\infty$, then $\lim_{p\to\infty} x_{e_p}$ exists and is the unique point in the set
\[ V_e = \bigcap\{V_{e_p} : p \in \mathbb{N}\}. \]

Proof The proof proceeds as before. First we establish that the sets $V_{e_p}$, $p \in \mathbb{N}$, are nested in a decreasing manner.

By the choice of the constants $r$ and $\rho$ we have, for $\lambda = \lambda(\Phi)$, that
\[ \bigcup\{B(x_\varepsilon, \lambda r) : \varepsilon \in \mathbb{Z}_\mu\} \subseteq V. \tag{A.8} \]
In fact, if $d(x, x_\theta) \le \lambda r$ for some $\theta \in \mathbb{Z}_\mu$, then for every $\varepsilon \in \mathbb{Z}_\mu$ we have that
\[ d(x, x_\varepsilon) \le d(x, x_\theta) + d(x_\theta, x_\varepsilon) \le \lambda r + \rho = r. \]

Now, if $u \in \varphi_\varepsilon(V)$, then there is a $v \in V$ such that $u = \varphi_\varepsilon v$. Since $v \in V$, we know for any $\varepsilon \in \mathbb{Z}_\mu$ that $d(v, x_\varepsilon) \le r$, and so it follows that
\[ d(u, x_\varepsilon) = d(\varphi_\varepsilon(v), \varphi_\varepsilon(x_\varepsilon)) \le \lambda\, d(v, x_\varepsilon) \le \lambda r. \]
Consequently, by the set inclusion (A.8), we conclude that $u \in V$. In other words, we have established that $\varphi_\varepsilon(V) \subseteq V$ for any $\varepsilon \in \mathbb{Z}_\mu$.

From this observation it follows directly, for any $p \in \mathbb{N}$, that $V_{e_p} \supseteq V_{e_{p+1}}$. This implies that the contraction map $\varphi_{e_p}$ has the property that $\varphi_{e_p} : V \to V$. Since $V$ is closed, the unique fixed point of $\varphi_{e_p}$ lies in $V$; that is, $x_{e_p} \in V$. As before, we argue, as in Lemma A.70, that $\operatorname{diam} V_{e_p} \le \lambda^p \operatorname{diam} V$. Thus, as in the proof of Lemma A.71, we conclude that $V_e$ consists of at most one point, that the sequence $\{x_{e_p} : p \in \mathbb{N}\}$ is a Cauchy sequence, and that $\lim_{p\to\infty} x_{e_p} = x_e$, the unique point in $V_e$. This proves the lemma.
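The convergence asserted in Lemma A.79 can be observed numerically: for a fixed infinite code $e$, the fixed points $x_{e_p}$ of the truncated compositions settle down geometrically. The sketch below is our own illustration with the Cantor maps $\varphi_0(x) = x/3$ and $\varphi_1(x) = x/3 + 2/3$; for the alternating code the limit is the classical Cantor point $1/4$.

```python
# Illustration of Lemma A.79 (our own sketch): for the Cantor maps and the
# alternating code e = (0, 1, 0, 1, ...), the fixed points x_{e_p} of the
# truncated compositions phi_{e_p} converge, with limit 1/4.
phi = [lambda x: x / 3, lambda x: x / 3 + 2 / 3]

def fixed_point(code, iters=60):
    """Approximate the fixed point of phi_code by contraction iteration."""
    x = 0.0
    for _ in range(iters):
        for eps in reversed(code):   # innermost factor acts first
            x = phi[eps](x)
    return x

e = [p % 2 for p in range(40)]                      # e = (0, 1, 0, 1, ...)
xs = [fixed_point(e[:p]) for p in (5, 10, 20, 40)]  # truncations e_p
gaps = [abs(b - a) for a, b in zip(xs, xs[1:])]

assert abs(xs[-1] - 0.25) < 1e-12                   # lim_p x_{e_p} = 1/4
assert all(b <= a for a, b in zip(gaps, gaps[1:]))  # gaps shrink
```

The geometric rate reflects the estimate $\operatorname{diam} V_{e_p} \le \lambda^p \operatorname{diam} V$ from the proof above.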

We can now define the subset $K = \{x_e : e \in \mathbb{Z}_\mu^\infty\}$ of $X$. As in the remark following Lemma A.77, it follows that $K$ is compact. Indeed, the map $\mathbb{Z}_\mu^\infty \ni e \mapsto x_e \in K$ is continuous by the same argument used to prove Lemma A.77. It remains only to establish that $K$ is an invariant set of the collection of mappings $\Phi$. For this purpose, for each $\varepsilon \in \mathbb{Z}_\mu$ and $e = [\varepsilon_j : j \in \mathbb{N}_0] \in \mathbb{Z}_\mu^\infty$, we define
\[ \varepsilon e = [\varepsilon, \varepsilon_0, \varepsilon_1, \ldots], \]
and likewise for $e \in \mathbb{Z}_\mu^p$. We claim that $\varphi_\varepsilon(x_e) = x_{\varepsilon e}$ for each $e \in \mathbb{Z}_\mu^\infty$.


To see this, we observe, for each $e \in \mathbb{Z}_\mu^\infty$ and $\varepsilon \in \mathbb{Z}_\mu$, that
\[ \varphi_\varepsilon(x_e) \in \varphi_\varepsilon\Bigl(\bigcap\{V_{e_p} : p \in \mathbb{N}\}\Bigr) \subseteq \bigcap\{\varphi_\varepsilon(V_{e_p}) : p \in \mathbb{N}\} = \bigcap\{V_{(\varepsilon e)_{p+1}} : p \in \mathbb{N}\} = \{x_{\varepsilon e}\}. \]
In other words, we do indeed have that $\varphi_\varepsilon(x_e) = x_{\varepsilon e}$. Now, if $x_e \in K$, we write $e = \varepsilon_0 e'$ for some $\varepsilon_0 \in \mathbb{Z}_\mu$ and $e' \in \mathbb{Z}_\mu^\infty$ and conclude that
\[ x_e = x_{\varepsilon_0 e'} = \varphi_{\varepsilon_0}(x_{e'}) \in \bigcup\{\varphi_\varepsilon(K) : \varepsilon \in \mathbb{Z}_\mu\}. \]
Similarly, if $u \in \bigcup\{\varphi_\varepsilon(K) : \varepsilon \in \mathbb{Z}_\mu\}$, we get that $u = \varphi_\varepsilon x_{e'}$ for some $\varepsilon \in \mathbb{Z}_\mu$ and $e' \in \mathbb{Z}_\mu^\infty$. This implies $u = x_{\varepsilon e'} \in K$. So $K$ is an invariant set for the collection $\Phi$ of contraction mappings on a complete metric space.


References

[1] R. A. Adams, Sobolev Spaces, Academic Press, New York, 1975.
[2] L. V. Ahlfors, Complex Analysis, 3rd edn, McGraw-Hill, New York, 1985.
[3] M. Ahues, A. Largillier and B. V. Limaye, Spectral Computations for Bounded Operators, Chapman and Hall/CRC, London, 2001.
[4] B. K. Alpert, A class of bases in $L^2$ for the sparse representation of integral operators, SIAM Journal on Mathematical Analysis 24 (1993), 246–262.
[5] B. Alpert, G. Beylkin, R. Coifman and V. Rokhlin, Wavelet-like bases for the fast solution of second-kind integral equations, SIAM Journal on Scientific Computing 14 (1993), 159–184.
[6] P. M. Anselone, Collectively Compact Operator Approximation Theory and Applications to Integral Equations, Prentice-Hall, Englewood Cliffs, NJ, 1971.
[7] K. E. Atkinson, Numerical solution of Fredholm integral equations of the second kind, SIAM Journal on Numerical Analysis 4 (1967), 337–348.
[8] K. E. Atkinson, The numerical solution of the eigenvalue problem for compact integral operators, Transactions of the American Mathematical Society 129 (1967), 458–465.
[9] K. E. Atkinson, Iterative variants of the Nyström method for the numerical solution of integral equations, Numerische Mathematik 22 (1973), 17–31.
[10] K. E. Atkinson, The numerical evaluation of fixed points for completely continuous operators, SIAM Journal on Numerical Analysis 10 (1973), 799–807.
[11] K. E. Atkinson, Convergence rates for approximate eigenvalues of compact integral operators, SIAM Journal on Numerical Analysis 12 (1975), 213–222.
[12] K. E. Atkinson, A survey of boundary integral equation methods for the numerical solution of Laplace's equation in three dimensions, in M. Golberg (ed.), Numerical Solution of Integral Equations, Plenum Press, New York, 1990.
[13] K. E. Atkinson, A survey of numerical methods for solving nonlinear integral equations, Journal of Integral Equations and Applications 4 (1992), 15–46.
[14] K. E. Atkinson, The numerical solution of a nonlinear boundary integral equation on smooth surfaces, IMA Journal of Numerical Analysis 14 (1994), 461–483.
[15] K. E. Atkinson, The Numerical Solution of Integral Equations of the Second Kind, Cambridge University Press, Cambridge, 1997.
[16] K. E. Atkinson and A. Bogomolny, The discrete Galerkin method for integral equations, Mathematics of Computation 48 (1987), 595–616.
[17] K. E. Atkinson and G. Chandler, Boundary integral equation methods for solving Laplace's equation with nonlinear boundary conditions: The smooth boundary case, Mathematics of Computation 55 (1990), 451–472.


[18] K. E. Atkinson and G. Chandler, The collocation method for solving the radiosity equation for unoccluded surfaces, Journal of Integral Equations and Applications 10 (1998), 253–290.
[19] K. E. Atkinson and D. Chien, Piecewise polynomial collocation for boundary integral equations, SIAM Journal on Scientific Computing 16 (1995), 651–681.
[20] K. E. Atkinson and J. Flores, The discrete collocation method for nonlinear integral equations, IMA Journal of Numerical Analysis 13 (1993), 195–213.
[21] K. E. Atkinson, I. Graham and I. Sloan, Piecewise continuous collocation for integral equations, SIAM Journal on Numerical Analysis 20 (1987), 172–186.
[22] K. E. Atkinson and W. Han, Theoretical Numerical Analysis, Springer-Verlag, New York, 2001.
[23] K. E. Atkinson and F. A. Potra, Projection and iterated projection methods for nonlinear integral equations, SIAM Journal on Numerical Analysis 24 (1987), 1352–1373.
[24] K. E. Atkinson and F. A. Potra, On the discrete Galerkin method for Fredholm integral equations of the second kind, IMA Journal of Numerical Analysis 9 (1989), 385–403.
[25] K. E. Atkinson and I. H. Sloan, The numerical solution of the first kind logarithmic kernel integral equations on smooth open curves, Mathematics of Computation 56 (1991), 119–139.
[26] I. Babuška and J. E. Osborn, Estimates for the errors in eigenvalue and eigenvector approximation by Galerkin methods, with particular attention to the case of multiple eigenvalues, SIAM Journal on Numerical Analysis 24 (1987), 1249–1276.
[27] I. Babuška and J. E. Osborn, Finite element-Galerkin approximation of the eigenvalues and eigenvectors of selfadjoint problems, Mathematics of Computation 52 (1989), 275–297.
[28] G. Beylkin, R. Coifman and V. Rokhlin, Fast wavelet transforms and numerical algorithms I, Communications on Pure and Applied Mathematics 44 (1991), 141–183.
[29] R. Bialecki and A. Nowak, Boundary value problems in heat conduction with nonlinear material and nonlinear boundary conditions, Applied Mathematical Modelling 5 (1981), 417–421.
[30] S. Börm, L. Grasedyck and W. Hackbusch, Introduction to hierarchical matrices with applications, Engineering Analysis with Boundary Elements 27 (2003), 405–422.
[31] J. H. Bramble and J. E. Osborn, Rate of convergence estimates for nonselfadjoint eigenvalue approximations, Mathematics of Computation 27 (1973), 525–549.
[32] C. Brebbia, J. Telles and L. Wrobel, Boundary Element Techniques: Theory and Applications in Engineering, Springer-Verlag, Berlin, 1984.
[33] M. Brenner, Y. Jiang and Y. Xu, Multiparameter regularization for Volterra kernel identification via multiscale collocation methods, Advances in Computational Mathematics 31 (2009), 421–455.
[34] H. Brunner, On the numerical solution of nonlinear Volterra–Fredholm integral equations by collocation methods, SIAM Journal on Numerical Analysis 27 (1990), 987–1000.
[35] H. Brunner, On implicitly linear and iterated collocation methods for Hammerstein integral equations, Journal of Integral Equations and Applications 3 (1991), 475–488.
[36] H. J. Bungartz and M. Griebel, Sparse grids, Acta Numerica 13 (2004), 147–269.


[37] H. Cai and Y. Xu, A fast Fourier–Galerkin method for solving singular boundary integral equations, SIAM Journal on Numerical Analysis 46 (2008), 1965–1984.
[38] Y. Cao, T. Herdman and Y. Xu, A hybrid collocation method for Volterra integral equations with weakly singular kernels, SIAM Journal on Numerical Analysis 41 (2003), 264–281.
[39] Y. Cao, M. Huang, L. Liu and Y. Xu, Hybrid collocation methods for Fredholm integral equations with weakly singular kernels, Applied Numerical Mathematics 57 (2007), 549–561.
[40] Y. Cao, B. Wu and Y. Xu, A fast collocation method for solving stochastic integral equations, SIAM Journal on Numerical Analysis 47 (2009), 3744–3767.
[41] Y. Cao and Y. Xu, Singularity preserving Galerkin methods for weakly singular Fredholm integral equations, Journal of Integral Equations and Applications 6 (1994), 303–334.
[42] T. Carleman, Über eine nichtlineare Randwertaufgabe bei der Gleichung $\Delta u = 0$, Mathematische Zeitschrift 9 (1921), 35–43.
[43] A. Cavaretta, W. Dahmen and C. A. Micchelli, Stationary subdivision, Memoirs of the American Mathematical Society, No. 453, 1991.
[44] F. Chatelin, Spectral Approximation of Linear Operators, Academic Press, New York, 1983.
[45] J. Chen, Z. Chen and S. Cheng, Multilevel augmentation methods for solving the sine-Gordon equation, Journal of Mathematical Analysis and Applications 375 (2011), 706–724.
[46] J. Chen, Z. Chen and Y. Zhang, Fast singularity preserving methods for integral equations with non-smooth solutions, Journal of Integral Equations and Applications 24 (2012), 213–240.
[47] M. Chen, Z. Chen and G. Chen, Approximate Solutions of Operator Equations, World Scientific, Singapore, 1997.
[48] Q. Chen, C. A. Micchelli and Y. Xu, On the matrix completion problem for multivariate filter bank construction, Advances in Computational Mathematics 26 (2007), 173–204.
[49] Q. Chen, T. Tang and Z. Teng, A fast numerical method for integral equations of the first kind with logarithmic kernel using mesh grading, Journal of Computational Mathematics 22 (2004), 287–298.
[50] X. Chen, Z. Chen and B. Wu, Multilevel augmentation methods with matrix compression for solving reformulated Hammerstein equations, Journal of Integral Equations and Applications 24 (2012), 513–544.
[51] X. Chen, Z. Chen, B. Wu and Y. Xu, Fast multilevel augmentation methods for nonlinear boundary integral equations, SIAM Journal on Numerical Analysis 49 (2011), 2231–2255.
[52] X. Chen, Z. Chen, B. Wu and Y. Xu, Fast multilevel augmentation methods for nonlinear boundary integral equations II: efficient implementation, Journal of Integral Equations and Applications 24 (2012), 545–574.
[53] X. Chen, R. Wang and Y. Xu, Fast Fourier–Galerkin methods for nonlinear boundary integral equations, Journal of Scientific Computing 56 (2013), 494–514.
[54] Z. Chen, S. Cheng, G. Nelakanti and H. Yang, A fast multiscale Galerkin method for the first kind ill-posed integral equations via Tikhonov regularization, International Journal of Computer Mathematics 87 (2010), 565–582.
[55] Z. Chen, S. Cheng and H. Yang, Fast multilevel augmentation methods with compression technique for solving ill-posed integral equations, Journal of Integral Equations and Applications 23 (2011), 39–70.


[56] Z. Chen, S. Ding, Y. Xu and H. Yang, Multiscale collocation methods for ill-posed integral equations via a coupled system, Inverse Problems 28 (2012), 025006.
[57] Z. Chen, S. Ding and H. Yang, Multilevel augmentation algorithms based on fast collocation methods for solving ill-posed integral equations, Computers & Mathematics with Applications 62 (2011), 2071–2082.
[58] Z. Chen, Y. Jiang, L. Song and H. Yang, A parameter choice strategy for a multi-level augmentation method solving ill-posed operator equations, Journal of Integral Equations and Applications 20 (2008), 569–590.
[59] Z. Chen, J. Li and Y. Zhang, A fast multiscale solver for modified Hammerstein equations, Applied Mathematics and Computation 218 (2011), 3057–3067.
[60] Z. Chen, G. Long and G. Nelakanti, The discrete multi-projection method for Fredholm integral equations of the second kind, Journal of Integral Equations and Applications 19 (2007), 143–162.
[61] Z. Chen, G. Long and G. Nelakanti, Richardson extrapolation of iterated discrete projection methods for eigenvalue approximation, Journal of Computational and Applied Mathematics 223 (2009), 48–61.
[62] Z. Chen, G. Long, G. Nelakanti and Y. Zhang, Iterated fast collocation methods for integral equations of the second kind, Journal of Scientific Computing 57 (2013), 502–517.
[63] Z. Chen, Y. Lu, Y. Xu and H. Yang, Multi-parameter Tikhonov regularization for linear ill-posed operator equations, Journal of Computational Mathematics 26 (2008), 37–55.
[64] Z. Chen, C. A. Micchelli and Y. Xu, The Petrov–Galerkin methods for second kind integral equations II: Multiwavelet scheme, Advances in Computational Mathematics 7 (1997), 199–233.
[65] Z. Chen, C. A. Micchelli and Y. Xu, A construction of interpolating wavelets on invariant sets, Mathematics of Computation 68 (1999), 1569–1587.
[66] Z. Chen, C. A. Micchelli and Y. Xu, Hermite interpolating wavelets, in L. Li, Z. Chen and Y. Zhang (eds), Lecture Notes in Scientific Computation, International Culture Publishing, Beijing, 2000, pp. 31–39.
[67] Z. Chen, C. A. Micchelli and Y. Xu, A multilevel method for solving operator equations, Journal of Mathematical Analysis and Applications 262 (2001), 688–699.
[68] Z. Chen, C. A. Micchelli and Y. Xu, Discrete wavelet Petrov–Galerkin methods, Advances in Computational Mathematics 16 (2002), 1–28.
[69] Z. Chen, C. A. Micchelli and Y. Xu, Fast collocation method for second kind integral equations, SIAM Journal on Numerical Analysis 40 (2002), 344–375.
[70] Z. Chen, G. Nelakanti, Y. Xu and Y. Zhang, A fast collocation method for eigen-problems of weakly singular integral operators, Journal of Scientific Computing 41 (2009), 256–272.
[71] Z. Chen, B. Wu and Y. Xu, Multilevel augmentation methods for solving operator equations, Numerical Mathematics: A Journal of Chinese Universities 14 (2005), 31–55.
[72] Z. Chen, B. Wu and Y. Xu, Error control strategies for numerical integrations in fast collocation methods, Northeastern Mathematical Journal 21(2) (2005), 233–252.
[73] Z. Chen, B. Wu and Y. Xu, Multilevel augmentation methods for differential equations, Advances in Computational Mathematics 24 (2006), 213–238.


[74] Z. Chen, B. Wu and Y. Xu, Fast numerical collocation solutions of integral equations, Communications on Pure and Applied Mathematics 6 (2007), 649–666.
[75] Z. Chen, B. Wu and Y. Xu, Fast collocation methods for high-dimensional weakly singular integral equations, Journal of Integral Equations and Applications 20 (2008), 49–92.
[76] Z. Chen, B. Wu and Y. Xu, Fast multilevel augmentation methods for solving Hammerstein equations, SIAM Journal on Numerical Analysis 47 (2009), 2321–2346.
[77] Z. Chen and Y. Xu, The Petrov–Galerkin and iterated Petrov–Galerkin methods for second kind integral equations, SIAM Journal on Numerical Analysis 35 (1998), 406–434.
[78] Z. Chen, Y. Xu and H. Yang, A multilevel augmentation method for solving ill-posed operator equations, Inverse Problems 22 (2006), 155–174.
[79] Z. Chen, Y. Xu and H. Yang, Fast collocation methods for solving ill-posed integral equations of the first kind, Inverse Problems 24 (2008), 065007.
[80] Z. Chen, Y. Xu and J. Zhao, The discrete Petrov–Galerkin method for weakly singular integral equations, Journal of Integral Equations and Applications 11 (1999), 1–35.
[81] D. Chien and K. E. Atkinson, A discrete Galerkin method for hypersingular boundary integral equations, IMA Journal of Numerical Analysis 17 (1997), 463–478.
[82] C. K. Chui and J. Z. Wang, A cardinal spline approach to wavelets, Proceedings of the American Mathematical Society 113 (1991), 785–793.
[83] K. C. Chung and T. H. Yao, On lattices admitting unique Lagrange interpolation, SIAM Journal on Numerical Analysis 14 (1977), 735–743.
[84] A. Cohen, W. Dahmen and R. DeVore, Multiscale decomposition on bounded domains, Transactions of the American Mathematical Society 352 (2000), 3651–3685.
[85] M. Cohen and J. Wallace, Radiosity and Realistic Image Synthesis, Academic Press, New York, 1993.
[86] J. B. Conway, A Course in Functional Analysis, Springer-Verlag, New York, 1990.
[87] F. Cucker and S. Smale, On the mathematical foundations of learning, Bulletin of the American Mathematical Society 39 (2002), 1–49.
[88] W. Dahmen, Wavelet and multiscale methods for operator equations, Acta Numerica 6 (1997), 55–228.
[89] W. Dahmen, H. Harbrecht and R. Schneider, Compression techniques for boundary integral equations – asymptotically optimal complexity estimates, SIAM Journal on Numerical Analysis 43 (2006), 2251–2271.
[90] W. Dahmen, H. Harbrecht and R. Schneider, Adaptive methods for boundary integral equations: Complexity and convergence estimates, Mathematics of Computation 76 (2007), 1243–1274.
[91] W. Dahmen, A. Kunoth and R. Schneider, Operator equations, multiscale concepts and complexity, in The Mathematics of Numerical Analysis (Park City, UT, 1995), pp. 225–261 [Lectures in Applied Mathematics, No. 32, American Mathematical Society, Providence, RI, 1996].
[92] W. Dahmen and C. A. Micchelli, Using the refinement equation for evaluating integrals of wavelets, SIAM Journal on Numerical Analysis 30 (1993), 507–537.
[93] W. Dahmen and C. A. Micchelli, Biorthogonal wavelet expansions, Constructive Approximation 13 (1997), 293–328.


[94] W. Dahmen, S. Prössdorf and R. Schneider, Wavelet approximation methods for pseudodifferential equations I: stability and convergence, Mathematische Zeitschrift 215 (1994), 583–620.

[95] W. Dahmen, S. Prössdorf and R. Schneider, Wavelet approximation methods for pseudodifferential equations II: matrix compression and fast solutions, Advances in Computational Mathematics 1 (1993), 259–335.

[96] W. Dahmen, R. Schneider and Y. Xu, Nonlinear functionals of wavelet expansions – adaptive reconstruction and fast evaluation, Numerische Mathematik 86 (2000), 49–101.

[97] I. Daubechies, Orthonormal bases of compactly supported wavelets, Communications on Pure and Applied Mathematics 41 (1988), 909–996.

[98] I. Daubechies, Ten Lectures on Wavelets, CBMS-NSF Regional Conference Series in Applied Mathematics No. 61, SIAM, Philadelphia, PA, 1992.

[99] K. Deimling, Nonlinear Functional Analysis, Springer-Verlag, Berlin, 1985.

[100] R. DeVore, B. Jawerth and V. Popov, Compression of wavelet decompositions, American Journal of Mathematics 114 (1992), 737–785.

[101] R. DeVore and B. Lucier, Wavelets, Acta Numerica 1 (1991), 1–56.

[102] J. Dick, P. Kritzer, F. Y. Kuo and I. H. Sloan, Lattice-Nyström method for Fredholm integral equations of the second kind with convolution type kernels, Journal of Complexity 23 (2007), 752–772.

[103] V. Dicken and P. Maass, Wavelet–Galerkin methods for ill-posed problems, Journal of Inverse and Ill-Posed Problems 4 (1996), 203–221.

[104] S. Ding and H. Yang, Multilevel augmentation methods for nonlinear ill-posed problems, International Journal of Computer Mathematics 88 (2011), 3685–3701.

[105] S. Ehrich and A. Rathsfeld, Piecewise linear wavelet collocation, approximation of the boundary manifold and quadrature, Electronic Transactions on Numerical Analysis 12 (2001), 149–192.

[106] H. W. Engl, M. Hanke and A. Neubauer, Regularization of Inverse Problems, Kluwer, Dordrecht, 1996.

[107] W. Fang and M. Lu, A fast collocation method for an inverse boundary value problem, International Journal for Numerical Methods in Engineering 59 (2004), 1563–1585.

[108] W. Fang, F. Ma and Y. Xu, Multilevel iteration methods for solving integral equations of the second kind, Journal of Integral Equations and Applications 14 (2002), 355–376.

[109] W. Fang, Y. Wang and Y. Xu, An implementation of fast wavelet Galerkin methods for integral equations of the second kind, Journal of Scientific Computing 20 (2004), 277–302.

[110] I. Fenyö and H. Stolle, Theorie und Praxis der linearen Integralgleichungen, Vols. 1–4, Birkhäuser-Verlag, Berlin, 1981–84.

[111] N. J. Ford, M. L. Morgado and M. Rebelo, Nonpolynomial collocation approximation of solutions to fractional differential equations, Fractional Calculus and Applied Analysis 16 (2013), 874–891.

[112] N. J. Ford, M. L. Morgado and M. Rebelo, High order numerical methods for fractional terminal value problems, Computational Methods in Applied Mathematics 14 (2014), 55–70.

[113] W. F. Ford, Y. Xu and Y. Zhao, Derivative correction for quadrature formulas, Advances in Computational Mathematics 6 (1996), 139–157.


[114] L. Greengard, The Rapid Evaluation of Potential Fields in Particle Systems, MIT Press, Cambridge, MA, 1988.

[115] L. Greengard and V. Rokhlin, A fast algorithm for particle simulations, Journal of Computational Physics 73 (1987), 325–348.

[116] C. W. Groetsch, The Theory of Tikhonov Regularization for Fredholm Equations of the First Kind, Research Notes in Mathematics No. 105, Pitman, Boston, MA, 1984.

[117] C. W. Groetsch, Uniform convergence of regularization methods for Fredholm equations of the first kind, Journal of the Australian Mathematical Society, Series A 39 (1985), 282–286.

[118] C. W. Groetsch, Convergence analysis of a regularized degenerate kernel method for Fredholm equations of the first kind, Integral Equations and Operator Theory 13 (1990), 67–75.

[119] C. W. Groetsch, Linear inverse problems, in O. Scherzer (ed.), Handbook of Mathematical Methods in Imaging, pp. 3–41, Springer-Verlag, New York, 2011.

[120] W. Hackbusch, Multi-grid Methods and Applications, Springer-Verlag, Berlin, 1985.

[121] W. Hackbusch, Integral Equations: Theory and Numerical Treatment [translated and revised by the author from the 1989 German original], International Series of Numerical Mathematics No. 120, Birkhäuser-Verlag, Basel, 1995.

[122] W. Hackbusch, A sparse matrix arithmetic based on H-matrices I: introduction to H-matrices, Computing 62 (1999), 89–108.

[123] W. Hackbusch and B. Khoromskij, A sparse H-matrix arithmetic: general complexity estimates, Numerical Analysis 2000, Vol. VI: Ordinary differential equations and integral equations, Journal of Computational and Applied Mathematics 125 (2000), 479–501.

[124] W. Hackbusch and Z. Nowak, A multilevel discretization and solution method for potential flow problems in three dimensions, in E. H. Hirschel (ed.), Finite Approximations in Fluid Mechanics, Notes on Numerical Fluid Mechanics No. 14, Vieweg, Braunschweig, 1986.

[125] W. Hackbusch and Z. Nowak, On the fast matrix multiplication in the boundary element method by panel clustering, Numerische Mathematik 54 (1989), 463–491.

[126] T. Ha Duong, La méthode de Schenck pour la résolution numérique du problème de radiation acoustique, Bull. Dir. Études Recherches, Sér. C Math. Inf. Service Inf. Math. Appl. 2 (1979), 15–50.

[127] U. Hämarik, On the discretization error in regularized projection methods with parameter choice by discrepancy principle, in A. N. Tikhonov (ed.), Ill-Posed Problems in Natural Sciences, VSP, Utrecht / TVP, Moscow, 1992, pp. 24–29.

[128] U. Hämarik, Quasioptimal error estimate for the regularized Ritz–Galerkin method with the a posteriori choice of the parameter, Acta et Commentationes Universitatis Tartuensis 937 (1992), 63–76.

[129] U. Hämarik, On the parameter choice in the regularized Ritz–Galerkin method, Eesti Teaduste Akadeemia Toimetised. Füüsika. Matemaatika 42 (1993), 133–143.

[130] A. Hammerstein, Nichtlineare Integralgleichungen nebst Anwendungen, Acta Mathematica 54 (1930), 117–176.

[131] G. Han, Extrapolation of a discrete collocation-type method of Hammerstein equations, Journal of Computational and Applied Mathematics 61 (1995), 73–86.


[132] G. Han and J. Wang, Extrapolation of Nyström solution for two-dimensional nonlinear Fredholm integral equations, Journal of Scientific Computing 14 (1999), 197–209.

[133] M. Hanke and C. R. Vogel, Two-level preconditioners for regularized inverse problems I: theory, Numerische Mathematik 83 (1999), 385–402.

[134] M. Hanke and C. R. Vogel, Two-level preconditioners for regularized inverse problems II: implementation and numerical results, preprint.

[135] H. Harbrecht, U. Kähler and R. Schneider, Wavelet matrix compression for boundary integral equations, in Parallel Algorithms and Cluster Computing, pp. 129–149, Lecture Notes in Computational Science and Engineering No. 52, Springer-Verlag, Berlin, 2006.

[136] H. Harbrecht, M. Konik and R. Schneider, Fully discrete wavelet Galerkin schemes, Engineering Analysis with Boundary Elements 27 (2003), 423–437.

[137] H. Harbrecht, S. Pereverzev and R. Schneider, Self-regularization by projection for noisy pseudodifferential equations of negative order, Numerische Mathematik 95 (2003), 123–143.

[138] H. Harbrecht and R. Schneider, Wavelet Galerkin schemes for 2D-BEM, in Operator Theory: Advances and Applications, Vol. 121, Birkhäuser-Verlag, Berlin, 2001.

[139] H. Harbrecht and R. Schneider, Wavelet Galerkin schemes for boundary integral equations – implementation and quadrature, SIAM Journal on Scientific Computing 27 (2006), 1347–1370.

[140] H. Harbrecht and R. Schneider, Rapid solution of boundary integral equations by wavelet Galerkin schemes, in Multiscale, Nonlinear and Adaptive Approximation, pp. 249–294, Springer-Verlag, Berlin, 2009.

[141] E. Hille and J. Tamarkin, On the characteristic values of linear integral equations, Acta Mathematica 57 (1931), 1–76.

[142] R. A. Horn and C. R. Johnson, Matrix Analysis, Cambridge University Press, Cambridge, 1985.

[143] G. C. Hsiao and A. Rathsfeld, Wavelet collocation methods for a first kind boundary integral equation in acoustic scattering, Advances in Computational Mathematics 17 (2002), 281–308.

[144] G. C. Hsiao and W. L. Wendland, Boundary Integral Equations, Springer-Verlag, Berlin, 2008.

[145] C. Huang, H. Guo and Z. Zhang, A spectral collocation method for eigenvalue problems of compact integral operators, Journal of Integral Equations and Applications 25 (2013), 79–101.

[146] M. Huang, Wavelet Petrov–Galerkin algorithms for Fredholm integral equations of the second kind, PhD thesis, Academia Sinica (in Chinese), 2003.

[147] M. Huang, A construction of multiscale bases for Petrov–Galerkin methods for integral equations, Advances in Computational Mathematics 25 (2006), 7–22.

[148] J. E. Hutchinson, Fractals and self similarity, Indiana University Mathematics Journal 30 (1981), 713–747.

[149] M. Jaswon and G. Symm, Integral Equation Methods in Potential Theory and Elastostatics, Academic Press, London, 1977.

[150] Y. Jeon, An indirect boundary integral equation method for the biharmonic equation, SIAM Journal on Numerical Analysis 31 (1994), 461–476.

[151] Y. Jeon, New boundary element formulas for the biharmonic equation, Advances in Computational Mathematics 9 (1998), 97–115.


[152] Y. Jeon, New indirect scalar boundary integral equation formulas for the biharmonic equation, Journal of Computational and Applied Mathematics 135 (2001), 313–324.

[153] Y. Jeon and W. McLean, A new boundary element method for the biharmonic equation with Dirichlet boundary conditions, Advances in Computational Mathematics 19 (2003), 339–354.

[154] Y. Jiang, B. Wang and Y. Xu, A fast Fourier–Galerkin method solving a boundary integral equation for the biharmonic equation, SIAM Journal on Numerical Analysis 52 (2014), 2530–2554.

[155] Y. Jiang and Y. Xu, Fast Fourier–Galerkin methods for solving singular boundary integral equations: numerical integration and precondition, Journal of Computational and Applied Mathematics 234 (2010), 2792–2807.

[156] Q. Jin and Z. Hou, On an a posteriori parameter choice strategy for Tikhonov regularization of nonlinear ill-posed problems, Numerische Mathematik 83 (1999), 139–159.

[157] B. Kaltenbacher, On the regularizing properties of a full multigrid method for ill-posed problems, Inverse Problems 17 (2001), 767–788.

[158] H. Kaneko, K. Neamprem and B. Novaprateep, Wavelet collocation method and multilevel augmentation method for Hammerstein equations, SIAM Journal on Scientific Computing 34 (2012), A309–A338.

[159] H. Kaneko, R. Noren and B. Novaprateep, Wavelet applications to the Petrov–Galerkin method for Hammerstein equations, Applied Numerical Mathematics 45 (2003), 255–273.

[160] H. Kaneko, R. D. Noren and P. A. Padilla, Superconvergence of the iterated collocation methods for Hammerstein equations, Journal of Computational and Applied Mathematics 80 (1997), 335–349.

[161] H. Kaneko, R. Noren and Y. Xu, Numerical solutions for weakly singular Hammerstein equations and their superconvergence, Journal of Integral Equations and Applications 4 (1992), 391–407.

[162] H. Kaneko, R. Noren and Y. Xu, Regularity of the solution of Hammerstein equations with weakly singular kernel, Integral Equations and Operator Theory 13 (1990), 660–670.

[163] H. Kaneko and Y. Xu, Degenerate kernel method for Hammerstein equations, Mathematics of Computation 56 (1991), 141–148.

[164] H. Kaneko and Y. Xu, Gauss-type quadratures for weakly singular integrals and their application to Fredholm integral equations of the second kind, Mathematics of Computation 62 (1994), 739–753.

[165] H. Kaneko and Y. Xu, Superconvergence of the iterated Galerkin methods for Hammerstein equations, SIAM Journal on Numerical Analysis 33 (1996), 1048–1064.

[166] T. Kato, Perturbation theory for nullity, deficiency and other quantities of linear operators, Journal d'Analyse Mathématique 6 (1958), 261–322.

[167] T. Kato, Perturbation Theory for Linear Operators, Springer-Verlag, Berlin, 1976.

[168] C. T. Kelley, A fast two-grid method for matrix H-equations, Transport Theory and Statistical Physics 18 (1989), 185–203.

[169] C. T. Kelley, A fast multilevel algorithm for integral equations, SIAM Journal on Numerical Analysis 32 (1995), 501–513.

[170] C. T. Kelley and E. W. Sachs, Fast algorithms for compact fixed point problems with inexact function evaluations, SIAM Journal on Scientific and Statistical Computing 12 (1991), 725–742.


[171] M. Kelmanson, Solution of nonlinear elliptic equations with boundary singularities by an integral equation method, Journal of Computational Physics 56 (1984), 244–283.

[172] D. Kincaid and W. Cheney, Numerical Analysis: Mathematics of Scientific Computing, 3rd edn, American Mathematical Society, Providence, RI, 2002.

[173] A. Kirsch, An Introduction to the Mathematical Theory of Inverse Problems, 2nd edn, Applied Mathematical Sciences No. 120, Springer-Verlag, New York, 2011.

[174] E. Klann, R. Ramlau and L. Reichel, Wavelet-based multilevel methods for linear ill-posed problems, BIT Numerical Mathematics 51 (2011), 669–694.

[175] M. A. Krasnosel'skii, Topological Methods in the Theory of Nonlinear Integral Equations, Pergamon Press, New York, 1964.

[176] M. A. Krasnosel'skii, G. M. Vainikko, P. P. Zabreiko, Ya. B. Rutitskii and V. Ya. Stetsenko, Approximate Solution of Operator Equations, Wolters-Noordhoff Publishing, Groningen, 1972.

[177] R. Kress, Linear Integral Equations, Springer-Verlag, Berlin, 1989.

[178] R. Kress, Numerical Analysis, Graduate Texts in Mathematics No. 181, Springer-Verlag, New York, 1998.

[179] S. Kumar, Superconvergence of a collocation-type method for Hammerstein equations, IMA Journal of Numerical Analysis 7 (1987), 313–325.

[180] S. Kumar, A discrete collocation-type method for Hammerstein equations, SIAM Journal on Numerical Analysis 25 (1988), 328–341.

[181] S. Kumar and I. H. Sloan, A new collocation-type method for Hammerstein integral equations, Mathematics of Computation 48 (1987), 585–593.

[182] L. J. Lardy, A variation of Nyström's method for Hammerstein equations, Journal of Integral Equations 3 (1981), 43–60.

[183] P. D. Lax, Functional Analysis, Wiley-Interscience, New York, 2002.

[184] F. Li, Y. Li and Z. Li, Existence of solutions to nonlinear Hammerstein integral equations and applications, Journal of Mathematical Analysis and Applications 323 (2006), 209–227.

[185] E. Lin, Multiscale approximation for eigenvalue problems of Fredholm integral equations, Journal of Applied Functional Analysis 2 (2007), 461–469.

[186] G. G. Lorentz, Approximation Theory and Functional Analysis, Academic Press, Boston, MA, 1991.

[187] Y. Lu, L. Shen and Y. Xu, Shadow block iteration for solving linear systems obtained from wavelet transforms, Applied and Computational Harmonic Analysis 19 (2005), 359–385.

[188] Y. Lu, L. Shen and Y. Xu, Multi-parameter regularization methods for high-resolution image reconstruction with displacement errors, IEEE Transactions on Circuits and Systems I 54 (2007), 1788–1799.

[189] Y. Lu, L. Shen and Y. Xu, Integral equation models for image restoration: high accuracy methods and fast algorithms, Inverse Problems 26 (2010), 045006.

[190] M. A. Lukas, Comparisons of parameter choice methods for regularization with discrete noisy data, Inverse Problems 14 (1998), 161–184.

[191] X. Luo, L. Fan, Y. Wu and F. Li, Fast multilevel iteration methods with compression technique for solving ill-posed integral equations, Journal of Computational and Applied Mathematics 256 (2014), 131–151.

[192] X. Luo, F. Li and S. Yang, A posteriori parameter choice strategy for fast multiscale methods solving ill-posed integral equations, Advances in Computational Mathematics 36 (2012), 299–314.


[193] P. Maass, S. V. Pereverzev, R. Ramlau and S. G. Solodky, An adaptive discretization for Tikhonov–Phillips regularization with a posteriori parameter selection, Numerische Mathematik 87 (2001), 485–502.

[194] P. Mathé, Saturation of regularization methods for linear ill-posed problems in Hilbert spaces, SIAM Journal on Numerical Analysis 42 (2004), 968–973.

[195] P. Mathé and S. V. Pereverzev, Discretization strategy for linear ill-posed problems in variable Hilbert scales, Inverse Problems 19 (2003), 1263–1277.

[196] C. A. Micchelli, Using the refinement equation for the construction of pre-wavelets, Numerical Algorithms 1 (1991), 75–116.

[197] C. A. Micchelli and M. Pontil, Learning the kernel function via regularization, Journal of Machine Learning Research 6 (2005), 1099–1125.

[198] C. A. Micchelli, T. Sauer and Y. Xu, A construction of refinable sets for interpolating wavelets, Results in Mathematics 34 (1998), 359–372.

[199] C. A. Micchelli, T. Sauer and Y. Xu, Subdivision schemes for iterated function systems, Proceedings of the American Mathematical Society 129 (2001), 1861–1872.

[200] C. A. Micchelli and Y. Xu, Using the matrix refinement equation for the construction of wavelets on invariant sets, Applied and Computational Harmonic Analysis 1 (1994), 391–401.

[201] C. A. Micchelli and Y. Xu, Reconstruction and decomposition algorithms for biorthogonal multiwavelets, Multidimensional Systems and Signal Processing 8 (1997), 31–69.

[202] C. A. Micchelli, Y. Xu and Y. Zhao, Wavelet Galerkin methods for second-kind integral equations, Journal of Computational and Applied Mathematics 86 (1997), 251–270.

[203] S. G. Mikhlin, Mathematical Physics, an Advanced Course, North-Holland, Amsterdam, 1970.

[204] G. Monegato and I. H. Sloan, Numerical solution of the generalized airfoil equation for an airfoil with a flap, SIAM Journal on Numerical Analysis 34 (1997), 2288–2305.

[205] M. T. Nair, On strongly stable approximations, Journal of the Australian Mathematical Society, Series A 52 (1992), 251–260.

[206] M. T. Nair, A unified approach for regularized approximation methods for Fredholm integral equations of the first kind, Numerical Functional Analysis and Optimization 15 (1994), 381–389.

[207] M. T. Nair and S. V. Pereverzev, Regularized collocation method for Fredholm integral equations of the first kind, Journal of Complexity 23 (2007), 454–467.

[208] J. Nédélec, Approximation des équations intégrales en mécanique et en physique, Lecture Notes, Centre Math. Appl., École Polytechnique, Palaiseau, France, 1977.

[209] G. Nelakanti, A degenerate kernel method for eigenvalue problems of compact integral operators, Advances in Computational Mathematics 27 (2007), 339–354.

[210] G. Nelakanti, Spectral Approximation for Integral Operators, PhD thesis, Indian Institute of Technology, Bombay, 2003.

[211] D. W. Nychka and D. D. Cox, Convergence rates for regularized solutions of integral equations from discrete noisy data, Annals of Statistics 17 (1989), 556–572.

[212] J. E. Osborn, Spectral approximation for compact operators, Mathematics of Computation 29 (1975), 712–725.


[213] R. Pallav and A. Pedas, Quadratic spline collocation method for weakly singular integral equations and corresponding eigenvalue problem, Mathematical Modelling and Analysis 7 (2002), 285–296.

[214] B. L. Panigrahi and G. Nelakanti, Legendre Galerkin method for weakly singular Fredholm integral equations and the corresponding eigenvalue problem, Journal of Applied Mathematics and Computing 43 (2013), 175–197.

[215] B. L. Panigrahi and G. Nelakanti, Richardson extrapolation of iterated discrete Galerkin method for eigenvalue problem of a two dimensional compact integral operator, Journal of Scientific Computing 51 (2012), 421–448.

[216] S. V. Pereverzev and E. Schock, On the adaptive selection of the parameter in regularization of ill-posed problems, SIAM Journal on Numerical Analysis 43 (2005), 2060–2076.

[217] D. L. Phillips, A technique for the numerical solution of certain integral equations of the first kind, Journal of the Association for Computing Machinery 9 (1962), 84–97.

[218] M. Pincus, Gaussian processes and Hammerstein integral equations, Transactions of the American Mathematical Society 134 (1968), 193–214.

[219] R. Plato, On the discrepancy principle for iterative and parametric methods to solve linear ill-posed equations, Numerische Mathematik 75 (1996), 99–120.

[220] R. Plato, The Galerkin scheme for Lavrentiev's m-times iterated method to solve linear accretive Volterra integral equations of the first kind, BIT Numerical Mathematics 37 (1997), 404–423.

[221] R. Plato and U. Hämarik, On pseudo-optimal parameter choices and stopping rules for regularization methods in Banach spaces, Numerical Functional Analysis and Optimization 17 (1996), 181–195.

[222] R. Plato and G. Vainikko, On the regularization of projection methods for solving ill-posed problems, Numerische Mathematik 57 (1990), 63–79.

[223] R. Prazenica, R. Lind and A. Kurdila, Uncertainty estimation from Volterra kernels for robust flutter analysis, Journal of Guidance, Control, and Dynamics 26 (2003), 331–339.

[224] M. P. Rajan, Convergence analysis of a regularized approximation for solving Fredholm integral equations of the first kind, Journal of Mathematical Analysis and Applications 279 (2003), 522–530.

[225] A. Rathsfeld, A wavelet algorithm for the solution of the double layer potential equation over polygonal boundaries, Journal of Integral Equations and Applications 7 (1995), 47–98.

[226] A. Rathsfeld, A wavelet algorithm for the solution of a singular integral equation over a smooth two-dimensional manifold, Journal of Integral Equations and Applications 10 (1998), 445–501.

[227] A. Rathsfeld and R. Schneider, On a quadrature algorithm for the piecewise linear wavelet collocation applied to boundary integral equations, Mathematical Methods in the Applied Sciences 26 (2003), 937–979.

[228] T. Raus, About regularization parameter choice in case of approximately given errors of data, Acta et Commentationes Universitatis Tartuensis 937 (1992), 77–89.

[229] M. Rebelo and T. Diogo, A hybrid collocation method for a nonlinear Volterra integral equation with weakly singular kernel, Journal of Computational and Applied Mathematics 234 (2010), 2859–2869.

[230] L. Reichel and A. Shyshkov, Cascadic multilevel methods for ill-posed problems, Journal of Computational and Applied Mathematics 233 (2010), 1314–1325.


[231] A. Rieder, A wavelet multilevel method for ill-posed problems stabilized by Tikhonov regularization, Numerische Mathematik 75 (1997), 501–522.

[232] S. D. Riemenschneider and Z. Shen, Wavelets and pre-wavelets in low dimensions, Journal of Approximation Theory 71 (1992), 18–38.

[233] K. Riley, Two-level preconditioners for regularized ill-posed problems, PhD thesis, Montana State University, 1999.

[234] F. Rizzo, An integral equation approach to boundary value problems of classical elastostatics, Quarterly of Applied Mathematics 25 (1967), 83–95.

[235] V. Rokhlin, Rapid solution of integral equations of classical potential theory, Journal of Computational Physics 60 (1985), 187–207.

[236] H. L. Royden, Real Analysis, Macmillan, New York, 1963.

[237] L. Rudin, S. Osher and E. Fatemi, Nonlinear total variation based noise removal algorithms, Physica D 60 (1992), 259–268.

[238] T. Runst and W. Sickel, Sobolev Spaces of Fractional Order, Nemytskij Operators, and Nonlinear Partial Differential Equations, de Gruyter, Berlin, 1996.

[239] K. Ruotsalainen and W. Wendland, On the boundary element method for some nonlinear boundary value problems, Numerische Mathematik 53 (1988), 299–314.

[240] J. Saranen, Projection methods for a class of Hammerstein equations, SIAM Journal on Numerical Analysis 27 (1990), 1445–1449.

[241] R. Schneider, Multiskalen- und Wavelet-Matrixkompression: analysisbasierte Methoden zur effizienten Lösung großer vollbesetzter Gleichungssysteme, Habilitationsschrift, Technische Hochschule Darmstadt, 1995.

[242] C. Schwab, Variable order composite quadrature of singular and nearly singular integrals, Computing 53 (1994), 173–194.

[243] Y. Shen and W. Lin, Collocation method for the natural boundary integral equation, Applied Mathematics Letters 19 (2006), 1278–1285.

[244] F. Sillion and C. Puech, Radiosity and Global Illumination, Morgan Kaufmann, San Francisco, CA, 1994.

[245] I. H. Sloan, Iterated Galerkin method for eigenvalue problems, SIAM Journal on Numerical Analysis 13 (1976), 753–760.

[246] I. H. Sloan, Superconvergence, in M. Golberg (ed.), Numerical Solution of Integral Equations, Plenum, New York, 1990, pp. 35–70.

[247] I. H. Sloan and V. Thomée, Superconvergence of the Galerkin iterates for integral equations of the second kind, Journal of Integral Equations 9 (1985), 1–23.

[248] S. G. Solodky, On a quasi-optimal regularized projection method for solving operator equations of the first kind, Inverse Problems 21 (2005), 1473–1485.

[249] G. W. Stewart, Fredholm, Hilbert, Schmidt: Three Fundamental Papers on Integral Equations, translated with commentary by G. W. Stewart, 2011. Available at www.cs.umd.edu/~stewart/FHS.pdf.

[250] J. Tausch, The variable order fast multipole method for boundary integral equations of the second kind, Computing 72 (2004), 267–291.

[251] J. Tausch and J. White, Multiscale bases for the sparse representation of boundary integral operators on complex geometry, SIAM Journal on Scientific Computing 24 (2003), 1610–1629.

[252] A. N. Tikhonov, Solution of incorrectly formulated problems and the regularization method, Doklady Akademii Nauk SSSR 151 (1963), 501–504 [translated in Soviet Mathematics 4, 1035–1038].

[253] F. G. Tricomi, Integral Equations, Dover Publications, New York, 1985.

[254] A. E. Taylor and D. C. Lay, Introduction to Functional Analysis, 2nd edn, John Wiley & Sons, New York, 1980.


[255] G. Vainikko, A perturbed Galerkin method and the general theory of approximate methods for nonlinear equations, Zh. Vychisl. Mat. i Mat. Fiz. 7 (1967), 723–751 [English translation: USSR Computational Mathematics and Mathematical Physics 7 (1967), 723–751].

[256] G. Vainikko, Multidimensional Weakly Singular Integral Equations, Springer-Verlag, Berlin, 1993.

[257] G. Vainikko, A. Pedas and P. Uba, Methods of Solving Weakly Singular Integral Equations (in Russian), Tartu University, 1984.

[258] G. Vainikko and P. Uba, A piecewise polynomial approximation to the solution of an integral equation with weakly singular kernel, Journal of the Australian Mathematical Society, Series B 22 (1981), 431–438.

[259] C. R. Vogel and M. E. Oman, Fast, robust total variation-based reconstruction of noisy, blurred images, IEEE Transactions on Image Processing 7 (1998), 813–824.

[260] T. von Petersdorff and C. Schwab, Wavelet approximation of first kind integral equations in a polygon, Numerische Mathematik 74 (1996), 479–516.

[261] T. von Petersdorff, R. Schneider and C. Schwab, Multiwavelets for second kind integral equations, SIAM Journal on Numerical Analysis 34 (1997), 2212–2227.

[262] G. Wahba, Spline Models for Observational Data, Society for Industrial and Applied Mathematics, Philadelphia, PA, 1990.

[263] B. Wang, R. Wang and Y. Xu, Fast Fourier–Galerkin methods for first-kind logarithmic-kernel integral equations on open arcs, Science China Mathematics 53 (2010), 1–22.

[264] Y. Wang and Y. Xu, A fast wavelet collocation method for integral equations on polygons, Journal of Integral Equations and Applications 17 (2005), 277–330.

[265] W. Wendland, On some mathematical aspects of boundary element methods for elliptic problems, in J. Whiteman (ed.), The Mathematics of Finite Elements and Applications, Academic Press, London, 1985, pp. 230–257.

[266] W.-J. Xie and F.-R. Lin, A fast numerical solution method for two dimensional Fredholm integral equations of the second kind, Applied Numerical Mathematics 59 (2009), 1709–1719.

[267] Y. Xu, H. L. Chen and Q. Zou, Limit values of derivatives of the Cauchy integrals and computation of the logarithmic potentials, Computing 73 (2004), 295–327.

[268] Y. Xu and H. Zhang, Refinable kernels, Journal of Machine Learning Research 8 (2007), 2083–2120.

[269] Y. Xu and Y. Zhao, Quadratures for improper integrals and their applications in integral equations, Proceedings of Symposia in Applied Mathematics 48 (1994), 409–413.

[270] Y. Xu and Y. Zhao, Quadratures for boundary integral equations of the first kind with logarithmic kernels, Journal of Integral Equations and Applications 8 (1996), 239–268.

[271] Y. Xu and Y. Zhao, An extrapolation method for a class of boundary integral equations, Mathematics of Computation 65 (1996), 587–610.

[272] Y. Xu and A. Zhou, Fast Boolean approximation methods for solving integral equations in high dimensions, Journal of Integral Equations and Applications 16 (2004), 83–110.

[273] Y. Xu and Q. Zou, Adaptive wavelet methods for elliptic operator equations with nonlinear terms, Advances in Computational Mathematics 19 (2003), 99–146.

[274] H. Yang and Z. Hou, Convergence rates of regularized solutions and parameter choice strategy for positive semi-definite operator equations, Numerical Mathematics: A Journal of Chinese Universities (Chinese Edition) 20 (1998), 245–251.

[275] H. Yang and Z. Hou, A posteriori parameter choice strategy for nonlinear monotone operator equations, Acta Mathematicae Applicatae Sinica 18 (2002), 289–294.

[276] K. Yosida, Functional Analysis, Springer-Verlag, Berlin, 1965.

[277] E. Zeidler, Nonlinear Functional Analysis and its Applications I, II/B, Springer-Verlag, New York, 1990.

[278] M. Zhong, S. Lu and J. Cheng, Multiscale analysis for ill-posed problems with semi-discrete Tikhonov regularization, Inverse Problems 28 (2012), 065019.

[279] A. Zygmund, Trigonometric Series, Cambridge University Press, New York, 1959.


Index

(k, k′) element 224
α-property 241
ν-convergence 468

adjoint identity 43
adjoint operator 34
Arzelà–Ascoli theorem 499
ascent 470

Banach space 490
Banach–Steinhaus theorem 496
boundary integral equation 42, 356

Cauchy sequence 488
closed graph theorem 496
collectively compact 74, 240
collocation matrix 106
collocation method 63, 105
compact operator 34
condition number 71
contraction mapping 503
converges pointwise 60, 74
converges uniformly 74
correlation matrix 120
cyclic μ-adic expansions 171

degenerate kernel 36, 81
degenerate kernel method 80
derived set 507
discrete orthogonal projection 103
distance function 488
dual space 495, 498

eigenfunction 501
eigenvalue 501
equation
  boundary integral 42
  Hammerstein 356
  ill-posed 416
  integral 5
  nonlinear boundary integral 356
  nonlinear integral 356
equation, integral 32

Fredholm determinant 20
Fredholm function 11, 12
Fredholm integral equation 32
Fredholm minor 11
Fredholm operator
  continuous kernel 35
  Schmidt kernel 35
Fredholm theorem 499
Fubini theorem 492
fundamental solution 43
  of the Laplace operator 44

Galerkin matrix 95
Galerkin method 62, 94
gap between subspaces 471
generalized best approximation 56

Hölder continuous 24
Hadamard inequality 12
Hahn–Banach extension theorem 497
Hammerstein equation 356, 359
harmonic 44
Hausdorff metric 504
Hermite admissible 192
Hermite interpolation 56


Hilbert space 491
Hilbert–Schmidt theorem 502

ill-posed 417
ill-posed integral equation 416
inequality
  Hadamard 12
inner product 490
inner product space 490
integral equation
  of the first kind 416
  of the second kind 5, 32
integral operator 5
  compact 34
  Fredholm 32, 35
  weakly singular 37

interpolating wavelet spaces 187
interpolation projection 55
invariant set 155
inverse operator theorem 496

Jensen's formula 27

kernel 5, 32
  continuous 35
  degenerate 36
  quasi-weakly singular 232
  Schmidt 35
  weak singularity 37

Kolmogorov theorem 499
Kronecker symbol 55

Lagrange admissible 185
Lagrange interpolation 56
Laplace equation 42
Laplace expansion 9
lattice vector 5
Lavrentiev regularization 416, 421
least-squares method 62
linear functional 495
linear operator 494
linear space 489
Lipschitz constant 504

majorization sequence 326
MAM 322, 325, 331, 359, 377, 420
matrix norm 99
matrix representation 207
metric function 488
metric space 488
MIM 322, 347
minimum norm solution 421
minor 7
modulus of continuity 14
multilevel augmentation method 322, 325, 331, 359, 377, 420
multilevel iteration method 322, 347
multiscale basis function 144
multiscale collocation method 265
multiscale Galerkin method 199
multiscale Hermite interpolation 191
multiscale interpolating bases 184
multiscale Lagrange interpolation 184
multiscale orthogonal bases 166
multiscale partitions 153
multiscale Petrov–Galerkin method 223

nested 66
nonlinear boundary value problem 377
nonlinear integral 356
nonlinear integral equation 356
  boundary integral equation 356
  Hammerstein 356

norm 490
normed linear space 490
numerically sparse 203
Nyström method 86

open mapping theorem 495
operator
  adjoint 34, 43
  bounded 495
  closed 495
  compact 34, 499
  completely continuous 499
  elliptic partial differential 43
  integral 5
  interpolation projection 55
  Laplace 42
  orthogonal projection 54
  projection 54
  relatively compact 499
  spectral projection 469
operator equation 53
orthogonal projection 54
orthogonal projection theorem 497
orthogonal wavelets 169

parallelepiped 180
parallelogram identity 491
Petrov–Galerkin matrix 113
Petrov–Galerkin method 61, 112
Poincaré inequality 494
pointwise convergence 34


Poisson–Jensen formula 28
power iteration algorithm 483
principal minor 7
projection 54
  generalized best approximation 56
  interpolation 55
  orthogonal 54
  spectral 469
projection method 53, 94

quadrature method 86
quadrature rule 87, 302, 314
  convergent 88
  Gaussian 87
  Simpson 87
  trapezoidal 87
quasi-weakly singular 232

refinable set 169, 170
regular pair 60, 121
regular partition 96
regular value 501
resolvent kernel 17
resolvent set 466
Riesz representation theorem 497

Schmidt kernel 35
Schur lemma 254
set
  center 200
  compact 489
  derived 507
  invariant 155
  refinable 169
  relatively compact 489
  separable 489
  star-shaped 200
  weak-relatively compact 499
set wavelet 169, 175
set-valued mapping 510

Sloan iterate 73
Sobolev spaces 493
space
  compact 489
  complete 489
  dual 495, 498
  metric 488
  reflexive 498
  separable 489
spectral projection 469
spectrum 467
stability 70
stable 70
star-shaped set 200
superconvergence 77
support 491

Tikhonov regularization 416
totally bounded 489
trace formula 24
truncation matrix 207, 219
truncation parameter 207
truncation strategy 205, 219

uniform boundedness theorem 496
uniform convergence 34
unisolvent 55
upper Hausdorff hemi-metric 505

vanishing moments 149

wavelet bases 149
wavelets
  set 175
  interpolating 190
  orthogonal 169

weak Cauchy sequence 498
weakly convergent 498
weakly singular 37
well-posed 416
