Adaptive Finite Element Algorithms of optimal complexity




Adaptive Finite Element Algorithms

of optimal complexity

Adaptieve Eindige Elementen Algoritmen

van optimale complexiteit

(with a summary in Dutch)

DISSERTATION

to obtain the degree of doctor at Utrecht University, by authority of the rector magnificus, Prof. dr. W.H. Gispen, in accordance with the decision of the Doctorate Board, to be defended in public on Wednesday 18 October 2006 at 12:45 pm

by

YAROSLAV KONDRATYUK

born on 12 July 1977 in the Khmelnitsk region, Ukraine


Promotor: Prof. dr. H.A. van der Vorst

Co-promotor: Dr. R.P. Stevenson

The research in this thesis was financially supported by the Netherlands Organisation for Scientific Research (NWO).

2000 Mathematics Subject Classification: 41A25, 65N12, 65N30, 65N50, 65Y20, 76M10

Yaroslav Kondratyuk
Adaptive finite element algorithms of optimal complexity
Dissertation, Utrecht University – With a summary in Dutch

ISBN 90-393-4375-6


Contents

1 Introduction 1
  1.1 Scope of problems and motivations 2
    1.1.1 Navier-Stokes equations 2
    1.1.2 Linearized and stationary Navier-Stokes problem of Oseen type 2
    1.1.3 Stokes Problem 3
  1.2 Thesis overview 3
    1.2.1 Chapter 2 3
    1.2.2 Chapter 3 3
    1.2.3 Chapter 4 4
    1.2.4 Chapter 5 4

2 Analysis of Adaptive FE Algorithms for the Poisson Problem 7
  2.1 Abstract Variational Problem 7
    2.1.1 Galerkin Method 8
  2.2 Finite Element Approximation Spaces 9
  2.3 Local Mesh Refinement, Adaptivity and Optimal Complexity Algorithms 13
    2.3.1 Linear versus nonlinear and adaptive approximations 13
    2.3.2 Singular solutions of boundary value problems 16
    2.3.3 Nonconforming triangulations and hierarchical tree-structures 19
    2.3.4 Adaptive Algorithms of Optimal Complexity 22
  2.4 A Posteriori Error Analysis 26
  2.5 Basic Principle of Adaptive Error Reduction 30
  2.6 First Practical Convergent Adaptive FE Algorithm 32
  2.7 Second Practical Adaptive FE Algorithm 38
  2.8 Optimal Adaptive Algorithms 40
    2.8.1 Multiscale Decomposition of FEM Approximation Spaces 41
    2.8.2 Optimization of Adaptive Triangulations 43
  2.9 Optimal Adaptive Finite Element Algorithm 47
  2.10 Numerical Experiments 50
    2.10.1 L-shaped domain 51
    2.10.2 Domain with a crack 51

3 Adaptive FE Algorithms for Singularly Perturbed Problems 57
  3.1 Singularly Perturbed Reaction-Diffusion Equation 57
  3.2 Robust Residual Based A Posteriori Error Estimator 58
  3.3 Uniformly Convergent Adaptive Algorithm 62
  3.4 Numerical Example 64

4 Derefinement-based Adaptive FE Algorithms for the Stokes Problem 69
  4.1 Introduction 69
  4.2 Mixed Variational Problems. Well-Posedness and Discretization 70
  4.3 Stokes Problem 71
  4.4 Methods of Optimal Complexity for Solving the Stokes Problem 73
  4.5 Derefinement/optimization of finite element approximation 75
    4.5.1 Derefinement of pressure approximation 75
    4.5.2 Derefinement of velocity approximation 77
  4.6 Adaptive Fixed Point Algorithms for Stokes Problem: Analysis and Convergence 78
  4.7 Convergent Adaptive Finite Element Algorithm for Inner Elliptic Problem 84
  4.8 Conclusions 96

5 An Optimal Adaptive FEM for the Stokes Problem 98
  5.1 Introduction 98
  5.2 Stokes problem 100
  5.3 Finite element approximation 102
  5.4 Overview of the solution method 104
  5.5 A posteriori error estimators 106
    5.5.1 A posteriori error estimator for the inner elliptic problem 106
    5.5.2 A posteriori error estimator for the (reduced) Stokes problem 109
  5.6 Adaptive refinements resulting in error reduction 111
    5.6.1 Adaptive pressure refinements 111
    5.6.2 Adaptive velocity refinements 113
  5.7 An adaptive method for the Stokes problem in an idealized setting 116
  5.8 A practical adaptive method for the Stokes problem 123
  5.9 Numerical experiment 129

Nederlandse samenvatting 139

Dankwoord/Acknowledgements 142

Curriculum Vitae 144


Chapter 1

Introduction

This thesis is dedicated to the design and analysis of optimal algorithms for solving partial differential equations (PDEs). The recent tremendous progress in the performance of modern computers has stimulated the development of efficient numerical methods. Modern numerical simulations employing large-scale computations provide a powerful approach to real-life problems. For the development of new numerical models, verification by means of rigorous mathematical analysis and numerical computations has become essential. Theoretical analysis, detailed numerical computations, and the comparison of these with real experiments together usually guide the development of numerical models.

Nevertheless, the most difficult problems, for instance turbulence in fluid flow, remain challenging for modern computers and numerical methods. In view of this, the design of optimal algorithms has received special attention in the numerical community. In this thesis, we are primarily interested in the development of algorithms for solving PDEs numerically that are efficient with respect to computer memory usage and execution time.

Aiming for an optimal algorithm, we are naturally led to the so-called adaptive method. Simply speaking, the task of an adaptive method is to adapt the approximation to the unknown solution of a differential equation during the solution process, using only a posteriori information, and eventually to produce the (quasi-) best possible approximation at optimal computational cost. Nowadays, adaptive finite element algorithms are being used to solve efficiently partial differential equations (PDEs) arising in science and engineering [40, 1, 3]. Adaptive methods are particularly efficient if the solution of a boundary value problem has singularities. The origin of a singularity can be a concentration of stresses in elasticity, boundary/internal layers in environmental pollution problems, or the propagation of shock waves in fluid mechanics. The general structure of the loop of an adaptive algorithm is

Solve → Estimate → Refine, Derefine.
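As a toy illustration of this loop (a sketch, not the implementation developed in this thesis), the following Python fragment adaptively approximates f(x) = √x on [0, 1] by piecewise constants, bisecting the cells that carry the largest error indicators (a Dörfler-type bulk criterion with parameter θ); all names are illustrative:

```python
# Toy Solve -> Estimate -> Refine loop in 1D with piecewise constants,
# applied to f(x) = sqrt(x), whose derivative is singular at x = 0.
import math

def solve(mesh):
    # "Solve": on each cell, take the midpoint value as the approximation
    return [math.sqrt(0.5 * (a + b)) for (a, b) in mesh]

def estimate(mesh, u):
    # "Estimate": per-cell indicator = max deviation at the cell endpoints
    return [max(abs(math.sqrt(a) - c), abs(math.sqrt(b) - c))
            for (a, b), c in zip(mesh, u)]

def refine(mesh, eta, theta=0.5):
    # "Refine": bisect every cell whose indicator is within a factor theta
    # of the largest one (Dorfler-type bulk marking)
    threshold = theta * max(eta)
    new_mesh = []
    for (a, b), e in zip(mesh, eta):
        if e >= threshold:
            m = 0.5 * (a + b)
            new_mesh += [(a, m), (m, b)]
        else:
            new_mesh.append((a, b))
    return new_mesh

mesh = [(0.0, 0.5), (0.5, 1.0)]
for _ in range(40):
    u = solve(mesh)
    eta = estimate(mesh, u)
    if max(eta) < 1e-2:      # stopping criterion from the a posteriori estimate
        break
    mesh = refine(mesh, eta)
# The singularity attracts the refinement: cells near x = 0 end up far
# smaller than cells near x = 1.
```

The loop stops once the a posteriori bound drops below the tolerance, with the mesh strongly graded towards the singularity at x = 0.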

A fully adaptive finite element algorithm solves a discrete problem on some triangulation, estimates the error in the current approximation using an a posteriori error estimator, and performs adaptive refinement of the triangulation in the regions where the error is largest. Using such an adaptive procedure, we work with a sequence of successively refined


triangulations, with the objective to satisfy a stopping criterion based on the a posteriori error estimate with minimal computational effort. Despite the big success of adaptive FE methods in practical applications during the last 25 years, they were not even proven to converge in more than one space dimension before the works of Dörfler [19] (in 1996) and Morin, Nochetto and Siebert [29] (in 2000).

Convergence alone, however, does not imply that the adaptive method is more efficient than its non-adaptive counterpart. Building on these convergence results, Binev, Dahmen and DeVore [6] (in 2004) and Stevenson [36] (in 2005) subsequently showed that adaptive FEMs converge with optimal rates and with linear computational complexity. These first optimality results were based on the use of special mesh coarsening routines. In those references and in this thesis, an adaptive finite element method is called optimal if it performs as follows: whenever for some s > 0 the solution of the boundary value problem can be approximated within a tolerance ε > 0, in the energy norm, by a finite element approximation on some triangulation with O(ε^{-1/s}) triangles, and one knows how to approximate the right-hand side in the dual norm at the same rate, then the adaptive finite element method produces approximations that converge at this rate, with a number of operations that is of the order of the number of triangles in the output triangulation.
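In practice, the rate s is read off from the computed pairs of triangle counts and errors (N_k, ε_k) by a least-squares fit in log-log coordinates. A small self-contained sketch; the data below are manufactured to satisfy ε = 2 N^{-1/2} exactly, whereas real pairs would come from the solver:

```python
# Estimate the convergence rate s in err ~ N^(-s) from (N, err) pairs
# via the slope of the log-log regression line.
import math

def estimate_rate(Ns, errs):
    # least-squares slope of log(err) against log(N); the rate is its negation
    logN = [math.log(n) for n in Ns]
    logE = [math.log(e) for e in errs]
    k = len(Ns)
    mx, my = sum(logN) / k, sum(logE) / k
    slope = (sum((x - mx) * (y - my) for x, y in zip(logN, logE))
             / sum((x - mx) ** 2 for x in logN))
    return -slope

Ns = [100, 400, 1600, 6400]
errs = [2.0 * n ** -0.5 for n in Ns]   # synthetic data with s = 0.5
s = estimate_rate(Ns, errs)            # recovers s = 0.5 for this data
```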

1.1 Scope of problems and motivations

1.1.1 Navier-Stokes equations

The Navier-Stokes problem (1.1.1) is a very difficult problem to solve numerically.

\[
\begin{aligned}
\frac{\partial u}{\partial t} - \nu \Delta u + (u \cdot \nabla)u + \nabla p &= f && \text{in } \Omega \times (0,T),\\
\operatorname{div} u &= 0 && \text{in } \Omega,\\
u &= 0 && \text{on } \partial\Omega \times (0,T).
\end{aligned} \tag{1.1.1}
\]

Typically, the solutions of this problem show strong singularities, and the application of adaptive procedures is very valuable in this case. In view of the nonstationary behavior of singularities, a detailed analysis of optimality of local refinement/derefinement is necessary.

1.1.2 Linearized and stationary Navier-Stokes problem of Oseen type

Typically, an Oseen problem (1.1.2) has to be solved at each step in the numerical approximation of the Navier-Stokes problem.

\[
\begin{aligned}
-\nu \Delta u + (b \cdot \nabla)u + c u + \nabla p &= f && \text{in } \Omega,\\
\operatorname{div} u &= 0 && \text{in } \Omega,\\
u &= 0 && \text{on } \partial\Omega.
\end{aligned} \tag{1.1.2}
\]

This problem too is computationally far from trivial, especially because of singular perturbations caused by the interactions between diffusion, convection and reaction terms.


To our knowledge, no complete analysis of optimal adaptive algorithms exists for this type of problem.

1.1.3 Stokes Problem

The Stokes problem (1.1.3) is one of the main problems in this thesis, for which we study and develop optimal adaptive algorithms.

\[
\begin{aligned}
-\Delta u + \nabla p &= f && \text{in } \Omega,\\
\operatorname{div} u &= 0 && \text{in } \Omega,\\
u &= 0 && \text{on } \partial\Omega.
\end{aligned} \tag{1.1.3}
\]

1.2 Thesis overview

1.2.1 Chapter 2

In this chapter we explain in detail the most important recent achievements in the theory of optimal adaptive FE algorithms, working with the Poisson problem. First, after recalling the basic properties of the finite element method, we consider the problem of approximation of a given function, and explain the notions of linear, nonlinear, and adaptive approximation. Then we explain why we may expect advantages of adaptive approximation for the numerical solution of partial differential equations. Furthermore, the construction and analysis of a posteriori error estimators is presented, and we discuss how to design a convergent adaptive method. We explain the notion of an optimal algorithm and present the construction of an algorithm that produces a sequence of approximations that converges with the optimal rate. Finally, we also present a proof for the optimal computational complexity of this adaptive algorithm.

We wrote a C++ code implementing a 2D adaptive finite element solver. In this algorithm, Galerkin systems are solved with CG and a wavelet preconditioner. The solver has the possibility to switch between a nodal FEM basis and a wavelet basis on adaptive triangulations, when multiscale analysis of the solution is required. To demonstrate the performance of our solver, we present numerical examples for the Poisson problem on an L-shaped domain and on a domain with a crack.
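The Galerkin solves just mentioned can be illustrated by a minimal sketch (not our actual C++ code): preconditioned conjugate gradients applied to the SPD Galerkin system, here with a plain Jacobi (diagonal) preconditioner standing in for the wavelet preconditioner, tested on the 1D Poisson stiffness matrix:

```python
# Preconditioned conjugate gradients for the SPD system A x = b.
# Jacobi preconditioning and a 1D Poisson stiffness matrix serve as a
# small stand-in for the wavelet-preconditioned 2D Galerkin systems.
def pcg(A, b, M_inv, tol=1e-10, maxit=500):
    n = len(b)
    x = [0.0] * n
    r = b[:]
    z = [M_inv[i] * r[i] for i in range(n)]
    p = z[:]
    rz = sum(ri * zi for ri, zi in zip(r, z))
    for _ in range(maxit):
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rz / sum(pi * Api for pi, Api in zip(p, Ap))
        x = [xi + alpha * pi for xi, pi in zip(x, p)]
        r = [ri - alpha * Api for ri, Api in zip(r, Ap)]
        if sum(ri * ri for ri in r) ** 0.5 < tol:
            break
        z = [M_inv[i] * r[i] for i in range(n)]
        rz_new = sum(ri * zi for ri, zi in zip(r, z))
        p = [zi + (rz_new / rz) * pi for zi, pi in zip(z, p)]
        rz = rz_new
    return x

# 1D Poisson stiffness matrix (tridiagonal [-1, 2, -1]) on n interior nodes
n = 50
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0 for j in range(n)]
     for i in range(n)]
b = [1.0 / (n + 1) ** 2] * n         # constant load, scaled by h^2
M_inv = [1.0 / A[i][i] for i in range(n)]
x = pcg(A, b, M_inv)
```

The point of a good (e.g. wavelet-based) preconditioner is that the iteration count stays bounded as the mesh is refined; the Jacobi choice here is only a placeholder.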

1.2.2 Chapter 3

Here, we study adaptive algorithms for the singularly perturbed reaction-diffusion problem. Problems of this type arise, for example, in the modeling of thin plates and shells, in the numerical treatment of reaction processes, and as a result of an implicit time semi-discretization of parabolic or hyperbolic equations of second order. The main feature of the reaction-diffusion equation is that its solution exhibits very steep boundary layers for large values of the reaction term. In view of this, the application of adaptive methods to solve this problem is particularly attractive ([33, 42]). We are interested in adaptive methods that converge uniformly in terms of the reaction term.


The main ingredients of a robust solver for this kind of problem are discussed. In particular, we present an analysis of a robust a posteriori error estimator and a uniform saturation property. We have implemented in C++ a 2D adaptive solver for the singularly perturbed reaction-diffusion problem. This solver constructs a sequence of adaptive approximations that converge uniformly in terms of the reaction term. Usually, it is quite difficult to achieve linear computational complexity for an adaptive algorithm in practice, because this requires careful design and programming. We performed numerical experiments indicating that our adaptive solver is quite fast when operating with hundreds of thousands of degrees of freedom and that it has linear computational complexity.

1.2.3 Chapter 4

This chapter and the paper [25] are dedicated to the design of optimal algorithms for the Stokes problem. Typically, problems in fluid mechanics (1.1.1), (1.1.2), (1.1.3) naturally lead to mixed variational problems [21, 11]. For the adaptive solution of mixed variational problems, the situation is much more complicated than for elliptic problems, and we are not aware of any proof of optimality of adaptive finite element algorithms.

Bänsch, Morin and Nochetto [5] introduced an adaptive FEM for the Stokes problem, and they proved convergence of the method. Since convergence alone does not imply that the adaptive method is more efficient than its non-adaptive counterpart, an analysis of the convergence rates is important. The numerical experiments in [5] show (quasi-) optimal triangulations for some values of the parameters, but a theoretical explanation of these results is still an open question.

In [25] and in this chapter we present a detailed design of an adaptive FE algorithm for the Stokes problem, and analyze its computational complexity. As in [5], we apply a fixed point iteration to an infinite dimensional Schur complement operator. For the approximation of the inverse of the elliptic operator we use a generalization of the optimal adaptive finite element method from Chapter 2 for elliptic equations. By adding a derefinement step to the resulting adaptive fixed point iteration, in order to optimize the underlying triangulation after each fixed number of iterations, we show that the resulting method converges with the optimal rate and that it has optimal computational complexity. Apart from its use here, in our view, an efficient mesh optimization/derefinement routine could be very useful for the adaptive solution of nonstationary problems that exhibit moving shock waves.

1.2.4 Chapter 5

Recently, in the work of Stevenson [37], an adaptive FEM was proposed for solving elliptic problems that has optimal computational complexity without relying on a recurrent derefinement of the triangulations. This result is very valuable for the construction of optimal adaptive algorithms for stationary problems. The extension of this result to the Stokes mixed variational problem appeared to be very complicated; it is the subject of this chapter (which presents joint work with Stevenson [27]).


A new adaptive finite element method for solving the Stokes equations is developed, which is shown to converge at the best possible rate. The method consists of three nested loops. For a sequence of finite element spaces with respect to adaptively refined partitions, the solution is approximated in the outermost loop by that of the Stokes problem in which the divergence-free condition is reduced to orthogonality of the divergence to the finite element space in which the approximate pressure is sought. For solving each of these semi-discrete problems, the Uzawa iteration is applied. Finally, the elliptic system for the velocity that has to be solved in each Uzawa step is approximately solved by an adaptive finite element method.
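The middle (Uzawa) loop can be sketched on a small manufactured saddle-point system; the diagonal matrix `A_diag` and the matrix `B` below are illustrative stand-ins for the discretized velocity operator and the discrete divergence, and in the actual method each A-solve is itself performed by an adaptive finite element solver:

```python
# Uzawa iteration for the saddle-point system  A u + B^T p = f,  B u = 0:
#     solve A u_k = f - B^T p_k,   then   p_{k+1} = p_k + omega * B u_k.
def uzawa(A_diag, B, f, omega=1.0, iters=60):
    m, n = len(B), len(A_diag)
    p = [0.0] * m
    u = [0.0] * n
    for _ in range(iters):
        # inner solve A u = f - B^T p (exact here since A is diagonal)
        rhs = [f[i] - sum(B[k][i] * p[k] for k in range(m)) for i in range(n)]
        u = [rhs[i] / A_diag[i] for i in range(n)]
        # pressure update driven by the divergence residual B u
        Bu = [sum(B[k][i] * u[i] for i in range(n)) for k in range(m)]
        p = [p[k] + omega * Bu[k] for k in range(m)]
    return u, p

A_diag = [2.0, 3.0, 2.0, 3.0]                      # stand-in velocity operator
B = [[1.0, 0.0, -1.0, 0.0], [0.0, 1.0, 0.0, -1.0]]  # stand-in divergence
f = [1.0, 2.0, 0.5, -1.0]
u, p = uzawa(A_diag, B, f)
# At convergence, B u = 0 and A u + B^T p = f hold up to iteration error.
```

Convergence of this outer iteration is governed by the Schur complement B A^{-1} B^T; for the small system above, ω = 1 gives a contraction.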

We have implemented this adaptive algorithm, and we present a discussion of the numerical performance of our method for the Stokes problem on an L-shaped domain. We compare our results with those from the adaptive Uzawa method introduced by Bänsch, Morin and Nochetto in [5].


Chapter 2

Analysis of Adaptive FE Algorithms for the Poisson Problem

2.1 Abstract Variational Problem

Let W be a real Hilbert space endowed with the norm ‖·‖_W, and let W′ denote its dual space, i.e., the space of all bounded linear functionals f : W → R, equipped with the norm

\[
\|f\|_{W'} := \sup_{w \in W,\, w \neq 0} \frac{|f(w)|}{\|w\|_W}. \tag{2.1.1}
\]

Consider the following abstract variational problem (AVP)

\[
\text{Given } f \in W', \text{ find } u \in W \text{ such that } a(u,w) = f(w) \text{ for all } w \in W. \tag{2.1.2}
\]

Definition 2.1.1 A bilinear form a(·, ·) : W × W → R is called continuous if there exists a constant ς > 0 such that

\[
|a(u,w)| \le \varsigma \|u\|_W \|w\|_W \quad \text{for all } u, w \in W. \tag{2.1.3}
\]

Definition 2.1.2 A bilinear form a(·, ·) is called W-elliptic (or coercive) if there exists a constant α > 0 such that

\[
a(w,w) \ge \alpha \|w\|_W^2 \quad \text{for all } w \in W. \tag{2.1.4}
\]

The question about existence and uniqueness of the solution of (AVP) is answered by the Lax-Milgram lemma ([35, 13]):


Lemma 2.1.3 (Lax-Milgram) Let W be a (real) Hilbert space, a(·, ·) : W × W → R a bilinear, continuous and W-elliptic form, and f : W → R a bounded linear functional. Then there exists a unique solution u ∈ W to (AVP), and moreover

\[
\|u\|_W \le \frac{1}{\alpha} \|f\|_{W'}, \tag{2.1.5}
\]

where the constant α stems from the coercivity condition.
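The stability bound (2.1.5) is a one-line consequence of coercivity and the definition of the dual norm:

```latex
\alpha \|u\|_W^2 \le a(u,u) = f(u) \le \|f\|_{W'} \|u\|_W ,
```

so that dividing by ‖u‖_W (for u ≠ 0) gives (2.1.5).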

2.1.1 Galerkin Method

In the following, let us assume that the conditions of the Lax-Milgram lemma hold. With the aim to build a numerical approximation to the solution of the abstract variational problem, we choose a sequence of subspaces (W_j)_{j∈N} ⊂ W such that

\[
\inf_{w_j \in W_j} \|w - w_j\|_W \to 0 \quad \text{as } j \to \infty, \quad \text{for all } w \in W. \tag{2.1.6}
\]

For each W_j ⊂ W we can formulate the following discrete problem:

\[
\text{Given } f \in W', \text{ find } u_j \in W_j \text{ such that } a(u_j, w_j) = f(w_j) \text{ for all } w_j \in W_j. \tag{2.1.7}
\]

Since W_j is a subspace of W, the conditions of the Lax-Milgram lemma also hold for the discrete problem, providing the existence and uniqueness of the discrete solution u_j, which is called a Galerkin approximation. More precisely, its properties are stated in the following theorem ([32]).

Theorem 2.1.4 (Properties of a Galerkin approximation) Under the assumptions of the Lax-Milgram lemma, there exists a unique solution u_j to (2.1.7). Furthermore, it is stable, since independently of the choice of the subspace W_j it satisfies

\[
\|u_j\|_W \le \frac{1}{\alpha} \|f\|_{W'}. \tag{2.1.8}
\]

Moreover, with u being the solution of (AVP), we have that

\[
\|u - u_j\|_W \le \frac{\varsigma}{\alpha} \inf_{w_j \in W_j} \|u - w_j\|_W. \tag{2.1.9}
\]
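Estimate (2.1.9), known as Céa's lemma, follows from coercivity, the Galerkin orthogonality relation a(u − u_j, w_j) = 0 for all w_j ∈ W_j (obtained by subtracting (2.1.7) from (2.1.2)), and continuity:

```latex
\alpha \|u - u_j\|_W^2 \le a(u - u_j, u - u_j)
  = a(u - u_j, u - w_j)
  \le \varsigma \, \|u - u_j\|_W \, \|u - w_j\|_W ;
```

dividing by ‖u − u_j‖_W and taking the infimum over w_j ∈ W_j yields (2.1.9).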

In the following we consider the special case that the bilinear form a(·, ·) is symmetric. The assumptions of symmetry, continuity and coercivity imply that we can introduce an energy inner product and energy norm induced by a(·, ·):

\[
(u, w)_E := a(u, w) \quad \text{for all } u, w \in W, \tag{2.1.10}
\]
\[
\|u\|_E := a(u, u)^{1/2} \quad \text{for all } u \in W, \tag{2.1.11}
\]

and the following norm equivalence holds:

\[
\alpha^{1/2} \|u\|_W \le \|u\|_E \le \varsigma^{1/2} \|u\|_W \quad \text{for all } u \in W. \tag{2.1.12}
\]

Now, let u and u_j be the solutions of (AVP) and its discrete analog (2.1.7), respectively. Comparing (2.1.2) and (2.1.7) we infer that

\[
a(u - u_j, w_j) = 0 \quad \text{for all } w_j \in W_j, \tag{2.1.13}
\]

showing that the error u − u_j of the Galerkin approximation is orthogonal to the approximation space W_j with respect to the energy inner product. As a consequence, we have

\[
\|u - w_j\|_E^2 = \|u_j - w_j\|_E^2 + \|u - u_j\|_E^2 \quad \text{for all } w_j \in W_j, \tag{2.1.14}
\]

and so the following lemma clearly holds true.

Lemma 2.1.5 If, in addition to the conditions of the Lax-Milgram lemma, the bilinear form a(·, ·) : W × W → R is symmetric, then the Galerkin approximation u_j, being the solution of (2.1.7), is the best approximation to the solution of (AVP) from the approximation space W_j, i.e.,

\[
\|u - u_j\|_E = \inf_{w_j \in W_j} \|u - w_j\|_E. \tag{2.1.15}
\]

Note that, in view of estimate (2.1.9) and property (2.1.6), we have

\[
\lim_{j \to \infty} u_j = u \quad \text{in } W. \tag{2.1.16}
\]

However, this result says nothing about the rate of convergence of the approximate solutions towards the exact one. For this we need additional information, namely, properties of the approximation spaces W_j and the regularity of the solution. One of the possible choices for W_j is provided by the finite element method.
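The best-approximation property of Lemma 2.1.5 can be checked numerically in a finite-dimensional model problem (an illustrative sketch; all data are manufactured): take W = R^n with a(u, w) = wᵀAu for an SPD matrix A, and W_j the span of the first m coordinate vectors; the Galerkin solution then beats every competitor from W_j in the energy norm:

```python
# Numerical check of the best-approximation property of the Galerkin
# solution for a symmetric coercive bilinear form on W = R^n.
import random

n, m = 6, 3
random.seed(0)
# SPD matrix A = M^T M + I, so a(u, w) = w^T A u is symmetric and coercive
M = [[random.uniform(-1, 1) for _ in range(n)] for _ in range(n)]
A = [[sum(M[k][i] * M[k][j] for k in range(n)) + (1.0 if i == j else 0.0)
      for j in range(n)] for i in range(n)]
f = [random.uniform(-1, 1) for _ in range(n)]

def gauss_solve(mat, rhs):
    # Gaussian elimination with partial pivoting (enough for this sketch)
    k = len(rhs)
    a = [row[:] + [rhs[i]] for i, row in enumerate(mat)]
    for c in range(k):
        piv = max(range(c, k), key=lambda r: abs(a[r][c]))
        a[c], a[piv] = a[piv], a[c]
        for r in range(c + 1, k):
            fac = a[r][c] / a[c][c]
            for j in range(c, k + 1):
                a[r][j] -= fac * a[c][j]
    x = [0.0] * k
    for c in reversed(range(k)):
        x[c] = (a[c][k] - sum(a[c][j] * x[j] for j in range(c + 1, k))) / a[c][c]
    return x

u = gauss_solve(A, f)                 # exact solution of the full problem
# Galerkin system on W_j = span(e_1, ..., e_m): a(u_j, e_i) = f(e_i)
Aj = [[A[i][j] for j in range(m)] for i in range(m)]
cj = gauss_solve(Aj, f[:m])
uj = [cj[i] if i < m else 0.0 for i in range(n)]

def energy_err(v):
    d = [u[i] - v[i] for i in range(n)]
    return sum(d[i] * A[i][j] * d[j] for i in range(n) for j in range(n)) ** 0.5

# random competitors from W_j all have a larger (or equal) energy-norm error
competitors = [[random.uniform(-2, 2) if i < m else 0.0 for i in range(n)]
               for _ in range(200)]
assert all(energy_err(uj) <= energy_err(w) + 1e-12 for w in competitors)
```

This is exactly the A-orthogonal projection characterization behind (2.1.15).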

2.2 Finite Element Approximation Spaces

As we have seen in the previous section, to use the Galerkin method for the numerical solution of (AVP) we need to specify the approximation spaces W_j. The finite element method is a specific procedure for constructing computationally efficient approximation spaces, which are called finite element spaces. In many cases of practical interest we deal with a boundary value problem posed over a domain Ω ⊂ R^d with Lipschitz-continuous boundary. Depending on the boundary value problem under consideration, W is for example one of the spaces H^1_0(Ω), H^1(Ω), H^2_0(Ω), H(div, Ω), H(curl, Ω), etc. Here we shall confine ourselves mostly to variational problems defined on H^1_0(Ω). This corresponds to


a second order elliptic boundary value problem. In particular, we start with the Poisson problem

\[
\text{Given } f \in H^{-1}(\Omega) := (H^1_0(\Omega))', \text{ find } u \in H^1_0(\Omega) \text{ such that}
\]
\[
a(u,w) := \int_\Omega \nabla u \cdot \nabla w = f(w) \quad \text{for all } w \in H^1_0(\Omega), \tag{2.2.1}
\]

where we assume that the domain Ω is a polytope. The bilinear form a(·, ·) : H^1_0(Ω) × H^1_0(Ω) → R is continuous, coercive and symmetric. So on H^1_0(Ω), the ‖·‖_{H^1_0(Ω)}-norm and the energy norm ‖·‖_E differ by at most a constant multiple. We switch between these norms at our convenience. Defining A : H^1_0(Ω) → H^{-1}(Ω) by (Au)(w) = a(u, w), (2.2.1) can be rewritten as

\[
A u = f. \tag{2.2.2}
\]

We will measure the error of any approximation for u in the energy norm ‖·‖_E, with dual norm

\[
\|f\|_{E'} := \sup_{0 \neq w \in H^1_0(\Omega)} \frac{|f(w)|}{\|w\|_E}, \qquad f \in H^{-1}(\Omega). \tag{2.2.3}
\]

Equipped with these norms, A : H^1_0(Ω) → H^{-1}(Ω) is an isomorphism.

The first step within the finite element procedure is to produce a triangulation T of the domain Ω ([12]), i.e., a finite collection of closed polytopes K (the 'elements') such that

• Ω = ∪_{K∈T} K
• the interior of each polytope K ∈ T is non-empty, i.e., int(K) ≠ ∅
• for each pair of distinct K₁, K₂ ∈ T one has int(K₁) ∩ int(K₂) = ∅

In view of constructing a sequence of subspaces (W_j)_{j∈N}, we will consider a sequence of triangulations (T_j)_{j∈N}.

Definition 2.2.1 A sequence of triangulations (T_j)_{j∈N} is called quasi-uniform if

\[
\max_{K \in T_j} \operatorname{diam}(K) \eqsim \min_{K \in T_j} \operatorname{diam}(K). \tag{2.2.4}
\]

Here and throughout the thesis, in order to avoid the repeated use of generic but unspecified constants, by C ≲ D we mean that C can be bounded by a multiple of D, independently of the parameters on which C and D may depend, where in particular we think of j. Obviously, C ≳ D is defined as D ≲ C, and by C ≂ D we mean that C ≲ D and C ≳ D.

Definition 2.2.2 If for any pair of distinct polytopes K₁, K₂ ∈ T with K₁ ∩ K₂ ≠ ∅, their intersection is a common lower dimensional face of K₁ and K₂, i.e., for d = 3 it is either a common face, side or vertex, then the triangulation T is called conforming; otherwise we shall call it a nonconforming triangulation. A vertex of any K ∈ T is called a vertex of T. It is called a non-hanging vertex if it is a vertex for all K ∈ T that contain it; otherwise it is called a hanging vertex (see Fig. 2.1).


Figure 2.1: Non-hanging and hanging vertices.

Once we have set up the triangulation T_j, the next step is to construct the corresponding approximation space W_j. For some k ∈ N_{>0}, for each element K ∈ T_j let P(K) be a space of algebraic polynomials of some fixed finite dimension that includes the space P_k(K) of all polynomials of total degree less than or equal to k. We define

\[
W_j := \Bigl\{ v \in C^r(\Omega) \cap \prod_{K \in T_j} P(K) : v|_{\partial\Omega} = 0 \Bigr\}, \tag{2.2.5}
\]

where r ∈ {−1, 0, 1, …} and C^{-1}(Ω) := L²(Ω). Depending on k and on the type of polytopes being used, the space P(K) will be chosen sufficiently large such that, despite the intersection with C^r(Ω), the following direct or Jackson estimate is valid:

For all p ∈ [1, ∞], s ≤ t ≤ k + 1, s < r + 1/p,

\[
\inf_{w_j \in W_j} \|u - w_j\|_{W^s_p(\Omega)} \lesssim h_j^{t-s} \|u\|_{W^t_p(\Omega)} \quad \text{for all } u \in W^t_p(\Omega), \tag{2.2.6}
\]

where h_j := max_{K∈T_j} diam(K). Since, measured in L²(Ω), the error behaves at best like O(h_j^{k+1}), the space W_j is said to have order k + 1.

Defining, for a triangulation T, A_T : W_T → (W_T)′ ⊃ H^{-1}(Ω) by (A_T v_T)(w_T) = a(v_T, w_T) for v_T, w_T ∈ W_T, u_T := A_T^{-1} f is the Galerkin approximation of u. One easily verifies that for any f ∈ H^{-1}(Ω), ‖A_T^{-1} f‖_E ≤ ‖f‖_{E′}.

Definition 2.2.3 If, for each polytope K ∈ T_j, the ratio of the radii of the smallest circumscribed and the largest inscribed ball of K is bounded uniformly in K ∈ T_j and j ∈ N, then the sequence of triangulations (T_j)_{j∈N} is called shape-regular.

In finite element analysis we need the shape regularity requirement to avoid very small angles in the triangulation, since typical FEM estimates depend on the minimal angle of the triangulation and deteriorate when it becomes small.

For a shape-regular sequence of triangulations, we have the following inverse or Bernstein estimate (e.g., [31]): with h_j := min_{K∈T_j} diam(K) and s ≤ t < r + 1/p,

\[
\|w_j\|_{W^t_p(\Omega)} \lesssim h_j^{s-t} \|w_j\|_{W^s_p(\Omega)} \quad \text{for all } w_j \in W_j. \tag{2.2.7}
\]

Since W^1_2(Ω) = H^1(Ω), note that the Bernstein inequality in particular implies that for r = 0, the space W_j defined in (2.2.5) is contained in H^1_0(Ω). Therefore, in the following


we may confine the discussion to the case r = 0. Furthermore, as elements we will consider exclusively d-simplices, i.e., triangles for d = 2 and tetrahedra for d = 3. Finally, as spaces P(K) we consider P_k(K) for some k ∈ N_{>0}. That is, for triangulations T_j of Ω ⊂ R^d into d-simplices, we consider

\[
W_j := H^1_0(\Omega) \cap C(\Omega) \cap \prod_{K \in T_j} P_k(K). \tag{2.2.8}
\]

In order to perform efficient computations with this space we need a basis consisting of functions whose supports are as small as possible. To this end, we will first select a set {a_i : i ∈ J_j} of points in Ω such that each u ∈ W_j is uniquely determined by its values at these points, and, conversely, for each c⃗ = (c_i)_{i∈J_j} ∈ R^{#J_j} there is a w_j ∈ W_j with w_j(a_i) = c_i, i ∈ J_j. We will call such a set a nodal set.

For each K ∈ T_j we select the set of points

\[
S_K = \{ x \in K : \lambda_K(x) \in k^{-1} \mathbb{N}^{d+1} \}, \tag{2.2.9}
\]

where λ_K(x) ∈ R^{d+1} are the barycentric coordinates of x with respect to K; see Figures 2.2, 2.3, 2.4. Now, assuming that T_j is conforming, a nodal set is the union of these sets, excluding those points on the boundary of Ω. Indeed, note that each p ∈ P_k(K) is uniquely determined by its values at the points of S_K. The statement then follows from the fact that for u ∈ W_j the degrees of freedom on each face of K ∈ T_j uniquely determine the restriction of u to that face.

Given a nodal set {a_i : i ∈ J_j}, we define a corresponding basis {φ_i : i ∈ J_j} ⊂ W_j by

\[
\varphi_i(a_m) = \begin{cases} 1 & \text{if } i = m, \\ 0 & \text{if } i \neq m. \end{cases} \tag{2.2.10}
\]

The set {φ_i : i ∈ J_j} is called a nodal basis, and each w_j ∈ W_j can be uniquely represented as

\[
w_j = \sum_{i \in J_j} w_j(a_i) \varphi_i. \tag{2.2.11}
\]

In case of a possibly nonconforming triangulation, we define the nodal set as

{x ∈ ∪_{K∈T_j} S_K : x ∉ ∂Ω and if x ∈ K′ ∈ T_j then x ∈ S_{K′}}. (2.2.12)

The basis for W_j is then again defined by (2.2.10). For each w ∈ C(Ω) we define the nodal interpolant

I_j w = ∑_{i∈J_j} w(a_i) φ_i. (2.2.13)
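As an illustration (ours, not part of the thesis), in one dimension with k = 1 the nodal interpolant (2.2.13) is just piecewise-linear interpolation at the nodes; the helper below is a hypothetical sketch.

```python
# Hypothetical 1D sketch of the nodal interpolant I_j of (2.2.13) for k = 1:
# I_j w is the piecewise-linear function that matches w at the nodes a_i.
def nodal_interpolant(w, nodes):
    values = [w(a) for a in nodes]

    def I_w(x):
        # Locate the interval [a_i, a_{i+1}] containing x and interpolate
        # linearly; this equals sum_i w(a_i)*phi_i(x) for hat functions phi_i.
        for i in range(len(nodes) - 1):
            a, b = nodes[i], nodes[i + 1]
            if a <= x <= b:
                t = (x - a) / (b - a)
                return (1 - t) * values[i] + t * values[i + 1]
        raise ValueError("x lies outside the mesh")

    return I_w

nodes = [0.0, 0.25, 0.5, 0.75, 1.0]
I_w = nodal_interpolant(lambda x: x * x, nodes)
```

By (2.2.10)-(2.2.11) the interpolant reproduces w exactly at every node, while the error between nodes is governed by interpolation estimates of the kind stated next.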

The following theorem gives estimates for the interpolation error w − I_j w ([31, 13]).


Figure 2.2: Sets SK in 2D and 3D for k = 1

Figure 2.3: Sets SK in 2D and 3D for k = 2

Theorem 2.2.4 Let (T_j)_{j∈N} be a shape-regular sequence of triangulations. Then

‖w − I_j w‖_{W^s_p(Ω)} ≲ h_j^{t+1−s} ‖w‖_{W^{t+1}_p(Ω)} for all w ∈ W^{t+1}_p(Ω), (2.2.14)

for parameters s, t and p as in (2.2.6) (and r = 0), with the additional condition t + 1 > d/p.

Remark 2.2.5 Note that this theorem implies (2.2.6) for t + 1 > d/p. To obtain (2.2.6) without this additional condition, one can apply so-called quasi-interpolants ([31, 14]).

Finally, we note that in case the domain Ω is not a polytope, isoparametric elements, which have curved boundaries, are used in order to provide a good approximation of the boundary of Ω. For more details about the construction of finite element spaces we refer the reader to the books ([13, 9, 2, 32, 10]).

2.3 Local Mesh Refinement, Adaptivity and Optimal Complexity Algorithms

2.3.1 Approximation of a given function. Linear versus nonlinear and adaptive approximation

In this section, as an example to illustrate the ideas, we consider the problem of approximating a given function f : Ω → R by piecewise constant functions, where Ω := [0, 1]. For


Figure 2.4: Sets SK in 2D and 3D for k = 3

W ⊂ L_∞(Ω), by e(f, W) we denote the error in the uniform norm (L_∞-norm) of the best approximation of f by elements of W, i.e.,

e(f, W) := inf_{w∈W} ‖f − w‖_{L_∞(Ω)}. (2.3.1)

We will describe here three types of approximation, namely linear, nonlinear and adaptive. Let T be a triangulation of Ω, i.e., a finite collection of essentially disjoint closed subintervals K such that Ω = ∪_{K∈T} K. Let W^0(T) be the space of piecewise constant functions relative to T, i.e.,

W^0(T) := ∏_{K∈T} P_0(K). (2.3.2)

Let us first consider a uniform triangulation T, meaning that all subintervals have equal length. From the estimate (2.2.6) we infer that

inf_{w∈W^0(T)} ‖f − w‖_{L_∞(Ω)} ≲ (#T)^{−1} ‖f‖_{W^1_∞(Ω)} for all f ∈ W^1_∞(Ω), (2.3.3)

i.e., the optimal order 1 for piecewise constant approximation is attained for f ∈ W^1_∞(Ω).

What is more, in ([18]) it is shown that

e(f, W^0(T)) ≂ (#T)^{−α} ⇔ f ∈ Lip(α, L_∞(Ω)). (2.3.4)

Here, the space Lip(α, L_p(Ω)), 0 < α ≤ 1, 0 < p ≤ ∞, is the space of all functions f ∈ L_p(Ω) for which

‖f(· + h) − f(·)‖_{L_p[0,1−h)} ≤ M h^α, 0 < h < 1, (2.3.5)

with the smallest M ≥ 0 for which (2.3.5) holds being the norm ‖f‖_{Lip(α,L_p(Ω))}. The space Lip(α, L_∞(Ω)) is equal to W^1_∞(Ω). Thanks to (2.3.4), we know precisely which class of functions can be approximated with order α by piecewise constants on uniform triangulations. This type of approximation, where the triangulations are chosen in advance and are independent of the target function f, is called linear.


Another type of approximation is nonlinear approximation, which allows the triangulations T of Ω to depend on f. Let

W^0_{NL}(n) := ∪_{T, #T = n} W^0(T), (2.3.6)

i.e., the space of piecewise constants with respect to some partition of [0, 1] into n subintervals, which, clearly, is not a linear space. Whereas for linear approximation the optimal order 1 is attained for functions f ∈ Lip(1, L_∞(Ω)), as shown in ([18]) nonlinear approximation provides the optimal order for f ∈ Lip(1, L_1(Ω)), i.e.,

e(f, W^0_{NL}(n)) ≂ n^{−1} ⇔ f ∈ Lip(1, L_1(Ω)). (2.3.7)

Since Lip(1, L_1(Ω)) is a much larger space than Lip(1, L_∞(Ω)), with nonlinear approximation it is possible to realize the optimal approximation order for a much larger class of functions. Note, however, that the sequence of underlying optimal triangulations is not explicitly given.

The third type of approximation we discuss is adaptive approximation. This is a kind of nonlinear approximation given by an algorithm that, as we will see, produces a sequence of piecewise constant approximations that converges with the optimal order under only slightly stronger conditions on f than needed for (best) nonlinear approximation. Since this is the first adaptive algorithm that we mention here, we will discuss it in some detail. Given ε, the target accuracy of the approximation, the adaptive algorithm ADAPTIVE subdivides those elements K ∈ T for which the error inf_{c∈R} ‖f − c‖_{L_∞(K)} > ε into two equal parts until the desired tolerance is met.

Algorithm 2.1 Adaptive Algorithm

ADAPTIVE[f, ε, K] → T
/* Called with K = [0, 1], the output of the algorithm is a triangulation T such that e(f, W^0(T)) ≤ ε. */

if inf_{c∈R} ‖f − c‖_{L_∞(K)} > ε
then
    Subdivide K into two equal parts K_1 and K_2.
    T_1 := ADAPTIVE[f, ε, K_1]
    T_2 := ADAPTIVE[f, ε, K_2]
    T := T_1 ∪ T_2
else
    T := {K}
endif
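A minimal executable sketch of Algorithm 2.1 (our illustration, with a sampling shortcut that is not in the thesis): on an interval K = [a, b] the best constant in the L_∞-norm is (max f + min f)/2, so the local error is (max f − min f)/2, which we estimate here by sampling f at a fixed number of points.

```python
# Sketch of ADAPTIVE (Algorithm 2.1) for piecewise-constant L_inf
# approximation on [0, 1].  The max/min of f on each interval are
# estimated by sampling -- an assumption of this illustration.
def adaptive(f, eps, a=0.0, b=1.0, samples=33):
    xs = [a + (b - a) * i / (samples - 1) for i in range(samples)]
    vals = [f(x) for x in xs]
    local_error = (max(vals) - min(vals)) / 2.0   # error of the best constant
    if local_error <= eps:
        return [(a, b)]                            # K meets the tolerance
    m = (a + b) / 2.0                              # otherwise bisect K
    return adaptive(f, eps, a, m, samples) + adaptive(f, eps, m, b, samples)

mesh = adaptive(lambda x: x ** 0.5, 0.01)
```

Running this on f(x) = x^{1/2} produces intervals that are much smaller near x = 0, where f is steep, than near x = 1, mirroring the graded partitions of Example 2.3.1.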

In [18] it was shown that this adaptive algorithm attains the optimal approximation order 1 if f satisfies the condition f′ ∈ L log L, where L log L stands for the space of all integrable functions g for which

‖g‖_{L log L} := ∫_Ω |g(x)|(1 + log |g(x)|) dx < ∞. (2.3.8)

This space is only slightly smaller than Lip(1, L_1(Ω)), membership of which was needed for convergence of order 1 with best partitions.

As an illustration, we consider the following example.


Figure 2.5: Nonlinear approximation for y = x^{1/2}.

Example 2.3.1 Let us consider the function f(x) = x^α with 0 < α < 1. Since it belongs to Lip(α, L_∞(Ω)), whereas it does not belong to a higher order Lipschitz space, in view of (2.3.4) this function can be approximated on uniform triangulations with order exactly α, i.e.,

e(f, W^0(T)) ≂ (#T)^{−α}. (2.3.9)

In particular, the optimal order 1 cannot be attained by linear approximation. Noting that f ∈ Lip(1, L_1(Ω)), we conclude that this function can be approximated with order 1 using nonlinear approximation. Indeed, choose the points x_k := (k/n)^{1/α}, k = 0, 1, ..., n, which are the preimages of y_k := k/n, k = 0, 1, ..., n (see Fig. 2.5, Fig. 2.6). Then, with T = {[x_i, x_{i+1}]}_{i=0}^{n−1}, one easily verifies that

e(f, W^0_{NL}(n)) ≤ e(f, W^0(T)) ≲ 1/n, (2.3.10)

i.e., with nonlinear approximation the optimal order 1 is attained for any 0 < α < 1. Finally, what is more, since for any α ∈ (0, 1), f′ ∈ L log L, this optimal order is also attained by the algorithm ADAPTIVE.
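To make Example 2.3.1 concrete, the sketch below (our illustration; the helper name is hypothetical) evaluates the best piecewise-constant error on the graded partition with breakpoints x_k = (k/n)^{1/α}. Since f(x_k) = k/n, the increasing function f gains exactly 1/n over each interval, so the error is 1/(2n) regardless of α.

```python
# Error of the best piecewise-constant approximation of f(x) = x**alpha
# on the graded partition x_k = (k/n)**(1/alpha) of Example 2.3.1.
def graded_partition_error(alpha, n):
    xs = [(k / n) ** (1.0 / alpha) for k in range(n + 1)]
    f = lambda x: x ** alpha
    # f is increasing, so on [x_i, x_{i+1}] the best constant is the
    # midpoint value and the error is half the increase of f.
    return max((f(xs[i + 1]) - f(xs[i])) / 2.0 for i in range(n))

err = graded_partition_error(0.5, 100)   # 1/(2n) = 0.005, independent of alpha
```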

2.3.2 Singular solutions of boundary value problems

In the previous section, we discussed the problem of approximating a given function. The situation is more complicated if we want to construct an algorithm for the numerical solution of a boundary value problem, since the solution is unknown and can have a singularity somewhere in the domain Ω. The origin of a singularity can be a concentration


Figure 2.6: Nonlinear approximation for y = x^{1/4}.

of stresses in elasticity, boundary/internal layers in environmental pollution problems, or the propagation of shock waves in fluid mechanics ([4, 20, 33]).

As an example, let us consider the Poisson problem (2.2.1) in two space dimensions, where Ω is a polygon (Fig. 2.7) and the right-hand side is sufficiently smooth. Then it is known ([2]) that the regularity of the solution u depends on the maximum interior angle ω at the corners of the boundary of the domain. More precisely, for any ε > 0 we have

u ∈ H^{π/ω+1−ε}(Ω), whereas generally u ∉ H^{π/ω+1}(Ω). (2.3.11)

Consider now the finite element spaces W_j from (2.2.5) of order k + 1 with respect to a sequence of quasi-uniform triangulations (T_j)_{j∈N}. From (2.2.6), with N_j := #T_j ∼ h_j^{−2}, we infer that

inf_{w_j∈W_j} ‖u − w_j‖_{H^1(Ω)} ≲ N_j^{−k/2} ‖u‖_{H^{k+1}(Ω)} for all u ∈ H^{k+1}(Ω). (2.3.12)

We conclude that the optimal order k/2 is attained if u ∈ H^{k+1}(Ω). Actually, one can show that this is sharp, so that from (2.3.11) we see that the rate of such finite element approximations is not restricted by the regularity of the solution only if ω ≤ π/k. For k ≥ 4, there is no polygon that satisfies this condition. For k = 3, it is only satisfied for an equilateral triangle, and for k = 1, it requires that there are no re-entrant corners.

To handle the case that ω > π/k, algorithms based on adaptive triangulations are often proposed with the aim to retain the optimal rate of convergence. We are interested in the question whether such adaptive procedures can really lead to a significant improvement of the numerical efficiency. An example of a typical adaptive finite element mesh for a problem with a corner singularity is given in Fig. 2.8.


Figure 2.7: Polygon with maximum interior angle ω

Figure 2.8: Typical adaptive finite element mesh for a problem with a corner singularity


2.3.3 Nonconforming triangulations and adaptive hierarchical tree-structures

As we have described before, finite element spaces consist of suitable subspaces of all piecewise polynomials of a certain degree with respect to a subdivision of Ω into a large number of tiny polytopes. Here, we will consider such approximation spaces based on subdivisions into triangles, so in two space dimensions. In this section, we describe the type of triangulations that we shall use throughout this chapter. Let T be a triangulation of Ω. Recall from Definition 2.2.2 the concepts of conforming and nonconforming triangulations and hanging and non-hanging vertices of T. We shall say that an edge ℓ of a triangle K ∈ T is an edge of the triangulation T if it does not contain a hanging vertex in its interior. The set of all edges of T will be denoted by E_T, and the set of all non-hanging vertices of T by V_T; the corresponding sets of interior edges and interior non-hanging vertices of T will be denoted by E̊_T and V̊_T, respectively. If two triangles K_1, K_2 ∈ T share some edge ℓ ∈ E_T, they will be called edge neighbors, and if they share a vertex we will call them vertex neighbors.

An adaptive algorithm will generally produce locally refined meshes. In view of the importance of having shape-regular triangulations, the question arises which local refinement strategies preserve shape regularity. The following three methods are known to do so:

• Regular or red refinement. This strategy consists of subdividing a triangle by connecting the midpoints of its edges. Clearly, since we start with a non-degenerate triangulation T_0, subdivision based on red refinement produces a shape-regular family of triangulations.

• Longest edge bisection. This strategy subdivides a triangle by connecting the midpoint of the longest edge with the vertex opposite to this edge. In ([34]) it was proven that in each triangulation from any sequence T_1, ..., T_n built by longest edge bisection, the smallest angle is larger than or equal to half of the smallest angle in the initial triangulation T_0.

• Newest vertex bisection. This strategy starts by assigning to exactly one vertex of each triangle in the initial triangulation T_0 the newest vertex label. Then, if at some step of the adaptive algorithm a triangle K should be subdivided, we perform this by connecting the newest vertex of K with the midpoint of the opposite edge, and mark this midpoint as the newest vertex of both new triangles produced by this subdivision. In ([28]) it was proven that refinement based on newest vertex bisection builds a family of triangulations which is shape regular.
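The bisection rule of the last strategy can be sketched as follows (our illustration; representing a triangle as a vertex triple with the newest vertex first is an assumption of this sketch, not the thesis' data structure).

```python
# Newest vertex bisection: connect the newest vertex to the midpoint of
# the opposite edge; the midpoint becomes the newest vertex of both children.
def bisect(tri):
    v_new, a, b = tri                                  # newest vertex first
    m = ((a[0] + b[0]) / 2.0, (a[1] + b[1]) / 2.0)     # midpoint of edge ab
    return [(m, v_new, a), (m, v_new, b)]

t0 = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))   # newest vertex at the origin
children = bisect(t0)
```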

In this chapter we restrict ourselves to triangulations that are created by recursive, generally non-uniform red refinements starting from some fixed initial conforming triangulation T_0. These triangulations are generally nonconforming.

For reasons that will become clear later (in Section 2.8.1), we will have to limit the 'amount of nonconformity' by restricting ourselves to admissible triangulations, a concept that is defined as follows:


Figure 2.9: Triangle and its red-refinement.

Figure 2.10: An example of an admissible triangulation.

Definition 2.3.2 A triangulation T is called an admissible triangulation if for every edge of a K ∈ T that contains a hanging node in its interior, the endpoints of this edge are non-hanging vertices.

Figure 2.10 depicts an example of an admissible triangulation. Apparently, local refinement produces triangulations which are generally not admissible. In the following we discuss how to repair this. To any triangle K from the initial triangulation T_0 we assign the generation gen(K) = 0. Then the generation gen(K) for K ∈ T is defined as the number of subdivision steps needed to produce K starting from the initial triangulation T_0. A triangulation is called uniform when all its triangles have the same generation. A vertex of the triangulation will be called a regular vertex if all triangles that contain this vertex have the same generation (see Fig. 2.11). Obviously, a regular vertex is non-hanging.

Corresponding to the triangulation T, we also build a tree T(T) that contains as nodes all the triangles which were created to construct T from T_0. The roots of this tree are the triangles of the initial triangulation T_0. When a triangle K is subdivided, four new triangles appear, which are called the children of K, and K is called their parent. Similarly, we


Figure 2.11: A regular vertex (left picture) and a non-regular although non-hanging vertex (right picture).

introduce also grandparent/grandchild relations. In case a triangle K ∈ T(T) has no children in the tree T(T), it is called a leaf of this tree. The set of all leaves of the given tree T(T) will be denoted by L(T(T)). Apparently, the set of leaves L(T(T)) forms the final triangulation T. We will call T̃ a subtree of T(T), denoted as T̃ ⊂ T(T), if it contains all roots of T(T) and if for any K ∈ T̃ all its siblings and ancestors are also in T̃.
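The tree T(T) and its leaves can be sketched as follows (our illustration; class and function names are hypothetical). Red refinement gives every subdivided node four children, and collecting the leaves recovers the current triangulation.

```python
# Hedged sketch of the refinement tree T(T): nodes are triangles, a
# subdivided triangle stores its four red-refinement children, and the
# leaves of the tree form the triangulation T.
class TriNode:
    def __init__(self, label, generation=0):
        self.label = label
        self.generation = generation
        self.children = []                 # empty: this node is a leaf

    def subdivide(self):
        # red refinement creates four children of the next generation
        self.children = [TriNode(self.label + "." + str(i), self.generation + 1)
                         for i in range(4)]

def leaves(node):
    if not node.children:
        return [node]
    return [leaf for child in node.children for leaf in leaves(child)]

root = TriNode("K0")                       # one root triangle of T_0
root.subdivide()                           # uniform first refinement
root.children[0].subdivide()               # then a local refinement
tris = leaves(root)                        # the triangulation: 3 + 4 leaves
```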

Proposition 2.3.3 [36] For any triangulation T created by red-refinements starting from T_0, there exists a unique sequence of triangulations T_0, T_1, ..., T_n with max_{K∈T_i} gen(K) = i, T_n = T, and where T_{i+1} is created from T_i by refining some K ∈ T_i with gen(K) = i.

Let T_{−1} = ∅ and V_{T_{−1}} = ∅. The following properties are valid:

• ∑_{i=0}^{n} #T_i ≤ (4/3) #T,

• V_{T_{i−1}} ⊂ V_{T_i}, and so V_T = ∪_{i=0}^{n} (V_{T_i} \ V_{T_{i−1}}) with empty mutual intersections,

• a v ∈ V_{T_i} \ V_{T_{i−1}} is not a vertex of T_{i−1}, and so it is a regular vertex of T_i.

As we noted before, in case we refine some admissible triangulation only locally, the admissibility of the triangulation might be destroyed. If a given triangulation is not admissible (see Fig. 2.12), it can be completed to an admissible triangulation using the following algorithm. This algorithm makes use of the sequence T_0, ..., T_n corresponding to the input triangulation T as defined in Proposition 2.3.3.

Algorithm 2.2 Make Admissible

MakeAdmissible[T] → T^a

1. T^a_0 := T_0

2. Let i := 0

3. Define T^a_{i+1} as the union of T_{i+1} and, when i ≤ n − 2, the collection of children of those K ∈ T^a_i that have a vertex neighbor in T_i with grandchildren in T_{i+2}

4. If i ≤ n − 2 then i := i + 1 and go to step (3). Otherwise set T^a := T^a_n and stop.


Figure 2.12: An example of a non-admissible triangulation (left) and its smallest admissible refinement (right).

Proposition 2.3.4 [36] The triangulation T^a constructed by MakeAdmissible is an admissible refinement of T. Moreover,

#T^a ≲ #T. (2.3.13)

2.3.4 Adaptive Algorithms of Optimal Complexity

For (T_i)_i being the family of all triangulations that can be constructed from a fixed initial triangulation T_0 of Ω by red-refinements, let (W_{T_i})_i, (Y_{T_i})_i be the families of finite element approximation spaces defined by

W_{T_i} := {v ∈ C(Ω) ∩ ∏_{K∈T_i} P_1(K) : v|_{∂Ω} = 0}, (2.3.14)

Y_{T_i} := ∏_{K∈T_i} P_0(K). (2.3.15)

The spaces Y_{T_i} will be used for approximating the right-hand side of our equation. The reason for their introduction will become clear in the next section, where we discuss a posteriori error estimators. Defining for any S ⊂ H^1_0(Ω) and w ∈ H^1_0(Ω)

e(w, S)_{H^1_0(Ω)} := inf_{w̄∈S} ‖w − w̄‖_{H^1_0(Ω)}, (2.3.16)

the error of the best approximation from the best space W_{T_i} with underlying triangulation consisting of n + #T_0 triangles is given by

σ_n^{H^1_0(Ω)}(w) = inf_{T_i ∈ (T_j)_j, #T_i − #T_0 ≤ n} e(w, W_{T_i})_{H^1_0(Ω)}. (2.3.17)

We now define classes for which the errors of the best approximations from the best spaces decay with certain rates.

Definition 2.3.5 For any s > 0, let A^s(H^1_0(Ω)) = A^s(H^1_0(Ω), (W_{T_i})_i) be the class of functions w ∈ H^1_0(Ω) such that for some M > 0

σ_n^{H^1_0(Ω)}(w) ≤ M n^{−s}, n = 1, 2, .... (2.3.18)


We equip A^s(H^1_0(Ω)) with a semi-norm defined as the smallest M for which (2.3.18) holds:

|w|_{A^s(H^1_0(Ω))} := sup_{n≥1} n^s σ_n^{H^1_0(Ω)}(w), (2.3.19)

and with norm

‖w‖_{A^s(H^1_0(Ω))} := |w|_{A^s(H^1_0(Ω))} + ‖w‖_{H^1_0(Ω)}. (2.3.20)

For approximating the right-hand side of our boundary value problem, in a similar way we also define the class A^s(H^{−1}(Ω)) = A^s(H^{−1}(Ω), (Y_{T_i})_i) using the approximation spaces (Y_{T_i})_i. The goal of adaptive approximation is to realize the rates of the best approximation from the best spaces. We note here that a rate s > 1/2 only occurs when the approximated function is exceptionally close to a finite element function, and such a rate cannot be enforced by imposing whatever smoothness condition. From now on we consider the case s ≤ 1/2. Below we sketch why we expect, similar to Section 2.3.1, better results with adaptive approximation than with non-adaptive approximation based on uniform refinements. In order to do so, we temporarily consider the family of triangulations generated by newest vertex bisection. We expect, however, similar results to be valid with red refinements.

Let (T_i)_i be the family of all triangulations created by newest vertex bisections from an initial triangulation T_0, and let

X_{T_i} := C(Ω) ∩ ∏_{K∈T_i} P_1(K). (2.3.21)

For simplicity, to illustrate the ideas, we dropped here the Dirichlet boundary conditions. Recently, in [6] it was shown that the approximation classes A^s(H^1(Ω), (X_{T_i})_i) are (nearly) characterised by membership of certain Besov spaces:

Theorem 2.3.6 (i) If u ∈ B^{2s+1}_τ(L_τ(Ω)) with 0 ≤ s ≤ 1/2 and 1/τ < s + 1/2, then u ∈ A^s(H^1(Ω), (X_{T_i})_i) and

‖u‖_{A^s(H^1(Ω),(X_{T_i})_i)} ≲ ‖u‖_{B^{2s+1}_τ(L_τ(Ω))}. (2.3.22)

(ii) If u ∈ A^s(H^1(Ω), (X_{T_i})_i), then u ∈ B^{2s+1}_τ(L_τ(Ω)), where 1/τ = s + 1/2.

On the other hand, as is well known, in two space dimensions and with piecewise linear approximation, a rate s ≤ 1/2 is attained by approximations on uniform triangulations if and only if the approximated function belongs to H^{2s+1}(Ω). The above result shows that a rate s ≤ 1/2 is achieved by 'the best approximation from the best spaces' for any function from the much larger spaces B^{2s+1}_τ(L_τ(Ω)) for any τ > (s + 1/2)^{−1}. Although these results were stated for the triangulations created by newest vertex bisections, as we said, we expect the same results to be valid for the family of triangulations constructed by red refinements. The difference between Sobolev spaces and Besov spaces is well illustrated by the DeVore diagram (Fig. 2.13). In this diagram, the point (1/τ, r) represents the Besov space B^r_τ(L_τ(Ω)). Since B^r_2(L_2(Ω)) = H^r(Ω), the Sobolev space H^r(Ω) corresponds to the point (1/2, r). From this diagram we can observe that the larger s is, the larger the space B^{2s+1}_τ(L_τ(Ω)) with τ = (s + 1/2)^{−1} is compared to H^{2s+1}(Ω).


To solve the Poisson problem (2.2.1) numerically, we need to be able to approximate the right-hand side within any given tolerance. We assume the availability of the following routine RHS.

Algorithm 2.3 RHS

RHS[T, f, ε] → [T̄, f_{T̄}]
/* Input of the routine:

• triangulation T

• f ∈ H^{−1}(Ω)

• ε > 0

Output: admissible triangulation T̄ ⊃ T and an f_{T̄} ∈ Y_{T̄} such that ‖f − f_{T̄}‖_{H^{−1}(Ω)} ≤ ε */

As in [36], we will call the pair (f, RHS) s-optimal if there exists an absolute constant c_f > 0 such that for any ε > 0 and triangulation T, the call of the algorithm RHS performs such that both

#T̄ and the number of flops required by the call are ≲ #T + c_f^{1/s} ε^{−1/s}. (2.3.23)

Apparently, for a given s, such a pair can only exist if f ∈ A^s(H^{−1}(Ω), (Y_{T_i})_i). A realisation of the routine RHS depends on the right-hand side at hand. As was shown in [36], for f ∈ L_2(Ω), RHS can be based on uniform refinements.

We will say that an adaptive method is optimal if it produces a sequence of approximations with respect to triangulations with cardinalities that are at most a constant multiple larger than the optimal ones. More precisely, we define

Definition 2.3.7 Suppose that the solution of the Poisson problem (2.2.1) is such that u ∈ A^s(H^1_0(Ω)) for some s > 0, and that there exists a routine RHS such that the pair (f, RHS) is s-optimal. Then we shall say that an adaptive method is optimal if for any ε > 0 it produces a triangulation T_j from the family (T_i)_i with #T_j ≲ ε^{−1/s}(|u|^{1/s}_{A^s(H^1_0(Ω))} + c_f^{1/s}) and an approximation w_{T_j} ∈ W_{T_j} with ‖u − w_{T_j}‖_{H^1_0(Ω)} ≤ ε, taking only ≲ ε^{−1/s}(|u|^{1/s}_{A^s(H^1_0(Ω))} + c_f^{1/s}) flops.

Note that in view of the assumption u ∈ A^s, a cardinality of T_j equal to ε^{−1/s}|u|^{1/s}_{A^s(H^1_0(Ω))} is generally the best we can expect.

The questions that should be answered, being the topics of our discussions in the next sections, are

• How to construct an a posteriori error estimator that provides reliable error control during the adaptive approximation process?

• How to construct a convergent adaptive strategy?


Figure 2.13: DeVore diagram. On the vertical line are the spaces H^r(Ω), and on the skew line are the spaces B^r_τ(L_τ(Ω)) with r = 2s + 1 and τ = (1/2 + s)^{−1}, i.e., r = 2/τ. In the diagram, the point (1/τ, r) represents B^r_τ(L_τ); in particular, (1/2, 1) corresponds to H^1(Ω).


• How to ensure that our adaptive approximation is optimal in the above sense?

• How to design a computer implementation of the adaptive algorithm that uses CPU time, memory, etc. in an optimal way?

2.4 A Posteriori Error Analysis

In this section we will apply the elegant technique developed by Verfürth ([41]) to construct and analyze an a posteriori error estimator for the case of nonconforming triangulations. Moreover, with this technique we will be able to prove the so-called saturation property which, as we will show in the next section, is essential for realizing a convergent adaptive method.

Thinking of an adaptive algorithm for solving the boundary value problem (2.2.1), we draw the attention of the reader to the algorithm ADAPTIVE. Note that this algorithm for approximating a given function is based on full knowledge of the error. When solving a boundary value problem, however, we do not know the solution u, and thus not the error. Given the Galerkin approximation u_T, from (2.2.1) we can derive the following equation for the error e := u − u_T:

find e ∈ H^1_0(Ω) such that
a(e, w) = f(w) − a(u_T, w) for all w ∈ H^1_0(Ω). (2.4.1)

This problem is as expensive to solve as the original one. However, we have the discrete approximation u_T and the right-hand side f ∈ H^{−1}(Ω) in our hands. This information can be used to create an a posteriori error estimator E(T, f, u_T).

The important requirements on the estimator E are:

• Reliability. There exists a constant C_U > 0 such that

‖u − u_T‖_E ≤ C_U E. (2.4.2)

• Efficiency. The estimator is called efficient if there exists a constant C_L > 0 such that

C_L E ≤ ‖u − u_T‖_E. (2.4.3)

Remark 2.4.1 A quantitative characterization of the estimator is given by the global effectivity index defined by

I_effectivity = E / ‖u − u_T‖_E. (2.4.4)

In view of the characterization

‖u − u_T‖_E = sup_{0≠w∈H^1_0(Ω)} a(u − u_T, w) / ‖w‖_E, (2.4.5)


and assuming that f ∈ L_2(Ω), using integration by parts, we write

a(u − u_T, w) = ∫_Ω f w − ∫_Ω ∇u_T · ∇w
             = ∑_{K∈T} ∫_K (f + ∆u_T) w − ∑_{ℓ∈E_T} ∫_ℓ [∂u_T/∂ν_ℓ] w   (2.4.6)
             = ∑_{K∈T} ∫_K R_K w − ∑_{ℓ∈E_T} ∫_ℓ R_ℓ w,

where R_K is the element residual

R_K = R_K(f) := (f + ∆u_T)|_K, (2.4.7)

and R_ℓ denotes the edge residual

R_ℓ = R_ℓ(u_T) := [∂u_T/∂ν_ℓ]. (2.4.8)

Here, for ℓ ∈ E_T, ν_ℓ is a unit vector orthogonal to ℓ, and with ℓ separating two elements K_1, K_2 ∈ T,

[∂w_T/∂ν_ℓ](x) := lim_{t→0+} (∂w_T/∂ν_ℓ)(x + tν_ℓ) − lim_{t→0+} (∂w_T/∂ν_ℓ)(x − tν_ℓ) for all x ∈ ℓ (2.4.9)

is the jump of the normal derivative over ℓ. Note here that, since we consider the case that u_T is piecewise linear, ∆u_T = 0, so that indeed R_K only depends on f.

Having these quantities in our hands, we are ready to set an a posteriori error estimator

E(T, f, w_T) = [∑_{K∈T} diam(K)^2 ‖R_K‖^2_{L_2(K)} + ∑_{ℓ∈E_T} diam(ℓ) ‖R_ℓ‖^2_{L_2(ℓ)}]^{1/2}.
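Assembling this estimator is a straightforward weighted sum; the sketch below (our illustration, assuming the element and edge residual norms have already been computed) mirrors the formula term by term.

```python
from math import sqrt

# E(T, f, w_T) from precomputed data:
#   element_data: list of (diam(K), ||R_K||_{L2(K)})
#   edge_data:    list of (diam(l), ||R_l||_{L2(l)})
def estimator(element_data, edge_data):
    total = sum(d ** 2 * r ** 2 for d, r in element_data)   # element terms
    total += sum(d * r ** 2 for d, r in edge_data)          # edge-jump terms
    return sqrt(total)

E = estimator([(0.1, 2.0), (0.2, 1.0)], [(0.1, 3.0)])
```

In an adaptive loop, the individual terms diam(K)^2 ‖R_K‖^2 and diam(ℓ) ‖R_ℓ‖^2 also serve as local error indicators used to select which elements to refine.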

Before stating the theorem about our error estimator, we need to recall one more result that will be invoked in the proof.

In [36] a piecewise linear (quasi-)interpolant on admissible triangulations was constructed for H^1_0(Ω) functions. Its properties are stated in the following theorem.

Theorem 2.4.2 Let T be an admissible triangulation of Ω. Then there exists a linear mapping I_T : H^1_0(Ω) → W_T such that

‖w − I_T w‖_{H^s(K)} ≲ diam(K)^{1−s} ‖w‖_{H^1(Ω_K)} (w ∈ H^1_0(Ω), s = 0, 1, K ∈ T), (2.4.10)

‖w − I_T w‖_{L_2(ℓ)} ≲ diam(ℓ)^{1/2} ‖w‖_{H^1(Ω_{K_ℓ})} (w ∈ H^1_0(Ω), ℓ ∈ E_T), (2.4.11)

where for each K, Ω_K is the union of a uniformly bounded collection of triangles in T, among them K, which is simply connected and satisfies diam(Ω_K) ≲ diam(K), and where for ℓ ∈ E_T, K_ℓ ∈ T is a triangle that has edge ℓ.


Proof. The statement follows from the proof of Theorem 6.1 in [36]. ♦

Theorem 2.4.3 Let T be an admissible triangulation and T̂ a refinement of T. Assume that f ∈ Y_T, let u = A^{−1}f, and let u_T = A_T^{−1}f, u_{T̂} = A_{T̂}^{−1}f be its Galerkin approximations on the triangulations T and T̂, respectively.

i) If V_{T̂} contains a point inside K ∈ T, then

‖u_{T̂} − u_T‖^2_{H^1(K)} ≳ diam(K)^2 ‖R_K‖^2_{L_2(K)}. (2.4.12)

ii) With K_1, K_2 ∈ T such that ℓ := K_1 ∩ K_2 ∈ E_T, if V_{T̂} contains a point in the interior of both K_1 and K_2, then

‖u_{T̂} − u_T‖^2_{H^1(K_1∪K_2)} ≳ diam(ℓ) ‖R_ℓ‖^2_{L_2(ℓ)}. (2.4.13)

iii) If for some F ⊂ T and G ⊂ E_T the refinement T̂ satisfies condition i) or ii) for all K ∈ F and ℓ ∈ G, respectively, then

‖u_{T̂} − u_T‖^2_E ≥ C_L^2 [∑_{K∈F} diam(K)^2 ‖R_K‖^2_{L_2(K)} + ∑_{ℓ∈G} diam(ℓ) ‖R_ℓ‖^2_{L_2(ℓ)}] (2.4.14)

for some absolute constant C_L > 0.

iv) ‖u − u_T‖_E ≥ C_L E(T, f, u_T). (2.4.15)

v) There exists an absolute constant C_U such that for any f ∈ L_2(Ω), so not only for f ∈ Y_T,

‖u − u_T‖_E ≤ C_U E(T, f, u_T). (2.4.16)

[Figure: triangles K_1 and K_2 sharing the edge ℓ, in T and in its refinement T̂.]

Proof. (i) To prove the local lower bounds i)-ii), we notice that, due to the conditions

on T̂, for all K ∈ T, ℓ ∈ E_T there exist functions ψ_K, ψ_ℓ, which are continuous piecewise linear with respect to T̂, with the properties

supp(ψ_K) ⊂ K, supp(ψ_ℓ) ⊂ Ω_ℓ := ∪{K′ ∈ T : ℓ ⊂ K′},

∫_K ψ_K ≂ meas(K), ∫_ℓ ψ_ℓ ≂ meas(ℓ), (2.4.17)

0 ≤ ψ_K ≤ 1 on K, 0 ≤ ψ_ℓ ≤ 1 on ℓ, (2.4.18)


‖ψ_K‖_{L_2(K)} ≲ meas(K)^{1/2}, ‖ψ_ℓ‖_{L_2(Ω_ℓ)} ≲ diam(ℓ). (2.4.19)

For any w ∈ W_{T̂}, we have

a(u_{T̂} − u_T, w) = f(w) − a(u_T, w) = ∑_{K∈T} ∫_K R_K w − ∑_{ℓ∈E_T} ∫_ℓ R_ℓ w (2.4.20)

by (2.4.6). Now, for K as in (i), by choosing w := R_K ψ_K, which, thanks to R_K being a constant, is in W_{T̂}, and using supp ψ_K ⊂ K and the Bernstein inequality (2.2.7), we have

‖R_K‖^2_{L_2(K)} ≂ ∫_K R_K w = a(u_{T̂} − u_T, w)
  ≲ ‖u_{T̂} − u_T‖_{H^1(K)} diam(K)^{−1} ‖w‖_{L_2(K)}
  ≲ ‖u_{T̂} − u_T‖_{H^1(K)} diam(K)^{−1} ‖R_K‖_{L_2(K)}. (2.4.21)

(ii) In a similar way, for ℓ as in (ii), let w := R_ℓ ψ_ℓ; then using supp ψ_ℓ ⊂ Ω_ℓ, the Bernstein inequality, (i) and (2.4.19), we find

∫_ℓ R_ℓ w = −a(u_{T̂} − u_T, w) + ∑_{K⊂Ω_ℓ} ∫_K R_K w
  ≲ ∑_{K⊂Ω_ℓ} [‖u_{T̂} − u_T‖_{H^1(K)} diam(K)^{−1} ‖w‖_{L_2(K)} + ‖R_K‖_{L_2(K)} ‖w‖_{L_2(K)}]
  ≲ ∑_{K⊂Ω_ℓ} ‖u_{T̂} − u_T‖_{H^1(K)} diam(K)^{−1} ‖w‖_{L_2(K)}
  ≲ ∑_{K⊂Ω_ℓ} ‖u_{T̂} − u_T‖_{H^1(K)} diam(ℓ)^{−1/2} ‖R_ℓ‖_{L_2(ℓ)}. (2.4.22)

Noting that ∫_ℓ R_ℓ w ≂ ‖R_ℓ‖^2_{L_2(ℓ)}, the statement follows.

iii) Follows from i) and ii).

iv) Let T̂ be a refinement of T that satisfies conditions i) and ii) for all K ∈ T and all ℓ ∈ E_T, respectively. The statement follows from iii) and the Pythagorean identity

‖u − u_T‖^2_E = ‖u − u_{T̂}‖^2_E + ‖u_{T̂} − u_T‖^2_E. (2.4.23)

v) Using the Galerkin orthogonality

a(u − u_T, w_T) = 0 for all w_T ∈ W_T, (2.4.24)


equation (2.4.6), the Cauchy-Schwarz inequality and Theorem 2.4.2, for arbitrary w ∈ H^1_0(Ω) we find

a(u − u_T, w) = a(u − u_T, w − I_T w)
  = ∑_{K∈T} ∫_K R_K (w − I_T w) − ∑_{ℓ∈E_T} ∫_ℓ R_ℓ (w − I_T w)
  ≤ ∑_{K∈T} ‖R_K‖_{L_2(K)} ‖w − I_T w‖_{L_2(K)} + ∑_{ℓ∈E_T} ‖R_ℓ‖_{L_2(ℓ)} ‖w − I_T w‖_{L_2(ℓ)}
  ≲ ∑_{K∈T} ‖R_K‖_{L_2(K)} diam(K) ‖w‖_{H^1(Ω_K)} + ∑_{ℓ∈E_T} ‖R_ℓ‖_{L_2(ℓ)} diam(ℓ)^{1/2} ‖w‖_{H^1(Ω_ℓ)}
  ≲ [∑_{K∈T} diam(K)^2 ‖R_K‖^2_{L_2(K)} + ∑_{ℓ∈E_T} diam(ℓ) ‖R_ℓ‖^2_{L_2(ℓ)}]^{1/2} ‖w‖_E. (2.4.25)

Finally, invoking

‖u − u_T‖_E = sup_{0≠w∈H^1_0(Ω)} a(u − u_T, w) / ‖w‖_E, (2.4.26)

the upper bound follows. ♦

2.5 Basic Principle of Adaptive Error Reduction

In this section we describe the basic principle of adaptive error reduction. Pioneering work in this direction can be found in ([19], [29]).

Assume that T̂ is a refinement of the triangulation T, and that u_T = A_T^{−1}f ∈ W_T and u_{T̂} = A_{T̂}^{−1}f ∈ W_{T̂} ⊃ W_T are the Galerkin solutions. Since

a(u − u_{T̂}, w_{T̂}) = 0 for all w_{T̂} ∈ W_{T̂}, (2.5.1)

the error u − u_{T̂} of the Galerkin approximation is orthogonal to the approximation space W_{T̂} with respect to the energy inner product. As a consequence, we have

‖u − u_{T̂}‖^2_E = ‖u − u_T‖^2_E − ‖u_{T̂} − u_T‖^2_E. (2.5.2)

Suppose now that there exists a $\sigma > 0$ such that
\[
\|u_{\tilde{\mathcal T}} - u_{\mathcal T}\|_E \geq \sigma \|u - u_{\mathcal T}\|_E; \tag{2.5.3}
\]
then we immediately conclude that
\[
\|u - u_{\tilde{\mathcal T}}\|_E \leq \sqrt{1 - \sigma^2}\, \|u - u_{\mathcal T}\|_E. \tag{2.5.4}
\]
On the other hand, (2.5.4) also implies (2.5.3), proving the following lemma.


Lemma 2.5.1 Let $u = A^{-1} f$, $u_{\tilde{\mathcal T}} := A_{\tilde{\mathcal T}}^{-1} f$ and $u_{\mathcal T} := A_{\mathcal T}^{-1} f$, with $W_{\mathcal T} \subset W_{\tilde{\mathcal T}}$. Then for $\sigma \in (0,1)$ we have
\[
\|u - u_{\tilde{\mathcal T}}\|_E \leq \sqrt{1 - \sigma^2}\, \|u - u_{\mathcal T}\|_E \;\Longleftrightarrow\; \|u_{\tilde{\mathcal T}} - u_{\mathcal T}\|_E \geq \sigma \|u - u_{\mathcal T}\|_E. \tag{2.5.5}
\]

The second inequality in (2.5.5) is known as a saturation property. According to the above lemma, if we want to construct a convergent algorithm, then in order to guarantee a fixed error reduction we should be able to bound $\|u_{\tilde{\mathcal T}} - u_{\mathcal T}\|_E$ from below by a constant multiple of $\|u - u_{\mathcal T}\|_E$. The aforementioned observations lead us to the following task:

Given a triangulation $\mathcal T$ and a $\sigma \in (0,1)$, find a triangulation $\tilde{\mathcal T}$ that is a refinement of $\mathcal T$ such that
\[
\|u_{\tilde{\mathcal T}} - u_{\mathcal T}\|_E \geq \sigma \|u - u_{\mathcal T}\|_E. \tag{2.5.6}
\]

Based on our observations from Theorem 2.4.3, we define a routine that constructs a local refinement of a given triangulation such that, as we will see later, an error reduction in our discrete approximation is ensured. In view of the assumptions made in Theorem 2.4.3, for the moment we simply assume that $f \in Y_{\mathcal T}$. We return to this point in the next section.

Algorithm 2.4 REFINE

REFINE[$\mathcal T$, $f$, $w_{\mathcal T}$, $\theta$] → $\tilde{\mathcal T}$
/* Input of the routine:
• an admissible triangulation $\mathcal T$
• $f \in Y_{\mathcal T}$
• $w_{\mathcal T} \in W_{\mathcal T}$
• $\theta \in (0,1)$

Select in $O(\sharp\mathcal T)$ operations $F \subset \mathcal T$ and $G \subset E_{\mathcal T}$ such that
\[
\sum_{K \in F} \operatorname{diam}(K)^2 \|R_K\|_{L_2(K)}^2 + \sum_{\ell \in G} \operatorname{diam}(\ell) \|R_\ell\|_{L_2(\ell)}^2 \geq \theta^2 E(\mathcal T, f, w_{\mathcal T})^2. \tag{2.5.7}
\]
Construct a refinement $\tilde{\mathcal T}$ of $\mathcal T$ such that for all $K \in F$ or $\ell \in G$ the conditions (i) or (ii) from Theorem 2.4.3 are satisfied. */
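The selection of $F$ and $G$ in (2.5.7) is a bulk-chasing criterion: collect the largest local error indicators until a $\theta^2$-fraction of the total squared estimator is captured. A minimal sketch of such a selection in Python (the flat dictionary of indicators and the exact sorting are illustrative assumptions; the thesis routine achieves $O(\sharp\mathcal T)$ by approximate sorting):

```python
def dorfler_mark(indicators, theta):
    """Select a set of keys whose squared indicators sum to at least
    theta^2 times the total squared estimator (bulk chasing).

    indicators: dict mapping element/edge ids to local error indicators,
    e.g. diam(K)*||R_K||_{L2(K)} or diam(l)^{1/2}*||R_l||_{L2(l)}.
    """
    total = sum(eta ** 2 for eta in indicators.values())
    marked, acc = [], 0.0
    # Exact sorting costs O(N log N); binary binning would give O(N).
    for key, eta in sorted(indicators.items(), key=lambda kv: -kv[1]):
        if acc >= theta ** 2 * total:
            break
        marked.append(key)
        acc += eta ** 2
    return marked

# Example with hypothetical indicator values:
etas = {"K1": 3.0, "K2": 1.0, "K3": 0.5, "e1": 2.0}
print(dorfler_mark(etas, 0.5))  # prints ['K1']
```

Picking the largest indicators first keeps the marked set as small as possible, which is what drives the cardinality bounds later in the chapter.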

Theorem 2.5.2 (Basic principle of adaptive error reduction) Let $\mathcal T$ be an admissible triangulation, and let $f \in Y_{\mathcal T}$, $u_{\mathcal T} := A_{\mathcal T}^{-1} f$. Taking $\tilde{\mathcal T}$ = REFINE[$\mathcal T$, $f$, $u_{\mathcal T}$, $\theta$], for $u_{\tilde{\mathcal T}} := A_{\tilde{\mathcal T}}^{-1} f$ the following error reduction holds:
\[
\|u - u_{\tilde{\mathcal T}}\|_E \leq \Big(1 - \Big(\frac{C_L \theta}{C_U}\Big)^2\Big)^{1/2} \|u - u_{\mathcal T}\|_E. \tag{2.5.8}
\]

Proof. Thanks to the properties of REFINE, invoking (iii) and (v) of Theorem 2.4.3, we find that the saturation property holds:
\[
\begin{split}
\|u_{\tilde{\mathcal T}} - u_{\mathcal T}\|_E^2 &\geq C_L^2 \Big( \sum_{K \in F} \operatorname{diam}(K)^2 \|R_K\|_{L_2(K)}^2 + \sum_{\ell \in G} \operatorname{diam}(\ell) \|R_\ell\|_{L_2(\ell)}^2 \Big) \\
&\geq C_L^2 \theta^2 E(\mathcal T, f, u_{\mathcal T})^2 \\
&\geq \frac{C_L^2 \theta^2}{C_U^2} \|u - u_{\mathcal T}\|_E^2. \tag{2.5.9}
\end{split}
\]

Now, invoking Lemma 2.5.1 with $\sigma = \frac{C_L \theta}{C_U}$, the statement follows. ♦

At this point, we have enough knowledge to formulate our first adaptive finite element algorithm, which applies in the idealized situation that $f \in Y_{\mathcal T}$ and where we moreover solve the Galerkin problems exactly.

Algorithm 2.5 An Idealized Adaptive Finite Element Algorithm

AFEMprelim[$\mathcal T$, $f$, $u_{\mathcal T}$, $\varepsilon_0$, $\varepsilon$] → [$\mathcal T$, $u_{\mathcal T}$]
/* Input parameters of the algorithm are: an admissible triangulation $\mathcal T$, $f \in Y_{\mathcal T}$, and $u_{\mathcal T} := A_{\mathcal T}^{-1} f$ such that $\|u - u_{\mathcal T}\|_{H_0^1(\Omega)} \leq \varepsilon_0$. */
Select $0 < \theta < 1$
$N := \min\{i \in \mathbb N : (1 - (\frac{C_L \theta}{C_U})^2)^{i/2} \varepsilon_0 \leq \varepsilon\}$
for $i = 1, \dots, N$ do
  $\mathcal T$ := REFINE[$\mathcal T$, $f$, $u_{\mathcal T}$, $\theta$]
  $u_{\mathcal T} := A_{\mathcal T}^{-1} f$
endfor

Obviously, due to Theorem 2.5.2, the adaptive algorithm AFEMprelim is convergent, and the following theorem holds.

Theorem 2.5.3 Algorithm AFEMprelim[$\mathcal T$, $f$, $u_{\mathcal T}$, $\varepsilon_0$, $\varepsilon$] → [$\mathcal T$, $u_{\mathcal T}$] terminates with
\[
\|u - u_{\mathcal T}\|_E \leq \varepsilon. \tag{2.5.10}
\]

2.6 First Practical Convergent Adaptive FE Algorithm

In Theorem 2.5.2, the adaptive error reduction was based on the assumption that $f \in Y_{\mathcal T}$ and on the availability of the exact Galerkin solution $u_{\mathcal T} := A_{\mathcal T}^{-1} f$. Of course, in practice, we will approximate $f$ by some $f_{\mathcal T} \in Y_{\mathcal T}$ using the RHS routine introduced in Section 2.3.4, and $A_{\mathcal T}^{-1} f_{\mathcal T}$ will be approximated by some $w_{\mathcal T} \in W_{\mathcal T}$ using an inexact iterative solver. This implies that the evaluation of the a posteriori error estimator, and so the mesh refinement, will be performed using these approximations. In the following, we study how this influences the convergence of the adaptive approximations. We start with introducing an iterative solver.

To solve the Galerkin problems approximately, we assume the availability of the following routine.

Algorithm 2.6 GALSOLVE

GALSOLVE[$\mathcal T$, $f_{\mathcal T}$, $u_{\mathcal T}^{(0)}$, $\varepsilon$] → $u_{\mathcal T}^\varepsilon$
/* Input of the routine:
• an admissible triangulation $\mathcal T$
• $f_{\mathcal T} \in X_{\mathcal T}'$
• $u_{\mathcal T}^{(0)} \in W_{\mathcal T}$
• $\varepsilon > 0$

Output: $u_{\mathcal T}^\varepsilon \in W_{\mathcal T}$ such that
\[
\|u_{\mathcal T} - u_{\mathcal T}^\varepsilon\|_E \leq \varepsilon, \tag{2.6.1}
\]
where $u_{\mathcal T} = A_{\mathcal T}^{-1} f_{\mathcal T}$.

A call of the algorithm should take not more than $O(\max\{1, \log(\varepsilon^{-1}\|u_{\mathcal T} - u_{\mathcal T}^{(0)}\|_E)\}\, \sharp\mathcal T)$ flops. */

To implement the routine GALSOLVE, one can, for example, apply Conjugate Gradients to the matrix-vector representation of $A_{\mathcal T} u_{\mathcal T} = f_{\mathcal T}$ with respect to the wavelet basis $\Psi_{\mathcal T}$ described in the next section, which is well-conditioned uniformly in $\sharp\mathcal T$.
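A textbook Conjugate Gradient iteration of the kind GALSOLVE could build on can be sketched as follows; the dense-matrix representation and the residual-based stopping rule are illustrative simplifications (for a uniformly well-conditioned system, the residual norm controls the energy-norm error up to a fixed factor):

```python
def cg(A, b, x0, tol, max_iter=200):
    """Plain Conjugate Gradients for a symmetric positive definite A,
    stopping once the Euclidean residual norm drops below tol."""
    n = len(b)
    matvec = lambda v: [sum(A[i][j] * v[j] for j in range(n)) for i in range(n)]
    x = list(x0)
    r = [b[i] - Av_i for i, Av_i in enumerate(matvec(x))]  # initial residual
    p = list(r)
    rs = sum(ri * ri for ri in r)
    for _ in range(max_iter):
        if rs ** 0.5 <= tol:
            break
        Ap = matvec(p)
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
    return x

# 2x2 SPD example: the exact solution of [[4,1],[1,3]] x = [1,2] is [1/11, 7/11].
x = cg([[4.0, 1.0], [1.0, 3.0]], [1.0, 2.0], [0.0, 0.0], 1e-10)
```

Because CG converges linearly at a rate determined only by the condition number, the $O(\max\{1,\log(\varepsilon^{-1}\cdot)\}\sharp\mathcal T)$ flop bound of GALSOLVE is plausible once each matrix-vector product costs $O(\sharp\mathcal T)$.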

Next we investigate the stability of the error estimator.

Lemma 2.6.1 For any admissible triangulation $\mathcal T$, $f \in L_2(\Omega)$, $u_{\mathcal T}, \tilde u_{\mathcal T} \in W_{\mathcal T}$, it holds that
\[
|E(\mathcal T, f, u_{\mathcal T}) - E(\mathcal T, f, \tilde u_{\mathcal T})| \leq C_S \|u_{\mathcal T} - \tilde u_{\mathcal T}\|_E, \tag{2.6.2}
\]
for an absolute constant $C_S > 0$.


Proof. Using the triangle inequality twice, first for vectors and then for functions, we find
\[
\begin{split}
&|E(\mathcal T, f, u_{\mathcal T}) - E(\mathcal T, f, \tilde u_{\mathcal T})| \\
&\quad= \Big| \Big( \sum_{K \in \mathcal T} \operatorname{diam}(K)^2 \|R_K(f)\|_{L_2(K)}^2 + \sum_{\ell \in E_{\mathcal T}} \operatorname{diam}(\ell) \|R_\ell(u_{\mathcal T})\|_{L_2(\ell)}^2 \Big)^{1/2} \\
&\qquad- \Big( \sum_{K \in \mathcal T} \operatorname{diam}(K)^2 \|R_K(f)\|_{L_2(K)}^2 + \sum_{\ell \in E_{\mathcal T}} \operatorname{diam}(\ell) \|R_\ell(\tilde u_{\mathcal T})\|_{L_2(\ell)}^2 \Big)^{1/2} \Big| \\
&\quad\leq \Big( \sum_{\ell \in E_{\mathcal T}} \operatorname{diam}(\ell) \big( \|R_\ell(u_{\mathcal T})\|_{L_2(\ell)} - \|R_\ell(\tilde u_{\mathcal T})\|_{L_2(\ell)} \big)^2 \Big)^{1/2} \\
&\quad\leq \Big( \sum_{\ell \in E_{\mathcal T}} \operatorname{diam}(\ell) \|R_\ell(u_{\mathcal T} - \tilde u_{\mathcal T})\|_{L_2(\ell)}^2 \Big)^{1/2} \leq C_S \|u_{\mathcal T} - \tilde u_{\mathcal T}\|_E, \tag{2.6.3}
\end{split}
\]
where in the last line we have used that, by a homogeneity argument, for any edge $\ell$ of a triangle $K \in \mathcal T$ and any $w_{\mathcal T} \in P_1(K)$, it holds that
\[
\|R_\ell(w_{\mathcal T})\|_{L_2(\ell)} \lesssim \operatorname{diam}(\ell)^{-1/2} \|w_{\mathcal T}\|_{H^1(K)}. \tag{2.6.4}
\]
♦

In the following theorem, we give an error estimate for the realistic situation that on $\mathcal T$, $\tilde{\mathcal T}$, the approximate right-hand sides $f_{\mathcal T}$, $f_{\tilde{\mathcal T}}$ are used, and that in the refinement routine REFINE, instead of the exact Galerkin solution, an inexact Galerkin solution is used that is produced by an application of GALSOLVE.

Theorem 2.6.2 Let $\mathcal T$ be an admissible triangulation, $f \in H^{-1}(\Omega)$, $u = A^{-1} f$, $f_{\mathcal T} \in Y_{\mathcal T}$, $u_{\mathcal T} = A_{\mathcal T}^{-1} f_{\mathcal T}$, $w_{\mathcal T} \in W_{\mathcal T}$, and let $\tilde{\mathcal T}$ = REFINE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\theta$], $f_{\tilde{\mathcal T}} \in H^{-1}(\Omega)$, $u_{\tilde{\mathcal T}} = A_{\tilde{\mathcal T}}^{-1} f_{\tilde{\mathcal T}}$. Then it holds that
\[
\|u - u_{\tilde{\mathcal T}}\|_E \leq \Big(1 - \tfrac12 \Big(\frac{C_L \theta}{C_U}\Big)^2\Big)^{1/2} \|u - u_{\mathcal T}\|_E + 2 C_S C_L \|u_{\mathcal T} - w_{\mathcal T}\|_E + 3 \|f - f_{\mathcal T}\|_{E'} + \|f - f_{\tilde{\mathcal T}}\|_{E'}. \tag{2.6.5}
\]

Proof.
\[
\begin{split}
\|u - u_{\tilde{\mathcal T}}\|_E &= \|A^{-1} f - A_{\tilde{\mathcal T}}^{-1} f_{\tilde{\mathcal T}}\|_E \\
&\leq \|A^{-1} f - A^{-1} f_{\mathcal T}\|_E + \|A^{-1} f_{\mathcal T} - A_{\tilde{\mathcal T}}^{-1} f_{\mathcal T}\|_E + \|A_{\tilde{\mathcal T}}^{-1} f_{\mathcal T} - A_{\tilde{\mathcal T}}^{-1} f_{\tilde{\mathcal T}}\|_E \\
&\leq \|f - f_{\mathcal T}\|_{E'} + \|A^{-1} f_{\mathcal T} - A_{\tilde{\mathcal T}}^{-1} f_{\mathcal T}\|_E + \|f_{\mathcal T} - f_{\tilde{\mathcal T}}\|_{E'} \\
&\leq 2\|f - f_{\mathcal T}\|_{E'} + \|A^{-1} f_{\mathcal T} - A_{\tilde{\mathcal T}}^{-1} f_{\mathcal T}\|_E + \|f - f_{\tilde{\mathcal T}}\|_{E'}. \tag{2.6.6}
\end{split}
\]

Now, to get an estimate for $\|A^{-1} f_{\mathcal T} - A_{\tilde{\mathcal T}}^{-1} f_{\mathcal T}\|_E$, we will apply a similar analysis as in the proof of Theorem 2.5.2. With $F \subset \mathcal T$, $G \subset E_{\mathcal T}$ as determined in the call of REFINE, using Theorem 2.4.3(iii), Lemma 2.6.1, a property of REFINE, Lemma 2.6.1 again and Theorem 2.4.3(v), we find
\[
\begin{split}
\|A_{\tilde{\mathcal T}}^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E &\geq C_L \Big( \sum_{K \in F} \operatorname{diam}(K)^2 \|R_K(f_{\mathcal T})\|_{L_2(K)}^2 + \sum_{\ell \in G} \operatorname{diam}(\ell) \|R_\ell(u_{\mathcal T})\|_{L_2(\ell)}^2 \Big)^{1/2} \\
&\geq C_L \Big( \Big( \sum_{K \in F} \operatorname{diam}(K)^2 \|R_K(f_{\mathcal T})\|_{L_2(K)}^2 + \sum_{\ell \in G} \operatorname{diam}(\ell) \|R_\ell(w_{\mathcal T})\|_{L_2(\ell)}^2 \Big)^{1/2} - C_S \|u_{\mathcal T} - w_{\mathcal T}\|_E \Big) \\
&\geq C_L \big( \theta E(\mathcal T, f_{\mathcal T}, w_{\mathcal T}) - C_S \|u_{\mathcal T} - w_{\mathcal T}\|_E \big) \\
&\geq C_L \big( \theta E(\mathcal T, f_{\mathcal T}, u_{\mathcal T}) - 2 C_S \|u_{\mathcal T} - w_{\mathcal T}\|_E \big) \\
&\geq C_L \Big( \frac{\theta}{C_U} \|A^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E - 2 C_S \|u_{\mathcal T} - w_{\mathcal T}\|_E \Big). \tag{2.6.7}
\end{split}
\]

Due to the Galerkin orthogonality and the previous estimate, we have
\[
\begin{split}
\|A^{-1} f_{\mathcal T} - A_{\tilde{\mathcal T}}^{-1} f_{\mathcal T}\|_E^2 &= \|A^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E^2 - \|A_{\tilde{\mathcal T}}^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E^2 \\
&\leq \|A^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E^2 - C_L^2 \Big( \frac{\theta}{C_U} \|A^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E - 2 C_S \|u_{\mathcal T} - w_{\mathcal T}\|_E \Big)^2 \\
&\leq \|A^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E^2 - C_L^2 \Big( \frac12 \Big(\frac{\theta}{C_U}\Big)^2 \|A^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E^2 - 4 C_S^2 \|u_{\mathcal T} - w_{\mathcal T}\|_E^2 \Big) \\
&= \Big(1 - \frac12 \Big(\frac{\theta C_L}{C_U}\Big)^2\Big) \|A^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E^2 + 4 C_S^2 C_L^2 \|u_{\mathcal T} - w_{\mathcal T}\|_E^2 \\
&\leq \Big( \Big(1 - \frac12 \Big(\frac{\theta C_L}{C_U}\Big)^2\Big)^{1/2} \|A^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E + 2 C_S C_L \|u_{\mathcal T} - w_{\mathcal T}\|_E \Big)^2 \\
&\leq \Big( \Big(1 - \frac12 \Big(\frac{\theta C_L}{C_U}\Big)^2\Big)^{1/2} \big( \|u - u_{\mathcal T}\|_E + \|f - f_{\mathcal T}\|_{E'} \big) + 2 C_S C_L \|u_{\mathcal T} - w_{\mathcal T}\|_E \Big)^2, \tag{2.6.8}
\end{split}
\]
where in the third line we have used that for any scalars $a$, $b$, $(a - b)^2 \geq \frac12 a^2 - b^2$. Combination of the last result with the first bound obtained in this proof completes the task. ♦

Corollary 2.6.3 (General adaptive error reduction estimate) For any $\mu \in ((1 - \frac12 (\frac{C_L \theta}{C_U})^2)^{1/2}, 1)$ there exists a sufficiently small constant $\delta > 0$ such that if, for $f \in H^{-1}(\Omega)$, an admissible triangulation $\mathcal T$, $f_{\mathcal T} \in Y_{\mathcal T}$, $w_{\mathcal T} \in W_{\mathcal T}$, $\tilde{\mathcal T}$ = REFINE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\theta$], $f_{\tilde{\mathcal T}} \in H^{-1}(\Omega)$, $w_{\tilde{\mathcal T}} \in W_{\tilde{\mathcal T}}$ and $\varepsilon > 0$, with $u = A^{-1} f$, $u_{\mathcal T} = A_{\mathcal T}^{-1} f_{\mathcal T}$, $u_{\tilde{\mathcal T}} = A_{\tilde{\mathcal T}}^{-1} f_{\tilde{\mathcal T}}$, we have that $\|u - w_{\mathcal T}\|_E \leq \varepsilon$ and
\[
\|u_{\mathcal T} - w_{\mathcal T}\|_E + \|f - f_{\mathcal T}\|_{E'} + \|u_{\tilde{\mathcal T}} - w_{\tilde{\mathcal T}}\|_E + \|f - f_{\tilde{\mathcal T}}\|_{E'} \leq 2(1 + \mu)\delta\varepsilon, \tag{2.6.9}
\]
then
\[
\|u - w_{\tilde{\mathcal T}}\|_E \leq \mu\varepsilon. \tag{2.6.10}
\]


Proof. Using the triangle inequality, the conditions of the corollary and Theorem 2.6.2, we easily find
\[
\begin{split}
\|u - w_{\tilde{\mathcal T}}\|_E &\leq \|u - u_{\tilde{\mathcal T}}\|_E + \|u_{\tilde{\mathcal T}} - w_{\tilde{\mathcal T}}\|_E \\
&\leq \Big(1 - \tfrac12 \Big(\frac{C_L \theta}{C_U}\Big)^2\Big)^{1/2} \|u - u_{\mathcal T}\|_E + 2 C_S C_L \|u_{\mathcal T} - w_{\mathcal T}\|_E \\
&\qquad+ 3\|f - f_{\mathcal T}\|_{E'} + \|f - f_{\tilde{\mathcal T}}\|_{E'} + \|u_{\tilde{\mathcal T}} - w_{\tilde{\mathcal T}}\|_E \\
&\leq \mu \|u - w_{\mathcal T}\|_E + \Big( \Big(1 - \tfrac12 \Big(\frac{C_L \theta}{C_U}\Big)^2\Big)^{1/2} - \mu \Big) \|u - w_{\mathcal T}\|_E \\
&\qquad+ \max\{2 C_S C_L, 3\} \big( \|u_{\mathcal T} - w_{\mathcal T}\|_E + \|f - f_{\mathcal T}\|_{E'} + \|f - f_{\tilde{\mathcal T}}\|_{E'} + \|u_{\tilde{\mathcal T}} - w_{\tilde{\mathcal T}}\|_E \big) \\
&\leq \mu\varepsilon, \tag{2.6.11}
\end{split}
\]
where we have chosen $\delta$ to satisfy
\[
0 < \delta < \frac{\mu - \big(1 - \tfrac12 \big(\frac{C_L \theta}{C_U}\big)^2\big)^{1/2}}{2(1 + \mu)\max\{2 C_S C_L, 3\}}. \tag{2.6.12}
\]
♦

Now we are ready to present our practical adaptive algorithm and to state a theoremabout its convergence.

Algorithm 2.7 First Practical Convergent Adaptive FE Algorithm

AFEM1[$\mathcal T$, $f$, $w_{\mathcal T}$, $\varepsilon_0$, $\varepsilon$] → [$\mathcal T$, $w_{\mathcal T}$]
/* Input parameters of the algorithm are: $f \in H^{-1}(\Omega)$, an admissible triangulation $\mathcal T$ and $w_{\mathcal T} \in W_{\mathcal T}$ such that, with $u = A^{-1} f$, $\|u - w_{\mathcal T}\|_{H_0^1(\Omega)} \leq \varepsilon_0$. The parameter $\delta < 1/3$ is chosen such that it corresponds to a $\mu < 1$ as in Corollary 2.6.3. */
$\varepsilon_1 := \frac{\varepsilon_0}{1 - 3\delta}$
[$\mathcal T$, $f_{\mathcal T}$] := RHS[$\mathcal T$, $f$, $\delta\varepsilon_1$]
$w_{\mathcal T}$ := GALSOLVE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\delta\varepsilon_1$]
$N := \min\{j \in \mathbb N : \mu^j \varepsilon_1 \leq \varepsilon\}$
for $k = 1, \dots, N$ do
  $\mathcal T$ := REFINE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\theta$]
  [$\mathcal T$, $f_{\mathcal T}$] := RHS[$\mathcal T$, $f$, $\delta\mu^k\varepsilon_1$]
  $w_{\mathcal T}$ := GALSOLVE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\delta\mu^k\varepsilon_1$]
endfor
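The control flow of AFEM1 can be sketched as a driver loop. Here rhs, galsolve and refine are placeholder callbacks standing in for the routines RHS, GALSOLVE and REFINE, and every concrete value is an illustrative assumption:

```python
import math

def afem1(T, f, w, eps0, eps, mu, delta, theta, rhs, galsolve, refine):
    """Driver loop of AFEM1: tolerances shrink geometrically like mu^k.
    rhs/galsolve/refine are problem-specific callbacks (assumptions)."""
    eps1 = eps0 / (1 - 3 * delta)
    T, fT = rhs(T, f, delta * eps1)
    w = galsolve(T, fT, w, delta * eps1)
    # N := min{j : mu^j * eps1 <= eps}
    N = max(0, math.ceil(math.log(eps / eps1, mu))) if eps < eps1 else 0
    for k in range(1, N + 1):
        T = refine(T, fT, w, theta)
        T, fT = rhs(T, f, delta * mu ** k * eps1)
        w = galsolve(T, fT, w, delta * mu ** k * eps1)
    return T, w
```

The fixed iteration count $N$ is computed up front from $\mu$, which is exactly the feature the next section criticizes: $\mu$ depends on the generally unknown constants $C_U$, $C_L$.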

Theorem 2.6.4 Algorithm AFEM1[$\mathcal T$, $f$, $w_{\mathcal T}$, $\varepsilon_0$, $\varepsilon$] → [$\mathcal T$, $w_{\mathcal T}$] terminates with
\[
\|u - w_{\mathcal T}\|_E \leq \varepsilon. \tag{2.6.13}
\]

Proof. By induction on $k$, we will show that before the $k$th call of REFINE
\[
\|u - w_{\mathcal T}\|_E \leq \varepsilon_1 \mu^{k-1}, \tag{2.6.14}
\]


meaning that after the $k$th inner loop $\|u - w_{\mathcal T}\|_E \leq \varepsilon_1 \mu^k$, which, by definition of $N$, proves the theorem. For $k = 1$, after the call [$\mathcal T$, $f_{\mathcal T}$] := RHS[$\mathcal T$, $f$, $\delta\varepsilon_1$] it holds that
\[
\|f - f_{\mathcal T}\|_{E'} \leq \delta\varepsilon_1. \tag{2.6.15}
\]
Further, we note that for the input $w_{\mathcal T}$ we have $\|u - w_{\mathcal T}\|_E \leq (1 - 3\delta)\varepsilon_1$. Using the triangle inequality and the fact that $A_{\mathcal T}^{-1} f_{\mathcal T}$ is the best approximation to $A^{-1} f_{\mathcal T}$ from $W_{\mathcal T}$ with respect to $\|\cdot\|_E$, we find
\[
\begin{split}
\|u - A_{\mathcal T}^{-1} f_{\mathcal T}\|_E &\leq \|u - A^{-1} f_{\mathcal T}\|_E + \|A^{-1} f_{\mathcal T} - A_{\mathcal T}^{-1} f_{\mathcal T}\|_E \\
&\leq \|u - A^{-1} f_{\mathcal T}\|_E + \|A^{-1} f_{\mathcal T} - w_{\mathcal T}\|_E \\
&\leq 2\|u - A^{-1} f_{\mathcal T}\|_E + \|u - w_{\mathcal T}\|_E \\
&\leq 2\|f - f_{\mathcal T}\|_{E'} + \|u - w_{\mathcal T}\|_E \leq 2\delta\varepsilon_1 + (1 - 3\delta)\varepsilon_1 \leq (1 - \delta)\varepsilon_1. \tag{2.6.16}
\end{split}
\]

After the call $w_{\mathcal T}$ := GALSOLVE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\delta\varepsilon_1$] we have
\[
\|A_{\mathcal T}^{-1} f_{\mathcal T} - w_{\mathcal T}\|_E \leq \delta\varepsilon_1. \tag{2.6.17}
\]
Together with the previous estimate this gives
\[
\|u - w_{\mathcal T}\|_E \leq \|u - A_{\mathcal T}^{-1} f_{\mathcal T}\|_E + \|A_{\mathcal T}^{-1} f_{\mathcal T} - w_{\mathcal T}\|_E \leq \varepsilon_1, \tag{2.6.18}
\]

i.e., (2.6.14) is valid for $k = 1$.

Let us now assume that (2.6.14) is valid for some $k$. By the last calls of RHS and GALSOLVE, for the current $\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, i.e., just before the $k$th call of REFINE, we have
\[
\|f - f_{\mathcal T}\|_{E'} \leq \delta\varepsilon_1\mu^{k-1}, \qquad \|A_{\mathcal T}^{-1} f_{\mathcal T} - w_{\mathcal T}\|_E \leq \delta\varepsilon_1\mu^{k-1}. \tag{2.6.19}
\]
The subsequent calls $\tilde{\mathcal T}$ := REFINE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\theta$], [$\tilde{\mathcal T}$, $f_{\tilde{\mathcal T}}$] := RHS[$\tilde{\mathcal T}$, $f$, $\mu^k\delta\varepsilon_1$], $w_{\tilde{\mathcal T}}$ := GALSOLVE[$\tilde{\mathcal T}$, $f_{\tilde{\mathcal T}}$, $w_{\tilde{\mathcal T}}$, $\mu^k\delta\varepsilon_1$] result in
\[
\|f - f_{\tilde{\mathcal T}}\|_{E'} \leq \delta\varepsilon_1\mu^k, \qquad \|A_{\tilde{\mathcal T}}^{-1} f_{\tilde{\mathcal T}} - w_{\tilde{\mathcal T}}\|_E \leq \delta\varepsilon_1\mu^k. \tag{2.6.20}
\]
Therefore, we obtain
\[
\|f - f_{\mathcal T}\|_{E'} + \|A_{\mathcal T}^{-1} f_{\mathcal T} - w_{\mathcal T}\|_E + \|f - f_{\tilde{\mathcal T}}\|_{E'} + \|A_{\tilde{\mathcal T}}^{-1} f_{\tilde{\mathcal T}} - w_{\tilde{\mathcal T}}\|_E \leq 2(1 + \mu)\delta\varepsilon_1\mu^{k-1}. \tag{2.6.21}
\]
Using the induction hypothesis $\|u - w_{\mathcal T}\|_E \leq \varepsilon_1\mu^{k-1}$, Corollary 2.6.3 shows that
\[
\|u - w_{\tilde{\mathcal T}}\|_E \leq \varepsilon_1\mu^k, \tag{2.6.22}
\]
with which (2.6.14) is proven. ♦


2.7 Second Practical Adaptive FE Algorithm

In the previous section we analyzed a practical convergent adaptive FE algorithm. Although this algorithm is quite general, it has one drawback: the tolerances in RHS and GALSOLVE are chosen to be $\eqsim \mu^k$. Looking at Theorem 2.6.2, we see that the choice $\mu < (1 - \frac12(\frac{C_L\theta}{C_U})^2)^{1/2}$ can result in unnecessarily small tolerances and expensive iterations. On the other hand, the choice $\mu > (1 - \frac12(\frac{C_L\theta}{C_U})^2)^{1/2}$ can cause a reduced convergence rate. A proper choice for $\mu$ is not easy to make, because the constants $C_U$ and $C_L$ are generally not known and have to be estimated.

In this section we present an alternative algorithm, in which the mentioned tolerances are chosen as a fixed multiple of an a posteriori estimator of the error in the previous iterand. In this algorithm the a posteriori error estimator is enhanced with extra terms that incorporate the errors in the approximate right-hand sides and the inexact Galerkin solutions, as analyzed in the following proposition.

Proposition 2.7.1 i) Let $C_O := 1 + C_U C_S$. For $f \in H^{-1}(\Omega)$, any admissible triangulation $\mathcal T$, $w_{\mathcal T} \in W_{\mathcal T}$, $f_{\mathcal T} \in L_2(\Omega)$, with $u = A^{-1} f$, $u_{\mathcal T} := A_{\mathcal T}^{-1} f_{\mathcal T}$, we have
\[
\|u - w_{\mathcal T}\|_E \leq C_U E(\mathcal T, f_{\mathcal T}, w_{\mathcal T}) + \|f - f_{\mathcal T}\|_{E'} + C_O \|u_{\mathcal T} - w_{\mathcal T}\|_E. \tag{2.7.1}
\]
ii) With $f_{\mathcal T} \in Y_{\mathcal T}$, and $\xi_{\mathcal T}$ being an upper bound for $\|f - f_{\mathcal T}\|_{E'} + C_O \|u_{\mathcal T} - w_{\mathcal T}\|_E$, we have that
\[
C_U E(\mathcal T, f_{\mathcal T}, w_{\mathcal T}) + \xi_{\mathcal T} \eqsim \|u - w_{\mathcal T}\|_E + \xi_{\mathcal T}. \tag{2.7.2}
\]
iii) For any $\mu \in ((1 - \frac12(\frac{C_L\theta}{C_U})^2)^{1/2}, 1)$, there exist $C_\mu > 0$ small enough and $C_E > 0$ such that if $\tilde{\mathcal T}$ = REFINE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\theta$], $w_{\tilde{\mathcal T}} \in W_{\tilde{\mathcal T}}$, $f_{\tilde{\mathcal T}} \in H^{-1}(\Omega)$, $u_{\tilde{\mathcal T}} := A_{\tilde{\mathcal T}}^{-1} f_{\tilde{\mathcal T}}$, and
\[
\xi_{\tilde{\mathcal T}} \leq C_\mu (1 + C_O)\big( C_U E(\mathcal T, f_{\mathcal T}, w_{\mathcal T}) + \xi_{\mathcal T} \big), \tag{2.7.3}
\]
where $\xi_{\tilde{\mathcal T}}$ denotes an upper bound on $\|f - f_{\tilde{\mathcal T}}\|_{E'} + C_O \|u_{\tilde{\mathcal T}} - w_{\tilde{\mathcal T}}\|_E$, then
\[
\|u - w_{\tilde{\mathcal T}}\|_E + C_E \xi_{\tilde{\mathcal T}} \leq \mu \big( \|u - w_{\mathcal T}\|_E + C_E \xi_{\mathcal T} \big). \tag{2.7.4}
\]

Proof. i) Using the triangle inequality, Theorem 2.4.3(v) and Lemma 2.6.1, we easily find
\[
\begin{split}
\|u - w_{\mathcal T}\|_E &\leq \|u - A^{-1} f_{\mathcal T}\|_E + \|A^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E + \|u_{\mathcal T} - w_{\mathcal T}\|_E \\
&\leq \|f - f_{\mathcal T}\|_{E'} + C_U E(\mathcal T, f_{\mathcal T}, u_{\mathcal T}) + \|u_{\mathcal T} - w_{\mathcal T}\|_E \\
&\leq \|f - f_{\mathcal T}\|_{E'} + C_U E(\mathcal T, f_{\mathcal T}, w_{\mathcal T}) + (1 + C_U C_S)\|u_{\mathcal T} - w_{\mathcal T}\|_E. \tag{2.7.5}
\end{split}
\]
ii) Using Lemma 2.6.1, Theorem 2.4.3(iv) and the triangle inequality, we have
\[
\begin{split}
E(\mathcal T, f_{\mathcal T}, w_{\mathcal T}) &\leq E(\mathcal T, f_{\mathcal T}, u_{\mathcal T}) + C_S \|u_{\mathcal T} - w_{\mathcal T}\|_E \\
&\leq \frac{1}{C_L} \|A^{-1} f_{\mathcal T} - u_{\mathcal T}\|_E + C_S \|u_{\mathcal T} - w_{\mathcal T}\|_E \\
&\leq \frac{1}{C_L} \big( \|u - w_{\mathcal T}\|_E + \|f - f_{\mathcal T}\|_{E'} \big) + \Big( C_S + \frac{1}{C_L} \Big) \|u_{\mathcal T} - w_{\mathcal T}\|_E \\
&\lesssim \|u - w_{\mathcal T}\|_E + \|f - f_{\mathcal T}\|_{E'} + \|u_{\mathcal T} - w_{\mathcal T}\|_E \\
&\lesssim \|u - w_{\mathcal T}\|_E + \xi_{\mathcal T}. \tag{2.7.6}
\end{split}
\]


The proof in the other direction follows from (2.7.1).

iii) Using Theorem 2.6.2, we find that with $C := \max\{\frac{1 + 2 C_L C_S}{C_O}, 3\}$ it holds that
\[
\|u - w_{\tilde{\mathcal T}}\|_E + C_E \xi_{\tilde{\mathcal T}} \leq \Big(1 - \tfrac12 \Big(\frac{C_L \theta}{C_U}\Big)^2\Big)^{1/2} \|u - w_{\mathcal T}\|_E + C(\xi_{\mathcal T} + \xi_{\tilde{\mathcal T}}) + C_E \xi_{\tilde{\mathcal T}}. \tag{2.7.7}
\]
Then, for a given $\mu \in ((1 - \frac12(\frac{C_L\theta}{C_U})^2)^{1/2}, 1)$, the error reduction (2.7.4) holds if
\[
(C + C_E)\xi_{\tilde{\mathcal T}} \leq \Big( \mu - \Big(1 - \tfrac12 \Big(\frac{C_L \theta}{C_U}\Big)^2\Big)^{1/2} \Big) \|u - w_{\mathcal T}\|_E + (\mu C_E - C)\xi_{\mathcal T}. \tag{2.7.8}
\]
Finally, selecting $C_E > C/\mu$, and taking $C_\mu$ small enough such that
\[
C_\mu (1 + C_O)\big( C_U E(\mathcal T, f_{\mathcal T}, w_{\mathcal T}) + \xi_{\mathcal T} \big) \leq \frac{\big( \mu - \big(1 - \tfrac12 \big(\frac{C_L \theta}{C_U}\big)^2\big)^{1/2} \big) \|u - w_{\mathcal T}\|_E + (\mu C_E - C)\xi_{\mathcal T}}{C + C_E}, \tag{2.7.9}
\]
the proof is complete. ♦

Now we are ready to present a convergent adaptive algorithm with a posteriori error control.

Algorithm 2.8 Second Practical Adaptive FE Algorithm

AFEM2[$\mathcal T$, $f$, $w_{\mathcal T}$, $\varepsilon_0$, $\varepsilon$] → [$\mathcal T$, $w_{\mathcal T}$]
/* Input parameters of the algorithm are: $f \in H^{-1}(\Omega)$, an admissible triangulation $\mathcal T$, $\varepsilon_0, \varepsilon > 0$, $w_{\mathcal T} \in W_{\mathcal T}$, such that, with $u := A^{-1} f$, $\max\{\varepsilon, \|u - w_{\mathcal T}\|_E\} \leq \varepsilon_0$. Let $C_\mu$ be a constant that corresponds to a $\mu \in ((1 - \frac12(\frac{C_L\theta}{C_U})^2)^{1/2}, 1)$ as in Proposition 2.7.1(iii). */
$EO := \varepsilon_0$
[$\mathcal T$, $f_{\mathcal T}$] := RHS[$\mathcal T$, $f$, $C_\mu EO$]
$w_{\mathcal T}$ := GALSOLVE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $C_\mu EO$]
while $EO := C_U E(\mathcal T, f_{\mathcal T}, w_{\mathcal T}) + (1 + C_O) C_\mu EO > \varepsilon$ do
  $\mathcal T$ := REFINE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\theta$]
  [$\mathcal T$, $f_{\mathcal T}$] := RHS[$\mathcal T$, $f$, $C_\mu EO$]
  $w_{\mathcal T}$ := GALSOLVE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $C_\mu EO$]
endwhile
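The loop of AFEM2 differs from AFEM1 only in how the tolerances are steered: each sweep recomputes the a posteriori bound $EO$ and passes $C_\mu \cdot EO$ to RHS and GALSOLVE, so no a priori knowledge of $\mu$ is needed. A sketch with placeholder callbacks (all names and constants below are assumptions):

```python
def afem2(T, f, w, eps0, eps, Cmu, CU, CO, theta, rhs, galsolve, refine, estimate):
    """Driver loop of AFEM2: the tolerance in each sweep is a fixed
    multiple Cmu of the current a posteriori error bound EO.
    estimate(T, fT, w) stands in for the estimator E(T, f_T, w_T)."""
    EO = eps0
    T, fT = rhs(T, f, Cmu * EO)
    w = galsolve(T, fT, w, Cmu * EO)
    while True:
        # EO bounds ||u - w_T||_E, including RHS/solver perturbations.
        EO = CU * estimate(T, fT, w) + (1 + CO) * Cmu * EO
        if EO <= eps:
            break
        T = refine(T, fT, w, theta)
        T, fT = rhs(T, f, Cmu * EO)
        w = galsolve(T, fT, w, Cmu * EO)
    return T, w
```

The while-condition in the pseudocode assigns and tests $EO$ in one expression; the sketch separates the two steps for clarity.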

The following theorem states results about the convergence and complexity of this algorithm. A similar complexity analysis could also be performed for the algorithm AFEM1.

Theorem 2.7.2 i) For some absolute constant $C > 0$, Algorithm AFEM2[$\mathcal T$, $f$, $w_{\mathcal T}$, $\varepsilon_0$, $\varepsilon$] → [$\mathcal T$, $w_{\mathcal T}$] terminates with
\[
\|u - w_{\mathcal T}\|_E \leq \varepsilon \tag{2.7.10}
\]
after at most $M$ iterations of the while-loop, with $M$ being the smallest integer with $C\mu^M \leq \varepsilon/\varepsilon_0$.

ii) The number of operations required by a call of AFEM2 is proportional to the sum of the cardinalities of all triangulations created by this algorithm.

Proof. i) Let us consider any newly computed $w_{\mathcal T}$ in the while-loop, let us denote by $\delta_1$, $\delta_2$ the tolerances that were used in the most recent calls of RHS and GALSOLVE, respectively, and let $\xi_{\mathcal T} := \delta_1 + C_O \delta_2$. Then $\xi_{\mathcal T}$ is equal to $(1 + C_O) C_\mu (C_U E(\mathcal T, f_{\mathcal T}, w_{\mathcal T}) + \xi_{\mathcal T})$, where in this expression $\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\xi_{\mathcal T}$ refer to the previous triangulation, right-hand side approximation, approximate solution and previous value of $\xi_{\mathcal T}$, respectively.

Since $\xi_{\mathcal T}$ thus satisfies (2.7.3), we conclude that each call of the triple REFINE, RHS, GALSOLVE reduces $\|u - w_{\mathcal T}\|_E + C_E \xi_{\mathcal T}$ by at least a factor $\mu$. Thanks to (2.7.2) and (2.7.1) it holds that
\[
EO \lesssim \|u - w_{\mathcal T}\|_E + C_E \xi_{\mathcal T} \tag{2.7.11}
\]
and
\[
\|u - w_{\mathcal T}\|_E \leq EO. \tag{2.7.12}
\]
The proof is completed by taking into account the upper bounds $\varepsilon_0$ and $(1 + C_O) C_\mu \varepsilon_0$ for the initial values of $\|u - w_{\mathcal T}\|_E$ and $\xi_{\mathcal T}$, respectively.

ii) We now analyse the complexity of the algorithm. Thanks to the properties of REFINE and RHS, in producing a triangulation $\mathcal T$ these routines require $\lesssim \sharp\mathcal T$ flops. Before any call $w_{\mathcal T}$ = GALSOLVE[$\mathcal T$, $f_{\mathcal T}$, $w_{\mathcal T}$, $\delta EO$] we have $\|f - f_{\mathcal T}\|_{E'} \leq \delta EO$ and $\|u - w_{\mathcal T}\|_E \leq EO$. Using the triangle inequality and the property of the Galerkin solution of being the best approximation with respect to $\|\cdot\|_E$, we find
\[
\begin{split}
\|u - A_{\mathcal T}^{-1} f_{\mathcal T}\|_E &\leq \|u - A^{-1} f_{\mathcal T}\|_E + \|A^{-1} f_{\mathcal T} - A_{\mathcal T}^{-1} f_{\mathcal T}\|_E \\
&\leq \|u - A^{-1} f_{\mathcal T}\|_E + \|A^{-1} f_{\mathcal T} - w_{\mathcal T}\|_E \\
&\leq 2\|u - A^{-1} f_{\mathcal T}\|_E + \|u - w_{\mathcal T}\|_E \\
&= 2\|f - f_{\mathcal T}\|_{E'} + \|u - w_{\mathcal T}\|_E \leq (2\delta + 1)EO, \tag{2.7.13}
\end{split}
\]
and so
\[
\|w_{\mathcal T} - A_{\mathcal T}^{-1} f_{\mathcal T}\|_E \leq \|u - w_{\mathcal T}\|_E + \|u - A_{\mathcal T}^{-1} f_{\mathcal T}\|_E \leq 2(\delta + 1)EO. \tag{2.7.14}
\]
Observing that $\frac{2(\delta + 1)EO}{\delta EO}$ is an absolute constant, we conclude that also a call of GALSOLVE requires $\lesssim \sharp\mathcal T$ flops. ♦

2.8 Optimal Adaptive Algorithms

In the previous section we were able to prove convergence of our practical adaptive algorithm AFEM2. However, this result does not guarantee optimality of the algorithm. Recently, in [6], Binev, Dahmen and DeVore proposed a modification of the method from [29] by adding a coarsening step, and proved that the resulting adaptive method achieves the optimal convergence rate. Later, in [36], Stevenson developed a similar coarsening routine that is based on the transformation to a wavelet basis. In the next section, we discuss the construction and properties of this wavelet basis.


2.8.1 Multiscale Decomposition of FEM Approximation Spaces

In this section we briefly discuss the wavelet bases mentioned before in connection with the GALSOLVE routine. Our exposition is based on the results from [36] and uses the definitions of vertex of a triangulation and regular vertex made in Section 2.3.3. For an admissible triangulation $\mathcal T$, let $\{\phi_v^{\mathcal T} : v \in V_{\mathcal T}\}$ denote the nodal basis for $W_{\mathcal T}$. In the following, we denote by $\mathcal T_i^*$ the triangulation constructed by recursively applying $i$ uniform red-refinements to $\mathcal T_0$. Recalling the properties of the finite element nodal basis functions, we easily check the following result.

Lemma 2.8.1 Let $v$ be a regular vertex of a triangulation $\mathcal T$. Then with $i = \operatorname{gen}(K)$ for some, and thus for any, $K \in \mathcal T$ that contains $v$, we have $\phi_v^{\mathcal T_i^*} \in W_{\mathcal T}$.

For a triangulation $\mathcal T$, recall the definition of $\mathcal T_0, \dots, \mathcal T_n = \mathcal T$. Using that any $v \in V_{\mathcal T_i} \setminus V_{\mathcal T_{i-1}}$ is a regular vertex of $\mathcal T_i$ (Proposition 2.3.3), one verifies the following lemma.

Lemma 2.8.2 For any triangulation $\mathcal T = \mathcal T_n$, $\bigcup_{i=0}^n \{\phi_v^{\mathcal T_i^*} : v \in V_{\mathcal T_i} \setminus V_{\mathcal T_{i-1}}\}$ is a basis for $W_{\mathcal T}$, called the hierarchical basis.

We will now turn the hierarchical basis into a true wavelet basis by correcting each basis function, except those on the coarsest level, by a linear combination of coarse-grid nodal basis functions.

Let us define
\[
V_* := \bigcup_{i \geq 0} V_{\mathcal T_i^*} \setminus V_{\mathcal T_{i-1}^*}. \tag{2.8.1}
\]
Let $v \in V_*$ and $i$ be such that $v \in V_{\mathcal T_i^*} \setminus V_{\mathcal T_{i-1}^*}$. When $i > 0$, there exist two triangles $K_1, K_2 \in \mathcal T_{i-1}^*$ such that $v$ is the midpoint of their common edge. Denoting by $v_1(v), \dots, v_4(v)$ the vertices of these $K_1$, $K_2$, with $v_2(v)$, $v_3(v)$ being the vertices of the edge containing $v$, we define
\[
\psi_v =
\begin{cases}
\phi_v^{\mathcal T_0^*}, & v \in V_{\mathcal T_0^*}, \\[1ex]
\phi_v^{\mathcal T_i^*} - \displaystyle\sum_{j=1}^4 \mu_{v,j}\, \phi_{v_j(v)}^{\mathcal T_{i-1}^*}, & \text{otherwise},
\end{cases} \tag{2.8.2}
\]
where $\mu_{v,2} = \mu_{v,3} = -3/16$, $\mu_{v,1} = \mu_{v,4} = 1/16$ and $\mu_{v,j} = 0$ if $v_j(v) \in \partial\Omega$ [36], [15]. Since, as one can verify, $\int_\Omega \psi_v = 0$ for all wavelets except those of the coarsest level, it is appropriate to call the new basis a wavelet basis. Due to the construction procedure used, it is also called a coarse-grid correction wavelet basis. The following result was shown in [36].

Theorem 2.8.3 With $\psi_v := \psi_v / \|\psi_v\|_E$,
\[
\{\psi_v : v \in V_*\} \tag{2.8.3}
\]
is a Riesz basis for $H_0^1(\Omega)$.


Figure 2.14: Wavelet ψv

Each $u \in H_0^1(\Omega)$ has a unique representation $u = \sum_{v \in V_*} c_v \psi_v$. With $\|u\|_\Psi := (\sum_{v \in V_*} c_v^2)^{1/2}$, we define $\lambda_\Psi, \Lambda_\Psi > 0$ to be the largest and smallest constants, respectively, such that
\[
\lambda_\Psi \|u\|_\Psi^2 \leq \|u\|_E^2 \leq \Lambda_\Psi \|u\|_\Psi^2 \qquad (u \in H_0^1(\Omega)). \tag{2.8.4}
\]
The condition number of the wavelet basis is then given by $\kappa_\Psi := \frac{\Lambda_\Psi}{\lambda_\Psi}$.

Let us now consider an admissible triangulation $\mathcal T$, with corresponding sequence $\mathcal T_0, \dots, \mathcal T_n = \mathcal T$. Because of the admissibility, for any $v \in V_{\mathcal T_i} \setminus V_{\mathcal T_{i-1}}$ ($i > 0$), the vertices $v_1(v), \dots, v_4(v)$, if not on $\partial\Omega$, are regular vertices of $\mathcal T_{i-1}$. Thanks to this property one infers the following.

Theorem 2.8.4 For an admissible triangulation $\mathcal T$, $\{\psi_v : v \in V_{\mathcal T}\}$ is a basis for $W_{\mathcal T}$. In particular, it is a uniform Riesz basis (with respect to the $H^1$-metric), i.e., the stability constants are independent of $\mathcal T$.

Given an admissible triangulation $\mathcal T = \mathcal T_n$, any $w_{\mathcal T} \in W_{\mathcal T}$ can be represented in the nodal basis as well as in the wavelet basis:
\[
w_{\mathcal T} = \sum_{v \in V_{\mathcal T}} w_{\mathcal T}(v)\, \phi_v^{\mathcal T} = \sum_{v \in V_{\mathcal T}} c_v \psi_v. \tag{2.8.5}
\]
Whenever a transformation from one basis to the other is needed, the following routines can be used.


Algorithm 2.9 Transformation from nodal basis to wavelet basis

NodaltoWavelet[$(w_{\mathcal T}(v))_{v \in V_{\mathcal T}}$] → [$(c_v)_{v \in V_{\mathcal T}}$]
/* Input: $(w_{\mathcal T}(v))_{v \in V_{\mathcal T}}$, the coefficients of the representation of $w_{\mathcal T} \in W_{\mathcal T}$ with respect to the nodal basis, where $w_{\mathcal T}(v) := 0$ for vertices $v$ outside $V_{\mathcal T}$.
Output: $(c_v)_{v \in V_{\mathcal T}}$, the coefficients of the representation of $w_{\mathcal T} \in W_{\mathcal T}$ with respect to the wavelet basis. */
for $v \in V_{\mathcal T}$ do $c_v := w_{\mathcal T}(v)$ endfor
for $i = n, \dots, 1$ do
  for $v \in V_{\mathcal T_i} \setminus V_{\mathcal T_{i-1}}$ do $c_v := c_v - (c_{v_2(v)} + c_{v_3(v)})/2$ endfor
  for $v \in V_{\mathcal T_i} \setminus V_{\mathcal T_{i-1}}$ do
    for $j = 1, \dots, 4$ do $c_{v_j(v)} := c_{v_j(v)} + c_v \mu_{v,j}$ endfor
  endfor
endfor

Algorithm 2.10 Transformation from wavelet basis to FE basis

WavelettoNodal[$(c_v)_{v \in V_{\mathcal T}}$] → [$(w_{\mathcal T}(v))_{v \in V_{\mathcal T}}$]
/* Input: $(c_v)_{v \in V_{\mathcal T}}$, the coefficients of the representation of $w_{\mathcal T} \in W_{\mathcal T}$ with respect to the wavelet basis, where $c_v := 0$ for vertices $v$ outside $V_{\mathcal T}$.
Output: $(w_{\mathcal T}(v))_{v \in V_{\mathcal T}}$, the coefficients of the representation of $w_{\mathcal T} \in W_{\mathcal T}$ with respect to the nodal basis. */
for $i = 1, \dots, n$ do
  for $v \in V_{\mathcal T_i} \setminus V_{\mathcal T_{i-1}}$ do
    for $j = 1, \dots, 4$ do $c_{v_j(v)} := c_{v_j(v)} - c_v \mu_{v,j}$ endfor
  endfor
  for $v \in V_{\mathcal T_i} \setminus V_{\mathcal T_{i-1}}$ do $c_v := c_v + (c_{v_2(v)} + c_{v_3(v)})/2$ endfor
endfor
for $v \in V_{\mathcal T}$ do $w_{\mathcal T}(v) := c_v$ endfor
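The two transformations can be exercised on a toy hierarchy. The sketch below encodes, for each vertex created on a level, its edge neighbours $v_2, v_3$ and its nonzero correction weights $\mu_{v,j}$; the vertex ids, the single-level hierarchy and the assumption that boundary weights are simply absent are all illustrative. Applying the forward and then the inverse transform restores the nodal values exactly:

```python
def nodal_to_wavelet(c, levels):
    """levels[i] = list of (v, (v2, v3), [(vj, mu), ...]) for the vertices
    created on level i+1; c maps vertex ids to coefficients (in place)."""
    for lev in reversed(levels):
        for v, (v2, v3), _ in lev:          # subtract the midpoint average
            c[v] -= (c[v2] + c[v3]) / 2
        for v, _, corrections in lev:        # apply coarse-grid corrections
            for vj, mu in corrections:
                c[vj] += c[v] * mu

def wavelet_to_nodal(c, levels):
    """Exact inverse of nodal_to_wavelet: undo the steps in reverse order."""
    for lev in levels:
        for v, _, corrections in lev:
            for vj, mu in corrections:
                c[vj] -= c[v] * mu
        for v, (v2, v3), _ in lev:
            c[v] += (c[v2] + c[v3]) / 2

# Toy hierarchy: coarse vertices a, b, d; one new midpoint m of edge (a, b),
# with interior weights mu_{m,2} = mu_{m,3} = -3/16 as in (2.8.2).
levels = [[("m", ("a", "b"), [("a", -3 / 16), ("b", -3 / 16)])]]
c = {"a": 1.0, "b": 2.0, "d": 0.5, "m": 1.75}
nodal_to_wavelet(c, levels)
wavelet_to_nodal(c, levels)   # round trip: c is restored exactly
```

Both passes touch each vertex a bounded number of times, which is the mechanism behind the $O(\sharp\mathcal T)$ bound of Proposition 2.8.5.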

The next statement about the complexity of the above algorithms is obvious.

Proposition 2.8.5 Both algorithms, NodaltoWavelet and WavelettoNodal, take $O(\sharp\mathcal T)$ operations.

2.8.2 Optimization of Adaptive Triangulations

In order to guarantee an optimal relation between the number of degrees of freedom and the error of approximation, we will add a coarsening routine to our adaptive algorithm. The aim is to undo refinements that turn out to contribute little to a better approximation, but that, due to their possibly large number, contribute significantly to the cardinality of the triangulation. Given a finite element approximation with respect to an admissible triangulation, we can compute its wavelet coefficients. Then, in order to find a more efficient representation, in view of the norm equivalence (2.8.4), one may think of ordering the wavelet coefficients by their size and then removing the smallest ones until the desired tolerance is met. Note, however, that the remaining function has to be represented as a finite element function. With such unconstrained coarsening, a small number of remaining wavelet coefficients does not mean that the corresponding function can be represented as a finite element function with respect to a triangulation that has only a few triangles. Keeping in mind that a triangulation can be represented as a tree, we will therefore search for a best tree approximation.

It is possible to equip $V_*$ with a tree structure as follows. The vertices from $V_{\mathcal T_0^*}$ will be the roots of this tree. For $v \in V_{\mathcal T_1^*} \setminus V_{\mathcal T_0^*}$, assuming that one of the vertices $v_1(v), \dots, v_4(v)$ is not on the boundary $\partial\Omega$, we pick one of them to be the parent of $v$. Now consider $i > 1$ and $v \in V_{\mathcal T_i^*} \setminus V_{\mathcal T_{i-1}^*}$. At least one of $v_1(v), \dots, v_4(v)$ is in $V_{\mathcal T_{i-1}^*} \setminus V_{\mathcal T_{i-2}^*}$, and we can pick one of them to be the parent of $v$. In the case of multiple choices, one can design a deterministic rule to make the selection, for example as follows. If one of $v_2(v)$ or $v_3(v)$ is in $V_{\mathcal T_{i-2}^*}$, then the other is in $V_{\mathcal T_{i-1}^*} \setminus V_{\mathcal T_{i-2}^*}$, and we select it to be the parent of $v$. Otherwise, if both $v_2(v), v_3(v) \in V_{\mathcal T_{i-1}^*} \setminus V_{\mathcal T_{i-2}^*}$, then one of the remaining $v_1(v)$, $v_4(v)$ is in $V_{\mathcal T_{i-2}^*}$, and we call it $v_1(v)$, whereas the other, thus called $v_4(v)$, is in $V_{\mathcal T_{i-1}^*} \setminus V_{\mathcal T_{i-2}^*}$. After numbering $v_1(v), v_2(v), v_3(v)$ in a clockwise direction, we select the first of $v_2(v), v_3(v), v_4(v) \in V_{\mathcal T_{i-1}^*} \setminus V_{\mathcal T_{i-2}^*}$ to be the parent of $v$. In view of the described construction, the number of children of any parent in this tree is uniformly bounded, only dependent on $\mathcal T_0$. We will call $\mathcal V \subset V_*$ a (vertex) subtree when it contains all roots of $V_*$, and when, for any $v \in \mathcal V$, all its ancestors and all its siblings, i.e., those $\tilde v \in V_*$ that have the same parent as $v$, are also in $\mathcal V$. For an admissible triangulation $\mathcal T$, $V_{\mathcal T}$ is nearly a subtree of $V_*$, since it contains the roots of $V_*$ as well as all ancestors of any $v \in V_{\mathcal T}$; there may, however, be $v \in V_{\mathcal T}$ with siblings outside $V_{\mathcal T}$.

Given a vertex subtree $\mathcal V$, we can construct a corresponding triangulation by the following routine.

Algorithm 2.11 ConstructTriangulation

ConstructTriangulation[$\mathcal V$] → [$\mathcal T$]
$\mathcal T_0 := \mathcal T_0$, $i := 1$
while $\mathcal V \cap (V_{\mathcal T_i^*} \setminus V_{\mathcal T_{i-1}^*}) \neq \emptyset$ do
  construct $\mathcal T_i$ from $\mathcal T_{i-1}$ by refining those $K \in \mathcal T_{i-1}$ that contain a $v \in \mathcal V \cap (V_{\mathcal T_i^*} \setminus V_{\mathcal T_{i-1}^*})$
  $i := i + 1$
endwhile

Theorem 2.8.6 Let, for a subtree $\mathcal V \subset V_*$, $\mathcal T$ be the triangulation constructed by a call of ConstructTriangulation[$\mathcal V$]. Then for the admissible triangulation $\mathcal T^a$ := MakeAdmissible[$\mathcal T$] it holds that
\[
\sharp\mathcal T^a \lesssim \sharp\mathcal V \quad \text{and} \quad \operatorname{span}\{\psi_v : v \in \mathcal V\} \subset W_{\mathcal T^a}. \tag{2.8.6}
\]
Proof. The first part of the statement follows by construction and the properties of MakeAdmissible. For the second part, $\mathcal V \subset V_{\mathcal T^a}$, and so $\operatorname{span}\{\psi_v : v \in \mathcal V\} \subset \operatorname{span}\{\psi_v : v \in V_{\mathcal T^a}\} = W_{\mathcal T^a}$. ♦


Let $w = \sum_{v \in V_*} c_v \psi_v$. Then for any subset $\mathcal V \subset V_*$, its best approximation with respect to $\|\cdot\|_\Psi$ from $\operatorname{span}\{\psi_v : v \in \mathcal V\}$ is $\sum_{v \in \mathcal V} c_v \psi_v$, and the squared error $E(\mathcal V)$ of this approximation with respect to $\|\cdot\|_\Psi$ is $E(\mathcal V) = \sum_{v \in V_* \setminus \mathcal V} |c_v|^2$. Defining for $v \in V_*$ the error functional
\[
e(v) := \sum_{\tilde v \text{ a descendant of } v} |c_{\tilde v}|^2,
\]
for any subtree $\mathcal V \subset V_*$ we have $E(\mathcal V) = \sum_{v \in L(\mathcal V)} e(v)$, with $L(\mathcal V)$ denoting the set of leaves of $\mathcal V$.

Further, we also define a modified error functional $\tilde e(v)$ for $v \in V_*$ as follows. For the roots $v \in V_{\mathcal T_0}$, $\tilde e(v) := e(v)$, and assuming that $\tilde e(v)$ has been defined, for all of its children $v_1, \dots, v_m$ we set
\[
\tilde e(v_j) := \frac{\sum_{i=1}^m e(v_i)}{e(v) + \tilde e(v)}\, \tilde e(v). \tag{2.8.7}
\]

To compute the error functional we define the following routines.

Algorithm 2.12 ComputeErrorFunctional

ComputeErrorFunctional[$\mathcal T$, $(c_v)_{v \in V_{\mathcal T}}$] → [$(e(v))_{v \in V_{\mathcal T}}$]
for all $v \in V_{\mathcal T_0}$ do
  Compute-e($v$)
endfor

Algorithm 2.13 Compute-e

Compute-e($v$)
if $v$ is a leaf of $V_{\mathcal T}$ then $e(v) := 0$
else $e(v) := \sum_{\tilde v \in V_{\mathcal T} :\, \tilde v \text{ is a child of } v} \big( c_{\tilde v}^2 + \text{Compute-e}(\tilde v) \big)$
endif
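The recursion of Compute-e can be sketched directly on a dictionary-based tree; the vertex ids and coefficient values below are illustrative assumptions:

```python
def compute_e(v, children, c, e):
    """e(v) = sum of squared wavelet coefficients over all strict
    descendants of v, computed bottom-up as in Compute-e
    (a leaf has no children, so its sum is empty and e(v) = 0)."""
    kids = children.get(v, [])
    e[v] = sum(c[w] ** 2 + compute_e(w, children, c, e) for w in kids)
    return e[v]

# Toy tree: root r with children x, y; x has one child z.
children = {"r": ["x", "y"], "x": ["z"]}
c = {"r": 5.0, "x": 2.0, "y": 3.0, "z": 1.0}
e = {}
compute_e("r", children, c, e)
# e["z"] = 0, e["x"] = 1, e["y"] = 0, e["r"] = 4 + 1 + 9 = 14
```

Each coefficient is touched once per ancestor-free recursion step, so a full sweep over the roots costs $O(\sharp V_{\mathcal T})$, consistent with Proposition 2.8.7.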

Proposition 2.8.7 The call of ComputeErrorFunctional[$\mathcal T$, $(c_v)_{v \in V_{\mathcal T}}$] requires $O(\sharp\mathcal T)$ operations.

Now we are ready to describe the ThresholdingSecondAlgorithm from [8] for determining a quasi-optimal subtree approximation.

Algorithm 2.14 ThresholdingSecondAlgorithm

ThresholdingSecondAlgorithm[$(\tilde e(v))_{v \in V_*}$, $\varepsilon$] → [$\mathcal V$]
$\mathcal V := V_{\mathcal T_0}$
while $E(\mathcal V) > \varepsilon^2$ do
  compute $\rho := \max_{v \in L(\mathcal V)} \tilde e(v)$
  for all $v \in L(\mathcal V)$ with $\tilde e(v) = \rho$, add all children of $v$ to $\mathcal V$
endwhile
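The greedy growing step can be sketched with a max-heap in place of an ordered list (so with exact sorting and its logarithmic factor); the toy tree and the choice $\tilde e = e$ are purely illustrative assumptions:

```python
import heapq

def thresholding(roots, children, c, e, e_mod, eps):
    """Grow a subtree V from the roots by repeatedly expanding the leaf of
    currently maximal modified error e_mod until E(V) <= eps^2.
    Since e(v) = sum over children w of (c_w^2 + e(w)), expanding a leaf v
    decreases E(V) = sum_{leaves} e(v) by sum of c_w^2 over the new children.
    """
    V = set(roots)
    E = sum(e[v] for v in roots)
    heap = [(-e_mod[v], v) for v in roots]   # max-heap via negated keys
    heapq.heapify(heap)
    while E > eps ** 2 and heap:
        _, v = heapq.heappop(heap)
        for w in children.get(v, []):        # siblings are added together
            V.add(w)
            E -= c[w] ** 2
            heapq.heappush(heap, (-e_mod[w], w))
    return V

# Toy tree as before; e_mod set equal to e purely for illustration.
children = {"r": ["x", "y"], "x": ["z"]}
c = {"r": 5.0, "x": 2.0, "y": 3.0, "z": 1.0}
e = {"r": 14.0, "x": 1.0, "y": 0.0, "z": 0.0}
V = thresholding(["r"], children, c, e, e, eps=2.0)
```

Expanding whole sibling groups keeps $\mathcal V$ a valid subtree in the sense defined above.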

A most efficient implementation of this algorithm is obtained by keeping the values $\tilde e(v)$ stored in an ordered list. Even then, with $\mathcal V$ being the subtree at termination, the operation count of this algorithm contains a term $O(\sharp\mathcal V \log(\sharp\mathcal V))$ due to the insertion of $\tilde e(v)$ for newly created leaves into this list. To keep the complexity of the algorithm of order $\sharp\mathcal V$, an approximate sorting based on binary binning can be used instead of the exact sorting. With this modification the following holds for the ThresholdingSecondAlgorithm.

Proposition 2.8.8 The subtree $\mathcal V$ yielded by the (modified) ThresholdingSecondAlgorithm satisfies $E(\mathcal V) \leq \varepsilon^2$. There exist absolute constants $t_1, T_2 > 0$, necessarily with $t_1 \leq 1 \leq T_2$, such that if $\tilde{\mathcal V}$ is a subtree with $E(\tilde{\mathcal V}) \leq t_1 \varepsilon^2$, then $\sharp\mathcal V \leq T_2\, \sharp\tilde{\mathcal V}$. The number of evaluations of $\tilde e$, and the number of additional arithmetic operations required by this algorithm, are $\lesssim \sharp\mathcal V + \max\{0, \log(\varepsilon^{-2}\|w\|_\Psi^2)\}\, \sharp V_{\mathcal T}$.

Now we are ready to present a coarsening (derefinement) algorithm.

Algorithm 2.15 DEREFINE

DEREFINE[$\mathcal T$, $w_{\mathcal T}$, $\varepsilon$] → [$\tilde{\mathcal T}$, $w_{\tilde{\mathcal T}}$]
/* $\mathcal T = \mathcal T_n$ is some admissible triangulation, $w_{\mathcal T} \in W_{\mathcal T}$ is given by its values $(w_{\mathcal T}(v))_{v \in V_{\mathcal T}}$, and $\varepsilon > 0$. The output $\tilde{\mathcal T}$ is an admissible triangulation, and $w_{\tilde{\mathcal T}} \in W_{\tilde{\mathcal T}}$ is given by its values $(w_{\tilde{\mathcal T}}(v))_{v \in V_{\tilde{\mathcal T}}}$. */
$[(c_v)_{v \in V_{\mathcal T}}]$ := NodaltoWavelet[$(w_{\mathcal T}(v))_{v \in V_{\mathcal T}}$]
$(e(v))_{v \in V_{\mathcal T}}$ := ComputeErrorFunctional[$\mathcal T$, $(c_v)_{v \in V_{\mathcal T}}$]
$\mathcal V$ := ThresholdingSecondAlgorithm[$(\tilde e(v))_{v \in V_{\mathcal T}}$, $\varepsilon$]
$\tilde{\mathcal T}$ := ConstructTriangulation[$\mathcal V$]
$\tilde{\mathcal T}$ := MakeAdmissible[$\tilde{\mathcal T}$]
$(w_{\tilde{\mathcal T}}(v))_{v \in V_{\tilde{\mathcal T}}}$ := WavelettoNodal[$(c_v)_{v \in \mathcal V}$]

Theorem 2.8.9 i) $[\tilde{\mathcal T}, w_{\tilde{\mathcal T}}]$ := DEREFINE[$\mathcal T$, $w_{\mathcal T}$, $\varepsilon$] satisfies $\|w_{\mathcal T} - w_{\tilde{\mathcal T}}\|_\Psi \leq \varepsilon$. There exists an absolute constant $D > 0$ such that for any triangulation $\hat{\mathcal T}$ for which there exists a $w_{\hat{\mathcal T}} \in W_{\hat{\mathcal T}}$ with $\|w_{\mathcal T} - w_{\hat{\mathcal T}}\|_\Psi \leq t_1^{1/2}\varepsilon$, we have that $\sharp\tilde{\mathcal T} \leq D\, \sharp\hat{\mathcal T}$.

ii) The call of this algorithm requires $\lesssim \sharp\mathcal T + \max\{0, \log(\varepsilon^{-1}\|w_{\mathcal T}\|_\Psi)\}$ arithmetic operations.

Proof. i) The first statement follows by construction. With $w_{\mathcal T} = \sum_{v \in V_*} c_v \psi_v$ written in the wavelet basis, let $\hat{\mathcal T}$ be a triangulation and $w_{\hat{\mathcal T}} \in W_{\hat{\mathcal T}}$ such that $\|w_{\mathcal T} - w_{\hat{\mathcal T}}\|_\Psi \leq t_1^{1/2}\varepsilon$, and let $\hat{\mathcal T}^a$ := MakeAdmissible[$\hat{\mathcal T}$], so that $w_{\hat{\mathcal T}} = \sum_{v \in V_{\hat{\mathcal T}^a}} \hat c_v \psi_v$ with $V_{\hat{\mathcal T}} \subset V_{\hat{\mathcal T}^a}$. Let $\tilde{\mathcal V}$ be the enlargement of $V_{\hat{\mathcal T}^a}$ obtained by adding all siblings of all $v \in V_{\hat{\mathcal T}^a}$ to this set. Then $\tilde{\mathcal V}$ is a subtree with $\sharp\tilde{\mathcal V} \lesssim \sharp V_{\hat{\mathcal T}^a} \lesssim \sharp\hat{\mathcal T}$. Since
\[
E(\tilde{\mathcal V}) = \sum_{v \in V_* \setminus \tilde{\mathcal V}} |c_v|^2 \leq \|w_{\mathcal T} - w_{\hat{\mathcal T}}\|_\Psi^2 \leq t_1 \varepsilon^2, \tag{2.8.8}
\]
applying Proposition 2.8.8, we get that the $\mathcal V$ constructed by the ThresholdingSecondAlgorithm satisfies $\sharp\mathcal V \leq T_2\, \sharp\tilde{\mathcal V}$. Finally, we use that for $\tilde{\mathcal T}$ := ConstructTriangulation[$\mathcal V$] and $\tilde{\mathcal T}^a$ := MakeAdmissible[$\tilde{\mathcal T}$] it holds that $\sharp\tilde{\mathcal T}^a \lesssim \sharp\tilde{\mathcal T} \lesssim \sharp\mathcal V$.

ii) Let us now analyze the computational complexity of the algorithm. The transformation to the wavelet basis and the computation of the error functionals cost ≲ ]T arithmetic operations. Thanks to ]V_T ≲ ]T, Proposition 2.8.8 shows that the ThresholdingSecondAlgorithm requires

≲ ]V + max{0, log(ε^{−2} ‖w_T‖²_Ψ ]V_T)} ≲ ]T + max{0, log(ε^{−2} ‖w_T‖²_Ψ ]T)}

arithmetic operations. Finally, the number of operations required to construct the triangulation from the vertex tree and to make this triangulation admissible is ≲ ]V ≲ ]T. ♦

The next corollary shows that, by allowing the error of approximation to increase by some constant factor, DEREFINE outputs a triangulation that has, up to a constant factor, the smallest cardinality one can generally expect for an approximation with this accuracy.

Corollary 2.8.10 Let γ > t_1^{−1/2}. Then for any ε > 0, u ∈ H¹₀(Ω), a triangulation T, and w_T ∈ W_T with ‖u − w_T‖_Ψ ≤ ε, for [T̃, w_T̃] := DEREFINE[T, w_T, γε] we have that ‖u − w_T̃‖_Ψ ≤ (1 + γ)ε and ]T̃ ≤ D]T̂ for any triangulation T̂ with inf_{w_T̂∈W_T̂} ‖u − w_T̂‖_Ψ ≤ (t_1^{1/2}γ − 1)ε.

Proof. The first statement follows from Theorem 2.8.9. Since

inf_{w_T̂∈W_T̂} ‖w_T − w_T̂‖_Ψ ≤ ‖u − w_T‖_Ψ + inf_{w_T̂∈W_T̂} ‖w_T̂ − u‖_Ψ ≤ ε + (t_1^{1/2}γ − 1)ε = t_1^{1/2}γε, (2.8.9)

using again the same theorem, the proof is completed. ♦

2.9 Optimal Adaptive Finite Element Algorithm

Finally, we can present the optimal algorithm AFEMOPT, which is obtained by adding a derefinement routine to AFEM1.


Algorithm 2.16 Optimal Adaptive FE Algorithm

AFEMOPT[T, f, w_T, ε_0, ε] → [T, w_T]
/* Input parameters of the algorithm are: f ∈ H^{−1}(Ω), an admissible triangulation T, w_T ∈ W_T, ε_0 such that ε_0 ≥ ‖u − w_T‖_E, and ε > 0. Parameters δ ∈ (0, 1/3), γ and M ∈ N are chosen such that δ corresponds to a µ < 1 as in Corollary 2.6.3, γ > t_1^{−1/2} with t_1 as in Proposition 2.8.8, and (1+γ) κ_Ψ^{1/2} µ^M / (1−3δ) < 1. */

N := min{ i ∈ N : (µ^M/(1−3δ))^i ((1+γ) κ_Ψ^{1/2})^{i−1} ≤ ε/ε_0 }
for i = 1, …, N do
    if i = 1 then
        ε_1 := ε_0/(1 − 3δ)
    else
        [T, w_T] := DEREFINE[T, w_T, γ λ_Ψ^{−1/2} µ^M ε_{i−1}]
        ε_i := ((1+γ) κ_Ψ^{1/2} µ^M / (1−3δ)) ε_{i−1}
    endif
    [T, f_T] := RHS[T, f, δε_i]
    w_T := GALSOLVE[T, f_T, w_T, δε_i]
    for k = 1, …, M do
        T := REFINE[T, f_T, w_T, θ]
        [T, f_T] := RHS[T, f, δµ^k ε_i]
        w_T := GALSOLVE[T, f_T, w_T, δµ^k ε_i]
    endfor
endfor
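The tolerance bookkeeping of the outer loop can be sketched as follows (Python; the parameter values in the usage line are illustrative, not taken from the thesis). The sketch mirrors the update ε_{i+1} = (1+γ) κ_Ψ^{1/2} µ^M / (1−3δ) · ε_i and the stopping rule that the final inner loop ends with error at most µ^M ε_N ≤ ε:

```python
def tolerance_schedule(eps0, eps, delta, gamma, kappa_psi, mu, M):
    """Outer-loop tolerance sequence [eps_1, ..., eps_N] of AFEMOPT (sketch).

    Each outer step multiplies the tolerance by the contraction factor
    rho = (1+gamma)*kappa_psi**0.5 * mu**M / (1-3*delta), and the loop runs
    until the inner loop's final error bound mu**M * eps_N drops below eps.
    """
    rho = (1 + gamma) * kappa_psi**0.5 * mu**M / (1 - 3 * delta)
    assert rho < 1, "parameters must make the outer iteration a contraction"
    schedule = [eps0 / (1 - 3 * delta)]          # eps_1
    while mu**M * schedule[-1] > eps:
        schedule.append(rho * schedule[-1])
    return schedule

schedule = tolerance_schedule(eps0=1.0, eps=1e-3, delta=0.1,
                              gamma=2.0, kappa_psi=1.0, mu=0.5, M=3)
```

Because the schedule is a fixed geometric sequence, the number of outer iterations N grows only logarithmically in ε_0/ε, which is what allows the geometric-series argument in the complexity proof below.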

Theorem 2.9.1 i) Let T be an admissible triangulation, ε > 0 and w_T ∈ W_T such that ‖u − w_T‖_E ≤ ε_0. Then [T, w_T] := AFEMOPT[T, f, w_T, ε_0, ε] satisfies ‖u − w_T‖_E ≤ ε.

ii) Assuming ε_0 ≲ ‖u‖_E, if for some s > 0, u ∈ A^s and (f, RHS) is s-optimal, then both ]T and the number of flops required by this call are ≲ max{1, ε^{−1/s}}(c_f^{1/s} + ‖u‖_{A^s}^{1/s}).

Proof. i) By induction on i, we are going to prove that after the if-then-else-endif statement it holds that

‖u − w_T‖_E ≤ (1 − 3δ)ε_i. (2.9.1)

For i = 1, (2.9.1) is valid by the definition of ε_1 and since ‖u − w_T‖_E ≤ ε_0.

Let us assume that (2.9.1) holds for some i < N. Since the call [T, f_T] := RHS[T, f, δε_i], following the if-then-else-endif statement, leads to

‖f − f_T‖_{E′} ≤ δε_i, (2.9.2)

as in (2.6.16), after this call we have

‖u − A_T^{−1} f_T‖_E ≤ (1 − δ)ε_i. (2.9.3)


Clearly, after w_T := GALSOLVE[T, f_T, w_T, δε_i] we have

‖A_T^{−1} f_T − w_T‖_E ≤ δε_i. (2.9.4)

Together, (2.9.3) and (2.9.4) show that

‖u − w_T‖_E ≤ ε_i (2.9.5)

at the moment of entering the inner loop.

By induction on k, for the inner loop we will prove that just before the call of REFINE we have

‖u − w_T‖_E ≤ µ^{k−1} ε_i, (2.9.6)

proving that the inner loop terminates with

‖u − w_T‖_E ≤ µ^M ε_i. (2.9.7)

For k = 1, (2.9.6) is satisfied thanks to (2.9.5). Assuming that (2.9.6) is valid for some k, and using that at that moment

‖f − f_T‖_{E′} ≤ δε_i µ^{k−1}, ‖A_T^{−1} f_T − w_T‖_E ≤ δε_i µ^{k−1}, (2.9.8)

by the subsequent calls T := REFINE[T, f_T, w_T, θ], [T, f_T] := RHS[T, f, δε_i µ^k], and w_T := GALSOLVE[T, f_T, w_T, δε_i µ^k], from Corollary 2.6.3 we have

‖u − w_T‖_E ≤ ε_i µ^k, (2.9.9)

proving the induction hypothesis. Knowing (2.9.7), before the call of DEREFINE in the next outer iteration it holds that

‖u − w_T‖_Ψ ≤ λ_Ψ^{−1/2} ‖u − w_T‖_E ≤ λ_Ψ^{−1/2} ε_i µ^M. (2.9.10)

Then the call of DEREFINE leads to

‖u − w_T‖_E ≤ Λ_Ψ^{1/2} ‖u − w_T‖_Ψ ≤ Λ_Ψ^{1/2} (1 + γ) λ_Ψ^{−1/2} ε_i µ^M = (1 − 3δ)ε_{i+1} (2.9.11)

by the definition of ε_{i+1}. With this we conclude the proof of (2.9.1).

As a special case of (2.9.7), at termination of the last inner loop we have

‖u − w_T‖_E ≤ µ^M ε_N, (2.9.12)

and the proof of the first part of the theorem is completed by the definition of N.

ii) First, we will show that at the end of the i-th outer cycle, both ]T and the cost of this cycle are ≲ ε_i^{−1/s}(c_f^{1/s} + ‖u‖_{A^s}^{1/s}). Using that ∑_{i=1}^{N} ε_i^{−1/s} ≲ ε_N^{−1/s} ≤ ε^{−1/s}, this will suffice for the proof.

For i = 1, at the beginning of the outer cycle we have ]T ≲ ε_i^{−1/s} ‖u‖_{A^s}^{1/s}, because ]T = ]T_0 ≲ 1 and because of our assumption that ε_0 ≲ ‖u‖_E. Next, for i > 1, before the call


of DEREFINE[T, w_T, γ λ_Ψ^{−1/2} µ^M ε_{i−1}] it holds that ‖u − w_T‖_Ψ ≤ λ_Ψ^{−1/2} ε_{i−1} µ^M. Now Corollary 2.8.10 states that after this call, for any triangulation T̂ with inf_{v_T̂∈W_T̂} ‖u − v_T̂‖_Ψ ≤ (t_1^{1/2}γ − 1) λ_Ψ^{−1/2} ε_{i−1} µ^M, we have ]T ≤ D]T̂. Since u ∈ A^s,

]T ≤ D(]T_0 + ((t_1^{1/2}γ − 1) ε_{i−1} µ^M)^{−1/s} ‖u‖_{A^s}^{1/s}) ≲ ε_i^{−1/s} ‖u‖_{A^s}^{1/s}. (2.9.13)

Further, recalling the properties of RHS and REFINE, and using that (f, RHS) is s-optimal, we conclude that at the end of the outer cycle we have

]T ≲ ε_i^{−1/s}(c_f^{1/s} + ‖u‖_{A^s}^{1/s}). (2.9.14)

This completes the proof of the statement about the cardinality of T.

Let us analyze the costs of the algorithm. Theorem 2.8.9 states that the call of DEREFINE requires ≲ ]T + log(ε_i^{−1} ‖w_T‖_Ψ) operations. Next, we find log(ε_i^{−1} ‖w_T‖_Ψ) ≤ ε_i^{−1/s} ‖w_T‖_Ψ^{1/s}, and ‖w_T‖_Ψ ≲ ‖w_T‖_E ≤ ‖u‖_E + ε_0 ≲ ‖u‖_E ≤ ‖u‖_{A^s}. Using (2.9.14), we find that the cost of this call is

≲ ε_i^{−1/s}(c_f^{1/s} + ‖u‖_{A^s}^{1/s}). (2.9.15)

Thanks to the properties of RHS[T, f, δε_i] and REFINE, the costs of the calls of these routines are ≲ ε_i^{−1/s}(c_f^{1/s} + ‖u‖_{A^s}^{1/s}).

As we have seen, before the call of GALSOLVE in the outer loop it holds that

‖u − w_T‖_E ≤ (1 − 3δ)ε_i, ‖u − A_T^{−1} f_T‖_E ≤ (1 − δ)ε_i, (2.9.16)

and therefore ‖A_T^{−1} f_T − w_T‖_E ≤ 2(1 − 2δ)ε_i. Observing that the ratio 2(1−2δ)ε_i / (δε_i) is a constant, we conclude

that the cost of this call is ≲ ]T.

Before the call of GALSOLVE[T, f_T, w_T, δµ^k ε_i] in the inner loop it holds that

‖u − w_T‖_E ≤ µ^{k−1} ε_i, ‖f − f_T‖_{E′} ≤ δµ^k ε_i. (2.9.17)

Using (2.6.16),

‖u − A_T^{−1} f_T‖_E ≤ 2‖f − f_T‖_{E′} + ‖u − w_T‖_E ≤ (2δµ^k + µ^{k−1})ε_i, (2.9.18)

and we find that ‖A_T^{−1} f_T − w_T‖_E ≤ 2(δµ^k + µ^{k−1})ε_i. Noting that the ratio 2(δµ^k + µ^{k−1})ε_i / (δµ^k ε_i) is a constant,

the cost of this call is also ≲ ]T. ♦

In a similar way, we can turn AFEM2 into an optimal adaptive finite element routine by adding the derefinement routine to it. In that case, the third parameter of DEREFINE, i.e., the tolerance, will be a constant multiple of the current a posteriori error estimator. For details, one may consult [36].

2.10 Numerical Experiments

In this section we present numerical examples to experimentally verify the performance of the second practical adaptive algorithm (AFEM2).


Figure 2.15: Solution of the Poisson problem on an L-shaped domain

2.10.1 L-shaped domain

Let us consider the Poisson problem on the L-shaped domain Ω = [0, 1]² \ [0.5, 1]² with constant right-hand side. This problem has a solution as depicted in Fig. 2.15. Recalling our discussion in Section 2.3.2, we infer that in this case the solution u satisfies

u ∈ H^{2/3+1−ε}(Ω), (2.10.1)

for any ε > 0. As a consequence, finite element approximations with respect to triangulations created by uniform refinements can be expected to have errors in the energy norm of order N^{−1/3+ε/2}, where N denotes the number of unknowns. So the convergence rate is −1/3 + ε/2 instead of the −1/2 that we may expect for optimal triangulations.

We have created a C++ implementation of the adaptive solver AFEM2. For the theory (Theorem 2.4.3(i) and (ii)), it was required to create an interior node in each element that was marked for refinement. This can be achieved by two levels of red refinement (i.e., subdivision into 16 subtriangles). We compared this approach with one in which only one red refinement was applied, which does not create an interior node. Since the convergence rates were the same, the results presented here were obtained with the simpler approach of applying only one red refinement.
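A single red refinement can be sketched as follows (Python; a minimal illustration, not the thesis's C++ data structures). Applying it twice yields the 16 subtriangles mentioned above, and the second level creates nodes interior to the original triangle:

```python
def red_refine(tri):
    """Split a triangle (3 vertex tuples) into 4 subtriangles, each similar
    to the parent, by connecting the edge midpoints ('red' refinement)."""
    a, b, c = tri
    mid = lambda p, q: ((p[0] + q[0]) / 2.0, (p[1] + q[1]) / 2.0)
    ab, bc, ca = mid(a, b), mid(b, c), mid(c, a)
    # three corner triangles plus the central (inverted) one
    return [(a, ab, ca), (ab, b, bc), (ca, bc, c), (ab, bc, ca)]

t = ((0.0, 0.0), (1.0, 0.0), (0.0, 1.0))
once = red_refine(t)
twice = [g for k in once for g in red_refine(k)]   # 16 subtriangles
```

Since all children are similar to the parent, red refinement preserves the shape regularity of the triangulation, which is why the interpolation and inverse estimates used throughout hold with uniform constants.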

In Fig. 2.17 and Fig. 2.18 we give two examples of adaptive triangulations produced by this solver. We have studied AFEM2 for different values of θ. As one can see from Fig. 2.16, for θ small enough (θ² = 0.3 and 0.5), the approximations converge with the optimal rate −1/2, whereas for larger θ the rates become increasingly closer to the rate −1/3 that one obtains with uniform refinements. The results indicate that for sufficiently small θ, the derefinement routine is not required for obtaining optimal rates.
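The selection of elements controlled by θ is the bulk-chasing criterion used by REFINE: mark a (near-)minimal set of elements whose squared indicators sum to at least θ² times the total. A minimal sketch (Python, with a dictionary of error indicators as a stand-in for the actual data structures; the exact sort used here for brevity would be replaced by binning to keep the cost linear):

```python
def dorfler_mark(indicators, theta2):
    """Bulk-chasing (Doerfler) marking: greedily take the largest indicators
    until their squared sum reaches theta2 times the total squared sum."""
    total = sum(e * e for e in indicators.values())
    marked, acc = [], 0.0
    for elem, e in sorted(indicators.items(), key=lambda kv: -kv[1]):
        if acc >= theta2 * total:
            break
        marked.append(elem)
        acc += e * e
    return marked
```

Small θ marks only the worst offenders and keeps the mesh near-optimal at each step; θ close to 1 marks almost everything, which is why the observed rates then approach those of uniform refinement.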

2.10.2 Domain with a crack

In our second example we consider the Poisson problem on a crack domain, which has maximum interior angle 2π. In this case the solution u has even lower Sobolev regularity



Figure 2.16: Error estimator vs ]T for different values of θ on the L-shaped domain


Figure 2.17: Mesh example (θ² = 0.3)



Figure 2.18: Mesh example (θ² = 0.3)

than in the first example, namely,

u ∈ H^{1/2+1−ε}(Ω). (2.10.2)

This means that the non-adaptive method will converge with a rate close to −1/4. The solution is illustrated in Fig. 2.19. As in the L-shaped domain case, the results obtained with AFEM2 show (Fig. 2.20) that for θ small enough (here θ² = 0.3) the optimal rate is obtained, whereas for larger θ the rates get increasingly closer to the rate −1/4 that corresponds to uniform refinements. Mesh examples obtained with AFEM2 are given in Fig. 2.21 and Fig. 2.22.


Figure 2.19: Solution of the Poisson problem on a domain with a crack


Figure 2.20: Error estimator vs ]T for different values of θ on the crack domain



Figure 2.21: Mesh example with 2735 triangles (θ² = 0.3)


Figure 2.22: Mesh example with 10571 triangles (θ² = 0.3)


Chapter 3

Adaptive FE Algorithms for Singularly Perturbed Problems

3.1 Singularly Perturbed Reaction-Diffusion Equation

In this short chapter, for some polygon Ω ⊂ R², we will study adaptive algorithms for the singularly perturbed reaction-diffusion problem:

Given f ∈ H^{−1}(Ω), find u ∈ H¹₀(Ω) such that

a(u, w) := 1/(1+κ²) ∫_Ω ∇u · ∇w + κ² u w = f(w)  for all w ∈ H¹₀(Ω). (3.1.1)

To measure the error of some approximation, we will use the energy norm

‖w‖_E := a(w, w)^{1/2}, (w ∈ H¹₀(Ω)), (3.1.2)

with dual norm

‖f‖_{E′} := sup_{0≠w∈H¹₀(Ω)} |f(w)| / ‖w‖_E, (f ∈ H^{−1}(Ω)). (3.1.3)

As in the previous chapter, we consider continuous piecewise linear finite element spaces W_T for approximating the solution, and piecewise constant finite element spaces Y_T for approximating the right-hand side, where T is a triangulation created by red-refinements. Unless stated otherwise, the notations related to finite element spaces, triangulations, etc. are as in Chapter 2.

Similarly to Section 2.2, we define A : H¹₀(Ω) → H^{−1}(Ω) and A_T : W_T → W′_T by (A v)(w) = a(v, w) (v, w ∈ H¹₀(Ω)) and (A_T v_T)(w_T) = a(v_T, w_T) (v_T, w_T ∈ W_T), respectively.

Problems of reaction-diffusion type arise, for example, in the modeling of thin plates and shells, in the numerical treatment of reaction processes, and as a result of an implicit time semi-discretization of parabolic or hyperbolic equations of second order. The main feature of the reaction-diffusion equation is that its solution exhibits very steep boundary layers


for large values of the reaction term. In view of this, the application of adaptive methods to this problem is particularly attractive ([33, 42]). In this chapter, we are interested in adaptive methods that converge uniformly in the size of the reaction term.

In view of the design of convergent adaptive algorithms presented in the previous chapter, the reader will notice that we need a robust error estimator to achieve our goal. In particular, a uniform saturation property is needed in this case. Except for the construction and analysis of the a posteriori error estimator, the design of the adaptive algorithm is the same as in the previous chapter.

We mention here that the theoretical part of this chapter concerning the uniform saturation property builds on earlier work from [38]. Our contribution has mainly been the implementation in C++ of the adaptive solver. The numerical results show that it has optimal computational complexity.

3.2 Robust Residual Based A Posteriori Error Estimator

We start by constructing an a posteriori error estimator following the ideas presented in Section 2.4. For the Galerkin solution u_T := A_T^{−1} f, assuming that f ∈ L2(Ω) and using integration by parts, for any w_T ∈ W_T we can write

a(u − u_T, w) = a(u − u_T, w − w_T)
= ∫_Ω f(w − w_T) − 1/(1+κ²) ∫_Ω ∇u_T · ∇(w − w_T) + κ² u_T (w − w_T)
= ∑_{K∈T} ∫_K (f + 1/(1+κ²)(∆u_T − κ² u_T))(w − w_T) − ∑_{ℓ∈E_T} ∫_ℓ [∂u_T/∂ν_ℓ]/(1+κ²) (w − w_T)
= ∑_{K∈T} ∫_K R_K (w − w_T) − ∑_{ℓ∈E_T} ∫_ℓ R_ℓ (w − w_T), (3.2.1)

where R_K is the element residual

R_K := (f + 1/(1+κ²)(∆u_T − κ² u_T))|_K (3.2.2)

and R_ℓ denotes the edge residual

R_ℓ := 1/(1+κ²) [∂u_T/∂ν_ℓ], ℓ ∈ E_T. (3.2.3)

Here, for ℓ ∈ E_T, ν_ℓ is a unit vector orthogonal to ℓ. Although for clarity we wrote ∆u_T, note that this term is actually zero, since we consider piecewise linear approximations.

To continue our analysis of the a posteriori error estimator, we need to recall some properties of the (quasi-)interpolant I_T : H¹₀(Ω) → W_T from Section 2.4, where T is an admissible triangulation:

‖w − I_T w‖_{L2(K)} ≲ diam(K) |w|_{H¹(Ω_K)}, (3.2.4)


‖w − I_T w‖_{L2(K)} ≲ ‖w‖_{L2(Ω_K)}, (3.2.5)

‖w − I_T w‖_{H¹(K)} ≲ |w|_{H¹(Ω_K)}. (3.2.6)

Using (3.2.4) and noting that |·|_{H¹} ≤ √(1+κ²) ‖·‖_E, we find

‖w − I_T w‖_{L2(K)} ≲ diam(K) √(1+κ²) ‖w‖_{E,Ω_K}. (3.2.7)

Since for κ ≥ 1, ‖·‖_{L2(Ω_K)} ≤ √2 ‖·‖_{E,Ω_K}, we get

‖w − I_T w‖_{L2(K)} ≲ min{1, diam(K) √(1+κ²)} ‖w‖_{E,Ω_K}. (3.2.8)

Next, we will bound ‖w − I_T w‖_{L2(ℓ)} (ℓ ∈ E_T). An important ingredient of our analysis is the following lemma from [42].

Lemma 3.2.1 For any triangle K, it holds that

‖w‖_{L2(∂K)} ≲ diam(K)^{−1/2} ‖w‖_{L2(K)} + ‖w‖_{L2(K)}^{1/2} |w|_{H¹(K)}^{1/2}, (w ∈ H¹(K)), (3.2.9)

with the constant only dependent on the smallest angle in K.

An application of this lemma gives

‖w − I_T w‖_{L2(ℓ)} ≲ diam(ℓ)^{−1/2} ‖w − I_T w‖_{L2(K_ℓ)} + ‖w − I_T w‖_{L2(K_ℓ)}^{1/2} ‖w − I_T w‖_{H¹(K_ℓ)}^{1/2}
≲ (diam(ℓ)^{−1/2} min{1, diam(K) √(1+κ²)} + min{1, diam(K) √(1+κ²)}^{1/2} (1+κ²)^{1/4}) ‖w‖_{E,K_ℓ}
≂ min{1, diam(K) √(1+κ²)}^{1/2} (1+κ²)^{1/4} ‖w‖_{E,K_ℓ}. (3.2.10)

Let us define the a posteriori error estimator

E(T, f, u_T)² := ∑_{K∈T} min{1, diam(K) √(1+κ²)}² ‖R_K‖²_{L2(K)} + ∑_{ℓ∈E_T} (1+κ²)^{1/2} min{1, diam(ℓ) √(1+κ²)} ‖R_ℓ‖²_{L2(ℓ)}. (3.2.11)

We now substitute w_T = I_T w in (3.2.1). Then, using the previous estimates and the Cauchy–Schwarz inequality, we find

‖u − u_T‖²_E = sup_{0≠w∈H¹₀(Ω)} |a(u − u_T, w)|² / ‖w‖²_E ≲ E(T, f, u_T)², (3.2.12)

and therefore we can state


Theorem 3.2.2 For any f ∈ L2(Ω) and any admissible triangulation T, with u = A^{−1}f and u_T := A_T^{−1}f, we have

‖u − u_T‖_E ≤ C_U E(T, f, u_T), (3.2.13)

for some absolute constant C_U > 0.
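The κ-dependent weights in (3.2.11) can be written out directly; a small sketch (Python, hypothetical helper names) that computes the element and edge contributions to the robust estimator from the residual norms:

```python
import math

def element_indicator(diam_K, RK_l2, kappa):
    """min{1, diam(K)*sqrt(1+kappa^2)}^2 * ||R_K||^2_{L2(K)}, cf. (3.2.11)."""
    w = min(1.0, diam_K * math.sqrt(1.0 + kappa**2))
    return w**2 * RK_l2**2

def edge_indicator(diam_e, Re_l2, kappa):
    """(1+kappa^2)^{1/2} * min{1, diam(e)*sqrt(1+kappa^2)} * ||R_e||^2_{L2(e)}."""
    w = min(1.0, diam_e * math.sqrt(1.0 + kappa**2))
    return math.sqrt(1.0 + kappa**2) * w * Re_l2**2
```

Note how the weight min{1, diam·√(1+κ²)} saturates at 1 once the local mesh size exceeds the layer width 1/√(1+κ²), which is precisely what makes the estimator robust in κ.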

As we have seen in the previous chapter, we also need to study whether the a posteriori error estimator provides the proper lower bounds.

Lemma 3.2.3 For any two triangles K₁, K₂ that share a complete edge ℓ of both K₁ and K₂, any z ∈ C(K₁ ∪ K₂) ∩ ∏_{i=1}^{2} P₁(K_i), and any g ∈ ∏_{i=1}^{2} P₀(K_i), we have

‖[∂z/∂ν_ℓ]‖_{L2(ℓ)} ≲ diam(ℓ)^{−3/2} ‖g − z‖_{L2(K₁∪K₂)}, (3.2.14)

with the constant only dependent on the smallest angles in K₁, K₂.

Proof. Let w ∈ C(K₁ ∪ K₂) ∩ ∏_{i=1}^{2} P₁(K_i) be defined by w(v) = diam(ℓ) for the vertex v of K₁ not on ℓ, and w ≡ 0 on K₂. Noting that any continuous piecewise linear z can be written as a linear combination of w and a p ∈ P₁(K₁ ∪ K₂), thus with [∂p/∂ν_ℓ] = 0, our task is equivalent to proving that

‖[∂w/∂ν_ℓ]‖_{L2(ℓ)} ≲ diam(ℓ)^{−3/2} inf_{p∈P₁(K₁∪K₂), g∈∏_{i=1}^{2} P₀(K_i)} ‖g − (w + p)‖_{L2(K₁∪K₂)}. (3.2.15)

Since ‖[∂w/∂ν_ℓ]‖_{L2(ℓ)} ≂ diam(ℓ)^{1/2} ≂ diam(ℓ)^{−3/2} ‖w‖_{L2(K₁∪K₂)}, with V := P₁(K₁ ∪ K₂) + ∏_{i=1}^{2} P₀(K_i) we have to show that

‖w‖_{L2(K₁∪K₂)} ≲ inf_{v∈V} ‖w − v‖_{L2(K₁∪K₂)}. (3.2.16)

Invoking a homogeneity argument, it is sufficient to prove this inequality for diam(ℓ) = 1. Observing that inf_{v∈V} ‖w − v‖_{L2(K₁∪K₂)} is a continuous and, since w ∉ V, positive function of the angles in both triangles K₁ and K₂, the proof is completed by a compactness argument. ♦

Corollary 3.2.4 For admissible triangulations T, f ∈ Y_T, and K₁, K₂ ∈ T with ℓ = K₁ ∩ K₂ ∈ E_T and diam(ℓ) √(1+κ²) ≳ 1, it holds that

diam(ℓ)(1+κ²) ‖R_ℓ‖²_{L2(ℓ)} ≲ ∑_{i=1}^{2} diam(K_i)² (1+κ²) ‖R_{K_i}‖²_{L2(K_i)}. (3.2.17)


Proof. Using Lemma 3.2.3, we find

diam(ℓ)(1+κ²) ‖R_ℓ‖²_{L2(ℓ)} ≲ (1+κ²)^{−3/2} ‖[∂u_T/∂ν_ℓ]‖²_{L2(ℓ)}
≲ diam(ℓ)^{−3} (1+κ²)^{−3/2} ‖((1+κ²)/κ²) f_T − u_T‖²_{L2(K₁∪K₂)}
= (diam(ℓ) √(1+κ²))^{−3} ((1+κ²)/κ²)² ‖f_T − (κ²/(1+κ²)) u_T‖²_{L2(K₁∪K₂)}
≲ ∑_{i=1}^{2} diam(K_i)² (1+κ²) ‖R_{K_i}‖²_{L2(K_i)}. ♦ (3.2.18)

With the help of the previous corollary, a saturation property uniform in κ was proved in [38]. Its proof is quite technical and we refer the reader to [38].

Theorem 3.2.5 Let T be an admissible triangulation and T̃ a refinement of T. Assume that f_T ∈ Y_T, and let u = A^{−1}f, and let u_T = A_T^{−1}f, u_T̃ = A_T̃^{−1}f be its Galerkin approximations on the triangulations T and T̃, respectively.

i) If in T̃ the element K ∈ T was refined by two recursive red-refinements, then

‖u_T̃ − u_T‖²_{H¹(K)} ≳ min{1, diam(K) √(1+κ²)}² ‖R_K‖²_{L2(K)}. (3.2.19)

ii) With K₁, K₂ ∈ T such that ℓ := K₁ ∩ K₂ ∈ E_T, if in T̃ both K₁ and K₂ were refined by two recursive red-refinements, then

‖u_T̃ − u_T‖²_{H¹(K₁∪K₂)} ≳ (1+κ²)^{1/2} min{1, diam(ℓ) √(1+κ²)} ‖R_ℓ‖²_{L2(ℓ)} + ∑_{i=1}^{2} min{1, diam(K_i) √(1+κ²)}² ‖R_{K_i}‖²_{L2(K_i)}. (3.2.20)

iii) If for some F ⊂ T and G ⊂ E_T the refinement T̃ satisfies condition i) or ii) for all K ∈ F and ℓ ∈ G, respectively, then

‖u_T̃ − u_T‖²_E ≥ C_L² [ ∑_{K∈F} min{1, diam(K) √(1+κ²)}² ‖R_K‖²_{L2(K)} + ∑_{ℓ∈G} (1+κ²)^{1/2} min{1, diam(ℓ) √(1+κ²)} ‖R_ℓ‖²_{L2(ℓ)} ] (3.2.21)

for some absolute constant C_L > 0.

iv) C_L E(T, f, u_T) ≤ ‖u − u_T‖_E. (3.2.22)


3.3 Uniformly Convergent Adaptive Algorithm

Having described all relevant results concerning our robust error estimator and the uniform saturation property, we can now design a uniformly convergent adaptive algorithm, directly following our construction from the previous chapter. Since the rest of the convergence analysis is the same as in the previous chapter, here we only present the necessary routines.

Algorithm 3.1 REFINE

REFINE[T, f_T, w_T, θ] → T̃
/* Input of the routine:

• an admissible triangulation T
• f_T ∈ Y_T
• w_T ∈ W_T
• θ ∈ (0, 1)

Select, in O(]T) operations, F ⊂ T and G ⊂ E_T such that

∑_{K∈F} min{1, diam(K) √(1+κ²)}² ‖R_K‖²_{L2(K)} + ∑_{ℓ∈G} (1+κ²)^{1/2} min{1, diam(ℓ) √(1+κ²)} ‖R_ℓ‖²_{L2(ℓ)} ≥ θ² E(T, f_T, w_T)². (3.3.1)

Construct a refinement T̃ of T such that for all K ∈ F and ℓ ∈ G the conditions i)-ii) from Theorem 3.2.5 are satisfied. */

Algorithm 3.2 RHS

RHS[T, f, ε] → [T̃, f_T̃]
/* Input of the routine:

• a triangulation T
• f ∈ H^{−1}(Ω)
• ε > 0

Output: an admissible triangulation T̃ ⊃ T and an f_T̃ ∈ Y_T̃ such that ‖f − f_T̃‖_{E′} ≤ ε. */


Algorithm 3.3 GALSOLVE

GALSOLVE[T, f_T, u_T^{(0)}, ε] → u_T^ε
/* Input of the routine:

• an admissible triangulation T
• f_T ∈ W′_T
• u_T^{(0)} ∈ W_T
• ε > 0

Output: u_T^ε ∈ W_T such that

‖u_T − u_T^ε‖_E ≤ ε, (3.3.2)

where u_T = A_T^{−1} f_T.

The call of the algorithm should take ≲ max{1, log(ε^{−1} ‖u_T − u_T^{(0)}‖_E)} ]T flops. */

We have implemented GALSOLVE by applying Conjugate Gradients to the system in wavelet coordinates, with wavelets as introduced in Chapter 2. Since these wavelets are stable both in L2(Ω) and in H¹₀(Ω), they are stable with respect to ‖·‖_E uniformly in κ, meaning that our implementation of GALSOLVE satisfies the assumptions.
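A generic preconditioned conjugate gradient iteration of the kind used here can be sketched as follows (Python/NumPy; in the thesis's setting the change to the wavelet basis plays the role of the preconditioner, which is what makes the iteration count uniform in the mesh size and in κ — the small dense matrices below are only a toy stand-in):

```python
import numpy as np

def pcg(A, b, M_inv, x0, tol, maxit=200):
    """Preconditioned conjugate gradients for SPD A; M_inv approximates
    the inverse of A in the sense of a uniformly bounded condition number."""
    x = x0.copy()
    r = b - A @ x
    z = M_inv @ r
    p = z.copy()
    rz = r @ z
    for _ in range(maxit):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) <= tol:
            break
        z = M_inv @ r
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x

A = np.array([[4.0, 1.0], [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = pcg(A, b, np.eye(2), np.zeros(2), tol=1e-12)
```

With a uniformly bounded preconditioned condition number, reducing the error by a fixed factor costs a fixed number of iterations, each of cost proportional to ]T — which is exactly the flop bound required of GALSOLVE above.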

Finally, we can present a uniformly convergent adaptive solver.

Algorithm 3.4 Adaptive FE Algorithm for Reaction-Diffusion Equation

AFEMReact-Diff[T, f, w_T, ε_0, ε] → [T, w_T]
/* Input parameters of the algorithm are: f ∈ H^{−1}(Ω), an admissible triangulation T, ε_0, ε > 0, and w_T ∈ W_T such that, with u = A^{−1}f, max{ε, ‖u − w_T‖_E} ≤ ε_0. Let D > 0 be a sufficiently small constant (called C_µ in Proposition 2.7.1 of Chapter 2). */

E_O := ε_0
[T, f_T] := RHS[T, f, D·E_O]
w_T := GALSOLVE[T, f_T, w_T, D·E_O]
while E_O := C_U E(T, f_T, w_T) + (1 + C_O)·D·E_O > ε do
    T := REFINE[T, f_T, w_T, θ]
    [T, f_T] := RHS[T, f, D·E_O]
    w_T := GALSOLVE[T, f_T, w_T, D·E_O]
endwhile

Theorem 3.3.1 i) For some absolute constant C > 0, AFEMReact-Diff[T, f, w_T, ε_0, ε] → [T, w_T] terminates with

‖u − w_T‖_E ≤ ε (3.3.3)


after at most M iterations of the while-loop, with M being the smallest integer with Cµ^M ≤ ε/ε_0. Here we emphasize that, as for any constant in this chapter, C and M can be chosen to be independent of κ.

ii) The number of operations required by a call of AFEMReact-Diff is proportional to the sum of the cardinalities of all triangulations created by this algorithm.

As in Chapter 2, by adding a derefinement routine we could turn AFEMReact-Diff into an adaptive routine that converges with the optimal rate in linear complexity, uniformly in κ. However, since the numerical results from the previous chapter indicated that 'coarsening' was not needed to achieve optimality, at least when θ was chosen not too large, in the current chapter we simply skip the derefinement.

3.4 Numerical Example

We consider the problem (3.1.1) with Ω = (0, 1)² and right-hand side

f(x, y) = (1/2)·(κ²/(1+κ²))·(v_κ(x) + v_κ(y)) + ((2π² + κ²)/(1+κ²))·sin(πx) sin(πy), (3.4.1)

where

v_κ(x) := 1 − (e^{−(1/2)√2·κx} + e^{−(1/2)√2·κ(1−x)}) / (1 + e^{−(1/2)√2·κ}). (3.4.2)

The exact solution of this problem has the form

u(x, y) = v_κ(x) v_κ(y) + sin(πx) sin(πy). (3.4.3)

Since, as one can verify,

‖f − (1 + sin(πx) sin(πy))‖_{L2(Ω)} ≲ κ^{−1/2}, (3.4.4)

for κ → ∞, f tends to a smooth, κ-independent function that does not vanish at the boundary. Therefore, the solution exhibits boundary layers that are typical for the reaction-diffusion problem.
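The exact solution is easy to evaluate; a small sketch (Python) that also makes the boundary layers visible: v_κ vanishes at x = 0 and x = 1 but is close to 1 away from the boundary once κ is large, the transition happening in a layer of width of order 1/κ.

```python
import math

def v_kappa(x, kappa):
    """The 1-D boundary-layer factor (3.4.2)."""
    a = 0.5 * math.sqrt(2.0) * kappa
    return 1.0 - (math.exp(-a * x) + math.exp(-a * (1.0 - x))) / (1.0 + math.exp(-a))

def u_exact(x, y, kappa):
    """The exact solution (3.4.3): layer part plus a smooth sine bump."""
    return v_kappa(x, kappa) * v_kappa(y, kappa) \
        + math.sin(math.pi * x) * math.sin(math.pi * y)
```

Evaluating u_exact on a fine grid reproduces the steep-layer profile shown in Fig. 3.2, and comparing it with the discrete solution is how the energy-norm errors below can be measured.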

We have designed a C++ implementation of the solver AFEMReact-Diff.

As we can observe from Fig. 3.2, for large values of κ the solution indeed exhibits very steep boundary layers. To investigate the performance of our adaptive algorithm, we started with the initial triangulation depicted in Fig. 3.1, and we used the parameters C_U = 0.8, C_O = 1, δ = 1/9, θ² = 0.5.

The numerical results produced by the adaptive algorithm for κ = 0, κ² = 10³, and κ² = 10⁴, depicted in Fig. 3.3, show that the error in the energy norm decays as N^{−1/2}, illustrating that the adaptive algorithm converges with the asymptotically optimal rate. Moreover, Fig. 3.4, plotting the error versus CPU time, indicates that our C++ implementation is fast and that it has linear complexity, i.e., that the work is proportional to the number of degrees of freedom. An example of an adaptive mesh produced by our solver is given in Fig. 3.5.
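The empirical rates reported here can be recovered from (N, error) pairs by a least-squares fit of the slope in log-log coordinates; a small sketch (Python, with synthetic data in place of the actual measurements):

```python
import math

def convergence_rate(ns, errors):
    """Least-squares slope of log(error) vs log(N); a slope near -1/2
    corresponds to the optimal order for P1 elements in 2-D."""
    xs = [math.log(n) for n in ns]
    ys = [math.log(e) for e in errors]
    m = len(xs)
    xbar, ybar = sum(xs) / m, sum(ys) / m
    return sum((x - xbar) * (y - ybar) for x, y in zip(xs, ys)) / \
           sum((x - xbar)**2 for x in xs)

ns = [100, 400, 1600, 6400]
errs = [n ** -0.5 for n in ns]        # synthetic optimal-rate data
rate = convergence_rate(ns, errs)     # close to -0.5 by construction
```

Fitting over the asymptotic regime only (discarding the coarsest meshes) is advisable in practice, since pre-asymptotic effects can bias the slope.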


Figure 3.1: Initial mesh

Figure 3.2: Solution u for κ² = 10⁴



Figure 3.3: Error in the energy norm versus number of degrees of freedom in log-log scale for different values of κ².


Figure 3.4: Error in the energy norm versus CPU time in log-log scale for different values of κ².


Figure 3.5: Mesh example


Chapter 4

Derefinement-based Adaptive Finite Element Algorithms for the Stokes Problem: Convergence Rates and Optimal Computational Complexity

4.1 Introduction

Nowadays, adaptive finite element algorithms are used to efficiently solve partial differential equations (PDEs) arising in science and engineering. The general structure of the loop of an adaptive algorithm is

Solve → Estimate → Refine, Derefine

Only recently, however, in the works of Dörfler ([19]) and Morin, Nochetto and Siebert ([29]), adaptive FEMs for elliptic problems were shown to converge. Later, in ([6]), Binev, Dahmen and DeVore, and in ([36]), Stevenson, showed that adaptive FEMs of this type converge with optimal rates and with linear computational complexity.

Typically, problems in fluid mechanics naturally lead to mixed variational problems. Concerning the adaptive solution of mixed variational problems, the situation is more complicated, and we are not aware of any proof of optimality of adaptive finite element algorithms.

In ([5]), Bänsch, Morin and Nochetto introduced an adaptive FEM for the Stokes problem, and they proved convergence of the method. Since convergence alone does not imply that the adaptive method is more efficient than its non-adaptive counterpart, an analysis of the convergence rates of an adaptive approximation is important. The numerical experiments in ([5]) show (quasi-)optimal triangulations for some values of the parameters; a theoretical explanation of these results, however, is still an open question.

In this chapter, we present a detailed design of an adaptive FEM algorithm for the Stokes problem, and analyze its computational complexity. As in [5], we apply a fixed point iteration to an infinite-dimensional Schur complement operator, where, to approximate the inverse of the elliptic operator, we use a generalization of the optimal adaptive finite element method of Stevenson for elliptic equations. By adding a derefinement step to the resulting adaptive fixed point iteration, in order to optimize the underlying triangulation after each fixed number of iterations, we show that the resulting method converges with the optimal rate and that it has optimal computational complexity.

4.2 Mixed Variational Problems. Well-Posedness and Discretization

Let X and Q be Hilbert spaces, and let the bilinear forms

a : X × X → R, b : X × Q → R (4.2.1)

be continuous. We consider the following mixed variational problem:

Given f ∈ X′, g ∈ Q′, find (u, p) ∈ X × Q such that
a(u, v) + b(v, p) = f(v) for all v ∈ X,
b(u, q) = g(q) for all q ∈ Q. (4.2.2)

By the Riesz representation theorem, the bilinear forms a : X × X → R and b : X × Q → R induce corresponding operators A ∈ L(X, X′), B ∈ L(X, Q′) and their adjoints, such that

a(u, v) = (A u)(v) = (A∗v)(u) for all u, v ∈ X,
b(v, q) = (Bv)(q) = (B∗q)(v) for all v ∈ X, q ∈ Q. (4.2.3)

Now we can rewrite (4.2.2) as the following operator problem:

Given f ∈ X′, g ∈ Q′, find (u, p) ∈ X × Q such that
A u + B∗p = f,
B u = g, (4.2.4)

or, with

L := [ A  B∗
       B  0 ] : X × Q → X′ × Q′, (4.2.5)

as

Given F = (f, g) ∈ X′ × Q′, find U = (u, p) ∈ X × Q such that L U = F.

Theorem 4.2.1 [21] The mapping (4.2.5) is a homeomorphism, i.e., there exist constants c_L, C_L > 0 such that

c_L (‖v‖²_X + ‖q‖²_Q)^{1/2} ≤ ‖L(v, q)‖_{X′×Q′} ≤ C_L (‖v‖²_X + ‖q‖²_Q)^{1/2} (4.2.6)


if and only if the following conditions are satisfied:

(i) There exists a constant α > 0 such that

a(v, v) ≥ α‖v‖_X² for all v ∈ Ker B;   (4.2.7)

(ii) there exists a constant β > 0 such that

inf_{q∈Q} sup_{v∈X} b(v, q) / (‖v‖_X ‖q‖_Q) ≥ β.   (4.2.8)
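These two conditions can be checked numerically for a concrete discretization. The sketch below uses hypothetical toy matrices standing in for Galerkin matrices (the names A, M, B are assumptions, not objects from this chapter); it computes the discrete inf-sup constant β as the square root of the smallest eigenvalue of the generalized eigenproblem (B A⁻¹ Bᵀ) q = β² M q:

```python
import numpy as np

# Toy Galerkin matrices (hypothetical, for illustration only):
#   A : "velocity" stiffness matrix (SPD), ||v||_X^2 = v^T A v
#   M : "pressure" mass matrix (SPD),      ||q||_Q^2 = q^T M q
#   B : coupling matrix, b(v, q) = q^T B v
rng = np.random.default_rng(0)
n_v, n_p = 12, 5
R = rng.standard_normal((n_v, n_v)); A = R @ R.T + n_v * np.eye(n_v)
S = rng.standard_normal((n_p, n_p)); M = S @ S.T + n_p * np.eye(n_p)
B = rng.standard_normal((n_p, n_v))

# Discrete inf-sup constant: beta^2 is the smallest eigenvalue of
# (B A^{-1} B^T) q = beta^2 M q; reduce to a standard symmetric
# eigenproblem via the Cholesky factor of M.
Schur = B @ np.linalg.solve(A, B.T)
Linv = np.linalg.inv(np.linalg.cholesky(M))
beta = np.sqrt(np.linalg.eigvalsh(Linv @ Schur @ Linv.T)[0])
print(beta > 0)   # full-rank B: the discrete analogue of (4.2.8) holds
```

A zero (or near-zero) β signals an unstable pair of discrete spaces; compare the role of the discrete inf-sup condition (4.3.16) below.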

4.3 Stokes Problem

The motion of an incompressible viscous fluid enclosed in a domain Ω ⊂ R² is described by the Stokes equations

−Σ_{j=1}^{n} ∂_{x_j} σ_{ij}(u) = f_i in Ω,   (4.3.1)
σ_{ij}(u) = −p δ_{ij} + 2µ e_{ij}(u) in Ω,   (4.3.2)
e_{ij}(u) = (1/2)(∂u_i/∂x_j + ∂u_j/∂x_i) in Ω,   (4.3.3)
div u = 0 in Ω,   (4.3.4)
u = 0 on ∂Ω,   (4.3.5)

where u is the velocity field, p is the pressure, µ is the viscosity of the fluid, f defines the body forces per unit mass, and σ_{ij}, e_{ij} are the stress and deformation rate tensors, respectively. Here, for simplicity of the presentation, we assumed that Ω ⊂ R². However, we expect that the results of this chapter can be generalized to the 3-D case. Elimination of σ_{ij} leads to the equivalent formulation

−µ∆u + ∇p = f in Ω,
div u = 0 in Ω,
u = 0 on ∂Ω.   (4.3.6)

Finally, introducing the scaled variables p := µ⁻¹p, f := µ⁻¹f, we obtain the parameter-independent Stokes problem

−∆u + ∇p = f in Ω,
div u = 0 in Ω,
u = 0 on ∂Ω.   (4.3.7)

Next we formulate the variational form of the Stokes problem, which is the following


mixed variational problem

With X = [H¹₀(Ω)]², Q = L_{2,0}(Ω) = {w ∈ L₂(Ω) : ∫_Ω w = 0}:

for given f ∈ X′, find (u, p) ∈ X × Q such that

a(u, v) + b(v, p) = f(v) for all v ∈ X,
b(u, q) = 0 for all q ∈ Q,   (4.3.8)

where

a(u, v) = ∫_Ω ∇u : ∇v = ∫_Ω Σ_{i=1}² ∇u_i · ∇v_i,
b(v, q) = −∫_Ω q div v.   (4.3.9)

We will equip the space of vector fields X with the norm

‖v‖_X := a(v, v)^{1/2} for all v ∈ X,   (4.3.10)

which is, in fact, the [H¹(Ω)]²-seminorm. By Poincaré's inequality, on [H¹₀(Ω)]² it is a norm that is equivalent to the standard [H¹(Ω)]²-norm. We equip X′ with the dual norm

‖f‖_{X′} := sup_{0≠v∈X} |f(v)| / ‖v‖_X.   (4.3.11)

Equipped with these norms, A : X → X′ is an isomorphism. We equip the space Q with the L₂(Ω) norm, i.e.,

‖q‖_Q := ‖q‖_{L₂(Ω)} for all q ∈ Q.   (4.3.12)

Again, as we have done in (4.2.3), we will introduce the operators induced by the bilinear forms of the Stokes problem, where now B = −div and B∗ = ∇, and write the problem as the equivalent operator problem (4.2.4). To convince ourselves that the mapping L : X × Q → X′ × Q′ is a homeomorphism, so that the Stokes problem is well-posed, we will check the conditions of Theorem 4.2.1. The ellipticity (4.2.7) is valid with α = 1.

To verify the inf-sup condition (4.2.8) we recall the following theorem from [21].

Theorem 4.3.1 Let Ω be a bounded connected domain with Lipschitz continuous boundary. Then there exists a constant c, depending on Ω, such that

‖p‖_{L₂(Ω)} ≤ c ‖∇p‖_{[H⁻¹(Ω)]²} for all p ∈ L_{2,0}(Ω).   (4.3.13)


Using Theorem 4.3.1, the inf-sup condition (4.2.8) easily follows. Indeed, given q ∈ Q, we can find v ∈ X such that ‖v‖_X = 1 and

(1/c) ‖q‖_Q ≤ ‖∇q‖_{X′} = ⟨v, ∇q⟩_{X×X′} = (B∗q)(v) = (Bv)(q) = b(v, q).   (4.3.14)

Given some pair (X_i, Q_i) ⊂ X × Q of FEM spaces, the discrete problem reads as follows:

Given f ∈ X′, find (u_i, p_i) ∈ X_i × Q_i such that

a(u_i, v) + b(v, p_i) = f(v) for all v ∈ X_i,
b(u_i, q) = 0 for all q ∈ Q_i.   (4.3.15)

Theorem 4.3.2 [11] Let (u, p) ∈ X × Q be the solution of the Stokes problem, and assume that for some constant β > 0 the pair (X_i, Q_i) satisfies

inf_{q_i∈Q_i} sup_{v_i∈X_i} b(v_i, q_i) / (‖v_i‖_X ‖q_i‖_Q) ≥ β.   (4.3.16)

Then the following error estimate holds for the discrete solution (u_i, p_i) ∈ X_i × Q_i:

‖u − u_i‖_X + ‖p − p_i‖_Q ≤ C ( inf_{v_i∈X_i} ‖u − v_i‖_X + inf_{q_i∈Q_i} ‖p − q_i‖_Q ),   (4.3.17)

where C > 0 is a constant dependent only on the bilinear forms a and b and the constant β.

Although the above error estimate indicates the importance of the discrete inf-sup condition (4.3.16), in this chapter we develop an adaptive algorithm where there is no need to pose such a requirement on the approximation spaces.

4.4 Methods of Optimal Complexity for Solving the Stokes Problem

For (T_i)_i being the family of all triangulations that can be constructed from a fixed initial triangulation T₀ of Ω by red-refinements, let (X_{T_i}, Q_{T_i}, Y_{T_i})_i be the family of finite element approximation spaces defined by

X_{T_i} := X ∩ [C(Ω)]² ∩ ∏_{K∈T_i} [P₁(K)]²,   (4.4.1)
Q_{T_i} := Q ∩ ∏_{K∈T_i} P₀(K),   (4.4.2)
Y_{T_i} := ∏_{K∈T_i} [P₀(K)]².   (4.4.3)


Defining for any V ⊂ X and w ∈ X

e(w, V)_X := inf_{w̄∈V} ‖w − w̄‖_X,   (4.4.4)

the error of the best approximation from the best space X_{T_i} with underlying triangulation containing at most n more triangles than T₀ is given by

σ_n^X(w) = inf_{T_i∈(T_j)_j, #T_i−#T₀≤n} e(w, X_{T_i})_X.   (4.4.5)

We now define classes of vector fields for which the errors of the best approximations fromthe best spaces decay with certain rates.

Definition 4.4.1 For any s > 0, let A^s(X) = A^s(X, (X_{T_i})_i) be the class of vector fields w ∈ X such that for some M > 0

σ_n^X(w) ≤ M n^{−s},  n = 1, 2, . . . .   (4.4.6)

We equip A^s(X) with a seminorm defined as the smallest M for which (4.4.6) holds:

|w|_{A^s(X)} := sup_{n≥1} n^s σ_n^X(w),   (4.4.7)

and with the norm

‖w‖_{A^s(X)} := |w|_{A^s(X)} + ‖w‖_X.   (4.4.8)

In a similar way we also define the classes A^s(Q) = A^s(Q, (Q_{T_i})_i) and A^s(X′) = A^s(X′, (Y_{T_i})_i), using the approximation spaces (Q_{T_i})_i and (Y_{T_i})_i, respectively.
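Numerically, membership in A^s shows up as a straight line of slope −s in a log-log plot of σ_n versus n, and the seminorm (4.4.7) is a plain supremum. A minimal sketch, with a synthetic error sequence standing in for actual best-approximation errors:

```python
import numpy as np

# Hypothetical best-approximation errors sigma_n for n = 1..N, here
# synthesized with a known decay sigma_n = 2 * n^{-s}, s = 0.5.
N = 1000
n = np.arange(1, N + 1)
sigma = 2.0 * n ** -0.5

# Seminorm |w|_{A^s} = sup_n n^s sigma_n for a candidate rate s:
def As_seminorm(sigma, s):
    n = np.arange(1, len(sigma) + 1)
    return np.max(n ** s * sigma)

# The realized rate can be read off a log-log fit of sigma_n vs n:
s_fit = -np.polyfit(np.log(n), np.log(sigma), 1)[0]
print(round(s_fit, 2))          # -> 0.5
print(As_seminorm(sigma, 0.5))  # ~ 2, the constant M in (4.4.6)
```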

It should be mentioned here that for the Stokes problem on an L-shaped domain, even with a smooth right-hand side, one generally has

u ∈ [H^{1+r}(Ω)]², p ∈ H^r(Ω), only if r < 0.54448373678246.   (4.4.9)

In contrast, in [16] it was shown that the solutions of the Stokes problem have arbitrarily high Besov regularity independent of the shape of the domain, depending only on the smoothness of the right-hand side, revealing the potential of adaptive methods.

To solve the Stokes problem numerically, we need to be able to approximate the right-hand side within any given tolerance. We assume the availability of the following routine:

Algorithm 4.1 RHS

RHS[T, f, ε] → [T̄, f_T̄]
/* Input of the routine:
• triangulation T
• f ∈ X′
• ε > 0
Output: admissible triangulation T̄ ⊃ T and an f_T̄ ∈ Y_T̄ such that ‖f − f_T̄‖_{X′} ≤ ε */


As in the previous chapter, we will call the pair (f, RHS) s-optimal if there exists an absolute constant c_f > 0 such that for any ε > 0 and triangulation T, the call of the algorithm RHS performs such that both

#T̄ and the number of flops required by the call are ≲ #T + c_f^{1/s} ε^{−1/s}.   (4.4.10)

We recall that, for a given s, such a pair can only exist if f ∈ A^s(X′).
We will say that an adaptive method is optimal if it produces a sequence of approximations with respect to triangulations with cardinalities that are at most a constant multiple larger than the optimal ones. More precisely, we define

Definition 4.4.2 Suppose that the solution of the Stokes problem is such that (u, p) ∈ (A^s(X), A^s(Q)) and that there exists a routine RHS such that the pair (f, RHS) is s-optimal. Then we shall say that an adaptive method is optimal if for any ε > 0 it produces a triangulation T_i from the family (T_j)_j with #T_i ≲ ε^{−1/s} (|u|_{A^s(X)}^{1/s} + ‖p‖_{A^s(Q)}^{1/s} + c_f^{1/s}) and an approximation (w_{T_i}, q_{T_i}) ∈ (X_{T_i}, Q_{T_i}) with ‖u − w_{T_i}‖_X + ‖p − q_{T_i}‖_Q ≤ ε, taking only ≲ ε^{−1/s} (|u|_{A^s(X)}^{1/s} + ‖p‖_{A^s(Q)}^{1/s} + c_f^{1/s}) flops.

4.5 Derefinement/optimization of finite element approximation

In this section we deal with the task of how to optimize a finite element approximation, i.e., how to derefine those elements in the current triangulation that hardly contribute to the quality of the approximation, but, because of their number, may spoil the complexity of the algorithm.

4.5.1 Derefinement of pressure approximation

We first consider the approximations for the pressure. Suppose we are given p_T ∈ Q_T. We consider the tree T(T) that corresponds to the triangulation T, and for any K ∈ T(T) we define an error functional as

e(K) := inf_{p_K∈P₀(K)} ‖p_K − (p_T)|_K‖²_{L₂(K)}.   (4.5.1)

Then, for any subtree T ⊂ T(T) with set of leaves L(T), we define

E(T) := Σ_{K∈L(T)} e(K) = inf_{p_{T(T)}∈Q_{T(T)}} ‖p_T − p_{T(T)}‖²_{L₂(Ω)},   (4.5.2)

which is just the error in the best approximation for p_T from Q_{T(T)}. When implemented in a leaves-to-roots recursive way, all e(K) for K ∈ T(T) can be computed in O(#T(T)) = O(#T) operations. Further, we define a modified error functional by ẽ(K) := e(K) for all root nodes and, whenever ẽ(K) has been defined, for each child K_i of K by

ẽ(K_i) := [ Σ_{j=1}^4 e(K_j) / ( Σ_{j=1}^4 e(K_j) + ẽ(K) ) ] ẽ(K),   (4.5.3)

where K₁, . . . , K₄ denote the children of K.


Clearly, knowing e(K) for all K ∈ T(T), the computation of ẽ(K) for each K ∈ T(T) requires not more than O(1) operations. Now, in order to find a (quasi-) optimal piecewise constant finite element approximation p_T̂ with respect to T̂ ⊂ T, we use the following derefinement algorithm.

Algorithm 4.2 DEREFINE-Q

DEREFINE-Q[T, p_T, ε] → [T̂, p_T̂]
T := T(T₀)
while E(T) > ε² do
  compute ρ = max_{K∈L(T)} ẽ(K)
  for all K ∈ L(T) with ẽ(K) = ρ, add all children of K to T
endwhile
T̂ := T(T)
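The loop above can be sketched on a toy quadtree. In this illustration the local errors e(K) are synthetic, and, for simplicity, leaves are expanded by the raw error e(K) rather than by the modified functional ẽ(K) of (4.5.3); all names and data are assumptions for illustration, not part of the thesis code:

```python
# Greedy tree growth as in DEREFINE-Q, on a toy quadtree. Nodes are
# (level, index); red refinement gives each node 4 children. The branch
# idx == 0 carries most of the error, mimicking a corner singularity.

def children(K):
    lvl, idx = K
    return [(lvl + 1, 4 * idx + j) for j in range(4)]

def e(K):                       # synthetic local errors (4.5.1)
    lvl, idx = K
    return (16.0 ** -lvl) * (10.0 if idx == 0 else 1.0)

def E(leaves):                  # E(T) = sum of e over the leaves (4.5.2)
    return sum(e(K) for K in leaves)

eps = 0.0633
leaves = {(0, 0)}               # start from the root
while E(leaves) > eps ** 2:
    rho = max(e(K) for K in leaves)
    for K in [K for K in leaves if e(K) == rho]:
        leaves.remove(K)
        leaves.update(children(K))

print(E(leaves) <= eps ** 2)    # True: target accuracy reached
depth = max(K[0] for K in leaves)
print(depth > min(K[0] for K in leaves))  # True: singular branch deeper
```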

Actually, in order to avoid a log-factor in the operation count due to the sorting of the ẽ(K) needed to find max_{K∈L(T)} ẽ(K), we shall use an approximate sorting based on binary binning. With this small modification the following can be proven.
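The binary binning idea is that, instead of sorting, each value is dropped into a dyadic bin; scanning the bins from the top then returns an element within a factor 2 of the true maximum in O(1) per query. A sketch with hypothetical data:

```python
import math

def bin_index(rho, R):
    # k such that 2^{-(k+1)} R < rho <= 2^{-k} R; R bounds all errors
    return max(0, math.ceil(math.log2(R / rho)) - 1) if rho > 0 else None

R = 1.0
errors = [0.9, 0.3, 0.26, 0.12, 0.02]
bins = {}
for rho in errors:
    bins.setdefault(bin_index(rho, R), []).append(rho)

top = min(k for k in bins if k is not None)   # first nonempty bin
quasi_max = max(bins[top])
print(quasi_max >= 0.5 * max(errors))         # within factor 2 of the max
```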

Theorem 4.5.1 The algorithm DEREFINE-Q[T, p_T, ε] → [T̂, p_T̂] produces a triangulation T̂ and an approximation p_T̂ such that ‖p_T − p_T̂‖_Q ≤ ε. There exist absolute constants t₁, T₂ > 0, necessarily with t₁ ≤ 1 ≤ T₂, such that for any triangulation T̃ for which there exists a p_T̃ ∈ Q_T̃ with ‖p_T − p_T̃‖_Q ≤ √t₁ ε, we have #T̂ ≤ T₂ #T̃.
Moreover, the call of the algorithm requires ≲ #T + max{0, log(ε⁻¹‖p_T‖_Q)} flops.

Proof. See [8], with a small modification because of the binary binning from Proposition 5.3 in [36]. ♦

The following corollary motivates the use of the derefinement routine described above. It states that by a call of this derefinement routine, any approximation for the pressure can be turned into a (quasi-) optimal one, at the expense of making the error at most a constant factor larger.

Corollary 4.5.2 Let ρ > t₁^{−1/2}. Then for any ε > 0, p ∈ L₂(Ω), triangulation T, and p_T ∈ Q_T with ‖p − p_T‖_Q ≤ ε, for [T̂, p_T̂] := DEREFINE-Q[T, p_T, ρε] we have that ‖p − p_T̂‖_Q ≤ (1 + ρ)ε and #T̂ ≤ T₂ #T̃ for any triangulation T̃ with inf_{p_T̃∈Q_T̃} ‖p − p_T̃‖_Q ≤ (√t₁ ρ − 1)ε.

Proof. With [T̂, p_T̂] := DEREFINE-Q[T, p_T, ρε], the properties of DEREFINE-Q ensure that ‖p_T − p_T̂‖_Q ≤ ρε, and so

‖p − p_T̂‖_Q ≤ ‖p − p_T‖_Q + ‖p_T − p_T̂‖_Q ≤ ε + ρε = (1 + ρ)ε.   (4.5.4)

Further, let T̃ be a triangulation with inf_{p_T̃∈Q_T̃} ‖p − p_T̃‖_Q ≤ (√t₁ ρ − 1)ε; then

inf_{p_T̃∈Q_T̃} ‖p_T − p_T̃‖_Q ≤ inf_{p_T̃∈Q_T̃} ‖p_T̃ − p‖_Q + ‖p − p_T‖_Q ≤ (√t₁ ρ − 1)ε + ε = √t₁ ρε,   (4.5.5)

and so, from Theorem 4.5.1, we conclude that #T̂ ≤ T₂ #T̃. ♦


4.5.2 Derefinement of velocity approximation

Let V∗ be the set of all vertices of all triangles that can be created by recursive uniform red-refinements of T₀. V∗ can be organized as a tree as follows. The set of the interior vertices V_{T₀} are the roots. Assuming that the vertices after k uniform refinement steps have been organized in a tree, each of the vertices newly created by the next uniform refinement step is assigned to be a child of an already existing vertex in the following way. The newly created vertex is the midpoint of an edge between two triangles. One of the 4 vertices of these triangles, not being on the boundary of the domain, is assigned to be the parent of the newly created vertex. It is not difficult to give a deterministic rule for these assignments.
In [36], a wavelet basis Ψ = {ψ_v : v ∈ V∗} was constructed for H¹₀(Ω). Using this wavelet basis we define the wavelet basis for X as Ψ̄ = {ψ_v e_i : v ∈ V∗, i ∈ {1, 2}}. Each u ∈ X has a unique representation u = Σ_{v∈V∗} Σ_{i=1}² w_{v,i} ψ_v e_i with respect to this wavelet basis. With

‖u‖_Ψ̄ := (Σ_{v∈V∗} Σ_{i=1}² w²_{v,i})^{1/2},

we define λ_Ψ̄, Λ_Ψ̄ > 0 to be the largest and smallest constants, respectively, such that

λ_Ψ̄ ‖u‖²_Ψ̄ ≤ ‖u‖²_X ≤ Λ_Ψ̄ ‖u‖²_Ψ̄  (u ∈ X).   (4.5.6)

The condition number of the wavelet basis Ψ̄ is then given by κ_Ψ̄ := Λ_Ψ̄/λ_Ψ̄. Further, given an admissible triangulation T, a wavelet basis for X_T is given by Ψ̄_T = {ψ_v e_i : v ∈ V_T, i ∈ {1, 2}}. At this point our restriction to admissible triangulations becomes essential, since the third property of Proposition 4.2 guarantees that for any ψ_v e_i ∈ Ψ̄_T it holds that ψ_v e_i ∈ X_T, so that this collection is indeed a basis for X_T.

In [36] a coarsening routine COARSE was developed to derefine a scalar continuous piecewise linear approximation. This routine is based on the transformation to the wavelet basis Ψ and a call of a routine similar to DEREFINE-Q on the vertex tree corresponding to the current admissible triangulation. Having the wavelet basis Ψ̄ at hand, we can easily generalize the coarsening routine COARSE from [36] to an algorithm for the derefinement of a vector-field approximation u_T ∈ X_T. Since this derefinement routine works precisely as COARSE, with the wavelet basis Ψ̄ instead of Ψ and the error functional e(v) := Σ_{ṽ a descendant of v in the vertex tree} (|w_{ṽ,1}|² + |w_{ṽ,2}|²), we present only the prototype and the properties of this algorithm.


Algorithm 4.3 DEREFINE-X

DEREFINE-X[T, u_T, ε] → [T̂, u_T̂]
/* Input:
• triangulation T
• u_T ∈ X_T
• ε > 0
Output: A triangulation T̂ and an approximation u_T̂ ∈ X_T̂ such that ‖u_T − u_T̂‖_Ψ̄ ≤ ε, and such that for any triangulation T̃ for which there exists a u_T̃ ∈ X_T̃ with ‖u_T − u_T̃‖_Ψ̄ ≤ √t₁ ε, we have #T̂ ≤ C_C #T̃, with C_C > 0 being some absolute constant.
Moreover, the call of the algorithm requires ≲ #T + max{0, log(ε⁻¹‖u_T‖_Ψ̄)} flops. */

Similarly to Corollary 4.5.2 we have

Corollary 4.5.3 Let γ > t₁^{−1/2}, with t₁ from Theorem 4.5.1. Then for any ε > 0, u ∈ X, triangulation T, and u_T ∈ X_T with ‖u − u_T‖_Ψ̄ ≤ ε, for [T̂, u_T̂] := DEREFINE-X[T, u_T, γε] we have that ‖u − u_T̂‖_Ψ̄ ≤ (1 + γ)ε and #T̂ ≤ C_C #T̃ for any triangulation T̃ with inf_{u_T̃∈X_T̃} ‖u − u_T̃‖_Ψ̄ ≤ (√t₁ γ − 1)ε.

Finally in this section, we define a routine SCR[T¹, T²] → [T] that constructs the smallest common refinement T of the input triangulations T¹ and T².

4.6 Adaptive Fixed Point Algorithms for the Stokes Problem: Analysis and Convergence

For convenience, let us recall the Stokes problem in operator notation:

Given f ∈ X′, find (u, p) ∈ X × Q such that

A u + B∗p = f,
Bu = 0.   (4.6.1)

Since A is an isomorphism between X and X′, we can solve the first equation in (4.6.1) for u:

u = A⁻¹(f − B∗p).   (4.6.2)

Substituting this into the second equation in (4.6.1) gives

B A⁻¹ B∗ p = B A⁻¹ f.   (4.6.3)

The operator S := B A⁻¹ B∗ : Q → Q is the Schur complement operator, and it is known to be symmetric positive definite with ‖S‖_{Q→Q} ≤ 1 [23]. The problem (4.6.3) can


be reformulated as a fixed point problem for the operator

N_α : Q → Q,  q ↦ q − α(S q − B A⁻¹ f):   (4.6.4)

Find p ∈ Q such that N_α p = p.   (4.6.5)

From

‖N_α q₁ − N_α q₂‖_Q = ‖(I − αS)(q₁ − q₂)‖_Q ≤ ‖I − αS‖_{Q→Q} ‖q₁ − q₂‖_Q for all q₁, q₂ ∈ Q,   (4.6.6)

we see that N_α defines a contractive mapping if ‖I − αS‖_{Q→Q} < 1. Since ‖I − αS‖_{Q→Q} < 1 if 0 < α < 2/‖S‖_{Q→Q}, the iterative process

p_j = (I − αS) p_{j−1} + αF,  F := B A⁻¹ f,   (4.6.7)

converges to the solution of (4.6.3), due to Banach's theorem for contractive mappings. This iteration can also be viewed as a generalization of the damped Richardson method to the operator equation (4.6.3). The optimal choice of the parameter α is

α_opt = 2 / (‖S‖_{Q→Q} + ‖S⁻¹‖⁻¹_{Q→Q}),  giving  ‖I − α_opt S‖_{Q→Q} = (κ(S) − 1)/(κ(S) + 1),   (4.6.8)

with κ(S) := ‖S‖_{Q→Q} ‖S⁻¹‖_{Q→Q}. Rewritten in the following form, this iterative method is known as the Uzawa algorithm:

u_j = A⁻¹(f − B∗ p_{j−1}),
p_j = p_{j−1} + α B u_j.   (4.6.9)
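The idealized iteration (4.6.9) can be reproduced on a small discrete saddle-point system; the matrices below are toy data standing in for the FEM operators of this chapter:

```python
import numpy as np

# Uzawa iteration (4.6.9) for  A u + B^T p = f,  B u = 0,
# with a toy SPD A and full-rank B (hypothetical discretization).
rng = np.random.default_rng(1)
n, m = 20, 6
R = rng.standard_normal((n, n)); A = R @ R.T + n * np.eye(n)
B = rng.standard_normal((m, n))
f = rng.standard_normal(n)

S = B @ np.linalg.solve(A, B.T)          # Schur complement B A^{-1} B^T
lam = np.linalg.eigvalsh(S)
alpha = 2.0 / (lam[-1] + lam[0])         # alpha_opt as in (4.6.8)

p = np.zeros(m)
for _ in range(2000):
    u = np.linalg.solve(A, f - B.T @ p)  # u_j = A^{-1}(f - B^* p_{j-1})
    p = p + alpha * (B @ u)              # p_j = p_{j-1} + alpha B u_j

# Compare with the direct solution of the saddle-point system:
K = np.block([[A, B.T], [B, np.zeros((m, m))]])
up = np.linalg.solve(K, np.concatenate([f, np.zeros(m)]))
print(np.linalg.norm(p - up[n:]) < 1e-6)   # True: converged
```

With α = α_opt the contraction factor is (κ(S) − 1)/(κ(S) + 1), so the iteration count depends only on the condition number of the Schur complement.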

Of course, (4.6.9) is an idealized iteration, and we will analyse an inexact version of it, where the application of the operator A⁻¹ is approximated by a convergent adaptive finite element algorithm that we will develop in Section 4.7. At the moment it is enough to present a prototype of this routine.


Algorithm 4.4 INNERELLIPTIC

INNERELLIPTIC[T, f, p_T, u_T, ε₀, ε] → [T̄, u_T̄]
/* Input of the routine:
• input triangulation T
• f ∈ X′
• p_T ∈ Q_T, u_T ∈ X_T
• ε₀ such that, with

u^{p_T} := A⁻¹(f − B∗ p_T),   (4.6.10)

‖u^{p_T} − u_T‖_X ≤ ε₀
• ε > 0
Output: a triangulation T̄ ⊃ T and u_T̄ ∈ X_T̄ with ‖u^{p_T} − u_T̄‖_X ≤ ε. Moreover, if for some s > 0 the pair (f, RHS) is s-optimal, and the input satisfies ε₀ ≲ ε, then both the cardinality of the output triangulation T̄ and the number of flops required by the routine are ≲ #T + ε^{−1/s} c_f^{1/s}, with c_f as in the definition of s-optimality. */

Using INNERELLIPTIC, we are ready to define our adaptive routine AFEMSTOKESSOLVER.

Theorem 4.6.1 The algorithm AFEMSTOKESSOLVER[f, ε] → [T, u_T, p_T] terminates with

‖p − p_T‖_Q ≤ ε,  ‖u − u_T‖_X ≤ λ_Ψ̄^{−1/2} (η⁻¹ + 1) ε.   (4.6.11)

Moreover, assuming ε₀ ≲ ‖p‖_Q, if u ∈ A^s(X), p ∈ A^s(Q) and the pair (f, RHS) is s-optimal, then both

the number of flops and #T are ≲ ε^{−1/s} (‖u‖_{A^s(X)}^{1/s} + ‖p‖_{A^s(Q)}^{1/s} + c_f^{1/s}),   (4.6.12)

i.e., the algorithm has optimal computational complexity.


Algorithm 4.5 AFEMSTOKESSOLVER

AFEMSTOKESSOLVER[f, ε] → [T, u_T, p_T]
/* Input parameters of the algorithm: f ∈ X′, ε > 0
Output: a triangulation T, u_T ∈ X_T, p_T ∈ Q_T such that ‖u − u_T‖_X + ‖p − p_T‖_Q ≤ ε */
Select an initial triangulation T⁰₀, approximations u_{T⁰₀} ∈ X_{T₀}, p_{T⁰₀} ∈ Q_{T₀}, and constants 0 < γ < 1, δ < 1/2, ρ > t₁^{−1/2} (with t₁ from Theorem 4.5.1), η ∈ (max{γ, β}, 1), where β := ‖I − αS‖_{Q→Q}, with α and δ such that δ(ρ + 1) < 1 and 0 < α < 2/‖S‖_{Q→Q}, and ε₀ such that ‖u − u_{T⁰₀}‖_X + ‖p − p_{T⁰₀}‖_Q ≤ ε₀.
M := min{j ∈ N : η^j (αj + 1) ≤ δ}
N := min{i ∈ N : δ((ρ + 1)δ)^{i−1} ε₀ ≤ ε}
for i = 0, . . . , N do
  if i ≠ 0 then
    [T^p, p_{T^p}] := DEREFINE-Q[T^M_{i−1}, p_{T^M_{i−1}}, ρδε_{i−1}]
    [T^u, u_{T^u}] := DEREFINE-X[T^M_{i−1}, u_{T^M_{i−1}}, ρ λ_Ψ̄^{−1/2} (1/η + 1) δε_{i−1}]
    [T⁰_i] := SCR[T^p, T^u],  u_{T⁰_i} := u_{T^u},  p_{T⁰_i} := p_{T^p}
    ε_i := (ρ + 1)δε_{i−1}
  endif
  ε⁰₀ := (κ_Ψ̄^{1/2} (η⁻¹ + 1) + 1) ε_i
  for j = 1, . . . , M do
    [T^j_i, u_{T^j_i}] := INNERELLIPTIC[T^{j−1}_i, f, p_{T^{j−1}_i}, u_{T^{j−1}_i}, ε^{j−1}₀, γ^j ε_i]
    p_{T^j_i} := p_{T^{j−1}_i} + α B u_{T^j_i}
    ε^j₀ := (η^j (αj + 1) + η^{j−1} (α(j − 1) + 1) + γ^j) ε_i
  endfor
endfor
T := T^M_N,  p_T := p_{T^M_N},  u_T := u_{T^M_N}
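The iteration counts M and N above can be checked numerically for a concrete parameter choice; the values below are hypothetical examples satisfying the stated constraints (β is not computed here, so the admissibility of η is an assumption):

```python
# Example parameter values (assumed admissible): delta*(rho+1) < 1,
# gamma < eta < 1, as required by AFEMSTOKESSOLVER.
delta, rho, eta, alpha = 0.4, 1.2, 0.6, 0.5
gamma, eps0, eps = 0.5, 1.0, 1e-4
assert delta * (rho + 1) < 1 and gamma < eta < 1

# M := min{ j : eta^j (alpha*j + 1) <= delta }
M = next(j for j in range(1, 1000)
         if eta ** j * (alpha * j + 1) <= delta)
# N := min{ i : delta * ((rho+1)*delta)^(i-1) * eps0 <= eps }
N = next(i for i in range(1, 1000)
         if delta * ((rho + 1) * delta) ** (i - 1) * eps0 <= eps)

print(M, N)   # -> 4 66
```

So for these values each outer iteration performs 4 inexact Uzawa steps, and 66 outer iterations suffice to reduce the initial error 1.0 below 1e-4.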

Proof. By construction, we have ‖p − p_{T⁰₀}‖_Q ≤ ε₀. Considering the inner loop of the algorithm, it is easy to see that

p − p_{T^j_i} = (I − αS)(p − p_{T^{j−1}_i}) + αB(u_{T^j_i} − u^{p_{T^{j−1}_i}}),   (4.6.13)

where u^{p_{T^{j−1}_i}} corresponds to p_{T^{j−1}_i} as in (4.6.10). Since, see [23],

‖Bu‖_Q ≤ ‖u‖_X,   (4.6.14)


with η := max{β, γ} we have

‖p − p_{T^j_i}‖_Q ≤ ‖I − αS‖_{Q→Q} ‖p − p_{T^{j−1}_i}‖_Q + α ‖u_{T^j_i} − u^{p_{T^{j−1}_i}}‖_X
  ≤ β^j ‖p − p_{T⁰_i}‖_Q + αε_i Σ_{l=0}^{j−1} β^l γ^{j−l}
  ≤ η^j ‖p − p_{T⁰_i}‖_Q + αε_i jη^j
  ≤ η^j (αj + 1) ε_i,   (4.6.15)

where we used as induction hypothesis that ‖p − p_{T⁰_i}‖_Q ≤ ε_i. Now, with M as in the algorithm, we conclude that

‖p − p_{T^M_i}‖_Q ≤ δε_i.   (4.6.16)

A call of DEREFINE-Q in the next iteration of the outer loop results in

‖p − p_{T⁰_{i+1}}‖_Q ≤ (ρ + 1)δε_i = ε_{i+1}.   (4.6.17)

We can show now that a similar error reduction also holds for the velocities. Indeed,

‖u − u^{p_{T^{j−1}_i}}‖_X = sup_{v∈X} (A⁻¹B∗(p_{T^{j−1}_i} − p), v)_X / ‖v‖_X = sup_{v∈X} (p_{T^{j−1}_i} − p, Bv) / ‖v‖_X ≤ ‖p_{T^{j−1}_i} − p‖_Q.   (4.6.18)

Further,

‖u − u_{T^j_i}‖_X ≤ ‖u − u^{p_{T^{j−1}_i}}‖_X + ‖u^{p_{T^{j−1}_i}} − u_{T^j_i}‖_X
  ≤ ‖p − p_{T^{j−1}_i}‖_Q + γ^j ε_i
  ≤ (η^{j−1} (α(j − 1) + 1) + γ^j) ε_i.   (4.6.19)

In particular, using the definitions of M and η, we have

‖u − u_{T^M_i}‖_X ≤ (η⁻¹ + 1)δε_i.   (4.6.20)

So, before the call of DEREFINE-X in the next iteration of the outer loop it holds that

‖u − u_{T^M_i}‖_Ψ̄ ≤ λ_Ψ̄^{−1/2} ‖u − u_{T^M_i}‖_X ≤ λ_Ψ̄^{−1/2} (η⁻¹ + 1)δε_i.   (4.6.21)

Then the call of this routine gives

‖u − u_{T⁰_{i+1}}‖_X ≤ Λ_Ψ̄^{1/2} ‖u − u_{T⁰_{i+1}}‖_Ψ̄ ≤ κ_Ψ̄^{1/2} (η⁻¹ + 1)δε_i (1 + ρ) = κ_Ψ̄^{1/2} (η⁻¹ + 1)ε_{i+1}.   (4.6.22)

By (4.6.16), (4.6.22) and the definition of N, the proof of the first part of the theorem is completed.


Let us analyse the computational complexity of the algorithm. Since we start with the triangulation T⁰₀, where #T⁰₀ ≲ 1, and by assumption ε₀ ≲ ‖p‖_Q ≤ ‖p‖_{A^s(Q)}, we have

#T⁰₀ ≲ ε₀^{−1/s} (‖p‖_{A^s(Q)}^{1/s} + ‖u‖_{A^s(X)}^{1/s}).   (4.6.23)

For i > 0, before the call of DEREFINE-Q it holds that ‖p − p_{T^M_{i−1}}‖_Q ≤ δε_{i−1}. After the call of this routine, its properties guarantee that for any triangulation T̃ such that

inf_{p_T̃∈Q_T̃} ‖p − p_T̃‖_Q ≤ (√t₁ ρ − 1)δε_{i−1}   (4.6.24)

we have

#T^p ≲ T₂ #T̃.   (4.6.25)

Since p ∈ A^s(Q), we conclude that

#T^p ≤ T₂ (#T₀ + ((√t₁ ρ − 1)δε_{i−1})^{−1/s} ‖p‖_{A^s(Q)}^{1/s}) ≲ ε_i^{−1/s} ‖p‖_{A^s(Q)}^{1/s}.   (4.6.26)

Similarly, since u ∈ A^s(X), after the call of DEREFINE-X we have

#T^u ≲ ε_i^{−1/s} ‖u‖_{A^s(X)}^{1/s}.   (4.6.27)

Consequently, the smallest common refinement T⁰_i constructed by the routine SCR satisfies

#T⁰_i ≲ ε_i^{−1/s} (‖p‖_{A^s(Q)}^{1/s} + ‖u‖_{A^s(X)}^{1/s}).   (4.6.28)

In the internal for-loop, before the call of INNERELLIPTIC[T^{j−1}_i, f, p_{T^{j−1}_i}, u_{T^{j−1}_i}, ε^{j−1}₀, γ^j ε_i], using (4.6.18) and (4.6.19) or, when j = 1, (4.6.22), and that 1 ≤ j ≤ M ≲ 1, we obtain

‖u^{p_{T^{j−1}_i}} − u_{T^{j−1}_i}‖_X ≤ ‖u^{p_{T^{j−1}_i}} − u‖_X + ‖u − u_{T^{j−1}_i}‖_X ≤ ε^{j−1}₀ ≲ γ^j ε_i.   (4.6.29)

In view of the properties of INNERELLIPTIC, after the call of this routine it holds that

#T^j_i ≲ (η^j ε_i)^{−1/s} c_f^{1/s} + #T^{j−1}_i.   (4.6.30)

Since Σ_{j=1}^M (η^j ε_i)^{−1/s} ≲ (η^M ε_i)^{−1/s}, we conclude that

#T^M_i ≲ (η^M ε_i)^{−1/s} c_f^{1/s} + #T⁰_i ≲ ε_i^{−1/s} (‖u‖_{A^s(X)}^{1/s} + ‖p‖_{A^s(Q)}^{1/s} + c_f^{1/s}),   (4.6.31)

where in the last inequality we have used (4.6.28) or, when i = 0, (4.6.23).
For i > 0, the call of DEREFINE-Q[T^M_{i−1}, p_{T^M_{i−1}}, ρδε_{i−1}] requires a

number of flops ≲ #T^M_{i−1} + max{0, log((ρδε_{i−1})⁻¹ ‖p_{T^M_{i−1}}‖_Q)}.   (4.6.32)


Due to log((ρδε_{i−1})⁻¹ ‖p_{T^M_{i−1}}‖_Q) ≤ (ρδε_{i−1})^{−1/s} ‖p_{T^M_{i−1}}‖_Q^{1/s} ≲ ε_i^{−1/s} ‖p‖_{A^s(Q)}^{1/s}, and the fact that similar results are valid for the call of DEREFINE-X, we find that the calls of these routines, including SCR, require a

number of flops ≲ ε_i^{−1/s} (‖u‖_{A^s(X)}^{1/s} + ‖p‖_{A^s(Q)}^{1/s} + c_f^{1/s}).   (4.6.33)

Since the j-th call of INNERELLIPTIC requires a number of flops ≲ (η^j ε_i)^{−1/s} c_f^{1/s} + #T^j_i, we find that the whole internal for-loop needs a

number of flops ≲ ε_i^{−1/s} (‖u‖_{A^s(X)}^{1/s} + ‖p‖_{A^s(Q)}^{1/s} + c_f^{1/s}).   (4.6.34)

Finally, recalling that

N = min{j ∈ N : (1 + κ_Ψ̄^{1/2} (η⁻¹ + 1)) ε_{j+1} ≤ ε},   (4.6.35)

we have ε_i ≳ ε, and for the whole algorithm [T, u_T, p_T] := AFEMSTOKESSOLVER[f, ε] it holds that both

the number of flops and #T are ≲ ε^{−1/s} (‖u‖_{A^s(X)}^{1/s} + ‖p‖_{A^s(Q)}^{1/s} + c_f^{1/s}),   (4.6.36)

completing the proof. ♦

4.7 Convergent Adaptive Finite Element Algorithm for the Inner Elliptic Problem

In this section we develop the INNERELLIPTIC routine that approximates the inverse of A in (4.6.9), or, equivalently, that constructs an approximation to the solution of the following problem:

Given a triangulation T, p_T ∈ Q_T, and f ∈ X′, find u^{p_T} ∈ X such that

a(u^{p_T}, v) = f(v) − b(v, p_T) for all v ∈ X.

With T̄ ⊃ T, defining A_T̄ : X_T̄ → (X_T̄)′ (⊃ X′) by (A_T̄ u_T̄)(v_T̄) = a(u_T̄, v_T̄) for u_T̄, v_T̄ ∈ X_T̄, A_T̄⁻¹(f − B∗p_T) is the Galerkin approximation of u^{p_T}. One easily verifies that for f ∈ X′, ‖A_T̄⁻¹ f‖_X ≤ ‖f‖_{X′}.
In [36], an adaptive algorithm of optimal complexity has been developed for a scalar elliptic problem. Here we generalize this algorithm to the system of elliptic equations (4.7.1), also adapting it to the special term B∗p_T ∈ X′ in the right-hand side. Another important issue we have to pay attention to here is the fact that our inner solver will be called in every Uzawa iteration with a different initial triangulation T and a different right-hand side.
We start the design of an adaptive algorithm for the problem described above with the development of an a posteriori error estimator. Let T̄ ⊃ T and w_T̄ ∈ X_T̄. In the following


it is only needed that p_T̄ ∈ Q_T̄; for that reason we will write p_T̄, u^{p_T̄} instead of p_T, u^{p_T}, respectively. For f ∈ [L₂(Ω)]², using integration by parts, we find

a(u^{p_T̄} − w_T̄, v) = f(v) − b(v, p_T̄) − a(w_T̄, v)
  = Σ_{K∈T̄} ( ∫_K (f + ∆w_T̄) · v − ∫_K ∇p_T̄ · v + ∫_{∂K} p_T̄ v · ν − Σ_{ℓ⊂∂K} ∫_ℓ Σ_{i=1}² (∂(w_T̄)_i/∂ν_ℓ) v_i )
  = Σ_{K∈T̄} ∫_K R_K · v − Σ_{ℓ∈E_T̄} ∫_ℓ R_ℓ · v,   (4.7.1)

where R_K is the element residual

R_K := (f + ∆w_T̄ − ∇p_T̄)|_K,   (4.7.2)

and R_ℓ denotes the edge residual

R_ℓ := [ (∂(w_T̄)₁/∂ν_ℓ, ∂(w_T̄)₂/∂ν_ℓ) − p_T̄ ν_ℓ ],   (4.7.3)

with ν_ℓ being a unit vector orthogonal to ℓ, and [·] denoting the jump across ℓ. We set up the a posteriori error estimator

E(T̄, f, w_T̄, p_T̄) := ( Σ_{K∈T̄} diam(K)² ‖R_K‖²_{[L₂(K)]²} + Σ_{ℓ∈E_T̄} diam(ℓ) ‖R_ℓ‖²_{[L₂(ℓ)]²} )^{1/2}.   (4.7.4)
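Given the per-element and per-edge quantities, (4.7.4) is a plain weighted sum of squares. A direct transcription (with hypothetical input arrays; computing R_K and R_ℓ from an actual discrete solution is the task of the surrounding algorithm):

```python
import numpy as np

# E(T, f, w, p) of (4.7.4) from per-triangle and per-edge data.
def estimator(diam_K, RK_norm, diam_e, Re_norm):
    elem = np.sum(diam_K ** 2 * RK_norm ** 2)  # sum_K diam(K)^2 ||R_K||^2
    edge = np.sum(diam_e * Re_norm ** 2)       # sum_l diam(l) ||R_l||^2
    return np.sqrt(elem + edge)

# Hypothetical data for 4 triangles and 3 interior edges:
diam_K = np.array([0.5, 0.5, 0.25, 0.25])
RK     = np.array([1.0, 0.8, 2.0, 0.1])
diam_e = np.array([0.5, 0.25, 0.25])
Re     = np.array([0.3, 1.0, 0.0])

eta = estimator(diam_K, RK, diam_e, Re)
print(eta > 0)
```

Note the different scaling of the two parts: the element residuals carry a diam(K)² weight, the edge residuals only diam(ℓ).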

Before stating the theorem about our error estimator, we need to recall one more result that will be invoked in the proof. For K ∈ T̄ and ℓ ∈ E_T̄ we introduce the notations

Ω_K := ∪{K′ ∈ T̄ : K ∩ K′ ≠ ∅},  Ω_ℓ := ∪{K′ ∈ T̄ : ℓ ⊂ K′}.   (4.7.5)

In [36] a piecewise linear (quasi-)interpolant was constructed on admissible triangulations for H¹₀(Ω) functions. We state the properties of its obvious generalization to vector fields in X in the following lemma.

Lemma 4.7.1 Let T̄ be an admissible triangulation of Ω. Then there exists a linear mapping I_T̄ : X → X_T̄ such that

‖v − I_T̄ v‖_{[H^s(K)]²} ≲ diam(K)^{1−s} ‖v‖_{[H¹(Ω_K)]²}  (v ∈ X, s = 0, 1, K ∈ T̄),   (4.7.6)
‖v − I_T̄ v‖_{[L₂(ℓ)]²} ≲ diam(ℓ)^{1/2} ‖v‖_{[H¹(Ω_ℓ)]²}  (v ∈ X, ℓ ∈ E_T̄).   (4.7.7)

Theorem 4.7.2 Let T be an admissible triangulation and T̄ a refinement of T. Assume that f ∈ Y_T, let u^{p_T} = A⁻¹(f − B∗p_T), and let u^{p_T}_T = A_T⁻¹(f − B∗p_T), u^{p_T}_T̄ = A_T̄⁻¹(f − B∗p_T) be its Galerkin approximations on the triangulations T and T̄, respectively.


i) If V_T̄ contains a point inside K ∈ T, then

‖u^{p_T}_T̄ − u^{p_T}_T‖²_{[H¹(K)]²} ≳ diam(K)² ‖R_K‖²_{[L₂(K)]²}.   (4.7.8)

ii) With K₁, K₂ ∈ T such that ℓ := K₁ ∩ K₂ ∈ E_T, if V_T̄ contains a point in the interior of both K₁ and K₂, then

‖u^{p_T}_T̄ − u^{p_T}_T‖²_{[H¹(K₁∪K₂)]²} ≳ diam(ℓ) ‖R_ℓ‖²_{[L₂(ℓ)]²}.   (4.7.9)

iii) If for some F ⊂ T and G ⊂ E_T the refinement T̄ satisfies condition i) or ii) for all K ∈ F and ℓ ∈ G, respectively, then

‖u^{p_T}_T̄ − u^{p_T}_T‖²_X ≥ C²_L ( Σ_{K∈F} diam(K)² ‖R_K‖²_{[L₂(K)]²} + Σ_{ℓ∈G} diam(ℓ) ‖R_ℓ‖²_{[L₂(ℓ)]²} )   (4.7.10)

for some absolute constant C_L > 0.

iv)

C_L E(T, f, u^{p_T}_T, p_T) ≤ ‖u^{p_T} − u^{p_T}_T‖_X.   (4.7.11)

v) There exists an absolute constant C_U such that, even for any f ∈ [L₂(Ω)]²,

‖u^{p_T} − u^{p_T}_T‖_X ≤ C_U E(T, f, u^{p_T}_T, p_T).   (4.7.12)

(Figure: two triangles K₁ and K₂ of T sharing the edge ℓ, and their refinements in T̄.)

Proof. We shall use here Verfürth's technique, developed for the analysis of a posteriori error estimators. Actually, one who is familiar with this technique will recognize that it has been used in [5] to prove similar results for conforming triangulations.

(i) To prove the local lower bounds i)-ii), we notice that, due to the conditions on T̄, for all K ∈ T, ℓ ∈ E_T there exist functions ψ_K, ψ_ℓ, continuous piecewise linear with respect to T̄, with the properties

supp(ψ_K) ⊂ K,  supp(ψ_ℓ) ⊂ Ω_ℓ,  ∫_K ψ_K ≂ meas(K),  ∫_ℓ ψ_ℓ ≂ meas(ℓ),   (4.7.13)
0 ≤ ψ_K ≤ 1 on K,  0 ≤ ψ_ℓ ≤ 1 on ℓ,   (4.7.14)
‖ψ_K‖_{L₂(K)} ≲ meas(K)^{1/2},  ‖ψ_ℓ‖_{L₂(Ω_ℓ)} ≲ diam(ℓ).   (4.7.15)


We have

a(u^{p_T}_T̄ − u^{p_T}_T, v) = f(v) − b(v, p_T) − a(u^{p_T}_T, v) = Σ_{K∈T} ∫_K R_K · v − Σ_{ℓ∈E_T} ∫_ℓ R_ℓ · v  (v ∈ X_T̄).   (4.7.16)

Now, for K as in (i), choosing v := R_K ψ_K, which, thanks to R_K being a constant vector field and the properties (4.7.13)-(4.7.15) of ψ_K, is in X_T̄, and using the Bernstein inequality (2.2.7), we have

‖R_K‖²_{[L₂(K)]²} ≂ ∫_K R_K · v = a(u^{p_T}_T̄ − u^{p_T}_T, v)
  ≲ ‖u^{p_T}_T̄ − u^{p_T}_T‖_{[H¹(K)]²} diam(K)⁻¹ ‖v‖_{[L₂(K)]²}
  ≲ diam(K)⁻¹ ‖R_K‖_{[L₂(K)]²} ‖u^{p_T}_T̄ − u^{p_T}_T‖_{[H¹(K)]²}.   (4.7.17)

(ii) In a similar way, for ℓ as in (ii), let v := R_ℓ ψ_ℓ; then, after some manipulations using (i), we find

∫_ℓ R_ℓ · v = a(u^{p_T}_T̄ − u^{p_T}_T, v) + Σ_{K∈Ω_ℓ} ∫_K R_K · v
  ≲ Σ_{K∈Ω_ℓ} ( ‖u^{p_T}_T̄ − u^{p_T}_T‖_{[H¹(K)]²} diam(K)⁻¹ ‖v‖_{[L₂(K)]²} + ‖R_K‖_{[L₂(K)]²} ‖v‖_{[L₂(K)]²} )
  ≲ Σ_{K∈Ω_ℓ} ‖u^{p_T}_T̄ − u^{p_T}_T‖_{[H¹(K)]²} diam(K)⁻¹ ‖v‖_{[L₂(K)]²}
  ≲ Σ_{K∈Ω_ℓ} ‖u^{p_T}_T̄ − u^{p_T}_T‖_{[H¹(K)]²} diam(ℓ)^{−1/2} ‖R_ℓ‖_{[L₂(ℓ)]²}.   (4.7.18)

Noting that ∫_ℓ R_ℓ · v ≂ ‖R_ℓ‖²_{[L₂(ℓ)]²}, the statement follows.

iii) Follows from i) and ii).

iv) Let T̄ be a refinement of T such that it satisfies conditions i), ii) for all K ∈ T and all ℓ ∈ E_T, respectively. The statement follows from iii) and the Pythagoras identity

‖u^{p_T} − u^{p_T}_T‖²_X = ‖u^{p_T} − u^{p_T}_T̄‖²_X + ‖u^{p_T}_T̄ − u^{p_T}_T‖²_X.   (4.7.19)

v) Using the Galerkin orthogonality

a(u^{p_T} − u^{p_T}_T, v_T) = 0 for all v_T ∈ X_T,   (4.7.20)


the identity (4.7.1), the Cauchy-Schwarz inequality and Lemma 4.7.1, we find

a(u^{p_T} − u^{p_T}_T, v) = a(u^{p_T} − u^{p_T}_T, v − I_T v)
  = Σ_{K∈T} ∫_K R_K · (v − I_T v) − Σ_{ℓ∈E_T} ∫_ℓ R_ℓ · (v − I_T v)
  ≤ Σ_{K∈T} ‖R_K‖_{[L₂(K)]²} ‖v − I_T v‖_{[L₂(K)]²} + Σ_{ℓ∈E_T} ‖R_ℓ‖_{[L₂(ℓ)]²} ‖v − I_T v‖_{[L₂(ℓ)]²}
  ≲ Σ_{K∈T} ‖R_K‖_{[L₂(K)]²} diam(K) ‖v‖_{[H¹(Ω_K)]²} + Σ_{ℓ∈E_T} ‖R_ℓ‖_{[L₂(ℓ)]²} diam(ℓ)^{1/2} ‖v‖_{[H¹(Ω_ℓ)]²}
  ≲ ( Σ_{K∈T} diam(K)² ‖R_K‖²_{[L₂(K)]²} + Σ_{ℓ∈E_T} diam(ℓ) ‖R_ℓ‖²_{[L₂(ℓ)]²} )^{1/2} ‖v‖_X.   (4.7.21)

Finally, invoking

‖u^{p_T} − u^{p_T}_T‖_X = sup_{v∈X} a(u^{p_T} − u^{p_T}_T, v) / ‖v‖_X,   (4.7.22)

the upper bound follows. ♦

Based on our observations from the previous theorem, we define a routine that constructs a local refinement of a given triangulation such that, as we will see later, an error reduction in our discrete approximation is ensured. In order to be able to apply Theorem 4.7.2, for the moment we will assume that f ∈ Y_T for any triangulation T we encounter, i.e., that f ∈ Y_{T₀}. Later, given f ∈ X′, on any triangulation T we will replace f by an approximation f_T̄ ∈ Y_T̄ produced by the RHS routine. Here either T̄ = T, or T̄ is a proper refinement of T needed to get ‖f − f_T̄‖_{X′} sufficiently small.


Algorithm 4.6 REFINE

REFINE[T, f_T, p_T, w_T, θ] → T̄
/* Input of the routine:

• an admissible triangulation T
• f_T ∈ Y_T, p_T ∈ Q_T
• w_T ∈ X_T
• θ ∈ (0, 1)

Select in O(]T) operations F ⊂ T and G ⊂ E_T, such that

  ∑_{K∈F} diam(K)² ‖R_K‖²_{[L₂(K)]²} + ∑_{ℓ∈G} diam(ℓ) ‖R_ℓ‖²_{[L₂(ℓ)]²} ≥ θ² E(T, f_T, w_T, p_T)²    (4.7.23)

Construct a refinement T̄ of T such that for all K ∈ F and ℓ ∈ G the conditions i)–ii) from Theorem 4.7.2 are satisfied. */
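The selection of F and G realizing (4.7.23) is a bulk-chasing (Dörfler) marking over the local error indicators. A minimal sketch in Python (illustrative only: the squared indicators diam(K)²‖R_K‖² and diam(ℓ)‖R_ℓ‖² are assumed precomputed, and plain sorting is used for brevity, costing O(]T log ]T) instead of the O(]T) achievable with an approximate bucket sort):

```python
def bulk_mark(cell_ind, edge_ind, theta):
    """Select subsets F (cells) and G (edges) whose squared local
    indicators sum to at least theta^2 times the squared estimator.

    cell_ind, edge_ind: dicts mapping cells/edges to their squared
    local error indicators (assumed precomputed)."""
    total = sum(cell_ind.values()) + sum(edge_ind.values())
    items = [(v, 'cell', k) for k, v in cell_ind.items()] + \
            [(v, 'edge', k) for k, v in edge_ind.items()]
    items.sort(reverse=True)          # largest indicators first
    F, G, acc = set(), set(), 0.0
    for v, kind, k in items:
        if acc >= theta**2 * total:
            break
        acc += v
        (F if kind == 'cell' else G).add(k)
    return F, G

# One dominant cell indicator already suffices for theta = 0.5
F, G = bulk_mark({'K1': 9.0, 'K2': 1.0}, {'e1': 2.0}, theta=0.5)
# F == {'K1'}, G == set()
```

Taking the largest indicators first keeps the marked sets small, which is what the quasi-minimal-cardinality arguments later rely on.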

Theorem 4.7.3 (Basic principle of adaptive error reduction) Let T be an admissible triangulation, assume that f ∈ Y_T, and let p_T ∈ Q_T, u^{p_T}_T := A_T^{−1}(f − B*p_T). Taking T̄ = REFINE[T, f, p_T, u^{p_T}_T, θ], then for u^{p_T}_{T̄} := A_{T̄}^{−1}(f − B*p_T) the following error reduction holds:

  ‖u^{p_T} − u^{p_T}_{T̄}‖_X ≤ (1 − (C_L θ / C_U)²)^{1/2} ‖u^{p_T} − u^{p_T}_T‖_X    (4.7.24)

Proof. Using the Galerkin orthogonality, the properties of REFINE and those of the a posteriori error estimator, we easily find

  ‖u^{p_T} − u^{p_T}_{T̄}‖²_X = ‖u^{p_T} − u^{p_T}_T‖²_X − ‖u^{p_T}_{T̄} − u^{p_T}_T‖²_X
    ≤ ‖u^{p_T} − u^{p_T}_T‖²_X − C_L² θ² E²(T, f, u^{p_T}_T, p_T)
    ≤ (1 − (C_L θ / C_U)²) ‖u^{p_T} − u^{p_T}_T‖²_X    (4.7.25)

♦

Aiming at optimal computational complexity, we will solve the Galerkin problems approximately using the following routine.


Algorithm 4.7 GALSOLVE

GALSOLVE[T, f_T, p_T, u^{(0)}_T, ε] → u^ε_T
/* Input of the routine:

• an admissible triangulation T
• f_T ∈ X′_T (⊃ X′), p_T ∈ Q_T
• u^{(0)}_T ∈ X_T
• ε > 0

Output: u^ε_T ∈ X_T such that

  ‖u^{p_T}_T − u^ε_T‖_X ≤ ε    (4.7.26)

where u^{p_T}_T = A_T^{−1}(f_T − B*p_T).

The call of the algorithm takes ≲ max{1, log(ε^{−1}‖u^{p_T}_T − u^{(0)}_T‖_X)} ]T flops. */

An implementation of GALSOLVE that realizes these requirements is, for example, given by the application of Conjugate Gradients to the matrix–vector representation of A_T u^{p_T}_T = f_T − B*p_T with respect to the wavelet basis Ψ_T, which is well-conditioned uniformly in ]T.
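The stated flop bound reflects the behaviour of Conjugate Gradients on a well-conditioned system: the error contracts by a fixed factor per step, so reaching tolerance ε costs a number of iterations proportional to the logarithm of (initial error)/ε, each step being one sparse matrix–vector product of cost O(]T). A generic, self-contained sketch (with a hypothetical well-conditioned random test matrix, not the actual wavelet-represented stiffness matrix):

```python
import numpy as np

def galsolve(A, b, x0, eps):
    """Sketch of GALSOLVE: Conjugate Gradients on a symmetric positive
    definite, well-conditioned matrix A, iterated until the residual
    norm is below eps.  The iteration count grows like log(1/eps)."""
    x = x0.copy()
    r = b - A @ x
    p = r.copy()
    rs = r @ r
    while np.sqrt(rs) > eps:
        Ap = A @ p
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# Well-conditioned SPD test matrix: identity plus a small symmetric perturbation
rng = np.random.default_rng(0)
M = rng.standard_normal((50, 50))
A = np.eye(50) + 0.1 * (M + M.T) / np.sqrt(50)
b = rng.standard_normal(50)
x = galsolve(A, b, np.zeros(50), 1e-10)
```

Because the condition number here stays close to 1, the loop terminates after only a handful of iterations regardless of the dimension, which is the mechanism behind the O(]T) cost per fixed accuracy gain.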

In the previous Theorem 4.7.3, the error reduction was based on the availability of the exact Galerkin solution u^{p_T}_T := A_T^{−1}(f − B*p_T) and on the assumption that f ∈ Y_T. Of course, in practice we will approximate u^{p_T}_T by some u_T ∈ X_T using GALSOLVE, and the evaluation of the a posteriori error estimator, and so the mesh refinement, will be performed using this inexact Galerkin solution u_T. Furthermore, instead of making the unrealistic assumption that our right-hand side satisfies f ∈ Y_T, we will replace f by an f_{T̄} ∈ Y_{T̄}, with T̄ = T or possibly T̄ a refinement of T. In the following, we will study how this influences the convergence of the adaptive approximations. We start by investigating the stability of the error estimator.

Lemma 4.7.4 For any admissible triangulation T, p_T ∈ Q_T, f ∈ [L₂(Ω)]², u_T, ũ_T ∈ X_T, it holds that

  |E(T, f, u_T, p_T) − E(T, f, ũ_T, p_T)| ≤ C_S ‖u_T − ũ_T‖_X,    (4.7.27)

with an absolute constant C_S > 0.


Proof. Using the triangle inequality twice, first for vectors and then for functions, we find

  |E(T, f, u_T, p_T) − E(T, f, ũ_T, p_T)|
    = |(∑_{K∈T} diam(K)² ‖R_K(f, p_T)‖²_{[L₂(K)]²} + ∑_{ℓ∈E_T} diam(ℓ) ‖R_ℓ(u_T, p_T)‖²_{[L₂(ℓ)]²})^{1/2}
     − (∑_{K∈T} diam(K)² ‖R_K(f, p_T)‖²_{[L₂(K)]²} + ∑_{ℓ∈E_T} diam(ℓ) ‖R_ℓ(ũ_T, p_T)‖²_{[L₂(ℓ)]²})^{1/2}|
    ≤ (∑_{ℓ∈E_T} diam(ℓ) (‖R_ℓ(u_T, p_T)‖_{[L₂(ℓ)]²} − ‖R_ℓ(ũ_T, p_T)‖_{[L₂(ℓ)]²})²)^{1/2}
    ≤ (∑_{ℓ∈E_T} diam(ℓ) ‖R_ℓ(u_T − ũ_T, 0)‖²_{[L₂(ℓ)]²})^{1/2} ≤ C_S ‖u_T − ũ_T‖_X,    (4.7.28)

where in the last line we have used that, for any edge ℓ of a triangle K ∈ T and any w_T ∈ [P(K)]², it holds that

  ‖R_ℓ(w_T, 0)‖_{[L₂(ℓ)]²} ≲ diam(ℓ)^{−1/2} ‖w_T‖_{[H¹(K)]²}.    (4.7.29)

♦

Theorem 4.7.5 Let T be an admissible triangulation, f ∈ X′, p_T ∈ Q_T, u^{p_T} = A^{−1}(f − B*p_T), f_T ∈ Y_T, u^{p_T}_T = A_T^{−1}(f_T − B*p_T), u_T ∈ X_T, and let T̄ = REFINE[T, f_T, p_T, u_T, θ], f_{T̄} ∈ X′, u^{p_T}_{T̄} = A_{T̄}^{−1}(f_{T̄} − B*p_T). Then it holds that

  ‖u^{p_T} − u^{p_T}_{T̄}‖_X ≤ (1 − ½(C_Lθ/C_U)²)^{1/2} ‖u^{p_T} − u^{p_T}_T‖_X + 2C_S C_L ‖u^{p_T}_T − u_T‖_X + 3‖f − f_T‖_{X′} + ‖f − f_{T̄}‖_{X′}    (4.7.30)

Before we come to the proof, note that on T and T̄ we use the approximate right-hand sides f_T and f_{T̄}, respectively, and that in the refinement routine REFINE we use an approximation u_T instead of u^{p_T}_T = A_T^{−1}(f_T − B*p_T), where we think of an approximation obtained by the application of GALSOLVE.

Proof.

  ‖A^{−1}(f − B*p_T) − A_{T̄}^{−1}(f_{T̄} − B*p_T)‖_X
    ≤ ‖A^{−1}(f − B*p_T) − A^{−1}(f_T − B*p_T)‖_X + ‖A^{−1}(f_T − B*p_T) − A_{T̄}^{−1}(f_{T̄} − B*p_T)‖_X
    ≤ ‖f − f_T‖_{X′} + ‖A^{−1}(f_T − B*p_T) − A_{T̄}^{−1}(f_T − B*p_T)‖_X + ‖A_{T̄}^{−1}(f_T − B*p_T) − A_{T̄}^{−1}(f_{T̄} − B*p_T)‖_X
    ≤ 2‖f − f_T‖_{X′} + ‖f − f_{T̄}‖_{X′} + ‖A^{−1}(f_T − B*p_T) − A_{T̄}^{−1}(f_T − B*p_T)‖_X    (4.7.31)

Now, to get an estimate for ‖A^{−1}(f_T − B*p_T) − A_{T̄}^{−1}(f_T − B*p_T)‖_X, we apply a similar analysis as in the proof of Theorem 4.7.3. Using the properties of REFINE,


Lemma 4.7.4 and Theorem 4.7.2 (iii, v), we find

  ‖A_{T̄}^{−1}(f_T − B*p_T) − u^{p_T}_T‖_X
    ≥ C_L (∑_{K∈F} diam(K)² ‖R_K(f_T, p_T)‖²_{[L₂(K)]²} + ∑_{ℓ∈G} diam(ℓ) ‖R_ℓ(u^{p_T}_T, p_T)‖²_{[L₂(ℓ)]²})^{1/2}
    ≥ C_L ((∑_{K∈F} diam(K)² ‖R_K(f_T, p_T)‖²_{[L₂(K)]²} + ∑_{ℓ∈G} diam(ℓ) ‖R_ℓ(u_T, p_T)‖²_{[L₂(ℓ)]²})^{1/2} − C_S ‖u_T − u^{p_T}_T‖_X)
    ≥ C_L (θ E(T, f_T, u_T, p_T) − C_S ‖u_T − u^{p_T}_T‖_X)
    ≥ C_L (θ E(T, f_T, u^{p_T}_T, p_T) − 2C_S ‖u_T − u^{p_T}_T‖_X)
    ≥ C_L ((θ/C_U) ‖A^{−1}(f_T − B*p_T) − u^{p_T}_T‖_X − 2C_S ‖u_T − u^{p_T}_T‖_X),    (4.7.32)

with F and G the sets selected in REFINE.

Thanks to the Galerkin orthogonality and the previous estimate, we have

  ‖A^{−1}(f_T − B*p_T) − A_{T̄}^{−1}(f_T − B*p_T)‖²_X
    = ‖A^{−1}(f_T − B*p_T) − u^{p_T}_T‖²_X − ‖A_{T̄}^{−1}(f_T − B*p_T) − u^{p_T}_T‖²_X
    ≤ ‖A^{−1}(f_T − B*p_T) − u^{p_T}_T‖²_X − C_L² ((θ/C_U) ‖A^{−1}(f_T − B*p_T) − u^{p_T}_T‖_X − 2C_S ‖u_T − u^{p_T}_T‖_X)²
    ≤ ‖A^{−1}(f_T − B*p_T) − u^{p_T}_T‖²_X − C_L² (½(θ/C_U)² ‖A^{−1}(f_T − B*p_T) − u^{p_T}_T‖²_X − 4C_S² ‖u_T − u^{p_T}_T‖²_X)
    = (1 − ½(θC_L/C_U)²) ‖A^{−1}(f_T − B*p_T) − u^{p_T}_T‖²_X + 4C_S² C_L² ‖u_T − u^{p_T}_T‖²_X
    ≤ ((1 − ½(θC_L/C_U)²)^{1/2} ‖A^{−1}(f_T − B*p_T) − u^{p_T}_T‖_X + 2C_S C_L ‖u_T − u^{p_T}_T‖_X)²
    ≤ ((1 − ½(θC_L/C_U)²)^{1/2} (‖u^{p_T} − u^{p_T}_T‖_X + ‖f − f_T‖_{X′}) + 2C_S C_L ‖u_T − u^{p_T}_T‖_X)²,    (4.7.33)

where in (4.7.33) we have used that for any scalars a, b it holds that (a − b)² ≥ ½a² − b². Combining the last result with the first bound obtained in this proof completes the task. ♦

In Theorem 4.7.5 we estimated ‖u^{p_T} − u^{p_T}_{T̄}‖_X, where u^{p_T}_{T̄} = A_{T̄}^{−1}(f_{T̄} − B*p_T), i.e., the exact Galerkin approximation. In the following corollary we bound ‖u^{p_T} − u_{T̄}‖_X, where we think of u_{T̄} ∈ X_{T̄} as being a sufficiently close approximation to u^{p_T}_{T̄}, obtained by the GALSOLVE routine. As we will see, we get error reduction with a factor µ < 1 provided we control the errors in the approximate Galerkin solutions and the approximate right-hand sides in a proper way.


Corollary 4.7.6 (General adaptive error reduction estimate) For any µ ∈ ((1 − ½(C_Lθ/C_U)²)^{1/2}, 1) there exists a sufficiently small constant δ > 0 such that if, for f ∈ X′, an admissible triangulation T, p_T ∈ Q_T, f_T ∈ Y_T, u_T ∈ X_T, T̄ = REFINE[T, f_T, p_T, u_T, θ], f_{T̄} ∈ X′, u_{T̄} ∈ X_{T̄} and ε > 0, with u^{p_T} = A^{−1}(f − B*p_T), u^{p_T}_T = A_T^{−1}(f_T − B*p_T), u^{p_T}_{T̄} = A_{T̄}^{−1}(f_{T̄} − B*p_T), we have that ‖u^{p_T} − u_T‖_X ≤ ε and

  ‖u^{p_T}_T − u_T‖_X + ‖f − f_T‖_{X′} + ‖u^{p_T}_{T̄} − u_{T̄}‖_X + ‖f − f_{T̄}‖_{X′} ≤ 2(1 + µ)δε,    (4.7.34)

then

  ‖u^{p_T} − u_{T̄}‖_X ≤ µε    (4.7.35)

Proof. Using the triangle inequality, the conditions of the corollary and Theorem 4.7.5, we easily find

  ‖u^{p_T} − u_{T̄}‖_X ≤ ‖u^{p_T} − u^{p_T}_{T̄}‖_X + ‖u^{p_T}_{T̄} − u_{T̄}‖_X
    ≤ (1 − ½(C_Lθ/C_U)²)^{1/2} ‖u^{p_T} − u^{p_T}_T‖_X + 2C_S C_L ‖u^{p_T}_T − u_T‖_X + 3‖f − f_T‖_{X′} + ‖f − f_{T̄}‖_{X′} + ‖u^{p_T}_{T̄} − u_{T̄}‖_X
    ≤ µ‖u^{p_T} − u_T‖_X + ((1 − ½(C_Lθ/C_U)²)^{1/2} − µ) ‖u^{p_T} − u_T‖_X
      + max{2C_S C_L, 3} (‖u^{p_T}_T − u_T‖_X + ‖f − f_T‖_{X′} + ‖f − f_{T̄}‖_{X′} + ‖u^{p_T}_{T̄} − u_{T̄}‖_X)
    ≤ µε,    (4.7.36)

where we have chosen δ to satisfy

  δ ≤ (µ − (1 − ½(C_Lθ/C_U)²)^{1/2}) / (2(1 + µ) max{2C_S C_L, 3}).    (4.7.37) ♦

We are now ready to present the adaptive solver of the inner elliptic problem.


Algorithm 4.8 Adaptive Inner Solver

INNERELLIPTIC[T, f, p_T, u_T, ε₀, ε] → [T, u_T]
/* Input parameters of the algorithm are: f ∈ X′, an admissible triangulation T, p_T ∈ Q_T, u_T ∈ X_T, and ε₀ ≥ ‖u^{p_T} − u_T‖_X. The parameter δ < 1/3 is chosen such that it corresponds to a µ < 1 as in Corollary 4.7.6. */

ε₁ := ε₀/(1 − 3δ)
[T, f_T] := RHS[T, f, δε₁]
u_T := GALSOLVE[T, f_T, p_T, u_T, δε₁]
N := min{j ∈ ℕ : µ^j ε₁ ≤ ε}
for k = 1, …, N do
    T := REFINE[T, f_T, p_T, u_T, θ]
    [T, f_T] := RHS[T, f, δµ^k ε₁]
    u_T := GALSOLVE[T, f_T, p_T, u_T, δµ^k ε₁]
endfor
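The control flow of INNERELLIPTIC is pure tolerance bookkeeping: a geometric schedule of tolerances δµ^k ε₁ driving RHS and GALSOLVE. A small runnable sketch of just that bookkeeping (hypothetical helper name; the actual RHS/REFINE/GALSOLVE calls are stubbed away):

```python
import math

def tolerance_schedule(eps0, eps, mu, delta):
    """Tolerance bookkeeping of INNERELLIPTIC: eps1 = eps0/(1 - 3*delta),
    N = min{j : mu**j * eps1 <= eps}; RHS and GALSOLVE are then called
    with tolerance delta * mu**k * eps1 in iteration k = 1..N."""
    assert 0 < mu < 1 and 0 < delta < 1 / 3
    eps1 = eps0 / (1 - 3 * delta)
    N = max(0, math.ceil(math.log(eps / eps1) / math.log(mu)))
    return eps1, N, [delta * mu**k * eps1 for k in range(1, N + 1)]

eps1, N, tols = tolerance_schedule(eps0=1.0, eps=1e-3, mu=0.5, delta=0.1)
```

Since N only depends on the ratio ε₀/ε (and on µ), calling the routine with ε₀ ≲ ε keeps the number of loop iterations an absolute constant, which is what the complexity proof of Theorem 4.7.7 exploits.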

Theorem 4.7.7 (i) The algorithm INNERELLIPTIC[T, f, p_T, u_T, ε₀, ε] → [T, u_T] terminates with

  ‖u^{p_T} − u_T‖_X ≤ ε    (4.7.38)

(ii) If for some s > 0 the pair (f, RHS) is s-optimal, and the algorithm is called with ε₀ ≲ ε, then both the cardinality of the output triangulation and the number of flops required by the algorithm satisfy

  ]T, number of flops ≲ ]T + ε^{−1/s} c_f^{1/s}    (4.7.39)

Proof. (i) We are going to show that just before the kth call of REFINE,

  ‖u^{p_T} − u_T‖_X ≤ ε₁µ^{k−1},    (4.7.40)

meaning that after the kth inner loop ‖u^{p_T} − u_T‖_X ≤ ε₁µ^k, which, by definition of N, proves the first part of the theorem.

For k = 1, after the first call [T, f_T] := RHS[T, f, δε₁], it holds that

  ‖f − f_T‖_{X′} ≤ δε₁.    (4.7.41)

For the input u_T we have ‖u^{p_T} − u_T‖_X ≤ (1 − 3δ)ε₁. Using the triangle inequality and the fact that A_T^{−1}(f_T − B*p_T) is the best approximation of A^{−1}(f_T − B*p_T) from X_T with respect to ‖·‖_X, we find

  ‖u^{p_T} − A_T^{−1}(f_T − B*p_T)‖_X
    ≤ ‖u^{p_T} − A^{−1}(f_T − B*p_T)‖_X + ‖A^{−1}(f_T − B*p_T) − A_T^{−1}(f_T − B*p_T)‖_X
    ≤ ‖u^{p_T} − A^{−1}(f_T − B*p_T)‖_X + ‖A^{−1}(f_T − B*p_T) − u_T‖_X
    ≤ 2‖u^{p_T} − A^{−1}(f_T − B*p_T)‖_X + ‖u^{p_T} − u_T‖_X
    ≤ 2‖f − f_T‖_{X′} + ‖u^{p_T} − u_T‖_X ≤ 2δε₁ + (1 − 3δ)ε₁ ≤ (1 − δ)ε₁    (4.7.42)

After the call u_T := GALSOLVE[T, f_T, p_T, u_T, δε₁] we have

  ‖A_T^{−1}(f_T − B*p_T) − u_T‖_X ≤ δε₁.    (4.7.43)

Together with the previous estimate this gives

  ‖u^{p_T} − u_T‖_X ≤ ‖u^{p_T} − A_T^{−1}(f_T − B*p_T)‖_X + ‖A_T^{−1}(f_T − B*p_T) − u_T‖_X ≤ ε₁,    (4.7.44)

i.e., (4.7.40) is valid for k = 1.

Let us now assume that (4.7.40) is valid for some k. By the last calls of RHS and GALSOLVE, for the current T, u_T and f_T we have

  ‖f − f_T‖_{X′} ≤ δε₁µ^{k−1},  ‖A_T^{−1}(f_T − B*p_T) − u_T‖_X ≤ δε₁µ^{k−1}.    (4.7.45)

The subsequent calls T̄ := REFINE[T, f_T, p_T, u_T, θ], [T̄, f_{T̄}] := RHS[T̄, f, δµ^kε₁], and u_{T̄} := GALSOLVE[T̄, f_{T̄}, p_T, u_T, δµ^kε₁] result in

  ‖f − f_{T̄}‖_{X′} ≤ δε₁µ^k,  ‖A_{T̄}^{−1}(f_{T̄} − B*p_T) − u_{T̄}‖_X ≤ δε₁µ^k.    (4.7.46)

Therefore we obtain

  ‖f − f_T‖_{X′} + ‖A_T^{−1}(f_T − B*p_T) − u_T‖_X + ‖f − f_{T̄}‖_{X′} + ‖A_{T̄}^{−1}(f_{T̄} − B*p_T) − u_{T̄}‖_X ≤ 2(1 + µ)δε₁µ^{k−1}.    (4.7.47)

Using the induction hypothesis, Corollary 4.7.6 (applied with ε = ε₁µ^{k−1}) shows that

  ‖u^{p_T} − u_{T̄}‖_X ≤ ε₁µ^k,    (4.7.48)

with which (4.7.40) is proven.

(ii) Let us now analyse the computational complexity of the algorithm. For k = 1, …, N, just before the call GALSOLVE[T, f_T, p_T, u_T, δµ^kε₁], it holds that ‖u^{p_T} − u_T‖_X ≤ µ^{k−1}ε₁ and ‖f − f_T‖_{X′} ≤ δµ^kε₁. Since, thanks to (4.7.42),

  ‖u^{p_T} − A_T^{−1}(f_T − B*p_T)‖_X ≤ 2‖u^{p_T} − A^{−1}(f_T − B*p_T)‖_X + ‖u^{p_T} − u_T‖_X ≤ (2δµ^k + µ^{k−1})ε₁,    (4.7.49)

we have that ‖A_T^{−1}(f_T − B*p_T) − u_T‖_X ≤ 2(δµ^k + µ^{k−1})ε₁. Observing that 2(δµ^k + µ^{k−1})ε₁ / (δµ^kε₁) is a constant, the cost of this call is ≲ ]T. Before the first call GALSOLVE[T, f_T, p_T, u_T, δε₁] we have ‖u^{p_T} − u_T‖_X ≤ ε₀ = (1 − 3δ)ε₁ and ‖f − f_T‖_{X′} ≤ δε₁. The same arguments show that the cost of this call is also ≲ ]T.


Recalling the properties of the routines REFINE and RHS, we can list the costs of each call:

  T̄ := REFINE[T, f_T, p_T, u_T, θ]:  ]T̄, flops ≲ ]T    (4.7.50)
  [T̄, f_{T̄}] := RHS[T, f, δµ^kε₁]:  ]T̄, flops ≲ ]T + (δµ^kε₁)^{−1/s} c_f^{1/s}    (4.7.51)

Finally, since the for-loop runs for N iterations, N being an absolute constant only dependent on ε₀/ε, we conclude that after the call INNERELLIPTIC[T, f, p_T, u_T, ε₀, ε] → [T, u_T], the cardinality of the output triangulation and the number of flops required by the algorithm satisfy

  ]T, number of flops ≲ ]T + ε^{−1/s} c_f^{1/s}.    (4.7.52) ♦

4.8 Conclusions

In this chapter we have designed an adaptive FEM algorithm for solving the Stokes problem. We were able to prove that this algorithm produces a sequence of adaptive approximations that converges at the optimal rate, using the fact that the algorithm contains a mesh derefinement routine.

Recently, in the work of Stevenson ([37]), an adaptive FEM was constructed for solving elliptic problems that has optimal computational complexity without relying on a recurrent derefinement of the triangulations. We will study whether this is possible for mixed problems in the next chapter.

In any case, in our view, the availability of an efficient mesh optimization/derefinement routine is very useful for the adaptive solution of nonstationary problems, which often exhibit moving shock waves.


Chapter 5

An Optimal Adaptive FEM for the Stokes Problem

5.1 Introduction

Often the solution of a boundary value problem exhibits singularities, e.g., due to a non-smooth boundary. Then, because of the lack of (Sobolev) smoothness of the solution, finite element methods based on quasi-uniform partitions converge with a rate that is smaller than is allowed by the polynomial degree. This can be repaired when suitable refinements are made near those singularities. The optimal size of the elements as a function of the distance to a singularity depends on the strength of the singularity, which is generally unknown.

With adaptive finite element methods (AFEMs), a sequence of nested partitions is created. When creating the next partition, the decision where to refine is made on the basis of an a posteriori estimator of the error in the current finite element approximation. Although they have been successfully in use for more than 25 years, in more than one space dimension, even for second order elliptic equations, their convergence was not shown before the work of Dörfler ([19]) and that of Morin, Nochetto and Siebert ([29]). Convergence alone, however, does not show that the use of an adaptive method for a solution that has singularities improves upon, or even competes with, that of a non-adaptive one. Recently, after the derivation of such a result by Binev, Dahmen and DeVore ([6]) for an AFEM extended with a so-called coarsening routine, in [37] Stevenson could prove that standard AFEMs converge at the best possible rate in linear complexity. So this rate is equal to that of finite element approximations with respect to the sequence, over N ∈ ℕ, of the best partitions with N elements. This chapter presents joint work with Stevenson [27].

In this chapter, as a model saddle point problem, we consider the Stokes equations

  −Δu + ∇p = f on Ω ⊂ ℝ^d,
  div u = 0 on Ω,
  u = 0 on ∂Ω

(although, in this introduction, we write equations in strong form, actually we always mean the corresponding variational formulations). In [17], Dahlke, Dahmen and Urban analyzed an adaptive wavelet method for solving these equations. The starting point was


the application of the Uzawa iteration on the continuous level, i.e., given some p₀, to compute for j = 0, 1, …

  −Δu_{j+1} = f − ∇p_j,
  p_{j+1} = p_j − div u_{j+1}.

Of course, this iteration cannot be performed exactly, and in each iteration the solution of the elliptic system was approximated using an adaptive wavelet method with decreasing tolerances as the iteration proceeds. Convergence was shown, and by the inclusion of coarsening steps, even optimal rates and linear complexity were demonstrated. Since nowhere Galerkin discretizations were formed of the mixed problem, the so-called LBB stability was not required.

In [5], Bänsch, Morin and Nochetto studied the above solution method with the adaptive wavelet method replaced by an AFEM. They proved convergence, and despite the fact that they did not include coarsening, in numerical experiments they observed optimal rates, at least when the elliptic problems were solved not too accurately. When prescribing an a priori tolerance of the form γ^j for the jth iteration, it was needed to take γ in the range [≈ .95, 1). By the addition of coarsening to this method, in [25] optimal computational complexity was demonstrated.

When starting this work, our aim was to prove optimal computational complexity of basically the method from [5]. For reasons that will be indicated in Remark 5.6.8, we did not succeed in doing this. Instead, for a somewhat more complicated algorithm involving an additional loop, we will prove optimal rates, and, under a mild assumption (Assumption 5.6.4), also optimal computational complexity.

The pressure p can be found as the solution of the Schur complement equation that one obtains by eliminating the velocity u from the Stokes equations. This equation is elliptic, with corresponding energy norm equivalent to the L₂(Ω)-norm. Given a finite element space P_{σ_i}, where σ_i denotes the underlying finite element partition, the best approximation from this space to p with respect to this energy norm is the Galerkin solution p_i ∈ P_{σ_i}. With Q_{σ_i} denoting the L₂(Ω)-orthogonal projection onto P_{σ_i}, this p_i can be shown to be the unique solution of

  −Δu^{(i)} + ∇p_i = f on Ω,
  Q_{σ_i} div u^{(i)} = 0 on Ω,
  u^{(i)} = 0 on ∂Ω,

i.e., the Stokes equations in which the divergence-free condition has been relaxed. We refer to this system as the reduced Stokes equations.

Concerning u^{(i)}, this is still a problem posed over an infinite dimensional space. Assuming for the moment that we can solve it exactly, or more precisely, with sufficient accuracy, the error in the energy norm in p_i can be shown to be equivalent to the a posteriori error estimator ‖div u^{(i)}‖_{L₂(Ω)}. Furthermore, for any refined partition σ_{i+1}, the energy norm of p_{i+1} − p_i is equivalent to ‖Q_{σ_{i+1}} div u^{(i)}‖_{L₂(Ω)}. Now following the lines of [19, 29] for Poisson type problems, if, for some θ ∈ (0, 1], σ_{i+1} is selected such that ‖Q_{σ_{i+1}} div u^{(i)}‖_{L₂(Ω)} ≥ θ‖div u^{(i)}‖_{L₂(Ω)} ("bulk criterion"), then the so-called saturation property is guaranteed, and a linearly convergent sequence (p_i)_i towards p is obtained. Moreover, if, depending on the efficiency index of this a posteriori error estimator, θ is small enough, and σ_{i+1} is selected with quasi-minimal cardinality, then following the lines of [37] we can show convergence at the optimal rate. Compared to the adaptive methods for Poisson type problems, a complication is that, to find such a σ_{i+1}, it is generally not sufficient to search for it within the set of partitions that can be created by refining each element of σ_i only a small, fixed number of times. For our theoretical considerations, we studied the adaptive tree algorithms by Binev and DeVore from [8], whereas in our experiments we relied on the easily implementable greedy approach.

For solving the reduced Stokes problem for given i, we follow the approach from [5] for the full Stokes problem. That is, we apply the Uzawa iteration, where the pressure update then reads as p^{(i)}_{j+1} = p^{(i)}_j − Q_{σ_i} div u^{(i)}_j, and where we solve the inner elliptic systems −Δu^{(i)}_j = f − ∇p^{(i)}_j with an AFEM. Knowing that p^{(i)}_j ∈ P_{σ_i}, and having control over #σ_i, we are now able to prove optimal rates also for the velocity approximations towards u. Note that, other than in [5], we have two different partitions underlying the pressure and velocity approximations. Throughout the algorithm, both partitions become increasingly more refined, i.e., no derefinements are made.

This chapter is organized as follows: In Section 5.2, we recall some properties of the Stokes problem. In Section 5.3, we define the finite element spaces that we will use. We give a procedure for refining partitions, which is a generalization to arbitrary space dimensions of the newest vertex bisection method in two space dimensions. An overview of the solution method is given in Section 5.4. In Section 5.5, a posteriori error estimators are derived for the various problems that occur in our solution method. In Section 5.6, the adaptive refinement routines for pressure and velocity partitions are given. In Section 5.7, we give a detailed description of the adaptive method in the simplified situation that the right-hand side f is piecewise polynomial with respect to the initial partition. We prove convergence at the optimal rates. In this section, we assume that the arising finite dimensional linear systems are solved exactly, ignoring the question of computational complexity. In Section 5.8, we give the method for general right-hand sides, and replace the direct solvers by iterative solution methods, with which we end up with a method of optimal computational complexity. Finally, in Section 5.9, we present numerical results, and compare them with those obtained with the method from [5]. As we will see, in this example both methods give similar results.

In this chapter, by C ≲ D we will mean that C can be bounded by a multiple of D, independently of parameters on which C and D may depend. Obviously, C ≳ D is defined as D ≲ C, and C ≂ D as C ≲ D and C ≳ D.

5.2 Stokes problem

Let Ω be a polygonal domain in ℝ^d. We consider the Stokes problem in variational form: With

  V := H₀¹(Ω)^d,  P := L₂,₀(Ω),


and given an f ∈ V′, throughout this chapter u ∈ V (the velocity) and p ∈ P (the pressure) will denote the solutions of

  a(u,v) + b(v,p) + b(u,q) = f(v),  (v ∈ V, q ∈ P),    (5.2.1)

where a : V × V → ℝ and b : V × P → ℝ are defined by

  a(w,v) := ∫_Ω ∇w : ∇v,  b(v,q) := −∫_Ω q div v.

It is well-known that

  ‖v‖_V := a(v,v)^{1/2} ≂ ‖v‖_{H¹(Ω)^d},  (v ∈ V),

b is bounded, and

  β := inf_{0≠q∈P} sup_{0≠v∈V} b(v,q) / (‖v‖_V ‖q‖_{L₂(Ω)}) > 0.    (5.2.2)

As a consequence, the Stokes problem is well-posed, meaning that

  ‖w‖_V + ‖r‖_P ≂ sup_{0≠(v,q)∈V×P} (a(w,v) + b(v,r) + b(w,q)) / (‖v‖_V + ‖q‖_P),  (w ∈ V, r ∈ P).    (5.2.3)

Remark 5.2.1 Clearly, for r in a subspace P̃ ⊂ P, (5.2.3) is also valid, uniformly over P̃ ⊂ P, when the supremum over V × P is replaced by that over V × P̃.

Using that ‖div v‖_{L₂(Ω)} ≤ ‖v‖_V (v ∈ V) (see [30]), in particular we have |b(v,q)| ≤ ‖v‖_V ‖q‖_{L₂(Ω)}, and thus also β ≤ 1.

Defining A : V → V′, B : V → P′, and B′ : P → V′ by (Av)(w) = a(v,w), (Bv)(q) = b(v,q) = (B′q)(v), the problem (5.2.1) can equivalently be written as

  [ A  B′ ] [ u ]   [ f ]
  [ B  0  ] [ p ] = [ 0 ],

and, with the Schur complement S := BA^{−1}B′, p is also uniquely determined by

  Sp = BA^{−1}f.

Lemma 5.2.2 We have

  (Sq)(q) = sup_{0≠v∈V} b(v,q)² / a(v,v),

so that, by the ellipticity of a, the boundedness of b, and (5.2.2),

  ‖q‖_P := (Sq)(q)^{1/2} ≂ ‖q‖_{L₂(Ω)},  (q ∈ P).


Proof. With ⟨·,·⟩ = ⟨·,·⟩_{H¹(Ω)^d} (or ⟨·,·⟩ = a(·,·)), let R : V′ → V be the mapping such that g(v) = ⟨v, Rg⟩ (v ∈ V, g ∈ V′). Writing B̃′ = RB′ and Ã = RA, we have

  sup_{0≠v∈V} b(v,q)² / a(v,v) = sup_{0≠v∈V} (B′q)(v)² / (Av)(v) = sup_{0≠v∈V} ⟨v, B̃′q⟩² / ⟨v, Ãv⟩
    = sup_{0≠w∈V} ⟨w, Ã^{−1/2}B̃′q⟩² / ⟨w, w⟩ = ⟨Ã^{−1/2}B̃′q, Ã^{−1/2}B̃′q⟩ = ⟨A^{−1}B′q, RB′q⟩ = (Sq)(q)  (q ∈ P).
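A finite-dimensional analogue of this identity is easy to verify numerically: for an SPD matrix A and a matrix B, the Schur complement S = BA⁻¹Bᵀ satisfies qᵀSq = sup_v (qᵀBv)² / (vᵀAv), the supremum being attained at v = A⁻¹Bᵀq. A numpy check with random stand-in matrices (illustrative only):

```python
import numpy as np

rng = np.random.default_rng(1)
n, m = 8, 3
M = rng.standard_normal((n, n))
A = M @ M.T + n * np.eye(n)           # SPD stand-in for the operator A
B = rng.standard_normal((m, n))       # stand-in for B : V -> P'
S = B @ np.linalg.inv(A) @ B.T        # Schur complement

q = rng.standard_normal(m)
v_star = np.linalg.solve(A, B.T @ q)  # maximizer of (q.B v)^2 / (v.A v)
sup_value = (q @ B @ v_star) ** 2 / (v_star @ A @ v_star)
# sup_value coincides with q^T S q
```

Plugging v = A⁻¹Bᵀq into the quotient gives (qᵀSq)²/(qᵀSq) = qᵀSq, and Cauchy–Schwarz shows no other v does better, mirroring the proof above.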

For g ∈ V′, we set ‖g‖_{V′} := sup_{0≠v∈V} |g(v)| / ‖v‖_V. Equipped with the norms ‖·‖_V and ‖·‖_{V′}, respectively, A : V → V′ is an isomorphism.

Functions g ∈ L₂(Ω)^d will be interpreted as functionals by means of g(v) := ∫_Ω g · v.

5.3 Finite element approximation

Given some fixed m ∈ ℕ_{>0}, and partitions τ and σ of Ω into essentially disjoint (closed) d-simplices, we will search approximations for u and p from the finite element spaces

  V_τ := V ∩ ∏_{T∈τ} P_m(T)^d

and

  P_σ := P ∩ ∏_{T∈σ} P_{m−1}(T),

respectively. For doing so, we will furthermore approximate the right-hand side f by functions from

  V*_τ := ∏_{T∈τ} P_{m−1}(T)^d.

At any moment in our algorithm we will have that τ ⊇ σ, meaning that τ is a refinement of σ or is equal to σ.

Sometimes, we will view V and P formally as finite element spaces corresponding to the infinitely fine partition ∞, and denote them as V_∞ and P_∞, respectively.

Remark 5.3.1 The fact that the approximate pressure is a piecewise polynomial of degree not larger than m − 1 will only be used in the forthcoming Proposition 5.5.3. It is most likely that there too higher degree polynomials can be permitted, at the expense of having a more complicated refinement rule for the velocity partitions (it will be needed to create more interior vertices, cf. Figure 5.2). On the other hand, at least for our analysis, it will be essential that P_τ ⊇ div V_τ (cf. Remark 5.6.1).

Note that (V_τ, P_τ) is not an LBB stable pair.

Below, we specify the type of partitions we will consider, and recall some results from [39], generalizing upon known results for newest vertex bisection in two dimensions. For 0 ≤ i ≤ d − 1, a simplex spanned by i + 1 vertices of a d-simplex T is called a hyperface of T. For i = d − 1, it will be called a true hyperface, and for i ≤ d − 2 it will be called a lower dimensional hyperface. A partition τ is called conforming when the intersection of any two different T, T′ ∈ τ is either empty or a hyperface of both simplices. Different simplices T, T′ that share a true hyperface will be called neighbours. (Actually, when Ω ≠ int(Ω̄), the above definition of a conforming partition can be unnecessarily restrictive. We refer to [39] for a discussion of this matter.)

Given a simplex T with vertices x₀, …, x_d, we will identify ½(d+1)! tagged simplices, given by all possible ordered sequences (x₀, x₁, x₂, …, x_d). So although, for convenience, we will write a tagged simplex as (x₀, x₁, x₂, …, x_d), the ordering of the first two coordinates is arbitrary. Given a tagged simplex T = (x₀, x₁, x₂, …, x_d), its children are the tagged simplices (x₀, x₂, …, x_d, (x₀+x₁)/2) and (x₁, x₂, …, x_d, (x₀+x₁)/2). So these children are generated by bisecting the edge x₀x₁ of T, i.e., by connecting its midpoint with the other vertices x₂, …, x_d; see Figure 5.1 for an illustration. The edge x₀x₁ is called the refinement edge of T. In the d = 2 case, the vertex opposite to this edge is known as the newest vertex.

[Figure 5.1: Bisection of a tagged tetrahedron]

Given a fixed conforming initial partition τ₀ of tagged simplices, we will exclusively consider partitions that can be created from τ₀ by recurrent bisections of tagged simplices, for short descendants of τ₀. The simplices that can be created in this way are uniformly shape regular, only dependent on τ₀ and d. For the case that Ω might have slits, we assume that
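In coordinates, the bisection rule above is a one-liner. A sketch in Python for arbitrary d (vertices represented as coordinate tuples; illustrative only):

```python
def children(T):
    """Children of a tagged d-simplex T = (x0, x1, ..., xd): the
    refinement edge x0x1 is bisected and its midpoint becomes the
    new last vertex of both children."""
    x0, x1, *rest = T
    mid = tuple((a + b) / 2 for a, b in zip(x0, x1))
    return (x0, *rest, mid), (x1, *rest, mid)

# Bisecting the reference tetrahedron (d = 3)
T = ((0, 0, 0), (1, 0, 0), (0, 1, 0), (0, 0, 1))
c1, c2 = children(T)
# c1 == ((0, 0, 0), (0, 1, 0), (0, 0, 1), (0.5, 0.0, 0.0))
```

Note how both children receive the midpoint as their last vertex, so that a child's own refinement edge is automatically one of the remaining edges of T, which is what drives the cyclic edge pattern of repeated bisection.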

We will assume that the simplices from τ₀ are tagged in a way such that any two neighbours T = (x₀, …, x_d), T′ = (x′₀, …, x′_d) in τ₀ match in the following sense: If x₀x₁ ⊂ T′, then x′₀x′₁ = x₀x₁, and x_i = x′_i for all but one i ∈ {2, …, d}.

It is known, see [6] and the references therein, that for any conforming partition into triangles there exists a local numbering of the vertices such that the matching condition is satisfied. In more than two dimensions, this condition cannot be satisfied for every conforming partition. On the other hand, it can be shown that any conforming partition of d-simplices can be refined, inflating the number of simplices by not more than an absolute constant factor, into a conforming partition τ₀ that allows a local numbering of the vertices such that the matching condition is satisfied.

We note that any partition is given by the leaves of some subtree of the fixed infinite binary tree having as nodes all tagged simplices that can be created. The roots of this tree are the simplices of τ₀, and the children of any node are the simplices that result from its bisection.

For applying a posteriori error estimators, we will need that the partitions τ underlying the velocity approximations are conforming. So in the following, τ, τ′, τ̄, etc. will always denote conforming partitions.

Bisecting one or more simplices in a conforming partition τ generally results in a non-conforming partition ϱ. Conformity has to be restored by (recursively) bisecting any simplex T ∈ ϱ that contains a vertex v of a T′ ∈ ϱ that does not coincide with any vertex of T (such a v is called a hanging vertex). This process, called completion, results in the smallest conforming refinement of ϱ.

Our adaptive method will be of the following form

for j := 1 to Mdo create some, possibly non-conforming refinement %j of τj−1

complete %j to its smallest conforming refinement τj

endfor

As we will see, we will be able to bound ∑_{j=1}^M #ϱ_j − #τ_{j−1}. Because of the additional bisections made in the completion steps, however, generally #τ_M − #τ₀ will be larger. The following crucial result shows that these additional bisections inflate the total number of simplices by at most an absolute constant factor.
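For d = 2, one round of the loop above — bisect the marked triangles, restoring conformity as one goes — can be sketched by the classical recursive newest vertex scheme, in which a triangle whose refinement-edge neighbour is not compatibly tagged first forces the bisection of that neighbour (illustrative Python; triangles are tuples of coordinate tuples tagged so that the refinement edge is x₀x₁, and the initial tagging is assumed to satisfy the matching condition):

```python
def refine(partition, marked):
    """Bisect the marked triangles of a conforming tagged 2-D partition
    and restore conformity (recursive newest vertex bisection sketch)."""
    partition = set(partition)

    def kids(T):
        # Children of T = (x0, x1, x2); the refinement edge x0x1 is bisected.
        x0, x1, x2 = T
        m = tuple((a + b) / 2 for a, b in zip(x0, x1))
        return (x0, x2, m), (x1, x2, m)

    def neighbour(T):
        # Neighbour sharing the refinement edge of T, or None on the boundary.
        edge = frozenset(T[:2])
        for S in partition:
            if S != T and edge <= set(S):
                return S
        return None

    def bisect(T):
        if T not in partition:            # already handled during recursion
            return
        N = neighbour(T)
        while N is not None and frozenset(N[:2]) != frozenset(T[:2]):
            bisect(N)                     # make the neighbour compatibly divisible
            N = neighbour(T)
        partition.remove(T)
        partition.update(kids(T))
        if N is not None:                 # shared refinement edge: bisect jointly
            partition.remove(N)
            partition.update(kids(N))

    for T in marked:
        bisect(T)
    return partition

# Unit square split into two triangles whose refinement edge is the diagonal
a, b, c, d = (0, 0), (1, 0), (1, 1), (0, 1)
part = refine({(a, c, b), (a, c, d)}, [(a, c, b)])
```

Marking one triangle here bisects its diagonal neighbour as well, yielding a conforming partition of four triangles; the extra, unmarked bisections are exactly the completion overhead that Theorem 5.3.2 bounds.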

Theorem 5.3.2 (generalizes upon [6, Theorem 2.4] for n = 2)

  #τ_M − #τ₀ ≲ ∑_{j=1}^M #ϱ_j − #τ_{j−1},

only dependent on τ₀ and d, and in particular thus independent of M.

We end this section by introducing two more notations. The smallest common refinement of partitions ϱ₁ and ϱ₂ will be denoted as ϱ₁ ∪ ϱ₂. For a partition τ (thus a conforming one), E_τ will denote the set of internal true hyperfaces in τ, and F_τ(T) denotes the set consisting of T and its neighbours in τ.

5.4 Overview of the solution method

For a partition σ_i, we consider the Galerkin problem of finding p^{(i)} ∈ P_{σ_i} such that

  (Sp^{(i)})(q) = (BA^{−1}f)(q),  (q ∈ P_{σ_i}).


With u(i) := A−1(f − B′p(i)), this problem is equivalent to the semi-discrete problem offinding (u(i), p(i)) ∈ V× Pσi

such that

a(u(i),v) + b(v, p(i)) + b(u(i), q) = f(v), (v ∈ V, q ∈ Pσi). (5.4.1)

Since this is just the Stokes problem in which the divergence-free constraint has been relaxed, we will refer to this problem as the reduced Stokes problem. The solution p^{(i)} is the best approximation to p from P_{σ_i} with respect to ‖·‖_P, and by creating a suitable adaptively refined sequence of partitions τ_0 =: σ_0 ⊂ σ_1 ⊂ …, a convergent sequence (p^{(i)})_i towards p is obtained.

The reduced Stokes problem can, however, not be solved exactly. Defining R_{σ_i} : P′ → P_{σ_i} by g(q) = 〈q, R_{σ_i} g〉_{L_2(Ω)} (q ∈ P_{σ_i}), it can be written as R_{σ_i} S p^{(i)} = R_{σ_i} B A^{−1} f. Equipping P_{σ_i} with 〈·,·〉_{L_2(Ω)}, the operator R_{σ_i} S : P_{σ_i} → P_{σ_i} is symmetric, bounded, and positive definite, with spectrum in [β², 1] (for the lower bound, cf. Lemma 5.2.2 and (5.2.2); for the upper bound, cf. [30]). So to solve the reduced Stokes problem approximately, we may apply Richardson's iteration

p^{(i)}_{j+1} = p^{(i)}_j − (R_{σ_i} S p^{(i)}_j − R_{σ_i} B A^{−1} f).

Writing u^{(i)}_j := A^{−1}(f − B′p^{(i)}_j), and, with Q_{σ_i} : P → P_{σ_i} denoting the L_2(Ω)-orthogonal projector, noting that R_{σ_i} B = −Q_{σ_i} div, we arrive at the equivalent formulation

a(u^{(i)}_j, v) = f(v) − b(v, p^{(i)}_j),  (v ∈ V),
p^{(i)}_{j+1} = p^{(i)}_j − Q_{σ_i} div u^{(i)}_j,   (5.4.2)

known as the Uzawa iteration. The properties of R_{σ_i} S show that

‖p^{(i)} − p^{(i)}_{j+1}‖_{L_2(Ω)} ≤ [1 − β²] ‖p^{(i)} − p^{(i)}_j‖_{L_2(Ω)}.   (5.4.3)

Also the Uzawa iteration for solving the reduced Stokes problem cannot be performed exactly, since it involves solving an elliptic problem posed over V. To solve this problem approximately, we will again consider Galerkin approximations: given a partition τ^{(i)}_{j,k}, let u^{(i)}_{j,k} ∈ V_{τ^{(i)}_{j,k}} be the solution of

a(u^{(i)}_{j,k}, v) = f(v) − b(v, p^{(i)}_j),  (v ∈ V_{τ^{(i)}_{j,k}}).   (5.4.4)

It is the best approximation to u^{(i)}_j from V_{τ^{(i)}_{j,k}} with respect to ‖·‖_V, and by creating a suitable adaptively refined sequence of partitions σ_i ⊆ τ^{(i)}_{j,0} ⊂ τ^{(i)}_{j,1} ⊂ …, a convergent sequence (u^{(i)}_{j,k})_k towards u^{(i)}_j is obtained. To guarantee that u^{(i)}_{j,k+1} is indeed a better approximation than u^{(i)}_{j,k}, we will need that f ∈ V′ can be sufficiently well approximated by a vector field from V*_{τ^{(i)}_{j,k}}. To implement the latter requirement, instead of working with


f, we will replace it by suitable piecewise polynomial vector fields of degree m − 1, which should become increasingly more accurate as the iteration proceeds.

Finally, instead of solving the finite dimensional linear systems (5.4.4) exactly, in order to obtain a method of (quasi-) optimal computational complexity, we will use approximate solutions obtained by employing optimal iterative solvers.

In order to stop each of the nested loops on time, as well as, for both Galerkin problems, to create adaptively refined partitions such that the corresponding approximations converge towards the solution and such that these partitions have quasi-optimal cardinalities, we need a posteriori error estimators, which will be discussed in the next section. Stopping a loop on time on the one hand means that it should not stop too early, in order to guarantee convergence of the overall process, whereas on the other hand the iterations should not proceed too long, in order to control the cardinalities of the partitions, which grow by the refinements, as well as the computational complexity.

In view of our solution method, we fix some notations. Throughout this chapter, given some r ∈ P, where we have in mind an approximation to p, and a partition τ, u^r_τ ∈ V_τ will denote the solution of the (discretized) elliptic problem

a(u^r_τ, v_τ) = f(v_τ) − b(v_τ, r),  (v_τ ∈ V_τ).   (5.4.5)

Actually, we will consider this problem only for an r ∈ P_τ. As a special case of (5.4.5), u^r = u^r_∞ ∈ V thus denotes the solution of

a(u^r, v) = f(v) − b(v, r),  (v ∈ V).

Given a partition σ, (u_σ, p_σ) ∈ V × P_σ will denote the solution of the reduced Stokes problem

a(u_σ, v) + b(v, p_σ) + b(u_σ, q_σ) = f(v),  (v ∈ V, q_σ ∈ P_σ).

5.5 A posteriori error estimators

5.5.1 A posteriori error estimator for the inner elliptic problem

For a partition τ, r_τ ∈ P_τ, w_τ ∈ V_τ, and T ∈ τ, we set the local error indicator

η_T(f, r_τ, w_τ) := diam(T)² ‖f − ∇r_τ + Δw_τ‖²_{L_2(T)^d} + diam(T) ‖⟦r_τ n − ∇w_τ · n⟧_{∂T}‖²_{L_2(∂T)^d},

where ⟦·⟧_{∂T} denotes the jump of its argument over ∂T in the direction of n, being a unit vector normal to ∂T. This jump is defined to be zero over ∂Ω. We set the elliptic error estimator

E^E(τ, f, r_τ, w_τ) := [∑_{T∈τ} η_T(f, r_τ, w_τ)]^{1/2}.

Note that the definition of the error estimator requires f ∈ L_2(Ω)^d, which we therefore assume here.
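For intuition, a scalar 1D analogue of η_T and E^E can be computed explicitly. The reduction to −u″ = f with r_τ = 0, piecewise linear w_τ, and piecewise constant f is an illustrative simplification of the vector-valued setting above, not the estimator of the text itself:

```python
import numpy as np

def elliptic_estimator_1d(x, u, f):
    """Scalar 1D analogue of the elliptic error estimator: for -u'' = f with
    r_tau = 0 and piecewise linear u (so Delta u = 0 on each element),
    eta_T = diam(T)^2 * ||f||_{L2(T)}^2 + diam(T) * (squared jumps of u' over
    the two endpoints of T), jumps being zero on the domain boundary.
    x: node coordinates, u: nodal values, f: one value per element."""
    h = np.diff(x)                # element diameters diam(T)
    du = np.diff(u) / h           # piecewise constant derivative u'
    jumps = np.zeros(len(x))      # jump of u' at each node; zero at x[0], x[-1]
    jumps[1:-1] = du[1:] - du[:-1]
    eta = h**2 * (np.asarray(f)**2 * h) + h * (jumps[:-1]**2 + jumps[1:]**2)
    return np.sqrt(eta.sum())
```

Each interior derivative jump is counted by both adjacent elements, mirroring the per-element definition of ⟦·⟧_{∂T}.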

The following Proposition 5.5.2 is a generalization of [5, Lemma 5.1(5.4)], see also [41], in the sense that instead of ‖u^{r_τ} − u^{r_τ}_τ‖_V, the difference ‖u^{r_τ}_{τ′} − u^{r_τ}_τ‖_V for any τ′ ⊃ τ is estimated. It is shown that this difference can be bounded from above by the square root of the sum of the local error indicators corresponding to the simplices that were refined when creating τ′ from τ, or those that have non-empty intersection with such simplices. By taking τ′ = ∞, this result yields the known bound for ‖u^{r_τ} − u^{r_τ}_τ‖_V. Although our generalization is not so difficult to derive knowing the results from [5, 41], for completeness we include a proof.

Remark 5.5.1 Although in this and the next subsection it will be allowed that r_τ ∈ P_τ, actually we will think of it as being an approximation to p from P_σ for some "fixed" σ ⊆ τ.

Proposition 5.5.2 Let τ′ ⊃ τ be partitions, r_τ ∈ P_τ, f ∈ L_2(Ω)^d, and

F = F(τ, τ′) := {T ∈ τ : T ∩ T̃ ≠ ∅ for some T̃ ∈ τ that has been refined in τ′}.

Then we have

‖u^{r_τ}_{τ′} − u^{r_τ}_τ‖_V ≤ C_1 [∑_{T∈F} η_T(f, r_τ, u^{r_τ}_τ)]^{1/2},

for some absolute constant C_1 > 0. Note that #F ≲ #τ′ − #τ.
In particular, by taking τ′ = ∞, we have

‖u^{r_τ} − u^{r_τ}_τ‖_V ≤ C_1 E^E(τ, f, r_τ, u^{r_τ}_τ).

Proof. We have ‖u^{r_τ}_{τ′} − u^{r_τ}_τ‖_V ≂ sup_{0≠v_{τ′}∈V_{τ′}} a(u^{r_τ}_{τ′} − u^{r_τ}_τ, v_{τ′}) / ‖v_{τ′}‖_V. For any v_{τ′} ∈ V_{τ′}, v_τ ∈ V_τ, from u^{r_τ}_{τ′} − u^{r_τ}_τ ⊥_{a(·,·)} V_τ, the definition of u^{r_τ}_{τ′}, and integration by parts, we have

a(u^{r_τ}_{τ′} − u^{r_τ}_τ, v_{τ′}) = a(u^{r_τ}_{τ′} − u^{r_τ}_τ, v_{τ′} − v_τ)
 = ∑_{T∈τ} ∫_T [f · (v_{τ′} − v_τ) + r_τ div(v_{τ′} − v_τ) − ∇u^{r_τ}_τ : ∇(v_{τ′} − v_τ)]   (5.5.1)
 = ∑_{T∈τ} ∫_T (f − ∇r_τ + Δu^{r_τ}_τ) · (v_{τ′} − v_τ) + ∑_{e∈E_τ} ∫_e ⟦r_τ n − ∇u^{r_τ}_τ · n⟧_e · (v_{τ′} − v_τ).   (5.5.2)

We will select v_τ as a quasi-interpolant of v_{τ′}. For any T ∈ τ, let N_T = {x ∈ T : m λ_T(x) ∈ ℕ_0^{d+1}}, where λ_T(x) denotes the barycentric coordinates of x with respect to T. Corresponding to the local nodal basis {φ_{T,v} : v ∈ N_T} of P_m(T), defined by φ_{T,v}(w) = δ_{vw} (w ∈ N_T), there exists a dual basis {φ*_{T,v} : v ∈ N_T} of P_m(T), defined by 〈φ_{T,v}, φ*_{T,w}〉_{L_2(T)} = δ_{vw} (w ∈ N_T). A scaling argument shows that ‖φ*_{T,v}‖_{L_2(T)} ≲ meas(T)^{−1/2}. For any nodal point v ∈ ∪_{T∈τ} N_T, v ∉ ∂Ω, we now select a T_v ∈ τ with v ∈ T_v, and define v_τ ∈ V_τ by v_τ(v) = ∫_{T_v} v_{τ′} φ*_{T_v,v}. Its key properties are: for any v ∈ ∪_{T∈τ} N_T, v_τ(v) = v_{τ′}(v) when T_v ∈ τ′; for any T ∈ τ, ‖v_τ‖_{L_2(T)^d} ≲ ‖v_{τ′}‖_{L_2(Ω_T)^d}, where Ω_T := ∪{T̃ ∈ τ : T̃ ∩ T ≠ ∅}.

The first property shows that the sums in (5.5.2) vanish for any T or e for which all T̃ ∈ τ with T̃ ∩ T ≠ ∅ or T̃ ∩ e ≠ ∅ are also in τ′. It also shows that the interpolator


is actually a projector onto V_τ. From the second property, and either the fact that our interpolator reproduces any constant together with the Bramble-Hilbert lemma, or, in case T ∩ ∂Ω ≠ ∅, so that at least one of the T̃ that form Ω_T has a true hyperface on ∂Ω, the Poincaré-Friedrichs inequality, we have

diam(T)^{−1} ‖v_{τ′} − v_τ‖_{L_2(T)^d} + |v_{τ′} − v_τ|_{H^1(T)^d} ≲ |v_{τ′}|_{H^1(Ω_T)^d},   (5.5.3)

where also a homogeneity argument was used. For each e ∈ E_τ and either T ∈ τ on both sides of e, from the trace theorem and (5.5.3), we have

‖v_{τ′} − v_τ‖_{L_2(e)^d} ≲ diam(e)^{−1/2} ‖v_{τ′} − v_τ‖_{L_2(T)^d} + diam(e)^{1/2} |v_{τ′} − v_τ|_{H^1(T)^d}
 ≲ diam(e)^{1/2} |v_{τ′}|_{H^1(Ω_T)^d}.   (5.5.4)

By applying the Cauchy-Schwarz inequality to both sums from (5.5.2), and then substituting (5.5.3) or (5.5.4), the proof follows. □

Next we study whether the error estimator also provides a lower bound for ‖u^{r_τ} − u^{r_τ}_τ‖_V and, when τ′ is a sufficient refinement of τ, for ‖u^{r_τ}_{τ′} − u^{r_τ}_τ‖_V. In order to derive such estimates, for the time being we restrict the type of right-hand sides further, to piecewise polynomials of degree m − 1 with respect to τ. We will call τ′ ⊃ τ a full refinement with respect to T ∈ τ when

all T̃ ∈ F_τ(T), as well as all faces of T, contain a vertex of τ′ in their interiors,

see Figure 5.2 for an illustration. The following proposition was shown in [5, Lemma 5.3].

Figure 5.2: A refinement τ′ of τ, which is a full refinement with respect to a triangle T ∈ τ

[Actually, there a somewhat stronger condition on the refinement was imposed, but not used; a more general f was considered at the expense of an additional "oscillation" term in the expression; and, finally, there the general w_τ ∈ V_τ reads as u^{r_τ}_τ, whose additional property of being the solution of (5.4.5) was not used though.]


Proposition 5.5.3 Let τ be a partition, r_τ ∈ P_τ, and let us assume that f ∈ V*_τ. Let τ′ ⊃ τ be a full refinement of τ with respect to T ∈ τ. Then for any w_τ ∈ V_τ, we have

η_T(f, r_τ, w_τ) ≲ ∑_{T̃∈F_τ(T)} |u^{r_τ}_{τ′} − w_τ|²_{H^1(T̃)^d}.

As a straightforward consequence we have

Corollary 5.5.4 In the situation of Proposition 5.5.3, let τ′ ⊃ τ be a full refinement of τ with respect to all T from some F ⊂ τ. Then

c_2 [∑_{T∈F} η_T(f, r_τ, w_τ)]^{1/2} ≤ ‖u^{r_τ}_{τ′} − w_τ‖_V,

for some absolute constant c_2 > 0. In particular, we have

c_2 E^E(τ, f, r_τ, w_τ) ≤ ‖u^{r_τ} − w_τ‖_V.   (5.5.5)

Finally in this subsection, we investigate the stability of the elliptic error estimator.

Proposition 5.5.5 Let τ be a partition, r_τ ∈ P_τ, f ∈ L_2(Ω)^d, and v_τ, w_τ ∈ V_τ. Then

c_2 |E^E(τ, f, r_τ, v_τ) − E^E(τ, f, r_τ, w_τ)| ≤ ‖v_τ − w_τ‖_V.

Proof. For g ∈ L_2(Ω)^d, q_τ ∈ P_τ, by two applications of the triangle inequality in the form |‖·‖ − ‖·‖| ≤ ‖· − ·‖, first for vectors and then for functions, we have

|E^E(τ, f, r_τ, v_τ) − E^E(τ, g, q_τ, w_τ)| ≤ E^E(τ, f − g, r_τ − q_τ, v_τ − w_τ).

By substituting g = f and q_τ = r_τ, and by applying (5.5.5), the proof is completed. □

5.5.2 A posteriori error estimator for the (reduced) Stokes problem

For partitions τ, ϱ, r_τ ∈ P_τ, f ∈ L_2(Ω)^d, and w_τ ∈ V_τ, we consider the estimator

E^S(ϱ, τ, f, r_τ, w_τ) := E^E(τ, f, r_τ, w_τ) + ‖Q_ϱ div w_τ‖_{L_2(Ω)}.

We are going to apply the results from this subsection for ϱ = ∞ (the full, unreduced Stokes problem) as well as for ϱ = σ ⊆ τ.

Proposition 5.5.6 For partitions ϱ, τ, r_τ ∈ P_τ, and f ∈ L_2(Ω)^d, we have

‖u_ϱ − u^{r_τ}_τ‖_V + ‖p_ϱ − r_τ‖_P ≤ C_3 E^S(ϱ, τ, f, r_τ, u^{r_τ}_τ),

for some absolute constant C_3 > 0.


Proof. The proof given in [41] (cf. [5, Lemma 4.1]) for ϱ = ∞ easily generalizes to ϱ ⊊ ∞. Since it can be derived by a variation of the proof of Proposition 5.5.2, we only sketch the idea.

For any (v, q_ϱ) ∈ V × P_ϱ, v_τ ∈ V_τ, we have

a(u_ϱ − u^{r_τ}_τ, v) + b(v, p_ϱ − r_τ) + b(u_ϱ − u^{r_τ}_τ, q_ϱ)
 = a(u_ϱ − u^{r_τ}_τ, v − v_τ) + b(v − v_τ, p_ϱ − r_τ) − b(u^{r_τ}_τ, q_ϱ).

Together, the first two terms in the second line are equal to (5.5.1) with v_{τ′} reading as v. By estimating them in the same way using that r_τ ∈ P_τ, applying the obvious estimate for b(u^{r_τ}_τ, q_ϱ), and, finally, invoking (5.2.3), the proof follows. □

The following result was shown in [41] (cf. [5, Lemma 4.3]) for the case ϱ = ∞, but the proof generalizes immediately to general partitions ϱ.

Proposition 5.5.7 Let τ, ϱ be partitions, r_τ ∈ P_τ, w_τ ∈ V_τ, and let us assume that f ∈ V*_τ. Then for T ∈ τ, we have

η_T(f, r_τ, w_τ) ≲ ∑_{T̃∈F_τ(T)} [|u_ϱ − w_τ|²_{H^1(T̃)^d} + ‖p_ϱ − r_τ‖²_{L_2(T̃)}].

Since for T ∈ ϱ, with Q_T denoting the L_2(T)-orthogonal projector onto P_{m−1}(T), we have ‖Q_T div w_τ‖_{L_2(T)} ≤ |u_ϱ − w_τ|_{H^1(T)^d}, we conclude that

c_4 E^S(ϱ, τ, f, r_τ, w_τ) ≤ ‖u_ϱ − w_τ‖_V + ‖p_ϱ − r_τ‖_P,

for some absolute constant c_4 > 0.

The last result in this subsection provides an a posteriori error estimator for the outer elliptic problem.

Proposition 5.5.8 For a partition ϱ and r ∈ P, we have

c_6 ‖Q_ϱ div u^r‖_{L_2(Ω)} ≤ ‖p_ϱ − r‖_P ≤ C_5 ‖Q_ϱ div u^r‖_{L_2(Ω)},

for some absolute constants C_5, c_6 > 0.

Proof. Use

sup_{0≠v∈V} b(v, p_ϱ − r)/‖v‖_V = sup_{0≠v∈V} a(u^r − u_ϱ, v)/‖v‖_V = ‖u_ϱ − u^r‖_V,

and thus

β ≤ ‖u_ϱ − u^r‖_V / ‖p_ϱ − r‖_P ≤ 1,   (5.5.6)

and

‖p_ϱ − r‖_P + ‖u_ϱ − u^r‖_V ≂ sup_{0≠(v,q_ϱ)∈V×P_ϱ} [a(u_ϱ − u^r, v) + b(v, p_ϱ − r) + b(u_ϱ − u^r, q_ϱ)] / (‖v‖_V + ‖q_ϱ‖_P) = ‖Q_ϱ div u^r‖_{L_2(Ω)}. □


5.6 Adaptive refinements resulting in error reduction

For both elliptic problems Sp = BA^{−1}f and a(u^r, v) = f(v) − b(v, r) (v ∈ V), the latter for some given r ∈ P, we construct adaptive refinement routines based on the a posteriori error estimators. Given (approximate) Galerkin solutions from P_σ or V_τ, respectively, they produce refinements σ̄ ⊃ σ or τ̄ ⊃ τ such that the Galerkin solutions with respect to these partitions have strictly smaller errors. Moreover, we will give bounds on the number of refined simplices that eventually will lead to the conclusion that our adaptive Stokes solver generates quasi-optimal partitions.

5.6.1 Adaptive pressure refinements

With C_5, c_6 being the constants from Proposition 5.5.8, for some absolute constants

d ∈ (1 − c_6²/C_5², 1],  D ≥ 1,  θ ∈ (0, [1 − (1 − c_6²/C_5²)/d]^{1/2}),   (5.6.1)

we assume that we have the following routine available. We think of its arguments r_σ and w as being approximations to p and u^{r_σ}, respectively.

Algorithm 5.1 REFpres

REFpres[σ, r_σ, w] → σ̄
/* σ is a partition, r_σ ∈ P_σ and w ∈ V. */
Select a partition σ̄ ⊃ σ with

‖Q_σ̄ div w‖_{L_2(Ω)} ≥ θ ‖div w‖_{L_2(Ω)},   (5.6.2)

such that

#σ̄ − #σ ≤ D (#σ̃ − #σ)

for any σ̃ ⊃ σ with ‖Q_σ̃ div w‖_{L_2(Ω)} ≥ √(1 − d(1 − θ²)) ‖div w‖_{L_2(Ω)}.

Remark 5.6.1 We will make calls σ̄ := REFpres[σ, r_σ, w] only for an argument w from V_τ for some τ ⊃ σ, so that div w ∈ P_τ. It is therefore no restriction to assume that then σ̄ ⊆ τ. The fact that the partition underlying the pressure approximation is always contained in or equal to that underlying the velocity approximation will be essential for the adaptive refinement routine for reducing the error in the inner elliptic problem.

The benefit of REFpres appears from the following two lemmas:

Lemma 5.6.2 Let σ be a partition, and r_σ ∈ P_σ. Then for σ̄ = REFpres[σ, r_σ, u^{r_σ}], we have

‖p − p_σ̄‖_P ≤ [1 − c_6²θ²/C_5²]^{1/2} ‖p − r_σ‖_P.


Moreover,

#σ̄ − #σ ≤ D [#σ̃ − #τ_0]

for any partition σ̃ for which

inf_{q_σ̃∈P_σ̃} ‖p − q_σ̃‖_P ≤ [1 − (C_5²/c_6²)(1 − d(1 − θ²))]^{1/2} ‖p − r_σ‖_P.   (5.6.3)

(Note that (5.6.1) implies that (C_5²/c_6²)(1 − d(1 − θ²)) < 1.)

Proof. The first statement follows from

‖p − r_σ‖²_P = ‖p − p_σ̄‖²_P + ‖p_σ̄ − r_σ‖²_P

and ‖p_σ̄ − r_σ‖_P ≥ c_6 ‖Q_σ̄ div u^{r_σ}‖_{L_2(Ω)} ≥ c_6 θ ‖div u^{r_σ}‖_{L_2(Ω)} ≥ (c_6θ/C_5) ‖p − r_σ‖_P by Proposition 5.5.8.

For a σ̃ satisfying (5.6.3), let σ̂ = σ̃ ∪ σ. Then from ‖p − p_σ̂‖_P ≤ inf_{q_σ̃∈P_σ̃} ‖p − q_σ̃‖_P, with λ := (C_5²/c_6²)(1 − d(1 − θ²)) we have

C_5² ‖Q_σ̂ div u^{r_σ}‖²_{L_2(Ω)} ≥ ‖p_σ̂ − r_σ‖²_P = ‖p − r_σ‖²_P − ‖p − p_σ̂‖²_P
 ≥ λ ‖p − r_σ‖²_P ≥ λ c_6² ‖div u^{r_σ}‖²_{L_2(Ω)}.

Noting that λ c_6²/C_5² = 1 − d(1 − θ²), by construction of σ̄, we conclude that

#σ̄ − #σ ≤ D [#σ̂ − #σ] ≤ D [#σ̃ − #τ_0]. □

Now we generalize Lemma 5.6.2 to the practically relevant situation that we have only an approximation to u^{r_σ} available:

Lemma 5.6.3 Let ω ∈ (0, θ) be a constant, σ a partition, r_σ ∈ P_σ, and w ∈ V with

‖div u^{r_σ} − div w‖_{L_2(Ω)} ≤ ω ‖div w‖_{L_2(Ω)}.

Then for σ̄ = REFpres[σ, r_σ, w], we have

‖p − p_σ̄‖_P ≤ [1 − c_6²(θ−ω)²/(C_5²(1+ω)²)]^{1/2} ‖p − r_σ‖_P.

Moreover, if ω is sufficiently small such that (ω + √(1 − d(1 − θ²)))/(1 − ω) < c_6/C_5, then

#σ̄ − #σ ≤ D [#σ̃ − #τ_0]

for any partition σ̃ for which, with ξ := [1 − ((ω + √(1 − d(1 − θ²)))/(1 − ω) · C_5/c_6)²]^{1/2},

inf_{q_σ̃∈P_σ̃} ‖p − q_σ̃‖_P ≤ ξ ‖p − r_σ‖_P.   (5.6.4)


Proof. Similar to the proof of Lemma 5.6.2. For the first part use that

‖Q_σ̄ div u^{r_σ}‖_{L_2(Ω)} ≥ ‖Q_σ̄ div w‖_{L_2(Ω)} − ω ‖div w‖_{L_2(Ω)} ≥ ((θ−ω)/(1+ω)) ‖div u^{r_σ}‖_{L_2(Ω)},

and, for the second part, with any σ̃ satisfying (5.6.4) and σ̂ = σ̃ ∪ σ, that

C_5 [‖Q_σ̂ div w‖_{L_2(Ω)} + ω ‖div w‖_{L_2(Ω)}] ≥ C_5 ‖Q_σ̂ div u^{r_σ}‖_{L_2(Ω)} ≥ ‖p_σ̂ − r_σ‖_P
 = [‖p − r_σ‖²_P − ‖p − p_σ̂‖²_P]^{1/2} ≥ √(1 − ξ²) ‖p − r_σ‖_P ≥ c_6 √(1 − ξ²) ‖div u^{r_σ}‖_{L_2(Ω)}
 ≥ (1 − ω) c_6 √(1 − ξ²) ‖div w‖_{L_2(Ω)},

or equivalently, ‖Q_σ̂ div w‖_{L_2(Ω)} ≥ √(1 − d(1 − θ²)) ‖div w‖_{L_2(Ω)}. □

As we said, we will make calls σ̄ := REFpres[σ, r_σ, w] only for an argument w from V_τ for some τ ⊃ σ, so that div w ∈ P_τ. Obviously, if, for some θ ∈ (0, c_6/C_5), REFpres is implemented as the selection of the smallest partition σ̄ ⊃ σ such that (5.6.2) is valid, it satisfies its requirements with d = 1 = D. Yet, in any case, a naive implementation of this algorithm would require computing ‖Q_σ̃ div w‖_{L_2(Ω)} for all partitions σ ⊊ σ̃ ⊊ τ, which is prohibitively expensive.

Recalling that any partition corresponds to a subtree of the infinite binary tree that is determined by the initial partition τ_0 of tagged simplices, alternatively one may apply the adaptive tree approximation algorithms from [8]. Prescribing a θ ∈ (0, 1), these algorithms are shown to fulfill the requirements on REFpres for some absolute constants 0 < d ≤ 1 ≤ D. Assuming that for any simplex T from any partition σ ⊊ σ̃ ⊊ τ the values inf_{q∈P_{m−1}(T)} ‖(div w)|_T − q‖_{L_2(T)} are known, which can be computed in O(#τ) operations, these algorithms produce σ̄ as in (5.6.2) in O(#σ̄) additional operations.

Unfortunately, it might be that the constant d derived in [8] is not larger than 1 − c_6²/C_5², as it should be in view of (5.6.1). So far, we have not verified whether the statements from [8] can be shown for any given d ∈ (0, 1) (likely at the expense of D → ∞ when d ↑ 1). Therefore, the statements in this chapter concerning the cost of our adaptive algorithm are valid under the following assumption:

Assumption 5.6.4 For w ∈ V_τ, the call REFpres[·, ·, w] takes O(#τ) operations.

Remark 5.6.5 Actually, so far for our experiments we used the easily implementable greedy algorithm: starting from σ, we bisect the simplex T, or those simplices T, with maximal inf_{q∈P_{m−1}(T)} ‖(div w)|_T − q‖_{L_2(T)}, until (5.6.2) is satisfied. Although there exist inputs w for which this greedy approach results in an output partition σ̄ that is not quasi-optimal, "usually" it works well (in any case when, for all T ∈ σ, (div w)|_T is sufficiently smooth).
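The greedy strategy of this remark can be sketched in a 1D analogue; the reduction to intervals, piecewise constant approximation (i.e. m − 1 = 0), and midpoint-rule quadrature is an illustrative assumption, not the simplicial setting of the text:

```python
import numpy as np

def greedy_refine_1d(g, theta, a=0.0, b=1.0, nquad=256):
    """1D sketch of the greedy implementation of REFpres: repeatedly bisect
    the interval with the largest local best-approximation error
    inf_c ||g - c||_{L2(I)} until (5.6.2) holds.  By Pythagoras,
    ||Q g|| >= theta ||g|| is equivalent to the remaining local errors
    summing to at most (1 - theta^2) ||g||^2."""
    def quad(fun, l, r):                       # midpoint-rule quadrature
        h = (r - l) / nquad
        xs = l + (np.arange(nquad) + 0.5) * h
        return fun(xs).sum() * h
    def local_err2(l, r):                      # inf_c ||g - c||^2_{L2(l,r)}
        e = quad(lambda x: g(x)**2, l, r) - quad(g, l, r)**2 / (r - l)
        return max(e, 0.0)                     # guard against round-off
    intervals, errs = [(a, b)], [local_err2(a, b)]
    total2 = quad(lambda x: g(x)**2, a, b)
    while sum(errs) > (1.0 - theta**2) * total2:
        i = int(np.argmax(errs))               # greedy: worst interval first
        l, r = intervals.pop(i); errs.pop(i)
        mid = 0.5 * (l + r)
        for piece in ((l, mid), (mid, r)):
            intervals.append(piece); errs.append(local_err2(*piece))
    return sorted(intervals)
```

As the remark notes, such a greedy loop is simple but need not produce a quasi-optimal partition for adversarial inputs.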

5.6.2 Adaptive velocity refinements

For some fixed

ζ ∈ (0, c_2/C_1),


we will make use of the following routine to determine a suitable adaptive refinement for an update of the velocity:

Algorithm 5.2 REFvel

REFvel[τ, g, r_τ, w_τ] → τ̄
/* τ is a partition, g ∈ L_2(Ω)^d, r_τ ∈ P_τ, and w_τ ∈ V_τ. */
Select a set F ⊂ τ with, up to some absolute factor, minimal cardinality such that

∑_{T∈F} η_T(g, r_τ, w_τ) ≥ ζ² E^E(τ, g, r_τ, w_τ)².   (5.6.5)

Construct the smallest τ̄ ⊃ τ which is a full refinement with respect to all T ∈ F.
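The selection of F is a bulk-chasing (Dörfler) marking. A minimal sketch, using exact sorting (O(#τ log #τ)) rather than an approximate sorting in linear time, could look as follows; the array eta holds the values η_T, which here already play the role of squared local indicators since E^E = [∑ η_T]^{1/2}:

```python
import numpy as np

def dorfler_mark(eta, zeta):
    """Select a minimal set F with sum_{T in F} eta_T >= zeta^2 * sum_T eta_T,
    cf. (5.6.5).  Returns the indices of the marked elements."""
    eta = np.asarray(eta, dtype=float)
    order = np.argsort(eta)[::-1]          # largest local indicators first
    cum = np.cumsum(eta[order])
    # smallest prefix of the sorted indicators reaching the bulk fraction
    k = int(np.searchsorted(cum, zeta**2 * eta.sum())) + 1
    return order[:k]
```

Taking the largest indicators first guarantees that the chosen prefix is of exactly minimal cardinality among all sets satisfying the bulk criterion.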

The next lemma will show the benefit of REFvel. It applies under the (unrealistic) assumptions that f ∈ V*_τ and that the Galerkin problems are solved exactly. In Lemma 5.8.1, given in the next section, inexact Galerkin solutions will be allowed, and the given right-hand side f ∈ V′ will be replaced by an approximation from V*_τ.

Note that when f ∈ V*_τ, the computation of all η_T(f, r_τ, w_τ) (T ∈ τ) can be done in O(#τ) operations. By doing an approximate sorting of the η_T(f, r_τ, w_τ) by their values, REFvel[τ, f, ·, ·] can be implemented in O(#τ) operations (cf. [37]).

Lemma 5.6.6 Let τ ⊇ σ be partitions, f ∈ V*_τ, and r_σ ∈ P_σ. Then for τ̄ = REFvel[τ, f, r_σ, u^{r_σ}_τ], we have

‖u^{r_σ} − u^{r_σ}_{τ̄}‖_V ≤ [1 − c_2²ζ²/C_1²]^{1/2} ‖u^{r_σ} − u^{r_σ}_τ‖_V.

Moreover, if ζ < c_2/C_1, and, for some absolute constant ϑ > 0,

‖u^{r_σ} − u^{r_σ}_τ‖_V ≥ ϑ ‖u − u^{r_σ}‖_V,   (5.6.6)

then for the set of marked simplices F inside REFvel, we have

#F ≲ #τ̃ + #σ̃ + #σ   (5.6.7)

for any partitions τ̃ and σ̃ for which

inf_{v_τ̃∈V_τ̃} ‖u − v_τ̃‖_V ≤ (1/3) [1 − C_1²ζ²/c_2²]^{1/2} ‖u^{r_σ} − u^{r_σ}_τ‖_V,   (5.6.8)

inf_{q_σ̃∈P_σ̃} ‖p − q_σ̃‖_P ≤ (1/3) [1 − C_1²ζ²/c_2²]^{1/2} ‖u^{r_σ} − u^{r_σ}_τ‖_V.   (5.6.9)

Remark 5.6.7 Note that the bound on #F in terms of u, p (via τ̃ and σ̃) and σ can only be shown when (the variational formulation of) −Δu^{r_σ} = f − ∇r_σ is not solved too accurately, which is enforced by (5.6.6). By assuming that u and p are in certain approximation classes, i.e., that these functions can be approximated at certain rates by finite element functions with respect to the best partitions, later we will derive quasi-optimal bounds for #τ̃ and #σ̃, as well as for #σ via Lemma 5.6.3, and so in the end on #F. Without imposing (5.6.6), we would only arrive at a similar bound on #F if we assumed that all approximations u^{r_σ} to u, corresponding to all approximate pressures r_σ that are created in the adaptive method, are similarly easy to approximate as u, which is an unverifiable assumption.

Proof. The first statement follows from

‖u^{r_σ} − u^{r_σ}_τ‖²_V = ‖u^{r_σ} − u^{r_σ}_{τ̄}‖²_V + ‖u^{r_σ}_{τ̄} − u^{r_σ}_τ‖²_V,

and ‖u^{r_σ}_{τ̄} − u^{r_σ}_τ‖_V ≥ c_2 ζ E^E(τ, f, r_σ, u^{r_σ}_τ) ≥ (c_2ζ/C_1) ‖u^{r_σ} − u^{r_σ}_τ‖_V by Corollary 5.5.4 and Proposition 5.5.2.

Let τ̃ be a partition for which

inf_{v_τ̃∈V_τ̃} ‖u^{r_σ} − v_τ̃‖_V ≤ [1 − C_1²ζ²/c_2²]^{1/2} ‖u^{r_σ} − u^{r_σ}_τ‖_V,   (5.6.10)

and let τ̂ = τ̃ ∪ τ. Then with F̂ = F(τ, τ̂) from Proposition 5.5.2, we have

C_1² ∑_{T∈F̂} η_T(f, r_σ, u^{r_σ}_τ) ≥ ‖u^{r_σ}_{τ̂} − u^{r_σ}_τ‖²_V = ‖u^{r_σ} − u^{r_σ}_τ‖²_V − ‖u^{r_σ} − u^{r_σ}_{τ̂}‖²_V
 ≥ (C_1²ζ²/c_2²) ‖u^{r_σ} − u^{r_σ}_τ‖²_V ≥ C_1²ζ² E^E(τ, f, r_σ, u^{r_σ}_τ)².

By construction of F, we infer that

#F ≲ #F̂ ≲ #τ̂ − #τ ≤ #τ̃ − #τ_0.   (5.6.11)

It remains to bound #τ̃ − #τ_0 for a τ̃ as in (5.6.10). With σ̃ as in (5.6.9), we write u^{r_σ} = (u^{r_σ} − u_σ̃) + (u_σ̃ − u) + u, and approximate each of the three terms within tolerance (1/3)[1 − C_1²ζ²/c_2²]^{1/2} ‖u^{r_σ} − u^{r_σ}_τ‖_V by finite element functions (the second one by zero). From (5.5.6), we have

‖u − u_σ̃‖_V ≤ ‖p − p_σ̃‖_P ≤ (1/3) [1 − C_1²ζ²/c_2²]^{1/2} ‖u^{r_σ} − u^{r_σ}_τ‖_V.

The vector field w = u^{r_σ} − u_σ̃ solves

a(w, v) = −b(v, r_σ − p_σ̃),  (v ∈ V).

With σ̂ = σ̃ ∪ σ, the error in its best approximation w_σ̂ from V_σ̂ can be bounded by

‖w − w_σ̂‖_V ≤ ‖w‖_V ≤ ‖u^{r_σ} − u‖_V + ‖u − u_σ̃‖_V ≤ (ϑ^{−1} + (1/3)[1 − C_1²ζ²/c_2²]^{1/2}) ‖u^{r_σ} − u^{r_σ}_τ‖_V.

With σ_0 = σ̂, for k = 1, 2, …, let σ_k ⊃ σ_{k−1} be the smallest partition that is a full refinement with respect to all T ∈ σ_{k−1}. Then, using the fact that r_σ − p_σ̃ ∈ P_{σ_0}, as in the first part of this lemma with now ζ = 1, we have ‖w − w_{σ_k}‖_V ≤ [1 − c_2²/C_1²]^{k/2} ‖w − w_σ̂‖_V, where w_{σ_k} denotes the best approximation to w from V_{σ_k}. With k being the smallest integer with [1 − c_2²/C_1²]^{k/2} (ϑ^{−1} + (1/3)[1 − C_1²ζ²/c_2²]^{1/2}) ≤ (1/3)[1 − C_1²ζ²/c_2²]^{1/2}, we conclude that

inf_{v_τ̃∈V_τ̃} ‖u^{r_σ} − (v_τ̃ + w_{σ_k})‖_V ≤ ‖u^{r_σ} − u_σ̃ − w_{σ_k}‖_V + ‖u_σ̃ − u‖_V + inf_{v_τ̃∈V_τ̃} ‖u − v_τ̃‖_V
 ≤ [1 − C_1²ζ²/c_2²]^{1/2} ‖u^{r_σ} − u^{r_σ}_τ‖_V.

Since v_τ̃ + w_{σ_k} ∈ V_{τ̃∪σ_k}, and #(τ̃ ∪ σ_k) − #τ_0 ≲ #τ̃ + #σ̃ + #σ (dependent on k and thus on ϑ), in view of (5.6.10) and (5.6.11) the proof is completed. □

Remark 5.6.8 If in (5.6.6), ϑ > [1 − C_1²ζ²/c_2²]^{−1/2}, then by a simplification of the above proof, instead of (5.6.7), one obtains that

#F ≲ #τ̃ − #τ_0

for any τ̃ with

inf_{v_τ̃∈V_τ̃} ‖u − v_τ̃‖_V ≤ ([1 − C_1²ζ²/c_2²]^{1/2} − ϑ^{−1}) ‖u^{r_σ} − u^{r_σ}_τ‖_V,

which bound on #F is in particular independent of the pressure p. It is, however, not clear whether, under the condition ‖u^{r_σ} − u^{r_σ}_τ‖_V ≥ ϑ ‖u − u^{r_σ}‖_V for such ϑ, the inner elliptic problem is solved sufficiently accurately to obtain a convergent inexact Uzawa algorithm for solving the reduced Stokes problem of finding (u_σ, p_σ). Knowing that we can control #σ because of our outermost loop producing Galerkin approximations to p, Lemma 5.6.6 provides a way to avoid the condition ϑ > [1 − C_1²ζ²/c_2²]^{−1/2}. This point is the exact reason why we did not succeed in proving quasi-optimality of the Uzawa iteration for solving the full Stokes problem, i.e., without our outermost loop. Indeed, with that method there is no separate control over the partitions that underlie the pressure approximations.

5.7 An adaptive method for the Stokes problem in an idealized setting

For s > 0, we define the approximation class

A^s_V = {v ∈ V : |v|_{A^s_V} := sup_{ε>0} ε · inf_{τ : inf_{v_τ∈V_τ} ‖v−v_τ‖_V ≤ ε} [#τ − #τ_0]^s < ∞},

and equip it with norm ‖v‖_{A^s_V} := ‖v‖_V + |v|_{A^s_V}. So A^s_V is the class of vector fields that can be approximated within any given tolerance ε > 0 by a v_τ ∈ V_τ for some partition τ with #τ − #τ_0 ≲ ε^{−1/s} |v|^{1/s}_{A^s_V}. Similarly, we define

A^s_P = {q ∈ P : |q|_{A^s_P} := sup_{ε>0} ε · inf_{σ : inf_{q_σ∈P_σ} ‖q−q_σ‖_P ≤ ε} [#σ − #τ_0]^s < ∞},

and equip it with norm ‖q‖_{A^s_P} := ‖q‖_P + |q|_{A^s_P}.


Because of the polynomial degrees of our approximations, only for s ≤ m/d can membership u ∈ A^s_V or p ∈ A^s_P be enforced by imposing suitable smoothness conditions on u or p, respectively. These smoothness conditions, however, are much milder than requiring that u ∈ H^{1+sd}(Ω)^d or p ∈ H^{sd}(Ω), which would be needed when only uniformly refined partitions were considered. The approximation classes can be (nearly) characterized as certain Besov spaces (see [7] for details). In any case, for d = 2 and sufficiently smooth f, it is known (see [16]) that u and p have sufficient Besov smoothness so that they are in A^s_V or A^s_P for any s < m/d.

The results derived in Section 5.6.2 concerning adaptive velocity refinements were valid under the assumption that f is piecewise polynomial of degree m − 1 with respect to the current partition. In order not to make our exposition too complicated, in this section we will assume that f is piecewise polynomial of degree m − 1 with respect to any partition that we encounter, i.e., that it is in V*_{τ_0}. In the next section, we will remove this restriction. Furthermore, for the moment we assume that the arising finite dimensional linear systems are solved exactly, i.e., we do not care about the computational cost. In the next section, by applying iterative solvers, we will show quasi-optimal computational complexity.

The following algorithm is an implementation of the solution method that was announced in Section 5.4, with the simplifications mentioned above. Note that the do-loop in the algorithm actually consists of 3 nested loops, over i, j and k. The loop over i concerns an adaptive method for solving p from the Schur complement equation. The loop over j concerns the Uzawa method for solving p_{σ_i} from the reduced Stokes problem. Finally, the loop over k concerns an adaptive method for solving u^{p^{(i)}_j} ∈ V from a(u^{p^{(i)}_j}, v) = f(v) − b(v, p^{(i)}_j) (v ∈ V). We have formulated these loops as one loop to deal efficiently with the complicated stopping criteria. E.g., the innermost one stops when either E^E(···) ≤ α E^S(σ_i, ···) or E^S(σ_i, ···) ≤ κ E^S(∞, ···) or E^S(∞, ···) ≤ ε, i.e., when either

‖u^{p^{(i)}_j} − u^{(i)}_{j,k}‖_V ≤ C_1 α c_4^{−1} [‖p_{σ_i} − p^{(i)}_j‖_P + ‖u_{σ_i} − u^{(i)}_{j,k}‖_V] or
‖p_{σ_i} − p^{(i)}_j‖_P + ‖u_{σ_i} − u^{(i)}_{j,k}‖_V ≤ C_3 κ c_4^{−1} [‖p − p^{(i)}_j‖_P + ‖u − u^{(i)}_{j,k}‖_V] or
‖p − p^{(i)}_j‖_P + ‖u − u^{(i)}_{j,k}‖_V ≤ ε.


Algorithm 5.3 STOKESSOLVE0

STOKESSOLVE0[f, ε] → [σ_i, p^{(i)}_j, τ^{(i)}_{j,k}, u^{(i)}_{j,k}]
/* For this preliminary version of the adaptive solver it is assumed that f ∈ V*_{τ_0}.
Let the parameter ζ from REFvel satisfy ζ ∈ (0, c_2/C_1), and θ from REFpres satisfy θ ∈ (0, [1 − (1 − c_6²/C_5²)/d]^{1/2}). For some ω ∈ (0, θ) small enough such that (ω + √(1 − d(1 − θ²)))/(1 − ω) < c_6/C_5, fix some sufficiently small constants κ, α > 0 such that
κ < 1,  C_1κ/(1−κ) ≤ ω,  κC_1 < c_4,  [1 − 2κC_3/(c_4 − κC_1)]^{−1} [1 − c_6²(θ−ω)²/(C_5²(1+ω)²)]^{1/2} < 1,  αC_1 < c_4,  and  1 − β² + 2αC_1/(c_4 − αC_1) < 1. */
p^{(0)}_0 := 0, σ_0 := τ^{(0)}_{0,0} := τ_0
i := j := k := 0
do u^{(i)}_{j,k} := u^{p^{(i)}_j}_{τ^{(i)}_{j,k}}   % i.e., u^{(i)}_{j,k} ∈ V_{τ^{(i)}_{j,k}}, a(u^{(i)}_{j,k}, v) = f(v) − b(v, p^{(i)}_j), (v ∈ V_{τ^{(i)}_{j,k}})
    if C_3 E^S(∞, τ^{(i)}_{j,k}, f, p^{(i)}_j, u^{(i)}_{j,k}) ≤ ε then stop
    elsif E^S(σ_i, τ^{(i)}_{j,k}, f, p^{(i)}_j, u^{(i)}_{j,k}) ≤ κ E^S(∞, τ^{(i)}_{j,k}, f, p^{(i)}_j, u^{(i)}_{j,k}) then
        σ_{i+1} := REFpres[σ_i, p^{(i)}_j, u^{(i)}_{j,k}]
        p^{(i+1)}_0 := p^{(i)}_j, τ^{(i+1)}_{0,0} := τ^{(i)}_{j,k}
        i++, j := k := 0
    elsif E^E(τ^{(i)}_{j,k}, f, p^{(i)}_j, u^{(i)}_{j,k}) ≤ α E^S(σ_i, τ^{(i)}_{j,k}, f, p^{(i)}_j, u^{(i)}_{j,k}) then
        p^{(i)}_{j+1} := p^{(i)}_j − Q_{σ_i} div u^{(i)}_{j,k}
        τ^{(i)}_{j+1,0} := τ^{(i)}_{j,k}
        j++, k := 0
    else
        τ^{(i)}_{j,k+1} := REFvel[τ^{(i)}_{j,k}, f, p^{(i)}_j, u^{(i)}_{j,k}]
        k++
    endif
enddo

Theorem 5.7.1 (I) Let f ∈ V*_{τ_0}. Then [σ_i, p^{(i)}_j, τ^{(i)}_{j,k}, u^{(i)}_{j,k}] := STOKESSOLVE0[f, ε] terminates, and ‖u − u^{(i)}_{j,k}‖_V + ‖p − p^{(i)}_j‖_P ≤ ε. (II) If, for some s > 0, p ∈ A^s_P, then #σ_i − #τ_0 ≲ ε^{−1/s} |p|^{1/s}_{A^s_P}, only dependent on τ_0 and on s when it tends to 0 or infinity. If, in addition, for some s̃ > 0, u ∈ A^{s̃}_V, then with s̄ = min(s, s̃), #τ^{(i)}_{j,k} − #τ_0 ≲ ε^{−1/s̄} (‖p‖^{1/s̄}_{A^{s̄}_P} + ‖u‖^{1/s̄}_{A^{s̄}_V}), only dependent on τ_0, and on s̄ when it tends to 0 or infinity.

Remark 5.7.2 Note that, in view of the assumptions, #σ_i − #τ_0 is at most a constant multiple larger than this expression for the best partition giving rise to such an error in the pressure. Similarly, #τ^{(i)}_{j,k} − #τ_0 is at most a constant multiple larger than this expression for the best partition on which p and u can be approximated by piecewise polynomials of degree m − 1, or continuous piecewise polynomials of degree m, with errors less than or equal to ε in ‖·‖_P or ‖·‖_V, respectively.

Proof. (I) Given i and j, k̄ = k̄(i, j) will denote the maximum value of k for those i and j. Given an i, j̄ = j̄(i) will denote the maximum value of j for that i, and k̄ = k̄(i) := k̄(i, j̄(i)) is the maximum value of k for that i. Finally, ī will denote the maximum value of i.

If the test C_3 E^S(∞, τ^{(i)}_{j,k}, f, p^{(i)}_j, u^{(i)}_{j,k}) ≤ ε is passed, i.e., (i, j, k) = (ī, j̄(ī), k̄(ī)), then ‖u − u^{(i)}_{j,k}‖_V + ‖p − p^{(i)}_j‖_P ≤ ε by Proposition 5.5.6.

The inequality E^S(σ_i, τ^{(i)}_{j,k}, f, p^{(i)}_j, u^{(i)}_{j,k}) ≤ κ E^S(∞, τ^{(i)}_{j,k}, f, p^{(i)}_j, u^{(i)}_{j,k}) is equivalent to

E^E(τ^{(i)}_{j,k}, f, p^{(i)}_j, u^{(i)}_{j,k}) ≤ (1 − κ)^{−1} (κ ‖div u^{(i)}_{j,k}‖_{L_2(Ω)} − ‖Q_{σ_i} div u^{(i)}_{j,k}‖_{L_2(Ω)}).

So if this test is passed, i.e., (j, k) = (j̄(i), k̄(i)), then by Propositions 5.5.2, 5.5.7 and 5.5.6,

passed, i.e., (j, k) = (j(i), k(i)), then by Propositions 5.5.2, 5.5.7 and 5.5.6,

C−11 ‖up

(i)j − u

(i)j,k‖V ≤ EE(τ

(i)j,k, f , p

(i)j ,u

(i)j,k) ≤ κ

1−κ‖divu

(i)j,k‖L2(Ω), (5.7.1)

C−11 ‖up

(i)j − u

(i)j,k‖V ≤ κc−1

4 (‖u− u(i)j,k‖V + ‖p− p

(i)j ‖P), (5.7.2)

C−13 ‖pσi

− p(i)j ‖P ≤ κc−1

4 (‖u− u(i)j,k‖V + ‖p− p

(i)j ‖P). (5.7.3)

By (5.7.1), ‖div ·‖_{L_2(Ω)} ≤ ‖·‖_V on V, and C_1κ/(1−κ) ≤ ω, Lemma 5.6.3 shows that

‖p − p_{σ_{i+1}}‖_P ≤ [1 − c_6²(θ−ω)²/(C_5²(1+ω)²)]^{1/2} ‖p − p^{(i)}_j‖_P,   (5.7.4)

and furthermore, since (ω + √(1 − d(1 − θ²)))/(1 − ω) < c_6/C_5 and in view of the definition of A^s_P, that

#σ_{i+1} − #σ_i ≲ ‖p − p^{(i)}_j‖_P^{−1/s} |p|^{1/s}_{A^s_P}.   (5.7.5)

By (5.7.2), κ ≤ c_4 C_1^{−1}, and (5.5.6), we have

‖u − u^{(i)}_{j,k}‖_V ≤ ‖u − u^{p^{(i)}_j}‖_V + (κC_1/(c_4 − κC_1)) (‖u − u^{p^{(i)}_j}‖_V + ‖p − p^{(i)}_j‖_P) ≤ ((c_4 + κC_1)/(c_4 − κC_1)) ‖p − p^{(i)}_j‖_P,   (5.7.6)

and so by (5.7.3),
\[
\|p_{\sigma_i}-p^{(i)}_j\|_P \le \tfrac{2\kappa C_3}{c_4-\kappa C_1}\|p-p^{(i)}_j\|_P. \tag{5.7.7}
\]

With $\rho_1:=\bigl[1-\tfrac{2\kappa C_3}{c_4-\kappa C_1}\bigr]^{-1}\bigl[1-\tfrac{c_6^2(\theta-\omega)^2}{C_5^2(1+\omega)^2}\bigr]^{\frac12}<1$, combining (5.7.7), with $i$ reading as $i+1$, and (5.7.4) shows that
\[
\|p-p^{(i+1)}_{\bar j(i+1)}\|_P \le \rho_1\|p-p^{(i)}_{\bar j(i)}\|_P. \tag{5.7.8}
\]

Since $c_4E_S(\infty,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})\le\|u-u^{(i)}_{j,k}\|_V+\|p-p^{(i)}_j\|_P$, together (5.7.8) and (5.7.6) show that the algorithm terminates, assuming each loop over $j$ does, which we show next.

If $E_E(\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})\le\alpha E_S(\sigma_i,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})$ is passed, i.e., $k=\bar k(i,j)$, then
\[
\|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V \le \alpha C_1c_4^{-1}\bigl(\|p_{\sigma_i}-p^{(i)}_j\|_P+\|u_{\sigma_i}-u^{(i)}_{j,k}\|_V\bigr). \tag{5.7.9}
\]

Estimating $\|u_{\sigma_i}-u^{(i)}_{j,k}\|_V \le \|u_{\sigma_i}-u_{p^{(i)}_j}\|_V+\|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V$, applying (5.7.9) and $\|u_{\sigma_i}-u_{p^{(i)}_j}\|_V\le\|p_{\sigma_i}-p^{(i)}_j\|_P$ (by (5.5.6)), we obtain that
\[
\|u_{\sigma_i}-u^{(i)}_{j,k}\|_V \le \tfrac{c_4+\alpha C_1}{c_4-\alpha C_1}\|p_{\sigma_i}-p^{(i)}_j\|_P, \tag{5.7.10}
\]
and by substituting this in (5.7.9), that
\[
\|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V \le \tfrac{2\alpha C_1}{c_4-\alpha C_1}\|p_{\sigma_i}-p^{(i)}_j\|_P. \tag{5.7.11}
\]

With $\rho_2:=1-\beta^2+\tfrac{2\alpha C_1}{c_4-\alpha C_1}<1$, by the statement (5.4.3) concerning convergence of the exact Uzawa iteration, we infer that
\[
\|p_{\sigma_i}-p^{(i)}_{j+1}\|_P \le \rho_2\|p_{\sigma_i}-p^{(i)}_j\|_P. \tag{5.7.12}
\]

Since $c_4E_S(\sigma_i,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})\le\|u_{\sigma_i}-u^{(i)}_{j,k}\|_V+\|p_{\sigma_i}-p^{(i)}_j\|_P$, together (5.7.11) and (5.7.12) show that each loop over $j$ terminates by either $E_S(\sigma_i,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})\le\kappa E_S(\infty,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})$ or $C_3E_S(\infty,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})\le\varepsilon$, assuming each loop over $k$ terminates, which we show next.

With $\rho_3:=\bigl[1-\tfrac{c_2^2\zeta^2}{C_1^2}\bigr]^{\frac12}<1$, Lemma 5.6.6 shows that
\[
\|u_{p^{(i)}_j}-u^{(i)}_{j,k+1}\|_V \le \rho_3\|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V. \tag{5.7.13}
\]

Since $c_2E_E(\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})\le\|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V$, from (5.7.13) we infer that each loop over $k$ terminates by either $E_E(\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})\le\alpha E_S(\sigma_i,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})$ or $E_S(\sigma_i,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})\le\kappa E_S(\infty,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})$ or $C_3E_S(\infty,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k})\le\varepsilon$. With this, part (I) of the theorem is proven.

Before starting with the second part, we collect some estimates for $\|p-p^{(0)}_{\bar j(0)}\|_P$, $\|p_{\sigma_i}-p^{(i)}_0\|_P$ and $\|u_{p^{(i)}_j}-u^{(i)}_{j,0}\|_V$, i.e., the initial values for the recursions (5.7.8), (5.7.12), and (5.7.13) over $i$, $j$ and $k$, respectively. From (5.7.7) we infer that
\[
\|p-p^{(0)}_{\bar j(0)}\|_P \le \bigl[1-\tfrac{2\kappa C_3}{c_4-\kappa C_1}\bigr]^{-1}\|p-p_{\tau_0}\|_P \le \bigl[1-\tfrac{2\kappa C_3}{c_4-\kappa C_1}\bigr]^{-1}\|p\|_P. \tag{5.7.14}
\]

For $j=0$ and $i=0$, we have that
\[
\|p_{\sigma_0}-p^{(0)}_0\|_P \le \|p_{\sigma_0}-p\|_P+\|p\|_P \le 2\|p\|_P, \tag{5.7.15}
\]

and for $j=0$ and $i>0$, that
\[
\|p_{\sigma_i}-p^{(i)}_0\|_P = \|p_{\sigma_i}-p^{(i-1)}_{\bar j(i-1)}\|_P \le \|p_{\sigma_i}-p\|_P+\|p-p^{(i-1)}_{\bar j(i-1)}\|_P \le \Bigl(\bigl[1-\tfrac{c_6^2(\theta-\omega)^2}{C_5^2(1+\omega)^2}\bigr]^{\frac12}+1\Bigr)\|p-p^{(i-1)}_{\bar j(i-1)}\|_P \tag{5.7.16}
\]
by (5.7.4). For $k=j=i=0$, we have
\[
\|u_{p^{(0)}_0}-u^{(0)}_{0,0}\|_V \le \|u_{p^{(0)}_0}\|_V = \|f\|_{V'}. \tag{5.7.17}
\]

For $k=j=0$ and $i>0$, we have $p^{(i)}_0=p^{(i-1)}_{\bar j(i-1)}$ and $\tau^{(i)}_{0,0}=\tau^{(i-1)}_{\bar j(i-1),\bar k(i-1)}$, and so
\[
\|u_{p^{(i)}_0}-u^{(i)}_{0,0}\|_V = \|u_{p^{(i-1)}_{\bar j(i-1)}}-u^{(i-1)}_{\bar j(i-1),\bar k(i-1)}\|_V \le \tfrac{2C_1\kappa}{c_4-\kappa C_1}\|p-p^{(i-1)}_{\bar j(i-1)}\|_P \tag{5.7.18}
\]

by (5.7.2) and (5.7.6). For $k=0$ and $j>0$, we have
\[
\|u_{p^{(i)}_j}-u^{(i)}_{j,0}\|_V \le \|u_{p^{(i)}_j}-u^{(i)}_{j-1,\bar k(j-1)}\|_V \le \|u_{p^{(i)}_j}-u_{p^{(i)}_{j-1}}\|_V+\|u_{p^{(i)}_{j-1}}-u^{(i)}_{j-1,\bar k(j-1)}\|_V.
\]
Now using that by (5.5.6)
\[
\|u_{p^{(i)}_j}-u_{p^{(i)}_{j-1}}\|_V \le \|p^{(i)}_j-p^{(i)}_{j-1}\|_P \le \|u_{\sigma_i}-u^{(i)}_{j-1,\bar k(j-1)}\|_V \le \|u_{\sigma_i}-u_{p^{(i)}_{j-1}}\|_V+\|u_{p^{(i)}_{j-1}}-u^{(i)}_{j-1,\bar k(j-1)}\|_V \le \|p_{\sigma_i}-p^{(i)}_{j-1}\|_P+\|u_{p^{(i)}_{j-1}}-u^{(i)}_{j-1,\bar k(j-1)}\|_V,
\]
and
\[
\|u_{p^{(i)}_{j-1}}-u^{(i)}_{j-1,\bar k(j-1)}\|_V \le \tfrac{2\alpha C_1}{c_4-\alpha C_1}\|p_{\sigma_i}-p^{(i)}_{j-1}\|_P
\]
by (5.7.11), we find that
\[
\|u_{p^{(i)}_j}-u^{(i)}_{j,0}\|_V \le \Bigl(1+\tfrac{4\alpha C_1}{c_4-\alpha C_1}\Bigr)\|p_{\sigma_i}-p^{(i)}_{j-1}\|_P. \tag{5.7.19}
\]

(II) At the moment of a call $\sigma_{i+1}:=\text{REFpres}[\sigma_i,p^{(i)}_j,u^{(i)}_{j,k}]$, we have
\[
\|p-p^{(i)}_j\|_P \ge c_4E_S(\infty,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k}) > c_4C_3^{-1}\varepsilon,
\]
so that in view of (5.7.5) and (5.7.8), we have
\[
\#\sigma_{\bar i}-\#\tau_0 \le \sum_{i=0}^{\bar i}\bigl(\#\sigma_i-\#\sigma_{i-1}\bigr) \lesssim \varepsilon^{-1/s}|p|_{\mathcal{A}^s_P}^{1/s}.
\]

At the moment of a call $\tau^{(i)}_{j,k+1}:=\text{REFvel}[\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k}]$, we have
\[
\|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V > c_2E_E(\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k}) > c_2\alpha E_S(\sigma_i,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k}) > c_2\alpha\kappa E_S(\infty,\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k}) \ge c_2\alpha\kappa C_3^{-1}\|u-u^{(i)}_{j,k}\|_V,
\]

where for the first time in this proof we used the fact that the innermost iteration is stopped in time. In view of the definitions of $\mathcal{A}^s_P$ and $\mathcal{A}^s_V$, Lemma 5.6.6 now shows that the set of marked simplices $F^{i,j,k}=F$ inside $\text{REFvel}[\tau^{(i)}_{j,k},f,p^{(i)}_j,u^{(i)}_{j,k}]$ satisfies
\[
\#F^{i,j,k} \lesssim \|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V^{-1/s}\bigl(|u|_{\mathcal{A}^s_V}^{1/s}+|p|_{\mathcal{A}^s_P}^{1/s}\bigr)+\#\sigma_i-\#\tau_0+\#\tau_0. \tag{5.7.20}
\]

From $\|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V \le \|u_{p^{(i)}_j}\|_V \lesssim \|u\|_V+\|p\|_P$, we have
\[
\#\tau_0 \lesssim 1 \lesssim \|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V^{-1/s}\bigl(\|u\|_V^{1/s}+\|p\|_P^{1/s}\bigr).
\]

For $i>0$, we have $\#\sigma_i-\#\tau_0 \lesssim \|p-p^{(i-1)}_{\bar j(i-1)}\|_P^{-1/s}|p|_{\mathcal{A}^s_P}^{1/s}$ by (5.7.5). By (5.7.18), (5.7.16) and (5.7.19), and the decrease of $\|p_{\sigma_i}-p^{(i)}_j\|_P$ and $\|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V$ as function of $j$ or $k$, respectively, we have
\[
\|p-p^{(i-1)}_{\bar j(i-1)}\|_P \gtrsim \|u_{p^{(i)}_0}-u^{(i)}_{0,0}\|_V \gtrsim \|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V \quad (j=0),
\]
\[
\|p-p^{(i-1)}_{\bar j(i-1)}\|_P \gtrsim \|p_{\sigma_i}-p^{(i)}_0\|_P \ge \|p_{\sigma_i}-p^{(i)}_{j-1}\|_P \gtrsim \|u_{p^{(i)}_j}-u^{(i)}_{j,0}\|_V \gtrsim \|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V \quad (j>0),
\]

uniformly in $i$, $j$, $k$. We conclude that
\[
\#F^{i,j,k} \lesssim \|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V^{-1/s}\bigl(\|u\|_{\mathcal{A}^s_V}^{1/s}+\|p\|_{\mathcal{A}^s_P}^{1/s}\bigr),
\]

and so, by Theorem 5.3.2 and (5.7.13), that
\[
\#\tau^{(\bar i)}_{\bar j(\bar i),\bar k(\bar i)}-\#\tau_0 \lesssim \bigl(\|u\|_{\mathcal{A}^s_V}^{1/s}+\|p\|_{\mathcal{A}^s_P}^{1/s}\bigr)\sum_{i=0}^{\bar i}\sum_{j=0}^{\bar j(i)}\sum_{k=0}^{\bar k(j)-1}\|u_{p^{(i)}_j}-u^{(i)}_{j,k}\|_V^{-1/s} \lesssim \bigl(\|u\|_{\mathcal{A}^s_V}^{1/s}+|p|_{\mathcal{A}^s_P}^{1/s}\bigr)\sum_{i=0}^{\bar i}\sum_{j=0}^{\bar j(i)}\|u_{p^{(i)}_j}-u^{(i)}_{j,\bar k(j)-1}\|_V^{-1/s}.
\]
We will bound this expression by using that each of the three nested iterations is stopped in time.

For any $i$ and $j$, by definition of $\bar k$, we have
\[
E_E(\tau^{(i)}_{j,\bar k-1},f,p^{(i)}_j,u^{(i)}_{j,\bar k-1}) > \alpha E_S(\sigma_i,\tau^{(i)}_{j,\bar k-1},f,p^{(i)}_j,u^{(i)}_{j,\bar k-1}) > \alpha\kappa E_S(\infty,\tau^{(i)}_{j,\bar k-1},f,p^{(i)}_j,u^{(i)}_{j,\bar k-1}) > \alpha\kappa C_3^{-1}\varepsilon,
\]
or
\[
\|u_{p^{(i)}_j}-u^{(i)}_{j,\bar k-1}\|_V \gtrsim \|u_{\sigma_i}-u^{(i)}_{j,\bar k-1}\|_V+\|p_{\sigma_i}-p^{(i)}_j\|_P \gtrsim \|u-u^{(i)}_{j,\bar k-1}\|_V+\|p-p^{(i)}_j\|_P \gtrsim \varepsilon.
\]

Similarly, for any $i$, by definition of $\bar j$ we have
\[
E_S(\sigma_i,\tau^{(i)}_{\bar j-1,k},f,p^{(i)}_{\bar j-1},u^{(i)}_{\bar j-1,k}) > \kappa E_S(\infty,\tau^{(i)}_{\bar j-1,k},f,p^{(i)}_{\bar j-1},u^{(i)}_{\bar j-1,k}) > \kappa C_3^{-1}\varepsilon,
\]
or
\[
\|p_{\sigma_i}-p^{(i)}_{\bar j-1}\|_P \eqsim \|u_{\sigma_i}-u^{(i)}_{\bar j-1,k}\|_V+\|p_{\sigma_i}-p^{(i)}_{\bar j-1}\|_P \gtrsim \|u-u^{(i)}_{\bar j-1,k}\|_V+\|p-p^{(i)}_{\bar j-1}\|_P \gtrsim \varepsilon,
\]

where for "$\eqsim$" we used (5.7.10). Finally, by definition of $\bar i$, we have
\[
E_S(\infty,\tau^{(\bar i-1)}_{j,k},f,p^{(\bar i-1)}_j,u^{(\bar i-1)}_{j,k}) > C_3^{-1}\varepsilon,
\]
or
\[
\|p-p^{(\bar i-1)}_j\|_P \eqsim \|u-u^{(\bar i-1)}_{j,k}\|_V+\|p-p^{(\bar i-1)}_j\|_P \gtrsim \varepsilon,
\]
where for "$\eqsim$" we used (5.7.6). By using in addition (5.7.12) and (5.7.8), we find that

\begin{align*}
\sum_{i=0}^{\bar i}\sum_{j=0}^{\bar j(i)}\|u_{p^{(i)}_j}-u^{(i)}_{j,\bar k(j)-1}\|_V^{-1/s}
&\lesssim \sum_{i=0}^{\bar i}\sum_{j=0}^{\bar j(i)-1}\|p_{\sigma_i}-p^{(i)}_j\|_P^{-1/s}+\sum_{i=0}^{\bar i-1}\|p-p^{(i)}_{\bar j(i)}\|_P^{-1/s}+\varepsilon^{-1/s}\\
&\lesssim \sum_{i=0}^{\bar i}\|p_{\sigma_i}-p^{(i)}_{\bar j(i)-1}\|_P^{-1/s}+\|p-p^{(\bar i-1)}_{\bar j(\bar i-1)}\|_P^{-1/s}+\varepsilon^{-1/s}\\
&\lesssim \sum_{i=0}^{\bar i-1}\Bigl(\|p-p^{(i)}_{\bar j(i)-1}\|_P^{-1/s}+\|p_{\sigma_i}-p^{(i)}_{\bar j(i)-1}\|_P^{-1/s}\Bigr)+\varepsilon^{-1/s}\\
&\lesssim \sum_{i=0}^{\bar i-1}\|p-p^{(i)}_{\bar j(i)}\|_P^{-1/s}+\varepsilon^{-1/s} \lesssim \varepsilon^{-1/s},
\end{align*}
which completes the proof of the theorem.
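The final "$\lesssim\varepsilon^{-1/s}$" deserves one spelled-out step. Our reading of the geometric-series argument, combining the linear decay (5.7.8) with the lower bound $\|p-p^{(\bar i-1)}_{\bar j(\bar i-1)}\|_P\gtrsim\varepsilon$ obtained above from the definition of $\bar i$:

```latex
% By (5.7.8), \|p-p^{(i+1)}_{\bar j(i+1)}\|_P \le \rho_1 \|p-p^{(i)}_{\bar j(i)}\|_P with \rho_1<1,
% hence \|p-p^{(i)}_{\bar j(i)}\|_P^{-1/s} \le \rho_1^{(\bar i-1-i)/s}\,\|p-p^{(\bar i-1)}_{\bar j(\bar i-1)}\|_P^{-1/s},
% and therefore
\sum_{i=0}^{\bar i-1}\|p-p^{(i)}_{\bar j(i)}\|_P^{-1/s}
\;\le\; \sum_{i=0}^{\bar i-1}\rho_1^{(\bar i-1-i)/s}\,\|p-p^{(\bar i-1)}_{\bar j(\bar i-1)}\|_P^{-1/s}
\;\le\; \frac{1}{1-\rho_1^{1/s}}\,\|p-p^{(\bar i-1)}_{\bar j(\bar i-1)}\|_P^{-1/s}
\;\lesssim\; \varepsilon^{-1/s}.
```

The sums over $j$ are handled by the same mechanism, using (5.7.12).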

5.8 A practical adaptive method for the Stokes problem

The following lemma generalizes upon Lemma 5.6.6, relaxing both the condition that $f\in V^*_\tau$ and the assumption that we have the exact Galerkin solutions of the inner elliptic problems available, assuming that the deviations from that ideal situation are sufficiently small in a relative sense. The idea is that we replace $f$ by a sufficiently accurate piecewise polynomial approximation, and that we solve the arising linear system only approximately using an iterative solver.

Since we are going to consider Galerkin systems with modified right-hand sides, we introduce the following notation: given $r\in P$, $g\in V'$, and a partition $\tau$, $u^{r,g}_\tau\in V_\tau$ will denote the solution of the Galerkin problem
\[
a(u^{r,g}_\tau,v_\tau) = g(v_\tau)-b(v_\tau,r) \qquad (v_\tau\in V_\tau). \tag{5.8.1}
\]
So in view of the notation introduced in (5.4.5), we have $u^{r,f}_\tau=u^r_\tau$. Note that $\|u^r_\tau-u^{r,g}_\tau\|_V\le\|f-g\|_{V'}$.

Lemma 5.8.1 There exist constants $\chi_1=\chi_1(\zeta,C_1,c_2)>0$ and $\lambda=\lambda(\chi_1,C_1,c_2)\in\bigl(0,\tfrac13\bigl[1-\tfrac{C_1^2\zeta^2}{c_2^2}\bigr]^{\frac12}\bigr]$ such that if, for any $f\in V'$, partitions $\tau\supseteq\sigma$, $r_\sigma\in P_\sigma$, $f_\tau\in V^*_\tau$, $w_\tau\in V_\tau$ with
\[
\|f-f_\tau\|_{V'}+\|u^{r_\sigma,f_\tau}_\tau-w_\tau\|_V \le \chi_1E_E(\tau,f_\tau,r_\sigma,w_\tau), \tag{5.8.2}
\]
and, for some absolute constant $\vartheta>0$,
\[
\|u^{r_\sigma}-w_\tau\|_V \ge \vartheta\|u-u^{r_\sigma}\|_V,
\]
then the set of marked simplices $F$ inside the call $\tau:=\text{REFvel}[\tau,f_\tau,r_\sigma,w_\tau]$ satisfies
\[
\#F \lesssim \#\breve\tau+\#\breve\sigma+\#\sigma
\]
for any partitions $\breve\tau$ and $\breve\sigma$ for which
\[
\inf_{v_{\breve\tau}\in V_{\breve\tau}}\|u-v_{\breve\tau}\|_V \le \lambda\|u^{r_\sigma}-w_\tau\|_V, \qquad \inf_{q_{\breve\sigma}\in P_{\breve\sigma}}\|p-q_{\breve\sigma}\|_P \le \lambda\|u^{r_\sigma}-w_\tau\|_V. \tag{5.8.3}
\]
Furthermore, given a
\[
\mu\in\Bigl(\bigl[1-\tfrac{c_2^2\zeta^2}{C_1^2}\bigr]^{\frac12},1\Bigr),
\]
there exists a $\chi_2=\chi_2(\mu,\zeta,C_1,c_2)>0$, such that if (5.8.2) is valid with $\chi_1$ reading as $\chi_2$, and for $\tau'\supseteq\tau$, $f_{\tau'}\in V'$ and $w_{\tau'}\in V_{\tau'}$,
\[
\|f-f_{\tau'}\|_{V'}+\|u^{r_\sigma,f_{\tau'}}_{\tau'}-w_{\tau'}\|_V \le \chi_2E_E(\tau,f_\tau,r_\sigma,w_\tau),
\]
then
\[
\|u^{r_\sigma}-w_{\tau'}\|_V \le \mu\|u^{r_\sigma}-w_\tau\|_V.
\]

Proof. Following the lines of [37, proof of Lemma 6.1], for suitable constants $\chi_1$ and $\lambda$, one can show that $\#F\lesssim\#\tilde\tau-\#\tau_0$ for any partition $\tilde\tau$ with $\inf_{v_{\tilde\tau}\in V_{\tilde\tau}}\|u^{r_\sigma}-v_{\tilde\tau}\|_V\le\lambda\|u^{r_\sigma}-w_\tau\|_V$. Then, following the proof of Lemma 5.6.6, we infer that $\#\tilde\tau-\#\tau_0\le\#\breve\tau+\#\breve\sigma+\#\sigma$, with $\breve\tau$ and $\breve\sigma$ from (5.8.3).

The second statement can be proven as in [37, Lemma 6.2].

For solving the Galerkin systems approximately, we assume that we have an iterative solver of optimal type available:

Algorithm 5.4 GALSOLVE

GALSOLVE$[\tau,f_\tau,r_\tau,w^{(0)}_\tau,\eta]\to w_\tau$
/* $\tau$ is a partition, $f_\tau\in V^*_\tau$, $r_\tau\in P_\tau$, $w^{(0)}_\tau\in V_\tau$, the latter being an initial approximation for an iterative solver, and $\eta>0$. The output $w_\tau\in V_\tau$ satisfies
\[
\|u^{r_\tau,f_\tau}_\tau-w_\tau\|_V \le \eta.
\]
The call requires $\lesssim\max\{1,\log(\eta^{-1}\|u^{r_\tau,f_\tau}_\tau-w^{(0)}_\tau\|_V)\}\,\#\tau$ arithmetic operations. */

Additive or multiplicative multigrid methods with local smoothing are known to be of this type.
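Any linearly convergent solver wrapped with a residual-based stopping rule meets this kind of specification: reducing the algebraic error below $\eta$ takes a number of iterations proportional to $\log(\eta^{-1}\cdot\text{initial error})$. As a minimal illustration of the contract only (not the multigrid solver assumed here; the matrix below is a hypothetical stand-in, not a finite element stiffness matrix), consider a plain conjugate-gradient loop:

```python
def cg_to_tolerance(A, b, x0, eta, max_iter=1000):
    """Run conjugate gradients on the SPD system A x = b, started at x0,
    until the residual norm drops below eta -- the role GALSOLVE plays
    for the Galerkin system (here with a toy matrix)."""
    n = len(b)
    x = list(x0)
    r = [b[i] - sum(A[i][j] * x[j] for j in range(n)) for i in range(n)]
    p = list(r)
    rs = sum(ri * ri for ri in r)
    it = 0
    while rs ** 0.5 > eta and it < max_iter:
        Ap = [sum(A[i][j] * p[j] for j in range(n)) for i in range(n)]
        alpha = rs / sum(p[i] * Ap[i] for i in range(n))
        x = [x[i] + alpha * p[i] for i in range(n)]
        r = [r[i] - alpha * Ap[i] for i in range(n)]
        rs_new = sum(ri * ri for ri in r)
        p = [r[i] + (rs_new / rs) * p[i] for i in range(n)]
        rs = rs_new
        it += 1
    return x, it

# Toy stand-in "stiffness matrix": the 1D Laplacian on 8 unknowns.
n = 8
A = [[2.0 if i == j else -1.0 if abs(i - j) == 1 else 0.0
      for j in range(n)] for i in range(n)]
b = [1.0] * n
x, iterations = cg_to_tolerance(A, b, [0.0] * n, 1e-10)
```

The iteration count grows only like $\log(1/\eta)$; multigrid additionally keeps the cost of each sweep proportional to $\#\tau$ uniformly in the partition, which is what the optimal-complexity statement requires.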

A routine called RHS will be needed to find a sufficiently accurate piecewise polynomial approximation of degree $m-1$ to the right-hand side $f$. Since this might not be possible with respect to the current partition $\tau$, a call of RHS may result in a further refinement.

Algorithm 5.5 RHS

RHS$[\tau,\eta]\to[\tau',f_{\tau'}]$
/* $\eta>0$. The output consists of an $f_{\tau'}\in V^*_{\tau'}$, where $\tau'=\tau$, or, if necessary, $\tau'\supset\tau$, such that $\|f-f_{\tau'}\|_{V'}\le\eta$. */

Assuming that $p\in\mathcal{A}^s_P$ and $u\in\mathcal{A}^{\breve s}_V$ for some $s,\breve s>0$, the cost of approximating the right-hand side $f$ using RHS will generally not dominate the other costs of our adaptive method only if there is some constant $c_f$ such that, with $\hat s=\max(s,\breve s)$, for any $\eta>0$ and any partition $\tau$, for $[\tau',\cdot]:=\text{RHS}[\tau,\eta]$, it holds that
\[
\#\tau'-\#\tau \le c_f^{1/\hat s}\eta^{-1/\hat s},
\]
and the number of arithmetic operations required by the call is $\lesssim\#\tau'$. We will call such an RHS $\hat s$-optimal with constant $c_f$. Obviously, given $\hat s$, such a routine can only exist when $f\in\mathcal{A}^{\hat s}_{V'}$, defined by
\[
\mathcal{A}^{\hat s}_{V'} = \Bigl\{g\in V' : \sup_{\varepsilon>0}\,\varepsilon\inf_{\{\tau:\,\inf_{g_\tau\in V^*_\tau}\|g-g_\tau\|_{V'}\le\varepsilon\}}[\#\tau-\#\tau_0]^{\hat s} < \infty\Bigr\}.
\]

Knowing that $f\in\mathcal{A}^{\hat s}_{V'}$ is a different thing than knowing how to construct suitable approximations. If $\hat s\in[1/d,(m+1)/d]$ and $f\in H^{\hat sd-1}(\Omega)^d$, then $f\in\mathcal{A}^{\hat s}_{V'}$, and approximations $f_{\tau'}$ constructed as the best approximation from $V^*_{\tau'}$ to $f$ with respect to $L_2(\Omega)^d$, using (the smallest common refinement of the input partition and) uniformly refined partitions $\tau'$, are known to converge with the required rate. For general $f\in\mathcal{A}^{\hat s}_{V'}$, however, a realization of a suitable routine RHS has to depend on the $f$ at hand.


Remark 5.8.2 As we have said, in our adaptive method, for both computing the error estimator and setting up the Galerkin system for the inner elliptic problem, we will replace $f$ by a piecewise polynomial approximation. This has the advantages that we can consider $f\notin L_2(\Omega)^d$, for which the error estimator is thus not defined, and that it simplifies the analysis, since we do not have to take into account quadrature errors at various places.

Thinking of smooth $p$, $u$ and thus $f$, we have $p\in\mathcal{A}^{m/d}_P$, $u\in\mathcal{A}^{m/d}_V$ (and generally $p\notin\mathcal{A}^s_P$, $u\notin\mathcal{A}^s_V$ for $s>m/d$), whereas $f\in\mathcal{A}^{(m+1)/d}_{V'}$. Moreover, these rates are realized using quasi-uniform partitions. In this case, an adaptive method will create such partitions, and we see that at least asymptotically calls of RHS will not give rise to refinements. So in this case, we can simply skip these calls and work with the exact $f$, where one can view the generally now necessary application of quadrature for the evaluation of the error estimator and for setting up the right-hand sides of the Galerkin systems as an implicit replacement of $f$ by a piecewise polynomial. Also for less smooth $p$ and/or $u$, and an $f$ that is at least in $L_2(\Omega)^d$, one can expect that usually refinements are not needed for obtaining a sufficiently accurate approximation of $f$ by a piecewise polynomial of degree $m-1$.

Now we are ready to formulate our practical adaptive Stokes solver STOKESSOLVE.


Algorithm 5.6 STOKESSOLVE

STOKESSOLVE$[f,\varepsilon]\to[\sigma_i,p^{(i)}_j,\tau^{(i)}_{j,k},w^{(i)}_{j,k}]$
/* Let the parameter $\zeta$ from REFvel satisfy $\zeta\in(0,\tfrac{c_2}{C_1})$, and $\theta$ from REFpres satisfy $\theta\in\bigl(0,\bigl[1-\tfrac{1-c_6^2/C_5^2}{d}\bigr]^{\frac12}\bigr)$. Let $\kappa,\beta,\alpha,\chi>0$ be sufficiently small constants. */
$\sigma_0:=\tau^{(0)}_{0,0}:=\tau_0$, $p^{(0)}_0:=0$, $w^{(0)}_{0,0}:=0$, $\delta\eqsim\|f\|_{V'}$, $i:=j:=k:=0$
do
  $[f^{(i)}_{j,k},\tau^{(i)}_{j,k}]:=\text{RHS}[\delta/2,\tau^{(i)}_{j,k}]$
  $w^{(i)}_{j,k}:=\text{GALSOLVE}[\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k},\delta/2]$
  if $C_3E_S(\infty,\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k})+(1+C_3(2c_2^{-1}+1))\delta/2\le\varepsilon$ then stop
  elsif $E_S(\sigma_i,\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k})+\delta\le\kappa E_S(\infty,\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k})$ then
    $\sigma_{i+1}:=\text{REFpres}[\sigma_i,p^{(i)}_j,w^{(i)}_{j,k}]$
    $p^{(i+1)}_0:=p^{(i)}_j$, $\tau^{(i+1)}_{0,0}:=\tau^{(i)}_{j,k}$, $w^{(i+1)}_{0,0}:=w^{(i)}_{j,k}$
    $\delta:=\beta(E_S(\infty,\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k})+\delta)$
    $i$++, $j:=k:=0$
  elsif $E_E(\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k})+\delta\le\alpha E_S(\sigma_i,\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k})$ then
    $p^{(i)}_{j+1}:=p^{(i)}_j-Q_{\sigma_i}\operatorname{div}w^{(i)}_{j,k}$
    $\tau^{(i)}_{j+1,0}:=\tau^{(i)}_{j,k}$, $w^{(i)}_{j+1,0}:=w^{(i)}_{j,k}$
    $\delta:=\beta(E_S(\sigma_i,\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k})+\delta)$
    $j$++, $k:=0$
  elsif $\delta\le\chi E_E(\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k})$ then
    $\tau^{(i)}_{j,k+1}:=\text{REFvel}[\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k}]$, $w^{(i)}_{j,k+1}:=w^{(i)}_{j,k}$
    $\delta:=\beta(E_E(\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k})+\delta)$
    $k$++
  else $\delta:=\delta/2$
  endif
enddo
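The control flow of STOKESSOLVE — stop, refine the pressure partition, perform a pressure update, refine the velocity partition, or halve the inner tolerance $\delta$ — can be isolated from the finite element machinery. The following schematic sketch is our own mock-up: the three estimator values simply shrink geometrically after the corresponding step, which is not how the real estimators behave, so it only illustrates which test drives which counter and how $\delta$ is updated.

```python
def stokessolve_skeleton(eps, kappa=0.1, alpha=0.1, beta=0.5, chi=0.5):
    """Branch structure of STOKESSOLVE with mocked estimators.
    e_inf, e_sig, e_ell play the roles of E_S(infty,...), E_S(sigma_i,...)
    and E_E(...); each refinement/update step halves "its" estimator."""
    i = j = k = 0
    delta = 1.0
    e_inf, e_sig, e_ell = 1.0, 0.5, 0.25  # mocked estimator values
    trace = []
    steps = 0
    while steps < 100000:                  # safety cap for the mock-up
        steps += 1
        if e_inf + delta <= eps:           # global stopping test
            trace.append('stop')
            break
        elif e_sig + delta <= kappa * e_inf:   # REFpres: refine sigma_i
            trace.append('REFpres')
            i += 1; j = k = 0
            delta = beta * (e_inf + delta)
            e_inf *= 0.5
        elif e_ell + delta <= alpha * e_sig:   # pressure update
            trace.append('update_p')
            j += 1; k = 0
            delta = beta * (e_sig + delta)
            e_sig *= 0.5
        elif delta <= chi * e_ell:             # REFvel: refine tau
            trace.append('REFvel')
            k += 1
            delta = beta * (e_ell + delta)
            e_ell *= 0.5
        else:                                  # tighten inner tolerances
            trace.append('halve')
            delta /= 2.0
    return i, trace
```

With the mocked decay, each branch halves either an estimator or $\delta$, so the loop terminates, mirroring the termination argument for the real algorithm.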

Compared to the preliminary version STOKESSOLVE0, in STOKESSOLVE the exact solution $u^{(i)}_{j,k}$ of the inner elliptic problem is replaced by an approximate one $w^{(i)}_{j,k}$, which is moreover computed using an approximation $f^{(i)}_{j,k}\in V^*_{\tau^{(i)}_{j,k}}$ of the right-hand side $f\in V'$. As a consequence, the results on the a posteriori error estimators from Section 5.5 cannot be applied directly. Using the stability of the problems defining $u$ and $p$ with respect to perturbations in $f$, and the stability of the error estimator $E_E$ demonstrated in Proposition 5.5.5, for sufficiently small $\alpha$ and $\kappa$, stopping by either $E_E(\cdots)+\delta\le\alpha E_S(\sigma_i,\cdots)$, $E_S(\sigma_i,\cdots)+\delta\le\kappa E_S(\infty,\cdots)$ or $E_S(\infty,\cdots)+(1+C_3(2c_2^{-1}+1))\delta/2\le\varepsilon$

means that
\[
\|u_{p^{(i)}_j}-w^{(i)}_{j,k}\|_V \le D_1(\alpha)\bigl[\|p_{\sigma_i}-p^{(i)}_j\|_P+\|u_{p_{\sigma_i}}-w^{(i)}_{j,k}\|_V\bigr],
\]
\[
\|p_{\sigma_i}-p^{(i)}_j\|_P+\|u_{p_{\sigma_i}}-w^{(i)}_{j,k}\|_V \le D_2(\kappa)\bigl[\|p-p^{(i)}_j\|_P+\|u-w^{(i)}_{j,k}\|_V\bigr] \quad\text{or}
\]
\[
\|p-p^{(i)}_j\|_P+\|u-w^{(i)}_{j,k}\|_V \le \varepsilon,
\]
respectively, with $D_1(\alpha),D_2(\kappa)>0$ being constants that tend to zero when $\alpha$ or $\kappa$ tend to zero.

Since the tolerance $\delta/2$ for the error in the approximation of the right-hand side and in that of the solution of the inner Galerkin problem decreases until $\delta\le\chi E_E(\cdots)$, where, for $\beta$ small enough, the next $\delta$ respects the same bound, the second part of Lemma 5.8.1 shows that for each $i$ and $j$, $w^{(i)}_{j,k}$ converges linearly towards $u_{p^{(i)}_j}$. Here we silently assumed that by halving $\delta$, at some point $\delta\le\chi E_E(\cdots)$ is valid. This, however, is not necessarily true, since $E_E(\cdots)$ changes as well; think, e.g., of the (unlikely) situation that we have reached a partition on which $u_{p^{(i)}_j}$ can be represented exactly. Yet in that case we also have linear convergence of $w^{(i)}_{j,k}$ towards $u_{p^{(i)}_j}$. Indeed, if during the process of halving $\delta$ it remains larger than $\chi E_E(\cdots)$, then $\chi E_E(\cdots)+\delta$, up to some constant factor being an upper bound for $\|u_{p^{(i)}_j}-w^{(i)}_{j,k}\|_V$, decreases linearly.

Based on these observations, similarly as in the proof of part (I) of Theorem 5.7.1, for sufficiently small $\kappa$ and $\alpha$, one shows convergence of STOKESSOLVE, with $\|p-p^{(i)}_j\|_P+\|u-w^{(i)}_{j,k}\|_V\le\varepsilon$ at termination.

When REFvel is called, from $\delta\le\chi E_E(\cdots)$ for sufficiently small $\chi$, $E_E(\cdots)+\delta>\alpha E_S(\sigma_i,\cdots)$ and $E_S(\sigma_i,\cdots)+\delta>\kappa E_S(\infty,\cdots)$, we have
\[
\|u_{p^{(i)}_j}-w^{(i)}_{j,k}\|_V \gtrsim E_E(\cdots)-\delta \gtrsim E_E(\cdots)+\delta \gtrsim E_S(\sigma_i,\cdots)+\delta \gtrsim E_S(\infty,\cdots)+\delta \gtrsim \|u-w^{(i)}_{j,k}\|_V,
\]

meaning that we may apply the first part of Lemma 5.8.1 to bound the cardinality of the set $F^{i,j,k}$ of marked simplices. Assuming that $p\in\mathcal{A}^s_P$ and $u\in\mathcal{A}^s_V$, as in (5.7.20) we find
\[
\#F^{i,j,k} \lesssim \|u_{p^{(i)}_j}-w^{(i)}_{j,k}\|_V^{-1/s}\bigl(|u|_{\mathcal{A}^s_V}^{1/s}+|p|_{\mathcal{A}^s_P}^{1/s}\bigr)+\#\sigma_i-\#\tau_0+\#\tau_0.
\]

Other than in STOKESSOLVE0, in STOKESSOLVE refinements can also be made by calls $[f^{(i)}_{j,k},\tau^{(i)}_{j,k}]:=\text{RHS}[\delta/2,\tau^{(i)}_{j,k}]$. Assuming that RHS is $\hat s$-optimal with constant $c_f$, then $\#\tau^{(i)}_{j,k}-\#\tilde\tau^{(i)}_{j,k}\le c_f^{1/\hat s}(\delta/2)^{-1/\hat s}$, where $\tilde\tau^{(i)}_{j,k}$ denotes the partition at the input of the call. Using that such a call can only be made when
\[
\delta \gtrsim E_E(\cdots)+\delta \gtrsim E_S(\sigma_i,\cdots)+\delta \gtrsim E_S(\infty,\cdots)+\delta \gtrsim \varepsilon,
\]
applying similar techniques as in the proof of Theorem 5.7.1, one can prove that STOKESSOLVE outputs quasi-optimal partitions.


Finally, one can prove that STOKESSOLVE is of quasi-optimal computational complexity. The main point is that with a call $\text{GALSOLVE}[\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k},\delta/2]$, it holds that $\|u^{p^{(i)}_j,f^{(i)}_{j,k}}_{\tau^{(i)}_{j,k}}-w^{(i)}_{j,k}\|_V\lesssim\delta/2$, so that the error has to be reduced by only a constant factor.

Along the lines indicated above, we end up with the following theorem. We have chosen not to include a full proof, since this would require another level of technicalities on top of those from the proof of Theorem 5.7.1.

Theorem 5.8.3 (I) $[\sigma_i,p^{(i)}_j,\tau^{(i)}_{j,k},w^{(i)}_{j,k}]:=\text{STOKESSOLVE}[f,\varepsilon]$ terminates, and $\|u-w^{(i)}_{j,k}\|_V+\|p-p^{(i)}_j\|_P\le\varepsilon$.

(II) If, for some $s>0$, $p\in\mathcal{A}^s_P$, then $\#\sigma_i-\#\tau_0\lesssim\varepsilon^{-1/s}|p|_{\mathcal{A}^s_P}^{1/s}$, with a constant only dependent on $\tau_0$, and on $s$ when it tends to $0$ or infinity. If, in addition, for some $\breve s>0$, $u\in\mathcal{A}^{\breve s}_V$ and, with $\hat s=\min(s,\breve s)$, RHS is $\hat s$-optimal with constant $c_f$, then $\#\tau^{(i)}_{j,k}-\#\tau_0\lesssim\varepsilon^{-1/\hat s}\bigl(\|p\|_{\mathcal{A}^s_P}^{1/\hat s}+\|u\|_{\mathcal{A}^{\breve s}_V}^{1/\hat s}+c_f^{1/\hat s}\bigr)$, with a constant only dependent on $\tau_0$, and on $\hat s$ when it tends to $0$ or infinity. Under Assumption 5.6.4, and when $\varepsilon\lesssim\|f\|_{V'}$, the number of arithmetic operations and storage locations required by the call is also bounded by a multiple of $\varepsilon^{-1/\hat s}\bigl(\|p\|_{\mathcal{A}^s_P}^{1/\hat s}+\|u\|_{\mathcal{A}^{\breve s}_V}^{1/\hat s}+c_f^{1/\hat s}\bigr)$.

5.9 Numerical experiment

We consider $d=2$, the L-shaped domain $\Omega=(0,1)^2\setminus(\tfrac12,1)^2$, $f:x\mapsto 25(4x_2-1,\,1-4x_1)$, and $m=1$, i.e., continuous piecewise linear approximation for the velocity, and piecewise constant approximation for the pressure. The initial partition $\tau_0$ is illustrated in Figure 5.3.

Figure 5.3: Initial partition $\tau_0$

The "bulk chasing" parameters $\theta$ and $\zeta$ inside REFpres or REFvel were chosen to be $0.7$ and $\sqrt{0.3}$, respectively. Since $f$ is smooth, we followed the approach discussed in Remark 5.8.2 and skipped the calls of RHS. Furthermore, instead of solving each arising finite dimensional linear system within tolerance $\delta/2$ for the first value of $\delta/2$, obtained by successive halving, that is less than $\chi E_E(\tau^{(i)}_{j,k},f^{(i)}_{j,k},p^{(i)}_j,w^{(i)}_{j,k})$, we always approximately solved it by 3 multigrid iterations with local smoothing (cf. [44]), starting with the previously computed approximate velocity. The parameters $\kappa$ and $\alpha$ were chosen to be $0.88$ and $0.9$, respectively.
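Bulk chasing means that each marking step selects a (near-)smallest set of simplices whose error indicators carry a fixed fraction ($\theta$ resp. $\zeta$) of the total estimated error. A generic sketch of such a Dörfler-type marking step (hypothetical element names and indicator values; the thesis implementation itself is in C/MATLAB):

```python
def bulk_chase(indicators, theta):
    """Greedily mark elements with the largest error indicators until the
    marked squared indicators sum to at least theta^2 times the total."""
    total = sum(e * e for e in indicators.values())
    target = theta ** 2 * total
    marked, acc = [], 0.0
    for elem, e in sorted(indicators.items(), key=lambda t: -t[1]):
        if acc >= target:
            break
        marked.append(elem)
        acc += e * e
    return marked

# Hypothetical per-triangle indicators:
etas = {'T1': 0.9, 'T2': 0.5, 'T3': 0.4, 'T4': 0.1}
```

Sorting dominates the cost here; sorting can be avoided (e.g. by approximate binning), which matters for the linear-complexity claims of adaptive methods.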

For comparison, we also implemented the adaptive Uzawa method for solving the full Stokes problem, i.e., STOKESSOLVE without the outermost loop over $i$, i.e., without the part starting from the first elsif-statement until the second one, and with all remaining occurrences of $\sigma_i$ replaced by $\infty$. In particular, the pressure update $p^{(i)}_{j+1}:=p^{(i)}_j-Q_{\sigma_i}\operatorname{div}w^{(i)}_{j,k}$ then reads as $p_{j+1}:=p_j-\operatorname{div}w_{j,k}$. This is the algorithm studied in [5], apart from the replacement of a priori prescribed tolerances by a posteriori ones.
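For reference, an exact Uzawa iteration can be written down for a generic discrete saddle-point system: each step solves the elliptic velocity problem and then updates the pressure with the divergence residual. The sketch below uses tiny hypothetical matrices (not the Stokes discretization of this experiment), a relaxation parameter $\omega$, and the sign convention $Au+B^{\top}p=f$, $Bu=0$ with ascent update $p\leftarrow p+\omega Bu$; it only illustrates the two-step structure and its linear convergence.

```python
def solve_spd(A, b):
    """Gaussian elimination without pivoting; adequate for the small SPD
    velocity block used in this illustration."""
    n = len(b)
    M = [row[:] + [b[i]] for i, row in enumerate(A)]
    for c in range(n):
        for r in range(c + 1, n):
            fac = M[r][c] / M[c][c]
            for k in range(c, n + 1):
                M[r][k] -= fac * M[c][k]
    x = [0.0] * n
    for r in range(n - 1, -1, -1):
        x[r] = (M[r][n] - sum(M[r][k] * x[k] for k in range(r + 1, n))) / M[r][r]
    return x

def uzawa(A, B, f, omega, iters):
    """Exact Uzawa iteration for  A u + B^T p = f,  B u = 0:
    an exact inner elliptic solve followed by the pressure update."""
    n, m = len(A), len(B)
    p = [0.0] * m
    u = [0.0] * n
    for _ in range(iters):
        rhs = [f[i] - sum(B[l][i] * p[l] for l in range(m)) for i in range(n)]
        u = solve_spd(A, rhs)                       # inner elliptic problem
        div = [sum(B[l][i] * u[i] for i in range(n)) for l in range(m)]
        p = [p[l] + omega * div[l] for l in range(m)]
    return u, p

# Hypothetical toy data: 2 velocity unknowns, 1 pressure unknown.
A = [[2.0, 0.0], [0.0, 2.0]]
B = [[1.0, 1.0]]
f = [1.0, 2.0]
u, p = uzawa(A, B, f, omega=0.5, iters=60)
```

The convergence factor of this outer iteration is governed by the spectrum of the Schur complement $BA^{-1}B^{\top}$; here it equals $|1-\omega|=0.5$ per step.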

In Figure 5.4, we plotted the full Stokes error estimator $E_S(\infty,\tau^{(i)}_{j,k},f,p^{(i)}_j,w^{(i)}_{j,k})$ vs. $\#\tau^{(i)}_{j,k}$ (or $E_S(\infty,\tau_{j,k},f,p_j,w_{j,k})$ vs. $\#\tau_{j,k}$). Ignoring the fact that generally $w^{(i)}_{j,k}\ne u^{p^{(i)}_j}_{\tau^{(i)}_{j,k}}$ (or $w_{j,k}\ne u^{p_j}_{\tau_{j,k}}$), modulo a constant factor this estimator is an upper bound for $\|u-w^{(i)}_{j,k}\|_V+\|p-p^{(i)}_j\|_P$ (or $\|u-w_{j,k}\|_V+\|p-p_j\|_P$); for $f\in V^*_{\tau^{(i)}_{j,k}}$ (or $f\in V^*_{\tau_{j,k}}$), it would also be a lower bound.

Figure 5.4: Estimated Stokes error vs. cardinality of the underlying partition (log-log scale; curves: ideal line, STOKESSOLVE, Uzawa)

In Figure 5.5, we plotted the pressure error estimator $\|\operatorname{div}w^{(i)}_{j,k}\|_{L_2(\Omega)}$ vs. $\#\sigma_i$ (or $\|\operatorname{div}w_{j,k}\|_{L_2(\Omega)}$ vs. $\#\tau_{j,0}$). Ignoring that generally $w^{(i)}_{j,k}\ne u_{p^{(i)}_j}$ (or $w_{j,k}\ne u_{p_j}$), this estimator is equivalent to $\|p-p^{(i)}_j\|_P$ (or $\|p-p_j\|_P$). As predicted by the theory, the approximations produced by STOKESSOLVE converge with the best possible rates. In this example, we observe that the same is true for the adaptive Uzawa method.

Our implementation is partly written in C and, for our convenience, partly in MATLAB, making use of the PDE toolbox. Due to the data structures used in the MATLAB part, our code is not of optimal computational complexity: the time needed for a call of REFvel is not proportional to the cardinality of its output partition. Subtracting the times spent in this routine, we observed computing times that are proportional to the cardinality of the final partition.

Figure 5.5: Estimated pressure error vs. cardinality of the underlying partition (log-log scale; curves: ideal line, STOKESSOLVE, Uzawa)

In Figure 5.6 we show two partitions $\sigma_i$ that were produced by STOKESSOLVE. Note that these partitions are nonconforming. In Figure 5.7 we give two partitions $\tau^{(i)}_{j,k}$.

Figure 5.6: Pressure partitions with 1965 or 4235 triangles

Finally, in Figure 5.8, plots of $p$ and $u$ are given.

Acknowledgements. We are indebted to Dr. Haijun Wu (Nanjing University, China) for providing his code implementing multigrid on locally refined triangulations for our GALSOLVE routine.

Figure 5.7: Velocity partitions with 1842 or 6126 triangles

Figure 5.8: Pressure $p$ and the norm $|u|$ of the velocity

Bibliography

[1] M. Ainsworth and J. T. Oden, A posteriori error estimation in finite elementanalysis, Pure and Applied Mathematics (New York), Wiley-Interscience [John Wiley& Sons], New York, 2000.

[2] O. Axelsson and V. A. Barker, Finite element solution of boundary value prob-lems, vol. 35 of Classics in Applied Mathematics, Society for Industrial and AppliedMathematics (SIAM), Philadelphia, PA, 2001. Theory and computation, Reprint ofthe 1984 original.

[3] I. Babuska and T. Strouboulis, The finite element method and its reliability,Numerical Mathematics and Scientific Computation, The Clarendon Press OxfordUniversity Press, New York, 2001.

[4] W. Bangerth and R. Rannacher, Adaptive finite element methods for differentialequations, Lectures in Mathematics ETH Zurich, Birkhauser Verlag, Basel, 2003.

[5] E. Bansch, P. Morin, and R. H. Nochetto, An adaptive Uzawa FEM for theStokes problem: convergence without the inf-sup condition, SIAM J. Numer. Anal., 40(2002), pp. 1207–1229 (electronic).

[6] P. Binev, W. Dahmen, and R. DeVore, Adaptive finite element methods withconvergence rates, Numer. Math., 97 (2004), pp. 219–268.

[7] P. Binev, W. Dahmen, R. DeVore, and P. Petrushev, Approximation classesfor adaptive methods, Serdica Math. J., 28 (2002), pp. 391–416. Dedicated to thememory of Vassil Popov on the occasion of his 60th birthday.

[8] P. Binev and R. DeVore, Fast computation in adaptive tree approximation, Numer.Math., 97 (2004), pp. 193–217.

[9] D. Braess, Finite elements, Cambridge University Press, Cambridge, second ed.,2001. Theory, fast solvers, and applications in solid mechanics, Translated from the1992 German edition by Larry L. Schumaker.

[10] S. C. Brenner and L. R. Scott, The mathematical theory of finite element meth-ods, vol. 15 of Texts in Applied Mathematics, Springer-Verlag, New York, second ed.,2002.

134

Page 140: Adaptive Finite Element Algorithms of optimal complexityAdaptive Finite Element Algorithms of optimal complexity Adaptieve Eindige Elementen Algoritmen ... 1.1.1 Navier-Stokes equations

[11] F. Brezzi and M. Fortin, Mixed and Hybrid Finite Element Methods: Series inComputational Mechanics, Springer, New York, 1991.

[12] G. F. Carey, Computational grids, Series in Computational and Physical Processesin Mechanics and Thermal Sciences, Taylor & Francis, Washington, DC, 1997. Gen-eration, adaptation, and solution strategies.

[13] P. G. Ciarlet, The finite element method for elliptic problems, vol. 40 of Classicsin Applied Mathematics, Society for Industrial and Applied Mathematics (SIAM),Philadelphia, PA, 2002. Reprint of the 1978 original [North-Holland, Amsterdam;MR0520174 (58 #25001)].

[14] P. Clement, Approximation by finite element functions using local regularization,Rev. Francaise Automat. Informat. Recherche Operationnelle Ser. RAIRO AnalyseNumerique, 9 (1975), pp. 77–84.

[15] A. Cohen, L. Echevery, and Q. Sun, Finite element wavelets., technical report,Laboratoire d’Analyse Numerique, Universite Pierre et Marie Curie, 2000.

[16] S. Dahlke, Besov regularity for the Stokes problem, in Advances in multivariate ap-proximation (Witten-Bommerholz, 1998), vol. 107 of Math. Res., Wiley-VCH, Berlin,1999, pp. 129–138.

[17] S. Dahlke, W. Dahmen, and K. Urban, Adaptive wavelet methods for sad-dle point problems—optimal convergence rates, SIAM J. Numer. Anal., 40 (2002),pp. 1230–1262 (electronic).

[18] R. A. DeVore, Nonlinear approximation, in Acta numerica, 1998, vol. 7 of ActaNumer., Cambridge Univ. Press, Cambridge, 1998, pp. 51–150.

[19] W. Dorfler, A convergent adaptive algorithm for Poisson’s equation, SIAM J. Nu-mer. Anal., 33 (1996), pp. 1106–1124.

[20] K. Eriksson, D. Estep, P. Hansbo, and C. Johnson, Computational differentialequations, Cambridge University Press, Cambridge, 1996.

[21] V. Girault and P.-A. Raviart, Finite Element Methods for the Navier–StokesEquations, Springer–Verlag, New York, 1986. Also, 1979.

[22] V. Horlatch, Y. Kondratyuk, and G. Shynkarenko, Numerical analysis ofacoustic problems for hydroelastic dissipative systems, in Book of abstracts of FifthWorld Congress on Computational Mechanics, Vienna, vol. II, July 7-12 2002, p. 577.

[23] J-H.Pyo, The Gauge-Uzawa and related projection finite element methods for theevolution Navier-Stokes equations, PhD thesis, University of Maryland, College Park,2002.

135

Page 141: Adaptive Finite Element Algorithms of optimal complexityAdaptive Finite Element Algorithms of optimal complexity Adaptieve Eindige Elementen Algoritmen ... 1.1.1 Navier-Stokes equations

[24] Y. Kondratyuk, Numerical approximation of acoustic fluid-structure dynamic in-teraction problems, in Abstracts of the 6th International Conference ’MathematicalModeling and Analysis’, Vilnius, Lithuania, 2001, p. 49.

[25] Y. Kondratyuk, Adaptive finite element algorithms for the Stokes problem: Con-vergence rates and optimal computational complexity, Technical Report 1346, UtrechtUniversity, The Netherlands, Feb. 2006.

[26] Y. Kondratyuk and G. Shynkarenko, Analysis of solvability of mixed varia-tional formulation for the linear dynamic viscoelasticity problem., Visnyk Lviv. Univ.,Ser. Applied Math. and Computer Science (in Ukr.), (2000), pp. 126–135.

[27] Y. Kondratyuk and R. Stevenson, An optimal adaptive finite element methodfor the Stokes problem, Technical Report 1355, Utrecht University, The Netherlands,July 2006. Submitted.

[28] W. F. Mitchell, A comparison of adaptive refinement techniques for elliptic prob-lems, ACM Trans. Math. Software, 15 (1989), pp. 326–347 (1990).

[29] P. Morin, R. H. Nochetto, and K. G. Siebert, Data oscillation and conver-gence of adaptive FEM, SIAM J. Numer. Anal., 38 (2000), pp. 466–488 (electronic).

[30] R. H. Nochetto and J.-H. Pyo, Optimal relaxation parameter for the Uzawamethod, Numer. Math., 98 (2004), pp. 695–702.

[31] P. Oswald, Multilevel finite element approximation, Teubner Skripten zur Numerik.[Teubner Scripts on Numerical Mathematics], B. G. Teubner, Stuttgart, 1994. Theoryand applications.

[32] A. Quarteroni and A. Valli, Numerical approximation of partial differentialequations, vol. 23 of Springer Series in Computational Mathematics, Springer-Verlag,Berlin, 1994.

[33] H.-G. Roos, M. Stynes, and L. Tobiska, Numerical methods for singularly per-turbed differential equations, vol. 24 of Springer Series in Computational Mathematics,Springer-Verlag, Berlin, 1996. Convection-diffusion and flow problems.

[34] I. G. Rosenberg and F. Stenger, A lower bound on the angles of triangles con-structed by bisecting the longest side, Math. Comp., 29 (1975), pp. 390–395.

[35] G. Shynkarenko, Projection-Mesh Methods for Solving Initial Boundary ValueProblems, NMK VO, Kyiv, 1991.

[36] R. Stevenson, An optimal adaptive finite element method, SIAM J. Numer. Anal.,42 (2005), pp. 2188–2217 (electronic).

136

Page 142: Adaptive Finite Element Algorithms of optimal complexityAdaptive Finite Element Algorithms of optimal complexity Adaptieve Eindige Elementen Algoritmen ... 1.1.1 Navier-Stokes equations

[37] R. P. Stevenson, Optimality of a standard adaptive finite element method, TechnicalReport 1329, Utrecht University, The Netherlands, May 2005. To appear in Found.Comput. Math.

[38] R. P. Stevenson, The uniform saturation property for a singularly perturbed reaction-diffusion equation, Numer. Math., 101 (2005), pp. 355–379.

[39] R. P. Stevenson, The completion of locally refined simplicial partitions created by bisection, Technical Report 1336, Utrecht University, The Netherlands, May 2006. To appear in Math. Comp.

[40] R. Verfürth, A Review of A Posteriori Error Estimation and Adaptive Mesh-Refinement Techniques, Wiley-Teubner, New York, Stuttgart, 1996.

[41] R. Verfürth, A review of a posteriori error estimation techniques for elasticity problems, in Advances in adaptive computational methods in mechanics (Cachan, 1997), vol. 47 of Stud. Appl. Mech., Elsevier, Amsterdam, 1998, pp. 257–274.

[42] R. Verfürth, Robust a posteriori error estimators for a singularly perturbed reaction-diffusion equation, Numer. Math., 78 (1998), pp. 479–493.

[43] V. Voytovych, V. Horlatch, Y. Kondratyuk, and G. Shynkarenko, Displacement-based mathematical model of acoustic fluid-structure interaction problem, Applied Problems of Mechanics and Mathematics, Pidstryhach Institute of Applied Problems of Mechanics and Mathematics (in Ukr.), 2 (2005), pp. 108–118.

[44] H. Wu and Z. Chen, Uniform convergence of multigrid V-cycle on adaptively refined finite element meshes for second order elliptic problems, Technical Report 2003-07, Institute of Computational Mathematics, Chinese Academy of Sciences, 2003. To appear in Science in China: Series A Mathematics.


Index

abstract variational problem, 7
adaptive approximation, 15
adaptive error reduction, 30

Bernstein estimate, 11

derefinement, 75

edge residual, 27
element residual, 27
error estimator, 26

  robust, 58

Galerkin approximation, 8
Galerkin method, 8

hierarchical basis, 41

inf-sup condition, 70, 73

Jackson estimate, 11

Lax-Milgram lemma, 7
linear approximation, 14
longest edge bisection, 19

mixed variational problem, 70

Navier-Stokes equations, 2
newest vertex bisection, 19
nodal basis, 12
nodal interpolant, 12
nodal set, 12
nonlinear approximation, 15

optimal adaptive algorithm
  for the Poisson problem, 24
  for the Stokes problem, 75

optimal algorithm, 2
Oseen problem, 2

reaction-diffusion problem, 57
red-refinement, 19

saturation property, 31
  uniform, 61

Schur complement operator, 78
sequence of triangulations

  quasi-uniform, 10
  shape-regular, 11

singularly perturbed problem, 57
Stokes problem, 3, 71

triangulation, 10
  admissible, 20
  conforming, 10
  nonconforming, 10

Uzawa method, 79

vertex
  hanging, 10
  non-hanging, 10
  regular, 20

wavelet basis, 41


Nederlandse samenvatting (Summary in Dutch)

This thesis concerns the development and analysis of optimal algorithms for solving partial differential equations (PDEs). The search for an optimal algorithm quickly leads to the so-called adaptive method. Simply put, the goal of such a method is to adapt the approximation to the still unknown solution during the solution process, and ultimately to deliver the (quasi-) best possible approximation at optimal computational cost. Adaptive methods are nowadays widely used in science and engineering to solve PDEs efficiently [40, 1, 3]. They are particularly effective when the solution of a boundary value problem exhibits singularities. A singularity may arise from a concentration of stress in elasticity, from boundary layers in environmental pollution problems, or from the propagation of shock waves in fluid dynamics. The general structure of one loop of the algorithm is

Solve → Estimate → Refine or coarsen

A complete adaptive finite element algorithm generates a solution of a discrete problem on some triangulation, estimates the error in the current approximation with an a posteriori error estimator, and adaptively refines the triangulation in the regions with the largest error. Such an adaptive procedure produces, at minimal computational cost, a sequence of increasingly refined triangulations, with the goal of satisfying a stopping criterion based on the a posteriori error estimator. Despite the great success of finite element methods in application areas over the past 25 years, their convergence was only proven in the work of Dörfler [19] (in 1996), and of Morin, Nochetto and Siebert [29] (in 2000).
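As a concrete illustration, the loop above can be sketched on a toy one-dimensional problem: adaptive piecewise-linear approximation of u(x) = √x, whose derivative is singular at 0. This is a minimal sketch; the function names, the midpoint error indicator, and the use of interpolation as a stand-in "solver" are illustrative choices, not the C++ implementation described in the thesis.

```python
import bisect

def solve(mesh, u):
    """'Solve' step: nodal values on the current mesh.
    (A stand-in for a Galerkin solve: we simply interpolate u.)"""
    return [u(x) for x in mesh]

def estimate(mesh, vals, u):
    """'Estimate' step: a local error indicator per element, here the
    deviation of u from its linear interpolant at the element midpoint."""
    errs = []
    for i in range(len(mesh) - 1):
        mid = 0.5 * (mesh[i] + mesh[i + 1])
        errs.append(abs(u(mid) - 0.5 * (vals[i] + vals[i + 1])))
    return errs

def mark(errs, theta=0.6):
    """Doerfler ('bulk chasing') marking: a smallest set of elements
    carrying at least a theta-fraction of the total estimated error."""
    order = sorted(range(len(errs)), key=lambda i: -errs[i])
    total, acc, marked = sum(errs), 0.0, []
    for i in order:
        marked.append(i)
        acc += errs[i]
        if acc >= theta * total:
            break
    return marked

def refine(mesh, marked):
    """'Refine' step: bisect every marked element (the 1D analogue of
    newest vertex bisection).  Midpoints are computed before insertion
    so that the marked indices stay valid."""
    mids = [0.5 * (mesh[i] + mesh[i + 1]) for i in marked]
    for m in mids:
        bisect.insort(mesh, m)
    return mesh

def adaptive_loop(u, tol):
    """Solve -> Estimate -> Mark -> Refine until every local indicator
    is below tol; returns the final (graded) mesh."""
    mesh = [0.0, 1.0]
    while True:
        vals = solve(mesh, u)
        errs = estimate(mesh, vals, u)
        if max(errs) < tol:
            return mesh
        mesh = refine(mesh, mark(errs))

# u(x) = sqrt(x) has a derivative singularity at 0: the loop grades the
# mesh automatically, with tiny elements only near the singularity.
final_mesh = adaptive_loop(lambda x: x ** 0.5, 1e-3)
```

Running this produces a strongly graded mesh: the smallest elements cluster at the singular point 0, while the elements away from it remain coarse, which is exactly the behavior the adaptive loop is designed to achieve.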

The fact that an adaptive method converges does not necessarily mean that it is more efficient than its non-adaptive counterpart. Building on the results from [19] and [29], however, Binev, Dahmen and DeVore [6] (in 2004), and Stevenson [36] (in 2005) subsequently proved that adaptive finite element methods converge at the optimal rate, with linear computational complexity. These first optimality results were based on the use of special coarsening routines for the mesh.

As in the aforementioned literature, we will call an adaptive finite element method optimal if it performs as follows: whenever, for some s > 0, the solution of a boundary value problem can be approximated within any tolerance ε > 0, in the energy norm, by a finite element approximation with respect to a triangulation with O(ε^{-1/s}) triangles, and the right-hand side can be approximated at the same rate, then the method produces solutions that also converge at this rate, with a number of arithmetic operations proportional to the number of triangles in the final triangulation.
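Restated in symbols (our paraphrase, with N the number of triangles in a triangulation 𝒯 and ‖·‖_E the energy norm; approximability within tolerance ε by O(ε^{-1/s}) triangles is equivalent to an error decay of order N^{-s}):

```latex
\inf_{\#\mathcal{T}\le N} \|u - u_{\mathcal{T}}\|_E \le C\,N^{-s}
\quad\Longrightarrow\quad
\|u - u_k\|_E \lesssim (\#\mathcal{T}_k)^{-s},
```

where u_k denotes the approximation produced by the adaptive method on its k-th triangulation 𝒯_k, and the total number of arithmetic operations is of order #𝒯_k.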

In Chapter 2, the most important recent developments in the theory of adaptive finite element methods are discussed in detail, using the Poisson problem as a model. We have written a C++ implementation of an adaptive finite element method in 2D. In this method, the Galerkin systems are solved with the conjugate gradient method after transformation of the stiffness matrix into wavelet coordinates. We present numerical experiments for the Poisson problem on an L-shaped domain and on a domain with a crack.

Adaptive algorithms for a singularly perturbed reaction-diffusion problem are discussed in Chapter 3. Problems of this type occur, for example, in the modeling of thin plates and shells, in the numerical treatment of reaction processes, and as the result of an implicit time semi-discretization of parabolic or second-order hyperbolic equations. The main feature of the reaction-diffusion equation is that its solution exhibits very steep boundary layers for large values of the reaction term. From this perspective, the application of adaptive methods to this problem is promising ([33, 42]). We are interested in methods that converge uniformly with respect to the reaction term. We analyze a robust a posteriori error estimator and a uniform saturation property. We have made a C++ implementation of a 2D algorithm for solving a singularly perturbed reaction-diffusion problem, in which a sequence of adaptive approximations is constructed that converges uniformly in the reaction term. In practice it is often difficult to achieve linear computational complexity with an adaptive method, because its design and implementation require great care. Numerical experiments that we performed indicate that our adaptive method terminates relatively quickly even with hundreds of thousands of degrees of freedom, and that it has linear computational complexity.

Chapter 4 treats optimal algorithms for the Stokes problem. Problems in fluid dynamics generally lead to mixed variational problems [21, 11]. In this case adaptive algorithms are much harder to design, and to our knowledge no proof of optimality of an adaptive finite element method was available. In [25] and in this chapter, a detailed design of a finite element algorithm for the Stokes problem is given, and its computational complexity is analyzed. We apply a fixed point iteration to an infinite-dimensional Schur complement operator. We use a generalization of the optimal adaptive method from Chapter 2 to approximate the inverse of the elliptic operator. We show that this method converges and has optimal computational complexity if, after every fixed number of steps, a coarsening step is applied with the goal of optimizing the underlying triangulation. Apart from its use here, we expect that an efficient coarsening routine could be extremely useful for adaptively solving evolution problems with moving shock waves.
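The fixed point iteration on the Schur complement is the Uzawa method, which can be sketched on a toy finite-dimensional saddle-point system. The matrices below are illustrative stand-ins for the discrete Stokes operators, and the exact solve with A plays the role of the (in the thesis, adaptive and inexact) elliptic solver; the choice of the relaxation parameter ω is analyzed in [30].

```python
import numpy as np

# Toy saddle-point system  [A  B^T; B  0] [u; p] = [f; g],
# an illustrative stand-in for the discrete Stokes equations
# (A: SPD "velocity" block, B: "divergence" block; the values are made up).
A = np.diag([1.0, 2.0, 3.0])
B = np.array([[1.0, 1.0, 0.0],
              [0.0, 1.0, 1.0]])
f = np.array([1.0, 0.0, 2.0])
g = np.array([0.5, -0.5])

def uzawa(A, B, f, g, omega=1.0, iters=200):
    """Uzawa fixed-point iteration on the Schur complement S = B A^{-1} B^T:
         u_{k+1} = A^{-1} (f - B^T p_k)
         p_{k+1} = p_k + omega * (B u_{k+1} - g)
    It converges for 0 < omega < 2 / lambda_max(S).  In the adaptive
    method, the exact solve with A is replaced by an approximate
    adaptive finite element solve; here we simply invert exactly."""
    p = np.zeros(B.shape[0])
    u = np.zeros(B.shape[1])
    for _ in range(iters):
        u = np.linalg.solve(A, f - B.T @ p)
        p = p + omega * (B @ u - g)
    return u, p

u, p = uzawa(A, B, f, g)
```

Because only solves with the elliptic block A are required, the pressure p is updated by an explicit relaxation step, which is what makes the Schur complement viewpoint compatible with an adaptive elliptic solver inside each iteration.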

Recently, in the work of Stevenson [37], an adaptive finite element method for solving elliptic problems was proposed that has optimal computational complexity without repeated coarsening of the triangulations. This result is very valuable for constructing optimal adaptive methods for stationary problems. The extension of this result to the mixed variational Stokes problem turned out to be very involved; it is the subject of Chapter 4 (and of the paper [27]). A new adaptive finite element method for the Stokes equations is designed that converges at the best possible rate. We have implemented this method, and we discuss its numerical performance for the Stokes problem on an L-shaped domain.


Dankwoord/Acknowledgements

I would like to thank my promotor Henk van der Vorst for his attention to my work, many inspiring discussions, and his kind attitude. I am indebted to my co-promotor Rob Stevenson for his guidance during my work on the dissertation, for our collaboration and his advice, and for his thorough and critical proofreading of my work.

I acknowledge the financial support provided by the Netherlands Organization for Scientific Research (NWO).

I thank the members of the reading committee, Prof. Angela Kunoth, Dr. Jan Brandts, Prof. Hans Duistermaat, Prof. Piet Hemker and Prof. Jaap van der Vegt, for their comments concerning my thesis.

Apart from research, during my stay in Utrecht I spent time on teaching and working with many colleagues, and I am thankful to all of them for a pleasant collaboration.

I would like to express my gratitude to Tsogtgerel Gantumur, with whom I shared my office, first of all for being a true friend, and secondly for interesting discussions concerning Numerical Mathematics. I use this opportunity to thank all my friends, in particular Arno Swart, Hil Meijer, Taoufik Bakri, Hoang Nguyen, Tammo Jan Dijkema, Igor Grubisic, Joost Rommes, Budhi Arta Surya, Arthur van Dam, Mario Mommer, Helmut Harbrecht, Martijn van Manen, Roderik Lindenbergh, Behrooz Mirzaii, Sergey Anisov, Fernando Morales Cano, and Andriy Shmatukha. Separate thanks to Tammo Jan for translating my English text for the Dutch summary.

I thank all my colleagues from the Mathematics Institute of Utrecht University for a nice working atmosphere.

I acknowledge the Italian National Research Council for awarding me a fellowship in Mathematical Sciences that allowed me to do research in Italy during a six-month period in 2001. I would like to thank Roberto Natalini (Rome) for his advice to choose the Institute of Numerical Analysis in Pavia within this fellowship. I thank Silvia Bertoluzza (Pavia), who agreed to guide me in the analysis of adaptive methods for nonlinear nonstationary PDEs during my stay in Pavia. Many thanks to my friends Marco Verani, Giancarlo Sangalli and Ulisse Stefanelli from Pavia.

I am pleased to have this opportunity to thank Prof. G. Shynkarenko, Prof. Ya. Savula and Prof. G. Tsehelyk, from whom I learned Numerical Mathematics and received a lot of inspiration during my study and research at the Faculty of Applied Mathematics and Informatics of L'viv University in Lviv (Ukraine). My special gratitude goes to Prof. G. Shynkarenko for his valuable advice and sincere support in difficult moments, concerning both science and life, throughout my career.

My parents and my brother have been far away from here, but I have always felt as if they were very close. For their love and support, thanks.


Curriculum Vitae

Yaroslav Kondratyuk was born in the Khmelnitsk region, Ukraine, on 12 July 1977. In 1994 he graduated from high school with a gold medal.

Between 1994 and 1999 he studied Applied Mathematics at the Faculty of Applied Mathematics and Informatics, L'viv University (Ivan Franko National University of L'viv), Ukraine. L'viv University is one of the oldest universities in Eastern Europe (founded in 1661). Contributions to the successful development of mathematics at L'viv University were made in the past by scientists such as Stefan Banach, Hugo Steinhaus, Wladyslaw Orlicz, and Juliusz Schauder.

During his study at L'viv University he was very much influenced by the lectures of Prof. G. Shynkarenko on Functional Analysis, Theory and Applications of the Finite Element Method, and Mathematical Modeling, by the lectures of Prof. Ya. Savula on the Finite Element Method, and by the lectures of Prof. G. Tsehelyk on Numerical Mathematics. In 1998 he received a grant of the International Renaissance Foundation for work on numerical modeling. In 1999 he received his diploma in Applied Mathematics; his diploma thesis 'Numerical Modeling of Acoustic Fluid-Structure Interaction' was distinguished by the Examination Board.

Between 1999 and 2001 he worked as a junior scientific researcher at L'viv University. Some results of this work were published in [43], [26], [22], [24].

In 2001, the Italian National Research Council awarded him a Fellowship in Mathematical Sciences. This allowed him to do research in adaptive methods at the Institute of Numerical Analysis in Pavia, Italy, during a six-month period.

Until August 2006 he worked on his PhD dissertation 'Adaptive Finite Element Algorithms of Optimal Complexity' at the Mathematics Institute, Utrecht University, The Netherlands. This work was supported by the Netherlands Organization for Scientific Research (NWO). During this period he was a member of the EU IHP network 'Nonlinear Approximation and Adaptivity: Breaking Complexity in Numerical Modeling and Data Representation'. His research resulted in the present thesis.
