\section*{Introduction}%+
\addcontentsline{toc}{chapter}{Introduction}
\markboth{INTRODUCTION}{INTRODUCTION}

Finite element methods provide a widely used tool for the solution of
problems with an underlying variational structure. Modern numerical
analysis and finite element implementations provide more and more
tools for the efficient solution of large-scale applications.
Efficiency can be increased by using local mesh adaptation, by using
higher order elements where applicable, and by using fast solvers.

Adaptive procedures for the numerical solution of partial differential
equations started in the late 1970s and are now standard tools in
science and engineering. Adaptive finite element methods are a
meaningful approach for handling multiscale phenomena and making
realistic computations feasible, especially in 3d.

There is a vast variety of books about finite elements. Here, we
only want to mention the books by Ciarlet \cite{Ciarlet:2002} and
Brenner and Scott \cite{BrennerScott:2002} as the most prominent ones.
The book by Brenner and Scott also contains an introduction to
multilevel methods.

The situation is completely different for books about adaptive finite
elements. Only a few books with introductory material about the
mathematics of adaptive finite element methods can be found, such as
the books by Verf\"urth \cite{Verfuerth:96} and Ainsworth and Oden
\cite{AinsworthOden:2000}. Material about more practical issues such as
adaptive techniques and refinement procedures can, for example, be found
in
\cite{BabuskaRheinboldt:78,Bank:98,Baensch:91,Kossaczky:94,Maubach:95,Mitchell:88}.

Another basic ingredient of an adaptive finite element method is the
a posteriori error estimator, which is the main object of interest in the
analysis of adaptive methods.  While a general theory exists for these
estimators in the case of linear and mildly nonlinear problems
\cite{BaenschSiebert:95,Verfuerth:96}, highly nonlinear problems
usually still need special treatment; see
\cite{ChenNochetto:00,DoerflerSiebert:03,NSV:2000,NSV:2003,Veeser:2001} for
instance. There are many different approaches to (and a large
number of articles about) the derivation of error estimates, using
residual techniques, dual techniques, the solution of local problems,
hierarchical approaches, etc.; a far from complete list of references
is
\cite{AinsworthOden:93,BabuskaRheinboldt:78,BankWeiser:85,BeckerRannacher:96,BEK:96,ErikssonJohnson:91,MNS:03,Verfuerth:94a}.

Although adaptive finite element methods in practice construct a
sequence of discrete solutions which converges to the true solution,
this convergence was proved only recently,
for linear elliptic problems \cite{MNS:00,MNS:02,MNS:03} and
for the nonlinear Laplacian \cite{Veeser:2002}, based on the
fundamental paper \cite{Doerfler:96a}. For a modification of the
convergent algorithm in \cite{MNS:00}, quasi-optimality of the adaptive
method was proved in \cite{BDD:2002} and \cite{Stevenson:2003}.

In recent years there has been great progress in the design of
finite element software. It is not possible to mention all
freely available packages; examples are
\cite{Bank:98,BBJLRWWW:99,BER:97,Mitchell:93,SchmidtSiebert:2001},
and a continuously updated list of other available finite element codes
and resources can, for instance, be found at

\centerline{\url{www.engr.usask.ca/~macphed/finite/fe_resources/}.}

\subsection*{Adaptive finite element methods and basic concepts of \ALBERTA}

Finite element methods calculate approximations to the true solution
in some finite dimensional function space. This space is built from
\emph{local function spaces}, usually polynomials of low order, on
elements of a partitioning of the domain (the \emph{mesh}). An
adaptive method adjusts this mesh (or the local function space, or
both) to the solution of the problem. This adaptation is based on
information extracted from \emph{a posteriori error estimators}.

The basic iteration of an adaptive finite element code for a stationary
problem is 
\begin{descr}
\item assemble and solve the discrete system;
\item calculate the error estimate;
\item adapt the mesh, when needed.
\end{descr}
For time dependent problems, such an iteration is used in each time
step, and the step size of a time discretization may be subject to
adaptivity, too.
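For a stationary problem, this iteration might be organized as in the
following sketch; the routine names are hypothetical place holders for
the problem dependent parts and the library tools described in this
manual.

\begin{verbatim}
/* Sketch of the basic adaptive iteration for a stationary problem.
 * All names are hypothetical place holders, not ALBERTA interfaces. */
extern void   assemble_and_solve(void); /* discrete system on current mesh */
extern double estimate(void);           /* a posteriori error estimate     */
extern void   adapt_mesh(void);         /* mark, then refine/coarsen       */

void adapt_stationary(double tolerance, int max_iterations)
{
  int iteration;

  for (iteration = 0; iteration < max_iterations; iteration++) {
    assemble_and_solve();
    if (estimate() <= tolerance)
      break;                            /* mesh is fine enough */
    adapt_mesh();
  }
}
\end{verbatim}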

\medskip

The core part of every finite element program is the problem dependent
assembly and solution of the discretized problem. This holds for
programs that solve the discrete problem on a fixed mesh as well as
for adaptive methods that automatically adjust the underlying mesh to
the actual problem and solution. In the adaptive iteration, the
assembly and solution of a discrete system is necessary after each
mesh change. Additionally, this step is usually the most time
consuming part of that iteration.

A general finite element toolbox must provide flexibility in the choice
of problems and finite element spaces while at the same time allowing
this core part to be performed efficiently. Data structures are needed
which allow an easy and efficient implementation of the problem
dependent parts and also allow the use of adaptive methods, mesh
modification algorithms, and fast solvers for linear and nonlinear
discrete problems by calling library routines. On the one hand, great
flexibility is needed in order to
choose various kinds of finite element spaces, with higher order
elements or combinations of different spaces for mixed methods or
systems. On the other hand, the solution of the resulting discrete
systems may profit enormously from a simple vector--oriented storage
of coefficient vectors and matrices. This also allows the use of
optimized solver and BLAS libraries. Additionally, multilevel
preconditioners and solvers may profit from hierarchy information,
leading to highly efficient solvers for the linear (sub--) problems.

\medskip

\ALBERTA\cite{SchmidtSiebert:98a,SchmidtSiebert:99,SchmidtSiebert:2001} 
provides all those tools mentioned above for the efficient
implementation and adaptive solution of general nonlinear problems in
one, two, or three space dimensions. The design of the \ALBERTA data
structures allows a dimension independent implementation of problem
dependent parts. The mesh adaptation is done by local refinement and
coarsening of mesh elements, while the same local function space is
used on all mesh elements.

The starting point for the design of the \ALBERTA data structures is the
abstract concept of a finite element space, defined (similarly to the
definition of a single finite element by Ciarlet \cite{Ciarlet:2002}) as
a triple consisting of
\begin{descr}
\item a collection of \emph{mesh elements};
\item a set of local \emph{basis functions} on a single element, usually a
      restriction of global basis functions to a single element;
\item a connection of local and global basis functions giving global
      \emph{degrees of freedom} for a finite element function.
\end{descr}
This directly leads to the definition of three main groups of data structures:
\begin{descr}
\item data structures for geometric information storing the underlying
      mesh together with element coordinates, boundary type and
      geometry, etc.;
\item data structures for finite element information providing
      values of local basis functions and their derivatives;
\item data structures for algebraic information linking geometric data
      and finite element data.
\end{descr}
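As an illustration of how these three groups fit together, the
following sketch shows one possible set of simplified C structures;
the names are hypothetical and do not coincide with the actual
\ALBERTA data types.

\begin{verbatim}
/* Geometric information: the hierarchical mesh (see below). */
typedef struct mesh MESH;

/* Algebraic information: administration of the global DOF indices. */
typedef struct dof_admin DOF_ADMIN;

/* Finite element information: local basis functions on the reference
 * element, evaluated in local coordinates lambda. */
typedef double (*LOCAL_BASIS_FCT)(const double *lambda);

typedef struct basis_functions {
  int              n_basis_functions;  /* number of local basis functions */
  LOCAL_BASIS_FCT *phi;                /* phi[i](lambda) = value of the
                                          i-th local basis function       */
} BASIS_FUNCTIONS;

/* A finite element space combines one object of each group. */
typedef struct fe_space {
  MESH            *mesh;      /* collection of mesh elements      */
  BASIS_FUNCTIONS *bas_fcts;  /* local basis functions            */
  DOF_ADMIN       *admin;     /* global degrees of freedom (DOFs) */
} FE_SPACE;
\end{verbatim}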
Using these data structures, the finite element toolbox \ALBERTA
provides the whole abstract framework, such as finite element spaces and
adaptive strategies, together with hierarchical meshes, routines for
mesh adaptation, and the complete administration of finite element
spaces and the corresponding degrees of freedom (DOFs) during mesh
modifications. The underlying data structures allow a flexible
handling of such information.  Furthermore, tools for numerical
quadrature, matrix and load vector assembly as well as solvers for
(linear) problems, like conjugate gradient methods, are available.

A specific problem can be implemented and solved by providing just
some problem dependent routines for the evaluation of the (linearized)
differential operator, data, the nonlinear solver, and (local) error
estimators, using all the above-mentioned tools from the library.

Both geometric and finite element information strongly depend on the
space dimension. Thus, mesh modification algorithms and basis
functions are implemented separately for one (1d), two (2d), and three
(3d) dimensions and are provided by the toolbox. Everything
else can be formulated in such a way that the dimension only
enters as a parameter (such as the size of local coordinate vectors).
For usual finite element applications this results in dimension
independent programming, where all dimension dependent parts are
hidden in the library; applications can thus be programmed dimension
independently to the greatest possible extent.
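A small, self-contained example of this style of programming (not
\ALBERTA code; the function is hypothetical): the dimension enters only
as the length of local value vectors, so the same routine works for 1d,
2d, and 3d simplices.

\begin{verbatim}
#include <stdio.h>

/* The spatial dimension enters only as a parameter: a simplex in
 * dimension dim has dim+1 vertices, so local value vectors have
 * length dim+1 and the same code runs unchanged for dim = 1, 2, 3. */
static double barycenter_value(int dim, const double vertex_values[])
{
  double sum = 0.0;
  int    i;

  for (i = 0; i < dim + 1; i++)
    sum += vertex_values[i];

  return sum / (double)(dim + 1);  /* value of a linear function at the
                                      barycenter of the simplex         */
}

int main(void)
{
  double values[4] = { 1.0, 2.0, 3.0, 4.0 };

  printf("2d: %g  3d: %g\n",
         barycenter_value(2, values), barycenter_value(3, values));
  return 0;
}
\end{verbatim}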

\medskip

The remaining parts of the introduction give a short overview of the main
concepts; details are then given in Chapter~\ref{CH:concepts}.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{The hierarchical mesh}

The underlying mesh is a conforming triangulation of the computational
domain into simplices, i.e.\ intervals (1d), triangles (2d), or
tetrahedra (3d). The simplicial mesh is generated by refinement of a
given initial triangulation. Refined parts of the mesh can be
de--refined, but elements of the initial triangulation (\emph{macro
  elements}) must not be coarsened.  The refinement and coarsening
routines construct a sequence of nested meshes with a hierarchical
structure. In \ALBERTA, the recursive refinement by bisection is
implemented, using the notation of Kossaczk\'y \cite{Kossaczky:94}.

During refinement, new degrees of freedom are created. A single degree
of freedom is shared by all elements which belong to the support of
the corresponding finite element basis function (compare next
paragraph). The mesh refinement routines must create a new DOF only
once and give access to this DOF from all elements sharing
it. Similarly, DOFs are handled during coarsening. This is done in
cooperation with the DOF administration tool, see below.

The bisection refinement of elements leads naturally to nested
meshes with the hierarchical structure of binary trees, one tree for every
element of the initial triangulation. Every interior node of such a tree
has pointers to its two children; the leaf elements form
the actual triangulation, which is used to define the finite element
space(s). The whole triangulation is a list of the given macro elements
together with the associated binary trees.
The hierarchical structure allows most information to be generated
from the hierarchy, which reduces the amount of data to be stored.
Some information is stored explicitly on the (leaf) elements; other
information is located at the macro elements and is transferred to
the leaf elements while traversing the binary tree.
Element information about vertex coordinates, domain boundaries,
and element adjacency can be computed easily and very fast from the
hierarchy, when needed. Data stored explicitly at tree elements can
be reduced to pointers to the two possible children and information
about local DOFs (for leaf elements).
Furthermore, the hierarchical mesh structure directly leads to
multilevel information which can be used by multilevel preconditioners
and solvers.
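The following sketch indicates how such a tree element might look in C;
it is a simplified, hypothetical structure, not the actual \ALBERTA
element type.

\begin{verbatim}
/* Simplified sketch of a hierarchical mesh element.  Interior tree nodes
 * only carry pointers to their two children; leaf elements additionally
 * carry their local DOF indices.  Coordinates, boundary information and
 * neighbour relations are reconstructed from the macro element while
 * traversing the tree. */
typedef struct element ELEMENT;

struct element {
  ELEMENT *child[2];  /* both NULL for a leaf of the binary tree       */
  int     *dof;       /* DOF indices, only meaningful on leaf elements */
};

typedef struct macro_element {
  ELEMENT *root;      /* root of the refinement tree             */
  double  *coord;     /* vertex coordinates of the macro element */
} MACRO_ELEMENT;

#define IS_LEAF(el) ((el)->child[0] == NULL)
\end{verbatim}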

Access to mesh elements is available solely via routines which
traverse the hierarchical trees; no direct access is possible. The
traversal routines can give access to all tree elements, only to leaf
elements, or to all elements which belong to a single hierarchy level
(e.g.\ for a multilevel application). In order to perform operations
on visited elements, the traversal routines call a subroutine which is
passed to them as a parameter. Only the element information that is
needed by the current operation is generated during the tree
traversal.
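Using the simplified element structure sketched above, such a traversal
of the leaf elements of one refinement tree might look as follows (again
a hypothetical sketch, not the \ALBERTA interface).

\begin{verbatim}
/* Recursive traversal of the leaf elements of one refinement tree.
 * The operation to be performed is passed as a callback; user_data can
 * carry problem dependent information to the callback. */
typedef void (*ELEMENT_CALLBACK)(ELEMENT *el, void *user_data);

static void traverse_leaf_elements(ELEMENT *el,
                                   ELEMENT_CALLBACK fn, void *user_data)
{
  if (el == NULL)
    return;

  if (IS_LEAF(el)) {
    fn(el, user_data);                 /* operate on a leaf element */
  } else {
    traverse_leaf_elements(el->child[0], fn, user_data);
    traverse_leaf_elements(el->child[1], fn, user_data);
  }
}
\end{verbatim}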

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{Finite elements}

The values of a finite element function or the values of its
derivatives are uniquely defined by the values of its DOFs and the
values of the basis functions or the derivatives of the basis
functions connected with these DOFs.  We follow the concept of finite
elements which are given on a single element $S$ in local coordinates:
Finite element functions on an element $S$ are defined by a finite
dimensional function space $\Pbar$ on a reference element $\Sbar$ and
the (one-to-one) mapping $\lambda^S: \Sbar \to S$ from the reference
element $\Sbar$ to the element $S$. In this situation the
non-vanishing basis functions on an arbitrary element are given by the set
of basis functions of $\Pbar$ in local coordinates $\lambda^S$. Likewise,
derivatives are given by the derivatives of the basis functions of $\Pbar$
and the derivatives of $\lambda^S$.
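In formulas: if $\bar\varphi_1,\ldots,\bar\varphi_m$ denotes a basis of
$\Pbar$ (a notation used only here for illustration), then the
non-vanishing basis functions on $S$ and their derivatives are
\[
\varphi^S_i = \bar\varphi_i \circ \big(\lambda^S\big)^{-1},
\qquad
\nabla \varphi^S_i(x) = \bar\nabla \bar\varphi_i(\bar x)\,
  \big(D\lambda^S(\bar x)\big)^{-1},
\qquad \bar x = \big(\lambda^S\big)^{-1}(x),
\]
for $i = 1,\ldots,m$, where $\bar\nabla$ denotes the (row) gradient with
respect to the coordinates on $\Sbar$ and $D\lambda^S$ the Jacobian of
$\lambda^S$.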

Each local basis function on $S$ is uniquely connected to a global
degree of freedom, which can be accessed from $S$ via the DOF
administration tool. \ALBERTA supports basis functions connected with
DOFs, which are located at vertices of elements, at edges, at faces
(in 3d), or in the interior of elements. DOFs at a vertex are shared
by all elements which meet at this vertex, DOFs at an edge or face are
shared by all elements which contain this edge or face, and DOFs
inside an element are not shared with any other element. The support
of the basis function connected with a DOF is the patch of all
elements sharing this DOF.

For a very general approach, we only need a vector of the basis
functions (and their derivatives) on $\Sbar$ and a function for the
communication with the DOF administration tool in order to access the
degrees of freedom connected to the local basis functions. With this
information, every finite element function (and its derivatives) is
uniquely described on every element of the mesh.

During mesh modifications, finite element functions must be
transformed to the new finite element space. For example, a discrete
solution on the old mesh yields a good initial guess for an iterative
solver and thus a smaller number of iterations for the solution of the
discrete problem on the new mesh. Usually, these transformations can
be realized by a sequence of local operations. Local interpolations
and restrictions during refinement and coarsening of elements depend
on the function space $\Pbar$ and the refinement of $\Sbar$
only. Thus, the subroutine for interpolation during an atomic mesh
refinement is the efficient implementation of the representation of
coarse grid functions by fine grid functions on $\Sbar$ and its
refinement. A restriction during coarsening is implemented using
similar information.
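As a concrete example, for piecewise linear Lagrange elements and
refinement by bisection, the only new DOF created by bisecting an
element is located at the midpoint of the refinement edge, and the
interpolant of a coarse grid function $u$ is determined there by
\[
u(v_{\mathrm{new}}) = \frac{1}{2}\big( u(v_0) + u(v_1) \big),
\]
where $v_0$ and $v_1$ are the endpoints of the refinement edge; all
other DOF values are simply inherited from the parent elements.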

Lagrange finite element spaces up to order four are currently
implemented in one, two, and three dimensions. This includes the
communication with the DOF administration as well as the interpolation
and restriction routines.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{Degrees of freedom}

Degrees of freedom (DOFs) connect finite element data with geometric
information of a triangulation. For general applications, it is
necessary to handle several different sets of degrees of freedom on
the same triangulation. For example, in mixed finite element methods
for the Navier-Stokes problem, different polynomial degrees are used
for discrete velocity and pressure functions.

During adaptive refinement and coarsening of a triangulation, not only
elements of the mesh are created and deleted, but also degrees of
freedom together with them. The geometry is handled dynamically in a
hierarchical binary tree structure, using pointers from parent
elements to their children. For data corresponding to DOFs, which are
usually involved in matrix--vector operations, simpler storage and
access methods are more efficient. For that reason, every DOF is
realized just as an integer index, which can easily be used to access
data from a vector or to build matrices that operate on vectors of DOF
data. This results in very efficient access during matrix/vector
operations and makes it possible to use libraries for the solution
of linear systems with a sparse system matrix (e.g.\ \cite{Doerfler:95a}).
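The following sketch illustrates how DOF indices can be used directly in
a matrix--vector product; the compressed row storage format shown here
is one possible choice for the sparse matrix and is not necessarily the
format used internally.

\begin{verbatim}
/* A DOF is just an integer index into coefficient vectors. */
typedef int DOF;

/* One possible sparse matrix format (compressed row storage). */
typedef struct {
  int     n_rows;
  int    *row_start;  /* row i occupies entries row_start[i] ...
                         row_start[i+1]-1                       */
  DOF    *col_dof;    /* column DOF of each stored entry        */
  double *entry;      /* value of each stored entry             */
} SPARSE_MATRIX;

/* y = A x, where x and y are DOF vectors addressed by DOF indices. */
static void matrix_vector(const SPARSE_MATRIX *A,
                          const double *x, double *y)
{
  int i, k;

  for (i = 0; i < A->n_rows; i++) {
    double sum = 0.0;
    for (k = A->row_start[i]; k < A->row_start[i + 1]; k++)
      sum += A->entry[k] * x[A->col_dof[k]];
    y[i] = sum;
  }
}
\end{verbatim}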

Using this realization of DOFs, two major problems arise:
\begin{descr}
\item During refinement of the mesh, new DOFs are added, and additional
indices are needed. The total range of used indices has to be
enlarged. At the same time, all vectors and matrices that use these
DOF indices have to be adjusted in size, too.
\item
During coarsening of the mesh, DOFs are deleted. In general, the
deleted DOF is not the one which corresponds to the largest integer
index. Holes with unused indices appear in the total range of used
indices and one has to keep track of all used and unused indices.
\end{descr}
These problems are solved by a general DOF administration tool. During
refinement, it enlarges the ranges of indices if no unused indices
produced by a previous coarsening are available. During coarsening,
book--keeping of used and unused indices is performed. In order to
reestablish a contiguous range of used indices, a compression of DOFs
can be performed; all DOFs are renumbered such that all unused indices
are shifted to the end of the index range, thus removing the holes of
unused indices. Additionally, all vectors and matrices connected to
these DOFs are adjusted correspondingly. After this process, vectors
no longer contain holes, and standard operations like BLAS1 routines
can be applied and yield optimal performance.
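A compression of this kind can be sketched as follows; the interface is
a simplified illustration of the book--keeping done by the DOF
administration, not the actual library routine.

\begin{verbatim}
/* Renumber the used DOF indices into a contiguous range 0..n_used-1 and
 * move the entries of one DOF vector to their new positions.  Since the
 * new index of a used DOF is never larger than its old index, the move
 * can be done in place. */
static int compress_dofs(int n_indices, const int *used,
                         int *new_index, double *dof_vector)
{
  int i, n_used = 0;

  for (i = 0; i < n_indices; i++)      /* assign new, contiguous indices */
    new_index[i] = used[i] ? n_used++ : -1;

  for (i = 0; i < n_indices; i++)      /* compress the DOF vector        */
    if (used[i])
      dof_vector[new_index[i]] = dof_vector[i];

  return n_used;                       /* new length of the index range  */
}
\end{verbatim}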

In many cases, information stored in DOF vectors has to be adjusted to
the new distribution of DOFs during mesh refinement and coarsening.
Each DOF vector can provide pointers to subroutines that implement
these operations on data (which usually strongly depend on the
corresponding finite element basis). If such pointers are provided, a DOF
vector is transformed automatically during mesh modifications.
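Such a DOF vector might be sketched as follows; the structure and field
names are hypothetical simplifications, not the actual \ALBERTA
interface.

\begin{verbatim}
/* A DOF vector carrying pointers to its own transformation routines.
 * If the pointers are set, the DOF administration calls them whenever
 * elements are refined or coarsened; if they are NULL, the data is left
 * untouched (apart from the index book-keeping). */
typedef struct dof_real_vector DOF_REAL_VECTOR;

struct dof_real_vector {
  double *values;                              /* one entry per DOF */
  int     size;

  /* interpolation to the refined mesh (may be NULL) */
  void  (*refine_interpolate)(DOF_REAL_VECTOR *vec);
  /* restriction to the coarsened mesh (may be NULL) */
  void  (*coarsen_restrict)(DOF_REAL_VECTOR *vec);
};
\end{verbatim}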

All tasks of the DOF administration are performed automatically during
refinement and coarsening for every kind and combination of finite
elements defined on the mesh.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{Adaptive solution of the discrete problem}

The aim of adaptive methods is the generation of a mesh which is
adapted to the problem such that a given criterion, like a tolerance
for the estimated error between exact and discrete solution, is
fulfilled by the finite element solution on this mesh. An optimal mesh
should be as coarse as possible while meeting the criterion, in order
to reduce computing time and memory requirements.  For time dependent
problems, such an adaptive method may include mesh changes in each
time step and control of time step sizes. The philosophy implemented
in \ALBERTA is to change meshes successively by local refinement or
coarsening, based on error estimators or error indicators, which are
computed a posteriori from the discrete solution and given data on the
current mesh.

Several adaptive strategies have been proposed in the literature, giving
criteria for which mesh elements should be marked for refinement.  All
strategies are based on the idea of an equidistribution of the local
error over all mesh elements. Babu\v{s}ka and Rheinboldt
\cite{BabuskaRheinboldt:78} motivate that for stationary problems a
mesh is almost optimal when the local errors are approximately equal
for all elements. Thus, elements where the error indicator is large are
marked for refinement, while elements with a small indicator
are left unchanged or are marked for coarsening.  In time dependent
problems, the mesh is adapted to the solution in every time step using
a~posteriori information as in the stationary case. As a first mesh
for the new time step we use the adapted mesh from the previous time
step. Usually, only a few iterations of the adaptive procedure are then
needed to adapt the mesh for the new time step. This may be
accompanied by an adaptive control of the time step sizes.
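One simple marking rule of this kind is the maximum strategy, sketched
below; the threshold factors and the interface are chosen only for
illustration.

\begin{verbatim}
/* Maximum strategy: refine elements whose indicator exceeds a fixed
 * fraction gamma of the largest indicator, coarsen elements whose
 * indicator is much smaller.  The coarsening threshold used here is an
 * arbitrary illustrative choice. */
static void mark_elements(int n_elements, const double *indicator,
                          double gamma,
                          int *mark /* +1: refine, -1: coarsen, 0: keep */)
{
  double max_indicator = 0.0;
  int    i;

  for (i = 0; i < n_elements; i++)
    if (indicator[i] > max_indicator)
      max_indicator = indicator[i];

  for (i = 0; i < n_elements; i++) {
    if (indicator[i] >= gamma * max_indicator)
      mark[i] = 1;
    else if (indicator[i] <= 0.1 * gamma * max_indicator)
      mark[i] = -1;
    else
      mark[i] = 0;
  }
}
\end{verbatim}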

Given pointers to the problem dependent routines for the assembly and
solution of the discrete problems, as well as an error
estimator/indicator, the adaptive method for finding a solution on a
quasi--optimal mesh can be performed as a black--box algorithm.  The
problem dependent routines are used for the calculation of discrete
solutions on the current mesh and of (local) error estimates. Here, the
problem dependent routines make heavy use of library tools for
assembling system matrices and right hand sides for an arbitrary finite
element space, as well as tools for the solution of linear or
nonlinear discrete problems. On the other hand, any specialized
algorithm may be added if needed. The marking of mesh elements is
based on general refinement and coarsening strategies relying on the
local error indicators. During the subsequent mesh modification step,
DOF vectors are transformed automatically to the new finite element
spaces as described in the previous paragraphs.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection*{Dimension independent program development}

Using black--box algorithms, the abstract definition of basis
functions, quadrature formulas, and the DOF administration tool, only a
few parts of the finite element code depend on the dimension. Usually,
all dimension dependent parts are hidden in the library. Hence,
program development can be done in 1d or 2d, where execution is usually much
faster and debugging is much easier (e.g.\ because of simple 1d and 2d
visualization, which is much more involved in 3d). With no (or
only a few) additional changes, the program will then also work in
3d. This approach leads to a tremendous reduction of program
development time for 3d problems.

%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%

\bigskip

\paragraph{Notation.}
For a differentiable function $f\colon\, \Omega \to \R$
on a domain $\Omega \subset \R^d$, $d = 1,2,3$,
we set
\[
\nabla f(x) = \left(f_{,x_1}(x), \ldots, f_{,x_d}(x)\right)
= \left(\frac{\partial}{\partial x_1} f(x), \ldots,
        \frac{\partial}{\partial x_d} f(x)  \right)
\]
and
\[
D^2 f(x) = \left(f_{,x_k x_l}(x)\right)_{k,l = 1,\ldots, d}
 = \left(\frac{\partial^2}{\partial x_k \partial x_l}f(x)\right)_{k,l = 1,\ldots, d}.
\]
In the case of a vector valued, differentiable function
$f = (f_1,\ldots,f_n) \colon\, \Omega \to \R^n$
we write
\[
\nabla f(x) = 
\left(f_{i,x_1}(x), \ldots, f_{i,x_d}(x)\right)_{i = 1,\ldots, n}
=
\left(\frac{\partial}{\partial x_1} f_i(x), \ldots,
              \frac{\partial}{\partial x_d} f_i(x)  \right)_{i = 1,\ldots, n}
\]
and
\[
D^2 f(x) =
\left(f_{i,x_k x_l}(x)\right)_{\substack{i = 1,\ldots, n \\ k,l = 1,\ldots, d}}
 =
\left(\frac{\partial^2}{\partial x_k \partial x_l} f_i(x)\right)
_{\substack{i = 1,\ldots, n \\ k,l = 1,\ldots, d}}.
\]
By $L^p(\Omega)$, $1 \le p \le \infty$, we denote the usual Lebesgue 
spaces with norms
\[
\|f\|_{L^p(\Omega)} = \left(\int_\Omega |f(x)|^p\,dx\right)^{1/p}
\qquad\mbox{for } p < \infty\qquad\mbox{and}\qquad
\|f\|_{L^\infty(\Omega)} = \operatorname*{ess\ sup}_{x\in\Omega} |f(x)|.
\]
The Sobolev space of functions $u \in L^2(\Omega)$ with weak
derivatives $\nabla u \in L^2(\Omega)$ is denoted by $H^1(\Omega)$,
with seminorm
\[
|u|_{H^1(\Omega)} = \left(\int_\Omega |\nabla u(x)|^2\,dx\right)^{1/2}
\qquad\mbox{and norm}\qquad
\|u\|_{H^1(\Omega)} = \left(\|u\|_{L^2(\Omega)}^2 
+ |u|_{H^1(\Omega)}^2\right)^{1/2}.
\]


%%% Local Variables: 
%%% mode: latex
%%% TeX-master: "alberta"
%%% End: