%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\section{Adaptive Methods}
\label{S:adaptive_methods}%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\idx{adaptive methods!concepts|(}
The aim of adaptive methods is the generation of a mesh which is
adapted to the problem such that a given criterion, like a tolerance
for the estimated error between exact and discrete solution, is
fulfilled by the finite element solution on this mesh. An optimal mesh
should be as coarse as possible while meeting the criterion, in order
to save computing time and memory requirements. For time dependent
problems, such an adaptive method may include mesh changes in each
time step and control of time step sizes.
The philosophy implemented in \ALBERTA is to change meshes successively
by local refinement or coarsening, based on error estimators or error
indicators, which are computed a posteriori from the discrete solution
and given data on the current mesh.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Adaptive method for stationary problems}\label{S:adapt_stat_prob}%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\idx{adaptive methods!stationary problems}
Let us assume that a triangulation $\tria$ of $\Omega$, a finite
element solution $u_h \in X_h$ to an elliptic problem,
and an a posteriori error estimate
\begin{equation}\label{E:adapt.p}
\| u - u_h \| ~\leq~ \eta(u_h)
~=~ \left( \sum_{S \in \tria} \eta_S(u_h)^p \right)^{1/p}
, \qquad p \in [1,\infty)
\end{equation}
on this mesh are given. If $\tol$ is a given allowed tolerance for the
error, and $\eta(u_h) > \tol$, the question arises
\begin{itemize}\itemsep=0pt
\item where to refine the mesh in order to reduce the error,
\item while at the same time the number of unknowns should not become
too large.
\end{itemize}
A global refinement of the mesh would lead to the best error
reduction, but the number of new unknowns might be much larger than
needed to reduce the error below the given tolerance. Using local
refinement, we hope to do much better.
The design of an ``optimal'' mesh, where the number of unknowns is as
small as possible to keep the error below the tolerance, is an open
problem and will probably be much too costly. Especially in the case
of linear problems, the design of an optimal mesh will be much more
expensive than the solution of the original problem, since the mesh
optimization is a highly nonlinear problem. Usually, some heuristic
arguments are then used in the algorithm. The aim is to produce a
result that is ``not too far'' from an optimal mesh, but with a
relatively small amount of additional work to generate it.
Several adaptive strategies have been proposed in the literature, giving
criteria for deciding which mesh elements should be marked for refinement. All
strategies are based on the idea of an equidistribution of the local
error to all mesh elements. Babu\v{s}ka and Rheinboldt
\cite{BabuskaRheinboldt:78} argue that a mesh is almost optimal
when the local errors are approximately equal for all elements. So,
elements where the error indicator is large will be marked for
refinement, while elements with a small error indicator are left
unchanged.
The general outline of the adaptive algorithm for a stationary problem
is the following. Starting from an initial triangulation $\tria_0$, we
produce a sequence of triangulations $\tria_k$, for $k=1,2,\dots$, until
the estimated error is below the given tolerance:
\medskip
\begin{algorithm}[General adaptive refinement strategy]
\label{A:general_strategy}
\idx{adaptive methods!adaptive strategy}
\begin{algotab}
\> Start with $\tria_0$ and error tolerance $\tol$ \\[2mm]
\> \> $k := 0$ \\
\> \> solve the discrete problem on $\tria_k$\\
\> \> compute global error estimate $\eta$ and local indicators $\eta_S$,
$S\in\tria_k$\\
\> \> while $\eta > \tol$ do\\
\>\> \> mark elements for refinement (or coarsening)\\
\>\> \> adapt mesh $\tria_k$, producing $\tria_{k+1}$ \\
\>\> \> $k$ := $k+1$\\
\>\> \> solve the discrete problem on $\tria_k$\\
\>\> \> compute global error estimate $\eta$ and local indicators $\eta_S$,
$S\in\tria_k$\\
\> \> end while
\end{algotab}
\end{algorithm}%
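As an illustration only, the following C fragment sketches the control
flow of Algorithm \ref{A:general_strategy}. The functions
\code{solve()}, \code{estimate()}, \code{mark()}, and
\code{adapt\_mesh()} are hypothetical placeholders for the
problem-dependent routines; they are not part of the \ALBERTA
interface.
\begin{verbatim}
/* Sketch of the general adaptive strategy; all functions are
 * assumed callbacks provided by the application.              */
extern void   solve(void);      /* solve discrete problem      */
extern double estimate(void);   /* global estimate eta; also
                                 * fills the local indicators  */
extern void   mark(void);       /* mark elements for refinement
                                 * (or coarsening)             */
extern void   adapt_mesh(void); /* produce the next mesh       */

void adapt_stationary(double tol)
{
  double eta;

  solve();                      /* discrete problem on T_0     */
  eta = estimate();
  while (eta > tol) {
    mark();
    adapt_mesh();
    solve();                    /* discrete problem on T_k+1   */
    eta = estimate();
  }
}
\end{verbatim}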
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Mesh refinement strategies}%
\label{S:refinement_strategies}%
\idx{adaptive methods!marking strategies}%
\idx{marking strategies}
Since a discrete problem has to be solved in every iteration of this
algorithm, the number of iterations should be as small as possible.
Thus, the marking strategy should select not too few mesh elements for
refinement in each cycle. On the other hand, not many more elements
should be selected than are needed to reduce the error below the given
tolerance.
In the sequel, we describe several marking strategies that are
commonly used in adaptive finite element methods.
The basic assumption for all marking strategies is the fact that the
mesh is ``optimal'' when the local error is the same for all elements
of the mesh. This optimality can be shown under some heuristic
assumptions, see \cite{BabuskaRheinboldt:78}. Since the true error is
not known, we try to equidistribute the local error indicators. This is
motivated by the lower bound for error estimators of elliptic
problems. This lower bound ensures that the local error is large if
the local indicator is large and data of the problem is sufficiently
resolved \cite{AinsworthOden:2000,Verfuerth:96}. As a consequence,
elements with a large local error indicator should be refined, while
elements with a very small local error indicator may be coarsened.
\paragraph{Global refinement:}
The simplest strategy is not really ``adaptive'' at all, as it does
not produce a locally refined mesh. It refines the mesh globally, until
the given tolerance is reached.
If an a priori estimate for the error in terms of the maximal size of
a mesh element $h$ is known, where the error is bounded by a positive
power of $h$, and if the error estimator tends to zero together with
the error, then this strategy is guaranteed to produce a mesh and a
discrete solution which meet the error tolerance.
But, in most cases, global refinement produces far more mesh
elements than are needed to meet the error tolerance.
\paragraph{Maximum strategy:}
\idx{maximum strategy}
\idx{adaptive methods!maximum strategy}
Another very simple strategy is the maximum strategy. A threshold $\gamma
\in (0,1)$ is given, and all elements $S \in \tria_k$ with
\begin{equation}\label{E:max-strategy}
\eta_S ~>~ \gamma \max_{S'\in \tria_k} \eta_{S'}
\end{equation}
are marked for refinement. A small $\gamma$ leads to more refinement
and possibly non--optimal meshes, while a large $\gamma$ leads to more
cycles until the error tolerance is reached, but usually produces a
mesh with fewer unknowns. Typically, a threshold value $\gamma = 0.5$
is used when the power $p$ in (\ref{E:adapt.p}) is $p=2$
\cite{Verfuerth:94a, ZKGB:82}.
\begin{algorithm}[Maximum strategy]\label{A:maximum_strategy}
\begin{algotab}
\> Given parameter $\gamma \in (0,1)$ \\[2mm]
\> $\eta_{\rm max}$ := max($\eta_S$, $S\in\tria_k$) \\
\> for all $S$ in $\tria_k$ do\\
\> \> if $\eta_S > \gamma\,\eta_{\rm max} $ then mark $S$ for refinement\\
\> end for
\end{algotab}
\end{algorithm}
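A minimal C sketch of this marking procedure, assuming the local
indicators have been gathered into an array, might read as follows;
the arrays \code{eta} and \code{marked} are hypothetical and not tied
to any particular mesh data structure.
\begin{verbatim}
#include <stddef.h>

/* Maximum strategy: mark all S with eta_S > gamma * eta_max.
 * eta[i] holds the indicator of element i, marked[i] the flag. */
void mark_maximum(const double *eta, int *marked, size_t n,
                  double gamma)
{
  double eta_max = 0.0;
  size_t i;

  for (i = 0; i < n; i++)
    if (eta[i] > eta_max)
      eta_max = eta[i];

  for (i = 0; i < n; i++)
    marked[i] = (eta[i] > gamma * eta_max);
}
\end{verbatim}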
\paragraph{Equidistribution strategy:}
\idx{equidistribution strategy}
\idx{adaptive methods!equidistribution strategy}
Let $N_k$ be the number of mesh elements in $\tria_k$. If we assume
that the error indicators are equidistributed over all elements, i.e.\
$\eta_S = \eta_{S'}$ for all $S,S'\in\tria_k$, then
\[
\eta ~=~ \left( \sum_{S \in \tria_k} \eta_S^p \right)^{1/p}
~=~ N_k^{1/p} \, \eta_S ~\stackrel{!}{=}~ \tol
\qquad\mbox{ and }\quad
\eta_S ~=~ \frac{\tol}{N_k^{1/p}}\,.
\]
We can try to reach this equidistribution by refining all elements
where it is violated because the error indicator is larger than $\tol
/ N_k^{1/p}$. To make the procedure more robust, a parameter $\theta
\in (0,1)$, $\theta \approx 1$, is included in the method.
\begin{algorithm}[Equidistribution strategy \cite{ErikssonJohnson:91}]
\label{A:equidistribution_strategy}
\begin{algotab}
\> Start with parameter $\theta \in (0,1)$, $\theta \approx 1$ \\[2mm]
\> for all $S$ in $\tria_k$ do\\
\> \> if $\eta_S > \theta \tol / N_k^{1/p}$ then mark $S$ for refinement\\
\> end for
\end{algotab}
\end{algorithm}
If the error $\eta$ is already near $\tol$, then the choice $\theta=1$
leads to the selection of only very few elements for refinement, which
results in more iterations of the adaptive process. Thus, $\theta$
should be chosen smaller than $1$, for example $\theta=0.9$.
Additionally, this accounts for the fact that the number
of mesh elements increases, i.e.\ $N_{k+1} > N_k$, and
thus the tolerance for local errors will be smaller after refinement.
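Under the same assumptions as above (local indicators stored in an
array), a C sketch of Algorithm \ref{A:equidistribution_strategy}
might look like this; \code{pow} from the standard math library
computes $N_k^{1/p}$.
\begin{verbatim}
#include <math.h>
#include <stddef.h>

/* Equidistribution strategy: mark all S with
 * eta_S > theta * tol / N^(1/p).                              */
void mark_equidistribution(const double *eta, int *marked,
                           size_t n, double tol, double theta,
                           double p)
{
  const double local_tol = theta * tol / pow((double)n, 1.0 / p);
  size_t i;

  for (i = 0; i < n; i++)
    marked[i] = (eta[i] > local_tol);
}
\end{verbatim}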
\paragraph{Guaranteed error reduction strategy:}
\idx{guaranteed error reduction strategy}
\idx{adaptive methods!guaranteed error reduction strategy}
Usually, it is not clear whether the adaptive refinement strategy
Algorithm \ref{A:general_strategy} using a marking strategy (other
than global refinement) will converge and stop. D\"orfler
\cite{Doerfler:96a} describes a strategy with a guaranteed error
reduction for the Poisson equation within a given tolerance.
We need the assumptions that
\begin{itemize}
\item[-] given data of the problem (like the right hand side) is
sufficiently resolved by the initial mesh $\tria_0$ with respect
to the given tolerance (such that, for
example, errors from the numerical quadrature are negligible),
\item[-] all edges of marked mesh elements are at least bisected by the
refinement procedure (using regular refinement or two/three
iterated bisections of triangles/tetrahedra, for example).
\end{itemize}
The idea is to refine a subset of the triangulation whose element
errors sum up to a fixed amount of the total error $\eta$. Given a
parameter $\theta_* \in (0,1)$, the procedure is:
\[
\mbox{Mark a set } {\cal A}\subseteq\tria_k \mbox{ such that}
\quad \sum_{S \in {\cal A}} \eta_S^p \geq (1 - \theta_*)^p \eta^p \,.
\]
It follows from the assumptions that the error will be reduced by at
least a factor $\kappa < 1$ depending on $\theta_*$ and the data of the
problem. Selection of the set ${\cal A}$ can be done in the following
way. The maximum strategy threshold $\gamma$ is reduced in small steps
of size $\nu \in (0,1)$, $\nu \ll 1$, until the maximum strategy marks
a set which is large enough. This inner iteration is cheap in terms of
CPU time, as no discrete problems have to be solved.
\begin{algorithm}[Guaranteed error reduction strategy \cite{Doerfler:96a}]
\label{A:doerfler_strategy}
\begin{algotab}
\> Start with given parameters $\theta_* \in (0,1)$, $\nu \in (0,1)$ \\[2mm]
\> $\eta_{\rm max}$ := max($\eta_S$, $S\in\tria_k$) \\
\> sum := 0 \\
\> $\gamma$ := 1\\
\> while sum $< (1-\theta_*)^p \eta^p$ do\\
\> \> $\gamma$ := $\gamma - \nu$ \\
\> \> for all $S$ in $\tria_k$ do\\
\> \> \> if $S$ is not marked\\
\> \> \> \> if $\eta_S > \gamma \, \eta_{\rm max}$\\
\> \> \> \> \> mark $S$ for refinement\\
\> \> \> \> \> sum := sum + $\eta_S^p$\\
\> \> \> \> end if\\
\> \> \> end if\\
\> \> end for\\
\> end while
\end{algotab}
\end{algorithm}
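The inner iteration of Algorithm \ref{A:doerfler_strategy} can be
sketched in C as follows; again, the indicator array is a hypothetical
stand-in for a loop over the mesh elements, and the additional guard
\code{gamma > 0.0} makes the termination explicit.
\begin{verbatim}
#include <math.h>
#include <stddef.h>

/* GERS marking: lower the threshold gamma in steps of nu until
 * the marked elements satisfy
 *   sum_{marked} eta_S^p  >=  (1 - theta_star)^p * eta^p.     */
void mark_gers(const double *eta, int *marked, size_t n,
               double eta_glob, double p,
               double theta_star, double nu)
{
  const double target = pow((1.0 - theta_star) * eta_glob, p);
  double eta_max = 0.0, gamma = 1.0, sum = 0.0;
  size_t i;

  for (i = 0; i < n; i++) {
    marked[i] = 0;
    if (eta[i] > eta_max)
      eta_max = eta[i];
  }
  while (sum < target && gamma > 0.0) { /* explicit guard      */
    gamma -= nu;
    for (i = 0; i < n; i++)
      if (!marked[i] && eta[i] > gamma * eta_max) {
        marked[i] = 1;
        sum += pow(eta[i], p);
      }
  }
}
\end{verbatim}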
Using the above algorithm, D\"orfler \cite{Doerfler:95} describes a
robust adaptive strategy also for the {\em nonlinear} Poisson equation
$ -\Delta u = f(u) $. It is based on a~posteriori error estimates and
a posteriori saturation criteria for the approximation of the
nonlinearity.
\begin{remark}
Using this GERS strategy and an additional marking of elements due
to data approximation, Morin, Nochetto, and Siebert
\cite{MNS:00,MNS:02,MNS:03} could remove the assumption that data is
sufficiently resolved on $\tria_0$ in order to prove convergence.
The result is a simple and efficient adaptive finite element method
for linear elliptic PDEs with a linear rate of convergence without
any preliminary mesh adaptation.
\end{remark}
\paragraph{Other refinement strategies:}
Babu{\v s}ka and Rheinboldt \cite{BabuskaRheinboldt:78} describe an
extrapolation strategy, which estimates the local error decay. Using
this estimate, refinement of elements is done when the actual local
error is larger than the biggest expected local error {\em after
refinement}.
Jarausch \cite{Jarausch:86} describes a strategy which generates
quasi--optimal meshes. It is based on an optimization procedure
involving the increase of a cost function during refinement and the
profit while minimizing an energy functional.
For special applications, additional information may be generated by
the error estimator and used by the adaptive strategy. This includes
(anisotropic) directional refinement of elements
\cite{KornhuberRoitzsch:90,Siebert:96}, or the decision of
local $h$-- or $p$--enrichment of the finite element space
\cite{DORH:89,SchmidtSiebert:00a}.
\subsection{Coarsening strategies}%
\label{S:coarsening_strategies}%
\idx{coarsening strategies}%
\idx{adaptive methods!coarsening strategies}
Up to now, we have presented only refinement strategies. Practical
experience indicates that for linear elliptic problems, no more is
needed to generate a quasi--optimal mesh with nearly equidistributed
local errors.
In time dependent problems, the regions where large local errors are
produced can move in time. In stationary nonlinear problems, a bad
resolution of the solution on coarse meshes may lead to some local
refinement where it is not needed for the final solution, and the mesh
could be coarsened again. Both situations result in the need to
coarsen the mesh at some places in order to keep the number of
unknowns small.
Coarsening of the mesh can produce additional errors. Assuming that
these are bounded by an a posteriori estimate $\eta_{c,S}$, we can
take this into account during the marking procedure.
Some of the refinement strategies described above can also be used to
mark mesh elements for coarsening. Actually, elements will only be
coarsened if all neighbour elements which are affected by the
coarsening process are marked for coarsening, too. This makes sure
that only elements where the error is small enough are coarsened, and
motivates the coarsening algorithm in Section \ref{S:coarsening_algorithm}.
The main concept for coarsening is again the equidistribution of local
errors mentioned above. Only elements with a very small local error
estimate are marked for coarsening. On the other hand, such a
coarsening tolerance should be small enough that the local error
\emph{after coarsening} is not larger than the tolerance used
for refinement. If the error after coarsening gets larger than this
value, the elements would be directly refined again in the next
iteration (which may lead to an oscillating sequence of grids that
never meets the desired criterion).
Usually, an upper bound $\mu$ for the power with which the local error
estimate depends on the local mesh size is known, which can be used to determine
the coarsening tolerance: if
\[
\eta_S ~\leq~ c h_S^\mu,
\]
then coarsening by undoing $b$ bisections will enlarge the local error
by a factor smaller than $ 2^{\mu b/\code{DIM}} $, such that the local
coarsening tolerance $\tol_c$ should be chosen such that
\[
\tol_c ~\leq~ \frac{\tol_r}{2^{\mu b/\code{DIM}}},
\]
where $\tol_r$ is the local refinement tolerance.
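In C, this relation between refinement and coarsening tolerance is a
one-liner; the function below is a hypothetical helper, with
\code{dim} playing the role of \code{DIM}.
\begin{verbatim}
#include <math.h>

/* Coarsening tolerance: undoing b bisections enlarges the local
 * error by a factor of at most 2^(mu*b/dim), so divide the
 * refinement tolerance by that factor.                        */
double coarsening_tolerance(double tol_r, double mu, int b, int dim)
{
  return tol_r / pow(2.0, mu * (double)b / (double)dim);
}
\end{verbatim}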
\paragraph{Maximum strategy:}
Given two parameters $\gamma > \gamma_c$, mark all elements $S$ with
\[
\eta_S^p > \gamma \max_{S'\in \tria_k} \eta_{S'}^p
\]
for refinement, and all elements $S$ with
\[
\eta_S^p + \eta_{c,S}^p \leq \gamma_c \max_{S'\in \tria_k} \eta_{S'}^p
\]
for coarsening.
\paragraph{Equidistribution strategy:}
Equidistribution of the tolerated error $\tol$ leads to
\[
\eta_S \approx \frac{\tol}{N_k^{1/p}} \qquad \mbox{for all } S \in \tria.
\]
If the local error at an element is considerably smaller than this
mean value, we may coarsen the element without producing an error that
is too large.
All elements with
\[
\eta_S > \theta \, \frac{\tol}{N_k^{1/p}}
\]
are marked for refinement, while all elements with
\[
\eta_S + \eta_{c,S} \leq \theta_c \, \frac{\tol}{N_k^{1/p}}
\]
are marked for coarsening.
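Extending the equidistribution sketch from above by this coarsening
test gives, for example, the following C fragment; the convention
\code{+1}/\code{-1}/\code{0} for refine/coarsen/keep is our own
choice, not a prescribed interface.
\begin{verbatim}
#include <math.h>
#include <stddef.h>

/* marked[i] = +1: refine, -1: coarsen, 0: keep.
 * eta_c[i] estimates the additional error from coarsening.    */
void mark_equi_coarsen(const double *eta, const double *eta_c,
                       int *marked, size_t n, double tol,
                       double p, double theta, double theta_c)
{
  const double local_tol = tol / pow((double)n, 1.0 / p);
  size_t i;

  for (i = 0; i < n; i++) {
    if (eta[i] > theta * local_tol)
      marked[i] = 1;
    else if (eta[i] + eta_c[i] <= theta_c * local_tol)
      marked[i] = -1;
    else
      marked[i] = 0;
  }
}
\end{verbatim}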
\paragraph{Guaranteed error reduction strategy:}
Similar to the refinement in Algorithm \ref{A:doerfler_strategy},
D\"orfler \cite{Doerfler:96b} describes a marking strategy for
coarsening. Again, the idea is to coarsen a subset of the
triangulation such that the additional error after coarsening is not
larger than a fixed amount of the given tolerance $\tol$. Given a
parameter $\theta_c \in (0,1)$, the procedure is:
\[
\mbox{Mark a set } {\cal B}\subseteq\tria_k \mbox{ such that}
\quad \sum_{S \in {\cal B}} \left( \eta_S^p + \eta_{c,S}^p \right)
\leq \theta_c^p \eta^p \,.
\]
The selection of the set $\cal B$ can be done similarly to Algorithm
\ref{A:doerfler_strategy}.
\begin{remark}
When local $h$-- and $p$--enrichment and coarsening of the finite
element space is used, then the threshold $\theta_c$ depends on the
local degree of finite elements. Thus, local thresholds $\theta_{c,S}$
have to be used.
\end{remark}
\paragraph{Handling information loss during coarsening.}
Usually, some information is irreversibly destroyed during coarsening
of parts of the mesh, compare Section \ref{man:S:DOF_INTERPOL}. If the
adaptive procedure iterates several times, it may occur that elements
which were marked for coarsening in the beginning should not be
coarsened in the end. If the mesh was already coarsened, an error is
produced which cannot be reduced anymore.
One possibility to circumvent such problems is to delay the mesh
coarsening until the final iteration of the adaptive procedure,
allowing only refinements before. If the coarsening marking strategy
is not too liberal ($\theta_c$ not too large), this should keep the
error below the given bound.
D\"orfler \cite{Doerfler:96b} proposes to keep all information until
it is clear, after solving and by estimating the error on a (virtually)
coarsened mesh, that the coarsening does not lead to an error which is
too large.
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsection{Adaptive methods for time dependent problems}%
\label{S:time_adaptive_strategies}%
\idx{adaptive methods!time dependent problems}
In time dependent problems, the mesh is adapted to the solution in
every time step using a~posteriori error estimators or indicators.
This may be accompanied by an adaptive control of time step sizes,
see below.
B\"ansch \cite{Baensch:93} lists several different adaptive procedures
(in space) for time dependent problems:
\idx{strategies for time dependent problems}
\idx{adaptive methods!strategies for time dependent problems}
\begin{itemize}
\item {\bf Explicit strategy:} The current time step is solved once on
the mesh from the previous time step, giving the solution $\uh$.
Based on a posteriori estimates of $\uh$, the mesh is locally refined
and coarsened. The problem is \emph{not} solved again on the new
mesh, and the solve--estimate--adapt process is \emph{not} iterated.
\\
This strategy is only usable when the solution is nearly stationary
and does not change much in time, or when the time step size is very
small. Usually, a given tolerance for the error cannot be guaranteed
with this strategy.
\item {\bf Semi--implicit strategy:} The current time step is solved
once on the mesh from the previous time step, giving an intermediate
solution $\tilde u_h$. Based on a posteriori estimates of $\tilde
u_h$, the mesh is locally refined and coarsened. This produces the
final mesh for the current time step, where the discrete solution
$u_h$ is computed. The solve--estimate--adapt process is {\em not}
iterated.
\\
This strategy works quite well if the time steps are not so large
that regions of refinement move too fast.
\item {\bf Implicit strategy A:} In every time step starting from the
previous time step's triangulation, a mesh is generated using local
refinement and coarsening based on a posteriori estimates of a
solution which is calculated on the current mesh. This
solve--estimate--adapt process is iterated until the estimated error is
below the given bound.
\\
This guarantees that the estimated error is below the given
bound. Together with an adaptive control of the time step size, this
leads to global (in time) error bounds. If the time step size is not
too large, the number of iterations of the solve--estimate--adapt
process is usually very small.
\item {\bf Implicit strategy B:} In every time step starting from the
macro triangulation, a mesh is generated using local refinements
based on a posteriori estimates of a solution which is calculated on
the current (maybe quite coarse) mesh; no mesh coarsening is
needed. This solve--estimate--adapt process is iterated until the
estimated error is below the given bound.
\\
Like implicit strategy A, this guarantees error bounds. As
the initial mesh for every time step is very coarse, the number of
iterations of the solve--estimate--adapt process becomes quite
large, and thus the algorithm might become expensive. On the other
hand, a solution on a coarse grid is fast and can be used as a good
initial guess for finer grids, which is usually better than using
the solution from the old time step.
\\
Implicit strategy B can also be used with anisotropically refined
triangular meshes, see \cite{FLR:96}. As coarsening of anisotropic
meshes and changes of the anisotropy direction are still open
problems, this implies that the implicit strategy A cannot be used
in this context.
\end{itemize}
The following algorithm implements one time step of the implicit strategy
A. The adaptive algorithm ensures that
the mesh refinement/coarsening is done at least once in each time
step, even if the error estimate is below the limit. Nevertheless, the
error might not be equally distributed over all elements; for some
simplices the local error estimates might be bigger than allowed.
\begin{algorithm}[Implicit strategy A]\label{A:implicit-strat-A}
\begin{algotab}
\> Start with given parameters $\tol$ and time step size $\tau$,\\
\> the solution $u_n$ from the previous time step on grid $\tria_n$\\[2mm]
\> $\tria_{n+1}$ := $\tria_n$\\
\> solve the discrete problem for $u_{n+1}$ on $\tria_{n+1}$ using
data $u_n$\\
\> compute error estimates on $\tria_{n+1}$\\[1mm]
\> do\\
\> \> mark elements for refinement or coarsening\\
\> \> if elements are marked then\\
\> \> \> adapt mesh $\tria_{n+1}$ producing a modified $\tria_{n+1}$\\
\> \> \> solve the discrete problem for $u_{n+1}$ on $\tria_{n+1}$ using
data $u_n$\\
\> \> \> compute error estimates on $\tria_{n+1}$\\
\> \> end if\\
\> while $\eta > \tol$
\end{algotab}
\end{algorithm}
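One time step of this strategy can again be sketched in C; as before,
all functions are hypothetical application callbacks, and the
\code{break} makes explicit that the loop stops once no elements are
marked.
\begin{verbatim}
/* One time step of implicit strategy A; all functions are
 * assumed callbacks of the application.                       */
extern void   keep_previous_mesh(void); /* T_{n+1} := T_n      */
extern void   solve_timestep(void);     /* u_{n+1} from u_n    */
extern double estimate(void);           /* global estimate eta */
extern int    mark(void);               /* returns #marked     */
extern void   adapt_mesh(void);

void implicit_strategy_A(double tol)
{
  double eta;

  keep_previous_mesh();
  solve_timestep();
  eta = estimate();
  do {                          /* mark/adapt at least once    */
    if (mark() == 0)
      break;                    /* nothing marked: stop        */
    adapt_mesh();
    solve_timestep();
    eta = estimate();
  } while (eta > tol);
}
\end{verbatim}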
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
\subsubsection{Adaptive control of the time step size}%
\label{S:TimeStepControl}%
\idx{time step size control}%
\idx{adaptive methods!time step size control}
A posteriori error estimates for parabolic problems usually consist of
four different types of terms:
\begin{descr}
\item terms estimating the initial error;
\item terms estimating the error from discretization in space;
\item terms estimating the error from mesh coarsening between time steps;
\item terms estimating the error from discretization in time.
\end{descr}
Thus, the total estimate can be split into parts
\[
\eta_0, ~ \eta_h, ~ \eta_c, ~\mbox{and}~ \eta_\tau
\]
estimating these four different error parts.
\paragraph{Example:}
Eriksson and Johnson \cite{ErikssonJohnson:91} prove an a posteriori
error estimate for the discontinuous Galerkin time discretization of
the heat equation
\[
u_t - \Delta u = f ~\mbox{ in }\Omega, \quad
u_{|_{\partial\Omega}}=0, \quad u_{|_{t=0}}=u_0;
\]
the error estimate for piecewise constant time discretization and
piecewise linear discretization in space is given by
\begin{eqnarray*}
\|u(t_N) - U_N\| &\leq&
\| u_0 - U_0 \| +
\max_{1\leq n\leq N} \Bigg(
C_1 \| h_n^2 f \|_{L^\infty(I_n, L^2(S))} + C_2 \int_{I_n} \|f\|\,dt
\\&&
\qquad
+ C_3 \Big( \sum_{e\in E_n}
h_e^3 \Big|\Big[\frac{\partial U_n}{\partial\nu_e}\Big]\Big|^2\Big)^{1/2}
+\,C_4 \| U_n - U_{n-1} \|
+ C_5 \Big\|h_n^2 \frac{[U_{n-1}]}{\tau_n} \Big\| \Bigg),
\end{eqnarray*}
where $U_n$ is the discrete solution on $I_n := (t_{n-1},t_n)$,
$\tau_n = t_n - t_{n-1}$ is the $n^{\mathrm{th}}$ time step size,
$[\cdot]$ denotes jumps over edges or between time intervals, and
$\|\cdot\|$ denotes the norm in $L^2(\Omega)$. The last term $C_5
\|\dots\|$ is present only in case of mesh coarsening. The constants
$C_i$ depend on the time $t_N$ and the size of the last time step:
$C_i = C_i(\log(\frac{t_N}{\tau_N}))$.
This leads to the following error estimator parts:
\begin{eqnarray*}
\eta_0 &=& \| u_0 - U_0 \|,
\\
\eta_h &=& \sum_{S\in\tria_n} \bigg(
\tilde C_1 \| h_n^2 f \|_{L^\infty(I_n, L^2(S))}
+ \tilde C_3 \Big(\frac12 \sum_{e\subset\partial S} h_e^3
\Big|\Big[\frac{\partial U_n}{\partial\nu_e}\Big]\Big|^2\Big)^{1/2} \bigg),
\\
\eta_{c} &=& \sum_{S\in\tria_n} \bigg(
\tilde C_5 \Big\| h_n^2 \frac{[U_{n-1}]}{\tau_n} \Big\|_{L^2(S)} \bigg),
\\
\eta_\tau &=& \tau_n \bigg( C_2 \|f\|_{L^\infty(I_n, L^2(\Omega))}
+ C_4 \Big\|\frac{U_n - U_{n-1}}{\tau_n} \Big\| \bigg).
\end{eqnarray*}
When a bound $\tol$ is given for the total error produced in each time
step, the widely used strategy is to allow one fixed portion $\Gamma_0
\,\tol$ to be produced by the discretization of initial data, a
portion $\Gamma_h \,\tol$ to be produced by the spatial
discretization, and another portion $\Gamma_\tau \,\tol$ of the error
to be produced by the time discretization, with $\Gamma_0 + \Gamma_h +
\Gamma_\tau \leq 1.0$. The adaptive procedure now tries to adjust
time step sizes and meshes such that
\[
\eta_0 \approx \Gamma_0 \,\tol
\]
and in every time step
\[
\eta_\tau \approx \Gamma_\tau \,\tol \quad \mbox{and} \quad
\eta_h + \eta_c \approx \Gamma_h \,\tol\,.
\]
The adjustment of the time step size can be done via extrapolation
techniques known from numerical methods for ordinary differential
equations, or iteratively: The algorithm starts from the previous time
step size $\tau_{\rm old}$ or from an initial guess. A parameter
$\delta_1 \in (0,1)$ is used to reduce the step size until the
estimate is below the given bound. If the error is smaller than the
bound, the step size is enlarged by a factor $\delta_2 > 1$ (usually
depending on the order of the time discretization). In this case, the
actual time step is not recalculated, only the initial step size for
the next time step is changed. Two additional parameters
$\theta_1\in(0,1)$, $\theta_2\in(0,\theta_1)$ are used to keep the
algorithm robust, just as in the equidistribution strategy for the
mesh adaption.
If $\delta_1 \approx 1$, consecutive time steps may vary only
slightly, but the number of iterations for getting the new accepted
time step may increase. Again, as each iteration includes the solution
of a discrete problem, this value should not be chosen too large. For
a first order time discretization scheme, a common choice is $\delta_1
\approx 1/\sqrt{2}$.
\begin{algorithm}[Time step size control]\label{A:Alg.timestepsize}
\begin{algotab}
\> Start with parameters $\delta_1 \in (0,1)$, $\delta_2 > 1$,
$\theta_1\in(0,1)$, $\theta_2\in(0,\theta_1)$\\[2mm]
\> $\tau$ := $\tau_{\rm old}$ \\
\> Solve time step problem and estimate the error\\
\> while $\eta_\tau > \theta_1 \,\Gamma_\tau \,\tol$ do\\
\> \> $\tau$ := $\delta_1\,\tau$\\
\> \> Solve time step problem and estimate the error\\
\> end while \\
\> if $\eta_\tau \leq \theta_2 \,\Gamma_\tau \,\tol$ then\\
\> \> $\tau$ := $\delta_2\,\tau$\\
\> end if
\end{algotab}
\end{algorithm}
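A compact C version of this control might look as follows;
\code{solve\_and\_estimate()} is a hypothetical callback that solves
the time step problem with step size \code{tau} and returns the
temporal estimate $\eta_\tau$, and \code{tol\_tau} stands for the
product $\Gamma_\tau\,\tol$.
\begin{verbatim}
/* Time step size control; returns the step size to start the
 * next time step with.  solve_and_estimate() is an assumed
 * callback returning eta_tau for the given tau.               */
extern double solve_and_estimate(double tau);

double control_timestep(double tau, double tol_tau,
                        double delta1, double delta2,
                        double theta1, double theta2)
{
  double eta_tau = solve_and_estimate(tau);

  while (eta_tau > theta1 * tol_tau) { /* reject: shrink tau   */
    tau *= delta1;
    eta_tau = solve_and_estimate(tau);
  }
  if (eta_tau <= theta2 * tol_tau)     /* accept, but start    */
    tau *= delta2;                     /* next step enlarged   */
  return tau;
}
\end{verbatim}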
\idx{time and space adaptive strategy}
\idx{adaptive methods!time and space adaptive strategy}
The above algorithm controls only the time step size, but does not
show the mesh adaption. There are several possibilities to combine
both controls. An inclusion of the grid adaption in every iteration
of Algorithm \ref{A:Alg.timestepsize} can result in a large number of
discrete problems to solve, especially if the time step size is
reduced more than once. A better procedure is first to do the step
size control with the old mesh, then adapt the mesh, and after this
check the time error again. In combination with the implicit strategy A,
this procedure leads to the following algorithm for one single
time step:
\begin{algorithm}[Time and space adaptive algorithm]
\label{A:time-space-algo}
\begin{algotab}
\> Start with given parameter $\tol$, $\delta_1 \in (0,1)$, $\delta_2 > 1$,
$\theta_1\in(0,1)$, $\theta_2\in(0,\theta_1)$,\\
\> the solution $u_n$ from the previous time step on grid $\tria_n$
at time $t_n$\\
\> with time step size $\tau_n$\\[2mm]
\> $\tria_{n+1}$ := $\tria_n$\\
\> $\tau_{n+1}$ := $\tau_n$\\
\> $t_{n+1}$ := $t_n + \tau_{n+1}$\\
\> solve the discrete problem for $u_{n+1}$ on $\tria_{n+1}$ using
data $u_n$\\
\> compute error estimates on $\tria_{n+1}$\\[2mm]
\> while $\eta_\tau > \theta_1 \,\Gamma_\tau \,\tol$\\
\> \> $\tau_{n+1}$ := $\delta_1\,\tau_{n+1}$\\
\> \> $t_{n+1}$ := $t_n + \tau_{n+1}$\\
\> \> solve the discrete problem for $u_{n+1}$ on $\tria_{n+1}$ using
data $u_n$\\
\> \> compute error estimates on $\tria_{n+1}$\\
\> end while\\[2mm]
\> do\\
\> \> mark elements for refinement or coarsening\\
\> \> if elements are marked then\\
\> \> \> adapt mesh $\tria_{n+1}$ producing a modified $\tria_{n+1}$\\
\> \> \> solve the discrete problem for $u_{n+1}$ on $\tria_{n+1}$ using
data $u_n$\\
\> \> \> compute error estimates on $\tria_{n+1}$\\
\> \> end if\\
\> \> while $\eta_\tau > \theta_1 \,\Gamma_\tau \,\tol$\\
\> \> \> $\tau_{n+1}$ := $\delta_1\,\tau_{n+1}$\\
\> \> \> $t_{n+1}$ := $t_n + \tau_{n+1}$\\
\> \> \> solve the discrete problem for $u_{n+1}$ on $\tria_{n+1}$ using
data $u_n$\\
\> \> \> compute error estimates on $\tria_{n+1}$\\
\> \> end while\\
\> while $\eta_h > \tol$\\[2mm]
\> if $\eta_\tau \leq \theta_2 \,\Gamma_\tau \,\tol$ then\\
\> \> $\tau_{n+1}$ := $\delta_2\,\tau_{n+1}$\\
\> end if
\end{algotab}
\end{algorithm}
The adaptive a posteriori approach can be extended to the adaptive
choice of the order of the time discretization: Bornemann
\cite{Bornemann:90,Bornemann:91,Bornemann:92} describes an adaptive
variable order time discretization method, combined with implicit
strategy B using the extrapolation marking strategy for the mesh
adaption.
\idx{adaptive methods!concepts|)}
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
%%% Local Variables:
%%% mode: latex
%%% TeX-master: "alberta-book"
%%% End: