.. _ch_snes:

SNES: Nonlinear Solvers
-----------------------

The solution of large-scale nonlinear problems pervades many facets of
computational science and demands robust and flexible solution
strategies. The ``SNES`` library of PETSc provides a powerful suite of
data-structure-neutral numerical routines for such problems. Built on
top of the linear solvers and data structures discussed in preceding
chapters, ``SNES`` enables the user to easily customize the nonlinear
solvers according to the application at hand. Also, the ``SNES``
interface is *identical* for the uniprocess and parallel cases; the only
difference in the parallel version is that each process typically forms
only its local contribution to various matrices and vectors.

The ``SNES`` class includes methods for solving systems of nonlinear
equations of the form

.. math::
   :label: fx0

   \mathbf{F}(\mathbf{x}) = 0,

where :math:`\mathbf{F}: \, \Re^n \to \Re^n`. Newton-like methods provide the
core of the package, including both line search and trust region
techniques. A suite of nonlinear Krylov methods and methods based upon
problem decomposition are also included. The solvers are discussed
further in :any:`sec_nlsolvers`. Following the PETSc design
philosophy, the interfaces to the various solvers are all virtually
identical. In addition, the ``SNES`` software is completely flexible, so
that the user can at runtime change any facet of the solution process.

PETSc’s default method for solving the nonlinear equation is Newton’s
method with line search, ``SNESNEWTONLS``. The general form of the :math:`n`-dimensional Newton’s method
for solving :math:numref:`fx0` is

.. math::
   :label: newton

   \mathbf{x}_{k+1} = \mathbf{x}_k - \mathbf{J}(\mathbf{x}_k)^{-1} \mathbf{F}(\mathbf{x}_k), \;\; k=0,1, \ldots,

where :math:`\mathbf{x}_0` is an initial approximation to the solution and
:math:`\mathbf{J}(\mathbf{x}_k) = \mathbf{F}'(\mathbf{x}_k)`, the Jacobian, is nonsingular at each
iteration. In practice, the Newton iteration :math:numref:`newton` is
implemented by the following two steps:

.. math::

   \begin{aligned}
   1. & \text{(Approximately) solve} & \mathbf{J}(\mathbf{x}_k) \Delta \mathbf{x}_k &= -\mathbf{F}(\mathbf{x}_k). \\
   2. & \text{Update} & \mathbf{x}_{k+1} &\gets \mathbf{x}_k + \Delta \mathbf{x}_k.
   \end{aligned}

Other defect-correction algorithms can be implemented by using different
choices for :math:`\mathbf{J}(\mathbf{x}_k)`.

.. _sec_snesusage:

Basic SNES Usage
~~~~~~~~~~~~~~~~

In the simplest usage of the nonlinear solvers, the user must merely
provide a C, C++, Fortran, or Python routine to evaluate the nonlinear function
:math:numref:`fx0`. The corresponding Jacobian matrix
can be approximated with finite differences. Providing a routine to
compute the Jacobian typically yields code that is more efficient and
accurate; details regarding these application-provided
routines are discussed below. To provide an overview of the use of the
nonlinear solvers, browse the concrete example in :ref:`ex1.c <snes-ex1>` or skip ahead to the discussion.

.. _snes-ex1:
.. admonition:: Listing: ``src/snes/tutorials/ex1.c``

   .. literalinclude:: /../src/snes/tutorials/ex1.c
      :end-before: /*TEST

To create a ``SNES`` solver, one must first call ``SNESCreate()`` as
follows:

.. code-block::

   SNESCreate(MPI_Comm comm,SNES *snes);

The user must then set routines for evaluating the residual function :math:numref:`fx0`
and, *possibly*, its associated Jacobian matrix, as
discussed in the following sections.

To choose a nonlinear solution method, the user can either call

.. code-block::

   SNESSetType(SNES snes,SNESType method);

or use the option ``-snes_type <method>``, where details regarding the
available methods are presented in :any:`sec_nlsolvers`. The
application code can take complete control of the linear and nonlinear
techniques used in the Newton-like method by calling

.. code-block::

   SNESSetFromOptions(snes);

This routine provides an interface to the PETSc options database, so
that at runtime the user can select a particular nonlinear solver, set
various parameters and customized routines (e.g., specialized line
search variants), prescribe the convergence tolerance, and set
monitoring routines. With this routine the user can also control all
linear solver options in the ``KSP`` and ``PC`` modules, as discussed
in :any:`ch_ksp`.

After having set these routines and options, the user solves the problem
by calling

.. code-block::

   SNESSolve(SNES snes,Vec b,Vec x);

where ``x`` should be initialized to the initial guess before calling and contains the solution on return.
In particular, to employ an initial guess of
zero, the user should explicitly set this vector to zero by calling
``VecZeroEntries(x)``. Finally, after solving the nonlinear system (or several
systems), the user should destroy the ``SNES`` context with

.. code-block::

   SNESDestroy(SNES *snes);
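
For orientation, the calls above combine into the following minimal sketch of a
typical calling sequence (``FormFunction()``, ``FormJacobian()``, and the context
``user`` are user-supplied, and ``x``, ``r``, and ``J`` are assumed to have been
created elsewhere with the proper sizes):

.. code-block::

   SNES snes;

   SNESCreate(PETSC_COMM_WORLD, &snes);
   SNESSetFunction(snes, r, FormFunction, &user);
   SNESSetJacobian(snes, J, J, FormJacobian, &user);
   SNESSetFromOptions(snes);   /* allow runtime customization of the solver */
   VecZeroEntries(x);          /* initial guess of zero */
   SNESSolve(snes, NULL, x);   /* x holds the solution on return */
   SNESDestroy(&snes);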

.. _sec_snesfunction:

Nonlinear Function Evaluation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

When solving a system of nonlinear equations, the user must provide a
residual function :math:numref:`fx0`, which is set using

.. code-block::

   SNESSetFunction(SNES snes,Vec f,PetscErrorCode (*FormFunction)(SNES snes,Vec x,Vec f,void *ctx),void *ctx);

The argument ``f`` is an optional vector for storing the function (residual) values; pass ``NULL`` to have ``SNES`` allocate it for you.
The argument ``ctx`` is an optional user-defined context, which can
store any private, application-specific data required by the function
evaluation routine; ``NULL`` should be used if such information is not
needed. In C and C++, a user-defined context is merely a structure in
which various objects can be stashed; in Fortran a user context can be
an integer array that contains both parameters and pointers to PETSc
objects.
`SNES Tutorial ex5 <PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex5.c.html>`__
and
`SNES Tutorial ex5f90 <PETSC_DOC_OUT_ROOT_PLACEHOLDER/src/snes/tutorials/ex5f90.F90.html>`__
give examples of user-defined application contexts in C and Fortran,
respectively.
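
For illustration only, a residual callback and an accompanying user context might
look like the following sketch (the ``AppCtx`` fields and the algebraic system
:math:`f_i(x) = x_i^2 - \alpha` are invented for this example, and ``petscsnes.h``
is assumed to be included):

.. code-block::

   typedef struct {
     PetscReal alpha; /* hypothetical problem parameter */
   } AppCtx;

   PetscErrorCode FormFunction(SNES snes, Vec x, Vec f, void *ctx)
   {
     AppCtx            *user = (AppCtx *)ctx;
     const PetscScalar *xx;
     PetscScalar       *ff;
     PetscInt           i, n;

     PetscFunctionBeginUser;
     PetscCall(VecGetLocalSize(x, &n));
     PetscCall(VecGetArrayRead(x, &xx));
     PetscCall(VecGetArray(f, &ff));
     for (i = 0; i < n; i++) ff[i] = xx[i] * xx[i] - user->alpha; /* f_i(x) = x_i^2 - alpha */
     PetscCall(VecRestoreArrayRead(x, &xx));
     PetscCall(VecRestoreArray(f, &ff));
     PetscFunctionReturn(PETSC_SUCCESS);
   }

It would then be registered with ``SNESSetFunction(snes, r, FormFunction, &user)``.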

.. _sec_snesjacobian:

Jacobian Evaluation
^^^^^^^^^^^^^^^^^^^

The user may also specify a routine to form some approximation of the
Jacobian matrix, ``A``, at the current iterate, ``x``, as is typically
done with

.. code-block::

   SNESSetJacobian(SNES snes,Mat Amat,Mat Pmat,PetscErrorCode (*FormJacobian)(SNES snes,Vec x,Mat A,Mat B,void *ctx),void *ctx);

The arguments of the routine ``FormJacobian()`` are the current iterate,
``x``; the (approximate) Jacobian matrix, ``Amat``; the matrix from
which the preconditioner is constructed, ``Pmat`` (which is usually the
same as ``Amat``); and an optional user-defined Jacobian context,
``ctx``, for application-specific data. The ``FormJacobian()``
callback is only invoked if the solver requires it, always
*after* ``FormFunction()`` has been called at the current iterate.
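
Continuing the illustrative sketch above, a matching Jacobian callback (here a
simple diagonal matrix; purely an example, not a template that fits every problem)
could be written as:

.. code-block::

   PetscErrorCode FormJacobian(SNES snes, Vec x, Mat Amat, Mat Pmat, void *ctx)
   {
     const PetscScalar *xx;
     PetscInt           i, rstart, rend;
     PetscScalar        v;

     PetscFunctionBeginUser;
     PetscCall(MatGetOwnershipRange(Pmat, &rstart, &rend));
     PetscCall(VecGetArrayRead(x, &xx));
     for (i = rstart; i < rend; i++) {
       v = 2.0 * xx[i - rstart]; /* dF_i/dx_i for F_i = x_i^2 - alpha */
       PetscCall(MatSetValues(Pmat, 1, &i, 1, &i, &v, INSERT_VALUES));
     }
     PetscCall(VecRestoreArrayRead(x, &xx));
     PetscCall(MatAssemblyBegin(Pmat, MAT_FINAL_ASSEMBLY));
     PetscCall(MatAssemblyEnd(Pmat, MAT_FINAL_ASSEMBLY));
     if (Amat != Pmat) { /* e.g. when -snes_mf_operator is used */
       PetscCall(MatAssemblyBegin(Amat, MAT_FINAL_ASSEMBLY));
       PetscCall(MatAssemblyEnd(Amat, MAT_FINAL_ASSEMBLY));
     }
     PetscFunctionReturn(PETSC_SUCCESS);
   }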

Note that the ``SNES`` solvers
are all data-structure neutral, so the full range of PETSc matrix
formats (including “matrix-free” methods) can be used.
:any:`ch_matrices` discusses information regarding
available matrix formats and options, while :any:`sec_nlmatrixfree` focuses on matrix-free methods in
``SNES``. We briefly touch on a few details of matrix usage that are
particularly important for efficient use of the nonlinear solvers.

A common usage paradigm is to assemble the problem Jacobian in the
preconditioner storage ``B``, rather than ``A``. In the case where they
are identical, as in many simulations, this makes no difference.
However, it allows us to check the analytic Jacobian we construct in
``FormJacobian()`` by passing the ``-snes_mf_operator`` flag. This
causes PETSc to approximate the Jacobian using finite differencing of
the function evaluation (discussed in :any:`sec_fdmatrix`),
and the analytic Jacobian becomes merely the preconditioner. Even if the
analytic Jacobian is incorrect, it is likely that the finite difference
approximation will converge, and thus this is an excellent method to
verify the analytic Jacobian. Moreover, if the analytic Jacobian is
incomplete (some terms are missing or approximate),
``-snes_mf_operator`` may be used to obtain the exact solution, where
the Jacobian approximation has been transferred to the preconditioner.

One such approximate Jacobian comes from “Picard linearization” (see ``SNESSetPicard()``), which
writes the nonlinear system as

.. math:: \mathbf{F}(\mathbf{x}) := \mathbf{A}(\mathbf{x}) \mathbf{x} - \mathbf{b} = 0

where :math:`\mathbf{A}(\mathbf{x})` usually contains the lower-derivative parts of the
equation. For example, the nonlinear diffusion problem

.. math:: - \nabla\cdot(\kappa(u) \nabla u) = 0

would be linearized as

.. math:: A(u) v \simeq -\nabla\cdot(\kappa(u) \nabla v).

Usually this linearization is simpler to implement than Newton and the
linear problems are somewhat easier to solve. In addition to using
``-snes_mf_operator`` with this approximation to the Jacobian, the
Picard iterative procedure can be performed by defining :math:`\mathbf{J}(\mathbf{x})`
to be :math:`\mathbf{A}(\mathbf{x})`. Sometimes this iteration exhibits better global
convergence than Newton linearization.
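
A possible sketch of registering a Picard iteration is shown below; ``FormRHS()``
(computing :math:`\mathbf{b}(\mathbf{x})`) and ``FormA()`` (computing
:math:`\mathbf{A}(\mathbf{x})`) are hypothetical callback names, and the argument
order follows our reading of the ``SNESSetPicard()`` manual page, which should be
consulted for the precise callback semantics:

.. code-block::

   /* r is a work vector; A stores A(x) and here also serves as the preconditioning matrix */
   SNESSetPicard(snes, r, FormRHS, A, A, FormA, &user);
   SNESSetFromOptions(snes);
   SNESSolve(snes, NULL, x);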

During successive calls to ``FormJacobian()``, the user can either
insert new matrix contexts or reuse old ones, depending on the
application requirements. For many sparse matrix formats, reusing the
old space (and merely changing the matrix elements) is more efficient;
however, if the matrix nonzero structure completely changes, creating an
entirely new matrix context may be preferable. Upon subsequent calls to
the ``FormJacobian()`` routine, the user may wish to reinitialize the
matrix entries to zero by calling ``MatZeroEntries()``. See
:any:`sec_othermat` for details on the reuse of the matrix
context.

The directory ``$PETSC_DIR/src/snes/tutorials`` provides a variety of
examples.

Sometimes a nonlinear solver may produce a step that is not within the domain
of a given function, for example one with a negative pressure. When this occurs,
one can call ``SNESSetFunctionDomainError()`` or ``SNESSetJacobianDomainError()``
to indicate to ``SNES`` that the step is not valid. One must also use ``SNESGetConvergedReason()``
and check the reason to confirm whether the solver succeeded. See :any:`sec_vi` for how to
provide ``SNES`` with bounds on the variables to solve (differential) variational inequalities
and how to control properties of the computed line search step.
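
A sketch of how this might look, with a guarded residual callback and a post-solve
check (the pressure interpretation and the names are illustrative):

.. code-block::

   PetscErrorCode FormFunctionGuarded(SNES snes, Vec x, Vec f, void *ctx)
   {
     PetscReal pmin;

     PetscFunctionBeginUser;
     PetscCall(VecMin(x, NULL, &pmin));             /* suppose x holds pressures */
     if (pmin < 0.0) {
       PetscCall(SNESSetFunctionDomainError(snes)); /* tell SNES the step left the domain */
       PetscFunctionReturn(PETSC_SUCCESS);
     }
     /* ... evaluate f(x) as usual ... */
     PetscFunctionReturn(PETSC_SUCCESS);
   }

   /* after the solve, confirm that SNES actually succeeded */
   SNESConvergedReason reason;
   SNESSolve(snes, NULL, x);
   SNESGetConvergedReason(snes, &reason);
   if (reason < 0) { /* diverged, e.g. SNES_DIVERGED_FUNCTION_DOMAIN */ }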

.. _sec_nlsolvers:

The Nonlinear Solvers
~~~~~~~~~~~~~~~~~~~~~

As summarized in Table :any:`tab-snesdefaults`, ``SNES`` includes
several Newton-like nonlinear solvers based on line search techniques
and trust region methods. Also provided are several nonlinear Krylov
methods, as well as nonlinear methods involving decompositions of the
problem.

Each solver may have associated with it a set of options, which can be
set with routines and options database commands provided for this
purpose. A complete list can be found by consulting the manual pages or
by running a program with the ``-help`` option; we discuss just a few in
the sections below.

.. list-table:: PETSc Nonlinear Solvers
   :name: tab-snesdefaults
   :header-rows: 1

   * - Method
     - SNESType
     - Options Name
     - Default Line Search
   * - Line Search Newton
     - ``SNESNEWTONLS``
     - ``newtonls``
     - ``SNESLINESEARCHBT``
   * - Trust region Newton
     - ``SNESNEWTONTR``
     - ``newtontr``
     - —
   * - Newton with Arc Length Continuation
     - ``SNESNEWTONAL``
     - ``newtonal``
     - —
   * - Nonlinear Richardson
     - ``SNESNRICHARDSON``
     - ``nrichardson``
     - ``SNESLINESEARCHL2``
   * - Nonlinear CG
     - ``SNESNCG``
     - ``ncg``
     - ``SNESLINESEARCHCP``
   * - Nonlinear GMRES
     - ``SNESNGMRES``
     - ``ngmres``
     - ``SNESLINESEARCHL2``
   * - Quasi-Newton
     - ``SNESQN``
     - ``qn``
     - see :any:`tab-qndefaults`
   * - Full Approximation Scheme
     - ``SNESFAS``
     - ``fas``
     - —
   * - Nonlinear ASM
     - ``SNESNASM``
     - ``nasm``
     - –
   * - ASPIN
     - ``SNESASPIN``
     - ``aspin``
     - ``SNESLINESEARCHBT``
   * - Nonlinear Gauss-Seidel
     - ``SNESNGS``
     - ``ngs``
     - –
   * - Anderson Mixing
     - ``SNESANDERSON``
     - ``anderson``
     - –
   * -  Newton with constraints (1)
     - ``SNESVINEWTONRSLS``
     - ``vinewtonrsls``
     - ``SNESLINESEARCHBT``
   * -  Newton with constraints (2)
     - ``SNESVINEWTONSSLS``
     - ``vinewtonssls``
     - ``SNESLINESEARCHBT``
   * - Multi-stage Smoothers
     - ``SNESMS``
     - ``ms``
     - –
   * - Composite
     - ``SNESCOMPOSITE``
     - ``composite``
     - –
   * - Linear solve only
     - ``SNESKSPONLY``
     - ``ksponly``
     - –
   * - Python Shell
     - ``SNESPYTHON``
     - ``python``
     - –
   * - Shell (user-defined)
     - ``SNESSHELL``
     - ``shell``
     - –


Line Search Newton
^^^^^^^^^^^^^^^^^^

The method ``SNESNEWTONLS`` (``-snes_type newtonls``) provides a
line search Newton method for solving systems of nonlinear equations. By
default, this technique employs cubic backtracking
:cite:`dennis:83`. Alternative line search techniques are
listed in Table :any:`tab-linesearches`.

.. table:: PETSc Line Search Methods
   :name: tab-linesearches

   ==================== ======================= ================
   **Line Search**      **SNESLineSearchType**  **Options Name**
   ==================== ======================= ================
   Backtracking         ``SNESLINESEARCHBT``    ``bt``
   (damped) step        ``SNESLINESEARCHBASIC`` ``basic``
   identical to above   ``SNESLINESEARCHNONE``  ``none``
   L2-norm Minimization ``SNESLINESEARCHL2``    ``l2``
   Critical point       ``SNESLINESEARCHCP``    ``cp``
   Shell                ``SNESLINESEARCHSHELL`` ``shell``
   ==================== ======================= ================

Every ``SNES`` has a line search context of type ``SNESLineSearch`` that
may be retrieved using

.. code-block::

   SNESGetLineSearch(SNES snes,SNESLineSearch *ls);

There are several default options for the line searches. The order of
polynomial approximation may be set with ``-snes_linesearch_order`` or

.. code-block::

   SNESLineSearchSetOrder(SNESLineSearch ls, PetscInt order);

for instance, 2 for quadratic or 3 for cubic. Sometimes, it may not be
necessary to monitor the progress of the nonlinear iteration. In this
case, ``-snes_linesearch_norms`` or

.. code-block::

   SNESLineSearchSetComputeNorms(SNESLineSearch ls,PetscBool norms);

may be used to turn off function, step, and solution norm computation at
the end of the linesearch.

The default line search for the line search Newton method,
``SNESLINESEARCHBT``, involves several parameters, which are set to
defaults that are reasonable for many applications. The user can
override the defaults by using the following options:

* ``-snes_linesearch_alpha <alpha>``
* ``-snes_linesearch_maxstep <max>``
* ``-snes_linesearch_minlambda <tol>``

Besides the backtracking linesearch, there are ``SNESLINESEARCHL2``,
which uses a polynomial secant minimization of :math:`||F(x)||_2`, and
``SNESLINESEARCHCP``, which minimizes :math:`F(x) \cdot Y` where
:math:`Y` is the search direction. These are both potentially iterative
line searches, which may be used to find a better-fitted steplength in
the case where a single secant search is not sufficient. The number of
iterations may be set with ``-snes_linesearch_max_it``. In addition, the
convergence criteria of the iterative line searches may be set using
function tolerances ``-snes_linesearch_rtol`` and
``-snes_linesearch_atol``, and steplength tolerance
``-snes_linesearch_ltol``.
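
These settings can also be made programmatically, for example (a sketch with
illustrative values):

.. code-block::

   SNESLineSearch ls;

   SNESGetLineSearch(snes, &ls);
   SNESLineSearchSetType(ls, SNESLINESEARCHL2);
   /* steptol, maxstep, rtol, atol, ltol, max_it */
   SNESLineSearchSetTolerances(ls, 1e-12, 1e8, 1e-8, 1e-15, 1e-8, 3);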

Custom line search types may be defined either by using
``SNESLINESEARCHSHELL`` or by creating a custom user line search type,
modeled on the preexisting ones, and registering it using

.. code-block::

   SNESLineSearchRegister(const char sname[],PetscErrorCode (*function)(SNESLineSearch));

Trust Region Methods
^^^^^^^^^^^^^^^^^^^^

The trust region method in ``SNES`` for solving systems of nonlinear
equations, ``SNESNEWTONTR`` (``-snes_type newtontr``), is similar to the one developed in the
MINPACK project :cite:`more84`. Several parameters can be
set to control the variation of the trust region size during the
solution process. In particular, the user can control the initial trust
region radius, computed by

.. math:: \Delta = \Delta_0 \| F_0 \|_2,

by setting :math:`\Delta_0` via the option ``-snes_tr_delta0 <delta0>``.

Newton with Arc Length Continuation
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Newton method with arc length continuation reformulates the linearized system
:math:`K\delta \mathbf x = -\mathbf F(\mathbf x)` by introducing the load parameter
:math:`\lambda` and splitting the residual into two components, commonly
corresponding to internal and external forces:

.. math:: \mathbf F(\mathbf x, \lambda) = \mathbf F^{\mathrm{int}}(\mathbf x) - \mathbf F^{\mathrm{ext}}(\mathbf x, \lambda)

Often, :math:`\mathbf F^{\mathrm{ext}}(\mathbf x, \lambda)` is linear in :math:`\lambda`,
which can be thought of as applying the external force in proportional load
increments. By default, this is how the right-hand side vector is handled in the
implemented method. Generally, however, :math:`\mathbf F^{\mathrm{ext}}(\mathbf x, \lambda)`
may depend non-linearly on :math:`\lambda` or :math:`\mathbf x`, or both.
To accommodate this possibility, we provide the ``SNESNewtonALGetLoadParameter()``
function, which allows the current value of :math:`\lambda` to be queried in the
functions provided to ``SNESSetFunction()`` and ``SNESSetJacobian()``.

Additionally, we split the solution update into two components:

.. math:: \delta \mathbf x = \delta s\delta\mathbf x^F + \delta\lambda\delta\mathbf x^Q,

where :math:`\delta s = 1` unless partial corrections are used (discussed more
below). Each of :math:`\delta \mathbf x^F` and :math:`\delta \mathbf x^Q` is found by
solving a linear system with the Jacobian :math:`K`:

- :math:`\delta \mathbf x^F` is the full Newton step for a given value of :math:`\lambda`: :math:`K \delta \mathbf x^F = -\mathbf F(\mathbf x, \lambda)`

- :math:`\delta \mathbf x^Q` is the variation in :math:`\mathbf x` with respect to :math:`\lambda`, computed by :math:`K \delta\mathbf x^Q = \mathbf Q(\mathbf x, \lambda)`, where :math:`\mathbf Q(\mathbf x, \lambda) = -\partial \mathbf F (\mathbf x, \lambda) / \partial \lambda` is the tangent load vector.

Often, the tangent load vector :math:`\mathbf Q` is constant within a load increment,
which corresponds to the case of proportional loading discussed above. By default,
:math:`\mathbf Q` is the full right-hand-side vector, if one was provided.
The user can also provide a function that computes :math:`\mathbf Q` via
``SNESNewtonALSetFunction()``. This function should have the same signature as for
``SNESSetFunction()``, and the user should use ``SNESNewtonALGetLoadParameter()`` to get
:math:`\lambda` if it is needed.
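
For instance, the residual callback registered with ``SNESSetFunction()`` can
query the load parameter roughly as follows (a sketch: ``FormInternalForce()``,
the ``AppCtx`` fields, and the assumption that the load parameter is returned as a
``PetscReal`` are all illustrative):

.. code-block::

   PetscErrorCode FormFunctionAL(SNES snes, Vec x, Vec f, void *ctx)
   {
     AppCtx   *user = (AppCtx *)ctx;
     PetscReal lambda;

     PetscFunctionBeginUser;
     PetscCall(SNESNewtonALGetLoadParameter(snes, &lambda));
     PetscCall(FormInternalForce(x, f, user));       /* hypothetical helper: f <- F_int(x) */
     PetscCall(VecAXPY(f, -lambda, user->Fext_ref)); /* subtract the lambda-scaled reference load */
     PetscFunctionReturn(PETSC_SUCCESS);
   }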

**The Constraint Surface.** Considering the :math:`n+1` dimensional space of
:math:`\mathbf x` and :math:`\lambda`, we define the linearized equilibrium line to be
the set of points for which the linearized equilibrium equations are satisfied.
Given the previous iterative solution
:math:`\mathbf t^{(j-1)} = [\mathbf x^{(j-1)}, \lambda^{(j-1)}]`,
this line is defined by the point :math:`\mathbf t^{(j-1)} + [\delta\mathbf x^F, 0]` and
the vector :math:`\mathbf t^Q = [\delta\mathbf x^Q, 1]`.
The arc length method seeks the intersection of this linearized equilibrium line
with a quadratic constraint surface, defined by

.. math:: L^2 = \|\Delta\mathbf x\|^2 + \psi^2 (\Delta\lambda)^2,

where :math:`L` is a user-provided step size corresponding to the radius of the
constraint surface, :math:`\Delta\mathbf x` and :math:`\Delta\lambda` are the
accumulated updates over the current load step, and :math:`\psi^2` is a
user-provided consistency parameter determining the shape of the constraint surface.
Generally, :math:`\psi^2 > 0` leads to a hyper-sphere constraint surface, while
:math:`\psi^2 = 0` leads to a hyper-cylinder constraint surface.

Since the solution will always fall on the constraint surface, the method will often
require multiple incremental steps to fully solve the non-linear problem.
This is necessary to accurately trace the equilibrium path.
Importantly, this is fundamentally different from time stepping.
While a similar process could be implemented as a ``TS``, this method is
particularly designed to be used as a ``SNES``, either standalone or within a ``TS``.

To this end, by default, the load parameter is used such that the full external
forces are applied at :math:`\lambda = 1`, although the user may specify
a different value via ``-snes_newtonal_lambda_max``.
To ensure that the solution corresponds exactly to the external force prescribed by
the user, i.e. that the load parameter is exactly :math:`\lambda_{max}` at the end
of the SNES solve, we clamp the value before computing the solution update.
As such, the final increment will likely be a hybrid of arc length continuation and
normal Newton iterations.

**Choosing the Continuation Step.** For the first iteration from an equilibrium
point, there is a single correct way to choose :math:`\delta\lambda`, which follows
from the constraint equations. Specifically, the constraint equations yield the
quadratic equation :math:`a\delta\lambda^2 + b\delta\lambda + c = 0`, where

.. math::

   \begin{aligned}
   a &= \|\delta\mathbf x^Q\|^2 + \psi^2,\\
   b &= 2\delta\mathbf x^Q\cdot (\Delta\mathbf x + \delta s\delta\mathbf x^F) + 2\psi^2 \Delta\lambda,\\
   c &= \|\Delta\mathbf x + \delta s\delta\mathbf x^F\|^2 + \psi^2 \Delta\lambda^2 - L^2.
   \end{aligned}

Since, in the first iteration, :math:`\Delta\mathbf x = \delta\mathbf x^F = \mathbf 0` and
:math:`\Delta\lambda = 0`, we have :math:`b = 0` and the equation simplifies to a pair of
real roots:

.. math:: \delta\lambda = \pm\frac{L}{\sqrt{\|\delta\mathbf x^Q\|^2 + \psi^2}},

where the sign is positive for the first increment and is determined by the previous
increment otherwise as

.. math:: \text{sign}(\delta\lambda) = \text{sign}\big(\delta\mathbf x^Q \cdot (\Delta\mathbf x)_{i-1} + \psi^2(\Delta\lambda)_{i-1}\big),

where :math:`(\Delta\mathbf x)_{i-1}` and :math:`(\Delta\lambda)_{i-1}` are the
accumulated updates over the previous load step.

In subsequent iterations, there are different approaches to selecting
:math:`\delta\lambda`, all of which have trade-offs.
The main difference is whether the iterative solution falls on the constraint
surface at every iteration, or only when fully converged.
``SNESNEWTONAL`` implements one of each of these approaches, selected via
``SNESNewtonALSetCorrectionType()`` or
``-snes_newtonal_correction_type <normal|exact>`` on the command line.

**Corrections in the Normal Hyperplane.** The ``SNES_NEWTONAL_CORRECTION_NORMAL``
option is simpler and computationally less expensive, but may fail to converge, as
the constraint equation is not satisfied at every iteration.
The update :math:`\delta \lambda` is chosen such that the full update lies in the
hyperplane normal to the quadratic constraint surface.
Mathematically, that is

.. math:: \delta \lambda = -\frac{\Delta \mathbf x \cdot \delta \mathbf x^F}{\Delta\mathbf x \cdot \delta\mathbf x^Q + \psi^2 \Delta\lambda}.

This implementation is based on :cite:`LeonPaulinoPereiraMenezesLages_2011`.

**Exact Corrections.** The ``SNES_NEWTONAL_CORRECTION_EXACT`` option is far more
complex, but ensures that the constraint is exactly satisfied at every Newton
iteration. As such, it is generally more robust.
By evaluating the intersection of constraint surface and equilibrium line at each
iteration, :math:`\delta\lambda` is chosen as one of the roots of the above
quadratic equation :math:`a\delta\lambda^2 + b\delta\lambda + c = 0`.
This method encounters issues, however, if the linearized equilibrium line and
constraint surface do not intersect due to particularly large linearized error.
In this case, the roots are complex.
To continue progressing toward a solution, this method uses a partial correction by
choosing :math:`\delta s` such that the quadratic equation has a single real root.
Geometrically, this is selecting the point on the constraint surface closest to the
linearized equilibrium line. See the code or :cite:`Ritto-CorreaCamotim2008` for a
mathematical description of these partial corrections.

Nonlinear Krylov Methods
^^^^^^^^^^^^^^^^^^^^^^^^

A number of nonlinear Krylov methods are provided, including Nonlinear
Richardson (``SNESNRICHARDSON``), nonlinear conjugate gradient (``SNESNCG``), nonlinear GMRES (``SNESNGMRES``), and Anderson Mixing (``SNESANDERSON``). These
methods are described individually below. They are all instrumental to
PETSc’s nonlinear preconditioning.

**Nonlinear Richardson.** The nonlinear Richardson iteration, ``SNESNRICHARDSON``, is
simply a line-search-damped fixed-point iteration of the form

.. math::

   \mathbf{x}_{k+1} = \mathbf{x}_k - \lambda \mathbf{F}(\mathbf{x}_k), \;\; k=0,1, \ldots,

where the default linesearch is ``SNESLINESEARCHL2``. This simple solver
is mostly useful as a nonlinear smoother, or to provide line search
stabilization to an inner method.

**Nonlinear Conjugate Gradients.** Nonlinear CG, ``SNESNCG``, is equivalent to linear
CG, but with the steplength determined by line search
(``SNESLINESEARCHCP`` by default). Five variants (Fletcher-Reeves,
Hestenes-Stiefel, Polak-Ribiere-Polyak, Dai-Yuan, and Conjugate Descent)
are implemented in PETSc and may be chosen using

.. code-block::

   SNESNCGSetType(SNES snes, SNESNCGType btype);

**Anderson Mixing and Nonlinear GMRES Methods.** Nonlinear GMRES (``SNESNGMRES``), and
Anderson Mixing (``SNESANDERSON``) methods combine the last :math:`m` iterates, plus a new
fixed-point iteration iterate, into an approximate residual-minimizing new iterate.

All of the above methods support the use of a nonlinear preconditioner to compute the preliminary update step, rather than the default,
which is the nonlinear function's residual, :math:`\mathbf{F}(\mathbf{x}_k)`. The modified update is obtained by solving an inner nonlinear problem defined by the nonlinear preconditioner, which has its own
``SNES`` object that may be obtained with ``SNESGetNPC()``.

Quasi-Newton Methods
^^^^^^^^^^^^^^^^^^^^

Quasi-Newton methods store iterative rank-one updates to the Jacobian
instead of computing the Jacobian directly. Three limited-memory quasi-Newton
methods are provided, described in
Table :any:`tab-qndefaults`. These are all encapsulated under
``-snes_type qn`` and may be selected with ``-snes_qn_type``. The default
is L-BFGS, which provides symmetric updates to an approximate Jacobian.
This iteration is similar to the line search Newton methods.

The quasi-Newton methods support the use of a nonlinear preconditioner that can be obtained with ``SNESGetNPC()`` and then configured; or that can be configured with
``SNES``, ``KSP``, and ``PC`` options using the options database prefix ``-npc_``.

.. list-table:: PETSc quasi-Newton solvers
   :name: tab-qndefaults
   :header-rows: 1

   * - QN Method
     - ``SNESQNType``
     - Options Name
     - Default Line Search
   * - L-BFGS
     - ``SNES_QN_LBFGS``
     - ``lbfgs``
     - ``SNESLINESEARCHCP``
   * - “Good” Broyden
     - ``SNES_QN_BROYDEN``
     - ``broyden``
     - ``SNESLINESEARCHBASIC`` (or equivalently ``SNESLINESEARCHNONE``)
   * - “Bad” Broyden
     - ``SNES_QN_BADBROYDEN``
     - ``badbroyden``
     - ``SNESLINESEARCHL2``

One may also control the form of the initial Jacobian approximation with

.. code-block::

   SNESQNSetScaleType(SNES snes, SNESQNScaleType stype);

and the restart type with

.. code-block::

   SNESQNSetRestartType(SNES snes, SNESQNRestartType rtype);
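
For example, a quasi-Newton solve can be selected and tuned programmatically as in
the following sketch (the particular choices are illustrative):

.. code-block::

   SNESSetType(snes, SNESQN);
   SNESQNSetType(snes, SNES_QN_LBFGS);
   SNESQNSetScaleType(snes, SNES_QN_SCALE_JACOBIAN);
   SNESQNSetRestartType(snes, SNES_QN_RESTART_PERIODIC);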

The Full Approximation Scheme
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

The Nonlinear Full Approximation Scheme (FAS) ``SNESFAS``, is a nonlinear multigrid method. At
each level, there is a recursive cycle control ``SNES`` instance, and
either one or two nonlinear solvers that act as smoothers (up and down). Problems
set up using the ``SNES`` ``DMDA`` interface are automatically
coarsened. ``SNESFAS`` differs slightly from the linear multigrid ``PCMG`` in that the hierarchy is
constructed recursively. However, much of the interface is a one-to-one
map. We describe the “get” operations here, and it can be assumed that
each has a corresponding “set” operation. For instance, the number of
levels in the hierarchy may be retrieved using

.. code-block::

   SNESFASGetLevels(SNES snes, PetscInt *levels);

There are four ``SNESFAS`` cycle types, ``SNES_FAS_MULTIPLICATIVE``,
``SNES_FAS_ADDITIVE``, ``SNES_FAS_FULL``, and ``SNES_FAS_KASKADE``. The
type may be set with

.. code-block::

   SNESFASSetType(SNES snes,SNESFASType fastype);

and the cycle type, 1 for V, 2 for W, may be set with

.. code-block::

   SNESFASSetCycles(SNES snes, PetscInt cycles);

Much like the interface to ``PCMG`` described in :any:`sec_mg`, there are interfaces to recover the
various levels’ cycles and smoothers. The level smoothers may be
accessed with

.. code-block::

   SNESFASGetSmoother(SNES snes, PetscInt level, SNES *smooth);
   SNESFASGetSmootherUp(SNES snes, PetscInt level, SNES *smooth);
   SNESFASGetSmootherDown(SNES snes, PetscInt level, SNES *smooth);

and the level cycles with

.. code-block::

   SNESFASGetCycleSNES(SNES snes,PetscInt level,SNES *lsnes);

Also akin to ``PCMG``, the restriction and prolongation at a level may
be acquired with

.. code-block::

   SNESFASGetInterpolation(SNES snes, PetscInt level, Mat *mat);
   SNESFASGetRestriction(SNES snes, PetscInt level, Mat *mat);

In addition, FAS requires special restriction for solution-like
variables, called injection. This may be set with

.. code-block::

   SNESFASGetInjection(SNES snes, PetscInt level, Mat *mat);

The coarse solve context may be acquired with

.. code-block::

   SNESFASGetCoarseSolve(SNES snes, SNES *smooth);
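
For example, one might choose a different coarse solver and smoother roughly as in
the following sketch (the level index, the solver choices, and the assumption that
level 0 is the coarsest are illustrative):

.. code-block::

   SNES coarse, smooth;

   SNESFASGetCoarseSolve(snes, &coarse);
   SNESSetType(coarse, SNESNEWTONLS);    /* full Newton on the coarsest level */

   SNESFASGetSmoother(snes, 1, &smooth); /* smoother on level 1 */
   SNESSetType(smooth, SNESNGS);         /* nonlinear Gauss-Seidel sweeps */
   SNESSetTolerances(smooth, PETSC_CURRENT, PETSC_CURRENT, PETSC_CURRENT, 4, PETSC_CURRENT);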

Nonlinear Additive Schwarz
^^^^^^^^^^^^^^^^^^^^^^^^^^

Nonlinear Additive Schwarz methods (NASM) take a number of local
nonlinear subproblems, solve them independently in parallel, and
combine those solutions into a new approximate solution.

.. code-block::

   SNESNASMSetSubdomains(SNES snes,PetscInt n,SNES subsnes[],VecScatter iscatter[],VecScatter oscatter[],VecScatter gscatter[]);

allows the user to create these local subdomains. Problems set up
using the ``SNES`` ``DMDA`` interface are automatically decomposed. To
begin, the type of subdomain update to the whole solution is limited
to two options borrowed from ``PCASM``: ``PC_ASM_BASIC``, in which the
overlapping updates are added, and ``PC_ASM_RESTRICT``, which updates in a
nonoverlapping fashion. The type may be set with

.. code-block::

   SNESNASMSetType(SNES snes,PCASMType type);

``SNESASPIN`` is a helper ``SNES`` type that sets up a nonlinearly
preconditioned Newton’s method using NASM as the preconditioner.

General Options
~~~~~~~~~~~~~~~

This section discusses options and routines that apply to all ``SNES``
solvers and problem classes. In particular, we focus on convergence
tests, monitoring routines, and tools for checking derivative
computations.

.. _sec_snesconvergence:

Convergence Tests
^^^^^^^^^^^^^^^^^

Convergence of the nonlinear solvers can be detected in a variety of
ways; the user can even specify a customized test, as discussed below.
Most of the nonlinear solvers use ``SNESConvergedDefault()``;
however, ``SNESNEWTONTR`` uses an additional, method-specific convergence
test as well. The convergence tests involve several parameters, which
are set by default to values that should be reasonable for a wide range
of problems. The user can customize the parameters to the problem at
hand by using some of the following routines and options.

One method of convergence testing is to declare convergence when the
norm of the change in the solution between successive iterations is less
than some tolerance, ``stol``. Convergence can also be determined based
on the norm of the function. Such a test can use either the absolute
size of the norm, ``atol``, or its relative decrease, ``rtol``, from an
initial guess. The following routine sets these parameters, which are
used in many of the default ``SNES`` convergence tests:

.. code-block::

   SNESSetTolerances(SNES snes,PetscReal atol,PetscReal rtol,PetscReal stol, PetscInt its,PetscInt fcts);

This routine also sets the maximum numbers of allowable nonlinear
iterations, ``its``, and function evaluations, ``fcts``. The
corresponding options database commands for setting these parameters are:

* ``-snes_atol <atol>``
* ``-snes_rtol <rtol>``
* ``-snes_stol <stol>``
* ``-snes_max_it <its>``
* ``-snes_max_funcs <fcts>`` (use ``unlimited`` for no maximum)

A related routine is ``SNESGetTolerances()``. ``PETSC_CURRENT`` may be used
for any parameter to indicate the current value should be retained; use ``PETSC_DETERMINE`` to restore to the default value from when the object was created.

Users can set their own customized convergence tests in ``SNES`` by
using the command

.. code-block::

   SNESSetConvergenceTest(SNES snes,PetscErrorCode (*test)(SNES snes,PetscInt it,PetscReal xnorm, PetscReal gnorm,PetscReal f,SNESConvergedReason *reason, void *cctx),void *cctx,PetscErrorCode (*destroy)(void *cctx));

The final argument of the convergence test routine, ``cctx``, denotes an
optional user-defined context for private data. When solving systems of
nonlinear equations, the arguments ``xnorm``, ``gnorm``, and ``f`` are
the current iterate norm, current step norm, and function norm,
respectively. ``SNESConvergedReason`` should be set positive for
convergence and negative for divergence. See ``include/petscsnes.h`` for
a list of values for ``SNESConvergedReason``.
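
A sketch of a custom test that declares convergence once the function norm drops
below a user-chosen absolute tolerance (the ``TestCtx`` structure and the limits
are illustrative):

.. code-block::

   typedef struct {
     PetscReal fatol; /* user-chosen absolute residual tolerance */
   } TestCtx;

   PetscErrorCode MyConvergenceTest(SNES snes, PetscInt it, PetscReal xnorm, PetscReal gnorm, PetscReal f, SNESConvergedReason *reason, void *cctx)
   {
     TestCtx *tc = (TestCtx *)cctx;

     PetscFunctionBeginUser;
     *reason = SNES_CONVERGED_ITERATING;
     if (f < tc->fatol) *reason = SNES_CONVERGED_FNORM_ABS;
     else if (it > 50)  *reason = SNES_DIVERGED_MAX_IT;
     PetscFunctionReturn(PETSC_SUCCESS);
   }

   /* install it; no destroy routine is needed for a static context */
   SNESSetConvergenceTest(snes, MyConvergenceTest, &testctx, NULL);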

.. _sec_snesmonitor:

Convergence Monitoring
^^^^^^^^^^^^^^^^^^^^^^

By default the ``SNES`` solvers run silently without displaying
information about the iterations. The user can initiate monitoring with
the command

.. code-block::

   SNESMonitorSet(SNES snes,PetscErrorCode (*mon)(SNES,PetscInt its,PetscReal norm,void* mctx),void *mctx,PetscErrorCode (*monitordestroy)(void**));

The routine, ``mon``, indicates a user-defined monitoring routine, where
``its`` and ``mctx`` respectively denote the iteration number and an
optional user-defined context for private data for the monitor routine.
The argument ``norm`` is the function norm.

The routine set by ``SNESMonitorSet()`` is called once after every
successful step computation within the nonlinear solver. Hence, the user
can employ this routine for any application-specific computations that
should be done after the solution update. The option ``-snes_monitor``
activates the default ``SNES`` monitor routine,
``SNESMonitorDefault()``, while ``-snes_monitor_lg_residualnorm`` draws
a simple line graph of the residual norm’s convergence.
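
A minimal user-defined monitor might look like the following sketch:

.. code-block::

   PetscErrorCode MyMonitor(SNES snes, PetscInt its, PetscReal fnorm, void *mctx)
   {
     PetscFunctionBeginUser;
     PetscCall(PetscPrintf(PETSC_COMM_WORLD, "  iteration %" PetscInt_FMT ": ||F|| = %g\n", its, (double)fnorm));
     PetscFunctionReturn(PETSC_SUCCESS);
   }

   /* install it with a NULL context and no destroy routine */
   SNESMonitorSet(snes, MyMonitor, NULL, NULL);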

One can cancel hardwired monitoring routines for ``SNES`` at runtime
with ``-snes_monitor_cancel``.

As the Newton method converges so that the residual norm is small, say
:math:`10^{-10}`, many of the final digits printed with the
``-snes_monitor`` option are meaningless. Worse, they differ between
machines because of the different round-off rules used by, say, the
IBM RS6000 and the Sun SPARC. This makes testing between different
machines difficult. The option ``-snes_monitor_short`` causes PETSc to
print fewer digits of the residual norm as it gets smaller; thus on
most machines it will print the same numbers, making cross-machine
testing easier.

The routines

.. code-block::

   SNESGetSolution(SNES snes,Vec *x);
   SNESGetFunction(SNES snes,Vec *r,PetscErrorCode (**func)(SNES,Vec,Vec,void*),void **ctx);

return the solution vector and function vector from a ``SNES`` context.
These routines are useful, for instance, if the convergence test
requires some property of the solution or function other than those
passed with routine arguments.

.. _sec_snesderivs:

Checking Accuracy of Derivatives
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Since hand-coding routines for Jacobian matrix evaluation can be error
prone, ``SNES`` provides easy-to-use support for checking these matrices
against finite difference versions. In the simplest form of comparison,
users can employ the option ``-snes_test_jacobian`` to compare the
matrices at several points. Although not exhaustive, this test will
generally catch obvious problems. One can compare the elements of the
two matrices by using the option ``-snes_test_jacobian_view``, which
causes the two matrices to be printed to the screen.

Another means for verifying the correctness of a code for Jacobian
computation is running the problem with either the finite difference or
matrix-free variant, ``-snes_fd`` or ``-snes_mf``; see :any:`sec_fdmatrix` or :any:`sec_nlmatrixfree`.
If a
problem converges well with these matrix approximations but not with a
user-provided routine, the problem probably lies with the hand-coded
matrix. See the note in :any:`sec_snesjacobian` about
assembling your Jacobian in the "preconditioner" slot ``Pmat``.

The correctness of user provided ``MATSHELL`` Jacobians in general can be
checked with ``MatShellTestMultTranspose()`` and ``MatShellTestMult()``.

The correctness of user-provided ``MATSHELL`` Jacobians via ``TSSetRHSJacobian()``
can be checked with ``TSRHSJacobianTestTranspose()`` and ``TSRHSJacobianTest()``,
which check the correctness of the matrix-transpose-vector product and the
matrix-vector product, respectively. From the command line, these can be checked with

* ``-ts_rhs_jacobian_test_mult_transpose``
* ``-mat_shell_test_mult_transpose_view``
* ``-ts_rhs_jacobian_test_mult``
* ``-mat_shell_test_mult_view``

Inexact Newton-like Methods
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Since exact solution of the linear Newton systems within :math:numref:`newton`
at each iteration can be costly, modifications
are often introduced that significantly reduce these expenses and yet
retain the rapid convergence of Newton’s method. Inexact or truncated
Newton techniques approximately solve the linear systems using an
iterative scheme. In comparison with using direct methods for solving
the Newton systems, iterative methods have the virtue of requiring
little space for matrix storage and potentially saving significant
computational work. Within the class of inexact Newton methods, of
particular interest are Newton-Krylov methods, where the subsidiary
iterative technique for solving the Newton system is chosen from the
class of Krylov subspace projection methods. Note that at runtime the
user can set any of the linear solver options discussed in :any:`ch_ksp`,
such as ``-ksp_type <ksp_method>`` and
``-pc_type <pc_method>``, to set the Krylov subspace and preconditioner
methods.

Two levels of iterations occur for the inexact techniques, where during
each global or outer Newton iteration a sequence of subsidiary inner
iterations of a linear solver is performed. Appropriate control of the
accuracy to which the subsidiary iterative method solves the Newton
system at each global iteration is critical, since these inner
iterations determine the asymptotic convergence rate for inexact Newton
techniques. While the Newton systems must be solved well enough to
retain fast local convergence of the Newton iterates, use of excessive
inner iterations, particularly when :math:`\| \mathbf{x}_k - \mathbf{x}_* \|` is large,
is neither necessary nor economical. Thus, the number of required inner
iterations typically increases as the Newton process progresses, so that
the truncated iterates approach the true Newton iterates.

A sequence of nonnegative numbers :math:`\{\eta_k\}` can be used to
indicate the variable convergence criterion. In this case, when solving
a system of nonlinear equations, the update step of the Newton process
remains unchanged, and direct solution of the linear system is replaced
by iteration on the system until the residuals

.. math:: \mathbf{r}_k^{(i)} =  \mathbf{F}'(\mathbf{x}_k) \Delta \mathbf{x}_k + \mathbf{F}(\mathbf{x}_k)

satisfy

.. math:: \frac{ \| \mathbf{r}_k^{(i)} \| }{ \| \mathbf{F}(\mathbf{x}_k) \| } \leq \eta_k \leq \eta < 1.

Here :math:`\mathbf{x}_0` is an initial approximation of the solution, and
:math:`\| \cdot \|` denotes an arbitrary norm in :math:`\Re^n` .

By default a constant relative convergence tolerance is used for solving
the subsidiary linear systems within the Newton-like methods of
``SNES``. When solving a system of nonlinear equations, one can instead
employ the techniques of Eisenstat and Walker :cite:`ew96`
to compute :math:`\eta_k` at each step of the nonlinear solver by using
the option ``-snes_ksp_ew`` . In addition, by adding one’s own
``KSP`` convergence test (see :any:`sec_convergencetests`), one can easily create one’s own,
problem-dependent, inner convergence tests.
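
For reference, the Eisenstat-Walker strategy can also be enabled programmatically
with ``SNESKSPSetUseEW()``, or a fixed inner tolerance can be set directly on the
``KSP`` obtained from the ``SNES`` (a sketch; the values are illustrative):

.. code-block::

   KSP ksp;

   /* either adaptive inner tolerances ... */
   SNESKSPSetUseEW(snes, PETSC_TRUE); /* equivalent to -snes_ksp_ew */

   /* ... or a fixed relative tolerance and iteration cap for the inner solves */
   SNESGetKSP(snes, &ksp);
   KSPSetTolerances(ksp, 1.e-4, PETSC_CURRENT, PETSC_CURRENT, 20); /* rtol, abstol, dtol, maxits */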

.. _sec_nlmatrixfree:

Matrix-Free Methods
~~~~~~~~~~~~~~~~~~~

The ``SNES`` class fully supports matrix-free methods. The matrices
specified in the Jacobian evaluation routine need not be conventional
matrices; instead, they can point to the data required to implement a
particular matrix-free method. The matrix-free variant is allowed *only*
when the linear systems are solved by an iterative method in combination
with no preconditioning (``PCNONE`` or ``-pc_type`` ``none``), a
user-provided preconditioner matrix, or a user-provided preconditioner
shell (``PCSHELL``, discussed in :any:`sec_pc`); that
is, obviously matrix-free methods cannot be used with a direct solver,
approximate factorization, or other preconditioner which requires access
to explicit matrix entries.

The user can create a matrix-free context for use within ``SNES`` with
the routine

.. code-block::

   MatCreateSNESMF(SNES snes,Mat *mat);

This routine creates the data structures needed for the matrix-vector
products that arise within Krylov space iterative
methods :cite:`brownsaad:90`.
The default ``SNES``
matrix-free approximations can also be invoked with the command
``-snes_mf``. Or, one can retain the user-provided Jacobian
preconditioner, but replace the user-provided Jacobian matrix with the
default matrix-free variant with the option ``-snes_mf_operator``.

``MatCreateSNESMF()`` uses

.. code-block::

   MatCreateMFFD(Vec x, Mat *mat);

which can also be used directly for users who need a matrix-free matrix but are not using ``SNES``.

The user can set one parameter to control the Jacobian-vector product
approximation with the command

.. code-block::

   MatMFFDSetFunctionError(Mat mat,PetscReal rerror);

The parameter ``rerror`` should be set to the square root of the
relative error in the function evaluations, :math:`e_{rel}`; the default
is the square root of machine epsilon (about :math:`10^{-8}` in double
precision), which assumes that the functions are evaluated to full
floating-point precision accuracy. This parameter can also be set from
the options database with ``-mat_mffd_err <err>``.
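
A common pattern, sketched below, is to use the matrix-free operator for the
Jacobian action while assembling a separate matrix ``P`` for preconditioning
(``FormPrecondJacobian()`` is a hypothetical callback that fills only ``Pmat``):

.. code-block::

   Mat J; /* matrix-free Jacobian action */
   Mat P; /* conventional matrix assembled for preconditioning */

   MatCreateSNESMF(snes, &J);
   MatMFFDSetFunctionError(J, 1.e-7); /* illustrative value */
   SNESSetJacobian(snes, J, P, FormPrecondJacobian, &user);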

In addition, PETSc provides ways to register new routines to compute
the differencing parameter (:math:`h`); see the manual page for
``MatMFFDSetType()`` and ``MatMFFDRegister()``. We currently provide two
default routines accessible via ``-mat_mffd_type <ds or wp>``. For
the ``ds`` approach there is one “tuning” parameter, set with

.. code-block::

   MatMFFDDSSetUmin(Mat mat,PetscReal umin);

This parameter, ``umin`` (or :math:`u_{min}`), is a bit involved; its
default is :math:`10^{-6}`. Its command line form is ``-mat_mffd_umin <umin>``.

The Jacobian-vector product is approximated
via the formula

.. math:: F'(u) a \approx \frac{F(u + h*a) - F(u)}{h}

where :math:`h` is computed via

.. math::

   h = e_{\text{rel}} \cdot \begin{cases}
   u^{T}a/\lVert a \rVert^2_2                                 & \text{if $|u^T a| > u_{\min} \lVert a \rVert_{1}$} \\
   u_{\min} \operatorname{sign}(u^{T}a) \lVert a \rVert_{1}/\lVert a\rVert^2_2  & \text{otherwise}.
   \end{cases}

This approach is taken from Brown and Saad
:cite:`brownsaad:90`. The second approach, taken from Walker and Pernice,
:cite:`pw98`, computes :math:`h` via

.. math::

   \begin{aligned}
           h = \frac{\sqrt{1 + ||u||}e_{rel}}{||a||}\end{aligned}

This has no tunable parameters, but note that inside the nonlinear solve,
:math:`u` does not change during the entire *linear* iterative process;
hence :math:`\sqrt{1 + ||u||}` needs to be computed only once. This
information may be set with the options

.. code-block::

   MatMFFDWPSetComputeNormU(Mat mat,PetscBool flag);

or ``-mat_mffd_compute_normu <true or false>``. This information is used
to eliminate the redundant computation of these parameters, therefore
reducing the number of collective operations and improving the
efficiency of the application code. This takes place automatically for the PETSc GMRES solver with left preconditioning.

It is also possible to monitor the differencing parameters :math:`h` that are
computed, via the routines

.. code-block::

   MatMFFDSetHHistory(Mat,PetscScalar *,int);
   MatMFFDResetHHistory(Mat,PetscScalar *,int);
   MatMFFDGetH(Mat,PetscScalar *);

We include an explicit example of using matrix-free methods in :any:`ex3.c <snes-ex3>`.
Note that by using the option ``-snes_mf`` one can
easily convert any ``SNES`` code to use a matrix-free Newton-Krylov
method without a preconditioner. As shown in this example,
``SNESSetFromOptions()`` must be called *after* ``SNESSetJacobian()`` to
enable runtime switching between the user-specified Jacobian and the
default ``SNES`` matrix-free form.

.. _snes-ex3:
.. admonition:: Listing: ``src/snes/tutorials/ex3.c``

   .. literalinclude:: /../src/snes/tutorials/ex3.c
      :end-before: /*TEST

Table :any:`tab-jacobians` summarizes the various matrix situations
that ``SNES`` supports. In particular, different linear system matrices
and preconditioning matrices are allowed, as well as both matrix-free
and application-provided preconditioners. If :any:`ex3.c <snes-ex3>` is run with
the options ``-snes_mf`` and ``-user_precond``, then it uses a
matrix-free application of the matrix-vector product and a
user-provided matrix-free preconditioner.

.. list-table:: Jacobian Options
   :name: tab-jacobians

   * - Matrix Use
     - Conventional Matrix Formats
     - Matrix-free versions
   * - Jacobian Matrix
     - Create matrix with ``MatCreate()``:math:`^*`.  Assemble matrix with user-defined routine :math:`^\dagger`
     - Create matrix with ``MatCreateShell()``.  Use ``MatShellSetOperation()`` to set various matrix actions, or use ``MatCreateMFFD()`` or ``MatCreateSNESMF()``.
   * - Preconditioning Matrix
     - Create matrix with ``MatCreate()``:math:`^*`.  Assemble matrix with user-defined routine :math:`^\dagger`
     - Use ``SNESGetKSP()`` and ``KSPGetPC()`` to access the ``PC``, then use ``PCSetType(pc, PCSHELL)`` followed by ``PCShellSetApply()``.

| :math:`^*` Use either the generic ``MatCreate()`` or a format-specific variant such as ``MatCreateAIJ()``.
| :math:`^\dagger` Set user-defined matrix formation routine with ``SNESSetJacobian()`` or with a ``DM`` variant such as ``DMDASNESSetJacobianLocal()``

SNES also provides some less well-integrated code to apply matrix-free finite differencing using an automatically computed measurement of the
noise of the functions. This can be selected with ``-snes_mf_version 2``; it does not use ``MatCreateMFFD()`` but has similar options that start with
``-snes_mf_`` instead of ``-mat_mffd_``. Note that this alternative prefix **only** works for version 2 differencing.


.. _sec_fdmatrix:

Finite Difference Jacobian Approximations
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

PETSc provides some tools to help approximate the Jacobian matrices
efficiently via finite differences. These tools are intended for use in
certain situations where one is unable to compute Jacobian matrices
analytically, and matrix-free methods do not work well without a
preconditioner, due to very poor conditioning. The approximation
requires several steps:

-  First, one colors the columns of the (not yet built) Jacobian matrix,
   so that columns of the same color do not share any common rows.

-  Next, one creates a ``MatFDColoring`` data structure that will be
   used later in actually computing the Jacobian.

-  Finally, one tells the nonlinear solvers of ``SNES`` to use the
   ``SNESComputeJacobianDefaultColor()`` routine to compute the
   Jacobians.

A code fragment that demonstrates this process is given below.

.. code-block::

   ISColoring    iscoloring;
   MatFDColoring fdcoloring;
   MatColoring   coloring;

   /*
     This initializes the nonzero structure of the Jacobian. This is artificial
     because clearly if we had a routine to compute the Jacobian we wouldn't
     need to use finite differences.
   */
   FormJacobian(snes,x, &J, &J, &user);

   /*
      Color the matrix, i.e. determine groups of columns that share no common
     rows. These columns in the Jacobian can all be computed simultaneously.
   */
   MatColoringCreate(J, &coloring);
   MatColoringSetType(coloring,MATCOLORINGSL);
   MatColoringSetFromOptions(coloring);
   MatColoringApply(coloring, &iscoloring);
   MatColoringDestroy(&coloring);
   /*
      Create the data structure that SNESComputeJacobianDefaultColor() uses
      to compute the actual Jacobians via finite differences.
   */
   MatFDColoringCreate(J,iscoloring, &fdcoloring);
   ISColoringDestroy(&iscoloring);
   MatFDColoringSetFunction(fdcoloring,(PetscErrorCode (*)(void))FormFunction, &user);
   MatFDColoringSetFromOptions(fdcoloring);

   /*
     Tell SNES to use the routine SNESComputeJacobianDefaultColor()
     to compute Jacobians.
   */
   SNESSetJacobian(snes,J,J,SNESComputeJacobianDefaultColor,fdcoloring);

Of course, we are cheating a bit. If we do not have an analytic formula
for computing the Jacobian, then how do we know what its nonzero
structure is so that it may be colored? Determining the structure is
problem dependent, but fortunately, for most structured grid problems
(the class of problems for which PETSc was originally designed) if one
knows the stencil used for the nonlinear function one can usually fairly
easily obtain an estimate of the location of nonzeros in the matrix.
This is harder in the unstructured case, but one typically knows where the nonzero entries are from the mesh topology and distribution of degrees of freedom.
If using ``DMPlex`` (:any:`ch_unstructured`) for unstructured meshes, the nonzero locations will be identified in ``DMCreateMatrix()`` and the procedure above can be used.
Most external packages for unstructured meshes have similar functionality.

One need not necessarily use a ``MatColoring`` object to determine a
coloring. For example, if a grid can be colored directly (without using
the associated matrix), then that coloring can be provided to
``MatFDColoringCreate()``. Note that the user must always preset the
nonzero structure in the matrix regardless of which coloring routine is
used.

PETSc provides the following coloring algorithms, which can be selected using ``MatColoringSetType()`` or via the command line argument ``-mat_coloring_type``.

.. list-table::
   :header-rows: 1

   * - Algorithm
     - ``MatColoringType``
     - ``-mat_coloring_type``
     - Parallel
   * - smallest-last :cite:`more84`
     - ``MATCOLORINGSL``
     - ``sl``
     - No
   * - largest-first :cite:`more84`
     - ``MATCOLORINGLF``
     - ``lf``
     - No
   * - incidence-degree :cite:`more84`
     - ``MATCOLORINGID``
     - ``id``
     - No
   * - Jones-Plassmann :cite:`jp:pcolor`
     - ``MATCOLORINGJP``
     - ``jp``
     - Yes
   * - Greedy
     - ``MATCOLORINGGREEDY``
     - ``greedy``
     - Yes
   * - Natural (1 color per column)
     - ``MATCOLORINGNATURAL``
     - ``natural``
     - Yes
   * - Power (:math:`A^k` followed by 1-coloring)
     - ``MATCOLORINGPOWER``
     - ``power``
     - Yes

As for the matrix-free computation of Jacobians (:any:`sec_nlmatrixfree`), two parameters affect the accuracy of the
finite difference Jacobian approximation. These are set with the command

.. code-block::

   MatFDColoringSetParameters(MatFDColoring fdcoloring,PetscReal rerror,PetscReal umin);

The parameter ``rerror`` is the square root of the relative error in the
function evaluations, :math:`e_{rel}`; the default is the square root of
machine epsilon (about :math:`10^{-8}` in double precision), which
assumes that the functions are evaluated approximately to floating-point
precision accuracy. The second parameter, ``umin``, is a bit more
involved; its default is :math:`10^{-6}`. Column :math:`i` of the
Jacobian matrix (denoted by :math:`F_{:i}`) is approximated by the
formula

.. math:: F'_{:i} \approx \frac{F(u + h*dx_{i}) - F(u)}{h}

where :math:`h` is computed via:

.. math::

   h = e_{\text{rel}} \cdot \begin{cases}
   u_{i}             &    \text{if $|u_{i}| > u_{\min}$} \\
   u_{\min} \cdot \operatorname{sign}(u_{i})  & \text{otherwise}.
   \end{cases}

for ``MATMFFD_DS`` or:

.. math::

   h = e_{\text{rel}} \sqrt{\|u\|}

for ``MATMFFD_WP`` (default). These parameters may be set from the options
database with

::

   -mat_fd_coloring_err <err>
   -mat_fd_coloring_umin <umin>
   -mat_fd_type <htype>

Note that the ``MatColoring`` types ``MATCOLORINGSL``, ``MATCOLORINGLF``, and
``MATCOLORINGID`` are sequential algorithms. ``MATCOLORINGJP`` and
``MATCOLORINGGREEDY`` are parallel algorithms, although in practice they
may create more colors than the sequential algorithms. If one computes
the coloring ``iscoloring`` reasonably with a parallel algorithm or by
knowledge of the discretization, the routine ``MatFDColoringCreate()``
is scalable. An example of this for 2D distributed arrays is given below
that uses the utility routine ``DMCreateColoring()``.

.. code-block::

   DMCreateColoring(da,IS_COLORING_GHOSTED, &iscoloring);
   MatFDColoringCreate(J,iscoloring, &fdcoloring);
   MatFDColoringSetFromOptions(fdcoloring);
   ISColoringDestroy( &iscoloring);

Note that the routine ``MatFDColoringCreate()`` currently is only
supported for the AIJ and BAIJ matrix formats.

.. _sec_vi:

Variational Inequalities
~~~~~~~~~~~~~~~~~~~~~~~~

``SNES`` can also solve (differential) variational inequalities with box (bound) constraints.
These are nonlinear algebraic systems with additional inequality
constraints on some or all of the variables:
:math:`L_i \le u_i \le H_i`. For example, the pressure variable cannot be negative.
Some, or all, of the lower bounds may be
negative infinity (indicated to PETSc with ``SNES_VI_NINF``) and some, or
all, of the upper bounds may be infinity (indicated by ``SNES_VI_INF``).
The commands

.. code-block::

   SNESVISetVariableBounds(SNES,Vec L,Vec H);
   SNESVISetComputeVariableBounds(SNES snes, PetscErrorCode (*compute)(SNES,Vec,Vec))

are used to indicate that one is solving a variational inequality.  Problems with box constraints can be solved with
the reduced space, ``SNESVINEWTONRSLS``, and semi-smooth ``SNESVINEWTONSSLS`` solvers.
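
A sketch of imposing a simple nonnegativity constraint (with no upper bound)
before the solve:

.. code-block::

   Vec xl, xu;

   VecDuplicate(x, &xl);
   VecDuplicate(x, &xu);
   VecSet(xl, 0.0);         /* lower bound: u_i >= 0 */
   VecSet(xu, SNES_VI_INF); /* no upper bound */
   SNESSetType(snes, SNESVINEWTONRSLS);
   SNESVISetVariableBounds(snes, xl, xu);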

The
option ``-snes_vi_monitor`` turns on extra monitoring of the active set
associated with the bounds, and ``-snes_vi_type`` allows selecting from
several VI solvers; the default is preferred.

``SNESLineSearchSetPreCheck()`` and ``SNESLineSearchSetPostCheck()`` can also be used to control properties
of the steps selected by ``SNES``.

.. _sec_snespc:

Nonlinear Preconditioning
~~~~~~~~~~~~~~~~~~~~~~~~~

The mathematical framework of nonlinear preconditioning is explained in detail in :cite:`bruneknepleysmithtu15`.
Nonlinear preconditioning in PETSc involves the use of an inner ``SNES``
instance to define the step for an outer ``SNES`` instance. The inner
instance may be extracted using

.. code-block::

   SNESGetNPC(SNES snes,SNES *npc);

and passed run-time options using the ``-npc_`` prefix. Nonlinear
preconditioning comes in two flavors: left and right. The side may be
changed using ``-snes_npc_side`` or ``SNESSetNPCSide()``. Left nonlinear
preconditioning redefines the nonlinear function as the action of the
nonlinear preconditioner :math:`\mathbf{M}`;

.. math:: \mathbf{F}_{M}(\mathbf{x}) = \mathbf{M}(\mathbf{x},\mathbf{b}) - \mathbf{x}.

Right nonlinear preconditioning redefines the nonlinear function as the
original function applied to the action of the nonlinear preconditioner;

.. math:: \mathbf{F}(\mathbf{M}(\mathbf{x},\mathbf{b})) = \mathbf{b},

which can be interpreted as putting the preconditioner into “striking
distance” of the solution by outer acceleration.
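
For example, nonlinear Richardson can serve as an inner preconditioner for
nonlinear GMRES roughly as follows (a sketch; the equivalent command-line options
would be ``-snes_type ngmres -npc_snes_type nrichardson``):

.. code-block::

   SNES npc;

   SNESSetType(snes, SNESNGMRES);     /* outer accelerator */
   SNESGetNPC(snes, &npc);
   SNESSetType(npc, SNESNRICHARDSON); /* inner nonlinear preconditioner */
   SNESSetFromOptions(snes);          /* lets -npc_ prefixed options apply */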

In addition, basic patterns of solver composition are available with the
``SNESType`` ``SNESCOMPOSITE``. This allows for two or more ``SNES``
instances to be combined additively or multiplicatively. On the command
line, a set of ``SNES`` types may be given as a comma-separated list
argument to ``-snes_composite_sneses``. There are additive
(``SNES_COMPOSITE_ADDITIVE``), additive with optimal damping
(``SNES_COMPOSITE_ADDITIVEOPTIMAL``), and multiplicative
(``SNES_COMPOSITE_MULTIPLICATIVE``) variants which may be set with

.. code-block::

   SNESCompositeSetType(SNES,SNESCompositeType);

New subsolvers may be added to the composite solver with

.. code-block::

   SNESCompositeAddSNES(SNES,SNESType);

and accessed with

.. code-block::

   SNESCompositeGetSNES(SNES,PetscInt,SNES *);

.. bibliography:: /petsc.bib
   :filter: docname in docnames