File: faq.html

<html>
<body BGCOLOR="#FFFFFF">

            <h1>Docs:&nbsp; FAQ</h1>
            

            <h4><a href="faq.html#General">General</a></h4>
            <menu>
              <li><a href="faq.html#petsc-mailing-list">How can I
                  subscribe to the PETSc&nbsp;mailing lists?</a></li>
              <li><a href="faq.html#book">Any useful books on numerical
                  computing?</a><br>
              </li>
              <li><a href="faq.html#computers">What kind of parallel
                  computers or clusters are needed to use PETSc?</a></li>
              <li><a href="faq.html#license">What kind of license is
                  PETSc released under?</a></li>
              <li><a href="faq.html#why-c">Why is PETSc programmed in C,
                  instead of Fortran or C++?</a> </li>
              <li><a href="faq.html#logging-overhead">Does all the PETSc
                  error checking and logging reduce PETSc's efficiency?</a></li>
              <li><a href="faq.html#work-efficiently">How do such a
                  small group of people manage to write and maintain
                  such a large and marvelous package as PETSc?</a></li>
              <li><a href="faq.html#complex">For complex numbers will I
                  get better performance using C or C++?&nbsp;</a></li>
              <li><a href="faq.html#different">How come when I run the
                  same program on the same number of processes I get
                  "different" answers?</a></li>
              <li><a href="faq.html#differentiterations">How come when I
                  run the same linear solver with a different number of
                  processes it takes a different number of iterations?</a></li>
              <li><a href="faq.html#newremotebranches">How come I get an
                  hg error indicating "new remote branches" might be
                  created when I try to push?</a></li>
              <li><a href="faq.html#gpus">Can PETSc use GPUs to speed up
                  the computation time?</a></li>
              <li><a href="faq.html#precision">Can I run PETSc with
                  extended precision?</a></li>
              <li><a href="faq.html#qd">Why doesn't PETSc use QD to
                  implement support for extended precision?</a></li>
            </menu>
            <h4><a href="faq.html#Installation">Installation</a></h4>
            <menu>
              <li><a href="faq.html#already-installed">How do I begin
                  using PETSc if the software has already been
                  completely built and installed by someone else?</a></li>
              <li><a href="faq.html#reduce-disk-space">The PETSc
                  distribution is SO large. How can I reduce my disk
                  space usage?</a></li>
              <li><a href="faq.html#petsc-uni">I want to use PETSc only
                  for uniprocessor programs. Must I still install and
                  use a version of MPI?</a></li>
              <li><a href="faq.html#no-x">Can I install PETSc to not use
                  X windows (either under Unix or Windows with gcc, the
                  gnu compiler)?</a></li>
              <li><a href="faq.html#use-mpi">Why do you use MPI</a>?</li>
              <li><a href="faq.html#mpi-compilers">What do I do if my
                  MPI compiler wrappers are invalid</a>?</li>
              <li><a href="faq.html#64-bit-indices">When should/can I
                  use the ./configure option --with-64-bit-indices</a>?</li>
              <li><a href="faq.html#install-petsc4py-dev">How do I
                  install petsc4py with the development PETSc</a>?</li>
              <li><a href="faq.html#gfortran">What Fortran compiler do
                  you recommend for the Apple Mac OS X?</a><br>
              </li>
            </menu>
            <h4><a href="#usage">Usage</a></h4>
            <ul>
              <li><a href="#redirectstdout">How can I redirect PETSc's
                  stdout and stderr when programming with a GUI
                  interface in Windows Developer Studio or to C++
                  streams?</a></li>
              <li><a href="#hypre">I want to use hypre boomerAMG without
                  GMRES but when I run -pc_type hypre -pc_hypre_type
                  boomeramg -ksp_type preonly I don't get a very
                  accurate answer!</a></li>
              <li><a href="#nosaij">You have AIJ and BAIJ matrix
                  formats, and SBAIJ for symmetric storage, how come no
                  SAIJ?</a></li>
              <li><a href="#domaindecomposition">How do I use PETSc for
                  domain decomposition?</a></li>
              <li><a href="#blocks">Can I create BAIJ matrices with
                  different size blocks for different block rows?</a></li>
              <li><a href="faq.html#mpi-vec-access">How do I access
                  values from a parallel PETSc vector on a different
                  process than the one that owns the values?</a></li>
              <li><a href="faq.html#mpi-vec-to-seq-vec">How do I collect
                  all the values from a parallel PETSc vector into a
                  sequential vector on each processor?</a></li>
              <li><a href="faq.html#mpi-vec-to-mpi-vec">How do I collect
                  all the values from a parallel PETSc vector into a
                  vector on the zeroth (or any particular) processor?</a></li>
              <li><a href="faq.html#sparse-matrix-ascii-format">How can
                  I read in or write out a sparse matrix in Matrix
                  Market, Harwell-Boeing, SLAPC or other ASCII format?</a></li>
              <li><a href="faq.html#setfromoptions">Does
                  TSSetFromOptions(), SNESSetFromOptions() or
                  KSPSetFromOptions() reset all the parameters I set or
                  how come TS/SNES/KSPSetXXX() don't seem to work?</a></li>
              <li><a href="faq.html#makefiles">Can I use my own
                  makefiles or rules for compiling code, rather than
                  PETSc's?</a></li>
              <li><a href="faq.html#cmake">Can I use CMake to build my
                  own project that depends on PETSc?</a></li>
              <li><a href="faq.html#carriagereturns">How can I put
                  carriage returns in PetscPrintf() statements from
                  Fortran?</a></li>
              <li><a href="faq.html#functionjacobian">Everyone knows
                  that when you code Newton's method you should compute
                  the function and its Jacobian at the same time. How
                  can one do this in PETSc?</a></li>
              <li><a href="faq.html#invertmatrix">How can I compute the
                  inverse of a PETSc matrix?</a></li>
              <li><a href="faq.html#schurcomplement">How can I compute a
                  Schur complement: Kbb - Kba *inverse(Kaa)*Kab?</a></li>
              <li><a href="faq.html#fem">Do you have examples of doing
                  unstructured grid finite element computations (FEM)
                  with PETSc?</a></li>
              <li><a href="faq.html#da_mpi_cart">The PETSc DMDA object
                  decomposes the domain differently than the
                  MPI_Cart_create() command. How can one use them
                  together?</a></li>
              <li><a href="faq.html#redistribute">When solving a system
                  with Dirichlet boundary conditions I can use
                  MatZeroRows() to eliminate the Dirichlet rows but this
                  results in a non-symmetric system. How can I apply
                  Dirichlet boundary conditions and yet keep the matrix
                  symmetric?</a></li>
              <li><a href="faq.html#matlab">How can I use PETSc with
                  MATLAB? How can I get PETSc Vecs and Mats to MATLAB or
                  vice versa?</a></li>
              <li><a href="faq.html#usingCython">How do I get started
                  with Cython so that I can extend petsc4py?</a></li>
            </ul>
            <h4><a href="faq.html#Execution">Execution</a></h4>
            <menu>
              <li><a href="faq.html#long-link-time">PETSc executables
                  are SO big and take SO long to link.</a></li>
              <li><a href="faq.html#petsc-options">PETSc has so many
                  options for my program that it is hard to keep them
                  straight.</a></li>
              <li><a href="faq.html#petsc-log-info">PETSc automatically
                  handles many of the details in parallel PDE solvers.
                  How can I understand what is really happening within
                  my program? </a></li>
              <li><a href="faq.html#efficient-assembly">Assembling large
                  sparse matrices takes a long time. What can I do to
                  make this process faster? Or, MatSetValues() is <span
                    style="font-weight: bold;">so slow, </span>what can
                  I do to make it faster?</a></li>
              <li><a href="faq.html#log-summary">How can I generate
                  performance summaries with PETSc?</a></li>
              <li><a href="faq.html#parallel-roundoff">Why do I get
                  different answers on a different numbers of
                  processors?</a></li>
              <li><a href="faq.html#mg-log">How do I know the amount of
                  time spent on each level of the solver in multigrid
                  (PCType of PCMG, -pc_type mg)?</a></li>
              <li><a href="faq.html#datafiles">Where do I get the input
                  matrices for the examples?&nbsp;</a></li>
              <li><a href="faq.html#info">When I dump some matrices and
                  vectors to binary, I seem to be generating some empty
                  files with .info extensions.&nbsp; What's the deal
                  with these?</a></li>
              <li><a href="faq.html#slowerparallel">Why is my parallel <span
                    style="font-weight: bold;">solver slower </span>than
                  the sequential solver?</a></li>
              <li><a href="faq.html#singleprecision">When using PETSc in
                  single precision mode (--with-precision=single when
                  running ./configure) are the operations done in single
                  or double precision?</a></li>
              <li><a href="faq.html#newton">Why is Newton's method
                  (SNES) not converging?</a></li>
              <li><a href="faq.html#kspdiverged">Why is the linear
                  solver (KSP) not converging?</a></li>
            </menu>
            <h4><a href="faq.html#Debugging">Debugging</a></h4>
            <menu>
              <li><a href="faq.html#debug-ibmfortran">How do I turn off
                  PETSc signal handling so I can use the -C option on
                  xlF?</a></li>
              <li><a href="faq.html#start_in_debugger-doesnotwork">How
                  do I debug if -start_in_debugger does not work on my
                  machine?</a></li>
              <li><a href="faq.html#debug-hang">How can I see where my
                  code is hanging?</a></li>
              <li><a href="faq.html#debug-inspect">How can I inspect Vec
                  and Mat values when in the debugger?</a></li>
              <li><a href="faq.html#libimf">Error while loading shared
                  libraries: libimf.so: cannot open shared object file:
                  No such file or directory.</a></li>
              <li><a href="faq.html#objecttypenotset"><font
                    face="Terminal">What does Object Type not set:
                    Argument # n mean?</font></a></li>
              <li><font face="Terminal"><a href="faq.html#split">What
                    does Error detected&nbsp;in PetscSplitOwnership()
                    about "sum of local lengths ..." mean?</a></font></li>
              <li><a href="faq.html#valgrind"><font face="Terminal">What
                    does Corrupt argument or Caught signal or SEGV or
                    segmentation violation or bus error mean? Can I use
                    valgrind to debug memory corruption issues?</font></a></li>
              <li><font face="Terminal"><a href="faq.html#zeropivot">What
                    does Detected zero pivot in LU factorization mean?</a></font></li>
              <li><font face="Terminal"><a href="faq.html#xwindows">You
                    create Draw windows or ViewerDraw windows or use
                    options -ksp_monitor_draw or -snes_monitor_draw and
                    the program seems to run OK but windows never open.</a></font></li>
              <li><font face="Terminal"><a href="faq.html#memory">The
                    program seems to use more and more memory as it
                    runs, even though you don't think you are
                    allocating more memory.</a></font></li>
              <li><a href="faq.html#key"><font face="Terminal">When
                    calling MatPartitioningApply() you get a message
                    Error! Key 16615 not found </font></a></li>
              <li><a href="faq.html#gmres"><font face="Terminal">With
                    GMRES At restart the second residual norm printed
                    does not match the first </font></a></li>
              <li><font face="Terminal"><a href="faq.html#2its">Why do
                    some Krylov methods seem to print two residual
                    norms per iteration?</a></font></li>
              <li><font face="Terminal"><a href="faq.html#dylib">Unable
                    to locate PETSc dynamic library
                    /home/balay/spetsc/lib/libg/linux/libpetsc</a></font></li>
              <li><font face="Terminal"><a href="faq.html#bisect">How do
                    I determine what update to PETSc broke my code? </a><br>
                </font></li>
            </menu>
            <h4><a href="faq.html#Shared%20Libraries">Shared Libraries</a></h4>
            <menu>
              <li><a href="faq.html#install-shared">Can I install PETSc
                  libraries as shared libraries?</a></li>
              <li><a href="faq.html#why-use-shared">Why should I use
                  shared libraries?</a></li>
              <li><a href="faq.html#link-shared">How do I link to the
                  PETSc shared libraries?</a></li>
              <li><a href="faq.html#link-regular-lib">What if I want to
                  link to the regular .a library files?</a></li>
              <li><a href="faq.html#move-shared-exec">What do I do if I
                  want to move my executable to a different machine?</a></li>
              <li><a href="#dynamic-shared">What is the deal with
                  dynamic libraries (and difference with shared
                  libraries)</a></li>
            </menu>
            <hr>
            <h3><a name="General">General</a></h3>
            <p><a name="petsc-mailing-list"><font color="#ff0000">How
                  can I subscribe to the PETSc&nbsp;mailing lists?</font>
              </a></p>
            <p>See <a
href="http://www.mcs.anl.gov/petsc/petsc-as/miscellaneous/mailing-lists.html">http://www.mcs.anl.gov/petsc/petsc-as/miscellaneous/mailing-lists.html</a></p>
            <p><a name="book"><font color="#ff0000">Any useful books on
                  numerical computing?</font></a></p>
            <p><a
                href="http://ebooks.cambridge.org/ebook.jsf?bid=CBO9780511617973">Writing
                Scientific Software: A Guide to Good Style</a></p>
            <p><a name="computers"><font color="#ff0000">What kind of
                  parallel computers or clusters are needed to use
                  PETSc?</font><br>
              </a><br>
              PETSc can be used with any kind of parallel system that
              supports MPI.<span style="font-weight: bold;"> BUT </span>for

              any decent performance one needs&nbsp;</p>
            <ul>
              <li>a <span style="font-weight: bold;">fast, low-latency
                  interconnect</span>; any Ethernet, even 10 GigE,
                simply cannot provide the needed performance.&nbsp;</li>
              <li><span style="font-weight: bold;">high per-CPU memory
                  performance</span>. Each CPU (core in multi-core
                systems) needs to have its <span style="font-weight:
                  bold;">own</span> memory bandwidth of roughly 2 or
                more gigabytes/second. For example, standard dual
                processor "PC's" will <span style="font-weight: bold;">not</span>
                provide better performance when the second processor is
                used; that is, you will not see speed-up when using
                the second processor. This is because the speed of
                sparse matrix computations is almost totally determined
                by the speed of the memory, not the speed of the CPU.
                Smart process-to-core/socket binding may help. For
                example, consider using fewer processes than cores and
                binding processes to separate sockets so that each
                process uses a different memory bus:
                <dl>
                  <dt><a
href="http://wiki.mcs.anl.gov/mpich2/index.php/Using_the_Hydra_Process_Manager#Process-core_Binding">MPICH2
                      binding with the Hydra process manager</a></dt>
                  <dd>mpiexec.hydra -n 4 --binding cpu:sockets</dd>
                  <dt><a
                      href="http://www.open-mpi.org/doc/v1.5/man1/mpiexec.1.php#sect8">Open
                      MPI binding</a></dt>
                  <dd>mpiexec -n 4 --bysocket --bind-to-socket
                    --report-bindings</dd>
                </dl>
              </li>
              <li>The software &nbsp;<a href="http://open-mx.org">http://open-mx.org</a>
                provides faster speed for ethernet systems, we have not
                tried it but it claims it can dramatically reduce
                latency and increase bandwidth on Linux system. You must
                first install this software and then install MPICH or
                Open MPI to use it.</li>
              <li>In ${PETSC_DIR} run make streams and, when requested,
                enter the number of cores your system has. The more the
                achieved memory bandwidth increases with additional
                cores, the more performance you can expect across your
                multiple cores. If the bandwidth does not increase
                significantly, then you cannot expect to get any
                improvement in parallel performance. </li>
            </ul>
            <p><a name="license"><font color="#ff0000">What kind of
                  license is PETSc released under?</font></a></p>
            <p>See the <a href="copyright.html">licensing notice</a>.</p>
            <p><a name="why-c"><font color="#ff0000">Why is PETSc
                  programmed in C, instead of Fortran or C++?</font> </a></p>
            <p>C enables us to build data structures for storing sparse
              matrices, solver information, etc. in ways that Fortran
              simply does not allow. ANSI C is a complete standard that
              all modern C compilers support. The language is identical
              on all machines. C++ is still evolving and compilers on
              different machines are not identical. Using C function
              pointers to provide data encapsulation and polymorphism
              allows us to get many of the advantages of C++ without
              using such a large and more complicated language. It would
              be natural and reasonable to have coded PETSc in C++; we
              opted to use C instead. </p>
            <p><a name="logging-overhead"><font color="#ff0000">Does all
                  the PETSc error checking and logging reduce PETSc's
                  efficiency? </font></a></p>
            <p>No. </p>
            <p><font color="#ff0000"><a name="work-efficiently">How do
                  such a small group of people manage to write and
                  maintain such a large and marvelous package as PETSc?</a>
              </font></p>
            <p>a) We work very efficiently. </p>
            <ol>
              <li>We use Emacs for all editing; the etags feature makes
                navigating and changing our source code very easy. </li>
              <li>Our manual pages are generated automatically from
                formatted comments in the code, thus alleviating the
                need for creating and maintaining manual pages. </li>
              <li>We employ automatic nightly tests of PETSc on several
                different machine architectures. This process helps us
                to discover problems the day after we have introduced
                them rather than weeks or months later. </li>
            </ol>
            <p>b) We are very careful in our design (and are constantly
              revising our design) to make the package easy to use,
              write, and maintain. </p>
            <p>c) We are willing to do the grunt work of going through
              all the code regularly to make sure that <u>all</u> code
              conforms to our interface design. We will <u>never</u>
              keep in a bad design decision simply because changing it
              will require a lot of editing; we do a lot of editing. </p>
            <p>d) We constantly seek out and experiment with new design
              ideas; we retain the useful ones and discard the rest.
              All of these decisions are based on <u>practicality</u>.
            </p>
            <p>e) Function and variable names are chosen to be very
              consistent throughout the software. Even the rules about
              capitalization are designed to make it easy to figure out
              the name of a particular object or routine. Our memories
              are terrible, so careful consistent naming puts less
              stress on our limited human RAM. </p>
            <p>f) The PETSc directory tree is carefully designed to make
              it easy to move throughout the entire package. </p>
            <p>g) Our bug reporting system, based on email to <a
                href="../documentation/bugreporting.html">petsc-maint@mcs.anl.gov</a>,
              makes it very simple to keep track of what bugs have been
              makes it very simple to keep track of what bugs have been
              found and fixed. In addition, the bug report system
              retains an archive of all reported problems and fixes, so
              it is easy to find again the fixes to previously
              discovered problems. </p>
            <p>h) We contain the complexity of PETSc by using
              object-oriented programming techniques including data
              encapsulation (this is why your program cannot, for
              example, look directly at what is inside the object Mat)
              and polymorphism (you call MatMult() regardless of whether
              your matrix is dense, sparse, parallel or sequential; you
              don't call a different routine for each format).</p>
            <p>i) We try to provide the functionality requested by our
              users.</p>
            <p>j) We never sleep. </p>
            <br>
            <p><a name="complex"><font color="#ff0000">For complex
                  numbers will I get better performance with C++?</font></a></p>
            <p>To use PETSc with complex numbers, run ./configure with
              the option --with-scalar-type=complex together with
              either --with-clanguage=c++ or, the default,
              --with-clanguage=c. In our experience the two deliver
              very similar performance (speed); if you are concerned,
              try both and see which is faster for your application.</p>
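<p>For example, the two configurations can be built as follows
(illustrative configure invocations; adjust other options to your
installation):</p>

```
# Complex scalars with the default C language:
./configure --with-scalar-type=complex --with-clanguage=c

# Complex scalars using C++:
./configure --with-scalar-type=complex --with-clanguage=c++
```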
            <p><br>
            </p>
            <p><a name="different"><font color="#ff0000">How come when I
                  run the same program on the same number of processes I
                  get a "different" answer?</font></a><span
                style="font-weight: bold;"></span></p>
            <p>Inner products
              and norms in PETSc are computed using the
              MPI_Allreduce() command. In different runs the values can
              arrive at a given process (via MPI) in a different order,
              so the order in which some floating point arithmetic
              operations are performed will be different. Since
              floating point arithmetic is not associative, the
              computed quantity may be (slightly) different. Over a run
              the many slight differences in the inner products and
              norms will affect all the computed results. It is
              important to realize that none of the computed answers
              are any less right or wrong (in fact the sequential
              computation is no more right than the parallel ones);
              they are all equally valid.</p>
            The discussion above assumes that exactly the same
            algorithm is being used for the different numbers of
            processes. When the algorithm differs with the number of
            processes (almost all preconditioner algorithms except
            Jacobi do) one expects to see, and does see, a greater
            difference in results for different numbers of processes.
            In some cases (for example the block Jacobi preconditioner)
            it may even be that the algorithm works for some numbers of
            processes and does not work for others.
            <p><a name="differentiterations"><font color="#ff0000">How
                  come when I run the same linear solver on a different
                  number of processes it takes a different number of
                  iterations?</font></a><span style="font-weight: bold;"></span></p>
            <p>The convergence
              of many of the preconditioners in PETSc, including the
              default parallel preconditioner block Jacobi, depends on
              the number of processes: the more processes, the
              (slightly) slower the convergence. This is the nature of
              iterative solvers; more parallelism means more "older"
              information is used in the solution process, hence slower
              convergence.</p>
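<p>You can observe this directly with a KSP tutorial example such as
src/ksp/ksp/examples/tutorials/ex2.c (command lines are illustrative;
compare the iteration counts reported by -ksp_monitor):</p>

```
mpiexec -n 1 ./ex2 -pc_type bjacobi -ksp_monitor
mpiexec -n 4 ./ex2 -pc_type bjacobi -ksp_monitor
```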
            <p></p>
            <p><a name="newremotebranches"><font color="#ff0000">How
                  come I get an hg error indicating "new remote
                  branches" might be created when I try to push?</font></a><span
                style="font-weight: bold;"></span></p>
            <p>Here is an example:</p>
            [linux]% hg push<br>
            pushing to https://petsc.cs.iit.edu/petsc/petsc-dev<br>
            searching for changes<br>
            abort: push creates new remote branches!<br>
            <p>This is almost always an indication that you have
              damaged your local repo. If you run hg heads and more
              than one head is listed (which causes this error), then
              you know that is what happened.</p>
            <p>Here is how it happens. You make some local changes, but
              do not commit. You pull and it aborts part way through
              because you have "uncommitted local changes". However,
              you do not hg rollback. Instead you just hg commit, which
              creates another head. This is supposed to be a feature;
              we think users should be able to disable it.</p>
            <p>Fixing this is complicated. Basically, you clone the
              repo from before you made head #2, then create the diff
              for the bad changeset that made head #2. Apply it to the
              clone and commit, then pull from the master.<br>
            </p>
            <p><a name="gpus"><font color="#ff0000">Can PETSc use GPUs
                  to speedup computations?</font></a><span
                style="font-weight: bold;"></span></p>
            <p>PETSc-dev has some support for running portions of the
              computation on Nvidia GPUs. See <a
                href="http://www.mcs.anl.gov/petsc/petsc-as/features/gpus.html">PETSc


                GPUs</a> for more information. PETSc has a Vec class
              VECCUSP that performs almost all the vector operations on
              the GPU. The Mat class MATCUSP performs matrix-vector
              products on the GPU but does not yet have matrix assembly
              on the GPU. Both of these classes run in parallel with
              MPI. All KSP methods, except KSPIBCGS, run all their
              vector operations on the GPU; thus, for example, Jacobi
              preconditioned Krylov methods run completely on the GPU.
              Preconditioners are a problem; we could do with some help
              on these. The example <a
href="http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/src/snes/examples/tutorials/ex47cu.cu.html">src/snes/examples/tutorials/ex47cu.cu</a>
              demonstrates how the nonlinear function evaluation
              can be done on the GPU.<br>
            </p>
            <p><a name="precision"><font color="#ff0000">Can I run
                  PETSc with extended precision?</font></a> Yes,
              with gcc 4.6 and later (and gfortran 4.6 and later)
              ./configure PETSc using the options
              --with-precision=__float128 --download-f2cblaslapack.
              External packages cannot be used in this mode and some
              print statements in PETSc (those that use the %G format)
              will not print correctly.<br>
            </p>
            <p><a name="qd"><font color="#ff0000">Why doesn't PETSc
                  use QD to implement support for extended precision?</font></a></p>
            <p>We tried really hard but could not. The problem is that
              the QD C++ classes, though they try to mimic the built-in
              data types such as double, are not native types and
              cannot "just be used" in a general piece of numerical
              source code; rather, the code has to be rewritten to live
              within the limitations of the QD classes.<br>
            </p>
            <h3><a name="Installation">Installation</a></h3>
            <p><a name="already-installed"><font color="#ff0000">How do
                  I begin using PETSc if the software has already been
                  completely built and installed by someone else?</font>
              </a></p>
            <p>Assuming that the PETSc libraries have been successfully
              built for a particular architecture and level of
              optimization, a new user must merely: </p>
            <p>a) Set the environmental variable PETSC_DIR to the full
              path of the PETSc home directory (for example,
              /home/username/petsc). </p>
            <p>b) Set the environmental variable PETSC_ARCH, which
              indicates the configuration on which PETSc will be
              used.&nbsp; Note that PETSC_ARCH is simply a name the
              installer used when installing the libraries. There may
              be several on a single system, like mylinux-g for the
              debug version of the library and mylinux-O for the
              optimized version, or petscdebug for the debug version
              and petscopt for the optimized version. </p>
            <p>c) Begin by copying one of the many PETSc examples (in,
              for example, petsc/src/ksp/examples/tutorials) and its
              corresponding makefile. </p>
            <p>d) See the introductory section of the PETSc users manual
              for tips on documentation. </p>
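<p>The environment setup in steps (a) and (b) can be sketched as
follows (the path and the PETSC_ARCH name are illustrative; use
whatever the installer chose):</p>

```
# bash/sh
export PETSC_DIR=/home/username/petsc
export PETSC_ARCH=mylinux-g

# csh/tcsh
setenv PETSC_DIR /home/username/petsc
setenv PETSC_ARCH mylinux-g
```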
            <p><a name="reduce-disk-space"><font color="#ff0000">The
                  PETSc distribution is SO large. How can I reduce my
                  disk space usage?</font> </a></p>
            <p>Don't install the -doc package.</p>
            <p><font color="#ff0000"><a name="petsc-uni">I want to
                  use PETSc only for uniprocessor programs. Must I still
                  install and use a version of MPI</a>?</font> </p>
            No, run ./configure with the option --with-mpi=0<br>
            <p><a name="no-x"><font color="#ff0000">Can I install PETSc
                  to not use X windows (either under Unix or Windows
                  with gcc, the gnu compiler)?</font></a></p>
            <p>Yes. Run ./configure with the additional flag --with-x=0</p>
            <p><font color="#ff0000"><a name="use-mpi">Why do you use
                  MPI</a>? </font></p>
            <p>MPI is the message-passing standard. Because it is a
              standard, it will not change over time; thus, we do not
              have to change PETSc every time the provider of the
              message-passing system decides to make an interface
              change. MPI was carefully designed by experts from
              industry, academia, and government labs to provide the
              highest quality performance and capability. For example,
              the careful design of communicators in MPI allows the easy
              nesting of different libraries; no other message-passing
              system provides this support. All of the major parallel
              computer vendors were involved in the design of MPI and
              have committed to providing quality implementations. In
              addition, since MPI is a standard, several different
              groups have already provided complete free
              implementations. Thus, one does not have to rely on the
              technical skills of one particular group to provide the
              message-passing libraries. Today, MPI is the only
              practical, portable approach to writing efficient parallel
              numerical software. </p>
            <p><font color="#ff0000"><a name="mpi-compilers">What do I
                  do if my MPI compiler wrappers are invalid</a>?</font></p>
            <p>Most MPI implementations provide compiler wrappers (such
              as mpicc) which give the include and link options
              necessary to use that version of MPI with the underlying
              compilers. These wrappers are either absent or broken in
              the MPI pointed to by --with-mpi-dir. You can rerun
              configure with the additional option
              --with-mpi-compilers=0, which will try to auto-detect
              working compilers; however, these compilers may be
              incompatible with the particular MPI build. If this fix
              does not work, run with --with-cc=c_compiler where you
              know c_compiler works with this particular MPI, and
              likewise for C++ and Fortran.</p>
            <p>&nbsp;</p>
            <p><font color="#ff0000"><a name="64-bit-indices">When
                  should/can I use the ./configure option
                  --with-64-bit-indices?</a></font></p>
            <p>By default the type that PETSc uses to index into arrays
              and keep sizes of arrays is a PetscInt defined to be a 32
              bit int. If your problem&nbsp;</p>
            <ul>
              <li>involves more than 2^31 - 1 unknowns (around 2
                billion) OR&nbsp;</li>
              <li>your matrix might contain more than 2^31 - 1 nonzeros
                on a single process&nbsp;</li>
            </ul>
            then you need to use this option. Otherwise you will get
            strange crashes.
            <p>This option can be used when you are using either 32 bit
              or 64 bit pointers. You do not need to use this option if
              you are using 64 bit pointers unless the two conditions
              above hold.&nbsp; </p>
            <p><font color="#ff0000"><a name="install-petsc4py-dev">How
                  do I install petsc4py with the development PETSc</a>?</font></p>
            <p>You can follow these steps </p>
            <ol>
              <li>grab petsc4py-dev repo [from hg]</li>
              <li>install Cython</li>
              <li>make cython [in petsc4py-dev]</li>
              <li>place petsc4py-dev in PETSC_DIR/externalpackages</li>
              <li>export ARCHFLAGS=''</li>
              <li>install PETSc with --download-petsc4py etc..</li>
            </ol>
            <p></p>
            <p>&nbsp; </p>
            <p><font color="#ff0000"><a name="gfortran">What Fortran
                  compiler do you recommend for the Apple Mac OS X?</a></font></p>
            (as of 11/6/2010) We recommend installing gfortran from <a
              href="http://hpc.sourceforge.net/">http://hpc.sourceforge.net</a>.
            They have gfortran-4.6.0 (experimental) for Snow Leopard
            (10.6) and gfortran 4.4.1 (prerelease) for Leopard (10.5).<br>
            <br>
            Please contact Apple at <a
              href="http://www.apple.com/feedback">http://www.apple.com/feedback</a>
            and urge them to bundle gfortran with future versions of
            Xcode.<br>
            <br>
            <p>&nbsp; </p>
            <hr>
            <h3><a name="Using">Using</a></h3>
            <p>&nbsp;<a name="redirectstdout"><font color="#ff0000">How
                  can I redirect PETSc's stdout and stderr when
                  programming with a GUI interface in Windows Developer
                  Studio or too C++ streams?&nbsp;</font></a> </p>
            To overload just the error messages write your own
            MyPrintError() function that does whatever you want
            (including pop up windows etc) and use it like below.<br>
            <br>
            extern "C"<br>
            {<br>
            &nbsp; &nbsp;int PASCAL WinMain(HINSTANCE inst,HINSTANCE
            dumb,LPSTR param,int show);<br>
            };<br>
            <br>
            #include "petscsys.h"<br>
            #include "mpi.h"<br>
            <br>
            <br>
            int MyPrintError(const char error[],...){<br>
            <br>
            &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
            printf("%s",error);<br>
            &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; return 0;<br>
            }<br>
            <br>
            <br>
            int main(int ac,char *av[])<br>
            {<br>
            &nbsp;&nbsp;&nbsp; char buf[256];<br>
            &nbsp;&nbsp;&nbsp; int i;<br>
            &nbsp;&nbsp;&nbsp; HINSTANCE inst;<br>
            <br>
            &nbsp;&nbsp;&nbsp; inst=(HINSTANCE)GetModuleHandle(NULL);<br>
            &nbsp;&nbsp;&nbsp; PetscErrorPrintf = MyPrintError;<br>
            <br>
            &nbsp;&nbsp;&nbsp; buf[0]=0;<br>
            &nbsp;&nbsp;&nbsp; for(i=1; i&lt;ac; i++)<br>
            &nbsp;&nbsp;&nbsp; {<br>
            &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;
            strcat(buf,av[i]);<br>
            &nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;&nbsp; strcat(buf," ");<br>
            &nbsp;&nbsp;&nbsp; }<br>
            &nbsp; &nbsp; PetscErrorCode ierr;<br>
            &nbsp; &nbsp; char* help = "Set up from main";<br>
            <br>
            &nbsp;&nbsp;&nbsp;&nbsp;ierr = PetscInitialize(&amp;ac,
            &amp;av, (char*)0, help);<br>
            <br>
            &nbsp;&nbsp;&nbsp; return
            WinMain(inst,NULL,buf,SW_SHOWNORMAL);<br>
            }<br>
            <br>
            Add this file to the project and compile with these
            preprocessor definitions:
WIN32,_DEBUG,_CONSOLE,_MBCS,USE_PETSC_LOG,USE_PETSC_BOPT_g,USE_PETSC_STACK,_AFXDLL<br>
            <br>
            And these link options: /nologo /subsystem:console
            /incremental:yes&nbsp;&nbsp; /debug /machine:I386
            /nodefaultlib:"libcmtd.lib" /nodefaultlib:"libcd.lib"
            /nodefaultlib:"mvcrt.lib" /pdbtype:sept<br>
            <br>
            Note that it is compiled and linked as if it were a console
            program. The linker will search for a main, and from there
            WinMain will start. This works with MFC templates and
            derived classes too.<br>
            <br>
            &nbsp;Note: When writing a Windows console application you
            do not need to do anything; stdout and stderr are
            automatically output to the console window.<br>
            <br>
            To change where all PETSc stdout and stderr go, reassign
            PetscVFPrintf(). To handle stdout and stderr any way you
            like, write a function such as the following<br>
            <br>
            PetscErrorCode mypetscvfprintf(FILE *fd,const char
            format[],va_list Argp)<br>
            {<br>
            &nbsp; PetscErrorCode ierr;<br>
            <br>
            &nbsp; PetscFunctionBegin;<br>
            &nbsp;&nbsp; if (fd != stdout &amp;&amp; fd != stderr) { /*
            handle regular files */<br>
            &nbsp;&nbsp;&nbsp;&nbsp;&nbsp; ierr =
            PetscVFPrintfDefault(fd,format,Argp);CHKERRQ(ierr);<br>
            &nbsp; } else {<br>
            &nbsp;&nbsp;&nbsp;&nbsp; char buff[BIG];<br>
            &nbsp;&nbsp;&nbsp;&nbsp; int&nbsp;&nbsp;&nbsp;&nbsp; length;<br>
            &nbsp;&nbsp;&nbsp;&nbsp; ierr =
PetscVSNPrintf(buff,BIG,format,&amp;length,Argp);CHKERRQ(ierr);<br>
            &nbsp;&nbsp;&nbsp;&nbsp; /* now send buff to whatever stream
            or whatever you want */<br>
            &nbsp;}<br>
            &nbsp;PetscFunctionReturn(0);<br>
            }<br>
            <br>
            and assign PetscVFPrintf = mypetscvfprintf; before
            PetscInitialize() in your main program.<br>
            <br>
            <p>&nbsp;<a name="hypre"><font color="#ff0000">I want to use
                  hypre boomerAMG without GMRES but when I run -pc_type
                  hypre -pc_hypre_type boomeramg -ksp_type preonly I
                  don't get a very accurate answer!</font></a> </p>
            You should run with -ksp_type richardson to have PETSc run
            several V or W cycles. -ksp_type preonly causes boomerAMG
            to use only one V/W cycle. You can control how many cycles
            are used in a single application of the boomerAMG
            preconditioner with -pc_hypre_boomeramg_max_iter &lt;it&gt;
            (the default is 1). You can also control the tolerance
            boomerAMG uses to decide whether to stop before max_iter
            with -pc_hypre_boomeramg_tol &lt;tol&gt; (the default is
            1.e-7). Run with -ksp_view to see all the hypre options
            used and -help | grep boomeramg to see all the command line
            options.
            <p>&nbsp;<a name="nosaij"><font color="#ff0000">You have AIJ
                  and BAIJ matrix formats, and SBAIJ for symmetric
                  storage, how come no SAIJ</font></a> </p>
            Just for historical reasons, the SBAIJ format with blocksize
            one is just as efficient as an SAIJ would be
            <p></p>
            <p>&nbsp;<a name="domaindecomposition"><font color="#ff0000">How
do


                  I use PETSc for Domain Decomposition?</font></a> </p>
            <p>PETSc includes Additive Schwarz methods in the suite of
              preconditioners. These may be activated with the runtime
              option&nbsp;<br>
              <i>-pc_type asm.</i>&nbsp;<br>
              Various other options may be set, including the degree of
              overlap<br>
              <i> -pc_asm_overlap &lt;number&gt;</i><br>
              the type of restriction/extension&nbsp;<br>
              <i>-pc_asm_type [basic,restrict,interpolate,none] </i> -
              Sets ASM type and several others. You may see the
              available ASM options by using<br>
              <i> -pc_type asm -help</i><br>
              Also, see the procedural interfaces in the manual pages,
              with names <b>PCASMxxxx()</b><br>
              and check the index of the <a
href="http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manual.pdf">users


                manual</a> for <b>PCASMxxx</b>().<br>
            </p>
            <p>PETSc also contains a domain-decomposition-inspired
              wirebasket or face-based two-level method where the
              coarse mesh to fine mesh interpolation is defined by
              solving specific local subdomain problems. It currently
              only works for 3D scalar problems on structured grids
              created with PETSc DMDAs. See the manual page for
              PCEXOTIC and src/ksp/ksp/examples/tutorials/ex45.c for an
              example.<br>
            </p>
            <p>PETSc also contains a balancing Neumann-Neumann
              preconditioner; see the manual page for PCNN. This
              requires matrices be constructed with MatCreateIS() via
              the finite element method. There are currently no examples
              that demonstrate its use.<br>
            </p>
            <hr>
            <p>&nbsp;<a name="blocks"><font color="#ff0000">Can I create
                  BAIJ matrices with different size blocks for different
                  block rows?</font></a></p>
            Sorry, this is not possible; the BAIJ format only supports a
            single fixed block size for the entire matrix. But the AIJ
            format automatically searches for matching rows and thus
            still takes advantage of the natural blocks in your matrix
            to obtain good performance. Unfortunately you cannot use
            MatSetValuesBlocked() in that case.<br>
            <br>
            <br>
            <p><a name="mpi-vec-access"><font color="#ff0000">How do I
                  access the values of a parallel PETSc vector on a
                  different process than the one that owns them?</font></a></p>
            <p> </p>
            <ul>
              <li> On each process create a local vector large enough to
                hold all the values it wishes to access <span
                  style="text-decoration: underline;"></span></li>
              <li>Create a VecScatter that scatters from the parallel
                vector into the local vectors</li>
              <li>Use VecGetArray() to access the values in the local
                vector<br>
              </li>
            </ul>
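<p>A minimal sketch of the steps above, assuming petsc-3.2 calling
sequences (error checking abbreviated; the index list idx, its length
n, and the parallel vector v are placeholders for your own data):</p>

```c
Vec         v;          /* the parallel vector, created elsewhere */
Vec         local;      /* sequential vector to receive the values */
IS          from;       /* global indices this process wants to read */
VecScatter  ctx;
PetscInt    n = 3;
PetscInt    idx[3] = {0, 10, 100};  /* global indices (placeholders) */
PetscScalar *vals;

VecCreateSeq(PETSC_COMM_SELF,n,&local);
ISCreateGeneral(PETSC_COMM_SELF,n,idx,PETSC_COPY_VALUES,&from);
/* scatter from the parallel vector v into the local vector */
VecScatterCreate(v,from,local,PETSC_NULL,&ctx);
VecScatterBegin(ctx,v,local,INSERT_VALUES,SCATTER_FORWARD);
VecScatterEnd(ctx,v,local,INSERT_VALUES,SCATTER_FORWARD);
VecGetArray(local,&vals);
/* ... use vals[0..n-1] ... */
VecRestoreArray(local,&vals);
VecScatterDestroy(&ctx);  /* petsc-3.2 Destroy routines take a pointer */
ISDestroy(&from);
VecDestroy(&local);
```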
            <br>
            <p><a name="mpi-vec-to-seq-vec"><font color="#ff0000">How do
                  I collect all the values from a parallel PETSc vector
                  into a sequential vector on each processor?</font></a></p>
            <p> </p>
            <ul>
              <li> Create the scatter context that will do the
                communication </li>
              <li> <a
href="manualpages/Vec/VecScatterCreateToAll.html">VecScatterCreateToAll</a>(v,&amp;ctx,&amp;w);</li>
            </ul>
            <table width="100%">
              <tbody>
                <tr>
                  <td valign="top" width="75%">
                    <li> Actually do the communication; this can be done
                      repeatedly as needed</li>
                    <ul>
                      <li> <a
href="manualpages/Vec/VecScatterBegin.html">VecScatterBegin</a>(ctx,v,w,INSERT_VALUES,SCATTER_FORWARD);</li>
                      <li> <a
href="manualpages/Vec/VecScatterEnd.html">VecScatterEnd</a>(ctx,v,w,INSERT_VALUES,SCATTER_FORWARD);</li>
                    </ul>
                    <li> Remember to free the scatter context when no
                      longer needed</li>
                    <ul>
                      <li> <a
href="manualpages/Vec/VecScatterDestroy.html">VecScatterDestroy</a>(ctx);</li>
                    </ul>
                    Note that this simply concatenates in the parallel
                    ordering of the vector. If you are using a vector
                    from DMCreateGlobalVector() you likely want to first
                    call DMDAGlobalToNaturalBegin/End() to scatter the
                    original vector into the natural ordering in a new
                    global vector before calling VecScatterBegin/End()
                    to scatter the natural vector onto all processes.
                    <p></p>
                    <p><a name="mpi-vec-to-mpi-vec"><font
                          color="#ff0000">How do I collect all the
                          values from a parallel PETSc vector into a
                          vector on the zeroth processor?</font></a></p>
                    <p> </p>
                    <ul>
                      <li> Create the scatter context that will do the
                        communication </li>
                      <ul>
                        <li> <a
href="manualpages/Vec/VecScatterCreateToZero.html">VecScatterCreateToZero</a>(v,&amp;ctx,&amp;w);</li>
                      </ul>
                      <li> Actually do the communication; this can be
                        done repeatedly as needed</li>
                      <ul>
                        <li> <a
href="manualpages/Vec/VecScatterBegin.html">VecScatterBegin</a>(ctx,v,w,INSERT_VALUES,SCATTER_FORWARD);</li>
                        <li> <a
href="manualpages/Vec/VecScatterEnd.html">VecScatterEnd</a>(ctx,v,w,INSERT_VALUES,SCATTER_FORWARD);</li>
                      </ul>
                      <li> Remember to free the scatter context when no
                        longer needed</li>
                      <ul>
                        <li> <a
href="manualpages/Vec/VecScatterDestroy.html">VecScatterDestroy</a>(ctx);</li>
                      </ul>
                    </ul>
                    Note that this simply concatenates in the parallel
                    ordering of the vector. If you are using a vector
                    from DMCreateGlobalVector() you likely want to first
                    call DMDAGlobalToNaturalBegin/End() to scatter the
                    original vector into the natural ordering in a new
                    global vector before calling VecScatterBegin/End()
                    to scatter the natural vector onto process 0.
                    <p>&nbsp;<br>
                      <a name="sparse-matrix-ascii-format"></a><span
                        style="color: rgb(255, 0, 0);">How can I read in
                        or write out a sparse matrix in Matrix Market,
                        Harwell-Boeing, SLAPC or other ASCII format?</span></p>
                    See the examples in src/mat/examples/tests,
                    specifically ex72.c, ex78.c, and ex32.c. You will
                    likely need to modify the code slightly to match
                    your required ASCII format. Note: never read or
                    write an ASCII matrix file in parallel. Instead,
                    for reading: read it in sequentially with a
                    standalone code based on ex72.c, ex78.c, or
                    ex32.c, then save the matrix with the binary
                    viewer PetscViewerBinaryOpen() and load the matrix
                    in parallel in your "real" PETSc program with
                    MatLoad(). For writing: save with the binary viewer
                    and then load with the sequential code to store it
                    as ASCII.<br>
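<p>The conversion step can be sketched as follows, assuming petsc-3.2
calling sequences (the ASCII reading itself is whatever you adapt from
ex72.c, and the file name is a placeholder):</p>

```c
Mat         A;       /* matrix assembled by your sequential ASCII reader */
PetscViewer viewer;

/* ... read the ASCII file and assemble A sequentially, as in ex72.c ... */

/* save in PETSc binary format */
PetscViewerBinaryOpen(PETSC_COMM_SELF,"matrix.dat",FILE_MODE_WRITE,&viewer);
MatView(A,viewer);
PetscViewerDestroy(&viewer);

/* later, in the parallel program: */
PetscViewerBinaryOpen(PETSC_COMM_WORLD,"matrix.dat",FILE_MODE_READ,&viewer);
MatCreate(PETSC_COMM_WORLD,&A);
MatLoad(A,viewer);  /* petsc-3.2 style: MatLoad(Mat,PetscViewer) */
PetscViewerDestroy(&viewer);
```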
                    <br>
                    <br>
                    <a name="setfromoptions"></a><span style="color:
                      rgb(255, 0, 0);">Does TSSetFromOptions(),
                      SNESSetFromOptions() or KSPSetFromOptions() reset
                      all the parameters I previously set or how come my
                      TS/SNES/KSPSetXXX() does not seem to work?</span>
                    <br>
                    <br>
                    If XXXSetFromOptions() is used (with -xxx_type aaaa)
                    to change the type of the object then all parameters
                    associated with the previous type are removed.
                    Otherwise it does not reset parameters.<br>
                    <br>
                    TS/SNES/KSPSetXXX() commands that set properties for
                    a particular type of object (such as
                    KSPGMRESSetRestart()) ONLY work if the object is
                    ALREADY of that type. For example, with<br>
                    KSPCreate(PETSC_COMM_WORLD,&amp;ksp);<br>
                    KSPGMRESSetRestart(ksp,10); the restart will be
                    ignored since the type has not yet been set to
                    GMRES. &nbsp;To have those values take effect you
                    should do one of the following<br>
                    <br>
                    XXXCreate(..,&amp;obj);<br>
                    <br>
                    XXXSetFromOptions(obj); &nbsp; allow setting the
                    type from the command line, if it is not on the
                    command line then the default type is automatically
                    set<br>
                    <br>
                    XXXSetYYYYY(obj,...); &nbsp; if the obj is the
                    appropriate type then the operation takes place<br>
                    <br>
                    XXXSetFromOptions(obj); &nbsp;allow user to
                    overwrite options hardwired in code (optional)<br>
                    <br>
                    The other approach is to replace the first
                    XXXSetFromOptions() call with XXXSetType(obj,type)
                    and hardwire the type at that point.<br>
                    <br>
                    <br>
                    <br>
                    <a name="makefiles"></a><span style="color: rgb(255,
                      0, 0);">Can I use my own makefiles or rules for
                      compiling code, instead of using PETSc's?</span><br>
                    <br>
                    Yes, see the section of the <a
href="manual.pdf">users


                      manual</a> called Makefiles <br>
                    <br>
                    <a name="cmake"></a><span style="color: rgb(255, 0,
                      0);">Can I use CMake to build my own project that
                      depends on PETSc? </span><br>
                    <br>
                    Use the FindPETSc.cmake module from <a
                      href="https://github.com/jedbrown/cmake-modules/">this


                      repository</a>. See the CMakeLists.txt from <a
                      href="https://github.com/jedbrown/dohp">Dohp</a>
                    for example usage. <br>
                    <br>
                    <a name="carriagereturns"></a><span style="color:
                      rgb(255, 0, 0);">How can I put carriage returns in
                      PetscPrintf() statements from Fortran?</span><br>
                    <br>
                    You can use the same notation as in C, just put a \n
                    in the string. Note that no other C format
                    instruction is supported. <br>
                    Or you can use the Fortran concatenation operator //
                    and&nbsp;char(10); for example 'some
                    string'//char(10)//'another string on the next line'<br>
                    <br>
                    <a name="functionjacobian"></a><span style="color:
                      rgb(255, 0, 0);">Everyone knows that when you code
                      Newton's method you should compute the function
                      and its Jacobian at the same time. How can one do
                      this in PETSc?<br>
                      <br>
                    </span>The update in Newton's method is computed as
                    u^{n+1} = u^n - lambda * approx-inverse[J(u^n)] *
                    F(u^n). The reason PETSc doesn't default to
                    computing both the function and the Jacobian at the
                    same time is<br>
                    <ol>
                      <li>In order to do the line search, F (u^n -
                        lambda * step) may need to be computed for
                        several lambda, the Jacobian is not needed for
                        each of those and one does not know in advance
                        which will be the final lambda until after the
                        function value is computed, so many extra
                        Jacobians may be computed.</li>
                      <li>In the final step if || F(u^p)|| satisfies the
                        convergence criteria then a Jacobian need not be
                        computed.</li>
                    </ol>
                    You are free to have your "FormFunction" compute as
                    much of the Jacobian at that point as you like, keep
                    the information in the user context (the final
                    argument to FormFunction and FormJacobian) and then
                    retrieve the information in your FormJacobian()
                    function.<br>
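                    The cost argument above can be seen in a plain-C
                    illustration (no PETSc; the scalar problem f(x) = x*x - 2
                    and the backtracking schedule are made up for this
                    sketch): a Newton iteration with a line search evaluates
                    the function many more times than the Jacobian.

```c
#include <math.h>

/* Hypothetical scalar problem: solve f(x) = x*x - 2 = 0.
   The counters record how often each routine is called. */
static double f(double x, int *fevals) { (*fevals)++; return x*x - 2.0; }
static double J(double x, int *jevals) { (*jevals)++; return 2.0*x; }

/* Newton with backtracking: one Jacobian per step, but the line search
   may evaluate f several times per step, and the final convergence
   check needs f but no Jacobian at all. */
double newton(double x, int *fevals, int *jevals)
{
    for (int it = 0; it < 50; it++) {
        double r = f(x, fevals);
        if (fabs(r) < 1e-12) break;        /* converged: no Jacobian needed */
        double step = r / J(x, jevals);
        double lambda = 1.0, xt;
        do {                               /* line search: f only */
            xt = x - lambda * step;
            lambda *= 0.5;
        } while (fabs(f(xt, fevals)) >= fabs(r) && lambda > 1e-8);
        x = xt;
    }
    return x;
}
```

                    Starting from x = 1 this converges to sqrt(2) with
                    noticeably more function than Jacobian evaluations,
                    which is exactly why PETSc keeps FormFunction and
                    FormJacobian separate.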
                    <br>
                    <span style="color: rgb(255, 0, 0);"><a
                        name="invertmatrix"></a>How can I compute the
                      inverse of a matrix in PETSc?<br>
                      <br>
                    </span>It is very expensive to compute the inverse
                    of a matrix and very rarely needed in practice. We
                    highly recommend avoiding algorithms that need
                    it. The inverse of a matrix (dense or sparse) is
                    essentially always dense, so begin by creating a
                    dense matrix B and fill it with the identity matrix
                    (ones along the diagonal), also create a dense
                    matrix X of the same size that will hold the
                    solution. Then factor the matrix you wish to invert
                    with MatLUFactor() or MatCholeskyFactor(), call the
                    result A. Then call MatMatSolve(A,B,X) to compute
                    the inverse into X. <a
                      href="faq.html#schurcomplement">See also</a>.<br>
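                    A dense plain-C analogue of this recipe (illustration
                    only, not PETSc code, and without the pivoting a real
                    factorization would use): factor A once, then solve
                    A x = e_c for each column e_c of the identity, which is
                    what MatMatSolve(A,B,X) does when B holds the identity.

```c
#include <math.h>

#define N 2

/* In-place LU factorization without pivoting (fine for this toy matrix);
   MatLUFactor() plays this role for PETSc Mats. */
static void lu(double A[N][N])
{
    for (int k = 0; k < N; k++)
        for (int i = k+1; i < N; i++) {
            A[i][k] /= A[k][k];
            for (int j = k+1; j < N; j++) A[i][j] -= A[i][k]*A[k][j];
        }
}

/* Solve L U x = b using the packed factors; analogue of one column
   of MatMatSolve(). */
static void solve(const double A[N][N], const double *b, double *x)
{
    for (int i = 0; i < N; i++) {                 /* forward: L y = b */
        x[i] = b[i];
        for (int j = 0; j < i; j++) x[i] -= A[i][j]*x[j];
    }
    for (int i = N-1; i >= 0; i--) {              /* backward: U x = y */
        for (int j = i+1; j < N; j++) x[i] -= A[i][j]*x[j];
        x[i] /= A[i][i];
    }
}

/* Invert A (overwriting it with its factors) by solving against the
   columns of the identity; the result is dense even if A was sparse. */
void invert(double A[N][N], double X[N][N])
{
    lu(A);
    for (int c = 0; c < N; c++) {
        double e[N] = {0}, col[N];
        e[c] = 1.0;
        solve((const double (*)[N])A, e, col);
        for (int r = 0; r < N; r++) X[r][c] = col[r];
    }
}
```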
                    <br>
                    <span style="color: rgb(255, 0, 0);"><a
                        name="schurcomplement"></a>How can I compute the
                      Schur complement, Kbb - Kab * inverse(Kaa) * Kba
                      in PETSc?<br>
                      <br>
                    </span>It is very expensive to compute the Schur
                    complement of a matrix and very rarely needed in
                    practice. We highly recommend avoiding algorithms
                    that need it. The Schur complement
                    of a matrix (dense or sparse) is essentially always
                    dense, so begin by<br>
                    <ul>
                      <li>form a dense matrix Kba,</li>
                      <li>also create another dense matrix T of the same
                        size,</li>
                      <li>factor the matrix Kaa with MatLUFactor() or
                        MatCholeskyFactor(), and call the result A,</li>
                      <li>call MatMatSolve(A,Kba,T),</li>
                      <li>call
                        MatMatMult(Kab,T,MAT_INITIAL_MATRIX,1.0,&amp;S),</li>
                      <li>call
                        MatAXPY(S,-1.0,Kbb,SUBSET_NONZERO_PATTERN),</li>
                      <li>finally call MatScale(S,-1.0).</li>
                    </ul>
                    As you can see, this requires a great deal of work
                    space and computation so is best avoided. For
                    example if you want to solve S x = b, instead of
                    forming S explicitly you can provide the action of S
                    on a vector efficiently with a MATSHELL and pass
                    this to a KSP solver.<br>
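                    The bullet sequence above can be mirrored in a small
                    plain-C dense sketch (illustration only, no pivoting, not
                    PETSc code); the comments mark which PETSc calls each
                    stage corresponds to.

```c
/* Dense Schur complement S = Kbb - Kab * inverse(Kaa) * Kba for small
   blocks. Matrices are row-major: Kaa is na*na, Kab is nb*na, Kba is
   na*nb, Kbb and S are nb*nb. Kaa and Kba are overwritten. */
void schur(int na, int nb, double *Kaa, const double *Kab,
           double *Kba, const double *Kbb, double *S)
{
    /* "MatLUFactor + MatMatSolve": Gaussian elimination on Kaa applied
       simultaneously to every column of Kba, so Kba becomes
       T = inverse(Kaa)*Kba. */
    for (int k = 0; k < na; k++)
        for (int i = k+1; i < na; i++) {
            double m = Kaa[i*na+k] / Kaa[k*na+k];
            for (int j = k; j < na; j++) Kaa[i*na+j] -= m*Kaa[k*na+j];
            for (int c = 0; c < nb; c++) Kba[i*nb+c] -= m*Kba[k*nb+c];
        }
    for (int i = na-1; i >= 0; i--)
        for (int c = 0; c < nb; c++) {
            double t = Kba[i*nb+c];
            for (int j = i+1; j < na; j++) t -= Kaa[i*na+j]*Kba[j*nb+c];
            Kba[i*nb+c] = t / Kaa[i*na+i];
        }
    /* "MatMatMult + MatAXPY + MatScale": S = Kbb - Kab*T. */
    for (int r = 0; r < nb; r++)
        for (int c = 0; c < nb; c++) {
            double t = 0.0;
            for (int j = 0; j < na; j++) t += Kab[r*na+j]*Kba[j*nb+c];
            S[r*nb+c] = Kbb[r*nb+c] - t;
        }
}
```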
                    <br>
                    <span style="color: rgb(255, 0, 0);"><a name="fem"></a>Do
you


                      have examples of doing unstructured grid finite
                      element computations (FEM) with PETSc?<br>
                      <br>
                    </span>There are at least two ways to write a finite
                    element code using PETSc:<br>
                    <ol>
                      <li>use the Sieve construct in PETSc; this is a
                        high-level approach that uses a small number of
                        abstractions to help you manage distributing the
                        grid data structures and computing the element
                        contributions into the matrices.</li>
                      <li>manage the grid data structure yourself and use
                        PETSc IS and VecScatter objects to perform the
                        required ghost point communication. See <a
href="http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-current/src/snes/examples/tutorials/ex10d/ex10.c.html">src/snes/examples/tutorials/ex10d/ex10.c</a></li>
                    </ol>
                    <br>
                    <br>
                    <span style="color: rgb(255, 0, 0);"><a
                        name="da_mpi_cart"></a>The PETSc DA object
                      decomposes the domain differently than the
                      MPI_Cart_create() command. How can one use them
                      together?<br>
                      <br>
                    </span>The MPI_Cart_create() first divides the mesh
                    along the z direction, then the y, then the x. DMDA
                    divides along the x, then y, then z. Thus, for
                    example, rank 1 of the processes will be in a
                    different part of the mesh for the two schemes. To
                    resolve this you can create a new MPI communicator
                    that you pass to DMDACreate() that renumbers the
                    process ranks so that each physical process shares
                    the same part of the mesh with both the DMDA and the
                    MPI_Cart_create(). The code to determine the new
                    numbering was provided by Rolf Kuiper. <br>
                    <br>
                    // the numbers of processors per direction are (int)
                    x_procs, y_procs, z_procs respectively <br>
                    // (no parallelization in direction 'dir' means
                    dir_procs = 1)<br>
                    <br>
                    MPI_Comm NewComm;<br>
                    int MPI_Rank, NewRank, x,y,z;<br>
                    <br>
                    // get rank from MPI ordering:<br>
                    MPI_Comm_rank(MPI_COMM_WORLD, &amp;MPI_Rank);<br>
                    <br>
                    // calculate coordinates of cpus in MPI ordering:<br>
                    x = MPI_Rank / (z_procs*y_procs);<br>
                    y = (MPI_Rank % (z_procs*y_procs)) / z_procs;<br>
                    z = (MPI_Rank % (z_procs*y_procs)) % z_procs;<br>
                    <br>
                    // set new rank according to PETSc ordering:<br>
                    NewRank = z*y_procs*x_procs + y*x_procs + x;<br>
                    <br>
                    // create communicator with new ranks according to
                    PETSc ordering:<br>
                    MPI_Comm_split(PETSC_COMM_WORLD, 1, NewRank,
                    &amp;NewComm);<br>
                    <br>
                    // override the default communicator (was
                    MPI_COMM_WORLD as default)<br>
                    PETSC_COMM_WORLD = NewComm;<br>
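                    Because the snippet only reorders ranks, the formula can
                    be sanity-checked without MPI: for any x_procs, y_procs,
                    z_procs the map from MPI_Cart-style ranks to PETSc-style
                    ranks must be a bijection on 0..size-1. A plain-C check
                    (the helper names here are made up):

```c
/* Map an MPI_Cart-ordered rank to the PETSc (DMDA) ordering, exactly as
   in the snippet above: decompose the rank z-fastest, recompose x-fastest. */
int petsc_rank(int mpi_rank, int x_procs, int y_procs, int z_procs)
{
    int x = mpi_rank / (z_procs*y_procs);
    int y = (mpi_rank % (z_procs*y_procs)) / z_procs;
    int z = (mpi_rank % (z_procs*y_procs)) % z_procs;
    return z*y_procs*x_procs + y*x_procs + x;
}

/* Returns 1 if the remapping is a permutation of 0..size-1, else 0. */
int is_permutation(int x_procs, int y_procs, int z_procs)
{
    int size = x_procs*y_procs*z_procs, ok = 1;
    int seen[1024] = {0};                 /* assumes size <= 1024 */
    for (int r = 0; r < size; r++) {
        int n = petsc_rank(r, x_procs, y_procs, z_procs);
        if (n < 0 || n >= size || seen[n]) { ok = 0; break; }
        seen[n] = 1;
    }
    return ok;
}
```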
                    <br>
                    <span style="color: rgb(255, 0, 0);"><a
                        name="redistribute"></a>When solving a
                      system with Dirichlet boundary conditions I can
                      use MatZeroRows() to eliminate the Dirichlet rows
                      but this results in a non-symmetric system. How
                      can I apply Dirichlet boundary conditions
                      but&nbsp; keep the matrix symmetric?<br>
                      <br>
                    </span>For nonsymmetric systems put the appropriate
                    boundary solutions in the x vector and use
                    MatZeroRows() followed by KSPSetOperators(). For
                    symmetric problems use MatZeroRowsColumns() instead.
                    If you have many Dirichlet locations you can use
                    MatZeroRows() (not MatZeroRowsColumns()) together
                    with -ksp_type preonly -pc_type redistribute (see
                    the manual page for PCREDISTRIBUTE), and PETSc will
                    repartition the parallel matrix for load balancing;
                    in this case the new matrix solved remains symmetric
                    even though MatZeroRows() is used.<br>
                    <br>
                    An alternative approach is, when assembling the
                    matrix (generating values and passing them to the
                    matrix), to never include locations for the Dirichlet
                    grid points in the vector and matrix, and instead
                    take them into account as you put the other values
                    into the load. <br>
                    <br>
                    <span style="color: rgb(255, 0, 0);"><a
                        name="matlab"></a>How can I get PETSc Vecs and
                      Mats to MATLAB or vice versa?<br>
                    </span><br>
                    <span style="color: rgb(255, 0, 0);"><br>
                    </span>There are five ways to work with PETSc and
                    MATLAB<br>
                    <br>
                    <ol>
                      <li>Using the MATLAB Engine, which allows PETSc to
                        automatically call MATLAB to perform some
                        specific computations. It does not allow MATLAB
                        to be used interactively by the user. See the <a
href="http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/Sys/PetscMatlabEngine.html">PetscMatlabEngine</a>.</li>
                      <li>To save PETSc Mats and Vecs to files that can
                        be read from MATLAB, use the <a
href="http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/Viewer/PetscViewerBinaryOpen.html">PetscViewerBinaryOpen()</a>
                        viewer and VecView() or MatView() to save
                        objects for MATLAB and VecLoad() and MatLoad()
                        to get the objects that MATLAB has saved. See
                        PetscBinaryRead.m and PetscBinaryWrite.m in
                        bin/matlab for loading and saving the objects in
                        MATLAB.</li>
                      <li>You can open a socket connection between
                        MATLAB and PETSc to allow sending objects back
                        and forth between an interactive MATLAB session
                        and a running PETSc program. See <a
href="http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/Viewer/PetscViewerSocketOpen.html">PetscViewerSocketOpen</a>()


                        for access from the PETSc side and
                        PetscOpenSocket in bin/matlab for access from
                        the MATLAB side.</li>
                      <li>You can save PETSc Vecs (not Mats) with the <a
href="http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manualpages/Viewer/PetscViewerMatlabOpen.html">PetscViewerMatlabOpen</a>()


                        viewer, which saves .mat files that can then be
                        loaded into MATLAB.</li>
                      <li>We are just beginning to develop in <a
                          href="../developers/index.html">petsc-dev</a>
                        an API to call most of the PETSc functions
                        directly from MATLAB; we could use help in
                        developing this. See
                        bin/matlab/classes/PetscInitialize.m<br>
                      </li>
                    </ol>
                    <br>
                    <span style="color: rgb(255, 0, 0);"><a
                        name="usingCython"></a>How do I get started with
                      Cython so that I can extend petsc4py?<br>
                    </span><br>
                    Steps I used:
                    <ol>
                      <li>Learn how to <a
                          href="http://docs.cython.org/src/quickstart/build.html">build


                          a Cython module</a></li>
                      <li>Go through the simple example provided by
                        Denis <a
href="http://stackoverflow.com/questions/3046305/simple-wrapping-of-c-code-with-cython">here</a>.
                        Note also the next comment that shows how to
                        create numpy arrays in the Cython and pass them
                        back.</li>
                      <li>Check out <a
                          href="http://docs.cython.org/src/tutorial/numpy.html">this


                          page</a> which tells you how to get fast
                        indexing</li>
                      <li>Have a look at the petsc4py <a
href="http://code.google.com/p/petsc4py/source/browse/src/PETSc/arraynpy.pxi">array


                          source</a></li>
                    </ol>
                    <hr> </td>
                </tr>
              </tbody>
            </table>
            <hr>
            <h3><a name="Execution">Execution</a></h3>
            <p><a name="long-link-time"><font color="#ff0000">PETSc
                  executables are SO big and take SO long to link</font>.</a></p>
            <p>We find this annoying as well. On most machines PETSc can
              use shared libraries, so executables should be much
              smaller; run ./configure with the additional option
              --with-shared-libraries. Also, if you have room, compiling
              and linking PETSc on your machine's /tmp disk or similar
              local disk, rather than over the network, will be much
              faster. </p>
            <p><a name="petsc-options"><font color="#ff0000">PETSc has
                  so many options for my program that it is hard to keep
                  them straight.</font> </a></p>
            <p>Running the PETSc program with the option -help will
              print many of the options. To print the options that
              have been specified within a program, employ -options_left
              to print any options that the user specified but were not
              actually used by the program, as well as all options used;
              this is helpful for detecting typos. </p>
            <p><a name="petsc-log-info"><font color="#ff0000">PETSc
                  automatically handles many of the details in parallel
                  PDE solvers. How can I understand what is really
                  happening within my program?</font> </a></p>
            <p>You can use the option -info to get more details about
              the solution process. The option -log_summary provides
              details about the distribution of time spent in the
              various phases of the solution process. You can run with
              -ts_view or -snes_view or -ksp_view to see what solver
              options are being used. Run with -ts_monitor -snes_monitor
              or -ksp_monitor to watch convergence of the methods.
              -snes_converged_reason and -ksp_converged_reason will
              indicate why and if the solvers have converged. <br>
            </p>
            <p><a name="efficient-assembly"><font color="#ff0000">Assembling
                  large sparse matrices takes a long time. What can I do
                  to make this process faster? or MatSetValues() is so
                  slow, what can I do to speed it up?</font></a><a
                name="slow"></a></p>
            <p>See the <a
href="http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manual.pdf#nameddest=ch_performance">Performance


                chapter of the users manual</a> for many tips on this.</p>
            <p>a) Preallocate enough space for the sparse matrix. For
              example, rather than calling
              MatCreateSeqAIJ(comm,n,n,0,PETSC_NULL,&amp;mat); call
              MatCreateSeqAIJ(comm,n,n,rowmax,PETSC_NULL,&amp;mat);
              where rowmax is the maximum number of nonzeros expected
              per row. Or if you know the number of nonzeros per row,
              you can pass this information in instead of the PETSC_NULL
              argument. See the&nbsp; manual pages for each of the
              MatCreateXXX() routines.</p>
            <p>b) Insert blocks of values into the matrix, rather than
              individual components. <br>
            </p>
            <p>Preallocation of matrix memory is crucial for good
              performance for large problems, see <br>
              <a
href="manual.pdf#sec_matsparse">manual.pdf#sec_matsparse</a><br>
              <a
href="manualpages/Mat/MatCreateMPIAIJ.html">manualpages/Mat/MatCreateMPIAIJ.html</a><br>
              <br>
              If you can set several nonzeros in a block at the same
              time, this is faster than calling MatSetValues() for each
              individual matrix entry.<br>
              <br>
              It is best to generate most matrix entries on the process
              they belong to (so they do not have to be stashed and then
              shipped to the owning process). Note: it is fine to have
              some entries generated on the "wrong" process, just not
              many.</p>
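            <p>The preallocation advice can be illustrated without PETSc:
              the two-pass pattern below, count the nonzeros of each row
              first, then allocate once and fill, is the serial analogue
              of passing an nnz-per-row array to MatCreateSeqAIJ(). The
              CSR struct and tridiagonal example are made up for this
              sketch.</p>

```c
#include <stdlib.h>

/* Compressed sparse row storage, analogous to what AIJ uses internally. */
typedef struct { int n, *rowptr, *col; double *val; } CSR;

/* Assemble the n-by-n tridiagonal matrix (-1, 2, -1) with exact
   preallocation: pass 1 counts nonzeros per row (the information you
   hand to MatCreateSeqAIJ), pass 2 fills with no reallocation. */
CSR *tridiag(int n)
{
    CSR *m = malloc(sizeof(CSR));
    m->n = n;
    m->rowptr = malloc((n+1)*sizeof(int));
    m->rowptr[0] = 0;
    for (int i = 0; i < n; i++) {         /* pass 1: count */
        int nnz = 1 + (i > 0) + (i < n-1);
        m->rowptr[i+1] = m->rowptr[i] + nnz;
    }
    int total = m->rowptr[n];
    m->col = malloc(total*sizeof(int));   /* allocate exactly once */
    m->val = malloc(total*sizeof(double));
    for (int i = 0; i < n; i++) {         /* pass 2: fill */
        int k = m->rowptr[i];
        if (i > 0)   { m->col[k] = i-1; m->val[k] = -1.0; k++; }
        m->col[k] = i; m->val[k] = 2.0; k++;
        if (i < n-1) { m->col[k] = i+1; m->val[k] = -1.0; }
    }
    return m;
}
```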
            <p><a name="log-summary"><font color="#ff0000">How can I
                  generate performance summaries with PETSc?</font> </a></p>
            <p>Use these options at runtime: -log_summary. See the <a
href="http://www.mcs.anl.gov/petsc/petsc-as/snapshots/petsc-dev/docs/manual.pdf#nameddest=ch_performance">Performance


                chapter of the users manual</a> for information on
              interpreting the summary data. If using the PETSc
              (non)linear solvers, one can also specify -snes_view or
              -ksp_view for a printout of solver info. Only the highest
              level PETSc object used needs to specify the view option.
            </p>
            <p><a name="parallel-roundoff"><font color="#ff0000">Why do
                  I get different answers on a different numbers of
                  processors?</font> </a></p>
            <p>Most commonly, you are using a preconditioner which
              behaves differently based upon the number of processors,
              such as Block-Jacobi which is the PETSc default. However,
              since computations are reordered in parallel, small
              roundoff errors will still be present with identical
              mathematical formulations. If you set a tighter linear
              solver tolerance (using -ksp_rtol), the differences will
              decrease.</p>
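            <p>The reordering effect is easy to reproduce in plain C:
              floating-point addition is not associative, so summing the
              same terms in a different order (which is what a different
              process count amounts to) gives a slightly different
              result. The series below is chosen arbitrarily for
              illustration.</p>

```c
/* Sum the series 1/k^2 for k = 1..n in each direction; forward and
   backward orderings accumulate roundoff differently, just as different
   parallel partitions do. The backward (small-to-large) order is the
   more accurate of the two. */
double sum_forward(int n)
{
    double s = 0.0;
    for (int k = 1; k <= n; k++) s += 1.0/((double)k*k);
    return s;
}

double sum_backward(int n)
{
    double s = 0.0;
    for (int k = n; k >= 1; k--) s += 1.0/((double)k*k);
    return s;
}
```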
            <p><a name="mg-log"><font color="#ff0000">How do I know the
                  amount of time spent on each level of the multigrid
                  solver/preconditioner?</font></a></p>
            <p>Run with -log_summary and -pc_mg_log</p>
            <p><font><a name="datafiles"><font color="#ff0000">Where do
                    I get the input matrices for the examples?<br>
                  </font></a></font></p>
            <p>Some makefiles use &nbsp;${DATAFILESPATH}/matrices/medium
              and other files. These test matrices in PETSc binary
              format can be found with anonymous ftp from <a
                href="http://ftp.mcs.anl.gov">ftp.mcs.anl.gov</a> in the
              directory pub/petsc/matrices. They are not included with
              the PETSc distribution in the interest of reducing the
              distribution size.</p>
            <p><font><a name="info"><font color="#ff0000">When I dump
                    some matrices and vectors to binary, I seem to be
                    generating some empty files with .info
                    extensions.&nbsp; What's the deal with these?<br>
                  </font></a></font> </p>
            <p>PETSc binary viewers put some additional information into
              .info files, such as the matrix block size; it is
              harmless, but if you really don't want it you can use
              -viewer_binary_skip_info or
              PetscViewerBinarySkipInfo(). Note that you need to call
              PetscViewerBinarySkipInfo() before
              PetscViewerFileSetName(); in other words you cannot use
              PetscViewerBinaryOpen() directly. </p>
            <p><font><a name="slowerparallel"><font color="#ff0000">Why
                    is my parallel solver&nbsp;slower than my sequential
                    solver?<br>
                  </font></a></font> </p>
            This can happen for many reasons:<br>
            <ul>
              <li>First make sure it is truly the time in KSPSolve()
                that is slower (by running the code with <a
                  href="faq.html#log-summary">-log_summary</a>). Often
                the slower time is in <a href="faq.html#slow">generating

                  the matrix</a> or some other operation.</li>
              <li>There must be enough work for each process to
                outweigh the communication time. We recommend an
                absolute minimum of about 10,000 unknowns per process;
                better is 20,000 or more.</li>
              <li>Make sure the&nbsp; <a href="faq.html#computers">communication


                  speed of the parallel computer</a> is good enough for
                parallel solvers.</li>
              <li>Check the number of solver iterations with the
                parallel solver against the sequential solver. Most
                preconditioners require more iterations when used on
                more processes; this is particularly true for block
                Jacobi, the default parallel preconditioner. You can
                try -pc_type asm (<a
href="manualpages/PC/PCASM.html">PCASM</a>),
                whose iteration count scales a bit better for more
                processes. You may also consider multigrid
                preconditioners like <a
href="manualpages/PC/PCMG.html">PCMG</a>
                or BoomerAMG in <a
href="manualpages/PC/PCHYPRE.html">PCHYPRE</a>.</li>
            </ul>
            <p><font><a name="singleprecision"><font color="#ff0000">When
using
PETSc
in


                    single precision mode (--with-precision=single when
                    running ./configure) are the operations done in
                    single or double precision? </font></a></font> </p>
            PETSc does NOT do any explicit conversion of single
            precision to double before performing computations; it
            depends on the hardware and compiler what happens. For
            example, the compiler could choose to put the single
            precision numbers into the usual double precision registers
            and then use the usual double precision floating point unit,
            or it could use SSE2 instructions that work directly on the
            single precision numbers. It is a bit of a mystery what
            decisions get made sometimes. There may be compiler flags in
            some circumstances that can affect this.<br>
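            <br>
            The storage effect, independent of what the hardware does
            with registers, is easy to demonstrate in plain C:
            accumulating into a float loses updates once the running sum
            is about 2^24 times larger than the increment, while a
            double keeps them.

```c
/* Accumulate n copies of x in single and in double precision; the
   float sum stalls once the running total reaches 2^24 = 16777216,
   because 16777216.0f + 1.0f rounds back to 16777216.0f. */
float sum_float(int n, float x)
{
    float s = 0.0f;
    for (int i = 0; i < n; i++) s += x;
    return s;
}

double sum_double(int n, double x)
{
    double s = 0.0;
    for (int i = 0; i < n; i++) s += x;
    return s;
}
```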
            <br>
            <p><font><a name="newton"><font color="#ff0000">Why is
                    Newton's method (SNES) not converging?</font></a></font></p>
            Newton's method may not converge for many reasons; here are
            some of the most common.<br>
            <ul>
              <li>The Jacobian is wrong (or correct in sequential but
                not in parallel).</li>
              <li>The linear system is not solved or is not solved
                accurately enough.</li>
              <li>The Jacobian system has a singularity that the linear
                solver is not handling.</li>
              <li>There is a bug in the function evaluation routine.</li>
              <li>The function is not continuous or does not have
                continuous first derivatives (e.g. phase change or TVD
                limiters).</li>
              <li>The equations may not have a solution (e.g. limit
                cycle instead of a steady state) or there may be a
                "hill" between the initial guess and the steady state
                (e.g. reactants must ignite and burn before reaching a
                steady state, but the steady-state residual will be
                larger during combustion).</li>
            </ul>
            Here are some of the ways to help debug lack of convergence
            of Newton.<br>
            <ul>
              <li>Run with the options <tt>-snes_monitor
                  -ksp_monitor_true_residual -snes_converged_reason
                  -ksp_converged_reason</tt>.
                <ul>
                  <li>If the linear solve does not converge, check if
                    the Jacobian is correct, then see <a
                      href="faq.html#kspdiverged">this question</a>.</li>
                  <li>If the preconditioned residual converges, but the
                    true residual does not, the preconditioner may be
                    singular.</li>
                  <li>If the linear solve converges well, but the line
                    search fails, the Jacobian may be incorrect.</li>
                </ul>
              </li>
              <li>Run with <tt>-pc_type lu</tt> or <tt>-pc_type svd</tt>
                to see if the problem is a poor linear solver</li>
              <li>Run with <tt>-mat_view</tt> or <tt>-mat_view_draw</tt>
                to see if the Jacobian looks reasonable</li>
              <li>Run with <tt>-snes_type test -snes_test_display</tt>
                to see if the Jacobian you are using is wrong. Compare
                the output when you add <tt>-mat_fd_type ds</tt> to see
                if the result is sensitive to the choice of differencing
                parameter.</li>
              <li>Run with <tt>-snes_mf_operator -pc_type lu</tt> to
                see if the Jacobian you are using is wrong. If the
                problem is too large for a direct solve, try <tt>-snes_mf_operator

                  -pc_type ksp -ksp_ksp_rtol 1e-12</tt>. Compare the
                output when you add <tt>-mat_mffd_type ds</tt> to see
                if the result is sensitive to choice of differencing
                parameter.</li>
              <li>Run on one processor to see if the problem is only in
                parallel.</li>
              <li>Run with <tt>-snes_ls_monitor</tt> to see if the line
                search is failing (this is usually a sign of a bad
                Jacobian); use <tt>-info</tt> instead in PETSc 3.1 and
                older versions</li>
              <li>Run with <tt>-info</tt> to get more detailed
                information on the solution process.</li>
            </ul>
            Here are some ways to help the Newton process if everything
            above checks out<br>
            <ul>
              <li>Run with grid sequencing (<tt>-snes_grid_sequence</tt>
                is all you need if working with a DM) to generate a
                better initial guess on your finer mesh</li>
              <li>Run with quad precision (./configure with
                --with-precision=__float128 --download-f2cblaslapack
                with PETSc 3.2 and later and recent versions of the GNU
                compilers)</li>
              <li>Change the units (nondimensionalization), boundary
                condition scaling, or formulation so that the Jacobian
                is better conditioned.</li>
              <li>Mollify features in the function that do not have
                continuous first derivatives (often occurs when there
                are "if" statements in the residual evaluation, e.g.
                phase change or TVD limiters). Use a variational
                inequality solver (SNESVI) if the discontinuities are of
                fundamental importance.</li>
              <li>Try a trust region method (<tt>-snes_type tr</tt>; you
                may have to adjust the parameters).</li>
              <li>Run with some continuation parameter from a point
                where you know the solution, see TSPSEUDO for
                steady-states.</li>
              <li>There are homotopy solver packages like PHCpack that
                can get you all possible solutions (and tell you that it
                has found them all) but those are not scalable and
                cannot solve anything but small problems.<br>
              </li>
            </ul>
            <p><a name="kspdiverged"><font color="#ff0000">Why is the
                  linear solver (KSP) not converging?</font> </a></p>
            <p>Always run with <tt>-ksp_converged_reason
                -ksp_monitor_true_residual</tt> when trying to learn why
              a method is not converging. Common reasons for KSP not
              converging are</p>
            <ul>
              <li>The equations are singular by accident (e.g. forgot to
                impose boundary conditions). Check this for a small
                problem using <tt>-pc_type svd -pc_svd_monitor</tt>.</li>
              <li>The equations are intentionally singular (e.g.
                constant null space), but the Krylov method was not
                informed, see KSPSetNullSpace().</li>
              <li>The equations are intentionally singular and
                KSPSetNullSpace() was used, but the right hand side is
                not consistent. You may have to call
                MatNullSpaceRemove() on the right hand side before
                calling KSPSolve().</li>
              <li>The equations are indefinite so that standard
                preconditioners don't work. Usually you will know this
                from the physics, but you can check with <tt>-ksp_compute_eigenvalues

                  -ksp_gmres_restart 1000 -pc_type none</tt>. For simple
                saddle point problems, try <tt>-pc_type fieldsplit
                  -pc_fieldsplit_type schur
                  -pc_fieldsplit_detect_saddle_point</tt>. For more
                difficult problems, read the literature to find robust
                methods and ask petsc-users@mcs.anl.gov or
                petsc-maint@mcs.anl.gov if you want advice about how to
                implement them.</li>
              <li>The preconditioner is too weak or is unstable. See if
                <tt>-pc_type asm -sub_pc_type lu</tt> improves the
                convergence rate. If GMRES is losing too much progress
                in the restart, see if longer restarts help <tt>-ksp_gmres_restart

                  300</tt>. If a transpose is available, try <tt>-ksp_type

                  bcgs</tt> or other methods that do not require a
                restart. (Note that convergence with these methods is
                frequently erratic.)</li>
              <li>The preconditioner is nonlinear (e.g. a nested
                iterative solve), try <tt>-ksp_type fgmres</tt> or <tt>-ksp_type

                  gcr</tt>.</li>
              <li>The matrix is very ill-conditioned. You can run with
                -pc_type none -ksp_type gmres
                -ksp_monitor_singular_value -ksp_gmres_restart 100 to
                get approximations to the condition number of the
                operator. Or run with -pc_type &lt;somepc&gt; to get
                condition number estimates of the preconditioned
                operator.&nbsp;</li>
              <ul>
                <li> Try to improve it by choosing the relative scaling
                  of components/boundary conditions.</li>
                <li>Try <tt>-ksp_diagonal_scale -ksp_diagonal_scale_fix</tt>.
                  <br>
                </li>
                <li>Perhaps change the formulation of the problem to
                  produce more friendly algebraic equations.</li>
              </ul>
              <li>The matrix is nonlinear (e.g. evaluated using finite
                differencing of a nonlinear function). Try different
                differencing parameters, <tt>./configure
                  --with-precision=__float128 --download-f2cblaslapack</tt>,
                check if it converges in "easier" parameter regimes.</li>
              <li>A symmetric method is being used for a non-symmetric
                problem.</li>
              <li>Classical Gram-Schmidt is becoming unstable, try <tt>-ksp_gmres_modifiedgramschmidt</tt>
                or use a method that orthogonalizes differently, e.g. <tt>-ksp_type

                  gcr</tt>.</li>
            </ul>
            <hr>
            <h3><a name="Debugging">Debugging</a></h3>
            <p><a name="debug-ibm"><font color="#ff0000">How do I turn
                  off PETSc signal handling so I can use the -C option
                  on xlF? </font></a></p>
            <p>Immediately after calling PetscInitialize() call
              PetscPopSignalHandler().</p>
            <p>Some Fortran compilers, including IBM's xlf and xlF,
              have a compile option (-C for IBM's) that causes
              all array accesses in Fortran to be checked to ensure
              they are in bounds. This is a great feature but does
              require that the array dimensions be set explicitly, not
              with a *.</p>
            <p><a name="start_in_debugger-doesnotwork"><font
                  color="#ff0000">How do I debug if -start_in_debugger
                  does not work on my machine?</font> </a></p>
            <p>On newer Mac OS X machines, you must be in the admin
              group to use the debugger.</p>
            <p>On newer Ubuntu Linux machines, you must disable <a
                href="https://wiki.ubuntu.com/Security/Features#ptrace">ptrace_scope</a>
              with "echo 0 | sudo tee /proc/sys/kernel/yama/ptrace_scope"
              to get -start_in_debugger working.<br>
            </p>
            <p>If -start_in_debugger does not work on your OS, for
              a uniprocessor job just run the debugger directly,
              for example: gdb ex1. You can also use Totalview, which
              is a good graphical parallel debugger.<br>
            </p>
            <p><a name="debug-hang"><font color="#ff0000">How do I see
                  where my code is hanging? </font></a></p>
            <p>You can use the -start_in_debugger option to start all
              processes in the debugger (each will come up in its own
              xterm) or run in Totalview. Then use cont (for continue)
              in each xterm. Once you are sure that the program is
              hanging, hit control-c in each xterm and then use 'where'
              to print a stack trace for each process.</p>
            <p><a name="debug-inspect"><font color="#ff0000">How can I
                  inspect Vec and Mat values when in the debugger? </font></a></p>
            <p>I will illustrate this with gdb, but it should be similar
              on other debuggers. You can look at local Vec values
              directly by obtaining the array. For a Vec v, we can print
              all local values using</p>
            <p>(gdb) p ((Vec_Seq*) v-&gt;data)-&gt;array[0]@v-&gt;map.n</p>
            <p>However, this becomes much more complicated for a matrix.
              Therefore, it is advisable to use the default viewer to
              look at the object. For a Vec v and a Mat m, this would be</p>
            <p>(gdb) call VecView(v, 0)</p>
            <p>(gdb) call MatView(m, 0)</p>
            <p>or with a communicator other than MPI_COMM_WORLD,</p>
            <p>(gdb) call MatView(m, PETSC_VIEWER_STDOUT_(m-&gt;comm))<br>
            </p>
            <p>Totalview 8.8.0 has a new feature that allows libraries
              to provide their own code to display objects in the
              debugger. Thus in theory each PETSc object, Vec, Mat etc
              could have custom code to print values in the object. We
              have only done this for the most elementary display of Vec
              and Mat. See the routine TV_display_type() in
              src/vec/vec/interface/vector.c for an example of how these
              may be written. Contact us if you would like to add more.<br>
            </p>
            <p>&nbsp;<span style="color: rgb(255, 0, 0);"><a
                  name="libimf"></a>Error while loading shared
                libraries: libimf.so: cannot open shared object file: No
                such file or directory.</span></p>
            <p>The Intel compilers use shared libraries (like libimf)
              that cannot be found by default at run time. When using
              the Intel compilers (and running the resulting code) you
              must make sure that the proper Intel initialization
              scripts are run. This is usually done by putting some
              code into your .cshrc, .bashrc, .profile etc. file. On
              batch systems that do not read your initialization files
              (like .cshrc) you must include the initialization calls
              in your batch file submission.</p>
            For example, on my Mac using csh I have the following in my
            .cshrc file<br>
            <br>
            source /opt/intel/cc/10.1.012/bin/iccvars.csh<br>
            source /opt/intel/fc/10.1.012/bin/ifortvars.csh<br>
            source /opt/intel/idb/10.1.012/bin/idbvars.csh<br>
            <br>
            in my .profile I have<br>
            <br>
            source /opt/intel/cc/10.1.012/bin/iccvars.sh<br>
            source /opt/intel/fc/10.1.012/bin/ifortvars.sh<br>
            source /opt/intel/idb/10.1.012/bin/idbvars.sh<br>
            <br>
            <p><span style="color: rgb(255, 0, 0);"><a
                  name="objecttypenotset">What does Object Type not set:
                  Argument # n mean?</a></span></p>
            Many operations on PETSc objects require that the specific
            type of the object be set before the operation is
            performed. You must call XXXSetType() or XXXSetFromOptions()
            before you make the offending call. For example,
            MatCreate(comm,&amp;A); MatSetValues(A,...); will not work.
            You must add MatSetType(A,...) or MatSetFromOptions(A,...)
            before the call to MatSetValues().
            <p></p>
            <a name="split"></a><font style="color: rgb(255, 0, 0);"
              face="Terminal">What does </font><font style="color:
              rgb(255, 0, 0);" color="#ff0000"> </font><font
              style="color: rgb(255, 0, 0);" face="Terminal">Error
              detected&nbsp;in PetscSplitOwnership() about "sum of local
              lengths ..." mean?<br>
              <br>
            </font><font color="#ff0000"><span style="color: rgb(0, 0,
                0);">In a previous call to VecSetSizes(), MatSetSizes(),
                VecCreateXXX() or MatCreateXXX() you passed in local and
                global sizes that do not make sense for the correct
                number of processors. For example if you pass in a local
                size of 2 and a global size of 100 and run on two
                processors, this cannot work since the sum of the local
                sizes is 4, not 100.</span></font><br>
            <br>
            <a name="valgrind"></a><font style="color: rgb(255, 0, 0);"
              face="Terminal">What does </font><font style="color:
              rgb(255, 0, 0);" face="Terminal">Corrupt argument or
              Caught signal or SEQV or segmentation violation or bus
              error mean? Can I use valgrind to debug memory corruption
              issues? </font><br>
            <br>
            <font color="#ff0000"><span style="color: rgb(0, 0, 0);">Sometimes
                it can mean an argument to a function is invalid. In
                Fortran this may be caused by forgetting to list an
                argument in the call, especially the final ierr.<br>
                &nbsp; &nbsp;&nbsp; <br>
                Otherwise it is usually caused by memory corruption;
                that is, somewhere the code is writing out of array
                bounds. To track this down rerun the debug version of
                the code with the option -malloc_debug. Occasionally the
                code may crash only with the optimized version; in that
                case run the optimized version with -malloc_debug. If
                you determine the problem is from memory corruption you
                can put the macro CHKMEMQ in the code near the crash to
                determine exactly what line is causing the problem.<br>
                &nbsp; <br>
                If -malloc_debug does not help: on GNU/Linux and Apple
                Mac OS X machines - you can try using&nbsp;<a
                  href="http://valgrind.org/">http://valgrind.org </a>to


                look for memory corruption.</span></font><br>
            - Make sure valgrind is installed<br>
            - We recommend building PETSc with "--download-mpich=1
            --with-debugging=1" [debugging is enabled by default]<br>
            - Compile your application code with this build of PETSc<br>
            - Run with valgrind using:<br>
            <font color="#ff1a8b">${PETSC_DIR}/bin/petscmpiexec
              -valgrind -n NPROC PETSCPROGRAMNAME -malloc off
              PROGRAMOPTIONS</font><br>
            or invoke valgrind directly with:<br>
            <font color="#ff1a8b">mpiexec -n NPROC valgrind
              --tool=memcheck -q --num-callers=20
              --log-file=valgrind.log.%p PETSCPROGRAMNAME -malloc off
              PROGRAMOPTIONS</font><br>
            <br>
            Notes:<br>
            - option '--with-debugging=1' enables valgrind to give stack
            trace with additional source-file:line-number info.<br>
            - option '--download-mpich=1' gives a valgrind-clean MPI -
            hence the recommendation.<br>
            - Regarding other MPI implementations: Open MPI should also
            work, but MPICH1 will not.<br>
            - if '--download-mpich=1' is used, mpiexec will be in
            PETSC_ARCH/bin<br>
            - option '--log-file=valgrind.log.%p' tells valgrind to store
            the output from each process in a different file [as %p, i.e.
            the PID, is different for each MPI process].<br>
            - On Apple you need the additional valgrind option
            '--dsymutil=yes'<br>
            - memcheck will not find certain array accesses that violate
            static array declarations, so if memcheck runs clean you can
            try --tool=exp-ptrcheck instead. <br>
            <font style="color: rgb(255, 0, 0);" face="Terminal"><a
                name="zeropivot"></a>What does </font><font
              style="color: rgb(255, 0, 0);" color="#ff0000"> </font><font
              style="color: rgb(255, 0, 0);" face="Terminal">Detected
              zero pivot in LU factorization mean?</font><br>
            <br>
            <font color="#ff0000"><span style="color: rgb(0, 0, 0);">A
                zero pivot in LU, ILU, Cholesky, or ICC sparse
                factorization does not always mean that the matrix is
                singular. You can use '-pc_factor_shift_type NONZERO
                -pc_factor_shift_amount [amount]' or
                '-pc_factor_shift_type POSITIVE_DEFINITE';
                '-[level]_pc_factor_shift_type NONZERO
                -pc_factor_shift_amount [amount]' &nbsp; </span></font><font
              style="color: rgb(0, 0, 0);" color="#ff0000"> or
              '-[level]_pc_factor_shift_type POSITIVE_DEFINITE' </font><font
              style="color: rgb(0, 0, 0);" color="#ff0000">to prevent
              the zero pivot. [level] is "sub" when lu, ilu, cholesky,
              or icc are employed in each individual block of the
              bjacobi or ASM preconditioner; and [level] is "mg_levels"
              or "mg_coarse" when lu, ilu, cholesky, or icc are used
              inside multigrid smoothers or for the coarse grid solver.
              See PCFactorSetShiftType(), PCFactorSetShiftAmount().</font> <font
              style="color: rgb(0, 0, 0);" color="#ff0000"> </font>
            <p style="color: rgb(0, 0, 0);">This error can also happen
              if your matrix is singular, see KSPSetNullSpace() for how
              to handle this.</p>
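For the singular case, the calls fit together roughly as follows. This is only a sketch against the PETSc 3.2 API names used in this FAQ (ksp, b, and x are assumed to have been created already); consult the manual pages for the exact signatures:

```c
/* sketch: tell KSP about a constant null space and make b consistent */
MatNullSpace nullsp;
MatNullSpaceCreate(PETSC_COMM_WORLD, PETSC_TRUE, 0, PETSC_NULL, &nullsp);
KSPSetNullSpace(ksp, nullsp);
MatNullSpaceRemove(nullsp, b, PETSC_NULL); /* remove null-space component from rhs */
KSPSolve(ksp, b, x);
MatNullSpaceDestroy(&nullsp);
```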
            If this error occurs in the zeroth row of the matrix, it is
            likely you have an error in the code that generates the
            matrix.<a><span style="font-family: Terminal;"><br>
              </span></a>
            <p></p>
            <p><a><span style="font-family: Terminal;"></span><span
                  style="color: rgb(255, 0, 0);"></span></a><a
                name="xwindows"></a><span style="color: rgb(255, 0, 0);">You
                create Draw windows or ViewerDraw windows or use options
                -ksp_monitor_draw or -snes_monitor_draw and the program
                seems to run OK but windows never open.</span></p>
            <p>The libraries were compiled without support for X
              windows.<font color="#ff0000"> </font>Make sure that
              ./configure was run with the option --with-x=1<br>
            </p>
            <p style="color: rgb(255, 0, 0);"><font face="Terminal"><a
                  name="memory"></a><a>The program seems to use more and
                  more memory as it runs, even though you don't think
                  you are allocating more memory.<br>
                </a></font></p>
            <p style="color: rgb(255, 0, 0);"><a><font face="Terminal">
                </font></a></p>
            <p style="color: rgb(0, 0, 0);"><a><font face="Terminal">Problem:
Possibly


                  some of the following:</font></a></p>
            <a><font face="Terminal">
                <ol style="color: rgb(0, 0, 0);">
                  <li>You are creating new PETSc objects but never
                    freeing them.</li>
                  <li>There is a memory leak in PETSc or your code. </li>
                  <li>Something much more subtle: (if you are using
                    Fortran). When you declare a large array in Fortran,
                    the operating system does not allocate all the
                    memory pages for that array until you start using
                    the different locations in the array. Thus, in a
                    code, if at each step you start using later values
                    in the array your virtual memory usage will
                    "continue" to increase as measured by ps or top. </li>
                  <li>You are running with the -log, -log_mpe, or
                    -log_all option. In this case a great deal of
                    logging information is stored in memory until the
                    conclusion of the run.</li>
                  <li>You are linking with the MPI profiling libraries;
                    these cause logging of all MPI activities. Another
                    symptom is that at the conclusion of the run it may
                    print some message about writing log files. </li>
                </ol>
                <p style="color: rgb(0, 0, 0);">Cures:</p>
                <ol>
                  <li style="color: rgb(0, 0, 0);">Run with the
                    -malloc_debug option and -malloc_dump. Or use the
                    commands PetscMallocDump() and PetscMallocLogDump()
                    sprinkled in your code to track memory that is
                    allocated and not later freed. Use the commands
                    PetscMallocSpace() and PetscGetResidentSetSize() to
                    monitor memory allocated and total memory used as
                    the code progresses. </li>
                  <li style="color: rgb(0, 0, 0);">This is just the way
                    Unix works and is harmless.</li>
                  <li style="color: rgb(0, 0, 0);">Do not use the -log,
                    -log_mpe, or -log_all option, or use
                    PLogEventDeactivate() or PLogEventDeactivateClass(),
                    PLogEventMPEDeactivate() to turn off logging of
                    specific events. </li>
                  <li style="color: rgb(0, 0, 0);">Make sure you do not
                    link with the MPI profiling libraries. <br>
                  </li>
                </ol>
              </font><font face="Terminal"></font></a><font
              face="Terminal"><a name="key"></a>When calling
              MatPartitioningApply() you get a message Error! Key 16615
              not found<br>
            </font>
            <p></p>
            <p>The graph of the matrix you are using is not
              symmetric. You must use
              symmetric matrices for partitioning.<br>
            </p>
            <p style="color: rgb(255, 0, 0);"><font face="Terminal"><a
                  name="gmres"></a>With GMRES At restart the second
                residual norm printed does not match the first</font></p>
            <p style="color: rgb(255, 0, 0);"><font face="Terminal"> <span
                  style="color: rgb(0, 0, 0);">26 KSP Residual norm
                  3.421544615851e-04 </span><br style="color: rgb(0, 0,
                  0);">
                <span style="color: rgb(0, 0, 0);">27 KSP Residual norm
                  2.973675659493e-04 </span><br style="color: rgb(0, 0,
                  0);">
                <span style="color: rgb(0, 0, 0);">28 KSP Residual norm
                  2.588642948270e-04 </span><br style="color: rgb(0, 0,
                  0);">
                <span style="color: rgb(0, 0, 0);">29 KSP Residual norm
                  2.268190747349e-04 </span><br style="color: rgb(0, 0,
                  0);">
                <span style="color: rgb(0, 0, 0);">30 KSP Residual norm
                  1.977245964368e-04</span><br style="color: rgb(0, 0,
                  0);">
                <span style="color: rgb(0, 0, 0);">30 KSP Residual norm
                  1.994426291979e-04 &lt;----- At restart the residual
                  norm is printed a second time </span> </font></p>
            <p style="color: rgb(0, 0, 0);"><font face="Terminal">Problem:
                Actually this is not surprising. GMRES computes the
                norm of the residual at each iteration via a recurrence
                relation between the norms of the residuals at the
                previous iterations and quantities computed at the
                current iteration; it does not compute it directly as
                || b - A x^{n} ||. Sometimes, especially with an
                ill-conditioned matrix, or computation of the
                matrix-vector product via differencing, the residual
                norms computed by GMRES start to "drift" from the
                correct values. At the restart, we compute the residual
                norm directly, hence the difference printed. The
                drifting, if it remains small, is harmless (it does not
                affect the accuracy of the solution that GMRES
                computes). </font></p>
            <font face="Terminal">
              <p style="color: rgb(0, 0, 0);">Cure: There really isn't
                a cure, but if you use a more powerful preconditioner
                the drift will often be smaller and less noticeable. Or
                if you are running matrix-free you may need to tune the
                matrix-free parameters.<br>
              </p>
              <p style="color: rgb(0, 0, 0);"><font face="Terminal"><font
                    face="Terminal"><span style="color: rgb(255, 0, 0);"><a
                        name="2its"></a>Why do some Krylov methods seem
                      to print two residual norms per iteration?<br>
                    </span></font></font></p>
              <p style="color: rgb(0, 0, 0);"><font face="Terminal"><font
                    face="Terminal"><font face="Terminal">&gt; 1198 KSP
                      Residual norm 1.366052062216e-04<br>
                      &gt; 1198 KSP Residual norm 1.931875025549e-04<br>
                      &gt; 1199 KSP Residual norm 1.366026406067e-04<br>
                      &gt; 1199 KSP Residual norm
                      1.931819426344e-04&nbsp;</font></font> </font></p>
              <p><font face="Terminal">Some
                  Krylov methods, for example tfqmr, actually have a
                  "sub-iteration" of size 2 inside the loop; each of
                  the two substeps has its own matrix-vector product
                  and application of the preconditioner and updates the
                  residual approximations. This is why you get this
                  "funny" output where it looks like there are two
                  residual norms per iteration. You can also think of
                  it as twice as many iterations.<br>
                </font></p>
              <font face="Terminal">
                <p><font face="Terminal"><span style="color: rgb(255, 0,
                      0);"><a name="dylib"></a>Unable to locate PETSc
                      dynamic library
                      /home/balay/spetsc/lib/libg/linux/libpetsc <br>
                    </span></font></p>
              </font>
              <p></p>
            </font>
            <p></p>
            <p>When using dynamic libraries, the libraries cannot be
              moved after they are installed. This can also happen on
              clusters where the paths are different on the (run)
              nodes than on the (compile) front-end.<font
                color="#ff0000"> </font>Do not use dynamic or shared
              libraries: run ./configure with
              --with-shared-libraries=0 --with-dynamic-loading=0<br>
            </p>
            <p><font face="Terminal"> </font></p>
            <p style="color: rgb(0, 0, 0);"><font face="Terminal"><font
                  face="Terminal"> </font></font></p>
            <p><font face="Terminal"><font face="Terminal"><font
                    face="Terminal"><span style="color: rgb(255, 0, 0);"><a
                        name="bisect"></a></span></font><font
                    face="Terminal"><a><span style="color: rgb(255, 0,
                        0);">How do I determine what update to PETSc
                        broke my code?</span></a></font><font
                    face="Terminal"><span style="color: rgb(255, 0, 0);"><br>
                    </span></font></font></font></p>
            <font face="Terminal"><font face="Terminal"> </font>
              <p></p>
            </font>
            <p></p>
            <p>If at some point [in petsc code history] you had a
              working code but the latest petsc code broke it, it is
              possible to determine the petsc code change that caused
              this behavior. This is achieved by:<br>
            </p>
            <ul>
              <li>using <a href="http://mercurial.selenic.com/">Mercurial</a>
                DVCS to access petsc-dev sources [and BuildSystem
                sources]</li>
              <li>knowing the changeset number [in mercurial] for the <span
                  style="font-weight: bold;">known working</span>
                version of petsc</li>
              <li>knowing the changeset number [in mercurial] for the <span
                  style="font-weight: bold;">known broken</span> version
                of petsc</li>
              <li>using <a
                  href="http://mercurial.selenic.com/wiki/BisectExtension">bisect</a>
                functionality of mercurial</li>
            </ul>
            This process can be as follows:<br>
            <ul>
              <li>get petsc-dev and BuildSystem sources<br>
                <span style="color: rgb(153, 51, 153);">&nbsp; hg clone
                  http://petsc.cs.iit.edu/petsc/petsc-dev</span><br>
                <span style="color: rgb(153, 51, 153);">&nbsp; hg clone
                  http://petsc.cs.iit.edu/petsc/BuildSystem
                  petsc-dev/config/BuildSystem</span></li>
              <li>Find the <span style="font-weight: bold;">good</span>
                and <span style="font-weight: bold;">bad</span> markers
                to start the bisection process. This can be done either
                by checking 'hg log' or 'hg view' or<a
                  href="http://petsc.cs.iit.edu/petsc/petsc-dev">
                  http://petsc.cs.iit.edu/petsc/petsc-dev</a> or <a
                  href="http://petsc.cs.iit.edu/petsc/BuildSystem">http://petsc.cs.iit.edu/petsc/BuildSystem</a>
                or the web history of petsc-release clones. Let's say
                the known bad changeset is&nbsp;21af4baa815c and the
                known good changeset is 5ae5ab319844<br>
              </li>
              <li>Now start the bisection process with these known
                revisions. [build PETSc, and test your code to confirm
                known good/bad behavior]<br>
                <span style="color: rgb(153, 51, 153);">&nbsp; hg update
                  -C 21af4baa815c</span><br>
                <span style="color: rgb(153, 51, 153);">&nbsp; hg update
                  -C --date "&lt;`hg parent --template '{date|date}'`"
                  -R config/BuildSystem</span><br>
                &nbsp; &lt;build/test/confirm-bad&gt;<br style="color:
                  rgb(153, 51, 153);">
                <span style="color: rgb(153, 51, 153);">&nbsp; hg bisect
                  --bad</span><br style="color: rgb(153, 51, 153);">
                <span style="color: rgb(153, 51, 153);">&nbsp; hg update
                  -C 5ae5ab319844</span><br>
                <span style="color: rgb(153, 51, 153);">&nbsp; hg update
                  -C --date "&lt;`hg parent --template '{date|date}'`"
                  -R config/BuildSystem</span><br>
                &nbsp; &lt;build/test/confirm-good&gt;<br style="color:
                  rgb(153, 51, 153);">
                <span style="color: rgb(153, 51, 153);">&nbsp; hg bisect
                  --good</span><br>
                <span style="color: rgb(153, 51, 153);">&nbsp; hg update
                  -C --date "&lt;`hg parent --template '{date|date}'`"
                  -R config/BuildSystem</span><br>
              </li>
              <li>&nbsp;Now until done - keep bisecting, building PETSc,
                and testing your code with it and determine if the code
                is working or not. i.e<br>
                &nbsp; if &lt;build&gt; broken:<br>
                &nbsp;&nbsp;&nbsp; <span style="color: rgb(153, 51,
                  153);">hg bisect --skip</span><br>
                <span style="color: rgb(153, 51, 153);">&nbsp;&nbsp;&nbsp;


                  hg update -C --date "&lt;`hg parent --template
                  '{date|date}'`" -R config/BuildSystem</span><br>
                &nbsp; if &lt;test&gt; good:<br>
                &nbsp;&nbsp;&nbsp; <span style="color: rgb(153, 51,
                  153);">hg bisect --good</span><br>
                <span style="color: rgb(153, 51, 153);">&nbsp;&nbsp;&nbsp;


                  hg update -C --date "&lt;`hg parent --template
                  '{date|date}'`" -R config/BuildSystem</span><br>
                &nbsp; elseif &lt;test&gt; bad:<br>
                &nbsp;&nbsp;&nbsp; <span style="color: rgb(153, 51,
                  153);">hg bisect --bad</span><br>
                <span style="color: rgb(153, 51, 153);">&nbsp;&nbsp;&nbsp;


                  hg update -C --date "&lt;`hg parent --template
                  '{date|date}'`" -R config/BuildSystem</span><br>
                <br>
                Notice the <span style="color: rgb(153, 51, 153);">hg
                  update -C --date "&lt;`hg parent --template
                  '{date|date}'`" -R config/BuildSystem</span> after
                each 'hg update' or 'hg bisect'. This keeps
                BuildSystem in sync with petsc-dev. If this step is
                skipped and BuildSystem falls out of sync with
                petsc-dev, configure will keep failing.</li>
              <li>After something like 5-15 iterations, 'hg bisect'
                will pinpoint the exact code change that caused the
                difference in application behavior.<br>
                <br>
              </li>
            </ul>
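            <p>The bisection loop above can be condensed into a
              sketch like the following, where &lt;build-petsc&gt;
              and &lt;run-test&gt; are placeholders for your own
              build and test commands; only the hg invocations are
              taken from the steps above.</p>

```shell
# Sketch of the bisection loop; <build-petsc> and <run-test> are
# placeholders for your own build and test commands.
while true; do
  if ! <build-petsc>; then
    hg bisect --skip          # broken build: cannot judge this revision
  elif <run-test>; then
    hg bisect --good          # test passed at this revision
  else
    hg bisect --bad           # test failed at this revision
  fi
  # After every bisect step, bring BuildSystem back in sync with petsc-dev
  hg update -C --date "<`hg parent --template '{date|date}'`" -R config/BuildSystem
done
```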
            <p> </p>
            <p> </p>
            <hr>
            <h3><a name="Shared Libraries">Shared Libraries</a></h3>
            <p><font color="#ff0000"><a name="install-shared">Can I
                  install PETSc libraries as shared libraries</a>?</font></p>
            <p>Yes.&nbsp;Use the ./configure option
              --with-shared-libraries</p>
            <p><a name="why-use-shared"><font color="#ff0000">Why should
                  I use shared libraries?</font></a></p>
            <p>When you link to shared libraries, the function symbols
              from the shared libraries are not copied into the
              executable, so the executable is
              considerably smaller than when using regular libraries.
              This helps in a couple of ways: <br>
              &nbsp;&nbsp;&nbsp; 1) it saves disk space when more than
              one executable is created, and &nbsp; <br>
              &nbsp;&nbsp;&nbsp; 2) it improves link time immensely,
              because the linker has to write a much smaller file
              (the executable) to disk.</p>
            <p><font color="#ff0000"><a name="link-shared">How do I link
                  to the PETSc shared libraries</a>?</font></p>
            <p>By default, the compiler should pick up the shared
              libraries instead of the regular ones. Nothing special
              should be done for this.</p>
            <p><font color="#ff0000"><a name="link-regular-lib">What If
                  I want to link to the regular .a library files</a>?</font></p>
            <p>You must run ./configure without the option
              --with-shared-libraries (you can use a different
              PETSC_ARCH for this build so you can easily switch between
              the two).</p>
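            <p>As a sketch, one way to keep both variants available
              is to configure the same source tree twice under
              different PETSC_ARCH names (the arch names below are
              arbitrary):</p>

```shell
# Shared-library build
./configure PETSC_ARCH=arch-shared --with-shared-libraries
make PETSC_ARCH=arch-shared all

# Static (.a) build of the same source tree
./configure PETSC_ARCH=arch-static
make PETSC_ARCH=arch-static all
```

            <p>Switching between the two is then just a matter of
              setting PETSC_ARCH when building your application.</p>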
            <p><a name="move-shared-exec"><font color="#ff0000">What do
                  I do if I want to move my executable to a different
                  machine?</font></a></p>
            <p>You would also need access to the shared
              libraries on the new machine. The alternative is to
              build the executable without shared libraries by first
              deleting the shared libraries, and then creating the
              executable.&nbsp;</p>
            <p><a name="dynamic-shared"><font color="#ff0000">What is
                  the deal with dynamic libraries (and difference
                  between shared libraries)</font></a></p>
            <p>PETSc libraries are installed as dynamic libraries when
              the ./configure flag --with-dynamic-loading is used. The
              difference from shared libraries is in how the
              libraries are used: the program loads the library with
              dlopen() and looks up functions with dlsym(). This
              moves the resolution of function names from link time
              to run time - i.e. to when dlopen()/dlsym() are
              called.</p>
            <p>When using dynamic libraries, the PETSc libraries
              cannot be moved to a different location after they are
              built. </p>
            <p>&nbsp; </p>
</body>
</html>