File: compilation.md

---
authors: MG
---

# How to compile ABINIT

This tutorial explains how to compile ABINIT including the external dependencies
without relying on pre-compiled libraries, package managers and root privileges.
You will learn how to use the standard **configure** and **make** Linux tools
to build and install your own software stack including the MPI library and the associated
*mpif90* and *mpicc* wrappers required to compile MPI applications.

It is assumed that you already have a standard Unix-like installation
that provides the basic tools needed to build software from source (Fortran/C compilers and *make*).
The changes required for MacOsX are briefly mentioned when needed.
Windows users should install [cygwin](https://cygwin.com/index.html), which
provides a POSIX-compatible environment,
or, alternatively, use the [Windows Subsystem for Linux](https://docs.microsoft.com/en-us/windows/wsl/about).
Note that the procedure described in this tutorial has been tested on Linux and MacOsX only, hence
feedback and suggestions from Windows users are welcome.

!!! tip

    In the last part of the tutorial, we discuss more advanced topics such as using **modules** in supercomputing centers,
    compiling and linking with the **intel compilers** and the **MKL library** as well as **OpenMP threads**.
    You may want to jump directly to this section if you are already familiar with software compilation.

In the following, we will make extensive use of the bash shell, hence familiarity with the terminal is assumed.
For a quick introduction to the command line, please consult
this [Ubuntu tutorial](https://ubuntu.com/tutorials/command-line-for-beginners#1-overview).
If this is the first time you use the **configure && make** approach to build software,
we **strongly** recommend reading this
[guide](https://www.codecoffee.com/software-installation-configure-make-install/)
before proceeding with the next steps.
If, on the other hand, you are not interested in compiling all the components from source,
you may want to consider the following alternatives:

* Compilation with external libraries provided by apt-based Linux distributions (e.g. **Ubuntu**).
  More info available [here](../INSTALL_Ubuntu).

* Compilation with external libraries on **Fedora/RHEL/CentOS** Linux distributions.
  More info available [here](../INSTALL_CentOS).

* Homebrew bottles or macports for **MacOsX**.
  More info available [here](../INSTALL_MacOS).

* Automatic compilation and generation of modules on clusters with **EasyBuild**.
  More info available [here](../INSTALL_EasyBuild).

* Compiling Abinit using the **internal fallbacks** and the *build-abinit-fallbacks.sh* script
  automatically generated by *configure* if the mandatory dependencies are not found.

* Using precompiled binaries provided by conda-forge (for Linux and MacOsX users).

Before starting, it is also worth reading this document prepared by Marc Torrent
that introduces important concepts and provides a detailed description of the configuration options
supported by the ABINIT build system.
Note that these slides were written for Abinit v8, hence some examples must be adapted
to the build system of version 9; the document nevertheless remains a valuable source of information.

<embed src="https://school2019.abinit.org/images/lectures/abischool2019_installing_abinit_lecture.pdf"
type="application/pdf" width="100%" height="480px">


!!! important

    The aim of this tutorial is to teach you how to compile code from source but we cannot guarantee
    that these recipes will work out of the box on every possible architecture.
    We will do our best to explain how to **setup your environment** and how to **avoid the typical pitfalls**
    but we cannot cover all the possible cases.

    Fortunately, the internet provides lots of resources.
    Search engines and stackoverflow are your best friends and in some cases one can find the solution
    by just **copying the error message in the search bar**.
    For more complicated issues, you can ask for help on the [Abinit forum](https://forum.abinit.org)
    or contact the sysadmin of your cluster but remember to provide enough information about your system
    and the problem you are encountering.

## Getting started

Since ABINIT is written in Fortran, we need a **recent Fortran compiler**
that supports the **F2003 specifications** as well as a C compiler.
At the time of writing ( |today| ), the C++ compiler is optional and required only for advanced features
that are not treated in this tutorial.

In what follows, we will be focusing on the GNU toolchain, i.e. *gcc* for C and *gfortran* for Fortran.
These "sequential" compilers are adequate if you don't need to compile parallel MPI applications.
Compiling MPI code, indeed, requires the installation of **additional libraries**
and **specialized wrappers** (*mpicc*, *mpif90* or *mpiifort*) that replace the "sequential" compilers.
This very important scenario is covered in more detail in the next sections.
For the time being, we mainly focus on the compilation of sequential applications/libraries.

First of all, let's make sure the **gfortran** compiler is installed on your machine
by issuing the following command in the terminal:

```sh
which gfortran
/usr/bin/gfortran
```

!!! tip

    The **which** command returns the **absolute path** of the executable.
    This Unix tool is extremely useful to pinpoint possible problems and we will use it
    a lot in the rest of this tutorial.

In our case, we are lucky that the Fortran compiler is already installed in */usr/bin* and we can immediately
use it to build our software stack.
If *gfortran* is not installed, you may want to use the package manager provided by your
Linux distribution to install it.
On Ubuntu, for instance, use:

```sh
sudo apt-get install gfortran
```

To get the version of the compiler, use the `--version` option:

```sh
gfortran --version
GNU Fortran (GCC) 5.3.1 20160406 (Red Hat 5.3.1-6)
Copyright (C) 2015 Free Software Foundation, Inc.
```

Starting with version 9, ABINIT requires gfortran >= v5.4.
Consult the release notes to check whether your gfortran version is supported by the latest ABINIT releases.
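
If you only need the version number, e.g. in a script that checks this requirement,
the `-dumpversion` option prints a short version string
(recent gcc releases may report only the major version):

```sh
gfortran -dumpversion
5.3.1
```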

Now let's check whether **make** is already installed using:

```sh
which make
/usr/bin/make
```

Hopefully, the C compiler *gcc* is already installed on your machine:

```sh
which gcc
/usr/bin/gcc
```

At this point, we have all the basic building blocks needed to compile ABINIT from source and we
can proceed with the next steps.

!!! tip

    Life gets hard if you are a MacOsX user as Apple does not officially
    support Fortran (😞) so you need to install *gfortran* and *gcc* either via
    [homebrew](https://brew.sh/) or [macports](https://www.macports.org/).
    Alternatively, one can install *gfortran* using one of the standalone DMG installers
    provided by the [gfortran-for-macOS project](https://github.com/fxcoudert/gfortran-for-macOS/releases).
    Note also that MacOsX users will need to install **make** via [Xcode](https://developer.apple.com/xcode/).
    More info can be found on [this page](INSTALL_MacOS).

## How to compile BLAS and LAPACK

BLAS and LAPACK represent the workhorse of many scientific codes and an optimized implementation
is crucial for achieving **good performance**.
In principle, this step can be skipped as any decent Linux distribution already provides
pre-compiled versions but, as already mentioned in the introduction, we are geeks and we
prefer to compile everything from source.
Moreover, the compilation of BLAS/LAPACK represents an excellent exercise
that gives us the opportunity to discuss some basic concepts that
will prove very useful in the other parts of this tutorial.

First of all, let's create a new directory inside your `$HOME` (let's call it **local**) using the command:

```sh
cd $HOME && mkdir local
```

!!! tip

    $HOME is a standard shell variable that stores the absolute path to your home directory.
    Use:

    ```sh
    echo My home directory is $HOME
    ```

    to print the value of the variable.

    The **&&** syntax is used to chain commands together, such that the next command is executed if and only
    if the preceding command exits without errors (more precisely, with a return code of 0).
    We will use this trick a lot in the other examples to reduce the number of lines to type
    so that the examples can easily be cut and pasted into the terminal.
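
    A minimal illustration of this behaviour:

    ```sh
    true && echo "this line is printed because the first command succeeded"
    false && echo "this line is never printed since false returns a nonzero exit code"
    ```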


Now create the `src` subdirectory inside $HOME/local with:

```sh
cd $HOME/local && mkdir src && cd src
```

The *src* directory will be used to store the source packages and compile the code,
whereas executables and libraries will be installed in `$HOME/local/bin` and `$HOME/local/lib`, respectively.
We use `$HOME/local` because we are working as **normal users** and we cannot install software
in `/usr/local`, where root privileges are required and a `sudo make install` would be needed.
Moreover, working inside `$HOME/local` allows us to keep our software stack well separated
from the libraries installed by our Linux distribution so that we can easily test new libraries and/or
different versions without affecting the software stack installed by our distribution.

Now download the tarball from the [openblas website](https://www.openblas.net/) with:

```sh
wget https://github.com/xianyi/OpenBLAS/archive/v0.3.7.tar.gz
```

If *wget* is not available, use *curl* with the `-o` option to specify the name of the output file as in:

```sh
curl -L https://github.com/xianyi/OpenBLAS/archive/v0.3.7.tar.gz -o v0.3.7.tar.gz
```

!!! tip

    To get the URL associated with an HTML link inside the browser, hover the mouse pointer over the link,
    press the right mouse button and then select `Copy Link Address` to copy the link to the system clipboard.
    Then paste the text in the terminal by selecting the `Paste` action in the menu
    activated by clicking on the right button.
    Alternatively, one can press the central button (mouse wheel) or use CMD + V on MacOsX.
    This trick is quite handy to fetch tarballs directly from the terminal.


Uncompress the tarball with:

```sh
tar -xvf v0.3.7.tar.gz
```

then `cd` to the directory with:

```sh
cd OpenBLAS-0.3.7
```

and execute

```sh
make -j2 USE_THREAD=0 USE_LOCKING=1
```

to build the single thread version.

!!! tip

    By default, *openblas* activates threads (see [FAQ page](https://github.com/xianyi/OpenBLAS/wiki/Faq#multi-threaded))
    but in our case we prefer to use the sequential version as Abinit is mainly optimized for MPI.
    The `-j2` option tells *make* to use 2 processes to build the code in order to speed up the compilation.
    Adjust this value according to the number of **physical cores** available on your machine.

At the end of the compilation, you should get the following output (note **Single threaded**):

```md
 OpenBLAS build complete. (BLAS CBLAS LAPACK LAPACKE)

  OS               ... Linux
  Architecture     ... x86_64
  BINARY           ... 64bit
  C compiler       ... GCC  (command line : cc)
  Fortran compiler ... GFORTRAN  (command line : gfortran)
  Library Name     ... libopenblas_haswell-r0.3.7.a (Single threaded)

To install the library, you can run "make PREFIX=/path/to/your/installation install".
```

<!--
As a side note, a compilation with plain *make* would give:

```md
  Library Name     ... libopenblas_haswellp-r0.3.7.a (Multi threaded; Max num-threads is 12)
```

that indicates that the openblas library now supports threads.
-->

You may have noticed that, in this particular case, *make* is not just building the library but is also
running **unit tests** to validate the build.
This means that if *make* completes successfully, we can be confident that the build is OK
and we can proceed with the installation.
Other packages use a more standard approach and provide a **make check** option that should be executed after *make*
in order to run the test suite before installing the package.

To install *openblas* in $HOME/local, issue:

```sh
make PREFIX=$HOME/local/ install
```

At this point, we should have the following **include files** installed in $HOME/local/include:

```sh
ls $HOME/local/include/
cblas.h  f77blas.h  lapacke.h  lapacke_config.h  lapacke_mangling.h  lapacke_utils.h  openblas_config.h
```

and the following **libraries** installed in $HOME/local/lib:

```sh
ls $HOME/local/lib/libopenblas*

/home/gmatteo/local/lib/libopenblas.a     /home/gmatteo/local/lib/libopenblas_haswell-r0.3.7.a
/home/gmatteo/local/lib/libopenblas.so    /home/gmatteo/local/lib/libopenblas_haswell-r0.3.7.so
/home/gmatteo/local/lib/libopenblas.so.0
```

Files ending with `.so` are **shared libraries** (`.so` stands for shared object) whereas
`.a` files are **static libraries**.
When compiling source code that relies on external libraries, the name of the library
(without the *lib* prefix and the file extension) as well as the directory where the library is located must be passed
to the linker.

The name of the library is usually specified with the `-l` option while the directory is given by `-L`.
According to these simple rules, in order to compile source code that uses BLAS/LAPACK routines,
one should use the following options:

    -L$HOME/local/lib -lopenblas

We will use a similar syntax to help the ABINIT *configure* script locate the external linear algebra library.

!!! important

    You may have noticed that we haven't specified the file extension in the library name.
    If both static and shared libraries are found, the linker gives preference to linking with the shared library
    unless the `-static` option is used.
    **Dynamic is the default behaviour** on several Linux distributions so we assume dynamic linking
    in what follows.

If you are compiling C or Fortran code that requires include files with the declaration of prototypes and the definition
of named constants, you will need to specify the location of the **include files** via the `-I` option.
In this case, the previous options should be augmented by:

```sh
-L$HOME/local/lib -lopenblas -I$HOME/local/include
```

This approach is quite common for C code, where `.h` files must be included to compile properly.
It is less common for modern Fortran code, in which include files are usually replaced by `.mod` files,
*i.e.* Fortran modules produced by the compiler, whose location is usually specified via the `-J` option.
Still, the `-I` option is also valuable when compiling Fortran applications, as libraries
such as FFTW and MKL rely on (Fortran) include files whose location should be passed to the compiler
via `-I` instead of `-J`;
see also the official [gfortran documentation](https://gcc.gnu.org/onlinedocs/gfortran/Directory-Options.html#Directory-Options).

Do not worry if this rather technical point is not clear to you.
Any external library has its own requirements and peculiarities and the ABINIT build system provides several options
to automate the detection of external dependencies and the final linkage.
The most important thing is that you are now aware that the compilation of ABINIT requires
the correct specification of `-L`, `-l` for libraries, `-I` for include files, and `-J` for Fortran modules.
We will elaborate more on this topic when we discuss the configuration options supported by the ABINIT build system.
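
As a quick sanity check of these options, we can compile a minimal Fortran program
that calls the BLAS routine *dgemm* and link it against our *openblas* installation
(a hypothetical test of our own; the file name is arbitrary):

```sh
# Create a tiny Fortran source that multiplies two 2x2 matrices with BLAS.
cat > test_blas.f90 << 'EOF'
program test_blas
  implicit none
  double precision :: a(2,2), b(2,2), c(2,2)
  a = 1.0d0; b = 2.0d0; c = 0.0d0
  ! c = 1.0 * a*b + 0.0 * c, computed by the BLAS routine dgemm
  call dgemm('N', 'N', 2, 2, 2, 1.0d0, a, 2, b, 2, 0.0d0, c, 2)
  print *, "dgemm gives c(1,1) =", c(1,1), "(expected 4.0)"
end program test_blas
EOF

gfortran test_blas.f90 -o test_blas -L$HOME/local/lib -lopenblas
./test_blas
```

If the linker cannot find the library, double-check the directory passed via `-L`;
if the executable compiles but refuses to start, read the discussion of `$LD_LIBRARY_PATH` below.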

<!--
https://gcc.gnu.org/onlinedocs/gcc/Link-Options.html

```sh
nm $HOME/local/lib/libopenblas.so
```
to list the symbols presented in the library.
-->

Since we have installed the package in a **non-standard directory** ($HOME/local),
we need to update two important shell variables: `$PATH` and `$LD_LIBRARY_PATH`.
If this is the first time you hear about $PATH and $LD_LIBRARY_PATH, please take some time to learn
about the meaning of these environment variables.
More information about `$PATH` is available [here](http://www.linfo.org/path_env_var.html).
See [this page](https://tldp.org/HOWTO/Program-Library-HOWTO/shared-libraries.html) for `$LD_LIBRARY_PATH`.

Add these two lines at the end of your `$HOME/.bash_profile` file:

```sh
export PATH=$HOME/local/bin:$PATH

export LD_LIBRARY_PATH=$HOME/local/lib:$LD_LIBRARY_PATH
```

then execute:

```sh
source $HOME/.bash_profile
```

to activate these changes without having to start a new terminal session.
Now use:

```sh
echo $PATH
echo $LD_LIBRARY_PATH
```

to print the value of these variables.
On my Linux box, I get:

```sh
echo $PATH
/home/gmatteo/local/bin:/usr/local/bin:/usr/bin:/usr/local/sbin:/usr/sbin

echo $LD_LIBRARY_PATH
/home/gmatteo/local/lib:
```

Note how `/home/gmatteo/local/bin` has been **prepended** to the previous value of $PATH.
From now on, we can invoke any executable located in $HOME/local/bin by just typing
its **base name** in the shell without having to enter the full path.
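
In the same spirit, the *ldd* tool (*otool -L* on MacOsX) lists the shared libraries a binary
depends on and shows whether the dynamic linker can resolve them, for example:

```sh
ldd $HOME/local/lib/libopenblas.so
```

Entries marked as *not found* indicate libraries that cannot be resolved,
usually because `$LD_LIBRARY_PATH` is not properly set.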

!!! warning
	Using:

	```sh
	export PATH=$HOME/local/bin
	```

	is not a very good idea as the shell will stop working. Can you explain why?

!!! tip

    MacOsX users should replace `LD_LIBRARY_PATH` with `DYLD_LIBRARY_PATH`.

    Remember also that one can use `env` to print all the environment variables defined
    in your session and pipe the results to other Unix tools.
    Try e.g.:

    ```sh
    env | grep LD_
    ```

    to print only the variables whose name starts with **LD_**.

We conclude this section with another tip.
From time to time, some compilers complain or do not display important messages
because **language support is improperly configured** on your computer.
Should this happen, we recommend exporting these two variables:

```sh
export LANG=C
export LC_ALL=C
```

This will reset the language support to its most basic defaults and will make sure that you get
all messages from the compilers in English.


## How to compile libxc

At this point, it should not be so difficult to compile and install
[libxc](https://www.tddft.org/programs/libxc/), a library that provides
many useful XC functionals (PBE, meta-GGA, hybrid functionals, etc).
Libxc is written in C and can be built using the standard `configure && make` approach.
No external dependencies are needed, except for basic C libraries that are available
on any decent Linux distribution.

Let's start by fetching the tarball from the internet:

```sh
# Get the tarball.
# Note the -O option used in wget to specify the name of the output file

cd $HOME/local/src
wget http://www.tddft.org/programs/libxc/down.php?file=4.3.4/libxc-4.3.4.tar.gz -O libxc.tar.gz
tar -zxvf libxc.tar.gz
```

Now configure the package with the standard `--prefix` option
to **specify the location** where all the libraries, executables, include files,
Fortran modules, man pages, etc. will be installed when we execute `make install`
(the default is `/usr/local`):

```sh
cd libxc-4.3.4 && ./configure --prefix=$HOME/local
```

Finally, build the library, run the tests and install it with:

```sh
make -j2
make check && make install
```

At this point, we should have the following include files in $HOME/local/include:

```sh
[gmatteo@bob libxc-4.3.4]$ ls ~/local/include/*xc*
/home/gmatteo/local/include/libxc_funcs_m.mod  /home/gmatteo/local/include/xc_f90_types_m.mod
/home/gmatteo/local/include/xc.h               /home/gmatteo/local/include/xc_funcs.h
/home/gmatteo/local/include/xc_f03_lib_m.mod   /home/gmatteo/local/include/xc_funcs_removed.h
/home/gmatteo/local/include/xc_f90_lib_m.mod   /home/gmatteo/local/include/xc_version.h
```

where the `.mod` files are Fortran modules generated by the compiler that are needed
when compiling Fortran source using the *libxc* Fortran API.

!!! warning

    The `.mod` files are **compiler- and version-dependent**.
    In other words, one cannot use these `.mod` files to compile code with a different Fortran compiler.
    Moreover, you should not expect to be able to use modules compiled with
    a **different version of the same compiler**, especially if the major version has changed.
    This is one of the reasons why the version of the Fortran compiler employed
    to build our software stack is very important.


Finally, we have the following static libraries installed in ~/local/lib:

```sh
ls ~/local/lib/libxc*
/home/gmatteo/local/lib/libxc.a   /home/gmatteo/local/lib/libxcf03.a   /home/gmatteo/local/lib/libxcf90.a
/home/gmatteo/local/lib/libxc.la  /home/gmatteo/local/lib/libxcf03.la  /home/gmatteo/local/lib/libxcf90.la
```

where:

  * **libxc** is the C library
  * **libxcf90** is the library with the F90 API
  * **libxcf03** is the library with the F2003 API

Both *libxcf90* and *libxcf03* depend on the C library, where most of the work is done.
At present, ABINIT requires only the F90 API, so we should use

    -L$HOME/local/lib -lxcf90 -lxc

for the libraries and

    -I$HOME/local/include

for the include files.

Note how `libxcf90` comes **before** the C library `libxc`.
This is done on purpose as `libxcf90` depends on `libxc` (the Fortran API calls the C implementation).
Inverting the order of the libraries will likely trigger errors (**undefined references**)
in the last step of the compilation when the linker tries to build the final application.

Things become even more complicated when we have to build applications using many different interdependent
libraries, as the **order of the libraries** passed to the linker is of crucial importance.
Fortunately, the ABINIT build system is aware of this problem and all the dependencies
(BLAS, LAPACK, FFT, LIBXC, MPI, etc.) will be automatically put in the right order, so
you don't have to worry about this point, although it is worth knowing about it.
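
To check both the link order and the include path, we can compile a small Fortran program
that prints the libxc version through the F90 API
(a minimal sketch of our own, assuming the *xc_f90_version* routine provided by the libxc-4.x Fortran interface):

```sh
cat > test_xc.f90 << 'EOF'
program test_xc
  use xc_f90_lib_m   ! Fortran module installed in $HOME/local/include
  implicit none
  integer :: major, minor, micro
  call xc_f90_version(major, minor, micro)
  print '(a,i0,a,i0,a,i0)', "libxc version ", major, ".", minor, ".", micro
end program test_xc
EOF

gfortran test_xc.f90 -o test_xc -I$HOME/local/include -L$HOME/local/lib -lxcf90 -lxc
./test_xc
```

Try swapping `-lxcf90` and `-lxc` on the command line to see the **undefined references** mentioned above.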

## Compiling and installing FFTW

FFTW is a C library for computing the Fast Fourier transform in one or more dimensions.
ABINIT already provides an internal Fortran implementation of the FFT algorithm,
hence FFTW is considered an optional dependency.
Nevertheless, **we do not recommend the internal implementation if you really care about performance**:
FFTW (or, even better, the DFTI library provided by intel MKL)
is usually much faster than the internal version.

!!! important

    FFTW is very easy to install on Linux machines once you have *gcc* and *gfortran*.
    The [[fftalg]] variable defines the implementation to be used and 312 corresponds to the FFTW implementation.
    The default value of [[fftalg]] is automatically set by the *configure* script via pre-processing options.
    In other words, if you activate support for FFTW (DFTI) at configure time,
    ABINIT will use [[fftalg]] 312 (512) as default.

The FFTW source code can be downloaded from [fftw.org](http://www.fftw.org/),
and the tarball of the latest version is available at <http://www.fftw.org/fftw-3.3.8.tar.gz>.

```sh
cd $HOME/local/src

wget http://www.fftw.org/fftw-3.3.8.tar.gz
tar -zxvf fftw-3.3.8.tar.gz && cd fftw-3.3.8
```

The compilation procedure is very similar to the one already used for the *libxc* package.
Note, however, that ABINIT needs both the **single-precision** and the **double-precision** version.
This means that we need to configure, build and install the package **twice**.

To build the single precision version, use:

```sh
./configure --prefix=$HOME/local --enable-single
make -j2
make check && make install
```

During the configuration step, make sure that *configure* finds the Fortran compiler,
because ABINIT needs the Fortran interface:

```md
checking for gfortran... gfortran
checking whether we are using the GNU Fortran 77 compiler... yes
checking whether gfortran accepts -g... yes
checking if libtool supports shared libraries... yes
checking whether to build shared libraries... no
checking whether to build static libraries... yes
```

Let's have a look at the libraries we've just installed:

```sh
ls $HOME/local/lib/libfftw3*
/home/gmatteo/local/lib/libfftw3f.a  /home/gmatteo/local/lib/libfftw3f.la
```

The `f` at the end stands for `float` (C jargon for single precision).
Note that only static libraries have been built.
To build shared libraries, one should use `--enable-shared` when configuring.

Now we configure for the double precision version (this is the default behaviour, so no extra option is needed):

```sh
./configure --prefix=$HOME/local
make -j2
make check && make install
```

After this step, you should have two libraries with the single and the double precision API:

```sh
ls $HOME/local/lib/libfftw3*
/home/gmatteo/local/lib/libfftw3.a   /home/gmatteo/local/lib/libfftw3f.a
/home/gmatteo/local/lib/libfftw3.la  /home/gmatteo/local/lib/libfftw3f.la
```

To compile ABINIT with FFTW3 support, one should use:

    -L$HOME/local/lib -lfftw3f -lfftw3 -I$HOME/local/include


Note that, unlike in *libxc*, here we don't have to specify different libraries for Fortran and C
as FFTW3 bundles **both the C and the Fortran API in the same library**.
The Fortran interface is included by default provided the FFTW3 *configure* script can find a Fortran compiler.
In our case, we know that our FFTW3 library supports Fortran as *gfortran* was found by *configure*
but this may not be true if you are using a precompiled library installed via your package manager.

To make sure we have the Fortran API, use the `nm` tool
to get the list of symbols in the library and then use *grep* to search for the Fortran API.
For instance, we can check whether our library contains the Fortran routine for multiple single-precision
FFTs (*sfftw_plan_many_dft*) and the version for multiple double-precision FFTs (*dfftw_plan_many_dft*):

```sh
[gmatteo@bob fftw-3.3.8]$ nm $HOME/local/lib/libfftw3f.a | grep sfftw_plan_many_dft
0000000000000400 T sfftw_plan_many_dft_
0000000000003570 T sfftw_plan_many_dft__
0000000000001a90 T sfftw_plan_many_dft_c2r_
0000000000004c00 T sfftw_plan_many_dft_c2r__
0000000000000f60 T sfftw_plan_many_dft_r2c_
00000000000040d0 T sfftw_plan_many_dft_r2c__

[gmatteo@bob fftw-3.3.8]$ nm $HOME/local/lib/libfftw3.a | grep dfftw_plan_many_dft
0000000000000400 T dfftw_plan_many_dft_
0000000000003570 T dfftw_plan_many_dft__
0000000000001a90 T dfftw_plan_many_dft_c2r_
0000000000004c00 T dfftw_plan_many_dft_c2r__
0000000000000f60 T dfftw_plan_many_dft_r2c_
00000000000040d0 T dfftw_plan_many_dft_r2c__
```

If you are using a FFTW3 library without Fortran support, the ABINIT *configure* script will complain that the library
cannot be called from Fortran and you will need to dig into *config.log* to understand what's going on.
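
One can go one step further and compile a minimal Fortran program with the legacy FFTW3 interface
(a hypothetical check of our own; the DFT of a constant signal is known analytically, so the result is easy to verify):

```sh
cat > test_fftw.f90 << 'EOF'
program test_fftw
  implicit none
  include 'fftw3.f'   ! installed in $HOME/local/include, hence the -I option
  integer, parameter :: n = 8
  double complex :: in(n), out(n)
  integer*8 :: plan
  in = (1.0d0, 0.0d0)
  call dfftw_plan_dft_1d(plan, n, in, out, FFTW_FORWARD, FFTW_ESTIMATE)
  call dfftw_execute_dft(plan, in, out)
  call dfftw_destroy_plan(plan)
  print *, "out(1) =", out(1), "(expected (8.0, 0.0))"
end program test_fftw
EOF

gfortran test_fftw.f90 -o test_fftw -I$HOME/local/include -L$HOME/local/lib -lfftw3
./test_fftw
```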

!!! note

    At present, there is no need to compile FFTW with MPI support because ABINIT implements its own
    version of the MPI-FFT algorithm based on the sequential FFTW version.
    The MPI algorithm implemented in ABINIT is optimized for plane-wave codes
    as it supports zero-padding and composite transforms for the application of the local part of the KS potential.

    Also, **do not link both MKL and FFTW3** for the FFT, as the MKL library exports the same symbols as FFTW.
    This means that the linker will receive multiple definitions for the same procedure and
    the **behaviour is undefined**! Use either MKL or FFTW3 with e.g. openblas.

## Installing MPI

In this section, we discuss how to compile and install the MPI library.
This step is required if you want to run ABINIT with multiple processes and/or you
need to compile MPI-based libraries such as PBLAS/Scalapack or the HDF5 library with support for parallel IO.

It is worth stressing that the MPI installation provides two scripts (**mpif90** and **mpicc**)
that act as a sort of wrapper around the sequential Fortran and the C compilers, respectively.
These scripts must be used to compile parallel software using MPI instead
of the "sequential" *gfortran* and *gcc*.
The MPI library also provides launcher scripts installed in the *bin* directory (**mpirun** or **mpiexec**)
that must be used to execute an MPI application EXEC with NUM_PROCS MPI processes, using the syntax:

```sh
mpirun -n NUM_PROCS EXEC [EXEC_ARGS]
```

!!! warning

    Keep in mind that there are **several MPI implementations** available around
    (*openmpi*, *mpich*, *intel mpi*, etc) and you must **choose one implementation and stick to it**
    when building your software stack.
    In other words, all the libraries and executables requiring MPI must be compiled, linked and executed
    **with the same MPI library**.

    Don't try to link a library compiled with e.g. *mpich* if you are building the code with
    the *mpif90* wrapper provided by e.g. *openmpi*.
    By the same token, don't try to run executables compiled with e.g. *intel mpi* with the
    *mpirun* launcher provided by *openmpi* unless you are looking for trouble!
    Again, the *which* command is quite handy to pinpoint possible problems, especially if there are multiple
    installations of MPI in your $PATH (not a very good idea!).

In this tutorial, we employ the *mpich* implementation that can be downloaded
from this [webpage](https://www.mpich.org/downloads/).
In the terminal, issue:

```sh
cd $HOME/local/src
wget http://www.mpich.org/static/downloads/3.3.2/mpich-3.3.2.tar.gz
tar -zxvf mpich-3.3.2.tar.gz
cd mpich-3.3.2/
```

to download and uncompress the tarball.
Then configure/compile/test/install the library with:

```sh
./configure --prefix=$HOME/local
make -j2
make check && make install
```

Once the installation is completed, you should obtain this message
(possibly not as the last line of the output; you may have to look for it):

```sh
----------------------------------------------------------------------
Libraries have been installed in:
   /home/gmatteo/local/lib

If you ever happen to want to link against installed libraries
in a given directory, LIBDIR, you must either use libtool, and
specify the full pathname of the library, or use the '-LLIBDIR'
flag during linking and do at least one of the following:
   - add LIBDIR to the 'LD_LIBRARY_PATH' environment variable
     during execution
   - add LIBDIR to the 'LD_RUN_PATH' environment variable
     during linking
   - use the '-Wl,-rpath -Wl,LIBDIR' linker flag
   - have your system administrator add LIBDIR to '/etc/ld.so.conf'

See any operating system documentation about shared libraries for
more information, such as the ld(1) and ld.so(8) manual pages.
----------------------------------------------------------------------
```

The reason why we should add `$HOME/local/lib` to `$LD_LIBRARY_PATH` now should be clear to you.

Let's have a look at the MPI executables we have just installed in $HOME/local/bin:

```sh
ls $HOME/local/bin/mpi*
/home/gmatteo/local/bin/mpic++        /home/gmatteo/local/bin/mpiexec        /home/gmatteo/local/bin/mpifort
/home/gmatteo/local/bin/mpicc         /home/gmatteo/local/bin/mpiexec.hydra  /home/gmatteo/local/bin/mpirun
/home/gmatteo/local/bin/mpichversion  /home/gmatteo/local/bin/mpif77         /home/gmatteo/local/bin/mpivars
/home/gmatteo/local/bin/mpicxx        /home/gmatteo/local/bin/mpif90
```

Since we added $HOME/local/bin to $PATH, we should see that *mpif90* is actually
pointing to the version we have just installed:

```sh
which mpif90
~/local/bin/mpif90
```

As already mentioned, *mpif90* is a wrapper around the sequential Fortran compiler.
To show the Fortran compiler invoked by *mpif90*, use:

```sh
mpif90 -v

mpifort for MPICH version 3.3.2
Using built-in specs.
COLLECT_GCC=gfortran
COLLECT_LTO_WRAPPER=/usr/libexec/gcc/x86_64-redhat-linux/5.3.1/lto-wrapper
Target: x86_64-redhat-linux
Configured with: ../configure --enable-bootstrap --enable-languages=c,c++,objc,obj-c++,fortran,ada,go,lto --prefix=/usr --mandir=/usr/share/man --infodir=/usr/share/info --with-bugurl=http://bugzilla.redhat.com/bugzilla --enable-shared --enable-threads=posix --enable-checking=release --enable-multilib --with-system-zlib --enable-__cxa_atexit --disable-libunwind-exceptions --enable-gnu-unique-object --enable-linker-build-id --with-linker-hash-style=gnu --enable-plugin --enable-initfini-array --disable-libgcj --with-isl --enable-libmpx --enable-gnu-indirect-function --with-tune=generic --with-arch_32=i686 --build=x86_64-redhat-linux
Thread model: posix
gcc version 5.3.1 20160406 (Red Hat 5.3.1-6) (GCC)
```

The C include files (*.h*) and the Fortran modules (*.mod*) have been installed in $HOME/local/include:

```sh
ls $HOME/local/include/mpi*

/home/gmatteo/local/include/mpi.h              /home/gmatteo/local/include/mpicxx.h
/home/gmatteo/local/include/mpi.mod            /home/gmatteo/local/include/mpif.h
/home/gmatteo/local/include/mpi_base.mod       /home/gmatteo/local/include/mpio.h
/home/gmatteo/local/include/mpi_constants.mod  /home/gmatteo/local/include/mpiof.h
/home/gmatteo/local/include/mpi_sizeofs.mod
```

In principle, the location of this directory must be passed to the Fortran compiler either
with the `-J` option (`mpi.mod` module for MPI2+) or the `-I` option (`mpif.h` include file for MPI1).
Fortunately, the ABINIT build system can detect your MPI installation and set all the compilation
options automatically if you provide the installation root ($HOME/local).
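
Before moving to the next section, it is a good idea to verify that the wrapper and the launcher
work together, e.g. with the classic hello-world program (our own minimal example):

```sh
cat > hello_mpi.f90 << 'EOF'
program hello_mpi
  use mpi
  implicit none
  integer :: ierr, rank, nprocs
  call MPI_Init(ierr)
  call MPI_Comm_rank(MPI_COMM_WORLD, rank, ierr)
  call MPI_Comm_size(MPI_COMM_WORLD, nprocs, ierr)
  print '(a,i0,a,i0)', "Hello from rank ", rank, " of ", nprocs
  call MPI_Finalize(ierr)
end program hello_mpi
EOF

mpif90 hello_mpi.f90 -o hello_mpi
mpirun -n 2 ./hello_mpi
```

If both ranks print their message, the compiler wrapper and the launcher belong to the same, working MPI installation.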

## Installing HDF5 and netcdf4

Abinit developers are trying to move away from Fortran binary files as this
format is not portable and is difficult to read from high-level languages such as python.
For this reason, in Abinit v9, HDF5 and netcdf4 have become **hard requirements**.
This means that the *configure* script will abort if these libraries are not found.
In this section, we explain how to build HDF5 and netcdf4 from source including support for parallel IO.

Netcdf4 is built on top of HDF5 and consists of two different layers:

* The **low-level C library**

* The **Fortran bindings** i.e. Fortran routines calling the low-level C implementation.
  This is the high-level API used by ABINIT to perform all the IO operations on netcdf files.

To build the libraries required by ABINIT, we will compile the three different layers
in a bottom-up fashion starting from the HDF5 package (*HDF5 --> netcdf-c --> netcdf-fortran*).
Since we want to activate support for parallel IO, we need to compile the libraries **using the wrappers**
provided by our MPI installation instead of using *gcc* or *gfortran* directly.

Let's start by downloading the HDF5 tarball from this [download page](https://www.hdfgroup.org/downloads/hdf5/source-code/).
Uncompress the archive with *tar* as usual, then configure the package with:

```sh
./configure --prefix=$HOME/local/ \
            CC=$HOME/local/bin/mpicc --enable-parallel --enable-shared
```

where we've used the **CC** variable to specify the C compiler.
This step is crucial in order to activate support for parallel IO.

!!! tip

    A table with the most commonly used predefined variables is available
    [here](https://www.gnu.org/software/make/manual/html_node/Implicit-Variables.html).


At the end of the configuration step, you should get the following output:

```sh
                     AM C Flags:
               Shared C Library: yes
               Static C Library: yes

                        Fortran: no
                            C++: no
                           Java: no

Features:
---------
                   Parallel HDF5: yes
Parallel Filtered Dataset Writes: yes
              Large Parallel I/O: yes
              High-level library: yes
                    Threadsafety: no
             Default API mapping: v110
  With deprecated public symbols: yes
          I/O filters (external): deflate(zlib)
                             MPE:
                      Direct VFD: no
                         dmalloc: no
  Packages w/ extra debug output: none
                     API tracing: no
            Using memory checker: no
 Memory allocation sanity checks: no
          Function stack tracing: no
       Strict file format checks: no
    Optimization instrumentation: no
```

The line with:

```sh
Parallel HDF5: yes
```

tells us that our HDF5 build supports parallel IO.
The Fortran API is not activated but this is not a problem
as ABINIT will be interfaced with HDF5 through the Fortran bindings provided by netcdf-fortran.
In other words, **ABINIT requires _netcdf-fortran_** and not the HDF5 Fortran bindings.

Again, issue `make -j NUM` followed by `make check` and finally `make install`.
Note that `make check` may take some time, so you may want to install immediately and run the tests in another terminal
so that you can continue with the tutorial.
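
As a quick check of the final configuration, one can query the *h5pcc* compiler wrapper that should now
be installed in $HOME/local/bin (parallel builds install *h5pcc* in place of the serial *h5cc* wrapper):

```sh
h5pcc -showconfig | grep -i parallel
```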

Now let's move to netcdf.
Download the C version and the Fortran bindings from the
[netcdf website](https://www.unidata.ucar.edu/downloads/netcdf/)
and unpack the tarball files as usual.

```sh
wget ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-c-4.7.3.tar.gz
tar -xvf netcdf-c-4.7.3.tar.gz

wget ftp://ftp.unidata.ucar.edu/pub/netcdf/netcdf-fortran-4.5.2.tar.gz
tar -xvf netcdf-fortran-4.5.2.tar.gz
```

To compile the C library, use:

```sh
cd netcdf-c-4.7.3
./configure --prefix=$HOME/local/ \
            CC=$HOME/local/bin/mpicc \
            LDFLAGS=-L$HOME/local/lib CPPFLAGS=-I$HOME/local/include
```

where `mpicc` is used as C compiler (**CC** environment variable)
and we have to specify **LDFLAGS** and **CPPFLAGS** as we want to link against our installation of *hdf5*.
At the end of the configuration step, we should obtain

```sh
# NetCDF C Configuration Summary
==============================

# General
-------
NetCDF Version:		4.7.3
Dispatch Version:       1
Configured On:		Wed Apr  8 00:53:19 CEST 2020
Host System:		x86_64-pc-linux-gnu
Build Directory: 	/home/gmatteo/local/src/netcdf-c-4.7.3
Install Prefix:         /home/gmatteo/local

# Compiling Options
-----------------
C Compiler:		/home/gmatteo/local/bin/mpicc
CFLAGS:
CPPFLAGS:		-I/home/gmatteo/local/include
LDFLAGS:		-L/home/gmatteo/local/lib
AM_CFLAGS:
AM_CPPFLAGS:
AM_LDFLAGS:
Shared Library:		yes
Static Library:		yes
Extra libraries:	-lhdf5_hl -lhdf5 -lm -ldl -lz -lcurl

# Features
--------
NetCDF-2 API:		yes
HDF4 Support:		no
HDF5 Support:		yes
NetCDF-4 API:		yes
NC-4 Parallel Support:	yes
PnetCDF Support:	no
DAP2 Support:		yes
DAP4 Support:		yes
Byte-Range Support:	no
Diskless Support:	yes
MMap Support:		no
JNA Support:		no
CDF5 Support:		yes
ERANGE Fill Support:	no
Relaxed Boundary Check:	yes
```

The section:

```sh
HDF5 Support:		yes
NetCDF-4 API:		yes
NC-4 Parallel Support:	yes
```

tells us that *configure* detected our installation of *hdf5* and that support for parallel-IO is activated.

Now use the standard sequence of commands to compile and install the package:

```sh
make -j2
make check && make install
```

Once the installation is completed, use the `nc-config` executable to
inspect the features provided by the library we've just installed.

```sh
which nc-config
/home/gmatteo/local/bin/nc-config

# installation directory
nc-config --prefix
/home/gmatteo/local/
```

To get a summary of the options used to build the C layer and the available features, use

```sh
nc-config --all

This netCDF 4.7.3 has been built with the following features:

  --cc            -> /home/gmatteo/local/bin/mpicc
  --cflags        -> -I/home/gmatteo/local/include
  --libs          -> -L/home/gmatteo/local/lib -lnetcdf
  --static        -> -lhdf5_hl -lhdf5 -lm -ldl -lz -lcurl
  ....
  <snip>
```

*nc-config* is quite useful as it prints the compiler options required to
build C applications requiring netcdf-c (`--cflags` and `--libs`).
Unfortunately, this tool is not enough for ABINIT as we need the Fortran bindings as well.

To compile the Fortran bindings, execute:

```sh
cd netcdf-fortran-4.5.2
./configure --prefix=$HOME/local/ \
            FC=$HOME/local/bin/mpif90 \
            LDFLAGS=-L$HOME/local/lib CPPFLAGS=-I$HOME/local/include
```

where **FC** points to our *mpif90* wrapper (**CC** is not needed here).
For further info on how to build *netcdf-fortran*, see the
[official documentation](https://www.unidata.ucar.edu/software/netcdf/docs/building_netcdf_fortran.html).

Now issue:

```sh
make -j2
make check && make install
```

To inspect the features activated in our Fortran library, use `nf-config` instead of `nc-config`
(note the `nf-` prefix):

```sh
which nf-config
/home/gmatteo/local/bin/nf-config

# installation directory
nf-config --prefix
/home/gmatteo/local/
```

To get a summary of the options used to build the Fortran bindings and the list of available features, use

```sh
nf-config --all

This netCDF-Fortran 4.5.2 has been built with the following features:

  --cc        -> gcc
  --cflags    ->  -I/home/gmatteo/local/include -I/home/gmatteo/local/include

  --fc        -> /home/gmatteo/local/bin/mpif90
  --fflags    -> -I/home/gmatteo/local/include
  --flibs     -> -L/home/gmatteo/local/lib -lnetcdff -L/home/gmatteo/local/lib -lnetcdf -lnetcdf -ldl -lm
  --has-f90   ->
  --has-f03   -> yes

  --has-nc2   -> yes
  --has-nc4   -> yes

  --prefix    -> /home/gmatteo/local
  --includedir-> /home/gmatteo/local/include
  --version   -> netCDF-Fortran 4.5.2
```

!!! tip

    *nf-config* is quite handy to pass options to the ABINIT *configure* script.
    Instead of typing the full list of libraries (`--flibs`) and the location of the include files (`--fflags`)
    we can delegate this boring task to *nf-config* using
    [backtick syntax](https://unix.stackexchange.com/questions/48392/understanding-backtick/48393):

    ```sh
    NETCDF_FORTRAN_LIBS=`nf-config --flibs`
    NETCDF_FORTRAN_FCFLAGS=`nf-config --fflags`
    ```

    Alternatively, one can simply pass the installation directory (here we use the `$(...)` syntax):

    ```sh
    --with-netcdf-fortran=$(nf-config --prefix)
    ```

    and then let *configure* detect **NETCDF_FORTRAN_LIBS** and **NETCDF_FORTRAN_FCFLAGS** for us.
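
At this point, we can verify the whole IO stack (MPI + HDF5 + netcdf-c + netcdf-fortran) with a small
Fortran program of our own that creates an empty file in netcdf4 format;
note how *nf-config* provides the required compilation flags:

```sh
cat > test_nc.f90 << 'EOF'
program test_nc
  use netcdf
  implicit none
  integer :: ncid, ierr
  ! Create an empty file in netcdf4/HDF5 format.
  ierr = nf90_create("hello.nc", NF90_NETCDF4, ncid)
  if (ierr /= nf90_noerr) stop "nf90_create failed"
  ierr = nf90_close(ncid)
  print *, "netcdf-fortran works, library version: ", trim(nf90_inq_libvers())
end program test_nc
EOF

mpif90 test_nc.f90 -o test_nc `nf-config --fflags` `nf-config --flibs`
./test_nc
```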

## How to compile ABINIT

In this section, we finally discuss how to compile ABINIT using the
MPI compilers and the libraries installed previously.
First of all, download the ABINIT tarball from [this page](https://www.abinit.org/packages) using e.g.

```sh
wget https://www.abinit.org/sites/default/files/packages/abinit-9.0.2.tar.gz
```

Here we are using version 9.0.2 but you may want to download the
latest production version to take advantage of new features and benefit from bug fixes.

Once you have the tarball, uncompress it by typing:

```sh
tar -xvzf abinit-9.0.2.tar.gz
```

Then `cd` into the newly created *abinit-9.0.2* directory.
Before actually starting the compilation, type:

```sh
./configure --help
```

and take some time to read the documentation of the different options.

The documentation mentions the most important environment variables
that can be used to specify compilers and compilation flags.
We already encountered some of these variables in the previous examples:

```md
Some influential environment variables:
  CC          C compiler command
  CFLAGS      C compiler flags
  LDFLAGS     linker flags, e.g. -L<lib dir> if you have libraries in a
              nonstandard directory <lib dir>
  LIBS        libraries to pass to the linker, e.g. -l<library>
  CPPFLAGS    (Objective) C/C++ preprocessor flags, e.g. -I<include dir> if
              you have headers in a nonstandard directory <include dir>
  CPP         C preprocessor
  CXX         C++ compiler command
  CXXFLAGS    C++ compiler flags
  FC          Fortran compiler command
  FCFLAGS     Fortran compiler flags
```

Besides the standard environment variables: **CC**, **CFLAGS**, **FC**, **FCFLAGS** etc.
the build system also provides specialized options to activate support for external libraries.
For *libxc*, for instance, we have:

```md
LIBXC_CPPFLAGS
            C preprocessing flags for LibXC.
LIBXC_CFLAGS
            C flags for LibXC.
LIBXC_FCFLAGS
            Fortran flags for LibXC.
LIBXC_LDFLAGS
            Linker flags for LibXC.
LIBXC_LIBS
            Library flags for LibXC.
```

According to what we have seen during the compilation of *libxc*, one should pass to
*configure* the following options:

```sh
LIBXC_LIBS="-L$HOME/local/lib -lxcf90 -lxc"
LIBXC_FCFLAGS="-I$HOME/local/include"
```

Alternatively, one can use the **high-level interface** provided by the `--with-LIBNAME` options
to specify the installation directory as in:

```sh
--with-libxc="$HOME/local"
```

In this case, *configure* will try to **automatically detect** the other options.
This is the easiest approach but if *configure* cannot detect the dependency properly,
you may need to inspect `config.log` for error messages and/or set the options manually.
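
For instance, if the detection of libxc fails, searching *config.log* for the library name
usually reveals the failed compilation or link command (a hypothetical example):

```sh
grep -i -n "libxc" config.log | head
```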

<!--
https://www.cprogramming.com/tutorial/shared-libraries-linux-gcc.html
In this example, we will be taking advantage of the high-level interface provided by the *with_XXX* options
to tell the build system where external dependencies are located instead of passing options explicitly.
-->

In the previous examples, we executed *configure* in the top level directory of the package but
for ABINIT we prefer to do things in a much cleaner way using a **build directory**.
The advantage of this approach is that we keep object files and executables separated from the source code
and this allows us to **build different executables using the same source tree**.
For example, one can have a build directory with a version compiled with *gfortran*, another
build directory for the intel *ifort* compiler, or other builds done with the same compiler but different compilation options.

Let's call the build directory `build_gfortran`:

```sh
mkdir build_gfortran && cd build_gfortran
```

Now we should define the options that will be passed to the *configure* script.
Instead of using the command line as done in the previous examples,
we will be using an **external file** (*myconf.ac9*) to collect all our options.
The syntax to read options from file is:

```sh
../configure --with-config-file="myconf.ac9"
```

where double quotation marks may be needed for portability reasons.
Note the use of `../configure` as we are working inside the build directory `build_gfortran` while
the `configure` script is located in the top level directory of the package.

!!! important

    The names of the options in `myconf.ac9` are in **normalized form**, that is,
    the initial `--` is removed from the option name and all the other `-` characters in the string
    are replaced by an underscore `_`.
    Following these simple rules, the *configure* option `--with-mpi` becomes `with_mpi` in the ac9 file.

    Also note that in the configuration file it is possible to use **shell variables**
    and to reuse the output of external tools using the
    [backtick syntax](https://unix.stackexchange.com/questions/48392/understanding-backtick/48393),
    as in `` `nf-config --flibs` `` or its modern equivalent `$(nf-config --flibs)`.
    These tricks reduce the amount of typing
    and give configuration files that can easily be reused on other machines.
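
    For instance, assuming all dependencies were installed under the same prefix,
    a few hypothetical lines of the ac9 file could read:

    ```sh
    PREFIX="$HOME/local"
    with_libxc="$PREFIX"
    NETCDF_FORTRAN_LIBS=$(nf-config --flibs)
    ```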

This is an example of a configuration file in which we use the high-level interface
(`with_LIBNAME=dirpath`) as much as possible, except for linalg and FFTW3.
The explicit values of *LIBNAME_LIBS* and *LIBNAME_FCFLAGS* are also reported in the commented sections.

```sh
# -------------------------------------------------------------------------- #
# MPI support                                                                #
# -------------------------------------------------------------------------- #

#   * the build system expects to find subdirectories named bin/, lib/,
#     include/ inside the with_mpi directory
#
with_mpi=$HOME/local/

# Flavor of linear algebra libraries to use (default is netlib)
#
with_linalg_flavor="openblas"

# Library flags for linear algebra (default is unset)
#
LINALG_LIBS="-L$HOME/local/lib -lopenblas"

# -------------------------------------------------------------------------- #
# Optimized FFT support                                                      #
# -------------------------------------------------------------------------- #

# Flavor of FFT framework to support (default is auto)
#
# The high-level interface does not work yet so we pass options explicitly
#with_fftw3="$HOME/local/lib"

# Explicit options for fftw3
with_fft_flavor="fftw3"
FFTW3_LIBS="-L$HOME/local/lib -lfftw3f -lfftw3"
FFTW3_FCFLAGS="-I$HOME/local/include"

# -------------------------------------------------------------------------- #
# LibXC
# -------------------------------------------------------------------------- #
# Install prefix for LibXC (default is unset)
#
with_libxc="$HOME/local"

# Explicit options for libxc
#LIBXC_LIBS="-L$HOME/local/lib -lxcf90 -lxc"
#LIBXC_FCFLAGS="-I$HOME/local/include"

# -------------------------------------------------------------------------- #
# NetCDF
# -------------------------------------------------------------------------- #

# install prefix for NetCDF (default is unset)
#
with_netcdf=$(nc-config --prefix)
with_netcdf_fortran=$(nf-config --prefix)

# Explicit options for netcdf
#with_netcdf="yes"
#NETCDF_FORTRAN_LIBS=`nf-config --flibs`
#NETCDF_FORTRAN_FCFLAGS=`nf-config --fflags`

# install prefix for HDF5 (default is unset)
#
with_hdf5="$HOME/local"

# Explicit options for hdf5
#HDF5_LIBS=`nf-config --flibs`
#HDF5_FCFLAGS=`nf-config --fflags`

# Enable OpenMP (default is no)
enable_openmp="no"
```

A documented template with all the supported options can be found here:

{% dialog build/config-template.ac9 %}

Copy the content of the example into *myconf.ac9*, then run:

```sh
../configure --with-config-file="myconf.ac9"
```

If everything goes smoothly, you should obtain the following summary:

```md
==============================================================================
=== Final remarks                                                          ===
==============================================================================


Core build parameters
---------------------

  * C compiler       : gnu version 5.3
  * Fortran compiler : gnu version 5.3
  * architecture     : intel xeon (64 bits)
  * debugging        : basic
  * optimizations    : standard

  * OpenMP enabled   : no (collapse: ignored)
  * MPI    enabled   : yes (flavor: auto)
  * MPI    in-place  : no
  * MPI-IO enabled   : yes
  * GPU    enabled   : no (flavor: none)

  * LibXML2 enabled  : no
  * LibPSML enabled  : no
  * XMLF90  enabled  : no
  * HDF5 enabled     : yes (MPI support: yes)
  * NetCDF enabled   : yes (MPI support: yes)
  * NetCDF-F enabled : yes (MPI support: yes)

  * FFT flavor       : fftw3 (libs: user-defined)
  * LINALG flavor    : openblas (libs: user-defined)

  * Build workflow   : monolith

0 deprecated options have been used:.

Configuration complete.
You may now type "make" to build Abinit.
(or "make -j<n>", where <n> is the number of available processors)
```

!!! important

    Please take the time to read the final summary carefully and **make sure you are getting what you expect**.
    Many typos and configuration errors can easily be spotted at this level.

    You might then find it useful to have a look at other examples available [on this page](../developers/autoconf_examples).
    Additional configuration files for clusters can be found in the
    [abiconfig package](https://github.com/abinit/abiconfig).

The *configure* script has generated several **Makefiles** required by *make*, as well as the **config.h**
include file with all the pre-processing options used to build ABINIT.
This file is included in every ABINIT source file and defines the features that will be activated or deactivated
at compilation time depending on the libraries available on your machine.
Let's have a look at a selected portion of **config.h**:

```c
/* Define to 1 if you have a working MPI installation. */
#define HAVE_MPI 1

/* Define to 1 if you have a MPI-1 implementation (obsolete, broken). */
/* #undef HAVE_MPI1 */

/* Define to 1 if you have a MPI-2 implementation. */
#define HAVE_MPI2 1

/* Define to 1 if you want MPI I/O support. */
#define HAVE_MPI_IO 1

/* Define to 1 if you have a parallel NetCDF library. */
/* #undef HAVE_NETCDF_MPI */
```

This file tells us that:

- we are building ABINIT with MPI support
- we have a library implementing the MPI2 specifications
- our MPI implementation supports parallel MPI-IO. Note that this does not mean that *netcdf* supports MPI-IO:
  in this example, indeed, **HAVE_NETCDF_MPI is undefined**, which means the library does not have
  parallel-IO capabilities.

Of course, end users are mainly concerned with the final summary reported
by the *configure* script, which tells them whether
a particular feature has been activated or not, but more advanced users may
find the content of `config.h` valuable to understand what's going on.

Now we can finally compile the package with e.g. *make -j2*.
If the compilation completes successfully (🙌), you should end up with a bunch of executables inside *src/98_main*.
Note, however, that a successful compilation **does not necessarily imply that
the executables will work as expected**, as there are many things that can go
[wrong at runtime](https://en.wikiquote.org/wiki/Murphy%27s_law).

First of all, let's try to execute:

```sh
abinit --version
```

!!! tip

    If this is a parallel build, you may need to use

    ```sh
    mpirun -n 1 abinit --version
    ```

    even for a sequential run, as certain MPI implementations cannot bootstrap the MPI environment
    without *mpirun* (*mpiexec*). On some clusters with Slurm, the sysadmin may ask you to use
    *srun* instead of *mpirun*.

To get the summary of options activated during the build, run *abinit* with the `-b` option
(or `--build` if you prefer the verbose version):

```sh
./src/98_main/abinit -b
```

If the executable does not crash (🙌), you may want to execute

```sh
make test_fast
```

to run some basic tests.
If something goes wrong when executing the binary or when running the tests,
check out the [Troubleshooting](#troubleshooting) section for possible solutions.

Finally, you may want to execute the *runtests.py* python script in the *tests* directory
in order to validate the build before running production calculations:

```sh
cd tests
../../tests/runtests.py v1 -j4
```

As usual, use:

```sh
../../tests/runtests.py --help
```

to list the available options.
A more detailed discussion is given in [this page](../developers/testsuite_howto).

[![asciicast](https://asciinema.org/a/40324.png)](https://asciinema.org/a/40324)


### Dynamic libraries and ldd

Since we decided to compile with **dynamic linking**, the external libraries are not included in the final executables.
Instead, the libraries are loaded by the Operating System (OS) at runtime when we execute the binary.
The OS searches for dynamic libraries using the list of directories specified
in `$LD_LIBRARY_PATH` (`$DYLD_LIBRARY_PATH` on macOS).

A typical mistake is to execute *abinit* with a wrong `$LD_LIBRARY_PATH` that is either **empty or
different from the one used when compiling the code**
(if it's different and it works, I assume you know what you are doing, so you should not be reading this section!).
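
A minimal fix, assuming the dependencies were installed under `$HOME/local` as in our example,
is to put the library directory back in the search path:

```sh
export LD_LIBRARY_PATH="$HOME/local/lib:$LD_LIBRARY_PATH"
```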

On Linux, one can use the *ldd* tool to print the shared objects (shared libraries) required by each
program or shared object specified on the command line:

```sh
ldd src/98_main/abinit

	linux-vdso.so.1 (0x00007fffbe7a4000)
	libopenblas.so.0 => /home/gmatteo/local/lib/libopenblas.so.0 (0x00007fc892155000)
	libnetcdff.so.7 => /home/gmatteo/local/lib/libnetcdff.so.7 (0x00007fc891ede000)
	libnetcdf.so.15 => /home/gmatteo/local/lib/libnetcdf.so.15 (0x00007fc891b62000)
	libhdf5_hl.so.200 => /home/gmatteo/local/lib/libhdf5_hl.so.200 (0x00007fc89193c000)
	libhdf5.so.200 => /home/gmatteo/local/lib/libhdf5.so.200 (0x00007fc891199000)
	libz.so.1 => /lib64/libz.so.1 (0x00007fc890f74000)
	libdl.so.2 => /lib64/libdl.so.2 (0x00007fc890d70000)
	libgfortran.so.3 => /lib64/libgfortran.so.3 (0x00007fc890a43000)
	libm.so.6 => /lib64/libm.so.6 (0x00007fc890741000)
	libmpifort.so.12 => /home/gmatteo/local/lib/libmpifort.so.12 (0x00007fc89050a000)
	libmpi.so.12 => /home/gmatteo/local/lib/libmpi.so.12 (0x00007fc88ffb9000)
	libquadmath.so.0 => /lib64/libquadmath.so.0 (0x00007fc88fd7a000)
	libgcc_s.so.1 => /lib64/libgcc_s.so.1 (0x00007fc88fb63000)
	libc.so.6 => /lib64/libc.so.6 (0x00007fc88f7a1000)
    ...
    <snip>
```

As expected, our executable uses the *openblas*, *netcdf*, *hdf5* and *mpi* libraries installed in $HOME/local/lib, plus
other basic libs coming from `lib64` (e.g. *libgfortran*) added by the compiler.
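
A quick way to spot missing libraries is to filter the *ldd* output:

```sh
# any "not found" entry indicates a library the OS cannot locate
ldd src/98_main/abinit | grep "not found"
```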

!!! tip

    On macOS, replace *ldd* with *otool* and use the syntax:

    ```sh
    otool -L abinit
    ```

    If you see entries like:

    ```sh
    /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libLAPACK.dylib (compatibility version 1.0.0, current version 1.0.0)
    /System/Library/Frameworks/Accelerate.framework/Versions/A/Frameworks/vecLib.framework/Versions/A/libBLAS.dylib (compatibility version 1.0.0, current version 1.0.0)
    ```

    it means that you are linking against the **macOS vecLib framework**.
    In this case, make sure to use `--enable-zdot-bugfix="yes"` when configuring the package,
    otherwise the code will crash at runtime due to an ABI incompatibility (calling conventions
    for functions returning complex values).
    Did I tell you that macOS does not care about Fortran?
    If you wonder about the difference between API and ABI, please read this
    [stackoverflow post](https://stackoverflow.com/questions/2171177/what-is-an-application-binary-interface-abi).


To understand why LD_LIBRARY_PATH is so important, let's unset this variable with:

```sh
unset LD_LIBRARY_PATH
echo $LD_LIBRARY_PATH
```

then rerun *ldd* (or *otool*) again.
Do you understand what's happening here?
Why is it not possible to execute *abinit* with an empty `$LD_LIBRARY_PATH`?
How would you fix the problem?

### Troubleshooting

Problems can appear at different levels:

* configuration time
* compilation time
* runtime *i.e.* when executing the code

**Configuration-time errors** are usually due to a misconfigured environment, missing (hard) dependencies
or critical problems in the software stack that make *configure* abort.
Unfortunately, the error message reported by *configure* is not always self-explanatory.
To pinpoint the source of the problem you will need to **search for clues in _config.log_**,
especially the error messages associated with the feature/library that is triggering the error.

This is not as easy as it looks, since *configure* sometimes performs multiple tests to detect your architecture
and some of these tests are **supposed to fail**.
As a consequence, not all the error messages reported in *config.log* are necessarily relevant.
Even if you find the test that makes *configure* abort, the error message may be obscure and difficult to decipher.
In this case, you can ask for help on the forum, but remember to provide enough info on your architecture, the
compilation options and, most importantly, **a copy of _config.log_**.
Without this file, indeed, it is almost impossible to understand what's going on.

An example will help.
Let's assume we are compiling on a cluster using modules provided by our sysadmin.
More specifically, there is an `openmpi_intel2013_sp1.1.106` module that is supposed to provide
the *openmpi* implementation of the MPI library compiled with a particular version of the intel compiler
(remember what we said about using the same version of the compiler).
Obviously **we need to load the modules before running configure** in order to set up our environment,
so we issue:

```sh
module load openmpi_intel2013_sp1.1.106
```

The module seems to work as no error message is printed to the terminal and `which mpicc` shows
that the compiler has been added to $PATH.
At this point we try to configure ABINIT with:

```sh
with_mpi="${MPI_HOME}"
```

where `$MPI_HOME` is an environment variable set by *module load* (use e.g. `env | grep MPI` to list these variables).
Unfortunately, the *configure* script aborts at the very beginning, complaining
that the C compiler does not work!

```text
checking for gcc... /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc
checking for C compiler default output file name...
configure: error: in `/home/gmatteo/abinit/build':
configure: error: C compiler cannot create executables
See `config.log' for more details.
```

Let's analyze the output of *configure*.
The line:

```md
checking for gcc... /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc
```

indicates that *configure* was able to find *mpicc* in *${MPI_HOME}/bin*.
Then an internal test is executed to make sure the wrapper can compile a rather simple C program,
but the test fails and *configure* aborts immediately with the rather explanatory message:

```md
configure: error: C compiler cannot create executables
See `config.log' for more details.
```

If we want to understand why *configure* failed, we have to **open _config.log_ in an editor**
and search for error messages towards the end of the log file.
For example, one can search for the string "C compiler cannot create executables".
Immediately above this line, we find the following section:

??? note "config.log"

    ```sh
    configure:12104: checking whether the C compiler works
    configure:12126: /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc conftest.c  >&5
    /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_reg_xrc_rcv_qp@IBVERBS_1.1'
    /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_modify_xrc_rcv_qp@IBVERBS_1.1'
    /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_open_xrc_domain@IBVERBS_1.1'
    /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_unreg_xrc_rcv_qp@IBVERBS_1.1'
    /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_query_xrc_rcv_qp@IBVERBS_1.1'
    /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_create_xrc_rcv_qp@IBVERBS_1.1'
    /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_create_xrc_srq@IBVERBS_1.1'
    /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_close_xrc_domain@IBVERBS_1.1'
    configure:12130: $? = 1
    configure:12168: result: no
    configure: failed program was:
    | /* confdefs.h */
    | #define PACKAGE_NAME "ABINIT"
    | #define PACKAGE_TARNAME "abinit"
    | #define PACKAGE_VERSION "9.1.2"
    | #define PACKAGE_STRING "ABINIT 9.1.2"
    | #define PACKAGE_BUGREPORT "https://bugs.launchpad.net/abinit/"
    | #define PACKAGE_URL ""
    | #define PACKAGE "abinit"
    | #define VERSION "9.1.2"
    | #define ABINIT_VERSION "9.1.2"
    | #define ABINIT_VERSION_MAJOR "9"
    | #define ABINIT_VERSION_MINOR "1"
    | #define ABINIT_VERSION_MICRO "2"
    | #define ABINIT_VERSION_BUILD "20200824"
    | #define ABINIT_VERSION_BASE "9.1"
    | #define HAVE_OS_LINUX 1
    | /* end confdefs.h.  */
    |
    | int
    | main ()
    | {
    |
    |   ;
    |   return 0;
    | }
    ```

The line

```sh
configure:12126: /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc conftest.c  >&5
```

tells us that *configure* tried to compile a C file named *conftest.c* and that the return value
stored in the `$?` shell variable is non-zero thus indicating failure:

```sh
configure:12130: $? = 1
configure:12168: result: no
```

The failing program (the C *main* after the line "configure: failed program was:")
is a rather simple piece of code, yet our *mpicc* compiler is not able to compile it!
If we look more carefully at the lines after the invocation of *mpicc*,
we see lots of undefined references to functions of the *libibverbs* library:

```sh
configure:12126: /cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/bin/mpicc conftest.c  >&5
/cm/shared/apps/openmpi/1.7.5/intel2013_sp1.1.106/lib/libmpi.so: undefined reference to `ibv_reg_xrc_rcv_qp@IBVERBS_1.1
```

This looks like a mess in the system configuration rather than a problem in the ABINIT build system.
Perhaps there have been changes to the environment, maybe a system upgrade, or the module is simply broken.
In this case you should send the *config.log* to the sysadmin so that they can fix the problem, or just use
another, more recent module.

Obviously, one can encounter cases in which the modules are properly configured yet the *configure* script aborts
because it does not know how to deal with your software stack.
In both cases, **_config.log_ is key to pinpointing the problem**, and sometimes you will find that
the problem is rather simple to solve.
For instance, you may be using Fortran module files produced by *gfortran* while trying to compile with the
intel compiler, or module files produced by a different version of the same compiler.
Perhaps you forgot to add the include directory required by an external library so the compiler
cannot find the include file, or maybe there is a typo in the configuration options.
The take-home message is that several mistakes can be detected by just **inspecting the log messages**
reported in *config.log*, if you know how to search for them.

**Compilation-time errors** are usually due to syntax errors, portability issues or
Fortran constructs that are not supported by that particular version of the compiler.
In the first two cases, please report the problem on the forum.
In the latter case, you will need a more recent version of the compiler.
Sometimes the compilation aborts with an **internal compiler error**: this should be considered
a **bug in the compiler** rather than an error in the ABINIT source code.
Decreasing the optimization level when compiling the routine that triggers the error
(use -O1 or even -O0 for the most problematic cases) may solve the problem; otherwise,
try a more recent version of the compiler.
If you have made non-trivial changes to the code (modifications of datatypes/interfaces),
run `make clean` and recompile.
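
For instance, one can rebuild the whole package at a lower optimization level by reconfiguring
with explicit Fortran flags (a sketch, assuming the build-directory layout used above):

```sh
# inside the build directory: lower the optimization level and rebuild
../configure --with-config-file="myconf.ac9" FCFLAGS="-g -O1"
make clean && make -j4
```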

**Runtime errors** are more difficult to fix as they may require the use of a debugger and some basic
understanding of [Linux signals](https://en.wikipedia.org/wiki/Signal_(IPC)).
Here we focus on two common scenarios: **SIGILL** and **SIGSEGV**.

If the code raises the **SIGILL** signal, it means that the CPU attempted to execute
an instruction it did not understand.
Very likely, your executables/libraries have been compiled for the **wrong architecture**.
This may happen on clusters when the CPU family available on the frontend differs
from the one available on the compute nodes and aggressive optimization options (-O3, -march, -xHost, etc.) are used.
Removing the aggressive optimization options and using the much safer -O2 level may help.
Alternatively, one can **configure and compile** the source directly on the compute node, or use compilation options
compatible with both the frontend and the compute node (ask your sysadmin for details).
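
On Slurm clusters, for example, one can often request an interactive shell on a compute node
and build there (a sketch; resource options and site policies vary):

```sh
salloc --nodes=1 --ntasks=4 --time=02:00:00  # request an interactive allocation
srun --pty bash                              # open a shell on the compute node
```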

!!! warning

    Never ever run calculations on CPUs belonging to different families unless
    you know what you are doing.
    Many MPI codes assume reproducibility at the binary level:
    on different MPI processes, the same set of bits in input should produce the same set of bits in output.
    If you are running on a heterogeneous cluster, select a queue with a single CPU family
    and make sure the code has been compiled with options that are compatible with the compute node.

Segmentation faults (**SIGSEGV**) are usually due to bugs in the code but they may also be
triggered by non-portable code or misconfiguration of the software stack.
When reporting this kind of problem on the forum, please add an input file so that developers
can try to reproduce the problem.
Keep in mind, however, that the problem may not be reproducible on other architectures.
The ideal solution would be to run the code under the control of the debugger,
use the backtrace to locate the line of code where the segmentation fault occurs and then
attach the backtrace to your issue on the forum.

??? note "How to run gdb"

    Using the debugger in sequential mode is really simple.
    First of all, make sure the code has been compiled with the `-g` option
    to generate **source-level debug information**.
    To use the `gdb` GNU debugger, perform the following operations:

      1. Load the executable in the GNU debugger using the syntax:

        ```sh
        gdb path_to_abinit_executable
        ```

      2. Run the code with the `run` command and pass the input file as argument:

        ```sh
        (gdb) run t02.in
        ```

      3. Wait for the error e.g. SIGSEGV, then print the **backtrace** with:

        ```sh
        (gdb) bt
        ```

    PS: avoid debugging code compiled with `-O3` or `-Ofast` as the backtrace may not be reliable.
    Sometimes even `-O2` (the default) is not reliable and you have to resort to print statements
    and bisection to bracket the problematic piece of code.

## How to compile ABINIT on a cluster with the intel toolchain and modules

On intel-based clusters, we suggest compiling ABINIT with the intel compilers (**_icc_** and **_ifort_**)
and MKL in order to achieve better performance.
The MKL library, indeed, provides highly-optimized implementations of BLAS, LAPACK, FFT, and SCALAPACK
that can lead to a **significant speedup** while simplifying the compilation process considerably.
As for MPI, intel provides its own implementation (**Intel MPI**) but it is also possible to employ
*openmpi* or *mpich*, provided these libraries have been compiled with the **same intel compilers**.

In what follows, we assume a cluster in which scientific software is managed
with **modules** and the [EasyBuild](https://easybuild.readthedocs.io/en/latest/index.html) framework.
Before proceeding with the next steps, it is worth summarizing the most important *module* commands.

??? note "module commands"

    To list the modules installed on the cluster, use:

    ```sh
    module avail
    ```

    The syntax to load the module `MODULE_NAME` is:

    ```sh
    module load MODULE_NAME
    ```

    while

    ```sh
    module list
    ```

    prints the list of modules currently loaded.

    To list all modules containing "string", use:

    ```sh
    module spider string  # requires LMOD with LUA
    ```

    Finally,

    ```sh
    module show MODULE_NAME
    ```

    shows the commands in the module file (useful for debugging).
    For a more complete introduction to environment modules, please consult
    [this page](https://support.ceci-hpc.be/doc/_contents/UsingSoftwareAndLibraries/UsingPreInstalledSoftware/index.html).

On my cluster, I can activate **intel MPI** by executing:

```sh
module load releases/2018b
module load intel/2018b
module load iimpi/2018b
```

to load the 2018b intel MPI [EasyBuild toolchain](https://easybuild.readthedocs.io/en/latest/Common-toolchains.html).
On your cluster, you may need to load different modules but the effect
at the level of the shell environment should be the same.
More specifically, **mpiifort** is now in **PATH** (note how *mpiifort* wraps intel *ifort*):

```sh
mpiifort -v
mpiifort for the Intel(R) MPI Library 2018 Update 3 for Linux*
Copyright(C) 2003-2018, Intel Corporation.  All rights reserved.
ifort version 18.0.3  
```

The directories with the libraries required by the compiler/MPI have been added
to **LD_LIBRARY_PATH**, while **CPATH** stores the locations to search for include files.
Last but not least, the environment should now define [intel-specific variables](https://software.intel.com/content/www/us/en/develop/documentation/mpi-developer-reference-windows/top/environment-variable-reference/compilation-environment-variables.html)
whose names start with `I_`:

```sh
$ env | grep I_
I_MPI_ROOT=/opt/cecisw/arch/easybuild/2018b/software/impi/2018.3.222-iccifort-2018.3.222-GCC-7.3.0-2.30
```

Since **I_MPI_ROOT** points to the installation directory of intel MPI, 
we can use this environment variable to tell *configure* how to locate our MPI installation:

```sh
with_mpi="${I_MPI_ROOT}"

FC="mpiifort"  # Use intel wrappers. Important!
CC="mpiicc"    # See warning below
CXX="mpiicpc"

# with_optim_flavor="aggressive"
# FCFLAGS="-g -O2"
```

Optionally, you can use `with_optim_flavor="aggressive"` to let *configure* select compilation
options tuned for performance, or set the options explicitly via **FCFLAGS**.

!!! warning

    Intel MPI installs **two sets of MPI wrappers**:
    (*mpiicc*, *mpiicpc*, *mpiifort*) and (*mpicc*, *mpicxx*, *mpif90*), which use the
    Intel compilers and the GNU compilers, respectively.
    Use the `-show` option (e.g. `mpif90 -show`) to display the underlying compiler.
    As expected,

    ```sh
    $ mpif90 -v

    mpif90 for the Intel(R) MPI Library 2018 Update 3 for Linux*
    COLLECT_GCC=gfortran
    <snip>
    Thread model: posix
    gcc version 7.3.0 (GCC)
    ```

    shows that `mpif90` wraps GNU *gfortran*.
    Unless you really need to use the GNU compilers, we strongly suggest the wrappers
    based on the Intel compilers (**_mpiicc_**, **_mpiicpc_**, **_mpiifort_**).


If we run *configure* with these options, we should see a section at the beginning
in which the build system tests the basic capabilities of the Fortran compiler.
If *configure* stops at this level, it means there is a severe problem with your toolchain.

```text
 ==============================================================================
 === Fortran support                                                        ===
 ==============================================================================

checking for mpiifort... /opt/cecisw/arch/easybuild/2018b/software/impi/2018.3.222-iccifort-2018.3.222-GCC-7.3.0-2.30/bin64/mpiifort
checking whether we are using the GNU Fortran compiler... no
checking whether mpiifort accepts -g... yes
checking which type of Fortran compiler we have... intel 18.0
```

Then we have a section in which *configure* tests the MPI implementation:


??? note "Multicore architecture support"

    ```text
    ==============================================================================
    === Multicore architecture support                                         ===
    ==============================================================================

    checking whether to enable OpenMP support... no
    checking whether to enable MPI... yes
    checking how MPI parameters have been set... yon
    checking whether the MPI C compiler is set... yes
    checking whether the MPI C++ compiler is set... yes
    checking whether the MPI Fortran compiler is set... yes
    checking for MPI C preprocessing flags...
    checking for MPI C flags...
    checking for MPI C++ flags...
    checking for MPI Fortran flags...
    checking for MPI linker flags...
    checking for MPI library flags...
    checking whether the MPI C API works... yes
    checking whether the MPI C environment works... yes
    checking whether the MPI C++ API works... yes
    checking whether the MPI C++ environment works... yes
    checking whether the MPI Fortran API works... yes
    checking whether the MPI Fortran environment works... yes
    checking whether to build MPI I/O code... auto
    checking which level of MPI is supported by the Fortran compiler... 2
    configure: forcing MPI-2 standard level support
    checking whether the MPI library supports MPI_INTEGER16... yes
    checking whether the MPI library supports MPI_CREATE_TYPE_STRUCT... yes
    checking whether the MPI library supports MPI_IBCAST (MPI3)... yes
    checking whether the MPI library supports MPI_IALLGATHER (MPI3)... yes
    checking whether the MPI library supports MPI_IALLTOALL (MPI3)... yes
    checking whether the MPI library supports MPI_IALLTOALLV (MPI3)... yes
    checking whether the MPI library supports MPI_IGATHERV (MPI3)... yes
    checking whether the MPI library supports MPI_IALLREDUCE (MPI3)... yes
    configure:
    configure: dumping all MPI parameters for diagnostics
    configure: ------------------------------------------
    configure:
    configure: Configure options:
    configure:
    configure:   * enable_mpi_inplace = ''
    configure:   * enable_mpi_io      = ''
    configure:   * with_mpi           = 'yes'
    configure:   * with_mpi_level     = ''
    configure:
    configure: Internal parameters
    configure:
    configure:   * MPI enabled (required)                       : yes
    configure:   * MPI C compiler is set (required)             : yes
    configure:   * MPI C compiler works (required)              : yes
    configure:   * MPI Fortran compiler is set (required)       : yes
    configure:   * MPI Fortran compiler works (required)        : yes
    configure:   * MPI environment usable (required)            : yes
    configure:   * MPI C++ compiler is set (optional)           : yes
    configure:   * MPI C++ compiler works (optional)            : yes
    configure:   * MPI-in-place enabled (optional)              : no
    configure:   * MPI-IO enabled (optional)                    : yes
    configure:   * MPI configuration type (computed)            : yon
    configure:   * MPI Fortran level supported (detected)       : 2
    configure:   * MPI_Get_library_version available (detected) : unknown
    configure:
    configure: All required parameters must be set to 'yes'.
    configure: If not, the configuration and/or the build with
    configure: MPI support will very likely fail.
    configure:
    checking whether to activate GPU support... no
    ```

So far so good: our compilers and MPI seem to work, so we can proceed with
the setup of the external libraries.

On my cluster, `module load intel/2018b` has also defined the **MKLROOT** env variable:

```sh
env | grep MKL

MKLROOT=/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl
EBVERSIONIMKL=2018.3.222
```

that can be used in conjunction with the **highly recommended**
[mkl-link-line-advisor](https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor)
to link with MKL. On other clusters, you may need to load an *mkl* module explicitly
(or *composerxe* or *parallel-studio-xe*).

Let's now discuss how to configure ABINIT with MKL, starting from the simplest case:

- BLAS and Lapack from MKL
- FFT from MKL DFTI
- no Scalapack
- no OpenMP threads.

These are the options I have to select in the
[mkl-link-line-advisor](https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor)
to enable this configuration with my software stack:

![](compilation_assets/link_line_advisor.png)

The options should be self-explanatory.
Perhaps the tricky part is **Select interface layer**, where one should select **32-bit integer**.
This simply means that we are compiling and linking code in which the default integer is 32 bits wide
(the default behaviour).
Note how the threading layer is set to **Sequential** (no OpenMP threads)
and how we chose to **link with MKL libraries explicitly** to get the full
link line and compiler options.

Now we can use these options in our configuration file:

```sh
# BLAS/LAPACK with MKL
with_linalg_flavor="mkl"

LINALG_CPPFLAGS="-I${MKLROOT}/include"
LINALG_FCFLAGS="-I${MKLROOT}/include"
LINALG_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl"

# FFT from MKL
with_fft_flavor="dfti"

FFT_CPPFLAGS="-I${MKLROOT}/include"
FFT_FCFLAGS="-I${MKLROOT}/include"
FFT_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl"
```

!!! warning

    **Do not use MKL with FFTW3** for the FFT as the MKL library exports the same symbols as FFTW.
    This means that the linker will receive multiple definitions for the same procedure and
    the **behaviour is undefined**! Use either MKL or FFTW3 with e.g. *openblas*.


If we run *configure* with these options, we should obtain the following output in the
**Linear algebra support** section:

??? note "Linear algebra support"

    ```text
    ==============================================================================
    === Linear algebra support                                                 ===
    ==============================================================================

    checking for the requested linear algebra flavor... mkl
    checking for the serial linear algebra detection sequence... mkl
    checking for the MPI linear algebra detection sequence... mkl
    checking for the MPI acceleration linear algebra detection sequence... none
    checking how to detect linear algebra libraries... verify
    checking for BLAS support in the specified libraries... yes
    checking for AXPBY support in the BLAS libraries... yes
    checking for GEMM3M in the BLAS libraries... yes
    checking for mkl_imatcopy in the specified libraries... yes
    checking for mkl_omatcopy in the specified libraries... yes
    checking for mkl_omatadd in the specified libraries... yes
    checking for mkl_set/get_threads in the specified libraries... yes
    checking for LAPACK support in the specified libraries... yes
    checking for LAPACKE C API support in the specified libraries... no
    checking for PLASMA support in the specified libraries... no
    checking for BLACS support in the specified libraries... no
    checking for ELPA support in the specified libraries... no
    checking how linear algebra parameters have been set... env (flavor: kwd)
    checking for the actual linear algebra flavor... mkl
    checking for linear algebra C preprocessing flags... none
    checking for linear algebra C flags... none
    checking for linear algebra C++ flags... none
    checking for linear algebra Fortran flags... -I/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl/include
    checking for linear algebra linker flags... none
    checking for linear algebra library flags... -L/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl
    configure: WARNING: parallel linear algebra is not available
    ```

Excellent: *configure* detected a working BLAS/Lapack installation, plus some MKL extensions (*mkl_imatcopy*, etc.).
BLACS and Scalapack (parallel linear algebra) have not been detected, but this is expected as we did not ask
for these libraries in the mkl-link-line-advisor GUI.

This is the section in which *configure* checks for the presence of the FFT library
(DFTI from MKL; *goedecker* denotes the internal Fortran version):

??? note "Optimized FFT support"

    ```text
    ==============================================================================
    === Optimized FFT support                                                  ===
    ==============================================================================

    checking which FFT flavors to enable... dfti goedecker
    checking for FFT flavor... dfti
    checking for FFT C preprocessing flags...
    checking for FFT C flags...
    checking for FFT Fortran flags...
    checking for FFT linker flags...
    checking for FFT library flags...
    checking for the FFT flavor to try... dfti
    checking whether to enable DFTI... yes
    checking how DFTI parameters have been set... mkl
    checking for DFTI C preprocessing flags... none
    checking for DFTI C flags... none
    checking for DFTI Fortran flags... -I/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl/include
    checking for DFTI linker flags... none
    checking for DFTI library flags... -L/opt/cecisw/arch/easybuild/2018b/software/imkl/2018.3.222-iimpi-2018b/mkl/lib/intel64 -lmkl_intel_lp64 -lmkl_core -lmkl_sequential -lpthread -lm -ldl
    checking whether the DFTI library works... yes
    checking for the actual FFT flavor to use... dfti
    ```

The line

```text
checking whether the DFTI library works... yes
```

tells us that DFTI has been found and that we can link against it, although this does not necessarily mean
that the final executable will work out of the box.


!!! tip

    You may have noticed that it is also possible to use MKL with GNU *gfortran*, but in this case you need
    to use a different set of libraries, including the so-called **compatibility layer** that allows GCC code
    to call MKL routines.
    Also note that **MKL Scalapack requires either Intel MPI or MPICH2**.


??? note "Optional Exercise"

     Compile ABINIT with BLAS/ScalaPack from MKL.
     Scalapack (or ELPA) may lead to a significant speedup when running GS calculations
     with large [[nband]]. See also the [[np_slk]] input variable.


### How to compile libxc, netcdf4/hdf5 with intel

At this point, one should check whether the cluster provides modules for
*libxc*, *netcdf-fortran*, *netcdf-c* and *hdf5* **compiled with the same toolchain**.
Use `module spider netcdf` or `module keyword netcdf` to find the modules (if any).

Hopefully, you will find a pre-existing installation of *netcdf* and *hdf5* (possibly with MPI-IO support),
as these libraries are quite common on HPC centers.
Load these modules to have `nc-config` and `nf-config` in your $PATH, then use their
`--prefix` option to obtain the installation directory, as done in the previous examples.
Unfortunately, *libxc* and *hdf5* do not provide similar tools, so you will have to find
the installation directory of these libs yourself and pass it to *configure*.
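
On EasyBuild-based clusters, the installation prefix of each loaded module is usually exported
in an `EBROOT<NAME>` environment variable (run `env | grep EBROOT` to list them), so entries
along these lines may work; the exact variable names depend on your site:

```sh
# in the ac9 file: reuse the prefixes exported by the modules
with_netcdf=$(nc-config --prefix)
with_netcdf_fortran=$(nf-config --prefix)
with_hdf5="${EBROOTHDF5}"
with_libxc="${EBROOTLIBXC}"
```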

!!! tip

    You may encounter problems with *libxc*, as this library is rather domain-specific
    and not all HPC centers install it.
    If your cluster does not provide *libxc*, it should not be that difficult
    to reuse the expertise acquired in this tutorial to build
    your own version and install the missing dependencies inside $HOME/local.
    Just remember to:

    1. load the correct modules for MPI with the associated compilers before configuring;
    2. *configure* with **CC=mpiicc** and **FC=mpiifort** so that the intel compilers are used;
    3. install the libraries and prepend $HOME/local/lib to LD_LIBRARY_PATH;
    4. use the *with_LIBNAME* options in conjunction with the $HOME/local prefix in the ac9 file;
    5. run *configure* with the ac9 file.

In the worst-case scenario in which neither *netcdf4/hdf5* nor *libxc* is installed, you may want to
use the **internal fallbacks**.
The procedure goes as follows (see the sketch after the list):

- Start by configuring with a minimalistic set of options, just for MPI and MKL (linalg and FFT).
- The build system will detect that some hard dependencies are missing and will generate a
  *build-abinit-fallbacks.sh* script in the *fallbacks* directory.
- Execute the script to build the missing dependencies **using the toolchain specified
  in the initial configuration file**.
- Finally, reconfigure ABINIT with the fallbacks.
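
A sketch of the whole sequence, assuming the build-directory layout used earlier
(the script name is the one generated by the build system):

```sh
../configure --with-config-file="myconf.ac9"  # minimal config: MPI + MKL only
cd fallbacks
./build-abinit-fallbacks.sh                   # builds the missing dependencies
cd ..                                         # then reconfigure with the fallbacks
```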


## How to compile ABINIT with support for OpenMP threads

!!! tip

    For a quick introduction to MPI and OpenMP and a comparison between the two parallel programming paradigms, see this
    [presentation](https://princetonuniversity.github.io/PUbootcamp/sessions/parallel-programming/Intro_PP_bootcamp_2018.pdf).

Compiling ABINIT with OpenMP is not that difficult, as everything boils down to:

* Using a **threaded version** of BLAS, LAPACK and the FFTs
* Passing **enable_openmp="yes"** to the ABINIT configure script
  so that OpenMP is also activated at the level of the ABINIT Fortran code.

On the contrary, answering the questions:

* When and why should I use OpenMP threads for my calculations?
* How many threads should I use and what is the parallel speedup I should expect?

is much more difficult as there are several factors that should be taken into account.


!!! note

    To make a long story short, one should use OpenMP threads
    when we start to trigger **limitations or bottlenecks in the MPI implementation**,
    especially at the level of the memory requirements or in terms of parallel scalability.
    These problems are usually observed in calculations with large [[natom]], [[mpw]], [[nband]].

    As a matter of fact, it does not make sense to compile ABINIT with OpenMP
    if your calculations are relatively small.
    Indeed, ABINIT is mainly designed with MPI parallelism in mind.
    For instance, calculations done with a relatively large number of $\kk$-points will benefit more from MPI than from OpenMP,
    especially if the number of MPI processes divides the number of $\kk$-points exactly.
    Even worse, do not compile the code with OpenMP support if you do not plan to use threads, because the OpenMP
    version will have an **additional overhead** due to the creation of the threaded sections.

    Remember also that increasing the number of threads does not necessarily lead to faster calculations
    (the same is true for MPI processes).
    There is always an **optimal value** for the number of threads (MPI processes)
    beyond which the parallel efficiency starts to deteriorate.
    Unfortunately, this value is strongly hardware- and software-dependent, so you will need to **benchmark the code**
    before running production calculations.

    Last but not least, OpenMP threads are not necessarily POSIX threads. Hence, if a library provides
    both OpenMP- and POSIX-threaded versions, link with the OpenMP one.

After this necessary preamble, let's discuss how to compile a threaded version.
To activate OpenMP support in the Fortran routines of ABINIT, pass

```sh
enable_openmp="yes"
```

to the *configure* script via the configuration file.
This will automatically activate the compilation option needed to enable OpenMP in the ABINIT source code
(e.g. the `-fopenmp` option for *gfortran*) and define the CPP variable **HAVE_OPENMP in _config.h_**.
Note that this option is just part of the story: a significant fraction of the wall-time is spent in the external
BLAS/FFT routines, so **do not expect big speedups if you do not link against threaded libraries**.

If you are building your own software stack for BLAS/LAPACK and FFT, you will have to
reconfigure the libraries with the correct options for the OpenMP version and then run
*make* and *make install* again to build the threaded versions.
Also note that some library names may change:
FFTW3, for example, ships the OpenMP version in **_libfftw3_omp_**
(see the [official documentation](http://www.fftw.org/fftw3_doc/Installation-and-Supported-Hardware_002fSoftware.html#Installation-and-Supported-Hardware_002fSoftware)), hence the list of libraries in **FFTW3_LIBS** should be changed accordingly, as in the sketch below.
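
For instance, the FFTW3-related entries of the ac9 file might become something like this
(a sketch, assuming a self-compiled FFTW3 under $HOME/local; check the library names
produced by your build):

```sh
enable_openmp="yes"
with_fft_flavor="fftw3"
# list the OpenMP versions before the serial ones
FFTW3_LIBS="-L$HOME/local/lib -lfftw3f_omp -lfftw3_omp -lfftw3f -lfftw3"
FFTW3_FCFLAGS="-I$HOME/local/include"
```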

Life is much easier if you are using intel MKL because, in this case,
it is just a matter of selecting *OpenMP threading* as the threading layer
in the [mkl-link-line-advisor interface](https://software.intel.com/en-us/articles/intel-mkl-link-line-advisor)
and then passing these options to the ABINIT build system together with `enable_openmp="yes"`.
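
With the intel toolchain of the previous section, the linear-algebra entries would change
along these lines (a sketch; always double-check against the advisor output for your MKL version):

```sh
enable_openmp="yes"
with_linalg_flavor="mkl"
# the threaded MKL layer replaces -lmkl_sequential
LINALG_FCFLAGS="-I${MKLROOT}/include"
LINALG_LIBS="-L${MKLROOT}/lib/intel64 -lmkl_intel_lp64 -lmkl_intel_thread -lmkl_core -liomp5 -lpthread -lm -ldl"
```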

!!! Important

    When using threaded libraries, remember to set the number of threads explicitly with e.g.

    ```sh
    export OMP_NUM_THREADS=2
    ```

    either in your *bash_profile* or in the submission script (or in both).
    **By default, OpenMP uses all the available CPUs**, so it is very easy to overload
    the machine, especially if one uses threads in conjunction with MPI processes.

    When running threaded applications with MPI, we suggest allocating a number of **physical CPUs**
    equal to the number of MPI processes times the number of OpenMP threads, as in the example below.
    Computationally intensive applications such as DFT codes are unlikely to gain
    performance from Hyper-Threading technology (the so-called **logical CPUs**).
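
    For instance, a hypothetical job with 4 MPI processes and 2 threads per process
    should reserve 8 physical cores:

    ```sh
    export OMP_NUM_THREADS=2
    mpirun -n 4 abinit t02.in    # 4 x 2 = 8 physical cores in total
    ```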

    We also recommend increasing the **stack size limit** using e.g.

    ```sh
    ulimit -s unlimited
    ```

    if the sysadmin allows you to do so.

    To run the ABINIT test suite with e.g. two OpenMP threads, use the `-o2` option of *runtests.py*.