File: faq.shtml

<!--#include virtual="header.txt"-->

<h1><a id="top">Frequently Asked Questions</a></h1>

<h2>For Management</h2>
<ul>
<li><a href="#foss">Why should I use Slurm or other Free Open Source Software (FOSS)?</a></li>
<li><a href="#acronym">What does "Slurm" stand for?</a></li>
</ul>

<h2>For Users</h2>
<ul>
<li><a href="#comp">Why is my job/node in a COMPLETING state?</a></li>
<li><a href="#rlimit">Why are my resource limits not propagated?</a></li>
<li><a href="#pending">Why is my job not running?</a></li>
<li><a href="#sharing">Why does the srun --overcommit option not permit
  multiple jobs to run on nodes?</a></li>
<li><a href="#purge">Why is my job killed prematurely?</a></li>
<li><a href="#opts">Why are my srun options ignored?</a></li>
<li><a href="#backfill">Why is the Slurm backfill scheduler not starting my
  job?</a></li>
<li><a href="#steps">How can I run multiple jobs from within a single
  script?</a></li>
<li><a href="#multi_batch">How can I run a job within an existing job
  allocation?</a></li>
<li><a href="#user_env">How does Slurm establish the environment for my
  job?</a></li>
<li><a href="#prompt">How can I get shell prompts in interactive mode?</a></li>
<li><a href="#batch_out">How can I get the task ID in the output or error file
  name for a batch job?</a></li>
<li><a href="#parallel_make">Can the <i>make</i> command utilize the resources
  allocated to a Slurm job?</a></li>
<li><a href="#terminal">Can tasks be launched with a remote (pseudo)
  terminal?</a></li>
<li><a href="#force">What does &quot;srun: Force Terminated job&quot;
  indicate?</a></li>
<li><a href="#early_exit">What does this mean: &quot;srun: First task exited
  30s ago&quot; followed by &quot;srun Job Failed&quot;?</a></li>
<li><a href="#memlock">Why is my MPI job  failing due to the locked memory
  (memlock) limit being too low?</a></li>
<li><a href="#inactive">Why is my batch job that launches no job steps being
  killed?</a></li>
<li><a href="#arbitrary">How do I run specific tasks on certain nodes
  in my allocation?</a></li>
<li><a href="#hold">How can I temporarily prevent a job from running
  (e.g. place it into a <i>hold</i> state)?</a></li>
<li><a href="#mem_limit">Why are jobs not getting the appropriate
  memory limit?</a></li>
<li><a href="#mailing_list">Is an archive available of messages posted to
  the <i>slurm-users</i> mailing list?</a></li>
<li><a href="#job_size">Can I change my job's size after it has started
  running?</a></li>
<li><a href="#mpi_symbols">Why is my MPICH2 or MVAPICH2 job not running with
  Slurm? Why does the DAKOTA program not run with Slurm?</a></li>
<li><a href="#estimated_start_time">Why does squeue (and "scontrol show
  jobid") sometimes not display a job's estimated start time?</a></li>
<li><a href="#ansys">How can I run an Ansys program with Slurm?</a></li>
<li><a href="#req">How can a job in a complete or failed state be requeued?</a></li>
<li><a href="#cpu_count">Slurm documentation refers to CPUs, cores and threads.
  What exactly is considered a CPU?</a></li>
<li><a href="#sbatch_srun">What is the difference between the sbatch
  and srun commands?</a></li>
<li><a href="#squeue_color">Can squeue output be color coded?</a></li>
<li><a href="#x11">Can Slurm export an X11 display on an allocated compute node?</a></li>
<li><a href="#unbuffered_cr">Why is the srun --u/--unbuffered option adding
   a carriage return to my output?</a></li>
<li><a href="#sview_colors">Why is sview not coloring/highlighting nodes
    properly?</a></li>
</ul>

<h2>For Administrators</h2>
<ul>
<li><a href="#suspend">How is job suspend/resume useful?</a></li>
<li><a href="#return_to_service">Why is a node shown in state DOWN when the node
  has registered for service?</a></li>
<li><a href="#down_node">What happens when a node crashes?</a></li>
<li><a href="#multi_job">How can I control the execution of multiple
  jobs per node?</a></li>
<li><a href="#inc_plugin">When the Slurm daemon starts, it prints
  &quot;cannot resolve X plugin operations&quot; and exits. What does this mean?</a></li>
<li><a href="#pam_exclude">How can I exclude some users from pam_slurm?</a></li>
<li><a href="#maint_time">How can I dry up the workload for a maintenance
  period?</a></li>
<li><a href="#pam">How can PAM be used to control a user's limits on or
  access to compute nodes?</a></li>
<li><a href="#time">Why are jobs allocated nodes and then unable to initiate
  programs on some nodes?</a></li>
<li><a href="#ping"> Why does <i>slurmctld</i> log that some nodes
  are not responding even if they are not in any partition?</a></li>
<li><a href="#controller"> How should I relocate the primary or backup
  controller?</a></li>
<li><a href="#multi_slurm">Can multiple Slurm systems be run in
  parallel for testing purposes?</a></li>
<li><a href="#multi_slurmd">Can Slurm emulate a larger cluster?</a></li>
<li><a href="#extra_procs">Can Slurm emulate nodes with more
  resources than physically exist on the node?</a></li>
<li><a href="#credential_replayed">What does a
  &quot;credential replayed&quot; error in the <i>SlurmdLogFile</i>
  indicate?</a></li>
<li><a href="#large_time">What does
  &quot;Warning: Note very large processing time&quot;
  in the <i>SlurmctldLogFile</i> indicate?</a></li>
<li><a href="#limit_propagation">Is resource limit propagation
  useful on a homogeneous cluster?</a></li>
<li><a href="#clock">Do I need to maintain synchronized clocks
  on the cluster?</a></li>
<li><a href="#cred_invalid">Why are &quot;Invalid job credential&quot; errors
  generated?</a></li>
<li><a href="#cred_replay">Why are
  &quot;Task launch failed on node ... Job credential replayed&quot;
  errors generated?</a></li>
<li><a href="#globus">Can Slurm be used with Globus?</a></li>
<li><a href="#file_limit">What causes the error
  &quot;Unable to accept new connection: Too many open files&quot;?</a></li>
<li><a href="#slurmd_log">Why does the setting of <i>SlurmdDebug</i> fail
  to log job step information at the appropriate level?</a></li>
<li><a href="#rpm">Why aren't pam_slurm.so, auth_none.so, or other components in a
  Slurm RPM?</a></li>
<li><a href="#slurmdbd">Why should I use the slurmdbd instead of the
  regular database plugins?</a></li>
<li><a href="#debug">How can I build Slurm with debugging symbols?</a></li>
<li><a href="#state_preserve">How can I easily preserve drained node
  information between major Slurm updates?</a></li>
<li><a href="#health_check">Why doesn't the <i>HealthCheckProgram</i>
  execute on DOWN nodes?</a></li>
<li><a href="#batch_lost">What is the meaning of the error
  &quot;Batch JobId=# missing from batch node &lt;node&gt; (not found
  BatchStartTime after startup)&quot;?</a></li>
<li><a href="#accept_again">What does the message
  &quot;srun: error: Unable to accept connection: Resources temporarily unavailable&quot;
  indicate?</a></li>
<li><a href="#task_prolog">How could I automatically print a job's
  Slurm job ID to its standard output?</a></li>
<li><a href="#orphan_procs">Why are user processes and <i>srun</i>
  running even though the job is supposed to be completed?</a></li>
<li><a href="#slurmd_oom">How can I prevent the <i>slurmd</i> and
  <i>slurmstepd</i> daemons from being killed when a node's memory
  is exhausted?</a></li>
<li><a href="#ubuntu">I see the host of my calling node as 127.0.1.1
  instead of the correct IP address.  Why is that?</a></li>
<li><a href="#stop_sched">How can I stop Slurm from scheduling jobs?</a></li>
<li><a href="#scontrol_multi_jobs">Can I update multiple jobs with a single
<i>scontrol</i> command?</a></li>
<li><a href="#amazon_ec2">Can Slurm be used to run jobs on Amazon's EC2?</a></li>
<li><a href="#core_dump">If a Slurm daemon core dumps, where can I find the
  core file?</a></li>
<li><a href="#totalview">How can TotalView be configured to operate with
  Slurm?</a></li>
<li><a href="#git_patch">How can a patch file be generated from a Slurm commit
  in GitHub?</a></li>
<li><a href="#enforce_limits">Why are the resource limits set in the database
  not being enforced?</a></li>
<li><a href="#restore_priority">After manually setting a job priority value,
  how can its priority value be returned to being managed by the
  priority/multifactor plugin?</a></li>
<li><a href="#health_check_example">Does anyone have an example node health check
script for Slurm?</a></li>
<li><a href="#add_nodes">What process should I follow to add nodes to Slurm?</a></li>
<li><a href="#rem_nodes">What process should I follow to remove nodes from Slurm?</a></li>
<li><a href="#licenses">Can Slurm be configured to manage licenses?</a></li>
<li><a href="#salloc_default_command">Can the salloc command be configured to
  launch a shell on a node in the job's allocation?</a></li>
<li><a href="#upgrade">What should I be aware of when upgrading Slurm?</a></li>
<li><a href="#torque">How easy is it to switch from PBS or Torque to Slurm?</a></li>
<li><a href="#sssd">How can I get SSSD to work with Slurm?</a></li>
<li><a href="#ha_db">How critical is configuring high availability for my
  database?</a></li>
<li><a href="#sql">How can I use double quotes in MySQL queries?</a></li>
<li><a href="#reboot">Why is a compute node down with the reason set to
"Node unexpectedly rebooted"?</a></li>
<li><a href="#reqspec">How can a job which has exited with a specific exit code
   be requeued?</a></li>
<li><a href="#user_account">Can a user's account be changed in the database?</a></li>
<li><a href="#mpi_perf">What might account for MPI performance being below the
   expected level?</a></li>
<li><a href="#state_info">How could some jobs submitted immediately before the
   slurmctld daemon crashed be lost?</a></li>
<li><a href="#delete_partition">How do I safely remove partitions?</a></li>
<li><a href="#cpu_freq">Why is Slurm unable to set the CPU frequency for jobs?</a></li>
<li><a href="#cluster_acct">When adding a new cluster, how can the Slurm cluster
    configuration be copied from an existing cluster to the new cluster?</a></li>
<li><a href="#cray_dvs">How can I update Slurm on a Cray DVS file system without
    rebooting the nodes?</a></li>
<li><a href="#dbd_rebuild">How can I rebuild the database hierarchy?</a></li>
<li><a href="#db_upgrade">Is there anything exceptional to be aware of when
    upgrading my database server?</a></li>
<li><a href="#routing_queue">How can a routing queue be configured?</a></li>
<li><a href="#squeue_script">How can I suspend, resume, hold or release all
    of the jobs belonging to a specific user, partition, etc?</a></li>
<li><a href="#changed_uid">I had to change a user's UID and now they cannot submit
    jobs. How do I get the new UID to take effect?</a></li>
<li><a href="#mysql_duplicate">Slurmdbd is failing to start with a 'Duplicate entry'
    error in the database. How do I fix that?</a></li>
<li><a href="#cray_sigbus">Why are applications on my Cray system failing
    with SIGBUS (bus error)?</a></li>
<li><a href="#sysv_memory">How do I configure Slurm to work with System V IPC
    enabled applications?</a></li>
<li><a href="#opencl_pmix">Why is Multi-Instance GPU not working with Slurm and
    PMIx, and complaining about GPUs being 'In use by another client'?</a></li>
<li><a href="#tmpfs_jobcontainer">How can I set up a private /tmp and /dev/shm for
    jobs on my machine?</a></li>
<li><a href="#json_serializer">Why am I getting the following error: "Unable to
    find plugin: serializer/json"?</a></li>
<li><a href="#epel">Why am I being offered an automatic update for Slurm?</a></li>
</ul>

<h2>For Management</h2>
<p><a id="foss"><b>Why should I use Slurm or other Free Open Source Software (FOSS)?</b></a><br>
Free Open Source Software (FOSS) does not mean that it is without cost.
It does mean that you have access to the code so that you are free to
use it, study it, and/or enhance it.
These reasons contribute to Slurm (and FOSS in general) being subject to
active research and development worldwide, displacing proprietary software
in many environments.
If the software is large and complex, like Slurm or the Linux kernel,
then while there is no license fee, its use is not without cost.</p>
<p>If your work is important, you'll want the leading Slurm experts at your
disposal to keep your systems operating at peak efficiency.
While Slurm has a global development community incorporating leading edge
technology, <a href="https://www.schedmd.com">SchedMD</a> personnel have developed
most of the code and can provide competitively priced commercial support.
SchedMD works with various organizations to provide a range of support
options ranging from remote level-3 support to 24x7 on-site personnel.
Customers switching from commercial workload managers to Slurm typically
report higher scalability, better performance and lower costs.</p>

<p><a id="acronym"><b>What does "Slurm" stand for?</b></a><br>
Nothing.</p>
<p>Originally, "SLURM" (completely capitalized) was an acronym for
"Simple Linux Utility for Resource Management". In 2012 the preferred
capitalization was changed to Slurm, and the acronym was dropped &mdash; the
developers preferred to think of Slurm as "sophisticated" rather than "Simple"
by this point. And, as Slurm continued to expand its scheduling capabilities,
the "Resource Management" label was also viewed as outdated.</p>

<h2>For Users</h2>
<p><a id="comp"><b>Why is my job/node in a COMPLETING state?</b></a><br>
When a job is terminating, both the job and its nodes enter the COMPLETING state.
As the Slurm daemon on each node determines that all processes associated with
the job have terminated, that node changes state to IDLE or some other appropriate
state for use by other jobs.
When every node allocated to a job has determined that all processes associated
with it have terminated, the job changes state to COMPLETED or some other
appropriate state (e.g. FAILED).
Normally, this happens within a second.
However, if the job has processes that cannot be terminated with a SIGKILL
signal, the job and one or more nodes can remain in the COMPLETING state
for an extended period of time.
This may be indicative of processes hung waiting for a core file
to complete I/O, or of an operating system failure.
If this state persists, the system administrator should check for processes
associated with the job that cannot be terminated then use the
<span class="commandline">scontrol</span> command to change the node's
state to DOWN (e.g. &quot;scontrol update NodeName=<i>name</i> State=DOWN Reason=hung_completing&quot;),
reboot the node, then reset the node's state to IDLE
(e.g. &quot;scontrol update NodeName=<i>name</i> State=RESUME&quot;).
Note that setting the node DOWN will terminate all running or suspended
jobs associated with that node.
An alternative is to set the node's state to DRAIN until all jobs
associated with it terminate before setting it DOWN and re-booting.</p>
<p>Note that Slurm has two configuration parameters that may be used to
automate some of this process.
<i>UnkillableStepProgram</i> specifies a program to execute when
non-killable processes are identified.
<i>UnkillableStepTimeout</i> specifies how long to wait for processes
to terminate.
See the "man slurm.conf" for more information about these parameters.</p>

<p><a id="rlimit"><b>Why are my resource limits not propagated?</b></a><br>
When the <span class="commandline">srun</span> command executes, it captures the
resource limits in effect at submit time on the node where srun executes.
These limits are propagated to the allocated nodes before initiating the
user's job.
The Slurm daemons running on the allocated nodes then try to establish
identical resource limits for the job being initiated.
There are several possible reasons for not being able to establish those
resource limits.</p>
<ul>
<li>The hard resource limits applied to Slurm's slurmd daemon are lower
than the user's soft resource limits on the submit host. Typically
the slurmd daemon is initiated by the init daemon with the operating
system default limits. This may be addressed either through use of the
ulimit command in the /etc/sysconfig/slurm file or enabling
<a href="#pam">PAM in Slurm</a>.</li>
<li>The user's hard resource limits on the allocated node are lower than
the same user's soft resource limits on the node from which the
job was submitted. It is recommended that the system administrator
establish uniform hard resource limits for users on all nodes
within a cluster to prevent this from occurring.</li>
<li>The PropagateResourceLimits or PropagateResourceLimitsExcept parameters are
configured in slurm.conf and prevent propagation of the specified limits.</li>
</ul>
<p>NOTE: This may produce the error message &quot;Can't propagate RLIMIT_...&quot;.
The error message is printed only if the user explicitly specifies that
the resource limit should be propagated or the srun command is running
with verbose logging of actions from the slurmd daemon (e.g. "srun --slurmd-debug=verbose ...").</p>
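<p>One quick way to check whether limits are being propagated as expected is to
compare the limits on the submit host with those seen inside an allocation
(shown here for the locked memory limit; substitute other ulimit flags as
needed):</p>
<pre>
# Soft and hard locked memory limits on the submit host
$ ulimit -Sl; ulimit -Hl
# Soft and hard locked memory limits as seen by a job on an allocated node
$ srun -N1 bash -c 'ulimit -Sl; ulimit -Hl'
</pre>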

<p><a id="pending"><b>Why is my job not running?</b></a><br>
The answer to this question depends on a lot of factors. The main one is which
scheduler is used by Slurm. Executing the command</p>
<blockquote>
<p> <span class="commandline">scontrol show config | grep SchedulerType</span></p>
</blockquote>
<p> will supply this information. If the scheduler type is <b>builtin</b>, then
jobs will be executed in the order of submission for a given partition. Even if
resources are available to initiate your job immediately, it will be deferred
until no previously submitted job is pending. If the scheduler type is <b>backfill</b>,
then jobs will generally be executed in the order of submission for a given partition
with one exception: later submitted jobs will be initiated early if doing so does
not delay the expected execution time of an earlier submitted job. In order for
backfill scheduling to be effective, users' jobs should specify reasonable time
limits. If jobs do not specify time limits, then all jobs will receive the same
time limit (that associated with the partition), and the ability to backfill schedule
jobs will be limited. The backfill scheduler does not alter job specifications
of required or excluded nodes, so jobs which specify nodes will substantially
reduce the effectiveness of backfill scheduling. See the <a href="#backfill">
backfill</a> section for more details. For any scheduler, you can check priorities
of jobs using the command <span class="commandline">scontrol show job</span>.
Other reasons can include waiting for resources, memory, qos, reservations, etc.
As a guideline, issue an <span class="commandline">scontrol show job &lt;jobid&gt;</span>
and look at the field <i>State</i> and <i>Reason</i> to investigate the cause.
A full list and explanation of the different Reasons can be found in the
<a href="resource_limits.html#reasons">resource limits</a> page.</p>

<p><a id="sharing"><b>Why does the srun --overcommit option not permit multiple jobs
to run on nodes?</b></a><br>
The <b>--overcommit</b> option is a means of indicating that a job or job step is willing
to execute more than one task per processor in the job's allocation. For example,
consider a cluster of two processor nodes. The srun execute line may be something
of this sort</p>
<blockquote>
<p><span class="commandline">srun --ntasks=4 --nodes=1 a.out</span></p>
</blockquote>
<p>This will result in not one, but two nodes being allocated so that each of the four
tasks is given its own processor. Note that the srun <b>--nodes</b> option specifies
a minimum node count and optionally a maximum node count. A command line of</p>
<blockquote>
<p><span class="commandline">srun --ntasks=4 --nodes=1-1 a.out</span></p>
</blockquote>
<p>would result in the request being rejected. If the <b>--overcommit</b> option
is added to either command line, then only one node will be allocated for all
four tasks to use.</p>
<p>More than one job can execute simultaneously on the same compute resource
(e.g. CPU) through the use of srun's <b>--oversubscribe</b> option in
conjunction with the <b>OverSubscribe</b> parameter in Slurm's partition
configuration. See the man pages for srun and slurm.conf for more information.</p>

<p><a id="purge"><b>Why is my job killed prematurely?</b></a><br>
Slurm has a job purging mechanism to remove inactive jobs (resource allocations)
before reaching its time limit, which could be infinite.
This inactivity time limit is configurable by the system administrator.
You can check its value with the command</p>
<blockquote>
<p><span class="commandline">scontrol show config | grep InactiveLimit</span></p>
</blockquote>
<p>The value of InactiveLimit is in seconds.
A zero value indicates that job purging is disabled.
A job is considered inactive if it has no active job steps or if the srun
command creating the job is not responding.
In the case of a batch job, the srun command terminates after the job script
is submitted.
Therefore batch job pre- and post-processing is limited to the InactiveLimit.
Contact your system administrator if you believe the InactiveLimit value
should be changed.</p>

<p><a id="opts"><b>Why are my srun options ignored?</b></a><br>
Everything after the command <span class="commandline">srun</span> is
examined to determine if it is a valid option for srun. The first
token that is not a valid option for srun is considered the command
to execute and everything after that is treated as an option to
the command. For example:</p>
<blockquote>
<p><span class="commandline">srun -N2 hostname -pdebug</span></p>
</blockquote>
<p>srun processes "-N2" as an option to itself. "hostname" is the
command to execute and "-pdebug" is treated as an option to the
hostname command. This will change the name of the computer
on which Slurm executes the command, which is very bad. <b>Don't run
this command as user root!</b></p>

<p><a id="backfill"><b>Why is the Slurm backfill scheduler not starting my job?
</b></a><br>
The most common problem is failing to set job time limits. If all jobs have
the same time limit (for example the partition's time limit), then backfill
will not be effective. Note that partitions can have both default and maximum
time limits, which can be helpful in configuring a system for effective
backfill scheduling.</p>

<p>In addition, there are a multitude of backfill scheduling parameters
which can impact which jobs are considered for backfill scheduling, such
as the maximum number of jobs tested per user. For more information see
the slurm.conf man page and check the configuration of SchedulerParameters
on your system.</p>
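<p>As an illustrative sketch only, a configuration that enables effective
backfill scheduling might combine partition time limits with backfill tuning
parameters (names and values below are examples, not recommendations):</p>
<pre>
# slurm.conf (illustrative values)
SchedulerType=sched/backfill
SchedulerParameters=bf_max_job_user=20,bf_window=1440
PartitionName=batch Nodes=tux[0-99] DefaultTime=01:00:00 MaxTime=24:00:00 State=UP
</pre>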

<p><a id="steps"><b>How can I run multiple jobs from within a
single script?</b></a><br>
A Slurm job is just a resource allocation. You can execute many
job steps within that allocation, either in parallel or sequentially.
Some jobs actually launch thousands of job steps this way. The job
steps will be allocated nodes that are not already allocated to
other job steps. This essentially provides a second level of resource
management within the job for the job steps.</p>
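<p>A minimal sketch of a batch script that runs two job steps in parallel
within one allocation is shown below (<i>step_a</i> and <i>step_b</i> are
placeholder programs):</p>
<pre>
#!/bin/bash
#SBATCH -N2
# Each srun launches one job step on one of the allocated nodes;
# "wait" blocks until both steps have finished.
srun -N1 -n1 ./step_a &amp;
srun -N1 -n1 ./step_b &amp;
wait
</pre>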

<p><a id="multi_batch"><b>How can I run a job within an existing
job allocation?</b></a><br>
There is an srun option <i>--jobid</i> that can be used to specify
a job's ID.
For a batch job or within an existing resource allocation, the
environment variable <i>SLURM_JOB_ID</i> has already been defined,
so all job steps will run within that job allocation unless
otherwise specified.
The one exception to this is when submitting batch jobs.
When a batch job is submitted from within an existing batch job,
it is treated as a new job allocation request and will get a
new job ID unless explicitly set with the <i>--jobid</i> option.
If you specify that a batch job should use an existing allocation,
that job allocation will be released upon the termination of
that batch job.</p>
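<p>For example, to attach a new job step to an existing allocation (1234 is a
placeholder job ID):</p>
<pre>
$ srun --jobid=1234 -n4 hostname
</pre>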

<p><a id="user_env"><b>How does Slurm establish the environment
for my job?</b></a><br>
Slurm processes are not run under a shell, but directly exec'ed
by the <i>slurmd</i> daemon (assuming <i>srun</i> is used to launch
the processes).
The environment variables in effect at the time the <i>srun</i> command
is executed are propagated to the spawned processes.
The <i>~/.profile</i> and <i>~/.bashrc</i> scripts are not executed
as part of the process launch. You can also look at the <i>--export</i> option of
srun and sbatch. See man pages for details.</p>
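<p>The <i>--export</i> option controls which variables are propagated; for
example (MYVAR is a placeholder variable name):</p>
<pre>
# Submit without exporting the caller's environment to the job
$ sbatch --export=NONE my_script.sh
# Propagate the full environment plus one explicitly set variable
$ sbatch --export=ALL,MYVAR=42 my_script.sh
</pre>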

<p><a id="prompt"><b>How can I get shell prompts in interactive
mode?</b></a><br>
<p>Starting in 20.11, the recommended way to get an interactive shell prompt is
to configure <b>use_interactive_step</b> in <i>slurm.conf</i>:</p>
<pre>
LaunchParameters=use_interactive_step
</pre>
<p>This configures <code>salloc</code> to automatically launch an interactive
shell via <code>srun</code> on a node in the allocation whenever
<code>salloc</code> is called without a program to execute.</p>

<p>By default, <b>use_interactive_step</b> creates an <i>interactive step</i> on
a node in the allocation and runs the shell in that step. An interactive step
is to an interactive shell what a batch step is to a batch script - both have
access to all resources in the allocation on the node they are running on, but
do not "consume" them.</p>

<p>Note that beginning in 20.11, steps created by srun are now exclusive. This
means that the previously-recommended way to get an interactive shell,
<span class="commandline">srun --pty $SHELL</span>, will no longer work, as the
shell's step will now consume all resources on the node and cause subsequent
<span class="commandline">srun</span> calls to pend.</p>

<p><a id="batch_out"><b>How can I get the task ID in the output
or error file name for a batch job?</b></a><br>
If you want separate output by task, you will need to build a script
containing this specification. For example:</p>
<pre>
$ cat test
#!/bin/sh
echo begin_test
srun -o out_%j_%t hostname

$ sbatch -n7 -o out_%j test
sbatch: Submitted batch job 65541

$ ls -l out*
-rw-rw-r--  1 jette jette 11 Jun 15 09:15 out_65541
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_0
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_1
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_2
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_3
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_4
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_5
-rw-rw-r--  1 jette jette  6 Jun 15 09:15 out_65541_6

$ cat out_65541
begin_test

$ cat out_65541_2
tdev2
</pre>

<p><a id="parallel_make"><b>Can the <i>make</i> command
utilize the resources allocated to a Slurm job?</b></a><br>
Yes. There is a patch available for GNU make version 3.81
available as part of the Slurm distribution in the file
<i>contribs/make-3.81.slurm.patch</i>.  For GNU make version 4.0 you
can use the patch in the file <i>contribs/make-4.0.slurm.patch</i>.
This patch will use Slurm to launch tasks across a job's current resource
allocation. Depending upon the size of modules to be compiled, this may
or may not improve performance. If most modules are thousands of lines
long, the use of additional resources should more than compensate for the
overhead of Slurm's task launch. Use with make's <i>-j</i> option within an
existing Slurm allocation. Outside of a Slurm allocation, make's behavior
will be unchanged.</p>

<p><a id="terminal"><b>Can tasks be launched with a remote (pseudo)
terminal?</b></a><br>
There are several ways to do this; the recommended ones are the following:<br>
The simplest method is to make use of srun's <i>--pty</i> option,
(e.g. <i>srun --pty bash -i</i>).
Srun's <i>--pty</i> option runs task zero in pseudo terminal mode. Bash's
<i>-i</i> option instructs it to run in interactive mode (with prompts).<br>
In addition to that method you have the option to automatically have salloc
place terminals on the compute nodes by setting "use_interactive_step" as
an option in LaunchParameters.</p>

<p><a id="force"><b>What does &quot;srun: Force Terminated job&quot;
indicate?</b></a><br>
The srun command normally terminates when the standard output and
error I/O from the spawned tasks end. This does not necessarily
happen at the same time that a job step is terminated. For example,
a file system problem could render a spawned task non-killable
at the same time that I/O to srun is pending. Alternately a network
problem could prevent the I/O from being transmitted to srun.
In any event, the srun command is notified when a job step is
terminated, either upon reaching its time limit or being explicitly
killed. If the srun has not already terminated, the message
&quot;srun: Force Terminated job&quot; is printed.
If the job step's I/O does not terminate in a timely fashion
thereafter, pending I/O is abandoned and the srun command
exits.</p>

<p><a id="early_exit"><b>What does this mean:
&quot;srun: First task exited 30s ago&quot;
followed by &quot;srun Job Failed&quot;?</b></a><br>
The srun command monitors when tasks exit. By default, 30 seconds
after the first task exits, the job is killed.
This typically indicates some type of job failure and continuing
to execute a parallel job when one of the tasks has exited is
not normally productive. This behavior can be changed using srun's
<i>--wait=&lt;time&gt;</i> option to either change the timeout
period or disable the timeout altogether. See srun's man page
for details.</p>

<p><a id="memlock"><b>Why is my MPI job  failing due to the
locked memory (memlock) limit being too low?</b></a><br>
By default, Slurm propagates all of your resource limits at the
time of job submission to the spawned tasks.
This can be disabled by specifically excluding the propagation of
specific limits in the <i>slurm.conf</i> file. For example
<i>PropagateResourceLimitsExcept=MEMLOCK</i> might be used to
prevent the propagation of a user's locked memory limit from a
login node to a dedicated node used for his parallel job.
If the user's resource limit is not propagated, the limit in
effect for the <i>slurmd</i> daemon will be used for the spawned job.
A simple way to control this is to ensure that user <i>root</i> has a
sufficiently large resource limit and that <i>slurmd</i> takes full
advantage of this limit. For example, you can set user root's locked
memory limit to be unlimited on the compute nodes (see
<i>"man limits.conf"</i>) and ensure that <i>slurmd</i> inherits it
(e.g. by adding <i>"LimitMEMLOCK=infinity"</i>
to your systemd's <i>slurmd.service</i> file). It may also be desirable to lock
the slurmd daemon's memory to help ensure that it keeps responding if memory
swapping begins. A sample <i>/etc/sysconfig/slurm</i> which can be read from
systemd is shown below.
Related information about <a href="#pam">PAM</a> is also available.</p>
<pre>
#
# Example /etc/sysconfig/slurm
#
# Memlocks the slurmd process's memory so that if a node
# starts swapping, the slurmd will continue to respond
SLURMD_OPTIONS="-M"
</pre>
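<p>One way to apply <i>LimitMEMLOCK=infinity</i> without editing the packaged
unit file is a systemd drop-in (a sketch; the directory may need to be created
first):</p>
<pre>
# /etc/systemd/system/slurmd.service.d/memlock.conf
[Service]
LimitMEMLOCK=infinity
</pre>
<p>Then run &quot;systemctl daemon-reload&quot; and restart <i>slurmd</i> for
the change to take effect.</p>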

<p><a id="inactive"><b>Why is my batch job that launches no
job steps being killed?</b></a><br>
Slurm has a configuration parameter <i>InactiveLimit</i> intended
to kill jobs that do not spawn any job steps for a configurable
period of time. Your system administrator may modify the <i>InactiveLimit</i>
to satisfy your needs. Alternately, you can just spawn a job step
at the beginning of your script to execute in the background. It
will be purged when your script exits or your job otherwise terminates.
A line of this sort near the beginning of your script should suffice:<br>
<i>srun -N1 -n1 sleep 999999 &amp;</i></p>

<p><a id="arbitrary"><b>How do I run specific tasks on certain nodes
in my allocation?</b></a><br>
One of the distribution methods for srun '<b>-m</b>
or <b>--distribution</b>' is 'arbitrary'. This means you can tell Slurm to
lay out your tasks in any fashion you want. For instance, if I had an
allocation of 2 nodes and wanted to run 4 tasks on the first node and
1 task on the second, and my nodes allocated from SLURM_JOB_NODELIST
were tux[0-1], my srun line would look like this:<br><br>
<i>srun -n5 -m arbitrary -w tux[0,0,0,0,1] hostname</i><br><br>
If I wanted something similar but wanted the third task to be on tux 1,
I could run this:<br><br>
<i>srun -n5 -m arbitrary -w tux[0,0,1,0,0] hostname</i><br><br>
Here is a simple Perl script named arbitrary.pl that can be run to easily lay
out tasks on nodes as they appear in SLURM_JOB_NODELIST.</p>
<pre>
#!/usr/bin/perl
use strict;
use warnings;

# Comma-separated task counts per node, e.g. "4,1"
my @tasks = split(',', $ARGV[0]);
# Expand the allocation's node list into individual hostnames
my @nodes = `scontrol show hostnames $ENV{SLURM_JOB_NODELIST}`;
my $node_cnt = $#nodes + 1;
my $task_cnt = $#tasks + 1;

if ($node_cnt < $task_cnt) {
	print STDERR "ERROR: You only have $node_cnt nodes, but requested layout on $task_cnt nodes.\n";
	$task_cnt = $node_cnt;
}

# Repeat each hostname as many times as tasks requested on that node
my $cnt = 0;
my $layout;
foreach my $task (@tasks) {
	my $node = $nodes[$cnt];
	last if !$node;
	chomp($node);
	for (my $i = 0; $i < $task; $i++) {
		$layout .= "," if $layout;
		$layout .= "$node";
	}
	$cnt++;
}
print $layout;
</pre>

<p>We can now use this script in our srun line in this fashion:<br><br>
<i>srun -m arbitrary -n5 -w `arbitrary.pl 4,1` -l hostname</i></p>
<p>This will lay out 4 tasks on the first node in the allocation and 1
task on the second node.</p>

<p><a id="hold"><b>How can I temporarily prevent a job from running
(e.g. place it into a <i>hold</i> state)?</b></a><br>
The easiest way to do this is to change a job's earliest begin time
(optionally set at job submit time using the <i>--begin</i> option).
The example below places a job into hold state (preventing its initiation
for 30 days) and later permitting it to start now.</p>
<pre>
$ scontrol update JobId=1234 StartTime=now+30days
... later ...
$ scontrol update JobId=1234 StartTime=now
</pre>

<p><a id="mem_limit"><b>Why are jobs not getting the appropriate
memory limit?</b></a><br>
This is probably a variation on the <a href="#memlock">locked memory limit</a>
problem described above.
Use the same solution for the AS (Address Space), RSS (Resident Set Size),
or other limits as needed.</p>

<p><a id="mailing_list"><b>Is an archive available of messages posted to
the <i>slurm-users</i> mailing list?</b></a><br>
Yes, it is at <a href="http://groups.google.com/group/slurm-users">
http://groups.google.com/group/slurm-users</a></p>

<p><a id="job_size"><b>Can I change my job's size after it has started
running?</b></a><br>
Slurm supports the ability to decrease the size of jobs.
Requesting fewer hardware resources, and changing partition, qos,
reservation, licenses, etc. is only allowed for pending jobs.</p>

<p>Use the <i>scontrol</i> command to change a job's size either by specifying
a new node count (<i>NumNodes=</i>) for the job or identify the specific nodes
(<i>NodeList=</i>) that you want the job to retain.
Any job steps running on the nodes which are relinquished by the job will be
killed unless initiated with the <i>--no-kill</i> option.
After the job size is changed, some environment variables created by Slurm
containing information about the job's environment will no longer be valid and
should either be removed or altered (e.g. SLURM_JOB_NUM_NODES,
SLURM_JOB_NODELIST and SLURM_NTASKS).
The <i>scontrol</i> command will generate a script that can be executed to
reset local environment variables.
You must retain the SLURM_JOB_ID environment variable in order for the
<i>srun</i> command to gather information about the job's current state and
specify the desired node and/or task count in subsequent <i>srun</i> invocations.
A new accounting record is generated when a job is resized, showing the job to
have been resubmitted and restarted at the new size.
An example is shown below.</p>
<pre>
#!/bin/bash
srun my_big_job
scontrol update JobId=$SLURM_JOB_ID NumNodes=2
. slurm_job_${SLURM_JOB_ID}_resize.sh
srun -N2 my_small_job
rm slurm_job_${SLURM_JOB_ID}_resize.*
</pre>

<p><a id="mpi_symbols"><b>Why is my MPICH2 or MVAPICH2 job not running with
Slurm? Why does the DAKOTA program not run with Slurm?</b></a><br>
The Slurm library used to support MPICH2 or MVAPICH2 references a variety of
symbols. If those symbols resolve to functions or variables in your program
rather than the appropriate library, the application will fail. For example
<a href="http://dakota.sandia.gov">DAKOTA</a>, versions 5.1 and
older, contains a function named regcomp, which will get used rather
than the POSIX regex functions. Rename DAKOTA's function and
references from regcomp to something else to make it work properly.</p>

<p><a id="estimated_start_time"><b>Why does squeue (and "scontrol show
jobid") sometimes not display a job's  estimated start time?</b></a><br>
When the backfill scheduler is configured, it provides an estimated start time
for jobs that are candidates for backfill. Pending jobs with dependencies
will not have an estimate as it is difficult to predict what resources will
be available when the jobs they are dependent on terminate. Also note that
the estimate is better for jobs expected to start soon, as most running jobs
end before their estimated time. There are other restrictions on backfill that
may apply. See the <a href="#backfill">backfill</a> section for more details.
</p>
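<p>Where an estimate has been computed, it can be displayed with the
<i>--start</i> option of squeue (1234 is a placeholder job ID):</p>
<pre>
$ squeue --start -j 1234
</pre>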

<p><a id="ansys"><b>How can I run an Ansys program with Slurm?</b></a><br>
If you are talking about an interactive run of the Ansys app, then you can use
this simple script (it is for Ansys Fluent):</p>
<pre>
$ cat ./fluent-srun.sh
#!/usr/bin/env bash
HOSTSFILE=.hostlist-job$SLURM_JOB_ID
if [ "$SLURM_PROCID" == "0" ]; then
   srun hostname -f > $HOSTSFILE
   fluent -t $SLURM_NTASKS -cnf=$HOSTSFILE -ssh 3d
   rm -f $HOSTSFILE
fi
exit 0
</pre>

<p>To run an interactive session, use srun like this:</p>
<pre>
$ srun -n &lt;tasks&gt; ./fluent-srun.sh
</pre>

<p><a id="req"><b>How can a job in a complete or failed state be requeued?</b></a>
<br>
Slurm supports requeuing jobs in a done or failed state. Use the
command:</p>
<p><b>scontrol requeue job_id</b></p>
<p>The job will then be requeued back in the PENDING state and scheduled again.
See man(1) scontrol.
</p>
<p>Consider a simple job like this:</p>
<pre>
$ cat zoppo
#!/bin/sh
echo "hello, world"
exit 10

$ sbatch -o here ./zoppo
Submitted batch job 10
</pre>
<p>
The job finishes in FAILED state because it exits with
a non-zero value. We can requeue the job back to
the PENDING state and the job will be dispatched again.
</p>
<pre>
$ scontrol requeue 10
$ squeue
     JOBID PARTITION  NAME     USER   ST   TIME  NODES NODELIST(REASON)
      10      mira    zoppo    david  PD   0:00    1   (NonZeroExitCode)
$ squeue
    JOBID PARTITION   NAME     USER ST     TIME  NODES NODELIST(REASON)
      10      mira    zoppo    david  R    0:03    1      alanz1
</pre>
<p>Slurm supports requeuing jobs in a hold state with the command:</p>
<p><b>scontrol requeuehold job_id</b></p>
<p>The job can be in state RUNNING, SUSPENDED, COMPLETED or FAILED
before being requeued.</p>
<pre>
$ scontrol requeuehold 10
$ squeue
    JOBID PARTITION  NAME     USER ST       TIME  NODES NODELIST(REASON)
    10      mira    zoppo    david PD       0:00      1 (JobHeldUser)
</pre>

<p><a id="cpu_count"><b>Slurm documentation refers to CPUs, cores and threads.
What exactly is considered a CPU?</b></a><br>
If your nodes are configured with hyperthreading, then a CPU is equivalent
to a hyperthread.
Otherwise a CPU is equivalent to a core.
You can determine if your nodes have more than one thread per core
using the command "scontrol show node" and looking at the values of
"ThreadsPerCore".</p>
<p>Note that even on systems with hyperthreading enabled, the resources will
generally be allocated to jobs at the level of a core (see NOTE below).
Two different jobs will not share a core except through the use of a partition
OverSubscribe configuration parameter.
For example, a job requesting resources for three tasks on a node with
ThreadsPerCore=2 will be allocated two full cores.
Note that Slurm commands contain a multitude of options to control
resource allocation with respect to base boards, sockets, cores and threads.</p>
<p>(<b>NOTE:</b> An exception to this would be if the system administrator
configured SelectTypeParameters=CR_CPU and specified each node's CPU count
without its socket/core/thread specification. In that case, each thread would be
independently scheduled as a CPU. This is not a typical configuration.)</p>
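<p>For example, to check the thread count per core reported for a node
(<i>tux0</i> is a placeholder node name):</p>
<pre>
$ scontrol show node tux0 | grep -o 'ThreadsPerCore=[0-9]*'
</pre>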

<p><a id="sbatch_srun"><b>What is the difference between the sbatch
  and srun commands?</b></a><br>
The srun command has two different modes of operation. First, if not run within
an existing job (i.e. not within a Slurm job allocation created by salloc or
sbatch), then it will create a job allocation and spawn an application.
If run within an existing allocation, the srun command only spawns the
application.
For this question, we will only address the first mode of operation and compare
creating a job allocation using the sbatch and srun commands.</p>

<p>The srun command is designed for interactive use, with someone monitoring
the output.
The output of the application is seen as output of the srun command,
typically at the user's terminal.
The sbatch command is designed to submit a script for later execution and its
output is written to a file.
Command options used in the job allocation are almost identical.
The most noticeable difference in options is that the sbatch command supports
the concept of <a href="job_array.html">job arrays</a>, while srun does not.
Another significant difference is in fault tolerance.
Failures involving sbatch jobs typically result in the job being requeued
and executed again, while failures involving srun typically result in an
error message being generated with the expectation that the user will respond
in an appropriate fashion.</p>

<p><a id="squeue_color"><b>Can squeue output be color coded?</b></a><br>
The squeue command output is not color coded, but other tools can be used to
add color. One such tool is ColorWrapper
(<a href="https://github.com/rrthomas/cw">https://github.com/rrthomas/cw</a>).
A sample ColorWrapper configuration file and output are shown below.</p>
<pre>
path /bin:/usr/bin:/sbin:/usr/sbin:&lt;env&gt;
usepty
base green+
match red:default (Resources)
match black:default (null)
match black:cyan N/A
regex cyan:default  PD .*$
regex red:default ^\d*\s*C .*$
regex red:default ^\d*\s*CG .*$
regex red:default ^\d*\s*NF .*$
regex white:default ^JOBID.*
</pre>
<img src="squeue_color.png" width=600>

<p><a id="x11"><b>Can Slurm export an X11 display on an allocated compute node?</b></a><br/>
You can use the X11 builtin feature starting at version 17.11.
It is enabled by setting <i>PrologFlags=x11</i> in <i>slurm.conf</i>.
Other X11 plugins must be deactivated.
<br/>
Run it as shown:
</p>
<pre>
$ ssh -X user@login1
$ srun -n1 --pty --x11 xclock
</pre>
<p>
An alternative for older versions is to build and install an optional SPANK
plugin for that functionality. Instructions to build and install the plugin
follow. This SPANK plugin will not work in combination with the native X11
support, so you must disable the latter by compiling Slurm with
<i>--disable-x11</i>. This plugin relies on the OpenSSH library and provides
features such as GSSAPI support.<br/> Update the Slurm installation path as needed:</p>
<pre>
# It may be obvious, but don't forget the -X on ssh
$ ssh -X alex@testserver.com

# Get the plugin
$ mkdir git
$ cd git
$ git clone https://github.com/hautreux/slurm-spank-x11.git
$ cd slurm-spank-x11

# Manually edit the X11_LIBEXEC_PROG macro definition
$ vi slurm-spank-x11.c
$ vi slurm-spank-x11-plug.c
$ grep "define X11_" slurm-spank-x11.c
#define X11_LIBEXEC_PROG "/opt/slurm/17.02/libexec/slurm-spank-x11"
$ grep "define X11_LIBEXEC_PROG" slurm-spank-x11-plug.c
#define X11_LIBEXEC_PROG "/opt/slurm/17.02/libexec/slurm-spank-x11"


# Compile
$ gcc -g -o slurm-spank-x11 slurm-spank-x11.c
$ gcc -g -I/opt/slurm/17.02/include -shared -fPIC -o x11.so slurm-spank-x11-plug.c

# Install
$ mkdir -p /opt/slurm/17.02/libexec
$ install -m 755 slurm-spank-x11 /opt/slurm/17.02/libexec
$ install -m 755 x11.so /opt/slurm/17.02/lib/slurm

# Configure
$ echo -e "optional x11.so" >> /opt/slurm/17.02/etc/plugstack.conf
$ cd ~/tests

# Run
$ srun -n1 --pty --x11 xclock
alex@node1's password:
</pre>

<p><a id="unbuffered_cr"><b>Why is the srun --u/--unbuffered option adding
   a carriage character return to my output?</b></a><br>
The libc library used by many programs internally buffers output rather than
writing it immediately. This is done for performance reasons.
The only way to disable this internal buffering is to configure the program to
write to a pseudo terminal (PTY) rather than to a regular file.
This configuration causes <u>some</u> implementations of libc to prepend the
carriage return character before all line feed characters.
Removing the carriage return character would result in desired formatting
in some instances, while causing bad formatting in other cases.
In any case, Slurm is not adding the carriage return character, but displaying
the actual program's output.</p>

<p><a id="sview_colors"><b>Why is sview not coloring/highlighting nodes
    properly?</b></a><br>
sview color-coding is affected by the GTK theme. The node status grid
is made up of button widgets and certain GTK themes don't show the color
setting as desired. Changing GTK themes can restore proper color-coding.</p>



<h2>For Administrators</h2>

<p><a id="suspend"><b>How is job suspend/resume useful?</b></a><br>
Job suspend/resume is most useful to get particularly large jobs initiated
in a timely fashion with minimal overhead. Say you want to get a full-system
job initiated. Normally you would need to either cancel all running jobs
or wait for them to terminate. Canceling jobs results in the loss of
all the work they have done up to that point.
Waiting for the jobs to terminate can take hours, depending upon your
system configuration. A more attractive alternative is to suspend the
running jobs, run the full-system job, then resume the suspended jobs.
This can easily be accomplished by configuring a special queue for
full-system jobs and using a script to control the process.
The script would stop the other partitions, suspend running jobs in those
partitions, and start the full-system partition.
The process can be reversed when desired.
One can effectively gang schedule (time-slice) multiple jobs
using this mechanism, although the algorithms to do so can get quite
complex.
Suspending and resuming a job makes use of the SIGSTOP and SIGCONT
signals respectively, so swap and disk space should be sufficient to
accommodate all jobs allocated to a node, either running or suspended.
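<p>A minimal sketch of such a control script is shown below. The partition
names ("batch" and "full_system") and the job script name are hypothetical:</p>
<pre>
#!/bin/bash
# Stop scheduling on the regular partition and suspend its running jobs
scontrol update PartitionName=batch State=DOWN
for job in $(squeue -h -p batch -t running -o "%i"); do
    scontrol suspend $job
done

# Enable the full-system partition and submit the large job
scontrol update PartitionName=full_system State=UP
sbatch -p full_system full_system_job.sh

# ... once the full-system job completes, reverse the process ...
scontrol update PartitionName=full_system State=DOWN
for job in $(squeue -h -p batch -t suspended -o "%i"); do
    scontrol resume $job
done
scontrol update PartitionName=batch State=UP
</pre>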

<p><a id="return_to_service"><b>Why is a node shown in state
DOWN when the node has registered for service?</b></a><br>
The configuration parameter <i>ReturnToService</i> in <i>slurm.conf</i>
controls how DOWN nodes are handled.
Set its value to one in order for DOWN nodes to automatically be
returned to service once the <i>slurmd</i> daemon registers
with a valid node configuration.
A value of zero is the default and results in a node staying DOWN
until an administrator explicitly returns it to service using
the command &quot;scontrol update NodeName=whatever State=RESUME&quot;.
See &quot;man slurm.conf&quot; and &quot;man scontrol&quot; for more
details.</p>
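<p>For example, the following <i>slurm.conf</i> setting automatically returns
DOWN nodes to service after a valid registration:</p>
<pre>
ReturnToService=1
</pre>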

<p><a id="down_node"><b>What happens when a node crashes?</b></a><br>
A node is set DOWN when the slurmd daemon on it stops responding
for <i>SlurmdTimeout</i> as defined in <i>slurm.conf</i>.
The node can also be set DOWN when certain errors occur or the
node's configuration is inconsistent with that defined in <i>slurm.conf</i>.
Any active job on that node will be killed unless it was submitted
with the srun option <i>--no-kill</i>.
Any active job step on that node will be killed.
See the slurm.conf and srun man pages for more information.</p>

<p><a id="multi_job"><b>How can I control the execution of multiple
jobs per node?</b></a><br>
There are two mechanisms to control this.
If you want to allocate individual processors on a node to jobs,
configure <i>SelectType=select/cons_res</i>.
See <a href="cons_res.html">Consumable Resources in Slurm</a>
for details about this configuration.
If you want to allocate whole nodes to jobs, configure
<i>SelectType=select/linear</i>.
Each partition also has a configuration parameter <i>OverSubscribe</i>
that enables more than one job to execute on each node.
See <i>man slurm.conf</i> for more information about these
configuration parameters.</p>
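<p>A minimal <i>slurm.conf</i> sketch combining these options is shown below.
The node and partition names, and the oversubscription level, are hypothetical:</p>
<pre>
# Allocate individual cores to jobs
SelectType=select/cons_res
SelectTypeParameters=CR_Core

# Allow up to 4 jobs to share each node of this partition
PartitionName=shared Nodes=tux[1-32] OverSubscribe=YES:4 Default=YES
</pre>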

<p><a id="inc_plugin"><b>When the Slurm daemon starts, it
prints &quot;cannot resolve X plugin operations&quot; and exits.
What does this mean?</b></a><br>
This means that symbols expected in the plugin were
not found by the daemon. This typically happens when the
plugin was built or installed improperly or the configuration
file is telling the plugin to use an old plugin (say from the
previous version of Slurm). Restart the daemon in verbose mode
for more information (e.g. &quot;slurmctld -Dvvvvv&quot;).

<p><a id="pam_exclude"><b>How can I exclude some users from pam_slurm?</b></a><br>
<b>CAUTION:</b> Please test this on a test machine/VM before you actually do
this on your Slurm computers.</p>

<p><b>Step 1.</b> Make sure pam_listfile.so exists on your system.
The following command is an example on Red Hat 6:</p>
<pre>
ls -la /lib64/security/pam_listfile.so
</pre>

<p><b>Step 2.</b> Create user list (e.g. /etc/ssh/allowed_users):</p>
<pre>
# /etc/ssh/allowed_users
root
myadmin
</pre>
<p>Optionally, change the file mode to keep it hidden from regular users:</p>
<pre>
chmod 600 /etc/ssh/allowed_users
</pre>
<p><b>NOTE:</b> root does not strictly need to be listed in allowed_users, but
including it provides an extra measure of safety.</p>

<p><b>Step 3.</b> On /etc/pam.d/sshd, add pam_listfile.so with sufficient flag
before pam_slurm.so (e.g. my /etc/pam.d/sshd looks like this):</p>
<pre>
#%PAM-1.0
auth       required     pam_sepermit.so
auth       include      password-auth
account    sufficient   pam_listfile.so item=user sense=allow file=/etc/ssh/allowed_users onerr=fail
account    required     pam_slurm.so
account    required     pam_nologin.so
account    include      password-auth
password   include      password-auth
# pam_selinux.so close should be the first session rule
session    required     pam_selinux.so close
session    required     pam_loginuid.so
# pam_selinux.so open should only be followed by sessions to be executed in the user context
session    required     pam_selinux.so open env_params
session    optional     pam_keyinit.so force revoke
session    include      password-auth
</pre>
<p>(Information courtesy of Koji Tanaka, Indiana University)</p>

<p><a id="maint_time"><b>How can I dry up the workload for a
maintenance period?</b></a><br>
Create a resource reservation as described in Slurm's
<a href="reservations.html">Resource Reservation Guide</a>.

<p><a id="pam"><b>How can PAM be used to control a user's limits on
or access to compute nodes?</b></a><br>
To control a user's limits on a compute node:<br>
<p>First, enable Slurm's use of PAM by setting <i>UsePAM=1</i> in
<i>slurm.conf</i>.</p>
<p>Second, establish PAM configuration file(s) for Slurm in <i>/etc/pam.conf</i>
or the appropriate files in the <i>/etc/pam.d</i> directory (e.g.
<i>/etc/pam.d/sshd</i>) by adding the line "account required pam_slurm.so".
A basic configuration you might use is:</p>
<pre>
account  required  pam_unix.so
account  required  pam_slurm.so
auth     required  pam_localuser.so
session  required  pam_limits.so
</pre>
<p>Third, set the desired limits in <i>/etc/security/limits.conf</i>.
For example, to set the locked memory limit to unlimited for all users:</p>
<pre>
*   hard   memlock   unlimited
*   soft   memlock   unlimited
</pre>
<p>Finally, you need to disable Slurm's forwarding of the limits from the
session from which the <i>srun</i> initiating the job ran. By default
all resource limits are propagated from that session. For example, adding
the following line to <i>slurm.conf</i> will prevent the locked memory
limit from being propagated: <i>PropagateResourceLimitsExcept=MEMLOCK</i>.</p>

<p>To control a user's access to a compute node:</p>
<p>The pam_slurm_adopt and pam_slurm modules prevent users from
logging into nodes that they have not been allocated (except for user
root, which can always login).
They are both included with the Slurm distribution.
<p>The pam_slurm_adopt module is highly recommended for most installations,
and is documented in its <a href="pam_slurm_adopt.shtml">own guide</a>.</p>
<p>pam_slurm is older and less functional.
These modules are built by default for RPM packages, but can be disabled using
the .rpmmacros option "%_without_pam 1" or by entering the command line
option "--without pam" when the configure program is executed.
Their source code is in the "contribs/pam" and "contribs/pam_slurm_adopt"
directories respectively.</p>
<p>The use of either pam_slurm_adopt or pam_slurm does not require
<i>UsePAM</i> being set. The two uses of PAM are independent.</p>

<p><a id="time"><b>Why are jobs allocated nodes and then unable
to initiate programs on some nodes?</b></a><br>
This typically indicates that the time on some nodes is not consistent
with the node on which the <i>slurmctld</i> daemon executes. In order to
initiate a job step (or batch job), the <i>slurmctld</i> daemon generates
a credential containing a time stamp. If the <i>slurmd</i> daemon
receives a credential containing a time stamp later than the current
time or more than a few minutes in the past, it will be rejected.
If you check in the <i>SlurmdLogFile</i> on the nodes of interest, you
will likely see messages of this sort: "<i>Invalid job credential from
&lt;some IP address&gt;: Job credential expired</i>." Make the times
consistent across all of the nodes and all should be well.

<p><a id="ping"><b>Why does <i>slurmctld</i> log that some nodes
are not responding even if they are not in any partition?</b></a><br>
The <i>slurmctld</i> daemon periodically pings the <i>slurmd</i>
daemon on every configured node, even if not associated with any
partition. You can control the frequency of this ping with the
<i>SlurmdTimeout</i> configuration parameter in <i>slurm.conf</i>.

<p><a id="controller"><b>How should I relocate the primary or
backup controller?</b></a><br>
If the cluster's computers used for the primary or backup controller
will be out of service for an extended period of time, it may be desirable
to relocate them. In order to do so, follow this procedure:</p>
<ol>
<li>Stop all Slurm daemons</li>
<li>Modify the <i>SlurmctldHost</i> values in the <i>slurm.conf</i> file</li>
<li>Distribute the updated <i>slurm.conf</i> file to all nodes</li>
<li>Copy the <i>StateSaveLocation</i> directory to the new host and
make sure the permissions allow the <i>SlurmUser</i> to read and write it.</li>
<li>Restart all Slurm daemons</li>
</ol>
<p>There should be no loss of any running or pending jobs. Ensure that
any nodes added to the cluster have a current <i>slurm.conf</i> file
installed.
<b>CAUTION:</b> If two nodes are simultaneously configured as the primary
controller (two nodes on which <i>SlurmctldHost</i> specify the local host
and the <i>slurmctld</i> daemon is executing on each), system behavior will be
destructive. If a compute node has an incorrect <i>SlurmctldHost</i> parameter,
that node may be rendered unusable, but no other harm will result.

<p><a id="multi_slurm"><b>Can multiple Slurm systems be run in
parallel for testing purposes?</b></a><br>
Yes, this is a great way to test new versions of Slurm.
Just install the test version in a different location with a different
<i>slurm.conf</i>.
The test system's <i>slurm.conf</i> should specify different
pathnames and port numbers to avoid conflicts.
The only problem is if more than one version of Slurm is configured
with <i>burst_buffer/*</i> plugins or others that may interact with external
system APIs.
In that case, there can be conflicting API requests from
the different Slurm systems.
This can be avoided by configuring the test system with <i>burst_buffer/none</i>.

<p><a id="multi_slurmd"><b>Can Slurm emulate a larger cluster?</b></a><br>
Yes, this can be useful for testing purposes.
It has also been used to partition "fat" nodes into multiple Slurm nodes.
There are two ways to do this.
The best method for most conditions is to run one <i>slurmd</i>
daemon per emulated node in the cluster as follows.
<ol>
<li>When executing the <i>configure</i> program, use the option
<i>--enable-multiple-slurmd</i> (or add that option to your <i>~/.rpmmacros</i>
file).</li>
<li>Build and install Slurm in the usual manner.</li>
<li>In <i>slurm.conf</i> define the desired node names (arbitrary
names used only by Slurm) as <i>NodeName</i> along with the actual
address of the physical node in <i>NodeHostname</i>. Multiple
<i>NodeName</i> values can be mapped to a single
<i>NodeHostname</i>.  Note that each <i>NodeName</i> on a single
physical node needs to be configured to use a different port number
(set <i>Port</i> to a unique value on each line for each node).  You
will also want to use the "%n" symbol in slurmd related path options in
slurm.conf (<i>SlurmdLogFile</i> and <i>SlurmdPidFile</i>). </li>
<li>When starting the <i>slurmd</i> daemon, include the <i>NodeName</i>
of the node that it is supposed to serve on the execute line (e.g.
"slurmd -N hostname").</li>
<li> This is an example of the <i>slurm.conf</i> file with the  emulated nodes
and ports configuration. Any valid value for the CPUs, memory or other
valid node resources can be specified.</li>
</ol>

<pre>
NodeName=dummy26[1-100] NodeHostName=achille Port=[6001-6100] NodeAddr=127.0.0.1 CPUs=4 RealMemory=6000
PartitionName=mira Default=yes Nodes=dummy26[1-100]
</pre>

<p>See the
<a href="programmer_guide.html#multiple_slurmd_support">Programmers Guide</a>
for more details about configuring multiple slurmd support.</p>

<p>In order to emulate a really large cluster, it can be more
convenient to use a single <i>slurmd</i> daemon.
That daemon will not be able to launch many tasks, but can
suffice for developing or testing scheduling software.
Do not run job steps with more than a couple of tasks each
or execute more than a few jobs at any given time.
Doing so may result in the <i>slurmd</i> daemon exhausting its
memory and failing.
<b>Use this method with caution.</b>
<ol>
<li>Execute the <i>configure</i> program with your normal options
plus <i>--enable-front-end</i> (this will define HAVE_FRONT_END in
the resulting <i>config.h</i> file.</li>
<li>Build and install Slurm in the usual manner.</li>
<li>In <i>slurm.conf</i> define the desired node names (arbitrary
names used only by Slurm) as <i>NodeName</i> along with the actual
name and address of the <b>one</b> physical node in <i>NodeHostName</i>
and <i>NodeAddr</i>.
Up to 64k nodes can be configured in this virtual cluster.</li>
<li>Start your <i>slurmctld</i> and one <i>slurmd</i> daemon.
It is advisable to use the "-c" option to start the daemons without
trying to preserve any state files from previous executions.
Be sure to use the "-c" option when switching from this mode too.</li>
<li>Create job allocations as desired, but do not run job steps
with more than a couple of tasks.</li>
</ol>

<pre>
$ ./configure --enable-debug --enable-front-end --prefix=... --sysconfdir=...
$ make install
$ grep NodeHostName slurm.conf
<i>NodeName=dummy[1-1200] NodeHostName=localhost NodeAddr=127.0.0.1</i>
$ slurmctld -c
$ slurmd -c
$ sinfo
<i>PARTITION AVAIL  TIMELIMIT NODES  STATE NODELIST</i>
<i>pdebug*      up      30:00  1200   idle dummy[1-1200]</i>
$ cat tmp
<i>#!/bin/bash</i>
<i>sleep 30</i>
$ srun -N200 -b tmp
<i>srun: jobid 65537 submitted</i>
$ srun -N200 -b tmp
<i>srun: jobid 65538 submitted</i>
$ srun -N800 -b tmp
<i>srun: jobid 65539 submitted</i>
$ squeue
<i>JOBID PARTITION  NAME   USER  ST  TIME  NODES NODELIST(REASON)</i>
<i>65537    pdebug   tmp  jette   R  0:03    200 dummy[1-200]</i>
<i>65538    pdebug   tmp  jette   R  0:03    200 dummy[201-400]</i>
<i>65539    pdebug   tmp  jette   R  0:02    800 dummy[401-1200]</i>
</pre>

<p><a id="extra_procs"><b>Can Slurm emulate nodes with more
resources than physically exist on the node?</b></a><br>
Yes. In the slurm.conf file, configure <i>SlurmdParameters=config_overrides</i>
and specify
any desired node resource specifications (<i>CPUs</i>, <i>Sockets</i>,
<i>CoresPerSocket</i>, <i>ThreadsPerCore</i>, and/or <i>TmpDisk</i>).
Slurm will use the resource specification for each node that is
given in <i>slurm.conf</i> and will not check these specifications
against those actually found on the node. The system would best be configured
with <i>TaskPlugin=task/none</i>, so that launched tasks can run on any
available CPU under operating system control.
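<p>A short sketch of such a configuration follows. The node names and resource
values are arbitrary examples:</p>
<pre>
SlurmdParameters=config_overrides
TaskPlugin=task/none
NodeName=tux[1-16] CPUs=64 Sockets=4 CoresPerSocket=8 ThreadsPerCore=2 TmpDisk=16384
</pre>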

<p><a id="credential_replayed"><b>What does a
&quot;credential replayed&quot;
error in the <i>SlurmdLogFile</i> indicate?</b></a><br>
This error is indicative of the <i>slurmd</i> daemon not being able
to respond to job initiation requests from the <i>srun</i> command
in a timely fashion (a few seconds).
<i>Srun</i> responds by resending the job initiation request.
When the <i>slurmd</i> daemon finally starts to respond, it
processes both requests.
The second request is rejected and the event is logged with
the "credential replayed" error.
If you check the <i>SlurmdLogFile</i> and <i>SlurmctldLogFile</i>,
you should see signs of the <i>slurmd</i> daemon's non-responsiveness.
A variety of factors can be responsible for this problem
including
<ul>
<li>Diskless nodes encountering network problems</li>
<li>Very slow Network Information Service (NIS)</li>
<li>The <i>Prolog</i> script taking a long time to complete</li>
</ul>
<p>Configure <i>MessageTimeout</i> in slurm.conf to a value higher than the
default 10 seconds.</p>

<p><a id="large_time"><b>What does
&quot;Warning: Note very large processing time&quot;
in the <i>SlurmctldLogFile</i> indicate?</b></a><br>
This error is indicative of some operation taking an unexpectedly
long time to complete, over one second to be specific.
Setting the value of the <i>SlurmctldDebug</i> configuration parameter
to <i>debug2</i> or higher should identify which operation(s) are
experiencing long delays.
This message typically indicates long delays in file system access
(writing state information or getting user information).
Another possibility is that the node on which the slurmctld
daemon executes has exhausted memory and is paging.
Try running the program <i>top</i> to check for this possibility.

<p><a id="limit_propagation"><b>Is resource limit propagation
useful on a homogeneous cluster?</b></a><br>
Resource limit propagation permits a user to modify resource limits
and submit a job with those limits.
By default, Slurm automatically propagates all resource limits in
effect at the time of job submission to the tasks spawned as part
of that job.
System administrators can utilize the <i>PropagateResourceLimits</i>
and <i>PropagateResourceLimitsExcept</i> configuration parameters to
change this behavior.
Users can override defaults using the <i>srun --propagate</i>
option.
See <i>"man slurm.conf"</i> and <i>"man srun"</i> for more information
about these options.
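<p>For example, to propagate only the core file size limit to the spawned
tasks ("my_app" is a placeholder for the user's program):</p>
<pre>
$ srun --propagate=CORE ./my_app
</pre>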

<p><a id="clock"><b>Do I need to maintain synchronized
clocks on the cluster?</b></a><br>
In general, yes. Having inconsistent clocks may cause nodes to
be unusable. Slurm log files should contain references to
expired credentials. For example:
<pre>
error: Munge decode failed: Expired credential
ENCODED: Wed May 12 12:34:56 2008
DECODED: Wed May 12 12:01:12 2008
</pre>

<p><a id="cred_invalid"><b>Why are &quot;Invalid job credential&quot;
errors generated?</b></a><br>
This error is indicative of Slurm's job credential files being inconsistent across
the cluster. All nodes in the cluster must have the matching public and private
keys as defined by <b>JobCredPrivateKey</b> and <b>JobCredPublicKey</b> in the
Slurm configuration file <b>slurm.conf</b>.

<p><a id="cred_replay"><b>Why are
&quot;Task launch failed on node ... Job credential replayed&quot;
errors generated?</b></a><br>
This error indicates that a job credential generated by the slurmctld daemon
corresponds to a job that the slurmd daemon has already revoked.
The slurmctld daemon selects job ID values based upon the configured
value of <b>FirstJobId</b> (the default value is 1) and each job gets
a value one larger than the previous job.
On job termination, the slurmctld daemon notifies the slurmd on each
allocated node that all processes associated with that job should be
terminated.
The slurmd daemon maintains a list of the jobs which have already been
terminated to avoid replay of task launch requests.
If the slurmctld daemon is cold-started (with the &quot;-c&quot; option
or &quot;/etc/init.d/slurm startclean&quot;), it starts job ID values
over based upon <b>FirstJobId</b>.
If the slurmd is not also cold-started, it will reject job launch requests
for jobs that it considers terminated.
The solution to this problem is to cold-start all slurmd daemons whenever
the slurmctld daemon is cold-started.

<p><a id="globus"><b>Can Slurm be used with Globus?</b></a><br>
Yes. Build and install Slurm's Torque/PBS command wrappers along with
the Perl APIs from Slurm's <i>contribs</i> directory and configure
<a href="http://www-unix.globus.org/">Globus</a> to use those PBS commands.
Note there are RPMs available for both of these packages, named
<i>torque</i> and <i>perlapi</i> respectively.

<p><a id="file_limit"><b>What causes the error
&quot;Unable to accept new connection: Too many open files&quot;?</b></a><br>
The srun command automatically increases its open file limit to
the hard limit in order to process all of the standard input and output
connections to the launched tasks. It is recommended that you set the
open file hard limit to 8192 across the cluster.
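<p>For example, the limit could be raised cluster-wide in
<i>/etc/security/limits.conf</i> as follows:</p>
<pre>
*    soft    nofile    8192
*    hard    nofile    8192
</pre>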

<p><a id="slurmd_log"><b>Why does the setting of <i>SlurmdDebug</i>
fail to log job step information at the appropriate level?</b></a><br>
There are two programs involved here. One is <b>slurmd</b>, which is
a persistent daemon running at the desired debug level. The second
program is <b>slurmstepd</b>, which executes the user job and its
debug level is controlled by the user. Submitting the job with
an option of <i>--debug=#</i> will result in the desired level of
detail being logged in the <i>SlurmdLogFile</i> plus the output
of the program.

<p><a id="rpm"><b>Why aren't pam_slurm.so, auth_none.so, or other components in a
Slurm RPM?</b></a><br>
It is possible that at build time the required dependencies for building the
library are missing. If you want to build the library then install pam-devel
and compile again. See the file slurm.spec in the Slurm distribution for a list
of other options that you can specify at compile time with rpmbuild flags
and your <i>rpmmacros</i> file.

<p>The auth_none plugin is in a separate RPM and not built by default.
Using the auth_none plugin means that Slurm communications are not
authenticated, so you probably do not want to run in this mode of operation
except for testing purposes. If you want to build the auth_none RPM then
add <i>--with auth_none</i> on the rpmbuild command line or add
<i>%_with_auth_none</i> to your ~/rpmmacros file. See the file slurm.spec
in the Slurm distribution for a list of other options.

<p><a id="slurmdbd"><b>Why should I use the slurmdbd instead of the
regular database plugins?</b></a><br>
While the normal storage plugins will work fine without the added
layer of the slurmdbd there are some great benefits to using the
slurmdbd.
<ol>
<li>Added security.  Using the slurmdbd you can have an authenticated
connection to the database.</li>
<li>Offloading processing from the controller. With the slurmdbd there is no
slowdown to the controller due to a slow or overloaded database.</li>
<li>Keeping enterprise wide accounting from all Slurm clusters in one database.
The slurmdbd is multi-threaded and designed to handle all the
accounting for the entire enterprise.</li>
<li>With the database plugins you can query with sacct accounting stats from
any node Slurm is installed on. With the slurmdbd you can also query any
cluster using the slurmdbd from any other cluster's nodes. Other tools like
sreport are also available.</li>
</ol>

<p><a id="debug"><b>How can I build Slurm with debugging symbols?</b></a><br>
When configuring, run the configure script with <i>--enable-developer</i> option.
That will provide asserts, debug messages and the <i>-Werror</i> flag, that
will in turn activate <i>--enable-debug</i>.
<br/>With the <i>--enable-debug</i> flag, the code will be compiled with
<i>-ggdb3</i> and <i>-g -O1 -fno-strict-aliasing</i> flags that will produce
extra debugging information. Another possible option to use is
<i>--disable-optimizations</i> that will set <i>-O0</i>.
See also <i>auxdir/x_ac_debug.m4</i> for more details.
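<p>For example (the installation prefix and configuration directory are only
placeholders):</p>
<pre>
$ ./configure --prefix=/opt/slurm --sysconfdir=/opt/slurm/etc --enable-developer --disable-optimizations
$ make -j install
</pre>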

<p><a id="state_preserve"><b>How can I easily preserve drained node
information between major Slurm updates?</b></a><br>
Major Slurm updates generally have changes in the state save files and
communication protocols, so a cold-start (without state) is generally
required. If you have nodes in a DRAIN state and want to preserve that
information, you can easily build a script to preserve that information
using the <i>sinfo</i> command. The following command line will report the
<i>Reason</i> field for every node in a DRAIN state and write the output
in a form that can be executed later to restore state.
<pre>
sinfo -t drain -h -o "scontrol update nodename='%N' state=drain reason='%E'"
</pre>

<p><a id="health_check"><b>Why doesn't the <i>HealthCheckProgram</i>
execute on DOWN nodes?</b></a><br>
Hierarchical communications are used for sending this message. If there
are DOWN nodes in the communications hierarchy, messages will need to
be re-routed. This limits Slurm's ability to tightly synchronize the
execution of the <i>HealthCheckProgram</i> across the cluster, which
could adversely impact performance of parallel applications.
The use of CRON or node startup scripts may be better suited to ensure
that <i>HealthCheckProgram</i> gets executed on nodes that are DOWN
in Slurm.

<p><a id="batch_lost"><b>What is the meaning of the error
&quot;Batch JobId=# missing from batch node &lt;node&gt; (not found
  BatchStartTime after startup)&quot;?</b></a><br>
A shell is launched on node zero of a job's allocation to execute
the submitted program. The <i>slurmd</i> daemon executing on each compute
node will periodically report to the <i>slurmctld</i> what programs it
is executing. If a batch program is expected to be running on some
node (i.e. node zero of the job's allocation) and is not found, the
message above will be logged and the job canceled. This typically is
associated with exhausting memory on the node or some other critical
failure that cannot be recovered from.

<p><a id="accept_again"><b>What does the message
&quot;srun: error: Unable to accept connection: Resources temporarily unavailable&quot;
indicate?</b></a><br>
This has been reported on some larger clusters running SUSE Linux when
a user's resource limits are reached. You may need to increase limits
for locked memory and stack size to resolve this problem.

<p><a id="task_prolog"><b>How could I automatically print a job's
Slurm job ID to its standard output?</b></a><br>
The configured <i>TaskProlog</i> is the only thing that can write to
the job's standard output or set extra environment variables for a job
or job step. To write to the job's standard output, precede the message
with "print ". To export environment variables, output a line of this
form "export name=value". The example below will print a job's Slurm
job ID and allocated hosts for a batch job only.

<pre>
#!/bin/sh
#
# Sample TaskProlog script that will print a batch job's
# job ID and node list to the job's stdout
#

if [ X"$SLURM_STEP_ID" = "X" -a X"$SLURM_PROCID" = "X"0 ]
then
  echo "print =========================================="
  echo "print SLURM_JOB_ID = $SLURM_JOB_ID"
  echo "print SLURM_JOB_NODELIST = $SLURM_JOB_NODELIST"
  echo "print =========================================="
fi
</pre>

<p><a id="orphan_procs"><b>Why are user processes and <i>srun</i>
running even though the job is supposed to be completed?</b></a><br>
Slurm relies upon a configurable process tracking plugin to determine
when all of the processes associated with a job or job step have completed.
Those plugins relying upon a kernel patch can reliably identify every process.
Those plugins dependent upon process group IDs or parent process IDs are not
reliable. See the <i>ProctrackType</i> description in the <i>slurm.conf</i>
man page for details. We rely upon the cgroup plugin for most systems.</p>

<p><a id="slurmd_oom"><b>How can I prevent the <i>slurmd</i> and
<i>slurmstepd</i> daemons from being killed when a node's memory
is exhausted?</b></a><br>
You can set the value in the <i>/proc/self/oom_adj</i> for
<i>slurmd</i> and <i>slurmstepd</i> by initiating the <i>slurmd</i>
daemon with the <i>SLURMD_OOM_ADJ</i> and/or <i>SLURMSTEPD_OOM_ADJ</i>
environment variables set to the desired values.
A value of -17 typically will disable killing.</p>
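<p>For example, the environment variables can be set when launching the
daemon, shown here on the command line for illustration:</p>
<pre>
$ SLURMD_OOM_ADJ=-17 SLURMSTEPD_OOM_ADJ=-17 slurmd
</pre>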

<p><a id="ubuntu"><b>I see the host of my calling node as 127.0.1.1
    instead of the correct IP address.  Why is that?</b></a><br>
Some systems by default will put your host in the /etc/hosts file as
something like</p>
<pre>
127.0.1.1	snowflake.llnl.gov	snowflake
</pre>
<p>This will cause srun and Slurm commands to use the 127.0.1.1 address
instead of the correct address and prevent communications between nodes.
The solution is to either remove this line or configure a different NodeAddr
that is known by your other nodes.</p>

<p>The CommunicationParameters=NoInAddrAny configuration parameter is subject to
this same problem, which can also be addressed by removing the actual node
name from the "127.0.1.1" as well as the "127.0.0.1"
addresses in the /etc/hosts file.  It is ok if they point to
localhost, but not the actual name of the node.</p>
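<p>For illustration, a corrected /etc/hosts might look like the following,
where 10.0.0.11 is a placeholder for the node's real address:</p>
<pre>
127.0.0.1   localhost
10.0.0.11   snowflake.llnl.gov   snowflake
</pre>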

<p><a id="stop_sched"><b>How can I stop Slurm from scheduling jobs?</b></a><br>
You can stop Slurm from scheduling jobs on a per partition basis by setting
that partition's state to DOWN. Set its state UP to resume scheduling.
For example:</p>
<pre>
$ scontrol update PartitionName=foo State=DOWN
$ scontrol update PartitionName=bar State=UP
</pre>

<p><a id="scontrol_multi_jobs"><b>Can I update multiple jobs with a
single <i>scontrol</i> command?</b></a><br>
No, but you can probably use <i>squeue</i> to build the script taking
advantage of its filtering and formatting options. For example:</p>
<pre>
$ squeue -tpd -h -o "scontrol update jobid=%i priority=1000" >my.script
</pre>

<p><a id="amazon_ec2"><b>Can Slurm be used to run jobs on
Amazon's EC2?</b></a><br>
Yes, here is a description of Slurm use with
<a href="http://aws.amazon.com/ec2/">Amazon's EC2</a> courtesy of
Ashley Pittman:</p>
<p>I do this regularly and have no problem with it, the approach I take is to
start as many instances as I want and have a wrapper around
ec2-describe-instances that builds a /etc/hosts file with fixed hostnames
and the actual IP addresses that have been allocated.  The only other step
then is to generate a slurm.conf based on how many nodes you've chosen to boot
that day.  I run this wrapper script on my laptop and it generates the files
and then rsyncs them to all the instances automatically.</p>
<p>One thing I found is that Slurm refuses to start if any nodes specified in
the slurm.conf file aren't resolvable. I initially tried to specify cloud[0-15]
in slurm.conf, but if I configure fewer than 16 nodes in /etc/hosts this
doesn't work, so I dynamically generate the slurm.conf as well as the hosts
file.</p>
<p>As a comment about EC2, I just run generic AMIs and have a persistent EBS
storage device which I attach to the first instance when I start up.  This
contains a /usr/local with my software such as Slurm, pdsh and MPI installed,
which I then copy over the /usr/local on the first instance and NFS export to
all other instances.  This way I have persistent home directories and a very
simple first-login script that configures the virtual cluster for me.</p>

<p><a id="core_dump"><b>If a Slurm daemon core dumps, where can I find the
core file?</b></a><br>
If <i>slurmctld</i> is started with the -D option, then the core file will be
written to the current working directory. If <i>SlurmctldLogFile</i> is an
absolute path, the core file will be written to this directory. Otherwise the
core file will be written to the <i>StateSaveLocation</i>, or "/var/tmp/" as a
last resort.<br>
SlurmUser must have write permission on the directories. If none of the above
directories have write permission for SlurmUser, no core file will be produced.
For testing purposes the command "scontrol abort" can be used to abort the
slurmctld daemon and generate a core file.

<p>If <i>slurmd</i> is started with the -D option, then the core file will also be
written to the current working directory. If <i>SlurmdLogFile</i> is an
absolute path, the core file will be written to this directory.
Otherwise the core file will be written to the <i>SlurmdSpoolDir</i>, or
"/var/tmp/" as a last resort.<br>
If none of the above directories can be written, no core file will be produced.
</p>

<p>For <i>slurmstepd</i>, the core file will depend upon when the failure
occurs. If it is running in a privileged phase, it will be in the same location
as that described above for the slurmd daemon. If it is running in an
unprivileged phase, it will be in the spawned job's working directory.</p>


<p>Nevertheless, in some operating systems this can vary:</p>
<ul>
<li>
E.g., in RHEL the event
may be captured by the abrt daemon and the core written to the configured abrt
dump location (e.g. /var/spool/abrt).
</li>
<li>
On Cray XC, the ATP
(Abnormal Termination Processing) daemon acts the same way, if it is enabled.
</li>
</ul>

<p>Normally, distributions need some more tweaking in order to allow the core
files to be generated correctly.</p>

<p>slurmstepd uses the setuid() (set user ID) function to escalate
privileges. On some systems, and under certain security policies, this can
prevent core files from being generated.
<br>To allow their generation on such systems you usually must enable the
suid_dumpable kernel parameter:</p>

<p>Set <i>/proc/sys/fs/suid_dumpable</i> to 2, either at runtime:</p>
<pre>
sysctl fs.suid_dumpable=2
</pre>
<p>or permanently in sysctl.conf:</p>
<pre>
fs.suid_dumpable = 2
</pre>

<p>The value of 2, "suidsafe", makes any binary which normally not be dumped is
dumped readable by root only.<br>This allows the end user to remove such a dump
but not access it directly. For security reasons core dumps in this mode will
not overwrite one another or other files.<br> This mode is appropriate when
administrators are attempting to debug problems in a normal environment.</p>

<p>Then you must also set the core pattern to an absolute pathname:</p>

<pre>sysctl kernel.core_pattern=/tmp/core.%e.%p</pre>

<p>We recommend reading your distribution's documentation about the
configuration of these parameters.</p>

<p>You will usually also need to configure the system core limits, since they
may be set to 0.</p>
<pre>
$ grep core /etc/security/limits.conf
#        - core - limits the core file size (KB)
*               hard    core            unlimited
*               soft    core            unlimited
</pre>
<p>On some systems it is not enough to set a hard limit; you must also set a
soft limit.</p>

<p>Also, to propagate the core limit to user jobs, the
<i>PropagateResourceLimits=CORE</i> parameter in slurm.conf may be needed.</p>

<p>Also be sure to give SlurmUser the appropriate permissions to write to the
core location directories.</p>

<p>NOTE: On a diskless node, if the core_pattern or /var/spool/abrt points to
an in-memory filesystem such as tmpfs and the job caused an out-of-memory
condition, generating the core may fill up the machine's memory and hang it.
It is therefore encouraged to write core dumps to persistent storage. Be
careful of multiple nodes writing a core dump to a shared filesystem, since
doing so may significantly impact it.
</p>

<b>Other exceptions:</b>

<p>On Centos 6, also set "ProcessUnpackaged = yes" in the file
/etc/abrt/abrt-action-save-package-data.conf.

<p>On RHEL6, also set "DAEMON_COREFILE_LIMIT=unlimited" in the file
rc.d/init.d/functions.</p>

<p>On an SELinux-enabled system, or on a distribution with a similar security
system, make sure it allows daemons to dump cores:</p>

<pre>$ getsebool allow_daemons_dump_core</pre>

<p>coredumpctl can also give valuable information:</p>

<pre>$ coredumpctl info</pre>

<p><a id="totalview"><b>How can TotalView be configured to operate with
  Slurm?</b></a><br>
The following lines should also be added to the global <i>.tvdrc</i> file
for TotalView to operate with Slurm:</p>
<pre>
# Enable debug server bulk launch: Checked
dset -set_as_default TV::bulk_launch_enabled true

# Command:
# Beginning with TV 7X.1, TV supports Slurm and %J.
# Specify --mem-per-cpu=0 in case Slurm configured with default memory
# value and we want TotalView to share the job's memory limit without
# consuming any of the job's memory so as to block other job steps.
dset -set_as_default TV::bulk_launch_string {srun --mem-per-cpu=0 -N%N -n%N -w`awk -F. 'BEGIN {ORS=","} {if (NR==%N) ORS=""; print $1}' %t1` -l --input=none %B/tvdsvr%K -callback_host %H -callback_ports %L -set_pws %P -verbosity %V -working_directory %D %F}

# Temp File 1 Prototype:
# Host Lines:
# Slurm NodeNames need to be unadorned hostnames. In case %R returns
# fully qualified hostnames, list the hostnames in %t1 here, and use
# awk in the launch string above to strip away domain name suffixes.
dset -set_as_default TV::bulk_launch_tmpfile1_host_lines {%R}
</pre>

<p><a id="git_patch"><b>How can a patch file be generated from a Slurm
commit in GitHub?</b></a><br>
Find and open the commit in GitHub then append ".patch" to the URL and save
the resulting file. For an example, see:
<a href="https://github.com/SchedMD/slurm/commit/91e543d433bed11e0df13ce0499be641774c99a3.patch">
https://github.com/SchedMD/slurm/commit/91e543d433bed11e0df13ce0499be641774c99a3.patch</a>
</p>

<p><a id="enforce_limits"><b>Why are the resource limits set in the
database not being enforced?</b></a><br>
In order to enforce resource limits, set the value of
<b>AccountingStorageEnforce</b> in each cluster's slurm.conf configuration
file appropriately. If <b>AccountingStorageEnforce</b> does not contain
an option of "limits", then resource limits will not be enforced on that cluster.
See <a href="resource_limits.html">Resource Limits</a> for more information.</p>

<p><a id="restore_priority"><b>After manually setting a job priority
value, how can its priority value be returned to being managed by the
priority/multifactor plugin?</b></a><br>
Hold and then release the job as shown below.</p>
<pre>
$ scontrol hold &lt;jobid&gt;
$ scontrol release &lt;jobid&gt;
</pre>

<p><a id="health_check_example"><b>Does anyone have an example node
health check script for Slurm?</b></a><br>
Probably the most comprehensive and lightweight health check tool out
there is
<a href="https://github.com/mej/nhc">Node Health Check</a>.
It has integration with Slurm as well as Torque resource managers.</p>

<p><a id="add_nodes"><b>What process should I follow to add nodes to Slurm?</b></a><br>
The slurmctld daemon has a multitude of bitmaps to track state of nodes and cores
in the system. Adding nodes to a running system would require the slurmctld daemon
to rebuild all of those bitmaps, which the developers feel would be safer to do by
restarting the daemon. Communications from the slurmd daemons on the compute
nodes to the slurmctld daemon include a configuration file checksum, so you
probably also want to maintain a common slurm.conf file on all nodes. The
following procedure is recommended:</p>
<ol>
<li>Stop the slurmctld daemon (e.g. "systemctl stop slurmctld" on the head node)</li>
<li>Update the slurm.conf file on all nodes in the cluster</li>
<li>Restart the slurmd daemons on all nodes (e.g. "systemctl restart slurmd" on all nodes)</li>
<li>Restart the slurmctld daemon (e.g. "systemctl start slurmctld" on the head node)</li>
</ol>

<p>NOTE: Jobs that were submitted with srun and are still waiting for an
allocation when new nodes are added to the slurm.conf can fail if they are
allocated one of the new nodes.</p>

<p><a id="rem_nodes"><b>What process should I follow to remove nodes from Slurm?</b></a><br>
To safely remove a node from a system, it's best to drain the node of all jobs.
This ensures that job processes aren't running on the node after removal. On
restart of the controller, if a node is removed from a running job the
controller will kill the job on any remaining allocated nodes and attempt to
requeue the job if possible. The following procedure is recommended:</p>
<ol>
<li>Drain node of all jobs (e.g. "scontrol update nodename='%N' state=drain reason='removing nodes'")</li>
<li>Stop the slurmctld daemon (e.g. "systemctl stop slurmctld" on the head node)</li>
<li>Update the slurm.conf file on all nodes in the cluster</li>
<li>Restart the slurmd daemons on all nodes (e.g. "systemctl restart slurmd" on all nodes)</li>
<li>Restart the slurmctld daemon (e.g. "systemctl start slurmctld" on the head node)</li>
</ol>

<p><a id="licenses"><b>Can Slurm be configured to manage licenses?</b></a><br>
Slurm is not currently integrated with FlexLM, but it does provide for the
allocation of global resources called licenses. Use the Licenses configuration
parameter in your slurm.conf file (e.g. "Licenses=foo:10,bar:20").
Jobs can request licenses and be granted exclusive use of those resources
(e.g. "sbatch --licenses=foo:2,bar:1 ...").
It is not currently possible to change the total number of licenses on a system
without restarting the slurmctld daemon, but it is possible to dynamically
reserve licenses and remove them from being available to jobs on the system
(e.g. "scontrol update reservation=licenses_held licenses=foo:5,bar:2").</p>

<p><a id="salloc_default_command"><b>Can the salloc command be configured to
launch a shell on a node in the job's allocation?</b></a><br>
Yes, just set "use_interactive_step" as part of the LaunchParameters
configuration option in slurm.conf.</p>

<p><a id="upgrade"><b>What should I be aware of when upgrading Slurm?</b></a><br>
See the Quick Start Administrator Guide <a href="quickstart_admin.html#upgrade">Upgrade</a>
section for details.</p>

<p><a id="torque"><b>How easy is it to switch from PBS or Torque to Slurm?</b></a><br>
A lot of users don't even notice the difference.
Slurm has wrappers available for the mpiexec, pbsnodes, qdel, qhold, qrls,
qstat, and qsub commands (see contribs/torque in the distribution and the
"slurm-torque" RPM).
There is also a wrapper for the showq command at
<a href="https://github.com/pedmon/slurm_showq">
https://github.com/pedmon/slurm_showq</a>.</p>

<p>Slurm recognizes and translates the "#PBS" options in batch scripts.
Most, but not all options are supported.</p>

<p>Slurm also includes a SPANK plugin that will set all of the PBS environment
variables based upon the Slurm environment (e.g. PBS_JOBID, PBS_JOBNAME,
PBS_WORKDIR, etc.).
One environment variable that is not set is PBS_ENVIRONMENT, which if set
would result in the failure of some MPI implementations.
The plugin will be installed in<br>
&lt;install_directory&gt;/lib/slurm/spank_pbs.so<br>
See the SPANK man page for configuration details.</p>

<p><a id="sssd"><b>How can I get SSSD to work with Slurm?</b></a><br>
SSSD or System Security Services Daemon does not allow enumeration of
group members by default. Note that enabling enumeration in large
environments might not be feasible. However, Slurm does not need enumeration
except for some specific quirky configurations (multiple groups with the same
GID), so it's probably safe to leave enumeration disabled.
SSSD is also case sensitive by default for some configurations, which could
possibly raise other issues. Add the following lines
to <i>/etc/sssd/sssd.conf</i> on your head node to address these issues:</p>
<pre>
enumerate = True
case_sensitive = False
</pre>

<p><a id="ha_db"><b>How critical is configuring high availability for my
database?</b></a><br>
<ul>
<li>Consider if you really need a high-availability MySQL setup. A short outage
of slurmdbd is not a problem, because slurmctld will store all data in memory
and send it to slurmdbd when it resumes operations. The slurmctld daemon will
also cache all user limits and fair share information.</li>
<li>You cannot use NDB, since SlurmDBD's MySQL implementation uses keys on BLOB
values (and potentially other features on the incompatibility list).</li>
<li>You can set up "classical" Linux HA, with heartbeat/corosync to migrate IP
between primary/backup mysql servers and:
<ul>
<li>Configure one way replication of mysql, and change primary/backup roles on
failure</li>
<li>Use shared storage for primary/backup mysql servers database, and start
backup on primary mysql failure.</li>
</ul>
</li>
</ul>

<p><a id="sql"><b>How can I use double quotes in MySQL queries?</b></a><br>
Execute:
<pre>
SET session sql_mode='ANSI_QUOTES';
</pre>
<p>This will allow double quotes in queries like this:</p>
<pre>
show columns from "tux_assoc_table" where Field='is_def';
</pre>

<p><a id="reboot"><b>Why is a compute node down with the reason set to
"Node unexpectedly rebooted"?</b></a><br>
This is indicative of the slurmctld daemon running on the cluster's head node
as well as the slurmd daemon on the compute node when the compute node reboots.
If you want to prevent this condition from setting the node into a DOWN state
then configure ReturnToService to 2. See the slurm.conf man page for details.
Otherwise use scontrol or sview to manually return the node to service.</p>

<p><a id="reqspec"><b>How can a job which has exited with a specific exit
  code be requeued?</b></a><br>
Slurm supports requeue in hold with a <b>SPECIAL_EXIT</b> state using the
command:</p>

<pre>scontrol requeuehold State=SpecialExit job_id</pre>

<p>This is useful when users want to requeue and flag a job which has exited
with a specific error case. See man scontrol(1) for more details.</p>

<pre>
$ scontrol requeuehold State=SpecialExit 10
$ squeue
   JOBID PARTITION  NAME     USER  ST       TIME  NODES NODELIST(REASON)
    10      mira    zoppo    david SE       0:00      1 (JobHeldUser)
</pre>
<p>
The job can be later released and run again.
</p>
<p>
The requeuing of jobs which exit with a specific exit code can be
automated using an <b>EpilogSlurmctld</b>, see man(5) slurm.conf.
This is an example of a script whose exit code depends on the existence
of a file.
</p>

<pre>
$ cat exitme
#!/bin/sh
#
echo "hi! `date`"
if [ ! -e "/tmp/myfile" ]; then
  echo "going out with 8"
  exit 8
fi
rm /tmp/myfile
echo "going out with 0"
exit 0
</pre>
<p>
This is an example of an EpilogSlurmctld that checks the job exit value by
looking at the <b>SLURM_JOB_EXIT_CODE2</b> environment variable and requeues a
job if it exited with value 8. SLURM_JOB_EXIT_CODE2 has the format "exit:sig":
the first number is the exit code, typically as set by the exit() function,
and the second number is the signal that caused the process to terminate, if
it was terminated by a signal.
</p>

<pre>
$ cat slurmctldepilog
#!/bin/sh

export PATH=/bin:/home/slurm/linux/bin
LOG=/home/slurm/linux/log/logslurmepilog

echo "Start `date`" >> $LOG 2>&1
echo "Job $SLURM_JOB_ID exitcode $SLURM_JOB_EXIT_CODE2" >> $LOG 2>&1
exitcode=`echo $SLURM_JOB_EXIT_CODE2|awk '{split($0, a, ":"); print a[1]}'` >> $LOG 2>&1
if [ "$exitcode" == "8" ]; then
   echo "Found REQUEUE_EXIT_CODE: $REQUEUE_EXIT_CODE" >> $LOG 2>&1
   scontrol requeuehold state=SpecialExit $SLURM_JOB_ID >> $LOG 2>&1
   echo $? >> $LOG 2>&1
else
   echo "Job $SLURM_JOB_ID exit all right" >> $LOG 2>&1
fi
echo "Done `date`" >> $LOG 2>&1

exit 0
</pre>
<p>
Using the exitme script as an example, we have it exit with a value of 8 on
the first run, then when it gets requeued in hold with SpecialExit state
we touch the file /tmp/myfile, then release the job which will finish
in a COMPLETE state.
</p>

<p><a id="user_account"><b>Can a user's account be changed in the database?</b></a><br>
A user's account can not be changed directly. A new association needs to be
created for the user with the new account. Then the association with the old
account can be deleted.</p>
<pre>
# Assume user "adam" is initially in account "physics"
sacctmgr create user name=adam cluster=tux account=physics
sacctmgr delete user name=adam cluster=tux account=chemistry
</pre>

<p><a id="mpi_perf"><b>What might account for MPI performance being below
  the expected level?</b></a><br>
Starting the slurmd daemons with limited locked memory can account for this.
Adding the line "ulimit -l unlimited" to the <i>/etc/sysconfig/slurm</i> file can
fix this.</p>

<p><a id="state_info"><b>How could some jobs submitted immediately before
   the slurmctld daemon crashed be lost?</b></a><br>
Any failure of the slurmctld daemon or its hardware before state information
reaches disk can result in lost state.
Slurmctld writes state frequently (every five seconds by default), but with
large numbers of jobs, the formatting and writing of records can take seconds
and recent changes might not be written to disk.
Another example is if the state information is written to file, but that
information is cached in memory rather than written to disk when the node fails.
The interval between state saves being written to disk can be configured at
build time by defining SAVE_MAX_WAIT to a different value than five.</p>

<p><a id="delete_partition"><b>How do I safely remove partitions?
</b></a><br>
Partitions should be removed using the
"scontrol delete PartitionName=&lt;partition&gt;" command. This is because
scontrol will prevent any partitions from being removed that are in use.
Partitions need to be removed from the slurm.conf after being removed using
scontrol or they will return after a restart.
An existing job's partition(s) can be updated with the "scontrol update
JobId=&lt;jobid&gt; Partition=&lt;partition(s)&gt;" command.
Removing a partition from the slurm.conf and restarting will cancel any existing
jobs that reference the removed partitions.
</p>

<p><a id="cpu_freq"><b>Why is Slurm unable to set the CPU frequency for
   jobs?</b></a><br>
First check that Slurm is configured to bind jobs to specific CPUs by
making sure that TaskPlugin is configured to either affinity or cgroup.
Next check that your processor is configured to permit frequency
control by examining the values in the file
<i>/sys/devices/system/cpu/cpu0/cpufreq</i> where "cpu0" represents a CPU ID 0.
Of particular interest is the file <i>scaling_available_governors</i>,
which identifies the CPU governors available.
If "userspace" is not an available CPU governor, this may well be due to the
<i>intel_pstate</i> driver being installed.
Information about disabling the <i>intel_pstate</i> driver is available
from<br>
<a href="https://bugzilla.kernel.org/show_bug.cgi?id=57141">
https://bugzilla.kernel.org/show_bug.cgi?id=57141</a> and<br>
<a href="http://unix.stackexchange.com/questions/121410/setting-cpu-governor-to-on-demand-or-conservative">
http://unix.stackexchange.com/questions/121410/setting-cpu-governor-to-on-demand-or-conservative</a>.</p>

<p><a id="cluster_acct"><b>When adding a new cluster, how can the Slurm cluster
    configuration be copied from an existing cluster to the new cluster?</b></a><br>
Accounts need to be configured for the cluster. An easy way to copy information from
an existing cluster is to use the sacctmgr command to dump that cluster's information,
modify it using some editor, then load the new information using the sacctmgr
command. See the sacctmgr man page for details, including an example.</p>
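<p>A hedged sketch of the workflow, assuming an existing cluster named "tux"
(consult the sacctmgr man page for the exact syntax in your version):</p>
<pre>
$ sacctmgr dump tux file=tux.cfg
# edit tux.cfg, changing the cluster name and any site-specific settings
$ sacctmgr load file=tux.cfg
</pre>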

<p><a id="cray_dvs"><b>How can I update Slurm on a Cray DVS file system
   without rebooting the nodes?</b></a><br>
The problem with DVS caching is related to the fact that the dereferenced value
of /opt/slurm/default symlink is cached in the DVS attribute cache, and that
cache is not dropped when the rest of the VM caches are.</p>

<p>The Cray Native Slurm installation manual indicates that Slurm should
have a "default" symlink run through /etc/alternatives.
As an alternative to that:
<ol>
<li>Institute a policy that all changes to files which could be open
persistently (i.e., .so files) are always modified by creating a new access
path.  I.e., installations go to a new directory.</li>
<li>Dump the /etc/alternatives stuff, just use a regular symlink, e.g., default
points to 15.8.0-1.</li>
<li>Add a new mountpoint on all the compute nodes for /dsl/opt/slurm where the
attrcache_timeout attribute is reduced from 14440s to 60s (or 15s -- whatever):<br>
mount -t dvs /opt/slurm /dsl/opt/slurm -o<br>
path=/dsl/opt/slurm,nodename=c0-0c0s0n0,loadbalance,cache,ro,attrcache_timeout=15<br>
In the example above, c0-0c0s0n0 is the single DVS server for the system.</li>
</ol>
<p>Using this strategy avoids the caching problems, making upgrades simple.
One just has to wait for about 20 seconds after changing the default symlinks
before starting the slurmds again.</p>
<p>(Information courtesy of Douglas Jacobsen, NERSC,
Lawrence Berkeley National Laboratory)</p>

<p><a id="dbd_rebuild"><b>How can I rebuild the database hierarchy?</b></a><br>
If you see errors of this sort:</p>
<pre>
error: Can't find parent id 3358 for assoc 1504, this should never happen.
</pre>
<p>in the slurmctld log file, this is indicative that the database hierarchy
information has been corrupted, typically due to a hardware failure or
administrator error in directly modifying the database. In order to rebuild
the database information, start the slurmdbd daemon with the "-R" option
followed by an optional comma separated list of cluster names to operate on.</p>

<p><a id="db_upgrade"><b>Is there anything exceptional to be aware of when
upgrading my database server?</b></a><br>
Generally, no. However if you are using MariaDB and have been using an older
version or are upgrading from MySQL, additional steps will need to be taken.</p>

<p>From the MariaDB documentation:</p>

<p><i>Before MariaDB 10.2.1, BLOB and TEXT columns could not be assigned a
DEFAULT value. This restriction was lifted in MariaDB 10.2.1.</i></p>

<p>Therefore, if a site begins using MariaDB >= 10.2.1 and is either using
an existing Slurm database from an earlier version or has restored one from
a dump from an earlier version or from any version of MySQL, some text/blob
default values will need to be altered to avoid failures from subsequent
queries from slurmdbd that set affected fields to DEFAULT. Please contact
SchedMD for assistance with this.</p>

<p><a id="routing_queue"><b>How can a routing queue be configured?</b></a><br>
A job submit plugin is designed to have access to a job request from a user,
plus information about all of the available system partitions/queue.
An administrator can write a C plugin or Lua script to set an incoming job's
partition based upon its size, time limit, etc.
See the <a href="https://slurm.schedmd.com/job_submit_plugins.html"> Job Submit Plugin API</a>
guide for more information.
Also see the available job submit plugins distributed with Slurm for examples
(look in the "src/plugins/job_submit" directory).</p>

<p><a id="squeue_script"><b>How can I suspend, resume, hold or release all
    of the jobs belonging to a specific user, partition, etc?</b></a><br>
There isn't any filtering by user, partition, etc. available in the scontrol
command; however the squeue command can be used to perform the filtering and
build a script which you can then execute. For example:
<pre>
$ squeue -u adam -h -o "scontrol hold %i" &gt;hold_script
</pre>

<p><a id="changed_uid"><b>I had to change a user's UID and now they cannot submit
    jobs. How do I get the new UID to take effect?</b></a><br>
When changing UIDs, you will also need to restart the slurmctld for the changes to
take effect. Normally, when adding a new user to the system, the UID is filled in
automatically and immediately. If the user isn't known on the system yet, there is a
thread that runs every hour that fills in those UIDs when they become known, but it
doesn't recognize UID changes of preexisting users. But you can simply restart the
slurmctld for those changes to be recognized.</p>

<p><a id="mysql_duplicate"><b>Slurmdbd is failing to start with a 'Duplicate entry'
    error in the database. How do I fix that?</b></a><br>
This problem has been observed rarely with MySQL, but not with MariaDB.
The root cause of the failure seems to be reaching the upper limit on the auto increment field.
Upgrading to MariaDB is recommended.
If that is not possible then: backup the database, remove the duplicate record(s),
and restart the slurmdbd daemon as shown below.</p>
<pre>
$ slurmdbd -Dvv
...
slurmdbd: debug:  Table "cray_job_table" has changed.  Updating...
slurmdbd: error: mysql_query failed: 1062 Duplicate entry '2711-1478734628' for key 'id_job'
...

$ mysqldump --single-transaction -u&lt;user&gt; -p&lt;user&gt; slurm_acct_db &gt;/tmp/slurm_db_backup.sql

$ mysql
mysql> use slurm_acct_db;
mysql> delete from cray_job_table where id_job='2711-1478734628';
mysql> quit;
Bye
</pre>

<p>If necessary, you can edit the database dump and recreate the database as
shown below.</p>
<pre>
$ mysql
mysql> drop database slurm_acct_db;
mysql> create database slurm_acct_db;
mysql> quit;
Bye

$ mysql -u&lt;user&gt; -p&lt;user&gt; &lt;/tmp/slurm_db_backup.sql
</pre>

<p><a id="cray_sigbus"><b>Why are applications on my Cray system failing
    with SIGBUS (bus error)?</b></a><br>
By default, Slurm flushes Lustre file system and kernel caches upon completion
of each job step. If multiple applications are run simultaneously on compute
nodes (either multiple applications from a single Slurm job or multiple jobs)
the result can be significant performance degradation and even bus errors.
Failures occur more frequently when more applications are executed at the same
time on individual compute nodes.
Failures are also more common when Lustre file systems are used.</p>

<p>Two approaches exist to address this issue.
One is to disable the flushing of caches, which can be accomplished by adding
"LaunchParameters=lustre_no_flush" to your Slurm configuration file
"slurm.conf", as shown below.
A second approach is to modify the Cray file system as described in the
following paragraphs in order to prevent Slurm-specific files from needing to
be re-resolved over DVS.
This second approach does not address files used by applications, only those
used directly by Slurm.</p>
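<p>For the first approach, the slurm.conf change is a single line:</p>
<pre>
# slurm.conf
LaunchParameters=lustre_no_flush
</pre>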

<p>On Cray CLE6.0, by default, nodes get the operating system, including the
Slurm installation and all of its plugins, via a DVS mount of "/".
Really "/" is an overlay filesystem where the lower portion is a loop-mounted
squashfs layer and the upper layer is tmpfs.
When buffer caches are flushed during a dlopen (used by Slurm to load its
plugins), a timeout may result from waiting to re-resolve a Slurm plugin over
DVS.</p>

<p>The NERSC solution is to localize all files related to Slurm or involved in
slurmstepd launch into that tmpfs layer at boot time.
This is possible by creating a new netroot preload file:</p>

<pre>
# cat compute-preload.nersc
/usr/lib64/libslurm*so*
/usr/lib64/slurm/*.so
/usr/sbin/slurmd
/usr/sbin/slurmstepd
/usr/bin/sbatch
/usr/bin/srun
/usr/bin/sbcast
/usr/bin/numactl
/usr/lib64/libnuma*so*
/lib64/ast/libast.so*
/lib64/ast/libcmd.so*
/lib64/ast/libdll.so*
/lib64/ast/libshell.so*
/lib64/libacl.so*
/lib64/libattr.so*
/lib64/libc.so*
/lib64/libcap.so*
/lib64/libdl.so*
/lib64/libgcc_s.so*
...
</pre>

<p>NERSC generates its preload file by including everything installed by the
Slurm RPMs, plus the files identified as being used by the slurmd daemon on
the compute node, found by running the "strace -f" command against slurmd
while it is launching a job step.</p>
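<p>As a rough sketch of how such a trace might be captured (the exact strace
options and the post-processing of the output are site specific; the procedure
above only specifies "strace -f"):</p>
<pre>
# strace -f -e trace=open,openat -o /tmp/slurmd.trace -p $(pidof slurmd)
</pre>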

<p>Once the netroot preload file is generated, it then needs to be included in the
cray_netroot_preload_worksheet CLE configuration. For example:</p>

<pre>
cray_netroot_preload.settings.load.data.label.compute: null
cray_netroot_preload.settings.load.data.compute.targets: []
cray_netroot_preload.settings.load.data.compute.content_lists:
- dist/compute-preload.cray
- dist/compute-preload.nersc
cray_netroot_preload.settings.load.data.compute.size_limit: 0
</pre>

<p>This is a generally useful technique for preventing remote lookups of commonly
accessed files within jobs.</p>

<p><a id="sysv_memory"><b>How do I configure Slurm to work with System V IPC
    enabled applications?</b></a><br>
Slurm is generally agnostic to
<a href="http://man7.org/linux/man-pages/man2/ipc.2.html">
System V IPC</a> (a.k.a. "sysv ipc" in the Linux kernel).
Memory accounting of processes using sysv ipc changes depending on the value
of <a href="https://www.kernel.org/doc/Documentation/sysctl/kernel.txt">
sysctl kernel.shm_rmid_forced</a> (added in Linux kernel 3.1):
</p>
<ul>
<li>shm_rmid_forced = 1
<br>
Forces all shared memory usage of processes to be accounted and reported by the
kernel to Slurm. This breaks the separate namespace of sysv ipc and may cause
unexpected application issues without careful planning. Processes that share
the same sysv ipc namespaces across jobs may end up getting OOM killed when
another job ends and their allocation percentage increases.
</li>
<li>shm_rmid_forced = 0 (default in most Linux distributions)
<br>
System V memory usage will not be reported by Slurm for jobs.
It is generally suggested to configure the
<a href="https://www.kernel.org/doc/Documentation/sysctl/kernel.txt">
sysctl kernel.shmmax</a> parameter (see the example after this list). The value
of kernel.shmmax times the maximum number of job processes should be deducted
from each node's configured RealMemory in your slurm.conf.
Most Linux distributions set the
default to what is effectively unlimited, which can cause the OOM killer
to activate for unrelated new jobs or even for the slurmd process. If any
processes use sysv memory mechanisms, the Linux kernel OOM killer will never
be able to free the used memory. A Slurm job epilog script will be needed to
free any of the user memory. Setting kernel.shmmax=0 will disable sysv ipc
memory allocations but may cause application issues.
</li>
</ul>
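<p>As an illustrative sketch of the shm_rmid_forced = 0 guidance above (the
values are examples only; derive the RealMemory deduction from your own segment
limit and maximum number of job processes per node):</p>
<pre>
# /etc/sysctl.d/90-shmmax.conf: cap sysv segments at 4 GiB (example value)
kernel.shmmax = 4294967296

# slurm.conf: a 128 GiB node allowing at most 8 job processes deducts
# 8 * 4096 MiB, so 131072 - 32768 = 98304
NodeName=node[01-10] RealMemory=98304
</pre>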

<p><a id="opencl_pmix"><b>Why is Multi-Instance GPU not working with Slurm and
    PMIx, and complaining about GPUs being 'In use by another client'?</b></a>
<br/>
PMIx uses the <b>hwloc API</b> for different purposes, including
<i>OS device</i> features like querying sysfs folders (such as
<i>/sys/class/net</i> and <i>/sys/class/infiniband</i>) to get the names of
InfiniBand HCAs. When these features are enabled, hwloc also queries the
OpenCL devices by default, which creates handles on <i>/dev/nvidia*</i> files.
These handles are kept open by slurmstepd and will result in the following
error inside a job:
</p>
<pre>
$ nvidia-smi mig --id 1 --create-gpu-instance FOO,FOO --default-compute-instance
Unable to create a GPU instance on GPU 1 using profile FOO: In use by another client
</pre>
<p>
In order to use Multi-Instance GPUs with Slurm and PMIx, you can instruct hwloc
not to query OpenCL devices by setting the
<span class="commandline">HWLOC_COMPONENTS=-opencl</span> environment
variable for slurmd, e.g. by setting this variable in the systemd unit file for
slurmd.
</p>
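<p>As a sketch, assuming slurmd runs under systemd, a drop-in override created
with "systemctl edit slurmd" could export the variable:</p>
<pre>
# /etc/systemd/system/slurmd.service.d/override.conf
[Service]
Environment=HWLOC_COMPONENTS=-opencl
</pre>
<p>Run "systemctl daemon-reload" and restart slurmd for the change to take
effect.</p>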

<p><a id="tmpfs_jobcontainer"><b>How can I set up a private /tmp and /dev/shm for
    jobs on my machine?</b></a>
<br/>
The tmpfs job container plugin can be enabled by including
<i>JobContainerType=job_container/tmpfs</i>
in your slurm.conf file. It additionally requires a
<a href="job_container.conf.html">job_container.conf</a> file to be
set up, which is further described in the man page.
The tmpfs plugin creates a private mount namespace inside of which it mounts a
private /tmp to a location that is configured in job_container.conf. The
configured base path is used to construct the mount path, by creating a
job-specific directory inside it and mounting /tmp to it. Since all of the
mounts are created inside a private mount namespace, they are only visible
inside the job. This makes the plugin a useful solution for jobs on shared
nodes, since each job can only see the mounts created in its own mount
namespace. A private /dev/shm is also mounted to isolate it between different
jobs.</p>
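<p>A minimal sketch of the two configuration pieces, assuming the /storage base
directory used in the example further below (see the job_container.conf man
page for the full set of options):</p>
<pre>
# slurm.conf
JobContainerType=job_container/tmpfs

# job_container.conf
BasePath=/storage
</pre>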
<p>
Mount namespace construction also happens before the job's SPANK environment is
set up. Hence all SPANK-related job steps will see only the private /tmp the
plugin creates. The plugin also provides an optional initialization script that
is invoked before the job's namespace is constructed. This can be useful for
any site-specific customization that may be necessary.</p>
<pre>
parallels@linux_vb:~$ echo $SLURM_JOB_ID
7
parallels@linux_vb:~$ findmnt -o+PROPAGATION | grep /tmp
└─/tmp  /dev/sda1[/storage/7/.7] ext4  rw,relatime,errors=remount-ro,data=ordered   private
</pre>
<p>In the example above, <i>BasePath</i> points to /storage and a Slurm job with
job id 7 is set up to mount /tmp on /storage/7/.7. When a user inside the job
looks up the mounts, they can see that their /tmp is mounted. However,
they are prevented from mistakenly accessing the backing directory directly.</p>
<pre>
parallels@linux_vb:~$ cd /storage/7/
bash: cd: /storage/7/: Permission denied
</pre>
<p>They are allowed to access (read/write) /tmp only.</p>
<p>
pam_slurm_adopt has also been extended to support this functionality.
If a user starts an ssh session which is managed by pam_slurm_adopt, then
the user's process joins the namespace that is constructed by the tmpfs plugin.
Hence, in ssh sessions, the user has the same view of /tmp and /dev/shm as
their job. This functionality is enabled by default in pam_slurm_adopt
but can be disabled explicitly by appending <i>join_container=false</i> as shown:</p>
<pre>
account	sufficient  pam_slurm_adopt.so join_container=false
</pre>

<p><a id="json_serializer"><b>Why am I getting the following error: "Unable to
    find plugin: serializer/json"?</b></a>
<br/>
Several parts of Slurm have switched to using our centralized serializer
code. The JSON or YAML plugins are only required if a function that needs
them is executed. If such a function is executed without the plugin being
available, Slurm will fail to create the JSON/YAML output and report the
following error:
</p>
<pre>
slurmctld: fatal: Unable to find plugin: serializer/json
</pre>
<p>
In most cases, these are required for new functionality added after Slurm-20.02.
However, with each release, we have been adding more places that use the
serializer plugins. Because the list is evolving, we do not plan on listing all
the commands that require the plugins but will instead provide the error
(shown above). To correct the issue, please make sure that Slurm is configured,
compiled and installed with the relevant JSON or YAML library (or preferably
both). Configure can be made to explicitly request these libraries:
</p>
<pre>
./configure --with-json=PATH --with-yaml=PATH $@
</pre>
<p>
Most distributions include packages to make installation relatively easy.
Please make sure to install the 'dev' or 'devel' packages along with the
library packages. We also provide explicit instructions on how to install from
source: <a href="download.html#yaml">libyaml</a> and <a
href="download.html#jwt">libjwt</a>.
</p>
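<p>One way to confirm that the serializer plugins were actually built and
installed is to look for them in the Slurm plugin directory (the path below is
an example; check PluginDir in your slurm.conf):</p>
<pre>
$ ls /usr/lib64/slurm/serializer_*
/usr/lib64/slurm/serializer_json.so  /usr/lib64/slurm/serializer_yaml.so
</pre>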

<p><a id="epel"><b>Why am I being offered an automatic update for Slurm?</b></a>
<br>
EPEL has added Slurm packages to their repository to make them more widely
available to the Linux community. However, this packaged version is not
supported or maintained by SchedMD, and is not recommended for customers at this
time. If you are using the EPEL repository, you could be offered an update for
Slurm that you do not anticipate. In order to prevent Slurm from being upgraded
unintentionally, we recommend you modify the EPEL repository configuration file
to exclude all Slurm packages from automatic updates.</p>
<pre>
exclude=slurm*
</pre>
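<p>For example, on a typical RHEL-family system the exclude line is added to the
EPEL section of the repository configuration (the path and section name may
differ on your system):</p>
<pre>
# /etc/yum.repos.d/epel.repo
[epel]
...
exclude=slurm*
</pre>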

<p style="text-align:center;">Last modified 14 December 2022</p>

<!--#include virtual="footer.txt"-->