<!--#include virtual="header.txt"-->

<h1><a id="top">Authentication Plugins</a></h1>

<h2 id="Overview">Overview<a class="slurm_link" href="#Overview"></a></h2>

<p>It is important to ensure that the Remote Procedure Calls (RPCs) received
by Slurm come from trusted sources. Slurm provides several authentication
mechanisms to verify the legitimacy and integrity of these requests.</p>

<h2 id="munge">MUNGE<a class="slurm_link" href="#munge"></a></h2>

<p>MUNGE can be used to create and validate credentials. It allows Slurm to
authenticate the UID and GID of a request from another host that has matching
users and groups. The MUNGE libraries must be present when Slurm is built in
order for Slurm to use MUNGE for authentication. All hosts in the cluster must
share a cryptographic key.</p>
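
<p>The MUNGE libraries mentioned above are typically provided by distribution
packages; the package names below are examples and may differ on your system:</p>

<pre>
# Debian/Ubuntu
apt install munge libmunge2 libmunge-dev

# RHEL/Rocky/Fedora (munge-devel may come from EPEL)
dnf install munge munge-libs munge-devel
</pre>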

<h4 id="munge_setup">Setup<a class="slurm_link" href="#munge_setup"></a></h4>

<ol>
<li>MUNGE requires that there be a shared key on the machines running
slurmctld, slurmdbd and the nodes. You can create a key file by entering your
own text or by generating random data:
<pre>
dd if=/dev/random of=/etc/munge/munge.key bs=1024 count=1
</pre>
</li>

<li>This key should be owned by the "munge" user and should not be readable
or writable by other users:
<pre>
chown munge:munge /etc/munge/munge.key
chmod 400 /etc/munge/munge.key
</pre>
</li>

<li>Distribute the key file to the machines on the cluster. It needs to be
on the machines running slurmctld, slurmdbd, slurmd and any submit hosts
you have configured. Use the distribution method of your choice (a sketch of
one approach appears after this list).</li>

<li>The "munge" service should be running on any machines that need to use it
for authentication. It should be enabled and started on all the machines you
distributed a key to:
<pre>
systemctl enable munge
systemctl start munge
</pre>
</li>

<li>Changing the authentication mechanism requires a restart of Slurm daemons.
The daemons need to be stopped before updating the slurm.conf so that client
commands do not use a mechanism other than what the running daemons are
expecting.</li>

<li>Update your slurm.conf and slurmdbd.conf to use MUNGE authentication.
<ul>
<li>slurm.conf:
<pre>
AuthType = auth/munge
CredType = cred/munge
</pre></li>
<li>slurmdbd.conf:
<pre>
AuthType = auth/munge
</pre></li>
</ul>
</li>

<li>Start the Slurm daemons back up with the appropriate method for your
cluster.</li>
</ol>
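
<p>As a sketch of steps 3 and 4, the key could be copied with scp and the
service enabled on each host; the hostname "node01" is a placeholder, and any
other distribution method (e.g. a configuration management tool) works just as
well. The final command uses the MUNGE client tools to confirm that a
credential created locally can be decoded on the remote host, i.e. that both
hosts share the same key:</p>

<pre>
# Copy the key to the remote host (run as root) and fix its ownership
scp -p /etc/munge/munge.key node01:/etc/munge/
ssh node01 chown munge:munge /etc/munge/munge.key

# Enable and start the munge service on the remote host
ssh node01 systemctl enable --now munge

# Encode a credential locally and decode it remotely
munge -n | ssh node01 unmunge
</pre>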

<h2 id="slurm">Slurm<a class="slurm_link" href="#slurm"></a></h2>

<p>Beginning with version 23.11, Slurm has its own plugin that can create and
validate credentials. It validates that the requests come from legitimate UIDs
and GIDs on other hosts with matching users and groups. All hosts in the
cluster must have a shared cryptographic key.</p>

<h4 id="single_key_setup">Single Key Setup<a class="slurm_link" href="#single_key_setup"></a></h4>

<ol>
<li>For authentication to happen correctly, you must have a shared key on
the machines running slurmctld, slurmdbd and the nodes. You can create a key
file by entering your own text or by generating random data:
<pre>
dd if=/dev/random of=/etc/slurm/slurm.key bs=1024 count=1
</pre>
</li>

<li>The slurm.key or slurm.jwks should be owned by SlurmUser and should not be
readable or writable by other users. This example assumes the SlurmUser is
'slurm':
<pre>
chown slurm:slurm /etc/slurm/slurm.key
chmod 600 /etc/slurm/slurm.key
</pre>
</li>

<li>Distribute the key file to the machines on the cluster. It needs to be
on the machines running slurmctld, slurmdbd, slurmd and sackd (a sketch of
one approach appears after this list).</li>

<li>Update your slurm.conf and slurmdbd.conf to use the Slurm authentication
type.
<ul>
<li>slurm.conf:
<pre>
AuthType = auth/slurm
CredType = cred/slurm
</pre></li>
<li>slurmdbd.conf:
<pre>
AuthType = auth/slurm
</pre></li>
</ul>
</li>

<li>Start the daemons back up with the appropriate method for your cluster.</li>
</ol>
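
<p>As a sketch of steps 3 and 5, the key could be distributed with scp and the
daemons restarted afterwards; the hostname "node01" and the use of systemd are
assumptions, so substitute whatever matches your environment:</p>

<pre>
# Copy the key to another host and fix its ownership (run as root)
scp -p /etc/slurm/slurm.key node01:/etc/slurm/
ssh node01 chown slurm:slurm /etc/slurm/slurm.key

# After slurm.conf and slurmdbd.conf have been updated, restart the daemons
systemctl restart slurmctld              # on the controller
systemctl restart slurmdbd               # on the database host
ssh node01 systemctl restart slurmd      # on each compute node
</pre>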

<p>Beginning with version 24.05, you may alternatively create a slurm.jwks file
with multiple keys defined. The slurm.jwks file aids key rotation, as the
cluster does not need to be restarted all at once when a key is rotated;
an scontrol reconfigure is sufficient. No slurm.conf parameters are required
to use the slurm.jwks file; instead, the presence of the slurm.jwks file
enables this functionality. If slurm.jwks is not present or cannot be read,
the cluster falls back to slurm.key.</p>
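
<p>As an illustration of how a rotation might proceed (the steps below are a
sketch, not a prescribed procedure, and the file location assumes the same
/etc/slurm directory used for slurm.key above), a new key entry is added to
slurm.jwks on every host and the running daemons are told to re-read it:</p>

<pre>
# 1. Add a new "key" entry to /etc/slurm/slurm.jwks on all hosts, optionally
#    marking it with "use": "default" so new credentials are signed with it.
# 2. Tell the running daemons to re-read their configuration and key file:
scontrol reconfigure
# 3. Once the old key is no longer needed, remove its entry (or let its "exp"
#    timestamp pass) and reconfigure again.
</pre>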

<h4 id="multiple_key_setup">Multiple Key Setup<a class="slurm_link" href="#multiple_key_setup"></a></h4>

<p>Setup for multiple keys is very similar to the single key setup, with the
exception of a richer key file. The key file consists of a single JWKS-style
"keys" list containing a number of "key" entries, each with several fields.
An example file is shown below.
</p>

<pre>
{
  "keys": [
    {
      "alg": "HS256",
      "kty": "oct",
      "kid": "key-identifier",
      "k": "VGhlIGtleSBiZWxvdyBtZSBuZXZlciBsaWVz",
      "exp": 1718200800
    },
    {
      "alg": "HS256",
      "kty": "oct",
      "kid": "key-identifier-2",
      "k": "VGhlIGtleSBhYm92ZSBtZSBhbHdheXMgbGllcw==",
      "use": "default"
    }
  ]
}
</pre>

<p>The following fields are used by auth/slurm. (Additional fields can be
present, but will be ignored.)
<ul>
<li><b>alg</b> &mdash; The cryptographic algorithm used with the key. This
field is required, and the value MUST be HS256.</li>
<li><b>kty</b> &mdash; The family of cryptographic algorithms used to sign the
key. This field is required, and the value MUST be oct.</li>
<li><b>kid</b> &mdash; The case-sensitive text identifier used for key
matching. This field is required, and the text must be unique.</li>
<li><b>k</b> &mdash; The actual key, represented as a Base64- or
Base64url-encoded binary blob. This field is required, and must be longer than
16 bytes (a sketch of generating such a value appears after this list).</li>
<li><b>use</b> &mdash; Determines whether this key is the default key. The only
acceptable value is "default", which denotes this key as the default key.
There can be only one default key. This field is optional.</li>
<li><b>exp</b> &mdash; The expiration date of the key as a Unix timestamp. This
field is optional.</li>
</ul>
</p>
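
<p>One way to generate a value for the <b>k</b> field is to base64-encode some
random data; the 32-byte length below is an arbitrary choice that satisfies the
16-byte minimum. The file itself should then be protected in the same way as
slurm.key:</p>

<pre>
# Generate a random key and print it Base64-encoded
dd if=/dev/random bs=32 count=1 2>/dev/null | base64

# Protect the key file (assuming SlurmUser is 'slurm')
chown slurm:slurm /etc/slurm/slurm.jwks
chmod 600 /etc/slurm/slurm.jwks
</pre>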

<p>Once the slurm.jwks file has been created, use the same process for setting
up auth/slurm as in the single key setup, except use the slurm.jwks file instead
of the slurm.key file.</p>

<p>If the cluster does not have access to consistent user IDs from LDAP or
the operating system, you can use the
<a href="slurm.conf.html#OPT_use_client_ids">use_client_ids</a> option to
allow it to use the Slurm authentication mechanism.</p>
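
<p>The snippet below assumes this option is passed to the plugin through
<b>AuthInfo</b> in slurm.conf; consult the linked slurm.conf entry for the
authoritative syntax for your version:</p>

<pre>
AuthType = auth/slurm
AuthInfo = use_client_ids
</pre>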

<h4 id="sack">SACK<a class="slurm_link" href="#sack"></a></h4>

<p>Slurm's internal authentication makes use of a subsystem &mdash; the
<b>S</b>lurm <b>A</b>uth and <b>C</b>red <b>K</b>iosk (SACK) &mdash;
that is responsible for handling requests from the <b>auth/slurm</b> and
<b>cred/slurm</b> plugins. This subsystem is automatically started and managed
internally by the slurmctld, slurmdbd, and slurmd daemons on each system with no
need to run a separate daemon.</p>

<p>For login nodes not running one of these Slurm daemons, the <b>sackd</b>
daemon should be run to allow the Slurm client commands to authenticate to
the rest of the cluster. This daemon can also manage cached configuration files
for <a href="configless_slurm.html">configless</a> environments.</p>
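
<p>On a login node, for example, sackd might be enabled as a system service;
the systemd unit name and the configless option shown here are assumptions that
depend on how Slurm was packaged for your system:</p>

<pre>
# Enable and start sackd on a login node
systemctl enable --now sackd

# For configless clusters, sackd can instead be pointed at the controller,
# e.g. via its --conf-server option (see the sackd man page):
# sackd --conf-server slurmctld-host:6817
</pre>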

<h2 id="jwt">JWT<a class="slurm_link" href="#jwt"></a></h2>

<p>Slurm can be configured to use JSON Web Tokens (JWT) for authentication
purposes. This is configured with the AuthAltTypes parameter and is used only
for client to server communication. You can read more about this authentication
mechanism and how to install it <a href="jwt.html">here</a>.</p>
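
<p>As a brief illustration (the key path is an example; the linked page covers
key generation and the full setup), JWT is enabled as an alternative
authentication type in slurm.conf and slurmdbd.conf:</p>

<pre>
AuthAltTypes = auth/jwt
AuthAltParameters = jwt_key=/var/spool/slurm/statesave/jwt_hs256.key
</pre>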

<p style="text-align:center;">Last modified 02 July 2024</p>

<!--#include virtual="footer.txt"-->