File: MPI_Comm_spawn_multiple.3

.TH MPI_Comm_spawn_multiple 3 "6/24/2006" "LAM/MPI 7.1.4" "LAM/MPI"
.SH NAME
MPI_Comm_spawn_multiple \- Spawn dynamic MPI processes from multiple executables
.SH SYNOPSIS
.nf
#include <mpi.h>
int
MPI_Comm_spawn_multiple(int count, char **commands, char ***argvs,
                      int *maxprocs, MPI_Info *infos, int root, 
                      MPI_Comm comm, MPI_Comm *intercomm, 
                      int *errcodes)
.fi
.SH INPUT PARAMETERS
.PD 0
.TP
.B count 
- number of commands (only significant at root)
.PD 1
.PD 0
.TP
.B commands 
- commands to be executed (only significant at root)
.PD 1
.PD 0
.TP
.B argvs 
- arguments for commands (only significant at root)
.PD 1
.PD 0
.TP
.B maxprocs 
- max number of processes for each command (only significant at root)
.PD 1
.PD 0
.TP
.B infos 
- startup hints for each command
.PD 1
.PD 0
.TP
.B root 
- rank of process to perform the spawn
.PD 1
.PD 0
.TP
.B comm 
- parent intracommunicator
.PD 1

.SH OUTPUT PARAMETERS
.PD 0
.TP
.B intercomm 
- child intercommunicator containing spawned processes
.PD 1
.PD 0
.TP
.B errcodes 
- one code per process
.PD 1

.SH DESCRIPTION

A group of processes can create another group of processes with
.I MPI_Comm_spawn_multiple
\&.
This function is a collective operation
over the parent communicator.  The child group starts up like any MPI
application.  The processes must begin by calling 
.I MPI_Init
, after
which the pre-defined communicator, 
.I MPI_COMM_WORLD
, may be used.
This world communicator contains only the child processes.  It is
distinct from the 
.I MPI_COMM_WORLD
of the parent processes.

.I MPI_Comm_spawn_multiple
is used to manually specify a group of
different executables and arguments to spawn.  
.I MPI_Comm_spawn
is
used to specify one executable and set of arguments (although a
LAM/MPI appschema(5) can be provided to 
.I MPI_Comm_spawn
via the
"file" info key).

Communication With Spawned Processes

The natural communication mechanism between two groups is the
intercommunicator.  The second communicator argument to
.I MPI_Comm_spawn_multiple
returns an intercommunicator whose local
group contains the parent processes (same as the first communicator
argument) and whose remote group contains child processes. The child
processes can access the same intercommunicator by using the
.I MPI_Comm_get_parent
call.  The remote group size of the parent
communicator is zero if the process was created by 
.I mpirun
(1) instead
of one of the spawn functions.  Both groups can decide to merge the
intercommunicator into an intracommunicator (with the
.I MPI_Intercomm_merge
() function) and take advantage of other MPI
collective operations.  They can then use the merged intracommunicator
to create new communicators and reach other processes in the MPI
application.
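
As a sketch, a child process can obtain the intercommunicator and
merge it as follows (error checking omitted):
.nf

    #include <mpi.h>

    int main(int argc, char *argv[])
    {
        MPI_Comm parent, merged;

        MPI_Init(&argc, &argv);
        MPI_Comm_get_parent(&parent);
        if (parent != MPI_COMM_NULL) {
            /* high = 1 orders the children after the parents
               in the merged intracommunicator */
            MPI_Intercomm_merge(parent, 1, &merged);
            /* ... collective operations over merged ... */
            MPI_Comm_free(&merged);
        }
        MPI_Finalize();
        return 0;
    }
.fi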

Resource Allocation

Note that no MPI_Info keys are recognized by this implementation of
.I MPI_Comm_spawn_multiple
\&.
To use the "file" info key to specify an
appschema(5), use LAM's 
.I MPI_Comm_spawn
\&.
This may be preferable to
.I MPI_Comm_spawn_multiple
because it allows the arbitrary
specification of what nodes and/or CPUs should be used to launch jobs
(either SPMD or MPMD).  See MPI_Comm_spawn(3) for more details.

The value of 
.I MPI_INFO_NULL
should be given for each value in 
.I infos
(the 
.I infos
array is not currently examined by LAM/MPI, so specifying
non-NULL values for its entries is not harmful).  LAM schedules
the given number of processes onto LAM nodes by starting with CPU 0
(or the lowest numbered CPU), and continuing through higher CPU
numbers, placing one process on each CPU.  If the process count is
greater than the CPU count, the procedure repeats.

Process Termination

Note that the process(es) spawned by 
.I MPI_COMM_SPAWN
(and
.I MPI_COMM_SPAWN_MULTIPLE
) effectively become orphans.  That is, the
spawning MPI application does not wait for the spawned application to
finish.  Hence, there is no guarantee that the spawned application has
finished when the spawning application completes.  Similarly, killing
the spawning application will have no effect on the spawned application.

User applications can arrange synchronized shutdown with an
.I MPI_BARRIER
between the spawning and spawned processes before 
.I MPI_FINALIZE
\&.
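
A sketch of such a rendezvous, assuming both sides have merged the
intercommunicator into "merged" with 
.I MPI_Intercomm_merge
as shown above:
.nf

    /* both parent and child processes, before shutdown */
    MPI_Barrier(merged);
    MPI_Comm_free(&merged);
    MPI_Finalize();
.fi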


Note that 
.I lamclean
will kill *all* MPI processes.

Process Count

The 
.I maxprocs
array parameter to 
.I MPI_Comm_spawn_multiple
specifies
the exact number of processes to be started.  If it is not possible to
start the desired number of processes, 
.I MPI_Comm_spawn_multiple
will
return an error code.  Note that even though 
.I maxprocs
is only
relevant on the root, all ranks must have an 
.I errcodes
array long
enough to hold an integer error code for every process the root
attempts to launch, or pass the MPI constant 
.I MPI_ERRCODES_IGNORE
for the 
.I errcodes
argument.  While this appears to be a contradiction, it is per the
MPI-2 standard.  :-\\
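
A sketch of sizing that array, assuming every rank knows the
.I maxprocs
values (the helper name is hypothetical):
.nf

    #include <stdlib.h>

    /* Room for one error code per process the root will try
       to launch: the sum of the maxprocs entries */
    static int *alloc_errcodes(int count, const int *maxprocs)
    {
        int i, total = 0;

        for (i = 0; i < count; ++i)
            total += maxprocs[i];
        return malloc(total * sizeof(int));
    }
.fi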

Frequently, an application wishes to choose a process count so as to
fill all processors available to a job.  MPI indicates the maximum
number of processes recommended for a job in the pre-defined
attribute, 
.I MPI_UNIVERSE_SIZE
, which is cached on 
.I MPI_COMM_WORLD
\&.

The typical usage is to subtract the number of processes currently in
the job from the value of 
.I MPI_UNIVERSE_SIZE
and spawn the difference.
LAM sets 
.I MPI_UNIVERSE_SIZE
to the number of CPUs in the user's LAM
session (as defined in the boot schema [bhost(5)] via 
.I lamboot
(1)).
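
A sketch of that computation using the MPI-1 attribute call
.I MPI_Attr_get
(the attribute value is delivered as a pointer to the cached integer):
.nf

    int  flag, size, nspawn;
    int *usize;

    MPI_Attr_get(MPI_COMM_WORLD, MPI_UNIVERSE_SIZE, &usize, &flag);
    MPI_Comm_size(MPI_COMM_WORLD, &size);
    if (flag)
        nspawn = *usize - size;    /* spawn the difference */
.fi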

See MPI_Init(3) for other pre-defined attributes that are helpful when
spawning.

Locating an Executable Program

The executable program file must be located on the node(s) where the
process(es) will run.  On any node, the directories specified by the
user's PATH environment variable are searched to find the program.

All MPI runtime options selected by 
.I mpirun
(1) in the initial
application launch remain in effect for all child processes created by
the spawn functions.

Command-line Arguments

The 
.I argvs
array parameter to 
.I MPI_Comm_spawn_multiple
should not
contain the program name since it is given in the first parameter.
The command line that is passed to the newly launched program will be
the program name followed by the strings in the corresponding entry of the
.I argvs
array.
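
For example (with hypothetical strings), if commands[0] is
"worker_a", the corresponding argvs entry below makes the child see
"worker_a" as its argv[0] followed by the two options:
.nf

    /* no program name here; the child's argv[0] comes from
       the commands array */
    char *args0[] = { "--input", "data.txt", NULL };
.fi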

.SH USAGE WITH IMPI EXTENSIONS

The IMPI standard only supports MPI-1 functions.  Hence, this function
is currently not designed to operate within an IMPI job.

.SH ERRORS

If an error occurs in an MPI function, the current MPI error handler
is called to handle it.  By default, this error handler aborts the
MPI job.  The error handler may be changed with 
.I MPI_Errhandler_set
;
the predefined error handler 
.I MPI_ERRORS_RETURN
may be used to cause
error values to be returned (in C and Fortran; this error handler is
less useful with the C++ MPI bindings.  The predefined error
handler 
.I MPI::ERRORS_THROW_EXCEPTIONS
should be used in C++ if the
error value needs to be recovered).  Note that MPI does 
.I not
guarantee that an MPI program can continue past an error.
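
A sketch of checking the return value instead of aborting, reusing
the variable names from the example above:
.nf

    int rc;

    /* return error codes to the caller instead of aborting */
    MPI_Errhandler_set(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

    rc = MPI_Comm_spawn_multiple(2, cmds, argvs, maxprocs, infos,
                                 0, MPI_COMM_WORLD, &children,
                                 errcodes);
    if (rc != MPI_SUCCESS) {
        /* rc gives the error class; errcodes has the
           per-process results */
    }
.fi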

All MPI routines (except 
.I MPI_Wtime
and 
.I MPI_Wtick
) return an error
value; C routines as the value of the function and Fortran routines
in the last argument.  The C++ bindings for MPI do not return error
values; instead, error values are communicated by throwing exceptions
of type 
.I MPI::Exception
(but not by default).  Exceptions are only
thrown if the error value is not 
.I MPI::SUCCESS
\&.


Note that if the 
.I MPI::ERRORS_RETURN
handler is set in C++, while
MPI functions will return upon an error, there will be no way to
recover what the actual error value was.
.PD 0
.TP
.B MPI_SUCCESS 
- No error; MPI routine completed successfully.
.PD 1
.PD 0
.TP
.B MPI_ERR_COMM 
- Invalid communicator.  A common error is to use a
null communicator in a call (not even allowed in 
.I MPI_Comm_rank
).
.PD 1
.PD 0
.TP
.B MPI_ERR_SPAWN 
- Spawn error; one or more of the applications
attempting to be launched failed.  Check the returned error code
array.  
.PD 1
.PD 0
.TP
.B MPI_ERR_ARG 
- Invalid argument.  Some argument is invalid and is not
identified by a specific error class.  This is typically a NULL
pointer or other such error.
.PD 1
.PD 0
.TP
.B MPI_ERR_ROOT 
- Invalid root.  The root must be specified as a rank
in the communicator.  Ranks must be between zero and the size of the
communicator minus one.
.PD 1
.PD 0
.TP
.B MPI_ERR_OTHER 
- Other error; use 
.I MPI_Error_string
to get more
information about this error code.
.PD 1
.PD 0
.TP
.B MPI_ERR_INTERN 
- An internal error has been detected.  This is
fatal.  Please send a bug report to the LAM mailing list (see
.I http://www.lam-mpi.org/contact.php
). 
.PD 1

.SH SEE ALSO
appschema(5), bhost(5), lamboot(1), MPI_Comm_get_parent(3), MPI_Intercomm_merge(3), MPI_Comm_spawn(3), MPI_Info_create(3), MPI_Info_set(3), MPI_Info_delete(3), MPI_Info_free(3), MPI_Init(3), mpirun(1)
.br

.SH MORE INFORMATION

For more information, please see the official MPI Forum web site,
which contains the text of both the MPI-1 and MPI-2 standards.  These
documents contain detailed information about each MPI function (most
of which is not duplicated in these man pages).

.I http://www.mpi-forum.org/


.SH ACKNOWLEDGEMENTS

The LAM Team would like to thank the MPICH Team for the handy program
to generate man pages ("doctext" from
.I ftp://ftp.mcs.anl.gov/pub/sowing/sowing.tar.gz
), the initial
formatting, and some initial text for most of the MPI-1 man pages.
.SH LOCATION
spawnmult.c