File: MPI_Reduce_scatter_block_init.3

.\" Man page generated from reStructuredText.
.
.TH "MPI_REDUCE_SCATTER_BLOCK_INIT" "3" "May 30, 2025" "" "Open MPI"
.
.nr rst2man-indent-level 0
.
.de1 rstReportMargin
\\$1 \\n[an-margin]
level \\n[rst2man-indent-level]
level margin: \\n[rst2man-indent\\n[rst2man-indent-level]]
-
\\n[rst2man-indent0]
\\n[rst2man-indent1]
\\n[rst2man-indent2]
..
.de1 INDENT
.\" .rstReportMargin pre:
. RS \\$1
. nr rst2man-indent\\n[rst2man-indent-level] \\n[an-margin]
. nr rst2man-indent-level +1
.\" .rstReportMargin post:
..
.de UNINDENT
. RE
.\" indent \\n[an-margin]
.\" old: \\n[rst2man-indent\\n[rst2man-indent-level]]
.nr rst2man-indent-level -1
.\" new: \\n[rst2man-indent\\n[rst2man-indent-level]]
.in \\n[rst2man-indent\\n[rst2man-indent-level]]u
..
.INDENT 0.0
.INDENT 3.5
.UNINDENT
.UNINDENT
.sp
\fI\%MPI_Reduce_scatter_block\fP, \fI\%MPI_Ireduce_scatter_block\fP,
\fI\%MPI_Reduce_scatter_block_init\fP — Combines values and scatters the
results in blocks.
.SH SYNTAX
.SS C Syntax
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
#include <mpi.h>

int MPI_Reduce_scatter_block(const void *sendbuf, void *recvbuf, int recvcount,
     MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

int MPI_Ireduce_scatter_block(const void *sendbuf, void *recvbuf, int recvcount,
     MPI_Datatype datatype, MPI_Op op, MPI_Comm comm, MPI_Request *request)

int MPI_Reduce_scatter_block_init(const void *sendbuf, void *recvbuf, int recvcount,
     MPI_Datatype datatype, MPI_Op op, MPI_Comm comm, MPI_Info info, MPI_Request *request)
.ft P
.fi
.UNINDENT
.UNINDENT
.SS Fortran Syntax
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
USE MPI
! or the older form: INCLUDE \(aqmpif.h\(aq
MPI_REDUCE_SCATTER_BLOCK(SENDBUF, RECVBUF, RECVCOUNT, DATATYPE, OP,
             COMM, IERROR)
     <type>  SENDBUF(*), RECVBUF(*)
     INTEGER RECVCOUNT, DATATYPE, OP, COMM, IERROR

MPI_IREDUCE_SCATTER_BLOCK(SENDBUF, RECVBUF, RECVCOUNT, DATATYPE, OP,
             COMM, REQUEST, IERROR)
     <type>  SENDBUF(*), RECVBUF(*)
     INTEGER RECVCOUNT, DATATYPE, OP, COMM, REQUEST, IERROR

MPI_REDUCE_SCATTER_BLOCK_INIT(SENDBUF, RECVBUF, RECVCOUNT, DATATYPE, OP,
             COMM, INFO, REQUEST, IERROR)
     <type>  SENDBUF(*), RECVBUF(*)
     INTEGER RECVCOUNT, DATATYPE, OP, COMM, INFO, REQUEST, IERROR
.ft P
.fi
.UNINDENT
.UNINDENT
.SS Fortran 2008 Syntax
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
USE mpi_f08
MPI_Reduce_scatter_block(sendbuf, recvbuf, recvcount, datatype, op, comm,
             ierror)
     TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
     TYPE(*), DIMENSION(..) :: recvbuf
     INTEGER, INTENT(IN) :: recvcount
     TYPE(MPI_Datatype), INTENT(IN) :: datatype
     TYPE(MPI_Op), INTENT(IN) :: op
     TYPE(MPI_Comm), INTENT(IN) :: comm
     INTEGER, OPTIONAL, INTENT(OUT) :: ierror

MPI_Ireduce_scatter_block(sendbuf, recvbuf, recvcount, datatype, op, comm,
             request, ierror)
     TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
     TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
     INTEGER, INTENT(IN) :: recvcount
     TYPE(MPI_Datatype), INTENT(IN) :: datatype
     TYPE(MPI_Op), INTENT(IN) :: op
     TYPE(MPI_Comm), INTENT(IN) :: comm
     TYPE(MPI_Request), INTENT(OUT) :: request
     INTEGER, OPTIONAL, INTENT(OUT) :: ierror

MPI_Reduce_scatter_block_init(sendbuf, recvbuf, recvcount, datatype, op, comm,
             info, request, ierror)
     TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
     TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
     INTEGER, INTENT(IN) :: recvcount
     TYPE(MPI_Datatype), INTENT(IN) :: datatype
     TYPE(MPI_Op), INTENT(IN) :: op
     TYPE(MPI_Comm), INTENT(IN) :: comm
     TYPE(MPI_Info), INTENT(IN) :: info
     TYPE(MPI_Request), INTENT(OUT) :: request
     INTEGER, OPTIONAL, INTENT(OUT) :: ierror
.ft P
.fi
.UNINDENT
.UNINDENT
.SH INPUT PARAMETERS
.INDENT 0.0
.IP \(bu 2
\fBsendbuf\fP: Starting address of send buffer (choice).
.IP \(bu 2
\fBrecvcount\fP: Element count per block (non\-negative integer).
.IP \(bu 2
\fBdatatype\fP: Datatype of elements of input buffer (handle).
.IP \(bu 2
\fBop\fP: Operation (handle).
.IP \(bu 2
\fBcomm\fP: Communicator (handle).
.IP \(bu 2
\fBinfo\fP: Info (handle, persistent only).
.UNINDENT
.SH OUTPUT PARAMETERS
.INDENT 0.0
.IP \(bu 2
\fBrecvbuf\fP: Starting address of receive buffer (choice).
.IP \(bu 2
\fBrequest\fP: Request (handle, non\-blocking and persistent only).
.IP \(bu 2
\fBierror\fP: Fortran only: Error status (integer).
.UNINDENT
.SH DESCRIPTION
.sp
\fI\%MPI_Reduce_scatter_block\fP first performs an element\-wise reduction on
the vector of \fBcount = n * recvcount\fP elements in the send buffer
defined by \fIsendbuf\fP, \fIcount\fP, and \fIdatatype\fP, using the operation
\fIop\fP, where n is the number of processes in the group of \fIcomm\fP\&. Next,
the resulting vector is split into n disjoint segments, each containing
\fIrecvcount\fP elements. The ith segment is sent to process i and stored in
the receive buffer defined by \fIrecvbuf\fP, \fIrecvcount\fP, and \fIdatatype\fP\&.
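.sp
The sketch below illustrates the persistent variant described on this
page; the buffer sizes, the choice of MPI_INFO_NULL, and the single
MPI_Start/MPI_Wait cycle are illustrative assumptions rather than
requirements of the interface.
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
#include <mpi.h>
#include <stdlib.h>

int main(int argc, char *argv[])
{
    int n, rank, recvcount = 4;
    MPI_Request req;

    MPI_Init(&argc, &argv);
    MPI_Comm_size(MPI_COMM_WORLD, &n);
    MPI_Comm_rank(MPI_COMM_WORLD, &rank);

    /* Each process contributes n * recvcount elements and receives
       the recvcount reduced elements of its own block. */
    int *sendbuf = malloc(n * recvcount * sizeof(int));
    int *recvbuf = malloc(recvcount * sizeof(int));
    for (int i = 0; i < n * recvcount; ++i)
        sendbuf[i] = rank + i;

    MPI_Reduce_scatter_block_init(sendbuf, recvbuf, recvcount, MPI_INT,
                                  MPI_SUM, MPI_COMM_WORLD, MPI_INFO_NULL,
                                  &req);
    MPI_Start(&req);                    /* start one instance of the collective */
    MPI_Wait(&req, MPI_STATUS_IGNORE);  /* complete it */
    MPI_Request_free(&req);             /* release the persistent request */

    free(sendbuf);
    free(recvbuf);
    MPI_Finalize();
    return 0;
}
.ft P
.fi
.UNINDENT
.UNINDENT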
.SH USE OF IN-PLACE OPTION
.sp
When the communicator is an intracommunicator, you can perform a
reduce\-scatter operation in\-place (the output buffer is used as the
input buffer). Use the variable MPI_IN_PLACE as the value of
\fIsendbuf\fP\&. In this case, the input data is taken from the top of the
receive buffer. The area occupied by the input data may be either longer
or shorter than the area occupied by the output data.
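.sp
For illustration, assuming \fBcomm\fP, its size \fBn\fP, and \fBrecvcount\fP
as in the sketch above, the in\-place form can be requested as follows;
the buffer sizing shown is drawn from the description above and is an
assumption of this sketch:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
/* In\-place form: recvbuf initially holds the full input vector of
   n * recvcount elements for this process; MPI_IN_PLACE replaces the
   send buffer argument, and the reduced block of recvcount elements
   is written back to the start of recvbuf. */
int *recvbuf = malloc(n * recvcount * sizeof(int));
/* ... fill recvbuf with the n * recvcount input elements ... */
MPI_Reduce_scatter_block(MPI_IN_PLACE, recvbuf, recvcount, MPI_INT,
                         MPI_SUM, comm);
.ft P
.fi
.UNINDENT
.UNINDENT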
.SH WHEN COMMUNICATOR IS AN INTER-COMMUNICATOR
.sp
When the communicator is an inter\-communicator, the reduce\-scatter
operation occurs in two phases. First, the result of the reduction
performed on the data provided by the processes in the first group is
scattered among the processes in the second group. Then the reverse
occurs: the reduction performed on the data provided by the processes in
the second group is scattered among the processes in the first group.
For each group, all processes provide the same \fIrecvcount\fP argument,
and the product of \fIrecvcount\fP and the group size should be the same
for both groups.
.SH NOTES ON COLLECTIVE OPERATIONS
.sp
The reduction functions (MPI_Op) do not return an error value. As a
result, if the functions detect an error, all they can do is either call
\fI\%MPI_Abort\fP or silently skip the problem. Thus, if you change the error
handler from MPI_ERRORS_ARE_FATAL to something else, for example,
MPI_ERRORS_RETURN , then no error may be indicated.
.sp
The reason for this is the performance problems in ensuring that all
collective routines return the same error value.
.SH ERRORS
.sp
Almost all MPI routines return an error value; C routines as the return result
of the function and Fortran routines in the last argument.
.sp
Before the error value is returned, the current MPI error handler associated
with the communication object (e.g., communicator, window, file) is called.
If no communication object is associated with the MPI call, then the call is
considered attached to MPI_COMM_SELF and will call the associated MPI error
handler. When MPI_COMM_SELF is not initialized (i.e., before
\fI\%MPI_Init\fP/\fI\%MPI_Init_thread\fP, after \fI\%MPI_Finalize\fP, or when using the Sessions
Model exclusively) the error raises the initial error handler. The initial
error handler can be changed by calling \fI\%MPI_Comm_set_errhandler\fP on
MPI_COMM_SELF when using the World model, or the mpi_initial_errhandler CLI
argument to mpiexec or info key to \fI\%MPI_Comm_spawn\fP/\fI\%MPI_Comm_spawn_multiple\fP\&.
If no other appropriate error handler has been set, then the MPI_ERRORS_RETURN
error handler is called for MPI I/O functions and the MPI_ERRORS_ABORT error
handler is called for all other MPI functions.
.sp
Open MPI includes three predefined error handlers that can be used:
.INDENT 0.0
.IP \(bu 2
\fBMPI_ERRORS_ARE_FATAL\fP
Causes the program to abort all connected MPI processes.
.IP \(bu 2
\fBMPI_ERRORS_ABORT\fP
An error handler that can be invoked on a communicator,
window, file, or session. When called on a communicator, it
acts as if \fI\%MPI_Abort\fP was called on that communicator. If
called on a window or file, acts as if \fI\%MPI_Abort\fP was called
on a communicator containing the group of processes in the
corresponding window or file. If called on a session,
aborts only the local process.
.IP \(bu 2
\fBMPI_ERRORS_RETURN\fP
Returns an error code to the application.
.UNINDENT
.sp
MPI applications can also implement their own error handlers by calling:
.INDENT 0.0
.IP \(bu 2
\fI\%MPI_Comm_create_errhandler\fP then \fI\%MPI_Comm_set_errhandler\fP
.IP \(bu 2
\fI\%MPI_File_create_errhandler\fP then \fI\%MPI_File_set_errhandler\fP
.IP \(bu 2
\fI\%MPI_Session_create_errhandler\fP then \fI\%MPI_Session_set_errhandler\fP or at \fI\%MPI_Session_init\fP
.IP \(bu 2
\fI\%MPI_Win_create_errhandler\fP then \fI\%MPI_Win_set_errhandler\fP
.UNINDENT
.sp
Note that MPI does not guarantee that an MPI program can continue past
an error.
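.sp
As an illustration only, an application that prefers to handle failures
itself might select MPI_ERRORS_RETURN on the communicator and test the
return code; the buffers and variables shown are assumed to be set up as
in the earlier sketch:
.INDENT 0.0
.INDENT 3.5
.sp
.nf
.ft C
/* Report errors through return codes instead of aborting. */
MPI_Comm_set_errhandler(MPI_COMM_WORLD, MPI_ERRORS_RETURN);

int rc = MPI_Reduce_scatter_block_init(sendbuf, recvbuf, recvcount, MPI_INT,
                                       MPI_SUM, MPI_COMM_WORLD, MPI_INFO_NULL,
                                       &req);
if (rc != MPI_SUCCESS) {
    char msg[MPI_MAX_ERROR_STRING];
    int len;
    MPI_Error_string(rc, msg, &len);
    /* log msg, then recover or abort as appropriate */
}
.ft P
.fi
.UNINDENT
.UNINDENT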
.sp
See the \fI\%MPI man page\fP for a full list of \fI\%MPI error codes\fP\&.
.sp
See the Error Handling section of the MPI\-3.1 standard for
more information.
.sp
\fBSEE ALSO:\fP
.INDENT 0.0
.INDENT 3.5
.INDENT 0.0
.IP \(bu 2
\fI\%MPI_Reduce_scatter\fP
.UNINDENT
.UNINDENT
.UNINDENT
.SH COPYRIGHT
2003-2025, The Open MPI Community
.\" Generated by docutils manpage writer.
.