.. _mpi_exscan:


MPI_Exscan
==========

.. include_body

:ref:`MPI_Exscan`, :ref:`MPI_Iexscan`, ``MPI_Exscan_init`` - Computes an
exclusive scan (partial reduction)


SYNTAX
------


C Syntax
^^^^^^^^

.. code-block:: c

   #include <mpi.h>

   int MPI_Exscan(const void *sendbuf, void *recvbuf, int count,
   	MPI_Datatype datatype, MPI_Op op, MPI_Comm comm)

   int MPI_Iexscan(const void *sendbuf, void *recvbuf, int count,
   	MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
   	MPI_Request *request)

   int MPI_Exscan_init(const void *sendbuf, void *recvbuf, int count,
   	MPI_Datatype datatype, MPI_Op op, MPI_Comm comm,
   	MPI_Info info, MPI_Request *request)


Fortran Syntax
^^^^^^^^^^^^^^

.. code-block:: fortran

   USE MPI
   ! or the older form: INCLUDE 'mpif.h'
   MPI_EXSCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, IERROR)
   	<type>	SENDBUF(*), RECVBUF(*)
   	INTEGER	COUNT, DATATYPE, OP, COMM, IERROR

   MPI_IEXSCAN(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, REQUEST, IERROR)
   	<type>	SENDBUF(*), RECVBUF(*)
   	INTEGER	COUNT, DATATYPE, OP, COMM, REQUEST, IERROR

   MPI_EXSCAN_INIT(SENDBUF, RECVBUF, COUNT, DATATYPE, OP, COMM, INFO, REQUEST, IERROR)
   	<type>	SENDBUF(*), RECVBUF(*)
   	INTEGER	COUNT, DATATYPE, OP, COMM, INFO, REQUEST, IERROR


Fortran 2008 Syntax
^^^^^^^^^^^^^^^^^^^

.. code-block:: fortran

   USE mpi_f08
   MPI_Exscan(sendbuf, recvbuf, count, datatype, op, comm, ierror)
   	TYPE(*), DIMENSION(..), INTENT(IN) :: sendbuf
   	TYPE(*), DIMENSION(..) :: recvbuf
   	INTEGER, INTENT(IN) :: count
   	TYPE(MPI_Datatype), INTENT(IN) :: datatype
   	TYPE(MPI_Op), INTENT(IN) :: op
   	TYPE(MPI_Comm), INTENT(IN) :: comm
   	INTEGER, OPTIONAL, INTENT(OUT) :: ierror

   MPI_Iexscan(sendbuf, recvbuf, count, datatype, op, comm, request, ierror)
   	TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
   	TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
   	INTEGER, INTENT(IN) :: count
   	TYPE(MPI_Datatype), INTENT(IN) :: datatype
   	TYPE(MPI_Op), INTENT(IN) :: op
   	TYPE(MPI_Comm), INTENT(IN) :: comm
   	TYPE(MPI_Request), INTENT(OUT) :: request
   	INTEGER, OPTIONAL, INTENT(OUT) :: ierror

   MPI_Exscan_init(sendbuf, recvbuf, count, datatype, op, comm, info, request, ierror)
   	TYPE(*), DIMENSION(..), INTENT(IN), ASYNCHRONOUS :: sendbuf
   	TYPE(*), DIMENSION(..), ASYNCHRONOUS :: recvbuf
   	INTEGER, INTENT(IN) :: count
   	TYPE(MPI_Datatype), INTENT(IN) :: datatype
   	TYPE(MPI_Op), INTENT(IN) :: op
   	TYPE(MPI_Comm), INTENT(IN) :: comm
   	TYPE(MPI_Info), INTENT(IN) :: info
   	TYPE(MPI_Request), INTENT(OUT) :: request
   	INTEGER, OPTIONAL, INTENT(OUT) :: ierror


INPUT PARAMETERS
----------------
* ``sendbuf``: Send buffer (choice).
* ``count``: Number of elements in input buffer (integer).
* ``datatype``: Data type of elements of input buffer (handle).
* ``op``: Operation (handle).
* ``comm``: Communicator (handle).

OUTPUT PARAMETERS
-----------------
* ``recvbuf``: Receive buffer (choice).
* ``request``: Request (handle, non-blocking and persistent only).
* ``ierror``: Fortran only: Error status (integer).

DESCRIPTION
-----------

:ref:`MPI_Exscan` is used to perform an exclusive prefix reduction on data
distributed across the calling processes. The operation returns, in the
*recvbuf* of the process with rank i, the reduction (calculated
according to the function *op*) of the values in the *sendbuf*\ s of
processes with ranks 0, ..., i-1. Compare this with the functionality of
:ref:`MPI_Scan`, which calculates over the range 0, ..., i (inclusive). The
type of operations supported, their semantics, and the constraints on
send and receive buffers are as for :ref:`MPI_Reduce`.

The value in *recvbuf* on process 0 is undefined, because *recvbuf* is
not significant for process 0. The value of *recvbuf* on process 1 is
always the value in *sendbuf* on process 0.
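
As an illustration only (this example is not part of the original man
page), the following self-contained program uses :ref:`MPI_Exscan` to
compute exclusive prefix sums of the ranks: after the call, the process
with rank i holds 0 + 1 + ... + (i-1). Because *recvbuf* is undefined
on process 0, the example assigns it a well-defined value there before
use.

.. code-block:: c

   #include <stdio.h>
   #include <mpi.h>

   int main(int argc, char *argv[])
   {
       int rank, prefix = 0;

       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);

       /* Process i receives the sum of the ranks of processes
          0, ..., i-1, combined with MPI_SUM. */
       MPI_Exscan(&rank, &prefix, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);

       /* recvbuf is undefined on process 0; set it explicitly
          before use. */
       if (rank == 0) {
           prefix = 0;
       }

       printf("rank %d: exclusive prefix sum = %d\n", rank, prefix);

       MPI_Finalize();
       return 0;
   }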


USE OF IN-PLACE OPTION
----------------------

The "in place" option for intracommunicators is specified by passing
MPI_IN_PLACE in the *sendbuf* argument. In this case, the input data is
taken from the receive buffer and replaced by the output data.

Note that MPI_IN_PLACE is a special kind of value; it has the same
restrictions on its use as MPI_BOTTOM.

Because the in-place option converts the receive buffer into a
send-and-receive buffer, a Fortran binding that includes INTENT must
mark these as INOUT, not OUT.
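
A minimal sketch of the in-place form, reusing the prefix-sum example
above (*rank* is assumed to hold the caller's rank): each process
starts with its input value in the receive buffer, which
:ref:`MPI_Exscan` then overwrites with the result.

.. code-block:: c

   int val = rank;  /* the input starts in the receive buffer */
   MPI_Exscan(MPI_IN_PLACE, &val, 1, MPI_INT, MPI_SUM, MPI_COMM_WORLD);
   /* val now holds the exclusive prefix sum on ranks 1, ..., n-1;
      on rank 0 the contents are not significant after the call. */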


NOTES
-----

MPI does not specify which process computes which operation. In
particular, both processes 0 and 1 may participate in the computation
even though the results for both processes' *recvbuf* are degenerate.
Therefore, all processes, including 0 and 1, must provide the same *op*.

It can be argued, from a mathematical perspective, that the definition
of :ref:`MPI_Exscan` is unsatisfactory because the output at process 0 is
undefined. The "mathematically correct" output for process 0 would be
the unit element of the reduction operation. However, such a definition
of an exclusive scan would not work with user-defined *op* functions as
there is no way for MPI to "know" the unit value for these custom
operations.
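
When the application knows the identity element of its operation, the
"mathematically correct" result for process 0 can be restored by hand.
A hedged sketch for MPI_PROD, whose identity is 1 (*local_factor* and
*scaled* are hypothetical names):

.. code-block:: c

   double scaled;
   double local_factor = 2.0;  /* hypothetical per-process input */
   MPI_Exscan(&local_factor, &scaled, 1, MPI_DOUBLE, MPI_PROD,
              MPI_COMM_WORLD);
   if (rank == 0) {
       scaled = 1.0;  /* identity of MPI_PROD; MPI leaves this undefined */
   }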


NOTES ON COLLECTIVE OPERATIONS
------------------------------

The reduction functions of type MPI_Op do not return an error value. As
a result, if the functions detect an error, all they can do is either
call :ref:`MPI_Abort` or silently skip the problem. Thus, if the error handler
is changed from MPI_ERRORS_ARE_FATAL to something else (e.g.,
MPI_ERRORS_RETURN), then no error may be indicated.

The reason for this is the performance cost of ensuring that all
collective routines return the same error value.


ERRORS
------

.. include:: ./ERRORS.rst

.. seealso::
   * :ref:`MPI_Op_create`
   * :ref:`MPI_Reduce`
   * :ref:`MPI_Scan`