File: MPI_Intercomm_create.3.rst

.. _mpi_intercomm_create:


MPI_Intercomm_create
====================

.. include_body

:ref:`MPI_Intercomm_create` |mdash| Creates an intercommunicator from two
intracommunicators.


SYNTAX
------


C Syntax
^^^^^^^^

.. code-block:: c

   #include <mpi.h>

   int MPI_Intercomm_create(MPI_Comm local_comm, int local_leader,
   	MPI_Comm peer_comm, int remote_leader, int tag, MPI_Comm *newintercomm)


Fortran Syntax
^^^^^^^^^^^^^^

.. code-block:: fortran

   USE MPI
   ! or the older form: INCLUDE 'mpif.h'
   MPI_INTERCOMM_CREATE(LOCAL_COMM, LOCAL_LEADER, PEER_COMM,
   		REMOTE_LEADER, TAG, NEWINTERCOMM, IERROR)
   	INTEGER	LOCAL_COMM, LOCAL_LEADER, PEER_COMM, REMOTE_LEADER
   	INTEGER	TAG, NEWINTERCOMM, IERROR


Fortran 2008 Syntax
^^^^^^^^^^^^^^^^^^^

.. code-block:: fortran

   USE mpi_f08
   MPI_Intercomm_create(local_comm, local_leader, peer_comm, remote_leader,
   		tag, newintercomm, ierror)
   	TYPE(MPI_Comm), INTENT(IN) :: local_comm, peer_comm
   	INTEGER, INTENT(IN) :: local_leader, remote_leader, tag
   	TYPE(MPI_Comm), INTENT(OUT) :: newintercomm
   	INTEGER, OPTIONAL, INTENT(OUT) :: ierror


INPUT PARAMETERS
----------------
* ``local_comm``: The communicator containing the process that initiates the inter-communication (handle).
* ``local_leader``: Rank of local group leader in local_comm (integer).
* ``peer_comm``: "Peer" communicator; significant only at the local_leader (handle).
* ``remote_leader``: Rank of remote group leader in peer_comm; significant only at the local_leader (integer).
* ``tag``: Message tag used to identify new intercommunicator (integer).

OUTPUT PARAMETERS
-----------------
* ``newintercomm``: Created intercommunicator (handle).
* ``ierror``: Fortran only: Error status (integer).

DESCRIPTION
-----------

This call creates an intercommunicator. It is collective over the union
of the local and remote groups. Processes should provide identical
local_comm and local_leader arguments within each group. Wildcards are
not permitted for remote_leader, local_leader, and tag.

This call uses point-to-point communication over communicator
peer_comm, with the given tag, between the two group leaders. Care
must therefore be taken that there is no pending communication on
peer_comm that could interfere with this exchange.

If multiple calls to :ref:`MPI_Intercomm_create` are being made, they
should use different tags (more precisely, they should ensure that the
local and remote leaders use a different tag for each call to
:ref:`MPI_Intercomm_create`).
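
For example, the following sketch (illustrative, assuming at least two
processes) splits MPI_COMM_WORLD into two disjoint halves with
:ref:`MPI_Comm_split` and connects them with an intercommunicator.
Rank 0 within each half serves as that group's leader, MPI_COMM_WORLD
itself is used as the peer communicator, and the tag is arbitrary but
must match across both groups:

.. code-block:: c

   #include <mpi.h>

   int main(int argc, char *argv[])
   {
       MPI_Comm local_comm, intercomm;
       int world_rank, color, remote_leader;

       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &world_rank);

       /* Two disjoint groups: even world ranks get color 0, odd ranks
          color 1.  Keying on world_rank makes the lowest world rank in
          each group local rank 0, i.e., the local leader. */
       color = world_rank % 2;
       MPI_Comm_split(MPI_COMM_WORLD, color, world_rank, &local_comm);

       /* The remote leader is named by its rank in peer_comm
          (MPI_COMM_WORLD here): world rank 1 leads the odd group and
          world rank 0 leads the even group.  Both groups must pass the
          same tag (42 here, chosen arbitrarily). */
       remote_leader = (color == 0) ? 1 : 0;
       MPI_Intercomm_create(local_comm, 0, MPI_COMM_WORLD, remote_leader,
                            42, &intercomm);

       MPI_Comm_free(&intercomm);
       MPI_Comm_free(&local_comm);
       MPI_Finalize();
       return 0;
   }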


NOTES
-----

We recommend using a dedicated peer communicator, such as a duplicate
of MPI_COMM_WORLD, so that the leaders' exchange cannot collide with
unrelated pending point-to-point traffic on the peer communicator.
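
A minimal sketch of this recommendation, assuming ``local_comm``,
``remote_leader``, and ``tag`` have already been set up as in the
example above:

.. code-block:: c

   MPI_Comm peer_comm, intercomm;

   /* Duplicate MPI_COMM_WORLD so the leaders' internal point-to-point
      exchange travels on its own communicator, where no application
      message can interfere with it. */
   MPI_Comm_dup(MPI_COMM_WORLD, &peer_comm);

   MPI_Intercomm_create(local_comm, 0, peer_comm, remote_leader,
                        tag, &intercomm);

   MPI_Comm_free(&peer_comm);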

The MPI 1.1 Standard contains two mutually exclusive comments on the
input intracommunicators. One says that their respective groups must be
disjoint; the other says that the leaders can be the same process.
After some discussion, the MPI Forum decided that the groups must be
disjoint. Note that the **reason** given for this in the standard is
**not** the actual reason for the choice; rather, the **other**
operations on intercommunicators (such as :ref:`MPI_Intercomm_merge`)
do not make sense if the groups are not disjoint.


ERRORS
------

.. include:: ./ERRORS.rst

.. seealso::
   * :ref:`MPI_Intercomm_merge`
   * :ref:`MPI_Comm_free`
   * :ref:`MPI_Comm_remote_group`
   * :ref:`MPI_Comm_remote_size`