File: MPI_Dims_create.3.rst

.. _mpi_dims_create:


MPI_Dims_create
===============

.. include_body

:ref:`MPI_Dims_create` |mdash| Creates a division of processors in a Cartesian
grid.


SYNTAX
------


C Syntax
^^^^^^^^

.. code-block:: c

   #include <mpi.h>

   int MPI_Dims_create(int nnodes, int ndims, int dims[])


Fortran Syntax
^^^^^^^^^^^^^^

.. code-block:: fortran

   USE MPI
   ! or the older form: INCLUDE 'mpif.h'
   MPI_DIMS_CREATE(NNODES, NDIMS, DIMS, IERROR)
   	INTEGER	NNODES, NDIMS, DIMS(*), IERROR


Fortran 2008 Syntax
^^^^^^^^^^^^^^^^^^^

.. code-block:: fortran

   USE mpi_f08
   MPI_Dims_create(nnodes, ndims, dims, ierror)
   	INTEGER, INTENT(IN) :: nnodes, ndims
   	INTEGER, INTENT(INOUT) :: dims(ndims)
   	INTEGER, OPTIONAL, INTENT(OUT) :: ierror


INPUT PARAMETERS
----------------
* ``nnodes``: Number of nodes in a grid (integer).
* ``ndims``: Number of Cartesian dimensions (integer).

IN/OUT PARAMETER
----------------
* ``dims``: Integer array of size ndims specifying the number of nodes in each dimension.

OUTPUT PARAMETER
----------------
* ``ierror``: Fortran only: Error status (integer).

DESCRIPTION
-----------

For Cartesian topologies, the function :ref:`MPI_Dims_create` helps the user
select a balanced distribution of processes per coordinate direction,
depending on the number of processes in the group to be balanced and
optional constraints that can be specified by the user. One use is to
partition all the processes (the size of MPI_COMM_WORLD's group) into an
n-dimensional topology.

The entries in the array *dims* are set to describe a Cartesian grid
with *ndims* dimensions and a total of *nnodes* nodes. The dimensions
are set to be as close to each other as possible, using an appropriate
divisibility algorithm. The caller may further constrain the operation
of this routine by specifying elements of array dims. If dims[i] is set
to a positive number, the routine will not modify the number of nodes in
dimension i; only those entries where dims[i] = 0 are modified by the
call.
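
As an illustration of this constraint behavior, here is a minimal C sketch
(not part of the official man page; the calls and resulting values mirror
rows of the example table below, and error handling is omitted):

.. code-block:: c

   #include <stdio.h>
   #include <mpi.h>

   int main(int argc, char **argv)
   {
       MPI_Init(&argc, &argv);

       /* Unconstrained: both entries are 0, so the routine factors
          6 nodes into a balanced 2-D grid. */
       int free_dims[2] = {0, 0};
       MPI_Dims_create(6, 2, free_dims);     /* free_dims becomes (3,2) */

       /* Constrained: dims[1] is fixed at 3, so only dims[0] and dims[2]
          are modified; the remaining 6 / 3 = 2 nodes are distributed. */
       int fixed_dims[3] = {0, 3, 0};
       MPI_Dims_create(6, 3, fixed_dims);    /* fixed_dims becomes (2,3,1) */

       printf("unconstrained: (%d,%d), constrained: (%d,%d,%d)\n",
              free_dims[0], free_dims[1],
              fixed_dims[0], fixed_dims[1], fixed_dims[2]);

       MPI_Finalize();
       return 0;
   }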

Negative input values of dims[i] are erroneous. An error will occur if
nnodes is not a multiple of the product of all entries dims[i] that are
set to a nonzero value.

The entries of dims that are set by the call are arranged in
nonincreasing order. The resulting array dims is suitable for use as
input to routine :ref:`MPI_Cart_create` (see the sketch following the
example below). :ref:`MPI_Dims_create` is a local call.

**Example:**

::


   dims before call    function call                   dims on return
   -------------------------------------------------------------------
   (0,0)               MPI_Dims_create(6, 2, dims)     (3,2)
   (0,0)               MPI_Dims_create(7, 2, dims)     (7,1)
   (0,3,0)             MPI_Dims_create(6, 3, dims)     (2,3,1)
   (0,3,0)             MPI_Dims_create(7, 3, dims)     erroneous call
   -------------------------------------------------------------------
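
The following self-contained C sketch (an illustration, not one of the
examples above) partitions the processes of MPI_COMM_WORLD into a
two-dimensional grid and passes the resulting dims array directly to
:ref:`MPI_Cart_create`:

.. code-block:: c

   #include <stdio.h>
   #include <mpi.h>

   int main(int argc, char **argv)
   {
       int size, rank, coords[2];
       int dims[2]    = {0, 0};   /* 0 = let MPI_Dims_create choose */
       int periods[2] = {0, 0};   /* non-periodic in both dimensions */
       MPI_Comm cart_comm;

       MPI_Init(&argc, &argv);
       MPI_Comm_size(MPI_COMM_WORLD, &size);

       /* Factor the total number of processes into a balanced 2-D grid. */
       MPI_Dims_create(size, 2, dims);

       /* Use the result directly as input to MPI_Cart_create. */
       MPI_Cart_create(MPI_COMM_WORLD, 2, dims, periods, 0, &cart_comm);

       MPI_Comm_rank(cart_comm, &rank);
       MPI_Cart_coords(cart_comm, rank, 2, coords);
       printf("rank %d has coordinates (%d,%d) in a %d x %d grid\n",
              rank, coords[0], coords[1], dims[0], dims[1]);

       MPI_Comm_free(&cart_comm);
       MPI_Finalize();
       return 0;
   }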


ERRORS
------

.. include:: ./ERRORS.rst