Overview
========
MPI for Python provides an object-oriented approach to message passing
that is grounded on the standard MPI-2 C++ bindings. The interface was
designed with a focus on translating the syntax and semantics of the
standard MPI-2 C++ bindings to Python. Any user of the standard
C/C++ MPI bindings should be able to use this module without
learning a new interface.
Communicating Python Objects and Array Data
-------------------------------------------
The Python standard library supports different mechanisms for data
persistence. Many of them rely on disk storage, but *pickling* and
*marshaling* can also work with memory buffers.
The :mod:`pickle` module provides user-extensible facilities to
serialize general Python objects using ASCII or binary formats. The
:mod:`marshal` module provides facilities to serialize built-in Python
objects using a binary format specific to Python, but independent of
machine architecture issues.
*MPI for Python* can communicate any built-in or user-defined Python
object taking advantage of the features provided by the :mod:`pickle`
module. These facilities will be routinely used to build binary
representations of objects to communicate (at sending processes), and
restoring them back (at receiving processes).
Although simple and general, the serialization approach (i.e.,
*pickling* and *unpickling*) outlined above imposes significant
overheads in memory as well as processor usage, especially when
objects with large memory footprints are being
communicated. Pickling general Python objects, ranging from primitive
or container built-in types to user-defined classes, necessarily
requires computer resources. Processing is also needed for
dispatching the appropriate serialization method (that depends on the
type of the object) and doing the actual packing. Additional memory is
always needed, and if its total amount is not known *a priori*, many
reallocations can occur. Indeed, in the case of large numeric arrays,
this is certainly unacceptable and precludes communication of objects
occupying half or more of the available memory resources.
*MPI for Python* supports direct communication of any object exporting
the single-segment buffer interface. This interface is a standard
Python mechanism provided by some types (e.g., strings and numeric
arrays), allowing access on the C side to a contiguous memory buffer
(i.e., address and length) containing the relevant data. This feature,
in conjunction with the capability of constructing user-defined MPI
datatypes describing complicated memory layouts, enables the
implementation of many algorithms involving multidimensional numeric
arrays (e.g., image processing, fast Fourier transforms, finite
difference schemes on structured Cartesian grids) directly in Python,
with negligible overhead, and almost as fast as compiled Fortran, C,
or C++ codes.
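As a rough sketch of the difference between the two mechanisms
(assuming NumPy is installed and the script is run on two MPI
processes; the array sizes are arbitrary), general objects travel
through the lower-case, pickle-based methods, while buffer-like
objects can use the upper-case methods introduced later in this
overview::

    from mpi4py import MPI
    import numpy

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Generic Python objects go through the pickle-based, lower-case methods.
    if rank == 0:
        comm.send({"step": 1, "data": [1, 2, 3]}, dest=1, tag=11)
    elif rank == 1:
        obj = comm.recv(source=0, tag=11)

    # Objects exposing a memory buffer (e.g., NumPy arrays) can use the
    # upper-case methods, which move the raw bytes without any pickling.
    if rank == 0:
        a = numpy.arange(100, dtype='d')
        comm.Send([a, MPI.DOUBLE], dest=1, tag=22)
    elif rank == 1:
        a = numpy.empty(100, dtype='d')
        comm.Recv([a, MPI.DOUBLE], source=0, tag=22)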
Communicators
-------------
In *MPI for Python*, :class:`MPI.Comm` is the base class of
communicators. The :class:`MPI.Intracomm` and :class:`MPI.Intercomm`
classes are subclasses of the :class:`MPI.Comm` class. The
:meth:`MPI.Comm.Is_inter` method (and :meth:`MPI.Comm.Is_intra`,
provided for convenience but not part of the MPI specification) is
defined for communicator objects and can be used to determine the
particular communicator class.
Two predefined intracommunicator instances are available:
:const:`MPI.COMM_SELF` and :const:`MPI.COMM_WORLD`. From them, new
communicators can be created as needed.
The number of processes in a communicator and the calling process rank
can be respectively obtained with methods :meth:`MPI.Comm.Get_size`
and :meth:`MPI.Comm.Get_rank`. The associated process group can be
retrieved from a communicator by calling the
:meth:`MPI.Comm.Get_group` method, which returns an instance of the
:class:`MPI.Group` class. Set operations with :class:`MPI.Group`
objects like :meth:`MPI.Group.Union`, :meth:`MPI.Group.Intersection`
and :meth:`MPI.Group.Difference` are fully supported, as well as the
creation of new communicators from these groups using
:meth:`MPI.Comm.Create` and :meth:`MPI.Comm.Create_group`.
New communicator instances can be obtained with the
:meth:`MPI.Comm.Clone`, :meth:`MPI.Comm.Dup` and
:meth:`MPI.Comm.Split` methods, as well as with the
:meth:`MPI.Intracomm.Create_intercomm` and
:meth:`MPI.Intercomm.Merge` methods.
Virtual topologies (:class:`MPI.Cartcomm`, :class:`MPI.Graphcomm` and
:class:`MPI.Distgraphcomm` classes, which are specializations of the
:class:`MPI.Intracomm` class) are fully supported. New instances can
be obtained from intracommunicator instances with factory methods
:meth:`MPI.Intracomm.Create_cart` and
:meth:`MPI.Intracomm.Create_graph`.
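The following sketch (illustrative only; the choice of colors and
topology dimensions is arbitrary) shows how a communicator can be
split and how a one-dimensional periodic Cartesian topology can be
built from it::

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    size = comm.Get_size()
    rank = comm.Get_rank()

    # Split the world communicator into two disjoint halves.
    color = rank % 2
    subcomm = comm.Split(color=color, key=rank)

    # Build a 1-D periodic Cartesian topology over all processes.
    cart = comm.Create_cart(dims=[size], periods=[True], reorder=True)
    left, right = cart.Shift(direction=0, disp=1)   # neighbor ranks

    subcomm.Free()
    cart.Free()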
Point-to-Point Communications
-----------------------------
Point-to-point communication is a fundamental capability of message
passing systems. This mechanism enables the transmission of data
between a pair of processes, one side sending, the other receiving.
MPI provides a set of *send* and *receive* functions allowing the
communication of *typed* data with an associated *tag*. The type
information enables the conversion of data representation from one
architecture to another in the case of heterogeneous computing
environments; additionally, it allows the representation of
non-contiguous data layouts and user-defined datatypes, thus avoiding
the overhead of (otherwise unavoidable) packing/unpacking
operations. The tag information allows selectivity of messages at the
receiving end.
Blocking Communications
^^^^^^^^^^^^^^^^^^^^^^^
MPI provides basic send and receive functions that are *blocking*.
These functions block the caller until the data buffers involved in
the communication can be safely reused by the application program.
In *MPI for Python*, the :meth:`MPI.Comm.Send`, :meth:`MPI.Comm.Recv`
and :meth:`MPI.Comm.Sendrecv` methods of communicator objects provide
support for blocking point-to-point communications within
:class:`MPI.Intracomm` and :class:`MPI.Intercomm` instances. These
methods can communicate memory buffers. The variants
:meth:`MPI.Comm.send`, :meth:`MPI.Comm.recv` and
:meth:`MPI.Comm.sendrecv` can communicate general Python objects.
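A minimal sketch, assuming a two-process run and NumPy arrays of an
arbitrary size::

    from mpi4py import MPI
    import numpy

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    if rank == 0:
        data = numpy.arange(10, dtype='i')
        comm.Send([data, MPI.INT], dest=1, tag=77)   # buffer-based send
        comm.send("hello", dest=1, tag=78)           # pickle-based send
    elif rank == 1:
        data = numpy.empty(10, dtype='i')
        comm.Recv([data, MPI.INT], source=0, tag=77)
        msg = comm.recv(source=0, tag=78)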
Nonblocking Communications
^^^^^^^^^^^^^^^^^^^^^^^^^^
On many systems, performance can be significantly increased by
overlapping communication and computation. This is particularly true
on systems where communication can be executed autonomously by an
intelligent, dedicated communication controller.
MPI provides *nonblocking* send and receive functions. They allow the
possible overlap of communication and computation. Nonblocking
communication always comes in two parts: posting functions, which begin
the requested operation; and test-for-completion functions, which
allow one to discover whether the requested operation has completed.
In *MPI for Python*, the :meth:`MPI.Comm.Isend` and
:meth:`MPI.Comm.Irecv` methods initiate send and receive operations,
respectively. These methods return a :class:`MPI.Request` instance,
uniquely identifying the started operation. Its completion can be
managed using the :meth:`MPI.Request.Test`, :meth:`MPI.Request.Wait`
and :meth:`MPI.Request.Cancel` methods. The management of
:class:`MPI.Request` objects and associated memory buffers involved in
communication requires a careful, rather low-level coordination. Users
must ensure that objects exposing their memory buffers are not
accessed at the Python level while they are involved in nonblocking
message-passing operations.
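A sketch of the usual pattern (two processes exchanging arrays while
doing unrelated work; buffer sizes are arbitrary) could look like
this::

    from mpi4py import MPI
    import numpy

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    other = 1 - rank                    # assumes exactly two processes

    sendbuf = numpy.full(1000, rank, dtype='d')
    recvbuf = numpy.empty(1000, dtype='d')

    # Post the operations; the buffers must not be touched until completion.
    requests = [comm.Isend([sendbuf, MPI.DOUBLE], dest=other, tag=0),
                comm.Irecv([recvbuf, MPI.DOUBLE], source=other, tag=0)]

    local_result = sendbuf.sum()        # unrelated computation may overlap here

    MPI.Request.Waitall(requests)       # wait for both operations to complete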
Persistent Communications
^^^^^^^^^^^^^^^^^^^^^^^^^
Often a communication with the same argument list is repeatedly
executed within an inner loop. In such cases, communication can be
further optimized by using persistent communication, a particular case
of nonblocking communication allowing the reduction of the overhead
between processes and communication controllers. Furthermore, this
kind of optimization can also alleviate the extra call overheads
associated with interpreted, dynamic languages like Python.
In *MPI for Python*, the :meth:`MPI.Comm.Send_init` and
:meth:`MPI.Comm.Recv_init` methods create persistent requests for a
send and receive operation, respectively. These methods return an
instance of the :class:`MPI.Prequest` class, a subclass of the
:class:`MPI.Request` class. The actual communication can be
effectively started using the :meth:`MPI.Prequest.Start` method, and
its completion can be managed as previously described.
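For instance, a pair of processes repeatedly exchanging a fixed-size
buffer might reuse persistent requests as in the following sketch
(buffer size and loop count are arbitrary)::

    from mpi4py import MPI
    import numpy

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()
    other = 1 - rank                    # assumes exactly two processes

    sendbuf = numpy.zeros(100, dtype='d')
    recvbuf = numpy.empty(100, dtype='d')

    # Create the persistent requests once, outside the loop.
    sreq = comm.Send_init([sendbuf, MPI.DOUBLE], dest=other, tag=0)
    rreq = comm.Recv_init([recvbuf, MPI.DOUBLE], source=other, tag=0)

    for step in range(10):
        sendbuf[:] = step               # refresh the outgoing data
        rreq.Start(); sreq.Start()      # start both persistent operations
        MPI.Request.Waitall([sreq, rreq])

    sreq.Free(); rreq.Free()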
Collective Communications
--------------------------
Collective communications allow the transmittal of data between
multiple processes of a group simultaneously. The syntax and semantics
of collective functions is consistent with point-to-point
communication. Collective functions communicate *typed* data, but
messages are not paired with an associated *tag*; selectivity of
messages is implied in the calling order. Additionally, collective
functions come in blocking versions only.
The more commonly used collective communication operations are the
following.
* Barrier synchronization across all group members.
* Global communication functions
+ Broadcast data from one member to all members of a group.
+ Gather data from all members to one member of a group.
+ Scatter data from one member to all members of a group.
* Global reduction operations such as sum, maximum, minimum, etc.
In *MPI for Python*, the :meth:`MPI.Comm.Bcast`,
:meth:`MPI.Comm.Scatter`, :meth:`MPI.Comm.Gather`,
:meth:`MPI.Comm.Allgather`, :meth:`MPI.Comm.Alltoall`, and
:meth:`MPI.Comm.Alltoallw` methods provide support for collective
communications of memory buffers. The lower-case variants
:meth:`MPI.Comm.bcast`, :meth:`MPI.Comm.scatter`,
:meth:`MPI.Comm.gather`, :meth:`MPI.Comm.allgather` and
:meth:`MPI.Comm.alltoall` can communicate general Python objects. The
vector variants (which can communicate different amounts of data to
each process) :meth:`MPI.Comm.Scatterv`, :meth:`MPI.Comm.Gatherv`,
:meth:`MPI.Comm.Allgatherv`, :meth:`MPI.Comm.Alltoallv` and
:meth:`MPI.Comm.Alltoallw` are also supported; they can only
communicate objects exposing memory buffers.
Global reduction operations on memory buffers are accessible through
the :meth:`MPI.Comm.Reduce`, :meth:`MPI.Comm.Reduce_scatter`,
:meth:`MPI.Comm.Allreduce`, :meth:`MPI.Intracomm.Scan` and
:meth:`MPI.Intracomm.Exscan` methods. The lower-case variants
:meth:`MPI.Comm.reduce`, :meth:`MPI.Comm.allreduce`,
:meth:`MPI.Intracomm.scan` and :meth:`MPI.Intracomm.exscan` can
communicate general Python objects; however, the actual required
reduction computations are performed sequentially at some process. All
the predefined (i.e., :const:`MPI.SUM`, :const:`MPI.PROD`,
:const:`MPI.MAX`, etc.) reduction operations can be applied.
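A small sketch combining buffer-based and pickle-based collectives
(array sizes are arbitrary; NumPy is assumed)::

    from mpi4py import MPI
    import numpy

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Broadcast a NumPy array from the root to every process.
    data = numpy.arange(10, dtype='d') if rank == 0 else numpy.empty(10, dtype='d')
    comm.Bcast([data, MPI.DOUBLE], root=0)

    # Buffer-based reduction; the receive buffer is significant only at the root.
    sendbuf = numpy.array([rank], dtype='i')
    recvbuf = numpy.zeros(1, dtype='i')
    comm.Reduce([sendbuf, MPI.INT], [recvbuf, MPI.INT], op=MPI.SUM, root=0)

    # The lower-case variants work on arbitrary Python objects.
    total = comm.allreduce(rank, op=MPI.SUM)
    parts = comm.gather({"rank": rank}, root=0)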
Dynamic Process Management
--------------------------
In the context of the MPI-1 specification, a parallel application is
static; that is, no processes can be added to or deleted from a
running application after it has been started. Fortunately, this
limitation was addressed in MPI-2. The new specification added a
process management model providing a basic interface between an
application and external resources and process managers.
This MPI-2 extension can be really useful, especially for sequential
applications built on top of parallel modules, or parallel
applications with a client/server model. The MPI-2 process model
provides a mechanism to create new processes and establish
communication between them and the existing MPI application. It also
provides mechanisms to establish communication between two existing
MPI applications, even when one did not *start* the other.
In *MPI for Python*, new independent process groups can be created by
calling the :meth:`MPI.Intracomm.Spawn` method within an
intracommunicator. This call returns a new intercommunicator (i.e.,
an :class:`MPI.Intercomm` instance) at the parent process group. The
child process group can retrieve the matching intercommunicator by
calling the :meth:`MPI.Comm.Get_parent` class method. At each side,
the new intercommunicator can be used to perform point-to-point and
collective communications between the parent and child groups of
processes.
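The following sketch shows a single script that spawns copies of
itself; the number of children and the broadcast payload are
arbitrary::

    import sys
    from mpi4py import MPI

    parent = MPI.Comm.Get_parent()

    if parent == MPI.COMM_NULL:
        # Parent side: spawn three children running this very script.
        children = MPI.COMM_SELF.Spawn(sys.executable, args=[__file__], maxprocs=3)
        children.bcast({"task": "demo"}, root=MPI.ROOT)
        children.Disconnect()
    else:
        # Child side: receive work from the parent over the intercommunicator.
        task = parent.bcast(None, root=0)
        parent.Disconnect()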
Alternatively, disjoint groups of processes can establish
communication using a client/server approach. Any server application
must first call the :func:`MPI.Open_port` function to open a *port*
and the :func:`MPI.Publish_name` function to publish a provided
*service*, and next call the :meth:`MPI.Intracomm.Accept` method. Any
client application can first find a published *service* by calling
the :func:`MPI.Lookup_name` function, which returns the *port* where a
server can be contacted; and next call the
:meth:`MPI.Intracomm.Connect` method. Both
:meth:`MPI.Intracomm.Accept` and :meth:`MPI.Intracomm.Connect` methods
return an :class:`MPI.Intercomm` instance. When connection between
client/server processes is no longer needed, all of them must
cooperatively call the :meth:`MPI.Comm.Disconnect`
method. Additionally, server applications should release resources by
calling the :func:`MPI.Unpublish_name` and :func:`MPI.Close_port`
functions.
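A bare-bones sketch of the handshake follows; how the *port* string
reaches the client (name publishing, a shared file, a command-line
argument, etc.) is deliberately left out::

    from mpi4py import MPI

    comm = MPI.COMM_WORLD

    # Server side.
    port = MPI.Open_port()
    # The port string would typically be published with MPI.Publish_name()
    # or otherwise handed over to the client out of band.
    intercomm = comm.Accept(port)       # blocks until a client connects
    # ... exchange data over intercomm ...
    intercomm.Disconnect()
    MPI.Close_port(port)

    # Client side (a separate application).
    # port = ...  obtained from MPI.Lookup_name() or out of band
    # intercomm = comm.Connect(port)
    # intercomm.Disconnect()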
One-Sided Communications
------------------------
One-sided communications (also called *Remote Memory Access*, *RMA*)
supplement the traditional two-sided, send/receive based MPI
communication model with a one-sided, put/get based
interface. One-sided communication can take advantage of the
capabilities of highly specialized network hardware. Additionally,
this extension lowers latency and software overhead in applications
written using a shared-memory-like paradigm.
The MPI specification revolves around the use of objects called
*windows*; they intuitively specify regions of a process's memory that
have been made available for remote read and write operations. The
published memory blocks can be accessed through three functions for
put (remote write), get (remote read), and accumulate (remote update
or reduction) data items. A much larger number of functions support
different synchronization styles; the semantics of these
synchronization operations are fairly complex.
In *MPI for Python*, one-sided operations are available by using
instances of the :class:`MPI.Win` class. New window objects are
created by calling the :meth:`MPI.Win.Create` method at all processes
within a communicator and specifying a memory buffer. When a window
instance is no longer needed, the :meth:`MPI.Win.Free` method should
be called.
The three one-sided MPI operations for remote write, read and
reduction are available through calling the methods
:meth:`MPI.Win.Put`, :meth:`MPI.Win.Get`, and
:meth:`MPI.Win.Accumulate`, respectively, within a :class:`MPI.Win`
instance. These methods need an integer rank identifying the target
process and an integer offset relative to the base address of the remote
memory block being accessed.
The one-sided operations read, write, and reduction are implicitly
nonblocking, and must be synchronized by using one of two primary
modes. Active target synchronization requires the origin process to
call the :meth:`MPI.Win.Start` and :meth:`MPI.Win.Complete` methods,
while the target process cooperates by calling the
:meth:`MPI.Win.Post` and :meth:`MPI.Win.Wait` methods. There is also a
collective variant provided by the :meth:`MPI.Win.Fence`
method. Passive target synchronization is more lenient; only the
origin process calls the :meth:`MPI.Win.Lock` and
:meth:`MPI.Win.Unlock` methods. Locks are used to protect remote
accesses to the locked remote window and to protect local load/store
accesses to a locked local window.
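A sketch of a remote write with fence (active target) synchronization,
assuming two processes and an arbitrary array size::

    from mpi4py import MPI
    import numpy

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    # Every process exposes a small array through a window object.
    memory = numpy.zeros(10, dtype='d')
    win = MPI.Win.Create(memory, comm=comm)

    win.Fence()                              # open an access epoch (collective)
    if rank == 0:
        data = numpy.arange(10, dtype='d')
        win.Put([data, MPI.DOUBLE], 1)       # write into rank 1's exposed memory
    win.Fence()                              # close the epoch; data is now visible

    win.Free()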
Parallel Input/Output
---------------------
The POSIX standard provides a model of a widely portable file
system. However, the optimization needed for parallel input/output
cannot be achieved with this generic interface. In order to ensure
efficiency and scalability, the underlying parallel input/output
system must provide a high-level interface supporting partitioning of
file data among processes and a collective interface supporting
complete transfers of global data structures between process memories
and files. Additionally, further efficiencies can be gained via
support for asynchronous input/output, strided accesses to data, and
control over physical file layout on storage devices. This scenario
motivated the inclusion in the MPI-2 standard of a custom interface in
order to support more elaborated parallel input/output operations.
The MPI specification for parallel input/output revolves around the
use of objects called *files*. As defined by MPI, files are not just
contiguous byte streams. Instead, they are regarded as ordered
collections of *typed* data items. MPI supports sequential or random
access to any integral set of these items. Furthermore, files are
opened collectively by a group of processes.
The common patterns for accessing a shared file (broadcast, scatter,
gather, reduction) are expressed by using user-defined datatypes.
Compared to the communication patterns of point-to-point and
collective communications, this approach has the advantage of added
flexibility and expressiveness. Data access operations (read and
write) are defined for different kinds of positioning (using explicit
offsets, individual file pointers, and shared file pointers),
coordination (non-collective and collective), and synchronism
(blocking, nonblocking, and split collective with begin/end phases).
In *MPI for Python*, all MPI input/output operations are performed
through instances of the :class:`MPI.File` class. File handles are
obtained by calling the :meth:`MPI.File.Open` method at all processes
within a communicator and providing a file name and the intended
access mode. After use, they must be closed by calling the
:meth:`MPI.File.Close` method. Files can even be deleted by calling
the :meth:`MPI.File.Delete` method.
After creation, files are typically associated with a per-process
*view*. The view defines the current set of data visible and
accessible from an open file as an ordered set of elementary
datatypes. This data layout can be set and queried with the
:meth:`MPI.File.Set_view` and :meth:`MPI.File.Get_view` methods
respectively.
Actual input/output operations are achieved by many methods combining
read and write calls with different behavior regarding positioning,
coordination, and synchronism. Summing up, *MPI for Python* provides
the thirty (30) methods defined in MPI-2 for reading from or writing
to files using explicit offsets or file pointers (individual or
shared), in blocking or nonblocking and collective or noncollective
versions.
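For instance, a collective, offset-based write to a shared binary file
might look like the following sketch (the file name and block size are
arbitrary)::

    from mpi4py import MPI
    import numpy

    comm = MPI.COMM_WORLD
    rank = comm.Get_rank()

    data = numpy.full(10, rank, dtype='i')
    amode = MPI.MODE_WRONLY | MPI.MODE_CREATE
    fh = MPI.File.Open(comm, "datafile.bin", amode)

    # Each process writes its own block at a disjoint byte offset.
    offset = rank * data.nbytes
    fh.Write_at_all(offset, [data, MPI.INT])

    fh.Close()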
Environmental Management
------------------------
Initialization and Exit
^^^^^^^^^^^^^^^^^^^^^^^
Module functions :func:`MPI.Init` or :func:`MPI.Init_thread` and
:func:`MPI.Finalize` provide MPI initialization and finalization
respectively. Module functions :func:`MPI.Is_initialized` and
:func:`MPI.Is_finalized` provide the respective tests for
initialization and finalization.
.. note::
:c:func:`MPI_Init()` or :c:func:`MPI_Init_thread()` is actually
called when you import the :mod:`MPI` module from the :mod:`mpi4py`
package, but only if MPI is not already initialized. In such a case,
calling :func:`MPI.Init` or :func:`MPI.Init_thread` from Python is
expected to generate an MPI error, and in turn an exception will be
raised.
.. note::
:c:func:`MPI_Finalize()` is registered (by using Python C/API
function :c:func:`Py_AtExit()`) for being automatically called when
Python processes exit, but only if :mod:`mpi4py` actually
initialized MPI. Therefore, there is no need to call
:func:`MPI.Finalize` from Python to ensure MPI finalization.
Implementation Information
^^^^^^^^^^^^^^^^^^^^^^^^^^
* The MPI version number can be retrieved from module function
:func:`MPI.Get_version`. It returns a two-integer tuple
``(version,subversion)``.
* The :func:`MPI.Get_processor_name` function can be used to access
the processor name.
* The values of predefined attributes attached to the world
communicator can be obtained by calling the
:meth:`MPI.Comm.Get_attr` method within the :const:`MPI.COMM_WORLD`
instance.
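These queries can be combined in a short sketch::

    from mpi4py import MPI

    version, subversion = MPI.Get_version()
    name = MPI.Get_processor_name()
    tag_ub = MPI.COMM_WORLD.Get_attr(MPI.TAG_UB)   # largest usable tag value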
Timers
^^^^^^
MPI timer functionalities are available through the :func:`MPI.Wtime`
and :func:`MPI.Wtick` functions.
Error Handling
^^^^^^^^^^^^^^
In order to facilitate handle sharing with other Python modules
interfacing MPI-based parallel libraries, the predefined MPI error
handlers :const:`MPI.ERRORS_RETURN` and :const:`MPI.ERRORS_ARE_FATAL`
can be assigned to and retrieved from communicators, windows and files
using methods :meth:`MPI.{Comm|Win|File}.Set_errhandler` and
:meth:`MPI.{Comm|Win|File}.Get_errhandler`.
When the predefined error handler :const:`MPI.ERRORS_RETURN` is set,
errors returned from MPI calls within Python code will raise an
instance of the exception class :exc:`MPI.Exception`, which is a
subclass of the standard Python exception :exc:`RuntimeError`.
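For example, an out-of-range destination rank makes the underlying MPI
call fail and, with :const:`MPI.ERRORS_RETURN` in effect, the error
surfaces as a Python exception (a sketch; the offending call is
deliberately invalid)::

    from mpi4py import MPI

    comm = MPI.COMM_WORLD
    comm.Set_errhandler(MPI.ERRORS_RETURN)   # already the default under mpi4py

    try:
        buf = bytearray(4)
        comm.Send([buf, MPI.BYTE], dest=comm.Get_size())   # invalid rank
    except MPI.Exception as exc:
        print("MPI call failed:", exc)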
.. note::
After import, mpi4py overrides the default MPI rules governing
inheritance of error handlers. The :const:`MPI.ERRORS_RETURN` error
handler is set in the predefined :const:`MPI.COMM_SELF` and
:const:`MPI.COMM_WORLD` communicators, as well as any new
:class:`MPI.Comm`, :class:`MPI.Win`, or :class:`MPI.File` instance
created through mpi4py. If you ever pass such handles to
C/C++/Fortran library code, it is recommended to set the
:const:`MPI.ERRORS_ARE_FATAL` error handler on them to ensure MPI
errors do not pass silently.
.. warning::
Importing with ``from mpi4py.MPI import *`` will cause a name clash
with the standard Python :exc:`Exception` base class.