mpi4py.futures
==============

.. module:: mpi4py.futures
   :synopsis: Execute computations concurrently using MPI processes.

.. versionadded:: 3.0.0

This package provides a high-level interface for asynchronously executing
callables on a pool of worker processes using MPI for inter-process
communication.

The :mod:`mpi4py.futures` package is based on :mod:`concurrent.futures` from
the Python standard library. More precisely, :mod:`mpi4py.futures` provides the
:class:`MPIPoolExecutor` class as a concrete implementation of the abstract
class :class:`~concurrent.futures.Executor`.  The
:meth:`~concurrent.futures.Executor.submit` interface schedules a callable to
be executed asynchronously and returns a :class:`~concurrent.futures.Future`
object representing the execution of the callable.
:class:`~concurrent.futures.Future` instances can be queried for the call
result or exception. Sets of :class:`~concurrent.futures.Future` instances can
be passed to the :func:`~concurrent.futures.wait` and
:func:`~concurrent.futures.as_completed` functions.
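
For instance, a minimal sketch of this workflow (run as described in the
sections below) might look like::

   from concurrent.futures import as_completed
   from mpi4py.futures import MPIPoolExecutor

   if __name__ == '__main__':
       with MPIPoolExecutor(max_workers=2) as executor:
           # Schedule calls asynchronously; each returns a Future.
           futures = [executor.submit(pow, 2, n) for n in range(8)]
           # Query futures as they complete, in completion order.
           for future in as_completed(futures):
               print(future.result())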

.. seealso::

   Module :mod:`concurrent.futures`
      Documentation of the :mod:`concurrent.futures` standard module.


MPIPoolExecutor
---------------

The :class:`MPIPoolExecutor` class uses a pool of MPI processes to execute
calls asynchronously. By performing computations in separate processes, it
allows it to side-step the :term:`global interpreter lock`, but also means that
only picklable objects can be executed and returned. The :mod:`__main__` module
must be importable by worker processes, thus :class:`MPIPoolExecutor` instances
may not work in the interactive interpreter.

:class:`MPIPoolExecutor` takes advantage of the dynamic process management
features introduced in the MPI-2 standard. In particular, the
`MPI.Intracomm.Spawn` method of `MPI.COMM_SELF` is used in the master (or
parent) process to spawn new worker (or child) processes running a Python
interpreter. The master process uses a separate thread (one for each
:class:`MPIPoolExecutor` instance) to communicate back and forth with the
workers.  The worker processes serve the execution of tasks in the main (and
only) thread until they are signaled for completion.

.. note::

   The worker processes must import the main script in order to *unpickle* any
   callable defined in the :mod:`__main__` module and submitted from the master
   process. Furthermore, the callables may need access to other global
   variables. At the worker processes, :mod:`mpi4py.futures` executes the main
   script code (using the :mod:`runpy` module) under the :mod:`__worker__`
   namespace to define the :mod:`__main__` module. The :mod:`__main__` and
   :mod:`__worker__` modules are added to :data:`sys.modules` (both at the
   master and worker processes) to ensure proper *pickling* and *unpickling*.

.. warning::

   During the initial import phase at the workers, the main script cannot
   create and use new :class:`MPIPoolExecutor` instances. Otherwise, each
   worker would attempt to spawn a new pool of workers, leading to infinite
   recursion. :mod:`mpi4py.futures` detects such recursive attempts to spawn
   new workers and aborts the MPI execution environment. As the main script
   code is run under the :mod:`__worker__` namespace, the easiest way to avoid
   spawn recursion is using the idiom :code:`if __name__ == '__main__': ...` in
   the main script.
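
As a minimal sketch of this idiom (the function name is illustrative)::

   from mpi4py.futures import MPIPoolExecutor

   def work(x):
       return x * x

   if __name__ == '__main__':
       # Workers import this module (under the __worker__ namespace) to
       # unpickle ``work``; only the master reaches this guarded block,
       # so no recursive spawning can occur.
       with MPIPoolExecutor(max_workers=2) as executor:
           print(list(executor.map(work, range(4))))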

.. class:: MPIPoolExecutor(max_workers=None, \
                           initializer=None, initargs=(), **kwargs)

   An :class:`~concurrent.futures.Executor` subclass that executes calls
   asynchronously using a pool of at most *max_workers* processes.  If
   *max_workers* is `None` or not given, its value is determined from the
   :envvar:`MPI4PY_FUTURES_MAX_WORKERS` environment variable if set, or from
   the MPI universe size if set; otherwise, a single worker process is
   spawned.  If *max_workers* is less than or equal to ``0``, a
   :exc:`ValueError` is raised.

   *initializer* is an optional callable that is called at the start of each
   worker process before executing any tasks; *initargs* is a tuple of
   arguments passed to the initializer. If *initializer* raises an exception,
   all pending tasks and any attempt to submit new tasks to the pool will raise
   a :exc:`~concurrent.futures.BrokenExecutor` exception.

   Other parameters:

   * *python_exe*: Path to the Python interpreter executable used to spawn
     worker processes, otherwise :data:`sys.executable` is used.

   * *python_args*: :class:`list` or iterable with additional command line
     flags to pass to the Python executable. Command line flags determined
     from inspection of :data:`sys.flags`, :data:`sys.warnoptions`, and
     :data:`sys._xoptions` are passed unconditionally.

   * *mpi_info*: :class:`dict` or iterable yielding ``(key, value)`` pairs.
     These ``(key, value)`` pairs are passed (through an `MPI.Info` object) to
     the `MPI.Intracomm.Spawn` call used to spawn worker processes. This
     mechanism allows telling the MPI runtime system where and how to start the
     processes. Check the documentation of the backend MPI implementation about
     the set of keys it interprets and the corresponding format for values.

   * *globals*: :class:`dict` or iterable yielding ``(name, value)`` pairs to
     initialize the main module namespace in worker processes.

   * *main*: If set to `False`, do not import the :mod:`__main__` module in
     worker processes. Setting *main* to `False` prevents worker processes
     from accessing definitions in the parent :mod:`__main__` namespace.

   * *path*: :class:`list` or iterable with paths to append to :data:`sys.path`
     in worker processes to extend the :ref:`module search path
     <python:tut-searchpath>`.

   * *wdir*: Path to set the current working directory in worker processes
     using :func:`os.chdir()`. The initial working directory is set by the MPI
     implementation. Quality MPI implementations should honor a ``wdir`` info
     key passed through *mpi_info*, although such a feature is not mandatory.

   * *env*: :class:`dict` or iterable yielding ``(name, value)`` pairs with
     environment variables to update :data:`os.environ` in worker processes.
     The initial environment is set by the MPI implementation. MPI
     implementations may allow setting the initial environment through
     *mpi_info*; however, such a feature is neither required nor recommended
     by the MPI standard.

   * *use_pkl5*: If set to `True`, use :mod:`pickle` with out-of-band buffers
     for interprocess communication. If *use_pkl5* is set to `None` or not
     given, its value is determined from the :envvar:`MPI4PY_FUTURES_USE_PKL5`
     environment variable. Using :mod:`pickle` with out-of-band buffers may
     benefit applications dealing with large buffer-like objects like NumPy
     arrays. See :mod:`mpi4py.util.pkl5` for additional information.

   * *backoff*: :class:`float` value specifying the maximum number of seconds a
     worker thread or process suspends execution with :func:`time.sleep()`
     while idle-waiting. If not set, its value is determined from the
     :envvar:`MPI4PY_FUTURES_BACKOFF` environment variable if set, otherwise
     the default value of 0.001 seconds is used. Lower values will reduce
     latency and increase execution throughput for very short-lived tasks,
     albeit at the expense of spinning CPU cores and increased energy
     consumption.
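
   For instance, a hedged sketch combining some of these parameters (all
   values below are illustrative placeholders, and valid *mpi_info* keys
   depend on the backend MPI implementation)::

      executor = MPIPoolExecutor(
          max_workers=4,
          python_exe='/usr/bin/python3',     # placeholder interpreter path
          mpi_info={'host': 'localhost'},    # implementation-dependent keys
          env={'OMP_NUM_THREADS': '1'},      # extra worker environment
          wdir='/tmp',                       # worker working directory
      )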

   .. method:: submit(fn, /, *args, **kwargs)

      Schedule the callable *fn* to be executed as ``fn(*args, **kwargs)``
      and return a :class:`~concurrent.futures.Future` object representing
      the execution of the callable. ::

         executor = MPIPoolExecutor(max_workers=1)
         future = executor.submit(pow, 321, 1234)
         print(future.result())

   .. method:: map(fn, *iterables, \
                   timeout=None, chunksize=1, buffersize=None, **kwargs)

      Similar to :func:`map(fn, *iterables) <python:map>` except:

      * The *iterables* are consumed immediately rather than lazily, unless
        *buffersize* is specified to limit the number of submitted tasks whose
        results have not yet been yielded. If the task buffer is full, the
        caller blocks and iteration over the *iterables* pauses until a result
        is yielded from the buffer.

      * *fn* is executed asynchronously and several calls to
        *fn* may be made concurrently, out-of-order, in separate processes.

      The returned iterator raises a :exc:`~concurrent.futures.TimeoutError` if
      :meth:`~iterator.__next__` is called and the result isn't available after
      *timeout* seconds from the original call to :meth:`~MPIPoolExecutor.map`.
      *timeout* can be an int or a float.  If *timeout* is not specified or
      `None`, there is no limit to the wait time.

      If *fn* raises an exception, then that exception will be raised when
      its value is retrieved from the iterator.

      This method chops *iterables* into a number of chunks which it submits to
      the pool as separate tasks. The (approximate) size of these chunks can be
      specified by setting *chunksize* to a positive integer. For very long
      iterables, using a large value for *chunksize* can significantly improve
      performance compared to the default size of one.

      By default, the returned iterator yields results in-order, waiting for
      successive tasks to complete. This behavior can be changed by passing
      the keyword argument *unordered* as `True`; the result iterator will
      then yield a result as soon as any of the tasks completes. ::

         executor = MPIPoolExecutor(max_workers=3)
         for result in executor.map(pow, [2] * 32, range(32)):
             print(result)

      .. versionchanged:: 4.1.0
         Added the *buffersize* parameter.
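
      For instance, a sketch combining out-of-order results with a bounded
      number of in-flight tasks (both keyword-only arguments)::

         executor = MPIPoolExecutor(max_workers=3)
         results = executor.map(pow, [2] * 1000, range(1000),
                                unordered=True, buffersize=32)
         for result in results:
             print(result)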

   .. method:: starmap(fn, iterable, \
                       timeout=None, chunksize=1, buffersize=None, **kwargs)

      Similar to :func:`itertools.starmap(fn, iterable)
      <itertools.starmap>`. Used instead of :meth:`~MPIPoolExecutor.map` when
      argument parameters are already grouped in tuples from a single iterable
      (the data has been "pre-zipped"). :func:`map(fn, *iterable) <map>` is
      equivalent to :func:`starmap(fn, zip(*iterable)) <starmap>`. ::

         executor = MPIPoolExecutor(max_workers=3)
         iterable = ((2, n) for n in range(32))
         for result in executor.starmap(pow, iterable):
             print(result)

      .. versionchanged:: 4.1.0
         Added the *buffersize* parameter.

   .. method:: shutdown(wait=True, cancel_futures=False)

      Signal the executor that it should free any resources that it is using
      when the currently pending futures are done executing.  Calls to
      :meth:`~MPIPoolExecutor.submit` and :meth:`~MPIPoolExecutor.map` made
      after :meth:`~MPIPoolExecutor.shutdown` will raise :exc:`RuntimeError`.

      If *wait* is `True` then this method will not return until all the
      pending futures are done executing and the resources associated with the
      executor have been freed.  If *wait* is `False` then this method will
      return immediately and the resources associated with the executor will be
      freed when all pending futures are done executing.  Regardless of the
      value of *wait*, the entire Python program will not exit until all
      pending futures are done executing.

      If *cancel_futures* is `True`, this method will cancel all pending
      futures that the executor has not started running. Any futures that
      are completed or running won't be cancelled, regardless of the value
      of *cancel_futures*.

      You can avoid having to call this method explicitly if you use the
      :keyword:`with` statement, which will shutdown the executor instance
      (waiting as if :meth:`~MPIPoolExecutor.shutdown` were called with *wait*
      set to `True`). ::

         import time
         with MPIPoolExecutor(max_workers=1) as executor:
             future = executor.submit(time.sleep, 2)
         assert future.done()

   .. method:: bootup(wait=True)

      Signal the executor that it should allocate eagerly any required
      resources (in particular, MPI worker processes). If *wait* is `True`,
      then :meth:`~MPIPoolExecutor.bootup` will not return until the executor
      resources are ready to process submissions.  Resources are automatically
      allocated in the first call to :meth:`~MPIPoolExecutor.submit`, thus
      calling :meth:`~MPIPoolExecutor.bootup` explicitly is seldom needed.
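
      For instance, to pay the spawning cost upfront rather than at the
      first submission::

         executor = MPIPoolExecutor(max_workers=4)
         executor.bootup(wait=True)  # block until workers are ready
         future = executor.submit(pow, 2, 10)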

   .. attribute:: num_workers

      Number of worker processes in the pool.


.. envvar:: MPI4PY_FUTURES_MAX_WORKERS

   If the *max_workers* parameter to :class:`MPIPoolExecutor` is `None` or not
   given, the :envvar:`MPI4PY_FUTURES_MAX_WORKERS` environment variable
   provides a fallback value for the maximum number of MPI worker processes to
   spawn.

   .. versionadded:: 3.1.0

.. envvar:: MPI4PY_FUTURES_USE_PKL5

   If the *use_pkl5* keyword argument to :class:`MPIPoolExecutor` is `None` or
   not given, the :envvar:`MPI4PY_FUTURES_USE_PKL5` environment variable
   provides a fallback value for whether the executor should use :mod:`pickle`
   with out-of-band buffers for interprocess communication. Accepted values are
   ``0`` and ``1`` (interpreted as `False` and `True`, respectively), and
   strings specifying a `YAML boolean`_ value (case-insensitive). Using
   :mod:`pickle` with out-of-band buffers may benefit applications dealing
   with large buffer-like objects like NumPy arrays. See
   :mod:`mpi4py.util.pkl5` for additional information.
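
   For example, to enable the feature from the shell, assuming a hypothetical
   :file:`script.py`::

      $ env MPI4PY_FUTURES_USE_PKL5=yes mpiexec -n 1 python script.py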

   .. versionadded:: 4.0.0

   .. _YAML boolean: https://yaml.org/type/bool.html

.. envvar:: MPI4PY_FUTURES_BACKOFF

   If the *backoff* keyword argument to :class:`MPIPoolExecutor` is not given,
   the :envvar:`MPI4PY_FUTURES_BACKOFF` environment variable can be set to a
   :class:`float` value specifying the maximum number of seconds a worker
   thread or process suspends execution with :func:`time.sleep()` while
   idle-waiting. If not set, the default backoff value is 0.001 seconds. Lower
   values will reduce latency and increase execution throughput for very
   short-lived tasks, albeit at the expense of spinning CPU cores and increased
   energy consumption.
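
   For example, to trade CPU spinning for lower latency in a run of a
   hypothetical :file:`script.py`::

      $ env MPI4PY_FUTURES_BACKOFF=0 mpiexec -n 1 python script.py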

   .. versionadded:: 4.0.0

.. note::

   As the master process uses a separate thread to perform MPI communication
   with the workers, the backend MPI implementation should provide support for
   `MPI.THREAD_MULTIPLE`. However, some popular MPI implementations do not
   yet support concurrent MPI calls from multiple threads. Additionally, users
   may decide to initialize MPI with a lower level of thread support. If the
   level of thread support in the backend MPI is less than
   `MPI.THREAD_MULTIPLE`, :mod:`mpi4py.futures` will use a global lock to
   serialize MPI calls. If the level of thread support is less than
   `MPI.THREAD_SERIALIZED`, :mod:`mpi4py.futures` will emit a
   :exc:`RuntimeWarning`.
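
A quick way to check the level of thread support actually provided by the
backend MPI is to query it at runtime, as in this minimal sketch::

   from mpi4py import MPI

   provided = MPI.Query_thread()
   if provided < MPI.THREAD_MULTIPLE:
       print('MPI calls will be serialized with a global lock')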

.. warning::

   If the level of thread support in the backend MPI is less than
   `MPI.THREAD_SERIALIZED` (i.e., it is either `MPI.THREAD_SINGLE` or
   `MPI.THREAD_FUNNELED`), in theory :mod:`mpi4py.futures` cannot be
   used. Rather than raising an exception, :mod:`mpi4py.futures` emits a
   warning and takes a "cross-fingers" attitude to continue execution in the
   hope that serializing MPI calls with a global lock will actually work.


MPICommExecutor
---------------

Legacy MPI-1 implementations (as well as some vendor MPI-2 implementations) do
not support the dynamic process management features introduced in the MPI-2
standard. Additionally, job schedulers and batch systems in supercomputing
facilities may pose additional complications to applications using the
:c:func:`MPI_Comm_spawn` routine.

With these issues in mind, :mod:`mpi4py.futures` supports an additional, more
traditional, SPMD-like usage pattern requiring MPI-1 calls only. Python
applications are started the usual way, e.g., using the :program:`mpiexec`
command. Python code should make a collective call to the
:class:`MPICommExecutor` context manager to partition the set of MPI processes
within an MPI communicator into one master process and many worker
processes. The master process gets access to an :class:`MPIPoolExecutor`
instance to submit tasks. Meanwhile, the worker processes follow a different
execution path and team up to execute the tasks submitted from the master.

Besides alleviating the lack of dynamic process management features in legacy
MPI-1 or partial MPI-2 implementations, the :class:`MPICommExecutor` context
manager may be useful in classic MPI-based Python applications willing to take
advantage of the simple, task-based, master/worker approach available in the
:mod:`mpi4py.futures` package.

.. class:: MPICommExecutor(comm=None, root=0)

   Context manager for :class:`MPIPoolExecutor`. This context manager splits
   an MPI (intra)communicator *comm* (defaulting to `MPI.COMM_WORLD` if not
   provided or `None`) into two disjoint sets: a single master process (with
   rank *root* in *comm*) and the remaining worker processes. These sets are
   then connected
   through an intercommunicator.  The target of the :keyword:`with` statement
   is assigned either an :class:`MPIPoolExecutor` instance (at the master) or
   `None` (at the workers). ::

      from mpi4py import MPI
      from mpi4py.futures import MPICommExecutor

      with MPICommExecutor(MPI.COMM_WORLD, root=0) as executor:
          if executor is not None:
             future = executor.submit(abs, -42)
             assert future.result() == 42
             answer = set(executor.map(abs, [-42, 42]))
             assert answer == {42}

.. warning::

   If :class:`MPICommExecutor` is passed a communicator of size one (e.g.,
   `MPI.COMM_SELF`), then the executor instance assigned to the target of the
   :keyword:`with` statement will execute all submitted tasks in a single
   worker thread, thus ensuring that task execution still progresses
   asynchronously. However, the :term:`GIL` will prevent the main and worker
   threads from running concurrently on multicore processors. Moreover, the
   thread context switching may noticeably harm the performance of CPU-bound
   tasks. For I/O-bound tasks, the :term:`GIL` is not usually an issue;
   however, as a single worker thread is used, it progresses one task at a
   time. We advise against using :class:`MPICommExecutor` with communicators
   of size one and suggest refactoring your code to use a
   :class:`~concurrent.futures.ThreadPoolExecutor` instead.


Command line
------------

Recalling the issues related to the lack of support for dynamic process
management features in MPI implementations, :mod:`mpi4py.futures` supports an
alternative usage pattern where Python code (either from scripts, modules, or
zip files) is run under command line control of the :mod:`mpi4py.futures`
package by passing :samp:`-m mpi4py.futures` to the :program:`python`
executable.  The ``mpi4py.futures`` invocation should be passed a *pyfile* path
to a script (or a zipfile/directory containing a :file:`__main__.py` file).
Additionally, ``mpi4py.futures`` accepts :samp:`-m {mod}` to execute a module
named *mod*, :samp:`-c {cmd}` to execute a command string *cmd*, or even
:samp:`-` to read commands from standard input (:data:`sys.stdin`).
Summarizing, :samp:`mpi4py.futures` can be invoked in the following ways:

* :samp:`$ mpiexec -n {numprocs} python -m mpi4py.futures {pyfile} [arg] ...`
* :samp:`$ mpiexec -n {numprocs} python -m mpi4py.futures -m {mod} [arg] ...`
* :samp:`$ mpiexec -n {numprocs} python -m mpi4py.futures -c {cmd} [arg] ...`
* :samp:`$ mpiexec -n {numprocs} python -m mpi4py.futures - [arg] ...`

Before starting the main script execution, :mod:`mpi4py.futures` splits
`MPI.COMM_WORLD` into one master (the process with rank 0 in `MPI.COMM_WORLD`)
and *numprocs - 1* workers and connects them through an MPI intercommunicator.
Afterwards, the master process proceeds with the execution of the user script
code, which eventually creates :class:`MPIPoolExecutor` instances to submit
tasks. Meanwhile, the worker processes follow a different execution path to
serve the master.  Upon successful termination of the main script at the master,
the entire MPI execution environment exits gracefully. In case of any unhandled
exception in the main script, the master process calls
``MPI.COMM_WORLD.Abort(1)`` to prevent deadlocks and force termination of the
entire MPI execution environment.

.. warning::

   Running scripts under command line control of :mod:`mpi4py.futures` is quite
   similar to executing a single-process application that spawns additional
   workers as required. However, there is a very important difference users
   should be aware of. All :class:`~MPIPoolExecutor` instances created at the
   master will share the pool of workers. Tasks submitted at the master from
   many different executors will be scheduled for execution in random order as
   soon as a worker is idle. Any executor can easily starve all the workers
   (e.g., by calling :meth:`MPIPoolExecutor.map` with long iterables). If that
   ever happens, submissions from other executors will not be serviced until
   free workers are available.

.. seealso::

   :ref:`python:using-on-cmdline`
      Documentation on Python command line interface.


Parallel tasks
--------------

The :mod:`mpi4py.futures` package favors an embarrassingly parallel execution
model involving a series of sequential tasks independent of each other and
executed asynchronously. Albeit unnatural, :class:`MPIPoolExecutor` can still
be used for handling workloads involving parallel tasks, where worker
processes communicate and coordinate with each other via MPI.

.. function:: get_comm_workers()

   Access an intracommunicator grouping MPI worker processes.

Executing parallel tasks with :mod:`mpi4py.futures` requires following some
rules (cf. the highlighted lines in the :ref:`cpi-py` example and the
condensed sketch after this list):

* Use :attr:`MPIPoolExecutor.num_workers` to determine the number of worker
  processes in the executor and **submit exactly one callable per worker
  process** using the :meth:`MPIPoolExecutor.submit` method.

* The submitted callable must use :func:`get_comm_workers` to access an
  intracommunicator grouping MPI worker processes. Afterwards, it is highly
  recommended to call the :meth:`~mpi4py.MPI.Comm.Barrier` method on the
  communicator. The barrier synchronization ensures that every worker process
  is executing the submitted callable exactly once. The parallel task can then
  safely perform any kind of point-to-point or collective operation using the
  returned communicator.

* The :class:`~concurrent.futures.Future` instances returned by
  :meth:`MPIPoolExecutor.submit` should be collected in a sequence.
  Use :func:`~concurrent.futures.wait` with the sequence of
  :class:`~concurrent.futures.Future` instances to ensure logical completion of
  the parallel task.
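
The following condensed sketch puts these rules together (the reduction is
illustrative; the full :ref:`cpi-py` example below elaborates on the same
pattern)::

   from mpi4py.futures import MPIPoolExecutor, get_comm_workers, wait

   def task():
       comm = get_comm_workers()  # intracommunicator of worker processes
       comm.Barrier()             # every worker runs this callable once
       # Any point-to-point or collective operation is safe from here on.
       return comm.allreduce(comm.Get_rank())

   if __name__ == '__main__':
       with MPIPoolExecutor() as executor:
           fs = [executor.submit(task)
                 for _ in range(executor.num_workers)]
           wait(fs)  # logical completion of the parallel task
           print(fs[0].result())  # sum of worker ranks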


Utilities
---------

The :mod:`mpi4py.futures` package provides additional utilities for handling
:class:`~concurrent.futures.Future` instances.

.. autofunction:: mpi4py.futures.collect

.. autofunction:: mpi4py.futures.compose
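
As a hedged sketch (assuming, per the API reference above, that
:func:`collect` gathers a sequence of futures into a single future producing
a list of results, and :func:`compose` attaches a result handler to a
future)::

   from mpi4py.futures import MPIPoolExecutor, collect, compose

   if __name__ == '__main__':
       with MPIPoolExecutor(max_workers=2) as executor:
           fs = [executor.submit(pow, 2, n) for n in range(4)]
           # resulthook: keyword assumed from the API reference
           total = compose(collect(fs), resulthook=sum)
           print(total.result())  # 1 + 2 + 4 + 8 == 15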


Examples
--------

Computing the Julia set
+++++++++++++++++++++++

The following :ref:`julia-py` script computes the `Julia set`_ and dumps an
image to disk in binary `PGM`_ format. The code starts by importing
:class:`MPIPoolExecutor` from the :mod:`mpi4py.futures` package. Next, some
global constants and functions implement the computation of the Julia set. The
computations are protected with the standard :code:`if __name__ == '__main__':
...` idiom.  The image is computed by whole scanlines, submitting all these
tasks at once using the :meth:`~MPIPoolExecutor.map` method. The result
iterator yields scanlines in-order as the tasks complete. Finally, each
scanline is dumped to disk.

.. _`Julia set`: https://en.wikipedia.org/wiki/Julia_set
.. _`PGM`: https://netpbm.sourceforge.net/doc/pgm.html

.. code-block:: python
   :name: julia-py
   :caption: :file:`julia.py`
   :emphasize-lines: 1,26,28,29
   :linenos:

   from mpi4py.futures import MPIPoolExecutor

   x0, x1, w = -2.0, +2.0, 640*2
   y0, y1, h = -1.5, +1.5, 480*2
   dx = (x1 - x0) / w
   dy = (y1 - y0) / h

   c = complex(0, 0.65)

   def julia(x, y):
       z = complex(x, y)
       n = 255
       while abs(z) < 3 and n > 1:
           z = z**2 + c
           n -= 1
       return n

   def julia_line(k):
       line = bytearray(w)
       y = y1 - k * dy
       for j in range(w):
           x = x0 + j * dx
           line[j] = julia(x, y)
       return line

   if __name__ == '__main__':

       with MPIPoolExecutor() as executor:
           image = executor.map(julia_line, range(h))
           with open('julia.pgm', 'wb') as f:
               f.write(b'P5 %d %d %d\n' % (w, h, 255))
               for line in image:
                   f.write(line)

The recommended way to execute the script is by using the :program:`mpiexec`
command specifying one MPI process (master) and (optional but recommended) the
desired MPI universe size, which determines the number of additional
dynamically spawned processes (workers). The MPI universe size is provided
either by a batch system or set by the user via command-line arguments to
:program:`mpiexec` or environment variables. Below we provide examples for
MPICH and Open MPI implementations [#]_. In all of these examples, the
:program:`mpiexec` command launches a single master process running the Python
interpreter and executing the main script. When required, :mod:`mpi4py.futures`
spawns the pool of 16 worker processes. The master submits tasks to the workers
and waits for the results. The workers receive incoming tasks, execute them,
and send back the results to the master.

.. highlight:: console

When using the MPICH implementation or its derivatives based on the Hydra process
manager, users can set the MPI universe size via the ``-usize`` argument to
:program:`mpiexec`::

  $ mpiexec -n 1 -usize 17 python julia.py

or, alternatively, by setting the :envvar:`MPIEXEC_UNIVERSE_SIZE` environment
variable::

  $ env MPIEXEC_UNIVERSE_SIZE=17 mpiexec -n 1 python julia.py

In the Open MPI implementation, the MPI universe size can be set via the
``-host`` argument to :program:`mpiexec`::

  $ mpiexec -n 1 -host localhost:17 python julia.py

Another way to specify the number of workers is to use the
:mod:`mpi4py.futures`-specific environment variable
:envvar:`MPI4PY_FUTURES_MAX_WORKERS`::

  $ env MPI4PY_FUTURES_MAX_WORKERS=16 mpiexec -n 1 python julia.py

Note that in this case, the MPI universe size is ignored.

Alternatively, users may decide to execute the script in a more traditional
way, that is, all the MPI processes are started at once. The user script is run
under command-line control of :mod:`mpi4py.futures` passing the :ref:`-m
<python:using-on-cmdline>` flag to the :program:`python` executable::

  $ mpiexec -n 17 python -m mpi4py.futures julia.py

As explained previously, the 17 processes are partitioned in one master and 16
workers. The master process executes the main script while the workers execute
the tasks submitted by the master.

.. [#] When using an MPI implementation other than MPICH or Open MPI, please
   check the documentation of the implementation and/or batch
   system for the ways to specify the desired MPI universe size.


Computing Pi (parallel task)
++++++++++++++++++++++++++++

The number :math:`\pi` can be approximated via numerical integration with the
simple midpoint rule, that is:

.. math::

   \pi = \int_{0}^{1} \frac{4}{1+x^2} \,dx \approx
   \frac{1}{n} \sum_{i=1}^{n}
   \frac{4}{1 + \left[\frac{1}{n} \left(i-\frac{1}{2}\right) \right]^2} .

The following :ref:`cpi-py` script computes such approximations using
:mod:`mpi4py.futures` with a parallel task involving a collective reduction
operation. Highlighted lines correspond to the rules discussed in `Parallel
tasks`_.

.. code-block:: python
   :name: cpi-py
   :caption: :file:`cpi.py`
   :emphasize-lines: 9-10,21,35-36,39
   :linenos:

   import math
   import sys
   from mpi4py.futures import MPIPoolExecutor, wait
   from mpi4py.futures import get_comm_workers


   def compute_pi(n):
       # Access intracommunicator and synchronize
       comm = get_comm_workers()
       comm.Barrier()

       rank = comm.Get_rank()
       size = comm.Get_size()

       # Local computation
       h = 1.0 / n
       s = 0.0
       for i in range(rank + 1, n + 1, size):
           x = h * (i - 0.5)
           s += 4.0 / (1.0 + x**2)
       pi_partial = s * h

       # Parallel reduce-to-all
       pi = comm.allreduce(pi_partial)

       # All workers return the same value
       return pi


   if __name__ == '__main__':
       n = int(sys.argv[1]) if len(sys.argv) > 1 else 256

       with MPIPoolExecutor() as executor:
           # Submit exactly one callable per worker
           P = executor.num_workers
           fs = [executor.submit(compute_pi, n) for _ in range(P)]

           # Wait for all workers to finish
           wait(fs)

           # Get result from the first future object.
           # In this particular example, due to using reduce-to-all,
           # all the other future objects hold the same result value.
           pi = fs[0].result()
           print(
               f"pi: {pi:.16f}, error: {abs(pi - math.pi):.3e}",
               f"({n:d} intervals, {P:d} workers)",
           )

.. highlight:: console

To run in modern MPI-2 mode::

  $ env MPI4PY_FUTURES_MAX_WORKERS=4 mpiexec -n 1 python cpi.py 128
  pi: 3.1415977398528137, error: 5.086e-06 (128 intervals, 4 workers)

  $ env MPI4PY_FUTURES_MAX_WORKERS=8 mpiexec -n 1 python cpi.py 512
  pi: 3.1415929714812316, error: 3.179e-07 (512 intervals, 8 workers)

To run in legacy MPI-1 mode::

  $ mpiexec -n 5 python -m mpi4py.futures cpi.py 128
  pi: 3.1415977398528137, error: 5.086e-06 (128 intervals, 4 workers)

  $ mpiexec -n 9 python -m mpi4py.futures cpi.py 512
  pi: 3.1415929714812316, error: 3.179e-07 (512 intervals, 8 workers)


Citation
--------

If :mod:`mpi4py.futures` has been significant to a project that leads to an
academic publication, please acknowledge our work by citing the following
article [mpi4py-futures]_:

.. [mpi4py-futures] M. Rogowski, S. Aseeri, D. Keyes, and L. Dalcin,
  *mpi4py.futures: MPI-Based Asynchronous Task Execution for Python*,
  IEEE Transactions on Parallel and Distributed Systems, 34(2):611-622, 2023.
  https://doi.org/10.1109/TPDS.2022.3225481


.. Local variables:
.. fill-column: 79
.. End: