Launching only on the local node
================================

It is common to develop MPI applications on a single workstation or
laptop, and then move to a larger parallel / HPC environment once the
MPI application is ready.

Open MPI supports running multi-process MPI jobs on a single machine.
In such cases, you can omit a hostfile and any remote hosts, and
simply specify the number of MPI processes to launch.  For example:

.. code-block:: sh

   shell$ mpirun -n 6 mpi-hello-world
   Hello world, I am 0 of 6 (running on my-laptop)
   Hello world, I am 1 of 6 (running on my-laptop)
   ...
   Hello world, I am 5 of 6 (running on my-laptop)
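
The ``mpi-hello-world`` program used above is not part of Open MPI
itself; a minimal sketch of such a program (the program name and exact
output format here are assumptions for illustration) might look like:

.. code-block:: c

   #include <mpi.h>
   #include <stdio.h>

   int main(int argc, char *argv[])
   {
       int rank, size, name_len;
       char name[MPI_MAX_PROCESSOR_NAME];

       /* Initialize the MPI runtime */
       MPI_Init(&argc, &argv);

       /* Find this process's rank, the total process count,
          and the name of the host it is running on */
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &size);
       MPI_Get_processor_name(name, &name_len);

       printf("Hello world, I am %d of %d (running on %s)\n",
              rank, size, name);

       MPI_Finalize();
       return 0;
   }

Such a program would typically be compiled with Open MPI's ``mpicc``
wrapper compiler, e.g. ``mpicc mpi-hello-world.c -o mpi-hello-world``.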

If you do not specify the ``-n`` option, ``mpirun`` will default to
launching as many MPI processes as there are processor cores (not
hyperthreads) on the machine.
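
For example, on a machine with 6 processor cores (the core count here
is an assumption for illustration), the first two invocations below
are equivalent; ``mpirun``'s ``--use-hwthread-cpus`` option changes
the default to count hardware threads instead of cores:

.. code-block:: sh

   # Launches one process per processor core (6 on a 6-core machine):
   shell$ mpirun mpi-hello-world

   # Equivalent to explicitly requesting that many processes:
   shell$ mpirun -n 6 mpi-hello-world

   # Default to one process per hardware thread instead of per core:
   shell$ mpirun --use-hwthread-cpus mpi-hello-world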

MPI communication
-----------------

When running on a single machine, Open MPI will most likely use the
``ob1`` PML and the following BTLs for MPI communication between
peers:

* ``self``: used for sending and receiving loopback MPI messages
  |mdash| where the source and destination MPI process are the same.
* ``sm``: used for sending and receiving MPI messages where the source
  and destination MPI processes can share memory (e.g., via SYSV or
  POSIX shared memory mechanisms).

  .. note:: For more information about using shared memory MPI
            communication, see the :doc:`Shared Memory
            </tuning-apps/networking/shared-memory>` page.
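
Open MPI normally selects these components automatically, but the
selection can also be forced explicitly via MCA parameters on the
``mpirun`` command line (shown here only as an illustration;
automatic selection is usually preferable):

.. code-block:: sh

   # Force the ob1 PML and restrict BTLs to loopback and shared memory:
   shell$ mpirun --mca pml ob1 --mca btl self,sm -n 4 mpi-hello-world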