File: quickstart.rst

.. _label-quickstart-building-open-mpi:

Quick start: Installing Open MPI
================================

Although this section skips many details, it offers examples that will
probably work in many environments.

.. caution:: Note that this section is a "Quick start" |mdash| it does
   not attempt to be comprehensive or describe how to build Open MPI
   in all supported environments.  The examples below may therefore
   not work exactly as shown in your environment.

   Please consult the other sections in this chapter for more details,
   if necessary.

.. important:: If you have checked out a *developer's copy* of Open MPI
   (i.e., you cloned from Git), you really need to read :doc:`the
   Developer's Guide </developers/index>` before attempting to build Open
   MPI. Really.

Binary packages
---------------

Although the Open MPI community itself does not distribute binary
packages for Open MPI, many downstream packagers do.

For example, many Linux distributions include Open MPI packages
|mdash| even if they are not installed by default.  You should consult
the documentation and/or package list for your Linux distribution to
see if you can use its built-in package system to install Open MPI.
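As a rough illustration only (exact package names vary between distributions
and releases, so check your distribution's package list), installs often look
something like this:

.. code-block:: sh

   # Debian / Ubuntu (package names may differ on your release)
   shell$ sudo apt install openmpi-bin libopenmpi-dev

   # Fedora / RHEL-family (package names may differ on your release)
   shell$ sudo dnf install openmpi openmpi-devel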

The macOS package managers `Homebrew <https://brew.sh/>`_ and
`MacPorts <https://macports.org/>`_ both offer binary Open MPI
packages:

.. code-block:: sh

   # For Homebrew
   shell$ brew install openmpi

   # For MacPorts
   shell$ port install openmpi

.. important:: Binary packages may or may not include support for
               features that are required on your platform (e.g., a
               specific networking stack).  Or the binary packages
               available to you may be older / out of date.  As such,
               it may be better to build and install Open MPI from a
               source tarball available from `the main Open MPI web
               site <https://www.open-mpi.org/>`_.
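One way to inspect what an installed Open MPI package provides is the
``ompi_info`` command, which reports the version, build configuration, and
the components that were compiled in.  A small sketch (output varies by
package):

.. code-block:: sh

   # Show the Open MPI version and build configuration summary
   shell$ ompi_info | head

   # List the compiled-in components (e.g., available network transports)
   shell$ ompi_info | grep "MCA btl"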

Building from source
--------------------

Download the Open MPI source code from `the main Open MPI web site
<https://www.open-mpi.org/>`_.

.. caution:: Do **not** download an Open MPI source code tarball from
             GitHub.com.  The tarballs automatically generated by
             GitHub.com are incomplete and will not build properly.
             They are **not** official Open MPI releases.

Open MPI builds with a traditional ``configure`` script paired with
``make``.  A typical install follows this pattern:

.. code-block:: sh

   shell$ tar xf openmpi-<version>.tar.bz2
   shell$ cd openmpi-<version>
   shell$ ./configure --prefix=<path> [...options...] 2>&1 | tee config.out
   <... lots of output ...>

   # Use an integer value of N for parallel builds
   shell$ make [-j N] all 2>&1 | tee make.out

   # ...lots of output...

   # Depending on the <prefix> chosen above, you may need root access
   # for the following:
   shell$ make install 2>&1 | tee install.out

   # ...lots of output...

Note that VPATH builds (i.e., building in a directory separate from the
source tree) are fully supported.  For example:

.. code-block:: sh

   shell$ tar xf openmpi-<version>.tar.bz2
   shell$ cd openmpi-<version>
   shell$ mkdir build
   shell$ cd build
   shell$ ../configure --prefix=<path> 2>&1 | tee config.out
   # ...etc.

The above patterns can be used in many environments.
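If you installed into a non-default ``--prefix``, you will typically want
that prefix's ``bin`` directory in your ``PATH`` (and possibly its ``lib``
directory in ``LD_LIBRARY_PATH``) so that Open MPI's commands and libraries
can be found.  A minimal sketch, assuming a Bourne-style shell and the
``<path>`` used with ``--prefix`` above:

.. code-block:: sh

   # Make the new installation visible to the shell and the runtime linker
   shell$ export PATH=<path>/bin:$PATH
   shell$ export LD_LIBRARY_PATH=<path>/lib:$LD_LIBRARY_PATH

   # Quick sanity check that the intended installation is being picked up
   shell$ which mpicc
   shell$ mpirun --version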

Note that there are many, many configuration options available in the
``./configure`` step.  Some of them may be needed for your particular
HPC network interconnect type and/or computing environment; see the
rest of this chapter for descriptions of the available options.
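You can list every available option with ``./configure --help``.  As an
illustrative example only (whether you need any such option depends on your
interconnect and environment, and the paths shown are placeholders):

.. code-block:: sh

   # Show every available configure option
   shell$ ./configure --help | less

   # Example: point configure at an external UCX installation
   shell$ ./configure --prefix=<path> --with-ucx=<ucx-install-path> 2>&1 | tee config.out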