File: appendix.rst

Appendix
========

.. _python-mpi:

MPI-enabled Python interpreter
------------------------------

  .. warning::

     In most cases, using the MPI-enabled Python interpreter is no
     longer required, and therefore it is not built by default
     anymore, because reliably building a Python interpreter across
     different distributions is too difficult.  If you know that you
     still **really** need it, see below on how to use the
     `build_exe` and `install_exe` commands.

Some MPI-1 implementations (notably, MPICH 1) **do require** the
actual command-line arguments to be passed at the time
:c:func:`MPI_Init()` is called. In this case, you will need to use a
rebuilt, MPI-enabled Python interpreter executable. A basic
implementation (targeting Python 2.X) of what is required is shown
below:

.. sourcecode:: c

    /* python-mpi.c -- a minimal MPI-enabled Python interpreter */
    #include <Python.h>
    #include <mpi.h>

    int main(int argc, char *argv[])
    {
       int status, flag;
       /* initialize MPI before Python sees the command line */
       MPI_Init(&argc, &argv);
       /* run the standard Python interpreter main program */
       status = Py_Main(argc, argv);
       /* finalize MPI, unless user code already did it */
       MPI_Finalized(&flag);
       if (!flag) MPI_Finalize();
       return status;
    }

The source code above is straightforward; compiling it should also
be. However, the linking step is trickier: special flags have to be
passed to the linker depending on your platform. To relieve you of
such low-level details, *MPI for Python* provides some
pure-distutils based support to build and install an MPI-enabled
Python interpreter executable::

    $ cd mpi4py-X.X.X
    $ python setup.py build_exe [--mpi=<name>|--mpicc=/path/to/mpicc]
    $ [sudo] python setup.py install_exe [--install-dir=$HOME/bin]
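
Alternatively, for reference, on platforms where the :program:`mpicc`
compiler wrapper takes care of the MPI-specific linker flags, a
manual build of the interpreter shown above might look like this (an
illustrative sketch; the source file name and the
``python2.7-config`` helper are assumptions that depend on your
Python installation)::

    $ mpicc python-mpi.c -o python2.7-mpi \
          $(python2.7-config --cflags) $(python2.7-config --ldflags)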

After the above steps you should have the MPI-enabled interpreter
installed as :file:`{prefix}/bin/python{X}.{X}-mpi` (or
:file:`$HOME/bin/python{X}.{X}-mpi`). Assuming that
:file:`{prefix}/bin` (or :file:`$HOME/bin`) is listed on your
:envvar:`PATH`, you should be able to start your MPI-enabled Python
interpreter interactively, for example::

    $ python2.7-mpi
    Python 2.7.8 (default, Nov 10 2014, 08:19:18)
    [GCC 4.9.2 20141101 (Red Hat 4.9.2-1)] on linux2
    Type "help", "copyright", "credits" or "license" for more information.
    >>> import sys
    >>> sys.executable
    '/usr/bin/python2.7-mpi'
    >>>
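
The rebuilt interpreter can be used to launch MPI programs in the
usual way. As a hypothetical quick check (the process manager and its
command-line options depend on your MPI implementation)::

    $ mpiexec -n 4 python2.7-mpi -c "from mpi4py import MPI; print 'Hello from rank %d' % MPI.COMM_WORLD.Get_rank()"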


.. _macosx-universal-sdk:

Mac OS X and Universal/SDK Python builds
----------------------------------------

Mac OS X users employing a Python distribution built with support for
`Universal applications <http://www.apple.com/universal/>`_ could have
trouble building *MPI for Python*, especially if they want to link
against MPI libraries built without such support. Another source of
trouble could be a Python build using a specific *deployment target*
and *cross-development SDK* configuration. Workarounds for such issues
are to temporarily set the environment variables
:envvar:`MACOSX_DEPLOYMENT_TARGET`, :envvar:`SDKROOT` and/or
:envvar:`ARCHFLAGS` to appropriate values in the shell before trying
to build/install *MPI for Python*.

An appropriate value for :envvar:`MACOSX_DEPLOYMENT_TARGET` is any
value greater than or equal to the one used to build Python, and less
than or equal to your system version. The safest choice for end-users
would be to use the system version (e.g., if you are on *Leopard*,
you should try ``MACOSX_DEPLOYMENT_TARGET=10.5``).
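
One way to find out the deployment target your Python was built with
is a quick query through the :mod:`distutils.sysconfig` module::

    $ python -c "from distutils import sysconfig; print sysconfig.get_config_var('MACOSX_DEPLOYMENT_TARGET')"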

An appropriate value for :envvar:`SDKROOT` is the full path name of
any of the SDKs in the :file:`/Developer/SDKs` directory (e.g.,
``SDKROOT=/Developer/SDKs/MacOSX10.5.sdk``). The safest choice for
end-users would be the one matching the system version; or
alternatively the root directory (i.e., ``SDKROOT=/``).
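
To see which SDKs are actually available on your machine, simply list
the contents of that directory::

    $ ls /Developer/SDKs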

Appropriate values for :envvar:`ARCHFLAGS` have the form ``-arch
<value>``, where ``<value>`` should be chosen from the following
table:

====== ==========  =========
Bits   Intel       PowerPC
====== ==========  =========
32-bit ``i386``    ``ppc``
64-bit ``x86_64``  ``ppc64``
====== ==========  =========
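
To check which architectures your Python binary itself supports (and
hence which ``-arch`` values make sense for it), the :command:`file`
command can help; the output below is merely illustrative::

    $ file $(which python)
    /usr/bin/python: Mach-O universal binary with 2 architectures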

For example, assuming your Mac is running **Snow Leopard** on a
**64-bit Intel** processor and you want to override the hard-wired
cross-development SDK in the Python configuration, you can build and
install *MPI for Python* using any of the alternatives below. Note
that environment variables may need to be passed/set both at the
build and install steps (because :program:`sudo` may not pass
environment variables to subprocesses for security reasons).

* Alternative 1::

    $ env MACOSX_DEPLOYMENT_TARGET=10.6 \
          SDKROOT=/                     \
          ARCHFLAGS='-arch x86_64'      \
          python setup.py build [options]

    $ sudo env MACOSX_DEPLOYMENT_TARGET=10.6 \
               SDKROOT=/                     \
               ARCHFLAGS='-arch x86_64'      \
               python setup.py install [options]

* Alternative 2::

    $ export MACOSX_DEPLOYMENT_TARGET=10.6
    $ export SDKROOT=/
    $ export ARCHFLAGS='-arch x86_64'
    $ python setup.py build [options]

    $ sudo -s # enter interactive shell as root
    $ export MACOSX_DEPLOYMENT_TARGET=10.6
    $ export SDKROOT=/
    $ export ARCHFLAGS='-arch x86_64'
    $ python setup.py install [options]
    $ exit

.. _building-mpi:


Building MPI from sources
-------------------------

The list below gives brief instructions for building some of the
open-source MPI implementations out there with support for
shared/dynamic libraries on POSIX environments; a small program for
sanity-checking the resulting installations is shown after the list.

+ *MPICH* ::

    $ tar -zxf mpich-X.X.X.tar.gz
    $ cd mpich-X.X.X
    $ ./configure --enable-shared --prefix=/usr/local/mpich
    $ make
    $ make install

+ *Open MPI* ::

    $ tar -zxf openmpi-X.X.X.tar.gz
    $ cd openmpi-X.X.X
    $ ./configure --prefix=/usr/local/openmpi
    $ make all
    $ make install

+ *LAM/MPI* ::

    $ tar -zxf lam-X.X.X.tar.gz
    $ cd lam-X.X.X
    $ ./configure --enable-shared --prefix=/usr/local/lam
    $ make
    $ make install

+ *MPICH 1* ::

    $ tar -zxf mpich-X.X.X.tar.gz
    $ cd mpich-X.X.X
    $ ./configure --enable-sharedlib --prefix=/usr/local/mpich1
    $ make
    $ make install
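
As a sanity check for any of these installations, you can compile and
run a minimal MPI program with the freshly built compiler wrappers.
The sketch below assumes the MPICH installation prefix used above;
adjust file names and paths to your case:

.. sourcecode:: c

    /* hello-mpi.c -- minimal MPI program to test an installation */
    #include <mpi.h>
    #include <stdio.h>

    int main(int argc, char *argv[])
    {
       int rank, size;
       MPI_Init(&argc, &argv);
       MPI_Comm_rank(MPI_COMM_WORLD, &rank);
       MPI_Comm_size(MPI_COMM_WORLD, &size);
       printf("Hello from rank %d of %d\n", rank, size);
       MPI_Finalize();
       return 0;
    }

Compile and run it with a couple of processes::

    $ /usr/local/mpich/bin/mpicc hello-mpi.c -o hello-mpi
    $ /usr/local/mpich/bin/mpiexec -n 2 ./hello-mpi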

You may need to set the :envvar:`LD_LIBRARY_PATH` environment
variable (using :command:`export`, :command:`setenv`, or whatever
applies to your system) to point to the directory containing the MPI
libraries. If you get runtime linking errors when running MPI
programs, the following lines can be added to your login shell script
(:file:`.profile`, :file:`.bashrc`, etc.); a way to verify the
resulting setup is shown after the list.

- *MPICH* ::

    MPI_DIR=/usr/local/mpich
    export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH

- *Open MPI* ::

    MPI_DIR=/usr/local/openmpi
    export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH

- *LAM/MPI* ::

    MPI_DIR=/usr/local/lam
    export LD_LIBRARY_PATH=$MPI_DIR/lib:$LD_LIBRARY_PATH

- *MPICH 1* ::

    MPI_DIR=/usr/local/mpich1
    export LD_LIBRARY_PATH=$MPI_DIR/lib/shared:$LD_LIBRARY_PATH
    export MPICH_USE_SHLIB=yes

  .. warning::

     MPICH 1 support for dynamic libraries is not completely
     transparent. Users should set the environment variable
     :envvar:`MPICH_USE_SHLIB` to ``yes`` in order to avoid link
     problems when using the :program:`mpicc` compiler wrapper.
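
To verify that the dynamic linker resolves the MPI libraries after
these changes, you can inspect any MPI executable with
:command:`ldd`; the output below is merely illustrative::

    $ ldd hello-mpi
        libmpich.so.X => /usr/local/mpich/lib/libmpich.so.X (0x...)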