
Launching with LSF
==================

PRRTE supports the LSF resource manager.

Verify LSF support
------------------

The ``prte_info`` command can be used to determine whether or not an
installed PRRTE includes LSF support:

.. code-block:: sh

   shell$ prte_info | grep lsf

If the PRRTE installation includes support for LSF, you should see a
line similar to the one below. Note that the MCA version information
will vary depending on which version of PRRTE is installed.

.. code-block::

       MCA ras: lsf (MCA v2.1.0, API v2.0.0, Component v3.0.0)

Launching
---------

When properly configured, PRRTE obtains both the list of hosts and
how many processes to start on each host directly from LSF.  Hence, it
is unnecessary to specify the ``--hostfile``, ``--host``, or ``-n``
options to ``prterun``.  PRRTE uses LSF-native mechanisms
to launch and kill processes (``ssh`` is not required).

For example:

.. code-block:: sh

   # Allocate a job using 4 nodes with 2 processors per node and run the job on the nodes allocated by LSF
   shell$ bsub -n 8 -R "span[ptile=2]" "prterun mpi-hello-world"


This will run the processes on the nodes that were allocated by
LSF.  Or, if submitting a script:

.. code-block:: sh

   shell$ cat my_script.sh
   #!/bin/sh
   prterun mpi-hello-world
   shell$ bsub -n 8 -R "span[ptile=2]" < my_script.sh
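As a hedged illustration of where the allocation information comes
from: LSF exports the per-host slot layout to the job in the
``LSB_MCPU_HOSTS`` environment variable, formatted as alternating
host/slot-count pairs (e.g. ``nodeA 2 nodeB 2``). The
``print_allocation`` helper below is hypothetical (not part of PRRTE
or LSF); it merely shows how such a value can be decoded, which can be
handy when debugging why PRRTE launched processes where it did.

.. code-block:: sh

   #!/bin/sh
   # Hypothetical helper: decode an LSB_MCPU_HOSTS-style string
   # ("hostA 2 hostB 2 ...") into one "host: N slots" line per host.
   print_allocation() {
     set -- $1
     while [ $# -ge 2 ]; do
       printf '%s: %s slots\n' "$1" "$2"
       shift 2
     done
   }

   # Inside a real bsub job, LSF sets LSB_MCPU_HOSTS itself;
   # a sample value is used here for illustration.
   print_allocation "${LSB_MCPU_HOSTS:-nodeA 2 nodeB 2}"

Running this inside the submitted script (before ``prterun``) prints
the hosts and slot counts LSF granted, which should match where PRRTE
places the MPI processes.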