# 10.5. Launching only on the local node
It is common to develop MPI applications on a single workstation or
laptop, and then move to a larger parallel / HPC environment once the
MPI application is ready.
Open MPI supports running multi-process MPI jobs on a single machine.
In such cases, you do not need a hostfile or a list of remote hosts;
just specify the number of MPI processes to launch. For example:
<div class="highlight-sh notranslate"><div class="highlight"><pre><span></span>shell$<span class="w"> </span>mpirun<span class="w"> </span>-n<span class="w"> </span><span class="m">6</span><span class="w"> </span>mpi-hello-world
Hello<span class="w"> </span>world,<span class="w"> </span>I<span class="w"> </span>am<span class="w"> </span><span class="m">0</span><span class="w"> </span>of<span class="w"> </span><span class="m">6</span><span class="w"> </span><span class="o">(</span>running<span class="w"> </span>on<span class="w"> </span>my-laptop<span class="o">))</span>
Hello<span class="w"> </span>world,<span class="w"> </span>I<span class="w"> </span>am<span class="w"> </span><span class="m">1</span><span class="w"> </span>of<span class="w"> </span><span class="m">6</span><span class="w"> </span><span class="o">(</span>running<span class="w"> </span>on<span class="w"> </span>my-laptop<span class="o">)</span>
...
Hello<span class="w"> </span>world,<span class="w"> </span>I<span class="w"> </span>am<span class="w"> </span><span class="m">5</span><span class="w"> </span>of<span class="w"> </span><span class="m">6</span><span class="w"> </span><span class="o">(</span>running<span class="w"> </span>on<span class="w"> </span>my-laptop<span class="o">)</span>
</pre></div>
</div>
If you do not specify the `-n` option, `mpirun` will default to
launching as many MPI processes as there are processor cores (not
hyperthreads) on the machine.
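For example, on a hypothetical four-core laptop (the hostname and core
count here are illustrative assumptions, as in the output above),
omitting `-n` launches four processes:

```sh
shell$ mpirun mpi-hello-world
Hello world, I am 0 of 4 (running on my-laptop)
Hello world, I am 1 of 4 (running on my-laptop)
Hello world, I am 2 of 4 (running on my-laptop)
Hello world, I am 3 of 4 (running on my-laptop)
```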
<div class="section" id="mpi-communication">
<h2><span class="section-number">10.5.1. </span>MPI communication<a class="headerlink" href="#mpi-communication" title="Permalink to this heading"></a></h2>
<p>When running on a single machine, Open MPI will most likely use the
<code class="docutils literal notranslate"><span class="pre">ob1</span></code> PML and the following BTLs for MPI communication between
peers:</p>
<ul class="simple">
<li><p><code class="docutils literal notranslate"><span class="pre">self</span></code>: used for sending and receiving loopback MPI messages
— where the source and destination MPI process are the same.</p></li>
<li><p><code class="docutils literal notranslate"><span class="pre">sm</span></code>: used for sending and receiving MPI messages where the source
and destination MPI processes can share memory (e.g., via SYSV or
POSIX shared memory mechanisms).</p></li>
</ul>
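If you want to confirm or force this selection rather than rely on
Open MPI's defaults, you can list the BTLs that were built and pin the
components via MCA parameters. This is a sketch, reusing the
`mpi-hello-world` program from above; the exact `ompi_info` output
will vary by installation:

```sh
# Show which BTL components this installation was built with:
shell$ ompi_info | grep "MCA btl"

# Explicitly request the ob1 PML with the self and sm BTLs:
shell$ mpirun --mca pml ob1 --mca btl self,sm -n 2 mpi-hello-world
```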
## 10.5.2. Shared memory MPI communication
<div class="admonition error">
<p class="admonition-title">Error</p>
<p>TODO This should really be moved to the networking section.</p>
</div>
The `sm` BTL supports two modes of shared memory communication:
<ol class="arabic">
<li><p><strong>Two-copy:</strong> Otherwise known as “copy-in / copy-out”, this mode is
where the sender copies data into shared memory and the receiver
copies the data out.</p>
<p>This mechanism is always available.</p>
</li>
<li><p><strong>Single copy:</strong> In this mode, the sender or receiver makes a
single copy of the message data from the source buffer in one
process to the destination buffer in another process. Open MPI
supports three flavors of shared memory single-copy transfers:</p>
   - [Linux KNEM](https://knem.gitlabpages.inria.fr/). This is a
     standalone Linux kernel module, made specifically for HPC and MPI
     libraries to enable high-performance single-copy message
     transfers.

     Open MPI must be able to find the KNEM header files in order to
     build support for KNEM.

   - [Linux XPMEM](https://github.com/hjelmn/xpmem). Similar to KNEM,
     this is a standalone Linux kernel module, made specifically for
     HPC and MPI libraries to enable high-performance single-copy
     message transfers. It is derived from the Cray XPMEM system.

     Open MPI must be able to find the XPMEM header files in order to
     build support for XPMEM.
   - Linux Cross-Memory Attach (CMA). This mechanism is built into
     modern versions of the Linux kernel. Although it performs better
     than the two-copy shared memory transfer mechanism, CMA is the
     slowest of the single-copy mechanisms. However, CMA is likely the
     most widely available because it is enabled by default in several
     modern Linux distributions.

     Open MPI must be built on a Linux system with sufficiently recent
     glibc and kernel versions in order to build support for Linux CMA.
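If you are unsure which single-copy mechanisms your node can provide
at run time, a few quick checks can help. This is a hedged sketch: the
device paths are the conventional ones created by the KNEM and XPMEM
kernel modules, and the sample kernel version is illustrative (CMA's
`process_vm_readv(2)` / `process_vm_writev(2)` system calls entered
mainline Linux in kernel 3.2):

```sh
# KNEM and XPMEM each load a kernel module that creates a device node:
shell$ test -c /dev/knem  && echo "KNEM device present"
shell$ test -c /dev/xpmem && echo "XPMEM device present"

# CMA needs no extra module; any kernel >= 3.2 has the system calls:
shell$ uname -r
5.15.0-91-generic
```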
Which mechanism is used at run time depends both on how Open MPI was
built and on how your system is configured. You can check which
single-copy mechanisms Open MPI was built with in two ways:
<ol class="arabic">
<li><p>At the end of running <code class="docutils literal notranslate"><span class="pre">configure</span></code>, Open MPI emits a list of
transports for which it found relevant header files and libraries
such that it will be able to build support for them. You might see
lines like this, for example:</p>
<div class="highlight-text notranslate"><div class="highlight"><pre><span></span>Shared memory/copy in+copy out: yes
Shared memory/Linux CMA: yes
Shared memory/Linux KNEM: no
Shared memory/XPMEM: no
</pre></div>
</div>
<p>The above output indicates that Open MPI will be built with 2-copy
(as mentioned above, 2-copy is <em>always</em> available) and with Linux
CMA support. KNEM and XPMEM support will <em>not</em> be built.</p>
</li>
2. After Open MPI is installed, the `ompi_info` command can show which
   `smsc` (shared memory single copy) components are available:

   ```text
   shell$ ompi_info | grep smsc
   MCA smsc: cma (MCA v2.1.0, API v1.0.0, Component v5.1.0)
   ```

   This Open MPI installation supports only the Linux CMA single-copy
   mechanism.
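You can also steer the choice at run time through the `smsc`
framework's MCA parameter. This is a sketch; whether a given component
is honored depends on what your build and kernel actually support:

```sh
# Force the CMA single-copy component:
shell$ mpirun --mca smsc cma -n 2 mpi-hello-world

# Exclude a component (the "^" prefix negates the list):
shell$ mpirun --mca smsc ^knem -n 2 mpi-hello-world
```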
<div class="admonition note">
<p class="admonition-title">Note</p>
<p>As implied by the SMSC component names, none of them are
supported on macOS. macOS users will use the two-copy mechanism.</p>
</div>