.. _usecase_smpi:
Simulating MPI Applications
===========================
.. warning:: This document is still at an early stage. You can try to
   take this tutorial, but you should not be surprised if things fall short.
   It will be completed for the next release, v3.22, to be released by
   the end of 2018.
Discover SMPI
-------------
SimGrid can not only :ref:`simulate algorithms <usecase_simalgo>`, but
it can also be used to execute real MPI applications on top of
virtual, simulated platforms with the SMPI module. Even complex
C/C++/F77/F90 applications should run out of the box in this
environment. In fact, almost all proxy apps provided by the `ExaScale
Project <https://proxyapps.exascaleproject.org/>`_ only require minor
modifications to `run on top of SMPI
<https://github.com/simgrid/SMPI-proxy-apps/>`_.
This setting allows you to debug your MPI applications in a perfectly
reproducible setup, with no Heisenbugs. Enjoy the full clairvoyance
provided by the simulator while running what-if analyses on platforms
that are still to be built! Several `production-grade MPI applications
<https://framagit.org/simgrid/SMPI-proxy-apps#full-scale-applications>`_
use SimGrid for their integration and performance testing.
MPI 2.2 is already partially covered: over 160 primitives are
supported. Some parts of the standard are still missing: MPI-IO, MPI3
collectives, spawning ranks, and some others. If one of the functions
you use is still missing, please drop us an email. We may find the
time to implement it for you.
Multi-threading support is very limited in SMPI. Only funneled
applications are supported: at most one thread per rank can issue any
MPI calls. For better timing predictions, your application should even
be completely mono-threaded. Using OpenMP (or pthreads directly) may
greatly decrease SimGrid's predictive power. That may still be OK if you
only plan to debug your application in a reproducible setup, without
any performance-related analysis.
How does it work?
.................
In SMPI, communications are simulated while computations are
emulated. This means that while computations occur as they would on
the real system, communication calls are intercepted and handled by
the simulator.
To start using SMPI, you just need to compile your application with
``smpicc`` instead of ``mpicc``, with ``smpiff`` instead of
``mpif77``, or with ``smpicxx`` instead of ``mpicxx``. Then, the only
difference between the classical ``mpirun`` and the new ``smpirun`` is
that ``smpirun`` requires an extra ``-platform`` parameter pointing to
a file that describes the simulated platform on which your application
will run.
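
For instance, here is a minimal sketch of both workflows side by side
(the names ``my_app.c``, ``my_hostfile`` and ``my_platform.xml`` are
placeholders chosen for this example):

.. code-block:: shell

   # Regular MPI workflow, for comparison
   mpicc -O3 my_app.c -o my_app
   mpirun -np 4 -hostfile my_hostfile ./my_app

   # Same application, simulated with SMPI
   smpicc -O3 my_app.c -o my_app
   smpirun -np 4 -hostfile my_hostfile -platform my_platform.xml ./my_app
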
Internally, all ranks of your application are executed as threads of a
single unix process. That's not a problem if your application has
global variables, because ``smpirun`` loads one application instance
per MPI rank as if it was another dynamic library. Then, MPI
communication calls are implemented using SimGrid: data is exchanged
through memory copy, while the simulator's performance models are used
to predict the time taken by each communication. Any computations
occurring between two MPI calls are benchmarked, and the corresponding
time is reported into the simulator.
.. image:: /tuto_smpi/img/big-picture.svg
:align: center
Describing Your Platform
------------------------
As an SMPI user, you are expected to provide a description of your
simulated platform, which is mostly a set of simulated hosts and network
links with some performance characteristics. SimGrid provides plenty
of :ref:`documentation <platform>` and examples (in the
`examples/platforms <https://framagit.org/simgrid/simgrid/tree/master/examples/platforms>`_
source directory), and this section only shows a small set of introductory
examples.
Feel free to skip this section if you want to jump right away to usage
examples.
Simple Example with 3 hosts
...........................
At the most basic level, you can describe your simulated platform as a
graph of hosts and network links. For instance:
.. image:: /tuto_smpi/3hosts.png
:align: center
.. literalinclude:: /tuto_smpi/3hosts.xml
:language: xml
Note the way in which hosts, links, and routes are defined in
this XML. All hosts are defined with a speed (in Gflops), and links
with a latency (in us) and bandwidth (in MBytes per second). Other
units are possible and written as expected. Routes specify the list of
links encountered from one host to another. Routes are symmetrical by
default.
Cluster with a Crossbar
.......................
A very common parallel computing platform is a homogeneous cluster in
which hosts are interconnected via a crossbar switch with as many
ports as hosts, so that any disjoint pairs of hosts can communicate
concurrently at full speed. For instance:
.. literalinclude:: ../../examples/platforms/cluster_crossbar.xml
:language: xml
:lines: 1-3,18-
One specifies a name prefix and suffix for each host, and then gives an
integer range. In this example, the cluster contains 65535 hosts (!),
named ``node-0.simgrid.org`` to ``node-65534.simgrid.org``. All hosts
have the same power (1 Gflop/sec) and are connected to the switch via
links with the same bandwidth (125 MBytes/sec) and latency (50
microseconds).
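
To run an MPI application with ``smpirun`` on such a cluster, you will
also need a hostfile listing the simulated hosts to use, one per
line. Here is a minimal sketch that generates one for the first 16
nodes of this cluster (the name ``cluster_hostfile`` matches the one
used later in this tutorial):

.. code-block:: shell

   # List the first 16 nodes of the crossbar cluster, one per line
   for i in $(seq 0 15); do
     echo "node-$i.simgrid.org"
   done > cluster_hostfile
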
.. todo::
Add the picture.
Cluster with a Shared Backbone
..............................
Another popular model for a parallel platform is that of a set of
homogeneous hosts connected to a shared communication medium, a
backbone, with some finite bandwidth capacity and on which
communicating host pairs can experience contention. For instance:
.. literalinclude:: ../../examples/platforms/cluster_backbone.xml
:language: xml
:lines: 1-3,18-
The only differences with the crossbar cluster above are the ``bb_bw``
and ``bb_lat`` attributes that specify the backbone characteristics
(here, a 500 microseconds latency and a 2.25 GByte/sec
bandwidth). This link is used for every communication within the
cluster. The route from ``node-0.simgrid.org`` to ``node-1.simgrid.org``
counts 3 links: the private link of ``node-0.simgrid.org``, the backbone
and the private link of ``node-1.simgrid.org``.
.. todo::
Add the picture.
Torus Cluster
.............
Many HPC facilities use torus clusters to reduce sharing and
performance loss on concurrent internal communications. Modeling this
in SimGrid is very easy. Simply add a ``topology="TORUS"`` attribute
to your cluster. Configure it with the ``topo_parameters="X,Y,Z"``
attribute, where ``X``, ``Y`` and ``Z`` are the dimensions of your
torus.
.. image:: ../../examples/platforms/cluster_torus.svg
:align: center
.. literalinclude:: ../../examples/platforms/cluster_torus.xml
:language: xml
Note that in this example, we used ``loopback_bw`` and
``loopback_lat`` to specify the characteristics of the loopback link
of each node (i.e., the link allowing each node to communicate with
itself). We could have done so in the previous examples too. When no
loopback is given, the communication from a node to itself is handled
as if it involved two distinct nodes: it goes twice through the private
link and through the backbone (if any).
Fat-Tree Cluster
................
This topology was introduced to reduce the number of links in the
cluster (and thus its price) while maintaining a high bisection
bandwidth and a relatively low diameter. To model this in SimGrid,
pass a ``topology="FAT_TREE"`` attribute to your cluster. The
``topo_parameters=#levels;#downlinks;#uplinks;link count`` attribute
follows the semantics introduced in `Figure 1B of this article
<http://webee.eedev.technion.ac.il/wp-content/uploads/2014/08/publication_574.pdf>`_.
Here is the meaning of this example: ``2 ; 4,4 ; 1,2 ; 1,2``
- That's a two-level cluster (thus the initial ``2``).
- Routers are connected to 4 elements below them, regardless of their
  level. Thus the ``4,4`` component used as ``#downlinks``. This means
  that hosts are grouped by 4 under a given router, and that there are
  4 level-1 routers (in the middle of the figure).
- Hosts are connected to only 1 router above them, while these routers
  are connected to 2 routers above them (thus the ``1,2`` used as
  ``#uplinks``).
- Hosts have only one link to their router, while every path between a
  level-1 router and a level-2 router uses 2 parallel links. Thus the
  ``1,2`` used as ``link count``.
.. image:: ../../examples/platforms/cluster_fat_tree.svg
:align: center
.. literalinclude:: ../../examples/platforms/cluster_fat_tree.xml
:language: xml
:lines: 1-3,10-
Dragonfly Cluster
.................
This topology was introduced to further reduce the number of links
while maintaining a high bandwidth for local communications. To model
this in SimGrid, pass a ``topology="DRAGONFLY"`` attribute to your
cluster. It's based on the implementation of the topology used on
Cray XC systems, described in the paper
`Cray Cascade: A scalable HPC system based on a Dragonfly network <https://dl.acm.org/citation.cfm?id=2389136>`_.
The system description follows the format ``topo_parameters=#groups;#chassis;#routers;#nodes``.
For example, ``3,4 ; 3,2 ; 3,1 ; 2``:
- ``3,4``: There are 3 groups with 4 links between each (blue level).
  Links to the nth group are attached to the nth router of the group
  in our implementation.
- ``3,2``: In each group, there are 3 chassis with 2 links between each nth router
of each group (black level)
- ``3,1``: In each chassis, 3 routers are connected together with a single link
(green level)
- ``2``: Each router has two nodes attached (single link)
.. image:: ../../examples/platforms/cluster_dragonfly.svg
:align: center
.. literalinclude:: ../../examples/platforms/cluster_dragonfly.xml
:language: xml
Final Word
..........
We have only glanced over the abilities offered by SimGrid to describe the
platform topology. Other networking zones model non-HPC platforms
(such as wide area networks, ISP networks comprising set-top boxes, or
even your own routing schema). You can interconnect several networking
zones in your platform to form a tree of zones, which is both a time-
and memory-efficient representation of distributed platforms. Please
head to the dedicated :ref:`documentation <platform>` for more
information.
Hands-on!
---------
It is time to start using SMPI yourself. For that, you first need to
install it somehow, and then you will need an MPI application to play with.
Using Docker
............
The easiest way to take the tutorial is to use the dedicated Docker
image. Once you have `installed Docker itself
<https://docs.docker.com/install/>`_, simply do the following:
.. code-block:: shell
docker pull simgrid/tuto-smpi
docker run -it --rm --name simgrid --volume ~/smpi-tutorial:/source/tutorial simgrid/tuto-smpi bash
This will start a new container with all you need to take this
tutorial, and create a ``smpi-tutorial`` directory in your home on
your host machine that will be visible as ``/source/tutorial`` within the
container. You can then edit the files you want with your favorite
editor in ``~/smpi-tutorial``, and compile them within the
container to enjoy the provided dependencies.
.. warning::

   Any change to the container outside of ``/source/tutorial`` will be lost
   when you log out of the container, so don't edit the other files!
All needed dependencies are already installed in this container
(SimGrid, the C/C++/Fortran compilers, make, pajeng and R). Since Vite
is only optional in this tutorial, it is not installed, to keep the
image size small.
The container also includes the example platform files from the
previous section as well as the source code of the NAS Parallel
Benchmarks. These files are available under
``/source/simgrid-template-smpi`` in the image. You should copy them to
your working directory when you first log in:
.. code-block:: shell
cp -r /source/simgrid-template-smpi/* /source/tutorial
cd /source/tutorial
Using your Computer Natively
............................
To take the tutorial on your machine, you first need to :ref:`install
SimGrid <install>`, the C/C++/Fortran compilers and also ``pajeng`` to
visualize the traces. You may want to install `Vite
<http://vite.gforge.inria.fr/>`_ to get a first glance at the
traces. The provided code template requires make to compile. On
Debian and Ubuntu for example, you can get them as follows:
.. code-block:: shell
sudo apt install simgrid pajeng make gcc g++ gfortran vite
To take this tutorial, you will also need the platform files from the
previous section as well as the source code of the NAS Parallel
Benchmarks. Just clone `this repository
<https://framagit.org/simgrid/simgrid-template-smpi>`_ to get them all:
.. code-block:: shell
git clone git@framagit.org:simgrid/simgrid-template-smpi.git
cd simgrid-template-smpi/
If you struggle with the compilation, then you should double-check
your :ref:`SimGrid installation <install>`. If needed, please refer to
the :ref:`Troubleshooting your Project Setup
<install_yours_troubleshooting>` section.
Lab 0: Hello World
------------------
It is time to simulate your first MPI program. Use the simplistic
example `roundtrip.c
<https://framagit.org/simgrid/simgrid-template-smpi/raw/master/roundtrip.c?inline=false>`_
that comes with the template.
.. literalinclude:: /tuto_smpi/roundtrip.c
:language: c
Compiling and Executing
.......................
Compiling the program is straightforward (double check your
:ref:`SimGrid installation <install>` if you get an error message):
.. code-block:: shell
$ smpicc -O3 roundtrip.c -o roundtrip
Once compiled, you can simulate the execution of this program on 16
nodes from the ``cluster_crossbar.xml`` platform as follows:
.. code-block:: shell
$ smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile ./roundtrip
- The ``-np 16`` option, just like in regular MPI, specifies the
number of MPI processes to use.
- The ``-hostfile cluster_hostfile`` option, just like in regular
MPI, specifies the host file. If you omit this option, ``smpirun``
will deploy the application on the first machines of your platform.
- The ``-platform cluster_crossbar.xml`` option, **which doesn't exist
in regular MPI**, specifies the platform configuration to be
simulated.
- At the end of the line, one finds the executable name and
command-line arguments (if any -- roundtrip does not expect any arguments).
Feel free to tweak the content of the XML platform file and the
program to see the effect on the simulated execution time. It may be
easier to compare the executions with the extra option
``--cfg=smpi/display_timing:yes``. Note that the simulation accounts
for realistic network protocol effects and MPI implementation
effects. As a result, you may see "unexpected behavior" like in the
real world (e.g., sending a message 1 byte larger may lead to
significantly higher execution time).
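
For instance, here is a sketch of the same run with the timing display
enabled (this merely appends the option mentioned above to the command
used earlier):

.. code-block:: shell

   $ smpirun -np 16 -platform cluster_crossbar.xml -hostfile cluster_hostfile --cfg=smpi/display_timing:yes ./roundtrip
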
Lab 1: Visualizing LU
---------------------
We will now simulate a larger application: the LU benchmark of the NAS
suite. The version provided in the code template was modified to
compile with SMPI instead of the regular MPI. Compare the original
``config/make.def.template`` with the ``config/make.def`` that was
adapted to SMPI: we use ``smpiff`` and ``smpicc`` as compilers, and
don't pass any additional libraries.
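
A quick way to see what was changed is to diff the two files (a simple
sketch; the exact output depends on your version of the template):

.. code-block:: shell

   $ diff -u config/make.def.template config/make.def
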
Now compile and execute the LU benchmark, class S (i.e., for `small
data size
<https://www.nas.nasa.gov/publications/npb_problem_sizes.html>`_) with
4 nodes.
.. code-block:: shell
$ make lu NPROCS=4 CLASS=S
(compilation logs)
$ smpirun -np 4 -platform ../cluster_backbone.xml bin/lu.S.4
(execution logs)
To get a better understanding of what is going on, activate the
visualization tracing, and convert the produced trace for later
use:
.. code-block:: shell
smpirun -np 4 -platform ../cluster_backbone.xml -trace --cfg=tracing/filename:lu.S.4.trace bin/lu.S.4
pj_dump --ignore-incomplete-links lu.S.4.trace | grep State > lu.S.4.state.csv
You can then produce a Gantt Chart with the following R chunk. You can
either copy/paste it into an R session, or `turn it into an Rscript executable
<https://swcarpentry.github.io/r-novice-inflammation/05-cmdline/>`_ to
run it again and again.
.. code-block:: R
library(ggplot2)
# Read the data
df_state = read.csv("lu.S.4.state.csv", header=F, strip.white=T)
names(df_state) = c("Type", "Rank", "Container", "Start", "End", "Duration", "Level", "State");
df_state = df_state[!(names(df_state) %in% c("Type","Container","Level"))]
df_state$Rank = as.numeric(gsub("rank-","",df_state$Rank))
# Draw the Gantt Chart
gc = ggplot(data=df_state) + geom_rect(aes(xmin=Start, xmax=End, ymin=Rank, ymax=Rank+1,fill=State))
# Produce the output
plot(gc)
dev.off()
This produces a file called ``Rplots.pdf`` with the following
content. You can find more visualization examples `online
<http://simgrid.gforge.inria.fr/contrib/R_visualization.html>`_.
.. image:: /tuto_smpi/img/lu.S.4.png
:align: center
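
If you prefer the Rscript approach, here is a minimal sketch, assuming
you saved the R chunk above in a file named ``gantt.R`` (a name chosen
for this example):

.. code-block:: shell

   $ Rscript gantt.R   # writes the Gantt Chart to Rplots.pdf
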
Lab 2: Tracing and Replay of LU
-------------------------------
Now compile and execute the LU benchmark, class A, with 32 nodes.
.. code-block:: shell
$ make lu NPROCS=32 CLASS=A
This takes several minutes to simulate, because all the code of all
processes has to be actually executed, and everything is serialized.
SMPI provides several methods to speed things up. One of them is to
capture a time-independent trace of the running application, and
replay it on a different platform with the same number of nodes. The
replay is much faster than live simulation, as the computations are
skipped (the application must be network-dependent for this to work).
You can even generate the trace during a live simulation, as follows:
.. code-block:: shell
$ smpirun -trace-ti --cfg=tracing/filename:LU.A.32 -np 32 -platform ../cluster_backbone.xml bin/lu.A.32
The produced trace is composed of a file ``LU.A.32`` and a folder
``LU.A.32_files``. To replay this with SMPI, you need to first compile
the provided ``smpi_replay.cpp`` file, which comes from
`simgrid/examples/smpi/replay
<https://framagit.org/simgrid/simgrid/tree/master/examples/smpi/replay>`_.
.. code-block:: shell
$ smpicxx ../replay.cpp -O3 -o ../smpi_replay
Afterward, you can replay your trace in SMPI as follows:

.. code-block:: shell

   $ smpirun -np 32 -platform ../cluster_torus.xml -ext smpi_replay ../smpi_replay LU.A.32
All the outputs are gone, as the application is not really simulated
here. Its trace is simply replayed. But if you visualize the live
simulation and the replay, you will see that the behavior is
unchanged. The simulation does not run much faster on this very
example, but this becomes very interesting when your application
is computationally hungry.
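
To compare both Gantt Charts, you can trace the replay the same way as
the live run and post-process the result as in the previous lab. This
is only a sketch: we assume here that ``-trace`` combines with the
replay mode just as it does with a regular run.

.. code-block:: shell

   smpirun -np 32 -platform ../cluster_torus.xml -trace --cfg=tracing/filename:replay.LU.A.32.trace -ext smpi_replay ../smpi_replay LU.A.32
   pj_dump --ignore-incomplete-links replay.LU.A.32.trace | grep State > replay.LU.A.32.state.csv
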
.. todo:: smpi_replay should be installed by SimGrid, and smpirun interface could be simplified here.
Lab 3: Execution Sampling on EP
-------------------------------
The second method to speed up simulations is to sample the computation
parts in the code. This means that the person doing the simulation
needs to know the application and identify parts that are compute
intensive and take time, while being regular enough not to ruin
simulation accuracy. Furthermore, there should not be any MPI calls
inside such parts of the code.
Use the EP benchmark, class B, 16 processes.
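
The compilation and execution commands follow the same pattern as for
LU. Here is a sketch, assuming that the EP benchmark of the template
uses the same build interface and naming scheme:

.. code-block:: shell

   $ make ep NPROCS=16 CLASS=B
   (compilation logs)
   $ smpirun -np 16 -platform ../cluster_backbone.xml bin/ep.B.16
   (execution logs)
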
.. todo:: write this section, and the following ones.
Further Readings
----------------
You may also be interested in the `SMPI reference article
<https://hal.inria.fr/hal-01415484>`_ or these `introductory slides
<http://simgrid.org/tutorials/simgrid-smpi-101.pdf>`_. The
:ref:`SMPI reference documentation <SMPI_doc>` covers much more content than
this short tutorial.
Finally, we regularly use SimGrid in our teaching on MPI. This way,
our students can experiment with platforms that they do not have access
to, and the associated visualization tools help them to understand
their work. The whole material is available online, in a separate
project: the `SMPI CourseWare <https://simgrid.github.io/SMPI_CourseWare/>`_.
.. LocalWords: SimGrid