File: terminology.rst

Open MPI terminology
====================

Open MPI is a large project containing many different sub-systems and
a relatively large code base.  Let's first cover some fundamental
terminology in order to make the rest of the discussion easier.

Modular Component Architecture (MCA)
------------------------------------

:ref:`See this section <label-mca>` for a discussion of the Modular
Component Architecture (MCA).  Seriously.  Go read it now.  From
reading that section, you should understand the following terms before
continuing with the rest of these docs:

* Project
* Framework
* Component
* Module
* Parameters (variables)
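
As a concrete example of how these terms fit together: ``btl`` is a
framework in the OMPI project, ``tcp`` and ``self`` are components in
that framework, and ``btl_tcp_if_include`` is a parameter on the
``tcp`` component.  A sketch of selecting components and setting a
parameter at run time (the interface name ``eth0`` and the application
name ``myapp`` are placeholders):

.. code-block:: sh

   # Use only the tcp and self components of the btl framework,
   # and restrict the tcp component to the eth0 interface
   shell$ mpirun --mca btl tcp,self --mca btl_tcp_if_include eth0 -n 4 ./myapp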

Notes on projects
-----------------

Projects are strict abstraction barriers in the code.  That is, they
are compiled into separate libraries: ``liboshmem``, ``libmpi``,
``libopen-pal``, with a strict dependency order: OSHMEM depends on
OMPI, OMPI depends on OPAL.  For example, MPI executables are linked
with:

.. code-block:: sh

   shell$ mpicc myapp.c -o myapp
   # This actually turns into:
   shell$ cc myapp.c -o myapp -lmpi ...

More system-level libraries may be listed after ``-lmpi``, but you
get the idea.  ``libmpi`` will implicitly pull ``libopen-pal`` into
the overall link step.
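
For example, on Linux the dependency order is visible in an
executable's shared library dependencies (a sketch; the exact library
version suffixes and install paths shown here are illustrative and
vary by platform and build):

.. code-block:: sh

   shell$ ldd myapp | grep -E 'mpi|open-pal'
           libmpi.so.40 => /opt/openmpi/lib/libmpi.so.40 (...)
           libopen-pal.so.80 => /opt/openmpi/lib/libopen-pal.so.80 (...)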

Strictly speaking, these are not "layers" in the classic software
engineering sense (even though it is convenient to refer to them as
such).  They are listed above in dependency order, but that does not
mean that, for example, the OMPI code must go through the OPAL code in
order to reach the operating system or a network interface.

As such, this code organization reflects abstraction and software
engineering boundaries, not a strict hierarchy of functions that must
be traversed in order to reach a lower layer.  For example, OMPI can
directly call the operating system as necessary (and not go through
OPAL).  Indeed, many top-level MPI API functions are quite performance
sensitive; it would not make sense to force them to traverse an
arbitrarily deep call stack just to move some bytes across a network.

Frameworks, components, and modules can be dynamic or static. That is,
they can be available as plugins or they may be compiled statically
into libraries (e.g., ``libmpi``).

In Open MPI |ompi_ver|, ``configure`` defaults to:

* Building projects as dynamic libraries
* Linking all components into their parent project libraries
  (vs. compiling them as independent DSOs)

These defaults can be modified by :doc:`command line arguments to
configure
</installing-open-mpi/configure-cli-options/index>`.
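
For example (a sketch; consult the configure documentation for the
authoritative flag descriptions), the component linking behavior can
be changed at configure time:

.. code-block:: sh

   # Build components as individual DSOs (plugins) instead of
   # linking them into their parent project libraries
   shell$ ./configure --enable-mca-dso ...

   # Conversely, disable dlopen support entirely so that all
   # components are compiled into their parent libraries
   shell$ ./configure --disable-dlopen ...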

Required 3rd party libraries
----------------------------

Note that Open MPI also uses some third-party libraries for core
functionality:

* PMIx
* PRRTE
* Libevent
* Hardware Locality ("hwloc")

These are discussed in detail in the :ref:`required support libraries
section <label-install-required-support-libraries>`.