<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD> <link rel="canonical" href="http://www.mcs.anl.gov/petsc/petsc-current/docs/manualpages/Sys/PetscHMPIMerge.html" />
<META NAME="GENERATOR" CONTENT="DOCTEXT">
<TITLE>PetscHMPIMerge</TITLE>
</HEAD>
<BODY BGCOLOR="#FFFFFF">
<div id="version" align=right><b>petsc-3.4.2 2013-07-02</b></div>
<A NAME="PetscHMPIMerge"><H1>PetscHMPIMerge</H1></A>
Initializes PETSc and MPI to work with HMPI. This is not usually called by the user; instead use the option -hmpi_merge_size <n> to indicate the size of each node in the merged communicator.
<H3><FONT COLOR="#CC3333">Synopsis</FONT></H3>
<PRE>
#include "petscsys.h"
PetscErrorCode PetscHMPIMerge(PetscMPIInt nodesize,PetscErrorCode (*func)(void*),void *ctx)
</PRE>
Collective on MPI_COMM_WORLD or <A HREF="../Sys/PETSC_COMM_WORLD.html#PETSC_COMM_WORLD">PETSC_COMM_WORLD</A> if it has been set
<P>
<H3><FONT COLOR="#CC3333">Input Parameter</FONT></H3>
<TABLE border="0" cellpadding="0" cellspacing="0">
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>nodesize </B></TD><TD>- size of each compute node that will share processors
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>func </B></TD><TD>- optional function to call on the master nodes
</TD></TR>
<TR><TD WIDTH=40></TD><TD ALIGN=LEFT VALIGN=TOP><B>ctx </B></TD><TD>- context passed to function on master nodes
</TD></TR></TABLE>
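<P>
Below is a minimal sketch of calling <A HREF="../Sys/PetscHMPIMerge.html#PetscHMPIMerge">PetscHMPIMerge</A>() directly; normally the merge is performed inside <A HREF="../Sys/PetscInitialize.html#PetscInitialize">PetscInitialize</A>() when -hmpi_merge_size <n> is given. The routine MyMasterWork and the context MyCtx are hypothetical names, and the explicit <A HREF="../Sys/PetscHMPIFinalize.html#PetscHMPIFinalize">PetscHMPIFinalize</A>() call is only illustrative.
<PRE>
#include "petscsys.h"

typedef struct { PetscInt niter; } MyCtx;   /* hypothetical user context */

/* hypothetical routine run on the master process of each node after the merge */
static PetscErrorCode MyMasterWork(void *ctx)
{
  MyCtx          *user = (MyCtx*)ctx;
  PetscErrorCode ierr;

  ierr = PetscPrintf(PETSC_COMM_WORLD,"master performing %D iterations\n",user->niter);CHKERRQ(ierr);
  return 0;
}

int main(int argc,char **argv)
{
  MyCtx          ctx = {10};
  PetscErrorCode ierr;

  ierr = PetscInitialize(&argc,&argv,(char*)0,0);CHKERRQ(ierr);
  /* merge every 3 MPI processes into one node: the master of each node calls
     MyMasterWork(&ctx), the remaining processes become PETSc workers */
  ierr = PetscHMPIMerge(3,MyMasterWork,&ctx);CHKERRQ(ierr);
  ierr = PetscHMPIFinalize();CHKERRQ(ierr);  /* release the worker processes */
  ierr = PetscFinalize();
  return 0;
}
</PRE>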
<P>
<H3><FONT COLOR="#CC3333">Options Database</FONT></H3>
<DT><B>-hmpi_merge_size <n></B> - merge every n MPI processes into one node; one process per node runs the application code and the remaining n-1 act as PETSc workers
<br>
<P>
<pre>
  Comparison of two approaches for HMPI usage (MPI started with N processes)

  -hmpi_spawn_size <n> requires MPI 2, results in n*N total processes with N directly used by application code
                       and n-1 worker processes (used by PETSc) for each application node.
                       You MUST launch MPI so that only ONE MPI process is created for each hardware node.

  -hmpi_merge_size <n> results in N total processes, N/n used by the application code and the rest worker processes
                       (used by PETSc).
                       You MUST launch MPI so that n MPI processes are created for each hardware node.

  petscmpiexec -n 2 ./ex1 -hmpi_spawn_size 3 gives 2 application nodes (and 4 PETSc worker nodes)
  petscmpiexec -n 6 ./ex1 -hmpi_merge_size 3 gives the SAME 2 application nodes and 4 PETSc worker nodes
  This is what one would use if each of the computer's hardware nodes had 3 CPUs.

  These are intended to be used in conjunction with USER HMPI code. The user will have 1 process per
  computer (hardware) node (where the computer node has p cpus); the user's code will use threads to fully
  utilize all the CPUs on the node. The PETSc code will have p processes to fully use the compute node for
  PETSc calculations. The user THREADS and PETSc PROCESSES will NEVER run at the same time, so the p CPUs
  are always working on p tasks, never more than p.

  See <A HREF="../PC/PCHMPI.html#PCHMPI">PCHMPI</A> for a PETSc preconditioner that can use this functionality
</pre>
<P>
For both <A HREF="../Sys/PetscHMPISpawn.html#PetscHMPISpawn">PetscHMPISpawn</A>() and <A HREF="../Sys/PetscHMPIMerge.html#PetscHMPIMerge">PetscHMPIMerge</A>() <A HREF="../Sys/PETSC_COMM_WORLD.html#PETSC_COMM_WORLD">PETSC_COMM_WORLD</A> consists of one process per "node", PETSC_COMM_LOCAL_WORLD
consists of all the processes in a "node."
<P>
In both cases the user's code is running ONLY on <A HREF="../Sys/PETSC_COMM_WORLD.html#PETSC_COMM_WORLD">PETSC_COMM_WORLD</A> (that was newly generated by running this command).
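<P>
For illustration, a sketch of what the application (master) processes see when launched as in the petscmpiexec -n 6 ./ex1 -hmpi_merge_size 3 example above (this assumes PETSC_COMM_LOCAL_WORLD is declared in petscsys.h):
<PRE>
#include "petscsys.h"

int main(int argc,char **argv)
{
  PetscErrorCode ierr;
  PetscMPIInt    nmasters,nlocal;

  ierr = PetscInitialize(&argc,&argv,(char*)0,0);CHKERRQ(ierr);
  /* only the 2 master processes reach this point; the 4 workers serve PETSc */
  ierr = MPI_Comm_size(PETSC_COMM_WORLD,&nmasters);CHKERRQ(ierr);      /* 2 */
  ierr = MPI_Comm_size(PETSC_COMM_LOCAL_WORLD,&nlocal);CHKERRQ(ierr);  /* 3 */
  ierr = PetscPrintf(PETSC_COMM_WORLD,"nodes %d, processes per node %d\n",nmasters,nlocal);CHKERRQ(ierr);
  ierr = PetscFinalize();
  return 0;
}
</PRE>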
<P>
<P>
<H3><FONT COLOR="#CC3333">See Also</FONT></H3>
<A HREF="../Sys/PetscFinalize.html#PetscFinalize">PetscFinalize</A>(), PetscInitializeFortran(), <A HREF="../Sys/PetscGetArgs.html#PetscGetArgs">PetscGetArgs</A>(), <A HREF="../Sys/PetscHMPIFinalize.html#PetscHMPIFinalize">PetscHMPIFinalize</A>(), <A HREF="../Sys/PetscInitialize.html#PetscInitialize">PetscInitialize</A>(), <A HREF="../Sys/PetscHMPISpawn.html#PetscHMPISpawn">PetscHMPISpawn</A>(), <A HREF="../Sys/PetscHMPIRun.html#PetscHMPIRun">PetscHMPIRun</A>()
<BR>
<P>
<P><B><FONT COLOR="#CC3333">Level:</FONT></B> developer
<BR><B><FONT COLOR="#CC3333">Location:</FONT></B> <A HREF="../../../src/sys/objects/mpinit.c.html#PetscHMPIMerge">src/sys/objects/mpinit.c</A>
<BR><A HREF="./index.html">Index of all Sys routines</A>
<BR><A HREF="../../index.html">Table of Contents for all manual pages</A>
<BR><A HREF="../singleindex.html">Index of all manual pages</A>
</BODY></HTML>