			MPICH2 Release 1.2.1p1

MPICH2 is a high-performance and widely portable implementation of the
MPI-2.2 standard from the Argonne National Laboratory. This release
has all MPI 2.2 functions and features required by the standard with
the exception of support for the "external32" portable I/O format and
user-defined data representations for I/O.

The distribution has been tested by us on a variety of machines in our
environments as well as at our partner institutes. If you have problems
with the installation or usage of MPICH2, please send an email to
mpich-discuss@mcs.anl.gov (you need to subscribe to this list
(https://lists.mcs.anl.gov/mailman/listinfo/mpich-discuss) before
sending an email). If you have found a bug in MPICH2, we request that
you report it at our bug tracking system:
(https://trac.mcs.anl.gov/projects/mpich2/newticket).

This README file should contain enough information to get you started
with MPICH2. More extensive installation and user guides can be found
in the doc/installguide/install.pdf and doc/userguide/user.pdf files
respectively. Additional information regarding the contents of the
release can be found in the CHANGES file in the top-level directory,
and in the RELEASE_NOTES file, where certain restrictions are
detailed. Finally, the MPICH2 web site,
http://www.mcs.anl.gov/research/projects/mpich2, contains information
on bug fixes and new releases.

  
I.    Getting Started
II.   Alternate Configure Options
III.  Compiler Flags
IV.   Alternate Channels and Devices
V.    Alternate Process Managers
VI.   VPATH Builds
VII.  Shared Libraries
VIII. Other Features
IX.   Environment Variables
X.    Developer Builds
XI.   Building ROMIO into MPICH2
XII.  Testing the MPICH2 installation
XIII. Installing MPICH2 on Windows


-------------------------------------------------------------------------

I. Getting Started
==================

The following instructions take you through a sequence of steps to get
the default configuration (ch3 device, nemesis channel (with TCP and
shared memory), MPD process management) of MPICH2 up and running.

1.  You will need the following prerequisites.

    - This tar file mpich2-1.2.1p1.tar.gz

    - A C compiler (gcc is sufficient)

    - A Fortran compiler if Fortran applications are to be used
      (g77 or gfortran is sufficient) 

    - A C++ compiler for the C++ MPI bindings (g++ is sufficient)

    - Python 2.2 or later (for the default MPD process manager)

    - If a Fortran 90 compiler is found, by default MPICH2 will
      attempt to build a basic MPI module.  This module contains the
      MPI routines that do not contain "choice" arguments; i.e., the module
      does not contain any of the communication routines, such as
      MPI_Send, that can take arguments of different type.  You may still
      use those routines, however, the MPI module does not contain
      interface specifications for them. If you have trouble with the
      configuration step and do not need Fortran 90, configure with
      --disable-f90 .

    Configure will check for these prerequisites and try to work around
    deficiencies if possible.  (If you don't have Fortran, you will
    still be able to use MPICH2, just not with Fortran applications.)

    Also, you need to know which shell you are using, since different
    shells have different command syntax.  The command "echo $SHELL"
    prints the shell currently used by your terminal program.

2.  Unpack the tar file and go to the top level directory:

      tar xzf mpich2-1.2.1p1.tar.gz
      cd mpich2-1.2.1p1

    If your tar doesn't accept the z option, use

      gunzip mpich2-1.2.1p1.tar.gz
      tar xf mpich2-1.2.1p1.tar
      cd mpich2-1.2.1p1

3.  Choose an installation directory (the default is /usr/local/bin);
    this guide uses /home/you/mpich2-install, which is assumed to be
    non-existent or empty.
    It will be most convenient if this directory is shared by all of the
    machines where you intend to run processes.  If not, you will have
    to duplicate it on the other machines after installation.

4.  Configure MPICH2, specifying the installation directory.  (The
    steps described here perform an in-path build; we recommend a
    VPATH build if possible, as described in Section VI below.)

    for csh and tcsh:

      ./configure --prefix=/home/you/mpich2-install |& tee c.txt

    for bash and sh:

      ./configure --prefix=/home/you/mpich2-install 2>&1 | tee c.txt

    Bourne-like shells, sh and bash, accept "2>&1 |".  Csh-like shells,
    csh and tcsh, accept "|&".  The file c.txt stores all messages
    generated by the configure command and is useful for diagnosis if
    something goes wrong.  Other configure options are described below.
    You might also prefer to do a VPATH build (see below).  Check the
    c.txt file to make sure everything went well.  Problems should be
    self-explanatory, but if not, please attach c.txt to your bug
    report.

5.  Build MPICH2:

    for csh and tcsh:

      make |& tee m.txt

    for bash and sh:

      make 2>&1 | tee m.txt

    This step should succeed if there were no problems with the
    preceding step.  Check file m.txt. If there were problems,
    do a "make clean" and then run make again with V=1

      make V=1 |& tee m.txt       (for csh and tcsh)

      OR

      make V=1 2>&1 | tee m.txt   (for bash and sh)

    and then attach m.txt and c.txt to your bug report.

6.  Install the MPICH2 commands:

    for csh and tcsh:

      make install |& tee mi.txt

    for bash and sh:

      make install 2>&1 | tee mi.txt

    This step collects all required executables and scripts in the bin
    subdirectory of the directory specified by the prefix argument to
    configure. 

    (For users who want an install directory structure compliant to
     GNU coding standards (i.e., documentation files go to 
     ${datarootdir}/doc/${PACKAGE}, architecture-independent
     read-only files go to ${datadir}/${PACKAGE}), replace
     "make install" by

       make install PACKAGE=mpich2-<version>

     and the corresponding installcheck step should be

       make installcheck PACKAGE=mpich2-<version>

    Setting PACKAGE in "make install" or "installcheck" step is optional
    and unnecessary for typical MPI users.)

7.  Add the bin subdirectory of the installation directory to your path:

    for csh and tcsh:

      setenv PATH /home/you/mpich2-install/bin:$PATH

    for bash and sh:
  
      PATH=/home/you/mpich2-install/bin:$PATH ; export PATH

    Check that everything is in order at this point by doing 

      which mpd
      which mpiexec
      which mpirun

    All should refer to the commands in the bin subdirectory of your
    install directory.  It is at this point that you will need to
    duplicate this directory on your other machines if it is not
    in a shared file system such as NFS.

8.  MPICH2 uses an external process manager for scalable startup of
    large MPI jobs.  The default process manager is called MPD, which
    is a ring of daemons on the machines where you will run your MPI
    programs.  In the next few steps, you will get this ring up and
    tested.  More details on interacting with MPD can be found in the
    README file in mpich2-1.2.1p1/src/pm/mpd, such as how to list
    running jobs, kill, suspend, or otherwise signal them, and how to
    debug programs with "mpiexec -gdb".

    If you have problems getting the MPD ring established, see the
    Installation Guide for instructions on how to diagnose problems
    with your system configuration that may be preventing it.  Also
    see that guide if you plan to run MPD as root on behalf of users.
    Please be aware that we do not recommend running MPD as root until
    you have done testing to make sure that all is well.

    Begin by placing in your home directory a file named .mpd.conf
    (/etc/mpd.conf if root), containing the line 

      secretword=<secretword>

    where <secretword> is a string known only to yourself.  It should
    NOT be your normal Unix password.  Make this file readable and
    writable only by you:

      chmod 600 .mpd.conf
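
    For example, these commands create and protect the file (the
    secret word shown is only a placeholder; choose your own):

      cd $HOME
      echo "secretword=mr45-j9z" > .mpd.conf
      chmod 600 .mpd.conf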

9.  The first sanity check consists of bringing up a ring of one mpd on
    the local machine, testing one mpd command, and bringing the "ring"
    down. 

      mpd &
      mpdtrace
      mpdallexit

    The output of mpdtrace should be the hostname of the machine you are
    running on.  The mpdallexit command causes the mpd daemon to exit.
    If you have problems getting the mpd ring established, see the
    Installation Guide for instructions on how to diagnose problems
    with your system configuration that may be preventing it.

10. Now we will bring up a ring of mpd's on a set of machines.  Create a
    file consisting of a list of machine names, one per line.  Name this
    file mpd.hosts.  These hostnames will be used as targets for ssh or
    rsh, so include full domain names if necessary.  Check that you can
    reach these machines with ssh or rsh without entering a password.
    You can test by doing

      ssh othermachine date

    or

      rsh othermachine date

    If you cannot get this to work without entering a password, you will
    need to configure ssh or rsh so that this can be done, or else use
    the workaround for mpdboot in the next step.
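
    For example, a minimal mpd.hosts listing three machines (the
    hostnames are placeholders) would contain:

      node1.example.com
      node2.example.com
      node3.example.com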

11. Start the daemons on (some of) the hosts in the file mpd.hosts

      mpdboot -n <number to start>  

    The number to start can be anything up to 1 + the number of hosts
    in the file, but cannot be greater than that.  One mpd is always
    started on the machine where mpdboot is run, and is counted in the
    number to start, whether or not it occurs in the file.

    There is a workaround if you cannot get mpdboot to work because of
    difficulties with ssh or rsh setup.  You can start the daemons "by
    hand" as follows:

       mpd &                # starts the local daemon
       mpdtrace -l          # makes the local daemon print its host
                            # and port in the form <host>_<port>

    Then log into each of the other machines, put the install/bin
    directory in your path, and do:

       mpd -h <hostname> -p <port> &

    where the hostname and port belong to the original mpd that you
    started.  From each machine, after starting the mpd, you can do 

       mpdtrace

    to see which machines are in the ring so far.  More details on
    mpdboot and other options for starting the mpd's are in
    mpich2-1.2.1p1/src/pm/mpd/README.

 !! ***************************
    If you are still having problems getting the mpd ring established,
    you can use the mpdcheck utility as described in the Installation Guide 
    to diagnose problems with your system configuration.
 !! ***************************

12. Test the ring you have just created:

      mpdtrace

    The output should consist of the hosts where MPD daemons are now
    running.  You can see how long it takes a message to circle this
    ring with 

      mpdringtest

    That was quick.  You can see how long it takes a message to go
    around many times by giving mpdringtest an argument:

      mpdringtest 100
      mpdringtest 1000

13. Test that the ring can run a multiprocess job:

      mpiexec -n <number> hostname

    The number of processes need not match the number of hosts in the
    ring;  if there are more, they will wrap around.  You can see the
    effect of this by getting rank labels on the stdout:

      mpiexec -l -n 30 hostname

    You probably didn't have to give the full pathname of the hostname
    command because it is in your path.  If not, use the full pathname:

      mpiexec -l -n 30 /bin/hostname

14. Now we will run an MPI job, using the mpiexec command as specified
    in the MPI-2 standard.  There are some examples in the install
    directory, which you have already put in your path, as well as in
    the directory mpich2-1.2.1p1/examples.  One of them is the classic
    cpi example, which computes the value of pi by numerical
    integration in parallel.

      mpiexec -n 5 cpi

    The number of processes need not match the number of hosts.
    The cpi example will tell you which hosts it is running on.
    By default, the processes are launched one after the other on the hosts
    in the mpd ring, so it is not necessary to specify hosts when running a
    job with mpiexec.

    There are many options for mpiexec, by which multiple executables
    can be run, hosts can be specified (as long as they are in the mpd
    ring), separate command-line arguments and environment variables can
    be passed to different processes, and working directories and search
    paths for executables can be specified.  Do

      mpiexec --help

    for details. A typical example is:

      mpiexec -n 1 master : -n 19 slave

    or

      mpiexec -n 1 -host mymachine master : -n 19 slave

    to ensure that the process with rank 0 runs on your workstation.

    The arguments between ':'s in this syntax are called "argument
    sets", since they apply to a set of processes.  Some arguments,
    called "global", apply across all argument sets and must appear
    first.  For example, to get rank labels on standard output, use

      mpiexec -l -n 3 cpi

    See the User's Guide for much more detail on arguments to mpiexec.

    The mpirun command from the original MPICH is still available,
    although it does not support as many options as mpiexec.

If you have completed all of the above steps, you have successfully
installed MPICH2 and run an MPI example.  

More details on arguments to mpiexec are given in the User's Guide in
the doc subdirectory.  Also in the User's Guide you will find help on
debugging.  MPICH2 has some support for the TotalView debugger, as
well as some other approaches described there.
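
As a further check that the compiler wrappers work, you can compile
and run one of the bundled examples yourself (a minimal sketch; it
assumes the mpd ring from the steps above is still running, and uses
the name mycpi to avoid clobbering the prebuilt cpi):

  cd mpich2-1.2.1p1/examples
  mpicc -o mycpi cpi.c
  mpiexec -n 4 ./mycpi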

-------------------------------------------------------------------------

II. Alternate Configure Options
===============================

The above steps utilized the MPICH2 defaults, which included choosing
TCP and shared memory for communication (via the "nemesis" channel)
and the MPD process manager.  Other alternatives are available.  You
can find out about configuration alternatives with

   ./configure --help

in the mpich2 directory.  The alternatives described below are
configured by adding arguments to the configure step.

-------------------------------------------------------------------------

III. Compiler Flags
===================

MPICH2 allows several sets of compiler flags to be used. The first
three sets are configure-time options for MPICH2, while the fourth is
only relevant when compiling applications with mpicc and friends.

1. CFLAGS, CXXFLAGS, FFLAGS, F90FLAGS and LDFLAGS (abbreviated as
xFLAGS): Setting these flags would result in the MPICH2 library being
compiled/linked with these flags and the flags internally being used
in mpicc and friends.

2. MPICH2LIB_CFLAGS, MPICH2LIB_CXXFLAGS, MPICH2LIB_FFLAGS,
MPICH2LIB_F90FLAGS and MPICH2LIB_LDFLAGS (abbreviated as
MPICH2LIB_xFLAGS): Setting these flags would result in the MPICH2
library being compiled/linked with these flags. However, these flags
will *not* be used by mpicc and friends.

3. MPICH2_MAKE_CFLAGS: Setting these flags results in MPICH2's
configure tests not using them, while the makefiles do use them. This
is a temporary hack for certain flags that advanced developers might
be interested in but that break existing configure tests (e.g.,
-Werror); it is not recommended for regular users.

4. MPICH2_MPICC_FLAGS, MPICH2_MPICXX_FLAGS, MPICH2_MPIF77_FLAGS,
MPICH2_MPIF90_FLAGS and MPICH2_LDFLAGS (abbreviated as
MPICH2_MPIX_FLAGS): These flags do *not* affect the compilation of the
MPICH2 library itself, but will be internally used by mpicc and
friends.


  +--------------------+----------------------+------------------------+
  |                    |                      |                        |
  |                    |    MPICH2 library    |    mpicc and friends   |
  |                    |                      |                        |
  +--------------------+----------------------+------------------------+
  |                    |                      |                        |
  |     xFLAGS         |         Yes          |           Yes          |
  |                    |                      |                        |
  +--------------------+----------------------+------------------------+
  |                    |                      |                        |
  |  MPICH2LIB_xFLAGS  |         Yes          |           No           |
  |                    |                      |                        |
  +--------------------+----------------------+------------------------+
  |                    |                      |                        |
  | MPICH2_MAKE_xFLAGS |         Yes          |           No           |
  |                    |                      |                        |
  +--------------------+----------------------+------------------------+
  |                    |                      |                        |
  | MPICH2_MPIX_FLAGS  |         No           |           Yes          |
  |                    |                      |                        |
  +--------------------+----------------------+------------------------+


All these flags can be set as part of configure command or through
environment variables. (CPPFLAGS stands for C preprocessor flags,
which should NOT be set)
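
For example, the following two commands are equivalent ways of
passing -O3 to the library build only (a sketch in Bourne-shell
syntax; the prefix is a placeholder):

  ./configure --prefix=/home/you/mpich2-install MPICH2LIB_CFLAGS=-O3

  MPICH2LIB_CFLAGS=-O3 ./configure --prefix=/home/you/mpich2-install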


Default flags
--------------
By default, MPICH2 automatically adds certain compiler optimizations
to MPICH2LIB_CFLAGS. The currently used optimization level is -O2.

** IMPORTANT NOTE: Remember that this only affects the compilation of
the MPICH2 library and is not used in the wrappers (mpicc and friends)
that are used to compile your applications or other libraries.

This optimization level can be changed with the --enable-fast option
passed to configure. For example, to build an MPICH2 environment with
-O3 for all language bindings, one can simply do:

  ./configure --enable-fast=O3

Or to disable all compiler optimizations, one can do:

  ./configure --disable-fast

For more details of --enable-fast, see the output of "configure
--help".


Examples
--------

Example 1:

  ./configure --disable-fast MPICH2LIB_CFLAGS=-O3 MPICH2LIB_FFLAGS=-O3 MPICH2LIB_CXXFLAGS=-O3 MPICH2LIB_F90FLAGS=-O3

This will cause the MPICH2 libraries to be built with -O3, and -O3
will *not* be included in mpicc and the other MPI wrapper scripts.

Example 2:

  ./configure --disable-fast CFLAGS=-O3 FFLAGS=-O3 CXXFLAGS=-O3 F90FLAGS=-O3

This will cause the MPICH2 libraries to be built with -O3, and -O3
will be included in mpicc and the other MPI wrapper scripts.

Example 3:

There are certain compiler flags that should not be used with MPICH2's
configure, e.g. gcc's -Werror, which would confuse configure and cause
certain configure tests to fail to detect the correct system features.
To use -Werror in building MPICH2 libraries, you can pass the compiler
flags during the make step through the Makefile variable
MPICH2_MAKE_CFLAGS as follows:

  make MPICH2_MAKE_CFLAGS="-Wall -Werror"

The content of MPICH2_MAKE_CFLAGS is appended to the CFLAGS in all
relevant Makefiles.

-------------------------------------------------------------------------

IV. Alternate Channels and Devices
==================================

The communication mechanisms in MPICH2 are called "devices". MPICH2
supports several internal devices including ch3 (default), dcmfd (for
Blue Gene/P) and globus (for Globus), as well as many third-party
devices that are released and maintained by other institutes such as
osu_ch3 (from Ohio State University for InfiniBand and iWARP), ch_mx
(from Myricom for Myrinet MX), etc.

                   *************************************

ch3 device
**********
The ch3 device contains different internal communication options
called "channels". We currently support nemesis (default), sock, ssm,
and shm channels, and experimentally provide a dllchan channel within
the ch3 device.

nemesis channel
---------------
Nemesis provides communication using different networks (tcp, mx) as
well as various shared-memory optimizations. To configure MPICH2 with
nemesis, you can use the following configure option:

  --with-device=ch3:nemesis

The TCP network module gets configured in by default. To specify a
different network module such as MX, you can use:

  --with-device=ch3:nemesis:mx

If the MX include files and libraries are not in the normal search
paths, you can specify them with the following options:

  --with-mx-include= and --with-mx-lib=

... or, if lib/ and include/ are in the same directory, you can use
the following option:

  --with-mx=
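
For example, assuming MX is installed under /opt/mx (a placeholder
path) with lib/ and include/ subdirectories:

  ./configure --with-device=ch3:nemesis:mx --with-mx=/opt/mx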

If the MX libraries are shared libraries, they need to be in the
shared library search path. This can be done by adding the path to
/etc/ld.so.conf, or by setting the LD_LIBRARY_PATH variable in your
.bashrc (or .tcshrc) file.  It's also possible to set the shared
library search path in the binary. If you're using gcc, you can do
this by adding

  LD_LIBRARY_PATH=/path/to/lib

  (and)

  LDFLAGS="-Wl,-rpath -Wl,/path/to/lib"

... as arguments to configure.
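
Putting this together for an MX build (again using the placeholder
path /opt/mx), a sketch of the full configure line is:

  ./configure --with-device=ch3:nemesis:mx --with-mx=/opt/mx \
              LD_LIBRARY_PATH=/opt/mx/lib \
              LDFLAGS="-Wl,-rpath -Wl,/opt/mx/lib"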

By default, MX allows only eight endpoints per node, causing
ch3:nemesis:mx to give initialization errors with more than 8
processes on the same node (this is an MX limit and not an inherent
limitation in the MPICH2/Nemesis design). If needed, this limit can
be raised when the MX module is loaded; we recommend contacting
help@myri.com for details on how to do this.

Shared-memory optimizations are enabled by default to improve
performance for multi-processor/multi-core platforms. They can be
disabled (at the cost of performance) either by setting the
environment variable MPICH_NO_LOCAL to 1, or using the following
configure option:

  --enable-nemesis-dbg-nolocal

The --with-shared-memory= configure option allows you to choose how
Nemesis allocates shared memory.  The options are "auto", "sysv", and
"mmap".  Using "sysv" will allocate shared memory using the System V
shmget(), shmat(), etc. functions.  Using "mmap" will allocate shared
memory by creating a file (in /dev/shm if it exists, otherwise /tmp),
then mmap() the file.  The default is "auto". Note that System V
shared memory has limits on the size of shared memory segments so
using this for Nemesis may limit the number of processes that can be
started on a single node.
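
For example, to force mmap-based allocation of shared memory:

  ./configure --with-device=ch3:nemesis --with-shared-memory=mmap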


sock channel
------------
sock is the traditional TCP sockets based communication channel. It
uses TCP/IP sockets for all communication including intra-node
communication. So, though the performance of this channel is worse
than that of nemesis, it should work on almost every platform. This
channel can be configured using the following option:

  --with-device=ch3:sock

ssm and shm channels
--------------------
shm (shared memory) channel is for use on platforms that only use
shared-memory communication. ssm (sockets and shared memory) is for
use on clusters of shared-memory machines. They can be configured
using:

  --with-device=ch3:ssm

  (or)

  --with-device=ch3:shm

These two channels are deprecated and will be removed starting with
the 1.2 release series of MPICH2. The nemesis channel provides all
the functionality supported by these channels and more.

dllchan
-------

dllchan is a new *experimental* channel for supporting dynamic loading
of other channels. To use this channel, configure with:

  --with-device=ch3:dllchan:sock,shm,ssm

This provides the sock, shm, and ssm channels as options, with sock
being the default. In addition, you must specify the shared library
type; under Linux and when using gcc (or compilers that mimic gcc for
shared-library construction) add:

  --enable-sharedlibs=gcc

On Mac OSX, use:

  --enable-sharedlibs=osx-gcc

On Solaris, use:

  --enable-sharedlibs=solaris-cc

To select a channel other than the default channel, set the
environment variable MPICH_CH3CHANNEL to the channel name (i.e., sock,
shm, or ssm).
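
For example, to select the ssm channel at run time with a dllchan
build (a sketch in bash syntax; csh users would use setenv, as in
step 7 of Getting Started):

  export MPICH_CH3CHANNEL=ssm
  mpiexec -n 4 ./cpi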

There are known problems with this channel, particularly during the
make step. You may find that some symbols are not found when loading
the libraries. If you want to try this experimental channel, please
let us know what does and does not work.

sctp channel
------------
The SCTP channel is a new channel using the Stream Control
Transmission Protocol (SCTP). This channel supports regular MPI-1
operations as well as dynamic processes and RMA from MPI-2; it
currently does not offer support for multiple threads.

Configure the sctp channel by using the following option:

  --with-device=ch3:sctp

If the SCTP include files and libraries are not in the normal search
paths, you can specify them with the --with-sctp-include= and
--with-sctp-lib= options, or the --with-sctp= option if lib/ and
include/ are in the same directory.

SCTP stack specific instructions:

  For FreeBSD 7 and onward, SCTP comes with CURRENT and is enabled with
  the "option SCTP" in the kernel configuration file.  The sctp_xxx()
  calls are contained within libc so to compile ch3:sctp, make a soft-link
  named libsctp.a to the target libc.a, then pass the path of the 
  libsctp.a soft-link to --with-sctp-lib.
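
  A sketch of those FreeBSD 7 steps (the directory holding the
  soft-link is a placeholder):

    mkdir -p /home/you/sctp-lib
    ln -s /usr/lib/libc.a /home/you/sctp-lib/libsctp.a
    ./configure --with-device=ch3:sctp --with-sctp-lib=/home/you/sctp-lib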
  
  For FreeBSD 6.x, kernel patches and instructions can be downloaded at
  http://www.sctp.org/download.html .  These kernels place libsctp and
  headers in /usr, so nothing needs to be specified for --with-sctp
  since /usr is often in the default search path.

  For Mac OS X, the SCTP Network Kernel Extension (NKE) can be
  downloaded at http://sctp.fh-muenster.de/sctp-nke.html .  This places
  the lib and include in /usr, so nothing needs to be specified for
  --with-sctp since /usr is often in the default search path.

  For Linux, SCTP comes with the default kernel from 2.4.23 and later as
  a module.  This module can be loaded as root using "modprobe sctp".
  After this is loaded,  you can verify it is loaded using "lsmod".
  Once loaded, the SCTP socket lib and include files must be downloaded
  and installed from http://lksctp.sourceforge.net/ .  The prefix 
  location must then be passed into --with-sctp.  This bundle is called 
  lksctp-tools and is available for download off their website.
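
  A sketch of those Linux steps (the lksctp-tools install prefix
  /usr/local is a placeholder):

    modprobe sctp        # as root
    lsmod | grep sctp    # verify that the module is loaded
    ./configure --with-device=ch3:sctp --with-sctp=/usr/local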

  For Solaris, SCTP comes with the default Solaris 10 kernel; the lib
  and include in /usr, so nothing needs to be specified for --with-sctp
  since /usr is often in the default search path.  In order to compile
  under Solaris, MPICH2LIB_CFLAGS must have
  -DMPICH_SCTP_CONCATENATES_IOVS set when running MPICH2's configure
  script.

                   *************************************

IBM Blue Gene/P device
**********************
MPICH2 also supports the IBM Blue Gene/P systems. Since BG/P's
front-end uses a different architecture than the actual compute nodes,
MPICH2 has to be cross-compiled for this platform. The configuration
of MPICH2 on BG/P relies on the availability of the DCMF driver stack
and cross compiler binaries on the system. These are packaged by IBM
in their driver releases (default installation path is
/bgsys/drivers/ppcfloor) and are not released with MPICH2.

Assuming DRIVER_PATH points to the driver installation path (e.g.,
/bgsys/drivers/ppcfloor), the following is an example configure
command-line for MPICH2:

  GCC=${DRIVER_PATH}/gnu-linux/bin/powerpc-bgp-linux-gcc \
  CC=${DRIVER_PATH}/gnu-linux/bin/powerpc-bgp-linux-gcc \
  CXX=${DRIVER_PATH}/gnu-linux/bin/powerpc-bgp-linux-g++ \
  F77=${DRIVER_PATH}/gnu-linux/bin/powerpc-bgp-linux-gfortran \
  F90=${DRIVER_PATH}/gnu-linux/bin/powerpc-bgp-linux-gfortran \
  CFLAGS="-mcpu=450fp2" \
  CXXFLAGS="-mcpu=450fp2" \
  FFLAGS="-mcpu=450fp2" \
  F90FLAGS="-mcpu=450fp2" \
  AR=${DRIVER_PATH}/gnu-linux/bin/powerpc-bgp-linux-ar \
  LD=${DRIVER_PATH}/gnu-linux/bin/powerpc-bgp-linux-ld \
  MSGLAYER_INCLUDE="-I${DRIVER_PATH}/comm/include" \
  MSGLAYER_LIB="-L${DRIVER_PATH}/comm/lib -ldcmfcoll.cnk -ldcmf.cnk -lpthread -lrt -L$DRIVER_PATH/runtime/SPI -lSPI.cna" \
  ./configure --with-device=dcmfd:BGP --with-pmi=no --with-pm=no --with-file-system=bgl \
  	      --enable-timer-type=device --with-cross=src/mpid/dcmfd/cross \
	      --host=powerpc-bgp-linux --target=powerpc-bgp-linux --build=powerpc64-linux-gnu

-------------------------------------------------------------------------

V. Alternate Process Managers
=============================

mpd
---
MPD is the default process manager.  Its setup and use have been
described above.  The file mpich2-1.2.1p1/src/pm/mpd/README has more 
information about interactive commands for managing the ring of MPDs.

hydra
-----
Hydra is a new process management framework that uses existing daemons
on nodes (e.g., ssh, pbs, slurm, sge) to start MPI processes. The file
mpich2-1.2.1p1/src/pm/hydra/README has more information about Hydra.

smpd
---- 
SMPD is a process management system for both Microsoft Windows and UNIX.
SMPD is capable of starting a job where some processes are running on
Windows and others are running on a variant of UNIX.  For more
information, please see mpich2-1.2.1p1/src/pm/smpd/README.

gforker
-------
gforker is a process manager that creates processes on a single machine,
by having mpiexec directly fork and exec them.  This mechanism is
particularly appropriate for shared-memory multiprocessors (SMPs) where
you want to create all the processes on the same machine.  gforker is
also useful for debugging, where running all the processes on a single
machine is often convenient.
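
gforker is selected at configure time with the --with-pm option (a
sketch, following the --with-pm=<name> convention used elsewhere in
this README):

  ./configure --with-pm=gforker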

slurm
-----
SLURM is an external process manager not distributed with
MPICH2. However, we provide configure options that allow integration
with SLURM. To enable this support, pass the "--with-pmi=slurm
--with-pm=no" options to configure.

-------------------------------------------------------------------------

VI. VPATH Builds
================
MPICH2 supports building in a different directory tree than the one
where the MPICH2 sources reside. This often allows faster builds, as
the sources can be placed in a shared filesystem and the builds done
in a local (and hence usually much faster) filesystem.  To
make this clear, the following example assumes that the sources are
placed in /home/me/mpich2-<VERSION>, the build is done in
/tmp/me/mpich2, and the installed version goes into
/usr/local/mpich2-<VERSION>:

  shell$ cd /home/me
  shell$ tar xzf mpich2-<VERSION>.tar.gz
  shell$ cd /tmp/me

  shell$ mkdir mpich2
  shell$ cd mpich2
  shell$ /home/me/mpich2-<VERSION>/configure --prefix=/usr/local/mpich2-<VERSION>
  shell$ make
  shell$ make install

-------------------------------------------------------------------------

VII. Shared Libraries
=====================
Shared libraries are currently only supported for gcc on Linux and Mac
and for cc on Solaris. To have shared libraries created when MPICH2 is
built, specify the following when MPICH2 is configured:

    configure --enable-sharedlibs=gcc         (on Linux)
    configure --enable-sharedlibs=osx-gcc     (on Mac OS X)
    configure --enable-sharedlibs=solaris-cc  (on Solaris)

-------------------------------------------------------------------------

VIII. Other Features
====================

MPICH2 has a number of other features. If you are exploring MPICH2 as
part of a development project the following configure options are
important:

Performance Options:

 --enable-fast - Turns off error checking and collection of internal
                 timing information

 --enable-timing=no - Turns off just the collection of internal timing
                 information

 --enable-ndebug - Turns on NDEBUG, which disables asserts. This is a
		subset of the optimizations provided by --enable-fast,
		but is useful in environments where the user wishes
		to retain the debug symbols; e.g., it can be combined
		with the --enable-g option, as sketched after this list.
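
For example, to disable asserts while retaining debug symbols (a
sketch combining the two options named above):

  ./configure --enable-ndebug --enable-g=dbg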

MPI Features:

  --enable-romio - Build the ROMIO implementation of MPI-IO.  This is
                 the default

  --with-file-system - When used with --enable-romio, specifies
                 filesystems ROMIO should support.  See README.romio.

  --enable-threads - Build MPICH2 with support for multi-threaded
                 applications. Only the sock and nemesis channels support
                 MPI_THREAD_MULTIPLE. 

  --with-thread-package - When used with --enable-threads, this option
                 specifies the thread package to use.  This option
                 defaults to "posix".  At the moment, only POSIX
                 threads are supported on UNIX platforms.  We plan to
                 support Solaris threads in the future.

Language bindings:

  --enable-f77 - Build the Fortran 77 bindings.  This is the default.
                 It has been tested with the Fortran parts of the Intel
                 test suite.

  --enable-f90 - Build the Fortran 90 bindings.  This is not on by
                 default, since these have not yet been tested.

  --enable-cxx - Build the C++ bindings.  This has been tested with the
                 Notre Dame C++ test suite and some additional tests.

Cross compilation:

  --with-cross=filename - Provide values for the tests that require
                 running a program, such as the tests that configure
                 uses to determine the sizes of the basic types.  This
                 should be a file in Bourne shell format containing
                 variable assignments of the form

                     CROSS_SIZEOF_INT=2

                 for all of the CROSS_xxx variables.  A list will be
                 provided in later releases; for now, look at the
                 configure.in files.  This has not been completely
                 tested.

Error checking and reporting:

  --enable-error-checking=level - Control the amount of error checking.
                 Currently, only "no" and "all" are supported; "all" is
                 the default.

  --enable-error-messages=level - Control the amount of detail in error
                 messages.  By default, MPICH2 provides
                 instance-specific error messages; but, with this
                 option, MPICH2 can be configured to provide less
                 detailed messages.  This may be desirable on small
                 systems, such as clusters built from game consoles or
                 high-density massively parallel systems.  This is still
                 under active development.

Compilation options for development:

  --enable-g=value - Controls the amount of debugging information
                 collected by the code.  The most useful choice here is
                 dbg, which compiles with -g.

  --enable-coverage - An experimental option that enables GNU coverage
                 analysis.

  --with-logging=name - Select a logging library for recording the
                 timings of the internal routines.  We have used this to
                 understand the performance of the internals of MPICH2.
                 More information on the logging options, capabilities
                 and usage can be found in doc/logging/logging.pdf.

  --enable-timer-type=name -  Select the timer to use for MPI_Wtime
                 and internal timestamps.  name may be one of:
                     gethrtime        - Solaris timer (Solaris systems
                                        only) 
                     clock_gettime    - Posix timer (where available)
                     gettimeofday     - Most Unix systems
                     linux86_cycle    - Linux x86; returns cycle
                                        counts, not time in seconds*
                     linuxalpha_cycle - Like linux86_cycle, but for
                                        Linux Alpha* 
                     gcc_ia64_cycle   - IPF ar.itc timer*
                     device           - The timer is provided by the device
                 *Note that the cycle timers are intended to be used by
                  MPICH2 developers for internal low-level timing.
                  Normal users should not use these as they are not
                  guaranteed to be accurate in certain situations.

-------------------------------------------------------------------------

IX. Environment Variables
=========================

MPICH2 provides several environment variables that have different
purposes.

Generic Environment Variables
-----------------------------

  MPICH_NO_LOCAL - Disable shared-memory communication. With this
         option, even communication within a node will use the network
         stack.
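
         For example, using mpiexec's -env option (./foo is a
         placeholder executable, as in the example further below):

           mpiexec -n 2 -env MPICH_NO_LOCAL 1 ./foo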

               ************************************

  MPICH_INTERFACE_HOSTNAME - Network interface to use for
         communication. By default MPICH2 picks the network interface
         representing the hostname (gotten by gethostbyname). Consider
         the following example:

% /sbin/ifconfig

eth0      Link encap:Ethernet  HWaddr 00:14:5E:57:C4:FA
          inet addr:192.148.3.182  Bcast:192.148.248.255  Mask:255.255.255.0
          inet6 addr: fe80::214:5eff:fe57:c4fa/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:989925894 errors:0 dropped:7186 overruns:0 frame:0
          TX packets:1480277023 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:441568994866 (411.2 GiB)  TX bytes:1864173370054 (1.6 TiB)
          Interrupt:185 Memory:e2000000-e2012100

myri0     Link encap:Ethernet  HWaddr 00:14:5E:57:C4:F8
          inet addr:10.21.3.182  Bcast:10.21.255.255  Mask:255.255.0.0
          inet6 addr: fe80::214:5eff:fe57:c4f8/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:3068986439 errors:0 dropped:7841 overruns:0 frame:0
          TX packets:2288060450 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3598751494497 (3.2 TiB)  TX bytes:1744058613150 (1.5 TiB)
          Interrupt:185 Memory:e4000000-e4012100

In the above case, the 192.148.x.x IP series refers to the standard
Ethernet (or Gigabit Ethernet) network, and the 10.21.x.x series
refers to Myrinet.

To run over the Myrinet network use:

% mpiexec -np 1 -env MPICH_INTERFACE_HOSTNAME 10.21.3.182 ./foo

               ************************************

  MPICH_INTERFACE_HOSTNAME_R%d - Network interface to use for rank %d.

               ************************************

  MPICH_PORT_RANGE - Port range to use for MPICH2 internal TCP
         connections. This is useful when some of the host ports are
         blocked by a firewall. For example, setting MPICH_PORT_RANGE
         to "2000:3000" ensures that MPICH2 internally uses only ports
         between 2000 and 3000.
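
         For example (./foo is again a placeholder executable):

           mpiexec -n 4 -env MPICH_PORT_RANGE 2000:3000 ./foo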

-------------------------------------------------------------------------

X. Developer Builds
===================
For MPICH2 developers who want to directly work on the svn, there are
a few additional steps involved (people using the release tarballs do
not have to follow these steps). Details about these steps can be
found here:
http://wiki.mcs.anl.gov/mpich2/index.php/Getting_And_Building_MPICH2

-------------------------------------------------------------------------

XI. Building ROMIO into MPICH2
==============================
By default, ROMIO, an implementation of the I/O portion of MPI-2, is
built as part of MPICH2. The file systems to be supported can be
specified by passing them in a '+'-delimited list to the
--with-file-system configure option. For example:

  --with-file-system="pvfs+nfs+ufs"

If you have installed version 2 of the PVFS file system, you can use
the '--with-pvfs2=<prefix>' configure option to specify where
libraries, headers, and utilities have been installed. If you have
added the pvfs utilities to your PATH, then ROMIO will detect this and
build support for PVFS automatically.
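
Putting these together, a configure line for a PVFS v2-enabled build
might look like the following sketch (the prefix /usr/local/pvfs2 is
a placeholder, and "pvfs2" is assumed to be ROMIO's name for the
PVFS v2 driver):

  ./configure --with-file-system="pvfs2+nfs+ufs" \
              --with-pvfs2=/usr/local/pvfs2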

-------------------------------------------------------------------------

XII. Testing the MPICH2 installation
====================================
To test MPICH2, use the following steps after installing it.  These
assume that MPICH2 was installed into /usr/local/mpich2.

1. MPICH2 test suite:

   shell$ make testing

The results summary will be placed in test/summary.xml 

-------------------------------------------------------------------------

XIII. Installing MPICH2 on Windows
==================================

Here are the instructions for setting up MPICH2 on a Windows machine:

0) Install:
    Microsoft Developer Studio 2003 or later
    Intel Fortran 8.0 or later
    cygwin
	choose the dos file format option
	install perl and svn

1) Checkout mpich2:

    Bring up a command prompt.
    (replace "yourname" with your MCS login name):
    svn co https://svn.mcs.anl.gov/repos/mpi/mpich2/trunk mpich2

2) Generate *.h.in

    Bring up a cygwin bash shell.
    cd mpich2
    maint/updatefiles
    exit

3) Execute winconfigure.wsf

4) Open Developer Studio

    open mpich2\mpich2.sln
    build the ch3sockDebug mpich2 solution
    build the ch3sockDebug mpich2s project
    build the ch3sockRelease mpich2 solution
    build the ch3sockRelease mpich2s project
    build the Debug mpich2 solution
    build the Release mpich2 solution
    build the fortDebug mpich2 solution
    build the fortRelease mpich2 solution
    build the gfortDebug mpich2 solution
    build the gfortRelease mpich2 solution
    build the sfortDebug mpich2 solution
    build the sfortRelease mpich2 solution

5) Open a command prompt

    cd to mpich2\maint
    execute "makegcclibs.bat"

6) Open another Developer Studio instance

    open mpich2\examples\examples.sln
    build the Release target of the cpi project

7) Return to Developer Studio with the mpich2 solution

    set the version numbers in the Installer project
    build the Installer mpich2 solution

8) Test and distribute mpich2\maint\ReleaseMSI\mpich2.msi

    mpich2.msi can be renamed, e.g., mpich2-1.1.msi

9) To install the launcher:

    Copy smpd.exe to a local directory on all the nodes.
    Log on to each node as an administrator and execute "smpd.exe -install"

10) Compile and run an MPI application:

    Compile an MPI application.  Use mpi.h from mpich2\src\include\win32 and mpi.lib in mpich2\lib.
    Place your executable along with the mpich2 dlls somewhere accessible to all the machines.
    Execute a job by running something like: mpiexec -n 3 myapp.exe