This directory contains a few example programs. 

Each program takes the name of the file it should access as a
command-line argument of the form "-fname filename".

If you are using "mpirun" to run an MPI program, you can run the 
program "simple" with two processes as follows:
   mpirun -np 2 simple -fname test


simple.c: Each process creates its own file, writes to it, reads it
          back, and checks the data read.
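
   The general pattern simple.c follows looks roughly like the sketch
   below. This is only an illustration, not the actual source; the
   buffer size and filename handling here are made up.

      #include <mpi.h>
      #include <stdio.h>

      #define BUFSIZE 1024       /* illustrative size, not the one in simple.c */

      int main(int argc, char **argv)
      {
          int rank, i, errs = 0;
          int buf[BUFSIZE], rbuf[BUFSIZE];
          char filename[256];
          MPI_File fh;

          MPI_Init(&argc, &argv);
          MPI_Comm_rank(MPI_COMM_WORLD, &rank);

          /* each process works on its own file, e.g. "test.<rank>" */
          snprintf(filename, sizeof(filename), "test.%d", rank);
          for (i = 0; i < BUFSIZE; i++) buf[i] = rank * BUFSIZE + i;

          MPI_File_open(MPI_COMM_SELF, filename,
                        MPI_MODE_CREATE | MPI_MODE_RDWR, MPI_INFO_NULL, &fh);
          MPI_File_write(fh, buf, BUFSIZE, MPI_INT, MPI_STATUS_IGNORE);

          /* read it back from the beginning and check */
          MPI_File_seek(fh, 0, MPI_SEEK_SET);
          MPI_File_read(fh, rbuf, BUFSIZE, MPI_INT, MPI_STATUS_IGNORE);
          for (i = 0; i < BUFSIZE; i++)
              if (rbuf[i] != buf[i]) errs++;

          MPI_File_close(&fh);
          if (errs) printf("rank %d: %d errors\n", rank, errs);
          MPI_Finalize();
          return 0;
      }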

psimple.c: Same as simple.c but uses the PMPI versions of all MPI routines

error.c: Tests if error messages are printed correctly

status.c: Tests if the status object is filled correctly by I/O functions
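
   For reference, the kind of check status.c performs can be sketched as
   the fragment below (fh and buf are assumed to be set up already; the
   count of 100 is arbitrary):

      MPI_Status status;
      int count;

      /* read 100 ints and ask the status how many were actually read */
      MPI_File_read(fh, buf, 100, MPI_INT, &status);
      MPI_Get_count(&status, MPI_INT, &count);
      if (count != 100)
          printf("expected 100 ints, status reports %d\n", count);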

perf.c: A simple read and write performance test. Each process writes
        4 Mbytes to a file at a location determined by its rank and
        reads it back. For a different access size, change the value
        of SIZE in the code. The bandwidth is reported for two cases:
        (1) excluding the time for MPI_File_sync and (2) including it.
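
   The core of such a test can be sketched as follows (a fragment only;
   fh, rank, buf, and SIZE are assumed to be set up as in perf.c):

      MPI_Offset offset = (MPI_Offset) rank * SIZE;
      double t1, t2;

      MPI_Barrier(MPI_COMM_WORLD);
      t1 = MPI_Wtime();
      MPI_File_write_at(fh, offset, buf, SIZE, MPI_BYTE, MPI_STATUS_IGNORE);
      t2 = MPI_Wtime();          /* case (1): stop timing before the sync */
      MPI_File_sync(fh);         /* case (2): stop timing after this call */
      printf("write bandwidth: %f MB/s\n", SIZE / (t2 - t1) / 1048576.0);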
     
async.c: This program is the same as simple.c, except that it uses 
        asynchronous I/O.
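
   With the MPI_Request-based form of the nonblocking I/O calls, the
   difference from simple.c amounts to something like this fragment
   (fh, buf, and BUFSIZE assumed as before):

      MPI_Request req;
      MPI_Status status;

      MPI_File_iwrite(fh, buf, BUFSIZE, MPI_INT, &req);
      /* ... other work could overlap with the I/O here ... */
      MPI_Wait(&req, &status);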

coll_test.c: This program tests the use of collective I/O. It writes
        a 3D block-distributed array to a file corresponding to the
        global array in row-major (C) order, reads it back, and checks
        that the data read is correct. The global array size has been
        set to 32^3. If you are running it on NFS, which is very slow,
        you may want to reduce that size to 16^3.
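
   The collective write in such a test is built around the distributed
   array datatype constructor, roughly as in the fragment below (fh,
   nprocs, rank, local_array, and local_count are assumed to be set up;
   the 32^3 size matches the description above):

      int gsizes[3]   = {32, 32, 32};
      int distribs[3] = {MPI_DISTRIBUTE_BLOCK, MPI_DISTRIBUTE_BLOCK,
                         MPI_DISTRIBUTE_BLOCK};
      int dargs[3]    = {MPI_DISTRIBUTE_DFLT_DARG, MPI_DISTRIBUTE_DFLT_DARG,
                         MPI_DISTRIBUTE_DFLT_DARG};
      int psizes[3]   = {0, 0, 0};
      MPI_Datatype filetype;

      MPI_Dims_create(nprocs, 3, psizes);
      MPI_Type_create_darray(nprocs, rank, 3, gsizes, distribs, dargs,
                             psizes, MPI_ORDER_C, MPI_INT, &filetype);
      MPI_Type_commit(&filetype);

      MPI_File_set_view(fh, 0, MPI_INT, filetype, "native", MPI_INFO_NULL);
      MPI_File_write_all(fh, local_array, local_count, MPI_INT,
                         MPI_STATUS_IGNORE);
      MPI_Type_free(&filetype);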

coll_perf.c: Measures the I/O bandwidth for writing/reading a 3D
      block-distributed array to a file corresponding to the global array
      in row-major (C) order. The global array size has been
      set to 128^3. If you are running it on NFS, which is very slow,
      you may want to reduce that size to 16^3.

misc.c: Tests various miscellaneous MPI-IO functions

atomicity.c: Tests whether atomicity semantics are satisfied for 
      overlapping accesses in atomic mode. The probability of detecting 
      errors is higher if you run it on 8 or more processes.
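
   Atomic mode is simply switched on for the open file handle before the
   overlapping accesses are issued, for example:

      /* all subsequent accesses through fh follow atomic-mode semantics */
      MPI_File_set_atomicity(fh, 1);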

large_file.c: Tests access to large files. Writes a 4-Gbyte file and
      reads it back. Run it only on one process and on a file system
      on which ROMIO supports large files.
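
   One detail such a test depends on is that file offsets are computed
   as MPI_Offset (typically 64-bit) rather than int, e.g. (a fragment,
   with fh, buf, and COUNT assumed):

      /* cast before multiplying so the arithmetic is done in 64 bits */
      MPI_Offset offset = (MPI_Offset) 4 * 1024 * 1024 * 1024;  /* 4 Gbytes */
      MPI_File_write_at(fh, offset, buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);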

large_array.c: Tests writing and reading a 4-Gbyte distributed array using
      the distributed array datatype constructor. Works only on file
      systems that support 64-bit file sizes and MPI implementations
      that support 64-bit MPI_Aint. 

file_info.c: Tests the setting and retrieval of hints via 
      MPI_File_set_info and MPI_File_get_info
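
   The basic pattern is sketched below (a fragment; error checking and
   the usual MPI setup are omitted, "cb_buffer_size" is just one example
   of a ROMIO hint, and the value shown is arbitrary):

      MPI_Info info, info_used;
      int i, nkeys, flag;
      char key[MPI_MAX_INFO_KEY + 1], value[MPI_MAX_INFO_VAL + 1];
      MPI_File fh;

      MPI_Info_create(&info);
      MPI_Info_set(info, "cb_buffer_size", "8388608");

      MPI_File_open(MPI_COMM_WORLD, "test", MPI_MODE_CREATE | MPI_MODE_RDWR,
                    info, &fh);
      MPI_File_get_info(fh, &info_used);

      /* print every hint the implementation is actually using */
      MPI_Info_get_nkeys(info_used, &nkeys);
      for (i = 0; i < nkeys; i++) {
          MPI_Info_get_nthkey(info_used, i, key);
          MPI_Info_get(info_used, key, MPI_MAX_INFO_VAL, value, &flag);
          printf("hint %s = %s\n", key, value);
      }

      MPI_Info_free(&info_used);
      MPI_Info_free(&info);
      MPI_File_close(&fh);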

excl.c: Tests MPI_File_open with MPI_MODE_EXCL
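
   Since the default error handler on files is MPI_ERRORS_RETURN, the
   test can simply check the return code, along these lines (err and fh
   assumed declared):

      /* MPI_MODE_EXCL makes the open fail if the file already exists */
      err = MPI_File_open(MPI_COMM_WORLD, "test",
                          MPI_MODE_CREATE | MPI_MODE_EXCL | MPI_MODE_RDWR,
                          MPI_INFO_NULL, &fh);
      if (err != MPI_SUCCESS)
          printf("open failed: the file already exists (or another error)\n");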

noncontig.c: Tests noncontiguous accesses in memory and file using 
             independent I/O. Run it on two processes only.
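
   A strided file view of the kind this test exercises can be built with
   MPI_Type_vector, roughly as below (a fragment; fh, buf, rank, NBLOCKS,
   and BLOCKLEN are assumed, and the layout interleaves the two
   processes' blocks):

      MPI_Datatype filetype;

      /* NBLOCKS blocks of BLOCKLEN ints, skipping the other process's block */
      MPI_Type_vector(NBLOCKS, BLOCKLEN, 2 * BLOCKLEN, MPI_INT, &filetype);
      MPI_Type_commit(&filetype);

      /* the displacement staggers the two processes' views */
      MPI_File_set_view(fh, rank * BLOCKLEN * sizeof(int), MPI_INT,
                        filetype, "native", MPI_INFO_NULL);
      MPI_File_write(fh, buf, NBLOCKS * BLOCKLEN, MPI_INT, MPI_STATUS_IGNORE);
      MPI_Type_free(&filetype);

   The noncontiguous-in-memory case additionally passes a derived
   datatype as the datatype argument of the read/write call itself.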

noncontig_coll.c: Same as noncontig.c, but uses collective I/O

noncontig_coll2.c: Same as noncontig_coll.c, but exercises the 
             cb_config_list hint and aggregation handling more. 

i_noncontig.c: Same as noncontig.c, but uses nonblocking I/O

shared_fp.c: Tests the shared file pointer functions
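
   The shared-file-pointer calls take the same arguments as their
   independent counterparts; a minimal fragment (fh, buf, rbuf, and
   COUNT assumed):

      /* each process writes at the current shared file pointer */
      MPI_File_write_shared(fh, buf, COUNT, MPI_INT, MPI_STATUS_IGNORE);

      /* later, reads also advance the same shared pointer */
      MPI_File_read_shared(fh, rbuf, COUNT, MPI_INT, MPI_STATUS_IGNORE);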

split_coll.c: Tests the split collective I/O functions
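
   A split collective separates one collective call into a begin/end
   pair, e.g. (fh, buf, and COUNT assumed):

      MPI_Status status;

      MPI_File_write_all_begin(fh, buf, COUNT, MPI_INT);
      /* ... computation can overlap the collective write here ... */
      MPI_File_write_all_end(fh, buf, &status);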

fperf.f: Fortran version of perf.c

fcoll_test.f: Fortran version of coll_test.c

pfcoll_test.f: Same as fcoll_test.f but uses the PMPI versions of 
               all MPI routines

fmisc.f: Fortran version of misc.c