File: README (from package combblas 2.0.0-6)
--------------------------------------------
Graph500 BFS Implementation
---
Kamesh Madduri
Last updated: December 2011
--------------------------------------------

1. Create a file in the ARCH directory specifying compiler
   flags and related build settings.
   See, for example, Makefile.hopper-opt.
   These settings are used to build the benchmark code in the
   src directory.
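
   As a rough illustration, an ARCH file might look like the sketch
   below. The file name, compiler choice, flags, and variable names are
   assumptions based on a typical MPI + OpenMP build, not contents of
   the actual Makefile.hopper-opt; consult that file for the variables
   the benchmark's Makefiles really expect.

   ```makefile
   # ARCH/Makefile.mycluster-opt -- illustrative sketch only
   CC      = mpicc          # MPI compiler wrapper (assumption)
   CFLAGS  = -O3 -fopenmp   # optimization + OpenMP (assumption)
   LDFLAGS = -fopenmp
   ```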

2. Modify the Makefile in src/generator, and build the 
   RMAT graph generator (extracted from the reference implementation).

   Within src/generator, do

   $ make ARCH=hopper-opt
   (or the appropriate Makefile suffix). 

3. Set the MAX_NUMPROCS variable to the maximum number of MPI tasks
   that will be used for executing the benchmark.
   
4. Now build the graph creation and BFS code in the src directory.
    
   Within src, do

   $ make ARCH=hopper-opt
   (or the appropriate Makefile suffix).

5. Running the code:

   Set the number of OpenMP threads per process using the environment variable OMP_NUM_THREADS.

   $ ./graph500_bfs_hopper_opt <SCALE> <avg. degree> 1 <MPI process grid, row dim.> <MPI process grid, col dim.>

   examples:  
   
   i) SCALE 32 with 100 MPI tasks and a 100X1 process grid:
   $ mpiexec -n 100 ./graph500_bfs_hopper_opt 32 16 1 100 1
   
   ii) SCALE 32 with 100 MPI tasks and a 10X10 process grid:
   $ mpiexec -n 100 ./graph500_bfs_hopper_opt 32 16 1 10 10

   iii) SCALE 32 with 100 MPI tasks and a 1X100 process grid:
   $ mpiexec -n 100 ./graph500_bfs_hopper_opt 32 16 1 1 100

   I recommend one MPI task per socket.
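
   In all of the examples above, the process-grid dimensions multiply to
   the total MPI task count (100X1, 10X10, and 1X100 all give 100
   tasks). The small POSIX-shell helper below (hypothetical, not part of
   the benchmark) checks that constraint before launching; the launch
   line in the comment mirrors example ii above.

   ```shell
   #!/bin/sh
   # Hypothetical helper: the benchmark expects row_dim * col_dim to
   # equal the number of MPI tasks passed to mpiexec -n; verify the
   # grid before submitting a job.
   check_grid() {
       nprocs=$1
       rows=$2
       cols=$3
       [ $((rows * cols)) -eq "$nprocs" ]
   }

   # Illustrative launch (thread count of 6 is an assumption):
   #   OMP_NUM_THREADS=6 \
   #   mpiexec -n 100 ./graph500_bfs_hopper_opt 32 16 1 10 10
   ```

   For instance, `check_grid 100 10 10` succeeds (10 x 10 = 100), while
   `check_grid 100 10 5` fails (10 x 5 = 50).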

----