File: arch-nersc-perlmutter-opt.py

#!/usr/bin/env python3

# Example configure script for Perlmutter, the HPE Cray EX system at NERSC/LBNL equipped with
# AMD EPYC CPUs and NVIDIA A100 GPUs. Here we target the GPU compute nodes and build with
# support for the CUDA/cuSPARSE, Kokkos, and ViennaCL back-ends.
#
# Currently, configuring PETSc on the system does not require loading many, if any, non-default modules.
# As documented at https://docs.nersc.gov/systems/perlmutter/software/#mpi, typical settings might be
#
#   export MPICH_GPU_SUPPORT_ENABLED=1
#   module load cudatoolkit
#   module load PrgEnv-gnu
#   module load craype-accel-nvidia80
#   module load cray-python
#
# The above are currently present in the default environment. Users may wish to 'module load' a
# different programming environment (which will generally force a reload of certain related modules,
# such as the one corresponding to the MPI implementation).
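#
# A minimal sketch of a typical build sequence once the above environment is in place (the script
# path and the PETSC_ARCH name below are assumptions; adjust them to match your PETSc checkout):
#
#   cd /path/to/petsc                                  # root of the PETSc source tree
#   ./config/examples/arch-nersc-perlmutter-opt.py     # runs the configure invocation below
#   make PETSC_DIR=$PWD PETSC_ARCH=arch-nersc-perlmutter-opt all
#   make PETSC_DIR=$PWD PETSC_ARCH=arch-nersc-perlmutter-opt check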

if __name__ == '__main__':
  import sys
  import os
  sys.path.insert(0, os.path.abspath('config'))
  import configure
  configure_options = [
    '--with-make-np=8', # Must limit size of parallel build to stay within resource limitations imposed by the center
    # '-G4' requests all four GPUs present on a Perlmutter GPU compute node.
    # --gpu-bind=none to avoid the gpu-aware mpi runtime error: (GTL DEBUG: 0) cuIpcOpenMemHandle: invalid argument, CUDA_ERROR_INVALID_VALUE, line no 360
    '--with-mpiexec=srun -G4 --gpu-bind=none',
    '--with-batch=0',

    # Use the Cray compiler wrappers, regardless of the underlying compilers loaded by the programming environment module:
    '--with-cc=cc',
    '--with-cxx=CC',
    '--with-fc=ftn',

    # Build with aggressive optimization ('-O3') but also include debugging symbols ('-g') to support detailed profiling.
    # If you are doing development, using no optimization ('-O0') can be a good idea. Also note that some compilers (GNU
    # is one) support the '-g3' debug flag, which allows macro expansion in some debuggers; this can be very useful when
    # debugging PETSc code, as PETSc makes extensive use of macros.
    '--COPTFLAGS=   -g -O3',
    '--CXXOPTFLAGS= -g -O3',
    '--FOPTFLAGS=   -g -O3',
    '--CUDAFLAGS=   -g -O3',
    '--with-debugging=0',  # Disable debugging for production builds; use '--with-debugging=1' for development work.

    # Build with support for CUDA/cuSPARSE, Kokkos/Kokkos Kernels, and ViennaCL back-ends
    # (an illustrative run-time usage sketch follows at the end of this file):
    '--with-cuda=1',
    '--with-cuda-arch=80',
    '--download-viennacl',
    '--download-kokkos',
    '--download-kokkos-kernels',

    # Download and build a few commonly-used packages:
    '--download-hypre',
    '--download-metis',
    '--download-parmetis',
    '--download-hdf5', # Note that NERSC does provide an HDF5 module, but building our own is generally more reliable.
    '--download-hdf5-fortran-bindings',
  ]
  configure.petsc_configure(configure_options)
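
# Illustrative run-time usage of the GPU back-ends enabled above (a sketch only; 'ex19' stands in
# for any PETSc example or application binary, and the srun flags mirror the --with-mpiexec setting
# used in this script):
#
#   srun -n 4 -G4 --gpu-bind=none ./ex19 -dm_vec_type cuda   -dm_mat_type aijcusparse
#   srun -n 4 -G4 --gpu-bind=none ./ex19 -dm_vec_type kokkos -dm_mat_type aijkokkos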