File: README.md

# Inference Benchmark

## Environment setup

1. Confirm that PyG is properly installed.
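   For example, a quick sanity check (one possible way to verify the install):
   ```bash
   python -c "import torch_geometric; print(torch_geometric.__version__)"
   ```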
1. Install the `ogb` dataset package:
   ```bash
   pip install ogb
   ```
1. Install `autoconf`, which is required for the `jemalloc` setup:
   ```bash
   sudo apt-get install autoconf
   ```
1. Install `jemalloc` for the performance benchmark:
   ```bash
   cd ${workspace}
   git clone https://github.com/jemalloc/jemalloc.git
   cd jemalloc
   git checkout 5.2.1
   ./autogen.sh
   ./configure --prefix=${workspace}/jemalloc-bin
   make
   make install
   ```

## Running the benchmark

1. Set environment variables:
   ```bash
   source activate env_name
   export DNNL_PRIMITIVE_CACHE_CAPACITY=1024
   export KMP_BLOCKTIME=1
   export KMP_AFFINITY=granularity=fine,compact,1,0

   jemalloc_lib=${workspace}/jemalloc-bin/lib/libjemalloc.so
   export LD_PRELOAD="$jemalloc_lib"
   export MALLOC_CONF="oversize_threshold:1,background_thread:true,metadata_thp:auto,dirty_decay_ms:9000000000,muzzy_decay_ms:9000000000"
   ```
1. Bind cores, *e.g.*, to a single socket, a single core, or 4 cores per instance (a complete example is shown after this list):
   ```bash
   OMP_NUM_THREADS=${CORES} numactl -C 0-${LAST_CORE} -m 0 CMD......
   ```
1. Execute benchmarks, *e.g.*:
   ```bash
   python -u inference_benchmark.py --datasets=Reddit --models=gcn --eval-batch-sizes=512 --num-layers=2 --num-hidden-channels=64
   python -u inference_benchmark.py --datasets=Reddit --models=gcn --eval-batch-sizes=512 --num-layers=2 --num-hidden-channels=64 --use-sparse-tensor
   python -u inference_benchmark.py --datasets=ogbn-products --models=sage --eval-batch-sizes=512 --num-layers=2 --num-hidden-channels=64
   python -u inference_benchmark.py --datasets=ogbn-products --models=sage --eval-batch-sizes=512 --num-layers=2 --num-hidden-channels=64 --use-sparse-tensor
   ```
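
For reference, steps 2 and 3 can be combined into a single command. The sketch below is illustrative only and assumes an instance pinned to the first 4 cores and the memory node of socket 0:

```bash
# Illustrative combination of core binding and a benchmark run:
# pin to cores 0-3 and memory node 0, using 4 OpenMP threads.
OMP_NUM_THREADS=4 numactl -C 0-3 -m 0 \
    python -u inference_benchmark.py --datasets=Reddit --models=gcn \
    --eval-batch-sizes=512 --num-layers=2 --num-hidden-channels=64
```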