# Using Quiver for PyG Examples

**[Quiver](https://github.com/quiver-team/torch-quiver)** is a **GPU-optimized distributed library** for PyG.
It speeds up graph sampling and feature aggregation by performing them on the GPU when running PyG examples.

## Installation

Assuming you have installed PyTorch and PyG, you can install Quiver as follows:

```bash
pip install "torch-quiver>=0.1.1"
```

## Usage

The API and design documentation of Quiver can be found [here](https://github.com/quiver-team/torch-quiver).

## Examples

We provide several examples to showcase the usage of Quiver within PyG:

### Single-GPU Training

The single-GPU example leverages Quiver's **(i)** GPU-based graph sampling and feature aggregation, and **(ii)** GNN data caching algorithm, which caches hot data in GPU memory while enabling fast access to CPU-resident data through Quiver's shared-tensor implementation:

```bash
python single_gpu_quiver.py
```

### Multi-GPU Training

The multi-GPU example further leverages Quiver's ability to **(i)** distribute sampling and feature aggregation across multiple GPUs, and **(ii)** use the memory of multiple GPUs to cache and replicate hot GNN data:

```bash
python multi_gpu_quiver.py
```

### Distributed Training

A Quiver-based distributed PyG example is coming soon.