This folder contains scripts used as part of the PyTorch build
process. The directory also doubles as a Python module hierarchy
(hence the `__init__.py`).
## Overview
Modern infrastructure:
* [autograd](autograd) - Code generation for autograd. This
includes definitions of all our derivatives.
* [jit](jit) - Code generation for the JIT.
* [shared](shared) - Generic infrastructure that scripts in
tools may find useful.
* [module_loader.py](shared/module_loader.py) - Makes it easier
to import arbitrary Python files in a script, without having to add
them to the PYTHONPATH first (see the sketch after this list).
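The underlying technique is what `importlib` in the standard library
provides; below is a minimal sketch of loading a file by path (this is
not necessarily `module_loader.py`'s actual API):

```python
import importlib.util


def import_file(name, path):
    # Load a Python file from an arbitrary path without adding its
    # directory to PYTHONPATH or sys.path.
    spec = importlib.util.spec_from_file_location(name, path)
    module = importlib.util.module_from_spec(spec)
    spec.loader.exec_module(module)
    return module


# Hypothetical usage: pull in a helper by file path from a script.
# helpers = import_file("helpers", "tools/autograd/gen_autograd.py")
```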
Build system pieces:
* [setup_helpers](setup_helpers) - Helper code for searching for
third-party dependencies on the user system.
* [build_pytorch_libs.py](build_pytorch_libs.py) - Cross-platform script that
builds all of the constituent libraries of PyTorch,
but not the PyTorch Python extension itself.
* [build_libtorch.py](build_libtorch.py) - Script for building
libtorch, a standalone C++ library without Python support. This
build script is tested in CI.
* [fast_nvcc](fast_nvcc) - Mostly-transparent wrapper over nvcc that
parallelizes compilation when used to build CUDA files for multiple
architectures at once (see the sketch after this list).
* [fast_nvcc.py](fast_nvcc/fast_nvcc.py) - Python script, entrypoint to the
fast nvcc wrapper.
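Conceptually, compiling one `.cu` file for several architectures with a
single nvcc invocation serializes the per-architecture device compiles;
the wrapper's trick is to run them concurrently. The sketch below only
illustrates that idea (the architecture list, output naming, and flags
here are assumptions, not fast_nvcc's actual behavior):

```python
import subprocess
from concurrent.futures import ThreadPoolExecutor

# Hypothetical architecture list; in practice it comes from the nvcc
# command line that the wrapper intercepts.
ARCHS = ["70", "75", "80"]


def compile_for_arch(src, arch):
    # One nvcc invocation per architecture so the compiles can overlap.
    out = f"{src}.sm_{arch}.o"
    cmd = [
        "nvcc", "-c", src, "-o", out,
        f"-gencode=arch=compute_{arch},code=sm_{arch}",
    ]
    subprocess.run(cmd, check=True)
    return out


def parallel_compile(src):
    # Launch all per-architecture compiles at once and wait for them.
    with ThreadPoolExecutor(max_workers=len(ARCHS)) as pool:
        return list(pool.map(lambda arch: compile_for_arch(src, arch), ARCHS))
```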
Developer tools which you might find useful:
* [git_add_generated_dirs.sh](git_add_generated_dirs.sh) and
[git_reset_generated_dirs.sh](git_reset_generated_dirs.sh) -
Use these to force-add generated files to your Git index, so that you
can conveniently diff them while working on code generation; a rough
Python equivalent follows this list. (See also
[generated_dirs.txt](generated_dirs.txt), which lists the directories
containing generated files.)
* [stats/test_history.py](stats/test_history.py) - Query S3 to display the
history of a single test across multiple jobs over time.
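The force-add idea is just `git add -f` over every directory listed in
[generated_dirs.txt](generated_dirs.txt). A rough Python equivalent of
the two shell scripts (the path and the one-directory-per-line file
format are assumptions) might look like:

```python
import subprocess
from pathlib import Path


def _generated_dirs(list_file="tools/generated_dirs.txt"):
    # Assumed format: one generated directory per line.
    for line in Path(list_file).read_text().splitlines():
        directory = line.strip()
        if directory:
            yield directory


def add_generated_dirs():
    # Force-add (git add -f) each generated directory so you can diff
    # generated code, even though these paths are normally gitignored.
    for directory in _generated_dirs():
        subprocess.run(["git", "add", "-f", directory], check=True)


def reset_generated_dirs():
    # Undo the force-add without touching the files on disk.
    for directory in _generated_dirs():
        subprocess.run(["git", "reset", "--", directory], check=True)
```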
Important if you want to run PyTorch on an AMD GPU:
* [amd_build](amd_build) - HIPify scripts, for transpiling CUDA
into AMD HIP. Right now, PyTorch and Caffe2 share logic for how to
do this transpilation, but have separate entry points for transpiling
either PyTorch or Caffe2 code (see the sketch after this list).
* [build_amd.py](amd_build/build_amd.py) - Top-level entry
point for HIPifying our codebase.
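At its heart, HIPify is source-to-source substitution of CUDA
identifiers with their HIP equivalents. The toy example below shows the
flavor of that transformation with a tiny hand-picked mapping; the real
tables in [amd_build](amd_build) cover far more of the API surface:

```python
import re

# Illustrative subset of CUDA -> HIP renames.
CUDA_TO_HIP = {
    "cudaMalloc": "hipMalloc",
    "cudaMemcpy": "hipMemcpy",
    "cudaFree": "hipFree",
    "cudaStream_t": "hipStream_t",
}

_PATTERN = re.compile("|".join(re.escape(name) for name in CUDA_TO_HIP))


def hipify_source(text):
    # Replace every known CUDA identifier with its HIP counterpart.
    return _PATTERN.sub(lambda match: CUDA_TO_HIP[match.group(0)], text)


print(hipify_source("cudaMalloc(&ptr, size); cudaFree(ptr);"))
# hipMalloc(&ptr, size); hipFree(ptr);
```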
Tools which are only situationally useful:
* [docker](docker) - Dockerfile for running (but not developing)
PyTorch, using the official conda binary distribution. Context:
https://github.com/pytorch/pytorch/issues/1619
* [download_mnist.py](download_mnist.py) - Download the MNIST
dataset; this is necessary if you want to run the C++ API tests
(see the sketch after this list).
* [run-clang-tidy-in-ci.sh](run-clang-tidy-in-ci.sh) - Responsible
for checking that C++ code is clang-tidy clean in CI on Travis.
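For reference, downloading MNIST amounts to fetching four gzipped IDX
files. The sketch below assumes a mirror URL and output layout; the
real script's source and options may differ:

```python
import urllib.request
from pathlib import Path

# Assumed mirror; download_mnist.py may use a different source.
BASE_URL = "https://ossci-datasets.s3.amazonaws.com/mnist/"
FILES = [
    "train-images-idx3-ubyte.gz",
    "train-labels-idx1-ubyte.gz",
    "t10k-images-idx3-ubyte.gz",
    "t10k-labels-idx1-ubyte.gz",
]


def download_mnist(dest="data/mnist"):
    # Fetch each archive into dest, skipping files that already exist.
    Path(dest).mkdir(parents=True, exist_ok=True)
    for name in FILES:
        target = Path(dest) / name
        if not target.exists():
            urllib.request.urlretrieve(BASE_URL + name, str(target))
```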