functorch.compile (experimental)
================================

AOT Autograd is an experimental feature that allows ahead-of-time capture of
forward and backward graphs, and makes it easy to integrate them with
compilers. It provides an easy-to-hack, Python-based development environment
for speeding up the training of PyTorch models. AOT Autograd currently lives in
the ``functorch.compile`` namespace.
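
The typical workflow is to wrap a function with ``aot_function`` and supply
compiler callbacks for the captured forward and backward graphs. A minimal
sketch, assuming a callback that simply prints each graph and returns it
unchanged (the function and tensor shapes below are only illustrative):

.. code-block:: python

    import torch
    from functorch.compile import aot_function

    def fn(a, b):
        return (a * b).sum()

    # A compiler callback receives the captured FX graph and example inputs
    # and must return a callable with the same calling convention. Here we
    # just print the graph's code and hand it back unmodified.
    def print_compile(fx_module, example_inputs):
        print(fx_module.code)
        return fx_module

    aot_fn = aot_function(fn, fw_compiler=print_compile, bw_compiler=print_compile)

    a = torch.randn(4, requires_grad=True)
    b = torch.randn(4, requires_grad=True)
    out = aot_fn(a, b)  # graph capture and compilation happen on the first call
    out.backward()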

.. warning::
    AOT Autograd is experimental and the APIs are likely to change. We are looking
    for feedback. If you are interested in using AOT Autograd and need help or have
    suggestions, please feel free to open an issue. We will be happy to help.

.. currentmodule:: functorch.compile

Compilation APIs (experimental)
-------------------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    aot_function
    aot_module
    memory_efficient_fusion
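
``aot_module`` offers the same capture-and-compile workflow for an
``nn.Module``, while ``memory_efficient_fusion`` bundles a preconfigured
compiler and partitioner. A minimal sketch of ``aot_module``, using the ``nop``
compiler so the captured graphs run unmodified (the toy model is illustrative):

.. code-block:: python

    import torch
    from functorch.compile import aot_module, nop

    mod = torch.nn.Sequential(torch.nn.Linear(8, 8), torch.nn.ReLU())

    # nop leaves the captured graphs uncompiled, which is a convenient way to
    # check that graph capture works before plugging in a real compiler.
    aot_mod = aot_module(mod, fw_compiler=nop, bw_compiler=nop)

    x = torch.randn(2, 8)
    aot_mod(x).sum().backward()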

Partitioners (experimental)
---------------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    default_partition
    min_cut_rematerialization_partition
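
A partitioner splits the joint forward-backward graph captured by AOT Autograd
into separate forward and backward graphs. ``default_partition`` is used when
nothing else is specified; ``min_cut_rematerialization_partition`` recomputes
some cheap forward values in the backward pass to reduce the amount of saved
activation memory (it relies on the optional ``networkx`` package). A sketch of
selecting a partitioner through the ``partition_fn`` argument of
``aot_function``:

.. code-block:: python

    import torch
    from functorch.compile import (
        aot_function,
        min_cut_rematerialization_partition,
        nop,
    )

    def fn(x):
        return torch.nn.functional.gelu(x).sum()

    # Trade a little recomputation in the backward pass for lower peak memory.
    aot_fn = aot_function(
        fn,
        fw_compiler=nop,
        bw_compiler=nop,
        partition_fn=min_cut_rematerialization_partition,
    )

    x = torch.randn(16, requires_grad=True)
    aot_fn(x).backward()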

Compilers (experimental)
------------------------
.. autosummary::
    :toctree: generated
    :nosignatures:

    nop
    ts_compile
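
A compiler here is any callable that takes the captured FX graph module and
example inputs and returns a callable with the same signature. ``nop`` hands
the graph back unchanged, which is handy for debugging graph capture itself,
while ``ts_compile`` lowers the graph through TorchScript. A sketch combining
the two (which graph gets which compiler is purely illustrative):

.. code-block:: python

    import torch
    from functorch.compile import aot_function, nop, ts_compile

    def fn(a, b):
        return torch.tanh(a) + torch.sigmoid(b)

    # Lower the forward graph through TorchScript; leave the backward graph
    # as a plain FX GraphModule.
    aot_fn = aot_function(fn, fw_compiler=ts_compile, bw_compiler=nop)

    a = torch.randn(4, requires_grad=True)
    b = torch.randn(4, requires_grad=True)
    aot_fn(a, b).sum().backward()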