File: GLOSSARY.md

# PyTorch Glossary

<!-- toc -->

- [Operation and Kernel](#operation-and-kernel)
  - [ATen](#aten)
  - [Operation](#operation)
  - [Native Operation](#native-operation)
  - [Custom Operation](#custom-operation)
  - [Kernel](#kernel)
  - [Compound Operation](#compound-operation)
  - [Composite Operation](#composite-operation)
  - [Non-Leaf Operation](#non-leaf-operation)
  - [Leaf Operation](#leaf-operation)
  - [Device Kernel](#device-kernel)
  - [Compound Kernel](#compound-kernel)
- [JIT Compilation](#jit-compilation)
  - [JIT](#jit)
  - [TorchScript](#torchscript)
  - [Tracing](#tracing)
  - [Scripting](#scripting)

<!-- tocstop -->

# Operation and Kernel

## ATen
Short for "A Tensor Library". The foundational tensor and mathematical
operation library on which all else is built.

## Operation
A unit of work. For example, the work of matrix multiplication is an operation
called `aten::matmul`.
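As a minimal sketch, calling `torch.matmul` from Python invokes this operation:

```python
import torch

# torch.matmul is the Python entry point for the aten::matmul operation
a = torch.ones(2, 3)
b = torch.ones(3, 4)
c = torch.matmul(a, b)
print(c.shape)  # torch.Size([2, 4]); every element is 3.0
```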

## Native Operation
An operation that comes natively with PyTorch's ATen library, for example
`aten::matmul`.

## Custom Operation
An Operation that is defined by users, usually as a Compound Operation. This
[tutorial](https://pytorch.org/docs/stable/notes/extending.html) details how
to create Custom Operations.
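A minimal sketch of a user-defined operation built with `torch.autograd.Function`, as in the tutorial above (the class name `MySquare` is hypothetical):

```python
import torch

class MySquare(torch.autograd.Function):
    """A Custom Operation with an explicitly defined backward."""

    @staticmethod
    def forward(ctx, x):
        ctx.save_for_backward(x)
        return x * x

    @staticmethod
    def backward(ctx, grad_output):
        (x,) = ctx.saved_tensors
        return grad_output * 2 * x  # d(x^2)/dx = 2x

x = torch.tensor(3.0, requires_grad=True)
y = MySquare.apply(x)
y.backward()
print(x.grad)  # tensor(6.)
```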

## Kernel
Implementation of a PyTorch operation, specifying what should be done when an
operation executes.

## Compound Operation
A Compound Operation is composed of other operations. Its kernel is usually
device-agnostic. Normally it doesn't have its own derivative functions defined.
Instead, autograd automatically computes its derivative from the operations it
uses.
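As a sketch, an operation composed from existing ones (here a hand-rolled SiLU, `my_silu` being a hypothetical helper) needs no backward of its own; autograd differentiates through the composition:

```python
import torch

# A compound of existing operations: no explicit backward is defined
def my_silu(x):
    return x * torch.sigmoid(x)

x = torch.tensor(0.0, requires_grad=True)
my_silu(x).backward()
# d/dx [x*sigmoid(x)] = sigmoid(x) + x*sigmoid(x)*(1 - sigmoid(x)) = 0.5 at x=0
print(x.grad)  # tensor(0.5000)
```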

## Composite Operation
Same as Compound Operation.

## Non-Leaf Operation
Same as Compound Operation.

## Leaf Operation
An operation that's considered a basic operation, as opposed to a Compound
Operation. A Leaf Operation always has dispatch functions defined and usually
has a derivative function defined as well.

## Device Kernel
A device-specific kernel of a Leaf Operation.

## Compound Kernel
As opposed to Device Kernels, Compound Kernels are usually device-agnostic and
belong to Compound Operations.

# JIT Compilation

## JIT
Just-In-Time compilation: compiling and optimizing code at runtime, as the
program executes, rather than ahead of time.

## TorchScript
A statically typed subset of Python used to create serializable and optimizable
models from PyTorch code, together with the interface (`torch.jit`) to its JIT
compiler and interpreter.

## Tracing
Using `torch.jit.trace` on a function: example inputs are run through the
function, the operations executed are recorded, and the result is an executable
that can be optimized using just-in-time compilation. Control flow that depends
on the input values is not captured.
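A minimal tracing sketch (`scale` is a hypothetical function):

```python
import torch

def scale(x):
    return x * 2.0

# Record the operations executed for an example input
traced = torch.jit.trace(scale, torch.ones(3))
out = traced(torch.tensor([1.0, 2.0, 3.0]))
print(out)  # tensor([2., 4., 6.])
```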

## Scripting
Using `torch.jit.script` on a function to inspect its source code and compile
it as TorchScript code. Unlike tracing, scripting preserves data-dependent
control flow.
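A minimal scripting sketch (`abs_sum` is a hypothetical function); the `if` branch taken depends on the input values and is preserved in the compiled code:

```python
import torch

@torch.jit.script
def abs_sum(x: torch.Tensor) -> torch.Tensor:
    # Data-dependent control flow survives compilation
    s = x.sum()
    if bool(s < 0):
        return -s
    return s

print(abs_sum(torch.tensor([-1.0, -2.0])))  # tensor(3.)
```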