File: autograd.h

#pragma once

#include <torch/csrc/autograd/variable.h>

#include <ATen/ATen.h>

namespace torch {
namespace autograd {

/// Computes the sum of gradients of given tensors with respect to graph leaves.
///
/// The graph is differentiated using the chain rule. If any of ``tensors``
/// are non-scalar (i.e. their data has more than one element) and require
/// gradient, then the Jacobian-vector product is computed; in this case the
/// function additionally requires specifying `grad_tensors`. It should be a
/// sequence of matching length that contains the "vector" in the
/// Jacobian-vector product, usually the gradient of the differentiated
/// function w.r.t. the corresponding tensors (`torch::Tensor()` is an
/// acceptable value for all tensors that don't need gradient tensors).
///
/// This function accumulates gradients in the leaves - you might need to zero them
/// before calling it.
///
/// \param tensors Tensors of which the derivative will be computed.
/// \param grad_tensors The "vector" in the Jacobian-vector product, usually gradients
///     w.r.t. each element of corresponding tensors. `torch::Tensor()` values can be
///     specified for scalar Tensors or ones that don't require grad. If a `torch::Tensor()` value
///     would be acceptable for all grad_tensors, then this argument is optional.
/// \param retain_graph If `false`, the graph used to compute the grad will be freed.
///     Note that in nearly all cases setting this option to `true` is not needed
///     and often can be worked around in a much more efficient way. Defaults to the
///     value of `create_graph`.
/// \param create_graph If `true`, the graph of the derivative will be
///     constructed, allowing higher order derivative products to be computed.
///     Defaults to `false`.
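///
/// A minimal usage sketch (illustrative only; assumes a linked libtorch and
/// the declaration below):
/// \code
/// auto x = torch::ones({2, 2}, torch::requires_grad());
/// auto z = (x * x).sum();          // scalar output, so grad_tensors may be omitted
/// torch::autograd::backward({z});
/// // x.grad() now holds dz/dx = 2 * x; gradients accumulate across calls,
/// // so zero x.grad() before a second backward pass if needed.
/// \endcode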
TORCH_API void backward(
    const variable_list& tensors,
    const variable_list& grad_tensors = {},
    c10::optional<bool> retain_graph = c10::nullopt,
    bool create_graph = false);

/// Computes and returns the sum of gradients of outputs with respect to the inputs.
///
/// ``grad_outputs`` should be a sequence of length matching ``outputs``,
/// containing the "vector" in the Jacobian-vector product, usually the
/// pre-computed gradients w.r.t. each of the outputs. If an output doesn't
/// require_grad, then the gradient can be ``torch::Tensor()``.
///
/// \param outputs Outputs of the differentiated function.
/// \param inputs Inputs w.r.t. which the gradient will be
///     returned (and not accumulated into ``at::Tensor::grad``).
/// \param grad_outputs The "vector" in the Jacobian-vector product.
///     Usually gradients w.r.t. each output. `torch::Tensor()` values can be specified for scalar
///     Tensors or ones that don't require grad. If a `torch::Tensor()` value would be acceptable
///     for all grad_outputs, then this argument is optional. Default: `{}`.
/// \param retain_graph If ``false``, the graph used to compute the grad
///     will be freed. Note that in nearly all cases setting this option to ``true``
///     is not needed and often can be worked around in a much more efficient
///     way. Defaults to the value of ``create_graph``.
/// \param create_graph If ``true``, the graph of the derivative will
///     be constructed, allowing higher order derivative products to be
///     computed. Default: ``false``.
/// \param allow_unused If ``false``, specifying inputs that were not
///     used when computing outputs (and therefore their grad is always zero)
///     is an error. Defaults to ``false``.
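///
/// A minimal usage sketch (illustrative only; assumes a linked libtorch and
/// the declaration below):
/// \code
/// auto x = torch::ones({2, 2}, torch::requires_grad());
/// auto y = x * x;                       // non-scalar output
/// auto grads = torch::autograd::grad(
///     {y}, {x}, {torch::ones_like(y)}); // ones as the JVP "vector"
/// // grads[0] holds dy/dx = 2 * x; x.grad() is not modified.
/// \endcode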
TORCH_API variable_list grad(
    const variable_list& outputs,
    const variable_list& inputs,
    const variable_list& grad_outputs = {},
    c10::optional<bool> retain_graph = c10::nullopt,
    bool create_graph = false,
    bool allow_unused = false);

} // namespace autograd
} // namespace torch