.. role:: hidden
    :class: hidden-section

torch.nn.functional
===================

.. currentmodule:: torch.nn.functional

Convolution functions
----------------------------------

.. autosummary::
    :toctree: generated
    :nosignatures:

    conv1d
    conv2d
    conv3d
    conv_transpose1d
    conv_transpose2d
    conv_transpose3d
    unfold
    fold

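
As a quick illustration of the convolution functions above, here is a minimal sketch using `conv2d` (the tensor shapes and values are toy examples, not taken from the API reference):

```python
import torch
import torch.nn.functional as F

# Toy batch: 1 image, 3 input channels, 8x8 spatial size.
x = torch.randn(1, 3, 8, 8)
# Weight layout is (out_channels, in_channels, kH, kW): 16 filters of size 3x3.
w = torch.randn(16, 3, 3, 3)
# padding=1 with a 3x3 kernel and stride 1 preserves the spatial size.
y = F.conv2d(x, w, padding=1)
print(y.shape)  # torch.Size([1, 16, 8, 8])
```

The functional form takes the weight tensor explicitly; the module counterpart `torch.nn.Conv2d` owns its weights instead.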
Pooling functions
----------------------------------

.. autosummary::
    :toctree: generated
    :nosignatures:

    avg_pool1d
    avg_pool2d
    avg_pool3d
    max_pool1d
    max_pool2d
    max_pool3d
    max_unpool1d
    max_unpool2d
    max_unpool3d
    lp_pool1d
    lp_pool2d
    adaptive_max_pool1d
    adaptive_max_pool2d
    adaptive_max_pool3d
    adaptive_avg_pool1d
    adaptive_avg_pool2d
    adaptive_avg_pool3d
    fractional_max_pool2d
    fractional_max_pool3d

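
A short sketch contrasting fixed-window and adaptive pooling (toy shapes chosen for illustration):

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 8, 8)

# Fixed 2x2 window, stride defaults to the kernel size: halves each spatial dim.
pooled = F.max_pool2d(x, kernel_size=2)
print(pooled.shape)  # torch.Size([1, 3, 4, 4])

# Adaptive pooling specifies the *output* size instead; output_size=1 is a
# global average over each channel.
squeezed = F.adaptive_avg_pool2d(x, output_size=1)
print(squeezed.shape)  # torch.Size([1, 3, 1, 1])
```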
Non-linear activation functions
-------------------------------

.. autosummary::
    :toctree: generated
    :nosignatures:

    threshold
    threshold_
    relu
    relu_
    hardtanh
    hardtanh_
    hardswish
    relu6
    elu
    elu_
    selu
    celu
    leaky_relu
    leaky_relu_
    prelu
    rrelu
    rrelu_
    glu
    gelu
    logsigmoid
    hardshrink
    tanhshrink
    softsign
    softplus
    softmin
    softmax
    softshrink
    gumbel_softmax
    log_softmax
    tanh
    sigmoid
    hardsigmoid
    silu
    mish

Normalization functions
-----------------------

.. autosummary::
    :toctree: generated
    :nosignatures:

    batch_norm
    group_norm
    instance_norm
    layer_norm
    local_response_norm
    normalize

.. _Link 1: https://arxiv.org/abs/1611.00712
.. _Link 2: https://arxiv.org/abs/1611.01144
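
A minimal sketch of two functions from the lists above, `softmax`/`log_softmax` and `normalize` (input values are made up for illustration):

```python
import torch
import torch.nn.functional as F

logits = torch.tensor([[1.0, 2.0, 3.0]])
probs = F.softmax(logits, dim=-1)          # each row sums to 1
log_probs = F.log_softmax(logits, dim=-1)  # numerically stabler than probs.log()

# normalize rescales along a dimension to unit Lp norm; here L2:
v = torch.tensor([[3.0, 4.0]])             # L2 norm is 5
unit = F.normalize(v, p=2, dim=1)          # -> [[0.6, 0.8]]
```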
Linear functions
----------------

.. autosummary::
    :toctree: generated
    :nosignatures:

    linear
    bilinear

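
A quick sketch of `linear`; note the weight is laid out as `(out_features, in_features)`, so the call computes `x @ w.T + b` (toy shapes):

```python
import torch
import torch.nn.functional as F

x = torch.randn(4, 10)   # batch of 4, 10 input features
w = torch.randn(5, 10)   # (out_features, in_features)
b = torch.zeros(5)

y = F.linear(x, w, b)    # equivalent to x @ w.T + b
print(y.shape)  # torch.Size([4, 5])
```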
Dropout functions
-----------------

.. autosummary::
    :toctree: generated
    :nosignatures:

    dropout
    alpha_dropout
    feature_alpha_dropout
    dropout1d
    dropout2d
    dropout3d

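
Unlike the module form, the functional `dropout` takes an explicit `training` flag. A small sketch (input values chosen to make the scaling visible):

```python
import torch
import torch.nn.functional as F

x = torch.ones(2, 3)

# In training mode, each element is zeroed with probability p and the
# survivors are scaled by 1/(1-p), so here the values are 0.0 or 2.0.
train_out = F.dropout(x, p=0.5, training=True)

# In eval mode, dropout is the identity.
eval_out = F.dropout(x, p=0.5, training=False)
```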
Sparse functions
----------------------------------

.. autosummary::
    :toctree: generated
    :nosignatures:

    embedding
    embedding_bag
    one_hot

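
A minimal sketch of `embedding` (a table lookup by index) and `one_hot`, with a toy 5-row table:

```python
import torch
import torch.nn.functional as F

idx = torch.tensor([0, 2, 1])
table = torch.randn(5, 4)              # 5 embeddings of dimension 4

vecs = F.embedding(idx, table)         # (3, 4): rows 0, 2, 1 of the table
hot = F.one_hot(idx, num_classes=5)    # (3, 5) integer tensor, one 1 per row
```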
Distance functions
----------------------------------

.. autosummary::
    :toctree: generated
    :nosignatures:

    pairwise_distance
    cosine_similarity
    pdist
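
A short sketch comparing rows of two toy batches with `cosine_similarity` and `pairwise_distance`:

```python
import torch
import torch.nn.functional as F

a = torch.randn(4, 8)
b = torch.randn(4, 8)

sim = F.cosine_similarity(a, b, dim=1)  # (4,), each value in [-1, 1]
dist = F.pairwise_distance(a, b)        # (4,), Euclidean (p=2) by default
```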
Loss functions
--------------

.. autosummary::
    :toctree: generated
    :nosignatures:

    binary_cross_entropy
    binary_cross_entropy_with_logits
    poisson_nll_loss
    cosine_embedding_loss
    cross_entropy
    ctc_loss
    gaussian_nll_loss
    hinge_embedding_loss
    kl_div
    l1_loss
    mse_loss
    margin_ranking_loss
    multilabel_margin_loss
    multilabel_soft_margin_loss
    multi_margin_loss
    nll_loss
    huber_loss
    smooth_l1_loss
    soft_margin_loss
    triplet_margin_loss
    triplet_margin_with_distance_loss

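
A sketch of the most common loss, `cross_entropy`, which takes raw (unnormalized) logits and combines `log_softmax` with `nll_loss` in one numerically stable call (toy shapes and targets):

```python
import torch
import torch.nn.functional as F

logits = torch.randn(3, 5)           # 3 samples, 5 classes (raw scores)
target = torch.tensor([1, 0, 4])     # class indices

loss = F.cross_entropy(logits, target)
# Equivalent two-step form:
manual = F.nll_loss(F.log_softmax(logits, dim=1), target)
```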
Vision functions
----------------

.. autosummary::
    :toctree: generated
    :nosignatures:

    pixel_shuffle
    pixel_unshuffle
    pad
    interpolate
    upsample
    upsample_nearest
    upsample_bilinear
    grid_sample
    affine_grid

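
A minimal sketch of `interpolate` and `pad` on a toy 4x4 feature map:

```python
import torch
import torch.nn.functional as F

x = torch.randn(1, 3, 4, 4)

# Upsample both spatial dims by 2x.
up = F.interpolate(x, scale_factor=2, mode="nearest")
print(up.shape)  # torch.Size([1, 3, 8, 8])

# pad takes pairs starting from the last dim: (left, right, top, bottom).
padded = F.pad(x, (1, 1, 1, 1))
print(padded.shape)  # torch.Size([1, 3, 6, 6])
```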
DataParallel functions (multi-GPU, distributed)
-----------------------------------------------

:hidden:`data_parallel`
~~~~~~~~~~~~~~~~~~~~~~~

.. autosummary::
    :toctree: generated
    :nosignatures:

    torch.nn.parallel.data_parallel
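
A hedged sketch using `nn.DataParallel`, the module counterpart of the functional `torch.nn.parallel.data_parallel` listed above (the toy model and shapes are made up; with no visible GPUs the wrapper simply runs the module on the host device, while with several CUDA devices the batch dimension is split across them):

```python
import torch
import torch.nn as nn

# Wrap a toy model; on a multi-GPU machine, inputs are scattered along dim 0,
# replicas run in parallel, and outputs are gathered back.
model = nn.DataParallel(nn.Linear(10, 5))
x = torch.randn(4, 10)
y = model(x)
print(y.shape)  # torch.Size([4, 5])
```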