#pragma once
#include <c10/core/SymInt.h>
#include <torch/csrc/autograd/python_variable.h>
#include <torch/csrc/python_headers.h>
#include <torch/csrc/utils/pybind.h>
#include <torch/csrc/utils/python_symnode.h>
namespace torch::autograd {
struct UnpackedSlice {
  c10::SymInt start;
  c10::SymInt stop;
  c10::SymInt step;
};
// This mirrors CPython's PySlice_Unpack function
inline UnpackedSlice __PySlice_Unpack(PyObject* _r) {
  PySliceObject* r = (PySliceObject*)_r;
  /* this is harder to get right than you might think */

  c10::SymInt start_sym, stop_sym, step_sym;

  // Saturate values below SymInt's representable range, warning once so that
  // old serialized models with out-of-range slice bounds still load.
  auto clip_val = [](Py_ssize_t val) {
    if (val < c10::SymInt::min_representable_int()) {
      auto r = PyErr_WarnEx(
          PyExc_UserWarning,
          "Truncating the start/stop/step "
          "of slice. This is likely because of "
          "saved old models when the start/stop/step were larger.",
          1);
      if (r != 0) {
        throw python_error();
      }
      return (Py_ssize_t)(c10::SymInt::min_representable_int());
    }
    return val;
  };

  if (r->step == Py_None) {
    step_sym = c10::SymInt(1);
  } else {
    if (torch::is_symint(r->step)) {
      step_sym = py::handle(r->step).cast<c10::SymInt>();
    } else {
      Py_ssize_t step = 0;
      if (!_PyEval_SliceIndex(r->step, &step)) {
        throw python_error();
      }
      if (step == 0) {
        PyErr_SetString(PyExc_ValueError, "slice step cannot be zero");
        // Propagate the Python error; merely setting it would let a zero
        // step slip through to the slicing logic below.
        throw python_error();
      }
      step = clip_val(step);
      step_sym = c10::SymInt(step);
    }
  }

  if (torch::is_symint(r->start)) {
    start_sym = py::handle(r->start).cast<c10::SymInt>();
  } else if (r->start == Py_None) {
    start_sym = c10::SymInt(step_sym < 0 ? PY_SSIZE_T_MAX : 0);
  } else {
    Py_ssize_t start = 0;
    if (!_PyEval_SliceIndex(r->start, &start)) {
      throw python_error();
    }
    start = clip_val(start);
    start_sym = c10::SymInt(start);
  }

  if (torch::is_symint(r->stop)) {
    stop_sym = py::handle(r->stop).cast<c10::SymInt>();
  } else if (r->stop == Py_None) {
    stop_sym = c10::SymInt(
        step_sym < 0 ? c10::SymInt::min_representable_int() : PY_SSIZE_T_MAX);
  } else {
    Py_ssize_t stop = 0;
    if (!_PyEval_SliceIndex(r->stop, &stop)) {
      throw python_error();
    }
    stop = clip_val(stop);
    stop_sym = c10::SymInt(stop);
  }

  return UnpackedSlice{
      std::move(start_sym), std::move(stop_sym), std::move(step_sym)};
}
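
// Usage sketch (illustrative only; `obj`, `self_`, and `dim` are placeholder
// names, not defined in this header). Given a slice object corresponding to
// the Python expression `t[1:10:2]`:
//
//   UnpackedSlice s = __PySlice_Unpack(obj);
//   // s.start == 1, s.stop == 10, s.step == 2
//   // The SymInt fields can then drive a SymInt-aware slicing call, e.g.:
//   // self_.slice_symint(dim, s.start, s.stop, s.step);
//
// Defaults follow Python semantics: an omitted step is 1, and omitted
// start/stop saturate to the extreme representable bounds in the direction
// implied by the sign of step.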
Py_ssize_t THPVariable_length(PyObject* self);
PyObject* THPVariable_getitem(PyObject* self, PyObject* index);
int THPVariable_setitem(PyObject* self, PyObject* index, PyObject* value);
Variable valueToTensor(
    c10::TensorOptions options,
    PyObject* value,
    const at::Device& device);
} // namespace torch::autograd