#pragma once

#include <ATen/core/builtin_function.h>
#include <ATen/core/stack.h>
#include <torch/csrc/jit/backends/backend_interface.h>
#include <torch/custom_class.h>

namespace torch {
namespace jit {
namespace {
// Schema for `is_available`: takes the backend instance (`self`) and returns
// a bool indicating whether the backend is available.
// NOLINTNEXTLINE(clang-diagnostic-unneeded-internal-declaration)
inline c10::FunctionSchema getIsAvailableSchema() {
  c10::Argument self("self", c10::AnyType::get());
  c10::Argument available("available", c10::BoolType::get());
  c10::FunctionSchema is_available_schema(
      "is_available",
      /*overload_name=*/"",
      /*arguments=*/{self},
      /*returns=*/{available});
  return is_available_schema;
}

constexpr static auto kBackendsNamespace = "__backends__";
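// Note: a backend registered under the name "foo" is exposed as the custom
// class `__backends__.foo`; by the usual custom class naming convention its
// fully qualified name would be `__torch__.torch.classes.__backends__.foo`.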

// Schema for `compile`: takes the backend instance, the preprocessed module
// and a dict of per-method compile specs, and returns a dict of handles.
// NOLINTNEXTLINE(clang-diagnostic-unneeded-internal-declaration)
inline c10::FunctionSchema getCompileSchema() {
  c10::Argument self("self", c10::AnyType::get());
  c10::Argument mod("processed", c10::AnyType::get());
  auto any_dict_ty =
      c10::DictType::create(c10::StringType::get(), c10::AnyType::get());
  c10::Argument method_compile_spec("method_compile_spec", any_dict_ty);
  c10::Argument handles("handles", any_dict_ty);
  c10::FunctionSchema compile_schema(
      "compile",
      /*overload_name=*/"",
      /*arguments=*/{self, mod, method_compile_spec},
      /*returns=*/{handles});
  return compile_schema;
}

// Schema for `execute`: takes the backend instance, a handle produced by
// `compile` and a list of inputs, and returns a list of outputs.
// NOLINTNEXTLINE(clang-diagnostic-unneeded-internal-declaration)
inline c10::FunctionSchema getExecuteSchema() {
  auto any_list_ty = c10::ListType::create(c10::AnyType::get());
  c10::Argument self("self", c10::AnyType::get());
  c10::Argument handle("handle", c10::AnyType::get());
  c10::Argument input("input", any_list_ty);
  c10::Argument output("output", any_list_ty);
  return c10::FunctionSchema(
      "execute",
      /*overload_name=*/"",
      /*arguments=*/{self, handle, input},
      /*returns=*/{output});
}

// Wraps TBackendInterface::is_available in an unboxed function that pops its
// arguments from and pushes its result onto the interpreter stack.
template <typename TBackendInterface>
std::function<void(Stack&)> getIsAvailableFunc() {
  return [](Stack& stack) {
    auto self = pop(stack).toCustomClass<TBackendInterface>();
    auto ret = self->is_available();
    push(stack, ret);
  };
}

// Wraps TBackendInterface::compile. Arguments are popped in the reverse of
// schema order (self, processed, method_compile_spec).
template <typename TBackendInterface>
std::function<void(Stack&)> getCompileFunc() {
  return [](Stack& stack) {
    auto method_compile_spec = pop(stack).toGenericDict();
    auto processed = pop(stack);
    auto self = pop(stack).toCustomClass<TBackendInterface>();
    auto ret = self->compile(processed, method_compile_spec);
    push(stack, ret);
  };
}

// Wraps TBackendInterface::execute. As above, arguments are popped in the
// reverse of schema order (self, handle, input) and the outputs are pushed
// back onto the stack.
template <typename TBackendInterface>
std::function<void(Stack&)> getExecuteFunc() {
  return [](Stack& stack) {
    auto args = pop(stack);
    auto handle = pop(stack);
    auto self = pop(stack);
    auto backend = self.toCustomClass<TBackendInterface>();
    auto res = backend->execute(handle, args.toList());
    push(stack, res);
  };
}
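
// Illustrative note: the interpreter pushes arguments in schema order, so the
// wrappers above pop them in reverse. For `execute`, the stack just before
// the call is (top on the right)
//   [..., self, handle, input]
// and just after the call it is
//   [..., output]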
} // namespace

// Static registration API for backends.
template <class TBackendInterface>
class backend {
  static_assert(
      std::is_base_of<PyTorchBackendInterface, TBackendInterface>::value,
      "torch::jit::backend<T> requires T to inherit from PyTorchBackendInterface");
  std::string backend_name_;

 public:
  // Registers TBackendInterface as a custom class named \p name in the
  // backends namespace and exposes its is_available, compile and execute
  // methods with the schemas defined above.
  backend(const std::string& name) : backend_name_(name) {
    static auto cls =
        torch::class_<TBackendInterface>(kBackendsNamespace, name)
            .def(torch::init<>())
            ._def_unboxed(
                "is_available",
                getIsAvailableFunc<TBackendInterface>(),
                getIsAvailableSchema())
            ._def_unboxed(
                "compile",
                getCompileFunc<TBackendInterface>(),
                getCompileSchema())
            ._def_unboxed(
                "execute",
                getExecuteFunc<TBackendInterface>(),
                getExecuteSchema());
  }
};
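
// Example usage (a minimal sketch; `MyBackend`, its method bodies and the
// registration variable name are hypothetical, not part of this header):
//
//   class MyBackend : public torch::jit::PyTorchBackendInterface {
//    public:
//     bool is_available() override {
//       return true;
//     }
//     c10::impl::GenericDict compile(
//         c10::IValue processed,
//         c10::impl::GenericDict method_compile_spec) override {
//       // Lower `processed` for each method named in `method_compile_spec`
//       // and return a dict of handles.
//       ...
//     }
//     c10::impl::GenericList execute(
//         c10::IValue handle,
//         c10::impl::GenericList inputs) override {
//       // Run the compiled artifact identified by `handle` on `inputs`.
//       ...
//     }
//   };
//
//   // Instantiating torch::jit::backend at namespace scope registers the
//   // custom class and its methods at static initialization time.
//   static auto my_backend_registration =
//       torch::jit::backend<MyBackend>("my_backend");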
} // namespace jit
} // namespace torch