//===- SerializeToBlob.cpp - MLIR GPU lowering pass -----------------------===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
//
// This file implements a base class for a pass to serialize a gpu module
// into a binary blob that can be executed on a GPU. The binary blob is added
// as a string attribute to the gpu module.
//
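// With the default annotation ("gpu.binary"), a serialized module looks
// roughly like the following; the module name and blob contents here are
// illustrative only and depend on the derived serialization pass:
//
//   gpu.module @kernels attributes {gpu.binary = "<serialized blob>"} {
//     ...
//   }
//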
//===----------------------------------------------------------------------===//
#include "mlir/Dialect/GPU/Transforms/Passes.h"
#include "mlir/ExecutionEngine/OptUtils.h"
#include "mlir/Pass/Pass.h"
#include "mlir/Target/LLVMIR/Dialect/GPU/GPUToLLVMIRTranslation.h"
#include "mlir/Target/LLVMIR/Dialect/LLVMIR/LLVMToLLVMIRTranslation.h"
#include "mlir/Target/LLVMIR/Export.h"
#include "llvm/IR/LegacyPassManager.h"
#include "llvm/MC/TargetRegistry.h"
#include "llvm/Support/TargetSelect.h"
#include "llvm/Target/TargetMachine.h"
#include <optional>
#include <string>

#define DEBUG_TYPE "serialize-to-blob"

using namespace mlir;

std::string gpu::getDefaultGpuBinaryAnnotation() { return "gpu.binary"; }

gpu::SerializeToBlobPass::SerializeToBlobPass(TypeID passID)
    : OperationPass<gpu::GPUModuleOp>(passID) {}

gpu::SerializeToBlobPass::SerializeToBlobPass(const SerializeToBlobPass &other)
    : OperationPass<gpu::GPUModuleOp>(other) {}

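// Translates the LLVM IR module to target ISA (assembly text): sets the data
// layout from the target machine, runs the LLVM optimization pipeline, and
// then runs the codegen passes to emit an assembly file into a string.
// Returns std::nullopt on failure.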
std::optional<std::string>
gpu::SerializeToBlobPass::translateToISA(llvm::Module &llvmModule,
                                         llvm::TargetMachine &targetMachine) {
  llvmModule.setDataLayout(targetMachine.createDataLayout());

  if (failed(optimizeLlvm(llvmModule, targetMachine)))
    return std::nullopt;

  std::string targetISA;
  llvm::raw_string_ostream stream(targetISA);

  { // Drop pstream after this to prevent the ISA from being stuck buffering.
    llvm::buffer_ostream pstream(stream);
    llvm::legacy::PassManager codegenPasses;

    if (targetMachine.addPassesToEmitFile(codegenPasses, pstream, nullptr,
                                          llvm::CGFT_AssemblyFile))
      return std::nullopt;

    codegenPasses.run(llvmModule);
  }
  return stream.str();
}

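// Pass entry point: lowers the GPU module to LLVM IR, translates it to target
// ISA, serializes the ISA into a blob, and attaches the blob to the module as
// a string attribute under `gpuBinaryAnnotation`.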
void gpu::SerializeToBlobPass::runOnOperation() {
  // Lower the module to an LLVM IR module using a separate context to enable
  // multi-threaded processing.
  llvm::LLVMContext llvmContext;
  std::unique_ptr<llvm::Module> llvmModule = translateToLLVMIR(llvmContext);
  if (!llvmModule)
    return signalPassFailure();

  // Lower the LLVM IR module to target ISA.
  std::unique_ptr<llvm::TargetMachine> targetMachine = createTargetMachine();
  if (!targetMachine)
    return signalPassFailure();

  std::optional<std::string> maybeTargetISA =
      translateToISA(*llvmModule, *targetMachine);
  if (!maybeTargetISA.has_value())
    return signalPassFailure();
  std::string targetISA = std::move(*maybeTargetISA);

  LLVM_DEBUG({
    llvm::dbgs() << "ISA for module: " << getOperation().getNameAttr() << "\n";
    llvm::dbgs() << targetISA << "\n";
    llvm::dbgs().flush();
  });

  // Serialize the target ISA.
  std::unique_ptr<std::vector<char>> blob = serializeISA(targetISA);
  if (!blob)
    return signalPassFailure();

  // Add the blob as module attribute.
  auto attr =
      StringAttr::get(&getContext(), StringRef(blob->data(), blob->size()));
  getOperation()->setAttr(gpuBinaryAnnotation, attr);
}

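// Runs the standard LLVM optimization pipeline on the module at the requested
// optimization level (0-3), reporting failures as MLIR diagnostics on the
// current operation.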
LogicalResult
gpu::SerializeToBlobPass::optimizeLlvm(llvm::Module &llvmModule,
                                       llvm::TargetMachine &targetMachine) {
  int optLevel = this->optLevel.getValue();
  if (optLevel < 0 || optLevel > 3)
    return getOperation().emitError()
           << "invalid optimization level " << optLevel;

  targetMachine.setOptLevel(static_cast<llvm::CodeGenOpt::Level>(optLevel));

  auto transformer =
      makeOptimizingTransformer(optLevel, /*sizeLevel=*/0, &targetMachine);
  auto error = transformer(&llvmModule);
  if (error) {
    InFlightDiagnostic mlirError = getOperation()->emitError();
    llvm::handleAllErrors(
        std::move(error), [&mlirError](const llvm::ErrorInfoBase &ei) {
          mlirError << "could not optimize LLVM IR: " << ei.message();
        });
    return mlirError;
  }

  return success();
}

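// Registers the GPU and LLVM dialect translation interfaces needed to
// translate the GPU module to LLVM IR.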
void gpu::SerializeToBlobPass::getDependentDialects(
    DialectRegistry &registry) const {
  registerGPUDialectTranslation(registry);
  registerLLVMDialectTranslation(registry);
  OperationPass<gpu::GPUModuleOp>::getDependentDialects(registry);
}

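// Creates an llvm::TargetMachine from the `triple`, `chip`, and `features`
// pass options, emitting an error at the module location on failure.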
std::unique_ptr<llvm::TargetMachine>
gpu::SerializeToBlobPass::createTargetMachine() {
  Location loc = getOperation().getLoc();
  std::string error;
  const llvm::Target *target =
      llvm::TargetRegistry::lookupTarget(triple, error);
  if (!target) {
    emitError(loc, Twine("failed to lookup target: ") + error);
    return {};
  }
  llvm::TargetMachine *machine =
      target->createTargetMachine(triple, chip, features, {}, {});
  if (!machine) {
    emitError(loc, "failed to create target machine");
    return {};
  }

  return std::unique_ptr<llvm::TargetMachine>{machine};
}

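// Translates the GPU module to an LLVM IR module using the registered dialect
// translations.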
std::unique_ptr<llvm::Module>
gpu::SerializeToBlobPass::translateToLLVMIR(llvm::LLVMContext &llvmContext) {
  return translateModuleToLLVMIR(getOperation(), llvmContext,
                                 "LLVMDialectModule");
}