//===- RankReductionPatterns.cpp - Patterns related to rank reductions ----===//
//
// Part of the LLVM Project, under the Apache License v2.0 with LLVM Exceptions.
// See https://llvm.org/LICENSE.txt for license information.
// SPDX-License-Identifier: Apache-2.0 WITH LLVM-exception
//
//===----------------------------------------------------------------------===//
#include "mlir/Dialect/Tensor/IR/Tensor.h"
#include "mlir/Dialect/Tensor/Transforms/Transforms.h"
#include "mlir/IR/PatternMatch.h"
#include "llvm/Support/Debug.h"
using namespace mlir;
using namespace mlir::tensor;
namespace {
/// Fold expand_shape(extract_slice) ops that cancel each other out.
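///
/// Illustrative sketch (shapes and reassociation chosen arbitrarily; the
/// exact tensor dialect assembly syntax may differ between MLIR versions):
///
///   %0 = tensor.extract_slice %t[0, 0, 0][1, 16, 4][1, 1, 1]
///       : tensor<8x16x4xf32> to tensor<16x4xf32>
///   %1 = tensor.expand_shape %0 [[0, 1], [2]]
///       : tensor<16x4xf32> into tensor<1x16x4xf32>
///
/// is rewritten into a single non-rank-reducing extract_slice:
///
///   %1 = tensor.extract_slice %t[0, 0, 0][1, 16, 4][1, 1, 1]
///       : tensor<8x16x4xf32> to tensor<1x16x4xf32>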
struct FoldExpandOfRankReducingExtract
: public OpRewritePattern<ExpandShapeOp> {
using OpRewritePattern<ExpandShapeOp>::OpRewritePattern;
LogicalResult matchAndRewrite(ExpandShapeOp expandShapeOp,
PatternRewriter &rewriter) const override {
RankedTensorType resultType = expandShapeOp.getResultType();
auto extractSliceOp =
expandShapeOp.getSrc().getDefiningOp<ExtractSliceOp>();
if (!extractSliceOp)
return failure();
RankedTensorType srcType = extractSliceOp.getSourceType();
// Only cases where the ExpandShapeOp can be folded away entirely are
// supported. Moreover, only simple cases where the resulting ExtractSliceOp
// is no longer rank-reducing are supported at the moment.
RankedTensorType nonReducingExtractType = ExtractSliceOp::inferResultType(
srcType, extractSliceOp.getStaticOffsets(),
extractSliceOp.getStaticSizes(), extractSliceOp.getStaticStrides());
if (nonReducingExtractType != resultType)
return failure();
SmallVector<OpFoldResult> mixedOffsets = extractSliceOp.getMixedOffsets();
SmallVector<OpFoldResult> mixedSizes = extractSliceOp.getMixedSizes();
SmallVector<OpFoldResult> mixedStrides = extractSliceOp.getMixedStrides();
rewriter.replaceOpWithNewOp<tensor::ExtractSliceOp>(
expandShapeOp, extractSliceOp.getSource(), mixedOffsets, mixedSizes,
mixedStrides);
return success();
}
};

/// Fold a collapse_shape that only removes static dimensions of size `1`
/// into the producing extract_slice.
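///
/// Illustrative sketch (shapes chosen arbitrarily; assembly syntax may
/// differ between MLIR versions):
///
///   %0 = tensor.extract_slice %t[0, 0, 0][1, 16, 4][1, 1, 1]
///       : tensor<8x16x4xf32> to tensor<1x16x4xf32>
///   %1 = tensor.collapse_shape %0 [[0, 1], [2]]
///       : tensor<1x16x4xf32> into tensor<16x4xf32>
///
/// is rewritten into a single rank-reducing extract_slice:
///
///   %1 = tensor.extract_slice %t[0, 0, 0][1, 16, 4][1, 1, 1]
///       : tensor<8x16x4xf32> to tensor<16x4xf32>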
struct FoldUnPaddingCollapseIntoExtract
: public OpRewritePattern<tensor::CollapseShapeOp> {
using OpRewritePattern<tensor::CollapseShapeOp>::OpRewritePattern;
LogicalResult matchAndRewrite(tensor::CollapseShapeOp collapseShapeOp,
PatternRewriter &rewriter) const override {
auto extractSliceOp =
collapseShapeOp.getSrc().getDefiningOp<tensor::ExtractSliceOp>();
// The collapse cannot be folded away when the extract_slice has multiple
// users, and it is not necessarily beneficial to merely rewrite the
// collapse into another extract_slice.
if (!extractSliceOp || !extractSliceOp->hasOneUse())
return failure();
// Only fold away a simple collapse where all removed dimensions have
// static size `1`.
SliceVerificationResult res = isRankReducedType(
collapseShapeOp.getSrcType(), collapseShapeOp.getResultType());
if (res != SliceVerificationResult::Success)
return rewriter.notifyMatchFailure(collapseShapeOp,
"expected unpadding collapse");
Value unPaddedExtractSlice = rewriter.create<tensor::ExtractSliceOp>(
extractSliceOp.getLoc(), collapseShapeOp.getResultType(),
extractSliceOp.getSource(), extractSliceOp.getMixedOffsets(),
extractSliceOp.getMixedSizes(), extractSliceOp.getMixedStrides());
rewriter.replaceOp(collapseShapeOp, unPaddedExtractSlice);
return success();
}
};

/// Fold insert_slice(collapse_shape) ops that cancel each other out.
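///
/// Illustrative sketch for the tensor.insert_slice case (shapes chosen
/// arbitrarily; assembly syntax may differ between MLIR versions):
///
///   %0 = tensor.collapse_shape %src [[0, 1], [2]]
///       : tensor<1x16x4xf32> into tensor<16x4xf32>
///   %1 = tensor.insert_slice %0 into %dest[0, 0, 0][1, 16, 4][1, 1, 1]
///       : tensor<16x4xf32> into tensor<8x16x4xf32>
///
/// is rewritten into a single non-rank-reducing insert_slice:
///
///   %1 = tensor.insert_slice %src into %dest[0, 0, 0][1, 16, 4][1, 1, 1]
///       : tensor<1x16x4xf32> into tensor<8x16x4xf32>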
template <typename OpTy>
struct FoldInsertOfRankReducingInsert : public OpRewritePattern<OpTy> {
using OpRewritePattern<OpTy>::OpRewritePattern;
LogicalResult matchAndRewrite(OpTy insertSliceOp,
PatternRewriter &rewriter) const override {
auto collapseShapeOp =
insertSliceOp.getSource().template getDefiningOp<CollapseShapeOp>();
if (!collapseShapeOp)
return failure();
RankedTensorType srcType = collapseShapeOp.getSrcType();
// Only cases where the CollapseShapeOp can be folded away entirely are
// supported. Moreover, only simple cases where the resulting InsertSliceOp
// is no longer rank-reducing are supported at the moment.
RankedTensorType nonReducingInsertType =
RankedTensorType::get(insertSliceOp.getStaticSizes(),
insertSliceOp.getDestType().getElementType());
if (nonReducingInsertType != srcType)
return failure();
SmallVector<OpFoldResult> mixedOffsets = insertSliceOp.getMixedOffsets();
SmallVector<OpFoldResult> mixedSizes = insertSliceOp.getMixedSizes();
SmallVector<OpFoldResult> mixedStrides = insertSliceOp.getMixedStrides();
rewriter.replaceOpWithNewOp<OpTy>(insertSliceOp, collapseShapeOp.getSrc(),
insertSliceOp.getDest(), mixedOffsets,
mixedSizes, mixedStrides);
return success();
}
};

/// Fold an expand_shape that only adds static dimensions of size `1`
/// into the consuming insert_slice.
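///
/// Illustrative sketch for the tensor.insert_slice case (shapes chosen
/// arbitrarily; assembly syntax may differ between MLIR versions):
///
///   %0 = tensor.expand_shape %src [[0, 1], [2]]
///       : tensor<16x4xf32> into tensor<1x16x4xf32>
///   %1 = tensor.insert_slice %0 into %dest[0, 0, 0][1, 16, 4][1, 1, 1]
///       : tensor<1x16x4xf32> into tensor<8x16x4xf32>
///
/// is rewritten in place so that the insert_slice takes the unexpanded
/// source directly, becoming a rank-reducing insert_slice:
///
///   %1 = tensor.insert_slice %src into %dest[0, 0, 0][1, 16, 4][1, 1, 1]
///       : tensor<16x4xf32> into tensor<8x16x4xf32>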
template <typename OpTy>
struct FoldPaddingExpandIntoInsert : public OpRewritePattern<OpTy> {
using OpRewritePattern<OpTy>::OpRewritePattern;
LogicalResult matchAndRewrite(OpTy insertSliceOp,
PatternRewriter &rewriter) const override {
auto expandShapeOp = insertSliceOp.getSource()
.template getDefiningOp<tensor::ExpandShapeOp>();
if (!expandShapeOp)
return failure();
// Only fold away a simple expansion where all added dimensions have
// static size `1`.
SliceVerificationResult res = isRankReducedType(
expandShapeOp.getResultType(), expandShapeOp.getSrcType());
if (res != SliceVerificationResult::Success)
return rewriter.notifyMatchFailure(insertSliceOp,
"expected rank increasing expansion");
rewriter.modifyOpInPlace(insertSliceOp, [&]() {
insertSliceOp.getSourceMutable().assign(expandShapeOp.getSrc());
});
return success();
}
};
} // namespace
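
// Note: a typical way to exercise these patterns from a pass is sketched
// below (illustrative only; `op` and the surrounding pass are assumed, and
// applyPatternsAndFoldGreedily requires
// "mlir/Transforms/GreedyPatternRewriteDriver.h"):
//   RewritePatternSet patterns(op->getContext());
//   tensor::populateReassociativeReshapeFoldingPatterns(patterns);
//   if (failed(applyPatternsAndFoldGreedily(op, std::move(patterns))))
//     signalPassFailure();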
void mlir::tensor::populateReassociativeReshapeFoldingPatterns(
RewritePatternSet &patterns) {
patterns
.add<FoldExpandOfRankReducingExtract, FoldUnPaddingCollapseIntoExtract,
FoldInsertOfRankReducingInsert<tensor::InsertSliceOp>,
FoldInsertOfRankReducingInsert<tensor::ParallelInsertSliceOp>,
FoldPaddingExpandIntoInsert<tensor::InsertSliceOp>,
FoldPaddingExpandIntoInsert<tensor::ParallelInsertSliceOp>>(
patterns.getContext());
}