// RUN: mlir-opt %s -split-input-file -verify-diagnostics
func.func @alloc_tensor_missing_dims(%arg0: index) {
  // expected-error @+1 {{expected 2 dynamic sizes}}
  %0 = bufferization.alloc_tensor(%arg0) : tensor<4x?x?x5xf32>
  return
}
// -----
// expected-note @+1 {{prior use here}}
func.func @alloc_tensor_type_mismatch(%t: tensor<?xf32>) {
  // expected-error @+1{{expects different type than prior uses: 'tensor<5xf32>' vs 'tensor<?xf32>'}}
  %0 = bufferization.alloc_tensor() copy(%t) : tensor<5xf32>
  return
}
// -----
func.func @alloc_tensor_copy_and_dims(%t: tensor<?xf32>, %sz: index) {
  // expected-error @+1{{dynamic sizes not needed when copying a tensor}}
  %0 = bufferization.alloc_tensor(%sz) copy(%t) : tensor<?xf32>
  return
}
// -----
func.func @alloc_tensor_invalid_escape_attr(%sz: index) {
  // expected-error @+1{{'bufferization.escape' is expected to be a bool array attribute}}
  %0 = bufferization.alloc_tensor(%sz) {bufferization.escape = 5} : tensor<?xf32>
  return
}
// -----
func.func @alloc_tensor_invalid_escape_attr_size(%sz: index) {
  // expected-error @+1{{'bufferization.escape' has wrong number of elements, expected 1, got 2}}
  %0 = bufferization.alloc_tensor(%sz) {bufferization.escape = [true, false]} : tensor<?xf32>
  return
}
// -----
func.func @escape_attr_non_allocating(%t0: tensor<?xf32>) {
  // expected-error @+1{{'bufferization.escape' only valid for allocation results}}
  %0 = tensor.extract_slice %t0[0][5][1] {bufferization.escape = [true]} : tensor<?xf32> to tensor<5xf32>
  return
}
// -----
func.func @escape_attr_non_bufferizable(%m0: memref<?xf32>) {
  // expected-error @+1{{'bufferization.escape' only valid on bufferizable ops}}
  %0 = memref.cast %m0 {bufferization.escape = [true]} : memref<?xf32> to memref<10xf32>
  return
}
// -----
#DCSR = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed" ] }>
func.func @sparse_alloc_direct_return() -> tensor<20x40xf32, #DCSR> {
  // expected-error @+1{{sparse tensor allocation should not escape function}}
  %0 = bufferization.alloc_tensor() : tensor<20x40xf32, #DCSR>
  return %0 : tensor<20x40xf32, #DCSR>
}
// -----
#DCSR = #sparse_tensor.encoding<{ lvlTypes = [ "compressed", "compressed" ] }>
func.func private @foo(tensor<20x40xf32, #DCSR>) -> ()
func.func @sparse_alloc_call() {
  // expected-error @+1{{sparse tensor allocation should not escape function}}
  %0 = bufferization.alloc_tensor() : tensor<20x40xf32, #DCSR>
  call @foo(%0) : (tensor<20x40xf32, #DCSR>) -> ()
  return
}
// -----
// expected-error @+1{{invalid value for 'bufferization.access'}}
func.func private @invalid_buffer_access_type(tensor<*xf32> {bufferization.access = "foo"})
// -----
// expected-error @+1{{'bufferization.writable' is invalid on external functions}}
func.func private @invalid_writable_attribute(tensor<*xf32> {bufferization.writable = false})
// -----
func.func @invalid_writable_on_op() {
  // expected-error @+1 {{attribute '"bufferization.writable"' not supported as an op attribute by the bufferization dialect}}
  arith.constant {bufferization.writable = true} 0 : index
  return
}
// -----
// expected-note @below{{prior use here}}
func.func @invalid_tensor_copy(%arg0: tensor<?xf32>, %arg1: tensor<5xf32>) {
  // expected-error @below{{expects different type than prior uses: 'tensor<?xf32>' vs 'tensor<5xf32>'}}
  bufferization.copy_tensor %arg0, %arg1 : tensor<?xf32>
  return
}
// -----
func.func @invalid_dealloc_memref_condition_mismatch(%arg0: memref<2xf32>, %arg1: memref<4xi32>, %arg2: i1) -> i1 {
  // expected-error @below{{must have the same number of conditions as memrefs to deallocate}}
  %0 = bufferization.dealloc (%arg0, %arg1 : memref<2xf32>, memref<4xi32>) if (%arg2)
  return %0 : i1
}