; NOTE: Assertions have been autogenerated by utils/update_test_checks.py UTC_ARGS: --version 3
; RUN: opt < %s -passes=asan -S | FileCheck %s
; RUN: opt < %s -passes=asan -asan-recover -S | FileCheck %s --check-prefix=RECOV
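
; Test AddressSanitizer instrumentation of loads from constant address space
; (addrspace(4)) globals on amdgcn-amd-amdhsa. The first RUN line checks the
; default (aborting) report calls; the second checks -asan-recover, which uses
; the *_noabort report runtime calls and continues past the reported access.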
target datalayout = "e-p:64:64-p1:64:64-p2:32:32-p3:32:32-p4:64:64-p5:32:32-p6:32:32-p7:160:256:256:32-p8:128:128-i64:64-v16:16-v24:32-v32:32-v48:64-v96:128-v192:256-v256:256-v512:512-v1024:1024-v2048:2048-n32:64-S32-A5-G1-ni:7:8:9"
target triple = "amdgcn-amd-amdhsa"
@x = addrspace(4) global [2 x i32] zeroinitializer, align 4
@x8 = addrspace(4) global [2 x i64] zeroinitializer, align 8
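
; A 4-byte load does not cover a full 8-byte shadow granule, so in addition to
; the shadow-byte != 0 check the instrumentation compares (addr & 7) + 3 with
; the shadow byte. The shadow address is computed as (addr >> 3) + 2147450880
; (0x7fff8000). In the aborting mode the per-lane fault condition is reduced
; across the wavefront with llvm.amdgcn.ballot.i64 and the report path ends in
; llvm.amdgcn.unreachable.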
define protected amdgpu_kernel void @constant_load(i64 %i) sanitize_address {
; CHECK-LABEL: define protected amdgpu_kernel void @constant_load(
; CHECK-SAME: i64 [[I:%.*]]) #[[ATTR0:[0-9]+]] {
; CHECK-NEXT: entry:
; CHECK-NEXT: [[A:%.*]] = getelementptr inbounds [2 x i32], ptr addrspace(4) @x, i64 0, i64 [[I]]
; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr addrspace(4) [[A]] to i64
; CHECK-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 3
; CHECK-NEXT: [[TMP2:%.*]] = add i64 [[TMP1]], 2147450880
; CHECK-NEXT: [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
; CHECK-NEXT: [[TMP4:%.*]] = load i8, ptr [[TMP3]], align 1
; CHECK-NEXT: [[TMP5:%.*]] = icmp ne i8 [[TMP4]], 0
; CHECK-NEXT: [[TMP6:%.*]] = and i64 [[TMP0]], 7
; CHECK-NEXT: [[TMP7:%.*]] = add i64 [[TMP6]], 3
; CHECK-NEXT: [[TMP8:%.*]] = trunc i64 [[TMP7]] to i8
; CHECK-NEXT: [[TMP9:%.*]] = icmp sge i8 [[TMP8]], [[TMP4]]
; CHECK-NEXT: [[TMP10:%.*]] = and i1 [[TMP5]], [[TMP9]]
; CHECK-NEXT: [[TMP11:%.*]] = call i64 @llvm.amdgcn.ballot.i64(i1 [[TMP10]])
; CHECK-NEXT: [[TMP12:%.*]] = icmp ne i64 [[TMP11]], 0
; CHECK-NEXT: br i1 [[TMP12]], label [[ASAN_REPORT:%.*]], label [[TMP15:%.*]], !prof [[PROF2:![0-9]+]]
; CHECK: asan.report:
; CHECK-NEXT: br i1 [[TMP10]], label [[TMP13:%.*]], label [[TMP14:%.*]]
; CHECK: 13:
; CHECK-NEXT: call void @__asan_report_load4(i64 [[TMP0]]) #[[ATTR5:[0-9]+]]
; CHECK-NEXT: call void @llvm.amdgcn.unreachable()
; CHECK-NEXT: br label [[TMP14]]
; CHECK: 14:
; CHECK-NEXT: br label [[TMP15]]
; CHECK: 15:
; CHECK-NEXT: [[Q:%.*]] = load i32, ptr addrspace(4) [[A]], align 4
; CHECK-NEXT: ret void
;
; RECOV-LABEL: define protected amdgpu_kernel void @constant_load(
; RECOV-SAME: i64 [[I:%.*]]) #[[ATTR0:[0-9]+]] {
; RECOV-NEXT: entry:
; RECOV-NEXT: [[A:%.*]] = getelementptr inbounds [2 x i32], ptr addrspace(4) @x, i64 0, i64 [[I]]
; RECOV-NEXT: [[TMP0:%.*]] = ptrtoint ptr addrspace(4) [[A]] to i64
; RECOV-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 3
; RECOV-NEXT: [[TMP2:%.*]] = add i64 [[TMP1]], 2147450880
; RECOV-NEXT: [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
; RECOV-NEXT: [[TMP4:%.*]] = load i8, ptr [[TMP3]], align 1
; RECOV-NEXT: [[TMP5:%.*]] = icmp ne i8 [[TMP4]], 0
; RECOV-NEXT: [[TMP6:%.*]] = and i64 [[TMP0]], 7
; RECOV-NEXT: [[TMP7:%.*]] = add i64 [[TMP6]], 3
; RECOV-NEXT: [[TMP8:%.*]] = trunc i64 [[TMP7]] to i8
; RECOV-NEXT: [[TMP9:%.*]] = icmp sge i8 [[TMP8]], [[TMP4]]
; RECOV-NEXT: [[TMP10:%.*]] = and i1 [[TMP5]], [[TMP9]]
; RECOV-NEXT: br i1 [[TMP10]], label [[ASAN_REPORT:%.*]], label [[TMP11:%.*]], !prof [[PROF2:![0-9]+]]
; RECOV: asan.report:
; RECOV-NEXT: call void @__asan_report_load4_noabort(i64 [[TMP0]]) #[[ATTR3:[0-9]+]]
; RECOV-NEXT: br label [[TMP11]]
; RECOV: 11:
; RECOV-NEXT: [[Q:%.*]] = load i32, ptr addrspace(4) [[A]], align 4
; RECOV-NEXT: ret void
;
entry:
  %a = getelementptr inbounds [2 x i32], ptr addrspace(4) @x, i64 0, i64 %i
  %q = load i32, ptr addrspace(4) %a, align 4
  ret void
}
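
; An 8-byte load covers a full shadow granule, so only the shadow-byte != 0
; check is emitted; no intra-granule offset comparison is needed.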
define protected amdgpu_kernel void @constant_load_8(i64 %i) sanitize_address {
; CHECK-LABEL: define protected amdgpu_kernel void @constant_load_8(
; CHECK-SAME: i64 [[I:%.*]]) #[[ATTR0]] {
; CHECK-NEXT: entry:
; CHECK-NEXT: [[A:%.*]] = getelementptr inbounds [2 x i64], ptr addrspace(4) @x8, i64 0, i64 [[I]]
; CHECK-NEXT: [[TMP0:%.*]] = ptrtoint ptr addrspace(4) [[A]] to i64
; CHECK-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 3
; CHECK-NEXT: [[TMP2:%.*]] = add i64 [[TMP1]], 2147450880
; CHECK-NEXT: [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
; CHECK-NEXT: [[TMP4:%.*]] = load i8, ptr [[TMP3]], align 1
; CHECK-NEXT: [[TMP5:%.*]] = icmp ne i8 [[TMP4]], 0
; CHECK-NEXT: [[TMP6:%.*]] = call i64 @llvm.amdgcn.ballot.i64(i1 [[TMP5]])
; CHECK-NEXT: [[TMP7:%.*]] = icmp ne i64 [[TMP6]], 0
; CHECK-NEXT: br i1 [[TMP7]], label [[ASAN_REPORT:%.*]], label [[TMP10:%.*]], !prof [[PROF2]]
; CHECK: asan.report:
; CHECK-NEXT: br i1 [[TMP5]], label [[TMP8:%.*]], label [[TMP9:%.*]]
; CHECK: 8:
; CHECK-NEXT: call void @__asan_report_load8(i64 [[TMP0]]) #[[ATTR5]]
; CHECK-NEXT: call void @llvm.amdgcn.unreachable()
; CHECK-NEXT: br label [[TMP9]]
; CHECK: 9:
; CHECK-NEXT: br label [[TMP10]]
; CHECK: 10:
; CHECK-NEXT: [[Q:%.*]] = load i64, ptr addrspace(4) [[A]], align 8
; CHECK-NEXT: ret void
;
; RECOV-LABEL: define protected amdgpu_kernel void @constant_load_8(
; RECOV-SAME: i64 [[I:%.*]]) #[[ATTR0]] {
; RECOV-NEXT: entry:
; RECOV-NEXT: [[A:%.*]] = getelementptr inbounds [2 x i64], ptr addrspace(4) @x8, i64 0, i64 [[I]]
; RECOV-NEXT: [[TMP0:%.*]] = ptrtoint ptr addrspace(4) [[A]] to i64
; RECOV-NEXT: [[TMP1:%.*]] = lshr i64 [[TMP0]], 3
; RECOV-NEXT: [[TMP2:%.*]] = add i64 [[TMP1]], 2147450880
; RECOV-NEXT: [[TMP3:%.*]] = inttoptr i64 [[TMP2]] to ptr
; RECOV-NEXT: [[TMP4:%.*]] = load i8, ptr [[TMP3]], align 1
; RECOV-NEXT: [[TMP5:%.*]] = icmp ne i8 [[TMP4]], 0
; RECOV-NEXT: br i1 [[TMP5]], label [[ASAN_REPORT:%.*]], label [[TMP6:%.*]], !prof [[PROF2]]
; RECOV: asan.report:
; RECOV-NEXT: call void @__asan_report_load8_noabort(i64 [[TMP0]]) #[[ATTR3]]
; RECOV-NEXT: br label [[TMP6]]
; RECOV: 6:
; RECOV-NEXT: [[Q:%.*]] = load i64, ptr addrspace(4) [[A]], align 8
; RECOV-NEXT: ret void
;
entry:
  %a = getelementptr inbounds [2 x i64], ptr addrspace(4) @x8, i64 0, i64 %i
  %q = load i64, ptr addrspace(4) %a, align 8
  ret void
}