# NOTE: Assertions have been autogenerated by utils/update_mir_test_checks.py UTC_ARGS: --version 5
# RUN: llc -mtriple aarch64-none-elf -mattr=+mte --run-pass=aarch64-ldst-opt %s -o - | FileCheck %s
## When generating code with sanitize_memtag, we make use of the fact that the
## sp+imm forms of many load and store instructions are not tag-checked, so we
## can use SP directly instead of needing a register holding the tagged
## pointer. However, this isn't true for the writeback versions of the
## instructions, so we can't fold ADDs and SUBs into them in
## AArch64LoadStoreOptimizer. This would be possible in cases where the
## loads/stores only access untagged stack slots, but that information isn't
## easily available after frame index elimination.
--- |
  define void @pre_index() {
  entry:
    ret void
  }
  define void @pre_index_memtag() sanitize_memtag {
  entry:
    ret void
  }
  define void @pre_index_memtag_not_sp() sanitize_memtag {
  entry:
    ret void
  }
  define void @post_index() {
  entry:
    ret void
  }
  define void @post_index_memtag() sanitize_memtag {
  entry:
    ret void
  }
  define void @post_index_memtag_not_sp() sanitize_memtag {
  entry:
    ret void
  }
...
---
name: pre_index
body: |
  bb.0.entry:
    liveins: $x0

    ; CHECK-LABEL: name: pre_index
    ; CHECK: liveins: $x0
    ; CHECK-NEXT: {{ $}}
    ; CHECK-NEXT: $sp = frame-setup SUBXri $sp, 16, 0
    ; CHECK-NEXT: early-clobber $sp = STRXpre killed renamable $x0, $sp, 16
    ; CHECK-NEXT: RET undef $lr
    $sp = frame-setup SUBXri $sp, 16, 0
    STRXui killed renamable $x0, $sp, 2
    $sp = ADDXri $sp, 16, 0
    RET undef $lr
...
---
name: pre_index_memtag
body: |
  bb.0.entry:
    liveins: $x0

    ; CHECK-LABEL: name: pre_index_memtag
    ; CHECK: liveins: $x0
    ; CHECK-NEXT: {{ $}}
    ; CHECK-NEXT: $sp = frame-setup SUBXri $sp, 16, 0
    ; CHECK-NEXT: STRXui killed renamable $x0, $sp, 2
    ; CHECK-NEXT: $sp = ADDXri $sp, 16, 0
    ; CHECK-NEXT: RET undef $lr
    $sp = frame-setup SUBXri $sp, 16, 0
    STRXui killed renamable $x0, $sp, 2
    $sp = ADDXri $sp, 16, 0
    RET undef $lr
...
---
name: pre_index_memtag_not_sp
body: |
  bb.0.entry:
    liveins: $x0, $x1

    ; CHECK-LABEL: name: pre_index_memtag_not_sp
    ; CHECK: liveins: $x0, $x1
    ; CHECK-NEXT: {{ $}}
    ; CHECK-NEXT: $x1 = frame-setup SUBXri $x1, 16, 0
    ; CHECK-NEXT: early-clobber $x1 = STRXpre killed renamable $x0, $x1, 16
    ; CHECK-NEXT: RET undef $lr, implicit $x1
    $x1 = frame-setup SUBXri $x1, 16, 0
    STRXui killed renamable $x0, $x1, 2
    $x1 = ADDXri $x1, 16, 0
    RET undef $lr, implicit $x1
...
---
name: post_index
body: |
  bb.0.entry:
    liveins: $x0

    ; CHECK-LABEL: name: post_index
    ; CHECK: liveins: $x0
    ; CHECK-NEXT: {{ $}}
    ; CHECK-NEXT: $sp = frame-setup SUBXri $sp, 16, 0
    ; CHECK-NEXT: early-clobber $sp = STRXpost killed renamable $x0, $sp, 16
    ; CHECK-NEXT: RET undef $lr
    $sp = frame-setup SUBXri $sp, 16, 0
    STRXui killed renamable $x0, $sp, 0
    $sp = ADDXri $sp, 16, 0
    RET undef $lr
...
---
name: post_index_memtag
body: |
  bb.0.entry:
    liveins: $x0

    ; CHECK-LABEL: name: post_index_memtag
    ; CHECK: liveins: $x0
    ; CHECK-NEXT: {{ $}}
    ; CHECK-NEXT: $sp = frame-setup SUBXri $sp, 16, 0
    ; CHECK-NEXT: STRXui killed renamable $x0, $sp, 0
    ; CHECK-NEXT: $sp = ADDXri $sp, 16, 0
    ; CHECK-NEXT: RET undef $lr
    $sp = frame-setup SUBXri $sp, 16, 0
    STRXui killed renamable $x0, $sp, 0
    $sp = ADDXri $sp, 16, 0
    RET undef $lr
...
---
name: post_index_memtag_not_sp
body: |
  bb.0.entry:
    liveins: $x0, $x1

    ; CHECK-LABEL: name: post_index_memtag_not_sp
    ; CHECK: liveins: $x0, $x1
    ; CHECK-NEXT: {{ $}}
    ; CHECK-NEXT: $x1 = frame-setup SUBXri $x1, 16, 0
    ; CHECK-NEXT: early-clobber $x1 = STRXpost killed renamable $x0, $x1, 16
    ; CHECK-NEXT: RET undef $lr, implicit $x1
    $x1 = frame-setup SUBXri $x1, 16, 0
    STRXui killed renamable $x0, $x1, 0
    $x1 = ADDXri $x1, 16, 0
    RET undef $lr, implicit $x1
...