; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc -mtriple=aarch64-unknown-linux-gnu < %s | FileCheck %s
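
; The per-lane expansions checked below appear to follow the usual
; multiply-high recipe for unsigned remainder by a constant (descriptive
; note, not part of the autogenerated assertions): each i16 lane x is
; zero-extended, multiplied by M = ceil(2^32 / d), the 64-bit product is
; shifted right by 32 to give the quotient q, and msub computes x - q * d.
; For d = 95 the constant built by "mov w9, #55879; movk w9, #689, lsl #16"
; is 689 * 65536 + 55879 = 45210183 = ceil(2^32 / 95).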
define <4 x i16> @fold_urem_vec_1(<4 x i16> %x) {
; CHECK-LABEL: fold_urem_vec_1:
; CHECK: // %bb.0:
; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
; CHECK-NEXT: umov w8, v0.h[0]
; CHECK-NEXT: mov w9, #55879
; CHECK-NEXT: movk w9, #689, lsl #16
; CHECK-NEXT: umov w10, v0.h[1]
; CHECK-NEXT: mov w11, #33826
; CHECK-NEXT: mov w12, #95
; CHECK-NEXT: movk w11, #528, lsl #16
; CHECK-NEXT: umov w13, v0.h[2]
; CHECK-NEXT: umull x9, w8, w9
; CHECK-NEXT: umull x11, w10, w11
; CHECK-NEXT: lsr x9, x9, #32
; CHECK-NEXT: lsr x11, x11, #32
; CHECK-NEXT: msub w8, w9, w12, w8
; CHECK-NEXT: mov w9, #48149
; CHECK-NEXT: movk w9, #668, lsl #16
; CHECK-NEXT: mov w12, #124
; CHECK-NEXT: umull x9, w13, w9
; CHECK-NEXT: msub w10, w11, w12, w10
; CHECK-NEXT: umov w11, v0.h[3]
; CHECK-NEXT: fmov s0, w8
; CHECK-NEXT: mov w12, #22281
; CHECK-NEXT: lsr x8, x9, #32
; CHECK-NEXT: mov w9, #98
; CHECK-NEXT: movk w12, #65, lsl #16
; CHECK-NEXT: msub w8, w8, w9, w13
; CHECK-NEXT: mov v0.h[1], w10
; CHECK-NEXT: umull x9, w11, w12
; CHECK-NEXT: mov w10, #1003
; CHECK-NEXT: lsr x9, x9, #32
; CHECK-NEXT: mov v0.h[2], w8
; CHECK-NEXT: msub w8, w9, w10, w11
; CHECK-NEXT: mov v0.h[3], w8
; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
; CHECK-NEXT: ret
%1 = urem <4 x i16> %x, <i16 95, i16 124, i16 98, i16 1003>
ret <4 x i16> %1
}
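
; Same expansion, but with the uniform divisor 95 every lane reuses the
; single constant 45210183 = ceil(2^32 / 95) kept in w9 (descriptive note,
; not part of the autogenerated assertions).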
define <4 x i16> @fold_urem_vec_2(<4 x i16> %x) {
; CHECK-LABEL: fold_urem_vec_2:
; CHECK: // %bb.0:
; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
; CHECK-NEXT: umov w8, v0.h[0]
; CHECK-NEXT: mov w9, #55879
; CHECK-NEXT: movk w9, #689, lsl #16
; CHECK-NEXT: umov w10, v0.h[1]
; CHECK-NEXT: mov w12, #95
; CHECK-NEXT: umov w13, v0.h[2]
; CHECK-NEXT: umull x11, w8, w9
; CHECK-NEXT: umull x14, w10, w9
; CHECK-NEXT: lsr x11, x11, #32
; CHECK-NEXT: msub w8, w11, w12, w8
; CHECK-NEXT: lsr x11, x14, #32
; CHECK-NEXT: umull x14, w13, w9
; CHECK-NEXT: msub w10, w11, w12, w10
; CHECK-NEXT: umov w11, v0.h[3]
; CHECK-NEXT: fmov s0, w8
; CHECK-NEXT: lsr x8, x14, #32
; CHECK-NEXT: msub w8, w8, w12, w13
; CHECK-NEXT: mov v0.h[1], w10
; CHECK-NEXT: umull x9, w11, w9
; CHECK-NEXT: lsr x9, x9, #32
; CHECK-NEXT: mov v0.h[2], w8
; CHECK-NEXT: msub w8, w9, w12, w11
; CHECK-NEXT: mov v0.h[3], w8
; CHECK-NEXT: // kill: def $d0 killed $d0 killed $q0
; CHECK-NEXT: ret
%1 = urem <4 x i16> %x, <i16 95, i16 95, i16 95, i16 95>
ret <4 x i16> %1
}

; Don't fold if we can combine urem with udiv.
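; The quotient q = (zext(x) * 45210183) >> 32 computed for the udiv is
; reused: the remainder lanes are formed as x - q * 95 (the msub instructions
; feeding v0) while the quotients themselves are inserted into v1, so the
; final "add v0.4h, v0.4h, v1.4h" needs no second multiply per lane
; (descriptive note, not part of the autogenerated assertions).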
define <4 x i16> @combine_urem_udiv(<4 x i16> %x) {
; CHECK-LABEL: combine_urem_udiv:
; CHECK: // %bb.0:
; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
; CHECK-NEXT: umov w8, v0.h[0]
; CHECK-NEXT: mov w9, #55879
; CHECK-NEXT: movk w9, #689, lsl #16
; CHECK-NEXT: umov w10, v0.h[1]
; CHECK-NEXT: mov w12, #95
; CHECK-NEXT: umov w14, v0.h[2]
; CHECK-NEXT: umov w15, v0.h[3]
; CHECK-NEXT: umull x11, w8, w9
; CHECK-NEXT: umull x13, w10, w9
; CHECK-NEXT: lsr x11, x11, #32
; CHECK-NEXT: lsr x13, x13, #32
; CHECK-NEXT: msub w8, w11, w12, w8
; CHECK-NEXT: msub w10, w13, w12, w10
; CHECK-NEXT: fmov s1, w11
; CHECK-NEXT: fmov s0, w8
; CHECK-NEXT: umull x8, w14, w9
; CHECK-NEXT: umull x9, w15, w9
; CHECK-NEXT: lsr x8, x8, #32
; CHECK-NEXT: mov v0.h[1], w10
; CHECK-NEXT: lsr x9, x9, #32
; CHECK-NEXT: msub w10, w8, w12, w14
; CHECK-NEXT: mov v1.h[1], w13
; CHECK-NEXT: msub w11, w9, w12, w15
; CHECK-NEXT: mov v0.h[2], w10
; CHECK-NEXT: mov v1.h[2], w8
; CHECK-NEXT: mov v0.h[3], w11
; CHECK-NEXT: mov v1.h[3], w9
; CHECK-NEXT: add v0.4h, v0.4h, v1.4h
; CHECK-NEXT: ret
%1 = urem <4 x i16> %x, <i16 95, i16 95, i16 95, i16 95>
%2 = udiv <4 x i16> %x, <i16 95, i16 95, i16 95, i16 95>
%3 = add <4 x i16> %1, %2
ret <4 x i16> %3
}

; Don't fold for divisors that are a power of two.
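; For a power-of-two divisor the remainder is a simple mask: urem by 64, 32
; and 8 becomes "and" with #0x3f, #0x1f and #0x7 respectively, while the
; remaining lane (divisor 95) still goes through the multiply-high expansion
; (descriptive note, not part of the autogenerated assertions).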
define <4 x i16> @dont_fold_urem_power_of_two(<4 x i16> %x) {
; CHECK-LABEL: dont_fold_urem_power_of_two:
; CHECK: // %bb.0:
; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
; CHECK-NEXT: umov w9, v0.h[0]
; CHECK-NEXT: umov w11, v0.h[1]
; CHECK-NEXT: umov w10, v0.h[3]
; CHECK-NEXT: mov w8, #55879
; CHECK-NEXT: movk w8, #689, lsl #16
; CHECK-NEXT: and w9, w9, #0x3f
; CHECK-NEXT: umull x8, w10, w8
; CHECK-NEXT: fmov s1, w9
; CHECK-NEXT: and w9, w11, #0x1f
; CHECK-NEXT: umov w11, v0.h[2]
; CHECK-NEXT: lsr x8, x8, #32
; CHECK-NEXT: mov v1.h[1], w9
; CHECK-NEXT: mov w9, #95
; CHECK-NEXT: and w11, w11, #0x7
; CHECK-NEXT: msub w8, w8, w9, w10
; CHECK-NEXT: mov v1.h[2], w11
; CHECK-NEXT: mov v1.h[3], w8
; CHECK-NEXT: fmov d0, d1
; CHECK-NEXT: ret
%1 = urem <4 x i16> %x, <i16 64, i16 32, i16 8, i16 95>
ret <4 x i16> %1
}

; Don't fold if the divisor is one.
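; x urem 1 is always 0, so lane 0 keeps the zero provided by
; "movi d1, #0000000000000000"; only lanes 1-3 (divisors 654, 23 and 5423)
; get the multiply-high expansion, e.g. 100 * 65536 + 13629 = 6567229
; = ceil(2^32 / 654) (descriptive note, not part of the autogenerated
; assertions).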
define <4 x i16> @dont_fold_srem_one(<4 x i16> %x) {
; CHECK-LABEL: dont_fold_srem_one:
; CHECK: // %bb.0:
; CHECK-NEXT: // kill: def $d0 killed $d0 def $q0
; CHECK-NEXT: umov w8, v0.h[1]
; CHECK-NEXT: mov w9, #13629
; CHECK-NEXT: movk w9, #100, lsl #16
; CHECK-NEXT: umov w10, v0.h[2]
; CHECK-NEXT: mov w11, #25645
; CHECK-NEXT: mov w12, #654
; CHECK-NEXT: movk w11, #2849, lsl #16
; CHECK-NEXT: movi d1, #0000000000000000
; CHECK-NEXT: umull x9, w8, w9
; CHECK-NEXT: mov w13, #5560
; CHECK-NEXT: umull x11, w10, w11
; CHECK-NEXT: movk w13, #12, lsl #16
; CHECK-NEXT: lsr x9, x9, #32
; CHECK-NEXT: lsr x11, x11, #32
; CHECK-NEXT: msub w8, w9, w12, w8
; CHECK-NEXT: umov w9, v0.h[3]
; CHECK-NEXT: mov w12, #23
; CHECK-NEXT: msub w10, w11, w12, w10
; CHECK-NEXT: mov w11, #5423
; CHECK-NEXT: mov v1.h[1], w8
; CHECK-NEXT: umull x8, w9, w13
; CHECK-NEXT: lsr x8, x8, #32
; CHECK-NEXT: mov v1.h[2], w10
; CHECK-NEXT: msub w8, w8, w11, w9
; CHECK-NEXT: mov v1.h[3], w8
; CHECK-NEXT: fmov d0, d1
; CHECK-NEXT: ret
%1 = urem <4 x i16> %x, <i16 1, i16 654, i16 23, i16 5423>
ret <4 x i16> %1
}

; Don't fold if the divisor is 2^16.
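; Only a bare "ret" is expected here: the urem is folded away entirely and
; the argument is returned unchanged in d0 (descriptive note, not part of the
; autogenerated assertions).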
define <4 x i16> @dont_fold_urem_i16_smax(<4 x i16> %x) {
; CHECK-LABEL: dont_fold_urem_i16_smax:
; CHECK: // %bb.0:
; CHECK-NEXT: ret
%1 = urem <4 x i16> %x, <i16 1, i16 65536, i16 23, i16 5423>
ret <4 x i16> %1
}

; Don't fold i64 urem.
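; For the <4 x i64> lanes the expansion uses umulh to take the high 64 bits
; of a 64x64-bit product with a per-divisor constant, with a pre-shift for
; the 654 lane and an extra sub/add/shift fixup for the 23 lane, followed by
; msub for the remainder; the divisor-1 lane is materialized as zero via
; "movi v0.2d, #0000000000000000" (descriptive note, not part of the
; autogenerated assertions).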
define <4 x i64> @dont_fold_urem_i64(<4 x i64> %x) {
; CHECK-LABEL: dont_fold_urem_i64:
; CHECK: // %bb.0:
; CHECK-NEXT: mov x8, #17097
; CHECK-NEXT: fmov x9, d1
; CHECK-NEXT: movk x8, #45590, lsl #16
; CHECK-NEXT: mov x13, #21445
; CHECK-NEXT: movk x8, #34192, lsl #32
; CHECK-NEXT: movk x13, #1603, lsl #16
; CHECK-NEXT: movk x8, #25644, lsl #48
; CHECK-NEXT: movk x13, #15432, lsl #32
; CHECK-NEXT: mov x10, v0.d[1]
; CHECK-NEXT: movk x13, #25653, lsl #48
; CHECK-NEXT: umulh x8, x9, x8
; CHECK-NEXT: mov x11, v1.d[1]
; CHECK-NEXT: sub x12, x9, x8
; CHECK-NEXT: lsr x14, x10, #1
; CHECK-NEXT: add x8, x8, x12, lsr #1
; CHECK-NEXT: mov x12, #12109
; CHECK-NEXT: movk x12, #52170, lsl #16
; CHECK-NEXT: umulh x13, x14, x13
; CHECK-NEXT: movk x12, #28749, lsl #32
; CHECK-NEXT: mov w14, #23
; CHECK-NEXT: movk x12, #49499, lsl #48
; CHECK-NEXT: lsr x8, x8, #4
; CHECK-NEXT: lsr x13, x13, #7
; CHECK-NEXT: umulh x12, x11, x12
; CHECK-NEXT: msub x8, x8, x14, x9
; CHECK-NEXT: mov w9, #5423
; CHECK-NEXT: lsr x12, x12, #12
; CHECK-NEXT: mov w14, #654
; CHECK-NEXT: movi v0.2d, #0000000000000000
; CHECK-NEXT: msub x9, x12, x9, x11
; CHECK-NEXT: msub x10, x13, x14, x10
; CHECK-NEXT: fmov d1, x8
; CHECK-NEXT: mov v1.d[1], x9
; CHECK-NEXT: mov v0.d[1], x10
; CHECK-NEXT: ret
%1 = urem <4 x i64> %x, <i64 1, i64 654, i64 23, i64 5423>
ret <4 x i64> %1
}