; RUN: llc -aarch64-sve-vector-bits-min=128 < %s | FileCheck %s -D#VBYTES=16 -check-prefix=NO_SVE
; RUN: llc -aarch64-sve-vector-bits-min=256 < %s | FileCheck %s -D#VBYTES=32 -check-prefixes=CHECK,VBITS_EQ_256
; RUN: llc -aarch64-sve-vector-bits-min=384 < %s | FileCheck %s -D#VBYTES=32 -check-prefixes=CHECK
; RUN: llc -aarch64-sve-vector-bits-min=512 < %s | FileCheck %s -D#VBYTES=64 -check-prefixes=CHECK,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=640 < %s | FileCheck %s -D#VBYTES=64 -check-prefixes=CHECK,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=768 < %s | FileCheck %s -D#VBYTES=64 -check-prefixes=CHECK,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=896 < %s | FileCheck %s -D#VBYTES=64 -check-prefixes=CHECK,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=1024 < %s | FileCheck %s -D#VBYTES=128 -check-prefixes=CHECK,VBITS_GE_1024,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=1152 < %s | FileCheck %s -D#VBYTES=128 -check-prefixes=CHECK,VBITS_GE_1024,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=1280 < %s | FileCheck %s -D#VBYTES=128 -check-prefixes=CHECK,VBITS_GE_1024,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=1408 < %s | FileCheck %s -D#VBYTES=128 -check-prefixes=CHECK,VBITS_GE_1024,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=1536 < %s | FileCheck %s -D#VBYTES=128 -check-prefixes=CHECK,VBITS_GE_1024,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=1664 < %s | FileCheck %s -D#VBYTES=128 -check-prefixes=CHECK,VBITS_GE_1024,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=1792 < %s | FileCheck %s -D#VBYTES=128 -check-prefixes=CHECK,VBITS_GE_1024,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=1920 < %s | FileCheck %s -D#VBYTES=128 -check-prefixes=CHECK,VBITS_GE_1024,VBITS_GE_512
; RUN: llc -aarch64-sve-vector-bits-min=2048 < %s | FileCheck %s -D#VBYTES=256 -check-prefixes=CHECK,VBITS_GE_2048,VBITS_GE_1024,VBITS_GE_512

target triple = "aarch64-unknown-linux-gnu"

; Don't use SVE when its registers are no bigger than NEON.
; NO_SVE-NOT: ptrue

define <4 x i32> @load_zext_v4i16i32(<4 x i16>* %ap) #0 {
; CHECK-LABEL: load_zext_v4i16i32
; CHECK: ldr d[[D0:[0-9]+]], [x0]
; CHECK-NEXT: ushll v[[D0]].4s, v[[D0]].4h, #0
; CHECK-NEXT: ret
%a = load <4 x i16>, <4 x i16>* %ap
%val = zext <4 x i16> %a to <4 x i32>
ret <4 x i32> %val
}

define <8 x i32> @load_zext_v8i16i32(<8 x i16>* %ap) #0 {
; CHECK-LABEL: load_zext_v8i16i32
; CHECK: ptrue [[P0:p[0-9]+]].s, vl8
; CHECK-NEXT: ld1h { [[Z0:z[0-9]+]].s }, [[P0]]/z, [x0]
; CHECK-NEXT: st1w { [[Z0]].s }, [[P0]], [x8]
; CHECK-NEXT: ret
%a = load <8 x i16>, <8 x i16>* %ap
%val = zext <8 x i16> %a to <8 x i32>
ret <8 x i32> %val
}

define <16 x i32> @load_zext_v16i16i32(<16 x i16>* %ap) #0 {
; CHECK-LABEL: load_zext_v16i16i32
; VBITS_GE_512: ptrue [[P0:p[0-9]+]].s, vl16
; VBITS_GE_512-NEXT: ld1h { [[Z0:z[0-9]+]].s }, [[P0]]/z, [x0]
; VBITS_GE_512-NEXT: st1w { [[Z0]].s }, [[P0]], [x8]
; VBITS_GE_512-NEXT: ret
; Ensure sensible type legalisation.
; VBITS_EQ_256-DAG: ptrue [[PG:p[0-9]+]].h, vl16
; VBITS_EQ_256-DAG: ld1h { [[Z0:z[0-9]+]].h }, [[PG]]/z, [x0]
; VBITS_EQ_256-DAG: mov x9, #8
; VBITS_EQ_256-DAG: ptrue [[PG1:p[0-9]+]].s, vl8
; VBITS_EQ_256-DAG: uunpklo [[R0:z[0-9]+]].s, [[Z0]].h
; VBITS_EQ_256-DAG: ext [[Z0]].b, [[Z0]].b, [[Z0]].b, #16
; VBITS_EQ_256-DAG: uunpklo [[R1:z[0-9]+]].s, [[Z0]].h
; VBITS_EQ_256-DAG: st1w { [[R1]].s }, [[PG1]], [x8, x9, lsl #2]
; VBITS_EQ_256-DAG: st1w { [[R0]].s }, [[PG1]], [x8]
; VBITS_EQ_256-DAG: ret
%a = load <16 x i16>, <16 x i16>* %ap
%val = zext <16 x i16> %a to <16 x i32>
ret <16 x i32> %val
}

define <32 x i32> @load_zext_v32i16i32(<32 x i16>* %ap) #0 {
; CHECK-LABEL: load_zext_v32i16i32
; VBITS_GE_1024: ptrue [[P0:p[0-9]+]].s, vl32
; VBITS_GE_1024-NEXT: ld1h { [[Z0:z[0-9]+]].s }, [[P0]]/z, [x0]
; VBITS_GE_1024-NEXT: st1w { [[Z0]].s }, [[P0]], [x8]
; VBITS_GE_1024-NEXT: ret
%a = load <32 x i16>, <32 x i16>* %ap
%val = zext <32 x i16> %a to <32 x i32>
ret <32 x i32> %val
}

define <64 x i32> @load_zext_v64i16i32(<64 x i16>* %ap) #0 {
; CHECK-LABEL: load_zext_v64i16i32
; VBITS_GE_2048: ptrue [[P0:p[0-9]+]].s, vl64
; VBITS_GE_2048-NEXT: ld1h { [[Z0:z[0-9]+]].s }, [[P0]]/z, [x0]
; VBITS_GE_2048-NEXT: st1w { [[Z0]].s }, [[P0]], [x8]
; VBITS_GE_2048-NEXT: ret
%a = load <64 x i16>, <64 x i16>* %ap
%val = zext <64 x i16> %a to <64 x i32>
ret <64 x i32> %val
}

define <4 x i32> @load_sext_v4i16i32(<4 x i16>* %ap) #0 {
; CHECK-LABEL: load_sext_v4i16i32
; CHECK: ldr d[[D0:[0-9]+]], [x0]
; CHECK-NEXT: sshll v[[D0]].4s, v[[D0]].4h, #0
; CHECK-NEXT: ret
%a = load <4 x i16>, <4 x i16>* %ap
%val = sext <4 x i16> %a to <4 x i32>
ret <4 x i32> %val
}

define <8 x i32> @load_sext_v8i16i32(<8 x i16>* %ap) #0 {
; CHECK-LABEL: load_sext_v8i16i32
; CHECK: ptrue [[P0:p[0-9]+]].s, vl8
; CHECK-NEXT: ld1sh { [[Z0:z[0-9]+]].s }, [[P0]]/z, [x0]
; CHECK-NEXT: st1w { [[Z0]].s }, [[P0]], [x8]
; CHECK-NEXT: ret
%a = load <8 x i16>, <8 x i16>* %ap
%val = sext <8 x i16> %a to <8 x i32>
ret <8 x i32> %val
}

define <16 x i32> @load_sext_v16i16i32(<16 x i16>* %ap) #0 {
; CHECK-LABEL: load_sext_v16i16i32
; VBITS_GE_512: ptrue [[P0:p[0-9]+]].s, vl16
; VBITS_GE_512-NEXT: ld1sh { [[Z0:z[0-9]+]].s }, [[P0]]/z, [x0]
; VBITS_GE_512-NEXT: st1w { [[Z0]].s }, [[P0]], [x8]
; VBITS_GE_512-NEXT: ret
; Ensure sensible type legalisation.
; VBITS_EQ_256-DAG: ptrue [[PG:p[0-9]+]].h, vl16
; VBITS_EQ_256-DAG: ld1h { [[Z0:z[0-9]+]].h }, [[PG]]/z, [x0]
; VBITS_EQ_256-DAG: mov x9, #8
; VBITS_EQ_256-DAG: ptrue [[PG1:p[0-9]+]].s, vl8
; VBITS_EQ_256-DAG: sunpklo [[R0:z[0-9]+]].s, [[Z0]].h
; VBITS_EQ_256-DAG: ext [[Z0]].b, [[Z0]].b, [[Z0]].b, #16
; VBITS_EQ_256-DAG: sunpklo [[R1:z[0-9]+]].s, [[Z0]].h
; VBITS_EQ_256-DAG: st1w { [[R1]].s }, [[PG1]], [x8, x9, lsl #2]
; VBITS_EQ_256-DAG: st1w { [[R0]].s }, [[PG1]], [x8]
; VBITS_EQ_256-DAG: ret
%a = load <16 x i16>, <16 x i16>* %ap
%val = sext <16 x i16> %a to <16 x i32>
ret <16 x i32> %val
}

define <32 x i32> @load_sext_v32i16i32(<32 x i16>* %ap) #0 {
; CHECK-LABEL: load_sext_v32i16i32
; VBITS_GE_1024: ptrue [[P0:p[0-9]+]].s, vl32
; VBITS_GE_1024-NEXT: ld1sh { [[Z0:z[0-9]+]].s }, [[P0]]/z, [x0]
; VBITS_GE_1024-NEXT: st1w { [[Z0]].s }, [[P0]], [x8]
; VBITS_GE_1024-NEXT: ret
%a = load <32 x i16>, <32 x i16>* %ap
%val = sext <32 x i16> %a to <32 x i32>
ret <32 x i32> %val
}

define <64 x i32> @load_sext_v64i16i32(<64 x i16>* %ap) #0 {
; CHECK-LABEL: load_sext_v64i16i32
; VBITS_GE_2048: ptrue [[P0:p[0-9]+]].s, vl64
; VBITS_GE_2048-NEXT: ld1sh { [[Z0:z[0-9]+]].s }, [[P0]]/z, [x0]
; VBITS_GE_2048-NEXT: st1w { [[Z0]].s }, [[P0]], [x8]
; VBITS_GE_2048-NEXT: ret
%a = load <64 x i16>, <64 x i16>* %ap
%val = sext <64 x i16> %a to <64 x i32>
ret <64 x i32> %val
}

define <32 x i64> @load_zext_v32i8i64(<32 x i8>* %ap) #0 {
; CHECK-LABEL: load_zext_v32i8i64
; VBITS_GE_2048: ptrue [[P0:p[0-9]+]].d, vl32
; VBITS_GE_2048-NEXT: ld1b { [[Z0:z[0-9]+]].d }, [[P0]]/z, [x0]
; VBITS_GE_2048-NEXT: st1d { [[Z0]].d }, [[P0]], [x8]
; VBITS_GE_2048-NEXT: ret
%a = load <32 x i8>, <32 x i8>* %ap
%val = zext <32 x i8> %a to <32 x i64>
ret <32 x i64> %val
}

define <32 x i64> @load_sext_v32i8i64(<32 x i8>* %ap) #0 {
; CHECK-LABEL: load_sext_v32i8i64
; VBITS_GE_2048: ptrue [[P0:p[0-9]+]].d, vl32
; VBITS_GE_2048-NEXT: ld1sb { [[Z0:z[0-9]+]].d }, [[P0]]/z, [x0]
; VBITS_GE_2048-NEXT: st1d { [[Z0]].d }, [[P0]], [x8]
; VBITS_GE_2048-NEXT: ret
%a = load <32 x i8>, <32 x i8>* %ap
%val = sext <32 x i8> %a to <32 x i64>
ret <32 x i64> %val
}

define <32 x i64> @load_zext_v32i16i64(<32 x i16>* %ap) #0 {
; CHECK-LABEL: load_zext_v32i16i64
; VBITS_GE_2048: ptrue [[P0:p[0-9]+]].d, vl32
; VBITS_GE_2048-NEXT: ld1h { [[Z0:z[0-9]+]].d }, [[P0]]/z, [x0]
; VBITS_GE_2048-NEXT: st1d { [[Z0]].d }, [[P0]], [x8]
; VBITS_GE_2048-NEXT: ret
%a = load <32 x i16>, <32 x i16>* %ap
%val = zext <32 x i16> %a to <32 x i64>
ret <32 x i64> %val
}

define <32 x i64> @load_sext_v32i16i64(<32 x i16>* %ap) #0 {
; CHECK-LABEL: load_sext_v32i16i64
; VBITS_GE_2048: ptrue [[P0:p[0-9]+]].d, vl32
; VBITS_GE_2048-NEXT: ld1sh { [[Z0:z[0-9]+]].d }, [[P0]]/z, [x0]
; VBITS_GE_2048-NEXT: st1d { [[Z0]].d }, [[P0]], [x8]
; VBITS_GE_2048-NEXT: ret
%a = load <32 x i16>, <32 x i16>* %ap
%val = sext <32 x i16> %a to <32 x i64>
ret <32 x i64> %val
}

define <32 x i64> @load_zext_v32i32i64(<32 x i32>* %ap) #0 {
; CHECK-LABEL: load_zext_v32i32i64
; VBITS_GE_2048: ptrue [[P0:p[0-9]+]].d, vl32
; VBITS_GE_2048-NEXT: ld1w { [[Z0:z[0-9]+]].d }, [[P0]]/z, [x0]
; VBITS_GE_2048-NEXT: st1d { [[Z0]].d }, [[P0]], [x8]
; VBITS_GE_2048-NEXT: ret
%a = load <32 x i32>, <32 x i32>* %ap
%val = zext <32 x i32> %a to <32 x i64>
ret <32 x i64> %val
}

define <32 x i64> @load_sext_v32i32i64(<32 x i32>* %ap) #0 {
; CHECK-LABEL: load_sext_v32i32i64
; VBITS_GE_2048: ptrue [[P0:p[0-9]+]].d, vl32
; VBITS_GE_2048-NEXT: ld1sw { [[Z0:z[0-9]+]].d }, [[P0]]/z, [x0]
; VBITS_GE_2048-NEXT: st1d { [[Z0]].d }, [[P0]], [x8]
; VBITS_GE_2048-NEXT: ret
%a = load <32 x i32>, <32 x i32>* %ap
%val = sext <32 x i32> %a to <32 x i64>
ret <32 x i64> %val
}

attributes #0 = { "target-features"="+sve" }