; RUN: llc < %s -mtriple=x86_64-apple-darwin -mcpu=knl -mattr=+avx512cd | FileCheck %s --check-prefix=ALL --check-prefix=AVX512 --check-prefix=AVX512CD
; RUN: llc < %s -mtriple=x86_64-apple-darwin -mcpu=knl -mattr=+avx512bw | FileCheck %s --check-prefix=AVX512BW
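; Tests lowering of the llvm.ctlz.* (count leading zeros) intrinsics for
; 512-bit vectors. With AVX512CD, i64 and i32 elements map directly to
; vplzcntq/vplzcntd; narrower element types are widened first. The "u"
; variants pass true for the is_zero_undef flag, i.e. the result is
; undefined when an input element is zero.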
define <8 x i64> @testv8i64(<8 x i64> %in) nounwind {
; ALL-LABEL: testv8i64:
; ALL: ## BB#0:
; ALL-NEXT: vplzcntq %zmm0, %zmm0
; ALL-NEXT: retq
%out = call <8 x i64> @llvm.ctlz.v8i64(<8 x i64> %in, i1 0)
ret <8 x i64> %out
}
define <8 x i64> @testv8i64u(<8 x i64> %in) nounwind {
; ALL-LABEL: testv8i64u:
; ALL: ## BB#0:
; ALL-NEXT: vplzcntq %zmm0, %zmm0
; ALL-NEXT: retq
%out = call <8 x i64> @llvm.ctlz.v8i64(<8 x i64> %in, i1 -1)
ret <8 x i64> %out
}
define <16 x i32> @testv16i32(<16 x i32> %in) nounwind {
; ALL-LABEL: testv16i32:
; ALL: ## BB#0:
; ALL-NEXT: vplzcntd %zmm0, %zmm0
; ALL-NEXT: retq
%out = call <16 x i32> @llvm.ctlz.v16i32(<16 x i32> %in, i1 0)
ret <16 x i32> %out
}
define <16 x i32> @testv16i32u(<16 x i32> %in) nounwind {
; ALL-LABEL: testv16i32u:
; ALL: ## BB#0:
; ALL-NEXT: vplzcntd %zmm0, %zmm0
; ALL-NEXT: retq
%out = call <16 x i32> @llvm.ctlz.v16i32(<16 x i32> %in, i1 -1)
ret <16 x i32> %out
}
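; v32i16 has no native lzcnt: each 256-bit half is zero-extended to v16i32,
; vplzcntd is applied, the result is truncated back to words, and 16 is
; subtracted to correct the count from 32-bit to 16-bit element width.
; With AVX512BW the v32i16 type is legal as a single zmm register, so the
; halves are extracted from and reinserted into zmm0 around the same sequence.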
define <32 x i16> @testv32i16(<32 x i16> %in) nounwind {
; ALL-LABEL: testv32i16:
; ALL: ## BB#0:
; ALL-NEXT: vpmovzxwd %ymm0, %zmm0
; ALL-NEXT: vplzcntd %zmm0, %zmm0
; ALL-NEXT: vpmovdw %zmm0, %ymm0
; ALL-NEXT: vmovdqa {{.*#+}} ymm2 = [16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]
; ALL-NEXT: vpsubw %ymm2, %ymm0, %ymm0
; ALL-NEXT: vpmovzxwd %ymm1, %zmm1
; ALL-NEXT: vplzcntd %zmm1, %zmm1
; ALL-NEXT: vpmovdw %zmm1, %ymm1
; ALL-NEXT: vpsubw %ymm2, %ymm1, %ymm1
; ALL-NEXT: retq
;
; AVX512BW-LABEL: testv32i16:
; AVX512BW: ## BB#0:
; AVX512BW-NEXT: vextracti64x4 $1, %zmm0, %ymm1
; AVX512BW-NEXT: vpmovzxwd %ymm1, %zmm1
; AVX512BW-NEXT: vplzcntd %zmm1, %zmm1
; AVX512BW-NEXT: vpmovdw %zmm1, %ymm1
; AVX512BW-NEXT: vmovdqa {{.*#+}} ymm2 = [16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]
; AVX512BW-NEXT: vpsubw %ymm2, %ymm1, %ymm1
; AVX512BW-NEXT: vpmovzxwd %ymm0, %zmm0
; AVX512BW-NEXT: vplzcntd %zmm0, %zmm0
; AVX512BW-NEXT: vpmovdw %zmm0, %ymm0
; AVX512BW-NEXT: vpsubw %ymm2, %ymm0, %ymm0
; AVX512BW-NEXT: vinserti64x4 $1, %ymm1, %zmm0, %zmm0
; AVX512BW-NEXT: retq
%out = call <32 x i16> @llvm.ctlz.v32i16(<32 x i16> %in, i1 0)
ret <32 x i16> %out
}
define <32 x i16> @testv32i16u(<32 x i16> %in) nounwind {
; ALL-LABEL: testv32i16u:
; ALL: ## BB#0:
; ALL-NEXT: vpmovzxwd %ymm0, %zmm0
; ALL-NEXT: vplzcntd %zmm0, %zmm0
; ALL-NEXT: vpmovdw %zmm0, %ymm0
; ALL-NEXT: vmovdqa {{.*#+}} ymm2 = [16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]
; ALL-NEXT: vpsubw %ymm2, %ymm0, %ymm0
; ALL-NEXT: vpmovzxwd %ymm1, %zmm1
; ALL-NEXT: vplzcntd %zmm1, %zmm1
; ALL-NEXT: vpmovdw %zmm1, %ymm1
; ALL-NEXT: vpsubw %ymm2, %ymm1, %ymm1
; ALL-NEXT: retq
;
; AVX512BW-LABEL: testv32i16u:
; AVX512BW: ## BB#0:
; AVX512BW-NEXT: vextracti64x4 $1, %zmm0, %ymm1
; AVX512BW-NEXT: vpmovzxwd %ymm1, %zmm1
; AVX512BW-NEXT: vplzcntd %zmm1, %zmm1
; AVX512BW-NEXT: vpmovdw %zmm1, %ymm1
; AVX512BW-NEXT: vmovdqa {{.*#+}} ymm2 = [16,16,16,16,16,16,16,16,16,16,16,16,16,16,16,16]
; AVX512BW-NEXT: vpsubw %ymm2, %ymm1, %ymm1
; AVX512BW-NEXT: vpmovzxwd %ymm0, %zmm0
; AVX512BW-NEXT: vplzcntd %zmm0, %zmm0
; AVX512BW-NEXT: vpmovdw %zmm0, %ymm0
; AVX512BW-NEXT: vpsubw %ymm2, %ymm0, %ymm0
; AVX512BW-NEXT: vinserti64x4 $1, %ymm1, %zmm0, %zmm0
; AVX512BW-NEXT: retq
%out = call <32 x i16> @llvm.ctlz.v32i16(<32 x i16> %in, i1 -1)
ret <32 x i16> %out
}
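; v64i8 follows the same pattern at 128-bit granularity: each xmm quarter is
; zero-extended to v16i32, vplzcntd is applied, the result is truncated back
; to bytes, and 24 is subtracted to correct for the 32-bit element width.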
define <64 x i8> @testv64i8(<64 x i8> %in) nounwind {
; ALL-LABEL: testv64i8:
; ALL: ## BB#0:
; ALL-NEXT: vextractf128 $1, %ymm0, %xmm2
; ALL-NEXT: vpmovzxbd %xmm2, %zmm2
; ALL-NEXT: vplzcntd %zmm2, %zmm2
; ALL-NEXT: vpmovdb %zmm2, %xmm2
; ALL-NEXT: vmovdqa {{.*#+}} xmm3 = [24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24]
; ALL-NEXT: vpsubb %xmm3, %xmm2, %xmm2
; ALL-NEXT: vpmovzxbd %xmm0, %zmm0
; ALL-NEXT: vplzcntd %zmm0, %zmm0
; ALL-NEXT: vpmovdb %zmm0, %xmm0
; ALL-NEXT: vpsubb %xmm3, %xmm0, %xmm0
; ALL-NEXT: vinserti128 $1, %xmm2, %ymm0, %ymm0
; ALL-NEXT: vextractf128 $1, %ymm1, %xmm2
; ALL-NEXT: vpmovzxbd %xmm2, %zmm2
; ALL-NEXT: vplzcntd %zmm2, %zmm2
; ALL-NEXT: vpmovdb %zmm2, %xmm2
; ALL-NEXT: vpsubb %xmm3, %xmm2, %xmm2
; ALL-NEXT: vpmovzxbd %xmm1, %zmm1
; ALL-NEXT: vplzcntd %zmm1, %zmm1
; ALL-NEXT: vpmovdb %zmm1, %xmm1
; ALL-NEXT: vpsubb %xmm3, %xmm1, %xmm1
; ALL-NEXT: vinserti128 $1, %xmm2, %ymm1, %ymm1
; ALL-NEXT: retq
;
; AVX512BW-LABEL: testv64i8:
; AVX512BW: ## BB#0:
; AVX512BW-NEXT: vextracti64x4 $1, %zmm0, %ymm1
; AVX512BW-NEXT: vextracti128 $1, %ymm1, %xmm2
; AVX512BW-NEXT: vpmovzxbd %xmm2, %zmm2
; AVX512BW-NEXT: vplzcntd %zmm2, %zmm2
; AVX512BW-NEXT: vpmovdb %zmm2, %xmm2
; AVX512BW-NEXT: vmovdqa {{.*#+}} xmm3 = [24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24]
; AVX512BW-NEXT: vpsubb %xmm3, %xmm2, %xmm2
; AVX512BW-NEXT: vpmovzxbd %xmm1, %zmm1
; AVX512BW-NEXT: vplzcntd %zmm1, %zmm1
; AVX512BW-NEXT: vpmovdb %zmm1, %xmm1
; AVX512BW-NEXT: vpsubb %xmm3, %xmm1, %xmm1
; AVX512BW-NEXT: vinserti128 $1, %xmm2, %ymm1, %ymm1
; AVX512BW-NEXT: vextracti128 $1, %ymm0, %xmm2
; AVX512BW-NEXT: vpmovzxbd %xmm2, %zmm2
; AVX512BW-NEXT: vplzcntd %zmm2, %zmm2
; AVX512BW-NEXT: vpmovdb %zmm2, %xmm2
; AVX512BW-NEXT: vpsubb %xmm3, %xmm2, %xmm2
; AVX512BW-NEXT: vpmovzxbd %xmm0, %zmm0
; AVX512BW-NEXT: vplzcntd %zmm0, %zmm0
; AVX512BW-NEXT: vpmovdb %zmm0, %xmm0
; AVX512BW-NEXT: vpsubb %xmm3, %xmm0, %xmm0
; AVX512BW-NEXT: vinserti128 $1, %xmm2, %ymm0, %ymm0
; AVX512BW-NEXT: vinserti64x4 $1, %ymm1, %zmm0, %zmm0
; AVX512BW-NEXT: retq
%out = call <64 x i8> @llvm.ctlz.v64i8(<64 x i8> %in, i1 0)
ret <64 x i8> %out
}
define <64 x i8> @testv64i8u(<64 x i8> %in) nounwind {
; ALL-LABEL: testv64i8u:
; ALL: ## BB#0:
; ALL-NEXT: vextractf128 $1, %ymm0, %xmm2
; ALL-NEXT: vpmovzxbd %xmm2, %zmm2
; ALL-NEXT: vplzcntd %zmm2, %zmm2
; ALL-NEXT: vpmovdb %zmm2, %xmm2
; ALL-NEXT: vmovdqa {{.*#+}} xmm3 = [24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24]
; ALL-NEXT: vpsubb %xmm3, %xmm2, %xmm2
; ALL-NEXT: vpmovzxbd %xmm0, %zmm0
; ALL-NEXT: vplzcntd %zmm0, %zmm0
; ALL-NEXT: vpmovdb %zmm0, %xmm0
; ALL-NEXT: vpsubb %xmm3, %xmm0, %xmm0
; ALL-NEXT: vinserti128 $1, %xmm2, %ymm0, %ymm0
; ALL-NEXT: vextractf128 $1, %ymm1, %xmm2
; ALL-NEXT: vpmovzxbd %xmm2, %zmm2
; ALL-NEXT: vplzcntd %zmm2, %zmm2
; ALL-NEXT: vpmovdb %zmm2, %xmm2
; ALL-NEXT: vpsubb %xmm3, %xmm2, %xmm2
; ALL-NEXT: vpmovzxbd %xmm1, %zmm1
; ALL-NEXT: vplzcntd %zmm1, %zmm1
; ALL-NEXT: vpmovdb %zmm1, %xmm1
; ALL-NEXT: vpsubb %xmm3, %xmm1, %xmm1
; ALL-NEXT: vinserti128 $1, %xmm2, %ymm1, %ymm1
; ALL-NEXT: retq
;
; AVX512BW-LABEL: testv64i8u:
; AVX512BW: ## BB#0:
; AVX512BW-NEXT: vextracti64x4 $1, %zmm0, %ymm1
; AVX512BW-NEXT: vextracti128 $1, %ymm1, %xmm2
; AVX512BW-NEXT: vpmovzxbd %xmm2, %zmm2
; AVX512BW-NEXT: vplzcntd %zmm2, %zmm2
; AVX512BW-NEXT: vpmovdb %zmm2, %xmm2
; AVX512BW-NEXT: vmovdqa {{.*#+}} xmm3 = [24,24,24,24,24,24,24,24,24,24,24,24,24,24,24,24]
; AVX512BW-NEXT: vpsubb %xmm3, %xmm2, %xmm2
; AVX512BW-NEXT: vpmovzxbd %xmm1, %zmm1
; AVX512BW-NEXT: vplzcntd %zmm1, %zmm1
; AVX512BW-NEXT: vpmovdb %zmm1, %xmm1
; AVX512BW-NEXT: vpsubb %xmm3, %xmm1, %xmm1
; AVX512BW-NEXT: vinserti128 $1, %xmm2, %ymm1, %ymm1
; AVX512BW-NEXT: vextracti128 $1, %ymm0, %xmm2
; AVX512BW-NEXT: vpmovzxbd %xmm2, %zmm2
; AVX512BW-NEXT: vplzcntd %zmm2, %zmm2
; AVX512BW-NEXT: vpmovdb %zmm2, %xmm2
; AVX512BW-NEXT: vpsubb %xmm3, %xmm2, %xmm2
; AVX512BW-NEXT: vpmovzxbd %xmm0, %zmm0
; AVX512BW-NEXT: vplzcntd %zmm0, %zmm0
; AVX512BW-NEXT: vpmovdb %zmm0, %xmm0
; AVX512BW-NEXT: vpsubb %xmm3, %xmm0, %xmm0
; AVX512BW-NEXT: vinserti128 $1, %xmm2, %ymm0, %ymm0
; AVX512BW-NEXT: vinserti64x4 $1, %ymm1, %zmm0, %zmm0
; AVX512BW-NEXT: retq
%out = call <64 x i8> @llvm.ctlz.v64i8(<64 x i8> %in, i1 -1)
ret <64 x i8> %out
}
declare <8 x i64> @llvm.ctlz.v8i64(<8 x i64>, i1)
declare <16 x i32> @llvm.ctlz.v16i32(<16 x i32>, i1)
declare <32 x i16> @llvm.ctlz.v32i16(<32 x i16>, i1)
declare <64 x i8> @llvm.ctlz.v64i8(<64 x i8>, i1)