From b01ff685400365f55c5333c29c2227842d61e984 Mon Sep 17 00:00:00 2001
From: Craig Topper <craig.topper@gmail.com>
Date: Tue, 9 Aug 2016 03:06:26 +0000
Subject: [PATCH 2/5] [X86] Remove unnecessary bitcast from the front of
AVX1Only 256-bit logical operation patterns.
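
The bitcast only restricted when these patterns could match; the
VANDPS/VORPS/VXORPS/VANDNPS instructions operate on any 256-bit
value, so the patterns can match the bare v4i64 nodes directly and
the memory forms apply in more cases, as the test updates below show.
As an illustrative sketch (not one of the tests below; the function
name is made up for this example), IR of this shape is now covered:

  define <4 x i64> @and_v4i64_load(<4 x i64> %x, <4 x i64>* %p) {
    %y = load <4 x i64>, <4 x i64>* %p
    %r = and <4 x i64> %x, %y
    ret <4 x i64> %r
  }

Compiled with llc -mattr=+avx (AVX1 without AVX2), the 'and' with a
loaded operand should select VANDPSYrm, folding the load into the
logical operation without a bc_v8f32 wrapper on the node.
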
git-svn-id: https://llvm.org/svn/llvm-project/llvm/trunk@278088 91177308-0d34-0410-b5e6-96231b3b80d8
---
lib/Target/X86/X86InstrSSE.td | 8 +++----
test/CodeGen/X86/WidenArith.ll | 2 +-
test/CodeGen/X86/merge-consecutive-loads-256.ll | 26 ++++++---------------
test/CodeGen/X86/v8i1-masks.ll | 4 ++--
test/CodeGen/X86/vec_int_to_fp.ll | 30 ++++++++++++-------------
test/CodeGen/X86/vec_uint_to_fp-fastmath.ll | 26 ++++++++++-----------
6 files changed, 42 insertions(+), 54 deletions(-)

diff --git a/lib/Target/X86/X86InstrSSE.td b/lib/Target/X86/X86InstrSSE.td
index f91764a67d1..77da22de4d3 100644
--- a/lib/Target/X86/X86InstrSSE.td
+++ b/lib/Target/X86/X86InstrSSE.td
@@ -2950,13 +2950,13 @@ let isCommutable = 0 in
// AVX1 requires type coercions in order to fold loads directly into logical
// operations.
let Predicates = [HasAVX1Only] in {
- def : Pat<(bc_v8f32 (and VR256:$src1, (loadv4i64 addr:$src2))),
+ def : Pat<(and VR256:$src1, (loadv4i64 addr:$src2)),
(VANDPSYrm VR256:$src1, addr:$src2)>;
- def : Pat<(bc_v8f32 (or VR256:$src1, (loadv4i64 addr:$src2))),
+ def : Pat<(or VR256:$src1, (loadv4i64 addr:$src2)),
(VORPSYrm VR256:$src1, addr:$src2)>;
- def : Pat<(bc_v8f32 (xor VR256:$src1, (loadv4i64 addr:$src2))),
+ def : Pat<(xor VR256:$src1, (loadv4i64 addr:$src2)),
(VXORPSYrm VR256:$src1, addr:$src2)>;
- def : Pat<(bc_v8f32 (X86andnp VR256:$src1, (loadv4i64 addr:$src2))),
+ def : Pat<(X86andnp VR256:$src1, (loadv4i64 addr:$src2)),
(VANDNPSYrm VR256:$src1, addr:$src2)>;
}
diff --git a/test/CodeGen/X86/WidenArith.ll b/test/CodeGen/X86/WidenArith.ll
index cdd1a2818b2..cc5fcba6670 100644
--- a/test/CodeGen/X86/WidenArith.ll
+++ b/test/CodeGen/X86/WidenArith.ll
@@ -9,8 +9,8 @@ define <8 x i32> @test(<8 x float> %a, <8 x float> %b) {
; CHECK-NEXT: vsubps %ymm2, %ymm1, %ymm3
; CHECK-NEXT: vcmpltps %ymm1, %ymm0, %ymm0
; CHECK-NEXT: vcmpltps %ymm3, %ymm2, %ymm1
-; CHECK-NEXT: vandps {{.*}}(%rip), %ymm1, %ymm1
; CHECK-NEXT: vandps %ymm1, %ymm0, %ymm0
+; CHECK-NEXT: vandps {{.*}}(%rip), %ymm0, %ymm0
; CHECK-NEXT: retq
%c1 = fadd <8 x float> %a, %b
%b1 = fmul <8 x float> %b, %a
diff --git a/test/CodeGen/X86/merge-consecutive-loads-256.ll b/test/CodeGen/X86/merge-consecutive-loads-256.ll
index 8c2e9372900..dc268d9bdf8 100644
--- a/test/CodeGen/X86/merge-consecutive-loads-256.ll
+++ b/test/CodeGen/X86/merge-consecutive-loads-256.ll
@@ -547,29 +547,17 @@ define <16 x i16> @merge_16i16_i16_0uu3uuuuuuuuCuEF(i16* %ptr) nounwind uwtable
}
define <16 x i16> @merge_16i16_i16_0uu3zzuuuuuzCuEF(i16* %ptr) nounwind uwtable noinline ssp {
-; AVX1-LABEL: merge_16i16_i16_0uu3zzuuuuuzCuEF:
-; AVX1: # BB#0:
-; AVX1-NEXT: vmovaps {{.*#+}} ymm0 = [65535,0,0,65535,0,0,0,0,0,0,0,0,65535,0,65535,65535]
-; AVX1-NEXT: vandps (%rdi), %ymm0, %ymm0
-; AVX1-NEXT: retq
-;
-; AVX2-LABEL: merge_16i16_i16_0uu3zzuuuuuzCuEF:
-; AVX2: # BB#0:
-; AVX2-NEXT: vmovups (%rdi), %ymm0
-; AVX2-NEXT: vandps {{.*}}(%rip), %ymm0, %ymm0
-; AVX2-NEXT: retq
-;
-; AVX512F-LABEL: merge_16i16_i16_0uu3zzuuuuuzCuEF:
-; AVX512F: # BB#0:
-; AVX512F-NEXT: vmovups (%rdi), %ymm0
-; AVX512F-NEXT: vandps {{.*}}(%rip), %ymm0, %ymm0
-; AVX512F-NEXT: retq
+; AVX-LABEL: merge_16i16_i16_0uu3zzuuuuuzCuEF:
+; AVX: # BB#0:
+; AVX-NEXT: vmovups (%rdi), %ymm0
+; AVX-NEXT: vandps {{.*}}(%rip), %ymm0, %ymm0
+; AVX-NEXT: retq
;
; X32-AVX-LABEL: merge_16i16_i16_0uu3zzuuuuuzCuEF:
; X32-AVX: # BB#0:
; X32-AVX-NEXT: movl {{[0-9]+}}(%esp), %eax
-; X32-AVX-NEXT: vmovaps {{.*#+}} ymm0 = [65535,0,0,65535,0,0,0,0,0,0,0,0,65535,0,65535,65535]
-; X32-AVX-NEXT: vandps (%eax), %ymm0, %ymm0
+; X32-AVX-NEXT: vmovups (%eax), %ymm0
+; X32-AVX-NEXT: vandps {{\.LCPI.*}}, %ymm0, %ymm0
; X32-AVX-NEXT: retl
%ptr0 = getelementptr inbounds i16, i16* %ptr, i64 0
%ptr3 = getelementptr inbounds i16, i16* %ptr, i64 3
diff --git a/test/CodeGen/X86/v8i1-masks.ll b/test/CodeGen/X86/v8i1-masks.ll
index 0135832ad92..d5c31506e98 100644
--- a/test/CodeGen/X86/v8i1-masks.ll
+++ b/test/CodeGen/X86/v8i1-masks.ll
@@ -13,8 +13,8 @@ define void @and_masks(<8 x float>* %a, <8 x float>* %b, <8 x float>* %c) nounwi
; X32-NEXT: vcmpltps %ymm0, %ymm1, %ymm1
; X32-NEXT: vmovups (%eax), %ymm2
; X32-NEXT: vcmpltps %ymm0, %ymm2, %ymm0
-; X32-NEXT: vandps LCPI0_0, %ymm1, %ymm1
; X32-NEXT: vandps %ymm1, %ymm0, %ymm0
+; X32-NEXT: vandps LCPI0_0, %ymm0, %ymm0
; X32-NEXT: vmovaps %ymm0, (%eax)
; X32-NEXT: vzeroupper
; X32-NEXT: retl
@@ -26,8 +26,8 @@ define void @and_masks(<8 x float>* %a, <8 x float>* %b, <8 x float>* %c) nounwi
; X64-NEXT: vcmpltps %ymm0, %ymm1, %ymm1
; X64-NEXT: vmovups (%rdx), %ymm2
; X64-NEXT: vcmpltps %ymm0, %ymm2, %ymm0
-; X64-NEXT: vandps {{.*}}(%rip), %ymm1, %ymm1
; X64-NEXT: vandps %ymm1, %ymm0, %ymm0
+; X64-NEXT: vandps {{.*}}(%rip), %ymm0, %ymm0
; X64-NEXT: vmovaps %ymm0, (%rax)
; X64-NEXT: vzeroupper
; X64-NEXT: retq
diff --git a/test/CodeGen/X86/vec_int_to_fp.ll b/test/CodeGen/X86/vec_int_to_fp.ll
index 5d8f91385c7..8ea7243664a 100644
--- a/test/CodeGen/X86/vec_int_to_fp.ll
+++ b/test/CodeGen/X86/vec_int_to_fp.ll
@@ -1694,15 +1694,15 @@ define <8 x float> @uitofp_8i32_to_8f32(<8 x i32> %a) {
;
; AVX1-LABEL: uitofp_8i32_to_8f32:
; AVX1: # BB#0:
-; AVX1-NEXT: vandps {{.*}}(%rip), %ymm0, %ymm1
+; AVX1-NEXT: vpsrld $16, %xmm0, %xmm1
+; AVX1-NEXT: vextractf128 $1, %ymm0, %xmm2
+; AVX1-NEXT: vpsrld $16, %xmm2, %xmm2
+; AVX1-NEXT: vinsertf128 $1, %xmm2, %ymm1, %ymm1
; AVX1-NEXT: vcvtdq2ps %ymm1, %ymm1
-; AVX1-NEXT: vpsrld $16, %xmm0, %xmm2
-; AVX1-NEXT: vextractf128 $1, %ymm0, %xmm0
-; AVX1-NEXT: vpsrld $16, %xmm0, %xmm0
-; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm2, %ymm0
+; AVX1-NEXT: vmulps {{.*}}(%rip), %ymm1, %ymm1
+; AVX1-NEXT: vandps {{.*}}(%rip), %ymm0, %ymm0
; AVX1-NEXT: vcvtdq2ps %ymm0, %ymm0
-; AVX1-NEXT: vmulps {{.*}}(%rip), %ymm0, %ymm0
-; AVX1-NEXT: vaddps %ymm1, %ymm0, %ymm0
+; AVX1-NEXT: vaddps %ymm0, %ymm1, %ymm0
; AVX1-NEXT: retq
;
; AVX2-LABEL: uitofp_8i32_to_8f32:
@@ -3372,16 +3372,16 @@ define <8 x float> @uitofp_load_8i32_to_8f32(<8 x i32> *%a) {
;
; AVX1-LABEL: uitofp_load_8i32_to_8f32:
; AVX1: # BB#0:
-; AVX1-NEXT: vmovaps (%rdi), %ymm0
-; AVX1-NEXT: vandps {{.*}}(%rip), %ymm0, %ymm1
+; AVX1-NEXT: vmovdqa (%rdi), %ymm0
+; AVX1-NEXT: vpsrld $16, %xmm0, %xmm1
+; AVX1-NEXT: vextractf128 $1, %ymm0, %xmm2
+; AVX1-NEXT: vpsrld $16, %xmm2, %xmm2
+; AVX1-NEXT: vinsertf128 $1, %xmm2, %ymm1, %ymm1
; AVX1-NEXT: vcvtdq2ps %ymm1, %ymm1
-; AVX1-NEXT: vpsrld $16, %xmm0, %xmm2
-; AVX1-NEXT: vextractf128 $1, %ymm0, %xmm0
-; AVX1-NEXT: vpsrld $16, %xmm0, %xmm0
-; AVX1-NEXT: vinsertf128 $1, %xmm0, %ymm2, %ymm0
+; AVX1-NEXT: vmulps {{.*}}(%rip), %ymm1, %ymm1
+; AVX1-NEXT: vandps {{.*}}(%rip), %ymm0, %ymm0
; AVX1-NEXT: vcvtdq2ps %ymm0, %ymm0
-; AVX1-NEXT: vmulps {{.*}}(%rip), %ymm0, %ymm0
-; AVX1-NEXT: vaddps %ymm1, %ymm0, %ymm0
+; AVX1-NEXT: vaddps %ymm0, %ymm1, %ymm0
; AVX1-NEXT: retq
;
; AVX2-LABEL: uitofp_load_8i32_to_8f32:
diff --git a/test/CodeGen/X86/vec_uint_to_fp-fastmath.ll b/test/CodeGen/X86/vec_uint_to_fp-fastmath.ll
index c0e02bd1599..cb8e2096585 100644
--- a/test/CodeGen/X86/vec_uint_to_fp-fastmath.ll
+++ b/test/CodeGen/X86/vec_uint_to_fp-fastmath.ll
@@ -78,18 +78,18 @@ define <4 x float> @test_uitofp_v4i32_to_v4f32(<4 x i32> %arg) {
ret <4 x float> %tmp
}
-; AVX: [[MASKCSTADDR_v8:.LCPI[0-9_]+]]:
-; AVX-NEXT: .long 65535 # 0xffff
-; AVX-NEXT: .long 65535 # 0xffff
-; AVX-NEXT: .long 65535 # 0xffff
-; AVX-NEXT: .long 65535 # 0xffff
-
; AVX: [[FPMASKCSTADDR_v8:.LCPI[0-9_]+]]:
; AVX-NEXT: .long 1199570944 # float 65536
; AVX-NEXT: .long 1199570944 # float 65536
; AVX-NEXT: .long 1199570944 # float 65536
; AVX-NEXT: .long 1199570944 # float 65536
+; AVX: [[MASKCSTADDR_v8:.LCPI[0-9_]+]]:
+; AVX-NEXT: .long 65535 # 0xffff
+; AVX-NEXT: .long 65535 # 0xffff
+; AVX-NEXT: .long 65535 # 0xffff
+; AVX-NEXT: .long 65535 # 0xffff
+
; AVX2: [[FPMASKCSTADDR_v8:.LCPI[0-9_]+]]:
; AVX2-NEXT: .long 1199570944 # float 65536
@@ -119,15 +119,15 @@ define <8 x float> @test_uitofp_v8i32_to_v8f32(<8 x i32> %arg) {
;
; AVX-LABEL: test_uitofp_v8i32_to_v8f32:
; AVX: # BB#0:
-; AVX-NEXT: vandps [[MASKCSTADDR_v8]](%rip), %ymm0, %ymm1
+; AVX-NEXT: vpsrld $16, %xmm0, %xmm1
+; AVX-NEXT: vextractf128 $1, %ymm0, %xmm2
+; AVX-NEXT: vpsrld $16, %xmm2, %xmm2
+; AVX-NEXT: vinsertf128 $1, %xmm2, %ymm1, %ymm1
; AVX-NEXT: vcvtdq2ps %ymm1, %ymm1
-; AVX-NEXT: vpsrld $16, %xmm0, %xmm2
-; AVX-NEXT: vextractf128 $1, %ymm0, %xmm0
-; AVX-NEXT: vpsrld $16, %xmm0, %xmm0
-; AVX-NEXT: vinsertf128 $1, %xmm0, %ymm2, %ymm0
+; AVX-NEXT: vmulps [[FPMASKCSTADDR_v8]](%rip), %ymm1, %ymm1
+; AVX-NEXT: vandps [[MASKCSTADDR_v8]](%rip), %ymm0, %ymm0
; AVX-NEXT: vcvtdq2ps %ymm0, %ymm0
-; AVX-NEXT: vmulps [[FPMASKCSTADDR_v8]](%rip), %ymm0, %ymm0
-; AVX-NEXT: vaddps %ymm1, %ymm0, %ymm0
+; AVX-NEXT: vaddps %ymm0, %ymm1, %ymm0
; AVX-NEXT: retq
;
; AVX2-LABEL: test_uitofp_v8i32_to_v8f32:
--
2.11.0