; NOTE: Assertions have been autogenerated by utils/update_test_checks.py
; RUN: opt -passes=loop-vectorize -force-vector-width=4 -S < %s | FileCheck %s
; This is the test case from PR26314.
; When we were retrying dependence checking with memchecks only, the
; loop-invariant access in the inner loop was incorrectly determined to be
; wrapping because it was not strided in the inner loop.
; Improved wrapping detection allows vectorization in the following case:
;
;   #define Z 32
;   typedef struct s {
;     int v1[Z];
;     int v2[Z];
;     int v3[Z][Z];
;   } s;
;
;   void slow_function(s *const obj, int z) {
;     for (int j = 0; j < Z; j++) {
;       for (int k = 0; k < z; k++) {
;         int x = obj->v1[k] + obj->v2[j];
;         obj->v3[j][k] += x;
;       }
;     }
;   }
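;
; With -force-vector-width=4, compile-time analysis cannot rule out overlap
; between the v1/v2 loads and the v3 store, so the vectorized loop is guarded
; by the runtime checks seen in vector.memcheck below. As a rough C sketch of
; the idea (illustrative only, not the vectorizer's actual code), each check
; tests whether two byte intervals intersect; the constants 128 and 256 in
; the generated IR correspond to offsetof(s, v2) and offsetof(s, v3), since
; each [32 x i32] array occupies 128 bytes:
;
;   #include <stdbool.h>
;
;   /* Hypothetical helper: ranges [loA, hiA) and [loB, hiB) may
;      conflict iff they intersect. */
;   static bool may_conflict(const char *loA, const char *hiA,
;                            const char *loB, const char *hiB) {
;     return loA < hiB && loB < hiA;
;   }
;
;   /* E.g. row j of v3 against the v1 prefix read by the inner loop:
;      may_conflict((char *)obj->v3[j], (char *)(obj->v3[j] + z),
;                   (char *)obj->v1,    (char *)(obj->v1 + z));       */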

target datalayout = "e-m:e-i64:64-f80:128-n8:16:32:64-S128"

%struct.s = type { [32 x i32], [32 x i32], [32 x [32 x i32]] }

define void @Test(ptr nocapture %obj, i64 %z) {
; CHECK-LABEL: @Test(
; CHECK-NEXT: [[TMP1:%.*]] = shl i64 [[Z:%.*]], 2
; CHECK-NEXT: [[TMP2:%.*]] = add i64 [[TMP1]], 256
; CHECK-NEXT: [[UGLYGEP2:%.*]] = getelementptr i8, ptr [[OBJ:%.*]], i64 [[TMP1]]
; CHECK-NEXT: br label [[DOTOUTER_PREHEADER:%.*]]
; CHECK: .outer.preheader:
; CHECK-NEXT: [[I:%.*]] = phi i64 [ 0, [[TMP0:%.*]] ], [ [[I_NEXT:%.*]], [[DOTOUTER:%.*]] ]
; CHECK-NEXT: [[TMP3:%.*]] = shl nuw nsw i64 [[I]], 7
; CHECK-NEXT: [[TMP4:%.*]] = add i64 [[TMP3]], 256
; CHECK-NEXT: [[UGLYGEP:%.*]] = getelementptr i8, ptr [[OBJ]], i64 [[TMP4]]
; CHECK-NEXT: [[TMP5:%.*]] = add i64 [[TMP2]], [[TMP3]]
; CHECK-NEXT: [[UGLYGEP1:%.*]] = getelementptr i8, ptr [[OBJ]], i64 [[TMP5]]
; CHECK-NEXT: [[TMP6:%.*]] = shl nuw nsw i64 [[I]], 2
; CHECK-NEXT: [[TMP7:%.*]] = add i64 [[TMP6]], 128
; CHECK-NEXT: [[UGLYGEP3:%.*]] = getelementptr i8, ptr [[OBJ]], i64 [[TMP7]]
; CHECK-NEXT: [[TMP8:%.*]] = add i64 [[TMP6]], 132
; CHECK-NEXT: [[UGLYGEP4:%.*]] = getelementptr i8, ptr [[OBJ]], i64 [[TMP8]]
; CHECK-NEXT: [[TMP9:%.*]] = getelementptr inbounds [[STRUCT_S:%.*]], ptr [[OBJ]], i64 0, i32 1, i64 [[I]]
; CHECK-NEXT: [[MIN_ITERS_CHECK:%.*]] = icmp ult i64 [[Z]], 4
; CHECK-NEXT: br i1 [[MIN_ITERS_CHECK]], label [[SCALAR_PH:%.*]], label [[VECTOR_MEMCHECK:%.*]]
; CHECK: vector.memcheck:
; CHECK-NEXT: [[BOUND0:%.*]] = icmp ult ptr [[UGLYGEP]], [[UGLYGEP2]]
; CHECK-NEXT: [[BOUND1:%.*]] = icmp ult ptr [[OBJ]], [[UGLYGEP1]]
; CHECK-NEXT: [[FOUND_CONFLICT:%.*]] = and i1 [[BOUND0]], [[BOUND1]]
; CHECK-NEXT: [[BOUND05:%.*]] = icmp ult ptr [[UGLYGEP]], [[UGLYGEP4]]
; CHECK-NEXT: [[BOUND16:%.*]] = icmp ult ptr [[UGLYGEP3]], [[UGLYGEP1]]
; CHECK-NEXT: [[FOUND_CONFLICT7:%.*]] = and i1 [[BOUND05]], [[BOUND16]]
; CHECK-NEXT: [[CONFLICT_RDX:%.*]] = or i1 [[FOUND_CONFLICT]], [[FOUND_CONFLICT7]]
; CHECK-NEXT: br i1 [[CONFLICT_RDX]], label [[SCALAR_PH]], label [[VECTOR_PH:%.*]]
; CHECK: vector.ph:
; CHECK-NEXT: [[N_MOD_VF:%.*]] = urem i64 [[Z]], 4
; CHECK-NEXT: [[N_VEC:%.*]] = sub i64 [[Z]], [[N_MOD_VF]]
; CHECK-NEXT: br label [[VECTOR_BODY:%.*]]
; CHECK: vector.body:
; CHECK-NEXT: [[INDEX:%.*]] = phi i64 [ 0, [[VECTOR_PH]] ], [ [[INDEX_NEXT:%.*]], [[VECTOR_BODY]] ]
; CHECK-NEXT: [[TMP10:%.*]] = add i64 [[INDEX]], 0
; CHECK-NEXT: [[TMP11:%.*]] = getelementptr inbounds [[STRUCT_S]], ptr [[OBJ]], i64 0, i32 0, i64 [[TMP10]]
; CHECK-NEXT: [[TMP12:%.*]] = getelementptr inbounds i32, ptr [[TMP11]], i32 0
; CHECK-NEXT: [[WIDE_LOAD:%.*]] = load <4 x i32>, ptr [[TMP12]], align 4, !alias.scope !0
; CHECK-NEXT: [[TMP13:%.*]] = load i32, ptr [[TMP9]], align 4, !alias.scope !3
; CHECK-NEXT: [[BROADCAST_SPLATINSERT:%.*]] = insertelement <4 x i32> poison, i32 [[TMP13]], i64 0
; CHECK-NEXT: [[BROADCAST_SPLAT:%.*]] = shufflevector <4 x i32> [[BROADCAST_SPLATINSERT]], <4 x i32> poison, <4 x i32> zeroinitializer
; CHECK-NEXT: [[TMP14:%.*]] = add nsw <4 x i32> [[BROADCAST_SPLAT]], [[WIDE_LOAD]]
; CHECK-NEXT: [[TMP15:%.*]] = getelementptr inbounds [[STRUCT_S]], ptr [[OBJ]], i64 0, i32 2, i64 [[I]], i64 [[TMP10]]
; CHECK-NEXT: [[TMP16:%.*]] = getelementptr inbounds i32, ptr [[TMP15]], i32 0
; CHECK-NEXT: [[WIDE_LOAD8:%.*]] = load <4 x i32>, ptr [[TMP16]], align 4, !alias.scope !5, !noalias !7
; CHECK-NEXT: [[TMP17:%.*]] = add nsw <4 x i32> [[TMP14]], [[WIDE_LOAD8]]
; CHECK-NEXT: store <4 x i32> [[TMP17]], ptr [[TMP16]], align 4, !alias.scope !5, !noalias !7
; CHECK-NEXT: [[INDEX_NEXT]] = add nuw i64 [[INDEX]], 4
; CHECK-NEXT: [[TMP18:%.*]] = icmp eq i64 [[INDEX_NEXT]], [[N_VEC]]
; CHECK-NEXT: br i1 [[TMP18]], label [[MIDDLE_BLOCK:%.*]], label [[VECTOR_BODY]], !llvm.loop [[LOOP8:![0-9]+]]
; CHECK: middle.block:
; CHECK-NEXT: [[CMP_N:%.*]] = icmp eq i64 [[Z]], [[N_VEC]]
; CHECK-NEXT: br i1 [[CMP_N]], label [[DOTOUTER]], label [[SCALAR_PH]]
; CHECK: scalar.ph:
; CHECK-NEXT: [[BC_RESUME_VAL:%.*]] = phi i64 [ [[N_VEC]], [[MIDDLE_BLOCK]] ], [ 0, [[DOTOUTER_PREHEADER]] ], [ 0, [[VECTOR_MEMCHECK]] ]
; CHECK-NEXT: br label [[DOTINNER:%.*]]
; CHECK: .exit:
; CHECK-NEXT: ret void
; CHECK: .outer:
; CHECK-NEXT: [[I_NEXT]] = add nuw nsw i64 [[I]], 1
; CHECK-NEXT: [[EXITCOND_OUTER:%.*]] = icmp eq i64 [[I_NEXT]], 32
; CHECK-NEXT: br i1 [[EXITCOND_OUTER]], label [[DOTEXIT:%.*]], label [[DOTOUTER_PREHEADER]]
; CHECK: .inner:
; CHECK-NEXT: [[J:%.*]] = phi i64 [ [[BC_RESUME_VAL]], [[SCALAR_PH]] ], [ [[J_NEXT:%.*]], [[DOTINNER]] ]
; CHECK-NEXT: [[TMP19:%.*]] = getelementptr inbounds [[STRUCT_S]], ptr [[OBJ]], i64 0, i32 0, i64 [[J]]
; CHECK-NEXT: [[TMP20:%.*]] = load i32, ptr [[TMP19]], align 4
; CHECK-NEXT: [[TMP21:%.*]] = load i32, ptr [[TMP9]], align 4
; CHECK-NEXT: [[TMP22:%.*]] = add nsw i32 [[TMP21]], [[TMP20]]
; CHECK-NEXT: [[TMP23:%.*]] = getelementptr inbounds [[STRUCT_S]], ptr [[OBJ]], i64 0, i32 2, i64 [[I]], i64 [[J]]
; CHECK-NEXT: [[TMP24:%.*]] = load i32, ptr [[TMP23]], align 4
; CHECK-NEXT: [[TMP25:%.*]] = add nsw i32 [[TMP22]], [[TMP24]]
; CHECK-NEXT: store i32 [[TMP25]], ptr [[TMP23]], align 4
; CHECK-NEXT: [[J_NEXT]] = add nuw nsw i64 [[J]], 1
; CHECK-NEXT: [[EXITCOND_INNER:%.*]] = icmp eq i64 [[J_NEXT]], [[Z]]
; CHECK-NEXT: br i1 [[EXITCOND_INNER]], label [[DOTOUTER]], label [[DOTINNER]], !llvm.loop [[LOOP10:![0-9]+]]
;
  br label %.outer.preheader

.outer.preheader:
  %i = phi i64 [ 0, %0 ], [ %i.next, %.outer ]
  ; Loop-invariant address of obj->v2[%i], hoisted out of the inner loop.
  %1 = getelementptr inbounds %struct.s, ptr %obj, i64 0, i32 1, i64 %i
  br label %.inner

.exit:
  ret void

.outer:
  %i.next = add nuw nsw i64 %i, 1
  %exitcond.outer = icmp eq i64 %i.next, 32
  br i1 %exitcond.outer, label %.exit, label %.outer.preheader

.inner:
  %j = phi i64 [ 0, %.outer.preheader ], [ %j.next, %.inner ]
  ; x = obj->v1[%j] + obj->v2[%i]
  %2 = getelementptr inbounds %struct.s, ptr %obj, i64 0, i32 0, i64 %j
  %3 = load i32, ptr %2
  %4 = load i32, ptr %1
  %5 = add nsw i32 %4, %3
  ; obj->v3[%i][%j] += x
  %6 = getelementptr inbounds %struct.s, ptr %obj, i64 0, i32 2, i64 %i, i64 %j
  %7 = load i32, ptr %6
  %8 = add nsw i32 %5, %7
  store i32 %8, ptr %6
  %j.next = add nuw nsw i64 %j, 1
  %exitcond.inner = icmp eq i64 %j.next, %z
  br i1 %exitcond.inner, label %.outer, label %.inner
}