; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc -global-isel -O0 -mtriple=amdgcn-amd-amdhsa -mcpu=gfx1031 -verify-machineinstrs -o - %s | FileCheck %s
; Make sure the waterfall loop does not fail the verifier after fast register allocation (regalloc fast).
define <4 x float> @waterfall_loop(<8 x i32> %vgpr_srd) {
; CHECK-LABEL: waterfall_loop:
; CHECK: ; %bb.0: ; %bb
; CHECK-NEXT: s_waitcnt vmcnt(0) expcnt(0) lgkmcnt(0)
; CHECK-NEXT: s_waitcnt_vscnt null, 0x0
; CHECK-NEXT: s_or_saveexec_b32 s4, -1
; CHECK-NEXT: buffer_store_dword v16, off, s[0:3], s32 offset:44 ; 4-byte Folded Spill
; CHECK-NEXT: s_mov_b32 exec_lo, s4
; CHECK-NEXT: v_mov_b32_e32 v14, v1
; CHECK-NEXT: v_mov_b32_e32 v13, v2
; CHECK-NEXT: v_mov_b32_e32 v12, v3
; CHECK-NEXT: v_mov_b32_e32 v11, v4
; CHECK-NEXT: v_mov_b32_e32 v10, v5
; CHECK-NEXT: v_mov_b32_e32 v9, v6
; CHECK-NEXT: v_mov_b32_e32 v8, v7
; CHECK-NEXT: ; kill: def $vgpr0 killed $vgpr0 def $vgpr0_vgpr1_vgpr2_vgpr3_vgpr4_vgpr5_vgpr6_vgpr7 killed $exec
; CHECK-NEXT: v_mov_b32_e32 v1, v14
; CHECK-NEXT: v_mov_b32_e32 v2, v13
; CHECK-NEXT: v_mov_b32_e32 v3, v12
; CHECK-NEXT: v_mov_b32_e32 v4, v11
; CHECK-NEXT: v_mov_b32_e32 v5, v10
; CHECK-NEXT: v_mov_b32_e32 v6, v9
; CHECK-NEXT: v_mov_b32_e32 v7, v8
; CHECK-NEXT: buffer_store_dword v0, off, s[0:3], s32 offset:8 ; 4-byte Folded Spill
; CHECK-NEXT: buffer_store_dword v1, off, s[0:3], s32 offset:12 ; 4-byte Folded Spill
; CHECK-NEXT: buffer_store_dword v2, off, s[0:3], s32 offset:16 ; 4-byte Folded Spill
; CHECK-NEXT: buffer_store_dword v3, off, s[0:3], s32 offset:20 ; 4-byte Folded Spill
; CHECK-NEXT: buffer_store_dword v4, off, s[0:3], s32 offset:24 ; 4-byte Folded Spill
; CHECK-NEXT: buffer_store_dword v5, off, s[0:3], s32 offset:28 ; 4-byte Folded Spill
; CHECK-NEXT: buffer_store_dword v6, off, s[0:3], s32 offset:32 ; 4-byte Folded Spill
; CHECK-NEXT: buffer_store_dword v7, off, s[0:3], s32 offset:36 ; 4-byte Folded Spill
; CHECK-NEXT: s_mov_b32 s8, 0
; CHECK-NEXT: s_mov_b32 s4, s8
; CHECK-NEXT: s_mov_b32 s5, s8
; CHECK-NEXT: s_mov_b32 s6, s8
; CHECK-NEXT: s_mov_b32 s7, s8
; CHECK-NEXT: v_writelane_b32 v16, s4, 0
; CHECK-NEXT: v_writelane_b32 v16, s5, 1
; CHECK-NEXT: v_writelane_b32 v16, s6, 2
; CHECK-NEXT: v_writelane_b32 v16, s7, 3
; CHECK-NEXT: s_mov_b32 s6, 0
; CHECK-NEXT: s_mov_b32 s4, s6
; CHECK-NEXT: s_mov_b32 s5, s6
; CHECK-NEXT: v_mov_b32_e32 v0, s4
; CHECK-NEXT: v_mov_b32_e32 v1, s5
; CHECK-NEXT: buffer_store_dword v0, off, s[0:3], s32 ; 4-byte Folded Spill
; CHECK-NEXT: buffer_store_dword v1, off, s[0:3], s32 offset:4 ; 4-byte Folded Spill
; CHECK-NEXT: s_mov_b32 s4, exec_lo
; CHECK-NEXT: v_writelane_b32 v16, s4, 4
; CHECK-NEXT: .LBB0_1: ; =>This Inner Loop Header: Depth=1
; CHECK-NEXT: buffer_load_dword v8, off, s[0:3], s32 offset:8 ; 4-byte Folded Reload
; CHECK-NEXT: buffer_load_dword v9, off, s[0:3], s32 offset:12 ; 4-byte Folded Reload
; CHECK-NEXT: buffer_load_dword v10, off, s[0:3], s32 offset:16 ; 4-byte Folded Reload
; CHECK-NEXT: buffer_load_dword v11, off, s[0:3], s32 offset:20 ; 4-byte Folded Reload
; CHECK-NEXT: buffer_load_dword v12, off, s[0:3], s32 offset:24 ; 4-byte Folded Reload
; CHECK-NEXT: buffer_load_dword v13, off, s[0:3], s32 offset:28 ; 4-byte Folded Reload
; CHECK-NEXT: buffer_load_dword v14, off, s[0:3], s32 offset:32 ; 4-byte Folded Reload
; CHECK-NEXT: buffer_load_dword v15, off, s[0:3], s32 offset:36 ; 4-byte Folded Reload
; CHECK-NEXT: s_waitcnt vmcnt(0)
; CHECK-NEXT: v_mov_b32_e32 v7, v8
; CHECK-NEXT: v_mov_b32_e32 v6, v9
; CHECK-NEXT: v_mov_b32_e32 v5, v10
; CHECK-NEXT: v_mov_b32_e32 v4, v11
; CHECK-NEXT: v_mov_b32_e32 v3, v12
; CHECK-NEXT: v_mov_b32_e32 v2, v13
; CHECK-NEXT: v_mov_b32_e32 v1, v14
; CHECK-NEXT: v_mov_b32_e32 v0, v15
; CHECK-NEXT: v_readfirstlane_b32 s12, v7
; CHECK-NEXT: v_readfirstlane_b32 s10, v6
; CHECK-NEXT: v_readfirstlane_b32 s9, v5
; CHECK-NEXT: v_readfirstlane_b32 s8, v4
; CHECK-NEXT: v_readfirstlane_b32 s7, v3
; CHECK-NEXT: v_readfirstlane_b32 s6, v2
; CHECK-NEXT: v_readfirstlane_b32 s5, v1
; CHECK-NEXT: v_readfirstlane_b32 s4, v0
; CHECK-NEXT: ; kill: def $sgpr12 killed $sgpr12 def $sgpr12_sgpr13_sgpr14_sgpr15_sgpr16_sgpr17_sgpr18_sgpr19
; CHECK-NEXT: s_mov_b32 s13, s10
; CHECK-NEXT: s_mov_b32 s14, s9
; CHECK-NEXT: s_mov_b32 s15, s8
; CHECK-NEXT: s_mov_b32 s16, s7
; CHECK-NEXT: s_mov_b32 s17, s6
; CHECK-NEXT: s_mov_b32 s18, s5
; CHECK-NEXT: s_mov_b32 s19, s4
; CHECK-NEXT: v_writelane_b32 v16, s12, 5
; CHECK-NEXT: v_writelane_b32 v16, s13, 6
; CHECK-NEXT: v_writelane_b32 v16, s14, 7
; CHECK-NEXT: v_writelane_b32 v16, s15, 8
; CHECK-NEXT: v_writelane_b32 v16, s16, 9
; CHECK-NEXT: v_writelane_b32 v16, s17, 10
; CHECK-NEXT: v_writelane_b32 v16, s18, 11
; CHECK-NEXT: v_writelane_b32 v16, s19, 12
; CHECK-NEXT: v_mov_b32_e32 v6, v8
; CHECK-NEXT: v_mov_b32_e32 v7, v9
; CHECK-NEXT: v_mov_b32_e32 v4, v10
; CHECK-NEXT: v_mov_b32_e32 v5, v11
; CHECK-NEXT: v_mov_b32_e32 v2, v12
; CHECK-NEXT: v_mov_b32_e32 v3, v13
; CHECK-NEXT: v_mov_b32_e32 v0, v14
; CHECK-NEXT: v_mov_b32_e32 v1, v15
; CHECK-NEXT: s_mov_b64 s[4:5], s[12:13]
; CHECK-NEXT: s_mov_b64 s[10:11], s[14:15]
; CHECK-NEXT: s_mov_b64 s[8:9], s[16:17]
; CHECK-NEXT: s_mov_b64 s[6:7], s[18:19]
; CHECK-NEXT: v_cmp_eq_u64_e64 s4, s[4:5], v[6:7]
; CHECK-NEXT: v_cmp_eq_u64_e64 s5, s[10:11], v[4:5]
; CHECK-NEXT: s_and_b32 s4, s4, s5
; CHECK-NEXT: v_cmp_eq_u64_e64 s5, s[8:9], v[2:3]
; CHECK-NEXT: s_and_b32 s4, s4, s5
; CHECK-NEXT: v_cmp_eq_u64_e64 s5, s[6:7], v[0:1]
; CHECK-NEXT: s_and_b32 s4, s4, s5
; CHECK-NEXT: s_and_saveexec_b32 s4, s4
; CHECK-NEXT: v_writelane_b32 v16, s4, 13
; CHECK-NEXT: ; %bb.2: ; in Loop: Header=BB0_1 Depth=1
; CHECK-NEXT: v_readlane_b32 s4, v16, 13
; CHECK-NEXT: buffer_load_dword v0, off, s[0:3], s32 ; 4-byte Folded Reload
; CHECK-NEXT: buffer_load_dword v1, off, s[0:3], s32 offset:4 ; 4-byte Folded Reload
; CHECK-NEXT: v_readlane_b32 s8, v16, 5
; CHECK-NEXT: v_readlane_b32 s9, v16, 6
; CHECK-NEXT: v_readlane_b32 s10, v16, 7
; CHECK-NEXT: v_readlane_b32 s11, v16, 8
; CHECK-NEXT: v_readlane_b32 s12, v16, 9
; CHECK-NEXT: v_readlane_b32 s13, v16, 10
; CHECK-NEXT: v_readlane_b32 s14, v16, 11
; CHECK-NEXT: v_readlane_b32 s15, v16, 12
; CHECK-NEXT: v_readlane_b32 s16, v16, 0
; CHECK-NEXT: v_readlane_b32 s17, v16, 1
; CHECK-NEXT: v_readlane_b32 s18, v16, 2
; CHECK-NEXT: v_readlane_b32 s19, v16, 3
; CHECK-NEXT: s_waitcnt vmcnt(0)
; CHECK-NEXT: image_sample v0, v[0:1], s[8:15], s[16:19] dmask:0x1 dim:SQ_RSRC_IMG_2D
; CHECK-NEXT: s_waitcnt vmcnt(0)
; CHECK-NEXT: buffer_store_dword v0, off, s[0:3], s32 offset:40 ; 4-byte Folded Spill
; CHECK-NEXT: s_xor_b32 exec_lo, exec_lo, s4
; CHECK-NEXT: s_cbranch_execnz .LBB0_1
; CHECK-NEXT: ; %bb.3:
; CHECK-NEXT: v_readlane_b32 s4, v16, 4
; CHECK-NEXT: s_mov_b32 exec_lo, s4
; CHECK-NEXT: ; %bb.4:
; CHECK-NEXT: buffer_load_dword v0, off, s[0:3], s32 offset:40 ; 4-byte Folded Reload
; CHECK-NEXT: ; implicit-def: $sgpr4
; CHECK-NEXT: v_mov_b32_e32 v1, s4
; CHECK-NEXT: v_mov_b32_e32 v2, s4
; CHECK-NEXT: v_mov_b32_e32 v3, s4
; CHECK-NEXT: s_or_saveexec_b32 s4, -1
; CHECK-NEXT: buffer_load_dword v16, off, s[0:3], s32 offset:44 ; 4-byte Folded Reload
; CHECK-NEXT: s_mov_b32 exec_lo, s4
; CHECK-NEXT: s_waitcnt vmcnt(0)
; CHECK-NEXT: s_waitcnt_vscnt null, 0x0
; CHECK-NEXT: s_setpc_b64 s[30:31]
bb:
%ret = tail call <4 x float> @llvm.amdgcn.image.sample.2d.v4f32.f32(i32 1, float 0.000000e+00, float 0.000000e+00, <8 x i32> %vgpr_srd, <4 x i32> zeroinitializer, i1 false, i32 0, i32 0)
ret <4 x float> %ret
}
declare <4 x float> @llvm.amdgcn.image.sample.2d.v4f32.f32(i32 immarg, float, float, <8 x i32>, <4 x i32>, i1 immarg, i32 immarg, i32 immarg) #0
attributes #0 = { nounwind readonly willreturn }