; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 2
; RUN: llc -march=amdgcn -verify-machineinstrs < %s | FileCheck -check-prefix=GFX6-NOHSA %s
; RUN: llc -mtriple=amdgcn-amdhsa -mcpu=kaveri -verify-machineinstrs < %s | FileCheck -check-prefix=GFX7-HSA %s
; RUN: llc -march=amdgcn -mcpu=tonga -verify-machineinstrs < %s | FileCheck -check-prefix=GFX8-NOHSA %s
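; Checks that a uniform f64 load from the constant address space is selected to
; a single s_load_dwordx2, with the result stored via buffer_store_dwordx2
; (GFX6) or flat_store_dwordx2 (GFX7/GFX8), per the generated checks below.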
define amdgpu_kernel void @constant_load_f64(ptr addrspace(1) %out, ptr addrspace(4) %in) #0 {
; GFX6-NOHSA-LABEL: constant_load_f64:
; GFX6-NOHSA: ; %bb.0:
; GFX6-NOHSA-NEXT: s_load_dwordx4 s[0:3], s[0:1], 0x9
; GFX6-NOHSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX6-NOHSA-NEXT: s_load_dwordx2 s[4:5], s[2:3], 0x0
; GFX6-NOHSA-NEXT: s_mov_b32 s3, 0xf000
; GFX6-NOHSA-NEXT: s_mov_b32 s2, -1
; GFX6-NOHSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX6-NOHSA-NEXT: v_mov_b32_e32 v0, s4
; GFX6-NOHSA-NEXT: v_mov_b32_e32 v1, s5
; GFX6-NOHSA-NEXT: buffer_store_dwordx2 v[0:1], off, s[0:3], 0
; GFX6-NOHSA-NEXT: s_endpgm
;
; GFX7-HSA-LABEL: constant_load_f64:
; GFX7-HSA: ; %bb.0:
; GFX7-HSA-NEXT: s_load_dwordx4 s[0:3], s[4:5], 0x0
; GFX7-HSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX7-HSA-NEXT: s_load_dwordx2 s[2:3], s[2:3], 0x0
; GFX7-HSA-NEXT: v_mov_b32_e32 v0, s0
; GFX7-HSA-NEXT: v_mov_b32_e32 v1, s1
; GFX7-HSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX7-HSA-NEXT: v_mov_b32_e32 v2, s2
; GFX7-HSA-NEXT: v_mov_b32_e32 v3, s3
; GFX7-HSA-NEXT: flat_store_dwordx2 v[0:1], v[2:3]
; GFX7-HSA-NEXT: s_endpgm
;
; GFX8-NOHSA-LABEL: constant_load_f64:
; GFX8-NOHSA: ; %bb.0:
; GFX8-NOHSA-NEXT: s_load_dwordx4 s[0:3], s[0:1], 0x24
; GFX8-NOHSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX8-NOHSA-NEXT: s_load_dwordx2 s[2:3], s[2:3], 0x0
; GFX8-NOHSA-NEXT: v_mov_b32_e32 v0, s0
; GFX8-NOHSA-NEXT: v_mov_b32_e32 v1, s1
; GFX8-NOHSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX8-NOHSA-NEXT: v_mov_b32_e32 v2, s2
; GFX8-NOHSA-NEXT: v_mov_b32_e32 v3, s3
; GFX8-NOHSA-NEXT: flat_store_dwordx2 v[0:1], v[2:3]
; GFX8-NOHSA-NEXT: s_endpgm
%ld = load double, ptr addrspace(4) %in
store double %ld, ptr addrspace(1) %out
ret void
}
attributes #0 = { nounwind }
; Tests whether a chain of eight 64-bit constant loads gets vectorized into a wider load.
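; The single s_load_dwordx16 in the checks below shows the eight scalar loads
; merged into one wide load.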
define amdgpu_kernel void @constant_load_2v4f64(ptr addrspace(4) noalias nocapture readonly %weights, ptr addrspace(1) noalias nocapture %out_ptr) {
; GFX6-NOHSA-LABEL: constant_load_2v4f64:
; GFX6-NOHSA: ; %bb.0: ; %entry
; GFX6-NOHSA-NEXT: s_load_dwordx4 s[16:19], s[0:1], 0x9
; GFX6-NOHSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX6-NOHSA-NEXT: s_load_dwordx2 s[24:25], s[18:19], 0x0
; GFX6-NOHSA-NEXT: s_load_dwordx16 s[0:15], s[16:17], 0x0
; GFX6-NOHSA-NEXT: s_mov_b32 s23, 0xf000
; GFX6-NOHSA-NEXT: s_mov_b32 s22, -1
; GFX6-NOHSA-NEXT: s_mov_b32 s20, s18
; GFX6-NOHSA-NEXT: s_mov_b32 s21, s19
; GFX6-NOHSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX6-NOHSA-NEXT: v_mov_b32_e32 v0, s24
; GFX6-NOHSA-NEXT: v_mov_b32_e32 v1, s25
; GFX6-NOHSA-NEXT: v_add_f64 v[0:1], s[0:1], v[0:1]
; GFX6-NOHSA-NEXT: v_add_f64 v[0:1], s[2:3], v[0:1]
; GFX6-NOHSA-NEXT: v_add_f64 v[0:1], s[4:5], v[0:1]
; GFX6-NOHSA-NEXT: v_add_f64 v[0:1], s[6:7], v[0:1]
; GFX6-NOHSA-NEXT: v_add_f64 v[0:1], s[8:9], v[0:1]
; GFX6-NOHSA-NEXT: v_add_f64 v[0:1], s[10:11], v[0:1]
; GFX6-NOHSA-NEXT: v_add_f64 v[0:1], s[12:13], v[0:1]
; GFX6-NOHSA-NEXT: v_add_f64 v[0:1], s[14:15], v[0:1]
; GFX6-NOHSA-NEXT: buffer_store_dwordx2 v[0:1], off, s[20:23], 0
; GFX6-NOHSA-NEXT: s_endpgm
;
; GFX7-HSA-LABEL: constant_load_2v4f64:
; GFX7-HSA: ; %bb.0: ; %entry
; GFX7-HSA-NEXT: s_load_dwordx4 s[16:19], s[4:5], 0x0
; GFX7-HSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX7-HSA-NEXT: s_load_dwordx2 s[20:21], s[18:19], 0x0
; GFX7-HSA-NEXT: s_load_dwordx16 s[0:15], s[16:17], 0x0
; GFX7-HSA-NEXT: v_mov_b32_e32 v2, s18
; GFX7-HSA-NEXT: v_mov_b32_e32 v3, s19
; GFX7-HSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX7-HSA-NEXT: v_mov_b32_e32 v0, s20
; GFX7-HSA-NEXT: v_mov_b32_e32 v1, s21
; GFX7-HSA-NEXT: v_add_f64 v[0:1], s[0:1], v[0:1]
; GFX7-HSA-NEXT: v_add_f64 v[0:1], s[2:3], v[0:1]
; GFX7-HSA-NEXT: v_add_f64 v[0:1], s[4:5], v[0:1]
; GFX7-HSA-NEXT: v_add_f64 v[0:1], s[6:7], v[0:1]
; GFX7-HSA-NEXT: v_add_f64 v[0:1], s[8:9], v[0:1]
; GFX7-HSA-NEXT: v_add_f64 v[0:1], s[10:11], v[0:1]
; GFX7-HSA-NEXT: v_add_f64 v[0:1], s[12:13], v[0:1]
; GFX7-HSA-NEXT: v_add_f64 v[0:1], s[14:15], v[0:1]
; GFX7-HSA-NEXT: flat_store_dwordx2 v[2:3], v[0:1]
; GFX7-HSA-NEXT: s_endpgm
;
; GFX8-NOHSA-LABEL: constant_load_2v4f64:
; GFX8-NOHSA: ; %bb.0: ; %entry
; GFX8-NOHSA-NEXT: s_load_dwordx4 s[16:19], s[0:1], 0x24
; GFX8-NOHSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX8-NOHSA-NEXT: s_load_dwordx2 s[20:21], s[18:19], 0x0
; GFX8-NOHSA-NEXT: s_load_dwordx16 s[0:15], s[16:17], 0x0
; GFX8-NOHSA-NEXT: v_mov_b32_e32 v2, s18
; GFX8-NOHSA-NEXT: v_mov_b32_e32 v3, s19
; GFX8-NOHSA-NEXT: s_waitcnt lgkmcnt(0)
; GFX8-NOHSA-NEXT: v_mov_b32_e32 v0, s20
; GFX8-NOHSA-NEXT: v_mov_b32_e32 v1, s21
; GFX8-NOHSA-NEXT: v_add_f64 v[0:1], s[0:1], v[0:1]
; GFX8-NOHSA-NEXT: v_add_f64 v[0:1], s[2:3], v[0:1]
; GFX8-NOHSA-NEXT: v_add_f64 v[0:1], s[4:5], v[0:1]
; GFX8-NOHSA-NEXT: v_add_f64 v[0:1], s[6:7], v[0:1]
; GFX8-NOHSA-NEXT: v_add_f64 v[0:1], s[8:9], v[0:1]
; GFX8-NOHSA-NEXT: v_add_f64 v[0:1], s[10:11], v[0:1]
; GFX8-NOHSA-NEXT: v_add_f64 v[0:1], s[12:13], v[0:1]
; GFX8-NOHSA-NEXT: v_add_f64 v[0:1], s[14:15], v[0:1]
; GFX8-NOHSA-NEXT: flat_store_dwordx2 v[2:3], v[0:1]
; GFX8-NOHSA-NEXT: s_endpgm
entry:
%out_ptr.promoted = load double, ptr addrspace(1) %out_ptr, align 4
%tmp = load double, ptr addrspace(4) %weights, align 4
%add = fadd double %tmp, %out_ptr.promoted
%arrayidx.1 = getelementptr inbounds double, ptr addrspace(4) %weights, i64 1
%tmp1 = load double, ptr addrspace(4) %arrayidx.1, align 4
%add.1 = fadd double %tmp1, %add
%arrayidx.2 = getelementptr inbounds double, ptr addrspace(4) %weights, i64 2
%tmp2 = load double, ptr addrspace(4) %arrayidx.2, align 4
%add.2 = fadd double %tmp2, %add.1
%arrayidx.3 = getelementptr inbounds double, ptr addrspace(4) %weights, i64 3
%tmp3 = load double, ptr addrspace(4) %arrayidx.3, align 4
%add.3 = fadd double %tmp3, %add.2
%arrayidx.4 = getelementptr inbounds double, ptr addrspace(4) %weights, i64 4
%tmp4 = load double, ptr addrspace(4) %arrayidx.4, align 4
%add.4 = fadd double %tmp4, %add.3
%arrayidx.5 = getelementptr inbounds double, ptr addrspace(4) %weights, i64 5
%tmp5 = load double, ptr addrspace(4) %arrayidx.5, align 4
%add.5 = fadd double %tmp5, %add.4
%arrayidx.6 = getelementptr inbounds double, ptr addrspace(4) %weights, i64 6
%tmp6 = load double, ptr addrspace(4) %arrayidx.6, align 4
%add.6 = fadd double %tmp6, %add.5
%arrayidx.7 = getelementptr inbounds double, ptr addrspace(4) %weights, i64 7
%tmp7 = load double, ptr addrspace(4) %arrayidx.7, align 4
%add.7 = fadd double %tmp7, %add.6
store double %add.7, ptr addrspace(1) %out_ptr, align 4
ret void
}