; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py
; RUN: llc < %s --mattr=+sve -o - | FileCheck %s

target triple = "aarch64"

; Expected to transform
define <vscale x 4 x half> @complex_mul_v4f16(<vscale x 4 x half> %a, <vscale x 4 x half> %b) {
; CHECK-LABEL: complex_mul_v4f16:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: uzp2 z2.s, z0.s, z0.s
; CHECK-NEXT: uzp1 z0.s, z0.s, z0.s
; CHECK-NEXT: ptrue p0.d
; CHECK-NEXT: uzp2 z3.s, z1.s, z0.s
; CHECK-NEXT: uunpklo z0.d, z0.s
; CHECK-NEXT: uunpklo z2.d, z2.s
; CHECK-NEXT: uunpklo z3.d, z3.s
; CHECK-NEXT: uzp1 z1.s, z1.s, z0.s
; CHECK-NEXT: uunpklo z1.d, z1.s
; CHECK-NEXT: movprfx z4, z3
; CHECK-NEXT: fmul z4.h, p0/m, z4.h, z0.h
; CHECK-NEXT: fmul z3.h, p0/m, z3.h, z2.h
; CHECK-NEXT: fmad z2.h, p0/m, z1.h, z4.h
; CHECK-NEXT: fnmsb z0.h, p0/m, z1.h, z3.h
; CHECK-NEXT: zip2 z1.d, z0.d, z2.d
; CHECK-NEXT: zip1 z0.d, z0.d, z2.d
; CHECK-NEXT: uzp1 z0.s, z0.s, z1.s
; CHECK-NEXT: ret
entry:
%a.deinterleaved = tail call { <vscale x 2 x half>, <vscale x 2 x half> } @llvm.vector.deinterleave2.nxv4f16(<vscale x 4 x half> %a)
%a.real = extractvalue { <vscale x 2 x half>, <vscale x 2 x half> } %a.deinterleaved, 0
%a.imag = extractvalue { <vscale x 2 x half>, <vscale x 2 x half> } %a.deinterleaved, 1
%b.deinterleaved = tail call { <vscale x 2 x half>, <vscale x 2 x half> } @llvm.vector.deinterleave2.nxv4f16(<vscale x 4 x half> %b)
%b.real = extractvalue { <vscale x 2 x half>, <vscale x 2 x half> } %b.deinterleaved, 0
%b.imag = extractvalue { <vscale x 2 x half>, <vscale x 2 x half> } %b.deinterleaved, 1
%0 = fmul fast <vscale x 2 x half> %b.imag, %a.real
%1 = fmul fast <vscale x 2 x half> %b.real, %a.imag
%2 = fadd fast <vscale x 2 x half> %1, %0
%3 = fmul fast <vscale x 2 x half> %b.real, %a.real
%4 = fmul fast <vscale x 2 x half> %a.imag, %b.imag
%5 = fsub fast <vscale x 2 x half> %3, %4
%interleaved.vec = tail call <vscale x 4 x half> @llvm.vector.interleave2.nxv4f16(<vscale x 2 x half> %5, <vscale x 2 x half> %2)
ret <vscale x 4 x half> %interleaved.vec
}

; Expected to transform
define <vscale x 8 x half> @complex_mul_v8f16(<vscale x 8 x half> %a, <vscale x 8 x half> %b) {
; CHECK-LABEL: complex_mul_v8f16:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: mov z2.h, #0 // =0x0
; CHECK-NEXT: ptrue p0.h
; CHECK-NEXT: fcmla z2.h, p0/m, z1.h, z0.h, #0
; CHECK-NEXT: fcmla z2.h, p0/m, z1.h, z0.h, #90
; CHECK-NEXT: mov z0.d, z2.d
; CHECK-NEXT: ret
entry:
%a.deinterleaved = tail call { <vscale x 4 x half>, <vscale x 4 x half> } @llvm.vector.deinterleave2.nxv8f16(<vscale x 8 x half> %a)
%a.real = extractvalue { <vscale x 4 x half>, <vscale x 4 x half> } %a.deinterleaved, 0
%a.imag = extractvalue { <vscale x 4 x half>, <vscale x 4 x half> } %a.deinterleaved, 1
%b.deinterleaved = tail call { <vscale x 4 x half>, <vscale x 4 x half> } @llvm.vector.deinterleave2.nxv8f16(<vscale x 8 x half> %b)
%b.real = extractvalue { <vscale x 4 x half>, <vscale x 4 x half> } %b.deinterleaved, 0
%b.imag = extractvalue { <vscale x 4 x half>, <vscale x 4 x half> } %b.deinterleaved, 1
%0 = fmul fast <vscale x 4 x half> %b.imag, %a.real
%1 = fmul fast <vscale x 4 x half> %b.real, %a.imag
%2 = fadd fast <vscale x 4 x half> %1, %0
%3 = fmul fast <vscale x 4 x half> %b.real, %a.real
%4 = fmul fast <vscale x 4 x half> %a.imag, %b.imag
%5 = fsub fast <vscale x 4 x half> %3, %4
%interleaved.vec = tail call <vscale x 8 x half> @llvm.vector.interleave2.nxv8f16(<vscale x 4 x half> %5, <vscale x 4 x half> %2)
ret <vscale x 8 x half> %interleaved.vec
}

; Expected to transform
define <vscale x 16 x half> @complex_mul_v16f16(<vscale x 16 x half> %a, <vscale x 16 x half> %b) {
; CHECK-LABEL: complex_mul_v16f16:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: mov z4.h, #0 // =0x0
; CHECK-NEXT: ptrue p0.h
; CHECK-NEXT: mov z5.d, z4.d
; CHECK-NEXT: fcmla z4.h, p0/m, z3.h, z1.h, #0
; CHECK-NEXT: fcmla z5.h, p0/m, z2.h, z0.h, #0
; CHECK-NEXT: fcmla z4.h, p0/m, z3.h, z1.h, #90
; CHECK-NEXT: fcmla z5.h, p0/m, z2.h, z0.h, #90
; CHECK-NEXT: mov z1.d, z4.d
; CHECK-NEXT: mov z0.d, z5.d
; CHECK-NEXT: ret
entry:
%a.deinterleaved = tail call { <vscale x 8 x half>, <vscale x 8 x half> } @llvm.vector.deinterleave2.nxv16f16(<vscale x 16 x half> %a)
%a.real = extractvalue { <vscale x 8 x half>, <vscale x 8 x half> } %a.deinterleaved, 0
%a.imag = extractvalue { <vscale x 8 x half>, <vscale x 8 x half> } %a.deinterleaved, 1
%b.deinterleaved = tail call { <vscale x 8 x half>, <vscale x 8 x half> } @llvm.vector.deinterleave2.nxv16f16(<vscale x 16 x half> %b)
%b.real = extractvalue { <vscale x 8 x half>, <vscale x 8 x half> } %b.deinterleaved, 0
%b.imag = extractvalue { <vscale x 8 x half>, <vscale x 8 x half> } %b.deinterleaved, 1
%0 = fmul fast <vscale x 8 x half> %b.imag, %a.real
%1 = fmul fast <vscale x 8 x half> %b.real, %a.imag
%2 = fadd fast <vscale x 8 x half> %1, %0
%3 = fmul fast <vscale x 8 x half> %b.real, %a.real
%4 = fmul fast <vscale x 8 x half> %a.imag, %b.imag
%5 = fsub fast <vscale x 8 x half> %3, %4
%interleaved.vec = tail call <vscale x 16 x half> @llvm.vector.interleave2.nxv16f16(<vscale x 8 x half> %5, <vscale x 8 x half> %2)
ret <vscale x 16 x half> %interleaved.vec
}

; Expected to transform
define <vscale x 32 x half> @complex_mul_v32f16(<vscale x 32 x half> %a, <vscale x 32 x half> %b) {
; CHECK-LABEL: complex_mul_v32f16:
; CHECK: // %bb.0: // %entry
; CHECK-NEXT: mov z24.h, #0 // =0x0
; CHECK-NEXT: ptrue p0.h
; CHECK-NEXT: mov z25.d, z24.d
; CHECK-NEXT: mov z26.d, z24.d
; CHECK-NEXT: mov z27.d, z24.d
; CHECK-NEXT: fcmla z24.h, p0/m, z7.h, z3.h, #0
; CHECK-NEXT: fcmla z25.h, p0/m, z4.h, z0.h, #0
; CHECK-NEXT: fcmla z26.h, p0/m, z5.h, z1.h, #0
; CHECK-NEXT: fcmla z27.h, p0/m, z6.h, z2.h, #0
; CHECK-NEXT: fcmla z24.h, p0/m, z7.h, z3.h, #90
; CHECK-NEXT: fcmla z25.h, p0/m, z4.h, z0.h, #90
; CHECK-NEXT: fcmla z26.h, p0/m, z5.h, z1.h, #90
; CHECK-NEXT: fcmla z27.h, p0/m, z6.h, z2.h, #90
; CHECK-NEXT: mov z3.d, z24.d
; CHECK-NEXT: mov z0.d, z25.d
; CHECK-NEXT: mov z1.d, z26.d
; CHECK-NEXT: mov z2.d, z27.d
; CHECK-NEXT: ret
entry:
%a.deinterleaved = tail call { <vscale x 16 x half>, <vscale x 16 x half> } @llvm.vector.deinterleave2.nxv32f16(<vscale x 32 x half> %a)
%a.real = extractvalue { <vscale x 16 x half>, <vscale x 16 x half> } %a.deinterleaved, 0
%a.imag = extractvalue { <vscale x 16 x half>, <vscale x 16 x half> } %a.deinterleaved, 1
%b.deinterleaved = tail call { <vscale x 16 x half>, <vscale x 16 x half> } @llvm.vector.deinterleave2.nxv32f16(<vscale x 32 x half> %b)
%b.real = extractvalue { <vscale x 16 x half>, <vscale x 16 x half> } %b.deinterleaved, 0
%b.imag = extractvalue { <vscale x 16 x half>, <vscale x 16 x half> } %b.deinterleaved, 1
%0 = fmul fast <vscale x 16 x half> %b.imag, %a.real
%1 = fmul fast <vscale x 16 x half> %b.real, %a.imag
%2 = fadd fast <vscale x 16 x half> %1, %0
%3 = fmul fast <vscale x 16 x half> %b.real, %a.real
%4 = fmul fast <vscale x 16 x half> %a.imag, %b.imag
%5 = fsub fast <vscale x 16 x half> %3, %4
%interleaved.vec = tail call <vscale x 32 x half> @llvm.vector.interleave2.nxv32f16(<vscale x 16 x half> %5, <vscale x 16 x half> %2)
ret <vscale x 32 x half> %interleaved.vec
}

declare { <vscale x 2 x half>, <vscale x 2 x half> } @llvm.vector.deinterleave2.nxv4f16(<vscale x 4 x half>)
declare <vscale x 4 x half> @llvm.vector.interleave2.nxv4f16(<vscale x 2 x half>, <vscale x 2 x half>)
declare { <vscale x 4 x half>, <vscale x 4 x half> } @llvm.vector.deinterleave2.nxv8f16(<vscale x 8 x half>)
declare <vscale x 8 x half> @llvm.vector.interleave2.nxv8f16(<vscale x 4 x half>, <vscale x 4 x half>)
declare { <vscale x 8 x half>, <vscale x 8 x half> } @llvm.vector.deinterleave2.nxv16f16(<vscale x 16 x half>)
declare <vscale x 16 x half> @llvm.vector.interleave2.nxv16f16(<vscale x 8 x half>, <vscale x 8 x half>)
declare { <vscale x 16 x half>, <vscale x 16 x half> } @llvm.vector.deinterleave2.nxv32f16(<vscale x 32 x half>)
declare <vscale x 32 x half> @llvm.vector.interleave2.nxv32f16(<vscale x 16 x half>, <vscale x 16 x half>)