; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 3
; RUN: llc -mtriple=aarch64 -mattr=+sve < %s -o - | FileCheck -check-prefixes=SVE,SVELINUX %s
; RUN: llc -mtriple=aarch64-windows-msvc -mattr=+sve < %s -o - | FileCheck -check-prefixes=SVE,SVEWINDOWS %s
; RUN: llc -mtriple=aarch64-windows-msvc < %s -o - | FileCheck -check-prefixes=WINDOWS %s

define double @testExp(double %val, i32 %a) {
; SVE-LABEL: testExp:
; SVE: // %bb.0: // %entry
; SVE-NEXT: // kill: def $w0 killed $w0 def $x0
; SVE-NEXT: sxtw x8, w0
; SVE-NEXT: ptrue p0.d
; SVE-NEXT: // kill: def $d0 killed $d0 def $z0
; SVE-NEXT: fmov d1, x8
; SVE-NEXT: fscale z0.d, p0/m, z0.d, z1.d
; SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
; SVE-NEXT: ret
;
; WINDOWS-LABEL: testExp:
; WINDOWS: // %bb.0: // %entry
; WINDOWS-NEXT: b ldexp
entry:
  %call = tail call fast double @ldexp(double %val, i32 %a)
  ret double %call
}

declare double @ldexp(double, i32) memory(none)

define double @testExpIntrinsic(double %val, i32 %a) {
; SVE-LABEL: testExpIntrinsic:
; SVE: // %bb.0: // %entry
; SVE-NEXT: // kill: def $w0 killed $w0 def $x0
; SVE-NEXT: sxtw x8, w0
; SVE-NEXT: ptrue p0.d
; SVE-NEXT: // kill: def $d0 killed $d0 def $z0
; SVE-NEXT: fmov d1, x8
; SVE-NEXT: fscale z0.d, p0/m, z0.d, z1.d
; SVE-NEXT: // kill: def $d0 killed $d0 killed $z0
; SVE-NEXT: ret
;
; WINDOWS-LABEL: testExpIntrinsic:
; WINDOWS: // %bb.0: // %entry
; WINDOWS-NEXT: b ldexp
entry:
  %call = tail call fast double @llvm.ldexp.f64(double %val, i32 %a)
  ret double %call
}

define float @testExpf(float %val, i32 %a) {
; SVELINUX-LABEL: testExpf:
; SVELINUX: // %bb.0: // %entry
; SVELINUX-NEXT: fmov s1, w0
; SVELINUX-NEXT: ptrue p0.s
; SVELINUX-NEXT: // kill: def $s0 killed $s0 def $z0
; SVELINUX-NEXT: fscale z0.s, p0/m, z0.s, z1.s
; SVELINUX-NEXT: // kill: def $s0 killed $s0 killed $z0
; SVELINUX-NEXT: ret
;
; SVEWINDOWS-LABEL: testExpf:
; SVEWINDOWS: // %bb.0: // %entry
; SVEWINDOWS-NEXT: b ldexpf
;
; WINDOWS-LABEL: testExpf:
; WINDOWS: // %bb.0: // %entry
; WINDOWS-NEXT: b ldexpf
entry:
  %call = tail call fast float @ldexpf(float %val, i32 %a)
  ret float %call
}

define float @testExpfIntrinsic(float %val, i32 %a) {
; SVE-LABEL: testExpfIntrinsic:
; SVE: // %bb.0: // %entry
; SVE-NEXT: fmov s1, w0
; SVE-NEXT: ptrue p0.s
; SVE-NEXT: // kill: def $s0 killed $s0 def $z0
; SVE-NEXT: fscale z0.s, p0/m, z0.s, z1.s
; SVE-NEXT: // kill: def $s0 killed $s0 killed $z0
; SVE-NEXT: ret
;
; WINDOWS-LABEL: testExpfIntrinsic:
; WINDOWS: .seh_proc testExpfIntrinsic
; WINDOWS-NEXT: // %bb.0: // %entry
; WINDOWS-NEXT: str x30, [sp, #-16]! // 8-byte Folded Spill
; WINDOWS-NEXT: .seh_save_reg_x x30, 16
; WINDOWS-NEXT: .seh_endprologue
; WINDOWS-NEXT: fcvt d0, s0
; WINDOWS-NEXT: bl ldexp
; WINDOWS-NEXT: fcvt s0, d0
; WINDOWS-NEXT: .seh_startepilogue
; WINDOWS-NEXT: ldr x30, [sp], #16 // 8-byte Folded Reload
; WINDOWS-NEXT: .seh_save_reg_x x30, 16
; WINDOWS-NEXT: .seh_endepilogue
; WINDOWS-NEXT: ret
; WINDOWS-NEXT: .seh_endfunclet
; WINDOWS-NEXT: .seh_endproc
entry:
  %call = tail call fast float @llvm.ldexp.f32(float %val, i32 %a)
  ret float %call
}

declare float @ldexpf(float, i32) memory(none)

define fp128 @testExpl(fp128 %val, i32 %a) {
; SVE-LABEL: testExpl:
; SVE: // %bb.0: // %entry
; SVE-NEXT: b ldexpl
;
; WINDOWS-LABEL: testExpl:
; WINDOWS: // %bb.0: // %entry
; WINDOWS-NEXT: b ldexpl
entry:
  %call = tail call fast fp128 @ldexpl(fp128 %val, i32 %a)
  ret fp128 %call
}

declare fp128 @ldexpl(fp128, i32) memory(none)

define half @testExpf16(half %val, i32 %a) {
; SVE-LABEL: testExpf16:
; SVE: // %bb.0: // %entry
; SVE-NEXT: fcvt s0, h0
; SVE-NEXT: fmov s1, w0
; SVE-NEXT: ptrue p0.s
; SVE-NEXT: fscale z0.s, p0/m, z0.s, z1.s
; SVE-NEXT: fcvt h0, s0
; SVE-NEXT: ret
;
; WINDOWS-LABEL: testExpf16:
; WINDOWS: .seh_proc testExpf16
; WINDOWS-NEXT: // %bb.0: // %entry
; WINDOWS-NEXT: str x30, [sp, #-16]! // 8-byte Folded Spill
; WINDOWS-NEXT: .seh_save_reg_x x30, 16
; WINDOWS-NEXT: .seh_endprologue
; WINDOWS-NEXT: fcvt d0, h0
; WINDOWS-NEXT: bl ldexp
; WINDOWS-NEXT: fcvt h0, d0
; WINDOWS-NEXT: .seh_startepilogue
; WINDOWS-NEXT: ldr x30, [sp], #16 // 8-byte Folded Reload
; WINDOWS-NEXT: .seh_save_reg_x x30, 16
; WINDOWS-NEXT: .seh_endepilogue
; WINDOWS-NEXT: ret
; WINDOWS-NEXT: .seh_endfunclet
; WINDOWS-NEXT: .seh_endproc
entry:
  %0 = tail call fast half @llvm.ldexp.f16.i32(half %val, i32 %a)
  ret half %0
}

declare half @llvm.ldexp.f16.i32(half, i32) memory(none)