// RUN: %clang_cc1 -triple aarch64-none-linux-gnu -target-feature +sve2 -fallow-half-arguments-and-returns -S -O1 -Werror -Wall -emit-llvm -o - %s | FileCheck %s
// RUN: %clang_cc1 -triple aarch64-none-linux-gnu -target-feature +sve2 -fallow-half-arguments-and-returns -S -O1 -Werror -Wall -emit-llvm -o - -x c++ %s | FileCheck %s
// RUN: %clang_cc1 -DSVE_OVERLOADED_FORMS -triple aarch64-none-linux-gnu -target-feature +sve2 -fallow-half-arguments-and-returns -S -O1 -Werror -Wall -emit-llvm -o - %s | FileCheck %s
// RUN: %clang_cc1 -DSVE_OVERLOADED_FORMS -triple aarch64-none-linux-gnu -target-feature +sve2 -fallow-half-arguments-and-returns -S -O1 -Werror -Wall -emit-llvm -o - -x c++ %s | FileCheck %s
// RUN: %clang_cc1 -triple aarch64-none-linux-gnu -target-feature +sve -fallow-half-arguments-and-returns -fsyntax-only -verify -verify-ignore-unexpected=error %s
// RUN: %clang_cc1 -DSVE_OVERLOADED_FORMS -triple aarch64-none-linux-gnu -target-feature +sve -fallow-half-arguments-and-returns -fsyntax-only -verify=overload -verify-ignore-unexpected=error %s
#include <arm_sve.h>
#ifdef SVE_OVERLOADED_FORMS
// A simple used,unused... macro, long enough to represent any SVE builtin.
#define SVE_ACLE_FUNC(A1,A2_UNUSED,A3,A4_UNUSED) A1##A3
#else
#define SVE_ACLE_FUNC(A1,A2,A3,A4) A1##A2##A3##A4
#endif
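
// Tests for the SVE2 svrsubhnt (rounding subtract narrowing, top) intrinsics.
// Each function below checks that the ACLE builtin lowers to a call to the
// llvm.aarch64.sve.rsubhnt intrinsic with the expected scalable vector types,
// and that the SVE-only (-target-feature +sve) runs diagnose the builtins as
// implicitly declared.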
svint8_t test_svrsubhnt_s16(svint8_t op1, svint16_t op2, svint16_t op3)
{
// CHECK-LABEL: test_svrsubhnt_s16
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 16 x i8> @llvm.aarch64.sve.rsubhnt.nxv8i16(<vscale x 16 x i8> %op1, <vscale x 8 x i16> %op2, <vscale x 8 x i16> %op3)
// CHECK: ret <vscale x 16 x i8> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_s16'}}
return SVE_ACLE_FUNC(svrsubhnt,_s16,,)(op1, op2, op3);
}
svint16_t test_svrsubhnt_s32(svint16_t op1, svint32_t op2, svint32_t op3)
{
// CHECK-LABEL: test_svrsubhnt_s32
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 8 x i16> @llvm.aarch64.sve.rsubhnt.nxv4i32(<vscale x 8 x i16> %op1, <vscale x 4 x i32> %op2, <vscale x 4 x i32> %op3)
// CHECK: ret <vscale x 8 x i16> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_s32'}}
return SVE_ACLE_FUNC(svrsubhnt,_s32,,)(op1, op2, op3);
}
svint32_t test_svrsubhnt_s64(svint32_t op1, svint64_t op2, svint64_t op3)
{
// CHECK-LABEL: test_svrsubhnt_s64
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 4 x i32> @llvm.aarch64.sve.rsubhnt.nxv2i64(<vscale x 4 x i32> %op1, <vscale x 2 x i64> %op2, <vscale x 2 x i64> %op3)
// CHECK: ret <vscale x 4 x i32> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_s64'}}
return SVE_ACLE_FUNC(svrsubhnt,_s64,,)(op1, op2, op3);
}
svuint8_t test_svrsubhnt_u16(svuint8_t op1, svuint16_t op2, svuint16_t op3)
{
// CHECK-LABEL: test_svrsubhnt_u16
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 16 x i8> @llvm.aarch64.sve.rsubhnt.nxv8i16(<vscale x 16 x i8> %op1, <vscale x 8 x i16> %op2, <vscale x 8 x i16> %op3)
// CHECK: ret <vscale x 16 x i8> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_u16'}}
return SVE_ACLE_FUNC(svrsubhnt,_u16,,)(op1, op2, op3);
}
svuint16_t test_svrsubhnt_u32(svuint16_t op1, svuint32_t op2, svuint32_t op3)
{
// CHECK-LABEL: test_svrsubhnt_u32
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 8 x i16> @llvm.aarch64.sve.rsubhnt.nxv4i32(<vscale x 8 x i16> %op1, <vscale x 4 x i32> %op2, <vscale x 4 x i32> %op3)
// CHECK: ret <vscale x 8 x i16> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_u32'}}
return SVE_ACLE_FUNC(svrsubhnt,_u32,,)(op1, op2, op3);
}
svuint32_t test_svrsubhnt_u64(svuint32_t op1, svuint64_t op2, svuint64_t op3)
{
// CHECK-LABEL: test_svrsubhnt_u64
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 4 x i32> @llvm.aarch64.sve.rsubhnt.nxv2i64(<vscale x 4 x i32> %op1, <vscale x 2 x i64> %op2, <vscale x 2 x i64> %op3)
// CHECK: ret <vscale x 4 x i32> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_u64'}}
return SVE_ACLE_FUNC(svrsubhnt,_u64,,)(op1, op2, op3);
}
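
// The _n_ forms take a scalar third operand; the checks below additionally
// verify that the scalar is splatted via llvm.aarch64.sve.dup.x before being
// passed to llvm.aarch64.sve.rsubhnt.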
svint8_t test_svrsubhnt_n_s16(svint8_t op1, svint16_t op2, int16_t op3)
{
// CHECK-LABEL: test_svrsubhnt_n_s16
// CHECK: %[[DUP:.*]] = call <vscale x 8 x i16> @llvm.aarch64.sve.dup.x.nxv8i16(i16 %op3)
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 16 x i8> @llvm.aarch64.sve.rsubhnt.nxv8i16(<vscale x 16 x i8> %op1, <vscale x 8 x i16> %op2, <vscale x 8 x i16> %[[DUP]])
// CHECK: ret <vscale x 16 x i8> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_n_s16'}}
return SVE_ACLE_FUNC(svrsubhnt,_n_s16,,)(op1, op2, op3);
}
svint16_t test_svrsubhnt_n_s32(svint16_t op1, svint32_t op2, int32_t op3)
{
// CHECK-LABEL: test_svrsubhnt_n_s32
// CHECK: %[[DUP:.*]] = call <vscale x 4 x i32> @llvm.aarch64.sve.dup.x.nxv4i32(i32 %op3)
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 8 x i16> @llvm.aarch64.sve.rsubhnt.nxv4i32(<vscale x 8 x i16> %op1, <vscale x 4 x i32> %op2, <vscale x 4 x i32> %[[DUP]])
// CHECK: ret <vscale x 8 x i16> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_n_s32'}}
return SVE_ACLE_FUNC(svrsubhnt,_n_s32,,)(op1, op2, op3);
}
svint32_t test_svrsubhnt_n_s64(svint32_t op1, svint64_t op2, int64_t op3)
{
// CHECK-LABEL: test_svrsubhnt_n_s64
// CHECK: %[[DUP:.*]] = call <vscale x 2 x i64> @llvm.aarch64.sve.dup.x.nxv2i64(i64 %op3)
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 4 x i32> @llvm.aarch64.sve.rsubhnt.nxv2i64(<vscale x 4 x i32> %op1, <vscale x 2 x i64> %op2, <vscale x 2 x i64> %[[DUP]])
// CHECK: ret <vscale x 4 x i32> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_n_s64'}}
return SVE_ACLE_FUNC(svrsubhnt,_n_s64,,)(op1, op2, op3);
}
svuint8_t test_svrsubhnt_n_u16(svuint8_t op1, svuint16_t op2, uint16_t op3)
{
// CHECK-LABEL: test_svrsubhnt_n_u16
// CHECK: %[[DUP:.*]] = call <vscale x 8 x i16> @llvm.aarch64.sve.dup.x.nxv8i16(i16 %op3)
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 16 x i8> @llvm.aarch64.sve.rsubhnt.nxv8i16(<vscale x 16 x i8> %op1, <vscale x 8 x i16> %op2, <vscale x 8 x i16> %[[DUP]])
// CHECK: ret <vscale x 16 x i8> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_n_u16'}}
return SVE_ACLE_FUNC(svrsubhnt,_n_u16,,)(op1, op2, op3);
}
svuint16_t test_svrsubhnt_n_u32(svuint16_t op1, svuint32_t op2, uint32_t op3)
{
// CHECK-LABEL: test_svrsubhnt_n_u32
// CHECK: %[[DUP:.*]] = call <vscale x 4 x i32> @llvm.aarch64.sve.dup.x.nxv4i32(i32 %op3)
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 8 x i16> @llvm.aarch64.sve.rsubhnt.nxv4i32(<vscale x 8 x i16> %op1, <vscale x 4 x i32> %op2, <vscale x 4 x i32> %[[DUP]])
// CHECK: ret <vscale x 8 x i16> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_n_u32'}}
return SVE_ACLE_FUNC(svrsubhnt,_n_u32,,)(op1, op2, op3);
}
svuint32_t test_svrsubhnt_n_u64(svuint32_t op1, svuint64_t op2, uint64_t op3)
{
// CHECK-LABEL: test_svrsubhnt_n_u64
// CHECK: %[[DUP:.*]] = call <vscale x 2 x i64> @llvm.aarch64.sve.dup.x.nxv2i64(i64 %op3)
// CHECK: %[[INTRINSIC:.*]] = call <vscale x 4 x i32> @llvm.aarch64.sve.rsubhnt.nxv2i64(<vscale x 4 x i32> %op1, <vscale x 2 x i64> %op2, <vscale x 2 x i64> %[[DUP]])
// CHECK: ret <vscale x 4 x i32> %[[INTRINSIC]]
// overload-warning@+2 {{implicit declaration of function 'svrsubhnt'}}
// expected-warning@+1 {{implicit declaration of function 'svrsubhnt_n_u64'}}
return SVE_ACLE_FUNC(svrsubhnt,_n_u64,,)(op1, op2, op3);
}