// NOTE: Assertions have been autogenerated by utils/update_cc_test_checks.py
// RUN: %clang_cc1 -no-opaque-pointers -triple riscv32 -target-feature +zknh -emit-llvm %s -o - \
// RUN: | FileCheck %s -check-prefix=RV32ZKNH
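// Check that the RV32 Zknh SHA-256 and SHA-512 builtins lower to the
// corresponding llvm.riscv.* intrinsics.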
// RV32ZKNH-LABEL: @sha256sig0(
// RV32ZKNH-NEXT: entry:
// RV32ZKNH-NEXT: [[RS1_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: store i32 [[RS1:%.*]], i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP0:%.*]] = load i32, i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP1:%.*]] = call i32 @llvm.riscv.sha256sig0.i32(i32 [[TMP0]])
// RV32ZKNH-NEXT: ret i32 [[TMP1]]
//
long sha256sig0(long rs1) {
  return __builtin_riscv_sha256sig0(rs1);
}
// RV32ZKNH-LABEL: @sha256sig1(
// RV32ZKNH-NEXT: entry:
// RV32ZKNH-NEXT: [[RS1_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: store i32 [[RS1:%.*]], i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP0:%.*]] = load i32, i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP1:%.*]] = call i32 @llvm.riscv.sha256sig1.i32(i32 [[TMP0]])
// RV32ZKNH-NEXT: ret i32 [[TMP1]]
//
long sha256sig1(long rs1) {
  return __builtin_riscv_sha256sig1(rs1);
}
// RV32ZKNH-LABEL: @sha256sum0(
// RV32ZKNH-NEXT: entry:
// RV32ZKNH-NEXT: [[RS1_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: store i32 [[RS1:%.*]], i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP0:%.*]] = load i32, i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP1:%.*]] = call i32 @llvm.riscv.sha256sum0.i32(i32 [[TMP0]])
// RV32ZKNH-NEXT: ret i32 [[TMP1]]
//
long sha256sum0(long rs1) {
  return __builtin_riscv_sha256sum0(rs1);
}
// RV32ZKNH-LABEL: @sha256sum1(
// RV32ZKNH-NEXT: entry:
// RV32ZKNH-NEXT: [[RS1_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: store i32 [[RS1:%.*]], i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP0:%.*]] = load i32, i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP1:%.*]] = call i32 @llvm.riscv.sha256sum1.i32(i32 [[TMP0]])
// RV32ZKNH-NEXT: ret i32 [[TMP1]]
//
long sha256sum1(long rs1) {
  return __builtin_riscv_sha256sum1(rs1);
}
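// On RV32, the SHA-512 operations act on the 32-bit halves of each 64-bit
// value, so these builtins carry a _32 suffix and take two 32-bit operands.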
// RV32ZKNH-LABEL: @sha512sig0h(
// RV32ZKNH-NEXT: entry:
// RV32ZKNH-NEXT: [[RS1_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: [[RS2_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: store i32 [[RS1:%.*]], i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: store i32 [[RS2:%.*]], i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP0:%.*]] = load i32, i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP1:%.*]] = load i32, i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP2:%.*]] = call i32 @llvm.riscv.sha512sig0h(i32 [[TMP0]], i32 [[TMP1]])
// RV32ZKNH-NEXT: ret i32 [[TMP2]]
//
int sha512sig0h(int rs1, int rs2) {
  return __builtin_riscv_sha512sig0h_32(rs1, rs2);
}
// RV32ZKNH-LABEL: @sha512sig0l(
// RV32ZKNH-NEXT: entry:
// RV32ZKNH-NEXT: [[RS1_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: [[RS2_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: store i32 [[RS1:%.*]], i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: store i32 [[RS2:%.*]], i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP0:%.*]] = load i32, i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP1:%.*]] = load i32, i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP2:%.*]] = call i32 @llvm.riscv.sha512sig0l(i32 [[TMP0]], i32 [[TMP1]])
// RV32ZKNH-NEXT: ret i32 [[TMP2]]
//
int sha512sig0l(int rs1, int rs2) {
  return __builtin_riscv_sha512sig0l_32(rs1, rs2);
}
// RV32ZKNH-LABEL: @sha512sig1h(
// RV32ZKNH-NEXT: entry:
// RV32ZKNH-NEXT: [[RS1_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: [[RS2_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: store i32 [[RS1:%.*]], i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: store i32 [[RS2:%.*]], i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP0:%.*]] = load i32, i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP1:%.*]] = load i32, i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP2:%.*]] = call i32 @llvm.riscv.sha512sig1h(i32 [[TMP0]], i32 [[TMP1]])
// RV32ZKNH-NEXT: ret i32 [[TMP2]]
//
int sha512sig1h(int rs1, int rs2) {
  return __builtin_riscv_sha512sig1h_32(rs1, rs2);
}
// RV32ZKNH-LABEL: @sha512sig1l(
// RV32ZKNH-NEXT: entry:
// RV32ZKNH-NEXT: [[RS1_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: [[RS2_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: store i32 [[RS1:%.*]], i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: store i32 [[RS2:%.*]], i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP0:%.*]] = load i32, i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP1:%.*]] = load i32, i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP2:%.*]] = call i32 @llvm.riscv.sha512sig1l(i32 [[TMP0]], i32 [[TMP1]])
// RV32ZKNH-NEXT: ret i32 [[TMP2]]
//
int sha512sig1l(int rs1, int rs2) {
  return __builtin_riscv_sha512sig1l_32(rs1, rs2);
}
// RV32ZKNH-LABEL: @sha512sum0r(
// RV32ZKNH-NEXT: entry:
// RV32ZKNH-NEXT: [[RS1_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: [[RS2_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: store i32 [[RS1:%.*]], i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: store i32 [[RS2:%.*]], i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP0:%.*]] = load i32, i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP1:%.*]] = load i32, i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP2:%.*]] = call i32 @llvm.riscv.sha512sum0r(i32 [[TMP0]], i32 [[TMP1]])
// RV32ZKNH-NEXT: ret i32 [[TMP2]]
//
int sha512sum0r(int rs1, int rs2) {
  return __builtin_riscv_sha512sum0r_32(rs1, rs2);
}
// RV32ZKNH-LABEL: @sha512sum1r(
// RV32ZKNH-NEXT: entry:
// RV32ZKNH-NEXT: [[RS1_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: [[RS2_ADDR:%.*]] = alloca i32, align 4
// RV32ZKNH-NEXT: store i32 [[RS1:%.*]], i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: store i32 [[RS2:%.*]], i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP0:%.*]] = load i32, i32* [[RS1_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP1:%.*]] = load i32, i32* [[RS2_ADDR]], align 4
// RV32ZKNH-NEXT: [[TMP2:%.*]] = call i32 @llvm.riscv.sha512sum1r(i32 [[TMP0]], i32 [[TMP1]])
// RV32ZKNH-NEXT: ret i32 [[TMP2]]
//
int sha512sum1r(int rs1, int rs2) {
  return __builtin_riscv_sha512sum1r_32(rs1, rs2);
}