; NOTE: Assertions have been autogenerated by utils/update_llc_test_checks.py UTC_ARGS: --version 5
; RUN: llc < %s -mtriple=x86_64-- -mcpu=k8 | FileCheck %s --check-prefixes=X64
; RUN: llc < %s -mtriple=x86_64-- -mcpu=opteron | FileCheck %s --check-prefixes=X64
; RUN: llc < %s -mtriple=x86_64-- -mcpu=athlon64 | FileCheck %s --check-prefixes=X64
; RUN: llc < %s -mtriple=x86_64-- -mcpu=athlon-fx | FileCheck %s --check-prefixes=X64
; RUN: llc < %s -mtriple=x86_64-- -mcpu=k8-sse3 | FileCheck %s --check-prefixes=X64
; RUN: llc < %s -mtriple=x86_64-- -mcpu=opteron-sse3 | FileCheck %s --check-prefixes=X64
; RUN: llc < %s -mtriple=x86_64-- -mcpu=athlon64-sse3 | FileCheck %s --check-prefixes=X64
; RUN: llc < %s -mtriple=x86_64-- -mcpu=amdfam10 | FileCheck %s --check-prefixes=X64
; RUN: llc < %s -mtriple=x86_64-- -mcpu=btver1 | FileCheck %s --check-prefixes=X64
; RUN: llc < %s -mtriple=x86_64-- -mcpu=btver2 | FileCheck %s --check-prefixes=BMI
; RUN: llc < %s -mtriple=x86_64-- -mcpu=bdver1 | FileCheck %s --check-prefixes=BMI
; RUN: llc < %s -mtriple=x86_64-- -mcpu=bdver2 | FileCheck %s --check-prefixes=BMI
; RUN: llc < %s -mtriple=x86_64-- -mcpu=bdver3 | FileCheck %s --check-prefixes=BMI
; RUN: llc < %s -mtriple=x86_64-- -mcpu=bdver4 | FileCheck %s --check-prefixes=BMI2-SLOW
; RUN: llc < %s -mtriple=x86_64-- -mcpu=znver1 | FileCheck %s --check-prefixes=BMI2-SLOW
; RUN: llc < %s -mtriple=x86_64-- -mcpu=znver2 | FileCheck %s --check-prefixes=BMI2-SLOW
; RUN: llc < %s -mtriple=x86_64-- -mcpu=znver3 | FileCheck %s --check-prefixes=BMI2-FAST
; RUN: llc < %s -mtriple=x86_64-- -mcpu=znver4 | FileCheck %s --check-prefixes=BMI2-FAST
; RUN: llc < %s -mtriple=x86_64-- -mcpu=znver5 | FileCheck %s --check-prefixes=BMI2-FAST
; Verify that for X86_64 processors known to have high-latency double-precision
; shift instructions we do not generate 'shld' or 'shrd' instructions, while
; processors where SHLD/SHRD are fast (the BMI2-FAST prefixes) still use them.
; uint64_t lshift(uint64_t a, uint64_t b, int c)
; {
;   return (a << c) | (b >> (64-c));
; }
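;
; On the slow-SHLD targets, the checks below show this funnel shift expanded
; without 'shld'. A rough C sketch of that lowering (illustration only, not
; part of the test input; the name lshift_expanded is hypothetical):
;   uint64_t lshift_expanded(uint64_t a, uint64_t b, int c)
;   {
;     return (a << c) | ((b >> 1) >> (~c & 63));
;   }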
define i64 @lshift(i64 %a, i64 %b, i32 %c) nounwind readnone {
; X64-LABEL: lshift:
; X64: # %bb.0: # %entry
; X64-NEXT: movl %edx, %ecx
; X64-NEXT: movq %rsi, %rax
; X64-NEXT: shlq %cl, %rdi
; X64-NEXT: shrq %rax
; X64-NEXT: notb %cl
; X64-NEXT: # kill: def $cl killed $cl killed $ecx
; X64-NEXT: shrq %cl, %rax
; X64-NEXT: orq %rdi, %rax
; X64-NEXT: retq
;
; BMI-LABEL: lshift:
; BMI: # %bb.0: # %entry
; BMI-NEXT: movq %rsi, %rax
; BMI-NEXT: movl %edx, %ecx
; BMI-NEXT: shrq %rax
; BMI-NEXT: shlq %cl, %rdi
; BMI-NEXT: notb %cl
; BMI-NEXT: # kill: def $cl killed $cl killed $ecx
; BMI-NEXT: shrq %cl, %rax
; BMI-NEXT: orq %rdi, %rax
; BMI-NEXT: retq
;
; BMI2-SLOW-LABEL: lshift:
; BMI2-SLOW: # %bb.0: # %entry
; BMI2-SLOW-NEXT: # kill: def $edx killed $edx def $rdx
; BMI2-SLOW-NEXT: shlxq %rdx, %rdi, %rcx
; BMI2-SLOW-NEXT: notb %dl
; BMI2-SLOW-NEXT: shrq %rsi
; BMI2-SLOW-NEXT: shrxq %rdx, %rsi, %rax
; BMI2-SLOW-NEXT: orq %rcx, %rax
; BMI2-SLOW-NEXT: retq
;
; BMI2-FAST-LABEL: lshift:
; BMI2-FAST: # %bb.0: # %entry
; BMI2-FAST-NEXT: movl %edx, %ecx
; BMI2-FAST-NEXT: movq %rdi, %rax
; BMI2-FAST-NEXT: # kill: def $cl killed $cl killed $ecx
; BMI2-FAST-NEXT: shldq %cl, %rsi, %rax
; BMI2-FAST-NEXT: retq
entry:
%sh_prom = zext i32 %c to i64
%shl = shl i64 %a, %sh_prom
%sub = sub nsw i32 64, %c
%sh_prom1 = zext i32 %sub to i64
%shr = lshr i64 %b, %sh_prom1
%or = or i64 %shr, %shl
ret i64 %or
}
; uint64_t rshift(uint64_t a, uint64_t b, int c)
; {
;   return (a >> c) | (b << (64-c));
; }
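;
; As with lshift, the slow-SHLD targets expand this without 'shrd'. A rough C
; sketch of that lowering (illustration only, not part of the test input; the
; name rshift_expanded is hypothetical):
;   uint64_t rshift_expanded(uint64_t a, uint64_t b, int c)
;   {
;     return (a >> c) | ((b << 1) << (~c & 63));
;   }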
define i64 @rshift(i64 %a, i64 %b, i32 %c) nounwind readnone {
; X64-LABEL: rshift:
; X64: # %bb.0: # %entry
; X64-NEXT: movl %edx, %ecx
; X64-NEXT: shrq %cl, %rdi
; X64-NEXT: leaq (%rsi,%rsi), %rax
; X64-NEXT: notb %cl
; X64-NEXT: # kill: def $cl killed $cl killed $ecx
; X64-NEXT: shlq %cl, %rax
; X64-NEXT: orq %rdi, %rax
; X64-NEXT: retq
;
; BMI-LABEL: rshift:
; BMI: # %bb.0: # %entry
; BMI-NEXT: movl %edx, %ecx
; BMI-NEXT: leaq (%rsi,%rsi), %rax
; BMI-NEXT: shrq %cl, %rdi
; BMI-NEXT: notb %cl
; BMI-NEXT: # kill: def $cl killed $cl killed $ecx
; BMI-NEXT: shlq %cl, %rax
; BMI-NEXT: orq %rdi, %rax
; BMI-NEXT: retq
;
; BMI2-SLOW-LABEL: rshift:
; BMI2-SLOW: # %bb.0: # %entry
; BMI2-SLOW-NEXT: # kill: def $edx killed $edx def $rdx
; BMI2-SLOW-NEXT: shrxq %rdx, %rdi, %rcx
; BMI2-SLOW-NEXT: notb %dl
; BMI2-SLOW-NEXT: addq %rsi, %rsi
; BMI2-SLOW-NEXT: shlxq %rdx, %rsi, %rax
; BMI2-SLOW-NEXT: orq %rcx, %rax
; BMI2-SLOW-NEXT: retq
;
; BMI2-FAST-LABEL: rshift:
; BMI2-FAST: # %bb.0: # %entry
; BMI2-FAST-NEXT: movl %edx, %ecx
; BMI2-FAST-NEXT: movq %rdi, %rax
; BMI2-FAST-NEXT: # kill: def $cl killed $cl killed $ecx
; BMI2-FAST-NEXT: shrdq %cl, %rsi, %rax
; BMI2-FAST-NEXT: retq
entry:
%sh_prom = zext i32 %c to i64
%shr = lshr i64 %a, %sh_prom
%sub = sub nsw i32 64, %c
%sh_prom1 = zext i32 %sub to i64
%shl = shl i64 %b, %sh_prom1
%or = or i64 %shl, %shr
ret i64 %or
}