/*
* BSD LICENSE
*
* Copyright (C) Cavium networks Ltd. 2015.
*
* Redistribution and use in source and binary forms, with or without
* modification, are permitted provided that the following conditions
* are met:
*
* * Redistributions of source code must retain the above copyright
* notice, this list of conditions and the following disclaimer.
* * Redistributions in binary form must reproduce the above copyright
* notice, this list of conditions and the following disclaimer in
* the documentation and/or other materials provided with the
* distribution.
* * Neither the name of Cavium networks nor the names of its
* contributors may be used to endorse or promote products derived
* from this software without specific prior written permission.
*
* THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
* "AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
* LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
* A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
* OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
* SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
* LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
* DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
* THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
* (INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
* OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
*/

#ifndef _RTE_CYCLES_ARM64_H_
#define _RTE_CYCLES_ARM64_H_

#ifdef __cplusplus
extern "C" {
#endif

#include "generic/rte_cycles.h"

/**
 * Read the time base register.
 *
 * @return
 *   The time base for this lcore.
 */
#ifndef RTE_ARM_EAL_RDTSC_USE_PMU
/**
 * This call is portable to any ARMv8 architecture; however, cntvct_el0
 * typically runs at <= 100 MHz, so it may be imprecise for some tasks.
 * (A sketch for reading the counter's actual frequency follows this
 * function.)
 */
static inline uint64_t
rte_rdtsc(void)
{
	uint64_t tsc;

	asm volatile("mrs %0, cntvct_el0" : "=r" (tsc));
	return tsc;
}
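
/*
 * Illustrative sketch (not part of the DPDK API): the rate at which
 * cntvct_el0 ticks is exposed by the architecture in cntfrq_el0, so the
 * available resolution can be checked at run time. The helper name below
 * is hypothetical.
 *
 *   static inline uint64_t read_cntvct_freq(void)
 *   {
 *       uint64_t freq;
 *
 *       asm volatile("mrs %0, cntfrq_el0" : "=r" (freq));
 *       return freq;   // counter frequency in Hz
 *   }
 */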
#else
/**
 * This is an alternative method to implement rte_rdtsc(), using the
 * high-resolution PMU cycle counter. The cycle counter runs at the CPU
 * frequency, and this scheme uses the ARMv8 PMU subsystem to read it from
 * user space. However, user-space access to the PMU cycle counter is not
 * enabled by default in the arm64 Linux kernel.
 * It can be enabled by configuring the PMU from privileged mode (kernel
 * space), for example:
 *
 * asm volatile("msr pmintenset_el1, %0" : : "r" ((u64)(0 << 31)));
 * asm volatile("msr pmcntenset_el0, %0" : : "r" (BIT(31)));
 * asm volatile("msr pmuserenr_el0, %0" : : "r" (BIT(0) | BIT(2)));
 * asm volatile("mrs %0, pmcr_el0" : "=r" (val));
 * val |= (BIT(0) | BIT(2));
 * isb();
 * asm volatile("msr pmcr_el0, %0" : : "r" (val));
 *
 * (A self-contained kernel-space sketch of this sequence follows the
 * #endif below.)
 */
static inline uint64_t
rte_rdtsc(void)
{
	uint64_t tsc;

	asm volatile("mrs %0, pmccntr_el0" : "=r"(tsc));
	return tsc;
}
#endif
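
/*
 * Illustrative kernel-space sketch (not part of this header): the MSR
 * sequence documented above has to run in privileged mode on every CPU.
 * Wrapped in a hypothetical Linux kernel-module helper, it could look like:
 *
 *   #include <linux/smp.h>      // on_each_cpu()
 *   #include <linux/bits.h>     // BIT()
 *   #include <asm/barrier.h>    // isb()
 *
 *   static void enable_pmu_user_access(void *info)
 *   {
 *       u64 val;
 *
 *       // Enable the cycle counter and permit EL0 (user-space) reads,
 *       // following the register writes documented above.
 *       asm volatile("msr pmintenset_el1, %0" : : "r" ((u64)(0 << 31)));
 *       asm volatile("msr pmcntenset_el0, %0" : : "r" (BIT(31)));
 *       asm volatile("msr pmuserenr_el0, %0" : : "r" (BIT(0) | BIT(2)));
 *       asm volatile("mrs %0, pmcr_el0" : "=r" (val));
 *       val |= (BIT(0) | BIT(2));
 *       isb();
 *       asm volatile("msr pmcr_el0, %0" : : "r" (val));
 *   }
 *
 *   // From the module init path, e.g.:
 *   // on_each_cpu(enable_pmu_user_access, NULL, 1);
 */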
static inline uint64_t
rte_rdtsc_precise(void)
{
	rte_mb();
	return rte_rdtsc();
}

static inline uint64_t
rte_get_tsc_cycles(void) { return rte_rdtsc(); }
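
/*
 * Illustrative usage sketch (not part of this header); it assumes
 * rte_get_tsc_hz() from generic/rte_cycles.h and a caller-supplied,
 * hypothetical do_work() routine:
 *
 *   uint64_t start, cycles;
 *   double seconds;
 *
 *   start = rte_rdtsc_precise();
 *   do_work();
 *   cycles = rte_rdtsc_precise() - start;
 *   seconds = (double)cycles / rte_get_tsc_hz();
 */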
#ifdef __cplusplus
}
#endif

#endif /* _RTE_CYCLES_ARM64_H_ */