From mboxrd@z Thu Jan 1 00:00:00 1970
From: "Ananyev, Konstantin"
To: Gavin Hu, "dev@dpdk.org"
CC: "nd@arm.com", "david.marchand@redhat.com", "thomas@monjalon.net",
 "stephen@networkplumber.org", "hemant.agrawal@nxp.com", "jerinj@marvell.com",
 "pbhagavatula@marvell.com", "Honnappa.Nagarahalli@arm.com",
 "ruifeng.wang@arm.com", "phil.yang@arm.com", "steve.capper@arm.com"
Date: Fri, 25 Oct 2019 17:27:23 +0000
Message-ID: <2601191342CEEE43887BDE71AB97725801A8C70577@IRSMSX104.ger.corp.intel.com>
References: <1561911676-37718-1-git-send-email-gavin.hu@arm.com>
 <1572017951-41253-3-git-send-email-gavin.hu@arm.com>
In-Reply-To: <1572017951-41253-3-git-send-email-gavin.hu@arm.com>
Subject: Re: [dpdk-dev] [PATCH v10 2/5] eal: add the APIs to wait until equal
Hi Gavin,

> The rte_wait_until_equal_xx APIs abstract the functionality of
> 'polling for a memory location to become equal to a given value'.
>
> Add the RTE_ARM_USE_WFE configuration entry for aarch64, disabled
> by default. When it is enabled, the above APIs will call the WFE instruction
> to save CPU cycles and power.
>
> From a VM, calling this API on aarch64 may trap in and out to
> release vCPUs, which causes high exit latency. Since kernel 4.18.20, an
> adaptive trapping mechanism has been introduced to balance the latency and
> workload.
>
> Signed-off-by: Gavin Hu
> Reviewed-by: Ruifeng Wang
> Reviewed-by: Steve Capper
> Reviewed-by: Ola Liljedahl
> Reviewed-by: Honnappa Nagarahalli
> Reviewed-by: Phil Yang
> Acked-by: Pavan Nikhilesh
> Acked-by: Jerin Jacob
> ---
>  config/arm/meson.build                              |   1 +
>  config/common_base                                  |   5 +
>  .../common/include/arch/arm/rte_pause_64.h          | 188 +++++++++++++++++++
>  lib/librte_eal/common/include/generic/rte_pause.h   | 108 ++++++++++++
>  4 files changed, 302 insertions(+)
>
> diff --git a/config/arm/meson.build b/config/arm/meson.build
> index 979018e..b4b4cac 100644
> --- a/config/arm/meson.build
> +++ b/config/arm/meson.build
> @@ -26,6 +26,7 @@ flags_common_default = [
> 	['RTE_LIBRTE_AVP_PMD', false],
>
> 	['RTE_SCHED_VECTOR', false],
> +	['RTE_ARM_USE_WFE', false],
> ]
>
> flags_generic = [
> diff --git a/config/common_base b/config/common_base
> index e843a21..c812156 100644
> --- a/config/common_base
> +++ b/config/common_base
> @@ -111,6 +111,11 @@ CONFIG_RTE_MAX_VFIO_CONTAINERS=64
> CONFIG_RTE_MALLOC_DEBUG=n
> CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
> CONFIG_RTE_USE_LIBBSD=n
> +# Use WFE instructions to implement the rte_wait_until_equal_xxx APIs,
> +# calling these APIs puts the cores in low power state while waiting
> +# for the memory address to become equal to the expected value.
> +# This is supported only by aarch64.
> +CONFIG_RTE_ARM_USE_WFE=n
>
> #
> # Recognize/ignore the AVX/AVX512 CPU flags for performance/power testing.
> diff --git a/lib/librte_eal/common/include/arch/arm/rte_pause_64.h b/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> index 93895d3..dd37f72 100644
> --- a/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> +++ b/lib/librte_eal/common/include/arch/arm/rte_pause_64.h
> @@ -1,5 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2017 Cavium, Inc
> + * Copyright(c) 2019 Arm Limited
>   */
>
>  #ifndef _RTE_PAUSE_ARM64_H_
> @@ -10,6 +11,11 @@ extern "C" {
>  #endif
>
>  #include
> +
> +#ifdef RTE_ARM_USE_WFE
> +#define RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> +#endif
> +
>  #include "generic/rte_pause.h"
>
>  static inline void rte_pause(void)
> @@ -17,6 +23,188 @@ static inline void rte_pause(void)
>  	asm volatile("yield" ::: "memory");
>  }
>
> +/**
> + * Send an event to exit WFE.
> + */
> +static inline void rte_sevl(void);
> +
> +/**
> + * Put processor into low power WFE (Wait For Event) state
> + */
> +static inline void rte_wfe(void);
> +
> +#ifdef RTE_ARM_USE_WFE
> +static inline void rte_sevl(void)
> +{
> +	asm volatile("sevl" : : : "memory");
> +}
> +
> +static inline void rte_wfe(void)
> +{
> +	asm volatile("wfe" : : : "memory");
> +}
> +#else
> +static inline void rte_sevl(void)
> +{
> +}
> +static inline void rte_wfe(void)
> +{
> +	rte_pause();
> +}
> +#endif
> +
> +#ifdef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Atomic exclusive load from addr; it returns the 16-bit content of *addr
> + * while making it 'monitored'. When it is written by someone else, the
> + * 'monitored' state is cleared and an event is generated implicitly to exit
> + * WFE.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param memorder
> + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> + *  These map to C++11 memory orders with the same names, see the C++11 standard
> + *  or the GCC wiki on atomic synchronization for detailed definitions.
> + */
> +static __rte_always_inline uint16_t
> +__atomic_load_ex_16(volatile uint16_t *addr, int memorder);

I still think (as it is a public header) it would be better to have all function
names prefixed with rte_, or, if you consider them not meant to be used by users
explicitly, with __rte_.
BTW, these _load_ex_ functions could be defined even without
RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED, though I don't know whether there would be
any other non-WFE usages for them.

> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Atomic exclusive load from addr; it returns the 32-bit content of *addr
> + * while making it 'monitored'. When it is written by someone else, the
> + * 'monitored' state is cleared and an event is generated implicitly to exit
> + * WFE.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param memorder
> + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> + *  These map to C++11 memory orders with the same names, see the C++11 standard
> + *  or the GCC wiki on atomic synchronization for detailed definitions.
> + */
> +static __rte_always_inline uint32_t
> +__atomic_load_ex_32(volatile uint32_t *addr, int memorder);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Atomic exclusive load from addr; it returns the 64-bit content of *addr
> + * while making it 'monitored'. When it is written by someone else, the
> + * 'monitored' state is cleared and an event is generated implicitly to exit
> + * WFE.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param memorder
> + *  The valid memory order variants are __ATOMIC_ACQUIRE and __ATOMIC_RELAXED.
> + *  These map to C++11 memory orders with the same names, see the C++11 standard
> + *  or the GCC wiki on atomic synchronization for detailed definitions.
> + */
> +static __rte_always_inline uint64_t
> +__atomic_load_ex_64(volatile uint64_t *addr, int memorder);
> +
> +static __rte_always_inline uint16_t
> +__atomic_load_ex_16(volatile uint16_t *addr, int memorder)
> +{
> +	uint16_t tmp;
> +	assert((memorder == __ATOMIC_ACQUIRE)
> +		|| (memorder == __ATOMIC_RELAXED));
> +	if (memorder == __ATOMIC_ACQUIRE)
> +		asm volatile("ldaxrh %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	else if (memorder == __ATOMIC_RELAXED)
> +		asm volatile("ldxrh %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	return tmp;
> +}
> +
> +static __rte_always_inline uint32_t
> +__atomic_load_ex_32(volatile uint32_t *addr, int memorder)
> +{
> +	uint32_t tmp;
> +	assert((memorder == __ATOMIC_ACQUIRE)
> +		|| (memorder == __ATOMIC_RELAXED));
> +	if (memorder == __ATOMIC_ACQUIRE)
> +		asm volatile("ldaxr %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	else if (memorder == __ATOMIC_RELAXED)
> +		asm volatile("ldxr %w[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	return tmp;
> +}
> +
> +static __rte_always_inline uint64_t
> +__atomic_load_ex_64(volatile uint64_t *addr, int memorder)
> +{
> +	uint64_t tmp;
> +	assert((memorder == __ATOMIC_ACQUIRE)
> +		|| (memorder == __ATOMIC_RELAXED));
> +	if (memorder == __ATOMIC_ACQUIRE)
> +		asm volatile("ldaxr %x[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	else if (memorder == __ATOMIC_RELAXED)
> +		asm volatile("ldxr %x[tmp], [%x[addr]]"
> +			: [tmp] "=&r" (tmp)
> +			: [addr] "r"(addr)
> +			: "memory");
> +	return tmp;
> +}
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		rte_sevl();
> +		do {
> +			rte_wfe();
> +		} while (__atomic_load_ex_16(addr, memorder) != expected);
> +	}
> +}
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		rte_sevl();
> +		do {
> +			rte_wfe();
> +		} while (__atomic_load_n(addr, memorder) != expected);

	while (__atomic_load_ex_32(addr, memorder) != expected);
?
Same for 64 bit version.

> +	}
> +}
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		rte_sevl();
> +		do {
> +			rte_wfe();
> +		} while (__atomic_load_n(addr, memorder) != expected);
> +	}
> +}
> +#endif
> +
>  #ifdef __cplusplus
>  }
>  #endif
> diff --git a/lib/librte_eal/common/include/generic/rte_pause.h b/lib/librte_eal/common/include/generic/rte_pause.h
> index 52bd4db..9854455 100644
> --- a/lib/librte_eal/common/include/generic/rte_pause.h
> +++ b/lib/librte_eal/common/include/generic/rte_pause.h
> @@ -1,5 +1,6 @@
>  /* SPDX-License-Identifier: BSD-3-Clause
>   * Copyright(c) 2017 Cavium, Inc
> + * Copyright(c) 2019 Arm Limited
>   */
>
>  #ifndef _RTE_PAUSE_H_
> @@ -12,6 +13,12 @@
>  *
>  */
>
> +#include
> +#include
> +#include
> +#include
> +#include
> +
>  /**
>  * Pause CPU execution for a short while
>  *
> @@ -20,4 +27,105 @@
>  */
>  static inline void rte_pause(void);
>
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Wait for *addr to be updated with a 16-bit expected value, with a relaxed
> + * memory ordering model meaning the loads around this API can be reordered.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param expected
> + *  A 16-bit expected value to be in the memory location.
> + * @param memorder
> + *  Two different memory orders that can be specified:
> + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + *  C++11 memory orders with the same names, see the C++11 standard or
> + *  the GCC wiki on atomic synchronization for detailed definition.
> + */
> +__rte_experimental
> +static __rte_always_inline void
> +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> +int memorder);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Wait for *addr to be updated with a 32-bit expected value, with a relaxed
> + * memory ordering model meaning the loads around this API can be reordered.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param expected
> + *  A 32-bit expected value to be in the memory location.
> + * @param memorder
> + *  Two different memory orders that can be specified:
> + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + *  C++11 memory orders with the same names, see the C++11 standard or
> + *  the GCC wiki on atomic synchronization for detailed definition.
> + */
> +__rte_experimental
> +static __rte_always_inline void
> +rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> +int memorder);
> +
> +/**
> + * @warning
> + * @b EXPERIMENTAL: this API may change, or be removed, without prior notice
> + *
> + * Wait for *addr to be updated with a 64-bit expected value, with a relaxed
> + * memory ordering model meaning the loads around this API can be reordered.
> + *
> + * @param addr
> + *  A pointer to the memory location.
> + * @param expected
> + *  A 64-bit expected value to be in the memory location.
> + * @param memorder
> + *  Two different memory orders that can be specified:
> + *  __ATOMIC_ACQUIRE and __ATOMIC_RELAXED. These map to
> + *  C++11 memory orders with the same names, see the C++11 standard or
> + *  the GCC wiki on atomic synchronization for detailed definition.
> + */
> +__rte_experimental
> +static __rte_always_inline void
> +rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
> +int memorder);
> +
> +#ifndef RTE_WAIT_UNTIL_EQUAL_ARCH_DEFINED
> +static __rte_always_inline void
> +rte_wait_until_equal_16(volatile uint16_t *addr, uint16_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		do {
> +			rte_pause();
> +		} while (__atomic_load_n(addr, memorder) != expected);
> +	}

I think, these generic implementations could be just:

	while (__atomic_load_n(addr, memorder) != expected)
		rte_pause();

Other than that:
Acked-by: Konstantin Ananyev

> +}
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_32(volatile uint32_t *addr, uint32_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		do {
> +			rte_pause();
> +		} while (__atomic_load_n(addr, memorder) != expected);
> +	}
> +}
> +
> +static __rte_always_inline void
> +rte_wait_until_equal_64(volatile uint64_t *addr, uint64_t expected,
> +int memorder)
> +{
> +	if (__atomic_load_n(addr, memorder) != expected) {
> +		do {
> +			rte_pause();
> +		} while (__atomic_load_n(addr, memorder) != expected);
> +	}
> +}
> +#endif
> +
>  #endif /* _RTE_PAUSE_H_ */
> --
> 2.7.4
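
A minimal caller-side sketch of how the new API might be used (illustrative
only, not part of the patch; the start_flag variable and the helper names
below are hypothetical): one lcore blocks until another sets a flag, sleeping
in WFE when RTE_ARM_USE_WFE is enabled on aarch64 and falling back to
rte_pause() polling otherwise.

	#include <stdint.h>
	#include <rte_pause.h>

	static volatile uint32_t start_flag;

	/* Waiting lcore: spin (or WFE) until start_flag becomes 1. */
	static void
	worker_wait_for_start(void)
	{
		rte_wait_until_equal_32(&start_flag, 1, __ATOMIC_ACQUIRE);
	}

	/* Signalling lcore: the store to the monitored location wakes the waiter. */
	static void
	main_signal_start(void)
	{
		__atomic_store_n(&start_flag, 1, __ATOMIC_RELEASE);
	}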