From: Phil Yang <phil.yang@arm.com>
To: david.marchand@redhat.com, jerinj@marvell.com,
	gage.eads@intel.com, dev@dpdk.org
Cc: thomas@monjalon.net, hemant.agrawal@nxp.com,
	Honnappa.Nagarahalli@arm.com, gavin.hu@arm.com, nd@arm.com
Subject: [dpdk-dev] [PATCH v11 1/3] eal/arm64: add 128-bit atomic compare exchange
Date: Fri, 18 Oct 2019 19:21:28 +0800
Message-ID: <1571397690-14116-1-git-send-email-phil.yang@arm.com>
In-Reply-To: <1571139508-21701-1-git-send-email-phil.yang@arm.com>

This patch adds the implementation of the 128-bit atomic compare
exchange API on AArch64. The operation is built from paired 64-bit
'ldxp/stxp' exclusive instructions. On platforms with the LSE atomic
extension it is implemented with the 'casp' instruction instead, for
better performance.

Since the '__ARM_FEATURE_ATOMICS' macro is only provided by GCC 9 and
newer, this patch adds a new config flag 'RTE_ARM_FEATURE_ATOMICS' to
enable the 'cas' version on older compilers.

The code uses the x0 register directly. If cas_op_name() and
rte_atomic128_cmp_exchange() were inline functions then, depending on
the register pressure in the parent function, the x0 register could be
corrupted, i.e. the arm64 ABI would be broken. Define the CAS
operations as __rte_noinline functions to avoid this ABI break [1].

[1] 5b40ec6b9662 ("mempool/octeontx2: fix possible arm64 ABI break")
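
For context, a minimal usage sketch of the API this patch provides
follows. It is not part of the patch: the wrapper function name, the
{pointer, tag} value layout and the chosen memory orders are
illustrative assumptions only.

/* Hypothetical caller: atomically replace a {pointer, tag} pair.
 * rte_atomic128_cmp_exchange() is experimental, so the build needs
 * ALLOW_EXPERIMENTAL_API defined.
 */
#include <stdint.h>
#include <rte_atomic.h>

static void
update_tagged_ptr(rte_int128_t *slot, uint64_t new_ptr)
{
	/* A plain read is enough for the first guess; on CAS failure
	 * 'expected' is refreshed with the atomically read value.
	 */
	rte_int128_t expected = *slot;
	rte_int128_t desired;

	do {
		desired.val[0] = new_ptr;               /* new pointer/index  */
		desired.val[1] = expected.val[1] + 1;   /* ABA-protection tag */
	} while (rte_atomic128_cmp_exchange(slot, &expected, &desired,
			0,                  /* weak is ignored: strong CAS */
			__ATOMIC_RELEASE,   /* ordering on success         */
			__ATOMIC_RELAXED    /* ordering on failure         */) == 0);
}

On an LSE-capable core this loop takes the 'caspl' path added below;
otherwise the 'ldxp/stxp' fallback is used.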

Suggested-by: Jerin Jacob <jerinj@marvell.com>
Signed-off-by: Phil Yang <phil.yang@arm.com>
Reviewed-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Tested-by: Honnappa Nagarahalli <honnappa.nagarahalli@arm.com>
Acked-by: Jerin Jacob <jerinj@marvell.com>
---
v11:
1. Renamed cas_op_name by adding the data-width tag 128.
2. Replaced the __ldx/__stx inline functions with macros.
3. Explained in the commit log why the CAS operations are defined as
non-inline functions.

v10:
1. Removed the rte tag from all internal functions.
2. Removed the __MO_LOAD and __MO_STORE macros and kept the __HAS_ACQ
and __HAS_REL definitions under the non-LSE conditional branch.
3. Undefined each macro once it is no longer used.
4. Reworded the 1/3 and 2/3 patches' commit logs to be more specific.

v9:
Updated 19.11 release note.

v8:
Fixed "WARNING:LONG_LINE: line over 80 characters" warnings with latest kernel
checkpatch.pl

v7:
1. Adjust code comment.

v6:
1. Put the RTE_ARM_FEATURE_ATOMICS flag into the EAL group. (Jerin Jacob)
2. Keep rte_stack_lf_stubs.h doing nothing. (Gage Eads)
3. Fixed a 32-bit build issue.

v5:
1. Enable RTE_ARM_FEATURE_ATOMICS on octeontx2 by default. (Jerin Jacob)
2. Record the reason for introducing "rte_stack_lf_stubs.h" in the git
commit message. (Jerin Jacob)
3. Fixed a conditional macro error in rte_atomic128_cmp_exchange.
(Jerin Jacob)

v4:
1. Add RTE_ARM_FEATURE_ATOMICS flag to support LSE CASP instructions.
(Jerin Jacob)
2. Fix possible arm64 ABI break by making casp_op_name noinline.
(Jerin Jacob)
3. Add rte_stack_lf_stubs.h to reduce the ifdef clutter.
(Gage Eads/Jerin Jacob)

v3:
1. Avoid code duplication with a macro. (Jerin Jacob)
2. Map an invalid memory order to the strongest barrier. (Jerin Jacob)
3. Update doc/guides/prog_guide/env_abstraction_layer.rst. (Gage Eads)
4. Fix a 32-bit x86 build issue. (Gage Eads)
5. Correct documentation issues in UT. (Gage Eads)

v2:
Initial version.

 config/arm/meson.build                             |   2 +
 config/common_base                                 |   3 +
 config/defconfig_arm64-octeontx2-linuxapp-gcc      |   1 +
 config/defconfig_arm64-thunderx2-linuxapp-gcc      |   1 +
 .../common/include/arch/arm/rte_atomic_64.h        | 151 +++++++++++++++++++++
 .../common/include/arch/x86/rte_atomic_64.h        |  12 --
 lib/librte_eal/common/include/generic/rte_atomic.h |  17 ++-
 7 files changed, 174 insertions(+), 13 deletions(-)

diff --git a/config/arm/meson.build b/config/arm/meson.build
index 979018e..9f28271 100644
--- a/config/arm/meson.build
+++ b/config/arm/meson.build
@@ -71,11 +71,13 @@ flags_thunderx2_extra = [
 	['RTE_CACHE_LINE_SIZE', 64],
 	['RTE_MAX_NUMA_NODES', 2],
 	['RTE_MAX_LCORE', 256],
+	['RTE_ARM_FEATURE_ATOMICS', true],
 	['RTE_USE_C11_MEM_MODEL', true]]
 flags_octeontx2_extra = [
 	['RTE_MACHINE', '"octeontx2"'],
 	['RTE_MAX_NUMA_NODES', 1],
 	['RTE_MAX_LCORE', 24],
+	['RTE_ARM_FEATURE_ATOMICS', true],
 	['RTE_EAL_IGB_UIO', false],
 	['RTE_USE_C11_MEM_MODEL', true]]
 
diff --git a/config/common_base b/config/common_base
index e843a21..a96beb9 100644
--- a/config/common_base
+++ b/config/common_base
@@ -82,6 +82,9 @@ CONFIG_RTE_MAX_LCORE=128
 CONFIG_RTE_MAX_NUMA_NODES=8
 CONFIG_RTE_MAX_HEAPS=32
 CONFIG_RTE_MAX_MEMSEG_LISTS=64
+
+# Use ARM LSE ATOMIC instructions
+CONFIG_RTE_ARM_FEATURE_ATOMICS=n
 # each memseg list will be limited to either RTE_MAX_MEMSEG_PER_LIST pages
 # or RTE_MAX_MEM_MB_PER_LIST megabytes worth of memory, whichever is smaller
 CONFIG_RTE_MAX_MEMSEG_PER_LIST=8192
diff --git a/config/defconfig_arm64-octeontx2-linuxapp-gcc b/config/defconfig_arm64-octeontx2-linuxapp-gcc
index f20da24..7687dbe 100644
--- a/config/defconfig_arm64-octeontx2-linuxapp-gcc
+++ b/config/defconfig_arm64-octeontx2-linuxapp-gcc
@@ -9,6 +9,7 @@ CONFIG_RTE_MACHINE="octeontx2"
 CONFIG_RTE_CACHE_LINE_SIZE=128
 CONFIG_RTE_MAX_NUMA_NODES=1
 CONFIG_RTE_MAX_LCORE=24
+CONFIG_RTE_ARM_FEATURE_ATOMICS=y
 
 # Doesn't support NUMA
 CONFIG_RTE_EAL_NUMA_AWARE_HUGEPAGES=n
diff --git a/config/defconfig_arm64-thunderx2-linuxapp-gcc b/config/defconfig_arm64-thunderx2-linuxapp-gcc
index cc5c64b..af4a89c 100644
--- a/config/defconfig_arm64-thunderx2-linuxapp-gcc
+++ b/config/defconfig_arm64-thunderx2-linuxapp-gcc
@@ -9,3 +9,4 @@ CONFIG_RTE_MACHINE="thunderx2"
 CONFIG_RTE_CACHE_LINE_SIZE=64
 CONFIG_RTE_MAX_NUMA_NODES=2
 CONFIG_RTE_MAX_LCORE=256
+CONFIG_RTE_ARM_FEATURE_ATOMICS=y
diff --git a/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h b/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
index 97060e4..d9ebccc 100644
--- a/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
+++ b/lib/librte_eal/common/include/arch/arm/rte_atomic_64.h
@@ -1,5 +1,6 @@
 /* SPDX-License-Identifier: BSD-3-Clause
  * Copyright(c) 2015 Cavium, Inc
+ * Copyright(c) 2019 Arm Limited
  */
 
 #ifndef _RTE_ATOMIC_ARM64_H_
@@ -14,6 +15,9 @@ extern "C" {
 #endif
 
 #include "generic/rte_atomic.h"
+#include <rte_branch_prediction.h>
+#include <rte_compat.h>
+#include <rte_debug.h>
 
 #define dsb(opt) asm volatile("dsb " #opt : : : "memory")
 #define dmb(opt) asm volatile("dmb " #opt : : : "memory")
@@ -40,6 +44,153 @@ extern "C" {
 
 #define rte_cio_rmb() dmb(oshld)
 
+/*------------------------ 128 bit atomic operations -------------------------*/
+
+#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
+#define __ATOMIC128_CAS_OP(cas_op_name, op_string)                          \
+static __rte_noinline rte_int128_t                                          \
+cas_op_name(rte_int128_t *dst, rte_int128_t old,                            \
+		rte_int128_t updated)                                       \
+{                                                                           \
+	/* caspX instructions register pair must start from even-numbered
+	 * register at operand 1.
+	 * So, specify registers for local variables here.
+	 */                                                                 \
+	register uint64_t x0 __asm("x0") = (uint64_t)old.val[0];            \
+	register uint64_t x1 __asm("x1") = (uint64_t)old.val[1];            \
+	register uint64_t x2 __asm("x2") = (uint64_t)updated.val[0];        \
+	register uint64_t x3 __asm("x3") = (uint64_t)updated.val[1];        \
+	asm volatile(                                                       \
+		op_string " %[old0], %[old1], %[upd0], %[upd1], [%[dst]]"   \
+		: [old0] "+r" (x0),                                         \
+		[old1] "+r" (x1)                                            \
+		: [upd0] "r" (x2),                                          \
+		[upd1] "r" (x3),                                            \
+		[dst] "r" (dst)                                             \
+		: "memory");                                                \
+	old.val[0] = x0;                                                    \
+	old.val[1] = x1;                                                    \
+	return old;                                                         \
+}
+
+__ATOMIC128_CAS_OP(__cas_128_relaxed, "casp")
+__ATOMIC128_CAS_OP(__cas_128_acquire, "caspa")
+__ATOMIC128_CAS_OP(__cas_128_release, "caspl")
+__ATOMIC128_CAS_OP(__cas_128_acq_rel, "caspal")
+
+#undef __ATOMIC128_CAS_OP
+
+#endif
+
+__rte_experimental
+static inline int
+rte_atomic128_cmp_exchange(rte_int128_t *dst,
+				rte_int128_t *exp,
+				const rte_int128_t *src,
+				unsigned int weak,
+				int success,
+				int failure)
+{
+	/* Always do strong CAS */
+	RTE_SET_USED(weak);
+	/* Ignore memory ordering for failure, memory order for
+	 * success must be stronger or equal
+	 */
+	RTE_SET_USED(failure);
+	/* Find invalid memory order */
+	RTE_ASSERT(success == __ATOMIC_RELAXED
+			|| success == __ATOMIC_ACQUIRE
+			|| success == __ATOMIC_RELEASE
+			|| success == __ATOMIC_ACQ_REL
+			|| success == __ATOMIC_SEQ_CST);
+
+	rte_int128_t expected = *exp;
+	rte_int128_t desired = *src;
+	rte_int128_t old;
+
+#if defined(__ARM_FEATURE_ATOMICS) || defined(RTE_ARM_FEATURE_ATOMICS)
+	if (success == __ATOMIC_RELAXED)
+		old = __cas_128_relaxed(dst, expected, desired);
+	else if (success == __ATOMIC_ACQUIRE)
+		old = __cas_128_acquire(dst, expected, desired);
+	else if (success == __ATOMIC_RELEASE)
+		old = __cas_128_release(dst, expected, desired);
+	else
+		old = __cas_128_acq_rel(dst, expected, desired);
+#else
+#define __HAS_ACQ(mo) ((mo) != __ATOMIC_RELAXED && (mo) != __ATOMIC_RELEASE)
+#define __HAS_RLS(mo) ((mo) == __ATOMIC_RELEASE || (mo) == __ATOMIC_ACQ_REL || \
+					  (mo) == __ATOMIC_SEQ_CST)
+
+	int ldx_mo = __HAS_ACQ(success) ? __ATOMIC_ACQUIRE : __ATOMIC_RELAXED;
+	int stx_mo = __HAS_RLS(success) ? __ATOMIC_RELEASE : __ATOMIC_RELAXED;
+
+#undef __HAS_ACQ
+#undef __HAS_RLS
+
+	uint32_t ret = 1;
+
+	/* ldx128 can not guarantee atomic,
+	 * Must write back src or old to verify atomicity of ldx128;
+	 */
+	do {
+
+#define __LOAD_128(op_string, src, dst) {\
+	asm volatile(                    \
+		op_string " %0, %1, %2"  \
+		: "=&r" (dst.val[0]),    \
+		  "=&r" (dst.val[1])     \
+		: "Q" (src->val[0])      \
+		: "memory"); }
+
+		if (ldx_mo == __ATOMIC_RELAXED)
+			__LOAD_128("ldxp", dst, old)
+		else
+			__LOAD_128("ldaxp", dst, old)
+
+#undef __LOAD_128
+
+#define __STORE_128(op_string, dst, src, ret) {\
+	asm volatile(                          \
+		op_string " %w0, %1, %2, %3"   \
+		: "=&r" (ret)                  \
+		: "r" (src.val[0]),            \
+		  "r" (src.val[1]),            \
+		  "Q" (dst->val[0])            \
+		: "memory"); }
+
+		if (likely(old.int128 == expected.int128)) {
+			if (stx_mo == __ATOMIC_RELAXED)
+				__STORE_128("stxp", dst, desired, ret)
+			else
+				__STORE_128("stlxp", dst, desired, ret)
+		} else {
+			/* In the failure case (since 'weak' is ignored and only
+			 * weak == 0 is implemented), expected should contain
+			 * the atomically read value of dst. This means, 'old'
+			 * needs to be stored back to ensure it was read
+			 * atomically.
+			 */
+			if (stx_mo == __ATOMIC_RELAXED)
+				__STORE_128("stxp", dst, old, ret)
+			else
+				__STORE_128("stlxp", dst, old, ret)
+		}
+#undef __STORE_128
+
+	} while (unlikely(ret));
+#endif
+
+	/* Unconditionally updating expected removes
+	 * an 'if' statement.
+	 * expected should already be in register if
+	 * not in the cache.
+	 */
+	*exp = old;
+
+	return (old.int128 == expected.int128);
+}
+
 #ifdef __cplusplus
 }
 #endif
diff --git a/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h b/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
index 1335d92..cfe7067 100644
--- a/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
+++ b/lib/librte_eal/common/include/arch/x86/rte_atomic_64.h
@@ -183,18 +183,6 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
 
 /*------------------------ 128 bit atomic operations -------------------------*/
 
-/**
- * 128-bit integer structure.
- */
-RTE_STD_C11
-typedef struct {
-	RTE_STD_C11
-	union {
-		uint64_t val[2];
-		__extension__ __int128 int128;
-	};
-} __rte_aligned(16) rte_int128_t;
-
 __rte_experimental
 static inline int
 rte_atomic128_cmp_exchange(rte_int128_t *dst,
diff --git a/lib/librte_eal/common/include/generic/rte_atomic.h b/lib/librte_eal/common/include/generic/rte_atomic.h
index 24ff7dc..e6ab15a 100644
--- a/lib/librte_eal/common/include/generic/rte_atomic.h
+++ b/lib/librte_eal/common/include/generic/rte_atomic.h
@@ -1081,6 +1081,20 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
 
 /*------------------------ 128 bit atomic operations -------------------------*/
 
+/**
+ * 128-bit integer structure.
+ */
+RTE_STD_C11
+typedef struct {
+	RTE_STD_C11
+	union {
+		uint64_t val[2];
+#ifdef RTE_ARCH_64
+		__extension__ __int128 int128;
+#endif
+	};
+} __rte_aligned(16) rte_int128_t;
+
 #ifdef __DOXYGEN__
 
 /**
@@ -1093,7 +1107,8 @@ static inline void rte_atomic64_clear(rte_atomic64_t *v)
  *     *exp = *dst
  * @endcode
  *
- * @note This function is currently only available for the x86-64 platform.
+ * @note This function is currently available for the x86-64 and aarch64
+ * platforms.
  *
  * @note The success and failure arguments must be one of the __ATOMIC_* values
  * defined in the C++11 standard. For details on their behavior, refer to the
-- 
2.7.4


Thread overview: 88+ messages
2019-06-23  2:41 [dpdk-dev] [PATCH v1 1/3] eal/arm64: add 128-bit atomic compare exchange Phil Yang
2019-06-23  2:41 ` [dpdk-dev] [PATCH v1 2/3] test/atomic: add 128b compare and swap test Phil Yang
2019-06-23  2:41 ` [dpdk-dev] [PATCH v1 3/3] eal/stack: enable lock-free stack for aarch64 Phil Yang
2019-06-23  3:15 ` [dpdk-dev] [PATCH v2 1/3] eal/arm64: add 128-bit atomic compare exchange Phil Yang
2019-06-23  3:15   ` [dpdk-dev] [PATCH v2 2/3] test/atomic: add 128b compare and swap test Phil Yang
2019-06-24 15:09     ` Eads, Gage
2019-06-24 15:29       ` Phil Yang (Arm Technology China)
2019-06-23  3:15   ` [dpdk-dev] [PATCH v2 3/3] eal/stack: enable lock-free stack for aarch64 Phil Yang
2019-06-24 15:15     ` Eads, Gage
2019-06-24 15:22       ` Phil Yang (Arm Technology China)
2019-06-24 14:46   ` [dpdk-dev] [PATCH v2 1/3] eal/arm64: add 128-bit atomic compare exchange Eads, Gage
2019-06-24 15:35     ` Phil Yang (Arm Technology China)
2019-06-28  8:11 ` [dpdk-dev] [PATCH v3 " Phil Yang
2019-06-28  8:11   ` [dpdk-dev] [PATCH v3 2/3] test/atomic: add 128b compare and swap test Phil Yang
2019-06-29  0:17     ` Eads, Gage
2019-07-19  4:03     ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
2019-06-28  8:11   ` [dpdk-dev] [PATCH v3 3/3] eal/stack: enable lock-free stack for aarch64 Phil Yang
2019-06-29  0:18     ` Eads, Gage
2019-07-19  4:18     ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
2019-07-19  4:42       ` Eads, Gage
2019-07-19  5:02         ` Jerin Jacob Kollanukkaran
2019-07-19  5:15           ` Phil Yang (Arm Technology China)
2019-07-03 12:25   ` [dpdk-dev] [EXT] [PATCH v3 1/3] eal/arm64: add 128-bit atomic compare exchange Jerin Jacob Kollanukkaran
2019-07-03 13:07     ` Jerin Jacob Kollanukkaran
2019-07-05  4:20       ` Honnappa Nagarahalli
2019-07-05  4:37         ` Pavan Nikhilesh Bhagavatula
2019-07-09  9:27           ` Phil Yang (Arm Technology China)
2019-07-09 11:14             ` Jerin Jacob Kollanukkaran
2019-07-19  6:24   ` Jerin Jacob Kollanukkaran
2019-07-19 11:01     ` Phil Yang (Arm Technology China)
2019-07-19 12:35       ` Jerin Jacob Kollanukkaran
2019-07-19 13:56         ` Phil Yang (Arm Technology China)
2019-07-19 14:50           ` Eads, Gage
2019-07-22  8:44 ` [dpdk-dev] [PATCH v4 " Phil Yang
2019-07-22  8:44   ` [dpdk-dev] [PATCH v4 2/3] test/atomic: add 128b compare and swap test Phil Yang
2019-07-22  8:44   ` [dpdk-dev] [PATCH v4 3/3] eal/stack: enable lock-free stack for aarch64 Phil Yang
2019-07-22 10:22     ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
2019-07-22 11:51       ` Phil Yang (Arm Technology China)
2019-07-22 10:20   ` [dpdk-dev] [EXT] [PATCH v4 1/3] eal/arm64: add 128-bit atomic compare exchange Jerin Jacob Kollanukkaran
2019-07-22 11:50     ` Phil Yang (Arm Technology China)
2019-07-22 13:06 ` [dpdk-dev] [PATCH v5 " Phil Yang
2019-07-22 13:06   ` [dpdk-dev] [PATCH v5 2/3] test/atomic: add 128b compare and swap test Phil Yang
2019-07-22 13:06   ` [dpdk-dev] [PATCH v5 3/3] eal/stack: enable lock-free stack for aarch64 Phil Yang
2019-07-22 14:14     ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
2019-07-22 15:19       ` Phil Yang (Arm Technology China)
2019-07-22 14:34     ` [dpdk-dev] " Eads, Gage
2019-07-22 14:43       ` Phil Yang (Arm Technology China)
2019-07-22 14:19   ` [dpdk-dev] [EXT] [PATCH v5 1/3] eal/arm64: add 128-bit atomic compare exchange Jerin Jacob Kollanukkaran
2019-07-22 16:23     ` Phil Yang (Arm Technology China)
2019-07-22 16:22 ` [dpdk-dev] [PATCH v6 " Phil Yang
2019-07-22 16:22   ` [dpdk-dev] [PATCH v6 2/3] test/atomic: add 128b compare and swap test Phil Yang
2019-07-22 16:22   ` [dpdk-dev] [PATCH v6 3/3] eal/stack: enable lock-free stack for aarch64 Phil Yang
2019-07-22 16:59     ` [dpdk-dev] [EXT] " Jerin Jacob Kollanukkaran
2019-07-22 16:57   ` [dpdk-dev] [EXT] [PATCH v6 1/3] eal/arm64: add 128-bit atomic compare exchange Jerin Jacob Kollanukkaran
2019-07-23  3:28     ` Phil Yang (Arm Technology China)
2019-07-23  7:09       ` Jerin Jacob Kollanukkaran
2019-07-23  7:53         ` Phil Yang (Arm Technology China)
2019-07-23  5:57 ` [dpdk-dev] [PATCH v7 " Phil Yang
2019-07-23  5:57   ` [dpdk-dev] [PATCH v7 2/3] test/atomic: add 128b compare and swap test Phil Yang
2019-07-23  5:57   ` [dpdk-dev] [PATCH v7 3/3] eal/stack: enable lock-free stack for aarch64 Phil Yang
2019-07-23  7:05   ` [dpdk-dev] [PATCH v8 1/3] eal/arm64: add 128-bit atomic compare exchange jerinj
2019-07-23  7:05     ` [dpdk-dev] [PATCH v8 2/3] test/atomic: add 128b compare and swap test jerinj
2019-07-23  7:05     ` [dpdk-dev] [PATCH v8 3/3] eal/stack: enable lock-free stack for aarch64 jerinj
2019-08-14  8:27     ` [dpdk-dev] [PATCH v9 1/3] eal/arm64: add 128-bit atomic compare exchange Phil Yang
2019-08-14  8:27       ` [dpdk-dev] [PATCH v9 2/3] test/atomic: add 128b compare and swap test Phil Yang
2019-10-14 15:45         ` David Marchand
2019-10-15 11:32           ` Phil Yang (Arm Technology China)
2019-08-14  8:27       ` [dpdk-dev] [PATCH v9 3/3] eal/stack: enable lock-free stack for aarch64 Phil Yang
2019-10-14 15:45         ` David Marchand
2019-10-15 11:32           ` Phil Yang (Arm Technology China)
2019-10-14 15:43       ` [dpdk-dev] [PATCH v9 1/3] eal/arm64: add 128-bit atomic compare exchange David Marchand
2019-10-15 11:32         ` Phil Yang (Arm Technology China)
2019-10-15 12:16           ` David Marchand
2019-10-16  9:04             ` Phil Yang (Arm Technology China)
2019-10-17 12:45               ` David Marchand
2019-10-15 11:38       ` [dpdk-dev] [PATCH v10 " Phil Yang
2019-10-15 11:38         ` [dpdk-dev] [PATCH v10 2/3] test/atomic: add 128b compare and swap test Phil Yang
2019-10-15 11:38         ` [dpdk-dev] [PATCH v10 3/3] eal/stack: enable lock-free stack for aarch64 Phil Yang
2019-10-18 11:21         ` Phil Yang [this message]
2019-10-18 11:21           ` [dpdk-dev] [PATCH v11 2/3] test/atomic: add 128b compare and swap test Phil Yang
2019-10-21  8:25             ` David Marchand
2019-10-18 11:21           ` [dpdk-dev] [PATCH v11 3/3] eal/stack: enable lock-free stack for aarch64 Phil Yang
2019-10-21  8:26             ` David Marchand
2019-10-18 14:16           ` [dpdk-dev] [PATCH v11 1/3] eal/arm64: add 128-bit atomic compare exchange David Marchand
2019-10-18 14:24             ` Jerin Jacob
2019-10-18 14:33               ` David Marchand
2019-10-18 14:36                 ` Jerin Jacob
2019-10-21  8:24                   ` David Marchand
