From: Boqun Feng <boqun.feng@gmail.com>
To: Ingo Molnar <mingo@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>,
	Mark Rutland <mark.rutland@arm.com>,
	linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org, aryabinin@virtuozzo.com,
	catalin.marinas@arm.com, dvyukov@google.com, will.deacon@arm.com
Subject: Re: [PATCH] locking/atomics/powerpc: Move cmpxchg helpers to asm/cmpxchg.h and define the full set of cmpxchg APIs
Date: Sat, 5 May 2018 22:03:40 +0800	[thread overview]
Message-ID: <20180505140340.uzfhoc42xvas4m72@tardis> (raw)
In-Reply-To: <20180505132751.gwzu2vbzibr2risd@gmail.com>


On Sat, May 05, 2018 at 03:27:51PM +0200, Ingo Molnar wrote:
> 
> * Boqun Feng <boqun.feng@gmail.com> wrote:
> 
> > > May I suggest the patch below? No change in functionality, but it documents the 
> > > lack of the cmpxchg_release() APIs and maps them explicitly to the full cmpxchg() 
> > > version. (Which the generic code does now in a rather roundabout way.)
> > > 
> > 
> > Hmm.. cmpxchg_release() is actually lwsync() + cmpxchg_relaxed(), but
> > you just make it sync() + cmpxchg_relaxed() + sync() with the fallback,
> > and sync() is much heavier, so I don't think the fallback is correct.
> 
> Indeed!
> 
> The bit I missed previously is that PowerPC provides its own __atomic_op_release() 
> method:
> 
>    #define __atomic_op_release(op, args...)                                \
>    ({                                                                      \
>            __asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory");    \
>            op##_relaxed(args);                                             \
>    })
> 
> ... which maps to LWSYNC as you say, and my patch made that worse.
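
To make the cost difference concrete, the two mappings expand to roughly the
following on PowerPC (a hand-written sketch with the barrier mnemonics spelled
out instead of PPC_RELEASE_BARRIER & co., and with the size dispatch omitted,
so not the literal header text):

/* What we want: release = lwsync (lightweight) + relaxed op. */
#define ppc_cmpxchg_release_sketch(ptr, o, n)				\
({									\
	__asm__ __volatile__("lwsync" : : : "memory");			\
	cmpxchg_relaxed((ptr), (o), (n));				\
})

/* What the fallback gives: full cmpxchg(), i.e. sync on both sides. */
#define ppc_cmpxchg_release_via_full_sketch(ptr, o, n)			\
({									\
	__typeof__(*(ptr)) __old;					\
	__asm__ __volatile__("sync" : : : "memory");			\
	__old = cmpxchg_relaxed((ptr), (o), (n));			\
	__asm__ __volatile__("sync" : : : "memory");			\
	__old;								\
})

lwsync orders everything except a prior store against a later load, which is
all a release needs, so the sync pair in the fallback is pure overhead here.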
> 
> > I think maybe you can move powerpc's __atomic_op_{acquire,release}()
> > from atomic.h to cmpxchg.h (in arch/powerpc/include/asm), and
> > 
> > 	#define cmpxchg_release __atomic_op_release(cmpxchg, __VA_ARGS__);
> > 	#define cmpxchg64_release __atomic_op_release(cmpxchg64, __VA_ARGS__);
> > 
> > I put a diff below to say what I mean (untested).
> > 
> > > Also, the change to arch/powerpc/include/asm/atomic.h has no functional effect 
> > > right now either, but should anyone add a _relaxed() variant in the future, with 
> > > this change atomic_cmpxchg_release() and atomic64_cmpxchg_release() will pick that 
> > > up automatically.
> > > 
> > 
> > You mean with your other modification in include/linux/atomic.h, right?
> > Because with the unmodified include/linux/atomic.h, we already pick that
> > up automatically. If so, I think that's fine.
> > 
> > Here is the diff with the modification for cmpxchg_release(); the idea is
> > that we generate them in asm/cmpxchg.h rather than in linux/atomic.h for
> > ppc, so the new linux/atomic.h keeps working. Because, if I understand
> > correctly, the next linux/atomic.h only accepts the following cases:
> > 
> > 1)	architecture only defines fully ordered primitives
> > 
> > or
> > 
> > 2)	architecture only defines _relaxed primitives
> > 
> > or
> > 
> > 3)	architecture defines all four (fully, _relaxed, _acquire,
> > 	_release) primitives
> > 
> > So powerpc needs to define all four primitives in its own
> > asm/cmpxchg.h.
> 
> Correct, although the new logic is still an RFC: PeterZ didn't like the first
> version I proposed and might NAK it.
> 

Understood. From my side, I don't have strong feelings either way. But
since powerpc is affected by the new logic, I'm glad I could help.
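
For reference, the three cases above come from the per-operation #ifndef
selection in linux/atomic.h; for the cmpxchg() family it is roughly the
following (reconstructed from memory, so the exact header text may differ):

#ifndef cmpxchg_relaxed
/* Case 1: the arch only defines the fully ordered primitive. */
#define cmpxchg_relaxed		cmpxchg
#define cmpxchg_acquire		cmpxchg
#define cmpxchg_release		cmpxchg
#else
/* Cases 2 and 3: the arch defines _relaxed; build any missing variants. */
#ifndef cmpxchg_acquire
#define cmpxchg_acquire(...)	__atomic_op_acquire(cmpxchg, __VA_ARGS__)
#endif
#ifndef cmpxchg_release
#define cmpxchg_release(...)	__atomic_op_release(cmpxchg, __VA_ARGS__)
#endif
#ifndef cmpxchg
#define cmpxchg(...)		__atomic_op_fence(cmpxchg, __VA_ARGS__)
#endif
#endif /* cmpxchg_relaxed */

With the patch below, powerpc provides all four cmpxchg() variants itself, so
none of these fallbacks fire for it.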

> Thanks for the patch - I have created the patch below from it and added your 
> Signed-off-by.
> 

Thanks ;-)

> The only change I made beyond a trivial build fix is that I also added the release 
> atomics variants explicitly:
> 
> +#define atomic_cmpxchg_release(v, o, n) \
> +	cmpxchg_release(&((v)->counter), (o), (n))
> +#define atomic64_cmpxchg_release(v, o, n) \
> +	cmpxchg_release(&((v)->counter), (o), (n))
> 
> It has passed a PowerPC cross-build test here, but no runtime tests.
> 

Do you have the commit on any branch in the tip tree? I could pull it,
cross-build it, and check the assembly code of lib/atomic64_test.c; that way
I could verify whether we messed anything up.

> Does this patch look good to you?
> 

Yep!

Regards,
Boqun

> (Still subject to PeterZ's Ack/NAK.)
> 
> Thanks,
> 
> 	Ingo
> 
> ======================>
> From: Boqun Feng <boqun.feng@gmail.com>
> Date: Sat, 5 May 2018 19:28:17 +0800
> Subject: [PATCH] locking/atomics/powerpc: Move cmpxchg helpers to asm/cmpxchg.h and define the full set of cmpxchg APIs
> 
> Move PowerPC's __op_{acquire,release}() from atomic.h to
> cmpxchg.h (in arch/powerpc/include/asm), plus use them to
> define these two methods:
> 
> 	#define cmpxchg_release __op_release(cmpxchg, __VA_ARGS__);
> 	#define cmpxchg64_release __op_release(cmpxchg64, __VA_ARGS__);
> 
> ... the idea is to generate all these methods in cmpxchg.h and to define the full
> array of atomic primitives, including the cmpxchg_release() methods which were
> defined by the generic code before.
> 
> Also define the atomic[64]_cmpxchg_release() variants explicitly.
> 
> This ensures that all these low level cmpxchg APIs are defined in
> PowerPC headers, with no generic header fallbacks.
> 
> No change in functionality or code generation.
> 
> Signed-off-by: Boqun Feng <boqun.feng@gmail.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Mark Rutland <mark.rutland@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Cc: aryabinin@virtuozzo.com
> Cc: catalin.marinas@arm.com
> Cc: dvyukov@google.com
> Cc: linux-arm-kernel@lists.infradead.org
> Cc: will.deacon@arm.com
> Link: http://lkml.kernel.org/r/20180505112817.ihrb726i37bwm4cj@tardis
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> ---
>  arch/powerpc/include/asm/atomic.h  | 22 ++++------------------
>  arch/powerpc/include/asm/cmpxchg.h | 24 ++++++++++++++++++++++++
>  2 files changed, 28 insertions(+), 18 deletions(-)
> 
> diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
> index 682b3e6a1e21..4e06955ec10f 100644
> --- a/arch/powerpc/include/asm/atomic.h
> +++ b/arch/powerpc/include/asm/atomic.h
> @@ -13,24 +13,6 @@
>  
>  #define ATOMIC_INIT(i)		{ (i) }
>  
> -/*
> - * Since *_return_relaxed and {cmp}xchg_relaxed are implemented with
> - * a "bne-" instruction at the end, so an isync is enough as a acquire barrier
> - * on the platform without lwsync.
> - */
> -#define __atomic_op_acquire(op, args...)				\
> -({									\
> -	typeof(op##_relaxed(args)) __ret  = op##_relaxed(args);		\
> -	__asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");	\
> -	__ret;								\
> -})
> -
> -#define __atomic_op_release(op, args...)				\
> -({									\
> -	__asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory");	\
> -	op##_relaxed(args);						\
> -})
> -
>  static __inline__ int atomic_read(const atomic_t *v)
>  {
>  	int t;
> @@ -213,6 +195,8 @@ static __inline__ int atomic_dec_return_relaxed(atomic_t *v)
>  	cmpxchg_relaxed(&((v)->counter), (o), (n))
>  #define atomic_cmpxchg_acquire(v, o, n) \
>  	cmpxchg_acquire(&((v)->counter), (o), (n))
> +#define atomic_cmpxchg_release(v, o, n) \
> +	cmpxchg_release(&((v)->counter), (o), (n))
>  
>  #define atomic_xchg(v, new) (xchg(&((v)->counter), new))
>  #define atomic_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new))
> @@ -519,6 +503,8 @@ static __inline__ long atomic64_dec_if_positive(atomic64_t *v)
>  	cmpxchg_relaxed(&((v)->counter), (o), (n))
>  #define atomic64_cmpxchg_acquire(v, o, n) \
>  	cmpxchg_acquire(&((v)->counter), (o), (n))
> +#define atomic64_cmpxchg_release(v, o, n) \
> +	cmpxchg_release(&((v)->counter), (o), (n))
>  
>  #define atomic64_xchg(v, new) (xchg(&((v)->counter), new))
>  #define atomic64_xchg_relaxed(v, new) xchg_relaxed(&((v)->counter), (new))
> diff --git a/arch/powerpc/include/asm/cmpxchg.h b/arch/powerpc/include/asm/cmpxchg.h
> index 9b001f1f6b32..e27a612b957f 100644
> --- a/arch/powerpc/include/asm/cmpxchg.h
> +++ b/arch/powerpc/include/asm/cmpxchg.h
> @@ -8,6 +8,24 @@
>  #include <asm/asm-compat.h>
>  #include <linux/bug.h>
>  
> +/*
> + * Since *_return_relaxed and {cmp}xchg_relaxed are implemented with
> + * a "bne-" instruction at the end, so an isync is enough as a acquire barrier
> + * on the platform without lwsync.
> + */
> +#define __atomic_op_acquire(op, args...)				\
> +({									\
> +	typeof(op##_relaxed(args)) __ret  = op##_relaxed(args);		\
> +	__asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory");	\
> +	__ret;								\
> +})
> +
> +#define __atomic_op_release(op, args...)				\
> +({									\
> +	__asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory");	\
> +	op##_relaxed(args);						\
> +})
> +
>  #ifdef __BIG_ENDIAN
>  #define BITOFF_CAL(size, off)	((sizeof(u32) - size - off) * BITS_PER_BYTE)
>  #else
> @@ -512,6 +530,9 @@ __cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
>  			(unsigned long)_o_, (unsigned long)_n_,		\
>  			sizeof(*(ptr)));				\
>  })
> +
> +#define cmpxchg_release(...) __atomic_op_release(cmpxchg, __VA_ARGS__)
> +
>  #ifdef CONFIG_PPC64
>  #define cmpxchg64(ptr, o, n)						\
>    ({									\
> @@ -533,6 +554,9 @@ __cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
>  	BUILD_BUG_ON(sizeof(*(ptr)) != 8);				\
>  	cmpxchg_acquire((ptr), (o), (n));				\
>  })
> +
> +#define cmpxchg64_release(...) __atomic_op_release(cmpxchg64, __VA_ARGS__)
> +
>  #else
>  #include <asm-generic/cmpxchg-local.h>
>  #define cmpxchg64_local(ptr, o, n) __cmpxchg64_local_generic((ptr), (o), (n))



Thread overview: 103+ messages
2018-05-04 17:39 [PATCH 0/6] arm64: add instrumented atomics Mark Rutland
2018-05-04 17:39 ` Mark Rutland
2018-05-04 17:39 ` [PATCH 1/6] locking/atomic, asm-generic: instrument ordering variants Mark Rutland
2018-05-04 17:39   ` Mark Rutland
2018-05-04 18:01   ` Peter Zijlstra
2018-05-04 18:01     ` Peter Zijlstra
2018-05-04 18:09     ` Mark Rutland
2018-05-04 18:09       ` Mark Rutland
2018-05-04 18:24       ` Peter Zijlstra
2018-05-04 18:24         ` Peter Zijlstra
2018-05-05  9:12         ` Mark Rutland
2018-05-05  9:12           ` Mark Rutland
2018-05-05  8:11       ` [PATCH] locking/atomics: Clean up the atomic.h maze of #defines Ingo Molnar
2018-05-05  8:11         ` Ingo Molnar
2018-05-05  8:36         ` [PATCH] locking/atomics: Simplify the op definitions in atomic.h some more Ingo Molnar
2018-05-05  8:36           ` Ingo Molnar
2018-05-05  8:54           ` [PATCH] locking/atomics: Combine the atomic_andnot() and atomic64_andnot() API definitions Ingo Molnar
2018-05-05  8:54             ` Ingo Molnar
2018-05-06 12:15             ` [tip:locking/core] " tip-bot for Ingo Molnar
2018-05-06 14:15             ` [PATCH] " Andrea Parri
2018-05-06 14:15               ` Andrea Parri
2018-05-06 12:14           ` [tip:locking/core] locking/atomics: Simplify the op definitions in atomic.h some more tip-bot for Ingo Molnar
2018-05-09  7:33             ` Peter Zijlstra
2018-05-09 13:03               ` Will Deacon
2018-05-15  8:54                 ` Ingo Molnar
2018-05-15  8:35               ` Ingo Molnar
2018-05-15 11:41                 ` Peter Zijlstra
2018-05-15 12:13                   ` Peter Zijlstra
2018-05-15 15:43                   ` Mark Rutland
2018-05-15 17:10                     ` Peter Zijlstra
2018-05-15 17:53                       ` Mark Rutland
2018-05-15 18:11                         ` Peter Zijlstra
2018-05-15 18:15                           ` Peter Zijlstra
2018-05-15 18:52                             ` Linus Torvalds
2018-05-15 19:39                               ` Peter Zijlstra
2018-05-21 17:12                           ` Mark Rutland
2018-05-06 14:12           ` [PATCH] " Andrea Parri
2018-05-06 14:12             ` Andrea Parri
2018-05-06 14:57             ` Ingo Molnar
2018-05-06 14:57               ` Ingo Molnar
2018-05-07  9:54               ` Andrea Parri
2018-05-07  9:54                 ` Andrea Parri
2018-05-18 18:43               ` Palmer Dabbelt
2018-05-18 18:43                 ` Palmer Dabbelt
2018-05-05  8:47         ` [PATCH] locking/atomics: Clean up the atomic.h maze of #defines Peter Zijlstra
2018-05-05  8:47           ` Peter Zijlstra
2018-05-05  9:04           ` Ingo Molnar
2018-05-05  9:04             ` Ingo Molnar
2018-05-05  9:24             ` Peter Zijlstra
2018-05-05  9:24               ` Peter Zijlstra
2018-05-05  9:38             ` Ingo Molnar
2018-05-05  9:38               ` Ingo Molnar
2018-05-05 10:00               ` [RFC PATCH] locking/atomics/powerpc: Introduce optimized cmpxchg_release() family of APIs for PowerPC Ingo Molnar
2018-05-05 10:00                 ` Ingo Molnar
2018-05-05 10:26                 ` Boqun Feng
2018-05-05 10:26                   ` Boqun Feng
2018-05-06  1:56                 ` Benjamin Herrenschmidt
2018-05-06  1:56                   ` Benjamin Herrenschmidt
2018-05-05 10:16               ` [PATCH] locking/atomics: Clean up the atomic.h maze of #defines Boqun Feng
2018-05-05 10:16                 ` Boqun Feng
2018-05-05 10:35                 ` [RFC PATCH] locking/atomics/powerpc: Clarify why the cmpxchg_relaxed() family of APIs falls back to full cmpxchg() Ingo Molnar
2018-05-05 10:35                   ` Ingo Molnar
2018-05-05 11:28                   ` Boqun Feng
2018-05-05 11:28                     ` Boqun Feng
2018-05-05 13:27                     ` [PATCH] locking/atomics/powerpc: Move cmpxchg helpers to asm/cmpxchg.h and define the full set of cmpxchg APIs Ingo Molnar
2018-05-05 13:27                       ` Ingo Molnar
2018-05-05 14:03                       ` Boqun Feng [this message]
2018-05-05 14:03                         ` Boqun Feng
2018-05-06 12:11                         ` Ingo Molnar
2018-05-06 12:11                           ` Ingo Molnar
2018-05-07  1:04                           ` Boqun Feng
2018-05-07  1:04                             ` Boqun Feng
2018-05-07  6:50                             ` Ingo Molnar
2018-05-07  6:50                               ` Ingo Molnar
2018-05-06 12:13                     ` [tip:locking/core] " tip-bot for Boqun Feng
2018-05-07 13:31                       ` [PATCH v2] " Boqun Feng
2018-05-07 13:31                         ` Boqun Feng
2018-05-05  9:05           ` [PATCH] locking/atomics: Clean up the atomic.h maze of #defines Dmitry Vyukov
2018-05-05  9:05             ` Dmitry Vyukov
2018-05-05  9:32             ` Peter Zijlstra
2018-05-05  9:32               ` Peter Zijlstra
2018-05-07  6:43               ` [RFC PATCH] locking/atomics/x86/64: Clean up and fix details of <asm/atomic64_64.h> Ingo Molnar
2018-05-07  6:43                 ` Ingo Molnar
2018-05-05  9:09           ` [PATCH] locking/atomics: Clean up the atomic.h maze of #defines Ingo Molnar
2018-05-05  9:09             ` Ingo Molnar
2018-05-05  9:29             ` Peter Zijlstra
2018-05-05  9:29               ` Peter Zijlstra
2018-05-05 10:48               ` [PATCH] locking/atomics: Shorten the __atomic_op() defines to __op() Ingo Molnar
2018-05-05 10:48                 ` Ingo Molnar
2018-05-05 10:59                 ` Ingo Molnar
2018-05-05 10:59                   ` Ingo Molnar
2018-05-06 12:15                 ` [tip:locking/core] " tip-bot for Ingo Molnar
2018-05-06 12:14         ` [tip:locking/core] locking/atomics: Clean up the atomic.h maze of #defines tip-bot for Ingo Molnar
2018-05-04 17:39 ` [PATCH 2/6] locking/atomic, asm-generic: instrument atomic*andnot*() Mark Rutland
2018-05-04 17:39   ` Mark Rutland
2018-05-04 17:39 ` [PATCH 3/6] arm64: use <linux/atomic.h> for cmpxchg Mark Rutland
2018-05-04 17:39   ` Mark Rutland
2018-05-04 17:39 ` [PATCH 4/6] arm64: fix assembly constraints " Mark Rutland
2018-05-04 17:39   ` Mark Rutland
2018-05-04 17:39 ` [PATCH 5/6] arm64: use instrumented atomics Mark Rutland
2018-05-04 17:39   ` Mark Rutland
2018-05-04 17:39 ` [PATCH 6/6] arm64: instrument smp_{load_acquire,store_release} Mark Rutland
2018-05-04 17:39   ` Mark Rutland
