From: Boqun Feng <boqun.feng@gmail.com>
To: Nicholas Piggin <npiggin@gmail.com>
Cc: Alexey Kardashevskiy <aik@ozlabs.ru>,
	Arnd Bergmann <arnd@arndb.de>,
	Christophe Leroy <christophe.leroy@c-s.fr>,
	linux-arch@vger.kernel.org, linux-kernel@vger.kernel.org,
	linuxppc-dev@lists.ozlabs.org,
	Michael Ellerman <mpe@ellerman.id.au>,
	Peter Zijlstra <peterz@infradead.org>,
	Will Deacon <will@kernel.org>
Subject: Re: [PATCH 3/3] powerpc: rewrite atomics to use ARCH_ATOMIC
Date: Wed, 23 Dec 2020 10:45:47 +0800	[thread overview]
Message-ID: <X+KvWzrkV+3pxnz2@boqun-archlinux> (raw)
In-Reply-To: <1608608903.7wgw6zmbi8.astroid@bobo.none>

On Tue, Dec 22, 2020 at 01:52:50PM +1000, Nicholas Piggin wrote:
> Excerpts from Boqun Feng's message of November 14, 2020 1:30 am:
> > Hi Nicholas,
> > 
> > On Wed, Nov 11, 2020 at 09:07:23PM +1000, Nicholas Piggin wrote:
> >> All the cool kids are doing it.
> >> 
> >> Signed-off-by: Nicholas Piggin <npiggin@gmail.com>
> >> ---
> >>  arch/powerpc/include/asm/atomic.h  | 681 ++++++++++-------------------
> >>  arch/powerpc/include/asm/cmpxchg.h |  62 +--
> >>  2 files changed, 248 insertions(+), 495 deletions(-)
> >> 
> >> diff --git a/arch/powerpc/include/asm/atomic.h b/arch/powerpc/include/asm/atomic.h
> >> index 8a55eb8cc97b..899aa2403ba7 100644
> >> --- a/arch/powerpc/include/asm/atomic.h
> >> +++ b/arch/powerpc/include/asm/atomic.h
> >> @@ -11,185 +11,285 @@
> >>  #include <asm/cmpxchg.h>
> >>  #include <asm/barrier.h>
> >>  
> >> +#define ARCH_ATOMIC
> >> +
> >> +#ifndef CONFIG_64BIT
> >> +#include <asm-generic/atomic64.h>
> >> +#endif
> >> +
> >>  /*
> >>   * Since *_return_relaxed and {cmp}xchg_relaxed are implemented with
> >>   * a "bne-" instruction at the end, so an isync is enough as a acquire barrier
> >>   * on the platform without lwsync.
> >>   */
> >>  #define __atomic_acquire_fence()					\
> >> -	__asm__ __volatile__(PPC_ACQUIRE_BARRIER "" : : : "memory")
> >> +	asm volatile(PPC_ACQUIRE_BARRIER "" : : : "memory")
> >>  
> >>  #define __atomic_release_fence()					\
> >> -	__asm__ __volatile__(PPC_RELEASE_BARRIER "" : : : "memory")
> >> +	asm volatile(PPC_RELEASE_BARRIER "" : : : "memory")
> >>  
> >> -static __inline__ int atomic_read(const atomic_t *v)
> >> -{
> >> -	int t;
> >> +#define __atomic_pre_full_fence		smp_mb
> >>  
> >> -	__asm__ __volatile__("lwz%U1%X1 %0,%1" : "=r"(t) : "m"(v->counter));
> >> +#define __atomic_post_full_fence	smp_mb
> >>  
> 
> Thanks for the review.
> 
> > Do you need to define __atomic_{pre,post}_full_fence for PPC? IIRC, they
> > default to smp_mb__{before,after}_atomic(), which are smp_mb() by default
> > on PPC.
> 
> Okay I didn't realise that's not required.
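
Right, and for reference the generic code already provides those as
defaults when the arch doesn't override them, roughly like this (from
memory, the exact spelling in include/linux/atomic.h may differ a bit):

	/* picked up when the arch header leaves these undefined */
	#ifndef __atomic_pre_full_fence
	#define __atomic_pre_full_fence	smp_mb__before_atomic
	#endif

	#ifndef __atomic_post_full_fence
	#define __atomic_post_full_fence	smp_mb__after_atomic
	#endif

and smp_mb__{before,after}_atomic() are already smp_mb() on PPC, so
defining them again in asm/atomic.h only duplicates that.
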
> 
> >> -	return t;
> >> +#define arch_atomic_read(v)			__READ_ONCE((v)->counter)
> >> +#define arch_atomic_set(v, i)			__WRITE_ONCE(((v)->counter), (i))
> >> +#ifdef CONFIG_64BIT
> >> +#define ATOMIC64_INIT(i)			{ (i) }
> >> +#define arch_atomic64_read(v)			__READ_ONCE((v)->counter)
> >> +#define arch_atomic64_set(v, i)			__WRITE_ONCE(((v)->counter), (i))
> >> +#endif
> >> +
> > [...]
> >>  
> >> +#define ATOMIC_FETCH_OP_UNLESS_RELAXED(name, type, dtype, width, asm_op) \
> >> +static inline int arch_##name##_relaxed(type *v, dtype a, dtype u)	\
> > 
> > I don't think we have atomic_fetch_*_unless_relaxed() in the atomic APIs,
> > ditto for:
> > 
> > 	atomic_fetch_add_unless_relaxed()
> > 	atomic_inc_not_zero_relaxed()
> > 	atomic_dec_if_positive_relaxed()
> > 
> > , and we don't have the _acquire() and _release() variants for them
> > either, and if you don't define their fully-ordered version (e.g.
> > atomic_inc_not_zero()), atomic-arch-fallback.h will use read and cmpxchg
> > to implement them, and I think that's not what we want.
> 
> Okay. How can those be added? The atomic generation is pretty
> complicated.
> 

Yeah, I know ;-) I think you can just implement and define the fully-ordered
versions:

	arch_atomic_fetch_*_unless()
	arch_atomic_inc_not_zero()
	arch_atomic_dec_if_positive()

That should work.
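
For example, arch_atomic_fetch_add_unless() can basically be the existing
powerpc atomic_fetch_add_unless() with the arch_ prefix added, i.e. keep
the lwarx/stwcx. loop with the full entry/exit barriers. A rough sketch
(from memory, not even compile-tested):

	/* sketch only: the old powerpc atomic_fetch_add_unless(), renamed */
	static inline int arch_atomic_fetch_add_unless(atomic_t *v, int a, int u)
	{
		int t;

		asm volatile(
		PPC_ATOMIC_ENTRY_BARRIER
	"1:	lwarx	%0,0,%1		# arch_atomic_fetch_add_unless\n"
	"	cmpw	0,%0,%3\n"
	"	beq	2f\n"
	"	add	%0,%2,%0\n"
	"	stwcx.	%0,0,%1\n"
	"	bne-	1b\n"
		PPC_ATOMIC_EXIT_BARRIER
	"	subf	%0,%2,%0\n"
	"2:"
		: "=&r" (t)
		: "r" (&v->counter), "r" (a), "r" (u)
		: "cc", "memory");

		return t;
	}
	#define arch_atomic_fetch_add_unless arch_atomic_fetch_add_unless

With that defined, the fallbacks for arch_atomic_add_unless() and
arch_atomic_inc_not_zero() should also pick it up instead of the
read-and-cmpxchg loop.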

Rules of atomic generation, IIRC:

1.	If you define the _relaxed, _acquire, _release or fully-ordered
	version, atomic generation will use that version

2.	If you define _relaxed, atomic generation will use that and
	barriers to generate _acquire, _release and fully-ordered
	versions, unless they are already defined (as Rule #1 says)

3.	If you don't define _relaxed but do define the fully-ordered
	version, atomic generation will use the fully-ordered version
	as the _relaxed variant and generate the rest using Rule #2
	(a sketch of what Rule #2 produces follows below).
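
For instance, if an arch only provides arch_atomic_fetch_add_relaxed(),
Rule #2 generates an _acquire variant of roughly this shape (quoting from
memory, the generated header differs in details):

	static __always_inline int
	arch_atomic_fetch_add_acquire(int i, atomic_t *v)
	{
		int ret = arch_atomic_fetch_add_relaxed(i, v);
		__atomic_acquire_fence();
		return ret;
	}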

> > [...]
> >>  
> >>  #endif /* __KERNEL__ */
> >>  #endif /* _ASM_POWERPC_ATOMIC_H_ */
> >> diff --git a/arch/powerpc/include/asm/cmpxchg.h b/arch/powerpc/include/asm/cmpxchg.h
> >> index cf091c4c22e5..181f7e8b3281 100644
> >> --- a/arch/powerpc/include/asm/cmpxchg.h
> >> +++ b/arch/powerpc/include/asm/cmpxchg.h
> >> @@ -192,7 +192,7 @@ __xchg_relaxed(void *ptr, unsigned long x, unsigned int size)
> >>       		(unsigned long)_x_, sizeof(*(ptr))); 			     \
> >>    })
> >>  
> >> -#define xchg_relaxed(ptr, x)						\
> >> +#define arch_xchg_relaxed(ptr, x)					\
> >>  ({									\
> >>  	__typeof__(*(ptr)) _x_ = (x);					\
> >>  	(__typeof__(*(ptr))) __xchg_relaxed((ptr),			\
> >> @@ -448,35 +448,7 @@ __cmpxchg_relaxed(void *ptr, unsigned long old, unsigned long new,
> >>  	return old;
> >>  }
> >>  
> >> -static __always_inline unsigned long
> >> -__cmpxchg_acquire(void *ptr, unsigned long old, unsigned long new,
> >> -		  unsigned int size)
> >> -{
> >> -	switch (size) {
> >> -	case 1:
> >> -		return __cmpxchg_u8_acquire(ptr, old, new);
> >> -	case 2:
> >> -		return __cmpxchg_u16_acquire(ptr, old, new);
> >> -	case 4:
> >> -		return __cmpxchg_u32_acquire(ptr, old, new);
> >> -#ifdef CONFIG_PPC64
> >> -	case 8:
> >> -		return __cmpxchg_u64_acquire(ptr, old, new);
> >> -#endif
> >> -	}
> >> -	BUILD_BUG_ON_MSG(1, "Unsupported size for __cmpxchg_acquire");
> >> -	return old;
> >> -}
> >> -#define cmpxchg(ptr, o, n)						 \
> >> -  ({									 \
> >> -     __typeof__(*(ptr)) _o_ = (o);					 \
> >> -     __typeof__(*(ptr)) _n_ = (n);					 \
> >> -     (__typeof__(*(ptr))) __cmpxchg((ptr), (unsigned long)_o_,		 \
> >> -				    (unsigned long)_n_, sizeof(*(ptr))); \
> >> -  })
> >> -
> >> -
> > 
> > If you remove {atomic_}cmpxchg{,_acquire}() and use the versions
> > provided by atomic-arch-fallback.h, then a failed cmpxchg or
> > cmpxchg_acquire() will still result in a full barrier or an acquire
> > barrier after the RMW operation; the barrier is not necessary, and
> > probably this is not what we want?
> 
> Why is that done? That seems like a very subtle difference. Shouldn't
> the fallback version skip the barrier?
> 

The fallback version is something like:

	smp_mb__before_atomic();
	cmpxchg_relaxed();
	smp_mb__after_atomic();

So there will be a full barrier on PPC after the cmpxchg, no matter
whether the cmpxchg succeeds or not. And the fallback version cannot skip
the barrier, because there is no way for it to tell whether the
cmpxchg_relaxed() succeeded or not. That's why, in my previous version of
the PPC atomic variants support, I defined cmpxchg_acquire() in the asm
header instead of using the atomic generation.
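
For reference, with only arch_cmpxchg_relaxed() defined, the generated
acquire variant is a wrapper of roughly this shape (again from memory,
details may differ):

	#define __atomic_op_acquire(op, args...)				\
	({									\
		typeof(op##_relaxed(args)) __ret = op##_relaxed(args);		\
		__atomic_acquire_fence();					\
		__ret;								\
	})

	#define arch_cmpxchg_acquire(...)					\
		__atomic_op_acquire(arch_cmpxchg, __VA_ARGS__)

so the acquire barrier is emitted unconditionally, whether or not the
cmpxchg succeeded.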

That said, now that I think about it, maybe we can implement the fallback
version as:

	smp_mb__before_atomic();
	ret = cmpxchg_relaxed(ptr, old, new);
	if (old == ret)
		smp_mb__after_atomic();

This way, the fallback version can skip the unnecessary barrier when the
cmpxchg fails.
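
For the acquire case the wrapper would then look something like this
(purely hypothetical, not what the scripts generate today, and it only
makes sense for cmpxchg-style operations that return the old value):

	#define arch_cmpxchg_acquire(ptr, old, new)				\
	({									\
		__typeof__(*(ptr)) ___old = (old);				\
		__typeof__(*(ptr)) ___ret = arch_cmpxchg_relaxed((ptr), ___old, (new)); \
		if (___ret == ___old)						\
			__atomic_acquire_fence();				\
		___ret;								\
	})

It would presumably need a cmpxchg-specific template in the generation
scripts, though, since the generic wrapper is shared with operations that
don't report success or failure.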

Regards,
Boqun

> Thanks,
> Nick

Thread overview:
2020-11-11 11:07 [PATCH 0/3] powerpc: convert to use ARCH_ATOMIC Nicholas Piggin
2020-11-11 11:07 ` [PATCH 1/3] asm-generic/atomic64: Add support for ARCH_ATOMIC Nicholas Piggin
2020-11-11 13:39   ` Christophe Leroy
2020-11-11 13:44     ` Peter Zijlstra
2020-11-16  1:48       ` Nicholas Piggin
2020-11-11 11:07 ` [PATCH 2/3] powerpc/64s/iommu: don't use atomic_ function on atomic64_t type Nicholas Piggin
2020-11-11 11:07 ` [PATCH 3/3] powerpc: rewrite atomics to use ARCH_ATOMIC Nicholas Piggin
2020-11-11 19:07   ` kernel test robot
2020-11-13  5:05   ` kernel test robot
2020-11-13 15:30   ` Boqun Feng
2020-12-22  3:52     ` Nicholas Piggin
2020-12-23  2:45       ` Boqun Feng [this message]
2020-12-15 11:01 ` [PATCH 0/3] powerpc: convert to use ARCH_ATOMIC Michael Ellerman
