* [tip:locking/core] arch,s390: Convert smp_mb__*()
@ 2014-04-18 13:11 tip-bot for Peter Zijlstra
2014-04-23 7:37 ` Martin Schwidefsky
From: tip-bot for Peter Zijlstra @ 2014-04-18 13:11 UTC (permalink / raw)
To: linux-tip-commits
Cc: linux-kernel, hpa, mingo, will.deacon, torvalds, schwidefsky,
peterz, paulmck, gang.chen, heiko.carstens, tglx
Commit-ID: 0e530747c69f1e191f101a925bb4051894e5c7b0
Gitweb: http://git.kernel.org/tip/0e530747c69f1e191f101a925bb4051894e5c7b0
Author: Peter Zijlstra <peterz@infradead.org>
AuthorDate: Thu, 13 Mar 2014 19:00:35 +0100
Committer: Ingo Molnar <mingo@kernel.org>
CommitDate: Fri, 18 Apr 2014 14:20:42 +0200
arch,s390: Convert smp_mb__*()
As per the existing implementation, implement the new one using
smp_mb().
AFAICT the s390 compare-and-swap does imply a barrier; however, there
are some immediate ops that seem to be single-copy atomic and do not
imply a barrier. One such is the "ni" op (and-immediate), which is
used for the constant clear_bit implementation. Therefore s390 needs
full barriers for the {before,after} atomic ops.
Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Link: http://lkml.kernel.org/n/tip-kme5dz5hcobpnufnnkh1ech2@git.kernel.org
Cc: Chen Gang <gang.chen@asianux.com>
Cc: Heiko Carstens <heiko.carstens@de.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Martin Schwidefsky <schwidefsky@de.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux390@de.ibm.com
Cc: linux-kernel@vger.kernel.org
Cc: linux-s390@vger.kernel.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
arch/s390/include/asm/atomic.h | 5 -----
arch/s390/include/asm/barrier.h | 5 +++--
2 files changed, 3 insertions(+), 7 deletions(-)
diff --git a/arch/s390/include/asm/atomic.h b/arch/s390/include/asm/atomic.h
index 1d47061..fa934fe 100644
--- a/arch/s390/include/asm/atomic.h
+++ b/arch/s390/include/asm/atomic.h
@@ -412,9 +412,4 @@ static inline long long atomic64_dec_if_positive(atomic64_t *v)
#define atomic64_dec_and_test(_v) (atomic64_sub_return(1, _v) == 0)
#define atomic64_inc_not_zero(v) atomic64_add_unless((v), 1, 0)
-#define smp_mb__before_atomic_dec() smp_mb()
-#define smp_mb__after_atomic_dec() smp_mb()
-#define smp_mb__before_atomic_inc() smp_mb()
-#define smp_mb__after_atomic_inc() smp_mb()
-
#endif /* __ARCH_S390_ATOMIC__ */
diff --git a/arch/s390/include/asm/barrier.h b/arch/s390/include/asm/barrier.h
index 578680f..19ff956 100644
--- a/arch/s390/include/asm/barrier.h
+++ b/arch/s390/include/asm/barrier.h
@@ -27,8 +27,9 @@
#define smp_rmb() rmb()
#define smp_wmb() wmb()
#define smp_read_barrier_depends() read_barrier_depends()
-#define smp_mb__before_clear_bit() smp_mb()
-#define smp_mb__after_clear_bit() smp_mb()
+
+#define smp_mb__before_atomic() smp_mb()
+#define smp_mb__after_atomic() smp_mb()
#define set_mb(var, value) do { var = value; mb(); } while (0)
* Re: [tip:locking/core] arch,s390: Convert smp_mb__*()
2014-04-18 13:11 [tip:locking/core] arch,s390: Convert smp_mb__*() tip-bot for Peter Zijlstra
@ 2014-04-23 7:37 ` Martin Schwidefsky
From: Martin Schwidefsky @ 2014-04-23 7:37 UTC (permalink / raw)
To: mingo, hpa, linux-kernel, torvalds, will.deacon, peterz,
schwidefsky, paulmck, gang.chen, heiko.carstens, tglx
Cc: tipbot, linux-tip-commits
Hi Peter,
On Fri, 18 Apr 2014 06:11:38 -0700
tip-bot for Peter Zijlstra <tipbot@zytor.com> wrote:
> arch,s390: Convert smp_mb__*()
>
> As per the existing implementation, implement the new one using
> smp_mb().
>
> AFAICT the s390 compare-and-swap does imply a barrier; however, there
> are some immediate ops that seem to be single-copy atomic and do not
> imply a barrier. One such is the "ni" op (and-immediate), which is
> used for the constant clear_bit implementation. Therefore s390 needs
> full barriers for the {before,after} atomic ops.
Good catch; if ni/oi are used, this is required. What we want to
add on top is an #ifdef on CONFIG_HAVE_MARCH_Z196_FEATURES: with
smp_mb__* unconditionally defined as smp_mb(), older machines get
additional barriers even though their atomic ops are built from
compare-and-swap, which already acts as a barrier before and after
the operation.
We can do that with a patch on top of this one, so no hurry.
--
blue skies,
Martin.
"Reality continues to ruin my life." - Calvin.