* [PATCH v2 for-4.17 1/3] arm64: Remove smp_mb() from arch_spin_is_locked()
@ 2018-03-28  9:46 Andrea Parri
From: Andrea Parri @ 2018-03-28  9:46 UTC
  To: Will Deacon, Catalin Marinas, Peter Zijlstra, Ingo Molnar
  Cc: Linus Torvalds, linux-kernel, Andrea Parri

Commit 38b850a73034f ("arm64: spinlock: order spin_{is_locked,unlock_wait}
against local locks") added an smp_mb() to arch_spin_is_locked(), in order
"to ensure that the lock value is always loaded after any other locks have
been taken by the current CPU", and reported one example (the "insane case"
in ipc/sem.c) relying on such guarantee.
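
For illustration, the pattern in question looks roughly as follows (the
lock names and the helper are made up for this sketch; this is not the
actual ipc/sem.c code):

	static void example_fast_path(spinlock_t *local_lock,
				      spinlock_t *global_lock)
	{
		spin_lock(local_lock);		/* lock taken by this CPU */
		/*
		 * The smp_mb() formerly inside spin_is_locked() ordered the
		 * store acquiring local_lock before the load of global_lock
		 * below; without it, two CPUs running the mirror-image
		 * sequence could both observe the other's lock as unlocked
		 * (a store-buffering outcome).
		 */
		if (spin_is_locked(global_lock)) {
			/* global lock held: fall back to the slow path */
		}
		spin_unlock(local_lock);
	}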

It is however understood (though not documented) that spin_is_locked() is
not required to provide such an ordering guarantee (a guarantee that is
currently _not_ provided by all implementations/architectures), and that
callers relying on such ordering should instead insert suitable memory
barriers before acting on the result of spin_is_locked().
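
Callers that do want that ordering make the barrier explicit, roughly as
in the sketch below (again with made-up names, not code from any of the
audited callsites):

	static int example_check(spinlock_t *local_lock,
				 spinlock_t *global_lock)
	{
		int held;

		spin_lock(local_lock);
		/* Order the store acquiring local_lock before the load below. */
		smp_mb();
		held = spin_is_locked(global_lock);
		spin_unlock(local_lock);
		return held;
	}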

Following a recent audit[1] of the callsites of {,raw_}spin_is_locked(),
which revealed that none of these callers rely on the ordering guarantee
anymore, this commit removes the leading smp_mb() from this primitive,
thus effectively reverting 38b850a73034f.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2

Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
---
 arch/arm64/include/asm/spinlock.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index ebdae15d665de..26c5bd7d88d8d 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -122,11 +122,6 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	/*
-	 * Ensure prior spin_lock operations to other locks have completed
-	 * on this CPU before we test whether "lock" is locked.
-	 */
-	smp_mb(); /* ^^^ */
 	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
 
-- 
2.7.4
