* [PATCH memory-model 0/3] Kernel fixes to spin_is_locked()
@ 2018-05-14 23:01 Paul E. McKenney
  2018-05-14 23:01 ` [PATCH memory-model 1/3] locking: Document the semantics of spin_is_locked() Paul E. McKenney
                   ` (2 more replies)
  0 siblings, 3 replies; 7+ messages in thread
From: Paul E. McKenney @ 2018-05-14 23:01 UTC (permalink / raw)
  To: linux-kernel, linux-arch, mingo
  Cc: stern, parri.andrea, will.deacon, peterz, boqun.feng, npiggin,
	dhowells, j.alglave, luc.maranget, akiyks

Hello!

This series contains fixes to the kernel related to the semantics
of spin_is_locked(), all courtesy of Andrea Parri, and all ready for
inclusion in -tip:

1.	Document the semantics of spin_is_locked() by adding a docbook
	header comment.

2.	Remove smp_mb() from arch_spin_is_locked(), given that the
	new order-free spin_is_locked() semantics require no such barrier.

3.	Clean up comment and #ifndef for {,queued_}spin_is_locked().
	The comment was "XXX think about spin_is_locked", and I can
	attest that we have now done some serious thinking.  ;-)

							Thanx, Paul

------------------------------------------------------------------------


* [PATCH memory-model 1/3] locking: Document the semantics of spin_is_locked()
  2018-05-14 23:01 [PATCH memory-model 0/3] Kernel fixes to spin_is_locked() Paul E. McKenney
@ 2018-05-14 23:01 ` Paul E. McKenney
  2018-05-15  6:25   ` [tip:locking/core] locking/spinlocks: " tip-bot for Andrea Parri
  2018-05-14 23:01 ` [PATCH memory-model 2/3] arm64: Remove smp_mb() from arch_spin_is_locked() Paul E. McKenney
  2018-05-14 23:01 ` [PATCH memory-model 3/3] locking: Clean up comment and #ifndef for {,queued_}spin_is_locked() Paul E. McKenney
  2 siblings, 1 reply; 7+ messages in thread
From: Paul E. McKenney @ 2018-05-14 23:01 UTC (permalink / raw)
  To: linux-kernel, linux-arch, mingo
  Cc: stern, parri.andrea, will.deacon, peterz, boqun.feng, npiggin,
	dhowells, j.alglave, luc.maranget, akiyks, Andrea Parri,
	Paul E. McKenney

From: Andrea Parri <andrea.parri@amarulasolutions.com>

There has been recurrent uncertainty concerning the semantics of
spin_is_locked(), likely a consequence of the fact that these semantics
have remained undocumented and have historically been linked to the
(likewise unclear) semantics of spin_unlock_wait().

A recent audit [1] of the primitive's callers confirmed that none of
them rely on any particular ordering guarantees; document these
semantics by adding a docbook header to spin_is_locked(). Also describe
the behaviors specific to certain CONFIG_SMP=n builds.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
    https://marc.info/?l=linux-kernel&m=152042843808540&w=2
    https://marc.info/?l=linux-kernel&m=152043346110262&w=2

Co-Developed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Co-Developed-by: Alan Stern <stern@rowland.harvard.edu>
Co-Developed-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Jade Alglave <j.alglave@ucl.ac.uk>
Cc: Luc Maranget <luc.maranget@inria.fr>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Akira Yokosawa <akiyks@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
---
 include/linux/spinlock.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 4894d322d258..1e8a46435838 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -380,6 +380,24 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
 	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
 })
 
+/**
+ * spin_is_locked() - Check whether a spinlock is locked.
+ * @lock: Pointer to the spinlock.
+ *
+ * This function is NOT required to provide any memory ordering
+ * guarantees; it could be used for debugging purposes or, when
+ * additional synchronization is needed, accompanied with other
+ * constructs (memory barriers) enforcing the synchronization.
+ *
+ * Returns: 1 if @lock is locked, 0 otherwise.
+ *
+ * Note that the function only tells you that the spinlock is
+ * seen to be locked, not that it is locked on your CPU.
+ *
+ * Further, on CONFIG_SMP=n builds with CONFIG_DEBUG_SPINLOCK=n,
+ * the return value is always 0 (see include/linux/spinlock_up.h).
+ * Therefore you should not rely heavily on the return value.
+ */
 static __always_inline int spin_is_locked(spinlock_t *lock)
 {
 	return raw_spin_is_locked(&lock->rlock);
-- 
2.5.2
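
To illustrate the documented semantics: a caller that does need ordering
must provide it explicitly, since spin_is_locked() itself guarantees none.
A minimal sketch follows (not part of the patch; the function name and
comment are illustrative only):

	/*
	 * Hypothetical example: order this CPU's prior accesses before
	 * the lock-state load, which spin_is_locked() alone does not do.
	 */
	static inline int lock_seen_held_after_prior_accesses(spinlock_t *lock)
	{
		smp_mb();	/* order prior accesses against the load below */
		return spin_is_locked(lock);
	}

Note also that on CONFIG_SMP=n builds with CONFIG_DEBUG_SPINLOCK=n the
primitive compiles down to a constant 0, which is why the new docbook
header warns against relying heavily on the return value.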


* [PATCH memory-model 2/3] arm64: Remove smp_mb() from arch_spin_is_locked()
  2018-05-14 23:01 [PATCH memory-model 0/3] Kernel fixes to spin_is_locked() Paul E. McKenney
  2018-05-14 23:01 ` [PATCH memory-model 1/3] locking: Document the semantics of spin_is_locked() Paul E. McKenney
@ 2018-05-14 23:01 ` Paul E. McKenney
  2018-05-15  6:26   ` [tip:locking/core] locking/spinlocks/arm64: " tip-bot for Andrea Parri
  2018-05-14 23:01 ` [PATCH memory-model 3/3] locking: Clean up comment and #ifndef for {,queued_}spin_is_locked() Paul E. McKenney
  2 siblings, 1 reply; 7+ messages in thread
From: Paul E. McKenney @ 2018-05-14 23:01 UTC (permalink / raw)
  To: linux-kernel, linux-arch, mingo
  Cc: stern, parri.andrea, will.deacon, peterz, boqun.feng, npiggin,
	dhowells, j.alglave, luc.maranget, akiyks, Andrea Parri,
	Catalin Marinas, Ingo Molnar, Paul E. McKenney, Linus Torvalds

From: Andrea Parri <andrea.parri@amarulasolutions.com>

Commit 38b850a73034f ("arm64: spinlock: order spin_{is_locked,unlock_wait}
against local locks") added an smp_mb() to arch_spin_is_locked(), in order
"to ensure that the lock value is always loaded after any other locks have
been taken by the current CPU", and reported one example (the "insane case"
in ipc/sem.c) relying on such a guarantee.

It is, however, understood that spin_is_locked() is not required to provide
such an ordering guarantee (a guarantee currently not provided by all
implementations/architectures), and that callers relying on such ordering
should instead insert suitable memory barriers before acting on the result
of spin_is_locked().

A recent audit [1] of the callers of {,raw_}spin_is_locked() revealed that
none of them rely on the ordering guarantee anymore; this commit therefore
removes the leading smp_mb() from the primitive, reverting 38b850a73034f.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
    https://marc.info/?l=linux-kernel&m=152042843808540&w=2
    https://marc.info/?l=linux-kernel&m=152043346110262&w=2

Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 arch/arm64/include/asm/spinlock.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index ebdae15d665d..26c5bd7d88d8 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -122,11 +122,6 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	/*
-	 * Ensure prior spin_lock operations to other locks have completed
-	 * on this CPU before we test whether "lock" is locked.
-	 */
-	smp_mb(); /* ^^^ */
 	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
 
-- 
2.5.2
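
For reference, here is a minimal sketch of the kind of pattern the removed
barrier used to serve, loosely modeled on the ipc/sem.c case cited above
(all names are hypothetical; this is not code from the kernel):

	/*
	 * After acquiring a coarse-grained lock, check whether some
	 * fine-grained lock is still held by another CPU.  Ordering the
	 * acquisition above against the lock-state load below is now
	 * the caller's responsibility, hence the explicit smp_mb().
	 */
	spin_lock(&coarse_lock);
	smp_mb();	/* no longer implied by spin_is_locked() */
	if (spin_is_locked(&fine_lock))
		wait_for_fine_lock_holder();	/* hypothetical helper */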


* [PATCH memory-model 3/3] locking: Clean up comment and #ifndef for {,queued_}spin_is_locked()
  2018-05-14 23:01 [PATCH memory-model 0/3] Kernel fixes to spin_is_locked() Paul E. McKenney
  2018-05-14 23:01 ` [PATCH memory-model 1/3] locking: Document the semantics of spin_is_locked() Paul E. McKenney
  2018-05-14 23:01 ` [PATCH memory-model 2/3] arm64: Remove smp_mb() from arch_spin_is_locked() Paul E. McKenney
@ 2018-05-14 23:01 ` Paul E. McKenney
  2018-05-15  6:26   ` [tip:locking/core] locking/spinlocks: " tip-bot for Andrea Parri
  2 siblings, 1 reply; 7+ messages in thread
From: Paul E. McKenney @ 2018-05-14 23:01 UTC (permalink / raw)
  To: linux-kernel, linux-arch, mingo
  Cc: stern, parri.andrea, will.deacon, peterz, boqun.feng, npiggin,
	dhowells, j.alglave, luc.maranget, akiyks, Andrea Parri,
	Ingo Molnar, Paul E. McKenney, Linus Torvalds

From: Andrea Parri <andrea.parri@amarulasolutions.com>

Removes "#ifndef queued_spin_is_locked" from the generic code: this is
unused and it's reasonable to conclude that it will continue to be unused.

Also removes the comment about spin_is_locked() from mutex_is_locked():
the comment remains valid but not particularly useful.

Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/asm-generic/qspinlock.h | 2 --
 include/linux/mutex.h           | 3 ---
 2 files changed, 5 deletions(-)

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index b37b4ad7eb94..dc4e4ac4937e 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -26,7 +26,6 @@
  * @lock: Pointer to queued spinlock structure
  * Return: 1 if it is locked, 0 otherwise
  */
-#ifndef queued_spin_is_locked
 static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 {
 	/*
@@ -35,7 +34,6 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 	 */
 	return atomic_read(&lock->val);
 }
-#endif
 
 /**
  * queued_spin_value_unlocked - is the spinlock structure unlocked?
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 14bc0d5d0ee5..3093dd162424 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -146,9 +146,6 @@ extern void __mutex_init(struct mutex *lock, const char *name,
  */
 static inline bool mutex_is_locked(struct mutex *lock)
 {
-	/*
-	 * XXX think about spin_is_locked
-	 */
 	return __mutex_owner(lock) != NULL;
 }
 
-- 
2.5.2
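
For context, the removed "#ifndef" guard existed to let an architecture
override the generic queued_spin_is_locked(); no architecture ever did.
A sketch of how such an override would have worked (the arch-specific
body is hypothetical):

	/* In some arch's asm/qspinlock.h, before the generic include: */
	static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
	{
		return my_arch_lock_test(lock);	/* hypothetical */
	}
	#define queued_spin_is_locked queued_spin_is_locked

	#include <asm-generic/qspinlock.h>	/* #ifndef would skip the generic copy */

Since no such override exists, the guard (and the stale XXX comment in
mutex_is_locked()) can simply go.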


* [tip:locking/core] locking/spinlocks: Document the semantics of spin_is_locked()
  2018-05-14 23:01 ` [PATCH memory-model 1/3] locking: Document the semantics of spin_is_locked() Paul E. McKenney
@ 2018-05-15  6:25   ` tip-bot for Andrea Parri
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Andrea Parri @ 2018-05-15  6:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: torvalds, peterz, j.alglave, dhowells, andrea.parri, will.deacon,
	akiyks, npiggin, akpm, paulmck, stern, hpa, boqun.feng, mingo,
	luc.maranget, tglx, linux-kernel, rdunlap

Commit-ID:  b7e4aadef28f217de8907eec60a964328797a2be
Gitweb:     https://git.kernel.org/tip/b7e4aadef28f217de8907eec60a964328797a2be
Author:     Andrea Parri <andrea.parri@amarulasolutions.com>
AuthorDate: Mon, 14 May 2018 16:01:27 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 15 May 2018 08:11:15 +0200

locking/spinlocks: Document the semantics of spin_is_locked()

There has been recurrent uncertainty concerning the semantics of
spin_is_locked(), likely a consequence of the fact that these semantics
have remained undocumented and have historically been linked to the
(likewise unclear) semantics of spin_unlock_wait().

A recent audit [1] of the primitive's callers confirmed that none of
them rely on any particular ordering guarantees; document these
semantics by adding a docbook header to spin_is_locked(). Also describe
the behaviors specific to certain CONFIG_SMP=n builds.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
    https://marc.info/?l=linux-kernel&m=152042843808540&w=2
    https://marc.info/?l=linux-kernel&m=152043346110262&w=2

Co-Developed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Co-Developed-by: Alan Stern <stern@rowland.harvard.edu>
Co-Developed-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: David Howells <dhowells@redhat.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
Cc: Akira Yokosawa <akiyks@gmail.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Jade Alglave <j.alglave@ucl.ac.uk>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Luc Maranget <luc.maranget@inria.fr>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Will Deacon <will.deacon@arm.com>
Cc: linux-arch@vger.kernel.org
Cc: parri.andrea@gmail.com
Link: http://lkml.kernel.org/r/1526338889-7003-1-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/linux/spinlock.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 4894d322d258..1e8a46435838 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -380,6 +380,24 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
 	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
 })
 
+/**
+ * spin_is_locked() - Check whether a spinlock is locked.
+ * @lock: Pointer to the spinlock.
+ *
+ * This function is NOT required to provide any memory ordering
+ * guarantees; it could be used for debugging purposes or, when
+ * additional synchronization is needed, accompanied with other
+ * constructs (memory barriers) enforcing the synchronization.
+ *
+ * Returns: 1 if @lock is locked, 0 otherwise.
+ *
+ * Note that the function only tells you that the spinlock is
+ * seen to be locked, not that it is locked on your CPU.
+ *
+ * Further, on CONFIG_SMP=n builds with CONFIG_DEBUG_SPINLOCK=n,
+ * the return value is always 0 (see include/linux/spinlock_up.h).
+ * Therefore you should not rely heavily on the return value.
+ */
 static __always_inline int spin_is_locked(spinlock_t *lock)
 {
 	return raw_spin_is_locked(&lock->rlock);


* [tip:locking/core] locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked()
  2018-05-14 23:01 ` [PATCH memory-model 2/3] arm64: Remove smp_mb() from arch_spin_is_locked() Paul E. McKenney
@ 2018-05-15  6:26   ` tip-bot for Andrea Parri
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Andrea Parri @ 2018-05-15  6:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: catalin.marinas, andrea.parri, hpa, will.deacon, peterz, mingo,
	akpm, torvalds, linux-kernel, paulmck, tglx

Commit-ID:  c6f5d02b6a0fb91be5d656885ce02cf28952181d
Gitweb:     https://git.kernel.org/tip/c6f5d02b6a0fb91be5d656885ce02cf28952181d
Author:     Andrea Parri <andrea.parri@amarulasolutions.com>
AuthorDate: Mon, 14 May 2018 16:01:28 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 15 May 2018 08:11:15 +0200

locking/spinlocks/arm64: Remove smp_mb() from arch_spin_is_locked()

The following commit:

  38b850a73034f ("arm64: spinlock: order spin_{is_locked,unlock_wait} against local locks")

... added an smp_mb() to arch_spin_is_locked(), in order
"to ensure that the lock value is always loaded after any other locks have
been taken by the current CPU", and reported one example (the "insane case"
in ipc/sem.c) relying on such a guarantee.

It is, however, understood that spin_is_locked() is not required to provide
such an ordering guarantee (a guarantee currently not provided by all
implementations/architectures), and that callers relying on such ordering
should instead insert suitable memory barriers before acting on the result
of spin_is_locked().

A recent audit [1] of the callers of {,raw_}spin_is_locked() revealed that
none of them rely on the ordering guarantee anymore; this commit therefore
removes the leading smp_mb() from the primitive, reverting 38b850a73034f.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
    https://marc.info/?l=linux-kernel&m=152042843808540&w=2
    https://marc.info/?l=linux-kernel&m=152043346110262&w=2

Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Cc: stern@rowland.harvard.edu
Link: http://lkml.kernel.org/r/1526338889-7003-2-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 arch/arm64/include/asm/spinlock.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index ebdae15d665d..26c5bd7d88d8 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -122,11 +122,6 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	/*
-	 * Ensure prior spin_lock operations to other locks have completed
-	 * on this CPU before we test whether "lock" is locked.
-	 */
-	smp_mb(); /* ^^^ */
 	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
 


* [tip:locking/core] locking/spinlocks: Clean up comment and #ifndef for {,queued_}spin_is_locked()
  2018-05-14 23:01 ` [PATCH memory-model 3/3] locking: Clean up comment and #ifndef for {,queued_}spin_is_locked() Paul E. McKenney
@ 2018-05-15  6:26   ` tip-bot for Andrea Parri
  0 siblings, 0 replies; 7+ messages in thread
From: tip-bot for Andrea Parri @ 2018-05-15  6:26 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: tglx, peterz, will.deacon, mingo, linux-kernel, hpa, akpm,
	andrea.parri, paulmck, torvalds

Commit-ID:  1362ae43c503a4e333ab6948fc4c6e0e794e1558
Gitweb:     https://git.kernel.org/tip/1362ae43c503a4e333ab6948fc4c6e0e794e1558
Author:     Andrea Parri <andrea.parri@amarulasolutions.com>
AuthorDate: Mon, 14 May 2018 16:01:29 -0700
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Tue, 15 May 2018 08:11:15 +0200

locking/spinlocks: Clean up comment and #ifndef for {,queued_}spin_is_locked()

Removes "#ifndef queued_spin_is_locked" from the generic code: this is
unused and it's reasonable to conclude that it will continue to be unused.

Also removes the comment about spin_is_locked() from mutex_is_locked():
the comment remains valid but not particularly useful.

Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: akiyks@gmail.com
Cc: boqun.feng@gmail.com
Cc: dhowells@redhat.com
Cc: j.alglave@ucl.ac.uk
Cc: linux-arch@vger.kernel.org
Cc: luc.maranget@inria.fr
Cc: npiggin@gmail.com
Cc: parri.andrea@gmail.com
Cc: stern@rowland.harvard.edu
Link: http://lkml.kernel.org/r/1526338889-7003-3-git-send-email-paulmck@linux.vnet.ibm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 include/asm-generic/qspinlock.h | 2 --
 include/linux/mutex.h           | 3 ---
 2 files changed, 5 deletions(-)

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index a8ed0a352d75..9cc457597ddf 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -26,7 +26,6 @@
  * @lock: Pointer to queued spinlock structure
  * Return: 1 if it is locked, 0 otherwise
  */
-#ifndef queued_spin_is_locked
 static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 {
 	/*
@@ -35,7 +34,6 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 	 */
 	return atomic_read(&lock->val);
 }
-#endif
 
 /**
  * queued_spin_value_unlocked - is the spinlock structure unlocked?
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 14bc0d5d0ee5..3093dd162424 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -146,9 +146,6 @@ extern void __mutex_init(struct mutex *lock, const char *name,
  */
 static inline bool mutex_is_locked(struct mutex *lock)
 {
-	/*
-	 * XXX think about spin_is_locked
-	 */
 	return __mutex_owner(lock) != NULL;
 }
 

