* [PATCH memory-model 0/3] Kernel fixes to spin_is_locked()
@ 2018-05-14 23:01 Paul E. McKenney
  2018-05-14 23:01 ` Paul E. McKenney
                   ` (3 more replies)
  0 siblings, 4 replies; 8+ messages in thread
From: Paul E. McKenney @ 2018-05-14 23:01 UTC (permalink / raw)
  To: linux-kernel, linux-arch, mingo
  Cc: stern, parri.andrea, will.deacon, peterz, boqun.feng, npiggin,
	dhowells, j.alglave, luc.maranget, akiyks

Hello!

This series contains fixes to the kernel related to the semantics
of spin_is_locked(), all courtesy of Andrea Parri, and all ready for
inclusion in -tip:

1.	Document the semantics of spin_is_locked() by adding a docbook
	header comment.

2.	Remove smp_mb() from arch_spin_is_locked(), given that the
	new order-free spin_is_locked() semantics require no such barrier.

3.	Clean up comment and #ifndef for {,queued_}spin_is_locked().
	The comment was "XXX think about spin_is_locked", and I can
	attest that we have now done some serious thinking.  ;-)

							Thanx, Paul

------------------------------------------------------------------------

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [PATCH memory-model 1/3] locking: Document the semantics of spin_is_locked()
  2018-05-14 23:01 [PATCH memory-model 0/3] Kernel fixes to spin_is_locked() Paul E. McKenney
  2018-05-14 23:01 ` Paul E. McKenney
@ 2018-05-14 23:01 ` Paul E. McKenney
  2018-05-14 23:01   ` Paul E. McKenney
  2018-05-14 23:01 ` [PATCH memory-model 2/3] arm64: Remove smp_mb() from arch_spin_is_locked() Paul E. McKenney
  2018-05-14 23:01 ` [PATCH memory-model 3/3] locking: Clean up comment and #ifndef for {,queued_}spin_is_locked() Paul E. McKenney
  3 siblings, 1 reply; 8+ messages in thread
From: Paul E. McKenney @ 2018-05-14 23:01 UTC (permalink / raw)
  To: linux-kernel, linux-arch, mingo
  Cc: stern, parri.andrea, will.deacon, peterz, boqun.feng, npiggin,
	dhowells, j.alglave, luc.maranget, akiyks, Andrea Parri,
	Paul E. McKenney

From: Andrea Parri <andrea.parri@amarulasolutions.com>

There has been recurrent uncertainty concerning the semantics of
spin_is_locked(), likely because these semantics have remained
undocumented or because they have historically been linked to the
(likewise unclear) semantics of spin_unlock_wait().

A recent audit [1] of the callers of the primitive confirmed that none
of them rely on any particular ordering guarantee; document these
semantics by adding a docbook header to spin_is_locked().  Also,
describe the behavior specific to certain CONFIG_SMP=n builds.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
    https://marc.info/?l=linux-kernel&m=152042843808540&w=2
    https://marc.info/?l=linux-kernel&m=152043346110262&w=2
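
For illustration, a debug-only use that is compatible with these
ordering-free semantics might look as follows; this is a hedged sketch,
and the structure and function names (struct foo, foo_destroy()) are
hypothetical rather than taken from existing code:

	/*
	 * Hypothetical sketch: check that nobody still holds the
	 * object's lock before tearing the object down.  No memory
	 * ordering is assumed from spin_is_locked() itself, and the
	 * check degrades gracefully on CONFIG_SMP=n builds, where
	 * spin_is_locked() may always return 0.
	 */
	static void foo_destroy(struct foo *foo)
	{
		WARN_ON_ONCE(spin_is_locked(&foo->lock));
		kfree(foo);
	}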

Co-Developed-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Co-Developed-by: Alan Stern <stern@rowland.harvard.edu>
Co-Developed-by: David Howells <dhowells@redhat.com>
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Signed-off-by: Alan Stern <stern@rowland.harvard.edu>
Signed-off-by: David Howells <dhowells@redhat.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: Jade Alglave <j.alglave@ucl.ac.uk>
Cc: Luc Maranget <luc.maranget@inria.fr>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Akira Yokosawa <akiyks@gmail.com>
Cc: Ingo Molnar <mingo@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
Acked-by: Randy Dunlap <rdunlap@infradead.org>
---
 include/linux/spinlock.h | 18 ++++++++++++++++++
 1 file changed, 18 insertions(+)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 4894d322d258..1e8a46435838 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -380,6 +380,24 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
 	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
 })
 
+/**
+ * spin_is_locked() - Check whether a spinlock is locked.
+ * @lock: Pointer to the spinlock.
+ *
+ * This function is NOT required to provide any memory ordering
+ * guarantees; it could be used for debugging purposes or, when
+ * additional synchronization is needed, accompanied with other
+ * constructs (memory barriers) enforcing the synchronization.
+ *
+ * Returns: 1 if @lock is locked, 0 otherwise.
+ *
+ * Note that the function only tells you that the spinlock is
+ * seen to be locked, not that it is locked on your CPU.
+ *
+ * Further, on CONFIG_SMP=n builds with CONFIG_DEBUG_SPINLOCK=n,
+ * the return value is always 0 (see include/linux/spinlock_up.h).
+ * Therefore you should not rely heavily on the return value.
+ */
 static __always_inline int spin_is_locked(spinlock_t *lock)
 {
 	return raw_spin_is_locked(&lock->rlock);
-- 
2.5.2

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH memory-model 2/3] arm64: Remove smp_mb() from arch_spin_is_locked()
  2018-05-14 23:01 [PATCH memory-model 0/3] Kernel fixes to spin_is_locked() Paul E. McKenney
  2018-05-14 23:01 ` Paul E. McKenney
  2018-05-14 23:01 ` [PATCH memory-model 1/3] locking: Document the semantics of spin_is_locked() Paul E. McKenney
@ 2018-05-14 23:01 ` Paul E. McKenney
  2018-05-14 23:01   ` Paul E. McKenney
  2018-05-14 23:01 ` [PATCH memory-model 3/3] locking: Clean up comment and #ifndef for {,queued_}spin_is_locked() Paul E. McKenney
  3 siblings, 1 reply; 8+ messages in thread
From: Paul E. McKenney @ 2018-05-14 23:01 UTC (permalink / raw)
  To: linux-kernel, linux-arch, mingo
  Cc: stern, parri.andrea, will.deacon, peterz, boqun.feng, npiggin,
	dhowells, j.alglave, luc.maranget, akiyks, Andrea Parri,
	Catalin Marinas, Ingo Molnar, Paul E. McKenney, Linus Torvalds

From: Andrea Parri <andrea.parri@amarulasolutions.com>

Commit 38b850a73034f ("arm64: spinlock: order spin_{is_locked,unlock_wait}
against local locks") added an smp_mb() to arch_spin_is_locked(), in order
"to ensure that the lock value is always loaded after any other locks have
been taken by the current CPU", and reported one example (the "insane case"
in ipc/sem.c) relying on such a guarantee.

It is, however, understood that spin_is_locked() is not required to provide
such an ordering guarantee (a guarantee that is currently not provided by
all implementations/architectures), and that callers relying on such
ordering should instead insert suitable memory barriers before acting on
the result of spin_is_locked().

A recent audit [1] of the callers of {,raw_}spin_is_locked() revealed that
none of them rely on this ordering guarantee anymore, so this commit
removes the leading smp_mb() from the primitive, effectively reverting
38b850a73034f.

[1] https://marc.info/?l=linux-kernel&m=151981440005264&w=2
    https://marc.info/?l=linux-kernel&m=152042843808540&w=2
    https://marc.info/?l=linux-kernel&m=152043346110262&w=2
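
For illustration, a caller that did depend on the ordering formerly
provided by the arm64 implementation would now be expected to supply
the barrier itself.  A hedged sketch (the locks, structure, and
handle_contended_case() helper are hypothetical, not taken from
ipc/sem.c or any existing caller) might look like this:

	spin_lock(&foo->local_lock);
	/* ... updates covered by local_lock ... */

	/*
	 * Hypothetical: order the above lock acquisition before the
	 * lock-value load in spin_is_locked(); after this change the
	 * barrier must be supplied by the caller rather than by
	 * arch_spin_is_locked().
	 */
	smp_mb();
	if (spin_is_locked(&foo->global_lock))
		handle_contended_case(foo);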

Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 arch/arm64/include/asm/spinlock.h | 5 -----
 1 file changed, 5 deletions(-)

diff --git a/arch/arm64/include/asm/spinlock.h b/arch/arm64/include/asm/spinlock.h
index ebdae15d665d..26c5bd7d88d8 100644
--- a/arch/arm64/include/asm/spinlock.h
+++ b/arch/arm64/include/asm/spinlock.h
@@ -122,11 +122,6 @@ static inline int arch_spin_value_unlocked(arch_spinlock_t lock)
 
 static inline int arch_spin_is_locked(arch_spinlock_t *lock)
 {
-	/*
-	 * Ensure prior spin_lock operations to other locks have completed
-	 * on this CPU before we test whether "lock" is locked.
-	 */
-	smp_mb(); /* ^^^ */
 	return !arch_spin_value_unlocked(READ_ONCE(*lock));
 }
 
-- 
2.5.2

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [PATCH memory-model 3/3] locking: Clean up comment and #ifndef for {,queued_}spin_is_locked()
  2018-05-14 23:01 [PATCH memory-model 0/3] Kernel fixes to spin_is_locked() Paul E. McKenney
                   ` (2 preceding siblings ...)
  2018-05-14 23:01 ` [PATCH memory-model 2/3] arm64: Remove smp_mb() from arch_spin_is_locked() Paul E. McKenney
@ 2018-05-14 23:01 ` Paul E. McKenney
  2018-05-14 23:01   ` Paul E. McKenney
  3 siblings, 1 reply; 8+ messages in thread
From: Paul E. McKenney @ 2018-05-14 23:01 UTC (permalink / raw)
  To: linux-kernel, linux-arch, mingo
  Cc: stern, parri.andrea, will.deacon, peterz, boqun.feng, npiggin,
	dhowells, j.alglave, luc.maranget, akiyks, Andrea Parri,
	Ingo Molnar, Paul E. McKenney, Linus Torvalds

From: Andrea Parri <andrea.parri@amarulasolutions.com>

Removes "#ifndef queued_spin_is_locked" from the generic code: this is
unused and it's reasonable to conclude that it will continue to be unused.

Also removes the comment about spin_is_locked() from mutex_is_locked():
the comment remains valid but not particularly useful.

Suggested-by: Will Deacon <will.deacon@arm.com>
Signed-off-by: Andrea Parri <andrea.parri@amarulasolutions.com>
Acked-by: Will Deacon <will.deacon@arm.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/asm-generic/qspinlock.h | 2 --
 include/linux/mutex.h           | 3 ---
 2 files changed, 5 deletions(-)

diff --git a/include/asm-generic/qspinlock.h b/include/asm-generic/qspinlock.h
index b37b4ad7eb94..dc4e4ac4937e 100644
--- a/include/asm-generic/qspinlock.h
+++ b/include/asm-generic/qspinlock.h
@@ -26,7 +26,6 @@
  * @lock: Pointer to queued spinlock structure
  * Return: 1 if it is locked, 0 otherwise
  */
-#ifndef queued_spin_is_locked
 static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 {
 	/*
@@ -35,7 +34,6 @@ static __always_inline int queued_spin_is_locked(struct qspinlock *lock)
 	 */
 	return atomic_read(&lock->val);
 }
-#endif
 
 /**
  * queued_spin_value_unlocked - is the spinlock structure unlocked?
diff --git a/include/linux/mutex.h b/include/linux/mutex.h
index 14bc0d5d0ee5..3093dd162424 100644
--- a/include/linux/mutex.h
+++ b/include/linux/mutex.h
@@ -146,9 +146,6 @@ extern void __mutex_init(struct mutex *lock, const char *name,
  */
 static inline bool mutex_is_locked(struct mutex *lock)
 {
-	/*
-	 * XXX think about spin_is_locked
-	 */
 	return __mutex_owner(lock) != NULL;
 }
 
-- 
2.5.2

^ permalink raw reply related	[flat|nested] 8+ messages in thread
