* [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
@ 2018-02-28 10:39 Andrea Parri
  2018-02-28 10:56 ` Will Deacon
  0 siblings, 1 reply; 10+ messages in thread
From: Andrea Parri @ 2018-02-28 10:39 UTC (permalink / raw)
  To: linux-kernel
  Cc: Andrea Parri, Alan Stern, Will Deacon, Peter Zijlstra,
	Boqun Feng, Nicholas Piggin, David Howells, Jade Alglave,
	Luc Maranget, Paul E. McKenney, Akira Yokosawa

There appeared to be a certain, recurrent uncertainty concerning the
semantics of spin_is_locked(), likely a consequence of the fact that
this semantics remains undocumented or that it has been historically
linked to the (likewise unclear) semantics of spin_unlock_wait().

Document this semantics.

Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
Cc: Alan Stern <stern@rowland.harvard.edu>
Cc: Will Deacon <will.deacon@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Boqun Feng <boqun.feng@gmail.com>
Cc: Nicholas Piggin <npiggin@gmail.com>
Cc: David Howells <dhowells@redhat.com>
Cc: Jade Alglave <j.alglave@ucl.ac.uk>
Cc: Luc Maranget <luc.maranget@inria.fr>
Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
Cc: Akira Yokosawa <akiyks@gmail.com>
---
 include/linux/spinlock.h | 11 +++++++++++
 1 file changed, 11 insertions(+)

diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
index 4894d322d2584..2639fdc9a916c 100644
--- a/include/linux/spinlock.h
+++ b/include/linux/spinlock.h
@@ -380,6 +380,17 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
 	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
 })
 
+/**
+ * spin_is_locked() - Check whether a spinlock is locked.
+ * @lock: Pointer to the spinlock.
+ *
+ * This function is NOT required to provide any memory ordering
+ * guarantees; it could be used for debugging purposes or, when
+ * additional synchronization is needed, accompanied with other
+ * constructs (memory barriers) enforcing the synchronization.
+ *
+ * Return: 1, if @lock is (found to be) locked; 0, otherwise.
+ */
 static __always_inline int spin_is_locked(spinlock_t *lock)
 {
 	return raw_spin_is_locked(&lock->rlock);
-- 
2.7.4
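
For illustration only (not part of the patch): the kind of use the proposed
kernel-doc has in mind is an assertion-style, debugging-only check, with no
memory-ordering guarantee assumed.  The structure and function below are
hypothetical:

struct foo {
	spinlock_t lock;
	int state;
};

static void foo_update_state(struct foo *f)
{
	/* Debugging check only; no ordering is relied upon. */
	WARN_ON_ONCE(!spin_is_locked(&f->lock));
	f->state++;		/* state protected by f->lock */
}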


* Re: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
  2018-02-28 10:39 [PATCH] Documentation/locking: Document the semantics of spin_is_locked() Andrea Parri
@ 2018-02-28 10:56 ` Will Deacon
  2018-02-28 11:24   ` Andrea Parri
  2018-02-28 15:16   ` Alan Stern
  0 siblings, 2 replies; 10+ messages in thread
From: Will Deacon @ 2018-02-28 10:56 UTC (permalink / raw)
  To: Andrea Parri
  Cc: linux-kernel, Alan Stern, Peter Zijlstra, Boqun Feng,
	Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget,
	Paul E. McKenney, Akira Yokosawa

On Wed, Feb 28, 2018 at 11:39:32AM +0100, Andrea Parri wrote:
> There appeared to be a certain, recurrent uncertainty concerning the
> semantics of spin_is_locked(), likely a consequence of the fact that
> this semantics remains undocumented or that it has been historically
> linked to the (likewise unclear) semantics of spin_unlock_wait().
> 
> Document this semantics.
> 
> Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> Cc: Alan Stern <stern@rowland.harvard.edu>
> Cc: Will Deacon <will.deacon@arm.com>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Boqun Feng <boqun.feng@gmail.com>
> Cc: Nicholas Piggin <npiggin@gmail.com>
> Cc: David Howells <dhowells@redhat.com>
> Cc: Jade Alglave <j.alglave@ucl.ac.uk>
> Cc: Luc Maranget <luc.maranget@inria.fr>
> Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> Cc: Akira Yokosawa <akiyks@gmail.com>
> ---
>  include/linux/spinlock.h | 11 +++++++++++
>  1 file changed, 11 insertions(+)
> 
> diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> index 4894d322d2584..2639fdc9a916c 100644
> --- a/include/linux/spinlock.h
> +++ b/include/linux/spinlock.h
> @@ -380,6 +380,17 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
>  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
>  })
>  
> +/**
> + * spin_is_locked() - Check whether a spinlock is locked.
> + * @lock: Pointer to the spinlock.
> + *
> + * This function is NOT required to provide any memory ordering
> + * guarantees; it could be used for debugging purposes or, when
> + * additional synchronization is needed, accompanied with other
> + * constructs (memory barriers) enforcing the synchronization.
> + *
> + * Return: 1, if @lock is (found to be) locked; 0, otherwise.
> + */

I also don't think this is quite right, since the spin_is_locked check
must be ordered after all prior lock acquisitions (to any lock) on the same
CPU. That's why we have an smp_mb() in there on arm64 (see 38b850a73034f).
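
For reference, the arm64 implementation after that commit looks roughly like
the sketch below (not verbatim; see 38b850a73034f for the authoritative code).
The smp_mb() is what orders the read of the lock word after this CPU's prior
lock acquisitions:

static inline int arch_spin_is_locked(arch_spinlock_t *lock)
{
	/*
	 * Ensure prior spin_lock() operations (to any lock) on this CPU
	 * are ordered before the read of *lock below.
	 */
	smp_mb();
	return !arch_spin_value_unlocked(READ_ONCE(*lock));
}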

So this is a change in semantics and we need to audit the users before
proceeding. We should also keep spin_is_locked consistent with the versions
for mutex, rwsem, bit_spin.

Will


* Re: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
  2018-02-28 10:56 ` Will Deacon
@ 2018-02-28 11:24   ` Andrea Parri
  2018-02-28 11:34     ` Will Deacon
  2018-02-28 15:16   ` Alan Stern
  1 sibling, 1 reply; 10+ messages in thread
From: Andrea Parri @ 2018-02-28 11:24 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-kernel, Alan Stern, Peter Zijlstra, Boqun Feng,
	Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget,
	Paul E. McKenney, Akira Yokosawa

On Wed, Feb 28, 2018 at 10:56:32AM +0000, Will Deacon wrote:
> On Wed, Feb 28, 2018 at 11:39:32AM +0100, Andrea Parri wrote:
> > There appeared to be a certain, recurrent uncertainty concerning the
> > semantics of spin_is_locked(), likely a consequence of the fact that
> > this semantics remains undocumented or that it has been historically
> > linked to the (likewise unclear) semantics of spin_unlock_wait().
> > 
> > Document this semantics.
> > 
> > Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> > Cc: Alan Stern <stern@rowland.harvard.edu>
> > Cc: Will Deacon <will.deacon@arm.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Boqun Feng <boqun.feng@gmail.com>
> > Cc: Nicholas Piggin <npiggin@gmail.com>
> > Cc: David Howells <dhowells@redhat.com>
> > Cc: Jade Alglave <j.alglave@ucl.ac.uk>
> > Cc: Luc Maranget <luc.maranget@inria.fr>
> > Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > Cc: Akira Yokosawa <akiyks@gmail.com>
> > ---
> >  include/linux/spinlock.h | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> > 
> > diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> > index 4894d322d2584..2639fdc9a916c 100644
> > --- a/include/linux/spinlock.h
> > +++ b/include/linux/spinlock.h
> > @@ -380,6 +380,17 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
> >  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
> >  })
> >  
> > +/**
> > + * spin_is_locked() - Check whether a spinlock is locked.
> > + * @lock: Pointer to the spinlock.
> > + *
> > + * This function is NOT required to provide any memory ordering
> > + * guarantees; it could be used for debugging purposes or, when
> > + * additional synchronization is needed, accompanied with other
> > + * constructs (memory barriers) enforcing the synchronization.
> > + *
> > + * Return: 1, if @lock is (found to be) locked; 0, otherwise.
> > + */
> 
> I also don't think this is quite right, since the spin_is_locked check
> must be ordered after all prior lock acquisitions (to any lock) on the same
> CPU. That's why we have an smp_mb() in there on arm64 (see 38b850a73034f).

So, arm64 (and powerpc) complies with the semantics I _have_ in mind ...


> 
> So this is a change in semantics and we need to audit the users before
> proceeding. We should also keep spin_is_locked consistent with the versions
> for mutex, rwsem, bit_spin.

Well, strictly speaking, it isn't (given that the current semantics is,
as reported above, currently undocumented); for the same reason, cases
relying on anything more than _nothing_ (if any) are already broken ...

  Andrea


> 
> Will


* Re: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
  2018-02-28 11:24   ` Andrea Parri
@ 2018-02-28 11:34     ` Will Deacon
  2018-02-28 12:15       ` Andrea Parri
  0 siblings, 1 reply; 10+ messages in thread
From: Will Deacon @ 2018-02-28 11:34 UTC (permalink / raw)
  To: Andrea Parri
  Cc: linux-kernel, Alan Stern, Peter Zijlstra, Boqun Feng,
	Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget,
	Paul E. McKenney, Akira Yokosawa

On Wed, Feb 28, 2018 at 12:24:03PM +0100, Andrea Parri wrote:
> On Wed, Feb 28, 2018 at 10:56:32AM +0000, Will Deacon wrote:
> > On Wed, Feb 28, 2018 at 11:39:32AM +0100, Andrea Parri wrote:
> > > There appeared to be a certain, recurrent uncertainty concerning the
> > > semantics of spin_is_locked(), likely a consequence of the fact that
> > > this semantics remains undocumented or that it has been historically
> > > linked to the (likewise unclear) semantics of spin_unlock_wait().
> > > 
> > > Document this semantics.
> > > 
> > > Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> > > Cc: Alan Stern <stern@rowland.harvard.edu>
> > > Cc: Will Deacon <will.deacon@arm.com>
> > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > Cc: Boqun Feng <boqun.feng@gmail.com>
> > > Cc: Nicholas Piggin <npiggin@gmail.com>
> > > Cc: David Howells <dhowells@redhat.com>
> > > Cc: Jade Alglave <j.alglave@ucl.ac.uk>
> > > Cc: Luc Maranget <luc.maranget@inria.fr>
> > > Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > > Cc: Akira Yokosawa <akiyks@gmail.com>
> > > ---
> > >  include/linux/spinlock.h | 11 +++++++++++
> > >  1 file changed, 11 insertions(+)
> > > 
> > > diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> > > index 4894d322d2584..2639fdc9a916c 100644
> > > --- a/include/linux/spinlock.h
> > > +++ b/include/linux/spinlock.h
> > > @@ -380,6 +380,17 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
> > >  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
> > >  })
> > >  
> > > +/**
> > > + * spin_is_locked() - Check whether a spinlock is locked.
> > > + * @lock: Pointer to the spinlock.
> > > + *
> > > + * This function is NOT required to provide any memory ordering
> > > + * guarantees; it could be used for debugging purposes or, when
> > > + * additional synchronization is needed, accompanied with other
> > > + * constructs (memory barriers) enforcing the synchronization.
> > > + *
> > > + * Return: 1, if @lock is (found to be) locked; 0, otherwise.
> > > + */
> > 
> > I also don't think this is quite right, since the spin_is_locked check
> > must be ordered after all prior lock acquisitions (to any lock) on the same
> > CPU. That's why we have an smp_mb() in there on arm64 (see 38b850a73034f).
> 
> So, arm64 (and powerpc) complies with the semantics I _have_ in mind ...

Sure, but they're offering more than that at present. If I can remove the
smp_mb() in our spin_is_locked implementation, I will, but we need to know
what that will break even if you consider that code to be broken because it
relies on something undocumented.

> > So this is a change in semantics and we need to audit the users before
> > proceeding. We should also keep spin_is_locked consistent with the versions
> > for mutex, rwsem, bit_spin.
> 
> Well, strictly speaking, it isn't (given that the current semantics is,
> as reported above, currently undocumented); for the same reason, cases
> relying on anything more than _nothing_ (if any) are already broken ...

I suppose it depends on whether you consider the code or the documentation
to be authoritative. I tend to err on the side of the former for the kernel.
To be clear: I'm perfectly ok relaxing the semantics, but only if there's
some evidence that you've looked at the callsites and determined that they
won't break.  That's why I think a better first step would be to convert a
bunch of them to using lockdep for the "assert that I hold this lock"
checks, so we can start to see where the interesting cases are.
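
A sketch of that conversion on a hypothetical call site; note that
lockdep_assert_held() checks that the *current* context holds the lock,
rather than merely that somebody does, and only when lockdep is enabled:

	/* Before: assertion built on spin_is_locked(), no ordering implied. */
	WARN_ON_SMP(!spin_is_locked(&foo->lock));

	/* After: let lockdep perform the "assert that I hold this lock" check. */
	lockdep_assert_held(&foo->lock);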

Will


* Re: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
  2018-02-28 11:34     ` Will Deacon
@ 2018-02-28 12:15       ` Andrea Parri
  2018-02-28 14:39         ` Paul E. McKenney
  2018-03-07 13:13         ` Andrea Parri
  0 siblings, 2 replies; 10+ messages in thread
From: Andrea Parri @ 2018-02-28 12:15 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-kernel, Alan Stern, Peter Zijlstra, Boqun Feng,
	Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget,
	Paul E. McKenney, Akira Yokosawa

On Wed, Feb 28, 2018 at 11:34:56AM +0000, Will Deacon wrote:
> On Wed, Feb 28, 2018 at 12:24:03PM +0100, Andrea Parri wrote:
> > On Wed, Feb 28, 2018 at 10:56:32AM +0000, Will Deacon wrote:
> > > On Wed, Feb 28, 2018 at 11:39:32AM +0100, Andrea Parri wrote:
> > > > There appeared to be a certain, recurrent uncertainty concerning the
> > > > semantics of spin_is_locked(), likely a consequence of the fact that
> > > > this semantics remains undocumented or that it has been historically
> > > > linked to the (likewise unclear) semantics of spin_unlock_wait().
> > > > 
> > > > Document this semantics.
> > > > 
> > > > Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> > > > Cc: Alan Stern <stern@rowland.harvard.edu>
> > > > Cc: Will Deacon <will.deacon@arm.com>
> > > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > > Cc: Boqun Feng <boqun.feng@gmail.com>
> > > > Cc: Nicholas Piggin <npiggin@gmail.com>
> > > > Cc: David Howells <dhowells@redhat.com>
> > > > Cc: Jade Alglave <j.alglave@ucl.ac.uk>
> > > > Cc: Luc Maranget <luc.maranget@inria.fr>
> > > > Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > > > Cc: Akira Yokosawa <akiyks@gmail.com>
> > > > ---
> > > >  include/linux/spinlock.h | 11 +++++++++++
> > > >  1 file changed, 11 insertions(+)
> > > > 
> > > > diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> > > > index 4894d322d2584..2639fdc9a916c 100644
> > > > --- a/include/linux/spinlock.h
> > > > +++ b/include/linux/spinlock.h
> > > > @@ -380,6 +380,17 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
> > > >  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
> > > >  })
> > > >  
> > > > +/**
> > > > + * spin_is_locked() - Check whether a spinlock is locked.
> > > > + * @lock: Pointer to the spinlock.
> > > > + *
> > > > + * This function is NOT required to provide any memory ordering
> > > > + * guarantees; it could be used for debugging purposes or, when
> > > > + * additional synchronization is needed, accompanied with other
> > > > + * constructs (memory barriers) enforcing the synchronization.
> > > > + *
> > > > + * Return: 1, if @lock is (found to be) locked; 0, otherwise.
> > > > + */
> > > 
> > > I also don't think this is quite right, since the spin_is_locked check
> > > must be ordered after all prior lock acquisitions (to any lock) on the same
> > > CPU. That's why we have an smp_mb() in there on arm64 (see 38b850a73034f).
> > 
> > So, arm64 (and powerpc) complies with the semantics I _have_ in mind ...
> 
> Sure, but they're offering more than that at present. If I can remove the
> smp_mb() in our spin_is_locked implementation, I will, but we need to know
> what that will break even if you consider that code to be broken because it
> relies on something undocumented.
> 
> > > So this is a change in semantics and we need to audit the users before
> > > proceeding. We should also keep spin_is_locked consistent with the versions
> > > for mutex, rwsem, bit_spin.
> > 
> > Well, strictly speaking, it isn't (given that the current semantics is,
> > as reported above, currently undocumented); for the same reason, cases
> > relying on anything more than _nothing_ (if any) are already broken ...
> 
> I suppose it depends on whether you consider the code or the documentation
> to be authoritative. I tend to err on the side of the former for the kernel.
> To be clear: I'm perfectly ok relaxing the semantics, but only if there's
> some evidence that you've looked at the callsites and determined that they
> won't break.  That's why I think a better first step would be to convert a
> bunch of them to using lockdep for the "assert that I hold this lock"
> checks, so we can start to see where the interesting cases are.

Sure, I'll do (queued after the RISC-V patches I'm currently working on).

So I think that we could all agree that the semantics I'm proposing here
would be very simple to reason with ;-).  You know, OTOH, this auditing
could turn out to be anything but "simple"...

  https://marc.info/?l=linux-kernel&m=149910202928559&w=2
  https://marc.info/?l=linux-kernel&m=149886113629263&w=2
  https://marc.info/?l=linux-kernel&m=149912971028729&w=2

but I'll have a try, IAC.  Perhaps, a temporary solution/workaround can
be to simplify/clarify the semantics and to insert the smp_mb() (or the
smp_mb__before_islocked(), ...) in the "dubious" use cases.
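
A sketch of what such a workaround could look like at a "dubious" call site
(hypothetical code; smp_mb__before_islocked() does not exist, so a plain
smp_mb() is used to make the store-buffering-style ordering explicit):

	spin_lock(&a);
	smp_mb();	/* order the lock acquisition before the check below */
	while (spin_is_locked(&b))
		cpu_relax();
	/* ... */
	spin_unlock(&a);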

  Andrea


> 
> Will


* Re: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
  2018-02-28 12:15       ` Andrea Parri
@ 2018-02-28 14:39         ` Paul E. McKenney
  2018-03-07 13:13         ` Andrea Parri
  1 sibling, 0 replies; 10+ messages in thread
From: Paul E. McKenney @ 2018-02-28 14:39 UTC (permalink / raw)
  To: Andrea Parri
  Cc: Will Deacon, linux-kernel, Alan Stern, Peter Zijlstra,
	Boqun Feng, Nicholas Piggin, David Howells, Jade Alglave,
	Luc Maranget, Akira Yokosawa

On Wed, Feb 28, 2018 at 01:15:23PM +0100, Andrea Parri wrote:
> On Wed, Feb 28, 2018 at 11:34:56AM +0000, Will Deacon wrote:
> > On Wed, Feb 28, 2018 at 12:24:03PM +0100, Andrea Parri wrote:
> > > On Wed, Feb 28, 2018 at 10:56:32AM +0000, Will Deacon wrote:
> > > > On Wed, Feb 28, 2018 at 11:39:32AM +0100, Andrea Parri wrote:
> > > > > There appeared to be a certain, recurrent uncertainty concerning the
> > > > > semantics of spin_is_locked(), likely a consequence of the fact that
> > > > > this semantics remains undocumented or that it has been historically
> > > > > linked to the (likewise unclear) semantics of spin_unlock_wait().
> > > > > 
> > > > > Document this semantics.
> > > > > 
> > > > > Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> > > > > Cc: Alan Stern <stern@rowland.harvard.edu>
> > > > > Cc: Will Deacon <will.deacon@arm.com>
> > > > > Cc: Peter Zijlstra <peterz@infradead.org>
> > > > > Cc: Boqun Feng <boqun.feng@gmail.com>
> > > > > Cc: Nicholas Piggin <npiggin@gmail.com>
> > > > > Cc: David Howells <dhowells@redhat.com>
> > > > > Cc: Jade Alglave <j.alglave@ucl.ac.uk>
> > > > > Cc: Luc Maranget <luc.maranget@inria.fr>
> > > > > Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > > > > Cc: Akira Yokosawa <akiyks@gmail.com>
> > > > > ---
> > > > >  include/linux/spinlock.h | 11 +++++++++++
> > > > >  1 file changed, 11 insertions(+)
> > > > > 
> > > > > diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> > > > > index 4894d322d2584..2639fdc9a916c 100644
> > > > > --- a/include/linux/spinlock.h
> > > > > +++ b/include/linux/spinlock.h
> > > > > @@ -380,6 +380,17 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
> > > > >  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
> > > > >  })
> > > > >  
> > > > > +/**
> > > > > + * spin_is_locked() - Check whether a spinlock is locked.
> > > > > + * @lock: Pointer to the spinlock.
> > > > > + *
> > > > > + * This function is NOT required to provide any memory ordering
> > > > > + * guarantees; it could be used for debugging purposes or, when
> > > > > + * additional synchronization is needed, accompanied with other
> > > > > + * constructs (memory barriers) enforcing the synchronization.
> > > > > + *
> > > > > + * Return: 1, if @lock is (found to be) locked; 0, otherwise.
> > > > > + */
> > > > 
> > > > I also don't think this is quite right, since the spin_is_locked check
> > > > must be ordered after all prior lock acquisitions (to any lock) on the same
> > > > CPU. That's why we have an smp_mb() in there on arm64 (see 38b850a73034f).
> > > 
> > > So, arm64 (and powerpc) complies with the semantics I _have_ in mind ...
> > 
> > Sure, but they're offering more than that at present. If I can remove the
> > smp_mb() in our spin_is_locked implementation, I will, but we need to know
> > what that will break even if you consider that code to be broken because it
> > relies on something undocumented.
> > 
> > > > So this is a change in semantics and we need to audit the users before
> > > > proceeding. We should also keep spin_is_locked consistent with the versions
> > > > for mutex, rwsem, bit_spin.
> > > 
> > > Well, strictly speaking, it isn't (given that the current semantics is,
> > > as reported above, currently undocumented); for the same reason, cases
> > > relying on anything more than _nothing_ (if any) are already broken ...
> > 
> > I suppose it depends on whether you consider the code or the documentation
> > to be authoritative. I tend to err on the side of the former for the kernel.
> > To be clear: I'm perfectly ok relaxing the semantics, but only if there's
> > some evidence that you've looked at the callsites and determined that they
> > won't break.  That's why I think a better first step would be to convert a
> > bunch of them to using lockdep for the "assert that I hold this lock"
> > checks, so we can start to see where the interesting cases are.
> 
> Sure, I'll do (queued after the RISC-V patches I'm currently working on).
> 
> So I think that we could all agree that the semantics I'm proposing here
> would be very simple to reason with ;-).  You know, OTOH, this auditing
> could turn out to be anything but "simple"...
> 
>   https://marc.info/?l=linux-kernel&m=149910202928559&w=2
>   https://marc.info/?l=linux-kernel&m=149886113629263&w=2
>   https://marc.info/?l=linux-kernel&m=149912971028729&w=2

Indeed, if it was easy, we probably would have already done it.  ;-)

> but I'll have a try, IAC.  Perhaps, a temporary solution/workaround can
> be to simplify/clarify the semantics and to insert the smp_mb() (or the
> smp_mb__before_islocked(), ...) in the "dubious" use cases.

One approach that can be quite helpful is to instrument the code, perhaps
using tracing or perhaps as Will suggests using lockdep, to make it tell
you what it is up to.  Another approach is to find people who actually
use kgdb and see if any of them mess with CPU hotplug.
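
A minimal sketch of the tracing idea, on a hypothetical lock (the
trace_printk() output lands in the ftrace buffer, so the interesting
call sites become visible):

	if (spin_is_locked(&some_lock))
		trace_printk("some_lock observed held, caller %pS\n",
			     (void *)_RET_IP_);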

							Thanx, Paul


* Re: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
  2018-02-28 10:56 ` Will Deacon
  2018-02-28 11:24   ` Andrea Parri
@ 2018-02-28 15:16   ` Alan Stern
  1 sibling, 0 replies; 10+ messages in thread
From: Alan Stern @ 2018-02-28 15:16 UTC (permalink / raw)
  To: Will Deacon
  Cc: Andrea Parri, linux-kernel, Peter Zijlstra, Boqun Feng,
	Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget,
	Paul E. McKenney, Akira Yokosawa

On Wed, 28 Feb 2018, Will Deacon wrote:

> On Wed, Feb 28, 2018 at 11:39:32AM +0100, Andrea Parri wrote:
> > There appeared to be a certain, recurrent uncertainty concerning the
> > semantics of spin_is_locked(), likely a consequence of the fact that
> > this semantics remains undocumented or that it has been historically
> > linked to the (likewise unclear) semantics of spin_unlock_wait().
> > 
> > Document this semantics.
> > 
> > Signed-off-by: Andrea Parri <parri.andrea@gmail.com>
> > Cc: Alan Stern <stern@rowland.harvard.edu>
> > Cc: Will Deacon <will.deacon@arm.com>
> > Cc: Peter Zijlstra <peterz@infradead.org>
> > Cc: Boqun Feng <boqun.feng@gmail.com>
> > Cc: Nicholas Piggin <npiggin@gmail.com>
> > Cc: David Howells <dhowells@redhat.com>
> > Cc: Jade Alglave <j.alglave@ucl.ac.uk>
> > Cc: Luc Maranget <luc.maranget@inria.fr>
> > Cc: "Paul E. McKenney" <paulmck@linux.vnet.ibm.com>
> > Cc: Akira Yokosawa <akiyks@gmail.com>
> > ---
> >  include/linux/spinlock.h | 11 +++++++++++
> >  1 file changed, 11 insertions(+)
> > 
> > diff --git a/include/linux/spinlock.h b/include/linux/spinlock.h
> > index 4894d322d2584..2639fdc9a916c 100644
> > --- a/include/linux/spinlock.h
> > +++ b/include/linux/spinlock.h
> > @@ -380,6 +380,17 @@ static __always_inline int spin_trylock_irq(spinlock_t *lock)
> >  	raw_spin_trylock_irqsave(spinlock_check(lock), flags); \
> >  })
> >  
> > +/**
> > + * spin_is_locked() - Check whether a spinlock is locked.
> > + * @lock: Pointer to the spinlock.
> > + *
> > + * This function is NOT required to provide any memory ordering
> > + * guarantees; it could be used for debugging purposes or, when
> > + * additional synchronization is needed, accompanied with other
> > + * constructs (memory barriers) enforcing the synchronization.
> > + *
> > + * Return: 1, if @lock is (found to be) locked; 0, otherwise.
> > + */
> 
> I also don't think this is quite right, since the spin_is_locked check
> must be ordered after all prior lock acquisitions (to any lock) on the same
> CPU. That's why we have an smp_mb() in there on arm64 (see 38b850a73034f).

Pardon me for being a little slow ... I often have trouble 
understanding what people mean when they talk about ordering.

In this case it sounds like you're referring to situations where we 
want to avoid ABBA deadlocks, and we want to avoid the overhead of 
actually taking a lock when all that's really needed is to check that 
nobody else owns the lock.

spinlock_t a, b;

P0() {
	spin_lock(&a);
	while (spin_is_locked(&b))
		cpu_relax();
	/* Assert P1 isn't in its critical section and won't enter it */
	...
	spin_unlock(&a);
}

P1() {
	spin_lock(&b);
	if (spin_is_locked(&a)) {
		spin_unlock(&b);	/* P0 wants us not to do anything */
		return;
	}
	...
	spin_unlock(&b);
}

Is this what you meant?

If so, the basic pattern is essentially SB.  Taking a lock involves 
doing a store, and testing a lock involves doing a read.  So ignoring 
all the extraneous stuff and any memory barriers, this looks like:

P0() {
	WRITE_ONCE(a, 1);
	r0 = READ_ONCE(b);
}

P1() {
	WRITE_ONCE(b, 1);
	r1 = READ_ONCE(a);
}

exists (0:r0=0 /\ 1:r1=0)

And of course it is well known that the SB pattern requires full memory
barriers on both sides.  Hence the original code should have been
written more like this:

P0() {
	spin_lock(&a);
	smp_mb__after_spinlock();
	while (spin_is_locked(&b))
		cpu_relax();
	/* Assert P1 won't enter its critical section */
	...
	spin_unlock(&a);
}

P1() {
	spin_lock(&b);
	smp_mb__after_spinlock();
	if (spin_is_locked(&a)) {
		spin_unlock(&b);	/* P0 wants us not to do anything */
		return;
	}
	...
	spin_unlock(&b);
}

with no need for any particular ordering of the spin_is_locked calls.

Or have I completely misunderstood your point?

Alan


* Re: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
  2018-02-28 12:15       ` Andrea Parri
  2018-02-28 14:39         ` Paul E. McKenney
@ 2018-03-07 13:13         ` Andrea Parri
  2018-03-07 14:37           ` Daniel Thompson
  1 sibling, 1 reply; 10+ messages in thread
From: Andrea Parri @ 2018-03-07 13:13 UTC (permalink / raw)
  To: Will Deacon
  Cc: linux-kernel, Alan Stern, Peter Zijlstra, Boqun Feng,
	Nicholas Piggin, David Howells, Jade Alglave, Luc Maranget,
	Paul E. McKenney, Akira Yokosawa, Daniel Thompson, Jason Wessel

On Wed, Feb 28, 2018 at 01:15:23PM +0100, Andrea Parri wrote:
> On Wed, Feb 28, 2018 at 11:34:56AM +0000, Will Deacon wrote:

[...]

>> only if there's some evidence that you've looked at the callsites
>> and determined that they won't break.

I looked at the callsites for {,raw_}spin_is_locked() (reported below):

In most cases (40+), these primitives are used within BUG_ON/WARN_ON or
the like; a handful of other cases use these with no concurrency, for
checking "self-lock", or for heuristics.

I confirm that the "ipc/sem.c" case, mentioned in the arm64 and powerpc
commits adding smp_mb() to their arch_spin_is_locked(), disappeared.

And that the "debug_core" case seems to be the only case requiring some
thoughts: my understanding (but I Cc the KGDB maintainers, so that they
can correct me, or provide other details) is that KGDB does not rely on
implicit barriers in raw_spin_is_locked().

(This seems instead to rely on barriers in the IPIs sending/handling, in
 part., kgdb_roundup_cpus, kgdb_nmicallback; yes, these barriers are not
 documented, but I've discussed with Daniel, Jason about the eventuality
 of adding such documentations/inline comments.)

(N.B. I have _not_ tested any of these observations, say by removing the
 smp_mb() from your implementation; so, you know...)

  Andrea


./mm/khugepaged.c:1222:						VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock));
./mm/khugepaged.c:1663:						VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock));
./mm/swap.c:828:						VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&lruvec_pgdat(lruvec)->lru_lock));
./security/apparmor/file.c:497:					old = rcu_dereference_protected(fctx->label, spin_is_locked(&fctx->lock));
./net/netfilter/ipset/ip_set_hash_gen.h:18:			__ipset_dereference_protected(p, spin_is_locked(&(set)->lock))
./fs/ocfs2/dlmglue.c:760:					mlog_bug_on_msg(spin_is_locked(&res->l_lock),
./fs/ocfs2/inode.c:1194:					mlog_bug_on_msg(spin_is_locked(&oi->ip_lock),
./fs/userfaultfd.c:156:						VM_BUG_ON(spin_is_locked(&ctx->fault_pending_wqh.lock));
./fs/userfaultfd.c:158:						VM_BUG_ON(spin_is_locked(&ctx->fault_wqh.lock));
./fs/userfaultfd.c:160:						VM_BUG_ON(spin_is_locked(&ctx->event_wqh.lock));
./fs/userfaultfd.c:162:						VM_BUG_ON(spin_is_locked(&ctx->fd_wqh.lock));
./fs/userfaultfd.c:919:						VM_BUG_ON(!spin_is_locked(&wqh->lock));
./virt/kvm/arm/vgic/vgic.c:192:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&irq->irq_lock));
./virt/kvm/arm/vgic/vgic.c:269:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
./virt/kvm/arm/vgic/vgic.c:307:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&irq->irq_lock));
./virt/kvm/arm/vgic/vgic.c:663:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&irq->irq_lock));
./virt/kvm/arm/vgic/vgic.c:694:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
./virt/kvm/arm/vgic/vgic.c:715:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
./virt/kvm/kvm_main.c:3934:					WARN_ON(raw_spin_is_locked(&kvm_count_lock));
./kernel/debug/debug_core.c:527:				if (!raw_spin_is_locked(&dbg_slave_lock))
./kernel/debug/debug_core.c:755:				raw_spin_is_locked(&dbg_master_lock)) {
./kernel/locking/spinlock_debug.c:98:				SPIN_BUG_ON(!raw_spin_is_locked(lock), lock, "already unlocked");
./kernel/locking/mutex-debug.c:39:				SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
./kernel/locking/mutex-debug.c:54:				SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
./kernel/futex.c:1368:						if (WARN_ON_SMP(!q->lock_ptr || !spin_is_locked(q->lock_ptr))
./kernel/printk/printk_safe.c:281:				if (in_nmi() && raw_spin_is_locked(&logbuf_lock)) { // same cpu (printk in nmi)
./kernel/printk/printk_safe.c:314:	    			raw_spin_is_locked(&logbuf_lock)) { // same cpu (printk in nmi)
./include/net/sock.h:1529:					return !sk->sk_lock.owned && !spin_is_locked(&sk->sk_lock.slock); // returns in BUG_ON/WARN_ON_ONCE
./arch/x86/pci/i386.c:62:					WARN_ON_SMP(!spin_is_locked(&pcibios_fwaddrmap_lock));
./arch/cris/arch-v32/drivers/cryptocop.c:3446:			printk("cryptocop_completed_jobs_lock %d\n", spin_is_locked(&cryptocop_completed_jobs_lock));
./arch/cris/arch-v32/drivers/cryptocop.c:3447:			printk("cryptocop_job_queue_lock %d\n", spin_is_locked(&cryptocop_job_queue_lock));
./arch/cris/arch-v32/drivers/cryptocop.c:3448:			printk("descr_pool_lock %d\n", spin_is_locked(&descr_pool_lock));
./arch/cris/arch-v32/drivers/cryptocop.c:3449:			printk("cryptocop_sessions_lock %d\n", spin_is_locked(cryptocop_sessions_lock));
./arch/cris/arch-v32/drivers/cryptocop.c:3450:			printk("running_job_lock %d\n", spin_is_locked(running_job_lock));
./arch/cris/arch-v32/drivers/cryptocop.c:3451:			printk("cryptocop_process_lock %d\n", spin_is_locked(cryptocop_process_lock));
./arch/parisc/kernel/firmware.c:208:        			if (spin_is_locked(&pdc_lock)) // self-lock: if (is_locked) unlock(pdc_lock)
./drivers/staging/irda/drivers/sir_dev.c:637:			if(spin_is_locked(&dev->tx_lock)) { 	// for debug
./drivers/staging/lustre/lustre/osc/osc_cl_internal.h:189:	return spin_is_locked(&obj->oo_lock); // for assert
./drivers/tty/serial/sn_console.c:891:				if (spin_is_locked(&port->sc_port.lock)) { // single lock
./drivers/tty/serial/sn_console.c:908:				if (!spin_is_locked(&port->sc_port.lock) // single lock
./drivers/misc/sgi-xp/xpc_channel.c:31:				DBUG_ON(!spin_is_locked(&ch->lock));
./drivers/misc/sgi-xp/xpc_channel.c:85:				DBUG_ON(!spin_is_locked(&ch->lock));
./drivers/misc/sgi-xp/xpc_channel.c:761:			DBUG_ON(!spin_is_locked(&ch->lock));
./drivers/misc/sgi-xp/xpc_sn2.c:1674:				DBUG_ON(!spin_is_locked(&ch->lock));
./drivers/misc/sgi-xp/xpc_uv.c:1186:				DBUG_ON(!spin_is_locked(&ch->lock));
./drivers/net/ethernet/smsc/smsc911x.h:70:			WARN_ON_SMP(!spin_is_locked(&pdata->mac_lock))
./drivers/net/ethernet/intel/igbvf/mbx.c:267:			WARN_ON_ONCE(!spin_is_locked(&hw->mbx_lock));
./drivers/net/ethernet/intel/igbvf/mbx.c:305:			WARN_ON_ONCE(!spin_is_locked(&hw->mbx_lock));
./drivers/net/ethernet/intel/i40e/i40e_main.c:1527:		WARN(!spin_is_locked(&vsi->mac_filter_hash_lock),
./drivers/net/wireless/zydas/zd1211rw/zd_mac.c:238:		ZD_ASSERT(!spin_is_locked(&mac->lock));
./drivers/scsi/fnic/fnic_scsi.c:184:				int sh_locked = spin_is_locked(host->host_lock); // self-lock: if (!is_locked) lock(host_lock)
./drivers/scsi/snic/snic_scsi.c:2004:				SNIC_BUG_ON(!spin_is_locked(io_lock));
./drivers/scsi/snic/snic_scsi.c:2607:				SNIC_BUG_ON(!spin_is_locked(io_lock));
./drivers/atm/nicstar.c:2692:					if (spin_is_locked(&card->int_lock)) {	// optimization ("Probably it isn't worth spinning")
./drivers/hv/hv_balloon.c:644:					WARN_ON_ONCE(!spin_is_locked(&dm_device.ha_lock));


* Re: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
  2018-03-07 13:13         ` Andrea Parri
@ 2018-03-07 14:37           ` Daniel Thompson
  2018-03-13 12:24             ` Andrea Parri
  0 siblings, 1 reply; 10+ messages in thread
From: Daniel Thompson @ 2018-03-07 14:37 UTC (permalink / raw)
  To: Andrea Parri
  Cc: Will Deacon, linux-kernel, Alan Stern, Peter Zijlstra,
	Boqun Feng, Nicholas Piggin, David Howells, Jade Alglave,
	Luc Maranget, Paul E. McKenney, Akira Yokosawa, Jason Wessel

On Wed, Mar 07, 2018 at 02:13:41PM +0100, Andrea Parri wrote:
> On Wed, Feb 28, 2018 at 01:15:23PM +0100, Andrea Parri wrote:
> > On Wed, Feb 28, 2018 at 11:34:56AM +0000, Will Deacon wrote:
> 
> [...]
> 
> >> only if there's some evidence that you've looked at the callsites
> >> and determined that they won't break.
> 
> I looked at the callsites for {,raw_}spin_is_locked() (reported below):
>
> In most cases (40+), these primitives are used within BUG_ON/WARN_ON or
> the like; a handful of other cases use these with no concurrency, for
> checking "self-lock", or for heuristics.
> 
> I confirm that the "ipc/sem.c" case, mentioned in the arm64 and powerpc
> commits adding smp_mb() to their arch_spin_is_locked(), disappeared.
> 
> And that the "debug_core" case seems to be the only case requiring some
> thoughts: my understanding (but I Cc the KGDB maintainers, so that they
> can correct me, or provide other details) is that KGDB does not rely on
> implicit barriers in raw_spin_is_locked().
> 
> (This seems instead to rely on barriers in the IPIs sending/handling, in
>  part., kgdb_roundup_cpus, kgdb_nmicallback; yes, these barriers are not
>  documented, but I've discussed with Daniel, Jason about the eventuality
>  of adding such documentations/inline comments.)

Indeed.

Whilst responding to queries from Andrea I certainly saw opportunities 
to clean things up... and the result of those clean ups would actually 
be the removal of both calls to raw_spin_is_locked(). Nevertheless, for
now let's deal with the code as it is:

The calls to raw_spin_is_locked() within debug-core will pretty much always 
be from cores that did not take the lock because the code is triggered
once we have selected a master and are rounding up the other cpus. Thus
we do have to analyse the sequencing here.

Pretty much every architecture I looked at implements the round up 
using the IPI machinery (hardly surprising; this is the obvious way to 
implement it). I think this provides the required barriers implicitly
so the is-this-round-up code will correctly observe the locks to be 
locked when triggered via an IPI.

It is more difficult to describe the analysis if the is-this-a-round-up
code is spuriously triggered before the IPI but so far I've not come up 
with anything worse than a benign race (which exists even with barriers).
The round up code will eventually figure out it has spuriously tried to
park and will exit without altering important state. The return value of 
kgdb_nmicallback() does change in this case but no architecture cares 
about that[1].
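
Schematically (this is not the actual debug_core code: the calls are
simplified and enter_kgdb_as_slave() is a hypothetical stand-in for
kgdb_cpu_enter()), the point is that the IPI send/receive itself orders the
slave's read of the lock after the master's acquisition:

	/* Master CPU */
	raw_spin_lock(&dbg_master_lock);
	kgdb_roundup_cpus(flags);	/* send round-up IPIs to the other CPUs */

	/* Slave CPU, from the IPI/NMI path */
	if (raw_spin_is_locked(&dbg_master_lock))
		enter_kgdb_as_slave();	/* park this CPU as a slave */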


Daniel


[1] So one of the clean ups I alluded to above is therefore to remove 
    the return code ;-) .


> (N.B. I have _not_ tested any of these observations, say by removing the
>  smp_mb() from your implementation; so, you know...)
> 
>   Andrea
> 
> 
> ./mm/khugepaged.c:1222:						VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock));
> ./mm/khugepaged.c:1663:						VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock));
> ./mm/swap.c:828:						VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&lruvec_pgdat(lruvec)->lru_lock));
> ./security/apparmor/file.c:497:					old = rcu_dereference_protected(fctx->label, spin_is_locked(&fctx->lock));
> ./net/netfilter/ipset/ip_set_hash_gen.h:18:			__ipset_dereference_protected(p, spin_is_locked(&(set)->lock))
> ./fs/ocfs2/dlmglue.c:760:					mlog_bug_on_msg(spin_is_locked(&res->l_lock),
> ./fs/ocfs2/inode.c:1194:					mlog_bug_on_msg(spin_is_locked(&oi->ip_lock),
> ./fs/userfaultfd.c:156:						VM_BUG_ON(spin_is_locked(&ctx->fault_pending_wqh.lock));
> ./fs/userfaultfd.c:158:						VM_BUG_ON(spin_is_locked(&ctx->fault_wqh.lock));
> ./fs/userfaultfd.c:160:						VM_BUG_ON(spin_is_locked(&ctx->event_wqh.lock));
> ./fs/userfaultfd.c:162:						VM_BUG_ON(spin_is_locked(&ctx->fd_wqh.lock));
> ./fs/userfaultfd.c:919:						VM_BUG_ON(!spin_is_locked(&wqh->lock));
> ./virt/kvm/arm/vgic/vgic.c:192:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&irq->irq_lock));
> ./virt/kvm/arm/vgic/vgic.c:269:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
> ./virt/kvm/arm/vgic/vgic.c:307:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&irq->irq_lock));
> ./virt/kvm/arm/vgic/vgic.c:663:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&irq->irq_lock));
> ./virt/kvm/arm/vgic/vgic.c:694:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
> ./virt/kvm/arm/vgic/vgic.c:715:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
> ./virt/kvm/kvm_main.c:3934:					WARN_ON(raw_spin_is_locked(&kvm_count_lock));
> ./kernel/debug/debug_core.c:527:				if (!raw_spin_is_locked(&dbg_slave_lock))
> ./kernel/debug/debug_core.c:755:				raw_spin_is_locked(&dbg_master_lock)) {
> ./kernel/locking/spinlock_debug.c:98:				SPIN_BUG_ON(!raw_spin_is_locked(lock), lock, "already unlocked");
> ./kernel/locking/mutex-debug.c:39:				SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
> ./kernel/locking/mutex-debug.c:54:				SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
> ./kernel/futex.c:1368:						if (WARN_ON_SMP(!q->lock_ptr || !spin_is_locked(q->lock_ptr))
> ./kernel/printk/printk_safe.c:281:				if (in_nmi() && raw_spin_is_locked(&logbuf_lock)) { // same cpu (printk in nmi)
> ./kernel/printk/printk_safe.c:314:	    			raw_spin_is_locked(&logbuf_lock)) { // same cpu (printk in nmi)
> ./include/net/sock.h:1529:					return !sk->sk_lock.owned && !spin_is_locked(&sk->sk_lock.slock); // returns in BUG_ON/WARN_ON_ONCE
> ./arch/x86/pci/i386.c:62:					WARN_ON_SMP(!spin_is_locked(&pcibios_fwaddrmap_lock));
> ./arch/cris/arch-v32/drivers/cryptocop.c:3446:			printk("cryptocop_completed_jobs_lock %d\n", spin_is_locked(&cryptocop_completed_jobs_lock));
> ./arch/cris/arch-v32/drivers/cryptocop.c:3447:			printk("cryptocop_job_queue_lock %d\n", spin_is_locked(&cryptocop_job_queue_lock));
> ./arch/cris/arch-v32/drivers/cryptocop.c:3448:			printk("descr_pool_lock %d\n", spin_is_locked(&descr_pool_lock));
> ./arch/cris/arch-v32/drivers/cryptocop.c:3449:			printk("cryptocop_sessions_lock %d\n", spin_is_locked(cryptocop_sessions_lock));
> ./arch/cris/arch-v32/drivers/cryptocop.c:3450:			printk("running_job_lock %d\n", spin_is_locked(running_job_lock));
> ./arch/cris/arch-v32/drivers/cryptocop.c:3451:			printk("cryptocop_process_lock %d\n", spin_is_locked(cryptocop_process_lock));
> ./arch/parisc/kernel/firmware.c:208:        			if (spin_is_locked(&pdc_lock)) // self-lock: if (is_locked) unlock(pdc_lock)
> ./drivers/staging/irda/drivers/sir_dev.c:637:			if(spin_is_locked(&dev->tx_lock)) { 	// for debug
> ./drivers/staging/lustre/lustre/osc/osc_cl_internal.h:189:	return spin_is_locked(&obj->oo_lock); // for assert
> ./drivers/tty/serial/sn_console.c:891:				if (spin_is_locked(&port->sc_port.lock)) { // single lock
> ./drivers/tty/serial/sn_console.c:908:				if (!spin_is_locked(&port->sc_port.lock) // single lock
> ./drivers/misc/sgi-xp/xpc_channel.c:31:				DBUG_ON(!spin_is_locked(&ch->lock));
> ./drivers/misc/sgi-xp/xpc_channel.c:85:				DBUG_ON(!spin_is_locked(&ch->lock));
> ./drivers/misc/sgi-xp/xpc_channel.c:761:			DBUG_ON(!spin_is_locked(&ch->lock));
> ./drivers/misc/sgi-xp/xpc_sn2.c:1674:				DBUG_ON(!spin_is_locked(&ch->lock));
> ./drivers/misc/sgi-xp/xpc_uv.c:1186:				DBUG_ON(!spin_is_locked(&ch->lock));
> ./drivers/net/ethernet/smsc/smsc911x.h:70:			WARN_ON_SMP(!spin_is_locked(&pdata->mac_lock))
> ./drivers/net/ethernet/intel/igbvf/mbx.c:267:			WARN_ON_ONCE(!spin_is_locked(&hw->mbx_lock));
> ./drivers/net/ethernet/intel/igbvf/mbx.c:305:			WARN_ON_ONCE(!spin_is_locked(&hw->mbx_lock));
> ./drivers/net/ethernet/intel/i40e/i40e_main.c:1527:		WARN(!spin_is_locked(&vsi->mac_filter_hash_lock),
> ./drivers/net/wireless/zydas/zd1211rw/zd_mac.c:238:		ZD_ASSERT(!spin_is_locked(&mac->lock));
> ./drivers/scsi/fnic/fnic_scsi.c:184:				int sh_locked = spin_is_locked(host->host_lock); // self-lock: if (!is_locked) lock(host_lock)
> ./drivers/scsi/snic/snic_scsi.c:2004:				SNIC_BUG_ON(!spin_is_locked(io_lock));
> ./drivers/scsi/snic/snic_scsi.c:2607:				SNIC_BUG_ON(!spin_is_locked(io_lock));
> ./drivers/atm/nicstar.c:2692:					if (spin_is_locked(&card->int_lock)) {	// optimization ("Probably it isn't worth spinning")
> ./drivers/hv/hv_balloon.c:644:					WARN_ON_ONCE(!spin_is_locked(&dm_device.ha_lock));


* Re: [PATCH] Documentation/locking: Document the semantics of spin_is_locked()
  2018-03-07 14:37           ` Daniel Thompson
@ 2018-03-13 12:24             ` Andrea Parri
  0 siblings, 0 replies; 10+ messages in thread
From: Andrea Parri @ 2018-03-13 12:24 UTC (permalink / raw)
  To: Daniel Thompson
  Cc: Will Deacon, linux-kernel, Alan Stern, Peter Zijlstra,
	Boqun Feng, Nicholas Piggin, David Howells, Jade Alglave,
	Luc Maranget, Paul E. McKenney, Akira Yokosawa, Jason Wessel

On Wed, Mar 07, 2018 at 02:37:30PM +0000, Daniel Thompson wrote:
> On Wed, Mar 07, 2018 at 02:13:41PM +0100, Andrea Parri wrote:
> > On Wed, Feb 28, 2018 at 01:15:23PM +0100, Andrea Parri wrote:
> > > On Wed, Feb 28, 2018 at 11:34:56AM +0000, Will Deacon wrote:
> > 
> > [...]
> > 
> > >> only if there's some evidence that you've looked at the callsites
> > >> and determined that they won't break.
> > 
> > I looked at the callsites for {,raw_}spin_is_locked() (reported below):
> >
> > In most cases (40+), these primitives are used within BUG_ON/WARN_ON or
> > the like; a handful of other cases use these with no concurrency, for
> > checking "self-lock", or for heuristics.
> > 
> > I confirm that the "ipc/sem.c" case, mentioned in the arm64 and powerpc
> > commits adding smp_mb() to their arch_spin_is_locked(), disappeared.
> > 
> > And that the "debug_core" case seems to be the only case requiring some
> > thoughts: my understanding (but I Cc the KGDB maintainers, so that they
> > can correct me, or provide other details) is that KGDB does not rely on
> > implicit barriers in raw_spin_is_locked().
> > 
> > (This seems instead to rely on barriers in the IPIs sending/handling, in
> >  part., kgdb_roundup_cpus, kgdb_nmicallback; yes, these barriers are not
> >  documented, but I've discussed with Daniel, Jason about the eventuality
> >  of adding such documentations/inline comments.)
> 
> Indeed.
> 
> Whilst responding to queries from Andrea I certainly saw opportunities 
> to clean things up... and the result of those clean ups would actually 
> be the removal of both calls to raw_spin_is_locked(). Nevertheless, for
> now let's deal with the code as it is:
> 
> The calls to raw_spin_is_locked() within debug-core will pretty much always 
> be from cores that did not take the lock because the code is triggered
> once we have selected a master and are rounding up the other cpus. Thus
> we do have to analyse the sequencing here.
> 
> Pretty much every architecture I looked at implements the round up 
> using the IPI machinery (hardly surprising; this is the obvious way to 
> implement it). I think this provides the required barriers implicitly
> so the is-this-round-up code will correctly observe the locks to be 
> locked when triggered via an IPI.
> 
> It is more difficult to describe the analysis if the is-this-a-round-up
> code is spuriously triggered before the IPI but so far I've not come up 
> with anything worse than a benign race (which exists even with barriers).
> The round up code will eventually figure out it has spuriously tried to
> park and will exit without altering important state. The return value of 
> kgdb_nmicallback() does change in this case but no architecture cares 
> about that[1].
> 
> 
> Daniel

Thank you, Daniel.  Are there other remarks about this auditing?

What are the current options concerning the topic of my patch (semantics
of spin_is_locked)? I still think that we should reach some consensus...

  Andrea


> 
> 
> [1] So one of the clean ups I alluded to above is therefore to remove 
>     the return code ;-) .
> 
> 
> > (N.B. I have _not_ tested any of these observations, say by removing the
> >  smp_mb() from your implementation; so, you know...)
> > 
> >   Andrea
> > 
> > 
> > ./mm/khugepaged.c:1222:						VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock));
> > ./mm/khugepaged.c:1663:						VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&khugepaged_mm_lock));
> > ./mm/swap.c:828:						VM_BUG_ON(NR_CPUS != 1 && !spin_is_locked(&lruvec_pgdat(lruvec)->lru_lock));
> > ./security/apparmor/file.c:497:					old = rcu_dereference_protected(fctx->label, spin_is_locked(&fctx->lock));
> > ./net/netfilter/ipset/ip_set_hash_gen.h:18:			__ipset_dereference_protected(p, spin_is_locked(&(set)->lock))
> > ./fs/ocfs2/dlmglue.c:760:					mlog_bug_on_msg(spin_is_locked(&res->l_lock),
> > ./fs/ocfs2/inode.c:1194:					mlog_bug_on_msg(spin_is_locked(&oi->ip_lock),
> > ./fs/userfaultfd.c:156:						VM_BUG_ON(spin_is_locked(&ctx->fault_pending_wqh.lock));
> > ./fs/userfaultfd.c:158:						VM_BUG_ON(spin_is_locked(&ctx->fault_wqh.lock));
> > ./fs/userfaultfd.c:160:						VM_BUG_ON(spin_is_locked(&ctx->event_wqh.lock));
> > ./fs/userfaultfd.c:162:						VM_BUG_ON(spin_is_locked(&ctx->fd_wqh.lock));
> > ./fs/userfaultfd.c:919:						VM_BUG_ON(!spin_is_locked(&wqh->lock));
> > ./virt/kvm/arm/vgic/vgic.c:192:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&irq->irq_lock));
> > ./virt/kvm/arm/vgic/vgic.c:269:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
> > ./virt/kvm/arm/vgic/vgic.c:307:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&irq->irq_lock));
> > ./virt/kvm/arm/vgic/vgic.c:663:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&irq->irq_lock));
> > ./virt/kvm/arm/vgic/vgic.c:694:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
> > ./virt/kvm/arm/vgic/vgic.c:715:					DEBUG_SPINLOCK_BUG_ON(!spin_is_locked(&vgic_cpu->ap_list_lock));
> > ./virt/kvm/kvm_main.c:3934:					WARN_ON(raw_spin_is_locked(&kvm_count_lock));
> > ./kernel/debug/debug_core.c:527:				if (!raw_spin_is_locked(&dbg_slave_lock))
> > ./kernel/debug/debug_core.c:755:				raw_spin_is_locked(&dbg_master_lock)) {
> > ./kernel/locking/spinlock_debug.c:98:				SPIN_BUG_ON(!raw_spin_is_locked(lock), lock, "already unlocked");
> > ./kernel/locking/mutex-debug.c:39:				SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
> > ./kernel/locking/mutex-debug.c:54:				SMP_DEBUG_LOCKS_WARN_ON(!spin_is_locked(&lock->wait_lock));
> > ./kernel/futex.c:1368:						if (WARN_ON_SMP(!q->lock_ptr || !spin_is_locked(q->lock_ptr))
> > ./kernel/printk/printk_safe.c:281:				if (in_nmi() && raw_spin_is_locked(&logbuf_lock)) { // same cpu (printk in nmi)
> > ./kernel/printk/printk_safe.c:314:	    			raw_spin_is_locked(&logbuf_lock)) { // same cpu (printk in nmi)
> > ./include/net/sock.h:1529:					return !sk->sk_lock.owned && !spin_is_locked(&sk->sk_lock.slock); // returns in BUG_ON/WARN_ON_ONCE
> > ./arch/x86/pci/i386.c:62:					WARN_ON_SMP(!spin_is_locked(&pcibios_fwaddrmap_lock));
> > ./arch/cris/arch-v32/drivers/cryptocop.c:3446:			printk("cryptocop_completed_jobs_lock %d\n", spin_is_locked(&cryptocop_completed_jobs_lock));
> > ./arch/cris/arch-v32/drivers/cryptocop.c:3447:			printk("cryptocop_job_queue_lock %d\n", spin_is_locked(&cryptocop_job_queue_lock));
> > ./arch/cris/arch-v32/drivers/cryptocop.c:3448:			printk("descr_pool_lock %d\n", spin_is_locked(&descr_pool_lock));
> > ./arch/cris/arch-v32/drivers/cryptocop.c:3449:			printk("cryptocop_sessions_lock %d\n", spin_is_locked(cryptocop_sessions_lock));
> > ./arch/cris/arch-v32/drivers/cryptocop.c:3450:			printk("running_job_lock %d\n", spin_is_locked(running_job_lock));
> > ./arch/cris/arch-v32/drivers/cryptocop.c:3451:			printk("cryptocop_process_lock %d\n", spin_is_locked(cryptocop_process_lock));
> > ./arch/parisc/kernel/firmware.c:208:        			if (spin_is_locked(&pdc_lock)) // self-lock: if (is_locked) unlock(pdc_lock)
> > ./drivers/staging/irda/drivers/sir_dev.c:637:			if(spin_is_locked(&dev->tx_lock)) { 	// for debug
> > ./drivers/staging/lustre/lustre/osc/osc_cl_internal.h:189:	return spin_is_locked(&obj->oo_lock); // for assert
> > ./drivers/tty/serial/sn_console.c:891:				if (spin_is_locked(&port->sc_port.lock)) { // single lock
> > ./drivers/tty/serial/sn_console.c:908:				if (!spin_is_locked(&port->sc_port.lock) // single lock
> > ./drivers/misc/sgi-xp/xpc_channel.c:31:				DBUG_ON(!spin_is_locked(&ch->lock));
> > ./drivers/misc/sgi-xp/xpc_channel.c:85:				DBUG_ON(!spin_is_locked(&ch->lock));
> > ./drivers/misc/sgi-xp/xpc_channel.c:761:			DBUG_ON(!spin_is_locked(&ch->lock));
> > ./drivers/misc/sgi-xp/xpc_sn2.c:1674:				DBUG_ON(!spin_is_locked(&ch->lock));
> > ./drivers/misc/sgi-xp/xpc_uv.c:1186:				DBUG_ON(!spin_is_locked(&ch->lock));
> > ./drivers/net/ethernet/smsc/smsc911x.h:70:			WARN_ON_SMP(!spin_is_locked(&pdata->mac_lock))
> > ./drivers/net/ethernet/intel/igbvf/mbx.c:267:			WARN_ON_ONCE(!spin_is_locked(&hw->mbx_lock));
> > ./drivers/net/ethernet/intel/igbvf/mbx.c:305:			WARN_ON_ONCE(!spin_is_locked(&hw->mbx_lock));
> > ./drivers/net/ethernet/intel/i40e/i40e_main.c:1527:		WARN(!spin_is_locked(&vsi->mac_filter_hash_lock),
> > ./drivers/net/wireless/zydas/zd1211rw/zd_mac.c:238:		ZD_ASSERT(!spin_is_locked(&mac->lock));
> > ./drivers/scsi/fnic/fnic_scsi.c:184:				int sh_locked = spin_is_locked(host->host_lock); // self-lock: if (!is_locked) lock(host_lock)
> > ./drivers/scsi/snic/snic_scsi.c:2004:				SNIC_BUG_ON(!spin_is_locked(io_lock));
> > ./drivers/scsi/snic/snic_scsi.c:2607:				SNIC_BUG_ON(!spin_is_locked(io_lock));
> > ./drivers/atm/nicstar.c:2692:					if (spin_is_locked(&card->int_lock)) {	// optimization ("Probably it isn't worth spinning")
> > ./drivers/hv/hv_balloon.c:644:					WARN_ON_ONCE(!spin_is_locked(&dm_device.ha_lock));


Thread overview: 10+ messages
2018-02-28 10:39 [PATCH] Documentation/locking: Document the semantics of spin_is_locked() Andrea Parri
2018-02-28 10:56 ` Will Deacon
2018-02-28 11:24   ` Andrea Parri
2018-02-28 11:34     ` Will Deacon
2018-02-28 12:15       ` Andrea Parri
2018-02-28 14:39         ` Paul E. McKenney
2018-03-07 13:13         ` Andrea Parri
2018-03-07 14:37           ` Daniel Thompson
2018-03-13 12:24             ` Andrea Parri
2018-02-28 15:16   ` Alan Stern
