* [PATCH] atomic: Fix _atomic_dec_and_lock() deadlock on UP
@ 2009-06-15 18:11 Valerie Aurora
  2009-06-15 18:45 ` Andrew Morton
  2009-06-15 18:56 ` Paul E. McKenney
  0 siblings, 2 replies; 5+ messages in thread
From: Valerie Aurora @ 2009-06-15 18:11 UTC (permalink / raw)
  To: Nick Piggin; +Cc: Jan Blunck, linux-kernel, Andrew Morton, Paul McKenney

From: Jan Blunck <jblunck@suse.de>

_atomic_dec_and_lock() can deadlock on UP with spinlock debugging
enabled.  Currently, on UP we unconditionally spin_lock() first, which
calls __spin_lock_debug(), which takes the lock unconditionally even
on UP.  This will deadlock in situations in which we call
atomic_dec_and_lock() knowing that the counter won't go to zero
(because we hold another reference) and that we already hold the lock.
Instead, we should use the SMP code path which only takes the lock if
necessary.

Signed-off-by: Jan Blunck <jblunck@suse.de>
Signed-off-by: Valerie Aurora (Henson) <vaurora@redhat.com>
---
 lib/dec_and_lock.c |    3 +--
 1 files changed, 1 insertions(+), 2 deletions(-)

diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
index a65c314..e73822a 100644
--- a/lib/dec_and_lock.c
+++ b/lib/dec_and_lock.c
@@ -19,11 +19,10 @@
  */
 int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
 {
-#ifdef CONFIG_SMP
 	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
 	if (atomic_add_unless(atomic, -1, 1))
 		return 0;
-#endif
+
 	/* Otherwise do it the slow way */
 	spin_lock(lock);
 	if (atomic_dec_and_test(atomic))
-- 
1.6.0.6



* Re: [PATCH] atomic: Fix _atomic_dec_and_lock() deadlock on UP
  2009-06-15 18:11 [PATCH] atomic: Fix _atomic_dec_and_lock() deadlock on UP Valerie Aurora
@ 2009-06-15 18:45 ` Andrew Morton
  2009-06-15 19:12   ` Valerie Aurora
  2009-06-15 18:56 ` Paul E. McKenney
  1 sibling, 1 reply; 5+ messages in thread
From: Andrew Morton @ 2009-06-15 18:45 UTC (permalink / raw)
  To: Valerie Aurora; +Cc: npiggin, jblunck, linux-kernel, paulmck

On Mon, 15 Jun 2009 14:11:13 -0400
Valerie Aurora <vaurora@redhat.com> wrote:

> _atomic_dec_and_lock() can deadlock on UP with spinlock debugging
> enabled.  Currently, on UP we unconditionally spin_lock() first, which
> calls __spin_lock_debug(), which takes the lock unconditionally even
> on UP.  This will deadlock in situations in which we call
> atomic_dec_and_lock() knowing that the counter won't go to zero
> (because we hold another reference) and that we already hold the lock.
> Instead, we should use the SMP code path which only takes the lock if
> necessary.

Yup, I have this queued for 2.6.31 as
atomic-only-take-lock-when-the-counter-drops-to-zero-on-up-as-well.patch,
with a different changelog:

  _atomic_dec_and_lock() should not unconditionally take the lock before
  calling atomic_dec_and_test() in the UP case.  For consistency reasons it
  should behave exactly like in the SMP case.

  Besides that this works around the problem that with CONFIG_DEBUG_SPINLOCK
  this spins in __spin_lock_debug() if the lock is already taken even if the
  counter doesn't drop to 0.

  Signed-off-by: Jan Blunck <jblunck@suse.de>
  Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
  Acked-by: Nick Piggin <npiggin@suse.de>
  Signed-off-by: Andrew Morton <akpm@linux-foundation.org>


I can't remember why we decided that 2.6.30 doesn't need this.



* Re: [PATCH] atomic: Fix _atomic_dec_and_lock() deadlock on UP
  2009-06-15 18:11 [PATCH] atomic: Fix _atomic_dec_and_lock() deadlock on UP Valerie Aurora
  2009-06-15 18:45 ` Andrew Morton
@ 2009-06-15 18:56 ` Paul E. McKenney
  1 sibling, 0 replies; 5+ messages in thread
From: Paul E. McKenney @ 2009-06-15 18:56 UTC (permalink / raw)
  To: Valerie Aurora; +Cc: Nick Piggin, Jan Blunck, linux-kernel, Andrew Morton

On Mon, Jun 15, 2009 at 02:11:13PM -0400, Valerie Aurora wrote:
> From: Jan Blunck <jblunck@suse.de>
> 
> _atomic_dec_and_lock() can deadlock on UP with spinlock debugging
> enabled.  Currently, on UP we unconditionally spin_lock() first, which
> calls __spin_lock_debug(), which takes the lock unconditionally even
> on UP.  This will deadlock in situations in which we call
> atomic_dec_and_lock() knowing that the counter won't go to zero
> (because we hold another reference) and that we already hold the lock.
> Instead, we should use the SMP code path which only takes the lock if
> necessary.

Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> Signed-off-by: Jan Blunck <jblunck@suse.de>
> Signed-off-by: Valerie Aurora (Henson) <vaurora@redhat.com>
> ---
>  lib/dec_and_lock.c |    3 +--
>  1 files changed, 1 insertions(+), 2 deletions(-)
> 
> diff --git a/lib/dec_and_lock.c b/lib/dec_and_lock.c
> index a65c314..e73822a 100644
> --- a/lib/dec_and_lock.c
> +++ b/lib/dec_and_lock.c
> @@ -19,11 +19,10 @@
>   */
>  int _atomic_dec_and_lock(atomic_t *atomic, spinlock_t *lock)
>  {
> -#ifdef CONFIG_SMP
>  	/* Subtract 1 from counter unless that drops it to 0 (ie. it was 1) */
>  	if (atomic_add_unless(atomic, -1, 1))
>  		return 0;
> -#endif
> +
>  	/* Otherwise do it the slow way */
>  	spin_lock(lock);
>  	if (atomic_dec_and_test(atomic))
> -- 
> 1.6.0.6
> 


* Re: [PATCH] atomic: Fix _atomic_dec_and_lock() deadlock on UP
  2009-06-15 18:45 ` Andrew Morton
@ 2009-06-15 19:12   ` Valerie Aurora
  2009-06-15 19:31     ` Andrew Morton
  0 siblings, 1 reply; 5+ messages in thread
From: Valerie Aurora @ 2009-06-15 19:12 UTC (permalink / raw)
  To: Andrew Morton; +Cc: npiggin, jblunck, linux-kernel, paulmck

On Mon, Jun 15, 2009 at 11:45:43AM -0700, Andrew Morton wrote:
> On Mon, 15 Jun 2009 14:11:13 -0400
> Valerie Aurora <vaurora@redhat.com> wrote:
> 
> > _atomic_dec_and_lock() can deadlock on UP with spinlock debugging
> > enabled.  Currently, on UP we unconditionally spin_lock() first, which
> > calls __spin_lock_debug(), which takes the lock unconditionally even
> > on UP.  This will deadlock in situations in which we call
> > atomic_dec_and_lock() knowing that the counter won't go to zero
> > (because we hold another reference) and that we already hold the lock.
> > Instead, we should use the SMP code path which only takes the lock if
> > necessary.
> 
> Yup, I have this queued for 2.6.31 as
> atomic-only-take-lock-when-the-counter-drops-to-zero-on-up-as-well.patch,
> with a different changelog:
> 
>   _atomic_dec_and_lock() should not unconditionally take the lock before
>   calling atomic_dec_and_test() in the UP case.  For consistency reasons it
>   should behave exactly like in the SMP case.
> 
>   Besides that this works around the problem that with CONFIG_DEBUG_SPINLOCK
>   this spins in __spin_lock_debug() if the lock is already taken even if the
>   counter doesn't drop to 0.
> 
>   Signed-off-by: Jan Blunck <jblunck@suse.de>
>   Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
>   Acked-by: Nick Piggin <npiggin@suse.de>
>   Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> 
> 
> I can't remember why we decided that 2.6.30 doesn't need this.

Great, last I heard the changelog was still a problem.  Thanks,

-VAL


* Re: [PATCH] atomic: Fix _atomic_dec_and_lock() deadlock on UP
  2009-06-15 19:12   ` Valerie Aurora
@ 2009-06-15 19:31     ` Andrew Morton
  0 siblings, 0 replies; 5+ messages in thread
From: Andrew Morton @ 2009-06-15 19:31 UTC (permalink / raw)
  To: Valerie Aurora; +Cc: npiggin, jblunck, linux-kernel, paulmck

On Mon, 15 Jun 2009 15:12:23 -0400
Valerie Aurora <vaurora@redhat.com> wrote:

> On Mon, Jun 15, 2009 at 11:45:43AM -0700, Andrew Morton wrote:
> > On Mon, 15 Jun 2009 14:11:13 -0400
> > Valerie Aurora <vaurora@redhat.com> wrote:
> > 
> > > _atomic_dec_and_lock() can deadlock on UP with spinlock debugging
> > > enabled.  Currently, on UP we unconditionally spin_lock() first, which
> > > calls __spin_lock_debug(), which takes the lock unconditionally even
> > > on UP.  This will deadlock in situations in which we call
> > > atomic_dec_and_lock() knowing that the counter won't go to zero
> > > (because we hold another reference) and that we already hold the lock.
> > > Instead, we should use the SMP code path which only takes the lock if
> > > necessary.
> > 
> > Yup, I have this queued for 2.6.31 as
> > atomic-only-take-lock-when-the-counter-drops-to-zero-on-up-as-well.patch,
> > with a different changelog:
> > 
> >   _atomic_dec_and_lock() should not unconditionally take the lock before
> >   calling atomic_dec_and_test() in the UP case.  For consistency reasons it
> >   should behave exactly like in the SMP case.
> > 
> >   Besides that this works around the problem that with CONFIG_DEBUG_SPINLOCK
> >   this spins in __spin_lock_debug() if the lock is already taken even if the
> >   counter doesn't drop to 0.
> > 
> >   Signed-off-by: Jan Blunck <jblunck@suse.de>
> >   Acked-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> >   Acked-by: Nick Piggin <npiggin@suse.de>
> >   Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
> > 
> > 
> > I can't remember why we decided that 2.6.30 doesn't need this.
> 
> Great, last I heard the changelog was still a problem.  Thanks,
> 

<goes back and checks>

OK, I decided that we didn't need this in 2.6.30 or earlier because
Jan's union mount code is the only known triggerer of the problem.

However the patch is clearly a suitable thing for -stable.  Opinions
are sought...



