* linux-next oops in __lock_acquire for process_one_work
@ 2012-05-07 17:19 Hugh Dickins
  2012-05-07 17:57 ` Tejun Heo
  0 siblings, 1 reply; 18+ messages in thread
From: Hugh Dickins @ 2012-05-07 17:19 UTC (permalink / raw)
  To: Tejun Heo; +Cc: Stephen Boyd, Yong Zhang, linux-kernel

Hi Tejun,

Running MM load on recent linux-nexts (e.g. 3.4.0-rc5-next-20120504),
with CONFIG_PROVE_LOCKING=y, I've been hitting an oops in __lock_acquire
called from lock_acquire called from process_one_work: serving mm/swap.c's
lru_add_drain_all - schedule_on_each_cpu(lru_add_drain_per_cpu).

In each case the oopsing address has been ffffffff00000198, and the
oopsing instruction is the "atomic_inc((atomic_t *)&class->ops)" in
__lock_acquire: so class is ffffffff00000000.

I notice Stephen's commit 0976dfc1d0cd80a4e9dfaf87bd8744612bde475a
workqueue: Catch more locking problems with flush_work()
in linux-next but not 3.4-rc, adding
	lock_map_acquire(&work->lockdep_map);
	lock_map_release(&work->lockdep_map);
to flush_work.

I believe that occasionally races with your
	struct lockdep_map lockdep_map = work->lockdep_map;
in process_one_work, putting an entry into the class_cache
just as you're copying it, so you end up with half a pointer.
Yes, the structure copy is using "rep movsl", not "rep movsq".

I've reverted Stephen's commit from my testing, and indeed it's
now run that MM load much longer than I've seen since this bug
first appeared.  Though I suspect that strictly it's your
unlocked copying of the lockdep_map that's to blame.  Probably
easily fixed by someone who understands lockdep - not me!

Hugh

^ permalink raw reply	[flat|nested] 18+ messages in thread

* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-07 17:19 linux-next oops in __lock_acquire for process_one_work Hugh Dickins
@ 2012-05-07 17:57 ` Tejun Heo
  2012-05-08 13:03   ` Peter Zijlstra
  0 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2012-05-07 17:57 UTC (permalink / raw)
  To: Hugh Dickins, Peter Zijlstra, Ingo Molnar
  Cc: Stephen Boyd, Yong Zhang, linux-kernel

(cc'ing Peter and Ingo and quoting whole body)

On Mon, May 07, 2012 at 10:19:09AM -0700, Hugh Dickins wrote:
> Running MM load on recent linux-nexts (e.g. 3.4.0-rc5-next-20120504),
> with CONFIG_PROVE_LOCKING=y, I've been hitting an oops in __lock_acquire
> called from lock_acquire called from process_one_work: serving mm/swap.c's
> lru_add_drain_all - schedule_on_each_cpu(lru_add_drain_per_cpu).
> 
> In each case the oopsing address has been ffffffff00000198, and the
> oopsing instruction is the "atomic_inc((atomic_t *)&class->ops)" in
> __lock_acquire: so class is ffffffff00000000.
> 
> I notice Stephen's commit 0976dfc1d0cd80a4e9dfaf87bd8744612bde475a
> workqueue: Catch more locking problems with flush_work()
> in linux-next but not 3.4-rc, adding
> 	lock_map_acquire(&work->lockdep_map);
> 	lock_map_release(&work->lockdep_map);
> to flush_work.
> 
> I believe that occasionally races with your
> 	struct lockdep_map lockdep_map = work->lockdep_map;
> in process_one_work, putting an entry into the class_cache
> just as you're copying it, so you end up with half a pointer.
> Yes, the structure copy is using "rep movsl", not "rep movsq".
> 
> I've reverted Stephen's commit from my testing, and indeed it's
> now run that MM load much longer than I've seen since this bug
> first appeared.  Though I suspect that strictly it's your
> unlocked copying of the lockdep_map that's to blame.  Probably
> easily fixed by someone who understands lockdep - not me!

The offending commit is 0976dfc1d0cd80a4e9dfaf87bd8744612bde475a
"workqueue: Catch more locking problems with flush_work()".  It sounds
fancy but all it does is add the following to flush_work().

	lock_map_acquire(&work->lockdep_map);
	lock_map_release(&work->lockdep_map);

Which seems correct to me and more importantly not different from what
wait_on_work() does, so if this is broken, flush_work_sync() and
cancel_work_sync() are broken too - probably masked by lower usage
frequency.

It seems the problem stems from how process_one_work() "caches"
lockdep_map.  This part predates cmwq changes but it seems necessary
because the work item may be freed during execution but lockdep_map
should be released after execution is complete.  Peter, do you
remember how this lockdep_map copying was added?  Is (or was) this
correct?  If it's broken, how do we fix it?  Add a lockdep_map copy
API which does some magic lockdep locking dancing?

Thanks.

-- 
tejun


* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-07 17:57 ` Tejun Heo
@ 2012-05-08 13:03   ` Peter Zijlstra
  2012-05-08 16:58     ` Tejun Heo
  2012-05-08 18:05     ` linux-next oops in __lock_acquire for process_one_work Hugh Dickins
  0 siblings, 2 replies; 18+ messages in thread
From: Peter Zijlstra @ 2012-05-08 13:03 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Hugh Dickins, Ingo Molnar, Stephen Boyd, Yong Zhang, linux-kernel

On Mon, 2012-05-07 at 10:57 -0700, Tejun Heo wrote:
> (cc'ing Peter and Ingo and quoting whole body)
> 
> On Mon, May 07, 2012 at 10:19:09AM -0700, Hugh Dickins wrote:
> > Running MM load on recent linux-nexts (e.g. 3.4.0-rc5-next-20120504),
> > with CONFIG_PROVE_LOCKING=y, I've been hitting an oops in __lock_acquire
> > called from lock_acquire called from process_one_work: serving mm/swap.c's
> > lru_add_drain_all - schedule_on_each_cpu(lru_add_drain_per_cpu).
> > 
> > In each case the oopsing address has been ffffffff00000198, and the
> > oopsing instruction is the "atomic_inc((atomic_t *)&class->ops)" in
> > __lock_acquire: so class is ffffffff00000000.
> > 
> > I notice Stephen's commit 0976dfc1d0cd80a4e9dfaf87bd8744612bde475a
> > workqueue: Catch more locking problems with flush_work()
> > in linux-next but not 3.4-rc, adding
> > 	lock_map_acquire(&work->lockdep_map);
> > 	lock_map_release(&work->lockdep_map);
> > to flush_work.
> > 
> > I believe that occasionally races with your
> > 	struct lockdep_map lockdep_map = work->lockdep_map;
> > in process_one_work, putting an entry into the class_cache
> > just as you're copying it, so you end up with half a pointer.
> > Yes, the structure copy is using "rep movsl", not "rep movsq".

But the copy is copying from work->lockdep_map, not to, so it doesn't
matter, does it? If anything would explode it would be the:

  lock_map_acquire(&lockdep_map);

because that's the target of the copy and could indeed observe a partial
update (assuming the reported but silly "rep movsl").

> The offending commit is 0976dfc1d0cd80a4e9dfaf87bd8744612bde475a
> "workqueue: Catch more locking problems with flush_work()".  It sounds
> fancy but all it does is add the following to flush_work().
> 
> 	lock_map_acquire(&work->lockdep_map);
> 	lock_map_release(&work->lockdep_map);
> 
> Which seems correct to me and more importantly not different from what
> wait_on_work() does, so if this is broken, flush_work_sync() and
> cancel_work_sync() are broken too - probably masked by lower usage
> frequency.
> 
> It seems the problem stems from how process_one_work() "caches"
> lockdep_map.  This part predates cmwq changes but it seems necessary
> because the work item may be freed during execution but lockdep_map
> should be released after execution is complete.

Exactly.

>   Peter, do you
> remember how this lockdep_map copying was added?  Is (or was) this
> correct?  If it's broken, how do we fix it?  Add a lockdep_map copy
> API which does some magic lockdep locking dancing?

I think there's a problem if indeed we do silly things like small copies
like Hugh saw (why would gcc ever generate small copies for objects that
are naturally aligned and naturally sized?).

Something like the below should fix that problem, but it doesn't explain
the observed issue..

---
 include/linux/lockdep.h |   19 +++++++++++++++++++
 kernel/timer.c          |    2 +-
 kernel/workqueue.c      |    2 +-
 3 files changed, 21 insertions(+), 2 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index d36619e..dc6661b 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -157,6 +157,25 @@ struct lockdep_map {
 #endif
 };
 
+static inline struct lockdep_map lockdep_copy_map(struct lockdep_map *lock)
+{
+	struct lockdep_map _lock = *lock;
+	int i;
+
+	/*
+	 * Since the class cache can be modified concurrently we could observe
+	 * half pointers (64bit arch using 32bit copy insns). Therefore clear
+	 * the caches and take the performance hit.
+	 *
+	 * XXX it doesn't work well with lockdep_set_class_and_subclass(), since
+	 *     that relies on cache abuse.
+	 */
+	for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++)
+		_lock.class_cache[i] = NULL;
+
+	return _lock;
+}
+
 /*
  * Every lock has a list of other locks that were taken after it.
  * We only grow the list, never remove from it:
diff --git a/kernel/timer.c b/kernel/timer.c
index a297ffc..fa98821 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -1102,7 +1102,7 @@ static void call_timer_fn(struct timer_list *timer, void (*fn)(unsigned long),
 	 * warnings as well as problems when looking into
 	 * timer->lockdep_map, make a copy and use that here.
 	 */
-	struct lockdep_map lockdep_map = timer->lockdep_map;
+	struct lockdep_map lockdep_map = lockdep_copy_map(&timer->lockdep_map);
 #endif
 	/*
 	 * Couple the lock chain with the lock chain at
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5abf42f..5d92b43 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1810,7 +1810,7 @@ __acquires(&gcwq->lock)
 	 * lock freed" warnings as well as problems when looking into
 	 * work->lockdep_map, make a copy and use that here.
 	 */
-	struct lockdep_map lockdep_map = work->lockdep_map;
+	struct lockdep_map lockdep_map = lockdep_copy_map(&work->lockdep_map);
 #endif
 	/*
 	 * A single work shouldn't be executed concurrently by



* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-08 13:03   ` Peter Zijlstra
@ 2012-05-08 16:58     ` Tejun Heo
  2012-05-08 17:02       ` Peter Zijlstra
  2012-05-08 18:11       ` Hugh Dickins
  2012-05-08 18:05     ` linux-next oops in __lock_acquire for process_one_work Hugh Dickins
  1 sibling, 2 replies; 18+ messages in thread
From: Tejun Heo @ 2012-05-08 16:58 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Hugh Dickins, Ingo Molnar, Stephen Boyd, Yong Zhang, linux-kernel

On Tue, May 08, 2012 at 03:03:22PM +0200, Peter Zijlstra wrote:
> I think there's a problem if indeed we do silly things like small copies
> like Hugh saw (why would gcc ever generate small copies for objects that
> are naturally aligned and naturally sized?).
> 
> Something like the below should fix that problem, but it doesn't explain
> the observed issue..

Hmmm.... Hugh, can you please verify whether this patch makes the
problem go away somehow?

> @@ -1810,7 +1810,7 @@ __acquires(&gcwq->lock)
>  	 * lock freed" warnings as well as problems when looking into
>  	 * work->lockdep_map, make a copy and use that here.
>  	 */
> -	struct lockdep_map lockdep_map = work->lockdep_map;
> +	struct lockdep_map lockdep_map = lockdep_copy_map(&work->lockdep_map);

If this is the correct fix for whatever reason, maybe we want the copy
interface to be a bit more conventional?  lockdep_copy_map(to, from)?

Thanks.

-- 
tejun


* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-08 16:58     ` Tejun Heo
@ 2012-05-08 17:02       ` Peter Zijlstra
  2012-05-08 18:11       ` Hugh Dickins
  1 sibling, 0 replies; 18+ messages in thread
From: Peter Zijlstra @ 2012-05-08 17:02 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Hugh Dickins, Ingo Molnar, Stephen Boyd, Yong Zhang, linux-kernel

On Tue, 2012-05-08 at 09:58 -0700, Tejun Heo wrote:
> On Tue, May 08, 2012 at 03:03:22PM +0200, Peter Zijlstra wrote:
> > I think there's a problem if indeed we do silly things like small copies
> > like Hugh saw (why would gcc ever generate small copies for objects that
> > are naturally aligned and naturally sized?).
> > 
> > Something like the below should fix that problem, but it doesn't explain
> > the observed issue..
> 
> Hmmm.... Hugh, can you please verify whether this patch makes the
> problem go away somehow?
> 
> > @@ -1810,7 +1810,7 @@ __acquires(&gcwq->lock)
> >  	 * lock freed" warnings as well as problems when looking into
> >  	 * work->lockdep_map, make a copy and use that here.
> >  	 */
> > -	struct lockdep_map lockdep_map = work->lockdep_map;
> > +	struct lockdep_map lockdep_map = lockdep_copy_map(&work->lockdep_map);
> 
> If this is the correct fix for whatever reason, maybe we want the copy
> interface to be a bit more conventional?  lockdep_copy_map(to, from)?

Sure why not.. still not quite understanding the whole issue though.




* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-08 13:03   ` Peter Zijlstra
  2012-05-08 16:58     ` Tejun Heo
@ 2012-05-08 18:05     ` Hugh Dickins
  1 sibling, 0 replies; 18+ messages in thread
From: Hugh Dickins @ 2012-05-08 18:05 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Tejun Heo, Ingo Molnar, Stephen Boyd, Yong Zhang, linux-kernel

On Tue, 8 May 2012, Peter Zijlstra wrote:
> On Mon, 2012-05-07 at 10:57 -0700, Tejun Heo wrote:
> > (cc'ing Peter and Ingo and quoting whole body)
> > 
> > On Mon, May 07, 2012 at 10:19:09AM -0700, Hugh Dickins wrote:
> > > Running MM load on recent linux-nexts (e.g. 3.4.0-rc5-next-20120504),
> > > with CONFIG_PROVE_LOCKING=y, I've been hitting an oops in __lock_acquire
> > > called from lock_acquire called from process_one_work: serving mm/swap.c's
> > > lru_add_drain_all - schedule_on_each_cpu(lru_add_drain_per_cpu).
> > > 
> > > In each case the oopsing address has been ffffffff00000198, and the
> > > oopsing instruction is the "atomic_inc((atomic_t *)&class->ops)" in
> > > __lock_acquire: so class is ffffffff00000000.
> > > 
> > > I notice Stephen's commit 0976dfc1d0cd80a4e9dfaf87bd8744612bde475a
> > > workqueue: Catch more locking problems with flush_work()
> > > in linux-next but not 3.4-rc, adding
> > > 	lock_map_acquire(&work->lockdep_map);
> > > 	lock_map_release(&work->lockdep_map);
> > > to flush_work.
> > > 
> > > I believe that occasionally races with your
> > > 	struct lockdep_map lockdep_map = work->lockdep_map;
> > > in process_one_work, putting an entry into the class_cache
> > > just as you're copying it, so you end up with half a pointer.
> > > Yes, the structure copy is using "rep movsl", not "rep movsq".
> 
> But the copy is copying from work->lockdep_map, not to, so it doesn't
> matter, does it? If anything would explode it would be the:

It doesn't matter to work->lockdep_map, it matters to the lockdep_map
on process_one_work()'s stack.

> If anything would explode it would be the:
> 
>   lock_map_acquire(&lockdep_map);

Yes, on line 1867 of the rc5-next-20120504 kernel/workqueue.c (line 1864
in rc6).  I have not noted down the offset in process_one_work() at which
it crashed (calling lock_acquire calling __lock_acquire), so cannot
reconfirm, but I believe that's the lock_acquire() I tracked it to.

> 
> because that's the target of the copy and could indeed observe a partial
> update (assuming the reported but silly "rep movsl").

When the racing copy is done earlier with "rep movsl", the bogus pointer
ffffffff00000000 is put in class_cache[N].  Then when it comes to
lock_map_acquire(&lockdep_map) here, it oopses on it.

Do you see it now?  (It's always hard to understand misunderstandings ;)

> 
> > The offending commit is 0976dfc1d0cd80a4e9dfaf87bd8744612bde475a
> > "workqueue: Catch more locking problems with flush_work()".  It sounds
> > fancy but all it does is add the following to flush_work().
> > 
> > 	lock_map_acquire(&work->lockdep_map);
> > 	lock_map_release(&work->lockdep_map);
> > 
> > Which seems correct to me and more importantly not different from what
> > wait_on_work() does, so if this is broken, flush_work_sync() and
> > cancel_work_sync() are broken too - probably masked by lower usage
> > frequency.
> > 
> > It seems the problem stems from how process_one_work() "caches"
> > lockdep_map.  This part predates cmwq changes but it seems necessary
> > because the work item may be freed during execution but lockdep_map
> > should be released after execution is complete.
> 
> Exactly.
> 
> >   Peter, do you
> > remember how this lockdep_map copying was added?  Is (or was) this
> > correct?  If it's broken, how do we fix it?  Add a lockdep_map copy
> > API which does some magic lockdep locking dancing?
> 
> I think there's a problem if indeed we do silly things like small copies
> like Hugh saw (why would gcc ever generate small copies for objects that
> are naturally aligned and naturally sized?).

I don't know.  gcc 4.5.1 here.  The structure is only 32 bytes, maybe
there's some advantage to copying that with movl rather than movq.

> 
> Something like the below should fix that problem, but it doesn't explain
> the observed issue..
> 
> ---
>  include/linux/lockdep.h |   19 +++++++++++++++++++
>  kernel/timer.c          |    2 +-
>  kernel/workqueue.c      |    2 +-
>  3 files changed, 21 insertions(+), 2 deletions(-)
> 
> diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
> index d36619e..dc6661b 100644
> --- a/include/linux/lockdep.h
> +++ b/include/linux/lockdep.h
> @@ -157,6 +157,25 @@ struct lockdep_map {
>  #endif
>  };
>  
> +static inline struct lockdep_map lockdep_copy_map(struct lockdep_map *lock)

Does that "inline" need to be
"__this_will_go_very_horribly_wrong_if_not_inline"?

Ah, looks like Tejun suggests a more conventional interface in his reply.

> +{
> +	struct lockdep_map _lock = *lock;
> +	int i;
> +
> +	/*
> +	 * Since the class cache can be modified concurrently we could observe
> +	 * half pointers (64bit arch using 32bit copy insns). Therefore clear
> +	 * the caches and take the performance hit.
> +	 *
> +	 * XXX it doesn't work well with lockdep_set_class_and_subclass(), since
> +	 *     that relies on cache abuse.
> +	 */
> +	for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++)
> +		_lock.class_cache[i] = NULL;
> +
> +	return _lock;
> +}
> +
>  /*
>   * Every lock has a list of other locks that were taken after it.
>   * We only grow the list, never remove from it:
> diff --git a/kernel/timer.c b/kernel/timer.c
> index a297ffc..fa98821 100644
> --- a/kernel/timer.c
> +++ b/kernel/timer.c
> @@ -1102,7 +1102,7 @@ static void call_timer_fn(struct timer_list *timer, void (*fn)(unsigned long),
>  	 * warnings as well as problems when looking into
>  	 * timer->lockdep_map, make a copy and use that here.
>  	 */
> -	struct lockdep_map lockdep_map = timer->lockdep_map;
> +	struct lockdep_map lockdep_map = lockdep_copy_map(&timer->lockdep_map);
>  #endif
>  	/*
>  	 * Couple the lock chain with the lock chain at
> diff --git a/kernel/workqueue.c b/kernel/workqueue.c
> index 5abf42f..5d92b43 100644
> --- a/kernel/workqueue.c
> +++ b/kernel/workqueue.c
> @@ -1810,7 +1810,7 @@ __acquires(&gcwq->lock)
>  	 * lock freed" warnings as well as problems when looking into
>  	 * work->lockdep_map, make a copy and use that here.
>  	 */
> -	struct lockdep_map lockdep_map = work->lockdep_map;
> +	struct lockdep_map lockdep_map = lockdep_copy_map(&work->lockdep_map);
>  #endif
>  	/*
>  	 * A single work shouldn't be executed concurrently by


* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-08 16:58     ` Tejun Heo
  2012-05-08 17:02       ` Peter Zijlstra
@ 2012-05-08 18:11       ` Hugh Dickins
  2012-05-08 22:31         ` Peter Zijlstra
  1 sibling, 1 reply; 18+ messages in thread
From: Hugh Dickins @ 2012-05-08 18:11 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Peter Zijlstra, Ingo Molnar, Stephen Boyd, Yong Zhang, linux-kernel

On Tue, 8 May 2012, Tejun Heo wrote:
> On Tue, May 08, 2012 at 03:03:22PM +0200, Peter Zijlstra wrote:
> > I think there's a problem if indeed we do silly things like small copies
> > like Hugh saw (why would gcc ever generate small copies for objects that
> > are naturally aligned and naturally sized?).
> > 
> > Something like the below should fix that problem, but it doesn't explain
> > the observed issue..
> 
> Hmmm.... Hugh, can you please verify whether this patch makes the
> problem go away somehow?

Sure, but I won't start the run until tonight, and it'll then take
a couple of days for us to be sure (sometimes it hit within the
hour, sometimes it would take half a day).

Certainly the principle, cleaning out the cache, looked sound.

> 
> > @@ -1810,7 +1810,7 @@ __acquires(&gcwq->lock)
> >  	 * lock freed" warnings as well as problems when looking into
> >  	 * work->lockdep_map, make a copy and use that here.
> >  	 */
> > -	struct lockdep_map lockdep_map = work->lockdep_map;
> > +	struct lockdep_map lockdep_map = lockdep_copy_map(&work->lockdep_map);
> 
> If this is the correct fix for whatever reason, maybe we want the copy
> interface to be a bit more conventional?  lockdep_copy_map(to, from)?

Please send me the version of patch you'd like to put in
(lest I make it up myself and you don't like the result).

Hugh


* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-08 18:11       ` Hugh Dickins
@ 2012-05-08 22:31         ` Peter Zijlstra
  2012-05-08 22:58           ` Hugh Dickins
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2012-05-08 22:31 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Tejun Heo, Ingo Molnar, Stephen Boyd, Yong Zhang, linux-kernel

On Tue, 2012-05-08 at 11:11 -0700, Hugh Dickins wrote:
> Please send me the version of patch you'd like to put in
> (lest I make it up myself and you don't like the result).

something like so?

---
 include/linux/lockdep.h |   17 +++++++++++++++++
 kernel/timer.c          |    4 +++-
 kernel/workqueue.c      |    4 +++-
 3 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index d36619e..968c3e2 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -157,6 +157,23 @@ struct lockdep_map {
 #endif
 };
 
+static inline void lockdep_copy_map(struct lockdep_map *to struct lockdep_map *from)
+{
+	int i;
+
+	*to = *from;
+	/*
+	 * Since the class cache can be modified concurrently we could observe
+	 * half pointers (64bit arch using 32bit copy insns). Therefore clear
+	 * the caches and take the performance hit.
+	 *
+	 * XXX it doesn't work well with lockdep_set_class_and_subclass(), since
+	 *     that relies on cache abuse.
+	 */
+	for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++)
+		to->class_cache[i] = NULL;
+}
+
 /*
  * Every lock has a list of other locks that were taken after it.
  * We only grow the list, never remove from it:
diff --git a/kernel/timer.c b/kernel/timer.c
index a297ffc..6aa7ad8 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -1102,7 +1102,9 @@ static void call_timer_fn(struct timer_list *timer, void (*fn)(unsigned long),
 	 * warnings as well as problems when looking into
 	 * timer->lockdep_map, make a copy and use that here.
 	 */
-	struct lockdep_map lockdep_map = timer->lockdep_map;
+	struct lockdep_map lockdep_map;
+       
+	lockdep_copy_map(&lockdep_map, &timer->lockdep_map);
 #endif
 	/*
 	 * Couple the lock chain with the lock chain at
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5abf42f..7d77d1f 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1810,7 +1810,9 @@ __acquires(&gcwq->lock)
 	 * lock freed" warnings as well as problems when looking into
 	 * work->lockdep_map, make a copy and use that here.
 	 */
-	struct lockdep_map lockdep_map = work->lockdep_map;
+	struct lockdep_map lockdep_map;
+       
+	lockdep_copy_map(&lockdep_map, &work->lockdep_map);
 #endif
 	/*
 	 * A single work shouldn't be executed concurrently by



* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-08 22:31         ` Peter Zijlstra
@ 2012-05-08 22:58           ` Hugh Dickins
  2012-05-09  9:25             ` Ingo Molnar
  0 siblings, 1 reply; 18+ messages in thread
From: Hugh Dickins @ 2012-05-08 22:58 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Tejun Heo, Ingo Molnar, Stephen Boyd, Yong Zhang, linux-kernel

On Wed, 9 May 2012, Peter Zijlstra wrote:
> On Tue, 2012-05-08 at 11:11 -0700, Hugh Dickins wrote:
> > Please send me the version of patch you'd like to put in
> > (lest I make it up myself and you don't like the result).
> 
> something like so?

More like the one below: I'm not alone in preferring a comma between args!

And you're not a believer in checkpatch.pl, I see: I've removed trailing
spaces; but left the 85-col line, that's not a fight I'll have with you.

I'll set it going when I get home later - thanks.

Hugh

---
 include/linux/lockdep.h |   17 +++++++++++++++++
 kernel/timer.c          |    4 +++-
 kernel/workqueue.c      |    4 +++-
 3 files changed, 23 insertions(+), 2 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index d36619e..968c3e2 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -157,6 +157,23 @@ struct lockdep_map {
 #endif
 };
 
+static inline void lockdep_copy_map(struct lockdep_map *to, struct lockdep_map *from)
+{
+	int i;
+
+	*to = *from;
+	/*
+	 * Since the class cache can be modified concurrently we could observe
+	 * half pointers (64bit arch using 32bit copy insns). Therefore clear
+	 * the caches and take the performance hit.
+	 *
+	 * XXX it doesn't work well with lockdep_set_class_and_subclass(), since
+	 *     that relies on cache abuse.
+	 */
+	for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++)
+		to->class_cache[i] = NULL;
+}
+
 /*
  * Every lock has a list of other locks that were taken after it.
  * We only grow the list, never remove from it:
diff --git a/kernel/timer.c b/kernel/timer.c
index a297ffc..6aa7ad8 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -1102,7 +1102,9 @@ static void call_timer_fn(struct timer_list *timer, void (*fn)(unsigned long),
 	 * warnings as well as problems when looking into
 	 * timer->lockdep_map, make a copy and use that here.
 	 */
-	struct lockdep_map lockdep_map = timer->lockdep_map;
+	struct lockdep_map lockdep_map;
+
+	lockdep_copy_map(&lockdep_map, &timer->lockdep_map);
 #endif
 	/*
 	 * Couple the lock chain with the lock chain at
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index 5abf42f..7d77d1f 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1810,7 +1810,9 @@ __acquires(&gcwq->lock)
 	 * lock freed" warnings as well as problems when looking into
 	 * work->lockdep_map, make a copy and use that here.
 	 */
-	struct lockdep_map lockdep_map = work->lockdep_map;
+	struct lockdep_map lockdep_map;
+
+	lockdep_copy_map(&lockdep_map, &work->lockdep_map);
 #endif
 	/*
 	 * A single work shouldn't be executed concurrently by


* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-08 22:58           ` Hugh Dickins
@ 2012-05-09  9:25             ` Ingo Molnar
  2012-05-09 20:09               ` Hugh Dickins
  0 siblings, 1 reply; 18+ messages in thread
From: Ingo Molnar @ 2012-05-09  9:25 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Peter Zijlstra, Tejun Heo, Ingo Molnar, Stephen Boyd, Yong Zhang,
	linux-kernel


* Hugh Dickins <hughd@google.com> wrote:

> On Wed, 9 May 2012, Peter Zijlstra wrote:
> > On Tue, 2012-05-08 at 11:11 -0700, Hugh Dickins wrote:
> > > Please send me the version of patch you'd like to put in
> > > (lest I make it up myself and you don't like the result).
> > 
> > something like so?
> 
> More like the one below: I'm not alone in preferring a comma 
> between args!

Silly compilers!

> And you're not a believer in checkpatch.pl, I see: I've removed trailing
> spaces; but left the 85-col line, that's not a fight I'll have with you.

I suspect we could break up the prototype like this:

static inline void
lockdep_copy_map(struct lockdep_map *to, struct lockdep_map *from)

> I'll set it going when I get home later - thanks.

Do we still need an explanation about why it's needed and why it 
makes a difference?

Thanks,

	Ingo


* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-09  9:25             ` Ingo Molnar
@ 2012-05-09 20:09               ` Hugh Dickins
  2012-05-10 17:52                 ` Hugh Dickins
  0 siblings, 1 reply; 18+ messages in thread
From: Hugh Dickins @ 2012-05-09 20:09 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Tejun Heo, Ingo Molnar, Stephen Boyd, Yong Zhang,
	linux-kernel

On Wed, 9 May 2012, Ingo Molnar wrote:
> * Hugh Dickins <hughd@google.com> wrote:
> 
> > I'll set it going when I get home later - thanks.

Going fine so far, but a more convincing final report tomorrow.

> 
> Do we still need an explanation about why it's needed and why it 
> makes a difference?

I don't see the difficulty in understanding it.  Peter didn't comment
whether my further explanations convinced him or not.  Or perhaps you're
asking for some commit description text - I may not be the right person to
write it, since I didn't make myself understood very well, but here's a go.

lockdep: fix oops in processing workqueue

Under memory load, on x86_64, with lockdep enabled, the workqueue's
process_one_work() has been seen to oops in __lock_acquire(), barfing
on a 0xffffffff00000000 pointer in the lockdep_map's class_cache[].

Because it's permissible to free a work_struct from its callout function,
the map used is an onstack copy of the map given in the work_struct: and
that copy is made without any locking.

Surprisingly, gcc (4.5.1 in Hugh's case) uses "rep movsl" rather than
"rep movsq" for that structure copy: which might race with a workqueue
user's wait_on_work() doing lock_map_acquire() on the source of the
copy, putting a pointer into the class_cache[], but only in time for
the top half of that pointer to be copied to the destination map.

Boom when process_one_work() subsequently does lock_map_acquire()
on its onstack copy of the lockdep_map.

Fix this, and a similar instance in call_timer_fn(), with a
lockdep_copy_map() function which additionally NULLs the class_cache[].

Note: this oops was actually seen on 3.4-next, where flush_work() newly
does the racing lock_map_acquire(); but Tejun points out that 3.4 and
earlier are already vulnerable to the same through wait_on_work().

Hugh


* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-09 20:09               ` Hugh Dickins
@ 2012-05-10 17:52                 ` Hugh Dickins
  2012-05-14 21:27                   ` Tejun Heo
  0 siblings, 1 reply; 18+ messages in thread
From: Hugh Dickins @ 2012-05-10 17:52 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Peter Zijlstra, Tejun Heo, Ingo Molnar, Stephen Boyd, Yong Zhang,
	linux-kernel

On Wed, 9 May 2012, Hugh Dickins wrote:
> On Wed, 9 May 2012, Ingo Molnar wrote:
> > * Hugh Dickins <hughd@google.com> wrote:
> > 
> > > I'll set it going when I get home later - thanks.
> 
> Going fine so far, but a more convincing final report tomorrow.

Yes, as expected, Peter's patch (+ ,) is good: that load ran four
times longer than the longest it managed before without the patch.
Just needs Peter's signoff, I think.

Hugh


* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-10 17:52                 ` Hugh Dickins
@ 2012-05-14 21:27                   ` Tejun Heo
  2012-05-15 11:11                     ` Peter Zijlstra
  0 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2012-05-14 21:27 UTC (permalink / raw)
  To: Hugh Dickins
  Cc: Ingo Molnar, Peter Zijlstra, Ingo Molnar, Stephen Boyd,
	Yong Zhang, linux-kernel

On Thu, May 10, 2012 at 10:52:46AM -0700, Hugh Dickins wrote:
> On Wed, 9 May 2012, Hugh Dickins wrote:
> > On Wed, 9 May 2012, Ingo Molnar wrote:
> > > * Hugh Dickins <hughd@google.com> wrote:
> > > 
> > > > I'll set it going when I get home later - thanks.
> > 
> > Going fine so far, but a more convincing final report tomorrow.
> 
> Yes, as expected, Peter's patch (+ ,) is good: that load ran four
> times longer than the longest it managed before without the patch.
> Just needs Peter's signoff, I think.

Peter, can I take your patch with Hugh's description w/ your SOB?

Thanks.

-- 
tejun

* Re: linux-next oops in __lock_acquire for process_one_work
  2012-05-14 21:27                   ` Tejun Heo
@ 2012-05-15 11:11                     ` Peter Zijlstra
  2012-05-15 15:10                       ` [PATCH] lockdep: fix oops in processing workqueue Tejun Heo
  0 siblings, 1 reply; 18+ messages in thread
From: Peter Zijlstra @ 2012-05-15 11:11 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Hugh Dickins, Ingo Molnar, Ingo Molnar, Stephen Boyd, Yong Zhang,
	linux-kernel

On Mon, 2012-05-14 at 14:27 -0700, Tejun Heo wrote:
> On Thu, May 10, 2012 at 10:52:46AM -0700, Hugh Dickins wrote:
> > On Wed, 9 May 2012, Hugh Dickins wrote:
> > > On Wed, 9 May 2012, Ingo Molnar wrote:
> > > > * Hugh Dickins <hughd@google.com> wrote:
> > > > 
> > > > > I'll set it going when I get home later - thanks.
> > > 
> > > Going fine so far, but a more convincing final report tomorrow.
> > 
> > Yes, as expected, Peter's patch (+ ,) is good: that load ran four
> > times longer than the longest it managed before without the patch.
> > Just needs Peter's signoff, I think.
> 
> Peter, can I take your patch with Hugh's description w/ your SOB?

Yes, sorry, I meant to queue it up the lockdep tree but got distracted
by other stuff.

If you want it, feel free to take it.

* [PATCH] lockdep: fix oops in processing workqueue
  2012-05-15 11:11                     ` Peter Zijlstra
@ 2012-05-15 15:10                       ` Tejun Heo
  2012-05-15 15:29                         ` Dave Jones
  0 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2012-05-15 15:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Hugh Dickins, Ingo Molnar, Ingo Molnar, Stephen Boyd, Yong Zhang,
	linux-kernel

From 4d82a1debbffec129cc387aafa8f40b7bbab3297 Mon Sep 17 00:00:00 2001
From: Peter Zijlstra <peterz@infradead.org>
Date: Tue, 15 May 2012 08:06:19 -0700

Under memory load, on x86_64, with lockdep enabled, the workqueue's
process_one_work() has been seen to oops in __lock_acquire(), barfing
on a 0xffffffff00000000 pointer in the lockdep_map's class_cache[].

Because it's permissible to free a work_struct from its callout function,
the map used is an onstack copy of the map given in the work_struct: and
that copy is made without any locking.

Surprisingly, gcc (4.5.1 in Hugh's case) uses "rep movsl" rather than
"rep movsq" for that structure copy: which might race with a workqueue
user's wait_on_work() doing lock_map_acquire() on the source of the
copy, putting a pointer into the class_cache[], but only in time for
the top half of that pointer to be copied to the destination map.

Boom when process_one_work() subsequently does lock_map_acquire()
on its onstack copy of the lockdep_map.

Fix this, and a similar instance in call_timer_fn(), with a
lockdep_copy_map() function which additionally NULLs the class_cache[].

Note: this oops was actually seen on 3.4-next, where flush_work() newly
does the racing lock_map_acquire(); but Tejun points out that 3.4 and
earlier are already vulnerable to the same through wait_on_work().

* Patch originally from Peter.  Hugh modified it a bit and wrote the
  description.

Signed-off-by: Peter Zijlstra <peterz@infradead.org>
Reported-by: Hugh Dickins <hughd@google.com>
LKML-Reference: <alpine.LSU.2.00.1205070951170.1544@eggly.anvils>
Signed-off-by: Tejun Heo <tj@kernel.org>
---
Applied to wq/for-3.5 with the function declaration broken in the
traditional way.

Thanks.

 include/linux/lockdep.h |   18 ++++++++++++++++++
 kernel/timer.c          |    4 +++-
 kernel/workqueue.c      |    4 +++-
 3 files changed, 24 insertions(+), 2 deletions(-)

diff --git a/include/linux/lockdep.h b/include/linux/lockdep.h
index d36619e..00e4637 100644
--- a/include/linux/lockdep.h
+++ b/include/linux/lockdep.h
@@ -157,6 +157,24 @@ struct lockdep_map {
 #endif
 };
 
+static inline void lockdep_copy_map(struct lockdep_map *to,
+				    struct lockdep_map *from)
+{
+	int i;
+
+	*to = *from;
+	/*
+	 * Since the class cache can be modified concurrently we could observe
+	 * half pointers (64bit arch using 32bit copy insns). Therefore clear
+	 * the caches and take the performance hit.
+	 *
+	 * XXX it doesn't work well with lockdep_set_class_and_subclass(), since
+	 *     that relies on cache abuse.
+	 */
+	for (i = 0; i < NR_LOCKDEP_CACHING_CLASSES; i++)
+		to->class_cache[i] = NULL;
+}
+
 /*
  * Every lock has a list of other locks that were taken after it.
  * We only grow the list, never remove from it:
diff --git a/kernel/timer.c b/kernel/timer.c
index a297ffc..b123852 100644
--- a/kernel/timer.c
+++ b/kernel/timer.c
@@ -1102,7 +1102,9 @@ static void call_timer_fn(struct timer_list *timer, void (*fn)(unsigned long),
 	 * warnings as well as problems when looking into
 	 * timer->lockdep_map, make a copy and use that here.
 	 */
-	struct lockdep_map lockdep_map = timer->lockdep_map;
+	struct lockdep_map lockdep_map;
+
+	lockdep_copy_map(&lockdep_map, &timer->lockdep_map);
 #endif
 	/*
 	 * Couple the lock chain with the lock chain at
diff --git a/kernel/workqueue.c b/kernel/workqueue.c
index c36c86c..9a3128d 100644
--- a/kernel/workqueue.c
+++ b/kernel/workqueue.c
@@ -1818,7 +1818,9 @@ __acquires(&gcwq->lock)
 	 * lock freed" warnings as well as problems when looking into
 	 * work->lockdep_map, make a copy and use that here.
 	 */
-	struct lockdep_map lockdep_map = work->lockdep_map;
+	struct lockdep_map lockdep_map;
+
+	lockdep_copy_map(&lockdep_map, &work->lockdep_map);
 #endif
 	/*
 	 * A single work shouldn't be executed concurrently by
-- 
1.7.7.3


* Re: [PATCH] lockdep: fix oops in processing workqueue
  2012-05-15 15:10                       ` [PATCH] lockdep: fix oops in processing workqueue Tejun Heo
@ 2012-05-15 15:29                         ` Dave Jones
  2012-05-15 15:31                           ` Tejun Heo
  0 siblings, 1 reply; 18+ messages in thread
From: Dave Jones @ 2012-05-15 15:29 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Peter Zijlstra, Hugh Dickins, Ingo Molnar, Ingo Molnar,
	Stephen Boyd, Yong Zhang, linux-kernel

On Tue, May 15, 2012 at 08:10:48AM -0700, Tejun Heo wrote:
 > From 4d82a1debbffec129cc387aafa8f40b7bbab3297 Mon Sep 17 00:00:00 2001
 > From: Peter Zijlstra <peterz@infradead.org>
 > Date: Tue, 15 May 2012 08:06:19 -0700
 > 
 > Under memory load, on x86_64, with lockdep enabled, the workqueue's
 > process_one_work() has been seen to oops in __lock_acquire(), barfing
 > on a 0xffffffff00000000 pointer in the lockdep_map's class_cache[].

can you elaborate on what 'memory load' means here?
I'm curious if I can add something to my fuzzing tool to shake out bugs like this.

	Dave


* Re: [PATCH] lockdep: fix oops in processing workqueue
  2012-05-15 15:29                         ` Dave Jones
@ 2012-05-15 15:31                           ` Tejun Heo
  2012-05-15 20:36                             ` Hugh Dickins
  0 siblings, 1 reply; 18+ messages in thread
From: Tejun Heo @ 2012-05-15 15:31 UTC (permalink / raw)
  To: Dave Jones, Peter Zijlstra, Hugh Dickins, Ingo Molnar,
	Ingo Molnar, Stephen Boyd, Yong Zhang, linux-kernel

On Tue, May 15, 2012 at 11:29:52AM -0400, Dave Jones wrote:
> On Tue, May 15, 2012 at 08:10:48AM -0700, Tejun Heo wrote:
>  > From 4d82a1debbffec129cc387aafa8f40b7bbab3297 Mon Sep 17 00:00:00 2001
>  > From: Peter Zijlstra <peterz@infradead.org>
>  > Date: Tue, 15 May 2012 08:06:19 -0700
>  > 
>  > Under memory load, on x86_64, with lockdep enabled, the workqueue's
>  > process_one_work() has been seen to oops in __lock_acquire(), barfing
>  > on a 0xffffffff00000000 pointer in the lockdep_map's class_cache[].
> 
> can you elaborate what 'memory load' means here ?
> I'm curious if I can add something to my fuzzing tool to shake out bugs like this.

I think Hugh knows and can explain this much better than I do.  Hugh?

Thanks.

-- 
tejun

* Re: [PATCH] lockdep: fix oops in processing workqueue
  2012-05-15 15:31                           ` Tejun Heo
@ 2012-05-15 20:36                             ` Hugh Dickins
  0 siblings, 0 replies; 18+ messages in thread
From: Hugh Dickins @ 2012-05-15 20:36 UTC (permalink / raw)
  To: Dave Jones
  Cc: Tejun Heo, Peter Zijlstra, Hugh Dickins, Ingo Molnar,
	Ingo Molnar, Stephen Boyd, Yong Zhang, linux-kernel

On Tue, 15 May 2012, Tejun Heo wrote:
> On Tue, May 15, 2012 at 11:29:52AM -0400, Dave Jones wrote:
> > On Tue, May 15, 2012 at 08:10:48AM -0700, Tejun Heo wrote:
> >  > From 4d82a1debbffec129cc387aafa8f40b7bbab3297 Mon Sep 17 00:00:00 2001
> >  > From: Peter Zijlstra <peterz@infradead.org>
> >  > Date: Tue, 15 May 2012 08:06:19 -0700
> >  > 
> >  > Under memory load, on x86_64, with lockdep enabled, the workqueue's
> >  > process_one_work() has been seen to oops in __lock_acquire(), barfing
> >  > on a 0xffffffff00000000 pointer in the lockdep_map's class_cache[].
> > 
> > can you elaborate what 'memory load' means here ?
> > I'm curious if I can add something to my fuzzing tool to shake out bugs like this.
> 
> I think Hugh knows and can explain this much better than I do.  Hugh?

Quoting from myself, quoting from myself, on an earlier occasion:

"It's the tmpfs swapping test that I've been running, with variations,
for years.  System booted with mem=700M and 1.5G swap, two repetitious
make -j20 kernel builds (of a 2.6.24 kernel: I stuck with that because
the balance of built to unbuilt source grows smaller with later kernels),
one directly in a tmpfs, the other in a 1k-block ext2 (that I drive with
ext4's CONFIG_EXT4_USE_FOR_EXT23) on /dev/loop0 on a 450MB tmpfs file."

Most of those details will be irrelevant in this case, but it's been a
useful test down the years, catching lots of bugs and races.  On this
occasion I was running a variation which further puts each of the builds
in its own 300M mem cgroup, with a concurrent script which cycles around
making a new 300M mem cgroup, moving all the tasks from one old into the
new (with memory.move_charge_at_immigrate set to 3), then rmdir the old.

It was probably the moving of memcg charges (or the rmdir'ing of memcg,
which also involves moving memcg charges) which was making so many calls
to lru_add_drain_all() to show the problem - lru_add_drain_all() has to
schedule work on each cpu and then flush_work() on each.

(But it only came up as a problem in linux-next, which has added some
lockmap accesses to flush_work() - Tejun spotted that upstream could
already be vulnerable via other routes, but this was all I ever hit.)

Hugh

Thread overview: 18+ messages
-- links below jump to the message on this page --
2012-05-07 17:19 linux-next oops in __lock_acquire for process_one_work Hugh Dickins
2012-05-07 17:57 ` Tejun Heo
2012-05-08 13:03   ` Peter Zijlstra
2012-05-08 16:58     ` Tejun Heo
2012-05-08 17:02       ` Peter Zijlstra
2012-05-08 18:11       ` Hugh Dickins
2012-05-08 22:31         ` Peter Zijlstra
2012-05-08 22:58           ` Hugh Dickins
2012-05-09  9:25             ` Ingo Molnar
2012-05-09 20:09               ` Hugh Dickins
2012-05-10 17:52                 ` Hugh Dickins
2012-05-14 21:27                   ` Tejun Heo
2012-05-15 11:11                     ` Peter Zijlstra
2012-05-15 15:10                       ` [PATCH] lockdep: fix oops in processing workqueue Tejun Heo
2012-05-15 15:29                         ` Dave Jones
2012-05-15 15:31                           ` Tejun Heo
2012-05-15 20:36                             ` Hugh Dickins
2012-05-08 18:05     ` linux-next oops in __lock_acquire for process_one_work Hugh Dickins
