* [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
@ 2009-07-07 15:25 Joao Correia
  2009-07-07 15:33 ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Joao Correia @ 2009-07-07 15:25 UTC (permalink / raw)
  To: LKML; +Cc: Amerigo Wang, a.p.zijlstra

(Applies to current Linus tree, as of 2.6.31-rc2)

As it stands now, the limit is too low and is being hit by false
positives. Increasing its value will allow for more room to work with.

This was suggested by Ingo Molnar
(http://article.gmane.org/gmane.linux.kernel/852005) but never
submitted as a patch, to the best of my knowledge.


Signed-off-by: Joao Correia <joaomiguelcorreia@gmail.com>

---
 kernel/lockdep_internals.h | 2 +-
 1 files changed, 1 insertions(+), 1 deletions(-)

diff --git a/kernel/lockdep_internals.h b/kernel/lockdep_internals.h
index 699a2ac..93af1f1 100644
--- a/kernel/lockdep_internals.h
+++ b/kernel/lockdep_internals.h
@@ -65,7 +65,7 @@ enum {
  * Stack-trace: tightly packed array of stack backtrace
  * addresses. Protected by the hash_lock.
  */
-#define MAX_STACK_TRACE_ENTRIES	262144UL
+#define MAX_STACK_TRACE_ENTRIES	1048576UL

 extern struct list_head all_lock_classes;
 extern struct lock_chain lock_chains[];
---
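
For scale: per the comment above, the entries behind this limit live in a
tightly packed static array of unsigned long in kernel/lockdep.c, so the
bump quadruples a static allocation. A quick illustrative sketch of the
cost (userspace code, not part of the patch):

	#include <stdio.h>

	/* Static footprint of lockdep's stack-trace entry array, before
	 * and after the proposed bump, assuming the flat unsigned long
	 * array layout of kernel/lockdep.c. */
	int main(void)
	{
		unsigned long old_limit = 262144UL;
		unsigned long new_limit = 1048576UL;

		printf("old: %lu KiB\n", old_limit * sizeof(unsigned long) / 1024);
		printf("new: %lu KiB\n", new_limit * sizeof(unsigned long) / 1024);
		return 0;
	}

That is 2 MiB versus 8 MiB on a 64-bit build (half that on 32-bit), which
is why simply raising the limit is not free.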


* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-07 15:25 [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES Joao Correia
@ 2009-07-07 15:33 ` Peter Zijlstra
       [not found]   ` <a5d9929e0907070838q7ed3306du3bb7880e47d7207b@mail.gmail.com>
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2009-07-07 15:33 UTC (permalink / raw)
  To: Joao Correia; +Cc: LKML, Amerigo Wang

On Tue, 2009-07-07 at 16:25 +0100, Joao Correia wrote:
> (Applies to current Linus tree, as of 2.6.31-rc2)
> 
> As it stands now, the limit is too low and is being hit by false
> positives. Increasing its value will allow for more room to work with.
> 
> This was suggested by Ingo Molnar
> (http://article.gmane.org/gmane.linux.kernel/852005) but never
> submitted as a patch, to the best of my knowledge.

Right, we found a bug in the dma-debug code that generated tons of
classes where only 1 was needed, which in turn generated tons of chains
and stack entries.

But that fix got merged; are you still seeing more of this?



* Fwd: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
       [not found]   ` <a5d9929e0907070838q7ed3306du3bb7880e47d7207b@mail.gmail.com>
@ 2009-07-07 15:38     ` Joao Correia
       [not found]     ` <1246981444.9777.11.camel@twins>
  1 sibling, 0 replies; 21+ messages in thread
From: Joao Correia @ 2009-07-07 15:38 UTC (permalink / raw)
  To: LKML

(clumsy fingers part 2, forwarding to the list)


---------- Forwarded message ----------
From: Joao Correia <joaomiguelcorreia@gmail.com>
Date: Tue, Jul 7, 2009 at 4:38 PM
Subject: Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
To: Peter Zijlstra <a.p.zijlstra@chello.nl>


Yes. Anything 2.6.31 forward triggers this immediately during the init
process, at random places.

Joao Correia

On Tue, Jul 7, 2009 at 4:33 PM, Peter Zijlstra<a.p.zijlstra@chello.nl> wrote:
> On Tue, 2009-07-07 at 16:25 +0100, Joao Correia wrote:
>> (Applies to current Linus tree, as of 2.6.31-rc2)
>>
>> As it stands now, the limit is too low and is being hit by false
>> positives. Increasing its value will allow for more room to work with.
>>
>> This was suggested by Ingo Molnar
>> (http://article.gmane.org/gmane.linux.kernel/852005) but never
>> submitted as a patch, to the best of my knowledge.
>
> Right, we found a bug in the dma-debug code that generated tons of
> classes where only 1 was needed, which in turn generated tons of chains
> and stack entries.
>
> But that fix got merged; are you still seeing more of this?
>
>


* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
       [not found]     ` <1246981444.9777.11.camel@twins>
@ 2009-07-07 15:50       ` Joao Correia
  2009-07-07 15:55         ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Joao Correia @ 2009-07-07 15:50 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: LKML, Américo Wang

On Tue, Jul 7, 2009 at 4:44 PM, Peter Zijlstra<a.p.zijlstra@chello.nl> wrote:
> On Tue, 2009-07-07 at 16:38 +0100, Joao Correia wrote:
>> On Tue, Jul 7, 2009 at 4:33 PM, Peter Zijlstra<a.p.zijlstra@chello.nl> wrote:
>> > On Tue, 2009-07-07 at 16:25 +0100, Joao Correia wrote:
>> >> (Applies to current Linus tree, as of 2.6.31-rc2)
>> >>
>> >> As it stands now, the limit is too low and is being hit by false
>> >> positives. Increasing its value will allow for more room to work with.
>> >>
>> >> This was suggested by Ingo Molnar
>> >> (http://article.gmane.org/gmane.linux.kernel/852005) but never
>> >> submitted as a patch, to the best of my knowledge.
>> >
>> > Right, we found a bug in the dma-debug code that generated tons of
>> > classes where only 1 was needed, which in turn generated tons of chains
>> > and stack entries.
>> >
>> > But that fix got merged; are you still seeing more of this?
>
>> Yes. Anything 2.6.31 forward triggers this immediately during the init
>> process, at random places.
>
> Not on my machines it doesn't.. so I suspect it's something weird in
> your .config or maybe due to some hardware you have that I don't that
> triggers different drivers or somesuch.
>
>
>

I am not the only one reporting this, and it happens, for example,
with a stock .config from a Fedora 11 install.

It may, of course, be a funny driver interaction yes, but other than
stripping the box piece by piece, how would one go about debugging
this otherwise?

Joao Correia


* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-07 15:50       ` Joao Correia
@ 2009-07-07 15:55         ` Peter Zijlstra
  2009-07-07 15:59           ` Joao Correia
  2009-07-08 17:22           ` Dave Jones
  0 siblings, 2 replies; 21+ messages in thread
From: Peter Zijlstra @ 2009-07-07 15:55 UTC (permalink / raw)
  To: Joao Correia; +Cc: LKML, Américo Wang

On Tue, 2009-07-07 at 16:50 +0100, Joao Correia wrote:

> >> Yes. Anything 2.6.31 forward triggers this immediately during the init
> >> process, at random places.
> >
> > Not on my machines it doesn't.. so I suspect it's something weird in
> > your .config or maybe due to some hardware you have that I don't that
> > triggers different drivers or somesuch.
> 
> I am not the only one reporting this, and it happens, for example,
> with a stock .config from a Fedora 11 install.
> 
> It may, of course, be a funny driver interaction yes, but other than
> stripping the box piece by piece, how would one go about debugging
> this otherwise?

One thing to do is stare (or share) at the output
of /proc/lockdep_chains and see if there's some particularly large
chains in there, or many of the same name or something.
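
One quick, illustrative way to do both at once from a shell (a rough
sketch; the exact layout of that file varies by kernel version):

	sort /proc/lockdep_chains | uniq -c | sort -rn | head

Repeated lines get counted, so lock-class names that occur in very many
chains float to the top.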

/proc/lockdep_stats might also be interesting, mine reads like:

[root@opteron ~]# cat /proc/lockdep_stats
 lock-classes:                          641 [max: 8191]
 direct dependencies:                  3794 [max: 16384]
 indirect dependencies:                7557
 all direct dependencies:             73254
 dependency chains:                    3716 [max: 32768]
 dependency chain hlocks:             10167 [max: 163840]
 in-hardirq chains:                      21
 in-softirq chains:                     353
 in-process chains:                    3342
 stack-trace entries:                 91035 [max: 262144]
 combined max dependencies:        26035284
 hardirq-safe locks:                     28
 hardirq-unsafe locks:                  460
 softirq-safe locks:                    114
 softirq-unsafe locks:                  373
 irq-safe locks:                        123
 irq-unsafe locks:                      460
 hardirq-read-safe locks:                 0
 hardirq-read-unsafe locks:              45
 softirq-read-safe locks:                 8
 softirq-read-unsafe locks:              39
 irq-read-safe locks:                     8
 irq-read-unsafe locks:                  45
 uncategorized locks:                   106
 unused locks:                            1
 max locking depth:                      14
 max recursion depth:                    10
 debug_locks:                             1




* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-07 15:55         ` Peter Zijlstra
@ 2009-07-07 15:59           ` Joao Correia
  2009-07-08 17:22           ` Dave Jones
  1 sibling, 0 replies; 21+ messages in thread
From: Joao Correia @ 2009-07-07 15:59 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: LKML, Américo Wang

On Tue, Jul 7, 2009 at 4:55 PM, Peter Zijlstra<a.p.zijlstra@chello.nl> wrote:
> On Tue, 2009-07-07 at 16:50 +0100, Joao Correia wrote:
>
>> >> Yes. Anything 2.6.31 forward triggers this immediately during the init
>> >> process, at random places.
>> >
>> > Not on my machines it doesn't.. so I suspect it's something weird in
>> > your .config or maybe due to some hardware you have that I don't that
>> > triggers different drivers or somesuch.
>>
>> I am not the only one reporting this, and it happens, for example,
>> with a stock .config from a Fedora 11 install.
>>
>> It may, of course, be a funny driver interaction yes, but other than
>> stripping the box piece by piece, how would one go about debugging
>> this otherwise?
>
> One thing to do is stare (or share) at the output
> of /proc/lockdep_chains and see if there's some particularly large
> chains in there, or many of the same name or something.
>
> /proc/lockdep_stats might also be interesting, mine reads like:
>
> [root@opteron ~]# cat /proc/lockdep_stats
>  lock-classes:                          641 [max: 8191]
>  direct dependencies:                  3794 [max: 16384]
>  indirect dependencies:                7557
>  all direct dependencies:             73254
>  dependency chains:                    3716 [max: 32768]
>  dependency chain hlocks:             10167 [max: 163840]
>  in-hardirq chains:                      21
>  in-softirq chains:                     353
>  in-process chains:                    3342
>  stack-trace entries:                 91035 [max: 262144]
>  combined max dependencies:        26035284
>  hardirq-safe locks:                     28
>  hardirq-unsafe locks:                  460
>  softirq-safe locks:                    114
>  softirq-unsafe locks:                  373
>  irq-safe locks:                        123
>  irq-unsafe locks:                      460
>  hardirq-read-safe locks:                 0
>  hardirq-read-unsafe locks:              45
>  softirq-read-safe locks:                 8
>  softirq-read-unsafe locks:              39
>  irq-read-safe locks:                     8
>  irq-read-unsafe locks:                  45
>  uncategorized locks:                   106
>  unused locks:                            1
>  max locking depth:                      14
>  max recursion depth:                    10
>  debug_locks:                             1
>
>
>


I definitely have much higher numbers than those:

[root@hightech git]# cat /proc/lockdep_stats
 lock-classes:                         1793 [max: 8191]
 direct dependencies:                 12032 [max: 16384]
 indirect dependencies:              126642
 all direct dependencies:           1310382
 dependency chains:                   28387 [max: 65536]
 dependency chain hlocks:            100117 [max: 327680]
 in-hardirq chains:                    3029
 in-softirq chains:                    7585
 in-process chains:                   17773
 stack-trace entries:                417613 [max: 1048576]
 combined max dependencies:       523805800
 hardirq-safe locks:                   1074
 hardirq-unsafe locks:                  575
 softirq-safe locks:                   1171
 softirq-unsafe locks:                  445
 irq-safe locks:                       1184
 irq-unsafe locks:                      575
 hardirq-read-safe locks:                 0
 hardirq-read-unsafe locks:              68
 softirq-read-safe locks:                17
 softirq-read-unsafe locks:              55
 irq-read-safe locks:                    17
 irq-read-unsafe locks:                  68
 uncategorized locks:                   103
 unused locks:                            0
 max locking depth:                      16
 max recursion depth:                    11
 debug_locks:                             1


* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-07 15:55         ` Peter Zijlstra
  2009-07-07 15:59           ` Joao Correia
@ 2009-07-08 17:22           ` Dave Jones
  2009-07-08 18:36             ` Peter Zijlstra
  1 sibling, 1 reply; 21+ messages in thread
From: Dave Jones @ 2009-07-08 17:22 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: Joao Correia, LKML, Américo Wang

On Tue, Jul 07, 2009 at 05:55:01PM +0200, Peter Zijlstra wrote:
 > On Tue, 2009-07-07 at 16:50 +0100, Joao Correia wrote:
 > 
 > > >> Yes. Anything 2.6.31 forward triggers this immediately during the init
 > > >> process, at random places.
 > > >
 > > > Not on my machines it doesn't.. so I suspect it's something weird in
 > > > your .config or maybe due to some hardware you have that I don't that
 > > > triggers different drivers or somesuch.
 > > 
 > > I am not the only one reporting this, and it happens, for example,
 > > with a stock .config from a Fedora 11 install.
 > > 
 > > It may, of course, be a funny driver interaction yes, but other than
 > > stripping the box piece by piece, how would one go about debugging
 > > this otherwise?
 > 
 > One thing to do is stare (or share) at the output
 > of /proc/lockdep_chains and see if there's some particularly large
 > chains in there, or many of the same name or something.

I don't see any long chains, just lots of them.
29065 lines on my box that's hitting MAX_STACK_TRACE_ENTRIES 

 > /proc/lockdep_stats might also be interesting, mine reads like:
 
 lock-classes:                         1518 [max: 8191]
 direct dependencies:                  7142 [max: 16384]
 indirect dependencies:               97905
 all direct dependencies:            753706
 dependency chains:                    6201 [max: 32768]
 dependency chain hlocks:             16665 [max: 163840]
 in-hardirq chains:                    1380
 in-softirq chains:                     589
 in-process chains:                    4232
 stack-trace entries:                262144 [max: 262144]
 combined max dependencies:      3449006070
 hardirq-safe locks:                   1008
 hardirq-unsafe locks:                  364
 softirq-safe locks:                    838
 softirq-unsafe locks:                  322
 irq-safe locks:                       1043
 irq-unsafe locks:                      364
 hardirq-read-safe locks:                 0
 hardirq-read-unsafe locks:              48
 softirq-read-safe locks:                 0
 softirq-read-unsafe locks:              48
 irq-read-safe locks:                     0
 irq-read-unsafe locks:                  48
 uncategorized locks:                   104
 unused locks:                            0
 max locking depth:                       9
 max recursion depth:                    10
 debug_locks:                             0


	Dave



* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-08 17:22           ` Dave Jones
@ 2009-07-08 18:36             ` Peter Zijlstra
  2009-07-08 18:44               ` Dave Jones
                                 ` (3 more replies)
  0 siblings, 4 replies; 21+ messages in thread
From: Peter Zijlstra @ 2009-07-08 18:36 UTC (permalink / raw)
  To: Dave Jones
  Cc: Joao Correia, LKML, Américo Wang, Frederic Weisbecker,
	Arjan van de Ven, Catalin Marinas

On Wed, 2009-07-08 at 13:22 -0400, Dave Jones wrote:
> On Tue, Jul 07, 2009 at 05:55:01PM +0200, Peter Zijlstra wrote:
>  > On Tue, 2009-07-07 at 16:50 +0100, Joao Correia wrote:
>  > 
>  > > >> Yes. Anything 2.6.31 forward triggers this immediately during the init
>  > > >> process, at random places.
>  > > >
>  > > > Not on my machines it doesn't.. so I suspect it's something weird in
>  > > > your .config or maybe due to some hardware you have that I don't that
>  > > > triggers different drivers or somesuch.
>  > > 
>  > > I am not the only one reporting this, and it happens, for example,
>  > > with a stock .config from a Fedora 11 install.
>  > > 
>  > > It may, of course, be a funny driver interaction yes, but other than
>  > > stripping the box piece by piece, how would one go about debugging
>  > > this otherwise?
>  > 
>  > One thing to do is stare (or share) at the output
>  > of /proc/lockdep_chains and see if there's some particularly large
>  > chains in there, or many of the same name or something.
> 
> I don't see any long chains, just lots of them.
> 29065 lines on my box that's hitting MAX_STACK_TRACE_ENTRIES 
> 
>  > /proc/lockdep_stats might also be interesting, mine reads like:
>  
>  lock-classes:                         1518 [max: 8191]
>  direct dependencies:                  7142 [max: 16384]

Since we have 7 states per class, and can take one trace per state, and
also take one trace per dependency, this would yield a max of:

  7*1518+7142 = 17768 stack traces

With the current limit of 262144 stack-trace entries, that would leave
us with an avg depth of:

  262144/17768 = 14.75

Now since we would not use all states for each class, we'd likely have a
little more, but that would still suggest we have rather deep stack
traces on avg.

Looking at a lockdep dump hch gave me I can see that that is certainly
possible, I see tons of very deep callchains.

/me wonders if we're getting significantly deeper..

OK I guess we can raise this one, does doubling work? That would get us
around 29 entries per trace..
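
The same arithmetic as a standalone sketch (illustrative userspace code,
taking the stats quoted above as inputs):

	#include <stdio.h>

	/* Back-of-the-envelope: 7 per-class state traces plus one trace
	 * per direct dependency bounds the number of stored traces; the
	 * entry budget divided by that bounds the average trace depth. */
	int main(void)
	{
		unsigned long states = 7;	/* states per class */
		unsigned long classes = 1518;	/* lock-classes, from the stats */
		unsigned long deps = 7142;	/* direct dependencies, from the stats */
		unsigned long budget = 262144;	/* MAX_STACK_TRACE_ENTRIES */
		unsigned long traces = states * classes + deps;

		printf("max traces: %lu\n", traces);			/* 17768 */
		printf("avg depth:  %.2f\n", (double)budget / traces);	/* 14.75 */
		printf("doubled:    %.2f\n", 2.0 * budget / traces);	/* 29.51 */
		return 0;
	}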

Also, Dave, do these distro init scripts still load every module on the
planet or are we more sensible these days?

module load/unload cycles are really bad for lockdep resources.

--

As a side note, I see that each and every trace ends with a -1 entry:

...
[ 1194.412158]    [<c01f7990>] do_mount+0x3c0/0x7c0
[ 1194.412158]    [<c01f7e14>] sys_mount+0x84/0xb0
[ 1194.412158]    [<c01221b1>] syscall_call+0x7/0xb
[ 1194.412158]    [<ffffffff>] 0xffffffff

Which seems to come from:

void save_stack_trace(struct stack_trace *trace)
{
        dump_trace(current, NULL, NULL, 0, &save_stack_ops, trace);
        if (trace->nr_entries < trace->max_entries)
                trace->entries[trace->nr_entries++] = ULONG_MAX;
}
EXPORT_SYMBOL_GPL(save_stack_trace);

commit 006e84ee3a54e393ec6bef2a9bc891dc5bde2843 seems involved,..

Anybody got a clue?



* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-08 18:36             ` Peter Zijlstra
@ 2009-07-08 18:44               ` Dave Jones
  2009-07-08 19:48               ` Joao Correia
                                 ` (2 subsequent siblings)
  3 siblings, 0 replies; 21+ messages in thread
From: Dave Jones @ 2009-07-08 18:44 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Joao Correia, LKML, Américo Wang, Frederic Weisbecker,
	Arjan van de Ven, Catalin Marinas

On Wed, Jul 08, 2009 at 08:36:04PM +0200, Peter Zijlstra wrote:
 > Looking at a lockdep dump hch gave me I can see that that is certainly
 > possible, I see tons of very deep callchains.
 > 
 > /me wonders if we're getting significantly deeper..
 > 
 > OK I guess we can raise this one, does doubling work? That would get us
 > around 29 entries per trace..
 > 
 > Also, Dave, do these distro init scripts still load every module on the
 > planet or are we more sensible these days?
 > module load/unload cycles are really bad for lockdep resources.

34 modules get loaded on the system I saw the trace on. 
(That's from lsmod after fully booting up). I'm not aware of any
module unloading that happens during bootup.
 
	Dave



* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-08 18:36             ` Peter Zijlstra
  2009-07-08 18:44               ` Dave Jones
@ 2009-07-08 19:48               ` Joao Correia
  2009-07-08 19:56                 ` Peter Zijlstra
  2009-07-09  4:39               ` Dave Jones
  2009-07-09  9:06               ` Catalin Marinas
  3 siblings, 1 reply; 21+ messages in thread
From: Joao Correia @ 2009-07-08 19:48 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: LKML, Américo Wang

On Wed, Jul 8, 2009 at 7:36 PM, Peter Zijlstra<a.p.zijlstra@chello.nl> wrote:
> On Wed, 2009-07-08 at 13:22 -0400, Dave Jones wrote:
>> On Tue, Jul 07, 2009 at 05:55:01PM +0200, Peter Zijlstra wrote:
>>  > On Tue, 2009-07-07 at 16:50 +0100, Joao Correia wrote:
>>  >
>>  > > >> Yes. Anything 2.6.31 forward triggers this immediately during the init
>>  > > >> process, at random places.
>>  > > >
>>  > > > Not on my machines it doesn't.. so I suspect it's something weird in
>>  > > > your .config or maybe due to some hardware you have that I don't that
>>  > > > triggers different drivers or somesuch.
>>  > >
>>  > > I am not the only one reporting this, and it happens, for example,
>>  > > with a stock .config from a Fedora 11 install.
>>  > >
>>  > > It may, of course, be a funny driver interaction yes, but other than
>>  > > stripping the box piece by piece, how would one go about debugging
>>  > > this otherwise?
>>  >
>>  > One thing to do is stare (or share) at the output
>>  > of /proc/lockdep_chains and see if there's some particularly large
>>  > chains in there, or many of the same name or something.
>>
>> I don't see any long chains, just lots of them.
>> 29065 lines on my box that's hitting MAX_STACK_TRACE_ENTRIES
>>
>>  > /proc/lockdep_stats might also be interesting, mine reads like:
>>
>>  lock-classes:                         1518 [max: 8191]
>>  direct dependencies:                  7142 [max: 16384]
>
> Since we have 7 states per class, and can take one trace per state, and
> also take one trace per dependency, this would yield a max of:
>
>  7*1518+7142 = 17768 stack traces
>
> With the current limit of 262144 stack-trace entries, that would leave
> us with an avg depth of:
>
>  262144/17768 = 14.75
>
> Now since we would not use all states for each class, we'd likely have a
> little more, but that would still suggest we have rather deep stack
> traces on avg.
>
> Looking at a lockdep dump hch gave me I can see that that is certainly
> possible, I see tons of very deep callchains.
>
> /me wonders if we're getting significantly deeper..
>
> OK I guess we can raise this one, does doubling work? That would get us
> around 29 entries per trace..
>
> Also, Dave, do these distro init scripts still load every module on the
> planet or are we more sensible these days?
>
> module load/unload cycles are really bad for lockdep resources.
>
> --
>
> As a side note, I see that each and every trace ends with a -1 entry:
>
> ...
> [ 1194.412158]    [<c01f7990>] do_mount+0x3c0/0x7c0
> [ 1194.412158]    [<c01f7e14>] sys_mount+0x84/0xb0
> [ 1194.412158]    [<c01221b1>] syscall_call+0x7/0xb
> [ 1194.412158]    [<ffffffff>] 0xffffffff
>
> Which seems to come from:
>
> void save_stack_trace(struct stack_trace *trace)
> {
>        dump_trace(current, NULL, NULL, 0, &save_stack_ops, trace);
>        if (trace->nr_entries < trace->max_entries)
>                trace->entries[trace->nr_entries++] = ULONG_MAX;
> }
> EXPORT_SYMBOL_GPL(save_stack_trace);
>
> commit 006e84ee3a54e393ec6bef2a9bc891dc5bde2843 seems involved,..
>
> Anybody got a clue?
>
>

I'm in no way pretending to understand all the details of this system,
but I do know there has been, as of late, an effort to make the init
process do more things at once, i.e., load more modules in parallel to
speed up the process. Can't that be at least partially responsible for
this? Even if it's just making some other problem more obvious?

Joao Correia


* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-08 19:48               ` Joao Correia
@ 2009-07-08 19:56                 ` Peter Zijlstra
  0 siblings, 0 replies; 21+ messages in thread
From: Peter Zijlstra @ 2009-07-08 19:56 UTC (permalink / raw)
  To: Joao Correia; +Cc: LKML, Américo Wang

On Wed, 2009-07-08 at 20:48 +0100, Joao Correia wrote:
> 
> 
> I'm in no way pretending to understand all the details of this system,
> but I do know there has been, as of late, an effort to make the init
> process do more things at once, i.e., load more modules in parallel to
> speed up the process. Can't that be at least partially responsible for
> this? Even if it's just making some other problem more obvious?

Nah, lockdep serializes all that under a single lock ;-)

module unloading in particular wastes resources from a lockdep pov, so
as long as that doesn't happen, we're good.



* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-08 18:36             ` Peter Zijlstra
  2009-07-08 18:44               ` Dave Jones
  2009-07-08 19:48               ` Joao Correia
@ 2009-07-09  4:39               ` Dave Jones
  2009-07-09  8:02                 ` Peter Zijlstra
  2009-07-09  9:06               ` Catalin Marinas
  3 siblings, 1 reply; 21+ messages in thread
From: Dave Jones @ 2009-07-09  4:39 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Joao Correia, LKML, Américo Wang, Frederic Weisbecker,
	Arjan van de Ven, Catalin Marinas

On Wed, Jul 08, 2009 at 08:36:04PM +0200, Peter Zijlstra wrote:
 > Looking at a lockdep dump hch gave me I can see that that is certainly
 > possible, I see tons of very deep callchains.
 > 
 > /me wonders if we're getting significantly deeper..

Looking at /proc/lockdep, I'm curious..
Take a look at http://davej.fedorapeople.org/lockdep
scroll down to c12c0924

What's up with all those old_style_spin_init's ?

	Dave



* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-09  4:39               ` Dave Jones
@ 2009-07-09  8:02                 ` Peter Zijlstra
  2009-07-09 16:10                   ` Dave Jones
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2009-07-09  8:02 UTC (permalink / raw)
  To: Dave Jones
  Cc: Joao Correia, LKML, Américo Wang, Frederic Weisbecker,
	Arjan van de Ven, Catalin Marinas

On Thu, 2009-07-09 at 00:39 -0400, Dave Jones wrote:
> On Wed, Jul 08, 2009 at 08:36:04PM +0200, Peter Zijlstra wrote:
>  > Looking at a lockdep dump hch gave me I can see that that is certainly
>  > possible, I see tons of very deep callchains.
>  > 
>  > /me wonders if we're getting significantly deeper..
> 
> Looking at /proc/lockdep, I'm curious..
> Take a look at http://davej.fedorapeople.org/lockdep
> scroll down to c12c0924
> 
> What's up with all those old_style_spin_init's ?

What kernel are you running? Does your lib/dma_debug.c:dma_debug_init()
have spin_lock_init() in that HASH_SIZE loop?

If so, then there's someone else doing something silly; if not, you seriously
need to upgrade your kernel, because you're running something ancient,
like .30 :-)



* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-08 18:36             ` Peter Zijlstra
                                 ` (2 preceding siblings ...)
  2009-07-09  4:39               ` Dave Jones
@ 2009-07-09  9:06               ` Catalin Marinas
  2009-07-09  9:09                 ` Peter Zijlstra
  3 siblings, 1 reply; 21+ messages in thread
From: Catalin Marinas @ 2009-07-09  9:06 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Dave Jones, Joao Correia, LKML, Américo Wang,
	Frederic Weisbecker, Arjan van de Ven

On Wed, 2009-07-08 at 20:36 +0200, Peter Zijlstra wrote:
> As a side note, I see that each and every trace ends with a -1 entry:
> 
> ...
> [ 1194.412158]    [<c01f7990>] do_mount+0x3c0/0x7c0
> [ 1194.412158]    [<c01f7e14>] sys_mount+0x84/0xb0
> [ 1194.412158]    [<c01221b1>] syscall_call+0x7/0xb
> [ 1194.412158]    [<ffffffff>] 0xffffffff
> 
> Which seems to come from:
> 
> void save_stack_trace(struct stack_trace *trace)
> {
>         dump_trace(current, NULL, NULL, 0, &save_stack_ops, trace);
>         if (trace->nr_entries < trace->max_entries)
>                 trace->entries[trace->nr_entries++] = ULONG_MAX;
> }
> EXPORT_SYMBOL_GPL(save_stack_trace);
> 
> commit 006e84ee3a54e393ec6bef2a9bc891dc5bde2843 seems involved,..

The reason for this is that if there are no more traces to show, it
inserts -1. In this case, it cannot trace beyond the system call. If the
stack trace is truncated because of the maximum number of trace entries
it can show, you won't get a -1.

Before the commit above, it was always inserting -1 even if the trace
was longer than the maximum number of entries.

-- 
Catalin



* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-09  9:06               ` Catalin Marinas
@ 2009-07-09  9:09                 ` Peter Zijlstra
  2009-07-20 13:31                   ` [PATCH] lockdep: fixup stacktrace wastage Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2009-07-09  9:09 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Dave Jones, Joao Correia, LKML, Américo Wang,
	Frederic Weisbecker, Arjan van de Ven

On Thu, 2009-07-09 at 10:06 +0100, Catalin Marinas wrote:
> On Wed, 2009-07-08 at 20:36 +0200, Peter Zijlstra wrote:
> > As a side note, I see that each and every trace ends with a -1 entry:
> > 
> > ...
> > [ 1194.412158]    [<c01f7990>] do_mount+0x3c0/0x7c0
> > [ 1194.412158]    [<c01f7e14>] sys_mount+0x84/0xb0
> > [ 1194.412158]    [<c01221b1>] syscall_call+0x7/0xb
> > [ 1194.412158]    [<ffffffff>] 0xffffffff
> > 
> > Which seems to come from:
> > 
> > void save_stack_trace(struct stack_trace *trace)
> > {
> >         dump_trace(current, NULL, NULL, 0, &save_stack_ops, trace);
> >         if (trace->nr_entries < trace->max_entries)
> >                 trace->entries[trace->nr_entries++] = ULONG_MAX;
> > }
> > EXPORT_SYMBOL_GPL(save_stack_trace);
> > 
> > commit 006e84ee3a54e393ec6bef2a9bc891dc5bde2843 seems involved,..
> 
> The reason for this is that if there are no more traces to show, it
> inserts -1. In this case, it cannot trace beyond the system call. If the
> stack trace is truncated because of the maximum number of trace entries
> it can show, you won't get a -1.
> 
> Before the commit above, it was always inserting -1 even if the trace
> was longer than the maximum number of entries.

Seems daft to me, I'll fix up lockdep to truncate that last entry,
having a gazillion copies of -1 in the trace entries doesn't make sense.



* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-09  8:02                 ` Peter Zijlstra
@ 2009-07-09 16:10                   ` Dave Jones
  2009-07-09 17:07                     ` Peter Zijlstra
  0 siblings, 1 reply; 21+ messages in thread
From: Dave Jones @ 2009-07-09 16:10 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Joao Correia, LKML, Américo Wang, Frederic Weisbecker,
	Arjan van de Ven, Catalin Marinas

On Thu, Jul 09, 2009 at 10:02:36AM +0200, Peter Zijlstra wrote:
 > On Thu, 2009-07-09 at 00:39 -0400, Dave Jones wrote:
 > > On Wed, Jul 08, 2009 at 08:36:04PM +0200, Peter Zijlstra wrote:
 > >  > Looking at a lockdep dump hch gave me I can see that that is certainly
 > >  > possible, I see tons of very deep callchains.
 > >  > 
 > >  > /me wonders if we're getting significantly deeper..
 > > 
 > > Looking at /proc/lockdep, I'm curious..
 > > Take a look at http://davej.fedorapeople.org/lockdep
 > > scroll down to c12c0924
 > > 
 > > What's up with all those old_style_spin_init's ?
 > 
 > What kernel are you running?

.31rc2

 > Does your lib/dma_debug.c:dma_debug_init()
 > have spin_lock_init() in that HASH_SIZE loop?

it's doing it by hand..

 717         for (i = 0; i < HASH_SIZE; ++i) {
 718                 INIT_LIST_HEAD(&dma_entry_hash[i].list);
 719                 dma_entry_hash[i].lock = SPIN_LOCK_UNLOCKED;
 720         }


	Dave
 


* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-09 16:10                   ` Dave Jones
@ 2009-07-09 17:07                     ` Peter Zijlstra
  2009-07-10 15:50                       ` Joao Correia
  0 siblings, 1 reply; 21+ messages in thread
From: Peter Zijlstra @ 2009-07-09 17:07 UTC (permalink / raw)
  To: Dave Jones
  Cc: Joao Correia, LKML, Américo Wang, Frederic Weisbecker,
	Arjan van de Ven, Catalin Marinas, joerg.roedel

On Thu, 2009-07-09 at 12:10 -0400, Dave Jones wrote:
> On Thu, Jul 09, 2009 at 10:02:36AM +0200, Peter Zijlstra wrote:
>  > On Thu, 2009-07-09 at 00:39 -0400, Dave Jones wrote:
>  > > On Wed, Jul 08, 2009 at 08:36:04PM +0200, Peter Zijlstra wrote:
>  > >  > Looking at a lockdep dump hch gave me I can see that that is certainly
>  > >  > possible, I see tons of very deep callchains.
>  > >  > 
>  > >  > /me wonders if we're getting significantly deeper..
>  > > 
>  > > Looking at /proc/lockdep, I'm curious..
>  > > Take a look at http://davej.fedorapeople.org/lockdep
>  > > scroll down to c12c0924
>  > > 
>  > > What's up with all those old_style_spin_init's ?
>  > 
>  > What kernel are you running?
> 
> .31rc2
> 
>  > Does your lib/dma_debug.c:dma_debug_init()
>  > have spin_lock_init() in that HASH_SIZE loop?
> 
> it's doing it by hand..
> 
>  717         for (i = 0; i < HASH_SIZE; ++i) {
>  718                 INIT_LIST_HEAD(&dma_entry_hash[i].list);
>  719                 dma_entry_hash[i].lock = SPIN_LOCK_UNLOCKED;
>  720         }

Hmm, that's the problem, it should read:

        for (i = 0; i < HASH_SIZE; ++i) {
                INIT_LIST_HEAD(&dma_entry_hash[i].list);
                spin_lock_init(&dma_entry_hash[i].lock);
        }

and does in -tip, so maybe Ingo took that patch, but I thought Joerg
would push that Linus-wards. Joerg?
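
For context, the difference is roughly this: with lockdep enabled,
spin_lock_init() plants one static lock class key per call site, so all
HASH_SIZE locks initialized in that loop share a single class, whereas
the open-coded SPIN_LOCK_UNLOCKED assignment registers no key and
lockdep falls back to keying each static lock by its own address, one
class per hash bucket, all named old_style_spin_init. A simplified
sketch of the macro shape (approximate, not the exact 2.6.31 text):

	/* One static key per call site; every lock initialized here
	 * lands in the same lockdep class. */
	#define spin_lock_init(lock)				\
	do {							\
		static struct lock_class_key __key;		\
								\
		__spin_lock_init((lock), #lock, &__key);	\
	} while (0)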



* Re: [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES
  2009-07-09 17:07                     ` Peter Zijlstra
@ 2009-07-10 15:50                       ` Joao Correia
  0 siblings, 0 replies; 21+ messages in thread
From: Joao Correia @ 2009-07-10 15:50 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: LKML, Américo Wang, Frederic Weisbecker, Arjan van de Ven,
	Catalin Marinas, joerg.roedel, Dave Jones

On Thu, Jul 9, 2009 at 6:07 PM, Peter Zijlstra<a.p.zijlstra@chello.nl> wrote:
> On Thu, 2009-07-09 at 12:10 -0400, Dave Jones wrote:
>> On Thu, Jul 09, 2009 at 10:02:36AM +0200, Peter Zijlstra wrote:
>>  > On Thu, 2009-07-09 at 00:39 -0400, Dave Jones wrote:
>>  > > On Wed, Jul 08, 2009 at 08:36:04PM +0200, Peter Zijlstra wrote:
>>  > >  > Looking at a lockdep dump hch gave me I can see that that is certainly
>>  > >  > possible, I see tons of very deep callchains.
>>  > >  >
>>  > >  > /me wonders if we're getting significantly deeper..
>>  > >
>>  > > Looking at /proc/lockdep, I'm curious..
>>  > > Take a look at http://davej.fedorapeople.org/lockdep
>>  > > scroll down to c12c0924
>>  > >
>>  > > What's up with all those old_style_spin_init's ?
>>  >
>>  > What kernel are you running?
>>
>> .31rc2
>>
>>  > Does your lib/dma_debug.c:dma_debug_init()
>>  > have spin_lock_init() in that HASH_SIZE loop?
>>
>> it's doing it by hand..
>>
>>  717         for (i = 0; i < HASH_SIZE; ++i) {
>>  718                 INIT_LIST_HEAD(&dma_entry_hash[i].list);
>>  719                 dma_entry_hash[i].lock = SPIN_LOCK_UNLOCKED;
>>  720         }
>
> Hmm, that's the problem, it should read:
>
>        for (i = 0; i < HASH_SIZE; ++i) {
>                INIT_LIST_HEAD(&dma_entry_hash[i].list);
>                spin_lock_init(&dma_entry_hash[i].lock);
>        }
>
> and does in -tip, so maybe Ingo took that patch, but I thought Joerg
> would push that Linus-wards. Joerg?
>
>

Indeed, changing to spin_lock_init() keeps the counters at much
healthier values, and the warnings no longer trigger. Thanks for
looking into this.

Please disregard my patch submission.

Joao Correia


* [PATCH] lockdep: fixup stacktrace wastage
  2009-07-09  9:09                 ` Peter Zijlstra
@ 2009-07-20 13:31                   ` Peter Zijlstra
  2009-08-02 13:14                     ` [tip:core/locking] lockdep: Fix backtraces tip-bot for Peter Zijlstra
  2009-08-02 13:51                     ` tip-bot for Peter Zijlstra
  0 siblings, 2 replies; 21+ messages in thread
From: Peter Zijlstra @ 2009-07-20 13:31 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Dave Jones, Joao Correia, LKML, Américo Wang,
	Frederic Weisbecker, Arjan van de Ven, mingo

On Thu, 2009-07-09 at 11:10 +0200, Peter Zijlstra wrote:
> On Thu, 2009-07-09 at 10:06 +0100, Catalin Marinas wrote:
> > On Wed, 2009-07-08 at 20:36 +0200, Peter Zijlstra wrote:
> > > As a side note, I see that each and every trace ends with a -1 entry:
> > > 
> > > ...
> > > [ 1194.412158]    [<c01f7990>] do_mount+0x3c0/0x7c0
> > > [ 1194.412158]    [<c01f7e14>] sys_mount+0x84/0xb0
> > > [ 1194.412158]    [<c01221b1>] syscall_call+0x7/0xb
> > > [ 1194.412158]    [<ffffffff>] 0xffffffff
> > > 
> > > Which seems to come from:
> > > 
> > > void save_stack_trace(struct stack_trace *trace)
> > > {
> > >         dump_trace(current, NULL, NULL, 0, &save_stack_ops, trace);
> > >         if (trace->nr_entries < trace->max_entries)
> > >                 trace->entries[trace->nr_entries++] = ULONG_MAX;
> > > }
> > > EXPORT_SYMBOL_GPL(save_stack_trace);
> > > 
> > > commit 006e84ee3a54e393ec6bef2a9bc891dc5bde2843 seems involved,..
> > 
> > The reason for this is that if there are no more traces to show, it
> > inserts -1. In this case, it cannot trace beyond the system call. If the
> > stack trace is truncated because of the maximum number of trace entries
> > it can show, you won't get a -1.
> > 
> > Before the commit above, it was always inserting -1 even if the trace
> > was longer than the maximum number of entries.
> 
> Seems daft to me, I'll fix up lockdep to truncate that last entry,
> having a gazillion copies of -1 in the trace entries doesn't make sense.

Bugger, not all arches have that. I'll queue up the below; if it breaks
anything, I'll go around and fix up all arches to never emit this -1
crap.

---
Subject: lockdep: fixup stacktrace wastage
From: Peter Zijlstra <a.p.zijlstra@chello.nl>
Date: Mon Jul 20 15:27:04 CEST 2009

Some silly architectures emit trailing -1 entries on stacktraces; trim those
to save space.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 kernel/lockdep.c |    6 ++++++
 1 file changed, 6 insertions(+)

Index: linux-2.6/kernel/lockdep.c
===================================================================
--- linux-2.6.orig/kernel/lockdep.c
+++ linux-2.6/kernel/lockdep.c
@@ -367,6 +367,12 @@ static int save_trace(struct stack_trace
 
 	save_stack_trace(trace);
 
+	/*
+	 * Some daft arches put -1 at the end to indicate it's a full trace.
+	 */
+	if (trace->entries[trace->nr_entries-1] == ULONG_MAX)
+		trace->nr_entries--;
+
 	trace->max_entries = trace->nr_entries;
 
 	nr_stack_trace_entries += trace->nr_entries;
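
The same trimming as a free-standing sketch that compiles in userspace
(hypothetical struct standing in for the kernel's struct stack_trace,
plus a defensive empty-trace guard):

	#include <assert.h>
	#include <limits.h>

	struct sketch_trace {
		unsigned long *entries;
		unsigned int nr_entries;
	};

	/* Drop a trailing ULONG_MAX (-1) sentinel, if present. */
	static void trim_sentinel(struct sketch_trace *t)
	{
		if (t->nr_entries && t->entries[t->nr_entries - 1] == ULONG_MAX)
			t->nr_entries--;
	}

	int main(void)
	{
		unsigned long buf[] = { 0xc01f7990UL, 0xc01f7e14UL, ULONG_MAX };
		struct sketch_trace t = { buf, 3 };

		trim_sentinel(&t);
		assert(t.nr_entries == 2);	/* sentinel dropped */
		return 0;
	}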



* [tip:core/locking] lockdep: Fix backtraces
  2009-07-20 13:31                   ` [PATCH] lockdep: fixup stacktrace wastage Peter Zijlstra
@ 2009-08-02 13:14                     ` tip-bot for Peter Zijlstra
  2009-08-02 13:51                     ` tip-bot for Peter Zijlstra
  1 sibling, 0 replies; 21+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-08-02 13:14 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, mingo

Commit-ID:  96c6ab754464eb2d214b81392551ed581bd573ed
Gitweb:     http://git.kernel.org/tip/96c6ab754464eb2d214b81392551ed581bd573ed
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Mon, 20 Jul 2009 15:27:04 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sun, 2 Aug 2009 14:59:29 +0200

lockdep: Fix backtraces

Truncate stupid -1 entries in backtraces.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1248096665.15751.8816.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 kernel/lockdep.c |   12 +++++++++++-
 1 files changed, 11 insertions(+), 1 deletions(-)

diff --git a/kernel/lockdep.c b/kernel/lockdep.c
index 1cedb00..2f09702 100644
--- a/kernel/lockdep.c
+++ b/kernel/lockdep.c
@@ -367,11 +367,21 @@ static int save_trace(struct stack_trace *trace)
 
 	save_stack_trace(trace);
 
+	/*
+	 * Some daft arches put -1 at the end to indicate it's a full trace.
+	 *
+	 * <rant> this is buggy anyway, since it takes a whole extra entry so a
+	 * complete trace that maxes out the entries provided will be reported
+	 * as incomplete, friggin useless </rant>
+	 */
+	if (trace->entries[trace->nr_entries-1] == ULONG_MAX)
+		trace->nr_entries--;
+
 	trace->max_entries = trace->nr_entries;
 
 	nr_stack_trace_entries += trace->nr_entries;
 
-	if (nr_stack_trace_entries == MAX_STACK_TRACE_ENTRIES) {
+	if (nr_stack_trace_entries >= MAX_STACK_TRACE_ENTRIES-1) {
 		if (!debug_locks_off_graph_unlock())
 			return 0;
 


* [tip:core/locking] lockdep: Fix backtraces
  2009-07-20 13:31                   ` [PATCH] lockdep: fixup stacktrace wastage Peter Zijlstra
  2009-08-02 13:14                     ` [tip:core/locking] lockdep: Fix backtraces tip-bot for Peter Zijlstra
@ 2009-08-02 13:51                     ` tip-bot for Peter Zijlstra
  1 sibling, 0 replies; 21+ messages in thread
From: tip-bot for Peter Zijlstra @ 2009-08-02 13:51 UTC (permalink / raw)
  To: linux-tip-commits; +Cc: linux-kernel, hpa, mingo, a.p.zijlstra, tglx, mingo

Commit-ID:  4f84f4330a11b9eb828bf5af557f4c79c64614a3
Gitweb:     http://git.kernel.org/tip/4f84f4330a11b9eb828bf5af557f4c79c64614a3
Author:     Peter Zijlstra <a.p.zijlstra@chello.nl>
AuthorDate: Mon, 20 Jul 2009 15:27:04 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Sun, 2 Aug 2009 15:41:31 +0200

lockdep: Fix backtraces

Truncate stupid -1 entries in backtraces.

Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
LKML-Reference: <1248096665.15751.8816.camel@twins>
Signed-off-by: Ingo Molnar <mingo@elte.hu>


---
 kernel/lockdep.c |   12 +++++++++++-
 1 files changed, 11 insertions(+), 1 deletions(-)

diff --git a/kernel/lockdep.c b/kernel/lockdep.c
index 1cedb00..2f09702 100644
--- a/kernel/lockdep.c
+++ b/kernel/lockdep.c
@@ -367,11 +367,21 @@ static int save_trace(struct stack_trace *trace)
 
 	save_stack_trace(trace);
 
+	/*
+	 * Some daft arches put -1 at the end to indicate it's a full trace.
+	 *
+	 * <rant> this is buggy anyway, since it takes a whole extra entry so a
+	 * complete trace that maxes out the entries provided will be reported
+	 * as incomplete, friggin useless </rant>
+	 */
+	if (trace->entries[trace->nr_entries-1] == ULONG_MAX)
+		trace->nr_entries--;
+
 	trace->max_entries = trace->nr_entries;
 
 	nr_stack_trace_entries += trace->nr_entries;
 
-	if (nr_stack_trace_entries == MAX_STACK_TRACE_ENTRIES) {
+	if (nr_stack_trace_entries >= MAX_STACK_TRACE_ENTRIES-1) {
 		if (!debug_locks_off_graph_unlock())
 			return 0;
 



Thread overview: 21+ messages
2009-07-07 15:25 [PATCH 1/3] Increase lockdep limits: MAX_STACK_TRACE_ENTRIES Joao Correia
2009-07-07 15:33 ` Peter Zijlstra
     [not found]   ` <a5d9929e0907070838q7ed3306du3bb7880e47d7207b@mail.gmail.com>
2009-07-07 15:38     ` Fwd: " Joao Correia
     [not found]     ` <1246981444.9777.11.camel@twins>
2009-07-07 15:50       ` Joao Correia
2009-07-07 15:55         ` Peter Zijlstra
2009-07-07 15:59           ` Joao Correia
2009-07-08 17:22           ` Dave Jones
2009-07-08 18:36             ` Peter Zijlstra
2009-07-08 18:44               ` Dave Jones
2009-07-08 19:48               ` Joao Correia
2009-07-08 19:56                 ` Peter Zijlstra
2009-07-09  4:39               ` Dave Jones
2009-07-09  8:02                 ` Peter Zijlstra
2009-07-09 16:10                   ` Dave Jones
2009-07-09 17:07                     ` Peter Zijlstra
2009-07-10 15:50                       ` Joao Correia
2009-07-09  9:06               ` Catalin Marinas
2009-07-09  9:09                 ` Peter Zijlstra
2009-07-20 13:31                   ` [PATCH] lockdep: fixup stacktrace wastage Peter Zijlstra
2009-08-02 13:14                     ` [tip:core/locking] lockdep: Fix backtraces tip-bot for Peter Zijlstra
2009-08-02 13:51                     ` tip-bot for Peter Zijlstra
