* possible recursive locking detected cache_alloc_refill() + cache_flusharray()
@ 2011-07-16 21:18 Sebastian Siewior
  2011-07-17 21:34 ` Thomas Gleixner
  0 siblings, 1 reply; 16+ messages in thread
From: Sebastian Siewior @ 2011-07-16 21:18 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Pekka Enberg, Matt Mackall, linux-mm, tglx

Hi,

just hit the following with full debugging turned on:

| =============================================
| [ INFO: possible recursive locking detected ]
| 3.0.0-rc7-00088-g1765a36 #64
| ---------------------------------------------
| udevd/1054 is trying to acquire lock:
|  (&(&parent->list_lock)->rlock){..-...}, at: [<c00bf640>] cache_alloc_refill+0xac/0x868
|
| but task is already holding lock:
|  (&(&parent->list_lock)->rlock){..-...}, at: [<c00be47c>] cache_flusharray+0x58/0x148
|
| other info that might help us debug this:
|  Possible unsafe locking scenario:
|
|        CPU0
|        ----
|   lock(&(&parent->list_lock)->rlock);
|   lock(&(&parent->list_lock)->rlock);
|
|  *** DEADLOCK ***
|
|  May be due to missing lock nesting notation
|
| 1 lock held by udevd/1054:
|  #0:  (&(&parent->list_lock)->rlock){..-...}, at: [<c00be47c>] cache_flusharray+0x58/0x148
|
| stack backtrace:
| Call Trace:
| [ed077a30] [c0008034] show_stack+0x48/0x168 (unreliable)
| [ed077a70] [c006a184] __lock_acquire+0x15f8/0x1a14
| [ed077b10] [c006aa80] lock_acquire+0x7c/0x98
| [ed077b50] [c02f7160] _raw_spin_lock+0x3c/0x80
| [ed077b70] [c00bf640] cache_alloc_refill+0xac/0x868
| [ed077bd0] [c00bf4e0] kmem_cache_alloc+0x198/0x1c4
| [ed077bf0] [c01971ac] __debug_object_init+0x268/0x414
| [ed077c50] [c004ba24] rcuhead_fixup_activate+0x34/0x80
| [ed077c70] [c0196a1c] debug_object_activate+0xec/0x1a0
| [ed077ca0] [c007ef38] __call_rcu+0x38/0x1d4
| [ed077cc0] [c00bea44] slab_destroy+0x1f8/0x204
| [ed077d00] [c00beaac] free_block+0x5c/0x1e0
| [ed077d40] [c00be568] cache_flusharray+0x144/0x148
| [ed077d70] [c00be828] kmem_cache_free+0x118/0x13c
| [ed077d90] [c00b18a8] __put_anon_vma+0x88/0xf4
| [ed077da0] [c00b320c] unlink_anon_vmas+0x17c/0x180
| [ed077dd0] [c00ab364] free_pgtables+0x58/0xbc
| [ed077df0] [c00ae158] exit_mmap+0xe8/0x12c
| [ed077e60] [c002b63c] mmput+0x74/0x118
| [ed077e80] [c002fc90] exit_mm+0x13c/0x168
| [ed077eb0] [c0032450] do_exit+0x640/0x6b4
| [ed077f10] [c003250c] do_group_exit+0x48/0xa8
| [ed077f30] [c0032580] sys_exit_group+0x14/0x28
| [ed077f40] [c000ef14] ret_from_syscall+0x0/0x3c
| --- Exception: c01 at 0xfef5c9c
|     LR = 0xffaf988

I haven't found an existing report of this so far.

Sebastian


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-16 21:18 possible recursive locking detected cache_alloc_refill() + cache_flusharray() Sebastian Siewior
@ 2011-07-17 21:34 ` Thomas Gleixner
  2011-07-20 13:21   ` Pekka Enberg
  0 siblings, 1 reply; 16+ messages in thread
From: Thomas Gleixner @ 2011-07-17 21:34 UTC (permalink / raw)
  To: Sebastian Siewior
  Cc: Christoph Lameter, Pekka Enberg, Matt Mackall, linux-mm, Peter Zijlstra

On Sat, 16 Jul 2011, Sebastian Siewior wrote:

> Hi,
> 
> just hit the following with full debugging turned on:
> 
> | =============================================
> | [ INFO: possible recursive locking detected ]
> | 3.0.0-rc7-00088-g1765a36 #64
> | ---------------------------------------------
> | udevd/1054 is trying to acquire lock:
> |  (&(&parent->list_lock)->rlock){..-...}, at: [<c00bf640>] cache_alloc_refill+0xac/0x868
> |
> | but task is already holding lock:
> |  (&(&parent->list_lock)->rlock){..-...}, at: [<c00be47c>] cache_flusharray+0x58/0x148
> |
> | other info that might help us debug this:
> |  Possible unsafe locking scenario:
> |
> |        CPU0
> |        ----
> |   lock(&(&parent->list_lock)->rlock);
> |   lock(&(&parent->list_lock)->rlock);

Known problem. Pekka is looking into it.


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-17 21:34 ` Thomas Gleixner
@ 2011-07-20 13:21   ` Pekka Enberg
  2011-07-20 13:30     ` Peter Zijlstra
  0 siblings, 1 reply; 16+ messages in thread
From: Pekka Enberg @ 2011-07-20 13:21 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: Sebastian Siewior, Christoph Lameter, Matt Mackall, linux-mm,
	Peter Zijlstra

On Sat, 16 Jul 2011, Sebastian Siewior wrote:
>> just hit the following with full debugging turned on:
>>
>> | =============================================
>> | [ INFO: possible recursive locking detected ]
>> | 3.0.0-rc7-00088-g1765a36 #64
>> | ---------------------------------------------
>> | udevd/1054 is trying to acquire lock:
>> |  (&(&parent->list_lock)->rlock){..-...}, at: [<c00bf640>] cache_alloc_refill+0xac/0x868
>> |
>> | but task is already holding lock:
>> |  (&(&parent->list_lock)->rlock){..-...}, at: [<c00be47c>] cache_flusharray+0x58/0x148
>> |
>> | other info that might help us debug this:
>> |  Possible unsafe locking scenario:
>> |
>> |        CPU0
>> |        ----
>> |   lock(&(&parent->list_lock)->rlock);
>> |   lock(&(&parent->list_lock)->rlock);

On Sun, 17 Jul 2011, Thomas Gleixner wrote:
> Known problem. Pekka is looking into it.

Actually, I was kinda hoping Peter would make it go away. ;-)

Looking at the lockdep report, it's l3->list_lock and I really don't quite 
understand why it started to happen now. There haven't been any major 
changes in mm/slab.c for a while. Did lockdep become more strict recently?

 			Pekka


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-20 13:21   ` Pekka Enberg
@ 2011-07-20 13:30     ` Peter Zijlstra
  2011-07-20 13:52       ` Pekka Enberg
  0 siblings, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2011-07-20 13:30 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Thomas Gleixner, Sebastian Siewior, Christoph Lameter,
	Matt Mackall, linux-mm

On Wed, 2011-07-20 at 16:21 +0300, Pekka Enberg wrote:
> On Sat, 16 Jul 2011, Sebastian Siewior wrote:
> >> just hit the following with full debugging turned on:
> >>
> >> | =============================================
> >> | [ INFO: possible recursive locking detected ]
> >> | 3.0.0-rc7-00088-g1765a36 #64
> >> | ---------------------------------------------
> >> | udevd/1054 is trying to acquire lock:
> >> |  (&(&parent->list_lock)->rlock){..-...}, at: [<c00bf640>] cache_alloc_refill+0xac/0x868
> >> |
> >> | but task is already holding lock:
> >> |  (&(&parent->list_lock)->rlock){..-...}, at: [<c00be47c>] cache_flusharray+0x58/0x148
> >> |
> >> | other info that might help us debug this:
> >> |  Possible unsafe locking scenario:
> >> |
> >> |        CPU0
> >> |        ----
> >> |   lock(&(&parent->list_lock)->rlock);
> >> |   lock(&(&parent->list_lock)->rlock);
> 
> On Sun, 17 Jul 2011, Thomas Gleixner wrote:
> > Known problem. Pekka is looking into it.
> 
> Actually, I was kinda hoping Peter would make it go away. ;-)
> 
> Looking at the lockdep report, it's l3->list_lock and I really don't quite 
> understand why it started to happen now. There haven't been any major 
> changes in mm/slab.c for a while. Did lockdep become more strict recently?

Not that I know.. :-) I bet -rt just makes it easier to trigger this
weirdness.

Let me try and look at slab.c without my eyes burning out.. I so hate
that code.


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-20 13:30     ` Peter Zijlstra
@ 2011-07-20 13:52       ` Pekka Enberg
  2011-07-20 14:00         ` Christoph Lameter
  2011-07-20 15:44         ` Peter Zijlstra
  0 siblings, 2 replies; 16+ messages in thread
From: Pekka Enberg @ 2011-07-20 13:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Sebastian Siewior, Christoph Lameter,
	Matt Mackall, linux-mm

On Wed, 20 Jul 2011, Peter Zijlstra wrote:
>>>> just hit the following with full debugging turned on:
>>>>
>>>> | =============================================
>>>> | [ INFO: possible recursive locking detected ]
>>>> | 3.0.0-rc7-00088-g1765a36 #64
>>>> | ---------------------------------------------
>>>> | udevd/1054 is trying to acquire lock:
>>>> |  (&(&parent->list_lock)->rlock){..-...}, at: [<c00bf640>] cache_alloc_refill+0xac/0x868
>>>> |
>>>> | but task is already holding lock:
>>>> |  (&(&parent->list_lock)->rlock){..-...}, at: [<c00be47c>] cache_flusharray+0x58/0x148
>>>> |
>>>> | other info that might help us debug this:
>>>> |  Possible unsafe locking scenario:
>>>> |
>>>> |        CPU0
>>>> |        ----
>>>> |   lock(&(&parent->list_lock)->rlock);
>>>> |   lock(&(&parent->list_lock)->rlock);
>>
>> On Sun, 17 Jul 2011, Thomas Gleixner wrote:
>>> Known problem. Pekka is looking into it.
>>
>> Actually, I was kinda hoping Peter would make it go away. ;-)
>>
>> Looking at the lockdep report, it's l3->list_lock and I really don't quite
>> understand why it started to happen now. There haven't been any major
>> changes in mm/slab.c for a while. Did lockdep become more strict recently?
>
> Not that I know.. :-) I bet -rt just makes it easier to trigger this
> weirdness.
>
> Let me try and look at slab.c without my eyes burning out.. I so hate
> that code.

So what exactly is the lockdep complaint above telling us? We're holding 
on to l3->list_lock in cache_flusharray() (the kfree path), but somehow we 
have now entered cache_alloc_refill() (the kmalloc path!) and are attempting 
to take the same lock, or a lock in the same class.

I am confused. How can that happen?

 			Pekka
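
An aside on lockdep's model, since it explains the shape of the report:
lockdep validates lock *classes*, not lock instances. Every lock that is
initialized from the same call site shares one static lock_class_key, so
taking two distinct list_locks of the same class nested looks like recursion
to lockdep. A minimal kernel-style sketch of the pattern; illustrative only,
not code from this thread:

	#include <linux/spinlock.h>

	struct node_list {
		spinlock_t list_lock;	/* every instance initialized at one site */
	};

	/*
	 * a and b are different locks, but they share a lockdep class, so
	 * this nesting is reported as "possible recursive locking" unless
	 * it is annotated, e.g. with spin_lock_nested() or a separate
	 * lock_class_key.
	 */
	static void nest_same_class(struct node_list *a, struct node_list *b)
	{
		spin_lock(&a->list_lock);
		spin_lock(&b->list_lock);
		spin_unlock(&b->list_lock);
		spin_unlock(&a->list_lock);
	}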


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-20 13:52       ` Pekka Enberg
@ 2011-07-20 14:00         ` Christoph Lameter
  2011-07-20 15:44         ` Peter Zijlstra
  1 sibling, 0 replies; 16+ messages in thread
From: Christoph Lameter @ 2011-07-20 14:00 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Peter Zijlstra, Thomas Gleixner, Sebastian Siewior, Matt Mackall,
	linux-mm

On Wed, 20 Jul 2011, Pekka Enberg wrote:

> So what exactly is the lockdep complaint above telling us? We're holding on to
> l3->list_lock in cache_flusharray() (the kfree path), but somehow we have now
> entered cache_alloc_refill() (the kmalloc path!) and are attempting to take the
> same lock, or a lock in the same class.
>
> I am confused. How can that happen?

I guess you need a slab with CFLGS_OFF_SLAB metadata management. Then the
slab code does some recursive things, allocating and freeing metadata while
allocating larger objects.
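
For context, the off-slab path referred to here: when a cache's slab
management structure lives off-slab, slab_destroy() hands the metadata back
through kmem_cache_free() on another cache, re-entering the allocator from
the free path. Roughly the following, paraphrased from 3.0-era mm/slab.c
with details elided:

	static void slab_destroy(struct kmem_cache *cachep, struct slab *slabp)
	{
		void *addr = slabp->s_mem - slabp->colouroff;

		/* ... SLAB_DESTROY_BY_RCU handling elided ... */
		kmem_freepages(cachep, addr);
		if (OFF_SLAB(cachep))
			/* frees the metadata via another cache: allocator re-entry */
			kmem_cache_free(cachep->slabp_cache, slabp);
	}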


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-20 13:52       ` Pekka Enberg
  2011-07-20 14:00         ` Christoph Lameter
@ 2011-07-20 15:44         ` Peter Zijlstra
  2011-07-21  7:14           ` Sebastian Siewior
  2011-07-28 10:46           ` possible recursive locking detected cache_alloc_refill() + cache_flusharray() Pekka Enberg
  1 sibling, 2 replies; 16+ messages in thread
From: Peter Zijlstra @ 2011-07-20 15:44 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Thomas Gleixner, Sebastian Siewior, Christoph Lameter,
	Matt Mackall, linux-mm

On Wed, 2011-07-20 at 16:52 +0300, Pekka Enberg wrote:

> So what exactly is the lockdep complaint above telling us? We're holding 
> on to l3->list_lock in cache_flusharray() (the kfree path), but somehow we 
> have now entered cache_alloc_refill() (the kmalloc path!) and are attempting 
> to take the same lock, or a lock in the same class.
> 
> I am confused. How can that happen?

[   13.540663]  [<c106b54e>] print_deadlock_bug+0xce/0xe0
[   13.540663]  [<c106d5fa>] validate_chain+0x5aa/0x720
[   13.540663]  [<c106da07>] __lock_acquire+0x297/0x480
[   13.540663]  [<c106e15b>] lock_acquire+0x7b/0xa0
[   13.540663]  [<c10c66c6>] ? cache_alloc_refill+0x66/0x2e0
[   13.540663]  [<c13ca4e6>] _raw_spin_lock+0x36/0x70
[   13.540663]  [<c10c66c6>] ? cache_alloc_refill+0x66/0x2e0
[   13.540663]  [<c11f6ac6>] ? __debug_object_init+0x346/0x360
[   13.540663]  [<c10c66c6>] cache_alloc_refill+0x66/0x2e0
[   13.540663]  [<c106da25>] ? __lock_acquire+0x2b5/0x480
[   13.540663]  [<c11f6ac6>] ? __debug_object_init+0x346/0x360
[   13.540663]  [<c10c635f>] kmem_cache_alloc+0x11f/0x140
[   13.540663]  [<c11f6ac6>] __debug_object_init+0x346/0x360
[   13.540663]  [<c106df62>] ? __lock_release+0x72/0x180
[   13.540663]  [<c11f6365>] ? debug_object_activate+0x85/0x130
[   13.540663]  [<c11f6b17>] debug_object_init+0x17/0x20
[   13.540663]  [<c10543da>] rcuhead_fixup_activate+0x1a/0x60
[   13.540663]  [<c11f6375>] debug_object_activate+0x95/0x130
[   13.540663]  [<c10c60a0>] ? kmem_cache_shrink+0x50/0x50
[   13.540663]  [<c108e60a>] __call_rcu+0x2a/0x180
[   13.540663]  [<c10c48b0>] ? slab_destroy_debugcheck+0x70/0x110
[   13.540663]  [<c108e77d>] call_rcu_sched+0xd/0x10
[   13.540663]  [<c10c58d3>] slab_destroy+0x73/0x80
[   13.540663]  [<c10c591f>] free_block+0x3f/0x1b0
[   13.540663]  [<c10c5ad3>] ? cache_flusharray+0x43/0x110
[   13.540663]  [<c10c5b03>] cache_flusharray+0x73/0x110
[   13.540663]  [<c10c5847>] kmem_cache_free+0xb7/0xd0
[   13.540663]  [<c10bbfb9>] __put_anon_vma+0x49/0xa0
[   13.540663]  [<c10bc5dc>] unlink_anon_vmas+0xfc/0x160
[   13.540663]  [<c10b451c>] free_pgtables+0x3c/0x90
[   13.540663]  [<c10b9a8f>] exit_mmap+0xbf/0xf0
[   13.540663]  [<c1039d3c>] mmput+0x4c/0xc0
[   13.540663]  [<c103d9bc>] exit_mm+0xec/0x130
[   13.540663]  [<c13cadc2>] ? _raw_spin_unlock_irq+0x22/0x30
[   13.540663]  [<c103fa03>] do_exit+0x123/0x390
[   13.540663]  [<c10cb9c5>] ? fput+0x15/0x20
[   13.540663]  [<c10c7c2d>] ? filp_close+0x4d/0x80
[   13.540663]  [<c103fca9>] do_group_exit+0x39/0xa0
[   13.540663]  [<c103fd23>] sys_exit_group+0x13/0x20
[   13.540663]  [<c13cb70c>] sysenter_do_call+0x12/0x32

This shows quite clearly how it happens. Now, it's a false positive, since
the debug object slab doesn't use RCU freeing and thus it can never be the
same slab.

We just need to annotate the SLAB_DEBUG_OBJECTS slab with a different
key. Something like the below, except that doesn't quite cover cpu
hotplug yet I think.. /me pokes more

Completely untested, hasn't even seen a compiler etc..

---
 mm/slab.c |   65 ++++++++++++++++++++++++++++++++++++++++++++----------------
 1 files changed, 47 insertions(+), 18 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index d96e223..c13f7e9 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -620,6 +620,37 @@ int slab_is_available(void)
 static struct lock_class_key on_slab_l3_key;
 static struct lock_class_key on_slab_alc_key;
 
+static struct lock_class_key debugobj_l3_key;
+static struct lock_class_key debugobj_alc_key;
+
+static void slab_set_lock_classes(struct kmem_cache *cachep, 
+		struct lock_class_key *l3_key, struct lock_class_key *alc_key)
+{
+	struct array_cache **alc;
+	struct kmem_list3 *l3;
+	int r;
+
+	l3 = cachep->nodelists[q];
+	if (!l3)
+		return;
+
+	lockdep_set_class(&l3->list_lock, l3_key);
+	alc = l3->alien;
+	/*
+	 * FIXME: This check for BAD_ALIEN_MAGIC
+	 * should go away when common slab code is taught to
+	 * work even without alien caches.
+	 * Currently, non NUMA code returns BAD_ALIEN_MAGIC
+	 * for alloc_alien_cache,
+	 */
+	if (!alc || (unsigned long)alc == BAD_ALIEN_MAGIC)
+		return;
+	for_each_node(r) {
+		if (alc[r])
+			lockdep_set_class(&alc[r]->lock, alc_key);
+	}
+}
+
 static void init_node_lock_keys(int q)
 {
 	struct cache_sizes *s = malloc_sizes;
@@ -628,29 +659,14 @@ static void init_node_lock_keys(int q)
 		return;
 
 	for (s = malloc_sizes; s->cs_size != ULONG_MAX; s++) {
-		struct array_cache **alc;
 		struct kmem_list3 *l3;
-		int r;
 
 		l3 = s->cs_cachep->nodelists[q];
 		if (!l3 || OFF_SLAB(s->cs_cachep))
 			continue;
-		lockdep_set_class(&l3->list_lock, &on_slab_l3_key);
-		alc = l3->alien;
-		/*
-		 * FIXME: This check for BAD_ALIEN_MAGIC
-		 * should go away when common slab code is taught to
-		 * work even without alien caches.
-		 * Currently, non NUMA code returns BAD_ALIEN_MAGIC
-		 * for alloc_alien_cache,
-		 */
-		if (!alc || (unsigned long)alc == BAD_ALIEN_MAGIC)
-			continue;
-		for_each_node(r) {
-			if (alc[r])
-				lockdep_set_class(&alc[r]->lock,
-					&on_slab_alc_key);
-		}
+
+		slab_set_lock_classes(s->cs_cachep,
+				&on_slab_l3_key, &on_slab_alc_key)
 	}
 }
 
@@ -2424,6 +2440,19 @@ kmem_cache_create (const char *name, size_t size, size_t align,
 		goto oops;
 	}
 
+	if (flags & SLAB_DEBUG_OBJECTS) {
+		/*
+		 * Would deadlock through slab_destroy()->call_rcu()->
+		 * debug_object_activate()->kmem_cache_alloc().
+		 */
+		WARN_ON_ONCE(flags & SLAB_DESTROY_BY_RCU);
+
+#ifdef CONFIG_LOCKDEP
+		slab_set_lock_classes(cachep, 
+				&debugobj_l3_key, &debugobj_alc_key);
+#endif
+	}
+
 	/* cache setup completed, link it into the list */
 	list_add(&cachep->next, &cache_chain);
 oops:


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-20 15:44         ` Peter Zijlstra
@ 2011-07-21  7:14           ` Sebastian Siewior
  2011-07-22  8:17             ` Pekka Enberg
  2011-07-22 13:26             ` Peter Zijlstra
  2011-07-28 10:46           ` possible recursive locking detected cache_alloc_refill() + cache_flusharray() Pekka Enberg
  1 sibling, 2 replies; 16+ messages in thread
From: Sebastian Siewior @ 2011-07-21  7:14 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Pekka Enberg, Thomas Gleixner, Sebastian Siewior,
	Christoph Lameter, Matt Mackall, linux-mm

* Thus spake Peter Zijlstra (peterz@infradead.org):
> We just need to annotate the SLAB_DEBUG_OBJECTS slab with a different
> key. Something like the below, except that doesn't quite cover cpu
> hotplug yet I think.. /me pokes more
> 
> Completely untested, hasn't even seen a compiler etc..

This fix on top passes the compiler and the splat on boot is also gone.

---
 mm/slab.c |   28 ++++++++++++++++++++--------
 1 files changed, 20 insertions(+), 8 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index c13f7e9..fcf8380 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -623,8 +623,9 @@ static struct lock_class_key on_slab_alc_key;
 static struct lock_class_key debugobj_l3_key;
 static struct lock_class_key debugobj_alc_key;
 
-static void slab_set_lock_classes(struct kmem_cache *cachep, 
-		struct lock_class_key *l3_key, struct lock_class_key *alc_key)
+static void slab_set_lock_classes(struct kmem_cache *cachep,
+		struct lock_class_key *l3_key, struct lock_class_key *alc_key,
+		int q)
 {
 	struct array_cache **alc;
 	struct kmem_list3 *l3;
@@ -651,6 +652,16 @@ static void slab_set_lock_classes(struct kmem_cache *cachep,
 	}
 }
 
+static void slab_each_set_lock_classes(struct kmem_cache *cachep)
+{
+	int node;
+
+	for_each_online_node(node) {
+		slab_set_lock_classes(cachep, &debugobj_l3_key,
+				&debugobj_alc_key, node);
+	}
+}
+
 static void init_node_lock_keys(int q)
 {
 	struct cache_sizes *s = malloc_sizes;
@@ -665,8 +676,8 @@ static void init_node_lock_keys(int q)
 		if (!l3 || OFF_SLAB(s->cs_cachep))
 			continue;
 
-		slab_set_lock_classes(s->cs_cachep,
-				&on_slab_l3_key, &on_slab_alc_key)
+		slab_set_lock_classes(s->cs_cachep, &on_slab_l3_key,
+				&on_slab_alc_key, q);
 	}
 }
 
@@ -685,6 +696,10 @@ static void init_node_lock_keys(int q)
 static inline void init_lock_keys(void)
 {
 }
+
+static void slab_each_set_lock_classes(struct kmem_cache *cachep)
+{
+}
 #endif
 
 /*
@@ -2447,10 +2462,7 @@ kmem_cache_create (const char *name, size_t size, size_t align,
 		 */
 		WARN_ON_ONCE(flags & SLAB_DESTROY_BY_RCU);
 
-#ifdef CONFIG_LOCKDEP
-		slab_set_lock_classes(cachep, 
-				&debugobj_l3_key, &debugobj_alc_key);
-#endif
+		slab_each_set_lock_classes(cachep);
 	}
 
 	/* cache setup completed, link it into the list */
-- 
1.7.4.4

Sebastian


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-21  7:14           ` Sebastian Siewior
@ 2011-07-22  8:17             ` Pekka Enberg
  2011-07-22 13:26             ` Peter Zijlstra
  1 sibling, 0 replies; 16+ messages in thread
From: Pekka Enberg @ 2011-07-22  8:17 UTC (permalink / raw)
  To: Sebastian Siewior
  Cc: Peter Zijlstra, Thomas Gleixner, Christoph Lameter, Matt Mackall,
	linux-mm

On Thu, 21 Jul 2011, Sebastian Siewior wrote:
> * Thus spake Peter Zijlstra (peterz@infradead.org):
>> We just need to annotate the SLAB_DEBUG_OBJECTS slab with a different
>> key. Something like the below, except that doesn't quite cover cpu
>> hotplug yet I think.. /me pokes more
>>
>> Completely untested, hasn't even seen a compiler etc..
>
> This fix on top passes the compiler and the splat on boot is also gone.

Can someone send me a patch I can apply to slab.git? Alternatively, 
the lockdep tree can pick it up:

Acked-by: Pekka Enberg <penberg@kernel.org>

 			Pekka


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-21  7:14           ` Sebastian Siewior
  2011-07-22  8:17             ` Pekka Enberg
@ 2011-07-22 13:26             ` Peter Zijlstra
  2011-07-23 11:22               ` Sebastian Andrzej Siewior
  2011-08-04  8:35               ` [tip:core/urgent] slab, lockdep: Annotate slab -> rcu -> debug_object -> slab tip-bot for Peter Zijlstra
  1 sibling, 2 replies; 16+ messages in thread
From: Peter Zijlstra @ 2011-07-22 13:26 UTC (permalink / raw)
  To: Sebastian Siewior
  Cc: Pekka Enberg, Thomas Gleixner, Christoph Lameter, Matt Mackall, linux-mm

On Thu, 2011-07-21 at 09:14 +0200, Sebastian Siewior wrote:
> * Thus spake Peter Zijlstra (peterz@infradead.org):
> > We just need to annotate the SLAB_DEBUG_OBJECTS slab with a different
> > key. Something like the below, except that doesn't quite cover cpu
> > hotplug yet I think.. /me pokes more
> > 
> > Completely untested, hasn't even seen a compiler etc..
> 
> This fix on top passes the compiler and the splat on boot is also gone.

Thanks!
 
> +static void slab_each_set_lock_classes(struct kmem_cache *cachep)
> +{
> +	int node;
> +
> +	for_each_online_node(node) {
> +		slab_set_lock_classes(cachep, &debugobj_l3_key,
> +				&debugobj_alc_key, node);
> +	}
> +}

Hmm, O(nr_nodes^2), sounds about right for alien crap, right?

Still needs some hotplug love though, maybe something like the below...
Sebastian, would you be willing to give the thing another spin to see if
I didn't (again) break anything silly?

---
Subject: slab, lockdep: Annotate debug object slabs

Lockdep thinks there's lock recursion through:

	kmem_cache_free()
	  cache_flusharray()
	    spin_lock(&l3->list_lock)  <----------------\
	    free_block()                                |
	      slab_destroy()                            |
		call_rcu()                              |
		  debug_object_activate()               |
		    debug_object_init()                 |
		      __debug_object_init()             |
			kmem_cache_alloc()              |
			  cache_alloc_refill()          |
			    spin_lock(&l3->list_lock) --/

Now debug objects doesn't use SLAB_DESTROY_BY_RCU and hence there is no
actual possibility of recursing. Luckily debug objects marks its slab
with SLAB_DEBUG_OBJECTS so we can identify the thing.

Mark all SLAB_DEBUG_OBJECTS (all one!) slab caches with a special
lockdep key so that lockdep sees it's a different cachep.

Also add a WARN on trying to create a SLAB_DESTROY_BY_RCU |
SLAB_DEBUG_OBJECTS cache, to avoid possible future trouble.

Reported-by: Sebastian Siewior <sebastian@breakpoint.cc>
[ fixes to the initial patch ]
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
---
 mm/slab.c |   86 ++++++++++++++++++++++++++++++++++++++++++++++++-------------
 1 files changed, 68 insertions(+), 18 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index d96e223..2175d45 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -620,6 +620,51 @@ int slab_is_available(void)
 static struct lock_class_key on_slab_l3_key;
 static struct lock_class_key on_slab_alc_key;
 
+static struct lock_class_key debugobj_l3_key;
+static struct lock_class_key debugobj_alc_key;
+
+static void slab_set_lock_classes(struct kmem_cache *cachep,
+		struct lock_class_key *l3_key, struct lock_class_key *alc_key,
+		int q)
+{
+	struct array_cache **alc;
+	struct kmem_list3 *l3;
+	int r;
+
+	l3 = cachep->nodelists[q];
+	if (!l3)
+		return;
+
+	lockdep_set_class(&l3->list_lock, l3_key);
+	alc = l3->alien;
+	/*
+	 * FIXME: This check for BAD_ALIEN_MAGIC
+	 * should go away when common slab code is taught to
+	 * work even without alien caches.
+	 * Currently, non NUMA code returns BAD_ALIEN_MAGIC
+	 * for alloc_alien_cache,
+	 */
+	if (!alc || (unsigned long)alc == BAD_ALIEN_MAGIC)
+		return;
+	for_each_node(r) {
+		if (alc[r])
+			lockdep_set_class(&alc[r]->lock, alc_key);
+	}
+}
+
+static void slab_set_debugobj_lock_classes_node(struct kmem_cache *cachep, int node)
+{
+	slab_set_lock_classes(cachep, &debugobj_l3_key, &debugobj_alc_key, node);
+}
+
+static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep)
+{
+	int node;
+
+	for_each_online_node(node)
+		slab_set_debugobj_lock_classes_node(cachep, node);
+}
+
 static void init_node_lock_keys(int q)
 {
 	struct cache_sizes *s = malloc_sizes;
@@ -628,29 +673,14 @@ static void init_node_lock_keys(int q)
 		return;
 
 	for (s = malloc_sizes; s->cs_size != ULONG_MAX; s++) {
-		struct array_cache **alc;
 		struct kmem_list3 *l3;
-		int r;
 
 		l3 = s->cs_cachep->nodelists[q];
 		if (!l3 || OFF_SLAB(s->cs_cachep))
 			continue;
-		lockdep_set_class(&l3->list_lock, &on_slab_l3_key);
-		alc = l3->alien;
-		/*
-		 * FIXME: This check for BAD_ALIEN_MAGIC
-		 * should go away when common slab code is taught to
-		 * work even without alien caches.
-		 * Currently, non NUMA code returns BAD_ALIEN_MAGIC
-		 * for alloc_alien_cache,
-		 */
-		if (!alc || (unsigned long)alc == BAD_ALIEN_MAGIC)
-			continue;
-		for_each_node(r) {
-			if (alc[r])
-				lockdep_set_class(&alc[r]->lock,
-					&on_slab_alc_key);
-		}
+
+		slab_set_lock_classes(s->cs_cachep, &on_slab_l3_key,
+				&on_slab_alc_key, q);
 	}
 }
 
@@ -669,6 +699,14 @@ static void init_node_lock_keys(int q)
 static inline void init_lock_keys(void)
 {
 }
+
+static void slab_set_debugobj_lock_classes_node(struct kmem_cache *cachep, int node)
+{
+}
+
+static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep, int node)
+{
+}
 #endif
 
 /*
@@ -1262,6 +1300,8 @@ static int __cpuinit cpuup_prepare(long cpu)
 		spin_unlock_irq(&l3->list_lock);
 		kfree(shared);
 		free_alien_cache(alien);
+		if (cachep->flags & SLAB_DEBUG_OBJECTS)
+			slab_set_debugobj_lock_classes_node(cachep, node);
 	}
 	init_node_lock_keys(node);
 
@@ -2424,6 +2464,16 @@ kmem_cache_create (const char *name, size_t size, size_t align,
 		goto oops;
 	}
 
+	if (flags & SLAB_DEBUG_OBJECTS) {
+		/*
+		 * Would deadlock through slab_destroy()->call_rcu()->
+		 * debug_object_activate()->kmem_cache_alloc().
+		 */
+		WARN_ON_ONCE(flags & SLAB_DESTROY_BY_RCU);
+
+		slab_set_debugobj_lock_classes(cachep);
+	}
+
 	/* cache setup completed, link it into the list */
 	list_add(&cachep->next, &cache_chain);
 oops:


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-22 13:26             ` Peter Zijlstra
@ 2011-07-23 11:22               ` Sebastian Andrzej Siewior
  2011-08-04  8:35               ` [tip:core/urgent] slab, lockdep: Annotate slab -> rcu -> debug_object -> slab tip-bot for Peter Zijlstra
  1 sibling, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2011-07-23 11:22 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Pekka Enberg, Thomas Gleixner, Christoph Lameter, Matt Mackall, linux-mm

* Thus spake Peter Zijlstra (peterz@infradead.org):
> Thanks!
You're welcome.

> > +static void slab_each_set_lock_classes(struct kmem_cache *cachep)
> > +{
> > +	int node;
> > +
> > +	for_each_online_node(node) {
> > +		slab_set_lock_classes(cachep, &debugobj_l3_key,
> > +				&debugobj_alc_key, node);
> > +	}
> > +}
> 
> Hmm, O(nr_nodes^2), sounds about right for alien crap, right?
A little less if not all nodes are online :) However, it is the same kind of
init used earlier by setup_cpu_cache().
I tried to pull the lock class into the cachep, but lockdep didn't like this.
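
That is presumably because lockdep requires lock_class_key objects to live
in static storage: when registering a class it runs static_obj() on the key
and rejects keys embedded in dynamically allocated memory ("INFO: trying to
register non-static key."). A sketch of the distinction, assuming that
2011-era lockdep behaviour:

	#include <linux/lockdep.h>
	#include <linux/spinlock.h>

	static struct lock_class_key good_key;	/* static storage: accepted */

	struct cache_sketch {			/* allocated with kmalloc() */
		spinlock_t lock;
		struct lock_class_key bad_key;	/* heap storage: rejected */
	};

	static void assign_keys(struct cache_sketch *c)
	{
		lockdep_set_class(&c->lock, &good_key);	  /* fine */
		lockdep_set_class(&c->lock, &c->bad_key); /* would trip the
							     non-static-key check */
	}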

> Still needs some hotplug love though, maybe something like the below...
> Sebastian, would you be willing to give the thing another spin to see if
> I didn't (again) break anything silly?
Looks good, compiles and seems to work :)

Sebastian


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-20 15:44         ` Peter Zijlstra
  2011-07-21  7:14           ` Sebastian Siewior
@ 2011-07-28 10:46           ` Pekka Enberg
  2011-07-28 10:56             ` Sebastian Andrzej Siewior
  2011-07-28 10:56             ` Peter Zijlstra
  1 sibling, 2 replies; 16+ messages in thread
From: Pekka Enberg @ 2011-07-28 10:46 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Sebastian Siewior, Christoph Lameter,
	Matt Mackall, linux-mm

On Wed, 20 Jul 2011, Peter Zijlstra wrote:
> We just need to annotate the SLAB_DEBUG_OBJECTS slab with a different
> key. Something like the below, except that doesn't quite cover cpu
> hotplug yet I think.. /me pokes more
>
> Completely untested, hasn't even seen a compiler etc..

Ping? Did someone send me a patch I can apply?

>
> ---
> mm/slab.c |   65 ++++++++++++++++++++++++++++++++++++++++++++----------------
> 1 files changed, 47 insertions(+), 18 deletions(-)
>
> diff --git a/mm/slab.c b/mm/slab.c
> index d96e223..c13f7e9 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -620,6 +620,37 @@ int slab_is_available(void)
> static struct lock_class_key on_slab_l3_key;
> static struct lock_class_key on_slab_alc_key;
>
> +static struct lock_class_key debugobj_l3_key;
> +static struct lock_class_key debugobj_alc_key;
> +
> +static void slab_set_lock_classes(struct kmem_cache *cachep,
> +		struct lock_class_key *l3_key, struct lock_class_key *alc_key)
> +{
> +	struct array_cache **alc;
> +	struct kmem_list3 *l3;
> +	int r;
> +
> +	l3 = cachep->nodelists[q];
> +	if (!l3)
> +		return;
> +
> +	lockdep_set_class(&l3->list_lock, l3_key);
> +	alc = l3->alien;
> +	/*
> +	 * FIXME: This check for BAD_ALIEN_MAGIC
> +	 * should go away when common slab code is taught to
> +	 * work even without alien caches.
> +	 * Currently, non NUMA code returns BAD_ALIEN_MAGIC
> +	 * for alloc_alien_cache,
> +	 */
> +	if (!alc || (unsigned long)alc == BAD_ALIEN_MAGIC)
> +		return;
> +	for_each_node(r) {
> +		if (alc[r])
> +			lockdep_set_class(&alc[r]->lock, alc_key);
> +	}
> +}
> +
> static void init_node_lock_keys(int q)
> {
> 	struct cache_sizes *s = malloc_sizes;
> @@ -628,29 +659,14 @@ static void init_node_lock_keys(int q)
> 		return;
>
> 	for (s = malloc_sizes; s->cs_size != ULONG_MAX; s++) {
> -		struct array_cache **alc;
> 		struct kmem_list3 *l3;
> -		int r;
>
> 		l3 = s->cs_cachep->nodelists[q];
> 		if (!l3 || OFF_SLAB(s->cs_cachep))
> 			continue;
> -		lockdep_set_class(&l3->list_lock, &on_slab_l3_key);
> -		alc = l3->alien;
> -		/*
> -		 * FIXME: This check for BAD_ALIEN_MAGIC
> -		 * should go away when common slab code is taught to
> -		 * work even without alien caches.
> -		 * Currently, non NUMA code returns BAD_ALIEN_MAGIC
> -		 * for alloc_alien_cache,
> -		 */
> -		if (!alc || (unsigned long)alc == BAD_ALIEN_MAGIC)
> -			continue;
> -		for_each_node(r) {
> -			if (alc[r])
> -				lockdep_set_class(&alc[r]->lock,
> -					&on_slab_alc_key);
> -		}
> +
> +		slab_set_lock_classes(s->cs_cachep,
> +				&on_slab_l3_key, &on_slab_alc_key)
> 	}
> }
>
> @@ -2424,6 +2440,19 @@ kmem_cache_create (const char *name, size_t size, size_t align,
> 		goto oops;
> 	}
>
> +	if (flags & SLAB_DEBUG_OBJECTS) {
> +		/*
> +		 * Would deadlock through slab_destroy()->call_rcu()->
> +		 * debug_object_activate()->kmem_cache_alloc().
> +		 */
> +		WARN_ON_ONCE(flags & SLAB_DESTROY_BY_RCU);
> +
> +#ifdef CONFIG_LOCKDEP
> +		slab_set_lock_classes(cachep,
> +				&debugobj_l3_key, &debugobj_alc_key);
> +#endif
> +	}
> +
> 	/* cache setup completed, link it into the list */
> 	list_add(&cachep->next, &cache_chain);
> oops:
>
>


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-28 10:56             ` Peter Zijlstra
@ 2011-07-28 10:55               ` Pekka Enberg
  0 siblings, 0 replies; 16+ messages in thread
From: Pekka Enberg @ 2011-07-28 10:55 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Thomas Gleixner, Sebastian Siewior, Christoph Lameter,
	Matt Mackall, linux-mm

On Thu, Jul 28, 2011 at 1:56 PM, Peter Zijlstra <peterz@infradead.org> wrote:
>> > Completely untested, hasn't even seen a compiler etc..
>>
>> Ping? Did someone send me a patch I can apply?
>
> I've queued a slightly updated patch for the lockdep tree. It should
> hopefully hit -tip soonish.

Oh, okay. Thanks, Peter.


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-28 10:46           ` possible recursive locking detected cache_alloc_refill() + cache_flusharray() Pekka Enberg
@ 2011-07-28 10:56             ` Sebastian Andrzej Siewior
  2011-07-28 10:56             ` Peter Zijlstra
  1 sibling, 0 replies; 16+ messages in thread
From: Sebastian Andrzej Siewior @ 2011-07-28 10:56 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Peter Zijlstra, Thomas Gleixner, Sebastian Siewior,
	Christoph Lameter, Matt Mackall, linux-mm

* Pekka Enberg | 2011-07-28 13:46:23 [+0300]:

>On Wed, 20 Jul 2011, Peter Zijlstra wrote:
>> We just need to annotate the SLAB_DEBUG_OBJECTS slab with a different
>> key. Something like the below, except that doesn't quite cover cpu
>> hotplug yet I think.. /me pokes more
>>
>> Completely untested, hasn't even seen a compiler etc..
>
>Ping? Did someone send me a patch I can apply?

Yes, Peter did. Please see the following mail from
| 22.07.11 15:26  Peter Zijlstra 
in this thread.

Sebastian


* Re: possible recursive locking detected cache_alloc_refill() + cache_flusharray()
  2011-07-28 10:46           ` possible recursive locking detected cache_alloc_refill() + cache_flusharray() Pekka Enberg
  2011-07-28 10:56             ` Sebastian Andrzej Siewior
@ 2011-07-28 10:56             ` Peter Zijlstra
  2011-07-28 10:55               ` Pekka Enberg
  1 sibling, 1 reply; 16+ messages in thread
From: Peter Zijlstra @ 2011-07-28 10:56 UTC (permalink / raw)
  To: Pekka Enberg
  Cc: Thomas Gleixner, Sebastian Siewior, Christoph Lameter,
	Matt Mackall, linux-mm

On Thu, 2011-07-28 at 13:46 +0300, Pekka Enberg wrote:
> On Wed, 20 Jul 2011, Peter Zijlstra wrote:
> > We just need to annotate the SLAB_DEBUG_OBJECTS slab with a different
> > key. Something like the below, except that doesn't quite cover cpu
> > hotplug yet I think.. /me pokes more
> >
> > Completely untested, hasn't even seen a compiler etc..
> 
> Ping? Did someone send me a patch I can apply?

I've queued a slightly updated patch for the lockdep tree. It should
hopefully hit -tip soonish.


* [tip:core/urgent] slab, lockdep: Annotate slab -> rcu -> debug_object -> slab
  2011-07-22 13:26             ` Peter Zijlstra
  2011-07-23 11:22               ` Sebastian Andrzej Siewior
@ 2011-08-04  8:35               ` tip-bot for Peter Zijlstra
  1 sibling, 0 replies; 16+ messages in thread
From: tip-bot for Peter Zijlstra @ 2011-08-04  8:35 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: linux-kernel, hpa, mingo, a.p.zijlstra, penberg, peterz, tglx,
	sebastian, mingo

Commit-ID:  83835b3d9aec8e9f666d8223d8a386814f756266
Gitweb:     http://git.kernel.org/tip/83835b3d9aec8e9f666d8223d8a386814f756266
Author:     Peter Zijlstra <peterz@infradead.org>
AuthorDate: Fri, 22 Jul 2011 15:26:05 +0200
Committer:  Ingo Molnar <mingo@elte.hu>
CommitDate: Thu, 4 Aug 2011 10:17:54 +0200

slab, lockdep: Annotate slab -> rcu -> debug_object -> slab

Lockdep thinks there's lock recursion through:

	kmem_cache_free()
	  cache_flusharray()
	    spin_lock(&l3->list_lock)  <----------------.
	    free_block()                                |
	      slab_destroy()                            |
		call_rcu()                              |
		  debug_object_activate()               |
		    debug_object_init()                 |
		      __debug_object_init()             |
			kmem_cache_alloc()              |
			  cache_alloc_refill()          |
			    spin_lock(&l3->list_lock) --'

Now debug objects doesn't use SLAB_DESTROY_BY_RCU and hence there is no
actual possibility of recursing. Luckily debug objects marks its slab
with SLAB_DEBUG_OBJECTS so we can identify the thing.

Mark all SLAB_DEBUG_OBJECTS (all one!) slab caches with a special
lockdep key so that lockdep sees it's a different cachep.

Also add a WARN on trying to create a SLAB_DESTROY_BY_RCU |
SLAB_DEBUG_OBJECTS cache, to avoid possible future trouble.

Reported-and-tested-by: Sebastian Siewior <sebastian@breakpoint.cc>
[ fixes to the initial patch ]
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Pekka Enberg <penberg@kernel.org>
Signed-off-by: Peter Zijlstra <a.p.zijlstra@chello.nl>
Link: http://lkml.kernel.org/r/1311341165.27400.58.camel@twins
Signed-off-by: Ingo Molnar <mingo@elte.hu>
---
 mm/slab.c |   86 ++++++++++++++++++++++++++++++++++++++++++++++++-------------
 1 files changed, 68 insertions(+), 18 deletions(-)

diff --git a/mm/slab.c b/mm/slab.c
index 9594740..0703578 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -622,6 +622,51 @@ int slab_is_available(void)
 static struct lock_class_key on_slab_l3_key;
 static struct lock_class_key on_slab_alc_key;
 
+static struct lock_class_key debugobj_l3_key;
+static struct lock_class_key debugobj_alc_key;
+
+static void slab_set_lock_classes(struct kmem_cache *cachep,
+		struct lock_class_key *l3_key, struct lock_class_key *alc_key,
+		int q)
+{
+	struct array_cache **alc;
+	struct kmem_list3 *l3;
+	int r;
+
+	l3 = cachep->nodelists[q];
+	if (!l3)
+		return;
+
+	lockdep_set_class(&l3->list_lock, l3_key);
+	alc = l3->alien;
+	/*
+	 * FIXME: This check for BAD_ALIEN_MAGIC
+	 * should go away when common slab code is taught to
+	 * work even without alien caches.
+	 * Currently, non NUMA code returns BAD_ALIEN_MAGIC
+	 * for alloc_alien_cache,
+	 */
+	if (!alc || (unsigned long)alc == BAD_ALIEN_MAGIC)
+		return;
+	for_each_node(r) {
+		if (alc[r])
+			lockdep_set_class(&alc[r]->lock, alc_key);
+	}
+}
+
+static void slab_set_debugobj_lock_classes_node(struct kmem_cache *cachep, int node)
+{
+	slab_set_lock_classes(cachep, &debugobj_l3_key, &debugobj_alc_key, node);
+}
+
+static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep)
+{
+	int node;
+
+	for_each_online_node(node)
+		slab_set_debugobj_lock_classes_node(cachep, node);
+}
+
 static void init_node_lock_keys(int q)
 {
 	struct cache_sizes *s = malloc_sizes;
@@ -630,29 +675,14 @@ static void init_node_lock_keys(int q)
 		return;
 
 	for (s = malloc_sizes; s->cs_size != ULONG_MAX; s++) {
-		struct array_cache **alc;
 		struct kmem_list3 *l3;
-		int r;
 
 		l3 = s->cs_cachep->nodelists[q];
 		if (!l3 || OFF_SLAB(s->cs_cachep))
 			continue;
-		lockdep_set_class(&l3->list_lock, &on_slab_l3_key);
-		alc = l3->alien;
-		/*
-		 * FIXME: This check for BAD_ALIEN_MAGIC
-		 * should go away when common slab code is taught to
-		 * work even without alien caches.
-		 * Currently, non NUMA code returns BAD_ALIEN_MAGIC
-		 * for alloc_alien_cache,
-		 */
-		if (!alc || (unsigned long)alc == BAD_ALIEN_MAGIC)
-			continue;
-		for_each_node(r) {
-			if (alc[r])
-				lockdep_set_class(&alc[r]->lock,
-					&on_slab_alc_key);
-		}
+
+		slab_set_lock_classes(s->cs_cachep, &on_slab_l3_key,
+				&on_slab_alc_key, q);
 	}
 }
 
@@ -671,6 +701,14 @@ static void init_node_lock_keys(int q)
 static inline void init_lock_keys(void)
 {
 }
+
+static void slab_set_debugobj_lock_classes_node(struct kmem_cache *cachep, int node)
+{
+}
+
+static void slab_set_debugobj_lock_classes(struct kmem_cache *cachep)
+{
+}
 #endif
 
 /*
@@ -1264,6 +1302,8 @@ static int __cpuinit cpuup_prepare(long cpu)
 		spin_unlock_irq(&l3->list_lock);
 		kfree(shared);
 		free_alien_cache(alien);
+		if (cachep->flags & SLAB_DEBUG_OBJECTS)
+			slab_set_debugobj_lock_classes_node(cachep, node);
 	}
 	init_node_lock_keys(node);
 
@@ -2426,6 +2466,16 @@ kmem_cache_create (const char *name, size_t size, size_t align,
 		goto oops;
 	}
 
+	if (flags & SLAB_DEBUG_OBJECTS) {
+		/*
+		 * Would deadlock through slab_destroy()->call_rcu()->
+		 * debug_object_activate()->kmem_cache_alloc().
+		 */
+		WARN_ON_ONCE(flags & SLAB_DESTROY_BY_RCU);
+
+		slab_set_debugobj_lock_classes(cachep);
+	}
+
 	/* cache setup completed, link it into the list */
 	list_add(&cachep->next, &cache_chain);
 oops:


Thread overview: 16 messages
2011-07-16 21:18 possible recursive locking detected cache_alloc_refill() + cache_flusharray() Sebastian Siewior
2011-07-17 21:34 ` Thomas Gleixner
2011-07-20 13:21   ` Pekka Enberg
2011-07-20 13:30     ` Peter Zijlstra
2011-07-20 13:52       ` Pekka Enberg
2011-07-20 14:00         ` Christoph Lameter
2011-07-20 15:44         ` Peter Zijlstra
2011-07-21  7:14           ` Sebastian Siewior
2011-07-22  8:17             ` Pekka Enberg
2011-07-22 13:26             ` Peter Zijlstra
2011-07-23 11:22               ` Sebastian Andrzej Siewior
2011-08-04  8:35               ` [tip:core/urgent] slab, lockdep: Annotate slab -> rcu -> debug_object -> slab tip-bot for Peter Zijlstra
2011-07-28 10:46           ` possible recursive locking detected cache_alloc_refill() + cache_flusharray() Pekka Enberg
2011-07-28 10:56             ` Sebastian Andrzej Siewior
2011-07-28 10:56             ` Peter Zijlstra
2011-07-28 10:55               ` Pekka Enberg
