linux-kernel.vger.kernel.org archive mirror
* 2.5.39 kmem_cache bug
@ 2002-09-28 20:13 John Levon
  2002-09-28 20:56 ` Andrew Morton
  0 siblings, 1 reply; 10+ messages in thread
From: John Levon @ 2002-09-28 20:13 UTC (permalink / raw)
  To: linux-kernel


kmem_cache_destroy() is falsely reporting
"kmem_cache_destroy: Can't free all objects" in 2.5.39. I have
verified my code was freeing all allocated items correctly.

Reverting this chunk :

-                       list_add(&slabp->list, &cachep->slabs_free);
+/*                     list_add(&slabp->list, &cachep->slabs_free);            */
+                       if (unlikely(list_empty(&cachep->slabs_partial)))
+                               list_add(&slabp->list, &cachep->slabs_partial);
+                       else
+                               kmem_slab_destroy(cachep, slabp);

and the problem goes away. I haven't investigated why.

This is with CONFIG_SMP, !CONFIG_PREEMPT

regards
john

-- 
"When your name is Winner, that's it. You don't need a nickname."
	- Loser Lane


* Re: 2.5.39 kmem_cache bug
  2002-09-28 20:13 2.5.39 kmem_cache bug John Levon
@ 2002-09-28 20:56 ` Andrew Morton
  2002-09-28 21:12   ` Manfred Spraul
                     ` (3 more replies)
  0 siblings, 4 replies; 10+ messages in thread
From: Andrew Morton @ 2002-09-28 20:56 UTC (permalink / raw)
  To: John Levon, Ed Tomlinson, Manfred Spraul; +Cc: linux-kernel

John Levon wrote:
> 
> kmem_cache_destroy() is falsely reporting
> "kmem_cache_destroy: Can't free all objects" in 2.5.39. I have
> verified my code was freeing all allocated items correctly.
> 
> Reverting this chunk :
> 
> -                       list_add(&slabp->list, &cachep->slabs_free);
> +/*                     list_add(&slabp->list, &cachep->slabs_free);            */
> +                       if (unlikely(list_empty(&cachep->slabs_partial)))
> +                               list_add(&slabp->list, &cachep->slabs_partial);
> +                       else
> +                               kmem_slab_destroy(cachep, slabp);
> 
> and the problem goes away. I haven't investigated why.
> 

Thanks.  That's the code which leaves one empty page available
for new allocations rather than freeing it immediately.

It's temporary.  Ed, I think we can just do

	if (list_empty(&cachep->slabs_free))
		list_add(&slabp->list, &cachep->slabs_free);
	else
		kmem_slab_destroy(cachep, slabp);

there?
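
That test is in kmem_cache_free_one(), so the free path would end up
looking roughly like this:

	if (unlikely(!--slabp->inuse)) {
		/* Was partial or full, now empty. */
		list_del(&slabp->list);
		/* buffer at most one empty slab per cache */
		if (list_empty(&cachep->slabs_free))
			list_add(&slabp->list, &cachep->slabs_free);
		else
			kmem_slab_destroy(cachep, slabp);
	}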


* Re: 2.5.39 kmem_cache bug
  2002-09-28 20:56 ` Andrew Morton
@ 2002-09-28 21:12   ` Manfred Spraul
  2002-09-28 21:23   ` John Levon
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 10+ messages in thread
From: Manfred Spraul @ 2002-09-28 21:12 UTC (permalink / raw)
  To: Andrew Morton; +Cc: John Levon, Ed Tomlinson, linux-kernel

Andrew Morton wrote:
> John Levon wrote:
> 
>>kmem_cache_destroy() is falsely reporting
>>"kmem_cache_destroy: Can't free all objects" in 2.5.39. I have
>>verified my code was freeing all allocated items correctly.
>>
>>Reverting this chunk :
>>
>>-                       list_add(&slabp->list, &cachep->slabs_free);
>>+/*                     list_add(&slabp->list, &cachep->slabs_free);            */
>>+                       if (unlikely(list_empty(&cachep->slabs_partial)))
>>+                               list_add(&slabp->list, &cachep->slabs_partial);
>>+                       else
>>+                               kmem_slab_destroy(cachep, slabp);
>>
>>and the problem goes away. I haven't investigated why.
>>
> 
> 
> Thanks.  That's the code which leaves one empty page available
> for new allocations rather than freeing it immediately.
> 
> It's temporary.  Ed, I think we can just do
> 
> 	if (list_empty(&cachep->slabs_free))
> 		list_add(&slabp->list, &cachep->slabs_free);
> 	else
> 		kmem_slab_destroy(cachep, slabp);
> 
> there?

Looks correct.
If you apply it, then also re-enable the BUG check in s_show() that fires
when a slab with 0 allocations is found on the partial list.
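
i.e. roughly, in the slabs_partial walk:

	list_for_each(q, &cachep->slabs_partial) {
		slabp = list_entry(q, slab_t, list);
		/* a slab on the partial list must be neither full nor empty */
		BUG_ON(slabp->inuse == cachep->num || !slabp->inuse);
		active_objs += slabp->inuse;
		active_slabs++;
	}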

--
	Manfred



* Re: 2.5.39 kmem_cache bug
  2002-09-28 20:56 ` Andrew Morton
  2002-09-28 21:12   ` Manfred Spraul
@ 2002-09-28 21:23   ` John Levon
  2002-09-28 21:35     ` Andrew Morton
  2002-09-29 11:45   ` Ed Tomlinson
  2002-09-29 13:15   ` Ed Tomlinson
  3 siblings, 1 reply; 10+ messages in thread
From: John Levon @ 2002-09-28 21:23 UTC (permalink / raw)
  To: Andrew Morton; +Cc: Ed Tomlinson, Manfred Spraul, linux-kernel

On Sat, Sep 28, 2002 at 01:56:55PM -0700, Andrew Morton wrote:

> 	if (list_empty(&cachep->slabs_free))
> 		list_add(&slabp->list, &cachep->slabs_free);
> 	else
> 		kmem_slab_destroy(cachep, slabp);

This seems to work for me on a quick test.

thanks
john

-- 
"When your name is Winner, that's it. You don't need a nickname."
	- Loser Lane


* Re: 2.5.39 kmem_cache bug
  2002-09-28 21:23   ` John Levon
@ 2002-09-28 21:35     ` Andrew Morton
  0 siblings, 0 replies; 10+ messages in thread
From: Andrew Morton @ 2002-09-28 21:35 UTC (permalink / raw)
  To: John Levon; +Cc: Ed Tomlinson, Manfred Spraul, linux-kernel

John Levon wrote:
> 
> On Sat, Sep 28, 2002 at 01:56:55PM -0700, Andrew Morton wrote:
> 
> >       if (list_empty(&cachep->slabs_free))
> >               list_add(&slabp->list, &cachep->slabs_free);
> >       else
> >               kmem_slab_destroy(cachep, slabp);
> 
> This seems to work for me on a quick test.
> 

Thanks.  I'll send the patch below.




Slab currently has a policy of buffering a single spare page per cache.
We're putting that on the partially-full list, which confuses
kmem_cache_destroy().

So put it on cachep->slabs_free, which is where empty pages go.




 mm/slab.c |    9 ++++-----
 1 files changed, 4 insertions(+), 5 deletions(-)

--- 2.5.39/mm/slab.c~slab-fix	Sat Sep 28 14:20:52 2002
+++ 2.5.39-akpm/mm/slab.c	Sat Sep 28 14:23:50 2002
@@ -1499,9 +1499,9 @@ static inline void kmem_cache_free_one(k
 		if (unlikely(!--slabp->inuse)) {
 			/* Was partial or full, now empty. */
 			list_del(&slabp->list);
-/*			list_add(&slabp->list, &cachep->slabs_free); 		*/
-			if (unlikely(list_empty(&cachep->slabs_partial)))
-				list_add(&slabp->list, &cachep->slabs_partial);
+			/* We only buffer a single page */
+			if (list_empty(&cachep->slabs_free))
+				list_add(&slabp->list, &cachep->slabs_free);
 			else
 				kmem_slab_destroy(cachep, slabp);
 		} else if (unlikely(inuse == cachep->num)) {
@@ -1977,8 +1977,7 @@ static int s_show(struct seq_file *m, vo
 	}
 	list_for_each(q,&cachep->slabs_partial) {
 		slabp = list_entry(q, slab_t, list);
-		if (slabp->inuse == cachep->num)
-			BUG();
+		BUG_ON(slabp->inuse == cachep->num || !slabp->inuse);
 		active_objs += slabp->inuse;
 		active_slabs++;
 	}

.


* Re: 2.5.39 kmem_cache bug
  2002-09-28 20:56 ` Andrew Morton
  2002-09-28 21:12   ` Manfred Spraul
  2002-09-28 21:23   ` John Levon
@ 2002-09-29 11:45   ` Ed Tomlinson
  2002-09-29 12:13     ` Manfred Spraul
  2002-09-29 13:15   ` Ed Tomlinson
  3 siblings, 1 reply; 10+ messages in thread
From: Ed Tomlinson @ 2002-09-29 11:45 UTC (permalink / raw)
  To: Andrew Morton, John Levon, Manfred Spraul; +Cc: linux-kernel

On September 28, 2002 04:56 pm, Andrew Morton wrote:
> John Levon wrote:
> > kmem_cache_destroy() is falsely reporting
> > "kmem_cache_destroy: Can't free all objects" in 2.5.39. I have
> > verified my code was freeing all allocated items correctly.
> >
> > Reverting this chunk :
> >
> > -                       list_add(&slabp->list, &cachep->slabs_free);
> > +/*                     list_add(&slabp->list, &cachep->slabs_free);            */
> > +                       if (unlikely(list_empty(&cachep->slabs_partial)))
> > +                               list_add(&slabp->list, &cachep->slabs_partial);
> > +                       else
> > +                               kmem_slab_destroy(cachep, slabp);
> >
> > and the problem goes away. I haven't investigated why.
>
> Thanks.  That's the code which leaves one empty page available
> for new allocations rather than freeing it immediately.
>
> It's temporary.  Ed, I think we can just do
>
> 	if (list_empty(&cachep->slabs_free))
> 		list_add(&slabp->list, &cachep->slabs_free);
> 	else
> 		kmem_slab_destroy(cachep, slabp);
>
> there?

Yes, we can do this.  I would rather fix kmem_cache_destroy() though.  I think that, if 
we play our cards right, we can get rid of the cachep->slabs_free list without too
much pain.

Ed



* Re: 2.5.39 kmem_cache bug
  2002-09-29 11:45   ` Ed Tomlinson
@ 2002-09-29 12:13     ` Manfred Spraul
  0 siblings, 0 replies; 10+ messages in thread
From: Manfred Spraul @ 2002-09-29 12:13 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: Andrew Morton, John Levon, linux-kernel

Ed Tomlinson wrote:
> 
> Yes, we can do this.  I would rather fix kmem_cache_destroy() though.  I think that, if 
> we play our cards right, we can get rid of the cachep->slabs_free list without too
> much pain.
> 
Please - let's check first whether the free list is actually a problem 
before deciding to kill it.

If you remove the free list, it becomes impossible to find the freeable 
slab once another (partial) slab has been added to the partial list afterwards.

And I'm definitely against locking up one slab in each cache - it 
could be an order=5 allocation. It would be possible to hack around that 
(if the allocation is high-order, then partial slabs do not exist), but that's 
too ugly to think about.
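
Without a dedicated free list, finding freeable slabs later comes down to
scanning the whole partial list, roughly:

	struct list_head *p;
	slab_t *slabp;

	/* once real partial slabs have been added, empty slabs can sit
	 * anywhere on slabs_partial, so every entry has to be checked */
	list_for_each(p, &cachep->slabs_partial) {
		slabp = list_entry(p, slab_t, list);
		if (!slabp->inuse) {
			/* freeable slab */
		}
	}

And with 4k pages an order=5 slab is 2^5 = 32 contiguous pages, i.e. 128k 
pinned per cache just for the buffered slab.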

--
	Manfred




* Re: 2.5.39 kmem_cache bug
  2002-09-28 20:56 ` Andrew Morton
                     ` (2 preceding siblings ...)
  2002-09-29 11:45   ` Ed Tomlinson
@ 2002-09-29 13:15   ` Ed Tomlinson
  2002-09-29 13:52     ` Manfred Spraul
  2002-09-29 13:53     ` John Levon
  3 siblings, 2 replies; 10+ messages in thread
From: Ed Tomlinson @ 2002-09-29 13:15 UTC (permalink / raw)
  To: Andrew Morton, John Levon, Manfred Spraul; +Cc: linux-kernel

On September 28, 2002 04:56 pm, Andrew Morton wrote:
> John Levon wrote:
> > kmem_cache_destroy() is falsely reporting
> > "kmem_cache_destroy: Can't free all objects" in 2.5.39. I have
> > verified my code was freeing all allocated items correctly.
> >
> > Reverting this chunk :
> >
> > -                       list_add(&slabp->list, &cachep->slabs_free);
> > +/*                     list_add(&slabp->list, &cachep->slabs_free);            */
> > +                       if (unlikely(list_empty(&cachep->slabs_partial)))
> > +                               list_add(&slabp->list, &cachep->slabs_partial);
> > +                       else
> > +                               kmem_slab_destroy(cachep, slabp);
> >
> > and the problem goes away. I haven't investigated why.
>
> Thanks.  That's the code which leaves one empty page available
> for new allocations rather than freeing it immediately.
>
> It's temporary.  Ed, I think we can just do
>
> 	if (list_empty(&cachep->slabs_free))
> 		list_add(&slabp->list, &cachep->slabs_free);
> 	else
> 		kmem_slab_destroy(cachep, slabp);
>
> there?

How about this (untested) instead?  If we can avoid using cachep->slabs_free, that's 
a good thing.  Why use three lists when two can do the job?  I use a loop to clean 
the partial list since it's possible that for some caches we may want to buffer more 
than one slabp.

Thoughts?
Ed

---------
diff -Nru a/mm/slab.c b/mm/slab.c
--- a/mm/slab.c	Sun Sep 29 09:08:53 2002
+++ b/mm/slab.c	Sun Sep 29 09:08:53 2002
@@ -1036,7 +1036,26 @@
 	list_del(&cachep->next);
 	up(&cache_chain_sem);
 
-	if (__kmem_cache_shrink(cachep)) {
+	/* remove any empty partial pages */
+	spin_lock_irq(&cachep->spinlock);
+	while (!cachep->growing) {
+		struct list_head *p;
+		slab_t *slabp;
+
+		p = cachep->slabs_partial.prev;
+		if (p == &cachep->slabs_partial)
+			break;
+
+		slabp = list_entry(cachep->slabs_partial.prev, slab_t, list);
+		if (slabp->inuse)
+			break;
+
+		list_del(&slabp->list);
+		kmem_slab_destroy(cachep, slabp);
+	}
+	spin_unlock_irq(&cachep->spinlock);
+
+	if (!list_empty(&cachep->slabs_full) || !list_empty(&cachep->slabs_partial)) {
 		printk(KERN_ERR "kmem_cache_destroy: Can't free all objects %p\n",
 		       cachep);
 		down(&cache_chain_sem);

---------



* Re: 2.5.39 kmem_cache bug
  2002-09-29 13:15   ` Ed Tomlinson
@ 2002-09-29 13:52     ` Manfred Spraul
  2002-09-29 13:53     ` John Levon
  1 sibling, 0 replies; 10+ messages in thread
From: Manfred Spraul @ 2002-09-29 13:52 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: Andrew Morton, John Levon, linux-kernel

Ed Tomlinson wrote:
>  
> -	if (__kmem_cache_shrink(cachep)) {
> +	/* remove any empty partial pages */
> +	spin_lock_irq(&cachep->spinlock);
> +	while (!cachep->growing) {
> +		struct list_head *p;
> +		slab_t *slabp;
> +

growing is guaranteed to be false here, so the check in the loop condition 
is not necessary.

Actually, growing can be removed completely; it's not necessary at all, 
either with or without kmem_cache_reap().

Ed, there are far more cleanups possible in slab.c than just going from 
3 to 2 lists, but IMHO it's far too early to make the design decision for 
2 lists.

Or wait: You want 2 lists? Ok, agreed.

free and partial ;-)

The full list is unnecessary; it's possible to replace it with a counter. 
That would also save several cacheline accesses for the list_del() and 
list_add() into the full list.
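
Roughly (untested sketch; the counter field name is made up):

	/* alloc side: the slab just became full - instead of moving it
	 * to a slabs_full list, unlink it and just count it */
	if (unlikely(slabp->free == BUFCTL_END)) {
		list_del(&slabp->list);
		cachep->full_slabs++;		/* hypothetical counter */
	}

	/* free side: first object freed from a full slab - it was on no
	 * list, so only one list_add() into the partial list is needed */
	if (unlikely(inuse == cachep->num)) {
		cachep->full_slabs--;
		list_add(&slabp->list, &cachep->slabs_partial);
	}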

--
	Manfred



* Re: 2.5.39 kmem_cache bug
  2002-09-29 13:15   ` Ed Tomlinson
  2002-09-29 13:52     ` Manfred Spraul
@ 2002-09-29 13:53     ` John Levon
  1 sibling, 0 replies; 10+ messages in thread
From: John Levon @ 2002-09-29 13:53 UTC (permalink / raw)
  To: Ed Tomlinson; +Cc: Andrew Morton, Manfred Spraul, linux-kernel

On Sun, Sep 29, 2002 at 09:15:52AM -0400, Ed Tomlinson wrote:

> How about this (untested) instead?  If we can avoid using cachep->slabs_free, that's 
> a good thing.  Why use three lists when two can do the job?  I use a loop to clean 
> the partial list since it's possible that for some caches we may want to buffer more 
> than one slabp.

This patch seems to work for me too, though I have tested it only lightly.

At least it fixes the false kmem_cache_destroy() report against 2.5.39.

regards
john

-- 
"When your name is Winner, that's it. You don't need a nickname."
	- Loser Lane


Thread overview: 10+ messages
2002-09-28 20:13 2.5.39 kmem_cache bug John Levon
2002-09-28 20:56 ` Andrew Morton
2002-09-28 21:12   ` Manfred Spraul
2002-09-28 21:23   ` John Levon
2002-09-28 21:35     ` Andrew Morton
2002-09-29 11:45   ` Ed Tomlinson
2002-09-29 12:13     ` Manfred Spraul
2002-09-29 13:15   ` Ed Tomlinson
2002-09-29 13:52     ` Manfred Spraul
2002-09-29 13:53     ` John Levon
