* [PATCH] slub: remove one code path and reduce lock contention in __slab_free()
@ 2012-07-27 20:17 Joonsoo Kim
2012-07-27 20:46 ` Christoph Lameter
0 siblings, 1 reply; 4+ messages in thread
From: Joonsoo Kim @ 2012-07-27 20:17 UTC (permalink / raw)
To: Pekka Enberg; +Cc: linux-kernel, linux-mm, Joonsoo Kim, Christoph Lameter
When we try to free an object, in some cases we need to take the
node's list lock. This is a necessary step for preventing a race.
After taking the lock, we try cmpxchg_double_slab().
But there is a possible scenario in which cmpxchg_double_slab() fails.
The following example illustrates it.
    CPU A               CPU B
    need lock
    ...                 need lock
    ...                 lock!!
    lock..but spin      free success
    spin...             unlock
    lock!!
    free fail
In this case, CPU A retries while still holding the lock.
I think that in this case,
"release the lock first, and re-take it only if necessary" is the preferable way.
There are two reasons for this.
First, it makes __slab_free()'s logic somewhat simpler.
Second, it may reduce lock contention:
when we retry, the status of the slab has already changed,
so in almost every case we no longer need the lock at all.
The "release the lock first, and re-take it only if necessary" policy
helps with this.
Signed-off-by: Joonsoo Kim <js1304@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
---
This is v2 of "slub: release a lock if freeing object with a lock is failed in __slab_free()"
Subject and commit log are changed from v1.
The code is the same as v1.
diff --git a/mm/slub.c b/mm/slub.c
index ca778e5..efce427 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2421,7 +2421,6 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
void *prior;
void **object = (void *)x;
int was_frozen;
- int inuse;
struct page new;
unsigned long counters;
struct kmem_cache_node *n = NULL;
@@ -2433,13 +2432,17 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
return;
do {
+ if (unlikely(n)) {
+ spin_unlock_irqrestore(&n->list_lock, flags);
+ n = NULL;
+ }
prior = page->freelist;
counters = page->counters;
set_freepointer(s, object, prior);
new.counters = counters;
was_frozen = new.frozen;
new.inuse--;
- if ((!new.inuse || !prior) && !was_frozen && !n) {
+ if ((!new.inuse || !prior) && !was_frozen) {
if (!kmem_cache_debug(s) && !prior)
@@ -2464,7 +2467,6 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
}
}
- inuse = new.inuse;
} while (!cmpxchg_double_slab(s, page,
prior, counters,
@@ -2490,25 +2492,17 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
return;
}
+ if (unlikely(!new.inuse && n->nr_partial > s->min_partial))
+ goto slab_empty;
+
/*
- * was_frozen may have been set after we acquired the list_lock in
- * an earlier loop. So we need to check it here again.
+ * Objects left in the slab. If it was not on the partial list before
+ * then add it.
*/
- if (was_frozen)
- stat(s, FREE_FROZEN);
- else {
- if (unlikely(!inuse && n->nr_partial > s->min_partial))
- goto slab_empty;
-
- /*
- * Objects left in the slab. If it was not on the partial list before
- * then add it.
- */
- if (unlikely(!prior)) {
- remove_full(s, page);
- add_partial(n, page, DEACTIVATE_TO_TAIL);
- stat(s, FREE_ADD_PARTIAL);
- }
+ if (kmem_cache_debug(s) && unlikely(!prior)) {
+ remove_full(s, page);
+ add_partial(n, page, DEACTIVATE_TO_TAIL);
+ stat(s, FREE_ADD_PARTIAL);
}
spin_unlock_irqrestore(&n->list_lock, flags);
return;
--
1.7.9.5
* Re: [PATCH] slub: remove one code path and reduce lock contention in __slab_free()
2012-07-27 20:17 [PATCH] slub: remove one code path and reduce lock contention in __slab_free() Joonsoo Kim
@ 2012-07-27 20:46 ` Christoph Lameter
2012-07-28 13:56 ` JoonSoo Kim
0 siblings, 1 reply; 4+ messages in thread
From: Christoph Lameter @ 2012-07-27 20:46 UTC (permalink / raw)
To: Joonsoo Kim; +Cc: Pekka Enberg, linux-kernel, linux-mm
On Sat, 28 Jul 2012, Joonsoo Kim wrote:
> Subject and commit log are changed from v1.
That looks a bit better. But the changelog could use more cleanup and
clearer expression.
> @@ -2490,25 +2492,17 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
> return;
> }
>
> + if (unlikely(!new.inuse && n->nr_partial > s->min_partial))
> + goto slab_empty;
> +
So we can never encounter an empty slab that was frozen before? Really?
Remote frees can decrement inuse again. All objects of a slab frozen on
one cpu could be allocated while the slab is still frozen. The
unfreezing requires slab_alloc to encounter a NULL pointer after all.
A remote processor could obtain a pointer to all these objects and free
them. The code here would cause an unfreeze action. Another alloc on the
first processor would cause a *second* unfreeze action on a page that was
freed.
* Re: [PATCH] slub: remove one code path and reduce lock contention in __slab_free()
2012-07-27 20:46 ` Christoph Lameter
@ 2012-07-28 13:56 ` JoonSoo Kim
2012-07-30 19:12 ` Christoph Lameter
0 siblings, 1 reply; 4+ messages in thread
From: JoonSoo Kim @ 2012-07-28 13:56 UTC (permalink / raw)
To: Christoph Lameter; +Cc: Pekka Enberg, linux-kernel, linux-mm
2012/7/28 Christoph Lameter <cl@linux.com>:
> On Sat, 28 Jul 2012, Joonsoo Kim wrote:
>
>> Subject and commit log are changed from v1.
>
> That looks a bit better. But the changelog could use more cleanup and
> clearer expression.
>
>> @@ -2490,25 +2492,17 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
>> return;
>> }
>>
>> + if (unlikely(!new.inuse && n->nr_partial > s->min_partial))
>> + goto slab_empty;
>> +
>
> So we can never encounter an empty slab that was frozen before? Really?
In my suggestion, the 'was_frozen = 1' case is "always" handled without taking a lock.
Then we never hit the following code.
+ if (unlikely(!new.inuse && n->nr_partial > s->min_partial))
+ goto slab_empty;
+
Instead, we hit the following code.
if (likely(!n)) {
/*
* If we just froze the page then put it onto the
* per cpu partial list.
*/
if (new.frozen && !was_frozen) {
put_cpu_partial(s, page, 1);
stat(s, CPU_PARTIAL_FREE);
}
/*
* The list lock was not taken therefore no list
* activity can be necessary.
*/
if (was_frozen)
stat(s, FREE_FROZEN);
return;
}
So even if we encounter an empty slab that was frozen before, we just
do "stat(s, FREE_FROZEN)".
Please let me know if my answer is sufficient.
Thanks!!
* Re: [PATCH] slub: remove one code path and reduce lock contention in __slab_free()
2012-07-28 13:56 ` JoonSoo Kim
@ 2012-07-30 19:12 ` Christoph Lameter
0 siblings, 0 replies; 4+ messages in thread
From: Christoph Lameter @ 2012-07-30 19:12 UTC (permalink / raw)
To: JoonSoo Kim; +Cc: Pekka Enberg, linux-kernel, linux-mm
On Sat, 28 Jul 2012, JoonSoo Kim wrote:
> 2012/7/28 Christoph Lameter <cl@linux.com>:
> > On Sat, 28 Jul 2012, Joonsoo Kim wrote:
> >
> >> Subject and commit log are changed from v1.
> >
> > That looks a bit better. But the changelog could use more cleanup and
> > clearer expression.
> >
> >> @@ -2490,25 +2492,17 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
> >> return;
> >> }
> >>
> >> + if (unlikely(!new.inuse && n->nr_partial > s->min_partial))
> >> + goto slab_empty;
> >> +
> >
> > So we can never encounter an empty slab that was frozen before? Really?
>
> In my suggestion, the 'was_frozen = 1' case is "always" handled without taking a lock.
Yep, that is true with this patch.
> Then we never hit the following code.
> + if (unlikely(!new.inuse && n->nr_partial > s->min_partial))
> + goto slab_empty;
> +
Correct.
> Instead, we hit the following code.
> if (likely(!n)) {
>
> /*
> * If we just froze the page then put it onto the
> * per cpu partial list.
> */
> if (new.frozen && !was_frozen) {
> put_cpu_partial(s, page, 1);
> stat(s, CPU_PARTIAL_FREE);
> }
> /*
> * The list lock was not taken therefore no list
> * activity can be necessary.
> */
> if (was_frozen)
> stat(s, FREE_FROZEN);
> return;
> }
>
> So even if we encounter an empty slab that was frozen before, we just
> do "stat(s, FREE_FROZEN)".
> Please let me know if my answer is sufficient.
Yes.
Acked-by: Christoph Lameter <cl@linux.com>