* [PATCH] slub: Use the correct per cpu slab on CPU_DEAD
@ 2012-10-27 19:18 ` Thomas Gleixner
0 siblings, 0 replies; 6+ messages in thread
From: Thomas Gleixner @ 2012-10-27 19:18 UTC (permalink / raw)
To: Christoph Lameter; +Cc: linux-mm, LKML
While making SLUB available for RT I noticed that during CPU offline,
__flush_cpu_slab() is called on a live CPU for each kmem_cache. This
correctly flushes the cpu_slab of the dead CPU via flush_slab(). However,
unfreeze_partials(), which is called from __flush_cpu_slab() after that,
looks at the cpu_slab of the CPU on which it is called. So we fail
to look at the partials of the dead CPU.
Correct this by extending the arguments of unfreeze_partials() with the
target CPU number and using per_cpu_ptr() instead of this_cpu_ptr().
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
---
mm/slub.c | 8 ++++----
1 file changed, 4 insertions(+), 4 deletions(-)
Index: linux-2.6/mm/slub.c
===================================================================
--- linux-2.6.orig/mm/slub.c
+++ linux-2.6/mm/slub.c
@@ -1874,10 +1874,10 @@ redo:
*
* This function must be called with interrupt disabled.
*/
-static void unfreeze_partials(struct kmem_cache *s)
+static void unfreeze_partials(struct kmem_cache *s, unsigned int cpu)
{
struct kmem_cache_node *n = NULL, *n2 = NULL;
- struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
+ struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
struct page *page, *discard_page = NULL;
while ((page = c->partial)) {
@@ -1963,7 +1963,7 @@ static int put_cpu_partial(struct kmem_c
* set to the per node partial list.
*/
local_irq_save(flags);
- unfreeze_partials(s);
+ unfreeze_partials(s, smp_processor_id());
local_irq_restore(flags);
oldpage = NULL;
pobjects = 0;
@@ -2006,7 +2006,7 @@ static inline void __flush_cpu_slab(stru
if (c->page)
flush_slab(s, c);
- unfreeze_partials(s);
+ unfreeze_partials(s, cpu);
}
}
* Re: [PATCH] slub: Use the correct per cpu slab on CPU_DEAD
2012-10-27 19:18 ` Thomas Gleixner
@ 2012-10-30 15:29 ` Christoph Lameter
0 siblings, 0 replies; 6+ messages in thread
From: Christoph Lameter @ 2012-10-30 15:29 UTC (permalink / raw)
To: Thomas Gleixner; +Cc: linux-mm, LKML
On Sat, 27 Oct 2012, Thomas Gleixner wrote:
> Correct this by extending the arguments of unfreeze_partials() with the
> target CPU number and using per_cpu_ptr() instead of this_cpu_ptr().
Passing the kmem_cache_cpu pointer instead simplifies this a bit and avoids
a per_cpu_ptr() operation. That reduces the code somewhat and adds no
operations to the fast path.
Subject: Use correct cpu_slab on dead cpu
Pass a kmem_cache_cpu pointer into unfreeze_partials() so that a
kmem_cache_cpu structure other than the local one can be specified.
Reported-by: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Christoph Lameter <cl@linux.com>
Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c 2012-10-30 10:23:33.040649727 -0500
+++ linux/mm/slub.c 2012-10-30 10:25:03.401312250 -0500
@@ -1874,10 +1874,10 @@ redo:
*
* This function must be called with interrupt disabled.
*/
-static void unfreeze_partials(struct kmem_cache *s)
+static void unfreeze_partials(struct kmem_cache *s,
+ struct kmem_cache_cpu *c)
{
struct kmem_cache_node *n = NULL, *n2 = NULL;
- struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
struct page *page, *discard_page = NULL;
while ((page = c->partial)) {
@@ -1963,7 +1963,7 @@ static int put_cpu_partial(struct kmem_c
* set to the per node partial list.
*/
local_irq_save(flags);
- unfreeze_partials(s);
+ unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
local_irq_restore(flags);
oldpage = NULL;
pobjects = 0;
@@ -2006,7 +2006,7 @@ static inline void __flush_cpu_slab(stru
if (c->page)
flush_slab(s, c);
- unfreeze_partials(s);
+ unfreeze_partials(s, c);
}
}
* Re: [PATCH] slub: Use the correct per cpu slab on CPU_DEAD
2012-10-30 15:29 ` Christoph Lameter
@ 2012-10-30 17:01 ` Thomas Gleixner
0 siblings, 0 replies; 6+ messages in thread
From: Thomas Gleixner @ 2012-10-30 17:01 UTC (permalink / raw)
To: Christoph Lameter; +Cc: linux-mm, LKML
On Tue, 30 Oct 2012, Christoph Lameter wrote:
> On Sat, 27 Oct 2012, Thomas Gleixner wrote:
>
> > Correct this by extending the arguments of unfreeze_partials() with the
> > target CPU number and using per_cpu_ptr() instead of this_cpu_ptr().
>
> Passing the kmem_cache_cpu pointer instead simplifies this a bit and avoids
> a per_cpu_ptr() operation. That reduces the code somewhat and adds no
> operations to the fast path.
>
>
> Subject: Use correct cpu_slab on dead cpu
>
> Pass a kmem_cache_cpu pointer into unfreeze_partials() so that a
> kmem_cache_cpu structure other than the local one can be specified.
>
> Reported-by: Thomas Gleixner <tglx@linutronix.de>
> Signed-off-by: Christoph Lameter <cl@linux.com>
Yep. That looks less ugly :)
Acked-by: Thomas Gleixner <tglx@linutronix.de>
> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c 2012-10-30 10:23:33.040649727 -0500
> +++ linux/mm/slub.c 2012-10-30 10:25:03.401312250 -0500
> @@ -1874,10 +1874,10 @@ redo:
> *
> * This function must be called with interrupt disabled.
> */
> -static void unfreeze_partials(struct kmem_cache *s)
> +static void unfreeze_partials(struct kmem_cache *s,
> + struct kmem_cache_cpu *c)
> {
> struct kmem_cache_node *n = NULL, *n2 = NULL;
> - struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
> struct page *page, *discard_page = NULL;
>
> while ((page = c->partial)) {
> @@ -1963,7 +1963,7 @@ static int put_cpu_partial(struct kmem_c
> * set to the per node partial list.
> */
> local_irq_save(flags);
> - unfreeze_partials(s);
> + unfreeze_partials(s, this_cpu_ptr(s->cpu_slab));
> local_irq_restore(flags);
> oldpage = NULL;
> pobjects = 0;
> @@ -2006,7 +2006,7 @@ static inline void __flush_cpu_slab(stru
> if (c->page)
> flush_slab(s, c);
>
> - unfreeze_partials(s);
> + unfreeze_partials(s, c);
> }
> }
>
>
end of thread, other threads:[~2012-10-30 17:01 UTC | newest]
Thread overview: 6+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2012-10-27 19:18 [PATCH] slub: Use the correct per cpu slab on CPU_DEAD Thomas Gleixner
2012-10-30 15:29 ` Christoph Lameter
2012-10-30 17:01 ` Thomas Gleixner