linux-kernel.vger.kernel.org archive mirror
* [PATCH v6 5/8] slub: only IPI CPUs that have per cpu obj to flush
@ 2012-01-08 16:27 Gilad Ben-Yossef
  2012-01-11  7:04 ` Milton Miller
  0 siblings, 1 reply; 3+ messages in thread
From: Gilad Ben-Yossef @ 2012-01-08 16:27 UTC (permalink / raw)
  To: linux-kernel
  Cc: Gilad Ben-Yossef, Christoph Lameter, Chris Metcalf,
	Peter Zijlstra, Frederic Weisbecker, Russell King, linux-mm,
	Pekka Enberg, Matt Mackall, Sasha Levin, Rik van Riel,
	Andi Kleen, Mel Gorman, Andrew Morton, Alexander Viro,
	linux-fsdevel, Avi Kivity, Michal Nazarewicz, Kosaki Motohiro

flush_all() is called for each kmem_cache_destroy(). So every cache
being destroyed dynamically ended up sending an IPI to each CPU in the
system, regardless of whether the cache has ever been used there.

For example, if you close the Infiniband ipath driver char device file,
the close file ops calls kmem_cache_destroy(). So running some
infiniband config tool on a single CPU dedicated to system tasks
might interrupt the rest of the 127 CPUs I dedicated to some CPU
intensive task.

I suspect there is a good chance that every line in the output of "git
grep kmem_cache_destroy linux/ | grep '\->'" has a similar scenario.

This patch attempts to rectify this issue by sending an IPI to flush
the per cpu objects back to the free lists only to CPUs that seems to
have such objects.

The check of which CPUs to IPI is racy, but we don't care, since asking
a CPU without per cpu objects to flush does no damage, and as far as I
can tell flush_all by itself is racy against allocs on remote
CPUs anyway, so if you meant the flush_all to be deterministic, you
had to arrange for locking regardless.

Without this patch the following artificial test case:

$ cd /sys/kernel/slab
$ for DIR in *; do cat $DIR/alloc_calls > /dev/null; done

produces 166 IPIs on a cpuset-isolated CPU. With it, it produces none.

The memory allocation failure code path for the CPUMASK_OFFSTACK=y
config was tested using the fault injection framework.

Signed-off-by: Gilad Ben-Yossef <gilad@benyossef.com>
CC: Christoph Lameter <cl@linux.com>
CC: Chris Metcalf <cmetcalf@tilera.com>
CC: Peter Zijlstra <a.p.zijlstra@chello.nl>
CC: Frederic Weisbecker <fweisbec@gmail.com>
CC: Russell King <linux@arm.linux.org.uk>
CC: linux-mm@kvack.org
CC: Pekka Enberg <penberg@kernel.org>
CC: Matt Mackall <mpm@selenic.com>
CC: Sasha Levin <levinsasha928@gmail.com>
CC: Rik van Riel <riel@redhat.com>
CC: Andi Kleen <andi@firstfloor.org>
CC: Mel Gorman <mel@csn.ul.ie>
CC: Andrew Morton <akpm@linux-foundation.org>
CC: Alexander Viro <viro@zeniv.linux.org.uk>
CC: linux-fsdevel@vger.kernel.org
CC: Avi Kivity <avi@redhat.com>
CC: Michal Nazarewicz <mina86@mina86.org>
CC: Kosaki Motohiro <kosaki.motohiro@gmail.com>
---
 mm/slub.c |   10 +++++++++-
 1 files changed, 9 insertions(+), 1 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 09ccee8..31833d6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2013,9 +2013,17 @@ static void flush_cpu_slab(void *d)
 	__flush_cpu_slab(s, smp_processor_id());
 }
 
+static int has_cpu_slab(int cpu, void *info)
+{
+	struct kmem_cache *s = info;
+	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
+
+	return !!(c->page);
+}
+
 static void flush_all(struct kmem_cache *s)
 {
-	on_each_cpu(flush_cpu_slab, s, 1);
+	on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1);
 }
 
 /*
-- 
1.7.0.4



* Re: [PATCH v6 5/8] slub: only IPI CPUs that have per cpu obj to flush
  2012-01-08 16:27 [PATCH v6 5/8] slub: only IPI CPUs that have per cpu obj to flush Gilad Ben-Yossef
@ 2012-01-11  7:04 ` Milton Miller
  2012-01-18 12:09   ` Gilad Ben-Yossef
  0 siblings, 1 reply; 3+ messages in thread
From: Milton Miller @ 2012-01-11  7:04 UTC (permalink / raw)
  To: Gilad Ben-Yossef, linux-kernel
  Cc: Christoph Lameter, Michal Nazarewicz, Mel Gorman,
	KOSAKI Motohiro, Chris Metcalf, Peter Zijlstra,
	Frederic Weisbecker, Russell King, linux-mm, Pekka Enberg,
	Matt Mackall, Rik van Riel, Andi Kleen, Sasha Levin,
	Andrew Morton, Alexander Viro, linux-fsdevel, Avi Kivity


On Sun Jan 08 2012 about 11:28:11 EST, Gilad Ben-Yossef wrote:
> flush_all() is called for each kmem_cahce_destroy(). So every cache
> being destroyed dynamically ended up sending an IPI to each CPU in the

ends up

> system, regardless if the cache has ever been used there.
> 
> For example, if you close the Infinband ipath driver char device file,
> the close file ops calls kmem_cache_destroy(). So running some
> infiniband config tool on one a single CPU dedicated to system tasks
> might interrupt the rest of the 127 CPUs I dedicated to some CPU

127 CPUs dedicated

> intensive task.

CPU intensive or latency sensitive task.


> 
> I suspect there is a good chance that every line in the output of "git
> grep kmem_cache_destroy linux/ | grep '\->'" has a similar scenario.
> 
> This patch attempts to rectify this issue by sending an IPI to flush
> the per cpu objects back to the free lists only to CPUs that seems to
> have such objects.

that seem to have

> 
> The check which CPU to IPI is racy but we don't care since asking a
> CPU without per cpu objects to flush does no damage and as far as I
> can tell the flush_all by itself is racy against allocs on remote
> CPUs anyway, so if you meant the flush_all to be determinstic, you

required (vs meant)

> had to arrange for locking regardless.
> 
> Without this patch the following artificial test case:
> 
> $ cd /sys/kernel/slab
> $ for DIR in *; do cat $DIR/alloc_calls > /dev/null; done
> 
> produces 166 IPIs on an cpuset isolated CPU. With it it produces none.
> 
> The code path of memory allocation failure for CPUMASK_OFFSTACK=y
> config was tested using fault injection framework.
> 
..
> mm/slub.c | 10 +++++++++-
> 1 files changed, 9 insertions(+), 1 deletions(-)
> 
> diff --git a/mm/slub.c b/mm/slub.c
> index 09ccee8..31833d6 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -2013,9 +2013,17 @@ static void flush_cpu_slab(void *d)
> __flush_cpu_slab(s, smp_processor_id());
> }
> 
> +static int has_cpu_slab(int cpu, void *info)
> +{
> + struct kmem_cache *s = info;
> + struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
> +
> + return !!(c->page);

__flush_cpu_slab is careful to test that the per_cpu_ptr is not
NULL before referencing the page field.  free_percpu likewise ignores
NULL pointers.  We need to check !!(c && c->page) here.


[change int to bool assuming you make the change in the other patch].

> +}
> +
> static void flush_all(struct kmem_cache *s)
> {
> - on_each_cpu(flush_cpu_slab, s, 1);
> + on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1);
> }

milton


* Re: [PATCH v6 5/8] slub: only IPI CPUs that have per cpu obj to flush
  2012-01-11  7:04 ` Milton Miller
@ 2012-01-18 12:09   ` Gilad Ben-Yossef
  0 siblings, 0 replies; 3+ messages in thread
From: Gilad Ben-Yossef @ 2012-01-18 12:09 UTC (permalink / raw)
  To: Milton Miller
  Cc: linux-kernel, Christoph Lameter, Michal Nazarewicz, Mel Gorman,
	KOSAKI Motohiro, Chris Metcalf, Peter Zijlstra,
	Frederic Weisbecker, Russell King, linux-mm, Pekka Enberg,
	Matt Mackall, Rik van Riel, Andi Kleen, Sasha Levin,
	Andrew Morton, Alexander Viro, linux-fsdevel, Avi Kivity

On Wed, Jan 11, 2012 at 9:04 AM, Milton Miller <miltonm@bga.com> wrote:

>
> > mm/slub.c | 10 +++++++++-
> > 1 files changed, 9 insertions(+), 1 deletions(-)
> >
> > diff --git a/mm/slub.c b/mm/slub.c
> > index 09ccee8..31833d6 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -2013,9 +2013,17 @@ static void flush_cpu_slab(void *d)
> > __flush_cpu_slab(s, smp_processor_id());
> > }
> >
> > +static int has_cpu_slab(int cpu, void *info)
> > +{
> > + struct kmem_cache *s = info;
> > + struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
> > +
> > + return !!(c->page);
>
> __flush_cpu_slab is careful to test that the per_cpu_ptr is not
> NULL before referencing the page field.  free_percpu likewise ignores
> NULL pointers.  We need to check !!(c && c->page) here.
>
This is indeed what I did in the first iterations, but Christoph indicated that
c could never be NULL in his review of the patch. See:
https://lkml.org/lkml/2011/11/15/207

I integrated all the other review comments on this patch though. Thanks!
Gilad

--
Gilad Ben-Yossef
Chief Coffee Drinker
gilad@benyossef.com
Israel Cell: +972-52-8260388
US Cell: +1-973-8260388
http://benyossef.com

"Unfortunately, cache misses are an equal opportunity pain provider."
-- Mike Galbraith, LKML

