linux-kernel.vger.kernel.org archive mirror
* [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG
@ 2018-06-19 21:33 Shakeel Butt
  2018-06-19 21:38 ` Shakeel Butt
                   ` (4 more replies)
  0 siblings, 5 replies; 8+ messages in thread
From: Shakeel Butt @ 2018-06-19 21:33 UTC (permalink / raw)
  To: Jason A . Donenfeld
  Cc: linux-mm, linux-kernel, Shakeel Butt, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, stable

For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
allocated per node for a kmem_cache. Thus, slabs_node() in
__kmem_cache_empty() will always return 0. In that case, the per-cpu
slabs must be checked as well to determine whether a kmem_cache is
empty.

Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
per-cpu slabs.

Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reported-by: Jason A . Donenfeld <Jason@zx2c4.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@vger.kernel.org>
---
 mm/slub.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index a3b8467c14af..731c02b371ae 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 
 bool __kmem_cache_empty(struct kmem_cache *s)
 {
-	int node;
+	int cpu, node;
 	struct kmem_cache_node *n;
 
+	/*
+	 * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
+	 * check slabs for all cpus.
+	 */
+	if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
+		for_each_online_cpu(cpu) {
+			struct kmem_cache_cpu *c;
+
+			c = per_cpu_ptr(s->cpu_slab, cpu);
+			if (c->page || slub_percpu_partial(c))
+				return false;
+		}
+	}
+
 	for_each_kmem_cache_node(s, node, n)
 		if (n->nr_partial || slabs_node(s, node))
 			return false;
-- 
2.18.0.rc1.244.gcf134e6275-goog
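The per-cpu check added by the patch above can be modeled in plain userspace C. This is a toy sketch, not kernel code: the structure layouts, NR_CPUS, and the single-node simplification are all stand-ins for the real kernel types.

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

#define NR_CPUS 4

/* Toy stand-ins for the kernel structures (hypothetical, for illustration). */
struct kmem_cache_cpu {
	void *page;    /* current per-cpu slab, if any */
	void *partial; /* per-cpu partial list (CONFIG_SLUB_CPU_PARTIAL) */
};

struct kmem_cache_node {
	unsigned long nr_partial;
};

struct kmem_cache {
	struct kmem_cache_cpu cpu_slab[NR_CPUS];
	struct kmem_cache_node node; /* single node, for simplicity */
};

/* Mirrors the patched __kmem_cache_empty() logic for !CONFIG_SLUB_DEBUG:
 * since slabs_node() always reports 0, every cpu's active slab and
 * per-cpu partial list must be inspected before consulting the node's
 * partial list. */
bool cache_empty(const struct kmem_cache *s)
{
	for (int cpu = 0; cpu < NR_CPUS; cpu++) {
		const struct kmem_cache_cpu *c = &s->cpu_slab[cpu];

		if (c->page || c->partial)
			return false;
	}
	return s->node.nr_partial == 0;
}
```

Without the per-cpu loop, a cache whose only live slab is attached to a cpu would wrongly report empty, which is exactly the !CONFIG_SLUB_DEBUG bug being fixed.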


^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG
  2018-06-19 21:33 [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG Shakeel Butt
@ 2018-06-19 21:38 ` Shakeel Butt
  2018-06-19 21:53 ` Jason A. Donenfeld
                   ` (3 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Shakeel Butt @ 2018-06-19 21:38 UTC (permalink / raw)
  To: Jason, Andrey Ryabinin
  Cc: Linux MM, LKML, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, stable

On Tue, Jun 19, 2018 at 2:33 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> allocated per node for a kmem_cache. Thus, slabs_node() in
> __kmem_cache_empty() will always return 0. So, in such situation, it is
> required to check per-cpu slabs to make sure if a kmem_cache is empty or
> not.
>
> Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> per-cpu slabs.
>
> Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> Reported-by: Jason A . Donenfeld <Jason@zx2c4.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: <stable@vger.kernel.org>

Forgot to Cc: Andrey Ryabinin <aryabinin@virtuozzo.com>

> ---
>  mm/slub.c | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index a3b8467c14af..731c02b371ae 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
>
>  bool __kmem_cache_empty(struct kmem_cache *s)
>  {
> -       int node;
> +       int cpu, node;
>         struct kmem_cache_node *n;
>
> +       /*
> +        * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
> +        * check slabs for all cpus.
> +        */
> +       if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
> +               for_each_online_cpu(cpu) {
> +                       struct kmem_cache_cpu *c;
> +
> +                       c = per_cpu_ptr(s->cpu_slab, cpu);
> +                       if (c->page || slub_percpu_partial(c))
> +                               return false;
> +               }
> +       }
> +
>         for_each_kmem_cache_node(s, node, n)
>                 if (n->nr_partial || slabs_node(s, node))
>                         return false;
> --
> 2.18.0.rc1.244.gcf134e6275-goog
>


* Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG
  2018-06-19 21:33 [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG Shakeel Butt
  2018-06-19 21:38 ` Shakeel Butt
@ 2018-06-19 21:53 ` Jason A. Donenfeld
  2018-06-19 22:21 ` Andrew Morton
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 8+ messages in thread
From: Jason A. Donenfeld @ 2018-06-19 21:53 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Linux-MM, LKML, cl, penberg, rientjes, iamjoonsoo.kim,
	Andrew Morton, stable

On Tue, Jun 19, 2018 at 11:34 PM Shakeel Butt <shakeelb@google.com> wrote:
>
> For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> allocated per node for a kmem_cache. Thus, slabs_node() in
> __kmem_cache_empty() will always return 0. So, in such situation, it is
> required to check per-cpu slabs to make sure if a kmem_cache is empty or
> not.
>
> Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> per-cpu slabs.
>
> Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
> Signed-off-by: Shakeel Butt <shakeelb@google.com>
> Reported-by: Jason A . Donenfeld <Jason@zx2c4.com>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: <stable@vger.kernel.org>
> ---
>  mm/slub.c | 16 +++++++++++++++-
>  1 file changed, 15 insertions(+), 1 deletion(-)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index a3b8467c14af..731c02b371ae 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
>
>  bool __kmem_cache_empty(struct kmem_cache *s)
>  {
> -       int node;
> +       int cpu, node;
>         struct kmem_cache_node *n;
>
> +       /*
> +        * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
> +        * check slabs for all cpus.
> +        */
> +       if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
> +               for_each_online_cpu(cpu) {
> +                       struct kmem_cache_cpu *c;
> +
> +                       c = per_cpu_ptr(s->cpu_slab, cpu);
> +                       if (c->page || slub_percpu_partial(c))
> +                               return false;
> +               }
> +       }
> +
>         for_each_kmem_cache_node(s, node, n)
>                 if (n->nr_partial || slabs_node(s, node))
>                         return false;
> --
> 2.18.0.rc1.244.gcf134e6275-goog
>

I can confirm that this fixes the test case on build.wireguard.com.


* Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG
  2018-06-19 21:33 [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG Shakeel Butt
  2018-06-19 21:38 ` Shakeel Butt
  2018-06-19 21:53 ` Jason A. Donenfeld
@ 2018-06-19 22:21 ` Andrew Morton
  2018-06-20  0:49 ` David Rientjes
  2018-06-20 12:09 ` Andrey Ryabinin
  4 siblings, 0 replies; 8+ messages in thread
From: Andrew Morton @ 2018-06-19 22:21 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Jason A . Donenfeld, linux-mm, linux-kernel, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, stable

On Tue, 19 Jun 2018 14:33:52 -0700 Shakeel Butt <shakeelb@google.com> wrote:

> For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> allocated per node for a kmem_cache. Thus, slabs_node() in
> __kmem_cache_empty() will always return 0. So, in such situation, it is
> required to check per-cpu slabs to make sure if a kmem_cache is empty or
> not.
> 
> Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> per-cpu slabs.

Thanks guys.  I'll beef up this changelog a bit by adding

f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
causes crashes when using slub, as described at
http://lkml.kernel.org/r/CAHmME9rtoPwxUSnktxzKso14iuVCWT7BE_-_8PAC=pGw1iJnQg@mail.gmail.com

So that a) Greg knows why we're sending it at him and b) other people
who are seeing the same sorts of crashes in their various kernels will
know that this patch will probably fix them.



* Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG
  2018-06-19 21:33 [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG Shakeel Butt
                   ` (2 preceding siblings ...)
  2018-06-19 22:21 ` Andrew Morton
@ 2018-06-20  0:49 ` David Rientjes
  2018-06-20  1:24   ` Shakeel Butt
  2018-06-20 12:09 ` Andrey Ryabinin
  4 siblings, 1 reply; 8+ messages in thread
From: David Rientjes @ 2018-06-20  0:49 UTC (permalink / raw)
  To: Shakeel Butt
  Cc: Jason A . Donenfeld, linux-mm, linux-kernel, Christoph Lameter,
	Pekka Enberg, Joonsoo Kim, Andrew Morton, stable

On Tue, 19 Jun 2018, Shakeel Butt wrote:

> diff --git a/mm/slub.c b/mm/slub.c
> index a3b8467c14af..731c02b371ae 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
>  
>  bool __kmem_cache_empty(struct kmem_cache *s)
>  {
> -	int node;
> +	int cpu, node;

Nit: wouldn't cpu be unused if CONFIG_SLUB_DEBUG is disabled?

>  	struct kmem_cache_node *n;
>  
> +	/*
> +	 * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
> +	 * check slabs for all cpus.
> +	 */
> +	if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
> +		for_each_online_cpu(cpu) {
> +			struct kmem_cache_cpu *c;
> +
> +			c = per_cpu_ptr(s->cpu_slab, cpu);
> +			if (c->page || slub_percpu_partial(c))
> +				return false;
> +		}
> +	}
> +
>  	for_each_kmem_cache_node(s, node, n)
>  		if (n->nr_partial || slabs_node(s, node))
>  			return false;

Wouldn't it just be better to allow {inc,dec}_slabs_node() to adjust the 
nr_slabs counter instead of doing the per-cpu iteration on every shutdown?
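David's suggestion — have {inc,dec}_slabs_node() maintain the nr_slabs counter even without CONFIG_SLUB_DEBUG, so the emptiness test becomes an O(nodes) counter read instead of an O(cpus) scan — can be sketched roughly as follows. This is a hypothetical toy model, not the kernel implementation (in the kernel, nr_slabs is an atomic_long_t and the helpers take more arguments).

```c
#include <assert.h>
#include <stdbool.h>

struct kmem_cache_node {
	long nr_slabs;           /* atomic_long_t in the kernel */
	unsigned long nr_partial;
};

/* Called on the slab-allocation/free slow paths, so keeping them
 * unconditionally compiled costs nothing on the fast path. */
static void inc_slabs_node(struct kmem_cache_node *n) { n->nr_slabs++; }
static void dec_slabs_node(struct kmem_cache_node *n) { n->nr_slabs--; }

/* Emptiness becomes a per-node counter read: full slabs, per-cpu slabs,
 * and partial slabs are all accounted for in nr_slabs. */
static bool node_empty(const struct kmem_cache_node *n)
{
	return n->nr_partial == 0 && n->nr_slabs == 0;
}
```

The trade-off is a counter update on every slab allocation and free versus a per-cpu walk on every (rare) shutdown; for machines with many cpus the counter read is much cheaper at shutdown time.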


* Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG
  2018-06-20  0:49 ` David Rientjes
@ 2018-06-20  1:24   ` Shakeel Butt
  0 siblings, 0 replies; 8+ messages in thread
From: Shakeel Butt @ 2018-06-20  1:24 UTC (permalink / raw)
  To: David Rientjes
  Cc: Jason, Linux MM, LKML, Christoph Lameter, Pekka Enberg,
	Joonsoo Kim, Andrew Morton, stable

On Tue, Jun 19, 2018 at 5:49 PM David Rientjes <rientjes@google.com> wrote:
>
> On Tue, 19 Jun 2018, Shakeel Butt wrote:
>
> > diff --git a/mm/slub.c b/mm/slub.c
> > index a3b8467c14af..731c02b371ae 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
> >
> >  bool __kmem_cache_empty(struct kmem_cache *s)
> >  {
> > -     int node;
> > +     int cpu, node;
>
> Nit: wouldn't cpu be unused if CONFIG_SLUB_DEBUG is disabled?
>

I think I didn't get the warning as I didn't use #ifdef.
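A rough userspace illustration (not kernel code) of why IS_ENABLED() avoids the unused-variable warning that #ifdef would cause: IS_ENABLED() expands to a constant, so the guarded block is still parsed and 'cpu' is still referenced; the compiler dead-code-eliminates the branch without ever seeing an unused variable. With #ifdef, the preprocessor removes the block entirely, leaving 'cpu' genuinely unused.

```c
#include <assert.h>

/* Hypothetical stand-ins for the kernel's Kconfig machinery. */
#define IS_ENABLED(x) (x)
#define CONFIG_SLUB_DEBUG 0 /* pretend the option is off */

int count_busy_cpus(const int *busy, int nr_cpus)
{
	int cpu, n = 0;

	/* 'cpu' is referenced here even when the branch is compiled out
	 * as dead code, so no -Wunused-variable warning is emitted. */
	if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
		for (cpu = 0; cpu < nr_cpus; cpu++)
			n += busy[cpu] ? 1 : 0;
	}
	return n;
}
```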

> >       struct kmem_cache_node *n;
> >
> > +     /*
> > +      * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
> > +      * check slabs for all cpus.
> > +      */
> > +     if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
> > +             for_each_online_cpu(cpu) {
> > +                     struct kmem_cache_cpu *c;
> > +
> > +                     c = per_cpu_ptr(s->cpu_slab, cpu);
> > +                     if (c->page || slub_percpu_partial(c))
> > +                             return false;
> > +             }
> > +     }
> > +
> >       for_each_kmem_cache_node(s, node, n)
> >               if (n->nr_partial || slabs_node(s, node))
> >                       return false;
>
> Wouldn't it just be better to allow {inc,dec}_slabs_node() to adjust the
> nr_slabs counter instead of doing the per-cpu iteration on every shutdown?

Yes, that is doable, as {inc,dec}_slabs_node() are called on the slow
path. I can move them out of CONFIG_SLUB_DEBUG. I think the reason
0f389ec63077 ("slub: No need for per node slab counters if
!SLUB_DEBUG") put them under CONFIG_SLUB_DEBUG is that these counters
were only read through the sysfs API, which is disabled for
!CONFIG_SLUB_DEBUG. However, we now have a use case other than the
sysfs API.

Please let me know if there is any objection to this conversion. For
large machines, I think this is the preferable approach.

thanks,
Shakeel


* Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG
  2018-06-19 21:33 [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG Shakeel Butt
                   ` (3 preceding siblings ...)
  2018-06-20  0:49 ` David Rientjes
@ 2018-06-20 12:09 ` Andrey Ryabinin
  2018-06-20 21:36   ` Shakeel Butt
  4 siblings, 1 reply; 8+ messages in thread
From: Andrey Ryabinin @ 2018-06-20 12:09 UTC (permalink / raw)
  To: Shakeel Butt, Jason A . Donenfeld
  Cc: linux-mm, linux-kernel, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, stable



On 06/20/2018 12:33 AM, Shakeel Butt wrote:
> For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> allocated per node for a kmem_cache. Thus, slabs_node() in
> __kmem_cache_empty() will always return 0. So, in such situation, it is
> required to check per-cpu slabs to make sure if a kmem_cache is empty or
> not.
> 
> Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> per-cpu slabs.

So what? Yes, they call flush_all() and then check whether there are non-empty slabs left.
And that check doesn't work when CONFIG_SLUB_DEBUG is disabled.
How is flush_all() or per-cpu slabs even relevant here?



* Re: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG
  2018-06-20 12:09 ` Andrey Ryabinin
@ 2018-06-20 21:36   ` Shakeel Butt
  0 siblings, 0 replies; 8+ messages in thread
From: Shakeel Butt @ 2018-06-20 21:36 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Jason A . Donenfeld, Linux MM, LKML, Christoph Lameter,
	Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton, stable

On Wed, Jun 20, 2018 at 5:08 AM Andrey Ryabinin <aryabinin@virtuozzo.com> wrote:
>
>
>
> On 06/20/2018 12:33 AM, Shakeel Butt wrote:
> > For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
> > allocated per node for a kmem_cache. Thus, slabs_node() in
> > __kmem_cache_empty() will always return 0. So, in such situation, it is
> > required to check per-cpu slabs to make sure if a kmem_cache is empty or
> > not.
> >
> > Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
> > not affected by !CONFIG_SLUB_DEBUG as they call flush_all() to clear
> > per-cpu slabs.
>
> So what? Yes, they call flush_all() and then check if there are non-empty slabs left.
> And that check doesn't work in case of disabled CONFIG_SLUB_DEBUG.
> How is flush_all() or per-cpu slabs even relevant here?
>

flush_all() moves all cpu slabs and per-cpu partials to the node's
partial list, so the later check of the node's partial list handles
the non-empty-slab case. However, what I missed is full slabs, which
are not on any list for !CONFIG_SLUB_DEBUG. So, this patch is not the
complete solution; David's suggestion is. I will post a patch based
on it.
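The gap described here can be sketched with a toy model (hypothetical, not kernel code): flushing moves cpu slabs onto the node's partial list, but a fully allocated slab is on no list at all, so an emptiness check based only on per-cpu state and nr_partial can be fooled.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy model: just count slabs in each place. */
struct toy_cache {
	int cpu_slabs;     /* slabs currently owned by cpus */
	int partial_slabs; /* slabs on the node partial list */
	int full_slabs;    /* fully allocated slabs: on no list */
};

/* Models flush_all(): cpu slabs become node partials; full slabs
 * are untouched because they are not tracked on any list. */
static void flush_all(struct toy_cache *s)
{
	s->partial_slabs += s->cpu_slabs;
	s->cpu_slabs = 0;
}

/* The only emptiness test available without CONFIG_SLUB_DEBUG's
 * nr_slabs counter... */
static bool looks_empty(const struct toy_cache *s)
{
	return s->cpu_slabs == 0 && s->partial_slabs == 0;
}

/* ...versus the ground truth, which a nr_slabs counter would capture. */
static bool really_empty(const struct toy_cache *s)
{
	return looks_empty(s) && s->full_slabs == 0;
}
```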

