From: Shakeel Butt <shakeelb@google.com>
To: "Jason A . Donenfeld" <Jason@zx2c4.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Shakeel Butt <shakeelb@google.com>,
	Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	stable@vger.kernel.org
Subject: [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG
Date: Tue, 19 Jun 2018 14:33:52 -0700
Message-ID: <20180619213352.71740-1-shakeelb@google.com>

For !CONFIG_SLUB_DEBUG, SLUB does not maintain the number of slabs
allocated per node for a kmem_cache. Thus, slabs_node() in
__kmem_cache_empty() will always return 0. In that case we have to
check the per-cpu slabs to determine whether a kmem_cache is empty or
not.
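
For reference, the !CONFIG_SLUB_DEBUG side of mm/slub.c stubs out the
per-node slab accounting roughly as follows (quoted from memory, only
for illustration), which is why slabs_node() can never report a
non-empty cache:

	static inline unsigned long slabs_node(struct kmem_cache *s, int node)
							{ return 0; }
	static inline void inc_slabs_node(struct kmem_cache *s, int node,
							int objects) {}
	static inline void dec_slabs_node(struct kmem_cache *s, int node,
							int objects) {}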

Please note that __kmem_cache_shutdown() and __kmem_cache_shrink() are
not affected by !CONFIG_SLUB_DEBUG, because they call flush_all() to
clear the per-cpu slabs before walking the per-node lists.
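
For context, flush_all() runs flush_cpu_slab() on every cpu that has
per-cpu state, deactivating the cpu slab and unfreezing the percpu
partial list, so by the time those paths call slabs_node() everything
sits on the per-node lists. A rough sketch of the helper as it looks
in this era of mm/slub.c (quoted from memory, not part of this patch):

	static void flush_all(struct kmem_cache *s)
	{
		on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1,
				 GFP_ATOMIC);
	}

__kmem_cache_empty() is meant to be a cheap check and does not flush,
so it has to look at the per-cpu state directly; the loop added below
mirrors the c->page || slub_percpu_partial(c) test that has_cpu_slab()
uses.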

Fixes: f9e13c0a5a33 ("slab, slub: skip unnecessary kasan_cache_shutdown()")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
Reported-by: Jason A. Donenfeld <Jason@zx2c4.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <stable@vger.kernel.org>
---
 mm/slub.c | 16 +++++++++++++++-
 1 file changed, 15 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index a3b8467c14af..731c02b371ae 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3673,9 +3673,23 @@ static void free_partial(struct kmem_cache *s, struct kmem_cache_node *n)
 
 bool __kmem_cache_empty(struct kmem_cache *s)
 {
-	int node;
+	int cpu, node;
 	struct kmem_cache_node *n;
 
+	/*
+	 * slabs_node will always be 0 for !CONFIG_SLUB_DEBUG. So, manually
+	 * check slabs for all cpus.
+	 */
+	if (!IS_ENABLED(CONFIG_SLUB_DEBUG)) {
+		for_each_online_cpu(cpu) {
+			struct kmem_cache_cpu *c;
+
+			c = per_cpu_ptr(s->cpu_slab, cpu);
+			if (c->page || slub_percpu_partial(c))
+				return false;
+		}
+	}
+
 	for_each_kmem_cache_node(s, node, n)
 		if (n->nr_partial || slabs_node(s, node))
 			return false;
-- 
2.18.0.rc1.244.gcf134e6275-goog


Thread overview: 8+ messages
2018-06-19 21:33 Shakeel Butt [this message]
2018-06-19 21:38 ` [PATCH] slub: fix __kmem_cache_empty for !CONFIG_SLUB_DEBUG Shakeel Butt
2018-06-19 21:53 ` Jason A. Donenfeld
2018-06-19 22:21 ` Andrew Morton
2018-06-20  0:49 ` David Rientjes
2018-06-20  1:24   ` Shakeel Butt
2018-06-20 12:09 ` Andrey Ryabinin
2018-06-20 21:36   ` Shakeel Butt
