Subject: [PATCH] slub: fix unreclaimable slab stat for bulk free
From: Shakeel Butt
Date: 2021-07-28 15:53 UTC
To: Christoph Lameter, Pekka Enberg, David Rientjes, Vlastimil Babka
Cc: Michal Hocko, Roman Gushchin, Wang Hai, Muchun Song,
	Andrew Morton, linux-mm, linux-kernel, Shakeel Butt

SLUB uses the page allocator for higher order allocations and updates
the unreclaimable slab stat for such allocations. At the moment, the
bulk free path for SLUB does not share code with the normal free path
for this type of allocation and has missed the stat update. Fix the
stat update by moving it into a common helper used by both paths. The
user-visible impact of the bug is a potentially inconsistent
unreclaimable slab stat as reported through meminfo and vmstat.
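
For illustration only (not part of the patch; the function name is
hypothetical), a minimal sketch of a sequence that reaches the
previously-unaccounted path. The exact size threshold depends on the
running kernel's KMALLOC_MAX_CACHE_SIZE:

	#include <linux/slab.h>

	/* Illustrative only: allocate from the page-allocator-backed
	 * kmalloc path and free through the bulk interface. */
	static void trigger_bulk_free_of_large_object(void)
	{
		/* Above KMALLOC_MAX_CACHE_SIZE: SLUB bypasses the slab
		 * caches and accounts the pages to
		 * NR_SLAB_UNRECLAIMABLE_B by hand. */
		void *p = kmalloc(2 * KMALLOC_MAX_CACHE_SIZE, GFP_KERNEL);

		if (!p)
			return;

		/* kfree(p) would decrement the stat; before this patch,
		 * kfree_bulk() reached build_detached_freelist() without
		 * the decrement, so SUnreclaim drifted upward. */
		kfree_bulk(1, &p);
	}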

Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
Signed-off-by: Shakeel Butt <shakeelb@google.com>
---
 mm/slub.c | 22 ++++++++++++----------
 1 file changed, 12 insertions(+), 10 deletions(-)
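
As a reviewer aid (not part of the patch), a sketch of a userspace
helper for watching the counter while exercising the bulk-free path;
it assumes the counter is still exported as nr_slab_unreclaimable in
/proc/vmstat on the kernel under test:

	#include <stdio.h>
	#include <string.h>

	/* Print the nr_slab_unreclaimable line from /proc/vmstat so the
	 * counter can be compared before and after a workload that frees
	 * large kmalloc objects via kfree_bulk(). */
	int main(void)
	{
		char line[256];
		FILE *f = fopen("/proc/vmstat", "r");

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f))
			if (!strncmp(line, "nr_slab_unreclaimable", 21))
				fputs(line, stdout);
		fclose(f);
		return 0;
	}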

diff --git a/mm/slub.c b/mm/slub.c
index 6dad2b6fda6f..03770291aa6b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3238,6 +3238,16 @@ struct detached_freelist {
 	struct kmem_cache *s;
 };
 
+static inline void free_nonslab_page(struct page *page)
+{
+	unsigned int order = compound_order(page);
+
+	VM_BUG_ON_PAGE(!PageCompound(page), page);
+	kfree_hook(page_address(page));
+	mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B, -(PAGE_SIZE << order));
+	__free_pages(page, order);
+}
+
 /*
  * This function progressively scans the array with free objects (with
  * a limited look ahead) and extract objects belonging to the same
@@ -3274,9 +3284,7 @@ int build_detached_freelist(struct kmem_cache *s, size_t size,
 	if (!s) {
 		/* Handle kalloc'ed objects */
 		if (unlikely(!PageSlab(page))) {
-			BUG_ON(!PageCompound(page));
-			kfree_hook(object);
-			__free_pages(page, compound_order(page));
+			free_nonslab_page(page);
 			p[size] = NULL; /* mark object processed */
 			return size;
 		}
@@ -4252,13 +4260,7 @@ void kfree(const void *x)
 
 	page = virt_to_head_page(x);
 	if (unlikely(!PageSlab(page))) {
-		unsigned int order = compound_order(page);
-
-		BUG_ON(!PageCompound(page));
-		kfree_hook(object);
-		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
-				      -(PAGE_SIZE << order));
-		__free_pages(page, order);
+		free_nonslab_page(page);
 		return;
 	}
 	slab_free(page->slab_cache, page, object, NULL, 1, _RET_IP_);
-- 
2.32.0.432.gabb21c7263-goog

