linux-mm.kvack.org archive mirror
* [PATCH] mm/slub: Reduce memory consumption in extreme scenarios
@ 2023-03-14 12:34 Chen Jun
  2023-03-14 14:41 ` Vlastimil Babka
  0 siblings, 1 reply; 10+ messages in thread
From: Chen Jun @ 2023-03-14 12:34 UTC (permalink / raw)
  To: linux-kernel, linux-mm, cl, penberg, rientjes, iamjoonsoo.kim,
	akpm, vbabka
  Cc: xuqiang36, chenjun102, wangkefeng.wang

When kmalloc_node() is called without __GFP_THISNODE and the target node
lacks sufficient memory, SLUB allocates a folio from a node other than the
requested one, instead of reusing a partial slab already available on that
other node.

However, since the allocated folio does not belong to the requested
node, it is deactivated and added to the partial slab list of the node
it belongs to.

This behavior can result in excessive memory usage when the requested
node has insufficient memory, as SLUB will repeatedly allocate folios
from other nodes without reusing the previously allocated ones.

To prevent this memory wastage,
when (node != NUMA_NO_NODE) && !(gfpflags & __GFP_THISNODE):
1) try to get a partial slab from the target node with __GFP_THISNODE.
2) if 1) failed, try to allocate a new slab from the target node with
   __GFP_THISNODE.
3) if 2) failed, retry 1) and 2) without the __GFP_THISNODE constraint.

when node == NUMA_NO_NODE || (gfpflags & __GFP_THISNODE), the behavior
remains unchanged.
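
In simplified form, the control flow this gives ___slab_alloc() looks
roughly like the sketch below (an illustrative restatement of the diff
further down, not the literal patch text):

	bool try_thisnode = true;
new_objects:
	pc.flags = gfpflags;
	/* steps 1) and 2): constrain the attempt to the target node */
	if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode)
		pc.flags |= __GFP_THISNODE;

	freelist = get_partial(s, node, &pc);	/* 1) reuse a partial slab */
	if (freelist)
		goto check_new_slab;		/* proceed as before */

	slab = new_slab(s, pc.flags, node);	/* 2) allocate a new slab */
	if (unlikely(!slab) && try_thisnode) {
		try_thisnode = false;		/* 3) drop the node constraint */
		goto new_objects;
	}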

On qemu with 4 NUMA nodes, each with 1G of memory, a test module (ko) calls
kmalloc_node(196, GFP_KERNEL, 3) (4 * 1024 + 4) * 1024 times (see the
sketch below).
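
A minimal reproducer module might look like the following sketch (module
name and boilerplate are assumptions, not part of the original test; the
allocations are deliberately leaked so they stay visible in slabinfo):

	#include <linux/module.h>
	#include <linux/slab.h>

	static int __init kmalloc_node_stress_init(void)
	{
		unsigned long i;

		/*
		 * (4 * 1024 + 4) * 1024 = 4198400 allocations of 196 bytes,
		 * preferring node 3 but without __GFP_THISNODE.
		 */
		for (i = 0; i < (4UL * 1024 + 4) * 1024; i++)
			kmalloc_node(196, GFP_KERNEL, 3);

		return 0;
	}
	module_init(kmalloc_node_stress_init);

	MODULE_LICENSE("GPL");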

Before this patch, cat /proc/slabinfo shows:
kmalloc-256       4200530 13519712    256   32    2 : tunables..

After this patch, cat /proc/slabinfo shows:
kmalloc-256       4200558 4200768    256   32    2 : tunables..
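
The second and third columns are active_objs and num_objs, with an objsize
of 256 bytes: before the patch, 13519712 * 256 ≈ 3.2 GiB of slab memory
backs about 4.2 million live objects; after it, 4200768 * 256 ≈ 1.0 GiB,
with num_objs closely tracking active_objs.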

Signed-off-by: Chen Jun <chenjun102@huawei.com>
---
 mm/slub.c | 22 +++++++++++++++++++---
 1 file changed, 19 insertions(+), 3 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 39327e98fce3..32e436957e03 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2384,7 +2384,7 @@ static void *get_partial(struct kmem_cache *s, int node, struct partial_context
 		searchnode = numa_mem_id();
 
 	object = get_partial_node(s, get_node(s, searchnode), pc);
-	if (object || node != NUMA_NO_NODE)
+	if (object || (node != NUMA_NO_NODE && (pc->flags & __GFP_THISNODE)))
 		return object;
 
 	return get_any_partial(s, pc);
@@ -3069,6 +3069,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	struct slab *slab;
 	unsigned long flags;
 	struct partial_context pc;
+	bool try_thisnode = true;
 
 	stat(s, ALLOC_SLOWPATH);
 
@@ -3181,8 +3182,18 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	}
 
 new_objects:
-
 	pc.flags = gfpflags;
+
+	/*
+	 * when (node != NUMA_NO_NODE) && !(gfpflags & __GFP_THISNODE)
+	 * 1) try to get a partial slab from target node with __GFP_THISNODE.
+	 * 2) if 1) failed, try to allocate a new slab from target node with
+	 *    __GFP_THISNODE.
+	 * 3) if 2) failed, retry 1) and 2) without __GFP_THISNODE constraint.
+	 */
+	if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode)
+		pc.flags |= __GFP_THISNODE;
+
 	pc.slab = &slab;
 	pc.orig_size = orig_size;
 	freelist = get_partial(s, node, &pc);
@@ -3190,10 +3201,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		goto check_new_slab;
 
 	slub_put_cpu_ptr(s->cpu_slab);
-	slab = new_slab(s, gfpflags, node);
+	slab = new_slab(s, pc.flags, node);
 	c = slub_get_cpu_ptr(s->cpu_slab);
 
 	if (unlikely(!slab)) {
+		if (node != NUMA_NO_NODE && !(gfpflags & __GFP_THISNODE) && try_thisnode) {
+			try_thisnode = false;
+			goto new_objects;
+		}
+
 		slab_out_of_memory(s, gfpflags, node);
 		return NULL;
 	}
-- 
2.17.1



Thread overview: 10+ messages
2023-03-14 12:34 [PATCH] mm/slub: Reduce memory consumption in extreme scenarios Chen Jun
2023-03-14 14:41 ` Vlastimil Babka
2023-03-17 11:32   ` chenjun (AM)
2023-03-17 12:06     ` Vlastimil Babka
2023-03-19  7:22       ` chenjun (AM)
2023-03-20  8:05         ` Vlastimil Babka
2023-03-20  9:12           ` Mike Rapoport
2023-03-21  9:30             ` chenjun (AM)
2023-03-29  8:41               ` Vlastimil Babka
2023-03-21  9:41             ` Vlastimil Babka
