From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
To: Oliver Sang <oliver.sang@intel.com>
Cc: Jay Patel <jaypatel@linux.ibm.com>,
oe-lkp@lists.linux.dev, lkp@intel.com, linux-mm@kvack.org,
ying.huang@intel.com, feng.tang@intel.com,
fengwei.yin@intel.com, cl@linux.com, penberg@kernel.org,
rientjes@google.com, iamjoonsoo.kim@lge.com,
akpm@linux-foundation.org, vbabka@suse.cz,
aneesh.kumar@linux.ibm.com, tsahu@linux.ibm.com,
piyushs@linux.ibm.com
Subject: Re: [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage
Date: Thu, 20 Jul 2023 23:15:04 +0900 [thread overview]
Message-ID: <CAB=+i9SRCZ1OKBTrojbnbR2YgtmGoRiuTW4VBqvbW1=TNgVWMQ@mail.gmail.com> (raw)
In-Reply-To: <CAB=+i9Rn0WXgK-CfaKy0k7HXHx3VEmSjzopaPakcThSG5Ri3vA@mail.gmail.com>
[-- Attachment #1: Type: text/plain, Size: 2026 bytes --]
On Thu, Jul 20, 2023 at 10:46 PM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
>
> On Thu, Jul 20, 2023 at 9:59 PM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
> > On Thu, Jul 20, 2023 at 12:01 PM Oliver Sang <oliver.sang@intel.com> wrote:
> > > > > commit:
> > > > > 7bc162d5cc ("Merge branches 'slab/for-6.5/prandom', 'slab/for-6.5/slab_no_merge' and 'slab/for-6.5/slab-deprecate' into slab/for-next")
> > > > > a0fd217e6d ("mm/slub: Optimize slub memory usage")
> > > > >
> > > > > 7bc162d5cc4de5c3 a0fd217e6d6fbd23e91f8796787
> > > > > ---------------- ---------------------------
> > > > > %stddev %change %stddev
> > > > > \ | \
222503 ± 86% +108.7% 464342 ± 58% numa-meminfo.node1.Active
222459 ± 86% +108.7% 464294 ± 58% numa-meminfo.node1.Active(anon)
55573 ± 85% +108.0% 115619 ± 58% numa-vmstat.node1.nr_active_anon
55573 ± 85% +108.0% 115618 ± 58% numa-vmstat.node1.nr_zone_active_anon
> > > >
> > > > I'm quite baffled reading this.
> > > > How did changing the slab order calculation double the number of active anon pages?
> > > > I doubt the two experiments were performed with the same settings.
> > >
> > > Let me describe our test process.
> > >
> > > We make sure the tests for the commit and its parent run in exactly the
> > > same environment except for the kernel itself, and that the configs used
> > > to build the commit and its parent are identical.
> > >
> > > We run the tests for each commit at least 6 times to make sure the data is stable.
> > >
> > > For this case, we rebuilt the commit's and its parent's kernels; the
> > > config is attached FYI.
>
> Oh I missed the attachments.
> I need more time to look into this further, but could you please test
> this patch (attached)?
Oh, my mistake. It has nothing to do with reclamation modifiers.
The correct patch should be this. Sorry for the noise.
[-- Attachment #2: 0001-mm-slub-do-not-allocate-from-remote-node-to-allocate.patch --]
[-- Type: text/x-patch, Size: 1013 bytes --]
From 74142b5131e731f662740d34623d93fd324f9b65 Mon Sep 17 00:00:00 2001
From: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Date: Thu, 20 Jul 2023 22:29:16 +0900
Subject: [PATCH] mm/slub: do not allocate from remote node to allocate high
order slab
Signed-off-by: Hyeonggon Yoo <42.hyeyoo@gmail.com>
---
mm/slub.c | 2 +-
1 file changed, 1 insertion(+), 1 deletion(-)
diff --git a/mm/slub.c b/mm/slub.c
index f7940048138c..c584237d6a0d 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2010,7 +2010,7 @@ static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
* Let the initial higher-order allocation fail under memory pressure
* so we fall-back to the minimum order allocation.
*/
- alloc_gfp = (flags | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
+ alloc_gfp = (flags | __GFP_THISNODE | __GFP_NOWARN | __GFP_NORETRY) & ~__GFP_NOFAIL;
if ((alloc_gfp & __GFP_DIRECT_RECLAIM) && oo_order(oo) > oo_order(s->min))
alloc_gfp = (alloc_gfp | __GFP_NOMEMALLOC) & ~__GFP_RECLAIM;
--
2.41.0
Thread overview: 25+ messages
2023-06-28 9:57 [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage Jay Patel
2023-07-03 0:13 ` David Rientjes
2023-07-03 8:39 ` Jay Patel
2023-07-09 14:42 ` Hyeonggon Yoo
2023-07-12 13:06 ` Vlastimil Babka
2023-07-20 10:30 ` Jay Patel
2023-07-17 13:41 ` kernel test robot
2023-07-18 6:43 ` Hyeonggon Yoo
2023-07-20 3:00 ` Oliver Sang
2023-07-20 12:59 ` Hyeonggon Yoo
2023-07-20 13:46 ` Hyeonggon Yoo
2023-07-20 14:15 ` Hyeonggon Yoo [this message]
2023-07-24 2:39 ` Oliver Sang
2023-07-31 9:49 ` Hyeonggon Yoo
2023-07-20 13:49 ` Feng Tang
2023-07-20 15:05 ` Hyeonggon Yoo
2023-07-21 14:50 ` Binder Makin
2023-07-21 15:39 ` Hyeonggon Yoo
2023-07-21 18:31 ` Binder Makin
2023-07-24 14:35 ` Feng Tang
2023-07-25 3:13 ` Hyeonggon Yoo
2023-07-25 9:12 ` Feng Tang
2023-08-29 8:30 ` Feng Tang
2023-07-26 10:06 ` Vlastimil Babka
2023-08-10 10:38 ` Jay Patel