linux-mm.kvack.org archive mirror
From: Binder Makin <merimus@google.com>
To: Hyeonggon Yoo <42.hyeyoo@gmail.com>
Cc: Feng Tang <feng.tang@intel.com>,
	"Sang, Oliver" <oliver.sang@intel.com>,
	 Jay Patel <jaypatel@linux.ibm.com>,
	"oe-lkp@lists.linux.dev" <oe-lkp@lists.linux.dev>,
	lkp <lkp@intel.com>,  "linux-mm@kvack.org" <linux-mm@kvack.org>,
	"Huang, Ying" <ying.huang@intel.com>,
	 "Yin, Fengwei" <fengwei.yin@intel.com>,
	"cl@linux.com" <cl@linux.com>,
	 "penberg@kernel.org" <penberg@kernel.org>,
	"rientjes@google.com" <rientjes@google.com>,
	 "iamjoonsoo.kim@lge.com" <iamjoonsoo.kim@lge.com>,
	 "akpm@linux-foundation.org" <akpm@linux-foundation.org>,
	"vbabka@suse.cz" <vbabka@suse.cz>,
	 "aneesh.kumar@linux.ibm.com" <aneesh.kumar@linux.ibm.com>,
	"tsahu@linux.ibm.com" <tsahu@linux.ibm.com>,
	 "piyushs@linux.ibm.com" <piyushs@linux.ibm.com>
Subject: Re: [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage
Date: Fri, 21 Jul 2023 14:31:37 -0400	[thread overview]
Message-ID: <CAANmLtznX_qmj+ODOSnAJkzkM89E3a4eo-Zbpx=ABSmhFSjUqQ@mail.gmail.com> (raw)
In-Reply-To: <CAB=+i9RhdfVeeEROizQt17DW_q2SRo+gZg3ZV_t5OaNpoSB8oA@mail.gmail.com>

The baseline is 6.1.38; the other kernel is 6.1.38 with the patch from
https://lore.kernel.org/linux-mm/a44ff1d018998e3330e309ac3ae76575bf09e311.camel@linux.ibm.com/T/

The AMD and Intel machines are both dual socket; the ARM machine is single socket.

I happen to have those machines set up to grab SReclaim and SUnreclaim, so I
could run these quickly.
I can certainly dig into more details, though.
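
For reference, the counters come from /proc/meminfo, where the actual field
names are SReclaimable and SUnreclaim. A minimal sketch of a sampler in C
(illustrative only, not my exact harness):

#include <stdio.h>

/* Read SReclaimable and SUnreclaim (in kB) from /proc/meminfo. */
static int read_slab_counters(long *sreclaimable, long *sunreclaim)
{
	char line[128];
	FILE *f = fopen("/proc/meminfo", "r");

	if (!f)
		return -1;
	while (fgets(line, sizeof(line), f)) {
		/* sscanf leaves the outputs untouched on non-matching lines. */
		sscanf(line, "SReclaimable: %ld kB", sreclaimable);
		sscanf(line, "SUnreclaim: %ld kB", sunreclaim);
	}
	fclose(f);
	return 0;
}

int main(void)
{
	long sr = -1, su = -1;

	if (read_slab_counters(&sr, &su) == 0)
		printf("SReclaimable: %ld kB, SUnreclaim: %ld kB\n", sr, su);
	return 0;
}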

On Fri, Jul 21, 2023 at 11:40 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
>
> On Fri, Jul 21, 2023 at 11:50 PM Binder Makin <merimus@google.com> wrote:
> >
> > Quick run with hackbench and unixbench on large Intel, AMD, and ARM machines.
> > The patch was applied to 6.1.38.
> >
> > hackbench
> > Intel performance -2.9% to +1.57%, SReclaim -3.2%, SUnreclaim -2.4%
> > AMD performance -28% to +7.58%, SReclaim +21.31%, SUnreclaim +20.72%
> > ARM performance -0.6% to +1.6%, SReclaim +24%, SUnreclaim +70%
> >
> > unixbench
> > Intel performance -1.4% to +1.59%, SReclaim -1.65%, SUnreclaim -1.59%
> > AMD performance -1.9% to +1.05%, SReclaim -3.1%, SUnreclaim -0.81%
> > ARM performance -0.09% to +0.54%, SReclaim -1.05%, SUnreclaim -2.03%
> >
> > AMD Hackbench
> > 28% drop on hackbench_thread_pipes_234
>
> Hi Binder,
> Thank you for measuring!!
>
> Can you please provide more information?
> The baseline is 6.1.38; is the other kernel the baseline with one patch
> applied, or with two?
> (the slub memory usage optimization v2, and the patch to avoid allocating
> high-order slabs from remote nodes)
>
> The 28% drop on AMD is quite large, and the overall memory usage increased a lot.
>
> Does the AMD machine have 2 sockets?
> Did remote-node allocations increase or decrease? (`numastat` can show this.)
>
> Can you get some profiles indicating increased list_lock contention?
> (or a change in the values reported by `slabinfo skbuff_head_cache` on a
> kernel built with CONFIG_SLUB_STATS?)
>
> > On Thu, Jul 20, 2023 at 11:08 AM Hyeonggon Yoo <42.hyeyoo@gmail.com> wrote:
> > >
> > > On Thu, Jul 20, 2023 at 11:16 PM Feng Tang <feng.tang@intel.com> wrote:
> > > >
> > > > Hi Hyeonggon,
> > > >
> > > > On Thu, Jul 20, 2023 at 08:59:56PM +0800, Hyeonggon Yoo wrote:
> > > > > On Thu, Jul 20, 2023 at 12:01 PM Oliver Sang <oliver.sang@intel.com> wrote:
> > > > > >
> > > > > > hi, Hyeonggon Yoo,
> > > > > >
> > > > > > On Tue, Jul 18, 2023 at 03:43:16PM +0900, Hyeonggon Yoo wrote:
> > > > > > > On Mon, Jul 17, 2023 at 10:41 PM kernel test robot
> > > > > > > <oliver.sang@intel.com> wrote:
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > Hello,
> > > > > > > >
> > > > > > > > kernel test robot noticed a -12.5% regression of hackbench.throughput on:
> > > > > > > >
> > > > > > > >
> > > > > > > > commit: a0fd217e6d6fbd23e91f8796787b621e7d576088 ("[PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage")
> > > > > > > > url: https://github.com/intel-lab-lkp/linux/commits/Jay-Patel/mm-slub-Optimize-slub-memory-usage/20230628-180050
> > > > > > > > base: git://git.kernel.org/cgit/linux/kernel/git/vbabka/slab.git for-next
> > > > > > > > patch link: https://lore.kernel.org/all/20230628095740.589893-1-jaypatel@linux.ibm.com/
> > > > > > > > patch subject: [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage
> > > > > > > >
> > > > > > > > testcase: hackbench
> > > > > > > > test machine: 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
> > > > > > > > parameters:
> > > > > > > >
> > > > > > > >         nr_threads: 100%
> > > > > > > >         iterations: 4
> > > > > > > >         mode: process
> > > > > > > >         ipc: socket
> > > > > > > >         cpufreq_governor: performance
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > >
> > > > > > > > If you fix the issue in a separate patch/commit (i.e. not just a new version of
> > > > > > > > the same patch/commit), kindly add the following tags
> > > > > > > > | Reported-by: kernel test robot <oliver.sang@intel.com>
> > > > > > > > | Closes: https://lore.kernel.org/oe-lkp/202307172140.3b34825a-oliver.sang@intel.com
> > > > > > > >
> > > > > > > >
> > > > > > > > Details are as below:
> > > > > > > > -------------------------------------------------------------------------------------------------->
> > > > > > > >
> > > > > > > >
> > > > > > > > To reproduce:
> > > > > > > >
> > > > > > > >         git clone https://github.com/intel/lkp-tests.git
> > > > > > > >         cd lkp-tests
> > > > > > > >         sudo bin/lkp install job.yaml           # job file is attached in this email
> > > > > > > >         bin/lkp split-job --compatible job.yaml # generate the yaml file for lkp run
> > > > > > > >         sudo bin/lkp run generated-yaml-file
> > > > > > > >
> > > > > > > >         # if you come across any failure that blocks the test,
> > > > > > > >         # please remove the ~/.lkp and /lkp dirs to run from a clean state.
> > > > > > > >
> > > > > > > > =========================================================================================
> > > > > > > > compiler/cpufreq_governor/ipc/iterations/kconfig/mode/nr_threads/rootfs/tbox_group/testcase:
> > > > > > > >   gcc-12/performance/socket/4/x86_64-rhel-8.3/process/100%/debian-11.1-x86_64-20220510.cgz/lkp-icl-2sp2/hackbench
> > > > > > > >
> > > > > > > > commit:
> > > > > > > >   7bc162d5cc ("Merge branches 'slab/for-6.5/prandom', 'slab/for-6.5/slab_no_merge' and 'slab/for-6.5/slab-deprecate' into slab/for-next")
> > > > > > > >   a0fd217e6d ("mm/slub: Optimize slub memory usage")
> > > > > > > >
> > > > > > > > 7bc162d5cc4de5c3 a0fd217e6d6fbd23e91f8796787
> > > > > > > > ---------------- ---------------------------
> > > > > > > >          %stddev     %change         %stddev
> > > > > > > >              \          |                \
> > > > > > > >     222503 ± 86%    +108.7%     464342 ± 58%  numa-meminfo.node1.Active
> > > > > > > >     222459 ± 86%    +108.7%     464294 ± 58%  numa-meminfo.node1.Active(anon)
> > > > > > > >      55573 ± 85%    +108.0%     115619 ± 58%  numa-vmstat.node1.nr_active_anon
> > > > > > > >      55573 ± 85%    +108.0%     115618 ± 58%  numa-vmstat.node1.nr_zone_active_anon
> > > > > > >
> > > > > > > I'm quite baffled reading this.
> > > > > > > How did changing the slab order calculation double the number of active anon pages?
> > > > > > > I doubt the two experiments were performed with the same settings.
> > > > > >
> > > > > > let me introduce our test process.
> > > > > >
> > > > > > we make sure the tests on the commit and on its parent run in the exact same
> > > > > > environment, with the kernel being the only difference, and we also make sure
> > > > > > the configs used to build the commit and its parent are identical.
> > > > > >
> > > > > > we run tests for one commit at least 6 times to make sure the data is stable.
> > > > > >
> > > > > > as in this case, we rebuilt both the commit's and its parent's kernels; the
> > > > > > config is attached FYI.
> > > > >
> > > > > Hello Oliver,
> > > > >
> > > > > Thank you for confirming the testing environment is totally fine,
> > > > > and I'm sorry; I didn't mean to imply that your tests were bad.
> > > > >
> > > > > It was more like "oh, the data totally doesn't make sense to me",
> > > > > and I blamed the tests rather than my poor understanding of the data ;)
> > > > >
> > > > > Anyway, as the data shows a repeatable regression, let's think more
> > > > > about the possible scenarios:
> > > > >
> > > > > I can't stop thinking that the patch must've affected the system's
> > > > > reclamation behavior in some way.
> > > > > (more active anon pages with a similar total number of anon pages
> > > > > implies the kernel scanned more pages)
> > > > >
> > > > > It might be because kswapd was woken up more frequently (possible if
> > > > > skbs were allocated with GFP_ATOMIC), but the data provided is not
> > > > > enough to support this argument.
> > > > >
> > > > > >       2.43 ±  7%      +4.5        6.90 ± 11%  perf-profile.children.cycles-pp.get_partial_node
> > > > > >       3.23 ±  5%      +4.5        7.77 ±  9%  perf-profile.children.cycles-pp.___slab_alloc
> > > > > >       7.51 ±  2%      +4.6       12.11 ±  5%  perf-profile.children.cycles-pp.kmalloc_reserve
> > > > > >       6.94 ±  2%      +4.7       11.62 ±  6%  perf-profile.children.cycles-pp.__kmalloc_node_track_caller
> > > > > >       6.46 ±  2%      +4.8       11.22 ±  6%  perf-profile.children.cycles-pp.__kmem_cache_alloc_node
> > > > > >       8.48 ±  4%      +7.9       16.42 ±  8%  perf-profile.children.cycles-pp._raw_spin_lock_irqsave
> > > > > >       6.12 ±  6%      +8.6       14.74 ±  9%  perf-profile.children.cycles-pp.native_queued_spin_lock_slowpath
> > > > >
> > > > > And these increased cycles in the SLUB slow path imply that the actual
> > > > > number of objects available in the per-CPU partial list has decreased,
> > > > > possibly because of inaccuracy in the heuristic?
> > > > > (because of the assumption that the slabs cached per CPU are half-filled,
> > > > > and that the slabs' order is oo_order(s->oo))
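> > > > >
> > > > > For reference, the conversion that encodes this assumption lives in
> > > > > mm/slub.c (around v6.1); a simplified sketch from memory, so double-check
> > > > > against the actual tree:
> > > > >
> > > > > static void
> > > > > slub_set_cpu_partial(struct kmem_cache *s, unsigned int nr_objects)
> > > > > {
> > > > > 	unsigned int nr_slabs;
> > > > >
> > > > > 	s->cpu_partial = nr_objects;
> > > > >
> > > > > 	/*
> > > > > 	 * We take a target number of objects but actually limit the
> > > > > 	 * number of slabs on the per-cpu partial list. The conversion
> > > > > 	 * assumes each cached slab has order oo_order(s->oo) and is
> > > > > 	 * half-full, which is exactly where the inaccuracy can creep in.
> > > > > 	 */
> > > > > 	nr_slabs = DIV_ROUND_UP(nr_objects * 2, oo_objects(s->oo));
> > > > > 	s->cpu_partial_slabs = nr_slabs;
> > > > > }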
> > > >
> > > > From the patch:
> > > >
> > > >  static unsigned int slub_max_order =
> > > > -       IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : PAGE_ALLOC_COSTLY_ORDER;
> > > > +       IS_ENABLED(CONFIG_SLUB_TINY) ? 1 : 2;
> > > >
> > > > Could this be related? It reduces the order of some slab caches, so each
> > > > per-cpu slab will hold fewer objects, which makes contention on the
> > > > per-node spinlock 'list_lock' more severe when slab allocation is under
> > > > pressure from many concurrent threads.
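> > > >
> > > > As a rough illustration (this mirrors SLUB's order_objects() math and
> > > > ignores per-slab metadata):
> > > >
> > > > /* How many objects fit in one slab of the given order? */
> > > > static inline unsigned int order_objects(unsigned int order, unsigned int size)
> > > > {
> > > > 	return ((unsigned int)PAGE_SIZE << order) / size;
> > > > }
> > > >
> > > > /*
> > > >  * With 4K pages and 256-byte objects:
> > > >  *   order 3: 128 objects per slab
> > > >  *   order 2:  64 objects per slab
> > > >  * Halving the objects per slab roughly doubles how often each CPU must
> > > >  * refill from the shared per-node partial list under list_lock.
> > > >  */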
> > >
> > > hackbench uses skbuff_head_cache intensively, so we need to check whether
> > > skbuff_head_cache's order was increased or decreased. On my desktop
> > > skbuff_head_cache's order is 1, and I roughly guessed it was increased,
> > > but it's still worth checking in the testing environment
> > > (e.g. via /sys/kernel/slab/skbuff_head_cache/order).
> > >
> > > But a decreased slab order does not necessarily mean a decreased number of
> > > cached objects per CPU, because when oo_order(s->oo) is smaller, SLUB
> > > caches more slabs in the per-CPU partial list.
> > >
> > > I think the more problematic situation is when oo_order(s->oo) is higher,
> > > because the heuristic in SLUB assumes that each slab has order
> > > oo_order(s->oo) and is half-filled. If it then allocates slabs with an
> > > order lower than oo_order(s->oo), the number of cached objects per CPU
> > > drops drastically due to the inaccurate assumption.
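> > >
> > > As a made-up example of the scale involved: for 256-byte objects the
> > > cpu_partial target is 52 objects, so with oo_order(s->oo) = 1 (32 objects
> > > per slab) the limit works out to DIV_ROUND_UP(52 * 2, 32) = 4 cached slabs.
> > > If the actual allocations end up order 0 (16 objects per slab), 4
> > > half-full slabs hold only ~32 objects instead of the intended 52.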
> > >
> > > So yeah, a decreased number of cached objects per CPU, caused by this
> > > heuristic, could be the cause of the regression.
> > >
> > > And I have another theory: it allocated high-order slabs from a remote node
> > > even when slabs of lower order were available on the local node.
> > >
> > > Of course we need further experiments, but I think both improving the
> > > accuracy of the heuristic and avoiding allocation of high-order slabs from
> > > remote nodes would make SLUB more robust.
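> > >
> > > (For context, the fallback in mm/slub.c's allocate_slab() looks roughly
> > > like the condensed sketch below, from memory: the high-order attempt comes
> > > first, and the page allocator's node fallback can satisfy it from a remote
> > > node before SLUB ever tries the minimum order locally.)
> > >
> > > static struct slab *allocate_slab(struct kmem_cache *s, gfp_t flags, int node)
> > > {
> > > 	struct kmem_cache_order_objects oo = s->oo;
> > > 	struct slab *slab;
> > >
> > > 	/*
> > > 	 * First attempt: the preferred, possibly high, order. The node is
> > > 	 * only a hint, so this may be satisfied from a remote node even
> > > 	 * when lower-order pages are free on the local node.
> > > 	 */
> > > 	slab = alloc_slab_page(flags, node, oo);
> > > 	if (unlikely(!slab)) {
> > > 		/* Fall back to the minimum order only if that failed. */
> > > 		oo = s->min;
> > > 		slab = alloc_slab_page(flags, node, oo);
> > > 	}
> > > 	/* ... slab initialization omitted ... */
> > > 	return slab;
> > > }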
> > >
> > > > I don't have direct data to back it up, but I can try some experiments.
> > >
> > > Thank you for taking the time to experiment!
> > >
> > > Thanks,
> > > Hyeonggon
> > >
> > > > > > then retest on this test machine:
> > > > > > 128 threads 2 sockets Intel(R) Xeon(R) Gold 6338 CPU @ 2.00GHz (Ice Lake) with 256G memory
> > >


Thread overview: 25+ messages
2023-06-28  9:57 [PATCH] [RFC PATCH v2]mm/slub: Optimize slub memory usage Jay Patel
2023-07-03  0:13 ` David Rientjes
2023-07-03  8:39   ` Jay Patel
2023-07-09 14:42   ` Hyeonggon Yoo
2023-07-12 13:06 ` Vlastimil Babka
2023-07-20 10:30   ` Jay Patel
2023-07-17 13:41 ` kernel test robot
2023-07-18  6:43   ` Hyeonggon Yoo
2023-07-20  3:00     ` Oliver Sang
2023-07-20 12:59       ` Hyeonggon Yoo
2023-07-20 13:46         ` Hyeonggon Yoo
2023-07-20 14:15           ` Hyeonggon Yoo
2023-07-24  2:39             ` Oliver Sang
2023-07-31  9:49               ` Hyeonggon Yoo
2023-07-20 13:49         ` Feng Tang
2023-07-20 15:05           ` Hyeonggon Yoo
2023-07-21 14:50             ` Binder Makin
2023-07-21 15:39               ` Hyeonggon Yoo
2023-07-21 18:31                 ` Binder Makin [this message]
2023-07-24 14:35             ` Feng Tang
2023-07-25  3:13               ` Hyeonggon Yoo
2023-07-25  9:12                 ` Feng Tang
2023-08-29  8:30                   ` Feng Tang
2023-07-26 10:06                 ` Vlastimil Babka
2023-08-10 10:38                   ` Jay Patel
