From: Masayoshi Mizuma <msys.mizuma@gmail.com>
To: Waiman Long <longman@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
Michal Hocko <mhocko@kernel.org>,
Vladimir Davydov <vdavydov.dev@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Tejun Heo <tj@kernel.org>, Christoph Lameter <cl@linux.com>,
Pekka Enberg <penberg@kernel.org>,
David Rientjes <rientjes@google.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
Vlastimil Babka <vbabka@suse.cz>, Roman Gushchin <guro@fb.com>,
linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
linux-mm@kvack.org, Shakeel Butt <shakeelb@google.com>,
Muchun Song <songmuchun@bytedance.com>,
Alex Shi <alex.shi@linux.alibaba.com>,
Chris Down <chris@chrisdown.name>,
Yafang Shao <laoar.shao@gmail.com>,
Wei Yang <richard.weiyang@gmail.com>,
Xing Zhengjun <zhengjun.xing@linux.intel.com>
Subject: Re: [PATCH v3 0/5] mm/memcg: Reduce kmemcache memory accounting overhead
Date: Wed, 14 Apr 2021 23:26:42 -0400
Message-ID: <20210415032642.gfaevezaxoj4od3d@gabell>
In-Reply-To: <20210414012027.5352-1-longman@redhat.com>
On Tue, Apr 13, 2021 at 09:20:22PM -0400, Waiman Long wrote:
> v3:
> - Add missing "inline" qualifier to the alternate mod_obj_stock_state()
> in patch 3.
> - Remove redundant current_obj_stock() call in patch 5.
>
> v2:
> - Fix bug found by test robot in patch 5.
> - Update cover letter and commit logs.
>
> With the recent introduction of the new slab memory controller, we
> eliminate the need for separate kmemcaches for each memory cgroup and
> reduce overall kernel memory usage. However, we also add memory
> accounting overhead to each call of kmem_cache_alloc() and
> kmem_cache_free().
>
> Workloads that perform many kmemcache allocations and de-allocations
> may therefore experience a performance regression, as illustrated in
> [1] and [2].
>
> A simple kernel module that performs a loop of 100,000,000
> kmem_cache_alloc() and kmem_cache_free() calls on a 64-byte object at
> module init time is used for benchmarking. The test was run on a
> CascadeLake server with turbo-boosting disabled to reduce run-to-run
> variation.
>
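[A minimal sketch of such a benchmark module might look like the following.
This is an illustration only; the cache name, object layout, SLAB_ACCOUNT
flag, and timing output are assumptions, not the actual module used to
produce the numbers below.]

---
#include <linux/module.h>
#include <linux/slab.h>
#include <linux/ktime.h>

/* 64-byte test object, matching the description above */
struct testobj {
        char pad[64];
};

static int __init bench_init(void)
{
        struct kmem_cache *cachep;
        ktime_t start;
        void *p;
        long i;

        /* SLAB_ACCOUNT enables memcg accounting for this cache (assumed) */
        cachep = KMEM_CACHE(testobj, SLAB_ACCOUNT);
        if (!cachep)
                return -ENOMEM;

        start = ktime_get();
        for (i = 0; i < 100000000L; i++) {
                p = kmem_cache_alloc(cachep, GFP_KERNEL);
                kmem_cache_free(cachep, p);
        }
        pr_info("alloc/free loop: %lld us\n",
                ktime_us_delta(ktime_get(), start));

        kmem_cache_destroy(cachep);
        return 0;
}
module_init(bench_init);

MODULE_LICENSE("GPL");
---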
> With memory accounting disabled, the run time was 2.848s. With memory
> accounting enabled, the run times with the various patches of the
> patchset applied were:
>
>   Applied patches   Run time   Accounting overhead   Overhead %age
>   ---------------   --------   -------------------   -------------
>   None              10.800s    7.952s                100.0%
>   1-2                9.140s    6.292s                 79.1%
>   1-3                7.641s    4.793s                 60.3%
>   1-5                6.801s    3.953s                 49.7%
>
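[The "Accounting overhead" column is the accounting-enabled run time minus
the 2.848s no-accounting baseline, and the percentage is relative to the
unpatched 7.952s overhead; e.g. for patches 1-5: 6.801s - 2.848s = 3.953s,
and 3.953 / 7.952 ≈ 49.7%.]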
> Note that this is the best-case scenario, where most updates happen
> only to the percpu stocks. Real workloads will likely have a certain
> amount of updates to the memcg charges and vmstats, so the performance
> benefit will be smaller.
>
> It was found that a big part of the memory accounting overhead was
> caused by the local_irq_save()/local_irq_restore() sequences used when
> updating the local stock charge bytes and the vmstat array, at least on
> x86 systems. There are two such sequences in kmem_cache_alloc() and two
> in kmem_cache_free(). This patchset tries to reduce the use of such
> sequences as much as possible; in fact, it eliminates them in the
> common case. Another part of this patchset is to cache the vmstat data
> updates in the local stock as well, which also helps.
>
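[The fast-path pattern referred to above looks roughly like the sketch
below. This is an illustration based on the description, not the exact
mm/memcontrol.c code; the function name is made up.]

---
/*
 * Each charge/vmstat update on the allocation fast path accesses the
 * percpu "stock" with interrupts disabled, which is the cost the
 * patchset tries to avoid in the common case.
 */
static void obj_stock_update_sketch(unsigned int nr_bytes)
{
        struct memcg_stock_pcp *stock;
        unsigned long flags;

        local_irq_save(flags);                  /* disable irqs on this CPU */
        stock = this_cpu_ptr(&memcg_stock);     /* per-cpu cached charge */
        stock->nr_bytes += nr_bytes;            /* update cached byte count */
        local_irq_restore(flags);               /* re-enable irqs */
}
---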
> [1] https://lore.kernel.org/linux-mm/20210408193948.vfktg3azh2wrt56t@gabell/T/#u
Hi Longman,
Thank you for your patches.
I reran the benchmark with your patches, and it seems that the reduction
is small... The total durations of the sendto() and recvfrom() system
calls during the benchmark are as follows.
- sendto
  - v5.8 vanilla:                      2576.056 msec (100%)
  - v5.12-rc7 vanilla:                 2988.911 msec (116%)
  - v5.12-rc7 with your patches (1-5): 2984.307 msec (115%)
- recvfrom
  - v5.8 vanilla:                      2113.156 msec (100%)
  - v5.12-rc7 vanilla:                 2305.810 msec (109%)
  - v5.12-rc7 with your patches (1-5): 2287.351 msec (108%)
kmem_cache_alloc()/kmem_cache_free() are called around 1,400,000 times
during the benchmark. I ran a loop in a kernel module as follows; its
duration is indeed reduced by your patches.
---
/* assumes a small 'struct dummy' is defined elsewhere in the module */
struct kmem_cache *dummy_cache;
void *p;
int i;

dummy_cache = KMEM_CACHE(dummy, SLAB_ACCOUNT);
for (i = 0; i < 1400000; i++) {
        p = kmem_cache_alloc(dummy_cache, GFP_KERNEL);
        kmem_cache_free(dummy_cache, p);
}
---
- v5.12-rc7 vanilla:                 110 msec (100%)
- v5.12-rc7 with your patches (1-5):  85 msec (77%)

That said, the reduction for the benchmark itself still seems small...
Anyway, I can see that your patches reduce the overhead.
Please feel free to add:
Tested-by: Masayoshi Mizuma <m.mizuma@jp.fujitsu.com>
Thanks!
Masa
> [2] https://lore.kernel.org/lkml/20210114025151.GA22932@xsang-OptiPlex-9020/
>
> Waiman Long (5):
> mm/memcg: Pass both memcg and lruvec to mod_memcg_lruvec_state()
> mm/memcg: Introduce obj_cgroup_uncharge_mod_state()
> mm/memcg: Cache vmstat data in percpu memcg_stock_pcp
> mm/memcg: Separate out object stock data into its own struct
> mm/memcg: Optimize user context object stock access
>
> include/linux/memcontrol.h |  14 ++-
> mm/memcontrol.c            | 199 ++++++++++++++++++++++++++++++++-----
> mm/percpu.c                |   9 +-
> mm/slab.h                  |  32 +++---
> 4 files changed, 196 insertions(+), 58 deletions(-)
>
> --
> 2.18.1
>