From: Roman Gushchin <guro@fb.com>
To: Waiman Long <llong@redhat.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Michal Hocko <mhocko@kernel.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Tejun Heo <tj@kernel.org>, Christoph Lameter <cl@linux.com>,
	Pekka Enberg <penberg@kernel.org>,
	David Rientjes <rientjes@google.com>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>,
	Vlastimil Babka <vbabka@suse.cz>, <linux-kernel@vger.kernel.org>,
	<cgroups@vger.kernel.org>, <linux-mm@kvack.org>,
	Shakeel Butt <shakeelb@google.com>,
	Muchun Song <songmuchun@bytedance.com>,
	Alex Shi <alex.shi@linux.alibaba.com>,
	Chris Down <chris@chrisdown.name>,
	Yafang Shao <laoar.shao@gmail.com>,
	Wei Yang <richard.weiyang@gmail.com>,
	Masayoshi Mizuma <msys.mizuma@gmail.com>,
	Xing Zhengjun <zhengjun.xing@linux.intel.com>,
	Matthew Wilcox <willy@infradead.org>
Subject: Re: [PATCH-next v5 2/4] mm/memcg: Cache vmstat data in percpu memcg_stock_pcp
Date: Thu, 22 Apr 2021 18:56:26 -0700	[thread overview]
Message-ID: <YIIpSvs09GhDN+gb@carbon> (raw)
In-Reply-To: <ded96eba-8c0c-1822-61b5-de0577b7ebab@redhat.com>

On Thu, Apr 22, 2021 at 12:58:52PM -0400, Waiman Long wrote:
> On 4/21/21 7:28 PM, Roman Gushchin wrote:
> > On Tue, Apr 20, 2021 at 03:29:05PM -0400, Waiman Long wrote:
> > > Before the new slab memory controller with per-object byte charging,
> > > charging and vmstat data updates happened only when new slab pages were
> > > allocated or freed. Now they are done with every kmem_cache_alloc()
> > > and kmem_cache_free(). This causes additional overhead for workloads
> > > that generate a lot of alloc and free calls.
> > > 
> > > The memcg_stock_pcp is used to cache the byte charge for a specific
> > > obj_cgroup to reduce that overhead. To reduce it further, this patch
> > > caches the vmstat data in the memcg_stock_pcp structure as well, until
> > > a page size worth of updates has accumulated or other cached data
> > > change. Caching the vmstat data in the per-cpu stock replaces two
> > > writes to non-hot cachelines (memcg-specific and memcg-lruvec-specific
> > > vmstat data) with a single write to a hot local stock cacheline.
> > > 
> > > On a 2-socket Cascade Lake server with instrumentation enabled and this
> > > patch applied, only about 20% (634400 out of 3243830) of the
> > > mod_objcg_state() calls after initial boot led to an actual call to
> > > __mod_objcg_state(). During a parallel kernel build, the figure was
> > > about 17% (24329265 out of 142512465). So caching the vmstat data
> > > reduces the number of calls to __mod_objcg_state() by more than 80%.
> > > 
> > > Signed-off-by: Waiman Long <longman@redhat.com>
> > > Reviewed-by: Shakeel Butt <shakeelb@google.com>
> > > ---
> > >   mm/memcontrol.c | 86 +++++++++++++++++++++++++++++++++++++++++++++++--
> > >   1 file changed, 83 insertions(+), 3 deletions(-)
> > > 
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 7cd7187a017c..292b4783b1a7 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -782,8 +782,9 @@ void __mod_lruvec_kmem_state(void *p, enum node_stat_item idx, int val)
> > >   	rcu_read_unlock();
> > >   }
> > > -void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
> > > -		     enum node_stat_item idx, int nr)
> > > +static inline void mod_objcg_mlstate(struct obj_cgroup *objcg,
> > > +				     struct pglist_data *pgdat,
> > > +				     enum node_stat_item idx, int nr)
> > >   {
> > >   	struct mem_cgroup *memcg;
> > >   	struct lruvec *lruvec;
> > > @@ -791,7 +792,7 @@ void mod_objcg_state(struct obj_cgroup *objcg, struct pglist_data *pgdat,
> > >   	rcu_read_lock();
> > >   	memcg = obj_cgroup_memcg(objcg);
> > >   	lruvec = mem_cgroup_lruvec(memcg, pgdat);
> > > -	mod_memcg_lruvec_state(lruvec, idx, nr);
> > > +	__mod_memcg_lruvec_state(lruvec, idx, nr);
> > >   	rcu_read_unlock();
> > >   }
> > > @@ -2059,7 +2060,10 @@ struct memcg_stock_pcp {
> > >   #ifdef CONFIG_MEMCG_KMEM
> > >   	struct obj_cgroup *cached_objcg;
> > > +	struct pglist_data *cached_pgdat;
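Only fragments of the 86-line hunk are shown above. As a rough,
illustrative sketch of the caching scheme the commit message describes,
the cached update path could look roughly like the code below; the
vmstat_idx/vmstat_bytes fields and the exact flush conditions are
assumptions for illustration, not necessarily what the patch actually
does.

/*
 * Illustrative sketch only, not the actual patch: assume memcg_stock_pcp
 * grew two extra fields next to cached_pgdat, e.g.
 *	enum node_stat_item vmstat_idx;
 *	int vmstat_bytes;
 */
static void mod_objcg_state_cached(struct obj_cgroup *objcg,
				   struct pglist_data *pgdat,
				   enum node_stat_item idx, int nr)
{
	struct memcg_stock_pcp *stock;
	unsigned long flags;

	local_irq_save(flags);
	stock = this_cpu_ptr(&memcg_stock);

	/*
	 * If the accumulated delta targets a different (pgdat, idx) pair,
	 * flush it to its old target first.  (Changes of cached_objcg are
	 * handled by the existing stock-drain path and left out here.)
	 */
	if (stock->cached_pgdat != pgdat || stock->vmstat_idx != idx) {
		if (stock->vmstat_bytes && stock->cached_objcg)
			mod_objcg_mlstate(stock->cached_objcg,
					  stock->cached_pgdat,
					  stock->vmstat_idx,
					  stock->vmstat_bytes);
		stock->cached_pgdat = pgdat;
		stock->vmstat_idx = idx;
		stock->vmstat_bytes = 0;
	}

	/* Accumulate; touch the non-hot cachelines once per page's worth. */
	stock->vmstat_bytes += nr;
	if (abs(stock->vmstat_bytes) >= PAGE_SIZE) {
		mod_objcg_mlstate(objcg, pgdat, idx, stock->vmstat_bytes);
		stock->vmstat_bytes = 0;
	}

	local_irq_restore(flags);
}

With something along these lines, most calls never leave the hot per-cpu
cacheline, which matches the ~80% reduction in __mod_objcg_state() calls
reported in the commit message.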
> > I wonder if we want to have per-node counters instead?
> > That would complicate the initialization of pcp stocks a bit,
> > but might shave off some additional cpu time.
> > But we can do it later too.
> > 
> A per-node counter will certainly complicate the code and reduce the
> performance benefit too.

Hm, why? We wouldn't need to flush the stock when a release happens
against a pgdat other than the currently cached one.
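Concretely, a per-node variant could look something like the sketch
below. The structure layout, the nr_node_ids-sized allocation, and the
assumption that only the two slab byte counters pass through this path
are illustrative guesses, not a worked-out design.

/*
 * Hypothetical sketch: one accumulator per node in the per-cpu stock,
 * sized by nr_node_ids and allocated when the stocks are set up (the
 * extra initialization complexity mentioned earlier), so an update for
 * a pgdat other than the last one never forces a flush.
 */
struct memcg_node_stock {
	int reclaimable_b;	/* cached NR_SLAB_RECLAIMABLE_B delta */
	int unreclaimable_b;	/* cached NR_SLAB_UNRECLAIMABLE_B delta */
};
/* in struct memcg_stock_pcp:	struct memcg_node_stock *node; */

static void mod_objcg_state_pernode(struct obj_cgroup *objcg,
				    struct pglist_data *pgdat,
				    enum node_stat_item idx, int nr)
{
	/* Caller disables interrupts, as the existing stock code does. */
	struct memcg_stock_pcp *stock = this_cpu_ptr(&memcg_stock);
	struct memcg_node_stock *ns = &stock->node[pgdat->node_id];
	int *bytes = (idx == NR_SLAB_RECLAIMABLE_B) ? &ns->reclaimable_b
						    : &ns->unreclaimable_b;

	*bytes += nr;
	if (abs(*bytes) >= PAGE_SIZE) {
		mod_objcg_mlstate(objcg, pgdat, idx, *bytes);
		*bytes = 0;
	}
}

The price is a bigger per-cpu footprint and the extra setup; the benefit
is that multi-node workloads keep batching instead of flushing on every
pgdat switch.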

> I got a pretty good hit rate of 80%+ with the
> current code on a 2-socket system. The hit rate will probably drop when
> there are more nodes. I will do some more investigation, but it will not be
> for this patchset.

Works for me!

Thanks!

Thread overview:
2021-04-20 19:29 [PATCH-next v5 0/4] mm/memcg: Reduce kmemcache memory accounting overhead Waiman Long
2021-04-20 19:29 ` [PATCH-next v5 1/4] mm/memcg: Move mod_objcg_state() to memcontrol.c Waiman Long
2021-04-21 15:26   ` Shakeel Butt
2021-04-21 23:08   ` Roman Gushchin
2021-04-20 19:29 ` [PATCH-next v5 2/4] mm/memcg: Cache vmstat data in percpu memcg_stock_pcp Waiman Long
2021-04-21 23:28   ` Roman Gushchin
2021-04-22 16:58     ` Waiman Long
2021-04-23  1:56       ` Roman Gushchin [this message]
2021-04-23 16:52         ` Waiman Long
2021-04-20 19:29 ` [PATCH-next v5 3/4] mm/memcg: Improve refill_obj_stock() performance Waiman Long
2021-04-21 23:55   ` Roman Gushchin
2021-04-22 17:26     ` Waiman Long
2021-04-23  2:28       ` Roman Gushchin
2021-04-23 20:06         ` Waiman Long
2021-04-26 19:24   ` Shakeel Butt
2021-04-20 19:29 ` [PATCH-next v5 4/4] mm/memcg: Optimize user context object stock access Waiman Long
