linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@kernel.org>
To: Shakeel Butt <shakeelb@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Mel Gorman <mgorman@suse.de>, Roman Gushchin <guro@fb.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	Yafang Shao <laoar.shao@gmail.com>, Linux MM <linux-mm@kvack.org>,
	Cgroups <cgroups@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] memcg: expose root cgroup's memory.stat
Date: Fri, 15 May 2020 10:29:55 +0200	[thread overview]
Message-ID: <20200515082955.GJ29153@dhcp22.suse.cz> (raw)
In-Reply-To: <CALvZod5VHHUV+_AXs4+5sLOPGyxm709kQ1q=uHMPVxW8pwXZ=g@mail.gmail.com>

On Sat 09-05-20 07:06:38, Shakeel Butt wrote:
> On Fri, May 8, 2020 at 2:44 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
> >
> > On Fri, May 08, 2020 at 10:06:30AM -0700, Shakeel Butt wrote:
> > > One way to measure the efficiency of memory reclaim is to look at the
> > > ratio (pgscan+pgrefill)/pgsteal. However, at the moment these stats
> > > are not updated consistently at the system level, so the ratio is not
> > > very meaningful there. The pgsteal and pgscan counters are updated
> > > only for global reclaim, while pgrefill gets updated for global as
> > > well as cgroup reclaim.
> > >
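
Just to make the metric concrete (the numbers below are made up purely
for illustration): if /proc/vmstat showed pgscan_kswapd + pgscan_direct
= 600000, pgrefill = 150000 and pgsteal_kswapd + pgsteal_direct =
250000, the cost ratio would be (600000 + 150000) / 250000 = 3, i.e.
three scan/refill events per reclaimed page on average.
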
> > > Please note that this difference exists only for the system level
> > > vmstats. The cgroup stats returned by memory.stat are consistent: a
> > > cgroup's pgsteal contains the number of pages reclaimed by both
> > > global and cgroup reclaim. So one way to get consistent system level
> > > stats is to read them from the root's memory.stat, hence expose
> > > memory.stat for the root cgroup.
> > >
> > >       from Johannes Weiner:
> > >       There are subtle differences between /proc/vmstat and
> > >       memory.stat, and cgroup-aware code that wants to watch the full
> > >       hierarchy currently has to know about these intricacies and
> > >       translate semantics back and forth.

Can we have those subtle differences documented please?

> > >
> > >       Generally having the fully recursive memory.stat at the root
> > >       level could help a broader range of use cases.
> >
> > The changelog raises the question of why we don't just "fix" the
> > system-level stats. It may be useful to include the conclusions from
> > that discussion, and why there is value in keeping the stats this way.
> >
> 
> Right. Andrew, can you please add the following para to the changelog?
> 
> Why not fix the system stats by including both the global and cgroup
> reclaim activity instead of exposing the root cgroup's memory.stat? The
> reason is the benefit of having metrics which expose only the activity
> that happens purely due to machine capacity, rather than the localized
> activity caused by limits throughout the cgroup tree. Additionally,
> there are userspace tools like sysstat(sar) which read these stats to
> report on system level reclaim activity, so we should not break such
> use cases.
> 
> > > Signed-off-by: Shakeel Butt <shakeelb@google.com>
> > > Suggested-by: Johannes Weiner <hannes@cmpxchg.org>
> >
> > Acked-by: Johannes Weiner <hannes@cmpxchg.org>
> 
> Thanks a lot.

I was quite surprised that the patch is so simple TBH. For some reason
I still had the impression that we do not account for the root memcg
(likely because of the mem_cgroup_is_root(memcg) bail out in
try_charge). But the stats are slightly different here. I have started
looking at the individual stat counters because they are not really all
the same. E.g.:
- mem_cgroup_charge_statistics accounts for each memcg, including root
- memcg_charge_kernel_stack relies on pages being associated with a
  memcg, and that in turn relies on __memcg_kmem_charge_page, which
  bails out on the root memcg (see the sketch below the list)
- memcg_charge_slab (NR_SLAB*) skips over the root memcg as well
- __mod_lruvec_page_state relies on page->mem_cgroup as well, but this
  one is OK for paths which go through commit_charge.
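
For reference, the bail out in question looks roughly like this (a
simplified sketch from my reading of the current mm/memcontrol.c,
paraphrased rather than quoted verbatim, so details may differ):

	int __memcg_kmem_charge_page(struct page *page, gfp_t gfp, int order)
	{
		struct mem_cgroup *memcg;
		int ret = 0;

		memcg = get_mem_cgroup_from_current();
		if (!mem_cgroup_is_root(memcg)) {
			/* only non-root memcgs are actually charged... */
			ret = __memcg_kmem_charge(memcg, gfp, 1 << order);
			if (!ret) {
				/* ...and only they get page->mem_cgroup set */
				page->mem_cgroup = memcg;
				__SetPageKmemcg(page);
			}
		}
		css_put(&memcg->css);
		return ret;
	}

Because page->mem_cgroup stays unset for root charges, anything that
later keys off it, like the MEMCG_KERNEL_STACK_KB update done from
memcg_charge_kernel_stack, is silently skipped for the root memcg.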

That being said, we should really double check which stats are
accounted properly. At least MEMCG_KERNEL_STACK_KB won't be, unless I
am misreading the code.

I do not mind displaying the root memcg's stats, but a) a closer look
has to be taken at each counter and b) a clarification of the
differences from the global vmstat counters would be really handy.

-- 
Michal Hocko
SUSE Labs



Thread overview: 14+ messages
2020-05-08 17:06 [PATCH] memcg: expose root cgroup's memory.stat Shakeel Butt
2020-05-08 21:44 ` Johannes Weiner
2020-05-09 14:06   ` Shakeel Butt
2020-05-09 14:43     ` Yafang Shao
2020-05-15  8:29     ` Michal Hocko [this message]
2020-05-15 13:24       ` Johannes Weiner
2020-05-15 13:44         ` Shakeel Butt
2020-05-15 15:00           ` Roman Gushchin
2020-05-15 17:49             ` Shakeel Butt
2020-05-15 18:09               ` Johannes Weiner
2020-05-16  0:13                 ` Shakeel Butt
2020-05-15 18:09               ` Roman Gushchin
2020-05-16  0:06                 ` Shakeel Butt
2020-05-16  1:42 ` Chris Down
