linux-mm.kvack.org archive mirror
From: Johannes Weiner <hannes@cmpxchg.org>
To: Shakeel Butt <shakeelb@google.com>
Cc: Roman Gushchin <guro@fb.com>, Michal Hocko <mhocko@kernel.org>,
	Mel Gorman <mgorman@suse.de>,
	Andrew Morton <akpm@linux-foundation.org>,
	Yafang Shao <laoar.shao@gmail.com>, Linux MM <linux-mm@kvack.org>,
	Cgroups <cgroups@vger.kernel.org>,
	LKML <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] memcg: expose root cgroup's memory.stat
Date: Fri, 15 May 2020 14:09:06 -0400
Message-ID: <20200515180906.GA630613@cmpxchg.org>
In-Reply-To: <CALvZod5EHzK-UzS9WgkzpZ2T+WwA8LottxrTzUi3qFwvUbOk4w@mail.gmail.com>

On Fri, May 15, 2020 at 10:49:22AM -0700, Shakeel Butt wrote:
> On Fri, May 15, 2020 at 8:00 AM Roman Gushchin <guro@fb.com> wrote:
> > On Fri, May 15, 2020 at 06:44:44AM -0700, Shakeel Butt wrote:
> > > On Fri, May 15, 2020 at 6:24 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > > You're right. It should only bypass the page_counter, but still set
> > > > page->mem_cgroup = root_mem_cgroup, just like user pages.
> >
> > What about kernel threads? We consider them to belong to the root memory
> > cgroup. Should their memory consumption be considered in root-level stats?
> >
> > I'm not sure we really want it, but I guess we need to document how
> > kernel threads are handled.
> 
> What will be the cons of updating root-level stats for kthreads?

Should kernel threads be doing GFP_ACCOUNT allocations without
memalloc_use_memcg()? GFP_ACCOUNT implies that the memory consumption
can be significant and should be attributed to userspace activity.
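
For reference, explicit attribution from a kthread looks roughly like
this with the current API (a minimal sketch; the helper name and the
allocation are made up for illustration):

#include <linux/sched/mm.h>
#include <linux/slab.h>

/*
 * Sketch: a kernel thread doing work on behalf of a known cgroup can
 * attribute its __GFP_ACCOUNT allocations to it by setting
 * current->active_memcg around the allocation.
 */
static void *alloc_on_behalf_of(struct mem_cgroup *memcg, size_t size)
{
	void *p;

	memalloc_use_memcg(memcg);		/* current->active_memcg = memcg */
	p = kmalloc(size, GFP_KERNEL_ACCOUNT);	/* charged to memcg, not root */
	memalloc_unuse_memcg();			/* clear active_memcg */

	return p;
}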

If the kernel thread has no userspace entity to blame, it seems to
imply the same thing as a !GFP_ACCOUNT allocation: shared public
infrastructure, not interesting to account to any specific cgroup.

I'm not sure if we have such allocations right now. But IMO we should
not account anything from kthreads, or interrupts for that matter,
/unless/ there is a specific active_memcg that was set by the kthread
or the interrupt.
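
In code, the rule I have in mind is roughly the following (a sketch
modeled on the existing memcg_kmem_bypass(); the function name is
hypothetical, and interrupt-safe handling of active_memcg would need
more care than shown here):

#include <linux/preempt.h>
#include <linux/sched.h>

/*
 * Sketch of the proposed policy: an explicitly set active_memcg
 * always wins; otherwise, contexts with no userspace entity to blame
 * (interrupts, kthreads) are not accounted at all.
 */
static bool memcg_should_bypass_accounting(void)
{
	if (current->active_memcg)	/* set via memalloc_use_memcg() */
		return false;
	if (in_interrupt() || (current->flags & PF_KTHREAD))
		return true;
	return false;
}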


Thread overview: 14+ messages
2020-05-08 17:06 [PATCH] memcg: expose root cgroup's memory.stat Shakeel Butt
2020-05-08 21:44 ` Johannes Weiner
2020-05-09 14:06   ` Shakeel Butt
2020-05-09 14:43     ` Yafang Shao
2020-05-15  8:29     ` Michal Hocko
2020-05-15 13:24       ` Johannes Weiner
2020-05-15 13:44         ` Shakeel Butt
2020-05-15 15:00           ` Roman Gushchin
2020-05-15 17:49             ` Shakeel Butt
2020-05-15 18:09               ` Johannes Weiner [this message]
2020-05-16  0:13                 ` Shakeel Butt
2020-05-15 18:09               ` Roman Gushchin
2020-05-16  0:06                 ` Shakeel Butt
2020-05-16  1:42 ` Chris Down
