From: Shakeel Butt <shakeelb@google.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Muchun Song <songmuchun@bytedance.com>,
	Greg Thelen <gthelen@google.com>, Roman Gushchin <guro@fb.com>,
	Michal Hocko <mhocko@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>,
	Xiongchun duan <duanxiongchun@bytedance.com>
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages
Date: Tue, 30 Mar 2021 17:28:06 -0700
Message-ID: <CALvZod6zOUBBBbEcAxbYxDgwGUwtZht8EhB_ygm25bAsssZj5Q@mail.gmail.com>
In-Reply-To: <YGOTrAf5bRBRJaBP@cmpxchg.org>

On Tue, Mar 30, 2021 at 2:10 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
[...]
> > The main concern I have with *just* reparenting LRU pages is that for
> > long-running systems, the root memcg will become a dumping ground.
> > In addition, a job running multiple times on a machine will see
> > inconsistent memory usage if it re-accesses the file pages which were
> > reparented to the root memcg.
>
> I don't understand how Muchun's patches are supposed to *change* the
> behavior the way you are describing it. This IS today's behavior.
>
> We have hierarchical accounting, and a page that belongs to a leaf
> cgroup will automatically belong to all its parents.
>
> Further, if you delete a cgroup today, the abandoned cache will stay
> physically linked to that cgroup, but that zombie group no longer acts
> as a control structure: it imposes no limit and no protection; the
> pages will be reclaimed as if they WERE linked to the parent.
>
> For all intents and purposes, when you delete a cgroup today, its
> remaining pages ARE dumped onto the parent.
>
> The only difference is that today they pointlessly pin the leaf cgroup
> as a holding vessel - which is then round-robin'd from the parent
> during reclaim in order to pretend that all these child pages actually
> ARE linked to the parent's LRU list.
>
> Remember how we used to have every page physically linked to multiple
> lrus? The leaf cgroup and the root?
>
> All pages always belong to the (virtual) LRU list of all ancestor
> cgroups. The only thing Muchun changes is that they no longer pin a
> cgroup that has no semantical meaning anymore (because it's neither
> visible to the user nor exerts any control over the pages anymore).
>

Indeed you are right. Even though the physical representation of the tree
changes, the logical picture remains the same.

[Subconsciously I was sad that we would lose the information about the
origin memcg of a page for debugging purposes, but then I thought that if
we really need it, we can just add that metadata to the obj_cgroup
object. So, never mind.]

> Maybe I'm missing something that either you or Roman can explain to
> me. But this series looks like a (rare) pure win.
>
> Whether you like the current semantics is a separate discussion IMO.
>
> > Please note that I do agree with the mentioned problem and we do see
> > this issue in our fleet. Internally we have a "memcg mount option"
> > feature which couples a file system with a memcg and all file pages
> > allocated on that file system will be charged to that memcg. Multiple
> > instances (concurrent or subsequent) of the job will use that file
> > system (with a dedicated memcg) without leaving the zombies behind. I
> > am not pushing for this solution as it comes with its own intricacies
> > (e.g. if memcg coupled with a file system has a limit, the oom
> > behavior would be awkward and therefore internally we don't put a
> > limit on such memcgs). Though I want this to be part of discussion.
>
> Right, you disconnect memory from the tasks that are allocating it,
> and so you can't assign culpability when you need to.
>
> OOM is one thing, but there are also CPU cycles and IO bandwidth
> consumed during reclaim.
>

We didn't really have any issues regarding CPU or IO, but that might be
due to our unique setup (i.e. no local disk).

> > I think the underlying reasons behind this issue are:
> >
> > 1) Filesystem shared by disjoint jobs.
> > 2) For job dedicated filesystems, the lifetime of the filesystem is
> > different from the lifetime of the job.
>
> There is also the case of deleting a cgroup just to recreate it right
> after for the same job. Many job managers do this on restart right now
> - like systemd, and what we're using in our fleet. This seems
> avoidable by recycling a group for another instance of the same job.

I was bundling the scenario you mentioned with (2), i.e. the filesystem
persists across multiple subsequent instances of the same job.

>
> Sharing is a more difficult discussion. If you access a page that you
> share with another cgroup, it may or may not be subject to your own or
> your buddy's memory limits. The only limit it is guaranteed to be
> subjected to is that of your parent. So one thing I could imagine is,
> instead of having a separate cgroup outside the hierarchy, we would
> reparent live pages the second they are accessed from a foreign
> cgroup. And reparent them until you reach the first common ancestor.
>
> This way, when you mount a filesystem shared by two jobs, you can put
> them into a joint subtree, and the root level of this subtree captures
> all the memory (as well as the reclaim CPU and IO) used by the two
> jobs - the private portions and the shared portions - and doesn't make
> them the liability of jobs in the system that DON'T share the same fs.

I will give this idea more thought and see where it goes.
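
Just to make sure I understand the lookup you have in mind, it could be
roughly the walk below, reusing the existing parent_mem_cgroup() helper
(an illustrative sketch only, not a patch):

/*
 * Sketch: first common ancestor of two memcgs, found by walking both
 * up the tree until the walks meet.
 */
static struct mem_cgroup *memcg_common_ancestor(struct mem_cgroup *a,
						struct mem_cgroup *b)
{
	while (a->css.cgroup->level > b->css.cgroup->level)
		a = parent_mem_cgroup(a);
	while (b->css.cgroup->level > a->css.cgroup->level)
		b = parent_mem_cgroup(b);
	while (a != b) {
		a = parent_mem_cgroup(a);
		b = parent_mem_cgroup(b);
	}
	return a;
}

On a foreign access the page would then be reparented to (at most) this
common ancestor rather than to some group outside the hierarchy.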

>
> But again, this is a useful discussion to have, but I don't quite see
> why it's relevant to Muchun's patches. They're purely an optimization.
>
> So I'd like to clear that up first before going further.

I think we are on the same page, i.e. these patches change the physical
representation of the memcg tree but keep it logically the same, and they
fix the zombie memcg issue.
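
As an aside, the zombie buildup this addresses is easy to observe on a
long-running machine via cgroup v2's cgroup.stat (nr_dying_descendants).
A minimal userspace snippet, assuming cgroup2 is mounted at
/sys/fs/cgroup:

#include <stdio.h>

int main(void)
{
	char key[64];
	unsigned long long val;
	FILE *f = fopen("/sys/fs/cgroup/cgroup.stat", "r");

	if (!f) {
		perror("cgroup.stat");
		return 1;
	}
	/* Prints nr_descendants and nr_dying_descendants. */
	while (fscanf(f, "%63s %llu", key, &val) == 2)
		printf("%s %llu\n", key, val);
	fclose(f);
	return 0;
}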
