From: Roman Gushchin <guro@fb.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Shakeel Butt <shakeelb@google.com>,
	Muchun Song <songmuchun@bytedance.com>,
	Greg Thelen <gthelen@google.com>,
	Michal Hocko <mhocko@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	Vladimir Davydov <vdavydov.dev@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>,
	Linux MM <linux-mm@kvack.org>,
	Xiongchun duan <duanxiongchun@bytedance.com>
Subject: Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages
Date: Tue, 30 Mar 2021 15:05:42 -0700
Message-ID: <YGOgth4IUflVE/Me@carbon.dhcp.thefacebook.com>
In-Reply-To: <YGOYYgWbwiYlKmzV@cmpxchg.org>

On Tue, Mar 30, 2021 at 05:30:10PM -0400, Johannes Weiner wrote:
> On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> > On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > >
> > > > Since the following patchsets were applied, all kernel memory has been
> > > > charged with the new obj_cgroup APIs.
> > > >
> > > >         [v17,00/19] The new cgroup slab memory controller
> > > >         [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > > >
> > > > But user memory allocations (LRU pages) still pin memcgs for a long time.
> > > > This exists at a larger scale and is causing recurring problems in the
> > > > real world: the page cache doesn't get reclaimed for a long time, or is
> > > > used by the second, third, fourth, ... instance of the same job that was
> > > > restarted into a new cgroup every time. Unreclaimable dying cgroups pile
> > > > up, waste memory, and make page reclaim very inefficient.
> > > >
> > > > We can fix this by converting LRU pages and most other raw memcg pins to
> > > > the objcg direction; the LRU pages will then no longer pin the memcgs.
> > > >
> > > > This patchset aims to make LRU pages drop their reference to the memory
> > > > cgroup by using the obj_cgroup APIs. With it applied, the number of dying
> > > > cgroups no longer grows when we run the following test script, which
> > > > repeatedly charges memory to a cgroup and then deletes the cgroup while
> > > > that memory is still in use.
> > > >
> > > > ```bash
> > > > #!/bin/bash
> > > >
> > > > cat /proc/cgroups | grep memory
> > > >
> > > > cd /sys/fs/cgroup/memory
> > > >
> > > > for i in {1..500}
> > > > do
> > > >         mkdir test
> > > >         echo $$ > test/cgroup.procs
> > > >         sleep 60 &
> > > >         echo $$ > cgroup.procs
> > > >         echo `cat test/cgroup.procs` > cgroup.procs
> > > >         rmdir test
> > > > done
> > > >
> > > > cat /proc/cgroups | grep memory
> > > > ```
> > > >
> > > > Patch 1 fixes page charging in page replacement.
> > > > Patches 2-5 are code cleanups and simplifications.
> > > > Patches 6-15 convert the LRU page pins to the objcg direction.
> > > 
> > > The main concern I have with *just* reparenting LRU pages is that on
> > > long-running systems, the root memcg will become a dumping ground. In
> > > addition, a job running multiple times on a machine will see
> > > inconsistent memory usage if it re-accesses the file pages which were
> > > reparented to the root memcg.
> > 
> > I agree, but reparenting also doesn't combine well with memory
> > protections (e.g. memory.low).
> > 
> > Imagine the following configuration:
> > workload.slice
> > - workload_gen_1.service   memory.min = 30G
> > - workload_gen_2.service   memory.min = 30G
> > - workload_gen_3.service   memory.min = 30G
> >   ...
> > 
> > A parent cgroup and several generations of the child cgroup, each protected
> > by memory.min. Once the memory is reparented, it's not protected anymore.
> 
> That doesn't sound right.
> 
> A deleted cgroup today exerts no control over its abandoned
> pages. css_reset() will blow out any control settings.

I know. Currently it works in the following way: once cgroup gen_1 is deleted,
its memory is no longer protected, so eventually it gets evicted and
re-faulted as gen_2 (or gen_N) memory. Muchun's patchset doesn't change this,
of course. But long-term we likely want to re-charge such pages to new cgroups
and avoid the unnecessary evictions and re-faults. Switching to obj_cgroups
doesn't help with that and will likely complicate such a change. So I'm a bit
skeptical here.
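
To make this concrete, here is a rough sketch of what such re-charging could
look like. It is purely illustrative: maybe_recharge_page() does not exist,
and a real implementation would have to handle LRU isolation, statistics and
races with reclaim, all of which are elided below.

```c
/*
 * Purely hypothetical sketch, not existing kernel code: on access,
 * move a page charged to an offline (dying) memcg to the accessing
 * task's memcg, instead of waiting for eviction plus re-fault.
 */
static void maybe_recharge_page(struct page *page, struct mm_struct *mm)
{
	struct mem_cgroup *old = page_memcg(page);

	/* Nothing to do while the charged memcg is still online. */
	if (!old || mem_cgroup_online(old))
		return;

	/*
	 * A real implementation would uncharge the page from 'old',
	 * charge it to the memcg of get_mem_cgroup_from_mm(mm), and
	 * move the page's LRU linkage and stats under proper locking.
	 */
}
```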

Also, in my experience the page cache is not the main/worst memcg reference
holder (writeback structures are way worse). Pages are relatively large
(in comparison to some slab objects), and rarely is there only one page
pinning a separate memcg. And switching to obj_cgroup doesn't completely
eliminate the problem: we just switch from accumulating larger mem_cgroups
to accumulating smaller obj_cgroups.
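
For reference, the reparenting that already exists for slab objects looks
roughly like this (paraphrased from memcg_reparent_objcgs() in
mm/memcontrol.c, with the RCU handling and refcount killing trimmed). Note
that the objcg structures themselves survive; they are merely re-pointed:

```c
/*
 * Simplified paraphrase of the existing objcg reparenting: when a
 * memcg dies, its obj_cgroups are re-pointed at the parent. The
 * (small) objcg structures keep accumulating; they now pin the
 * parent instead of the dead memcg.
 */
static void reparent_objcgs(struct mem_cgroup *memcg,
			    struct mem_cgroup *parent)
{
	struct obj_cgroup *iter;

	spin_lock_irq(&css_set_lock);

	/* Re-point every objcg of the dying memcg at the parent... */
	list_for_each_entry(iter, &memcg->objcg_list, list)
		WRITE_ONCE(iter->memcg, parent);

	/* ...and move them all onto the parent's list. */
	list_splice(&memcg->objcg_list, &parent->objcg_list);

	spin_unlock_irq(&css_set_lock);
}
```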

With all this said, I'm not necessarily opposing the patchset, but it's
necessary to discuss how it fits into the long-term picture. E.g. if we're
going to use the obj_cgroup API for page-sized objects, shouldn't we split
it back into the reparenting and byte-sized accounting parts, as I initially
suggested? And shouldn't we move the reparenting part to the cgroup core
level, so that it could be used by other controllers
(e.g. to fix the writeback problem)?
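
To illustrate that second point, one entirely hypothetical shape for
core-level reparenting would be a per-subsystem callback next to
->css_offline(); ->css_reparent below is invented for this sketch and does
not exist in the kernel:

```c
/*
 * Hypothetical only: ->css_reparent is not an existing callback.
 * The idea is that the cgroup core, rather than each controller,
 * would drive reference migration when a css goes offline, so that
 * memcg, the writeback machinery, etc. could share one mechanism.
 */
struct cgroup_subsys {
	/* ... existing callbacks (css_alloc, css_offline, ...) ... */
	void (*css_reparent)(struct cgroup_subsys_state *css,
			     struct cgroup_subsys_state *parent);
};

static void cgroup_reparent_css(struct cgroup_subsys_state *css)
{
	if (css->ss->css_reparent)
		css->ss->css_reparent(css, css->parent);
}
```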

> 
> If you're talking about protection previously inherited by
> workload.slice, that continues to apply as it always has.
> 
> None of this is really accidental. By definition the workload.slice
> control domain includes workload_gen_1.service. And by definition,
> the workload_gen_1.service domain ceases to exist when you delete it.
> 
> There are no (or shouldn't be any!) semantic changes from the physical
> unlinking from a dead control domain.
> 
> > Also, I'm somewhat concerned about the interaction of the reparenting
> > with the writeback and dirty throttling. How does it work together?
> 
> What interaction specifically?
> 
> When you delete a cgroup that had both the block and the memory
> controller enabled, the control domain of both goes away and it
> becomes subject to whatever control domain is above it (if any).
> 
> A higher control domain in turn takes a recursive view of the subtree,
> see mem_cgroup_wb_stats(), so when control is exerted, it applies
> regardless of how and where pages are physically linked in children.
> 
> It's also already possible to enable e.g. block control only at a very
> high level and memory control down to a lower level. Per design this
> code can live with different domain sizes for memory and block.

I'm totally happy if it's safe; I just don't know this code well enough
to be sure without taking a closer look.
