linux-mm.kvack.org archive mirror
From: Muchun Song <songmuchun@bytedance.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Roman Gushchin <guro@fb.com>, Shakeel Butt <shakeelb@google.com>,
	Greg Thelen <gthelen@google.com>,
	 Michal Hocko <mhocko@kernel.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	 Vladimir Davydov <vdavydov.dev@gmail.com>,
	LKML <linux-kernel@vger.kernel.org>,
	 Linux MM <linux-mm@kvack.org>,
	Xiongchun duan <duanxiongchun@bytedance.com>
Subject: Re: [External] Re: [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages
Date: Fri, 2 Apr 2021 00:07:42 +0800
Message-ID: <CAMZfGtVYMDt_3CyH1g8d+Sey8J+W2HVY73xwCk_anm3UgHcr0w@mail.gmail.com>
In-Reply-To: <YGSSemLLpiHJHanV@cmpxchg.org>

On Wed, Mar 31, 2021 at 11:17 PM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Tue, Mar 30, 2021 at 03:05:42PM -0700, Roman Gushchin wrote:
> > On Tue, Mar 30, 2021 at 05:30:10PM -0400, Johannes Weiner wrote:
> > > On Tue, Mar 30, 2021 at 11:58:31AM -0700, Roman Gushchin wrote:
> > > > On Tue, Mar 30, 2021 at 11:34:11AM -0700, Shakeel Butt wrote:
> > > > > On Tue, Mar 30, 2021 at 3:20 AM Muchun Song <songmuchun@bytedance.com> wrote:
> > > > > >
> > > > > > Since the following patchsets were applied, all kernel memory is charged
> > > > > > with the new obj_cgroup APIs:
> > > > > >
> > > > > >         [v17,00/19] The new cgroup slab memory controller
> > > > > >         [v5,0/7] Use obj_cgroup APIs to charge kmem pages
> > > > > >
> > > > > > But user memory allocations (LRU pages) can pin memcgs for a long time.
> > > > > > This happens at a larger scale and is causing recurring problems in the real
> > > > > > world: page cache doesn't get reclaimed for a long time, or is used by the
> > > > > > second, third, fourth, ... instance of the same job that was restarted into
> > > > > > a new cgroup every time. Unreclaimable dying cgroups pile up, waste memory,
> > > > > > and make page reclaim very inefficient.
> > > > > >
> > > > > > We can fix this by converting LRU pages and most other raw memcg pins to
> > > > > > the objcg direction, so that LRU pages no longer pin their memcg.
> > > > > >
> > > > > > This patchset makes LRU pages drop their reference to the memory cgroup
> > > > > > by using the obj_cgroup APIs. With it applied, the number of dying cgroups
> > > > > > no longer grows when we run the following test script.
> > > > > >
> > > > > > ```bash
> > > > > > #!/bin/bash
> > > > > >
> > > > > > cat /proc/cgroups | grep memory
> > > > > >
> > > > > > cd /sys/fs/cgroup/memory
> > > > > >
> > > > > > # Each iteration charges a background task's memory to a fresh cgroup,
> > > > > > # then deletes that cgroup while the task's pages live on.
> > > > > > for i in {1..500}
> > > > > > do
> > > > > >         mkdir test
> > > > > >         echo $$ > test/cgroup.procs
> > > > > >         sleep 60 &                            # charged to "test"
> > > > > >         echo $$ > cgroup.procs                # move the shell back to the root
> > > > > >         echo `cat test/cgroup.procs` > cgroup.procs  # move remaining tasks out so rmdir succeeds
> > > > > >         rmdir test                            # "test" becomes a dying cgroup
> > > > > > done
> > > > > >
> > > > > > cat /proc/cgroups | grep memory
> > > > > > ```
> > > > > >
> > > > > > Patch 1 aims to fix page charging in page replacement.
> > > > > > Patch 2-5 are code cleanup and simplification.
> > > > > > Patch 6-15 convert LRU pages pin to the objcg direction.
> > > > >
> > > > > The main concern I have with *just* reparenting LRU pages is that for
> > > > > the long running systems, the root memcg will become a dumping ground.
> > > > > In addition a job running multiple times on a machine will see
> > > > > inconsistent memory usage if it re-accesses the file pages which were
> > > > > reparented to the root memcg.
> > > >
> > > > I agree, but reparenting also doesn't mix well with memory protections
> > > > (e.g. memory.min/low).
> > > >
> > > > Imagine the following configuration:
> > > > workload.slice
> > > > - workload_gen_1.service   memory.min = 30G
> > > > - workload_gen_2.service   memory.min = 30G
> > > > - workload_gen_3.service   memory.min = 30G
> > > >   ...
> > > >
> > > > A parent cgroup and several generations of the child cgroup, each protected by memory.min.
> > > > Once the memory gets reparented, it's not protected anymore.
> > >
> > > That doesn't sound right.
> > >
> > > A deleted cgroup today exerts no control over its abandoned
> > > pages. css_reset() will blow out any control settings.
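
For context, css_reset() on the memory controller does roughly the following (a
simplified sketch of mem_cgroup_css_reset() in mm/memcontrol.c; the exact fields
vary by kernel version), so a deleted cgroup indeed keeps no limits or protection:

```c
/* Simplified sketch of mem_cgroup_css_reset(); details vary by version. */
static void mem_cgroup_css_reset(struct cgroup_subsys_state *css)
{
	struct mem_cgroup *memcg = mem_cgroup_from_css(css);

	/* Lift all limits... */
	page_counter_set_max(&memcg->memory, PAGE_COUNTER_MAX);
	page_counter_set_max(&memcg->swap, PAGE_COUNTER_MAX);
	page_counter_set_high(&memcg->memory, PAGE_COUNTER_MAX);

	/* ...and drop memory.min / memory.low protection. */
	page_counter_set_min(&memcg->memory, 0);
	page_counter_set_low(&memcg->memory, 0);
}
```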
> >
> > I know. Currently it works in the following way: once cgroup gen_1 is deleted,
> > its memory is not protected anymore, so eventually it gets evicted and
> > re-faulted as gen_2 (or gen_N) memory. Muchun's patchset doesn't change this,
> > of course. But long-term we likely want to re-charge such pages to new cgroups
> > and avoid unnecessary evictions and re-faults. Switching to obj_cgroups doesn't
> > help and will likely complicate this change. So I'm a bit skeptical here.
>
> We should be careful with the long-term plans.
>
> The zombie issue is a pretty urgent concern that has caused several
> production emergencies now. It needs a fix sooner rather than later.

Thank you very much for clarifying the problem for me. I do agree
with you: this issue should be fixed ASAP, and using objcg is a good
choice. Dying objcgs should not be a problem, because an objcg is
tiny compared to a memcg.
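
For reference, here is a sketch of struct obj_cgroup as introduced by the slab
controller rework (from include/linux/memcontrol.h; the exact layout may differ
between kernel versions). It is only a handful of words, whereas struct
mem_cgroup embeds page counters, per-node info, statistics, and much more:

```c
/*
 * Sketch of struct obj_cgroup (include/linux/memcontrol.h); field
 * layout may vary between kernel versions.
 */
struct obj_cgroup {
	struct percpu_ref refcnt;	/* dropped when the last charged object goes away */
	struct mem_cgroup *memcg;	/* redirected to the parent on reparenting */
	atomic_t nr_charged_bytes;	/* remainder of byte-sized (kmem) accounting */
	union {
		struct list_head list;	/* linked into memcg->objcg_list */
		struct rcu_head rcu;
	};
};
```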

Thanks.

>
> The long-term plans of how to handle shared/reused data better will
> require some time to work out. There are MANY open questions around
> recharging to arbitrary foreign cgroup users. Like how to identify
> accesses after the page's cgroup has been deleted: Do we need to
> annotate every page cache lookup? Do we need to inject minor faults to
> recharge mmapped pages? We can't wait for this discussion to complete.
>
> I also think the objcg is helping with that direction rather than
> getting in the way because:
>
> - The old charge moving code we have for LRU pages isn't reusable
>   anyway. It relies on optimistic locking to work, and may leave
>   memory behind in arbitrary and unpredictable ways. After a few
>   cycles, objects tend to be spread all over the place.
>
>   The objcg provides a new synchronization scheme that will always
>   work because the readside (page/object to memcg lookup) needs to be
>   prepared for the memcg to change and/or die at any time.
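
A minimal sketch of that read side, assuming the existing obj_cgroup_memcg()
helper plus a page_objcg() lookup of the kind this series adds (the names and
the caller are illustrative, not the series' actual code): the objcg->memcg
pointer is only stable under RCU, and the caller must accept that it may
already point at an ancestor:

```c
#include <linux/memcontrol.h>
#include <linux/rcupdate.h>

/*
 * Hedged sketch of the read-side discipline. page_objcg() stands in
 * for the lookup this series introduces; obj_cgroup_memcg() is the
 * existing helper that reads objcg->memcg under RCU.
 */
static void account_page_event(struct page *page, int idx, int val)
{
	struct obj_cgroup *objcg;
	struct mem_cgroup *memcg;

	rcu_read_lock();
	objcg = page_objcg(page);		/* placeholder for the new lookup */
	memcg = obj_cgroup_memcg(objcg);	/* may already be a reparented-to ancestor */
	mod_memcg_state(memcg, idx, val);
	rcu_read_unlock();
}
```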
>
> - There isn't much point in recharging only some of the abandoned
>   memory. We've tried per-allocation class reparenting and it didn't
>   work out too well. Newly accounted allocations crop up faster than
>   we can conjure up tailor-made reparenting schemes for them.
>
>   The objcg provides a generic reference and reassignment scheme that
>   can be used by all tracked objects.
>
>   Importantly, once we have a central thing such as LRU pages converted,
>   we can require all new allocation tracking to use objcg from the start.
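
Concretely, the reassignment piece already exists for slab objects; it is
essentially memcg_reparent_objcgs(), sketched below from mm/memcontrol.c
(simplified, and details differ across kernel versions). Any object type that
pins an objcg gets moved to the parent the same way:

```c
/*
 * Simplified sketch of memcg_reparent_objcgs() (mm/memcontrol.c).
 * Any object type that pins an objcg is reassigned the same way.
 */
static void memcg_reparent_objcgs(struct mem_cgroup *memcg,
				  struct mem_cgroup *parent)
{
	struct obj_cgroup *objcg, *iter;

	/* Detach the current objcg so no new charges land on this memcg. */
	objcg = rcu_replace_pointer(memcg->objcg, NULL, true);

	spin_lock_irq(&css_set_lock);

	/* Add the active objcg to the list of reparented ones... */
	list_add(&objcg->list, &memcg->objcg_list);

	/* ...redirect them all to the parent... */
	list_for_each_entry(iter, &memcg->objcg_list, list)
		WRITE_ONCE(iter->memcg, parent);

	/* ...and hand the whole list over to the parent memcg. */
	list_splice(&memcg->objcg_list, &parent->objcg_list);

	spin_unlock_irq(&css_set_lock);

	percpu_ref_kill(&objcg->refcnt);
}
```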
>
> > Also, in my experience the pagecache is not the main/worst memcg reference
> > holder (writeback structures are way worse). Pages are relatively large
> > (in comparison to some slab objects), and it's rare for a single page to be
> > the only thing pinning a separate memcg.
>
> I've seen that exact thing cause zombies to pile up: one or two pages
> in the old group, pinned by the next instance of a job. If the job has
> a sufficiently large working set, this can pin a LOT of dead
> cgroups. Is it the biggest or most problematic source of buildups?
> Maybe not. But there is definitely cause to fix it.
>
> LRU pages are also the most difficult to convert because of their rich
> interactions. It's a good test of the API. If it works for those
> pages, converting everybody else will be much simpler.
>
> And again, as the core memory consumer it sets the tone of how memcg
> rmapping is supposed to work for new and existing object types. This
> helps align ongoing development.
>
> > And switching to obj_cgroup doesn't completely eliminate the
> > problem: we just switch from accumulating larger mem_cgroups to
> > accumulating smaller obj_cgroups.
>
> In your own words, the discrepancy between tiny objects pinning large
> memcgs is a problem. objcgs are smaller than most objects, so it's not
> much different than if an object were simply a few bytes bigger in size.
> A memcg on the other hand is vastly bigger than most objects. It's
> also composed of many allocations and so causes more fragmentation.
>
> Another issue is that memcgs with abandoned objects need to be visited
> by the reclaimer on every single reclaim walk, a hot path. The list of
> objcgs on the other hand is only iterated when the cgroup is deleted,
> which is not a fast path. It's also rare that parents with many dead
> children are deleted at all (system.slice, workload.slice etc.)
>
> So no, I would say for all intents and purposes, it fixes all the
> problems we're having with zombie memcgs.
>
> > With all this said, I'm not necessarily opposing the patchset, but it's
> > necessary to discuss how it fits into the long-term picture.
> > E.g. if we're going to use the obj_cgroup API for page-sized objects, shouldn't
> > we split it back into the reparenting and byte-sized accounting parts,
> > as I initially suggested? And shouldn't we move the reparenting part to
> > the cgroup core level, so we could use it for other controllers
> > (e.g. to fix the writeback problem)?
>
> Yes, I do think we want to generalize it. But I wouldn't say it's a
> requirement for these patches, either:
>
> - The byte-sized accounting part is one atomic_t. That's 4 bytes
>   pinned unnecessarily - compared to an entire struct memcg right now
>   which includes memory accounting, swap accounting, byte accounting,
>   and a whole lot of other things likely not used by the stale object.
>
> - The cgroup generalization is a good idea too, but that doesn't
>   really change the callsites either. Unless you were thinking of
>   renaming, but objcg seems like a good, generic fit for a name to
>   describe the linkage between objects and a cgroup.
>
>   The memcg member will need to change into something generic (a
>   css_set type mapping), but that can likely be hidden behind
>   page_memcg(), objcg_memcg() and similar helpers.
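
An illustrative sketch of that direction, assuming a generalized objcg whose
memcg member becomes a css pointer (none of these names or layouts exist today
except mem_cgroup_from_css(); treat everything here as hypothetical):

```c
/*
 * Hypothetical sketch only: a generalized obj_cgroup pointing at a
 * cgroup_subsys_state instead of a mem_cgroup, with the memcg view
 * recovered by a helper. Only mem_cgroup_from_css() exists today.
 */
struct obj_cgroup {
	struct percpu_ref refcnt;
	struct cgroup_subsys_state *css;	/* generic: replaces struct mem_cgroup *memcg */
	atomic_t nr_charged_bytes;
	struct list_head list;
};

static inline struct mem_cgroup *objcg_memcg(struct obj_cgroup *objcg)
{
	/* memcg-flavored view on top of the generic mapping */
	return mem_cgroup_from_css(READ_ONCE(objcg->css));
}
```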
>
> Both of these aspects can be improved incrementally.



Thread overview: 36+ messages
2021-03-30 10:15 [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages Muchun Song
2021-03-30 10:15 ` [RFC PATCH 01/15] mm: memcontrol: fix page charging in page replacement Muchun Song
2021-04-02 15:07   ` Johannes Weiner
2021-03-30 10:15 ` [RFC PATCH 02/15] mm: memcontrol: bail out early when !mm in get_mem_cgroup_from_mm Muchun Song
2021-04-02 15:08   ` Johannes Weiner
2021-03-30 10:15 ` [RFC PATCH 03/15] mm: memcontrol: remove the pgdata parameter of mem_cgroup_page_lruvec Muchun Song
2021-04-02 15:22   ` Johannes Weiner
2021-03-30 10:15 ` [RFC PATCH 04/15] mm: memcontrol: use lruvec_memcg in lruvec_holds_page_lru_lock Muchun Song
2021-04-02 16:21   ` Johannes Weiner
2021-04-03 12:37     ` [External] " Muchun Song
2021-03-30 10:15 ` [RFC PATCH 05/15] mm: memcontrol: simplify the logic of objcg pinning memcg Muchun Song
2021-03-30 10:15 ` [RFC PATCH 06/15] mm: memcontrol: move the objcg infrastructure out of CONFIG_MEMCG_KMEM Muchun Song
2021-03-30 10:15 ` [RFC PATCH 07/15] mm: memcontrol: introduce compact_lock_page_lruvec_irqsave Muchun Song
2021-03-30 10:15 ` [RFC PATCH 08/15] mm: memcontrol: make lruvec lock safe when the LRU pages reparented Muchun Song
2021-03-30 10:15 ` [RFC PATCH 09/15] mm: thp: introduce lock/unlock_split_queue{_irqsave}() Muchun Song
2021-03-30 10:15 ` [RFC PATCH 10/15] mm: thp: make deferred split queue lock safe when the LRU pages reparented Muchun Song
2021-03-30 10:15 ` [RFC PATCH 11/15] mm: memcontrol: make all the callers of page_memcg() safe Muchun Song
2021-03-30 10:15 ` [RFC PATCH 12/15] mm: memcontrol: introduce memcg_reparent_ops Muchun Song
2021-03-30 10:15 ` [RFC PATCH 13/15] mm: memcontrol: use obj_cgroup APIs to charge the LRU pages Muchun Song
2021-03-30 10:15 ` [RFC PATCH 14/15] mm: memcontrol: rename {un}lock_page_memcg() to {un}lock_page_objcg() Muchun Song
2021-03-30 10:15 ` [RFC PATCH 15/15] mm: lru: add VM_BUG_ON_PAGE to lru maintenance function Muchun Song
2021-03-30 18:34 ` [RFC PATCH 00/15] Use obj_cgroup APIs to charge the LRU pages Shakeel Butt
2021-03-30 18:58   ` Roman Gushchin
2021-03-30 21:30     ` Johannes Weiner
2021-03-30 22:05       ` Roman Gushchin
2021-03-31 15:17         ` Johannes Weiner
2021-04-01 16:07           ` Muchun Song [this message]
2021-04-01 17:15             ` [External] " Shakeel Butt
2021-04-02  3:14               ` Muchun Song
2021-04-02 17:30               ` Johannes Weiner
2021-04-01 21:34             ` Roman Gushchin
2021-04-01 22:55           ` Yang Shi
2021-04-02  4:03             ` [External] " Muchun Song
2021-03-30 21:10   ` Johannes Weiner
2021-03-31  0:28     ` Shakeel Butt
2021-03-31  3:28     ` Balbir Singh
