linux-kernel.vger.kernel.org archive mirror
From: Yosry Ahmed <yosryahmed@google.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Yang Shi <shy828301@gmail.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Eric Bergen <ebergen@meta.com>
Subject: Re: [PATCH] mm: vmscan: split khugepaged stats from direct reclaim stats
Date: Fri, 28 Oct 2022 10:41:17 -0700	[thread overview]
Message-ID: <CAJD7tkbBbh+uXe7S=a0E5=FBX4wVa7YDJDLmti370v2sVWVtWw@mail.gmail.com> (raw)
In-Reply-To: <Y1vprODaLJLk0dka@cmpxchg.org>

On Fri, Oct 28, 2022 at 7:39 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
>
> On Thu, Oct 27, 2022 at 01:43:24PM -0700, Yosry Ahmed wrote:
> > On Thu, Oct 27, 2022 at 7:15 AM Johannes Weiner <hannes@cmpxchg.org> wrote:
> > > On Wed, Oct 26, 2022 at 07:41:21PM -0700, Yosry Ahmed wrote:
> > > > My 2c, if we care about direct reclaim as in reclaim that may stall
> > > > user space application allocations, then there are other reclaim
> > > > contexts that may pollute the direct reclaim stats. For instance,
> > > > proactive reclaim, or reclaim done by writing a limit lower than the
> > > > current usage to memory.max or memory.high, as they are not done in
> > > > the context of the application allocating memory.
> > > >
> > > > At Google, we have some internal direct reclaim memcg statistics, and
> > > > the way we handle this is by passing a flag from such contexts to
> > > > try_to_free_mem_cgroup_pages() in the reclaim_options arg. This flag
> > > > is echoed into a struct scan_control bit, which we then use to filter out
> > > > direct reclaim operations that actually cause latencies in user space
> > > > allocations.
> > > >
> > > > Perhaps something similar might be more generic here? I am not sure
> > > > what context khugepaged reclaims memory from, but I think it's not a
> > > > memcg context, so maybe we want to generalize the reclaim_options arg
> > > > to try_to_free_pages() or whatever interface khugepaged uses to free
> > > > memory.
> > >
> > > So at the /proc/vmstat level, I'm not sure it matters much because it
> > > doesn't count any cgroup_reclaim() activity.
> > >
> > > But at the cgroup level, it sure would be nice to split out proactive
> > > reclaim churn. Both in terms of not polluting direct reclaim counts,
> > > but also for *knowing* how much proactive reclaim is doing.
> > >
> > > Do you have separate counters for this?
> >
> > Not yet. Currently we only have the first part, not polluting direct
> > reclaim counts.
> >
> > We basically exclude reclaim coming from memory.reclaim, setting
> > memory.max/memory.limit_in_bytes, memory.high (on write, not hitting
> > the high limit), and memory.force_empty from direct reclaim stats.
> >
> > As for having a separate counter for proactive reclaim, do you think
> > it should be limited to reclaim coming from memory.reclaim (and
> > potentially memory.force_empty), or should it include reclaim coming
> > from limit-setting as well?
>
> A combined counter seems reasonable to me. We *have* used the limit
> knobs to drive proactive reclaim in production in the past, so it's
> not a stretch. And I can't think of a scenario where you'd like them
> to be separate.
>
> I could think of two ways of describing it:
>
> pgscan_user: User-requested reclaim. Could be confusing if we ever
> have an in-kernel proactive reclaim driver - unless that would then go
> to another counter (new or kswapd).
>
> pgscan_ext: Reclaim activity from extraordinary/external
> requests. External as in: outside the allocation context.

I imagine that if the kernel ever does proactive reclaim on its own, we
would want a separate counter for that anyway, to monitor what the
kernel is doing. So pgscan_user sounds nice for now, though I also
like that pgscan_ext explicitly says "this is external to the
allocation context". But we can just go with pgscan_user and document
it properly.

How would khugepaged fit in this story? Seems like it would be part of
pgscan_ext but not pgscan_user. I imagine we also don't want to
pollute proactive reclaim counters with khugepaged reclaim (or other
non-direct reclaim).

Maybe pgscan_user, plus pgscan_kernel/pgscan_indirect for things like khugepaged?
The problem with pgscan_kernel/pgscan_indirect is that if we add a
proactive reclaim kthread in the future, it would technically fit
there, yet we would want a separate counter for it.

I am honestly not sure where to put khugepaged. The reasons I don't
like a dedicated counter for khugepaged are:
- What if other kthreads start doing the same thing as khugepaged? Do
we add one counter per thread?
- What if we deprecate khugepaged (or such threads)? That seems more
likely than deprecating kswapd.

Looks like we want a stat that would group all of this reclaim coming
from non-direct kthreads, but would not include a future proactive
reclaim kthread.


Thread overview: 15+ messages
2022-10-25 17:05 [PATCH] mm: vmscan: split khugepaged stats from direct reclaim stats Johannes Weiner
2022-10-25 17:16 ` Matthew Wilcox
2022-10-25 20:43   ` Johannes Weiner
2022-10-25 19:40 ` Yang Shi
2022-10-25 20:54   ` Johannes Weiner
2022-10-25 21:53     ` Yang Shi
2022-10-26 17:32       ` Johannes Weiner
2022-10-26 20:51         ` Yang Shi
2022-10-27  2:41           ` Yosry Ahmed
2022-10-27 14:15             ` Johannes Weiner
2022-10-27 20:43               ` Yosry Ahmed
2022-10-28 14:39                 ` Johannes Weiner
2022-10-28 17:41                   ` Yosry Ahmed [this message]
2022-10-31 16:00                     ` Johannes Weiner
2022-10-31 16:46                       ` Yosry Ahmed
