linux-mm.kvack.org archive mirror
From: Michal Hocko <mhocko@kernel.org>
To: Chris Down <chris@chrisdown.name>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>, Tejun Heo <tj@kernel.org>,
	linux-mm@kvack.org, cgroups@vger.kernel.org,
	linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH] mm, memcg: reclaim more aggressively before high allocator throttling
Date: Thu, 21 May 2020 15:28:26 +0200	[thread overview]
Message-ID: <20200521132826.GS6462@dhcp22.suse.cz> (raw)
In-Reply-To: <20200521130530.GE990580@chrisdown.name>

On Thu 21-05-20 14:05:30, Chris Down wrote:
> Chris Down writes:
> > > I believe I have asked in another email in this thread. Could you
> > > explain why enforcing the requested target (memcg_nr_pages_over_high)
> > > is insufficient for the problem you are dealing with? That would make
> > > sense to me for large targets, while keeping a relatively reasonable
> > > semantic for the throttling - i.e. proportional to the memory demand
> > > rather than to the excess.
> > 
> > memcg_nr_pages_over_high is related to the charge size. As such, if
> > you're way over memory.high as a result of transient reclaim failures,
> > but the majority of your charges are small, it's going to be hard to
> > make meaningful progress:
> > 
> > 1. Most nr_pages will be MEMCG_CHARGE_BATCH, which is not enough to help;
> > 2. Large allocations will only get a single reclaim attempt to succeed.
> > 
> > As such, in many cases we're either doomed to successfully reclaim a
> > paltry amount of pages, or fail to reclaim a lot of pages. Asking
> > try_to_free_pages() to deal with those huge allocations is generally not
> > reasonable, regardless of the specifics of why it doesn't work in this
> > case.
> 
> Oh, I somehow elided the "enforcing" part of your proposal. Still, even
> if large allocations are reclaimed fully, there's no guarantee that we
> will end up back below memory.high, because even a single other large
> allocation which fails to reclaim can knock us out of whack again.

Yeah, there is no guarantee, and that is fine, because memory.high is
not about guarantees. It is about best effort: slowing down the
allocation pace so that userspace has time to do something about the
situation.

That being said, I would be really curious how enforcing the
memcg_nr_pages_over_high target works in the setups where you see the
problem. If that doesn't work for some reason and the reclaim should be
more pro-active, then I would suggest scaling it via
memcg_nr_pages_over_high rather than essentially keeping it around and
ignoring it. Preserving at least some form of fairness and predictable
behavior is important IMHO, but if there is no way to achieve that, then
there should be a very good explanation why.

I hope it is clearer now what our thinking is. I will be FTO for the
upcoming days, trying to get some rest from email, so my response time
will be longer. I will be back on Thursday.
-- 
Michal Hocko
SUSE Labs


Thread overview: 40+ messages
2020-05-20 14:37 [PATCH] mm, memcg: reclaim more aggressively before high allocator throttling Chris Down
2020-05-20 16:07 ` Michal Hocko
2020-05-20 16:51   ` Johannes Weiner
2020-05-20 17:04     ` Michal Hocko
2020-05-20 17:51       ` Johannes Weiner
2020-05-21  7:32         ` Michal Hocko
2020-05-21 13:51           ` Johannes Weiner
2020-05-21 14:22             ` Johannes Weiner
2020-05-21 14:35             ` Michal Hocko
2020-05-21 15:02               ` Chris Down
2020-05-21 16:38               ` Johannes Weiner
2020-05-21 17:37                 ` Michal Hocko
2020-05-21 18:45                   ` Johannes Weiner
2020-05-28 16:31                     ` Michal Hocko
2020-05-28 16:48                       ` Chris Down
2020-05-29  7:31                         ` Michal Hocko
2020-05-29 10:08                           ` Chris Down
2020-05-29 10:14                             ` Michal Hocko
2020-05-28 20:11                       ` Johannes Weiner
2020-05-20 20:26   ` Chris Down
2020-05-21  7:19     ` Michal Hocko
2020-05-21 11:27       ` Chris Down
2020-05-21 12:04         ` Michal Hocko
2020-05-21 12:23           ` Chris Down
2020-05-21 12:24             ` Chris Down
2020-05-21 12:37             ` Michal Hocko
2020-05-21 12:57               ` Chris Down
2020-05-21 13:05                 ` Chris Down
2020-05-21 13:28                   ` Michal Hocko [this message]
2020-05-21 13:21                 ` Michal Hocko
2020-05-21 13:41                   ` Chris Down
2020-05-21 13:58                     ` Michal Hocko
2020-05-21 14:22                       ` Chris Down
2020-05-21 12:28         ` Michal Hocko
2020-05-28 18:02 ` Shakeel Butt
2020-05-28 19:48   ` Chris Down
2020-05-28 20:29     ` Johannes Weiner
2020-05-28 21:02       ` Shakeel Butt
2020-05-28 21:14       ` Chris Down
2020-05-29  7:25       ` Michal Hocko
