From: Roman Gushchin <guro@fb.com>
To: Johannes Weiner <hannes@cmpxchg.org>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Tejun Heo <tj@kernel.org>, Michal Hocko <mhocko@suse.com>,
	<linux-mm@kvack.org>, <cgroups@vger.kernel.org>,
	<linux-kernel@vger.kernel.org>, <kernel-team@fb.com>
Subject: Re: [PATCH] mm: memcontrol: prevent starvation when writing memory.high
Date: Tue, 12 Jan 2021 12:12:37 -0800
Message-ID: <20210112201237.GB99586@carbon.dhcp.thefacebook.com>
In-Reply-To: <X/38ZwyOE96SAfa9@cmpxchg.org>

On Tue, Jan 12, 2021 at 02:45:43PM -0500, Johannes Weiner wrote:
> On Tue, Jan 12, 2021 at 09:03:22AM -0800, Roman Gushchin wrote:
> > On Tue, Jan 12, 2021 at 11:30:11AM -0500, Johannes Weiner wrote:
> > > When a value is written to a cgroup's memory.high control file, the
> > > write() context first tries to reclaim the cgroup to size before
> > > putting the limit in place for the workload. Concurrent charges from
> > > the workload can keep such a write() looping in reclaim indefinitely.
> > > 
> > > In the past, a write to memory.high would first put the limit in place
> > > for the workload, then do targeted reclaim until the new limit has
> > > been met - similar to how we do it for memory.max. This wasn't prone
> > > to the described starvation issue. However, this sequence could cause
> > > excessive latencies in the workload, when allocating threads could be
> > > put into long penalty sleeps on the sudden memory.high overage created
> > > by the write(), before that had a chance to work it off.
> > > 
> > > Now that memory_high_write() performs reclaim before enforcing the new
> > > limit, reflect that the cgroup may well fail to converge due to
> > > concurrent workload activity. Bail out of the loop after a few tries.
> > > 
> > > Fixes: 536d3bf261a2 ("mm: memcontrol: avoid workload stalls when lowering memory.high")
> > > Cc: <stable@vger.kernel.org> # 5.8+
> > > Reported-by: Tejun Heo <tj@kernel.org>
> > > Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
> > > ---
> > >  mm/memcontrol.c | 7 +++----
> > >  1 file changed, 3 insertions(+), 4 deletions(-)
> > > 
> > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> > > index 605f671203ef..63a8d47c1cd3 100644
> > > --- a/mm/memcontrol.c
> > > +++ b/mm/memcontrol.c
> > > @@ -6275,7 +6275,6 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
> > >  
> > >  	for (;;) {
> > >  		unsigned long nr_pages = page_counter_read(&memcg->memory);
> > > -		unsigned long reclaimed;
> > >  
> > >  		if (nr_pages <= high)
> > >  			break;
> > > @@ -6289,10 +6288,10 @@ static ssize_t memory_high_write(struct kernfs_open_file *of,
> > >  			continue;
> > >  		}
> > >  
> > > -		reclaimed = try_to_free_mem_cgroup_pages(memcg, nr_pages - high,
> > > -							 GFP_KERNEL, true);
> > > +		try_to_free_mem_cgroup_pages(memcg, nr_pages - high,
> > > +					     GFP_KERNEL, true);
> > >  
> > > -		if (!reclaimed && !nr_retries--)
> > > +		if (!nr_retries--)
> > 
> > Shouldn't it be (!reclaimed || !nr_retries) instead?
> > 
> > If reclaimed == 0, it probably doesn't make much sense to retry.
> 
> We usually allow nr_retries worth of no-progress reclaim cycles to
> make up for intermittent reclaim failures.
> 
> The difference to OOMs/memory.max is that we don't want to loop
> indefinitely on forward progress, but we should allow the usual number
> of no-progress loops.

Re memory.max: trying really hard makes sense because we are OOMing otherwise.
With memory.high such an idea is questionable: if we're not able to reclaim
a single page on the first attempt, it's unlikely that we can reclaim many
by repeating it 16 times.

My concern here is that we can see CPU regressions in some cases when there is
no reclaimable memory. Do you think we can win something by trying harder?
If so, it's worth mentioning in the commit log, because it's really a separate
change from what's described in the log; to some extent it's a move in the
opposite direction.

Thanks!
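
For context, here is a minimal sketch of the reclaim loop in
memory_high_write() with the patch above applied, reconstructed from the
quoted hunks; the lines outside the diff context (the signal check and the
per-cpu stock drain) are paraphrased from mm/memcontrol.c of that period
and may not match the upstream code exactly:

	/* nr_retries starts at a small constant (16 in kernels of that era) */
	for (;;) {
		unsigned long nr_pages = page_counter_read(&memcg->memory);

		if (nr_pages <= high)
			break;			/* converged below the new limit */

		if (signal_pending(current))
			break;			/* let the writer bail out on a signal */

		if (!drained) {
			/* flush per-cpu charge caches once before reclaiming */
			drain_all_stock(memcg);
			drained = true;
			continue;
		}

		/* with the patch, the return value is no longer examined ... */
		try_to_free_mem_cgroup_pages(memcg, nr_pages - high,
					     GFP_KERNEL, true);

		/*
		 * ... so the loop runs at most nr_retries reclaim cycles,
		 * whether or not any progress was made.  The variant
		 * suggested above would instead bail out as soon as a
		 * cycle reclaims nothing.
		 */
		if (!nr_retries--)
			break;
	}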

Thread overview: 19+ messages
2021-01-12 16:30 [PATCH] mm: memcontrol: prevent starvation when writing memory.high Johannes Weiner
2021-01-12 17:03 ` Roman Gushchin
2021-01-12 19:45   ` Johannes Weiner
2021-01-12 20:12     ` Roman Gushchin [this message]
2021-01-12 21:11       ` Johannes Weiner
2021-01-12 21:45         ` Roman Gushchin
2021-01-15 15:34           ` Johannes Weiner
2021-01-12 18:59 ` Shakeel Butt
2021-01-12 19:53   ` Johannes Weiner
2021-01-12 20:28     ` Shakeel Butt
2021-01-13 14:46 ` Michal Hocko
2021-01-15 16:20   ` Johannes Weiner
2021-01-15 17:03     ` Roman Gushchin
2021-01-15 20:55       ` Johannes Weiner
2021-01-15 21:27         ` Roman Gushchin
2021-01-19 16:47           ` Johannes Weiner
2021-01-18 13:12     ` Michal Hocko
2021-01-13 17:25 ` Michal Koutný
2021-01-13 18:06 ` Roman Gushchin
