linux-kernel.vger.kernel.org archive mirror
From: Chris Down <chris@chrisdown.name>
To: Michal Hocko <mhocko@kernel.org>
Cc: Johannes Weiner <hannes@cmpxchg.org>, Tejun Heo <tj@kernel.org>,
	Roman Gushchin <guro@fb.com>,
	linux-kernel@vger.kernel.org, cgroups@vger.kernel.org,
	linux-mm@kvack.org, kernel-team@fb.com,
	Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH REBASED] mm: Throttle allocators when failing reclaim over memory.high
Date: Wed, 10 Apr 2019 16:34:49 +0100
Message-ID: <20190410153449.GA14915@chrisdown.name>
In-Reply-To: <20190410153307.GA11122@chrisdown.name>

Hey Michal,

Just to come back to your last e-mail about how this interacts with OOM.

Michal Hocko writes:
> I am not really opposed to the throttling in the absence of a reclaimable
> memory. We do that for the regular allocation paths already
> (should_reclaim_retry). A swapless system with anon memory is very likely to
> oom too quickly and this sounds like a real problem. But I do not think that
> we should throttle the allocation to freeze it completely. We should
> eventually OOM. And that was my question about essentially. How much we
> can/should throttle to give a high limit events consumer enough time to
> intervene. I am sorry to still not have time to study the patch more closely
> but this should be explained in the changelog. Are we talking about
> seconds/minutes or simply freeze each allocator to death?

Per-allocation, the maximum is 2 seconds (MEMCG_MAX_HIGH_DELAY_JIFFIES), so we 
don't freeze things to death -- they can recover if they are amenable to it.  
The idea here is that primarily you handle it from userspace, just like
memory.oom_control in v1 (as mentioned in the commit message). As a last
resort, the kernel will still OOM if our userspace daemon has kicked the
bucket or is otherwise ineffective.
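
To make "throttle but don't freeze" a bit more concrete, here's a minimal
sketch of the clamp. Only MEMCG_MAX_HIGH_DELAY_JIFFIES and the 2 second cap
come from this thread; the function name and surrounding code are
illustrative assumptions, not the patch itself:

    #include <linux/jiffies.h>  /* HZ */
    #include <linux/kernel.h>   /* min() */
    #include <linux/sched.h>    /* schedule_timeout_killable() */

    /* Cap any single allocator sleep at 2 seconds' worth of jiffies. */
    #define MEMCG_MAX_HIGH_DELAY_JIFFIES (2UL * HZ)

    /* Hypothetical helper: sleep in proportion to the memory.high overage. */
    static void throttle_current_over_high(unsigned long penalty_jiffies)
    {
            /* Never freeze an allocator to death; clamp the delay. */
            penalty_jiffies = min(penalty_jiffies, MEMCG_MAX_HIGH_DELAY_JIFFIES);
            if (!penalty_jiffies)
                    return;

            /* Killable sleep, so the task can still be OOM-killed or signalled. */
            schedule_timeout_killable(penalty_jiffies);
    }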

If you're setting memory.high and memory.max together, then setting memory.high
always has to come with a) tolerance of heavy throttling by your application,
and b) userspace intervention when the resulting memory pressure gets high.
This patch doesn't really change those semantics.
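
For what that looks like from the userspace side, here's a hedged sketch
(the cgroup path and limit values are made up; memory.high, memory.max,
memory.events and memory.pressure are the real cgroup v2 interface files):

    #include <stdio.h>

    /* Write a value to a cgroup v2 control file; returns 0 on success. */
    static int cg_write(const char *cgroup, const char *file, const char *val)
    {
            char path[256];
            FILE *f;

            snprintf(path, sizeof(path), "%s/%s", cgroup, file);
            f = fopen(path, "w");
            if (!f)
                    return -1;
            fprintf(f, "%s\n", val);
            return fclose(f) ? -1 : 0;
    }

    int main(void)
    {
            /* Hypothetical cgroup; limit values are in bytes. */
            const char *cg = "/sys/fs/cgroup/workload";

            cg_write(cg, "memory.high", "8589934592");   /*  8 GiB: throttle point   */
            cg_write(cg, "memory.max",  "10737418240");  /* 10 GiB: hard cap, kernel OOM */

            /*
             * A daemon would then watch memory.events and memory.pressure in
             * the same directory and intervene before memory.max is hit.
             */
            return 0;
    }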

Thread overview: 11+ messages
2019-02-01  1:13 [PATCH] mm: Throttle allocators when failing reclaim over memory.high Chris Down
2019-02-01  7:17 ` Michal Hocko
2019-02-01 16:12   ` Johannes Weiner
2019-02-28  9:52     ` Michal Hocko
2019-02-01 19:16   ` Chris Down
2019-04-10 15:33     ` [PATCH REBASED] " Chris Down
2019-04-10 15:34       ` Chris Down [this message]
2019-05-01 18:41         ` [PATCH v3] " Chris Down
2019-05-07  8:44           ` Michal Hocko
2019-07-23 18:07           ` [PATCH v4] " Chris Down
2019-07-23 20:50             ` Johannes Weiner
