From: Suren Baghdasaryan <email@example.com>
To: Michal Hocko <firstname.lastname@example.org>
Cc: Andrew Morton <email@example.com>,
David Rientjes <firstname.lastname@example.org>,
Matthew Wilcox <email@example.com>,
Souptick Joarder <firstname.lastname@example.org>,
Roman Gushchin <email@example.com>,
Johannes Weiner <firstname.lastname@example.org>,
Tetsuo Handa <email@example.com>,
firstname.lastname@example.org, Shakeel Butt <email@example.com>,
Christian Brauner <firstname.lastname@example.org>,
Minchan Kim <email@example.com>,
Tim Murray <firstname.lastname@example.org>,
Daniel Colascione <email@example.com>,
Joel Fernandes <firstname.lastname@example.org>,
Jann Horn <email@example.com>, linux-mm <firstname.lastname@example.org>,
Subject: Re: [RFC 0/2] opportunistic memory reclaim of a killed process
Date: Thu, 11 Apr 2019 09:47:31 -0700 [thread overview]
Message-ID: <CAJuCfpEqCKSHwAmR_TR3FaQzb=jkPH1nvzvkhAG57=Pb09GVrA@mail.gmail.com> (raw)
Thanks for the feedback!
On Thu, Apr 11, 2019 at 3:51 AM Michal Hocko <email@example.com> wrote:
> On Wed 10-04-19 18:43:51, Suren Baghdasaryan wrote:
> > Proposed solution uses existing oom-reaper thread to increase memory
> > reclaim rate of a killed process and to make this rate more deterministic.
> > By no means is the proposed solution considered the best; it was chosen
> > because it was simple to implement and allowed for test data collection.
> > The downside of this solution is that it requires additional “expedite”
> > hint for something which has to be fast in all cases. Would be great to
> > find a way that does not require additional hints.
> I have to say I do not like this much. It is abusing an implementation
> detail of the OOM implementation and makes it an official API.
I agree with you that this particular implementation abuses the oom
internal machinery, and I don't think it is acceptable as is (hence
this is sent as an RFC). I would like to discuss the viability of the
idea of reaping a kill victim's mm asynchronously. If we agree that
this is worth our time, only then would I want to get into more details
on how to implement it. The implementation in this RFC is a convenient
way to illustrate the idea and to collect test data.
> There are some non-trivial assumptions to be fulfilled to use the
> current oom_reaper. First of all, all the processes that share the
> address space have to be killed. How do you want to guarantee/implement
> that with a simple kill to a thread/process group?
I'm not sure I understood this correctly, but if you are asking how we
know that the mm we are reaping is not shared with processes that are
not being killed, then I think your task_will_free_mem() checks for
that. Or have I misunderstood your question?
> > Other possible approaches include:
> > - Implementing a dedicated syscall to perform opportunistic reclaim in the
> > context of the process waiting for the victim’s death. A natural boost
> > bonus occurs if the waiting process has high or RT priority and is not
> > limited by cpuset cgroup in its CPU choices.
> > - Implement a mechanism that would perform opportunistic reclaim if it’s
> > possible unconditionally (similar to checks in task_will_free_mem()).
> > - Implement opportunistic reclaim that uses shrinker interface, PSI or
> > other memory pressure indications as a hint to engage.
> I would question whether we really need this at all? Relying on the exit
> speed sounds like a fundamental design problem of anything that relies
> on it.
Relying on it is wrong, I agree. There are protections like allocation
throttling that we can fall back on to stop memory depletion. However,
having a way to quickly free up resources that are not needed by a
dying process would help us avoid throttling, which hurts user
experience.

I agree that this is an optimization which is beneficial in a specific
case - when we kill to free up resources. However, this is an important
optimization for systems with low memory resources like embedded
systems, phones, etc. The only way to prevent being cornered into
throttling is to increase the free memory margin that the system needs
to maintain (I describe this in my cover letter). And with limited
overall memory, space is at a premium, so we try to decrease that
margin.
I think the other, and arguably even more important, issue than the
speed of memory reclaim is that this speed depends on what the victim
is doing at the time of the kill. This introduces non-determinism in
how fast we can free up resources, and at this point we don't even know
how much safety margin we need.
> Sure, task exit might be slow, but async mm tear down is just a
> mere optimization that is not guaranteed to really speed
> things up. The OOM killer uses it as a guarantee for forward progress
> in a finite time rather than as soon as possible.
> Michal Hocko
> SUSE Labs
Thread overview: 43+ messages
2019-04-11 1:43 [RFC 0/2] opportunistic memory reclaim of a killed process Suren Baghdasaryan
2019-04-11 1:43 ` [RFC 1/2] mm: oom: expose expedite_reclaim to use oom_reaper outside of oom_kill.c Suren Baghdasaryan
2019-04-25 21:12 ` Tetsuo Handa
2019-04-25 21:56 ` Suren Baghdasaryan
2019-04-11 1:43 ` [RFC 2/2] signal: extend pidfd_send_signal() to allow expedited process killing Suren Baghdasaryan
2019-04-11 10:30 ` Christian Brauner
2019-04-11 10:34 ` Christian Brauner
2019-04-11 15:18 ` Suren Baghdasaryan
2019-04-11 15:23 ` Suren Baghdasaryan
2019-04-11 16:25 ` Daniel Colascione
2019-04-11 15:33 ` Matthew Wilcox
2019-04-11 17:05 ` Johannes Weiner
2019-04-11 17:09 ` Suren Baghdasaryan
2019-04-11 17:33 ` Daniel Colascione
2019-04-11 17:36 ` Matthew Wilcox
2019-04-11 17:47 ` Daniel Colascione
2019-04-12 6:49 ` Michal Hocko
2019-04-12 14:15 ` Suren Baghdasaryan
2019-04-12 14:20 ` Daniel Colascione
2019-04-12 21:03 ` Matthew Wilcox
2019-04-11 17:52 ` Suren Baghdasaryan
2019-04-11 21:45 ` Roman Gushchin
2019-04-11 21:59 ` Suren Baghdasaryan
2019-04-12 6:53 ` Michal Hocko
2019-04-12 14:10 ` Suren Baghdasaryan
2019-04-12 14:14 ` Daniel Colascione
2019-04-12 15:30 ` Daniel Colascione
2019-04-25 16:09 ` Suren Baghdasaryan
2019-04-11 10:51 ` [RFC 0/2] opportunistic memory reclaim of a killed process Michal Hocko
2019-04-11 16:18 ` Joel Fernandes
2019-04-11 18:12 ` Michal Hocko
2019-04-11 19:14 ` Joel Fernandes
2019-04-11 20:11 ` Michal Hocko
2019-04-11 21:11 ` Joel Fernandes
2019-04-11 16:20 ` Sandeep Patil
2019-04-11 16:47 ` Suren Baghdasaryan [this message]
2019-04-11 18:19 ` Michal Hocko
2019-04-11 19:56 ` Suren Baghdasaryan
2019-04-11 20:17 ` Michal Hocko
2019-04-11 17:19 ` Johannes Weiner
2019-04-11 11:51 ` [Lsf-pc] " Rik van Riel
2019-04-11 12:16 ` Michal Hocko
2019-04-11 16:54 ` Suren Baghdasaryan