From: Michal Hocko <mhocko@kernel.org>
To: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: David Rientjes <rientjes@google.com>,
Yafang Shao <laoar.shao@gmail.com>,
Andrew Morton <akpm@linux-foundation.org>,
Johannes Weiner <hannes@cmpxchg.org>,
Linux MM <linux-mm@kvack.org>
Subject: Re: [PATCH v2] memcg, oom: check memcg margin for parallel oom
Date: Thu, 16 Jul 2020 08:11:56 +0200
Message-ID: <20200716061156.GB31089@dhcp22.suse.cz>
In-Reply-To: <b74ac098-6cb5-8770-d6df-1bbb18332c4c@i-love.sakura.ne.jp>
On Thu 16-07-20 14:54:01, Tetsuo Handa wrote:
> On 2020/07/16 2:30, David Rientjes wrote:
> > But regardless of whether we present previous data to the user in the
> > kernel log or not, we've determined that oom killing a process is a
> > serious matter and we go to great lengths to avoid having to do it.
> > For us, that means waiting until the "point of no return" to either go
> > ahead with oom killing a process or aborting and retrying the charge.
> >
> > I don't think moving the mem_cgroup_margin() check to out_of_memory()
> > right before printing the oom info and killing the process is a very
> > invasive patch. Any strong preference against doing it that way? I think
> > moving the check as late as possible to save a process from being killed
> > when racing with an exiter or killed process (including perhaps current)
> > has a pretty clear motivation.
> >
>
> How about ignoring MMF_OOM_SKIP for once? I think this has almost the
> same effect as moving the mem_cgroup_margin() check to out_of_memory()
> right before printing the oom info and killing the process.
How would that help with races when a task is exiting while the oom
killer selects a victim? We are not talking about races with the
oom_reaper IIUC. Btw. if races with the oom_reaper are a concern, then I
would much rather delay the wake up than complicate the existing
protocol even further.
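
For reference, the "recheck right before the kill" that David describes
would sit roughly like the sketch below in out_of_memory(). This is only
an illustration, not the patch under discussion: mem_cgroup_margin() is
currently static in mm/memcontrol.c, so it would have to be made visible
to mm/oom_kill.c first, and the surrounding code is abbreviated.

	select_bad_process(oc);
	if (oc->chosen && oc->chosen != (void *)-1UL) {
		/*
		 * Last moment recheck: a task exiting (or being killed)
		 * in parallel may have uncharged enough memory already,
		 * which would make this kill needless.
		 */
		if (is_memcg_oom(oc) &&
		    mem_cgroup_margin(oc->memcg) >= (1 << oc->order)) {
			/* drop the reference taken by select_bad_process() */
			put_task_struct(oc->chosen);
			oc->chosen = NULL;
			return true;	/* report success so the charge is retried */
		}
		oom_kill_process(oc, "Memory cgroup out of memory");
	}
	return !!oc->chosen;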
--
Michal Hocko
SUSE Labs