From: Michal Hocko <mhocko@kernel.org>
To: David Rientjes <rientjes@google.com>
Cc: Yafang Shao <laoar.shao@gmail.com>,
	Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>,
	Andrew Morton <akpm@linux-foundation.org>,
	Johannes Weiner <hannes@cmpxchg.org>,
	Linux MM <linux-mm@kvack.org>
Subject: Re: [PATCH v2] memcg, oom: check memcg margin for parallel oom
Date: Thu, 16 Jul 2020 09:12:40 +0200
Message-ID: <20200716071240.GD31089@dhcp22.suse.cz>
In-Reply-To: <alpine.DEB.2.23.453.2007152349510.2921049@chino.kir.corp.google.com>

On Wed 15-07-20 23:56:11, David Rientjes wrote:
> On Thu, 16 Jul 2020, Michal Hocko wrote:
> 
> > > I don't think moving the mem_cgroup_margin() check to out_of_memory() 
> > > right before printing the oom info and killing the process is a very 
> > > invasive patch.  Any strong preference against doing it that way?  I think 
> > > moving the check as late as possible to save a process from being killed 
> > > when racing with an exiter or killed process (including perhaps current) 
> > > has a pretty clear motivation.
> > 
> > We have been through this discussion several times in the past, IIRC.
> > The conclusion has been that the allocator (the charging path for
> > the memcg) is the one to define the OOM situation. This is inherently
> > racy as long as we are not synchronizing the oom with the rest of the
> > world, which I believe we agree we do not want to do. There are a few
> > exceptions that bail out early from the oom under certain conditions,
> > and the trend has been to remove some of the existing ones rather
> > than add new ones, because they had subtle side effects and were
> > prone to lockups.
> > 
> > As much as it might sound attractive to move mem_cgroup_margin(),
> > i.e. the last allocation attempt, closer to the actual oom killing,
> > I haven't seen any convincing data that such a change would make a
> > big difference. select_bad_process() is not a free operation, as it
> > scales with the number of tasks in the oom domain, but it shouldn't
> > be super expensive. The oom reporting is by far the most expensive
> > part of the operation.
> > 
> > That being said, really convincing data should be presented in order
> > to make such a change. I do not think we want to do it just in case.
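> > 
> > For context, the v2 patch under discussion does this last check at
> > the charge path, right after taking the oom_lock and before
> > out_of_memory() is entered at all -- along these lines (an
> > abbreviated sketch of mem_cgroup_out_of_memory() in mm/memcontrol.c,
> > not the verbatim hunk):
> > 
> > 	if (mutex_lock_killable(&oom_lock))
> > 		return true;
> > 	/*
> > 	 * A parallel oom kill may have released memory while we were
> > 	 * waiting for the lock: re-check the margin before killing.
> > 	 */
> > 	if (mem_cgroup_margin(memcg) >= (1 << order))
> > 		goto unlock;
> > 	ret = should_force_charge() || out_of_memory(&oc);
> > unlock:
> > 	mutex_unlock(&oom_lock);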
> 
> It's not possible to present data, because we've had such a check in
> our fleet for years, so I can't say that it has prevented X unnecessary
> oom kills compared to doing the check prior to calling out_of_memory().
> I'm hoping that can be understood.
> 
> Since Yafang is facing the same issue, since there is no significant
> downside to doing the mem_cgroup_margin() check prior to
> oom_kill_process() (or to checking task_will_free_mem(current)), and
> since it's acknowledged that it *can* prevent unnecessary oom killing,
> which is a very good thing, I'd like to understand why there is such
> resistance to it.
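> 
> Roughly, what I am proposing is along these lines (a sketch only, not
> a literal patch -- mem_cgroup_margin() is currently static to
> mm/memcontrol.c and would need to be made visible to the oom killer):
> 
> 	/* out_of_memory(): after select_bad_process() has picked a
> 	 * victim, but before the oom report is printed */
> 	if (oc->chosen && oc->chosen != (void *)-1UL) {
> 		/* a racing exit/kill (possibly of current) or a racing
> 		 * uncharge may have already released enough memory */
> 		if (task_will_free_mem(current))
> 			return true;
> 		if (is_memcg_oom(oc) &&
> 		    mem_cgroup_margin(oc->memcg) >= (1 << oc->order))
> 			return true;
> 		oom_kill_process(oc, !is_memcg_oom(oc) ?
> 				 "Out of memory" :
> 				 "Memory cgroup out of memory");
> 	}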

Because exactly this kind of argument has led to quite a few "should be
fine" heuristics which kicked back: do not kill an exiting task, or
sacrifice a child instead of the victim, just to name a few. All of them
make some sense at first glance, but they can kick back seriously, as
experience has taught us.

Really, I do not see what is so hard to understand: each heuristic,
especially in an area as subtle as oom handling definitely is, needs
data to justify it. "We have been running this for years" is really not
an argument. Sure, arguing that your workload leads to x false positives
and that shifting the check later saves y of them would sound like a
relevant argument to me.

> Killing a user process is a serious matter.

It definitely is, and I believe that nobody is questioning that. The oom
situation is a serious matter in its own right. It says that the system
has failed to balance its memory consumption and that the oom killer is
the _last_ resort action to be taken.

> I would fully agree if the
> margin were only one page: it's still better to kill something off.

And exactly these kinds of details make any heuristic subtle and hard
to maintain.

> But
> when a process has uncharged memory by means induced by a process
> waiting on an oom notification, such as a userspace kill or the
> dropping of caches from your malloc implementation, that uncharge can
> be quite substantial, and oom killing is then unnecessary.

That should happen quite some time before the hard limit is reached, and
we have the means to achieve that. The high limit is there to drive
pro-active reclaim before the OOM happens on the hard limit. If you are
stuck with v1, then disabling the oom killer and shifting the whole
logic into userspace is another variant.
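
For completeness, the v1 variant looks roughly like this from userspace
(an illustrative sketch; the memcg path is made up and error handling
is left out):

	#include <stdint.h>
	#include <stdio.h>
	#include <string.h>
	#include <fcntl.h>
	#include <unistd.h>
	#include <sys/eventfd.h>

	int main(void)
	{
		/* illustrative memcg path -- adjust to your hierarchy */
		int oom_fd = open("/sys/fs/cgroup/memory/job/memory.oom_control",
				  O_RDWR);
		int ctl_fd = open("/sys/fs/cgroup/memory/job/cgroup.event_control",
				  O_WRONLY);
		int efd = eventfd(0, 0);
		char buf[64];
		uint64_t cnt;

		/* disable the kernel oom killer for this memcg */
		write(oom_fd, "1", 1);

		/* register the eventfd for oom notifications */
		snprintf(buf, sizeof(buf), "%d %d", efd, oom_fd);
		write(ctl_fd, buf, strlen(buf));

		/*
		 * Tasks in the memcg now block in the charge path on oom
		 * instead of being killed; this handler decides what to
		 * do: raise the limit, kill something, drop caches, ...
		 */
		read(efd, &cnt, sizeof(cnt));
		fprintf(stderr, "memcg oom event\n");
		return 0;
	}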
 
> I can refresh the patch and send it formally.

-- 
Michal Hocko
SUSE Labs


Thread overview: 28+ messages
2020-07-14 13:57 [PATCH v2] memcg, oom: check memcg margin for parallel oom Yafang Shao
2020-07-14 14:05 ` Michal Hocko
2020-07-14 14:30 ` Chris Down
2020-07-14 18:46 ` David Rientjes
2020-07-15  1:44   ` Yafang Shao
2020-07-15  2:44     ` David Rientjes
2020-07-15  3:10       ` Yafang Shao
2020-07-15  3:18         ` David Rientjes
2020-07-15  3:31           ` Yafang Shao
2020-07-15 17:30             ` David Rientjes
2020-07-16  2:38               ` Yafang Shao
2020-07-16  7:04                 ` David Rientjes
2020-07-16 11:53                   ` Yafang Shao
2020-07-16 12:21                     ` Michal Hocko
2020-07-16 13:09                       ` Tetsuo Handa
2020-07-16 19:53                     ` David Rientjes
2020-07-17  1:35                       ` Yafang Shao
2020-07-17 19:26                         ` David Rientjes
2020-07-18  2:15                           ` Yafang Shao
2020-07-16  5:54               ` Tetsuo Handa
2020-07-16  6:11                 ` Michal Hocko
2020-07-16  7:06                   ` David Rientjes
2020-07-16  6:08               ` Michal Hocko
2020-07-16  6:56                 ` David Rientjes
2020-07-16  7:12                   ` Michal Hocko [this message]
2020-07-16 20:04                     ` David Rientjes
2020-07-28 18:04                   ` Johannes Weiner
2020-07-15  6:56         ` Michal Hocko
