From: Roman Gushchin <guro@fb.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Andrew Morton <akpm@linux-foundation.org>, <linux-mm@kvack.org>,
	<kernel-team@fb.com>, <linux-kernel@vger.kernel.org>
Subject: Re: [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
Date: Tue, 17 Mar 2020 11:38:36 -0700	[thread overview]
Message-ID: <20200317183836.GA276471@carbon.DHCP.thefacebook.com> (raw)
In-Reply-To: <20200317075212.GC26018@dhcp22.suse.cz>

On Tue, Mar 17, 2020 at 08:52:12AM +0100, Michal Hocko wrote:
> On Mon 16-03-20 15:35:10, Roman Gushchin wrote:
> > If a task is getting moved out of the OOMing cgroup, it might
> > result in unexpected OOM killings if memory.oom.group is used
> > anywhere in the cgroup tree.
> > 
> > Imagine the following example:
> > 
> >           A (oom.group = 1)
> >          / \
> >   (OOM) B   C
> > 
> > Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
> > selects a task in B as a victim, but someone asynchronously moves
> > the task into C.
> 
> I can see a Reported-by tag here; does that mean that the race really
> happened in real workloads? If so, I would be really curious, mostly
> because moving tasks outside of the oom domain is quite questionable
> without charge migration.

Yes, I've got a number of OOM messages where oom_cgroup != task_cgroup.
The only reasonable explanation is that the task was moved out after
being selected as a victim. In my case it resulted in killing all tasks
in A, and that is what hurt the workload.

> 
> > mem_cgroup_get_oom_group() will iterate over all
> > ancestors of C up to the root cgroup. In theory it should stop
> > at the oom_domain level - the memory cgroup which is OOMing.
> > But because B is not an ancestor of C, that is not what happens.
> > Instead it chooses A (because its oom.group is set) and kills
> > all tasks in A. This behavior is wrong because the OOM happened in B,
> > so there is no reason to kill anything outside of it.
> > 
> > Fix this by checking whether the memory cgroup to which the task belongs
> > is a descendant of the oom_domain. If not, memory.oom.group should
> > be ignored, and the OOM killer should kill only the victim task.
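
(To make the traversal concrete, here is a minimal userspace model of
the walk. All names below are made up for illustration; this only
mimics the mm/memcontrol.c logic and is not the real kernel code.)

	/* Minimal userspace model of the mem_cgroup_get_oom_group() walk. */
	#include <stdbool.h>
	#include <stddef.h>
	#include <stdio.h>

	struct memcg {
		const char *name;
		bool oom_group;		/* memory.oom.group */
		struct memcg *parent;
	};

	/* Walk from the victim's memcg towards the root, remembering the
	 * highest memcg with oom.group set; stop once oom_domain is hit. */
	static struct memcg *get_oom_group(struct memcg *victim_memcg,
					   struct memcg *oom_domain)
	{
		struct memcg *oom_group = NULL;
		struct memcg *memcg;

		for (memcg = victim_memcg; memcg; memcg = memcg->parent) {
			if (memcg->oom_group)
				oom_group = memcg;
			if (memcg == oom_domain)
				break;	/* never taken: victim is outside B */
		}
		return oom_group;
	}

	int main(void)
	{
		struct memcg root = { "root", false, NULL };
		struct memcg a = { "A", true,  &root };	/* A: oom.group = 1 */
		struct memcg b = { "B", false, &a };	/* B: the OOMing memcg */
		struct memcg c = { "C", false, &a };	/* C: victim moved here */

		/* The victim was selected in B but now sits in C: the walk
		 * starting at C never meets B, so it runs up to A. */
		struct memcg *g = get_oom_group(&c, &b);
		printf("oom group: %s\n", g ? g->name : "(none)");	/* "A" */
		return 0;
	}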
> 
> I was about to suggest storing the memcg in oom_evaluate_task, but then
> I realized that this would be more complex, and I am not sure it would
> be so much better after all.
> 
> The thing is that killing the selected task makes a lot of sense,
> because it was the largest consumer, even though it has run away. On
> the other hand, if your B had oom.group = 1, one could expect that any
> OOM killer event in that group would result in tearing down the whole
> group. This is however a gray zone, because we do emit the MEMCG_OOM
> event, but the MEMCG_OOM_KILL event will go to the victim's
> at-the-time memcg. So observer B could think that the OOM was resolved
> without killing, while observer C would see a kill event without an OOM.

I agree. Killing the task outside of the OOMing cgroup is already strange.

Should we somehow lock the OOMing cgroup, so that tasks cannot escape
from it or enter it until the OOM killing is finished?

That seems like a better idea, because it would also make the oom.group
killing less racy: currently a forking app can potentially escape from it.

And then we could put something like

	if (WARN_ON_ONCE(!mem_cgroup_is_descendant(memcg, oom_domain)))
		goto out;

into mem_cgroup_get_oom_group()?
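
(In the userspace model above, that check would amount to something
like the following; memcg_is_descendant() is a made-up helper that only
mirrors what the kernel's mem_cgroup_is_descendant() does:)

	/* True if memcg is oom_domain itself or lies anywhere below it. */
	static bool memcg_is_descendant(struct memcg *memcg,
					struct memcg *root)
	{
		for (; memcg; memcg = memcg->parent)
			if (memcg == root)
				return true;
		return false;
	}

	/* At the top of get_oom_group() in the sketch above: */
	if (!memcg_is_descendant(victim_memcg, oom_domain))
		return NULL;	/* ignore oom.group, kill only the victim */

With that check in place, the walk starting at C bails out before
reaching A, so only the already-selected victim gets killed.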

What do you think?

Thanks!


