* [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
From: Roman Gushchin @ 2020-03-16 22:35 UTC
  To: Andrew Morton
  Cc: Michal Hocko, linux-mm, kernel-team, linux-kernel, Roman Gushchin

If a task is getting moved out of the OOMing cgroup, it might
result in unexpected OOM killings if memory.oom.group is used
anywhere in the cgroup tree.

Imagine the following example:

          A (oom.group = 1)
         / \
  (OOM) B   C

Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
selects a task in B as a victim, but someone asynchronously moves
the task into C. mem_cgroup_get_oom_group() will iterate over all
ancestors of C up to the root cgroup. In theory it should stop
at the oom_domain level - the memory cgroup which is OOMing.
But because B is not an ancestor of C, that never happens.
Instead it chooses A (because its oom.group is set), and kills
all tasks in A. This behavior is wrong because the OOM happened in B,
so there is no reason to kill anything outside of it.

Fix this by checking whether the memory cgroup to which the task
belongs is a descendant of the oom_domain. If not, memory.oom.group
should be ignored, and the OOM killer should kill only the victim task.
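
For reference, a simplified sketch (not the exact upstream code; locking,
refcounting and the cgroup v1 checks are omitted) of what
mem_cgroup_get_oom_group() looks like with the check applied:

	struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
						    struct mem_cgroup *oom_domain)
	{
		struct mem_cgroup *oom_group = NULL;
		struct mem_cgroup *memcg = mem_cgroup_from_task(victim);

		if (memcg == root_mem_cgroup)
			return NULL;

		/* the new check: bail out if the victim has left oom_domain */
		if (unlikely(!mem_cgroup_is_descendant(memcg, oom_domain)))
			return NULL;

		/*
		 * Walk from the victim's memcg up to oom_domain, remembering
		 * the highest-level memcg with oom.group set.
		 */
		for (; memcg; memcg = parent_mem_cgroup(memcg)) {
			if (memcg->oom_group)
				oom_group = memcg;
			if (memcg == oom_domain)
				break;
		}
		return oom_group;
	}

mem_cgroup_is_descendant() itself reduces to a cgroup ancestry test,
roughly:

	static inline bool mem_cgroup_is_descendant(struct mem_cgroup *memcg,
						    struct mem_cgroup *root)
	{
		if (root == memcg)
			return true;
		return cgroup_is_descendant(memcg->css.cgroup, root->css.cgroup);
	}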

Signed-off-by: Roman Gushchin <guro@fb.com>
Reported-by: Dan Schatzberg <dschatzberg@fb.com>
---
 mm/memcontrol.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index daa399be4688..d8c4b7aa4e73 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1930,6 +1930,14 @@ struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
 	if (memcg == root_mem_cgroup)
 		goto out;
 
+	/*
+	 * If the victim task has been asynchronously moved to a different
+	 * memory cgroup, we might end up killing tasks outside oom_domain.
+	 * In this case it's better to ignore memory.oom.group.
+	 */
+	if (unlikely(!mem_cgroup_is_descendant(memcg, oom_domain)))
+		goto out;
+
 	/*
 	 * Traverse the memory cgroup hierarchy from the victim task's
 	 * cgroup up to the OOMing cgroup (or root) to find the
-- 
2.24.1



* Re: [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
From: Michal Hocko @ 2020-03-17  7:52 UTC
  To: Roman Gushchin; +Cc: Andrew Morton, linux-mm, kernel-team, linux-kernel

On Mon 16-03-20 15:35:10, Roman Gushchin wrote:
> If a task is getting moved out of the OOMing cgroup, it might
> result in unexpected OOM killings if memory.oom.group is used
> anywhere in the cgroup tree.
> 
> Imagine the following example:
> 
>           A (oom.group = 1)
>          / \
>   (OOM) B   C
> 
> Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
> selects a task in B as a victim, but someone asynchronously moves
> the task into C.

I can see a Reported-by tag here - does that mean the race really
happened in real workloads? If yes, I would be really curious, mostly
because moving tasks outside of the oom domain is quite questionable
without charge migration.

> mem_cgroup_get_oom_group() will iterate over all
> ancestors of C up to the root cgroup. In theory it should stop
> at the oom_domain level - the memory cgroup which is OOMing.
> But because B is not an ancestor of C, that never happens.
> Instead it chooses A (because its oom.group is set), and kills
> all tasks in A. This behavior is wrong because the OOM happened in B,
> so there is no reason to kill anything outside of it.
> 
> Fix this by checking whether the memory cgroup to which the task
> belongs is a descendant of the oom_domain. If not, memory.oom.group
> should be ignored, and the OOM killer should kill only the victim task.

I was about to suggest storing the memcg in oom_evaluate_task, but then
I realized that this would be more complex, and I am not yet sure it
would be so much better after all.

The thing is that killing the selected task makes a lot of sense,
because it was the largest consumer, no matter that it has run away. On
the other hand, if your B had oom.group = 1, then one could expect that
any OOM killer event in that group will result in the whole group being
torn down. This is however a gray zone, because we do emit the
MEMCG_OOM event, but the MEMCG_OOM_KILL event will go to the victim's
at-the-time memcg. So an observer of B could think that the oom was
resolved without killing, while an observer of C would see a kill event
without an oom.
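
(To illustrate - paraphrasing, not the exact upstream call sites - the
two counters are bumped on different paths against different memcgs:

	/* charge path: runs against the memcg that hit its limit (B) */
	memcg_memory_event(mem_over_limit, MEMCG_OOM);

	/* kill path: runs against the victim's memcg at kill time,
	 * i.e. C if the task has already been moved */
	memcg_memory_event_mm(victim->mm, MEMCG_OOM_KILL);

so the two observers can see inconsistent pictures of the same event.)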

That being said, please try to think about the above. I will give it
some more time as well. Killing the selected victim is the obviously
correct thing, and your patch does that, so it is correct in that
regard; but I believe that the group oom behavior in the original oom
domain remains an open question.

Fixes: 3d8b38eb81ca ("mm, oom: introduce memory.oom.group")
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Reported-by: Dan Schatzberg <dschatzberg@fb.com>
> ---
>  mm/memcontrol.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index daa399be4688..d8c4b7aa4e73 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1930,6 +1930,14 @@ struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
>  	if (memcg == root_mem_cgroup)
>  		goto out;
>  
> +	/*
> +	 * If the victim task has been asynchronously moved to a different
> +	 * memory cgroup, we might end up killing tasks outside oom_domain.
> +	 * In this case it's better to ignore memory.oom.group.
> +	 */
> +	if (unlikely(!mem_cgroup_is_descendant(memcg, oom_domain)))
> +		goto out;
> +
>  	/*
>  	 * Traverse the memory cgroup hierarchy from the victim task's
>  	 * cgroup up to the OOMing cgroup (or root) to find the
> -- 
> 2.24.1

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
From: Roman Gushchin @ 2020-03-17 18:38 UTC
  To: Michal Hocko; +Cc: Andrew Morton, linux-mm, kernel-team, linux-kernel

On Tue, Mar 17, 2020 at 08:52:12AM +0100, Michal Hocko wrote:
> On Mon 16-03-20 15:35:10, Roman Gushchin wrote:
> > If a task is getting moved out of the OOMing cgroup, it might
> > result in unexpected OOM killings if memory.oom.group is used
> > anywhere in the cgroup tree.
> > 
> > Imagine the following example:
> > 
> >           A (oom.group = 1)
> >          / \
> >   (OOM) B   C
> > 
> > Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
> > selects a task in B as a victim, but someone asynchronously moves
> > the task into C.
> 
> I can see a Reported-by tag here - does that mean the race really
> happened in real workloads? If yes, I would be really curious, mostly
> because moving tasks outside of the oom domain is quite questionable
> without charge migration.

Yes, I've got a number of OOM messages where oom_cgroup != task_cgroup.
The only reasonable explanation is that the task has been moved out after
being selected as a victim. In my case it resulted in killing all tasks
in A, and that's what hurt the workload.

> 
> > mem_cgroup_get_oom_group() will iterate over all
> > ancestors of C up to the root cgroup. In theory it should stop
> > at the oom_domain level - the memory cgroup which is OOMing.
> > But because B is not an ancestor of C, that never happens.
> > Instead it chooses A (because its oom.group is set), and kills
> > all tasks in A. This behavior is wrong because the OOM happened in B,
> > so there is no reason to kill anything outside of it.
> > 
> > Fix this by checking whether the memory cgroup to which the task
> > belongs is a descendant of the oom_domain. If not, memory.oom.group
> > should be ignored, and the OOM killer should kill only the victim task.
> 
> I was about to suggest storing the memcg in oom_evaluate_task, but then
> I realized that this would be more complex, and I am not yet sure it
> would be so much better after all.
> 
> The thing is that killing the selected task makes a lot of sense,
> because it was the largest consumer, no matter that it has run away. On
> the other hand, if your B had oom.group = 1, then one could expect that
> any OOM killer event in that group will result in the whole group being
> torn down. This is however a gray zone, because we do emit the
> MEMCG_OOM event, but the MEMCG_OOM_KILL event will go to the victim's
> at-the-time memcg. So an observer of B could think that the oom was
> resolved without killing, while an observer of C would see a kill event
> without an oom.

I agree. Killing the task outside of the OOMing cgroup is already strange.

Should we somehow lock the OOMing cgroup, so that tasks cannot escape
from it or enter it until the OOM killing is finished?

It seems to be a better idea, because it would also make the oom.group
killing less racy: currently a forking app can potentially escape from it.

And then we can put something like
	if (WARN_ON_ONCE(!mem_cgroup_is_descendant(memcg, oom_domain)))
		goto out;
into mem_cgroup_get_oom_group?

What do you think?

Thanks!


* Re: [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
From: Michal Hocko @ 2020-03-17 18:55 UTC
  To: Roman Gushchin; +Cc: Andrew Morton, linux-mm, kernel-team, linux-kernel

On Tue 17-03-20 11:38:36, Roman Gushchin wrote:
> On Tue, Mar 17, 2020 at 08:52:12AM +0100, Michal Hocko wrote:
> > On Mon 16-03-20 15:35:10, Roman Gushchin wrote:
> > > If a task is getting moved out of the OOMing cgroup, it might
> > > result in unexpected OOM killings if memory.oom.group is used
> > > anywhere in the cgroup tree.
> > > 
> > > Imagine the following example:
> > > 
> > >           A (oom.group = 1)
> > >          / \
> > >   (OOM) B   C
> > > 
> > > Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
> > > selects a task in B as a victim, but someone asynchronously moves
> > > the task into C.
> > 
> > I can see a Reported-by tag here - does that mean the race really
> > happened in real workloads? If yes, I would be really curious, mostly
> > because moving tasks outside of the oom domain is quite questionable
> > without charge migration.
> 
> Yes, I've got a number of OOM messages where oom_cgroup != task_cgroup.
> The only reasonable explanation is that the task has been moved out after
> being selected as a victim. In my case it resulted in killing all tasks
> in A, and that's what hurt the workload.

Is this an expected behavior of the workload, or potentially a bug?
Because really, migrating outside of the oom domain is problematic
already. Essentially you are going to kill the wrong task if the largest
memory consumer migrates before the oom killer manages to find the
task.

> > > mem_cgroup_get_oom_group() will iterate over all
> > > ancestors of C up to the root cgroup. In theory it should stop
> > > at the oom_domain level - the memory cgroup which is OOMing.
> > > But because B is not an ancestor of C, that never happens.
> > > Instead it chooses A (because its oom.group is set), and kills
> > > all tasks in A. This behavior is wrong because the OOM happened in B,
> > > so there is no reason to kill anything outside of it.
> > > 
> > > Fix this by checking whether the memory cgroup to which the task
> > > belongs is a descendant of the oom_domain. If not, memory.oom.group
> > > should be ignored, and the OOM killer should kill only the victim task.
> > 
> > I was about to suggest storing the memcg in oom_evaluate_task, but then
> > I realized that this would be more complex, and I am not yet sure it
> > would be so much better after all.
> > 
> > The thing is that killing the selected task makes a lot of sense,
> > because it was the largest consumer, no matter that it has run away. On
> > the other hand, if your B had oom.group = 1, then one could expect that
> > any OOM killer event in that group will result in the whole group being
> > torn down. This is however a gray zone, because we do emit the
> > MEMCG_OOM event, but the MEMCG_OOM_KILL event will go to the victim's
> > at-the-time memcg. So an observer of B could think that the oom was
> > resolved without killing, while an observer of C would see a kill event
> > without an oom.
> 
> I agree. Killing the task outside of the OOMing cgroup is already strange.

Strange? Maybe, but if you think about it, not that much in fact, because
you are still killing a task that was in the memcg at the time of the
evaluation. Sure, that largest task might not be the biggest contributor
to the charged memory - as mentioned above - but well, this is what you
ask for when migrating across oom domains.

> Should we somehow lock the OOMing cgroup, so that tasks cannot escape
> from it or enter it until the OOM killing is finished?

I do not think this is going to help all that much. Sure, we can note the
memcg at oom_evaluate_task time and use it later for the group oom
handling. But races will always be there. Having the oom path depend on
locks used elsewhere is a can of worms; it would add very hard to
evaluate dependencies.

> It seems to be a better idea, because it would also make the oom.group
> killing less racy: currently a forking app can potentially escape from it.
> 
> And then we can put something like
> 	if (WARN_ON_ONCE(!mem_cgroup_is_descendant(memcg, oom_domain)))
> 		goto out;
> into mem_cgroup_get_oom_group?

This would be a user-triggerable warning, and that sounds like a bad idea
to me. We should just live with the races. The only question I still do
not have a proper answer for is how much we care. If we do not care all
that much about the original memcg, then go with your patch. But if we
want to be slightly more careful, then we should note the memcg in
oom_evaluate_task and use it when killing.
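
Roughly something like this (a hypothetical sketch, heavily abridged; the
chosen_memcg field is invented for illustration):

	static int oom_evaluate_task(struct task_struct *task, void *arg)
	{
		struct oom_control *oc = arg;
		...
		if (points > oc->chosen_points) {
			oc->chosen = task;
			oc->chosen_points = points;
			/* pin the memcg the task belonged to when evaluated */
			if (oc->chosen_memcg)
				css_put(&oc->chosen_memcg->css);
			oc->chosen_memcg = get_mem_cgroup_from_mm(task->mm);
		}
		...
	}

and then the kill path would hand oc->chosen_memcg, rather than the
victim's current memcg, to mem_cgroup_get_oom_group().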

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
From: Roman Gushchin @ 2020-03-17 20:36 UTC
  To: Michal Hocko; +Cc: Andrew Morton, linux-mm, kernel-team, linux-kernel

On Tue, Mar 17, 2020 at 07:55:29PM +0100, Michal Hocko wrote:
> On Tue 17-03-20 11:38:36, Roman Gushchin wrote:
> > On Tue, Mar 17, 2020 at 08:52:12AM +0100, Michal Hocko wrote:
> > > On Mon 16-03-20 15:35:10, Roman Gushchin wrote:
> > > > If a task is getting moved out of the OOMing cgroup, it might
> > > > result in unexpected OOM killings if memory.oom.group is used
> > > > anywhere in the cgroup tree.
> > > > 
> > > > Imagine the following example:
> > > > 
> > > >           A (oom.group = 1)
> > > >          / \
> > > >   (OOM) B   C
> > > > 
> > > > Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
> > > > selects a task in B as a victim, but someone asynchronously moves
> > > > the task into C.
> > > 
> > > I can see a Reported-by tag here - does that mean the race really
> > > happened in real workloads? If yes, I would be really curious, mostly
> > > because moving tasks outside of the oom domain is quite questionable
> > > without charge migration.
> > 
> > Yes, I've got a number of OOM messages where oom_cgroup != task_cgroup.
> > The only reasonable explanation is that the task has been moved out after
> > being selected as a victim. In my case it resulted in killing all tasks
> > in A, and that's what hurt the workload.
> 
> Is this an expected behavior of the workload, or potentially a bug?
> Because really, migrating outside of the oom domain is problematic
> already. Essentially you are going to kill the wrong task if the largest
> memory consumer migrates before the oom killer manages to find the
> task.

I don't think it's easy for a userspace program to predict OOMs and avoid
migrations in these circumstances. In my case one cgroup is some sort of
execution engine and the other is the workload which it starts, all
inside a bigger cgroup managed by systemd.

Generally speaking, charge migration has always been very problematic, and
it's super easy to expose weird corner cases by migrating large tasks.
I hope that long-term we will switch to entering cgroups using clone3
or something similar.
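
(For the curious: with the CLONE_INTO_CGROUP support that is being merged
for 5.7, a child can start its life directly in the target cgroup, so no
migration is needed at all. A minimal userspace sketch on a new enough
kernel/libc - spawn_into_cgroup is just an illustrative name, and error
handling is omitted:

	#define _GNU_SOURCE
	#include <fcntl.h>
	#include <signal.h>
	#include <sys/syscall.h>
	#include <unistd.h>
	#include <linux/sched.h>	/* struct clone_args, CLONE_INTO_CGROUP */

	static pid_t spawn_into_cgroup(const char *cgroup_path)
	{
		/* fd of the target cgroup directory in cgroupfs */
		int cgroup_fd = open(cgroup_path, O_RDONLY | O_DIRECTORY);
		struct clone_args args = {
			.flags       = CLONE_INTO_CGROUP,
			.exit_signal = SIGCHLD,
			.cgroup      = (__u64)cgroup_fd,
		};

		/* returns 0 in the child, the child's pid in the parent */
		return syscall(SYS_clone3, &args, sizeof(args));
	}
)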

> 
> > > > mem_cgroup_get_oom_group() will iterate over all
> > > > ancestors of C up to the root cgroup. In theory it should stop
> > > > at the oom_domain level - the memory cgroup which is OOMing.
> > > > But because B is not an ancestor of C, that never happens.
> > > > Instead it chooses A (because its oom.group is set), and kills
> > > > all tasks in A. This behavior is wrong because the OOM happened in B,
> > > > so there is no reason to kill anything outside of it.
> > > > 
> > > > Fix this by checking whether the memory cgroup to which the task
> > > > belongs is a descendant of the oom_domain. If not, memory.oom.group
> > > > should be ignored, and the OOM killer should kill only the victim task.
> > > 
> > > I was about to suggest storing the memcg in oom_evaluate_task, but then
> > > I realized that this would be more complex, and I am not yet sure it
> > > would be so much better after all.
> > > 
> > > The thing is that killing the selected task makes a lot of sense,
> > > because it was the largest consumer, no matter that it has run away. On
> > > the other hand, if your B had oom.group = 1, then one could expect that
> > > any OOM killer event in that group will result in the whole group being
> > > torn down. This is however a gray zone, because we do emit the
> > > MEMCG_OOM event, but the MEMCG_OOM_KILL event will go to the victim's
> > > at-the-time memcg. So an observer of B could think that the oom was
> > > resolved without killing, while an observer of C would see a kill event
> > > without an oom.
> > 
> > I agree. Killing the task outside of the OOMing cgroup is already strange.
> 
> Strange? Maybe, but if you think about it, not that much in fact, because
> you are still killing a task that was in the memcg at the time of the
> evaluation. Sure, that largest task might not be the biggest contributor
> to the charged memory - as mentioned above - but well, this is what you
> ask for when migrating across oom domains.
> 
> > Should we somehow lock the OOMing cgroup, so that tasks cannot escape
> > from it or enter it until the OOM killing is finished?
> 
> I do not think this is going to help all that much. Sure, we can note the
> memcg at oom_evaluate_task time and use it later for the group oom
> handling. But races will always be there. Having the oom path depend on
> locks used elsewhere is a can of worms; it would add very hard to
> evaluate dependencies.

Well, it can be just a single cgroup flag/bit. But I agree that it can
create more problems, so maybe it's better to avoid this path.

> 
> > It seems to be a better idea, because it would also make the oom.group
> > killing less racy: currently a forking app can potentially escape from it.
> > 
> > And then we can put something like
> > 	if (WARN_ON_ONCE(!mem_cgroup_is_descendant(memcg, oom_domain)))
> > 		goto out;
> > into mem_cgroup_get_oom_group?
> 
> This would be a user-triggerable warning, and that sounds like a bad idea
> to me. We should just live with the races. The only question I still do
> not have a proper answer for is how much we care. If we do not care all
> that much about the original memcg, then go with your patch. But if we
> want to be slightly more careful, then we should note the memcg in
> oom_evaluate_task and use it when killing.

But it won't close the race, right?

oom_evaluate_task() can race with a task migration too, so we could end up
recording either the old or the new cgroup.

So I'd rather stick with my original patch, which solves the main problem
here: the unnecessary killing of too many tasks.

Thanks!


* Re: [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
From: Michal Hocko @ 2020-03-18 12:31 UTC
  To: Roman Gushchin; +Cc: Andrew Morton, linux-mm, kernel-team, linux-kernel

On Tue 17-03-20 13:36:45, Roman Gushchin wrote:
[...]
> > > And then we can put something like
> > > 	if (WARN_ON_ONCE(!mem_cgroup_is_descendant(memcg, oom_domain)))
> > > 		goto out;
> > > into mem_cgroup_get_oom_group?
> > 
> > This would be a user-triggerable warning, and that sounds like a bad idea
> > to me. We should just live with the races. The only question I still do
> > not have a proper answer for is how much we care. If we do not care all
> > that much about the original memcg, then go with your patch. But if we
> > want to be slightly more careful, then we should note the memcg in
> > oom_evaluate_task and use it when killing.
> 
> But it won't close the race, right?
>
> oom_evaluate_task() can race with a task migration too, so we could end up
> recording either the old or the new cgroup.

Are you sure? I thought that the cgroup iterator code would take care of
those races. The documentation doesn't say much in that respect. Maybe
it would be good to add a clarification there.
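
(For reference, the walk in question is mem_cgroup_scan_tasks(), which
visits every task in the subtree via the css task iterator - roughly:

	css_task_iter_start(&iter->css, CSS_TASK_ITER_PROCS, &it);
	while (!ret && (task = css_task_iter_next(&it)))
		ret = fn(task, arg);	/* fn == oom_evaluate_task */
	css_task_iter_end(&it);

Whether a task that migrates while the walk is in flight is seen in the
old group, the new one, or neither is exactly the open question here.)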
 
> So I'd rather stick with my original patch, which solves the main problem
> here: the unnecessary killing of too many tasks.

OK, I am fine with that. I couldn't convince myself that the other part
of the problem is serious enough. Maybe we will find workloads which do
care and we can add that later on.
-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
From: Michal Hocko @ 2020-03-18 12:32 UTC
  To: Roman Gushchin; +Cc: Andrew Morton, linux-mm, kernel-team, linux-kernel

On Mon 16-03-20 15:35:10, Roman Gushchin wrote:
> If a task is getting moved out of the OOMing cgroup, it might
> result in unexpected OOM killings if memory.oom.group is used
> anywhere in the cgroup tree.
> 
> Imagine the following example:
> 
>           A (oom.group = 1)
>          / \
>   (OOM) B   C
> 
> Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
> selects a task in B as a victim, but someone asynchronously moves
> the task into C. mem_cgroup_get_oom_group() will iterate over all
> ancestors of C up to the root cgroup. In theory it should stop
> at the oom_domain level - the memory cgroup which is OOMing.
> But because B is not an ancestor of C, that never happens.
> Instead it chooses A (because its oom.group is set), and kills
> all tasks in A. This behavior is wrong because the OOM happened in B,
> so there is no reason to kill anything outside of it.
> 
> Fix this by checking whether the memory cgroup to which the task
> belongs is a descendant of the oom_domain. If not, memory.oom.group
> should be ignored, and the OOM killer should kill only the victim task.
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Reported-by: Dan Schatzberg <dschatzberg@fb.com>

After the follow-up discussion I do agree that this should be sufficient
for now.
Acked-by: Michal Hocko <mhocko@suse.com>

> ---
>  mm/memcontrol.c | 8 ++++++++
>  1 file changed, 8 insertions(+)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index daa399be4688..d8c4b7aa4e73 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1930,6 +1930,14 @@ struct mem_cgroup *mem_cgroup_get_oom_group(struct task_struct *victim,
>  	if (memcg == root_mem_cgroup)
>  		goto out;
>  
> +	/*
> +	 * If the victim task has been asynchronously moved to a different
> +	 * memory cgroup, we might end up killing tasks outside oom_domain.
> +	 * In this case it's better to ignore memory.oom.group.
> +	 */
> +	if (unlikely(!mem_cgroup_is_descendant(memcg, oom_domain)))
> +		goto out;
> +
>  	/*
>  	 * Traverse the memory cgroup hierarchy from the victim task's
>  	 * cgroup up to the OOMing cgroup (or root) to find the
> -- 
> 2.24.1

-- 
Michal Hocko
SUSE Labs


* Re: [PATCH] mm: memcg: make memory.oom.group tolerable to task migration
From: Johannes Weiner @ 2020-03-19 13:37 UTC
  To: Roman Gushchin
  Cc: Andrew Morton, Michal Hocko, linux-mm, kernel-team, linux-kernel

On Mon, Mar 16, 2020 at 03:35:10PM -0700, Roman Gushchin wrote:
> If a task is getting moved out of the OOMing cgroup, it might
> result in unexpected OOM killings if memory.oom.group is used
> anywhere in the cgroup tree.
> 
> Imagine the following example:
> 
>           A (oom.group = 1)
>          / \
>   (OOM) B   C
> 
> Let's say B's memory.max is exceeded and it's OOMing. The OOM killer
> selects a task in B as a victim, but someone asynchronously moves
> the task into C. mem_cgroup_get_oom_group() will iterate over all
> ancestors of C up to the root cgroup. In theory it should stop
> at the oom_domain level - the memory cgroup which is OOMing.
> But because B is not an ancestor of C, that never happens.
> Instead it chooses A (because its oom.group is set), and kills
> all tasks in A. This behavior is wrong because the OOM happened in B,
> so there is no reason to kill anything outside of it.
> 
> Fix this by checking whether the memory cgroup to which the task
> belongs is a descendant of the oom_domain. If not, memory.oom.group
> should be ignored, and the OOM killer should kill only the victim task.
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Reported-by: Dan Schatzberg <dschatzberg@fb.com>

Acked-by: Johannes Weiner <hannes@cmpxchg.org>

