* [PATCH] memcg, oom: check memcg margin for parallel oom
@ 2020-11-06  8:51 zhongjiang-ali
  2020-11-09  7:56 ` zhong jiang
  0 siblings, 1 reply; 2+ messages in thread
From: zhongjiang-ali @ 2020-11-06  8:51 UTC
  To: hannes, mhocko, akpm; +Cc: linux-mm

From: Yafang Shao <laoar.shao@gmail.com>

Memcg oom killer invocation is synchronized by the global oom_lock:
tasks sleep on the lock while somebody is selecting the victim, or they
potentially race with the oom_reaper as it releases the victim's memory.
This can result in a pointless oom killer invocation because a waiter
might be racing with the oom_reaper:

        P1              oom_reaper              P2
                        oom_reap_task           mutex_lock(oom_lock)
                                                out_of_memory # no victim because we have one already
                        __oom_reap_task_mm      mutex_unlock(oom_lock)
 mutex_lock(oom_lock)
                        set MMF_OOM_SKIP
 select_bad_process
 # finds a new victim

The page allocator prevents this race by trying to allocate once the
lock has been acquired (in __alloc_pages_may_oom), which acts as a last
minute check.  Moreover, the page allocator doesn't block on the
oom_lock at all: it merely trylocks and, failing that, retries the whole
reclaim process.
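
For reference, the page allocator's variant of this last minute check
looks roughly like the following (a condensed sketch of
__alloc_pages_may_oom, not the verbatim upstream code):

	/* Condensed sketch of __alloc_pages_may_oom(), not verbatim. */
	if (!mutex_trylock(&oom_lock)) {
		/* Never sleep on the lock; report progress so the
		 * caller retries the whole reclaim process instead. */
		*did_some_progress = 1;
		schedule_timeout_uninterruptible(1);
		return NULL;
	}

	/*
	 * Last minute check: the oom_reaper may have released a
	 * victim's memory while we raced for the lock, so the
	 * allocation can succeed without killing anything.
	 */
	page = get_page_from_freelist(gfp_mask | __GFP_HARDWALL, order,
				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
	if (page)
		goto out;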

The memcg oom killer should do the same last minute check; call
mem_cgroup_margin to do that.  A trylock on the oom_lock could be used
as well, but that doesn't seem necessary at this stage.
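
For context, mem_cgroup_margin reports how many pages can still be
charged before the memcg hits its hard limit; a simplified sketch
(swap/memsw accounting omitted):

	/* Simplified sketch; the real helper also honours memsw limits. */
	static unsigned long mem_cgroup_margin(struct mem_cgroup *memcg)
	{
		unsigned long count = page_counter_read(&memcg->memory);
		unsigned long limit = READ_ONCE(memcg->memory.max);

		/* Pages still chargeable before the limit is hit. */
		return count < limit ? limit - count : 0;
	}

If the margin covers the pending charge of 1 << order pages, memory was
freed while we waited for the lock, so invoking the oom killer would be
pointless.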

[mhocko@kernel.org: commit log]

Suggested-by: Michal Hocko <mhocko@kernel.org>
Signed-off-by: Yafang Shao <laoar.shao@gmail.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
Acked-by: Michal Hocko <mhocko@suse.com>
Acked-by: Chris Down <chris@chrisdown.name>
Cc: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: David Rientjes <rientjes@google.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Link: http://lkml.kernel.org/r/1594735034-19190-1-git-send-email-laoar.shao@gmail.com
Signed-off-by: Linus Torvalds <torvalds@linux-foundation.org>
Signed-off-by: zhongjiang-ali <zhongjiang-ali@linux.alibaba.com>
---
 mm/memcontrol.c | 8 +++++++-
 1 file changed, 7 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b30a52d..369d9e1 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1663,15 +1663,21 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
 		.gfp_mask = gfp_mask,
 		.order = order,
 	};
-	bool ret;
+	bool ret = true;
 
 	if (mutex_lock_killable(&oom_lock))
 		return true;
+
+	if (mem_cgroup_margin(memcg) >= (1 << order))
+		goto unlock;
+
 	/*
 	 * A few threads which were not waiting at mutex_lock_killable() can
 	 * fail to bail out. Therefore, check again after holding oom_lock.
 	 */
 	ret = should_force_charge() || out_of_memory(&oc);
+
+unlock:
 	mutex_unlock(&oom_lock);
 	return ret;
 }
-- 
1.8.3.1




* Re: [PATCH] memcg, oom: check memcg margin for parallel oom
  2020-11-06  8:51 [PATCH] memcg, oom: check memcg margin for parallel oom zhongjiang-ali
@ 2020-11-09  7:56 ` zhong jiang
  0 siblings, 0 replies; 2+ messages in thread
From: zhong jiang @ 2020-11-09  7:56 UTC
  To: hannes, mhocko, akpm; +Cc: linux-mm

I am sorry for my stupid email. Please ignore it. Thanks 😅


