From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: Michal Hocko <mhocko@kernel.org>, linux-mm@kvack.org
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	David Rientjes <rientjes@google.com>,
	Andrew Morton <akpm@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Michal Hocko <mhocko@suse.com>
Subject: Re: [RFC PATCH 2/2] memcg: do not report racy no-eligible OOM tasks
Date: Mon, 22 Oct 2018 20:45:17 +0900
Message-ID: <f9a8079f-55b0-301e-9b3d-a5250bd7d277@i-love.sakura.ne.jp>
In-Reply-To: <20181022071323.9550-3-mhocko@kernel.org>

On 2018/10/22 16:13, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> Tetsuo has reported [1] that a single-process-group memcg might easily
> swamp the log with no-eligible OOM victim reports due to a race between
> the memcg charge path and the oom_reaper:
> 
> Thread 1		Thread 2			oom_reaper
> try_charge		try_charge
> 			  mem_cgroup_out_of_memory
> 			    mutex_lock(oom_lock)
>   mem_cgroup_out_of_memory
>     mutex_lock(oom_lock)
> 			      out_of_memory
> 			        select_bad_process
> 				oom_kill_process(current)
> 				  wake_oom_reaper
> 							  oom_reap_task
> 							  MMF_OOM_SKIP->victim
> 			    mutex_unlock(oom_lock)
>     out_of_memory
>       select_bad_process # no task
> 
> If Thread1 hadn't raced, it would have bailed out of try_charge and
> forced the charge. We can achieve the same by checking tsk_is_oom_victim
> inside the oom_lock and thereby close the race.
> 
> [1] http://lkml.kernel.org/r/bb2074c0-34fe-8c2c-1c7d-db71338f1e7f@i-love.sakura.ne.jp
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/memcontrol.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index e79cb59552d9..a9dfed29967b 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1380,10 +1380,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>  		.gfp_mask = gfp_mask,
>  		.order = order,
>  	};
> -	bool ret;
> +	bool ret = true;
>  
>  	mutex_lock(&oom_lock);
> +
> +	/*
> +	 * multi-threaded tasks might race with oom_reaper and gain
> +	 * MMF_OOM_SKIP before reaching out_of_memory which can lead
> +	 * to out_of_memory failure if the task is the last one in
> +	 * the memcg, which would be a false positive failure
> +	 */
> +	if (tsk_is_oom_victim(current))
> +		goto unlock;
> +

This is not wrong, but it is strange. We could instead use
mutex_lock_killable(&oom_lock) so that killed threads no longer wait for
oom_lock at all; see the sketch below.
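
A minimal, untested sketch of what I mean (same function and oom_control
fields as in your hunk; mutex_lock_killable() returns -EINTR when a fatal
signal is pending):

static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
				     int order)
{
	struct oom_control oc = {
		.zonelist = NULL,
		.nodemask = NULL,
		.memcg = memcg,
		.gfp_mask = gfp_mask,
		.order = order,
	};
	bool ret;

	/*
	 * A killed task is (or is about to become) an OOM victim itself,
	 * so there is no point in making it sleep on oom_lock behind
	 * other OOM killers.  Pretend progress was made; the killed task
	 * will bail out of try_charge() anyway.
	 */
	if (mutex_lock_killable(&oom_lock))
		return true;
	ret = out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	return ret;
}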

Also, closing this race for only the memcg OOM path is strange. The global
OOM path (for tasks sharing an mm via CLONE_VM without CLONE_THREAD) still
suffers from this race, though the frequency is lower than for memcg OOM
because it uses mutex_trylock(). Either checking before calling
out_of_memory() or checking task_will_free_mem(current) inside
out_of_memory() would close this race for both paths; one possible shape
is sketched below.
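
For example (untested, and the exact condition is debatable), a
tsk_is_oom_victim() check early in out_of_memory(), which already runs
under oom_lock, would cover both the global and memcg paths:

bool out_of_memory(struct oom_control *oc)
{
	...
	/*
	 * If current was already selected as an OOM victim (and possibly
	 * already reaped, i.e. MMF_OOM_SKIP is set), let it bail out and
	 * retry the allocation/charge instead of reporting a racy
	 * "no eligible task" failure.  Skip this shortcut for the
	 * SysRq-triggered OOM killer.
	 */
	if (!is_sysrq_oom(oc) && tsk_is_oom_victim(current))
		return true;
	...
}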

>  	ret = out_of_memory(&oc);
> +
> +unlock:
>  	mutex_unlock(&oom_lock);
>  	return ret;
>  }
> 
