From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: Michal Hocko <mhocko@kernel.org>, linux-mm@kvack.org
Cc: Johannes Weiner <hannes@cmpxchg.org>,
	Andrew Morton <akpm@linux-foundation.org>,
	LKML <linux-kernel@vger.kernel.org>,
	Michal Hocko <mhocko@suse.com>
Subject: Re: [PATCH 2/2] memcg: do not report racy no-eligible OOM tasks
Date: Tue, 8 Jan 2019 05:59:49 +0900
Message-ID: <fa8892d1-4a38-dccd-9597-923924aa0a66@i-love.sakura.ne.jp>
In-Reply-To: <20190107143802.16847-3-mhocko@kernel.org>

On 2019/01/07 23:38, Michal Hocko wrote:
> From: Michal Hocko <mhocko@suse.com>
> 
> Tetsuo has reported [1] that a single process group memcg might easily
> swamp the log with no-eligible oom victim reports due to a race between
> the memcg charge and the oom_reaper.

This explanation is outdated. What I reported is that one memcg OOM event
can end up killing all processes in that memcg. Please update the
changelog accordingly.

> 
> Thread 1		Thread2				oom_reaper
> try_charge		try_charge
> 			  mem_cgroup_out_of_memory
> 			    mutex_lock(oom_lock)
>   mem_cgroup_out_of_memory
>     mutex_lock(oom_lock)
> 			      out_of_memory
> 			        select_bad_process
> 				oom_kill_process(current)
> 				  wake_oom_reaper
> 							  oom_reap_task
> 							  MMF_OOM_SKIP->victim
> 			    mutex_unlock(oom_lock)
>     out_of_memory
>       select_bad_process # no task
> 
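(Note: the oom_reaper's final step is roughly set_bit(MMF_OOM_SKIP,
&mm->flags); once that bit is set, oom_badness() treats the victim as
unkillable, which is why the second select_bad_process() in the diagram
finds no task.)
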
> If Thread1 hadn't raced, it would have bailed out of try_charge and
> forced the charge. We can achieve the same effect by checking
> tsk_is_oom_victim while holding oom_lock, thereby closing the race.
> 
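For context, the "bailed out of try_charge and forced the charge" path
mentioned above is the bypass near the top of try_charge(). An abbreviated
sketch from memory of this era's mm/memcontrol.c (the exact set of
conditions may differ):

  if (unlikely(tsk_is_oom_victim(current) ||
               fatal_signal_pending(current) ||
               current->flags & PF_EXITING))
          goto force;   /* force the charge; do not invoke the OOM killer */
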
> [1] http://lkml.kernel.org/r/bb2074c0-34fe-8c2c-1c7d-db71338f1e7f@i-love.sakura.ne.jp
> Signed-off-by: Michal Hocko <mhocko@suse.com>
> ---
>  mm/memcontrol.c | 14 +++++++++++++-
>  1 file changed, 13 insertions(+), 1 deletion(-)
> 
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index af7f18b32389..90eb2e2093e7 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1387,10 +1387,22 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
>  		.gfp_mask = gfp_mask,
>  		.order = order,
>  	};
> -	bool ret;
> +	bool ret = true;
>  
>  	mutex_lock(&oom_lock);

And because of "[PATCH 1/2] mm, oom: marks all killed tasks as oom
victims", mark_oom_victim() will be called on the current thread even if
we used mutex_lock_killable(&oom_lock) here, as you said:

  mutex_lock_killable would take care of exiting task already. I would
  then still prefer to check for mark_oom_victim because that is not racy
  with the exit path clearing signals. I can update my patch to use
  _killable lock variant if we are really going with the memcg specific
  fix.

If the current thread has not yet been killed by the OOM killer but can
terminate without invoking the OOM killer, using
mutex_lock_killable(&oom_lock) here saves some processes. What is the race
you are referring to with "racy with the exit path clearing signals"?
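
For reference, the _killable variant I have in mind would look roughly as
follows (an untested sketch against the hunk quoted above, with the
oom_control initializer left as is):

  bool ret = true;

  /* Bail out if current is killed while waiting for oom_lock;
   * it can simply terminate and release its memory. */
  if (mutex_lock_killable(&oom_lock))
          return true;
  if (!tsk_is_oom_victim(current))
          ret = out_of_memory(&oc);
  mutex_unlock(&oom_lock);
  return ret;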

> +
> +	/*
> +	 * Multi-threaded tasks might race with oom_reaper and gain
> +	 * MMF_OOM_SKIP before reaching out_of_memory(), which can
> +	 * make out_of_memory() fail if the task is the last one in
> +	 * the memcg, resulting in a false positive failure report.
> +	 */

It is not only the out_of_memory() failure. The current thread also
needlessly tries to select the next OOM victim; the out_of_memory()
failure is merely the result of there being no eligible candidate left.

> +	if (tsk_is_oom_victim(current))
> +		goto unlock;
> +
>  	ret = out_of_memory(&oc);
> +
> +unlock:
>  	mutex_unlock(&oom_lock);
>  	return ret;
>  }
> 
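For readers unfamiliar with the helper: tsk_is_oom_victim() in
include/linux/oom.h boils down to a test of signal->oom_mm, which
mark_oom_victim() sets for an OOM victim (approximate definition for this
era):

  static inline bool tsk_is_oom_victim(struct task_struct *tsk)
  {
          return tsk->signal->oom_mm;
  }

With "[PATCH 1/2]" applied, every killed task would pass this test here.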


Thread overview: 28+ messages
2019-01-07 14:38 [PATCH 0/2] oom, memcg: do not report racy no-eligible OOM Michal Hocko
2019-01-07 14:38 ` [PATCH 1/2] mm, oom: marks all killed tasks as oom victims Michal Hocko
2019-01-07 20:58   ` Tetsuo Handa
2019-01-08  8:11     ` Michal Hocko
2019-01-07 14:38 ` [PATCH 2/2] memcg: do not report racy no-eligible OOM tasks Michal Hocko
2019-01-07 20:59   ` Tetsuo Handa [this message]
2019-01-08  8:14     ` Michal Hocko
2019-01-08 10:39       ` Tetsuo Handa
2019-01-08 11:46         ` Michal Hocko
2019-01-08  8:35   ` kbuild test robot
2019-01-08  9:39     ` Michal Hocko
2019-01-11  0:23       ` [kbuild-all] " Rong Chen
2019-01-08 14:21 ` [PATCH 3/2] memcg: Facilitate termination of memcg OOM victims Tetsuo Handa
2019-01-08 14:38   ` Michal Hocko
2019-01-09 11:03 ` [PATCH 0/2] oom, memcg: do not report racy no-eligible OOM Michal Hocko
2019-01-09 11:34   ` Tetsuo Handa
2019-01-09 12:02     ` Michal Hocko
2019-01-10 23:59       ` Tetsuo Handa
2019-01-11 10:25         ` Tetsuo Handa
2019-01-11 11:33           ` Michal Hocko
2019-01-11 12:40             ` Tetsuo Handa
2019-01-11 13:34               ` Michal Hocko
2019-01-11 14:31                 ` Tetsuo Handa
2019-01-11 15:07                   ` Michal Hocko
2019-01-11 15:37                     ` Tetsuo Handa
2019-01-11 16:45                       ` Michal Hocko
2019-01-12 10:52                         ` Tetsuo Handa
2019-01-13 17:36                           ` Michal Hocko
