From: Michal Hocko <mhocko@kernel.org>
To: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch] mm, oom: make a last minute check to prevent unnecessary memcg oom kills
Date: Tue, 10 Mar 2020 23:19:38 +0100	[thread overview]
Message-ID: <20200310221938.GF8447@dhcp22.suse.cz> (raw)
In-Reply-To: <alpine.DEB.2.21.2003101454580.142656@chino.kir.corp.google.com>

On Tue 10-03-20 14:55:50, David Rientjes wrote:
> Killing a user process as a result of hitting memcg limits is a serious
> decision that is unfortunately needed only when no forward progress can
> be made in reclaiming memory.
> 
> Selecting the appropriate oom victim can take long enough that another
> process which is already exiting has time to uncharge memory from the
> same memcg hierarchy, making the kill of a user process unnecessary.
> 
> One example, from a system with two cores, shows how this leads to
> *multiple* unnecessary oom kills: the second kill occurs while an
> abundance of free memory is available under the limit:
> 
> Memory cgroup out of memory: Killed process 628 (repro) total-vm:41944kB, anon-rss:40888kB, file-rss:496kB, shmem-rss:0kB, UID:0 pgtables:116kB oom_score_adj:0
> <immediately after>
> repro invoked oom-killer: gfp_mask=0xcc0(GFP_KERNEL), order=0, oom_score_adj=0
> CPU: 1 PID: 629 Comm: repro Not tainted 5.6.0-rc5+ #130
> Call Trace:
>  dump_stack+0x78/0xb6
>  dump_header+0x55/0x240
>  oom_kill_process+0xc5/0x170
>  out_of_memory+0x305/0x4a0
>  try_charge+0x77b/0xac0
>  mem_cgroup_try_charge+0x10a/0x220
>  mem_cgroup_try_charge_delay+0x1e/0x40
>  handle_mm_fault+0xdf2/0x15f0
>  do_user_addr_fault+0x21f/0x420
>  async_page_fault+0x2f/0x40
> memory: usage 61336kB, limit 102400kB, failcnt 74
> 
> Notice that at the time of the second memcg oom kill, usage is more than
> 40MB below the 100MB limit, yet a process is still unnecessarily killed:
> the decision to oom kill was already made in out_of_memory() before the
> initial victim had a chance to uncharge its memory.
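> 
> To make the window concrete, here is a minimal userspace model of the
> timing (illustrative only: this is not the actual reproducer and not
> kernel code, just two threads racing over a counter that stands in for
> the memcg page counter):
> 
> /* race_model.c - toy model of the window described above; not kernel code.
>  * Build: gcc -O2 -pthread race_model.c -o race_model
>  */
> #include <pthread.h>
> #include <stdatomic.h>
> #include <stdio.h>
> #include <unistd.h>
> 
> #define LIMIT_PAGES	25600UL		/* 100MB limit, 4KB pages */
> 
> static atomic_ulong usage;		/* stands in for the memcg page counter */
> 
> /* stands in for mem_cgroup_margin(): pages left before the limit */
> static unsigned long margin(void)
> {
> 	unsigned long u = atomic_load(&usage);
> 
> 	return u >= LIMIT_PAGES ? 0 : LIMIT_PAGES - u;
> }
> 
> /* models the first oom victim exiting and uncharging its ~40MB */
> static void *exiting_victim(void *arg)
> {
> 	usleep(1000);
> 	atomic_fetch_sub(&usage, 10240);
> 	return NULL;
> }
> 
> int main(void)
> {
> 	pthread_t t;
> 
> 	/* the tasks have charged up to the limit: the next charge fails */
> 	atomic_store(&usage, LIMIT_PAGES);
> 	pthread_create(&t, NULL, exiting_victim, NULL);
> 
> 	/* out_of_memory(): the kill has been decided... */
> 	printf("margin at oom decision: %lu pages\n", margin());
> 
> 	/* ...but selecting and killing the victim takes time */
> 	usleep(10000);
> 
> 	/* oom_kill_process(): usage is now well below the limit */
> 	printf("margin at kill time:    %lu pages\n", margin());
> 
> 	pthread_join(t, NULL);
> 	return 0;
> }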

Could you be more specific about the workload, please?

> Make a last minute check to determine whether an oom kill is still
> needed, to prevent unnecessary oom killing.

I really see no reason why the memcg oom should behave differently from
the global case. In both cases there is a point of no return. Wherever
that point is placed, the check will be racy, and the time spent selecting
the oom victim simply becomes the race window. There is no way around that
without making the whole thing completely synchronous. This all looks like
a micro-optimization, and I would really like to see a relevant real-world
use case presented before new special casing is added.
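
To illustrate why moving the check only narrows the window (a toy model in
the same spirit as the one quoted above, not kernel code): the uncharge can
just as well land right after the proposed last minute check, in which case
the process is killed anyway:

/* toctou.c - toy model: the last minute check is itself racy; not kernel code.
 * Build: gcc -O2 -pthread toctou.c -o toctou
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>
#include <unistd.h>

#define LIMIT_PAGES	25600UL		/* 100MB limit, 4KB pages */

static atomic_ulong usage;		/* stands in for the memcg page counter */

static unsigned long margin(void)
{
	unsigned long u = atomic_load(&usage);

	return u >= LIMIT_PAGES ? 0 : LIMIT_PAGES - u;
}

/* the exiting task uncharges just *after* the last minute check */
static void *exiting_victim(void *arg)
{
	usleep(2000);
	atomic_fetch_sub(&usage, 10240);
	return NULL;
}

int main(void)
{
	pthread_t t;

	atomic_store(&usage, LIMIT_PAGES);
	pthread_create(&t, NULL, exiting_victim, NULL);

	/* the proposed check in oom_kill_process(): margin is still 0 */
	if (margin() < 1)
		printf("last minute check: still oom, killing\n");

	usleep(5000);			/* __oom_kill_process() et al. */

	/* the victim is dead, yet the margin opened up anyway */
	printf("margin right after the kill: %lu pages\n", margin());

	pthread_join(t, NULL);
	return 0;
}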

> 
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: stable@vger.kernel.org
> Signed-off-by: David Rientjes <rientjes@google.com>
> ---
>  include/linux/memcontrol.h |  7 +++++++
>  mm/memcontrol.c            |  2 +-
>  mm/oom_kill.c              | 16 +++++++++++++---
>  3 files changed, 21 insertions(+), 4 deletions(-)
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -445,6 +445,8 @@ void mem_cgroup_iter_break(struct mem_cgroup *, struct mem_cgroup *);
>  int mem_cgroup_scan_tasks(struct mem_cgroup *,
>  			  int (*)(struct task_struct *, void *), void *);
>  
> +unsigned long mem_cgroup_margin(struct mem_cgroup *memcg);
> +
>  static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
>  {
>  	if (mem_cgroup_disabled())
> @@ -945,6 +947,11 @@ static inline int mem_cgroup_scan_tasks(struct mem_cgroup *memcg,
>  	return 0;
>  }
>  
> +static inline unsigned long mem_cgroup_margin(struct mem_cgroup *memcg)
> +{
> +	return 0;
> +}
> +
>  static inline unsigned short mem_cgroup_id(struct mem_cgroup *memcg)
>  {
>  	return 0;
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -1286,7 +1286,7 @@ void mem_cgroup_update_lru_size(struct lruvec *lruvec, enum lru_list lru,
>   * Returns the maximum amount of memory @mem can be charged with, in
>   * pages.
>   */
> -static unsigned long mem_cgroup_margin(struct mem_cgroup *memcg)
> +unsigned long mem_cgroup_margin(struct mem_cgroup *memcg)
>  {
>  	unsigned long margin = 0;
>  	unsigned long count;
> diff --git a/mm/oom_kill.c b/mm/oom_kill.c
> --- a/mm/oom_kill.c
> +++ b/mm/oom_kill.c
> @@ -972,9 +972,6 @@ static void oom_kill_process(struct oom_control *oc, const char *message)
>  	}
>  	task_unlock(victim);
>  
> -	if (__ratelimit(&oom_rs))
> -		dump_header(oc, victim);
> -
>  	/*
>  	 * Do we need to kill the entire memory cgroup?
>  	 * Or even one of the ancestor memory cgroups?
> @@ -982,6 +979,19 @@ static void oom_kill_process(struct oom_control *oc, const char *message)
>  	 */
>  	oom_group = mem_cgroup_get_oom_group(victim, oc->memcg);
>  
> +	if (is_memcg_oom(oc)) {
> +		cond_resched();
> +
> +		/* One last check: do we *really* need to kill? */
> +		if (mem_cgroup_margin(oc->memcg) >= (1 << oc->order)) {
> +			put_task_struct(victim);
> +			return;
> +		}
> +	}
> +
> +	if (__ratelimit(&oom_rs))
> +		dump_header(oc, victim);
> +
>  	__oom_kill_process(victim, message);
>  
>  	/*

-- 
Michal Hocko
SUSE Labs


Thread overview: 12+ messages
2020-03-10 21:55 [patch] mm, oom: make a last minute check to prevent unnecessary memcg oom kills David Rientjes
2020-03-10 22:19 ` Michal Hocko [this message]
2020-03-10 22:54   ` David Rientjes
2020-03-11  8:39     ` Michal Hocko
2020-03-17  7:59       ` Michal Hocko
2020-03-11 11:41     ` Tetsuo Handa
2020-03-11 19:51       ` David Rientjes
2020-03-17 18:25     ` Robert Kolchmeyer
2020-03-17 19:00       ` Ami Fischman
2020-03-18  9:57         ` Michal Hocko
2020-03-18 15:20           ` Ami Fischman
2020-03-18  9:55       ` Michal Hocko
