From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: David Rientjes <rientjes@google.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	Vlastimil Babka <vbabka@suse.cz>,
	Michal Hocko <mhocko@kernel.org>,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch] mm, oom: prevent soft lockup on memcg oom for UP systems
Date: Fri, 13 Mar 2020 09:15:09 +0900	[thread overview]
Message-ID: <202003130015.02D0F9uT079462@www262.sakura.ne.jp> (raw)
In-Reply-To: <alpine.DEB.2.21.2003121101030.158939@chino.kir.corp.google.com>

David Rientjes wrote:
> > By the way, will you share the reproducer (and how to use the reproducer) ?
> > 
> 
> On a UP kernel with swap disabled, you limit a memcg to 100MB and start 
> three processes attached to it that each fault 40MB.  Same reproducer as 
> the "mm, oom: make a last minute check to prevent unnecessary memcg oom 
> kills" patch except in that case there are two cores.
> 

I'm not a heavy memcg user. Please provide steps for reproducing your problem
in a copy-and-pastable way (e.g. a bash script or a C program).

> > @@ -1576,6 +1576,7 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
> >  	 */
> >  	ret = should_force_charge() || out_of_memory(&oc);
> >  	mutex_unlock(&oom_lock);
> > +	schedule_timeout_killable(1);
> >  	return ret;
> >  }
> >  
> 
> If current was the process chosen for the oom kill, this would actually 
> induce the problem, not fix it.
> 

Why? The memcg OOM path allows using the forced charge path if should_force_charge() == true.
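
For reference, should_force_charge() is, as far as I recall (please check the
tree your patch is against for the exact form):

/* The charge is forced when the charging task is dying or is an OOM victim. */
static bool should_force_charge(void)
{
	return tsk_is_oom_victim(current) || fatal_signal_pending(current) ||
		(current->flags & PF_EXITING);
}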

Since your lockup report

  Call Trace:
   shrink_node+0x40d/0x7d0
   do_try_to_free_pages+0x13f/0x470
   try_to_free_mem_cgroup_pages+0x16d/0x230
   try_charge+0x247/0xac0
   mem_cgroup_try_charge+0x10a/0x220
   mem_cgroup_try_charge_delay+0x1e/0x40
   handle_mm_fault+0xdf2/0x15f0
   do_user_addr_fault+0x21f/0x420
   page_fault+0x2f/0x40

says that the allocating thread was calling try_to_free_mem_cgroup_pages() from try_charge(),
the allocating thread must be able to reach mem_cgroup_out_of_memory() from mem_cgroup_oom()
from try_charge(). And indeed

  Memory cgroup out of memory: Killed process 808 (repro) total-vm:41944kB, anon-rss:35344kB, file-rss:504kB, shmem-rss:0kB, UID:0 pgtables:108kB oom_score_adj:0

says that the allocating thread did reach mem_cgroup_out_of_memory(). Then, the allocating
thread must be able to sleep in mem_cgroup_out_of_memory() if schedule_timeout_killable(1)
is added to mem_cgroup_out_of_memory().

Also, if the current process was chosen for the OOM kill, it will be able to leave
try_charge() because should_force_charge() becomes true, won't it?
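
That is, try_charge() has roughly this path (an abbreviated sketch from my
reading of mm/memcontrol.c in this era; not the exact code):

	if (unlikely(should_force_charge()))
		goto force;
	...
force:
	/* Let usage go over the limit; the task is dying or is an OOM victim. */
	page_counter_charge(&memcg->memory, nr_pages);
	if (do_memsw_account())
		page_counter_charge(&memcg->memsw, nr_pages);
	css_get_many(&memcg->css, nr_pages);
	return 0;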

Thus, how can "this would actually induce the problem, not fix it" happen?
If your problem is that something keeps the allocating threads from reaching the
should_force_charge() check, please explain the mechanism. If that is explained,
I would agree that schedule_timeout_killable(1) in mem_cgroup_out_of_memory()
won't help.

