From: Michal Hocko <mhocko@kernel.org>
To: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
Cc: David Rientjes <rientjes@google.com>,
	akpm@linux-foundation.org, linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH] mm,oom_kill: Close race window of needlessly selecting new victims.
Date: Fri, 23 Jun 2017 14:45:51 +0200
Message-ID: <20170623124550.GX5308@dhcp22.suse.cz>
In-Reply-To: <201706220053.v5M0rmOU078764@www262.sakura.ne.jp>

On Thu 22-06-17 09:53:48, Tetsuo Handa wrote:
> David Rientjes wrote:
> > On Wed, 21 Jun 2017, Tetsuo Handa wrote:
> > > Umm... So you are pointing out that having select_bad_process() abort based
> > > on TIF_MEMDIE or MMF_OOM_SKIP is broken because victim threads can be removed
> > > from the global task list or a cgroup's task list. Then the OOM killer would
> > > have to wait until every mm_struct in the affected OOM domain (system-wide or
> > > some cgroup) has been reaped by the OOM reaper. The simplest way is to wait
> > > until all mm_structs have been reaped by the OOM reaper, for we are not
> > > currently tracking which memory cgroup each mm_struct belongs to, are we?
> > > But that can cause needless delays when multiple OOM events occur in
> > > different OOM domains. Do we want to (and can we) make it possible to tell
> > > whether each mm_struct queued on the OOM reaper's list belongs to the thread
> > > calling out_of_memory()?
> > > 
> > 
> > I am saying that taking mmget() in mark_oom_victim() and only dropping it
> > with mmput_async() once it can grab mm->mmap_sem (which the exit path itself
> > takes) or the oom reaper happens to schedule causes __mmput() to be called
> > much later, so with your patch we remove the process from the tasklist or
> > call cgroup_exit() before the memory can actually be unmapped. As a result,
> > subsequent calls to the oom killer kill everything before the original
> > victim's mm can go through __mmput(), because the oom reaper still holds
> > the reference.
> 
> Here is the "wait until all mm_structs have been reaped by the OOM reaper" version.
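
For reference, a minimal sketch of the mm lifetime behaviour described in the
quoted paragraph above. The helper names are hypothetical and this is not the
code from the patch under discussion; it only shows how an extra mm_users
reference taken with mmget() keeps __mmput() (and thus the real memory
teardown) from running until mmput_async() drops it, even after the victim
task has left the tasklist:

#include <linux/sched/mm.h>

/* Hypothetical helpers, for illustration only. */
static void pin_victim_mm(struct mm_struct *mm)
{
	/*
	 * Extra mm_users reference: even after the victim exits and drops
	 * its own reference, __mmput() (and with it the final unmap) cannot
	 * run until this pin is released.
	 */
	mmget(mm);
}

static void release_victim_mm(struct mm_struct *mm)
{
	/*
	 * Drop the pin from a context that must not block on the heavy
	 * __mmput() work; the teardown is deferred to a workqueue.
	 */
	mmput_async(mm);
}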

Well, this is getting more and more hairy. I think we should explore the
possibility of oom_reaper vs. exit_mmap working together after all.

Yes, I've said that a solution fully within the oom proper would be
preferable, but this just grows into a complex, hairy mess. Maybe we will
find out that oom_reaper vs. exit_mmap is simply not feasible and will
reconsider this approach in the end, but let's try a clean solution
first. As I've said, there is nothing fundamentally hard about parallel
unmapping; MADV_DONTNEED does that already. We just have to iron out
those tiny details.
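
As a rough sketch of that direction (assumptions only, not code from this
thread or from a merged kernel), the reaper would zap page ranges while
holding mmap_sem for read, using the same primitive madvise(MADV_DONTNEED)
uses on a live mm, and set MMF_OOM_SKIP so that later OOM invocations do not
pick new victims while exit_mmap() finishes the rest:

#include <linux/mm.h>
#include <linux/sched/coredump.h>
#include <linux/hugetlb_inline.h>

/* Hypothetical helper, for illustration only. */
static bool reap_victim_mm(struct mm_struct *mm)
{
	struct vm_area_struct *vma;

	/* exit_mmap() takes mmap_sem too; back off and retry if it owns it */
	if (!down_read_trylock(&mm->mmap_sem))
		return false;

	if (test_bit(MMF_OOM_SKIP, &mm->flags)) {
		/* someone (e.g. the exit path) already released the memory */
		up_read(&mm->mmap_sem);
		return true;
	}

	for (vma = mm->mmap; vma; vma = vma->vm_next) {
		if (is_vm_hugetlb_page(vma) || (vma->vm_flags & VM_LOCKED))
			continue;
		/* same unmapping MADV_DONTNEED performs on a live mm */
		zap_page_range(vma, vma->vm_start, vma->vm_end - vma->vm_start);
	}

	/* later OOM invocations can skip this mm instead of killing more tasks */
	set_bit(MMF_OOM_SKIP, &mm->flags);
	up_read(&mm->mmap_sem);
	return true;
}

The locking is the whole point of the sketch: as long as both the reaper and
exit_mmap() serialize on mmap_sem and agree on MMF_OOM_SKIP, the two can run
in either order without leaving the OOM killer blind.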
-- 
Michal Hocko
SUSE Labs

Thread overview:
2017-06-14 23:43 [patch] mm, oom: prevent additional oom kills before memory is freed David Rientjes
2017-06-15 10:39 ` Michal Hocko
2017-06-15 10:53   ` Tetsuo Handa
2017-06-15 11:01     ` Michal Hocko
2017-06-15 11:32       ` Tetsuo Handa
2017-06-15 12:03         ` Michal Hocko
2017-06-15 12:13           ` Michal Hocko
2017-06-15 13:01             ` Tetsuo Handa
2017-06-15 13:22               ` Michal Hocko
2017-06-15 21:43                 ` Tetsuo Handa
2017-06-15 21:37               ` David Rientjes
2017-06-15 12:20       ` Michal Hocko
2017-06-15 21:26   ` David Rientjes
2017-06-15 21:41     ` Michal Hocko
2017-06-15 22:03       ` David Rientjes
2017-06-15 22:12         ` Michal Hocko
2017-06-15 22:42           ` David Rientjes
2017-06-16  8:06             ` Michal Hocko
2017-06-16  0:54           ` Tetsuo Handa
2017-06-16  4:00             ` Tetsuo Handa
2017-06-16  8:39             ` Michal Hocko
2017-06-16 10:27               ` Tetsuo Handa
2017-06-16 11:02                 ` Michal Hocko
2017-06-16 14:26                   ` Re: [patch] mm, oom: prevent additional oom kills before memory is freed Tetsuo Handa
2017-06-16 14:42                     ` Michal Hocko
2017-06-17 13:30                       ` Re: [patch] mm, oom: prevent additional oom kills before memory is freed Tetsuo Handa
2017-06-23 12:38                         ` Michal Hocko
2017-06-16 12:22       ` Tetsuo Handa
2017-06-16 14:12         ` Michal Hocko
2017-06-17  5:17           ` [PATCH] mm,oom_kill: Close race window of needlessly selecting new victims Tetsuo Handa
2017-06-20 22:12             ` David Rientjes
2017-06-21  2:17               ` Tetsuo Handa
2017-06-21 20:31                 ` David Rientjes
2017-06-22  0:53                   ` Tetsuo Handa
2017-06-23 12:45                     ` Michal Hocko [this message]
2017-06-21 13:18               ` Michal Hocko
