From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: mhocko@kernel.org
Cc: linux-mm@kvack.org, rientjes@google.com, akpm@linux-foundation.org
Subject: Re: [PATCH 3/3] mm, oom_reaper: clear TIF_MEMDIE for all tasks queued for oom_reaper
Date: Wed, 20 Apr 2016 00:07:50 +0900	[thread overview]
Message-ID: <201604200007.IFD52169.FLSOOVQHJOFFtM@I-love.SAKURA.ne.jp> (raw)
In-Reply-To: <20160419141722.GB4126@dhcp22.suse.cz>

Michal Hocko wrote:
> On Mon 18-04-16 20:59:51, Tetsuo Handa wrote:
> > Michal Hocko wrote:
> > > Here is what should work - I have only compile tested it. I will prepare
> > > the proper patch later this week with other oom reaper patches or after
> > > I come back from LSF/MM.
> > 
> > Excuse me, but is system_wq suitable for queuing operations which may take
> > unpredictable duration to flush?
> > 
> >   system_wq is the one used by schedule[_delayed]_work[_on]().
> >   Multi-CPU multi-threaded.  There are users which expect relatively
> >   short queue flush time.  Don't queue works which can run for too
> >   long.
> 
> An alternative would be using a dedicated WQ with WQ_MEM_RECLAIM, which I
> am not really sure would be justified considering we are talking about a
> highly unlikely event. You do not want to consume resources permanently
> for a possible but non-fatal event.

Yes, and that is presumably also the reason why SysRq-f still does not use a
dedicated WQ with WQ_MEM_RECLAIM.
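The trade-off being discussed can be made concrete. A dedicated reclaim-safe
workqueue, the alternative Michal describes and rejects on cost grounds, would
look roughly like the sketch below. This is illustrative only, not the actual
oom_reaper implementation; the queue name, work item, and handler are
hypothetical:

```c
/* Sketch only: a WQ_MEM_RECLAIM workqueue keeps a dedicated rescuer
 * kthread, so queued work can make forward progress even when the
 * system is too low on memory to spawn new kworker threads. */
#include <linux/init.h>
#include <linux/workqueue.h>

static struct workqueue_struct *oom_reaper_wq;  /* hypothetical name */
static struct work_struct reap_work;            /* hypothetical work item */

static void reap_fn(struct work_struct *work)
{
	/* potentially long-running victim teardown would go here */
}

static int __init reaper_init(void)
{
	/* WQ_MEM_RECLAIM permanently pins a rescuer kthread -- the
	 * permanent resource consumption Michal objects to paying
	 * for a highly unlikely event. */
	oom_reaper_wq = alloc_workqueue("oom_reaper", WQ_MEM_RECLAIM, 0);
	if (!oom_reaper_wq)
		return -ENOMEM;
	INIT_WORK(&reap_work, reap_fn);
	queue_work(oom_reaper_wq, &reap_work);
	return 0;
}
```

By contrast, schedule_work(&reap_work) would place the item on the shared
system_wq, which the workqueue documentation quoted above says should not be
given work that can run for too long.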

> 
> > We haven't guaranteed that SysRq-f can always fire and select a different
> > OOM victim, but you proposed always clearing TIF_MEMDIE without considering
> > the possibility that the OOM victim holding mmap_sem for write is stuck in
> > an unkillable wait.
> > 
> > I wonder about your definition of "robustness". You almost always miss the
> > worst-case scenario. You are trying to manage OOM like a switch statement
> > with no default: label. I don't think your approach is robust.
> 
> I am trying to be as robust as it is viable. You have to realize we are
> in the catastrophic path already and there is simply no deterministic
> way out.

I know we are talking about a catastrophic situation. We are struggling so
much precisely because you insist on a deterministic approach.
If you tolerate the
http://lkml.kernel.org/r/201604152111.JBD95763.LMFOOHQOtFSFJV@I-love.SAKURA.ne.jp
approach as the fastpath (deterministic but can fail) and the
http://lkml.kernel.org/r/201604200006.FBG45192.SOHFQJFOOLFMtV@I-love.SAKURA.ne.jp
approach as the slowpath (non-deterministic but never fails), we do not need
a dedicated WQ with WQ_MEM_RECLAIM to avoid either this mmput() trap or the
SysRq-f trap. What a simple answer. ;-)



Thread overview: 40+ messages
2016-04-06 14:13 [PATCH 0/3] oom reaper follow ups v1 Michal Hocko
2016-04-06 14:13 ` [PATCH 1/3] mm, oom: move GFP_NOFS check to out_of_memory Michal Hocko
2016-04-06 14:13 ` [PATCH 2/3] oom, oom_reaper: Try to reap tasks which skip regular OOM killer path Michal Hocko
2016-04-07 11:38   ` Tetsuo Handa
2016-04-08 11:19     ` Tetsuo Handa
2016-04-08 11:50       ` Michal Hocko
2016-04-09  4:39         ` Tetsuo Handa
2016-04-11 12:02           ` Michal Hocko
2016-04-11 13:26             ` Tetsuo Handa
2016-04-11 13:43               ` Michal Hocko
2016-04-13 11:08                 ` Tetsuo Handa
2016-04-08 11:34     ` Michal Hocko
2016-04-08 13:14   ` Michal Hocko
2016-04-06 14:13 ` [PATCH 3/3] mm, oom_reaper: clear TIF_MEMDIE for all tasks queued for oom_reaper Michal Hocko
2016-04-07 11:55   ` Tetsuo Handa
2016-04-08 11:34     ` Michal Hocko
2016-04-16  2:51       ` Tetsuo Handa
2016-04-17 11:54         ` Michal Hocko
2016-04-18 11:59           ` Tetsuo Handa
2016-04-19 14:17             ` Michal Hocko
2016-04-19 15:07               ` Tetsuo Handa [this message]
2016-04-19 19:32                 ` Michal Hocko
2016-04-08 13:07   ` Michal Hocko
