From: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
To: mhocko@kernel.org
Cc: linux-mm@kvack.org, hannes@cmpxchg.org, rientjes@google.com,
	linux-kernel@vger.kernel.org
Subject: Re: [PATCH v2] mm/page_alloc: Wait for oom_lock before retrying.
Date: Tue, 18 Jul 2017 06:42:31 +0900
Message-ID: <201707180642.IHF86993.OFMLVOOFJQHSFt@I-love.SAKURA.ne.jp>
In-Reply-To: <20170717152440.GM12888@dhcp22.suse.cz>

Michal Hocko wrote:
> On Sun 16-07-17 19:59:51, Tetsuo Handa wrote:
> > Since the memory reclaim path was never designed to handle scheduling
> > priority inversions, any location that assumes some code path will
> > eventually complete without using synchronization mechanisms can get
> > stuck (livelock) due to a scheduling priority inversion, because CPU
> > time is not guaranteed to be yielded to the thread executing that
> > code path.
> > 
> > The combination of mutex_trylock() in __alloc_pages_may_oom() (waiting
> > for oom_lock) and schedule_timeout_killable(1) in out_of_memory()
> > (with oom_lock already held) is one such location, and it was
> > demonstrated using artificial stressing that the system gets stuck
> > effectively forever, because a SCHED_IDLE priority thread is unable
> > to resume execution at schedule_timeout_killable(1) while a lot of
> > !SCHED_IDLE priority threads are wasting CPU time [1].
> > 
> > To solve this problem properly, a complete redesign and rewrite of the
> > whole memory reclaim path will be needed. But we are not going to
> > reimplement the whole stack (at least for the foreseeable future).
> > 
> > Thus, this patch works around the livelock by using the
> > mutex_lock_killable() mechanism to forcibly yield enough CPU time to
> > the thread holding oom_lock, so that the OOM killer/reaper can use
> > the yielded CPU time. Of course, this patch does not help if the lack
> > of CPU time is caused elsewhere (e.g. a CPU-intensive computation
> > running at very high scheduling priority), but that is not the fault
> > of this patch. This patch only manages to avoid the lockup when the
> > lack of CPU time is caused by a direct reclaim storm that wastes CPU
> > time without making any progress while waiting for oom_lock.
> 
> I have to think about this some more. Hitting the oom_lock much more
> heavily is a problem while __oom_reap_task_mm still depends on the
> oom_lock. With
> http://lkml.kernel.org/r/20170626130346.26314-1-mhocko@kernel.org it
> no longer does.

In reply to that post, I suggested preserving oom_lock serialization
when setting MMF_OOM_SKIP (unless we use some trick that forces a call
to get_page_from_freelist() after confirming that there is no
!MMF_OOM_SKIP mm).
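
A rough sketch of what I mean by that serialization (the helper name is
hypothetical and not from any posted patch; the point is only the
locking order):

	/*
	 * Hypothetical helper: set MMF_OOM_SKIP under oom_lock so that
	 * the task holding oom_lock can make one final
	 * get_page_from_freelist() attempt without racing with this
	 * flag being set.
	 */
	static void oom_set_mm_skip(struct mm_struct *mm)
	{
		mutex_lock(&oom_lock);
		set_bit(MMF_OOM_SKIP, &mm->flags);
		mutex_unlock(&oom_lock);
	}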

> 
> Also, this whole reasoning is a little bit dubious to me. The whole
> reclaim stack might still preempt the holder of the lock, so you are
> addressing only a very specific contention case where everybody hits
> the OOM path. I suspect that a differently constructed test case might
> result in the same problem.

I think that direct reclaim/compaction is the primary source of CPU
time consumption, because there is nothing left to do other than
get_page_from_freelist() and schedule_timeout_uninterruptible() while
we are waiting for somebody else to make progress using the OOM killer.
Thus, if we wait using mutex_lock_killable(), direct reclaim/compaction
will not be called (i.e. the rest of the reclaim stack will not preempt
the holder of oom_lock) once each allocating thread has failed to
acquire oom_lock.
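
For reference, a condensed sketch of the idea in __alloc_pages_may_oom()
(simplified from the patch under discussion; surrounding logic omitted):

	/*
	 * Before: when oom_lock is contended, sleep for one jiffy and
	 * go back into direct reclaim, burning CPU time that the
	 * oom_lock holder may need in order to make progress.
	 */
	if (!mutex_trylock(&oom_lock)) {
		*did_some_progress = 1;
		schedule_timeout_uninterruptible(1);
		return NULL;
	}

	/*
	 * After: sleep on oom_lock itself, yielding CPU time to the
	 * holder; bail out if this waiter is killed while sleeping.
	 */
	if (mutex_lock_killable(&oom_lock)) {
		*did_some_progress = 1;
		return NULL;
	}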

