From: Michal Hocko <mhocko@kernel.org>
To: Tetsuo Handa <penguin-kernel@I-love.SAKURA.ne.jp>
Cc: aarcange@redhat.com, akpm@linux-foundation.org,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	rientjes@google.com, hannes@cmpxchg.org,
	mjaggi@caviumnetworks.com, mgorman@suse.de, oleg@redhat.com,
	vdavydov.dev@gmail.com, vbabka@suse.cz
Subject: Re: [PATCH] mm,oom: Try last second allocation before and after selecting an OOM victim.
Date: Tue, 31 Oct 2017 13:48:55 +0100	[thread overview]
Message-ID: <20171031124855.rszis5gefbxwriiz@dhcp22.suse.cz> (raw)
In-Reply-To: <201710312142.DBB81723.FOOFJMQLStFVOH@I-love.SAKURA.ne.jp>

On Tue 31-10-17 21:42:23, Tetsuo Handa wrote:
> Michal Hocko wrote:
> > On Tue 31-10-17 19:40:09, Tetsuo Handa wrote:
> > > The reason I used __alloc_pages_slowpath() in alloc_pages_before_oomkill() is
> > > to avoid duplicating code (such as checking for ALLOC_OOM and rebuilding zone
> > > list) which needs to be maintained in sync with __alloc_pages_slowpath().
> > >
> > > If you don't like calling __alloc_pages_slowpath() from
> > > alloc_pages_before_oomkill(), I'm OK with calling __alloc_pages_nodemask()
> > > (with __GFP_DIRECT_RECLAIM/__GFP_NOFAIL cleared and __GFP_NOWARN set), for
> > > direct reclaim functions can call __alloc_pages_nodemask() (with PF_MEMALLOC
> > > set in order to avoid recursion of direct reclaim).
> > > 
> > > We are rebuilding zone list if selected as an OOM victim, for
> > > __gfp_pfmemalloc_flags() returns ALLOC_OOM if oom_reserves_allowed(current)
> > > is true.
> > 
> > So your answer is copy&paste without a deeper understanding, right?
> 
> Right. I wanted to avoid duplicating code.
> But I had to duplicate in order to allow OOM victims to try ALLOC_OOM.

I absolutely hate this cargo cult programming!
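
For readers of the archive, a minimal sketch of the adjustment described
above: a last-second attempt with __GFP_DIRECT_RECLAIM and __GFP_NOFAIL
cleared and __GFP_NOWARN set, so it cannot recurse into reclaim, cannot
insist on success, and stays quiet on failure. The helper name comes from
the thread; the use of get_page_from_freelist() and the alloc_context
parameter are illustrative assumptions, not the posted patch:

/* Sketch only, not the posted patch. */
static struct page *alloc_pages_before_oomkill(const struct alloc_context *ac,
					       gfp_t gfp_mask, unsigned int order)
{
	gfp_t gfp = gfp_mask;

	/* No recursion into direct reclaim, no endless __GFP_NOFAIL retry. */
	gfp &= ~(__GFP_DIRECT_RECLAIM | __GFP_NOFAIL);
	/* A failure of this opportunistic attempt is expected and handled. */
	gfp |= __GFP_NOWARN;

	/* One more pass over the freelists before going for a kill. */
	return get_page_from_freelist(gfp, order,
				      ALLOC_WMARK_HIGH | ALLOC_CPUSET, ac);
}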

[...]

> > While both have some merit, the first reason is mostly historical
> > because we have the explicit locking now and it is really unlikely that
> > the memory would be available right after we have given up trying.
> > Last attempt allocation makes some sense of course but considering that
> > the oom victim selection is quite an expensive operation which can take
> > a considerable amount of time it makes much more sense to retry the
> > allocation after the most expensive part rather than before. Therefore
> > move the last attempt right before we are trying to kill an oom victim
> > to rule out potential races when somebody could have freed a lot of memory
> > in the meantime. This will reduce the time window for potentially
> > premature OOM killing considerably.
> 
> But this is about "doing last second allocation attempt after selecting
> an OOM victim". This is not about "allowing OOM victims to try ALLOC_OOM
> before selecting next OOM victim" which is the actual problem I'm trying
> to deal with.

Then split it into two. First make the general case work and then add the
more sophisticated handling on top. Dealing with multiple issues at once is
what makes all those brain cells suffer.
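
A rough sketch of the ordering argued for here: run the expensive victim
scan first, retry the allocation once more right before killing, and let a
task that is already an OOM victim use the OOM reserves (ALLOC_OOM) so it
can exit instead of forcing another kill. The helper name and the
single-function hook point are assumptions for illustration; in the real
tree these pieces are split between mm/oom_kill.c and mm/page_alloc.c, and
this is not the posted patch:

/* Sketch only: last-second attempt placed after victim selection. */
static bool oom_kill_with_last_attempt(struct oom_control *oc,
				       const struct alloc_context *ac,
				       struct page **page)
{
	int alloc_flags = ALLOC_WMARK_HIGH | ALLOC_CPUSET;

	select_bad_process(oc);		/* expensive: scans all eligible tasks */
	if (!oc->chosen)
		return false;		/* nothing left to kill */

	/* A task that is already an OOM victim may use the OOM reserves. */
	if (oom_reserves_allowed(current))
		alloc_flags = ALLOC_OOM;

	/* Retry after the expensive part; memory may have been freed. */
	*page = get_page_from_freelist(oc->gfp_mask | __GFP_NOWARN,
				       oc->order, alloc_flags, ac);
	if (*page)
		return true;		/* no kill needed after all */

	oom_kill_process(oc, "Out of memory");
	return true;
}
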
-- 
Michal Hocko
SUSE Labs

Thread overview: 30+ messages

2017-10-28  8:07 [PATCH] mm,oom: Try last second allocation before and after selecting an OOM victim Tetsuo Handa
2017-10-30 14:18 ` Michal Hocko
2017-10-31 10:40   ` Tetsuo Handa
2017-10-31 12:10     ` Michal Hocko
2017-10-31 12:42       ` Tetsuo Handa
2017-10-31 12:48         ` Michal Hocko [this message]
2017-10-31 13:13           ` Tetsuo Handa
2017-10-31 13:22             ` Michal Hocko
2017-10-31 13:51               ` Tetsuo Handa
2017-10-31 14:10                 ` Michal Hocko
2017-11-01 11:58                   ` Tetsuo Handa
2017-11-01 12:46                     ` Michal Hocko
2017-11-01 14:38                       ` Tetsuo Handa
2017-11-01 14:48                         ` Michal Hocko
2017-11-01 15:37                           ` Tetsuo Handa
