From: David Rientjes <rientjes@google.com>
To: Michal Hocko <mhocko@kernel.org>
Cc: Mikulas Patocka <mpatocka@redhat.com>,
	Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>,
	Ondrej Kozina <okozina@redhat.com>,
	Jerome Marchand <jmarchan@redhat.com>,
	Stanislav Kozina <skozina@redhat.com>,
	linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: System freezes after OOM
Date: Thu, 14 Jul 2016 13:38:42 -0700 (PDT)	[thread overview]
Message-ID: <alpine.DEB.2.10.1607141326500.68666@chino.kir.corp.google.com> (raw)
In-Reply-To: <20160714152913.GC12289@dhcp22.suse.cz>

On Thu, 14 Jul 2016, Michal Hocko wrote:

> > It prevents the whole system from livelocking due to an oom killed process 
> > stalling forever waiting for mempool_alloc() to return.  No other threads 
> > may be oom killed while waiting for it to exit.
> 
> But it is true that the patch has an unintended side effect for any
> mempool allocation from the reclaim path (aka PF_MEMALLOC context).

If a PF_MEMALLOC context is allocating too much from the memory reserves, 
then I'd argue that is a problem independent of using mempool_alloc(), 
since mempool_alloc() can fall through directly to a call to the page 
allocator.  How does such a process guarantee that it cannot deplete the 
memory reserves with a plain call to the page allocator?  Since nothing in 
the page allocator prevents complete depletion of the reserves (it simply 
uses ALLOC_NO_WATERMARKS), the caller in PF_MEMALLOC context must be 
responsible.
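
For reference, the page allocator's decision at the time looked roughly 
like the sketch below (a simplified paraphrase of gfp_to_alloc_flags() in 
mm/page_alloc.c, not a verbatim excerpt): a PF_MEMALLOC or TIF_MEMDIE 
caller is granted ALLOC_NO_WATERMARKS unless it explicitly passes 
__GFP_NOMEMALLOC.

	/* Simplified sketch, not the exact kernel code. */
	static int gfp_to_alloc_flags(gfp_t gfp_mask)
	{
		int alloc_flags = ALLOC_WMARK_MIN | ALLOC_CPUSET;

		if (likely(!(gfp_mask & __GFP_NOMEMALLOC))) {
			if (gfp_mask & __GFP_MEMALLOC)
				alloc_flags |= ALLOC_NO_WATERMARKS;
			else if (!in_interrupt() &&
				 ((current->flags & PF_MEMALLOC) ||
				  unlikely(test_thread_flag(TIF_MEMDIE))))
				alloc_flags |= ALLOC_NO_WATERMARKS;
		}

		return alloc_flags;
	}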

> So do you
> think we should rework your additional patch to be explicit about
> TIF_MEMDIE?

I'm not sure which additional patch you're referring to; the only patch I 
proposed was commit f9054c70d28b, which stopped hundreds of machines from 
timing out.

> Something like the following (not even compile tested for
> illustration). Tetsuo has properly pointed out that this doesn't work
> for multithreaded processes reliably, but put that aside for now as that
> needs a fix on a different layer. I believe we can fix that quite
> easily after recent/planned changes.
> ---
> diff --git a/mm/mempool.c b/mm/mempool.c
> index 8f65464da5de..ea26d75c8adf 100644
> --- a/mm/mempool.c
> +++ b/mm/mempool.c
> @@ -322,20 +322,20 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
>  
>  	might_sleep_if(gfp_mask & __GFP_DIRECT_RECLAIM);
>  
> +	gfp_mask |= __GFP_NOMEMALLOC;   /* don't allocate emergency reserves */
>  	gfp_mask |= __GFP_NORETRY;	/* don't loop in __alloc_pages */
>  	gfp_mask |= __GFP_NOWARN;	/* failures are OK */
>  
>  	gfp_temp = gfp_mask & ~(__GFP_DIRECT_RECLAIM|__GFP_IO);
>  
>  repeat_alloc:
> -	if (likely(pool->curr_nr)) {
> -		/*
> -		 * Don't allocate from emergency reserves if there are
> -		 * elements available.  This check is racy, but it will
> -		 * be rechecked each loop.
> -		 */
> -		gfp_temp |= __GFP_NOMEMALLOC;
> -	}
> +	/*
> +	 * Make sure that the OOM victim will get access to memory reserves
> +	 * properly if there are no objects in the pool to prevent from
> +	 * livelocks.
> +	 */
> +	if (!likely(pool->curr_nr) && test_thread_flag(TIF_MEMDIE))
> +		gfp_temp &= ~__GFP_NOMEMALLOC;
>  
>  	element = pool->alloc(gfp_temp, pool->pool_data);
>  	if (likely(element != NULL))
> @@ -359,7 +359,7 @@ void *mempool_alloc(mempool_t *pool, gfp_t gfp_mask)
>  	 * We use gfp mask w/o direct reclaim or IO for the first round.  If
>  	 * alloc failed with that and @pool was empty, retry immediately.
>  	 */
> -	if ((gfp_temp & ~__GFP_NOMEMALLOC) != gfp_mask) {
> +	if ((gfp_temp & __GFP_DIRECT_RECLAIM) != (gfp_mask & __GFP_DIRECT_RECLAIM)) {
>  		spin_unlock_irqrestore(&pool->lock, flags);
>  		gfp_temp = gfp_mask;
>  		goto repeat_alloc;

This is bogus and quite obviously leads to oom livelock: if a process 
holding a mutex calls mempool_alloc(), it can stall there under oom when 
there are no elements on the mempool freelist, since __GFP_DIRECT_RECLAIM 
is allowed in process context for mempool allocation.  If the oom victim 
then contends the same mutex, the system livelocks: the victim cannot 
exit, the holder of the mutex loops forever, and the same bug arises.  
This is exactly the behavior that f9054c70d28b also fixes.
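
To make the scenario concrete, here is a hypothetical driver path (names 
and structure invented purely for illustration) that hits the livelock 
with the patch above applied:

	static DEFINE_MUTEX(io_lock);	/* shared by all writers */
	static mempool_t *io_pool;	/* pool of request structures */

	static void submit_io(void)
	{
		void *req;

		mutex_lock(&io_lock);
		/*
		 * The pool is empty and the system is oom.  With the patch
		 * above, __GFP_NOMEMALLOC stays forced for this task (it is
		 * not the oom victim), so the page allocator keeps failing
		 * and mempool_alloc() loops here forever waiting for an
		 * element to be returned to the pool.
		 */
		req = mempool_alloc(io_pool, GFP_NOIO);
		/* ... build and submit the request, then ... */
		mempool_free(req, io_pool);
		mutex_unlock(&io_lock);
	}

	/*
	 * Meanwhile the oom victim, trying to exit, needs the same mutex,
	 * e.g. to flush its outstanding I/O:
	 *
	 *	mutex_lock(&io_lock);	<- blocks forever behind submit_io()
	 *
	 * The victim cannot exit, no other task may be oom killed while it
	 * is pending, and the system livelocks.
	 */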

These aren't hypothetical situations; the patch stopped hundreds of 
machines from regularly timing out.  The fundamental point is that 
mempool_alloc() must not loop forever in process context: forward progress 
is required both when the allocating process is itself the oom victim and 
when the oom victim is blocked behind the allocating process.  
mempool_alloc() must guarantee forward progress in either case.

The end result is that allocators running in PF_MEMALLOC context must be 
responsible and not deplete all of the memory reserves.
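
Concretely, a reclaim-path caller can opt out of the reserves itself; a 
minimal sketch, assuming a hypothetical helper that runs with PF_MEMALLOC 
set:

	static void *reclaim_path_alloc(mempool_t *pool)
	{
		/*
		 * PF_MEMALLOC would normally let this allocation bypass the
		 * watermarks entirely.  Passing __GFP_NOMEMALLOC keeps it
		 * out of the emergency reserves; the preallocated mempool
		 * elements remain the guarantee of forward progress.
		 */
		return mempool_alloc(pool, GFP_NOIO | __GFP_NOMEMALLOC);
	}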
