To: rientjes@google.com, mpatocka@redhat.com
Cc: mhocko@kernel.org, okozina@redhat.com, jmarchan@redhat.com, skozina@redhat.com, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: System freezes after OOM
From: Tetsuo Handa
References: <20160713133955.GK28723@dhcp22.suse.cz> <20160713145638.GM28723@dhcp22.suse.cz>
Message-Id: <201607142001.BJD07258.SMOHFOJVtLFOQF@I-love.SAKURA.ne.jp>
Date: Thu, 14 Jul 2016 20:01:27 +0900

Michal Hocko wrote:
> OK, this is the part I have missed. I didn't realize that the swapout
> path, which is indeed PF_MEMALLOC, can get down to blk code which uses
> mempools. A quick code traversal shows that at least
>
>   make_request_fn = blk_queue_bio
>     blk_queue_bio
>       get_request
>         __get_request
>
> might do that. And in that case I agree that the above mentioned patch
> has unintentional side effects and should be re-evaluated. David, what
> do you think? An obvious fixup would be considering TIF_MEMDIE in
> mempool_alloc explicitly.

TIF_MEMDIE is racy. Since the OOM killer sets TIF_MEMDIE on only one
thread, there is no guarantee that TIF_MEMDIE is set on the thread which
is looping inside mempool_alloc(). And since __GFP_NORETRY is used
(regardless of commit f9054c70d28bc214), out_of_memory() is not called
via __alloc_pages_may_oom().
This means that the thread looping inside mempool_alloc() cannot get
TIF_MEMDIE by itself; it gets TIF_MEMDIE only if the OOM killer, invoked
from some other allocation, happens to select it. Maybe we should set
__GFP_NOMEMALLOC by default in mempool_alloc() and clear it in
mempool_alloc() when fatal_signal_pending() is true? But that behavior
can cause somebody else to be OOM-killed when current was not OOM-killed.
Sigh...

David Rientjes wrote:
> On Wed, 13 Jul 2016, Mikulas Patocka wrote:
>
> > What are the real problems that f9054c70d28bc214b2857cf8db8269f4f45a5e23
> > tries to fix?
>
> It prevents the whole system from livelocking due to an oom killed process
> stalling forever waiting for mempool_alloc() to return. No other threads
> may be oom killed while waiting for it to exit.

Is that concern still valid? We have the OOM reaper for the CONFIG_MMU=y
case.