From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path:
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752712AbdLAN0k (ORCPT );
	Fri, 1 Dec 2017 08:26:40 -0500
Received: from www262.sakura.ne.jp ([202.181.97.72]:29293 "EHLO
	www262.sakura.ne.jp" rhost-flags-OK-OK-OK-OK) by vger.kernel.org
	with ESMTP id S1752042AbdLAN0i (ORCPT );
	Fri, 1 Dec 2017 08:26:38 -0500
To: mhocko@kernel.org, guro@fb.com
Cc: linux-mm@vger.kernel.org, vdavydov.dev@gmail.com, hannes@cmpxchg.org,
	rientjes@google.com, akpm@linux-foundation.org, tj@kernel.org,
	kernel-team@fb.com, cgroups@vger.kernel.org, linux-doc@vger.kernel.org,
	linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [PATCH] mm, oom: simplify alloc_pages_before_oomkill handling
From: Tetsuo Handa
References: <20171130152824.1591-1-guro@fb.com>
	<20171201091425.ekrpxsmkwcusozua@dhcp22.suse.cz>
In-Reply-To: <20171201091425.ekrpxsmkwcusozua@dhcp22.suse.cz>
Message-Id: <201712012226.JAC87573.FLtJVOOOFFSMQH@I-love.SAKURA.ne.jp>
X-Mailer: Winbiff [Version 2.51 PL2]
X-Accept-Language: ja,en,zh
Date: Fri, 1 Dec 2017 22:26:23 +0900
Mime-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Sender: linux-kernel-owner@vger.kernel.org
List-ID: <linux-kernel.vger.kernel.org>
X-Mailing-List: linux-kernel@vger.kernel.org

Michal Hocko wrote:
> Recently added alloc_pages_before_oomkill gained new caller with this
> patchset and I think it just grown to deserve a simpler code flow.
> What do you think about this on top of the series?

I'm planning to post the patch below in order to mitigate an OOM lockup
problem caused by scheduling priority. But I'm OK with your patch, because
your patch will not conflict with the patch below.
----------
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index b2746a7..ef6e951 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3332,13 +3332,14 @@ void warn_alloc(gfp_t gfp_mask, nodemask_t *nodemask, const char *fmt, ...)
 	*did_some_progress = 0;
 
 	/*
-	 * Acquire the oom lock.  If that fails, somebody else is
-	 * making progress for us.
+	 * Acquire the oom lock. If that fails, give enough CPU time to the
+	 * owner of the oom lock in order to help reclaiming memory.
 	 */
-	if (!mutex_trylock(&oom_lock)) {
-		*did_some_progress = 1;
+	while (!mutex_trylock(&oom_lock)) {
+		page = alloc_pages_before_oomkill(oc);
+		if (page)
+			return page;
 		schedule_timeout_uninterruptible(1);
-		return NULL;
 	}
 
 	/* Coredumps can quickly deplete all memory reserves */
----------