Message-Id: <202003120012.02C0CEUB043533@www262.sakura.ne.jp>
Subject: Re: [patch] mm, oom: prevent soft lockup on memcg oom for UP systems
From: Tetsuo Handa <penguin-kernel@i-love.sakura.ne.jp>
To: David Rientjes
Cc: Andrew Morton, Vlastimil Babka, Michal Hocko, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Date: Thu, 12 Mar 2020 09:12:14 +0900
References: <993e7783-60e9-ba03-b512-c829b9e833fd@i-love.sakura.ne.jp>

> On Thu, 12 Mar 2020, Tetsuo Handa wrote:
> 
> > > If you have an alternate patch to try, we can test it. But since this
> > > cond_resched() is needed anyway, I'm not sure it will change the result.
> > 
> > schedule_timeout_killable(1) is an alternate patch to try; I don't think
> > that this cond_resched() is needed anyway.
> > 
> 
> You are suggesting schedule_timeout_killable(1) in shrink_node_memcgs()?
> 

Andrew Morton also asked whether cond_resched() in shrink_node_memcgs() is
enough. But, as you mentioned,

David Rientjes wrote:
> On Tue, 10 Mar 2020, Andrew Morton wrote:
> 
> > > --- a/mm/vmscan.c
> > > +++ b/mm/vmscan.c
> > > @@ -2637,6 +2637,8 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
> > >  		unsigned long reclaimed;
> > >  		unsigned long scanned;
> > > 
> > > +		cond_resched();
> > > +
> > >  		switch (mem_cgroup_protected(target_memcg, memcg)) {
> > >  		case MEMCG_PROT_MIN:
> > >  			/*
> > 
> > Obviously better, but this will still spin wheels until this tasks's
> > timeslice expires, and we might want to do something to help ensure
> > that the victim runs next (or soon)?
> > 
> 
> We used to have a schedule_timeout_killable(1) to address exactly that
> scenario but it was removed in 4.19:
> 
> commit 9bfe5ded054b8e28a94c78580f233d6879a00146
> Author: Michal Hocko
> Date:   Fri Aug 17 15:49:04 2018 -0700
> 
>     mm, oom: remove sleep from under oom_lock

you can try re-adding the sleep outside of oom_lock:

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index d09776cd6e10..3aee7e0eca4e 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -1576,6 +1576,7 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
 	 */
 	ret = should_force_charge() || out_of_memory(&oc);
 	mutex_unlock(&oom_lock);
+	schedule_timeout_killable(1);
 	return ret;
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 3c4eb750a199..e80158049651 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -3797,7 +3797,6 @@ __alloc_pages_may_oom(gfp_t gfp_mask, unsigned int order,
 	 */
 	if (!mutex_trylock(&oom_lock)) {
 		*did_some_progress = 1;
-		schedule_timeout_uninterruptible(1);
 		return NULL;
 	}
 
@@ -4590,6 +4589,7 @@ __alloc_pages_slowpath(gfp_t gfp_mask, unsigned int order,
 
 	/* Retry as long as the OOM killer is making progress */
 	if (did_some_progress) {
+		schedule_timeout_uninterruptible(1);
 		no_progress_loops = 0;
 		goto retry;
 	}

By the way, will you share the reproducer (and how to use it)?
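
For reference, here is a rough sketch (not a patch) of how
mem_cgroup_out_of_memory() would read with the memcontrol.c hunk above
applied. Everything outside the quoted context lines is paraphrased from the
kernel source of that era and may not match your tree exactly:

static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask,
				     int order)
{
	struct oom_control oc = {
		.zonelist = NULL,
		.memcg = memcg,
		.gfp_mask = gfp_mask,
		.order = order,
	};
	bool ret;

	if (mutex_lock_killable(&oom_lock))
		return true;
	/*
	 * A few threads which were not waiting at mutex_lock_killable() can
	 * fail to bail out. Therefore, check again after holding oom_lock.
	 */
	ret = should_force_charge() || out_of_memory(&oc);
	mutex_unlock(&oom_lock);
	/*
	 * Sketch only: the one-jiffy sleep now happens after oom_lock is
	 * dropped, so other oom_lock waiters (and the OOM victim) are not
	 * serialized behind a sleeping lock holder.
	 */
	schedule_timeout_killable(1);
	return ret;
}

The point of the ordering is that the sleep no longer extends the oom_lock
hold time; waiters only wait for out_of_memory() itself, while the charging
thread still yields the CPU so that the victim can make progress.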