Date: Wed, 18 Mar 2020 14:40:45 -0700 (PDT)
From: David Rientjes
To: Michal Hocko
Cc: Andrew Morton, Tetsuo Handa, Vlastimil Babka, Robert Kolchmeyer, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch v2] mm, oom: prevent soft lockup on memcg oom for UP systems
In-Reply-To: <20200318094219.GE21362@dhcp22.suse.cz>
References: <8395df04-9b7a-0084-4bb5-e430efe18b97@i-love.sakura.ne.jp> <202003170318.02H3IpSx047471@www262.sakura.ne.jp> <20200318094219.GE21362@dhcp22.suse.cz>

On Wed, 18 Mar 2020, Michal Hocko wrote:

> > When a process is oom killed as a result of memcg limits and the victim
> > is waiting to exit, nothing ends up actually yielding the processor back
> > to the victim on UP systems with preemption disabled. Instead, the
> > charging process simply loops in memcg reclaim and eventually soft
> > lockups.
> 
> It seems that my request to describe the setup got ignored. Sigh.
> 
> > Memory cgroup out of memory: Killed process 808 (repro) total-vm:41944kB,
> > anon-rss:35344kB, file-rss:504kB, shmem-rss:0kB, UID:0 pgtables:108kB
> > oom_score_adj:0
> > watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [repro:806]
> > CPU: 0 PID: 806 Comm: repro Not tainted 5.6.0-rc5+ #136
> > RIP: 0010:shrink_lruvec+0x4e9/0xa40
> > ...
> > Call Trace:
> >  shrink_node+0x40d/0x7d0
> >  do_try_to_free_pages+0x13f/0x470
> >  try_to_free_mem_cgroup_pages+0x16d/0x230
> >  try_charge+0x247/0xac0
> >  mem_cgroup_try_charge+0x10a/0x220
> >  mem_cgroup_try_charge_delay+0x1e/0x40
> >  handle_mm_fault+0xdf2/0x15f0
> >  do_user_addr_fault+0x21f/0x420
> >  page_fault+0x2f/0x40
> > 
> > Make sure that once the oom killer has been called, we forcibly yield
> > if current is not the chosen victim, regardless of priority, to allow for
> > memory freeing. The same situation can theoretically occur in the page
> > allocator, so do this after dropping oom_lock there as well.
> 
> I would have preferred the cond_resched solution proposed previously but
> I can live with this as well. I would just ask to add more information
> to the changelog. E.g.

I'm still planning on sending the cond_resched() change as well, but not 
advertised as a fix for this particular issue, per Tetsuo's feedback. I 
think the reported issue showed it's possible to excessively loop in 
reclaim without a conditional yield, depending on various memcg configs, 
and the shrink_node_memcgs() cond_resched() is still appropriate both for 
interactivity and because the iteration of memcgs can be particularly 
long.

> "
> We used to have a short sleep after the oom handling but 9bfe5ded054b
> ("mm, oom: remove sleep from under oom_lock") has removed it because
> sleep inside the oom_lock is dangerous. This patch restores the sleep
> outside of the lock.

Will do.
> " > > Suggested-by: Tetsuo Handa > > Tested-by: Robert Kolchmeyer > > Cc: stable@vger.kernel.org > > Signed-off-by: David Rientjes > > --- > > mm/memcontrol.c | 2 ++ > > mm/page_alloc.c | 2 ++ > > 2 files changed, 4 insertions(+) > > > > diff --git a/mm/memcontrol.c b/mm/memcontrol.c > > --- a/mm/memcontrol.c > > +++ b/mm/memcontrol.c > > @@ -1576,6 +1576,8 @@ static bool mem_cgroup_out_of_memory(struct mem_cgroup *memcg, gfp_t gfp_mask, > > */ > > ret = should_force_charge() || out_of_memory(&oc); > > mutex_unlock(&oom_lock); > > + if (!fatal_signal_pending(current)) > > + schedule_timeout_killable(1); > > Check for fatal_signal_pending is redundant. > > -- > Michal Hocko > SUSE Labs >