Date: Wed, 11 Mar 2020 12:45:40 -0700 (PDT)
From: David Rientjes
To: Michal Hocko
Cc: Andrew Morton, Vlastimil Babka, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch] mm, oom: prevent soft lockup on memcg oom for UP systems
In-Reply-To: <20200311082736.GA23944@dhcp22.suse.cz>
References: <20200310221019.GE8447@dhcp22.suse.cz> <20200311082736.GA23944@dhcp22.suse.cz>

On Wed, 11 Mar 2020, Michal Hocko wrote:

> > > > When a process is oom killed as a result of memcg limits and the victim
> > > > is waiting to exit, nothing ends up actually yielding the processor back
> > > > to the victim on UP systems with preemption disabled.  Instead, the
> > > > charging process simply loops in memcg reclaim and eventually soft
> > > > lockups.
> > > >
> > > > Memory cgroup out of memory: Killed process 808 (repro) total-vm:41944kB, anon-rss:35344kB, file-rss:504kB, shmem-rss:0kB, UID:0 pgtables:108kB oom_score_adj:0
> > > > watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [repro:806]
> > > > CPU: 0 PID: 806 Comm: repro Not tainted 5.6.0-rc5+ #136
> > > > RIP: 0010:shrink_lruvec+0x4e9/0xa40
> > > > ...
> > > > Call Trace:
> > > >  shrink_node+0x40d/0x7d0
> > > >  do_try_to_free_pages+0x13f/0x470
> > > >  try_to_free_mem_cgroup_pages+0x16d/0x230
> > > >  try_charge+0x247/0xac0
> > > >  mem_cgroup_try_charge+0x10a/0x220
> > > >  mem_cgroup_try_charge_delay+0x1e/0x40
> > > >  handle_mm_fault+0xdf2/0x15f0
> > > >  do_user_addr_fault+0x21f/0x420
> > > >  page_fault+0x2f/0x40
> > > >
> > > > Make sure that something ends up actually yielding the processor back
> > > > to the victim to allow for memory freeing.  The most appropriate place
> > > > appears to be shrink_node_memcgs(), where the iteration over all
> > > > descendant memcgs can be particularly lengthy.
> > >
> > > There is a cond_resched() in shrink_lruvec() and another one in
> > > shrink_page_list().  Why doesn't either of them hit?  Is it because
> > > there are no pages on the LRU list?  The rss data suggests there should
> > > be enough pages to go that path.  Or maybe it is the shrink_slab path
> > > that takes too long?
> >
> > I think it can be a number of cases, most notably the
> > mem_cgroup_protected() checks, which is why the cond_resched() is added
> > above them.  Rather than add cond_resched() only for MEMCG_PROT_MIN and
> > for certain MEMCG_PROT_LOW cases, the cond_resched() is added above the
> > switch clause because the iteration itself may be very lengthy.
>
> Was any of the above the case for your soft lockup?  How have you
> managed to trigger it?  As I've said, I am not against the patch, but I
> would really like to see an actual explanation of what happened rather
> than speculation about what might have happened.  If for nothing else,
> then for future reference.

Yes, this is how it was triggered in my own testing.

> If this is really about the whole hierarchy being MEMCG_PROT_MIN
> protected, and that results in a very expensive and pointless reclaim
> walk that can trigger a soft lockup, then it should be explicitly
> mentioned in the changelog.
I think the changelog clearly states that we need to guarantee that a
reclaimer will yield the processor back to allow a victim to exit.  This
is where we make that guarantee.  If it helps to note the specific reason
it triggered in my testing, we could add: "For example,
mem_cgroup_protected() can prohibit reclaim, and thus any yielding in
page reclaim would not address the issue."
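[Editor's note: for readers without the original patch at hand, the placement being debated above would look roughly like the following. This is a sketch against a 5.6-era mm/vmscan.c, not the submitted patch itself; the context lines around the hunk are assumptions.]

--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
 	do {
 		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);
 		unsigned long reclaimed;
 		unsigned long scanned;
 
+		/*
+		 * This loop can be lengthy when there are many descendant
+		 * memcgs, and mem_cgroup_protected() may prohibit any reclaim
+		 * below, so the cond_resched() calls in shrink_lruvec() and
+		 * shrink_page_list() are never reached.  Yield here so that,
+		 * on a UP system without preemption, an oom victim can get
+		 * the processor back and exit, freeing memory.
+		 */
+		cond_resched();
+
 		switch (mem_cgroup_protected(target_memcg, memcg)) {
 		case MEMCG_PROT_MIN:
 			/* Hard protection: skip this memcg entirely. */
 			continue;

Placing the call above the switch (rather than inside the MEMCG_PROT_MIN arm) covers the lengthy iteration itself, which is the rationale David gives above.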