Date: Thu, 12 Mar 2020 09:32:41 +0100
From: Michal Hocko
To: David Rientjes
Cc: Andrew Morton, Vlastimil Babka, linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: Re: [patch] mm, oom: prevent soft lockup on memcg oom for UP systems
Message-ID: <20200312083241.GT23944@dhcp22.suse.cz>
References: <20200310221019.GE8447@dhcp22.suse.cz> <20200311082736.GA23944@dhcp22.suse.cz>

On Wed 11-03-20 12:45:40, David Rientjes wrote:
> On Wed, 11 Mar 2020, Michal Hocko wrote:
> 
> > > > > When a process is oom killed as a result of memcg limits and the victim
> > > > > is waiting to exit, nothing ends up actually yielding the processor back
> > > > > to the victim on UP systems with preemption disabled. Instead, the
> > > > > charging process simply loops in memcg reclaim and eventually soft
> > > > > lockups.
> > > > > 
> > > > > Memory cgroup out of memory: Killed process 808 (repro) total-vm:41944kB, anon-rss:35344kB, file-rss:504kB, shmem-rss:0kB, UID:0 pgtables:108kB oom_score_adj:0
> > > > > watchdog: BUG: soft lockup - CPU#0 stuck for 23s! [repro:806]
> > > > > CPU: 0 PID: 806 Comm: repro Not tainted 5.6.0-rc5+ #136
> > > > > RIP: 0010:shrink_lruvec+0x4e9/0xa40
> > > > > ...
> > > > > Call Trace:
> > > > >  shrink_node+0x40d/0x7d0
> > > > >  do_try_to_free_pages+0x13f/0x470
> > > > >  try_to_free_mem_cgroup_pages+0x16d/0x230
> > > > >  try_charge+0x247/0xac0
> > > > >  mem_cgroup_try_charge+0x10a/0x220
> > > > >  mem_cgroup_try_charge_delay+0x1e/0x40
> > > > >  handle_mm_fault+0xdf2/0x15f0
> > > > >  do_user_addr_fault+0x21f/0x420
> > > > >  page_fault+0x2f/0x40
> > > > > 
> > > > > Make sure that something ends up actually yielding the processor back to
> > > > > the victim to allow for memory freeing. Most appropriate place appears to
> > > > > be shrink_node_memcgs() where the iteration of all descendant memcgs could
> > > > > be particularly lengthy.
> > > > 
> > > > There is a cond_resched in shrink_lruvec and another one in
> > > > shrink_page_list. Why doesn't any of them hit? Is it because there are
> > > > no pages on the LRU list? Because rss data suggests there should be
> > > > enough pages to go that path. Or maybe it is shrink_slab path that takes
> > > > too long?
> > > 
> > > I think it can be a number of cases, most notably mem_cgroup_protected()
> > > checks which is why the cond_resched() is added above it. Rather than add
> > > cond_resched() only for MEMCG_PROT_MIN and for certain MEMCG_PROT_LOW, the
> > > cond_resched() is added above the switch clause because the iteration
> > > itself may be potentially very lengthy.
> > 
> > Was any of the above the case for your soft lockup case? How have you
> > managed to trigger it? As I've said I am not against the patch but I
> > would really like to see an actual explanation of what happened rather than
> > speculations of what might have happened. If for nothing else then for
> > the future reference.
> 
> Yes, this is how it was triggered in my own testing.
> 
> > If this is really about all the hierarchy being MEMCG_PROT_MIN protected
> > and that results in a very expensive and pointless reclaim walk that can
> > trigger soft lockup then it should be explicitly mentioned in the
> > changelog.
> 
> I think the changelog clearly states that we need to guarantee that a
> reclaimer will yield the processor back to allow a victim to exit. This
> is where we make the guarantee. If it helps for the specific reason it
> triggered in my testing, we could add:
> 
> "For example, mem_cgroup_protected() can prohibit reclaim and thus any
> yielding in page reclaim would not address the issue."

I would suggest something like the following:
"
The reclaim path (including the OOM path) relies on explicit scheduling
points to hand over execution to tasks which could help with the reclaim
process. Currently it is mostly shrink_page_list which yields the CPU for
each reclaimed page. This might be insufficient in some configurations,
though. E.g. when a memcg OOM path is triggered in a hierarchy which
doesn't have any reclaimable memory because of memory reclaim protection
(MEMCG_PROT_MIN), then it is possible to trigger a soft lockup during an
out of memory situation on non-preemptible kernels.

Fix this by adding a cond_resched() up in the reclaim path and make sure
there is a yield point regardless of the reclaimability of the target
hierarchy.
"
-- 
Michal Hocko
SUSE Labs
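
For reference, below is a rough and heavily abbreviated sketch of the placement
being discussed in this thread: a cond_resched() ahead of the
mem_cgroup_protected() switch in shrink_node_memcgs() (mm/vmscan.c, around
v5.6-rc). It is only an illustration of the idea, not the actual patch; the
low-protection handling and the accounting inside the loop are elided.

static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
{
	struct mem_cgroup *target_memcg = sc->target_mem_cgroup;
	struct mem_cgroup *memcg;

	memcg = mem_cgroup_iter(target_memcg, NULL, NULL);
	do {
		struct lruvec *lruvec = mem_cgroup_lruvec(memcg, pgdat);

		/*
		 * Yield here, above the protection switch: if every memcg in
		 * the hierarchy is MEMCG_PROT_MIN protected, the loop never
		 * reaches shrink_lruvec()/shrink_page_list(), so no other
		 * scheduling point is hit on a non-preemptible UP kernel.
		 */
		cond_resched();

		switch (mem_cgroup_protected(target_memcg, memcg)) {
		case MEMCG_PROT_MIN:
			/* Hard protection: skip this memcg entirely. */
			continue;
		case MEMCG_PROT_LOW:
			/* Soft protection handling elided in this sketch. */
			break;
		case MEMCG_PROT_NONE:
			break;
		}

		shrink_lruvec(lruvec, sc);
		shrink_slab(sc->gfp_mask, pgdat->node_id, memcg, sc->priority);
		/* vmpressure and refault accounting elided. */
	} while ((memcg = mem_cgroup_iter(target_memcg, memcg, NULL)));
}

With a yield point before the protection check, the charging task hands over
the CPU even when the whole hierarchy walk is skipped, which lets the already
killed victim run, exit and release its memory.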