Date: Thu, 19 Aug 2021 16:38:59 -0400
From: Johannes Weiner <hannes@cmpxchg.org>
To: Michal Hocko
Cc: Andrew Morton, Leon Yang, Chris Down, Roman Gushchin,
 linux-mm@kvack.org, cgroups@vger.kernel.org,
 linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH] mm: memcontrol: fix occasional OOMs due to proportional memory.low reclaim
References: <20210817180506.220056-1-hannes@cmpxchg.org>

On Thu, Aug 19, 2021 at 05:01:38PM +0200, Michal Hocko wrote:
> On Tue 17-08-21 14:05:06, Johannes Weiner wrote:
> > We've noticed occasional OOM killing when memory.low settings are in
> > effect for cgroups. This is unexpected and undesirable, as memory.low
> > is supposed to express non-OOMing memory priorities between cgroups.
> >
> > The reason for this is proportional memory.low reclaim. When cgroups
> > are below their memory.low threshold, reclaim passes them over in the
> > first round, and then retries if it couldn't find pages anywhere else.
> > But when cgroups are slightly above their memory.low setting, page scan
> > force is scaled down and diminished in proportion to the overage, to
> > the point where it can cause reclaim to fail as well - but in that
> > case we currently don't retry, and instead trigger OOM.
> >
> > To fix this, hook proportional reclaim into the same retry logic we
> > have in place for when cgroups are skipped entirely. This way, if
> > reclaim fails and some cgroups were scanned with diminished pressure,
> > we'll try another full-force cycle before giving up and OOMing.
> >
> > Reported-by: Leon Yang
> > Signed-off-by: Johannes Weiner
>
> Acked-by: Michal Hocko

Thanks!

> Although I have to say that the code is quite tricky and it deserves
> more comments. See below.
>
> [...]
>
> > @@ -2576,6 +2578,15 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
> >  			 * hard protection.
> >  			 */
> >  			unsigned long cgroup_size = mem_cgroup_size(memcg);
> > +			unsigned long protection;
> > +
> > +			/* memory.low scaling, make sure we retry before OOM */
> > +			if (!sc->memcg_low_reclaim && low > min) {
> > +				protection = low;
> > +				sc->memcg_low_skipped = 1;
> > +			} else {
> > +				protection = min;
> > +			}
>
> Just by looking at this in isolation, one could be really curious how
> this does not break the low memory protection altogether.

You're right, it's a bit too terse.

> The logic is spread over 3 different places.
>
> Would something like the following be more understandable?
>
> /*
>  * Low limit protected memcgs are already excluded at
>  * a higher level (shrink_node_memcgs), but scaling
>  * down the reclaim target can result in reclaim
>  * failure and premature OOM. We do not have the full
>  * picture here, so we cannot really judge the
>  * situation; pro-actively flag this scenario and
>  * let do_try_to_free_pages retry if there is no
>  * progress.
>  */

I've been drafting around with this, but it seems to say the same
thing as the comment I already put into struct scan_control:

	/*
	 * Cgroup memory below memory.low is protected as long as we
	 * don't threaten to OOM. If any cgroup is reclaimed at
	 * reduced force or passed over entirely due to its memory.low
	 * setting (memcg_low_skipped), and nothing is reclaimed as a
	 * result, then go back for one more cycle that reclaims the
	 * protected memory (memcg_low_reclaim) to avert OOM.
	 */

How about a brief version of this with a pointer to the original?

diff --git a/mm/vmscan.c b/mm/vmscan.c
index 701106e1829c..c32d686719d5 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -2580,7 +2580,12 @@ static void get_scan_count(struct lruvec *lruvec, struct scan_control *sc,
 			unsigned long cgroup_size = mem_cgroup_size(memcg);
 			unsigned long protection;
 
-			/* memory.low scaling, make sure we retry before OOM */
+			/*
+			 * Soft protection must not cause reclaim failure. Let
+			 * the upper level know if we skipped pages during the
+			 * first pass, so it can retry if necessary. See the
+			 * struct scan_control definition of those flags.
+			 */
 			if (!sc->memcg_low_reclaim && low > min) {
 				protection = low;
 				sc->memcg_low_skipped = 1;
@@ -2853,16 +2858,16 @@ static void shrink_node_memcgs(pg_data_t *pgdat, struct scan_control *sc)
 
 		if (mem_cgroup_below_min(memcg)) {
 			/*
-			 * Hard protection.
-			 * If there is no reclaimable memory, OOM.
+			 * Hard protection. Always respected. If there is not
+			 * enough reclaimable memory elsewhere, it's an OOM.
 			 */
 			continue;
 		} else if (mem_cgroup_below_low(memcg)) {
 			/*
-			 * Soft protection.
-			 * Respect the protection only as long as
-			 * there is an unprotected supply
-			 * of reclaimable memory from other cgroups.
+			 * Soft protection must not cause reclaim failure. Let
+			 * the upper level know if we skipped pages during the
+			 * first pass, so it can retry if necessary. See the
+			 * struct scan_control definition of those flags.
 			 */
 			if (!sc->memcg_low_reclaim) {
 				sc->memcg_low_skipped = 1;
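
For readers following along without the tree handy: the retry side that
consumes these flags lives in do_try_to_free_pages(). Below is a
simplified sketch of the 5.14-era logic, heavily elided and written from
memory, so treat it as illustrative rather than a verbatim excerpt:

static unsigned long do_try_to_free_pages(struct zonelist *zonelist,
					  struct scan_control *sc)
{
	int initial_priority = sc->priority;

retry:
	/*
	 * Main reclaim loop. During this pass, get_scan_count() may
	 * scale scan pressure down for cgroups above memory.low and
	 * set sc->memcg_low_skipped, and shrink_node_memcgs() sets it
	 * when it passes over a protected cgroup entirely.
	 */
	do {
		shrink_zones(zonelist, sc);
		if (sc->nr_reclaimed >= sc->nr_to_reclaim)
			break;
	} while (--sc->priority >= 0);

	if (sc->nr_reclaimed)
		return sc->nr_reclaimed;

	/* Untapped cgroup reserves? Don't OOM, retry. */
	if (sc->memcg_low_skipped) {
		sc->priority = initial_priority;
		sc->memcg_low_reclaim = 1;	/* ignore memory.low this time */
		sc->memcg_low_skipped = 0;
		goto retry;
	}

	return 0;
}

With the patch applied, both the "passed over entirely" and the
"scanned at reduced force" cases funnel into the same memcg_low_skipped
retry, which is what prevents the premature OOM.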