Date: Wed, 01 Apr 2020 21:07:00 -0700
From: Andrew Morton <akpm@linux-foundation.org>
To: akpm@linux-foundation.org, chris@chrisdown.name, guro@fb.com,
 hannes@cmpxchg.org, linux-mm@kvack.org, mhocko@suse.com, mkoutny@suse.com,
 mm-commits@vger.kernel.org, tj@kernel.org, torvalds@linux-foundation.org
Subject: [patch 069/155] mm: memcontrol: fix memory.low proportional distribution
Message-ID: <20200402040700.jd3s7gbW4%akpm@linux-foundation.org>
In-Reply-To: <20200401210155.09e3b9742e1c6e732f5a7250@linux-foundation.org>
From: Johannes Weiner <hannes@cmpxchg.org>
Subject: mm: memcontrol: fix memory.low proportional distribution

Patch series "mm: memcontrol: recursive memory.low protection", v3.

The current memory.low (and memory.min) semantics require protection to
be assigned to a cgroup in an uninterrupted chain from the top-level
cgroup all the way to the leaf.

In practice, we want to protect entire cgroup subtrees from each other
(system management software vs. workload), but we would like the VM to
balance memory optimally *within* each subtree, without having to make
explicit weight allocations among individual components.  The current
semantics make that impossible.

They also introduce unmanageable complexity into more advanced resource
trees.  For example:

	host root
	`- system.slice
	   `- rpm upgrades
	   `- logging
	`- workload.slice
	   `- a container
	      `- system.slice
	      `- workload.slice
	         `- job A
	            `- component 1
	            `- component 2
	         `- job B

From a host-level perspective, we would like to protect the outer
workload.slice subtree as a whole from rpm upgrades, logging etc.  But
for that to be effective, right now we'd have to propagate it down
through the container, the inner workload.slice, into the job cgroup and
ultimately the component cgroups where memory is actually, physically
allocated.  This may cross several tree delegation points and namespace
boundaries, which makes such a setup nearly impossible.

CPU and IO on the other hand are already distributed recursively.  The
user would simply configure allowances at the host level, and they would
apply to the entire subtree without any downward propagation.

To enable the above-mentioned use cases and bring memory in line with
other resource controllers, this patch series extends memory.low/min
such that settings apply recursively to the entire subtree.  Users can
still assign explicit shares in subgroups, but if they don't, any
ancestral protection will be distributed such that children compete
freely amongst each other - as if no memory control were enabled inside
the subtree - but enjoy protection from neighboring trees.

In the above example, the user would then be able to configure shares of
CPU, IO and memory at the host level to comprehensively protect and
isolate the workload.slice as a whole from system.slice activity.

Patch #1 fixes an existing bug that can give a cgroup tree more
protection than it should receive as per ancestor configuration.

Patch #2 simplifies and documents the existing code to make it easier to
reason about the changes in the next patch.

Patch #3 finally implements recursive memory protection semantics.

Because of a risk of regressing legacy setups, the new semantics are
hidden behind a cgroup2 mount option, 'memory_recursiveprot'.  More
details in patch #3.

This patch (of 3):

When memory.low is overcommitted - i.e. the children claim more
protection than their shared ancestor grants them - the allowance is
distributed in proportion to how much each sibling uses their own
declared protection:

	low_usage = min(memory.low, memory.current)
	elow = parent_elow * (low_usage / siblings_low_usage)

However, siblings_low_usage is not the sum of all low_usages.  It sums
up the usages of *only those cgroups that are within their memory.low*.
That means that low_usage can be *bigger* than siblings_low_usage, and
consequently the total protection afforded to the children can be
bigger than what the ancestor grants the subtree.
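
Expressed in code, the calculation above looks roughly like this (a
simplified userspace rendering of the logic in mm/memcontrol.c; the
function name and signature here are illustrative, not the kernel's):

static unsigned long calc_elow(unsigned long my_low, unsigned long my_usage,
			       unsigned long parent_elow,
			       unsigned long siblings_low_usage)
{
	unsigned long low_usage = my_usage < my_low ? my_usage : my_low;
	unsigned long elow;

	if (!siblings_low_usage)
		return 0;

	/*
	 * This can exceed parent_elow when low_usage is bigger than
	 * siblings_low_usage - that is the inconsistency this patch
	 * fixes.
	 */
	elow = parent_elow * low_usage / siblings_low_usage;

	/* elow is still capped by the group's own memory.low */
	return elow < my_low ? elow : my_low;
}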
Consider three groups where two are in excess of their protection:

	A/memory.low = 10G
	A/A1/memory.low = 10G, memory.current = 20G
	A/A2/memory.low = 10G, memory.current = 20G
	A/A3/memory.low = 10G, memory.current = 8G
	siblings_low_usage = 8G (only A3 contributes)

	A1/elow = parent_elow(10G) * low_usage(10G) / siblings_low_usage(8G) = 12.5G -> 10G
	A2/elow = parent_elow(10G) * low_usage(10G) / siblings_low_usage(8G) = 12.5G -> 10G
	A3/elow = parent_elow(10G) * low_usage(8G) / siblings_low_usage(8G) = 10.0G

	(the 12.5G are capped to the explicit memory.low setting of 10G)

With that, the sum of all awarded protection below A is 30G, when A only
grants 10G for the entire subtree.

What does this mean in practice?  A1 and A2 would still be in excess of
their 10G allowance and would be reclaimed, whereas A3 would not.  As
they eventually drop below their protection setting, they would be
counted in siblings_low_usage again and the error would right itself.

When reclaim was applied in a binary fashion (cgroup is reclaimed when
it's above its protection, otherwise it's skipped) this would actually
work out just fine.  However, since 1bc63fb1272b ("mm, memcg: make scan
aggression always exclude protection"), reclaim pressure is scaled to
how much a cgroup is above its protection.  As a result this calculation
error unduly skews pressure away from A1 and A2 toward the rest of the
system.

But why did we do it like this in the first place?

The reasoning behind exempting groups in excess from siblings_low_usage
was to go after them first during reclaim in an overcommitted subtree:

	A/memory.low = 2G, memory.current = 4G
	A/A1/memory.low = 3G, memory.current = 2G
	A/A2/memory.low = 1G, memory.current = 2G

	siblings_low_usage = 2G (only A1 contributes)
	A1/elow = parent_elow(2G) * low_usage(2G) / siblings_low_usage(2G) = 2G
	A2/elow = parent_elow(2G) * low_usage(1G) / siblings_low_usage(2G) = 1G

While the children combined are overcommitting A and are technically
both at fault, A2 is actively declaring unprotected memory and we would
like to reclaim that first.

However, while this sounds like a noble goal on the face of it, it
doesn't make much difference in actual memory distribution: Because A is
overcommitted, reclaim will not stop once A2 gets pushed back to within
its allowance; we'll have to reclaim A1 either way.  The end result is
still that protection is distributed proportionally, with A1 getting 3/4
(1.5G) and A2 getting 1/4 (0.5G) of A's allowance.

[ If A weren't overcommitted, it wouldn't make a difference since each
  cgroup would just get the protection it declares:

	A/memory.low = 2G, memory.current = 3G
	A/A1/memory.low = 1G, memory.current = 1G
	A/A2/memory.low = 1G, memory.current = 2G

  With the current calculation:

	siblings_low_usage = 1G (only A1 contributes)
	A1/elow = parent_elow(2G) * low_usage(1G) / siblings_low_usage(1G) = 2G -> 1G
	A2/elow = parent_elow(2G) * low_usage(1G) / siblings_low_usage(1G) = 2G -> 1G

  Including excess groups in siblings_low_usage:

	siblings_low_usage = 2G
	A1/elow = parent_elow(2G) * low_usage(1G) / siblings_low_usage(2G) = 1G -> 1G
	A2/elow = parent_elow(2G) * low_usage(1G) / siblings_low_usage(2G) = 1G -> 1G ]

Simplify the calculation and fix the proportional reclaim bug by
including excess cgroups in siblings_low_usage.
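
In code terms, the fix changes how each child reports its protected
usage into the parent's siblings_low_usage sum.  A simplified
before/after sketch of the mm/page_counter.c logic shown in the diff
below (the real function also propagates the deltas atomically up the
hierarchy):

/* Old: a cgroup above its protection reported 0 and thus vanished
 * from the parent's siblings_low_usage sum. */
static unsigned long protected_usage_old(unsigned long usage,
					 unsigned long low)
{
	if (usage <= low)
		return usage;
	return 0;
}

/* New: an over-budget cgroup still reports min(usage, low), so
 * siblings_low_usage really is the sum of all low_usages. */
static unsigned long protected_usage_new(unsigned long usage,
					 unsigned long low)
{
	return usage < low ? usage : low;	/* min(usage, low) */
}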
After this patch, the effective memory.low distribution from the example
above would be as follows:

	A/memory.low = 10G
	A/A1/memory.low = 10G, memory.current = 20G
	A/A2/memory.low = 10G, memory.current = 20G
	A/A3/memory.low = 10G, memory.current = 8G
	siblings_low_usage = 28G

	A1/elow = parent_elow(10G) * low_usage(10G) / siblings_low_usage(28G) = 3.5G
	A2/elow = parent_elow(10G) * low_usage(10G) / siblings_low_usage(28G) = 3.5G
	A3/elow = parent_elow(10G) * low_usage(8G) / siblings_low_usage(28G) = 2.8G

Link: http://lkml.kernel.org/r/20200227195606.46212-2-hannes@cmpxchg.org
Fixes: 1bc63fb1272b ("mm, memcg: make scan aggression always exclude protection")
Fixes: 230671533d64 ("mm: memory.low hierarchical behavior")
Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Tejun Heo <tj@kernel.org>
Acked-by: Roman Gushchin <guro@fb.com>
Acked-by: Chris Down <chris@chrisdown.name>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Michal Koutný <mkoutny@suse.com>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---

 mm/memcontrol.c   |    4 +---
 mm/page_counter.c |   12 ++----------
 2 files changed, 3 insertions(+), 13 deletions(-)

--- a/mm/memcontrol.c~mm-memcontrol-fix-memorylow-proportional-distribution
+++ a/mm/memcontrol.c
@@ -6266,9 +6266,7 @@ struct cgroup_subsys memory_cgrp_subsys
  * elow = min( memory.low, parent->elow * ------------------ ),
  *                                        siblings_low_usage
  *
- *             | memory.current, if memory.current < memory.low
- * low_usage = |
- *             | 0, otherwise.
+ * low_usage = min(memory.low, memory.current)
  *
  *
  * Such definition of the effective memory.low provides the expected

--- a/mm/page_counter.c~mm-memcontrol-fix-memorylow-proportional-distribution
+++ a/mm/page_counter.c
@@ -23,11 +23,7 @@ static void propagate_protected_usage(st
 		return;
 
 	if (c->min || atomic_long_read(&c->min_usage)) {
-		if (usage <= c->min)
-			protected = usage;
-		else
-			protected = 0;
-
+		protected = min(usage, c->min);
 		old_protected = atomic_long_xchg(&c->min_usage, protected);
 		delta = protected - old_protected;
 		if (delta)
@@ -35,11 +31,7 @@ static void propagate_protected_usage(st
 	}
 
 	if (c->low || atomic_long_read(&c->low_usage)) {
-		if (usage <= c->low)
-			protected = usage;
-		else
-			protected = 0;
-
+		protected = min(usage, c->low);
 		old_protected = atomic_long_xchg(&c->low_usage, protected);
 		delta = protected - old_protected;
 		if (delta)
_
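
To sanity-check the arithmetic, here is a small standalone userspace
program (illustrative only, not part of the patch; the struct and names
are mine) that computes each child's elow for the 10G example under both
the old and the new definition of siblings_low_usage:

#include <stdio.h>

#define GB	(1024.0 * 1024 * 1024)

struct child {
	const char *name;
	double low;
	double current;
};

int main(void)
{
	struct child c[] = {
		{ "A1", 10 * GB, 20 * GB },
		{ "A2", 10 * GB, 20 * GB },
		{ "A3", 10 * GB,  8 * GB },
	};
	double parent_elow = 10 * GB;
	double sum_old = 0.0, sum_new = 0.0;
	int i;

	for (i = 0; i < 3; i++) {
		double lu = c[i].current < c[i].low ? c[i].current : c[i].low;

		if (c[i].current <= c[i].low)	/* old: over-budget groups excluded */
			sum_old += lu;
		sum_new += lu;			/* new: every child contributes */
	}

	for (i = 0; i < 3; i++) {
		double lu = c[i].current < c[i].low ? c[i].current : c[i].low;
		double elow_old = parent_elow * lu / sum_old;
		double elow_new = parent_elow * lu / sum_new;

		/* elow is capped by the group's own memory.low setting */
		if (elow_old > c[i].low)
			elow_old = c[i].low;
		if (elow_new > c[i].low)
			elow_new = c[i].low;

		printf("%s: old elow = %.2fG, new elow = %.2fG\n",
		       c[i].name, elow_old / GB, elow_new / GB);
	}
	return 0;
}

This prints 10.00G/3.57G for A1 and A2 (the old values are the uncapped
12.5G capped to the explicit 10G memory.low) and 10.00G/2.86G for A3;
the changelog above truncates 3.57G and 2.86G to 3.5G and 2.8G.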