From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Wed, 01 Apr 2020 21:07:03 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, chris@chrisdown.name, guro@fb.com, hannes@cmpxchg.org, linux-mm@kvack.org, mhocko@suse.com, mkoutny@suse.com, mm-commits@vger.kernel.org, tj@kernel.org, torvalds@linux-foundation.org
Subject: [patch 070/155] mm: memcontrol: clean up and document effective low/min calculations
Message-ID: <20200402040703.7W1N7eLnV%akpm@linux-foundation.org>
In-Reply-To: <20200401210155.09e3b9742e1c6e732f5a7250@linux-foundation.org>
User-Agent: s-nail v14.8.16
MIME-Version: 1.0
Content-Type: text/plain; charset=utf-8

From: Johannes Weiner
Subject: mm: memcontrol: clean up and document effective low/min calculations

The effective protection of any given cgroup is a somewhat complicated
construct that depends on the ancestor's configuration, siblings'
configurations, as well as current memory utilization in all these
groups. It's done this way to satisfy hierarchical delegation
requirements while also making the configuration semantics flexible and
expressive in complex real life scenarios.

Unfortunately, all the rules and requirements are sparsely documented,
and the code is a little too clever in merging different scenarios into
a single min() expression. This makes it hard to reason about the
implementation and avoid breaking semantics when making changes to it.

This patch documents each semantic rule individually and splits out the
handling of the overcommit case from the regular case.

Michal Koutný also points out that the points of equilibrium as
described in the existing example scenarios aren't actually accurate.
Delete these examples for now to avoid confusion.

Link: http://lkml.kernel.org/r/20200227195606.46212-3-hannes@cmpxchg.org
Signed-off-by: Johannes Weiner
Acked-by: Tejun Heo
Acked-by: Roman Gushchin
Acked-by: Chris Down
Acked-by: Michal Hocko
Cc: Michal Koutný
Signed-off-by: Andrew Morton
---

 mm/memcontrol.c |  177 +++++++++++++++++++++-------------------------
 1 file changed, 84 insertions(+), 93 deletions(-)

--- a/mm/memcontrol.c~mm-memcontrol-clean-up-and-document-effective-low-min-calculations
+++ a/mm/memcontrol.c
@@ -6234,6 +6234,76 @@ struct cgroup_subsys memory_cgrp_subsys
 	.early_init = 0,
 };
 
+/*
+ * This function calculates an individual cgroup's effective
+ * protection which is derived from its own memory.min/low, its
+ * parent's and siblings' settings, as well as the actual memory
+ * distribution in the tree.
+ *
+ * The following rules apply to the effective protection values:
+ *
+ * 1. At the first level of reclaim, effective protection is equal to
+ *    the declared protection in memory.min and memory.low.
+ *
+ * 2. To enable safe delegation of the protection configuration, at
+ *    subsequent levels the effective protection is capped to the
+ *    parent's effective protection.
+ *
+ * 3. To make complex and dynamic subtrees easier to configure, the
+ *    user is allowed to overcommit the declared protection at a given
+ *    level. If that is the case, the parent's effective protection is
+ *    distributed to the children in proportion to how much protection
+ *    they have declared and how much of it they are utilizing.
+ *
+ *    This makes distribution proportional, but also work-conserving:
+ *    if one cgroup claims much more protection than it uses memory,
+ *    the unused remainder is available to its siblings.
+ *
+ * 4. Conversely, when the declared protection is undercommitted at a
+ *    given level, the distribution of the larger parental protection
+ *    budget is NOT proportional. A cgroup's protection from a sibling
+ *    is capped to its own memory.min/low setting.
+ *
+ */
+static unsigned long effective_protection(unsigned long usage,
+					  unsigned long setting,
+					  unsigned long parent_effective,
+					  unsigned long siblings_protected)
+{
+	unsigned long protected;
+
+	protected = min(usage, setting);
+	/*
+	 * If all cgroups at this level combined claim and use more
+	 * protection than what the parent affords them, distribute
+	 * shares in proportion to utilization.
+	 *
+	 * We are using actual utilization rather than the statically
+	 * claimed protection in order to be work-conserving: claimed
+	 * but unused protection is available to siblings that would
+	 * otherwise get a smaller chunk than what they claimed.
+	 */
+	if (siblings_protected > parent_effective)
+		return protected * parent_effective / siblings_protected;
+
+	/*
+	 * Ok, utilized protection of all children is within what the
+	 * parent affords them, so we know whatever this child claims
+	 * and utilizes is effectively protected.
+	 *
+	 * If there is unprotected usage beyond this value, reclaim
+	 * will apply pressure in proportion to that amount.
+	 *
+	 * If there is unutilized protection, the cgroup will be fully
+	 * shielded from reclaim, but we do return a smaller value for
+	 * protection than what the group could enjoy in theory. This
+	 * is okay. With the overcommit distribution above, effective
+	 * protection is always dependent on how memory is actually
+	 * consumed among the siblings anyway.
+	 */
+	return protected;
+}
+
 /**
  * mem_cgroup_protected - check if memory consumption is in the normal range
  * @root: the top ancestor of the sub-tree being checked
@@ -6247,67 +6317,11 @@ struct cgroup_subsys memory_cgrp_subsys
  *   MEMCG_PROT_LOW: cgroup memory is protected as long there is
  *     an unprotected supply of reclaimable memory from other cgroups.
  *   MEMCG_PROT_MIN: cgroup memory is protected
- *
- * @root is exclusive; it is never protected when looked at directly
- *
- * To provide a proper hierarchical behavior, effective memory.min/low values
- * are used. Below is the description of how effective memory.low is calculated.
- * Effective memory.min values is calculated in the same way.
- *
- * Effective memory.low is always equal or less than the original memory.low.
- * If there is no memory.low overcommittment (which is always true for
- * top-level memory cgroups), these two values are equal.
- * Otherwise, it's a part of parent's effective memory.low,
- * calculated as a cgroup's memory.low usage divided by sum of sibling's
- * memory.low usages, where memory.low usage is the size of actually
- * protected memory.
- *
- *                                             low_usage
- * elow = min( memory.low, parent->elow * ------------------ ),
- *                                        siblings_low_usage
- *
- * low_usage = min(memory.low, memory.current)
- *
- *
- * Such definition of the effective memory.low provides the expected
- * hierarchical behavior: parent's memory.low value is limiting
- * children, unprotected memory is reclaimed first and cgroups,
- * which are not using their guarantee do not affect actual memory
- * distribution.
- *
- * For example, if there are memcgs A, A/B, A/C, A/D and A/E:
- *
- *     A      A/memory.low = 2G, A/memory.current = 6G
- *    //\\
- *   BC  DE   B/memory.low = 3G  B/memory.current = 2G
- *            C/memory.low = 1G  C/memory.current = 2G
- *            D/memory.low = 0   D/memory.current = 2G
- *            E/memory.low = 10G E/memory.current = 0
- *
- * and the memory pressure is applied, the following memory distribution
- * is expected (approximately):
- *
- *     A/memory.current = 2G
- *
- *     B/memory.current = 1.3G
- *     C/memory.current = 0.6G
- *     D/memory.current = 0
- *     E/memory.current = 0
- *
- * These calculations require constant tracking of the actual low usages
- * (see propagate_protected_usage()), as well as recursive calculation of
- * effective memory.low values. But as we do call mem_cgroup_protected()
- * path for each memory cgroup top-down from the reclaim,
- * it's possible to optimize this part, and save calculated elow
- * for next usage. This part is intentionally racy, but it's ok,
- * as memory.low is a best-effort mechanism.
  */
 enum mem_cgroup_protection mem_cgroup_protected(struct mem_cgroup *root,
 						struct mem_cgroup *memcg)
 {
 	struct mem_cgroup *parent;
-	unsigned long emin, parent_emin;
-	unsigned long elow, parent_elow;
 	unsigned long usage;
 
 	if (mem_cgroup_disabled())
@@ -6322,52 +6336,29 @@ enum mem_cgroup_protection mem_cgroup_pr
 	if (!usage)
 		return MEMCG_PROT_NONE;
 
-	emin = memcg->memory.min;
-	elow = memcg->memory.low;
-
 	parent = parent_mem_cgroup(memcg);
 	/* No parent means a non-hierarchical mode on v1 memcg */
 	if (!parent)
 		return MEMCG_PROT_NONE;
 
-	if (parent == root)
-		goto exit;
-
-	parent_emin = READ_ONCE(parent->memory.emin);
-	emin = min(emin, parent_emin);
-	if (emin && parent_emin) {
-		unsigned long min_usage, siblings_min_usage;
-
-		min_usage = min(usage, memcg->memory.min);
-		siblings_min_usage = atomic_long_read(
-			&parent->memory.children_min_usage);
-
-		if (min_usage && siblings_min_usage)
-			emin = min(emin, parent_emin * min_usage /
-				   siblings_min_usage);
-	}
-
-	parent_elow = READ_ONCE(parent->memory.elow);
-	elow = min(elow, parent_elow);
-	if (elow && parent_elow) {
-		unsigned long low_usage, siblings_low_usage;
-
-		low_usage = min(usage, memcg->memory.low);
-		siblings_low_usage = atomic_long_read(
-			&parent->memory.children_low_usage);
-
-		if (low_usage && siblings_low_usage)
-			elow = min(elow, parent_elow * low_usage /
-				   siblings_low_usage);
+	if (parent == root) {
+		memcg->memory.emin = memcg->memory.min;
+		memcg->memory.elow = memcg->memory.low;
+		goto out;
 	}
 
-exit:
-	memcg->memory.emin = emin;
-	memcg->memory.elow = elow;
+	memcg->memory.emin = effective_protection(usage,
+			memcg->memory.min, READ_ONCE(parent->memory.emin),
+			atomic_long_read(&parent->memory.children_min_usage));
+
+	memcg->memory.elow = effective_protection(usage,
+			memcg->memory.low, READ_ONCE(parent->memory.elow),
+			atomic_long_read(&parent->memory.children_low_usage));
 
-	if (usage <= emin)
+out:
+	if (usage <= memcg->memory.emin)
 		return MEMCG_PROT_MIN;
-	else if (usage <= elow)
+	else if (usage <= memcg->memory.elow)
 		return MEMCG_PROT_LOW;
 	else
 		return MEMCG_PROT_NONE;
_
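
As a side note for readers working through the new math: below is a minimal,
standalone userspace sketch (not part of the patch) of the overcommit branch
of effective_protection(), using hypothetical memory.low/memory.current
numbers in MiB. The min_ul() helper and the unit choice are assumptions made
for the example only; the kernel operates on page counters.

/*
 * Standalone sketch of the overcommit distribution implemented by
 * effective_protection() above; all numbers are hypothetical.
 */
#include <stdio.h>

static unsigned long min_ul(unsigned long a, unsigned long b)
{
	return a < b ? a : b;
}

/* Same shape as the kernel helper: protection is capped to actual usage,
 * and scaled down when the siblings' combined protected usage overcommits
 * what the parent affords them. */
static unsigned long effective_protection(unsigned long usage,
					  unsigned long setting,
					  unsigned long parent_effective,
					  unsigned long siblings_protected)
{
	unsigned long protected = min_ul(usage, setting);

	if (siblings_protected > parent_effective)
		return protected * parent_effective / siblings_protected;
	return protected;
}

int main(void)
{
	/* Parent's effective memory.low affords 2048 MiB. */
	unsigned long parent_elow = 2048;
	/* Two children overcommit it: B declares 3072 MiB and uses 2048,
	 * C declares 1024 MiB and uses 2048. */
	unsigned long b_low = 3072, b_usage = 2048;
	unsigned long c_low = 1024, c_usage = 2048;
	/* The kernel tracks the sum of the children's protected usage,
	 * min(memory.low, memory.current), as children_low_usage. */
	unsigned long siblings = min_ul(b_usage, b_low) + min_ul(c_usage, c_low);

	printf("B elow = %lu MiB\n",
	       effective_protection(b_usage, b_low, parent_elow, siblings));
	printf("C elow = %lu MiB\n",
	       effective_protection(c_usage, c_low, parent_elow, siblings));
	return 0;
}

With these made-up numbers the 2048 MiB parent budget is split in proportion
to protected usage (2048:1024), so B ends up with about 1365 MiB and C with
about 682 MiB of effective protection.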