From: Johannes Weiner
To: Andrew Morton, Tejun Heo
Cc: Michal Hocko, Roman Gushchin, Shakeel Butt, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: [PATCH 7/8] mm: memcontrol: consolidate lruvec stat flushing
Date: Fri, 5 Feb 2021 13:28:05 -0500
Message-Id: <20210205182806.17220-8-hannes@cmpxchg.org>
In-Reply-To: <20210205182806.17220-1-hannes@cmpxchg.org>
References: <20210205182806.17220-1-hannes@cmpxchg.org>

There are two functions that flush the per-cpu data of an lruvec into
the rest of the cgroup tree: one called when the cgroup is being freed,
and one called when a CPU disappears during hotplug. The difference is
whether they collect from all CPUs or from just one; the rest of the
flushing code is the same. Merge them into one function and share the
common code.

Signed-off-by: Johannes Weiner
---
 mm/memcontrol.c | 74 +++++++++++++++++++------------------------------
 1 file changed, 28 insertions(+), 46 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 5dc0bd53b64a..490357945f2c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2410,39 +2410,39 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
 	mutex_unlock(&percpu_charge_mutex);
 }
 
-static int memcg_hotplug_cpu_dead(unsigned int cpu)
+static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg, int cpu)
 {
-	struct memcg_stock_pcp *stock;
-	struct mem_cgroup *memcg;
-
-	stock = &per_cpu(memcg_stock, cpu);
-	drain_stock(stock);
+	int nid;
 
-	for_each_mem_cgroup(memcg) {
+	for_each_node(nid) {
+		struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
+		unsigned long stat[NR_VM_NODE_STAT_ITEMS];
+		struct batched_lruvec_stat *lstatc;
 		int i;
 
+		lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);
 		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
-			int nid;
+			stat[i] = lstatc->count[i];
+			lstatc->count[i] = 0;
+		}
 
-			for_each_node(nid) {
-				struct batched_lruvec_stat *lstatc;
-				struct mem_cgroup_per_node *pn;
-				long x;
+		do {
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				atomic_long_add(stat[i], &pn->lruvec_stat[i]);
+		} while ((pn = parent_nodeinfo(pn, nid)));
+	}
+}
 
-				pn = memcg->nodeinfo[nid];
-				lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);
+static int memcg_hotplug_cpu_dead(unsigned int cpu)
+{
+	struct memcg_stock_pcp *stock;
+	struct mem_cgroup *memcg;
 
-				x = lstatc->count[i];
-				lstatc->count[i] = 0;
+	stock = &per_cpu(memcg_stock, cpu);
+	drain_stock(stock);
 
-				if (x) {
-					do {
-						atomic_long_add(x, &pn->lruvec_stat[i]);
-					} while ((pn = parent_nodeinfo(pn, nid)));
-				}
-			}
-		}
-	}
+	for_each_mem_cgroup(memcg)
+		memcg_flush_lruvec_page_state(memcg, cpu);
 
 	return 0;
 }
@@ -3636,27 +3636,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 	}
 }
 
-static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg)
-{
-	int node;
-
-	for_each_node(node) {
-		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
-		unsigned long stat[NR_VM_NODE_STAT_ITEMS] = { 0 };
-		struct mem_cgroup_per_node *pi;
-		int cpu, i;
-
-		for_each_online_cpu(cpu)
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
-				stat[i] += per_cpu(
-					pn->lruvec_stat_cpu->count[i], cpu);
-
-		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
-				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
-	}
-}
-
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
@@ -5192,12 +5171,15 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 
 static void mem_cgroup_free(struct mem_cgroup *memcg)
 {
+	int cpu;
+
 	memcg_wb_domain_exit(memcg);
 	/*
 	 * Flush percpu lruvec stats to guarantee the value
 	 * correctness on parent's and all ancestor levels.
 	 */
-	memcg_flush_lruvec_page_state(memcg);
+	for_each_online_cpu(cpu)
+		memcg_flush_lruvec_page_state(memcg, cpu);
 	__mem_cgroup_free(memcg);
 }
 
-- 
2.30.0
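
The consolidation is easiest to see at the two surviving call sites:
the hotplug path iterates cgroups for one dead CPU, while the teardown
path iterates online CPUs for one cgroup. The following is a condensed
sketch of those call sites as they stand after this patch, pulled from
the hunks above; the unrelated surrounding code is elided with "...":

	/* CPU hotplug: flush the one dead CPU's deltas, for every cgroup. */
	static int memcg_hotplug_cpu_dead(unsigned int cpu)
	{
		struct mem_cgroup *memcg;
		...
		for_each_mem_cgroup(memcg)
			memcg_flush_lruvec_page_state(memcg, cpu);
		return 0;
	}

	/* Cgroup teardown: flush every online CPU's deltas, for this cgroup. */
	static void mem_cgroup_free(struct mem_cgroup *memcg)
	{
		int cpu;
		...
		for_each_online_cpu(cpu)
			memcg_flush_lruvec_page_state(memcg, cpu);
		__mem_cgroup_free(memcg);
	}

In both cases the shared helper snapshots and clears one CPU's per-node
counters, then propagates the deltas up through parent_nodeinfo(); only
the loop around it differs.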