From: Johannes Weiner <hannes@cmpxchg.org>
To: Andrew Morton, Tejun Heo
Cc: Michal Hocko, Roman Gushchin, linux-mm@kvack.org,
	cgroups@vger.kernel.org, linux-kernel@vger.kernel.org,
	kernel-team@fb.com
Subject: [PATCH 7/7] mm: memcontrol: consolidate lruvec stat flushing
Date: Tue, 2 Feb 2021 13:47:46 -0500
Message-Id: <20210202184746.119084-8-hannes@cmpxchg.org>
X-Mailer: git-send-email 2.30.0
In-Reply-To: <20210202184746.119084-1-hannes@cmpxchg.org>
References: <20210202184746.119084-1-hannes@cmpxchg.org>

There are two functions that flush the per-cpu data of an lruvec into
the rest of the cgroup tree: one runs when the cgroup is being freed,
the other when a CPU disappears during hotplug. The only difference
between them is whether all CPUs or just one is collected; the rest of
the flushing code is the same. Merge them into one function and share
the common code.

Signed-off-by: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c | 88 +++++++++++++++++++++++--------------------
 1 file changed, 42 insertions(+), 46 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index b205b2413186..88e8afc49a46 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2410,39 +2410,56 @@ static void drain_all_stock(struct mem_cgroup *root_memcg)
 	mutex_unlock(&percpu_charge_mutex);
 }
 
-static int memcg_hotplug_cpu_dead(unsigned int cpu)
+static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg, int cpu)
 {
-	struct memcg_stock_pcp *stock;
-	struct mem_cgroup *memcg;
-
-	stock = &per_cpu(memcg_stock, cpu);
-	drain_stock(stock);
+	int nid;
 
-	for_each_mem_cgroup(memcg) {
+	for_each_node(nid) {
+		struct mem_cgroup_per_node *pn = memcg->nodeinfo[nid];
+		unsigned long stat[NR_VM_NODE_STAT_ITEMS] = { 0, };
+		struct batched_lruvec_stat *lstatc;
 		int i;
 
-		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
-			int nid;
-
-			for_each_node(nid) {
-				struct batched_lruvec_stat *lstatc;
-				struct mem_cgroup_per_node *pn;
-				long x;
-
-				pn = memcg->nodeinfo[nid];
+		if (cpu == -1) {
+			int cpui;
+			/*
+			 * The memcg is about to be freed, collect all
+			 * CPUs, no need to zero anything out.
+			 */
+			for_each_online_cpu(cpui) {
+				lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpui);
+				for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+					stat[i] += lstatc->count[i];
+			}
+		} else {
+			/*
+			 * The CPU has gone away, collect and zero out
+			 * its stats, it may come back later.
+			 */
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++) {
 				lstatc = per_cpu_ptr(pn->lruvec_stat_cpu, cpu);
-
-				x = lstatc->count[i];
+				stat[i] = lstatc->count[i];
 				lstatc->count[i] = 0;
-
-				if (x) {
-					do {
-						atomic_long_add(x, &pn->lruvec_stat[i]);
-					} while ((pn = parent_nodeinfo(pn, nid)));
-				}
 			}
 		}
+
+		do {
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				atomic_long_add(stat[i], &pn->lruvec_stat[i]);
+		} while ((pn = parent_nodeinfo(pn, nid)));
 	}
+}
+
+static int memcg_hotplug_cpu_dead(unsigned int cpu)
+{
+	struct memcg_stock_pcp *stock;
+	struct mem_cgroup *memcg;
+
+	stock = &per_cpu(memcg_stock, cpu);
+	drain_stock(stock);
+
+	for_each_mem_cgroup(memcg)
+		memcg_flush_lruvec_page_state(memcg, cpu);
 
 	return 0;
 }
@@ -3636,27 +3653,6 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 	}
 }
 
-static void memcg_flush_lruvec_page_state(struct mem_cgroup *memcg)
-{
-	int node;
-
-	for_each_node(node) {
-		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
-		unsigned long stat[NR_VM_NODE_STAT_ITEMS] = {0, };
-		struct mem_cgroup_per_node *pi;
-		int cpu, i;
-
-		for_each_online_cpu(cpu)
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
-				stat[i] += per_cpu(
-					pn->lruvec_stat_cpu->count[i], cpu);
-
-		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
-				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
-	}
-}
-
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
@@ -5197,7 +5193,7 @@ static void mem_cgroup_free(struct mem_cgroup *memcg)
 	 * Flush percpu lruvec stats to guarantee the value
 	 * correctness on parent's and all ancestor levels.
 	 */
-	memcg_flush_lruvec_page_state(memcg);
+	memcg_flush_lruvec_page_state(memcg, -1);
 	__mem_cgroup_free(memcg);
 }
 
-- 
2.30.0
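For readers following along outside the kernel tree, the consolidated
scheme is small enough to model in plain C11. The sketch below is a
hypothetical stand-in, not the kernel code: fixed arrays replace
per_cpu_ptr(), a parent pointer replaces parent_nodeinfo(), and C11
atomics replace atomic_long_add(). Only the shape matches the patch:
cpu == -1 selects the collect-everything free path, any other value
collects and zeroes a single CPU, and one shared tail loop propagates
the totals up the ancestor chain.

/*
 * Hypothetical userspace model of the consolidated flush; all names
 * and sizes are invented for illustration. Build with: cc -std=c11
 */
#include <stdatomic.h>
#include <stdio.h>

#define NCPUS	4
#define NSTATS	2

struct node_stats {
	long cpu_count[NCPUS][NSTATS];	/* batched per-cpu deltas */
	atomic_long stat[NSTATS];	/* hierarchical totals */
	struct node_stats *parent;
};

static void flush_page_state(struct node_stats *pn, int cpu)
{
	long stat[NSTATS] = { 0, };
	int i;

	if (cpu == -1) {
		/* free path: fold in every CPU, no need to zero */
		int cpui;

		for (cpui = 0; cpui < NCPUS; cpui++)
			for (i = 0; i < NSTATS; i++)
				stat[i] += pn->cpu_count[cpui][i];
	} else {
		/* hotplug path: collect and zero one CPU's deltas */
		for (i = 0; i < NSTATS; i++) {
			stat[i] = pn->cpu_count[cpu][i];
			pn->cpu_count[cpu][i] = 0;
		}
	}

	/* shared tail: add the totals at this node and all ancestors */
	do {
		for (i = 0; i < NSTATS; i++)
			atomic_fetch_add(&pn->stat[i], stat[i]);
	} while ((pn = pn->parent));
}

int main(void)
{
	struct node_stats parent = { .parent = NULL };
	struct node_stats child = { .parent = &parent };

	child.cpu_count[0][0] = 3;
	child.cpu_count[1][0] = 4;

	flush_page_state(&child, -1);	/* like the cgroup-free caller */
	printf("child=%ld parent=%ld\n",
	       (long)atomic_load(&child.stat[0]),
	       (long)atomic_load(&parent.stat[0]));
	return 0;
}

In this model, as in the patch, the cpu argument keeps the decision of
what to collect in one place, while the subtler part that both callers
previously duplicated, walking pn up the hierarchy exactly once per
node, lives in a single shared loop.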