From: Michal Hocko <mhocko@kernel.org>
To: Roman Gushchin <guro@fb.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
linux-mm@kvack.org, Johannes Weiner <hannes@cmpxchg.org>,
linux-kernel@vger.kernel.org, kernel-team@fb.com
Subject: Re: [PATCH 1/2] mm: memcontrol: flush percpu vmstats before releasing memcg
Date: Wed, 14 Aug 2019 13:26:29 +0200
Message-ID: <20190814112629.GU17933@dhcp22.suse.cz>
In-Reply-To: <20190812222911.2364802-2-guro@fb.com>
On Mon 12-08-19 15:29:10, Roman Gushchin wrote:
> Percpu caching of local vmstats with conditional propagation up
> the cgroup tree leads to an accumulation of errors on non-leaf
> levels.
>
> Let's imagine two nested memory cgroups, A and A/B. Say a process
> belonging to A/B allocates 100 pagecache pages on CPU 0.
> The percpu cache will spill 3 times, so 32*3=96 pages will be
> accounted to the A/B and A atomic vmstat counters, while 4 pages
> remain in the percpu cache.
>
> Imagine A/B is close to its memory.max limit, so that every
> following allocation triggers direct reclaim on the local CPU. Say
> each such attempt frees 16 pages on a new CPU. Every percpu cache
> will then hold -16 pages, except the first one, which holds
> 4 - 16 = -12. The A/B and A atomic counters are not touched at all.
>
> Now a user removes A/B. All percpu caches are freed and the
> corresponding vmstat numbers are forgotten. A is left with 96 pages
> more than expected.
>
> As memory cgroups are created and destroyed, errors do accumulate.
> Even 1-2 page differences can accumulate into large numbers.
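For illustration, the scenario above condenses into a tiny userspace
model (a hypothetical sketch, not kernel code; the batch size of 32
mirrors MEMCG_CHARGE_BATCH, and the spill condition is simplified):

/* percpu_drift.c - model of the drift described above */
#include <stdio.h>
#include <stdlib.h>

#define NCPU	8
#define BATCH	32

static long atomic_a, atomic_ab;	/* "atomic" counters of A and A/B */
static long cache[NCPU];		/* percpu cache of A/B */

static void mod_state(int cpu, long val)
{
	cache[cpu] += val;
	if (labs(cache[cpu]) >= BATCH) {	/* spill up the tree */
		atomic_ab += cache[cpu];
		atomic_a += cache[cpu];
		cache[cpu] = 0;
	}
}

int main(void)
{
	int cpu, i;

	for (i = 0; i < 100; i++)	/* 100 pagecache pages on CPU 0 */
		mod_state(0, 1);
	for (cpu = 0; cpu < 5; cpu++)	/* direct reclaim, 16 pages per CPU */
		mod_state(cpu, -16);

	/* rmdir A/B: percpu caches are freed, their residues are lost */
	for (cpu = 0; cpu < NCPU; cpu++)
		cache[cpu] = 0;

	printf("A's counter: %ld, real usage: %d\n", atomic_a, 100 - 5 * 16);
	return 0;
}

It prints "A's counter: 96, real usage: 20": the 96 pages from the
spills stay accounted to A forever, while the negative percpu
residues disappear together with the caches.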
>
> To fix this issue, let's accumulate and propagate the percpu vmstat
> values before releasing the memory cgroup. At this point these
> numbers are stable and cannot be changed.
It is worth spending a word or two on why this doesn't matter during
the memcg lifetime.
> Since on cpu hotplug we do flush percpu vmstats anyway, we can
> iterate only over online cpus.
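For context, the hotplug callback folds a dying CPU's cached deltas
back into the atomic counters, so offline CPUs cannot carry stale
residues. A heavily condensed sketch of such a handler follows; the
structure names match the patch below, but this is not the verbatim
kernel code (the real memcg_hotplug_cpu_dead() also drains the charge
stock and flushes the lruvec and event counters):

/* Sketch: on CPU offline, flush that CPU's vmstat cache up the tree. */
static int memcg_hotplug_cpu_dead(unsigned int cpu)
{
	struct mem_cgroup *memcg, *mi;
	int i;

	for_each_mem_cgroup(memcg) {
		struct memcg_vmstats_percpu *statc =
			per_cpu_ptr(memcg->vmstats_percpu, cpu);

		for (i = 0; i < MEMCG_NR_STAT; i++) {
			long x = statc->stat[i];

			statc->stat[i] = 0;
			if (x)	/* propagate to all ancestors */
				for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
					atomic_long_add(x, &mi->vmstats[i]);
		}
	}
	return 0;
}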
>
> Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
Acked-by: Michal Hocko <mhocko@suse.com>
> ---
> mm/memcontrol.c | 40 ++++++++++++++++++++++++++++++++++++++++
> 1 file changed, 40 insertions(+)
>
> diff --git a/mm/memcontrol.c b/mm/memcontrol.c
> index 3e821f34399f..348f685ab94b 100644
> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3412,6 +3412,41 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
> return 0;
> }
>
> +static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
> +{
> + unsigned long stat[MEMCG_NR_STAT];
> + struct mem_cgroup *mi;
> + int node, cpu, i;
> +
> + for (i = 0; i < MEMCG_NR_STAT; i++)
> + stat[i] = 0;
> +
> + for_each_online_cpu(cpu)
> + for (i = 0; i < MEMCG_NR_STAT; i++)
> + stat[i] += raw_cpu_read(memcg->vmstats_percpu->stat[i]);
> +
> + for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
> + for (i = 0; i < MEMCG_NR_STAT; i++)
> + atomic_long_add(stat[i], &mi->vmstats[i]);
> +
> + for_each_node(node) {
> + struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
> + struct mem_cgroup_per_node *pi;
> +
> + for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
> + stat[i] = 0;
> +
> + for_each_online_cpu(cpu)
> + for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
> + stat[i] += raw_cpu_read(
> + pn->lruvec_stat_cpu->count[i]);
> +
> + for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
> + for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
> + atomic_long_add(stat[i], &pi->lruvec_stat[i]);
> + }
> +}
> +
> static void memcg_offline_kmem(struct mem_cgroup *memcg)
> {
> struct cgroup_subsys_state *css;
> @@ -4805,6 +4840,11 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
> {
> int node;
>
> + /*
> + * Flush percpu vmstats to guarantee the value correctness
> + * on parent's and all ancestor levels.
> + */
> + memcg_flush_percpu_vmstats(memcg);
> for_each_node(node)
> free_mem_cgroup_per_node_info(memcg, node);
> free_percpu(memcg->vmstats_percpu);
> --
> 2.21.0
--
Michal Hocko
SUSE Labs