* [PATCH 0/2] flush percpu vmstats
From: Roman Gushchin @ 2019-08-12 22:29 UTC
To: Andrew Morton, linux-mm
Cc: Michal Hocko, Johannes Weiner, linux-kernel, kernel-team, Roman Gushchin

While working on v2 of the slab vmstats flushing patch, I've realized
that the problem is much more generic and affects all vmstats, not
only the slab ones. So the patch has been converted into a set of 2.

v2:
  1) added patch 1, patch 2 rebased on top
  2) s/for_each_cpu()/for_each_online_cpu() (by Andrew Morton)

Roman Gushchin (2):
  mm: memcontrol: flush percpu vmstats before releasing memcg
  mm: memcontrol: flush percpu slab vmstats on kmem offlining

 mm/memcontrol.c | 59 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 59 insertions(+)

--
2.21.0
* [PATCH 1/2] mm: memcontrol: flush percpu vmstats before releasing memcg
From: Roman Gushchin @ 2019-08-12 22:29 UTC
To: Andrew Morton, linux-mm
Cc: Michal Hocko, Johannes Weiner, linux-kernel, kernel-team, Roman Gushchin

Percpu caching of local vmstats with conditional propagation up the
cgroup tree leads to an accumulation of errors on non-leaf levels.

Let's imagine two nested memory cgroups A and A/B. Say, a process
belonging to A/B allocates 100 pagecache pages on CPU 0. The percpu
cache will spill 3 times, so that 32*3=96 pages will be accounted to
the A/B and A atomic vmstat counters, and 4 pages will remain in the
percpu cache.

Imagine A/B is close to memory.max, so that every following allocation
triggers a direct reclaim on the local CPU. Say, each such attempt
frees 16 pages on a new cpu. That means every percpu cache will hold
-16 pages, except the first one, which will hold 4 - 16 = -12. The A/B
and A atomic counters will not be touched at all.

Now a user removes A/B. All percpu caches are freed and the
corresponding vmstat numbers are forgotten. A has 96 pages more than
expected.

As memory cgroups are created and destroyed, such errors do
accumulate. Even 1-2 page differences can accumulate into large
numbers.

To fix this issue, let's accumulate and propagate the percpu vmstat
values before releasing the memory cgroup. At this point these numbers
are stable and cannot change anymore.

Since we flush percpu vmstats on cpu hotplug anyway, we can iterate
only over online cpus.
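
To make the arithmetic above concrete, here is a minimal userspace
sketch of the batching scheme (names and the exact spill condition are
illustrative stand-ins for the kernel's MEMCG_CHARGE_BATCH machinery,
chosen to follow the 32-page arithmetic used above; it is not the
kernel code itself):

#include <stdio.h>
#include <stdlib.h>

#define BATCH 32        /* stands in for MEMCG_CHARGE_BATCH */

struct level {
        long atomic;    /* propagated to this level's atomic counter */
        long percpu;    /* local residue, discarded on cgroup removal */
};

/* Account a vmstat delta, spilling to the atomic counter in batches. */
static void mod_stat(struct level *l, long delta)
{
        l->percpu += delta;
        if (labs(l->percpu) >= BATCH) {
                l->atomic += l->percpu;
                l->percpu = 0;
        }
}

int main(void)
{
        struct level ab = { 0, 0 };
        int i;

        for (i = 0; i < 100; i++)       /* 100 pagecache pages on one cpu */
                mod_stat(&ab, 1);

        /* prints atomic=96 percpu=4: when A/B is removed, the 4 cached
         * pages are forgotten, while A has already received the 96 */
        printf("atomic=%ld percpu=%ld\n", ab.atomic, ab.percpu);
        return 0;
}

Every removed cgroup leaves such a residue behind on every cpu, which
is why the error on ancestor levels grows over time.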
Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3e821f34399f..348f685ab94b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3412,6 +3412,41 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 	return 0;
 }
 
+static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
+{
+	unsigned long stat[MEMCG_NR_STAT];
+	struct mem_cgroup *mi;
+	int node, cpu, i;
+
+	for (i = 0; i < MEMCG_NR_STAT; i++)
+		stat[i] = 0;
+
+	for_each_online_cpu(cpu)
+		for (i = 0; i < MEMCG_NR_STAT; i++)
+			stat[i] += raw_cpu_read(memcg->vmstats_percpu->stat[i]);
+
+	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
+		for (i = 0; i < MEMCG_NR_STAT; i++)
+			atomic_long_add(stat[i], &mi->vmstats[i]);
+
+	for_each_node(node) {
+		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
+		struct mem_cgroup_per_node *pi;
+
+		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+			stat[i] = 0;
+
+		for_each_online_cpu(cpu)
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				stat[i] += raw_cpu_read(
+					pn->lruvec_stat_cpu->count[i]);
+
+		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
+	}
+}
+
 static void memcg_offline_kmem(struct mem_cgroup *memcg)
 {
 	struct cgroup_subsys_state *css;
@@ -4805,6 +4840,11 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 {
 	int node;
 
+	/*
+	 * Flush percpu vmstats to guarantee the value correctness
+	 * on parent's and all ancestor levels.
+	 */
+	memcg_flush_percpu_vmstats(memcg);
 	for_each_node(node)
 		free_mem_cgroup_per_node_info(memcg, node);
 	free_percpu(memcg->vmstats_percpu);
--
2.21.0
* Re: [PATCH 1/2] mm: memcontrol: flush percpu vmstats before releasing memcg
From: Andrew Morton @ 2019-08-13 21:27 UTC
To: Roman Gushchin
Cc: linux-mm, Michal Hocko, Johannes Weiner, linux-kernel, kernel-team

On Mon, 12 Aug 2019 15:29:10 -0700 Roman Gushchin <guro@fb.com> wrote:

> Percpu caching of local vmstats with conditional propagation up the
> cgroup tree leads to an accumulation of errors on non-leaf levels.
>
> [...]
>
> As memory cgroups are created and destroyed, such errors do
> accumulate. Even 1-2 page differences can accumulate into large
> numbers.
>
> To fix this issue, let's accumulate and propagate the percpu vmstat
> values before releasing the memory cgroup. At this point these numbers
> are stable and cannot change anymore.
>
> Since we flush percpu vmstats on cpu hotplug anyway, we can iterate
> only over online cpus.
>
> Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")

Is this not serious enough for a cc:stable?
* Re: [PATCH 1/2] mm: memcontrol: flush percpu vmstats before releasing memcg
From: Roman Gushchin @ 2019-08-13 21:46 UTC
To: Andrew Morton
Cc: linux-mm, Michal Hocko, Johannes Weiner, linux-kernel, Kernel Team, stable

On Tue, Aug 13, 2019 at 02:27:52PM -0700, Andrew Morton wrote:
> On Mon, 12 Aug 2019 15:29:10 -0700 Roman Gushchin <guro@fb.com> wrote:
>
> > [...]
> >
> > Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
>
> Is this not serious enough for a cc:stable?

I hope the "Fixes" tag will work, but yeah, my bad: cc:stable is
definitely a good idea here.

Added stable@ to cc.

Thanks!
* Re: [PATCH 1/2] mm: memcontrol: flush percpu vmstats before releasing memcg
From: Yang Shi @ 2020-09-01 15:32 UTC
To: Roman Gushchin
Cc: Andrew Morton, linux-mm, Michal Hocko, Johannes Weiner, linux-kernel, Kernel Team, stable

This report is kind of late; I hope everyone still remembers the
context. I just happened to see a similar problem on our v4.19 kernel.
Please see the below output from memory.stat:

total_cache 7361626112
total_rss 8268165120
total_rss_huge 0
total_shmem 0
total_mapped_file 4154929152
total_dirty 389689344
total_writeback 101376000
... [snip] ...
total_inactive_anon 4096
total_active_anon 1638400
total_inactive_file 208990208
total_active_file 275030016

And memory.usage_in_bytes: 1248215040

The total_* counters are way bigger than the counters of the LRUs and
the usage. Some ephemeral cgroups were created/deleted frequently
under this problematic cgroup, and this host has been up for more than
200 days. I didn't see such problems on hosts with a shorter uptime
(the other v4.19 host has been up for 19 days) or on v5.4 hosts.

v4.19 also updates stats from per-cpu caches, and total_* sums all sub
cgroups together, so it seems this is the same problem.

Anyway this is not a significant problem, since we can get the correct
numbers from the other counters, i.e. the LRUs; it is just confusing.
Not sure if it is worth backporting the fix to v4.19.

On Tue, Aug 13, 2019 at 2:46 PM Roman Gushchin <guro@fb.com> wrote:
>
> On Tue, Aug 13, 2019 at 02:27:52PM -0700, Andrew Morton wrote:
> > [...]
> >
> > Is this not serious enough for a cc:stable?
>
> I hope the "Fixes" tag will work, but yeah, my bad: cc:stable is
> definitely a good idea here.
>
> Added stable@ to cc.
>
> Thanks!
* Re: [PATCH 1/2] mm: memcontrol: flush percpu vmstats before releasing memcg
From: Michal Hocko @ 2019-08-14 11:26 UTC
To: Roman Gushchin
Cc: Andrew Morton, linux-mm, Johannes Weiner, linux-kernel, kernel-team

On Mon 12-08-19 15:29:10, Roman Gushchin wrote:
> Percpu caching of local vmstats with conditional propagation up the
> cgroup tree leads to an accumulation of errors on non-leaf levels.
>
> [...]
>
> To fix this issue, let's accumulate and propagate the percpu vmstat
> values before releasing the memory cgroup. At this point these numbers
> are stable and cannot change anymore.

It is worth spending a word or two on why this doesn't matter during
the memcg lifetime.

> Since we flush percpu vmstats on cpu hotplug anyway, we can iterate
> only over online cpus.
> Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>

Acked-by: Michal Hocko <mhocko@suse.com>

> [...]

--
Michal Hocko
SUSE Labs
* [PATCH 2/2] mm: memcontrol: flush percpu slab vmstats on kmem offlining
From: Roman Gushchin @ 2019-08-12 22:29 UTC
To: Andrew Morton, linux-mm
Cc: Michal Hocko, Johannes Weiner, linux-kernel, kernel-team, Roman Gushchin

I've noticed that the "slab" value in memory.stat is sometimes 0, even
if some children memory cgroups have a non-zero "slab" value. The
following investigation showed that this is the result of kmem_cache
reparenting in combination with the per-cpu batching of slab vmstats.

At offlining, some vmstat values may be left in the percpu cache, not
propagated up the cgroup hierarchy. It means that the stats on
ancestor levels are lower than the actual values. Later, when the slab
pages are released, the precise number of pages is subtracted on the
parent level, making the value negative. We don't show negative
values; 0 is printed instead.

To fix this issue, let's flush the percpu slab memcg and lruvec stats
on memcg offlining. This guarantees that the numbers on all ancestor
levels are accurate and match the actual number of outstanding slab
pages.

Fixes: fb2f2b0adb98 ("mm: memcg/slab: reparent memcg kmem_caches on cgroup removal")
Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c | 35 +++++++++++++++++++++++++++--------
 1 file changed, 27 insertions(+), 8 deletions(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 348f685ab94b..6d2427abcc0c 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3412,37 +3412,49 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 	return 0;
 }
 
-static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
+static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg, bool slab_only)
 {
 	unsigned long stat[MEMCG_NR_STAT];
 	struct mem_cgroup *mi;
 	int node, cpu, i;
+	int min_idx, max_idx;
 
-	for (i = 0; i < MEMCG_NR_STAT; i++)
+	if (slab_only) {
+		min_idx = NR_SLAB_RECLAIMABLE;
+		max_idx = NR_SLAB_UNRECLAIMABLE;
+	} else {
+		min_idx = 0;
+		max_idx = MEMCG_NR_STAT;
+	}
+
+	for (i = min_idx; i < max_idx; i++)
 		stat[i] = 0;
 
 	for_each_online_cpu(cpu)
-		for (i = 0; i < MEMCG_NR_STAT; i++)
+		for (i = min_idx; i < max_idx; i++)
 			stat[i] += raw_cpu_read(memcg->vmstats_percpu->stat[i]);
 
 	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
-		for (i = 0; i < MEMCG_NR_STAT; i++)
+		for (i = min_idx; i < max_idx; i++)
 			atomic_long_add(stat[i], &mi->vmstats[i]);
 
+	if (!slab_only)
+		max_idx = NR_VM_NODE_STAT_ITEMS;
+
 	for_each_node(node) {
 		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
 		struct mem_cgroup_per_node *pi;
 
-		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+		for (i = min_idx; i < max_idx; i++)
 			stat[i] = 0;
 
 		for_each_online_cpu(cpu)
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+			for (i = min_idx; i < max_idx; i++)
 				stat[i] += raw_cpu_read(
 					pn->lruvec_stat_cpu->count[i]);
 
 		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+			for (i = min_idx; i < max_idx; i++)
 				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
 	}
 }
@@ -3467,7 +3479,14 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 	if (!parent)
 		parent = root_mem_cgroup;
 
+	/*
+	 * Deactivate and reparent kmem_caches. Then flush percpu
+	 * slab statistics to have precise values at the parent and
+	 * all ancestor levels. It's required to keep slab stats
+	 * accurate after the reparenting of kmem_caches.
+	 */
 	memcg_deactivate_kmem_caches(memcg, parent);
+	memcg_flush_percpu_vmstats(memcg, true);
 
 	kmemcg_id = memcg->kmemcg_id;
 	BUG_ON(kmemcg_id < 0);
@@ -4844,7 +4863,7 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 	 * Flush percpu vmstats to guarantee the value correctness
 	 * on parent's and all ancestor levels.
 	 */
-	memcg_flush_percpu_vmstats(memcg);
+	memcg_flush_percpu_vmstats(memcg, false);
 	for_each_node(node)
 		free_mem_cgroup_per_node_info(memcg, node);
 	free_percpu(memcg->vmstats_percpu);
--
2.21.0
* Re: [PATCH 2/2] mm: memcontrol: flush percpu slab vmstats on kmem offlining
From: Michal Hocko @ 2019-08-14 11:32 UTC
To: Roman Gushchin
Cc: Andrew Morton, linux-mm, Johannes Weiner, linux-kernel, kernel-team

On Mon 12-08-19 15:29:11, Roman Gushchin wrote:
> I've noticed that the "slab" value in memory.stat is sometimes 0, even
> if some children memory cgroups have a non-zero "slab" value. The
> following investigation showed that this is the result of kmem_cache
> reparenting in combination with the per-cpu batching of slab vmstats.
>
> At offlining, some vmstat values may be left in the percpu cache, not
> propagated up the cgroup hierarchy. It means that the stats on
> ancestor levels are lower than the actual values. Later, when the slab
> pages are released, the precise number of pages is subtracted on the
> parent level, making the value negative. We don't show negative
> values; 0 is printed instead.

So the difference from the other counters is that the slab ones are
reparented, and that's why we have to treat them specially? I guess
that is what the comment in the code suggests, but being explicit in
the changelog would be nice.

[...]
> -static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
> +static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg, bool slab_only)
>  {
>  	unsigned long stat[MEMCG_NR_STAT];
>  	struct mem_cgroup *mi;
>  	int node, cpu, i;
> +	int min_idx, max_idx;
>
> -	for (i = 0; i < MEMCG_NR_STAT; i++)
> +	if (slab_only) {
> +		min_idx = NR_SLAB_RECLAIMABLE;
> +		max_idx = NR_SLAB_UNRECLAIMABLE;
> +	} else {
> +		min_idx = 0;
> +		max_idx = MEMCG_NR_STAT;
> +	}

This is just ugly as hell! I really detest how this implicitly makes
the position of these counters very special without any note in the
node_stat_item definition. Is it such a big deal to have a per-counter
flush and to do the loop over all counters (resp. the specific
counters) around it? This should really be a slow path, so is saving a
few instructions or cache misses such a big deal?

--
Michal Hocko
SUSE Labs
* Re: [PATCH 2/2] mm: memcontrol: flush percpu slab vmstats on kmem offlining
From: Roman Gushchin @ 2019-08-14 21:54 UTC
To: Michal Hocko
Cc: Andrew Morton, linux-mm, Johannes Weiner, linux-kernel, Kernel Team

On Wed, Aug 14, 2019 at 01:32:42PM +0200, Michal Hocko wrote:
> On Mon 12-08-19 15:29:11, Roman Gushchin wrote:
> > [...]
>
> So the difference from the other counters is that the slab ones are
> reparented, and that's why we have to treat them specially? I guess
> that is what the comment in the code suggests, but being explicit in
> the changelog would be nice.

Right. And I believe the list can be extended further. Objects which
often outlive the original memory cgroup (e.g. pagecache pages) are
pinning dead cgroups, so it will be cool to reparent them all.

> [...]
>
> This is just ugly as hell! I really detest how this implicitly makes
> the position of these counters very special without any note in the
> node_stat_item definition. Is it such a big deal to have a per-counter
> flush and to do the loop over all counters (resp. the specific
> counters) around it? This should really be a slow path, so is saving a
> few instructions or cache misses such a big deal?

I believe that it is a big deal, because it's
NR_VMSTAT_ITEMS * all memory cgroups * online cpus * numa nodes.

If the goal is to merge it with the cpu hotplug code, I'd think about
passing a cpumask to it, and doing the opposite. Also, I'm not sure I
understand why reordering the loops would make it less ugly.

But you're right, a comment near the NR_SLAB_(UN)RECLAIMABLE
definitions is totally worth it. How about something like:

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8b5f758942a2..231bcbe5dcc6 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -215,8 +215,9 @@ enum node_stat_item {
 	NR_INACTIVE_FILE,	/*  "     "     "   "       "         */
 	NR_ACTIVE_FILE,		/*  "     "     "   "       "         */
 	NR_UNEVICTABLE,		/*  "     "     "   "       "         */
-	NR_SLAB_RECLAIMABLE,
-	NR_SLAB_UNRECLAIMABLE,
+	NR_SLAB_RECLAIMABLE,	/* Please, do not reorder this item */
+	NR_SLAB_UNRECLAIMABLE,	/* and this one without looking at
+				 * memcg_flush_percpu_vmstats() first. */
 	NR_ISOLATED_ANON,	/* Temporary isolated pages from anon lru */
 	NR_ISOLATED_FILE,	/* Temporary isolated pages from file lru */
 	WORKINGSET_NODES,

Thanks!
* Re: [PATCH 2/2] mm: memcontrol: flush percpu slab vmstats on kmem offlining
From: Michal Hocko @ 2019-08-15 8:35 UTC
To: Roman Gushchin
Cc: Andrew Morton, linux-mm, Johannes Weiner, linux-kernel, Kernel Team

On Wed 14-08-19 21:54:12, Roman Gushchin wrote:
> On Wed, Aug 14, 2019 at 01:32:42PM +0200, Michal Hocko wrote:
> > [...]
> >
> > This is just ugly as hell! I really detest how this implicitly makes
> > the position of these counters very special without any note in the
> > node_stat_item definition. Is it such a big deal to have a per-counter
> > flush and to do the loop over all counters (resp. the specific
> > counters) around it? This should really be a slow path, so is saving a
> > few instructions or cache misses such a big deal?
>
> I believe that it is a big deal, because it's
> NR_VMSTAT_ITEMS * all memory cgroups * online cpus * numa nodes.

I am not sure I follow. I just meant to remove all the
	for (i = 0; i < MEMCG_NR_STAT; i++)
loops from the flushing and do that loop around the flushing function
instead. That would mean that the NR_SLAB_$FOO counters wouldn't have
to play tricks, and we would simply call the flushing for the two
counters.

> If the goal is to merge it with the cpu hotplug code, I'd think about
> passing a cpumask to it, and doing the opposite. Also, I'm not sure I
> understand why reordering the loops would make it less ugly.

And adding cpu/node masks would just work with that as well, right.

> But you're right, a comment near the NR_SLAB_(UN)RECLAIMABLE
> definitions is totally worth it. How about something like:
>
> [...]

Thanks, that is an improvement.

--
Michal Hocko
SUSE Labs