* [PATCH v2 0/3] vmstats/vmevents flushing
@ 2019-08-19 20:23 Roman Gushchin
  2019-08-19 20:23 ` [PATCH v2 1/3] mm: memcontrol: flush percpu vmstats before releasing memcg Roman Gushchin
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: Roman Gushchin @ 2019-08-19 20:23 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Michal Hocko, Johannes Weiner, linux-kernel, kernel-team, Roman Gushchin

This is v2 of the patchset (v1 has been sent as a set of separate patches).
The kbuild test robot reported build issues:
memcg_flush_percpu_vmstats() and memcg_flush_percpu_vmevents() were
accidentally placed under CONFIG_MEMCG_KMEM, which caused the
!CONFIG_MEMCG_KMEM build to fail.

V2 contains a trivial fix: both functions were moved out of
the CONFIG_MEMCG_KMEM section.

Also, the add-comments-to-slab-enums-definition patch was merged into
patch 2.

Andrew, can you please drop the following 4 patches from the
mm tree and replace them with this updated version?
  1) mm: memcontrol: flush percpu vmevents before releasing memcg
  2) mm-memcontrol-flush-percpu-slab-vmstats-on-kmem-offlining-fix
  3) mm: memcontrol: flush percpu slab vmstats on kmem offlining
  4) mm: memcontrol: flush percpu vmstats before releasing memcg

Thanks!

Roman Gushchin (3):
  mm: memcontrol: flush percpu vmstats before releasing memcg
  mm: memcontrol: flush percpu slab vmstats on kmem offlining
  mm: memcontrol: flush percpu vmevents before releasing memcg

 include/linux/mmzone.h |  5 +--
 mm/memcontrol.c        | 79 ++++++++++++++++++++++++++++++++++++++++++
 2 files changed, 82 insertions(+), 2 deletions(-)

-- 
2.21.0


* [PATCH v2 1/3] mm: memcontrol: flush percpu vmstats before releasing memcg
  2019-08-19 20:23 [PATCH v2 0/3] vmstats/vmevents flushing Roman Gushchin
@ 2019-08-19 20:23 ` Roman Gushchin
  2019-08-19 20:23 ` [PATCH v2 2/3] mm: memcontrol: flush percpu slab vmstats on kmem offlining Roman Gushchin
  2019-08-19 20:23 ` [PATCH v2 3/3] mm: memcontrol: flush percpu vmevents before releasing memcg Roman Gushchin
  2 siblings, 0 replies; 6+ messages in thread
From: Roman Gushchin @ 2019-08-19 20:23 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Michal Hocko, Johannes Weiner, linux-kernel, kernel-team,
	Roman Gushchin, Vladimir Davydov, stable

Percpu caching of local vmstats with conditional propagation up the
cgroup tree leads to an accumulation of errors on non-leaf levels.

Let's imagine two nested memory cgroups A and A/B.  Say, a process
belonging to A/B allocates 100 pagecache pages on CPU 0.  The percpu
cache will spill 3 times, so that 32*3=96 pages will be accounted to the
A/B and A atomic vmstat counters, and 4 pages will remain in the percpu
cache.

Imagine A/B is close to memory.max, so that every following allocation
triggers a direct reclaim on the local CPU.  Say, each such attempt will
free 16 pages on a new cpu.  That means every percpu cache will hold -16
pages, except the first one, which will hold 4 - 16 = -12.  The A/B and
A atomic counters will not be touched at all.

Now a user removes A/B.  All percpu caches are freed and the
corresponding vmstat numbers are forgotten.  A has 96 pages more than
expected.

As memory cgroups are created and destroyed, errors do accumulate.  Even
1-2 page differences can accumulate into large numbers.
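
For illustration only, here is a tiny userspace model of the batching
described above.  The names and the exact spill condition are invented
for this sketch and are not the kernel's implementation:

#include <stdio.h>
#include <stdlib.h>

#define BATCH 32		/* spill threshold used in the example above */

static long local_cache;	/* stands in for the percpu cache */
static long atomic_counter;	/* stands in for the memcg atomic vmstat */

static void mod_stat(long delta)
{
	long x = local_cache + delta;

	if (labs(x) >= BATCH) {	/* spill the whole batch upwards */
		atomic_counter += x;
		x = 0;
	}
	local_cache = x;	/* the remainder stays "per-cpu" */
}

int main(void)
{
	int i;

	for (i = 0; i < 100; i++)	/* 100 pagecache allocations */
		mod_stat(1);

	/* prints "atomic=96 cached=4": 4 pages never reach the atomic counter */
	printf("atomic=%ld cached=%ld\n", atomic_counter, local_cache);
	return 0;
}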

To fix this issue let's accumulate and propagate percpu vmstat values
before releasing the memory cgroup.  At this point these numbers are
stable and cannot be changed.

Since on cpu hotplug we do flush percpu vmstats anyway, we can iterate
only over online cpus.

Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: <stable@vger.kernel.org>
---
 mm/memcontrol.c | 40 ++++++++++++++++++++++++++++++++++++++++
 1 file changed, 40 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3e821f34399f..818165d8de3f 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3383,6 +3383,41 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 	}
 }
 
+static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
+{
+	unsigned long stat[MEMCG_NR_STAT];
+	struct mem_cgroup *mi;
+	int node, cpu, i;
+
+	for (i = 0; i < MEMCG_NR_STAT; i++)
+		stat[i] = 0;
+
+	for_each_online_cpu(cpu)
+		for (i = 0; i < MEMCG_NR_STAT; i++)
+			stat[i] += raw_cpu_read(memcg->vmstats_percpu->stat[i]);
+
+	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
+		for (i = 0; i < MEMCG_NR_STAT; i++)
+			atomic_long_add(stat[i], &mi->vmstats[i]);
+
+	for_each_node(node) {
+		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
+		struct mem_cgroup_per_node *pi;
+
+		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+			stat[i] = 0;
+
+		for_each_online_cpu(cpu)
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				stat[i] += raw_cpu_read(
+					pn->lruvec_stat_cpu->count[i]);
+
+		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
+			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
+	}
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
@@ -4805,6 +4840,11 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 {
 	int node;
 
+	/*
+	 * Flush percpu vmstats to guarantee the value correctness
+	 * on parent's and all ancestor levels.
+	 */
+	memcg_flush_percpu_vmstats(memcg);
 	for_each_node(node)
 		free_mem_cgroup_per_node_info(memcg, node);
 	free_percpu(memcg->vmstats_percpu);
-- 
2.21.0


* [PATCH v2 2/3] mm: memcontrol: flush percpu slab vmstats on kmem offlining
  2019-08-19 20:23 [PATCH v2 0/3] vmstats/vmevents flushing Roman Gushchin
  2019-08-19 20:23 ` [PATCH v2 1/3] mm: memcontrol: flush percpu vmstats before releasing memcg Roman Gushchin
@ 2019-08-19 20:23 ` Roman Gushchin
  2019-08-19 22:27   ` Andrew Morton
  2019-08-19 20:23 ` [PATCH v2 3/3] mm: memcontrol: flush percpu vmevents before releasing memcg Roman Gushchin
  2 siblings, 1 reply; 6+ messages in thread
From: Roman Gushchin @ 2019-08-19 20:23 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Michal Hocko, Johannes Weiner, linux-kernel, kernel-team,
	Roman Gushchin, Vladimir Davydov

I've noticed that the "slab" value in memory.stat is sometimes 0,
even if some child memory cgroups have a non-zero "slab" value.
The following investigation showed that this is the result
of the kmem_cache reparenting in combination with the per-cpu
batching of slab vmstats.

At offlining, some vmstat values may remain in the percpu cache
without being propagated upwards through the cgroup hierarchy. It
means that stats on ancestor levels are lower than the actual values.
Later, when slab pages are released, the precise number of pages is
subtracted on the parent level, making the value negative. We don't
show negative values; 0 is printed instead.
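
For illustration only, here is a toy userspace model of the symptom.
The names and numbers are invented for this sketch and are not the
kernel code:

#include <stdio.h>

static long parent_slab;	/* stands in for the parent's atomic "slab" vmstat */

/* negative values are never shown; they are clamped to 0 */
static unsigned long read_stat(void)
{
	return parent_slab < 0 ? 0 : parent_slab;
}

int main(void)
{
	int i;

	parent_slab += 50;	/* a live child contributes 50 slab pages */

	for (i = 0; i < 10; i++) {		/* children created and destroyed */
		parent_slab += 100 - 24;	/* 24 pages stuck in a percpu cache at offlining */
		parent_slab -= 100;		/* the reparented slab pages get freed later */
	}

	/* the true value is 50, the counter holds -190: "slab 0" is printed */
	printf("slab %lu\n", read_stat());
	return 0;
}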

To fix this issue, let's flush percpu slab memcg and lruvec stats
on memcg offlining. This guarantees that numbers on all ancestor
levels are accurate and match the actual number of outstanding
slab pages.

Fixes: fb2f2b0adb98 ("mm: memcg/slab: reparent memcg kmem_caches on cgroup removal")
Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Michal Hocko <mhocko@kernel.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
---
 include/linux/mmzone.h |  5 +++--
 mm/memcontrol.c        | 35 +++++++++++++++++++++++++++--------
 2 files changed, 30 insertions(+), 10 deletions(-)

diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 8b5f758942a2..bda20282746b 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -215,8 +215,9 @@ enum node_stat_item {
 	NR_INACTIVE_FILE,	/*  "     "     "   "       "         */
 	NR_ACTIVE_FILE,		/*  "     "     "   "       "         */
 	NR_UNEVICTABLE,		/*  "     "     "   "       "         */
-	NR_SLAB_RECLAIMABLE,
-	NR_SLAB_UNRECLAIMABLE,
+	NR_SLAB_RECLAIMABLE,	/* Please do not reorder this item */
+	NR_SLAB_UNRECLAIMABLE,	/* and this one without looking at
+				 * memcg_flush_percpu_vmstats() first. */
 	NR_ISOLATED_ANON,	/* Temporary isolated pages from anon lru */
 	NR_ISOLATED_FILE,	/* Temporary isolated pages from file lru */
 	WORKINGSET_NODES,
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 818165d8de3f..ebd72b80c90b 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3383,37 +3383,49 @@ static u64 mem_cgroup_read_u64(struct cgroup_subsys_state *css,
 	}
 }
 
-static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg)
+static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg, bool slab_only)
 {
 	unsigned long stat[MEMCG_NR_STAT];
 	struct mem_cgroup *mi;
 	int node, cpu, i;
+	int min_idx, max_idx;
 
-	for (i = 0; i < MEMCG_NR_STAT; i++)
+	if (slab_only) {
+		min_idx = NR_SLAB_RECLAIMABLE;
+		max_idx = NR_SLAB_UNRECLAIMABLE;
+	} else {
+		min_idx = 0;
+		max_idx = MEMCG_NR_STAT;
+	}
+
+	for (i = min_idx; i < max_idx; i++)
 		stat[i] = 0;
 
 	for_each_online_cpu(cpu)
-		for (i = 0; i < MEMCG_NR_STAT; i++)
+		for (i = min_idx; i < max_idx; i++)
 			stat[i] += raw_cpu_read(memcg->vmstats_percpu->stat[i]);
 
 	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
-		for (i = 0; i < MEMCG_NR_STAT; i++)
+		for (i = min_idx; i < max_idx; i++)
 			atomic_long_add(stat[i], &mi->vmstats[i]);
 
+	if (!slab_only)
+		max_idx = NR_VM_NODE_STAT_ITEMS;
+
 	for_each_node(node) {
 		struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
 		struct mem_cgroup_per_node *pi;
 
-		for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+		for (i = min_idx; i < max_idx; i++)
 			stat[i] = 0;
 
 		for_each_online_cpu(cpu)
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+			for (i = min_idx; i < max_idx; i++)
 				stat[i] += raw_cpu_read(
 					pn->lruvec_stat_cpu->count[i]);
 
 		for (pi = pn; pi; pi = parent_nodeinfo(pi, node))
-			for (i = 0; i < NR_VM_NODE_STAT_ITEMS; i++)
+			for (i = min_idx; i < max_idx; i++)
 				atomic_long_add(stat[i], &pi->lruvec_stat[i]);
 	}
 }
@@ -3467,7 +3479,14 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 	if (!parent)
 		parent = root_mem_cgroup;
 
+	/*
+	 * Deactivate and reparent kmem_caches. Then flush percpu
+	 * slab statistics to have precise values at the parent and
+	 * all ancestor levels. It's required to keep slab stats
+	 * accurate after the reparenting of kmem_caches.
+	 */
 	memcg_deactivate_kmem_caches(memcg, parent);
+	memcg_flush_percpu_vmstats(memcg, true);
 
 	kmemcg_id = memcg->kmemcg_id;
 	BUG_ON(kmemcg_id < 0);
@@ -4844,7 +4863,7 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 	 * Flush percpu vmstats to guarantee the value correctness
 	 * on parent's and all ancestor levels.
 	 */
-	memcg_flush_percpu_vmstats(memcg);
+	memcg_flush_percpu_vmstats(memcg, false);
 	for_each_node(node)
 		free_mem_cgroup_per_node_info(memcg, node);
 	free_percpu(memcg->vmstats_percpu);
-- 
2.21.0


* [PATCH v2 3/3] mm: memcontrol: flush percpu vmevents before releasing memcg
  2019-08-19 20:23 [PATCH v2 0/3] vmstats/vmevents flushing Roman Gushchin
  2019-08-19 20:23 ` [PATCH v2 1/3] mm: memcontrol: flush percpu vmstats before releasing memcg Roman Gushchin
  2019-08-19 20:23 ` [PATCH v2 2/3] mm: memcontrol: flush percpu slab vmstats on kmem offlining Roman Gushchin
@ 2019-08-19 20:23 ` Roman Gushchin
  2 siblings, 0 replies; 6+ messages in thread
From: Roman Gushchin @ 2019-08-19 20:23 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Michal Hocko, Johannes Weiner, linux-kernel, kernel-team,
	Roman Gushchin, Vladimir Davydov, stable

Similar to vmstats, percpu caching of local vmevents leads to an
accumulation of errors on non-leaf levels.  This happens because some
leftovers may remain in percpu caches, so that they are never
propagated up the cgroup tree and simply disappear when the memory
cgroup is released.

To fix this issue, let's accumulate and propagate percpu vmevent values
before releasing the memory cgroup, similar to what we're doing with
vmstats.

Since on cpu hotplug we do flush percpu vmstats anyway, we can iterate
only over online cpus.

Fixes: 42a300353577 ("mm: memcontrol: fix recursive statistics correctness & scalabilty")
Signed-off-by: Roman Gushchin <guro@fb.com>
Acked-by: Michal Hocko <mhocko@suse.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
Cc: <stable@vger.kernel.org>
---
 mm/memcontrol.c | 22 +++++++++++++++++++++-
 1 file changed, 21 insertions(+), 1 deletion(-)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index ebd72b80c90b..3137de6a46f0 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3430,6 +3430,25 @@ static void memcg_flush_percpu_vmstats(struct mem_cgroup *memcg, bool slab_only)
 	}
 }
 
+static void memcg_flush_percpu_vmevents(struct mem_cgroup *memcg)
+{
+	unsigned long events[NR_VM_EVENT_ITEMS];
+	struct mem_cgroup *mi;
+	int cpu, i;
+
+	for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
+		events[i] = 0;
+
+	for_each_online_cpu(cpu)
+		for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
+			events[i] += raw_cpu_read(
+				memcg->vmstats_percpu->events[i]);
+
+	for (mi = memcg; mi; mi = parent_mem_cgroup(mi))
+		for (i = 0; i < NR_VM_EVENT_ITEMS; i++)
+			atomic_long_add(events[i], &mi->vmevents[i]);
+}
+
 #ifdef CONFIG_MEMCG_KMEM
 static int memcg_online_kmem(struct mem_cgroup *memcg)
 {
@@ -4860,10 +4879,11 @@ static void __mem_cgroup_free(struct mem_cgroup *memcg)
 	int node;
 
 	/*
-	 * Flush percpu vmstats to guarantee the value correctness
+	 * Flush percpu vmstats and vmevents to guarantee the value correctness
 	 * on parent's and all ancestor levels.
 	 */
 	memcg_flush_percpu_vmstats(memcg, false);
+	memcg_flush_percpu_vmevents(memcg);
 	for_each_node(node)
 		free_mem_cgroup_per_node_info(memcg, node);
 	free_percpu(memcg->vmstats_percpu);
-- 
2.21.0


* Re: [PATCH v2 2/3] mm: memcontrol: flush percpu slab vmstats on kmem offlining
  2019-08-19 20:23 ` [PATCH v2 2/3] mm: memcontrol: flush percpu slab vmstats on kmem offlining Roman Gushchin
@ 2019-08-19 22:27   ` Andrew Morton
  2019-08-19 22:46     ` Roman Gushchin
  0 siblings, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2019-08-19 22:27 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: linux-mm, Michal Hocko, Johannes Weiner, linux-kernel,
	kernel-team, Vladimir Davydov

On Mon, 19 Aug 2019 13:23:37 -0700 Roman Gushchin <guro@fb.com> wrote:

> I've noticed that the "slab" value in memory.stat is sometimes 0,
> even if some child memory cgroups have a non-zero "slab" value.
> The following investigation showed that this is the result
> of the kmem_cache reparenting in combination with the per-cpu
> batching of slab vmstats.
> 
> At offlining, some vmstat values may remain in the percpu cache
> without being propagated upwards through the cgroup hierarchy. It
> means that stats on ancestor levels are lower than the actual values.
> Later, when slab pages are released, the precise number of pages is
> subtracted on the parent level, making the value negative. We don't
> show negative values; 0 is printed instead.
> 
> To fix this issue, let's flush percpu slab memcg and lruvec stats
> on memcg offlining. This guarantees that numbers on all ancestor
> levels are accurate and match the actual number of outstanding
> slab pages.
> 
> Fixes: fb2f2b0adb98 ("mm: memcg/slab: reparent memcg kmem_caches on cgroup removal")
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Cc: Johannes Weiner <hannes@cmpxchg.org>
> Cc: Michal Hocko <mhocko@kernel.org>
> Cc: Vladimir Davydov <vdavydov.dev@gmail.com>

[1/3] and [3/3] have cc:stable.  [2/3] does not.  However, [3/3] does
not correctly apply without [2/3] having been applied.


* Re: [PATCH v2 2/3] mm: memcontrol: flush percpu slab vmstats on kmem offlining
  2019-08-19 22:27   ` Andrew Morton
@ 2019-08-19 22:46     ` Roman Gushchin
  0 siblings, 0 replies; 6+ messages in thread
From: Roman Gushchin @ 2019-08-19 22:46 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, Michal Hocko, Johannes Weiner, linux-kernel,
	Kernel Team, Vladimir Davydov

On Mon, Aug 19, 2019 at 03:27:44PM -0700, Andrew Morton wrote:
> On Mon, 19 Aug 2019 13:23:37 -0700 Roman Gushchin <guro@fb.com> wrote:
> 
> > I've noticed that the "slab" value in memory.stat is sometimes 0,
> > even if some child memory cgroups have a non-zero "slab" value.
> > The following investigation showed that this is the result
> > of the kmem_cache reparenting in combination with the per-cpu
> > batching of slab vmstats.
> > 
> > At offlining, some vmstat values may remain in the percpu cache
> > without being propagated upwards through the cgroup hierarchy. It
> > means that stats on ancestor levels are lower than the actual values.
> > Later, when slab pages are released, the precise number of pages is
> > subtracted on the parent level, making the value negative. We don't
> > show negative values; 0 is printed instead.
> > 
> > To fix this issue, let's flush percpu slab memcg and lruvec stats
> > on memcg offlining. This guarantees that numbers on all ancestor
> > levels are accurate and match the actual number of outstanding
> > slab pages.
> > 
> > Fixes: fb2f2b0adb98 ("mm: memcg/slab: reparent memcg kmem_caches on cgroup removal")
> > Signed-off-by: Roman Gushchin <guro@fb.com>
> > Cc: Johannes Weiner <hannes@cmpxchg.org>
> > Cc: Michal Hocko <mhocko@kernel.org>
> > Cc: Vladimir Davydov <vdavydov.dev@gmail.com>
> 
> [1/3] and [3/3] have cc:stable.  [2/3] does not.  However, [3/3] does
> not correctly apply without [2/3] having been applied.

Right, [2/3] is required by slab kmem reparenting, which appeared in 5.3.

I can rearrange [2/3] and [3/3] so that the first two patches have
cc:stable and apply correctly. Let me do this, I'll send v3 shortly.

Thanks!
