linux-kernel.vger.kernel.org archive mirror
 help / color / mirror / Atom feed
* [PATCH] mm: memcontrol: flush slab vmstats on kmem offlining
@ 2019-08-08 20:36 Roman Gushchin
  2019-08-08 21:21 ` Andrew Morton
  0 siblings, 1 reply; 4+ messages in thread
From: Roman Gushchin @ 2019-08-08 20:36 UTC (permalink / raw)
  To: Andrew Morton, linux-mm
  Cc: Michal Hocko, Johannes Weiner, linux-kernel, kernel-team, Roman Gushchin

I've noticed that the "slab" value in memory.stat is sometimes 0,
even if some children memory cgroups have a non-zero "slab" value.
The following investigation showed that this is the result
of the kmem_cache reparenting in combination with the per-cpu
batching of slab vmstats.

At the offlining some vmstat value may leave in the percpu cache,
not being propagated upwards by the cgroup hierarchy. It means
that stats on ancestor levels are lower than actual. Later when
slab pages are released, the precise number of pages is substracted
on the parent level, making the value negative. We don't show negative
values, 0 is printed instead.

To fix this issue, let's flush percpu slab memcg and lruvec stats
on memcg offlining. This guarantees that numbers on all ancestor
levels are accurate and match the actual number of outstanding
slab pages.

Fixes: fb2f2b0adb98 ("mm: memcg/slab: reparent memcg kmem_caches on cgroup removal")
Signed-off-by: Roman Gushchin <guro@fb.com>
Cc: Johannes Weiner <hannes@cmpxchg.org>
---
 mm/memcontrol.c | 51 +++++++++++++++++++++++++++++++++++++++++++++++++
 1 file changed, 51 insertions(+)

diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 3e821f34399f..3a5f6f486cdf 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -3412,6 +3412,50 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
 	return 0;
 }
 
+static void memcg_flush_slab_node_stats(struct mem_cgroup *memcg, int node)
+{
+	struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
+	struct mem_cgroup_per_node *pi;
+	unsigned long recl = 0, unrecl = 0;
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		recl += raw_cpu_read(
+			pn->lruvec_stat_cpu->count[NR_SLAB_RECLAIMABLE]);
+		unrecl += raw_cpu_read(
+			pn->lruvec_stat_cpu->count[NR_SLAB_UNRECLAIMABLE]);
+	}
+
+	for (pi = pn; pi; pi = parent_nodeinfo(pi, node)) {
+		atomic_long_add(recl,
+				&pi->lruvec_stat[NR_SLAB_RECLAIMABLE]);
+		atomic_long_add(unrecl,
+				&pi->lruvec_stat[NR_SLAB_UNRECLAIMABLE]);
+	}
+}
+
+static void memcg_flush_slab_vmstats(struct mem_cgroup *memcg)
+{
+	struct mem_cgroup *mi;
+	unsigned long recl = 0, unrecl = 0;
+	int node, cpu;
+
+	for_each_possible_cpu(cpu) {
+		recl += raw_cpu_read(
+			memcg->vmstats_percpu->stat[NR_SLAB_RECLAIMABLE]);
+		unrecl += raw_cpu_read(
+			memcg->vmstats_percpu->stat[NR_SLAB_UNRECLAIMABLE]);
+	}
+
+	for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) {
+		atomic_long_add(recl, &mi->vmstats[NR_SLAB_RECLAIMABLE]);
+		atomic_long_add(unrecl, &mi->vmstats[NR_SLAB_UNRECLAIMABLE]);
+	}
+
+	for_each_node(node)
+		memcg_flush_slab_node_stats(memcg, node);
+}
+
 static void memcg_offline_kmem(struct mem_cgroup *memcg)
 {
 	struct cgroup_subsys_state *css;
@@ -3432,7 +3476,14 @@ static void memcg_offline_kmem(struct mem_cgroup *memcg)
 	if (!parent)
 		parent = root_mem_cgroup;
 
+	/*
+	 * Deactivate and reparent kmem_caches. Then Flush percpu
+	 * slab statistics to have precise values at the parent and
+	 * all ancestor levels. It's required to keep slab stats
+	 * accurate after the reparenting of kmem_caches.
+	 */
 	memcg_deactivate_kmem_caches(memcg, parent);
+	memcg_flush_slab_vmstats(memcg);
 
 	kmemcg_id = memcg->kmemcg_id;
 	BUG_ON(kmemcg_id < 0);
-- 
2.21.0


^ permalink raw reply related	[flat|nested] 4+ messages in thread

* Re: [PATCH] mm: memcontrol: flush slab vmstats on kmem offlining
  2019-08-08 20:36 [PATCH] mm: memcontrol: flush slab vmstats on kmem offlining Roman Gushchin
@ 2019-08-08 21:21 ` Andrew Morton
  2019-08-08 21:47   ` Roman Gushchin
  0 siblings, 1 reply; 4+ messages in thread
From: Andrew Morton @ 2019-08-08 21:21 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: linux-mm, Michal Hocko, Johannes Weiner, linux-kernel, kernel-team

On Thu, 8 Aug 2019 13:36:04 -0700 Roman Gushchin <guro@fb.com> wrote:

> I've noticed that the "slab" value in memory.stat is sometimes 0,
> even if some children memory cgroups have a non-zero "slab" value.
> The following investigation showed that this is the result
> of the kmem_cache reparenting in combination with the per-cpu
> batching of slab vmstats.
> 
> At the offlining some vmstat value may leave in the percpu cache,
> not being propagated upwards by the cgroup hierarchy. It means
> that stats on ancestor levels are lower than actual. Later when
> slab pages are released, the precise number of pages is substracted
> on the parent level, making the value negative. We don't show negative
> values, 0 is printed instead.
> 
> To fix this issue, let's flush percpu slab memcg and lruvec stats
> on memcg offlining. This guarantees that numbers on all ancestor
> levels are accurate and match the actual number of outstanding
> slab pages.
> 

Looks expensive.  How frequently can these functions be called?

> --- a/mm/memcontrol.c
> +++ b/mm/memcontrol.c
> @@ -3412,6 +3412,50 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
>  	return 0;
>  }
>  
> +static void memcg_flush_slab_node_stats(struct mem_cgroup *memcg, int node)
> +{
> +	struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
> +	struct mem_cgroup_per_node *pi;
> +	unsigned long recl = 0, unrecl = 0;
> +	int cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		recl += raw_cpu_read(
> +			pn->lruvec_stat_cpu->count[NR_SLAB_RECLAIMABLE]);
> +		unrecl += raw_cpu_read(
> +			pn->lruvec_stat_cpu->count[NR_SLAB_UNRECLAIMABLE]);
> +	}
> +
> +	for (pi = pn; pi; pi = parent_nodeinfo(pi, node)) {
> +		atomic_long_add(recl,
> +				&pi->lruvec_stat[NR_SLAB_RECLAIMABLE]);
> +		atomic_long_add(unrecl,
> +				&pi->lruvec_stat[NR_SLAB_UNRECLAIMABLE]);
> +	}
> +}
> +
> +static void memcg_flush_slab_vmstats(struct mem_cgroup *memcg)
> +{
> +	struct mem_cgroup *mi;
> +	unsigned long recl = 0, unrecl = 0;
> +	int node, cpu;
> +
> +	for_each_possible_cpu(cpu) {
> +		recl += raw_cpu_read(
> +			memcg->vmstats_percpu->stat[NR_SLAB_RECLAIMABLE]);
> +		unrecl += raw_cpu_read(
> +			memcg->vmstats_percpu->stat[NR_SLAB_UNRECLAIMABLE]);
> +	}
> +
> +	for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) {
> +		atomic_long_add(recl, &mi->vmstats[NR_SLAB_RECLAIMABLE]);
> +		atomic_long_add(unrecl, &mi->vmstats[NR_SLAB_UNRECLAIMABLE]);
> +	}
> +
> +	for_each_node(node)
> +		memcg_flush_slab_node_stats(memcg, node);

This loops across all possible CPUs once for each possible node.  Ouch.

Implementing hotplug handlers in here (which is surprisingly simple)
brings this down to num_online_nodes * num_online_cpus which is, I
think, potentially vastly better.


^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH] mm: memcontrol: flush slab vmstats on kmem offlining
  2019-08-08 21:21 ` Andrew Morton
@ 2019-08-08 21:47   ` Roman Gushchin
  2019-08-08 23:02     ` Andrew Morton
  0 siblings, 1 reply; 4+ messages in thread
From: Roman Gushchin @ 2019-08-08 21:47 UTC (permalink / raw)
  To: Andrew Morton
  Cc: linux-mm, Michal Hocko, Johannes Weiner, linux-kernel, Kernel Team

On Thu, Aug 08, 2019 at 02:21:46PM -0700, Andrew Morton wrote:
> On Thu, 8 Aug 2019 13:36:04 -0700 Roman Gushchin <guro@fb.com> wrote:
> 
> > I've noticed that the "slab" value in memory.stat is sometimes 0,
> > even if some children memory cgroups have a non-zero "slab" value.
> > The following investigation showed that this is the result
> > of the kmem_cache reparenting in combination with the per-cpu
> > batching of slab vmstats.
> > 
> > At the offlining some vmstat value may leave in the percpu cache,
> > not being propagated upwards by the cgroup hierarchy. It means
> > that stats on ancestor levels are lower than actual. Later when
> > slab pages are released, the precise number of pages is substracted
> > on the parent level, making the value negative. We don't show negative
> > values, 0 is printed instead.
> > 
> > To fix this issue, let's flush percpu slab memcg and lruvec stats
> > on memcg offlining. This guarantees that numbers on all ancestor
> > levels are accurate and match the actual number of outstanding
> > slab pages.
> > 
> 
> Looks expensive.  How frequently can these functions be called?

Once per memcg lifetime.

> 
> > --- a/mm/memcontrol.c
> > +++ b/mm/memcontrol.c
> > @@ -3412,6 +3412,50 @@ static int memcg_online_kmem(struct mem_cgroup *memcg)
> >  	return 0;
> >  }
> >  
> > +static void memcg_flush_slab_node_stats(struct mem_cgroup *memcg, int node)
> > +{
> > +	struct mem_cgroup_per_node *pn = memcg->nodeinfo[node];
> > +	struct mem_cgroup_per_node *pi;
> > +	unsigned long recl = 0, unrecl = 0;
> > +	int cpu;
> > +
> > +	for_each_possible_cpu(cpu) {
> > +		recl += raw_cpu_read(
> > +			pn->lruvec_stat_cpu->count[NR_SLAB_RECLAIMABLE]);
> > +		unrecl += raw_cpu_read(
> > +			pn->lruvec_stat_cpu->count[NR_SLAB_UNRECLAIMABLE]);
> > +	}
> > +
> > +	for (pi = pn; pi; pi = parent_nodeinfo(pi, node)) {
> > +		atomic_long_add(recl,
> > +				&pi->lruvec_stat[NR_SLAB_RECLAIMABLE]);
> > +		atomic_long_add(unrecl,
> > +				&pi->lruvec_stat[NR_SLAB_UNRECLAIMABLE]);
> > +	}
> > +}
> > +
> > +static void memcg_flush_slab_vmstats(struct mem_cgroup *memcg)
> > +{
> > +	struct mem_cgroup *mi;
> > +	unsigned long recl = 0, unrecl = 0;
> > +	int node, cpu;
> > +
> > +	for_each_possible_cpu(cpu) {
> > +		recl += raw_cpu_read(
> > +			memcg->vmstats_percpu->stat[NR_SLAB_RECLAIMABLE]);
> > +		unrecl += raw_cpu_read(
> > +			memcg->vmstats_percpu->stat[NR_SLAB_UNRECLAIMABLE]);
> > +	}
> > +
> > +	for (mi = memcg; mi; mi = parent_mem_cgroup(mi)) {
> > +		atomic_long_add(recl, &mi->vmstats[NR_SLAB_RECLAIMABLE]);
> > +		atomic_long_add(unrecl, &mi->vmstats[NR_SLAB_UNRECLAIMABLE]);
> > +	}
> > +
> > +	for_each_node(node)
> > +		memcg_flush_slab_node_stats(memcg, node);
> 
> This loops across all possible CPUs once for each possible node.  Ouch.
> 
> Implementing hotplug handlers in here (which is surprisingly simple)
> brings this down to num_online_nodes * num_online_cpus which is, I
> think, potentially vastly better.
>

Hm, maybe I'm biased because we don't play much with offlining, and
don't have many NUMA nodes. What's the real world scenario? Disabling
hyperthreading?

Idk, given that it happens once per memcg lifetime, and memcg destruction
isn't cheap anyway, I'm not sure it worth it. But if you are, I'm happy
to add hotplug handlers.

I also thought about merging per-memcg stats and per-memcg-per-node stats
(reading part can aggregate over 2? 4? numa nodes each time). That will
make everything overall cheaper. But it's a separate topic.

Thanks!

^ permalink raw reply	[flat|nested] 4+ messages in thread

* Re: [PATCH] mm: memcontrol: flush slab vmstats on kmem offlining
  2019-08-08 21:47   ` Roman Gushchin
@ 2019-08-08 23:02     ` Andrew Morton
  0 siblings, 0 replies; 4+ messages in thread
From: Andrew Morton @ 2019-08-08 23:02 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: linux-mm, Michal Hocko, Johannes Weiner, linux-kernel, Kernel Team

On Thu, 8 Aug 2019 21:47:11 +0000 Roman Gushchin <guro@fb.com> wrote:

> On Thu, Aug 08, 2019 at 02:21:46PM -0700, Andrew Morton wrote:
> > On Thu, 8 Aug 2019 13:36:04 -0700 Roman Gushchin <guro@fb.com> wrote:
> > 
> > > I've noticed that the "slab" value in memory.stat is sometimes 0,
> > > even if some children memory cgroups have a non-zero "slab" value.
> > > The following investigation showed that this is the result
> > > of the kmem_cache reparenting in combination with the per-cpu
> > > batching of slab vmstats.
> > > 
> > > At the offlining some vmstat value may leave in the percpu cache,
> > > not being propagated upwards by the cgroup hierarchy. It means
> > > that stats on ancestor levels are lower than actual. Later when
> > > slab pages are released, the precise number of pages is substracted
> > > on the parent level, making the value negative. We don't show negative
> > > values, 0 is printed instead.
> > > 
> > > To fix this issue, let's flush percpu slab memcg and lruvec stats
> > > on memcg offlining. This guarantees that numbers on all ancestor
> > > levels are accurate and match the actual number of outstanding
> > > slab pages.
> > > 
> > 
> > Looks expensive.  How frequently can these functions be called?
> 
> Once per memcg lifetime.

iirc there are some workloads in which this can be rapid?

> > > +	for_each_node(node)
> > > +		memcg_flush_slab_node_stats(memcg, node);
> > 
> > This loops across all possible CPUs once for each possible node.  Ouch.
> > 
> > Implementing hotplug handlers in here (which is surprisingly simple)
> > brings this down to num_online_nodes * num_online_cpus which is, I
> > think, potentially vastly better.
> >
> 
> Hm, maybe I'm biased because we don't play much with offlining, and
> don't have many NUMA nodes. What's the real world scenario? Disabling
> hyperthreading?

I assume it's machines which could take a large number of CPUs but in
fact have few.  I've asked this in response to many patches down the
ages and have never really got a clear answer.

A concern is that if such machines do exist, it will take a long time
for the regression reports to get to us.  Especially if such machines
are rare.

> Idk, given that it happens once per memcg lifetime, and memcg destruction
> isn't cheap anyway, I'm not sure it worth it. But if you are, I'm happy
> to add hotplug handlers.

I think it's worth taking a look.  As I mentioned, it can turn out to
be stupidly simple.

> I also thought about merging per-memcg stats and per-memcg-per-node stats
> (reading part can aggregate over 2? 4? numa nodes each time). That will
> make everything overall cheaper. But it's a separate topic.


^ permalink raw reply	[flat|nested] 4+ messages in thread

end of thread, other threads:[~2019-08-08 23:03 UTC | newest]

Thread overview: 4+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-08-08 20:36 [PATCH] mm: memcontrol: flush slab vmstats on kmem offlining Roman Gushchin
2019-08-08 21:21 ` Andrew Morton
2019-08-08 21:47   ` Roman Gushchin
2019-08-08 23:02     ` Andrew Morton

This is a public inbox, see mirroring instructions
for how to clone and mirror all data and code used for this inbox;
as well as URLs for NNTP newsgroup(s).