linux-kernel.vger.kernel.org archive mirror
* Re: [Potential Spoof] [PATCH v2 0/5] mm: memcg accounting of percpu memory
       [not found] <20200608230819.832349-1-guro@fb.com>
@ 2020-06-16 21:19 ` Roman Gushchin
  2020-06-17 20:39   ` Andrew Morton
       [not found] ` <20200608230819.832349-4-guro@fb.com>
  1 sibling, 1 reply; 6+ messages in thread
From: Roman Gushchin @ 2020-06-16 21:19 UTC (permalink / raw)
  To: Andrew Morton, Dennis Zhou
  Cc: Johannes Weiner, Michal Hocko, Shakeel Butt, linux-mm,
	kernel-team, linux-kernel

On Mon, Jun 08, 2020 at 04:08:14PM -0700, Roman Gushchin wrote:
> This patchset adds percpu memory accounting to memory cgroups.
> It's based on the rework of the slab controller and reuses concepts
> and features introduced for the per-object slab accounting.
> 
> Percpu memory is becoming more and more widely used by various
> subsystems, and the total amount of memory controlled by the percpu
> allocator can account for a significant share of the total memory.
> 
> As an example, bpf maps can consume a lot of percpu memory,
> and they are created by users. Also, some cgroup internals
> (e.g. memory controller statistics) can be quite large.
> On a machine with many CPUs and a large number of cgroups
> they can consume hundreds of megabytes.
> 
> So the lack of memcg accounting creates a breach in memory
> isolation. Like slab memory, percpu memory should be accounted
> for by default.
> 
> Percpu allocations by their nature are scattered over multiple pages,
> so they can't be tracked on a per-page basis. Instead, the per-object
> tracking introduced by the new slab controller is reused.
> 
> The patchset implements charging of percpu allocations, adds
> memcg-level statistics, enables accounting for percpu allocations made
> by memory cgroup internals and provides some basic tests.
> 
> To implement the accounting of percpu memory without significant
> memory and performance overhead, the following approach is used:
> all accounted allocations are placed into a separate percpu chunk
> (or chunks). These chunks are similar to default chunks, except
> that they have an attached vector of pointers to obj_cgroup objects,
> big enough to hold a pointer for each allocated object.
> If an allocation has to be accounted (__GFP_ACCOUNT is passed,
> the allocating process belongs to a non-root memory cgroup, etc.),
> the memory cgroup is charged, and if the maximum limit is not
> exceeded the allocation is performed from a memcg-aware chunk.
> Otherwise -ENOMEM is returned or the allocation is forced over the
> limit, depending on the gfp flags (as with any other kernel memory
> allocation). The memory cgroup information is saved in the
> obj_cgroup vector at the corresponding offset. At release time the
> memcg information is restored from the vector and the cgroup is
> uncharged.
> Unaccounted allocations (at this point the absolute majority
> of all percpu allocations) are performed in the old way, so no
> additional overhead is expected.
> 
> To avoid pinning dying memory cgroups via outstanding allocations,
> the obj_cgroup API is used instead of directly saving memory cgroup
> pointers. An obj_cgroup is basically a pointer to a memory cgroup
> with a standalone reference counter. The trick is that it can be
> atomically swapped to point at the parent cgroup, so that the
> original memory cgroup can be released before all objects that have
> been charged to it. Because all charges and statistics are fully
> recursive, it's perfectly correct to uncharge the parent cgroup
> instead. This scheme is already used for slab memory accounting,
> and percpu memory can simply follow it.
> 
> This version is based on top of v6 of the new slab controller
> patchset. The following patches are actually required by this series:
>   mm: memcg: factor out memcg- and lruvec-level changes out of __mod_lruvec_state()
>   mm: memcg: prepare for byte-sized vmstat items
>   mm: memcg: convert vmstat slab counters to bytes
>   mm: slub: implement SLUB version of obj_to_index()
>   mm: memcontrol: decouple reference counting from page accounting
>   mm: memcg/slab: obj_cgroup API

Hello, Andrew!

How should this patchset be routed: through the mm tree or the percpu tree?

It has been acked by Dennis (the percpu maintainer), but it does depend
on the first several patches of the slab controller rework patchset.

The slab controller rework is ready to be merged: as of v6, most patches
in the series have been acked by Johannes and/or Vlastimil, and no
questions or concerns have been raised since.

Please let me know if you want me to resend both patchsets.

Thank you!

Roman



* Re: [Potential Spoof] [PATCH v2 0/5] mm: memcg accounting of percpu memory
  2020-06-16 21:19 ` [Potential Spoof] [PATCH v2 0/5] mm: memcg accounting of percpu memory Roman Gushchin
@ 2020-06-17 20:39   ` Andrew Morton
  2020-06-17 20:47     ` Roman Gushchin
  0 siblings, 1 reply; 6+ messages in thread
From: Andrew Morton @ 2020-06-17 20:39 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Dennis Zhou, Johannes Weiner, Michal Hocko, Shakeel Butt,
	linux-mm, kernel-team, linux-kernel

On Tue, 16 Jun 2020 14:19:01 -0700 Roman Gushchin <guro@fb.com> wrote:

> > 
> > This version is based on top of v6 of the new slab controller
> > patchset. The following patches are actually required by this series:
> >   mm: memcg: factor out memcg- and lruvec-level changes out of __mod_lruvec_state()
> >   mm: memcg: prepare for byte-sized vmstat items
> >   mm: memcg: convert vmstat slab counters to bytes
> >   mm: slub: implement SLUB version of obj_to_index()
> >   mm: memcontrol: decouple reference counting from page accounting
> >   mm: memcg/slab: obj_cgroup API
> 
> Hello, Andrew!
> 
> How should this patchset be routed: through the mm tree or the percpu tree?
> 
> It has been acked by Dennis (the percpu maintainer), but it does depend
> on the first several patches of the slab controller rework patchset.

I can grab both.

> The slab controller rework is ready to be merged: as of v6, most patches
> in the series have been acked by Johannes and/or Vlastimil, and no
> questions or concerns have been raised since.
> 
> Please let me know if you want me to resend both patchsets.

There was quite a bit of valuable discussion in response to [0/n] that
really should have been in the changelog[s] from day one:
slab-vs-slub, performance testing, etc.

So, umm, I'll take a look at both series now but I do think an enhanced
[0/n] description is warranted?



* Re: [Potential Spoof] [PATCH v2 0/5] mm: memcg accounting of percpu memory
  2020-06-17 20:39   ` Andrew Morton
@ 2020-06-17 20:47     ` Roman Gushchin
  0 siblings, 0 replies; 6+ messages in thread
From: Roman Gushchin @ 2020-06-17 20:47 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Dennis Zhou, Johannes Weiner, Michal Hocko, Shakeel Butt,
	linux-mm, kernel-team, linux-kernel

On Wed, Jun 17, 2020 at 01:39:49PM -0700, Andrew Morton wrote:
> On Tue, 16 Jun 2020 14:19:01 -0700 Roman Gushchin <guro@fb.com> wrote:
> 
> > > 
> > > This version is based on top of v6 of the new slab controller
> > > patchset. The following patches are actually required by this series:
> > >   mm: memcg: factor out memcg- and lruvec-level changes out of __mod_lruvec_state()
> > >   mm: memcg: prepare for byte-sized vmstat items
> > >   mm: memcg: convert vmstat slab counters to bytes
> > >   mm: slub: implement SLUB version of obj_to_index()
> > >   mm: memcontrol: decouple reference counting from page accounting
> > >   mm: memcg/slab: obj_cgroup API
> > 
> > Hello, Andrew!
> > 
> > How should this patchset be routed: through the mm tree or the percpu tree?
> > 
> > It has been acked by Dennis (the percpu maintainer), but it does depend
> > on the first several patches of the slab controller rework patchset.
> 
> I can grab both.

Perfect, thanks!

> 
> > The slab controller rework is ready to be merged: as of v6, most patches
> > in the series have been acked by Johannes and/or Vlastimil, and no
> > questions or concerns have been raised since.
> > 
> > Please let me know if you want me to resend both patchsets.
> 
> There was quite a bit of valuable discussion in response to [0/n] that
> really should have been in the changelog[s] from day one:
> slab-vs-slub, performance testing, etc.
> 
> So, umm, I'll take a look at both series now but I do think an enhanced
> [0/n] description is warranted?
> 

Yes, I'm running the suggested tests right now and will update with the results.

Thanks!


* Re: [PATCH v2 3/5] mm: memcg/percpu: per-memcg percpu memory statistics
       [not found] ` <20200608230819.832349-4-guro@fb.com>
@ 2020-06-22  1:48   ` Nathan Chancellor
  2020-06-22  3:26     ` Roman Gushchin
  0 siblings, 1 reply; 6+ messages in thread
From: Nathan Chancellor @ 2020-06-22  1:48 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter,
	Johannes Weiner, Michal Hocko, Shakeel Butt, linux-mm,
	kernel-team, linux-kernel, clang-built-linux

On Mon, Jun 08, 2020 at 04:08:17PM -0700, Roman Gushchin wrote:
> Percpu memory can represent a noticeable chunk of the total
> memory consumption, especially on big machines with many CPUs.
> Let's track percpu memory usage for each memcg and display
> it in memory.stat.
> 
> A percpu allocation is usually scattered over multiple pages
> (and nodes), and can be significantly smaller than a page.
> So let's add a byte-sized counter on the memcg level:
> MEMCG_PERCPU_B. Byte-sized vmstat infra created for slabs
> can be perfectly reused for percpu case.
> 
> Signed-off-by: Roman Gushchin <guro@fb.com>
> Acked-by: Dennis Zhou <dennis@kernel.org>
> ---
>  Documentation/admin-guide/cgroup-v2.rst |  4 ++++
>  include/linux/memcontrol.h              |  8 ++++++++
>  mm/memcontrol.c                         |  4 +++-
>  mm/percpu.c                             | 10 ++++++++++
>  4 files changed, 25 insertions(+), 1 deletion(-)
> 
> diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> index ce3e05e41724..7c1e784239bf 100644
> --- a/Documentation/admin-guide/cgroup-v2.rst
> +++ b/Documentation/admin-guide/cgroup-v2.rst
> @@ -1274,6 +1274,10 @@ PAGE_SIZE multiple when read back.
>  		Amount of memory used for storing in-kernel data
>  		structures.
>  
> +	  percpu
> +		Amount of memory used for storing per-cpu kernel
> +		data structures.
> +
>  	  sock
>  		Amount of memory used in network transmission buffers
>  
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index eede46c43573..7ed3af71a6fb 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -32,11 +32,19 @@ struct kmem_cache;
>  enum memcg_stat_item {
>  	MEMCG_SWAP = NR_VM_NODE_STAT_ITEMS,
>  	MEMCG_SOCK,
> +	MEMCG_PERCPU_B,
>  	/* XXX: why are these zone and not node counters? */
>  	MEMCG_KERNEL_STACK_KB,
>  	MEMCG_NR_STAT,
>  };
>  
> +static __always_inline bool memcg_stat_item_in_bytes(enum memcg_stat_item item)
> +{
> +	if (item == MEMCG_PERCPU_B)
> +		return true;
> +	return vmstat_item_in_bytes(item);

This patch is now in -next and this line causes a warning from clang,
which shows up in every translation unit that includes this header,
which is a lot:

include/linux/memcontrol.h:45:30: warning: implicit conversion from
enumeration type 'enum memcg_stat_item' to different enumeration type
'enum node_stat_item' [-Wenum-conversion]
        return vmstat_item_in_bytes(item);
               ~~~~~~~~~~~~~~~~~~~~ ^~~~
1 warning generated.

I assume this conversion is intentional; if so, it seems like expecting
a specific enum is misleading. Perhaps this should be applied on top?

Cheers,
Nathan

diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
index 2499f78cf32d..bddeb4ce7a4f 100644
--- a/include/linux/memcontrol.h
+++ b/include/linux/memcontrol.h
@@ -38,7 +38,7 @@ enum memcg_stat_item {
 	MEMCG_NR_STAT,
 };
 
-static __always_inline bool memcg_stat_item_in_bytes(enum memcg_stat_item item)
+static __always_inline bool memcg_stat_item_in_bytes(int item)
 {
 	if (item == MEMCG_PERCPU_B)
 		return true;
diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
index 084ee1c17160..52d7961a24f0 100644
--- a/include/linux/mmzone.h
+++ b/include/linux/mmzone.h
@@ -211,7 +211,7 @@ enum node_stat_item {
  * measured in pages). This defines the API part, the internal representation
  * might be different.
  */
-static __always_inline bool vmstat_item_in_bytes(enum node_stat_item item)
+static __always_inline bool vmstat_item_in_bytes(int item)
 {
 	/*
 	 * Global and per-node slab counters track slab pages.


* Re: [PATCH v2 3/5] mm: memcg/percpu: per-memcg percpu memory statistics
  2020-06-22  1:48   ` [PATCH v2 3/5] mm: memcg/percpu: per-memcg percpu memory statistics Nathan Chancellor
@ 2020-06-22  3:26     ` Roman Gushchin
  2020-06-22  3:51       ` Nathan Chancellor
  0 siblings, 1 reply; 6+ messages in thread
From: Roman Gushchin @ 2020-06-22  3:26 UTC (permalink / raw)
  To: Nathan Chancellor
  Cc: Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter,
	Johannes Weiner, Michal Hocko, Shakeel Butt, linux-mm,
	kernel-team, linux-kernel, clang-built-linux

On Sun, Jun 21, 2020 at 06:48:03PM -0700, Nathan Chancellor wrote:
> On Mon, Jun 08, 2020 at 04:08:17PM -0700, Roman Gushchin wrote:
> > Percpu memory can represent a noticeable chunk of the total
> > memory consumption, especially on big machines with many CPUs.
> > Let's track percpu memory usage for each memcg and display
> > it in memory.stat.
> > 
> > A percpu allocation is usually scattered over multiple pages
> > (and nodes), and can be significantly smaller than a page.
> > So let's add a byte-sized counter on the memcg level:
> > MEMCG_PERCPU_B. Byte-sized vmstat infra created for slabs
> > can be perfectly reused for percpu case.
> > 
> > Signed-off-by: Roman Gushchin <guro@fb.com>
> > Acked-by: Dennis Zhou <dennis@kernel.org>
> > ---
> >  Documentation/admin-guide/cgroup-v2.rst |  4 ++++
> >  include/linux/memcontrol.h              |  8 ++++++++
> >  mm/memcontrol.c                         |  4 +++-
> >  mm/percpu.c                             | 10 ++++++++++
> >  4 files changed, 25 insertions(+), 1 deletion(-)
> > 
> > diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> > index ce3e05e41724..7c1e784239bf 100644
> > --- a/Documentation/admin-guide/cgroup-v2.rst
> > +++ b/Documentation/admin-guide/cgroup-v2.rst
> > @@ -1274,6 +1274,10 @@ PAGE_SIZE multiple when read back.
> >  		Amount of memory used for storing in-kernel data
> >  		structures.
> >  
> > +	  percpu
> > +		Amount of memory used for storing per-cpu kernel
> > +		data structures.
> > +
> >  	  sock
> >  		Amount of memory used in network transmission buffers
> >  
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index eede46c43573..7ed3af71a6fb 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -32,11 +32,19 @@ struct kmem_cache;
> >  enum memcg_stat_item {
> >  	MEMCG_SWAP = NR_VM_NODE_STAT_ITEMS,
> >  	MEMCG_SOCK,
> > +	MEMCG_PERCPU_B,
> >  	/* XXX: why are these zone and not node counters? */
> >  	MEMCG_KERNEL_STACK_KB,
> >  	MEMCG_NR_STAT,
> >  };
> >  
> > +static __always_inline bool memcg_stat_item_in_bytes(enum memcg_stat_item item)
> > +{
> > +	if (item == MEMCG_PERCPU_B)
> > +		return true;
> > +	return vmstat_item_in_bytes(item);
> 
> This patch is now in -next and this line causes a warning from clang,
> which shows up in every translation unit that includes this header,
> which is a lot:
> 
> include/linux/memcontrol.h:45:30: warning: implicit conversion from
> enumeration type 'enum memcg_stat_item' to different enumeration type
> 'enum node_stat_item' [-Wenum-conversion]
>         return vmstat_item_in_bytes(item);
>                ~~~~~~~~~~~~~~~~~~~~ ^~~~
> 1 warning generated.
> 
> I assume this conversion is intentional; if so, it seems like expecting
> a specific enum is misleading. Perhaps this should be applied on top?

Hi Nathan!

Yeah, these enums are kind of stacked on each other, so memcg_stat values
extend node_stat values. And I think your patch is correct.

I'm going to refresh the series with some small fixups. If you're not against
it, I'll merge your patch into the corresponding patches.

And thank you for reporting the problem!

> 
> Cheers,
> Nathan
> 
> diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> index 2499f78cf32d..bddeb4ce7a4f 100644
> --- a/include/linux/memcontrol.h
> +++ b/include/linux/memcontrol.h
> @@ -38,7 +38,7 @@ enum memcg_stat_item {
>  	MEMCG_NR_STAT,
>  };
>  
> -static __always_inline bool memcg_stat_item_in_bytes(enum memcg_stat_item item)
> +static __always_inline bool memcg_stat_item_in_bytes(int item)
>  {
>  	if (item == MEMCG_PERCPU_B)
>  		return true;
> diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> index 084ee1c17160..52d7961a24f0 100644
> --- a/include/linux/mmzone.h
> +++ b/include/linux/mmzone.h
> @@ -211,7 +211,7 @@ enum node_stat_item {
>   * measured in pages). This defines the API part, the internal representation
>   * might be different.
>   */
> -static __always_inline bool vmstat_item_in_bytes(enum node_stat_item item)
> +static __always_inline bool vmstat_item_in_bytes(int item)
>  {
>  	/*
>  	 * Global and per-node slab counters track slab pages.


* Re: [PATCH v2 3/5] mm: memcg/percpu: per-memcg percpu memory statistics
  2020-06-22  3:26     ` Roman Gushchin
@ 2020-06-22  3:51       ` Nathan Chancellor
  0 siblings, 0 replies; 6+ messages in thread
From: Nathan Chancellor @ 2020-06-22  3:51 UTC (permalink / raw)
  To: Roman Gushchin
  Cc: Andrew Morton, Dennis Zhou, Tejun Heo, Christoph Lameter,
	Johannes Weiner, Michal Hocko, Shakeel Butt, linux-mm,
	kernel-team, linux-kernel, clang-built-linux

On Sun, Jun 21, 2020 at 08:26:35PM -0700, Roman Gushchin wrote:
> On Sun, Jun 21, 2020 at 06:48:03PM -0700, Nathan Chancellor wrote:
> > On Mon, Jun 08, 2020 at 04:08:17PM -0700, Roman Gushchin wrote:
> > > Percpu memory can represent a noticeable chunk of the total
> > > memory consumption, especially on big machines with many CPUs.
> > > Let's track percpu memory usage for each memcg and display
> > > it in memory.stat.
> > > 
> > > A percpu allocation is usually scattered over multiple pages
> > > (and nodes), and can be significantly smaller than a page.
> > > So let's add a byte-sized counter on the memcg level:
> > > MEMCG_PERCPU_B. Byte-sized vmstat infra created for slabs
> > > can be perfectly reused for percpu case.
> > > 
> > > Signed-off-by: Roman Gushchin <guro@fb.com>
> > > Acked-by: Dennis Zhou <dennis@kernel.org>
> > > ---
> > >  Documentation/admin-guide/cgroup-v2.rst |  4 ++++
> > >  include/linux/memcontrol.h              |  8 ++++++++
> > >  mm/memcontrol.c                         |  4 +++-
> > >  mm/percpu.c                             | 10 ++++++++++
> > >  4 files changed, 25 insertions(+), 1 deletion(-)
> > > 
> > > diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
> > > index ce3e05e41724..7c1e784239bf 100644
> > > --- a/Documentation/admin-guide/cgroup-v2.rst
> > > +++ b/Documentation/admin-guide/cgroup-v2.rst
> > > @@ -1274,6 +1274,10 @@ PAGE_SIZE multiple when read back.
> > >  		Amount of memory used for storing in-kernel data
> > >  		structures.
> > >  
> > > +	  percpu
> > > +		Amount of memory used for storing per-cpu kernel
> > > +		data structures.
> > > +
> > >  	  sock
> > >  		Amount of memory used in network transmission buffers
> > >  
> > > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > > index eede46c43573..7ed3af71a6fb 100644
> > > --- a/include/linux/memcontrol.h
> > > +++ b/include/linux/memcontrol.h
> > > @@ -32,11 +32,19 @@ struct kmem_cache;
> > >  enum memcg_stat_item {
> > >  	MEMCG_SWAP = NR_VM_NODE_STAT_ITEMS,
> > >  	MEMCG_SOCK,
> > > +	MEMCG_PERCPU_B,
> > >  	/* XXX: why are these zone and not node counters? */
> > >  	MEMCG_KERNEL_STACK_KB,
> > >  	MEMCG_NR_STAT,
> > >  };
> > >  
> > > +static __always_inline bool memcg_stat_item_in_bytes(enum memcg_stat_item item)
> > > +{
> > > +	if (item == MEMCG_PERCPU_B)
> > > +		return true;
> > > +	return vmstat_item_in_bytes(item);
> > 
> > This patch is now in -next and this line causes a warning from clang,
> > which shows up in every translation unit that includes this header,
> > which is a lot:
> > 
> > include/linux/memcontrol.h:45:30: warning: implicit conversion from
> > enumeration type 'enum memcg_stat_item' to different enumeration type
> > 'enum node_stat_item' [-Wenum-conversion]
> >         return vmstat_item_in_bytes(item);
> >                ~~~~~~~~~~~~~~~~~~~~ ^~~~
> > 1 warning generated.
> > 
> > I assume this conversion is intentional; if so, it seems like expecting
> > a specific enum is misleading. Perhaps this should be applied on top?
> 
> Hi Nathan!
> 
> Yeah, these enums are kind of stacked on each other, so memcg_stat values
> extend node_stat values. And I think your patch is correct.

Yeah, I figured. It happens in a couple of different places in the
kernel that we have seen so far, and my suggestion seems to be the
best one that we have uncovered for dealing with one enum that
supplements or extends another. These functions are fairly
self-explanatory, so I don't think that blowing away the type safety
here is a big deal.

> I'm going to refresh the series with some small fixups. If you're not against
> it, I'll merge your patch into the corresponding patches.

Please do! Thanks for the quick response.

Cheers,
Nathan

> And thank you for reporting the problem!
> 
> > 
> > Cheers,
> > Nathan
> > 
> > diff --git a/include/linux/memcontrol.h b/include/linux/memcontrol.h
> > index 2499f78cf32d..bddeb4ce7a4f 100644
> > --- a/include/linux/memcontrol.h
> > +++ b/include/linux/memcontrol.h
> > @@ -38,7 +38,7 @@ enum memcg_stat_item {
> >  	MEMCG_NR_STAT,
> >  };
> >  
> > -static __always_inline bool memcg_stat_item_in_bytes(enum memcg_stat_item item)
> > +static __always_inline bool memcg_stat_item_in_bytes(int item)
> >  {
> >  	if (item == MEMCG_PERCPU_B)
> >  		return true;
> > diff --git a/include/linux/mmzone.h b/include/linux/mmzone.h
> > index 084ee1c17160..52d7961a24f0 100644
> > --- a/include/linux/mmzone.h
> > +++ b/include/linux/mmzone.h
> > @@ -211,7 +211,7 @@ enum node_stat_item {
> >   * measured in pages). This defines the API part, the internal representation
> >   * might be different.
> >   */
> > -static __always_inline bool vmstat_item_in_bytes(enum node_stat_item item)
> > +static __always_inline bool vmstat_item_in_bytes(int item)
> >  {
> >  	/*
> >  	 * Global and per-node slab counters track slab pages.

