linux-mm.kvack.org archive mirror
* [PATCH] mm: memcontrol: fix slub memory accounting
@ 2021-02-23  9:24 Muchun Song
  2021-02-23 15:21 ` Shakeel Butt
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Muchun Song @ 2021-02-23  9:24 UTC (permalink / raw)
  To: hannes, mhocko, vdavydov.dev, akpm, guro, shakeelb
  Cc: cgroups, linux-mm, linux-kernel, Muchun Song

SLUB currently accounts kmalloc() and kmalloc_node() allocations larger
than order-1 page on a per-node basis, but it forgets to update the
per-memcg vmstats. This leads to inaccurate "slab_unreclaimable"
statistics in memory.stat. Fix it by using mod_lruvec_page_state()
instead of mod_node_page_state().

Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
Signed-off-by: Muchun Song <songmuchun@bytedance.com>
---
 mm/slab_common.c | 4 ++--
 mm/slub.c        | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 821f657d38b5..20ffb2b37058 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -906,8 +906,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_pages(flags, order);
 	if (likely(page)) {
 		ret = page_address(page);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    PAGE_SIZE << order);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
 	}
 	ret = kasan_kmalloc_large(ret, size, flags);
 	/* As ret might get tagged, call kmemleak hook after KASAN. */
diff --git a/mm/slub.c b/mm/slub.c
index e564008c2329..f2f953de456e 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -4057,8 +4057,8 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	page = alloc_pages_node(node, flags, order);
 	if (page) {
 		ptr = page_address(page);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    PAGE_SIZE << order);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
 	}
 
 	return kmalloc_large_node_hook(ptr, size, flags);
@@ -4193,8 +4193,8 @@ void kfree(const void *x)
 
 		BUG_ON(!PageCompound(page));
 		kfree_hook(object);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    -(PAGE_SIZE << order));
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      -(PAGE_SIZE << order));
 		__free_pages(page, order);
 		return;
 	}
-- 
2.11.0
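
For readers unfamiliar with the two counters, the following toy
userspace C model (illustration only; the struct and helper names are
made up, not kernel APIs) shows why updating only the per-node counter
leaves the per-memcg "slab_unreclaimable" value seen in memory.stat
stale, while an lruvec-style update keeps both counters in sync:

/*
 * Toy model of the accounting gap fixed above. Not kernel code:
 * node_stat, memcg_stat and the *_toy helpers are hypothetical.
 */
#include <stdio.h>

struct node_stat  { long slab_unreclaimable_b; };
struct memcg_stat { long slab_unreclaimable_b; };

/* Old behaviour: only the per-node counter moves. */
static void mod_node_state_toy(struct node_stat *node, long delta)
{
	node->slab_unreclaimable_b += delta;
}

/* Fixed behaviour: node and owning memcg move together, as
 * mod_lruvec_page_state() does for a charged page. */
static void mod_lruvec_state_toy(struct node_stat *node,
				 struct memcg_stat *memcg, long delta)
{
	node->slab_unreclaimable_b += delta;
	memcg->slab_unreclaimable_b += delta;
}

int main(void)
{
	struct node_stat node = { 0 };
	struct memcg_stat memcg = { 0 };
	long large_kmalloc = 4 * 4096;	/* an order-2 "large" kmalloc */

	mod_node_state_toy(&node, large_kmalloc);
	printf("old: node=%ld memcg=%ld  (memory.stat misses the allocation)\n",
	       node.slab_unreclaimable_b, memcg.slab_unreclaimable_b);

	node.slab_unreclaimable_b = 0;
	mod_lruvec_state_toy(&node, &memcg, large_kmalloc);
	printf("new: node=%ld memcg=%ld  (both counters agree)\n",
	       node.slab_unreclaimable_b, memcg.slab_unreclaimable_b);
	return 0;
}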




* Re: [PATCH] mm: memcontrol: fix slub memory accounting
  2021-02-23  9:24 [PATCH] mm: memcontrol: fix slub memory accounting Muchun Song
@ 2021-02-23 15:21 ` Shakeel Butt
  2021-02-23 15:37 ` Roman Gushchin
  2021-02-23 18:42 ` Michal Koutný
  2 siblings, 0 replies; 4+ messages in thread
From: Shakeel Butt @ 2021-02-23 15:21 UTC (permalink / raw)
  To: Muchun Song
  Cc: Johannes Weiner, Michal Hocko, Vladimir Davydov, Andrew Morton,
	Roman Gushchin, Cgroups, Linux MM, LKML

On Tue, Feb 23, 2021 at 1:25 AM Muchun Song <songmuchun@bytedance.com> wrote:
>
> SLUB currently accounts kmalloc() and kmalloc_node() allocations larger
> than order-1 page on a per-node basis, but it forgets to update the
> per-memcg vmstats. This leads to inaccurate "slab_unreclaimable"
> statistics in memory.stat. Fix it by using mod_lruvec_page_state()
> instead of mod_node_page_state().
>
> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Reviewed-by: Shakeel Butt <shakeelb@google.com>



* Re: [PATCH] mm: memcontrol: fix slub memory accounting
  2021-02-23  9:24 [PATCH] mm: memcontrol: fix slub memory accounting Muchun Song
  2021-02-23 15:21 ` Shakeel Butt
@ 2021-02-23 15:37 ` Roman Gushchin
  2021-02-23 18:42 ` Michal Koutný
  2 siblings, 0 replies; 4+ messages in thread
From: Roman Gushchin @ 2021-02-23 15:37 UTC (permalink / raw)
  To: Muchun Song
  Cc: hannes, mhocko, vdavydov.dev, akpm, shakeelb, cgroups, linux-mm,
	linux-kernel

On Tue, Feb 23, 2021 at 05:24:23PM +0800, Muchun Song wrote:
> SLUB currently accounts kmalloc() and kmalloc_node() allocations larger
> than order-1 page on a per-node basis, but it forgets to update the
> per-memcg vmstats. This leads to inaccurate "slab_unreclaimable"
> statistics in memory.stat. Fix it by using mod_lruvec_page_state()
> instead of mod_node_page_state().
> 
> Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
> Signed-off-by: Muchun Song <songmuchun@bytedance.com>

Reviewed-by: Roman Gushchin <guro@fb.com>

Thanks!



* Re: [PATCH] mm: memcontrol: fix slub memory accounting
  2021-02-23  9:24 [PATCH] mm: memcontrol: fix slub memory accounting Muchun Song
  2021-02-23 15:21 ` Shakeel Butt
  2021-02-23 15:37 ` Roman Gushchin
@ 2021-02-23 18:42 ` Michal Koutný
  2 siblings, 0 replies; 4+ messages in thread
From: Michal Koutný @ 2021-02-23 18:42 UTC (permalink / raw)
  To: Muchun Song
  Cc: hannes, mhocko, vdavydov.dev, akpm, guro, shakeelb, cgroups,
	linux-mm, linux-kernel


On Tue, Feb 23, 2021 at 05:24:23PM +0800, Muchun Song <songmuchun@bytedance.com> wrote:
>  mm/slab_common.c | 4 ++--
>  mm/slub.c        | 8 ++++----
>  2 files changed, 6 insertions(+), 6 deletions(-)
Reviewed-by: Michal Koutný <mkoutny@suse.com>



