From: Greg Kroah-Hartman
To: linux-kernel@vger.kernel.org
Cc: Greg Kroah-Hartman, stable@vger.kernel.org, Muchun Song, Shakeel Butt,
	Roman Gushchin, Michal Koutný, Johannes Weiner, Michal Hocko,
	Vladimir Davydov, Andrew Morton, Linus Torvalds, Sasha Levin
Subject: [PATCH 5.11 547/775] mm: memcontrol: fix slub memory accounting
Date: Mon, 1 Mar 2021 17:11:55 +0100
Message-Id: <20210301161228.517586435@linuxfoundation.org>
X-Mailer: git-send-email 2.30.1
In-Reply-To: <20210301161201.679371205@linuxfoundation.org>
References: <20210301161201.679371205@linuxfoundation.org>
User-Agent: quilt/0.66
MIME-Version: 1.0
Content-Type: text/plain; charset=UTF-8
Content-Transfer-Encoding: 8bit

From: Muchun Song

[ Upstream commit 96403bfe50c344b587ea53894954a9d152af1c9d ]

SLUB currently accounts kmalloc() and kmalloc_node() allocations larger
than an order-1 page on a per-node basis, but it forgets to update the
per-memcg vmstats.  This can lead to inaccurate "slab_unreclaimable"
statistics in memory.stat.  Fix it by using mod_lruvec_page_state()
instead of mod_node_page_state().
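To make the effect of the change concrete, here is a minimal userspace
sketch of the accounting difference, not kernel code: the struct and helper
names (charge_node_only, charge_lruvec) are invented for illustration.  The
point is that the old per-node call updates only the node-wide counter,
while the lruvec-based call also charges the page's owning memcg, so the
cgroup's slab_unreclaimable value in memory.stat stays in sync.

```c
#include <stdio.h>

/* per-NUMA-node counter (what mod_node_page_state() updates) */
struct node_stats  { long slab_unreclaimable_b; };
/* per-cgroup counter (what is reported as slab_unreclaimable in memory.stat) */
struct memcg_stats { long slab_unreclaimable_b; };

/* pre-fix path: only the node-level counter is touched */
static void charge_node_only(struct node_stats *node, long delta)
{
	node->slab_unreclaimable_b += delta;
}

/* post-fix path: the lruvec-style helper charges the node *and* the owning memcg */
static void charge_lruvec(struct node_stats *node, struct memcg_stats *memcg,
			  long delta)
{
	node->slab_unreclaimable_b += delta;
	memcg->slab_unreclaimable_b += delta;
}

int main(void)
{
	struct node_stats node = { 0 };
	struct memcg_stats memcg = { 0 };
	long bytes = 4L * 4096;	/* a 16 KiB (order-2) kmalloc, larger than order-1 */

	charge_node_only(&node, bytes);		/* pre-fix behaviour */
	printf("pre-fix : node=%ld memcg=%ld -> memory.stat under-reports %ld bytes\n",
	       node.slab_unreclaimable_b, memcg.slab_unreclaimable_b,
	       node.slab_unreclaimable_b - memcg.slab_unreclaimable_b);

	node.slab_unreclaimable_b = 0;
	charge_lruvec(&node, &memcg, bytes);	/* post-fix behaviour */
	printf("post-fix: node=%ld memcg=%ld\n",
	       node.slab_unreclaimable_b, memcg.slab_unreclaimable_b);
	return 0;
}
```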
Link: https://lkml.kernel.org/r/20210223092423.42420-1-songmuchun@bytedance.com
Fixes: 6a486c0ad4dc ("mm, sl[ou]b: improve memory accounting")
Signed-off-by: Muchun Song
Reviewed-by: Shakeel Butt
Reviewed-by: Roman Gushchin
Reviewed-by: Michal Koutný
Cc: Johannes Weiner
Cc: Michal Hocko
Cc: Vladimir Davydov
Signed-off-by: Andrew Morton
Signed-off-by: Linus Torvalds
Signed-off-by: Sasha Levin
---
 mm/slab_common.c | 4 ++--
 mm/slub.c        | 8 ++++----
 2 files changed, 6 insertions(+), 6 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index e981c80d216c2..0b775cb5c1089 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -837,8 +837,8 @@ void *kmalloc_order(size_t size, gfp_t flags, unsigned int order)
 	page = alloc_pages(flags, order);
 	if (likely(page)) {
 		ret = page_address(page);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    PAGE_SIZE << order);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
 	}
 	ret = kasan_kmalloc_large(ret, size, flags);
 	/* As ret might get tagged, call kmemleak hook after KASAN. */
diff --git a/mm/slub.c b/mm/slub.c
index b22a4b101c846..69dacc61b8435 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3999,8 +3999,8 @@ static void *kmalloc_large_node(size_t size, gfp_t flags, int node)
 	page = alloc_pages_node(node, flags, order);
 	if (page) {
 		ptr = page_address(page);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    PAGE_SIZE << order);
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      PAGE_SIZE << order);
 	}

 	return kmalloc_large_node_hook(ptr, size, flags);
@@ -4131,8 +4131,8 @@ void kfree(const void *x)

 		BUG_ON(!PageCompound(page));
 		kfree_hook(object);
-		mod_node_page_state(page_pgdat(page), NR_SLAB_UNRECLAIMABLE_B,
-				    -(PAGE_SIZE << order));
+		mod_lruvec_page_state(page, NR_SLAB_UNRECLAIMABLE_B,
+				      -(PAGE_SIZE << order));
 		__free_pages(page, order);
 		return;
 	}
--
2.27.0