From: Xunlei Pang <xlpang@linux.alibaba.com>
To: Christoph Lameter, Pekka Enberg, Vlastimil Babka, Roman Gushchin,
	Konstantin Khlebnikov, David Rientjes, Matthew Wilcox, Shu Ming,
	Andrew Morton
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org, Wen Yang,
	James Wang, Xunlei Pang
Subject: [PATCH v4 3/3] mm/slub: Get rid of count_partial()
Date: Wed, 17 Mar 2021 15:54:52 +0800
Message-Id: <1615967692-80524-4-git-send-email-xlpang@linux.alibaba.com>
X-Mailer: git-send-email 1.8.3.1
In-Reply-To: <1615967692-80524-1-git-send-email-xlpang@linux.alibaba.com>
References: <1615967692-80524-1-git-send-email-xlpang@linux.alibaba.com>

Now that the partial counters are ready, let's use them to get rid of
count_partial(). When CONFIG_SLUB_DEBUG_PARTIAL is on, the partial
counters are used to calculate accurate partial usage; otherwise, zero
usage statistics are simply assumed.

Tested-by: James Wang
Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
---
 mm/slub.c | 64 +++++++++++++++++++++++++++++++--------------------------------
 1 file changed, 31 insertions(+), 33 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 856aea4..9bff669 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2533,11 +2533,6 @@ static inline int node_match(struct page *page, int node)
 }
 
 #ifdef CONFIG_SLUB_DEBUG
-static int count_free(struct page *page)
-{
-	return page->objects - page->inuse;
-}
-
 static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
 {
 	return atomic_long_read(&n->total_objects);
@@ -2545,18 +2540,33 @@ static inline unsigned long node_nr_objs(struct kmem_cache_node *n)
 #endif /* CONFIG_SLUB_DEBUG */
 
 #if defined(CONFIG_SLUB_DEBUG) || defined(CONFIG_SYSFS)
-static unsigned long count_partial(struct kmem_cache_node *n,
-					int (*get_count)(struct page *))
+enum partial_item { PARTIAL_FREE, PARTIAL_INUSE, PARTIAL_TOTAL, PARTIAL_SLAB };
+
+static unsigned long partial_counter(struct kmem_cache_node *n,
+					enum partial_item item)
 {
-	unsigned long flags;
-	unsigned long x = 0;
-	struct page *page;
+	unsigned long ret = 0;
 
-	spin_lock_irqsave(&n->list_lock, flags);
-	list_for_each_entry(page, &n->partial, slab_list)
-		x += get_count(page);
-	spin_unlock_irqrestore(&n->list_lock, flags);
-	return x;
+#ifdef CONFIG_SLUB_DEBUG_PARTIAL
+	if (item == PARTIAL_FREE) {
+		ret = per_cpu_sum(*n->partial_free_objs);
+		if ((long)ret < 0)
+			ret = 0;
+	} else if (item == PARTIAL_TOTAL) {
+		ret = n->partial_total_objs;
+	} else if (item == PARTIAL_INUSE) {
+		ret = per_cpu_sum(*n->partial_free_objs);
+		if ((long)ret < 0)
+			ret = 0;
+		ret = n->partial_total_objs - ret;
+		if ((long)ret < 0)
+			ret = 0;
+	} else { /* item == PARTIAL_SLAB */
+		ret = n->nr_partial;
+	}
+#endif
+
+	return ret;
 }
 
 #endif /* CONFIG_SLUB_DEBUG || CONFIG_SYSFS */
@@ -2587,7 +2597,7 @@ static unsigned long count_partial(struct kmem_cache_node *n,
 		unsigned long nr_objs;
 		unsigned long nr_free;
 
-		nr_free = count_partial(n, count_free);
+		nr_free = partial_counter(n, PARTIAL_FREE);
 		nr_slabs = node_nr_slabs(n);
 		nr_objs = node_nr_objs(n);
 
@@ -4654,18 +4664,6 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t gfpflags,
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
 #endif
 
-#ifdef CONFIG_SYSFS
-static int count_inuse(struct page *page)
-{
-	return page->inuse;
-}
-
-static int count_total(struct page *page)
-{
-	return page->objects;
-}
-#endif
-
 #ifdef CONFIG_SLUB_DEBUG
 static void validate_slab(struct kmem_cache *s, struct page *page)
 {
@@ -5102,7 +5100,7 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
 				x = atomic_long_read(&n->total_objects);
 			else if (flags & SO_OBJECTS)
 				x = atomic_long_read(&n->total_objects) -
-					count_partial(n, count_free);
+					partial_counter(n, PARTIAL_FREE);
 			else
 				x = atomic_long_read(&n->nr_slabs);
 			total += x;
@@ -5116,11 +5114,11 @@ static ssize_t show_slab_objects(struct kmem_cache *s,
 
 		for_each_kmem_cache_node(s, node, n) {
 			if (flags & SO_TOTAL)
-				x = count_partial(n, count_total);
+				x = partial_counter(n, PARTIAL_TOTAL);
 			else if (flags & SO_OBJECTS)
-				x = count_partial(n, count_inuse);
+				x = partial_counter(n, PARTIAL_INUSE);
 			else
-				x = n->nr_partial;
+				x = partial_counter(n, PARTIAL_SLAB);
 			total += x;
 			nodes[node] += x;
 		}
@@ -5884,7 +5882,7 @@ void get_slabinfo(struct kmem_cache *s, struct slabinfo *sinfo)
 	for_each_kmem_cache_node(s, node, n) {
 		nr_slabs += node_nr_slabs(n);
 		nr_objs += node_nr_objs(n);
-		nr_free += count_partial(n, count_free);
+		nr_free += partial_counter(n, PARTIAL_FREE);
 	}
 
 	sinfo->active_objs = nr_objs - nr_free;
-- 
1.8.3.1
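
For readers who want to see the clamp-and-sum arithmetic of
partial_counter() in isolation, here is a minimal standalone C sketch.
The struct, the fixed NR_CPUS array, and the sample values are all
hypothetical stand-ins: in the kernel the per-CPU sum comes from
per_cpu_sum() over n->partial_free_objs (introduced earlier in this
series) and the total from n->partial_total_objs. The explicit loop
below only models that behavior.

/*
 * Standalone model of the partial_counter() arithmetic (hypothetical
 * names and values; not kernel code).
 */
#include <stdio.h>

#define NR_CPUS 4	/* hypothetical CPU count */

enum partial_item { PARTIAL_FREE, PARTIAL_INUSE, PARTIAL_TOTAL, PARTIAL_SLAB };

struct node_model {
	long partial_free_objs[NR_CPUS];	/* models the per-CPU counters */
	long partial_total_objs;		/* models n->partial_total_objs */
	long nr_partial;			/* models n->nr_partial */
};

static unsigned long partial_counter_model(const struct node_model *n,
					   enum partial_item item)
{
	long free_sum = 0;
	int cpu;

	/* models per_cpu_sum(): add up every CPU's local counter */
	for (cpu = 0; cpu < NR_CPUS; cpu++)
		free_sum += n->partial_free_objs[cpu];
	/* unsynchronized per-CPU updates can leave a transient negative
	 * sum, so clamp to zero as the patch does */
	if (free_sum < 0)
		free_sum = 0;

	switch (item) {
	case PARTIAL_FREE:
		return free_sum;
	case PARTIAL_TOTAL:
		return n->partial_total_objs;
	case PARTIAL_INUSE: {
		long inuse = n->partial_total_objs - free_sum;

		return inuse < 0 ? 0 : inuse;	/* second clamp, as in the patch */
	}
	case PARTIAL_SLAB:
	default:
		return n->nr_partial;
	}
}

int main(void)
{
	/* one CPU's counter is transiently negative; totals stay sane */
	struct node_model n = {
		.partial_free_objs = { 10, -3, 5, 2 },
		.partial_total_objs = 40,
		.nr_partial = 6,
	};

	printf("free=%lu inuse=%lu total=%lu slabs=%lu\n",
	       partial_counter_model(&n, PARTIAL_FREE),
	       partial_counter_model(&n, PARTIAL_INUSE),
	       partial_counter_model(&n, PARTIAL_TOTAL),
	       partial_counter_model(&n, PARTIAL_SLAB));
	return 0;
}

With these sample values the per-CPU counters sum to 14, so the model
prints free=14, inuse=26, total=40, slabs=6. The clamps reflect the
design trade-off visible in the diff: summing per-CPU counters without
the list_lock costs O(nr_cpus) instead of a full partial-list walk, at
the price of tolerating (and zero-clamping) transient negative readings.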