From: Abel Wu
To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton
Cc: Abel Wu, open list:SLAB ALLOCATOR, open list
Subject: [PATCH] mm/slub: sysfs cleanup on cpu partial when !SLUB_CPU_PARTIAL
Date: Thu, 13 Aug 2020 16:48:54 +0800
Message-ID: <20200813084858.1494-1-wuyun.wu@huawei.com>

From: Abel Wu

Hide cpu partial related sysfs entries when !CONFIG_SLUB_CPU_PARTIAL to
avoid confusion.
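
For illustration, a minimal userspace sketch of the behaviour this change
targets (the probe program, the hard-coded path and the "kmalloc-64" cache
name are examples only, not part of this patch). With
!CONFIG_SLUB_CPU_PARTIAL the cpu_partial file currently still exists,
always reads back 0 and rejects any non-zero store with -EINVAL; with this
patch applied the file is simply not created, so opening it fails with
ENOENT:

/*
 * Hypothetical probe, for illustration only; any slab cache under
 * /sys/kernel/slab/ could be used in place of "kmalloc-64".
 */
#include <stdio.h>
#include <errno.h>
#include <string.h>

int main(void)
{
	const char *path = "/sys/kernel/slab/kmalloc-64/cpu_partial";
	char buf[32];
	FILE *f = fopen(path, "r");

	if (!f) {
		/* with this patch and !CONFIG_SLUB_CPU_PARTIAL: ENOENT */
		printf("%s: %s\n", path, strerror(errno));
		return 0;
	}

	/* without this patch the file exists even when the feature is
	 * compiled out and always reads back 0, which is the confusing
	 * part this cleanup removes */
	if (fgets(buf, sizeof(buf), f))
		printf("%s: %s", path, buf);	/* buf already ends in '\n' */

	fclose(f);
	return 0;
}

The same reasoning applies to slabs_cpu_partial and the cpu_partial_*
stat counters that this patch hides as well.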
Signed-off-by: Abel Wu
---
 mm/slub.c | 56 +++++++++++++++++++++++++++++++------------------------
 1 file changed, 32 insertions(+), 24 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 5d89e4064f83..4f496ae5a820 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -5071,29 +5071,6 @@ static ssize_t min_partial_store(struct kmem_cache *s, const char *buf,
 }
 SLAB_ATTR(min_partial);
 
-static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
-{
-	return sprintf(buf, "%u\n", slub_cpu_partial(s));
-}
-
-static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
-				 size_t length)
-{
-	unsigned int objects;
-	int err;
-
-	err = kstrtouint(buf, 10, &objects);
-	if (err)
-		return err;
-	if (objects && !kmem_cache_has_cpu_partial(s))
-		return -EINVAL;
-
-	slub_set_cpu_partial(s, objects);
-	flush_all(s);
-	return length;
-}
-SLAB_ATTR(cpu_partial);
-
 static ssize_t ctor_show(struct kmem_cache *s, char *buf)
 {
 	if (!s->ctor)
@@ -5132,6 +5109,30 @@ static ssize_t objects_partial_show(struct kmem_cache *s, char *buf)
 }
 SLAB_ATTR_RO(objects_partial);
 
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+static ssize_t cpu_partial_show(struct kmem_cache *s, char *buf)
+{
+	return sprintf(buf, "%u\n", slub_cpu_partial(s));
+}
+
+static ssize_t cpu_partial_store(struct kmem_cache *s, const char *buf,
+				 size_t length)
+{
+	unsigned int objects;
+	int err;
+
+	err = kstrtouint(buf, 10, &objects);
+	if (err)
+		return err;
+	if (objects && !kmem_cache_has_cpu_partial(s))
+		return -EINVAL;
+
+	slub_set_cpu_partial(s, objects);
+	flush_all(s);
+	return length;
+}
+SLAB_ATTR(cpu_partial);
+
 static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
 {
 	int objects = 0;
@@ -5166,6 +5167,7 @@ static ssize_t slabs_cpu_partial_show(struct kmem_cache *s, char *buf)
 	return len + sprintf(buf + len, "\n");
 }
 SLAB_ATTR_RO(slabs_cpu_partial);
+#endif
 
 static ssize_t reclaim_account_show(struct kmem_cache *s, char *buf)
 {
@@ -5496,10 +5498,12 @@ STAT_ATTR(DEACTIVATE_BYPASS, deactivate_bypass);
 STAT_ATTR(ORDER_FALLBACK, order_fallback);
 STAT_ATTR(CMPXCHG_DOUBLE_CPU_FAIL, cmpxchg_double_cpu_fail);
 STAT_ATTR(CMPXCHG_DOUBLE_FAIL, cmpxchg_double_fail);
+#ifdef CONFIG_SLUB_CPU_PARTIAL
 STAT_ATTR(CPU_PARTIAL_ALLOC, cpu_partial_alloc);
 STAT_ATTR(CPU_PARTIAL_FREE, cpu_partial_free);
 STAT_ATTR(CPU_PARTIAL_NODE, cpu_partial_node);
 STAT_ATTR(CPU_PARTIAL_DRAIN, cpu_partial_drain);
+#endif
 #endif	/* CONFIG_SLUB_STATS */
 
 static struct attribute *slab_attrs[] = {
@@ -5508,7 +5512,6 @@ static struct attribute *slab_attrs[] = {
 	&objs_per_slab_attr.attr,
 	&order_attr.attr,
 	&min_partial_attr.attr,
-	&cpu_partial_attr.attr,
 	&objects_attr.attr,
 	&objects_partial_attr.attr,
 	&partial_attr.attr,
@@ -5520,7 +5523,10 @@ static struct attribute *slab_attrs[] = {
 	&reclaim_account_attr.attr,
 	&destroy_by_rcu_attr.attr,
 	&shrink_attr.attr,
+#ifdef CONFIG_SLUB_CPU_PARTIAL
+	&cpu_partial_attr.attr,
 	&slabs_cpu_partial_attr.attr,
+#endif
 #ifdef CONFIG_SLUB_DEBUG
 	&total_objects_attr.attr,
 	&slabs_attr.attr,
@@ -5562,11 +5568,13 @@ static struct attribute *slab_attrs[] = {
 	&order_fallback_attr.attr,
 	&cmpxchg_double_fail_attr.attr,
 	&cmpxchg_double_cpu_fail_attr.attr,
+#ifdef CONFIG_SLUB_CPU_PARTIAL
 	&cpu_partial_alloc_attr.attr,
 	&cpu_partial_free_attr.attr,
 	&cpu_partial_node_attr.attr,
 	&cpu_partial_drain_attr.attr,
 #endif
+#endif
 #ifdef CONFIG_FAILSLAB
 	&failslab_attr.attr,
 #endif
-- 
2.28.0.windows.1