From: Xunlei Pang <xlpang@linux.alibaba.com>
To: Vlastimil Babka, Christoph Lameter, Wen Yang, Roman Gushchin, Pekka Enberg, Konstantin Khlebnikov, David Rientjes, Xunlei Pang
Cc: linux-kernel@vger.kernel.org, linux-mm@kvack.org
Subject: [PATCH v2 3/3] mm/slub: Use percpu partial free counter
Date: Mon, 10 Aug 2020 20:17:52 +0800
Message-Id: <1597061872-58724-4-git-send-email-xlpang@linux.alibaba.com>
In-Reply-To: <1597061872-58724-1-git-send-email-xlpang@linux.alibaba.com>
References: <1597061872-58724-1-git-send-email-xlpang@linux.alibaba.com>

The only concern with introducing the partial counter is that partial_free_objs may cause atomic-operation contention when concurrent __slab_free() calls hit the same SLUB. This patch converts it to a percpu counter to avoid that contention.
Co-developed-by: Wen Yang
Signed-off-by: Xunlei Pang <xlpang@linux.alibaba.com>
---
 mm/slab.h |  2 +-
 mm/slub.c | 38 +++++++++++++++++++++++++++++++-------
 2 files changed, 32 insertions(+), 8 deletions(-)

diff --git a/mm/slab.h b/mm/slab.h
index c85e2fa..a709a70 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -616,7 +616,7 @@ struct kmem_cache_node {
 #ifdef CONFIG_SLUB
 	unsigned long nr_partial;
 	struct list_head partial;
-	atomic_long_t partial_free_objs;
+	atomic_long_t __percpu *partial_free_objs;
 	atomic_long_t partial_total_objs;
 #ifdef CONFIG_SLUB_DEBUG
 	atomic_long_t nr_slabs;
diff --git a/mm/slub.c b/mm/slub.c
index 25a4421..f6fc60b 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -1775,11 +1775,21 @@ static void discard_slab(struct kmem_cache *s, struct page *page)
 /*
  * Management of partially allocated slabs.
  */
+static inline long get_partial_free(struct kmem_cache_node *n)
+{
+	long nr = 0;
+	int cpu;
+
+	for_each_possible_cpu(cpu)
+		nr += atomic_long_read(per_cpu_ptr(n->partial_free_objs, cpu));
+
+	return nr;
+}
 static inline void
 __update_partial_free(struct kmem_cache_node *n, long delta)
 {
-	atomic_long_add(delta, &n->partial_free_objs);
+	atomic_long_add(delta, this_cpu_ptr(n->partial_free_objs));
 }
 
 static inline void
@@ -2429,12 +2439,12 @@ static unsigned long partial_counter(struct kmem_cache_node *n,
 	unsigned long ret = 0;
 
 	if (item == PARTIAL_FREE) {
-		ret = atomic_long_read(&n->partial_free_objs);
+		ret = get_partial_free(n);
 	} else if (item == PARTIAL_TOTAL) {
 		ret = atomic_long_read(&n->partial_total_objs);
 	} else if (item == PARTIAL_INUSE) {
 		ret = atomic_long_read(&n->partial_total_objs) -
-				atomic_long_read(&n->partial_free_objs);
+				get_partial_free(n);
 		if ((long)ret < 0)
 			ret = 0;
 	}
@@ -3390,19 +3400,28 @@ static inline int calculate_order(unsigned int size)
 	return -ENOSYS;
 }
 
-static void
+static int
 init_kmem_cache_node(struct kmem_cache_node *n)
 {
+	int cpu;
+
 	n->nr_partial = 0;
 	spin_lock_init(&n->list_lock);
 	INIT_LIST_HEAD(&n->partial);
-	atomic_long_set(&n->partial_free_objs, 0);
+
+	n->partial_free_objs = alloc_percpu(atomic_long_t);
+	if (!n->partial_free_objs)
+		return -ENOMEM;
+	for_each_possible_cpu(cpu)
+		atomic_long_set(per_cpu_ptr(n->partial_free_objs, cpu), 0);
 	atomic_long_set(&n->partial_total_objs, 0);
 #ifdef CONFIG_SLUB_DEBUG
 	atomic_long_set(&n->nr_slabs, 0);
 	atomic_long_set(&n->total_objects, 0);
 	INIT_LIST_HEAD(&n->full);
 #endif
+
+	return 0;
 }
 
 static inline int alloc_kmem_cache_cpus(struct kmem_cache *s)
@@ -3463,7 +3482,7 @@ static void early_kmem_cache_node_alloc(int node)
 	page->inuse = 1;
 	page->frozen = 0;
 	kmem_cache_node->node[node] = n;
-	init_kmem_cache_node(n);
+	BUG_ON(init_kmem_cache_node(n) < 0);
 	inc_slabs_node(kmem_cache_node, node, page->objects);
 
 	/*
@@ -3481,6 +3500,7 @@ static void free_kmem_cache_nodes(struct kmem_cache *s)
 
 	for_each_kmem_cache_node(s, node, n) {
 		s->node[node] = NULL;
+		free_percpu(n->partial_free_objs);
 		kmem_cache_free(kmem_cache_node, n);
 	}
 }
@@ -3511,7 +3531,11 @@ static int init_kmem_cache_nodes(struct kmem_cache *s)
 			return 0;
 		}
 
-		init_kmem_cache_node(n);
+		if (init_kmem_cache_node(n) < 0) {
+			free_kmem_cache_nodes(s);
+			return 0;
+		}
+
 		s->node[node] = n;
 	}
 	return 1;
-- 
1.8.3.1