Subject: Re: [PATCH v2 3/3] mm/slub: Use percpu partial free counter
To: Christoph Lameter , Xunlei Pang
Cc: Vlastimil Babka , Wen Yang , Roman Gushchin , Pekka Enberg ,
 Konstantin Khlebnikov , David Rientjes ,
 linux-kernel@vger.kernel.org, "linux-mm@kvack.org"
References: <1597061872-58724-1-git-send-email-xlpang@linux.alibaba.com>
 <1597061872-58724-4-git-send-email-xlpang@linux.alibaba.com>
From: Xunlei Pang
Reply-To: xlpang@linux.alibaba.com
Message-ID: <1c61e8fb-98ef-6d5f-12d7-ab5f16890e17@linux.alibaba.com>
Date: Wed, 3 Mar 2021 21:46:36 +0800

On 3/2/21 5:14 PM, Christoph Lameter wrote:
> On Mon, 10 Aug 2020, Xunlei Pang wrote:
>
>>
>> diff --git a/mm/slab.h b/mm/slab.h
>> index c85e2fa..a709a70 100644
>> --- a/mm/slab.h
>> +++ b/mm/slab.h
>> @@ -616,7 +616,7 @@ struct kmem_cache_node {
>>  #ifdef CONFIG_SLUB
>>  	unsigned long nr_partial;
>>  	struct list_head partial;
>> -	atomic_long_t partial_free_objs;
>> +	atomic_long_t __percpu *partial_free_objs;
>
> A percpu counter is never atomic. Just use unsigned long and use this_cpu
> operations for this thing. That should cut down further on the overhead.
>
>> --- a/mm/slub.c
>> +++ b/mm/slub.c
>> @@ -1775,11 +1775,21 @@ static void discard_slab(struct kmem_cache *s, struct page *page)
>>  /*
>>   * Management of partially allocated slabs.
>>   */
>> +static inline long get_partial_free(struct kmem_cache_node *n)
>> +{
>> +	long nr = 0;
>> +	int cpu;
>> +
>> +	for_each_possible_cpu(cpu)
>> +		nr += atomic_long_read(per_cpu_ptr(n->partial_free_objs, cpu));
>
> this_cpu_read(*n->partial_free_objs)
>
>> static inline void
>> __update_partial_free(struct kmem_cache_node *n, long delta)
>> {
>> -	atomic_long_add(delta, &n->partial_free_objs);
>> +	atomic_long_add(delta, this_cpu_ptr(n->partial_free_objs));
>
> this_cpu_add()
>
> and so on.
>

Thanks, I changed them both to use "unsigned long", and will send v3 out
after our internal performance regression test passes.