Date: Tue, 7 Jul 2020 06:59:04 +0000 (UTC)
From: Christopher Lameter
To: Xunlei Pang
Cc: Andrew Morton, Wen Yang, Yang Shi, Roman Gushchin, linux-mm@kvack.org, linux-kernel@vger.kernel.org
Subject: Re: [PATCH 1/2] mm/slub: Introduce two counters for the partial objects
In-Reply-To: <1593678728-128358-1-git-send-email-xlpang@linux.alibaba.com>

On Thu, 2 Jul 2020, Xunlei Pang wrote:

> This patch introduces two counters to maintain the actual number
> of partial objects dynamically instead of iterating the partial
> page lists with list_lock held.
>
> New counters of kmem_cache_node are: pfree_objects, ptotal_objects.
> The main operations are under list_lock in the slow path, so their
> performance impact is minimal.

If these counters are added at all, they need to be under CONFIG_SLUB_DEBUG.
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -616,6 +616,8 @@ struct kmem_cache_node {
>  #ifdef CONFIG_SLUB
>  	unsigned long nr_partial;
>  	struct list_head partial;
> +	atomic_long_t pfree_objects;	/* partial free objects */
> +	atomic_long_t ptotal_objects;	/* partial total objects */

Please put these inside the CONFIG_SLUB_DEBUG section. Without CONFIG_SLUB_DEBUG we need to build with a minimal memory footprint.

>  #ifdef CONFIG_SLUB_DEBUG
>  	atomic_long_t nr_slabs;
>  	atomic_long_t total_objects;
> diff --git a/mm/slub.c b/mm/slub.c

Also, this looks to be quite heavy on the cache and on execution time. Note that list_lock could be taken frequently in the performance-sensitive case of freeing an object that is not on the partial lists.