From: Vlastimil Babka
To: linux-mm@kvack.org
Cc: linux-kernel@vger.kernel.org, Christoph Lameter, Pekka Enberg,
 David Rientjes, Joonsoo Kim, Andrew Morton, Jann Horn, Vlastimil Babka
Subject: [PATCH] mm, slub: splice cpu and page freelists in deactivate_slab()
Date: Fri, 15 Jan 2021 19:35:43 +0100
Message-Id: <20210115183543.15097-1-vbabka@suse.cz>

In deactivate_slab() we currently move all but one object from the cpu
freelist to the page freelist, one object at a time, using the costly
cmpxchg_double() operation. Then we unfreeze the page while moving the
last object onto the page freelist, with a final cmpxchg_double().

This can be optimized to avoid a cmpxchg_double() per object. Just count
the objects on the cpu freelist (to adjust page->inuse properly) and also
remember the last object in the chain. Then splice page->freelist onto the
last object, effectively adding the whole cpu freelist to page->freelist
while unfreezing the page, with a single cmpxchg_double().

Signed-off-by: Vlastimil Babka
---
Hi,

I stumbled on this optimization while pondering what to do about the
percpu partial list memory wastage [1], but it should be useful on its
own. I haven't run any measurements yet, but eliminating per-object
cmpxchg_double() operations should be obviously faster [TM].
Passed some basic testing, including hardened freelist and slub_debug.

[1] https://lore.kernel.org/linux-mm/CAG48ez2Qx5K1Cab-m8BdSibp6wLTip6ro4=-umR7BLsEgjEYzA@mail.gmail.com/

 mm/slub.c | 59 ++++++++++++++++++++++---------------------------------
 1 file changed, 24 insertions(+), 35 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 0d4bdf6783ee..c3141aa962be 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2167,9 +2167,9 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 {
 	enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE };
 	struct kmem_cache_node *n = get_node(s, page_to_nid(page));
-	int lock = 0;
+	int lock = 0, free_delta = 0;
 	enum slab_modes l = M_NONE, m = M_NONE;
-	void *nextfree;
+	void *nextfree, *freelist_iter, *freelist_tail;
 	int tail = DEACTIVATE_TO_HEAD;
 	struct page new;
 	struct page old;
@@ -2180,45 +2180,34 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 	}
 
 	/*
-	 * Stage one: Free all available per cpu objects back
-	 * to the page freelist while it is still frozen. Leave the
-	 * last one.
-	 *
-	 * There is no need to take the list->lock because the page
-	 * is still frozen.
+	 * Stage one: Count the objects on cpu's freelist as free_delta and
+	 * remember the last object in freelist_tail for later splicing.
 	 */
-	while (freelist && (nextfree = get_freepointer(s, freelist))) {
-		void *prior;
-		unsigned long counters;
+	freelist_tail = NULL;
+	freelist_iter = freelist;
+	while (freelist_iter) {
+		nextfree = get_freepointer(s, freelist_iter);
 
 		/*
 		 * If 'nextfree' is invalid, it is possible that the object at
-		 * 'freelist' is already corrupted. So isolate all objects
-		 * starting at 'freelist'.
+		 * 'freelist_iter' is already corrupted. So isolate all objects
+		 * starting at 'freelist_iter' by skipping them.
 		 */
-		if (freelist_corrupted(s, page, &freelist, nextfree))
+		if (freelist_corrupted(s, page, &freelist_iter, nextfree))
 			break;
 
-		do {
-			prior = page->freelist;
-			counters = page->counters;
-			set_freepointer(s, freelist, prior);
-			new.counters = counters;
-			new.inuse--;
-			VM_BUG_ON(!new.frozen);
+		freelist_tail = freelist_iter;
+		free_delta++;
 
-		} while (!__cmpxchg_double_slab(s, page,
-			prior, counters,
-			freelist, new.counters,
-			"drain percpu freelist"));
-
-		freelist = nextfree;
+		freelist_iter = nextfree;
 	}
 
 	/*
-	 * Stage two: Ensure that the page is unfrozen while the
-	 * list presence reflects the actual number of objects
-	 * during unfreeze.
+	 * Stage two: Unfreeze the page while splicing the per-cpu
+	 * freelist to the head of page's freelist.
+	 *
+	 * Ensure that the page is unfrozen while the list presence
+	 * reflects the actual number of objects during unfreeze.
 	 *
 	 * We setup the list membership and then perform a cmpxchg
 	 * with the count. If there is a mismatch then the page
@@ -2231,15 +2220,15 @@ static void deactivate_slab(struct kmem_cache *s, struct page *page,
 	 */
 redo:
 
-	old.freelist = page->freelist;
-	old.counters = page->counters;
+	old.freelist = READ_ONCE(page->freelist);
+	old.counters = READ_ONCE(page->counters);
 	VM_BUG_ON(!old.frozen);
 
 	/* Determine target state of the slab */
 	new.counters = old.counters;
-	if (freelist) {
-		new.inuse--;
-		set_freepointer(s, freelist, old.freelist);
+	if (freelist_tail) {
+		new.inuse -= free_delta;
+		set_freepointer(s, freelist_tail, old.freelist);
 		new.freelist = freelist;
 	} else
 		new.freelist = old.freelist;
-- 
2.29.2