From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 07 Sep 2021 19:53:54 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, bigeasy@linutronix.de, brouer@redhat.com,
 cl@linux.com, efault@gmx.de, iamjoonsoo.kim@lge.com, jannh@google.com,
 linux-mm@kvack.org, mgorman@techsingularity.net, mm-commits@vger.kernel.org,
 penberg@kernel.org, quic_qiancai@quicinc.com, rientjes@google.com,
 tglx@linutronix.de, torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 018/147] mm, slub: move reset of c->page and freelist out of deactivate_slab()
Message-ID: <20210908025354.K4RaPivie%akpm@linux-foundation.org>
In-Reply-To: <20210907195226.14b1d22a07c085b22968b933@linux-foundation.org>
User-Agent: s-nail v14.8.16
Precedence: bulk
Reply-To: linux-kernel@vger.kernel.org
List-ID:
X-Mailing-List: mm-commits@vger.kernel.org

From: Vlastimil Babka
Subject: mm, slub: move reset of c->page and freelist out of deactivate_slab()

deactivate_slab() removes the cpu slab by merging the cpu freelist with
slab's freelist and putting the slab on the proper node's list.  It also
sets the respective kmem_cache_cpu pointers to NULL.

By extracting the kmem_cache_cpu operations from the function, we can make
it not dependent on disabled irqs.

Also if we return a single free pointer from ___slab_alloc, we no longer
have to assign kmem_cache_cpu.page before deactivation or care if somebody
preempted us and assigned a different page to our kmem_cache_cpu in the
process.
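As an illustrative sketch only (not part of the patch; flush_slab_sketch is
a made-up name), the resulting calling convention is: the caller snapshots
and clears the kmem_cache_cpu fields itself, and deactivate_slab() then
works purely on the detached page and freelist.  This mirrors what the
updated flush_slab() in the diff below does:

static inline void flush_slab_sketch(struct kmem_cache *s,
				     struct kmem_cache_cpu *c)
{
	void *freelist = c->freelist;	/* snapshot the cpu freelist */
	struct page *page = c->page;	/* snapshot the cpu slab */

	c->page = NULL;			/* detach from kmem_cache_cpu first */
	c->freelist = NULL;
	c->tid = next_tid(c->tid);

	/* deactivate_slab() no longer touches kmem_cache_cpu */
	deactivate_slab(s, page, freelist);

	stat(s, CPUSLAB_FLUSH);
}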
Link: https://lkml.kernel.org/r/20210904105003.11688-19-vbabka@suse.cz
Signed-off-by: Vlastimil Babka
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Jann Horn
Cc: Jesper Dangaard Brouer
Cc: Joonsoo Kim
Cc: Mel Gorman
Cc: Mike Galbraith
Cc: Pekka Enberg
Cc: Qian Cai
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
---

 mm/slub.c |   31 ++++++++++++++++++-------------
 1 file changed, 18 insertions(+), 13 deletions(-)

--- a/mm/slub.c~mm-slub-move-reset-of-c-page-and-freelist-out-of-deactivate_slab
+++ a/mm/slub.c
@@ -2209,10 +2209,13 @@ static void init_kmem_cache_cpus(struct
 }
 
 /*
- * Remove the cpu slab
+ * Finishes removing the cpu slab. Merges cpu's freelist with page's freelist,
+ * unfreezes the slabs and puts it on the proper list.
+ * Assumes the slab has been already safely taken away from kmem_cache_cpu
+ * by the caller.
  */
 static void deactivate_slab(struct kmem_cache *s, struct page *page,
-				void *freelist, struct kmem_cache_cpu *c)
+				void *freelist)
 {
 	enum slab_modes { M_NONE, M_PARTIAL, M_FULL, M_FREE };
 	struct kmem_cache_node *n = get_node(s, page_to_nid(page));
@@ -2341,9 +2344,6 @@ redo:
 		discard_slab(s, page);
 		stat(s, FREE_SLAB);
 	}
-
-	c->page = NULL;
-	c->freelist = NULL;
 }
 
 /*
@@ -2468,10 +2468,16 @@ static void put_cpu_partial(struct kmem_
 
 static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
 {
-	stat(s, CPUSLAB_FLUSH);
-	deactivate_slab(s, c->page, c->freelist, c);
+	void *freelist = c->freelist;
+	struct page *page = c->page;
 
+	c->page = NULL;
+	c->freelist = NULL;
 	c->tid = next_tid(c->tid);
+
+	deactivate_slab(s, page, freelist);
+
+	stat(s, CPUSLAB_FLUSH);
 }
 
 /*
@@ -2769,7 +2775,10 @@ deactivate_slab:
 		local_irq_restore(flags);
 		goto reread_page;
 	}
-	deactivate_slab(s, page, c->freelist, c);
+	freelist = c->freelist;
+	c->page = NULL;
+	c->freelist = NULL;
+	deactivate_slab(s, page, freelist);
 	local_irq_restore(flags);
 
 new_slab:
@@ -2848,11 +2857,7 @@ check_new_page:
 return_single:
 
 	local_irq_save(flags);
-	if (unlikely(c->page))
-		flush_slab(s, c);
-	c->page = page;
-
-	deactivate_slab(s, page, get_freepointer(s, freelist), c);
+	deactivate_slab(s, page, get_freepointer(s, freelist));
 	local_irq_restore(flags);
 	return freelist;
 }
_