From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Thu, 02 Sep 2021 14:52:05 -0700
From: Andrew Morton
To: akpm@linux-foundation.org, bigeasy@linutronix.de, brouer@redhat.com,
	cl@linux.com, efault@gmx.de, iamjoonsoo.kim@lge.com, jannh@google.com,
	linux-mm@kvack.org, mgorman@techsingularity.net, mm-commits@vger.kernel.org,
	penberg@kernel.org, rientjes@google.com, tglx@linutronix.de,
	torvalds@linux-foundation.org, vbabka@suse.cz
Subject: [patch 040/212] mm, slub: make slab_lock() disable irqs with PREEMPT_RT
Message-ID: <20210902215205.cEty-ni0y%akpm@linux-foundation.org>
In-Reply-To: <20210902144820.78957dff93d7bea620d55a89@linux-foundation.org>
User-Agent: s-nail v14.8.16

From: Vlastimil Babka
Subject: mm, slub: make slab_lock() disable irqs with PREEMPT_RT

We need to disable irqs around slab_lock() (a bit spinlock) to make it
irq-safe.  The calls to slab_lock() are nested under spin_lock_irqsave(),
which does not disable irqs on PREEMPT_RT, so add explicit irq disabling
on PREEMPT_RT.

We also distinguish cmpxchg_double_slab(), where we do the irq disabling
explicitly, from __cmpxchg_double_slab(), which is meant for contexts
where irqs are already disabled.  However, those contexts also typically
use spin_lock_irqsave(), which is again insufficient on PREEMPT_RT.
Thus, change __cmpxchg_double_slab() to behave the same as
cmpxchg_double_slab() on PREEMPT_RT.

Link: https://lkml.kernel.org/r/20210805152000.12817-33-vbabka@suse.cz
Signed-off-by: Vlastimil Babka
Cc: Christoph Lameter
Cc: David Rientjes
Cc: Jann Horn
Cc: Jesper Dangaard Brouer
Cc: Joonsoo Kim
Cc: Mel Gorman
Cc: Mike Galbraith
Cc: Pekka Enberg
Cc: Sebastian Andrzej Siewior
Cc: Thomas Gleixner
Signed-off-by: Andrew Morton
---

 mm/slub.c |   13 +++++++++----
 1 file changed, 9 insertions(+), 4 deletions(-)

--- a/mm/slub.c~mm-slub-make-slab_lock-disable-irqs-with-preempt_rt
+++ a/mm/slub.c
@@ -380,12 +380,12 @@ __slab_unlock(struct page *page, unsigne
 
 static __always_inline void slab_lock(struct page *page, unsigned long *flags)
 {
-	__slab_lock(page, flags, false);
+	__slab_lock(page, flags, IS_ENABLED(CONFIG_PREEMPT_RT));
 }
 
 static __always_inline void slab_unlock(struct page *page, unsigned long *flags)
 {
-	__slab_unlock(page, flags, false);
+	__slab_unlock(page, flags, IS_ENABLED(CONFIG_PREEMPT_RT));
 }
 
 static inline bool ___cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
@@ -429,14 +429,19 @@ static inline bool ___cmpxchg_double_sla
 	return false;
 }
 
-/* Interrupts must be disabled (for the fallback code to work right) */
+/*
+ * Interrupts must be disabled (for the fallback code to work right), typically
+ * by an _irqsave() lock variant. Except on PREEMPT_RT where locks are different
+ * so we disable interrupts explicitly here.
+ */
 static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 		void *freelist_old, unsigned long counters_old,
 		void *freelist_new, unsigned long counters_new,
 		const char *n)
 {
 	return ___cmpxchg_double_slab(s, page, freelist_old, counters_old,
-				      freelist_new, counters_new, n, false);
+				      freelist_new, counters_new, n,
+				      IS_ENABLED(CONFIG_PREEMPT_RT));
 }
 
 static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
_
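
For reference: the helpers that the hunks above pass IS_ENABLED(CONFIG_PREEMPT_RT)
into are outside the diff context. Below is a minimal sketch of what the
__slab_lock()/__slab_unlock() helpers are assumed to look like at this point in
the series; the bit spinlock is PG_locked in page->flags as elsewhere in
mm/slub.c, but the exact bodies here are an assumption, not taken from this patch.

/* Sketch only: bodies assumed for illustration, not copied from this patch. */
static __always_inline void
__slab_lock(struct page *page, unsigned long *flags, bool disable_irqs)
{
	VM_BUG_ON_PAGE(PageTail(page), page);
	/* On PREEMPT_RT the callers' _irqsave() locks leave irqs enabled. */
	if (disable_irqs)
		local_irq_save(*flags);
	bit_spin_lock(PG_locked, &page->flags);
}

static __always_inline void
__slab_unlock(struct page *page, unsigned long *flags, bool disable_irqs)
{
	__bit_spin_unlock(PG_locked, &page->flags);
	if (disable_irqs)
		local_irq_restore(*flags);
}

With slab_lock()/slab_unlock() and __cmpxchg_double_slab() passing
IS_ENABLED(CONFIG_PREEMPT_RT), the bit spinlock itself saves and restores irqs
on PREEMPT_RT, where the surrounding _irqsave() lock variants do not actually
disable interrupts.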