From: Vlastimil Babka <vbabka@suse.cz>
To: Andrew Morton <akpm@linux-foundation.org>,
	Christoph Lameter <cl@linux.com>,
	David Rientjes <rientjes@google.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Mike Galbraith <efault@gmx.de>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Mel Gorman <mgorman@techsingularity.net>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Jann Horn <jannh@google.com>,
	Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH v5 06/35] mm, slub: unify cmpxchg_double_slab() and __cmpxchg_double_slab()
Date: Mon, 23 Aug 2021 16:57:57 +0200
Message-ID: <20210823145826.3857-7-vbabka@suse.cz>
In-Reply-To: <20210823145826.3857-1-vbabka@suse.cz>

These functions differ only in irq disabling in the slow path. We can
create a common function with an extra bool parameter to control the
irq disabling. As the functions are inline and the parameter compile-time
constant, there will be no runtime overhead due to this change.

Also change the DEBUG_VM based irqs disable assert to the more standard
lockdep_assert based one.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Christoph Lameter <cl@linux.com>
---
 mm/slub.c | 62 ++++++++++++++++++++++--------------------------------------
 1 file changed, 24 insertions(+), 38 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 79e53303844c..e1c4e934c620 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -371,13 +371,13 @@ static __always_inline void slab_unlock(struct page *page)
 	__bit_spin_unlock(PG_locked, &page->flags);
 }
 
-/* Interrupts must be disabled (for the fallback code to work right) */
-static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
+static inline bool ___cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 		void *freelist_old, unsigned long counters_old,
 		void *freelist_new, unsigned long counters_new,
-		const char *n)
+		const char *n, bool disable_irqs)
 {
-	VM_BUG_ON(!irqs_disabled());
+	if (!disable_irqs)
+		lockdep_assert_irqs_disabled();
 #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
     defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
 	if (s->flags & __CMPXCHG_DOUBLE) {
@@ -388,15 +388,23 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page
 	} else
 #endif
 	{
+		unsigned long flags;
+
+		if (disable_irqs)
+			local_irq_save(flags);
 		slab_lock(page);
 		if (page->freelist == freelist_old &&
 					page->counters == counters_old) {
 			page->freelist = freelist_new;
 			page->counters = counters_new;
 			slab_unlock(page);
+			if (disable_irqs)
+				local_irq_restore(flags);
 			return true;
 		}
 		slab_unlock(page);
+		if (disable_irqs)
+			local_irq_restore(flags);
 	}
 
 	cpu_relax();
@@ -409,45 +417,23 @@ static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page
 	return false;
 }
 
-static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
+/* Interrupts must be disabled (for the fallback code to work right) */
+static inline bool __cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 		void *freelist_old, unsigned long counters_old,
 		void *freelist_new, unsigned long counters_new,
 		const char *n)
 {
-#if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
-    defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
-	if (s->flags & __CMPXCHG_DOUBLE) {
-		if (cmpxchg_double(&page->freelist, &page->counters,
-				   freelist_old, counters_old,
-				   freelist_new, counters_new))
-			return true;
-	} else
-#endif
-	{
-		unsigned long flags;
-
-		local_irq_save(flags);
-		slab_lock(page);
-		if (page->freelist == freelist_old &&
-					page->counters == counters_old) {
-			page->freelist = freelist_new;
-			page->counters = counters_new;
-			slab_unlock(page);
-			local_irq_restore(flags);
-			return true;
-		}
-		slab_unlock(page);
-		local_irq_restore(flags);
-	}
-
-	cpu_relax();
-	stat(s, CMPXCHG_DOUBLE_FAIL);
-
-#ifdef SLUB_DEBUG_CMPXCHG
-	pr_info("%s %s: cmpxchg double redo ", n, s->name);
-#endif
+	return ___cmpxchg_double_slab(s, page, freelist_old, counters_old,
+				      freelist_new, counters_new, n, false);
+}
 
-	return false;
+static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
+		void *freelist_old, unsigned long counters_old,
+		void *freelist_new, unsigned long counters_new,
+		const char *n)
+{
+	return ___cmpxchg_double_slab(s, page, freelist_old, counters_old,
+				      freelist_new, counters_new, n, true);
 }
 
 #ifdef CONFIG_SLUB_DEBUG
-- 
2.32.0
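
The commit message's "no runtime overhead" claim rests on a standard C
idiom: a static inline function taking a bool that every caller passes as
a literal constant, so the compiler folds the dead branches away and each
thin wrapper compiles to the same code as a hand-written specialized
function. Below is a minimal userspace sketch of that idiom, not kernel
code and not part of the patch; all names (do_update, update_locked,
update_irqsave) are made up, and the printf calls merely stand in for
local_irq_save()/local_irq_restore().

/*
 * Sketch of the constant-folded bool-parameter idiom used by
 * ___cmpxchg_double_slab(). Hypothetical names throughout.
 */
#include <stdbool.h>
#include <stdio.h>

static int value;

static inline bool do_update(int old, int new, bool disable_irqs)
{
	if (disable_irqs)
		printf("save irq flags\n");	/* stand-in for local_irq_save() */

	if (value == old) {
		value = new;
		if (disable_irqs)
			printf("restore irq flags\n");	/* local_irq_restore() */
		return true;
	}

	if (disable_irqs)
		printf("restore irq flags\n");
	return false;
}

/*
 * Thin wrappers, mirroring how __cmpxchg_double_slab() and
 * cmpxchg_double_slab() now both call ___cmpxchg_double_slab()
 * with a compile-time-constant last argument.
 */
static inline bool update_locked(int old, int new)
{
	return do_update(old, new, false);
}

static inline bool update_irqsave(int old, int new)
{
	return do_update(old, new, true);
}

int main(void)
{
	printf("%d\n", update_irqsave(0, 42));	/* 1: succeeds, saves/restores */
	printf("%d\n", update_locked(42, 7));	/* 1: no irq branches remain */
	return 0;
}

Because the bool is a literal at every call site and everything is inline,
the generated code for update_locked() contains no trace of the irq
handling, which is what lets the patch unify the two SLUB variants without
penalizing the already-irq-disabled path.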
Thread overview: 37+ messages

2021-08-23 14:57 [PATCH v5 00/35] SLUB: reduce irq disabled scope and make it RT compatible Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 01/35] mm, slub: don't call flush_all() from slab_debug_trace_open() Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 02/35] mm, slub: allocate private object map for debugfs listings Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 03/35] mm, slub: allocate private object map for validate_slab_cache() Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 04/35] mm, slub: don't disable irq for debug_check_no_locks_freed() Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 05/35] mm, slub: remove redundant unfreeze_partials() from put_cpu_partial() Vlastimil Babka
2021-08-23 14:57 ` Vlastimil Babka [this message]
2021-08-23 14:57 ` [PATCH v5 07/35] mm, slub: extract get_partial() from new_slab_objects() Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 08/35] mm, slub: dissolve new_slab_objects() into ___slab_alloc() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 09/35] mm, slub: return slab page from get_partial() and set c->page afterwards Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 10/35] mm, slub: restructure new page checks in ___slab_alloc() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 11/35] mm, slub: simplify kmem_cache_cpu and tid setup Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 12/35] mm, slub: move disabling/enabling irqs to ___slab_alloc() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 13/35] mm, slub: do initial checks in ___slab_alloc() with irqs enabled Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 14/35] mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 15/35] mm, slub: restore irqs around calling new_slab() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 16/35] mm, slub: validate slab from partial list or page allocator before making it cpu slab Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 17/35] mm, slub: check new pages with restored irqs Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 18/35] mm, slub: stop disabling irqs around get_partial() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 19/35] mm, slub: move reset of c->page and freelist out of deactivate_slab() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 20/35] mm, slub: make locking in deactivate_slab() irq-safe Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 21/35] mm, slub: call deactivate_slab() without disabling irqs Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 22/35] mm, slub: move irq control into unfreeze_partials() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 23/35] mm, slub: discard slabs in unfreeze_partials() without irqs disabled Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 24/35] mm, slub: detach whole partial list at once in unfreeze_partials() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 25/35] mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 26/35] mm, slub: only disable irq with spin_lock in __unfreeze_partials() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 27/35] mm, slub: don't disable irqs in slub_cpu_dead() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 28/35] mm, slab: make flush_slab() possible to call with irqs enabled Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 29/35] mm: slub: Move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 30/35] mm: slub: Make object_map_lock a raw_spinlock_t Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 31/35] mm, slub: optionally save/restore irqs in slab_[un]lock()/ Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 32/35] mm, slub: make slab_lock() disable irqs with PREEMPT_RT Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 33/35] mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 34/35] mm, slub: use migrate_disable() on PREEMPT_RT Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 35/35] mm, slub: convert kmem_cpu_slab protection to local_lock Vlastimil Babka
2021-08-23 15:11 ` [PATCH v5 00/35] SLUB: reduce irq disabled scope and make it RT compatible Sebastian Andrzej Siewior