From: Vlastimil Babka <vbabka@suse.cz>
To: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Rongwei Wang <rongwei.wang@linux.alibaba.com>,
Christoph Lameter <cl@linux.com>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>,
David Rientjes <rientjes@google.com>,
Pekka Enberg <penberg@kernel.org>,
Hyeonggon Yoo <42.hyeyoo@gmail.com>,
Roman Gushchin <roman.gushchin@linux.dev>,
linux-mm@kvack.org, Thomas Gleixner <tglx@linutronix.de>,
Mike Galbraith <efault@gmx.de>,
Andrew Morton <akpm@linux-foundation.org>
Subject: Re: [PATCH 6/5] slub: Make PREEMPT_RT support less convoluted
Date: Thu, 25 Aug 2022 10:41:16 +0200
Message-ID: <2903a7a4-7ef1-92e0-05df-ef7cf2fa65b1@suse.cz>
In-Reply-To: <YwcqCCJM1oLREWZc@linutronix.de>
On 8/25/22 09:51, Sebastian Andrzej Siewior wrote:
> From: Thomas Gleixner <tglx@linutronix.de>
>
> The slub code already has a few helpers that depend on PREEMPT_RT. Add a
> few more and get rid of the CONFIG_PREEMPT_RT conditionals all over the place.
>
> No functional change.
>
> Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Christoph Lameter <cl@linux.com>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: Vlastimil Babka <vbabka@suse.cz>
> Cc: linux-mm@kvack.org
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> ---
>
> Vlastimil, does it work for you to include this patch in your series? It
> now depends on your series :) It has the USE_LOCKLESS_FAST_PATH() that
> Linus asked about, so we should be good.
Sure, I'll add it, thanks!
>
> mm/slub.c | 56 ++++++++++++++++++++++++--------------------------------
> 1 file changed, 24 insertions(+), 32 deletions(-)
>
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -104,9 +104,11 @@
> * except the stat counters. This is a percpu structure manipulated only by
> * the local cpu, so the lock protects against being preempted or interrupted
> * by an irq. Fast path operations rely on lockless operations instead.
> - * On PREEMPT_RT, the local lock does not actually disable irqs (and thus
> - * prevent the lockless operations), so fastpath operations also need to take
> - * the lock and are no longer lockless.
> + *
> + * On PREEMPT_RT, the local lock disables neither interrupts nor preemption,
> + * which means the lockless fastpath cannot be used as it might interfere
> + * with an in-progress slow path operation. In this case the local lock is
> + * always taken, but it still uses the freelist for the common operations.
> *
> * lockless fastpaths
> *
> @@ -167,8 +169,9 @@
> * function call even on !PREEMPT_RT, use inline preempt_disable() there.
> */
> #ifndef CONFIG_PREEMPT_RT
> -#define slub_get_cpu_ptr(var) get_cpu_ptr(var)
> -#define slub_put_cpu_ptr(var) put_cpu_ptr(var)
> +#define slub_get_cpu_ptr(var) get_cpu_ptr(var)
> +#define slub_put_cpu_ptr(var) put_cpu_ptr(var)
> +#define USE_LOCKLESS_FAST_PATH() (true)
> #else
> #define slub_get_cpu_ptr(var) \
> ({ \
> @@ -180,6 +183,7 @@ do { \
> (void)(var); \
> migrate_enable(); \
> } while (0)
> +#define USE_LOCKLESS_FAST_PATH() (false)
> #endif
>
> #ifdef CONFIG_SLUB_DEBUG
> @@ -474,7 +478,7 @@ static inline bool __cmpxchg_double_slab
> void *freelist_new, unsigned long counters_new,
> const char *n)
> {
> - if (!IS_ENABLED(CONFIG_PREEMPT_RT))
> + if (USE_LOCKLESS_FAST_PATH())
> lockdep_assert_irqs_disabled();
> #if defined(CONFIG_HAVE_CMPXCHG_DOUBLE) && \
> defined(CONFIG_HAVE_ALIGNED_STRUCT_PAGE)
> @@ -3287,14 +3291,8 @@ static __always_inline void *slab_alloc_
>
> object = c->freelist;
> slab = c->slab;
> - /*
> - * We cannot use the lockless fastpath on PREEMPT_RT because if a
> - * slowpath has taken the local_lock_irqsave(), it is not protected
> - * against a fast path operation in an irq handler. So we need to take
> - * the slow path which uses local_lock. It is still relatively fast if
> - * there is a suitable cpu freelist.
> - */
> - if (IS_ENABLED(CONFIG_PREEMPT_RT) ||
> +
> + if (!USE_LOCKLESS_FAST_PATH() ||
> unlikely(!object || !slab || !node_match(slab, node))) {
> object = __slab_alloc(s, gfpflags, node, addr, c);
> } else {
> @@ -3554,6 +3552,7 @@ static __always_inline void do_slab_free
> void *tail_obj = tail ? : head;
> struct kmem_cache_cpu *c;
> unsigned long tid;
> + void **freelist;
>
> redo:
> /*
> @@ -3568,9 +3567,13 @@ static __always_inline void do_slab_free
> /* Same with comment on barrier() in slab_alloc_node() */
> barrier();
>
> - if (likely(slab == c->slab)) {
> -#ifndef CONFIG_PREEMPT_RT
> - void **freelist = READ_ONCE(c->freelist);
> + if (unlikely(slab != c->slab)) {
> + __slab_free(s, slab, head, tail_obj, cnt, addr);
> + return;
> + }
> +
> + if (USE_LOCKLESS_FAST_PATH()) {
> + freelist = READ_ONCE(c->freelist);
>
> set_freepointer(s, tail_obj, freelist);
>
> @@ -3582,16 +3585,8 @@ static __always_inline void do_slab_free
> note_cmpxchg_failure("slab_free", s, tid);
> goto redo;
> }
> -#else /* CONFIG_PREEMPT_RT */
> - /*
> - * We cannot use the lockless fastpath on PREEMPT_RT because if
> - * a slowpath has taken the local_lock_irqsave(), it is not
> - * protected against a fast path operation in an irq handler. So
> - * we need to take the local_lock. We shouldn't simply defer to
> - * __slab_free() as that wouldn't use the cpu freelist at all.
> - */
> - void **freelist;
> -
> + } else {
> + /* Update the free list under the local lock */
> local_lock(&s->cpu_slab->lock);
> c = this_cpu_ptr(s->cpu_slab);
> if (unlikely(slab != c->slab)) {
> @@ -3606,11 +3601,8 @@ static __always_inline void do_slab_free
> c->tid = next_tid(tid);
>
> local_unlock(&s->cpu_slab->lock);
> -#endif
> - stat(s, FREE_FASTPATH);
> - } else
> - __slab_free(s, slab, head, tail_obj, cnt, addr);
> -
> + }
> + stat(s, FREE_FASTPATH);
> }
>
> static __always_inline void slab_free(struct kmem_cache *s, struct slab *slab,
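
For anyone following the discussion without the tree at hand, below is a rough
standalone userspace sketch of the shape this gives the free path: a single
compile-time predicate selects between the lockless cmpxchg fast path and the
locked fallback, instead of scattering #ifdef CONFIG_PREEMPT_RT blocks around.
All demo_* names and the pthread/stdatomic plumbing are made up for
illustration; this is not the kernel implementation.

/* build with: gcc -std=c11 -O2 demo.c -o demo -lpthread
 * add -DDEMO_RT to exercise the locked path instead of the lockless one
 */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

#ifndef DEMO_RT
#define USE_LOCKLESS_FAST_PATH()	(1)	/* !RT: cmpxchg on the freelist */
#else
#define USE_LOCKLESS_FAST_PATH()	(0)	/* RT: always take the lock */
#endif

struct demo_obj { struct demo_obj *next; };

static _Atomic(struct demo_obj *) freelist;	/* head of the free objects */
static pthread_mutex_t demo_lock = PTHREAD_MUTEX_INITIALIZER;

static void demo_free(struct demo_obj *obj)
{
	/* the branch is resolved at build time, so only one path ever exists */
	if (USE_LOCKLESS_FAST_PATH()) {
		/* lockless push: retry the cmpxchg until the head is unchanged */
		struct demo_obj *old = atomic_load(&freelist);

		do {
			obj->next = old;
		} while (!atomic_compare_exchange_weak(&freelist, &old, obj));
	} else {
		/* locked fallback: still uses the freelist, just under the lock */
		pthread_mutex_lock(&demo_lock);
		obj->next = atomic_load(&freelist);
		atomic_store(&freelist, obj);
		pthread_mutex_unlock(&demo_lock);
	}
}

int main(void)
{
	struct demo_obj a, b;

	demo_free(&a);
	demo_free(&b);
	printf("freelist head: %p\n", (void *)atomic_load(&freelist));
	return 0;
}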