[merged] mm-slub-use-migrate_disable-on-preempt_rt.patch removed from -mm tree

From: akpm @ 2021-09-09 21:01 UTC
To: bigeasy, brouer, cl, efault, iamjoonsoo.kim, jannh, mgorman,
mm-commits, penberg, quic_qiancai, rientjes, tglx, vbabka
The patch titled
     Subject: mm, slub: use migrate_disable() on PREEMPT_RT
has been removed from the -mm tree.  Its filename was
     mm-slub-use-migrate_disable-on-preempt_rt.patch

This patch was dropped because it was merged into mainline or a subsystem tree
------------------------------------------------------
From: Vlastimil Babka <vbabka@suse.cz>
Subject: mm, slub: use migrate_disable() on PREEMPT_RT
We currently use preempt_disable() (directly or via get_cpu_ptr()) to
stabilize the pointer to kmem_cache_cpu. On PREEMPT_RT this would be
incompatible with the list_lock spinlock. We can use migrate_disable()
instead, but that increases overhead on !PREEMPT_RT as it's an
unconditional function call.
In order to get the best available mechanism on both PREEMPT_RT and
!PREEMPT_RT, introduce private slub_get_cpu_ptr() and slub_put_cpu_ptr()
wrappers and use them.
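
[Editorial illustration, not part of the patch: the compile-time selection the
wrappers perform can be modeled in plain userspace C.  All names below
(MODEL_PREEMPT_RT, model_*) are made up; the real code selects between
preempt_disable() (inline) and migrate_disable() (an unconditional function
call) based on CONFIG_PREEMPT_RT.

#include <stdio.h>

/*
 * Userspace model of the slub_get_cpu_ptr()/slub_put_cpu_ptr() idea:
 * use the cheap inline mechanism when it is sufficient, fall back to
 * the function-call mechanism otherwise.  Build with -DMODEL_PREEMPT_RT
 * to get the second variant (statement expressions are a GCC/Clang
 * extension, as in the kernel).
 */
#ifndef MODEL_PREEMPT_RT
#define model_get_cpu_ptr(var)	(puts("inline pin"), (var))
#define model_put_cpu_ptr(var)	((void)(var), puts("inline unpin"))
#else
static void model_migrate_disable(void) { puts("function-call pin"); }
static void model_migrate_enable(void)  { puts("function-call unpin"); }
#define model_get_cpu_ptr(var)		\
({					\
	model_migrate_disable();	\
	(var);				\
})
#define model_put_cpu_ptr(var)		\
do {					\
	(void)(var);			\
	model_migrate_enable();		\
} while (0)
#endif

int main(void)
{
	int data = 42;		/* stand-in for the per-CPU kmem_cache_cpu */
	int *p = model_get_cpu_ptr(&data);

	printf("%d\n", *p);
	model_put_cpu_ptr(&data);
	return 0;
}
]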
Link: https://lkml.kernel.org/r/20210904105003.11688-33-vbabka@suse.cz
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Cc: Christoph Lameter <cl@linux.com>
Cc: David Rientjes <rientjes@google.com>
Cc: Jann Horn <jannh@google.com>
Cc: Jesper Dangaard Brouer <brouer@redhat.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Mel Gorman <mgorman@techsingularity.net>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: Qian Cai <quic_qiancai@quicinc.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Cc: Thomas Gleixner <tglx@linutronix.de>
Signed-off-by: Andrew Morton <akpm@linux-foundation.org>
---
mm/slub.c | 39 ++++++++++++++++++++++++++++++---------
1 file changed, 30 insertions(+), 9 deletions(-)
--- a/mm/slub.c~mm-slub-use-migrate_disable-on-preempt_rt
+++ a/mm/slub.c
@@ -118,6 +118,26 @@
  *			the fast path and disables lockless freelists.
  */
 
+/*
+ * We could simply use migrate_disable()/enable() but as long as it's a
+ * function call even on !PREEMPT_RT, use inline preempt_disable() there.
+ */
+#ifndef CONFIG_PREEMPT_RT
+#define slub_get_cpu_ptr(var)	get_cpu_ptr(var)
+#define slub_put_cpu_ptr(var)	put_cpu_ptr(var)
+#else
+#define slub_get_cpu_ptr(var)		\
+({					\
+	migrate_disable();		\
+	this_cpu_ptr(var);		\
+})
+#define slub_put_cpu_ptr(var)		\
+do {					\
+	(void)(var);			\
+	migrate_enable();		\
+} while (0)
+#endif
+
 #ifdef CONFIG_SLUB_DEBUG
 #ifdef CONFIG_SLUB_DEBUG_ON
 DEFINE_STATIC_KEY_TRUE(slub_debug_enabled);
@@ -2852,7 +2872,7 @@ redo:
 	if (unlikely(!pfmemalloc_match_unsafe(page, gfpflags)))
 		goto deactivate_slab;
 
-	/* must check again c->page in case IRQ handler changed it */
+	/* must check again c->page in case we got preempted and it changed */
 	local_irq_save(flags);
 	if (unlikely(page != c->page)) {
 		local_irq_restore(flags);
@@ -2911,7 +2931,8 @@ new_slab:
 		}
 		if (unlikely(!slub_percpu_partial(c))) {
 			local_irq_restore(flags);
-			goto new_objects; /* stolen by an IRQ handler */
+			/* we were preempted and partial list got empty */
+			goto new_objects;
 		}
 
 		page = c->page = slub_percpu_partial(c);
@@ -2927,9 +2948,9 @@ new_objects:
 	if (freelist)
 		goto check_new_page;
 
-	put_cpu_ptr(s->cpu_slab);
+	slub_put_cpu_ptr(s->cpu_slab);
 	page = new_slab(s, gfpflags, node);
-	c = get_cpu_ptr(s->cpu_slab);
+	c = slub_get_cpu_ptr(s->cpu_slab);
 
 	if (unlikely(!page)) {
 		slab_out_of_memory(s, gfpflags, node);
@@ -3012,12 +3033,12 @@ static void *__slab_alloc(struct kmem_ca
 	 * cpu before disabling preemption. Need to reload cpu area
 	 * pointer.
 	 */
-	c = get_cpu_ptr(s->cpu_slab);
+	c = slub_get_cpu_ptr(s->cpu_slab);
 #endif
 
 	p = ___slab_alloc(s, gfpflags, node, addr, c);
 #ifdef CONFIG_PREEMPT_COUNT
-	put_cpu_ptr(s->cpu_slab);
+	slub_put_cpu_ptr(s->cpu_slab);
 #endif
 	return p;
 }
@@ -3546,7 +3567,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 	 * IRQs, which protects against PREEMPT and interrupts
 	 * handlers invoking normal fastpath.
 	 */
-	c = get_cpu_ptr(s->cpu_slab);
+	c = slub_get_cpu_ptr(s->cpu_slab);
 	local_irq_disable();
 
 	for (i = 0; i < size; i++) {
@@ -3592,7 +3613,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 	}
 	c->tid = next_tid(c->tid);
 	local_irq_enable();
-	put_cpu_ptr(s->cpu_slab);
+	slub_put_cpu_ptr(s->cpu_slab);
 
 	/*
 	 * memcg and kmem_cache debug support and memory initialization.
@@ -3602,7 +3623,7 @@ int kmem_cache_alloc_bulk(struct kmem_ca
 			slab_want_init_on_alloc(flags, s));
 	return i;
 error:
-	put_cpu_ptr(s->cpu_slab);
+	slub_put_cpu_ptr(s->cpu_slab);
 	slab_post_alloc_hook(s, objcg, flags, i, p, false);
 	__kmem_cache_free_bulk(s, i, p);
 	return 0;
_
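
[Editorial aside: the PREEMPT_RT variant of slub_get_cpu_ptr() above relies
on a GCC/Clang statement expression, ({ ... }), which runs its statements in
order and evaluates to the last one; that is how the macro can both call
migrate_disable() and still yield the per-CPU pointer.  A minimal runnable
userspace illustration of just that construct (the macro name is made up):

#include <stdio.h>

/* Evaluates to its last expression, after the side effect runs. */
#define log_then_bump(x)		\
({					\
	puts("side effect first");	\
	(x) + 1;			\
})

int main(void)
{
	int v = log_then_bump(41);	/* prints, then v == 42 */
	printf("v = %d\n", v);
	return 0;
}
]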
Patches currently in -mm which might be from vbabka@suse.cz are