From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org, Christoph Lameter <cl@linux.com>,
	David Rientjes <rientjes@google.com>, Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>, linux-kernel@vger.kernel.org,
	Mike Galbraith <efault@gmx.de>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Mel Gorman <mgorman@techsingularity.net>,
	Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH v6 28/33] mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context
Date: Sat, 4 Sep 2021 12:49:58 +0200
Message-ID: <20210904105003.11688-29-vbabka@suse.cz>
In-Reply-To: <20210904105003.11688-1-vbabka@suse.cz>

From: Sebastian Andrzej Siewior <bigeasy@linutronix.de>

flush_all() flushes a specific SLAB cache on each CPU (where the cache
is present). The deactivate_slab()/__free_slab() invocation happens
within an IPI handler and is problematic for PREEMPT_RT.

The flush operation is neither frequent nor a hot path, so the per-CPU
flush can be moved to a workqueue instead.

Because a workqueue handler, unlike an IPI handler, does not run with
irqs disabled, flush_slab() now has to disable them itself while
working with the kmem_cache_cpu fields. deactivate_slab() is safe to
call with irqs enabled.

[vbabka@suse.cz: adapt to new SLUB changes]

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slab_common.c |  2 ++
 mm/slub.c        | 94 +++++++++++++++++++++++++++++++++++++++---------
 2 files changed, 80 insertions(+), 16 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index 1c673c323baf..ec2bb0beed75 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -502,6 +502,7 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	if (unlikely(!s))
 		return;
 
+	cpus_read_lock();
 	mutex_lock(&slab_mutex);
 
 	s->refcount--;
@@ -516,6 +517,7 @@ void kmem_cache_destroy(struct kmem_cache *s)
 	}
 out_unlock:
 	mutex_unlock(&slab_mutex);
+	cpus_read_unlock();
 }
 EXPORT_SYMBOL(kmem_cache_destroy);
 
diff --git a/mm/slub.c b/mm/slub.c
index fa9a366d2d9c..b7f8b9d34e46 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2496,16 +2496,25 @@ static void put_cpu_partial(struct kmem_cache *s, struct page *page, int drain)
 
 static inline void flush_slab(struct kmem_cache *s, struct kmem_cache_cpu *c)
 {
-	void *freelist = c->freelist;
-	struct page *page = c->page;
+	unsigned long flags;
+	struct page *page;
+	void *freelist;
+
+	local_irq_save(flags);
+
+	page = c->page;
+	freelist = c->freelist;
 
 	c->page = NULL;
 	c->freelist = NULL;
 	c->tid = next_tid(c->tid);
 
-	deactivate_slab(s, page, freelist);
+	local_irq_restore(flags);
 
-	stat(s, CPUSLAB_FLUSH);
+	if (page) {
+		deactivate_slab(s, page, freelist);
+		stat(s, CPUSLAB_FLUSH);
+	}
 }
 
 static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
@@ -2526,15 +2535,27 @@ static inline void __flush_cpu_slab(struct kmem_cache *s, int cpu)
 	unfreeze_partials_cpu(s, c);
 }
 
+struct slub_flush_work {
+	struct work_struct work;
+	struct kmem_cache *s;
+	bool skip;
+};
+
 /*
  * Flush cpu slab.
  *
- * Called from IPI handler with interrupts disabled.
+ * Called from CPU work handler with migration disabled.
  */
-static void flush_cpu_slab(void *d)
+static void flush_cpu_slab(struct work_struct *w)
 {
-	struct kmem_cache *s = d;
-	struct kmem_cache_cpu *c = this_cpu_ptr(s->cpu_slab);
+	struct kmem_cache *s;
+	struct kmem_cache_cpu *c;
+	struct slub_flush_work *sfw;
+
+	sfw = container_of(w, struct slub_flush_work, work);
+
+	s = sfw->s;
+	c = this_cpu_ptr(s->cpu_slab);
 
 	if (c->page)
 		flush_slab(s, c);
@@ -2542,17 +2563,51 @@ static void flush_cpu_slab(void *d)
 	unfreeze_partials(s);
 }
 
-static bool has_cpu_slab(int cpu, void *info)
+static bool has_cpu_slab(int cpu, struct kmem_cache *s)
 {
-	struct kmem_cache *s = info;
 	struct kmem_cache_cpu *c = per_cpu_ptr(s->cpu_slab, cpu);
 
 	return c->page || slub_percpu_partial(c);
 }
 
+static DEFINE_MUTEX(flush_lock);
+static DEFINE_PER_CPU(struct slub_flush_work, slub_flush);
+
+static void flush_all_cpus_locked(struct kmem_cache *s)
+{
+	struct slub_flush_work *sfw;
+	unsigned int cpu;
+
+	lockdep_assert_cpus_held();
+	mutex_lock(&flush_lock);
+
+	for_each_online_cpu(cpu) {
+		sfw = &per_cpu(slub_flush, cpu);
+		if (!has_cpu_slab(cpu, s)) {
+			sfw->skip = true;
+			continue;
+		}
+		INIT_WORK(&sfw->work, flush_cpu_slab);
+		sfw->skip = false;
+		sfw->s = s;
+		schedule_work_on(cpu, &sfw->work);
+	}
+
+	for_each_online_cpu(cpu) {
+		sfw = &per_cpu(slub_flush, cpu);
+		if (sfw->skip)
+			continue;
+		flush_work(&sfw->work);
+	}
+
+	mutex_unlock(&flush_lock);
+}
+
 static void flush_all(struct kmem_cache *s)
 {
-	on_each_cpu_cond(has_cpu_slab, flush_cpu_slab, s, 1);
+	cpus_read_lock();
+	flush_all_cpus_locked(s);
+	cpus_read_unlock();
 }
 
 /*
@@ -4097,7 +4152,7 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 	int node;
 	struct kmem_cache_node *n;
 
-	flush_all(s);
+	flush_all_cpus_locked(s);
 	/* Attempt to free all objects */
 	for_each_kmem_cache_node(s, node, n) {
 		free_partial(s, n);
@@ -4373,7 +4428,7 @@ EXPORT_SYMBOL(kfree);
  * being allocated from last increasing the chance that the last objects
  * are freed in them.
  */
-int __kmem_cache_shrink(struct kmem_cache *s)
+static int __kmem_cache_do_shrink(struct kmem_cache *s)
 {
 	int node;
 	int i;
@@ -4385,7 +4440,6 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 	unsigned long flags;
 	int ret = 0;
 
-	flush_all(s);
 	for_each_kmem_cache_node(s, node, n) {
 		INIT_LIST_HEAD(&discard);
 		for (i = 0; i < SHRINK_PROMOTE_MAX; i++)
@@ -4435,13 +4489,21 @@ int __kmem_cache_shrink(struct kmem_cache *s)
 	return ret;
 }
 
+int __kmem_cache_shrink(struct kmem_cache *s)
+{
+	flush_all(s);
+	return __kmem_cache_do_shrink(s);
+}
+
 static int slab_mem_going_offline_callback(void *arg)
 {
	struct kmem_cache *s;
 
 	mutex_lock(&slab_mutex);
-	list_for_each_entry(s, &slab_caches, list)
-		__kmem_cache_shrink(s);
+	list_for_each_entry(s, &slab_caches, list) {
+		flush_all_cpus_locked(s);
+		__kmem_cache_do_shrink(s);
+	}
 	mutex_unlock(&slab_mutex);
 
 	return 0;
-- 
2.33.0
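
The core of the patch is a reusable pattern: replace on_each_cpu_cond()
(IPI, hardirq context) with per-CPU work items queued via
schedule_work_on() and then waited for with flush_work(), all under
cpus_read_lock() so the online CPU mask stays stable. Below is a
minimal stand-alone sketch of that pattern, not SLUB code; the helper
names (demo_flush_fn, demo_cpu_has_work, demo_do_flush) are
illustrative stand-ins, while the workqueue and CPU-hotplug APIs are
the real kernel ones the patch uses.

	#include <linux/cpu.h>
	#include <linux/mutex.h>
	#include <linux/percpu.h>
	#include <linux/workqueue.h>

	struct demo_flush_work {
		struct work_struct work;
		void *data;	/* payload for this flush, e.g. a cache */
		bool skip;	/* true if this CPU had nothing to do */
	};

	static DEFINE_MUTEX(demo_flush_lock);	/* serialize whole-machine flushes */
	static DEFINE_PER_CPU(struct demo_flush_work, demo_flush);

	/* Illustrative per-CPU predicate and handler body. */
	static bool demo_cpu_has_work(unsigned int cpu, void *data) { return true; }
	static void demo_do_flush(void *data) { }

	/*
	 * Runs in process context on the target CPU. Unlike an IPI
	 * handler, irqs are enabled here; anything touching state that
	 * is also modified from irq context must disable irqs itself,
	 * which is why the patch adds local_irq_save() to flush_slab().
	 */
	static void demo_flush_fn(struct work_struct *w)
	{
		struct demo_flush_work *dfw =
			container_of(w, struct demo_flush_work, work);

		demo_do_flush(dfw->data);
	}

	static void demo_flush_all(void *data)
	{
		struct demo_flush_work *dfw;
		unsigned int cpu;

		cpus_read_lock();	/* no CPUs appear/disappear below */
		mutex_lock(&demo_flush_lock);

		/* Pass 1: queue work on every CPU that has something to do. */
		for_each_online_cpu(cpu) {
			dfw = &per_cpu(demo_flush, cpu);
			if (!demo_cpu_has_work(cpu, data)) {
				dfw->skip = true;
				continue;
			}
			INIT_WORK(&dfw->work, demo_flush_fn);
			dfw->skip = false;
			dfw->data = data;
			schedule_work_on(cpu, &dfw->work);
		}

		/* Pass 2: wait, preserving the synchronous semantics that
		 * on_each_cpu_cond() gave the old flush_all(). */
		for_each_online_cpu(cpu) {
			dfw = &per_cpu(demo_flush, cpu);
			if (!dfw->skip)
				flush_work(&dfw->work);
		}

		mutex_unlock(&demo_flush_lock);
		cpus_read_unlock();
	}

The two-pass structure matters: queuing all work items first and only
then waiting lets the flushes on different CPUs run concurrently
instead of serializing one CPU at a time.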