From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
Christoph Lameter <cl@linux.com>,
David Rientjes <rientjes@google.com>,
Pekka Enberg <penberg@kernel.org>,
Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
Thomas Gleixner <tglx@linutronix.de>,
Mel Gorman <mgorman@techsingularity.net>,
Jesper Dangaard Brouer <brouer@redhat.com>,
Peter Zijlstra <peterz@infradead.org>,
Jann Horn <jannh@google.com>
Subject: Re: [RFC 26/26] mm, slub: convert kmem_cpu_slab protection to local_lock
Date: Tue, 25 May 2021 18:11:08 +0200 [thread overview]
Message-ID: <d6adf5e1-e80f-e198-67ef-f0917bda0a8f@suse.cz> (raw)
In-Reply-To: <20210524233946.20352-27-vbabka@suse.cz>
On 5/25/21 1:39 AM, Vlastimil Babka wrote:
> Embed local_lock into struct kmem_cpu_slab and use the irq-safe versions of
> local_lock instead of plain local_irq_save/restore. On !PREEMPT_RT that's
> equivalent, with better lockdep visibility. On PREEMPT_RT that means better
> preemption.
>
> Also update the comment about locking scheme in SLUB to reflect changes done
> by this series.
>
> Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Hm, just realized that this won't work on RT, because unlike a true irq disable, the
local_lock on RT won't protect against an irq handler doing alloc/free and changing
the critical fields in kmem_cache_cpu via the lockless fastpath, which doesn't take
the local_lock.

Good that I've put the local_lock patch last, I guess. The reduced irq-disabled
sections should still help.
> ---
> include/linux/slub_def.h | 2 +
> mm/slub.c | 90 ++++++++++++++++++++++++++++------------
> 2 files changed, 66 insertions(+), 26 deletions(-)
>
> diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
> index dcde82a4434c..b5bcac29b979 100644
> --- a/include/linux/slub_def.h
> +++ b/include/linux/slub_def.h
> @@ -10,6 +10,7 @@
> #include <linux/kfence.h>
> #include <linux/kobject.h>
> #include <linux/reciprocal_div.h>
> +#include <linux/local_lock.h>
>
> enum stat_item {
> ALLOC_FASTPATH, /* Allocation from cpu slab */
> @@ -41,6 +42,7 @@ enum stat_item {
> NR_SLUB_STAT_ITEMS };
>
> struct kmem_cache_cpu {
> + local_lock_t lock; /* Protects the fields below except stat */
> void **freelist; /* Pointer to next available object */
> unsigned long tid; /* Globally unique transaction id */
> struct page *page; /* The slab from which we are allocating */
> diff --git a/mm/slub.c b/mm/slub.c
> index 8818c210cb97..5455c6bda997 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -43,13 +43,22 @@
> /*
> * Lock order:
> * 1. slab_mutex (Global Mutex)
> - * 2. node->list_lock
> + * 2. node->list_lock (Spinlock)
> + * OR
> + * kmem_cache->cpu_slab->lock (Local lock)
> * 3. slab_lock(page) (Only on some arches and for debugging)
> + * 4. object_map_lock (Only for debugging)
> *
> * slab_mutex
> *
> * The role of the slab_mutex is to protect the list of all the slabs
> * and to synchronize major metadata changes to slab cache structures.
> + * Also synchronizes memory hotplug callbacks.
> + *
> + * slab_lock
> + *
> + * The slab_lock is a wrapper around the page lock, thus it is a bit
> + * spinlock.
> *
> * The slab_lock is only used for debugging and on arches that do not
> * have the ability to do a cmpxchg_double. It only protects:
> @@ -65,6 +74,8 @@
> * froze the slab is the only one that can retrieve the objects from the
> * page's freelist.
> *
> + * list_lock
> + *
> * The list_lock protects the partial and full list on each node and
> * the partial slab counter. If taken then no new slabs may be added or
> * removed from the lists nor make the number of partial slabs be modified.
> @@ -76,10 +87,33 @@
> * slabs, operations can continue without any centralized lock. F.e.
> * allocating a long series of objects that fill up slabs does not require
> * the list lock.
> - * Interrupts are disabled during allocation and deallocation in order to
> - * make the slab allocator safe to use in the context of an irq. In addition
> - * interrupts are disabled to ensure that the processor does not change
> - * while handling per_cpu slabs, due to kernel preemption.
> + *
> + * cpu_slab->lock local lock
> + *
> + * This lock protects slowpath manipulation of all kmem_cache_cpu fields
> + * except the stat counters. This is a percpu structure manipulated only by
> + * the local cpu, so the lock protects against being preempted or interrupted
> + * by an irq. Fast path operations rely on lockless operations instead.
> + *
> + * lockless fastpaths
> + *
> + * The fast path allocation (slab_alloc_node()) and freeing (do_slab_free())
> + * are fully lockless when satisfied from the percpu slab (and when
> + * cmpxchg_double can be used; otherwise slab_lock is taken).
> + * They also don't disable preemption or migration or irqs. They rely on
> + * the transaction id (tid) field to detect being preempted or moved to
> + * another cpu.
> + *
> + * irq, preemption, migration considerations
> + *
> + * Interrupts are disabled as part of list_lock or local_lock operations, or
> + * around the slab_lock operation, in order to make the slab allocator safe
> + * to use in the context of an irq.
> + *
> + * In addition, migration is disabled in the allocation slowpath, bulk
> + * allocation, and put_cpu_partial(), so that the local cpu doesn't change in
> + * the process and e.g. the kmem_cache_cpu pointer doesn't have to be
> + * revalidated in each section protected by the local lock.
> *
> * SLUB assigns one slab for allocation to each processor.
> * Allocations only occur from these slabs called cpu slabs.
> @@ -427,7 +461,7 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
> page->freelist = freelist_new;
> page->counters = counters_new;
> slab_unlock(page);
> - local_irq_restore(flags);
> + local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> return true;
> }
> slab_unlock(page);
> @@ -2157,9 +2191,13 @@ static inline void note_cmpxchg_failure(const char *n,
> static void init_kmem_cache_cpus(struct kmem_cache *s)
> {
> int cpu;
> + struct kmem_cache_cpu *c;
>
> - for_each_possible_cpu(cpu)
> - per_cpu_ptr(s->cpu_slab, cpu)->tid = init_tid(cpu);
> + for_each_possible_cpu(cpu) {
> + c = per_cpu_ptr(s->cpu_slab, cpu);
> + local_lock_init(&c->lock);
> + c->tid = init_tid(cpu);
> + }
> }
>
> /*
> @@ -2708,9 +2746,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> goto deactivate_slab;
>
> /* must check again c->page in case of IRQ */
> - local_irq_save(flags);
> + local_lock_irqsave(&s->cpu_slab->lock, flags);
> if (unlikely(page != c->page)) {
> - local_irq_restore(flags);
> + local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> goto reread_page;
> }
> freelist = c->freelist;
> @@ -2721,7 +2759,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
>
> if (!freelist) {
> c->page = NULL;
> - local_irq_restore(flags);
> + local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> stat(s, DEACTIVATE_BYPASS);
> goto new_slab;
> }
> @@ -2737,37 +2775,37 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> VM_BUG_ON(!c->page->frozen);
> c->freelist = get_freepointer(s, freelist);
> c->tid = next_tid(c->tid);
> - local_irq_restore(flags);
> + local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> return freelist;
>
> deactivate_slab:
> - local_irq_save(flags);
> + local_lock_irqsave(&s->cpu_slab->lock, flags);
> if (page != c->page) {
> - local_irq_restore(flags);
> + local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> goto reread_page;
> }
> freelist = c->freelist;
> c->page = NULL;
> c->freelist = NULL;
> - local_irq_restore(flags);
> + local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> deactivate_slab(s, page, freelist);
>
> new_slab:
>
> if (slub_percpu_partial(c)) {
> - local_irq_save(flags);
> + local_lock_irqsave(&s->cpu_slab->lock, flags);
> if (unlikely(c->page)) {
> - local_irq_restore(flags);
> + local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> goto reread_page;
> }
> if (unlikely(!slub_percpu_partial(c))) { /* stolen by IRQ? */
> - local_irq_restore(flags);
> + local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> goto new_objects;
> }
>
> page = c->page = slub_percpu_partial(c);
> slub_set_percpu_partial(c, page);
> - local_irq_restore(flags);
> + local_unlock_irqrestore(&s->cpu_slab->lock, flags);
> stat(s, CPU_PARTIAL_ALLOC);
> goto redo;
> }
> @@ -2820,7 +2858,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> goto return_single;
>
> retry_load_page:
> - local_irq_save(flags);
> + local_lock_irqsave(&s->cpu_slab->lock, flags);
> if (unlikely(c->page)) {
> void *flush_freelist = c->freelist;
> struct page *flush_page = c->page;
> @@ -2829,7 +2867,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
> c->freelist = NULL;
> c->tid = next_tid(c->tid);
>
> - local_irq_restore(flags);
> + local_unlock_irqrestore(&s->cpu_slab->lock, flags);
>
> deactivate_slab(s, flush_page, flush_freelist);
>
> @@ -3389,7 +3427,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> */
> migrate_disable();
> c = this_cpu_ptr(s->cpu_slab);
> - local_irq_disable();
> + local_lock_irq(&s->cpu_slab->lock);
>
> for (i = 0; i < size; i++) {
> void *object = kfence_alloc(s, s->object_size, flags);
> @@ -3410,7 +3448,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> */
> c->tid = next_tid(c->tid);
>
> - local_irq_enable();
> + local_unlock_irq(&s->cpu_slab->lock);
>
> /*
> * Invoking slow path likely have side-effect
> @@ -3424,7 +3462,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> c = this_cpu_ptr(s->cpu_slab);
> maybe_wipe_obj_freeptr(s, p[i]);
>
> - local_irq_disable();
> + local_lock_irq(&s->cpu_slab->lock);
>
> continue; /* goto for-loop */
> }
> @@ -3433,7 +3471,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> maybe_wipe_obj_freeptr(s, p[i]);
> }
> c->tid = next_tid(c->tid);
> - local_irq_enable();
> + local_unlock_irq(&s->cpu_slab->lock);
> migrate_enable();
>
> /*
> @@ -3444,7 +3482,7 @@ int kmem_cache_alloc_bulk(struct kmem_cache *s, gfp_t flags, size_t size,
> slab_want_init_on_alloc(flags, s));
> return i;
> error:
> - local_irq_enable();
> + local_unlock_irq(&s->cpu_slab->lock);
> slab_post_alloc_hook(s, objcg, flags, i, p, false);
> __kmem_cache_free_bulk(s, i, p);
> return 0;
>