From: Vlastimil Babka <vbabka@suse.cz>
To: Andrew Morton <akpm@linux-foundation.org>, Christoph Lameter <cl@linux.com>, David Rientjes <rientjes@google.com>, Pekka Enberg <penberg@kernel.org>, Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: linux-mm@kvack.org, linux-kernel@vger.kernel.org, Mike Galbraith <efault@gmx.de>, Sebastian Andrzej Siewior <bigeasy@linutronix.de>, Thomas Gleixner <tglx@linutronix.de>, Mel Gorman <mgorman@techsingularity.net>, Jesper Dangaard Brouer <brouer@redhat.com>, Jann Horn <jannh@google.com>, Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH v5 13/35] mm, slub: do initial checks in ___slab_alloc() with irqs enabled
Date: Mon, 23 Aug 2021 16:58:04 +0200
Message-ID: <20210823145826.3857-14-vbabka@suse.cz>
In-Reply-To: <20210823145826.3857-1-vbabka@suse.cz>

As another step of shortening irq disabled sections in ___slab_alloc(),
delay disabling irqs until we pass the initial checks of whether there is
a cached percpu slab and whether it's suitable for our allocation.

Now we have to recheck c->page after actually disabling irqs, as an
allocation in an irq handler might have replaced it.

Because we call pfmemalloc_match() as one of the checks, we might hit
VM_BUG_ON_PAGE(!PageSlab(page)) in PageSlabPfmemalloc() in case we get
interrupted and the page is freed. Thus introduce a
pfmemalloc_match_unsafe() variant that lacks the PageSlab check.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
 include/linux/page-flags.h |  9 +++++++
 mm/slub.c                  | 54 +++++++++++++++++++++++++++++++-------
 2 files changed, 54 insertions(+), 9 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 5922031ffab6..7fda4fb85bdc 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -815,6 +815,15 @@ static inline int PageSlabPfmemalloc(struct page *page)
         return PageActive(page);
 }
 
+/*
+ * A version of PageSlabPfmemalloc() for opportunistic checks where the page
+ * might have been freed under us and not be a PageSlab anymore.
+ */
+static inline int __PageSlabPfmemalloc(struct page *page)
+{
+        return PageActive(page);
+}
+
 static inline void SetPageSlabPfmemalloc(struct page *page)
 {
         VM_BUG_ON_PAGE(!PageSlab(page), page);
diff --git a/mm/slub.c b/mm/slub.c
index 31f946e03823..fcc38638c645 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2606,6 +2606,19 @@ static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags)
         return true;
 }
 
+/*
+ * A variant of pfmemalloc_match() that tests page flags without asserting
+ * PageSlab. Intended for opportunistic checks before taking a lock and
+ * rechecking that nobody else freed the page under us.
+ */
+static inline bool pfmemalloc_match_unsafe(struct page *page, gfp_t gfpflags)
+{
+        if (unlikely(__PageSlabPfmemalloc(page)))
+                return gfp_pfmemalloc_allowed(gfpflags);
+
+        return true;
+}
+
 /*
  * Check the page->freelist of a page and either transfer the freelist to the
  * per cpu freelist or deactivate the page.
@@ -2668,8 +2681,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
         stat(s, ALLOC_SLOWPATH);
 
-        local_irq_save(flags);
-        page = c->page;
+reread_page:
+
+        page = READ_ONCE(c->page);
         if (!page) {
                 /*
                  * if the node is not online or has no normal memory, just
@@ -2678,6 +2692,11 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
                 if (unlikely(node != NUMA_NO_NODE &&
                              !node_isset(node, slab_nodes)))
                         node = NUMA_NO_NODE;
+                local_irq_save(flags);
+                if (unlikely(c->page)) {
+                        local_irq_restore(flags);
+                        goto reread_page;
+                }
                 goto new_slab;
         }
 redo:
@@ -2692,8 +2711,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
                         goto redo;
                 } else {
                         stat(s, ALLOC_NODE_MISMATCH);
-                        deactivate_slab(s, page, c->freelist, c);
-                        goto new_slab;
+                        goto deactivate_slab;
                 }
         }
 
@@ -2702,12 +2720,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
          * PFMEMALLOC but right now, we are losing the pfmemalloc
          * information when the page leaves the per-cpu allocator
          */
-        if (unlikely(!pfmemalloc_match(page, gfpflags))) {
-                deactivate_slab(s, page, c->freelist, c);
-                goto new_slab;
-        }
+        if (unlikely(!pfmemalloc_match_unsafe(page, gfpflags)))
+                goto deactivate_slab;
 
-        /* must check again c->freelist in case of cpu migration or IRQ */
+        /* must check again c->page in case IRQ handler changed it */
+        local_irq_save(flags);
+        if (unlikely(page != c->page)) {
+                local_irq_restore(flags);
+                goto reread_page;
+        }
         freelist = c->freelist;
         if (freelist)
                 goto load_freelist;
@@ -2723,6 +2744,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
         stat(s, ALLOC_REFILL);
 
 load_freelist:
+
+        lockdep_assert_irqs_disabled();
+
         /*
          * freelist is pointing to the list of objects to be used.
          * page is pointing to the page from which the objects are obtained.
@@ -2734,11 +2758,23 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
         local_irq_restore(flags);
         return freelist;
 
+deactivate_slab:
+
+        local_irq_save(flags);
+        if (page != c->page) {
+                local_irq_restore(flags);
+                goto reread_page;
+        }
+        deactivate_slab(s, page, c->freelist, c);
+
 new_slab:
 
+        lockdep_assert_irqs_disabled();
+
         if (slub_percpu_partial(c)) {
                 page = c->page = slub_percpu_partial(c);
                 slub_set_percpu_partial(c, page);
+                local_irq_restore(flags);
                 stat(s, CPU_PARTIAL_ALLOC);
                 goto redo;
         }
-- 
2.32.0
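As an aside for anyone who wants to experiment with the retry scheme outside the kernel tree, below is a minimal userspace sketch of the pattern this patch introduces: read the cached pointer and run the cheap suitability checks optimistically, then enter the exclusive section (a pthread mutex here stands in for local_irq_save()) and recheck that the pointer was not replaced in the meantime, retrying from the top if it was. This is not the actual ___slab_alloc() code; struct slab, cpu_slab, irq_off and alloc_slow() are made-up names used only for illustration.

/* Userspace analogy only: a mutex models the irq-disabled section. */
#include <pthread.h>
#include <stdatomic.h>
#include <stdio.h>

struct slab {
        int suitable;   /* stands in for the node / pfmemalloc checks */
};

static _Atomic(struct slab *) cpu_slab;                 /* models c->page */
static pthread_mutex_t irq_off = PTHREAD_MUTEX_INITIALIZER;

static struct slab *alloc_slow(void)
{
        struct slab *page;

retry:
        /* Optimistic read and checks, "irqs" still enabled. */
        page = atomic_load(&cpu_slab);
        if (!page || !page->suitable)
                return NULL;    /* would deactivate or get a new slab */

        /* Enter the exclusive section and recheck the pointer. */
        pthread_mutex_lock(&irq_off);
        if (page != atomic_load(&cpu_slab)) {
                /* An "irq handler" replaced it meanwhile; start over. */
                pthread_mutex_unlock(&irq_off);
                goto retry;
        }

        /* Safe to use the slab while exclusion is held. */
        pthread_mutex_unlock(&irq_off);
        return page;
}

int main(void)
{
        struct slab s = { .suitable = 1 };

        atomic_store(&cpu_slab, &s);
        printf("allocated from slab at %p\n", (void *)alloc_slow());
        return 0;
}

Build with something like 'cc -pthread sketch.c'. The only point is the structure: do the checks before taking the exclusive step, then verify nothing changed before relying on the result, which is what the reread_page and deactivate_slab labels implement in the patch above.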
Thread overview: 37+ messages

2021-08-23 14:57 [PATCH v5 00/35] SLUB: reduce irq disabled scope and make it RT compatible Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 01/35] mm, slub: don't call flush_all() from slab_debug_trace_open() Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 02/35] mm, slub: allocate private object map for debugfs listings Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 03/35] mm, slub: allocate private object map for validate_slab_cache() Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 04/35] mm, slub: don't disable irq for debug_check_no_locks_freed() Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 05/35] mm, slub: remove redundant unfreeze_partials() from put_cpu_partial() Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 06/35] mm, slub: unify cmpxchg_double_slab() and __cmpxchg_double_slab() Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 07/35] mm, slub: extract get_partial() from new_slab_objects() Vlastimil Babka
2021-08-23 14:57 ` [PATCH v5 08/35] mm, slub: dissolve new_slab_objects() into ___slab_alloc() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 09/35] mm, slub: return slab page from get_partial() and set c->page afterwards Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 10/35] mm, slub: restructure new page checks in ___slab_alloc() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 11/35] mm, slub: simplify kmem_cache_cpu and tid setup Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 12/35] mm, slub: move disabling/enabling irqs to ___slab_alloc() Vlastimil Babka
2021-08-23 14:58 ` Vlastimil Babka [this message]
2021-08-23 14:58 ` [PATCH v5 14/35] mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 15/35] mm, slub: restore irqs around calling new_slab() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 16/35] mm, slub: validate slab from partial list or page allocator before making it cpu slab Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 17/35] mm, slub: check new pages with restored irqs Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 18/35] mm, slub: stop disabling irqs around get_partial() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 19/35] mm, slub: move reset of c->page and freelist out of deactivate_slab() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 20/35] mm, slub: make locking in deactivate_slab() irq-safe Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 21/35] mm, slub: call deactivate_slab() without disabling irqs Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 22/35] mm, slub: move irq control into unfreeze_partials() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 23/35] mm, slub: discard slabs in unfreeze_partials() without irqs disabled Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 24/35] mm, slub: detach whole partial list at once in unfreeze_partials() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 25/35] mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 26/35] mm, slub: only disable irq with spin_lock in __unfreeze_partials() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 27/35] mm, slub: don't disable irqs in slub_cpu_dead() Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 28/35] mm, slab: make flush_slab() possible to call with irqs enabled Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 29/35] mm: slub: Move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 30/35] mm: slub: Make object_map_lock a raw_spinlock_t Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 31/35] mm, slub: optionally save/restore irqs in slab_[un]lock()/ Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 32/35] mm, slub: make slab_lock() disable irqs with PREEMPT_RT Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 33/35] mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 34/35] mm, slub: use migrate_disable() on PREEMPT_RT Vlastimil Babka
2021-08-23 14:58 ` [PATCH v5 35/35] mm, slub: convert kmem_cpu_slab protection to local_lock Vlastimil Babka
2021-08-23 15:11 ` [PATCH v5 00/35] SLUB: reduce irq disabled scope and make it RT compatible Sebastian Andrzej Siewior