From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org, Christoph Lameter <cl@linux.com>,
	David Rientjes <rientjes@google.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Mike Galbraith <efault@gmx.de>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Mel Gorman <mgorman@techsingularity.net>,
	Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH v6 12/33] mm, slub: do initial checks in ___slab_alloc() with irqs enabled
Date: Sat,  4 Sep 2021 12:49:42 +0200
Message-ID: <20210904105003.11688-13-vbabka@suse.cz>
In-Reply-To: <20210904105003.11688-1-vbabka@suse.cz>

As another step of shortening irq-disabled sections in ___slab_alloc(), delay
disabling irqs until we pass the initial checks of whether there is a cached
percpu slab and whether it's suitable for our allocation.

Now we have to recheck c->page after actually disabling irqs, as an allocation
in an irq handler might have replaced it.
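
For illustration, the resulting control flow looks roughly like the minimal
user-space sketch below. This is not the kernel code: struct page,
page->suitable, irq_save()/irq_restore() and __atomic_load_n() are simplified
stand-ins for the kernel's types, local_irq_save()/local_irq_restore() and
READ_ONCE().

/*
 * Minimal user-space sketch (hypothetical stand-ins, not kernel code)
 * of the check-with-irqs-enabled, then disable-and-recheck pattern.
 */
#include <stdbool.h>
#include <stddef.h>

struct page { bool suitable; };

struct kmem_cache_cpu {
	struct page *page;	/* may be replaced from irq context */
	void *freelist;
};

static void irq_save(unsigned long *flags)   { *flags = 0; }	/* ~ local_irq_save() */
static void irq_restore(unsigned long flags) { (void)flags; }	/* ~ local_irq_restore() */

static void *alloc_slowpath(struct kmem_cache_cpu *c)
{
	struct page *page;
	void *freelist;
	unsigned long flags;

reread_page:
	/* Optimistic read and cheap checks; irqs are still enabled. */
	page = __atomic_load_n(&c->page, __ATOMIC_RELAXED);	/* ~ READ_ONCE() */
	if (!page || !page->suitable)
		return NULL;	/* real code would allocate a new slab here */

	/*
	 * Checks passed; now disable irqs and verify that an allocation
	 * in an irq handler did not replace c->page meanwhile. If it
	 * did, retry from the top.
	 */
	irq_save(&flags);
	if (page != c->page) {
		irq_restore(flags);
		goto reread_page;
	}

	/* page is stable from here; consume the freelist with irqs off. */
	freelist = c->freelist;
	c->freelist = NULL;
	irq_restore(flags);
	return freelist;
}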

Because we call pfmemalloc_match() as one of the checks, we might hit
VM_BUG_ON_PAGE(!PageSlab(page)) in PageSlabPfmemalloc() in case we get
interrupted and the page is freed under us. Thus introduce a
pfmemalloc_match_unsafe() variant that lacks the PageSlab check.
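
A hypothetical analogue of the safe versus unsafe flag test (the real
definitions are in the page-flags.h hunk below): the unsafe variant merely
drops the assertion that the page is still a slab page, because a stale
result is tolerated here and caught by the c->page recheck above.

#include <assert.h>
#include <stdbool.h>

struct page { bool slab; bool pfmemalloc; };

/* Safe check: only valid on a page known to still be a slab page. */
static bool page_slab_pfmemalloc(struct page *page)
{
	assert(page->slab);	/* ~ VM_BUG_ON_PAGE(!PageSlab(page), page) */
	return page->pfmemalloc;
}

/*
 * Unsafe check: no assertion, so the page may have been freed and
 * reused under us; the caller revalidates after disabling irqs.
 */
static bool page_slab_pfmemalloc_unsafe(struct page *page)
{
	return page->pfmemalloc;
}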

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
 include/linux/page-flags.h |  9 +++++++
 mm/slub.c                  | 54 +++++++++++++++++++++++++++++++-------
 2 files changed, 54 insertions(+), 9 deletions(-)

diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 5922031ffab6..7fda4fb85bdc 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -815,6 +815,15 @@ static inline int PageSlabPfmemalloc(struct page *page)
 	return PageActive(page);
 }
 
+/*
+ * A version of PageSlabPfmemalloc() for opportunistic checks where the page
+ * might have been freed under us and not be a PageSlab anymore.
+ */
+static inline int __PageSlabPfmemalloc(struct page *page)
+{
+	return PageActive(page);
+}
+
 static inline void SetPageSlabPfmemalloc(struct page *page)
 {
 	VM_BUG_ON_PAGE(!PageSlab(page), page);
diff --git a/mm/slub.c b/mm/slub.c
index dda05cc83eef..6295695d8515 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2620,6 +2620,19 @@ static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags)
 	return true;
 }
 
+/*
+ * A variant of pfmemalloc_match() that tests page flags without asserting
+ * PageSlab. Intended for opportunistic checks before taking a lock and
+ * rechecking that nobody else freed the page under us.
+ */
+static inline bool pfmemalloc_match_unsafe(struct page *page, gfp_t gfpflags)
+{
+	if (unlikely(__PageSlabPfmemalloc(page)))
+		return gfp_pfmemalloc_allowed(gfpflags);
+
+	return true;
+}
+
 /*
  * Check the page->freelist of a page and either transfer the freelist to the
  * per cpu freelist or deactivate the page.
@@ -2682,8 +2695,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 	stat(s, ALLOC_SLOWPATH);
 
-	local_irq_save(flags);
-	page = c->page;
+reread_page:
+
+	page = READ_ONCE(c->page);
 	if (!page) {
 		/*
 		 * if the node is not online or has no normal memory, just
@@ -2692,6 +2706,11 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		if (unlikely(node != NUMA_NO_NODE &&
 			     !node_isset(node, slab_nodes)))
 			node = NUMA_NO_NODE;
+		local_irq_save(flags);
+		if (unlikely(c->page)) {
+			local_irq_restore(flags);
+			goto reread_page;
+		}
 		goto new_slab;
 	}
 redo:
@@ -2706,8 +2725,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 			goto redo;
 		} else {
 			stat(s, ALLOC_NODE_MISMATCH);
-			deactivate_slab(s, page, c->freelist, c);
-			goto new_slab;
+			goto deactivate_slab;
 		}
 	}
 
@@ -2716,12 +2734,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	 * PFMEMALLOC but right now, we are losing the pfmemalloc
 	 * information when the page leaves the per-cpu allocator
 	 */
-	if (unlikely(!pfmemalloc_match(page, gfpflags))) {
-		deactivate_slab(s, page, c->freelist, c);
-		goto new_slab;
-	}
+	if (unlikely(!pfmemalloc_match_unsafe(page, gfpflags)))
+		goto deactivate_slab;
 
-	/* must check again c->freelist in case of cpu migration or IRQ */
+	/* must check again c->page in case IRQ handler changed it */
+	local_irq_save(flags);
+	if (unlikely(page != c->page)) {
+		local_irq_restore(flags);
+		goto reread_page;
+	}
 	freelist = c->freelist;
 	if (freelist)
 		goto load_freelist;
@@ -2737,6 +2758,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	stat(s, ALLOC_REFILL);
 
 load_freelist:
+
+	lockdep_assert_irqs_disabled();
+
 	/*
 	 * freelist is pointing to the list of objects to be used.
 	 * page is pointing to the page from which the objects are obtained.
@@ -2748,11 +2772,23 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	local_irq_restore(flags);
 	return freelist;
 
+deactivate_slab:
+
+	local_irq_save(flags);
+	if (page != c->page) {
+		local_irq_restore(flags);
+		goto reread_page;
+	}
+	deactivate_slab(s, page, c->freelist, c);
+
 new_slab:
 
+	lockdep_assert_irqs_disabled();
+
 	if (slub_percpu_partial(c)) {
 		page = c->page = slub_percpu_partial(c);
 		slub_set_percpu_partial(c, page);
+		local_irq_restore(flags);
 		stat(s, CPU_PARTIAL_ALLOC);
 		goto redo;
 	}
-- 
2.33.0


Thread overview: 36+ messages
2021-09-04 10:49 [PATCH v6 00/33] SLUB: reduce irq disabled scope and make it RT compatible Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 01/33] mm, slub: don't call flush_all() from slab_debug_trace_open() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 02/33] mm, slub: allocate private object map for debugfs listings Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 03/33] mm, slub: allocate private object map for validate_slab_cache() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 04/33] mm, slub: don't disable irq for debug_check_no_locks_freed() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 05/33] mm, slub: remove redundant unfreeze_partials() from put_cpu_partial() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 06/33] mm, slub: extract get_partial() from new_slab_objects() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 07/33] mm, slub: dissolve new_slab_objects() into ___slab_alloc() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 08/33] mm, slub: return slab page from get_partial() and set c->page afterwards Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 09/33] mm, slub: restructure new page checks in ___slab_alloc() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 10/33] mm, slub: simplify kmem_cache_cpu and tid setup Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 11/33] mm, slub: move disabling/enabling irqs to ___slab_alloc() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 12/33] mm, slub: do initial checks in ___slab_alloc() with irqs enabled Vlastimil Babka [this message]
2021-09-04 10:49 ` [PATCH v6 13/33] mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 14/33] mm, slub: restore irqs around calling new_slab() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 15/33] mm, slub: validate slab from partial list or page allocator before making it cpu slab Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 16/33] mm, slub: check new pages with restored irqs Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 17/33] mm, slub: stop disabling irqs around get_partial() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 18/33] mm, slub: move reset of c->page and freelist out of deactivate_slab() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 19/33] mm, slub: make locking in deactivate_slab() irq-safe Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 20/33] mm, slub: call deactivate_slab() without disabling irqs Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 21/33] mm, slub: move irq control into unfreeze_partials() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 22/33] mm, slub: discard slabs in unfreeze_partials() without irqs disabled Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 23/33] mm, slub: detach whole partial list at once in unfreeze_partials() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 24/33] mm, slub: separate detaching of partial list in unfreeze_partials() from unfreezing Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 25/33] mm, slub: only disable irq with spin_lock in __unfreeze_partials() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 26/33] mm, slub: don't disable irqs in slub_cpu_dead() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 27/33] mm, slab: split out the cpu offline variant of flush_slab() Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 28/33] mm: slub: move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context Vlastimil Babka
2021-09-04 10:49 ` [PATCH v6 29/33] mm: slub: make object_map_lock a raw_spinlock_t Vlastimil Babka
2021-09-04 10:50 ` [PATCH v6 30/33] mm, slub: make slab_lock() disable irqs with PREEMPT_RT Vlastimil Babka
2021-09-04 10:50 ` [PATCH v6 31/33] mm, slub: protect put_cpu_partial() with disabled irqs instead of cmpxchg Vlastimil Babka
2021-09-04 10:50 ` [PATCH v6 32/33] mm, slub: use migrate_disable() on PREEMPT_RT Vlastimil Babka
2021-09-04 10:50 ` [PATCH v6 33/33] mm, slub: convert kmem_cpu_slab protection to local_lock Vlastimil Babka
2021-09-05 14:16 ` [PATCH v6 00/33] SLUB: reduce irq disabled scope and make it RT compatible Mike Galbraith
2021-09-07  8:20 ` Mel Gorman
