From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Christoph Lameter <cl@linux.com>,
	David Rientjes <rientjes@google.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Mel Gorman <mgorman@techsingularity.net>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Jann Horn <jannh@google.com>, Vlastimil Babka <vbabka@suse.cz>
Subject: [RFC v2 13/34] mm, slub: do initial checks in ___slab_alloc() with irqs enabled
Date: Wed,  9 Jun 2021 13:38:42 +0200	[thread overview]
Message-ID: <20210609113903.1421-14-vbabka@suse.cz> (raw)
In-Reply-To: <20210609113903.1421-1-vbabka@suse.cz>

As another step in shortening the irq-disabled sections in ___slab_alloc(),
delay disabling irqs until after the initial checks: whether there is a
cached percpu slab at all, and whether it is suitable for this allocation.

Now we have to recheck c->page after actually disabling irqs, as an
allocation from an irq handler might have replaced it in the meantime.

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
Acked-by: Mel Gorman <mgorman@techsingularity.net>
---
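Not part of the patch: below is a minimal userspace C sketch of the pattern
this change introduces, for readers skimming the diff. The names cpu_slab,
irq_disable()/irq_enable() and page_suitable() are hypothetical stand-ins for
struct kmem_cache_cpu, local_irq_save()/local_irq_restore() and the node /
pfmemalloc checks, and __atomic_load_n() stands in for READ_ONCE(). The idea
is to run the cheap suitability checks with irqs still enabled, then disable
irqs and verify that the cached page was not replaced by an allocation from
an irq handler; if it was, retry from the top.

/*
 * Minimal sketch of the check / disable-irqs / recheck pattern; not kernel
 * code.  All names here are illustrative stand-ins, see the note above.
 */
#include <stdio.h>
#include <stdbool.h>

struct cpu_slab {
	void *page;
	void *freelist;
};

static struct cpu_slab c;		/* per-CPU state in the real code */

static void irq_disable(void) { }	/* local_irq_save() in the kernel */
static void irq_enable(void)  { }	/* local_irq_restore() in the kernel */

/* Stand-in for the node and pfmemalloc suitability checks. */
static bool page_suitable(void *page)
{
	return page != NULL;
}

static void *slow_alloc(void)
{
	void *page, *object;

reread_page:
	/* Initial checks run with irqs still enabled: read c.page once. */
	page = __atomic_load_n(&c.page, __ATOMIC_RELAXED);

	if (!page_suitable(page))
		return NULL;		/* new-slab/deactivate paths omitted */

	/*
	 * Only now disable irqs and recheck: an allocation from an irq
	 * handler may have replaced c.page while the checks above ran.
	 */
	irq_disable();
	if (page != c.page) {
		irq_enable();
		goto reread_page;	/* state changed under us: retry */
	}

	object = c.freelist;		/* continue with irqs disabled */
	irq_enable();
	return object;
}

int main(void)
{
	printf("allocated: %p\n", slow_alloc());
	return 0;
}

The design trade-off: the common case pays one extra comparison after
disabling irqs, and only the rare race with an irq-handler allocation pays
the cost of a retry, in exchange for a shorter irq-disabled section.
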
 mm/slub.c | 41 ++++++++++++++++++++++++++++++++---------
 1 file changed, 32 insertions(+), 9 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 4800d768d0d3..22b5bfef8aa6 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2622,8 +2622,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 
 	stat(s, ALLOC_SLOWPATH);
 
-	local_irq_save(flags);
-	page = c->page;
+reread_page:
+
+	page = READ_ONCE(c->page);
 	if (!page) {
 		/*
 		 * if the node is not online or has no normal memory, just
@@ -2632,6 +2633,11 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		if (unlikely(node != NUMA_NO_NODE &&
 			     !node_isset(node, slab_nodes)))
 			node = NUMA_NO_NODE;
+		local_irq_save(flags);
+		if (unlikely(c->page)) {
+			local_irq_restore(flags);
+			goto reread_page;
+		}
 		goto new_slab;
 	}
 redo:
@@ -2646,8 +2652,7 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 			goto redo;
 		} else {
 			stat(s, ALLOC_NODE_MISMATCH);
-			deactivate_slab(s, page, c->freelist, c);
-			goto new_slab;
+			goto deactivate_slab;
 		}
 	}
 
@@ -2656,12 +2661,15 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	 * PFMEMALLOC but right now, we are losing the pfmemalloc
 	 * information when the page leaves the per-cpu allocator
 	 */
-	if (unlikely(!pfmemalloc_match(page, gfpflags))) {
-		deactivate_slab(s, page, c->freelist, c);
-		goto new_slab;
-	}
+	if (unlikely(!pfmemalloc_match(page, gfpflags)))
+		goto deactivate_slab;
 
-	/* must check again c->freelist in case of cpu migration or IRQ */
+	/* must check again c->page in case IRQ handler changed it */
+	local_irq_save(flags);
+	if (unlikely(page != c->page)) {
+		local_irq_restore(flags);
+		goto reread_page;
+	}
 	freelist = c->freelist;
 	if (freelist)
 		goto load_freelist;
@@ -2677,6 +2685,9 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	stat(s, ALLOC_REFILL);
 
 load_freelist:
+
+	lockdep_assert_irqs_disabled();
+
 	/*
 	 * freelist is pointing to the list of objects to be used.
 	 * page is pointing to the page from which the objects are obtained.
@@ -2688,11 +2699,23 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	local_irq_restore(flags);
 	return freelist;
 
+deactivate_slab:
+
+	local_irq_save(flags);
+	if (page != c->page) {
+		local_irq_restore(flags);
+		goto reread_page;
+	}
+	deactivate_slab(s, page, c->freelist, c);
+
 new_slab:
 
+	lockdep_assert_irqs_disabled();
+
 	if (slub_percpu_partial(c)) {
 		page = c->page = slub_percpu_partial(c);
 		slub_set_percpu_partial(c, page);
+		local_irq_restore(flags);
 		stat(s, CPU_PARTIAL_ALLOC);
 		goto redo;
 	}
-- 
2.31.1



Thread overview: 65+ messages
2021-06-09 11:38 [RFC v2 00/34] SLUB: reduce irq disabled scope and make it RT compatible Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 01/34] mm, slub: don't call flush_all() from list_locations() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 02/34] mm, slub: allocate private object map for sysfs listings Vlastimil Babka
2021-06-09 13:29   ` Christoph Lameter
2021-06-09 11:38 ` [RFC v2 03/34] mm, slub: allocate private object map for validate_slab_cache() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 04/34] mm, slub: don't disable irq for debug_check_no_locks_freed() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 05/34] mm, slub: remove redundant unfreeze_partials() from put_cpu_partial() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 06/34] mm, slub: unify cmpxchg_double_slab() and __cmpxchg_double_slab() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 07/34] mm, slub: extract get_partial() from new_slab_objects() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 08/34] mm, slub: dissolve new_slab_objects() into ___slab_alloc() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 09/34] mm, slub: return slab page from get_partial() and set c->page afterwards Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 10/34] mm, slub: restructure new page checks in ___slab_alloc() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 11/34] mm, slub: simplify kmem_cache_cpu and tid setup Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 12/34] mm, slub: move disabling/enabling irqs to ___slab_alloc() Vlastimil Babka
2021-07-06  4:38   ` Mike Galbraith
2021-06-09 11:38 ` Vlastimil Babka [this message]
2021-06-09 11:38 ` [RFC v2 14/34] mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 15/34] mm, slub: restore irqs around calling new_slab() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 16/34] mm, slub: validate slab from partial list or page allocator before making it cpu slab Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 17/34] mm, slub: check new pages with restored irqs Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 18/34] mm, slub: stop disabling irqs around get_partial() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 19/34] mm, slub: move reset of c->page and freelist out of deactivate_slab() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 20/34] mm, slub: make locking in deactivate_slab() irq-safe Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 21/34] mm, slub: call deactivate_slab() without disabling irqs Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 22/34] mm, slub: move irq control into unfreeze_partials() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 23/34] mm, slub: discard slabs in unfreeze_partials() without irqs disabled Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 24/34] mm, slub: detach whole partial list at once in unfreeze_partials() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 25/34] mm, slub: detach percpu partial list in unfreeze_partials() using this_cpu_cmpxchg() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 26/34] mm, slub: only disable irq with spin_lock in __unfreeze_partials() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 27/34] mm, slub: don't disable irqs in slub_cpu_dead() Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 28/34] mm, slab: make flush_slab() possible to call with irqs enabled Vlastimil Babka
2021-06-09 11:38 ` [RFC v2 29/34] mm: slub: Move flush_cpu_slab() invocations __free_slab() invocations out of IRQ context Vlastimil Babka
2021-06-09 22:29   ` Cyrill Gorcunov
2021-06-10  8:32     ` Vlastimil Babka
2021-06-10  8:36       ` Cyrill Gorcunov
2021-07-07  6:33   ` Hillf Danton
2021-06-09 11:38 ` [RFC v2 30/34] mm: slub: Make object_map_lock a raw_spinlock_t Vlastimil Babka
2021-06-09 11:39 ` [RFC v2 31/34] mm, slub: optionally save/restore irqs in slab_[un]lock()/ Vlastimil Babka
2021-07-02 12:17   ` Sebastian Andrzej Siewior
2021-06-09 11:39 ` [RFC v2 32/34] mm, slub: make slab_lock() disable irqs with PREEMPT_RT Vlastimil Babka
2021-06-09 11:39 ` [RFC v2 33/34] mm, slub: use migrate_disable() on PREEMPT_RT Vlastimil Babka
2021-06-14 11:07   ` Vlastimil Babka
2021-06-14 11:16     ` Sebastian Andrzej Siewior
2021-06-14 11:33       ` Vlastimil Babka
2021-06-14 12:54         ` Vlastimil Babka
2021-06-14 14:01         ` Sebastian Andrzej Siewior
2021-06-09 11:39 ` [RFC v2 34/34] mm, slub: convert kmem_cpu_slab protection to local_lock Vlastimil Babka
2021-06-14  9:49 ` [RFC v2 00/34] SLUB: reduce irq disabled scope and make it RT compatible Mel Gorman
2021-06-14 11:31   ` Mel Gorman
2021-06-14 11:10 ` Vlastimil Babka
2021-07-02 18:29 ` Sebastian Andrzej Siewior
2021-07-02 20:25   ` Vlastimil Babka
2021-07-29 13:49     ` Sebastian Andrzej Siewior
2021-07-29 14:17       ` Vlastimil Babka
2021-07-29 14:37         ` Sebastian Andrzej Siewior
2021-07-03  7:24   ` Mike Galbraith
2021-07-03 15:47     ` Mike Galbraith
2021-07-04  5:37       ` Mike Galbraith
2021-07-18  7:41     ` Vlastimil Babka
2021-07-18  8:29       ` Mike Galbraith
2021-07-18 12:09         ` Mike Galbraith
2021-07-05 16:00   ` Mike Galbraith
2021-07-06 17:56     ` Mike Galbraith
