From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org, Christoph Lameter <cl@linux.com>,
	David Rientjes <rientjes@google.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>,
	linux-kernel@vger.kernel.org, Mike Galbraith <efault@gmx.de>,
	Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Mel Gorman <mgorman@techsingularity.net>,
	Vlastimil Babka <vbabka@suse.cz>
Subject: [PATCH v6 08/33] mm, slub: return slab page from get_partial() and set c->page afterwards
Date: Sat,  4 Sep 2021 12:49:38 +0200	[thread overview]
Message-ID: <20210904105003.11688-9-vbabka@suse.cz> (raw)
In-Reply-To: <20210904105003.11688-1-vbabka@suse.cz>

The function get_partial() finds a suitable page on a partial list, acquires
and returns its freelist, and assigns the page pointer to kmem_cache_cpu.
A later patch will need more control over the kmem_cache_cpu.page assignment,
so instead of passing a kmem_cache_cpu pointer, pass a pointer to a struct
page pointer that get_partial() fills in, and let the caller assign it to
kmem_cache_cpu.page afterwards. No functional change, as all of this still
happens with disabled IRQs.
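
For illustration, the caller side in ___slab_alloc() then follows roughly
this pattern (a simplified sketch of the hunk below, with the new_slab()
fallback and debug checks omitted):

	struct page *page;
	void *freelist;

	/* get_partial() fills *ret_page instead of touching c->page */
	freelist = get_partial(s, gfpflags, node, &page);
	if (freelist) {
		/* the caller now assigns the cpu slab page explicitly */
		c->page = page;
		goto check_new_page;
	}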

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 21 +++++++++++----------
 1 file changed, 11 insertions(+), 10 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 0c645b0e96d9..e9d582eee7d7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -2017,7 +2017,7 @@ static inline bool pfmemalloc_match(struct page *page, gfp_t gfpflags);
  * Try to allocate a partial slab from a specific node.
  */
 static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
-				struct kmem_cache_cpu *c, gfp_t flags)
+			      struct page **ret_page, gfp_t flags)
 {
 	struct page *page, *page2;
 	void *object = NULL;
@@ -2046,7 +2046,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
 
 		available += objects;
 		if (!object) {
-			c->page = page;
+			*ret_page = page;
 			stat(s, ALLOC_FROM_PARTIAL);
 			object = t;
 		} else {
@@ -2066,7 +2066,7 @@ static void *get_partial_node(struct kmem_cache *s, struct kmem_cache_node *n,
  * Get a page from somewhere. Search in increasing NUMA distances.
  */
 static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
-		struct kmem_cache_cpu *c)
+			     struct page **ret_page)
 {
 #ifdef CONFIG_NUMA
 	struct zonelist *zonelist;
@@ -2108,7 +2108,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
 
 			if (n && cpuset_zone_allowed(zone, flags) &&
 					n->nr_partial > s->min_partial) {
-				object = get_partial_node(s, n, c, flags);
+				object = get_partial_node(s, n, ret_page, flags);
 				if (object) {
 					/*
 					 * Don't check read_mems_allowed_retry()
@@ -2130,7 +2130,7 @@ static void *get_any_partial(struct kmem_cache *s, gfp_t flags,
  * Get a partial page, lock it and return it.
  */
 static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
-		struct kmem_cache_cpu *c)
+			 struct page **ret_page)
 {
 	void *object;
 	int searchnode = node;
@@ -2138,11 +2138,11 @@ static void *get_partial(struct kmem_cache *s, gfp_t flags, int node,
 	if (node == NUMA_NO_NODE)
 		searchnode = numa_mem_id();
 
-	object = get_partial_node(s, get_node(s, searchnode), c, flags);
+	object = get_partial_node(s, get_node(s, searchnode), ret_page, flags);
 	if (object || node != NUMA_NO_NODE)
 		return object;
 
-	return get_any_partial(s, flags, c);
+	return get_any_partial(s, flags, ret_page);
 }
 
 #ifdef CONFIG_PREEMPTION
@@ -2754,9 +2754,11 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 		goto redo;
 	}
 
-	freelist = get_partial(s, gfpflags, node, c);
-	if (freelist)
+	freelist = get_partial(s, gfpflags, node, &page);
+	if (freelist) {
+		c->page = page;
 		goto check_new_page;
+	}
 
 	page = new_slab(s, gfpflags, node);
 
@@ -2780,7 +2782,6 @@ static void *___slab_alloc(struct kmem_cache *s, gfp_t gfpflags, int node,
 	c->page = page;
 
 check_new_page:
-	page = c->page;
 	if (likely(!kmem_cache_debug(s) && pfmemalloc_match(page, gfpflags)))
 		goto load_freelist;
 
-- 
2.33.0


