From: Vlastimil Babka <vbabka@suse.cz>
To: linux-mm@kvack.org, linux-kernel@vger.kernel.org,
	Christoph Lameter <cl@linux.com>,
	David Rientjes <rientjes@google.com>,
	Pekka Enberg <penberg@kernel.org>,
	Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Sebastian Andrzej Siewior <bigeasy@linutronix.de>,
	Thomas Gleixner <tglx@linutronix.de>,
	Mel Gorman <mgorman@techsingularity.net>,
	Jesper Dangaard Brouer <brouer@redhat.com>,
	Peter Zijlstra <peterz@infradead.org>,
	Jann Horn <jannh@google.com>, Vlastimil Babka <vbabka@suse.cz>
Subject: [RFC 01/26] mm, slub: allocate private object map for sysfs listings
Date: Tue, 25 May 2021 01:39:21 +0200	[thread overview]
Message-ID: <20210524233946.20352-2-vbabka@suse.cz> (raw)
In-Reply-To: <20210524233946.20352-1-vbabka@suse.cz>

SLUB has a static, spinlock-protected bitmap for marking which objects are on a
freelist when it wants to list them, for situations where dynamically
allocating such a map can lead to recursion or locking issues, and an on-stack
bitmap would be too large.

The handlers of sysfs files alloc_calls and free_calls also currently use this
shared bitmap, but their syscall context makes it straightforward to allocate a
private map before entering locked sections, so switch these processing paths
to use a private bitmap.
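
To illustrate the shape of the change, here is a minimal userspace sketch of
the pattern (hypothetical, simplified code: fill_map(), list_allocated(), the
pthread mutex and the free_objects array are stand-ins for the kernel's
__fill_map(), list_locations(), the node's list_lock and a slab's freelist,
not real kernel API). The caller allocates its own bitmap before taking the
lock, fills it while the lock is held, and consumes and frees it afterwards,
so the locked section neither shares a global map nor allocates memory:

#include <limits.h>
#include <pthread.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_OBJS 64
#define BITS_PER_LONG (sizeof(unsigned long) * CHAR_BIT)
#define BITMAP_LONGS(n) (((n) + BITS_PER_LONG - 1) / BITS_PER_LONG)

static pthread_mutex_t list_lock = PTHREAD_MUTEX_INITIALIZER;
static int free_objects[MAX_OBJS];	/* stand-in for a slab's freelist */
static int nr_free;

/* Rough analogue of __fill_map(): mark free objects in the caller's map. */
static void fill_map(unsigned long *map, int nr_objs)
{
	memset(map, 0, BITMAP_LONGS(nr_objs) * sizeof(unsigned long));
	for (int i = 0; i < nr_free; i++) {
		int idx = free_objects[i];
		map[idx / BITS_PER_LONG] |= 1UL << (idx % BITS_PER_LONG);
	}
}

/* Rough analogue of list_locations(): private map allocated before locking. */
static void list_allocated(int nr_objs)
{
	unsigned long *map = calloc(BITMAP_LONGS(nr_objs), sizeof(*map));

	if (!map)
		return;

	pthread_mutex_lock(&list_lock);
	fill_map(map, nr_objs);		/* like process_slab() per page */
	pthread_mutex_unlock(&list_lock);

	/* Objects whose bit is clear are allocated (not on the freelist). */
	for (int i = 0; i < nr_objs; i++)
		if (!(map[i / BITS_PER_LONG] & (1UL << (i % BITS_PER_LONG))))
			printf("object %d is allocated\n", i);

	free(map);
}

int main(void)
{
	free_objects[nr_free++] = 3;
	free_objects[nr_free++] = 7;
	list_allocated(16);
	return 0;
}

In the actual patch below, list_locations() does the equivalent with
bitmap_alloc()/bitmap_free() and fills the private map under n->list_lock via
process_slab().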

Signed-off-by: Vlastimil Babka <vbabka@suse.cz>
---
 mm/slub.c | 43 +++++++++++++++++++++++++++++--------------
 1 file changed, 29 insertions(+), 14 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 3f96e099817a..4c876749f322 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -448,6 +448,18 @@ static inline bool cmpxchg_double_slab(struct kmem_cache *s, struct page *page,
 static unsigned long object_map[BITS_TO_LONGS(MAX_OBJS_PER_PAGE)];
 static DEFINE_SPINLOCK(object_map_lock);
 
+static void __fill_map(unsigned long *obj_map, struct kmem_cache *s,
+		       struct page *page)
+{
+	void *addr = page_address(page);
+	void *p;
+
+	bitmap_zero(obj_map, page->objects);
+
+	for (p = page->freelist; p; p = get_freepointer(s, p))
+		set_bit(__obj_to_index(s, addr, p), obj_map);
+}
+
 /*
  * Determine a map of object in use on a page.
  *
@@ -457,17 +469,11 @@ static DEFINE_SPINLOCK(object_map_lock);
 static unsigned long *get_map(struct kmem_cache *s, struct page *page)
 	__acquires(&object_map_lock)
 {
-	void *p;
-	void *addr = page_address(page);
-
 	VM_BUG_ON(!irqs_disabled());
 
 	spin_lock(&object_map_lock);
 
-	bitmap_zero(object_map, page->objects);
-
-	for (p = page->freelist; p; p = get_freepointer(s, p))
-		set_bit(__obj_to_index(s, addr, p), object_map);
+	__fill_map(object_map, s, page);
 
 	return object_map;
 }
@@ -4813,17 +4819,17 @@ static int add_location(struct loc_track *t, struct kmem_cache *s,
 }
 
 static void process_slab(struct loc_track *t, struct kmem_cache *s,
-		struct page *page, enum track_item alloc)
+		struct page *page, enum track_item alloc,
+		unsigned long *obj_map)
 {
 	void *addr = page_address(page);
 	void *p;
-	unsigned long *map;
 
-	map = get_map(s, page);
+	__fill_map(obj_map, s, page);
+
 	for_each_object(p, s, addr, page->objects)
-		if (!test_bit(__obj_to_index(s, addr, p), map))
+		if (!test_bit(__obj_to_index(s, addr, p), obj_map))
 			add_location(t, s, get_track(s, p, alloc));
-	put_map(map);
 }
 
 static int list_locations(struct kmem_cache *s, char *buf,
@@ -4834,11 +4840,18 @@ static int list_locations(struct kmem_cache *s, char *buf,
 	struct loc_track t = { 0, 0, NULL };
 	int node;
 	struct kmem_cache_node *n;
+	unsigned long *obj_map;
+
+	obj_map = bitmap_alloc(oo_objects(s->oo), GFP_KERNEL);
+	if (!obj_map)
+		return sysfs_emit(buf, "Out of memory\n");
 
 	if (!alloc_loc_track(&t, PAGE_SIZE / sizeof(struct location),
 			     GFP_KERNEL)) {
+		bitmap_free(obj_map);
 		return sysfs_emit(buf, "Out of memory\n");
 	}
+
 	/* Push back cpu slabs */
 	flush_all(s);
 
@@ -4851,12 +4864,14 @@ static int list_locations(struct kmem_cache *s, char *buf,
 
 		spin_lock_irqsave(&n->list_lock, flags);
 		list_for_each_entry(page, &n->partial, slab_list)
-			process_slab(&t, s, page, alloc);
+			process_slab(&t, s, page, alloc, obj_map);
 		list_for_each_entry(page, &n->full, slab_list)
-			process_slab(&t, s, page, alloc);
+			process_slab(&t, s, page, alloc, obj_map);
 		spin_unlock_irqrestore(&n->list_lock, flags);
 	}
 
+	bitmap_free(obj_map);
+
 	for (i = 0; i < t.count; i++) {
 		struct location *l = &t.loc[i];
 
-- 
2.31.1



Thread overview: 53+ messages
2021-05-24 23:39 [RFC 00/26] SLUB: use local_lock for kmem_cache_cpu protection and reduce disabling irqs Vlastimil Babka
2021-05-24 23:39 ` Vlastimil Babka [this message]
2021-05-25  8:06   ` [RFC 01/26] mm, slub: allocate private object map for sysfs listings Christoph Lameter
2021-05-25 10:13   ` Mel Gorman
2021-05-24 23:39 ` [RFC 02/26] mm, slub: allocate private object map for validate_slab_cache() Vlastimil Babka
2021-05-25  8:09   ` Christoph Lameter
2021-05-25 10:17   ` Mel Gorman
2021-05-25 10:36     ` Vlastimil Babka
2021-05-25 11:33       ` Mel Gorman
2021-06-08 10:37         ` Vlastimil Babka
2021-05-24 23:39 ` [RFC 03/26] mm, slub: don't disable irq for debug_check_no_locks_freed() Vlastimil Babka
2021-05-25 10:24   ` Mel Gorman
2021-05-24 23:39 ` [RFC 04/26] mm, slub: simplify kmem_cache_cpu and tid setup Vlastimil Babka
2021-05-25 11:47   ` Mel Gorman
2021-05-24 23:39 ` [RFC 05/26] mm, slub: extract get_partial() from new_slab_objects() Vlastimil Babka
2021-05-25  9:03   ` Christoph Lameter
2021-05-25 11:54   ` Mel Gorman
2021-05-24 23:39 ` [RFC 06/26] mm, slub: dissolve new_slab_objects() into ___slab_alloc() Vlastimil Babka
2021-05-25  9:06   ` Christoph Lameter
2021-05-25 11:59   ` Mel Gorman
2021-05-24 23:39 ` [RFC 07/26] mm, slub: return slab page from get_partial() and set c->page afterwards Vlastimil Babka
2021-05-25  9:12   ` Christoph Lameter
2021-06-08 10:48     ` Vlastimil Babka
2021-05-24 23:39 ` [RFC 08/26] mm, slub: restructure new page checks in ___slab_alloc() Vlastimil Babka
2021-05-25 12:09   ` Mel Gorman
2021-05-24 23:39 ` [RFC 09/26] mm, slub: move disabling/enabling irqs to ___slab_alloc() Vlastimil Babka
2021-05-25 12:35   ` Mel Gorman
2021-05-25 12:47     ` Vlastimil Babka
2021-05-25 15:10       ` Mel Gorman
2021-05-25 17:24       ` Vlastimil Babka
2021-05-24 23:39 ` [RFC 10/26] mm, slub: do initial checks in ___slab_alloc() with irqs enabled Vlastimil Babka
2021-05-25 13:04   ` Mel Gorman
2021-06-08 12:13     ` Vlastimil Babka
2021-05-24 23:39 ` [RFC 11/26] mm, slub: move disabling irqs closer to get_partial() in ___slab_alloc() Vlastimil Babka
2021-05-25 16:00   ` Jann Horn
2021-05-24 23:39 ` [RFC 12/26] mm, slub: restore irqs around calling new_slab() Vlastimil Babka
2021-05-24 23:39 ` [RFC 13/26] mm, slub: validate partial and newly allocated slabs before loading them Vlastimil Babka
2021-05-24 23:39 ` [RFC 14/26] mm, slub: check new pages with restored irqs Vlastimil Babka
2021-05-24 23:39 ` [RFC 15/26] mm, slub: stop disabling irqs around get_partial() Vlastimil Babka
2021-05-24 23:39 ` [RFC 16/26] mm, slub: move reset of c->page and freelist out of deactivate_slab() Vlastimil Babka
2021-05-24 23:39 ` [RFC 17/26] mm, slub: make locking in deactivate_slab() irq-safe Vlastimil Babka
2021-05-24 23:39 ` [RFC 18/26] mm, slub: call deactivate_slab() without disabling irqs Vlastimil Babka
2021-05-24 23:39 ` [RFC 19/26] mm, slub: move irq control into unfreeze_partials() Vlastimil Babka
2021-05-24 23:39 ` [RFC 20/26] mm, slub: discard slabs in unfreeze_partials() without irqs disabled Vlastimil Babka
2021-05-24 23:39 ` [RFC 21/26] mm, slub: detach whole partial list at once in unfreeze_partials() Vlastimil Babka
2021-05-24 23:39 ` [RFC 22/26] mm, slub: detach percpu partial list in unfreeze_partials() using this_cpu_cmpxchg() Vlastimil Babka
2021-05-24 23:39 ` [RFC 23/26] mm, slub: only disable irq with spin_lock in __unfreeze_partials() Vlastimil Babka
2021-05-24 23:39 ` [RFC 24/26] mm, slub: don't disable irqs in slub_cpu_dead() Vlastimil Babka
2021-05-24 23:39 ` [RFC 25/26] mm, slub: use migrate_disable() in put_cpu_partial() Vlastimil Babka
2021-05-25 15:33   ` Jann Horn
2021-06-09  8:41     ` Vlastimil Babka
2021-05-24 23:39 ` [RFC 26/26] mm, slub: convert kmem_cpu_slab protection to local_lock Vlastimil Babka
2021-05-25 16:11   ` Vlastimil Babka
