linux-mm.kvack.org archive mirror
* [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block
       [not found] <20201211011907.GA16110@paulmck-ThinkPad-P72>
@ 2020-12-11  1:19 ` paulmck
  2020-12-11  2:22   ` Joonsoo Kim
  2020-12-11  1:19 ` [PATCH v3 sl-b 2/6] mm: Make mem_dump_obj() handle NULL and zero-sized pointers paulmck
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: paulmck @ 2020-12-11  1:19 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, iamjoonsoo.kim, andrii,
	Paul E. McKenney, Christoph Lameter, Pekka Enberg,
	David Rientjes, linux-mm

From: "Paul E. McKenney" <paulmck@kernel.org>

There are kernel facilities such as per-CPU reference counts that emit
error messages from generic handlers or callbacks, which makes those
messages unenlightening.  In the case of per-CPU reference-count
underflow, this
is not a problem when creating a new use of this facility because in that
case the bug is almost certainly in the code implementing that new use.
However, trouble arises when deploying across many systems, which might
exercise corner cases that were not seen during development and testing.
Here, it would be really nice to get some kind of hint as to which of
several uses caused the underflow.

This commit therefore exposes a mem_dump_obj() function that takes
a pointer to memory (which must still be allocated if it has been
dynamically allocated) and prints available information on where that
memory came from.  This pointer can reference the middle of the block as
well as the beginning of the block, as needed by things like RCU callback
functions and timer handlers that might not know where the beginning of
the memory block is.  These functions and handlers can use mem_dump_obj()
to print out better hints as to where the problem might lie.
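
For example, an RCU callback function might use it roughly as follows
(illustrative sketch only, not part of this patch; struct foo and
foo_rcu_cb() are hypothetical):

	struct foo {
		int data;
		struct rcu_head rh;
	};

	static void foo_rcu_cb(struct rcu_head *rhp)
	{
		struct foo *fp = container_of(rhp, struct foo, rh);

		if (WARN_ON_ONCE(fp->data < 0)) {
			/* Preamble lacks newline: mem_dump_obj() uses pr_cont(). */
			pr_err("foo_rcu_cb(): Bad foo!");
			mem_dump_obj(rhp); /* rhp points into the middle of *fp. */
			return;
		}
		kfree(fp);
	}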

The information printed can depend on kernel configuration.  For example,
the allocation return address can be printed only for slab and slub,
and even then only when the necessary debug has been enabled.  For slab,
build with CONFIG_DEBUG_SLAB=y, and either use sizes with ample space
to the next power of two or use the SLAB_STORE_USER flag when creating the
kmem_cache structure.  For slub, build with CONFIG_SLUB_DEBUG=y and
boot with slub_debug=U, or pass SLAB_STORE_USER to kmem_cache_create()
if more focused use is desired.  Also for slub, use CONFIG_STACKTRACE
to enable printing of the allocation-time stack trace.
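
For example, a hypothetical kmem_cache that always records the
allocation return address might be created as follows (sketch only;
the cache name and struct foo are illustrative):

	static struct kmem_cache *foo_cache;

	static int __init foo_cache_init(void)
	{
		/* SLAB_STORE_USER makes kmem_dump_obj() report the allocator. */
		foo_cache = kmem_cache_create("foo", sizeof(struct foo), 0,
					      SLAB_STORE_USER, NULL);
		return foo_cache ? 0 : -ENOMEM;
	}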

Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <linux-mm@kvack.org>
Reported-by: Andrii Nakryiko <andrii@kernel.org>
[ paulmck: Convert to printing and change names per Joonsoo Kim. ]
[ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ]
[ paulmck: Handle CONFIG_MMU=n case where vmalloc() is kmalloc(). ]
[ paulmck: Apply Vlastimil Babka feedback on slab.c kmem_provenance(). ]
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 include/linux/mm.h   |  2 ++
 include/linux/slab.h |  2 ++
 mm/slab.c            | 20 ++++++++++++++
 mm/slab.h            | 12 +++++++++
 mm/slab_common.c     | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++++
 mm/slob.c            |  6 +++++
 mm/slub.c            | 36 +++++++++++++++++++++++++
 mm/util.c            | 24 +++++++++++++++++
 8 files changed, 176 insertions(+)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ef360fe..1eea266 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -3153,5 +3153,7 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,
 
 extern int sysctl_nr_trim_pages;
 
+void mem_dump_obj(void *object);
+
 #endif /* __KERNEL__ */
 #endif /* _LINUX_MM_H */
diff --git a/include/linux/slab.h b/include/linux/slab.h
index dd6897f..169b511 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -186,6 +186,8 @@ void kfree(const void *);
 void kfree_sensitive(const void *);
 size_t __ksize(const void *);
 size_t ksize(const void *);
+bool kmem_valid_obj(void *object);
+void kmem_dump_obj(void *object);
 
 #ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
 void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
diff --git a/mm/slab.c b/mm/slab.c
index b111356..66f00ad 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -3633,6 +3633,26 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
 EXPORT_SYMBOL(__kmalloc_node_track_caller);
 #endif /* CONFIG_NUMA */
 
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
+{
+	struct kmem_cache *cachep;
+	unsigned int objnr;
+	void *objp;
+
+	kpp->kp_ptr = object;
+	kpp->kp_page = page;
+	cachep = page->slab_cache;
+	kpp->kp_slab_cache = cachep;
+	objp = object - obj_offset(cachep);
+	kpp->kp_data_offset = obj_offset(cachep);
+	page = virt_to_head_page(objp);
+	objnr = obj_to_index(cachep, page, objp);
+	objp = index_to_obj(cachep, page, objnr);
+	kpp->kp_objp = objp;
+	if (DEBUG && cachep->flags & SLAB_STORE_USER)
+		kpp->kp_ret = *dbg_userword(cachep, objp);
+}
+
 /**
  * __do_kmalloc - allocate memory
  * @size: how many bytes of memory are required.
diff --git a/mm/slab.h b/mm/slab.h
index 6d7c6a5..0dc705b 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -630,4 +630,16 @@ static inline bool slab_want_init_on_free(struct kmem_cache *c)
 	return false;
 }
 
+#define KS_ADDRS_COUNT 16
+struct kmem_obj_info {
+	void *kp_ptr;
+	struct page *kp_page;
+	void *kp_objp;
+	unsigned long kp_data_offset;
+	struct kmem_cache *kp_slab_cache;
+	void *kp_ret;
+	void *kp_stack[KS_ADDRS_COUNT];
+};
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page);
+
 #endif /* MM_SLAB_H */
diff --git a/mm/slab_common.c b/mm/slab_common.c
index f9ccd5d..df2e203 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -536,6 +536,80 @@ bool slab_is_available(void)
 	return slab_state >= UP;
 }
 
+/**
+ * kmem_valid_obj - does the pointer reference a valid slab object?
+ * @object: pointer to query.
+ *
+ * Return: %true if the pointer is to a not-yet-freed object from
+ * kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
+ * is to an already-freed object, and %false otherwise.
+ */
+bool kmem_valid_obj(void *object)
+{
+	struct page *page;
+
+	if (!virt_addr_valid(object))
+		return false;
+	page = virt_to_head_page(object);
+	return PageSlab(page);
+}
+
+/**
+ * kmem_dump_obj - Print available slab provenance information
+ * @object: slab object for which to find provenance information.
+ *
+ * This function uses pr_cont(), so the caller is expected to have
+ * printed out whatever preamble is appropriate.  The provenance information
+ * depends on the type of object and on how much debugging is enabled.
+ * For a slab-cache object, the fact that it is a slab object is printed,
+ * and, if available, the slab name, return address, and stack trace from
+ * the allocation of that object.
+ *
+ * This function will splat if passed a pointer to a non-slab object.
+ * If you are not sure what type of object you have, you should instead
+ * use mem_dump_obj().
+ */
+void kmem_dump_obj(void *object)
+{
+	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
+	int i;
+	struct page *page;
+	unsigned long ptroffset;
+	struct kmem_obj_info kp = { };
+
+	if (WARN_ON_ONCE(!virt_addr_valid(object)))
+		return;
+	page = virt_to_head_page(object);
+	if (WARN_ON_ONCE(!PageSlab(page))) {
+		pr_cont(" non-slab memory.\n");
+		return;
+	}
+	kmem_obj_info(&kp, object, page);
+	if (kp.kp_slab_cache)
+		pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name);
+	else
+		pr_cont(" slab%s", cp);
+	if (kp.kp_objp)
+		pr_cont(" start %px", kp.kp_objp);
+	if (kp.kp_data_offset)
+		pr_cont(" data offset %lu", kp.kp_data_offset);
+	if (kp.kp_objp) {
+		ptroffset = ((char *)object - (char *)kp.kp_objp) - kp.kp_data_offset;
+		pr_cont(" pointer offset %lu", ptroffset);
+	}
+	if (kp.kp_slab_cache && kp.kp_slab_cache->usersize)
+		pr_cont(" size %u", kp.kp_slab_cache->usersize);
+	if (kp.kp_ret)
+		pr_cont(" allocated at %pS\n", kp.kp_ret);
+	else
+		pr_cont("\n");
+	for (i = 0; i < ARRAY_SIZE(kp.kp_stack); i++) {
+		if (!kp.kp_stack[i])
+			break;
+		pr_info("    %pS\n", kp.kp_stack[i]);
+	}
+}
+
 #ifndef CONFIG_SLOB
 /* Create a cache during boot when no slab services are available yet */
 void __init create_boot_cache(struct kmem_cache *s, const char *name,
diff --git a/mm/slob.c b/mm/slob.c
index 7cc9805..2ed1de2 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -461,6 +461,12 @@ static void slob_free(void *block, int size)
 	spin_unlock_irqrestore(&slob_lock, flags);
 }
 
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
+{
+	kpp->kp_ptr = object;
+	kpp->kp_page = page;
+}
+
 /*
  * End of slob allocator proper. Begin kmem_cache_alloc and kmalloc frontend.
  */
diff --git a/mm/slub.c b/mm/slub.c
index b30be23..0459d2a 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3918,6 +3918,42 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 	return 0;
 }
 
+void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
+{
+#ifdef CONFIG_SLUB_DEBUG
+	void *base;
+	int i;
+	unsigned int objnr;
+	void *objp;
+	void *objp0;
+	struct kmem_cache *s = page->slab_cache;
+	struct track *trackp;
+
+	kpp->kp_ptr = object;
+	kpp->kp_page = page;
+	kpp->kp_slab_cache = s;
+	base = page_address(page);
+	objp0 = kasan_reset_tag(object);
+	objp = restore_red_left(s, objp0);
+	objnr = obj_to_index(s, page, objp);
+	kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp);
+	objp = base + s->size * objnr;
+	kpp->kp_objp = objp;
+	if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
+	    !(s->flags & SLAB_STORE_USER))
+		return;
+	trackp = get_track(s, objp, TRACK_ALLOC);
+	kpp->kp_ret = (void *)trackp->addr;
+#ifdef CONFIG_STACKTRACE
+	for (i = 0; i < KS_ADDRS_COUNT && i < TRACK_ADDRS_COUNT; i++) {
+		kpp->kp_stack[i] = (void *)trackp->addrs[i];
+		if (!kpp->kp_stack[i])
+			break;
+	}
+#endif
+#endif
+}
+
 /********************************************************************
  *		Kmalloc subsystem
  *******************************************************************/
diff --git a/mm/util.c b/mm/util.c
index 4ddb6e1..f2e0c4d9 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -970,3 +970,27 @@ int __weak memcmp_pages(struct page *page1, struct page *page2)
 	kunmap_atomic(addr1);
 	return ret;
 }
+
+/**
+ * mem_dump_obj - Print available provenance information
+ * @object: object for which to find provenance information.
+ *
+ * This function uses pr_cont(), so the caller is expected to have
+ * printed out whatever preamble is appropriate.  The provenance information
+ * depends on the type of object and on how much debugging is enabled.
+ * For example, for a slab-cache object, the slab name is printed, and,
+ * if available, the return address and stack trace from the allocation
+ * of that object.
+ */
+void mem_dump_obj(void *object)
+{
+	if (!virt_addr_valid(object)) {
+		pr_cont(" non-paged (local) memory.\n");
+		return;
+	}
+	if (kmem_valid_obj(object)) {
+		kmem_dump_obj(object);
+		return;
+	}
+	pr_cont(" non-slab memory.\n");
+}
-- 
2.9.5




* [PATCH v3 sl-b 2/6] mm: Make mem_dump_obj() handle NULL and zero-sized pointers
       [not found] <20201211011907.GA16110@paulmck-ThinkPad-P72>
  2020-12-11  1:19 ` [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block paulmck
@ 2020-12-11  1:19 ` paulmck
  2020-12-11  1:20 ` [PATCH v3 sl-b 3/6] mm: Make mem_dump_obj() handle vmalloc() memory paulmck
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: paulmck @ 2020-12-11  1:19 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, iamjoonsoo.kim, andrii,
	Paul E. McKenney, Christoph Lameter, Pekka Enberg,
	David Rientjes, linux-mm

From: "Paul E. McKenney" <paulmck@kernel.org>

This commit makes mem_dump_obj() call out NULL and zero-sized pointers
specially instead of classifying them as non-paged memory.
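
For example, hypothetical debug calls would now produce the following
(kmalloc(0) returns the special ZERO_SIZE_PTR value):

	pr_err("NULL case:");
	mem_dump_obj(NULL);			/* " NULL pointer." */
	pr_err("Zero-size case:");
	mem_dump_obj(kmalloc(0, GFP_KERNEL));	/* " zero-size pointer." */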

Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <linux-mm@kvack.org>
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 mm/util.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/mm/util.c b/mm/util.c
index f2e0c4d9..f7c94c8 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -985,7 +985,12 @@ int __weak memcmp_pages(struct page *page1, struct page *page2)
 void mem_dump_obj(void *object)
 {
 	if (!virt_addr_valid(object)) {
-		pr_cont(" non-paged (local) memory.\n");
+		if (object == NULL)
+			pr_cont(" NULL pointer.\n");
+		else if (object == ZERO_SIZE_PTR)
+			pr_cont(" zero-size pointer.\n");
+		else
+			pr_cont(" non-paged (local) memory.\n");
 		return;
 	}
 	if (kmem_valid_obj(object)) {
-- 
2.9.5




* [PATCH v3 sl-b 3/6] mm: Make mem_dump_obj() handle vmalloc() memory
       [not found] <20201211011907.GA16110@paulmck-ThinkPad-P72>
  2020-12-11  1:19 ` [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block paulmck
  2020-12-11  1:19 ` [PATCH v3 sl-b 2/6] mm: Make mem_dump_obj() handle NULL and zero-sized pointers paulmck
@ 2020-12-11  1:20 ` paulmck
  2020-12-11  1:20 ` [PATCH v3 sl-b 4/6] mm: Make mem_dump_obj() vmalloc() dumps include start and length paulmck
  2020-12-11  1:20 ` [PATCH v3 sl-b 5/6] rcu: Make call_rcu() print mem_dump_obj() info for double-freed callback paulmck
  4 siblings, 0 replies; 11+ messages in thread
From: paulmck @ 2020-12-11  1:20 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, iamjoonsoo.kim, andrii,
	Paul E. McKenney, linux-mm

From: "Paul E. McKenney" <paulmck@kernel.org>

This commit adds vmalloc() support to mem_dump_obj().  Note that the
vmalloc_dump_obj() function combines the checking and dumping, in
contrast with the split between kmem_valid_obj() and kmem_dump_obj().
The reason for the difference is that the checking in the vmalloc()
case involves acquiring a global lock, and redundant acquisitions of
global locks should be avoided, even on not-so-fast paths.

Note that this change causes on-stack variables to be reported as
vmalloc() storage from kernel_clone() or similar because, with
CONFIG_VMAP_STACK=y, kernel stacks are themselves allocated from the
vmalloc() region.  The exact caller reported depends on the degree of
inlining that your compiler does.  This is likely more helpful than
the earlier "non-paged (local) memory".

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: <linux-mm@kvack.org>
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 include/linux/vmalloc.h |  6 ++++++
 mm/util.c               | 14 ++++++++------
 mm/vmalloc.c            | 12 ++++++++++++
 3 files changed, 26 insertions(+), 6 deletions(-)

diff --git a/include/linux/vmalloc.h b/include/linux/vmalloc.h
index 938eaf9..c89c2be 100644
--- a/include/linux/vmalloc.h
+++ b/include/linux/vmalloc.h
@@ -248,4 +248,10 @@ pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 int register_vmap_purge_notifier(struct notifier_block *nb);
 int unregister_vmap_purge_notifier(struct notifier_block *nb);
 
+#ifdef CONFIG_MMU
+bool vmalloc_dump_obj(void *object);
+#else
+static inline bool vmalloc_dump_obj(void *object) { return false; }
+#endif
+
 #endif /* _LINUX_VMALLOC_H */
diff --git a/mm/util.c b/mm/util.c
index f7c94c8..dcde696 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -984,18 +984,20 @@ int __weak memcmp_pages(struct page *page1, struct page *page2)
  */
 void mem_dump_obj(void *object)
 {
+	if (kmem_valid_obj(object)) {
+		kmem_dump_obj(object);
+		return;
+	}
+	if (vmalloc_dump_obj(object))
+		return;
 	if (!virt_addr_valid(object)) {
 		if (object == NULL)
 			pr_cont(" NULL pointer.\n");
 		else if (object == ZERO_SIZE_PTR)
 			pr_cont(" zero-size pointer.\n");
 		else
-			pr_cont(" non-paged (local) memory.\n");
-		return;
-	}
-	if (kmem_valid_obj(object)) {
-		kmem_dump_obj(object);
+			pr_cont(" non-paged memory.\n");
 		return;
 	}
-	pr_cont(" non-slab memory.\n");
+	pr_cont(" non-slab/vmalloc memory.\n");
 }
diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 6ae491a..7421719 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3431,6 +3431,18 @@ void pcpu_free_vm_areas(struct vm_struct **vms, int nr_vms)
 }
 #endif	/* CONFIG_SMP */
 
+bool vmalloc_dump_obj(void *object)
+{
+	struct vm_struct *vm;
+	void *objp = (void *)PAGE_ALIGN((unsigned long)object);
+
+	vm = find_vm_area(objp);
+	if (!vm)
+		return false;
+	pr_cont(" vmalloc allocated at %pS\n", vm->caller);
+	return true;
+}
+
 #ifdef CONFIG_PROC_FS
 static void *s_start(struct seq_file *m, loff_t *pos)
 	__acquires(&vmap_purge_lock)
-- 
2.9.5




* [PATCH v3 sl-b 4/6] mm: Make mem_dump_obj() vmalloc() dumps include start and length
       [not found] <20201211011907.GA16110@paulmck-ThinkPad-P72>
                   ` (2 preceding siblings ...)
  2020-12-11  1:20 ` [PATCH v3 sl-b 3/6] mm: Make mem_dump_obj() handle vmalloc() memory paulmck
@ 2020-12-11  1:20 ` paulmck
  2020-12-11  1:20 ` [PATCH v3 sl-b 5/6] rcu: Make call_rcu() print mem_dump_obj() info for double-freed callback paulmck
  4 siblings, 0 replies; 11+ messages in thread
From: paulmck @ 2020-12-11  1:20 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, iamjoonsoo.kim, andrii,
	Paul E. McKenney, linux-mm

From: "Paul E. McKenney" <paulmck@kernel.org>

This commit adds the starting address and number of pages to the vmalloc()
information dumped by way of vmalloc_dump_obj().
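
With hypothetical values, the resulting line now resembles:

	4-page vmalloc region starting at 0xffffc90000003000 allocated at kernel_clone+0x4a/0x3a0

(The address and caller shown here are illustrative only.)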

Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: <linux-mm@kvack.org>
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Suggested-by: Vlastimil Babka <vbabka@suse.cz>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 mm/vmalloc.c | 3 ++-
 1 file changed, 2 insertions(+), 1 deletion(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index 7421719..77b1100 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -3439,7 +3439,8 @@ bool vmalloc_dump_obj(void *object)
 	vm = find_vm_area(objp);
 	if (!vm)
 		return false;
-	pr_cont(" vmalloc allocated at %pS\n", vm->caller);
+	pr_cont(" %u-page vmalloc region starting at %#lx allocated at %pS\n",
+		vm->nr_pages, (unsigned long)vm->addr, vm->caller);
 	return true;
 }
 
-- 
2.9.5




* [PATCH v3 sl-b 5/6] rcu: Make call_rcu() print mem_dump_obj() info for double-freed callback
       [not found] <20201211011907.GA16110@paulmck-ThinkPad-P72>
                   ` (3 preceding siblings ...)
  2020-12-11  1:20 ` [PATCH v3 sl-b 4/6] mm: Make mem_dump_obj() vmalloc() dumps include start and length paulmck
@ 2020-12-11  1:20 ` paulmck
  4 siblings, 0 replies; 11+ messages in thread
From: paulmck @ 2020-12-11  1:20 UTC (permalink / raw)
  To: rcu
  Cc: linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, iamjoonsoo.kim, andrii,
	Paul E. McKenney, Christoph Lameter, Pekka Enberg,
	David Rientjes, linux-mm

From: "Paul E. McKenney" <paulmck@kernel.org>

The debug-object double-free checks in __call_rcu() print out the
RCU callback function, which is usually sufficient to track down the
double free.  However, all uses of things like queue_rcu_work() will
have the same RCU callback function (rcu_work_rcufn() in this case),
so a diagnostic message for a double queue_rcu_work() needs more than
just the callback function.

This commit therefore calls mem_dump_obj() to dump out any additional
available information on the double-freed callback.
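
For illustration, the underlying bug pattern is a double pass of the
same rcu_head to call_rcu(), whether directly or by way of wrappers
such as queue_rcu_work() (hypothetical sketch; requires
CONFIG_DEBUG_OBJECTS_RCU_HEAD=y for the splat):

	struct foo {
		int data;
		struct rcu_head rh;
	};

	static void foo_cb(struct rcu_head *rhp)
	{
		kfree(container_of(rhp, struct foo, rh));
	}

	static void buggy_free(struct foo *fp)
	{
		call_rcu(&fp->rh, foo_cb);
		call_rcu(&fp->rh, foo_cb); /* Bug: same rcu_head passed twice. */
	}

The mem_dump_obj() output then identifies the slab cache or vmalloc
region containing fp, which distinguishes among users sharing a single
callback function.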

Cc: Christoph Lameter <cl@linux.com>
Cc: Pekka Enberg <penberg@kernel.org>
Cc: David Rientjes <rientjes@google.com>
Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: <linux-mm@kvack.org>
Reported-by: Andrii Nakryiko <andrii@kernel.org>
Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
---
 kernel/rcu/tree.c | 7 +++++--
 1 file changed, 5 insertions(+), 2 deletions(-)

diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index b408dca..80ceee5 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2959,6 +2959,7 @@ static void check_cb_ovld(struct rcu_data *rdp)
 static void
 __call_rcu(struct rcu_head *head, rcu_callback_t func)
 {
+	static atomic_t doublefrees;
 	unsigned long flags;
 	struct rcu_data *rdp;
 	bool was_alldone;
@@ -2972,8 +2973,10 @@ __call_rcu(struct rcu_head *head, rcu_callback_t func)
 		 * Use rcu:rcu_callback trace event to find the previous
 		 * time callback was passed to __call_rcu().
 		 */
-		WARN_ONCE(1, "__call_rcu(): Double-freed CB %p->%pS()!!!\n",
-			  head, head->func);
+		if (atomic_inc_return(&doublefrees) < 4) {
+			pr_err("%s(): Double-freed CB %p->%pS()!!!  ", __func__, head, head->func);
+			mem_dump_obj(head);
+		}
 		WRITE_ONCE(head->func, rcu_leak_callback);
 		return;
 	}
-- 
2.9.5




* Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block
  2020-12-11  1:19 ` [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block paulmck
@ 2020-12-11  2:22   ` Joonsoo Kim
  2020-12-11  3:33     ` Paul E. McKenney
  0 siblings, 1 reply; 11+ messages in thread
From: Joonsoo Kim @ 2020-12-11  2:22 UTC (permalink / raw)
  To: paulmck
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, andrii, Christoph Lameter,
	Pekka Enberg, David Rientjes, linux-mm

On Thu, Dec 10, 2020 at 05:19:58PM -0800, paulmck@kernel.org wrote:
> From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> There are kernel facilities such as per-CPU reference counts that give
> error messages in generic handlers or callbacks, whose messages are
> unenlightening.  In the case of per-CPU reference-count underflow, this
> is not a problem when creating a new use of this facility because in that
> case the bug is almost certainly in the code implementing that new use.
> However, trouble arises when deploying across many systems, which might
> exercise corner cases that were not seen during development and testing.
> Here, it would be really nice to get some kind of hint as to which of
> several uses the underflow was caused by.
> 
> This commit therefore exposes a mem_dump_obj() function that takes
> a pointer to memory (which must still be allocated if it has been
> dynamically allocated) and prints available information on where that
> memory came from.  This pointer can reference the middle of the block as
> well as the beginning of the block, as needed by things like RCU callback
> functions and timer handlers that might not know where the beginning of
> the memory block is.  These functions and handlers can use mem_dump_obj()
> to print out better hints as to where the problem might lie.
> 
> The information printed can depend on kernel configuration.  For example,
> the allocation return address can be printed only for slab and slub,
> and even then only when the necessary debug has been enabled.  For slab,
> build with CONFIG_DEBUG_SLAB=y, and either use sizes with ample space
> to the next power of two or use the SLAB_STORE_USER when creating the
> kmem_cache structure.  For slub, build with CONFIG_SLUB_DEBUG=y and
> boot with slub_debug=U, or pass SLAB_STORE_USER to kmem_cache_create()
> if more focused use is desired.  Also for slub, use CONFIG_STACKTRACE
> to enable printing of the allocation-time stack trace.
> 
> Cc: Christoph Lameter <cl@linux.com>
> Cc: Pekka Enberg <penberg@kernel.org>
> Cc: David Rientjes <rientjes@google.com>
> Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: <linux-mm@kvack.org>
> Reported-by: Andrii Nakryiko <andrii@kernel.org>
> [ paulmck: Convert to printing and change names per Joonsoo Kim. ]
> [ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ]
> [ paulmck: Handle CONFIG_MMU=n case where vmalloc() is kmalloc(). ]
> [ paulmck: Apply Vlastimil Babka feedback on slab.c kmem_provenance(). ]
> Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> ---
>  include/linux/mm.h   |  2 ++
>  include/linux/slab.h |  2 ++
>  mm/slab.c            | 20 ++++++++++++++
>  mm/slab.h            | 12 +++++++++
>  mm/slab_common.c     | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++++
>  mm/slob.c            |  6 +++++
>  mm/slub.c            | 36 +++++++++++++++++++++++++
>  mm/util.c            | 24 +++++++++++++++++
>  8 files changed, 176 insertions(+)
> 
> diff --git a/include/linux/mm.h b/include/linux/mm.h
> index ef360fe..1eea266 100644
> --- a/include/linux/mm.h
> +++ b/include/linux/mm.h
> @@ -3153,5 +3153,7 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,
>  
>  extern int sysctl_nr_trim_pages;
>  
> +void mem_dump_obj(void *object);
> +
>  #endif /* __KERNEL__ */
>  #endif /* _LINUX_MM_H */
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index dd6897f..169b511 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -186,6 +186,8 @@ void kfree(const void *);
>  void kfree_sensitive(const void *);
>  size_t __ksize(const void *);
>  size_t ksize(const void *);
> +bool kmem_valid_obj(void *object);
> +void kmem_dump_obj(void *object);
>  
>  #ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
>  void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
> diff --git a/mm/slab.c b/mm/slab.c
> index b111356..66f00ad 100644
> --- a/mm/slab.c
> +++ b/mm/slab.c
> @@ -3633,6 +3633,26 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
>  EXPORT_SYMBOL(__kmalloc_node_track_caller);
>  #endif /* CONFIG_NUMA */
>  
> +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
> +{
> +	struct kmem_cache *cachep;
> +	unsigned int objnr;
> +	void *objp;
> +
> +	kpp->kp_ptr = object;
> +	kpp->kp_page = page;
> +	cachep = page->slab_cache;
> +	kpp->kp_slab_cache = cachep;
> +	objp = object - obj_offset(cachep);
> +	kpp->kp_data_offset = obj_offset(cachep);
> +	page = virt_to_head_page(objp);
> +	objnr = obj_to_index(cachep, page, objp);
> +	objp = index_to_obj(cachep, page, objnr);
> +	kpp->kp_objp = objp;
> +	if (DEBUG && cachep->flags & SLAB_STORE_USER)
> +		kpp->kp_ret = *dbg_userword(cachep, objp);
> +}
> +
>  /**
>   * __do_kmalloc - allocate memory
>   * @size: how many bytes of memory are required.
> diff --git a/mm/slab.h b/mm/slab.h
> index 6d7c6a5..0dc705b 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -630,4 +630,16 @@ static inline bool slab_want_init_on_free(struct kmem_cache *c)
>  	return false;
>  }
>  
> +#define KS_ADDRS_COUNT 16
> +struct kmem_obj_info {
> +	void *kp_ptr;
> +	struct page *kp_page;
> +	void *kp_objp;
> +	unsigned long kp_data_offset;
> +	struct kmem_cache *kp_slab_cache;
> +	void *kp_ret;
> +	void *kp_stack[KS_ADDRS_COUNT];
> +};
> +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page);
> +
>  #endif /* MM_SLAB_H */
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index f9ccd5d..df2e203 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -536,6 +536,80 @@ bool slab_is_available(void)
>  	return slab_state >= UP;
>  }
>  
> +/**
> + * kmem_valid_obj - does the pointer reference a valid slab object?
> + * @object: pointer to query.
> + *
> + * Return: %true if the pointer is to a not-yet-freed object from
> + * kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
> + * is to an already-freed object, and %false otherwise.
> + */
> +bool kmem_valid_obj(void *object)
> +{
> +	struct page *page;
> +
> +	if (!virt_addr_valid(object))
> +		return false;
> +	page = virt_to_head_page(object);
> +	return PageSlab(page);
> +}
> +
> +/**
> + * kmem_dump_obj - Print available slab provenance information
> + * @object: slab object for which to find provenance information.
> + *
> + * This function uses pr_cont(), so that the caller is expected to have
> + * printed out whatever preamble is appropriate.  The provenance information
> + * depends on the type of object and on how much debugging is enabled.
> + * For a slab-cache object, the fact that it is a slab object is printed,
> + * and, if available, the slab name, return address, and stack trace from
> + * the allocation of that object.
> + *
> + * This function will splat if passed a pointer to a non-slab object.
> + * If you are not sure what type of object you have, you should instead
> + * use mem_dump_obj().
> + */
> +void kmem_dump_obj(void *object)
> +{
> +	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
> +	int i;
> +	struct page *page;
> +	unsigned long ptroffset;
> +	struct kmem_obj_info kp = { };
> +
> +	if (WARN_ON_ONCE(!virt_addr_valid(object)))
> +		return;
> +	page = virt_to_head_page(object);
> +	if (WARN_ON_ONCE(!PageSlab(page))) {
> +		pr_cont(" non-slab memory.\n");
> +		return;
> +	}
> +	kmem_obj_info(&kp, object, page);
> +	if (kp.kp_slab_cache)
> +		pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name);
> +	else
> +		pr_cont(" slab%s", cp);
> +	if (kp.kp_objp)
> +		pr_cont(" start %px", kp.kp_objp);
> +	if (kp.kp_data_offset)
> +		pr_cont(" data offset %lu", kp.kp_data_offset);

I haven't checked the code deeply, but kp_data_offset could be 0 in a
normal situation. Is it intentional not to print a message in this case?

[ . . . ]

> diff --git a/mm/slub.c b/mm/slub.c
> index b30be23..0459d2a 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -3918,6 +3918,42 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
>  	return 0;
>  }
>  
> +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
> +{
> +#ifdef CONFIG_SLUB_DEBUG

We can get some info even if CONFIG_SLUB_DEBUG isn't defined.
Please move that code out of the #ifdef.

Thanks.
 



* Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block
  2020-12-11  2:22   ` Joonsoo Kim
@ 2020-12-11  3:33     ` Paul E. McKenney
  2020-12-11  3:42       ` Paul E. McKenney
  2020-12-11  6:54       ` Joonsoo Kim
  0 siblings, 2 replies; 11+ messages in thread
From: Paul E. McKenney @ 2020-12-11  3:33 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, andrii, Christoph Lameter,
	Pekka Enberg, David Rientjes, linux-mm

On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> On Thu, Dec 10, 2020 at 05:19:58PM -0800, paulmck@kernel.org wrote:
> > From: "Paul E. McKenney" <paulmck@kernel.org>
> > 
> > There are kernel facilities such as per-CPU reference counts that give
> > error messages in generic handlers or callbacks, whose messages are
> > unenlightening.  In the case of per-CPU reference-count underflow, this
> > is not a problem when creating a new use of this facility because in that
> > case the bug is almost certainly in the code implementing that new use.
> > However, trouble arises when deploying across many systems, which might
> > exercise corner cases that were not seen during development and testing.
> > Here, it would be really nice to get some kind of hint as to which of
> > several uses the underflow was caused by.
> > 
> > This commit therefore exposes a mem_dump_obj() function that takes
> > a pointer to memory (which must still be allocated if it has been
> > dynamically allocated) and prints available information on where that
> > memory came from.  This pointer can reference the middle of the block as
> > well as the beginning of the block, as needed by things like RCU callback
> > functions and timer handlers that might not know where the beginning of
> > the memory block is.  These functions and handlers can use mem_dump_obj()
> > to print out better hints as to where the problem might lie.
> > 
> > The information printed can depend on kernel configuration.  For example,
> > the allocation return address can be printed only for slab and slub,
> > and even then only when the necessary debug has been enabled.  For slab,
> > build with CONFIG_DEBUG_SLAB=y, and either use sizes with ample space
> > to the next power of two or use the SLAB_STORE_USER when creating the
> > kmem_cache structure.  For slub, build with CONFIG_SLUB_DEBUG=y and
> > boot with slub_debug=U, or pass SLAB_STORE_USER to kmem_cache_create()
> > if more focused use is desired.  Also for slub, use CONFIG_STACKTRACE
> > to enable printing of the allocation-time stack trace.
> > 
> > Cc: Christoph Lameter <cl@linux.com>
> > Cc: Pekka Enberg <penberg@kernel.org>
> > Cc: David Rientjes <rientjes@google.com>
> > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > Cc: Andrew Morton <akpm@linux-foundation.org>
> > Cc: <linux-mm@kvack.org>
> > Reported-by: Andrii Nakryiko <andrii@kernel.org>
> > [ paulmck: Convert to printing and change names per Joonsoo Kim. ]
> > [ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ]
> > [ paulmck: Handle CONFIG_MMU=n case where vmalloc() is kmalloc(). ]
> > [ paulmck: Apply Vlastimil Babka feedback on slab.c kmem_provenance(). ]
> > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > ---
> >  include/linux/mm.h   |  2 ++
> >  include/linux/slab.h |  2 ++
> >  mm/slab.c            | 20 ++++++++++++++
> >  mm/slab.h            | 12 +++++++++
> >  mm/slab_common.c     | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> >  mm/slob.c            |  6 +++++
> >  mm/slub.c            | 36 +++++++++++++++++++++++++
> >  mm/util.c            | 24 +++++++++++++++++
> >  8 files changed, 176 insertions(+)
> > 
> > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > index ef360fe..1eea266 100644
> > --- a/include/linux/mm.h
> > +++ b/include/linux/mm.h
> > @@ -3153,5 +3153,7 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,
> >  
> >  extern int sysctl_nr_trim_pages;
> >  
> > +void mem_dump_obj(void *object);
> > +
> >  #endif /* __KERNEL__ */
> >  #endif /* _LINUX_MM_H */
> > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > index dd6897f..169b511 100644
> > --- a/include/linux/slab.h
> > +++ b/include/linux/slab.h
> > @@ -186,6 +186,8 @@ void kfree(const void *);
> >  void kfree_sensitive(const void *);
> >  size_t __ksize(const void *);
> >  size_t ksize(const void *);
> > +bool kmem_valid_obj(void *object);
> > +void kmem_dump_obj(void *object);
> >  
> >  #ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
> >  void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
> > diff --git a/mm/slab.c b/mm/slab.c
> > index b111356..66f00ad 100644
> > --- a/mm/slab.c
> > +++ b/mm/slab.c
> > @@ -3633,6 +3633,26 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
> >  EXPORT_SYMBOL(__kmalloc_node_track_caller);
> >  #endif /* CONFIG_NUMA */
> >  
> > +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
> > +{
> > +	struct kmem_cache *cachep;
> > +	unsigned int objnr;
> > +	void *objp;
> > +
> > +	kpp->kp_ptr = object;
> > +	kpp->kp_page = page;
> > +	cachep = page->slab_cache;
> > +	kpp->kp_slab_cache = cachep;
> > +	objp = object - obj_offset(cachep);
> > +	kpp->kp_data_offset = obj_offset(cachep);
> > +	page = virt_to_head_page(objp);
> > +	objnr = obj_to_index(cachep, page, objp);
> > +	objp = index_to_obj(cachep, page, objnr);
> > +	kpp->kp_objp = objp;
> > +	if (DEBUG && cachep->flags & SLAB_STORE_USER)
> > +		kpp->kp_ret = *dbg_userword(cachep, objp);
> > +}
> > +
> >  /**
> >   * __do_kmalloc - allocate memory
> >   * @size: how many bytes of memory are required.
> > diff --git a/mm/slab.h b/mm/slab.h
> > index 6d7c6a5..0dc705b 100644
> > --- a/mm/slab.h
> > +++ b/mm/slab.h
> > @@ -630,4 +630,16 @@ static inline bool slab_want_init_on_free(struct kmem_cache *c)
> >  	return false;
> >  }
> >  
> > +#define KS_ADDRS_COUNT 16
> > +struct kmem_obj_info {
> > +	void *kp_ptr;
> > +	struct page *kp_page;
> > +	void *kp_objp;
> > +	unsigned long kp_data_offset;
> > +	struct kmem_cache *kp_slab_cache;
> > +	void *kp_ret;
> > +	void *kp_stack[KS_ADDRS_COUNT];
> > +};
> > +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page);
> > +
> >  #endif /* MM_SLAB_H */
> > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > index f9ccd5d..df2e203 100644
> > --- a/mm/slab_common.c
> > +++ b/mm/slab_common.c
> > @@ -536,6 +536,80 @@ bool slab_is_available(void)
> >  	return slab_state >= UP;
> >  }
> >  
> > +/**
> > + * kmem_valid_obj - does the pointer reference a valid slab object?
> > + * @object: pointer to query.
> > + *
> > + * Return: %true if the pointer is to a not-yet-freed object from
> > + * kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
> > + * is to an already-freed object, and %false otherwise.
> > + */
> > +bool kmem_valid_obj(void *object)
> > +{
> > +	struct page *page;
> > +
> > +	if (!virt_addr_valid(object))
> > +		return false;
> > +	page = virt_to_head_page(object);
> > +	return PageSlab(page);
> > +}
> > +
> > +/**
> > + * kmem_dump_obj - Print available slab provenance information
> > + * @object: slab object for which to find provenance information.
> > + *
> > + * This function uses pr_cont(), so that the caller is expected to have
> > + * printed out whatever preamble is appropriate.  The provenance information
> > + * depends on the type of object and on how much debugging is enabled.
> > + * For a slab-cache object, the fact that it is a slab object is printed,
> > + * and, if available, the slab name, return address, and stack trace from
> > + * the allocation of that object.
> > + *
> > + * This function will splat if passed a pointer to a non-slab object.
> > + * If you are not sure what type of object you have, you should instead
> > + * use mem_dump_obj().
> > + */
> > +void kmem_dump_obj(void *object)
> > +{
> > +	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
> > +	int i;
> > +	struct page *page;
> > +	unsigned long ptroffset;
> > +	struct kmem_obj_info kp = { };
> > +
> > +	if (WARN_ON_ONCE(!virt_addr_valid(object)))
> > +		return;
> > +	page = virt_to_head_page(object);
> > +	if (WARN_ON_ONCE(!PageSlab(page))) {
> > +		pr_cont(" non-slab memory.\n");
> > +		return;
> > +	}
> > +	kmem_obj_info(&kp, object, page);
> > +	if (kp.kp_slab_cache)
> > +		pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name);
> > +	else
> > +		pr_cont(" slab%s", cp);
> > +	if (kp.kp_objp)
> > +		pr_cont(" start %px", kp.kp_objp);
> > +	if (kp.kp_data_offset)
> > +		pr_cont(" data offset %lu", kp.kp_data_offset);
> 
> I haven't checked the code deeply, but kp_data_offset could be 0 in a
> normal situation. Is it intentional not to print a message in this case?

Yes, so that it tells you the offset only if it is non-zero, which as
you say happens only if certain debugging options are enabled.  Easy to
print it unconditionally if that is preferred!

[ . . . ]

> > diff --git a/mm/slub.c b/mm/slub.c
> > index b30be23..0459d2a 100644
> > --- a/mm/slub.c
> > +++ b/mm/slub.c
> > @@ -3918,6 +3918,42 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
> >  	return 0;
> >  }
> >  
> > +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
> > +{
> > +#ifdef CONFIG_SLUB_DEBUG
> 
> We can get some info even if CONFIG_SLUB_DEBUG isn't defined.
> Please move that code out of the #ifdef.

I guess since I worry about CONFIG_MMU=n it only makes sense to also
worry about CONFIG_SLUB_DEBUG=n.  Fix update.

							Thanx, Paul



* Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block
  2020-12-11  3:33     ` Paul E. McKenney
@ 2020-12-11  3:42       ` Paul E. McKenney
  2020-12-11  6:58         ` Joonsoo Kim
  2020-12-11  6:54       ` Joonsoo Kim
  1 sibling, 1 reply; 11+ messages in thread
From: Paul E. McKenney @ 2020-12-11  3:42 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, andrii, Christoph Lameter,
	Pekka Enberg, David Rientjes, linux-mm

On Thu, Dec 10, 2020 at 07:33:59PM -0800, Paul E. McKenney wrote:
> On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> > On Thu, Dec 10, 2020 at 05:19:58PM -0800, paulmck@kernel.org wrote:
> > > From: "Paul E. McKenney" <paulmck@kernel.org>

[ . . . ]

> > We can get some info even if CONFIG_SLUB_DEBUG isn't defined.
> > Please move that code out of the #ifdef.
> 
> I guess since I worry about CONFIG_MMU=n it only makes sense to also
> worry about CONFIG_SLUB_DEBUG=n.  Fix update.

Like this?  (Patch on top of the series, to be folded into the first one.)

							Thanx, Paul

------------------------------------------------------------------------

diff --git a/mm/slub.c b/mm/slub.c
index 0459d2a..abf43f0 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -3920,21 +3920,24 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
 
 void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
 {
-#ifdef CONFIG_SLUB_DEBUG
 	void *base;
-	int i;
+	int __maybe_unused i;
 	unsigned int objnr;
 	void *objp;
 	void *objp0;
 	struct kmem_cache *s = page->slab_cache;
-	struct track *trackp;
+	struct track __maybe_unused *trackp;
 
 	kpp->kp_ptr = object;
 	kpp->kp_page = page;
 	kpp->kp_slab_cache = s;
 	base = page_address(page);
 	objp0 = kasan_reset_tag(object);
+#ifdef CONFIG_SLUB_DEBUG
 	objp = restore_red_left(s, objp0);
+#else
+	objp = objp0;
+#endif
 	objnr = obj_to_index(s, page, objp);
 	kpp->kp_data_offset = (unsigned long)((char *)objp0 - (char *)objp);
 	objp = base + s->size * objnr;
@@ -3942,6 +3945,7 @@ void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
 	if (WARN_ON_ONCE(objp < base || objp >= base + page->objects * s->size || (objp - base) % s->size) ||
 	    !(s->flags & SLAB_STORE_USER))
 		return;
+#ifdef CONFIG_SLUB_DEBUG
 	trackp = get_track(s, objp, TRACK_ALLOC);
 	kpp->kp_ret = (void *)trackp->addr;
 #ifdef CONFIG_STACKTRACE



* Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block
  2020-12-11  3:33     ` Paul E. McKenney
  2020-12-11  3:42       ` Paul E. McKenney
@ 2020-12-11  6:54       ` Joonsoo Kim
  1 sibling, 0 replies; 11+ messages in thread
From: Joonsoo Kim @ 2020-12-11  6:54 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, andrii, Christoph Lameter,
	Pekka Enberg, David Rientjes, linux-mm

On Thu, Dec 10, 2020 at 07:33:59PM -0800, Paul E. McKenney wrote:
> On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> > On Thu, Dec 10, 2020 at 05:19:58PM -0800, paulmck@kernel.org wrote:
> > > From: "Paul E. McKenney" <paulmck@kernel.org>
> > > 
> > > There are kernel facilities such as per-CPU reference counts that give
> > > error messages in generic handlers or callbacks, whose messages are
> > > unenlightening.  In the case of per-CPU reference-count underflow, this
> > > is not a problem when creating a new use of this facility because in that
> > > case the bug is almost certainly in the code implementing that new use.
> > > However, trouble arises when deploying across many systems, which might
> > > exercise corner cases that were not seen during development and testing.
> > > Here, it would be really nice to get some kind of hint as to which of
> > > several uses the underflow was caused by.
> > > 
> > > This commit therefore exposes a mem_dump_obj() function that takes
> > > a pointer to memory (which must still be allocated if it has been
> > > dynamically allocated) and prints available information on where that
> > > memory came from.  This pointer can reference the middle of the block as
> > > well as the beginning of the block, as needed by things like RCU callback
> > > functions and timer handlers that might not know where the beginning of
> > > the memory block is.  These functions and handlers can use mem_dump_obj()
> > > to print out better hints as to where the problem might lie.
> > > 
> > > The information printed can depend on kernel configuration.  For example,
> > > the allocation return address can be printed only for slab and slub,
> > > and even then only when the necessary debug has been enabled.  For slab,
> > > build with CONFIG_DEBUG_SLAB=y, and either use sizes with ample space
> > > to the next power of two or use the SLAB_STORE_USER when creating the
> > > kmem_cache structure.  For slub, build with CONFIG_SLUB_DEBUG=y and
> > > boot with slub_debug=U, or pass SLAB_STORE_USER to kmem_cache_create()
> > > if more focused use is desired.  Also for slub, use CONFIG_STACKTRACE
> > > to enable printing of the allocation-time stack trace.
> > > 
> > > Cc: Christoph Lameter <cl@linux.com>
> > > Cc: Pekka Enberg <penberg@kernel.org>
> > > Cc: David Rientjes <rientjes@google.com>
> > > Cc: Joonsoo Kim <iamjoonsoo.kim@lge.com>
> > > Cc: Andrew Morton <akpm@linux-foundation.org>
> > > Cc: <linux-mm@kvack.org>
> > > Reported-by: Andrii Nakryiko <andrii@kernel.org>
> > > [ paulmck: Convert to printing and change names per Joonsoo Kim. ]
> > > [ paulmck: Move slab definition per Stephen Rothwell and kbuild test robot. ]
> > > [ paulmck: Handle CONFIG_MMU=n case where vmalloc() is kmalloc(). ]
> > > [ paulmck: Apply Vlastimil Babka feedback on slab.c kmem_provenance(). ]
> > > Signed-off-by: Paul E. McKenney <paulmck@kernel.org>
> > > ---
> > >  include/linux/mm.h   |  2 ++
> > >  include/linux/slab.h |  2 ++
> > >  mm/slab.c            | 20 ++++++++++++++
> > >  mm/slab.h            | 12 +++++++++
> > >  mm/slab_common.c     | 74 ++++++++++++++++++++++++++++++++++++++++++++++++++++
> > >  mm/slob.c            |  6 +++++
> > >  mm/slub.c            | 36 +++++++++++++++++++++++++
> > >  mm/util.c            | 24 +++++++++++++++++
> > >  8 files changed, 176 insertions(+)
> > > 
> > > diff --git a/include/linux/mm.h b/include/linux/mm.h
> > > index ef360fe..1eea266 100644
> > > --- a/include/linux/mm.h
> > > +++ b/include/linux/mm.h
> > > @@ -3153,5 +3153,7 @@ unsigned long wp_shared_mapping_range(struct address_space *mapping,
> > >  
> > >  extern int sysctl_nr_trim_pages;
> > >  
> > > +void mem_dump_obj(void *object);
> > > +
> > >  #endif /* __KERNEL__ */
> > >  #endif /* _LINUX_MM_H */
> > > diff --git a/include/linux/slab.h b/include/linux/slab.h
> > > index dd6897f..169b511 100644
> > > --- a/include/linux/slab.h
> > > +++ b/include/linux/slab.h
> > > @@ -186,6 +186,8 @@ void kfree(const void *);
> > >  void kfree_sensitive(const void *);
> > >  size_t __ksize(const void *);
> > >  size_t ksize(const void *);
> > > +bool kmem_valid_obj(void *object);
> > > +void kmem_dump_obj(void *object);
> > >  
> > >  #ifdef CONFIG_HAVE_HARDENED_USERCOPY_ALLOCATOR
> > >  void __check_heap_object(const void *ptr, unsigned long n, struct page *page,
> > > diff --git a/mm/slab.c b/mm/slab.c
> > > index b111356..66f00ad 100644
> > > --- a/mm/slab.c
> > > +++ b/mm/slab.c
> > > @@ -3633,6 +3633,26 @@ void *__kmalloc_node_track_caller(size_t size, gfp_t flags,
> > >  EXPORT_SYMBOL(__kmalloc_node_track_caller);
> > >  #endif /* CONFIG_NUMA */
> > >  
> > > +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
> > > +{
> > > +	struct kmem_cache *cachep;
> > > +	unsigned int objnr;
> > > +	void *objp;
> > > +
> > > +	kpp->kp_ptr = object;
> > > +	kpp->kp_page = page;
> > > +	cachep = page->slab_cache;
> > > +	kpp->kp_slab_cache = cachep;
> > > +	objp = object - obj_offset(cachep);
> > > +	kpp->kp_data_offset = obj_offset(cachep);
> > > +	page = virt_to_head_page(objp);
> > > +	objnr = obj_to_index(cachep, page, objp);
> > > +	objp = index_to_obj(cachep, page, objnr);
> > > +	kpp->kp_objp = objp;
> > > +	if (DEBUG && cachep->flags & SLAB_STORE_USER)
> > > +		kpp->kp_ret = *dbg_userword(cachep, objp);
> > > +}
> > > +
> > >  /**
> > >   * __do_kmalloc - allocate memory
> > >   * @size: how many bytes of memory are required.
> > > diff --git a/mm/slab.h b/mm/slab.h
> > > index 6d7c6a5..0dc705b 100644
> > > --- a/mm/slab.h
> > > +++ b/mm/slab.h
> > > @@ -630,4 +630,16 @@ static inline bool slab_want_init_on_free(struct kmem_cache *c)
> > >  	return false;
> > >  }
> > >  
> > > +#define KS_ADDRS_COUNT 16
> > > +struct kmem_obj_info {
> > > +	void *kp_ptr;
> > > +	struct page *kp_page;
> > > +	void *kp_objp;
> > > +	unsigned long kp_data_offset;
> > > +	struct kmem_cache *kp_slab_cache;
> > > +	void *kp_ret;
> > > +	void *kp_stack[KS_ADDRS_COUNT];
> > > +};
> > > +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page);
> > > +
> > >  #endif /* MM_SLAB_H */
> > > diff --git a/mm/slab_common.c b/mm/slab_common.c
> > > index f9ccd5d..df2e203 100644
> > > --- a/mm/slab_common.c
> > > +++ b/mm/slab_common.c
> > > @@ -536,6 +536,80 @@ bool slab_is_available(void)
> > >  	return slab_state >= UP;
> > >  }
> > >  
> > > +/**
> > > + * kmem_valid_obj - does the pointer reference a valid slab object?
> > > + * @object: pointer to query.
> > > + *
> > > + * Return: %true if the pointer is to a not-yet-freed object from
> > > + * kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
> > > + * is to an already-freed object, and %false otherwise.
> > > + */
> > > +bool kmem_valid_obj(void *object)
> > > +{
> > > +	struct page *page;
> > > +
> > > +	if (!virt_addr_valid(object))
> > > +		return false;
> > > +	page = virt_to_head_page(object);
> > > +	return PageSlab(page);
> > > +}
> > > +
> > > +/**
> > > + * kmem_dump_obj - Print available slab provenance information
> > > + * @object: slab object for which to find provenance information.
> > > + *
> > > + * This function uses pr_cont(), so that the caller is expected to have
> > > + * printed out whatever preamble is appropriate.  The provenance information
> > > + * depends on the type of object and on how much debugging is enabled.
> > > + * For a slab-cache object, the fact that it is a slab object is printed,
> > > + * and, if available, the slab name, return address, and stack trace from
> > > + * the allocation of that object.
> > > + *
> > > + * This function will splat if passed a pointer to a non-slab object.
> > > + * If you are not sure what type of object you have, you should instead
> > > + * use mem_dump_obj().
> > > + */
> > > +void kmem_dump_obj(void *object)
> > > +{
> > > +	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
> > > +	int i;
> > > +	struct page *page;
> > > +	unsigned long ptroffset;
> > > +	struct kmem_obj_info kp = { };
> > > +
> > > +	if (WARN_ON_ONCE(!virt_addr_valid(object)))
> > > +		return;
> > > +	page = virt_to_head_page(object);
> > > +	if (WARN_ON_ONCE(!PageSlab(page))) {
> > > +		pr_cont(" non-slab memory.\n");
> > > +		return;
> > > +	}
> > > +	kmem_obj_info(&kp, object, page);
> > > +	if (kp.kp_slab_cache)
> > > +		pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name);
> > > +	else
> > > +		pr_cont(" slab%s", cp);
> > > +	if (kp.kp_objp)
> > > +		pr_cont(" start %px", kp.kp_objp);
> > > +	if (kp.kp_data_offset)
> > > +		pr_cont(" data offset %lu", kp.kp_data_offset);
> > 
> > I haven't checked the code deeply, but kp_data_offset could be 0 in a
> > normal situation. Is it intentional not to print a message in this case?
> 
> Yes, so that it tells you the offset only if it is non-zero, which as
> you say happens only if certain debugging options are enabled.  Easy to
> print it unconditionally if that is preferred!

Okay. I have no preference here. The question is just to understand
the code correctly for myself.

> 
[ . . . ]

> > > diff --git a/mm/slub.c b/mm/slub.c
> > > index b30be23..0459d2a 100644
> > > --- a/mm/slub.c
> > > +++ b/mm/slub.c
> > > @@ -3918,6 +3918,42 @@ int __kmem_cache_shutdown(struct kmem_cache *s)
> > >  	return 0;
> > >  }
> > >  
> > > +void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct page *page)
> > > +{
> > > +#ifdef CONFIG_SLUB_DEBUG
> > 
> > We can get some info even if CONFIG_SLUB_DEBUG isn't defined.
> > Please move that code out of the #ifdef.
> 
> I guess since I worry about CONFIG_MMU=n it only makes sense to also
> worry about CONFIG_SLUB_DEBUG=n.  Fix update.

Okay!

Thanks.




* Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block
  2020-12-11  3:42       ` Paul E. McKenney
@ 2020-12-11  6:58         ` Joonsoo Kim
  2020-12-11 16:59           ` Paul E. McKenney
  0 siblings, 1 reply; 11+ messages in thread
From: Joonsoo Kim @ 2020-12-11  6:58 UTC (permalink / raw)
  To: Paul E. McKenney
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, andrii, Christoph Lameter,
	Pekka Enberg, David Rientjes, linux-mm

On Thu, Dec 10, 2020 at 07:42:27PM -0800, Paul E. McKenney wrote:
> On Thu, Dec 10, 2020 at 07:33:59PM -0800, Paul E. McKenney wrote:
> > On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> > > On Thu, Dec 10, 2020 at 05:19:58PM -0800, paulmck@kernel.org wrote:
> > > > From: "Paul E. McKenney" <paulmck@kernel.org>
> 
> [ . . . ]
> 
> > > We can get some info even if CONFIG_SLUB_DEBUG isn't defined.
> > > Please move that code out of the #ifdef.
> > 
> > I guess since I worry about CONFIG_MMU=n it only makes sense to also
> > worry about CONFIG_SLUB_DEBUG=n.  Fix update.
> 
> Like this?  (Patch on top of the series, to be folded into the first one.)

Yes!

Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Thanks.



* Re: [PATCH v3 sl-b 1/6] mm: Add mem_dump_obj() to print source of memory block
  2020-12-11  6:58         ` Joonsoo Kim
@ 2020-12-11 16:59           ` Paul E. McKenney
  0 siblings, 0 replies; 11+ messages in thread
From: Paul E. McKenney @ 2020-12-11 16:59 UTC (permalink / raw)
  To: Joonsoo Kim
  Cc: rcu, linux-kernel, kernel-team, mingo, jiangshanlai, akpm,
	mathieu.desnoyers, josh, tglx, peterz, rostedt, dhowells,
	edumazet, fweisbec, oleg, joel, andrii, Christoph Lameter,
	Pekka Enberg, David Rientjes, linux-mm

On Fri, Dec 11, 2020 at 03:58:51PM +0900, Joonsoo Kim wrote:
> On Thu, Dec 10, 2020 at 07:42:27PM -0800, Paul E. McKenney wrote:
> > On Thu, Dec 10, 2020 at 07:33:59PM -0800, Paul E. McKenney wrote:
> > > On Fri, Dec 11, 2020 at 11:22:10AM +0900, Joonsoo Kim wrote:
> > > > On Thu, Dec 10, 2020 at 05:19:58PM -0800, paulmck@kernel.org wrote:
> > > > > From: "Paul E. McKenney" <paulmck@kernel.org>
> > 
> > [ . . . ]
> > 
> > > > We can get some info even if CONFIG_SLUB_DEBUG isn't defined.
> > > > Please move that code out of the #ifdef.
> > > 
> > > I guess since I worry about CONFIG_MMU=n it only makes sense to also
> > > worry about CONFIG_SLUB_DEBUG=n.  Fix update.
> > 
> > Like this?  (Patch on top of the series, to be folded into the first one.)
> 
> Yes!
> 
> Acked-by: Joonsoo Kim <iamjoonsoo.kim@lge.com>

Applied, and thank you again for the review and feedback!

Suggestions on where to route these?  Left to my own devices, they
go via -rcu in the v5.12 merge window.

							Thanx, Paul


