* [PATCH v5 0/3] rcu: Dump memory object info if callback function is invalid
@ 2023-08-03 10:17 thunder.leizhen
  2023-08-03 10:17 ` [PATCH v5 1/3] mm: Remove kmem_valid_obj() thunder.leizhen
                   ` (2 more replies)
  0 siblings, 3 replies; 6+ messages in thread
From: thunder.leizhen @ 2023-08-03 10:17 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo,
	linux-mm, Paul E . McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, rcu,
	linux-kernel
  Cc: Zhen Lei

From: Zhen Lei <thunder.leizhen@huawei.com>

v4 --> v5:
1. Add Reviewed-by and Acked-by tags for patch 1/3
2. Add patch 3/3:
   mm: Dump the memory of slab object in kmem_dump_obj()

v3 --> v4:
1. Remove kmem_valid_obj() and convert kmem_dump_obj() to work the same way
   as vmalloc_dump_obj().
2. In kernel/rcu/rcu.h
-#include <linux/mm.h>
+#include <linux/slab.h>

v2 --> v3:
1. I gathered statistics on the source of 'rhp': slab objects account for more
   than 97.5%, and vmalloc for less than 1%. So changing the call from
   mem_dump_obj() to kmem_dump_obj() still meets the debugging requirements
   while avoiding the potential deadlock risk of vmalloc_dump_obj().
-		mem_dump_obj(rhp);
+		if (kmem_valid_obj(rhp))
+			kmem_dump_obj(rhp);

   The discussion about vmap_area_lock deadlock in v2:
   https://lkml.org/lkml/2022/11/11/493

2. Provide static inline empty functions for kmem_valid_obj() and kmem_dump_obj()
   when CONFIG_PRINTK=n.

v1 --> v2:
1. Remove condition "(unsigned long)rhp->func & 0x3", it have problems on x86.
2. Paul E. McKenney helped me update the commit message, thanks.
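
   For context, a rough reconstruction of the check dropped in v2 (the actual
   v1 code may have differed; at that stage the call was still mem_dump_obj()):

	if (unlikely(!rhp->func || ((unsigned long)rhp->func & 0x3)))
		mem_dump_obj(rhp);

   Presumably the "& 0x3" part had to go because x86 does not guarantee that
   function entry points are 4-byte aligned, so a valid callback could trip it.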


Zhen Lei (3):
  mm: Remove kmem_valid_obj()
  rcu: Dump memory object info if callback function is invalid
  mm: Dump the memory of slab object in kmem_dump_obj()

 include/linux/slab.h  |  5 +--
 kernel/rcu/rcu.h      |  7 +++++
 kernel/rcu/srcutiny.c |  1 +
 kernel/rcu/srcutree.c |  1 +
 kernel/rcu/tasks.h    |  1 +
 kernel/rcu/tiny.c     |  1 +
 kernel/rcu/tree.c     |  1 +
 mm/slab_common.c      | 71 +++++++++++++++++++++++--------------------
 mm/util.c             |  4 +--
 9 files changed, 54 insertions(+), 38 deletions(-)

-- 
2.34.1



* [PATCH v5 1/3] mm: Remove kmem_valid_obj()
  2023-08-03 10:17 [PATCH v5 0/3] rcu: Dump memory object info if callback function is invalid thunder.leizhen
@ 2023-08-03 10:17 ` thunder.leizhen
  2023-08-03 10:17 ` [PATCH v5 2/3] rcu: Dump memory object info if callback function is invalid thunder.leizhen
  2023-08-03 10:17 ` [PATCH v5 3/3] mm: Dump the memory of slab object in kmem_dump_obj() thunder.leizhen
  2 siblings, 0 replies; 6+ messages in thread
From: thunder.leizhen @ 2023-08-03 10:17 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo,
	linux-mm, Paul E . McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, rcu,
	linux-kernel
  Cc: Zhen Lei

From: Zhen Lei <thunder.leizhen@huawei.com>

Function kmem_dump_obj() will splat if passed a pointer to a non-slab
object, so nobody calls it directly; callers must first use
kmem_valid_obj() to determine whether the passed pointer refers to a
valid slab object. Merging kmem_valid_obj() into kmem_dump_obj() makes
the code more concise, so convert kmem_dump_obj() to work the same way
as vmalloc_dump_obj(). After this, nothing calls kmem_valid_obj()
anymore, and it can be safely removed.

Suggested-by: Matthew Wilcox <willy@infradead.org>
Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
Reviewed-by: Matthew Wilcox (Oracle) <willy@infradead.org>
Acked-by: Vlastimil Babka <vbabka@suse.cz>
---
 include/linux/slab.h |  5 +++--
 mm/slab_common.c     | 41 +++++++++++------------------------------
 mm/util.c            |  4 +---
 3 files changed, 15 insertions(+), 35 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 848c7c82ad5ad0b..d8ed2e810ec4448 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -244,8 +244,9 @@ DEFINE_FREE(kfree, void *, if (_T) kfree(_T))
 size_t ksize(const void *objp);
 
 #ifdef CONFIG_PRINTK
-bool kmem_valid_obj(void *object);
-void kmem_dump_obj(void *object);
+bool kmem_dump_obj(void *object);
+#else
+static inline bool kmem_dump_obj(void *object) { return false; }
 #endif
 
 /*
diff --git a/mm/slab_common.c b/mm/slab_common.c
index d1555ea2981ac51..ee6ed6dd7ba9fa5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -528,26 +528,6 @@ bool slab_is_available(void)
 }
 
 #ifdef CONFIG_PRINTK
-/**
- * kmem_valid_obj - does the pointer reference a valid slab object?
- * @object: pointer to query.
- *
- * Return: %true if the pointer is to a not-yet-freed object from
- * kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
- * is to an already-freed object, and %false otherwise.
- */
-bool kmem_valid_obj(void *object)
-{
-	struct folio *folio;
-
-	/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
-	if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
-		return false;
-	folio = virt_to_folio(object);
-	return folio_test_slab(folio);
-}
-EXPORT_SYMBOL_GPL(kmem_valid_obj);
-
 static void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *slab)
 {
 	if (__kfence_obj_info(kpp, object, slab))
@@ -566,11 +546,11 @@ static void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *
  * and, if available, the slab name, return address, and stack trace from
  * the allocation and last free path of that object.
  *
- * This function will splat if passed a pointer to a non-slab object.
- * If you are not sure what type of object you have, you should instead
- * use mem_dump_obj().
+ * Return: %true if the pointer is to a not-yet-freed object from
+ * kmalloc() or kmem_cache_alloc(), either %true or %false if the pointer
+ * is to an already-freed object, and %false otherwise.
  */
-void kmem_dump_obj(void *object)
+bool kmem_dump_obj(void *object)
 {
 	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
 	int i;
@@ -578,13 +558,13 @@ void kmem_dump_obj(void *object)
 	unsigned long ptroffset;
 	struct kmem_obj_info kp = { };
 
-	if (WARN_ON_ONCE(!virt_addr_valid(object)))
-		return;
+	/* Some arches consider ZERO_SIZE_PTR to be a valid address. */
+	if (object < (void *)PAGE_SIZE || !virt_addr_valid(object))
+		return false;
 	slab = virt_to_slab(object);
-	if (WARN_ON_ONCE(!slab)) {
-		pr_cont(" non-slab memory.\n");
-		return;
-	}
+	if (!slab)
+		return false;
+
 	kmem_obj_info(&kp, object, slab);
 	if (kp.kp_slab_cache)
 		pr_cont(" slab%s %s", cp, kp.kp_slab_cache->name);
@@ -621,6 +601,7 @@ void kmem_dump_obj(void *object)
 		pr_info("    %pS\n", kp.kp_free_stack[i]);
 	}
 
+	return true;
 }
 EXPORT_SYMBOL_GPL(kmem_dump_obj);
 #endif
diff --git a/mm/util.c b/mm/util.c
index dd12b9531ac4cad..ddfbb22dc1876d3 100644
--- a/mm/util.c
+++ b/mm/util.c
@@ -1063,10 +1063,8 @@ void mem_dump_obj(void *object)
 {
 	const char *type;
 
-	if (kmem_valid_obj(object)) {
-		kmem_dump_obj(object);
+	if (kmem_dump_obj(object))
 		return;
-	}
 
 	if (vmalloc_dump_obj(object))
 		return;
-- 
2.34.1



* [PATCH v5 2/3] rcu: Dump memory object info if callback function is invalid
  2023-08-03 10:17 [PATCH v5 0/3] rcu: Dump memory object info if callback function is invalid thunder.leizhen
  2023-08-03 10:17 ` [PATCH v5 1/3] mm: Remove kmem_valid_obj() thunder.leizhen
@ 2023-08-03 10:17 ` thunder.leizhen
  2023-08-03 10:17 ` [PATCH v5 3/3] mm: Dump the memory of slab object in kmem_dump_obj() thunder.leizhen
  2 siblings, 0 replies; 6+ messages in thread
From: thunder.leizhen @ 2023-08-03 10:17 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo,
	linux-mm, Paul E . McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, rcu,
	linux-kernel
  Cc: Zhen Lei

From: Zhen Lei <thunder.leizhen@huawei.com>

When a structure containing an RCU callback rhp is (incorrectly) freed
and reallocated after rhp is passed to call_rcu(), it is not unusual for
rhp->func to be set to NULL. This defeats the debugging prints used by
__call_rcu_common() in kernels built with CONFIG_DEBUG_OBJECTS_RCU_HEAD=y,
which expect to identify the offending code using the identity of this
function.
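
For illustration, a minimal (hypothetical) sketch of the misuse being
described: the enclosing structure is freed right after call_rcu(), so
the slab memory can be reallocated and overwritten (often zeroed) by a
new user before the callback runs, leaving rh.func NULL:

	#include <linux/rcupdate.h>
	#include <linux/slab.h>

	struct foo {
		struct rcu_head rh;
		int data;
	};

	static void foo_reclaim(struct rcu_head *rhp)
	{
		kfree(container_of(rhp, struct foo, rh));
	}

	static void buggy_release(struct foo *p)
	{
		call_rcu(&p->rh, foo_reclaim);
		kfree(p);	/* BUG: freed before the grace period ends */
	}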

And in kernels built without CONFIG_DEBUG_OBJECTS_RCU_HEAD=y, things
are even worse, as can be seen from this splat:

Unable to handle kernel NULL pointer dereference at virtual address 0
... ...
PC is at 0x0
LR is at rcu_do_batch+0x1c0/0x3b8
... ...
 (rcu_do_batch) from (rcu_core+0x1d4/0x284)
 (rcu_core) from (__do_softirq+0x24c/0x344)
 (__do_softirq) from (__irq_exit_rcu+0x64/0x108)
 (__irq_exit_rcu) from (irq_exit+0x8/0x10)
 (irq_exit) from (__handle_domain_irq+0x74/0x9c)
 (__handle_domain_irq) from (gic_handle_irq+0x8c/0x98)
 (gic_handle_irq) from (__irq_svc+0x5c/0x94)
 (__irq_svc) from (arch_cpu_idle+0x20/0x3c)
 (arch_cpu_idle) from (default_idle_call+0x4c/0x78)
 (default_idle_call) from (do_idle+0xf8/0x150)
 (do_idle) from (cpu_startup_entry+0x18/0x20)
 (cpu_startup_entry) from (0xc01530)

This commit therefore adds calls to kmem_dump_obj(rhp), via the new
debug_rcu_head_callback() helper, to output some information, for
example:

  slab kmalloc-256 start ffff410c45019900 pointer offset 0 size 256

This gives the rough size of the memory block and the offset of the
rcu_head structure within it, which provides at least a few clues to
help locate the problem. If the problem is reproducible, additional
slab debugging can be enabled, for example CONFIG_DEBUG_SLAB=y, which
can provide significantly more information.

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 kernel/rcu/rcu.h      | 7 +++++++
 kernel/rcu/srcutiny.c | 1 +
 kernel/rcu/srcutree.c | 1 +
 kernel/rcu/tasks.h    | 1 +
 kernel/rcu/tiny.c     | 1 +
 kernel/rcu/tree.c     | 1 +
 6 files changed, 12 insertions(+)

diff --git a/kernel/rcu/rcu.h b/kernel/rcu/rcu.h
index d1dcb09750efbd6..bc81582238b9846 100644
--- a/kernel/rcu/rcu.h
+++ b/kernel/rcu/rcu.h
@@ -10,6 +10,7 @@
 #ifndef __LINUX_RCU_H
 #define __LINUX_RCU_H
 
+#include <linux/slab.h>
 #include <trace/events/rcu.h>
 
 /*
@@ -248,6 +249,12 @@ static inline void debug_rcu_head_unqueue(struct rcu_head *head)
 }
 #endif	/* #else !CONFIG_DEBUG_OBJECTS_RCU_HEAD */
 
+static inline void debug_rcu_head_callback(struct rcu_head *rhp)
+{
+	if (unlikely(!rhp->func))
+		kmem_dump_obj(rhp);
+}
+
 extern int rcu_cpu_stall_suppress_at_boot;
 
 static inline bool rcu_stall_is_suppressed_at_boot(void)
diff --git a/kernel/rcu/srcutiny.c b/kernel/rcu/srcutiny.c
index 336af24e0fe358a..c38e5933a5d6937 100644
--- a/kernel/rcu/srcutiny.c
+++ b/kernel/rcu/srcutiny.c
@@ -138,6 +138,7 @@ void srcu_drive_gp(struct work_struct *wp)
 	while (lh) {
 		rhp = lh;
 		lh = lh->next;
+		debug_rcu_head_callback(rhp);
 		local_bh_disable();
 		rhp->func(rhp);
 		local_bh_enable();
diff --git a/kernel/rcu/srcutree.c b/kernel/rcu/srcutree.c
index f1a905200fc2f79..833a8f848a90ae6 100644
--- a/kernel/rcu/srcutree.c
+++ b/kernel/rcu/srcutree.c
@@ -1710,6 +1710,7 @@ static void srcu_invoke_callbacks(struct work_struct *work)
 	rhp = rcu_cblist_dequeue(&ready_cbs);
 	for (; rhp != NULL; rhp = rcu_cblist_dequeue(&ready_cbs)) {
 		debug_rcu_head_unqueue(rhp);
+		debug_rcu_head_callback(rhp);
 		local_bh_disable();
 		rhp->func(rhp);
 		local_bh_enable();
diff --git a/kernel/rcu/tasks.h b/kernel/rcu/tasks.h
index 7294be62727b12c..148ac6a464bfb12 100644
--- a/kernel/rcu/tasks.h
+++ b/kernel/rcu/tasks.h
@@ -538,6 +538,7 @@ static void rcu_tasks_invoke_cbs(struct rcu_tasks *rtp, struct rcu_tasks_percpu
 	raw_spin_unlock_irqrestore_rcu_node(rtpcp, flags);
 	len = rcl.len;
 	for (rhp = rcu_cblist_dequeue(&rcl); rhp; rhp = rcu_cblist_dequeue(&rcl)) {
+		debug_rcu_head_callback(rhp);
 		local_bh_disable();
 		rhp->func(rhp);
 		local_bh_enable();
diff --git a/kernel/rcu/tiny.c b/kernel/rcu/tiny.c
index 42f7589e51e09e7..fec804b7908032d 100644
--- a/kernel/rcu/tiny.c
+++ b/kernel/rcu/tiny.c
@@ -97,6 +97,7 @@ static inline bool rcu_reclaim_tiny(struct rcu_head *head)
 
 	trace_rcu_invoke_callback("", head);
 	f = head->func;
+	debug_rcu_head_callback(head);
 	WRITE_ONCE(head->func, (rcu_callback_t)0L);
 	f(head);
 	rcu_lock_release(&rcu_callback_map);
diff --git a/kernel/rcu/tree.c b/kernel/rcu/tree.c
index 7c79480bfaa04e4..927c5ba0ae42269 100644
--- a/kernel/rcu/tree.c
+++ b/kernel/rcu/tree.c
@@ -2135,6 +2135,7 @@ static void rcu_do_batch(struct rcu_data *rdp)
 		trace_rcu_invoke_callback(rcu_state.name, rhp);
 
 		f = rhp->func;
+		debug_rcu_head_callback(rhp);
 		WRITE_ONCE(rhp->func, (rcu_callback_t)0L);
 		f(rhp);
 
-- 
2.34.1



* [PATCH v5 3/3] mm: Dump the memory of slab object in kmem_dump_obj()
  2023-08-03 10:17 [PATCH v5 0/3] rcu: Dump memory object info if callback function is invalid thunder.leizhen
  2023-08-03 10:17 ` [PATCH v5 1/3] mm: Remove kmem_valid_obj() thunder.leizhen
  2023-08-03 10:17 ` [PATCH v5 2/3] rcu: Dump memory object info if callback function is invalid thunder.leizhen
@ 2023-08-03 10:17 ` thunder.leizhen
  2023-08-03 10:34   ` Vlastimil Babka
  2 siblings, 1 reply; 6+ messages in thread
From: thunder.leizhen @ 2023-08-03 10:17 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Vlastimil Babka, Roman Gushchin, Hyeonggon Yoo,
	linux-mm, Paul E . McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, rcu,
	linux-kernel
  Cc: Zhen Lei

From: Zhen Lei <thunder.leizhen@huawei.com>

The contents of the slab object may contain some magic words and other
useful information that may be helpful in locating problems such as
memory corruption and use-after-free.

To avoid print flooding, dump up to "16 * sizeof(int) = 64" bytes
centered on the argument 'object'.

For example:
slab kmalloc-64 start ffff4043802d8b40 pointer offset 24 size 64
[8b40]: 12345678 00000000 8092d000 ffff8000
[8b50]: 00101000 00000000 8199ee00 ffff4043
[8b60]: 00000000 00000000 00000000 00000100
[8b70]: 00000000 9abcdef0 a8744de4 ffffc7fe

Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
---
 mm/slab_common.c | 30 +++++++++++++++++++++++++++---
 1 file changed, 27 insertions(+), 3 deletions(-)

diff --git a/mm/slab_common.c b/mm/slab_common.c
index ee6ed6dd7ba9fa5..0232de9a3b29cf5 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -553,7 +553,7 @@ static void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *
 bool kmem_dump_obj(void *object)
 {
 	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
-	int i;
+	int i, object_size = 0;
 	struct slab *slab;
 	unsigned long ptroffset;
 	struct kmem_obj_info kp = { };
@@ -580,12 +580,36 @@ bool kmem_dump_obj(void *object)
 		ptroffset = ((char *)object - (char *)kp.kp_objp) - kp.kp_data_offset;
 		pr_cont(" pointer offset %lu", ptroffset);
 	}
-	if (kp.kp_slab_cache && kp.kp_slab_cache->object_size)
-		pr_cont(" size %u", kp.kp_slab_cache->object_size);
+	if (kp.kp_slab_cache && kp.kp_slab_cache->object_size) {
+		object_size = kp.kp_slab_cache->object_size;
+		pr_cont(" size %u", object_size);
+	}
 	if (kp.kp_ret)
 		pr_cont(" allocated at %pS\n", kp.kp_ret);
 	else
 		pr_cont("\n");
+
+	/* Dump a small piece of memory centered on 'object' */
+	if (kp.kp_objp && object_size) {
+		int *p = object, n = 16;
+
+		p += n / 2;
+		if ((void *)p > kp.kp_objp + object_size)
+			p = kp.kp_objp + object_size;
+
+		p -= n;
+		if ((void *)p < kp.kp_objp)
+			p = kp.kp_objp;
+
+		n = min_t(int, object_size / sizeof(int), n);
+		for (i = 0; i < n; i++, p++) {
+			if (i % 4 == 0)
+				pr_info("[%04lx]:", 0xffff & (unsigned long)p);
+			pr_cont(" %08x", *p);
+		}
+		pr_cont("\n");
+	}
+
 	for (i = 0; i < ARRAY_SIZE(kp.kp_stack); i++) {
 		if (!kp.kp_stack[i])
 			break;
-- 
2.34.1



* Re: [PATCH v5 3/3] mm: Dump the memory of slab object in kmem_dump_obj()
  2023-08-03 10:17 ` [PATCH v5 3/3] mm: Dump the memory of slab object in kmem_dump_obj() thunder.leizhen
@ 2023-08-03 10:34   ` Vlastimil Babka
  2023-08-04  1:44     ` Leizhen (ThunderTown)
  0 siblings, 1 reply; 6+ messages in thread
From: Vlastimil Babka @ 2023-08-03 10:34 UTC (permalink / raw)
  To: thunder.leizhen, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
	linux-mm, Paul E . McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, rcu,
	linux-kernel
  Cc: Zhen Lei

On 8/3/23 12:17, thunder.leizhen@huaweicloud.com wrote:
> From: Zhen Lei <thunder.leizhen@huawei.com>
> 
> The contents of the slab object may contain some magic words and other
> useful information that may be helpful in locating problems such as
> memory corruption and use-after-free.
> 
> To avoid print flooding, dump up to "16 * sizeof(int) = 64" bytes
> centered on the argument 'object'.
> 
> For example:
> slab kmalloc-64 start ffff4043802d8b40 pointer offset 24 size 64
> [8b40]: 12345678 00000000 8092d000 ffff8000
> [8b50]: 00101000 00000000 8199ee00 ffff4043
> [8b60]: 00000000 00000000 00000000 00000100
> [8b70]: 00000000 9abcdef0 a8744de4 ffffc7fe
> 
> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
> ---
>  mm/slab_common.c | 30 +++++++++++++++++++++++++++---
>  1 file changed, 27 insertions(+), 3 deletions(-)
> 
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index ee6ed6dd7ba9fa5..0232de9a3b29cf5 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -553,7 +553,7 @@ static void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *
>  bool kmem_dump_obj(void *object)
>  {
>  	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
> -	int i;
> +	int i, object_size = 0;
>  	struct slab *slab;
>  	unsigned long ptroffset;
>  	struct kmem_obj_info kp = { };
> @@ -580,12 +580,36 @@ bool kmem_dump_obj(void *object)
>  		ptroffset = ((char *)object - (char *)kp.kp_objp) - kp.kp_data_offset;
>  		pr_cont(" pointer offset %lu", ptroffset);
>  	}
> -	if (kp.kp_slab_cache && kp.kp_slab_cache->object_size)
> -		pr_cont(" size %u", kp.kp_slab_cache->object_size);
> +	if (kp.kp_slab_cache && kp.kp_slab_cache->object_size) {
> +		object_size = kp.kp_slab_cache->object_size;
> +		pr_cont(" size %u", object_size);
> +	}
>  	if (kp.kp_ret)
>  		pr_cont(" allocated at %pS\n", kp.kp_ret);
>  	else
>  		pr_cont("\n");
> +
> +	/* Dump a small piece of memory centered on 'object' */
> +	if (kp.kp_objp && object_size) {
> +		int *p = object, n = 16;
> +
> +		p += n / 2;
> +		if ((void *)p > kp.kp_objp + object_size)
> +			p = kp.kp_objp + object_size;
> +
> +		p -= n;
> +		if ((void *)p < kp.kp_objp)
> +			p = kp.kp_objp;
> +
> +		n = min_t(int, object_size / sizeof(int), n);
> +		for (i = 0; i < n; i++, p++) {
> +			if (i % 4 == 0)
> +				pr_info("[%04lx]:", 0xffff & (unsigned long)p);
> +			pr_cont(" %08x", *p);
> +		}
> +		pr_cont("\n");

There's a print_hex_dump() for this, see how it's used from e.g. __dump_page().


> +	}
> +
>  	for (i = 0; i < ARRAY_SIZE(kp.kp_stack); i++) {
>  		if (!kp.kp_stack[i])
>  			break;



* Re: [PATCH v5 3/3] mm: Dump the memory of slab object in kmem_dump_obj()
  2023-08-03 10:34   ` Vlastimil Babka
@ 2023-08-04  1:44     ` Leizhen (ThunderTown)
  0 siblings, 0 replies; 6+ messages in thread
From: Leizhen (ThunderTown) @ 2023-08-04  1:44 UTC (permalink / raw)
  To: Vlastimil Babka, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Roman Gushchin, Hyeonggon Yoo,
	linux-mm, Paul E . McKenney, Frederic Weisbecker,
	Neeraj Upadhyay, Joel Fernandes, Josh Triplett, Boqun Feng,
	Steven Rostedt, Mathieu Desnoyers, Lai Jiangshan, Zqiang, rcu,
	linux-kernel
  Cc: Zhen Lei



On 2023/8/3 18:34, Vlastimil Babka wrote:
> On 8/3/23 12:17, thunder.leizhen@huaweicloud.com wrote:
>> From: Zhen Lei <thunder.leizhen@huawei.com>
>>
>> The contents of the slab object may contain some magic words and other
>> useful information that may be helpful in locating problems such as
>> memory corruption and use-after-free.
>>
>> To avoid print flooding, dump up to "16 * sizeof(int) = 64" bytes
>> centered on the argument 'object'.
>>
>> For example:
>> slab kmalloc-64 start ffff4043802d8b40 pointer offset 24 size 64
>> [8b40]: 12345678 00000000 8092d000 ffff8000
>> [8b50]: 00101000 00000000 8199ee00 ffff4043
>> [8b60]: 00000000 00000000 00000000 00000100
>> [8b70]: 00000000 9abcdef0 a8744de4 ffffc7fe
>>
>> Signed-off-by: Zhen Lei <thunder.leizhen@huawei.com>
>> ---
>>  mm/slab_common.c | 30 +++++++++++++++++++++++++++---
>>  1 file changed, 27 insertions(+), 3 deletions(-)
>>
>> diff --git a/mm/slab_common.c b/mm/slab_common.c
>> index ee6ed6dd7ba9fa5..0232de9a3b29cf5 100644
>> --- a/mm/slab_common.c
>> +++ b/mm/slab_common.c
>> @@ -553,7 +553,7 @@ static void kmem_obj_info(struct kmem_obj_info *kpp, void *object, struct slab *
>>  bool kmem_dump_obj(void *object)
>>  {
>>  	char *cp = IS_ENABLED(CONFIG_MMU) ? "" : "/vmalloc";
>> -	int i;
>> +	int i, object_size = 0;
>>  	struct slab *slab;
>>  	unsigned long ptroffset;
>>  	struct kmem_obj_info kp = { };
>> @@ -580,12 +580,36 @@ bool kmem_dump_obj(void *object)
>>  		ptroffset = ((char *)object - (char *)kp.kp_objp) - kp.kp_data_offset;
>>  		pr_cont(" pointer offset %lu", ptroffset);
>>  	}
>> -	if (kp.kp_slab_cache && kp.kp_slab_cache->object_size)
>> -		pr_cont(" size %u", kp.kp_slab_cache->object_size);
>> +	if (kp.kp_slab_cache && kp.kp_slab_cache->object_size) {
>> +		object_size = kp.kp_slab_cache->object_size;
>> +		pr_cont(" size %u", object_size);
>> +	}
>>  	if (kp.kp_ret)
>>  		pr_cont(" allocated at %pS\n", kp.kp_ret);
>>  	else
>>  		pr_cont("\n");
>> +
>> +	/* Dump a small piece of memory centered on 'object' */
>> +	if (kp.kp_objp && object_size) {
>> +		int *p = object, n = 16;
>> +
>> +		p += n / 2;
>> +		if ((void *)p > kp.kp_objp + object_size)
>> +			p = kp.kp_objp + object_size;
>> +
>> +		p -= n;
>> +		if ((void *)p < kp.kp_objp)
>> +			p = kp.kp_objp;
>> +
>> +		n = min_t(int, object_size / sizeof(int), n);
>> +		for (i = 0; i < n; i++, p++) {
>> +			if (i % 4 == 0)
>> +				pr_info("[%04lx]:", 0xffff & (unsigned long)p);
>> +			pr_cont(" %08x", *p);
>> +		}
>> +		pr_cont("\n");
> 
> There's a print_hex_dump() for this, see how it's used from e.g. __dump_page().

Thank you very much. That makes the code a lot simpler.

However, print_hex_dump() can be further enhanced; I will add a patch so
we can discuss it together.
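
For reference, here is a rough, untested sketch of that conversion,
reusing the 'object', 'object_size' and 'kp' variables from the v5
patch above (the print_hex_dump() prefix/row/group parameters are only
my guess):

	/* Dump a small piece of memory centered on 'object' */
	if (kp.kp_objp && object_size) {
		void *p = object;
		size_t len = 16 * sizeof(int);	/* dump at most 64 bytes */

		/* Clamp the window to [kp.kp_objp, kp.kp_objp + object_size) */
		p += len / 2;
		if (p > kp.kp_objp + object_size)
			p = kp.kp_objp + object_size;
		p -= len;
		if (p < kp.kp_objp)
			p = kp.kp_objp;
		len = min_t(size_t, len, object_size);

		print_hex_dump(KERN_INFO, "", DUMP_PREFIX_OFFSET,
			       16, sizeof(int), p, len, false);
	}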

> 
> 
>> +	}
>> +
>>  	for (i = 0; i < ARRAY_SIZE(kp.kp_stack); i++) {
>>  		if (!kp.kp_stack[i])
>>  			break;
> 
> .
> 

-- 
Regards,
  Zhen Lei


