linux-kernel.vger.kernel.org archive mirror
* [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan
@ 2022-06-03  3:54 Patrick Wang
  2022-06-03  3:54 ` [PATCH v2 1/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address Patrick Wang
                   ` (5 more replies)
  0 siblings, 6 replies; 18+ messages in thread
From: Patrick Wang @ 2022-06-03  3:54 UTC (permalink / raw)
  To: catalin.marinas, akpm; +Cc: linux-mm, linux-kernel, yee.lee, patrick.wang.shcn

The kmemleak_*_phys() interface uses "min_low_pfn" and
"max_low_pfn" to check addresses. But on some architectures,
kmemleak_*_phys() is called before those two variables are
initialized. The following steps are taken:

1) Add OBJECT_PHYS flag and rbtree for the objects allocated
   with physical address
2) Store physical address in objects if allocated with OBJECT_PHYS
3) Check the boundaries when scanning instead of in kmemleak_*_phys()
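As a rough userspace sketch of steps 1) and 2) above (illustrative only: the flag values mirror the patches, but the two rbtrees are reduced to single pointers):

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

/* Flag bits mirroring the series. */
#define OBJECT_ALLOCATED (1 << 0)
#define OBJECT_PHYS      (1 << 4)  /* object tracked by physical address */

struct object {
	unsigned int flags;
	unsigned long pointer;  /* VA, or PA when OBJECT_PHYS is set */
};

/* Two separate search roots, one per address type (stand-ins for rbtrees). */
static struct object *va_root;
static struct object *phys_root;

/* Select the tree by address type, as lookup_object()/create_object() do. */
static struct object **pick_root(bool is_phys)
{
	return is_phys ? &phys_root : &va_root;
}

static struct object *create_object(struct object *obj, unsigned long ptr,
				    bool is_phys)
{
	obj->flags = OBJECT_ALLOCATED | (is_phys ? OBJECT_PHYS : 0);
	obj->pointer = ptr;
	*pick_root(is_phys) = obj;  /* the real code inserts into an rbtree */
	return obj;
}
```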

This patch set addresses:
https://lore.kernel.org/r/20220527032504.30341-1-yee.lee@mediatek.com
https://lore.kernel.org/r/9dd08bb5-f39e-53d8-f88d-bec598a08c93@gmail.com

v1: https://lore.kernel.org/r/20220531150823.1004101-1-patrick.wang.shcn@gmail.com

v1->v2:
 - add rbtree for the objects allocated with physical address
 - store physical address in objects if allocated with OBJECT_PHYS
 - check the upper object boundary as well and avoid duplicate checks

Patrick Wang (4):
  mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical
    address
  mm: kmemleak: add rbtree for objects allocated with physical address
  mm: kmemleak: handle address stored in object based on its type
  mm: kmemleak: kmemleak_*_phys() set address type and check PA when
    scan

 mm/kmemleak.c | 193 ++++++++++++++++++++++++++++++++------------------
 1 file changed, 123 insertions(+), 70 deletions(-)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 18+ messages in thread

* [PATCH v2 1/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address
  2022-06-03  3:54 [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
@ 2022-06-03  3:54 ` Patrick Wang
  2022-06-06 11:55   ` Catalin Marinas
  2022-06-03  3:54 ` [PATCH v2 2/4] mm: kmemleak: add rbtree " Patrick Wang
                   ` (4 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Patrick Wang @ 2022-06-03  3:54 UTC (permalink / raw)
  To: catalin.marinas, akpm; +Cc: linux-mm, linux-kernel, yee.lee, patrick.wang.shcn

Add an OBJECT_PHYS flag for objects. This flag is used
to identify objects allocated with a physical address.
The create_object() function gains an argument to set
that flag.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
---
 mm/kmemleak.c | 23 +++++++++++++----------
 1 file changed, 13 insertions(+), 10 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index a182f5ddaf68..1e9e0aa93ae5 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -172,6 +172,8 @@ struct kmemleak_object {
 #define OBJECT_NO_SCAN		(1 << 2)
 /* flag set to fully scan the object when scan_area allocation failed */
 #define OBJECT_FULL_SCAN	(1 << 3)
+/* flag set for object allocated with physical address */
+#define OBJECT_PHYS		(1 << 4)
 
 #define HEX_PREFIX		"    "
 /* number of bytes to print per line; must be 16 or 32 */
@@ -575,7 +577,8 @@ static int __save_stack_trace(unsigned long *trace)
  * memory block and add it to the object_list and object_tree_root.
  */
 static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
-					     int min_count, gfp_t gfp)
+					     int min_count, gfp_t gfp,
+					     bool is_phys)
 {
 	unsigned long flags;
 	struct kmemleak_object *object, *parent;
@@ -595,7 +598,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 	INIT_HLIST_HEAD(&object->area_list);
 	raw_spin_lock_init(&object->lock);
 	atomic_set(&object->use_count, 1);
-	object->flags = OBJECT_ALLOCATED;
+	object->flags = OBJECT_ALLOCATED | (is_phys ? OBJECT_PHYS : 0);
 	object->pointer = ptr;
 	object->size = kfence_ksize((void *)ptr) ?: size;
 	object->excess_ref = 0;
@@ -729,10 +732,10 @@ static void delete_object_part(unsigned long ptr, size_t size)
 	end = object->pointer + object->size;
 	if (ptr > start)
 		create_object(start, ptr - start, object->min_count,
-			      GFP_KERNEL);
+			      GFP_KERNEL, object->flags & OBJECT_PHYS);
 	if (ptr + size < end)
 		create_object(ptr + size, end - ptr - size, object->min_count,
-			      GFP_KERNEL);
+			      GFP_KERNEL, object->flags & OBJECT_PHYS);
 
 	__delete_object(object);
 }
@@ -904,7 +907,7 @@ void __ref kmemleak_alloc(const void *ptr, size_t size, int min_count,
 	pr_debug("%s(0x%p, %zu, %d)\n", __func__, ptr, size, min_count);
 
 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
-		create_object((unsigned long)ptr, size, min_count, gfp);
+		create_object((unsigned long)ptr, size, min_count, gfp, false);
 }
 EXPORT_SYMBOL_GPL(kmemleak_alloc);
 
@@ -931,7 +934,7 @@ void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size,
 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
 		for_each_possible_cpu(cpu)
 			create_object((unsigned long)per_cpu_ptr(ptr, cpu),
-				      size, 0, gfp);
+				      size, 0, gfp, false);
 }
 EXPORT_SYMBOL_GPL(kmemleak_alloc_percpu);
 
@@ -953,7 +956,7 @@ void __ref kmemleak_vmalloc(const struct vm_struct *area, size_t size, gfp_t gfp
 	 * the virtual address of the vmalloc'ed block.
 	 */
 	if (kmemleak_enabled) {
-		create_object((unsigned long)area->addr, size, 2, gfp);
+		create_object((unsigned long)area->addr, size, 2, gfp, false);
 		object_set_excess_ref((unsigned long)area,
 				      (unsigned long)area->addr);
 	}
@@ -1966,14 +1969,14 @@ void __init kmemleak_init(void)
 
 	/* register the data/bss sections */
 	create_object((unsigned long)_sdata, _edata - _sdata,
-		      KMEMLEAK_GREY, GFP_ATOMIC);
+		      KMEMLEAK_GREY, GFP_ATOMIC, false);
 	create_object((unsigned long)__bss_start, __bss_stop - __bss_start,
-		      KMEMLEAK_GREY, GFP_ATOMIC);
+		      KMEMLEAK_GREY, GFP_ATOMIC, false);
 	/* only register .data..ro_after_init if not within .data */
 	if (&__start_ro_after_init < &_sdata || &__end_ro_after_init > &_edata)
 		create_object((unsigned long)__start_ro_after_init,
 			      __end_ro_after_init - __start_ro_after_init,
-			      KMEMLEAK_GREY, GFP_ATOMIC);
+			      KMEMLEAK_GREY, GFP_ATOMIC, false);
 }
 
 /*
-- 
2.25.1



* [PATCH v2 2/4] mm: kmemleak: add rbtree for objects allocated with physical address
  2022-06-03  3:54 [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
  2022-06-03  3:54 ` [PATCH v2 1/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address Patrick Wang
@ 2022-06-03  3:54 ` Patrick Wang
  2022-06-06 14:38   ` Catalin Marinas
  2022-06-03  3:54 ` [PATCH v2 3/4] mm: kmemleak: handle address stored in object based on its type Patrick Wang
                   ` (3 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Patrick Wang @ 2022-06-03  3:54 UTC (permalink / raw)
  To: catalin.marinas, akpm; +Cc: linux-mm, linux-kernel, yee.lee, patrick.wang.shcn

Add object_phys_tree_root to store the objects allocated
with a physical address. It is distinguished from
object_tree_root by the OBJECT_PHYS flag or a function
argument.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
---
 mm/kmemleak.c | 99 +++++++++++++++++++++++++++++----------------------
 1 file changed, 57 insertions(+), 42 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 1e9e0aa93ae5..218144392446 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -14,14 +14,16 @@
  * The following locks and mutexes are used by kmemleak:
  *
  * - kmemleak_lock (raw_spinlock_t): protects the object_list modifications and
- *   accesses to the object_tree_root. The object_list is the main list
- *   holding the metadata (struct kmemleak_object) for the allocated memory
- *   blocks. The object_tree_root is a red black tree used to look-up
- *   metadata based on a pointer to the corresponding memory block.  The
- *   kmemleak_object structures are added to the object_list and
- *   object_tree_root in the create_object() function called from the
- *   kmemleak_alloc() callback and removed in delete_object() called from the
- *   kmemleak_free() callback
+ *   accesses to the object_tree_root (or object_phys_tree_root). The
+ *   object_list is the main list holding the metadata (struct kmemleak_object)
+ *   for the allocated memory blocks. The object_tree_root and object_phys_tree_root
+ *   are red black trees used to look-up metadata based on a pointer to the
+ *   corresponding memory block. The object_phys_tree_root is for objects
+ *   allocated with physical address. The kmemleak_object structures are
+ *   added to the object_list and object_tree_root (or object_phys_tree_root)
+ *   in the create_object() function called from the kmemleak_alloc() (or
+ *   kmemleak_alloc_phys()) callback and removed in delete_object() called from
+ *   the kmemleak_free() callback
  * - kmemleak_object.lock (raw_spinlock_t): protects a kmemleak_object.
  *   Accesses to the metadata (e.g. count) are protected by this lock. Note
  *   that some members of this structure may be protected by other means
@@ -195,7 +197,9 @@ static int mem_pool_free_count = ARRAY_SIZE(mem_pool);
 static LIST_HEAD(mem_pool_free_list);
 /* search tree for object boundaries */
 static struct rb_root object_tree_root = RB_ROOT;
-/* protecting the access to object_list and object_tree_root */
+/* search tree for object (with OBJECT_PHYS flag) boundaries */
+static struct rb_root object_phys_tree_root = RB_ROOT;
+/* protecting the access to object_list, object_tree_root (or object_phys_tree_root) */
 static DEFINE_RAW_SPINLOCK(kmemleak_lock);
 
 /* allocation caches for kmemleak internal data */
@@ -380,9 +384,11 @@ static void dump_object_info(struct kmemleak_object *object)
  * beginning of the memory block are allowed. The kmemleak_lock must be held
  * when calling this function.
  */
-static struct kmemleak_object *lookup_object(unsigned long ptr, int alias)
+static struct kmemleak_object *lookup_object(unsigned long ptr, int alias,
+					     bool is_phys)
 {
-	struct rb_node *rb = object_tree_root.rb_node;
+	struct rb_node *rb = is_phys ? object_phys_tree_root.rb_node :
+			     object_tree_root.rb_node;
 	unsigned long untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
 
 	while (rb) {
@@ -517,14 +523,15 @@ static void put_object(struct kmemleak_object *object)
 /*
  * Look up an object in the object search tree and increase its use_count.
  */
-static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
+static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias,
+						   bool is_phys)
 {
 	unsigned long flags;
 	struct kmemleak_object *object;
 
 	rcu_read_lock();
 	raw_spin_lock_irqsave(&kmemleak_lock, flags);
-	object = lookup_object(ptr, alias);
+	object = lookup_object(ptr, alias, is_phys);
 	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 
 	/* check whether the object is still available */
@@ -536,27 +543,32 @@ static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
 }
 
 /*
- * Remove an object from the object_tree_root and object_list. Must be called
- * with the kmemleak_lock held _if_ kmemleak is still enabled.
+ * Remove an object from the object_tree_root (or object_phys_tree_root)
+ * and object_list. Must be called with the kmemleak_lock held _if_ kmemleak
+ * is still enabled.
  */
 static void __remove_object(struct kmemleak_object *object)
 {
-	rb_erase(&object->rb_node, &object_tree_root);
+	rb_erase(&object->rb_node, object->flags & OBJECT_PHYS ?
+				   &object_phys_tree_root :
+				   &object_tree_root);
 	list_del_rcu(&object->object_list);
 }
 
 /*
  * Look up an object in the object search tree and remove it from both
- * object_tree_root and object_list. The returned object's use_count should be
- * at least 1, as initially set by create_object().
+ * object_tree_root (or object_phys_tree_root) and object_list. The
+ * returned object's use_count should be at least 1, as initially set
+ * by create_object().
  */
-static struct kmemleak_object *find_and_remove_object(unsigned long ptr, int alias)
+static struct kmemleak_object *find_and_remove_object(unsigned long ptr, int alias,
+						      bool is_phys)
 {
 	unsigned long flags;
 	struct kmemleak_object *object;
 
 	raw_spin_lock_irqsave(&kmemleak_lock, flags);
-	object = lookup_object(ptr, alias);
+	object = lookup_object(ptr, alias, is_phys);
 	if (object)
 		__remove_object(object);
 	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
@@ -574,7 +586,8 @@ static int __save_stack_trace(unsigned long *trace)
 
 /*
  * Create the metadata (struct kmemleak_object) corresponding to an allocated
- * memory block and add it to the object_list and object_tree_root.
+ * memory block and add it to the object_list and object_tree_root (or
+ * object_phys_tree_root).
  */
 static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 					     int min_count, gfp_t gfp,
@@ -633,7 +646,8 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
 	min_addr = min(min_addr, untagged_ptr);
 	max_addr = max(max_addr, untagged_ptr + size);
-	link = &object_tree_root.rb_node;
+	link = is_phys ? &object_phys_tree_root.rb_node :
+		&object_tree_root.rb_node;
 	rb_parent = NULL;
 	while (*link) {
 		rb_parent = *link;
@@ -657,7 +671,8 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 		}
 	}
 	rb_link_node(&object->rb_node, rb_parent, link);
-	rb_insert_color(&object->rb_node, &object_tree_root);
+	rb_insert_color(&object->rb_node, is_phys ? &object_phys_tree_root :
+					  &object_tree_root);
 
 	list_add_tail_rcu(&object->object_list, &object_list);
 out:
@@ -693,7 +708,7 @@ static void delete_object_full(unsigned long ptr)
 {
 	struct kmemleak_object *object;
 
-	object = find_and_remove_object(ptr, 0);
+	object = find_and_remove_object(ptr, 0, false);
 	if (!object) {
 #ifdef DEBUG
 		kmemleak_warn("Freeing unknown object at 0x%08lx\n",
@@ -709,12 +724,12 @@ static void delete_object_full(unsigned long ptr)
  * delete it. If the memory block is partially freed, the function may create
  * additional metadata for the remaining parts of the block.
  */
-static void delete_object_part(unsigned long ptr, size_t size)
+static void delete_object_part(unsigned long ptr, size_t size, bool is_phys)
 {
 	struct kmemleak_object *object;
 	unsigned long start, end;
 
-	object = find_and_remove_object(ptr, 1);
+	object = find_and_remove_object(ptr, 1, is_phys);
 	if (!object) {
 #ifdef DEBUG
 		kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n",
@@ -756,11 +771,11 @@ static void paint_it(struct kmemleak_object *object, int color)
 	raw_spin_unlock_irqrestore(&object->lock, flags);
 }
 
-static void paint_ptr(unsigned long ptr, int color)
+static void paint_ptr(unsigned long ptr, int color, bool is_phys)
 {
 	struct kmemleak_object *object;
 
-	object = find_and_get_object(ptr, 0);
+	object = find_and_get_object(ptr, 0, is_phys);
 	if (!object) {
 		kmemleak_warn("Trying to color unknown object at 0x%08lx as %s\n",
 			      ptr,
@@ -776,18 +791,18 @@ static void paint_ptr(unsigned long ptr, int color)
  * Mark an object permanently as gray-colored so that it can no longer be
  * reported as a leak. This is used in general to mark a false positive.
  */
-static void make_gray_object(unsigned long ptr)
+static void make_gray_object(unsigned long ptr, bool is_phys)
 {
-	paint_ptr(ptr, KMEMLEAK_GREY);
+	paint_ptr(ptr, KMEMLEAK_GREY, is_phys);
 }
 
 /*
  * Mark the object as black-colored so that it is ignored from scans and
  * reporting.
  */
-static void make_black_object(unsigned long ptr)
+static void make_black_object(unsigned long ptr, bool is_phys)
 {
-	paint_ptr(ptr, KMEMLEAK_BLACK);
+	paint_ptr(ptr, KMEMLEAK_BLACK, is_phys);
 }
 
 /*
@@ -802,7 +817,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
 	unsigned long untagged_ptr;
 	unsigned long untagged_objp;
 
-	object = find_and_get_object(ptr, 1);
+	object = find_and_get_object(ptr, 1, false);
 	if (!object) {
 		kmemleak_warn("Adding scan area to unknown object at 0x%08lx\n",
 			      ptr);
@@ -852,7 +867,7 @@ static void object_set_excess_ref(unsigned long ptr, unsigned long excess_ref)
 	unsigned long flags;
 	struct kmemleak_object *object;
 
-	object = find_and_get_object(ptr, 0);
+	object = find_and_get_object(ptr, 0, false);
 	if (!object) {
 		kmemleak_warn("Setting excess_ref on unknown object at 0x%08lx\n",
 			      ptr);
@@ -875,7 +890,7 @@ static void object_no_scan(unsigned long ptr)
 	unsigned long flags;
 	struct kmemleak_object *object;
 
-	object = find_and_get_object(ptr, 0);
+	object = find_and_get_object(ptr, 0, false);
 	if (!object) {
 		kmemleak_warn("Not scanning unknown object at 0x%08lx\n", ptr);
 		return;
@@ -993,7 +1008,7 @@ void __ref kmemleak_free_part(const void *ptr, size_t size)
 	pr_debug("%s(0x%p)\n", __func__, ptr);
 
 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
-		delete_object_part((unsigned long)ptr, size);
+		delete_object_part((unsigned long)ptr, size, false);
 }
 EXPORT_SYMBOL_GPL(kmemleak_free_part);
 
@@ -1034,7 +1049,7 @@ void __ref kmemleak_update_trace(const void *ptr)
 	if (!kmemleak_enabled || IS_ERR_OR_NULL(ptr))
 		return;
 
-	object = find_and_get_object((unsigned long)ptr, 1);
+	object = find_and_get_object((unsigned long)ptr, 1, false);
 	if (!object) {
 #ifdef DEBUG
 		kmemleak_warn("Updating stack trace for unknown object at %p\n",
@@ -1063,7 +1078,7 @@ void __ref kmemleak_not_leak(const void *ptr)
 	pr_debug("%s(0x%p)\n", __func__, ptr);
 
 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
-		make_gray_object((unsigned long)ptr);
+		make_gray_object((unsigned long)ptr, false);
 }
 EXPORT_SYMBOL(kmemleak_not_leak);
 
@@ -1081,7 +1096,7 @@ void __ref kmemleak_ignore(const void *ptr)
 	pr_debug("%s(0x%p)\n", __func__, ptr);
 
 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
-		make_black_object((unsigned long)ptr);
+		make_black_object((unsigned long)ptr, false);
 }
 EXPORT_SYMBOL(kmemleak_ignore);
 
@@ -1275,7 +1290,7 @@ static void scan_block(void *_start, void *_end,
 		 * is still present in object_tree_root and object_list
 		 * (with updates protected by kmemleak_lock).
 		 */
-		object = lookup_object(pointer, 1);
+		object = lookup_object(pointer, 1, false);
 		if (!object)
 			continue;
 		if (object == scanned)
@@ -1299,7 +1314,7 @@ static void scan_block(void *_start, void *_end,
 		raw_spin_unlock(&object->lock);
 
 		if (excess_ref) {
-			object = lookup_object(excess_ref, 0);
+			object = lookup_object(excess_ref, 0, false);
 			if (!object)
 				continue;
 			if (object == scanned)
@@ -1728,7 +1743,7 @@ static int dump_str_object_info(const char *str)
 
 	if (kstrtoul(str, 0, &addr))
 		return -EINVAL;
-	object = find_and_get_object(addr, 0);
+	object = find_and_get_object(addr, 0, false);
 	if (!object) {
 		pr_info("Unknown object at 0x%08lx\n", addr);
 		return -EINVAL;
-- 
2.25.1



* [PATCH v2 3/4] mm: kmemleak: handle address stored in object based on its type
  2022-06-03  3:54 [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
  2022-06-03  3:54 ` [PATCH v2 1/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address Patrick Wang
  2022-06-03  3:54 ` [PATCH v2 2/4] mm: kmemleak: add rbtree " Patrick Wang
@ 2022-06-03  3:54 ` Patrick Wang
  2022-06-06 15:01   ` Catalin Marinas
  2022-06-03  3:54 ` [PATCH v2 4/4] mm: kmemleak: kmemleak_*_phys() set address type and check PA when scan Patrick Wang
                   ` (2 subsequent siblings)
  5 siblings, 1 reply; 18+ messages in thread
From: Patrick Wang @ 2022-06-03  3:54 UTC (permalink / raw)
  To: catalin.marinas, akpm; +Cc: linux-mm, linux-kernel, yee.lee, patrick.wang.shcn

Treat the address stored in an object differently according
to its type:

- Only use kasan_reset_tag for virtual addresses
- Only update min_addr and max_addr for virtual addresses
- Convert physical addresses to virtual addresses in scan_object
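The last bullet, deferring the PA-to-VA conversion to scan time, can be illustrated with a small userspace sketch; the direct-map offset below is purely illustrative, not the kernel's:

```c
#include <assert.h>
#include <stdbool.h>

#define OBJECT_PHYS (1 << 4)

/*
 * Stand-in for the kernel's __va(): a fixed direct-map offset. The real
 * offset is architecture-specific; 0x40000000 here is an assumption.
 */
#define DIRECT_MAP_OFFSET 0x40000000UL
static void *va_of(unsigned long phys)
{
	return (void *)(phys + DIRECT_MAP_OFFSET);
}

/* Mirrors the scan_object() hunk: translate PA objects before scanning. */
static void *scan_start(unsigned int flags, unsigned long pointer)
{
	return (flags & OBJECT_PHYS) ? va_of(pointer) : (void *)pointer;
}
```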

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
---
 mm/kmemleak.c | 34 ++++++++++++++++++++++++----------
 1 file changed, 24 insertions(+), 10 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 218144392446..246a70b7218f 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -297,7 +297,9 @@ static void hex_dump_object(struct seq_file *seq,
 	warn_or_seq_printf(seq, "  hex dump (first %zu bytes):\n", len);
 	kasan_disable_current();
 	warn_or_seq_hex_dump(seq, DUMP_PREFIX_NONE, HEX_ROW_SIZE,
-			     HEX_GROUP_SIZE, kasan_reset_tag((void *)ptr), len, HEX_ASCII);
+			     HEX_GROUP_SIZE, object->flags & OBJECT_PHYS ? ptr :
+			     kasan_reset_tag((void *)ptr),
+			     len, HEX_ASCII);
 	kasan_enable_current();
 }
 
@@ -389,14 +391,15 @@ static struct kmemleak_object *lookup_object(unsigned long ptr, int alias,
 {
 	struct rb_node *rb = is_phys ? object_phys_tree_root.rb_node :
 			     object_tree_root.rb_node;
-	unsigned long untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
+	unsigned long untagged_ptr = is_phys ? ptr : (unsigned long)kasan_reset_tag((void *)ptr);
 
 	while (rb) {
 		struct kmemleak_object *object;
 		unsigned long untagged_objp;
 
 		object = rb_entry(rb, struct kmemleak_object, rb_node);
-		untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
+		untagged_objp = is_phys ? object->pointer :
+				(unsigned long)kasan_reset_tag((void *)object->pointer);
 
 		if (untagged_ptr < untagged_objp)
 			rb = object->rb_node.rb_left;
@@ -643,16 +646,19 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 
 	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 
-	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
-	min_addr = min(min_addr, untagged_ptr);
-	max_addr = max(max_addr, untagged_ptr + size);
+	untagged_ptr = is_phys ? ptr : (unsigned long)kasan_reset_tag((void *)ptr);
+	if (!is_phys) {
+		min_addr = min(min_addr, untagged_ptr);
+		max_addr = max(max_addr, untagged_ptr + size);
+	}
 	link = is_phys ? &object_phys_tree_root.rb_node :
 		&object_tree_root.rb_node;
 	rb_parent = NULL;
 	while (*link) {
 		rb_parent = *link;
 		parent = rb_entry(rb_parent, struct kmemleak_object, rb_node);
-		untagged_objp = (unsigned long)kasan_reset_tag((void *)parent->pointer);
+		untagged_objp = is_phys ? parent->pointer :
+				(unsigned long)kasan_reset_tag((void *)parent->pointer);
 		if (untagged_ptr + size <= untagged_objp)
 			link = &parent->rb_node.rb_left;
 		else if (untagged_objp + parent->size <= untagged_ptr)
@@ -1202,7 +1208,9 @@ static bool update_checksum(struct kmemleak_object *object)
 
 	kasan_disable_current();
 	kcsan_disable_current();
-	object->checksum = crc32(0, kasan_reset_tag((void *)object->pointer), object->size);
+	object->checksum = crc32(0, object->flags & OBJECT_PHYS ? (void *)object->pointer :
+				    kasan_reset_tag((void *)object->pointer),
+				    object->size);
 	kasan_enable_current();
 	kcsan_enable_current();
 
@@ -1353,6 +1361,7 @@ static void scan_object(struct kmemleak_object *object)
 {
 	struct kmemleak_scan_area *area;
 	unsigned long flags;
+	void *obj_ptr;
 
 	/*
 	 * Once the object->lock is acquired, the corresponding memory block
@@ -1364,10 +1373,15 @@ static void scan_object(struct kmemleak_object *object)
 	if (!(object->flags & OBJECT_ALLOCATED))
 		/* already freed object */
 		goto out;
+
+	obj_ptr = object->flags & OBJECT_PHYS ?
+		  __va((void *)object->pointer) :
+		  (void *)object->pointer;
+
 	if (hlist_empty(&object->area_list) ||
 	    object->flags & OBJECT_FULL_SCAN) {
-		void *start = (void *)object->pointer;
-		void *end = (void *)(object->pointer + object->size);
+		void *start = obj_ptr;
+		void *end = obj_ptr + object->size;
 		void *next;
 
 		do {
-- 
2.25.1



* [PATCH v2 4/4] mm: kmemleak: kmemleak_*_phys() set address type and check PA when scan
  2022-06-03  3:54 [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
                   ` (2 preceding siblings ...)
  2022-06-03  3:54 ` [PATCH v2 3/4] mm: kmemleak: handle address stored in object based on its type Patrick Wang
@ 2022-06-03  3:54 ` Patrick Wang
  2022-06-06 15:29   ` Catalin Marinas
  2022-06-03 11:01 ` [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check " Catalin Marinas
  2022-06-08  2:46 ` Kuan-Ying Lee
  5 siblings, 1 reply; 18+ messages in thread
From: Patrick Wang @ 2022-06-03  3:54 UTC (permalink / raw)
  To: catalin.marinas, akpm; +Cc: linux-mm, linux-kernel, yee.lee, patrick.wang.shcn

Make the kmemleak_*_phys() functions call the next-level
interface with the address type set to physical. The physical
addresses of objects are then boundary-checked during scanning
instead of in kmemleak_*_phys().
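That scan-time boundary check can be reduced to a small predicate (a simplified sketch: PAGE_SHIFT of 12 is assumed, and the actual code paints failing objects black rather than returning a bool):

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Simplified version of the check added to kmemleak_scan(): an
 * OBJECT_PHYS object whose physical range is not fully inside
 * [min_low_pfn, max_low_pfn) must not be scanned.
 */
#define PAGE_SHIFT 12  /* 4K pages, assumed for the sketch */
#define PHYS_PFN(x) ((unsigned long)(x) >> PAGE_SHIFT)

static bool outside_lowmem(unsigned long phys, unsigned long size,
			   unsigned long min_low_pfn,
			   unsigned long max_low_pfn)
{
	return PHYS_PFN(phys) < min_low_pfn ||
	       PHYS_PFN(phys + size) >= max_low_pfn;
}
```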

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
---
 mm/kmemleak.c | 37 +++++++++++++++++++++++++++++--------
 1 file changed, 29 insertions(+), 8 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 246a70b7218f..62d1ad8f8a44 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1156,8 +1156,12 @@ EXPORT_SYMBOL(kmemleak_no_scan);
 void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
 			       gfp_t gfp)
 {
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_alloc(__va(phys), size, min_count, gfp);
+	pr_debug("%s(0x%pa, %zu, %d)\n", __func__, &phys, size, min_count);
+
+	if (kmemleak_enabled && !min_count)
+		/* create object with OBJECT_PHYS flag */
+		create_object((unsigned long)phys, size, min_count,
+			      gfp, true);
 }
 EXPORT_SYMBOL(kmemleak_alloc_phys);
 
@@ -1170,8 +1174,10 @@ EXPORT_SYMBOL(kmemleak_alloc_phys);
  */
 void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
 {
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_free_part(__va(phys), size);
+	pr_debug("%s(0x%pa)\n", __func__, &phys);
+
+	if (kmemleak_enabled)
+		delete_object_part((unsigned long)phys, size, true);
 }
 EXPORT_SYMBOL(kmemleak_free_part_phys);
 
@@ -1182,8 +1188,10 @@ EXPORT_SYMBOL(kmemleak_free_part_phys);
  */
 void __ref kmemleak_not_leak_phys(phys_addr_t phys)
 {
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_not_leak(__va(phys));
+	pr_debug("%s(0x%pa)\n", __func__, &phys);
+
+	if (kmemleak_enabled)
+		make_gray_object((unsigned long)phys, true);
 }
 EXPORT_SYMBOL(kmemleak_not_leak_phys);
 
@@ -1194,8 +1202,10 @@ EXPORT_SYMBOL(kmemleak_not_leak_phys);
  */
 void __ref kmemleak_ignore_phys(phys_addr_t phys)
 {
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_ignore(__va(phys));
+	pr_debug("%s(0x%pa)\n", __func__, &phys);
+
+	if (kmemleak_enabled)
+		make_black_object((unsigned long)phys, true);
 }
 EXPORT_SYMBOL(kmemleak_ignore_phys);
 
@@ -1468,6 +1478,17 @@ static void kmemleak_scan(void)
 			dump_object_info(object);
 		}
 #endif
+
+		/* ignore objects outside lowmem (paint them black) */
+		if ((object->flags & OBJECT_PHYS) &&
+		   !(object->flags & OBJECT_NO_SCAN)) {
+			unsigned long phys = object->pointer;
+
+			if (PHYS_PFN(phys) < min_low_pfn ||
+			    PHYS_PFN(phys + object->size) >= max_low_pfn)
+				__paint_it(object, KMEMLEAK_BLACK);
+		}
+
 		/* reset the reference count (whiten the object) */
 		object->count = 0;
 		if (color_gray(object) && get_object(object))
-- 
2.25.1



* Re: [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan
  2022-06-03  3:54 [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
                   ` (3 preceding siblings ...)
  2022-06-03  3:54 ` [PATCH v2 4/4] mm: kmemleak: kmemleak_*_phys() set address type and check PA when scan Patrick Wang
@ 2022-06-03 11:01 ` Catalin Marinas
  2022-06-04  3:01   ` patrick wang
  2022-06-08  2:46 ` Kuan-Ying Lee
  5 siblings, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2022-06-03 11:01 UTC (permalink / raw)
  To: Patrick Wang; +Cc: akpm, linux-mm, linux-kernel, yee.lee

On Fri, Jun 03, 2022 at 11:54:11AM +0800, Patrick Wang wrote:
> Patrick Wang (4):
>   mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical
>     address
>   mm: kmemleak: add rbtree for objects allocated with physical address
>   mm: kmemleak: handle address stored in object based on its type
>   mm: kmemleak: kmemleak_*_phys() set address type and check PA when
>     scan

This looks fine at a very quick look but I'll do an in-depth review next
week. One more thing needed is to remove the min_count argument to
kmemleak_alloc_phys() and assume it's always 0. After this series we
can't track them for leaking anyway.

Thanks for putting this together.

-- 
Catalin


* Re: [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan
  2022-06-03 11:01 ` [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check " Catalin Marinas
@ 2022-06-04  3:01   ` patrick wang
  0 siblings, 0 replies; 18+ messages in thread
From: patrick wang @ 2022-06-04  3:01 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: Andrew Morton, open list:MEMORY MANAGEMENT, linux-kernel, Yee Lee

On Fri, Jun 3, 2022 at 7:01 PM Catalin Marinas <catalin.marinas@arm.com> wrote:
>
> On Fri, Jun 03, 2022 at 11:54:11AM +0800, Patrick Wang wrote:
> > Patrick Wang (4):
> >   mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical
> >     address
> >   mm: kmemleak: add rbtree for objects allocated with physical address
> >   mm: kmemleak: handle address stored in object based on its type
> >   mm: kmemleak: kmemleak_*_phys() set address type and check PA when
> >     scan
>
> This looks fine at a very quick look but I'll do an in-depth review next
> week. One more thing needed is to remove the min_count argument to
> kmemleak_alloc_phys() and assume it's always 0. After this series we
> can't track them for leaking anyway.

Will do in the next version.

Thanks,
Patrick


* Re: [PATCH v2 1/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address
  2022-06-03  3:54 ` [PATCH v2 1/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address Patrick Wang
@ 2022-06-06 11:55   ` Catalin Marinas
  2022-06-07 14:32     ` Patrick Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2022-06-06 11:55 UTC (permalink / raw)
  To: Patrick Wang; +Cc: akpm, linux-mm, linux-kernel, yee.lee

On Fri, Jun 03, 2022 at 11:54:12AM +0800, Patrick Wang wrote:
> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index a182f5ddaf68..1e9e0aa93ae5 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -172,6 +172,8 @@ struct kmemleak_object {
>  #define OBJECT_NO_SCAN		(1 << 2)
>  /* flag set to fully scan the object when scan_area allocation failed */
>  #define OBJECT_FULL_SCAN	(1 << 3)
> +/* flag set for object allocated with physical address */
> +#define OBJECT_PHYS		(1 << 4)
>  
>  #define HEX_PREFIX		"    "
>  /* number of bytes to print per line; must be 16 or 32 */
> @@ -575,7 +577,8 @@ static int __save_stack_trace(unsigned long *trace)
>   * memory block and add it to the object_list and object_tree_root.
>   */
>  static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
> -					     int min_count, gfp_t gfp)
> +					     int min_count, gfp_t gfp,
> +					     bool is_phys)

The patch looks fine, but I wonder whether we should have different
functions for is_phys true/false; we may end up with fewer changes overall
since most places simply pass is_phys == false:

static struct kmemleak_object *__create_object(unsigned long ptr, size_t size,
					       int min_count, gfp_t gfp,
					       bool is_phys);

static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
					     int min_count, gfp_t gfp)
{
	return __create_object(ptr, size, min_count, gfp, false);
}

static struct kmemleak_object *create_object_phys(unsigned long ptr, size_t size,
						  int min_count, gfp_t gfp)
{
	return __create_object(ptr, size, min_count, gfp, true);
}

Same for the other patches that change a few more functions.

-- 
Catalin


* Re: [PATCH v2 2/4] mm: kmemleak: add rbtree for objects allocated with physical address
  2022-06-03  3:54 ` [PATCH v2 2/4] mm: kmemleak: add rbtree " Patrick Wang
@ 2022-06-06 14:38   ` Catalin Marinas
  2022-06-07 14:34     ` Patrick Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2022-06-06 14:38 UTC (permalink / raw)
  To: Patrick Wang; +Cc: akpm, linux-mm, linux-kernel, yee.lee

On Fri, Jun 03, 2022 at 11:54:13AM +0800, Patrick Wang wrote:
> @@ -536,27 +543,32 @@ static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
>  }
>  
>  /*
> - * Remove an object from the object_tree_root and object_list. Must be called
> - * with the kmemleak_lock held _if_ kmemleak is still enabled.
> + * Remove an object from the object_tree_root (or object_phys_tree_root)
> + * and object_list. Must be called with the kmemleak_lock held _if_ kmemleak
> + * is still enabled.
>   */
>  static void __remove_object(struct kmemleak_object *object)
>  {
> -	rb_erase(&object->rb_node, &object_tree_root);
> +	rb_erase(&object->rb_node, object->flags & OBJECT_PHYS ?
> +				   &object_phys_tree_root :
> +				   &object_tree_root);

This pattern appears in a few places; I guess it's better with a macro,
say get_object_tree_root(object). But see how many are left, as I have some
comments below on reducing the diff.

> @@ -709,12 +724,12 @@ static void delete_object_full(unsigned long ptr)
>   * delete it. If the memory block is partially freed, the function may create
>   * additional metadata for the remaining parts of the block.
>   */
> -static void delete_object_part(unsigned long ptr, size_t size)
> +static void delete_object_part(unsigned long ptr, size_t size, bool is_phys)
>  {
>  	struct kmemleak_object *object;
>  	unsigned long start, end;
>  
> -	object = find_and_remove_object(ptr, 1);
> +	object = find_and_remove_object(ptr, 1, is_phys);
>  	if (!object) {
>  #ifdef DEBUG
>  		kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n",

The previous patch introduced a check on object->flags for
delete_object_part(). I think you can just use is_phys directly now when
calling create_object().

> @@ -756,11 +771,11 @@ static void paint_it(struct kmemleak_object *object, int color)
>  	raw_spin_unlock_irqrestore(&object->lock, flags);
>  }
>  
> -static void paint_ptr(unsigned long ptr, int color)
> +static void paint_ptr(unsigned long ptr, int color, bool is_phys)
>  {
>  	struct kmemleak_object *object;
>  
> -	object = find_and_get_object(ptr, 0);
> +	object = find_and_get_object(ptr, 0, is_phys);
>  	if (!object) {
>  		kmemleak_warn("Trying to color unknown object at 0x%08lx as %s\n",
>  			      ptr,
> @@ -776,18 +791,18 @@ static void paint_ptr(unsigned long ptr, int color)
>   * Mark an object permanently as gray-colored so that it can no longer be
>   * reported as a leak. This is used in general to mark a false positive.
>   */
> -static void make_gray_object(unsigned long ptr)
> +static void make_gray_object(unsigned long ptr, bool is_phys)
>  {
> -	paint_ptr(ptr, KMEMLEAK_GREY);
> +	paint_ptr(ptr, KMEMLEAK_GREY, is_phys);
>  }
>  
>  /*
>   * Mark the object as black-colored so that it is ignored from scans and
>   * reporting.
>   */
> -static void make_black_object(unsigned long ptr)
> +static void make_black_object(unsigned long ptr, bool is_phys)
>  {
> -	paint_ptr(ptr, KMEMLEAK_BLACK);
> +	paint_ptr(ptr, KMEMLEAK_BLACK, is_phys);
>  }

None of these functions would need an 'is_phys' argument if we make
kmemleak_alloc_phys() always create gray objects (do this as one of
the first patches in the series).

>  
>  /*
> @@ -802,7 +817,7 @@ static void add_scan_area(unsigned long ptr, size_t size, gfp_t gfp)
>  	unsigned long untagged_ptr;
>  	unsigned long untagged_objp;
>  
> -	object = find_and_get_object(ptr, 1);
> +	object = find_and_get_object(ptr, 1, false);
>  	if (!object) {
>  		kmemleak_warn("Adding scan area to unknown object at 0x%08lx\n",
>  			      ptr);
> @@ -852,7 +867,7 @@ static void object_set_excess_ref(unsigned long ptr, unsigned long excess_ref)
>  	unsigned long flags;
>  	struct kmemleak_object *object;
>  
> -	object = find_and_get_object(ptr, 0);
> +	object = find_and_get_object(ptr, 0, false);
>  	if (!object) {
>  		kmemleak_warn("Setting excess_ref on unknown object at 0x%08lx\n",
>  			      ptr);
> @@ -875,7 +890,7 @@ static void object_no_scan(unsigned long ptr)
>  	unsigned long flags;
>  	struct kmemleak_object *object;
>  
> -	object = find_and_get_object(ptr, 0);
> +	object = find_and_get_object(ptr, 0, false);
>  	if (!object) {
>  		kmemleak_warn("Not scanning unknown object at 0x%08lx\n", ptr);
>  		return;

Same for these.

> @@ -993,7 +1008,7 @@ void __ref kmemleak_free_part(const void *ptr, size_t size)
>  	pr_debug("%s(0x%p)\n", __func__, ptr);
>  
>  	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
> -		delete_object_part((unsigned long)ptr, size);
> +		delete_object_part((unsigned long)ptr, size, false);
>  }
>  EXPORT_SYMBOL_GPL(kmemleak_free_part);
>  
> @@ -1034,7 +1049,7 @@ void __ref kmemleak_update_trace(const void *ptr)
>  	if (!kmemleak_enabled || IS_ERR_OR_NULL(ptr))
>  		return;
>  
> -	object = find_and_get_object((unsigned long)ptr, 1);
> +	object = find_and_get_object((unsigned long)ptr, 1, false);
>  	if (!object) {
>  #ifdef DEBUG
>  		kmemleak_warn("Updating stack trace for unknown object at %p\n",
> @@ -1063,7 +1078,7 @@ void __ref kmemleak_not_leak(const void *ptr)
>  	pr_debug("%s(0x%p)\n", __func__, ptr);
>  
>  	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
> -		make_gray_object((unsigned long)ptr);
> +		make_gray_object((unsigned long)ptr, false);
>  }
>  EXPORT_SYMBOL(kmemleak_not_leak);
>  
> @@ -1081,7 +1096,7 @@ void __ref kmemleak_ignore(const void *ptr)
>  	pr_debug("%s(0x%p)\n", __func__, ptr);
>  
>  	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
> -		make_black_object((unsigned long)ptr);
> +		make_black_object((unsigned long)ptr, false);
>  }
>  EXPORT_SYMBOL(kmemleak_ignore);

If we avoid changing make_*_object(), we won't need these anymore.

> @@ -1275,7 +1290,7 @@ static void scan_block(void *_start, void *_end,
>  		 * is still present in object_tree_root and object_list
>  		 * (with updates protected by kmemleak_lock).
>  		 */
> -		object = lookup_object(pointer, 1);
> +		object = lookup_object(pointer, 1, false);
>  		if (!object)
>  			continue;
>  		if (object == scanned)
> @@ -1299,7 +1314,7 @@ static void scan_block(void *_start, void *_end,
>  		raw_spin_unlock(&object->lock);
>  
>  		if (excess_ref) {
> -			object = lookup_object(excess_ref, 0);
> +			object = lookup_object(excess_ref, 0, false);
>  			if (!object)
>  				continue;
>  			if (object == scanned)
> @@ -1728,7 +1743,7 @@ static int dump_str_object_info(const char *str)
>  
>  	if (kstrtoul(str, 0, &addr))
>  		return -EINVAL;
> -	object = find_and_get_object(addr, 0);
> +	object = find_and_get_object(addr, 0, false);
>  	if (!object) {
>  		pr_info("Unknown object at 0x%08lx\n", addr);
>  		return -EINVAL;

I think find_and_get_object() is never called on a phys object, so you
can probably simplify these a bit. Just add an is_phys argument where
strictly necessary and maybe even add a separate function like
lookup_object_phys() to reduce the other changes.

-- 
Catalin


* Re: [PATCH v2 3/4] mm: kmemleak: handle address stored in object based on its type
  2022-06-03  3:54 ` [PATCH v2 3/4] mm: kmemleak: handle address stored in object based on its type Patrick Wang
@ 2022-06-06 15:01   ` Catalin Marinas
  2022-06-07 14:36     ` Patrick Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2022-06-06 15:01 UTC (permalink / raw)
  To: Patrick Wang; +Cc: akpm, linux-mm, linux-kernel, yee.lee

On Fri, Jun 03, 2022 at 11:54:14AM +0800, Patrick Wang wrote:
> Treat the address stored in object in different way according
> to its type:
> 
> - Only use kasan_reset_tag for virtual address
> - Only update min_addr and max_addr for virtual address
> - Convert physical address to virtual address in scan_object
> 
> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
> ---
>  mm/kmemleak.c | 34 ++++++++++++++++++++++++----------
>  1 file changed, 24 insertions(+), 10 deletions(-)
> 
> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index 218144392446..246a70b7218f 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -297,7 +297,9 @@ static void hex_dump_object(struct seq_file *seq,
>  	warn_or_seq_printf(seq, "  hex dump (first %zu bytes):\n", len);
>  	kasan_disable_current();
>  	warn_or_seq_hex_dump(seq, DUMP_PREFIX_NONE, HEX_ROW_SIZE,
> -			     HEX_GROUP_SIZE, kasan_reset_tag((void *)ptr), len, HEX_ASCII);
> +			     HEX_GROUP_SIZE, object->flags & OBJECT_PHYS ? ptr :
> +			     kasan_reset_tag((void *)ptr),
> +			     len, HEX_ASCII);
>  	kasan_enable_current();
>  }

This will go wrong since ptr is the actual physical address, it cannot
be dereferenced. This should only be used on virtual pointers and this
is the case already as we never print unreferenced objects from the phys
tree. What we could do though is something like an early exit from this
function (together with a comment that it doesn't support dumping such
objects):

	if (WARN_ON_ONCE(object->flags & OBJECT_PHYS))
		return;

>  
> @@ -389,14 +391,15 @@ static struct kmemleak_object *lookup_object(unsigned long ptr, int alias,
>  {
>  	struct rb_node *rb = is_phys ? object_phys_tree_root.rb_node :
>  			     object_tree_root.rb_node;
> -	unsigned long untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
> +	unsigned long untagged_ptr = is_phys ? ptr : (unsigned long)kasan_reset_tag((void *)ptr);
>  
>  	while (rb) {
>  		struct kmemleak_object *object;
>  		unsigned long untagged_objp;
>  
>  		object = rb_entry(rb, struct kmemleak_object, rb_node);
> -		untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
> +		untagged_objp = is_phys ? object->pointer :
> +				(unsigned long)kasan_reset_tag((void *)object->pointer);
>  
>  		if (untagged_ptr < untagged_objp)
>  			rb = object->rb_node.rb_left;

You could leave this unchanged. A phys pointer is already untagged, so
it wouldn't make any difference.

> @@ -643,16 +646,19 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
>  
>  	raw_spin_lock_irqsave(&kmemleak_lock, flags);
>  
> -	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
> -	min_addr = min(min_addr, untagged_ptr);
> -	max_addr = max(max_addr, untagged_ptr + size);
> +	untagged_ptr = is_phys ? ptr : (unsigned long)kasan_reset_tag((void *)ptr);

Same here.

> +	if (!is_phys) {
> +		min_addr = min(min_addr, untagged_ptr);
> +		max_addr = max(max_addr, untagged_ptr + size);
> +	}
>  	link = is_phys ? &object_phys_tree_root.rb_node :
>  		&object_tree_root.rb_node;
>  	rb_parent = NULL;
>  	while (*link) {
>  		rb_parent = *link;
>  		parent = rb_entry(rb_parent, struct kmemleak_object, rb_node);
> -		untagged_objp = (unsigned long)kasan_reset_tag((void *)parent->pointer);
> +		untagged_objp = is_phys ? parent->pointer :
> +				(unsigned long)kasan_reset_tag((void *)parent->pointer);

And here.

>  		if (untagged_ptr + size <= untagged_objp)
>  			link = &parent->rb_node.rb_left;
>  		else if (untagged_objp + parent->size <= untagged_ptr)
> @@ -1202,7 +1208,9 @@ static bool update_checksum(struct kmemleak_object *object)
>  
>  	kasan_disable_current();
>  	kcsan_disable_current();
> -	object->checksum = crc32(0, kasan_reset_tag((void *)object->pointer), object->size);
> +	object->checksum = crc32(0, object->flags & OBJECT_PHYS ? (void *)object->pointer :
> +				    kasan_reset_tag((void *)object->pointer),
> +				    object->size);

Luckily that's never called on a phys object, otherwise *object->pointer
would segfault. As for hex_dump, just return early with a warning if
that's the case.

>  	kasan_enable_current();
>  	kcsan_enable_current();
>  
> @@ -1353,6 +1361,7 @@ static void scan_object(struct kmemleak_object *object)
>  {
>  	struct kmemleak_scan_area *area;
>  	unsigned long flags;
> +	void *obj_ptr;
>  
>  	/*
>  	 * Once the object->lock is acquired, the corresponding memory block
> @@ -1364,10 +1373,15 @@ static void scan_object(struct kmemleak_object *object)
>  	if (!(object->flags & OBJECT_ALLOCATED))
>  		/* already freed object */
>  		goto out;
> +
> +	obj_ptr = object->flags & OBJECT_PHYS ?
> +		  __va((void *)object->pointer) :
> +		  (void *)object->pointer;
> +
>  	if (hlist_empty(&object->area_list) ||
>  	    object->flags & OBJECT_FULL_SCAN) {
> -		void *start = (void *)object->pointer;
> -		void *end = (void *)(object->pointer + object->size);
> +		void *start = obj_ptr;
> +		void *end = obj_ptr + object->size;
>  		void *next;
>  
>  		do {

This looks fine, assuming that the following patch adds the checks for
objects above max_low_pfn (I haven't got there yet).

-- 
Catalin


* Re: [PATCH v2 4/4] mm: kmemleak: kmemleak_*_phys() set address type and check PA when scan
  2022-06-03  3:54 ` [PATCH v2 4/4] mm: kmemleak: kmemleak_*_phys() set address type and check PA when scan Patrick Wang
@ 2022-06-06 15:29   ` Catalin Marinas
  2022-06-07 14:37     ` Patrick Wang
  0 siblings, 1 reply; 18+ messages in thread
From: Catalin Marinas @ 2022-06-06 15:29 UTC (permalink / raw)
  To: Patrick Wang; +Cc: akpm, linux-mm, linux-kernel, yee.lee

On Fri, Jun 03, 2022 at 11:54:15AM +0800, Patrick Wang wrote:
> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index 246a70b7218f..62d1ad8f8a44 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -1156,8 +1156,12 @@ EXPORT_SYMBOL(kmemleak_no_scan);
>  void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
>  			       gfp_t gfp)
>  {
> -	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
> -		kmemleak_alloc(__va(phys), size, min_count, gfp);
> +	pr_debug("%s(0x%pa, %zu, %d)\n", __func__, &phys, size, min_count);
> +
> +	if (kmemleak_enabled && !min_count)
> +		/* create object with OBJECT_PHYS flag */
> +		create_object((unsigned long)phys, size, min_count,
> +			      gfp, true);
>  }

In an early patch, just drop min_count altogether from this API and
assume 0.

>  EXPORT_SYMBOL(kmemleak_alloc_phys);
>  
> @@ -1170,8 +1174,10 @@ EXPORT_SYMBOL(kmemleak_alloc_phys);
>   */
>  void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
>  {
> -	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
> -		kmemleak_free_part(__va(phys), size);
> +	pr_debug("%s(0x%pa)\n", __func__, &phys);
> +
> +	if (kmemleak_enabled)
> +		delete_object_part((unsigned long)phys, size, true);
>  }
>  EXPORT_SYMBOL(kmemleak_free_part_phys);
>  
> @@ -1182,8 +1188,10 @@ EXPORT_SYMBOL(kmemleak_free_part_phys);
>   */
>  void __ref kmemleak_not_leak_phys(phys_addr_t phys)
>  {
> -	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
> -		kmemleak_not_leak(__va(phys));
> +	pr_debug("%s(0x%pa)\n", __func__, &phys);
> +
> +	if (kmemleak_enabled)
> +		make_gray_object((unsigned long)phys, true);
>  }
>  EXPORT_SYMBOL(kmemleak_not_leak_phys);

This function doesn't have any callers, so please remove it.

> @@ -1194,8 +1202,10 @@ EXPORT_SYMBOL(kmemleak_not_leak_phys);
>   */
>  void __ref kmemleak_ignore_phys(phys_addr_t phys)
>  {
> -	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
> -		kmemleak_ignore(__va(phys));
> +	pr_debug("%s(0x%pa)\n", __func__, &phys);
> +
> +	if (kmemleak_enabled)
> +		make_black_object((unsigned long)phys, true);
>  }
>  EXPORT_SYMBOL(kmemleak_ignore_phys);

Ah, that's still in use and we do need make_black_object(), contrary to
what I commented on a previous patch. We can still avoid changing
make_gray_object().

(we could replace the kmemleak_ignore_phys() calls with
kmemleak_free_part_phys(), but that's not in line with what we do for the
virtual objects)

> @@ -1468,6 +1478,17 @@ static void kmemleak_scan(void)
>  			dump_object_info(object);
>  		}
>  #endif
> +
> +		/* ignore objects outside lowmem (paint them black) */
> +		if ((object->flags & OBJECT_PHYS) &&
> +		   !(object->flags & OBJECT_NO_SCAN)) {
> +			unsigned long phys = object->pointer;
> +
> +			if (PHYS_PFN(phys) < min_low_pfn ||
> +			    PHYS_PFN(phys + object->size) >= max_low_pfn)
> +				__paint_it(object, KMEMLEAK_BLACK);
> +		}
> +
>  		/* reset the reference count (whiten the object) */
>  		object->count = 0;
>  		if (color_gray(object) && get_object(object))

This looks fine.

-- 
Catalin


* Re: [PATCH v2 1/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address
  2022-06-06 11:55   ` Catalin Marinas
@ 2022-06-07 14:32     ` Patrick Wang
  2022-06-09  9:54       ` Catalin Marinas
  0 siblings, 1 reply; 18+ messages in thread
From: Patrick Wang @ 2022-06-07 14:32 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: akpm, linux-mm, linux-kernel, yee.lee



On 2022/6/6 19:55, Catalin Marinas wrote:
> On Fri, Jun 03, 2022 at 11:54:12AM +0800, Patrick Wang wrote:
>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>> index a182f5ddaf68..1e9e0aa93ae5 100644
>> --- a/mm/kmemleak.c
>> +++ b/mm/kmemleak.c
>> @@ -172,6 +172,8 @@ struct kmemleak_object {
>>   #define OBJECT_NO_SCAN		(1 << 2)
>>   /* flag set to fully scan the object when scan_area allocation failed */
>>   #define OBJECT_FULL_SCAN	(1 << 3)
>> +/* flag set for object allocated with physical address */
>> +#define OBJECT_PHYS		(1 << 4)
>>   
>>   #define HEX_PREFIX		"    "
>>   /* number of bytes to print per line; must be 16 or 32 */
>> @@ -575,7 +577,8 @@ static int __save_stack_trace(unsigned long *trace)
>>    * memory block and add it to the object_list and object_tree_root.
>>    */
>>   static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
>> -					     int min_count, gfp_t gfp)
>> +					     int min_count, gfp_t gfp,
>> +					     bool is_phys)
> 
> The patch looks fine but I wonder whether we should have different
> functions for is_phys true/false, we may end up fewer changes overall
> since most places simply pass is_phys == false:

This should be better. Will do.

> 
> static struct kmemleak_object *__create_object(unsigned long ptr, size_t size,
> 					       int min_count, gfp_t gfp,
> 					       bool is_phys);
> 
> static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
> 					     int min_count, gfp_t gfp)
> {
> 	return __create_object(ptr, size, min_count, gfp, false);
> }
> 
> static struct kmemleak_object *create_object_phys(unsigned long ptr, size_t size,
> 						  int min_count, gfp_t gfp)
> {
> 	return __create_object(ptr, size, min_count, gfp, true);
> }
> 
> Same for the other patches that change a few more functions.

Since the physical objects are only ever used as gray objects, the
changes in the irrelevant places will be removed.

The leak check could in principle also be applied to physical objects:
converting the block address from virtual to physical before the lookup
should make this work (it is useless currently). I think we'd better be
aware of this.

Thanks,
Patrick



* Re: [PATCH v2 2/4] mm: kmemleak: add rbtree for objects allocated with physical address
  2022-06-06 14:38   ` Catalin Marinas
@ 2022-06-07 14:34     ` Patrick Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Patrick Wang @ 2022-06-07 14:34 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: akpm, linux-mm, linux-kernel, yee.lee



On 2022/6/6 22:38, Catalin Marinas wrote:
> On Fri, Jun 03, 2022 at 11:54:13AM +0800, Patrick Wang wrote:
>> @@ -536,27 +543,32 @@ static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
>>   }
>>   
>>   /*
>> - * Remove an object from the object_tree_root and object_list. Must be called
>> - * with the kmemleak_lock held _if_ kmemleak is still enabled.
>> + * Remove an object from the object_tree_root (or object_phys_tree_root)
>> + * and object_list. Must be called with the kmemleak_lock held _if_ kmemleak
>> + * is still enabled.
>>    */
>>   static void __remove_object(struct kmemleak_object *object)
>>   {
>> -	rb_erase(&object->rb_node, &object_tree_root);
>> +	rb_erase(&object->rb_node, object->flags & OBJECT_PHYS ?
>> +				   &object_phys_tree_root :
>> +				   &object_tree_root);
> 
> This pattern appears in a few place, I guess it's better with a macro,
> say get_object_tree_root(object). But see how many are left, I have some
> comments below on reducing the diff.

Will do.

> 
>> @@ -709,12 +724,12 @@ static void delete_object_full(unsigned long ptr)
>>    * delete it. If the memory block is partially freed, the function may create
>>    * additional metadata for the remaining parts of the block.
>>    */
>> -static void delete_object_part(unsigned long ptr, size_t size)
>> +static void delete_object_part(unsigned long ptr, size_t size, bool is_phys)
>>   {
>>   	struct kmemleak_object *object;
>>   	unsigned long start, end;
>>   
>> -	object = find_and_remove_object(ptr, 1);
>> +	object = find_and_remove_object(ptr, 1, is_phys);
>>   	if (!object) {
>>   #ifdef DEBUG
>>   		kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n",
> 
> The previous patch introduced a check on object->flags for
> delete_object_part(). I think you can just use is_phys directly now when
> calling create_object().

Will do.

> 
>> @@ -1275,7 +1290,7 @@ static void scan_block(void *_start, void *_end,
>>   		 * is still present in object_tree_root and object_list
>>   		 * (with updates protected by kmemleak_lock).
>>   		 */
>> -		object = lookup_object(pointer, 1);
>> +		object = lookup_object(pointer, 1, false);
>>   		if (!object)
>>   			continue;
>>   		if (object == scanned)
>> @@ -1299,7 +1314,7 @@ static void scan_block(void *_start, void *_end,
>>   		raw_spin_unlock(&object->lock);
>>   
>>   		if (excess_ref) {
>> -			object = lookup_object(excess_ref, 0);
>> +			object = lookup_object(excess_ref, 0, false);
>>   			if (!object)
>>   				continue;
>>   			if (object == scanned)
>> @@ -1728,7 +1743,7 @@ static int dump_str_object_info(const char *str)
>>   
>>   	if (kstrtoul(str, 0, &addr))
>>   		return -EINVAL;
>> -	object = find_and_get_object(addr, 0);
>> +	object = find_and_get_object(addr, 0, false);
>>   	if (!object) {
>>   		pr_info("Unknown object at 0x%08lx\n", addr);
>>   		return -EINVAL;
> 
> I think find_and_get_object() is never called on a phys object, so you
> can probably simplify these a bit. Just add an is_phys argument where
> strictly necessary and maybe even add a separate function like
> lookup_object_phys() to reduce the other changes.

Will add a lookup_object_phys() function and a find_and_get_object_phys()
function, since find_and_get_object() is also called in many places.

Thanks,
Patrick



* Re: [PATCH v2 3/4] mm: kmemleak: handle address stored in object based on its type
  2022-06-06 15:01   ` Catalin Marinas
@ 2022-06-07 14:36     ` Patrick Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Patrick Wang @ 2022-06-07 14:36 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: akpm, linux-mm, linux-kernel, yee.lee



On 2022/6/6 23:01, Catalin Marinas wrote:
> On Fri, Jun 03, 2022 at 11:54:14AM +0800, Patrick Wang wrote:
>> Treat the address stored in object in different way according
>> to its type:
>>
>> - Only use kasan_reset_tag for virtual address
>> - Only update min_addr and max_addr for virtual address
>> - Convert physical address to virtual address in scan_object
>>
>> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
>> Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
>> ---
>>   mm/kmemleak.c | 34 ++++++++++++++++++++++++----------
>>   1 file changed, 24 insertions(+), 10 deletions(-)
>>
>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>> index 218144392446..246a70b7218f 100644
>> --- a/mm/kmemleak.c
>> +++ b/mm/kmemleak.c
>> @@ -297,7 +297,9 @@ static void hex_dump_object(struct seq_file *seq,
>>   	warn_or_seq_printf(seq, "  hex dump (first %zu bytes):\n", len);
>>   	kasan_disable_current();
>>   	warn_or_seq_hex_dump(seq, DUMP_PREFIX_NONE, HEX_ROW_SIZE,
>> -			     HEX_GROUP_SIZE, kasan_reset_tag((void *)ptr), len, HEX_ASCII);
>> +			     HEX_GROUP_SIZE, object->flags & OBJECT_PHYS ? ptr :
>> +			     kasan_reset_tag((void *)ptr),
>> +			     len, HEX_ASCII);
>>   	kasan_enable_current();
>>   }
> 
> This will go wrong since ptr is the actual physical address, it cannot
> be dereferenced. This should only be used on virtual pointers and this
> is the case already as we never print unreferenced objects from the phys
> tree. What we could do though is something like an early exit from this
> function (together with a comment that it doesn't support dumping such
> objects):
> 
> 	if (WARN_ON_ONCE(object->flags & OBJECT_PHYS))
> 		return;
> 

I also found this. Will do.

>>   
>> @@ -389,14 +391,15 @@ static struct kmemleak_object *lookup_object(unsigned long ptr, int alias,
>>   {
>>   	struct rb_node *rb = is_phys ? object_phys_tree_root.rb_node :
>>   			     object_tree_root.rb_node;
>> -	unsigned long untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
>> +	unsigned long untagged_ptr = is_phys ? ptr : (unsigned long)kasan_reset_tag((void *)ptr);
>>   
>>   	while (rb) {
>>   		struct kmemleak_object *object;
>>   		unsigned long untagged_objp;
>>   
>>   		object = rb_entry(rb, struct kmemleak_object, rb_node);
>> -		untagged_objp = (unsigned long)kasan_reset_tag((void *)object->pointer);
>> +		untagged_objp = is_phys ? object->pointer :
>> +				(unsigned long)kasan_reset_tag((void *)object->pointer);
>>   
>>   		if (untagged_ptr < untagged_objp)
>>   			rb = object->rb_node.rb_left;
> 
> You could leave this unchanged. A phys pointer is already untagged, so
> it wouldn't make any difference.

Right, will do.

> 
>> @@ -643,16 +646,19 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
>>   
>>   	raw_spin_lock_irqsave(&kmemleak_lock, flags);
>>   
>> -	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
>> -	min_addr = min(min_addr, untagged_ptr);
>> -	max_addr = max(max_addr, untagged_ptr + size);
>> +	untagged_ptr = is_phys ? ptr : (unsigned long)kasan_reset_tag((void *)ptr);
> 
> Same here.

Will do.

> 
>> +	if (!is_phys) {
>> +		min_addr = min(min_addr, untagged_ptr);
>> +		max_addr = max(max_addr, untagged_ptr + size);
>> +	}
>>   	link = is_phys ? &object_phys_tree_root.rb_node :
>>   		&object_tree_root.rb_node;
>>   	rb_parent = NULL;
>>   	while (*link) {
>>   		rb_parent = *link;
>>   		parent = rb_entry(rb_parent, struct kmemleak_object, rb_node);
>> -		untagged_objp = (unsigned long)kasan_reset_tag((void *)parent->pointer);
>> +		untagged_objp = is_phys ? parent->pointer :
>> +				(unsigned long)kasan_reset_tag((void *)parent->pointer);
> 
> And here.

Will do.

> 
>>   		if (untagged_ptr + size <= untagged_objp)
>>   			link = &parent->rb_node.rb_left;
>>   		else if (untagged_objp + parent->size <= untagged_ptr)
>> @@ -1202,7 +1208,9 @@ static bool update_checksum(struct kmemleak_object *object)
>>   
>>   	kasan_disable_current();
>>   	kcsan_disable_current();
>> -	object->checksum = crc32(0, kasan_reset_tag((void *)object->pointer), object->size);
>> +	object->checksum = crc32(0, object->flags & OBJECT_PHYS ? (void *)object->pointer :
>> +				    kasan_reset_tag((void *)object->pointer),
>> +				    object->size);
> 
> Luckily that's never called on a phys object, otherwise *object->pointer
> would segfault. As for hex_dump, just return early with a warning if
> that's the case.

Right, will do.

Thanks,
Patrick



* Re: [PATCH v2 4/4] mm: kmemleak: kmemleak_*_phys() set address type and check PA when scan
  2022-06-06 15:29   ` Catalin Marinas
@ 2022-06-07 14:37     ` Patrick Wang
  0 siblings, 0 replies; 18+ messages in thread
From: Patrick Wang @ 2022-06-07 14:37 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: akpm, linux-mm, linux-kernel, yee.lee



On 2022/6/6 23:29, Catalin Marinas wrote:
> On Fri, Jun 03, 2022 at 11:54:15AM +0800, Patrick Wang wrote:
>> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
>> index 246a70b7218f..62d1ad8f8a44 100644
>> --- a/mm/kmemleak.c
>> +++ b/mm/kmemleak.c
>> @@ -1156,8 +1156,12 @@ EXPORT_SYMBOL(kmemleak_no_scan);
>>   void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
>>   			       gfp_t gfp)
>>   {
>> -	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
>> -		kmemleak_alloc(__va(phys), size, min_count, gfp);
>> +	pr_debug("%s(0x%pa, %zu, %d)\n", __func__, &phys, size, min_count);
>> +
>> +	if (kmemleak_enabled && !min_count)
>> +		/* create object with OBJECT_PHYS flag */
>> +		create_object((unsigned long)phys, size, min_count,
>> +			      gfp, true);
>>   }
> 
> With an early patch, just drop min_count altogether from this API,
> assume 0.

Will do.

> 
>>   EXPORT_SYMBOL(kmemleak_alloc_phys);
>>   
>> @@ -1170,8 +1174,10 @@ EXPORT_SYMBOL(kmemleak_alloc_phys);
>>    */
>>   void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
>>   {
>> -	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
>> -		kmemleak_free_part(__va(phys), size);
>> +	pr_debug("%s(0x%pa)\n", __func__, &phys);
>> +
>> +	if (kmemleak_enabled)
>> +		delete_object_part((unsigned long)phys, size, true);
>>   }
>>   EXPORT_SYMBOL(kmemleak_free_part_phys);
>>   
>> @@ -1182,8 +1188,10 @@ EXPORT_SYMBOL(kmemleak_free_part_phys);
>>    */
>>   void __ref kmemleak_not_leak_phys(phys_addr_t phys)
>>   {
>> -	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
>> -		kmemleak_not_leak(__va(phys));
>> +	pr_debug("%s(0x%pa)\n", __func__, &phys);
>> +
>> +	if (kmemleak_enabled)
>> +		make_gray_object((unsigned long)phys, true);
>>   }
>>   EXPORT_SYMBOL(kmemleak_not_leak_phys);
> 
> This function doesn't have any callers, so please remove it.

Will do.

Thanks,
Patrick



* Re: [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan
  2022-06-03  3:54 [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
                   ` (4 preceding siblings ...)
  2022-06-03 11:01 ` [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check " Catalin Marinas
@ 2022-06-08  2:46 ` Kuan-Ying Lee
  2022-06-08 23:44   ` patrick wang
  5 siblings, 1 reply; 18+ messages in thread
From: Kuan-Ying Lee @ 2022-06-08  2:46 UTC (permalink / raw)
  To: Patrick Wang, catalin.marinas, akpm
  Cc: linux-mm, linux-kernel, Yee Lee (李建誼),
	kuan-ying.lee

On Fri, 2022-06-03 at 11:54 +0800, Patrick Wang wrote:
> The kmemleak_*_phys() interface uses "min_low_pfn" and
> "max_low_pfn" to check addresses. But on some architectures,
> kmemleak_*_phys() is called before those two variables are
> initialized. The following steps will be taken:
> 
> 1) Add OBJECT_PHYS flag and rbtree for the objects allocated
>    with physical address
> 2) Store physical address in objects if allocated with OBJECT_PHYS
> 3) Check the boundary when scan instead of in kmemleak_*_phys()
> 
> This patch set will solve:
> https://lore.kernel.org/r/20220527032504.30341-1-yee.lee@mediatek.com
> https://lore.kernel.org/r/9dd08bb5-f39e-53d8-f88d-bec598a08c93@gmail.com

Hi Patrick,

If this patchset fixes the above issue, I think we need to add
the Fixes tag below.

Fixes: 23c2d497de21 ("mm: kmemleak: take a full lowmem check in kmemleak_*_phys()")

Thanks.

> 
> v1: https://lore.kernel.org/r/20220531150823.1004101-1-patrick.wang.shcn@gmail.com
> 
> v1->v2:
>  - add rbtree for the objects allocated with physical address
>  - store physical address in objects if allocated with OBJECT_PHYS
>  - check the upper object boundary as well and avoid duplicate check
> 
> Patrick Wang (4):
>   mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical
>     address
>   mm: kmemleak: add rbtree for objects allocated with physical address
>   mm: kmemleak: handle address stored in object based on its type
>   mm: kmemleak: kmemleak_*_phys() set address type and check PA when
>     scan
> 
>  mm/kmemleak.c | 193 ++++++++++++++++++++++++++++++++------------------
>  1 file changed, 123 insertions(+), 70 deletions(-)
> 
> -- 
> 2.25.1
> 
> 



* Re: [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan
  2022-06-08  2:46 ` Kuan-Ying Lee
@ 2022-06-08 23:44   ` patrick wang
  0 siblings, 0 replies; 18+ messages in thread
From: patrick wang @ 2022-06-08 23:44 UTC (permalink / raw)
  To: Kuan-Ying Lee
  Cc: catalin.marinas, akpm, linux-mm, linux-kernel,
	Yee Lee (李建誼)

On Wed, Jun 8, 2022 at 10:46 AM Kuan-Ying Lee
<Kuan-Ying.Lee@mediatek.com> wrote:
>
> On Fri, 2022-06-03 at 11:54 +0800, Patrick Wang wrote:
> > The kmemleak_*_phys() interface uses "min_low_pfn" and
> > "max_low_pfn" to check addresses. But on some architectures,
> > kmemleak_*_phys() is called before those two variables are
> > initialized. The following steps will be taken:
> >
> > 1) Add OBJECT_PHYS flag and rbtree for the objects allocated
> >    with physical address
> > 2) Store physical address in objects if allocated with OBJECT_PHYS
> > 3) Check the boundary when scan instead of in kmemleak_*_phys()
> >
> > This patch set will solve:
> > https://lore.kernel.org/r/20220527032504.30341-1-yee.lee@mediatek.com
> > https://lore.kernel.org/r/9dd08bb5-f39e-53d8-f88d-bec598a08c93@gmail.com
>
> Hi Patrick,
>
> If this patchset fixes the above issue, I think we need to add
> the Fixes tag below.
>
> Fixes: 23c2d497de21 ("mm: kmemleak: take a full lowmem check in kmemleak_*_phys()")

Hi Kuan-Ying,

Thanks for taking a look.

This series should solve the false positives on ppc32 and arm64.
The arm64 false positive is triggered by commit 23c2d497de21,
so I will add the Fixes tag.

Thanks,
Patrick


* Re: [PATCH v2 1/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address
  2022-06-07 14:32     ` Patrick Wang
@ 2022-06-09  9:54       ` Catalin Marinas
  0 siblings, 0 replies; 18+ messages in thread
From: Catalin Marinas @ 2022-06-09  9:54 UTC (permalink / raw)
  To: Patrick Wang; +Cc: akpm, linux-mm, linux-kernel, yee.lee

On Tue, Jun 07, 2022 at 10:32:26PM +0800, Patrick Wang wrote:
> The leak check could also be performed on physical objects. Converting
> the block address from virtual to physical before the lookup should
> make this work (though currently it makes no difference). I think we'd
> better be aware of this.

Yes, we could add this, but since all the phys objects are currently
'gray', it won't make any difference, other than an extra lookup in the
phys tree.

-- 
Catalin
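[Editorial sketch] The "extra lookup in the phys tree" mentioned above comes from the series keeping two search trees and selecting one by address type. A toy illustration of that selection — plain arrays and a linear scan stand in for the kernel's rbtrees, and all names are illustrative:

```c
#include <assert.h>
#include <stdbool.h>
#include <stddef.h>

struct object { unsigned long start, size; };

/* Toy stand-ins for the virtual and physical object trees. */
static struct object virt_objects[] = { { 0xffff0000UL, 0x100 } };
static struct object phys_objects[] = { { 0x80000000UL, 0x1000 } };

static struct object *lookup_in(struct object *objs, size_t n,
				unsigned long addr)
{
	/* Linear scan here; the kernel walks an rbtree keyed the same way. */
	for (size_t i = 0; i < n; i++)
		if (addr >= objs[i].start && addr < objs[i].start + objs[i].size)
			return &objs[i];
	return NULL;
}

/*
 * The address type selects which store to search, mirroring the
 * is_phys parameter threaded through create_object() and
 * delete_object_part() in this series.
 */
static struct object *find_object(unsigned long addr, bool is_phys)
{
	if (is_phys)
		return lookup_in(phys_objects, 1, addr);
	return lookup_in(virt_objects, 1, addr);
}
```

The same virtual address never matches in the physical tree and vice versa, which is the isolation property the separate rbtree buys.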


end of thread, other threads:[~2022-06-09  9:54 UTC | newest]

Thread overview: 18+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-03  3:54 [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
2022-06-03  3:54 ` [PATCH v2 1/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address Patrick Wang
2022-06-06 11:55   ` Catalin Marinas
2022-06-07 14:32     ` Patrick Wang
2022-06-09  9:54       ` Catalin Marinas
2022-06-03  3:54 ` [PATCH v2 2/4] mm: kmemleak: add rbtree " Patrick Wang
2022-06-06 14:38   ` Catalin Marinas
2022-06-07 14:34     ` Patrick Wang
2022-06-03  3:54 ` [PATCH v2 3/4] mm: kmemleak: handle address stored in object based on its type Patrick Wang
2022-06-06 15:01   ` Catalin Marinas
2022-06-07 14:36     ` Patrick Wang
2022-06-03  3:54 ` [PATCH v2 4/4] mm: kmemleak: kmemleak_*_phys() set address type and check PA when scan Patrick Wang
2022-06-06 15:29   ` Catalin Marinas
2022-06-07 14:37     ` Patrick Wang
2022-06-03 11:01 ` [PATCH v2 0/4] mm: kmemleak: store objects allocated with physical address separately and check " Catalin Marinas
2022-06-04  3:01   ` patrick wang
2022-06-08  2:46 ` Kuan-Ying Lee
2022-06-08 23:44   ` patrick wang
