linux-mm.kvack.org archive mirror
* [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan
@ 2022-06-11  3:55 Patrick Wang
  2022-06-11  3:55 ` [PATCH v4 1/4] mm: kmemleak: remove kmemleak_not_leak_phys() and the min_count argument to kmemleak_alloc_phys() Patrick Wang
                   ` (4 more replies)
  0 siblings, 5 replies; 11+ messages in thread
From: Patrick Wang @ 2022-06-11  3:55 UTC (permalink / raw)
  To: catalin.marinas, akpm; +Cc: linux-mm, linux-kernel, yee.lee, patrick.wang.shcn

The kmemleak_*_phys() interface uses "min_low_pfn" and
"max_low_pfn" to check addresses. But on some architectures,
kmemleak_*_phys() is called before those two variables are
initialized. This series takes the following steps:

1) Add OBJECT_PHYS flag and rbtree for the objects allocated
   with physical address
2) Store physical address in objects if allocated with OBJECT_PHYS
3) Check the boundary when scanning instead of in kmemleak_*_phys()

This patch set resolves the issues reported at:
https://lore.kernel.org/r/20220527032504.30341-1-yee.lee@mediatek.com
https://lore.kernel.org/r/9dd08bb5-f39e-53d8-f88d-bec598a08c93@gmail.com

v3: https://lore.kernel.org/r/20220609124950.1694394-1-patrick.wang.shcn@gmail.com
v2: https://lore.kernel.org/r/20220603035415.1243913-1-patrick.wang.shcn@gmail.com
v1: https://lore.kernel.org/r/20220531150823.1004101-1-patrick.wang.shcn@gmail.com

v3->v4:
 - fix a build warning
 - move the prototype change of kmemleak_alloc_phys() and the
   kmemleak_not_leak_phys() removal to a separate patch (patch 1)

v2->v3:
 - remove the min_count argument to kmemleak_alloc_phys() function and assume it's 0
 - remove unused kmemleak_not_leak_phys() function
 - add functions to reduce unnecessary changes
 - remove the check for kasan_reset_tag()
 - add Fixes tag in patch 3

v1->v2:
 - add rbtree for the objects allocated with physical address
 - store physical address in objects if allocated with OBJECT_PHYS
 - check the upper object boundary as well and avoid duplicate check

Patrick Wang (4):
  mm: kmemleak: remove kmemleak_not_leak_phys() and the min_count
    argument to kmemleak_alloc_phys()
  mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical
    address
  mm: kmemleak: add rbtree and store physical address for objects
    allocated with PA
  mm: kmemleak: check physical address when scan

 Documentation/dev-tools/kmemleak.rst    |   1 -
 drivers/of/fdt.c                        |   2 +-
 include/linux/kmemleak.h                |   8 +-
 mm/kmemleak.c                           | 200 ++++++++++++++++--------
 mm/memblock.c                           |  14 +-
 tools/testing/memblock/linux/kmemleak.h |   2 +-
 6 files changed, 145 insertions(+), 82 deletions(-)

-- 
2.25.1



^ permalink raw reply	[flat|nested] 11+ messages in thread

* [PATCH v4 1/4] mm: kmemleak: remove kmemleak_not_leak_phys() and the min_count argument to kmemleak_alloc_phys()
  2022-06-11  3:55 [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
@ 2022-06-11  3:55 ` Patrick Wang
  2022-06-11  9:47   ` Catalin Marinas
  2022-06-11  3:55 ` [PATCH v4 2/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address Patrick Wang
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 11+ messages in thread
From: Patrick Wang @ 2022-06-11  3:55 UTC (permalink / raw)
  To: catalin.marinas, akpm; +Cc: linux-mm, linux-kernel, yee.lee, patrick.wang.shcn

Remove the unused kmemleak_not_leak_phys() function, and
remove the min_count argument to the kmemleak_alloc_phys()
function, assuming it is always 0.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
---
 Documentation/dev-tools/kmemleak.rst    |  1 -
 drivers/of/fdt.c                        |  2 +-
 include/linux/kmemleak.h                |  8 ++------
 mm/kmemleak.c                           | 20 +++-----------------
 mm/memblock.c                           | 14 +++++++-------
 tools/testing/memblock/linux/kmemleak.h |  2 +-
 6 files changed, 14 insertions(+), 33 deletions(-)

diff --git a/Documentation/dev-tools/kmemleak.rst b/Documentation/dev-tools/kmemleak.rst
index 1c935f41cd3a..5483fd39ef29 100644
--- a/Documentation/dev-tools/kmemleak.rst
+++ b/Documentation/dev-tools/kmemleak.rst
@@ -174,7 +174,6 @@ mapping:
 
 - ``kmemleak_alloc_phys``
 - ``kmemleak_free_part_phys``
-- ``kmemleak_not_leak_phys``
 - ``kmemleak_ignore_phys``
 
 Dealing with false positives/negatives
diff --git a/drivers/of/fdt.c b/drivers/of/fdt.c
index a8f5b6532165..2c677e84c3f5 100644
--- a/drivers/of/fdt.c
+++ b/drivers/of/fdt.c
@@ -529,7 +529,7 @@ static int __init __reserved_mem_reserve_reg(unsigned long node,
 			pr_debug("Reserved memory: reserved region for node '%s': base %pa, size %lu MiB\n",
 				uname, &base, (unsigned long)(size / SZ_1M));
 			if (!nomap)
-				kmemleak_alloc_phys(base, size, 0, 0);
+				kmemleak_alloc_phys(base, size, 0);
 		}
 		else
 			pr_info("Reserved memory: failed to reserve memory for node '%s': base %pa, size %lu MiB\n",
diff --git a/include/linux/kmemleak.h b/include/linux/kmemleak.h
index 34684b2026ab..6a3cd1bf4680 100644
--- a/include/linux/kmemleak.h
+++ b/include/linux/kmemleak.h
@@ -29,10 +29,9 @@ extern void kmemleak_not_leak(const void *ptr) __ref;
 extern void kmemleak_ignore(const void *ptr) __ref;
 extern void kmemleak_scan_area(const void *ptr, size_t size, gfp_t gfp) __ref;
 extern void kmemleak_no_scan(const void *ptr) __ref;
-extern void kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
+extern void kmemleak_alloc_phys(phys_addr_t phys, size_t size,
 				gfp_t gfp) __ref;
 extern void kmemleak_free_part_phys(phys_addr_t phys, size_t size) __ref;
-extern void kmemleak_not_leak_phys(phys_addr_t phys) __ref;
 extern void kmemleak_ignore_phys(phys_addr_t phys) __ref;
 
 static inline void kmemleak_alloc_recursive(const void *ptr, size_t size,
@@ -107,15 +106,12 @@ static inline void kmemleak_no_scan(const void *ptr)
 {
 }
 static inline void kmemleak_alloc_phys(phys_addr_t phys, size_t size,
-				       int min_count, gfp_t gfp)
+				       gfp_t gfp)
 {
 }
 static inline void kmemleak_free_part_phys(phys_addr_t phys, size_t size)
 {
 }
-static inline void kmemleak_not_leak_phys(phys_addr_t phys)
-{
-}
 static inline void kmemleak_ignore_phys(phys_addr_t phys)
 {
 }
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index a182f5ddaf68..156eafafa182 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1125,15 +1125,13 @@ EXPORT_SYMBOL(kmemleak_no_scan);
  *			 address argument
  * @phys:	physical address of the object
  * @size:	size of the object
- * @min_count:	minimum number of references to this object.
- *              See kmemleak_alloc()
  * @gfp:	kmalloc() flags used for kmemleak internal memory allocations
  */
-void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, int min_count,
-			       gfp_t gfp)
+void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, gfp_t gfp)
 {
 	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_alloc(__va(phys), size, min_count, gfp);
+		/* assume min_count 0 */
+		kmemleak_alloc(__va(phys), size, 0, gfp);
 }
 EXPORT_SYMBOL(kmemleak_alloc_phys);
 
@@ -1151,18 +1149,6 @@ void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
 }
 EXPORT_SYMBOL(kmemleak_free_part_phys);
 
-/**
- * kmemleak_not_leak_phys - similar to kmemleak_not_leak but taking a physical
- *			    address argument
- * @phys:	physical address of the object
- */
-void __ref kmemleak_not_leak_phys(phys_addr_t phys)
-{
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_not_leak(__va(phys));
-}
-EXPORT_SYMBOL(kmemleak_not_leak_phys);
-
 /**
  * kmemleak_ignore_phys - similar to kmemleak_ignore but taking a physical
  *			  address argument
diff --git a/mm/memblock.c b/mm/memblock.c
index e4f03a6e8e56..749abd2685c4 100644
--- a/mm/memblock.c
+++ b/mm/memblock.c
@@ -1345,8 +1345,8 @@ __next_mem_pfn_range_in_zone(u64 *idx, struct zone *zone,
  * from the regions with mirroring enabled and then retried from any
  * memory region.
  *
- * In addition, function sets the min_count to 0 using kmemleak_alloc_phys for
- * allocated boot memory block, so that it is never reported as leaks.
+ * In addition, this function registers the allocated boot memory
+ * block via kmemleak_alloc_phys(), so it is never reported as a leak.
  *
  * Return:
  * Physical address of allocated memory block on success, %0 on failure.
@@ -1398,12 +1398,12 @@ phys_addr_t __init memblock_alloc_range_nid(phys_addr_t size,
 	 */
 	if (end != MEMBLOCK_ALLOC_NOLEAKTRACE)
 		/*
-		 * The min_count is set to 0 so that memblock allocated
-		 * blocks are never reported as leaks. This is because many
-		 * of these blocks are only referred via the physical
-		 * address which is not looked up by kmemleak.
+		 * Memblock allocated blocks are never reported as
+		 * leaks. This is because many of these blocks are
+		 * only referred via the physical address which is
+		 * not looked up by kmemleak.
 		 */
-		kmemleak_alloc_phys(found, size, 0, 0);
+		kmemleak_alloc_phys(found, size, 0);
 
 	return found;
 }
diff --git a/tools/testing/memblock/linux/kmemleak.h b/tools/testing/memblock/linux/kmemleak.h
index 462f8c5e8aa0..5fed13bb9ec4 100644
--- a/tools/testing/memblock/linux/kmemleak.h
+++ b/tools/testing/memblock/linux/kmemleak.h
@@ -7,7 +7,7 @@ static inline void kmemleak_free_part_phys(phys_addr_t phys, size_t size)
 }
 
 static inline void kmemleak_alloc_phys(phys_addr_t phys, size_t size,
-				       int min_count, gfp_t gfp)
+				       gfp_t gfp)
 {
 }
 
-- 
2.25.1




* [PATCH v4 2/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address
  2022-06-11  3:55 [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
  2022-06-11  3:55 ` [PATCH v4 1/4] mm: kmemleak: remove kmemleak_not_leak_phys() and the min_count argument to kmemleak_alloc_phys() Patrick Wang
@ 2022-06-11  3:55 ` Patrick Wang
  2022-06-11  3:55 ` [PATCH v4 3/4] mm: kmemleak: add rbtree and store physical address for objects allocated with PA Patrick Wang
                   ` (2 subsequent siblings)
  4 siblings, 0 replies; 11+ messages in thread
From: Patrick Wang @ 2022-06-11  3:55 UTC (permalink / raw)
  To: catalin.marinas, akpm; +Cc: linux-mm, linux-kernel, yee.lee, patrick.wang.shcn

Add an OBJECT_PHYS flag for objects. This flag is used
to identify objects allocated with a physical address.
The create_object_phys() function is added as well to
set that flag, and is used by kmemleak_alloc_phys().

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c | 40 +++++++++++++++++++++++++++++++---------
 1 file changed, 31 insertions(+), 9 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 156eafafa182..d82d8db0e8df 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -172,6 +172,8 @@ struct kmemleak_object {
 #define OBJECT_NO_SCAN		(1 << 2)
 /* flag set to fully scan the object when scan_area allocation failed */
 #define OBJECT_FULL_SCAN	(1 << 3)
+/* flag set for object allocated with physical address */
+#define OBJECT_PHYS		(1 << 4)
 
 #define HEX_PREFIX		"    "
 /* number of bytes to print per line; must be 16 or 32 */
@@ -574,8 +576,9 @@ static int __save_stack_trace(unsigned long *trace)
  * Create the metadata (struct kmemleak_object) corresponding to an allocated
  * memory block and add it to the object_list and object_tree_root.
  */
-static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
-					     int min_count, gfp_t gfp)
+static struct kmemleak_object *__create_object(unsigned long ptr, size_t size,
+					     int min_count, gfp_t gfp,
+					     bool is_phys)
 {
 	unsigned long flags;
 	struct kmemleak_object *object, *parent;
@@ -595,7 +598,7 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 	INIT_HLIST_HEAD(&object->area_list);
 	raw_spin_lock_init(&object->lock);
 	atomic_set(&object->use_count, 1);
-	object->flags = OBJECT_ALLOCATED;
+	object->flags = OBJECT_ALLOCATED | (is_phys ? OBJECT_PHYS : 0);
 	object->pointer = ptr;
 	object->size = kfence_ksize((void *)ptr) ?: size;
 	object->excess_ref = 0;
@@ -662,6 +665,20 @@ static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
 	return object;
 }
 
+/* Create a kmemleak object whose memory block was allocated with a virtual address. */
+static struct kmemleak_object *create_object(unsigned long ptr, size_t size,
+					     int min_count, gfp_t gfp)
+{
+	return __create_object(ptr, size, min_count, gfp, false);
+}
+
+/* Create a kmemleak object whose memory block was allocated with a physical address. */
+static struct kmemleak_object *create_object_phys(unsigned long ptr, size_t size,
+					     int min_count, gfp_t gfp)
+{
+	return __create_object(ptr, size, min_count, gfp, true);
+}
+
 /*
  * Mark the object as not allocated and schedule RCU freeing via put_object().
  */
@@ -728,11 +745,11 @@ static void delete_object_part(unsigned long ptr, size_t size)
 	start = object->pointer;
 	end = object->pointer + object->size;
 	if (ptr > start)
-		create_object(start, ptr - start, object->min_count,
-			      GFP_KERNEL);
+		__create_object(start, ptr - start, object->min_count,
+			      GFP_KERNEL, object->flags & OBJECT_PHYS);
 	if (ptr + size < end)
-		create_object(ptr + size, end - ptr - size, object->min_count,
-			      GFP_KERNEL);
+		__create_object(ptr + size, end - ptr - size, object->min_count,
+			      GFP_KERNEL, object->flags & OBJECT_PHYS);
 
 	__delete_object(object);
 }
@@ -1129,9 +1146,14 @@ EXPORT_SYMBOL(kmemleak_no_scan);
  */
 void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, gfp_t gfp)
 {
+	pr_debug("%s(0x%pa, %zu)\n", __func__, &phys, size);
+
 	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		/* assume min_count 0 */
-		kmemleak_alloc(__va(phys), size, 0, gfp);
+		/*
+		 * Create object with OBJECT_PHYS flag and
+		 * assume min_count 0.
+		 */
+		create_object_phys((unsigned long)__va(phys), size, 0, gfp);
 }
 EXPORT_SYMBOL(kmemleak_alloc_phys);
 
-- 
2.25.1




* [PATCH v4 3/4] mm: kmemleak: add rbtree and store physical address for objects allocated with PA
  2022-06-11  3:55 [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
  2022-06-11  3:55 ` [PATCH v4 1/4] mm: kmemleak: remove kmemleak_not_leak_phys() and the min_count argument to kmemleak_alloc_phys() Patrick Wang
  2022-06-11  3:55 ` [PATCH v4 2/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address Patrick Wang
@ 2022-06-11  3:55 ` Patrick Wang
  2022-06-23  8:45   ` Yee Lee
  2022-06-11  3:55 ` [PATCH v4 4/4] mm: kmemleak: check physical address when scan Patrick Wang
  2022-07-12 13:43 ` [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check " Geert Uytterhoeven
  4 siblings, 1 reply; 11+ messages in thread
From: Patrick Wang @ 2022-06-11  3:55 UTC (permalink / raw)
  To: catalin.marinas, akpm; +Cc: linux-mm, linux-kernel, yee.lee, patrick.wang.shcn

Add object_phys_tree_root to store the objects allocated
with a physical address. It is distinguished from
object_tree_root by the OBJECT_PHYS flag or a function
argument. The physical address is stored directly in
those objects.

Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c | 133 ++++++++++++++++++++++++++++++++++----------------
 1 file changed, 91 insertions(+), 42 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index d82d8db0e8df..155f50e1a604 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -14,14 +14,16 @@
  * The following locks and mutexes are used by kmemleak:
  *
  * - kmemleak_lock (raw_spinlock_t): protects the object_list modifications and
- *   accesses to the object_tree_root. The object_list is the main list
- *   holding the metadata (struct kmemleak_object) for the allocated memory
- *   blocks. The object_tree_root is a red black tree used to look-up
- *   metadata based on a pointer to the corresponding memory block.  The
- *   kmemleak_object structures are added to the object_list and
- *   object_tree_root in the create_object() function called from the
- *   kmemleak_alloc() callback and removed in delete_object() called from the
- *   kmemleak_free() callback
+ *   accesses to the object_tree_root (or object_phys_tree_root). The
+ *   object_list is the main list holding the metadata (struct kmemleak_object)
+ *   for the allocated memory blocks. The object_tree_root and object_phys_tree_root
+ *   are red black trees used to look-up metadata based on a pointer to the
+ *   corresponding memory block. The object_phys_tree_root is for objects
+ *   allocated with physical address. The kmemleak_object structures are
+ *   added to the object_list and object_tree_root (or object_phys_tree_root)
+ *   in the create_object() function called from the kmemleak_alloc() (or
+ *   kmemleak_alloc_phys()) callback and removed in delete_object() called from
+ *   the kmemleak_free() callback
  * - kmemleak_object.lock (raw_spinlock_t): protects a kmemleak_object.
  *   Accesses to the metadata (e.g. count) are protected by this lock. Note
  *   that some members of this structure may be protected by other means
@@ -195,7 +197,9 @@ static int mem_pool_free_count = ARRAY_SIZE(mem_pool);
 static LIST_HEAD(mem_pool_free_list);
 /* search tree for object boundaries */
 static struct rb_root object_tree_root = RB_ROOT;
-/* protecting the access to object_list and object_tree_root */
+/* search tree for object (with OBJECT_PHYS flag) boundaries */
+static struct rb_root object_phys_tree_root = RB_ROOT;
+/* protecting the access to object_list, object_tree_root (or object_phys_tree_root) */
 static DEFINE_RAW_SPINLOCK(kmemleak_lock);
 
 /* allocation caches for kmemleak internal data */
@@ -287,6 +291,9 @@ static void hex_dump_object(struct seq_file *seq,
 	const u8 *ptr = (const u8 *)object->pointer;
 	size_t len;
 
+	if (WARN_ON_ONCE(object->flags & OBJECT_PHYS))
+		return;
+
 	/* limit the number of lines to HEX_MAX_LINES */
 	len = min_t(size_t, object->size, HEX_MAX_LINES * HEX_ROW_SIZE);
 
@@ -380,9 +387,11 @@ static void dump_object_info(struct kmemleak_object *object)
  * beginning of the memory block are allowed. The kmemleak_lock must be held
  * when calling this function.
  */
-static struct kmemleak_object *lookup_object(unsigned long ptr, int alias)
+static struct kmemleak_object *__lookup_object(unsigned long ptr, int alias,
+					       bool is_phys)
 {
-	struct rb_node *rb = object_tree_root.rb_node;
+	struct rb_node *rb = is_phys ? object_phys_tree_root.rb_node :
+			     object_tree_root.rb_node;
 	unsigned long untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
 
 	while (rb) {
@@ -408,6 +417,12 @@ static struct kmemleak_object *lookup_object(unsigned long ptr, int alias)
 	return NULL;
 }
 
+/* Look up a kmemleak object whose memory block was allocated with a virtual address. */
+static struct kmemleak_object *lookup_object(unsigned long ptr, int alias)
+{
+	return __lookup_object(ptr, alias, false);
+}
+
 /*
  * Increment the object use_count. Return 1 if successful or 0 otherwise. Note
  * that once an object's use_count reached 0, the RCU freeing was already
@@ -517,14 +532,15 @@ static void put_object(struct kmemleak_object *object)
 /*
  * Look up an object in the object search tree and increase its use_count.
  */
-static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
+static struct kmemleak_object *__find_and_get_object(unsigned long ptr, int alias,
+						     bool is_phys)
 {
 	unsigned long flags;
 	struct kmemleak_object *object;
 
 	rcu_read_lock();
 	raw_spin_lock_irqsave(&kmemleak_lock, flags);
-	object = lookup_object(ptr, alias);
+	object = __lookup_object(ptr, alias, is_phys);
 	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
 
 	/* check whether the object is still available */
@@ -535,28 +551,39 @@ static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
 	return object;
 }
 
+/* Look up and get an object whose memory block was allocated with a virtual address. */
+static struct kmemleak_object *find_and_get_object(unsigned long ptr, int alias)
+{
+	return __find_and_get_object(ptr, alias, false);
+}
+
 /*
- * Remove an object from the object_tree_root and object_list. Must be called
- * with the kmemleak_lock held _if_ kmemleak is still enabled.
+ * Remove an object from the object_tree_root (or object_phys_tree_root)
+ * and object_list. Must be called with the kmemleak_lock held _if_ kmemleak
+ * is still enabled.
  */
 static void __remove_object(struct kmemleak_object *object)
 {
-	rb_erase(&object->rb_node, &object_tree_root);
+	rb_erase(&object->rb_node, object->flags & OBJECT_PHYS ?
+				   &object_phys_tree_root :
+				   &object_tree_root);
 	list_del_rcu(&object->object_list);
 }
 
 /*
  * Look up an object in the object search tree and remove it from both
- * object_tree_root and object_list. The returned object's use_count should be
- * at least 1, as initially set by create_object().
+ * object_tree_root (or object_phys_tree_root) and object_list. The
+ * returned object's use_count should be at least 1, as initially set
+ * by create_object().
  */
-static struct kmemleak_object *find_and_remove_object(unsigned long ptr, int alias)
+static struct kmemleak_object *find_and_remove_object(unsigned long ptr, int alias,
+						      bool is_phys)
 {
 	unsigned long flags;
 	struct kmemleak_object *object;
 
 	raw_spin_lock_irqsave(&kmemleak_lock, flags);
-	object = lookup_object(ptr, alias);
+	object = __lookup_object(ptr, alias, is_phys);
 	if (object)
 		__remove_object(object);
 	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
@@ -574,7 +601,8 @@ static int __save_stack_trace(unsigned long *trace)
 
 /*
  * Create the metadata (struct kmemleak_object) corresponding to an allocated
- * memory block and add it to the object_list and object_tree_root.
+ * memory block and add it to the object_list and object_tree_root (or
+ * object_phys_tree_root).
  */
 static struct kmemleak_object *__create_object(unsigned long ptr, size_t size,
 					     int min_count, gfp_t gfp,
@@ -631,9 +659,16 @@ static struct kmemleak_object *__create_object(unsigned long ptr, size_t size,
 	raw_spin_lock_irqsave(&kmemleak_lock, flags);
 
 	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
-	min_addr = min(min_addr, untagged_ptr);
-	max_addr = max(max_addr, untagged_ptr + size);
-	link = &object_tree_root.rb_node;
+	/*
+	 * Only update min_addr and max_addr with object
+	 * storing virtual address.
+	 */
+	if (!is_phys) {
+		min_addr = min(min_addr, untagged_ptr);
+		max_addr = max(max_addr, untagged_ptr + size);
+	}
+	link = is_phys ? &object_phys_tree_root.rb_node :
+		&object_tree_root.rb_node;
 	rb_parent = NULL;
 	while (*link) {
 		rb_parent = *link;
@@ -657,7 +692,8 @@ static struct kmemleak_object *__create_object(unsigned long ptr, size_t size,
 		}
 	}
 	rb_link_node(&object->rb_node, rb_parent, link);
-	rb_insert_color(&object->rb_node, &object_tree_root);
+	rb_insert_color(&object->rb_node, is_phys ? &object_phys_tree_root :
+					  &object_tree_root);
 
 	list_add_tail_rcu(&object->object_list, &object_list);
 out:
@@ -707,7 +743,7 @@ static void delete_object_full(unsigned long ptr)
 {
 	struct kmemleak_object *object;
 
-	object = find_and_remove_object(ptr, 0);
+	object = find_and_remove_object(ptr, 0, false);
 	if (!object) {
 #ifdef DEBUG
 		kmemleak_warn("Freeing unknown object at 0x%08lx\n",
@@ -723,12 +759,12 @@ static void delete_object_full(unsigned long ptr)
  * delete it. If the memory block is partially freed, the function may create
  * additional metadata for the remaining parts of the block.
  */
-static void delete_object_part(unsigned long ptr, size_t size)
+static void delete_object_part(unsigned long ptr, size_t size, bool is_phys)
 {
 	struct kmemleak_object *object;
 	unsigned long start, end;
 
-	object = find_and_remove_object(ptr, 1);
+	object = find_and_remove_object(ptr, 1, is_phys);
 	if (!object) {
 #ifdef DEBUG
 		kmemleak_warn("Partially freeing unknown object at 0x%08lx (size %zu)\n",
@@ -746,10 +782,10 @@ static void delete_object_part(unsigned long ptr, size_t size)
 	end = object->pointer + object->size;
 	if (ptr > start)
 		__create_object(start, ptr - start, object->min_count,
-			      GFP_KERNEL, object->flags & OBJECT_PHYS);
+			      GFP_KERNEL, is_phys);
 	if (ptr + size < end)
 		__create_object(ptr + size, end - ptr - size, object->min_count,
-			      GFP_KERNEL, object->flags & OBJECT_PHYS);
+			      GFP_KERNEL, is_phys);
 
 	__delete_object(object);
 }
@@ -770,11 +806,11 @@ static void paint_it(struct kmemleak_object *object, int color)
 	raw_spin_unlock_irqrestore(&object->lock, flags);
 }
 
-static void paint_ptr(unsigned long ptr, int color)
+static void paint_ptr(unsigned long ptr, int color, bool is_phys)
 {
 	struct kmemleak_object *object;
 
-	object = find_and_get_object(ptr, 0);
+	object = __find_and_get_object(ptr, 0, is_phys);
 	if (!object) {
 		kmemleak_warn("Trying to color unknown object at 0x%08lx as %s\n",
 			      ptr,
@@ -792,16 +828,16 @@ static void paint_ptr(unsigned long ptr, int color)
  */
 static void make_gray_object(unsigned long ptr)
 {
-	paint_ptr(ptr, KMEMLEAK_GREY);
+	paint_ptr(ptr, KMEMLEAK_GREY, false);
 }
 
 /*
  * Mark the object as black-colored so that it is ignored from scans and
  * reporting.
  */
-static void make_black_object(unsigned long ptr)
+static void make_black_object(unsigned long ptr, bool is_phys)
 {
-	paint_ptr(ptr, KMEMLEAK_BLACK);
+	paint_ptr(ptr, KMEMLEAK_BLACK, is_phys);
 }
 
 /*
@@ -1007,7 +1043,7 @@ void __ref kmemleak_free_part(const void *ptr, size_t size)
 	pr_debug("%s(0x%p)\n", __func__, ptr);
 
 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
-		delete_object_part((unsigned long)ptr, size);
+		delete_object_part((unsigned long)ptr, size, false);
 }
 EXPORT_SYMBOL_GPL(kmemleak_free_part);
 
@@ -1095,7 +1131,7 @@ void __ref kmemleak_ignore(const void *ptr)
 	pr_debug("%s(0x%p)\n", __func__, ptr);
 
 	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
-		make_black_object((unsigned long)ptr);
+		make_black_object((unsigned long)ptr, false);
 }
 EXPORT_SYMBOL(kmemleak_ignore);
 
@@ -1153,7 +1189,7 @@ void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, gfp_t gfp)
 		 * Create object with OBJECT_PHYS flag and
 		 * assume min_count 0.
 		 */
-		create_object_phys((unsigned long)__va(phys), size, 0, gfp);
+		create_object_phys((unsigned long)phys, size, 0, gfp);
 }
 EXPORT_SYMBOL(kmemleak_alloc_phys);
 
@@ -1166,8 +1202,10 @@ EXPORT_SYMBOL(kmemleak_alloc_phys);
  */
 void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
 {
+	pr_debug("%s(0x%pa)\n", __func__, &phys);
+
 	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_free_part(__va(phys), size);
+		delete_object_part((unsigned long)phys, size, true);
 }
 EXPORT_SYMBOL(kmemleak_free_part_phys);
 
@@ -1178,8 +1216,10 @@ EXPORT_SYMBOL(kmemleak_free_part_phys);
  */
 void __ref kmemleak_ignore_phys(phys_addr_t phys)
 {
+	pr_debug("%s(0x%pa)\n", __func__, &phys);
+
 	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
-		kmemleak_ignore(__va(phys));
+		make_black_object((unsigned long)phys, true);
 }
 EXPORT_SYMBOL(kmemleak_ignore_phys);
 
@@ -1190,6 +1230,9 @@ static bool update_checksum(struct kmemleak_object *object)
 {
 	u32 old_csum = object->checksum;
 
+	if (WARN_ON_ONCE(object->flags & OBJECT_PHYS))
+		return false;
+
 	kasan_disable_current();
 	kcsan_disable_current();
 	object->checksum = crc32(0, kasan_reset_tag((void *)object->pointer), object->size);
@@ -1343,6 +1386,7 @@ static void scan_object(struct kmemleak_object *object)
 {
 	struct kmemleak_scan_area *area;
 	unsigned long flags;
+	void *obj_ptr;
 
 	/*
 	 * Once the object->lock is acquired, the corresponding memory block
@@ -1354,10 +1398,15 @@ static void scan_object(struct kmemleak_object *object)
 	if (!(object->flags & OBJECT_ALLOCATED))
 		/* already freed object */
 		goto out;
+
+	obj_ptr = object->flags & OBJECT_PHYS ?
+		  __va((phys_addr_t)object->pointer) :
+		  (void *)object->pointer;
+
 	if (hlist_empty(&object->area_list) ||
 	    object->flags & OBJECT_FULL_SCAN) {
-		void *start = (void *)object->pointer;
-		void *end = (void *)(object->pointer + object->size);
+		void *start = obj_ptr;
+		void *end = obj_ptr + object->size;
 		void *next;
 
 		do {
-- 
2.25.1




* [PATCH v4 4/4] mm: kmemleak: check physical address when scan
  2022-06-11  3:55 [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
                   ` (2 preceding siblings ...)
  2022-06-11  3:55 ` [PATCH v4 3/4] mm: kmemleak: add rbtree and store physical address for objects allocated with PA Patrick Wang
@ 2022-06-11  3:55 ` Patrick Wang
  2022-07-12 13:43 ` [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check " Geert Uytterhoeven
  4 siblings, 0 replies; 11+ messages in thread
From: Patrick Wang @ 2022-06-11  3:55 UTC (permalink / raw)
  To: catalin.marinas, akpm; +Cc: linux-mm, linux-kernel, yee.lee, patrick.wang.shcn

Check the boundary of objects' physical addresses when
scanning, instead of in kmemleak_*_phys().

Fixes: 23c2d497de21 ("mm: kmemleak: take a full lowmem check in kmemleak_*_phys()")
Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c | 17 ++++++++++++++---
 1 file changed, 14 insertions(+), 3 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 155f50e1a604..387d6fa402c6 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1184,7 +1184,7 @@ void __ref kmemleak_alloc_phys(phys_addr_t phys, size_t size, gfp_t gfp)
 {
 	pr_debug("%s(0x%pa, %zu)\n", __func__, &phys, size);
 
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+	if (kmemleak_enabled)
 		/*
 		 * Create object with OBJECT_PHYS flag and
 		 * assume min_count 0.
@@ -1204,7 +1204,7 @@ void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
 {
 	pr_debug("%s(0x%pa)\n", __func__, &phys);
 
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+	if (kmemleak_enabled)
 		delete_object_part((unsigned long)phys, size, true);
 }
 EXPORT_SYMBOL(kmemleak_free_part_phys);
@@ -1218,7 +1218,7 @@ void __ref kmemleak_ignore_phys(phys_addr_t phys)
 {
 	pr_debug("%s(0x%pa)\n", __func__, &phys);
 
-	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) < max_low_pfn)
+	if (kmemleak_enabled)
 		make_black_object((unsigned long)phys, true);
 }
 EXPORT_SYMBOL(kmemleak_ignore_phys);
@@ -1493,6 +1493,17 @@ static void kmemleak_scan(void)
 			dump_object_info(object);
 		}
 #endif
+
+		/* ignore objects outside lowmem (paint them black) */
+		if ((object->flags & OBJECT_PHYS) &&
+		   !(object->flags & OBJECT_NO_SCAN)) {
+			unsigned long phys = object->pointer;
+
+			if (PHYS_PFN(phys) < min_low_pfn ||
+			    PHYS_PFN(phys + object->size) >= max_low_pfn)
+				__paint_it(object, KMEMLEAK_BLACK);
+		}
+
 		/* reset the reference count (whiten the object) */
 		object->count = 0;
 		if (color_gray(object) && get_object(object))
-- 
2.25.1




* Re: [PATCH v4 1/4] mm: kmemleak: remove kmemleak_not_leak_phys() and the min_count argument to kmemleak_alloc_phys()
  2022-06-11  3:55 ` [PATCH v4 1/4] mm: kmemleak: remove kmemleak_not_leak_phys() and the min_count argument to kmemleak_alloc_phys() Patrick Wang
@ 2022-06-11  9:47   ` Catalin Marinas
  0 siblings, 0 replies; 11+ messages in thread
From: Catalin Marinas @ 2022-06-11  9:47 UTC (permalink / raw)
  To: Patrick Wang; +Cc: akpm, linux-mm, linux-kernel, yee.lee

On Sat, Jun 11, 2022 at 11:55:48AM +0800, Patrick Wang wrote:
> Remove the unused kmemleak_not_leak_phys() function, and remove the
> min_count argument to the kmemleak_alloc_phys() function, assuming
> it's 0.
> 
> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v4 3/4] mm: kmemleak: add rbtree and store physical address for objects allocated with PA
  2022-06-11  3:55 ` [PATCH v4 3/4] mm: kmemleak: add rbtree and store physical address for objects allocated with PA Patrick Wang
@ 2022-06-23  8:45   ` Yee Lee
  2022-06-23 11:25     ` Yee Lee
  0 siblings, 1 reply; 11+ messages in thread
From: Yee Lee @ 2022-06-23  8:45 UTC (permalink / raw)
  To: Patrick Wang, catalin.marinas, akpm; +Cc: linux-mm, linux-kernel

Now we have separate rb_trees for phys and virtual addresses. But why
can't we have kmemleak_free_phys()? It could apply the same pattern to
delete_object_full().

Some users need to remove a kmemleak object from the phys tree
entirely, but there is no interface for that.
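Such an interface might look roughly like the following sketch (hypothetical, not part of this series — it assumes delete_object_full() gains an is_phys argument, the way find_and_remove_object() did):

```
/*
 * Hypothetical sketch: free a whole object registered via
 * kmemleak_alloc_phys(), mirroring kmemleak_free_part_phys().
 * Assumes delete_object_full() takes an is_phys flag to select
 * the phys tree.
 */
void __ref kmemleak_free_phys(phys_addr_t phys)
{
	pr_debug("%s(0x%pa)\n", __func__, &phys);

	if (kmemleak_enabled)
		delete_object_full((unsigned long)phys, true);
}
EXPORT_SYMBOL(kmemleak_free_phys);
```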

On Sat, 2022-06-11 at 11:55 +0800, Patrick Wang wrote:
> Add object_phys_tree_root to store the objects allocated with
> a physical address. Distinguish it from object_tree_root by
> the OBJECT_PHYS flag or a function argument. The physical address
> is stored directly in those objects.
> 
> Suggested-by: Catalin Marinas <catalin.marinas@arm.com>
> Signed-off-by: Patrick Wang <patrick.wang.shcn@gmail.com>
> Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
> ---
>  mm/kmemleak.c | 133 ++++++++++++++++++++++++++++++++++------------
> ----
>  1 file changed, 91 insertions(+), 42 deletions(-)
> 
> diff --git a/mm/kmemleak.c b/mm/kmemleak.c
> index d82d8db0e8df..155f50e1a604 100644
> --- a/mm/kmemleak.c
> +++ b/mm/kmemleak.c
> @@ -14,14 +14,16 @@
>   * The following locks and mutexes are used by kmemleak:
>   *
>   * - kmemleak_lock (raw_spinlock_t): protects the object_list
> modifications and
> - *   accesses to the object_tree_root. The object_list is the main
> list
> - *   holding the metadata (struct kmemleak_object) for the allocated
> memory
> - *   blocks. The object_tree_root is a red black tree used to look-
> up
> - *   metadata based on a pointer to the corresponding memory
> block.  The
> - *   kmemleak_object structures are added to the object_list and
> - *   object_tree_root in the create_object() function called from
> the
> - *   kmemleak_alloc() callback and removed in delete_object() called
> from the
> - *   kmemleak_free() callback
> + *   accesses to the object_tree_root (or object_phys_tree_root).
> The
> + *   object_list is the main list holding the metadata (struct
> kmemleak_object)
> + *   for the allocated memory blocks. The object_tree_root and
> object_phys_tree_root
> + *   are red black trees used to look-up metadata based on a pointer
> to the
> + *   corresponding memory block. The object_phys_tree_root is for
> objects
> + *   allocated with physical address. The kmemleak_object structures
> are
> + *   added to the object_list and object_tree_root (or
> object_phys_tree_root)
> + *   in the create_object() function called from the
> kmemleak_alloc() (or
> + *   kmemleak_alloc_phys()) callback and removed in delete_object()
> called from
> + *   the kmemleak_free() callback
>   * - kmemleak_object.lock (raw_spinlock_t): protects a
> kmemleak_object.
>   *   Accesses to the metadata (e.g. count) are protected by this
> lock. Note
>   *   that some members of this structure may be protected by other
> means
> @@ -195,7 +197,9 @@ static int mem_pool_free_count =
> ARRAY_SIZE(mem_pool);
>  static LIST_HEAD(mem_pool_free_list);
>  /* search tree for object boundaries */
>  static struct rb_root object_tree_root = RB_ROOT;
> -/* protecting the access to object_list and object_tree_root */
> +/* search tree for object (with OBJECT_PHYS flag) boundaries */
> +static struct rb_root object_phys_tree_root = RB_ROOT;
> +/* protecting the access to object_list, object_tree_root (or
> object_phys_tree_root) */
>  static DEFINE_RAW_SPINLOCK(kmemleak_lock);
>  
>  /* allocation caches for kmemleak internal data */
> @@ -287,6 +291,9 @@ static void hex_dump_object(struct seq_file *seq,
>  	const u8 *ptr = (const u8 *)object->pointer;
>  	size_t len;
>  
> +	if (WARN_ON_ONCE(object->flags & OBJECT_PHYS))
> +		return;
> +
>  	/* limit the number of lines to HEX_MAX_LINES */
>  	len = min_t(size_t, object->size, HEX_MAX_LINES *
> HEX_ROW_SIZE);
>  
> @@ -380,9 +387,11 @@ static void dump_object_info(struct
> kmemleak_object *object)
>   * beginning of the memory block are allowed. The kmemleak_lock must
> be held
>   * when calling this function.
>   */
> -static struct kmemleak_object *lookup_object(unsigned long ptr, int
> alias)
> +static struct kmemleak_object *__lookup_object(unsigned long ptr,
> int alias,
> +					       bool is_phys)
>  {
> -	struct rb_node *rb = object_tree_root.rb_node;
> +	struct rb_node *rb = is_phys ? object_phys_tree_root.rb_node :
> +			     object_tree_root.rb_node;
>  	unsigned long untagged_ptr = (unsigned
> long)kasan_reset_tag((void *)ptr);
>  
>  	while (rb) {
> @@ -408,6 +417,12 @@ static struct kmemleak_object
> *lookup_object(unsigned long ptr, int alias)
>  	return NULL;
>  }
>  
> +/* Look-up a kmemleak object which allocated with virtual address.
> */
> +static struct kmemleak_object *lookup_object(unsigned long ptr, int
> alias)
> +{
> +	return __lookup_object(ptr, alias, false);
> +}
> +
>  /*
>   * Increment the object use_count. Return 1 if successful or 0
> otherwise. Note
>   * that once an object's use_count reached 0, the RCU freeing was
> already
> @@ -517,14 +532,15 @@ static void put_object(struct kmemleak_object
> *object)
>  /*
>   * Look up an object in the object search tree and increase its
> use_count.
>   */
> -static struct kmemleak_object *find_and_get_object(unsigned long
> ptr, int alias)
> +static struct kmemleak_object *__find_and_get_object(unsigned long
> ptr, int alias,
> +						     bool is_phys)
>  {
>  	unsigned long flags;
>  	struct kmemleak_object *object;
>  
>  	rcu_read_lock();
>  	raw_spin_lock_irqsave(&kmemleak_lock, flags);
> -	object = lookup_object(ptr, alias);
> +	object = __lookup_object(ptr, alias, is_phys);
>  	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
>  
>  	/* check whether the object is still available */
> @@ -535,28 +551,39 @@ static struct kmemleak_object
> *find_and_get_object(unsigned long ptr, int alias)
>  	return object;
>  }
>  
> +/* Look up and get an object which allocated with virtual address.
> */
> +static struct kmemleak_object *find_and_get_object(unsigned long
> ptr, int alias)
> +{
> +	return __find_and_get_object(ptr, alias, false);
> +}
> +
>  /*
> - * Remove an object from the object_tree_root and object_list. Must
> be called
> - * with the kmemleak_lock held _if_ kmemleak is still enabled.
> + * Remove an object from the object_tree_root (or
> object_phys_tree_root)
> + * and object_list. Must be called with the kmemleak_lock held _if_
> kmemleak
> + * is still enabled.
>   */
>  static void __remove_object(struct kmemleak_object *object)
>  {
> -	rb_erase(&object->rb_node, &object_tree_root);
> +	rb_erase(&object->rb_node, object->flags & OBJECT_PHYS ?
> +				   &object_phys_tree_root :
> +				   &object_tree_root);
>  	list_del_rcu(&object->object_list);
>  }
>  
>  /*
>   * Look up an object in the object search tree and remove it from
> both
> - * object_tree_root and object_list. The returned object's use_count
> should be
> - * at least 1, as initially set by create_object().
> + * object_tree_root (or object_phys_tree_root) and object_list. The
> + * returned object's use_count should be at least 1, as initially
> set
> + * by create_object().
>   */
> -static struct kmemleak_object *find_and_remove_object(unsigned long
> ptr, int alias)
> +static struct kmemleak_object *find_and_remove_object(unsigned long
> ptr, int alias,
> +						      bool is_phys)
>  {
>  	unsigned long flags;
>  	struct kmemleak_object *object;
>  
>  	raw_spin_lock_irqsave(&kmemleak_lock, flags);
> -	object = lookup_object(ptr, alias);
> +	object = __lookup_object(ptr, alias, is_phys);
>  	if (object)
>  		__remove_object(object);
>  	raw_spin_unlock_irqrestore(&kmemleak_lock, flags);
> @@ -574,7 +601,8 @@ static int __save_stack_trace(unsigned long
> *trace)
>  
>  /*
>   * Create the metadata (struct kmemleak_object) corresponding to an
> allocated
> - * memory block and add it to the object_list and object_tree_root.
> + * memory block and add it to the object_list and object_tree_root
> (or
> + * object_phys_tree_root).
>   */
>  static struct kmemleak_object *__create_object(unsigned long ptr,
> size_t size,
>  					     int min_count, gfp_t gfp,
> @@ -631,9 +659,16 @@ static struct kmemleak_object
> *__create_object(unsigned long ptr, size_t size,
>  	raw_spin_lock_irqsave(&kmemleak_lock, flags);
>  
>  	untagged_ptr = (unsigned long)kasan_reset_tag((void *)ptr);
> -	min_addr = min(min_addr, untagged_ptr);
> -	max_addr = max(max_addr, untagged_ptr + size);
> -	link = &object_tree_root.rb_node;
> +	/*
> +	 * Only update min_addr and max_addr with object
> +	 * storing virtual address.
> +	 */
> +	if (!is_phys) {
> +		min_addr = min(min_addr, untagged_ptr);
> +		max_addr = max(max_addr, untagged_ptr + size);
> +	}
> +	link = is_phys ? &object_phys_tree_root.rb_node :
> +		&object_tree_root.rb_node;
>  	rb_parent = NULL;
>  	while (*link) {
>  		rb_parent = *link;
> @@ -657,7 +692,8 @@ static struct kmemleak_object
> *__create_object(unsigned long ptr, size_t size,
>  		}
>  	}
>  	rb_link_node(&object->rb_node, rb_parent, link);
> -	rb_insert_color(&object->rb_node, &object_tree_root);
> +	rb_insert_color(&object->rb_node, is_phys ?
> &object_phys_tree_root :
> +					  &object_tree_root);
>  
>  	list_add_tail_rcu(&object->object_list, &object_list);
>  out:
> @@ -707,7 +743,7 @@ static void delete_object_full(unsigned long ptr)
>  {
>  	struct kmemleak_object *object;
>  
> -	object = find_and_remove_object(ptr, 0);
> +	object = find_and_remove_object(ptr, 0, false);
>  	if (!object) {
>  #ifdef DEBUG
>  		kmemleak_warn("Freeing unknown object at 0x%08lx\n",
> @@ -723,12 +759,12 @@ static void delete_object_full(unsigned long
> ptr)
>   * delete it. If the memory block is partially freed, the function
> may create
>   * additional metadata for the remaining parts of the block.
>   */
> -static void delete_object_part(unsigned long ptr, size_t size)
> +static void delete_object_part(unsigned long ptr, size_t size, bool
> is_phys)
>  {
>  	struct kmemleak_object *object;
>  	unsigned long start, end;
>  
> -	object = find_and_remove_object(ptr, 1);
> +	object = find_and_remove_object(ptr, 1, is_phys);
>  	if (!object) {
>  #ifdef DEBUG
>  		kmemleak_warn("Partially freeing unknown object at
> 0x%08lx (size %zu)\n",
> @@ -746,10 +782,10 @@ static void delete_object_part(unsigned long
> ptr, size_t size)
>  	end = object->pointer + object->size;
>  	if (ptr > start)
>  		__create_object(start, ptr - start, object->min_count,
> -			      GFP_KERNEL, object->flags & OBJECT_PHYS);
> +			      GFP_KERNEL, is_phys);
>  	if (ptr + size < end)
>  		__create_object(ptr + size, end - ptr - size, object-
> >min_count,
> -			      GFP_KERNEL, object->flags & OBJECT_PHYS);
> +			      GFP_KERNEL, is_phys);
>  
>  	__delete_object(object);
>  }
> @@ -770,11 +806,11 @@ static void paint_it(struct kmemleak_object
> *object, int color)
>  	raw_spin_unlock_irqrestore(&object->lock, flags);
>  }
>  
> -static void paint_ptr(unsigned long ptr, int color)
> +static void paint_ptr(unsigned long ptr, int color, bool is_phys)
>  {
>  	struct kmemleak_object *object;
>  
> -	object = find_and_get_object(ptr, 0);
> +	object = __find_and_get_object(ptr, 0, is_phys);
>  	if (!object) {
>  		kmemleak_warn("Trying to color unknown object at
> 0x%08lx as %s\n",
>  			      ptr,
> @@ -792,16 +828,16 @@ static void paint_ptr(unsigned long ptr, int
> color)
>   */
>  static void make_gray_object(unsigned long ptr)
>  {
> -	paint_ptr(ptr, KMEMLEAK_GREY);
> +	paint_ptr(ptr, KMEMLEAK_GREY, false);
>  }
>  
>  /*
>   * Mark the object as black-colored so that it is ignored from scans
> and
>   * reporting.
>   */
> -static void make_black_object(unsigned long ptr)
> +static void make_black_object(unsigned long ptr, bool is_phys)
>  {
> -	paint_ptr(ptr, KMEMLEAK_BLACK);
> +	paint_ptr(ptr, KMEMLEAK_BLACK, is_phys);
>  }
>  
>  /*
> @@ -1007,7 +1043,7 @@ void __ref kmemleak_free_part(const void *ptr,
> size_t size)
>  	pr_debug("%s(0x%p)\n", __func__, ptr);
>  
>  	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
> -		delete_object_part((unsigned long)ptr, size);
> +		delete_object_part((unsigned long)ptr, size, false);
>  }
>  EXPORT_SYMBOL_GPL(kmemleak_free_part);
>  
> @@ -1095,7 +1131,7 @@ void __ref kmemleak_ignore(const void *ptr)
>  	pr_debug("%s(0x%p)\n", __func__, ptr);
>  
>  	if (kmemleak_enabled && ptr && !IS_ERR(ptr))
> -		make_black_object((unsigned long)ptr);
> +		make_black_object((unsigned long)ptr, false);
>  }
>  EXPORT_SYMBOL(kmemleak_ignore);
>  
> @@ -1153,7 +1189,7 @@ void __ref kmemleak_alloc_phys(phys_addr_t
> phys, size_t size, gfp_t gfp)
>  		 * Create object with OBJECT_PHYS flag and
>  		 * assume min_count 0.
>  		 */
> -		create_object_phys((unsigned long)__va(phys), size, 0,
> gfp);
> +		create_object_phys((unsigned long)phys, size, 0, gfp);
>  }
>  EXPORT_SYMBOL(kmemleak_alloc_phys);
>  
> @@ -1166,8 +1202,10 @@ EXPORT_SYMBOL(kmemleak_alloc_phys);
>   */
>  void __ref kmemleak_free_part_phys(phys_addr_t phys, size_t size)
>  {
> +	pr_debug("%s(0x%pa)\n", __func__, &phys);
> +
>  	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) <
> max_low_pfn)
> -		kmemleak_free_part(__va(phys), size);
> +		delete_object_part((unsigned long)phys, size, true);
>  }
>  EXPORT_SYMBOL(kmemleak_free_part_phys);
>  
> @@ -1178,8 +1216,10 @@ EXPORT_SYMBOL(kmemleak_free_part_phys);
>   */
>  void __ref kmemleak_ignore_phys(phys_addr_t phys)
>  {
> +	pr_debug("%s(0x%pa)\n", __func__, &phys);
> +
>  	if (PHYS_PFN(phys) >= min_low_pfn && PHYS_PFN(phys) <
> max_low_pfn)
> -		kmemleak_ignore(__va(phys));
> +		make_black_object((unsigned long)phys, true);
>  }
>  EXPORT_SYMBOL(kmemleak_ignore_phys);
>  
> @@ -1190,6 +1230,9 @@ static bool update_checksum(struct
> kmemleak_object *object)
>  {
>  	u32 old_csum = object->checksum;
>  
> +	if (WARN_ON_ONCE(object->flags & OBJECT_PHYS))
> +		return false;
> +
>  	kasan_disable_current();
>  	kcsan_disable_current();
>  	object->checksum = crc32(0, kasan_reset_tag((void *)object-
> >pointer), object->size);
> @@ -1343,6 +1386,7 @@ static void scan_object(struct kmemleak_object
> *object)
>  {
>  	struct kmemleak_scan_area *area;
>  	unsigned long flags;
> +	void *obj_ptr;
>  
>  	/*
>  	 * Once the object->lock is acquired, the corresponding memory
> block
> @@ -1354,10 +1398,15 @@ static void scan_object(struct
> kmemleak_object *object)
>  	if (!(object->flags & OBJECT_ALLOCATED))
>  		/* already freed object */
>  		goto out;
> +
> +	obj_ptr = object->flags & OBJECT_PHYS ?
> +		  __va((phys_addr_t)object->pointer) :
> +		  (void *)object->pointer;
> +
>  	if (hlist_empty(&object->area_list) ||
>  	    object->flags & OBJECT_FULL_SCAN) {
> -		void *start = (void *)object->pointer;
> -		void *end = (void *)(object->pointer + object->size);
> +		void *start = obj_ptr;
> +		void *end = obj_ptr + object->size;
>  		void *next;
>  
>  		do {



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v4 3/4] mm: kmemleak: add rbtree and store physical address for objects allocated with PA
  2022-06-23  8:45   ` Yee Lee
@ 2022-06-23 11:25     ` Yee Lee
  2022-06-24 10:18       ` Catalin Marinas
  2022-06-25  6:38       ` Patrick Wang
  0 siblings, 2 replies; 11+ messages in thread
From: Yee Lee @ 2022-06-23 11:25 UTC (permalink / raw)
  To: Patrick Wang, catalin.marinas, akpm; +Cc: linux-mm, linux-kernel

On Thu, 2022-06-23 at 16:45 +0800, Yee Lee wrote:
> Now we have separate rb_trees for phys and virtual addresses. But why
> can't we have kmemleak_free_phys()? It could apply the same pattern to
> delete_object_full().
> 
> Some users need to remove a kmemleak object from the phys tree
> entirely, but there is no interface for that.

Please check this: an issue occurred in kfence with the latest kmemleak
patches. The kfence pool allocates memory from memblock but has no way
to free it from the phys tree.
https://lkml.org/lkml/2022/6/23/486

> On Sat, 2022-06-11 at 11:55 +0800, Patrick Wang wrote:
> [...]



^ permalink raw reply	[flat|nested] 11+ messages in thread

* Re: [PATCH v4 3/4] mm: kmemleak: add rbtree and store physical address for objects allocated with PA
  2022-06-23 11:25     ` Yee Lee
@ 2022-06-24 10:18       ` Catalin Marinas
  2022-06-25  6:38       ` Patrick Wang
  1 sibling, 0 replies; 11+ messages in thread
From: Catalin Marinas @ 2022-06-24 10:18 UTC (permalink / raw)
  To: Yee Lee; +Cc: Patrick Wang, akpm, linux-mm, linux-kernel, Marco Elver

On Thu, Jun 23, 2022 at 07:25:15PM +0800, Yee Lee wrote:
> On Thu, 2022-06-23 at 16:45 +0800, Yee Lee wrote:
> > Now we have separate rb_trees for phys and virt addresses. But why
> > can't we have kmemleak_free_phys()? It could follow the same pattern
> > as delete_object_full().
> > 
> > Some users would need to remove a kmemleak object from the phys
> > tree, but we don't have an interface for that.
> 
> Please check this: an issue happened at kfence with the latest kmemleak
> patches. The kfence pool allocates memory from memblock but has no way
> to free it from the phys tree.
> https://lkml.org/lkml/2022/6/23/486

I don't think I was cc'ed on the other thread but at a quick look, what
you probably want is:

	kmemleak_ignore_phys(__kfence_pool);

instead of the current kmemleak_free(). With Patrick's changes, you can
no longer tell kmemleak about an object with a physical address and free
it with the virtual one.

-- 
Catalin



* Re: [PATCH v4 3/4] mm: kmemleak: add rbtree and store physical address for objects allocated with PA
  2022-06-23 11:25     ` Yee Lee
  2022-06-24 10:18       ` Catalin Marinas
@ 2022-06-25  6:38       ` Patrick Wang
  1 sibling, 0 replies; 11+ messages in thread
From: Patrick Wang @ 2022-06-25  6:38 UTC (permalink / raw)
  To: Yee Lee, catalin.marinas, akpm; +Cc: linux-mm, linux-kernel



On 2022/6/23 19:25, Yee Lee wrote:
> On Thu, 2022-06-23 at 16:45 +0800, Yee Lee wrote:
>> Now we have separate rb_trees for phys and virt addresses. But why
>> can't we have kmemleak_free_phys()? It could follow the same pattern
>> as delete_object_full().
>>
>> Some users would need to remove a kmemleak object from the phys
>> tree, but we don't have an interface for that.
> 
> Please check this: an issue happened at kfence with the latest kmemleak
> patches. The kfence pool allocates memory from memblock but has no way
> to free it from the phys tree.
> https://lkml.org/lkml/2022/6/23/486

Hi Yee,

Thanks for your information. A similar situation appears in
percpu.c (the address is allocated with memblock and the object is
freed with kmemleak_free(), if I didn't miss others).
kmemleak_ignore_phys() could replace kmemleak_free() for physical
objects, as Catalin said. Adding kmemleak_free_phys() might not be
essential, since few places hit this situation.

Thanks,
Patrick



* Re: [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan
  2022-06-11  3:55 [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
                   ` (3 preceding siblings ...)
  2022-06-11  3:55 ` [PATCH v4 4/4] mm: kmemleak: check physical address when scan Patrick Wang
@ 2022-07-12 13:43 ` Geert Uytterhoeven
  4 siblings, 0 replies; 11+ messages in thread
From: Geert Uytterhoeven @ 2022-07-12 13:43 UTC (permalink / raw)
  To: Patrick Wang
  Cc: Catalin Marinas, Andrew Morton, Linux MM,
	Linux Kernel Mailing List, yee.lee, Linux-Renesas

Hi Patrick,

On Sat, Jun 11, 2022 at 8:50 AM Patrick Wang
<patrick.wang.shcn@gmail.com> wrote:
> The kmemleak_*_phys() interface uses "min_low_pfn" and
> "max_low_pfn" to check addresses. But on some architectures,
> kmemleak_*_phys() is called before those two variables are
> initialized. The following steps will be taken:
>
> 1) Add OBJECT_PHYS flag and rbtree for the objects allocated
>    with physical address
> 2) Store physical address in objects if allocated with OBJECT_PHYS
> 3) Check the boundary when scan instead of in kmemleak_*_phys()

Thanks for your series!

> This patch set will solve:
> https://lore.kernel.org/r/20220527032504.30341-1-yee.lee@mediatek.com
> https://lore.kernel.org/r/9dd08bb5-f39e-53d8-f88d-bec598a08c93@gmail.com

Thanks, this finally gets rid of the thousands of suspected memory
leaks reported since commit 23c2d497de21f258 ("mm: kmemleak: take
a full lowmem check in kmemleak_*_phys()") in v5.18-rc3 on my arm64
boards.

Tested-by: Geert Uytterhoeven <geert+renesas@glider.be>

Gr{oetje,eeting}s,

                        Geert

--
Geert Uytterhoeven -- There's lots of Linux beyond ia32 -- geert@linux-m68k.org

In personal conversations with technical people, I call myself a hacker. But
when I'm talking to journalists I just say "programmer" or something like that.
                                -- Linus Torvalds




Thread overview: 11+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-06-11  3:55 [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check when scan Patrick Wang
2022-06-11  3:55 ` [PATCH v4 1/4] mm: kmemleak: remove kmemleak_not_leak_phys() and the min_count argument to kmemleak_alloc_phys() Patrick Wang
2022-06-11  9:47   ` Catalin Marinas
2022-06-11  3:55 ` [PATCH v4 2/4] mm: kmemleak: add OBJECT_PHYS flag for objects allocated with physical address Patrick Wang
2022-06-11  3:55 ` [PATCH v4 3/4] mm: kmemleak: add rbtree and store physical address for objects allocated with PA Patrick Wang
2022-06-23  8:45   ` Yee Lee
2022-06-23 11:25     ` Yee Lee
2022-06-24 10:18       ` Catalin Marinas
2022-06-25  6:38       ` Patrick Wang
2022-06-11  3:55 ` [PATCH v4 4/4] mm: kmemleak: check physical address when scan Patrick Wang
2022-07-12 13:43 ` [PATCH v4 0/4] mm: kmemleak: store objects allocated with physical address separately and check " Geert Uytterhoeven
