* [PATCH 0/4] Kmemleak patches for -next
@ 2011-09-29 11:02 Catalin Marinas
  2011-09-29 11:02 ` [PATCH 1/4] kmemleak: Show where early_log issues come from Catalin Marinas
                   ` (3 more replies)
  0 siblings, 4 replies; 22+ messages in thread
From: Catalin Marinas @ 2011-09-29 11:02 UTC (permalink / raw)
  To: linux-kernel

Hi,

This is a set of kmemleak improvements that I would like to push to my
for-next branch. The second patch touches the percpu code, so it needs
an ack from the corresponding maintainers (cc'ed).

Thanks.


Catalin Marinas (4):
      kmemleak: Show where early_log issues come from
      kmemleak: Handle percpu memory allocation
      kmemleak: When the early log buffer is exceeded, report the actual number
      kmemleak: Report previously found leaks even after an error


 mm/kmemleak.c |   80 ++++++++++++++++++++++++++++++++++++++++-----------------
 mm/percpu.c   |   22 +++++++++++++++-
 2 files changed, 77 insertions(+), 25 deletions(-)

-- 
Catalin



* [PATCH 1/4] kmemleak: Show where early_log issues come from
  2011-09-29 11:02 [PATCH 0/4] Kmemleak patches for -next Catalin Marinas
@ 2011-09-29 11:02 ` Catalin Marinas
  2011-09-29 11:02 ` [PATCH 2/4] kmemleak: Handle percpu memory allocation Catalin Marinas
                   ` (2 subsequent siblings)
  3 siblings, 0 replies; 22+ messages in thread
From: Catalin Marinas @ 2011-09-29 11:02 UTC (permalink / raw)
  To: linux-kernel; +Cc: Andrew Morton, Steven Rostedt

Based on an initial patch by Steven Rostedt.

Early kmemleak warnings did not show where the actual kmemleak API had
been called from but rather just a backtrace to the kmemleak_init()
function. By having all early kmemleak logs record the stack_trace, we
can have kmemleak_init() write exactly where the problem occurred. This
patch adds the setting of the kmemleak_warning variable every time a
kmemleak warning is issued. The kmemleak_init() function checks this
variable during early log replaying and prints the log trace if there
was any warning.

Reported-by: Steven Rostedt <rostedt@goodmis.org>
Cc: Andrew Morton <akpm@google.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Acked-by: Steven Rostedt <rostedt@goodmis.org>
---
 mm/kmemleak.c |   37 ++++++++++++++++++++++++++++---------
 1 files changed, 28 insertions(+), 9 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index d6880f5..573152e 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -196,7 +196,9 @@ static atomic_t kmemleak_enabled = ATOMIC_INIT(0);
 static atomic_t kmemleak_initialized = ATOMIC_INIT(0);
 /* enables or disables early logging of the memory operations */
 static atomic_t kmemleak_early_log = ATOMIC_INIT(1);
-/* set if a fata kmemleak error has occurred */
+/* set if a kmemleak warning was issued */
+static atomic_t kmemleak_warning = ATOMIC_INIT(0);
+/* set if a fatal kmemleak error has occurred */
 static atomic_t kmemleak_error = ATOMIC_INIT(0);
 
 /* minimum and maximum address that may be valid pointers */
@@ -259,9 +261,10 @@ static void kmemleak_disable(void);
 /*
  * Print a warning and dump the stack trace.
  */
-#define kmemleak_warn(x...)	do {	\
-	pr_warning(x);			\
-	dump_stack();			\
+#define kmemleak_warn(x...)	do {		\
+	pr_warning(x);				\
+	dump_stack();				\
+	atomic_set(&kmemleak_warning, 1);	\
 } while (0)
 
 /*
@@ -403,8 +406,8 @@ static struct kmemleak_object *lookup_object(unsigned long ptr, int alias)
 		object = prio_tree_entry(node, struct kmemleak_object,
 					 tree_node);
 		if (!alias && object->pointer != ptr) {
-			pr_warning("Found object by alias at 0x%08lx\n", ptr);
-			dump_stack();
+			kmemleak_warn("Found object by alias at 0x%08lx\n",
+				      ptr);
 			dump_object_info(object);
 			object = NULL;
 		}
@@ -811,8 +814,7 @@ static void __init log_early(int op_type, const void *ptr, size_t size,
 	log->ptr = ptr;
 	log->size = size;
 	log->min_count = min_count;
-	if (op_type == KMEMLEAK_ALLOC)
-		log->trace_len = __save_stack_trace(log->trace);
+	log->trace_len = __save_stack_trace(log->trace);
 	crt_early_log++;
 	local_irq_restore(flags);
 }
@@ -1659,6 +1661,17 @@ static int kmemleak_boot_config(char *str)
 }
 early_param("kmemleak", kmemleak_boot_config);
 
+static void __init print_log_trace(struct early_log *log)
+{
+	struct stack_trace trace;
+
+	trace.nr_entries = log->trace_len;
+	trace.entries = log->trace;
+
+	pr_notice("Early log backtrace:\n");
+	print_stack_trace(&trace, 2);
+}
+
 /*
  * Kmemleak initialization.
  */
@@ -1720,7 +1733,13 @@ void __init kmemleak_init(void)
 			kmemleak_no_scan(log->ptr);
 			break;
 		default:
-			WARN_ON(1);
+			kmemleak_warn("Unknown early log operation: %d\n",
+				      log->op_type);
+		}
+
+		if (atomic_read(&kmemleak_warning)) {
+			print_log_trace(log);
+			atomic_set(&kmemleak_warning, 0);
 		}
 	}
 }




* [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-09-29 11:02 [PATCH 0/4] Kmemleak patches for -next Catalin Marinas
  2011-09-29 11:02 ` [PATCH 1/4] kmemleak: Show where early_log issues come from Catalin Marinas
@ 2011-09-29 11:02 ` Catalin Marinas
  2011-09-29 11:56   ` Eric Dumazet
  2011-09-29 19:28   ` Tejun Heo
  2011-09-29 11:02 ` [PATCH 3/4] kmemleak: When the early log buffer is exceeded, report the actual number Catalin Marinas
  2011-09-29 11:02 ` [PATCH 4/4] kmemleak: Report previously found leaks even after an error Catalin Marinas
  3 siblings, 2 replies; 22+ messages in thread
From: Catalin Marinas @ 2011-09-29 11:02 UTC (permalink / raw)
  To: linux-kernel; +Cc: Huajun Li, Tejun Heo, Christoph Lameter

This patch adds kmemleak callbacks from the percpu allocator, reducing a
number of false positives caused by kmemleak not scanning such memory
blocks. The percpu chunks are never reported as leaks because of current
kmemleak limitations with the __percpu pointer not pointing directly to
the actual chunks.

Reported-by: Huajun Li <huajun.li.lee@gmail.com>
Cc: Tejun Heo <tj@kernel.org>,
Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/percpu.c |   22 +++++++++++++++++++++-
 1 files changed, 21 insertions(+), 1 deletions(-)

diff --git a/mm/percpu.c b/mm/percpu.c
index bf80e55..ece9f85 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -67,6 +67,7 @@
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <linux/workqueue.h>
+#include <linux/kmemleak.h>
 
 #include <asm/cacheflush.h>
 #include <asm/sections.h>
@@ -709,6 +710,8 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved)
 	const char *err;
 	int slot, off, new_alloc;
 	unsigned long flags;
+	void __percpu *ptr;
+	unsigned int cpu;
 
 	if (unlikely(!size || size > PCPU_MIN_UNIT_SIZE || align > PAGE_SIZE)) {
 		WARN(true, "illegal size (%zu) or align (%zu) for "
@@ -801,7 +804,16 @@ area_found:
 	mutex_unlock(&pcpu_alloc_mutex);
 
 	/* return address relative to base address */
-	return __addr_to_pcpu_ptr(chunk->base_addr + off);
+	ptr = __addr_to_pcpu_ptr(chunk->base_addr + off);
+
+	/*
+	 * Percpu allocations are currently reported as leaks (kmemleak false
+	 * positives). To avoid this, just set min_count to 0.
+	 */
+	for_each_possible_cpu(cpu)
+		kmemleak_alloc(per_cpu_ptr(ptr, cpu), size, 0, GFP_KERNEL);
+
+	return ptr;
 
 fail_unlock:
 	spin_unlock_irqrestore(&pcpu_lock, flags);
@@ -911,10 +923,14 @@ void free_percpu(void __percpu *ptr)
 	struct pcpu_chunk *chunk;
 	unsigned long flags;
 	int off;
+	unsigned int cpu;
 
 	if (!ptr)
 		return;
 
+	for_each_possible_cpu(cpu)
+		kmemleak_free(per_cpu_ptr(ptr, cpu));
+
 	addr = __pcpu_ptr_to_addr(ptr);
 
 	spin_lock_irqsave(&pcpu_lock, flags);
@@ -1619,6 +1635,8 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
 			rc = -ENOMEM;
 			goto out_free_areas;
 		}
+		/* kmemleak tracks the percpu allocations separately */
+		kmemleak_free(ptr);
 		areas[group] = ptr;
 
 		base = min(ptr, base);
@@ -1733,6 +1751,8 @@ int __init pcpu_page_first_chunk(size_t reserved_size,
 					   "for cpu%u\n", psize_str, cpu);
 				goto enomem;
 			}
+			/* kmemleak tracks the percpu allocations separately */
+			kmemleak_free(ptr);
 			pages[j++] = virt_to_page(ptr);
 		}
 




* [PATCH 3/4] kmemleak: When the early log buffer is exceeded, report the actual number
  2011-09-29 11:02 [PATCH 0/4] Kmemleak patches for -next Catalin Marinas
  2011-09-29 11:02 ` [PATCH 1/4] kmemleak: Show where early_log issues come from Catalin Marinas
  2011-09-29 11:02 ` [PATCH 2/4] kmemleak: Handle percpu memory allocation Catalin Marinas
@ 2011-09-29 11:02 ` Catalin Marinas
  2011-09-29 11:02 ` [PATCH 4/4] kmemleak: Report previously found leaks even after an error Catalin Marinas
  3 siblings, 0 replies; 22+ messages in thread
From: Catalin Marinas @ 2011-09-29 11:02 UTC (permalink / raw)
  To: linux-kernel

Just reporting that the early log buffer has been exceeded doesn't mean
much. This patch moves the error printing to the kmemleak_init()
function and displays the actual number of calls to the kmemleak API
made during early logging.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c |   21 +++++++++++++++------
 1 files changed, 15 insertions(+), 6 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 573152e..e6dd17b 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -797,9 +797,13 @@ static void __init log_early(int op_type, const void *ptr, size_t size,
 	unsigned long flags;
 	struct early_log *log;
 
+	if (atomic_read(&kmemleak_error)) {
+		/* kmemleak stopped recording, just count the requests */
+		crt_early_log++;
+		return;
+	}
+
 	if (crt_early_log >= ARRAY_SIZE(early_log)) {
-		pr_warning("Early log buffer exceeded, "
-			   "please increase DEBUG_KMEMLEAK_EARLY_LOG_SIZE\n");
 		kmemleak_disable();
 		return;
 	}
@@ -1634,7 +1638,6 @@ static void kmemleak_disable(void)
 		return;
 
 	/* stop any memory operation tracing */
-	atomic_set(&kmemleak_early_log, 0);
 	atomic_set(&kmemleak_enabled, 0);
 
 	/* check whether it is too early for a kernel thread */
@@ -1694,12 +1697,18 @@ void __init kmemleak_init(void)
 	scan_area_cache = KMEM_CACHE(kmemleak_scan_area, SLAB_NOLEAKTRACE);
 	INIT_PRIO_TREE_ROOT(&object_tree_root);
 
+	if (crt_early_log >= ARRAY_SIZE(early_log))
+		pr_warning("Early log buffer exceeded (%d), please increase "
+			   "DEBUG_KMEMLEAK_EARLY_LOG_SIZE\n", crt_early_log);
+
 	/* the kernel is still in UP mode, so disabling the IRQs is enough */
 	local_irq_save(flags);
-	if (!atomic_read(&kmemleak_error)) {
+	atomic_set(&kmemleak_early_log, 0);
+	if (atomic_read(&kmemleak_error)) {
+		local_irq_restore(flags);
+		return;
+	} else
 		atomic_set(&kmemleak_enabled, 1);
-		atomic_set(&kmemleak_early_log, 0);
-	}
 	local_irq_restore(flags);
 
 	/*




* [PATCH 4/4] kmemleak: Report previously found leaks even after an error
  2011-09-29 11:02 [PATCH 0/4] Kmemleak patches for -next Catalin Marinas
                   ` (2 preceding siblings ...)
  2011-09-29 11:02 ` [PATCH 3/4] kmemleak: When the early log buffer is exceeded, report the actual number Catalin Marinas
@ 2011-09-29 11:02 ` Catalin Marinas
  2011-10-04 17:45   ` Nick Bowler
  3 siblings, 1 reply; 22+ messages in thread
From: Catalin Marinas @ 2011-09-29 11:02 UTC (permalink / raw)
  To: linux-kernel

If an error fatal to kmemleak (like memory allocation failure) happens,
kmemleak disables itself but it also removes the access to any
previously found memory leaks. This patch allows read-only access to the
kmemleak debugfs interface but disables any other action.

Repored-by: Nick Bowler <nbowler@elliptictech.com>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/kmemleak.c |   22 +++++++++++++---------
 1 files changed, 13 insertions(+), 9 deletions(-)

diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index e6dd17b..381b67a 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -1473,9 +1473,6 @@ static const struct seq_operations kmemleak_seq_ops = {
 
 static int kmemleak_open(struct inode *inode, struct file *file)
 {
-	if (!atomic_read(&kmemleak_enabled))
-		return -EBUSY;
-
 	return seq_open(file, &kmemleak_seq_ops);
 }
 
@@ -1549,6 +1546,9 @@ static ssize_t kmemleak_write(struct file *file, const char __user *user_buf,
 	int buf_size;
 	int ret;
 
+	if (!atomic_read(&kmemleak_enabled))
+		return -EBUSY;
+
 	buf_size = min(size, (sizeof(buf) - 1));
 	if (strncpy_from_user(buf, user_buf, buf_size) < 0)
 		return -EFAULT;
@@ -1608,20 +1608,24 @@ static const struct file_operations kmemleak_fops = {
 };
 
 /*
- * Perform the freeing of the kmemleak internal objects after waiting for any
- * current memory scan to complete.
+ * Stop the memory scanning thread and free the kmemleak internal objects if
+ * no previous scan thread (otherwise, kmemleak may still have some useful
+ * information on memory leaks).
  */
 static void kmemleak_do_cleanup(struct work_struct *work)
 {
 	struct kmemleak_object *object;
+	bool cleanup = scan_thread == NULL;
 
 	mutex_lock(&scan_mutex);
 	stop_scan_thread();
 
-	rcu_read_lock();
-	list_for_each_entry_rcu(object, &object_list, object_list)
-		delete_object_full(object->pointer);
-	rcu_read_unlock();
+	if (cleanup) {
+		rcu_read_lock();
+		list_for_each_entry_rcu(object, &object_list, object_list)
+			delete_object_full(object->pointer);
+		rcu_read_unlock();
+	}
 	mutex_unlock(&scan_mutex);
 }
 




* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-09-29 11:02 ` [PATCH 2/4] kmemleak: Handle percpu memory allocation Catalin Marinas
@ 2011-09-29 11:56   ` Eric Dumazet
  2011-09-29 17:17     ` Catalin Marinas
  2011-09-29 19:28   ` Tejun Heo
  1 sibling, 1 reply; 22+ messages in thread
From: Eric Dumazet @ 2011-09-29 11:56 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Huajun Li, Tejun Heo, Christoph Lameter

On Thursday, 29 September 2011 at 12:02 +0100, Catalin Marinas wrote:
> This patch adds kmemleak callbacks from the percpu allocator, reducing a
> number of false positives caused by kmemleak not scanning such memory
> blocks. The percpu chunks are never reported as leaks because of current
> kmemleak limitations with the __percpu pointer not pointing directly to
> the actual chunks.
> 
> Reported-by: Huajun Li <huajun.li.lee@gmail.com>
> Cc: Tejun Heo <tj@kernel.org>,
> Cc: Christoph Lameter <cl@linux-foundation.org>
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> ---

Hi Catalin

I wonder if you tried it on a 16- or 64-CPU machine?

I guess we should increase CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE default

Or you could add new kmemleak_percpu_alloc() / kmemleak_percpu_free()
primitives to not waste entries in early_log[]





* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-09-29 11:56   ` Eric Dumazet
@ 2011-09-29 17:17     ` Catalin Marinas
  2011-09-29 17:57       ` Christoph Lameter
  0 siblings, 1 reply; 22+ messages in thread
From: Catalin Marinas @ 2011-09-29 17:17 UTC (permalink / raw)
  To: Eric Dumazet; +Cc: linux-kernel, Huajun Li, Tejun Heo, Christoph Lameter

On Thu, Sep 29, 2011 at 12:56:13PM +0100, Eric Dumazet wrote:
> On Thursday, 29 September 2011 at 12:02 +0100, Catalin Marinas wrote:
> > This patch adds kmemleak callbacks from the percpu allocator, reducing a
> > number of false positives caused by kmemleak not scanning such memory
> > blocks. The percpu chunks are never reported as leaks because of current
> > kmemleak limitations with the __percpu pointer not pointing directly to
> > the actual chunks.
> > 
> > Reported-by: Huajun Li <huajun.li.lee@gmail.com>
> > Cc: Tejun Heo <tj@kernel.org>,
> > Cc: Christoph Lameter <cl@linux-foundation.org>
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > ---
> 
> I wonder if you tried it on a 16- or 64-CPU machine?

I only tried it on a 4-CPU system and the early log allocations went from
under 400 to nearly 800.

> I guess we should increase CONFIG_DEBUG_KMEMLEAK_EARLY_LOG_SIZE default
> 
> Or you could add new kmemleak_percpu_alloc() / kmemleak_percpu_free()
> primitives to not waste entries in early_log[]

If we don't track them in early_log[] before kmemleak is initialised,
they aren't tracked at all, so we end up in pretty much the same
situation.

A better approach would be to tell kmemleak about the __percpu pointer
via some additional API (or some other way to detect that it's a
__percpu pointer) and kmemleak would do the for_each_possible_cpu()
conversion when scanning.

The question - can we guarantee that the __percpu pointer returned by
alloc_percpu() does not overlap with some memory block allocated
elsewhere? At least for the first chunk it seems to overlap with the
percpu data section, which is fine as that's not added in kmemleak as an
allocated block.

Allowing overlapping blocks in kmemleak may be a solution but it's not
implemented yet.

-- 
Catalin


* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-09-29 17:17     ` Catalin Marinas
@ 2011-09-29 17:57       ` Christoph Lameter
  0 siblings, 0 replies; 22+ messages in thread
From: Christoph Lameter @ 2011-09-29 17:57 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Eric Dumazet, linux-kernel, Huajun Li, Tejun Heo

On Thu, 29 Sep 2011, Catalin Marinas wrote:

> A better approach would be to tell kmemleak about the __percpu pointer
> via some additional API (or some other way to detect that it's a
> __percpu pointer) and kmemleak would do the for_each_possible_cpu()
> conversion when scanning.
>
> The question - can we guarantee that the __percpu pointer returned by
> alloc_percpu() does not overlap with some memory block allocated
> elsewhere? At least for the first chunk it seems to overlap with the
> percpu data section, which is fine as that's not added in kmemleak as an
> allocated block.

percpu data can be placed in physical areas as well as virtually mapped
areas. The size of the offset returned by alloc_percpu() depends on the
mechanism and therefore is treated like an arbitrary value.

> Allowing overlapping blocks in kmemleak may be a solution but it's not
> implemented yet.

You could use the address of the instance of the percpu variable in the
*first* cpu as the canonical item. As soon as you calculate an actual
address (using a processor number) you will have a physical address that
cannot overlap elsewhere.
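
For illustration, a minimal sketch of that addressing model. This is not
the kernel's actual code (the real conversion goes through
per_cpu_ptr()/SHIFT_PERCPU_PTR(), and __per_cpu_offset[] only has this
simple form on generic SMP builds), but it shows why only the per-CPU
addresses derived from the __percpu cookie are real, non-overlapping
pointers:

/*
 * Sketch only: a __percpu "pointer" is effectively an offset into each
 * CPU's percpu unit. It becomes a real address only after adding that
 * CPU's unit base, and the resulting addresses cannot overlap with each
 * other or with ordinary allocations.
 */
extern unsigned long __per_cpu_offset[];	/* per-CPU unit bases */

static void *sketch_per_cpu_ptr(void *pcpu_ptr, unsigned int cpu)
{
	return (char *)pcpu_ptr + __per_cpu_offset[cpu];
}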


* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-09-29 11:02 ` [PATCH 2/4] kmemleak: Handle percpu memory allocation Catalin Marinas
  2011-09-29 11:56   ` Eric Dumazet
@ 2011-09-29 19:28   ` Tejun Heo
  2011-09-30  8:37     ` Catalin Marinas
  2011-10-03 15:21     ` Catalin Marinas
  1 sibling, 2 replies; 22+ messages in thread
From: Tejun Heo @ 2011-09-29 19:28 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Huajun Li, Christoph Lameter

Hello,

On Thu, Sep 29, 2011 at 12:02:28PM +0100, Catalin Marinas wrote:
> This patch adds kmemleak callbacks from the percpu allocator, reducing a
> number of false positives caused by kmemleak not scanning such memory
> blocks. The percpu chunks are never reported as leaks because of current
> kmemleak limitations with the __percpu pointer not pointing directly to
> the actual chunks.
...
> @@ -801,7 +804,16 @@ area_found:
>  	mutex_unlock(&pcpu_alloc_mutex);
>  
>  	/* return address relative to base address */
> -	return __addr_to_pcpu_ptr(chunk->base_addr + off);
> +	ptr = __addr_to_pcpu_ptr(chunk->base_addr + off);
> +
> +	/*
> +	 * Percpu allocations are currently reported as leaks (kmemleak false
> +	 * positives). To avoid this, just set min_count to 0.
> +	 */
> +	for_each_possible_cpu(cpu)
> +		kmemleak_alloc(per_cpu_ptr(ptr, cpu), size, 0, GFP_KERNEL);
> +
> +	return ptr;

I'm pretty ignorant about kmemleak but it scans memories looking for
references to allocated objects, right?  There currently is no way for
such scanner to tell a percpu pointer in memory from a regular pointer
making it impossible to track percpu objects properly from kmemleak.
If my understanding is correct, I don't see much point in tracking
each percpu alloc/free.  Why not just mark all pages taken by percpu
allocator as untrackable?

If we want to track percpu memory leak properly, I think we'll need
more invasive changes.  If kmemleak is enabled, we can offset percpu
pointer so that the scanner can tell percpu pointers and then kmemleak
should properly account for all percpu areas pointed to by the percpu
pointer.  Hmmm... or, alternatively, we can make kmemleak only track
allocations for the first possible cpu and ignore all the rest and
modify percpu such that percpu pointer points to the actual address of
the first cpu if kmemleak is enabled.

Thank you.

-- 
tejun


* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-09-29 19:28   ` Tejun Heo
@ 2011-09-30  8:37     ` Catalin Marinas
  2011-10-03 15:21     ` Catalin Marinas
  1 sibling, 0 replies; 22+ messages in thread
From: Catalin Marinas @ 2011-09-30  8:37 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel, Huajun Li, Christoph Lameter

Hi Tejun,

On Thu, Sep 29, 2011 at 08:28:18PM +0100, Tejun Heo wrote:
> On Thu, Sep 29, 2011 at 12:02:28PM +0100, Catalin Marinas wrote:
> > This patch adds kmemleak callbacks from the percpu allocator, reducing a
> > number of false positives caused by kmemleak not scanning such memory
> > blocks. The percpu chunks are never reported as leaks because of current
> > kmemleak limitations with the __percpu pointer not pointing directly to
> > the actual chunks.
> ...
> > @@ -801,7 +804,16 @@ area_found:
> >  	mutex_unlock(&pcpu_alloc_mutex);
> >  
> >  	/* return address relative to base address */
> > -	return __addr_to_pcpu_ptr(chunk->base_addr + off);
> > +	ptr = __addr_to_pcpu_ptr(chunk->base_addr + off);
> > +
> > +	/*
> > +	 * Percpu allocations are currently reported as leaks (kmemleak false
> > +	 * positives). To avoid this, just set min_count to 0.
> > +	 */
> > +	for_each_possible_cpu(cpu)
> > +		kmemleak_alloc(per_cpu_ptr(ptr, cpu), size, 0, GFP_KERNEL);
> > +
> > +	return ptr;
> 
> I'm pretty ignorant about kmemleak but it scans memories looking for
> references to allocated objects, right?  There currently is no way for
> such scanner to tell a percpu pointer in memory from a regular pointer
> making it impossible to track percpu objects properly from kmemleak.
> If my understanding is correct, I don't see much point in tracking
> each percpu alloc/free.  Why not just mark all pages taken by percpu
> allocator as untrackable?

That's pretty much the current behaviour (apart from the embedded chunk).
The problem is that percpu blocks may contain references to other memory
blocks, and if those references are not found it leads to false positives.
So rather than marking each individual false positive as such, we could
add tracking support for percpu allocations.
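
As a rough sketch of why that matters (simplified and illustrative:
lookup_object() is the helper touched by patch 1, but the loop and the
count field here only convey the idea, not the exact mm/kmemleak.c code),
every word of a scanned block is treated as a candidate pointer, so
blocks that are never scanned cannot contribute references and the
objects they point to end up reported as leaks:

/*
 * Illustrative sketch of the scanning step, not the exact kmemleak code:
 * each word in a scanned block is looked up as a potential pointer to a
 * tracked object. Objects never found this way are reported as leaks,
 * which is what happens to anything referenced only from unscanned
 * percpu blocks.
 */
static void sketch_scan_block(void *start, void *end)
{
	unsigned long *word;

	for (word = start; (void *)word < end; word++) {
		struct kmemleak_object *object = lookup_object(*word, 1);

		if (object)
			object->count++;	/* one more reference found */
	}
}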

> If we want to track percpu memory leak properly, I think we'll need
> more invasive changes.  If kmemleak is enabled, we can offset percpu
> pointer so that the scanner can tell percpu pointers and then kmemleak
> should properly account for all percpu areas pointed to by the percpu
> pointer.

Or we could just extend the kmemleak API to tell it that it's a percpu
pointer (e.g. kmemleak_alloc_percpu).

> Hmmm... or, alternatively, we can make kmemleak only track
> allocations for the first possible cpu and ignore all the rest and
> modify percpu such that percpu pointer points to the actual address of
> the first cpu if kmemleak is enabled.

This would solve the kmemleak issues with overlapping memory blocks.
I'll try to come up with a patch and see how feasible it is.

Thanks.

-- 
Catalin


* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-09-29 19:28   ` Tejun Heo
  2011-09-30  8:37     ` Catalin Marinas
@ 2011-10-03 15:21     ` Catalin Marinas
  2011-10-03 16:14       ` Christoph Lameter
  2011-10-04  7:59       ` Tejun Heo
  1 sibling, 2 replies; 22+ messages in thread
From: Catalin Marinas @ 2011-10-03 15:21 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel, Huajun Li, Christoph Lameter

Hi Tejun,

On Thu, Sep 29, 2011 at 08:28:18PM +0100, Tejun Heo wrote:
> If we want to track percpu memory leak properly, I think we'll need
> more invasive changes.  If kmemleak is enabled, we can offset percpu
> pointer so that the scanner can tell percpu pointers and then kmemleak
> should properly account for all percpu areas pointed to by the percpu
> pointer.  Hmmm... or, alternatively, we can make kmemleak only track
> allocations for the first possible cpu and ignore all the rest and
> modify percpu such that percpu pointer points to the actual address of
> the first cpu if kmemleak is enabled.

I had a look at the latter variant and we don't end up with clean code.
I'm not even sure it's worth the hassle as most leaks come from slab
allocations.

The updated patch below adds specific kmemleak API for percpu pointers
so that it only logs one call per allocation rather than the number of
possible cpus (could be split in two, though the patch is relatively
simple):

----------------8<---------------------------------
kmemleak: Handle percpu memory allocation

From: Catalin Marinas <catalin.marinas@arm.com>

This patch adds kmemleak callbacks from the percpu allocator, reducing a
number of false positives caused by kmemleak not scanning such memory
blocks. The percpu chunks are never reported as leaks because of current
kmemleak limitations with the __percpu pointer not pointing directly to
the actual chunks.

Reported-by: Huajun Li <huajun.li.lee@gmail.com>
Cc: Tejun Heo <tj@kernel.org>,
Cc: Christoph Lameter <cl@linux-foundation.org>
Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 Documentation/kmemleak.txt |    3 ++
 include/linux/kmemleak.h   |    8 +++++
 mm/kmemleak.c              |   72 ++++++++++++++++++++++++++++++++++++++++++++
 mm/percpu.c                |   12 +++++++
 4 files changed, 94 insertions(+), 1 deletions(-)

diff --git a/Documentation/kmemleak.txt b/Documentation/kmemleak.txt
index 51063e6..b6e3973 100644
--- a/Documentation/kmemleak.txt
+++ b/Documentation/kmemleak.txt
@@ -127,7 +127,10 @@ See the include/linux/kmemleak.h header for the functions prototype.
 
 kmemleak_init		 - initialize kmemleak
 kmemleak_alloc		 - notify of a memory block allocation
+kmemleak_alloc_percpu	 - notify of a percpu memory block allocation
 kmemleak_free		 - notify of a memory block freeing
+kmemleak_free_part	 - notify of a partial memory block freeing
+kmemleak_free_percpu	 - notify of a percpu memory block freeing
 kmemleak_not_leak	 - mark an object as not a leak
 kmemleak_ignore		 - do not scan or report an object as leak
 kmemleak_scan_area	 - add scan areas inside a memory block
diff --git a/include/linux/kmemleak.h b/include/linux/kmemleak.h
index 99d9a67..2a5e554 100644
--- a/include/linux/kmemleak.h
+++ b/include/linux/kmemleak.h
@@ -26,8 +26,10 @@
 extern void kmemleak_init(void) __ref;
 extern void kmemleak_alloc(const void *ptr, size_t size, int min_count,
 			   gfp_t gfp) __ref;
+extern void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size) __ref;
 extern void kmemleak_free(const void *ptr) __ref;
 extern void kmemleak_free_part(const void *ptr, size_t size) __ref;
+extern void kmemleak_free_percpu(const void __percpu *ptr) __ref;
 extern void kmemleak_padding(const void *ptr, unsigned long offset,
 			     size_t size) __ref;
 extern void kmemleak_not_leak(const void *ptr) __ref;
@@ -68,6 +70,9 @@ static inline void kmemleak_alloc_recursive(const void *ptr, size_t size,
 					    gfp_t gfp)
 {
 }
+static inline void kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
+{
+}
 static inline void kmemleak_free(const void *ptr)
 {
 }
@@ -77,6 +82,9 @@ static inline void kmemleak_free_part(const void *ptr, size_t size)
 static inline void kmemleak_free_recursive(const void *ptr, unsigned long flags)
 {
 }
+static inline void kmemleak_free_percpu(const void __percpu *ptr)
+{
+}
 static inline void kmemleak_not_leak(const void *ptr)
 {
 }
diff --git a/mm/kmemleak.c b/mm/kmemleak.c
index 381b67a..1d2b896 100644
--- a/mm/kmemleak.c
+++ b/mm/kmemleak.c
@@ -230,8 +230,10 @@ static int kmemleak_skip_disable;
 /* kmemleak operation type for early logging */
 enum {
 	KMEMLEAK_ALLOC,
+	KMEMLEAK_ALLOC_PERCPU,
 	KMEMLEAK_FREE,
 	KMEMLEAK_FREE_PART,
+	KMEMLEAK_FREE_PERCPU,
 	KMEMLEAK_NOT_LEAK,
 	KMEMLEAK_IGNORE,
 	KMEMLEAK_SCAN_AREA,
@@ -852,6 +854,20 @@ out:
 	rcu_read_unlock();
 }
 
+/*
+ * Log an early allocated block and populate the stack trace.
+ */
+static void early_alloc_percpu(struct early_log *log)
+{
+	unsigned int cpu;
+	const void __percpu *ptr = log->ptr;
+
+	for_each_possible_cpu(cpu) {
+		log->ptr = per_cpu_ptr(ptr, cpu);
+		early_alloc(log);
+	}
+}
+
 /**
  * kmemleak_alloc - register a newly allocated object
  * @ptr:	pointer to beginning of the object
@@ -879,6 +895,34 @@ void __ref kmemleak_alloc(const void *ptr, size_t size, int min_count,
 EXPORT_SYMBOL_GPL(kmemleak_alloc);
 
 /**
+ * kmemleak_alloc_percpu - register a newly allocated __percpu object
+ * @ptr:	__percpu pointer to beginning of the object
+ * @size:	size of the object
+ *
+ * This function is called from the kernel percpu allocator when a new object
+ * (memory block) is allocated (alloc_percpu). It assumes GFP_KERNEL
+ * allocation.
+ */
+void __ref kmemleak_alloc_percpu(const void __percpu *ptr, size_t size)
+{
+	unsigned int cpu;
+
+	pr_debug("%s(0x%p, %zu)\n", __func__, ptr, size);
+
+	/*
+	 * Percpu allocations are only scanned and not reported as leaks
+	 * (min_count is set to 0).
+	 */
+	if (atomic_read(&kmemleak_enabled) && ptr && !IS_ERR(ptr))
+		for_each_possible_cpu(cpu)
+			create_object((unsigned long)per_cpu_ptr(ptr, cpu),
+				      size, 0, GFP_KERNEL);
+	else if (atomic_read(&kmemleak_early_log))
+		log_early(KMEMLEAK_ALLOC_PERCPU, ptr, size, 0);
+}
+EXPORT_SYMBOL_GPL(kmemleak_alloc_percpu);
+
+/**
  * kmemleak_free - unregister a previously registered object
  * @ptr:	pointer to beginning of the object
  *
@@ -917,6 +961,28 @@ void __ref kmemleak_free_part(const void *ptr, size_t size)
 EXPORT_SYMBOL_GPL(kmemleak_free_part);
 
 /**
+ * kmemleak_free_percpu - unregister a previously registered __percpu object
+ * @ptr:	__percpu pointer to beginning of the object
+ *
+ * This function is called from the kernel percpu allocator when an object
+ * (memory block) is freed (free_percpu).
+ */
+void __ref kmemleak_free_percpu(const void __percpu *ptr)
+{
+	unsigned int cpu;
+
+	pr_debug("%s(0x%p)\n", __func__, ptr);
+
+	if (atomic_read(&kmemleak_enabled) && ptr && !IS_ERR(ptr))
+		for_each_possible_cpu(cpu)
+			delete_object_full((unsigned long)per_cpu_ptr(ptr,
+								      cpu));
+	else if (atomic_read(&kmemleak_early_log))
+		log_early(KMEMLEAK_FREE, ptr, 0, 0);
+}
+EXPORT_SYMBOL_GPL(kmemleak_free_percpu);
+
+/**
  * kmemleak_not_leak - mark an allocated object as false positive
  * @ptr:	pointer to beginning of the object
  *
@@ -1727,12 +1793,18 @@ void __init kmemleak_init(void)
 		case KMEMLEAK_ALLOC:
 			early_alloc(log);
 			break;
+		case KMEMLEAK_ALLOC_PERCPU:
+			early_alloc_percpu(log);
+			break;
 		case KMEMLEAK_FREE:
 			kmemleak_free(log->ptr);
 			break;
 		case KMEMLEAK_FREE_PART:
 			kmemleak_free_part(log->ptr, log->size);
 			break;
+		case KMEMLEAK_FREE_PERCPU:
+			kmemleak_free_percpu(log->ptr);
+			break;
 		case KMEMLEAK_NOT_LEAK:
 			kmemleak_not_leak(log->ptr);
 			break;
diff --git a/mm/percpu.c b/mm/percpu.c
index bf80e55..ce3b6cb 100644
--- a/mm/percpu.c
+++ b/mm/percpu.c
@@ -67,6 +67,7 @@
 #include <linux/spinlock.h>
 #include <linux/vmalloc.h>
 #include <linux/workqueue.h>
+#include <linux/kmemleak.h>
 
 #include <asm/cacheflush.h>
 #include <asm/sections.h>
@@ -709,6 +710,7 @@ static void __percpu *pcpu_alloc(size_t size, size_t align, bool reserved)
 	const char *err;
 	int slot, off, new_alloc;
 	unsigned long flags;
+	void __percpu *ptr;
 
 	if (unlikely(!size || size > PCPU_MIN_UNIT_SIZE || align > PAGE_SIZE)) {
 		WARN(true, "illegal size (%zu) or align (%zu) for "
@@ -801,7 +803,9 @@ area_found:
 	mutex_unlock(&pcpu_alloc_mutex);
 
 	/* return address relative to base address */
-	return __addr_to_pcpu_ptr(chunk->base_addr + off);
+	ptr = __addr_to_pcpu_ptr(chunk->base_addr + off);
+	kmemleak_alloc_percpu(ptr, size);
+	return ptr;
 
 fail_unlock:
 	spin_unlock_irqrestore(&pcpu_lock, flags);
@@ -915,6 +919,8 @@ void free_percpu(void __percpu *ptr)
 	if (!ptr)
 		return;
 
+	kmemleak_free_percpu(ptr);
+
 	addr = __pcpu_ptr_to_addr(ptr);
 
 	spin_lock_irqsave(&pcpu_lock, flags);
@@ -1619,6 +1625,8 @@ int __init pcpu_embed_first_chunk(size_t reserved_size, size_t dyn_size,
 			rc = -ENOMEM;
 			goto out_free_areas;
 		}
+		/* kmemleak tracks the percpu allocations separately */
+		kmemleak_free(ptr);
 		areas[group] = ptr;
 
 		base = min(ptr, base);
@@ -1733,6 +1741,8 @@ int __init pcpu_page_first_chunk(size_t reserved_size,
 					   "for cpu%u\n", psize_str, cpu);
 				goto enomem;
 			}
+			/* kmemleak tracks the percpu allocations separately */
+			kmemleak_free(ptr);
 			pages[j++] = virt_to_page(ptr);
 		}
 

-- 
Catalin


* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-10-03 15:21     ` Catalin Marinas
@ 2011-10-03 16:14       ` Christoph Lameter
  2011-10-04  7:59       ` Tejun Heo
  1 sibling, 0 replies; 22+ messages in thread
From: Christoph Lameter @ 2011-10-03 16:14 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Tejun Heo, linux-kernel, Huajun Li



Looks good to me

Acked-by: Christoph Lameter <cl@gentwo.org>



* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-10-03 15:21     ` Catalin Marinas
  2011-10-03 16:14       ` Christoph Lameter
@ 2011-10-04  7:59       ` Tejun Heo
  2011-10-04  9:04         ` Catalin Marinas
  1 sibling, 1 reply; 22+ messages in thread
From: Tejun Heo @ 2011-10-04  7:59 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Huajun Li, Christoph Lameter

Hello,

On Mon, Oct 03, 2011 at 04:21:26PM +0100, Catalin Marinas wrote:
> The updated patch below adds specific kmemleak API for percpu pointers
> so that it only logs one call per allocation rather than the number of
> possible cpus (could be split in two, though the patch is relatively
> simple):

The percpu part looks fine to me but I don't know how kmemleak works
to judge whether the kmemleak part is okay or not.  This just avoids
false positives from slab and would still require bumping up the early
log memory as # of cpus increases, right?

For percpu part,

  Acked-by: Tejun Heo <tj@kernel.org>

How do you want to route this?  If kmemleak patches get routed through
-mm, please feel free to send it to Andrew with Acked-by's added.

Thank you.

-- 
tejun


* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-10-04  7:59       ` Tejun Heo
@ 2011-10-04  9:04         ` Catalin Marinas
  2011-10-04  9:13           ` Tejun Heo
  0 siblings, 1 reply; 22+ messages in thread
From: Catalin Marinas @ 2011-10-04  9:04 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel, Huajun Li, Christoph Lameter

On Tue, Oct 04, 2011 at 08:59:15AM +0100, Tejun Heo wrote:
> On Mon, Oct 03, 2011 at 04:21:26PM +0100, Catalin Marinas wrote:
> > The updated patch below adds specific kmemleak API for percpu pointers
> > so that it only logs one call per allocation rather than the number of
> > possible cpus (could be split in two, though the patch is relatively
> > simple):
> 
> The percpu part looks fine to me but I don't know how kmemleak works
> to judge whether the kmemleak part is okay or not.  This just avoids
> false positives from slab and would still require bumping up the early
> log memory as # of cpus increases, right?

No, there is only one kmemleak call for each __percpu pointer (to the
specific kmemleak_*_percpu function). The kmemleak expands the percpu
pointer into corresponding blocks for each cpu but the early log only
stores a single call.

> For percpu part,
> 
>   Acked-by: Tejun Heo <tj@kernel.org>

Thanks.

> How do you want to route this?  If kmemleak patches get routed through
> -mm, please feel free to send it to Andrew with Acked-by's added.

I'll push them to -next for now and (depending on how late the merge
window opens) I can send a pull request to Linus.

-- 
Catalin


* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-10-04  9:04         ` Catalin Marinas
@ 2011-10-04  9:13           ` Tejun Heo
  2011-10-04  9:26             ` Catalin Marinas
  0 siblings, 1 reply; 22+ messages in thread
From: Tejun Heo @ 2011-10-04  9:13 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Huajun Li, Christoph Lameter

Hello,

On Tue, Oct 04, 2011 at 10:04:46AM +0100, Catalin Marinas wrote:
> > The percpu part looks fine to me but I don't know how kmemleak works
> > to judge whether the kmemleak part is okay or not.  This just avoids
> > false positives from slab and would still require bumping up the early
> > log memory as # of cpus increases, right?
> 
> No, there is only one kmemleak call for each __percpu pointer (to the
> specific kmemleak_*_percpu function). The kmemleak expands the percpu
> pointer into corresponding blocks for each cpu but the early log only
> stores a single call.

Hmmm... but the following definitely seems O(#PCPU_ALLOCS * #CPUS)?
What am I missing?

+/*
+ * Log an early allocated block and populate the stack trace.
+ */
+static void early_alloc_percpu(struct early_log *log)
+{
+       unsigned int cpu;
+       const void __percpu *ptr = log->ptr;
+
+       for_each_possible_cpu(cpu) {
+               log->ptr = per_cpu_ptr(ptr, cpu);
+               early_alloc(log);
+       }
+}

Thanks.

-- 
tejun


* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-10-04  9:13           ` Tejun Heo
@ 2011-10-04  9:26             ` Catalin Marinas
  2011-10-04 16:59               ` Tejun Heo
  0 siblings, 1 reply; 22+ messages in thread
From: Catalin Marinas @ 2011-10-04  9:26 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel, Huajun Li, Christoph Lameter

On Tue, Oct 04, 2011 at 10:13:14AM +0100, Tejun Heo wrote:
> On Tue, Oct 04, 2011 at 10:04:46AM +0100, Catalin Marinas wrote:
> > > The percpu part looks fine to me but I don't know how kmemleak works
> > > to judge whether the kmemleak part is okay or not.  This just avoids
> > > false positives from slab and would still require bumping up the early
> > > log memory as # of cpus increases, right?
> > 
> > No, there is only one kmemleak call for each __percpu pointer (to the
> > specific kmemleak_*_percpu function). The kmemleak expands the percpu
> > pointer into corresponding blocks for each cpu but the early log only
> > stores a single call.
> 
> Hmmm... but the following definitely seems O(#PCPU_ALLOCS * #CPUS)?
> What am I missing?
> 
> +/*
> + * Log an early allocated block and populate the stack trace.
> + */
> +static void early_alloc_percpu(struct early_log *log)
> +{
> +       unsigned int cpu;
> +       const void __percpu *ptr = log->ptr;
> +
> +       for_each_possible_cpu(cpu) {
> +               log->ptr = per_cpu_ptr(ptr, cpu);
> +               early_alloc(log);
> +       }
> +}

Before kmemleak is initialised we still get memory allocations that
kmemleak stores in an early_log buffer (via the log_early() function
called from kmemleak_alloc_percpu). Later when kmemleak has all the data
structures in place, the kmemleak_init() function goes through the
early_log array and replays the previously recorded requests. The
early_alloc_percpu() function is used during early_log replaying and it
indeed registers every percpu memory block but the early_log is always
O(#PCPU_ALLOCS).

The reason we don't call kmemleak_alloc_percpu() directly during
replaying is that early_alloc() also copies the previously recorded
stack trace into the newly created object (otherwise all early
allocations would be shown as done by kmemleak_init).
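
Roughly, the replay step looks like the sketch below. This is simplified
rather than the exact early_alloc() code; find_and_get_object(),
put_object() and the trace/trace_len fields follow the kmemleak
internals but are only meant as an illustration here:

/*
 * Sketch of the replay step: register the object as a normal allocation,
 * then overwrite its stack trace with the one saved at early-log time so
 * the report points at the original caller instead of kmemleak_init().
 */
static void __init sketch_early_alloc(struct early_log *log)
{
	struct kmemleak_object *object;

	kmemleak_alloc(log->ptr, log->size, log->min_count, GFP_ATOMIC);

	object = find_and_get_object((unsigned long)log->ptr, 1);
	if (object) {
		object->trace_len = log->trace_len;
		memcpy(object->trace, log->trace,
		       log->trace_len * sizeof(*object->trace));
		put_object(object);
	}
}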

Thanks.

-- 
Catalin


* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-10-04  9:26             ` Catalin Marinas
@ 2011-10-04 16:59               ` Tejun Heo
  2011-10-04 17:06                 ` Catalin Marinas
  0 siblings, 1 reply; 22+ messages in thread
From: Tejun Heo @ 2011-10-04 16:59 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Huajun Li, Christoph Lameter

Hello, Catalin.

On Tue, Oct 04, 2011 at 10:26:24AM +0100, Catalin Marinas wrote:
> Before kmemleak is initialised we still get memory allocations that
> kmemleak stores in an early_log buffer (via the log_early() function
> called from kmemleak_alloc_percpu). Later when kmemleak has all the data
> structures in place, the kmemleak_init() function goes through the
> early_log array and replays the previously recorded requests. The
> early_alloc_percpu() function is used during early_log replaying and it
> indeed registers every percpu memory block but the early_log is always
> O(#PCPU_ALLOCS).
> 
> The reason we don't call kmemleak_alloc_percpu() directly during
> replaying is that early_alloc() also copies the previously recorded
> stack trace into the newly created object (otherwise all early
> allocations would be shown as done by kmemleak_init).

Ah, okay, I was misreading the patch.  For some reason, I thought
log_early() was nested inside for_each_possible_cpu().  One other thing,
though: kmemleak_free_percpu() is calling log_early() w/ KMEMLEAK_FREE.
Shouldn't it be KMEMLEAK_FREE_PERCPU?

Thanks.

-- 
tejun


* Re: [PATCH 2/4] kmemleak: Handle percpu memory allocation
  2011-10-04 16:59               ` Tejun Heo
@ 2011-10-04 17:06                 ` Catalin Marinas
  0 siblings, 0 replies; 22+ messages in thread
From: Catalin Marinas @ 2011-10-04 17:06 UTC (permalink / raw)
  To: Tejun Heo; +Cc: linux-kernel, Huajun Li, Christoph Lameter

On Tue, Oct 04, 2011 at 05:59:58PM +0100, Tejun Heo wrote:
> Hello, Catalin.
> 
> On Tue, Oct 04, 2011 at 10:26:24AM +0100, Catalin Marinas wrote:
> > Before kmemleak is initialised we still get memory allocations that
> > kmemleak stores in an early_log buffer (via the log_early() function
> > called from kmemleak_alloc_percpu). Later when kmemleak has all the data
> > structures in place, the kmemleak_init() function goes through the
> > early_log array and replays the previously recorded requests. The
> > early_alloc_percpu() function is used during early_log replaying and it
> > indeed registers every percpu memory block but the early_log is always
> > O(#PCPU_ALLOCS).
> > 
> > The reason we don't call kmemleak_alloc_percpu() directly during
> > replaying is that early_alloc() also copies the previously recorded
> > stack trace into the newly created object (otherwise all early
> > allocations would be shown as done by kmemleak_init).
> 
> Ah, okay, I was misreading the patch.  For some reason, I thought
> log_early() was nested inside for_each_possible_cpu().  One other thing,
> though: kmemleak_free_percpu() is calling log_early() w/ KMEMLEAK_FREE.
> Shouldn't it be KMEMLEAK_FREE_PERCPU?

Good point, I updated the patch.

Thanks.

-- 
Catalin


* Re: [PATCH 4/4] kmemleak: Report previously found leaks even after an error
  2011-09-29 11:02 ` [PATCH 4/4] kmemleak: Report previously found leaks even after an error Catalin Marinas
@ 2011-10-04 17:45   ` Nick Bowler
  2011-10-04 20:50     ` Catalin Marinas
  0 siblings, 1 reply; 22+ messages in thread
From: Nick Bowler @ 2011-10-04 17:45 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel

On 2011-09-29 12:02 +0100, Catalin Marinas wrote:
> If an error fatal to kmemleak (like memory allocation failure) happens,
> kmemleak disables itself but it also removes the access to any
> previously found memory leaks. This patch allows read-only access to the
> kmemleak debugfs interface but disables any other action.
> 
> Repored-by: Nick Bowler <nbowler@elliptictech.com>

Reported-by: ...

> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>

Cool, I'll try this out.  Kmemleak rarely stays alive for more than a
few days on my desktop before shutting itself down due to an allocation
failure, so this should be really handy.

Cheers,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


* Re: [PATCH 4/4] kmemleak: Report previously found leaks even after an error
  2011-10-04 17:45   ` Nick Bowler
@ 2011-10-04 20:50     ` Catalin Marinas
  2011-10-04 21:14       ` Nick Bowler
  0 siblings, 1 reply; 22+ messages in thread
From: Catalin Marinas @ 2011-10-04 20:50 UTC (permalink / raw)
  To: Nick Bowler; +Cc: linux-kernel

On 4 October 2011 18:45, Nick Bowler <nbowler@elliptictech.com> wrote:
> On 2011-09-29 12:02 +0100, Catalin Marinas wrote:
>> If an error fatal to kmemleak (like memory allocation failure) happens,
>> kmemleak disables itself but it also removes the access to any
>> previously found memory leaks. This patch allows read-only access to the
>> kmemleak debugfs interface but disables any other action.
>>
>> Repored-by: Nick Bowler <nbowler@elliptictech.com>
>
> Reported-by: ...

I was wondering why you were not automatically cc'ed.

>> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
>
> Cool, I'll try this out.  Kmemleak rarely stays alive for more than a
> few days on my desktop before shutting itself down due to an allocation
> failure, so this should be really handy.

You can have a look at /proc/slabinfo and meminfo and see if there are
any really big leaks. In general the kmemleak objects number is higher
than the sum of all the other slab objects (but I haven't done any
statistics). Hopefully kmemleak doesn't leak memory :) (I should add
some simple checks though).

-- 
Catalin


* Re: [PATCH 4/4] kmemleak: Report previously found leaks even after an error
  2011-10-04 20:50     ` Catalin Marinas
@ 2011-10-04 21:14       ` Nick Bowler
  2011-10-24 18:57         ` Nick Bowler
  0 siblings, 1 reply; 22+ messages in thread
From: Nick Bowler @ 2011-10-04 21:14 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel

On 2011-10-04 21:50 +0100, Catalin Marinas wrote:
> On 4 October 2011 18:45, Nick Bowler <nbowler@elliptictech.com> wrote:
> > Cool, I'll try this out.  Kmemleak rarely stays alive for more than a
> > few days on my desktop before shutting itself down due to an allocation
> > failure, so this should be really handy.
> 
> You can have a look at /proc/slabinfo and meminfo and see if there are
> any really big leaks. In general the kmemleak objects number is higher
> than the sum of all the other slab objects (but I haven't done any
> statistics). Hopefully kmemleak doesn't leak memory :) (I should add
> some simple checks though).

I don't think the failures that cause kmemleak to shut down in my case
are actually caused by a memory leak.  The machine is rather old and
has only 1G of RAM.  I think kmemleak is just failing to allocate during
a period of real memory pressure.

-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


* Re: [PATCH 4/4] kmemleak: Report previously found leaks even after an error
  2011-10-04 21:14       ` Nick Bowler
@ 2011-10-24 18:57         ` Nick Bowler
  0 siblings, 0 replies; 22+ messages in thread
From: Nick Bowler @ 2011-10-24 18:57 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel

On 2011-10-04 17:14 -0400, Nick Bowler wrote:
> On 2011-10-04 21:50 +0100, Catalin Marinas wrote:
> > On 4 October 2011 18:45, Nick Bowler <nbowler@elliptictech.com> wrote:
> > > Cool, I'll try this out.  Kmemleak rarely stays alive for more than a
> > > few days on my desktop before shutting itself down due to an allocation
> > > failure, so this should be really handy.
> > 
> > You can have a look at /proc/slabinfo and meminfo and see if there are
> > any really big leaks. In general the kmemleak objects number is higher
> > than the sum of all the other slab objects (but I haven't done any
> > statistics). Hopefully kmemleak doesn't leak memory :) (I should add
> > some simple checks though).
> 
> I don't think the failures that cause kmemleak to shut down in my case
> are actually caused by a memory leak.  The machine is rather old and
> has only 1G of RAM.  I think kmemleak is just failing to allocate during
> a period of real memory pressure.

FWIW, I applied this patch on top of 3.0.7 and I can successfully read
the debugfs entry after kmemleak shuts down.

Thanks,
-- 
Nick Bowler, Elliptic Technologies (http://www.elliptictech.com/)


Thread overview: 22+ messages
2011-09-29 11:02 [PATCH 0/4] Kmemleak patches for -next Catalin Marinas
2011-09-29 11:02 ` [PATCH 1/4] kmemleak: Show where early_log issues come from Catalin Marinas
2011-09-29 11:02 ` [PATCH 2/4] kmemleak: Handle percpu memory allocation Catalin Marinas
2011-09-29 11:56   ` Eric Dumazet
2011-09-29 17:17     ` Catalin Marinas
2011-09-29 17:57       ` Christoph Lameter
2011-09-29 19:28   ` Tejun Heo
2011-09-30  8:37     ` Catalin Marinas
2011-10-03 15:21     ` Catalin Marinas
2011-10-03 16:14       ` Christoph Lameter
2011-10-04  7:59       ` Tejun Heo
2011-10-04  9:04         ` Catalin Marinas
2011-10-04  9:13           ` Tejun Heo
2011-10-04  9:26             ` Catalin Marinas
2011-10-04 16:59               ` Tejun Heo
2011-10-04 17:06                 ` Catalin Marinas
2011-09-29 11:02 ` [PATCH 3/4] kmemleak: When the early log buffer is exceeded, report the actual number Catalin Marinas
2011-09-29 11:02 ` [PATCH 4/4] kmemleak: Report previously found leaks even after an error Catalin Marinas
2011-10-04 17:45   ` Nick Bowler
2011-10-04 20:50     ` Catalin Marinas
2011-10-04 21:14       ` Nick Bowler
2011-10-24 18:57         ` Nick Bowler
