All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH 00/15] Kernel memory leak detector
@ 2008-12-10 18:26 Catalin Marinas
  2008-12-10 18:26 ` [PATCH 01/15] kmemleak: Add the base support Catalin Marinas
                   ` (15 more replies)
  0 siblings, 16 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:26 UTC (permalink / raw)
  To: linux-kernel

A new kmemleak version is available. Thanks to all who reviewed the code
and gave feedback. Kmemleak can also be found on this git tree:

git://linux-arm.org/linux-2.6.git kmemleak

Please note that even though I already got the ack from the
slab/slob/slub maintainers, I haven't included the corresponding lines
in the patch description because the patches were slightly modified to
pass the GFP flags to the kmemleak callbacks. I would be grateful if you
review these changes and re-acknowledge them.

Changes since the previous release:

- fatal errors for kmemleak are no longer fatal for the full system.
  Kmemleak can disable and clean-up after itself at run-time if such a
  condition occurs
- re-worked locking in the memleak.c file together with documentation on
  how it works
- kmemleak internal allocations use the GFP flags of the caller and are
  no longer restricted to GFP_ATOMIC
- better (hopefully) comments all over the code
- implemented most of the comments received so far (an important
  omission here is the tracking of alloc_bootmem calls since it looks a
  bit difficult to pair them with free_bootmem, the latter being used
  with reserve_bootmem or without a corresponding alloc_bootmem)

Still to do:

- run-time and boot-time configuration like task stacks scanning,
  disabling kmemleak, enabling/disabling the automatic scanning

Thanks for your comments.


Catalin Marinas (15):
      kmemleak: Add the corresponding MAINTAINERS entry
      kmemleak: Simple testing module for kmemleak
      kmemleak: Keep the __init functions after initialization
      kmemleak: Enable the building of the memory leak detector
      kmemleak: Remove some of the kmemleak false positives
      arm: Provide _sdata and __bss_stop in the vmlinux.lds.S file
      x86: Provide _sdata in the vmlinux_*.lds.S files
      kmemleak: Add modules support
      kmemleak: Add memleak_alloc callback from alloc_large_system_hash
      kmemleak: Add the vmalloc memory allocation/freeing hooks
      kmemleak: Add the slub memory allocation/freeing hooks
      kmemleak: Add the slob memory allocation/freeing hooks
      kmemleak: Add the slab memory allocation/freeing hooks
      kmemleak: Add documentation on the memory leak detector
      kmemleak: Add the base support


 Documentation/kmemleak.txt       |  127 ++++
 MAINTAINERS                      |    6 
 arch/arm/kernel/vmlinux.lds.S    |    2 
 arch/x86/kernel/vmlinux_32.lds.S |    1 
 arch/x86/kernel/vmlinux_64.lds.S |    1 
 drivers/char/vt.c                |    7 
 include/linux/init.h             |    6 
 include/linux/memleak.h          |   93 +++
 include/linux/percpu.h           |    5 
 include/linux/slab.h             |    2 
 init/main.c                      |    4 
 kernel/module.c                  |   56 ++
 lib/Kconfig.debug                |   46 +
 mm/Makefile                      |    2 
 mm/memleak-test.c                |  110 +++
 mm/memleak.c                     | 1263 ++++++++++++++++++++++++++++++++++++++
 mm/page_alloc.c                  |    3 
 mm/slab.c                        |   18 -
 mm/slob.c                        |   15 
 mm/slub.c                        |    5 
 mm/vmalloc.c                     |   29 +
 21 files changed, 1790 insertions(+), 11 deletions(-)
 create mode 100644 Documentation/kmemleak.txt
 create mode 100644 include/linux/memleak.h
 create mode 100644 mm/memleak-test.c
 create mode 100644 mm/memleak.c

-- 
Catalin

^ permalink raw reply	[flat|nested] 59+ messages in thread

* [PATCH 01/15] kmemleak: Add the base support
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
@ 2008-12-10 18:26 ` Catalin Marinas
  2008-12-11 22:01   ` Pekka Enberg
  2008-12-16 19:36   ` Paul E. McKenney
  2008-12-10 18:27 ` [PATCH 02/15] kmemleak: Add documentation on the memory leak detector Catalin Marinas
                   ` (14 subsequent siblings)
  15 siblings, 2 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:26 UTC (permalink / raw)
  To: linux-kernel; +Cc: Paul E. McKenney, Ingo Molnar, Pekka Enberg, Andrew Morton

This patch adds the base support for the kernel memory leak
detector. It traces the memory allocation/freeing in a way similar to
the Boehm's conservative garbage collector, the difference being that
the unreferenced objects are not freed but only shown in
/sys/kernel/debug/memleak. Enabling this feature introduces an
overhead to memory allocations.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
Cc: Andrew Morton <akpm@linux-foundation.org>
Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
---
 include/linux/memleak.h |   93 +++
 init/main.c             |    4 
 mm/memleak.c            | 1263 +++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 1359 insertions(+), 1 deletions(-)
 create mode 100644 include/linux/memleak.h
 create mode 100644 mm/memleak.c

diff --git a/include/linux/memleak.h b/include/linux/memleak.h
new file mode 100644
index 0000000..340b9fc
--- /dev/null
+++ b/include/linux/memleak.h
@@ -0,0 +1,93 @@
+/*
+ * include/linux/memleak.h
+ *
+ * Copyright (C) 2008 ARM Limited
+ * Written by Catalin Marinas <catalin.marinas@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#ifndef __MEMLEAK_H
+#define __MEMLEAK_H
+
+#ifdef CONFIG_DEBUG_MEMLEAK
+
+extern void memleak_init(void);
+extern void memleak_alloc(const void *ptr, size_t size, int min_count,
+			  gfp_t gfp);
+extern void memleak_free(const void *ptr);
+extern void memleak_padding(const void *ptr, unsigned long offset, size_t size);
+extern void memleak_not_leak(const void *ptr);
+extern void memleak_ignore(const void *ptr);
+extern void memleak_scan_area(const void *ptr, unsigned long offset,
+			      size_t length, gfp_t gfp);
+
+static inline void memleak_alloc_recursive(const void *ptr, size_t size,
+					   int min_count, unsigned long flags,
+					   gfp_t gfp)
+{
+	if (!(flags & SLAB_NOLEAKTRACE))
+		memleak_alloc(ptr, size, min_count, gfp);
+}
+
+static inline void memleak_free_recursive(const void *ptr, unsigned long flags)
+{
+	if (!(flags & SLAB_NOLEAKTRACE))
+		memleak_free(ptr);
+}
+
+static inline void memleak_erase(void **ptr)
+{
+	*ptr = NULL;
+}
+
+#else
+
+#define DECLARE_MEMLEAK_OFFSET(name, type, member)
+
+static inline void memleak_init(void)
+{
+}
+static inline void memleak_alloc(const void *ptr, size_t size, int min_count,
+				 gfp_t gfp)
+{
+}
+static inline void memleak_alloc_recursive(const void *ptr, size_t size,
+					   int min_count, unsigned long flags,
+					   gfp_t gfp)
+{
+}
+static inline void memleak_free(const void *ptr)
+{
+}
+static inline void memleak_free_recursive(const void *ptr, unsigned long flags)
+{
+}
+static inline void memleak_not_leak(const void *ptr)
+{
+}
+static inline void memleak_ignore(const void *ptr)
+{
+}
+static inline void memleak_scan_area(const void *ptr, unsigned long offset,
+				     size_t length, gfp_t gfp)
+{
+}
+static inline void memleak_erase(void **ptr)
+{
+}
+
+#endif	/* CONFIG_DEBUG_MEMLEAK */
+
+#endif	/* __MEMLEAK_H */
diff --git a/init/main.c b/init/main.c
index 7e117a2..81cbbb7 100644
--- a/init/main.c
+++ b/init/main.c
@@ -56,6 +56,7 @@
 #include <linux/debug_locks.h>
 #include <linux/debugobjects.h>
 #include <linux/lockdep.h>
+#include <linux/memleak.h>
 #include <linux/pid_namespace.h>
 #include <linux/device.h>
 #include <linux/kthread.h>
@@ -653,6 +654,8 @@ asmlinkage void __init start_kernel(void)
 	enable_debug_pagealloc();
 	cpu_hotplug_init();
 	kmem_cache_init();
+	prio_tree_init();
+	memleak_init();
 	debug_objects_mem_init();
 	idr_init_cache();
 	setup_per_cpu_pageset();
@@ -662,7 +665,6 @@ asmlinkage void __init start_kernel(void)
 	calibrate_delay();
 	pidmap_init();
 	pgtable_cache_init();
-	prio_tree_init();
 	anon_vma_init();
 #ifdef CONFIG_X86
 	if (efi_enabled)
diff --git a/mm/memleak.c b/mm/memleak.c
new file mode 100644
index 0000000..bd84ee0
--- /dev/null
+++ b/mm/memleak.c
@@ -0,0 +1,1263 @@
+/*
+ * mm/memleak.c
+ *
+ * Copyright (C) 2008 ARM Limited
+ * Written by Catalin Marinas <catalin.marinas@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ *
+ *
+ * For more information on the algorithm and kmemleak usage, please see
+ * Documentation/kmemleak.txt.
+ *
+ * Notes on locking
+ * ----------------
+ *
+ * The following locks are used by kmemleak:
+ *
+ * - memleak_lock (rw_lock): protects the object_list modifications and
+ *   accesses to the object_tree_root. The object_list is the main
+ *   list holding the metadata (struct memleak_object) for the allocated
+ *   memory blocks. The object_tree_root is a priority search tree used to
+ *   look-up metadata based on a pointer to the corresponding memory block.
+ *   The memleak_object structures are added to the object_list and
+ *   object_tree_root in the create_object() function called from the
+ *   memleak_alloc() callback and removed in delete_object() called from the
+ *   memleak_free() callback
+ * - memleak_object.lock (spinlock): protects a memleak_object. Accesses to
+ *   the metadata (e.g. count) are protected by this lock. Note that some
+ *   members of this structure may be protected by other means (atomic or
+ *   memleak_lock). This lock is also held when scanning the corresponding
+ *   memory block to avoid the kernel freeing it via the memleak_free()
+ *   callback. This is less heavyweight than holding a global lock like
+ *   memleak_lock during scanning
+ *
+ * The memleak_object structures have a use_count incremented or decremented
+ * using the get_object()/put_object() functions. When the use_count becomes
+ * 0, this count can no longer be incremented and put_object() schedules the
+ * memleak_object freeing via an RCU callback. All calls to the get_object()
+ * function must be protected by rcu_read_lock() to avoid accessing a freed
+ * structure.
+ *
+ * The only mutex used is scan_mutex. This ensures that only one thread may
+ * scan the memory for unreferenced objects at a time. The gray_list contains
+ * the objects which are already referenced or marked as false positives and
+ * need to be scanned. This list is only modified during a scanning episode
+ * when the scan_mutex is held. At the end of a scan, the gray_list is always
+ * empty. Note that the memleak_object.use_count is incremented when an object
+ * is added to the gray_list and therefore cannot be freed.
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/list.h>
+#include <linux/sched.h>
+#include <linux/jiffies.h>
+#include <linux/delay.h>
+#include <linux/module.h>
+#include <linux/kthread.h>
+#include <linux/prio_tree.h>
+#include <linux/gfp.h>
+#include <linux/kallsyms.h>
+#include <linux/debugfs.h>
+#include <linux/seq_file.h>
+#include <linux/cpumask.h>
+#include <linux/spinlock.h>
+#include <linux/mutex.h>
+#include <linux/rcupdate.h>
+#include <linux/stacktrace.h>
+#include <linux/cache.h>
+#include <linux/percpu.h>
+#include <linux/hardirq.h>
+#include <linux/mmzone.h>
+#include <linux/slab.h>
+#include <linux/thread_info.h>
+
+#include <asm/sections.h>
+#include <asm/processor.h>
+#include <asm/atomic.h>
+
+#include <linux/memleak.h>
+
+/*
+ * Kmemleak configuration and common defines.
+ */
+#define MAX_TRACE		16	/* stack trace length */
+#define REPORTS_NR		100	/* maximum number of reported leaks */
+#define MSECS_MIN_AGE		5000	/* minimum object age for reporting */
+#define MSECS_SCAN_YIELD	10	/* CPU yielding period */
+#define SECS_FIRST_SCAN		60	/* delay before the first scan */
+#define SECS_SCAN_PERIOD	600	/* auto scanning period */
+#undef SCAN_TASK_STACKS			/* scan the task kernel stacks */
+#undef REPORT_ORPHAN_FREEING		/* notify when freeing orphan objects */
+
+#define BYTES_PER_POINTER	sizeof(void *)
+
+/* scanning area inside a memory block */
+struct memleak_scan_area {
+	struct hlist_node node;
+	unsigned long offset;
+	size_t length;
+};
+
+/*
+ * Structure holding the metadata for each allocated memory block.
+ * Modifications to such objects should be made while holding the
+ * object->lock. Insertions or deletions from object_list, gray_list or
+ * tree_node are already protected by the corresponding locks or mutex (see
+ * the notes on locking above). These objects are reference-counted
+ * (use_count) and freed using the RCU mechanism.
+ */
+struct memleak_object {
+	spinlock_t lock;
+	unsigned long flags;		/* object status flags */
+	struct list_head object_list;
+	struct list_head gray_list;
+	struct prio_tree_node tree_node;
+	struct rcu_head rcu;		/* object_list lockless traversal */
+	/* object usage count; object freed when use_count == 0 */
+	atomic_t use_count;
+	unsigned long pointer;
+	size_t size;
+	/* minimum number of a pointers found before it is considered leak */
+	int min_count;
+	/* the total number of pointers found pointing to this object */
+	int count;
+	/* memory ranges to be scanned inside an object (empty for all) */
+	struct hlist_head area_list;
+	unsigned long trace[MAX_TRACE];
+	unsigned int trace_len;
+	unsigned long jiffies;		/* creation timestamp */
+	pid_t pid;			/* pid of the current task */
+	char comm[TASK_COMM_LEN];	/* executable name */
+};
+
+/* flag representing the memory block allocation status */
+#define OBJECT_ALLOCATED	(1 << 0)
+/* flag set after the first reporting of an unreference object */
+#define OBJECT_REPORTED		(1 << 1)
+
+/* the list of all allocated objects */
+static LIST_HEAD(object_list);
+/* the list of gray-colored objects (see color_gray comment below) */
+static LIST_HEAD(gray_list);
+/* prio search tree for object boundaries */
+static struct prio_tree_root object_tree_root;
+/* rw_lock protecting the access to object_list and prio_tree_root */
+static DEFINE_RWLOCK(memleak_lock);
+
+/* allocation caches for kmemleak internal data */
+static struct kmem_cache *object_cache;
+static struct kmem_cache *scan_area_cache;
+
+/* set if tracing memory operations is enabled */
+static atomic_t memleak_enabled = ATOMIC_INIT(0);
+/* set in the late_initcall if there were no errors */
+static atomic_t memleak_initialized = ATOMIC_INIT(0);
+/* enables or disables early logging of the memory operations */
+static atomic_t memleak_early_log = ATOMIC_INIT(1);
+/* set if a fata kmemleak error has occurred */
+static atomic_t memleak_error = ATOMIC_INIT(0);
+
+/* minimum and maximum address that may be valid pointers */
+static unsigned long min_addr = ULONG_MAX;
+static unsigned long max_addr;
+
+/* used for yielding the CPU to other tasks during scanning */
+static unsigned long next_scan_yield;
+static struct task_struct *scan_thread;
+static unsigned long jiffies_scan_yield;
+static unsigned long jiffies_min_age;
+static DEFINE_MUTEX(scan_mutex);
+
+/* number of leaks reported (for limitation purposes) */
+static int reported_leaks;
+
+/*
+ * Early object allocation/freeing logging. Kmemleak is initialized after the
+ * kernel allocator. However, both the kernel allocator and kmemleak may
+ * allocate memory blocks which need to be tracked. Kmemleak defines an
+ * arbitrary buffer to hold the allocation/freeing information before it is
+ * fully initialized.
+ */
+
+/* kmemleak operation type for early logging */
+enum {
+	MEMLEAK_ALLOC,
+	MEMLEAK_FREE,
+	MEMLEAK_NOT_LEAK,
+	MEMLEAK_IGNORE,
+	MEMLEAK_SCAN_AREA,
+};
+
+/*
+ * Structure holding the information passed to kmemleak callbacks during the
+ * early logging.
+ */
+struct early_log {
+	int op_type;			/* kmemleak operation type */
+	const void *ptr;		/* allocated/freed memory block */
+	size_t size;			/* memory block size */
+	int min_count;			/* minimum reference count */
+	unsigned long offset;		/* scan area offset */
+	size_t length;			/* scan area length */
+};
+
+/* early logging buffer and current position */
+static struct early_log __initdata early_log[200];
+static int __initdata crt_early_log;
+
+static void memleak_disable(void);
+
+/*
+ * Macro invoked when a serious kmemleak condition occured and cannot be
+ * recovered from. Kmemleak will be disabled and further allocation/freeing
+ * tracing no longer available.
+ */
+#define memleak_panic(x...) {	\
+	pr_warning(x);		\
+	memleak_disable();	\
+}
+
+/*
+ * Object colors, encoded with count and min_count:
+ * - white - orphan object, not enough references to it (count < min_count)
+ * - gray  - not orphan, marked as false positive (min_count == 0) or
+ *		sufficient references to it (count >= min_count)
+ * - black - ignore, it doesn't contain references (e.g. text section)
+ *		(min_count == -1). No function defined for this color.
+ * Newly created objects don't have any color assigned (object->count == -1)
+ * before the next memory scan when they become white.
+ */
+static int color_white(const struct memleak_object *object)
+{
+	return object->count != -1 && object->count < object->min_count;
+}
+
+static int color_gray(const struct memleak_object *object)
+{
+	return object->min_count != -1 && object->count >= object->min_count;
+}
+
+/*
+ * Objects are considered unreferenced only if their color is white, they have
+ * not be deleted and have a minimum age to avoid false positives caused by
+ * pointers temporarily stored in CPU registers.
+ */
+static int unreferenced_object(struct memleak_object *object)
+{
+	if (color_white(object) &&
+	    (object->flags & OBJECT_ALLOCATED) &&
+	    time_is_before_eq_jiffies(object->jiffies + jiffies_min_age))
+		return 1;
+	else
+		return 0;
+}
+
+/*
+ * Printing of the unreferenced objects information, either to the seq file
+ * or to the kernel log. The print_unreferenced() function must be called with
+ * the object->lock held.
+ */
+#define print_helper(seq, x...)			\
+do {						\
+	if (seq)				\
+		seq_printf(seq, x);		\
+	else					\
+		pr_info(x);			\
+} while (0)
+
+static void print_unreferenced(struct seq_file *seq,
+			       struct memleak_object *object)
+{
+	char namebuf[KSYM_NAME_LEN + 1] = "";
+	char *modname;
+	unsigned long symsize;
+	int i;
+
+	print_helper(seq, "unreferenced object 0x%08lx (size %zu):\n",
+		     object->pointer, object->size);
+	print_helper(seq, "  comm \"%s\", pid %d, jiffies %lu\n",
+		     object->comm, object->pid, object->jiffies);
+	print_helper(seq, "  backtrace:\n");
+
+	for (i = 0; i < object->trace_len; i++) {
+		unsigned long trace = object->trace[i];
+		unsigned long offset = 0;
+
+		kallsyms_lookup(trace, &symsize, &offset, &modname, namebuf);
+		print_helper(seq, "    [<%08lx>] %s\n", trace, namebuf);
+	}
+}
+
+/*
+ * Print the memleak_object information. This function is used mainly for
+ * debugging special cases when kmemleak operations. It must be called with
+ * the object->lock held.
+ */
+static void dump_object_info(struct memleak_object *object)
+{
+	struct stack_trace trace;
+
+	trace.nr_entries = object->trace_len;
+	trace.entries = object->trace;
+
+	pr_notice("kmemleak: Object 0x%08lx (size %zu):\n",
+		  object->tree_node.start, object->size);
+	pr_notice("  comm \"%s\", pid %d, jiffies %lu\n",
+		  object->comm, object->pid, object->jiffies);
+	pr_notice("  min_count = %d\n", object->min_count);
+	pr_notice("  count = %d\n", object->count);
+	pr_notice("  backtrace:\n");
+	print_stack_trace(&trace, 4);
+}
+
+/*
+ * Look-up a memory block metadata (memleak_object) in the priority search
+ * tree based on a pointer value. If alias is 0, only values pointing to the
+ * beginning of the memory block are allowed. The memleak_lock must be held
+ * when calling this function.
+ */
+static struct memleak_object *lookup_object(unsigned long ptr, int alias)
+{
+	struct prio_tree_node *node;
+	struct prio_tree_iter iter;
+	struct memleak_object *object;
+
+	prio_tree_iter_init(&iter, &object_tree_root, ptr, ptr);
+	node = prio_tree_next(&iter);
+	if (node) {
+		object = prio_tree_entry(node, struct memleak_object,
+					 tree_node);
+		if (!alias && object->pointer != ptr) {
+			pr_warning("kmemleak: Found object by alias");
+			object = NULL;
+		}
+	} else
+		object = NULL;
+
+	return object;
+}
+
+/*
+ * Increment the object use_count. Return 1 if successful or 0 otherwise. Note
+ * that once an object's use_count reached 0, the RCU freeing was already
+ * registered and the object should no longer be used. This function must be
+ * called under the protection of rcu_read_lock().
+ */
+static int get_object(struct memleak_object *object)
+{
+	return atomic_inc_not_zero(&object->use_count);
+}
+
+/*
+ * RCU callback to free a memleak_object.
+ */
+static void free_object_rcu(struct rcu_head *rcu)
+{
+	struct hlist_node *elem, *tmp;
+	struct memleak_scan_area *area;
+	struct memleak_object *object =
+		container_of(rcu, struct memleak_object, rcu);
+
+	/*
+	 * Once use_count is 0 (guaranteed by put_object), there is no other
+	 * code accessing this object, hence no need for locking.
+	 */
+	hlist_for_each_entry_safe(area, elem, tmp, &object->area_list, node) {
+		hlist_del(elem);
+		kmem_cache_free(scan_area_cache, area);
+	}
+	kmem_cache_free(object_cache, object);
+}
+
+/*
+ * Decrement the object use_count. Once the count is 0, free the object using
+ * an RCU callback. Since put_object() may be called via the memleak_free() ->
+ * delete_object() path, the delayed RCU freeing ensures that there is no
+ * recursive call to the kernel allocator. Lock-less RCU object_list traversal
+ * is also possible.
+ */
+static void put_object(struct memleak_object *object)
+{
+	if (!atomic_dec_and_test(&object->use_count))
+		return;
+
+	/* should only get here after delete_object was called */
+	BUG_ON(object->flags & OBJECT_ALLOCATED);
+
+	call_rcu(&object->rcu, free_object_rcu);
+}
+
+/*
+ * Look up an object in the prio search tree and increase its use_count.
+ */
+static struct memleak_object *find_and_get_object(unsigned long ptr, int alias)
+{
+	unsigned long flags;
+	struct memleak_object *object = NULL;
+
+	rcu_read_lock();
+	read_lock_irqsave(&memleak_lock, flags);
+	if (ptr >= min_addr && ptr < max_addr)
+		object = lookup_object(ptr, alias);
+	read_unlock_irqrestore(&memleak_lock, flags);
+
+	/* check whether the object is still available */
+	if (object && !get_object(object))
+		object = NULL;
+	rcu_read_unlock();
+
+	return object;
+}
+
+/*
+ * Create the metadata (struct memleak_object) corresponding to an allocated
+ * memory block and add it to the object_list and object_tree_root.
+ */
+static void create_object(unsigned long ptr, size_t size, int min_count,
+			  gfp_t gfp)
+{
+	unsigned long flags;
+	struct memleak_object *object;
+	struct prio_tree_node *node;
+	struct stack_trace trace;
+
+	object = kmem_cache_alloc(object_cache, gfp);
+	if (!object)
+		memleak_panic("kmemleak: Cannot allocate a memleak_object "
+			      "structure\n");
+
+	INIT_LIST_HEAD(&object->object_list);
+	INIT_LIST_HEAD(&object->gray_list);
+	INIT_HLIST_HEAD(&object->area_list);
+	spin_lock_init(&object->lock);
+	atomic_set(&object->use_count, 1);
+	object->flags = OBJECT_ALLOCATED;
+	object->pointer = ptr;
+	object->size = size;
+	object->min_count = min_count;
+	object->count = -1;			/* no color initially */
+	object->jiffies = jiffies;
+
+	/* task information */
+	if (in_irq()) {
+		object->pid = 0;
+		strncpy(object->comm, "hardirq", TASK_COMM_LEN);
+	} else if (in_softirq()) {
+		object->pid = 0;
+		strncpy(object->comm, "softirq", TASK_COMM_LEN);
+	} else {
+		object->pid = current->pid;
+		get_task_comm(object->comm, current);
+	}
+
+	/* kernel backtrace */
+	trace.max_entries = MAX_TRACE;
+	trace.nr_entries = 0;
+	trace.entries = object->trace;
+	trace.skip = 1;
+	save_stack_trace(&trace);
+	object->trace_len = trace.nr_entries;
+
+	INIT_PRIO_TREE_NODE(&object->tree_node);
+	object->tree_node.start = ptr;
+	object->tree_node.last = ptr + size - 1;
+
+	write_lock_irqsave(&memleak_lock, flags);
+	min_addr = min(min_addr, ptr);
+	max_addr = max(max_addr, ptr + size);
+	node = prio_tree_insert(&object_tree_root, &object->tree_node);
+	/*
+	 * The code calling the kernel does not yet have the pointer to the
+	 * memory block to be able to free it.  However, we still hold the
+	 * memleak_lock here in case parts of the kernel started freeing
+	 * random memory blocks.
+	 */
+	if (node != &object->tree_node) {
+		unsigned long flags;
+
+		pr_warning("kmemleak: Existing pointer\n");
+		dump_stack();
+
+		object = lookup_object(ptr, 1);
+		spin_lock_irqsave(&object->lock, flags);
+		dump_object_info(object);
+		spin_unlock_irqrestore(&object->lock, flags);
+
+		memleak_panic("kmemleak: Cannot insert 0x%lx into the object "
+			      "search tree\n", ptr);
+	}
+	list_add_tail_rcu(&object->object_list, &object_list);
+	write_unlock_irqrestore(&memleak_lock, flags);
+}
+
+/*
+ * Remove the metadata (struct memleak_object) for a memory block from the
+ * object_list and object_tree_root and decrement its use_count.
+ */
+static void delete_object(unsigned long ptr)
+{
+	unsigned long flags;
+	struct memleak_object *object;
+
+	write_lock_irqsave(&memleak_lock, flags);
+	object = lookup_object(ptr, 0);
+	if (!object) {
+		pr_warning("kmemleak: Freeing unknown object at 0x%08lx\n",
+			   ptr);
+		dump_stack();
+		write_unlock_irqrestore(&memleak_lock, flags);
+		return;
+	}
+	prio_tree_remove(&object_tree_root, &object->tree_node);
+	list_del_rcu(&object->object_list);
+	write_unlock_irqrestore(&memleak_lock, flags);
+
+	BUG_ON(!(object->flags & OBJECT_ALLOCATED));
+	BUG_ON(atomic_read(&object->use_count) < 1);
+
+	/*
+	 * Locking here also ensures that the corresponding memory block
+	 * cannot be freed when it is being scanned.
+	 */
+	spin_lock_irqsave(&object->lock, flags);
+	object->flags &= ~OBJECT_ALLOCATED;
+#ifdef REPORT_ORPHAN_FREEING
+	if (color_white(object)) {
+		pr_warning("kmemleak: Freeing orphan object 0x%08lx\n", ptr);
+		dump_stack();
+		dump_object_info(object);
+	}
+#endif
+	spin_unlock_irqrestore(&object->lock, flags);
+	put_object(object);
+}
+
+/*
+ * Make a object permanently as gray-colored so that it can no longer be
+ * reported as a leak. This is used in general to mark a false positive.
+ */
+static void make_gray_object(unsigned long ptr)
+{
+	unsigned long flags;
+	struct memleak_object *object;
+
+	object = find_and_get_object(ptr, 0);
+	if (!object) {
+		dump_stack();
+		memleak_panic("kmemleak: Graying unknown object at 0x%08lx\n",
+			      ptr);
+	}
+
+	spin_lock_irqsave(&object->lock, flags);
+	object->min_count = 0;
+	spin_unlock_irqrestore(&object->lock, flags);
+	put_object(object);
+}
+
+/*
+ * Mark the object as black-colored so that it is ignored from scans and
+ * reporting.
+ */
+static void make_black_object(unsigned long ptr)
+{
+	unsigned long flags;
+	struct memleak_object *object;
+
+	object = find_and_get_object(ptr, 0);
+	if (!object) {
+		dump_stack();
+		memleak_panic("kmemleak: Blacking unknown object at 0x%08lx\n",
+			      ptr);
+	}
+
+	spin_lock_irqsave(&object->lock, flags);
+	object->min_count = -1;
+	spin_unlock_irqrestore(&object->lock, flags);
+	put_object(object);
+}
+
+/*
+ * Add a scanning area to the object. If at least one such area is added,
+ * kmemleak will only scan these ranges rather than the whole memory block.
+ */
+static void add_scan_area(unsigned long ptr, unsigned long offset,
+			  size_t length, gfp_t gfp)
+{
+	unsigned long flags;
+	struct memleak_object *object;
+	struct memleak_scan_area *area;
+
+	object = find_and_get_object(ptr, 0);
+	if (!object) {
+		dump_stack();
+		memleak_panic("kmemleak: Adding scan area to unknown "
+			      "object at 0x%08lx\n", ptr);
+	}
+
+	area = kmem_cache_alloc(scan_area_cache, gfp);
+	if (!area)
+		memleak_panic("kmemleak: Cannot allocate a scan area\n");
+
+	spin_lock_irqsave(&object->lock, flags);
+	if (offset + length > object->size) {
+		dump_stack();
+		dump_object_info(object);
+		memleak_panic("kmemleak: Scan area larger than object "
+			      "0x%08lx\n", ptr);
+	}
+
+	INIT_HLIST_NODE(&area->node);
+	area->offset = offset;
+	area->length = length;
+
+	hlist_add_head(&area->node, &object->area_list);
+	spin_unlock_irqrestore(&object->lock, flags);
+	put_object(object);
+}
+
+/*
+ * Log an early memleak_* call to the early_log buffer. These calls will be
+ * processed later once kmemleak is fully initialized.
+ */
+static void __init log_early(int op_type, const void *ptr, size_t size,
+			     int min_count,
+			     unsigned long offset, size_t length)
+{
+	unsigned long flags;
+	struct early_log *log;
+
+	if (crt_early_log >= ARRAY_SIZE(early_log))
+		memleak_panic("kmemleak: Early log buffer exceeded\n");
+
+	/*
+	 * There is no need for locking since the kernel is still in UP mode
+	 * at this stage. Disabling the IRQs is enough.
+	 */
+	local_irq_save(flags);
+	log = &early_log[crt_early_log];
+	log->op_type = op_type;
+	log->ptr = ptr;
+	log->size = size;
+	log->min_count = min_count;
+	log->offset = offset;
+	log->length = length;
+	crt_early_log++;
+	local_irq_restore(flags);
+}
+
+/*
+ * Memory allocation function callback. This function is called from the
+ * kernel allocators when a new block is allocated (kmem_cache_alloc, kmalloc,
+ * vmalloc etc.).
+ */
+void memleak_alloc(const void *ptr, size_t size, int min_count, gfp_t gfp)
+{
+	pr_debug("%s(0x%p, %zu, %d)\n", __func__, ptr, size, min_count);
+
+	if (atomic_read(&memleak_enabled) && ptr)
+		create_object((unsigned long)ptr, size, min_count, gfp);
+	else if (atomic_read(&memleak_early_log))
+		log_early(MEMLEAK_ALLOC, ptr, size, min_count, 0, 0);
+}
+EXPORT_SYMBOL_GPL(memleak_alloc);
+
+/*
+ * Memory freeing function callback. This function is called from the kernel
+ * allocators when a block is freed (kmem_cache_free, kfree, vfree etc.).
+ */
+void memleak_free(const void *ptr)
+{
+	pr_debug("%s(0x%p)\n", __func__, ptr);
+
+	if (atomic_read(&memleak_enabled) && ptr)
+		delete_object((unsigned long)ptr);
+	else if (atomic_read(&memleak_early_log))
+		log_early(MEMLEAK_FREE, ptr, 0, 0, 0, 0);
+}
+EXPORT_SYMBOL_GPL(memleak_free);
+
+/*
+ * Mark an already allocated memory block as a false positive. This will cause
+ * the block to no longer be reported as leak and always be scanned.
+ */
+void memleak_not_leak(const void *ptr)
+{
+	pr_debug("%s(0x%p)\n", __func__, ptr);
+
+	if (atomic_read(&memleak_enabled) && ptr)
+		make_gray_object((unsigned long)ptr);
+	else if (atomic_read(&memleak_early_log))
+		log_early(MEMLEAK_NOT_LEAK, ptr, 0, 0, 0, 0);
+}
+EXPORT_SYMBOL(memleak_not_leak);
+
+/*
+ * Ignore a memory block. This is usually done when it is known that the
+ * corresponding block is not a leak and does not contain any references to
+ * other allocated memory blocks.
+ */
+void memleak_ignore(const void *ptr)
+{
+	pr_debug("%s(0x%p)\n", __func__, ptr);
+
+	if (atomic_read(&memleak_enabled) && ptr)
+		make_black_object((unsigned long)ptr);
+	else if (atomic_read(&memleak_early_log))
+		log_early(MEMLEAK_IGNORE, ptr, 0, 0, 0, 0);
+}
+EXPORT_SYMBOL(memleak_ignore);
+
+/*
+ * Limit the range to be scanned in an allocated memory block.
+ */
+void memleak_scan_area(const void *ptr, unsigned long offset, size_t length,
+		       gfp_t gfp)
+{
+	pr_debug("%s(0x%p)\n", __func__, ptr);
+
+	if (atomic_read(&memleak_enabled) && ptr)
+		add_scan_area((unsigned long)ptr, offset, length, gfp);
+	else if (atomic_read(&memleak_early_log))
+		log_early(MEMLEAK_SCAN_AREA, ptr, 0, 0, offset, length);
+}
+EXPORT_SYMBOL(memleak_scan_area);
+
+/*
+ * Yield the CPU so that other tasks get a chance to run.  The yielding is
+ * rate-limited to avoid excessive number of calls to the schedule() function
+ * during memory scanning.
+ */
+static void scan_yield(void)
+{
+	might_sleep();
+
+	if (time_is_before_eq_jiffies(next_scan_yield)) {
+		schedule();
+		next_scan_yield = jiffies + jiffies_scan_yield;
+	}
+}
+
+/*
+ * Memory scanning is a long process and it needs to be interruptable. This
+ * function checks whether such interrupt condition occured.
+ */
+static int scan_should_stop(void)
+{
+	if (!atomic_read(&memleak_enabled))
+		return 1;
+	/*
+	 * This function may be called from either process or kthread context,
+	 * hence the need to check for both stop conditions.
+	 */
+	if ((current->mm && signal_pending(current)) ||
+	    (!current->mm && kthread_should_stop()))
+		return 1;
+	return 0;
+}
+
+/*
+ * Scan a memory block (exclusive range) for valid pointers and add those
+ * found to the gray list.
+ */
+static void scan_block(void *_start, void *_end, struct memleak_object *scanned)
+{
+	unsigned long *ptr;
+	unsigned long *start = PTR_ALIGN(_start, BYTES_PER_POINTER);
+	unsigned long *end = _end - (BYTES_PER_POINTER - 1);
+
+	for (ptr = start; ptr < end; ptr++) {
+		unsigned long flags;
+		unsigned long pointer = *ptr;
+		struct memleak_object *object;
+
+		if (scan_should_stop())
+			break;
+
+		/*
+		 * When scanning a memory block with a corresponding
+		 * memleak_object, the CPU yielding is handled in the calling
+		 * code since it holds the object->lock to avoid the block
+		 * freeing.
+		 */
+		if (!scanned)
+			scan_yield();
+
+		object = find_and_get_object(pointer, 1);
+		if (!object)
+			continue;
+		if (object == scanned) {
+			/* self referenced, ignore */
+			put_object(object);
+			continue;
+		}
+
+		/*
+		 * Avoid the lockdep recursive warning on object->lock being
+		 * previously acquired in scan_object(). These locks are
+		 * enclosed by scan_mutex.
+		 */
+		spin_lock_irqsave_nested(&object->lock, flags,
+					 SINGLE_DEPTH_NESTING);
+		if (!color_white(object)) {
+			/* non-orphan, ignored or new */
+			spin_unlock_irqrestore(&object->lock, flags);
+			put_object(object);
+			continue;
+		}
+
+		/*
+		 * Increase the object's reference count (number of pointers
+		 * to the memory block). If this count reaches the required
+		 * minimum, the object's color will become gray and it will be
+		 * added to the gray_list.
+		 */
+		object->count++;
+		if (color_gray(object))
+			list_add_tail(&object->gray_list, &gray_list);
+		else
+			put_object(object);
+		spin_unlock_irqrestore(&object->lock, flags);
+	}
+}
+
+/*
+ * Scan a memory block corresponding to a memleak_object. A condition is
+ * that object->use_count >= 1.
+ */
+static void scan_object(struct memleak_object *object)
+{
+	struct memleak_scan_area *area;
+	struct hlist_node *elem;
+	unsigned long flags;
+
+	/*
+	 * Once the object->lock is aquired, the corresponding memory block
+	 * cannot be freed (the same lock is aquired in delete_object).
+	 */
+	spin_lock_irqsave(&object->lock, flags);
+	if (!(object->flags & OBJECT_ALLOCATED))
+		/* already freed object */
+		goto out;
+	if (hlist_empty(&object->area_list))
+		scan_block((void *)object->pointer,
+			   (void *)(object->pointer + object->size), object);
+	else
+		hlist_for_each_entry(area, elem, &object->area_list, node)
+			scan_block((void *)(object->pointer + area->offset),
+				   (void *)(object->pointer + area->offset
+					    + area->length), object);
+ out:
+	spin_unlock_irqrestore(&object->lock, flags);
+}
+
+/*
+ * Scan data sections and all the referenced memory blocks allocated via the
+ * kernel's standard allocators. This function must be called with the
+ * scan_mutex held.
+ */
+static void memleak_scan(void)
+{
+	unsigned long flags;
+	struct memleak_object *object, *tmp;
+#ifdef CONFIG_SMP
+	int i;
+#endif
+#ifdef SCAN_TASK_STACKS
+	struct task_struct *task;
+#endif
+
+	/* prepare the memleak_object's */
+	rcu_read_lock();
+	list_for_each_entry_rcu(object, &object_list, object_list) {
+		spin_lock_irqsave(&object->lock, flags);
+#ifdef DEBUG
+		/*
+		 * With a few exceptions there should be a maximum of
+		 * 1 reference to any object at this point.
+		 */
+		if (atomic_read(&object->use_count) > 1) {
+			pr_debug("kmemleak: object->use_count = %d\n",
+				 atomic_read(&object->use_count));
+			dump_object_info(object);
+		}
+#endif
+		/* reset the reference count (whiten the object) */
+		object->count = 0;
+		if (color_gray(object) && get_object(object))
+			list_add_tail(&object->gray_list, &gray_list);
+
+		spin_unlock_irqrestore(&object->lock, flags);
+	}
+	rcu_read_unlock();
+
+	/* data/bss scanning */
+	scan_block(_sdata, _edata, NULL);
+	scan_block(__bss_start, __bss_stop, NULL);
+
+#ifdef CONFIG_SMP
+	/* per-cpu sections scanning */
+	for_each_possible_cpu(i)
+		scan_block(__per_cpu_start + per_cpu_offset(i),
+			   __per_cpu_end + per_cpu_offset(i), NULL);
+#endif
+
+#ifdef SCAN_TASK_STACKS
+	/*
+	 * Scanning the task stacks may introduce false negatives and it is
+	 * not enabled by default.
+	 */
+	read_lock(&tasklist_lock);
+	for_each_process(task)
+		scan_block(task_stack_page(task),
+			   task_stack_page(task) + THREAD_SIZE, NULL);
+	read_unlock(&tasklist_lock);
+#endif
+
+	/*
+	 * Scan the objects already referenced from the sections scanned
+	 * above. More objects will be referenced and, if there are no memory
+	 * leaks, all the objects will be scanned. The list traversal is safe
+	 * for both tail additions and removals from inside the loop. The
+	 * memleak objects cannot be freed from outside the loop because their
+	 * use_count was increased.
+	 */
+	object = list_entry(gray_list.next, typeof(*object), gray_list);
+	while (&object->gray_list != &gray_list) {
+		scan_yield();
+
+		/* may add new objects to the list */
+		if (!scan_should_stop())
+			scan_object(object);
+
+		tmp = list_entry(object->gray_list.next, typeof(*object),
+				 gray_list);
+
+		/* remove the object from the list and release it */
+		list_del(&object->gray_list);
+		put_object(object);
+
+		object = tmp;
+	}
+	BUG_ON(!list_empty(&gray_list));
+}
+
+/*
+ * Iterate over the object_list and return the first valid object at or after
+ * the required position with its use_count incremented. The function triggers
+ * a memory scanning when the pos argument points to the first position.
+ */
+static void *memleak_seq_start(struct seq_file *seq, loff_t *pos)
+{
+	struct memleak_object *object;
+	loff_t n = *pos;
+
+	if (!atomic_read(&memleak_enabled)) {
+		seq_printf(seq, "Kernel memory leak detector disabled\n");
+		return ERR_PTR(-EBUSY);
+	}
+	if (!n) {
+		memleak_scan();
+		reported_leaks = 0;
+	}
+	if (reported_leaks >= REPORTS_NR)
+		return NULL;
+
+	rcu_read_lock();
+	list_for_each_entry_rcu(object, &object_list, object_list) {
+		if (n-- > 0)
+			continue;
+		if (get_object(object))
+			goto out;
+	}
+	object = NULL;
+ out:
+	rcu_read_unlock();
+	return object;
+}
+
+/*
+ * Return the next object in the object_list. The function decrements the
+ * use_count of the previous object and increases that of the next one.
+ */
+static void *memleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
+{
+	struct memleak_object *prev_obj = v;
+	struct memleak_object *next_obj = NULL;
+	struct list_head *n = &prev_obj->object_list;
+
+	++(*pos);
+	if (reported_leaks >= REPORTS_NR)
+		goto out;
+
+	rcu_read_lock();
+	list_for_each_continue_rcu(n, &object_list) {
+		next_obj = list_entry(n, struct memleak_object, object_list);
+		if (get_object(next_obj))
+			break;
+	}
+	rcu_read_unlock();
+ out:
+	put_object(prev_obj);
+	return next_obj;
+}
+
+/*
+ * Decrement the use_count of the last object required, if any.
+ */
+static void memleak_seq_stop(struct seq_file *seq, void *v)
+{
+	if (v)
+		put_object(v);
+}
+
+/*
+ * Print the information for an unreferenced object to the seq file.
+ */
+static int memleak_seq_show(struct seq_file *seq, void *v)
+{
+	struct memleak_object *object = v;
+	unsigned long flags;
+
+	spin_lock_irqsave(&object->lock, flags);
+	if (!unreferenced_object(object))
+		goto out;
+	print_unreferenced(seq, object);
+	reported_leaks++;
+out:
+	spin_unlock_irqrestore(&object->lock, flags);
+	return 0;
+}
+
+static const struct seq_operations memleak_seq_ops = {
+	.start = memleak_seq_start,
+	.next  = memleak_seq_next,
+	.stop  = memleak_seq_stop,
+	.show  = memleak_seq_show,
+};
+
+static int memleak_seq_open(struct inode *inode, struct file *file)
+{
+	int ret = mutex_lock_interruptible(&scan_mutex);
+	if (ret < 0)
+		return ret;
+	ret = seq_open(file, &memleak_seq_ops);
+	if (ret < 0)
+		mutex_unlock(&scan_mutex);
+	return ret;
+}
+
+static int memleak_seq_release(struct inode *inode, struct file *file)
+{
+	int ret = seq_release(inode, file);
+	mutex_unlock(&scan_mutex);
+	return ret;
+}
+
+static const struct file_operations memleak_fops = {
+	.owner	 = THIS_MODULE,
+	.open    = memleak_seq_open,
+	.read    = seq_read,
+	.llseek  = seq_lseek,
+	.release = memleak_seq_release,
+};
+
+/*
+ * Thread function performing automatic memory scanning. Unreferenced objects
+ * at the end of a memory scan are reported but only the first time.
+ */
+static int memleak_scan_thread(void *arg)
+{
+	/*
+	 * Wait before the first scan to allow the system to fully initialize.
+	 */
+	ssleep(SECS_FIRST_SCAN);
+
+	while (!kthread_should_stop()) {
+		struct memleak_object *object;
+		int ret;
+
+		ret = mutex_lock_interruptible(&scan_mutex);
+		if (ret < 0)
+			continue;
+
+		memleak_scan();
+		reported_leaks = 0;
+
+		rcu_read_lock();
+		list_for_each_entry_rcu(object, &object_list, object_list) {
+			unsigned long flags;
+
+			if (reported_leaks >= REPORTS_NR)
+				break;
+			spin_lock_irqsave(&object->lock, flags);
+			if (!(object->flags & OBJECT_REPORTED) &&
+			    unreferenced_object(object)) {
+				print_unreferenced(NULL, object);
+				object->flags |= OBJECT_REPORTED;
+				reported_leaks++;
+			}
+			spin_unlock_irqrestore(&object->lock, flags);
+		}
+		rcu_read_unlock();
+
+		mutex_unlock(&scan_mutex);
+		/* sleep before the next scan */
+		ssleep(SECS_SCAN_PERIOD);
+	}
+
+	return 0;
+}
+
+/*
+ * Perform the freeing of the kmemleak internal objects after waiting for any
+ * current memory scan to complete.
+ */
+static int memleak_cleanup_thread(void *arg)
+{
+	struct memleak_object *object;
+
+	mutex_lock(&scan_mutex);
+	rcu_read_lock();
+	list_for_each_entry_rcu(object, &object_list, object_list)
+		delete_object(object->pointer);
+	rcu_read_unlock();
+	mutex_unlock(&scan_mutex);
+
+	return 0;
+}
+
+/*
+ * Start the clean-up thread.
+ */
+static void memleak_cleanup(void)
+{
+	struct task_struct *cleanup_thread;
+
+	cleanup_thread = kthread_run(memleak_cleanup_thread, NULL,
+				     "kmemleak-cleanup");
+	if (IS_ERR(cleanup_thread))
+		pr_warning("kmemleak: Failed to create the clean-up thread\n");
+}
+
+/*
+ * Disable kmemleak. No memory allocation/freeing will be traced once this
+ * function is called. Disabling kmemleak is an irreversible operation.
+ */
+static void memleak_disable(void)
+{
+	if (atomic_cmpxchg(&memleak_error, 0, 1))
+		return;
+
+	/* stop any memory operation tracing */
+	atomic_set(&memleak_early_log, 0);
+	atomic_set(&memleak_enabled, 0);
+
+	/* check whether it is too early for a kernel thread */
+	if (atomic_read(&memleak_initialized))
+		memleak_cleanup();
+
+	pr_info("Kernel memory leak detector disabled\n");
+}
+
+/*
+ * Kmemleak initialization.
+ */
+void __init memleak_init(void)
+{
+	int i;
+	unsigned long flags;
+
+	jiffies_scan_yield = msecs_to_jiffies(MSECS_SCAN_YIELD);
+	jiffies_min_age = msecs_to_jiffies(MSECS_MIN_AGE);
+
+	object_cache = KMEM_CACHE(memleak_object, SLAB_NOLEAKTRACE);
+	scan_area_cache = KMEM_CACHE(memleak_scan_area, SLAB_NOLEAKTRACE);
+	INIT_PRIO_TREE_ROOT(&object_tree_root);
+
+	/* the kernel is still in UP mode, so disabling the IRQs is enough */
+	local_irq_save(flags);
+	if (!atomic_read(&memleak_error)) {
+		atomic_set(&memleak_enabled, 1);
+		atomic_set(&memleak_early_log, 0);
+	}
+	local_irq_restore(flags);
+
+	/*
+	 * This is the point where tracking allocations is safe. Automatic
+	 * scanning is started during the late initcall. Add the early logged
+	 * callbacks to the kmemleak infrastructure.
+	 */
+	for (i = 0; i < crt_early_log; i++) {
+		struct early_log *log = &early_log[i];
+
+		switch (log->op_type) {
+		case MEMLEAK_ALLOC:
+			memleak_alloc(log->ptr, log->size, log->min_count,
+				      GFP_ATOMIC);
+			break;
+		case MEMLEAK_FREE:
+			memleak_free(log->ptr);
+			break;
+		case MEMLEAK_NOT_LEAK:
+			memleak_not_leak(log->ptr);
+			break;
+		case MEMLEAK_IGNORE:
+			memleak_ignore(log->ptr);
+			break;
+		case MEMLEAK_SCAN_AREA:
+			memleak_scan_area(log->ptr, log->offset, log->length,
+					  GFP_ATOMIC);
+			break;
+		default:
+			BUG();
+		}
+	}
+}
+
+/*
+ * Late initialization function.
+ */
+static int __init memleak_late_init(void)
+{
+	struct dentry *dentry;
+
+	atomic_set(&memleak_initialized, 1);
+
+	if (atomic_read(&memleak_error)) {
+		/*
+		 * Some error occured and kmemleak was disabled. There is a
+		 * small chance that memleak_disable() was called immediately
+		 * after setting memleak_initialized and we may end up with
+		 * two clean-up threads but serialized by scan_mutex.
+		 */
+		memleak_cleanup();
+		return -EBUSY;
+	}
+
+	dentry = debugfs_create_file("memleak", S_IRUGO, NULL, NULL,
+				     &memleak_fops);
+	if (!dentry)
+		return -ENOMEM;
+
+	scan_thread = kthread_run(memleak_scan_thread, NULL, "kmemleak");
+	if (IS_ERR(scan_thread))
+		pr_warning("kmemleak: Failed to create the scan thread\n");
+
+	pr_info("Kernel memory leak detector initialized\n");
+
+	return 0;
+}
+late_initcall(memleak_late_init);


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 02/15] kmemleak: Add documentation on the memory leak detector
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
  2008-12-10 18:26 ` [PATCH 01/15] kmemleak: Add the base support Catalin Marinas
@ 2008-12-10 18:27 ` Catalin Marinas
  2008-12-10 18:27 ` [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks Catalin Marinas
                   ` (13 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:27 UTC (permalink / raw)
  To: linux-kernel

This patch adds the Documentation/kmemleak.txt file with some
information about how kmemleak works.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 Documentation/kmemleak.txt |  127 ++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 127 insertions(+), 0 deletions(-)
 create mode 100644 Documentation/kmemleak.txt

diff --git a/Documentation/kmemleak.txt b/Documentation/kmemleak.txt
new file mode 100644
index 0000000..6617ce8
--- /dev/null
+++ b/Documentation/kmemleak.txt
@@ -0,0 +1,127 @@
+Kernel Memory Leak Detector
+===========================
+
+Introduction
+------------
+
+Kmemleak provides a way of detecting possible kernel memory leaks in a
+way similar to a tracing garbage collector
+(http://en.wikipedia.org/wiki/Garbage_collection_%28computer_science%29#Tracing_garbage_collectors),
+with the difference that the orphan objects are not freed but only
+reported via /sys/kernel/debug/memleak. A similar method is used by the
+Valgrind tool (memcheck --leak-check) to detect the memory leaks in
+user-space applications.
+
+Usage
+-----
+
+CONFIG_DEBUG_MEMLEAK in "Kernel hacking" has to be enabled. A kernel
+thread scans the memory every 10 min (by default) and prints any new
+unreferenced objects found. To trigger an intermediate scan and display
+all the possible memory leaks:
+
+  # mount -t debugfs nodev /sys/kernel/debug/
+  # cat /sys/kernel/debug/memleak
+
+Note that the orphan objects are listed in the order they were allocated
+and one object at the beginning of the list may cause other subsequent
+objects to be reported as orphan.
+
+Basic Algorithm
+---------------
+
+The memory allocations via kmalloc, vmalloc, kmem_cache_alloc and
+friends are traced and the pointers, together with additional
+information like size and stack trace, are stored in a prio search tree.
+The corresponding freeing function calls are tracked and the pointers
+removed from the kmemleak data structures.
+
+An allocated block of memory is considered orphan if no pointer to its
+start address or to any location inside the block can be found by
+scanning the memory (including saved registers). This means that there
+might be no way for the kernel to pass the address of the allocated
+block to a freeing function and therefore the block is considered a
+memory leak.
+
+The scanning algorithm steps:
+
+  1. mark all objects as white (remaining white objects will later be
+     considered orphan)
+  2. scan the memory starting with the data section and stacks, checking
+     the values against the addresses stored in the prio search tree. If
+     a pointer to a white object is found, the object is added to the
+     gray list
+  3. scan the gray objects for matching addresses (some white objects
+     can become gray and added at the end of the gray list) until the
+     gray set is finished
+  4. the remaining white objects are considered orphan and reported via
+     /sys/kernel/debug/memleak
+
+Some allocated memory blocks have pointers stored in the kernel's
+internal data structures and they cannot be detected as orphans. To
+avoid this, kmemleak can also store the number of values pointing to an
+address inside the block address range that need to be found so that the
+block is not considered a leak. One example is __vmalloc().
+
+Kmemleak API
+------------
+
+See the include/linux/memleak.h header for the functions prototype.
+
+memleak_init		- initialize kmemleak
+memleak_alloc		- notify of a memory block allocation
+memleak_free		- notify of a memory block freeing
+memleak_not_leak	- mark an object as not a leak
+memleak_ignore		- do not scan or report an object as leak
+memleak_scan_area	- add scan areas inside a memory block
+memleak_erase		- erase an old value in a pointer variable
+memleak_alloc_recursive	- as memleak_alloc but checks the recursiveness
+memleak_free_recursive	- as memleak_free but checks the recursiveness
+
+Dealing with false positives/negatives
+--------------------------------------
+
+The false negatives are real memory leaks (orphan objects) but not
+reported by kmemleak because values found during the memory scanning
+point to such objects. To reduce the number of false negatives, kmemleak
+provides the memleak_ignore, memleak_scan_area and memleak_erase
+functions (see above). The task stacks also increase the amount of false
+negatives and their scanning is not enabled by default.
+
+The false positives are objects wrongly reported as being memory leaks
+(orphan). For objects known not to be leaks, kmemleak provides the
+memleak_not_leak function. The memleak_ignore could also be used if the
+memory block is known not to contain other pointers and it will no
+longer be scanned.
+
+Some of the reported leaks are only transient, especially on SMP
+systems, because of pointers temporarily stored in CPU registers or
+stacks. Kmemleak defines MSECS_MIN_AGE (defaulting to 1000) representing
+the minimum age of an object to be reported as a memory leak.
+
+Limitations and Drawbacks
+-------------------------
+
+The main drawback is the reduced performance of memory allocation and
+freeing. To avoid other penalties, the memory scanning is only performed
+when the /sys/kernel/debug/memleak file is read. Anyway, this tool is
+intended for debugging purposes where the performance might not be the
+most important requirement.
+
+To keep the algorithm simple, kmemleak scans for values pointing to any
+address inside a block's address range. This may lead to an increased
+number of false negatives. However, it is likely that a real memory leak
+will eventually become visible.
+
+Another source of false negatives is the data stored in non-pointer
+values. In a future version, kmemleak could only scan the pointer
+members in the allocated structures. This feature would solve many of
+the false negative cases described above.
+
+The tool can report false positives. These are cases where an allocated
+block doesn't need to be freed (some cases in the init_call functions),
+the pointer is calculated by other methods than the usual container_of
+macro or the pointer is stored in a location not scanned by kmemleak.
+
+Page allocations and ioremap are not tracked. Only the ARM and i386
+architectures are currently supported.


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
  2008-12-10 18:26 ` [PATCH 01/15] kmemleak: Add the base support Catalin Marinas
  2008-12-10 18:27 ` [PATCH 02/15] kmemleak: Add documentation on the memory leak detector Catalin Marinas
@ 2008-12-10 18:27 ` Catalin Marinas
  2008-12-10 18:32   ` Dave Hansen
                     ` (2 more replies)
  2008-12-10 18:27 ` [PATCH 04/15] kmemleak: Add the slob " Catalin Marinas
                   ` (12 subsequent siblings)
  15 siblings, 3 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Pekka Enberg

This patch adds the callbacks to memleak_(alloc|free) functions from the
slab allocator. The patch also adds the SLAB_NOLEAKTRACE flag to avoid
recursive calls to kmemleak when it allocates its own data structures.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
---
 include/linux/slab.h |    2 ++
 mm/slab.c            |   18 ++++++++++++++++--
 2 files changed, 18 insertions(+), 2 deletions(-)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 000da12..d72ad0b 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -62,6 +62,8 @@
 # define SLAB_DEBUG_OBJECTS	0x00000000UL
 #endif
 
+#define SLAB_NOLEAKTRACE	0x00800000UL	/* Avoid kmemleak tracing */
+
 /* The following flags affect the page allocator grouping pages by mobility */
 #define SLAB_RECLAIM_ACCOUNT	0x00020000UL		/* Objects are reclaimable */
 #define SLAB_TEMPORARY		SLAB_RECLAIM_ACCOUNT	/* Objects are short-lived */
diff --git a/mm/slab.c b/mm/slab.c
index 0918751..d11112f 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -106,6 +106,7 @@
 #include	<linux/string.h>
 #include	<linux/uaccess.h>
 #include	<linux/nodemask.h>
+#include	<linux/memleak.h>
 #include	<linux/mempolicy.h>
 #include	<linux/mutex.h>
 #include	<linux/fault-inject.h>
@@ -177,13 +178,13 @@
 			 SLAB_STORE_USER | \
 			 SLAB_RECLAIM_ACCOUNT | SLAB_PANIC | \
 			 SLAB_DESTROY_BY_RCU | SLAB_MEM_SPREAD | \
-			 SLAB_DEBUG_OBJECTS)
+			 SLAB_DEBUG_OBJECTS | SLAB_NOLEAKTRACE)
 #else
 # define CREATE_MASK	(SLAB_HWCACHE_ALIGN | \
 			 SLAB_CACHE_DMA | \
 			 SLAB_RECLAIM_ACCOUNT | SLAB_PANIC | \
 			 SLAB_DESTROY_BY_RCU | SLAB_MEM_SPREAD | \
-			 SLAB_DEBUG_OBJECTS)
+			 SLAB_DEBUG_OBJECTS | SLAB_NOLEAKTRACE)
 #endif
 
 /*
@@ -2610,6 +2611,13 @@ static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
 		/* Slab management obj is off-slab. */
 		slabp = kmem_cache_alloc_node(cachep->slabp_cache,
 					      local_flags & ~GFP_THISNODE, nodeid);
+		/*
+		 * Only scan the list member to avoid false negatives
+		 * (especially caused by the s_mem pointer)
+		 */
+		memleak_scan_area(slabp, offsetof(struct slab, list),
+				  sizeof(struct list_head),
+				  local_flags & ~GFP_THISNODE);
 		if (!slabp)
 			return NULL;
 	} else {
@@ -3195,6 +3203,8 @@ static inline void *____cache_alloc(struct kmem_cache *cachep, gfp_t flags)
 		STATS_INC_ALLOCMISS(cachep);
 		objp = cache_alloc_refill(cachep, flags);
 	}
+	/* avoid false negatives */
+	memleak_erase(&ac->entry[ac->avail]);
 	return objp;
 }
 
@@ -3412,6 +3422,7 @@ __cache_alloc_node(struct kmem_cache *cachep, gfp_t flags, int nodeid,
   out:
 	local_irq_restore(save_flags);
 	ptr = cache_alloc_debugcheck_after(cachep, flags, ptr, caller);
+	memleak_alloc_recursive(ptr, obj_size(cachep), 1, cachep->flags, flags);
 
 	if (unlikely((flags & __GFP_ZERO) && ptr))
 		memset(ptr, 0, obj_size(cachep));
@@ -3465,6 +3476,8 @@ __cache_alloc(struct kmem_cache *cachep, gfp_t flags, void *caller)
 	objp = __do_cache_alloc(cachep, flags);
 	local_irq_restore(save_flags);
 	objp = cache_alloc_debugcheck_after(cachep, flags, objp, caller);
+	memleak_alloc_recursive(objp, obj_size(cachep), 1, cachep->flags,
+				flags);
 	prefetchw(objp);
 
 	if (unlikely((flags & __GFP_ZERO) && objp))
@@ -3580,6 +3593,7 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp)
 	struct array_cache *ac = cpu_cache_get(cachep);
 
 	check_irq_off();
+	memleak_free_recursive(objp, cachep->flags);
 	objp = cache_free_debugcheck(cachep, objp, __builtin_return_address(0));
 
 	/*


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 04/15] kmemleak: Add the slob memory allocation/freeing hooks
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (2 preceding siblings ...)
  2008-12-10 18:27 ` [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks Catalin Marinas
@ 2008-12-10 18:27 ` Catalin Marinas
  2008-12-10 18:36   ` Matt Mackall
  2008-12-11 21:37   ` Pekka Enberg
  2008-12-10 18:27 ` [PATCH 05/15] kmemleak: Add the slub " Catalin Marinas
                   ` (11 subsequent siblings)
  15 siblings, 2 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Pekka Enberg, Matt Mackall

This patch adds the callbacks to memleak_(alloc|free) functions from the
slob allocator.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Matt Mackall <mpm@selenic.com>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
---
 mm/slob.c |   15 +++++++++++----
 1 files changed, 11 insertions(+), 4 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index cb675d1..ff5a98d 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -60,6 +60,7 @@
 #include <linux/kernel.h>
 #include <linux/slab.h>
 #include <linux/mm.h>
+#include <linux/memleak.h>
 #include <linux/cache.h>
 #include <linux/init.h>
 #include <linux/module.h>
@@ -463,6 +464,7 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 {
 	unsigned int *m;
 	int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
+	void *ret;
 
 	if (size < PAGE_SIZE - align) {
 		if (!size)
@@ -472,18 +474,18 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
 		if (!m)
 			return NULL;
 		*m = size;
-		return (void *)m + align;
+		ret = (void *)m + align;
 	} else {
-		void *ret;
-
 		ret = slob_new_page(gfp | __GFP_COMP, get_order(size), node);
 		if (ret) {
 			struct page *page;
 			page = virt_to_page(ret);
 			page->private = size;
 		}
-		return ret;
 	}
+
+	memleak_alloc(ret, size, 1, gfp);
+	return ret;
 }
 EXPORT_SYMBOL(__kmalloc_node);
 
@@ -493,6 +495,7 @@ void kfree(const void *block)
 
 	if (unlikely(ZERO_OR_NULL_PTR(block)))
 		return;
+	memleak_free(block);
 
 	sp = (struct slob_page *)virt_to_page(block);
 	if (slob_page(sp)) {
@@ -555,12 +558,14 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
 	} else if (flags & SLAB_PANIC)
 		panic("Cannot create slab cache %s\n", name);
 
+	memleak_alloc(c, sizeof(struct kmem_cache), 1, GFP_KERNEL);
 	return c;
 }
 EXPORT_SYMBOL(kmem_cache_create);
 
 void kmem_cache_destroy(struct kmem_cache *c)
 {
+	memleak_free(c);
 	slob_free(c, sizeof(struct kmem_cache));
 }
 EXPORT_SYMBOL(kmem_cache_destroy);
@@ -577,6 +582,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
 	if (c->ctor)
 		c->ctor(b);
 
+	memleak_alloc_recursive(b, c->size, 1, c->flags, flags);
 	return b;
 }
 EXPORT_SYMBOL(kmem_cache_alloc_node);
@@ -599,6 +605,7 @@ static void kmem_rcu_free(struct rcu_head *head)
 
 void kmem_cache_free(struct kmem_cache *c, void *b)
 {
+	memleak_free_recursive(b, c->flags);
 	if (unlikely(c->flags & SLAB_DESTROY_BY_RCU)) {
 		struct slob_rcu *slob_rcu;
 		slob_rcu = b + (c->size - sizeof(struct slob_rcu));


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 05/15] kmemleak: Add the slub memory allocation/freeing hooks
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (3 preceding siblings ...)
  2008-12-10 18:27 ` [PATCH 04/15] kmemleak: Add the slob " Catalin Marinas
@ 2008-12-10 18:27 ` Catalin Marinas
  2008-12-11 21:30   ` Pekka Enberg
  2008-12-10 18:27 ` [PATCH 06/15] kmemleak: Add the vmalloc " Catalin Marinas
                   ` (10 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Pekka Enberg, Christoph Lameter

This patch adds the callbacks to memleak_(alloc|free) functions from the
slub allocator.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Christoph Lameter <cl@linux-foundation.org>
Cc: Pekka Enberg <penberg@cs.helsinki.fi>
---
 mm/slub.c |    5 ++++-
 1 files changed, 4 insertions(+), 1 deletions(-)

diff --git a/mm/slub.c b/mm/slub.c
index 749588a..d9b07cb 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -18,6 +18,7 @@
 #include <linux/seq_file.h>
 #include <linux/cpu.h>
 #include <linux/cpuset.h>
+#include <linux/memleak.h>
 #include <linux/mempolicy.h>
 #include <linux/ctype.h>
 #include <linux/debugobjects.h>
@@ -140,7 +141,7 @@
  * Set of flags that will prevent slab merging
  */
 #define SLUB_NEVER_MERGE (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER | \
-		SLAB_TRACE | SLAB_DESTROY_BY_RCU)
+		SLAB_TRACE | SLAB_DESTROY_BY_RCU | SLAB_NOLEAKTRACE)
 
 #define SLUB_MERGE_SAME (SLAB_DEBUG_FREE | SLAB_RECLAIM_ACCOUNT | \
 		SLAB_CACHE_DMA)
@@ -1608,6 +1609,7 @@ static __always_inline void *slab_alloc(struct kmem_cache *s,
 	if (unlikely((gfpflags & __GFP_ZERO) && object))
 		memset(object, 0, objsize);
 
+	memleak_alloc_recursive(object, objsize, 1, s->flags, gfpflags);
 	return object;
 }
 
@@ -1710,6 +1712,7 @@ static __always_inline void slab_free(struct kmem_cache *s,
 	struct kmem_cache_cpu *c;
 	unsigned long flags;
 
+	memleak_free_recursive(x, s->flags);
 	local_irq_save(flags);
 	c = get_cpu_slab(s, smp_processor_id());
 	debug_check_no_locks_freed(object, c->objsize);


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 06/15] kmemleak: Add the vmalloc memory allocation/freeing hooks
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (4 preceding siblings ...)
  2008-12-10 18:27 ` [PATCH 05/15] kmemleak: Add the slub " Catalin Marinas
@ 2008-12-10 18:27 ` Catalin Marinas
  2008-12-10 18:27 ` [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash Catalin Marinas
                   ` (9 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:27 UTC (permalink / raw)
  To: linux-kernel

This patch adds the callbacks to memleak_(alloc|free) functions from
vmalloc/vfree.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/vmalloc.c |   29 ++++++++++++++++++++++++++---
 1 files changed, 26 insertions(+), 3 deletions(-)

diff --git a/mm/vmalloc.c b/mm/vmalloc.c
index f3f6e07..b15e29e 100644
--- a/mm/vmalloc.c
+++ b/mm/vmalloc.c
@@ -23,6 +23,7 @@
 #include <linux/rbtree.h>
 #include <linux/radix-tree.h>
 #include <linux/rcupdate.h>
+#include <linux/memleak.h>
 
 #include <asm/atomic.h>
 #include <asm/uaccess.h>
@@ -1196,6 +1197,9 @@ static void __vunmap(const void *addr, int deallocate_pages)
 void vfree(const void *addr)
 {
 	BUG_ON(in_interrupt());
+
+	memleak_free(addr);
+
 	__vunmap(addr, 1);
 }
 EXPORT_SYMBOL(vfree);
@@ -1305,8 +1309,17 @@ fail:
 
 void *__vmalloc_area(struct vm_struct *area, gfp_t gfp_mask, pgprot_t prot)
 {
-	return __vmalloc_area_node(area, gfp_mask, prot, -1,
-					__builtin_return_address(0));
+	void *addr = __vmalloc_area_node(area, gfp_mask, prot, -1,
+					 __builtin_return_address(0));
+
+	/*
+	 * This needs ref_count = 2 since vm_struct also contains a
+	 * pointer to this address. The guard page is also subtracted
+	 * from the size.
+	 */
+	memleak_alloc(addr, area->size - PAGE_SIZE, 2, gfp_mask);
+
+	return addr;
 }
 
 /**
@@ -1325,6 +1338,8 @@ static void *__vmalloc_node(unsigned long size, gfp_t gfp_mask, pgprot_t prot,
 						int node, void *caller)
 {
 	struct vm_struct *area;
+	void *addr;
+	unsigned long real_size = size;
 
 	size = PAGE_ALIGN(size);
 	if (!size || (size >> PAGE_SHIFT) > num_physpages)
@@ -1336,7 +1351,15 @@ static void *__vmalloc_node(unsigned long size, gfp_t gfp_mask, pgprot_t prot,
 	if (!area)
 		return NULL;
 
-	return __vmalloc_area_node(area, gfp_mask, prot, node, caller);
+	addr = __vmalloc_area_node(area, gfp_mask, prot, node, caller);
+
+	/*
+	 * This needs ref_count = 2 since the vm_struct also contains
+	 * a pointer to this address.
+	 */
+	memleak_alloc(addr, real_size, 2, gfp_mask);
+
+	return addr;
 }
 
 void *__vmalloc(unsigned long size, gfp_t gfp_mask, pgprot_t prot)


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (5 preceding siblings ...)
  2008-12-10 18:27 ` [PATCH 06/15] kmemleak: Add the vmalloc " Catalin Marinas
@ 2008-12-10 18:27 ` Catalin Marinas
  2008-12-10 19:04   ` Dave Hansen
  2008-12-10 18:27 ` [PATCH 08/15] kmemleak: Add modules support Catalin Marinas
                   ` (8 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:27 UTC (permalink / raw)
  To: linux-kernel

The alloc_large_system_hash function is called from various places in
the kernel and it contains pointers to other allocated structures. It
therefore needs to be traced by kmemleak.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 mm/page_alloc.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d8ac014..27efeb0 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -46,6 +46,7 @@
 #include <linux/page-isolation.h>
 #include <linux/page_cgroup.h>
 #include <linux/debugobjects.h>
+#include <linux/memleak.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -4570,6 +4571,8 @@ void *__init alloc_large_system_hash(const char *tablename,
 	if (_hash_mask)
 		*_hash_mask = (1 << log2qty) - 1;
 
+	memleak_alloc(table, size, 1, GFP_ATOMIC);
+
 	return table;
 }
 


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 08/15] kmemleak: Add modules support
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (6 preceding siblings ...)
  2008-12-10 18:27 ` [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash Catalin Marinas
@ 2008-12-10 18:27 ` Catalin Marinas
  2008-12-10 18:27 ` [PATCH 09/15] x86: Provide _sdata in the vmlinux_*.lds.S files Catalin Marinas
                   ` (7 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:27 UTC (permalink / raw)
  To: linux-kernel

This patch handles the kmemleak operations needed for modules loading so
that memory allocations from inside a module are properly tracked.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 kernel/module.c |   56 +++++++++++++++++++++++++++++++++++++++++++++++++++++++
 1 files changed, 56 insertions(+), 0 deletions(-)

diff --git a/kernel/module.c b/kernel/module.c
index 1f4cc00..7198681 100644
--- a/kernel/module.c
+++ b/kernel/module.c
@@ -51,6 +51,7 @@
 #include <asm/sections.h>
 #include <linux/tracepoint.h>
 #include <linux/ftrace.h>
+#include <linux/memleak.h>
 
 #if 0
 #define DEBUGP printk
@@ -409,6 +410,7 @@ static void *percpu_modalloc(unsigned long size, unsigned long align,
 	unsigned long extra;
 	unsigned int i;
 	void *ptr;
+	int cpu;
 
 	if (align > PAGE_SIZE) {
 		printk(KERN_WARNING "%s: per-cpu alignment %li > %li\n",
@@ -438,6 +440,11 @@ static void *percpu_modalloc(unsigned long size, unsigned long align,
 			if (!split_block(i, size))
 				return NULL;
 
+		/* add the per-cpu scanning areas */
+		for_each_possible_cpu(cpu)
+			memleak_alloc(ptr + per_cpu_offset(cpu), size, 0,
+				      GFP_KERNEL);
+
 		/* Mark allocated */
 		pcpu_size[i] = -pcpu_size[i];
 		return ptr;
@@ -452,6 +459,7 @@ static void percpu_modfree(void *freeme)
 {
 	unsigned int i;
 	void *ptr = __per_cpu_start + block_size(pcpu_size[0]);
+	int cpu;
 
 	/* First entry is core kernel percpu data. */
 	for (i = 1; i < pcpu_num_used; ptr += block_size(pcpu_size[i]), i++) {
@@ -463,6 +471,10 @@ static void percpu_modfree(void *freeme)
 	BUG();
 
  free:
+	/* remove the per-cpu scanning areas */
+	for_each_possible_cpu(cpu)
+		memleak_free(freeme + per_cpu_offset(cpu));
+
 	/* Merge with previous? */
 	if (pcpu_size[i-1] >= 0) {
 		pcpu_size[i-1] += pcpu_size[i];
@@ -1833,6 +1845,36 @@ static void *module_alloc_update_bounds(unsigned long size)
 	return ret;
 }
 
+#ifdef CONFIG_DEBUG_MEMLEAK
+static void memleak_load_module(struct module *mod, Elf_Ehdr *hdr,
+				Elf_Shdr *sechdrs, char *secstrings)
+{
+	unsigned int i;
+
+	/* only scan the sections containing data */
+	memleak_scan_area(mod->module_core,
+			  (unsigned long)mod - (unsigned long)mod->module_core,
+			  sizeof(struct module), GFP_KERNEL);
+
+	for (i = 1; i < hdr->e_shnum; i++) {
+		if (!(sechdrs[i].sh_flags & SHF_ALLOC))
+			continue;
+		if (strncmp(secstrings + sechdrs[i].sh_name, ".data", 5) != 0
+		    && strncmp(secstrings + sechdrs[i].sh_name, ".bss", 4) != 0)
+			continue;
+
+		memleak_scan_area(mod->module_core, sechdrs[i].sh_addr -
+				  (unsigned long)mod->module_core,
+				  sechdrs[i].sh_size, GFP_KERNEL);
+	}
+}
+#else
+static inline void memleak_load_module(struct module *mod, Elf_Ehdr *hdr,
+				       Elf_Shdr *sechdrs, char *secstrings)
+{
+}
+#endif
+
 /* Allocate and load the module: note that size of section 0 is always
    zero, and we rely on this for optional sections. */
 static noinline struct module *load_module(void __user *umod,
@@ -2011,6 +2053,12 @@ static noinline struct module *load_module(void __user *umod,
 
 	/* Do the allocs. */
 	ptr = module_alloc_update_bounds(mod->core_size);
+	/*
+	 * The pointer to this block is stored in the module structure
+	 * which is inside the block. Just mark it as not being a
+	 * leak.
+	 */
+	memleak_not_leak(ptr);
 	if (!ptr) {
 		err = -ENOMEM;
 		goto free_percpu;
@@ -2019,6 +2067,13 @@ static noinline struct module *load_module(void __user *umod,
 	mod->module_core = ptr;
 
 	ptr = module_alloc_update_bounds(mod->init_size);
+	/*
+	 * The pointer to this block is stored in the module structure
+	 * which is inside the block. This block doesn't need to be
+	 * scanned as it contains data and code that will be freed
+	 * after the module is initialized.
+	 */
+	memleak_ignore(ptr);
 	if (!ptr && mod->init_size) {
 		err = -ENOMEM;
 		goto free_core;
@@ -2049,6 +2104,7 @@ static noinline struct module *load_module(void __user *umod,
 	}
 	/* Module has been moved. */
 	mod = (void *)sechdrs[modindex].sh_addr;
+	memleak_load_module(mod, hdr, sechdrs, secstrings);
 
 	/* Now we've moved module, initialize linked lists, etc. */
 	module_unload_init(mod);


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 09/15] x86: Provide _sdata in the vmlinux_*.lds.S files
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (7 preceding siblings ...)
  2008-12-10 18:27 ` [PATCH 08/15] kmemleak: Add modules support Catalin Marinas
@ 2008-12-10 18:27 ` Catalin Marinas
  2008-12-10 18:27 ` [PATCH 10/15] arm: Provide _sdata and __bss_stop in the vmlinux.lds.S file Catalin Marinas
                   ` (6 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar

_sdata is a common symbol defined by many architectures and made
available to the kernel via asm-generic/sections.h. Kmemleak uses this
symbol when scanning the data sections.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
---
 arch/x86/kernel/vmlinux_32.lds.S |    1 +
 arch/x86/kernel/vmlinux_64.lds.S |    1 +
 2 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/x86/kernel/vmlinux_32.lds.S b/arch/x86/kernel/vmlinux_32.lds.S
index a9b8560..b5d2b49 100644
--- a/arch/x86/kernel/vmlinux_32.lds.S
+++ b/arch/x86/kernel/vmlinux_32.lds.S
@@ -62,6 +62,7 @@ SECTIONS
 
   /* writeable */
   . = ALIGN(PAGE_SIZE);
+  _sdata = .;			/* Start of data section */
   .data : AT(ADDR(.data) - LOAD_OFFSET) {	/* Data */
 	DATA_DATA
 	CONSTRUCTORS
diff --git a/arch/x86/kernel/vmlinux_64.lds.S b/arch/x86/kernel/vmlinux_64.lds.S
index 46e0544..8ad376c 100644
--- a/arch/x86/kernel/vmlinux_64.lds.S
+++ b/arch/x86/kernel/vmlinux_64.lds.S
@@ -52,6 +52,7 @@ SECTIONS
   RODATA
 
   . = ALIGN(PAGE_SIZE);		/* Align data segment to page size boundary */
+  _sdata = .;			/* Start of data section */
 				/* Data */
   .data : AT(ADDR(.data) - LOAD_OFFSET) {
 	DATA_DATA


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 10/15] arm: Provide _sdata and __bss_stop in the vmlinux.lds.S file
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (8 preceding siblings ...)
  2008-12-10 18:27 ` [PATCH 09/15] x86: Provide _sdata in the vmlinux_*.lds.S files Catalin Marinas
@ 2008-12-10 18:27 ` Catalin Marinas
  2008-12-10 18:27 ` [PATCH 11/15] kmemleak: Remove some of the kmemleak false positives Catalin Marinas
                   ` (5 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:27 UTC (permalink / raw)
  To: linux-kernel; +Cc: Russell King

_sdata and __bss_stop are common symbols defined by many architectures
and made available to the kernel via asm-generic/sections.h. Kmemleak
uses these symbols when scanning the data sections.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Russell King <rmk+lkml@arm.linux.org.uk>
---
 arch/arm/kernel/vmlinux.lds.S |    2 ++
 1 files changed, 2 insertions(+), 0 deletions(-)

diff --git a/arch/arm/kernel/vmlinux.lds.S b/arch/arm/kernel/vmlinux.lds.S
index 4898bdc..3cf1d44 100644
--- a/arch/arm/kernel/vmlinux.lds.S
+++ b/arch/arm/kernel/vmlinux.lds.S
@@ -120,6 +120,7 @@ SECTIONS
 
 	.data : AT(__data_loc) {
 		__data_start = .;	/* address in memory */
+		_sdata = .;
 
 		/*
 		 * first, the init task union, aligned
@@ -171,6 +172,7 @@ SECTIONS
 		__bss_start = .;	/* BSS				*/
 		*(.bss)
 		*(COMMON)
+		__bss_stop = .;
 		_end = .;
 	}
 					/* Stabs debugging sections.	*/


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 11/15] kmemleak: Remove some of the kmemleak false positives
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (9 preceding siblings ...)
  2008-12-10 18:27 ` [PATCH 10/15] arm: Provide _sdata and __bss_stop in the vmlinux.lds.S file Catalin Marinas
@ 2008-12-10 18:27 ` Catalin Marinas
  2008-12-10 18:28 ` [PATCH 12/15] kmemleak: Enable the building of the memory leak detector Catalin Marinas
                   ` (4 subsequent siblings)
  15 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:27 UTC (permalink / raw)
  To: linux-kernel

There are allocations for which the main pointer cannot be found but
they are not memory leaks. This patch fixes some of them. For more
information on false positives, see Documentation/kmemleak.txt.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 drivers/char/vt.c      |    7 +++++++
 include/linux/percpu.h |    5 +++++
 2 files changed, 12 insertions(+), 0 deletions(-)

diff --git a/drivers/char/vt.c b/drivers/char/vt.c
index a5af607..a36221b 100644
--- a/drivers/char/vt.c
+++ b/drivers/char/vt.c
@@ -104,6 +104,7 @@
 #include <linux/io.h>
 #include <asm/system.h>
 #include <linux/uaccess.h>
+#include <linux/memleak.h>
 
 #define MAX_NR_CON_DRIVER 16
 
@@ -2882,6 +2883,12 @@ static int __init con_init(void)
 	 */
 	for (currcons = 0; currcons < MIN_NR_CONSOLES; currcons++) {
 		vc_cons[currcons].d = vc = alloc_bootmem(sizeof(struct vc_data));
+		/*
+		 * Kmemleak does not track the memory allocated via
+		 * alloc_bootmem() but this block contains pointers to
+		 * other blocks allocated via kmalloc.
+		 */
+		memleak_alloc(vc, sizeof(struct vc_data), 1, GFP_ATOMIC);
 		INIT_WORK(&vc_cons[currcons].SAK_work, vc_SAK);
 		visual_init(vc, currcons, 1);
 		vc->vc_screenbuf = (unsigned short *)alloc_bootmem(vc->vc_screenbuf_size);
diff --git a/include/linux/percpu.h b/include/linux/percpu.h
index 9f2a375..4d1ce18 100644
--- a/include/linux/percpu.h
+++ b/include/linux/percpu.h
@@ -69,7 +69,12 @@ struct percpu_data {
 	void *ptrs[1];
 };
 
+/* pointer disguising messes up the kmemleak objects tracking */
+#ifndef CONFIG_DEBUG_MEMLEAK
 #define __percpu_disguise(pdata) (struct percpu_data *)~(unsigned long)(pdata)
+#else
+#define __percpu_disguise(pdata) (struct percpu_data *)(pdata)
+#endif
 /* 
  * Use this to get to a cpu's version of the per-cpu object dynamically
  * allocated. Non-atomic access to the current CPU's version should


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 12/15] kmemleak: Enable the building of the memory leak detector
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (10 preceding siblings ...)
  2008-12-10 18:27 ` [PATCH 11/15] kmemleak: Remove some of the kmemleak false positives Catalin Marinas
@ 2008-12-10 18:28 ` Catalin Marinas
  2008-12-10 19:20   ` Dave Hansen
  2008-12-10 18:28 ` [PATCH 13/15] kmemleak: Keep the __init functions after initialization Catalin Marinas
                   ` (3 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:28 UTC (permalink / raw)
  To: linux-kernel

This patch adds the Kconfig.debug and Makefile entries needed for
building kmemleak into the kernel.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 lib/Kconfig.debug |   23 +++++++++++++++++++++++
 mm/Makefile       |    1 +
 2 files changed, 24 insertions(+), 0 deletions(-)

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index b0f239e..1e59827 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -290,6 +290,29 @@ config SLUB_STATS
 	  out which slabs are relevant to a particular load.
 	  Try running: slabinfo -DA
 
+config DEBUG_MEMLEAK
+	bool "Kernel memory leak detector"
+	default n
+	depends on EXPERIMENTAL
+	select DEBUG_SLAB if SLAB
+	select SLUB_DEBUG if SLUB
+	select DEBUG_FS
+	select STACKTRACE
+	select FRAME_POINTER
+	select KALLSYMS
+	help
+	  Say Y here if you want to enable the memory leak
+	  detector. The memory allocation/freeing is traced in a way
+	  similar to the Boehm's conservative garbage collector, the
+	  difference being that the orphan objects are not freed but
+	  only shown in /sys/kernel/debug/memleak. Enabling this
+	  feature will introduce an overhead to memory
+	  allocations. See Documentation/kmemleak.txt for more
+	  details.
+
+	  In order to access the memleak file, debugfs needs to be
+	  mounted (usually at /sys/kernel/debug).
+
 config DEBUG_PREEMPT
 	bool "Debug preemptible kernel"
 	depends on DEBUG_KERNEL && PREEMPT && (TRACE_IRQFLAGS_SUPPORT || PPC64)
diff --git a/mm/Makefile b/mm/Makefile
index c06b45a..3e43536 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -34,3 +34,4 @@ obj-$(CONFIG_MIGRATION) += migrate.o
 obj-$(CONFIG_SMP) += allocpercpu.o
 obj-$(CONFIG_QUICKLIST) += quicklist.o
 obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
+obj-$(CONFIG_DEBUG_MEMLEAK) += memleak.o


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 13/15] kmemleak: Keep the __init functions after initialization
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (11 preceding siblings ...)
  2008-12-10 18:28 ` [PATCH 12/15] kmemleak: Enable the building of the memory leak detector Catalin Marinas
@ 2008-12-10 18:28 ` Catalin Marinas
  2008-12-10 18:44   ` Sam Ravnborg
  2008-12-10 18:28 ` [PATCH 14/15] kmemleak: Simple testing module for kmemleak Catalin Marinas
                   ` (2 subsequent siblings)
  15 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:28 UTC (permalink / raw)
  To: linux-kernel

This patch adds the CONFIG_DEBUG_KEEP_INIT option which preserves the
.init.* sections after initialization. Memory leaks happening during
this phase can be more easily tracked.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 include/linux/init.h |    6 ++++++
 lib/Kconfig.debug    |   12 ++++++++++++
 2 files changed, 18 insertions(+), 0 deletions(-)

diff --git a/include/linux/init.h b/include/linux/init.h
index 68cb026..41321ad 100644
--- a/include/linux/init.h
+++ b/include/linux/init.h
@@ -40,9 +40,15 @@
 
 /* These are for everybody (although not all archs will actually
    discard it in modules) */
+#ifdef CONFIG_DEBUG_KEEP_INIT
+#define __init
+#define __initdata
+#define __initconst
+#else
 #define __init		__section(.init.text) __cold notrace
 #define __initdata	__section(.init.data)
 #define __initconst	__section(.init.rodata)
+#endif
 #define __exitdata	__section(.exit.data)
 #define __exit_call	__used __section(.exitcall.exit)
 
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 1e59827..72cde77 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -313,6 +313,18 @@ config DEBUG_MEMLEAK
 	  In order to access the memleak file, debugfs needs to be
 	  mounted (usually at /sys/kernel/debug).
 
+config DEBUG_KEEP_INIT
+	bool "Do not free the __init code/data"
+	default n
+	depends on DEBUG_MEMLEAK
+	help
+	  This option moves the __init code/data out of the
+	  .init.text/.init.data sections. It is useful for identifying
+	  memory leaks happening during the kernel or modules
+	  initialization.
+
+	  If unsure, say N.
+
 config DEBUG_PREEMPT
 	bool "Debug preemptible kernel"
 	depends on DEBUG_KERNEL && PREEMPT && (TRACE_IRQFLAGS_SUPPORT || PPC64)


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 14/15] kmemleak: Simple testing module for kmemleak
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (12 preceding siblings ...)
  2008-12-10 18:28 ` [PATCH 13/15] kmemleak: Keep the __init functions after initialization Catalin Marinas
@ 2008-12-10 18:28 ` Catalin Marinas
  2008-12-10 18:28 ` [PATCH 15/15] kmemleak: Add the corresponding MAINTAINERS entry Catalin Marinas
  2008-12-11  9:44 ` [PATCH 00/15] Kernel memory leak detector Catalin Marinas
  15 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:28 UTC (permalink / raw)
  To: linux-kernel

This patch adds a loadable module that deliberately leaks memory. It
is used for testing various memory leaking scenarios.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 lib/Kconfig.debug |   11 +++++
 mm/Makefile       |    1 
 mm/memleak-test.c |  110 +++++++++++++++++++++++++++++++++++++++++++++++++++++
 3 files changed, 122 insertions(+), 0 deletions(-)
 create mode 100644 mm/memleak-test.c

diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 72cde77..205c1da 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -313,6 +313,17 @@ config DEBUG_MEMLEAK
 	  In order to access the memleak file, debugfs needs to be
 	  mounted (usually at /sys/kernel/debug).
 
+config DEBUG_MEMLEAK_TEST
+	tristate "Test the kernel memory leak detector"
+	default n
+	depends on DEBUG_MEMLEAK
+	help
+	  Say Y or M here to build a test for the kernel memory leak
+	  detector. This option enables a module that explicitly leaks
+	  memory.
+
+	  If unsure, say N.
+
 config DEBUG_KEEP_INIT
 	bool "Do not free the __init code/data"
 	default n
diff --git a/mm/Makefile b/mm/Makefile
index 3e43536..deb5935 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -35,3 +35,4 @@ obj-$(CONFIG_SMP) += allocpercpu.o
 obj-$(CONFIG_QUICKLIST) += quicklist.o
 obj-$(CONFIG_CGROUP_MEM_RES_CTLR) += memcontrol.o page_cgroup.o
 obj-$(CONFIG_DEBUG_MEMLEAK) += memleak.o
+obj-$(CONFIG_DEBUG_MEMLEAK_TEST) += memleak-test.o
diff --git a/mm/memleak-test.c b/mm/memleak-test.c
new file mode 100644
index 0000000..0f3e651
--- /dev/null
+++ b/mm/memleak-test.c
@@ -0,0 +1,110 @@
+/*
+ * mm/memleak-test.c
+ *
+ * Copyright (C) 2008 ARM Limited
+ * Written by Catalin Marinas <catalin.marinas@arm.com>
+ *
+ * This program is free software; you can redistribute it and/or modify
+ * it under the terms of the GNU General Public License version 2 as
+ * published by the Free Software Foundation.
+ *
+ * This program is distributed in the hope that it will be useful,
+ * but WITHOUT ANY WARRANTY; without even the implied warranty of
+ * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
+ * GNU General Public License for more details.
+ *
+ * You should have received a copy of the GNU General Public License
+ * along with this program; if not, write to the Free Software
+ * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
+ */
+
+#include <linux/init.h>
+#include <linux/kernel.h>
+#include <linux/module.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <linux/list.h>
+#include <linux/percpu.h>
+
+#include <linux/memleak.h>
+
+struct test_node {
+	long header[25];
+	struct list_head list;
+	long footer[25];
+};
+
+static LIST_HEAD(test_list);
+static DEFINE_PER_CPU(void *, test_pointer);
+
+/*
+ * Some very simple testing. This function needs to be extended for
+ * proper testing.
+ */
+static int __init memleak_test_init(void)
+{
+	struct test_node *elem;
+	int i;
+
+	printk(KERN_INFO "Kmemleak testing\n");
+
+	/* make some orphan objects */
+	pr_info("kmemleak: kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL));
+	pr_info("kmemleak: kmalloc(32) = %p\n", kmalloc(32, GFP_KERNEL));
+	pr_info("kmemleak: kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL));
+	pr_info("kmemleak: kmalloc(1024) = %p\n", kmalloc(1024, GFP_KERNEL));
+	pr_info("kmemleak: kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL));
+	pr_info("kmemleak: kmalloc(2048) = %p\n", kmalloc(2048, GFP_KERNEL));
+	pr_info("kmemleak: kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL));
+	pr_info("kmemleak: kmalloc(4096) = %p\n", kmalloc(4096, GFP_KERNEL));
+#ifndef CONFIG_MODULES
+	pr_info("kmemleak: kmem_cache_alloc(files_cachep) = %p\n",
+		kmem_cache_alloc(files_cachep, GFP_KERNEL));
+	pr_info("kmemleak: kmem_cache_alloc(files_cachep) = %p\n",
+		kmem_cache_alloc(files_cachep, GFP_KERNEL));
+#endif
+	pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
+	pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
+	pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
+	pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
+	pr_info("kmemleak: vmalloc(64) = %p\n", vmalloc(64));
+
+	/*
+	 * Add elements to a list. They should only appear as orphan
+	 * after the module is removed.
+	 */
+	for (i = 0; i < 10; i++) {
+		elem = kmalloc(sizeof(*elem), GFP_KERNEL);
+		pr_info("kmemleak: kmalloc(sizeof(*elem)) = %p\n", elem);
+		if (!elem)
+			return -ENOMEM;
+		memset(elem, 0, sizeof(*elem));
+		INIT_LIST_HEAD(&elem->list);
+
+		list_add_tail(&elem->list, &test_list);
+	}
+
+	for_each_possible_cpu(i) {
+		per_cpu(test_pointer, i) = kmalloc(129, GFP_KERNEL);
+		pr_info("kmemleak: kmalloc(129) = %p\n",
+			per_cpu(test_pointer, i));
+	}
+
+	return 0;
+}
+module_init(memleak_test_init);
+
+static void __exit memleak_test_exit(void)
+{
+	struct test_node *elem, *tmp;
+
+	/*
+	 * Remove the list elements without actually freeing the
+	 * memory.
+	 */
+	list_for_each_entry_safe(elem, tmp, &test_list, list)
+		list_del(&elem->list);
+}
+module_exit(memleak_test_exit);
+
+MODULE_LICENSE("GPL");


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* [PATCH 15/15] kmemleak: Add the corresponding MAINTAINERS entry
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (13 preceding siblings ...)
  2008-12-10 18:28 ` [PATCH 14/15] kmemleak: Simple testing module for kmemleak Catalin Marinas
@ 2008-12-10 18:28 ` Catalin Marinas
  2008-12-11  9:44 ` [PATCH 00/15] Kernel memory leak detector Catalin Marinas
  15 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-10 18:28 UTC (permalink / raw)
  To: linux-kernel

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
---
 MAINTAINERS |    6 ++++++
 1 files changed, 6 insertions(+), 0 deletions(-)

diff --git a/MAINTAINERS b/MAINTAINERS
index 24741de..44ee125 100644
--- a/MAINTAINERS
+++ b/MAINTAINERS
@@ -2503,6 +2503,12 @@ L:	kernel-janitors@vger.kernel.org
 W:	http://www.kerneljanitors.org/
 S:	Maintained
 
+KERNEL MEMORY LEAK DETECTOR
+P:	Catalin Marinas
+M:	catalin.marinas@arm.com
+L:	linux-kernel@vger.kernel.org
+S:	Maintained
+
 KERNEL NFSD, SUNRPC, AND LOCKD SERVERS
 P:	J. Bruce Fields
 M:	bfields@fieldses.org


^ permalink raw reply related	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-10 18:27 ` [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks Catalin Marinas
@ 2008-12-10 18:32   ` Dave Hansen
  2008-12-10 18:53   ` Dave Hansen
  2008-12-11 21:22   ` Pekka Enberg
  2 siblings, 0 replies; 59+ messages in thread
From: Dave Hansen @ 2008-12-10 18:32 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Pekka Enberg

On Wed, 2008-12-10 at 18:27 +0000, Catalin Marinas wrote:
> This patch adds the callbacks to memleak_(alloc|free) functions from
> the slab allocator. The patch also adds the SLAB_NOLEAKTRACE flag to
> avoid recursive calls to kmemleak when it allocates its own data
> structures.

You might also want to try linux-mm@kvack.org for these.  You'll
probably get some better review there, and they're just _kinda_ mm
related. ;)

-- Dave


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 04/15] kmemleak: Add the slob memory allocation/freeing hooks
  2008-12-10 18:27 ` [PATCH 04/15] kmemleak: Add the slob " Catalin Marinas
@ 2008-12-10 18:36   ` Matt Mackall
  2008-12-11  9:47     ` Catalin Marinas
  2008-12-11 21:37   ` Pekka Enberg
  1 sibling, 1 reply; 59+ messages in thread
From: Matt Mackall @ 2008-12-10 18:36 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Pekka Enberg

On Wed, 2008-12-10 at 18:27 +0000, Catalin Marinas wrote:
> This patch adds the callbacks to memleak_(alloc|free) functions from the
> slob allocator.

Is this different than the last one?

Acked-by: Matt Mackall <mpm@selenic.com>

> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Matt Mackall <mpm@selenic.com>
> Cc: Pekka Enberg <penberg@cs.helsinki.fi>
> ---
>  mm/slob.c |   15 +++++++++++----
>  1 files changed, 11 insertions(+), 4 deletions(-)
> 
> diff --git a/mm/slob.c b/mm/slob.c
> index cb675d1..ff5a98d 100644
> --- a/mm/slob.c
> +++ b/mm/slob.c
> @@ -60,6 +60,7 @@
>  #include <linux/kernel.h>
>  #include <linux/slab.h>
>  #include <linux/mm.h>
> +#include <linux/memleak.h>
>  #include <linux/cache.h>
>  #include <linux/init.h>
>  #include <linux/module.h>
> @@ -463,6 +464,7 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
>  {
>  	unsigned int *m;
>  	int align = max(ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
> +	void *ret;
>  
>  	if (size < PAGE_SIZE - align) {
>  		if (!size)
> @@ -472,18 +474,18 @@ void *__kmalloc_node(size_t size, gfp_t gfp, int node)
>  		if (!m)
>  			return NULL;
>  		*m = size;
> -		return (void *)m + align;
> +		ret = (void *)m + align;
>  	} else {
> -		void *ret;
> -
>  		ret = slob_new_page(gfp | __GFP_COMP, get_order(size), node);
>  		if (ret) {
>  			struct page *page;
>  			page = virt_to_page(ret);
>  			page->private = size;
>  		}
> -		return ret;
>  	}
> +
> +	memleak_alloc(ret, size, 1, gfp);
> +	return ret;
>  }
>  EXPORT_SYMBOL(__kmalloc_node);
>  
> @@ -493,6 +495,7 @@ void kfree(const void *block)
>  
>  	if (unlikely(ZERO_OR_NULL_PTR(block)))
>  		return;
> +	memleak_free(block);
>  
>  	sp = (struct slob_page *)virt_to_page(block);
>  	if (slob_page(sp)) {
> @@ -555,12 +558,14 @@ struct kmem_cache *kmem_cache_create(const char *name, size_t size,
>  	} else if (flags & SLAB_PANIC)
>  		panic("Cannot create slab cache %s\n", name);
>  
> +	memleak_alloc(c, sizeof(struct kmem_cache), 1, GFP_KERNEL);
>  	return c;
>  }
>  EXPORT_SYMBOL(kmem_cache_create);
>  
>  void kmem_cache_destroy(struct kmem_cache *c)
>  {
> +	memleak_free(c);
>  	slob_free(c, sizeof(struct kmem_cache));
>  }
>  EXPORT_SYMBOL(kmem_cache_destroy);
> @@ -577,6 +582,7 @@ void *kmem_cache_alloc_node(struct kmem_cache *c, gfp_t flags, int node)
>  	if (c->ctor)
>  		c->ctor(b);
>  
> +	memleak_alloc_recursive(b, c->size, 1, c->flags, flags);
>  	return b;
>  }
>  EXPORT_SYMBOL(kmem_cache_alloc_node);
> @@ -599,6 +605,7 @@ static void kmem_rcu_free(struct rcu_head *head)
>  
>  void kmem_cache_free(struct kmem_cache *c, void *b)
>  {
> +	memleak_free_recursive(b, c->flags);
>  	if (unlikely(c->flags & SLAB_DESTROY_BY_RCU)) {
>  		struct slob_rcu *slob_rcu;
>  		slob_rcu = b + (c->size - sizeof(struct slob_rcu));
-- 
Mathematics is the supreme nostalgia of our time.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 13/15] kmemleak: Keep the __init functions after initialization
  2008-12-10 18:28 ` [PATCH 13/15] kmemleak: Keep the __init functions after initialization Catalin Marinas
@ 2008-12-10 18:44   ` Sam Ravnborg
  2008-12-17 13:09     ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Sam Ravnborg @ 2008-12-10 18:44 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel

On Wed, Dec 10, 2008 at 06:28:06PM +0000, Catalin Marinas wrote:
> This patch adds the CONFIG_DEBUG_KEEP_INIT option which preserves the
> .init.* sections after initialization. Memory leaks happening during
> this phase can be more easily tracked.

This patch manipulate the section names of these functions.
The better way would be to keep the section names as they are
and then in init.h decide where to add these sections.

This will require a new set of CONFIG_ symbols but then
it is obvious what happens.

Something like:

config	KEEP_INIT
	bool

config	KMEMLEAK
	...
	select KEEP_INIT
	select DEBUG_KEEP_CPUINIT
	select DEBUG_KEEP_MEMINIT

config HOTPLUG
	...
	select KEEP_INIT

And then use these symbols in vmlinux.lds.h

	Sam

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-10 18:27 ` [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks Catalin Marinas
  2008-12-10 18:32   ` Dave Hansen
@ 2008-12-10 18:53   ` Dave Hansen
  2008-12-11 21:22   ` Pekka Enberg
  2 siblings, 0 replies; 59+ messages in thread
From: Dave Hansen @ 2008-12-10 18:53 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Pekka Enberg

On Wed, 2008-12-10 at 18:27 +0000, Catalin Marinas wrote:
> @@ -3195,6 +3203,8 @@ static inline void *____cache_alloc(struct
> kmem_cache *cachep, gfp_t flags)
>                 STATS_INC_ALLOCMISS(cachep);
>                 objp = cache_alloc_refill(cachep, flags);
>         }
> +       /* avoid false negatives */
> +       memleak_erase(&ac->entry[ac->avail]);
>         return objp;
>  }

It would be really nice here to say *how* it is avoiding false
negatives. :)

How about:

/* Don't let the pointer from the slab itself count as referencing */

-- Dave


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-10 18:27 ` [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash Catalin Marinas
@ 2008-12-10 19:04   ` Dave Hansen
  2008-12-11  9:50     ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Dave Hansen @ 2008-12-10 19:04 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel

On Wed, 2008-12-10 at 18:27 +0000, Catalin Marinas wrote:
> 
> @@ -4570,6 +4571,8 @@ void *__init alloc_large_system_hash(const char *tablename,
>         if (_hash_mask)
>                 *_hash_mask = (1 << log2qty) - 1;
>  
> +       memleak_alloc(table, size, 1, GFP_ATOMIC);
> +
>         return table;
>  }

Why is this sucker GFP_ATOMIC?

Since alloc_large_system_hash() is using bootmem (and is called early),
I'm a little surprised that it is OK to call into memleak_alloc() which
uses kmem_cache_alloc().  Is the slab even set up at this point?

-- Dave


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 12/15] kmemleak: Enable the building of the memory leak detector
  2008-12-10 18:28 ` [PATCH 12/15] kmemleak: Enable the building of the memory leak detector Catalin Marinas
@ 2008-12-10 19:20   ` Dave Hansen
  2008-12-12 17:27     ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Dave Hansen @ 2008-12-10 19:20 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel

On Wed, 2008-12-10 at 18:28 +0000, Catalin Marinas wrote:
> +config DEBUG_MEMLEAK
> +       bool "Kernel memory leak detector"
> +       default n
> +       depends on EXPERIMENTAL
> +       select DEBUG_SLAB if SLAB
> +       select SLUB_DEBUG if SLUB
> +       select DEBUG_FS
> +       select STACKTRACE
> +       select FRAME_POINTER
> +       select KALLSYMS

So, not all architectures have STACKTRACE or FRAME_POINTER.  I think a
few of these should at least be done with depends.

Is this feature accessible if DEBUG_FS=n?  It seems to compile OK, but I
wonder if it is useful.

-- Dave


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 00/15] Kernel memory leak detector
  2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
                   ` (14 preceding siblings ...)
  2008-12-10 18:28 ` [PATCH 15/15] kmemleak: Add the corresponding MAINTAINERS entry Catalin Marinas
@ 2008-12-11  9:44 ` Catalin Marinas
  15 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-11  9:44 UTC (permalink / raw)
  To: linux-kernel

On Wed, 2008-12-10 at 18:26 +0000, Catalin Marinas wrote:
> A new kmemleak version is available. Thanks to all who reviewed the code
> and gave feedback. Kmemleak can also be found on this git tree:
> 
> git://linux-arm.org/linux-2.6.git kmemleak
[...]
> Changes since the previous release:

Something I forgot to mention here - I dropped the mem_map array
scanning since it doesn't work in all the kernel configurations and, as
Andrew Morton pointed out, it goes too deep into the kernel structures.
I didn't see any false positives on ARM but it may need more testing.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 04/15] kmemleak: Add the slob memory allocation/freeing hooks
  2008-12-10 18:36   ` Matt Mackall
@ 2008-12-11  9:47     ` Catalin Marinas
  0 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-11  9:47 UTC (permalink / raw)
  To: Matt Mackall; +Cc: linux-kernel, Pekka Enberg

On Wed, 2008-12-10 at 12:36 -0600, Matt Mackall wrote:
> On Wed, 2008-12-10 at 18:27 +0000, Catalin Marinas wrote:
> > This patch adds the callbacks to memleak_(alloc|free) functions from the
> > slob allocator.
> 
> Is this different than the last one?

Slightly. It now passes the gfp flags to the memleak_alloc callback
(that's how I noticed that slob_alloc gets the SLAB_* flags).

> Acked-by: Matt Mackall <mpm@selenic.com>

Thanks.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-10 19:04   ` Dave Hansen
@ 2008-12-11  9:50     ` Catalin Marinas
  2008-12-11 10:08       ` Catalin Marinas
  2008-12-11 17:30       ` Dave Hansen
  0 siblings, 2 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-11  9:50 UTC (permalink / raw)
  To: Dave Hansen; +Cc: linux-kernel

On Wed, 2008-12-10 at 11:04 -0800, Dave Hansen wrote:
> On Wed, 2008-12-10 at 18:27 +0000, Catalin Marinas wrote:
> > 
> > @@ -4570,6 +4571,8 @@ void *__init alloc_large_system_hash(const char *tablename,
> >         if (_hash_mask)
> >                 *_hash_mask = (1 << log2qty) - 1;
> >  
> > +       memleak_alloc(table, size, 1, GFP_ATOMIC);
> > +
> >         return table;
> >  }
> 
> Why is this sucker GFP_ATOMIC?

It could be GFP_KERNEL, it don't think it really matter at this point.

> Since alloc_large_system_hash() is using bootmem (and is called early),
> I'm a little surprised that it is OK to call into memleak_alloc() which
> uses kmem_cache_alloc().  Is the slab even set up at this point?

It doesn't need to be. Early callbacks like this are logged by kmemleak
in a buffer and properly registered once the slab allocator is fully
initialised (slab initialisation needs to allocate some memory for
itself as well).

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-11  9:50     ` Catalin Marinas
@ 2008-12-11 10:08       ` Catalin Marinas
  2008-12-11 17:30       ` Dave Hansen
  1 sibling, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-11 10:08 UTC (permalink / raw)
  To: Dave Hansen; +Cc: linux-kernel

On Thu, 2008-12-11 at 09:50 +0000, Catalin Marinas wrote:
> On Wed, 2008-12-10 at 11:04 -0800, Dave Hansen wrote:
> > On Wed, 2008-12-10 at 18:27 +0000, Catalin Marinas wrote:
> > > 
> > > @@ -4570,6 +4571,8 @@ void *__init alloc_large_system_hash(const char *tablename,
> > >         if (_hash_mask)
> > >                 *_hash_mask = (1 << log2qty) - 1;
> > >  
> > > +       memleak_alloc(table, size, 1, GFP_ATOMIC);
> > > +
> > >         return table;
> > >  }
> > 
> > Why is this sucker GFP_ATOMIC?
> 
> It could be GFP_KERNEL, it don't think it really matter at this point.

Actually, for consistency is should be GFP_ATOMIC even if the flag might
not be used. All the other allocations in this function (vmalloc,
__get_free_pages) use GFP_ATOMIC.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-11  9:50     ` Catalin Marinas
  2008-12-11 10:08       ` Catalin Marinas
@ 2008-12-11 17:30       ` Dave Hansen
  2008-12-11 17:38         ` Catalin Marinas
  1 sibling, 1 reply; 59+ messages in thread
From: Dave Hansen @ 2008-12-11 17:30 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel

On Thu, 2008-12-11 at 09:50 +0000, Catalin Marinas wrote:
> > Since alloc_large_system_hash() is using bootmem (and is called early),
> > I'm a little surprised that it is OK to call into memleak_alloc() which
> > uses kmem_cache_alloc().  Is the slab even set up at this point?
> 
> It doesn't need to be. Early callbacks like this are logged by kmemleak
> in a buffer and properly registered once the slab allocator is fully
> initialised (slab initialisation needs to allocate some memory for
> itself as well).

Ahh, thanks for the clarification.  Could you add something to the code
to this effect?

-- Dave


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-11 17:30       ` Dave Hansen
@ 2008-12-11 17:38         ` Catalin Marinas
  2008-12-11 17:45           ` Dave Hansen
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-11 17:38 UTC (permalink / raw)
  To: Dave Hansen; +Cc: linux-kernel

On Thu, 2008-12-11 at 09:30 -0800, Dave Hansen wrote:
> On Thu, 2008-12-11 at 09:50 +0000, Catalin Marinas wrote:
> > > Since alloc_large_system_hash() is using bootmem (and is called early),
> > > I'm a little surprised that it is OK to call into memleak_alloc() which
> > > uses kmem_cache_alloc().  Is the slab even set up at this point?
> > 
> > It doesn't need to be. Early callbacks like this are logged by kmemleak
> > in a buffer and properly registered once the slab allocator is fully
> > initialised (slab initialisation needs to allocate some memory for
> > itself as well).
> 
> Ahh, thanks for the clarification.  Could you add something to the code
> to this effect?

Do you mean a comment? I can do this.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-11 17:38         ` Catalin Marinas
@ 2008-12-11 17:45           ` Dave Hansen
  2008-12-11 19:47             ` Pekka Enberg
  0 siblings, 1 reply; 59+ messages in thread
From: Dave Hansen @ 2008-12-11 17:45 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel

On Thu, 2008-12-11 at 17:38 +0000, Catalin Marinas wrote:
> On Thu, 2008-12-11 at 09:30 -0800, Dave Hansen wrote:
> > On Thu, 2008-12-11 at 09:50 +0000, Catalin Marinas wrote:
> > > > Since alloc_large_system_hash() is using bootmem (and is called early),
> > > > I'm a little surprised that it is OK to call into memleak_alloc() which
> > > > uses kmem_cache_alloc().  Is the slab even set up at this point?
> > > 
> > > It doesn't need to be. Early callbacks like this are logged by kmemleak
> > > in a buffer and properly registered once the slab allocator is fully
> > > initialised (slab initialisation needs to allocate some memory for
> > > itself as well).
> > 
> > Ahh, thanks for the clarification.  Could you add something to the code
> > to this effect?
> 
> Do you mean a comment? I can do this.

Yeah, something like

/*
 * kmemleak doesn't actually allocate memory when called this early
 * so the GFP_ATOMIC here is actually meaningless, but consistent
 * with the rest of this function.
 */

Maybe that's too verbose. :)

-- Dave


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-11 17:45           ` Dave Hansen
@ 2008-12-11 19:47             ` Pekka Enberg
  2008-12-12 17:04               ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Pekka Enberg @ 2008-12-11 19:47 UTC (permalink / raw)
  To: Dave Hansen; +Cc: Catalin Marinas, linux-kernel

On Thu, 2008-12-11 at 17:38 +0000, Catalin Marinas wrote:
>> Do you mean a comment? I can do this.

On Thu, Dec 11, 2008 at 7:45 PM, Dave Hansen <dave@linux.vnet.ibm.com> wrote:
> Yeah, something like
>
> /*
>  * kmemleak doesn't actually allocate memory when called this early
>  * so the GFP_ATOMIC here is actually meaningless, but consistent
>  * with the rest of this function.
>  */
>
> Maybe that's too verbose. :)

I'd suggest just doing a separate kmemleak_early_alloc() hook without
the gfp flag.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-10 18:27 ` [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks Catalin Marinas
  2008-12-10 18:32   ` Dave Hansen
  2008-12-10 18:53   ` Dave Hansen
@ 2008-12-11 21:22   ` Pekka Enberg
  2008-12-12 14:27     ` Catalin Marinas
  2 siblings, 1 reply; 59+ messages in thread
From: Pekka Enberg @ 2008-12-11 21:22 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel

Hi Catalin,

Catalin Marinas wrote:
> @@ -2610,6 +2611,13 @@ static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
>  		/* Slab management obj is off-slab. */
>  		slabp = kmem_cache_alloc_node(cachep->slabp_cache,
>  					      local_flags & ~GFP_THISNODE, nodeid);
> +		/*
> +		 * Only scan the list member to avoid false negatives
> +		 * (especially caused by the s_mem pointer)
> +		 */

Heh, I run into this part again and as I have a long term memory of a 
goldfish I had to look up the discussion we had. So may I suggest you 
change the comment to:

/*
  * If the first object in the slab is leaked (it's allocated but no
  * one has a reference to it), we want to make sure kmemleak does not
  * treat the ->s_mem pointer as a reference to the object. Otherwise
  * we will not report the leak.
  */

> +		memleak_scan_area(slabp, offsetof(struct slab, list),
> +				  sizeof(struct list_head),
> +				  local_flags & ~GFP_THISNODE);
>  		if (!slabp)
>  			return NULL;
>  	} else {
> @@ -3195,6 +3203,8 @@ static inline void *____cache_alloc(struct kmem_cache *cachep, gfp_t flags)
>  		STATS_INC_ALLOCMISS(cachep);
>  		objp = cache_alloc_refill(cachep, flags);
>  	}
> +	/* avoid false negatives */
> +	memleak_erase(&ac->entry[ac->avail]);

For this, maybe something like this:

/*
  * To avoid a false negative, if an object that is in one of the
  * per-CPU caches is leaked, we need to make sure kmemleak doesn't
  * treat the array pointers as a reference to the object.
  */

>  	return objp;
>  }
>  

Do you take care of the per-node lists as well?

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 05/15] kmemleak: Add the slub memory allocation/freeing hooks
  2008-12-10 18:27 ` [PATCH 05/15] kmemleak: Add the slub " Catalin Marinas
@ 2008-12-11 21:30   ` Pekka Enberg
  2008-12-12 13:45     ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Pekka Enberg @ 2008-12-11 21:30 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Christoph Lameter

Catalin Marinas wrote:
> This patch adds the callbacks to memleak_(alloc|free) functions from the
> slub allocator.
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Christoph Lameter <cl@linux-foundation.org>
> Cc: Pekka Enberg <penberg@cs.helsinki.fi>

Hmm, I'm not sure I understand why struct kmem_cache_cpu ->freelist is 
never scanned. For SMP, I suppose kmemleak doesn't scan the per-CPU 
areas? But for UP, struct kmem_cache is allocated with kmalloc() and 
that contains struct kmem_cache_cpu as well.

And I suppose we never scan struct pages either. Otherwise ->freelist 
there would be a problem as well.

		Pekka

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 04/15] kmemleak: Add the slob memory allocation/freeing hooks
  2008-12-10 18:27 ` [PATCH 04/15] kmemleak: Add the slob " Catalin Marinas
  2008-12-10 18:36   ` Matt Mackall
@ 2008-12-11 21:37   ` Pekka Enberg
  1 sibling, 0 replies; 59+ messages in thread
From: Pekka Enberg @ 2008-12-11 21:37 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Matt Mackall

Catalin Marinas wrote:
> This patch adds the callbacks to memleak_(alloc|free) functions from the
> slob allocator.
> 
> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Matt Mackall <mpm@selenic.com>
> Cc: Pekka Enberg <penberg@cs.helsinki.fi>

Acked-by: Pekka Enberg <penberg@cs.helsinki.fi>

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 01/15] kmemleak: Add the base support
  2008-12-10 18:26 ` [PATCH 01/15] kmemleak: Add the base support Catalin Marinas
@ 2008-12-11 22:01   ` Pekka Enberg
  2008-12-12 11:36     ` Catalin Marinas
  2008-12-16 19:36   ` Paul E. McKenney
  1 sibling, 1 reply; 59+ messages in thread
From: Pekka Enberg @ 2008-12-11 22:01 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-kernel, Paul E. McKenney, Ingo Molnar, Andrew Morton

Hi Catalin,

Few minor nits below.

On Wed, Dec 10, 2008 at 8:26 PM, Catalin Marinas
<catalin.marinas@arm.com> wrote:
> +static void put_object(struct memleak_object *object)
> +{
> +       if (!atomic_dec_and_test(&object->use_count))
> +               return;
> +
> +       /* should only get here after delete_object was called */
> +       BUG_ON(object->flags & OBJECT_ALLOCATED);

This could be

    if (WARN_ON(object->flags & OBJECT_ALLOCATED))
            return;

> +static void create_object(unsigned long ptr, size_t size, int min_count,
> +                         gfp_t gfp)
> +{
> +       unsigned long flags;
> +       struct memleak_object *object;
> +       struct prio_tree_node *node;
> +       struct stack_trace trace;
> +
> +       object = kmem_cache_alloc(object_cache, gfp);
> +       if (!object)
> +               memleak_panic("kmemleak: Cannot allocate a memleak_object "
> +                             "structure\n");

Don't you want to exit early here if object == NULL?

> +
> +       INIT_LIST_HEAD(&object->object_list);
> +       INIT_LIST_HEAD(&object->gray_list);
> +       INIT_HLIST_HEAD(&object->area_list);
> +       spin_lock_init(&object->lock);
> +       atomic_set(&object->use_count, 1);
> +       object->flags = OBJECT_ALLOCATED;
> +       object->pointer = ptr;
> +       object->size = size;
> +       object->min_count = min_count;
> +       object->count = -1;                     /* no color initially */
> +       object->jiffies = jiffies;
> +
> +       /* task information */
> +       if (in_irq()) {
> +               object->pid = 0;
> +               strncpy(object->comm, "hardirq", TASK_COMM_LEN);
> +       } else if (in_softirq()) {
> +               object->pid = 0;
> +               strncpy(object->comm, "softirq", TASK_COMM_LEN);
> +       } else {
> +               object->pid = current->pid;
> +               get_task_comm(object->comm, current);
> +       }
> +
> +       /* kernel backtrace */
> +       trace.max_entries = MAX_TRACE;
> +       trace.nr_entries = 0;
> +       trace.entries = object->trace;
> +       trace.skip = 1;
> +       save_stack_trace(&trace);
> +       object->trace_len = trace.nr_entries;
> +
> +       INIT_PRIO_TREE_NODE(&object->tree_node);
> +       object->tree_node.start = ptr;
> +       object->tree_node.last = ptr + size - 1;
> +
> +       write_lock_irqsave(&memleak_lock, flags);
> +       min_addr = min(min_addr, ptr);
> +       max_addr = max(max_addr, ptr + size);
> +       node = prio_tree_insert(&object_tree_root, &object->tree_node);
> +       /*
> +        * The code calling the kernel does not yet have the pointer to the
> +        * memory block to be able to free it.  However, we still hold the
> +        * memleak_lock here in case parts of the kernel started freeing
> +        * random memory blocks.
> +        */
> +       if (node != &object->tree_node) {
> +               unsigned long flags;
> +
> +               pr_warning("kmemleak: Existing pointer\n");
> +               dump_stack();

How come you don't dump_stack() or even WARN_ON() unconditionally in
kmemleak_panic() which is called bit later so you can remove this kind
of ad hoc logging?

> +
> +               object = lookup_object(ptr, 1);
> +               spin_lock_irqsave(&object->lock, flags);
> +               dump_object_info(object);
> +               spin_unlock_irqrestore(&object->lock, flags);
> +
> +               memleak_panic("kmemleak: Cannot insert 0x%lx into the object "
> +                             "search tree\n", ptr);


> +       }
> +       list_add_tail_rcu(&object->object_list, &object_list);
> +       write_unlock_irqrestore(&memleak_lock, flags);
> +}
> +
> +/*
> + * Remove the metadata (struct memleak_object) for a memory block from the
> + * object_list and object_tree_root and decrement its use_count.
> + */
> +static void delete_object(unsigned long ptr)
> +{
> +       unsigned long flags;
> +       struct memleak_object *object;
> +
> +       write_lock_irqsave(&memleak_lock, flags);
> +       object = lookup_object(ptr, 0);
> +       if (!object) {
> +               pr_warning("kmemleak: Freeing unknown object at 0x%08lx\n",
> +                          ptr);
> +               dump_stack();

Hmm, dump_stack() is called in quite a few places. Might make sense to
add a memleak_report() function that does this in an uniform way.

> +               write_unlock_irqrestore(&memleak_lock, flags);
> +               return;
> +       }
> +       prio_tree_remove(&object_tree_root, &object->tree_node);
> +       list_del_rcu(&object->object_list);
> +       write_unlock_irqrestore(&memleak_lock, flags);
> +
> +       BUG_ON(!(object->flags & OBJECT_ALLOCATED));
> +       BUG_ON(atomic_read(&object->use_count) < 1);

These could be converted to WARN_ON() calls, I think?

> +
> +       /*
> +        * Locking here also ensures that the corresponding memory block
> +        * cannot be freed when it is being scanned.
> +        */
> +       spin_lock_irqsave(&object->lock, flags);
> +       object->flags &= ~OBJECT_ALLOCATED;
> +#ifdef REPORT_ORPHAN_FREEING
> +       if (color_white(object)) {
> +               pr_warning("kmemleak: Freeing orphan object 0x%08lx\n", ptr);
> +               dump_stack();
> +               dump_object_info(object);
> +       }
> +#endif
> +       spin_unlock_irqrestore(&object->lock, flags);
> +       put_object(object);
> +}
> +
> +/*
> + * Make a object permanently as gray-colored so that it can no longer be
> + * reported as a leak. This is used in general to mark a false positive.
> + */
> +static void make_gray_object(unsigned long ptr)
> +{
> +       unsigned long flags;
> +       struct memleak_object *object;
> +
> +       object = find_and_get_object(ptr, 0);
> +       if (!object) {
> +               dump_stack();
> +               memleak_panic("kmemleak: Graying unknown object at 0x%08lx\n",
> +                             ptr);

Early return here?

> +       }
> +
> +       spin_lock_irqsave(&object->lock, flags);
> +       object->min_count = 0;
> +       spin_unlock_irqrestore(&object->lock, flags);
> +       put_object(object);
> +}
> +
> +/*
> + * Mark the object as black-colored so that it is ignored from scans and
> + * reporting.
> + */
> +static void make_black_object(unsigned long ptr)
> +{
> +       unsigned long flags;
> +       struct memleak_object *object;
> +
> +       object = find_and_get_object(ptr, 0);
> +       if (!object) {
> +               dump_stack();
> +               memleak_panic("kmemleak: Blacking unknown object at 0x%08lx\n",
> +                             ptr);

Ditto.

> +       }
> +
> +       spin_lock_irqsave(&object->lock, flags);
> +       object->min_count = -1;
> +       spin_unlock_irqrestore(&object->lock, flags);
> +       put_object(object);
> +}
> +
> +/*
> + * Add a scanning area to the object. If at least one such area is added,
> + * kmemleak will only scan these ranges rather than the whole memory block.
> + */
> +static void add_scan_area(unsigned long ptr, unsigned long offset,
> +                         size_t length, gfp_t gfp)
> +{
> +       unsigned long flags;
> +       struct memleak_object *object;
> +       struct memleak_scan_area *area;
> +
> +       object = find_and_get_object(ptr, 0);
> +       if (!object) {
> +               dump_stack();
> +               memleak_panic("kmemleak: Adding scan area to unknown "
> +                             "object at 0x%08lx\n", ptr);

Ditto.

> +       }
> +
> +       area = kmem_cache_alloc(scan_area_cache, gfp);
> +       if (!area)
> +               memleak_panic("kmemleak: Cannot allocate a scan area\n");
> +
> +       spin_lock_irqsave(&object->lock, flags);
> +       if (offset + length > object->size) {
> +               dump_stack();
> +               dump_object_info(object);
> +               memleak_panic("kmemleak: Scan area larger than object "
> +                             "0x%08lx\n", ptr);
> +       }
> +
> +       INIT_HLIST_NODE(&area->node);
> +       area->offset = offset;
> +       area->length = length;
> +
> +       hlist_add_head(&area->node, &object->area_list);
> +       spin_unlock_irqrestore(&object->lock, flags);
> +       put_object(object);
> +}
> +
> +/*
> + * Log an early memleak_* call to the early_log buffer. These calls will be
> + * processed later once kmemleak is fully initialized.
> + */
> +static void __init log_early(int op_type, const void *ptr, size_t size,
> +                            int min_count,
> +                            unsigned long offset, size_t length)
> +{
> +       unsigned long flags;
> +       struct early_log *log;
> +
> +       if (crt_early_log >= ARRAY_SIZE(early_log))
> +               memleak_panic("kmemleak: Early log buffer exceeded\n");

Here as well.

> +
> +       /*
> +        * There is no need for locking since the kernel is still in UP mode
> +        * at this stage. Disabling the IRQs is enough.
> +        */
> +       local_irq_save(flags);
> +       log = &early_log[crt_early_log];
> +       log->op_type = op_type;
> +       log->ptr = ptr;
> +       log->size = size;
> +       log->min_count = min_count;
> +       log->offset = offset;
> +       log->length = length;
> +       crt_early_log++;
> +       local_irq_restore(flags);
> +}

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 01/15] kmemleak: Add the base support
  2008-12-11 22:01   ` Pekka Enberg
@ 2008-12-12 11:36     ` Catalin Marinas
  2008-12-12 13:14       ` Pekka Enberg
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-12 11:36 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: linux-kernel, Paul E. McKenney, Ingo Molnar, Andrew Morton

On Fri, 2008-12-12 at 00:01 +0200, Pekka Enberg wrote:
> On Wed, Dec 10, 2008 at 8:26 PM, Catalin Marinas
> <catalin.marinas@arm.com> wrote:
> > +static void put_object(struct memleak_object *object)
> > +{
> > +       if (!atomic_dec_and_test(&object->use_count))
> > +               return;
> > +
> > +       /* should only get here after delete_object was called */
> > +       BUG_ON(object->flags & OBJECT_ALLOCATED);
> 
> This could be
> 
>     if (WARN_ON(object->flags & OBJECT_ALLOCATED))
>             return;

I'm not sure just warning would be enough. If this happens, its a severe
bug in kmemleak and the tool is no longer useful (it could even leak
memory or free already freed blocks). I could change it to a
memleak_panic call but if the object use_count isn't reliable, the
memleak_disable call wouldn't work properly either.

> > +static void create_object(unsigned long ptr, size_t size, int min_count,
> > +                         gfp_t gfp)
> > +{
> > +       unsigned long flags;
> > +       struct memleak_object *object;
> > +       struct prio_tree_node *node;
> > +       struct stack_trace trace;
> > +
> > +       object = kmem_cache_alloc(object_cache, gfp);
> > +       if (!object)
> > +               memleak_panic("kmemleak: Cannot allocate a memleak_object "
> > +                             "structure\n");
> 
> Don't you want to exit early here if object == NULL?

Yes, indeed. That omission was caused by s/panic/memleak_panic/

> > +       if (node != &object->tree_node) {
> > +               unsigned long flags;
> > +
> > +               pr_warning("kmemleak: Existing pointer\n");
> > +               dump_stack();
> 
> How come you don't dump_stack() or even WARN_ON() unconditionally in
> kmemleak_panic() which is called bit later so you can remove this kind
> of ad hoc logging?

Yes, I'll unify these via the memleak_panic() macro.

> > +static void delete_object(unsigned long ptr)
> > +{
> > +       unsigned long flags;
> > +       struct memleak_object *object;
> > +
> > +       write_lock_irqsave(&memleak_lock, flags);
> > +       object = lookup_object(ptr, 0);
> > +       if (!object) {
> > +               pr_warning("kmemleak: Freeing unknown object at 0x%08lx\n",
> > +                          ptr);
> > +               dump_stack();
> 
> Hmm, dump_stack() is called in quite a few places. Might make sense to
> add a memleak_report() function that does this in an uniform way.

Yes.

> > +               write_unlock_irqrestore(&memleak_lock, flags);
> > +               return;
> > +       }
> > +       prio_tree_remove(&object_tree_root, &object->tree_node);
> > +       list_del_rcu(&object->object_list);
> > +       write_unlock_irqrestore(&memleak_lock, flags);
> > +
> > +       BUG_ON(!(object->flags & OBJECT_ALLOCATED));
> > +       BUG_ON(atomic_read(&object->use_count) < 1);
> 
> These could be converted to WARN_ON() calls, I think?

See my comment above, these are genuine kmemleak bugs and it shouldn't
just warn. Hopefully they will never happen unless a get/put_object is
missing.

Thanks.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 01/15] kmemleak: Add the base support
  2008-12-12 11:36     ` Catalin Marinas
@ 2008-12-12 13:14       ` Pekka Enberg
  0 siblings, 0 replies; 59+ messages in thread
From: Pekka Enberg @ 2008-12-12 13:14 UTC (permalink / raw)
  To: Catalin Marinas
  Cc: linux-kernel, Paul E. McKenney, Ingo Molnar, Andrew Morton

On Fri, 2008-12-12 at 11:36 +0000, Catalin Marinas wrote:
> On Fri, 2008-12-12 at 00:01 +0200, Pekka Enberg wrote:
> > On Wed, Dec 10, 2008 at 8:26 PM, Catalin Marinas
> > <catalin.marinas@arm.com> wrote:
> > > +static void put_object(struct memleak_object *object)
> > > +{
> > > +       if (!atomic_dec_and_test(&object->use_count))
> > > +               return;
> > > +
> > > +       /* should only get here after delete_object was called */
> > > +       BUG_ON(object->flags & OBJECT_ALLOCATED);
> > 
> > This could be
> > 
> >     if (WARN_ON(object->flags & OBJECT_ALLOCATED))
> >             return;
> 
> I'm not sure just warning would be enough. If this happens, its a severe
> bug in kmemleak and the tool is no longer useful (it could even leak
> memory or free already freed blocks). I could change it to a
> memleak_panic call but if the object use_count isn't reliable, the
> memleak_disable call wouldn't work properly either.

Oh, we use WARN_ON() for things like this as well to maximize the
likelihood of the oops actually reaching the user. But whatever works
for you the best.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 05/15] kmemleak: Add the slub memory allocation/freeing hooks
  2008-12-11 21:30   ` Pekka Enberg
@ 2008-12-12 13:45     ` Catalin Marinas
  2008-12-18 10:51       ` Pekka Enberg
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-12 13:45 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: linux-kernel, Christoph Lameter

On Thu, 2008-12-11 at 23:30 +0200, Pekka Enberg wrote:
> Catalin Marinas wrote:
> > This patch adds the callbacks to memleak_(alloc|free) functions from the
> > slub allocator.
> > 
> > Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> > Cc: Christoph Lameter <cl@linux-foundation.org>
> > Cc: Pekka Enberg <penberg@cs.helsinki.fi>
> 
> Hmm, I'm not sure I understand why struct kmem_cache_cpu ->freelist is 
> never scanned. 

Did you get any false positives? Or were you expecting false negatives
because of freelist scanning which never occurred?

> For SMP, I suppose kmemleak doesn't scan the per-CPU 
> areas?

It should scan the per-CPU areas in the memleak_scan() function:

#ifdef CONFIG_SMP
	/* per-cpu sections scanning */
	for_each_possible_cpu(i)
		scan_block(__per_cpu_start + per_cpu_offset(i),
			   __per_cpu_end + per_cpu_offset(i), NULL);
#endif

>  But for UP, struct kmem_cache is allocated with kmalloc() and 
> that contains struct kmem_cache_cpu as well.

They should be scanned as well.

> And I suppose we never scan struct pages either. Otherwise ->freelist 
> there would be a problem as well.

It was scanning the mem_map arrays in the past but removed this part and
haven't seen any problems (on ARM).

Why would the ->freelist be a problem? I don't fully understand the slub
allocator. Aren't objects added to the freelist only after they were
freed? In __slab_alloc there seems to be a line:

c->page->freelist = NULL;

so the freelist won't count as a reference anymore. After freeing an
object, kmemleak no longer cares about references to it.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-11 21:22   ` Pekka Enberg
@ 2008-12-12 14:27     ` Catalin Marinas
  2008-12-18 10:46       ` Pekka Enberg
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-12 14:27 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: linux-kernel

On Thu, 2008-12-11 at 23:22 +0200, Pekka Enberg wrote:
> Catalin Marinas wrote:
> > @@ -2610,6 +2611,13 @@ static struct slab *alloc_slabmgmt(struct kmem_cache *cachep, void *objp,
> >  		/* Slab management obj is off-slab. */
> >  		slabp = kmem_cache_alloc_node(cachep->slabp_cache,
> >  					      local_flags & ~GFP_THISNODE, nodeid);
> > +		/*
> > +		 * Only scan the list member to avoid false negatives
> > +		 * (especially caused by the s_mem pointer)
> > +		 */
> 
> Heh, I run into this part again and as I have a long term memory of a 
> goldfish I had to look up the discussion we had. So may I suggest you 
> change the comment to:
> 
> /*
>   * If the first object in the slab is leaked (it's allocated but no
>   * one has a reference to it), we want to make sure kmemleak does not
>   * treat the ->s_mem pointer as a reference to the object. Otherwise
>   * we will not report the leak.
>   */

OK, thanks. It's more verbose but it makes it pretty clear.

> > +		memleak_scan_area(slabp, offsetof(struct slab, list),
> > +				  sizeof(struct list_head),
> > +				  local_flags & ~GFP_THISNODE);
> >  		if (!slabp)
> >  			return NULL;
> >  	} else {
> > @@ -3195,6 +3203,8 @@ static inline void *____cache_alloc(struct kmem_cache *cachep, gfp_t flags)
> >  		STATS_INC_ALLOCMISS(cachep);
> >  		objp = cache_alloc_refill(cachep, flags);
> >  	}
> > +	/* avoid false negatives */
> > +	memleak_erase(&ac->entry[ac->avail]);
> 
> For this, maybe something like this:
> 
> /*
>   * To avoid a false negative, if an object that is in one of the
>   * per-CPU caches is leaked, we need to make sure kmemleak doesn't
>   * treat the array pointers as a reference to the object.
>   */

OK.

> >  	return objp;
> >  }
> >  
> 
> Do you take care of the per-node lists as well?

I can't figure out what other location should be erased.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-11 19:47             ` Pekka Enberg
@ 2008-12-12 17:04               ` Catalin Marinas
  2008-12-12 17:17                 ` Dave Hansen
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-12 17:04 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: Dave Hansen, linux-kernel

On Thu, 2008-12-11 at 21:47 +0200, Pekka Enberg wrote:
> On Thu, 2008-12-11 at 17:38 +0000, Catalin Marinas wrote:
> >> Do you mean a comment? I can do this.
> 
> On Thu, Dec 11, 2008 at 7:45 PM, Dave Hansen <dave@linux.vnet.ibm.com> wrote:
> > Yeah, something like
> >
> > /*
> >  * kmemleak doesn't actually allocate memory when called this early
> >  * so the GFP_ATOMIC here is actually meaningless, but consistent
> >  * with the rest of this function.
> >  */
> >
> > Maybe that's too verbose. :)
> 
> I'd suggest just doing a separate kmemleak_early_alloc() hook without
> the gfp flag.

It looks to me like alloc_large_system_hash() could also be called at
some later point and it may even invoke __vmalloc() if hashdist is set.
So I would prefer not to introduce another hook and additional if's to
know which one to call. BTW, I think the callback should actually be (to
avoid duplicating the vmalloc call, with proper comment):

	if (!hashdist)
		memleak_alloc(table, size, 1, GFP_ATOMIC);

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-12 17:04               ` Catalin Marinas
@ 2008-12-12 17:17                 ` Dave Hansen
  2008-12-12 17:43                   ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Dave Hansen @ 2008-12-12 17:17 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Pekka Enberg, linux-kernel

On Fri, 2008-12-12 at 17:04 +0000, Catalin Marinas wrote:
> It looks to me like alloc_large_system_hash() could also be called at
> some later point and it may even invoke __vmalloc() if hashdist is set.
> So I would prefer not to introduce another hook and additional if's to
> know which one to call. BTW, I think the callback should actually be (to
> avoid duplicating the vmalloc call, with proper comment):
> 
>         if (!hashdist)
>                 memleak_alloc(table, size, 1, GFP_ATOMIC);

Does memleak_alloc() detect if it gets called twice on the same memory?
Also, is alloc_large_system_hash() contained in the tests that you can
compile for kmemleak?

-- Dave


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 12/15] kmemleak: Enable the building of the memory leak detector
  2008-12-10 19:20   ` Dave Hansen
@ 2008-12-12 17:27     ` Catalin Marinas
  2008-12-12 18:02       ` Dave Hansen
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-12 17:27 UTC (permalink / raw)
  To: Dave Hansen; +Cc: linux-kernel

On Wed, 2008-12-10 at 11:20 -0800, Dave Hansen wrote:
> On Wed, 2008-12-10 at 18:28 +0000, Catalin Marinas wrote:
> > +config DEBUG_MEMLEAK
> > +       bool "Kernel memory leak detector"
> > +       default n
> > +       depends on EXPERIMENTAL
> > +       select DEBUG_SLAB if SLAB
> > +       select SLUB_DEBUG if SLUB
> > +       select DEBUG_FS
> > +       select STACKTRACE
> > +       select FRAME_POINTER
> > +       select KALLSYMS
> 
> So, not all architectures have STACKTRACE or FRAME_POINTER.  I think a
> few of these should at least be done with depends.

I think it could depend on STACKTRACE_SUPPORT. Alternatively, it could
select STACKTRACE only if it is supported, though for architectures
without it, the kmemleak reports wouldn't be very useful.

Does FRAME_POINTER even matter? I think STACKTRACE should be enough to
get the backtrace. I even have some ARM patches for stack unwinding
where FRAME_POINTER is disabled (and shouldn't be enabled).

> Is this feature accessible if DEBUG_FS=n?  It seems to compile OK, but I
> wonder if it is useful.

Well, it is recommended. If you don't have this, you can't trigger a
scan manually by reading the /sys/kernel/debug/memleak file (have to
rely on the automatic thread). In my local tree (not published yet), I
also added support for run-time configuration by writing to this file.
Is there any disadvantage in always selecting DEBUG_FS?

Thanks.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-12-12 17:17                 ` Dave Hansen
@ 2008-12-12 17:43                   ` Catalin Marinas
  0 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-12 17:43 UTC (permalink / raw)
  To: Dave Hansen; +Cc: Pekka Enberg, linux-kernel

On Fri, 2008-12-12 at 09:17 -0800, Dave Hansen wrote:
> On Fri, 2008-12-12 at 17:04 +0000, Catalin Marinas wrote:
> > It looks to me like alloc_large_system_hash() could also be called at
> > some later point and it may even invoke __vmalloc() if hashdist is set.
> > So I would prefer not to introduce another hook and additional if's to
> > know which one to call. BTW, I think the callback should actually be (to
> > avoid duplicating the vmalloc call, with proper comment):
> > 
> >         if (!hashdist)
> >                 memleak_alloc(table, size, 1, GFP_ATOMIC);
> 
> Does memleak_alloc() detect if it gets called twice on the same memory?

It does, and panics (disables itself). I think if this happens it is a
kmemleak bug and some hook is missing or added twice.

> Also, is alloc_large_system_hash() contained in the tests that you can
> compile for kmemleak?

No. This memleak_alloc() callback was mainly added to avoid plenty of
false reports from various parts of the kernel (especially the IPv4
stack). It wasn't really meant to track the allocated hash blocks.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 12/15] kmemleak: Enable the building of the memory leak detector
  2008-12-12 17:27     ` Catalin Marinas
@ 2008-12-12 18:02       ` Dave Hansen
  0 siblings, 0 replies; 59+ messages in thread
From: Dave Hansen @ 2008-12-12 18:02 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel

On Fri, 2008-12-12 at 17:27 +0000, Catalin Marinas wrote:
> On Wed, 2008-12-10 at 11:20 -0800, Dave Hansen wrote:
> > On Wed, 2008-12-10 at 18:28 +0000, Catalin Marinas wrote:
> > > +config DEBUG_MEMLEAK
> > > +       bool "Kernel memory leak detector"
> > > +       default n
> > > +       depends on EXPERIMENTAL
> > > +       select DEBUG_SLAB if SLAB
> > > +       select SLUB_DEBUG if SLUB
> > > +       select DEBUG_FS
> > > +       select STACKTRACE
> > > +       select FRAME_POINTER
> > > +       select KALLSYMS
> > 
> > So, not all architectures have STACKTRACE or FRAME_POINTER.  I think a
> > few of these should at least be done with depends.
> 
> I think it could depend on STACKTRACE_SUPPORT. Alternatively, it could
> select STACKTRACE only if it is supported, though for architectures
> without it, the kmemleak reports wouldn't be very useful.

I think they'd still be pretty useful.  They would be certainly harder
to track down, but "something allocating N bytes from the slab is
leaking" is certainly better than nothing.  

> Does FRAME_POINTER even matter? I think STACKTRACE should be enough to
> get the backtrace. I even have some ARM patches for stack unwinding
> where FRAME_POINTER is disabled (and shouldn't be enabled).

It is supposed to give cleaner stack traces.  But if you don't strictly
require it, I'd leave it up to whatever the user had set before.  If
people are getting crappy stack traces from anywhere in the kernel, I
think they know where to go to fix it. ;)

> > Is this feature accessible if DEBUG_FS=n?  It seems to compile OK, but I
> > wonder if it is useful.
> 
> Well, it is recommended. If you don't have this, you can't trigger a
> scan manually by reading the /sys/kernel/debug/memleak file (have to
> rely on the automatic thread). In my local tree (not published yet), I
> also added support for run-time configuration by writing to this file.
> Is there any disadvantage in always selecting DEBUG_FS?

I was just worried that debugfs was the only mechanism for this feature
to get its data in and out of the kernel.  If it was not there, that
this feature was useless.

The problem with 'select' when it is "mixed" with 'depends':

config DEBUG_FS
        bool "Debug Filesystem"
        depends on SYSFS

When you 'select DEBUG_FS' it will *not* turn on SYSFS:

$ egrep 'DEBUG_FS|MEMLEAK|G_SYSFS' .config
# CONFIG_SYSFS is not set
CONFIG_DEBUG_FS=y
CONFIG_DEBUG_MEMLEAK=y
CONFIG_DEBUG_MEMLEAK_TEST=y

I also tried compiling your feature with a bunch of things turned off in
my .config: http://sr71.net/~dave/linux/kmemleak.config

I got a compile error:

/home/dave/work/temp/linux-2.6/mm/memleak-test.c: In function ‘memleak_test_init’:
/home/dave/work/temp/linux-2.6/mm/memleak-test.c:61: error: ‘files_cachep’ undeclared (first use in this function)
/home/dave/work/temp/linux-2.6/mm/memleak-test.c:61: error: (Each undeclared identifier is reported only once
/home/dave/work/temp/linux-2.6/mm/memleak-test.c:61: error: for each function it appears in.)

make allnoconfig is cool. :)

-- Dave


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 01/15] kmemleak: Add the base support
  2008-12-10 18:26 ` [PATCH 01/15] kmemleak: Add the base support Catalin Marinas
  2008-12-11 22:01   ` Pekka Enberg
@ 2008-12-16 19:36   ` Paul E. McKenney
  2008-12-17  9:44     ` Catalin Marinas
  1 sibling, 1 reply; 59+ messages in thread
From: Paul E. McKenney @ 2008-12-16 19:36 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Ingo Molnar, Pekka Enberg, Andrew Morton

On Wed, Dec 10, 2008 at 06:26:59PM +0000, Catalin Marinas wrote:
> This patch adds the base support for the kernel memory leak
> detector. It traces the memory allocation/freeing in a way similar to
> the Boehm's conservative garbage collector, the difference being that
> the unreferenced objects are not freed but only shown in
> /sys/kernel/debug/memleak. Enabling this feature introduces an
> overhead to memory allocations.

Looks good to me from an RCU viewpoint!

Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

> Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
> Cc: Ingo Molnar <mingo@elte.hu>
> Cc: Pekka Enberg <penberg@cs.helsinki.fi>
> Cc: Andrew Morton <akpm@linux-foundation.org>
> Cc: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> ---
>  include/linux/memleak.h |   93 +++
>  init/main.c             |    4 
>  mm/memleak.c            | 1263 +++++++++++++++++++++++++++++++++++++++++++++++
>  3 files changed, 1359 insertions(+), 1 deletions(-)
>  create mode 100644 include/linux/memleak.h
>  create mode 100644 mm/memleak.c
> 
> diff --git a/include/linux/memleak.h b/include/linux/memleak.h
> new file mode 100644
> index 0000000..340b9fc
> --- /dev/null
> +++ b/include/linux/memleak.h
> @@ -0,0 +1,93 @@
> +/*
> + * include/linux/memleak.h
> + *
> + * Copyright (C) 2008 ARM Limited
> + * Written by Catalin Marinas <catalin.marinas@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> + */
> +
> +#ifndef __MEMLEAK_H
> +#define __MEMLEAK_H
> +
> +#ifdef CONFIG_DEBUG_MEMLEAK
> +
> +extern void memleak_init(void);
> +extern void memleak_alloc(const void *ptr, size_t size, int min_count,
> +			  gfp_t gfp);
> +extern void memleak_free(const void *ptr);
> +extern void memleak_padding(const void *ptr, unsigned long offset, size_t size);
> +extern void memleak_not_leak(const void *ptr);
> +extern void memleak_ignore(const void *ptr);
> +extern void memleak_scan_area(const void *ptr, unsigned long offset,
> +			      size_t length, gfp_t gfp);
> +
> +static inline void memleak_alloc_recursive(const void *ptr, size_t size,
> +					   int min_count, unsigned long flags,
> +					   gfp_t gfp)
> +{
> +	if (!(flags & SLAB_NOLEAKTRACE))
> +		memleak_alloc(ptr, size, min_count, gfp);
> +}
> +
> +static inline void memleak_free_recursive(const void *ptr, unsigned long flags)
> +{
> +	if (!(flags & SLAB_NOLEAKTRACE))
> +		memleak_free(ptr);
> +}
> +
> +static inline void memleak_erase(void **ptr)
> +{
> +	*ptr = NULL;
> +}
> +
> +#else
> +
> +#define DECLARE_MEMLEAK_OFFSET(name, type, member)
> +
> +static inline void memleak_init(void)
> +{
> +}
> +static inline void memleak_alloc(const void *ptr, size_t size, int min_count,
> +				 gfp_t gfp)
> +{
> +}
> +static inline void memleak_alloc_recursive(const void *ptr, size_t size,
> +					   int min_count, unsigned long flags,
> +					   gfp_t gfp)
> +{
> +}
> +static inline void memleak_free(const void *ptr)
> +{
> +}
> +static inline void memleak_free_recursive(const void *ptr, unsigned long flags)
> +{
> +}
> +static inline void memleak_not_leak(const void *ptr)
> +{
> +}
> +static inline void memleak_ignore(const void *ptr)
> +{
> +}
> +static inline void memleak_scan_area(const void *ptr, unsigned long offset,
> +				     size_t length, gfp_t gfp)
> +{
> +}
> +static inline void memleak_erase(void **ptr)
> +{
> +}
> +
> +#endif	/* CONFIG_DEBUG_MEMLEAK */
> +
> +#endif	/* __MEMLEAK_H */
> diff --git a/init/main.c b/init/main.c
> index 7e117a2..81cbbb7 100644
> --- a/init/main.c
> +++ b/init/main.c
> @@ -56,6 +56,7 @@
>  #include <linux/debug_locks.h>
>  #include <linux/debugobjects.h>
>  #include <linux/lockdep.h>
> +#include <linux/memleak.h>
>  #include <linux/pid_namespace.h>
>  #include <linux/device.h>
>  #include <linux/kthread.h>
> @@ -653,6 +654,8 @@ asmlinkage void __init start_kernel(void)
>  	enable_debug_pagealloc();
>  	cpu_hotplug_init();
>  	kmem_cache_init();
> +	prio_tree_init();
> +	memleak_init();
>  	debug_objects_mem_init();
>  	idr_init_cache();
>  	setup_per_cpu_pageset();
> @@ -662,7 +665,6 @@ asmlinkage void __init start_kernel(void)
>  	calibrate_delay();
>  	pidmap_init();
>  	pgtable_cache_init();
> -	prio_tree_init();
>  	anon_vma_init();
>  #ifdef CONFIG_X86
>  	if (efi_enabled)
> diff --git a/mm/memleak.c b/mm/memleak.c
> new file mode 100644
> index 0000000..bd84ee0
> --- /dev/null
> +++ b/mm/memleak.c
> @@ -0,0 +1,1263 @@
> +/*
> + * mm/memleak.c
> + *
> + * Copyright (C) 2008 ARM Limited
> + * Written by Catalin Marinas <catalin.marinas@arm.com>
> + *
> + * This program is free software; you can redistribute it and/or modify
> + * it under the terms of the GNU General Public License version 2 as
> + * published by the Free Software Foundation.
> + *
> + * This program is distributed in the hope that it will be useful,
> + * but WITHOUT ANY WARRANTY; without even the implied warranty of
> + * MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
> + * GNU General Public License for more details.
> + *
> + * You should have received a copy of the GNU General Public License
> + * along with this program; if not, write to the Free Software
> + * Foundation, Inc., 59 Temple Place, Suite 330, Boston, MA 02111-1307 USA
> + *
> + *
> + * For more information on the algorithm and kmemleak usage, please see
> + * Documentation/kmemleak.txt.
> + *
> + * Notes on locking
> + * ----------------
> + *
> + * The following locks are used by kmemleak:
> + *
> + * - memleak_lock (rw_lock): protects the object_list modifications and
> + *   accesses to the object_tree_root. The object_list is the main
> + *   list holding the metadata (struct memleak_object) for the allocated
> + *   memory blocks. The object_tree_root is a priority search tree used to
> + *   look-up metadata based on a pointer to the corresponding memory block.
> + *   The memleak_object structures are added to the object_list and
> + *   object_tree_root in the create_object() function called from the
> + *   memleak_alloc() callback and removed in delete_object() called from the
> + *   memleak_free() callback
> + * - memleak_object.lock (spinlock): protects a memleak_object. Accesses to
> + *   the metadata (e.g. count) are protected by this lock. Note that some
> + *   members of this structure may be protected by other means (atomic or
> + *   memleak_lock). This lock is also held when scanning the corresponding
> + *   memory block to avoid the kernel freeing it via the memleak_free()
> + *   callback. This is less heavyweight than holding a global lock like
> + *   memleak_lock during scanning
> + *
> + * The memleak_object structures have a use_count incremented or decremented
> + * using the get_object()/put_object() functions. When the use_count becomes
> + * 0, this count can no longer be incremented and put_object() schedules the
> + * memleak_object freeing via an RCU callback. All calls to the get_object()
> + * function must be protected by rcu_read_lock() to avoid accessing a freed
> + * structure.
> + *
> + * The only mutex used is scan_mutex. This ensures that only one thread may
> + * scan the memory for unreferenced objects at a time. The gray_list contains
> + * the objects which are already referenced or marked as false positives and
> + * need to be scanned. This list is only modified during a scanning episode
> + * when the scan_mutex is held. At the end of a scan, the gray_list is always
> + * empty. Note that the memleak_object.use_count is incremented when an object
> + * is added to the gray_list and therefore cannot be freed.
> + */
> +
> +#include <linux/init.h>
> +#include <linux/kernel.h>
> +#include <linux/list.h>
> +#include <linux/sched.h>
> +#include <linux/jiffies.h>
> +#include <linux/delay.h>
> +#include <linux/module.h>
> +#include <linux/kthread.h>
> +#include <linux/prio_tree.h>
> +#include <linux/gfp.h>
> +#include <linux/kallsyms.h>
> +#include <linux/debugfs.h>
> +#include <linux/seq_file.h>
> +#include <linux/cpumask.h>
> +#include <linux/spinlock.h>
> +#include <linux/mutex.h>
> +#include <linux/rcupdate.h>
> +#include <linux/stacktrace.h>
> +#include <linux/cache.h>
> +#include <linux/percpu.h>
> +#include <linux/hardirq.h>
> +#include <linux/mmzone.h>
> +#include <linux/slab.h>
> +#include <linux/thread_info.h>
> +
> +#include <asm/sections.h>
> +#include <asm/processor.h>
> +#include <asm/atomic.h>
> +
> +#include <linux/memleak.h>
> +
> +/*
> + * Kmemleak configuration and common defines.
> + */
> +#define MAX_TRACE		16	/* stack trace length */
> +#define REPORTS_NR		100	/* maximum number of reported leaks */
> +#define MSECS_MIN_AGE		5000	/* minimum object age for reporting */
> +#define MSECS_SCAN_YIELD	10	/* CPU yielding period */
> +#define SECS_FIRST_SCAN		60	/* delay before the first scan */
> +#define SECS_SCAN_PERIOD	600	/* auto scanning period */
> +#undef SCAN_TASK_STACKS			/* scan the task kernel stacks */
> +#undef REPORT_ORPHAN_FREEING		/* notify when freeing orphan objects */
> +
> +#define BYTES_PER_POINTER	sizeof(void *)
> +
> +/* scanning area inside a memory block */
> +struct memleak_scan_area {
> +	struct hlist_node node;
> +	unsigned long offset;
> +	size_t length;
> +};
> +
> +/*
> + * Structure holding the metadata for each allocated memory block.
> + * Modifications to such objects should be made while holding the
> + * object->lock. Insertions or deletions from object_list, gray_list or
> + * tree_node are already protected by the corresponding locks or mutex (see
> + * the notes on locking above). These objects are reference-counted
> + * (use_count) and freed using the RCU mechanism.
> + */
> +struct memleak_object {
> +	spinlock_t lock;
> +	unsigned long flags;		/* object status flags */
> +	struct list_head object_list;
> +	struct list_head gray_list;
> +	struct prio_tree_node tree_node;
> +	struct rcu_head rcu;		/* object_list lockless traversal */
> +	/* object usage count; object freed when use_count == 0 */
> +	atomic_t use_count;
> +	unsigned long pointer;
> +	size_t size;
> +	/* minimum number of a pointers found before it is considered leak */
> +	int min_count;
> +	/* the total number of pointers found pointing to this object */
> +	int count;
> +	/* memory ranges to be scanned inside an object (empty for all) */
> +	struct hlist_head area_list;
> +	unsigned long trace[MAX_TRACE];
> +	unsigned int trace_len;
> +	unsigned long jiffies;		/* creation timestamp */
> +	pid_t pid;			/* pid of the current task */
> +	char comm[TASK_COMM_LEN];	/* executable name */
> +};
> +
> +/* flag representing the memory block allocation status */
> +#define OBJECT_ALLOCATED	(1 << 0)
> +/* flag set after the first reporting of an unreference object */
> +#define OBJECT_REPORTED		(1 << 1)
> +
> +/* the list of all allocated objects */
> +static LIST_HEAD(object_list);
> +/* the list of gray-colored objects (see color_gray comment below) */
> +static LIST_HEAD(gray_list);
> +/* prio search tree for object boundaries */
> +static struct prio_tree_root object_tree_root;
> +/* rw_lock protecting the access to object_list and prio_tree_root */
> +static DEFINE_RWLOCK(memleak_lock);
> +
> +/* allocation caches for kmemleak internal data */
> +static struct kmem_cache *object_cache;
> +static struct kmem_cache *scan_area_cache;
> +
> +/* set if tracing memory operations is enabled */
> +static atomic_t memleak_enabled = ATOMIC_INIT(0);
> +/* set in the late_initcall if there were no errors */
> +static atomic_t memleak_initialized = ATOMIC_INIT(0);
> +/* enables or disables early logging of the memory operations */
> +static atomic_t memleak_early_log = ATOMIC_INIT(1);
> +/* set if a fata kmemleak error has occurred */
> +static atomic_t memleak_error = ATOMIC_INIT(0);
> +
> +/* minimum and maximum address that may be valid pointers */
> +static unsigned long min_addr = ULONG_MAX;
> +static unsigned long max_addr;
> +
> +/* used for yielding the CPU to other tasks during scanning */
> +static unsigned long next_scan_yield;
> +static struct task_struct *scan_thread;
> +static unsigned long jiffies_scan_yield;
> +static unsigned long jiffies_min_age;
> +static DEFINE_MUTEX(scan_mutex);
> +
> +/* number of leaks reported (for limitation purposes) */
> +static int reported_leaks;
> +
> +/*
> + * Early object allocation/freeing logging. Kmemleak is initialized after the
> + * kernel allocator. However, both the kernel allocator and kmemleak may
> + * allocate memory blocks which need to be tracked. Kmemleak defines an
> + * arbitrary buffer to hold the allocation/freeing information before it is
> + * fully initialized.
> + */
> +
> +/* kmemleak operation type for early logging */
> +enum {
> +	MEMLEAK_ALLOC,
> +	MEMLEAK_FREE,
> +	MEMLEAK_NOT_LEAK,
> +	MEMLEAK_IGNORE,
> +	MEMLEAK_SCAN_AREA,
> +};
> +
> +/*
> + * Structure holding the information passed to kmemleak callbacks during the
> + * early logging.
> + */
> +struct early_log {
> +	int op_type;			/* kmemleak operation type */
> +	const void *ptr;		/* allocated/freed memory block */
> +	size_t size;			/* memory block size */
> +	int min_count;			/* minimum reference count */
> +	unsigned long offset;		/* scan area offset */
> +	size_t length;			/* scan area length */
> +};
> +
> +/* early logging buffer and current position */
> +static struct early_log __initdata early_log[200];
> +static int __initdata crt_early_log;
> +
> +static void memleak_disable(void);
> +
> +/*
> + * Macro invoked when a serious kmemleak condition occured and cannot be
> + * recovered from. Kmemleak will be disabled and further allocation/freeing
> + * tracing no longer available.
> + */
> +#define memleak_panic(x...) {	\
> +	pr_warning(x);		\
> +	memleak_disable();	\
> +}
> +
> +/*
> + * Object colors, encoded with count and min_count:
> + * - white - orphan object, not enough references to it (count < min_count)
> + * - gray  - not orphan, marked as false positive (min_count == 0) or
> + *		sufficient references to it (count >= min_count)
> + * - black - ignore, it doesn't contain references (e.g. text section)
> + *		(min_count == -1). No function defined for this color.
> + * Newly created objects don't have any color assigned (object->count == -1)
> + * before the next memory scan when they become white.
> + */
> +static int color_white(const struct memleak_object *object)
> +{
> +	return object->count != -1 && object->count < object->min_count;
> +}
> +
> +static int color_gray(const struct memleak_object *object)
> +{
> +	return object->min_count != -1 && object->count >= object->min_count;
> +}
> +
> +/*
> + * Objects are considered unreferenced only if their color is white, they have
> + * not be deleted and have a minimum age to avoid false positives caused by
> + * pointers temporarily stored in CPU registers.
> + */
> +static int unreferenced_object(struct memleak_object *object)
> +{
> +	if (color_white(object) &&
> +	    (object->flags & OBJECT_ALLOCATED) &&
> +	    time_is_before_eq_jiffies(object->jiffies + jiffies_min_age))
> +		return 1;
> +	else
> +		return 0;
> +}
> +
> +/*
> + * Printing of the unreferenced objects information, either to the seq file
> + * or to the kernel log. The print_unreferenced() function must be called with
> + * the object->lock held.
> + */
> +#define print_helper(seq, x...)			\
> +do {						\
> +	if (seq)				\
> +		seq_printf(seq, x);		\
> +	else					\
> +		pr_info(x);			\
> +} while (0)
> +
> +static void print_unreferenced(struct seq_file *seq,
> +			       struct memleak_object *object)
> +{
> +	char namebuf[KSYM_NAME_LEN + 1] = "";
> +	char *modname;
> +	unsigned long symsize;
> +	int i;
> +
> +	print_helper(seq, "unreferenced object 0x%08lx (size %zu):\n",
> +		     object->pointer, object->size);
> +	print_helper(seq, "  comm \"%s\", pid %d, jiffies %lu\n",
> +		     object->comm, object->pid, object->jiffies);
> +	print_helper(seq, "  backtrace:\n");
> +
> +	for (i = 0; i < object->trace_len; i++) {
> +		unsigned long trace = object->trace[i];
> +		unsigned long offset = 0;
> +
> +		kallsyms_lookup(trace, &symsize, &offset, &modname, namebuf);
> +		print_helper(seq, "    [<%08lx>] %s\n", trace, namebuf);
> +	}
> +}
> +
> +/*
> + * Print the memleak_object information. This function is used mainly for
> + * debugging special cases when kmemleak operations. It must be called with
> + * the object->lock held.
> + */
> +static void dump_object_info(struct memleak_object *object)
> +{
> +	struct stack_trace trace;
> +
> +	trace.nr_entries = object->trace_len;
> +	trace.entries = object->trace;
> +
> +	pr_notice("kmemleak: Object 0x%08lx (size %zu):\n",
> +		  object->tree_node.start, object->size);
> +	pr_notice("  comm \"%s\", pid %d, jiffies %lu\n",
> +		  object->comm, object->pid, object->jiffies);
> +	pr_notice("  min_count = %d\n", object->min_count);
> +	pr_notice("  count = %d\n", object->count);
> +	pr_notice("  backtrace:\n");
> +	print_stack_trace(&trace, 4);
> +}
> +
> +/*
> + * Look-up a memory block metadata (memleak_object) in the priority search
> + * tree based on a pointer value. If alias is 0, only values pointing to the
> + * beginning of the memory block are allowed. The memleak_lock must be held
> + * when calling this function.
> + */
> +static struct memleak_object *lookup_object(unsigned long ptr, int alias)
> +{
> +	struct prio_tree_node *node;
> +	struct prio_tree_iter iter;
> +	struct memleak_object *object;
> +
> +	prio_tree_iter_init(&iter, &object_tree_root, ptr, ptr);
> +	node = prio_tree_next(&iter);
> +	if (node) {
> +		object = prio_tree_entry(node, struct memleak_object,
> +					 tree_node);
> +		if (!alias && object->pointer != ptr) {
> +			pr_warning("kmemleak: Found object by alias");
> +			object = NULL;
> +		}
> +	} else
> +		object = NULL;
> +
> +	return object;
> +}
> +
> +/*
> + * Increment the object use_count. Return 1 if successful or 0 otherwise. Note
> + * that once an object's use_count reached 0, the RCU freeing was already
> + * registered and the object should no longer be used. This function must be
> + * called under the protection of rcu_read_lock().
> + */
> +static int get_object(struct memleak_object *object)
> +{
> +	return atomic_inc_not_zero(&object->use_count);
> +}
> +
> +/*
> + * RCU callback to free a memleak_object.
> + */
> +static void free_object_rcu(struct rcu_head *rcu)
> +{
> +	struct hlist_node *elem, *tmp;
> +	struct memleak_scan_area *area;
> +	struct memleak_object *object =
> +		container_of(rcu, struct memleak_object, rcu);
> +
> +	/*
> +	 * Once use_count is 0 (guaranteed by put_object), there is no other
> +	 * code accessing this object, hence no need for locking.
> +	 */
> +	hlist_for_each_entry_safe(area, elem, tmp, &object->area_list, node) {
> +		hlist_del(elem);
> +		kmem_cache_free(scan_area_cache, area);
> +	}
> +	kmem_cache_free(object_cache, object);
> +}
> +
> +/*
> + * Decrement the object use_count. Once the count is 0, free the object using
> + * an RCU callback. Since put_object() may be called via the memleak_free() ->
> + * delete_object() path, the delayed RCU freeing ensures that there is no
> + * recursive call to the kernel allocator. Lock-less RCU object_list traversal
> + * is also possible.
> + */
> +static void put_object(struct memleak_object *object)
> +{
> +	if (!atomic_dec_and_test(&object->use_count))
> +		return;
> +
> +	/* should only get here after delete_object was called */
> +	BUG_ON(object->flags & OBJECT_ALLOCATED);
> +
> +	call_rcu(&object->rcu, free_object_rcu);
> +}
> +
> +/*
> + * Look up an object in the prio search tree and increase its use_count.
> + */
> +static struct memleak_object *find_and_get_object(unsigned long ptr, int alias)
> +{
> +	unsigned long flags;
> +	struct memleak_object *object = NULL;
> +
> +	rcu_read_lock();
> +	read_lock_irqsave(&memleak_lock, flags);
> +	if (ptr >= min_addr && ptr < max_addr)
> +		object = lookup_object(ptr, alias);
> +	read_unlock_irqrestore(&memleak_lock, flags);
> +
> +	/* check whether the object is still available */
> +	if (object && !get_object(object))
> +		object = NULL;
> +	rcu_read_unlock();
> +
> +	return object;
> +}
> +
> +/*
> + * Create the metadata (struct memleak_object) corresponding to an allocated
> + * memory block and add it to the object_list and object_tree_root.
> + */
> +static void create_object(unsigned long ptr, size_t size, int min_count,
> +			  gfp_t gfp)
> +{
> +	unsigned long flags;
> +	struct memleak_object *object;
> +	struct prio_tree_node *node;
> +	struct stack_trace trace;
> +
> +	object = kmem_cache_alloc(object_cache, gfp);
> +	if (!object)
> +		memleak_panic("kmemleak: Cannot allocate a memleak_object "
> +			      "structure\n");
> +
> +	INIT_LIST_HEAD(&object->object_list);
> +	INIT_LIST_HEAD(&object->gray_list);
> +	INIT_HLIST_HEAD(&object->area_list);
> +	spin_lock_init(&object->lock);
> +	atomic_set(&object->use_count, 1);
> +	object->flags = OBJECT_ALLOCATED;
> +	object->pointer = ptr;
> +	object->size = size;
> +	object->min_count = min_count;
> +	object->count = -1;			/* no color initially */
> +	object->jiffies = jiffies;
> +
> +	/* task information */
> +	if (in_irq()) {
> +		object->pid = 0;
> +		strncpy(object->comm, "hardirq", TASK_COMM_LEN);
> +	} else if (in_softirq()) {
> +		object->pid = 0;
> +		strncpy(object->comm, "softirq", TASK_COMM_LEN);
> +	} else {
> +		object->pid = current->pid;
> +		get_task_comm(object->comm, current);
> +	}
> +
> +	/* kernel backtrace */
> +	trace.max_entries = MAX_TRACE;
> +	trace.nr_entries = 0;
> +	trace.entries = object->trace;
> +	trace.skip = 1;
> +	save_stack_trace(&trace);
> +	object->trace_len = trace.nr_entries;
> +
> +	INIT_PRIO_TREE_NODE(&object->tree_node);
> +	object->tree_node.start = ptr;
> +	object->tree_node.last = ptr + size - 1;
> +
> +	write_lock_irqsave(&memleak_lock, flags);
> +	min_addr = min(min_addr, ptr);
> +	max_addr = max(max_addr, ptr + size);
> +	node = prio_tree_insert(&object_tree_root, &object->tree_node);
> +	/*
> +	 * The code calling the kernel does not yet have the pointer to the
> +	 * memory block to be able to free it.  However, we still hold the
> +	 * memleak_lock here in case parts of the kernel started freeing
> +	 * random memory blocks.
> +	 */
> +	if (node != &object->tree_node) {
> +		unsigned long flags;
> +
> +		pr_warning("kmemleak: Existing pointer\n");
> +		dump_stack();
> +
> +		object = lookup_object(ptr, 1);
> +		spin_lock_irqsave(&object->lock, flags);
> +		dump_object_info(object);
> +		spin_unlock_irqrestore(&object->lock, flags);
> +
> +		memleak_panic("kmemleak: Cannot insert 0x%lx into the object "
> +			      "search tree\n", ptr);
> +	}
> +	list_add_tail_rcu(&object->object_list, &object_list);
> +	write_unlock_irqrestore(&memleak_lock, flags);
> +}
> +
> +/*
> + * Remove the metadata (struct memleak_object) for a memory block from the
> + * object_list and object_tree_root and decrement its use_count.
> + */
> +static void delete_object(unsigned long ptr)
> +{
> +	unsigned long flags;
> +	struct memleak_object *object;
> +
> +	write_lock_irqsave(&memleak_lock, flags);
> +	object = lookup_object(ptr, 0);
> +	if (!object) {
> +		pr_warning("kmemleak: Freeing unknown object at 0x%08lx\n",
> +			   ptr);
> +		dump_stack();
> +		write_unlock_irqrestore(&memleak_lock, flags);
> +		return;
> +	}
> +	prio_tree_remove(&object_tree_root, &object->tree_node);
> +	list_del_rcu(&object->object_list);
> +	write_unlock_irqrestore(&memleak_lock, flags);
> +
> +	BUG_ON(!(object->flags & OBJECT_ALLOCATED));
> +	BUG_ON(atomic_read(&object->use_count) < 1);
> +
> +	/*
> +	 * Locking here also ensures that the corresponding memory block
> +	 * cannot be freed when it is being scanned.
> +	 */
> +	spin_lock_irqsave(&object->lock, flags);
> +	object->flags &= ~OBJECT_ALLOCATED;
> +#ifdef REPORT_ORPHAN_FREEING
> +	if (color_white(object)) {
> +		pr_warning("kmemleak: Freeing orphan object 0x%08lx\n", ptr);
> +		dump_stack();
> +		dump_object_info(object);
> +	}
> +#endif
> +	spin_unlock_irqrestore(&object->lock, flags);
> +	put_object(object);
> +}
> +
> +/*
> + * Make a object permanently as gray-colored so that it can no longer be
> + * reported as a leak. This is used in general to mark a false positive.
> + */
> +static void make_gray_object(unsigned long ptr)
> +{
> +	unsigned long flags;
> +	struct memleak_object *object;
> +
> +	object = find_and_get_object(ptr, 0);
> +	if (!object) {
> +		dump_stack();
> +		memleak_panic("kmemleak: Graying unknown object at 0x%08lx\n",
> +			      ptr);
> +	}
> +
> +	spin_lock_irqsave(&object->lock, flags);
> +	object->min_count = 0;
> +	spin_unlock_irqrestore(&object->lock, flags);
> +	put_object(object);
> +}
> +
> +/*
> + * Mark the object as black-colored so that it is ignored from scans and
> + * reporting.
> + */
> +static void make_black_object(unsigned long ptr)
> +{
> +	unsigned long flags;
> +	struct memleak_object *object;
> +
> +	object = find_and_get_object(ptr, 0);
> +	if (!object) {
> +		dump_stack();
> +		memleak_panic("kmemleak: Blacking unknown object at 0x%08lx\n",
> +			      ptr);
> +	}
> +
> +	spin_lock_irqsave(&object->lock, flags);
> +	object->min_count = -1;
> +	spin_unlock_irqrestore(&object->lock, flags);
> +	put_object(object);
> +}
> +
> +/*
> + * Add a scanning area to the object. If at least one such area is added,
> + * kmemleak will only scan these ranges rather than the whole memory block.
> + */
> +static void add_scan_area(unsigned long ptr, unsigned long offset,
> +			  size_t length, gfp_t gfp)
> +{
> +	unsigned long flags;
> +	struct memleak_object *object;
> +	struct memleak_scan_area *area;
> +
> +	object = find_and_get_object(ptr, 0);
> +	if (!object) {
> +		dump_stack();
> +		memleak_panic("kmemleak: Adding scan area to unknown "
> +			      "object at 0x%08lx\n", ptr);
> +	}
> +
> +	area = kmem_cache_alloc(scan_area_cache, gfp);
> +	if (!area)
> +		memleak_panic("kmemleak: Cannot allocate a scan area\n");
> +
> +	spin_lock_irqsave(&object->lock, flags);
> +	if (offset + length > object->size) {
> +		dump_stack();
> +		dump_object_info(object);
> +		memleak_panic("kmemleak: Scan area larger than object "
> +			      "0x%08lx\n", ptr);
> +	}
> +
> +	INIT_HLIST_NODE(&area->node);
> +	area->offset = offset;
> +	area->length = length;
> +
> +	hlist_add_head(&area->node, &object->area_list);
> +	spin_unlock_irqrestore(&object->lock, flags);
> +	put_object(object);
> +}
> +
> +/*
> + * Log an early memleak_* call to the early_log buffer. These calls will be
> + * processed later once kmemleak is fully initialized.
> + */
> +static void __init log_early(int op_type, const void *ptr, size_t size,
> +			     int min_count,
> +			     unsigned long offset, size_t length)
> +{
> +	unsigned long flags;
> +	struct early_log *log;
> +
> +	if (crt_early_log >= ARRAY_SIZE(early_log))
> +		memleak_panic("kmemleak: Early log buffer exceeded\n");
> +
> +	/*
> +	 * There is no need for locking since the kernel is still in UP mode
> +	 * at this stage. Disabling the IRQs is enough.
> +	 */
> +	local_irq_save(flags);
> +	log = &early_log[crt_early_log];
> +	log->op_type = op_type;
> +	log->ptr = ptr;
> +	log->size = size;
> +	log->min_count = min_count;
> +	log->offset = offset;
> +	log->length = length;
> +	crt_early_log++;
> +	local_irq_restore(flags);
> +}
> +
> +/*
> + * Memory allocation function callback. This function is called from the
> + * kernel allocators when a new block is allocated (kmem_cache_alloc, kmalloc,
> + * vmalloc etc.).
> + */
> +void memleak_alloc(const void *ptr, size_t size, int min_count, gfp_t gfp)
> +{
> +	pr_debug("%s(0x%p, %zu, %d)\n", __func__, ptr, size, min_count);
> +
> +	if (atomic_read(&memleak_enabled) && ptr)
> +		create_object((unsigned long)ptr, size, min_count, gfp);
> +	else if (atomic_read(&memleak_early_log))
> +		log_early(MEMLEAK_ALLOC, ptr, size, min_count, 0, 0);
> +}
> +EXPORT_SYMBOL_GPL(memleak_alloc);
> +
> +/*
> + * Memory freeing function callback. This function is called from the kernel
> + * allocators when a block is freed (kmem_cache_free, kfree, vfree etc.).
> + */
> +void memleak_free(const void *ptr)
> +{
> +	pr_debug("%s(0x%p)\n", __func__, ptr);
> +
> +	if (atomic_read(&memleak_enabled) && ptr)
> +		delete_object((unsigned long)ptr);
> +	else if (atomic_read(&memleak_early_log))
> +		log_early(MEMLEAK_FREE, ptr, 0, 0, 0, 0);
> +}
> +EXPORT_SYMBOL_GPL(memleak_free);
> +
> +/*
> + * Mark an already allocated memory block as a false positive. This will cause
> + * the block to no longer be reported as leak and always be scanned.
> + */
> +void memleak_not_leak(const void *ptr)
> +{
> +	pr_debug("%s(0x%p)\n", __func__, ptr);
> +
> +	if (atomic_read(&memleak_enabled) && ptr)
> +		make_gray_object((unsigned long)ptr);
> +	else if (atomic_read(&memleak_early_log))
> +		log_early(MEMLEAK_NOT_LEAK, ptr, 0, 0, 0, 0);
> +}
> +EXPORT_SYMBOL(memleak_not_leak);
> +
> +/*
> + * Ignore a memory block. This is usually done when it is known that the
> + * corresponding block is not a leak and does not contain any references to
> + * other allocated memory blocks.
> + */
> +void memleak_ignore(const void *ptr)
> +{
> +	pr_debug("%s(0x%p)\n", __func__, ptr);
> +
> +	if (atomic_read(&memleak_enabled) && ptr)
> +		make_black_object((unsigned long)ptr);
> +	else if (atomic_read(&memleak_early_log))
> +		log_early(MEMLEAK_IGNORE, ptr, 0, 0, 0, 0);
> +}
> +EXPORT_SYMBOL(memleak_ignore);
> +
> +/*
> + * Limit the range to be scanned in an allocated memory block.
> + */
> +void memleak_scan_area(const void *ptr, unsigned long offset, size_t length,
> +		       gfp_t gfp)
> +{
> +	pr_debug("%s(0x%p)\n", __func__, ptr);
> +
> +	if (atomic_read(&memleak_enabled) && ptr)
> +		add_scan_area((unsigned long)ptr, offset, length, gfp);
> +	else if (atomic_read(&memleak_early_log))
> +		log_early(MEMLEAK_SCAN_AREA, ptr, 0, 0, offset, length);
> +}
> +EXPORT_SYMBOL(memleak_scan_area);
> +
> +/*
> + * Yield the CPU so that other tasks get a chance to run.  The yielding is
> + * rate-limited to avoid excessive number of calls to the schedule() function
> + * during memory scanning.
> + */
> +static void scan_yield(void)
> +{
> +	might_sleep();
> +
> +	if (time_is_before_eq_jiffies(next_scan_yield)) {
> +		schedule();
> +		next_scan_yield = jiffies + jiffies_scan_yield;
> +	}
> +}
> +
> +/*
> + * Memory scanning is a long process and it needs to be interruptable. This
> + * function checks whether such interrupt condition occured.
> + */
> +static int scan_should_stop(void)
> +{
> +	if (!atomic_read(&memleak_enabled))
> +		return 1;
> +	/*
> +	 * This function may be called from either process or kthread context,
> +	 * hence the need to check for both stop conditions.
> +	 */
> +	if ((current->mm && signal_pending(current)) ||
> +	    (!current->mm && kthread_should_stop()))
> +		return 1;
> +	return 0;
> +}
> +
> +/*
> + * Scan a memory block (exclusive range) for valid pointers and add those
> + * found to the gray list.
> + */
> +static void scan_block(void *_start, void *_end, struct memleak_object *scanned)
> +{
> +	unsigned long *ptr;
> +	unsigned long *start = PTR_ALIGN(_start, BYTES_PER_POINTER);
> +	unsigned long *end = _end - (BYTES_PER_POINTER - 1);
> +
> +	for (ptr = start; ptr < end; ptr++) {
> +		unsigned long flags;
> +		unsigned long pointer = *ptr;
> +		struct memleak_object *object;
> +
> +		if (scan_should_stop())
> +			break;
> +
> +		/*
> +		 * When scanning a memory block with a corresponding
> +		 * memleak_object, the CPU yielding is handled in the calling
> +		 * code since it holds the object->lock to avoid the block
> +		 * freeing.
> +		 */
> +		if (!scanned)
> +			scan_yield();
> +
> +		object = find_and_get_object(pointer, 1);
> +		if (!object)
> +			continue;
> +		if (object == scanned) {
> +			/* self referenced, ignore */
> +			put_object(object);
> +			continue;
> +		}
> +
> +		/*
> +		 * Avoid the lockdep recursive warning on object->lock being
> +		 * previously acquired in scan_object(). These locks are
> +		 * enclosed by scan_mutex.
> +		 */
> +		spin_lock_irqsave_nested(&object->lock, flags,
> +					 SINGLE_DEPTH_NESTING);
> +		if (!color_white(object)) {
> +			/* non-orphan, ignored or new */
> +			spin_unlock_irqrestore(&object->lock, flags);
> +			put_object(object);
> +			continue;
> +		}
> +
> +		/*
> +		 * Increase the object's reference count (number of pointers
> +		 * to the memory block). If this count reaches the required
> +		 * minimum, the object's color will become gray and it will be
> +		 * added to the gray_list.
> +		 */
> +		object->count++;
> +		if (color_gray(object))
> +			list_add_tail(&object->gray_list, &gray_list);
> +		else
> +			put_object(object);
> +		spin_unlock_irqrestore(&object->lock, flags);
> +	}
> +}
> +
> +/*
> + * Scan a memory block corresponding to a memleak_object. A condition is
> + * that object->use_count >= 1.
> + */
> +static void scan_object(struct memleak_object *object)
> +{
> +	struct memleak_scan_area *area;
> +	struct hlist_node *elem;
> +	unsigned long flags;
> +
> +	/*
> +	 * Once the object->lock is aquired, the corresponding memory block
> +	 * cannot be freed (the same lock is aquired in delete_object).
> +	 */
> +	spin_lock_irqsave(&object->lock, flags);
> +	if (!(object->flags & OBJECT_ALLOCATED))
> +		/* already freed object */
> +		goto out;
> +	if (hlist_empty(&object->area_list))
> +		scan_block((void *)object->pointer,
> +			   (void *)(object->pointer + object->size), object);
> +	else
> +		hlist_for_each_entry(area, elem, &object->area_list, node)
> +			scan_block((void *)(object->pointer + area->offset),
> +				   (void *)(object->pointer + area->offset
> +					    + area->length), object);
> + out:
> +	spin_unlock_irqrestore(&object->lock, flags);
> +}
> +
> +/*
> + * Scan data sections and all the referenced memory blocks allocated via the
> + * kernel's standard allocators. This function must be called with the
> + * scan_mutex held.
> + */
> +static void memleak_scan(void)
> +{
> +	unsigned long flags;
> +	struct memleak_object *object, *tmp;
> +#ifdef CONFIG_SMP
> +	int i;
> +#endif
> +#ifdef SCAN_TASK_STACKS
> +	struct task_struct *task;
> +#endif
> +
> +	/* prepare the memleak_object's */
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(object, &object_list, object_list) {
> +		spin_lock_irqsave(&object->lock, flags);
> +#ifdef DEBUG
> +		/*
> +		 * With a few exceptions there should be a maximum of
> +		 * 1 reference to any object at this point.
> +		 */
> +		if (atomic_read(&object->use_count) > 1) {
> +			pr_debug("kmemleak: object->use_count = %d\n",
> +				 atomic_read(&object->use_count));
> +			dump_object_info(object);
> +		}
> +#endif
> +		/* reset the reference count (whiten the object) */
> +		object->count = 0;
> +		if (color_gray(object) && get_object(object))
> +			list_add_tail(&object->gray_list, &gray_list);
> +
> +		spin_unlock_irqrestore(&object->lock, flags);
> +	}
> +	rcu_read_unlock();
> +
> +	/* data/bss scanning */
> +	scan_block(_sdata, _edata, NULL);
> +	scan_block(__bss_start, __bss_stop, NULL);
> +
> +#ifdef CONFIG_SMP
> +	/* per-cpu sections scanning */
> +	for_each_possible_cpu(i)
> +		scan_block(__per_cpu_start + per_cpu_offset(i),
> +			   __per_cpu_end + per_cpu_offset(i), NULL);
> +#endif
> +
> +#ifdef SCAN_TASK_STACKS
> +	/*
> +	 * Scanning the task stacks may introduce false negatives and it is
> +	 * not enabled by default.
> +	 */
> +	read_lock(&tasklist_lock);
> +	for_each_process(task)
> +		scan_block(task_stack_page(task),
> +			   task_stack_page(task) + THREAD_SIZE, NULL);
> +	read_unlock(&tasklist_lock);
> +#endif
> +
> +	/*
> +	 * Scan the objects already referenced from the sections scanned
> +	 * above. More objects will be referenced and, if there are no memory
> +	 * leaks, all the objects will be scanned. The list traversal is safe
> +	 * for both tail additions and removals from inside the loop. The
> +	 * memleak objects cannot be freed from outside the loop because their
> +	 * use_count was increased.
> +	 */
> +	object = list_entry(gray_list.next, typeof(*object), gray_list);
> +	while (&object->gray_list != &gray_list) {
> +		scan_yield();
> +
> +		/* may add new objects to the list */
> +		if (!scan_should_stop())
> +			scan_object(object);
> +
> +		tmp = list_entry(object->gray_list.next, typeof(*object),
> +				 gray_list);
> +
> +		/* remove the object from the list and release it */
> +		list_del(&object->gray_list);
> +		put_object(object);
> +
> +		object = tmp;
> +	}
> +	BUG_ON(!list_empty(&gray_list));
> +}
> +
> +/*
> + * Iterate over the object_list and return the first valid object at or after
> + * the required position with its use_count incremented. The function triggers
> + * a memory scanning when the pos argument points to the first position.
> + */
> +static void *memleak_seq_start(struct seq_file *seq, loff_t *pos)
> +{
> +	struct memleak_object *object;
> +	loff_t n = *pos;
> +
> +	if (!atomic_read(&memleak_enabled)) {
> +		seq_printf(seq, "Kernel memory leak detector disabled\n");
> +		return ERR_PTR(-EBUSY);
> +	}
> +	if (!n) {
> +		memleak_scan();
> +		reported_leaks = 0;
> +	}
> +	if (reported_leaks >= REPORTS_NR)
> +		return NULL;
> +
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(object, &object_list, object_list) {
> +		if (n-- > 0)
> +			continue;
> +		if (get_object(object))
> +			goto out;
> +	}
> +	object = NULL;
> + out:
> +	rcu_read_unlock();
> +	return object;
> +}
> +
> +/*
> + * Return the next object in the object_list. The function decrements the
> + * use_count of the previous object and increases that of the next one.
> + */
> +static void *memleak_seq_next(struct seq_file *seq, void *v, loff_t *pos)
> +{
> +	struct memleak_object *prev_obj = v;
> +	struct memleak_object *next_obj = NULL;
> +	struct list_head *n = &prev_obj->object_list;
> +
> +	++(*pos);
> +	if (reported_leaks >= REPORTS_NR)
> +		goto out;
> +
> +	rcu_read_lock();
> +	list_for_each_continue_rcu(n, &object_list) {
> +		next_obj = list_entry(n, struct memleak_object, object_list);
> +		if (get_object(next_obj))
> +			break;
> +	}
> +	rcu_read_unlock();
> + out:
> +	put_object(prev_obj);
> +	return next_obj;
> +}
> +
> +/*
> + * Decrement the use_count of the last object required, if any.
> + */
> +static void memleak_seq_stop(struct seq_file *seq, void *v)
> +{
> +	if (v)
> +		put_object(v);
> +}
> +
> +/*
> + * Print the information for an unreferenced object to the seq file.
> + */
> +static int memleak_seq_show(struct seq_file *seq, void *v)
> +{
> +	struct memleak_object *object = v;
> +	unsigned long flags;
> +
> +	spin_lock_irqsave(&object->lock, flags);
> +	if (!unreferenced_object(object))
> +		goto out;
> +	print_unreferenced(seq, object);
> +	reported_leaks++;
> +out:
> +	spin_unlock_irqrestore(&object->lock, flags);
> +	return 0;
> +}
> +
> +static const struct seq_operations memleak_seq_ops = {
> +	.start = memleak_seq_start,
> +	.next  = memleak_seq_next,
> +	.stop  = memleak_seq_stop,
> +	.show  = memleak_seq_show,
> +};
> +
> +static int memleak_seq_open(struct inode *inode, struct file *file)
> +{
> +	int ret = mutex_lock_interruptible(&scan_mutex);
> +	if (ret < 0)
> +		return ret;
> +	ret = seq_open(file, &memleak_seq_ops);
> +	if (ret < 0)
> +		mutex_unlock(&scan_mutex);
> +	return ret;
> +}
> +
> +static int memleak_seq_release(struct inode *inode, struct file *file)
> +{
> +	int ret = seq_release(inode, file);
> +	mutex_unlock(&scan_mutex);
> +	return ret;
> +}
> +
> +static const struct file_operations memleak_fops = {
> +	.owner	 = THIS_MODULE,
> +	.open    = memleak_seq_open,
> +	.read    = seq_read,
> +	.llseek  = seq_lseek,
> +	.release = memleak_seq_release,
> +};
> +
> +/*
> + * Thread function performing automatic memory scanning. Unreferenced objects
> + * at the end of a memory scan are reported but only the first time.
> + */
> +static int memleak_scan_thread(void *arg)
> +{
> +	/*
> +	 * Wait before the first scan to allow the system to fully initialize.
> +	 */
> +	ssleep(SECS_FIRST_SCAN);
> +
> +	while (!kthread_should_stop()) {
> +		struct memleak_object *object;
> +		int ret;
> +
> +		ret = mutex_lock_interruptible(&scan_mutex);
> +		if (ret < 0)
> +			continue;
> +
> +		memleak_scan();
> +		reported_leaks = 0;
> +
> +		rcu_read_lock();
> +		list_for_each_entry_rcu(object, &object_list, object_list) {
> +			unsigned long flags;
> +
> +			if (reported_leaks >= REPORTS_NR)
> +				break;
> +			spin_lock_irqsave(&object->lock, flags);
> +			if (!(object->flags & OBJECT_REPORTED) &&
> +			    unreferenced_object(object)) {
> +				print_unreferenced(NULL, object);
> +				object->flags |= OBJECT_REPORTED;
> +				reported_leaks++;
> +			}
> +			spin_unlock_irqrestore(&object->lock, flags);
> +		}
> +		rcu_read_unlock();
> +
> +		mutex_unlock(&scan_mutex);
> +		/* sleep before the next scan */
> +		ssleep(SECS_SCAN_PERIOD);
> +	}
> +
> +	return 0;
> +}
> +
> +/*
> + * Perform the freeing of the kmemleak internal objects after waiting for any
> + * current memory scan to complete.
> + */
> +static int memleak_cleanup_thread(void *arg)
> +{
> +	struct memleak_object *object;
> +
> +	mutex_lock(&scan_mutex);
> +	rcu_read_lock();
> +	list_for_each_entry_rcu(object, &object_list, object_list)
> +		delete_object(object->pointer);
> +	rcu_read_unlock();
> +	mutex_unlock(&scan_mutex);
> +
> +	return 0;
> +}
> +
> +/*
> + * Start the clean-up thread.
> + */
> +static void memleak_cleanup(void)
> +{
> +	struct task_struct *cleanup_thread;
> +
> +	cleanup_thread = kthread_run(memleak_cleanup_thread, NULL,
> +				     "kmemleak-cleanup");
> +	if (IS_ERR(cleanup_thread))
> +		pr_warning("kmemleak: Failed to create the clean-up thread\n");
> +}
> +
> +/*
> + * Disable kmemleak. No memory allocation/freeing will be traced once this
> + * function is called. Disabling kmemleak is an irreversible operation.
> + */
> +static void memleak_disable(void)
> +{
> +	if (atomic_cmpxchg(&memleak_error, 0, 1))
> +		return;
> +
> +	/* stop any memory operation tracing */
> +	atomic_set(&memleak_early_log, 0);
> +	atomic_set(&memleak_enabled, 0);
> +
> +	/* check whether it is too early for a kernel thread */
> +	if (atomic_read(&memleak_initialized))
> +		memleak_cleanup();
> +
> +	pr_info("Kernel memory leak detector disabled\n");
> +}
> +
> +/*
> + * Kmemleak initialization.
> + */
> +void __init memleak_init(void)
> +{
> +	int i;
> +	unsigned long flags;
> +
> +	jiffies_scan_yield = msecs_to_jiffies(MSECS_SCAN_YIELD);
> +	jiffies_min_age = msecs_to_jiffies(MSECS_MIN_AGE);
> +
> +	object_cache = KMEM_CACHE(memleak_object, SLAB_NOLEAKTRACE);
> +	scan_area_cache = KMEM_CACHE(memleak_scan_area, SLAB_NOLEAKTRACE);
> +	INIT_PRIO_TREE_ROOT(&object_tree_root);
> +
> +	/* the kernel is still in UP mode, so disabling the IRQs is enough */
> +	local_irq_save(flags);
> +	if (!atomic_read(&memleak_error)) {
> +		atomic_set(&memleak_enabled, 1);
> +		atomic_set(&memleak_early_log, 0);
> +	}
> +	local_irq_restore(flags);
> +
> +	/*
> +	 * This is the point where tracking allocations is safe. Automatic
> +	 * scanning is started during the late initcall. Add the early logged
> +	 * callbacks to the kmemleak infrastructure.
> +	 */
> +	for (i = 0; i < crt_early_log; i++) {
> +		struct early_log *log = &early_log[i];
> +
> +		switch (log->op_type) {
> +		case MEMLEAK_ALLOC:
> +			memleak_alloc(log->ptr, log->size, log->min_count,
> +				      GFP_ATOMIC);
> +			break;
> +		case MEMLEAK_FREE:
> +			memleak_free(log->ptr);
> +			break;
> +		case MEMLEAK_NOT_LEAK:
> +			memleak_not_leak(log->ptr);
> +			break;
> +		case MEMLEAK_IGNORE:
> +			memleak_ignore(log->ptr);
> +			break;
> +		case MEMLEAK_SCAN_AREA:
> +			memleak_scan_area(log->ptr, log->offset, log->length,
> +					  GFP_ATOMIC);
> +			break;
> +		default:
> +			BUG();
> +		}
> +	}
> +}
> +
> +/*
> + * Late initialization function.
> + */
> +static int __init memleak_late_init(void)
> +{
> +	struct dentry *dentry;
> +
> +	atomic_set(&memleak_initialized, 1);
> +
> +	if (atomic_read(&memleak_error)) {
> +		/*
> +		 * Some error occured and kmemleak was disabled. There is a
> +		 * small chance that memleak_disable() was called immediately
> +		 * after setting memleak_initialized and we may end up with
> +		 * two clean-up threads but serialized by scan_mutex.
> +		 */
> +		memleak_cleanup();
> +		return -EBUSY;
> +	}
> +
> +	dentry = debugfs_create_file("memleak", S_IRUGO, NULL, NULL,
> +				     &memleak_fops);
> +	if (!dentry)
> +		return -ENOMEM;
> +
> +	scan_thread = kthread_run(memleak_scan_thread, NULL, "kmemleak");
> +	if (IS_ERR(scan_thread))
> +		pr_warning("kmemleak: Failed to create the scan thread\n");
> +
> +	pr_info("Kernel memory leak detector initialized\n");
> +
> +	return 0;
> +}
> +late_initcall(memleak_late_init);
> 

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 01/15] kmemleak: Add the base support
  2008-12-16 19:36   ` Paul E. McKenney
@ 2008-12-17  9:44     ` Catalin Marinas
  2008-12-17 17:15       ` Paul E. McKenney
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-17  9:44 UTC (permalink / raw)
  To: paulmck; +Cc: linux-kernel, Ingo Molnar, Pekka Enberg, Andrew Morton

On Tue, 2008-12-16 at 11:36 -0800, Paul E. McKenney wrote:
> On Wed, Dec 10, 2008 at 06:26:59PM +0000, Catalin Marinas wrote:
> > This patch adds the base support for the kernel memory leak
> > detector. It traces the memory allocation/freeing in a way similar to
> > the Boehm's conservative garbage collector, the difference being that
> > the unreferenced objects are not freed but only shown in
> > /sys/kernel/debug/memleak. Enabling this feature introduces an
> > overhead to memory allocations.
> 
> Looks good to me from an RCU viewpoint!
> 
> Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>

Thanks for reviewing it.

FYI, in the version I'm going to post this week I added another mutex to
ensure the exclusive opening of the /sys/kernel/debug/memleak file as
one can now use this file to configure kmemleak at run-time. The RCU
locking isn't affected and I'll add your "Reviewed-by:" line.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 13/15] kmemleak: Keep the __init functions after initialization
  2008-12-10 18:44   ` Sam Ravnborg
@ 2008-12-17 13:09     ` Catalin Marinas
  0 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-17 13:09 UTC (permalink / raw)
  To: Sam Ravnborg; +Cc: linux-kernel

Sam,

On Wed, 2008-12-10 at 19:44 +0100, Sam Ravnborg wrote:
> On Wed, Dec 10, 2008 at 06:28:06PM +0000, Catalin Marinas wrote:
> > This patch adds the CONFIG_DEBUG_KEEP_INIT option which preserves the
> > .init.* sections after initialization. Memory leaks happening during
> > this phase can be more easily tracked.
> 
> This patch manipulate the section names of these functions.
> The better way would be to keep the section names as they are
> and then in init.h decide where to add these sections.
> 
> This will require a new set of CONFIG_ symbols but then
> it is obvious what happens.

Thanks for your comments. I had a look at the vmlinux.lds.h file and
there are indeed better options like DEV_KEEP etc.

Anyway, I think I'll drop this patch completely. All that kmemleak needs
is actually the kernel symbols to be able to print the stack trace.
These don't seem to be removed together with the .init sections cleanup.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 01/15] kmemleak: Add the base support
  2008-12-17  9:44     ` Catalin Marinas
@ 2008-12-17 17:15       ` Paul E. McKenney
  0 siblings, 0 replies; 59+ messages in thread
From: Paul E. McKenney @ 2008-12-17 17:15 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Ingo Molnar, Pekka Enberg, Andrew Morton

On Wed, Dec 17, 2008 at 09:44:56AM +0000, Catalin Marinas wrote:
> On Tue, 2008-12-16 at 11:36 -0800, Paul E. McKenney wrote:
> > On Wed, Dec 10, 2008 at 06:26:59PM +0000, Catalin Marinas wrote:
> > > This patch adds the base support for the kernel memory leak
> > > detector. It traces the memory allocation/freeing in a way similar to
> > > the Boehm's conservative garbage collector, the difference being that
> > > the unreferenced objects are not freed but only shown in
> > > /sys/kernel/debug/memleak. Enabling this feature introduces an
> > > overhead to memory allocations.
> > 
> > Looks good to me from an RCU viewpoint!
> > 
> > Reviewed-by: Paul E. McKenney <paulmck@linux.vnet.ibm.com>
> 
> Thanks for reviewing it.
> 
> FYI, in the version I'm going to post this week I added another mutex to
> ensure the exclusive opening of the /sys/kernel/debug/memleak file as
> one can now use this file to configure kmemleak at run-time. The RCU
> locking isn't affected and I'll add your "Reviewed-by:" line.

Fair enough!

						Thanx, Paul

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-12 14:27     ` Catalin Marinas
@ 2008-12-18 10:46       ` Pekka Enberg
  2008-12-18 16:38         ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Pekka Enberg @ 2008-12-18 10:46 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, cl

On Fri, 2008-12-12 at 14:27 +0000, Catalin Marinas wrote:
> > Do you take care of the per-node lists as well?
> 
> I can't figure out what other location should be erased.

As far as I can tell, you need to tell kmemleak not to scan the alien
caches and the shared array that is shared by all CPUs that belong to
one node. I'm adding Christoph to the CC in case he wants to comment on
this.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 05/15] kmemleak: Add the slub memory allocation/freeing hooks
  2008-12-12 13:45     ` Catalin Marinas
@ 2008-12-18 10:51       ` Pekka Enberg
  2008-12-18 15:28         ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Pekka Enberg @ 2008-12-18 10:51 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Christoph Lameter

Hi Catalin,

On Fri, 2008-12-12 at 13:45 +0000, Catalin Marinas wrote:
> > Hmm, I'm not sure I understand why struct kmem_cache_cpu ->freelist is 
> > never scanned. 
> 
> Did you get any false positives? Or were you expecting false negatives
> because of freelist scanning which never occurred?

I haven't tested kmemleak so I'm just commenting on the code. I was
thinking about false negatives, not false positives.

On Fri, 2008-12-12 at 13:45 +0000, Catalin Marinas wrote:
> > For SMP, I suppose kmemleak doesn't scan the per-CPU 
> > areas?
> 
> It should scan the per-CPU areas in the memleak_scan() function:
> 
> #ifdef CONFIG_SMP
> 	/* per-cpu sections scanning */
> 	for_each_possible_cpu(i)
> 		scan_block(__per_cpu_start + per_cpu_offset(i),
> 			   __per_cpu_end + per_cpu_offset(i), NULL);
> #endif
> 
> >  But for UP, struct kmem_cache is allocated with kmalloc() and 
> > that contains struct kmem_cache_cpu as well.
> 
> They should be scanned as well.
>
> > And I suppose we never scan struct pages either. Otherwise ->freelist 
> > there would be a problem as well.
> 
> It was scanning the mem_map arrays in the past but removed this part and
> haven't seen any problems (on ARM).
> 
> Why would the ->freelist be a problem? I don't fully understand the slub
> allocator. Aren't objects added to the freelist only after they were
> freed? In __slab_alloc there seems to be a line:
> 
> c->page->freelist = NULL;
> 
> so the freelist won't count as a reference anymore. After freeing an
> object, kmemleak no longer cares about references to it.

I think we're talking about two different things here. Don't we then
have false negatives because we reach ->freelist of struct
kmem_cache_cpu which contains a pointer to an object that is free'd
(take a look at slab_free() fast-path)?

		Pekka


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 05/15] kmemleak: Add the slub memory allocation/freeing hooks
  2008-12-18 10:51       ` Pekka Enberg
@ 2008-12-18 15:28         ` Catalin Marinas
  2008-12-18 16:05           ` Pekka Enberg
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-18 15:28 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: linux-kernel, Christoph Lameter

Hi Pekka,

On Thu, 2008-12-18 at 12:51 +0200, Pekka Enberg wrote:
> On Fri, 2008-12-12 at 13:45 +0000, Catalin Marinas wrote:
> > Pekka Enberg wrote:
> > > Hmm, I'm not sure I understand why struct kmem_cache_cpu ->freelist is 
> > > never scanned. 
> >
> > Why would the ->freelist be a problem? I don't fully understand the slub
> > allocator. Aren't objects added to the freelist only after they were
> > freed? In __slab_alloc there seems to be a line:
> > 
> > c->page->freelist = NULL;
> > 
> > so the freelist won't count as a reference anymore. After freeing an
> > object, kmemleak no longer cares about references to it.
> 
> I think we're talking about two different things here. Don't we then
> have false negatives because we reach ->freelist of struct
> kmem_cache_cpu which contains a pointer to an object that is free'd
> (take a look at slab_free() fast-path)?

Just to make sure I understand it correctly, the slab_free() fast path
stores the pointer to the freed object into c->freelist. However, this
object is no longer tracked by kmemleak because of the
kmemleak_free_recursive() call at the beginning of this function (false
negatives make sense only for allocated objects).

On the slab_alloc() fast path, the pointer to an allocated object is
obtained from the c->freelist pointer but this seems to be overridden by
the pointer to the next free object, object[c->offset], which isn't yet
tracked by kmemleak. So, during a memory scan, it shouldn't matter that
the kmem_cache_cpu structures are called as they don't contain any
pointer to an allocated (not free) object.

The new slabs are allocated with alloc_pages() and these are not tracked
by kmemleak.

Is my understanding correct? Thanks.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 05/15] kmemleak: Add the slub memory allocation/freeing hooks
  2008-12-18 15:28         ` Catalin Marinas
@ 2008-12-18 16:05           ` Pekka Enberg
  0 siblings, 0 replies; 59+ messages in thread
From: Pekka Enberg @ 2008-12-18 16:05 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: linux-kernel, Christoph Lameter

Hi Catalin,

Catalin Marinas wrote:
> Just to make sure I understand it correctly, the slab_free() fast path
> stores the pointer to the freed object into c->freelist. However, this
> object is no longer tracked by kmemleak because of the
> kmemleak_free_recursive() call at the beginning of this function (false
> negatives make sense only for allocated objects).

Indeed. For SLAB, it's a problem because the per-CPU cache pointer is 
not cleared from the struct array_cache upon _allocation_ which is the 
culprit of the false negative there.

Catalin Marinas wrote:
> Is my understanding correct? Thanks.

Yes, it is and I was just confused. Thanks!

		Pekka

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-18 10:46       ` Pekka Enberg
@ 2008-12-18 16:38         ` Catalin Marinas
  2008-12-18 16:49           ` Christoph Lameter
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-18 16:38 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: linux-kernel, cl

On Thu, 2008-12-18 at 12:46 +0200, Pekka Enberg wrote:
> On Fri, 2008-12-12 at 14:27 +0000, Catalin Marinas wrote:
> > > Do you take care of the per-node lists as well?
> > 
> > I can't figure out what other location should be erased.
> 
> As far as I can tell, you need to tell kmemleak not to scan the alien
> caches and the shared array that is shared by all CPUs that belong to
> one node. I'm adding Christoph to the CC in case he wants to comment on
> this.

In the ____cache_alloc() kmemleak clears the
cachep->array->entry[ac->avail] pointer but this may not be enough as
freed and later re-allocated objects may have pointers in the alien
cache (is that correct?). A better approach (haven't tried it yet) would
be not to scan objects allocated via alloc_arraycache() at all. However,
there is still the initarray_cache/generic which are automatically
scanned via the data section (unless I add an attribute to place them in
a different, not scanned, section).

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-18 16:38         ` Catalin Marinas
@ 2008-12-18 16:49           ` Christoph Lameter
  2008-12-18 17:02             ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Lameter @ 2008-12-18 16:49 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Pekka Enberg, linux-kernel

On Thu, 18 Dec 2008, Catalin Marinas wrote:

> In the ____cache_alloc() kmemleak clears the
> cachep->array->entry[ac->avail] pointer but this may not be enough as
> freed and later re-allocated objects may have pointers in the alien
> cache (is that correct?). A better approach (haven't tried it yet) would
> be not to scan objects allocated via alloc_arraycache() at all. However,
> there is still the initarray_cache/generic which are automatically
> scanned via the data section (unless I add an attribute to place them in
> a different, not scanned, section).

An allocated object is not part of any cache in SLAB. Only freed objects
are kept in the slab queues. A freed object can only be in one queue at a
time.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-18 16:49           ` Christoph Lameter
@ 2008-12-18 17:02             ` Catalin Marinas
  2008-12-18 19:35               ` Christoph Lameter
  0 siblings, 1 reply; 59+ messages in thread
From: Catalin Marinas @ 2008-12-18 17:02 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Pekka Enberg, linux-kernel

On Thu, 2008-12-18 at 10:49 -0600, Christoph Lameter wrote:
> On Thu, 18 Dec 2008, Catalin Marinas wrote:
> 
> > In the ____cache_alloc() kmemleak clears the
> > cachep->array->entry[ac->avail] pointer but this may not be enough as
> > freed and later re-allocated objects may have pointers in the alien
> > cache (is that correct?). A better approach (haven't tried it yet) would
> > be not to scan objects allocated via alloc_arraycache() at all. However,
> > there is still the initarray_cache/generic which are automatically
> > scanned via the data section (unless I add an attribute to place them in
> > a different, not scanned, section).
> 
> An allocated object is not part of any cache in SLAB. Only freed objects
> are kept in the slab queues. A freed object can only be in one queue at a
> time.

OK, but is there a chance that an stale pointer remains in such caches?
There seems to be the transfer_objects() function that moves pointers
around but doesn't clear the source values.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-18 17:02             ` Catalin Marinas
@ 2008-12-18 19:35               ` Christoph Lameter
  2008-12-18 20:06                 ` Pekka Enberg
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Lameter @ 2008-12-18 19:35 UTC (permalink / raw)
  To: Catalin Marinas; +Cc: Pekka Enberg, linux-kernel

On Thu, 18 Dec 2008, Catalin Marinas wrote:

> OK, but is there a chance that an stale pointer remains in such caches?

Definitely. The pointers are never cleared. There are counters in the
caches that are used to index into an array.

> There seems to be the transfer_objects() function that moves pointers
> around but doesn't clear the source values.

No need to. The counter updates take care of things.


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-18 19:35               ` Christoph Lameter
@ 2008-12-18 20:06                 ` Pekka Enberg
  2008-12-18 21:41                   ` Christoph Lameter
  0 siblings, 1 reply; 59+ messages in thread
From: Pekka Enberg @ 2008-12-18 20:06 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Catalin Marinas, linux-kernel

On Thu, Dec 18, 2008 at 9:35 PM, Christoph Lameter
<cl@linux-foundation.org> wrote:
> On Thu, 18 Dec 2008, Catalin Marinas wrote:
>
>> OK, but is there a chance that an stale pointer remains in such caches?
>
> Definitely. The pointers are never cleared. There are counters in the
> caches that are used to index into an array.
>
>> There seems to be the transfer_objects() function that moves pointers
>> around but doesn't clear the source values.
>
> No need to. The counter updates take care of things.

For kmemleak, that's a problem. Unless we explicitly annotate the
caches, it will scan them and think that there's a pointer to a leaked
object (i.e. false negative). Catalin already took care of the per-CPU
caches but AFAICT we still need to take care of the per-node caches
and the shared caches.

^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-18 20:06                 ` Pekka Enberg
@ 2008-12-18 21:41                   ` Christoph Lameter
  2008-12-19 10:44                     ` Catalin Marinas
  0 siblings, 1 reply; 59+ messages in thread
From: Christoph Lameter @ 2008-12-18 21:41 UTC (permalink / raw)
  To: Pekka Enberg; +Cc: Catalin Marinas, linux-kernel

On Thu, 18 Dec 2008, Pekka Enberg wrote:

> For kmemleak, that's a problem. Unless we explicitly annotate the
> caches, it will scan them and think that there's a pointer to a leaked
> object (i.e. false negative). Catalin already took care of the per-CPU
> caches but AFAICT we still need to take care of the per-node caches
> and the shared caches.

Why doesnt kmemleak simply use the counter as a boundary and only access
those pointers that are valid?


^ permalink raw reply	[flat|nested] 59+ messages in thread

* Re: [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks
  2008-12-18 21:41                   ` Christoph Lameter
@ 2008-12-19 10:44                     ` Catalin Marinas
  0 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-12-19 10:44 UTC (permalink / raw)
  To: Christoph Lameter; +Cc: Pekka Enberg, linux-kernel

On Thu, 2008-12-18 at 15:41 -0600, Christoph Lameter wrote:
> On Thu, 18 Dec 2008, Pekka Enberg wrote:
> 
> > For kmemleak, that's a problem. Unless we explicitly annotate the
> > caches, it will scan them and think that there's a pointer to a leaked
> > object (i.e. false negative). Catalin already took care of the per-CPU
> > caches but AFAICT we still need to take care of the per-node caches
> > and the shared caches.
> 
> Why doesnt kmemleak simply use the counter as a boundary and only access
> those pointers that are valid?

Since the valid pointers in these caches only point to freed objects
(which aren't tracked by kmemleak), it's better for kmemleak not to scan
such structures at all. I added a kmemleak_no_scan() annotation for
this.

Thanks for clarification.

-- 
Catalin


^ permalink raw reply	[flat|nested] 59+ messages in thread

* [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash
  2008-11-29 10:43 Catalin Marinas
@ 2008-11-29 10:43 ` Catalin Marinas
  0 siblings, 0 replies; 59+ messages in thread
From: Catalin Marinas @ 2008-11-29 10:43 UTC (permalink / raw)
  To: linux-kernel; +Cc: Ingo Molnar

The alloc_large_system_hash function is called from various places in
the kernel and it contains pointers to other allocated structures. It
therefore needs to be traced by kmemleak.

Signed-off-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Ingo Molnar <mingo@elte.hu>
---
 mm/page_alloc.c |    3 +++
 1 files changed, 3 insertions(+), 0 deletions(-)

diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index d8ac014..90e7dbd 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -46,6 +46,7 @@
 #include <linux/page-isolation.h>
 #include <linux/page_cgroup.h>
 #include <linux/debugobjects.h>
+#include <linux/memleak.h>
 
 #include <asm/tlbflush.h>
 #include <asm/div64.h>
@@ -4570,6 +4571,8 @@ void *__init alloc_large_system_hash(const char *tablename,
 	if (_hash_mask)
 		*_hash_mask = (1 << log2qty) - 1;
 
+	memleak_alloc(table, size, 1);
+
 	return table;
 }
 


^ permalink raw reply related	[flat|nested] 59+ messages in thread

end of thread, other threads:[~2008-12-19 10:45 UTC | newest]

Thread overview: 59+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2008-12-10 18:26 [PATCH 00/15] Kernel memory leak detector Catalin Marinas
2008-12-10 18:26 ` [PATCH 01/15] kmemleak: Add the base support Catalin Marinas
2008-12-11 22:01   ` Pekka Enberg
2008-12-12 11:36     ` Catalin Marinas
2008-12-12 13:14       ` Pekka Enberg
2008-12-16 19:36   ` Paul E. McKenney
2008-12-17  9:44     ` Catalin Marinas
2008-12-17 17:15       ` Paul E. McKenney
2008-12-10 18:27 ` [PATCH 02/15] kmemleak: Add documentation on the memory leak detector Catalin Marinas
2008-12-10 18:27 ` [PATCH 03/15] kmemleak: Add the slab memory allocation/freeing hooks Catalin Marinas
2008-12-10 18:32   ` Dave Hansen
2008-12-10 18:53   ` Dave Hansen
2008-12-11 21:22   ` Pekka Enberg
2008-12-12 14:27     ` Catalin Marinas
2008-12-18 10:46       ` Pekka Enberg
2008-12-18 16:38         ` Catalin Marinas
2008-12-18 16:49           ` Christoph Lameter
2008-12-18 17:02             ` Catalin Marinas
2008-12-18 19:35               ` Christoph Lameter
2008-12-18 20:06                 ` Pekka Enberg
2008-12-18 21:41                   ` Christoph Lameter
2008-12-19 10:44                     ` Catalin Marinas
2008-12-10 18:27 ` [PATCH 04/15] kmemleak: Add the slob " Catalin Marinas
2008-12-10 18:36   ` Matt Mackall
2008-12-11  9:47     ` Catalin Marinas
2008-12-11 21:37   ` Pekka Enberg
2008-12-10 18:27 ` [PATCH 05/15] kmemleak: Add the slub " Catalin Marinas
2008-12-11 21:30   ` Pekka Enberg
2008-12-12 13:45     ` Catalin Marinas
2008-12-18 10:51       ` Pekka Enberg
2008-12-18 15:28         ` Catalin Marinas
2008-12-18 16:05           ` Pekka Enberg
2008-12-10 18:27 ` [PATCH 06/15] kmemleak: Add the vmalloc " Catalin Marinas
2008-12-10 18:27 ` [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash Catalin Marinas
2008-12-10 19:04   ` Dave Hansen
2008-12-11  9:50     ` Catalin Marinas
2008-12-11 10:08       ` Catalin Marinas
2008-12-11 17:30       ` Dave Hansen
2008-12-11 17:38         ` Catalin Marinas
2008-12-11 17:45           ` Dave Hansen
2008-12-11 19:47             ` Pekka Enberg
2008-12-12 17:04               ` Catalin Marinas
2008-12-12 17:17                 ` Dave Hansen
2008-12-12 17:43                   ` Catalin Marinas
2008-12-10 18:27 ` [PATCH 08/15] kmemleak: Add modules support Catalin Marinas
2008-12-10 18:27 ` [PATCH 09/15] x86: Provide _sdata in the vmlinux_*.lds.S files Catalin Marinas
2008-12-10 18:27 ` [PATCH 10/15] arm: Provide _sdata and __bss_stop in the vmlinux.lds.S file Catalin Marinas
2008-12-10 18:27 ` [PATCH 11/15] kmemleak: Remove some of the kmemleak false positives Catalin Marinas
2008-12-10 18:28 ` [PATCH 12/15] kmemleak: Enable the building of the memory leak detector Catalin Marinas
2008-12-10 19:20   ` Dave Hansen
2008-12-12 17:27     ` Catalin Marinas
2008-12-12 18:02       ` Dave Hansen
2008-12-10 18:28 ` [PATCH 13/15] kmemleak: Keep the __init functions after initialization Catalin Marinas
2008-12-10 18:44   ` Sam Ravnborg
2008-12-17 13:09     ` Catalin Marinas
2008-12-10 18:28 ` [PATCH 14/15] kmemleak: Simple testing module for kmemleak Catalin Marinas
2008-12-10 18:28 ` [PATCH 15/15] kmemleak: Add the corresponding MAINTAINERS entry Catalin Marinas
2008-12-11  9:44 ` [PATCH 00/15] Kernel memory leak detector Catalin Marinas
  -- strict thread matches above, loose matches on Subject: below --
2008-11-29 10:43 Catalin Marinas
2008-11-29 10:43 ` [PATCH 07/15] kmemleak: Add memleak_alloc callback from alloc_large_system_hash Catalin Marinas

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.