linux-kernel.vger.kernel.org archive mirror
* [RFC 00/11] mm: debug: formatting memory management structs
@ 2015-04-14 20:56 Sasha Levin
  2015-04-14 20:56 ` [RFC 01/11] mm: debug: format flags in a buffer Sasha Levin
                   ` (11 more replies)
  0 siblings, 12 replies; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

This patch series adds knowledge about various memory management structures
to the standard print functions.

In essence, it allows us to easily print those structures:

	printk("%pZp %pZm %pZv", page, mm, vma);

This lets us further customize output when hitting bugs, so we also
introduce VM_BUG(), which can print arbitrary information when a bug is
hit rather than just a single predefined structure.

This also means we can get rid of VM_BUG_ON_* since they're now nothing
more than a format string.

Sasha Levin (11):
  mm: debug: format flags in a buffer
  mm: debug: deal with a new family of MM pointers
  mm: debug: dump VMA into a string rather than directly on screen
  mm: debug: dump struct MM into a string rather than directly on
    screen
  mm: debug: dump page into a string rather than directly on screen
  mm: debug: clean unused code
  mm: debug: VM_BUG()
  mm: debug: kill VM_BUG_ON_PAGE
  mm: debug: kill VM_BUG_ON_VMA
  mm: debug: kill VM_BUG_ON_MM
  mm: debug: use VM_BUG() to help with debug output

 arch/arm/mm/mmap.c               |    2 +-
 arch/frv/mm/elf-fdpic.c          |    4 +-
 arch/mips/mm/gup.c               |    4 +-
 arch/parisc/kernel/sys_parisc.c  |    2 +-
 arch/powerpc/mm/hugetlbpage.c    |    2 +-
 arch/powerpc/mm/pgtable_64.c     |    4 +-
 arch/s390/mm/gup.c               |    2 +-
 arch/s390/mm/mmap.c              |    2 +-
 arch/s390/mm/pgtable.c           |    6 +--
 arch/sh/mm/mmap.c                |    2 +-
 arch/sparc/kernel/sys_sparc_64.c |    4 +-
 arch/sparc/mm/gup.c              |    2 +-
 arch/sparc/mm/hugetlbpage.c      |    4 +-
 arch/tile/mm/hugetlbpage.c       |    2 +-
 arch/x86/kernel/sys_x86_64.c     |    2 +-
 arch/x86/mm/gup.c                |    8 ++--
 arch/x86/mm/hugetlbpage.c        |    2 +-
 arch/x86/mm/pgtable.c            |    6 +--
 include/linux/huge_mm.h          |    2 +-
 include/linux/hugetlb.h          |    2 +-
 include/linux/hugetlb_cgroup.h   |    4 +-
 include/linux/mm.h               |   22 ++++-----
 include/linux/mmdebug.h          |   40 ++++++----------
 include/linux/page-flags.h       |   26 +++++-----
 include/linux/pagemap.h          |   11 +++--
 include/linux/rmap.h             |    2 +-
 kernel/fork.c                    |    2 +-
 lib/vsprintf.c                   |   22 +++++++++
 mm/balloon_compaction.c          |    4 +-
 mm/cleancache.c                  |    6 +--
 mm/compaction.c                  |    2 +-
 mm/debug.c                       |   98 ++++++++++++++++++++------------------
 mm/filemap.c                     |   18 +++----
 mm/gup.c                         |   12 ++---
 mm/huge_memory.c                 |   50 +++++++++----------
 mm/hugetlb.c                     |   28 +++++------
 mm/hugetlb_cgroup.c              |    2 +-
 mm/internal.h                    |    8 ++--
 mm/interval_tree.c               |    2 +-
 mm/kasan/report.c                |    2 +-
 mm/ksm.c                         |   13 ++---
 mm/memcontrol.c                  |   48 +++++++++----------
 mm/memory.c                      |   10 ++--
 mm/memory_hotplug.c              |    2 +-
 mm/migrate.c                     |    6 +--
 mm/mlock.c                       |    4 +-
 mm/mmap.c                        |   15 +++---
 mm/mremap.c                      |    4 +-
 mm/page_alloc.c                  |   28 +++++------
 mm/page_io.c                     |    4 +-
 mm/pagewalk.c                    |    2 +-
 mm/pgtable-generic.c             |    8 ++--
 mm/rmap.c                        |   20 ++++----
 mm/shmem.c                       |   10 ++--
 mm/slub.c                        |    4 +-
 mm/swap.c                        |   39 +++++++--------
 mm/swap_state.c                  |   16 +++----
 mm/swapfile.c                    |    8 ++--
 mm/vmscan.c                      |   24 +++++-----
 59 files changed, 355 insertions(+), 335 deletions(-)

-- 
1.7.10.4


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC 01/11] mm: debug: format flags in a buffer
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-30 15:39   ` Kirill A. Shutemov
  2015-04-14 20:56 ` [RFC 02/11] mm: debug: deal with a new family of MM pointers Sasha Levin
                   ` (10 subsequent siblings)
  11 siblings, 1 reply; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

Format various flags into a string buffer rather than printing them
directly. This is a helper for later patches.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 mm/debug.c |   35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/mm/debug.c b/mm/debug.c
index 3eb3ac2..c9f7dd7 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -80,6 +80,41 @@ static void dump_flags(unsigned long flags,
 	pr_cont(")\n");
 }
 
+static char *format_flags(unsigned long flags,
+			const struct trace_print_flags *names, int count,
+			char *buf, char *end)
+{
+	const char *delim = "";
+	unsigned long mask;
+	int i;
+
+	buf += snprintf(buf, (buf > end ? 0 : end - buf),
+				"flags: %#lx(", flags);
+
+	/* remove zone id */
+	flags &= (1UL << NR_PAGEFLAGS) - 1;
+
+	for (i = 0; i < count && flags; i++) {
+		mask = names[i].mask;
+		if ((flags & mask) != mask)
+			continue;
+
+		flags &= ~mask;
+		buf += snprintf(buf, (buf > end ? 0 : end - buf),
+				"%s%s", delim, names[i].name);
+		delim = "|";
+	}
+
+	/* check for left over flags */
+	if (flags)
+		buf += snprintf(buf, (buf > end ? 0 : end - buf),
+				"%s%#lx", delim, flags);
+
+	buf += snprintf(buf, (buf > end ? 0 : end - buf), ")\n");
+
+	return buf;
+}
+
 void dump_page_badflags(struct page *page, const char *reason,
 		unsigned long badflags)
 {
-- 
1.7.10.4



* [RFC 02/11] mm: debug: deal with a new family of MM pointers
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
  2015-04-14 20:56 ` [RFC 01/11] mm: debug: format flags in a buffer Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-30 16:17   ` Kirill A. Shutemov
  2015-04-14 20:56 ` [RFC 03/11] mm: debug: dump VMA into a string rather than directly on screen Sasha Levin
                   ` (9 subsequent siblings)
  11 siblings, 1 reply; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

This teaches our printing functions about a new family of MM pointers
that they can now print.

I've picked %pZ because %pm and %pM were already taken, so I figured it
doesn't really matter what we go with. We also have the option of stealing
one of those two...

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 lib/vsprintf.c |   13 +++++++++++++
 1 file changed, 13 insertions(+)

diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 8243e2f..809d19d 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1375,6 +1375,16 @@ char *comm_name(char *buf, char *end, struct task_struct *tsk,
 	return string(buf, end, name, spec);
 }
 
+static noinline_for_stack
+char *mm_pointer(char *buf, char *end, struct task_struct *tsk,
+		struct printf_spec spec, const char *fmt)
+{
+	switch (fmt[1]) {
+	}
+
+	return buf;
+}
+
 int kptr_restrict __read_mostly;
 
 /*
@@ -1463,6 +1473,7 @@ int kptr_restrict __read_mostly;
  *        (legacy clock framework) of the clock
  * - 'Cr' For a clock, it prints the current rate of the clock
  * - 'T' task_struct->comm
+ * - 'Z' Outputs a readable version of a type of memory management struct.
  *
  * Note: The difference between 'S' and 'F' is that on ia64 and ppc64
  * function pointers are really function descriptors, which contain a
@@ -1615,6 +1626,8 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
 				   spec, fmt);
 	case 'T':
 		return comm_name(buf, end, ptr, spec, fmt);
+	case 'Z':
+		return mm_pointer(buf, end, ptr, spec, fmt);
 	}
 	spec.flags |= SMALL;
 	if (spec.field_width == -1) {
-- 
1.7.10.4



* [RFC 03/11] mm: debug: dump VMA into a string rather than directly on screen
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
  2015-04-14 20:56 ` [RFC 01/11] mm: debug: format flags in a buffer Sasha Levin
  2015-04-14 20:56 ` [RFC 02/11] mm: debug: deal with a new family of MM pointers Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-30 16:18   ` Kirill A. Shutemov
  2015-04-14 20:56 ` [RFC 04/11] mm: debug: dump struct MM " Sasha Levin
                   ` (8 subsequent siblings)
  11 siblings, 1 reply; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

This lets us use regular string formatting code to dump VMAs, and
switches VM_BUG_ON_VMA over to it instead of printing directly to the
screen.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 include/linux/mmdebug.h |    8 ++++++--
 lib/vsprintf.c          |    7 +++++--
 mm/debug.c              |   26 ++++++++++++++------------
 3 files changed, 25 insertions(+), 16 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 877ef22..506e405 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -10,10 +10,10 @@ struct mm_struct;
 extern void dump_page(struct page *page, const char *reason);
 extern void dump_page_badflags(struct page *page, const char *reason,
 			       unsigned long badflags);
-void dump_vma(const struct vm_area_struct *vma);
 void dump_mm(const struct mm_struct *mm);
 
 #ifdef CONFIG_DEBUG_VM
+char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
 #define VM_BUG_ON(cond) BUG_ON(cond)
 #define VM_BUG_ON_PAGE(cond, page)					\
 	do {								\
@@ -25,7 +25,7 @@ void dump_mm(const struct mm_struct *mm);
 #define VM_BUG_ON_VMA(cond, vma)					\
 	do {								\
 		if (unlikely(cond)) {					\
-			dump_vma(vma);					\
+			pr_emerg("%pZv", vma);				\
 			BUG();						\
 		}							\
 	} while (0)
@@ -40,6 +40,10 @@ void dump_mm(const struct mm_struct *mm);
 #define VM_WARN_ON_ONCE(cond) WARN_ON_ONCE(cond)
 #define VM_WARN_ONCE(cond, format...) WARN_ONCE(cond, format)
 #else
+static char *format_vma(const struct vm_area_struct *vma, char *buf, char *end)
+{
+	return buf;
+}
 #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
 #define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 809d19d..b4800c1 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1376,10 +1376,12 @@ char *comm_name(char *buf, char *end, struct task_struct *tsk,
 }
 
 static noinline_for_stack
-char *mm_pointer(char *buf, char *end, struct task_struct *tsk,
+char *mm_pointer(char *buf, char *end, const void *ptr,
 		struct printf_spec spec, const char *fmt)
 {
 	switch (fmt[1]) {
+	case 'v':
+		return format_vma(ptr, buf, end);
 	}
 
 	return buf;
@@ -1473,7 +1475,8 @@ int kptr_restrict __read_mostly;
  *        (legacy clock framework) of the clock
  * - 'Cr' For a clock, it prints the current rate of the clock
  * - 'T' task_struct->comm
- * - 'Z' Outputs a readable version of a type of memory management struct.
+ * - 'Z[v]' Outputs a readable version of a type of memory management struct:
+ *		v struct vm_area_struct
  *
  * Note: The difference between 'S' and 'F' is that on ia64 and ppc64
  * function pointers are really function descriptors, which contain a
diff --git a/mm/debug.c b/mm/debug.c
index c9f7dd7..82e2e1c 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -186,20 +186,22 @@ static const struct trace_print_flags vmaflags_names[] = {
 	{VM_MERGEABLE,			"mergeable"	},
 };
 
-void dump_vma(const struct vm_area_struct *vma)
+char *format_vma(const struct vm_area_struct *vma, char *buf, char *end)
 {
-	pr_emerg("vma %p start %p end %p\n"
-		"next %p prev %p mm %p\n"
-		"prot %lx anon_vma %p vm_ops %p\n"
-		"pgoff %lx file %p private_data %p\n",
-		vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_next,
-		vma->vm_prev, vma->vm_mm,
-		(unsigned long)pgprot_val(vma->vm_page_prot),
-		vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
-		vma->vm_file, vma->vm_private_data);
-	dump_flags(vma->vm_flags, vmaflags_names, ARRAY_SIZE(vmaflags_names));
+	buf += snprintf(buf, buf > end ? 0 : end - buf,
+		"vma %p start %p end %p\n"
+		"next %p prev %p mm %p\n"
+		"prot %lx anon_vma %p vm_ops %p\n"
+		"pgoff %lx file %p private_data %p\n",
+		vma, (void *)vma->vm_start, (void *)vma->vm_end, vma->vm_next,
+		vma->vm_prev, vma->vm_mm,
+		(unsigned long)pgprot_val(vma->vm_page_prot),
+		vma->anon_vma, vma->vm_ops, vma->vm_pgoff,
+		vma->vm_file, vma->vm_private_data);
+
+	return format_flags(vma->vm_flags, vmaflags_names, ARRAY_SIZE(vmaflags_names),
+				buf, end);
 }
-EXPORT_SYMBOL(dump_vma);
 
 void dump_mm(const struct mm_struct *mm)
 {
-- 
1.7.10.4



* [RFC 04/11] mm: debug: dump struct MM into a string rather than directly on screen
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
                   ` (2 preceding siblings ...)
  2015-04-14 20:56 ` [RFC 03/11] mm: debug: dump VMA into a string rather than directly on screen Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-14 20:56 ` [RFC 05/11] mm: debug: dump page " Sasha Levin
                   ` (7 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

This lets us use regular string formatting code to dump MMs, and
switches VM_BUG_ON_MM over to it instead of printing directly to the
screen.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 include/linux/mmdebug.h |    8 ++++++--
 lib/vsprintf.c          |    5 ++++-
 mm/debug.c              |   11 +++++++----
 3 files changed, 17 insertions(+), 7 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 506e405..202ebdf 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -10,10 +10,10 @@ struct mm_struct;
 extern void dump_page(struct page *page, const char *reason);
 extern void dump_page_badflags(struct page *page, const char *reason,
 			       unsigned long badflags);
-void dump_mm(const struct mm_struct *mm);
 
 #ifdef CONFIG_DEBUG_VM
 char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
+char *format_mm(const struct mm_struct *mm, char *buf, char *end);
 #define VM_BUG_ON(cond) BUG_ON(cond)
 #define VM_BUG_ON_PAGE(cond, page)					\
 	do {								\
@@ -32,7 +32,7 @@ char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
 #define VM_BUG_ON_MM(cond, mm)						\
 	do {								\
 		if (unlikely(cond)) {					\
-			dump_mm(mm);					\
+			pr_emerg("%pZm", mm);				\
 			BUG();						\
 		}							\
 	} while (0)
@@ -44,6 +44,10 @@ static char *format_vma(const struct vm_area_struct *vma, char *buf, char *end)
 {
 	return buf;
 }
+static char *format_mm(const struct mm_struct *mm, char *buf, char *end)
+{
+	return buf;
+}
 #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
 #define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index b4800c1..1ca3114 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1382,6 +1382,8 @@ char *mm_pointer(char *buf, char *end, const void *ptr,
 	switch (fmt[1]) {
 	case 'v':
 		return format_vma(ptr, buf, end);
+	case 'm':
+		return format_mm(ptr, buf, end);
 	}
 
 	return buf;
@@ -1475,8 +1477,9 @@ int kptr_restrict __read_mostly;
  *        (legacy clock framework) of the clock
  * - 'Cr' For a clock, it prints the current rate of the clock
  * - 'T' task_struct->comm
- * - 'Z[v]' Outputs a readable version of a type of memory management struct:
+ * - 'Z[mv]' Outputs a readable version of a type of memory management struct:
  *		v struct vm_area_struct
+ *		m struct mm_struct
  *
  * Note: The difference between 'S' and 'F' is that on ia64 and ppc64
  * function pointers are really function descriptors, which contain a
diff --git a/mm/debug.c b/mm/debug.c
index 82e2e1c..dff65ff 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -203,9 +203,10 @@ char *format_vma(const struct vm_area_struct *vma, char *buf, char *end)
 				buf, end);
 }
 
-void dump_mm(const struct mm_struct *mm)
+char *format_mm(const struct mm_struct *mm, char *buf, char *end)
 {
-	pr_emerg("mm %p mmap %p seqnum %d task_size %lu\n"
+	buf += snprintf(buf, buf > end ? 0 : end - buf,
+		"mm %p mmap %p seqnum %d task_size %lu\n"
 #ifdef CONFIG_MMU
 		"get_unmapped_area %p\n"
 #endif
@@ -270,8 +271,10 @@ void dump_mm(const struct mm_struct *mm)
 		""		/* This is here to not have a comma! */
 		);
 
-		dump_flags(mm->def_flags, vmaflags_names,
-				ARRAY_SIZE(vmaflags_names));
+	buf = format_flags(mm->def_flags, vmaflags_names,
+				ARRAY_SIZE(vmaflags_names), buf, end);
+
+	return buf;
 }
 
 #endif		/* CONFIG_DEBUG_VM */
-- 
1.7.10.4



* [RFC 05/11] mm: debug: dump page into a string rather than directly on screen
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
                   ` (3 preceding siblings ...)
  2015-04-14 20:56 ` [RFC 04/11] mm: debug: dump struct MM " Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-14 20:56 ` [RFC 06/11] mm: debug: clean unused code Sasha Levin
                   ` (6 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

This lets us use regular string formatting code to dump pages, and
switches VM_BUG_ON_PAGE over to it instead of printing directly to the
screen.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 include/linux/mmdebug.h |    6 ++----
 lib/vsprintf.c          |    5 ++++-
 mm/balloon_compaction.c |    4 ++--
 mm/debug.c              |   28 +++++++++++-----------------
 mm/kasan/report.c       |    2 +-
 mm/memory.c             |    2 +-
 mm/memory_hotplug.c     |    2 +-
 mm/page_alloc.c         |    2 +-
 8 files changed, 23 insertions(+), 28 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 202ebdf..8b3f5a0 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -7,9 +7,7 @@ struct page;
 struct vm_area_struct;
 struct mm_struct;
 
-extern void dump_page(struct page *page, const char *reason);
-extern void dump_page_badflags(struct page *page, const char *reason,
-			       unsigned long badflags);
+char *format_page(struct page *page, char *buf, char *end);
 
 #ifdef CONFIG_DEBUG_VM
 char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
@@ -18,7 +16,7 @@ char *format_mm(const struct mm_struct *mm, char *buf, char *end);
 #define VM_BUG_ON_PAGE(cond, page)					\
 	do {								\
 		if (unlikely(cond)) {					\
-			dump_page(page, "VM_BUG_ON_PAGE(" __stringify(cond)")");\
+			pr_emerg("%pZp", page);				\
 			BUG();						\
 		}							\
 	} while (0)
diff --git a/lib/vsprintf.c b/lib/vsprintf.c
index 1ca3114..8511be7 100644
--- a/lib/vsprintf.c
+++ b/lib/vsprintf.c
@@ -1384,6 +1384,8 @@ char *mm_pointer(char *buf, char *end, const void *ptr,
 		return format_vma(ptr, buf, end);
 	case 'm':
 		return format_mm(ptr, buf, end);
+	case 'p':
+		return format_page(ptr, buf, end);
 	}
 
 	return buf;
@@ -1477,9 +1479,10 @@ int kptr_restrict __read_mostly;
  *        (legacy clock framework) of the clock
  * - 'Cr' For a clock, it prints the current rate of the clock
  * - 'T' task_struct->comm
- * - 'Z[mv]' Outputs a readable version of a type of memory management struct:
+ * - 'Z[mpv]' Outputs a readable version of a type of memory management struct:
  *		v struct vm_area_struct
  *		m struct mm_struct
+ *		p struct page
  *
  * Note: The difference between 'S' and 'F' is that on ia64 and ppc64
  * function pointers are really function descriptors, which contain a
diff --git a/mm/balloon_compaction.c b/mm/balloon_compaction.c
index fcad832..88b3cae 100644
--- a/mm/balloon_compaction.c
+++ b/mm/balloon_compaction.c
@@ -187,7 +187,7 @@ void balloon_page_putback(struct page *page)
 		put_page(page);
 	} else {
 		WARN_ON(1);
-		dump_page(page, "not movable balloon page");
+		pr_alert("Not movable balloon page:\n%pZp", page);
 	}
 	unlock_page(page);
 }
@@ -207,7 +207,7 @@ int balloon_page_migrate(struct page *newpage,
 	BUG_ON(!trylock_page(newpage));
 
 	if (WARN_ON(!__is_movable_balloon_page(page))) {
-		dump_page(page, "not movable balloon page");
+		pr_alert("Not movable balloon page:\n%pZp", page);
 		unlock_page(newpage);
 		return rc;
 	}
diff --git a/mm/debug.c b/mm/debug.c
index dff65ff..f64bb6e 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -115,32 +115,26 @@ static char *format_flags(unsigned long flags,
 	return buf;
 }
 
-void dump_page_badflags(struct page *page, const char *reason,
-		unsigned long badflags)
+char *format_page(struct page *page, char *buf, char *end)
 {
-	pr_emerg("page:%p count:%d mapcount:%d mapping:%p index:%#lx\n",
+	buf += snprintf(buf, (buf > end ? 0 : end - buf),
+		"page:%p count:%d mapcount:%d mapping:%p index:%#lx\n",
 		  page, atomic_read(&page->_count), page_mapcount(page),
 		  page->mapping, page->index);
+
 	BUILD_BUG_ON(ARRAY_SIZE(pageflag_names) != __NR_PAGEFLAGS);
-	dump_flags(page->flags, pageflag_names, ARRAY_SIZE(pageflag_names));
-	if (reason)
-		pr_alert("page dumped because: %s\n", reason);
-	if (page->flags & badflags) {
-		pr_alert("bad because of flags:\n");
-		dump_flags(page->flags & badflags,
-				pageflag_names, ARRAY_SIZE(pageflag_names));
-	}
+
+	buf = format_flags(page->flags, pageflag_names,
+			ARRAY_SIZE(pageflag_names), buf, end);
 #ifdef CONFIG_MEMCG
 	if (page->mem_cgroup)
-		pr_alert("page->mem_cgroup:%p\n", page->mem_cgroup);
+		buf += snprintf(buf, (buf > end ? 0 : end - buf),
+			"page->mem_cgroup:%p\n", page->mem_cgroup);
 #endif
-}
 
-void dump_page(struct page *page, const char *reason)
-{
-	dump_page_badflags(page, reason, 0);
+	return buf;
 }
-EXPORT_SYMBOL(dump_page);
+EXPORT_SYMBOL(format_page);
 
 #ifdef CONFIG_DEBUG_VM
 
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 680ceed..272a282 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -121,7 +121,7 @@ static void print_address_description(struct kasan_access_info *info)
 				"kasan: bad access detected");
 			return;
 		}
-		dump_page(page, "kasan: bad access detected");
+		pr_emerg("kasan: bad access detected:\n%pZp", page);
 	}
 
 	if (kernel_or_module_addr(addr)) {
diff --git a/mm/memory.c b/mm/memory.c
index d1fa0c1..6e5d4bd 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -683,7 +683,7 @@ static void print_bad_pte(struct vm_area_struct *vma, unsigned long addr,
 		current->comm,
 		(long long)pte_val(pte), (long long)pmd_val(*pmd));
 	if (page)
-		dump_page(page, "bad pte");
+		pr_alert("Bad pte:\n%pZp", page);
 	printk(KERN_ALERT
 		"addr:%p vm_flags:%08lx anon_vma:%p mapping:%p index:%lx\n",
 		(void *)addr, vma->vm_flags, vma->anon_vma, mapping, index);
diff --git a/mm/memory_hotplug.c b/mm/memory_hotplug.c
index c6a8d95..366fba0 100644
--- a/mm/memory_hotplug.c
+++ b/mm/memory_hotplug.c
@@ -1431,7 +1431,7 @@ do_migrate_range(unsigned long start_pfn, unsigned long end_pfn)
 #ifdef CONFIG_DEBUG_VM
 			printk(KERN_ALERT "removing pfn %lx from LRU failed\n",
 			       pfn);
-			dump_page(page, "failed to remove from LRU");
+			pr_alert("Failed to remove from LRU:\n%pZp", page);
 #endif
 			put_page(page);
 			/* Because we don't have big zone->lock. we should
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 5bd9711..4887731 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -332,7 +332,7 @@ static void bad_page(struct page *page, const char *reason,
 
 	printk(KERN_ALERT "BUG: Bad page state in process %s  pfn:%05lx\n",
 		current->comm, page_to_pfn(page));
-	dump_page_badflags(page, reason, bad_flags);
+	pr_alert("%s:\n%pZpBad flags: %lX", reason, page, bad_flags);
 
 	print_modules();
 	dump_stack();
-- 
1.7.10.4



* [RFC 06/11] mm: debug: clean unused code
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
                   ` (4 preceding siblings ...)
  2015-04-14 20:56 ` [RFC 05/11] mm: debug: dump page " Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-14 20:56 ` [RFC 07/11] mm: debug: VM_BUG() Sasha Levin
                   ` (5 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

Remove dump_flags(), which is no longer used.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 mm/debug.c |   30 ------------------------------
 1 file changed, 30 deletions(-)

diff --git a/mm/debug.c b/mm/debug.c
index f64bb6e..13f2555 100644
--- a/mm/debug.c
+++ b/mm/debug.c
@@ -50,36 +50,6 @@ static const struct trace_print_flags pageflag_names[] = {
 #endif
 };
 
-static void dump_flags(unsigned long flags,
-			const struct trace_print_flags *names, int count)
-{
-	const char *delim = "";
-	unsigned long mask;
-	int i;
-
-	pr_emerg("flags: %#lx(", flags);
-
-	/* remove zone id */
-	flags &= (1UL << NR_PAGEFLAGS) - 1;
-
-	for (i = 0; i < count && flags; i++) {
-
-		mask = names[i].mask;
-		if ((flags & mask) != mask)
-			continue;
-
-		flags &= ~mask;
-		pr_cont("%s%s", delim, names[i].name);
-		delim = "|";
-	}
-
-	/* check for left over flags */
-	if (flags)
-		pr_cont("%s%#lx", delim, flags);
-
-	pr_cont(")\n");
-}
-
 static char *format_flags(unsigned long flags,
 			const struct trace_print_flags *names, int count,
 			char *buf, char *end)
-- 
1.7.10.4



* [RFC 07/11] mm: debug: VM_BUG()
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
                   ` (5 preceding siblings ...)
  2015-04-14 20:56 ` [RFC 06/11] mm: debug: clean unused code Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-30 16:22   ` Kirill A. Shutemov
  2015-04-14 20:56 ` [RFC 08/11] mm: debug: kill VM_BUG_ON_PAGE Sasha Levin
                   ` (4 subsequent siblings)
  11 siblings, 1 reply; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

VM_BUG() complements VM_BUG_ON() just as WARN() complements WARN_ON().

This lets us format custom strings to output when a VM_BUG() is hit.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 include/linux/mmdebug.h |   10 +++++++++-
 1 file changed, 9 insertions(+), 1 deletion(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 8b3f5a0..42f41e3 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -12,7 +12,14 @@ char *format_page(struct page *page, char *buf, char *end);
 #ifdef CONFIG_DEBUG_VM
 char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
 char *format_mm(const struct mm_struct *mm, char *buf, char *end);
-#define VM_BUG_ON(cond) BUG_ON(cond)
+#define VM_BUG(cond, fmt...)						\
+	do {								\
+		if (unlikely(cond)) {					\
+			pr_emerg(fmt);					\
+			BUG();						\
+		}							\
+	} while (0)
+#define VM_BUG_ON(cond) VM_BUG(cond, "%s\n", __stringify(cond))
 #define VM_BUG_ON_PAGE(cond, page)					\
 	do {								\
 		if (unlikely(cond)) {					\
@@ -46,6 +53,7 @@ static char *format_mm(const struct mm_struct *mm, char *buf, char *end)
 {
 	return buf;
 }
+#define VM_BUG(cond, fmt...) BUILD_BUG_ON_INVALID(cond)
 #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
 #define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
-- 
1.7.10.4



* [RFC 08/11] mm: debug: kill VM_BUG_ON_PAGE
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
                   ` (6 preceding siblings ...)
  2015-04-14 20:56 ` [RFC 07/11] mm: debug: VM_BUG() Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-14 20:56 ` [RFC 09/11] mm: debug: kill VM_BUG_ON_VMA Sasha Levin
                   ` (3 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

Just use VM_BUG() instead.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 arch/x86/mm/gup.c              |    8 +++----
 include/linux/hugetlb.h        |    2 +-
 include/linux/hugetlb_cgroup.h |    4 ++--
 include/linux/mm.h             |   22 +++++++++---------
 include/linux/mmdebug.h        |    8 -------
 include/linux/page-flags.h     |   26 +++++++++++-----------
 include/linux/pagemap.h        |   11 ++++-----
 mm/cleancache.c                |    6 ++---
 mm/compaction.c                |    2 +-
 mm/filemap.c                   |   18 +++++++--------
 mm/gup.c                       |    6 ++---
 mm/huge_memory.c               |   38 +++++++++++++++----------------
 mm/hugetlb.c                   |   14 ++++++------
 mm/hugetlb_cgroup.c            |    2 +-
 mm/internal.h                  |    8 +++----
 mm/ksm.c                       |   13 ++++++-----
 mm/memcontrol.c                |   48 ++++++++++++++++++++--------------------
 mm/memory.c                    |    8 +++----
 mm/migrate.c                   |    6 ++---
 mm/mlock.c                     |    4 ++--
 mm/page_alloc.c                |   26 +++++++++++-----------
 mm/page_io.c                   |    4 ++--
 mm/rmap.c                      |   14 ++++++------
 mm/shmem.c                     |   10 +++++----
 mm/slub.c                      |    4 ++--
 mm/swap.c                      |   39 ++++++++++++++++----------------
 mm/swap_state.c                |   16 +++++++-------
 mm/swapfile.c                  |    8 +++----
 mm/vmscan.c                    |   24 ++++++++++----------
 29 files changed, 198 insertions(+), 201 deletions(-)

diff --git a/arch/x86/mm/gup.c b/arch/x86/mm/gup.c
index 81bf3d2..b04ea9e 100644
--- a/arch/x86/mm/gup.c
+++ b/arch/x86/mm/gup.c
@@ -108,8 +108,8 @@ static noinline int gup_pte_range(pmd_t pmd, unsigned long addr,
 
 static inline void get_head_page_multiple(struct page *page, int nr)
 {
-	VM_BUG_ON_PAGE(page != compound_head(page), page);
-	VM_BUG_ON_PAGE(page_count(page) == 0, page);
+	VM_BUG(page != compound_head(page), "%pZp", page);
+	VM_BUG(page_count(page) == 0, "%pZp", page);
 	atomic_add(nr, &page->_count);
 	SetPageReferenced(page);
 }
@@ -135,7 +135,7 @@ static noinline int gup_huge_pmd(pmd_t pmd, unsigned long addr,
 	head = pte_page(pte);
 	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
+		VM_BUG(compound_head(page) != head, "%pZp", page);
 		pages[*nr] = page;
 		if (PageTail(page))
 			get_huge_page_tail(page);
@@ -212,7 +212,7 @@ static noinline int gup_huge_pud(pud_t pud, unsigned long addr,
 	head = pte_page(pte);
 	page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
+		VM_BUG(compound_head(page) != head, "%pZp", page);
 		pages[*nr] = page;
 		if (PageTail(page))
 			get_huge_page_tail(page);
diff --git a/include/linux/hugetlb.h b/include/linux/hugetlb.h
index 2050261..0da5cc4 100644
--- a/include/linux/hugetlb.h
+++ b/include/linux/hugetlb.h
@@ -415,7 +415,7 @@ static inline pte_t arch_make_huge_pte(pte_t entry, struct vm_area_struct *vma,
 
 static inline struct hstate *page_hstate(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageHuge(page), page);
+	VM_BUG(!PageHuge(page), "%pZp", page);
 	return size_to_hstate(PAGE_SIZE << compound_order(page));
 }
 
diff --git a/include/linux/hugetlb_cgroup.h b/include/linux/hugetlb_cgroup.h
index bcc853e..7cca841 100644
--- a/include/linux/hugetlb_cgroup.h
+++ b/include/linux/hugetlb_cgroup.h
@@ -28,7 +28,7 @@ struct hugetlb_cgroup;
 
 static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageHuge(page), page);
+	VM_BUG(!PageHuge(page), "%pZp", page);
 
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return NULL;
@@ -38,7 +38,7 @@ static inline struct hugetlb_cgroup *hugetlb_cgroup_from_page(struct page *page)
 static inline
 int set_hugetlb_cgroup(struct page *page, struct hugetlb_cgroup *h_cg)
 {
-	VM_BUG_ON_PAGE(!PageHuge(page), page);
+	VM_BUG(!PageHuge(page), "%pZp", page);
 
 	if (compound_order(page) < HUGETLB_CGROUP_MIN_ORDER)
 		return -1;
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 5d20fba..62996a8 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -340,7 +340,7 @@ static inline int get_freepage_migratetype(struct page *page)
  */
 static inline int put_page_testzero(struct page *page)
 {
-	VM_BUG_ON_PAGE(atomic_read(&page->_count) == 0, page);
+	VM_BUG(atomic_read(&page->_count) == 0, "%pZp", page);
 	return atomic_dec_and_test(&page->_count);
 }
 
@@ -404,7 +404,7 @@ extern void kvfree(const void *addr);
 static inline void compound_lock(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG(PageSlab(page), "%pZp", page);
 	bit_spin_lock(PG_compound_lock, &page->flags);
 #endif
 }
@@ -412,7 +412,7 @@ static inline void compound_lock(struct page *page)
 static inline void compound_unlock(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG(PageSlab(page), "%pZp", page);
 	bit_spin_unlock(PG_compound_lock, &page->flags);
 #endif
 }
@@ -448,7 +448,7 @@ static inline void page_mapcount_reset(struct page *page)
 
 static inline int page_mapcount(struct page *page)
 {
-	VM_BUG_ON_PAGE(PageSlab(page), page);
+	VM_BUG(PageSlab(page), "%pZp", page);
 	return atomic_read(&page->_mapcount) + 1;
 }
 
@@ -472,7 +472,7 @@ static inline bool __compound_tail_refcounted(struct page *page)
  */
 static inline bool compound_tail_refcounted(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageHead(page), page);
+	VM_BUG(!PageHead(page), "%pZp", page);
 	return __compound_tail_refcounted(page);
 }
 
@@ -481,9 +481,9 @@ static inline void get_huge_page_tail(struct page *page)
 	/*
 	 * __split_huge_page_refcount() cannot run from under us.
 	 */
-	VM_BUG_ON_PAGE(!PageTail(page), page);
-	VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
-	VM_BUG_ON_PAGE(atomic_read(&page->_count) != 0, page);
+	VM_BUG(!PageTail(page), "%pZp", page);
+	VM_BUG(page_mapcount(page) < 0, "%pZp", page);
+	VM_BUG(atomic_read(&page->_count) != 0, "%pZp", page);
 	if (compound_tail_refcounted(page->first_page))
 		atomic_inc(&page->_mapcount);
 }
@@ -499,7 +499,7 @@ static inline void get_page(struct page *page)
 	 * Getting a normal page or the head of a compound page
 	 * requires to already have an elevated page->_count.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
+	VM_BUG(atomic_read(&page->_count) <= 0, "%pZp", page);
 	atomic_inc(&page->_count);
 }
 
@@ -1441,7 +1441,7 @@ static inline bool ptlock_init(struct page *page)
 	 * slab code uses page->slab_cache and page->first_page (for tail
 	 * pages), which share storage with page->ptl.
 	 */
-	VM_BUG_ON_PAGE(*(unsigned long *)&page->ptl, page);
+	VM_BUG(*(unsigned long *)&page->ptl, "%pZp", page);
 	if (!ptlock_alloc(page))
 		return false;
 	spin_lock_init(ptlock_ptr(page));
@@ -1538,7 +1538,7 @@ static inline bool pgtable_pmd_page_ctor(struct page *page)
 static inline void pgtable_pmd_page_dtor(struct page *page)
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	VM_BUG_ON_PAGE(page->pmd_huge_pte, page);
+	VM_BUG(page->pmd_huge_pte, "%pZp", page);
 #endif
 	ptlock_free(page);
 }
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 42f41e3..f43f868 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -20,13 +20,6 @@ char *format_mm(const struct mm_struct *mm, char *buf, char *end);
 		}							\
 	} while (0)
 #define VM_BUG_ON(cond) VM_BUG(cond, "%s\n", __stringify(cond))
-#define VM_BUG_ON_PAGE(cond, page)					\
-	do {								\
-		if (unlikely(cond)) {					\
-			pr_emerg("%pZp", page);				\
-			BUG();						\
-		}							\
-	} while (0)
 #define VM_BUG_ON_VMA(cond, vma)					\
 	do {								\
 		if (unlikely(cond)) {					\
@@ -55,7 +48,6 @@ static char *format_mm(const struct mm_struct *mm, char *buf, char *end)
 }
 #define VM_BUG(cond, fmt...) BUILD_BUG_ON_INVALID(cond)
 #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
-#define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
 #define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
 #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
diff --git a/include/linux/page-flags.h b/include/linux/page-flags.h
index 91b7f9b..f1a18ad 100644
--- a/include/linux/page-flags.h
+++ b/include/linux/page-flags.h
@@ -139,13 +139,13 @@ enum pageflags {
 #define PF_HEAD(page, enforce)	compound_head(page)
 #define PF_NO_TAIL(page, enforce) ({					\
 		if (enforce)						\
-			VM_BUG_ON_PAGE(PageTail(page), page);		\
+			VM_BUG(PageTail(page), "%pZp", page);		\
 		else							\
 			page = compound_head(page);			\
 		page;})
 #define PF_NO_COMPOUND(page, enforce) ({					\
 		if (enforce)						\
-			VM_BUG_ON_PAGE(PageCompound(page), page);	\
+			VM_BUG(PageCompound(page), "%pZp", page);	\
 		page;})
 
 /*
@@ -429,14 +429,14 @@ static inline int PageUptodate(struct page *page)
 
 static inline void __SetPageUptodate(struct page *page)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
+	VM_BUG(PageTail(page), "%pZp", page);
 	smp_wmb();
 	__set_bit(PG_uptodate, &page->flags);
 }
 
 static inline void SetPageUptodate(struct page *page)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
+	VM_BUG(PageTail(page), "%pZp", page);
 	/*
 	 * Memory barrier must be issued before setting the PG_uptodate bit,
 	 * so that all previous stores issued in order to bring the page
@@ -572,7 +572,7 @@ static inline bool page_huge_active(struct page *page)
  */
 static inline int PageTransHuge(struct page *page)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
+	VM_BUG(PageTail(page), "%pZp", page);
 	return PageHead(page);
 }
 
@@ -620,13 +620,13 @@ static inline int PageBuddy(struct page *page)
 
 static inline void __SetPageBuddy(struct page *page)
 {
-	VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
+	VM_BUG(atomic_read(&page->_mapcount) != -1, "%pZp", page);
 	atomic_set(&page->_mapcount, PAGE_BUDDY_MAPCOUNT_VALUE);
 }
 
 static inline void __ClearPageBuddy(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageBuddy(page), page);
+	VM_BUG(!PageBuddy(page), "%pZp", page);
 	atomic_set(&page->_mapcount, -1);
 }
 
@@ -639,13 +639,13 @@ static inline int PageBalloon(struct page *page)
 
 static inline void __SetPageBalloon(struct page *page)
 {
-	VM_BUG_ON_PAGE(atomic_read(&page->_mapcount) != -1, page);
+	VM_BUG(atomic_read(&page->_mapcount) != -1, "%pZp", page);
 	atomic_set(&page->_mapcount, PAGE_BALLOON_MAPCOUNT_VALUE);
 }
 
 static inline void __ClearPageBalloon(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageBalloon(page), page);
+	VM_BUG(!PageBalloon(page), "%pZp", page);
 	atomic_set(&page->_mapcount, -1);
 }
 
@@ -655,25 +655,25 @@ static inline void __ClearPageBalloon(struct page *page)
  */
 static inline int PageSlabPfmemalloc(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageSlab(page), page);
+	VM_BUG(!PageSlab(page), "%pZp", page);
 	return PageActive(page);
 }
 
 static inline void SetPageSlabPfmemalloc(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageSlab(page), page);
+	VM_BUG(!PageSlab(page), "%pZp", page);
 	SetPageActive(page);
 }
 
 static inline void __ClearPageSlabPfmemalloc(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageSlab(page), page);
+	VM_BUG(!PageSlab(page), "%pZp", page);
 	__ClearPageActive(page);
 }
 
 static inline void ClearPageSlabPfmemalloc(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageSlab(page), page);
+	VM_BUG(!PageSlab(page), "%pZp", page);
 	ClearPageActive(page);
 }
 
diff --git a/include/linux/pagemap.h b/include/linux/pagemap.h
index 7c37907..fa9ba8b 100644
--- a/include/linux/pagemap.h
+++ b/include/linux/pagemap.h
@@ -157,7 +157,7 @@ static inline int page_cache_get_speculative(struct page *page)
 	 * disabling preempt, and hence no need for the "speculative get" that
 	 * SMP requires.
 	 */
-	VM_BUG_ON_PAGE(page_count(page) == 0, page);
+	VM_BUG(page_count(page) == 0, "%pZp", page);
 	atomic_inc(&page->_count);
 
 #else
@@ -170,7 +170,7 @@ static inline int page_cache_get_speculative(struct page *page)
 		return 0;
 	}
 #endif
-	VM_BUG_ON_PAGE(PageTail(page), page);
+	VM_BUG(PageTail(page), "%pZp", page);
 
 	return 1;
 }
@@ -186,14 +186,15 @@ static inline int page_cache_add_speculative(struct page *page, int count)
 # ifdef CONFIG_PREEMPT_COUNT
 	VM_BUG_ON(!in_atomic());
 # endif
-	VM_BUG_ON_PAGE(page_count(page) == 0, page);
+	VM_BUG(page_count(page) == 0, "%pZp", page);
 	atomic_add(count, &page->_count);
 
 #else
 	if (unlikely(!atomic_add_unless(&page->_count, count, 0)))
 		return 0;
 #endif
-	VM_BUG_ON_PAGE(PageCompound(page) && page != compound_head(page), page);
+	VM_BUG(PageCompound(page) && page != compound_head(page), "%pZp",
+	       page);
 
 	return 1;
 }
@@ -205,7 +206,7 @@ static inline int page_freeze_refs(struct page *page, int count)
 
 static inline void page_unfreeze_refs(struct page *page, int count)
 {
-	VM_BUG_ON_PAGE(page_count(page) != 0, page);
+	VM_BUG(page_count(page) != 0, "%pZp", page);
 	VM_BUG_ON(count == 0);
 
 	atomic_set(&page->_count, count);
diff --git a/mm/cleancache.c b/mm/cleancache.c
index 8fc5081..d4d5ce0 100644
--- a/mm/cleancache.c
+++ b/mm/cleancache.c
@@ -185,7 +185,7 @@ int __cleancache_get_page(struct page *page)
 		goto out;
 	}
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 	pool_id = page->mapping->host->i_sb->cleancache_poolid;
 	if (pool_id < 0)
 		goto out;
@@ -223,7 +223,7 @@ void __cleancache_put_page(struct page *page)
 		return;
 	}
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 	pool_id = page->mapping->host->i_sb->cleancache_poolid;
 	if (pool_id >= 0 &&
 		cleancache_get_key(page->mapping->host, &key) >= 0) {
@@ -252,7 +252,7 @@ void __cleancache_invalidate_page(struct address_space *mapping,
 		return;
 
 	if (pool_id >= 0) {
-		VM_BUG_ON_PAGE(!PageLocked(page), page);
+		VM_BUG(!PageLocked(page), "%pZp", page);
 		if (cleancache_get_key(mapping->host, &key) >= 0) {
 			cleancache_ops->invalidate_page(pool_id,
 					key, page->index);
diff --git a/mm/compaction.c b/mm/compaction.c
index e6c4f94..e6c8601 100644
--- a/mm/compaction.c
+++ b/mm/compaction.c
@@ -800,7 +800,7 @@ isolate_migratepages_block(struct compact_control *cc, unsigned long low_pfn,
 		if (__isolate_lru_page(page, isolate_mode) != 0)
 			continue;
 
-		VM_BUG_ON_PAGE(PageTransCompound(page), page);
+		VM_BUG(PageTransCompound(page), "%pZp", page);
 
 		/* Successfully isolated */
 		del_page_from_lru_list(page, lruvec, page_lru(page));
diff --git a/mm/filemap.c b/mm/filemap.c
index 3544844..11a62de 100644
--- a/mm/filemap.c
+++ b/mm/filemap.c
@@ -462,9 +462,9 @@ int replace_page_cache_page(struct page *old, struct page *new, gfp_t gfp_mask)
 {
 	int error;
 
-	VM_BUG_ON_PAGE(!PageLocked(old), old);
-	VM_BUG_ON_PAGE(!PageLocked(new), new);
-	VM_BUG_ON_PAGE(new->mapping, new);
+	VM_BUG(!PageLocked(old), "%pZp", old);
+	VM_BUG(!PageLocked(new), "%pZp", new);
+	VM_BUG(new->mapping, "%pZp", new);
 
 	error = radix_tree_preload(gfp_mask & ~__GFP_HIGHMEM);
 	if (!error) {
@@ -549,8 +549,8 @@ static int __add_to_page_cache_locked(struct page *page,
 	struct mem_cgroup *memcg;
 	int error;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(PageSwapBacked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
+	VM_BUG(PageSwapBacked(page), "%pZp", page);
 
 	if (!huge) {
 		error = mem_cgroup_try_charge(page, current->mm,
@@ -743,7 +743,7 @@ EXPORT_SYMBOL_GPL(add_page_wait_queue);
 void unlock_page(struct page *page)
 {
 	page = compound_head(page);
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 	clear_bit_unlock(PG_locked, &page->flags);
 	smp_mb__after_atomic();
 	wake_up_page(page, PG_locked);
@@ -1036,7 +1036,7 @@ repeat:
 			page_cache_release(page);
 			goto repeat;
 		}
-		VM_BUG_ON_PAGE(page->index != offset, page);
+		VM_BUG(page->index != offset, "%pZp", page);
 	}
 	return page;
 }
@@ -1093,7 +1093,7 @@ repeat:
 			page_cache_release(page);
 			goto repeat;
 		}
-		VM_BUG_ON_PAGE(page->index != offset, page);
+		VM_BUG(page->index != offset, "%pZp", page);
 	}
 
 	if (page && (fgp_flags & FGP_ACCESSED))
@@ -1914,7 +1914,7 @@ retry_find:
 		put_page(page);
 		goto retry_find;
 	}
-	VM_BUG_ON_PAGE(page->index != offset, page);
+	VM_BUG(page->index != offset, "%pZp", page);
 
 	/*
 	 * We have a locked page in the page cache, now we need to check
diff --git a/mm/gup.c b/mm/gup.c
index 6297f6b..743648e 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -1084,7 +1084,7 @@ static int gup_huge_pmd(pmd_t orig, pmd_t *pmdp, unsigned long addr,
 	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	tail = page;
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
+		VM_BUG(compound_head(page) != head, "%pZp", page);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
@@ -1131,7 +1131,7 @@ static int gup_huge_pud(pud_t orig, pud_t *pudp, unsigned long addr,
 	page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	tail = page;
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
+		VM_BUG(compound_head(page) != head, "%pZp", page);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
@@ -1174,7 +1174,7 @@ static int gup_huge_pgd(pgd_t orig, pgd_t *pgdp, unsigned long addr,
 	page = head + ((addr & ~PGDIR_MASK) >> PAGE_SHIFT);
 	tail = page;
 	do {
-		VM_BUG_ON_PAGE(compound_head(page) != head, page);
+		VM_BUG(compound_head(page) != head, "%pZp", page);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index aca0846..7ba3947 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -724,7 +724,7 @@ static int __do_huge_pmd_anonymous_page(struct mm_struct *mm,
 	pgtable_t pgtable;
 	spinlock_t *ptl;
 
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
+	VM_BUG(!PageCompound(page), "%pZp", page);
 
 	if (mem_cgroup_try_charge(page, mm, gfp, &memcg))
 		return VM_FAULT_OOM;
@@ -898,7 +898,7 @@ int copy_huge_pmd(struct mm_struct *dst_mm, struct mm_struct *src_mm,
 		goto out;
 	}
 	src_page = pmd_page(pmd);
-	VM_BUG_ON_PAGE(!PageHead(src_page), src_page);
+	VM_BUG(!PageHead(src_page), "%pZp", src_page);
 	get_page(src_page);
 	page_dup_rmap(src_page);
 	add_mm_counter(dst_mm, MM_ANONPAGES, HPAGE_PMD_NR);
@@ -1030,7 +1030,7 @@ static int do_huge_pmd_wp_page_fallback(struct mm_struct *mm,
 	ptl = pmd_lock(mm, pmd);
 	if (unlikely(!pmd_same(*pmd, orig_pmd)))
 		goto out_free_pages;
-	VM_BUG_ON_PAGE(!PageHead(page), page);
+	VM_BUG(!PageHead(page), "%pZp", page);
 
 	pmdp_clear_flush_notify(vma, haddr, pmd);
 	/* leave pmd empty until pte is filled */
@@ -1102,7 +1102,7 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 		goto out_unlock;
 
 	page = pmd_page(orig_pmd);
-	VM_BUG_ON_PAGE(!PageCompound(page) || !PageHead(page), page);
+	VM_BUG(!PageCompound(page) || !PageHead(page), "%pZp", page);
 	if (page_mapcount(page) == 1) {
 		pmd_t entry;
 		entry = pmd_mkyoung(orig_pmd);
@@ -1185,7 +1185,7 @@ alloc:
 			add_mm_counter(mm, MM_ANONPAGES, HPAGE_PMD_NR);
 			put_huge_zero_page();
 		} else {
-			VM_BUG_ON_PAGE(!PageHead(page), page);
+			VM_BUG(!PageHead(page), "%pZp", page);
 			page_remove_rmap(page);
 			put_page(page);
 		}
@@ -1223,7 +1223,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 		goto out;
 
 	page = pmd_page(*pmd);
-	VM_BUG_ON_PAGE(!PageHead(page), page);
+	VM_BUG(!PageHead(page), "%pZp", page);
 	if (flags & FOLL_TOUCH) {
 		pmd_t _pmd;
 		/*
@@ -1248,7 +1248,7 @@ struct page *follow_trans_huge_pmd(struct vm_area_struct *vma,
 		}
 	}
 	page += (addr & ~HPAGE_PMD_MASK) >> PAGE_SHIFT;
-	VM_BUG_ON_PAGE(!PageCompound(page), page);
+	VM_BUG(!PageCompound(page), "%pZp", page);
 	if (flags & FOLL_GET)
 		get_page_foll(page);
 
@@ -1401,7 +1401,7 @@ int madvise_free_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 
 		/* No hugepage in swapcache */
 		page = pmd_page(orig_pmd);
-		VM_BUG_ON_PAGE(PageSwapCache(page), page);
+		VM_BUG(PageSwapCache(page), "%pZp", page);
 
 		orig_pmd = pmd_mkold(orig_pmd);
 		orig_pmd = pmd_mkclean(orig_pmd);
@@ -1442,9 +1442,9 @@ int zap_huge_pmd(struct mmu_gather *tlb, struct vm_area_struct *vma,
 		} else {
 			page = pmd_page(orig_pmd);
 			page_remove_rmap(page);
-			VM_BUG_ON_PAGE(page_mapcount(page) < 0, page);
+			VM_BUG(page_mapcount(page) < 0, "%pZp", page);
 			add_mm_counter(tlb->mm, MM_ANONPAGES, -HPAGE_PMD_NR);
-			VM_BUG_ON_PAGE(!PageHead(page), page);
+			VM_BUG(!PageHead(page), "%pZp", page);
 			atomic_long_dec(&tlb->mm->nr_ptes);
 			spin_unlock(ptl);
 			tlb_remove_page(tlb, page);
@@ -2190,9 +2190,9 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		if (unlikely(!page))
 			goto out;
 
-		VM_BUG_ON_PAGE(PageCompound(page), page);
-		VM_BUG_ON_PAGE(!PageAnon(page), page);
-		VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
+		VM_BUG(PageCompound(page), "%pZp", page);
+		VM_BUG(!PageAnon(page), "%pZp", page);
+		VM_BUG(!PageSwapBacked(page), "%pZp", page);
 
 		/*
 		 * We can do it before isolate_lru_page because the
@@ -2235,8 +2235,8 @@ static int __collapse_huge_page_isolate(struct vm_area_struct *vma,
 		}
 		/* 0 stands for page_is_file_cache(page) == false */
 		inc_zone_page_state(page, NR_ISOLATED_ANON + 0);
-		VM_BUG_ON_PAGE(!PageLocked(page), page);
-		VM_BUG_ON_PAGE(PageLRU(page), page);
+		VM_BUG(!PageLocked(page), "%pZp", page);
+		VM_BUG(PageLRU(page), "%pZp", page);
 
 		/* If there is no mapped pte young don't collapse the page */
 		if (pte_young(pteval) || PageReferenced(page) ||
@@ -2278,7 +2278,7 @@ static void __collapse_huge_page_copy(pte_t *pte, struct page *page,
 		} else {
 			src_page = pte_page(pteval);
 			copy_user_highpage(page, src_page, address, vma);
-			VM_BUG_ON_PAGE(page_mapcount(src_page) != 1, src_page);
+			VM_BUG(page_mapcount(src_page) != 1, "%pZp", src_page);
 			release_pte_page(src_page);
 			/*
 			 * ptl mostly unnecessary, but preempt has to
@@ -2381,7 +2381,7 @@ khugepaged_alloc_page(struct page **hpage, gfp_t gfp, struct mm_struct *mm,
 		       struct vm_area_struct *vma, unsigned long address,
 		       int node)
 {
-	VM_BUG_ON_PAGE(*hpage, *hpage);
+	VM_BUG(*hpage, "%pZp", *hpage);
 
 	/*
 	 * Before allocating the hugepage, release the mmap_sem read lock.
@@ -2655,7 +2655,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 		if (khugepaged_scan_abort(node))
 			goto out_unmap;
 		khugepaged_node_load[node]++;
-		VM_BUG_ON_PAGE(PageCompound(page), page);
+		VM_BUG(PageCompound(page), "%pZp", page);
 		if (!PageLRU(page) || PageLocked(page) || !PageAnon(page))
 			goto out_unmap;
 		/*
@@ -2955,7 +2955,7 @@ again:
 		return;
 	}
 	page = pmd_page(*pmd);
-	VM_BUG_ON_PAGE(!page_count(page), page);
+	VM_BUG(!page_count(page), "%pZp", page);
 	get_page(page);
 	spin_unlock(ptl);
 	mmu_notifier_invalidate_range_end(mm, mmun_start, mmun_end);
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index e8c92ae..584b516 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -901,7 +901,7 @@ static void update_and_free_page(struct hstate *h, struct page *page)
 				1 << PG_active | 1 << PG_private |
 				1 << PG_writeback);
 	}
-	VM_BUG_ON_PAGE(hugetlb_cgroup_from_page(page), page);
+	VM_BUG(hugetlb_cgroup_from_page(page), "%pZp", page);
 	set_compound_page_dtor(page, NULL);
 	set_page_refcounted(page);
 	if (hstate_is_gigantic(h)) {
@@ -932,20 +932,20 @@ struct hstate *size_to_hstate(unsigned long size)
  */
 bool page_huge_active(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageHuge(page), page);
+	VM_BUG(!PageHuge(page), "%pZp", page);
 	return PageHead(page) && PagePrivate(&page[1]);
 }
 
 /* never called for tail page */
 static void set_page_huge_active(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
+	VM_BUG(!PageHeadHuge(page), "%pZp", page);
 	SetPagePrivate(&page[1]);
 }
 
 static void clear_page_huge_active(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageHeadHuge(page), page);
+	VM_BUG(!PageHeadHuge(page), "%pZp", page);
 	ClearPagePrivate(&page[1]);
 }
 
@@ -1373,7 +1373,7 @@ retry:
 		 * no users -- drop the buddy allocator's reference.
 		 */
 		put_page_testzero(page);
-		VM_BUG_ON_PAGE(page_count(page), page);
+		VM_BUG(page_count(page), "%pZp", page);
 		enqueue_huge_page(h, page);
 	}
 free:
@@ -3934,7 +3934,7 @@ bool isolate_huge_page(struct page *page, struct list_head *list)
 {
 	bool ret = true;
 
-	VM_BUG_ON_PAGE(!PageHead(page), page);
+	VM_BUG(!PageHead(page), "%pZp", page);
 	spin_lock(&hugetlb_lock);
 	if (!page_huge_active(page) || !get_page_unless_zero(page)) {
 		ret = false;
@@ -3949,7 +3949,7 @@ unlock:
 
 void putback_active_hugepage(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageHead(page), page);
+	VM_BUG(!PageHead(page), "%pZp", page);
 	spin_lock(&hugetlb_lock);
 	set_page_huge_active(page);
 	list_move_tail(&page->lru, &(page_hstate(page))->hugepage_activelist);
diff --git a/mm/hugetlb_cgroup.c b/mm/hugetlb_cgroup.c
index 6e00574..9df90f5 100644
--- a/mm/hugetlb_cgroup.c
+++ b/mm/hugetlb_cgroup.c
@@ -403,7 +403,7 @@ void hugetlb_cgroup_migrate(struct page *oldhpage, struct page *newhpage)
 	if (hugetlb_cgroup_disabled())
 		return;
 
-	VM_BUG_ON_PAGE(!PageHuge(oldhpage), oldhpage);
+	VM_BUG(!PageHuge(oldhpage), "%pZp", oldhpage);
 	spin_lock(&hugetlb_lock);
 	h_cg = hugetlb_cgroup_from_page(oldhpage);
 	set_hugetlb_cgroup(oldhpage, NULL);
diff --git a/mm/internal.h b/mm/internal.h
index a25e359..0fefe0b 100644
--- a/mm/internal.h
+++ b/mm/internal.h
@@ -42,8 +42,8 @@ static inline unsigned long ra_submit(struct file_ra_state *ra,
  */
 static inline void set_page_refcounted(struct page *page)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(atomic_read(&page->_count), page);
+	VM_BUG(PageTail(page), "%pZp", page);
+	VM_BUG(atomic_read(&page->_count), "%pZp", page);
 	set_page_count(page, 1);
 }
 
@@ -61,7 +61,7 @@ static inline void __get_page_tail_foll(struct page *page,
 	 * speculative page access (like in
 	 * page_cache_get_speculative()) on tail pages.
 	 */
-	VM_BUG_ON_PAGE(atomic_read(&page->first_page->_count) <= 0, page);
+	VM_BUG(atomic_read(&page->first_page->_count) <= 0, "%pZp", page);
 	if (get_page_head)
 		atomic_inc(&page->first_page->_count);
 	get_huge_page_tail(page);
@@ -86,7 +86,7 @@ static inline void get_page_foll(struct page *page)
 		 * Getting a normal page or the head of a compound page
 		 * requires to already have an elevated page->_count.
 		 */
-		VM_BUG_ON_PAGE(atomic_read(&page->_count) <= 0, page);
+		VM_BUG(atomic_read(&page->_count) <= 0, "%pZp", page);
 		atomic_inc(&page->_count);
 	}
 }
diff --git a/mm/ksm.c b/mm/ksm.c
index bc7be0e..040185f 100644
--- a/mm/ksm.c
+++ b/mm/ksm.c
@@ -1897,13 +1897,13 @@ int rmap_walk_ksm(struct page *page, struct rmap_walk_control *rwc)
 	int ret = SWAP_AGAIN;
 	int search_new_forks = 0;
 
-	VM_BUG_ON_PAGE(!PageKsm(page), page);
+	VM_BUG(!PageKsm(page), "%pZp", page);
 
 	/*
 	 * Rely on the page lock to protect against concurrent modifications
 	 * to that page's node of the stable tree.
 	 */
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 
 	stable_node = page_stable_node(page);
 	if (!stable_node)
@@ -1957,13 +1957,14 @@ void ksm_migrate_page(struct page *newpage, struct page *oldpage)
 {
 	struct stable_node *stable_node;
 
-	VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage);
-	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
-	VM_BUG_ON_PAGE(newpage->mapping != oldpage->mapping, newpage);
+	VM_BUG(!PageLocked(oldpage), "%pZp", oldpage);
+	VM_BUG(!PageLocked(newpage), "%pZp", newpage);
+	VM_BUG(newpage->mapping != oldpage->mapping, "%pZp", newpage);
 
 	stable_node = page_stable_node(newpage);
 	if (stable_node) {
-		VM_BUG_ON_PAGE(stable_node->kpfn != page_to_pfn(oldpage), oldpage);
+		VM_BUG(stable_node->kpfn != page_to_pfn(oldpage), "%pZp",
+		       oldpage);
 		stable_node->kpfn = page_to_pfn(newpage);
 		/*
 		 * newpage->mapping was set in advance; now we need smp_wmb()
diff --git a/mm/memcontrol.c b/mm/memcontrol.c
index 14c2f20..6ae7c39 100644
--- a/mm/memcontrol.c
+++ b/mm/memcontrol.c
@@ -2365,7 +2365,7 @@ struct mem_cgroup *try_get_mem_cgroup_from_page(struct page *page)
 	unsigned short id;
 	swp_entry_t ent;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 
 	memcg = page->mem_cgroup;
 	if (memcg) {
@@ -2407,7 +2407,7 @@ static void unlock_page_lru(struct page *page, int isolated)
 		struct lruvec *lruvec;
 
 		lruvec = mem_cgroup_page_lruvec(page, zone);
-		VM_BUG_ON_PAGE(PageLRU(page), page);
+		VM_BUG(PageLRU(page), "%pZp", page);
 		SetPageLRU(page);
 		add_page_to_lru_list(page, lruvec, page_lru(page));
 	}
@@ -2419,7 +2419,7 @@ static void commit_charge(struct page *page, struct mem_cgroup *memcg,
 {
 	int isolated;
 
-	VM_BUG_ON_PAGE(page->mem_cgroup, page);
+	VM_BUG(page->mem_cgroup, "%pZp", page);
 
 	/*
 	 * In some cases, SwapCache and FUSE(splice_buf->radixtree), the page
@@ -2726,7 +2726,7 @@ void __memcg_kmem_uncharge_pages(struct page *page, int order)
 	if (!memcg)
 		return;
 
-	VM_BUG_ON_PAGE(mem_cgroup_is_root(memcg), page);
+	VM_BUG(mem_cgroup_is_root(memcg), "%pZp", page);
 
 	memcg_uncharge_kmem(memcg, 1 << order);
 	page->mem_cgroup = NULL;
@@ -4748,7 +4748,7 @@ static int mem_cgroup_move_account(struct page *page,
 	int ret;
 
 	VM_BUG_ON(from == to);
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG(PageLRU(page), "%pZp", page);
 	/*
 	 * The page is isolated from LRU. So, collapse function
 	 * will not handle this page. But page splitting can happen.
@@ -4864,7 +4864,7 @@ static enum mc_target_type get_mctgt_type_thp(struct vm_area_struct *vma,
 	enum mc_target_type ret = MC_TARGET_NONE;
 
 	page = pmd_page(pmd);
-	VM_BUG_ON_PAGE(!page || !PageHead(page), page);
+	VM_BUG(!page || !PageHead(page), "%pZp", page);
 	if (!(mc.flags & MOVE_ANON))
 		return ret;
 	if (page->mem_cgroup == mc.from) {
@@ -5479,7 +5479,7 @@ int mem_cgroup_try_charge(struct page *page, struct mm_struct *mm,
 
 	if (PageTransHuge(page)) {
 		nr_pages <<= compound_order(page);
-		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
+		VM_BUG(!PageTransHuge(page), "%pZp", page);
 	}
 
 	if (do_swap_account && PageSwapCache(page))
@@ -5521,8 +5521,8 @@ void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
 {
 	unsigned int nr_pages = 1;
 
-	VM_BUG_ON_PAGE(!page->mapping, page);
-	VM_BUG_ON_PAGE(PageLRU(page) && !lrucare, page);
+	VM_BUG(!page->mapping, "%pZp", page);
+	VM_BUG(PageLRU(page) && !lrucare, "%pZp", page);
 
 	if (mem_cgroup_disabled())
 		return;
@@ -5538,7 +5538,7 @@ void mem_cgroup_commit_charge(struct page *page, struct mem_cgroup *memcg,
 
 	if (PageTransHuge(page)) {
 		nr_pages <<= compound_order(page);
-		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
+		VM_BUG(!PageTransHuge(page), "%pZp", page);
 	}
 
 	local_irq_disable();
@@ -5580,7 +5580,7 @@ void mem_cgroup_cancel_charge(struct page *page, struct mem_cgroup *memcg)
 
 	if (PageTransHuge(page)) {
 		nr_pages <<= compound_order(page);
-		VM_BUG_ON_PAGE(!PageTransHuge(page), page);
+		VM_BUG(!PageTransHuge(page), "%pZp", page);
 	}
 
 	cancel_charge(memcg, nr_pages);
@@ -5630,8 +5630,8 @@ static void uncharge_list(struct list_head *page_list)
 		page = list_entry(next, struct page, lru);
 		next = page->lru.next;
 
-		VM_BUG_ON_PAGE(PageLRU(page), page);
-		VM_BUG_ON_PAGE(page_count(page), page);
+		VM_BUG(PageLRU(page), "%pZp", page);
+		VM_BUG(page_count(page), "%pZp", page);
 
 		if (!page->mem_cgroup)
 			continue;
@@ -5653,7 +5653,7 @@ static void uncharge_list(struct list_head *page_list)
 
 		if (PageTransHuge(page)) {
 			nr_pages <<= compound_order(page);
-			VM_BUG_ON_PAGE(!PageTransHuge(page), page);
+			VM_BUG(!PageTransHuge(page), "%pZp", page);
 			nr_huge += nr_pages;
 		}
 
@@ -5724,13 +5724,13 @@ void mem_cgroup_migrate(struct page *oldpage, struct page *newpage,
 	struct mem_cgroup *memcg;
 	int isolated;
 
-	VM_BUG_ON_PAGE(!PageLocked(oldpage), oldpage);
-	VM_BUG_ON_PAGE(!PageLocked(newpage), newpage);
-	VM_BUG_ON_PAGE(!lrucare && PageLRU(oldpage), oldpage);
-	VM_BUG_ON_PAGE(!lrucare && PageLRU(newpage), newpage);
-	VM_BUG_ON_PAGE(PageAnon(oldpage) != PageAnon(newpage), newpage);
-	VM_BUG_ON_PAGE(PageTransHuge(oldpage) != PageTransHuge(newpage),
-		       newpage);
+	VM_BUG(!PageLocked(oldpage), "%pZp", oldpage);
+	VM_BUG(!PageLocked(newpage), "%pZp", newpage);
+	VM_BUG(!lrucare && PageLRU(oldpage), "%pZp", oldpage);
+	VM_BUG(!lrucare && PageLRU(newpage), "%pZp", newpage);
+	VM_BUG(PageAnon(oldpage) != PageAnon(newpage), "%pZp", newpage);
+	VM_BUG(PageTransHuge(oldpage) != PageTransHuge(newpage), "%pZp",
+	       newpage);
 
 	if (mem_cgroup_disabled())
 		return;
@@ -5812,8 +5812,8 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 	struct mem_cgroup *memcg;
 	unsigned short oldid;
 
-	VM_BUG_ON_PAGE(PageLRU(page), page);
-	VM_BUG_ON_PAGE(page_count(page), page);
+	VM_BUG(PageLRU(page), "%pZp", page);
+	VM_BUG(page_count(page), "%pZp", page);
 
 	if (!do_swap_account)
 		return;
@@ -5825,7 +5825,7 @@ void mem_cgroup_swapout(struct page *page, swp_entry_t entry)
 		return;
 
 	oldid = swap_cgroup_record(entry, mem_cgroup_id(memcg));
-	VM_BUG_ON_PAGE(oldid, page);
+	VM_BUG(oldid, "%pZp", page);
 	mem_cgroup_swap_statistics(memcg, true);
 
 	page->mem_cgroup = NULL;
diff --git a/mm/memory.c b/mm/memory.c
index 6e5d4bd..dd509d9 100644
--- a/mm/memory.c
+++ b/mm/memory.c
@@ -302,7 +302,7 @@ int __tlb_remove_page(struct mmu_gather *tlb, struct page *page)
 			return 0;
 		batch = tlb->active;
 	}
-	VM_BUG_ON_PAGE(batch->nr > batch->max, page);
+	VM_BUG(batch->nr > batch->max, "%pZp", page);
 
 	return batch->max - batch->nr;
 }
@@ -1977,7 +1977,7 @@ static int do_page_mkwrite(struct vm_area_struct *vma, struct page *page,
 		}
 		ret |= VM_FAULT_LOCKED;
 	} else
-		VM_BUG_ON_PAGE(!PageLocked(page), page);
+		VM_BUG(!PageLocked(page), "%pZp", page);
 	return ret;
 }
 
@@ -2020,7 +2020,7 @@ static inline int wp_page_reuse(struct mm_struct *mm,
 			lock_page(page);
 
 		dirtied = set_page_dirty(page);
-		VM_BUG_ON_PAGE(PageAnon(page), page);
+		VM_BUG(PageAnon(page), "%pZp", page);
 		mapping = page->mapping;
 		unlock_page(page);
 		page_cache_release(page);
@@ -2763,7 +2763,7 @@ static int __do_fault(struct vm_area_struct *vma, unsigned long address,
 	if (unlikely(!(ret & VM_FAULT_LOCKED)))
 		lock_page(vmf.page);
 	else
-		VM_BUG_ON_PAGE(!PageLocked(vmf.page), vmf.page);
+		VM_BUG(!PageLocked(vmf.page), "%pZp", vmf.page);
 
  out:
 	*page = vmf.page;
diff --git a/mm/migrate.c b/mm/migrate.c
index 022adc2..2693888 100644
--- a/mm/migrate.c
+++ b/mm/migrate.c
@@ -500,7 +500,7 @@ void migrate_page_copy(struct page *newpage, struct page *page)
 	if (PageUptodate(page))
 		SetPageUptodate(newpage);
 	if (TestClearPageActive(page)) {
-		VM_BUG_ON_PAGE(PageUnevictable(page), page);
+		VM_BUG(PageUnevictable(page), "%pZp", page);
 		SetPageActive(newpage);
 	} else if (TestClearPageUnevictable(page))
 		SetPageUnevictable(newpage);
@@ -869,7 +869,7 @@ static int __unmap_and_move(struct page *page, struct page *newpage,
 	 * free the metadata, so the page can be freed.
 	 */
 	if (!page->mapping) {
-		VM_BUG_ON_PAGE(PageAnon(page), page);
+		VM_BUG(PageAnon(page), "%pZp", page);
 		if (page_has_private(page)) {
 			try_to_free_buffers(page);
 			goto out_unlock;
@@ -1606,7 +1606,7 @@ static int numamigrate_isolate_page(pg_data_t *pgdat, struct page *page)
 {
 	int page_lru;
 
-	VM_BUG_ON_PAGE(compound_order(page) && !PageTransHuge(page), page);
+	VM_BUG(compound_order(page) && !PageTransHuge(page), "%pZp", page);
 
 	/* Avoid migrating to a node that is nearly full */
 	if (!migrate_balanced_pgdat(pgdat, 1UL << compound_order(page)))
diff --git a/mm/mlock.c b/mm/mlock.c
index 6fd2cf1..54269cd 100644
--- a/mm/mlock.c
+++ b/mm/mlock.c
@@ -232,8 +232,8 @@ static int __mlock_posix_error_return(long retval)
 static bool __putback_lru_fast_prepare(struct page *page, struct pagevec *pvec,
 		int *pgrescued)
 {
-	VM_BUG_ON_PAGE(PageLRU(page), page);
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(PageLRU(page), "%pZp", page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 
 	if (page_mapcount(page) <= 1 && page_evictable(page)) {
 		pagevec_add(pvec, page);
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 4887731..2cabcaa 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -525,7 +525,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
 		if (page_zone_id(page) != page_zone_id(buddy))
 			return 0;
 
-		VM_BUG_ON_PAGE(page_count(buddy) != 0, buddy);
+		VM_BUG(page_count(buddy) != 0, "%pZp", buddy);
 
 		return 1;
 	}
@@ -539,7 +539,7 @@ static inline int page_is_buddy(struct page *page, struct page *buddy,
 		if (page_zone_id(page) != page_zone_id(buddy))
 			return 0;
 
-		VM_BUG_ON_PAGE(page_count(buddy) != 0, buddy);
+		VM_BUG(page_count(buddy) != 0, "%pZp", buddy);
 
 		return 1;
 	}
@@ -583,7 +583,7 @@ static inline void __free_one_page(struct page *page,
 	int max_order = MAX_ORDER;
 
 	VM_BUG_ON(!zone_is_initialized(zone));
-	VM_BUG_ON_PAGE(page->flags & PAGE_FLAGS_CHECK_AT_PREP, page);
+	VM_BUG(page->flags & PAGE_FLAGS_CHECK_AT_PREP, "%pZp", page);
 
 	VM_BUG_ON(migratetype == -1);
 	if (is_migrate_isolate(migratetype)) {
@@ -600,8 +600,8 @@ static inline void __free_one_page(struct page *page,
 
 	page_idx = pfn & ((1 << max_order) - 1);
 
-	VM_BUG_ON_PAGE(page_idx & ((1 << order) - 1), page);
-	VM_BUG_ON_PAGE(bad_range(zone, page), page);
+	VM_BUG(page_idx & ((1 << order) - 1), "%pZp", page);
+	VM_BUG(bad_range(zone, page), "%pZp", page);
 
 	while (order < max_order - 1) {
 		buddy_idx = __find_buddy_index(page_idx, order);
@@ -790,8 +790,8 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 	bool compound = PageCompound(page);
 	int i, bad = 0;
 
-	VM_BUG_ON_PAGE(PageTail(page), page);
-	VM_BUG_ON_PAGE(compound && compound_order(page) != order, page);
+	VM_BUG(PageTail(page), "%pZp", page);
+	VM_BUG(compound && compound_order(page) != order, "%pZp", page);
 
 	trace_mm_page_free(page, order);
 	kmemcheck_free_shadow(page, order);
@@ -914,7 +914,7 @@ static inline void expand(struct zone *zone, struct page *page,
 		area--;
 		high--;
 		size >>= 1;
-		VM_BUG_ON_PAGE(bad_range(zone, &page[size]), &page[size]);
+		VM_BUG(bad_range(zone, &page[size]), "%pZp", &page[size]);
 
 		if (IS_ENABLED(CONFIG_DEBUG_PAGEALLOC) &&
 			debug_guardpage_enabled() &&
@@ -1086,7 +1086,7 @@ int move_freepages(struct zone *zone,
 
 	for (page = start_page; page <= end_page;) {
 		/* Make sure we are not inadvertently changing nodes */
-		VM_BUG_ON_PAGE(page_to_nid(page) != zone_to_nid(zone), page);
+		VM_BUG(page_to_nid(page) != zone_to_nid(zone), "%pZp", page);
 
 		if (!pfn_valid_within(page_to_pfn(page))) {
 			page++;
@@ -1611,8 +1611,8 @@ void split_page(struct page *page, unsigned int order)
 {
 	int i;
 
-	VM_BUG_ON_PAGE(PageCompound(page), page);
-	VM_BUG_ON_PAGE(!page_count(page), page);
+	VM_BUG(PageCompound(page), "%pZp", page);
+	VM_BUG(!page_count(page), "%pZp", page);
 
 #ifdef CONFIG_KMEMCHECK
 	/*
@@ -1764,7 +1764,7 @@ struct page *buffered_rmqueue(struct zone *preferred_zone,
 	zone_statistics(preferred_zone, zone, gfp_flags);
 	local_irq_restore(flags);
 
-	VM_BUG_ON_PAGE(bad_range(zone, page), page);
+	VM_BUG(bad_range(zone, page), "%pZp", page);
 	return page;
 
 failed:
@@ -6210,7 +6210,7 @@ void set_pfnblock_flags_mask(struct page *page, unsigned long flags,
 	word_bitidx = bitidx / BITS_PER_LONG;
 	bitidx &= (BITS_PER_LONG-1);
 
-	VM_BUG_ON_PAGE(!zone_spans_pfn(zone, pfn), page);
+	VM_BUG(!zone_spans_pfn(zone, pfn), "%pZp", page);
 
 	bitidx += end_bitidx;
 	mask <<= (BITS_PER_LONG - bitidx - 1);
diff --git a/mm/page_io.c b/mm/page_io.c
index 6424869..deea5be 100644
--- a/mm/page_io.c
+++ b/mm/page_io.c
@@ -331,8 +331,8 @@ int swap_readpage(struct page *page)
 	int ret = 0;
 	struct swap_info_struct *sis = page_swap_info(page);
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(PageUptodate(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
+	VM_BUG(PageUptodate(page), "%pZp", page);
 	if (frontswap_load(page) == 0) {
 		SetPageUptodate(page);
 		unlock_page(page);
diff --git a/mm/rmap.c b/mm/rmap.c
index dad23a4..f8a6bca 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -971,9 +971,9 @@ void page_move_anon_rmap(struct page *page,
 {
 	struct anon_vma *anon_vma = vma->anon_vma;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 	VM_BUG_ON_VMA(!anon_vma, vma);
-	VM_BUG_ON_PAGE(page->index != linear_page_index(vma, address), page);
+	VM_BUG(page->index != linear_page_index(vma, address), "%pZp", page);
 
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
 	page->mapping = (struct address_space *) anon_vma;
@@ -1078,7 +1078,7 @@ void do_page_add_anon_rmap(struct page *page,
 	if (unlikely(PageKsm(page)))
 		return;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 	/* address might be in next vma when migration races vma_adjust */
 	if (first)
 		__page_set_anon_rmap(page, vma, address, exclusive);
@@ -1274,7 +1274,7 @@ static int try_to_unmap_one(struct page *page, struct vm_area_struct *vma,
 		pte_t swp_pte;
 
 		if (flags & TTU_FREE) {
-			VM_BUG_ON_PAGE(PageSwapCache(page), page);
+			VM_BUG(PageSwapCache(page), "%pZp", page);
 			if (!dirty && !PageDirty(page)) {
 				/* It's a freeable page by MADV_FREE */
 				dec_mm_counter(mm, MM_ANONPAGES);
@@ -1407,7 +1407,7 @@ int try_to_unmap(struct page *page, enum ttu_flags flags)
 		.anon_lock = page_lock_anon_vma_read,
 	};
 
-	VM_BUG_ON_PAGE(!PageHuge(page) && PageTransHuge(page), page);
+	VM_BUG(!PageHuge(page) && PageTransHuge(page), "%pZp", page);
 
 	/*
 	 * During exec, a temporary VMA is setup and later moved.
@@ -1453,7 +1453,7 @@ int try_to_munlock(struct page *page)
 
 	};
 
-	VM_BUG_ON_PAGE(!PageLocked(page) || PageLRU(page), page);
+	VM_BUG(!PageLocked(page) || PageLRU(page), "%pZp", page);
 
 	ret = rmap_walk(page, &rwc);
 	return ret;
@@ -1559,7 +1559,7 @@ static int rmap_walk_file(struct page *page, struct rmap_walk_control *rwc)
 	 * structure at mapping cannot be freed and reused yet,
 	 * so we can safely take mapping->i_mmap_rwsem.
 	 */
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 
 	if (!mapping)
 		return ret;
diff --git a/mm/shmem.c b/mm/shmem.c
index 86b8929..bf494cf 100644
--- a/mm/shmem.c
+++ b/mm/shmem.c
@@ -295,8 +295,8 @@ static int shmem_add_to_page_cache(struct page *page,
 {
 	int error;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
+	VM_BUG(!PageSwapBacked(page), "%pZp", page);
 
 	page_cache_get(page);
 	page->mapping = mapping;
@@ -436,7 +436,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 				continue;
 			if (!unfalloc || !PageUptodate(page)) {
 				if (page->mapping == mapping) {
-					VM_BUG_ON_PAGE(PageWriteback(page), page);
+					VM_BUG(PageWriteback(page), "%pZp",
+					       page);
 					truncate_inode_page(mapping, page);
 				}
 			}
@@ -513,7 +514,8 @@ static void shmem_undo_range(struct inode *inode, loff_t lstart, loff_t lend,
 			lock_page(page);
 			if (!unfalloc || !PageUptodate(page)) {
 				if (page->mapping == mapping) {
-					VM_BUG_ON_PAGE(PageWriteback(page), page);
+					VM_BUG(PageWriteback(page), "%pZp",
+					       page);
 					truncate_inode_page(mapping, page);
 				} else {
 					/* Page was replaced by swap: retry */
diff --git a/mm/slub.c b/mm/slub.c
index a98b3d1..61c5f54 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -338,13 +338,13 @@ static inline int oo_objects(struct kmem_cache_order_objects x)
  */
 static __always_inline void slab_lock(struct page *page)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
+	VM_BUG(PageTail(page), "%pZp", page);
 	bit_spin_lock(PG_locked, &page->flags);
 }
 
 static __always_inline void slab_unlock(struct page *page)
 {
-	VM_BUG_ON_PAGE(PageTail(page), page);
+	VM_BUG(PageTail(page), "%pZp", page);
 	__bit_spin_unlock(PG_locked, &page->flags);
 }
 
diff --git a/mm/swap.c b/mm/swap.c
index 8773de0..47af078 100644
--- a/mm/swap.c
+++ b/mm/swap.c
@@ -59,7 +59,7 @@ static void __page_cache_release(struct page *page)
 
 		spin_lock_irqsave(&zone->lru_lock, flags);
 		lruvec = mem_cgroup_page_lruvec(page, zone);
-		VM_BUG_ON_PAGE(!PageLRU(page), page);
+		VM_BUG(!PageLRU(page), "%pZp", page);
 		__ClearPageLRU(page);
 		del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		spin_unlock_irqrestore(&zone->lru_lock, flags);
@@ -131,8 +131,8 @@ void put_unrefcounted_compound_page(struct page *page_head, struct page *page)
 		 * __split_huge_page_refcount cannot race
 		 * here, see the comment above this function.
 		 */
-		VM_BUG_ON_PAGE(!PageHead(page_head), page_head);
-		VM_BUG_ON_PAGE(page_mapcount(page) != 0, page);
+		VM_BUG(!PageHead(page_head), "%pZp", page_head);
+		VM_BUG(page_mapcount(page) != 0, "%pZp", page);
 		if (put_page_testzero(page_head)) {
 			/*
 			 * If this is the tail of a slab THP page,
@@ -148,7 +148,7 @@ void put_unrefcounted_compound_page(struct page *page_head, struct page *page)
 			 * not go away until the compound page enters
 			 * the buddy allocator.
 			 */
-			VM_BUG_ON_PAGE(PageSlab(page_head), page_head);
+			VM_BUG(PageSlab(page_head), "%pZp", page_head);
 			__put_compound_page(page_head);
 		}
 	} else
@@ -202,7 +202,7 @@ out_put_single:
 				__put_single_page(page);
 			return;
 		}
-		VM_BUG_ON_PAGE(page_head != page->first_page, page);
+		VM_BUG(page_head != page->first_page, "%pZp", page);
 		/*
 		 * We can release the refcount taken by
 		 * get_page_unless_zero() now that
@@ -210,12 +210,13 @@ out_put_single:
 		 * compound_lock.
 		 */
 		if (put_page_testzero(page_head))
-			VM_BUG_ON_PAGE(1, page_head);
+			VM_BUG(1, "%pZp", page_head);
 		/* __split_huge_page_refcount will wait now */
-		VM_BUG_ON_PAGE(page_mapcount(page) <= 0, page);
+		VM_BUG(page_mapcount(page) <= 0, "%pZp", page);
 		atomic_dec(&page->_mapcount);
-		VM_BUG_ON_PAGE(atomic_read(&page_head->_count) <= 0, page_head);
-		VM_BUG_ON_PAGE(atomic_read(&page->_count) != 0, page);
+		VM_BUG(atomic_read(&page_head->_count) <= 0, "%pZp",
+		       page_head);
+		VM_BUG(atomic_read(&page->_count) != 0, "%pZp", page);
 		compound_unlock_irqrestore(page_head, flags);
 
 		if (put_page_testzero(page_head)) {
@@ -226,7 +227,7 @@ out_put_single:
 		}
 	} else {
 		/* @page_head is a dangling pointer */
-		VM_BUG_ON_PAGE(PageTail(page), page);
+		VM_BUG(PageTail(page), "%pZp", page);
 		goto out_put_single;
 	}
 }
@@ -306,7 +307,7 @@ bool __get_page_tail(struct page *page)
 			 * page. __split_huge_page_refcount
 			 * cannot race here.
 			 */
-			VM_BUG_ON_PAGE(!PageHead(page_head), page_head);
+			VM_BUG(!PageHead(page_head), "%pZp", page_head);
 			__get_page_tail_foll(page, true);
 			return true;
 		} else {
@@ -668,8 +669,8 @@ EXPORT_SYMBOL(lru_cache_add_file);
  */
 void lru_cache_add(struct page *page)
 {
-	VM_BUG_ON_PAGE(PageActive(page) && PageUnevictable(page), page);
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG(PageActive(page) && PageUnevictable(page), "%pZp", page);
+	VM_BUG(PageLRU(page), "%pZp", page);
 	__lru_cache_add(page);
 }
 
@@ -710,7 +711,7 @@ void add_page_to_unevictable_list(struct page *page)
 void lru_cache_add_active_or_unevictable(struct page *page,
 					 struct vm_area_struct *vma)
 {
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG(PageLRU(page), "%pZp", page);
 
 	if (likely((vma->vm_flags & (VM_LOCKED | VM_SPECIAL)) != VM_LOCKED)) {
 		SetPageActive(page);
@@ -995,7 +996,7 @@ void release_pages(struct page **pages, int nr, bool cold)
 			}
 
 			lruvec = mem_cgroup_page_lruvec(page, zone);
-			VM_BUG_ON_PAGE(!PageLRU(page), page);
+			VM_BUG(!PageLRU(page), "%pZp", page);
 			__ClearPageLRU(page);
 			del_page_from_lru_list(page, lruvec, page_off_lru(page));
 		}
@@ -1038,9 +1039,9 @@ void lru_add_page_tail(struct page *page, struct page *page_tail,
 {
 	const int file = 0;
 
-	VM_BUG_ON_PAGE(!PageHead(page), page);
-	VM_BUG_ON_PAGE(PageCompound(page_tail), page);
-	VM_BUG_ON_PAGE(PageLRU(page_tail), page);
+	VM_BUG(!PageHead(page), "%pZp", page);
+	VM_BUG(PageCompound(page_tail), "%pZp", page);
+	VM_BUG(PageLRU(page_tail), "%pZp", page);
 	VM_BUG_ON(NR_CPUS != 1 &&
 		  !spin_is_locked(&lruvec_zone(lruvec)->lru_lock));
 
@@ -1079,7 +1080,7 @@ static void __pagevec_lru_add_fn(struct page *page, struct lruvec *lruvec,
 	int active = PageActive(page);
 	enum lru_list lru = page_lru(page);
 
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG(PageLRU(page), "%pZp", page);
 
 	SetPageLRU(page);
 	add_page_to_lru_list(page, lruvec, lru);
diff --git a/mm/swap_state.c b/mm/swap_state.c
index a2611ce..0609662 100644
--- a/mm/swap_state.c
+++ b/mm/swap_state.c
@@ -81,9 +81,9 @@ int __add_to_swap_cache(struct page *page, swp_entry_t entry)
 	int error;
 	struct address_space *address_space;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(PageSwapCache(page), page);
-	VM_BUG_ON_PAGE(!PageSwapBacked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
+	VM_BUG(PageSwapCache(page), "%pZp", page);
+	VM_BUG(!PageSwapBacked(page), "%pZp", page);
 
 	page_cache_get(page);
 	SetPageSwapCache(page);
@@ -137,9 +137,9 @@ void __delete_from_swap_cache(struct page *page)
 	swp_entry_t entry;
 	struct address_space *address_space;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(!PageSwapCache(page), page);
-	VM_BUG_ON_PAGE(PageWriteback(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
+	VM_BUG(!PageSwapCache(page), "%pZp", page);
+	VM_BUG(PageWriteback(page), "%pZp", page);
 
 	entry.val = page_private(page);
 	address_space = swap_address_space(entry);
@@ -163,8 +163,8 @@ int add_to_swap(struct page *page, struct list_head *list)
 	swp_entry_t entry;
 	int err;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
-	VM_BUG_ON_PAGE(!PageUptodate(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
+	VM_BUG(!PageUptodate(page), "%pZp", page);
 
 	entry = get_swap_page();
 	if (!entry.val)
diff --git a/mm/swapfile.c b/mm/swapfile.c
index a7e7210..d71dcd6 100644
--- a/mm/swapfile.c
+++ b/mm/swapfile.c
@@ -884,7 +884,7 @@ int reuse_swap_page(struct page *page)
 {
 	int count;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 	if (unlikely(PageKsm(page)))
 		return 0;
 	count = page_mapcount(page);
@@ -904,7 +904,7 @@ int reuse_swap_page(struct page *page)
  */
 int try_to_free_swap(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 
 	if (!PageSwapCache(page))
 		return 0;
@@ -2710,7 +2710,7 @@ struct swap_info_struct *page_swap_info(struct page *page)
  */
 struct address_space *__page_file_mapping(struct page *page)
 {
-	VM_BUG_ON_PAGE(!PageSwapCache(page), page);
+	VM_BUG(!PageSwapCache(page), "%pZp", page);
 	return page_swap_info(page)->swap_file->f_mapping;
 }
 EXPORT_SYMBOL_GPL(__page_file_mapping);
@@ -2718,7 +2718,7 @@ EXPORT_SYMBOL_GPL(__page_file_mapping);
 pgoff_t __page_file_index(struct page *page)
 {
 	swp_entry_t swap = { .val = page_private(page) };
-	VM_BUG_ON_PAGE(!PageSwapCache(page), page);
+	VM_BUG(!PageSwapCache(page), "%pZp", page);
 	return swp_offset(swap);
 }
 EXPORT_SYMBOL_GPL(__page_file_index);
diff --git a/mm/vmscan.c b/mm/vmscan.c
index 9cc982f..6a1a329 100644
--- a/mm/vmscan.c
+++ b/mm/vmscan.c
@@ -688,7 +688,7 @@ void putback_lru_page(struct page *page)
 	bool is_unevictable;
 	int was_unevictable = PageUnevictable(page);
 
-	VM_BUG_ON_PAGE(PageLRU(page), page);
+	VM_BUG(PageLRU(page), "%pZp", page);
 
 redo:
 	ClearPageUnevictable(page);
@@ -761,7 +761,7 @@ static enum page_references page_check_references(struct page *page,
 	unsigned long vm_flags;
 	int pte_dirty;
 
-	VM_BUG_ON_PAGE(!PageLocked(page), page);
+	VM_BUG(!PageLocked(page), "%pZp", page);
 
 	referenced_ptes = page_referenced(page, 1, sc->target_mem_cgroup,
 					  &vm_flags, &pte_dirty);
@@ -887,8 +887,8 @@ static unsigned long shrink_page_list(struct list_head *page_list,
 		if (!trylock_page(page))
 			goto keep;
 
-		VM_BUG_ON_PAGE(PageActive(page), page);
-		VM_BUG_ON_PAGE(page_zone(page) != zone, page);
+		VM_BUG(PageActive(page), "%pZp", page);
+		VM_BUG(page_zone(page) != zone, "%pZp", page);
 
 		sc->nr_scanned++;
 
@@ -1059,7 +1059,7 @@ unmap:
 				 * due to skipping of swapcache so we free
 				 * page in here rather than __remove_mapping.
 				 */
-				VM_BUG_ON_PAGE(PageSwapCache(page), page);
+				VM_BUG(PageSwapCache(page), "%pZp", page);
 				if (!page_freeze_refs(page, 1))
 					goto keep_locked;
 				__ClearPageLocked(page);
@@ -1196,14 +1196,14 @@ activate_locked:
 		/* Not a candidate for swapping, so reclaim swap space. */
 		if (PageSwapCache(page) && vm_swap_full())
 			try_to_free_swap(page);
-		VM_BUG_ON_PAGE(PageActive(page), page);
+		VM_BUG(PageActive(page), "%pZp", page);
 		SetPageActive(page);
 		pgactivate++;
 keep_locked:
 		unlock_page(page);
 keep:
 		list_add(&page->lru, &ret_pages);
-		VM_BUG_ON_PAGE(PageLRU(page) || PageUnevictable(page), page);
+		VM_BUG(PageLRU(page) || PageUnevictable(page), "%pZp", page);
 	}
 
 	mem_cgroup_uncharge_list(&free_pages);
@@ -1358,7 +1358,7 @@ static unsigned long isolate_lru_pages(unsigned long nr_to_scan,
 		page = lru_to_page(src);
 		prefetchw_prev_lru_page(page, src, flags);
 
-		VM_BUG_ON_PAGE(!PageLRU(page), page);
+		VM_BUG(!PageLRU(page), "%pZp", page);
 
 		switch (__isolate_lru_page(page, mode)) {
 		case 0:
@@ -1413,7 +1413,7 @@ int isolate_lru_page(struct page *page)
 {
 	int ret = -EBUSY;
 
-	VM_BUG_ON_PAGE(!page_count(page), page);
+	VM_BUG(!page_count(page), "%pZp", page);
 
 	if (PageLRU(page)) {
 		struct zone *zone = page_zone(page);
@@ -1501,7 +1501,7 @@ putback_inactive_pages(struct lruvec *lruvec, struct list_head *page_list)
 		struct page *page = lru_to_page(page_list);
 		int lru;
 
-		VM_BUG_ON_PAGE(PageLRU(page), page);
+		VM_BUG(PageLRU(page), "%pZp", page);
 		list_del(&page->lru);
 		if (unlikely(!page_evictable(page))) {
 			spin_unlock_irq(&zone->lru_lock);
@@ -1736,7 +1736,7 @@ static void move_active_pages_to_lru(struct lruvec *lruvec,
 		page = lru_to_page(list);
 		lruvec = mem_cgroup_page_lruvec(page, zone);
 
-		VM_BUG_ON_PAGE(PageLRU(page), page);
+		VM_BUG(PageLRU(page), "%pZp", page);
 		SetPageLRU(page);
 
 		nr_pages = hpage_nr_pages(page);
@@ -3861,7 +3861,7 @@ void check_move_unevictable_pages(struct page **pages, int nr_pages)
 		if (page_evictable(page)) {
 			enum lru_list lru = page_lru_base_type(page);
 
-			VM_BUG_ON_PAGE(PageActive(page), page);
+			VM_BUG(PageActive(page), "%pZp", page);
 			ClearPageUnevictable(page);
 			del_page_from_lru_list(page, lruvec, LRU_UNEVICTABLE);
 			add_page_to_lru_list(page, lruvec, lru);
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 09/11] mm: debug: kill VM_BUG_ON_VMA
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
                   ` (7 preceding siblings ...)
  2015-04-14 20:56 ` [RFC 08/11] mm: debug: kill VM_BUG_ON_PAGE Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-14 20:56 ` [RFC 10/11] mm: debug: kill VM_BUG_ON_MM Sasha Levin
                   ` (2 subsequent siblings)
  11 siblings, 0 replies; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

Now that VM_BUG() takes an arbitrary format string, VM_BUG_ON_VMA(cond, vma)
is equivalent to VM_BUG(cond, "%pZv", vma). Convert all callers and remove
the macro.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 include/linux/huge_mm.h |    2 +-
 include/linux/mmdebug.h |    8 --------
 include/linux/rmap.h    |    2 +-
 mm/gup.c                |    4 ++--
 mm/huge_memory.c        |    6 +++---
 mm/hugetlb.c            |   14 +++++++-------
 mm/interval_tree.c      |    2 +-
 mm/mmap.c               |   11 +++++------
 mm/mremap.c             |    4 ++--
 mm/rmap.c               |    6 +++---
 10 files changed, 25 insertions(+), 34 deletions(-)

diff --git a/include/linux/huge_mm.h b/include/linux/huge_mm.h
index 44a840a..cfd745b 100644
--- a/include/linux/huge_mm.h
+++ b/include/linux/huge_mm.h
@@ -136,7 +136,7 @@ extern int __pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma,
 static inline int pmd_trans_huge_lock(pmd_t *pmd, struct vm_area_struct *vma,
 		spinlock_t **ptl)
 {
-	VM_BUG_ON_VMA(!rwsem_is_locked(&vma->vm_mm->mmap_sem), vma);
+	VM_BUG(!rwsem_is_locked(&vma->vm_mm->mmap_sem), "%pZv", vma);
 	if (pmd_trans_huge(*pmd))
 		return __pmd_trans_huge_lock(pmd, vma, ptl);
 	else
diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index f43f868..5106ab5 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -20,13 +20,6 @@ char *format_mm(const struct mm_struct *mm, char *buf, char *end);
 		}							\
 	} while (0)
 #define VM_BUG_ON(cond) VM_BUG(cond, "%s\n", __stringify(cond))
-#define VM_BUG_ON_VMA(cond, vma)					\
-	do {								\
-		if (unlikely(cond)) {					\
-			pr_emerg("%pZv", vma);				\
-			BUG();						\
-		}							\
-	} while (0)
 #define VM_BUG_ON_MM(cond, mm)						\
 	do {								\
 		if (unlikely(cond)) {					\
@@ -48,7 +41,6 @@ static char *format_mm(const struct mm_struct *mm, char *buf, char *end)
 }
 #define VM_BUG(cond, fmt...) BUILD_BUG_ON_INVALID(cond)
 #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
-#define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
 #define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
diff --git a/include/linux/rmap.h b/include/linux/rmap.h
index bf36b6e..54beb2f 100644
--- a/include/linux/rmap.h
+++ b/include/linux/rmap.h
@@ -153,7 +153,7 @@ int anon_vma_fork(struct vm_area_struct *, struct vm_area_struct *);
 static inline void anon_vma_merge(struct vm_area_struct *vma,
 				  struct vm_area_struct *next)
 {
-	VM_BUG_ON_VMA(vma->anon_vma != next->anon_vma, vma);
+	VM_BUG(vma->anon_vma != next->anon_vma, "%pZv", vma);
 	unlink_anon_vmas(next);
 }
 
diff --git a/mm/gup.c b/mm/gup.c
index 743648e..0b851ac 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -846,8 +846,8 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 
 	VM_BUG_ON(start & ~PAGE_MASK);
 	VM_BUG_ON(end   & ~PAGE_MASK);
-	VM_BUG_ON_VMA(start < vma->vm_start, vma);
-	VM_BUG_ON_VMA(end   > vma->vm_end, vma);
+	VM_BUG(start < vma->vm_start, "%pZv", vma);
+	VM_BUG(end > vma->vm_end, "%pZv", vma);
 	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_sem), mm);
 
 	gup_flags = FOLL_TOUCH | FOLL_POPULATE;
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index 7ba3947..d4b20cd 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -1093,7 +1093,7 @@ int do_huge_pmd_wp_page(struct mm_struct *mm, struct vm_area_struct *vma,
 	gfp_t huge_gfp;			/* for allocation and charge */
 
 	ptl = pmd_lockptr(mm, pmd);
-	VM_BUG_ON_VMA(!vma->anon_vma, vma);
+	VM_BUG(!vma->anon_vma, "%pZv", vma);
 	haddr = address & HPAGE_PMD_MASK;
 	if (is_huge_zero_pmd(orig_pmd))
 		goto alloc;
@@ -2108,7 +2108,7 @@ int khugepaged_enter_vma_merge(struct vm_area_struct *vma,
 	if (vma->vm_ops)
 		/* khugepaged not yet working on file or special mappings */
 		return 0;
-	VM_BUG_ON_VMA(vm_flags & VM_NO_THP, vma);
+	VM_BUG(vm_flags & VM_NO_THP, "%pZv", vma);
 	hstart = (vma->vm_start + ~HPAGE_PMD_MASK) & HPAGE_PMD_MASK;
 	hend = vma->vm_end & HPAGE_PMD_MASK;
 	if (hstart < hend)
@@ -2466,7 +2466,7 @@ static bool hugepage_vma_check(struct vm_area_struct *vma)
 		return false;
 	if (is_vma_temporary_stack(vma))
 		return false;
-	VM_BUG_ON_VMA(vma->vm_flags & VM_NO_THP, vma);
+	VM_BUG(vma->vm_flags & VM_NO_THP, "%pZv", vma);
 	return true;
 }
 
diff --git a/mm/hugetlb.c b/mm/hugetlb.c
index 584b516..3c6767b 100644
--- a/mm/hugetlb.c
+++ b/mm/hugetlb.c
@@ -503,7 +503,7 @@ static inline struct resv_map *inode_resv_map(struct inode *inode)
 
 static struct resv_map *vma_resv_map(struct vm_area_struct *vma)
 {
-	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
+	VM_BUG(!is_vm_hugetlb_page(vma), "%pZv", vma);
 	if (vma->vm_flags & VM_MAYSHARE) {
 		struct address_space *mapping = vma->vm_file->f_mapping;
 		struct inode *inode = mapping->host;
@@ -518,8 +518,8 @@ static struct resv_map *vma_resv_map(struct vm_area_struct *vma)
 
 static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map)
 {
-	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
-	VM_BUG_ON_VMA(vma->vm_flags & VM_MAYSHARE, vma);
+	VM_BUG(!is_vm_hugetlb_page(vma), "%pZv", vma);
+	VM_BUG(vma->vm_flags & VM_MAYSHARE, "%pZv", vma);
 
 	set_vma_private_data(vma, (get_vma_private_data(vma) &
 				HPAGE_RESV_MASK) | (unsigned long)map);
@@ -527,15 +527,15 @@ static void set_vma_resv_map(struct vm_area_struct *vma, struct resv_map *map)
 
 static void set_vma_resv_flags(struct vm_area_struct *vma, unsigned long flags)
 {
-	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
-	VM_BUG_ON_VMA(vma->vm_flags & VM_MAYSHARE, vma);
+	VM_BUG(!is_vm_hugetlb_page(vma), "%pZv", vma);
+	VM_BUG(vma->vm_flags & VM_MAYSHARE, "%pZv", vma);
 
 	set_vma_private_data(vma, get_vma_private_data(vma) | flags);
 }
 
 static int is_vma_resv_set(struct vm_area_struct *vma, unsigned long flag)
 {
-	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
+	VM_BUG(!is_vm_hugetlb_page(vma), "%pZv", vma);
 
 	return (get_vma_private_data(vma) & flag) != 0;
 }
@@ -543,7 +543,7 @@ static int is_vma_resv_set(struct vm_area_struct *vma, unsigned long flag)
 /* Reset counters to 0 and clear all HPAGE_RESV_* flags */
 void reset_vma_resv_huge_pages(struct vm_area_struct *vma)
 {
-	VM_BUG_ON_VMA(!is_vm_hugetlb_page(vma), vma);
+	VM_BUG(!is_vm_hugetlb_page(vma), "%pZv", vma);
 	if (!(vma->vm_flags & VM_MAYSHARE))
 		vma->vm_private_data = (void *)0;
 }
diff --git a/mm/interval_tree.c b/mm/interval_tree.c
index f2c2492..49d4f53 100644
--- a/mm/interval_tree.c
+++ b/mm/interval_tree.c
@@ -34,7 +34,7 @@ void vma_interval_tree_insert_after(struct vm_area_struct *node,
 	struct vm_area_struct *parent;
 	unsigned long last = vma_last_pgoff(node);
 
-	VM_BUG_ON_VMA(vma_start_pgoff(node) != vma_start_pgoff(prev), node);
+	VM_BUG(vma_start_pgoff(node) != vma_start_pgoff(prev), "%pZv", node);
 
 	if (!prev->shared.rb.rb_right) {
 		parent = prev;
diff --git a/mm/mmap.c b/mm/mmap.c
index bb50cac..f2db320 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -426,9 +426,8 @@ static void validate_mm_rb(struct rb_root *root, struct vm_area_struct *ignore)
 	for (nd = rb_first(root); nd; nd = rb_next(nd)) {
 		struct vm_area_struct *vma;
 		vma = rb_entry(nd, struct vm_area_struct, vm_rb);
-		VM_BUG_ON_VMA(vma != ignore &&
-			vma->rb_subtree_gap != vma_compute_subtree_gap(vma),
-			vma);
+		VM_BUG(vma != ignore && vma->rb_subtree_gap != vma_compute_subtree_gap(vma),
+		       "%pZv", vma);
 	}
 }
 
@@ -805,8 +804,8 @@ again:			remove_next = 1 + (end > next->vm_end);
 	if (!anon_vma && adjust_next)
 		anon_vma = next->anon_vma;
 	if (anon_vma) {
-		VM_BUG_ON_VMA(adjust_next && next->anon_vma &&
-			  anon_vma != next->anon_vma, next);
+		VM_BUG(adjust_next && next->anon_vma && anon_vma != next->anon_vma,
+		       "%pZv", next);
 		anon_vma_lock_write(anon_vma);
 		anon_vma_interval_tree_pre_update_vma(vma);
 		if (adjust_next)
@@ -2932,7 +2931,7 @@ struct vm_area_struct *copy_vma(struct vm_area_struct **vmap,
 			 * safe. It is only safe to keep the vm_pgoff
 			 * linear if there are no pages mapped yet.
 			 */
-			VM_BUG_ON_VMA(faulted_in_anon_vma, new_vma);
+			VM_BUG(faulted_in_anon_vma, "%pZv", new_vma);
 			*vmap = vma = new_vma;
 		}
 		*need_rmap_locks = (new_vma->vm_pgoff <= vma->vm_pgoff);
diff --git a/mm/mremap.c b/mm/mremap.c
index afa3ab7..47f208e 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -193,8 +193,8 @@ unsigned long move_page_tables(struct vm_area_struct *vma,
 		if (pmd_trans_huge(*old_pmd)) {
 			int err = 0;
 			if (extent == HPAGE_PMD_SIZE) {
-				VM_BUG_ON_VMA(vma->vm_file || !vma->anon_vma,
-					      vma);
+				VM_BUG(vma->vm_file || !vma->anon_vma,
+				       "%pZv", vma);
 				/* See comment in move_ptes() */
 				if (need_rmap_locks)
 					anon_vma_lock_write(vma->anon_vma);
diff --git a/mm/rmap.c b/mm/rmap.c
index f8a6bca..1ef7e6f 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -576,7 +576,7 @@ vma_address(struct page *page, struct vm_area_struct *vma)
 	unsigned long address = __vma_address(page, vma);
 
 	/* page should be within @vma mapping range */
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+	VM_BUG(address < vma->vm_start || address >= vma->vm_end, "%pZv", vma);
 
 	return address;
 }
@@ -972,7 +972,7 @@ void page_move_anon_rmap(struct page *page,
 	struct anon_vma *anon_vma = vma->anon_vma;
 
 	VM_BUG(!PageLocked(page), "%pZp", page);
-	VM_BUG_ON_VMA(!anon_vma, vma);
+	VM_BUG(!anon_vma, "%pZv", vma);
 	VM_BUG(page->index != linear_page_index(vma, address), "%pZp", page);
 
 	anon_vma = (void *) anon_vma + PAGE_MAPPING_ANON;
@@ -1099,7 +1099,7 @@ void do_page_add_anon_rmap(struct page *page,
 void page_add_new_anon_rmap(struct page *page,
 	struct vm_area_struct *vma, unsigned long address)
 {
-	VM_BUG_ON_VMA(address < vma->vm_start || address >= vma->vm_end, vma);
+	VM_BUG(address < vma->vm_start || address >= vma->vm_end, "%pZv", vma);
 	SetPageSwapBacked(page);
 	atomic_set(&page->_mapcount, 0); /* increment count (starts at -1) */
 	if (PageTransHuge(page))
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC 10/11] mm: debug: kill VM_BUG_ON_MM
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
                   ` (8 preceding siblings ...)
  2015-04-14 20:56 ` [RFC 09/11] mm: debug: kill VM_BUG_ON_VMA Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-14 20:56 ` [RFC 11/11] mm: debug: use VM_BUG() to help with debug output Sasha Levin
  2015-04-15  8:45 ` [RFC 00/11] mm: debug: formatting memory management structs Kirill A. Shutemov
  11 siblings, 0 replies; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

Now that VM_BUG() takes an arbitrary format string, VM_BUG_ON_MM(cond, mm)
is equivalent to VM_BUG(cond, "%pZm", mm). Convert all callers and remove
the macro.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 include/linux/mmdebug.h |    8 --------
 kernel/fork.c           |    2 +-
 mm/gup.c                |    2 +-
 mm/huge_memory.c        |    2 +-
 mm/mmap.c               |    2 +-
 mm/pagewalk.c           |    2 +-
 6 files changed, 5 insertions(+), 13 deletions(-)

diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
index 5106ab5..b810800 100644
--- a/include/linux/mmdebug.h
+++ b/include/linux/mmdebug.h
@@ -20,13 +20,6 @@ char *format_mm(const struct mm_struct *mm, char *buf, char *end);
 		}							\
 	} while (0)
 #define VM_BUG_ON(cond) VM_BUG(cond, "%s\n", __stringify(cond))
-#define VM_BUG_ON_MM(cond, mm)						\
-	do {								\
-		if (unlikely(cond)) {					\
-			pr_emerg("%pZm", mm);				\
-			BUG();						\
-		}							\
-	} while (0)
 #define VM_WARN_ON(cond) WARN_ON(cond)
 #define VM_WARN_ON_ONCE(cond) WARN_ON_ONCE(cond)
 #define VM_WARN_ONCE(cond, format...) WARN_ONCE(cond, format)
@@ -41,7 +34,6 @@ static char *format_mm(const struct mm_struct *mm, char *buf, char *end)
 }
 #define VM_BUG(cond, fmt...) BUILD_BUG_ON_INVALID(cond)
 #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
-#define VM_BUG_ON_MM(cond, mm) VM_BUG_ON(cond)
 #define VM_WARN_ON(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ON_ONCE(cond) BUILD_BUG_ON_INVALID(cond)
 #define VM_WARN_ONCE(cond, format...) BUILD_BUG_ON_INVALID(cond)
diff --git a/kernel/fork.c b/kernel/fork.c
index 18c44fb..36a7c36 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -645,7 +645,7 @@ static void check_mm(struct mm_struct *mm)
 				mm_nr_pmds(mm));
 
 #if defined(CONFIG_TRANSPARENT_HUGEPAGE) && !USE_SPLIT_PMD_PTLOCKS
-	VM_BUG_ON_MM(mm->pmd_huge_pte, mm);
+	VM_BUG(mm->pmd_huge_pte, "%pZm", mm);
 #endif
 }
 
diff --git a/mm/gup.c b/mm/gup.c
index 0b851ac..57cc2de 100644
--- a/mm/gup.c
+++ b/mm/gup.c
@@ -848,7 +848,7 @@ long populate_vma_page_range(struct vm_area_struct *vma,
 	VM_BUG_ON(end   & ~PAGE_MASK);
 	VM_BUG(start < vma->vm_start, "%pZv", vma);
 	VM_BUG(end > vma->vm_end, "%pZv", vma);
-	VM_BUG_ON_MM(!rwsem_is_locked(&mm->mmap_sem), mm);
+	VM_BUG(!rwsem_is_locked(&mm->mmap_sem), "%pZm", mm);
 
 	gup_flags = FOLL_TOUCH | FOLL_POPULATE;
 	/*
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index d4b20cd..cda190f 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2072,7 +2072,7 @@ int __khugepaged_enter(struct mm_struct *mm)
 		return -ENOMEM;
 
 	/* __khugepaged_exit() must not run from under us */
-	VM_BUG_ON_MM(khugepaged_test_exit(mm), mm);
+	VM_BUG(khugepaged_test_exit(mm), "%pZm", mm);
 	if (unlikely(test_and_set_bit(MMF_VM_HUGEPAGE, &mm->flags))) {
 		free_mm_slot(mm_slot);
 		return 0;
diff --git a/mm/mmap.c b/mm/mmap.c
index f2db320..311a795 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -464,7 +464,7 @@ static void validate_mm(struct mm_struct *mm)
 			pr_emerg("map_count %d rb %d\n", mm->map_count, i);
 		bug = 1;
 	}
-	VM_BUG_ON_MM(bug, mm);
+	VM_BUG(bug, "%pZm", mm);
 }
 #else
 #define validate_mm_rb(root, ignore) do { } while (0)
diff --git a/mm/pagewalk.c b/mm/pagewalk.c
index 29f2f8b..952cddc 100644
--- a/mm/pagewalk.c
+++ b/mm/pagewalk.c
@@ -249,7 +249,7 @@ int walk_page_range(unsigned long start, unsigned long end,
 	if (!walk->mm)
 		return -EINVAL;
 
-	VM_BUG_ON_MM(!rwsem_is_locked(&walk->mm->mmap_sem), walk->mm);
+	VM_BUG(!rwsem_is_locked(&walk->mm->mmap_sem), "%pZm", walk->mm);
 
 	vma = find_vma(walk->mm, start);
 	do {
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread
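[Editorial note] The conversions in the patch above all funnel into a single VM_BUG(cond, fmt, ...) macro. As a hedged illustration of that pattern, here is a plain userspace sketch — the abort()-based BUG() stand-in and helper name are assumptions for illustration, not the kernel code itself:

```c
#include <stdio.h>
#include <stdlib.h>

/* Userspace stand-in for the kernel's BUG(); the real macro panics. */
#define BUG() abort()

/*
 * VM_BUG(cond, fmt, ...): if cond holds, print a caller-chosen format
 * string describing the relevant struct(s), then crash. This subsumes
 * VM_BUG_ON_MM/_VMA/_PAGE, each of which hard-coded one format.
 */
#define VM_BUG(cond, ...)                                 \
	do {                                              \
		if (cond) {                               \
			fprintf(stderr, __VA_ARGS__);     \
			BUG();                            \
		}                                         \
	} while (0)

/* Tiny helper for the self-check: returns 1 iff cond was false. */
static int vm_check_ok(int cond)
{
	VM_BUG(cond, "unexpected condition: %d\n", cond);
	return 1;
}
```

With this shape, VM_BUG_ON_MM(cond, mm) becomes nothing more than VM_BUG(cond, "%pZm", mm), which is exactly the mechanical substitution the patch performs.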

* [RFC 11/11] mm: debug: use VM_BUG() to help with debug output
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
                   ` (9 preceding siblings ...)
  2015-04-14 20:56 ` [RFC 10/11] mm: debug: kill VM_BUG_ON_MM Sasha Levin
@ 2015-04-14 20:56 ` Sasha Levin
  2015-04-15  8:45 ` [RFC 00/11] mm: debug: formatting memory management structs Kirill A. Shutemov
  11 siblings, 0 replies; 19+ messages in thread
From: Sasha Levin @ 2015-04-14 20:56 UTC (permalink / raw)
  To: linux-kernel; +Cc: akpm, kirill, linux-mm

This shows how we can use VM_BUG() to improve output in various
common places.

Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
---
 arch/arm/mm/mmap.c               |    2 +-
 arch/frv/mm/elf-fdpic.c          |    4 ++--
 arch/mips/mm/gup.c               |    4 ++--
 arch/parisc/kernel/sys_parisc.c  |    2 +-
 arch/powerpc/mm/hugetlbpage.c    |    2 +-
 arch/powerpc/mm/pgtable_64.c     |    4 ++--
 arch/s390/mm/gup.c               |    2 +-
 arch/s390/mm/mmap.c              |    2 +-
 arch/s390/mm/pgtable.c           |    6 +++---
 arch/sh/mm/mmap.c                |    2 +-
 arch/sparc/kernel/sys_sparc_64.c |    4 ++--
 arch/sparc/mm/gup.c              |    2 +-
 arch/sparc/mm/hugetlbpage.c      |    4 ++--
 arch/tile/mm/hugetlbpage.c       |    2 +-
 arch/x86/kernel/sys_x86_64.c     |    2 +-
 arch/x86/mm/hugetlbpage.c        |    2 +-
 arch/x86/mm/pgtable.c            |    6 +++---
 mm/huge_memory.c                 |    4 ++--
 mm/mmap.c                        |    2 +-
 mm/pgtable-generic.c             |    8 ++++----
 20 files changed, 33 insertions(+), 33 deletions(-)

diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 407dc78..6767df7 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -159,7 +159,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	 * allocations.
 	 */
 	if (addr & ~PAGE_MASK) {
-		VM_BUG_ON(addr != -ENOMEM);
+		VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 		info.flags = 0;
 		info.low_limit = mm->mmap_base;
 		info.high_limit = TASK_SIZE;
diff --git a/arch/frv/mm/elf-fdpic.c b/arch/frv/mm/elf-fdpic.c
index 836f147..6ae5497 100644
--- a/arch/frv/mm/elf-fdpic.c
+++ b/arch/frv/mm/elf-fdpic.c
@@ -88,7 +88,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 	addr = vm_unmapped_area(&info);
 	if (!(addr & ~PAGE_MASK))
 		goto success;
-	VM_BUG_ON(addr != -ENOMEM);
+	VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 
 	/* search from just above the WorkRAM area to the top of memory */
 	info.low_limit = PAGE_ALIGN(0x80000000);
@@ -96,7 +96,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 	addr = vm_unmapped_area(&info);
 	if (!(addr & ~PAGE_MASK))
 		goto success;
-	VM_BUG_ON(addr != -ENOMEM);
+	VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 
 #if 0
 	printk("[area] l=%lx (ENOMEM) f='%s'\n",
diff --git a/arch/mips/mm/gup.c b/arch/mips/mm/gup.c
index 349995d..364e27b 100644
--- a/arch/mips/mm/gup.c
+++ b/arch/mips/mm/gup.c
@@ -85,7 +85,7 @@ static int gup_huge_pmd(pmd_t pmd, unsigned long addr, unsigned long end,
 	head = pte_page(pte);
 	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	do {
-		VM_BUG_ON(compound_head(page) != head);
+		VM_BUG(compound_head(page) != head, "%pZp\n%pZp", page, head);
 		pages[*nr] = page;
 		if (PageTail(page))
 			get_huge_page_tail(page);
@@ -151,7 +151,7 @@ static int gup_huge_pud(pud_t pud, unsigned long addr, unsigned long end,
 	head = pte_page(pte);
 	page = head + ((addr & ~PUD_MASK) >> PAGE_SHIFT);
 	do {
-		VM_BUG_ON(compound_head(page) != head);
+		VM_BUG(compound_head(page) != head, "%pZp\n%pZp", page, head);
 		pages[*nr] = page;
 		if (PageTail(page))
 			get_huge_page_tail(page);
diff --git a/arch/parisc/kernel/sys_parisc.c b/arch/parisc/kernel/sys_parisc.c
index e1ffea2..845823c 100644
--- a/arch/parisc/kernel/sys_parisc.c
+++ b/arch/parisc/kernel/sys_parisc.c
@@ -187,7 +187,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	addr = vm_unmapped_area(&info);
 	if (!(addr & ~PAGE_MASK))
 		goto found_addr;
-	VM_BUG_ON(addr != -ENOMEM);
+	VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 
 	/*
 	 * A failed mmap() very likely causes application failure,
diff --git a/arch/powerpc/mm/hugetlbpage.c b/arch/powerpc/mm/hugetlbpage.c
index fa9d5c2..8e8834c 100644
--- a/arch/powerpc/mm/hugetlbpage.c
+++ b/arch/powerpc/mm/hugetlbpage.c
@@ -1062,7 +1062,7 @@ int gup_hugepte(pte_t *ptep, unsigned long sz, unsigned long addr,
 	page = head + ((addr & (sz-1)) >> PAGE_SHIFT);
 	tail = page;
 	do {
-		VM_BUG_ON(compound_head(page) != head);
+		VM_BUG(compound_head(page) != head, "%pZp\n%pZp", page, head);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
diff --git a/arch/powerpc/mm/pgtable_64.c b/arch/powerpc/mm/pgtable_64.c
index 59daa5e..b33bc22 100644
--- a/arch/powerpc/mm/pgtable_64.c
+++ b/arch/powerpc/mm/pgtable_64.c
@@ -559,7 +559,7 @@ pmd_t pmdp_clear_flush(struct vm_area_struct *vma, unsigned long address,
 {
 	pmd_t pmd;
 
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 	if (pmd_trans_huge(*pmdp)) {
 		pmd = pmdp_get_and_clear(vma->vm_mm, address, pmdp);
 	} else {
@@ -627,7 +627,7 @@ void pmdp_splitting_flush(struct vm_area_struct *vma,
 {
 	unsigned long old, tmp;
 
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 
 #ifdef CONFIG_DEBUG_VM
 	WARN_ON(!pmd_trans_huge(*pmdp));
diff --git a/arch/s390/mm/gup.c b/arch/s390/mm/gup.c
index 1eb41bb..2ad6ba0 100644
--- a/arch/s390/mm/gup.c
+++ b/arch/s390/mm/gup.c
@@ -66,7 +66,7 @@ static inline int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	tail = page;
 	do {
-		VM_BUG_ON(compound_head(page) != head);
+		VM_BUG(compound_head(page) != head, "%pZp\n%pZp", page, head);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
diff --git a/arch/s390/mm/mmap.c b/arch/s390/mm/mmap.c
index 6e552af..178eb32 100644
--- a/arch/s390/mm/mmap.c
+++ b/arch/s390/mm/mmap.c
@@ -167,7 +167,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	 * allocations.
 	 */
 	if (addr & ~PAGE_MASK) {
-		VM_BUG_ON(addr != -ENOMEM);
+		VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
 		info.high_limit = TASK_SIZE;
diff --git a/arch/s390/mm/pgtable.c b/arch/s390/mm/pgtable.c
index 33f5894..e16bf2c 100644
--- a/arch/s390/mm/pgtable.c
+++ b/arch/s390/mm/pgtable.c
@@ -1389,7 +1389,7 @@ EXPORT_SYMBOL_GPL(gmap_test_and_clear_dirty);
 int pmdp_clear_flush_young(struct vm_area_struct *vma, unsigned long address,
 			   pmd_t *pmdp)
 {
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 	/* No need to flush TLB
 	 * On s390 reference bits are in storage key and never in TLB */
 	return pmdp_test_and_clear_young(vma, address, pmdp);
@@ -1399,7 +1399,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
 			  unsigned long address, pmd_t *pmdp,
 			  pmd_t entry, int dirty)
 {
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 
 	entry = pmd_mkyoung(entry);
 	if (dirty)
@@ -1419,7 +1419,7 @@ static void pmdp_splitting_flush_sync(void *arg)
 void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
 			  pmd_t *pmdp)
 {
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 	if (!test_and_set_bit(_SEGMENT_ENTRY_SPLIT_BIT,
 			      (unsigned long *) pmdp)) {
 		/* need to serialize against gup-fast (IRQ disabled) */
diff --git a/arch/sh/mm/mmap.c b/arch/sh/mm/mmap.c
index 6777177..f30fd96 100644
--- a/arch/sh/mm/mmap.c
+++ b/arch/sh/mm/mmap.c
@@ -132,7 +132,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	 * allocations.
 	 */
 	if (addr & ~PAGE_MASK) {
-		VM_BUG_ON(addr != -ENOMEM);
+		VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
 		info.high_limit = TASK_SIZE;
diff --git a/arch/sparc/kernel/sys_sparc_64.c b/arch/sparc/kernel/sys_sparc_64.c
index 30e7ddb..a77210d 100644
--- a/arch/sparc/kernel/sys_sparc_64.c
+++ b/arch/sparc/kernel/sys_sparc_64.c
@@ -131,7 +131,7 @@ unsigned long arch_get_unmapped_area(struct file *filp, unsigned long addr, unsi
 	addr = vm_unmapped_area(&info);
 
 	if ((addr & ~PAGE_MASK) && task_size > VA_EXCLUDE_END) {
-		VM_BUG_ON(addr != -ENOMEM);
+		VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 		info.low_limit = VA_EXCLUDE_END;
 		info.high_limit = task_size;
 		addr = vm_unmapped_area(&info);
@@ -200,7 +200,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	 * allocations.
 	 */
 	if (addr & ~PAGE_MASK) {
-		VM_BUG_ON(addr != -ENOMEM);
+		VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
 		info.high_limit = STACK_TOP32;
diff --git a/arch/sparc/mm/gup.c b/arch/sparc/mm/gup.c
index 2e5c4fc..9d92335 100644
--- a/arch/sparc/mm/gup.c
+++ b/arch/sparc/mm/gup.c
@@ -84,7 +84,7 @@ static int gup_huge_pmd(pmd_t *pmdp, pmd_t pmd, unsigned long addr,
 	page = head + ((addr & ~PMD_MASK) >> PAGE_SHIFT);
 	tail = page;
 	do {
-		VM_BUG_ON(compound_head(page) != head);
+		VM_BUG(compound_head(page) != head, "%pZp\n%pZp", page, head);
 		pages[*nr] = page;
 		(*nr)++;
 		page++;
diff --git a/arch/sparc/mm/hugetlbpage.c b/arch/sparc/mm/hugetlbpage.c
index 4242eab..463214e 100644
--- a/arch/sparc/mm/hugetlbpage.c
+++ b/arch/sparc/mm/hugetlbpage.c
@@ -42,7 +42,7 @@ static unsigned long hugetlb_get_unmapped_area_bottomup(struct file *filp,
 	addr = vm_unmapped_area(&info);
 
 	if ((addr & ~PAGE_MASK) && task_size > VA_EXCLUDE_END) {
-		VM_BUG_ON(addr != -ENOMEM);
+		VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 		info.low_limit = VA_EXCLUDE_END;
 		info.high_limit = task_size;
 		addr = vm_unmapped_area(&info);
@@ -79,7 +79,7 @@ hugetlb_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	 * allocations.
 	 */
 	if (addr & ~PAGE_MASK) {
-		VM_BUG_ON(addr != -ENOMEM);
+		VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
 		info.high_limit = STACK_TOP32;
diff --git a/arch/tile/mm/hugetlbpage.c b/arch/tile/mm/hugetlbpage.c
index 8416240..e46dab5 100644
--- a/arch/tile/mm/hugetlbpage.c
+++ b/arch/tile/mm/hugetlbpage.c
@@ -205,7 +205,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 	 * allocations.
 	 */
 	if (addr & ~PAGE_MASK) {
-		VM_BUG_ON(addr != -ENOMEM);
+		VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
 		info.high_limit = TASK_SIZE;
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 10e0272..9737762 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -203,7 +203,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	addr = vm_unmapped_area(&info);
 	if (!(addr & ~PAGE_MASK))
 		return addr;
-	VM_BUG_ON(addr != -ENOMEM);
+	VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 
 bottomup:
 	/*
diff --git a/arch/x86/mm/hugetlbpage.c b/arch/x86/mm/hugetlbpage.c
index 42982b2..ae468ee 100644
--- a/arch/x86/mm/hugetlbpage.c
+++ b/arch/x86/mm/hugetlbpage.c
@@ -111,7 +111,7 @@ static unsigned long hugetlb_get_unmapped_area_topdown(struct file *file,
 	 * allocations.
 	 */
 	if (addr & ~PAGE_MASK) {
-		VM_BUG_ON(addr != -ENOMEM);
+		VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
 		info.high_limit = TASK_SIZE;
diff --git a/arch/x86/mm/pgtable.c b/arch/x86/mm/pgtable.c
index 3d6edea..7ec9841 100644
--- a/arch/x86/mm/pgtable.c
+++ b/arch/x86/mm/pgtable.c
@@ -427,7 +427,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
 {
 	int changed = !pmd_same(*pmdp, entry);
 
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 
 	if (changed && dirty) {
 		*pmdp = entry;
@@ -501,7 +501,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 {
 	int young;
 
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 
 	young = pmdp_test_and_clear_young(vma, address, pmdp);
 	if (young)
@@ -514,7 +514,7 @@ void pmdp_splitting_flush(struct vm_area_struct *vma,
 			  unsigned long address, pmd_t *pmdp)
 {
 	int set;
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 	set = !test_and_set_bit(_PAGE_BIT_SPLITTING,
 				(unsigned long *)pmdp);
 	if (set) {
diff --git a/mm/huge_memory.c b/mm/huge_memory.c
index cda190f..ccc8186 100644
--- a/mm/huge_memory.c
+++ b/mm/huge_memory.c
@@ -2488,7 +2488,7 @@ static void collapse_huge_page(struct mm_struct *mm,
 	unsigned long mmun_end;		/* For mmu_notifiers */
 	gfp_t gfp;
 
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 
 	/* Only allocate from the target node */
 	gfp = alloc_hugepage_gfpmask(khugepaged_defrag(), __GFP_OTHER_NODE) |
@@ -2620,7 +2620,7 @@ static int khugepaged_scan_pmd(struct mm_struct *mm,
 	int node = NUMA_NO_NODE;
 	bool writable = false, referenced = false;
 
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 
 	pmd = mm_find_pmd(mm, address);
 	if (!pmd)
diff --git a/mm/mmap.c b/mm/mmap.c
index 311a795..5439e8e 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -1977,7 +1977,7 @@ arch_get_unmapped_area_topdown(struct file *filp, const unsigned long addr0,
 	 * allocations.
 	 */
 	if (addr & ~PAGE_MASK) {
-		VM_BUG_ON(addr != -ENOMEM);
+		VM_BUG(addr != -ENOMEM, "addr = %lu\n", addr);
 		info.flags = 0;
 		info.low_limit = TASK_UNMAPPED_BASE;
 		info.high_limit = TASK_SIZE;
diff --git a/mm/pgtable-generic.c b/mm/pgtable-generic.c
index c25f94b..97327c3 100644
--- a/mm/pgtable-generic.c
+++ b/mm/pgtable-generic.c
@@ -64,7 +64,7 @@ int pmdp_set_access_flags(struct vm_area_struct *vma,
 {
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
 	int changed = !pmd_same(*pmdp, entry);
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 	if (changed) {
 		set_pmd_at(vma->vm_mm, address, pmdp, entry);
 		flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
@@ -95,7 +95,7 @@ int pmdp_clear_flush_young(struct vm_area_struct *vma,
 {
 	int young;
 #ifdef CONFIG_TRANSPARENT_HUGEPAGE
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 #else
 	BUG();
 #endif /* CONFIG_TRANSPARENT_HUGEPAGE */
@@ -125,7 +125,7 @@ pmd_t pmdp_clear_flush(struct vm_area_struct *vma, unsigned long address,
 		       pmd_t *pmdp)
 {
 	pmd_t pmd;
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 	pmd = pmdp_get_and_clear(vma->vm_mm, address, pmdp);
 	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
 	return pmd;
@@ -139,7 +139,7 @@ void pmdp_splitting_flush(struct vm_area_struct *vma, unsigned long address,
 			  pmd_t *pmdp)
 {
 	pmd_t pmd = pmd_mksplitting(*pmdp);
-	VM_BUG_ON(address & ~HPAGE_PMD_MASK);
+	VM_BUG(address & ~HPAGE_PMD_MASK, "address = %lu\n", address);
 	set_pmd_at(vma->vm_mm, address, pmdp, pmd);
 	/* tlb flush only to serialize against gup-fast */
 	flush_tlb_range(vma, address, address + HPAGE_PMD_SIZE);
-- 
1.7.10.4


^ permalink raw reply related	[flat|nested] 19+ messages in thread
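[Editorial note] The `addr & ~PAGE_MASK` checks converted throughout the patch above rely on a return convention worth spelling out: vm_unmapped_area() returns either a page-aligned address or a negative errno cast to unsigned long, and a small negative errno always has low bits set. A userspace sketch of the idiom, with an assumed PAGE_SIZE and made-up helper names:

```c
#include <errno.h>

#define PAGE_SIZE 4096UL
#define PAGE_MASK (~(PAGE_SIZE - 1))

/*
 * Mimic the return convention: a page-aligned address on success,
 * -ENOMEM (as an unsigned long) on failure.
 */
static unsigned long fake_unmapped_area(int fail)
{
	return fail ? (unsigned long)-ENOMEM : 0x7f0000000000UL;
}

/*
 * 1 if the return value encodes an error, 0 if it is a usable
 * address. An errno like -ENOMEM is ~0UL minus a small value, so
 * its page-offset bits are nonzero; a real mapping address has none.
 */
static int is_error_addr(unsigned long addr)
{
	return (addr & ~PAGE_MASK) != 0;
}
```

This is why the callers can assert `VM_BUG(addr != -ENOMEM, ...)` inside the `addr & ~PAGE_MASK` branch: once the low bits are set, the only legitimate value is the error code.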

* Re: [RFC 00/11] mm: debug: formatting memory management structs
  2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
                   ` (10 preceding siblings ...)
  2015-04-14 20:56 ` [RFC 11/11] mm: debug: use VM_BUG() to help with debug output Sasha Levin
@ 2015-04-15  8:45 ` Kirill A. Shutemov
  2015-04-15 12:52   ` Sasha Levin
  11 siblings, 1 reply; 19+ messages in thread
From: Kirill A. Shutemov @ 2015-04-15  8:45 UTC (permalink / raw)
  To: Sasha Levin; +Cc: linux-kernel, akpm, linux-mm

On Tue, Apr 14, 2015 at 04:56:22PM -0400, Sasha Levin wrote:
> This patch series adds knowledge about various memory management structures
> to the standard print functions.
> 
> In essence, it allows us to easily print those structures:
> 
> 	printk("%pZp %pZm %pZv", page, mm, vma);

Notably, you don't have \n in your format line. And it raises the question of
how well dump_page() and friends fit a printk-like interface: dump_page()
produces a multi-line printout.
Is that something printk() users would expect?

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC 00/11] mm: debug: formatting memory management structs
  2015-04-15  8:45 ` [RFC 00/11] mm: debug: formatting memory management structs Kirill A. Shutemov
@ 2015-04-15 12:52   ` Sasha Levin
  0 siblings, 0 replies; 19+ messages in thread
From: Sasha Levin @ 2015-04-15 12:52 UTC (permalink / raw)
  To: Kirill A. Shutemov; +Cc: linux-kernel, akpm, linux-mm

On 04/15/2015 04:45 AM, Kirill A. Shutemov wrote:
> On Tue, Apr 14, 2015 at 04:56:22PM -0400, Sasha Levin wrote:
>> > This patch series adds knowledge about various memory management structures
>> > to the standard print functions.
>> > 
>> > In essence, it allows us to easily print those structures:
>> > 
>> > 	printk("%pZp %pZm %pZv", page, mm, vma);
> Notably, you don't have \n in your format line. And it brings question how
> well dump_page() and friends fit printk-like interface. dump_page()
> produces multi-line print out.
> Is it something printk() users would expect?

Since we're printing a large amount of data out of multiple fields (rather than just
one potentially long field like "path"), the way I see it we could print it all on
one line and let it wrap.

While that is what printk users would most likely expect in theory, in practice it
might scroll off the screen and make us miss important output. It would also be
awkward to make that long line part of anything else; what else would you add there?

If we break it up into multiple lines instead, we keep it working the same way it
has worked so far. Also, these new printk format specifiers won't be used very often,
so we can hope that whoever uses them knows what they're doing and what the output
will look like.

Is there a use case where we'd want to keep it as a single line?


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC 01/11] mm: debug: format flags in a buffer
  2015-04-14 20:56 ` [RFC 01/11] mm: debug: format flags in a buffer Sasha Levin
@ 2015-04-30 15:39   ` Kirill A. Shutemov
  0 siblings, 0 replies; 19+ messages in thread
From: Kirill A. Shutemov @ 2015-04-30 15:39 UTC (permalink / raw)
  To: Sasha Levin; +Cc: linux-kernel, akpm, linux-mm

On Tue, Apr 14, 2015 at 04:56:23PM -0400, Sasha Levin wrote:
> Format various flags to a string buffer rather than printing them. This is
> a helper for later.
> 
> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
> ---
>  mm/debug.c |   35 +++++++++++++++++++++++++++++++++++
>  1 file changed, 35 insertions(+)
> 
> diff --git a/mm/debug.c b/mm/debug.c
> index 3eb3ac2..c9f7dd7 100644
> --- a/mm/debug.c
> +++ b/mm/debug.c
> @@ -80,6 +80,41 @@ static void dump_flags(unsigned long flags,
>  	pr_cont(")\n");
>  }
>  
> +static char *format_flags(unsigned long flags,
> +			const struct trace_print_flags *names, int count,
> +			char *buf, char *end)
> +{
> +	const char *delim = "";
> +	unsigned long mask;
> +	int i;
> +
> +	buf += snprintf(buf, (buf > end ? 0 : end - buf),
> +				"flags: %#lx(", flags);
> +
> +	/* remove zone id */
> +	flags &= (1UL << NR_PAGEFLAGS) - 1;
> +
> +	for (i = 0; i < count && flags; i++) {
> +                mask = names[i].mask;
> +                if ((flags & mask) != mask)
> +                        continue;
> +
> +                flags &= ~mask;
> +		buf += snprintf(buf, (buf > end ? 0 : end - buf),
> +                		"%s%s", delim, names[i].name);

Indent is off. Otherwise looks okay to me.

> +                delim = "|";
> +        }
> +
> +        /* check for left over flags */
> +        if (flags)
> +		buf += snprintf(buf, (buf > end ? 0 : end - buf),
> +                		"%s%#lx", delim, flags);
> +
> +	buf += snprintf(buf, (buf > end ? 0 : end - buf), ")\n");
> +
> +	return buf;
> +}
> +
>  void dump_page_badflags(struct page *page, const char *reason,
>  		unsigned long badflags)
>  {
> -- 
> 1.7.10.4
> 

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 19+ messages in thread
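[Editorial note] The `buf += snprintf(buf, buf > end ? 0 : end - buf, ...)` idiom in the patch above is easy to misread: buf is allowed to advance past end, because snprintf returns the length the full output would have needed, and a zero size makes further calls write nothing. A self-contained userspace sketch of the same append-with-clamp pattern (the flag names here are made up for illustration):

```c
#include <stdio.h>
#include <string.h>

struct flag_name {
	unsigned long mask;
	const char *name;
};

/*
 * Append decoded flags to buf, never writing past end. As in the
 * patch, the returned pointer may land past end; the distance from
 * the original buf tells the caller how much room was really needed.
 */
static char *format_flags(unsigned long flags,
			  const struct flag_name *names, int count,
			  char *buf, char *end)
{
	const char *delim = "";
	int i;

	buf += snprintf(buf, buf > end ? 0 : (size_t)(end - buf),
			"flags: %#lx(", flags);
	for (i = 0; i < count && flags; i++) {
		if ((flags & names[i].mask) != names[i].mask)
			continue;
		flags &= ~names[i].mask;
		buf += snprintf(buf, buf > end ? 0 : (size_t)(end - buf),
				"%s%s", delim, names[i].name);
		delim = "|";
	}
	/* check for leftover flags with no name */
	if (flags)
		buf += snprintf(buf, buf > end ? 0 : (size_t)(end - buf),
				"%s%#lx", delim, flags);
	buf += snprintf(buf, buf > end ? 0 : (size_t)(end - buf), ")");
	return buf;
}

/* Small self-check: decode two known bits into a roomy buffer. */
static int format_flags_demo(void)
{
	static const struct flag_name names[] = {
		{ 0x1, "locked" }, { 0x2, "dirty" },
	};
	char buf[64];

	format_flags(0x3, names, 2, buf, buf + sizeof(buf));
	return strcmp(buf, "flags: 0x3(locked|dirty)") == 0;
}
```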

* Re: [RFC 02/11] mm: debug: deal with a new family of MM pointers
  2015-04-14 20:56 ` [RFC 02/11] mm: debug: deal with a new family of MM pointers Sasha Levin
@ 2015-04-30 16:17   ` Kirill A. Shutemov
  2015-04-30 16:44     ` Sasha Levin
  0 siblings, 1 reply; 19+ messages in thread
From: Kirill A. Shutemov @ 2015-04-30 16:17 UTC (permalink / raw)
  To: Sasha Levin; +Cc: linux-kernel, akpm, linux-mm

On Tue, Apr 14, 2015 at 04:56:24PM -0400, Sasha Levin wrote:
> This teaches our printing functions about a new family of MM pointers that
> they can now print.
> 
> I've picked %pZ because %pm and %pM were already taken, so I figured it
> doesn't really matter what we go with. We also have the option of stealing
> one of those two...
> 
> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
> ---
>  lib/vsprintf.c |   13 +++++++++++++
>  1 file changed, 13 insertions(+)
> 
> diff --git a/lib/vsprintf.c b/lib/vsprintf.c
> index 8243e2f..809d19d 100644
> --- a/lib/vsprintf.c
> +++ b/lib/vsprintf.c
> @@ -1375,6 +1375,16 @@ char *comm_name(char *buf, char *end, struct task_struct *tsk,
>  	return string(buf, end, name, spec);
>  }
>  
> +static noinline_for_stack
> +char *mm_pointer(char *buf, char *end, struct task_struct *tsk,
> +		struct printf_spec spec, const char *fmt)
> +{
> +	switch (fmt[1]) {

shouldn't we print out at least the pointer address for unknown suffixes?

> +	}
> +
> +	return buf;
> +}
> +
>  int kptr_restrict __read_mostly;
>  
>  /*
> @@ -1463,6 +1473,7 @@ int kptr_restrict __read_mostly;
>   *        (legacy clock framework) of the clock
>   * - 'Cr' For a clock, it prints the current rate of the clock
>   * - 'T' task_struct->comm
> + * - 'Z' Outputs a readable version of a type of memory management struct.
>   *
>   * Note: The difference between 'S' and 'F' is that on ia64 and ppc64
>   * function pointers are really function descriptors, which contain a
> @@ -1615,6 +1626,8 @@ char *pointer(const char *fmt, char *buf, char *end, void *ptr,
>  				   spec, fmt);
>  	case 'T':
>  		return comm_name(buf, end, ptr, spec, fmt);
> +	case 'Z':
> +		return mm_pointer(buf, end, ptr, spec, fmt);
>  	}
>  	spec.flags |= SMALL;
>  	if (spec.field_width == -1) {
> -- 
> 1.7.10.4
> 

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 19+ messages in thread
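[Editorial note] Kirill's suggestion above — fall back to printing the raw pointer for unknown %pZ* suffixes — might look like the following sketch. The dispatch structure and placeholder output are assumptions for illustration; the real vsprintf.c plumbing (printf_spec, format_page() and friends) differs:

```c
#include <stdio.h>
#include <string.h>

/*
 * Dispatch on the character after 'Z' in a "%pZ?" specifier. For a
 * recognised suffix we would call the matching struct formatter; for
 * an unknown one, fall back to the raw pointer value so the output
 * is never silently empty.
 */
static char *mm_pointer(char *buf, char *end, const void *ptr,
			const char *fmt)
{
	size_t room = buf > end ? 0 : (size_t)(end - buf);

	switch (fmt[1]) {
	case 'p':	/* struct page: would call format_page() */
	case 'm':	/* struct mm_struct: would call format_mm() */
	case 'v':	/* struct vm_area_struct: would call format_vma() */
		return buf + snprintf(buf, room, "<%c:%p>", fmt[1], ptr);
	default:	/* unknown suffix: print the address itself */
		return buf + snprintf(buf, room, "%p", ptr);
	}
}

/* Self-check: an unknown suffix still yields a non-empty string. */
static int mm_pointer_demo(void)
{
	char buf[64];

	mm_pointer(buf, buf + sizeof(buf), (const void *)0x1234, "Zx");
	return strlen(buf) > 0;
}
```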

* Re: [RFC 03/11] mm: debug: dump VMA into a string rather than directly on screen
  2015-04-14 20:56 ` [RFC 03/11] mm: debug: dump VMA into a string rather than directly on screen Sasha Levin
@ 2015-04-30 16:18   ` Kirill A. Shutemov
  0 siblings, 0 replies; 19+ messages in thread
From: Kirill A. Shutemov @ 2015-04-30 16:18 UTC (permalink / raw)
  To: Sasha Levin; +Cc: linux-kernel, akpm, linux-mm

On Tue, Apr 14, 2015 at 04:56:25PM -0400, Sasha Levin wrote:
> This lets us use regular string formatting code to dump VMAs, and lets
> VM_BUG_ON_VMA use it instead of printing directly to the screen.
> 
> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
> ---
>  include/linux/mmdebug.h |    8 ++++++--
>  lib/vsprintf.c          |    7 +++++--
>  mm/debug.c              |   26 ++++++++++++++------------
>  3 files changed, 25 insertions(+), 16 deletions(-)
> 
> diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
> index 877ef22..506e405 100644
> --- a/include/linux/mmdebug.h
> +++ b/include/linux/mmdebug.h
> @@ -10,10 +10,10 @@ struct mm_struct;
>  extern void dump_page(struct page *page, const char *reason);
>  extern void dump_page_badflags(struct page *page, const char *reason,
>  			       unsigned long badflags);
> -void dump_vma(const struct vm_area_struct *vma);
>  void dump_mm(const struct mm_struct *mm);
>  
>  #ifdef CONFIG_DEBUG_VM
> +char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
>  #define VM_BUG_ON(cond) BUG_ON(cond)
>  #define VM_BUG_ON_PAGE(cond, page)					\
>  	do {								\
> @@ -25,7 +25,7 @@ void dump_mm(const struct mm_struct *mm);
>  #define VM_BUG_ON_VMA(cond, vma)					\
>  	do {								\
>  		if (unlikely(cond)) {					\
> -			dump_vma(vma);					\
> +			pr_emerg("%pZv", vma);				\
>  			BUG();						\
>  		}							\
>  	} while (0)
> @@ -40,6 +40,10 @@ void dump_mm(const struct mm_struct *mm);
>  #define VM_WARN_ON_ONCE(cond) WARN_ON_ONCE(cond)
>  #define VM_WARN_ONCE(cond, format...) WARN_ONCE(cond, format)
>  #else
> +static char *format_vma(const struct vm_area_struct *vma, char *buf, char *end)
> +{

Again: print the address?

> +	return buf;
> +}
>  #define VM_BUG_ON(cond) BUILD_BUG_ON_INVALID(cond)
>  #define VM_BUG_ON_PAGE(cond, page) VM_BUG_ON(cond)
>  #define VM_BUG_ON_VMA(cond, vma) VM_BUG_ON(cond)
-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC 07/11] mm: debug: VM_BUG()
  2015-04-14 20:56 ` [RFC 07/11] mm: debug: VM_BUG() Sasha Levin
@ 2015-04-30 16:22   ` Kirill A. Shutemov
  0 siblings, 0 replies; 19+ messages in thread
From: Kirill A. Shutemov @ 2015-04-30 16:22 UTC (permalink / raw)
  To: Sasha Levin; +Cc: linux-kernel, akpm, linux-mm

On Tue, Apr 14, 2015 at 04:56:29PM -0400, Sasha Levin wrote:
> VM_BUG() complements VM_BUG_ON() just as WARN() complements WARN_ON().
> 
> This lets us format custom strings to output when a VM_BUG() is hit.
> 
> Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
> ---
>  include/linux/mmdebug.h |   10 +++++++++-
>  1 file changed, 9 insertions(+), 1 deletion(-)
> 
> diff --git a/include/linux/mmdebug.h b/include/linux/mmdebug.h
> index 8b3f5a0..42f41e3 100644
> --- a/include/linux/mmdebug.h
> +++ b/include/linux/mmdebug.h
> @@ -12,7 +12,14 @@ char *format_page(struct page *page, char *buf, char *end);
>  #ifdef CONFIG_DEBUG_VM
>  char *format_vma(const struct vm_area_struct *vma, char *buf, char *end);
>  char *format_mm(const struct mm_struct *mm, char *buf, char *end);
> -#define VM_BUG_ON(cond) BUG_ON(cond)
> +#define VM_BUG(cond, fmt...)						\

vm_bugf() ? ;)

-- 
 Kirill A. Shutemov

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC 02/11] mm: debug: deal with a new family of MM pointers
  2015-04-30 16:17   ` Kirill A. Shutemov
@ 2015-04-30 16:44     ` Sasha Levin
  0 siblings, 0 replies; 19+ messages in thread
From: Sasha Levin @ 2015-04-30 16:44 UTC (permalink / raw)
  To: Kirill A. Shutemov; +Cc: linux-kernel, akpm, linux-mm

On 04/30/2015 12:17 PM, Kirill A. Shutemov wrote:
> On Tue, Apr 14, 2015 at 04:56:24PM -0400, Sasha Levin wrote:
>> > This teaches our printing functions about a new family of MM pointer that it
>> > could now print.
>> > 
>> > I've picked %pZ because %pm and %pM were already taken, so I figured it
>> > doesn't really matter what we go with. We also have the option of stealing
>> > one of those two...
>> > 
>> > Signed-off-by: Sasha Levin <sasha.levin@oracle.com>
>> > ---
>> >  lib/vsprintf.c |   13 +++++++++++++
>> >  1 file changed, 13 insertions(+)
>> > 
>> > diff --git a/lib/vsprintf.c b/lib/vsprintf.c
>> > index 8243e2f..809d19d 100644
>> > --- a/lib/vsprintf.c
>> > +++ b/lib/vsprintf.c
>> > @@ -1375,6 +1375,16 @@ char *comm_name(char *buf, char *end, struct task_struct *tsk,
>> >  	return string(buf, end, name, spec);
>> >  }
>> >  
>> > +static noinline_for_stack
>> > +char *mm_pointer(char *buf, char *end, struct task_struct *tsk,
>> > +		struct printf_spec spec, const char *fmt)
>> > +{
>> > +	switch (fmt[1]) {
> shouldn't we printout at least pointer address for unknown suffixes?

Sure, we can. We can also add a WARN() to make that failure obvious (there's
no reason to use an unrecognised %pZ* format on purpose).


Thanks,
Sasha


^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2015-04-30 16:44 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-04-14 20:56 [RFC 00/11] mm: debug: formatting memory management structs Sasha Levin
2015-04-14 20:56 ` [RFC 01/11] mm: debug: format flags in a buffer Sasha Levin
2015-04-30 15:39   ` Kirill A. Shutemov
2015-04-14 20:56 ` [RFC 02/11] mm: debug: deal with a new family of MM pointers Sasha Levin
2015-04-30 16:17   ` Kirill A. Shutemov
2015-04-30 16:44     ` Sasha Levin
2015-04-14 20:56 ` [RFC 03/11] mm: debug: dump VMA into a string rather than directly on screen Sasha Levin
2015-04-30 16:18   ` Kirill A. Shutemov
2015-04-14 20:56 ` [RFC 04/11] mm: debug: dump struct MM " Sasha Levin
2015-04-14 20:56 ` [RFC 05/11] mm: debug: dump page " Sasha Levin
2015-04-14 20:56 ` [RFC 06/11] mm: debug: clean unused code Sasha Levin
2015-04-14 20:56 ` [RFC 07/11] mm: debug: VM_BUG() Sasha Levin
2015-04-30 16:22   ` Kirill A. Shutemov
2015-04-14 20:56 ` [RFC 08/11] mm: debug: kill VM_BUG_ON_PAGE Sasha Levin
2015-04-14 20:56 ` [RFC 09/11] mm: debug: kill VM_BUG_ON_VMA Sasha Levin
2015-04-14 20:56 ` [RFC 10/11] mm: debug: kill VM_BUG_ON_MM Sasha Levin
2015-04-14 20:56 ` [RFC 11/11] mm: debug: use VM_BUG() to help with debug output Sasha Levin
2015-04-15  8:45 ` [RFC 00/11] mm: debug: formatting memory management structs Kirill A. Shutemov
2015-04-15 12:52   ` Sasha Levin
