* [PATCH] x86/mm: fix a crash with kmemleak_scan()
From: Qian Cai @ 2019-04-23 16:58 UTC (permalink / raw)
  To: bp, tglx, mingo
  Cc: catalin.marinas, dave.hansen, luto, peterz, x86, linux-kernel,
	brijesh.singh, Qian Cai

The first kmemleak_scan() call after boot triggers the crash below because
this call path:

kernel_init
  free_initmem
    mem_encrypt_free_decrypted_mem
      free_init_pages

unmaps memory inside the .bss when DEBUG_PAGEALLOC=y. kmemleak_init()
registers the .data/.bss sections (and .data..ro_after_init, if it does
not lie within .data), and kmemleak_scan() then scans those addresses and
dereferences them looking for pointer references. If free_init_pages()
frees and unmaps pages in those sections, kmemleak_scan() crashes when it
dereferences one of those addresses:

BUG: unable to handle kernel paging request at ffffffffbd402000
CPU: 12 PID: 325 Comm: kmemleak Not tainted 5.1.0-rc4+ #4
RIP: 0010:scan_block+0x58/0x160
Call Trace:
 scan_gray_list+0x1d9/0x280
 kmemleak_scan+0x485/0xad0
 kmemleak_scan_thread+0x9f/0xc4
 kthread+0x1d2/0x1f0
 ret_from_fork+0x35/0x40
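
For illustration, a simplified sketch of what the scanner conceptually
does (this is not the actual mm/kmemleak.c code; the function and
variable names here are made up):

  static void scan_range_sketch(void *start, void *end)
  {
  	unsigned long *ptr;

  	/* Walk every word in a registered range, e.g. .bss. */
  	for (ptr = start; (void *)ptr < end; ptr++) {
  		unsigned long value = *ptr;	/* faults if the page was unmapped */

  		/* ... check whether 'value' points into a tracked object ... */
  	}
  }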

Since kmemleak_free_part() is tolerant to unknown objects (not tracked by
kmemleak), it is fine to call it from free_init_pages() even if not all
address ranges passed to this function are known to kmemleak.
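
For example, a hedged sketch (the ranges below are hypothetical;
kmemleak_free_part() simply returns when no tracked object covers the
given start address):

  /* Range registered at boot, e.g. inside .bss: removed from tracking. */
  kmemleak_free_part((void *)tracked_begin, tracked_len);

  /* Range kmemleak never tracked: the call is a harmless no-op. */
  kmemleak_free_part((void *)untracked_begin, untracked_len);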

Signed-off-by: Qian Cai <cai@lca.pw>
---
 arch/x86/mm/init.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index f905a2371080..8dacdb96899e 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -5,6 +5,7 @@
 #include <linux/memblock.h>
 #include <linux/swapfile.h>
 #include <linux/swapops.h>
+#include <linux/kmemleak.h>
 
 #include <asm/set_memory.h>
 #include <asm/e820/api.h>
@@ -766,6 +767,11 @@ void free_init_pages(const char *what, unsigned long begin, unsigned long end)
 	if (debug_pagealloc_enabled()) {
 		pr_info("debug: unmapping init [mem %#010lx-%#010lx]\n",
 			begin, end - 1);
+		/*
+		 * Inform kmemleak about the hole in the memory since the
+		 * corresponding pages will be unmapped.
+		 */
+		kmemleak_free_part((void *)begin, end - begin);
 		set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
 	} else {
 		/*
-- 
2.20.1 (Apple Git-117)



* Re: [PATCH] x86/mm: fix a crash with kmemleak_scan()
From: Catalin Marinas @ 2019-04-24  8:08 UTC (permalink / raw)
  To: Qian Cai
  Cc: bp, tglx, mingo, dave.hansen, luto, peterz, x86, linux-kernel,
	brijesh.singh

On Tue, Apr 23, 2019 at 12:58:11PM -0400, Qian Cai wrote:
> The first kmemleak_scan() call after boot triggers the crash below because
> this call path:
> 
> kernel_init
>   free_initmem
>     mem_encrypt_free_decrypted_mem
>       free_init_pages
> 
> unmaps memory inside the .bss when DEBUG_PAGEALLOC=y. kmemleak_init()
> registers the .data/.bss sections (and .data..ro_after_init, if it does
> not lie within .data), and kmemleak_scan() then scans those addresses and
> dereferences them looking for pointer references. If free_init_pages()
> frees and unmaps pages in those sections, kmemleak_scan() crashes when it
> dereferences one of those addresses:
> 
> BUG: unable to handle kernel paging request at ffffffffbd402000
> CPU: 12 PID: 325 Comm: kmemleak Not tainted 5.1.0-rc4+ #4
> RIP: 0010:scan_block+0x58/0x160
> Call Trace:
>  scan_gray_list+0x1d9/0x280
>  kmemleak_scan+0x485/0xad0
>  kmemleak_scan_thread+0x9f/0xc4
>  kthread+0x1d2/0x1f0
>  ret_from_fork+0x35/0x40
> 
> Since kmemleak_free_part() is tolerant to unknown objects (not tracked by
> kmemleak), it is fine to call it from free_init_pages() even if not all
> address ranges passed to this function are known to kmemleak.
> 
> Signed-off-by: Qian Cai <cai@lca.pw>

Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>


* [tip:x86/urgent] x86/mm: Fix a crash with kmemleak_scan()
From: tip-bot for Qian Cai @ 2019-04-24  9:46 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: bp, mingo, catalin.marinas, brijesh.singh, mingo, peterz, cai,
	x86, linux-kernel, luto, dave.hansen, hpa, tglx

Commit-ID:  0d02113b31b2017dd349ec9df2314e798a90fa6e
Gitweb:     https://git.kernel.org/tip/0d02113b31b2017dd349ec9df2314e798a90fa6e
Author:     Qian Cai <cai@lca.pw>
AuthorDate: Tue, 23 Apr 2019 12:58:11 -0400
Committer:  Borislav Petkov <bp@suse.de>
CommitDate: Wed, 24 Apr 2019 11:32:34 +0200

x86/mm: Fix a crash with kmemleak_scan()

The first kmemleak_scan() call after boot would trigger the crash below
because this callpath:

  kernel_init
    free_initmem
      mem_encrypt_free_decrypted_mem
        free_init_pages

unmaps memory inside the .bss when DEBUG_PAGEALLOC=y.

kmemleak_init() will register the .data/.bss sections and then
kmemleak_scan() will scan those addresses and dereference them looking
for pointer references. If free_init_pages() frees and unmaps pages in
those sections, kmemleak_scan() will crash if referencing one of those
addresses:

  BUG: unable to handle kernel paging request at ffffffffbd402000
  CPU: 12 PID: 325 Comm: kmemleak Not tainted 5.1.0-rc4+ #4
  RIP: 0010:scan_block
  Call Trace:
   scan_gray_list
   kmemleak_scan
   kmemleak_scan_thread
   kthread
   ret_from_fork

Since kmemleak_free_part() is tolerant to unknown objects (not tracked
by kmemleak), it is fine to call it from free_init_pages() even if not
all address ranges passed to this function are known to kmemleak.

 [ bp: Massage. ]

Fixes: b3f0907c71e0 ("x86/mm: Add .bss..decrypted section to hold shared variables")
Signed-off-by: Qian Cai <cai@lca.pw>
Signed-off-by: Borislav Petkov <bp@suse.de>
Reviewed-by: Catalin Marinas <catalin.marinas@arm.com>
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Brijesh Singh <brijesh.singh@amd.com>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: x86-ml <x86@kernel.org>
Link: https://lkml.kernel.org/r/20190423165811.36699-1-cai@lca.pw
---
 arch/x86/mm/init.c | 6 ++++++
 1 file changed, 6 insertions(+)

diff --git a/arch/x86/mm/init.c b/arch/x86/mm/init.c
index f905a2371080..8dacdb96899e 100644
--- a/arch/x86/mm/init.c
+++ b/arch/x86/mm/init.c
@@ -5,6 +5,7 @@
 #include <linux/memblock.h>
 #include <linux/swapfile.h>
 #include <linux/swapops.h>
+#include <linux/kmemleak.h>
 
 #include <asm/set_memory.h>
 #include <asm/e820/api.h>
@@ -766,6 +767,11 @@ void free_init_pages(const char *what, unsigned long begin, unsigned long end)
 	if (debug_pagealloc_enabled()) {
 		pr_info("debug: unmapping init [mem %#010lx-%#010lx]\n",
 			begin, end - 1);
+		/*
+		 * Inform kmemleak about the hole in the memory since the
+		 * corresponding pages will be unmapped.
+		 */
+		kmemleak_free_part((void *)begin, end - begin);
 		set_memory_np(begin, (end - begin) >> PAGE_SHIFT);
 	} else {
 		/*

