linux-mm.kvack.org archive mirror
* [PATCH v2 0/2] mm/kasan: dump alloc/free stack for page allocator
@ 2019-09-09  8:24 walter-zh.wu
  2019-09-09 13:07 ` Vlastimil Babka
  2019-09-10  9:53 ` Walter Wu
  0 siblings, 2 replies; 7+ messages in thread
From: walter-zh.wu @ 2019-09-09  8:24 UTC (permalink / raw)
  To: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Matthias Brugger, Andrew Morton, Martin Schwidefsky, Will Deacon,
	Andrey Konovalov, Arnd Bergmann, Thomas Gleixner, Michal Hocko,
	Qian Cai
  Cc: linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
	linux-mediatek, wsd_upstream, Walter Wu

From: Walter Wu <walter-zh.wu@mediatek.com>

This patch makes the KASAN report include the alloc/free stacks for the
page allocator, to help programmers track down memory corruption caused
by pages.

By default, KASAN does not record alloc and free stacks for the page
allocator. This makes page use-after-free and double-free issues difficult
to fix.

This patchset records the last stack of each page. It is very helpful for
solving page use-after-free and double-free issues.

The KASAN report will show the last stack of the page, which may be:
a) If the page is in the in-use state, the allocation stack is printed.
   This is useful for fixing page out-of-bounds issues.

BUG: KASAN: slab-out-of-bounds in kmalloc_pagealloc_oob_right+0x88/0x90
Write of size 1 at addr ffffffc0d64ea00a by task cat/115
...
Allocation stack of page:
 set_page_stack.constprop.1+0x30/0xc8
 kasan_alloc_pages+0x18/0x38
 prep_new_page+0x5c/0x150
 get_page_from_freelist+0xb8c/0x17c8
 __alloc_pages_nodemask+0x1a0/0x11b0
 kmalloc_order+0x28/0x58
 kmalloc_order_trace+0x28/0xe0
 kmalloc_pagealloc_oob_right+0x2c/0x68

b) If the page is in the freed state, the free stack is printed.
   This is useful for fixing page use-after-free and double-free issues.

BUG: KASAN: use-after-free in kmalloc_pagealloc_uaf+0x70/0x80
Write of size 1 at addr ffffffc0d651c000 by task cat/115
...
Free stack of page:
 kasan_free_pages+0x68/0x70
 __free_pages_ok+0x3c0/0x1328
 __free_pages+0x50/0x78
 kfree+0x1c4/0x250
 kmalloc_pagealloc_uaf+0x38/0x80
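
For reference, a report like the one in b) can be triggered by a
page-allocator use-after-free along the lines of the kmalloc_pagealloc_uaf
test in lib/test_kasan.c. A rough sketch only (simplified, not the exact
in-tree test; the function name below is made up for illustration):

#include <linux/slab.h>

static noinline void page_uaf_example(void)
{
	/*
	 * Larger than KMALLOC_MAX_CACHE_SIZE, so kmalloc() is backed by
	 * the page allocator instead of a slab cache.
	 */
	size_t size = KMALLOC_MAX_CACHE_SIZE + 10;
	char *ptr = kmalloc(size, GFP_KERNEL);

	if (!ptr)
		return;

	kfree(ptr);	/* the backing pages go back to the page allocator */
	ptr[0] = 0;	/* use-after-free: KASAN prints the free stack above */
}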

This has been discussed previously; please refer to the link below.
https://bugzilla.kernel.org/show_bug.cgi?id=203967

Changes since v1:
- slim page_owner and move it into kasan
- enable the feature by default

Signed-off-by: Walter Wu <walter-zh.wu@mediatek.com>
---
 include/linux/kasan.h |  1 +
 lib/Kconfig.kasan     |  2 ++
 mm/kasan/common.c     | 32 ++++++++++++++++++++++++++++++++
 mm/kasan/kasan.h      |  5 +++++
 mm/kasan/report.c     | 27 +++++++++++++++++++++++++++
 5 files changed, 67 insertions(+)

diff --git a/include/linux/kasan.h b/include/linux/kasan.h
index cc8a03cc9674..97e1bcb20489 100644
--- a/include/linux/kasan.h
+++ b/include/linux/kasan.h
@@ -19,6 +19,7 @@ extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
 extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
 extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD];
 extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
+extern struct page_ext_operations page_stack_ops;
 
 int kasan_populate_early_shadow(const void *shadow_start,
 				const void *shadow_end);
diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
index 4fafba1a923b..b5a9410ba4e8 100644
--- a/lib/Kconfig.kasan
+++ b/lib/Kconfig.kasan
@@ -41,6 +41,7 @@ config KASAN_GENERIC
 	select SLUB_DEBUG if SLUB
 	select CONSTRUCTORS
 	select STACKDEPOT
+	select PAGE_EXTENSION
 	help
 	  Enables generic KASAN mode.
 	  Supported in both GCC and Clang. With GCC it requires version 4.9.2
@@ -63,6 +64,7 @@ config KASAN_SW_TAGS
 	select SLUB_DEBUG if SLUB
 	select CONSTRUCTORS
 	select STACKDEPOT
+	select PAGE_EXTENSION
 	help
 	  Enables software tag-based KASAN mode.
 	  This mode requires Top Byte Ignore support by the CPU and therefore
diff --git a/mm/kasan/common.c b/mm/kasan/common.c
index 2277b82902d8..c349143d2587 100644
--- a/mm/kasan/common.c
+++ b/mm/kasan/common.c
@@ -211,10 +211,38 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
 	kasan_unpoison_shadow(sp, size);
 }
 
+static bool need_page_stack(void)
+{
+	return true;
+}
+
+struct page_ext_operations page_stack_ops = {
+	.size = sizeof(depot_stack_handle_t),
+	.need = need_page_stack,
+};
+
+static void set_page_stack(struct page *page, gfp_t gfp_mask)
+{
+	struct page_ext *page_ext = lookup_page_ext(page);
+	depot_stack_handle_t handle;
+	depot_stack_handle_t *page_stack;
+
+	if (unlikely(!page_ext))
+		return;
+
+	handle = save_stack(gfp_mask);
+
+	page_stack = get_page_stack(page_ext);
+	*page_stack = handle;
+}
+
 void kasan_alloc_pages(struct page *page, unsigned int order)
 {
 	u8 tag;
 	unsigned long i;
+	gfp_t gfp_flags = GFP_KERNEL;
+
+	set_page_stack(page, gfp_flags);
 
 	if (unlikely(PageHighMem(page)))
 		return;
@@ -227,6 +255,10 @@ void kasan_alloc_pages(struct page *page, unsigned int order)
 
 void kasan_free_pages(struct page *page, unsigned int order)
 {
+	gfp_t gfp_flags = GFP_KERNEL;
+
+	set_page_stack(page, gfp_flags);
+
 	if (likely(!PageHighMem(page)))
 		kasan_poison_shadow(page_address(page),
 				PAGE_SIZE << order,
diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
index 014f19e76247..95b3b510d04f 100644
--- a/mm/kasan/kasan.h
+++ b/mm/kasan/kasan.h
@@ -126,6 +126,11 @@ static inline bool addr_has_shadow(const void *addr)
 	return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
 }
 
+static inline depot_stack_handle_t *get_page_stack(struct page_ext *page_ext)
+{
+	return (void *)page_ext + page_stack_ops.offset;
+}
+
 void kasan_poison_shadow(const void *address, size_t size, u8 value);
 
 /**
diff --git a/mm/kasan/report.c b/mm/kasan/report.c
index 0e5f965f1882..2e26bc192114 100644
--- a/mm/kasan/report.c
+++ b/mm/kasan/report.c
@@ -344,6 +344,32 @@ static void print_address_stack_frame(const void *addr)
 	print_decoded_frame_descr(frame_descr);
 }
 
+static void dump_page_stack(struct page *page)
+{
+	struct page_ext *page_ext = lookup_page_ext(page);
+	depot_stack_handle_t handle;
+	unsigned long *entries;
+	unsigned int nr_entries;
+	depot_stack_handle_t *page_stack;
+
+	if (unlikely(!page_ext))
+		return;
+
+	page_stack = get_page_stack(page_ext);
+
+	handle = READ_ONCE(*page_stack);
+	if (!handle)
+		return;
+
+	if ((unsigned long)page->flags & PAGE_FLAGS_CHECK_AT_PREP)
+		pr_info("Allocation stack of page:\n");
+	else
+		pr_info("Free stack of page:\n");
+
+	nr_entries = stack_depot_fetch(handle, &entries);
+	stack_trace_print(entries, nr_entries, 0);
+}
+
 static void print_address_description(void *addr)
 {
 	struct page *page = addr_to_page(addr);
@@ -366,6 +392,7 @@ static void print_address_description(void *addr)
 	if (page) {
 		pr_err("The buggy address belongs to the page:\n");
 		dump_page(page, "kasan: bad access detected");
+		dump_page_stack(page);
 	}
 
 	print_address_stack_frame(addr);
-- 
2.18.0




* Re: [PATCH v2 0/2] mm/kasan: dump alloc/free stack for page allocator
  2019-09-09  8:24 [PATCH v2 0/2] mm/kasan: dump alloc/free stack for page allocator walter-zh.wu
@ 2019-09-09 13:07 ` Vlastimil Babka
  2019-09-10 10:50   ` Andrey Ryabinin
  2019-09-10  9:53 ` Walter Wu
  1 sibling, 1 reply; 7+ messages in thread
From: Vlastimil Babka @ 2019-09-09 13:07 UTC (permalink / raw)
  To: walter-zh.wu, Andrey Ryabinin, Alexander Potapenko,
	Dmitry Vyukov, Matthias Brugger, Andrew Morton,
	Martin Schwidefsky, Will Deacon, Andrey Konovalov, Arnd Bergmann,
	Thomas Gleixner, Michal Hocko, Qian Cai
  Cc: linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
	linux-mediatek, wsd_upstream

On 9/9/19 10:24 AM, walter-zh.wu@mediatek.com wrote:
> From: Walter Wu <walter-zh.wu@mediatek.com>
> 
> This patch makes the KASAN report include the alloc/free stacks for the
> page allocator, to help programmers track down memory corruption caused
> by pages.
> 
> By default, KASAN does not record alloc and free stacks for the page
> allocator. This makes page use-after-free and double-free issues difficult
> to fix.
> 
> This patchset records the last stack of each page. It is very helpful for
> solving page use-after-free and double-free issues.
> 
> The KASAN report will show the last stack of the page, which may be:
> a) If the page is in the in-use state, the allocation stack is printed.
>    This is useful for fixing page out-of-bounds issues.

I still disagree with duplicating most of page_owner functionality for 
the sake of using a single stack handle for both alloc and free (while 
page_owner + debug_pagealloc with patches in mmotm uses two handles). It 
reduces the amount of potentially important debugging information, and I 
really doubt the u32-per-page savings are significant, given the rest of 
KASAN overhead.
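
For scale (roughly, assuming 4 KiB pages and the u32 depot_stack_handle_t
used in this patch):

  one extra u32 handle per page: 4 / 4096 ≈ 0.1% of RAM
  generic KASAN shadow alone:    1 / 8    = 12.5% of RAM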

> BUG: KASAN: slab-out-of-bounds in kmalloc_pagealloc_oob_right+0x88/0x90
> Write of size 1 at addr ffffffc0d64ea00a by task cat/115
> ...
> Allocation stack of page:
>   set_page_stack.constprop.1+0x30/0xc8
>   kasan_alloc_pages+0x18/0x38
>   prep_new_page+0x5c/0x150
>   get_page_from_freelist+0xb8c/0x17c8
>   __alloc_pages_nodemask+0x1a0/0x11b0
>   kmalloc_order+0x28/0x58
>   kmalloc_order_trace+0x28/0xe0
>   kmalloc_pagealloc_oob_right+0x2c/0x68
> 
> b) If the page is in the freed state, the free stack is printed.
>    This is useful for fixing page use-after-free and double-free issues.
> 
> BUG: KASAN: use-after-free in kmalloc_pagealloc_uaf+0x70/0x80
> Write of size 1 at addr ffffffc0d651c000 by task cat/115
> ...
> Free stack of page:
>   kasan_free_pages+0x68/0x70
>   __free_pages_ok+0x3c0/0x1328
>   __free_pages+0x50/0x78
>   kfree+0x1c4/0x250
>   kmalloc_pagealloc_uaf+0x38/0x80
> 
> This has been discussed previously; please refer to the link below.
> https://bugzilla.kernel.org/show_bug.cgi?id=203967

That's not a discussion, but a single comment from Dmitry, which btw 
contains "provide alloc *and* free stacks for it" ("it" refers to page, 
emphasis mine). It would be nice if he or other KASAN guys could clarify.

> Changes since v1:
> - slim page_owner and move it into kasan
> - enable the feature by default
> 
> Signed-off-by: Walter Wu <walter-zh.wu@mediatek.com>
> ---
>   include/linux/kasan.h |  1 +
>   lib/Kconfig.kasan     |  2 ++
>   mm/kasan/common.c     | 32 ++++++++++++++++++++++++++++++++
>   mm/kasan/kasan.h      |  5 +++++
>   mm/kasan/report.c     | 27 +++++++++++++++++++++++++++
>   5 files changed, 67 insertions(+)



* Re: [PATCH v2 0/2] mm/kasan: dump alloc/free stack for page allocator
  2019-09-09  8:24 [PATCH v2 0/2] mm/kasan: dump alloc/free stack for page allocator walter-zh.wu
  2019-09-09 13:07 ` Vlastimil Babka
@ 2019-09-10  9:53 ` Walter Wu
  1 sibling, 0 replies; 7+ messages in thread
From: Walter Wu @ 2019-09-10  9:53 UTC (permalink / raw)
  To: Andrey Ryabinin, Dmitry Vyukov, Andrey Konovalov, Arnd Bergmann,
	Qian Cai
  Cc: Alexander Potapenko, Matthias Brugger, Andrew Morton,
	Martin Schwidefsky, Will Deacon, Thomas Gleixner, Michal Hocko,
	linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
	linux-mediatek, wsd_upstream

On Mon, 2019-09-09 at 16:24 +0800, walter-zh.wu@mediatek.com wrote:
> From: Walter Wu <walter-zh.wu@mediatek.com>
> 
> This patch makes the KASAN report include the alloc/free stacks for the
> page allocator, to help programmers track down memory corruption caused
> by pages.
> 
> By default, KASAN does not record alloc and free stacks for the page
> allocator. This makes page use-after-free and double-free issues difficult
> to fix.
> 
> This patchset records the last stack of each page. It is very helpful for
> solving page use-after-free and double-free issues.
> 
> The KASAN report will show the last stack of the page, which may be:
> a) If the page is in the in-use state, the allocation stack is printed.
>    This is useful for fixing page out-of-bounds issues.
> 
> BUG: KASAN: slab-out-of-bounds in kmalloc_pagealloc_oob_right+0x88/0x90
> Write of size 1 at addr ffffffc0d64ea00a by task cat/115
> ...
> Allocation stack of page:
>  set_page_stack.constprop.1+0x30/0xc8
>  kasan_alloc_pages+0x18/0x38
>  prep_new_page+0x5c/0x150
>  get_page_from_freelist+0xb8c/0x17c8
>  __alloc_pages_nodemask+0x1a0/0x11b0
>  kmalloc_order+0x28/0x58
>  kmalloc_order_trace+0x28/0xe0
>  kmalloc_pagealloc_oob_right+0x2c/0x68
> 
> b) If the page is in the freed state, the free stack is printed.
>    This is useful for fixing page use-after-free and double-free issues.
> 
> BUG: KASAN: use-after-free in kmalloc_pagealloc_uaf+0x70/0x80
> Write of size 1 at addr ffffffc0d651c000 by task cat/115
> ...
> Free stack of page:
>  kasan_free_pages+0x68/0x70
>  __free_pages_ok+0x3c0/0x1328
>  __free_pages+0x50/0x78
>  kfree+0x1c4/0x250
>  kmalloc_pagealloc_uaf+0x38/0x80
> 
> This has been discussed previously; please refer to the link below.
> https://bugzilla.kernel.org/show_bug.cgi?id=203967
> 
> Changes since v1:
> - slim page_owner and move it into kasan
> - enable the feature by default
> 
> Signed-off-by: Walter Wu <walter-zh.wu@mediatek.com>
> ---
>  include/linux/kasan.h |  1 +
>  lib/Kconfig.kasan     |  2 ++
>  mm/kasan/common.c     | 32 ++++++++++++++++++++++++++++++++
>  mm/kasan/kasan.h      |  5 +++++
>  mm/kasan/report.c     | 27 +++++++++++++++++++++++++++
>  5 files changed, 67 insertions(+)
> 
> diff --git a/include/linux/kasan.h b/include/linux/kasan.h
> index cc8a03cc9674..97e1bcb20489 100644
> --- a/include/linux/kasan.h
> +++ b/include/linux/kasan.h
> @@ -19,6 +19,7 @@ extern pte_t kasan_early_shadow_pte[PTRS_PER_PTE];
>  extern pmd_t kasan_early_shadow_pmd[PTRS_PER_PMD];
>  extern pud_t kasan_early_shadow_pud[PTRS_PER_PUD];
>  extern p4d_t kasan_early_shadow_p4d[MAX_PTRS_PER_P4D];
> +extern struct page_ext_operations page_stack_ops;
>  
>  int kasan_populate_early_shadow(const void *shadow_start,
>  				const void *shadow_end);
> diff --git a/lib/Kconfig.kasan b/lib/Kconfig.kasan
> index 4fafba1a923b..b5a9410ba4e8 100644
> --- a/lib/Kconfig.kasan
> +++ b/lib/Kconfig.kasan
> @@ -41,6 +41,7 @@ config KASAN_GENERIC
>  	select SLUB_DEBUG if SLUB
>  	select CONSTRUCTORS
>  	select STACKDEPOT
> +	select PAGE_EXTENSION
>  	help
>  	  Enables generic KASAN mode.
>  	  Supported in both GCC and Clang. With GCC it requires version 4.9.2
> @@ -63,6 +64,7 @@ config KASAN_SW_TAGS
>  	select SLUB_DEBUG if SLUB
>  	select CONSTRUCTORS
>  	select STACKDEPOT
> +	select PAGE_EXTENSION
>  	help
>  	  Enables software tag-based KASAN mode.
>  	  This mode requires Top Byte Ignore support by the CPU and therefore
> diff --git a/mm/kasan/common.c b/mm/kasan/common.c
> index 2277b82902d8..c349143d2587 100644
> --- a/mm/kasan/common.c
> +++ b/mm/kasan/common.c
> @@ -211,10 +211,38 @@ void kasan_unpoison_stack_above_sp_to(const void *watermark)
>  	kasan_unpoison_shadow(sp, size);
>  }
>  
> +static bool need_page_stack(void)
> +{
> +	return true;
> +}
> +
> +struct page_ext_operations page_stack_ops = {
> +	.size = sizeof(depot_stack_handle_t),
> +	.need = need_page_stack,
> +};
> +
> +static void set_page_stack(struct page *page, gfp_t gfp_mask)
> +{
> +	struct page_ext *page_ext = lookup_page_ext(page);
> +	depot_stack_handle_t handle;
> +	depot_stack_handle_t *page_stack;
> +
> +	if (unlikely(!page_ext))
> +		return;
> +
> +	handle = save_stack(gfp_mask);
> +
> +	page_stack = get_page_stack(page_ext);
> +	*page_stack = handle;
> +}
> +
>  void kasan_alloc_pages(struct page *page, unsigned int order)
>  {
>  	u8 tag;
>  	unsigned long i;
> +	gfp_t gfp_flags = GFP_KERNEL;
> +
> +	set_page_stack(page, gfp_flags);
>  
>  	if (unlikely(PageHighMem(page)))
>  		return;
> @@ -227,6 +255,10 @@ void kasan_alloc_pages(struct page *page, unsigned int order)
>  
>  void kasan_free_pages(struct page *page, unsigned int order)
>  {
> +	gfp_t gfp_flags = GFP_KERNEL;
> +
> +	set_page_stack(page, gfp_flags);
> +
>  	if (likely(!PageHighMem(page)))
>  		kasan_poison_shadow(page_address(page),
>  				PAGE_SIZE << order,
> diff --git a/mm/kasan/kasan.h b/mm/kasan/kasan.h
> index 014f19e76247..95b3b510d04f 100644
> --- a/mm/kasan/kasan.h
> +++ b/mm/kasan/kasan.h
> @@ -126,6 +126,11 @@ static inline bool addr_has_shadow(const void *addr)
>  	return (addr >= kasan_shadow_to_mem((void *)KASAN_SHADOW_START));
>  }
>  
> +static inline depot_stack_handle_t *get_page_stack(struct page_ext *page_ext)
> +{
> +	return (void *)page_ext + page_stack_ops.offset;
> +}
> +
>  void kasan_poison_shadow(const void *address, size_t size, u8 value);
>  
>  /**
> diff --git a/mm/kasan/report.c b/mm/kasan/report.c
> index 0e5f965f1882..2e26bc192114 100644
> --- a/mm/kasan/report.c
> +++ b/mm/kasan/report.c
> @@ -344,6 +344,32 @@ static void print_address_stack_frame(const void *addr)
>  	print_decoded_frame_descr(frame_descr);
>  }
>  
> +static void dump_page_stack(struct page *page)
> +{
> +	struct page_ext *page_ext = lookup_page_ext(page);
> +	depot_stack_handle_t handle;
> +	unsigned long *entries;
> +	unsigned int nr_entries;
> +	depot_stack_handle_t *page_stack;
> +
> +	if (unlikely(!page_ext))
> +		return;
> +
> +	page_stack = get_page_stack(page_ext);
> +
> +	handle = READ_ONCE(*page_stack);
> +	if (!handle)
> +		return;
> +
> +	if ((unsigned long)page->flags & PAGE_FLAGS_CHECK_AT_PREP)
> +		pr_info("Allocation stack of page:\n");
> +	else
> +		pr_info("Free stack of page:\n");
> +
> +	nr_entries = stack_depot_fetch(handle, &entries);
> +	stack_trace_print(entries, nr_entries, 0);
> +}
> +
>  static void print_address_description(void *addr)
>  {
>  	struct page *page = addr_to_page(addr);
> @@ -366,6 +392,7 @@ static void print_address_description(void *addr)
>  	if (page) {
>  		pr_err("The buggy address belongs to the page:\n");
>  		dump_page(page, "kasan: bad access detected");
> +		dump_page_stack(page);
>  	}
>  
>  	print_address_stack_frame(addr);

Hi All,

We have implemented another version, which is different from v1. We hope
you can give us ideas to make the KASAN report better. If possible, we
would like to use less memory while still showing enough corruption
information to help programmers fix the memory corruption.

Thanks.
Walter




* Re: [PATCH v2 0/2] mm/kasan: dump alloc/free stack for page allocator
  2019-09-09 13:07 ` Vlastimil Babka
@ 2019-09-10 10:50   ` Andrey Ryabinin
  2019-09-10 11:53     ` Vlastimil Babka
  2019-09-10 12:46     ` Walter Wu
  0 siblings, 2 replies; 7+ messages in thread
From: Andrey Ryabinin @ 2019-09-10 10:50 UTC (permalink / raw)
  To: Vlastimil Babka, walter-zh.wu, Alexander Potapenko,
	Dmitry Vyukov, Matthias Brugger, Andrew Morton,
	Martin Schwidefsky, Will Deacon, Andrey Konovalov, Arnd Bergmann,
	Thomas Gleixner, Michal Hocko, Qian Cai
  Cc: linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
	linux-mediatek, wsd_upstream



On 9/9/19 4:07 PM, Vlastimil Babka wrote:
> On 9/9/19 10:24 AM, walter-zh.wu@mediatek.com wrote:
>> From: Walter Wu <walter-zh.wu@mediatek.com>
>>
>> This patch makes the KASAN report include the alloc/free stacks for the
>> page allocator, to help programmers track down memory corruption caused
>> by pages.
>>
>> By default, KASAN does not record alloc and free stacks for the page
>> allocator. This makes page use-after-free and double-free issues difficult
>> to fix.
>>
>> This patchset records the last stack of each page. It is very helpful for
>> solving page use-after-free and double-free issues.
>>
>> The KASAN report will show the last stack of the page, which may be:
>> a) If the page is in the in-use state, the allocation stack is printed.
>>    This is useful for fixing page out-of-bounds issues.
> 
> I still disagree with duplicating most of page_owner functionality for the sake of using a single stack handle for both alloc and free (while page_owner + debug_pagealloc with patches in mmotm uses two handles). It reduces the amount of potentially important debugging information, and I really doubt the u32-per-page savings are significant, given the rest of KASAN overhead.
> 
>> BUG: KASAN: slab-out-of-bounds in kmalloc_pagealloc_oob_right+0x88/0x90
>> Write of size 1 at addr ffffffc0d64ea00a by task cat/115
>> ...
>> Allocation stack of page:
>>   set_page_stack.constprop.1+0x30/0xc8
>>   kasan_alloc_pages+0x18/0x38
>>   prep_new_page+0x5c/0x150
>>   get_page_from_freelist+0xb8c/0x17c8
>>   __alloc_pages_nodemask+0x1a0/0x11b0
>>   kmalloc_order+0x28/0x58
>>   kmalloc_order_trace+0x28/0xe0
>>   kmalloc_pagealloc_oob_right+0x2c/0x68
>>
>> b) If the page is in the freed state, the free stack is printed.
>>    This is useful for fixing page use-after-free and double-free issues.
>>
>> BUG: KASAN: use-after-free in kmalloc_pagealloc_uaf+0x70/0x80
>> Write of size 1 at addr ffffffc0d651c000 by task cat/115
>> ...
>> Free stack of page:
>>   kasan_free_pages+0x68/0x70
>>   __free_pages_ok+0x3c0/0x1328
>>   __free_pages+0x50/0x78
>>   kfree+0x1c4/0x250
>>   kmalloc_pagealloc_uaf+0x38/0x80
>>
>> This has been discussed previously; please refer to the link below.
>> https://bugzilla.kernel.org/show_bug.cgi?id=203967
> 
> That's not a discussion, but a single comment from Dmitry, which btw contains "provide alloc *and* free stacks for it" ("it" refers to page, emphasis mine). It would be nice if he or other KASAN guys could clarify.
> 

For slab objects we memorize both alloc and free stacks. You'll never know in advance what information will be useful
to fix an issue, so it is usually better to provide more information. I don't think we should do anything different for pages.

Given that we already have page_owner responsible for providing alloc/free stacks for pages, all that we should do in KASAN is to
enable the feature by default. Free stack saving should be decoupled from debug_pagealloc into a separate option so that it can be
enabled by KASAN and/or debug_pagealloc.
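
One possible shape for such a decoupled option, purely as an illustration
(the symbol name PAGE_OWNER_FREE_STACK below is hypothetical, not something
in this patch or in mainline):

config PAGE_OWNER_FREE_STACK
	def_bool KASAN || DEBUG_PAGEALLOC
	depends on PAGE_OWNER
	help
	  Also save a stack trace when a page is freed, so that page_owner
	  can report both the allocation and the free stack of a page.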

 





* Re: [PATCH v2 0/2] mm/kasan: dump alloc/free stack for page allocator
  2019-09-10 10:50   ` Andrey Ryabinin
@ 2019-09-10 11:53     ` Vlastimil Babka
  2019-09-10 12:45       ` Walter Wu
  2019-09-10 12:46     ` Walter Wu
  1 sibling, 1 reply; 7+ messages in thread
From: Vlastimil Babka @ 2019-09-10 11:53 UTC (permalink / raw)
  To: Andrey Ryabinin, walter-zh.wu, Alexander Potapenko,
	Dmitry Vyukov, Matthias Brugger, Andrew Morton,
	Martin Schwidefsky, Will Deacon, Andrey Konovalov, Arnd Bergmann,
	Thomas Gleixner, Michal Hocko, Qian Cai
  Cc: linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
	linux-mediatek, wsd_upstream

On 9/10/19 12:50 PM, Andrey Ryabinin wrote:
> 
> 
> For slab objects we memorize both alloc and free stacks. You'll never know in advance what information will be useful
> to fix an issue, so it is usually better to provide more information. I don't think we should do anything different for pages.

Exactly, thanks.

> Given that we already have page_owner responsible for providing alloc/free stacks for pages, all that we should do in KASAN is to
> enable the feature by default. Free stack saving should be decoupled from debug_pagealloc into a separate option so that it can be
> enabled by KASAN and/or debug_pagealloc.

Right. Walter, can you do it that way, or should I?

Thanks,
Vlastimil



* Re: [PATCH v2 0/2] mm/kasan: dump alloc/free stack for page allocator
  2019-09-10 11:53     ` Vlastimil Babka
@ 2019-09-10 12:45       ` Walter Wu
  0 siblings, 0 replies; 7+ messages in thread
From: Walter Wu @ 2019-09-10 12:45 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Andrey Ryabinin, Alexander Potapenko, Dmitry Vyukov,
	Matthias Brugger, Andrew Morton, Martin Schwidefsky, Will Deacon,
	Andrey Konovalov, Arnd Bergmann, Thomas Gleixner, Michal Hocko,
	Qian Cai, linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
	linux-mediatek, wsd_upstream

On Tue, 2019-09-10 at 13:53 +0200, Vlastimil Babka wrote:
> On 9/10/19 12:50 PM, Andrey Ryabinin wrote:
> > 
> > 
> > For slab objects we memorize both alloc and free stacks. You'll never know in advance what information will be useful
> > to fix an issue, so it is usually better to provide more information. I don't think we should do anything different for pages.
> 
> Exactly, thanks.
> 
> > Given that we already have page_owner responsible for providing alloc/free stacks for pages, all that we should do in KASAN is to
> > enable the feature by default. Free stack saving should be decoupled from debug_pagealloc into a separate option so that it can be
> > enabled by KASAN and/or debug_pagealloc.
> 
> Right. Walter, can you do it that way, or should I?
> 
> Thanks,
> Vlastimil

I will send a new patch v3.




* Re: [PATCH v2 0/2] mm/kasan: dump alloc/free stack for page allocator
  2019-09-10 10:50   ` Andrey Ryabinin
  2019-09-10 11:53     ` Vlastimil Babka
@ 2019-09-10 12:46     ` Walter Wu
  1 sibling, 0 replies; 7+ messages in thread
From: Walter Wu @ 2019-09-10 12:46 UTC (permalink / raw)
  To: Andrey Ryabinin
  Cc: Vlastimil Babka, Alexander Potapenko, Dmitry Vyukov,
	Matthias Brugger, Andrew Morton, Martin Schwidefsky, Will Deacon,
	Andrey Konovalov, Arnd Bergmann, Thomas Gleixner, Michal Hocko,
	Qian Cai, linux-kernel, kasan-dev, linux-mm, linux-arm-kernel,
	linux-mediatek, wsd_upstream

On Tue, 2019-09-10 at 13:50 +0300, Andrey Ryabinin wrote:
> 
> On 9/9/19 4:07 PM, Vlastimil Babka wrote:
> > On 9/9/19 10:24 AM, walter-zh.wu@mediatek.com wrote:
> >> From: Walter Wu <walter-zh.wu@mediatek.com>
> >>
> >> This patch makes the KASAN report include the alloc/free stacks for the
> >> page allocator, to help programmers track down memory corruption caused
> >> by pages.
> >>
> >> By default, KASAN does not record alloc and free stacks for the page
> >> allocator. This makes page use-after-free and double-free issues difficult
> >> to fix.
> >>
> >> This patchset records the last stack of each page. It is very helpful for
> >> solving page use-after-free and double-free issues.
> >>
> >> The KASAN report will show the last stack of the page, which may be:
> >> a) If the page is in the in-use state, the allocation stack is printed.
> >>    This is useful for fixing page out-of-bounds issues.
> > 
> > I still disagree with duplicating most of page_owner functionality for the sake of using a single stack handle for both alloc and free (while page_owner + debug_pagealloc with patches in mmotm uses two handles). It reduces the amount of potentially important debugging information, and I really doubt the u32-per-page savings are significant, given the rest of KASAN overhead.
> > 
> >> BUG: KASAN: slab-out-of-bounds in kmalloc_pagealloc_oob_right+0x88/0x90
> >> Write of size 1 at addr ffffffc0d64ea00a by task cat/115
> >> ...
> >> Allocation stack of page:
> >>   set_page_stack.constprop.1+0x30/0xc8
> >>   kasan_alloc_pages+0x18/0x38
> >>   prep_new_page+0x5c/0x150
> >>   get_page_from_freelist+0xb8c/0x17c8
> >>   __alloc_pages_nodemask+0x1a0/0x11b0
> >>   kmalloc_order+0x28/0x58
> >>   kmalloc_order_trace+0x28/0xe0
> >>   kmalloc_pagealloc_oob_right+0x2c/0x68
> >>
> >> b) If the page is in the freed state, the free stack is printed.
> >>    This is useful for fixing page use-after-free and double-free issues.
> >>
> >> BUG: KASAN: use-after-free in kmalloc_pagealloc_uaf+0x70/0x80
> >> Write of size 1 at addr ffffffc0d651c000 by task cat/115
> >> ...
> >> Free stack of page:
> >>   kasan_free_pages+0x68/0x70
> >>   __free_pages_ok+0x3c0/0x1328
> >>   __free_pages+0x50/0x78
> >>   kfree+0x1c4/0x250
> >>   kmalloc_pagealloc_uaf+0x38/0x80
> >>
> >> This has been discussed previously; please refer to the link below.
> >> https://bugzilla.kernel.org/show_bug.cgi?id=203967
> > 
> > That's not a discussion, but a single comment from Dmitry, which btw contains "provide alloc *and* free stacks for it" ("it" refers to page, emphasis mine). It would be nice if he or other KASAN guys could clarify.
> > 
> 
> For slab objects we memorize both alloc and free stacks. You'll never know in advance what information will be useful
> to fix an issue, so it is usually better to provide more information. I don't think we should do anything different for pages.
> 
> Given that we already have page_owner responsible for providing alloc/free stacks for pages, all that we should do in KASAN is to
> enable the feature by default. Free stack saving should be decoupled from debug_pagealloc into a separate option so that it can be
> enabled by KASAN and/or debug_pagealloc.

Thanks for your suggestion.
We will send patch v3 as described above.





