linux-mm.kvack.org archive mirror
* [RFC][PATCH v3 0/2] mm/page_poison.c: Allow for zero poisoning
@ 2016-02-24 23:35 Kees Cook
  2016-02-24 23:35 ` [RFC][PATCH v3 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option Kees Cook
                   ` (2 more replies)
  0 siblings, 3 replies; 8+ messages in thread
From: Kees Cook @ 2016-02-24 23:35 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Kees Cook, Andrew Morton, Kirill A. Shutemov, Vlastimil Babka,
	Michal Hocko, Mathias Krause, Dave Hansen, Jianyu Zhan, linux-mm,
	linux-kernel

This is my attempt to rebase this series:

[PATCHv2, 2/2] mm/page_poisoning.c: Allow for zero poisoning
[PATCHv2, 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option

onto the poisoning series in linux-next. It replaces the following mmotm patches:

mm-page_poisoningc-allow-for-zero-poisoning.patch
mm-page_poisoningc-allow-for-zero-poisoning-checkpatch-fixes.patch
mm-page_poisonc-enable-page_poisoning-as-a-separate-option.patch
mm-page_poisonc-enable-page_poisoning-as-a-separate-option-fix.patch

These patches work for me (linux-next does not) when using
CONFIG_PAGE_POISONING_ZERO=y
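
A minimal config fragment for testing (a sketch: PAGE_EXTENSION is
selected automatically by PAGE_POISONING, and HIBERNATION=y would force
PAGE_POISONING_NO_SANITY on; see the Kconfig hunks in the patches below):

  CONFIG_PAGE_POISONING=y
  CONFIG_PAGE_POISONING_ZERO=y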

I've marked this RFC because I did the rebase -- bugs should be blamed
on me. :)

-Kees

^ permalink raw reply	[flat|nested] 8+ messages in thread

* [RFC][PATCH v3 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option
  2016-02-24 23:35 [RFC][PATCH v3 0/2] mm/page_poison.c: Allow for zero poisoning Kees Cook
@ 2016-02-24 23:35 ` Kees Cook
  2016-02-26  2:53   ` Jianyu Zhan
  2016-02-24 23:35 ` [RFC][PATCH v3 2/2] mm/page_poison.c: Allow for zero poisoning Kees Cook
  2016-02-26  2:04 ` [RFC][PATCH v3 0/2] " Laura Abbott
  2 siblings, 1 reply; 8+ messages in thread
From: Kees Cook @ 2016-02-24 23:35 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Kees Cook, Andrew Morton, Kirill A. Shutemov, Vlastimil Babka,
	Michal Hocko, Mathias Krause, Dave Hansen, Jianyu Zhan, linux-mm,
	linux-kernel

From: Laura Abbott <labbott@fedoraproject.org>

Page poisoning is currently set up as a feature only when architectures
don't have architecture debug page_alloc to allow unmapping of pages.
It has uses apart from that though. Filling the pages with a poison
pattern on free provides an increase in security as it helps to limit
the risk of information leaks. Allow page poisoning to be enabled as a
separate option, independent of any other debug feature. Because of how
hibernation is implemented, the sanity checks on alloc cannot occur if
hibernation is enabled, so PAGE_POISONING_NO_SANITY is selected in that
case; it can also be set manually on !HIBERNATION builds.
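
In Kconfig terms, a sketch of what the new select does:

  PAGE_POISONING=y + HIBERNATION=y  =>  PAGE_POISONING_NO_SANITY forced on
  PAGE_POISONING=y + !HIBERNATION   =>  PAGE_POISONING_NO_SANITY selectable by the user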

Credit to the Grsecurity/PaX team for inspiring this work.

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
[rebased by Kees Cook <keescook@chromium.org>]
Tested-by: Kees Cook <keescook@chromium.org>
---
 include/linux/mm.h |  7 +++----
 mm/Kconfig.debug   | 22 +++++++++++++++++++++-
 mm/Makefile        |  4 ----
 mm/page_alloc.c    |  2 ++
 mm/page_poison.c   | 29 +++++++++++++++++++++++++----
 5 files changed, 51 insertions(+), 13 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index ea5de9d3e00b..6cdd8d91e5ef 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2199,13 +2199,12 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 
 
 #ifdef CONFIG_PAGE_POISONING
-extern void poison_pages(struct page *page, int n);
-extern void unpoison_pages(struct page *page, int n);
 extern bool page_poisoning_enabled(void);
+extern void kernel_poison_pages(struct page *page, int numpages, int enable);
 #else
-static inline void poison_pages(struct page *page, int n) { }
-static inline void unpoison_pages(struct page *page, int n) { }
 static inline bool page_poisoning_enabled(void) { return false; }
+static inline void kernel_poison_pages(struct page *page, int numpages,
+				       int enable) { }
 #endif
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index a0c136af9c91..ddf71d7cb6ba 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -41,4 +41,24 @@ config DEBUG_PAGEALLOC_ENABLE_DEFAULT
 	  can be overridden by debug_pagealloc=off|on.
 
 config PAGE_POISONING
-	bool
+	bool "Poison pages after freeing"
+	select PAGE_EXTENSION
+	select PAGE_POISONING_NO_SANITY if HIBERNATION
+	---help---
+	  Fill the pages with poison patterns after free_pages() and verify
+	  the patterns before alloc_pages. The filling of the memory helps
+	  reduce the risk of information leaks from freed data. This does
+	  have a potential performance impact.
+
+	  If unsure, say N
+
+config PAGE_POISONING_NO_SANITY
+	depends on PAGE_POISONING
+	bool "Only poison, don't sanity check"
+	---help---
+	   Skip the sanity checking on alloc, only fill the pages with
+	   poison on free. This reduces some of the overhead of the
+	   poisoning feature.
+
+	   If you are only interested in sanitization, say Y. Otherwise
+	   say N.
diff --git a/mm/Makefile b/mm/Makefile
index fb1a7948c107..ec59c071b4f9 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -13,7 +13,6 @@ KCOV_INSTRUMENT_slob.o := n
 KCOV_INSTRUMENT_slab.o := n
 KCOV_INSTRUMENT_slub.o := n
 KCOV_INSTRUMENT_page_alloc.o := n
-KCOV_INSTRUMENT_debug-pagealloc.o := n
 KCOV_INSTRUMENT_kmemleak.o := n
 KCOV_INSTRUMENT_kmemcheck.o := n
 KCOV_INSTRUMENT_memcontrol.o := n
@@ -63,9 +62,6 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
 obj-$(CONFIG_KSM) += ksm.o
-ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
-	obj-$(CONFIG_DEBUG_PAGEALLOC) += debug-pagealloc.o
-endif
 obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index a34c359d8e81..0bdb3cfd83b5 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1026,6 +1026,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 					   PAGE_SIZE << order);
 	}
 	arch_free_page(page, order);
+	kernel_poison_pages(page, 1 << order, 0);
 	kernel_map_pages(page, 1 << order, 0);
 
 	return true;
@@ -1497,6 +1498,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kernel_poison_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
diff --git a/mm/page_poison.c b/mm/page_poison.c
index 92ead727b8f0..884a6f854432 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -80,7 +80,7 @@ static void poison_page(struct page *page)
 	kunmap_atomic(addr);
 }
 
-void poison_pages(struct page *page, int n)
+static void poison_pages(struct page *page, int n)
 {
 	int i;
 
@@ -101,6 +101,9 @@ static void check_poison_mem(unsigned char *mem, size_t bytes)
 	unsigned char *start;
 	unsigned char *end;
 
+	if (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY))
+		return;
+
 	start = memchr_inv(mem, PAGE_POISON, bytes);
 	if (!start)
 		return;
@@ -113,9 +116,9 @@ static void check_poison_mem(unsigned char *mem, size_t bytes)
 	if (!__ratelimit(&ratelimit))
 		return;
 	else if (start == end && single_bit_flip(*start, PAGE_POISON))
-		printk(KERN_ERR "pagealloc: single bit error\n");
+		pr_err("pagealloc: single bit error\n");
 	else
-		printk(KERN_ERR "pagealloc: memory corruption\n");
+		pr_err("pagealloc: memory corruption\n");
 
 	print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
 			end - start + 1, 1);
@@ -135,10 +138,28 @@ static void unpoison_page(struct page *page)
 	kunmap_atomic(addr);
 }
 
-void unpoison_pages(struct page *page, int n)
+static void unpoison_pages(struct page *page, int n)
 {
 	int i;
 
 	for (i = 0; i < n; i++)
 		unpoison_page(page + i);
 }
+
+void kernel_poison_pages(struct page *page, int numpages, int enable)
+{
+	if (!page_poisoning_enabled())
+		return;
+
+	if (enable)
+		unpoison_pages(page, numpages);
+	else
+		poison_pages(page, numpages);
+}
+
+#ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
+void __kernel_map_pages(struct page *page, int numpages, int enable)
+{
+	/* This function does nothing, all work is done via poison pages */
+}
+#endif
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* [RFC][PATCH v3 2/2] mm/page_poison.c: Allow for zero poisoning
  2016-02-24 23:35 [RFC][PATCH v3 0/2] mm/page_poison.c: Allow for zero poisoning Kees Cook
  2016-02-24 23:35 ` [RFC][PATCH v3 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option Kees Cook
@ 2016-02-24 23:35 ` Kees Cook
  2016-02-26  2:04 ` [RFC][PATCH v3 0/2] " Laura Abbott
  2 siblings, 0 replies; 8+ messages in thread
From: Kees Cook @ 2016-02-24 23:35 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Kees Cook, Andrew Morton, Kirill A. Shutemov, Vlastimil Babka,
	Michal Hocko, Mathias Krause, Dave Hansen, Jianyu Zhan, linux-mm,
	linux-kernel

From: Laura Abbott <labbott@fedoraproject.org>

By default, page poisoning uses a poison value (0xaa) on free. If this
is changed to 0, the page is not only sanitized but zeroing on alloc
with __GFP_ZERO can be skipped as well. The tradeoff is that
corruption from the poisoning is harder to detect. This feature also
cannot be used with hibernation since pages are not guaranteed to be
zeroed after hibernation.
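
For example (a sketch; the page_poison= boot parameter itself comes
from the base poisoning series in linux-next), booting a kernel built
with CONFIG_PAGE_POISONING_ZERO=y and:

  page_poison=on

enables zero poisoning and, via the hibernate.c hook below, disables
hibernation.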

Credit to the Grsecurity/PaX team for inspiring this work.

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
[rebased by Kees Cook <keescook@chromium.org>]
Tested-by: Kees Cook <keescook@chromium.org>
---
 include/linux/mm.h       |  2 ++
 include/linux/poison.h   |  4 ++++
 kernel/power/hibernate.c | 17 +++++++++++++++++
 mm/Kconfig.debug         | 14 ++++++++++++++
 mm/page_alloc.c          | 11 ++++++++++-
 mm/page_ext.c            | 10 ++++++++--
 mm/page_poison.c         |  7 +++++--
 7 files changed, 60 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 6cdd8d91e5ef..c53e19fd5cfc 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2201,10 +2201,12 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 #ifdef CONFIG_PAGE_POISONING
 extern bool page_poisoning_enabled(void);
 extern void kernel_poison_pages(struct page *page, int numpages, int enable);
+extern bool page_is_poisoned(struct page *page);
 #else
 static inline bool page_poisoning_enabled(void) { return false; }
 static inline void kernel_poison_pages(struct page *page, int numpages,
 				       int enable) { }
+static inline bool page_is_poisoned(struct page *page) { return false; }
 #endif
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/poison.h b/include/linux/poison.h
index 4a27153574e2..51334edec506 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -30,7 +30,11 @@
 #define TIMER_ENTRY_STATIC	((void *) 0x300 + POISON_POINTER_DELTA)
 
 /********** mm/debug-pagealloc.c **********/
+#ifdef CONFIG_PAGE_POISONING_ZERO
+#define PAGE_POISON 0x00
+#else
 #define PAGE_POISON 0xaa
+#endif
 
 /********** mm/page_alloc.c ************/
 
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index b7342a24f559..aa0f26b58426 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -1158,6 +1158,22 @@ static int __init kaslr_nohibernate_setup(char *str)
 	return nohibernate_setup(str);
 }
 
+static int __init page_poison_nohibernate_setup(char *str)
+{
+#ifdef CONFIG_PAGE_POISONING_ZERO
+	/*
+	 * The zeroing option for page poison skips the checks on alloc.
+	 * Since hibernation doesn't save free pages, there's no way to
+	 * guarantee the pages will still be zeroed.
+	 */
+	if (!strcmp(str, "on")) {
+		pr_info("Disabling hibernation due to page poisoning\n");
+		return nohibernate_setup(str);
+	}
+#endif
+	return 1;
+}
+
 __setup("noresume", noresume_setup);
 __setup("resume_offset=", resume_offset_setup);
 __setup("resume=", resume_setup);
@@ -1166,3 +1182,4 @@ __setup("resumewait", resumewait_setup);
 __setup("resumedelay=", resumedelay_setup);
 __setup("nohibernate", nohibernate_setup);
 __setup("kaslr", kaslr_nohibernate_setup);
+__setup("page_poison=", page_poison_nohibernate_setup);
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index ddf71d7cb6ba..802c0eb589ab 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -62,3 +62,17 @@ config PAGE_POISONING_NO_SANITY
 
 	   If you are only interested in sanitization, say Y. Otherwise
 	   say N.
+
+config PAGE_POISONING_ZERO
+	bool "Use zero for poisoning instead of alternating bits"
+	depends on PAGE_POISONING
+	---help---
+	   Instead of using the existing poison value (0xAA), fill the pages
+	   with zeros. This makes it harder to detect when errors are
+	   occurring due to sanitization but the zeroing at free means that
+	   it is no longer necessary to write zeros when GFP_ZERO is used on
+	   allocation.
+
+	   Enabling page poisoning with this option will disable hibernation
+
+	   If unsure, say N
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 0bdb3cfd83b5..83de29d16b74 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1482,15 +1482,24 @@ static inline int check_new_page(struct page *page)
 	return 0;
 }
 
+static inline bool free_pages_prezeroed(bool poisoned)
+{
+	return IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
+		page_poisoning_enabled() && poisoned;
+}
+
 static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 								int alloc_flags)
 {
 	int i;
+	bool poisoned = true;
 
 	for (i = 0; i < (1 << order); i++) {
 		struct page *p = page + i;
 		if (unlikely(check_new_page(p)))
 			return 1;
+		if (poisoned)
+			poisoned &= page_is_poisoned(p);
 	}
 
 	set_page_private(page, 0);
@@ -1501,7 +1510,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 	kernel_poison_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 
-	if (gfp_flags & __GFP_ZERO)
+	if (!free_pages_prezeroed(poisoned) && (gfp_flags & __GFP_ZERO))
 		for (i = 0; i < (1 << order); i++)
 			clear_highpage(page + i);
 
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 292ca7b8debd..2d864e64f7fe 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -106,12 +106,15 @@ struct page_ext *lookup_page_ext(struct page *page)
 	struct page_ext *base;
 
 	base = NODE_DATA(page_to_nid(page))->node_page_ext;
-#ifdef CONFIG_DEBUG_VM
+#if defined(CONFIG_DEBUG_VM) || defined(CONFIG_PAGE_POISONING)
 	/*
 	 * The sanity checks the page allocator does upon freeing a
 	 * page can reach here before the page_ext arrays are
 	 * allocated when feeding a range of pages to the allocator
 	 * for the first time during bootup or memory hotplug.
+	 *
+	 * This check is also necessary for ensuring page poisoning
+	 * works as expected when enabled
 	 */
 	if (unlikely(!base))
 		return NULL;
@@ -180,12 +183,15 @@ struct page_ext *lookup_page_ext(struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
 	struct mem_section *section = __pfn_to_section(pfn);
-#ifdef CONFIG_DEBUG_VM
+#if defined(CONFIG_DEBUG_VM) || defined(CONFIG_PAGE_POISONING)
 	/*
 	 * The sanity checks the page allocator does upon freeing a
 	 * page can reach here before the page_ext arrays are
 	 * allocated when feeding a range of pages to the allocator
 	 * for the first time during bootup or memory hotplug.
+	 *
+	 * This check is also necessary for ensuring page poisoning
+	 * works as expected when enabled
 	 */
 	if (!section->page_ext)
 		return NULL;
diff --git a/mm/page_poison.c b/mm/page_poison.c
index 884a6f854432..f52701fe7b6d 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -63,11 +63,14 @@ static inline void clear_page_poison(struct page *page)
 	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
 }
 
-static inline bool page_poison(struct page *page)
+bool page_is_poisoned(struct page *page)
 {
 	struct page_ext *page_ext;
 
 	page_ext = lookup_page_ext(page);
+	if (!page_ext)
+		return false;
+
 	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
 }
 
@@ -129,7 +132,7 @@ static void unpoison_page(struct page *page)
 {
 	void *addr;
 
-	if (!page_poison(page))
+	if (!page_is_poisoned(page))
 		return;
 
 	addr = kmap_atomic(page);
-- 
2.6.3

^ permalink raw reply related	[flat|nested] 8+ messages in thread

* Re: [RFC][PATCH v3 0/2] mm/page_poison.c: Allow for zero poisoning
  2016-02-24 23:35 [RFC][PATCH v3 0/2] mm/page_poison.c: Allow for zero poisoning Kees Cook
  2016-02-24 23:35 ` [RFC][PATCH v3 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option Kees Cook
  2016-02-24 23:35 ` [RFC][PATCH v3 2/2] mm/page_poison.c: Allow for zero poisoning Kees Cook
@ 2016-02-26  2:04 ` Laura Abbott
  2 siblings, 0 replies; 8+ messages in thread
From: Laura Abbott @ 2016-02-26  2:04 UTC (permalink / raw)
  To: Kees Cook, Laura Abbott
  Cc: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	Mathias Krause, Dave Hansen, Jianyu Zhan, linux-mm, linux-kernel

On 02/24/2016 03:35 PM, Kees Cook wrote:
> This is my attempt to rebase this series:
>
> [PATCHv2, 2/2] mm/page_poisoning.c: Allow for zero poisoning
> [PATCHv2, 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option
>
> onto the poisoning series in linux-next. It replaces the following mmotm patches:
>
> mm-page_poisoningc-allow-for-zero-poisoning.patch
> mm-page_poisoningc-allow-for-zero-poisoning-checkpatch-fixes.patch
> mm-page_poisonc-enable-page_poisoning-as-a-separate-option.patch
> mm-page_poisonc-enable-page_poisoning-as-a-separate-option-fix.patch
>
> These patches work for me (linux-next does not) when using
> CONFIG_PAGE_POISONING_ZERO=y
>
> I've marked this RFC because I did the rebase -- bugs should be blamed
> on me. :)
>
> -Kees
>

The rebase looks fine to me. Were there any more comments on this?

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC][PATCH v3 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option
  2016-02-24 23:35 ` [RFC][PATCH v3 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option Kees Cook
@ 2016-02-26  2:53   ` Jianyu Zhan
  2016-02-26  4:45     ` Laura Abbott
  0 siblings, 1 reply; 8+ messages in thread
From: Jianyu Zhan @ 2016-02-26  2:53 UTC (permalink / raw)
  To: Kees Cook
  Cc: Laura Abbott, Andrew Morton, Kirill A. Shutemov, Vlastimil Babka,
	Michal Hocko, Mathias Krause, Dave Hansen, linux-mm, LKML

On Thu, Feb 25, 2016 at 7:35 AM, Kees Cook <keescook@chromium.org> wrote:
>  config PAGE_POISONING
> -       bool
> +       bool "Poison pages after freeing"
> +       select PAGE_EXTENSION
> +       select PAGE_POISONING_NO_SANITY if HIBERNATION
> +       ---help---
> +         Fill the pages with poison patterns after free_pages() and verify
> +         the patterns before alloc_pages. The filling of the memory helps
> +         reduce the risk of information leaks from freed data. This does
> +         have a potential performance impact.
> +
> +         If unsure, say N
> +

I would suggest adding some wording to the help text to clarify that
"poisoning" here is not the same as in "HWPoison".

The former is pattern filling, while the latter is nomenclature
borrowed from Intel for memory failure.

> +config PAGE_POISONING_NO_SANITY
> +       depends on PAGE_POISONING
> +       bool "Only poison, don't sanity check"
> +       ---help---
> +          Skip the sanity checking on alloc, only fill the pages with
> +          poison on free. This reduces some of the overhead of the
> +          poisoning feature.
> +
> +          If you are only interested in sanitization, say Y. Otherwise
> +          say N.
> diff --git a/mm/Makefile b/mm/Makefile
> index fb1a7948c107..ec59c071b4f9 100644
> --- a/mm/Makefile
> +++ b/mm/Makefile
> @@ -13,7 +13,6 @@ KCOV_INSTRUMENT_slob.o := n
>  KCOV_INSTRUMENT_slab.o := n
>  KCOV_INSTRUMENT_slub.o := n
>  KCOV_INSTRUMENT_page_alloc.o := n
> -KCOV_INSTRUMENT_debug-pagealloc.o := n
>  KCOV_INSTRUMENT_kmemleak.o := n
>  KCOV_INSTRUMENT_kmemcheck.o := n
>  KCOV_INSTRUMENT_memcontrol.o := n
> @@ -63,9 +62,6 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
>  obj-$(CONFIG_SLOB) += slob.o
>  obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
>  obj-$(CONFIG_KSM) += ksm.o
> -ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
> -       obj-$(CONFIG_DEBUG_PAGEALLOC) += debug-pagealloc.o
> -endif
>  obj-$(CONFIG_PAGE_POISONING) += page_poison.o
>  obj-$(CONFIG_SLAB) += slab.o
>  obj-$(CONFIG_SLUB) += slub.o
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index a34c359d8e81..0bdb3cfd83b5 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1026,6 +1026,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>                                            PAGE_SIZE << order);
>         }
>         arch_free_page(page, order);
> +       kernel_poison_pages(page, 1 << order, 0);
>         kernel_map_pages(page, 1 << order, 0);
>
>         return true;
> @@ -1497,6 +1498,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>
>         arch_alloc_page(page, order);
>         kernel_map_pages(page, 1 << order, 1);
> +       kernel_poison_pages(page, 1 << order, 1);
>         kasan_alloc_pages(page, order);
>
>         if (gfp_flags & __GFP_ZERO)
> diff --git a/mm/page_poison.c b/mm/page_poison.c
> index 92ead727b8f0..884a6f854432 100644
> --- a/mm/page_poison.c
> +++ b/mm/page_poison.c
> @@ -80,7 +80,7 @@ static void poison_page(struct page *page)
>         kunmap_atomic(addr);
>  }
>
> -void poison_pages(struct page *page, int n)
> +static void poison_pages(struct page *page, int n)
>  {
>         int i;
>
> @@ -101,6 +101,9 @@ static void check_poison_mem(unsigned char *mem, size_t bytes)
>         unsigned char *start;
>         unsigned char *end;
>
> +       if (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY))
> +               return;
> +
>         start = memchr_inv(mem, PAGE_POISON, bytes);
>         if (!start)
>                 return;
> @@ -113,9 +116,9 @@ static void check_poison_mem(unsigned char *mem, size_t bytes)
>         if (!__ratelimit(&ratelimit))
>                 return;
>         else if (start == end && single_bit_flip(*start, PAGE_POISON))
> -               printk(KERN_ERR "pagealloc: single bit error\n");
> +               pr_err("pagealloc: single bit error\n");
>         else
> -               printk(KERN_ERR "pagealloc: memory corruption\n");
> +               pr_err("pagealloc: memory corruption\n");
>
>         print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
>                         end - start + 1, 1);
> @@ -135,10 +138,28 @@ static void unpoison_page(struct page *page)
>         kunmap_atomic(addr);
>  }
>
> -void unpoison_pages(struct page *page, int n)
> +static void unpoison_pages(struct page *page, int n)
>  {
>         int i;
>
>         for (i = 0; i < n; i++)
>                 unpoison_page(page + i);
>  }
> +
> +void kernel_poison_pages(struct page *page, int numpages, int enable)
> +{
> +       if (!page_poisoning_enabled())
> +               return;
> +
> +       if (enable)
> +               unpoison_pages(page, numpages);
> +       else
> +               poison_pages(page, numpages);
> +}
> +
> +#ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
> +void __kernel_map_pages(struct page *page, int numpages, int enable)
> +{
> +       /* This function does nothing, all work is done via poison pages */
> +}
> +#endif

IMHO, kernel_map_pages was originally introduced for debugging page
allocation, and later, for arches without arch-specific support, a
software poisoning method was used.

So I think it is not appropriate to use two interfaces in the
alloc/free hooks.

kernel_poison_pages should really be an implementation detail, hidden
behind the kernel_map_pages interface.


Thanks,
Jianyu Zhan

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC][PATCH v3 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option
  2016-02-26  2:53   ` Jianyu Zhan
@ 2016-02-26  4:45     ` Laura Abbott
  2016-02-26  5:34       ` Jianyu Zhan
  0 siblings, 1 reply; 8+ messages in thread
From: Laura Abbott @ 2016-02-26  4:45 UTC (permalink / raw)
  To: Jianyu Zhan, Kees Cook
  Cc: Laura Abbott, Andrew Morton, Kirill A. Shutemov, Vlastimil Babka,
	Michal Hocko, Mathias Krause, Dave Hansen, linux-mm, LKML

On 02/25/2016 06:53 PM, Jianyu Zhan wrote:
> On Thu, Feb 25, 2016 at 7:35 AM, Kees Cook <keescook@chromium.org> wrote:
>>   config PAGE_POISONING
>> -       bool
>> +       bool "Poison pages after freeing"
>> +       select PAGE_EXTENSION
>> +       select PAGE_POISONING_NO_SANITY if HIBERNATION
>> +       ---help---
>> +         Fill the pages with poison patterns after free_pages() and verify
>> +         the patterns before alloc_pages. The filling of the memory helps
>> +         reduce the risk of information leaks from freed data. This does
>> +         have a potential performance impact.
>> +
>> +         If unsure, say N
>> +
>
> I would suggest that you add some wording in the help text to clarify
> that what "poisoning"
> means here is not the same as that in "HWPoison".
>
> The previous one is pattern padding, while the latter one is just
> nomenclature borrowed from
> Intel for memory failure.
>

Do you have some suggestion on wording here? I'm not sure what else to
say besides poison patterns to differentiate from hardware poison.
  
>> +config PAGE_POISONING_NO_SANITY
>> +       depends on PAGE_POISONING
>> +       bool "Only poison, don't sanity check"
>> +       ---help---
>> +          Skip the sanity checking on alloc, only fill the pages with
>> +          poison on free. This reduces some of the overhead of the
>> +          poisoning feature.
>> +
>> +          If you are only interested in sanitization, say Y. Otherwise
>> +          say N.
>> diff --git a/mm/Makefile b/mm/Makefile
>> index fb1a7948c107..ec59c071b4f9 100644
>> --- a/mm/Makefile
>> +++ b/mm/Makefile
>> @@ -13,7 +13,6 @@ KCOV_INSTRUMENT_slob.o := n
>>   KCOV_INSTRUMENT_slab.o := n
>>   KCOV_INSTRUMENT_slub.o := n
>>   KCOV_INSTRUMENT_page_alloc.o := n
>> -KCOV_INSTRUMENT_debug-pagealloc.o := n
>>   KCOV_INSTRUMENT_kmemleak.o := n
>>   KCOV_INSTRUMENT_kmemcheck.o := n
>>   KCOV_INSTRUMENT_memcontrol.o := n
>> @@ -63,9 +62,6 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
>>   obj-$(CONFIG_SLOB) += slob.o
>>   obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
>>   obj-$(CONFIG_KSM) += ksm.o
>> -ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
>> -       obj-$(CONFIG_DEBUG_PAGEALLOC) += debug-pagealloc.o
>> -endif
>>   obj-$(CONFIG_PAGE_POISONING) += page_poison.o
>>   obj-$(CONFIG_SLAB) += slab.o
>>   obj-$(CONFIG_SLUB) += slub.o
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index a34c359d8e81..0bdb3cfd83b5 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1026,6 +1026,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>>                                             PAGE_SIZE << order);
>>          }
>>          arch_free_page(page, order);
>> +       kernel_poison_pages(page, 1 << order, 0);
>>          kernel_map_pages(page, 1 << order, 0);
>>
>>          return true;
>> @@ -1497,6 +1498,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>>
>>          arch_alloc_page(page, order);
>>          kernel_map_pages(page, 1 << order, 1);
>> +       kernel_poison_pages(page, 1 << order, 1);
>>          kasan_alloc_pages(page, order);
>>
>>          if (gfp_flags & __GFP_ZERO)
>> diff --git a/mm/page_poison.c b/mm/page_poison.c
>> index 92ead727b8f0..884a6f854432 100644
>> --- a/mm/page_poison.c
>> +++ b/mm/page_poison.c
>> @@ -80,7 +80,7 @@ static void poison_page(struct page *page)
>>          kunmap_atomic(addr);
>>   }
>>
>> -void poison_pages(struct page *page, int n)
>> +static void poison_pages(struct page *page, int n)
>>   {
>>          int i;
>>
>> @@ -101,6 +101,9 @@ static void check_poison_mem(unsigned char *mem, size_t bytes)
>>          unsigned char *start;
>>          unsigned char *end;
>>
>> +       if (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY))
>> +               return;
>> +
>>          start = memchr_inv(mem, PAGE_POISON, bytes);
>>          if (!start)
>>                  return;
>> @@ -113,9 +116,9 @@ static void check_poison_mem(unsigned char *mem, size_t bytes)
>>          if (!__ratelimit(&ratelimit))
>>                  return;
>>          else if (start == end && single_bit_flip(*start, PAGE_POISON))
>> -               printk(KERN_ERR "pagealloc: single bit error\n");
>> +               pr_err("pagealloc: single bit error\n");
>>          else
>> -               printk(KERN_ERR "pagealloc: memory corruption\n");
>> +               pr_err("pagealloc: memory corruption\n");
>>
>>          print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
>>                          end - start + 1, 1);
>> @@ -135,10 +138,28 @@ static void unpoison_page(struct page *page)
>>          kunmap_atomic(addr);
>>   }
>>
>> -void unpoison_pages(struct page *page, int n)
>> +static void unpoison_pages(struct page *page, int n)
>>   {
>>          int i;
>>
>>          for (i = 0; i < n; i++)
>>                  unpoison_page(page + i);
>>   }
>> +
>> +void kernel_poison_pages(struct page *page, int numpages, int enable)
>> +{
>> +       if (!page_poisoning_enabled())
>> +               return;
>> +
>> +       if (enable)
>> +               unpoison_pages(page, numpages);
>> +       else
>> +               poison_pages(page, numpages);
>> +}
>> +
>> +#ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
>> +void __kernel_map_pages(struct page *page, int numpages, int enable)
>> +{
>> +       /* This function does nothing, all work is done via poison pages */
>> +}
>> +#endif
>
> IMHO,  kernel_map_pages is originally incorporated for debugging page
> allocation.
> And latter for archs that do not support arch-specific page poisoning,
> a software poisoning
> method was used.
>
> So I think it is not appropriate to use two interfaces in the alloc/free hooks.
>
> The kernel_poison_pages actually should be an implementation detail
> and should be hided
> in the kernel_map_pages interface.
>

We want to have the poisoning independent of anything that
kernel_map_pages does. It was originally added for software poisoning
on arches that didn't have full ARCH_SUPPORTS_DEBUG_PAGEALLOC support,
but there's nothing that specifically ties it to mapping. It's
beneficial even when we aren't mapping/unmapping the pages, so putting
it in kernel_map_pages would defeat what we're trying to accomplish
here.
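
For reference, a sketch of the hook ordering patch 1 sets up (poisoning
happens while the page is still mapped on free, and after it is mapped
again on alloc):

  free:  arch_free_page()  -> kernel_poison_pages(..., 0) -> kernel_map_pages(..., 0)
  alloc: arch_alloc_page() -> kernel_map_pages(..., 1)    -> kernel_poison_pages(..., 1)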
  
>
> Thanks,
> Jianyu Zhan
>

Thanks,
Laura

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC][PATCH v3 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option
  2016-02-26  4:45     ` Laura Abbott
@ 2016-02-26  5:34       ` Jianyu Zhan
  2016-02-26 22:21         ` Laura Abbott
  0 siblings, 1 reply; 8+ messages in thread
From: Jianyu Zhan @ 2016-02-26  5:34 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Kees Cook, Laura Abbott, Andrew Morton, Kirill A. Shutemov,
	Vlastimil Babka, Michal Hocko, Mathias Krause, Dave Hansen,
	linux-mm, LKML

On Fri, Feb 26, 2016 at 12:45 PM, Laura Abbott <labbott@redhat.com> wrote:
> Do you have some suggestion on wording here? I'm not sure what else to
> say besides poison patterns to differentiate from hardware poison.
>


Is the below wording OK?


config PAGE_POISONING
        bool "Poison pages after freeing"
        select PAGE_EXTENSION
        select PAGE_POISONING_NO_SANITY if HIBERNATION
        ---help---
             Fill the pages with poison patterns after free_pages() and verify
             the patterns before alloc_pages. The filling of the memory helps
             reduce the risk of information leaks from freed data. This does
             have a potential performance impact.

             Note that "poison" here is not the same thing as in "HWPoison"
             for CONFIG_MEMORY_FAILURE, where "poison" is nomenclature
             borrowed from Intel for the processor's support for "poisoned"
             memory, an adaptive method for flagging and recovering from
             memory errors.

>
>>>
>>> +config PAGE_POISONING_NO_SANITY
>>> +       depends on PAGE_POISONING
>>> +       bool "Only poison, don't sanity check"
>>> +       ---help---
>>> +          Skip the sanity checking on alloc, only fill the pages with
>>> +          poison on free. This reduces some of the overhead of the
>>> +          poisoning feature.
>>> +
>>> +          If you are only interested in sanitization, say Y. Otherwise
>>> +          say N.
>>> diff --git a/mm/Makefile b/mm/Makefile
>>> index fb1a7948c107..ec59c071b4f9 100644
>>> --- a/mm/Makefile
>>> +++ b/mm/Makefile
>>> @@ -13,7 +13,6 @@ KCOV_INSTRUMENT_slob.o := n
>>>   KCOV_INSTRUMENT_slab.o := n
>>>   KCOV_INSTRUMENT_slub.o := n
>>>   KCOV_INSTRUMENT_page_alloc.o := n
>>> -KCOV_INSTRUMENT_debug-pagealloc.o := n
>>>   KCOV_INSTRUMENT_kmemleak.o := n
>>>   KCOV_INSTRUMENT_kmemcheck.o := n
>>>   KCOV_INSTRUMENT_memcontrol.o := n
>>> @@ -63,9 +62,6 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
>>>   obj-$(CONFIG_SLOB) += slob.o
>>>   obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
>>>   obj-$(CONFIG_KSM) += ksm.o
>>> -ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
>>> -       obj-$(CONFIG_DEBUG_PAGEALLOC) += debug-pagealloc.o
>>> -endif
>>>   obj-$(CONFIG_PAGE_POISONING) += page_poison.o
>>>   obj-$(CONFIG_SLAB) += slab.o
>>>   obj-$(CONFIG_SLUB) += slub.o
>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>> index a34c359d8e81..0bdb3cfd83b5 100644
>>> --- a/mm/page_alloc.c
>>> +++ b/mm/page_alloc.c
>>> @@ -1026,6 +1026,7 @@ static bool free_pages_prepare(struct page *page,
>>> unsigned int order)
>>>                                             PAGE_SIZE << order);
>>>          }
>>>          arch_free_page(page, order);
>>> +       kernel_poison_pages(page, 1 << order, 0);
>>>          kernel_map_pages(page, 1 << order, 0);
>>>
>>>          return true;
>>> @@ -1497,6 +1498,7 @@ static int prep_new_page(struct page *page,
>>> unsigned int order, gfp_t gfp_flags,
>>>
>>>          arch_alloc_page(page, order);
>>>          kernel_map_pages(page, 1 << order, 1);
>>> +       kernel_poison_pages(page, 1 << order, 1);
>>>          kasan_alloc_pages(page, order);
>>>
>>>          if (gfp_flags & __GFP_ZERO)
>>> diff --git a/mm/page_poison.c b/mm/page_poison.c
>>> index 92ead727b8f0..884a6f854432 100644
>>> --- a/mm/page_poison.c
>>> +++ b/mm/page_poison.c
>>> @@ -80,7 +80,7 @@ static void poison_page(struct page *page)
>>>          kunmap_atomic(addr);
>>>   }
>>>
>>> -void poison_pages(struct page *page, int n)
>>> +static void poison_pages(struct page *page, int n)
>>>   {
>>>          int i;
>>>
>>> @@ -101,6 +101,9 @@ static void check_poison_mem(unsigned char *mem,
>>> size_t bytes)
>>>          unsigned char *start;
>>>          unsigned char *end;
>>>
>>> +       if (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY))
>>> +               return;
>>> +
>>>          start = memchr_inv(mem, PAGE_POISON, bytes);
>>>          if (!start)
>>>                  return;
>>> @@ -113,9 +116,9 @@ static void check_poison_mem(unsigned char *mem,
>>> size_t bytes)
>>>          if (!__ratelimit(&ratelimit))
>>>                  return;
>>>          else if (start == end && single_bit_flip(*start, PAGE_POISON))
>>> -               printk(KERN_ERR "pagealloc: single bit error\n");
>>> +               pr_err("pagealloc: single bit error\n");
>>>          else
>>> -               printk(KERN_ERR "pagealloc: memory corruption\n");
>>> +               pr_err("pagealloc: memory corruption\n");
>>>
>>>          print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
>>>                          end - start + 1, 1);
>>> @@ -135,10 +138,28 @@ static void unpoison_page(struct page *page)
>>>          kunmap_atomic(addr);
>>>   }
>>>
>>> -void unpoison_pages(struct page *page, int n)
>>> +static void unpoison_pages(struct page *page, int n)
>>>   {
>>>          int i;
>>>
>>>          for (i = 0; i < n; i++)
>>>                  unpoison_page(page + i);
>>>   }
>>> +
>>> +void kernel_poison_pages(struct page *page, int numpages, int enable)
>>> +{
>>> +       if (!page_poisoning_enabled())
>>> +               return;
>>> +
>>> +       if (enable)
>>> +               unpoison_pages(page, numpages);
>>> +       else
>>> +               poison_pages(page, numpages);
>>> +}
>>> +
>>> +#ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
>>> +void __kernel_map_pages(struct page *page, int numpages, int enable)
>>> +{
>>> +       /* This function does nothing, all work is done via poison pages
>>> */
>>> +}
>>> +#endif
>>
>>
>> IMHO,  kernel_map_pages is originally incorporated for debugging page
>> allocation.
>> And latter for archs that do not support arch-specific page poisoning,
>> a software poisoning
>> method was used.
>>
>> So I think it is not appropriate to use two interfaces in the alloc/free
>> hooks.
>>
>> The kernel_poison_pages actually should be an implementation detail
>> and should be hided
>> in the kernel_map_pages interface.
>>
>
> We want to have the poisoning independent of anything that kernel_map_pages
> does. It was originally added for software poisoning for arches that
> didn't have the full ARCH_SUPPORTS_DEBUG_PAGEALLOC support but there's
> nothing that specifically ties it to mapping. It's beneficial even when
> we aren't mapping/unmapping the pages so putting it in kernel_map_pages
> would defeat what we're trying to accomplish here.
>

Ok, fair enough. If so, I suggest you add this clarification to the
code, or at least to the changelog.


Thanks,
Jianyu Zhan

^ permalink raw reply	[flat|nested] 8+ messages in thread

* Re: [RFC][PATCH v3 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option
  2016-02-26  5:34       ` Jianyu Zhan
@ 2016-02-26 22:21         ` Laura Abbott
  0 siblings, 0 replies; 8+ messages in thread
From: Laura Abbott @ 2016-02-26 22:21 UTC (permalink / raw)
  To: Jianyu Zhan
  Cc: Kees Cook, Laura Abbott, Andrew Morton, Kirill A. Shutemov,
	Vlastimil Babka, Michal Hocko, Mathias Krause, Dave Hansen,
	linux-mm, LKML

On 02/25/2016 09:34 PM, Jianyu Zhan wrote:
> On Fri, Feb 26, 2016 at 12:45 PM, Laura Abbott <labbott@redhat.com> wrote:
>> Do you have some suggestion on wording here? I'm not sure what else to
>> say besides poison patterns to differentiate from hardware poison.
>>
>
>
> Is the below wording OK?
>
>
> config PAGE_POISONING
>          bool "Poison pages after freeing"
>          select PAGE_EXTENSION
>          select PAGE_POISONING_NO_SANITY if HIBERNATION
>          ---help---
>               Fill the pages with poison patterns after free_pages() and verify
>               the patterns before alloc_pages. The filling of the memory helps
>               reduce the risk of information leaks from freed data. This does
>               have a potential performance impact.
>
>               Note that "poison" here is not the same thing as in "HWPoison"
>               for CONFIG_MEMORY_FAILURE, where "poison" is nomenclature
>               borrowed from Intel for the processor's support for "poisoned"
>               memory, an adaptive method for flagging and recovering from
>               memory errors.
>

Okay, I see what you are getting at here. This sounds okay.
  
>>
>>>>
>>>> +config PAGE_POISONING_NO_SANITY
>>>> +       depends on PAGE_POISONING
>>>> +       bool "Only poison, don't sanity check"
>>>> +       ---help---
>>>> +          Skip the sanity checking on alloc, only fill the pages with
>>>> +          poison on free. This reduces some of the overhead of the
>>>> +          poisoning feature.
>>>> +
>>>> +          If you are only interested in sanitization, say Y. Otherwise
>>>> +          say N.
>>>> diff --git a/mm/Makefile b/mm/Makefile
>>>> index fb1a7948c107..ec59c071b4f9 100644
>>>> --- a/mm/Makefile
>>>> +++ b/mm/Makefile
>>>> @@ -13,7 +13,6 @@ KCOV_INSTRUMENT_slob.o := n
>>>>    KCOV_INSTRUMENT_slab.o := n
>>>>    KCOV_INSTRUMENT_slub.o := n
>>>>    KCOV_INSTRUMENT_page_alloc.o := n
>>>> -KCOV_INSTRUMENT_debug-pagealloc.o := n
>>>>    KCOV_INSTRUMENT_kmemleak.o := n
>>>>    KCOV_INSTRUMENT_kmemcheck.o := n
>>>>    KCOV_INSTRUMENT_memcontrol.o := n
>>>> @@ -63,9 +62,6 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
>>>>    obj-$(CONFIG_SLOB) += slob.o
>>>>    obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
>>>>    obj-$(CONFIG_KSM) += ksm.o
>>>> -ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
>>>> -       obj-$(CONFIG_DEBUG_PAGEALLOC) += debug-pagealloc.o
>>>> -endif
>>>>    obj-$(CONFIG_PAGE_POISONING) += page_poison.o
>>>>    obj-$(CONFIG_SLAB) += slab.o
>>>>    obj-$(CONFIG_SLUB) += slub.o
>>>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>>>> index a34c359d8e81..0bdb3cfd83b5 100644
>>>> --- a/mm/page_alloc.c
>>>> +++ b/mm/page_alloc.c
>>>> @@ -1026,6 +1026,7 @@ static bool free_pages_prepare(struct page *page,
>>>> unsigned int order)
>>>>                                              PAGE_SIZE << order);
>>>>           }
>>>>           arch_free_page(page, order);
>>>> +       kernel_poison_pages(page, 1 << order, 0);
>>>>           kernel_map_pages(page, 1 << order, 0);
>>>>
>>>>           return true;
>>>> @@ -1497,6 +1498,7 @@ static int prep_new_page(struct page *page,
>>>> unsigned int order, gfp_t gfp_flags,
>>>>
>>>>           arch_alloc_page(page, order);
>>>>           kernel_map_pages(page, 1 << order, 1);
>>>> +       kernel_poison_pages(page, 1 << order, 1);
>>>>           kasan_alloc_pages(page, order);
>>>>
>>>>           if (gfp_flags & __GFP_ZERO)
>>>> diff --git a/mm/page_poison.c b/mm/page_poison.c
>>>> index 92ead727b8f0..884a6f854432 100644
>>>> --- a/mm/page_poison.c
>>>> +++ b/mm/page_poison.c
>>>> @@ -80,7 +80,7 @@ static void poison_page(struct page *page)
>>>>           kunmap_atomic(addr);
>>>>    }
>>>>
>>>> -void poison_pages(struct page *page, int n)
>>>> +static void poison_pages(struct page *page, int n)
>>>>    {
>>>>           int i;
>>>>
>>>> @@ -101,6 +101,9 @@ static void check_poison_mem(unsigned char *mem,
>>>> size_t bytes)
>>>>           unsigned char *start;
>>>>           unsigned char *end;
>>>>
>>>> +       if (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY))
>>>> +               return;
>>>> +
>>>>           start = memchr_inv(mem, PAGE_POISON, bytes);
>>>>           if (!start)
>>>>                   return;
>>>> @@ -113,9 +116,9 @@ static void check_poison_mem(unsigned char *mem,
>>>> size_t bytes)
>>>>           if (!__ratelimit(&ratelimit))
>>>>                   return;
>>>>           else if (start == end && single_bit_flip(*start, PAGE_POISON))
>>>> -               printk(KERN_ERR "pagealloc: single bit error\n");
>>>> +               pr_err("pagealloc: single bit error\n");
>>>>           else
>>>> -               printk(KERN_ERR "pagealloc: memory corruption\n");
>>>> +               pr_err("pagealloc: memory corruption\n");
>>>>
>>>>           print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
>>>>                           end - start + 1, 1);
>>>> @@ -135,10 +138,28 @@ static void unpoison_page(struct page *page)
>>>>           kunmap_atomic(addr);
>>>>    }
>>>>
>>>> -void unpoison_pages(struct page *page, int n)
>>>> +static void unpoison_pages(struct page *page, int n)
>>>>    {
>>>>           int i;
>>>>
>>>>           for (i = 0; i < n; i++)
>>>>                   unpoison_page(page + i);
>>>>    }
>>>> +
>>>> +void kernel_poison_pages(struct page *page, int numpages, int enable)
>>>> +{
>>>> +       if (!page_poisoning_enabled())
>>>> +               return;
>>>> +
>>>> +       if (enable)
>>>> +               unpoison_pages(page, numpages);
>>>> +       else
>>>> +               poison_pages(page, numpages);
>>>> +}
>>>> +
>>>> +#ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
>>>> +void __kernel_map_pages(struct page *page, int numpages, int enable)
>>>> +{
>>>> +       /* This function does nothing, all work is done via poison pages
>>>> */
>>>> +}
>>>> +#endif
>>>
>>>
>>> IMHO,  kernel_map_pages is originally incorporated for debugging page
>>> allocation.
>>> And latter for archs that do not support arch-specific page poisoning,
>>> a software poisoning
>>> method was used.
>>>
>>> So I think it is not appropriate to use two interfaces in the alloc/free
>>> hooks.
>>>
>>> The kernel_poison_pages actually should be an implementation detail
>>> and should be hided
>>> in the kernel_map_pages interface.
>>>
>>
>> We want to have the poisoning independent of anything that kernel_map_pages
>> does. It was originally added for software poisoning for arches that
>> didn't have the full ARCH_SUPPORTS_DEBUG_PAGEALLOC support but there's
>> nothing that specifically ties it to mapping. It's beneficial even when
>> we aren't mapping/unmapping the pages so putting it in kernel_map_pages
>> would defeat what we're trying to accomplish here.
>>
>
> Ok, fair enough. If so,  I suggest you add this clarification into the
> code, or as least, in
> the changelog.

Sounds fine.

>
>
> Thanks,
> Jianyu Zhan
>

Thanks,
Laura

^ permalink raw reply	[flat|nested] 8+ messages in thread
