* [PATCHv4 0/2] Sanitization of buddy pages
@ 2016-03-04 23:50 ` Laura Abbott
  0 siblings, 0 replies; 21+ messages in thread
From: Laura Abbott @ 2016-03-04 23:50 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	Kees Cook
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening

Hi,

This is v4 of the sanitization of buddy pages series. It is mostly just a rebase
and some phrasing tweaks from v2; Kees submitted a rebased v3, so this is v4.

Kees, I'm hoping you will give your Tested-by and provide some stats from the
tests you were running before.

Thanks,
Laura

Laura Abbott (2):
  mm/page_poison.c: Enable PAGE_POISONING as a separate option
  mm/page_poisoning.c: Allow for zero poisoning

 Documentation/kernel-parameters.txt |   5 +
 include/linux/mm.h                  |  11 +++
 include/linux/poison.h              |   4 +
 kernel/power/hibernate.c            |  17 ++++
 mm/Kconfig.debug                    |  39 +++++++-
 mm/Makefile                         |   2 +-
 mm/debug-pagealloc.c                | 137 ----------------------------
 mm/page_alloc.c                     |  13 ++-
 mm/page_ext.c                       |  10 +-
 mm/page_poison.c                    | 176 ++++++++++++++++++++++++++++++++++++
 10 files changed, 272 insertions(+), 142 deletions(-)
 delete mode 100644 mm/debug-pagealloc.c
 create mode 100644 mm/page_poison.c

-- 
2.5.0

* [PATCHv4 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option
  2016-03-04 23:50 ` Laura Abbott
@ 2016-03-04 23:50   ` Laura Abbott
  -1 siblings, 0 replies; 21+ messages in thread
From: Laura Abbott @ 2016-03-04 23:50 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	Kees Cook
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening


Page poisoning is currently set up as a feature only for architectures that
don't have architecture-specific debug page_alloc support for unmapping
pages. It has uses beyond that, though: clearing pages on free provides an
increase in security, as it helps limit the risk of information leaks. Allow
page poisoning to be enabled as a separate option, independent of
kernel_map_pages(), since the two features do separate work. Because of how
hibernation is implemented, the checks on alloc cannot occur if hibernation
is enabled. The runtime alloc checks can also be enabled with a config
option when !HIBERNATION.

Credit to Grsecurity/PaX team for inspiring this work

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
 Documentation/kernel-parameters.txt |   5 ++
 include/linux/mm.h                  |   9 ++
 mm/Kconfig.debug                    |  25 +++++-
 mm/Makefile                         |   2 +-
 mm/debug-pagealloc.c                | 137 ----------------------------
 mm/page_alloc.c                     |   2 +
 mm/page_poison.c                    | 173 ++++++++++++++++++++++++++++++++++++
 7 files changed, 214 insertions(+), 139 deletions(-)
 delete mode 100644 mm/debug-pagealloc.c
 create mode 100644 mm/page_poison.c
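
As an illustration only (not part of this patch): with the options added
below, a configuration that exercises poisoning independently of
CONFIG_DEBUG_PAGEALLOC could look like this.

	# .config fragment (CONFIG_PAGE_EXTENSION is selected automatically)
	CONFIG_PAGE_POISONING=y
	# CONFIG_PAGE_POISONING_NO_SANITY is not set
	#  (keeps the alloc-time checks; it is forced on when CONFIG_HIBERNATION=y)

	# kernel command line: enable poisoning at boot
	page_poison=on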

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index 9a53c92..4d99dc8 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2721,6 +2721,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			we can turn it on.
 			on: enable the feature
 
+	page_poison=	[KNL] Boot-time parameter changing the state of
+			poisoning on the buddy allocator.
+			off: turn off poisoning
+			on: turn on poisoning
+
 	panic=		[KNL] Kernel behaviour on panic: delay <timeout>
 			timeout > 0: seconds before rebooting
 			timeout = 0: wait forever
diff --git a/include/linux/mm.h b/include/linux/mm.h
index 516e149..27e385d 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2175,6 +2175,15 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 			       unsigned long size, pte_fn_t fn, void *data);
 
 
+#ifdef CONFIG_PAGE_POISONING
+extern bool page_poisoning_enabled(void);
+extern void kernel_poison_pages(struct page *page, int numpages, int enable);
+#else
+static inline bool page_poisoning_enabled(void) { return false; }
+static inline void kernel_poison_pages(struct page *page, int numpages,
+					int enable) { }
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 extern bool _debug_pagealloc_enabled;
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 957d3da..35d6895 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -27,4 +27,27 @@ config DEBUG_PAGEALLOC
 	  a resume because free pages are not saved to the suspend image.
 
 config PAGE_POISONING
-	bool
+	bool "Poison pages after freeing"
+	select PAGE_EXTENSION
+	select PAGE_POISONING_NO_SANITY if HIBERNATION
+	---help---
+	  Fill the pages with poison patterns after free_pages() and verify
+	  the patterns before alloc_pages. The filling of the memory helps
+	  reduce the risk of information leaks from freed data. This does
+	  have a potential performance impact.
+
+	  Note that "poison" here is not the same thing as the "HWPoison"
+	  for CONFIG_MEMORY_FAILURE. This is software poisoning only.
+
+	  If unsure, say N
+
+config PAGE_POISONING_NO_SANITY
+	depends on PAGE_POISONING
+	bool "Only poison, don't sanity check"
+	---help---
+	   Skip the sanity checking on alloc, only fill the pages with
+	   poison on free. This reduces some of the overhead of the
+	   poisoning feature.
+
+	   If you are only interested in sanitization, say Y. Otherwise
+	   say N.
diff --git a/mm/Makefile b/mm/Makefile
index 2ed4319..cfdd481d 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -48,7 +48,7 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
 obj-$(CONFIG_KSM) += ksm.o
-obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
+obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
diff --git a/mm/debug-pagealloc.c b/mm/debug-pagealloc.c
deleted file mode 100644
index 5bf5906..0000000
--- a/mm/debug-pagealloc.c
+++ /dev/null
@@ -1,137 +0,0 @@
-#include <linux/kernel.h>
-#include <linux/string.h>
-#include <linux/mm.h>
-#include <linux/highmem.h>
-#include <linux/page_ext.h>
-#include <linux/poison.h>
-#include <linux/ratelimit.h>
-
-static bool page_poisoning_enabled __read_mostly;
-
-static bool need_page_poisoning(void)
-{
-	if (!debug_pagealloc_enabled())
-		return false;
-
-	return true;
-}
-
-static void init_page_poisoning(void)
-{
-	if (!debug_pagealloc_enabled())
-		return;
-
-	page_poisoning_enabled = true;
-}
-
-struct page_ext_operations page_poisoning_ops = {
-	.need = need_page_poisoning,
-	.init = init_page_poisoning,
-};
-
-static inline void set_page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	__set_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static inline void clear_page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static inline bool page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static void poison_page(struct page *page)
-{
-	void *addr = kmap_atomic(page);
-
-	set_page_poison(page);
-	memset(addr, PAGE_POISON, PAGE_SIZE);
-	kunmap_atomic(addr);
-}
-
-static void poison_pages(struct page *page, int n)
-{
-	int i;
-
-	for (i = 0; i < n; i++)
-		poison_page(page + i);
-}
-
-static bool single_bit_flip(unsigned char a, unsigned char b)
-{
-	unsigned char error = a ^ b;
-
-	return error && !(error & (error - 1));
-}
-
-static void check_poison_mem(unsigned char *mem, size_t bytes)
-{
-	static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 10);
-	unsigned char *start;
-	unsigned char *end;
-
-	start = memchr_inv(mem, PAGE_POISON, bytes);
-	if (!start)
-		return;
-
-	for (end = mem + bytes - 1; end > start; end--) {
-		if (*end != PAGE_POISON)
-			break;
-	}
-
-	if (!__ratelimit(&ratelimit))
-		return;
-	else if (start == end && single_bit_flip(*start, PAGE_POISON))
-		printk(KERN_ERR "pagealloc: single bit error\n");
-	else
-		printk(KERN_ERR "pagealloc: memory corruption\n");
-
-	print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
-			end - start + 1, 1);
-	dump_stack();
-}
-
-static void unpoison_page(struct page *page)
-{
-	void *addr;
-
-	if (!page_poison(page))
-		return;
-
-	addr = kmap_atomic(page);
-	check_poison_mem(addr, PAGE_SIZE);
-	clear_page_poison(page);
-	kunmap_atomic(addr);
-}
-
-static void unpoison_pages(struct page *page, int n)
-{
-	int i;
-
-	for (i = 0; i < n; i++)
-		unpoison_page(page + i);
-}
-
-void __kernel_map_pages(struct page *page, int numpages, int enable)
-{
-	if (!page_poisoning_enabled)
-		return;
-
-	if (enable)
-		unpoison_pages(page, numpages);
-	else
-		poison_pages(page, numpages);
-}
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 838ca8bb..6c4d7bc 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1002,6 +1002,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 					   PAGE_SIZE << order);
 	}
 	arch_free_page(page, order);
+	kernel_poison_pages(page, 1 << order, 0);
 	kernel_map_pages(page, 1 << order, 0);
 
 	return true;
@@ -1397,6 +1398,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 
 	arch_alloc_page(page, order);
 	kernel_map_pages(page, 1 << order, 1);
+	kernel_poison_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 
 	if (gfp_flags & __GFP_ZERO)
diff --git a/mm/page_poison.c b/mm/page_poison.c
new file mode 100644
index 0000000..89d3bc7
--- /dev/null
+++ b/mm/page_poison.c
@@ -0,0 +1,173 @@
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/page_ext.h>
+#include <linux/poison.h>
+#include <linux/ratelimit.h>
+
+static bool __page_poisoning_enabled __read_mostly;
+static bool want_page_poisoning __read_mostly;
+
+static int early_page_poison_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (strcmp(buf, "on") == 0)
+		want_page_poisoning = true;
+	else if (strcmp(buf, "off") == 0)
+		want_page_poisoning = false;
+
+	return 0;
+}
+early_param("page_poison", early_page_poison_param);
+
+bool page_poisoning_enabled(void)
+{
+	return __page_poisoning_enabled;
+}
+
+static bool need_page_poisoning(void)
+{
+	return want_page_poisoning;
+}
+
+static void init_page_poisoning(void)
+{
+	/*
+	 * page poisoning is debug page alloc for some arches. If either
+	 * of those options are enabled, enable poisoning
+	 */
+	if (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC)) {
+		if (!want_page_poisoning && !debug_pagealloc_enabled())
+			return;
+	} else {
+		if (!want_page_poisoning)
+			return;
+	}
+
+	__page_poisoning_enabled = true;
+}
+
+struct page_ext_operations page_poisoning_ops = {
+	.need = need_page_poisoning,
+	.init = init_page_poisoning,
+};
+
+static inline void set_page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	__set_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static inline void clear_page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static inline bool page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static void poison_page(struct page *page)
+{
+	void *addr = kmap_atomic(page);
+
+	set_page_poison(page);
+	memset(addr, PAGE_POISON, PAGE_SIZE);
+	kunmap_atomic(addr);
+}
+
+static void poison_pages(struct page *page, int n)
+{
+	int i;
+
+	for (i = 0; i < n; i++)
+		poison_page(page + i);
+}
+
+static bool single_bit_flip(unsigned char a, unsigned char b)
+{
+	unsigned char error = a ^ b;
+
+	return error && !(error & (error - 1));
+}
+
+static void check_poison_mem(unsigned char *mem, size_t bytes)
+{
+	static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 10);
+	unsigned char *start;
+	unsigned char *end;
+
+	if (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY))
+		return;
+
+	start = memchr_inv(mem, PAGE_POISON, bytes);
+	if (!start)
+		return;
+
+	for (end = mem + bytes - 1; end > start; end--) {
+		if (*end != PAGE_POISON)
+			break;
+	}
+
+	if (!__ratelimit(&ratelimit))
+		return;
+	else if (start == end && single_bit_flip(*start, PAGE_POISON))
+		pr_err("pagealloc: single bit error\n");
+	else
+		pr_err("pagealloc: memory corruption\n");
+
+	print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
+			end - start + 1, 1);
+	dump_stack();
+}
+
+static void unpoison_page(struct page *page)
+{
+	void *addr;
+
+	if (!page_poison(page))
+		return;
+
+	addr = kmap_atomic(page);
+	check_poison_mem(addr, PAGE_SIZE);
+	clear_page_poison(page);
+	kunmap_atomic(addr);
+}
+
+static void unpoison_pages(struct page *page, int n)
+{
+	int i;
+
+	for (i = 0; i < n; i++)
+		unpoison_page(page + i);
+}
+
+void kernel_poison_pages(struct page *page, int numpages, int enable)
+{
+	if (!page_poisoning_enabled())
+		return;
+
+	if (enable)
+		unpoison_pages(page, numpages);
+	else
+		poison_pages(page, numpages);
+}
+
+#ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
+void __kernel_map_pages(struct page *page, int numpages, int enable)
+{
+	/* This function does nothing, all work is done via poison pages */
+}
+#endif
-- 
2.5.0

* [PATCHv4 2/2] mm/page_poisoning.c: Allow for zero poisoning
  2016-03-04 23:50 ` Laura Abbott
@ 2016-03-04 23:50   ` Laura Abbott
  -1 siblings, 0 replies; 21+ messages in thread
From: Laura Abbott @ 2016-03-04 23:50 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	Kees Cook
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening


By default, page poisoning uses a poison value (0xaa) on free. If this
is changed to 0, the page is not only sanitized but zeroing on alloc
with __GFP_ZERO can be skipped as well. The tradeoff is that corruption
from the poisoning is harder to detect. This feature also cannot be used
with hibernation, since pages are not guaranteed to be zeroed after
hibernation.

Credit to Grsecurity/PaX team for inspiring this work

Acked-by: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
 include/linux/mm.h       |  2 ++
 include/linux/poison.h   |  4 ++++
 kernel/power/hibernate.c | 17 +++++++++++++++++
 mm/Kconfig.debug         | 14 ++++++++++++++
 mm/page_alloc.c          | 11 ++++++++++-
 mm/page_ext.c            | 10 ++++++++--
 mm/page_poison.c         |  7 +++++--
 7 files changed, 60 insertions(+), 5 deletions(-)
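
As an illustration only (not part of this patch), the effect of the
prep_new_page() change below can be sketched as follows: when zero
poisoning is compiled in and enabled, and every page in the block still
carries its (zero) poison pattern, __GFP_ZERO no longer needs to clear
the pages again.

	/* paraphrasing the mm/page_alloc.c hunk below */
	if (!free_pages_prezeroed(poisoned) && (gfp_flags & __GFP_ZERO))
		for (i = 0; i < (1 << order); i++)
			clear_highpage(page + i);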

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 27e385d..f3d2ef6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2178,10 +2178,12 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 #ifdef CONFIG_PAGE_POISONING
 extern bool page_poisoning_enabled(void);
 extern void kernel_poison_pages(struct page *page, int numpages, int enable);
+extern bool page_is_poisoned(struct page *page);
 #else
 static inline bool page_poisoning_enabled(void) { return false; }
 static inline void kernel_poison_pages(struct page *page, int numpages,
 					int enable) { }
+static inline bool page_is_poisoned(struct page *page) { return false; }
 #endif
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/poison.h b/include/linux/poison.h
index 4a27153..51334ed 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -30,7 +30,11 @@
 #define TIMER_ENTRY_STATIC	((void *) 0x300 + POISON_POINTER_DELTA)
 
 /********** mm/debug-pagealloc.c **********/
+#ifdef CONFIG_PAGE_POISONING_ZERO
+#define PAGE_POISON 0x00
+#else
 #define PAGE_POISON 0xaa
+#endif
 
 /********** mm/page_alloc.c ************/
 
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index b7342a2..aa0f26b 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -1158,6 +1158,22 @@ static int __init kaslr_nohibernate_setup(char *str)
 	return nohibernate_setup(str);
 }
 
+static int __init page_poison_nohibernate_setup(char *str)
+{
+#ifdef CONFIG_PAGE_POISONING_ZERO
+	/*
+	 * The zeroing option for page poison skips the checks on alloc.
+	 * since hibernation doesn't save free pages there's no way to
+	 * guarantee the pages will still be zeroed.
+	 */
+	if (!strcmp(str, "on")) {
+		pr_info("Disabling hibernation due to page poisoning\n");
+		return nohibernate_setup(str);
+	}
+#endif
+	return 1;
+}
+
 __setup("noresume", noresume_setup);
 __setup("resume_offset=", resume_offset_setup);
 __setup("resume=", resume_setup);
@@ -1166,3 +1182,4 @@ __setup("resumewait", resumewait_setup);
 __setup("resumedelay=", resumedelay_setup);
 __setup("nohibernate", nohibernate_setup);
 __setup("kaslr", kaslr_nohibernate_setup);
+__setup("page_poison=", page_poison_nohibernate_setup);
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 35d6895..613437b 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -51,3 +51,17 @@ config PAGE_POISONING_NO_SANITY
 
 	   If you are only interested in sanitization, say Y. Otherwise
 	   say N.
+
+config PAGE_POISONING_ZERO
+	bool "Use zero for poisoning instead of random data"
+	depends on PAGE_POISONING
+	---help---
+	   Instead of using the existing poison value, fill the pages with
+	   zeros. This makes it harder to detect when errors are occurring
+	   due to sanitization but the zeroing at free means that it is
+	   no longer necessary to write zeros when GFP_ZERO is used on
+	   allocation.
+
+	   Enabling page poisoning with this option will disable hibernation
+
+	   If unsure, say N
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6c4d7bc..c04ed17 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1382,15 +1382,24 @@ static inline int check_new_page(struct page *page)
 	return 0;
 }
 
+static inline bool free_pages_prezeroed(bool poisoned)
+{
+	return IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
+		page_poisoning_enabled() && poisoned;
+}
+
 static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 								int alloc_flags)
 {
 	int i;
+	bool poisoned = true;
 
 	for (i = 0; i < (1 << order); i++) {
 		struct page *p = page + i;
 		if (unlikely(check_new_page(p)))
 			return 1;
+		if (poisoned)
+			poisoned &= page_is_poisoned(p);
 	}
 
 	set_page_private(page, 0);
@@ -1401,7 +1410,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 	kernel_poison_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 
-	if (gfp_flags & __GFP_ZERO)
+	if (!free_pages_prezeroed(poisoned) && (gfp_flags & __GFP_ZERO))
 		for (i = 0; i < (1 << order); i++)
 			clear_highpage(page + i);
 
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 292ca7b..2d864e6 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -106,12 +106,15 @@ struct page_ext *lookup_page_ext(struct page *page)
 	struct page_ext *base;
 
 	base = NODE_DATA(page_to_nid(page))->node_page_ext;
-#ifdef CONFIG_DEBUG_VM
+#if defined(CONFIG_DEBUG_VM) || defined(CONFIG_PAGE_POISONING)
 	/*
 	 * The sanity checks the page allocator does upon freeing a
 	 * page can reach here before the page_ext arrays are
 	 * allocated when feeding a range of pages to the allocator
 	 * for the first time during bootup or memory hotplug.
+	 *
+	 * This check is also necessary for ensuring page poisoning
+	 * works as expected when enabled
 	 */
 	if (unlikely(!base))
 		return NULL;
@@ -180,12 +183,15 @@ struct page_ext *lookup_page_ext(struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
 	struct mem_section *section = __pfn_to_section(pfn);
-#ifdef CONFIG_DEBUG_VM
+#if defined(CONFIG_DEBUG_VM) || defined(CONFIG_PAGE_POISONING)
 	/*
 	 * The sanity checks the page allocator does upon freeing a
 	 * page can reach here before the page_ext arrays are
 	 * allocated when feeding a range of pages to the allocator
 	 * for the first time during bootup or memory hotplug.
+	 *
+	 * This check is also necessary for ensuring page poisoning
+	 * works as expected when enabled
 	 */
 	if (!section->page_ext)
 		return NULL;
diff --git a/mm/page_poison.c b/mm/page_poison.c
index 89d3bc7..479e7ea 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -71,11 +71,14 @@ static inline void clear_page_poison(struct page *page)
 	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
 }
 
-static inline bool page_poison(struct page *page)
+bool page_is_poisoned(struct page *page)
 {
 	struct page_ext *page_ext;
 
 	page_ext = lookup_page_ext(page);
+	if (!page_ext)
+		return false;
+
 	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
 }
 
@@ -137,7 +140,7 @@ static void unpoison_page(struct page *page)
 {
 	void *addr;
 
-	if (!page_poison(page))
+	if (!page_is_poisoned(page))
 		return;
 
 	addr = kmap_atomic(page);
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* [PATCHv4 2/2] mm/page_poisoning.c: Allow for zero poisoning
@ 2016-03-04 23:50   ` Laura Abbott
  0 siblings, 0 replies; 21+ messages in thread
From: Laura Abbott @ 2016-03-04 23:50 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	Kees Cook
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening


By default, page poisoning uses a poison value (0xaa) on free. If this
is changed to 0, the page is not only sanitized but zeroing on alloc
with __GFP_ZERO can be skipped as well. The tradeoff is that detecting
corruption from the poisoning is harder to detect. This feature also
cannot be used with hibernation since pages are not guaranteed to be
zeroed after hibernation.

Credit to Grsecurity/PaX team for inspiring this work

Acked-by: Rafael J. Wysocki <rjw@rjwysocki.net>
Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
 include/linux/mm.h       |  2 ++
 include/linux/poison.h   |  4 ++++
 kernel/power/hibernate.c | 17 +++++++++++++++++
 mm/Kconfig.debug         | 14 ++++++++++++++
 mm/page_alloc.c          | 11 ++++++++++-
 mm/page_ext.c            | 10 ++++++++--
 mm/page_poison.c         |  7 +++++--
 7 files changed, 60 insertions(+), 5 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 27e385d..f3d2ef6 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2178,10 +2178,12 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 #ifdef CONFIG_PAGE_POISONING
 extern bool page_poisoning_enabled(void);
 extern void kernel_poison_pages(struct page *page, int numpages, int enable);
+extern bool page_is_poisoned(struct page *page);
 #else
 static inline bool page_poisoning_enabled(void) { return false; }
 static inline void kernel_poison_pages(struct page *page, int numpages,
 					int enable) { }
+static inline bool page_is_poisoned(struct page *page) { return false; }
 #endif
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/include/linux/poison.h b/include/linux/poison.h
index 4a27153..51334ed 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -30,7 +30,11 @@
 #define TIMER_ENTRY_STATIC	((void *) 0x300 + POISON_POINTER_DELTA)
 
 /********** mm/debug-pagealloc.c **********/
+#ifdef CONFIG_PAGE_POISONING_ZERO
+#define PAGE_POISON 0x00
+#else
 #define PAGE_POISON 0xaa
+#endif
 
 /********** mm/page_alloc.c ************/
 
diff --git a/kernel/power/hibernate.c b/kernel/power/hibernate.c
index b7342a2..aa0f26b 100644
--- a/kernel/power/hibernate.c
+++ b/kernel/power/hibernate.c
@@ -1158,6 +1158,22 @@ static int __init kaslr_nohibernate_setup(char *str)
 	return nohibernate_setup(str);
 }
 
+static int __init page_poison_nohibernate_setup(char *str)
+{
+#ifdef CONFIG_PAGE_POISONING_ZERO
+	/*
+	 * The zeroing option for page poison skips the checks on alloc.
+	 * since hibernation doesn't save free pages there's no way to
+	 * guarantee the pages will still be zeroed.
+	 */
+	if (!strcmp(str, "on")) {
+		pr_info("Disabling hibernation due to page poisoning\n");
+		return nohibernate_setup(str);
+	}
+#endif
+	return 1;
+}
+
 __setup("noresume", noresume_setup);
 __setup("resume_offset=", resume_offset_setup);
 __setup("resume=", resume_setup);
@@ -1166,3 +1182,4 @@ __setup("resumewait", resumewait_setup);
 __setup("resumedelay=", resumedelay_setup);
 __setup("nohibernate", nohibernate_setup);
 __setup("kaslr", kaslr_nohibernate_setup);
+__setup("page_poison=", page_poison_nohibernate_setup);
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 35d6895..613437b 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -51,3 +51,17 @@ config PAGE_POISONING_NO_SANITY
 
 	   If you are only interested in sanitization, say Y. Otherwise
 	   say N.
+
+config PAGE_POISONING_ZERO
+	bool "Use zero for poisoning instead of random data"
+	depends on PAGE_POISONING
+	---help---
+	   Instead of using the existing poison value, fill the pages with
+	   zeros. This makes it harder to detect when errors are occurring
+	   due to sanitization but the zeroing at free means that it is
+	   no longer necessary to write zeros when GFP_ZERO is used on
+	   allocation.
+
+	   Enabling page poisoning with this option will disable hibernation
+
+	   If unsure, say N
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 6c4d7bc..c04ed17 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1382,15 +1382,24 @@ static inline int check_new_page(struct page *page)
 	return 0;
 }
 
+static inline bool free_pages_prezeroed(bool poisoned)
+{
+	return IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) &&
+		page_poisoning_enabled() && poisoned;
+}
+
 static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 								int alloc_flags)
 {
 	int i;
+	bool poisoned = true;
 
 	for (i = 0; i < (1 << order); i++) {
 		struct page *p = page + i;
 		if (unlikely(check_new_page(p)))
 			return 1;
+		if (poisoned)
+			poisoned &= page_is_poisoned(p);
 	}
 
 	set_page_private(page, 0);
@@ -1401,7 +1410,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 	kernel_poison_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 
-	if (gfp_flags & __GFP_ZERO)
+	if (!free_pages_prezeroed(poisoned) && (gfp_flags & __GFP_ZERO))
 		for (i = 0; i < (1 << order); i++)
 			clear_highpage(page + i);
 
diff --git a/mm/page_ext.c b/mm/page_ext.c
index 292ca7b..2d864e6 100644
--- a/mm/page_ext.c
+++ b/mm/page_ext.c
@@ -106,12 +106,15 @@ struct page_ext *lookup_page_ext(struct page *page)
 	struct page_ext *base;
 
 	base = NODE_DATA(page_to_nid(page))->node_page_ext;
-#ifdef CONFIG_DEBUG_VM
+#if defined(CONFIG_DEBUG_VM) || defined(CONFIG_PAGE_POISONING)
 	/*
 	 * The sanity checks the page allocator does upon freeing a
 	 * page can reach here before the page_ext arrays are
 	 * allocated when feeding a range of pages to the allocator
 	 * for the first time during bootup or memory hotplug.
+	 *
+	 * This check is also necessary for ensuring page poisoning
+	 * works as expected when enabled
 	 */
 	if (unlikely(!base))
 		return NULL;
@@ -180,12 +183,15 @@ struct page_ext *lookup_page_ext(struct page *page)
 {
 	unsigned long pfn = page_to_pfn(page);
 	struct mem_section *section = __pfn_to_section(pfn);
-#ifdef CONFIG_DEBUG_VM
+#if defined(CONFIG_DEBUG_VM) || defined(CONFIG_PAGE_POISONING)
 	/*
 	 * The sanity checks the page allocator does upon freeing a
 	 * page can reach here before the page_ext arrays are
 	 * allocated when feeding a range of pages to the allocator
 	 * for the first time during bootup or memory hotplug.
+	 *
+	 * This check is also necessary for ensuring page poisoning
+	 * works as expected when enabled
 	 */
 	if (!section->page_ext)
 		return NULL;
diff --git a/mm/page_poison.c b/mm/page_poison.c
index 89d3bc7..479e7ea 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -71,11 +71,14 @@ static inline void clear_page_poison(struct page *page)
 	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
 }
 
-static inline bool page_poison(struct page *page)
+bool page_is_poisoned(struct page *page)
 {
 	struct page_ext *page_ext;
 
 	page_ext = lookup_page_ext(page);
+	if (!page_ext)
+		return false;
+
 	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
 }
 
@@ -137,7 +140,7 @@ static void unpoison_page(struct page *page)
 {
 	void *addr;
 
-	if (!page_poison(page))
+	if (!page_is_poisoned(page))
 		return;
 
 	addr = kmap_atomic(page);
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 21+ messages in thread

* Re: [PATCHv4 2/2] mm/page_poisoning.c: Allow for zero poisoning
  2016-03-04 23:50   ` Laura Abbott
  (?)
@ 2016-03-05  0:07     ` Andrew Morton
  -1 siblings, 0 replies; 21+ messages in thread
From: Andrew Morton @ 2016-03-05  0:07 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Kirill A. Shutemov, Vlastimil Babka, Michal Hocko, Kees Cook,
	linux-mm, linux-kernel, kernel-hardening

On Fri,  4 Mar 2016 15:50:48 -0800 Laura Abbott <labbott@fedoraproject.org> wrote:

> 
> By default, page poisoning uses a poison value (0xaa) on free. If this
> is changed to 0, the page is not only sanitized but zeroing on alloc
> with __GFP_ZERO can be skipped as well. The tradeoff is that corruption
> from the poisoning is harder to detect. This feature also
> cannot be used with hibernation since pages are not guaranteed to be
> zeroed after hibernation.
> 
> Credit to Grsecurity/PaX team for inspiring this work
> 
> --- a/kernel/power/hibernate.c
> +++ b/kernel/power/hibernate.c
> @@ -1158,6 +1158,22 @@ static int __init kaslr_nohibernate_setup(char *str)
>  	return nohibernate_setup(str);
>  }
>  
> +static int __init page_poison_nohibernate_setup(char *str)
> +{
> +#ifdef CONFIG_PAGE_POISONING_ZERO
> +	/*
> +	 * The zeroing option for page poison skips the checks on alloc.
> +	 * Since hibernation doesn't save free pages, there's no way to
> +	 * guarantee the pages will still be zeroed.
> +	 */
> +	if (!strcmp(str, "on")) {
> +		pr_info("Disabling hibernation due to page poisoning\n");
> +		return nohibernate_setup(str);
> +	}
> +#endif
> +	return 1;
> +}

It seems a bit unfriendly to silently accept the boot option but not
actually do anything with it.  Perhaps a `#else pr_info("sorry")' is
needed.

But I bet we made the same mistake in 1000 other places.

What happens if page_poison_nohibernate_setup() simply doesn't exist
when CONFIG_PAGE_POISONING_ZERO=n?  It looks like
kernel/params.c:parse_args() says "Unknown parameter".
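
As a rough sketch of the `#else pr_info()' idea above (illustrative
only; this is not code from the series, and the message text is made
up), the handler might look like:

static int __init page_poison_nohibernate_setup(char *str)
{
#ifdef CONFIG_PAGE_POISONING_ZERO
	/*
	 * Zero poisoning skips the __GFP_ZERO clearing on alloc, and
	 * hibernation does not save free pages, so the two cannot be
	 * combined.
	 */
	if (!strcmp(str, "on")) {
		pr_info("Disabling hibernation due to page poisoning\n");
		return nohibernate_setup(str);
	}
#else
	/* Hypothetical message: report that the option has no effect here. */
	pr_info("page_poison=%s does not affect hibernation (CONFIG_PAGE_POISONING_ZERO=n)\n",
		str);
#endif
	return 1;
}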

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCHv4 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option
  2016-03-04 23:50   ` Laura Abbott
  (?)
@ 2016-03-05  0:17     ` kbuild test robot
  -1 siblings, 0 replies; 21+ messages in thread
From: kbuild test robot @ 2016-03-05  0:17 UTC (permalink / raw)
  To: Laura Abbott
  Cc: kbuild-all, Andrew Morton, Kirill A. Shutemov, Vlastimil Babka,
	Michal Hocko, Kees Cook, Laura Abbott, linux-mm, linux-kernel,
	kernel-hardening

[-- Attachment #1: Type: text/plain, Size: 1543 bytes --]

Hi Laura,

[auto build test ERROR on v4.5-rc6]
[cannot apply to next-20160304]
[if your patch is applied to the wrong git tree, please drop us a note to help improve the system]

url:    https://github.com/0day-ci/linux/commits/Laura-Abbott/mm-page_poison-c-Enable-PAGE_POISONING-as-a-separate-option/20160305-075416
config: sparc64-allyesconfig (attached as .config)
reproduce:
        wget https://git.kernel.org/cgit/linux/kernel/git/wfg/lkp-tests.git/plain/sbin/make.cross -O ~/bin/make.cross
        chmod +x ~/bin/make.cross
        # save the attached .config to linux build tree
        make.cross ARCH=sparc64 

All errors (new ones prefixed by >>):

   mm/page_poison.c: In function 'init_page_poisoning':
>> mm/page_poison.c:43:3: error: implicit declaration of function 'debug_pagealloc_enabled' [-Werror=implicit-function-declaration]
      if (!want_page_poisoning && !debug_pagealloc_enabled())
      ^
   cc1: some warnings being treated as errors

vim +/debug_pagealloc_enabled +43 mm/page_poison.c

    37	{
    38		/*
    39		 * page poisoning is debug page alloc for some arches. If either
    40		 * of those options are enabled, enable poisoning
    41		 */
    42		if (!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC)) {
  > 43			if (!want_page_poisoning && !debug_pagealloc_enabled())
    44				return;
    45		} else {
    46			if (!want_page_poisoning)
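
An error like this usually means debug_pagealloc_enabled() has no
definition when CONFIG_DEBUG_PAGEALLOC is unavailable on the
architecture. A minimal fallback stub in include/linux/mm.h along the
lines below would silence it (a sketch for illustration only, not
necessarily the fix that was actually applied):

#ifdef CONFIG_DEBUG_PAGEALLOC
extern bool _debug_pagealloc_enabled;

static inline bool debug_pagealloc_enabled(void)
{
	return _debug_pagealloc_enabled;
}
#else
/* Arches without DEBUG_PAGEALLOC support: treat it as always off. */
static inline bool debug_pagealloc_enabled(void)
{
	return false;
}
#endif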

---
0-DAY kernel test infrastructure                Open Source Technology Center
https://lists.01.org/pipermail/kbuild-all                   Intel Corporation

[-- Attachment #2: .config.gz --]
[-- Type: application/octet-stream, Size: 45113 bytes --]

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCHv4 2/2] mm/page_poisoning.c: Allow for zero poisoning
  2016-03-05  0:07     ` Andrew Morton
  (?)
@ 2016-03-08  0:33       ` Laura Abbott
  -1 siblings, 0 replies; 21+ messages in thread
From: Laura Abbott @ 2016-03-08  0:33 UTC (permalink / raw)
  To: Andrew Morton, Laura Abbott
  Cc: Kirill A. Shutemov, Vlastimil Babka, Michal Hocko, Kees Cook,
	linux-mm, linux-kernel, kernel-hardening

On 03/04/2016 04:07 PM, Andrew Morton wrote:
> On Fri,  4 Mar 2016 15:50:48 -0800 Laura Abbott <labbott@fedoraproject.org> wrote:
>
>>
>> By default, page poisoning uses a poison value (0xaa) on free. If this
>> is changed to 0, the page is not only sanitized but zeroing on alloc
>> with __GFP_ZERO can be skipped as well. The tradeoff is that corruption
>> from the poisoning is harder to detect. This feature also
>> cannot be used with hibernation since pages are not guaranteed to be
>> zeroed after hibernation.
>>
>> Credit to Grsecurity/PaX team for inspiring this work
>>
>> --- a/kernel/power/hibernate.c
>> +++ b/kernel/power/hibernate.c
>> @@ -1158,6 +1158,22 @@ static int __init kaslr_nohibernate_setup(char *str)
>>   	return nohibernate_setup(str);
>>   }
>>
>> +static int __init page_poison_nohibernate_setup(char *str)
>> +{
>> +#ifdef CONFIG_PAGE_POISONING_ZERO
>> +	/*
>> +	 * The zeroing option for page poison skips the checks on alloc.
>> +	 * Since hibernation doesn't save free pages, there's no way to
>> +	 * guarantee the pages will still be zeroed.
>> +	 */
>> +	if (!strcmp(str, "on")) {
>> +		pr_info("Disabling hibernation due to page poisoning\n");
>> +		return nohibernate_setup(str);
>> +	}
>> +#endif
>> +	return 1;
>> +}
>
> It seems a bit unfriendly to silently accept the boot option but not
> actually do anything with it.  Perhaps a `#else pr_info("sorry")' is
> needed.
>
> But I bet we made the same mistake in 1000 other places.
>
> What happens if page_poison_nohibernate_setup() simply doesn't exist
> when CONFIG_PAGE_POISONING_ZERO=n?  It looks like
> kernel/params.c:parse_args() says "Unknown parameter".
>
>

I didn't see that behavior when I tested, even with nonsense parameters.
It looks like it might fall back to some other behavior before giving
-ENOENT?

It's also worth noting that the page_poison= option is also parsed in
mm/page_poison.c to turn the poisoning feature itself on or off. The
kernel's option-parsing code supports handling the same option in both
places, and that seemed to match better with what the existing
hibernate code was already doing to turn off options.
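
For reference, the mm/page_poison.c side of that parsing is just a
small early_param handler, roughly like the below (a sketch based on
patch 1/2; the exact code in the series may differ):

static bool want_page_poisoning __read_mostly;

static int early_page_poison_param(char *buf)
{
	if (!buf)
		return -EINVAL;
	return strtobool(buf, &want_page_poisoning);
}
early_param("page_poison", early_page_poison_param);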

Thanks,
Laura

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCHv4 0/2] Sanitization of buddy pages
  2016-03-04 23:50 ` Laura Abbott
  (?)
@ 2016-03-09 21:00   ` Kees Cook
  -1 siblings, 0 replies; 21+ messages in thread
From: Kees Cook @ 2016-03-09 21:00 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	Linux-MM, LKML, kernel-hardening

On Fri, Mar 4, 2016 at 3:50 PM, Laura Abbott <labbott@fedoraproject.org> wrote:
> Hi,
>
> This is v4 of the sanitization of buddy pages. This is mostly just a rebase
> and some phrasing tweaks from v2. Kees submitted a rebase of v3 so this is v4.
>
> Kees, I'm hoping you will give your Tested-by and provide some stats from the
> tests you were running before.

Yeah, please consider these:

Tested-by: Kees Cook <keescook@chromium.org>

The benchmarking should be the same as the v3 I reported on, repeated
here for good measure:

The sanity checks appear to add about 3% additional overhead, but
poisoning seems to add about 9%.

DEBUG_PAGEALLOC=n
PAGE_POISONING=y
PAGE_POISONING_NO_SANITY=y
PAGE_POISONING_ZERO=y

Run times: 389.23 384.88 386.33
Mean: 386.81
Std Dev: 1.81

LKDTM detects nothing, as expected.


DEBUG_PAGEALLOC=n
PAGE_POISONING=y
PAGE_POISONING_NO_SANITY=y
PAGE_POISONING_ZERO=n
slub_debug=P page_poison=on

Run times: 435.63 419.20 422.82
Mean: 425.89
Std Dev: 7.05

Overhead: 9.2% vs all disabled

Poisoning confirmed: READ_AFTER_FREE, READ_BUDDY_AFTER_FREE
Writes not detected, as expected.


DEBUG_PAGEALLOC=n
PAGE_POISONING=y
PAGE_POISONING_NO_SANITY=y
PAGE_POISONING_ZERO=y
slub_debug=P page_poison=on

Run times: 423.44 422.32 424.95
Mean: 423.57
Std Dev: 1.08

Overhead: 8.7% vs disabled, 0.5% improvement over non-zero
poison (though only the buddy allocator is using the zero poison).

Poisoning confirmed: READ_AFTER_FREE, READ_BUDDY_AFTER_FREE
Writes not detected, as expected.


DEBUG_PAGEALLOC=n
PAGE_POISONING=y
PAGE_POISONING_NO_SANITY=n
PAGE_POISONING_ZERO=y
slub_debug=FP page_poison=on

Run times: 454.26 429.46 430.48
Mean: 438.07
Std Dev: 11.46

Overhead: 11.7% vs nothing, 3% more overhead than no sanitizing.

All four tests detect correctly.


>
> Thanks,
> Laura
>
> Laura Abbott (2):
>   mm/page_poison.c: Enable PAGE_POISONING as a separate option
>   mm/page_poisoning.c: Allow for zero poisoning
>
>  Documentation/kernel-parameters.txt |   5 +
>  include/linux/mm.h                  |  11 +++
>  include/linux/poison.h              |   4 +
>  kernel/power/hibernate.c            |  17 ++++
>  mm/Kconfig.debug                    |  39 +++++++-
>  mm/Makefile                         |   2 +-
>  mm/debug-pagealloc.c                | 137 ----------------------------
>  mm/page_alloc.c                     |  13 ++-
>  mm/page_ext.c                       |  10 +-
>  mm/page_poison.c                    | 176 ++++++++++++++++++++++++++++++++++++
>  10 files changed, 272 insertions(+), 142 deletions(-)
>  delete mode 100644 mm/debug-pagealloc.c
>  create mode 100644 mm/page_poison.c
>
> --
> 2.5.0
>

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2016-03-09 21:00 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-03-04 23:50 [PATCHv4 0/2] Sanitization of buddy pages Laura Abbott
2016-03-04 23:50 ` [kernel-hardening] " Laura Abbott
2016-03-04 23:50 ` Laura Abbott
2016-03-04 23:50 ` [PATCHv4 1/2] mm/page_poison.c: Enable PAGE_POISONING as a separate option Laura Abbott
2016-03-04 23:50   ` [kernel-hardening] " Laura Abbott
2016-03-04 23:50   ` Laura Abbott
2016-03-05  0:17   ` kbuild test robot
2016-03-05  0:17     ` [kernel-hardening] " kbuild test robot
2016-03-05  0:17     ` kbuild test robot
2016-03-04 23:50 ` [PATCHv4 2/2] mm/page_poisoning.c: Allow for zero poisoning Laura Abbott
2016-03-04 23:50   ` [kernel-hardening] " Laura Abbott
2016-03-04 23:50   ` Laura Abbott
2016-03-05  0:07   ` Andrew Morton
2016-03-05  0:07     ` [kernel-hardening] " Andrew Morton
2016-03-05  0:07     ` Andrew Morton
2016-03-08  0:33     ` Laura Abbott
2016-03-08  0:33       ` [kernel-hardening] " Laura Abbott
2016-03-08  0:33       ` Laura Abbott
2016-03-09 21:00 ` [PATCHv4 0/2] Sanitization of buddy pages Kees Cook
2016-03-09 21:00   ` [kernel-hardening] " Kees Cook
2016-03-09 21:00   ` Kees Cook
