* [RFC][PATCH 0/3] Sanitization of buddy pages
@ 2016-01-25 16:55 ` Laura Abbott
  0 siblings, 0 replies; 40+ messages in thread
From: Laura Abbott @ 2016-01-25 16:55 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook

Hi,

This is an implementation of page poisoning/sanitization for all arches. It
takes advantage of the existing implementation for
!ARCH_SUPPORTS_DEBUG_PAGEALLOC arches. This is a different approach from the
one the grsecurity patches take, but it should provide equivalent functionality.

For those who aren't familiar with this, the goal of sanitization is to reduce
the severity of use-after-free and uninitialized-data bugs. Memory is cleared
on free so any sensitive data is no longer available. Sanitization was
brought up in a thread about CVEs
(lkml.kernel.org/g/<20160119112812.GA10818@mwanda>).
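As a rough userspace sketch of the idea (not kernel code; `poison_buf` and
`poison_intact` are illustrative names, and the buffer stands in for a page on
the buddy free list — 0xaa is the kernel's PAGE_POISON pattern):

```c
#include <stddef.h>
#include <string.h>

#define PAGE_POISON 0xaa	/* pattern written over freed pages */
#define PAGE_SZ     4096

/* On free: overwrite the page so stale pointers read the poison
 * pattern instead of whatever sensitive data was there before. */
static void poison_buf(unsigned char *buf)
{
	memset(buf, PAGE_POISON, PAGE_SZ);
}

/* On alloc: any byte that is no longer the poison value means the
 * page was written through a stale pointer while it sat free. */
static int poison_intact(const unsigned char *buf)
{
	for (size_t i = 0; i < PAGE_SZ; i++)
		if (buf[i] != PAGE_POISON)
			return 0;
	return 1;
}
```

This is only the concept; the kernel additionally tracks a per-page poison bit
in page_ext and rate-limits the corruption reports.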

I expect the Kconfig names will eventually want to be changed and/or moved if
this is going to be used for security, but that can happen later.

Credit to Mathias Krause for the version in grsecurity.

Laura Abbott (3):
  mm/debug-pagealloc.c: Split out page poisoning from debug page_alloc
  mm/page_poison.c: Enable PAGE_POISONING as a separate option
  mm/page_poisoning.c: Allow for zero poisoning

 Documentation/kernel-parameters.txt |   5 ++
 include/linux/mm.h                  |  13 +++
 include/linux/poison.h              |   4 +
 mm/Kconfig.debug                    |  35 +++++++-
 mm/Makefile                         |   5 +-
 mm/debug-pagealloc.c                | 127 +----------------------------
 mm/page_alloc.c                     |  10 ++-
 mm/page_poison.c                    | 158 ++++++++++++++++++++++++++++++++++++
 8 files changed, 228 insertions(+), 129 deletions(-)
 create mode 100644 mm/page_poison.c

-- 
2.5.0

* [RFC][PATCH 1/3] mm/debug-pagealloc.c: Split out page poisoning from debug page_alloc
  2016-01-25 16:55 ` Laura Abbott
@ 2016-01-25 16:55   ` Laura Abbott
  -1 siblings, 0 replies; 40+ messages in thread
From: Laura Abbott @ 2016-01-25 16:55 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook


For architectures without DEBUG_PAGEALLOC support
(!ARCH_SUPPORTS_DEBUG_PAGEALLOC), page poisoning is used instead.
Even architectures that do have DEBUG_PAGEALLOC may want to take advantage of
the poisoning feature, so split page poisoning out into its own file. This
does not change the default behavior for !ARCH_SUPPORTS_DEBUG_PAGEALLOC.

Credit to Mathias Krause and grsecurity for the original work.
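One detail worth noting in the moved code: when the poison check fails,
check_poison_mem() distinguishes a likely hardware single-bit flip from broader
memory corruption using a power-of-two test on the XOR of the observed and
expected bytes. A standalone sketch of that helper:

```c
/* XOR isolates the bits that differ between the observed byte and
 * the poison pattern; a nonzero result that is a power of two
 * (no bit in common with result - 1) means exactly one bit flipped. */
static int single_bit_flip(unsigned char a, unsigned char b)
{
	unsigned char error = a ^ b;

	return error && !(error & (error - 1));
}
```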

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
 Documentation/kernel-parameters.txt |   5 ++
 include/linux/mm.h                  |  10 +++
 mm/Makefile                         |   5 +-
 mm/debug-pagealloc.c                | 121 +-----------------------------
 mm/page_poison.c                    | 144 ++++++++++++++++++++++++++++++++++++
 5 files changed, 164 insertions(+), 121 deletions(-)
 create mode 100644 mm/page_poison.c

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index cfb2c0f..343a4f1 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2681,6 +2681,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			we can turn it on.
 			on: enable the feature
 
+	page_poison=	[KNL] Boot-time parameter changing the state of
+			poisoning on the buddy allocator.
+			off: turn off poisoning
+			on: turn on poisoning
+
 	panic=		[KNL] Kernel behaviour on panic: delay <timeout>
 			timeout > 0: seconds before rebooting
 			timeout = 0: wait forever
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f1cd22f..25551c1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2174,6 +2174,16 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 			       unsigned long size, pte_fn_t fn, void *data);
 
 
+#ifdef CONFIG_PAGE_POISONING
+extern void poison_pages(struct page *page, int n);
+extern void unpoison_pages(struct page *page, int n);
+extern bool page_poisoning_enabled(void);
+#else
+static inline void poison_pages(struct page *page, int n) { }
+static inline void unpoison_pages(struct page *page, int n) { }
+static inline bool page_poisoning_enabled(void) { return false; }
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 extern bool _debug_pagealloc_enabled;
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
diff --git a/mm/Makefile b/mm/Makefile
index 2ed4319..f256978 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -48,7 +48,10 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
 obj-$(CONFIG_KSM) += ksm.o
-obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
+ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
+	obj-$(CONFIG_DEBUG_PAGEALLOC) += debug-pagealloc.o
+endif
+obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
diff --git a/mm/debug-pagealloc.c b/mm/debug-pagealloc.c
index 5bf5906..3cc4c1d 100644
--- a/mm/debug-pagealloc.c
+++ b/mm/debug-pagealloc.c
@@ -6,128 +6,9 @@
 #include <linux/poison.h>
 #include <linux/ratelimit.h>
 
-static bool page_poisoning_enabled __read_mostly;
-
-static bool need_page_poisoning(void)
-{
-	if (!debug_pagealloc_enabled())
-		return false;
-
-	return true;
-}
-
-static void init_page_poisoning(void)
-{
-	if (!debug_pagealloc_enabled())
-		return;
-
-	page_poisoning_enabled = true;
-}
-
-struct page_ext_operations page_poisoning_ops = {
-	.need = need_page_poisoning,
-	.init = init_page_poisoning,
-};
-
-static inline void set_page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	__set_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static inline void clear_page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static inline bool page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static void poison_page(struct page *page)
-{
-	void *addr = kmap_atomic(page);
-
-	set_page_poison(page);
-	memset(addr, PAGE_POISON, PAGE_SIZE);
-	kunmap_atomic(addr);
-}
-
-static void poison_pages(struct page *page, int n)
-{
-	int i;
-
-	for (i = 0; i < n; i++)
-		poison_page(page + i);
-}
-
-static bool single_bit_flip(unsigned char a, unsigned char b)
-{
-	unsigned char error = a ^ b;
-
-	return error && !(error & (error - 1));
-}
-
-static void check_poison_mem(unsigned char *mem, size_t bytes)
-{
-	static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 10);
-	unsigned char *start;
-	unsigned char *end;
-
-	start = memchr_inv(mem, PAGE_POISON, bytes);
-	if (!start)
-		return;
-
-	for (end = mem + bytes - 1; end > start; end--) {
-		if (*end != PAGE_POISON)
-			break;
-	}
-
-	if (!__ratelimit(&ratelimit))
-		return;
-	else if (start == end && single_bit_flip(*start, PAGE_POISON))
-		printk(KERN_ERR "pagealloc: single bit error\n");
-	else
-		printk(KERN_ERR "pagealloc: memory corruption\n");
-
-	print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
-			end - start + 1, 1);
-	dump_stack();
-}
-
-static void unpoison_page(struct page *page)
-{
-	void *addr;
-
-	if (!page_poison(page))
-		return;
-
-	addr = kmap_atomic(page);
-	check_poison_mem(addr, PAGE_SIZE);
-	clear_page_poison(page);
-	kunmap_atomic(addr);
-}
-
-static void unpoison_pages(struct page *page, int n)
-{
-	int i;
-
-	for (i = 0; i < n; i++)
-		unpoison_page(page + i);
-}
-
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!page_poisoning_enabled)
+	if (!page_poisoning_enabled())
 		return;
 
 	if (enable)
diff --git a/mm/page_poison.c b/mm/page_poison.c
new file mode 100644
index 0000000..0f369a6
--- /dev/null
+++ b/mm/page_poison.c
@@ -0,0 +1,144 @@
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/page_ext.h>
+#include <linux/poison.h>
+#include <linux/ratelimit.h>
+
+static bool __page_poisoning_enabled __read_mostly;
+static bool want_page_poisoning __read_mostly =
+	!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC);
+
+static int early_page_poison_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (strcmp(buf, "on") == 0)
+		want_page_poisoning = true;
+	else if (strcmp(buf, "off") == 0)
+		want_page_poisoning = false;
+
+	return 0;
+}
+early_param("page_poison", early_page_poison_param);
+
+bool page_poisoning_enabled(void)
+{
+	return __page_poisoning_enabled;
+}
+
+static bool need_page_poisoning(void)
+{
+	return want_page_poisoning;
+}
+
+static void init_page_poisoning(void)
+{
+	if (!want_page_poisoning)
+		return;
+
+	__page_poisoning_enabled = true;
+}
+
+struct page_ext_operations page_poisoning_ops = {
+	.need = need_page_poisoning,
+	.init = init_page_poisoning,
+};
+
+static inline void set_page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	__set_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static inline void clear_page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static inline bool page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static void poison_page(struct page *page)
+{
+	void *addr = kmap_atomic(page);
+
+	set_page_poison(page);
+	memset(addr, PAGE_POISON, PAGE_SIZE);
+	kunmap_atomic(addr);
+}
+
+void poison_pages(struct page *page, int n)
+{
+	int i;
+
+	for (i = 0; i < n; i++)
+		poison_page(page + i);
+}
+
+static bool single_bit_flip(unsigned char a, unsigned char b)
+{
+	unsigned char error = a ^ b;
+
+	return error && !(error & (error - 1));
+}
+
+static void check_poison_mem(unsigned char *mem, size_t bytes)
+{
+	static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 10);
+	unsigned char *start;
+	unsigned char *end;
+
+	start = memchr_inv(mem, PAGE_POISON, bytes);
+	if (!start)
+		return;
+
+	for (end = mem + bytes - 1; end > start; end--) {
+		if (*end != PAGE_POISON)
+			break;
+	}
+
+	if (!__ratelimit(&ratelimit))
+		return;
+	else if (start == end && single_bit_flip(*start, PAGE_POISON))
+		printk(KERN_ERR "pagealloc: single bit error\n");
+	else
+		printk(KERN_ERR "pagealloc: memory corruption\n");
+
+	print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
+			end - start + 1, 1);
+	dump_stack();
+}
+
+static void unpoison_page(struct page *page)
+{
+	void *addr;
+
+	if (!page_poison(page))
+		return;
+
+	addr = kmap_atomic(page);
+	check_poison_mem(addr, PAGE_SIZE);
+	clear_page_poison(page);
+	kunmap_atomic(addr);
+}
+
+void unpoison_pages(struct page *page, int n)
+{
+	int i;
+
+	for (i = 0; i < n; i++)
+		unpoison_page(page + i);
+}
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [RFC][PATCH 1/3] mm/debug-pagealloc.c: Split out page poisoning from debug page_alloc
@ 2016-01-25 16:55   ` Laura Abbott
  0 siblings, 0 replies; 40+ messages in thread
From: Laura Abbott @ 2016-01-25 16:55 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook


For architectures that do not have debug page_alloc
(!ARCH_SUPPORTS_DEBUG_PAGEALLOC), page poisoning is used instead.
Even architectures that do have DEBUG_PAGEALLOC may want to take advantage of
the poisoning feature. Separate out page poisoning into a separate file. This
does not change the default behavior for !ARCH_SUPPORTS_DEBUG_PAGEALLOC.

Credit to Mathias Krause and grsecurity for original work

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
 Documentation/kernel-parameters.txt |   5 ++
 include/linux/mm.h                  |  10 +++
 mm/Makefile                         |   5 +-
 mm/debug-pagealloc.c                | 121 +-----------------------------
 mm/page_poison.c                    | 144 ++++++++++++++++++++++++++++++++++++
 5 files changed, 164 insertions(+), 121 deletions(-)
 create mode 100644 mm/page_poison.c

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index cfb2c0f..343a4f1 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2681,6 +2681,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			we can turn it on.
 			on: enable the feature
 
+	page_poison=	[KNL] Boot-time parameter changing the state of
+			poisoning on the buddy allocator.
+			off: turn off poisoning
+			on: turn on poisoning
+
 	panic=		[KNL] Kernel behaviour on panic: delay <timeout>
 			timeout > 0: seconds before rebooting
 			timeout = 0: wait forever
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f1cd22f..25551c1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2174,6 +2174,16 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 			       unsigned long size, pte_fn_t fn, void *data);
 
 
+#ifdef CONFIG_PAGE_POISONING
+extern void poison_pages(struct page *page, int n);
+extern void unpoison_pages(struct page *page, int n);
+extern bool page_poisoning_enabled(void);
+#else
+static inline void poison_pages(struct page *page, int n) { }
+static inline void unpoison_pages(struct page *page, int n) { }
+static inline bool page_poisoning_enabled(void) { return false; }
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 extern bool _debug_pagealloc_enabled;
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
diff --git a/mm/Makefile b/mm/Makefile
index 2ed4319..f256978 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -48,7 +48,10 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
 obj-$(CONFIG_KSM) += ksm.o
-obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
+ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
+	obj-$(CONFIG_DEBUG_PAGEALLOC) += debug-pagealloc.o
+endif
+obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
diff --git a/mm/debug-pagealloc.c b/mm/debug-pagealloc.c
index 5bf5906..3cc4c1d 100644
--- a/mm/debug-pagealloc.c
+++ b/mm/debug-pagealloc.c
@@ -6,128 +6,9 @@
 #include <linux/poison.h>
 #include <linux/ratelimit.h>
 
-static bool page_poisoning_enabled __read_mostly;
-
-static bool need_page_poisoning(void)
-{
-	if (!debug_pagealloc_enabled())
-		return false;
-
-	return true;
-}
-
-static void init_page_poisoning(void)
-{
-	if (!debug_pagealloc_enabled())
-		return;
-
-	page_poisoning_enabled = true;
-}
-
-struct page_ext_operations page_poisoning_ops = {
-	.need = need_page_poisoning,
-	.init = init_page_poisoning,
-};
-
-static inline void set_page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	__set_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static inline void clear_page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static inline bool page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static void poison_page(struct page *page)
-{
-	void *addr = kmap_atomic(page);
-
-	set_page_poison(page);
-	memset(addr, PAGE_POISON, PAGE_SIZE);
-	kunmap_atomic(addr);
-}
-
-static void poison_pages(struct page *page, int n)
-{
-	int i;
-
-	for (i = 0; i < n; i++)
-		poison_page(page + i);
-}
-
-static bool single_bit_flip(unsigned char a, unsigned char b)
-{
-	unsigned char error = a ^ b;
-
-	return error && !(error & (error - 1));
-}
-
-static void check_poison_mem(unsigned char *mem, size_t bytes)
-{
-	static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 10);
-	unsigned char *start;
-	unsigned char *end;
-
-	start = memchr_inv(mem, PAGE_POISON, bytes);
-	if (!start)
-		return;
-
-	for (end = mem + bytes - 1; end > start; end--) {
-		if (*end != PAGE_POISON)
-			break;
-	}
-
-	if (!__ratelimit(&ratelimit))
-		return;
-	else if (start == end && single_bit_flip(*start, PAGE_POISON))
-		printk(KERN_ERR "pagealloc: single bit error\n");
-	else
-		printk(KERN_ERR "pagealloc: memory corruption\n");
-
-	print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
-			end - start + 1, 1);
-	dump_stack();
-}
-
-static void unpoison_page(struct page *page)
-{
-	void *addr;
-
-	if (!page_poison(page))
-		return;
-
-	addr = kmap_atomic(page);
-	check_poison_mem(addr, PAGE_SIZE);
-	clear_page_poison(page);
-	kunmap_atomic(addr);
-}
-
-static void unpoison_pages(struct page *page, int n)
-{
-	int i;
-
-	for (i = 0; i < n; i++)
-		unpoison_page(page + i);
-}
-
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!page_poisoning_enabled)
+	if (!page_poisoning_enabled())
 		return;
 
 	if (enable)
diff --git a/mm/page_poison.c b/mm/page_poison.c
new file mode 100644
index 0000000..0f369a6
--- /dev/null
+++ b/mm/page_poison.c
@@ -0,0 +1,144 @@
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/page_ext.h>
+#include <linux/poison.h>
+#include <linux/ratelimit.h>
+
+static bool __page_poisoning_enabled __read_mostly;
+static bool want_page_poisoning __read_mostly =
+	!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC);
+
+static int early_page_poison_param(char *buf)
+{
+        if (!buf)
+                return -EINVAL;
+
+        if (strcmp(buf, "on") == 0)
+                want_page_poisoning = true;
+	else if (strcmp(buf, "off") == 0)
+		want_page_poisoning = false;
+
+        return 0;
+}
+early_param("page_poison", early_page_poison_param);
+
+bool page_poisoning_enabled(void)
+{
+	return __page_poisoning_enabled;
+}
+
+static bool need_page_poisoning(void)
+{
+	return want_page_poisoning;
+}
+
+static void init_page_poisoning(void)
+{
+	if (!want_page_poisoning)
+		return;
+
+	__page_poisoning_enabled = true;
+}
+
+struct page_ext_operations page_poisoning_ops = {
+	.need = need_page_poisoning,
+	.init = init_page_poisoning,
+};
+
+static inline void set_page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	__set_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static inline void clear_page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static inline bool page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static void poison_page(struct page *page)
+{
+	void *addr = kmap_atomic(page);
+
+	set_page_poison(page);
+	memset(addr, PAGE_POISON, PAGE_SIZE);
+	kunmap_atomic(addr);
+}
+
+void poison_pages(struct page *page, int n)
+{
+	int i;
+
+	for (i = 0; i < n; i++)
+		poison_page(page + i);
+}
+
+static bool single_bit_flip(unsigned char a, unsigned char b)
+{
+	unsigned char error = a ^ b;
+
+	return error && !(error & (error - 1));
+}
+
+static void check_poison_mem(unsigned char *mem, size_t bytes)
+{
+	static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 10);
+	unsigned char *start;
+	unsigned char *end;
+
+	start = memchr_inv(mem, PAGE_POISON, bytes);
+	if (!start)
+		return;
+
+	for (end = mem + bytes - 1; end > start; end--) {
+		if (*end != PAGE_POISON)
+			break;
+	}
+
+	if (!__ratelimit(&ratelimit))
+		return;
+	else if (start == end && single_bit_flip(*start, PAGE_POISON))
+		printk(KERN_ERR "pagealloc: single bit error\n");
+	else
+		printk(KERN_ERR "pagealloc: memory corruption\n");
+
+	print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
+			end - start + 1, 1);
+	dump_stack();
+}
+
+static void unpoison_page(struct page *page)
+{
+	void *addr;
+
+	if (!page_poison(page))
+		return;
+
+	addr = kmap_atomic(page);
+	check_poison_mem(addr, PAGE_SIZE);
+	clear_page_poison(page);
+	kunmap_atomic(addr);
+}
+
+void unpoison_pages(struct page *page, int n)
+{
+	int i;
+
+	for (i = 0; i < n; i++)
+		unpoison_page(page + i);
+}
-- 
2.5.0

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [kernel-hardening] [RFC][PATCH 1/3] mm/debug-pagealloc.c: Split out page poisoning from debug page_alloc
@ 2016-01-25 16:55   ` Laura Abbott
  0 siblings, 0 replies; 40+ messages in thread
From: Laura Abbott @ 2016-01-25 16:55 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook


For architectures that do not have debug page_alloc
(!ARCH_SUPPORTS_DEBUG_PAGEALLOC), page poisoning is used instead.
Even architectures that do have DEBUG_PAGEALLOC may want to take advantage of
the poisoning feature. Separate out page poisoning into a separate file. This
does not change the default behavior for !ARCH_SUPPORTS_DEBUG_PAGEALLOC.

Credit to Mathias Krause and grsecurity for original work

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>
---
 Documentation/kernel-parameters.txt |   5 ++
 include/linux/mm.h                  |  10 +++
 mm/Makefile                         |   5 +-
 mm/debug-pagealloc.c                | 121 +-----------------------------
 mm/page_poison.c                    | 144 ++++++++++++++++++++++++++++++++++++
 5 files changed, 164 insertions(+), 121 deletions(-)
 create mode 100644 mm/page_poison.c

diff --git a/Documentation/kernel-parameters.txt b/Documentation/kernel-parameters.txt
index cfb2c0f..343a4f1 100644
--- a/Documentation/kernel-parameters.txt
+++ b/Documentation/kernel-parameters.txt
@@ -2681,6 +2681,11 @@ bytes respectively. Such letter suffixes can also be entirely omitted.
 			we can turn it on.
 			on: enable the feature
 
+	page_poison=	[KNL] Boot-time parameter changing the state of
+			poisoning on the buddy allocator.
+			off: turn off poisoning
+			on: turn on poisoning
+
 	panic=		[KNL] Kernel behaviour on panic: delay <timeout>
 			timeout > 0: seconds before rebooting
 			timeout = 0: wait forever
diff --git a/include/linux/mm.h b/include/linux/mm.h
index f1cd22f..25551c1 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2174,6 +2174,16 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 			       unsigned long size, pte_fn_t fn, void *data);
 
 
+#ifdef CONFIG_PAGE_POISONING
+extern void poison_pages(struct page *page, int n);
+extern void unpoison_pages(struct page *page, int n);
+extern bool page_poisoning_enabled(void);
+#else
+static inline void poison_pages(struct page *page, int n) { }
+static inline void unpoison_pages(struct page *page, int n) { }
+static inline bool page_poisoning_enabled(void) { return false; }
+#endif
+
 #ifdef CONFIG_DEBUG_PAGEALLOC
 extern bool _debug_pagealloc_enabled;
 extern void __kernel_map_pages(struct page *page, int numpages, int enable);
diff --git a/mm/Makefile b/mm/Makefile
index 2ed4319..f256978 100644
--- a/mm/Makefile
+++ b/mm/Makefile
@@ -48,7 +48,10 @@ obj-$(CONFIG_SPARSEMEM_VMEMMAP) += sparse-vmemmap.o
 obj-$(CONFIG_SLOB) += slob.o
 obj-$(CONFIG_MMU_NOTIFIER) += mmu_notifier.o
 obj-$(CONFIG_KSM) += ksm.o
-obj-$(CONFIG_PAGE_POISONING) += debug-pagealloc.o
+ifndef CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC
+	obj-$(CONFIG_DEBUG_PAGEALLOC) += debug-pagealloc.o
+endif
+obj-$(CONFIG_PAGE_POISONING) += page_poison.o
 obj-$(CONFIG_SLAB) += slab.o
 obj-$(CONFIG_SLUB) += slub.o
 obj-$(CONFIG_KMEMCHECK) += kmemcheck.o
diff --git a/mm/debug-pagealloc.c b/mm/debug-pagealloc.c
index 5bf5906..3cc4c1d 100644
--- a/mm/debug-pagealloc.c
+++ b/mm/debug-pagealloc.c
@@ -6,128 +6,9 @@
 #include <linux/poison.h>
 #include <linux/ratelimit.h>
 
-static bool page_poisoning_enabled __read_mostly;
-
-static bool need_page_poisoning(void)
-{
-	if (!debug_pagealloc_enabled())
-		return false;
-
-	return true;
-}
-
-static void init_page_poisoning(void)
-{
-	if (!debug_pagealloc_enabled())
-		return;
-
-	page_poisoning_enabled = true;
-}
-
-struct page_ext_operations page_poisoning_ops = {
-	.need = need_page_poisoning,
-	.init = init_page_poisoning,
-};
-
-static inline void set_page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	__set_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static inline void clear_page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static inline bool page_poison(struct page *page)
-{
-	struct page_ext *page_ext;
-
-	page_ext = lookup_page_ext(page);
-	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
-}
-
-static void poison_page(struct page *page)
-{
-	void *addr = kmap_atomic(page);
-
-	set_page_poison(page);
-	memset(addr, PAGE_POISON, PAGE_SIZE);
-	kunmap_atomic(addr);
-}
-
-static void poison_pages(struct page *page, int n)
-{
-	int i;
-
-	for (i = 0; i < n; i++)
-		poison_page(page + i);
-}
-
-static bool single_bit_flip(unsigned char a, unsigned char b)
-{
-	unsigned char error = a ^ b;
-
-	return error && !(error & (error - 1));
-}
-
-static void check_poison_mem(unsigned char *mem, size_t bytes)
-{
-	static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 10);
-	unsigned char *start;
-	unsigned char *end;
-
-	start = memchr_inv(mem, PAGE_POISON, bytes);
-	if (!start)
-		return;
-
-	for (end = mem + bytes - 1; end > start; end--) {
-		if (*end != PAGE_POISON)
-			break;
-	}
-
-	if (!__ratelimit(&ratelimit))
-		return;
-	else if (start == end && single_bit_flip(*start, PAGE_POISON))
-		printk(KERN_ERR "pagealloc: single bit error\n");
-	else
-		printk(KERN_ERR "pagealloc: memory corruption\n");
-
-	print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
-			end - start + 1, 1);
-	dump_stack();
-}
-
-static void unpoison_page(struct page *page)
-{
-	void *addr;
-
-	if (!page_poison(page))
-		return;
-
-	addr = kmap_atomic(page);
-	check_poison_mem(addr, PAGE_SIZE);
-	clear_page_poison(page);
-	kunmap_atomic(addr);
-}
-
-static void unpoison_pages(struct page *page, int n)
-{
-	int i;
-
-	for (i = 0; i < n; i++)
-		unpoison_page(page + i);
-}
-
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!page_poisoning_enabled)
+	if (!page_poisoning_enabled())
 		return;
 
 	if (enable)
diff --git a/mm/page_poison.c b/mm/page_poison.c
new file mode 100644
index 0000000..0f369a6
--- /dev/null
+++ b/mm/page_poison.c
@@ -0,0 +1,144 @@
+#include <linux/kernel.h>
+#include <linux/string.h>
+#include <linux/mm.h>
+#include <linux/highmem.h>
+#include <linux/page_ext.h>
+#include <linux/poison.h>
+#include <linux/ratelimit.h>
+
+static bool __page_poisoning_enabled __read_mostly;
+static bool want_page_poisoning __read_mostly =
+	!IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC);
+
+static int early_page_poison_param(char *buf)
+{
+	if (!buf)
+		return -EINVAL;
+
+	if (strcmp(buf, "on") == 0)
+		want_page_poisoning = true;
+	else if (strcmp(buf, "off") == 0)
+		want_page_poisoning = false;
+
+	return 0;
+}
+early_param("page_poison", early_page_poison_param);
+
+bool page_poisoning_enabled(void)
+{
+	return __page_poisoning_enabled;
+}
+
+static bool need_page_poisoning(void)
+{
+	return want_page_poisoning;
+}
+
+static void init_page_poisoning(void)
+{
+	if (!want_page_poisoning)
+		return;
+
+	__page_poisoning_enabled = true;
+}
+
+struct page_ext_operations page_poisoning_ops = {
+	.need = need_page_poisoning,
+	.init = init_page_poisoning,
+};
+
+static inline void set_page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	__set_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static inline void clear_page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	__clear_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static inline bool page_poison(struct page *page)
+{
+	struct page_ext *page_ext;
+
+	page_ext = lookup_page_ext(page);
+	return test_bit(PAGE_EXT_DEBUG_POISON, &page_ext->flags);
+}
+
+static void poison_page(struct page *page)
+{
+	void *addr = kmap_atomic(page);
+
+	set_page_poison(page);
+	memset(addr, PAGE_POISON, PAGE_SIZE);
+	kunmap_atomic(addr);
+}
+
+void poison_pages(struct page *page, int n)
+{
+	int i;
+
+	for (i = 0; i < n; i++)
+		poison_page(page + i);
+}
+
+static bool single_bit_flip(unsigned char a, unsigned char b)
+{
+	unsigned char error = a ^ b;
+
+	return error && !(error & (error - 1));
+}
+
+static void check_poison_mem(unsigned char *mem, size_t bytes)
+{
+	static DEFINE_RATELIMIT_STATE(ratelimit, 5 * HZ, 10);
+	unsigned char *start;
+	unsigned char *end;
+
+	start = memchr_inv(mem, PAGE_POISON, bytes);
+	if (!start)
+		return;
+
+	for (end = mem + bytes - 1; end > start; end--) {
+		if (*end != PAGE_POISON)
+			break;
+	}
+
+	if (!__ratelimit(&ratelimit))
+		return;
+	else if (start == end && single_bit_flip(*start, PAGE_POISON))
+		printk(KERN_ERR "pagealloc: single bit error\n");
+	else
+		printk(KERN_ERR "pagealloc: memory corruption\n");
+
+	print_hex_dump(KERN_ERR, "", DUMP_PREFIX_ADDRESS, 16, 1, start,
+			end - start + 1, 1);
+	dump_stack();
+}
+
+static void unpoison_page(struct page *page)
+{
+	void *addr;
+
+	if (!page_poison(page))
+		return;
+
+	addr = kmap_atomic(page);
+	check_poison_mem(addr, PAGE_SIZE);
+	clear_page_poison(page);
+	kunmap_atomic(addr);
+}
+
+void unpoison_pages(struct page *page, int n)
+{
+	int i;
+
+	for (i = 0; i < n; i++)
+		unpoison_page(page + i);
+}
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [RFC][PATCH 2/3] mm/page_poison.c: Enable PAGE_POISONING as a separate option
  2016-01-25 16:55 ` Laura Abbott
  (?)
@ 2016-01-25 16:55   ` Laura Abbott
  -1 siblings, 0 replies; 40+ messages in thread
From: Laura Abbott @ 2016-01-25 16:55 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook


Page poisoning is currently set up as a feature only for architectures
that lack architecture debug page_alloc support for unmapping pages. It
has uses beyond that, though: clearing pages on free improves security
by limiting the risk of information leaks. Allow page poisoning to be
enabled as a separate option, independent of any other debug feature.
Because of how hibernation is implemented, the checks on alloc cannot
occur if hibernation is enabled; the option to skip them can also be
selected explicitly on !HIBERNATION configurations.

Credit to Mathias Krause and grsecurity for original work

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>

---
 include/linux/mm.h   |  3 +++
 mm/Kconfig.debug     | 22 +++++++++++++++++++++-
 mm/debug-pagealloc.c |  8 +-------
 mm/page_alloc.c      |  2 ++
 mm/page_poison.c     | 14 ++++++++++++++
 5 files changed, 41 insertions(+), 8 deletions(-)

diff --git a/include/linux/mm.h b/include/linux/mm.h
index 25551c1..d14bca4 100644
--- a/include/linux/mm.h
+++ b/include/linux/mm.h
@@ -2178,10 +2178,13 @@ extern int apply_to_page_range(struct mm_struct *mm, unsigned long address,
 extern void poison_pages(struct page *page, int n);
 extern void unpoison_pages(struct page *page, int n);
 extern bool page_poisoning_enabled(void);
+extern void kernel_poison_pages(struct page *page, int numpages, int enable);
 #else
 static inline void poison_pages(struct page *page, int n) { }
 static inline void unpoison_pages(struct page *page, int n) { }
 static inline bool page_poisoning_enabled(void) { return false; }
+static inline void kernel_poison_pages(struct page *page, int numpages,
+					int enable) { }
 #endif
 
 #ifdef CONFIG_DEBUG_PAGEALLOC
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index 957d3da..c300f5f 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -27,4 +27,24 @@ config DEBUG_PAGEALLOC
 	  a resume because free pages are not saved to the suspend image.
 
 config PAGE_POISONING
-	bool
+	bool "Poison pages after freeing"
+	select PAGE_EXTENSION
+	select PAGE_POISONING_NO_SANITY if HIBERNATION
+	---help---
+	  Fill the pages with poison patterns after free_pages() and verify
+	  the patterns before alloc_pages. The filling of the memory helps
+	  reduce the risk of information leaks from freed data. This does
+	  have a potential performance impact.
+
+	  If unsure, say N
+
+config PAGE_POISONING_NO_SANITY
+	depends on PAGE_POISONING
+	bool "Only poison, don't sanity check"
+	---help---
+	   Skip the sanity checking on alloc, only fill the pages with
+	   poison on free. This reduces some of the overhead of the
+	   poisoning feature.
+
+	   If you are only interested in sanitization, say Y. Otherwise
+	   say N.
diff --git a/mm/debug-pagealloc.c b/mm/debug-pagealloc.c
index 3cc4c1d..0928d13 100644
--- a/mm/debug-pagealloc.c
+++ b/mm/debug-pagealloc.c
@@ -8,11 +8,5 @@
 
 void __kernel_map_pages(struct page *page, int numpages, int enable)
 {
-	if (!page_poisoning_enabled())
-		return;
-
-	if (enable)
-		unpoison_pages(page, numpages);
-	else
-		poison_pages(page, numpages);
+	kernel_poison_pages(page, numpages, enable);
 }
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index 63358d9..c733421 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1002,6 +1002,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
 					   PAGE_SIZE << order);
 	}
 	arch_free_page(page, order);
+	kernel_poison_pages(page, 1 << order, 0);
 	kernel_map_pages(page, 1 << order, 0);
 
 	return true;
@@ -1396,6 +1397,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 	set_page_refcounted(page);
 
 	arch_alloc_page(page, order);
+	kernel_poison_pages(page, 1 << order, 1);
 	kernel_map_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 
diff --git a/mm/page_poison.c b/mm/page_poison.c
index 0f369a6..f6ae58b 100644
--- a/mm/page_poison.c
+++ b/mm/page_poison.c
@@ -101,6 +101,9 @@ static void check_poison_mem(unsigned char *mem, size_t bytes)
 	unsigned char *start;
 	unsigned char *end;
 
+	if (IS_ENABLED(CONFIG_PAGE_POISONING_NO_SANITY))
+		return;
+
 	start = memchr_inv(mem, PAGE_POISON, bytes);
 	if (!start)
 		return;
@@ -142,3 +145,14 @@ void unpoison_pages(struct page *page, int n)
 	for (i = 0; i < n; i++)
 		unpoison_page(page + i);
 }
+
+void kernel_poison_pages(struct page *page, int numpages, int enable)
+{
+	if (!page_poisoning_enabled())
+		return;
+
+	if (enable)
+		unpoison_pages(page, numpages);
+	else
+		poison_pages(page, numpages);
+}
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* [RFC][PATCH 3/3] mm/page_poisoning.c: Allow for zero poisoning
  2016-01-25 16:55 ` Laura Abbott
  (?)
@ 2016-01-25 16:55   ` Laura Abbott
  -1 siblings, 0 replies; 40+ messages in thread
From: Laura Abbott @ 2016-01-25 16:55 UTC (permalink / raw)
  To: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko
  Cc: Laura Abbott, linux-mm, linux-kernel, kernel-hardening, Kees Cook


By default, page poisoning uses a poison value (0xaa) on free. If this
is changed to 0, the page is not only sanitized but zeroing on alloc
with __GFP_ZERO can be skipped as well. The tradeoff is that
corruption from the poisoning is harder to detect. This feature also
cannot be used with hibernation since pages are not guaranteed to be
zeroed after hibernation.

Credit to Mathias Krause and grsecurity for original work

Signed-off-by: Laura Abbott <labbott@fedoraproject.org>

---
 include/linux/poison.h |  4 ++++
 mm/Kconfig.debug       | 13 +++++++++++++
 mm/page_alloc.c        |  8 +++++++-
 3 files changed, 24 insertions(+), 1 deletion(-)

diff --git a/include/linux/poison.h b/include/linux/poison.h
index 4a27153..51334ed 100644
--- a/include/linux/poison.h
+++ b/include/linux/poison.h
@@ -30,7 +30,11 @@
 #define TIMER_ENTRY_STATIC	((void *) 0x300 + POISON_POINTER_DELTA)
 
 /********** mm/debug-pagealloc.c **********/
+#ifdef CONFIG_PAGE_POISONING_ZERO
+#define PAGE_POISON 0x00
+#else
 #define PAGE_POISON 0xaa
+#endif
 
 /********** mm/page_alloc.c ************/
 
diff --git a/mm/Kconfig.debug b/mm/Kconfig.debug
index c300f5f..8ec7dc6 100644
--- a/mm/Kconfig.debug
+++ b/mm/Kconfig.debug
@@ -48,3 +48,16 @@ config PAGE_POISONING_NO_SANITY
 
 	   If you are only interested in sanitization, say Y. Otherwise
 	   say N.
+
+config PAGE_POISONING_ZERO
+	bool "Use zero for poisoning instead of random data"
+	depends on !HIBERNATION
+	depends on PAGE_POISONING
+	---help---
+	   Instead of using the existing poison value, fill the pages with
+	   zeros. This makes it harder to detect when errors are occurring
+	   due to sanitization, but the zeroing at free means that it is
+	   no longer necessary to write zeros when __GFP_ZERO is used on
+	   allocation.
+
+	   If unsure, say N
diff --git a/mm/page_alloc.c b/mm/page_alloc.c
index c733421..7395eee 100644
--- a/mm/page_alloc.c
+++ b/mm/page_alloc.c
@@ -1382,6 +1382,12 @@ static inline int check_new_page(struct page *page)
 	return 0;
 }
 
+static inline bool should_zero(void)
+{
+	return !IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) ||
+		!page_poisoning_enabled();
+}
+
 static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 								int alloc_flags)
 {
@@ -1401,7 +1407,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
 	kernel_map_pages(page, 1 << order, 1);
 	kasan_alloc_pages(page, order);
 
-	if (gfp_flags & __GFP_ZERO)
+	if (should_zero() && gfp_flags & __GFP_ZERO)
 		for (i = 0; i < (1 << order); i++)
 			clear_highpage(page + i);
 
-- 
2.5.0

^ permalink raw reply related	[flat|nested] 40+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 3/3] mm/page_poisoning.c: Allow for zero poisoning
  2016-01-25 16:55   ` Laura Abbott
@ 2016-01-25 20:16     ` Dave Hansen
  -1 siblings, 0 replies; 40+ messages in thread
From: Dave Hansen @ 2016-01-25 20:16 UTC (permalink / raw)
  To: kernel-hardening, Andrew Morton, Kirill A. Shutemov,
	Vlastimil Babka, Michal Hocko
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook

Thanks for doing this!  It all looks pretty straightforward.

On 01/25/2016 08:55 AM, Laura Abbott wrote:
> By default, page poisoning uses a poison value (0xaa) on free. If this
> is changed to 0, the page is not only sanitized but zeroing on alloc
> with __GFP_ZERO can be skipped as well. The tradeoff is that detecting
> corruption from the poisoning is harder to detect. This feature also
> cannot be used with hibernation since pages are not guaranteed to be
> zeroed after hibernation.

Ugh, that's a good point about hibernation.  I'm not sure how widely it
gets used but it does look pretty widely enabled in distribution kernels.

Is this something that's fixable?  It seems like we could have the
hibernation code run through and zero all the free lists.  Or, we could
just disable the optimization at runtime when a hibernation is done.

Not that we _have_ to do any of this now, but a runtime knob (like a
sysctl) could be fun too.  It would be nice for folks to turn it on and
off if they wanted the added security of "real" poisoning vs. the
potential performance boost from this optimization.

> +static inline bool should_zero(void)
> +{
> +	return !IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) ||
> +		!page_poisoning_enabled();
> +}

I wonder if calling this "free_pages_prezeroed()" would make things a
bit more clear when we use it in prep_new_page().

>  static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>  								int alloc_flags)
>  {
> @@ -1401,7 +1407,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>  	kernel_map_pages(page, 1 << order, 1);
>  	kasan_alloc_pages(page, order);
>  
> -	if (gfp_flags & __GFP_ZERO)
> +	if (should_zero() && gfp_flags & __GFP_ZERO)
>  		for (i = 0; i < (1 << order); i++)
>  			clear_highpage(page + i);

It's probably also worth pointing out that this can be a really nice
feature to have in virtual machines where memory is being deduplicated.
 As it stands now, the free lists end up with gunk in them and tend not
to be easy to deduplicate.  This patch would fix that.

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 3/3] mm/page_poisoning.c: Allow for zero poisoning
@ 2016-01-25 20:16     ` Dave Hansen
  0 siblings, 0 replies; 40+ messages in thread
From: Dave Hansen @ 2016-01-25 20:16 UTC (permalink / raw)
  To: kernel-hardening, Andrew Morton, Kirill A. Shutemov,
	Vlastimil Babka, Michal Hocko
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook

Thanks for doing this!  It all looks pretty straightforward.

On 01/25/2016 08:55 AM, Laura Abbott wrote:
> By default, page poisoning uses a poison value (0xaa) on free. If this
> is changed to 0, the page is not only sanitized but zeroing on alloc
> with __GFP_ZERO can be skipped as well. The tradeoff is that detecting
> corruption from the poisoning is harder to detect. This feature also
> cannot be used with hibernation since pages are not guaranteed to be
> zeroed after hibernation.

Ugh, that's a good point about hibernation.  I'm not sure how widely it
gets used but it does look pretty widely enabled in distribution kernels.

Is this something that's fixable?  It seems like we could have the
hibernation code run through and zero all the free lists.  Or, we could
just disable the optimization at runtime when a hibernation is done.

Not that we _have_ to do any of this now, but if a runtime knob (like a
sysctl) could be fun too.  I would be nice for folks to turn it on and
off if they wanted the added security of "real" poisoning vs. the
potential performance boost from this optimization.

> +static inline bool should_zero(void)
> +{
> +	return !IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) ||
> +		!page_poisoning_enabled();
> +}

I wonder if calling this "free_pages_prezeroed()" would make things a
bit more clear when we use it in prep_new_page().

>  static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>  								int alloc_flags)
>  {
> @@ -1401,7 +1407,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>  	kernel_map_pages(page, 1 << order, 1);
>  	kasan_alloc_pages(page, order);
>  
> -	if (gfp_flags & __GFP_ZERO)
> +	if (should_zero() && gfp_flags & __GFP_ZERO)
>  		for (i = 0; i < (1 << order); i++)
>  			clear_highpage(page + i);

It's probably also worth pointing out that this can be a really nice
feature to have in virtual machines where memory is being deduplicated.
 As it stands now, the free lists end up with gunk in them and tend not
to be easy to deduplicate.  This patch would fix that.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: email@kvack.org

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 3/3] mm/page_poisoning.c: Allow for zero poisoning
  2016-01-25 20:16     ` Dave Hansen
  (?)
@ 2016-01-25 22:05       ` Kees Cook
  -1 siblings, 0 replies; 40+ messages in thread
From: Kees Cook @ 2016-01-25 22:05 UTC (permalink / raw)
  To: Dave Hansen
  Cc: kernel-hardening, Andrew Morton, Kirill A. Shutemov,
	Vlastimil Babka, Michal Hocko, Laura Abbott, Linux-MM, LKML

On Mon, Jan 25, 2016 at 12:16 PM, Dave Hansen <dave.hansen@intel.com> wrote:
> Thanks for doing this!  It all looks pretty straightforward.
>
> On 01/25/2016 08:55 AM, Laura Abbott wrote:
>> By default, page poisoning uses a poison value (0xaa) on free. If this
>> is changed to 0, the page is not only sanitized but zeroing on alloc
>> with __GFP_ZERO can be skipped as well. The tradeoff is that corruption
>> from the poisoning is harder to detect. This feature also cannot be
>> used with hibernation since pages are not guaranteed to be zeroed
>> after hibernation.
>
> Ugh, that's a good point about hibernation.  I'm not sure how widely it
> gets used but it does look pretty widely enabled in distribution kernels.
>
> Is this something that's fixable?  It seems like we could have the
> hibernation code run through and zero all the free lists.  Or, we could
> just disable the optimization at runtime when a hibernation is done.

We can also make hibernation run-time disabled when poisoning is used
(similar to how kASLR disables it).
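The runtime gate could be as simple as the sketch below. This is only a
userspace model of the suggestion; the variable and function names here
(zero_poisoning_active, nohibernate, the init hook) are stand-ins, not
the actual kernel interfaces.

```c
#include <assert.h>
#include <stdbool.h>

static bool zero_poisoning_active;
static bool nohibernate;  /* stand-in for the kernel's hibernation gate */

/* When zero-poisoning comes up, take hibernation off the table at
 * runtime, the way kASLR does: free pages would not survive an image
 * write/restore cycle as zeroes. */
static void page_poison_zero_init(void)
{
	zero_poisoning_active = true;
	nohibernate = true;
}

static bool hibernation_available(void)
{
	return !nohibernate;
}
```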

> Not that we _have_ to do any of this now, but a runtime knob (like a
> sysctl) could be fun too.  It would be nice for folks to turn it on and
> off if they wanted the added security of "real" poisoning vs. the
> potential performance boost from this optimization.
>
>> +static inline bool should_zero(void)
>> +{
>> +     return !IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) ||
>> +             !page_poisoning_enabled();
>> +}
>
> I wonder if calling this "free_pages_prezeroed()" would make things a
> bit more clear when we use it in prep_new_page().
>
>>  static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>>                                                               int alloc_flags)
>>  {
>> @@ -1401,7 +1407,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>>       kernel_map_pages(page, 1 << order, 1);
>>       kasan_alloc_pages(page, order);
>>
>> -     if (gfp_flags & __GFP_ZERO)
>> +     if (should_zero() && gfp_flags & __GFP_ZERO)
>>               for (i = 0; i < (1 << order); i++)
>>                       clear_highpage(page + i);
>
> It's probably also worth pointing out that this can be a really nice
> feature to have in virtual machines where memory is being deduplicated.
>  As it stands now, the free lists end up with gunk in them and tend not
> to be easy to deduplicate.  This patch would fix that.

Oh, good point!

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 3/3] mm/page_poisoning.c: Allow for zero poisoning
  2016-01-25 22:05       ` Kees Cook
  (?)
@ 2016-01-26  1:33         ` Laura Abbott
  -1 siblings, 0 replies; 40+ messages in thread
From: Laura Abbott @ 2016-01-26  1:33 UTC (permalink / raw)
  To: Kees Cook, Dave Hansen
  Cc: kernel-hardening, Andrew Morton, Kirill A. Shutemov,
	Vlastimil Babka, Michal Hocko, Laura Abbott, Linux-MM, LKML

On 01/25/2016 02:05 PM, Kees Cook wrote:
> On Mon, Jan 25, 2016 at 12:16 PM, Dave Hansen <dave.hansen@intel.com> wrote:
>> Thanks for doing this!  It all looks pretty straightforward.
>>
>> On 01/25/2016 08:55 AM, Laura Abbott wrote:
>>> By default, page poisoning uses a poison value (0xaa) on free. If this
>>> is changed to 0, the page is not only sanitized but zeroing on alloc
>>> with __GFP_ZERO can be skipped as well. The tradeoff is that corruption
>>> from the poisoning is harder to detect. This feature also cannot be
>>> used with hibernation since pages are not guaranteed to be zeroed
>>> after hibernation.
>>
>> Ugh, that's a good point about hibernation.  I'm not sure how widely it
>> gets used but it does look pretty widely enabled in distribution kernels.
>>
>> Is this something that's fixable?  It seems like we could have the
>> hibernation code run through and zero all the free lists.  Or, we could
>> just disable the optimization at runtime when a hibernation is done.
>
> We can also make hibernation run-time disabled when poisoning is used
> (similar to how kASLR disables it).
>

I'll look into the approach kASLR uses to disable hibernation, although
having the hibernation code zero the memory could be useful as well.
We can see if there are actual complaints.
  
>> Not that we _have_ to do any of this now, but a runtime knob (like a
>> sysctl) could be fun too.  It would be nice for folks to turn it on and
>> off if they wanted the added security of "real" poisoning vs. the
>> potential performance boost from this optimization.
>>
>>> +static inline bool should_zero(void)
>>> +{
>>> +     return !IS_ENABLED(CONFIG_PAGE_POISONING_ZERO) ||
>>> +             !page_poisoning_enabled();
>>> +}
>>
>> I wonder if calling this "free_pages_prezeroed()" would make things a
>> bit more clear when we use it in prep_new_page().
>>

Yes that sounds much better

>>>   static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>>>                                                                int alloc_flags)
>>>   {
>>> @@ -1401,7 +1407,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>>>        kernel_map_pages(page, 1 << order, 1);
>>>        kasan_alloc_pages(page, order);
>>>
>>> -     if (gfp_flags & __GFP_ZERO)
>>> +     if (should_zero() && gfp_flags & __GFP_ZERO)
>>>                for (i = 0; i < (1 << order); i++)
>>>                        clear_highpage(page + i);
>>
>> It's probably also worth pointing out that this can be a really nice
>> feature to have in virtual machines where memory is being deduplicated.
>>   As it stands now, the free lists end up with gunk in them and tend not
>> to be easy to deduplicate.  This patch would fix that.

Interesting, do you have any benchmarks I could test?

>
> Oh, good point!
>
> -Kees
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [RFC][PATCH 0/3] Sanitization of buddy pages
  2016-01-25 16:55 ` Laura Abbott
  (?)
@ 2016-01-26  6:05   ` Sasha Levin
  -1 siblings, 0 replies; 40+ messages in thread
From: Sasha Levin @ 2016-01-26  6:05 UTC (permalink / raw)
  To: Laura Abbott, Andrew Morton, Kirill A. Shutemov, Vlastimil Babka,
	Michal Hocko
  Cc: linux-mm, linux-kernel, kernel-hardening, Kees Cook, Andrey Ryabinin

On 01/25/2016 11:55 AM, Laura Abbott wrote:
> Hi,
> 
> This is an implementation of page poisoning/sanitization for all arches. It
> takes advantage of the existing implementation for
> !ARCH_SUPPORTS_DEBUG_PAGEALLOC arches. This is a different approach than what
> the Grsecurity patches were taking but should provide equivalent functionality.
> 
> For those who aren't familiar with this, the goal of sanitization is to reduce
> the severity of use after free and uninitialized data bugs. Memory is cleared
> on free so any sensitive data is no longer available. Discussion of
> sanitization was brough up in a thread about CVEs
> (lkml.kernel.org/g/<20160119112812.GA10818@mwanda>)
> 
> I eventually expect Kconfig names will want to be changed and/or moved if this
> is going to be used for security but that can happen later.
> 
> Credit to Mathias Krause for the version in grsecurity
> 
> Laura Abbott (3):
>   mm/debug-pagealloc.c: Split out page poisoning from debug page_alloc
>   mm/page_poison.c: Enable PAGE_POISONING as a separate option
>   mm/page_poisoning.c: Allow for zero poisoning
> 
>  Documentation/kernel-parameters.txt |   5 ++
>  include/linux/mm.h                  |  13 +++
>  include/linux/poison.h              |   4 +
>  mm/Kconfig.debug                    |  35 +++++++-
>  mm/Makefile                         |   5 +-
>  mm/debug-pagealloc.c                | 127 +----------------------------
>  mm/page_alloc.c                     |  10 ++-
>  mm/page_poison.c                    | 158 ++++++++++++++++++++++++++++++++++++
>  8 files changed, 228 insertions(+), 129 deletions(-)
>  create mode 100644 mm/page_poison.c
> 

Should poisoning of this kind use KASAN rather than "old fashioned"
poisoning?


Thanks,
Sasha

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [RFC][PATCH 1/3] mm/debug-pagealloc.c: Split out page poisoning from debug page_alloc
  2016-01-25 16:55   ` Laura Abbott
  (?)
@ 2016-01-26  6:26     ` Jianyu Zhan
  -1 siblings, 0 replies; 40+ messages in thread
From: Jianyu Zhan @ 2016-01-26  6:26 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	linux-mm, LKML, kernel-hardening, Kees Cook

On Tue, Jan 26, 2016 at 12:55 AM, Laura Abbott
<labbott@fedoraproject.org> wrote:
> +static bool __page_poisoning_enabled __read_mostly;
> +static bool want_page_poisoning __read_mostly =
> +       !IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC);
> +


I would say this patch is nice with regard to decoupling
CONFIG_DEBUG_PAGEALLOC and CONFIG_PAGE_POISONING.

But when we enable CONFIG_DEBUG_PAGEALLOC,
CONFIG_PAGE_POISONING will be selected.

So it would be better to make page_poison.c totally
CONFIG_DEBUG_PAGEALLOC-agnostic, in case we later have
more PAGE_POISONING users (currently only DEBUG_PAGEALLOC). How about this:

+static bool want_page_poisoning __read_mostly =
+       !IS_ENABLED(CONFIG_PAGE_POISONING);

Or just let it default to 'true', since we only compile
page_poison.c when CONFIG_PAGE_POISONING is enabled.
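For what it's worth, the two alternatives evaluate very differently. The
sketch below models them in userspace (the IS_ENABLED macro here is a
simplified stand-in for the kernel's, defined only for this example):
inside a file built solely under CONFIG_PAGE_POISONING, the option is
necessarily set, so the negated form is always false, which is why the
plain 'true' default is the simpler choice.

```c
#include <assert.h>
#include <stdbool.h>

#define CONFIG_PAGE_POISONING 1   /* the file is only built when this is set */
#define IS_ENABLED(option) (option) /* simplified stand-in for the kernel macro */

/* First variant: always false inside page_poison.c, since the option is
 * enabled wherever this file compiles at all. */
static bool want_page_poisoning_v1 = !IS_ENABLED(CONFIG_PAGE_POISONING);

/* Second variant: just default to true. */
static bool want_page_poisoning_v2 = true;
```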


Thanks,
Jianyu Zhan

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [RFC][PATCH 2/3] mm/page_poison.c: Enable PAGE_POISONING as a separate option
  2016-01-25 16:55   ` Laura Abbott
  (?)
@ 2016-01-26  6:39     ` Jianyu Zhan
  -1 siblings, 0 replies; 40+ messages in thread
From: Jianyu Zhan @ 2016-01-26  6:39 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	linux-mm, LKML, kernel-hardening, Kees Cook

On Tue, Jan 26, 2016 at 12:55 AM, Laura Abbott
<labbott@fedoraproject.org> wrote:
> --- a/mm/debug-pagealloc.c
> +++ b/mm/debug-pagealloc.c
> @@ -8,11 +8,5 @@
>
>  void __kernel_map_pages(struct page *page, int numpages, int enable)
>  {
> -       if (!page_poisoning_enabled())
> -               return;
> -
> -       if (enable)
> -               unpoison_pages(page, numpages);
> -       else
> -               poison_pages(page, numpages);
> +       kernel_poison_pages(page, numpages, enable);
>  }
> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
> index 63358d9..c733421 100644
> --- a/mm/page_alloc.c
> +++ b/mm/page_alloc.c
> @@ -1002,6 +1002,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>                                            PAGE_SIZE << order);
>         }
>         arch_free_page(page, order);
> +       kernel_poison_pages(page, 1 << order, 0);
>         kernel_map_pages(page, 1 << order, 0);
>
>         return true;
> @@ -1396,6 +1397,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>         set_page_refcounted(page);
>
>         arch_alloc_page(page, order);
> +       kernel_poison_pages(page, 1 << order, 1);
>         kernel_map_pages(page, 1 << order, 1);
>         kasan_alloc_pages(page, order);
>

kernel_map_pages() falls back to the page poisoning scheme on
!ARCH_SUPPORTS_DEBUG_PAGEALLOC arches.

IIUC, calling kernel_poison_pages() before kernel_map_pages() is then
equivalent to calling kernel_poison_pages() twice?!




Thanks,
Jianyu Zhan

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 0/3] Sanitization of buddy pages
  2016-01-25 16:55 ` Laura Abbott
@ 2016-01-26  9:08   ` Mathias Krause
  -1 siblings, 0 replies; 40+ messages in thread
From: Mathias Krause @ 2016-01-26  9:08 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	Laura Abbott, linux-mm, linux-kernel, Kees Cook, PaX Team

On 25 January 2016 at 17:55, Laura Abbott <labbott@fedoraproject.org> wrote:
> Hi,
>
> This is an implementation of page poisoning/sanitization for all arches. It
> takes advantage of the existing implementation for
> !ARCH_SUPPORTS_DEBUG_PAGEALLOC arches. This is a different approach than what
> the Grsecurity patches were taking but should provide equivalent functionality.
>
> For those who aren't familiar with this, the goal of sanitization is to reduce
> the severity of use after free and uninitialized data bugs. Memory is cleared
> on free so any sensitive data is no longer available. Discussion of
> sanitization was brough up in a thread about CVEs
> (lkml.kernel.org/g/<20160119112812.GA10818@mwanda>)
>
> I eventually expect Kconfig names will want to be changed and or moved if this
> is going to be used for security but that can happen later.
>
> Credit to Mathias Krause for the version in grsecurity

Thanks for the credits but I don't deserve them. I've contributed the
slab based sanitization only. The page based one shipped in PaX and
grsecurity is from the PaX Team.

>
> Laura Abbott (3):
>   mm/debug-pagealloc.c: Split out page poisoning from debug page_alloc
>   mm/page_poison.c: Enable PAGE_POISONING as a separate option
>   mm/page_poisoning.c: Allow for zero poisoning
>
>  Documentation/kernel-parameters.txt |   5 ++
>  include/linux/mm.h                  |  13 +++
>  include/linux/poison.h              |   4 +
>  mm/Kconfig.debug                    |  35 +++++++-
>  mm/Makefile                         |   5 +-
>  mm/debug-pagealloc.c                | 127 +----------------------------
>  mm/page_alloc.c                     |  10 ++-
>  mm/page_poison.c                    | 158 ++++++++++++++++++++++++++++++++++++
>  8 files changed, 228 insertions(+), 129 deletions(-)
>  create mode 100644 mm/page_poison.c
>
> --
> 2.5.0
>

Regards,
Mathias

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [RFC][PATCH 1/3] mm/debug-pagealloc.c: Split out page poisoning from debug page_alloc
  2016-01-26  6:26     ` Jianyu Zhan
  (?)
@ 2016-01-26 20:25       ` Laura Abbott
  -1 siblings, 0 replies; 40+ messages in thread
From: Laura Abbott @ 2016-01-26 20:25 UTC (permalink / raw)
  To: Jianyu Zhan, Laura Abbott
  Cc: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	linux-mm, LKML, kernel-hardening, Kees Cook

On 01/25/2016 10:26 PM, Jianyu Zhan wrote:
> On Tue, Jan 26, 2016 at 12:55 AM, Laura Abbott
> <labbott@fedoraproject.org> wrote:
>> +static bool __page_poisoning_enabled __read_mostly;
>> +static bool want_page_poisoning __read_mostly =
>> +       !IS_ENABLED(CONFIG_ARCH_SUPPORTS_DEBUG_PAGEALLOC);
>> +
>
>
> I would say this patch is nice with regard to decoupling
> CONFIG_DEBUG_PAGEALLOC and CONFIG_PAGE_POISONING.
>
> But  since when we enable CONFIG_DEBUG_PAGEALLOC,
> CONFIG_PAGE_POISONING will be selected.
>
> So it would be better to make page_poison.c totally
> CONFIG_DEBUG_PAGEALLOC agnostic,  in case we latter have
> more PAGE_POISONING users(currently only DEBUG_PAGEALLOC ). How about like this:
>
> +static bool want_page_poisoning __read_mostly =
> +       !IS_ENABLED(CONFIG_PAGE_POISONING );
>
> Or just let it default to 'true',  since we only compile this
> page_poison.c when we enable CONFIG_PAGE_POISONING.
>

This patch was just supposed to be the refactor and keep the existing
behavior. There are no Kconfig changes here and the existing behavior
is to poison if !ARCH_SUPPORTS_DEBUG_PAGEALLOC so I think keeping
what I have is appropriate for this particular patch. This can be
updated in another series if appropriate.

Thanks,
Laura
  
>
> Thanks,
> Jianyu Zhan
>

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [RFC][PATCH 2/3] mm/page_poison.c: Enable PAGE_POISONING as a separate option
  2016-01-26  6:39     ` Jianyu Zhan
  (?)
@ 2016-01-26 20:27       ` Laura Abbott
  -1 siblings, 0 replies; 40+ messages in thread
From: Laura Abbott @ 2016-01-26 20:27 UTC (permalink / raw)
  To: Jianyu Zhan, Laura Abbott
  Cc: Andrew Morton, Kirill A. Shutemov, Vlastimil Babka, Michal Hocko,
	linux-mm, LKML, kernel-hardening, Kees Cook

On 01/25/2016 10:39 PM, Jianyu Zhan wrote:
> On Tue, Jan 26, 2016 at 12:55 AM, Laura Abbott
> <labbott@fedoraproject.org> wrote:
>> --- a/mm/debug-pagealloc.c
>> +++ b/mm/debug-pagealloc.c
>> @@ -8,11 +8,5 @@
>>
>>   void __kernel_map_pages(struct page *page, int numpages, int enable)
>>   {
>> -       if (!page_poisoning_enabled())
>> -               return;
>> -
>> -       if (enable)
>> -               unpoison_pages(page, numpages);
>> -       else
>> -               poison_pages(page, numpages);
>> +       kernel_poison_pages(page, numpages, enable);
>>   }
>> diff --git a/mm/page_alloc.c b/mm/page_alloc.c
>> index 63358d9..c733421 100644
>> --- a/mm/page_alloc.c
>> +++ b/mm/page_alloc.c
>> @@ -1002,6 +1002,7 @@ static bool free_pages_prepare(struct page *page, unsigned int order)
>>                                             PAGE_SIZE << order);
>>          }
>>          arch_free_page(page, order);
>> +       kernel_poison_pages(page, 1 << order, 0);
>>          kernel_map_pages(page, 1 << order, 0);
>>
>>          return true;
>> @@ -1396,6 +1397,7 @@ static int prep_new_page(struct page *page, unsigned int order, gfp_t gfp_flags,
>>          set_page_refcounted(page);
>>
>>          arch_alloc_page(page, order);
>> +       kernel_poison_pages(page, 1 << order, 1);
>>          kernel_map_pages(page, 1 << order, 1);
>>          kasan_alloc_pages(page, order);
>>
>
> kernel_map_pages() will fall back to page poisoning scheme for
> !ARCH_SUPPORTS_DEBUG_PAGEALLOC.
>
> IIUC,  calling kernel_poison_pages() before kernel_map_pages() will be
> equivalent to call kernel_poison_pages()
> twice?!
>
>

Yes, you are absolutely right. In the !ARCH_SUPPORTS_DEBUG_PAGEALLOC
case we shouldn't need to do anything in kernel_map_pages.

>
>
> Thanks,
> Jianyu Zhan
>

Thanks,
Laura

^ permalink raw reply	[flat|nested] 40+ messages in thread

* Re: [RFC][PATCH 0/3] Sanitization of buddy pages
  2016-01-26  6:05   ` Sasha Levin
  (?)
@ 2016-01-26 20:34     ` Laura Abbott
  -1 siblings, 0 replies; 40+ messages in thread
From: Laura Abbott @ 2016-01-26 20:34 UTC (permalink / raw)
  To: Sasha Levin, Laura Abbott, Andrew Morton, Kirill A. Shutemov,
	Vlastimil Babka, Michal Hocko
  Cc: linux-mm, linux-kernel, kernel-hardening, Kees Cook, Andrey Ryabinin

On 01/25/2016 10:05 PM, Sasha Levin wrote:
> On 01/25/2016 11:55 AM, Laura Abbott wrote:
>> Hi,
>>
>> This is an implementation of page poisoning/sanitization for all arches. It
>> takes advantage of the existing implementation for
>> !ARCH_SUPPORTS_DEBUG_PAGEALLOC arches. This is a different approach than what
>> the Grsecurity patches were taking but should provide equivalent functionality.
>>
>> For those who aren't familiar with this, the goal of sanitization is to reduce
>> the severity of use after free and uninitialized data bugs. Memory is cleared
>> on free so any sensitive data is no longer available. Discussion of
>> sanitization was brough up in a thread about CVEs
>> (lkml.kernel.org/g/<20160119112812.GA10818@mwanda>)
>>
>> I eventually expect Kconfig names will want to be changed and or moved if this
>> is going to be used for security but that can happen later.
>>
>> Credit to Mathias Krause for the version in grsecurity
>>
>> Laura Abbott (3):
>>    mm/debug-pagealloc.c: Split out page poisoning from debug page_alloc
>>    mm/page_poison.c: Enable PAGE_POISONING as a separate option
>>    mm/page_poisoning.c: Allow for zero poisoning
>>
>>   Documentation/kernel-parameters.txt |   5 ++
>>   include/linux/mm.h                  |  13 +++
>>   include/linux/poison.h              |   4 +
>>   mm/Kconfig.debug                    |  35 +++++++-
>>   mm/Makefile                         |   5 +-
>>   mm/debug-pagealloc.c                | 127 +----------------------------
>>   mm/page_alloc.c                     |  10 ++-
>>   mm/page_poison.c                    | 158 ++++++++++++++++++++++++++++++++++++
>>   8 files changed, 228 insertions(+), 129 deletions(-)
>>   create mode 100644 mm/page_poison.c
>>
>
> Should poisoning of this kind be using kasan rather than "old fashioned"
> poisoning?
>
>

The two aren't mutually exclusive. kasan serves a different purpose
even though it has sanitize in the name: kasan is designed to detect
errors, whereas the purpose of this series is to make sure the memory
is really cleared out. This series also doesn't have the memory
overhead of kasan.
  
> Thanks,
> Sasha
>

Thanks,
Laura

^ permalink raw reply	[flat|nested] 40+ messages in thread

end of thread, other threads:[~2016-01-26 20:34 UTC | newest]

Thread overview: 40+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-01-25 16:55 [RFC][PATCH 0/3] Sanitization of buddy pages Laura Abbott
2016-01-25 16:55 ` [kernel-hardening] " Laura Abbott
2016-01-25 16:55 ` Laura Abbott
2016-01-25 16:55 ` [RFC][PATCH 1/3] mm/debug-pagealloc.c: Split out page poisoning from debug page_alloc Laura Abbott
2016-01-25 16:55   ` [kernel-hardening] " Laura Abbott
2016-01-25 16:55   ` Laura Abbott
2016-01-26  6:26   ` Jianyu Zhan
2016-01-26  6:26     ` [kernel-hardening] " Jianyu Zhan
2016-01-26  6:26     ` Jianyu Zhan
2016-01-26 20:25     ` Laura Abbott
2016-01-26 20:25       ` [kernel-hardening] " Laura Abbott
2016-01-26 20:25       ` Laura Abbott
2016-01-25 16:55 ` [RFC][PATCH 2/3] mm/page_poison.c: Enable PAGE_POISONING as a separate option Laura Abbott
2016-01-25 16:55   ` [kernel-hardening] " Laura Abbott
2016-01-25 16:55   ` Laura Abbott
2016-01-26  6:39   ` Jianyu Zhan
2016-01-26  6:39     ` [kernel-hardening] " Jianyu Zhan
2016-01-26  6:39     ` Jianyu Zhan
2016-01-26 20:27     ` Laura Abbott
2016-01-26 20:27       ` [kernel-hardening] " Laura Abbott
2016-01-26 20:27       ` Laura Abbott
2016-01-25 16:55 ` [RFC][PATCH 3/3] mm/page_poisoning.c: Allow for zero poisoning Laura Abbott
2016-01-25 16:55   ` [kernel-hardening] " Laura Abbott
2016-01-25 16:55   ` Laura Abbott
2016-01-25 20:16   ` [kernel-hardening] " Dave Hansen
2016-01-25 20:16     ` Dave Hansen
2016-01-25 22:05     ` Kees Cook
2016-01-25 22:05       ` Kees Cook
2016-01-25 22:05       ` Kees Cook
2016-01-26  1:33       ` Laura Abbott
2016-01-26  1:33         ` Laura Abbott
2016-01-26  1:33         ` Laura Abbott
2016-01-26  6:05 ` [RFC][PATCH 0/3] Sanitization of buddy pages Sasha Levin
2016-01-26  6:05   ` [kernel-hardening] " Sasha Levin
2016-01-26  6:05   ` Sasha Levin
2016-01-26 20:34   ` Laura Abbott
2016-01-26 20:34     ` [kernel-hardening] " Laura Abbott
2016-01-26 20:34     ` Laura Abbott
2016-01-26  9:08 ` [kernel-hardening] " Mathias Krause
2016-01-26  9:08   ` Mathias Krause
