* [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
@ 2015-12-22  3:40 ` Laura Abbott
  0 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22  3:40 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook, kernel-hardening

Hi,

This is a partial port of the PAX_MEMORY_SANITIZE feature. The concept is
fairly simple: when memory is freed, the existing data should be erased.
This helps to reduce the impact of bugs such as:
45a22f4 ("inotify: Fix reporting of cookies for inotify events")
e4514cb ("RDMA/cxgb3: Fix information leak in send_abort()")
or your favorite use-after-free bug.

The biggest change from PAX_MEMORY_SANITIZE is that this feature sanitizes
only the SL[AOU]B allocators. My plan is to work on buddy allocator
sanitization after this series gets picked up. A side effect of this is
that allocations which go directly to the buddy allocator (i.e. large
allocations) aren't sanitized. I'd like feedback on whether it's worth
adding sanitization on that path directly or better to just rely on the
page allocator sanitization once that comes in.

I also expanded the command line options, mostly for SLUB. Since SLUB
has had so much performance tuning work done, I added an option to
sanitize only on the slow path. The difference between sanitizing on
the fast vs. the slow path was most noticeable in the bulk test cases.
Overall, I saw impacts of 3% to 20% on various benchmarks with this
feature enabled. The overhead with sanitize_slab=off seemed to be
pretty negligible.
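
For reference, the values accepted by the new boot parameter (numeric
aliases in parentheses, as implemented in patch 1):

  sanitize_slab=off      (0)  no sanitization
  sanitize_slab=slow     (1)  sanitize only on the SLUB slow path
  sanitize_slab=partial  (2)  sanitize all caches not marked
                              SLAB_NO_SANITIZE (the default)
  sanitize_slab=full     (3)  also clear SLAB_NO_SANITIZE at cache
                              creation (SLAB_DESTROY_BY_RCU caches
                              stay exempt)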

This feature is similar to the SLAB_POISON debug feature. I did consider
trying to decouple that feature from debugging, but ultimately concluded
that the extra debug overhead and entangled features made it not worth it.

Bike shed whatever you like. The Kconfig will probably end up in
a separate sanitization Kconfig.

All credit for the original work should be given to Brad Spengler and
the PaX Team. 

Thanks,
Laura

Laura Abbott (7):
  mm/slab_common.c: Add common support for slab sanitization
  slub: Add support for sanitization
  slab: Add support for sanitization
  slob: Add support for sanitization
  mm: Mark several cases as SLAB_NO_SANITIZE
  mm: Add Kconfig option for slab sanitization
  lkdtm: Add READ_AFTER_FREE test

 drivers/misc/lkdtm.c     | 29 ++++++++++++++++
 fs/buffer.c              |  2 +-
 fs/dcache.c              |  2 +-
 include/linux/slab.h     |  7 ++++
 include/linux/slab_def.h |  4 +++
 init/Kconfig             | 48 ++++++++++++++++++++++++++
 kernel/fork.c            |  2 +-
 mm/rmap.c                |  4 +--
 mm/slab.c                | 35 +++++++++++++++++++
 mm/slab.h                | 24 ++++++++++++-
 mm/slab_common.c         | 53 ++++++++++++++++++++++++++++
 mm/slob.c                | 27 +++++++++++----
 mm/slub.c                | 90 +++++++++++++++++++++++++++++++++++++++++++++++-
 net/core/skbuff.c        |  4 +--
 14 files changed, 316 insertions(+), 15 deletions(-)

-- 
2.5.0


* [RFC][PATCH 1/7] mm/slab_common.c: Add common support for slab sanitization
  2015-12-22  3:40 ` Laura Abbott
@ 2015-12-22  3:40   ` Laura Abbott
  0 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22  3:40 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook, kernel-hardening


Each of the different allocators (SLAB/SLUB/SLOB) handles
clearing of objects differently depending on configuration.
Add common infrastructure for selecting sanitization levels
(off, slow path only, partial, full) and marking caches as
appropriate.

All credit for the original work should be given to Brad Spengler and
the PaX Team.

Signed-off-by: Laura Abbott <laura@labbott.name>
---
 include/linux/slab.h     |  7 +++++++
 include/linux/slab_def.h |  4 ++++
 mm/slab.h                | 22 ++++++++++++++++++++
 mm/slab_common.c         | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
 4 files changed, 86 insertions(+)

diff --git a/include/linux/slab.h b/include/linux/slab.h
index 2037a86..35c1e2d 100644
--- a/include/linux/slab.h
+++ b/include/linux/slab.h
@@ -23,6 +23,13 @@
 #define SLAB_DEBUG_FREE		0x00000100UL	/* DEBUG: Perform (expensive) checks on free */
 #define SLAB_RED_ZONE		0x00000400UL	/* DEBUG: Red zone objs in a cache */
 #define SLAB_POISON		0x00000800UL	/* DEBUG: Poison objects */
+
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+#define SLAB_NO_SANITIZE        0x00001000UL    /* Do not sanitize objs on free */
+#else
+#define SLAB_NO_SANITIZE        0x00000000UL
+#endif
+
 #define SLAB_HWCACHE_ALIGN	0x00002000UL	/* Align objs on cache lines */
 #define SLAB_CACHE_DMA		0x00004000UL	/* Use GFP_DMA memory */
 #define SLAB_STORE_USER		0x00010000UL	/* DEBUG: Store the last owner for bug hunting */
diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
index 33d0490..4c3fb93 100644
--- a/include/linux/slab_def.h
+++ b/include/linux/slab_def.h
@@ -69,6 +69,10 @@ struct kmem_cache {
 	 */
 	int obj_offset;
 #endif /* CONFIG_DEBUG_SLAB */
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+	atomic_t sanitized;
+	atomic_t not_sanitized;
+#endif
 #ifdef CONFIG_MEMCG_KMEM
 	struct memcg_cache_params memcg_params;
 #endif
diff --git a/mm/slab.h b/mm/slab.h
index 7b60871..b54b636 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -66,6 +66,28 @@ extern struct list_head slab_caches;
 /* The slab cache that manages slab cache information */
 extern struct kmem_cache *kmem_cache;
 
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+#ifdef CONFIG_X86_64
+#define SLAB_MEMORY_SANITIZE_VALUE       '\xfe'
+#else
+#define SLAB_MEMORY_SANITIZE_VALUE       '\xff'
+#endif
+enum slab_sanitize_mode {
+	/* No sanitization */
+	SLAB_SANITIZE_OFF = 0,
+
+	/* Partial sanitization happens only on the slow path */
+	SLAB_SANITIZE_PARTIAL_SLOWPATH = 1,
+
+	/* Partial sanitization happens everywhere */
+	SLAB_SANITIZE_PARTIAL = 2,
+
+	/* Sanitization happens on all slabs, all paths */
+	SLAB_SANITIZE_FULL = 3,
+};
+extern enum slab_sanitize_mode sanitize_slab;
+#endif
+
 unsigned long calculate_alignment(unsigned long flags,
 		unsigned long align, unsigned long size);
 
diff --git a/mm/slab_common.c b/mm/slab_common.c
index 3c6a86b..4b28f70 100644
--- a/mm/slab_common.c
+++ b/mm/slab_common.c
@@ -30,6 +30,42 @@ LIST_HEAD(slab_caches);
 DEFINE_MUTEX(slab_mutex);
 struct kmem_cache *kmem_cache;
 
+
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+enum slab_sanitize_mode sanitize_slab = SLAB_SANITIZE_PARTIAL;
+static int __init sanitize_slab_setup(char *str)
+{
+	if (!str)
+		return 0;
+
+	if (!strcmp(str, "0") || !strcmp(str, "off")) {
+		pr_info("slab sanitization disabled");
+		sanitize_slab = SLAB_SANITIZE_OFF;
+	} else if (!strcmp(str, "1") || !strcmp(str, "slow")) {
+		pr_info("slab sanitization partial slow path");
+		sanitize_slab = SLAB_SANITIZE_PARTIAL_SLOWPATH;
+	} else if (!strcmp(str, "2") || !strcmp(str, "partial")) {
+		pr_info("slab sanitization partial");
+		sanitize_slab = SLAB_SANITIZE_PARTIAL;
+	} else if (!strcmp(str, "3") || !strcmp(str, "full")) {
+		pr_info("slab sanitization full");
+		sanitize_slab = SLAB_SANITIZE_FULL;
+	} else
+		pr_err("slab sanitization: unsupported option '%s'\n", str);
+
+	return 0;
+}
+early_param("sanitize_slab", sanitize_slab_setup);
+
+static inline bool sanitize_mergeable(unsigned long flags)
+{
+	return (sanitize_slab == SLAB_SANITIZE_OFF) || (flags & SLAB_NO_SANITIZE);
+}
+#else
+static inline bool sanitize_mergeable(unsigned long flags) { return true; }
+#endif
+
+
 /*
  * Set of flags that will prevent slab merging
  */
@@ -227,6 +263,9 @@ static inline void destroy_memcg_params(struct kmem_cache *s)
  */
 int slab_unmergeable(struct kmem_cache *s)
 {
+	if (!sanitize_mergeable(s->flags))
+		return 1;
+
 	if (slab_nomerge || (s->flags & SLAB_NEVER_MERGE))
 		return 1;
 
@@ -250,6 +289,9 @@ struct kmem_cache *find_mergeable(size_t size, size_t align,
 {
 	struct kmem_cache *s;
 
+	if (!sanitize_mergeable(flags))
+		return NULL;
+
 	if (slab_nomerge || (flags & SLAB_NEVER_MERGE))
 		return NULL;
 
@@ -407,6 +449,13 @@ kmem_cache_create(const char *name, size_t size, size_t align,
 	 */
 	flags &= CACHE_CREATE_MASK;
 
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+	if (sanitize_slab == SLAB_SANITIZE_OFF || (flags & SLAB_DESTROY_BY_RCU))
+		flags |= SLAB_NO_SANITIZE;
+	else if (sanitize_slab == SLAB_SANITIZE_FULL)
+		flags &= ~SLAB_NO_SANITIZE;
+#endif
+
 	s = __kmem_cache_alias(name, size, align, flags, ctor);
 	if (s)
 		goto out_unlock;
@@ -1050,6 +1099,10 @@ static void print_slabinfo_header(struct seq_file *m)
 		 "<error> <maxfreeable> <nodeallocs> <remotefrees> <alienoverflow>");
 	seq_puts(m, " : cpustat <allochit> <allocmiss> <freehit> <freemiss>");
 #endif
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+	seq_puts(m, " : sanitization <sanitized> <not_sanitized>");
+#endif
+
 	seq_putc(m, '\n');
 }
 
-- 
2.5.0

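As a usage sketch (hypothetical cache name and sizes; assumes this patch
is applied), a cache known to hold non-sensitive data would opt out of
sanitization at creation time:

	#include <linux/slab.h>

	static struct kmem_cache *example_cachep;

	static int __init example_cache_init(void)
	{
		/* SLAB_NO_SANITIZE skips the clear-on-free for this cache.
		 * Note kmem_cache_create() strips the flag again when the
		 * kernel is booted with sanitize_slab=full. */
		example_cachep = kmem_cache_create("example_cache", 256, 0,
						   SLAB_HWCACHE_ALIGN |
						   SLAB_NO_SANITIZE, NULL);
		return example_cachep ? 0 : -ENOMEM;
	}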

* [RFC][PATCH 2/7] slub: Add support for sanitization
  2015-12-22  3:40 ` Laura Abbott
@ 2015-12-22  3:40   ` Laura Abbott
  0 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22  3:40 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook, kernel-hardening


Clearing of objects on free currently happens only when SLUB_DEBUG is
enabled and poisoning is turned on for a cache. This is a potential
source of information leaks; sensitive data may remain in memory well
past its normal lifetime. Add support for sanitizing objects independent
of the debug features. Sanitization can be configured for only the slow
path or for all paths.

All credit for the original work should be given to Brad Spengler and
the PaX Team.

Signed-off-by: Laura Abbott <laura@labbott.name>
---
 mm/slub.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++-
 1 file changed, 89 insertions(+), 1 deletion(-)

diff --git a/mm/slub.c b/mm/slub.c
index 4699751..a02e1e7 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -220,6 +220,15 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
 #endif
 }
 
+static inline bool should_sanitize_slab(const struct kmem_cache *s)
+{
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+		return !(s->flags & SLAB_NO_SANITIZE);
+#else
+		return false;
+#endif
+}
+
 /********************************************************************
  * 			Core slab cache functions
  *******************************************************************/
@@ -286,6 +295,9 @@ static inline int slab_index(void *p, struct kmem_cache *s, void *addr)
 
 static inline size_t slab_ksize(const struct kmem_cache *s)
 {
+	if (should_sanitize_slab(s))
+		return s->object_size;
+
 #ifdef CONFIG_SLUB_DEBUG
 	/*
 	 * Debugging requires use of the padding between object
@@ -1263,6 +1275,66 @@ static inline void dec_slabs_node(struct kmem_cache *s, int node,
 
 #endif /* CONFIG_SLUB_DEBUG */
 
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+static void __sanitize_object(struct kmem_cache *s, void *p)
+{
+	memset(p, SLAB_MEMORY_SANITIZE_VALUE, s->object_size);
+	if (s->ctor)
+		s->ctor(p);
+}
+
+static void sanitize_objects(struct kmem_cache *s, void *head, void *tail,
+				int cnt)
+{
+	int i = 1;
+	void *p = head;
+
+	do {
+		if (i > cnt)
+			BUG();
+
+		__sanitize_object(s, p);
+		i++;
+	} while (p != tail && (p = get_freepointer(s, p)));
+}
+
+
+static void sanitize_slow_path(struct kmem_cache *s, void *head, void *tail,
+				int cnt)
+{
+	if (sanitize_slab != SLAB_SANITIZE_PARTIAL_SLOWPATH)
+		return;
+
+	if (!should_sanitize_slab(s))
+		return;
+
+	sanitize_objects(s, head, tail, cnt);
+}
+
+static void sanitize_object(struct kmem_cache *s, void *p)
+{
+	if (sanitize_slab != SLAB_SANITIZE_FULL &&
+	    sanitize_slab != SLAB_SANITIZE_PARTIAL)
+		return;
+
+	if (!should_sanitize_slab(s))
+		return;
+
+	__sanitize_object(s, p);
+}
+#else
+static void sanitize_slow_path(struct kmem_cache *s, void *head, void *tail,
+				int cnt)
+{
+	return;
+}
+
+static void sanitize_object(struct kmem_cache *s, void *p)
+{
+	return;
+}
+#endif
+
 /*
  * Hooks for other subsystems that check memory allocations. In a typical
  * production configuration these hooks all should produce no code at all.
@@ -1311,6 +1383,7 @@ static inline void slab_post_alloc_hook(struct kmem_cache *s, gfp_t flags,
 
 static inline void slab_free_hook(struct kmem_cache *s, void *x)
 {
+	sanitize_object(s, x);
 	kmemleak_free_recursive(x, s->flags);
 
 	/*
@@ -1345,7 +1418,8 @@ static inline void slab_free_freelist_hook(struct kmem_cache *s,
 	defined(CONFIG_LOCKDEP)	||		\
 	defined(CONFIG_DEBUG_KMEMLEAK) ||	\
 	defined(CONFIG_DEBUG_OBJECTS_FREE) ||	\
-	defined(CONFIG_KASAN)
+	defined(CONFIG_KASAN) || \
+	defined(CONFIG_SLAB_MEMORY_SANITIZE)
 
 	void *object = head;
 	void *tail_obj = tail ? : head;
@@ -2645,6 +2719,8 @@ static void __slab_free(struct kmem_cache *s, struct page *page,
 
 	stat(s, FREE_SLOWPATH);
 
+	sanitize_slow_path(s, head, tail, cnt);
+
 	if (kmem_cache_debug(s) &&
 	    !(n = free_debug_processing(s, page, head, tail, cnt,
 					addr, &flags)))
@@ -3262,6 +3338,7 @@ static int calculate_sizes(struct kmem_cache *s, int forced_order)
 	s->inuse = size;
 
 	if (((flags & (SLAB_DESTROY_BY_RCU | SLAB_POISON)) ||
+		should_sanitize_slab(s) ||
 		s->ctor)) {
 		/*
 		 * Relocate free pointer after the object if it is not
@@ -4787,6 +4864,14 @@ static ssize_t cache_dma_show(struct kmem_cache *s, char *buf)
 SLAB_ATTR_RO(cache_dma);
 #endif
 
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+static ssize_t sanitize_show(struct kmem_cache *s, char *buf)
+{
+	return sprintf(buf, "%d\n", should_sanitize_slab(s));
+}
+SLAB_ATTR_RO(sanitize);
+#endif
+
 static ssize_t destroy_by_rcu_show(struct kmem_cache *s, char *buf)
 {
 	return sprintf(buf, "%d\n", !!(s->flags & SLAB_DESTROY_BY_RCU));
@@ -5129,6 +5214,9 @@ static struct attribute *slab_attrs[] = {
 #ifdef CONFIG_ZONE_DMA
 	&cache_dma_attr.attr,
 #endif
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+	&sanitize_attr.attr,
+#endif
 #ifdef CONFIG_NUMA
 	&remote_node_defrag_ratio_attr.attr,
 #endif
-- 
2.5.0

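With this applied, the new read-only sysfs attribute reports whether a
given cache is subject to sanitization. Illustrative output (the dentry
cache is one of the caches exempted later in this series):

	$ cat /sys/kernel/slab/dentry/sanitize
	0
	$ cat /sys/kernel/slab/kmalloc-64/sanitize
	1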

* [RFC][PATCH 3/7] slab: Add support for sanitization
  2015-12-22  3:40 ` Laura Abbott
@ 2015-12-22  3:40   ` Laura Abbott
  0 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22  3:40 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook, kernel-hardening


Clearing of objects on free only happens on debug paths. This is a
security risk since sensitive data may exist long past its expected
life span. Add clearing of objects on free independent of the debug
configuration.

All credit for the original work should be given to Brad Spengler and
the PaX Team.

Signed-off-by: Laura Abbott <laura@labbott.name>
---
 mm/slab.c | 35 +++++++++++++++++++++++++++++++++++
 1 file changed, 35 insertions(+)

diff --git a/mm/slab.c b/mm/slab.c
index 4765c97..0ca92d8 100644
--- a/mm/slab.c
+++ b/mm/slab.c
@@ -319,6 +319,8 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
 #define STATS_INC_ALLOCMISS(x)	atomic_inc(&(x)->allocmiss)
 #define STATS_INC_FREEHIT(x)	atomic_inc(&(x)->freehit)
 #define STATS_INC_FREEMISS(x)	atomic_inc(&(x)->freemiss)
+#define STATS_INC_SANITIZED(x)	atomic_inc(&(x)->sanitized)
+#define STATS_INC_NOT_SANITIZED(x) atomic_inc(&(x)->not_sanitized)
 #else
 #define	STATS_INC_ACTIVE(x)	do { } while (0)
 #define	STATS_DEC_ACTIVE(x)	do { } while (0)
@@ -335,6 +337,8 @@ static void kmem_cache_node_init(struct kmem_cache_node *parent)
 #define STATS_INC_ALLOCMISS(x)	do { } while (0)
 #define STATS_INC_FREEHIT(x)	do { } while (0)
 #define STATS_INC_FREEMISS(x)	do { } while (0)
+#define STATS_INC_SANITIZED(x)  do { } while (0)
+#define STATS_INC_NOT_SANITIZED(x) do { } while (0)
 #endif
 
 #if DEBUG
@@ -3359,6 +3363,27 @@ free_done:
 	memmove(ac->entry, &(ac->entry[batchcount]), sizeof(void *)*ac->avail);
 }
 
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+static void slab_sanitize(struct kmem_cache *cachep, void *objp)
+{
+	if (cachep->flags & (SLAB_POISON | SLAB_NO_SANITIZE)) {
+		STATS_INC_NOT_SANITIZED(cachep);
+	} else {
+		memset(objp, SLAB_MEMORY_SANITIZE_VALUE, cachep->object_size);
+
+		if (cachep->ctor)
+			cachep->ctor(objp);
+
+		STATS_INC_SANITIZED(cachep);
+	}
+}
+#else
+static void slab_sanitize(struct kmem_cache *cachep, void *objp)
+{
+	return;
+}
+#endif
+
 /*
  * Release an obj back to its cache. If the obj has a constructed state, it must
  * be in this state _before_ it is released.  Called with disabled ints.
@@ -3369,6 +3394,8 @@ static inline void __cache_free(struct kmem_cache *cachep, void *objp,
 	struct array_cache *ac = cpu_cache_get(cachep);
 
 	check_irq_off();
+
+	slab_sanitize(cachep, objp);
 	kmemleak_free_recursive(objp, cachep->flags);
 	objp = cache_free_debugcheck(cachep, objp, caller);
 
@@ -4014,6 +4041,14 @@ void slabinfo_show_stats(struct seq_file *m, struct kmem_cache *cachep)
 		seq_printf(m, " : cpustat %6lu %6lu %6lu %6lu",
 			   allochit, allocmiss, freehit, freemiss);
 	}
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+	{
+		unsigned long sanitized = atomic_read(&cachep->sanitized);
+		unsigned long not_sanitized = atomic_read(&cachep->not_sanitized);
+
+		seq_printf(m, " : sanitized %6lu %6lu", sanitized, not_sanitized);
+	}
+#endif
 #endif
 }
 
-- 
2.5.0

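With CONFIG_DEBUG_SLAB statistics enabled, each /proc/slabinfo line then
carries the two new counters appended after the cpustat fields, in the
shape (placeholders, not real output):

	<name> ... : cpustat <allochit> <allocmiss> <freehit> <freemiss> : sanitized <sanitized> <not_sanitized>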

* [RFC][PATCH 4/7] slob: Add support for sanitization
  2015-12-22  3:40 ` Laura Abbott
@ 2015-12-22  3:40   ` Laura Abbott
  0 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22  3:40 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook, kernel-hardening


The SLOB allocator does not clear objects on free. This is a security
risk since sensitive data may exist long past its expected life
span. Add support for clearing objects on free.

All credit for the original work should be given to Brad Spengler and
the PaX Team.

Signed-off-by: Laura Abbott <laura@labbott.name>
---
 mm/slob.c | 25 +++++++++++++++++++------
 1 file changed, 19 insertions(+), 6 deletions(-)

diff --git a/mm/slob.c b/mm/slob.c
index 17e8f8c..37a4ecb 100644
--- a/mm/slob.c
+++ b/mm/slob.c
@@ -334,10 +334,21 @@ static void *slob_alloc(size_t size, gfp_t gfp, int align, int node)
 	return b;
 }
 
+static void slob_sanitize(struct kmem_cache *c, slob_t *b, int size)
+{
+#ifdef CONFIG_SLAB_MEMORY_SANITIZE
+	if (c && (c->flags & SLAB_NO_SANITIZE))
+		return;
+
+	if (sanitize_slab)
+		memset(b, SLAB_MEMORY_SANITIZE_VALUE, size);
+#endif
+}
+
 /*
  * slob_free: entry point into the slob allocator.
  */
-static void slob_free(void *block, int size)
+static void slob_free(struct kmem_cache *c, void *block, int size)
 {
 	struct page *sp;
 	slob_t *prev, *next, *b = (slob_t *)block;
@@ -365,6 +376,8 @@ static void slob_free(void *block, int size)
 		return;
 	}
 
+	slob_sanitize(c, block, size);
+
 	if (!slob_page_free(sp)) {
 		/* This slob page is about to become partially free. Easy! */
 		sp->units = units;
@@ -495,7 +508,7 @@ void kfree(const void *block)
 	if (PageSlab(sp)) {
 		int align = max_t(size_t, ARCH_KMALLOC_MINALIGN, ARCH_SLAB_MINALIGN);
 		unsigned int *m = (unsigned int *)(block - align);
-		slob_free(m, *m + align);
+		slob_free(NULL, m, *m + align);
 	} else
 		__free_pages(sp, compound_order(sp));
 }
@@ -579,10 +592,10 @@ void *kmem_cache_alloc_node(struct kmem_cache *cachep, gfp_t gfp, int node)
 EXPORT_SYMBOL(kmem_cache_alloc_node);
 #endif
 
-static void __kmem_cache_free(void *b, int size)
+static void __kmem_cache_free(struct kmem_cache *c, void *b, int size)
 {
 	if (size < PAGE_SIZE)
-		slob_free(b, size);
+		slob_free(c, b, size);
 	else
 		slob_free_pages(b, get_order(size));
 }
@@ -592,7 +605,7 @@ static void kmem_rcu_free(struct rcu_head *head)
 	struct slob_rcu *slob_rcu = (struct slob_rcu *)head;
 	void *b = (void *)slob_rcu - (slob_rcu->size - sizeof(struct slob_rcu));
 
-	__kmem_cache_free(b, slob_rcu->size);
+	__kmem_cache_free(NULL, b, slob_rcu->size);
 }
 
 void kmem_cache_free(struct kmem_cache *c, void *b)
@@ -604,7 +617,7 @@ void kmem_cache_free(struct kmem_cache *c, void *b)
 		slob_rcu->size = c->size;
 		call_rcu(&slob_rcu->head, kmem_rcu_free);
 	} else {
-		__kmem_cache_free(b, c->size);
+		__kmem_cache_free(c, b, c->size);
 	}
 
 	trace_kmem_cache_free(_RET_IP_, b);
-- 
2.5.0

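A minimal sketch of the intended effect (illustrative only; deliberately
reading freed memory is what the lkdtm READ_AFTER_FREE test in patch 7
does in a controlled way):

	u8 *p = kmalloc(64, GFP_KERNEL);

	memset(p, 0x42, 64);
	kfree(p);
	/* With sanitize_slab active, slob_free() has overwritten the
	 * object with SLAB_MEMORY_SANITIZE_VALUE ('\xfe' on x86_64,
	 * '\xff' elsewhere) instead of leaving the 0x42 pattern behind. */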

^ permalink raw reply related	[flat|nested] 115+ messages in thread

* [RFC][PATCH 5/7] mm: Mark several cases as SLAB_NO_SANITIZE
  2015-12-22  3:40 ` Laura Abbott
@ 2015-12-22  3:40   ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22  3:40 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook, kernel-hardening


Sanitization is useful for security but clearing on free comes at a
performance cost. Mark select caches as SLAB_NO_SANITIZE so
sanitization does not happen under the default configuration. The
kernel may still be booted with the appropriate command line option
(sanitize_slab=full) to sanitize these caches as well.

All credit for the original work should be given to Brad Spengler and
the PaX Team.

Signed-off-by: Laura Abbott <laura@labbott.name>
---
This is the initial set of excludes that the grsecurity patches had.
More may need to be added/removed as the series is tested.
---
 fs/buffer.c       |  2 +-
 fs/dcache.c       |  2 +-
 kernel/fork.c     |  2 +-
 mm/rmap.c         |  4 ++--
 mm/slab.h         |  2 +-
 net/core/skbuff.c | 16 ++++++++--------
 6 files changed, 14 insertions(+), 14 deletions(-)

diff --git a/fs/buffer.c b/fs/buffer.c
index 4f4cd95..f19e4ab 100644
--- a/fs/buffer.c
+++ b/fs/buffer.c
@@ -3417,7 +3417,7 @@ void __init buffer_init(void)
 	bh_cachep = kmem_cache_create("buffer_head",
 			sizeof(struct buffer_head), 0,
 				(SLAB_RECLAIM_ACCOUNT|SLAB_PANIC|
-				SLAB_MEM_SPREAD),
+				SLAB_MEM_SPREAD|SLAB_NO_SANITIZE),
 				NULL);
 
 	/*
diff --git a/fs/dcache.c b/fs/dcache.c
index 5c33aeb..470f6be 100644
--- a/fs/dcache.c
+++ b/fs/dcache.c
@@ -3451,7 +3451,7 @@ void __init vfs_caches_init_early(void)
 void __init vfs_caches_init(void)
 {
 	names_cachep = kmem_cache_create("names_cache", PATH_MAX, 0,
-			SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
+			SLAB_NO_SANITIZE|SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
 
 	dcache_init();
 	inode_init();
diff --git a/kernel/fork.c b/kernel/fork.c
index fce002e..35db9c3 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -1868,7 +1868,7 @@ void __init proc_caches_init(void)
 	mm_cachep = kmem_cache_create("mm_struct",
 			sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
 			SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL);
-	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC);
+	vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_NO_SANITIZE);
 	mmap_init();
 	nsproxy_cache_init();
 }
diff --git a/mm/rmap.c b/mm/rmap.c
index b577fbb..74296d9 100644
--- a/mm/rmap.c
+++ b/mm/rmap.c
@@ -428,8 +428,8 @@ static void anon_vma_ctor(void *data)
 void __init anon_vma_init(void)
 {
 	anon_vma_cachep = kmem_cache_create("anon_vma", sizeof(struct anon_vma),
-			0, SLAB_DESTROY_BY_RCU|SLAB_PANIC, anon_vma_ctor);
-	anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC);
+		0, SLAB_DESTROY_BY_RCU|SLAB_PANIC|SLAB_NO_SANITIZE, anon_vma_ctor);
+	anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC|SLAB_NO_SANITIZE);
 }
 
 /*
diff --git a/mm/slab.h b/mm/slab.h
index b54b636..6de99da 100644
--- a/mm/slab.h
+++ b/mm/slab.h
@@ -137,7 +137,7 @@ static inline unsigned long kmem_cache_flags(unsigned long object_size,
 
 /* Legal flag mask for kmem_cache_create(), for various configurations */
 #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
-			 SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS )
+			 SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS | SLAB_NO_SANITIZE)
 
 #if defined(CONFIG_DEBUG_SLAB)
 #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
diff --git a/net/core/skbuff.c b/net/core/skbuff.c
index b2df375..1d499ea 100644
--- a/net/core/skbuff.c
+++ b/net/core/skbuff.c
@@ -3316,15 +3316,15 @@ done:
 void __init skb_init(void)
 {
 	skbuff_head_cache = kmem_cache_create("skbuff_head_cache",
-					      sizeof(struct sk_buff),
-					      0,
-					      SLAB_HWCACHE_ALIGN|SLAB_PANIC,
-					      NULL);
+				sizeof(struct sk_buff),
+				0,
+				SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NO_SANITIZE,
+				NULL);
 	skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
-						sizeof(struct sk_buff_fclones),
-						0,
-						SLAB_HWCACHE_ALIGN|SLAB_PANIC,
-						NULL);
+				sizeof(struct sk_buff_fclones),
+				0,
+				SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NO_SANITIZE,
+				NULL);
 }
 
 /**
-- 
2.5.0


^ permalink raw reply related	[flat|nested] 115+ messages in thread
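
Opting a cache out is a one-flag change at creation time. As a hedged
illustration (my_cache and struct my_obj are made up; SLAB_NO_SANITIZE is
the flag this series introduces), a cache deemed too hot to clear and free
of sensitive data would be set up as:

	struct kmem_cache *my_cache;

	my_cache = kmem_cache_create("my_cache", sizeof(struct my_obj),
				     0, SLAB_HWCACHE_ALIGN | SLAB_NO_SANITIZE,
				     NULL);

Per the commit message, such a cache is still scrubbed when the kernel is
booted with sanitize_slab=full.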

* [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22  3:40 ` Laura Abbott
@ 2015-12-22  3:40   ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22  3:40 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook, kernel-hardening


The SL[AOU]B allocators all behave differently w.r.t. what happens when
an object is freed. CONFIG_SLAB_MEMORY_SANITIZE introduces a common
mechanism to control what happens on free. When this option is
enabled, objects may be poisoned according to a combination of the
sanitize_slab command line option and whether SLAB_NO_SANITIZE
is set on a cache.

All credit for the original work should be given to Brad Spengler and
the PaX Team.

Signed-off-by: Laura Abbott <laura@labbott.name>
---
 init/Kconfig | 36 ++++++++++++++++++++++++++++++++++++
 1 file changed, 36 insertions(+)

diff --git a/init/Kconfig b/init/Kconfig
index 235c7a2..37857f3 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1755,6 +1755,42 @@ config SLUB_CPU_PARTIAL
 	  which requires the taking of locks that may cause latency spikes.
 	  Typically one would choose no for a realtime system.
 
+config SLAB_MEMORY_SANITIZE
+	bool "Sanitize all freed memory"
+	help
+	  By saying Y here the kernel will erase slab objects as soon as they
+	  are freed.  This in turn reduces the lifetime of data stored in
+	  them, making it less likely that sensitive information such as
+	  passwords, cryptographic secrets, etc. stays in memory for too long.
+
+	  This is especially useful for programs whose runtime is short;
+	  long-lived processes and the kernel itself also benefit, as long
+	  as they ensure timely freeing of memory that may hold sensitive
+	  information.
+
+	  A nice side effect of the sanitization of slab objects is the
+	  reduction of possible info leaks caused by padding bytes within the
+	  leaky structures.  Use-after-free bugs for structures containing
+	  pointers can also be detected as dereferencing the sanitized pointer
+	  will generate an access violation.
+
+	  The tradeoff is a performance impact. The noticeable impact can
+	  vary, and you are advised to test this feature on your expected
+	  workload before deploying it.
+
+	  The slab sanitization feature excludes a few slab caches by
+	  default for performance reasons. The level of sanitization can be
+	  adjusted with the sanitize_slab command line option:
+		sanitize_slab=off: No sanitization will occur
+		sanitize_slab=slow: Sanitization occurs only on the slow path
+		for all but the excluded slabs
+		(relevant for SLUB allocator only)
+		sanitize_slab=partial: Sanitization occurs on all paths for
+		all but the excluded slabs
+		sanitize_slab=full: All slabs are sanitized
+
+	  If unsure, say Y here.
+
 config MMAP_ALLOW_UNINITIALIZED
 	bool "Allow mmapped anonymous memory to be uninitialized"
 	depends on EXPERT && !MMU
-- 
2.5.0


^ permalink raw reply related	[flat|nested] 115+ messages in thread
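
The four levels described in the help text map naturally onto an early
__setup() hook. The series does this in mm/slab_common.c (patch 1/7); the
sketch below is only an approximation of such parsing, not the series'
actual code, and the default chosen here is an assumption:

	enum sanitize_mode {
		SANITIZE_OFF,
		SANITIZE_SLOW,		/* SLUB slow path only */
		SANITIZE_PARTIAL,	/* all paths, excluded caches skipped */
		SANITIZE_FULL,		/* every cache, every path */
	};

	static enum sanitize_mode sanitize_slab = SANITIZE_PARTIAL;

	static int __init sanitize_slab_setup(char *str)
	{
		if (!strcmp(str, "off"))
			sanitize_slab = SANITIZE_OFF;
		else if (!strcmp(str, "slow"))
			sanitize_slab = SANITIZE_SLOW;
		else if (!strcmp(str, "partial"))
			sanitize_slab = SANITIZE_PARTIAL;
		else if (!strcmp(str, "full"))
			sanitize_slab = SANITIZE_FULL;
		return 1;
	}
	__setup("sanitize_slab=", sanitize_slab_setup);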

* [RFC][PATCH 7/7] lkdtm: Add READ_AFTER_FREE test
  2015-12-22  3:40 ` Laura Abbott
@ 2015-12-22  3:40   ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22  3:40 UTC (permalink / raw)
  To: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook,
	kernel-hardening, Arnd Bergmann, Greg Kroah-Hartman


In a similar manner to WRITE_AFTER_FREE, add a READ_AFTER_FREE
test to exercise the free poisoning features. Sample output when
no poison is present:

[   20.222501] lkdtm: Performing direct entry READ_AFTER_FREE
[   20.226163] lkdtm: Freed val: 12345678

with poison:

[   24.203748] lkdtm: Performing direct entry READ_AFTER_FREE
[   24.207261] general protection fault: 0000 [#1] SMP
[   24.208193] Modules linked in:
[   24.208193] CPU: 0 PID: 866 Comm: sh Not tainted 4.4.0-rc5-work+ #108

Cc: Arnd Bergmann <arnd@arndb.de>
Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
Signed-off-by: Laura Abbott <laura@labbott.name>
---
 drivers/misc/lkdtm.c | 29 +++++++++++++++++++++++++++++
 1 file changed, 29 insertions(+)

diff --git a/drivers/misc/lkdtm.c b/drivers/misc/lkdtm.c
index 11fdadc..c641fb7 100644
--- a/drivers/misc/lkdtm.c
+++ b/drivers/misc/lkdtm.c
@@ -92,6 +92,7 @@ enum ctype {
 	CT_UNALIGNED_LOAD_STORE_WRITE,
 	CT_OVERWRITE_ALLOCATION,
 	CT_WRITE_AFTER_FREE,
+	CT_READ_AFTER_FREE,
 	CT_SOFTLOCKUP,
 	CT_HARDLOCKUP,
 	CT_SPINLOCKUP,
@@ -129,6 +130,7 @@ static char* cp_type[] = {
 	"UNALIGNED_LOAD_STORE_WRITE",
 	"OVERWRITE_ALLOCATION",
 	"WRITE_AFTER_FREE",
+	"READ_AFTER_FREE",
 	"SOFTLOCKUP",
 	"HARDLOCKUP",
 	"SPINLOCKUP",
@@ -417,6 +419,33 @@ static void lkdtm_do_action(enum ctype which)
 		memset(data, 0x78, len);
 		break;
 	}
+	case CT_READ_AFTER_FREE: {
+		int **base;
+		int *val, *tmp;
+
+		base = kmalloc(1024, GFP_KERNEL);
+		if (!base)
+			return;
+
+		val = kmalloc(1024, GFP_KERNEL);
+		if (!val)
+			return;
+
+		*val = 0x12345678;
+
+		/*
+		 * Don't just use the first entry since that's where the
+		 * freelist goes for the slab allocator
+		 */
+		base[1] = val;
+		kfree(base);
+
+		tmp = base[1];
+		pr_info("Freed val: %x\n", *tmp);
+
+		kfree(val);
+		break;
+	}
 	case CT_SOFTLOCKUP:
 		preempt_disable();
 		for (;;)
-- 
2.5.0


^ permalink raw reply related	[flat|nested] 115+ messages in thread
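
Assuming lkdtm's usual direct-entry debugfs interface, the new crash
point would be triggered by writing READ_AFTER_FREE to
/sys/kernel/debug/provoke-crash/DIRECT. With poisoning active, the read
of base[1] fetches a pointer overwritten by the poison pattern, and
dereferencing it is what produces the general protection fault in the
second log above.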

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22  3:40   ` Laura Abbott
@ 2015-12-22  9:33     ` Mathias Krause
  -1 siblings, 0 replies; 115+ messages in thread
From: Mathias Krause @ 2015-12-22  9:33 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Laura Abbott, linux-mm, linux-kernel, Kees Cook

On 22 December 2015 at 04:40, Laura Abbott <laura@labbott.name> wrote:
>
> The SL[AOU]B allocators all behave differently w.r.t. what happens when
> an object is freed. CONFIG_SLAB_MEMORY_SANITIZE introduces a common
> mechanism to control what happens on free. When this option is
> enabled, objects may be poisoned according to a combination of the
> sanitize_slab command line option and whether SLAB_NO_SANITIZE
> is set on a cache.
>
> All credit for the original work should be given to Brad Spengler and
> the PaX Team.
>
> Signed-off-by: Laura Abbott <laura@labbott.name>
> ---
>  init/Kconfig | 36 ++++++++++++++++++++++++++++++++++++
>  1 file changed, 36 insertions(+)
>
> diff --git a/init/Kconfig b/init/Kconfig
> index 235c7a2..37857f3 100644
> --- a/init/Kconfig
> +++ b/init/Kconfig
> @@ -1755,6 +1755,42 @@ config SLUB_CPU_PARTIAL
>           which requires the taking of locks that may cause latency spikes.
>           Typically one would choose no for a realtime system.
>
> +config SLAB_MEMORY_SANITIZE
> +       bool "Sanitize all freed memory"
> +       help
> +         By saying Y here the kernel will erase slab objects as soon as they
> +         are freed.  This in turn reduces the lifetime of data stored in
> +         them, making it less likely that sensitive information such as
> +         passwords, cryptographic secrets, etc. stays in memory for too long.
> +

> +         This is especially useful for programs whose runtime is short;
> +         long-lived processes and the kernel itself also benefit, as long
> +         as they ensure timely freeing of memory that may hold sensitive
> +         information.

This part is not true. The code handles SLAB objects only, so
talking about processes in this context is misleading. Freeing memory
in userland containing secrets cannot be covered by this feature as
is. It needs a counterpart in the userland memory allocator as well
as handling page sanitization in the buddy allocator.

I guess you've just copy+pasted that Kconfig description from the PaX
feature PAX_MEMORY_SANITIZE, which also covers the buddy allocator and
therefore fits that description, while this patch set does not. So
please adapt the text or implement the fully featured version.

> +
> +         A nice side effect of the sanitization of slab objects is the
> +         reduction of possible info leaks caused by padding bytes within the
> +         leaky structures.  Use-after-free bugs for structures containing
> +         pointers can also be detected as dereferencing the sanitized pointer
> +         will generate an access violation.
> +
> +         The tradeoff is a performance impact. The noticeable impact can
> +         vary, and you are advised to test this feature on your expected
> +         workload before deploying it.
> +

> +         The slab sanitization feature excludes a few slab caches by
> +         default for performance reasons. The level of sanitization can be
> +         adjusted with the sanitize_slab command line option:
> +               sanitize_slab=off: No sanitization will occur
> +               sanitize_slab=slow: Sanitization occurs only on the slow path
> +               for all but the excluded slabs
> +               (relevant for SLUB allocator only)
> +               sanitize_slab=partial: Sanitization occurs on all paths for
> +               all but the excluded slabs
> +               sanitize_slab=full: All slabs are sanitized

This should probably be moved to Documentation/kernel-parameters.txt,
as is done in the PaX patch[1]?

> +
> +         If unsure, say Y here.

Really? It has an unknown performance impact, depending on the
workload, which might make "unsure users" prefer to say N if they
don't care about info leaks.

Related to this, have you checked that the sanitization doesn't
interfere with the various slab handling schemes, namely RCU related
specialties? Not all caches are marked SLAB_DESTROY_BY_RCU; some use
call_rcu() instead, implicitly relying on the semantics RCU'ed slabs
permit, namely allowing a "use-after-free" access to be legitimate
within the RCU grace period. Scrubbing the object during that period
would break that assumption.

Speaking of RCU, do you have a plan to support RCU'ed slabs as well?

> +
>  config MMAP_ALLOW_UNINITIALIZED
>         bool "Allow mmapped anonymous memory to be uninitialized"
>         depends on EXPERT && !MMU
> --
> 2.5.0
>

Regards,
Mathias

[1] https://github.com/minipli/linux-grsec/blob/v4.3.3-pax/Documentation/kernel-parameters.txt#L2689-L2696

^ permalink raw reply	[flat|nested] 115+ messages in thread
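
The RCU hazard Mathias describes is visible in the slob patch earlier in
this thread: kmem_cache_free() routes SLAB_DESTROY_BY_RCU objects through
kmem_rcu_free(), and only after the grace period may the object be
scrubbed. A generic sketch of the safe ordering (struct my_obj, my_cache
and the helpers are illustrative, not from the series):

	struct my_obj {
		struct rcu_head rcu;
		/* payload ... */
	};

	static struct kmem_cache *my_cache;

	static void my_obj_rcu_free(struct rcu_head *head)
	{
		struct my_obj *obj = container_of(head, struct my_obj, rcu);

		/* Grace period over: no reader can still hold a reference,
		 * so a sanitizing kmem_cache_free() is safe here. */
		kmem_cache_free(my_cache, obj);
	}

	static void my_obj_free(struct my_obj *obj)
	{
		/* Defer the free (and hence the scrub) past the grace
		 * period; scrubbing at this point instead would hand
		 * poisoned memory to readers still inside the grace
		 * period. */
		call_rcu(&obj->rcu, my_obj_rcu_free);
	}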

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22  3:40   ` Laura Abbott
@ 2015-12-22 14:57     ` Dave Hansen
  -1 siblings, 0 replies; 115+ messages in thread
From: Dave Hansen @ 2015-12-22 14:57 UTC (permalink / raw)
  To: kernel-hardening, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton
  Cc: Laura Abbott, linux-mm, linux-kernel, Kees Cook

On 12/21/2015 07:40 PM, Laura Abbott wrote:
> +	  The tradeoff is a performance impact. The noticeable impact can
> +	  vary, and you are advised to test this feature on your expected
> +	  workload before deploying it.

What if instead of writing SLAB_MEMORY_SANITIZE_VALUE, we wrote 0's?
That still destroys the information, but it has the positive effect of
allowing a kzalloc() call to avoid zeroing the slab object.  It might
mitigate some of the performance impact.

If this is on at compile time, but booted with sanitize_slab=off, is
there a performance impact?

^ permalink raw reply	[flat|nested] 115+ messages in thread
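
Dave's zero-on-free variant buys back some cost on the allocation side.
A minimal sketch, assuming a hypothetical per-cache flag (objects_zeroed)
that none of the allocators actually track, and a hypothetical
take_object_from_freelist() helper; it also assumes freelist metadata is
kept off-object so freed objects really stay all-zero:

	static void *san_cache_alloc(struct kmem_cache *c, gfp_t flags)
	{
		void *obj = take_object_from_freelist(c);	/* hypothetical */

		/* If free-time sanitization already wrote zeroes, a
		 * __GFP_ZERO (kzalloc) caller needs no second memset. */
		if ((flags & __GFP_ZERO) && !c->objects_zeroed)
			memset(obj, 0, c->object_size);
		return obj;
	}

SLUB in particular stores its freelist pointer inside free objects, so it
would dirty the first word and defeat this optimization unless that
pointer were relocated.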

* Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
  2015-12-22  3:40 ` Laura Abbott
@ 2015-12-22 16:08   ` Christoph Lameter
  -1 siblings, 0 replies; 115+ messages in thread
From: Christoph Lameter @ 2015-12-22 16:08 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-mm, linux-kernel, Kees Cook, kernel-hardening

On Mon, 21 Dec 2015, Laura Abbott wrote:

> The biggest change from PAX_MEMORY_SANTIIZE is that this feature sanitizes
> the SL[AOU]B allocators only. My plan is to work on the buddy allocator
> santization after this series gets picked up. A side effect of this is
> that allocations which go directly to the buddy allocator (i.e. large
> allocations) aren't sanitized. I'd like feedback about whether it's worth
> it to add sanitization on that path directly or just use the page
> allocator sanitization when that comes in.

I am not sure what the point of this patchset is. We have a similar effect
to sanitization already in the allocators through two mechanisms:

1. Slab poisoning
2. Allocation with GFP_ZERO

I do not think we need a third one. You could accomplish your goals much
easier without this code churn by either

1. Improve the existing poisoning mechanism. Ensure that there are no
   gaps. Security-sensitive kernel slab caches can then be created with
   the POISONING flag set. Maybe add a Kconfig flag that enables
   POISONING for each cache? What was the issue when you tried using
   poisoning for sanitization?

2. Add a mechanism that ensures that GFP_ZERO is set for each allocation.
   That way every object you retrieve is zeroed and thus you have implied
   sanitization. This also can be done in a rather simple way by changing
   the GFP_KERNEL etc. constants to include __GFP_ZERO depending on a
   Kconfig option. Or add some runtime setting of the gfp flags somewhere.

Generally I would favor option #2 if you must have sanitization, because
that is the only option that really gives you deterministic content of
the object on each allocation. Any halfway measures would not work, I think.

Note also that most allocations already either zero the content or
immediately initialize the content of the allocated object. After all,
the object is not really usable if its content is random. You may be able
to avoid this whole endeavor by auditing the kernel for locations where
the object is not initialized after allocation.

Once one recognizes the above, it seems that sanitization is pretty
useless. It's just another pass of writing zeroes before the allocator or
user of the allocated object sets up deterministic content of the object
or -- in most cases -- zeroes it again.


^ permalink raw reply	[flat|nested] 115+ messages in thread
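
Christoph's option #2 is essentially a one-line change to the GFP
constants. A sketch of the idea with a hypothetical Kconfig symbol; the
flag composition shown matches 4.4-era headers but should be treated as
illustrative, and kmem_cache_alloc() callers passing other flag sets
would still need auditing:

	/* include/linux/gfp.h */
	#ifdef CONFIG_ALLOC_ZERO	/* hypothetical option */
	#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_ZERO)
	#else
	#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)
	#endif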

* Re: [kernel-hardening] Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
  2015-12-22 16:08   ` Christoph Lameter
@ 2015-12-22 16:15     ` Dave Hansen
  -1 siblings, 0 replies; 115+ messages in thread
From: Dave Hansen @ 2015-12-22 16:15 UTC (permalink / raw)
  To: kernel-hardening, Laura Abbott
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-mm, linux-kernel, Kees Cook

On 12/22/2015 08:08 AM, Christoph Lameter wrote:
> 2. Add a mechanism that ensures that GFP_ZERO is set for each allocation.
>    That way every object you retrieve is zeroed and thus you have implied
>    sanitization. This also can be done in a rather simple way by changing
>    the  GFP_KERNEL etc constants to include __GFP_ZERO depending on a
>    Kconfig option. Or add some runtime setting of the gfp flags somewhere.

That's a more comprehensive barrier to leaking information than what we
have now, and it would _also_ cover a big chunk of normal
alloc_page()-style allocations, which would be nice.

But, doing this on the allocation side is less comprehensive than doing
it at free() time.  We (ideally) want to make sure that unallocated memory
at rest does not contain sensitive contents.

Also, the free path _tends_ to be a bit less performance-critical than
the allocation side.  For instance, I think we generally care about
fork() performance a lot more than exit(). :)

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 14:57     ` Dave Hansen
@ 2015-12-22 16:25       ` Christoph Lameter
  -1 siblings, 0 replies; 115+ messages in thread
From: Christoph Lameter @ 2015-12-22 16:25 UTC (permalink / raw)
  To: Dave Hansen
  Cc: kernel-hardening, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Laura Abbott, linux-mm, linux-kernel, Kees Cook

On Tue, 22 Dec 2015, Dave Hansen wrote:

> On 12/21/2015 07:40 PM, Laura Abbott wrote:
> > +	  The tradeoff is performance impact. The noticeable impact can vary
> > +	  and you are advised to test this feature on your expected workload
> > +	  before deploying it.
>
> What if instead of writing SLAB_MEMORY_SANITIZE_VALUE, we wrote 0's?
> That still destroys the information, but it has the positive effect of
> allowing a kzalloc() call to avoid zeroing the slab object.  It might
> mitigate some of the performance impact.

We already write zeros in many cases, or the object is initialized in a
different way. No one really wants an uninitialized object. The problem may
be that a freed object keeps its old content until it is reused, which is
something that poisoning deals with.


^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
  2015-12-22 16:08   ` Christoph Lameter
                     ` (2 preceding siblings ...)
  (?)
@ 2015-12-22 16:38   ` Daniel Micay
  -1 siblings, 0 replies; 115+ messages in thread
From: Daniel Micay @ 2015-12-22 16:38 UTC (permalink / raw)
  To: kernel-hardening, Laura Abbott
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-mm, linux-kernel, Kees Cook

> I am not sure what the point of this patchset is. We have a similar
> effect to sanitization already in the allocators through two mechanisms:

The rationale was covered earlier. Are you responding to that or did you
not see it?

> 1. Slab poisoning
> 2. Allocation with GFP_ZERO
>
> I do not think we need a third one. You could accomplish your goals much
> easier without this code churn by either
>
> 1. Improve the existing poisoning mechanism. Ensure that there are no
>    gaps. Security sensitive kernel slab caches can then be created with
>    the POISONING flag set. Maybe add a Kconfig flag that enables
>    POISONING for each cache? What was the issue when you tried using
>    poisoning for sanitization?
>
> 2. Add a mechanism that ensures that GFP_ZERO is set for each allocation.
>    That way every object you retrieve is zeroed and thus you have implied
>    sanitization. This also can be done in a rather simple way by changing
>    the GFP_KERNEL etc. constants to include __GFP_ZERO depending on a
>    Kconfig option. Or add some runtime setting of the gfp flags somewhere.
>
> Generally I would favor option #2 if you must have sanitization, because
> that is the only option that really gives you deterministic object
> contents on each allocation. Any halfway measure would not work, I think.
>
> Note also that most allocations are already either allocations that zero
> the content or they are immediately initializing the content of the
> allocated object. After all the object is not really usable if the
> content is random. You may be able to avoid this whole endeavor by
> auditing the kernel for locations where the object is not initialized
> after allocation.
>
> Once one recognizes the above, it seems that sanitization is pretty
> useless. It's just another pass of writing zeroes before the allocator or
> user of the allocated object sets up deterministic content of the object
> or -- in most cases -- zeroes it again.

Sanitization isn't just to prevent leaks from use of uninitialized data
in later allocations. It's a mitigation for use-after-free (esp. if it's
combined with some form of delayed freeing), and it reduces the lifetime
of data.
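
As an example of the combination with delayed freeing, a quarantine
could conceptually look like this (quarantine_free() and the fifo
helpers are made-up names, just to show the shape of the idea):

	/* Wipe on free, then delay reuse so a dangling pointer keeps
	 * hitting sanitized memory instead of a freshly recycled object. */
	void quarantine_free(struct kmem_cache *s, void *obj)
	{
		memset(obj, 0, s->object_size);		/* sanitize now */
		fifo_push(&quarantine, obj);		/* defer the reuse */
		while (fifo_len(&quarantine) > QUARANTINE_MAX)
			kmem_cache_free(s, fifo_pop(&quarantine));
	}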

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 16:25       ` Christoph Lameter
  (?)
@ 2015-12-22 17:22       ` Dave Hansen
  2015-12-22 17:24         ` Christoph Lameter
  -1 siblings, 1 reply; 115+ messages in thread
From: Dave Hansen @ 2015-12-22 17:22 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: kernel-hardening, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Laura Abbott, linux-mm, linux-kernel, Kees Cook

On 12/22/2015 08:25 AM, Christoph Lameter wrote:
> On Tue, 22 Dec 2015, Dave Hansen wrote:
>> On 12/21/2015 07:40 PM, Laura Abbott wrote:
>>> +	  The tradeoff is performance impact. The noticeable impact can vary
>>> +	  and you are advised to test this feature on your expected workload
>>> +	  before deploying it.
>>
>> What if instead of writing SLAB_MEMORY_SANITIZE_VALUE, we wrote 0's?
>> That still destroys the information, but it has the positive effect of
>> allowing a kzalloc() call to avoid zeroing the slab object.  It might
>> mitigate some of the performance impact.
> 
> We already write zeros in many cases, or the object is initialized in a
> different way. No one really wants an uninitialized object. The problem may
> be that a freed object keeps its old content until it is reused, which is
> something that poisoning deals with.

Or are you just saying that we should use the poisoning *code* that we
already have in slub?  Using the _code_ looks like a really good idea,
whether we're using it to write POISON_FREE, or 0's.  Something like the
attached patch?



---

 b/mm/slub.c |   12 +++++++++---
 1 file changed, 9 insertions(+), 3 deletions(-)

diff -puN mm/slub.c~slub-poison-zeros mm/slub.c
--- a/mm/slub.c~slub-poison-zeros	2015-12-22 09:18:30.585371985 -0800
+++ b/mm/slub.c	2015-12-22 09:21:23.754174731 -0800
@@ -177,6 +177,7 @@ static inline bool kmem_cache_has_cpu_pa
 /* Internal SLUB flags */
 #define __OBJECT_POISON		0x80000000UL /* Poison object */
 #define __CMPXCHG_DOUBLE	0x40000000UL /* Use cmpxchg_double */
+#define __OBJECT_POISON_ZERO	0x20000000UL /* Poison with zeroes */
 
 #ifdef CONFIG_SMP
 static struct notifier_block slab_notifier;
@@ -678,7 +679,10 @@ static void init_object(struct kmem_cach
 	u8 *p = object;
 
 	if (s->flags & __OBJECT_POISON) {
-		memset(p, POISON_FREE, s->object_size - 1);
+		if (s->flags & __OBJECT_POISON_ZERO)
+			memset(p, 0, s->object_size - 1);
+		else
+			memset(p, POISON_FREE, s->object_size - 1);
 		p[s->object_size - 1] = POISON_END;
 	}
 
@@ -2495,7 +2499,8 @@ redo:
 		stat(s, ALLOC_FASTPATH);
 	}
 
-	if (unlikely(gfpflags & __GFP_ZERO) && object)
+	if (unlikely(gfpflags & __GFP_ZERO) && object &&
+	    !(s->flags & __OBJECT_POISON_ZERO))
 		memset(object, 0, s->object_size);
 
 	slab_post_alloc_hook(s, gfpflags, object);
@@ -2839,7 +2844,8 @@ bool kmem_cache_alloc_bulk(struct kmem_c
 	local_irq_enable();
 
 	/* Clear memory outside IRQ disabled fastpath loop */
-	if (unlikely(flags & __GFP_ZERO)) {
+	if (unlikely(flags & __GFP_ZERO) &&
+	    !(s->flags & __OBJECT_POISON_ZERO)) {
 		int j;
 
 		for (j = 0; j < i; j++)
_

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 17:22       ` Dave Hansen
@ 2015-12-22 17:24         ` Christoph Lameter
  2015-12-22 17:28             ` Dave Hansen
  0 siblings, 1 reply; 115+ messages in thread
From: Christoph Lameter @ 2015-12-22 17:24 UTC (permalink / raw)
  To: Dave Hansen
  Cc: kernel-hardening, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Laura Abbott, linux-mm, linux-kernel, Kees Cook

On Tue, 22 Dec 2015, Dave Hansen wrote:

> Or are you just saying that we should use the poisoning *code* that we
> already have in slub?  Using the _code_ looks like a really good idea,
> whether we're using it to write POISON_FREE, or 0's.  Something like the
> attached patch?

Why would you use zeros? The point is just to clear the information,
right? The regular poisoning does that.

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 17:24         ` Christoph Lameter
@ 2015-12-22 17:28             ` Dave Hansen
  0 siblings, 0 replies; 115+ messages in thread
From: Dave Hansen @ 2015-12-22 17:28 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: kernel-hardening, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Laura Abbott, linux-mm, linux-kernel, Kees Cook

On 12/22/2015 09:24 AM, Christoph Lameter wrote:
> On Tue, 22 Dec 2015, Dave Hansen wrote:
>> Or are you just saying that we should use the poisoning *code* that we
>> already have in slub?  Using the _code_ looks like a really good idea,
>> whether we're using it to write POISON_FREE, or 0's.  Something like the
>> attached patch?
> 
> Why would you use zeros? The point is just to clear the information,
> right? The regular poisoning does that.

It then allows you to avoid the zeroing at allocation time.

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22  9:33     ` Mathias Krause
@ 2015-12-22 17:51       ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22 17:51 UTC (permalink / raw)
  To: Mathias Krause, kernel-hardening
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, linux-mm, linux-kernel, Kees Cook

On 12/22/15 1:33 AM, Mathias Krause wrote:
> On 22 December 2015 at 04:40, Laura Abbott <laura@labbott.name> wrote:
>>
>> +config SLAB_MEMORY_SANITIZE
>> +       bool "Sanitize all freed memory"
>> +       help
>> +         By saying Y here the kernel will erase slab objects as soon as they
>> +         are freed.  This in turn reduces the lifetime of data
>> +         stored in them, making it less likely that sensitive information such
>> +         as passwords, cryptographic secrets, etc stay in memory for too long.
>> +
>
>> +         This is especially useful for programs whose runtime is short, long
>> +         lived processes and the kernel itself benefit from this as long as
>> +         they ensure timely freeing of memory that may hold sensitive
>> +         information.
>
> This part is not true. The code is handling SLAB objects only, so
> talking about processes in this context is misleading. Freeing memory
> in userland containing secrets cannot be covered by this feature as
> is. It needs a counter-part in the userland memory allocator as well
> as handling page sanitization in the buddy allocator.
>
> I guess you've just copy+pasted that Kconfig description from the PaX
> feature PAX_MEMORY_SANITIZE that also covers the buddy allocator,
> therefore fits that description while this patch set does not. So
> please adapt the text or implement the fully featured version.
>

I was thinking of secrets that may be stored in the slab allocator. While
certainly not as common, they do exist. I'll clarify the text, though, to
make it obvious this is for kernel slab memory only.

>> +
>> +         A nice side effect of the sanitization of slab objects is the
>> +         reduction of possible info leaks caused by padding bytes within the
>> +         leaky structures.  Use-after-free bugs for structures containing
>> +         pointers can also be detected as dereferencing the sanitized pointer
>> +         will generate an access violation.
>> +
>> +         The tradeoff is performance impact. The noticeable impact can vary
>> +         and you are advised to test this feature on your expected workload
>> +         before deploying it.
>> +
>
>> +         The slab sanitization feature excludes a few slab caches per default
>> +         for performance reasons. The level of sanitization can be adjusted
>> +         with the sanitize_slab commandline option:
>> +               sanitize_slab=off: No sanitization will occur
>> +               sanitize_slab=slow: Sanitization occurs only on the slow path
>> +               for all but the excluded slabs
>> +               (relevant for SLUB allocator only)
>> +               sanitize_slab=partial: Sanitization occurs on all path for all
>> +               but the excluded slabs
>> +               sanitize_slab=full: All slabs are sanitized
>
> This should probably be moved to Documentation/kernel-parameters.txt,
> as can be found in the PaX patch[1]?
>

Yes, I missed that. I'll fix that.
  
>> +
>> +         If unsure, say Y here.
>
> Really? It has an unknown performance impact, depending on the
> workload, which might make "unsure users" preferably say No, if they
> don't care about info leaks.

This is getting to the argument about security vs. performance and what
the default should be. I think this deserves more advice than just "If
unsure, do X", so I'll add some more description about the trade-offs.

>
> Related to this, have you checked that the sanitization doesn't
> interfere with the various slab handling schemes, namely RCU related
> specialties? Not all caches are marked SLAB_DESTROY_BY_RCU, some use
> call_rcu() instead, implicitly relying on the semantics RCU'ed slabs
> permit, namely allowing a "use-after-free" access to be legitimate
> within the RCU grace period. Scrubbing the object during that period
> would break that assumption.

I haven't looked into that. I was working off the assumption that if the
regular SLAB debug poisoning worked, so would the sanitization. The
regular debug poisoning only checks for SLAB_DESTROY_BY_RCU, so how does
that work then?
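
(For context, the legitimate "use-after-free" that SLAB_DESTROY_BY_RCU
permits looks roughly like the sketch below; the lookup and refcount
helpers are illustrative, not a specific kernel API:

	rcu_read_lock();
	obj = table_lookup(key);		/* may race with a free */
	if (obj && atomic_inc_not_zero(&obj->refcnt)) {
		if (obj->key != key) {		/* object was recycled */
			obj_put(obj);
			obj = NULL;
		}
	}
	rcu_read_unlock();

Scrubbing the object inside the grace period would wipe ->refcnt and
->key before the reader gets a chance to revalidate them.)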

>
> Speaking of RCU, do you have a plan to support RCU'ed slabs as well?
>

My only plan was to get the base support in. I didn't have a plan to
support RCU slabs but that's certainly something to be done in the
future.

Thanks,
Laura


^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 17:28             ` Dave Hansen
@ 2015-12-22 18:08               ` Christoph Lameter
  -1 siblings, 0 replies; 115+ messages in thread
From: Christoph Lameter @ 2015-12-22 18:08 UTC (permalink / raw)
  To: Dave Hansen
  Cc: kernel-hardening, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Laura Abbott, linux-mm, linux-kernel, Kees Cook

On Tue, 22 Dec 2015, Dave Hansen wrote:

> > Why would you use zeros? The point is just to clear the information right?
> > The regular poisoning does that.
>
> It then allows you to avoid the zeroing at allocation time.

Well, much of the code is expecting a zeroed object from the allocator,
and it's zeroed at that time. Zeroing makes the object cache-hot, which is
an important performance aspect.


^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 18:08               ` Christoph Lameter
@ 2015-12-22 18:19                 ` Dave Hansen
  -1 siblings, 0 replies; 115+ messages in thread
From: Dave Hansen @ 2015-12-22 18:19 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: kernel-hardening, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Laura Abbott, linux-mm, linux-kernel, Kees Cook

On 12/22/2015 10:08 AM, Christoph Lameter wrote:
> On Tue, 22 Dec 2015, Dave Hansen wrote:
>>> Why would you use zeros? The point is just to clear the information,
>>> right? The regular poisoning does that.
>>
>> It then allows you to avoid the zeroing at allocation time.
> 
> Well, much of the code is expecting a zeroed object from the allocator,
> and it's zeroed at that time. Zeroing makes the object cache-hot, which is
> an important performance aspect.

Yes, modifying this behavior has a performance impact.  It absolutely
needs to be evaluated, and I wouldn't want to speculate too much on how
good or bad any of the choices are.

Just to reiterate, I think we have 3 real choices here:

1. Zero at alloc, only when __GFP_ZERO
   (behavior today)
2. Poison at free, also Zero at alloc (when __GFP_ZERO)
   (this patch's proposed behavior, also what current poisoning does,
    doubles writes)
3. Zero at free, *don't* Zero at alloc (when __GFP_ZERO)
   (what I'm suggesting, possibly less perf impact vs. #2; see the sketch below)
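
A rough sketch of #3, with a hypothetical __OBJECT_ZERO_ON_FREE cache
flag standing in for whatever the real mechanism would be:

	/* free path: clear the object as it goes back on the freelist */
	if (s->flags & __OBJECT_ZERO_ON_FREE)
		memset(object, 0, s->object_size);

	/* alloc path: the usual __GFP_ZERO memset becomes redundant */
	if (unlikely(gfpflags & __GFP_ZERO) && object &&
	    !(s->flags & __OBJECT_ZERO_ON_FREE))
		memset(object, 0, s->object_size);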



^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 17:51       ` Laura Abbott
@ 2015-12-22 18:37         ` Mathias Krause
  -1 siblings, 0 replies; 115+ messages in thread
From: Mathias Krause @ 2015-12-22 18:37 UTC (permalink / raw)
  To: Laura Abbott
  Cc: kernel-hardening, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-mm,
	linux-kernel, Kees Cook

On 22 December 2015 at 18:51, Laura Abbott <laura@labbott.name> wrote:
>> [snip]
>>
>> Related to this, have you checked that the sanitization doesn't
>> interfere with the various slab handling schemes, namely RCU related
>> specialties? Not all caches are marked SLAB_DESTROY_BY_RCU, some use
>> call_rcu() instead, implicitly relying on the semantics RCU'ed slabs
>> permit, namely allowing a "use-after-free" access to be legitimate
>> within the RCU grace period. Scrubbing the object during that period
>> would break that assumption.
>
>
> I haven't looked into that. I was working off the assumption that if the
> regular SLAB debug poisoning worked, so would the sanitization. The
> regular debug poisoning only checks for SLAB_DESTROY_BY_RCU, so how does
> that work then?

Maybe it doesn't? ;)

How many systems, do you think, are running with enabled DEBUG_SLAB /
SLUB_DEBUG in production? Not so many, I'd guess. And the ones running
into issues probably just disable DEBUG_SLAB / SLUB_DEBUG.

Btw, SLUB not only looks for SLAB_DESTROY_BY_RCU but also excludes
"call_rcu slabs" via other mechanisms. As SLUB has been the default slab
allocator for quite some time now, even with SLUB_DEBUG enabled one
wouldn't be able to trigger RCU-related sanitization issues.

>> Speaking of RCU, do you have a plan to support RCU'ed slabs as well?
>>
>
> My only plan was to get the base support in. I didn't have a plan to
> support RCU slabs but that's certainly something to be done in the
> future.

"Base support", in my opinion, includes covering the buddy allocator
as well. Otherwise this feature is incomplete.

Regards,
Mathias

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 18:19                 ` Dave Hansen
@ 2015-12-22 19:13                   ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22 19:13 UTC (permalink / raw)
  To: Dave Hansen, Christoph Lameter
  Cc: kernel-hardening, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, linux-mm, linux-kernel, Kees Cook

On 12/22/15 10:19 AM, Dave Hansen wrote:
> On 12/22/2015 10:08 AM, Christoph Lameter wrote:
>> On Tue, 22 Dec 2015, Dave Hansen wrote:
>>>> Why would you use zeros? The point is just to clear the information,
>>>> right? The regular poisoning does that.
>>>
>>> It then allows you to avoid the zeroing at allocation time.
>>
>> Well, much of the code is expecting a zeroed object from the allocator,
>> and it's zeroed at that time. Zeroing makes the object cache-hot, which is
>> an important performance aspect.
>
> Yes, modifying this behavior has a performance impact.  It absolutely
> needs to be evaluated, and I wouldn't want to speculate too much on how
> good or bad any of the choices are.
>
> Just to reiterate, I think we have 3 real choices here:
>
> 1. Zero at alloc, only when __GFP_ZERO
>     (behavior today)
> 2. Poison at free, also Zero at alloc (when __GFP_ZERO)
>     (this patch's proposed behavior, also what current poisoning does,
>      doubles writes)
> 3. Zero at free, *don't* Zero at alloc (when __GFP_ZERO)
>     (what I'm suggesting, possibly less perf impact vs. #2)
>
>

Poisoning with non-zero memory makes it easier to determine that the error
came from accessing the sanitized memory vs. some other case. I don't think
the feature would be as strong if the memory were only zeroed rather than
filled with some other data value.
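
For example, SLUB's POISON_FREE pattern is 0x6b, so a pointer loaded out
of a poisoned object becomes 0x6b6b6b6b6b6b6b6b and the resulting oops
points straight at a use-after-free (foo_cache and the struct layout are
made up for illustration):

	struct foo *f = kmem_cache_alloc(foo_cache, GFP_KERNEL);
	kmem_cache_free(foo_cache, f);
	f->next->count++;	/* faults at an obviously poisoned address */

With zero sanitization the same dereference faults at or near NULL,
which is harder to tell apart from an ordinary NULL pointer bug.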

Thanks,
Laura

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 18:37         ` Mathias Krause
@ 2015-12-22 19:18           ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22 19:18 UTC (permalink / raw)
  To: Mathias Krause
  Cc: kernel-hardening, Christoph Lameter, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, linux-mm,
	linux-kernel, Kees Cook

On 12/22/15 10:37 AM, Mathias Krause wrote:
> On 22 December 2015 at 18:51, Laura Abbott <laura@labbott.name> wrote:
>>> [snip]
>>>
>>> Related to this, have you checked that the sanitization doesn't
>>> interfere with the various slab handling schemes, namely RCU related
>>> specialties? Not all caches are marked SLAB_DESTROY_BY_RCU, some use
>>> call_rcu() instead, implicitly relying on the semantics RCU'ed slabs
>>> permit, namely allowing a "use-after-free" access to be legitimate
>>> within the RCU grace period. Scrubbing the object during that period
>>> would break that assumption.
>>
>>
>> I haven't looked into that. I was working off the assumption that if the
>> regular SLAB debug poisoning worked, so would the sanitization. The
>> regular debug poisoning only checks for SLAB_DESTROY_BY_RCU, so how does
>> that work then?
>
> Maybe it doesn't? ;)
>
> How many systems, do you think, are running with enabled DEBUG_SLAB /
> SLUB_DEBUG in production? Not so many, I'd guess. And the ones running
> into issues probably just disable DEBUG_SLAB / SLUB_DEBUG.
>
> Btw, SLUB not only looks for SLAB_DESTROY_BY_RCU but also excludes
> "call_rcu slabs" via other mechanisms. As SLUB has been the default slab
> allocator for quite some time now, even with SLUB_DEBUG enabled one
> wouldn't be able to trigger RCU-related sanitization issues.
>

I've seen SLUB_DEBUG used in stress-testing situations, but you're right
about production and giving up if there are issues. I'll take a closer
look at this.

>>> Speaking of RCU, do you have a plan to support RCU'ed slabs as well?
>>>
>>
>> My only plan was to get the base support in. I didn't have a plan to
>> support RCU slabs but that's certainly something to be done in the
>> future.
>
> "Base support", in my opinion, includes covering the buddy allocator
> as well. Otherwise this feature is incomplete.

Point taken. I'll look at the buddy allocator post-holidays.

It was also pointed out I should be giving you full credit for this
feature originally. I apologize for not doing that. Thanks for
doing the original work and taking the time to review this series.

Thanks,
Laura

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 19:13                   ` Laura Abbott
@ 2015-12-22 19:32                     ` Dave Hansen
  -1 siblings, 0 replies; 115+ messages in thread
From: Dave Hansen @ 2015-12-22 19:32 UTC (permalink / raw)
  To: Laura Abbott, Christoph Lameter
  Cc: kernel-hardening, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, linux-mm, linux-kernel, Kees Cook

On 12/22/2015 11:13 AM, Laura Abbott wrote:
>> 3. Zero at free, *don't* Zero at alloc (when __GFP_ZERO)
>>     (what I'm suggesting, possibly less perf impact vs. #2)
> 
> Poisoning with non-zero memory makes it easier to determine that the error
> came from accessing the sanitized memory vs. some other case. I don't think
> the feature would be as strong if the memory were only zeroed rather than
> filled with some other data value.

How does that scenario work?  Your patch description says:

> +	  Use-after-free bugs for structures containing
> +	  pointers can also be detected as dereferencing the sanitized pointer
> +	  will generate an access violation.

In the case that we wrote all zeros, we'd be accessing userspace at a
known place where we don't generally allow memory to be mapped anyway.
Could you elaborate on a scenario where zeros are weaker than a random
poison value?

In any case (if a poison value is superior to 0's), it's a balance
between performance vs. the likelihood of the poisoned value being
tripped over.

I think the performance impact of this feature is going to be *the*
major thing that keeps folks from using it in practice.  I'm trying to
suggest a way that you _might_ preserve some performance, and get more
folks to use it.

1. Keep information from leaking (doesn't matter which value we write)
2. Detect use-after-free bugs (0's are less likely to be detected???)
3. Preserve performance (0's are likely to preserve more performance)

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 18:37         ` Mathias Krause
@ 2015-12-22 20:01           ` Christoph Lameter
  -1 siblings, 0 replies; 115+ messages in thread
From: Christoph Lameter @ 2015-12-22 20:01 UTC (permalink / raw)
  To: Mathias Krause
  Cc: Laura Abbott, kernel-hardening, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm, linux-kernel, Kees Cook

On Tue, 22 Dec 2015, Mathias Krause wrote:

> How many systems, do you think, are running with enabled DEBUG_SLAB /
> SLUB_DEBUG in production? Not so many, I'd guess. And the ones running
> into issues probably just disable DEBUG_SLAB / SLUB_DEBUG.

All systems run with SLUB_DEBUG in production. SLUB_DEBUG causes the code
for debugging to be compiled in. Then it can be enabled later with a
command line parameter.
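
For example, booting with

	slub_debug=P

enables object poisoning for all caches, and the per-cache setting is
then visible under /sys/kernel/slab/<cache>/poison.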


^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
  2015-12-22 16:08   ` Christoph Lameter
  (?)
@ 2015-12-22 20:04     ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22 20:04 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-mm, linux-kernel, Kees Cook, kernel-hardening

On 12/22/15 8:08 AM, Christoph Lameter wrote:
> On Mon, 21 Dec 2015, Laura Abbott wrote:
>
>> The biggest change from PAX_MEMORY_SANITIZE is that this feature sanitizes
>> the SL[AOU]B allocators only. My plan is to work on the buddy allocator
>> sanitization after this series gets picked up. A side effect of this is
>> that allocations which go directly to the buddy allocator (i.e. large
>> allocations) aren't sanitized. I'd like feedback about whether it's worth
>> it to add sanitization on that path directly or just use the page
>> allocator sanitization when that comes in.
>
> I am not sure what the point of this patchset is. We have a similar effect
> to sanitization already in the allocators through two mechanisms:
>
> 1. Slab poisoning
> 2. Allocation with GFP_ZERO
>
> I do not think we need a third one. You could accomplish your goals much
> easier without this code churn by either
>
> 1. Improve the existing poisoning mechanism. Ensure that there are no
>     gaps. Security sensitive kernel slab caches can then be created with
>     the POISONING flag set. Maybe add a Kconfig flag that enables
>     POISONING for each cache? What was the issue when you tried using
>     poisoning for sanitization?

The existing poisoning does work for sanitization, but it's still a debug
feature. It seemed more appropriate to keep debug features and non-debug
features separate, hence the separate option and configuration.

>
> 2. Add a mechanism that ensures that GFP_ZERO is set for each allocation.
>     That way every object you retrieve is zeroed and thus you have implied
>     sanitization. This also can be done in a rather simple way by changing
>     the GFP_KERNEL etc. constants to include __GFP_ZERO depending on a
>     Kconfig option. Or add some runtime setting of the gfp flags somewhere.
>

That's good for allocation, but sanitization is done on free. The goal is
to reduce any leftover data that might stick around while an object sits
unallocated on a slab.
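
Conceptually the hook sits on the free path, something like the sketch
below (the level variable and helper name are simplified stand-ins for
what the series actually adds):

	static inline void sanitize_object(struct kmem_cache *s, void *p)
	{
		if (sanitize_slab_level == SANITIZE_SLAB_OFF)
			return;
		if (s->flags & SLAB_NO_SANITIZE)
			return;
		memset(p, SLAB_MEMORY_SANITIZE_VALUE, s->object_size);
	}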
  
> Generally I would favor option #2 if you must have sanitization, because
> that is the only option that really gives you deterministic object
> contents on each allocation. Any halfway measure would not work, I think.
>
> Note also that most allocations are already either allocations that zero
> the content or they are immediately initializing the content of the
> allocated object. After all the object is not really usable if the
> content is random. You may be able to avoid this whole endeavor by
> auditing the kernel for locations where the object is not initialized
> after allocation.
>
> Once one recognizes the above, it seems that sanitization is pretty
> useless. It's just another pass of writing zeroes before the allocator or
> user of the allocated object sets up deterministic content of the object
> or -- in most cases -- zeroes it again.
>

The sanitization is going towards kernel hardening, which is designed to
help keep the kernel secure even when programmers screw up. Auditing
still won't catch everything. Sanitization is going towards the idea of
kernel self-protection, which is what grsecurity is known for and which
Kees Cook is trying to promote for mainline
(http://lwn.net/Articles/662219/).

Thanks,
Laura

^ permalink raw reply	[flat|nested] 115+ messages in thread


* [kernel-hardening] Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
@ 2015-12-22 20:04     ` Laura Abbott
  0 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2015-12-22 20:04 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Pekka Enberg, David Rientjes, Joonsoo Kim, Andrew Morton,
	linux-mm, linux-kernel, Kees Cook, kernel-hardening

On 12/22/15 8:08 AM, Christoph Lameter wrote:
> On Mon, 21 Dec 2015, Laura Abbott wrote:
>
>> The biggest change from PAX_MEMORY_SANTIIZE is that this feature sanitizes
>> the SL[AOU]B allocators only. My plan is to work on the buddy allocator
>> santization after this series gets picked up. A side effect of this is
>> that allocations which go directly to the buddy allocator (i.e. large
>> allocations) aren't sanitized. I'd like feedback about whether it's worth
>> it to add sanitization on that path directly or just use the page
>> allocator sanitization when that comes in.
>
> I am not sure what the point of this patchset is. We have a similar effect
> to sanitization already in the allocators through two mechanisms:
>
> 1. Slab poisoning
> 2. Allocation with GFP_ZERO
>
> I do not think we need a third one. You could accomplish your goals much
> easier without this code churn by either
>
> 1. Improve the existing poisoning mechanism. Ensure that there are no
>     gaps. Security sensitive kernel slab caches can then be created with
>     the  POISONING flag set. Maybe add a Kconfig flag that enables
>     POISONING for each cache? What was the issue when you tried using
>     posining for sanitization?

The existing poisoning does work for sanitization but it's still a debug
feature. It seemed more appropriate to keep debug features and non-debug
features separate hence the separate option and configuration.

>
> 2. Add a mechanism that ensures that GFP_ZERO is set for each allocation.
>     That way every object you retrieve is zeroed and thus you have implied
>     sanitization. This also can be done in a rather simple way by changing
>     the  GFP_KERNEL etc constants to include __GFP_ZERO depending on a
>     Kconfig option. Or add some runtime setting of the gfp flags somewhere.
>

That's good for allocation but sanitization is done on free. The goal
is to reduce any leftover data that might be around while on an unallocated
slab.
  
> Generally I would favor option #2 if you must have sanitization because
> that is the only option to really give you a deterministic content of
> object on each allocation. Any half way measures would not work I think.
>
> Note also that most allocations are already either allocations that zero
> the content or they are immediately initializing the content of the
> allocated object. After all the object is not really usable if the
> content is random. You may be able to avoid this whole endeavor by
> auditing the kernel for locations where the object is not initialized
> after allocation.
>
> Once one recognizes the above it seems that sanitization is pretty
> useless. Its just another pass of writing zeroes before the allocator or
> uer of the allocated object sets up deterministic content of the object or
> -- in most cases -- zeroes it again.
>

The sanitization is going towards kernel hardening which is designed to
help keep the kernel secure even when programmers screwed up. Auditing
still won't catch everything. sanitization is going towards the idea
of kernel self-protection which is what Grsecurity is known for
and Kees Cook is trying to promote for mainline
(http://lwn.net/Articles/662219/)

Thanks,
Laura

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 20:01           ` Christoph Lameter
@ 2015-12-22 20:06             ` Mathias Krause
  -1 siblings, 0 replies; 115+ messages in thread
From: Mathias Krause @ 2015-12-22 20:06 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Laura Abbott, kernel-hardening, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, linux-mm, linux-kernel, Kees Cook

On 22 December 2015 at 21:01, Christoph Lameter <cl@linux.com> wrote:
> On Tue, 22 Dec 2015, Mathias Krause wrote:
>
>> How many systems, do you think, are running with DEBUG_SLAB /
>> SLUB_DEBUG enabled in production? Not so many, I'd guess. And the ones running
>> into issues probably just disable DEBUG_SLAB / SLUB_DEBUG.
>
> All systems run with SLUB_DEBUG in production. SLUB_DEBUG causes the code
> for debugging to be compiled in. Then it can be enabled later with a
> command line parameter.

Indeed, I meant CONFIG_SLUB_DEBUG_ON, i.e. slab cache debugging, including
poisoning, both compiled in and enabled by default.
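
That's the difference between having the hooks compiled in and actually
paying for them; with SLUB_DEBUG built, poisoning can still be switched on
at boot, e.g.:

	slub_debug=P              poison all caches
	slub_debug=P,kmalloc-64   poison only the named cache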

Regards,
Mathias

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 1/7] mm/slab_common.c: Add common support for slab sanitization
  2015-12-22  3:40   ` Laura Abbott
  (?)
@ 2015-12-22 20:48     ` Vlastimil Babka
  -1 siblings, 0 replies; 115+ messages in thread
From: Vlastimil Babka @ 2015-12-22 20:48 UTC (permalink / raw)
  To: Laura Abbott, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton
  Cc: linux-mm, linux-kernel, Kees Cook, kernel-hardening, Mathias Krause

On 22.12.2015 4:40, Laura Abbott wrote:
> Each of the different allocators (SLAB/SLUB/SLOB) handles
> clearing of objects differently depending on configuration.
> Add common infrastructure for selecting sanitization levels
> (off, slow path only, partial, full) and marking caches as
> appropriate.
> 
> All credit for the original work should be given to Brad Spengler and
> the PaX Team.
> 
> Signed-off-by: Laura Abbott <laura@labbott.name>
>  
> +#ifdef CONFIG_SLAB_MEMORY_SANITIZE
> +#ifdef CONFIG_X86_64
> +#define SLAB_MEMORY_SANITIZE_VALUE       '\xfe'
> +#else
> +#define SLAB_MEMORY_SANITIZE_VALUE       '\xff'
> +#endif
> +enum slab_sanitize_mode {
> +	/* No sanitization */
> +	SLAB_SANITIZE_OFF = 0,
> +
> +	/* Partial sanitization happens only on the slow path */
> +	SLAB_SANITIZE_PARTIAL_SLOWPATH = 1,

Can you explain more about this variant? I wonder who might find it useful,
except someone getting a false sense of security on the cheap.
It sounds like wanting to have the cake and eat it too :)
I would be surprised if such a half-solution (IMHO) existed in the original
PAX_MEMORY_SANITIZE too?

Or is there something that guarantees that objects freed on the hot path won't
stay there for long, so the danger of a leak is low? (And what about
use-after-free?) It depends on further slab activity, no? (I'm not that familiar
with SLUB, but I would expect the hot path there to be similar to SLAB freeing
the object onto the per-cpu array_cache. But it seems PARTIAL_SLOWPATH is not
implemented for SLAB, so there might be some fundamental difference I'm missing.)
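
To make the question concrete, the free path I have in mind is roughly this
shape (a heavily simplified sketch of mm/slub.c, not the literal code):

	if (likely(page == c->page)) {
		/* fast path: object goes straight back on the per-cpu
		 * freelist; SANITIZE_PARTIAL_SLOWPATH would leave its
		 * contents intact here */
		set_freepointer(s, object, c->freelist);
		c->freelist = object;
	} else {
		/* slow path: the only place the slow-path-only mode
		 * would wipe the object */
		__slab_free(s, page, object, addr);
	}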



^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
  2015-12-22 20:04     ` Laura Abbott
  (?)
@ 2016-01-06  0:09       ` Kees Cook
  -1 siblings, 0 replies; 115+ messages in thread
From: Kees Cook @ 2016-01-06  0:09 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On Tue, Dec 22, 2015 at 12:04 PM, Laura Abbott <laura@labbott.name> wrote:
> On 12/22/15 8:08 AM, Christoph Lameter wrote:
>>
>> On Mon, 21 Dec 2015, Laura Abbott wrote:
>>
>>> The biggest change from PAX_MEMORY_SANITIZE is that this feature
>>> sanitizes
>>> the SL[AOU]B allocators only. My plan is to work on the buddy allocator
>>> sanitization after this series gets picked up. A side effect of this is
>>> that allocations which go directly to the buddy allocator (i.e. large
>>> allocations) aren't sanitized. I'd like feedback about whether it's worth
>>> it to add sanitization on that path directly or just use the page
>>> allocator sanitization when that comes in.

This looks great! I love the added lkdtm tests, too. Very cool.

>> I am not sure what the point of this patchset is. We have a similar effect
>> to sanitization already in the allocators through two mechanisms:
>>
>> 1. Slab poisoning
>> 2. Allocation with GFP_ZERO
>>
>> I do not think we need a third one. You could accomplish your goals much
>> more easily without this code churn by either
>>
>> 1. Improve the existing poisoning mechanism. Ensure that there are no
>>     gaps. Security sensitive kernel slab caches can then be created with
>>     the  POISONING flag set. Maybe add a Kconfig flag that enables
>>     POISONING for each cache? What was the issue when you tried using
>>     poisoning for sanitization?
>
> The existing poisoning does work for sanitization, but it's still a debug
> feature. It seemed more appropriate to keep debug and non-debug features
> separate, hence the separate option and configuration.

What stuff is intertwined in the existing poisoning that makes it
incompatible/orthogonal?

>> 2. Add a mechanism that ensures that GFP_ZERO is set for each allocation.
>>     That way every object you retrieve is zeroed and thus you have implied
>>     sanitization. This also can be done in a rather simple way by changing
>>     the  GFP_KERNEL etc constants to include __GFP_ZERO depending on a
>>     Kconfig option. Or add some runtime setting of the gfp flags
>> somewhere.
>>
>
> That's good for allocation, but sanitization is done on free. The goal
> is to reduce any leftover data that might linger in an unallocated
> slab object.

Right -- we want this on free. I wonder if we could also add the
always-zero option as an additional improvement. A separate config,
since I suspect the overhead would be ugly.
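
Something along these lines is what I have in mind, behind a separate config
(the CONFIG_SLAB_ALWAYS_ZERO name below is hypothetical):

	/* sketch only: CONFIG_SLAB_ALWAYS_ZERO is a made-up option name */
	#ifdef CONFIG_SLAB_ALWAYS_ZERO
	#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS | __GFP_ZERO)
	#else
	#define GFP_KERNEL	(__GFP_RECLAIM | __GFP_IO | __GFP_FS)
	#endif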

>> Generally I would favor option #2 if you must have sanitization because
>> that is the only option to really give you a deterministic content of
>> object on each allocation. Any halfway measures would not work, I think.
>>
>> Note also that most allocations are already either allocations that zero
>> the content or they are immediately initializing the content of the
>> allocated object. After all the object is not really usable if the
>> content is random. You may be able to avoid this whole endeavor by
>> auditing the kernel for locations where the object is not initialized
>> after allocation.
>>
>> Once one recognizes the above it seems that sanitization is pretty
>> useless. It's just another pass of writing zeroes before the allocator or
>> user of the allocated object sets up deterministic content of the object or
>> -- in most cases -- zeroes it again.
>>
>
> Sanitization is part of kernel hardening, which is designed to keep the
> kernel secure even when programmers make mistakes. Auditing still won't
> catch everything. Sanitization follows the idea of kernel self-protection,
> which grsecurity is known for and which Kees Cook is trying to promote
> for mainline (http://lwn.net/Articles/662219/)

Yup, well said. Auditing is important, and we're already doing it, but
we want to catch mistakes at runtime: we'll never be free of bugs, bug
lifetimes are measured in years, and end users are frequently forced to
run Linux with additional non-upstream code, so we want to protect them
from those mistakes as well.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 7/7] lkdtm: Add READ_AFTER_FREE test
  2015-12-22  3:40   ` Laura Abbott
  (?)
@ 2016-01-06  0:15     ` Kees Cook
  -1 siblings, 0 replies; 115+ messages in thread
From: Kees Cook @ 2016-01-06  0:15 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening, Arnd Bergmann,
	Greg Kroah-Hartman

On Mon, Dec 21, 2015 at 7:40 PM, Laura Abbott <laura@labbott.name> wrote:
>
> In a similar manner to WRITE_AFTER_FREE, add a READ_AFTER_FREE
> test to exercise free-poisoning features. Sample output when
> no poison is present:
>
> [   20.222501] lkdtm: Performing direct entry READ_AFTER_FREE
> [   20.226163] lkdtm: Freed val: 12345678
>
> with poison:
>
> [   24.203748] lkdtm: Performing direct entry READ_AFTER_FREE
> [   24.207261] general protection fault: 0000 [#1] SMP
> [   24.208193] Modules linked in:
> [   24.208193] CPU: 0 PID: 866 Comm: sh Not tainted 4.4.0-rc5-work+ #108
>
> Cc: Arnd Bergmann <arnd@arndb.de>
> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
> Signed-off-by: Laura Abbott <laura@labbott.name>
> ---
>  drivers/misc/lkdtm.c | 29 +++++++++++++++++++++++++++++
>  1 file changed, 29 insertions(+)
>
> diff --git a/drivers/misc/lkdtm.c b/drivers/misc/lkdtm.c
> index 11fdadc..c641fb7 100644
> --- a/drivers/misc/lkdtm.c
> +++ b/drivers/misc/lkdtm.c
> @@ -92,6 +92,7 @@ enum ctype {
>         CT_UNALIGNED_LOAD_STORE_WRITE,
>         CT_OVERWRITE_ALLOCATION,
>         CT_WRITE_AFTER_FREE,
> +       CT_READ_AFTER_FREE,
>         CT_SOFTLOCKUP,
>         CT_HARDLOCKUP,
>         CT_SPINLOCKUP,
> @@ -129,6 +130,7 @@ static char* cp_type[] = {
>         "UNALIGNED_LOAD_STORE_WRITE",
>         "OVERWRITE_ALLOCATION",
>         "WRITE_AFTER_FREE",
> +       "READ_AFTER_FREE",
>         "SOFTLOCKUP",
>         "HARDLOCKUP",
>         "SPINLOCKUP",
> @@ -417,6 +419,33 @@ static void lkdtm_do_action(enum ctype which)
>                 memset(data, 0x78, len);
>                 break;
>         }
> +       case CT_READ_AFTER_FREE: {
> +               int **base;
> +               int *val, *tmp;
> +
> +               base = kmalloc(1024, GFP_KERNEL);
> +               if (!base)
> +                       return;
> +
> +               val = kmalloc(1024, GFP_KERNEL);
> +               if (!val)
> +                       return;

For both of these test-failure returns, I think there should be a
pr_warn too (see CT_EXEC_USERSPACE).

> +
> +               *val = 0x12345678;
> +
> +               /*
> +                * Don't just use the first entry since that's where the
> +                * freelist goes for the slab allocator
> +                */
> +               base[1] = val;

Maybe just aim at the middle, in case allocator freelist tracking ever
grows? base[1024/sizeof(int)/2] or something?

> +               kfree(base);
> +
> +               tmp = base[1];
> +               pr_info("Freed val: %x\n", *tmp);

Instead of depending on the deref to fail, maybe just use a simple
BUG_ON to test that the value did actually change? Or, change the
pr_info to "Failed to Oops when reading freed value: ..." just to be
slightly more verbose about what failed?
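
Putting those suggestions together, the case might end up looking something
like this (a sketch only: the middle-of-buffer index and message strings are
illustrative, and it also frees base when the second allocation fails, which
the posted version leaks):

	case CT_READ_AFTER_FREE: {
		int **base;
		int *val;
		size_t mid = 1024 / sizeof(*base) / 2;

		base = kmalloc(1024, GFP_KERNEL);
		if (!base) {
			pr_warn("Unable to allocate base\n");
			return;
		}

		val = kmalloc(1024, GFP_KERNEL);
		if (!val) {
			pr_warn("Unable to allocate val\n");
			kfree(base);
			return;
		}

		*val = 0x12345678;
		/* aim at the middle, clear of any freelist metadata the
		 * allocator keeps near the edges */
		base[mid] = val;
		kfree(base);

		/* under poisoning this deref should Oops; report if not */
		pr_info("Failed to Oops when reading freed value: %x\n",
			*base[mid]);

		kfree(val);
		break;
	}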

> +
> +               kfree(val);
> +               break;
> +       }
>         case CT_SOFTLOCKUP:
>                 preempt_disable();
>                 for (;;)
> --
> 2.5.0
>

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 1/7] mm/slab_common.c: Add common support for slab sanitization
  2015-12-22 20:48     ` Vlastimil Babka
  (?)
@ 2016-01-06  0:17       ` Kees Cook
  -1 siblings, 0 replies; 115+ messages in thread
From: Kees Cook @ 2016-01-06  0:17 UTC (permalink / raw)
  To: Vlastimil Babka
  Cc: Laura Abbott, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Andrew Morton, Linux-MM, LKML, kernel-hardening,
	Mathias Krause

On Tue, Dec 22, 2015 at 12:48 PM, Vlastimil Babka <vbabka@suse.cz> wrote:
> On 22.12.2015 4:40, Laura Abbott wrote:
>> Each of the different allocators (SLAB/SLUB/SLOB) handles
>> clearing of objects differently depending on configuration.
>> Add common infrastructure for selecting sanitization levels
>> (off, slow path only, partial, full) and marking caches as
>> appropriate.
>>
>> All credit for the original work should be given to Brad Spengler and
>> the PaX Team.
>>
>> Signed-off-by: Laura Abbott <laura@labbott.name>
>>
>> +#ifdef CONFIG_SLAB_MEMORY_SANITIZE
>> +#ifdef CONFIG_X86_64
>> +#define SLAB_MEMORY_SANITIZE_VALUE       '\xfe'
>> +#else
>> +#define SLAB_MEMORY_SANITIZE_VALUE       '\xff'
>> +#endif
>> +enum slab_sanitize_mode {
>> +     /* No sanitization */
>> +     SLAB_SANITIZE_OFF = 0,
>> +
>> +     /* Partial sanitization happens only on the slow path */
>> +     SLAB_SANITIZE_PARTIAL_SLOWPATH = 1,
>
> Can you explain more about this variant? I wonder who might find it useful,
> except someone getting a false sense of security on the cheap.
> It sounds like wanting to have the cake and eat it too :)
> I would be surprised if such a half-solution (IMHO) existed in the original
> PAX_MEMORY_SANITIZE too?
>
> Or is there something that guarantees that objects freed on the hot path won't
> stay there for long, so the danger of a leak is low? (And what about
> use-after-free?) It depends on further slab activity, no? (I'm not that familiar
> with SLUB, but I would expect the hot path there to be similar to SLAB freeing
> the object onto the per-cpu array_cache. But it seems PARTIAL_SLOWPATH is not
> implemented for SLAB, so there might be some fundamental difference I'm missing.)

Perhaps the partial sanitize could be a separate patch so its
features are more logically separated?

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 1/7] mm/slab_common.c: Add common support for slab sanitization
  2015-12-22  3:40   ` Laura Abbott
  (?)
@ 2016-01-06  0:19     ` Kees Cook
  -1 siblings, 0 replies; 115+ messages in thread
From: Kees Cook @ 2016-01-06  0:19 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On Mon, Dec 21, 2015 at 7:40 PM, Laura Abbott <laura@labbott.name> wrote:
>
> Each of the different allocators (SLAB/SLUB/SLOB) handles
> clearing of objects differently depending on configuration.
> Add common infrastructure for selecting sanitization levels
> (off, slow path only, partial, full) and marking caches as
> appropriate.
>
> All credit for the original work should be given to Brad Spengler and
> the PaX Team.
>
> Signed-off-by: Laura Abbott <laura@labbott.name>
> ---
>  include/linux/slab.h     |  7 +++++++
>  include/linux/slab_def.h |  4 ++++
>  mm/slab.h                | 22 ++++++++++++++++++++
>  mm/slab_common.c         | 53 ++++++++++++++++++++++++++++++++++++++++++++++++
>  4 files changed, 86 insertions(+)
>
> diff --git a/include/linux/slab.h b/include/linux/slab.h
> index 2037a86..35c1e2d 100644
> --- a/include/linux/slab.h
> +++ b/include/linux/slab.h
> @@ -23,6 +23,13 @@
>  #define SLAB_DEBUG_FREE                0x00000100UL    /* DEBUG: Perform (expensive) checks on free */
>  #define SLAB_RED_ZONE          0x00000400UL    /* DEBUG: Red zone objs in a cache */
>  #define SLAB_POISON            0x00000800UL    /* DEBUG: Poison objects */
> +
> +#ifdef CONFIG_SLAB_MEMORY_SANITIZE
> +#define SLAB_NO_SANITIZE        0x00001000UL    /* Do not sanitize objs on free */
> +#else
> +#define SLAB_NO_SANITIZE        0x00000000UL
> +#endif
> +
>  #define SLAB_HWCACHE_ALIGN     0x00002000UL    /* Align objs on cache lines */
>  #define SLAB_CACHE_DMA         0x00004000UL    /* Use GFP_DMA memory */
>  #define SLAB_STORE_USER                0x00010000UL    /* DEBUG: Store the last owner for bug hunting */
> diff --git a/include/linux/slab_def.h b/include/linux/slab_def.h
> index 33d0490..4c3fb93 100644
> --- a/include/linux/slab_def.h
> +++ b/include/linux/slab_def.h
> @@ -69,6 +69,10 @@ struct kmem_cache {
>          */
>         int obj_offset;
>  #endif /* CONFIG_DEBUG_SLAB */
> +#ifdef CONFIG_SLAB_MEMORY_SANITIZE
> +       atomic_t sanitized;
> +       atomic_t not_sanitized;
> +#endif
>  #ifdef CONFIG_MEMCG_KMEM
>         struct memcg_cache_params memcg_params;
>  #endif
> diff --git a/mm/slab.h b/mm/slab.h
> index 7b60871..b54b636 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -66,6 +66,28 @@ extern struct list_head slab_caches;
>  /* The slab cache that manages slab cache information */
>  extern struct kmem_cache *kmem_cache;
>
> +#ifdef CONFIG_SLAB_MEMORY_SANITIZE
> +#ifdef CONFIG_X86_64
> +#define SLAB_MEMORY_SANITIZE_VALUE       '\xfe'
> +#else
> +#define SLAB_MEMORY_SANITIZE_VALUE       '\xff'
> +#endif
> +enum slab_sanitize_mode {
> +       /* No sanitization */
> +       SLAB_SANITIZE_OFF = 0,
> +
> +       /* Partial sanitization happens only on the slow path */
> +       SLAB_SANITIZE_PARTIAL_SLOWPATH = 1,
> +
> +       /* Partial sanitization happens everywhere */
> +       SLAB_SANITIZE_PARTIAL = 2,
> +
> +       /* Sanitization happens on all slabs, all paths */
> +       SLAB_SANITIZE_FULL = 3,
> +};
> +extern enum slab_sanitize_mode sanitize_slab;
> +#endif
> +
>  unsigned long calculate_alignment(unsigned long flags,
>                 unsigned long align, unsigned long size);
>
> diff --git a/mm/slab_common.c b/mm/slab_common.c
> index 3c6a86b..4b28f70 100644
> --- a/mm/slab_common.c
> +++ b/mm/slab_common.c
> @@ -30,6 +30,42 @@ LIST_HEAD(slab_caches);
>  DEFINE_MUTEX(slab_mutex);
>  struct kmem_cache *kmem_cache;
>
> +
> +#ifdef CONFIG_SLAB_MEMORY_SANITIZE
> +enum slab_sanitize_mode sanitize_slab = SLAB_SANITIZE_PARTIAL;
> +static int __init sanitize_slab_setup(char *str)
> +{
> +       if (!str)
> +               return 0;
> +
> +       if (!strcmp(str, "0") || !strcmp(str, "off")) {
> +               pr_info("slab sanitization disabled");
> +               sanitize_slab = SLAB_SANITIZE_OFF;
> +       } else if (!strcmp(str, "1") || !strcmp(str, "slow")) {
> +               pr_info("slab sanitization partial slow path");
> +               sanitize_slab = SLAB_SANITIZE_PARTIAL_SLOWPATH;
> +       } else if (!strcmp(str, "2") || !strcmp(str, "partial")) {
> +               pr_info("slab sanitization partial");
> +               sanitize_slab = SLAB_SANITIZE_PARTIAL;
> +       } else if (!strcmp(str, "3") || !strcmp(str, "full")) {
> +               pr_info("slab sanitization full");
> +               sanitize_slab = SLAB_SANITIZE_FULL;
> +       } else
> +               pr_err("slab sanitization: unsupported option '%s'\n", str);
> +
> +       return 0;
> +}
> +early_param("sanitize_slab", sanitize_slab_setup);

Bikeshedding: maybe use the "slab_sanitize" word order, to match
"slab_nomerge", "slab_max_order", "slub_debug", etc.?

> +
> +static inline bool sanitize_mergeable(unsigned long flags)
> +{
> +       return (sanitize_slab == SLAB_SANITIZE_OFF) || (flags & SLAB_NO_SANITIZE);
> +}
> +#else
> +static inline bool sanitize_mergeable(unsigned long flags) { return true; }
> +#endif
> +
> +
>  /*
>   * Set of flags that will prevent slab merging
>   */
> @@ -227,6 +263,9 @@ static inline void destroy_memcg_params(struct kmem_cache *s)
>   */
>  int slab_unmergeable(struct kmem_cache *s)
>  {
> +       if (!sanitize_mergeable(s->flags))
> +               return 1;
> +
>         if (slab_nomerge || (s->flags & SLAB_NEVER_MERGE))
>                 return 1;
>
> @@ -250,6 +289,9 @@ struct kmem_cache *find_mergeable(size_t size, size_t align,
>  {
>         struct kmem_cache *s;
>
> +       if (!sanitize_mergeable(flags))
> +               return NULL;
> +
>         if (slab_nomerge || (flags & SLAB_NEVER_MERGE))
>                 return NULL;
>
> @@ -407,6 +449,13 @@ kmem_cache_create(const char *name, size_t size, size_t align,
>          */
>         flags &= CACHE_CREATE_MASK;
>
> +#ifdef CONFIG_SLAB_MEMORY_SANITIZE
> +       if (sanitize_slab == SLAB_SANITIZE_OFF || (flags & SLAB_DESTROY_BY_RCU))
> +               flags |= SLAB_NO_SANITIZE;
> +       else if (sanitize_slab == SLAB_SANITIZE_FULL)
> +               flags &= ~SLAB_NO_SANITIZE;
> +#endif
> +
>         s = __kmem_cache_alias(name, size, align, flags, ctor);
>         if (s)
>                 goto out_unlock;
> @@ -1050,6 +1099,10 @@ static void print_slabinfo_header(struct seq_file *m)
>                  "<error> <maxfreeable> <nodeallocs> <remotefrees> <alienoverflow>");
>         seq_puts(m, " : cpustat <allochit> <allocmiss> <freehit> <freemiss>");
>  #endif
> +#ifdef CONFIG_SLAB_MEMORY_SANITIZE
> +       seq_puts(m, " : sanitization <sanitized> <not_sanitized>");
> +#endif
> +
>         seq_putc(m, '\n');
>  }
>
> --
> 2.5.0
>

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 5/7] mm: Mark several cases as SLAB_NO_SANITIZE
  2015-12-22  3:40   ` Laura Abbott
  (?)
@ 2016-01-06  0:21     ` Kees Cook
  -1 siblings, 0 replies; 115+ messages in thread
From: Kees Cook @ 2016-01-06  0:21 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On Mon, Dec 21, 2015 at 7:40 PM, Laura Abbott <laura@labbott.name> wrote:
>
> Sanitization is useful for security but comes at a performance cost
> from clearing objects on free. Mark select caches as SLAB_NO_SANITIZE so
> sanitization will not happen under the default configuration. The

Can you describe why these were selected?

> kernel may be booted with the proper command line option to allow these
> caches to be sanitized.

Might be good to specifically mention the command line used to
sanitize even these caches.
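
For reference, the values patch 1's sanitize_slab_setup() accepts on
the kernel command line (per the parser and the kmem_cache_create()
hunk quoted in that patch):

	sanitize_slab=0 / off      - no sanitization
	sanitize_slab=1 / slow     - partial, slow path only
	sanitize_slab=2 / partial  - partial everywhere (the default)
	sanitize_slab=3 / full     - sanitize everything: SLAB_NO_SANITIZE
	                             is cleared at cache creation, though
	                             SLAB_DESTROY_BY_RCU caches stay exempt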

-Kees

>
> All credit for the original work should be given to Brad Spengler and
> the PaX Team.
>
> Signed-off-by: Laura Abbott <laura@labbott.name>
> ---
> This is the initial set of excludes that the grsecurity patches had.
> More may need to be added/removed as the series is tested.
> ---
>  fs/buffer.c       |  2 +-
>  fs/dcache.c       |  2 +-
>  kernel/fork.c     |  2 +-
>  mm/rmap.c         |  4 ++--
>  mm/slab.h         |  2 +-
>  net/core/skbuff.c | 16 ++++++++--------
>  6 files changed, 14 insertions(+), 14 deletions(-)
>
> diff --git a/fs/buffer.c b/fs/buffer.c
> index 4f4cd95..f19e4ab 100644
> --- a/fs/buffer.c
> +++ b/fs/buffer.c
> @@ -3417,7 +3417,7 @@ void __init buffer_init(void)
>         bh_cachep = kmem_cache_create("buffer_head",
>                         sizeof(struct buffer_head), 0,
>                                 (SLAB_RECLAIM_ACCOUNT|SLAB_PANIC|
> -                               SLAB_MEM_SPREAD),
> +                               SLAB_MEM_SPREAD|SLAB_NO_SANITIZE),
>                                 NULL);
>
>         /*
> diff --git a/fs/dcache.c b/fs/dcache.c
> index 5c33aeb..470f6be 100644
> --- a/fs/dcache.c
> +++ b/fs/dcache.c
> @@ -3451,7 +3451,7 @@ void __init vfs_caches_init_early(void)
>  void __init vfs_caches_init(void)
>  {
>         names_cachep = kmem_cache_create("names_cache", PATH_MAX, 0,
> -                       SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
> +                       SLAB_NO_SANITIZE|SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
>
>         dcache_init();
>         inode_init();
> diff --git a/kernel/fork.c b/kernel/fork.c
> index fce002e..35db9c3 100644
> --- a/kernel/fork.c
> +++ b/kernel/fork.c
> @@ -1868,7 +1868,7 @@ void __init proc_caches_init(void)
>         mm_cachep = kmem_cache_create("mm_struct",
>                         sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
>                         SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL);
> -       vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC);
> +       vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_NO_SANITIZE);
>         mmap_init();
>         nsproxy_cache_init();
>  }
> diff --git a/mm/rmap.c b/mm/rmap.c
> index b577fbb..74296d9 100644
> --- a/mm/rmap.c
> +++ b/mm/rmap.c
> @@ -428,8 +428,8 @@ static void anon_vma_ctor(void *data)
>  void __init anon_vma_init(void)
>  {
>         anon_vma_cachep = kmem_cache_create("anon_vma", sizeof(struct anon_vma),
> -                       0, SLAB_DESTROY_BY_RCU|SLAB_PANIC, anon_vma_ctor);
> -       anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC);
> +               0, SLAB_DESTROY_BY_RCU|SLAB_PANIC|SLAB_NO_SANITIZE, anon_vma_ctor);
> +       anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC|SLAB_NO_SANITIZE);
>  }
>
>  /*
> diff --git a/mm/slab.h b/mm/slab.h
> index b54b636..6de99da 100644
> --- a/mm/slab.h
> +++ b/mm/slab.h
> @@ -137,7 +137,7 @@ static inline unsigned long kmem_cache_flags(unsigned long object_size,
>
>  /* Legal flag mask for kmem_cache_create(), for various configurations */
>  #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
> -                        SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS )
> +                        SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS | SLAB_NO_SANITIZE)
>
>  #if defined(CONFIG_DEBUG_SLAB)
>  #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
> index b2df375..1d499ea 100644
> --- a/net/core/skbuff.c
> +++ b/net/core/skbuff.c
> @@ -3316,15 +3316,15 @@ done:
>  void __init skb_init(void)
>  {
>         skbuff_head_cache = kmem_cache_create("skbuff_head_cache",
> -                                             sizeof(struct sk_buff),
> -                                             0,
> -                                             SLAB_HWCACHE_ALIGN|SLAB_PANIC,
> -                                             NULL);
> +                               sizeof(struct sk_buff),
> +                               0,
> +                               SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NO_SANITIZE,
> +                               NULL);
>         skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
> -                                               sizeof(struct sk_buff_fclones),
> -                                               0,
> -                                               SLAB_HWCACHE_ALIGN|SLAB_PANIC,
> -                                               NULL);
> +                               sizeof(struct sk_buff_fclones),
> +                               0,
> +                               SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NO_SANITIZE,
> +                               NULL);
>  }
>
>  /**
> --
> 2.5.0
>



-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2015-12-22 19:13                   ` Laura Abbott
  (?)
@ 2016-01-06  0:29                     ` Kees Cook
  -1 siblings, 0 replies; 115+ messages in thread
From: Kees Cook @ 2016-01-06  0:29 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Dave Hansen, Christoph Lameter, kernel-hardening, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Linux-MM, LKML

On Tue, Dec 22, 2015 at 11:13 AM, Laura Abbott <laura@labbott.name> wrote:
> On 12/22/15 10:19 AM, Dave Hansen wrote:
>>
>> On 12/22/2015 10:08 AM, Christoph Lameter wrote:
>>>
>>> On Tue, 22 Dec 2015, Dave Hansen wrote:
>>>>>
>>>>> Why would you use zeros? The point is just to clear the information,
>>>>> right?
>>>>> The regular poisoning does that.
>>>>
>>>>
>>>> It then allows you to avoid the zeroing at allocation time.
>>>
>>>
>>> Well, much of the code expects a zeroed object from the allocator, and
>>> it's zeroed at that time. Zeroing makes the object cache-hot, which is an
>>> important performance aspect.
>>
>>
>> Yes, modifying this behavior has a performance impact.  It absolutely
>> needs to be evaluated, and I wouldn't want to speculate too much on how
>> good or bad any of the choices are.
>>
>> Just to reiterate, I think we have 3 real choices here:
>>
>> 1. Zero at alloc, only when __GFP_ZERO
>>     (behavior today)
>> 2. Poison at free, also Zero at alloc (when __GFP_ZERO)
>>     (this patch's proposed behavior, also what current poisoning does,
>>      doubles writes)
>> 3. Zero at free, *don't* Zero at alloc (when __GFP_ZERO)
>>     (what I'm suggesting, possibly less perf impact vs. #2)
>>
>>
>
> Poisoning with a non-zero value makes it easier to determine that the error
> came from accessing sanitized memory vs. some other cause. I don't think
> the feature would be as strong if the memory were only zeroed rather than
> set to some other data value.

I would tend to agree. If there are significant perf improvements for
"3" above, that should be easy to add on later as another choice.

-Kees

-- 
Kees Cook
Chrome OS & Brillo Security

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 1/7] mm/slab_common.c: Add common support for slab sanitization
  2016-01-06  0:17       ` Kees Cook
  (?)
@ 2016-01-06  2:06         ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2016-01-06  2:06 UTC (permalink / raw)
  To: Kees Cook, Vlastimil Babka
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening, Mathias Krause

On 1/5/16 4:17 PM, Kees Cook wrote:
> On Tue, Dec 22, 2015 at 12:48 PM, Vlastimil Babka <vbabka@suse.cz> wrote:
>> On 22.12.2015 4:40, Laura Abbott wrote:
>>> Each of the different allocators (SLAB/SLUB/SLOB) handles
>>> clearing of objects differently depending on configuration.
>>> Add common infrastructure for selecting sanitization levels
>>> (off, slow path only, partial, full) and marking caches as
>>> appropriate.
>>>
>>> All credit for the original work should be given to Brad Spengler and
>>> the PaX Team.
>>>
>>> Signed-off-by: Laura Abbott <laura@labbott.name>
>>>
>>> +#ifdef CONFIG_SLAB_MEMORY_SANITIZE
>>> +#ifdef CONFIG_X86_64
>>> +#define SLAB_MEMORY_SANITIZE_VALUE       '\xfe'
>>> +#else
>>> +#define SLAB_MEMORY_SANITIZE_VALUE       '\xff'
>>> +#endif
>>> +enum slab_sanitize_mode {
>>> +     /* No sanitization */
>>> +     SLAB_SANITIZE_OFF = 0,
>>> +
>>> +     /* Partial sanitization happens only on the slow path */
>>> +     SLAB_SANITIZE_PARTIAL_SLOWPATH = 1,
>>
>> Can you explain more about this variant? I wonder who might find it useful
>> except someone getting a false sense of security, but cheaper.
>> It sounds like wanting to have the cake and eat it too :)
>> I would be surprised if such a half-solution (IMHO) existed in the original
>> PAX_MEMORY_SANITIZE too?
>>
>> Or is there something that guarantees that objects freed on the hot path won't
>> stay there for long, so the danger of a leak is low? (And what about
>> use-after-free?) It depends on further slab activity, no? (I'm not that familiar
>> with SLUB, but I would expect the hot path there to be similar to SLAB freeing
>> the object onto the per-cpu array_cache. But it seems PARTIAL_SLOWPATH is not
>> implemented for SLAB, so there might be some fundamental difference I'm missing.)
>
> Perhaps the partial sanitize could be a separate patch so its
> features are more logically separated?
>

I've done some more thinking and testing and I'm just going to drop the
slowpath idea. It helps some benchmarks but not enough. The concept is
out there if it's worth picking up later.
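
For anyone picking it up later, a minimal userspace model of the
slowpath-only gating (an illustration only, not code from the series;
the mode names mirror the enum quoted above):

#include <stdio.h>
#include <string.h>

#define SANITIZE_VALUE 0xfe

enum slab_sanitize_mode {
	SLAB_SANITIZE_OFF = 0,
	SLAB_SANITIZE_PARTIAL_SLOWPATH = 1,
	SLAB_SANITIZE_PARTIAL = 2,
	SLAB_SANITIZE_FULL = 3,
};

static enum slab_sanitize_mode sanitize_slab = SLAB_SANITIZE_PARTIAL_SLOWPATH;

static void sanitize_on_free(unsigned char *obj, size_t size, int slowpath)
{
	if (sanitize_slab == SLAB_SANITIZE_OFF)
		return;
	/* In slowpath-only mode, fast-path frees keep their contents. */
	if (sanitize_slab == SLAB_SANITIZE_PARTIAL_SLOWPATH && !slowpath)
		return;
	memset(obj, SANITIZE_VALUE, size);
}

int main(void)
{
	unsigned char a[8] = "secret";
	unsigned char b[8] = "secret";

	sanitize_on_free(a, sizeof(a), 0);	/* fast path: data survives */
	sanitize_on_free(b, sizeof(b), 1);	/* slow path: poisoned */
	printf("fast path: %#x, slow path: %#x\n", a[0], b[0]);
	return 0;
}

This is the leak Vlastimil was pointing at: how long the fast-path
copy survives depends entirely on further slab activity.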

Thanks,
Laura

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 5/7] mm: Mark several cases as SLAB_NO_SANITIZE
  2016-01-06  0:21     ` Kees Cook
  (?)
@ 2016-01-06  2:11       ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2016-01-06  2:11 UTC (permalink / raw)
  To: Kees Cook
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On 1/5/16 4:21 PM, Kees Cook wrote:
> On Mon, Dec 21, 2015 at 7:40 PM, Laura Abbott <laura@labbott.name> wrote:
>>
>> Sanitization is useful for security but comes at a performance cost
>> from clearing objects on free. Mark select caches as SLAB_NO_SANITIZE so
>> sanitization will not happen under the default configuration. The
>
> Can you describe why these were selected?
>

These were the cases that existed in grsecurity. From looking, these seem
to be performance-critical caches that carry relatively lower risk. I'll
adjust the commit text.

>> kernel may be booted with the proper command line option to allow these
>> caches to be sanitized.
>
> Might be good to specifically mention the command line used to
> sanitize even these caches.

Sure.
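
Concretely, from the kmem_cache_create() hunk in patch 1, the command
line in question is sanitize_slab=full (or =3):

	if (sanitize_slab == SLAB_SANITIZE_OFF || (flags & SLAB_DESTROY_BY_RCU))
		flags |= SLAB_NO_SANITIZE;	/* RCU caches always opt out */
	else if (sanitize_slab == SLAB_SANITIZE_FULL)
		flags &= ~SLAB_NO_SANITIZE;	/* full mode overrides the flag */

so all of the caches marked here do get sanitized under full mode,
with the exception of the SLAB_DESTROY_BY_RCU ones.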

>
> -Kees

Thanks,
Laura

>
>>
>> All credit for the original work should be given to Brad Spengler and
>> the PaX Team.
>>
>> Signed-off-by: Laura Abbott <laura@labbott.name>
>> ---
>> This is the initial set of excludes that the grsecurity patches had.
>> More may need to be added/removed as the series is tested.
>> ---
>>   fs/buffer.c       |  2 +-
>>   fs/dcache.c       |  2 +-
>>   kernel/fork.c     |  2 +-
>>   mm/rmap.c         |  4 ++--
>>   mm/slab.h         |  2 +-
>>   net/core/skbuff.c | 16 ++++++++--------
>>   6 files changed, 14 insertions(+), 14 deletions(-)
>>
>> diff --git a/fs/buffer.c b/fs/buffer.c
>> index 4f4cd95..f19e4ab 100644
>> --- a/fs/buffer.c
>> +++ b/fs/buffer.c
>> @@ -3417,7 +3417,7 @@ void __init buffer_init(void)
>>          bh_cachep = kmem_cache_create("buffer_head",
>>                          sizeof(struct buffer_head), 0,
>>                                  (SLAB_RECLAIM_ACCOUNT|SLAB_PANIC|
>> -                               SLAB_MEM_SPREAD),
>> +                               SLAB_MEM_SPREAD|SLAB_NO_SANITIZE),
>>                                  NULL);
>>
>>          /*
>> diff --git a/fs/dcache.c b/fs/dcache.c
>> index 5c33aeb..470f6be 100644
>> --- a/fs/dcache.c
>> +++ b/fs/dcache.c
>> @@ -3451,7 +3451,7 @@ void __init vfs_caches_init_early(void)
>>   void __init vfs_caches_init(void)
>>   {
>>          names_cachep = kmem_cache_create("names_cache", PATH_MAX, 0,
>> -                       SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
>> +                       SLAB_NO_SANITIZE|SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
>>
>>          dcache_init();
>>          inode_init();
>> diff --git a/kernel/fork.c b/kernel/fork.c
>> index fce002e..35db9c3 100644
>> --- a/kernel/fork.c
>> +++ b/kernel/fork.c
>> @@ -1868,7 +1868,7 @@ void __init proc_caches_init(void)
>>          mm_cachep = kmem_cache_create("mm_struct",
>>                          sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
>>                          SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL);
>> -       vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC);
>> +       vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_NO_SANITIZE);
>>          mmap_init();
>>          nsproxy_cache_init();
>>   }
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index b577fbb..74296d9 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -428,8 +428,8 @@ static void anon_vma_ctor(void *data)
>>   void __init anon_vma_init(void)
>>   {
>>          anon_vma_cachep = kmem_cache_create("anon_vma", sizeof(struct anon_vma),
>> -                       0, SLAB_DESTROY_BY_RCU|SLAB_PANIC, anon_vma_ctor);
>> -       anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC);
>> +               0, SLAB_DESTROY_BY_RCU|SLAB_PANIC|SLAB_NO_SANITIZE, anon_vma_ctor);
>> +       anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC|SLAB_NO_SANITIZE);
>>   }
>>
>>   /*
>> diff --git a/mm/slab.h b/mm/slab.h
>> index b54b636..6de99da 100644
>> --- a/mm/slab.h
>> +++ b/mm/slab.h
>> @@ -137,7 +137,7 @@ static inline unsigned long kmem_cache_flags(unsigned long object_size,
>>
>>   /* Legal flag mask for kmem_cache_create(), for various configurations */
>>   #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
>> -                        SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS )
>> +                        SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS | SLAB_NO_SANITIZE)
>>
>>   #if defined(CONFIG_DEBUG_SLAB)
>>   #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>> index b2df375..1d499ea 100644
>> --- a/net/core/skbuff.c
>> +++ b/net/core/skbuff.c
>> @@ -3316,15 +3316,15 @@ done:
>>   void __init skb_init(void)
>>   {
>>          skbuff_head_cache = kmem_cache_create("skbuff_head_cache",
>> -                                             sizeof(struct sk_buff),
>> -                                             0,
>> -                                             SLAB_HWCACHE_ALIGN|SLAB_PANIC,
>> -                                             NULL);
>> +                               sizeof(struct sk_buff),
>> +                               0,
>> +                               SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NO_SANITIZE,
>> +                               NULL);
>>          skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
>> -                                               sizeof(struct sk_buff_fclones),
>> -                                               0,
>> -                                               SLAB_HWCACHE_ALIGN|SLAB_PANIC,
>> -                                               NULL);
>> +                               sizeof(struct sk_buff_fclones),
>> +                               0,
>> +                               SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NO_SANITIZE,
>> +                               NULL);
>>   }
>>
>>   /**
>> --
>> 2.5.0
>>
>
>
>

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 5/7] mm: Mark several cases as SLAB_NO_SANITIZE
@ 2016-01-06  2:11       ` Laura Abbott
  0 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2016-01-06  2:11 UTC (permalink / raw)
  To: Kees Cook
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On 1/5/16 4:21 PM, Kees Cook wrote:
> On Mon, Dec 21, 2015 at 7:40 PM, Laura Abbott <laura@labbott.name> wrote:
>>
>> Sanitization is useful for security but comes at the cost of performance
>> in clearing on free. Mark select caches as SLAB_NO_SANITIZE so
>> sanitization will not happen under the default configuration. The
>
> Can you describe why these were selected?
>

These were the cases that existed in grsecurity. From looking, these seem
to be performance critical caches that have a relatively lower risk. I'll
adjust the commit text.

>> kernel may be booted with the proper command line option to allow these
>> caches to be sanitized.
>
> Might be good to specifically mention the command line used to
> sanitize even these caches.

Sure.

>
> -Kees

Thanks,
Laura

>
>>
>> All credit for the original work should be given to Brad Spengler and
>> the PaX Team.
>>
>> Signed-off-by: Laura Abbott <laura@labbott.name>
>> ---
>> This is the initial set of excludes that the grsecurity patches had.
>> More may need to be added/removed as the series is tested.
>> ---
>>   fs/buffer.c       |  2 +-
>>   fs/dcache.c       |  2 +-
>>   kernel/fork.c     |  2 +-
>>   mm/rmap.c         |  4 ++--
>>   mm/slab.h         |  2 +-
>>   net/core/skbuff.c | 16 ++++++++--------
>>   6 files changed, 14 insertions(+), 14 deletions(-)
>>
>> diff --git a/fs/buffer.c b/fs/buffer.c
>> index 4f4cd95..f19e4ab 100644
>> --- a/fs/buffer.c
>> +++ b/fs/buffer.c
>> @@ -3417,7 +3417,7 @@ void __init buffer_init(void)
>>          bh_cachep = kmem_cache_create("buffer_head",
>>                          sizeof(struct buffer_head), 0,
>>                                  (SLAB_RECLAIM_ACCOUNT|SLAB_PANIC|
>> -                               SLAB_MEM_SPREAD),
>> +                               SLAB_MEM_SPREAD|SLAB_NO_SANITIZE),
>>                                  NULL);
>>
>>          /*
>> diff --git a/fs/dcache.c b/fs/dcache.c
>> index 5c33aeb..470f6be 100644
>> --- a/fs/dcache.c
>> +++ b/fs/dcache.c
>> @@ -3451,7 +3451,7 @@ void __init vfs_caches_init_early(void)
>>   void __init vfs_caches_init(void)
>>   {
>>          names_cachep = kmem_cache_create("names_cache", PATH_MAX, 0,
>> -                       SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
>> +                       SLAB_NO_SANITIZE|SLAB_HWCACHE_ALIGN|SLAB_PANIC, NULL);
>>
>>          dcache_init();
>>          inode_init();
>> diff --git a/kernel/fork.c b/kernel/fork.c
>> index fce002e..35db9c3 100644
>> --- a/kernel/fork.c
>> +++ b/kernel/fork.c
>> @@ -1868,7 +1868,7 @@ void __init proc_caches_init(void)
>>          mm_cachep = kmem_cache_create("mm_struct",
>>                          sizeof(struct mm_struct), ARCH_MIN_MMSTRUCT_ALIGN,
>>                          SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NOTRACK, NULL);
>> -       vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC);
>> +       vm_area_cachep = KMEM_CACHE(vm_area_struct, SLAB_PANIC|SLAB_NO_SANITIZE);
>>          mmap_init();
>>          nsproxy_cache_init();
>>   }
>> diff --git a/mm/rmap.c b/mm/rmap.c
>> index b577fbb..74296d9 100644
>> --- a/mm/rmap.c
>> +++ b/mm/rmap.c
>> @@ -428,8 +428,8 @@ static void anon_vma_ctor(void *data)
>>   void __init anon_vma_init(void)
>>   {
>>          anon_vma_cachep = kmem_cache_create("anon_vma", sizeof(struct anon_vma),
>> -                       0, SLAB_DESTROY_BY_RCU|SLAB_PANIC, anon_vma_ctor);
>> -       anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC);
>> +               0, SLAB_DESTROY_BY_RCU|SLAB_PANIC|SLAB_NO_SANITIZE, anon_vma_ctor);
>> +       anon_vma_chain_cachep = KMEM_CACHE(anon_vma_chain, SLAB_PANIC|SLAB_NO_SANITIZE);
>>   }
>>
>>   /*
>> diff --git a/mm/slab.h b/mm/slab.h
>> index b54b636..6de99da 100644
>> --- a/mm/slab.h
>> +++ b/mm/slab.h
>> @@ -137,7 +137,7 @@ static inline unsigned long kmem_cache_flags(unsigned long object_size,
>>
>>   /* Legal flag mask for kmem_cache_create(), for various configurations */
>>   #define SLAB_CORE_FLAGS (SLAB_HWCACHE_ALIGN | SLAB_CACHE_DMA | SLAB_PANIC | \
>> -                        SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS )
>> +                        SLAB_DESTROY_BY_RCU | SLAB_DEBUG_OBJECTS | SLAB_NO_SANITIZE)
>>
>>   #if defined(CONFIG_DEBUG_SLAB)
>>   #define SLAB_DEBUG_FLAGS (SLAB_RED_ZONE | SLAB_POISON | SLAB_STORE_USER)
>> diff --git a/net/core/skbuff.c b/net/core/skbuff.c
>> index b2df375..1d499ea 100644
>> --- a/net/core/skbuff.c
>> +++ b/net/core/skbuff.c
>> @@ -3316,15 +3316,15 @@ done:
>>   void __init skb_init(void)
>>   {
>>          skbuff_head_cache = kmem_cache_create("skbuff_head_cache",
>> -                                             sizeof(struct sk_buff),
>> -                                             0,
>> -                                             SLAB_HWCACHE_ALIGN|SLAB_PANIC,
>> -                                             NULL);
>> +                               sizeof(struct sk_buff),
>> +                               0,
>> +                               SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NO_SANITIZE,
>> +                               NULL);
>>          skbuff_fclone_cache = kmem_cache_create("skbuff_fclone_cache",
>> -                                               sizeof(struct sk_buff_fclones),
>> -                                               0,
>> -                                               SLAB_HWCACHE_ALIGN|SLAB_PANIC,
>> -                                               NULL);
>> +                               sizeof(struct sk_buff_fclones),
>> +                               0,
>> +                               SLAB_HWCACHE_ALIGN|SLAB_PANIC|SLAB_NO_SANITIZE,
>> +                               NULL);
>>   }
>>
>>   /**
>> --
>> 2.5.0
>>
>
>
>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [kernel-hardening] [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization
  2016-01-06  0:29                     ` Kees Cook
  (?)
@ 2016-01-06  2:46                       ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2016-01-06  2:46 UTC (permalink / raw)
  To: Kees Cook
  Cc: Dave Hansen, Christoph Lameter, kernel-hardening, Pekka Enberg,
	David Rientjes, Joonsoo Kim, Andrew Morton, Linux-MM, LKML

On 1/5/16 4:29 PM, Kees Cook wrote:
> On Tue, Dec 22, 2015 at 11:13 AM, Laura Abbott <laura@labbott.name> wrote:
>> On 12/22/15 10:19 AM, Dave Hansen wrote:
>>>
>>> On 12/22/2015 10:08 AM, Christoph Lameter wrote:
>>>>
>>>> On Tue, 22 Dec 2015, Dave Hansen wrote:
>>>>>>
>>>>>> Why would you use zeros? The point is just to clear the information
>>>>>> right?
>>>>>> The regular poisoning does that.
>>>>>
>>>>>
>>>>> It then allows you to avoid the zeroing at allocation time.
>>>>
>>>>
>>>> Well much of the code is expecting a zeroed object from the allocator and
>>>> its zeroed at that time. Zeroing makes the object cache hot which is an
>>>> important performance aspect.
>>>
>>>
>>> Yes, modifying this behavior has a performance impact.  It absolutely
>>> needs to be evaluated, and I wouldn't want to speculate too much on how
>>> good or bad any of the choices are.
>>>
>>> Just to reiterate, I think we have 3 real choices here:
>>>
>>> 1. Zero at alloc, only when __GFP_ZERO
>>>      (behavior today)
>>> 2. Poison at free, also Zero at alloc (when __GFP_ZERO)
>>>      (this patch's proposed behavior, also what current poisoning does,
>>>       doubles writes)
>>> 3. Zero at free, *don't* Zero at alloc (when __GFP_ZERO)
>>>      (what I'm suggesting, possibly less perf impact vs. #2)
>>>
>>>
>>
>> poisoning with non-zero memory makes it easier to determine that the error
>> came from accessing the sanitized memory vs. some other case. I don't think
>> the feature would be as strong if the memory was only zeroed vs. some other
>> data value.
>
> I would tend to agree. If there are significant perf improvements for
> "3" above, that should be easy to add on later as another choice.
>

I was looking at the buddy allocator sanitization that exists in
grsecurity, and it does option #3 (zero at free, skip __GFP_ZERO).
I'm going to look into adding that as an option for the slab allocator
and see what the performance numbers show.
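
As a rough sketch, the free and alloc paths for option #3 would look
something like this (illustrative only, not the actual grsecurity code;
cache_is_sanitized() is a made-up helper name):

	/* on free: clear the object so no stale data survives */
	static void sanitize_object(struct kmem_cache *s, void *object)
	{
		memset(object, 0, s->object_size);
	}

	/* on alloc: a sanitized cache hands out already-zeroed objects,
	 * so the __GFP_ZERO memset is only needed for everything else */
	if ((flags & __GFP_ZERO) && !cache_is_sanitized(s))
		memset(object, 0, s->object_size);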

Thanks,
Laura

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 7/7] lkdtm: Add READ_AFTER_FREE test
  2016-01-06  0:15     ` Kees Cook
  (?)
@ 2016-01-06  2:49       ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2016-01-06  2:49 UTC (permalink / raw)
  To: Kees Cook
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening, Arnd Bergmann,
	Greg Kroah-Hartman

On 1/5/16 4:15 PM, Kees Cook wrote:
> On Mon, Dec 21, 2015 at 7:40 PM, Laura Abbott <laura@labbott.name> wrote:
>>
>> In a similar manner to WRITE_AFTER_FREE, add a READ_AFTER_FREE
>> test to test free poisoning features. Sample output when
>> no poison is present:
>>
>> [   20.222501] lkdtm: Performing direct entry READ_AFTER_FREE
>> [   20.226163] lkdtm: Freed val: 12345678
>>
>> with poison:
>>
>> [   24.203748] lkdtm: Performing direct entry READ_AFTER_FREE
>> [   24.207261] general protection fault: 0000 [#1] SMP
>> [   24.208193] Modules linked in:
>> [   24.208193] CPU: 0 PID: 866 Comm: sh Not tainted 4.4.0-rc5-work+ #108
>>
>> Cc: Arnd Bergmann <arnd@arndb.de>
>> Cc: Greg Kroah-Hartman <gregkh@linuxfoundation.org>
>> Signed-off-by: Laura Abbott <laura@labbott.name>
>> ---
>>   drivers/misc/lkdtm.c | 29 +++++++++++++++++++++++++++++
>>   1 file changed, 29 insertions(+)
>>
>> diff --git a/drivers/misc/lkdtm.c b/drivers/misc/lkdtm.c
>> index 11fdadc..c641fb7 100644
>> --- a/drivers/misc/lkdtm.c
>> +++ b/drivers/misc/lkdtm.c
>> @@ -92,6 +92,7 @@ enum ctype {
>>          CT_UNALIGNED_LOAD_STORE_WRITE,
>>          CT_OVERWRITE_ALLOCATION,
>>          CT_WRITE_AFTER_FREE,
>> +       CT_READ_AFTER_FREE,
>>          CT_SOFTLOCKUP,
>>          CT_HARDLOCKUP,
>>          CT_SPINLOCKUP,
>> @@ -129,6 +130,7 @@ static char* cp_type[] = {
>>          "UNALIGNED_LOAD_STORE_WRITE",
>>          "OVERWRITE_ALLOCATION",
>>          "WRITE_AFTER_FREE",
>> +       "READ_AFTER_FREE",
>>          "SOFTLOCKUP",
>>          "HARDLOCKUP",
>>          "SPINLOCKUP",
>> @@ -417,6 +419,33 @@ static void lkdtm_do_action(enum ctype which)
>>                  memset(data, 0x78, len);
>>                  break;
>>          }
>> +       case CT_READ_AFTER_FREE: {
>> +               int **base;
>> +               int *val, *tmp;
>> +
>> +               base = kmalloc(1024, GFP_KERNEL);
>> +               if (!base)
>> +                       return;
>> +
>> +               val = kmalloc(1024, GFP_KERNEL);
>> +               if (!val)
>> +                       return;
>
> For both of these test failure return, I think there should be a
> pr_warn too (see CT_EXEC_USERSPACE).
>

I was going by the usual rule that messages on memory failures are
redundant because something somewhere else is going to be printing
out error messages.
  
>> +
>> +               *val = 0x12345678;
>> +
>> +               /*
>> +                * Don't just use the first entry since that's where the
>> +                * freelist goes for the slab allocator
>> +                */
>> +               base[1] = val;
>
> Maybe just aim at the middle, in case allocator freelist tracking ever
> grows? base[1024/sizeof(int)/2] or something?
>

Good point.

>> +               kfree(base);
>> +
>> +               tmp = base[1];
>> +               pr_info("Freed val: %x\n", *tmp);
>
> Instead of depending on the deref to fail, maybe just use a simple
> BUG_ON to test that the value did actually change? Or, change the
> pr_info to "Failed to Oops when reading freed value: ..." just to be
> slightly more verbose about what failed?
>

I'll come up with something to be more explicit here.
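Something along these lines, maybe (untested sketch, just to show the
direction; the offsets and messages are placeholders, not the final
patch):

	case CT_READ_AFTER_FREE: {
		int *base, *val, saw, len = 1024;
		/*
		 * Aim at the middle of the allocation to stay clear of
		 * the allocator's freelist pointer even if it grows.
		 */
		int offset = (len / sizeof(int)) / 2;

		base = kmalloc(len, GFP_KERNEL);
		if (!base) {
			pr_warn("Unable to allocate base memory.\n");
			break;
		}

		val = kmalloc(len, GFP_KERNEL);
		if (!val) {
			pr_warn("Unable to allocate val memory.\n");
			kfree(base);
			break;
		}

		*val = 0x12345678;
		base[offset] = *val;
		kfree(base);

		saw = base[offset];
		if (saw != *val)
			pr_info("Memory correctly poisoned (%x)\n", saw);
		else
			pr_info("FAIL: freed memory still contains value: %x\n",
				saw);
		kfree(val);
		break;
	}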


Thanks,
Laura

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
  2016-01-06  0:09       ` Kees Cook
  (?)
@ 2016-01-06  3:17         ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2016-01-06  3:17 UTC (permalink / raw)
  To: Kees Cook
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On 1/5/16 4:09 PM, Kees Cook wrote:
> On Tue, Dec 22, 2015 at 12:04 PM, Laura Abbott <laura@labbott.name> wrote:
>> On 12/22/15 8:08 AM, Christoph Lameter wrote:
>>>
>>> On Mon, 21 Dec 2015, Laura Abbott wrote:
>>>
>>>> The biggest change from PAX_MEMORY_SANTIIZE is that this feature
>>>> sanitizes
>>>> the SL[AOU]B allocators only. My plan is to work on the buddy allocator
>>>> santization after this series gets picked up. A side effect of this is
>>>> that allocations which go directly to the buddy allocator (i.e. large
>>>> allocations) aren't sanitized. I'd like feedback about whether it's worth
>>>> it to add sanitization on that path directly or just use the page
>>>> allocator sanitization when that comes in.
>
> This looks great! I love the added lkdtm tests, too. Very cool.
>
>>> I am not sure what the point of this patchset is. We have a similar effect
>>> to sanitization already in the allocators through two mechanisms:
>>>
>>> 1. Slab poisoning
>>> 2. Allocation with GFP_ZERO
>>>
>>> I do not think we need a third one. You could accomplish your goals much
>>> easier without this code churn by either
>>>
>>> 1. Improve the existing poisoning mechanism. Ensure that there are no
>>>      gaps. Security sensitive kernel slab caches can then be created with
>>>      the  POISONING flag set. Maybe add a Kconfig flag that enables
>>>      POISONING for each cache? What was the issue when you tried using
>>>      posining for sanitization?
>>
>> The existing poisoning does work for sanitization but it's still a debug
>> feature. It seemed more appropriate to keep debug features and non-debug
>> features separate hence the separate option and configuration.
>
> What stuff is intertwined in the existing poisoning that makes it
> incompatible/orthogonal?
>

It's not the poisoning per se that's incompatible, it's how the poisoning is
set up. At least for slub, the current poisoning is part of SLUB_DEBUG, which
enables other consistency checks on the allocator. Trying to pull out just
the poisoning for use when SLUB_DEBUG isn't on would result in roughly the
code that is in this series anyway. I looked at trying to reuse some of the
existing poisoning and concluded it was less intrusive to the allocator to
keep it separate.

Thanks,
Laura

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
  2016-01-06  3:17         ` Laura Abbott
  (?)
@ 2016-01-07 16:26           ` Christoph Lameter
  -1 siblings, 0 replies; 115+ messages in thread
From: Christoph Lameter @ 2016-01-07 16:26 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Kees Cook, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On Tue, 5 Jan 2016, Laura Abbott wrote:

> It's not the poisoning per se that's incompatible, it's how the poisoning is
> set up. At least for slub, the current poisoning is part of SLUB_DEBUG which
> enables other consistency checks on the allocator. Trying to pull out just
> the poisoning for use when SLUB_DEBUG isn't on would result in roughly what
> would be here anyway. I looked at trying to reuse some of the existing
> poisoning
> and came to the conclusion it was less intrusive to the allocator to keep it
> separate.

SLUB_DEBUG does *not* enable any debugging features. It builds the logic
for debugging into the kernel but does not activate it. CONFIG_SLUB_DEBUG
is set for production kernels. The poisoning is build in by default into
any recent linux kernel out there. You can enable poisoning selectively
(and no other debug feature) by specifying slub_debug=P on the Linux
kernel command line right now.

There is a SLAB_POISON flag for each kmem_cache that can be set to
*only* enable poisoning and nothing else from code.
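
E.g. something like this (the cache and struct names here are made up
for illustration):

	struct kmem_cache *my_cachep;

	/* poison this one cache, with no other debug checks enabled */
	my_cachep = kmem_cache_create("my_cache", sizeof(struct my_obj),
				      0, SLAB_POISON | SLAB_PANIC, NULL);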



^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
  2016-01-07 16:26           ` Christoph Lameter
  (?)
@ 2016-01-08  1:23             ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2016-01-08  1:23 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Kees Cook, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On 1/7/16 8:26 AM, Christoph Lameter wrote:
> On Tue, 5 Jan 2016, Laura Abbott wrote:
>
>> It's not the poisoning per se that's incompatible, it's how the poisoning is
>> set up. At least for slub, the current poisoning is part of SLUB_DEBUG which
>> enables other consistency checks on the allocator. Trying to pull out just
>> the poisoning for use when SLUB_DEBUG isn't on would result in roughly what
>> would be here anyway. I looked at trying to reuse some of the existing
>> poisoning
>> and came to the conclusion it was less intrusive to the allocator to keep it
>> separate.
>
> SLUB_DEBUG does *not* enable any debugging features. It builds the logic
> for debugging into the kernel but does not activate it. CONFIG_SLUB_DEBUG
> is set for production kernels. The poisoning is build in by default into
> any recent linux kernel out there. You can enable poisoning selectively
> (and no other debug feature) by specifying slub_debug=P on the Linux
> kernel command line right now.
>
> There is a SLAB_POISON flag for each kmem_cache that can be set to
> *only* enable poisoning and nothing else from code.
>
>

slub_debug=P not only poisons, it also enables other consistency checks on
the slab, assuming my understanding of what check_object() does is correct.
My hope was to have only the poisoning and none of the consistency checks,
in an attempt to mitigate the performance issues. I misunderstood when the
checks actually run and how SLUB_DEBUG is used.

Another option would be to have a flag like SLAB_NO_SANITY_CHECK;
enabling sanitization would then just mean setting that flag and
SLAB_POISON in the debug options. The disadvantages of this approach
would be losing the sanitization for ->ctor caches (the grsecurity
version works around this by re-initializing with ->ctor; I haven't
heard any feedback on whether that is actually acceptable) and not
having some of the fast paths enabled (assuming I'm understanding the
code paths correctly), which would also be a performance penalty.
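
Roughly, the grsecurity free path does something like the following
(paraphrased from memory, not their actual code; the flag and macro
names may not match exactly):

	if (pax_sanitize_slab && !(s->flags & SLAB_NO_SANITIZE)) {
		memset(x, PAX_MEMORY_SANITIZE_VALUE, s->object_size);
		/* re-run the constructor so the ctor invariants hold
		 * when the object is handed out again */
		if (s->ctor)
			s->ctor(x);
	}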

Thanks,
Laura

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
  2016-01-08  1:23             ` Laura Abbott
  (?)
@ 2016-01-08 14:07               ` Christoph Lameter
  -1 siblings, 0 replies; 115+ messages in thread
From: Christoph Lameter @ 2016-01-08 14:07 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Kees Cook, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On Thu, 7 Jan 2016, Laura Abbott wrote:

> The slub_debug=P not only poisons it enables other consistency checks on the
> slab as well, assuming my understanding of what check_object does is correct.
> My hope was to have the poison part only and none of the consistency checks in
> an attempt to mitigate performance issues. I misunderstood when the checks
> actually run and how SLUB_DEBUG was used.

Ok, I see that the pointer check is done without checking the
corresponding debug flag. Patch attached that fixes it.

> Another option would be to have a flag like SLAB_NO_SANITY_CHECK.
> sanitization enablement would just be that and SLAB_POISON
> in the debug options. The disadvantage to this approach would be losing
> the sanitization for ->ctor caches (the grsecurity version works around this
> by re-initializing with ->ctor, I haven't heard any feedback if this actually
> acceptable) and not having some of the fast paths enabled
> (assuming I'm understanding the code path correctly.) which would also
> be a performance penalty

I think we simply need to fix the missing check there. There is already a
flag SLAB_DEBUG_FREE for the pointer checks.



Subject: slub: Only perform pointer checks in check_object when SLAB_DEBUG_FREE is set

Seems that check_object() always checks for pointer issues currently.

Signed-off-by: Christoph Lameter <cl@linux.com>

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c
+++ linux/mm/slub.c
@@ -848,6 +848,9 @@ static int check_object(struct kmem_cach
 		 */
 		return 1;

+	if (!(s->flags & SLAB_DEBUG_FREE))
+		return 1;
+
 	/* Check free pointer validity */
 	if (!check_valid_pointer(s, page, get_freepointer(s, p))) {
 		object_err(s, page, p, "Freepointer corrupt");

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
  2016-01-08 14:07               ` Christoph Lameter
  (?)
@ 2016-01-14  3:49                 ` Laura Abbott
  -1 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2016-01-14  3:49 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Kees Cook, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On 1/8/16 6:07 AM, Christoph Lameter wrote:
> On Thu, 7 Jan 2016, Laura Abbott wrote:
>
>> The slub_debug=P not only poisons it enables other consistency checks on the
>> slab as well, assuming my understanding of what check_object does is correct.
>> My hope was to have the poison part only and none of the consistency checks in
>> an attempt to mitigate performance issues. I misunderstood when the checks
>> actually run and how SLUB_DEBUG was used.
>
> Ok I see that there pointer check is done without checking the
> corresponding debug flag. Patch attached thar fixes it.
>
>> Another option would be to have a flag like SLAB_NO_SANITY_CHECK.
>> sanitization enablement would just be that and SLAB_POISON
>> in the debug options. The disadvantage to this approach would be losing
>> the sanitization for ->ctor caches (the grsecurity version works around this
>> by re-initializing with ->ctor, I haven't heard any feedback if this actually
>> acceptable) and not having some of the fast paths enabled
>> (assuming I'm understanding the code path correctly.) which would also
>> be a performance penalty
>
> I think we simply need to fix the missing check there. There is already a
> flag SLAB_DEBUG_FREE for the pointer checks.
>
>

The patch improves performance, but the overall performance of these full
sanitization patches is still significantly better than slub_debug=P. I'll
put some effort into seeing if I can figure out where the slowdown is
coming from.

Thanks,
Laura

>
> Subject: slub: Only perform pointer checks in check_object when SLAB_DEBUG_FREE is set
>
> Seems that check_object() always checks for pointer issues currently.
>
> Signed-off-by: Christoph Lameter <cl@linux.com>
>
> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c
> +++ linux/mm/slub.c
> @@ -848,6 +848,9 @@ static int check_object(struct kmem_cach
>   		 */
>   		return 1;
>
> +	if (!(s->flags & SLAB_DEBUG_FREE))
> +		return 1;
> +
>   	/* Check free pointer validity */
>   	if (!check_valid_pointer(s, page, get_freepointer(s, p))) {
>   		object_err(s, page, p, "Freepointer corrupt");
>

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
@ 2016-01-21  3:35                   ` Laura Abbott
  0 siblings, 0 replies; 115+ messages in thread
From: Laura Abbott @ 2016-01-21  3:35 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Kees Cook, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On 1/13/16 7:49 PM, Laura Abbott wrote:
> On 1/8/16 6:07 AM, Christoph Lameter wrote:
>> On Thu, 7 Jan 2016, Laura Abbott wrote:
>>
>>> slub_debug=P not only poisons, it enables other consistency checks on the
>>> slab as well, assuming my understanding of what check_object does is correct.
>>> My hope was to have the poison part only and none of the consistency checks, in
>>> an attempt to mitigate performance issues. I misunderstood when the checks
>>> actually run and how SLUB_DEBUG was used.
>>
>> Ok, I see that the pointer check is done without checking the
>> corresponding debug flag. Patch attached that fixes it.
>>
>>> Another option would be to have a flag like SLAB_NO_SANITY_CHECK;
>>> sanitization enablement would just be that and SLAB_POISON
>>> in the debug options. The disadvantage to this approach would be losing
>>> the sanitization for ->ctor caches (the grsecurity version works around this
>>> by re-initializing with ->ctor; I haven't heard any feedback on whether this is
>>> actually acceptable) and not having some of the fast paths enabled
>>> (assuming I'm understanding the code path correctly), which would also
>>> be a performance penalty.
>>
>> I think we simply need to fix the missing check there. There is already a
>> flag SLAB_DEBUG_FREE for the pointer checks.
>>
>>
>
> The patch improves performance, but the overall performance of these full
> sanitization patches is still significantly better than that of slub_debug=P.
> I'll put some effort into seeing if I can figure out where the slowdown is
> coming from.
>

There are quite a few other checks which need to be skipped over as well,
but I don't think skipping those is going to be sufficient to give
acceptable performance; a quick 'hackbench -g 20 -l 1000' run shows at least
a 3.5 second difference between slub_debug=P with all the checks skipped
and this series.

The SLAB_DEBUG flags force every allocation and free to skip the cpu
caches, which is what causes the slowdown. I experimented with allowing
the debugging to happen with the cpu caches, but I'm not convinced it's
possible to do the checking on the fast path in a consistent manner
without adding locking. Is it worth refactoring the debugging so it can
run on the cpu caches, or should I take the approach here of keeping the
clear separate from free_debug_processing?
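
To make the question concrete, here is a minimal sketch of what I mean
by keeping the clear separate. The helper name and the zero fill are
illustrative assumptions, not the actual code from this series;
SLAB_NO_SANITIZE is the opt-out flag added in patch 5/7:

	/* Minimal sketch of a sanitizing clear done on the free path,
	 * independent of free_debug_processing(). The helper name and
	 * the zero fill are illustrative assumptions. */
	static inline void sanitize_object(struct kmem_cache *s, void *object)
	{
		if (unlikely(s->flags & SLAB_NO_SANITIZE))
			return;

		/* Wipe the freed object so stale data cannot leak. */
		memset(object, 0, s->object_size);

		/* Caches with a constructor need to be re-initialized;
		 * this is the workaround the grsecurity version uses
		 * for ->ctor caches. */
		if (s->ctor)
			s->ctor(object);
	}

Something like this could be called from both the fast and slow free
paths without forcing everything through the debug-only path that
skips the cpu caches.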

Thanks,
Laura

^ permalink raw reply	[flat|nested] 115+ messages in thread

* Re: [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX
@ 2016-01-21 15:39                     ` Christoph Lameter
  0 siblings, 0 replies; 115+ messages in thread
From: Christoph Lameter @ 2016-01-21 15:39 UTC (permalink / raw)
  To: Laura Abbott
  Cc: Kees Cook, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Andrew Morton, Linux-MM, LKML, kernel-hardening

On Wed, 20 Jan 2016, Laura Abbott wrote:

> The SLAB_DEBUG flags force every allocation and free to skip the cpu
> caches, which is what causes the slowdown. I experimented with allowing
> the debugging to happen with the cpu caches, but I'm not convinced it's
> possible to do the checking on the fast path in a consistent manner
> without adding locking. Is it worth refactoring the debugging so it can
> run on the cpu caches, or should I take the approach here of keeping the
> clear separate from free_debug_processing?

At least poisoning would benefit from such work. I think both
sanitization and poisoning should be done by the same logic. Remove
poisoning if necessary.

Note though that this security stuff should not have a significant impact
on the general case.
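
Roughly what I have in mind, as an untested sketch (the helper name is
made up; POISON_FREE and POISON_END are the existing constants from
include/linux/poison.h):

	/* Untested sketch of one wipe routine serving both debug
	 * poisoning and sanitization; the helper name is made up. */
	static void wipe_object(struct kmem_cache *s, void *object)
	{
		if (s->flags & SLAB_POISON) {
			/* Debug poisoning: recognizable pattern plus
			 * an end marker, as the poisoning code does today. */
			memset(object, POISON_FREE, s->object_size - 1);
			((u8 *)object)[s->object_size - 1] = POISON_END;
		} else {
			/* Sanitization: the stale data just has to go. */
			memset(object, 0, s->object_size);
		}
	}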

^ permalink raw reply	[flat|nested] 115+ messages in thread

end of thread, other threads:[~2016-01-21 15:39 UTC | newest]

Thread overview: 115+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2015-12-22  3:40 [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX Laura Abbott
2015-12-22  3:40 ` [kernel-hardening] " Laura Abbott
2015-12-22  3:40 ` Laura Abbott
2015-12-22  3:40 ` [RFC][PATCH 1/7] mm/slab_common.c: Add common support for slab saniziation Laura Abbott
2015-12-22  3:40   ` [kernel-hardening] " Laura Abbott
2015-12-22  3:40   ` Laura Abbott
2015-12-22 20:48   ` Vlastimil Babka
2015-12-22 20:48     ` [kernel-hardening] " Vlastimil Babka
2015-12-22 20:48     ` Vlastimil Babka
2016-01-06  0:17     ` Kees Cook
2016-01-06  0:17       ` [kernel-hardening] " Kees Cook
2016-01-06  0:17       ` Kees Cook
2016-01-06  2:06       ` Laura Abbott
2016-01-06  2:06         ` [kernel-hardening] " Laura Abbott
2016-01-06  2:06         ` Laura Abbott
2016-01-06  0:19   ` Kees Cook
2016-01-06  0:19     ` [kernel-hardening] " Kees Cook
2016-01-06  0:19     ` Kees Cook
2015-12-22  3:40 ` [RFC][PATCH 2/7] slub: Add support for sanitization Laura Abbott
2015-12-22  3:40   ` [kernel-hardening] " Laura Abbott
2015-12-22  3:40   ` Laura Abbott
2015-12-22  3:40 ` [RFC][PATCH 3/7] slab: " Laura Abbott
2015-12-22  3:40   ` [kernel-hardening] " Laura Abbott
2015-12-22  3:40   ` Laura Abbott
2015-12-22  3:40 ` [RFC][PATCH 4/7] slob: " Laura Abbott
2015-12-22  3:40   ` [kernel-hardening] " Laura Abbott
2015-12-22  3:40   ` Laura Abbott
2015-12-22  3:40 ` [RFC][PATCH 5/7] mm: Mark several cases as SLAB_NO_SANITIZE Laura Abbott
2015-12-22  3:40   ` [kernel-hardening] " Laura Abbott
2015-12-22  3:40   ` Laura Abbott
2016-01-06  0:21   ` Kees Cook
2016-01-06  0:21     ` [kernel-hardening] " Kees Cook
2016-01-06  0:21     ` Kees Cook
2016-01-06  2:11     ` Laura Abbott
2016-01-06  2:11       ` [kernel-hardening] " Laura Abbott
2016-01-06  2:11       ` Laura Abbott
2015-12-22  3:40 ` [RFC][PATCH 6/7] mm: Add Kconfig option for slab sanitization Laura Abbott
2015-12-22  3:40   ` [kernel-hardening] " Laura Abbott
2015-12-22  3:40   ` Laura Abbott
2015-12-22  9:33   ` [kernel-hardening] " Mathias Krause
2015-12-22  9:33     ` Mathias Krause
2015-12-22 17:51     ` Laura Abbott
2015-12-22 17:51       ` Laura Abbott
2015-12-22 18:37       ` Mathias Krause
2015-12-22 18:37         ` Mathias Krause
2015-12-22 19:18         ` Laura Abbott
2015-12-22 19:18           ` Laura Abbott
2015-12-22 20:01         ` Christoph Lameter
2015-12-22 20:01           ` Christoph Lameter
2015-12-22 20:06           ` Mathias Krause
2015-12-22 20:06             ` Mathias Krause
2015-12-22 14:57   ` Dave Hansen
2015-12-22 14:57     ` Dave Hansen
2015-12-22 16:25     ` Christoph Lameter
2015-12-22 16:25       ` Christoph Lameter
2015-12-22 17:22       ` Dave Hansen
2015-12-22 17:24         ` Christoph Lameter
2015-12-22 17:28           ` Dave Hansen
2015-12-22 17:28             ` Dave Hansen
2015-12-22 18:08             ` Christoph Lameter
2015-12-22 18:08               ` Christoph Lameter
2015-12-22 18:19               ` Dave Hansen
2015-12-22 18:19                 ` Dave Hansen
2015-12-22 19:13                 ` Laura Abbott
2015-12-22 19:13                   ` Laura Abbott
2015-12-22 19:32                   ` Dave Hansen
2015-12-22 19:32                     ` Dave Hansen
2016-01-06  0:29                   ` Kees Cook
2016-01-06  0:29                     ` Kees Cook
2016-01-06  0:29                     ` Kees Cook
2016-01-06  2:46                     ` Laura Abbott
2016-01-06  2:46                       ` Laura Abbott
2016-01-06  2:46                       ` Laura Abbott
2015-12-22  3:40 ` [RFC][PATCH 7/7] lkdtm: Add READ_AFTER_FREE test Laura Abbott
2015-12-22  3:40   ` [kernel-hardening] " Laura Abbott
2015-12-22  3:40   ` Laura Abbott
2016-01-06  0:15   ` Kees Cook
2016-01-06  0:15     ` [kernel-hardening] " Kees Cook
2016-01-06  0:15     ` Kees Cook
2016-01-06  2:49     ` Laura Abbott
2016-01-06  2:49       ` [kernel-hardening] " Laura Abbott
2016-01-06  2:49       ` Laura Abbott
2015-12-22 16:08 ` [RFC][PATCH 0/7] Sanitization of slabs based on grsecurity/PaX Christoph Lameter
2015-12-22 16:08   ` [kernel-hardening] " Christoph Lameter
2015-12-22 16:08   ` Christoph Lameter
2015-12-22 16:15   ` [kernel-hardening] " Dave Hansen
2015-12-22 16:15     ` Dave Hansen
2015-12-22 16:38   ` Daniel Micay
2015-12-22 20:04   ` Laura Abbott
2015-12-22 20:04     ` [kernel-hardening] " Laura Abbott
2015-12-22 20:04     ` Laura Abbott
2016-01-06  0:09     ` Kees Cook
2016-01-06  0:09       ` [kernel-hardening] " Kees Cook
2016-01-06  0:09       ` Kees Cook
2016-01-06  3:17       ` Laura Abbott
2016-01-06  3:17         ` [kernel-hardening] " Laura Abbott
2016-01-06  3:17         ` Laura Abbott
2016-01-07 16:26         ` Christoph Lameter
2016-01-07 16:26           ` [kernel-hardening] " Christoph Lameter
2016-01-07 16:26           ` Christoph Lameter
2016-01-08  1:23           ` Laura Abbott
2016-01-08  1:23             ` [kernel-hardening] " Laura Abbott
2016-01-08  1:23             ` Laura Abbott
2016-01-08 14:07             ` Christoph Lameter
2016-01-08 14:07               ` [kernel-hardening] " Christoph Lameter
2016-01-08 14:07               ` Christoph Lameter
2016-01-14  3:49               ` Laura Abbott
2016-01-14  3:49                 ` [kernel-hardening] " Laura Abbott
2016-01-14  3:49                 ` Laura Abbott
2016-01-21  3:35                 ` Laura Abbott
2016-01-21  3:35                   ` [kernel-hardening] " Laura Abbott
2016-01-21  3:35                   ` Laura Abbott
2016-01-21 15:39                   ` Christoph Lameter
2016-01-21 15:39                     ` [kernel-hardening] " Christoph Lameter
2016-01-21 15:39                     ` Christoph Lameter
