linux-mm.kvack.org archive mirror
* [PATCH v3] mm: Add SLUB free list pointer obfuscation
@ 2017-07-06  0:27 Kees Cook
  2017-07-06 13:43 ` Christoph Lameter
  2017-07-24 21:17 ` [v3] " Alexander Popov
  0 siblings, 2 replies; 22+ messages in thread
From: Kees Cook @ 2017-07-06  0:27 UTC (permalink / raw)
  To: Andrew Morton
  Cc: Christoph Lameter, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Paul E. McKenney, Ingo Molnar, Josh Triplett, Andy Lutomirski,
	Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
	Sergey Senozhatsky, Helge Deller, Rik van Riel, linux-mm,
	Tycho Andersen, linux-kernel, kernel-hardening

This SLUB free list pointer obfuscation code is modified from Brad
Spengler/PaX Team's code in the last public patch of grsecurity/PaX based
on my understanding of the code. Changes or omissions from the original
code are mine and don't reflect the original grsecurity/PaX code.

This adds a per-cache random value to SLUB caches that is XORed with
their freelist pointer address and value. This adds nearly zero overhead
and frustrates the very common heap overflow exploitation method of
overwriting freelist pointers. A recent example of the attack is written
up here: http://cyseclabs.com/blog/cve-2016-6187-heap-off-by-one-exploit
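
As a rough illustration of the scheme, here is a minimal userspace
sketch (not part of the patch; all names here are made up for the
example):

/* cc -o xor-demo xor-demo.c && ./xor-demo */
#include <stdio.h>
#include <stdlib.h>

struct toy_cache {
	unsigned long random;	/* per-cache secret, cf. s->random */
};

/* Encoding and decoding are the same XOR operation. */
static void *toy_freelist_ptr(struct toy_cache *s, void *ptr,
			      unsigned long ptr_addr)
{
	return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr);
}

int main(void)
{
	struct toy_cache s = { .random = 0xdeadbeefcafef00dUL };
	void *next = malloc(64);	/* pretend next-free object */
	unsigned long slot = (unsigned long)malloc(64); /* storage location */
	void *stored = toy_freelist_ptr(&s, next, slot);    /* obfuscated */
	void *decoded = toy_freelist_ptr(&s, stored, slot); /* round-trips */

	printf("next=%p stored=%p decoded=%p\n", next, stored, decoded);
	return decoded == next ? 0 : 1;
}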

This is based on patches by Daniel Micay, and refactored to minimize the
use of #ifdef.

Under 200-count cycles of "hackbench -g 20 -l 1000" I saw the following
run times:

before:
	mean 10.11882499999999999995
	variance .03320378329145728642
	stdev .18221905304181911048

after:
	mean 10.12654000000000000014
	variance .04700556623115577889
	stdev .21680767106160192064

The difference gets lost in the noise, but if the above is to be taken
literally, using CONFIG_SLAB_FREELIST_HARDENED is 0.07% slower.

Suggested-by: Daniel Micay <danielmicay@gmail.com>
Cc: Christoph Lameter <cl@linux.com>
Cc: Rik van Riel <riel@redhat.com>
Cc: Tycho Andersen <tycho@docker.com>
Signed-off-by: Kees Cook <keescook@chromium.org>
---
v3:
- use static inlines instead of macros (akpm).
v2:
- rename CONFIG_SLAB_HARDENED to CONFIG_SLAB_FREELIST_HARDENED (labbott).
---
 include/linux/slub_def.h |  4 ++++
 init/Kconfig             |  9 +++++++++
 mm/slub.c                | 42 +++++++++++++++++++++++++++++++++++++-----
 3 files changed, 50 insertions(+), 5 deletions(-)

diff --git a/include/linux/slub_def.h b/include/linux/slub_def.h
index 07ef550c6627..d7990a83b416 100644
--- a/include/linux/slub_def.h
+++ b/include/linux/slub_def.h
@@ -93,6 +93,10 @@ struct kmem_cache {
 #endif
 #endif
 
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	unsigned long random;
+#endif
+
 #ifdef CONFIG_NUMA
 	/*
 	 * Defragmentation by allocating from a remote node.
diff --git a/init/Kconfig b/init/Kconfig
index 1d3475fc9496..04ee3e507b9e 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1900,6 +1900,15 @@ config SLAB_FREELIST_RANDOM
 	  security feature reduces the predictability of the kernel slab
 	  allocator against heap overflows.
 
+config SLAB_FREELIST_HARDENED
+	bool "Harden slab freelist metadata"
+	depends on SLUB
+	help
+	  Many kernel heap attacks try to target slab cache metadata and
+	  other infrastructure. This option makes minor performance
+	  sacrifices to harden the kernel slab allocator against common
+	  freelist exploit methods.
+
 config SLUB_CPU_PARTIAL
 	default y
 	depends on SLUB && SMP
diff --git a/mm/slub.c b/mm/slub.c
index 57e5156f02be..eae0628d3346 100644
--- a/mm/slub.c
+++ b/mm/slub.c
@@ -34,6 +34,7 @@
 #include <linux/stacktrace.h>
 #include <linux/prefetch.h>
 #include <linux/memcontrol.h>
+#include <linux/random.h>
 
 #include <trace/events/kmem.h>
 
@@ -238,30 +239,58 @@ static inline void stat(const struct kmem_cache *s, enum stat_item si)
  * 			Core slab cache functions
  *******************************************************************/
 
+/*
+ * Returns freelist pointer (ptr). With hardening, this is obfuscated
+ * with an XOR of the address where the pointer is held and a per-cache
+ * random number.
+ */
+static inline void *freelist_ptr(const struct kmem_cache *s, void *ptr,
+				 unsigned long ptr_addr)
+{
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	return (void *)((unsigned long)ptr ^ s->random ^ ptr_addr);
+#else
+	return ptr;
+#endif
+}
+
+/* Returns the freelist pointer recorded at location ptr_addr. */
+static inline void *freelist_dereference(const struct kmem_cache *s,
+					 void *ptr_addr)
+{
+	return freelist_ptr(s, (void *)*(unsigned long *)(ptr_addr),
+			    (unsigned long)ptr_addr);
+}
+
 static inline void *get_freepointer(struct kmem_cache *s, void *object)
 {
-	return *(void **)(object + s->offset);
+	return freelist_dereference(s, object + s->offset);
 }
 
 static void prefetch_freepointer(const struct kmem_cache *s, void *object)
 {
-	prefetch(object + s->offset);
+	if (object)
+		prefetch(freelist_dereference(s, object + s->offset));
 }
 
 static inline void *get_freepointer_safe(struct kmem_cache *s, void *object)
 {
+	unsigned long freepointer_addr;
 	void *p;
 
 	if (!debug_pagealloc_enabled())
 		return get_freepointer(s, object);
 
-	probe_kernel_read(&p, (void **)(object + s->offset), sizeof(p));
-	return p;
+	freepointer_addr = (unsigned long)object + s->offset;
+	probe_kernel_read(&p, (void **)freepointer_addr, sizeof(p));
+	return freelist_ptr(s, p, freepointer_addr);
 }
 
 static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
 {
-	*(void **)(object + s->offset) = fp;
+	unsigned long freeptr_addr = (unsigned long)object + s->offset;
+
+	*(void **)freeptr_addr = freelist_ptr(s, fp, freeptr_addr);
 }
 
 /* Loop over all objects in a slab */
@@ -3536,6 +3565,9 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
 {
 	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
 	s->reserved = 0;
+#ifdef CONFIG_SLAB_FREELIST_HARDENED
+	s->random = get_random_long();
+#endif
 
 	if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
 		s->reserved = sizeof(struct rcu_head);
-- 
2.7.4


-- 
Kees Cook
Pixel Security


* Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
  2017-07-06  0:27 [PATCH v3] mm: Add SLUB free list pointer obfuscation Kees Cook
@ 2017-07-06 13:43 ` Christoph Lameter
  2017-07-06 15:48   ` Kees Cook
  2017-07-24 21:17 ` [v3] " Alexander Popov
  1 sibling, 1 reply; 22+ messages in thread
From: Christoph Lameter @ 2017-07-06 13:43 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andrew Morton, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Paul E. McKenney, Ingo Molnar, Josh Triplett, Andy Lutomirski,
	Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
	Sergey Senozhatsky, Helge Deller, Rik van Riel, linux-mm,
	Tycho Andersen, linux-kernel, kernel-hardening

On Wed, 5 Jul 2017, Kees Cook wrote:

> @@ -3536,6 +3565,9 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
>  {
>  	s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
>  	s->reserved = 0;
> +#ifdef CONFIG_SLAB_FREELIST_HARDENED
> +	s->random = get_random_long();
> +#endif
>
>  	if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
>  		s->reserved = sizeof(struct rcu_head);
>

So if an attacker knows the internal structure of data then he can simply
dereference page->kmem_cache->random to decode the freepointer.

Assuming someone is already targeting a freelist pointer (which indicates
detailed knowledge of the internal structure) then I would think that
someone like that will also figure out how to follow the pointer links to
get to the random value.

Not seeing the point of all of this.


* Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
  2017-07-06 13:43 ` Christoph Lameter
@ 2017-07-06 15:48   ` Kees Cook
  2017-07-06 15:55     ` Christoph Lameter
  0 siblings, 1 reply; 22+ messages in thread
From: Kees Cook @ 2017-07-06 15:48 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Andrew Morton, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Paul E. McKenney, Ingo Molnar, Josh Triplett, Andy Lutomirski,
	Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
	Sergey Senozhatsky, Helge Deller, Rik van Riel, Linux-MM,
	Tycho Andersen, LKML, kernel-hardening

On Thu, Jul 6, 2017 at 6:43 AM, Christoph Lameter <cl@linux.com> wrote:
> On Wed, 5 Jul 2017, Kees Cook wrote:
>
>> @@ -3536,6 +3565,9 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
>>  {
>>       s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
>>       s->reserved = 0;
>> +#ifdef CONFIG_SLAB_FREELIST_HARDENED
>> +     s->random = get_random_long();
>> +#endif
>>
>>       if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
>>               s->reserved = sizeof(struct rcu_head);
>>
>
> So if an attacker knows the internal structure of data then he can simply
> dereference page->kmem_cache->random to decode the freepointer.

That requires a series of arbitrary reads. This is protecting against
attacks that use an adjacent slab object write overflow to write the
freelist pointer. This internal structure is very reliable, and has
been the basis of freelist attacks against the kernel for a decade.

> Assuming someone is already targeting a freelist pointer (which indicates
> detailed knowledge of the internal structure) then I would think that
> someone like that will also figure out how to follow the pointer links to
> get to the random value.

The kind of hardening this provides is to frustrate the expansion of
an attacker's capabilities. Most attacks are a chain of exploits that
slowly builds up the ability to perform arbitrary writes. For example,
a slab object overflow isn't an arbitrary write on its own, but when
combined with heap allocation layout control and an adjacent free
object, this can be upgraded to an arbitrary write.
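
To make that concrete, here is a toy userspace model (illustrative
only, with a made-up per-cache secret; the kernel's real layout
differs): a linear overflow replaces the stored next-free pointer,
and without the XOR the allocator would hand out the attacker's
chosen address as the next "object".

#include <stdio.h>

static unsigned long cache_random = 0x1234567890abcdefUL; /* hypothetical */

/* What the allocator computes when it pops the corrupted object. */
static void *decode(void *stored, unsigned long slot_addr)
{
	return (void *)((unsigned long)stored ^ cache_random ^ slot_addr);
}

int main(void)
{
	unsigned long slot[2];	/* stand-in for a free object */

	/* the overflow writes an attacker-chosen target over the pointer */
	slot[0] = 0xffffffff81000000UL;

	/* unhardened: the overwritten value is used verbatim */
	printf("unhardened next = %p\n", (void *)slot[0]);
	/* hardened: the same overwrite decodes to a wild pointer */
	printf("hardened   next = %p\n",
	       decode((void *)slot[0], (unsigned long)&slot[0]));
	return 0;
}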

> Not seeing the point of all of this.

It is a probabilistic defense, but then so is the stack protector.
This is a similar defense; while not perfect it makes the class of
attack much more difficult to mount.

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
  2017-07-06 15:48   ` Kees Cook
@ 2017-07-06 15:55     ` Christoph Lameter
  2017-07-06 16:16       ` [kernel-hardening] " Daniel Micay
  2017-07-06 17:53       ` Rik van Riel
  0 siblings, 2 replies; 22+ messages in thread
From: Christoph Lameter @ 2017-07-06 15:55 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andrew Morton, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Paul E. McKenney, Ingo Molnar, Josh Triplett, Andy Lutomirski,
	Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
	Sergey Senozhatsky, Helge Deller, Rik van Riel, Linux-MM,
	Tycho Andersen, LKML, kernel-hardening

On Thu, 6 Jul 2017, Kees Cook wrote:

> On Thu, Jul 6, 2017 at 6:43 AM, Christoph Lameter <cl@linux.com> wrote:
> > On Wed, 5 Jul 2017, Kees Cook wrote:
> >
> >> @@ -3536,6 +3565,9 @@ static int kmem_cache_open(struct kmem_cache *s, unsigned long flags)
> >>  {
> >>       s->flags = kmem_cache_flags(s->size, flags, s->name, s->ctor);
> >>       s->reserved = 0;
> >> +#ifdef CONFIG_SLAB_FREELIST_HARDENED
> >> +     s->random = get_random_long();
> >> +#endif
> >>
> >>       if (need_reserve_slab_rcu && (s->flags & SLAB_TYPESAFE_BY_RCU))
> >>               s->reserved = sizeof(struct rcu_head);
> >>
> >
> > So if an attacker knows the internal structure of data then he can simply
> > dereference page->kmem_cache->random to decode the freepointer.
>
> That requires a series of arbitrary reads. This is protecting against
> attacks that use an adjacent slab object write overflow to write the
> freelist pointer. This internal structure is very reliable, and has
> been the basis of freelist attacks against the kernel for a decade.

These reads are not arbitrary. You can usually calculate the page struct
address easily from the address and then do a couple of loads to get
there.

Ok so you get rid of the old attacks because we did not have that
hardening in effect when they designed their approaches?

> It is a probabilistic defense, but then so is the stack protector.
> This is a similar defense; while not perfect it makes the class of
> attack much more difficult to mount.

Na I am not convinced of the "much more difficult". Maybe they will just
have to upgrade their approaches to fetch the proper values to decode.


* Re: [kernel-hardening] Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
  2017-07-06 15:55     ` Christoph Lameter
@ 2017-07-06 16:16       ` Daniel Micay
  2017-07-06 17:53       ` Rik van Riel
  1 sibling, 0 replies; 22+ messages in thread
From: Daniel Micay @ 2017-07-06 16:16 UTC (permalink / raw)
  To: Christoph Lameter, Kees Cook
  Cc: Andrew Morton, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Paul E. McKenney, Ingo Molnar, Josh Triplett, Andy Lutomirski,
	Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
	Sergey Senozhatsky, Helge Deller, Rik van Riel, Linux-MM,
	Tycho Andersen, LKML, kernel-hardening

On Thu, 2017-07-06 at 10:55 -0500, Christoph Lameter wrote:
> On Thu, 6 Jul 2017, Kees Cook wrote:
> 
> > On Thu, Jul 6, 2017 at 6:43 AM, Christoph Lameter <cl@linux.com>
> > wrote:
> > > On Wed, 5 Jul 2017, Kees Cook wrote:
> > > 
> > > > @@ -3536,6 +3565,9 @@ static int kmem_cache_open(struct
> > > > kmem_cache *s, unsigned long flags)
> > > >  {
> > > >       s->flags = kmem_cache_flags(s->size, flags, s->name, s-
> > > > >ctor);
> > > >       s->reserved = 0;
> > > > +#ifdef CONFIG_SLAB_FREELIST_HARDENED
> > > > +     s->random = get_random_long();
> > > > +#endif
> > > > 
> > > >       if (need_reserve_slab_rcu && (s->flags &
> > > > SLAB_TYPESAFE_BY_RCU))
> > > >               s->reserved = sizeof(struct rcu_head);
> > > > 
> > > 
> > > So if an attacker knows the internal structure of data then he can
> > > simply
> > > dereference page->kmem_cache->random to decode the freepointer.
> > 
> > That requires a series of arbitrary reads. This is protecting
> > against
> > attacks that use an adjacent slab object write overflow to write the
> > freelist pointer. This internal structure is very reliable, and has
> > been the basis of freelist attacks against the kernel for a decade.
> 
> These reads are not arbitrary. You can usually calculate the page
> struct
> address easily from the address and then do a couple of loads to get
> there.

You're describing an arbitrary read vulnerability: an attacker able to
read the value of an address of their choosing. Requiring a powerful
additional primitive rather than only a small fixed size overflow or a
weak use-after-free vulnerability to use a common exploit vector is
useful.

A deterministic mitigation would be better, but I don't think an extra
slab allocator for hardened kernels would be welcomed. Since there isn't
a separate allocator for that niche, SLAB or SLUB are used. The ideal
would be bitmaps in `struct page` but that implies another allocator,
using single pages for the smallest size classes and potentially needing
to bloat `struct page` even with that.

There's definitely a limit to the hardening that can be done for SLUB,
but unless forking it into a different allocator is welcome that's what
will be suggested. Similarly, the slab freelist randomization feature is
a much weaker mitigation than it could be without these constraints
placed on it. This is much lower complexity than that and higher value
though...

> Ok so you get rid of the old attacks because we did not have that
> hardening in effect when they designed their approaches?
> 
> > It is a probabilistic defense, but then so is the stack protector.
> > This is a similar defense; while not perfect it makes the class of
> > attack much more difficult to mount.
> 
> Na I am not convinced of the "much more difficult". Maybe they will
> just
> have to upgrade their approaches to fetch the proper values to decode.

To fetch the values they would need an arbitrary read vulnerability or
the ability to dump them via uninitialized slab allocations as an extra
requirement.

An attacker can similarly bypass the stack canary by reading them from
stack frames via a stack buffer read overflow or uninitialized variable
usage leaking stack data. On non-x86, at least with SMP, the stack
canary is just a global variable that remains the same after
initialization too. That doesn't make it useless, although the kernel
doesn't have many linear overflows on the stack which is the real issue
with it as a mitigation. Despite that, most people are using kernels
with stack canaries, and that has a significant performance cost unlike
these kinds of changes.


* Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
  2017-07-06 15:55     ` Christoph Lameter
  2017-07-06 16:16       ` [kernel-hardening] " Daniel Micay
@ 2017-07-06 17:53       ` Rik van Riel
  2017-07-06 18:50         ` Kees Cook
  1 sibling, 1 reply; 22+ messages in thread
From: Rik van Riel @ 2017-07-06 17:53 UTC (permalink / raw)
  To: Christoph Lameter, Kees Cook
  Cc: Andrew Morton, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Paul E. McKenney, Ingo Molnar, Josh Triplett, Andy Lutomirski,
	Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
	Sergey Senozhatsky, Helge Deller, Linux-MM, Tycho Andersen, LKML,
	kernel-hardening


On Thu, 2017-07-06 at 10:55 -0500, Christoph Lameter wrote:
> On Thu, 6 Jul 2017, Kees Cook wrote:
> 
> > On Thu, Jul 6, 2017 at 6:43 AM, Christoph Lameter <cl@linux.com>
> > wrote:
> > > On Wed, 5 Jul 2017, Kees Cook wrote:
> > > 
> > > > @@ -3536,6 +3565,9 @@ static int kmem_cache_open(struct
> > > > kmem_cache *s, unsigned long flags)
> > > >  {
> > > >       s->flags = kmem_cache_flags(s->size, flags, s->name, s-
> > > > >ctor);
> > > >       s->reserved = 0;
> > > > +#ifdef CONFIG_SLAB_FREELIST_HARDENED
> > > > +     s->random = get_random_long();
> > > > +#endif
> > > > 
> > > >       if (need_reserve_slab_rcu && (s->flags &
> > > > SLAB_TYPESAFE_BY_RCU))
> > > >               s->reserved = sizeof(struct rcu_head);
> > > > 
> > > 
> > > So if an attacker knows the internal structure of data then he
> > > can simply
> > > dereference page->kmem_cache->random to decode the freepointer.
> > 
> > That requires a series of arbitrary reads. This is protecting
> > against
> > attacks that use an adjacent slab object write overflow to write
> > the
> > freelist pointer. This internal structure is very reliable, and has
> > been the basis of freelist attacks against the kernel for a decade.
> 
> These reads are not arbitrary. You can usually calculate the page
> struct
> address easily from the address and then do a couple of loads to get
> there.
> 
> Ok so you get rid of the old attacks because we did not have that
> hardening in effect when they designed their approaches?

The hardening protects against situations where
people do not have arbitrary code execution and
memory read access in the kernel, with the goal
of protecting people from acquiring those abilities.

> > It is a probabilistic defense, but then so is the stack protector.
> > This is a similar defense; while not perfect it makes the class of
> > attack much more difficult to mount.
> 
> Na I am not convinced of the "much more difficult". Maybe they will
> just
> have to upgrade their approaches to fetch the proper values to
> decode.

Easier said than done. Most of the time that an
unpatched vulnerability is outstanding, only one
issue is known before the user updates the kernel
to a version that does not have it.

Bypassing kernel hardening typically requires the
use of multiple vulnerabilities, and the absence of
roadblocks (like hardening) that make a type of
vulnerability exploitable.

Between usercopy hardening, and these slub freelist
canaries (which is what they effectively are), several
classes of exploits are no longer usable.

-- 
All rights reversed


* Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
  2017-07-06 17:53       ` Rik van Riel
@ 2017-07-06 18:50         ` Kees Cook
  2017-07-07 13:50           ` Christoph Lameter
  0 siblings, 1 reply; 22+ messages in thread
From: Kees Cook @ 2017-07-06 18:50 UTC (permalink / raw)
  To: Rik van Riel
  Cc: Christoph Lameter, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Linux-MM, Tycho Andersen, LKML, kernel-hardening

On Thu, Jul 6, 2017 at 10:53 AM, Rik van Riel <riel@redhat.com> wrote:
> On Thu, 2017-07-06 at 10:55 -0500, Christoph Lameter wrote:
>> On Thu, 6 Jul 2017, Kees Cook wrote:
>> > That requires a series of arbitrary reads. This is protecting
>> > against
>> > attacks that use an adjacent slab object write overflow to write
>> > the
>> > freelist pointer. This internal structure is very reliable, and has
>> > been the basis of freelist attacks against the kernel for a decade.
>>
>> These reads are not arbitrary. You can usually calculate the page struct
>> address easily from the address and then do a couple of loads to get
>> there.
>>
>> Ok so you get rid of the old attacks because we did not have that
>> hardening in effect when they designed their approaches?
>
> The hardening protects against situations where
> people do not have arbitrary code execution and
> memory read access in the kernel, with the goal
> of protecting people from acquiring those abilities.

Right. This is about blocking the escalation of attack capability. For
slab object overflow flaws, there are mainly two exploitation methods:
adjacent allocated object overwrite and adjacent freed object
overwrite (i.e. a freelist pointer overwrite). The first attack
depends heavily on which slab cache (and therefore which structures)
has been exposed by the bug. It's a very narrow and specific attack
method. The freelist attack is entirely general purpose since it
provides a reliable way to gain arbitrary write capabilities.
Protecting against that attack greatly narrows the options for an
attacker which makes attacks more expensive to create and possibly
less reliable (and reliability is crucial to successful attacks).

>> > It is a probabilistic defense, but then so is the stack protector.
>> > This is a similar defense; while not perfect it makes the class of
>> > attack much more difficult to mount.
>>
>> Na I am not convinced of the "much more difficult". Maybe they will just
>> have to upgrade their approaches to fetch the proper values to
>> decode.
>
> Easier said than done. Most of the time that an
> unpatched vulnerability is outstanding, only one
> issue is known before the user updates the kernel
> to a version that does not have it.
>
> Bypassing kernel hardening typically requires the
> use of multiple vulnerabilities, and the absence of
> roadblocks (like hardening) that make a type of
> vulnerability exploitable.
>
> Between usercopy hardening, and these slub freelist
> canaries (which is what they effectively are), several
> classes of exploits are no longer usable.

Yup. I've been thinking of this patch kind of like glibc's PTR_MANGLE feature.

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
  2017-07-06 18:50         ` Kees Cook
@ 2017-07-07 13:50           ` Christoph Lameter
  2017-07-07 16:51             ` Kees Cook
  0 siblings, 1 reply; 22+ messages in thread
From: Christoph Lameter @ 2017-07-07 13:50 UTC (permalink / raw)
  To: Kees Cook
  Cc: Rik van Riel, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Linux-MM, Tycho Andersen, LKML, kernel-hardening

On Thu, 6 Jul 2017, Kees Cook wrote:

> Right. This is about blocking the escalation of attack capability. For
> slab object overflow flaws, there are mainly two exploitation methods:
> adjacent allocated object overwrite and adjacent freed object
> overwrite (i.e. a freelist pointer overwrite). The first attack
> depends heavily on which slab cache (and therefore which structures)
> has been exposed by the bug. It's a very narrow and specific attack
> method. The freelist attack is entirely general purpose since it
> provides a reliable way to gain arbitrary write capabilities.
> Protecting against that attack greatly narrows the options for an
> attacker which makes attacks more expensive to create and possibly
> less reliable (and reliability is crucial to successful attacks).


The simplest thing here is to vary the location of the freelist pointer.
That way you cannot hit the freepointer in a deterministic way.

The freepointer is put at offset 0 right now. But you could put it
anywhere in the object.

Index: linux/mm/slub.c
===================================================================
--- linux.orig/mm/slub.c
+++ linux/mm/slub.c
@@ -3467,7 +3467,8 @@ static int calculate_sizes(struct kmem_c
 		 */
 		s->offset = size;
 		size += sizeof(void *);
-	}
+	} else
+		s->offset = s->size / sizeof(void *) * <insert random chance logic here>

 #ifdef CONFIG_SLUB_DEBUG
 	if (flags & SLAB_STORE_USER)
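
One hypothetical way to fill in that placeholder (untested, only to
make the idea concrete) would be to pick a random pointer-aligned
slot at cache creation time:

	} else {
		unsigned int slots = s->size / sizeof(void *);

		/* pick one aligned slot in [0, s->size) for the freepointer */
		s->offset = (get_random_int() % slots) * sizeof(void *);
	}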


* Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
  2017-07-07 13:50           ` Christoph Lameter
@ 2017-07-07 16:51             ` Kees Cook
  2017-07-07 17:06               ` Christoph Lameter
  0 siblings, 1 reply; 22+ messages in thread
From: Kees Cook @ 2017-07-07 16:51 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Rik van Riel, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Linux-MM, Tycho Andersen, LKML, kernel-hardening

On Fri, Jul 7, 2017 at 6:50 AM, Christoph Lameter <cl@linux.com> wrote:
> On Thu, 6 Jul 2017, Kees Cook wrote:
>
>> Right. This is about blocking the escalation of attack capability. For
>> slab object overflow flaws, there are mainly two exploitation methods:
>> adjacent allocated object overwrite and adjacent freed object
>> overwrite (i.e. a freelist pointer overwrite). The first attack
>> depends heavily on which slab cache (and therefore which structures)
>> has been exposed by the bug. It's a very narrow and specific attack
>> method. The freelist attack is entirely general purpose since it
>> provides a reliable way to gain arbitrary write capabilities.
>> Protecting against that attack greatly narrows the options for an
>> attacker which makes attacks more expensive to create and possibly
>> less reliable (and reliability is crucial to successful attacks).
>
>
> The simplest thing here is to vary the location of the freelist pointer.
> That way you cannot hit the freepointer in a deterministic way.
>
> The freepointer is put at offset 0 right now. But you could put it
> anywhere in the object.
>
> Index: linux/mm/slub.c
> ===================================================================
> --- linux.orig/mm/slub.c
> +++ linux/mm/slub.c
> @@ -3467,7 +3467,8 @@ static int calculate_sizes(struct kmem_c
>                  */
>                 s->offset = size;
>                 size += sizeof(void *);
> -       }
> +       } else
> +               s->offset = s->size / sizeof(void *) * <insert random chance logic here>
>
>  #ifdef CONFIG_SLUB_DEBUG
>         if (flags & SLAB_STORE_USER)

I wouldn't mind having both mitigations, but this alone is still open
to spraying attacks. As long as an attacker's overflow can span an
entire object, they can still hit the freelist pointer (which is
especially true with small objects). With the XOR obfuscation they
have to know where the pointer is stored (usually not available since
they have only been able to arrange "next object is unallocated"
without knowing _where_ it is allocated) and the random number (stored
separately in the cache).

If we also added a >0 offset, that would make things even less
deterministic. Though I wonder if it would make the performance impact
higher. The XOR patch right now is very light.

Yet another option would be to move the freelist pointer over by
sizeof(void *) and add a canary to be checked at offset 0, but that
involves additional memory fetches and doesn't protect against a bad
array index attack (rather than a linear overflow). So, I still think
the XOR patch is the right first step. We could further harden it,
but I think it's the place to start.
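
For concreteness, that canary variant might look something like this
untested sketch (names hypothetical, reusing s->random purely for
illustration):

/* offset 0 holds a canary; the freepointer moves up one word */
static inline void set_free_canary(struct kmem_cache *s, void *object)
{
	*(unsigned long *)object = s->random ^ (unsigned long)object;
}

static inline bool check_free_canary(struct kmem_cache *s, void *object)
{
	return *(unsigned long *)object ==
	       (s->random ^ (unsigned long)object);
}

The extra store on free and load-and-compare on allocation are the
additional memory fetches mentioned above.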

-Kees

-- 
Kees Cook
Pixel Security


* Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
  2017-07-07 16:51             ` Kees Cook
@ 2017-07-07 17:06               ` Christoph Lameter
  2017-07-07 18:43                 ` Kees Cook
  0 siblings, 1 reply; 22+ messages in thread
From: Christoph Lameter @ 2017-07-07 17:06 UTC (permalink / raw)
  To: Kees Cook
  Cc: Rik van Riel, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Linux-MM, Tycho Andersen, LKML, kernel-hardening

On Fri, 7 Jul 2017, Kees Cook wrote:

> If we also added a >0 offset, that would make things even less
> deterministic. Though I wonder if it would make the performance impact
> higher. The XOR patch right now is very light.

There would be barely any performance impact if you keep the offset within
a cacheline since most objects start on a cacheline boundary. The
processor has to fetch the cacheline anyways.


* Re: [PATCH v3] mm: Add SLUB free list pointer obfuscation
  2017-07-07 17:06               ` Christoph Lameter
@ 2017-07-07 18:43                 ` Kees Cook
  0 siblings, 0 replies; 22+ messages in thread
From: Kees Cook @ 2017-07-07 18:43 UTC (permalink / raw)
  To: Christoph Lameter
  Cc: Rik van Riel, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Linux-MM, Tycho Andersen, LKML, kernel-hardening

On Fri, Jul 7, 2017 at 10:06 AM, Christoph Lameter <cl@linux.com> wrote:
> On Fri, 7 Jul 2017, Kees Cook wrote:
>
>> If we also added a >0 offset, that would make things even less
>> deterministic. Though I wonder if it would make the performance impact
>> higher. The XOR patch right now is very light.
>
> There would be barely any performance impact if you keep the offset within
> a cacheline since most objects start on a cacheline boundary. The
> processor has to fetch the cacheline anyways.

Sure, this seems like a nice additional bit of hardening, even if
we're limited to a cacheline. I'd still want to protect against the
spray and index attacks though (which the XOR method covers), but we
can do both. We should keep them as distinct patches, though. If
you'll Ack the XOR patch, I can poke at adding offset randomization?

-Kees

-- 
Kees Cook
Pixel Security


* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-06  0:27 [PATCH v3] mm: Add SLUB free list pointer obfuscation Kees Cook
  2017-07-06 13:43 ` Christoph Lameter
@ 2017-07-24 21:17 ` Alexander Popov
  2017-07-25  9:42   ` Alexander Popov
  2017-07-26  0:21   ` Kees Cook
  1 sibling, 2 replies; 22+ messages in thread
From: Alexander Popov @ 2017-07-24 21:17 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Rik van Riel, linux-mm, Tycho Andersen, linux-kernel,
	kernel-hardening




* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-24 21:17 ` [v3] " Alexander Popov
@ 2017-07-25  9:42   ` Alexander Popov
  2017-07-26  0:21   ` Kees Cook
  1 sibling, 0 replies; 22+ messages in thread
From: Alexander Popov @ 2017-07-25  9:42 UTC (permalink / raw)
  To: Kees Cook
  Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Rik van Riel, linux-mm, Tycho Andersen, linux-kernel,
	kernel-hardening, alex.popov




* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-24 21:17 ` [v3] " Alexander Popov
  2017-07-25  9:42   ` Alexander Popov
@ 2017-07-26  0:21   ` Kees Cook
  2017-07-26 14:08     ` Christopher Lameter
  1 sibling, 1 reply; 22+ messages in thread
From: Kees Cook @ 2017-07-26  0:21 UTC (permalink / raw)
  To: Alexander Popov
  Cc: Andrew Morton, Christoph Lameter, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Rik van Riel, Linux-MM, Tycho Andersen, LKML, kernel-hardening

On Mon, Jul 24, 2017 at 2:17 PM, Alexander Popov <alex.popov@linux.com> wrote:
> From 86f4f1f6deb76849e00c761fa30eeb479f789c35 Mon Sep 17 00:00:00 2001
> From: Alexander Popov <alex.popov@linux.com>
> Date: Mon, 24 Jul 2017 23:16:28 +0300
> Subject: [PATCH 2/2] mm/slub.c: add a naive detection of double free or
>  corruption
>
> On 06.07.2017 03:27, Kees Cook wrote:
>> This SLUB free list pointer obfuscation code is modified from Brad
>> Spengler/PaX Team's code in the last public patch of grsecurity/PaX based
>> on my understanding of the code. Changes or omissions from the original
>> code are mine and don't reflect the original grsecurity/PaX code.
>>
>> This adds a per-cache random value to SLUB caches that is XORed with
>> their freelist pointer address and value. This adds nearly zero overhead
>> and frustrates the very common heap overflow exploitation method of
>> overwriting freelist pointers. A recent example of the attack is written
>> up here: http://cyseclabs.com/blog/cve-2016-6187-heap-off-by-one-exploit
>>
>> This is based on patches by Daniel Micay, and refactored to minimize the
>> use of #ifdef.
>
> Hello!
>
> This is an addition to the SLAB_FREELIST_HARDENED feature. I'm sending it
> according to the discussion here:
> http://www.openwall.com/lists/kernel-hardening/2017/07/17/9
>
> -- >8 --
>
> Add an assertion similar to the "fasttop" check in the GNU C Library
> allocator as part of the SLAB_FREELIST_HARDENED feature. An object added
> to a singly linked freelist should not point to itself. That helps to
> detect some double-free errors (e.g. CVE-2017-2636) without slub_debug
> and KASAN.
>
> Signed-off-by: Alexander Popov <alex.popov@linux.com>
> ---
>  mm/slub.c | 4 ++++
>  1 file changed, 4 insertions(+)
>
> diff --git a/mm/slub.c b/mm/slub.c
> index c92d636..f39d06e 100644
> --- a/mm/slub.c
> +++ b/mm/slub.c
> @@ -290,6 +290,10 @@ static inline void set_freepointer(struct kmem_cache *s,
> void *object, void *fp)
>  {
>         unsigned long freeptr_addr = (unsigned long)object + s->offset;
>
> +#ifdef CONFIG_SLAB_FREELIST_HARDENED
> +       BUG_ON(object == fp); /* naive detection of double free or corruption */
> +#endif
> +
>         *(void **)freeptr_addr = freelist_ptr(s, fp, freeptr_addr);

What happens if, instead of BUG_ON, we do:

if (unlikely(WARN_RATELIMIT(object == fp, "double-free detected")))
        return;

That would ignore adding it back to the list, since it's already
there, yes? Or would this make SLUB go crazy? I can't tell from the
accounting details around callers to set_freepointer(). I assume it's
correct, since it's close to the same effect as BUG (i.e. we don't do
the update, but the cache remains visible to the system).
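
Spelled out (sketch only, untested, with the parentheses balanced),
set_freepointer() would become:

static inline void set_freepointer(struct kmem_cache *s, void *object, void *fp)
{
	unsigned long freeptr_addr = (unsigned long)object + s->offset;

#ifdef CONFIG_SLAB_FREELIST_HARDENED
	/* object == fp means the object already heads this freelist */
	if (unlikely(WARN_RATELIMIT(object == fp, "double-free detected")))
		return;
#endif
	*(void **)freeptr_addr = freelist_ptr(s, fp, freeptr_addr);
}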

-Kees

-- 
Kees Cook
Pixel Security


* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-26  0:21   ` Kees Cook
@ 2017-07-26 14:08     ` Christopher Lameter
  2017-07-26 16:20       ` Kees Cook
  0 siblings, 1 reply; 22+ messages in thread
From: Christopher Lameter @ 2017-07-26 14:08 UTC (permalink / raw)
  To: Kees Cook
  Cc: Alexander Popov, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Rik van Riel, Linux-MM, Tycho Andersen, LKML

On Tue, 25 Jul 2017, Kees Cook wrote:

> > @@ -290,6 +290,10 @@ static inline void set_freepointer(struct kmem_cache *s,
> > void *object, void *fp)
> >  {
> >         unsigned long freeptr_addr = (unsigned long)object + s->offset;
> >
> > +#ifdef CONFIG_SLAB_FREELIST_HARDENED
> > +       BUG_ON(object == fp); /* naive detection of double free or corruption */
> > +#endif
> > +
> >         *(void **)freeptr_addr = freelist_ptr(s, fp, freeptr_addr);
>
> What happens if, instead of BUG_ON, we do:
>
> if (unlikely(WARN_RATELIMIT(object == fp, "double-free detected")))
>         return;

This may work for the free fastpath but the set_freepointer function is
used in multiple other locations. Maybe just add this to the fastpath
instead of to this function?


* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-26 14:08     ` Christopher Lameter
@ 2017-07-26 16:20       ` Kees Cook
  2017-07-26 16:55         ` Christopher Lameter
  0 siblings, 1 reply; 22+ messages in thread
From: Kees Cook @ 2017-07-26 16:20 UTC (permalink / raw)
  To: Christopher Lameter
  Cc: Alexander Popov, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Rik van Riel, Linux-MM, Tycho Andersen, LKML, kernel-hardening

On Wed, Jul 26, 2017 at 7:08 AM, Christopher Lameter <cl@linux.com> wrote:
> On Tue, 25 Jul 2017, Kees Cook wrote:
>
>> > @@ -290,6 +290,10 @@ static inline void set_freepointer(struct kmem_cache *s,
>> > void *object, void *fp)
>> >  {
>> >         unsigned long freeptr_addr = (unsigned long)object + s->offset;
>> >
>> > +#ifdef CONFIG_SLAB_FREELIST_HARDENED
>> > +       BUG_ON(object == fp); /* naive detection of double free or corruption */
>> > +#endif
>> > +
>> >         *(void **)freeptr_addr = freelist_ptr(s, fp, freeptr_addr);
>>
>> What happens if, instead of BUG_ON, we do:
>>
>> if (unlikely(WARN_RATELIMIT(object == fp, "double-free detected")))
>>         return;
>
> This may work for the free fastpath but the set_freepointer function is
> used in multiple other locations. Maybe just add this to the fastpath
> instead of to this function?

Do you mean do_slab_free()?

-Kees

-- 
Kees Cook
Pixel Security


* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-26 16:20       ` Kees Cook
@ 2017-07-26 16:55         ` Christopher Lameter
  2017-07-26 17:13           ` Kees Cook
  2017-07-27 22:48           ` Alexander Popov
  0 siblings, 2 replies; 22+ messages in thread
From: Christopher Lameter @ 2017-07-26 16:55 UTC (permalink / raw)
  To: Kees Cook
  Cc: Alexander Popov, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Rik van Riel, Linux-MM, Tycho Andersen, LKML

On Wed, 26 Jul 2017, Kees Cook wrote:

> >> What happens if, instead of BUG_ON, we do:
> >>
> >> if (unlikely(WARN_RATELIMIT(object == fp, "double-free detected")))
> >>         return;
> >
> > This may work for the free fastpath but the set_freepointer function is
> > used in multiple other locations. Maybe just add this to the fastpath
> > instead of to this function?
>
> Do you mean do_slab_free()?

Yes, inserting these lines into do_slab_free() would simply ignore the
double free operation in the fast path and that would be safe.

Although in either case we are adding code to the fastpath...
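
Roughly like this, perhaps (an untested sketch assuming the current
do_slab_free() locals, where tail_obj is the object being freed and
c is the per-cpu slab state):

#ifdef CONFIG_SLAB_FREELIST_HARDENED
	/* freeing the object already at the freelist head: ignore it */
	if (unlikely(WARN_RATELIMIT(tail_obj == READ_ONCE(c->freelist),
				    "double-free detected")))
		return;
#endif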


* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-26 16:55         ` Christopher Lameter
@ 2017-07-26 17:13           ` Kees Cook
  2017-07-27 15:15             ` Christopher Lameter
  2017-07-27 22:48           ` Alexander Popov
  1 sibling, 1 reply; 22+ messages in thread
From: Kees Cook @ 2017-07-26 17:13 UTC (permalink / raw)
  To: Christopher Lameter
  Cc: Alexander Popov, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Rik van Riel, Linux-MM, Tycho Andersen, LKML, kernel-hardening

On Wed, Jul 26, 2017 at 9:55 AM, Christopher Lameter <cl@linux.com> wrote:
> On Wed, 26 Jul 2017, Kees Cook wrote:
>
>> >> What happens if, instead of BUG_ON, we do:
>> >>
>> >> if (unlikely(WARN_RATELIMIT(object == fp, "double-free detected")))
>> >>         return;
>> >
>> > This may work for the free fastpath but the set_freepointer function is
>> > used in multiple other locations. Maybe just add this to the fastpath
>> > instead of to this function?
>>
>> Do you mean do_slab_free()?
>
> Yes, inserting these lines into do_slab_free() would simply ignore the
> double free operation in the fast path and that would be safe.
>
> Although in either case we are adding code to the fastpath...

While I'd like it unconditionally, I think Alexander's proposal was to
put it behind CONFIG_SLAB_FREELIST_HARDENED.

BTW, while I've got your attention, can you Ack the other patch? I
sent a v4 for the pointer obfuscation, which we really need:
https://lkml.org/lkml/2017/7/26/4

Thanks!

-Kees

-- 
Kees Cook
Pixel Security


* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-26 17:13           ` Kees Cook
@ 2017-07-27 15:15             ` Christopher Lameter
  0 siblings, 0 replies; 22+ messages in thread
From: Christopher Lameter @ 2017-07-27 15:15 UTC (permalink / raw)
  To: Kees Cook
  Cc: Alexander Popov, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Rik van Riel, Linux-MM, Tycho Andersen, LKML

On Wed, 26 Jul 2017, Kees Cook wrote:

> > Although in either case we are adding code to the fastpath...
>
> While I'd like it unconditionally, I think Alexander's proposal was to
> put it behind CONFIG_SLAB_FREELIST_HARDENED.

Sounds good.

> BTW, while I've got your attention, can you Ack the other patch? I
> sent a v4 for the pointer obfuscation, which we really need:
> https://lkml.org/lkml/2017/7/26/4

Just looked through it.


* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-26 16:55         ` Christopher Lameter
  2017-07-26 17:13           ` Kees Cook
@ 2017-07-27 22:48           ` Alexander Popov
  2017-07-27 23:53             ` Christopher Lameter
  1 sibling, 1 reply; 22+ messages in thread
From: Alexander Popov @ 2017-07-27 22:48 UTC (permalink / raw)
  To: Christopher Lameter, Kees Cook
  Cc: Andrew Morton, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Paul E. McKenney, Ingo Molnar, Josh Triplett, Andy Lutomirski,
	Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
	Sergey Senozhatsky, Helge Deller, Rik van Riel, Linux-MM,
	Tycho Andersen, LKML, kernel-hardening, alex.popov

Hello Christopher and Kees,

On 26.07.2017 19:55, Christopher Lameter wrote:
> On Wed, 26 Jul 2017, Kees Cook wrote:
> 
>>>> What happens if, instead of BUG_ON, we do:
>>>>
>>>> if (unlikely(WARN_RATELIMIT(object == fp, "double-free detected")))
>>>>         return;
>>>
>>> This may work for the free fastpath but the set_freepointer function is
>>> used in multiple other locations. Maybe just add this to the fastpath
>>> instead of to this function?
>>
>> Do you mean do_slab_free()?
> 
> Yes, inserting these lines into do_slab_free() would simply ignore the
> double free operation in the fast path and that would be safe.

I don't really like ignoring double-free. I think that:
  - it will hide dangerous bugs in the kernel,
  - it can make some kernel exploits more stable.
I would rather add BUG_ON to set_freepointer() behind SLAB_FREELIST_HARDENED. Is
it fine?

At the same time avoiding the consequences of some double-free errors is better
than not doing that. It may be considered as kernel "self-healing", I don't
know. I can prepare a second patch for do_slab_free(), as you described. Would
you like it?

Best regards,
Alexander


* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-27 22:48           ` Alexander Popov
@ 2017-07-27 23:53             ` Christopher Lameter
  2017-07-31 20:17               ` Alexander Popov
  0 siblings, 1 reply; 22+ messages in thread
From: Christopher Lameter @ 2017-07-27 23:53 UTC (permalink / raw)
  To: Alexander Popov
  Cc: Kees Cook, Andrew Morton, Pekka Enberg, David Rientjes,
	Joonsoo Kim, Paul E. McKenney, Ingo Molnar, Josh Triplett,
	Andy Lutomirski, Nicolas Pitre, Tejun Heo, Daniel Mack,
	Sebastian Andrzej Siewior, Sergey Senozhatsky, Helge Deller,
	Rik van Riel, Linux-MM, Tycho Andersen, LKML,
	kernel-hardening@lists.openwall.com

On Fri, 28 Jul 2017, Alexander Popov wrote:

> I don't really like ignoring double-free. I think that:
>   - it will hide dangerous bugs in the kernel,
>   - it can make some kernel exploits more stable.
> I would rather add BUG_ON to set_freepointer() behind SLAB_FREELIST_HARDENED. Is
> it fine?

I think Kees already added some logging output.

> At the same time avoiding the consequences of some double-free errors is better
> than not doing that. It may be considered as kernel "self-healing", I don't
> know. I can prepare a second patch for do_slab_free(), as you described. Would
> you like it?

The SLUB allocator is already self-healing if you enable the option to do
so on bootup (that covers more than just the double-free case). What you
propose here is no different from that, just another way of having
similar functionality. In the best case it would work the same way.


* Re: [v3] mm: Add SLUB free list pointer obfuscation
  2017-07-27 23:53             ` Christopher Lameter
@ 2017-07-31 20:17               ` Alexander Popov
  0 siblings, 0 replies; 22+ messages in thread
From: Alexander Popov @ 2017-07-31 20:17 UTC (permalink / raw)
  To: Christopher Lameter, Kees Cook
  Cc: Andrew Morton, Pekka Enberg, David Rientjes, Joonsoo Kim,
	Paul E. McKenney, Ingo Molnar, Josh Triplett, Andy Lutomirski,
	Nicolas Pitre, Tejun Heo, Daniel Mack, Sebastian Andrzej Siewior,
	Sergey Senozhatsky, Helge Deller, Rik van Riel, Linux-MM,
	Tycho Andersen, LKML, kernel-hardening

Hello Christopher and Kees,

Excuse me for the delayed reply.

On 28.07.2017 02:53, Christopher Lameter wrote:
> On Fri, 28 Jul 2017, Alexander Popov wrote:
> 
>> I don't really like ignoring double-free. I think that:
>>   - it will hide dangerous bugs in the kernel,
>>   - it can make some kernel exploits more stable.
>> I would rather add BUG_ON to set_freepointer() behind SLAB_FREELIST_HARDENED. Is
>> it fine?
> 
> I think Kees already added some logging output.

Hm, I don't see anything like that in v4 of "SLUB free list pointer
obfuscation": https://patchwork.kernel.org/patch/9864165/

>> At the same time avoiding the consequences of some double-free errors is better
>> than not doing that. It may be considered as kernel "self-healing", I don't
>> know. I can prepare a second patch for do_slab_free(), as you described. Would
>> you like it?
> 
> The SLUB allocator is already self healing if you enable the option to do
> so on bootup (covers more than just the double free case). What you
> propose here is no different than that and just another way of having
> similar functionality. In the best case it would work the same way.

Ok, I see. Thanks.

Best regards,
Alexander


Thread overview: 22+ messages
2017-07-06  0:27 [PATCH v3] mm: Add SLUB free list pointer obfuscation Kees Cook
2017-07-06 13:43 ` Christoph Lameter
2017-07-06 15:48   ` Kees Cook
2017-07-06 15:55     ` Christoph Lameter
2017-07-06 16:16       ` [kernel-hardening] " Daniel Micay
2017-07-06 17:53       ` Rik van Riel
2017-07-06 18:50         ` Kees Cook
2017-07-07 13:50           ` Christoph Lameter
2017-07-07 16:51             ` Kees Cook
2017-07-07 17:06               ` Christoph Lameter
2017-07-07 18:43                 ` Kees Cook
2017-07-24 21:17 ` [v3] " Alexander Popov
2017-07-25  9:42   ` Alexander Popov
2017-07-26  0:21   ` Kees Cook
2017-07-26 14:08     ` Christopher Lameter
2017-07-26 16:20       ` Kees Cook
2017-07-26 16:55         ` Christopher Lameter
2017-07-26 17:13           ` Kees Cook
2017-07-27 15:15             ` Christopher Lameter
2017-07-27 22:48           ` Alexander Popov
2017-07-27 23:53             ` Christopher Lameter
2017-07-31 20:17               ` Alexander Popov
