All of lore.kernel.org
 help / color / mirror / Atom feed
* [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 18:22 ` william.c.roberts
  0 siblings, 0 replies; 73+ messages in thread
From: william.c.roberts @ 2016-07-26 18:22 UTC (permalink / raw)
  To: jason, linux-mm, linux-kernel, kernel-hardening, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman

The recent get_random_long() change in get_random_range() and then the
subsequent patches Jason put out, all stemmed from my tinkering
with the concept of randomizing mmap.

Any feedback would be greatly appreciated, including any feedback
indicating that I am idiot.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 18:22 ` william.c.roberts
  0 siblings, 0 replies; 73+ messages in thread
From: william.c.roberts @ 2016-07-26 18:22 UTC (permalink / raw)
  To: jason, linux-mm, linux-kernel, kernel-hardening, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman

The recent get_random_long() change in get_random_range() and then the
subsequent patches Jason put out, all stemmed from my tinkering
with the concept of randomizing mmap.

Any feedback would be greatly appreciated, including any feedback
indicating that I am idiot.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 18:22 ` [kernel-hardening] " william.c.roberts
@ 2016-07-26 18:22   ` william.c.roberts
  -1 siblings, 0 replies; 73+ messages in thread
From: william.c.roberts @ 2016-07-26 18:22 UTC (permalink / raw)
  To: jason, linux-mm, linux-kernel, kernel-hardening, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman, William Roberts

From: William Roberts <william.c.roberts@intel.com>

This patch introduces the ability randomize mmap locations where the
address is not requested, for instance when ld is allocating pages for
shared libraries. It chooses to randomize based on the current
personality for ASLR.

Currently, allocations are done sequentially within unmapped address
space gaps. This may happen top down or bottom up depending on scheme.

For instance these mmap calls produce contiguous mappings:
int size = getpagesize();
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000

Note no gap between.

After patches:
int size = getpagesize();
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000

Note gap between.

Using the test program mentioned here, that allocates fixed sized blocks
till exhaustion: https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
no difference was noticed in the number of allocations. Most varied from
run to run, but were always within a few allocations of one another
between patched and un-patched runs.

Performance Measurements:
Using strace with -T option and filtering for mmap on the program
ls shows a slowdown of approximate 3.7%

Signed-off-by: William Roberts <william.c.roberts@intel.com>
---
 mm/mmap.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/mm/mmap.c b/mm/mmap.c
index de2c176..7891272 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -43,6 +43,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/moduleparam.h>
 #include <linux/pkeys.h>
+#include <linux/random.h>
 
 #include <asm/uaccess.h>
 #include <asm/cacheflush.h>
@@ -1582,6 +1583,24 @@ unacct_error:
 	return error;
 }
 
+/*
+ * Generate a random address within a range. This differs from randomize_addr() by randomizing
+ * on len sized chunks. This helps prevent fragmentation of the virtual memory map.
+ */
+static unsigned long randomize_mmap(unsigned long start, unsigned long end, unsigned long len)
+{
+	unsigned long slots;
+
+	if ((current->personality & ADDR_NO_RANDOMIZE) || !randomize_va_space)
+		return 0;
+
+	slots = (end - start)/len;
+	if (!slots)
+		return 0;
+
+	return PAGE_ALIGN(start + ((get_random_long() % slots) * len));
+}
+
 unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 {
 	/*
@@ -1676,6 +1695,8 @@ found:
 	if (gap_start < info->low_limit)
 		gap_start = info->low_limit;
 
+	gap_start = randomize_mmap(gap_start, gap_end, length) ? : gap_start;
+
 	/* Adjust gap address to the desired alignment */
 	gap_start += (info->align_offset - gap_start) & info->align_mask;
 
@@ -1775,6 +1796,9 @@ found:
 found_highest:
 	/* Compute highest gap address at the desired alignment */
 	gap_end -= info->length;
+
+	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
+
 	gap_end -= (gap_end - info->align_offset) & info->align_mask;
 
 	VM_BUG_ON(gap_end < info->low_limit);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 18:22   ` william.c.roberts
  0 siblings, 0 replies; 73+ messages in thread
From: william.c.roberts @ 2016-07-26 18:22 UTC (permalink / raw)
  To: jason, linux-mm, linux-kernel, kernel-hardening, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman, William Roberts

From: William Roberts <william.c.roberts@intel.com>

This patch introduces the ability randomize mmap locations where the
address is not requested, for instance when ld is allocating pages for
shared libraries. It chooses to randomize based on the current
personality for ASLR.

Currently, allocations are done sequentially within unmapped address
space gaps. This may happen top down or bottom up depending on scheme.

For instance these mmap calls produce contiguous mappings:
int size = getpagesize();
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000

Note no gap between.

After patches:
int size = getpagesize();
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000

Note gap between.

Using the test program mentioned here, that allocates fixed sized blocks
till exhaustion: https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
no difference was noticed in the number of allocations. Most varied from
run to run, but were always within a few allocations of one another
between patched and un-patched runs.

Performance Measurements:
Using strace with -T option and filtering for mmap on the program
ls shows a slowdown of approximate 3.7%

Signed-off-by: William Roberts <william.c.roberts@intel.com>
---
 mm/mmap.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/mm/mmap.c b/mm/mmap.c
index de2c176..7891272 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -43,6 +43,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/moduleparam.h>
 #include <linux/pkeys.h>
+#include <linux/random.h>
 
 #include <asm/uaccess.h>
 #include <asm/cacheflush.h>
@@ -1582,6 +1583,24 @@ unacct_error:
 	return error;
 }
 
+/*
+ * Generate a random address within a range. This differs from randomize_addr() by randomizing
+ * on len sized chunks. This helps prevent fragmentation of the virtual memory map.
+ */
+static unsigned long randomize_mmap(unsigned long start, unsigned long end, unsigned long len)
+{
+	unsigned long slots;
+
+	if ((current->personality & ADDR_NO_RANDOMIZE) || !randomize_va_space)
+		return 0;
+
+	slots = (end - start)/len;
+	if (!slots)
+		return 0;
+
+	return PAGE_ALIGN(start + ((get_random_long() % slots) * len));
+}
+
 unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 {
 	/*
@@ -1676,6 +1695,8 @@ found:
 	if (gap_start < info->low_limit)
 		gap_start = info->low_limit;
 
+	gap_start = randomize_mmap(gap_start, gap_end, length) ? : gap_start;
+
 	/* Adjust gap address to the desired alignment */
 	gap_start += (info->align_offset - gap_start) & info->align_mask;
 
@@ -1775,6 +1796,9 @@ found:
 found_highest:
 	/* Compute highest gap address at the desired alignment */
 	gap_end -= info->length;
+
+	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
+
 	gap_end -= (gap_end - info->align_offset) & info->align_mask;
 
 	VM_BUG_ON(gap_end < info->low_limit);
-- 
1.9.1

^ permalink raw reply related	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 18:22   ` [kernel-hardening] " william.c.roberts
@ 2016-07-26 20:03     ` Jason Cooper
  -1 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-26 20:03 UTC (permalink / raw)
  To: william.c.roberts
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

Hi William!

On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> From: William Roberts <william.c.roberts@intel.com>
> 
> This patch introduces the ability randomize mmap locations where the
> address is not requested, for instance when ld is allocating pages for
> shared libraries. It chooses to randomize based on the current
> personality for ASLR.

Now I see how you found the randomize_range() fix. :-P

> Currently, allocations are done sequentially within unmapped address
> space gaps. This may happen top down or bottom up depending on scheme.
> 
> For instance these mmap calls produce contiguous mappings:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000
> 
> Note no gap between.
> 
> After patches:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000
> 
> Note gap between.
> 
> Using the test program mentioned here, that allocates fixed sized blocks
> till exhaustion: https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> no difference was noticed in the number of allocations. Most varied from
> run to run, but were always within a few allocations of one another
> between patched and un-patched runs.

Did you test this with different allocation sizes?

> Performance Measurements:
> Using strace with -T option and filtering for mmap on the program
> ls shows a slowdown of approximate 3.7%

I think it would be helpful to show the effect on the resulting object
code.

> Signed-off-by: William Roberts <william.c.roberts@intel.com>
> ---
>  mm/mmap.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index de2c176..7891272 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -43,6 +43,7 @@
>  #include <linux/userfaultfd_k.h>
>  #include <linux/moduleparam.h>
>  #include <linux/pkeys.h>
> +#include <linux/random.h>
>  
>  #include <asm/uaccess.h>
>  #include <asm/cacheflush.h>
> @@ -1582,6 +1583,24 @@ unacct_error:
>  	return error;
>  }
>  
> +/*
> + * Generate a random address within a range. This differs from randomize_addr() by randomizing
> + * on len sized chunks. This helps prevent fragmentation of the virtual memory map.
> + */
> +static unsigned long randomize_mmap(unsigned long start, unsigned long end, unsigned long len)
> +{
> +	unsigned long slots;
> +
> +	if ((current->personality & ADDR_NO_RANDOMIZE) || !randomize_va_space)
> +		return 0;

Couldn't we avoid checking this every time?  Say, by assigning a
function pointer during init?

> +
> +	slots = (end - start)/len;
> +	if (!slots)
> +		return 0;
> +
> +	return PAGE_ALIGN(start + ((get_random_long() % slots) * len));
> +}
> +

Personally, I'd prefer this function noop out based on a configuration
option.

>  unsigned long unmapped_area(struct vm_unmapped_area_info *info)
>  {
>  	/*
> @@ -1676,6 +1695,8 @@ found:
>  	if (gap_start < info->low_limit)
>  		gap_start = info->low_limit;
>  
> +	gap_start = randomize_mmap(gap_start, gap_end, length) ? : gap_start;
> +
>  	/* Adjust gap address to the desired alignment */
>  	gap_start += (info->align_offset - gap_start) & info->align_mask;
>  
> @@ -1775,6 +1796,9 @@ found:
>  found_highest:
>  	/* Compute highest gap address at the desired alignment */
>  	gap_end -= info->length;
> +
> +	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> +
>  	gap_end -= (gap_end - info->align_offset) & info->align_mask;
>  
>  	VM_BUG_ON(gap_end < info->low_limit);

I'll have to dig into the mm code more before I can comment
intelligently on this.

thx,

Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 20:03     ` Jason Cooper
  0 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-26 20:03 UTC (permalink / raw)
  To: william.c.roberts
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

Hi William!

On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> From: William Roberts <william.c.roberts@intel.com>
> 
> This patch introduces the ability randomize mmap locations where the
> address is not requested, for instance when ld is allocating pages for
> shared libraries. It chooses to randomize based on the current
> personality for ASLR.

Now I see how you found the randomize_range() fix. :-P

> Currently, allocations are done sequentially within unmapped address
> space gaps. This may happen top down or bottom up depending on scheme.
> 
> For instance these mmap calls produce contiguous mappings:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000
> 
> Note no gap between.
> 
> After patches:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000
> 
> Note gap between.
> 
> Using the test program mentioned here, that allocates fixed sized blocks
> till exhaustion: https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> no difference was noticed in the number of allocations. Most varied from
> run to run, but were always within a few allocations of one another
> between patched and un-patched runs.

Did you test this with different allocation sizes?

> Performance Measurements:
> Using strace with -T option and filtering for mmap on the program
> ls shows a slowdown of approximate 3.7%

I think it would be helpful to show the effect on the resulting object
code.

> Signed-off-by: William Roberts <william.c.roberts@intel.com>
> ---
>  mm/mmap.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
> 
> diff --git a/mm/mmap.c b/mm/mmap.c
> index de2c176..7891272 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -43,6 +43,7 @@
>  #include <linux/userfaultfd_k.h>
>  #include <linux/moduleparam.h>
>  #include <linux/pkeys.h>
> +#include <linux/random.h>
>  
>  #include <asm/uaccess.h>
>  #include <asm/cacheflush.h>
> @@ -1582,6 +1583,24 @@ unacct_error:
>  	return error;
>  }
>  
> +/*
> + * Generate a random address within a range. This differs from randomize_addr() by randomizing
> + * on len sized chunks. This helps prevent fragmentation of the virtual memory map.
> + */
> +static unsigned long randomize_mmap(unsigned long start, unsigned long end, unsigned long len)
> +{
> +	unsigned long slots;
> +
> +	if ((current->personality & ADDR_NO_RANDOMIZE) || !randomize_va_space)
> +		return 0;

Couldn't we avoid checking this every time?  Say, by assigning a
function pointer during init?

> +
> +	slots = (end - start)/len;
> +	if (!slots)
> +		return 0;
> +
> +	return PAGE_ALIGN(start + ((get_random_long() % slots) * len));
> +}
> +

Personally, I'd prefer this function noop out based on a configuration
option.

>  unsigned long unmapped_area(struct vm_unmapped_area_info *info)
>  {
>  	/*
> @@ -1676,6 +1695,8 @@ found:
>  	if (gap_start < info->low_limit)
>  		gap_start = info->low_limit;
>  
> +	gap_start = randomize_mmap(gap_start, gap_end, length) ? : gap_start;
> +
>  	/* Adjust gap address to the desired alignment */
>  	gap_start += (info->align_offset - gap_start) & info->align_mask;
>  
> @@ -1775,6 +1796,9 @@ found:
>  found_highest:
>  	/* Compute highest gap address at the desired alignment */
>  	gap_end -= info->length;
> +
> +	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> +
>  	gap_end -= (gap_end - info->align_offset) & info->align_mask;
>  
>  	VM_BUG_ON(gap_end < info->low_limit);

I'll have to dig into the mm code more before I can comment
intelligently on this.

thx,

Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 20:03     ` [kernel-hardening] " Jason Cooper
@ 2016-07-26 20:11       ` Roberts, William C
  -1 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 20:11 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman



> -----Original Message-----
> From: Jason Cooper [mailto:jason@lakedaemon.net]
> Sent: Tuesday, July 26, 2016 1:03 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: linux-mm@vger.kernel.org; linux-kernel@vger.kernel.org; kernel-
> hardening@lists.openwall.com; akpm@linux-foundation.org;
> keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> jeffv@google.com; salyzyn@android.com; dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> Hi William!
> 
> On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > From: William Roberts <william.c.roberts@intel.com>
> >
> > This patch introduces the ability randomize mmap locations where the
> > address is not requested, for instance when ld is allocating pages for
> > shared libraries. It chooses to randomize based on the current
> > personality for ASLR.
> 
> Now I see how you found the randomize_range() fix. :-P
> 
> > Currently, allocations are done sequentially within unmapped address
> > space gaps. This may happen top down or bottom up depending on scheme.
> >
> > For instance these mmap calls produce contiguous mappings:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40026000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40027000
> >
> > Note no gap between.
> >
> > After patches:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x400b4000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40055000
> >
> > Note gap between.
> >
> > Using the test program mentioned here, that allocates fixed sized
> > blocks till exhaustion:
> > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> > no difference was noticed in the number of allocations. Most varied
> > from run to run, but were always within a few allocations of one
> > another between patched and un-patched runs.
> 
> Did you test this with different allocation sizes?

No I didn't it. I wasn't sure the best way to test this, any ideas?

> 
> > Performance Measurements:
> > Using strace with -T option and filtering for mmap on the program ls
> > shows a slowdown of approximate 3.7%
> 
> I think it would be helpful to show the effect on the resulting object code.

Do you mean the maps of the process? I have some captures for whoopsie on my Ubuntu
system I can share.

One thing I didn't make clear in my commit message is why this is good. Right now, if you know
An address within in a process, you know all offsets done with mmap(). For instance, an offset
To libX can yield libY by adding/subtracting an offset. This is meant to make rops a bit harder, or
In general any mapping offset mmore difficult to find/guess.

> 
> > Signed-off-by: William Roberts <william.c.roberts@intel.com>
> > ---
> >  mm/mmap.c | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index de2c176..7891272 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -43,6 +43,7 @@
> >  #include <linux/userfaultfd_k.h>
> >  #include <linux/moduleparam.h>
> >  #include <linux/pkeys.h>
> > +#include <linux/random.h>
> >
> >  #include <asm/uaccess.h>
> >  #include <asm/cacheflush.h>
> > @@ -1582,6 +1583,24 @@ unacct_error:
> >  	return error;
> >  }
> >
> > +/*
> > + * Generate a random address within a range. This differs from
> > +randomize_addr() by randomizing
> > + * on len sized chunks. This helps prevent fragmentation of the virtual
> memory map.
> > + */
> > +static unsigned long randomize_mmap(unsigned long start, unsigned
> > +long end, unsigned long len) {
> > +	unsigned long slots;
> > +
> > +	if ((current->personality & ADDR_NO_RANDOMIZE) ||
> !randomize_va_space)
> > +		return 0;
> 
> Couldn't we avoid checking this every time?  Say, by assigning a function pointer
> during init?

Yeah that could be done. I just copied the way others checked elsewhere in the kernel :-P

> 
> > +
> > +	slots = (end - start)/len;
> > +	if (!slots)
> > +		return 0;
> > +
> > +	return PAGE_ALIGN(start + ((get_random_long() % slots) * len)); }
> > +
> 
> Personally, I'd prefer this function noop out based on a configuration option.

Me too.

> 
> >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)  {
> >  	/*
> > @@ -1676,6 +1695,8 @@ found:
> >  	if (gap_start < info->low_limit)
> >  		gap_start = info->low_limit;
> >
> > +	gap_start = randomize_mmap(gap_start, gap_end, length) ? :
> > +gap_start;
> > +
> >  	/* Adjust gap address to the desired alignment */
> >  	gap_start += (info->align_offset - gap_start) & info->align_mask;
> >
> > @@ -1775,6 +1796,9 @@ found:
> >  found_highest:
> >  	/* Compute highest gap address at the desired alignment */
> >  	gap_end -= info->length;
> > +
> > +	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> > +
> >  	gap_end -= (gap_end - info->align_offset) & info->align_mask;
> >
> >  	VM_BUG_ON(gap_end < info->low_limit);
> 
> I'll have to dig into the mm code more before I can comment intelligently on this.
> 
> thx,
> 
> Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 20:11       ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 20:11 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman



> -----Original Message-----
> From: Jason Cooper [mailto:jason@lakedaemon.net]
> Sent: Tuesday, July 26, 2016 1:03 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: linux-mm@vger.kernel.org; linux-kernel@vger.kernel.org; kernel-
> hardening@lists.openwall.com; akpm@linux-foundation.org;
> keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> jeffv@google.com; salyzyn@android.com; dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> Hi William!
> 
> On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > From: William Roberts <william.c.roberts@intel.com>
> >
> > This patch introduces the ability randomize mmap locations where the
> > address is not requested, for instance when ld is allocating pages for
> > shared libraries. It chooses to randomize based on the current
> > personality for ASLR.
> 
> Now I see how you found the randomize_range() fix. :-P
> 
> > Currently, allocations are done sequentially within unmapped address
> > space gaps. This may happen top down or bottom up depending on scheme.
> >
> > For instance these mmap calls produce contiguous mappings:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40026000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40027000
> >
> > Note no gap between.
> >
> > After patches:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x400b4000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40055000
> >
> > Note gap between.
> >
> > Using the test program mentioned here, that allocates fixed sized
> > blocks till exhaustion:
> > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> > no difference was noticed in the number of allocations. Most varied
> > from run to run, but were always within a few allocations of one
> > another between patched and un-patched runs.
> 
> Did you test this with different allocation sizes?

No I didn't it. I wasn't sure the best way to test this, any ideas?

> 
> > Performance Measurements:
> > Using strace with -T option and filtering for mmap on the program ls
> > shows a slowdown of approximate 3.7%
> 
> I think it would be helpful to show the effect on the resulting object code.

Do you mean the maps of the process? I have some captures for whoopsie on my Ubuntu
system I can share.

One thing I didn't make clear in my commit message is why this is good. Right now, if you know
An address within in a process, you know all offsets done with mmap(). For instance, an offset
To libX can yield libY by adding/subtracting an offset. This is meant to make rops a bit harder, or
In general any mapping offset mmore difficult to find/guess.

> 
> > Signed-off-by: William Roberts <william.c.roberts@intel.com>
> > ---
> >  mm/mmap.c | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index de2c176..7891272 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -43,6 +43,7 @@
> >  #include <linux/userfaultfd_k.h>
> >  #include <linux/moduleparam.h>
> >  #include <linux/pkeys.h>
> > +#include <linux/random.h>
> >
> >  #include <asm/uaccess.h>
> >  #include <asm/cacheflush.h>
> > @@ -1582,6 +1583,24 @@ unacct_error:
> >  	return error;
> >  }
> >
> > +/*
> > + * Generate a random address within a range. This differs from
> > +randomize_addr() by randomizing
> > + * on len sized chunks. This helps prevent fragmentation of the virtual
> memory map.
> > + */
> > +static unsigned long randomize_mmap(unsigned long start, unsigned
> > +long end, unsigned long len) {
> > +	unsigned long slots;
> > +
> > +	if ((current->personality & ADDR_NO_RANDOMIZE) ||
> !randomize_va_space)
> > +		return 0;
> 
> Couldn't we avoid checking this every time?  Say, by assigning a function pointer
> during init?

Yeah that could be done. I just copied the way others checked elsewhere in the kernel :-P

> 
> > +
> > +	slots = (end - start)/len;
> > +	if (!slots)
> > +		return 0;
> > +
> > +	return PAGE_ALIGN(start + ((get_random_long() % slots) * len)); }
> > +
> 
> Personally, I'd prefer this function noop out based on a configuration option.

Me too.

> 
> >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)  {
> >  	/*
> > @@ -1676,6 +1695,8 @@ found:
> >  	if (gap_start < info->low_limit)
> >  		gap_start = info->low_limit;
> >
> > +	gap_start = randomize_mmap(gap_start, gap_end, length) ? :
> > +gap_start;
> > +
> >  	/* Adjust gap address to the desired alignment */
> >  	gap_start += (info->align_offset - gap_start) & info->align_mask;
> >
> > @@ -1775,6 +1796,9 @@ found:
> >  found_highest:
> >  	/* Compute highest gap address at the desired alignment */
> >  	gap_end -= info->length;
> > +
> > +	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> > +
> >  	gap_end -= (gap_end - info->align_offset) & info->align_mask;
> >
> >  	VM_BUG_ON(gap_end < info->low_limit);
> 
> I'll have to dig into the mm code more before I can comment intelligently on this.
> 
> thx,
> 
> Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 18:22   ` [kernel-hardening] " william.c.roberts
  (?)
  (?)
@ 2016-07-26 20:12   ` Rik van Riel
  2016-07-26 20:17       ` Roberts, William C
  -1 siblings, 1 reply; 73+ messages in thread
From: Rik van Riel @ 2016-07-26 20:12 UTC (permalink / raw)
  To: kernel-hardening, jason, linux-mm, linux-kernel, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman, William Roberts

[-- Attachment #1: Type: text/plain, Size: 1500 bytes --]

On Tue, 2016-07-26 at 11:22 -0700, william.c.roberts@intel.com wrote:
> From: William Roberts <william.c.roberts@intel.com>
> 
> This patch introduces the ability randomize mmap locations where the
> address is not requested, for instance when ld is allocating pages
> for
> shared libraries. It chooses to randomize based on the current
> personality for ASLR.
> 
> Currently, allocations are done sequentially within unmapped address
> space gaps. This may happen top down or bottom up depending on
> scheme.
> 
> For instance these mmap calls produce contiguous mappings:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40026000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40027000
> 
> Note no gap between.
> 
> After patches:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x400b4000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40055000
> 
> Note gap between.

I suspect this randomization will be more useful
for file mappings than for anonymous mappings.

I don't know whether there are downsides to creating
more anonymous VMAs than we have to, with malloc
libraries that may perform various kinds of tricks
with mmap for their own performance reasons.

Does anyone have convincing reasons why mmap
randomization should do both file and anon, or
whether it should do just file mappings?

-- 
All rights reversed

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 473 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 20:03     ` [kernel-hardening] " Jason Cooper
  (?)
@ 2016-07-26 20:13       ` Roberts, William C
  -1 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 20:13 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

<snip>

RESEND fixing mm-list email....

> > -----Original Message-----
> > From: Jason Cooper [mailto:jason@lakedaemon.net]
> > Sent: Tuesday, July 26, 2016 1:03 PM
> > To: Roberts, William C <william.c.roberts@intel.com>
> > Cc: linux-mm@vger.kernel.org; linux-kernel@vger.kernel.org; kernel-
> > hardening@lists.openwall.com; akpm@linux-foundation.org;
> > keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> > jeffv@google.com; salyzyn@android.com; dcashman@android.com
> > Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> >
> > Hi William!
> >
> > On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > From: William Roberts <william.c.roberts@intel.com>
> > >
> > > This patch introduces the ability randomize mmap locations where the
> > > address is not requested, for instance when ld is allocating pages
> > > for shared libraries. It chooses to randomize based on the current
> > > personality for ASLR.
> >
> > Now I see how you found the randomize_range() fix. :-P
> >
> > > Currently, allocations are done sequentially within unmapped address
> > > space gaps. This may happen top down or bottom up depending on scheme.
> > >
> > > For instance these mmap calls produce contiguous mappings:
> > > int size = getpagesize();
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40026000
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40027000
> > >
> > > Note no gap between.
> > >
> > > After patches:
> > > int size = getpagesize();
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x400b4000
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40055000
> > >
> > > Note gap between.
> > >
> > > Using the test program mentioned here, that allocates fixed sized
> > > blocks till exhaustion:
> > > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html
> > > , no difference was noticed in the number of allocations. Most
> > > varied from run to run, but were always within a few allocations of
> > > one another between patched and un-patched runs.
> >
> > Did you test this with different allocation sizes?
> 
> No I didn't it. I wasn't sure the best way to test this, any ideas?
> 
> >
> > > Performance Measurements:
> > > Using strace with -T option and filtering for mmap on the program ls
> > > shows a slowdown of approximate 3.7%
> >
> > I think it would be helpful to show the effect on the resulting object code.
> 
> Do you mean the maps of the process? I have some captures for whoopsie on my
> Ubuntu system I can share.
> 
> One thing I didn't make clear in my commit message is why this is good. Right
> now, if you know An address within in a process, you know all offsets done with
> mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
> offset. This is meant to make rops a bit harder, or In general any mapping offset
> mmore difficult to find/guess.
> 
> >
> > > Signed-off-by: William Roberts <william.c.roberts@intel.com>
> > > ---
> > >  mm/mmap.c | 24 ++++++++++++++++++++++++
> > >  1 file changed, 24 insertions(+)
> > >
> > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > index de2c176..7891272 100644
> > > --- a/mm/mmap.c
> > > +++ b/mm/mmap.c
> > > @@ -43,6 +43,7 @@
> > >  #include <linux/userfaultfd_k.h>
> > >  #include <linux/moduleparam.h>
> > >  #include <linux/pkeys.h>
> > > +#include <linux/random.h>
> > >
> > >  #include <asm/uaccess.h>
> > >  #include <asm/cacheflush.h>
> > > @@ -1582,6 +1583,24 @@ unacct_error:
> > >  	return error;
> > >  }
> > >
> > > +/*
> > > + * Generate a random address within a range. This differs from
> > > +randomize_addr() by randomizing
> > > + * on len sized chunks. This helps prevent fragmentation of the
> > > +virtual
> > memory map.
> > > + */
> > > +static unsigned long randomize_mmap(unsigned long start, unsigned
> > > +long end, unsigned long len) {
> > > +	unsigned long slots;
> > > +
> > > +	if ((current->personality & ADDR_NO_RANDOMIZE) ||
> > !randomize_va_space)
> > > +		return 0;
> >
> > Couldn't we avoid checking this every time?  Say, by assigning a
> > function pointer during init?
> 
> Yeah that could be done. I just copied the way others checked elsewhere in the
> kernel :-P
> 
> >
> > > +
> > > +	slots = (end - start)/len;
> > > +	if (!slots)
> > > +		return 0;
> > > +
> > > +	return PAGE_ALIGN(start + ((get_random_long() % slots) * len)); }
> > > +
> >
> > Personally, I'd prefer this function noop out based on a configuration option.
> 
> Me too.
> 
> >
> > >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)  {
> > >  	/*
> > > @@ -1676,6 +1695,8 @@ found:
> > >  	if (gap_start < info->low_limit)
> > >  		gap_start = info->low_limit;
> > >
> > > +	gap_start = randomize_mmap(gap_start, gap_end, length) ? :
> > > +gap_start;
> > > +
> > >  	/* Adjust gap address to the desired alignment */
> > >  	gap_start += (info->align_offset - gap_start) & info->align_mask;
> > >
> > > @@ -1775,6 +1796,9 @@ found:
> > >  found_highest:
> > >  	/* Compute highest gap address at the desired alignment */
> > >  	gap_end -= info->length;
> > > +
> > > +	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> > > +
> > >  	gap_end -= (gap_end - info->align_offset) & info->align_mask;
> > >
> > >  	VM_BUG_ON(gap_end < info->low_limit);
> >
> > I'll have to dig into the mm code more before I can comment intelligently on
> this.
> >
> > thx,
> >
> > Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 20:13       ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 20:13 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

<snip>

RESEND fixing mm-list email....

> > -----Original Message-----
> > From: Jason Cooper [mailto:jason@lakedaemon.net]
> > Sent: Tuesday, July 26, 2016 1:03 PM
> > To: Roberts, William C <william.c.roberts@intel.com>
> > Cc: linux-mm@vger.kernel.org; linux-kernel@vger.kernel.org; kernel-
> > hardening@lists.openwall.com; akpm@linux-foundation.org;
> > keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> > jeffv@google.com; salyzyn@android.com; dcashman@android.com
> > Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> >
> > Hi William!
> >
> > On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > From: William Roberts <william.c.roberts@intel.com>
> > >
> > > This patch introduces the ability randomize mmap locations where the
> > > address is not requested, for instance when ld is allocating pages
> > > for shared libraries. It chooses to randomize based on the current
> > > personality for ASLR.
> >
> > Now I see how you found the randomize_range() fix. :-P
> >
> > > Currently, allocations are done sequentially within unmapped address
> > > space gaps. This may happen top down or bottom up depending on scheme.
> > >
> > > For instance these mmap calls produce contiguous mappings:
> > > int size = getpagesize();
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40026000
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40027000
> > >
> > > Note no gap between.
> > >
> > > After patches:
> > > int size = getpagesize();
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x400b4000
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40055000
> > >
> > > Note gap between.
> > >
> > > Using the test program mentioned here, that allocates fixed sized
> > > blocks till exhaustion:
> > > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html
> > > , no difference was noticed in the number of allocations. Most
> > > varied from run to run, but were always within a few allocations of
> > > one another between patched and un-patched runs.
> >
> > Did you test this with different allocation sizes?
> 
> No I didn't it. I wasn't sure the best way to test this, any ideas?
> 
> >
> > > Performance Measurements:
> > > Using strace with -T option and filtering for mmap on the program ls
> > > shows a slowdown of approximate 3.7%
> >
> > I think it would be helpful to show the effect on the resulting object code.
> 
> Do you mean the maps of the process? I have some captures for whoopsie on my
> Ubuntu system I can share.
> 
> One thing I didn't make clear in my commit message is why this is good. Right
> now, if you know An address within in a process, you know all offsets done with
> mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
> offset. This is meant to make rops a bit harder, or In general any mapping offset
> mmore difficult to find/guess.
> 
> >
> > > Signed-off-by: William Roberts <william.c.roberts@intel.com>
> > > ---
> > >  mm/mmap.c | 24 ++++++++++++++++++++++++
> > >  1 file changed, 24 insertions(+)
> > >
> > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > index de2c176..7891272 100644
> > > --- a/mm/mmap.c
> > > +++ b/mm/mmap.c
> > > @@ -43,6 +43,7 @@
> > >  #include <linux/userfaultfd_k.h>
> > >  #include <linux/moduleparam.h>
> > >  #include <linux/pkeys.h>
> > > +#include <linux/random.h>
> > >
> > >  #include <asm/uaccess.h>
> > >  #include <asm/cacheflush.h>
> > > @@ -1582,6 +1583,24 @@ unacct_error:
> > >  	return error;
> > >  }
> > >
> > > +/*
> > > + * Generate a random address within a range. This differs from
> > > +randomize_addr() by randomizing
> > > + * on len sized chunks. This helps prevent fragmentation of the
> > > +virtual
> > memory map.
> > > + */
> > > +static unsigned long randomize_mmap(unsigned long start, unsigned
> > > +long end, unsigned long len) {
> > > +	unsigned long slots;
> > > +
> > > +	if ((current->personality & ADDR_NO_RANDOMIZE) ||
> > !randomize_va_space)
> > > +		return 0;
> >
> > Couldn't we avoid checking this every time?  Say, by assigning a
> > function pointer during init?
> 
> Yeah that could be done. I just copied the way others checked elsewhere in the
> kernel :-P
> 
> >
> > > +
> > > +	slots = (end - start)/len;
> > > +	if (!slots)
> > > +		return 0;
> > > +
> > > +	return PAGE_ALIGN(start + ((get_random_long() % slots) * len)); }
> > > +
> >
> > Personally, I'd prefer this function noop out based on a configuration option.
> 
> Me too.
> 
> >
> > >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)  {
> > >  	/*
> > > @@ -1676,6 +1695,8 @@ found:
> > >  	if (gap_start < info->low_limit)
> > >  		gap_start = info->low_limit;
> > >
> > > +	gap_start = randomize_mmap(gap_start, gap_end, length) ? :
> > > +gap_start;
> > > +
> > >  	/* Adjust gap address to the desired alignment */
> > >  	gap_start += (info->align_offset - gap_start) & info->align_mask;
> > >
> > > @@ -1775,6 +1796,9 @@ found:
> > >  found_highest:
> > >  	/* Compute highest gap address at the desired alignment */
> > >  	gap_end -= info->length;
> > > +
> > > +	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> > > +
> > >  	gap_end -= (gap_end - info->align_offset) & info->align_mask;
> > >
> > >  	VM_BUG_ON(gap_end < info->low_limit);
> >
> > I'll have to dig into the mm code more before I can comment intelligently on
> this.
> >
> > thx,
> >
> > Jason.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 20:13       ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 20:13 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

<snip>

RESEND fixing mm-list email....

> > -----Original Message-----
> > From: Jason Cooper [mailto:jason@lakedaemon.net]
> > Sent: Tuesday, July 26, 2016 1:03 PM
> > To: Roberts, William C <william.c.roberts@intel.com>
> > Cc: linux-mm@vger.kernel.org; linux-kernel@vger.kernel.org; kernel-
> > hardening@lists.openwall.com; akpm@linux-foundation.org;
> > keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> > jeffv@google.com; salyzyn@android.com; dcashman@android.com
> > Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> >
> > Hi William!
> >
> > On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > From: William Roberts <william.c.roberts@intel.com>
> > >
> > > This patch introduces the ability randomize mmap locations where the
> > > address is not requested, for instance when ld is allocating pages
> > > for shared libraries. It chooses to randomize based on the current
> > > personality for ASLR.
> >
> > Now I see how you found the randomize_range() fix. :-P
> >
> > > Currently, allocations are done sequentially within unmapped address
> > > space gaps. This may happen top down or bottom up depending on scheme.
> > >
> > > For instance these mmap calls produce contiguous mappings:
> > > int size = getpagesize();
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40026000
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40027000
> > >
> > > Note no gap between.
> > >
> > > After patches:
> > > int size = getpagesize();
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x400b4000
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40055000
> > >
> > > Note gap between.
> > >
> > > Using the test program mentioned here, that allocates fixed sized
> > > blocks till exhaustion:
> > > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html
> > > , no difference was noticed in the number of allocations. Most
> > > varied from run to run, but were always within a few allocations of
> > > one another between patched and un-patched runs.
> >
> > Did you test this with different allocation sizes?
> 
> No I didn't it. I wasn't sure the best way to test this, any ideas?
> 
> >
> > > Performance Measurements:
> > > Using strace with -T option and filtering for mmap on the program ls
> > > shows a slowdown of approximate 3.7%
> >
> > I think it would be helpful to show the effect on the resulting object code.
> 
> Do you mean the maps of the process? I have some captures for whoopsie on my
> Ubuntu system I can share.
> 
> One thing I didn't make clear in my commit message is why this is good. Right
> now, if you know An address within in a process, you know all offsets done with
> mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
> offset. This is meant to make rops a bit harder, or In general any mapping offset
> mmore difficult to find/guess.
> 
> >
> > > Signed-off-by: William Roberts <william.c.roberts@intel.com>
> > > ---
> > >  mm/mmap.c | 24 ++++++++++++++++++++++++
> > >  1 file changed, 24 insertions(+)
> > >
> > > diff --git a/mm/mmap.c b/mm/mmap.c
> > > index de2c176..7891272 100644
> > > --- a/mm/mmap.c
> > > +++ b/mm/mmap.c
> > > @@ -43,6 +43,7 @@
> > >  #include <linux/userfaultfd_k.h>
> > >  #include <linux/moduleparam.h>
> > >  #include <linux/pkeys.h>
> > > +#include <linux/random.h>
> > >
> > >  #include <asm/uaccess.h>
> > >  #include <asm/cacheflush.h>
> > > @@ -1582,6 +1583,24 @@ unacct_error:
> > >  	return error;
> > >  }
> > >
> > > +/*
> > > + * Generate a random address within a range. This differs from
> > > +randomize_addr() by randomizing
> > > + * on len sized chunks. This helps prevent fragmentation of the
> > > +virtual
> > memory map.
> > > + */
> > > +static unsigned long randomize_mmap(unsigned long start, unsigned
> > > +long end, unsigned long len) {
> > > +	unsigned long slots;
> > > +
> > > +	if ((current->personality & ADDR_NO_RANDOMIZE) ||
> > !randomize_va_space)
> > > +		return 0;
> >
> > Couldn't we avoid checking this every time?  Say, by assigning a
> > function pointer during init?
> 
> Yeah that could be done. I just copied the way others checked elsewhere in the
> kernel :-P
> 
> >
> > > +
> > > +	slots = (end - start)/len;
> > > +	if (!slots)
> > > +		return 0;
> > > +
> > > +	return PAGE_ALIGN(start + ((get_random_long() % slots) * len)); }
> > > +
> >
> > Personally, I'd prefer this function noop out based on a configuration option.
> 
> Me too.
> 
> >
> > >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)  {
> > >  	/*
> > > @@ -1676,6 +1695,8 @@ found:
> > >  	if (gap_start < info->low_limit)
> > >  		gap_start = info->low_limit;
> > >
> > > +	gap_start = randomize_mmap(gap_start, gap_end, length) ? :
> > > +gap_start;
> > > +
> > >  	/* Adjust gap address to the desired alignment */
> > >  	gap_start += (info->align_offset - gap_start) & info->align_mask;
> > >
> > > @@ -1775,6 +1796,9 @@ found:
> > >  found_highest:
> > >  	/* Compute highest gap address at the desired alignment */
> > >  	gap_end -= info->length;
> > > +
> > > +	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> > > +
> > >  	gap_end -= (gap_end - info->align_offset) & info->align_mask;
> > >
> > >  	VM_BUG_ON(gap_end < info->low_limit);
> >
> > I'll have to dig into the mm code more before I can comment intelligently on
> this.
> >
> > thx,
> >
> > Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 20:12   ` [kernel-hardening] " Rik van Riel
  2016-07-26 20:17       ` Roberts, William C
@ 2016-07-26 20:17       ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 20:17 UTC (permalink / raw)
  To: Rik van Riel, kernel-hardening, jason, linux-mm, linux-kernel, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman

> -----Original Message-----
> From: Rik van Riel [mailto:riel@redhat.com]
> Sent: Tuesday, July 26, 2016 1:12 PM
> To: kernel-hardening@lists.openwall.com; jason@lakedaemon.net; linux- 
> mm@vger.kernel.org; linux-kernel@vger.kernel.org; akpm@linux- 
> foundation.org
> Cc: keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com; 
> jeffv@google.com; salyzyn@android.com; dcashman@android.com; Roberts, 
> William C <william.c.roberts@intel.com>
> Subject: Re: [kernel-hardening] [PATCH] [RFC] Introduce mmap 
> randomization
> 
> On Tue, 2016-07-26 at 11:22 -0700, william.c.roberts@intel.com wrote:
> > From: William Roberts <william.c.roberts@intel.com>
> >
> > This patch introduces the ability randomize mmap locations where the 
> > address is not requested, for instance when ld is allocating pages 
> > for shared libraries. It chooses to randomize based on the current 
> > personality for ASLR.
> >
> > Currently, allocations are done sequentially within unmapped address 
> > space gaps. This may happen top down or bottom up depending on scheme.
> >
> > For instance these mmap calls produce contiguous mappings:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40026000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40027000
> >
> > Note no gap between.
> >
> > After patches:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x400b4000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40055000
> >
> > Note gap between.
> 
> I suspect this randomization will be more useful for file mappings 
> than for anonymous mappings.
> 
> I don't know whether there are downsides to creating more anonymous 
> VMAs than we have to, with malloc libraries that may perform various 
> kinds of tricks with mmap for their own performance reasons.
> 
> Does anyone have convincing reasons why mmap randomization should do 
> both file and anon, or whether it should do just file mappings?

Throwing this out there, but If you're mmap'ing buffers at known offsets in the
program then folks know where to write/modify.

Jason Cooper mentioned using a KConfig around this (amongst other things) which perhaps
Controlling this at a better granularity would be beneficial.

> 
> --
> All rights reversed

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 20:17       ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 20:17 UTC (permalink / raw)
  To: Rik van Riel, kernel-hardening, jason, linux-mm, linux-kernel, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain; charset="utf-8", Size: 2514 bytes --]

> -----Original Message-----
> From: Rik van Riel [mailto:riel@redhat.com]
> Sent: Tuesday, July 26, 2016 1:12 PM
> To: kernel-hardening@lists.openwall.com; jason@lakedaemon.net; linux- 
> mm@vger.kernel.org; linux-kernel@vger.kernel.org; akpm@linux- 
> foundation.org
> Cc: keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com; 
> jeffv@google.com; salyzyn@android.com; dcashman@android.com; Roberts, 
> William C <william.c.roberts@intel.com>
> Subject: Re: [kernel-hardening] [PATCH] [RFC] Introduce mmap 
> randomization
> 
> On Tue, 2016-07-26 at 11:22 -0700, william.c.roberts@intel.com wrote:
> > From: William Roberts <william.c.roberts@intel.com>
> >
> > This patch introduces the ability randomize mmap locations where the 
> > address is not requested, for instance when ld is allocating pages 
> > for shared libraries. It chooses to randomize based on the current 
> > personality for ASLR.
> >
> > Currently, allocations are done sequentially within unmapped address 
> > space gaps. This may happen top down or bottom up depending on scheme.
> >
> > For instance these mmap calls produce contiguous mappings:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40026000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40027000
> >
> > Note no gap between.
> >
> > After patches:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x400b4000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40055000
> >
> > Note gap between.
> 
> I suspect this randomization will be more useful for file mappings 
> than for anonymous mappings.
> 
> I don't know whether there are downsides to creating more anonymous 
> VMAs than we have to, with malloc libraries that may perform various 
> kinds of tricks with mmap for their own performance reasons.
> 
> Does anyone have convincing reasons why mmap randomization should do 
> both file and anon, or whether it should do just file mappings?

Throwing this out there, but If you're mmap'ing buffers at known offsets in the
program then folks know where to write/modify.

Jason Cooper mentioned using a KConfig around this (amongst other things) which perhaps
Controlling this at a better granularity would be beneficial.

> 
> --
> All rights reversed
N‹§²æìr¸›zǧu©ž²Æ {\b­†éì¹»\x1c®&Þ–)îÆi¢žØ^n‡r¶‰šŽŠÝ¢j$½§$¢¸\x05¢¹¨­è§~Š'.)îÄÃ,yèm¶ŸÿÃ\f%Š{±šj+ƒðèž×¦j)Z†·Ÿ

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 20:17       ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 20:17 UTC (permalink / raw)
  To: Rik van Riel, kernel-hardening, jason, linux-mm, linux-kernel, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman

> -----Original Message-----
> From: Rik van Riel [mailto:riel@redhat.com]
> Sent: Tuesday, July 26, 2016 1:12 PM
> To: kernel-hardening@lists.openwall.com; jason@lakedaemon.net; linux- 
> mm@vger.kernel.org; linux-kernel@vger.kernel.org; akpm@linux- 
> foundation.org
> Cc: keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com; 
> jeffv@google.com; salyzyn@android.com; dcashman@android.com; Roberts, 
> William C <william.c.roberts@intel.com>
> Subject: Re: [kernel-hardening] [PATCH] [RFC] Introduce mmap 
> randomization
> 
> On Tue, 2016-07-26 at 11:22 -0700, william.c.roberts@intel.com wrote:
> > From: William Roberts <william.c.roberts@intel.com>
> >
> > This patch introduces the ability randomize mmap locations where the 
> > address is not requested, for instance when ld is allocating pages 
> > for shared libraries. It chooses to randomize based on the current 
> > personality for ASLR.
> >
> > Currently, allocations are done sequentially within unmapped address 
> > space gaps. This may happen top down or bottom up depending on scheme.
> >
> > For instance these mmap calls produce contiguous mappings:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40026000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40027000
> >
> > Note no gap between.
> >
> > After patches:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x400b4000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40055000
> >
> > Note gap between.
> 
> I suspect this randomization will be more useful for file mappings 
> than for anonymous mappings.
> 
> I don't know whether there are downsides to creating more anonymous 
> VMAs than we have to, with malloc libraries that may perform various 
> kinds of tricks with mmap for their own performance reasons.
> 
> Does anyone have convincing reasons why mmap randomization should do 
> both file and anon, or whether it should do just file mappings?

Throwing this out there, but If you're mmap'ing buffers at known offsets in the
program then folks know where to write/modify.

Jason Cooper mentioned using a KConfig around this (amongst other things) which perhaps
Controlling this at a better granularity would be beneficial.

> 
> --
> All rights reversed

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 18:22   ` [kernel-hardening] " william.c.roberts
@ 2016-07-26 20:41     ` Nick Kralevich
  -1 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-07-26 20:41 UTC (permalink / raw)
  To: Roberts, William C
  Cc: jason, linux-mm, lkml, kernel-hardening, Andrew Morton,
	Kees Cook, Greg KH, Jeffrey Vander Stoep, salyzyn,
	Daniel Cashman

My apologies in advance if I misunderstand the purposes of this patch.

IIUC, this patch adds a random gap between various mmap() mappings,
with the goal of ensuring that both the mmap base address and gaps
between pages are randomized.

If that's the goal, please note that this behavior has caused
significant performance problems to Android in the past. Specifically,
random gaps between mmap()ed regions causes memory space
fragmentation. After a program runs for a long time, the ability to
find large contiguous blocks of memory becomes impossible, and mmap()s
fail due to lack of a large enough address space.

This isn't just a theoretical concern. Android actually hit this on
kernels prior to
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7dbaa466780a754154531b44c2086f6618cee3a8
. Before that patch, the gaps between mmap()ed pages were randomized.
See the discussion at:

  http://lists.infradead.org/pipermail/linux-arm-kernel/2011-November/073082.html
  http://marc.info/?t=132070957400005&r=1&w=2

We ended up having to work around this problem in the following commits:

  https://android.googlesource.com/platform/dalvik/+/311886c6c6fcd3b531531f592d56caab5e2a259c
  https://android.googlesource.com/platform/art/+/51e5386
  https://android.googlesource.com/platform/art/+/f94b781

If this behavior was re-introduced, it's likely to cause
hard-to-reproduce problems, and I suspect Android based distributions
would tend to disable this feature either globally, or for
applications which make a large number of mmap() calls.

-- Nick



On Tue, Jul 26, 2016 at 11:22 AM,  <william.c.roberts@intel.com> wrote:
> From: William Roberts <william.c.roberts@intel.com>
>
> This patch introduces the ability randomize mmap locations where the
> address is not requested, for instance when ld is allocating pages for
> shared libraries. It chooses to randomize based on the current
> personality for ASLR.
>
> Currently, allocations are done sequentially within unmapped address
> space gaps. This may happen top down or bottom up depending on scheme.
>
> For instance these mmap calls produce contiguous mappings:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000
>
> Note no gap between.
>
> After patches:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000
>
> Note gap between.
>
> Using the test program mentioned here, that allocates fixed sized blocks
> till exhaustion: https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> no difference was noticed in the number of allocations. Most varied from
> run to run, but were always within a few allocations of one another
> between patched and un-patched runs.
>
> Performance Measurements:
> Using strace with -T option and filtering for mmap on the program
> ls shows a slowdown of approximate 3.7%
>
> Signed-off-by: William Roberts <william.c.roberts@intel.com>
> ---
>  mm/mmap.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index de2c176..7891272 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -43,6 +43,7 @@
>  #include <linux/userfaultfd_k.h>
>  #include <linux/moduleparam.h>
>  #include <linux/pkeys.h>
> +#include <linux/random.h>
>
>  #include <asm/uaccess.h>
>  #include <asm/cacheflush.h>
> @@ -1582,6 +1583,24 @@ unacct_error:
>         return error;
>  }
>
> +/*
> + * Generate a random address within a range. This differs from randomize_addr() by randomizing
> + * on len sized chunks. This helps prevent fragmentation of the virtual memory map.
> + */
> +static unsigned long randomize_mmap(unsigned long start, unsigned long end, unsigned long len)
> +{
> +       unsigned long slots;
> +
> +       if ((current->personality & ADDR_NO_RANDOMIZE) || !randomize_va_space)
> +               return 0;
> +
> +       slots = (end - start)/len;
> +       if (!slots)
> +               return 0;
> +
> +       return PAGE_ALIGN(start + ((get_random_long() % slots) * len));
> +}
> +
>  unsigned long unmapped_area(struct vm_unmapped_area_info *info)
>  {
>         /*
> @@ -1676,6 +1695,8 @@ found:
>         if (gap_start < info->low_limit)
>                 gap_start = info->low_limit;
>
> +       gap_start = randomize_mmap(gap_start, gap_end, length) ? : gap_start;
> +
>         /* Adjust gap address to the desired alignment */
>         gap_start += (info->align_offset - gap_start) & info->align_mask;
>
> @@ -1775,6 +1796,9 @@ found:
>  found_highest:
>         /* Compute highest gap address at the desired alignment */
>         gap_end -= info->length;
> +
> +       gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> +
>         gap_end -= (gap_end - info->align_offset) & info->align_mask;
>
>         VM_BUG_ON(gap_end < info->low_limit);
> --
> 1.9.1
>



-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 20:41     ` Nick Kralevich
  0 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-07-26 20:41 UTC (permalink / raw)
  To: Roberts, William C
  Cc: jason, linux-mm, lkml, kernel-hardening, Andrew Morton,
	Kees Cook, Greg KH, Jeffrey Vander Stoep, salyzyn,
	Daniel Cashman

My apologies in advance if I misunderstand the purposes of this patch.

IIUC, this patch adds a random gap between various mmap() mappings,
with the goal of ensuring that both the mmap base address and gaps
between pages are randomized.

If that's the goal, please note that this behavior has caused
significant performance problems to Android in the past. Specifically,
random gaps between mmap()ed regions causes memory space
fragmentation. After a program runs for a long time, the ability to
find large contiguous blocks of memory becomes impossible, and mmap()s
fail due to lack of a large enough address space.

This isn't just a theoretical concern. Android actually hit this on
kernels prior to
http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7dbaa466780a754154531b44c2086f6618cee3a8
. Before that patch, the gaps between mmap()ed pages were randomized.
See the discussion at:

  http://lists.infradead.org/pipermail/linux-arm-kernel/2011-November/073082.html
  http://marc.info/?t=132070957400005&r=1&w=2

We ended up having to work around this problem in the following commits:

  https://android.googlesource.com/platform/dalvik/+/311886c6c6fcd3b531531f592d56caab5e2a259c
  https://android.googlesource.com/platform/art/+/51e5386
  https://android.googlesource.com/platform/art/+/f94b781

If this behavior was re-introduced, it's likely to cause
hard-to-reproduce problems, and I suspect Android based distributions
would tend to disable this feature either globally, or for
applications which make a large number of mmap() calls.

-- Nick



On Tue, Jul 26, 2016 at 11:22 AM,  <william.c.roberts@intel.com> wrote:
> From: William Roberts <william.c.roberts@intel.com>
>
> This patch introduces the ability randomize mmap locations where the
> address is not requested, for instance when ld is allocating pages for
> shared libraries. It chooses to randomize based on the current
> personality for ASLR.
>
> Currently, allocations are done sequentially within unmapped address
> space gaps. This may happen top down or bottom up depending on scheme.
>
> For instance these mmap calls produce contiguous mappings:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000
>
> Note no gap between.
>
> After patches:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000
>
> Note gap between.
>
> Using the test program mentioned here, that allocates fixed sized blocks
> till exhaustion: https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> no difference was noticed in the number of allocations. Most varied from
> run to run, but were always within a few allocations of one another
> between patched and un-patched runs.
>
> Performance Measurements:
> Using strace with -T option and filtering for mmap on the program
> ls shows a slowdown of approximate 3.7%
>
> Signed-off-by: William Roberts <william.c.roberts@intel.com>
> ---
>  mm/mmap.c | 24 ++++++++++++++++++++++++
>  1 file changed, 24 insertions(+)
>
> diff --git a/mm/mmap.c b/mm/mmap.c
> index de2c176..7891272 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -43,6 +43,7 @@
>  #include <linux/userfaultfd_k.h>
>  #include <linux/moduleparam.h>
>  #include <linux/pkeys.h>
> +#include <linux/random.h>
>
>  #include <asm/uaccess.h>
>  #include <asm/cacheflush.h>
> @@ -1582,6 +1583,24 @@ unacct_error:
>         return error;
>  }
>
> +/*
> + * Generate a random address within a range. This differs from randomize_addr() by randomizing
> + * on len sized chunks. This helps prevent fragmentation of the virtual memory map.
> + */
> +static unsigned long randomize_mmap(unsigned long start, unsigned long end, unsigned long len)
> +{
> +       unsigned long slots;
> +
> +       if ((current->personality & ADDR_NO_RANDOMIZE) || !randomize_va_space)
> +               return 0;
> +
> +       slots = (end - start)/len;
> +       if (!slots)
> +               return 0;
> +
> +       return PAGE_ALIGN(start + ((get_random_long() % slots) * len));
> +}
> +
>  unsigned long unmapped_area(struct vm_unmapped_area_info *info)
>  {
>         /*
> @@ -1676,6 +1695,8 @@ found:
>         if (gap_start < info->low_limit)
>                 gap_start = info->low_limit;
>
> +       gap_start = randomize_mmap(gap_start, gap_end, length) ? : gap_start;
> +
>         /* Adjust gap address to the desired alignment */
>         gap_start += (info->align_offset - gap_start) & info->align_mask;
>
> @@ -1775,6 +1796,9 @@ found:
>  found_highest:
>         /* Compute highest gap address at the desired alignment */
>         gap_end -= info->length;
> +
> +       gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
> +
>         gap_end -= (gap_end - info->align_offset) & info->align_mask;
>
>         VM_BUG_ON(gap_end < info->low_limit);
> --
> 1.9.1
>



-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 20:13       ` Roberts, William C
  (?)
@ 2016-07-26 20:59         ` Jason Cooper
  -1 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-26 20:59 UTC (permalink / raw)
  To: Roberts, William C
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

Hi William,

On Tue, Jul 26, 2016 at 08:13:23PM +0000, Roberts, William C wrote:
> > > From: Jason Cooper [mailto:jason@lakedaemon.net]
> > > On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > Performance Measurements:
> > > > Using strace with -T option and filtering for mmap on the program ls
> > > > shows a slowdown of approximate 3.7%
> > >
> > > I think it would be helpful to show the effect on the resulting object code.
> > 
> > Do you mean the maps of the process? I have some captures for whoopsie on my
> > Ubuntu system I can share.

No, I mean changes to mm/mmap.o.

> > One thing I didn't make clear in my commit message is why this is good. Right
> > now, if you know An address within in a process, you know all offsets done with
> > mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
> > offset. This is meant to make rops a bit harder, or In general any mapping offset
> > mmore difficult to find/guess.

Are you able to quantify how many bits of entropy you're imposing on the
attacker?  Is this a chair in the hallway or a significant increase in
the chances of crashing the program before finding the desired address?

thx,

Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 20:59         ` Jason Cooper
  0 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-26 20:59 UTC (permalink / raw)
  To: Roberts, William C
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

Hi William,

On Tue, Jul 26, 2016 at 08:13:23PM +0000, Roberts, William C wrote:
> > > From: Jason Cooper [mailto:jason@lakedaemon.net]
> > > On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > Performance Measurements:
> > > > Using strace with -T option and filtering for mmap on the program ls
> > > > shows a slowdown of approximate 3.7%
> > >
> > > I think it would be helpful to show the effect on the resulting object code.
> > 
> > Do you mean the maps of the process? I have some captures for whoopsie on my
> > Ubuntu system I can share.

No, I mean changes to mm/mmap.o.

> > One thing I didn't make clear in my commit message is why this is good. Right
> > now, if you know An address within in a process, you know all offsets done with
> > mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
> > offset. This is meant to make rops a bit harder, or In general any mapping offset
> > mmore difficult to find/guess.

Are you able to quantify how many bits of entropy you're imposing on the
attacker?  Is this a chair in the hallway or a significant increase in
the chances of crashing the program before finding the desired address?

thx,

Jason.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 20:59         ` Jason Cooper
  0 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-26 20:59 UTC (permalink / raw)
  To: Roberts, William C
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

Hi William,

On Tue, Jul 26, 2016 at 08:13:23PM +0000, Roberts, William C wrote:
> > > From: Jason Cooper [mailto:jason@lakedaemon.net]
> > > On Tue, Jul 26, 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > Performance Measurements:
> > > > Using strace with -T option and filtering for mmap on the program ls
> > > > shows a slowdown of approximate 3.7%
> > >
> > > I think it would be helpful to show the effect on the resulting object code.
> > 
> > Do you mean the maps of the process? I have some captures for whoopsie on my
> > Ubuntu system I can share.

No, I mean changes to mm/mmap.o.

> > One thing I didn't make clear in my commit message is why this is good. Right
> > now, if you know An address within in a process, you know all offsets done with
> > mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
> > offset. This is meant to make rops a bit harder, or In general any mapping offset
> > mmore difficult to find/guess.

Are you able to quantify how many bits of entropy you're imposing on the
attacker?  Is this a chair in the hallway or a significant increase in
the chances of crashing the program before finding the desired address?

thx,

Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 20:41     ` [kernel-hardening] " Nick Kralevich
@ 2016-07-26 21:02       ` Roberts, William C
  -1 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 21:02 UTC (permalink / raw)
  To: Nick Kralevich
  Cc: jason, linux-mm, lkml, kernel-hardening, Andrew Morton,
	Kees Cook, Greg KH, Jeffrey Vander Stoep, salyzyn,
	Daniel Cashman



> -----Original Message-----
> From: Nick Kralevich [mailto:nnk@google.com]
> Sent: Tuesday, July 26, 2016 1:41 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: jason@lakedaemon.net; linux-mm@vger.kernel.org; lkml <linux-
> kernel@vger.kernel.org>; kernel-hardening@lists.openwall.com; Andrew
> Morton <akpm@linux-foundation.org>; Kees Cook <keescook@chromium.org>;
> Greg KH <gregkh@linuxfoundation.org>; Jeffrey Vander Stoep
> <jeffv@google.com>; salyzyn@android.com; Daniel Cashman
> <dcashman@android.com>
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> My apologies in advance if I misunderstand the purposes of this patch.
> 
> IIUC, this patch adds a random gap between various mmap() mappings, with the
> goal of ensuring that both the mmap base address and gaps between pages are
> randomized.
> 
> If that's the goal, please note that this behavior has caused significant
> performance problems to Android in the past. Specifically, random gaps between
> mmap()ed regions causes memory space fragmentation. After a program runs for
> a long time, the ability to find large contiguous blocks of memory becomes
> impossible, and mmap()s fail due to lack of a large enough address space.

Yes and fragmentation is definitely a problem here. Especially when the mmaps()
are not a consistent length for program life.

> 
> This isn't just a theoretical concern. Android actually hit this on kernels prior to
> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7dbaa46
> 6780a754154531b44c2086f6618cee3a8
> . Before that patch, the gaps between mmap()ed pages were randomized.
> See the discussion at:
> 
>   http://lists.infradead.org/pipermail/linux-arm-kernel/2011-
> November/073082.html
>   http://marc.info/?t=132070957400005&r=1&w=2
> 
> We ended up having to work around this problem in the following commits:
> 
> 
> https://android.googlesource.com/platform/dalvik/+/311886c6c6fcd3b531531f59
> 2d56caab5e2a259c
>   https://android.googlesource.com/platform/art/+/51e5386
>   https://android.googlesource.com/platform/art/+/f94b781
> 
> If this behavior was re-introduced, it's likely to cause hard-to-reproduce
> problems, and I suspect Android based distributions would tend to disable this
> feature either globally, or for applications which make a large number of mmap()
> calls.

Yeah and this is the issue I want to see if we can overcome. I see the biggest benefit
being on libraries loaded by dl. Perhaps a random flag and modify to linkers. Im just
spit balling here and collecting the feedback, like this. Thanks for the detail, that
helps a lot.

> 
> -- Nick
> 
> 
> 
> On Tue, Jul 26, 2016 at 11:22 AM,  <william.c.roberts@intel.com> wrote:
> > From: William Roberts <william.c.roberts@intel.com>
> >
> > This patch introduces the ability randomize mmap locations where the
> > address is not requested, for instance when ld is allocating pages for
> > shared libraries. It chooses to randomize based on the current
> > personality for ASLR.
> >
> > Currently, allocations are done sequentially within unmapped address
> > space gaps. This may happen top down or bottom up depending on scheme.
> >
> > For instance these mmap calls produce contiguous mappings:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40026000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40027000
> >
> > Note no gap between.
> >
> > After patches:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x400b4000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40055000
> >
> > Note gap between.
> >
> > Using the test program mentioned here, that allocates fixed sized
> > blocks till exhaustion:
> > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> > no difference was noticed in the number of allocations. Most varied
> > from run to run, but were always within a few allocations of one
> > another between patched and un-patched runs.
> >
> > Performance Measurements:
> > Using strace with -T option and filtering for mmap on the program ls
> > shows a slowdown of approximate 3.7%
> >
> > Signed-off-by: William Roberts <william.c.roberts@intel.com>
> > ---
> >  mm/mmap.c | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index de2c176..7891272 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -43,6 +43,7 @@
> >  #include <linux/userfaultfd_k.h>
> >  #include <linux/moduleparam.h>
> >  #include <linux/pkeys.h>
> > +#include <linux/random.h>
> >
> >  #include <asm/uaccess.h>
> >  #include <asm/cacheflush.h>
> > @@ -1582,6 +1583,24 @@ unacct_error:
> >         return error;
> >  }
> >
> > +/*
> > + * Generate a random address within a range. This differs from
> > +randomize_addr() by randomizing
> > + * on len sized chunks. This helps prevent fragmentation of the virtual
> memory map.
> > + */
> > +static unsigned long randomize_mmap(unsigned long start, unsigned
> > +long end, unsigned long len) {
> > +       unsigned long slots;
> > +
> > +       if ((current->personality & ADDR_NO_RANDOMIZE) ||
> !randomize_va_space)
> > +               return 0;
> > +
> > +       slots = (end - start)/len;
> > +       if (!slots)
> > +               return 0;
> > +
> > +       return PAGE_ALIGN(start + ((get_random_long() % slots) *
> > +len)); }
> > +
> >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)  {
> >         /*
> > @@ -1676,6 +1695,8 @@ found:
> >         if (gap_start < info->low_limit)
> >                 gap_start = info->low_limit;
> >
> > +       gap_start = randomize_mmap(gap_start, gap_end, length) ? :
> > + gap_start;
> > +
> >         /* Adjust gap address to the desired alignment */
> >         gap_start += (info->align_offset - gap_start) &
> > info->align_mask;
> >
> > @@ -1775,6 +1796,9 @@ found:
> >  found_highest:
> >         /* Compute highest gap address at the desired alignment */
> >         gap_end -= info->length;
> > +
> > +       gap_end = randomize_mmap(gap_start, gap_end, length) ? :
> > + gap_end;
> > +
> >         gap_end -= (gap_end - info->align_offset) & info->align_mask;
> >
> >         VM_BUG_ON(gap_end < info->low_limit);
> > --
> > 1.9.1
> >
> 
> 
> 
> --
> Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 21:02       ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 21:02 UTC (permalink / raw)
  To: Nick Kralevich
  Cc: jason, linux-mm, lkml, kernel-hardening, Andrew Morton,
	Kees Cook, Greg KH, Jeffrey Vander Stoep, salyzyn,
	Daniel Cashman



> -----Original Message-----
> From: Nick Kralevich [mailto:nnk@google.com]
> Sent: Tuesday, July 26, 2016 1:41 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: jason@lakedaemon.net; linux-mm@vger.kernel.org; lkml <linux-
> kernel@vger.kernel.org>; kernel-hardening@lists.openwall.com; Andrew
> Morton <akpm@linux-foundation.org>; Kees Cook <keescook@chromium.org>;
> Greg KH <gregkh@linuxfoundation.org>; Jeffrey Vander Stoep
> <jeffv@google.com>; salyzyn@android.com; Daniel Cashman
> <dcashman@android.com>
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> My apologies in advance if I misunderstand the purposes of this patch.
> 
> IIUC, this patch adds a random gap between various mmap() mappings, with the
> goal of ensuring that both the mmap base address and gaps between pages are
> randomized.
> 
> If that's the goal, please note that this behavior has caused significant
> performance problems to Android in the past. Specifically, random gaps between
> mmap()ed regions causes memory space fragmentation. After a program runs for
> a long time, the ability to find large contiguous blocks of memory becomes
> impossible, and mmap()s fail due to lack of a large enough address space.

Yes and fragmentation is definitely a problem here. Especially when the mmaps()
are not a consistent length for program life.

> 
> This isn't just a theoretical concern. Android actually hit this on kernels prior to
> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7dbaa46
> 6780a754154531b44c2086f6618cee3a8
> . Before that patch, the gaps between mmap()ed pages were randomized.
> See the discussion at:
> 
>   http://lists.infradead.org/pipermail/linux-arm-kernel/2011-
> November/073082.html
>   http://marc.info/?t=132070957400005&r=1&w=2
> 
> We ended up having to work around this problem in the following commits:
> 
> 
> https://android.googlesource.com/platform/dalvik/+/311886c6c6fcd3b531531f59
> 2d56caab5e2a259c
>   https://android.googlesource.com/platform/art/+/51e5386
>   https://android.googlesource.com/platform/art/+/f94b781
> 
> If this behavior was re-introduced, it's likely to cause hard-to-reproduce
> problems, and I suspect Android based distributions would tend to disable this
> feature either globally, or for applications which make a large number of mmap()
> calls.

Yeah and this is the issue I want to see if we can overcome. I see the biggest benefit
being on libraries loaded by dl. Perhaps a random flag and modify to linkers. Im just
spit balling here and collecting the feedback, like this. Thanks for the detail, that
helps a lot.

> 
> -- Nick
> 
> 
> 
> On Tue, Jul 26, 2016 at 11:22 AM,  <william.c.roberts@intel.com> wrote:
> > From: William Roberts <william.c.roberts@intel.com>
> >
> > This patch introduces the ability randomize mmap locations where the
> > address is not requested, for instance when ld is allocating pages for
> > shared libraries. It chooses to randomize based on the current
> > personality for ASLR.
> >
> > Currently, allocations are done sequentially within unmapped address
> > space gaps. This may happen top down or bottom up depending on scheme.
> >
> > For instance these mmap calls produce contiguous mappings:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40026000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40027000
> >
> > Note no gap between.
> >
> > After patches:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x400b4000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40055000
> >
> > Note gap between.
> >
> > Using the test program mentioned here, that allocates fixed sized
> > blocks till exhaustion:
> > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> > no difference was noticed in the number of allocations. Most varied
> > from run to run, but were always within a few allocations of one
> > another between patched and un-patched runs.
> >
> > Performance Measurements:
> > Using strace with -T option and filtering for mmap on the program ls
> > shows a slowdown of approximate 3.7%
> >
> > Signed-off-by: William Roberts <william.c.roberts@intel.com>
> > ---
> >  mm/mmap.c | 24 ++++++++++++++++++++++++
> >  1 file changed, 24 insertions(+)
> >
> > diff --git a/mm/mmap.c b/mm/mmap.c
> > index de2c176..7891272 100644
> > --- a/mm/mmap.c
> > +++ b/mm/mmap.c
> > @@ -43,6 +43,7 @@
> >  #include <linux/userfaultfd_k.h>
> >  #include <linux/moduleparam.h>
> >  #include <linux/pkeys.h>
> > +#include <linux/random.h>
> >
> >  #include <asm/uaccess.h>
> >  #include <asm/cacheflush.h>
> > @@ -1582,6 +1583,24 @@ unacct_error:
> >         return error;
> >  }
> >
> > +/*
> > + * Generate a random address within a range. This differs from
> > +randomize_addr() by randomizing
> > + * on len sized chunks. This helps prevent fragmentation of the virtual
> memory map.
> > + */
> > +static unsigned long randomize_mmap(unsigned long start, unsigned
> > +long end, unsigned long len) {
> > +       unsigned long slots;
> > +
> > +       if ((current->personality & ADDR_NO_RANDOMIZE) ||
> !randomize_va_space)
> > +               return 0;
> > +
> > +       slots = (end - start)/len;
> > +       if (!slots)
> > +               return 0;
> > +
> > +       return PAGE_ALIGN(start + ((get_random_long() % slots) *
> > +len)); }
> > +
> >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)  {
> >         /*
> > @@ -1676,6 +1695,8 @@ found:
> >         if (gap_start < info->low_limit)
> >                 gap_start = info->low_limit;
> >
> > +       gap_start = randomize_mmap(gap_start, gap_end, length) ? :
> > + gap_start;
> > +
> >         /* Adjust gap address to the desired alignment */
> >         gap_start += (info->align_offset - gap_start) &
> > info->align_mask;
> >
> > @@ -1775,6 +1796,9 @@ found:
> >  found_highest:
> >         /* Compute highest gap address at the desired alignment */
> >         gap_end -= info->length;
> > +
> > +       gap_end = randomize_mmap(gap_start, gap_end, length) ? :
> > + gap_end;
> > +
> >         gap_end -= (gap_end - info->align_offset) & info->align_mask;
> >
> >         VM_BUG_ON(gap_end < info->low_limit);
> > --
> > 1.9.1
> >
> 
> 
> 
> --
> Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 20:59         ` Jason Cooper
  (?)
@ 2016-07-26 21:06           ` Roberts, William C
  -1 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 21:06 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman



> -----Original Message-----
> From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> Behalf Of Jason Cooper
> Sent: Tuesday, July 26, 2016 2:00 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: linux-mm@kvack.org; linux-kernel@vger.kernel.org; kernel-
> hardening@lists.openwall.com; akpm@linux-foundation.org;
> keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> jeffv@google.com; salyzyn@android.com; dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> Hi William,
> 
> On Tue, Jul 26, 2016 at 08:13:23PM +0000, Roberts, William C wrote:
> > > > From: Jason Cooper [mailto:jason@lakedaemon.net] On Tue, Jul 26,
> > > > 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > > Performance Measurements:
> > > > > Using strace with -T option and filtering for mmap on the
> > > > > program ls shows a slowdown of approximate 3.7%
> > > >
> > > > I think it would be helpful to show the effect on the resulting object code.
> > >
> > > Do you mean the maps of the process? I have some captures for
> > > whoopsie on my Ubuntu system I can share.
> 
> No, I mean changes to mm/mmap.o.

Sure I can post the objdump of that, do you just want a diff of old vs new?

> 
> > > One thing I didn't make clear in my commit message is why this is
> > > good. Right now, if you know An address within in a process, you
> > > know all offsets done with mmap(). For instance, an offset To libX
> > > can yield libY by adding/subtracting an offset. This is meant to
> > > make rops a bit harder, or In general any mapping offset mmore difficult to
> find/guess.
> 
> Are you able to quantify how many bits of entropy you're imposing on the
> attacker?  Is this a chair in the hallway or a significant increase in the chances of
> crashing the program before finding the desired address?

I'd likely need to take a small sample of programs and examine them, especially considering
That as gaps are harder to find, it forces the randomization down and randomization can
Be directly altered with length on mmap(), versus randomize_addr() which didn't have this
restriction but OOM'd do to fragmented easier.

> 
> thx,
> 
> Jason.
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
> majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 21:06           ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 21:06 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman



> -----Original Message-----
> From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> Behalf Of Jason Cooper
> Sent: Tuesday, July 26, 2016 2:00 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: linux-mm@kvack.org; linux-kernel@vger.kernel.org; kernel-
> hardening@lists.openwall.com; akpm@linux-foundation.org;
> keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> jeffv@google.com; salyzyn@android.com; dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> Hi William,
> 
> On Tue, Jul 26, 2016 at 08:13:23PM +0000, Roberts, William C wrote:
> > > > From: Jason Cooper [mailto:jason@lakedaemon.net] On Tue, Jul 26,
> > > > 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > > Performance Measurements:
> > > > > Using strace with -T option and filtering for mmap on the
> > > > > program ls shows a slowdown of approximate 3.7%
> > > >
> > > > I think it would be helpful to show the effect on the resulting object code.
> > >
> > > Do you mean the maps of the process? I have some captures for
> > > whoopsie on my Ubuntu system I can share.
> 
> No, I mean changes to mm/mmap.o.

Sure I can post the objdump of that, do you just want a diff of old vs new?

> 
> > > One thing I didn't make clear in my commit message is why this is
> > > good. Right now, if you know An address within in a process, you
> > > know all offsets done with mmap(). For instance, an offset To libX
> > > can yield libY by adding/subtracting an offset. This is meant to
> > > make rops a bit harder, or In general any mapping offset mmore difficult to
> find/guess.
> 
> Are you able to quantify how many bits of entropy you're imposing on the
> attacker?  Is this a chair in the hallway or a significant increase in the chances of
> crashing the program before finding the desired address?

I'd likely need to take a small sample of programs and examine them, especially considering
That as gaps are harder to find, it forces the randomization down and randomization can
Be directly altered with length on mmap(), versus randomize_addr() which didn't have this
restriction but OOM'd do to fragmented easier.

> 
> thx,
> 
> Jason.
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
> majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 21:06           ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 21:06 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman



> -----Original Message-----
> From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> Behalf Of Jason Cooper
> Sent: Tuesday, July 26, 2016 2:00 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: linux-mm@kvack.org; linux-kernel@vger.kernel.org; kernel-
> hardening@lists.openwall.com; akpm@linux-foundation.org;
> keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> jeffv@google.com; salyzyn@android.com; dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> Hi William,
> 
> On Tue, Jul 26, 2016 at 08:13:23PM +0000, Roberts, William C wrote:
> > > > From: Jason Cooper [mailto:jason@lakedaemon.net] On Tue, Jul 26,
> > > > 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > > Performance Measurements:
> > > > > Using strace with -T option and filtering for mmap on the
> > > > > program ls shows a slowdown of approximate 3.7%
> > > >
> > > > I think it would be helpful to show the effect on the resulting object code.
> > >
> > > Do you mean the maps of the process? I have some captures for
> > > whoopsie on my Ubuntu system I can share.
> 
> No, I mean changes to mm/mmap.o.

Sure I can post the objdump of that, do you just want a diff of old vs new?

> 
> > > One thing I didn't make clear in my commit message is why this is
> > > good. Right now, if you know An address within in a process, you
> > > know all offsets done with mmap(). For instance, an offset To libX
> > > can yield libY by adding/subtracting an offset. This is meant to
> > > make rops a bit harder, or In general any mapping offset mmore difficult to
> find/guess.
> 
> Are you able to quantify how many bits of entropy you're imposing on the
> attacker?  Is this a chair in the hallway or a significant increase in the chances of
> crashing the program before finding the desired address?

I'd likely need to take a small sample of programs and examine them, especially considering
That as gaps are harder to find, it forces the randomization down and randomization can
Be directly altered with length on mmap(), versus randomize_addr() which didn't have this
restriction but OOM'd do to fragmented easier.

> 
> thx,
> 
> Jason.
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
> majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 21:02       ` [kernel-hardening] " Roberts, William C
  (?)
@ 2016-07-26 21:11         ` Nick Kralevich
  -1 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-07-26 21:11 UTC (permalink / raw)
  To: Roberts, William C
  Cc: jason, lkml, kernel-hardening, Andrew Morton, Kees Cook, Greg KH,
	Jeffrey Vander Stoep, salyzyn, Daniel Cashman, linux-mm

On Tue, Jul 26, 2016 at 2:02 PM, Roberts, William C
<william.c.roberts@intel.com> wrote:
>
>
>> -----Original Message-----
>> From: Nick Kralevich [mailto:nnk@google.com]
>> Sent: Tuesday, July 26, 2016 1:41 PM
>> To: Roberts, William C <william.c.roberts@intel.com>
>> Cc: jason@lakedaemon.net; linux-mm@vger.kernel.org; lkml <linux-
>> kernel@vger.kernel.org>; kernel-hardening@lists.openwall.com; Andrew
>> Morton <akpm@linux-foundation.org>; Kees Cook <keescook@chromium.org>;
>> Greg KH <gregkh@linuxfoundation.org>; Jeffrey Vander Stoep
>> <jeffv@google.com>; salyzyn@android.com; Daniel Cashman
>> <dcashman@android.com>
>> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
>>
>> My apologies in advance if I misunderstand the purposes of this patch.
>>
>> IIUC, this patch adds a random gap between various mmap() mappings, with the
>> goal of ensuring that both the mmap base address and gaps between pages are
>> randomized.
>>
>> If that's the goal, please note that this behavior has caused significant
>> performance problems to Android in the past. Specifically, random gaps between
>> mmap()ed regions causes memory space fragmentation. After a program runs for
>> a long time, the ability to find large contiguous blocks of memory becomes
>> impossible, and mmap()s fail due to lack of a large enough address space.
>
> Yes and fragmentation is definitely a problem here. Especially when the mmaps()
> are not a consistent length for program life.
>
>>
>> This isn't just a theoretical concern. Android actually hit this on kernels prior to
>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7dbaa46
>> 6780a754154531b44c2086f6618cee3a8
>> . Before that patch, the gaps between mmap()ed pages were randomized.
>> See the discussion at:
>>
>>   http://lists.infradead.org/pipermail/linux-arm-kernel/2011-
>> November/073082.html
>>   http://marc.info/?t=132070957400005&r=1&w=2
>>
>> We ended up having to work around this problem in the following commits:
>>
>>
>> https://android.googlesource.com/platform/dalvik/+/311886c6c6fcd3b531531f59
>> 2d56caab5e2a259c
>>   https://android.googlesource.com/platform/art/+/51e5386
>>   https://android.googlesource.com/platform/art/+/f94b781
>>
>> If this behavior was re-introduced, it's likely to cause hard-to-reproduce
>> problems, and I suspect Android based distributions would tend to disable this
>> feature either globally, or for applications which make a large number of mmap()
>> calls.
>
> Yeah and this is the issue I want to see if we can overcome. I see the biggest benefit
> being on libraries loaded by dl. Perhaps a random flag and modify to linkers. Im just
> spit balling here and collecting the feedback, like this. Thanks for the detail, that
> helps a lot.

Android N introduced library load order randomization, which partially
helps with this.

https://android-review.googlesource.com/178130

There's also https://android-review.googlesource.com/248499 which adds
additional gaps for shared libraries.


>
>>
>> -- Nick
>>
>>
>>
>> On Tue, Jul 26, 2016 at 11:22 AM,  <william.c.roberts@intel.com> wrote:
>> > From: William Roberts <william.c.roberts@intel.com>
>> >
>> > This patch introduces the ability randomize mmap locations where the
>> > address is not requested, for instance when ld is allocating pages for
>> > shared libraries. It chooses to randomize based on the current
>> > personality for ASLR.
>> >
>> > Currently, allocations are done sequentially within unmapped address
>> > space gaps. This may happen top down or bottom up depending on scheme.
>> >
>> > For instance these mmap calls produce contiguous mappings:
>> > int size = getpagesize();
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x40026000
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x40027000
>> >
>> > Note no gap between.
>> >
>> > After patches:
>> > int size = getpagesize();
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x400b4000
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x40055000
>> >
>> > Note gap between.
>> >
>> > Using the test program mentioned here, that allocates fixed sized
>> > blocks till exhaustion:
>> > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
>> > no difference was noticed in the number of allocations. Most varied
>> > from run to run, but were always within a few allocations of one
>> > another between patched and un-patched runs.
>> >
>> > Performance Measurements:
>> > Using strace with -T option and filtering for mmap on the program ls
>> > shows a slowdown of approximate 3.7%
>> >
>> > Signed-off-by: William Roberts <william.c.roberts@intel.com>
>> > ---
>> >  mm/mmap.c | 24 ++++++++++++++++++++++++
>> >  1 file changed, 24 insertions(+)
>> >
>> > diff --git a/mm/mmap.c b/mm/mmap.c
>> > index de2c176..7891272 100644
>> > --- a/mm/mmap.c
>> > +++ b/mm/mmap.c
>> > @@ -43,6 +43,7 @@
>> >  #include <linux/userfaultfd_k.h>
>> >  #include <linux/moduleparam.h>
>> >  #include <linux/pkeys.h>
>> > +#include <linux/random.h>
>> >
>> >  #include <asm/uaccess.h>
>> >  #include <asm/cacheflush.h>
>> > @@ -1582,6 +1583,24 @@ unacct_error:
>> >         return error;
>> >  }
>> >
>> > +/*
>> > + * Generate a random address within a range. This differs from
>> > +randomize_addr() by randomizing
>> > + * on len sized chunks. This helps prevent fragmentation of the virtual
>> memory map.
>> > + */
>> > +static unsigned long randomize_mmap(unsigned long start, unsigned
>> > +long end, unsigned long len) {
>> > +       unsigned long slots;
>> > +
>> > +       if ((current->personality & ADDR_NO_RANDOMIZE) ||
>> !randomize_va_space)
>> > +               return 0;
>> > +
>> > +       slots = (end - start)/len;
>> > +       if (!slots)
>> > +               return 0;
>> > +
>> > +       return PAGE_ALIGN(start + ((get_random_long() % slots) *
>> > +len)); }
>> > +
>> >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)  {
>> >         /*
>> > @@ -1676,6 +1695,8 @@ found:
>> >         if (gap_start < info->low_limit)
>> >                 gap_start = info->low_limit;
>> >
>> > +       gap_start = randomize_mmap(gap_start, gap_end, length) ? :
>> > + gap_start;
>> > +
>> >         /* Adjust gap address to the desired alignment */
>> >         gap_start += (info->align_offset - gap_start) &
>> > info->align_mask;
>> >
>> > @@ -1775,6 +1796,9 @@ found:
>> >  found_highest:
>> >         /* Compute highest gap address at the desired alignment */
>> >         gap_end -= info->length;
>> > +
>> > +       gap_end = randomize_mmap(gap_start, gap_end, length) ? :
>> > + gap_end;
>> > +
>> >         gap_end -= (gap_end - info->align_offset) & info->align_mask;
>> >
>> >         VM_BUG_ON(gap_end < info->low_limit);
>> > --
>> > 1.9.1
>> >
>>
>>
>>
>> --
>> Nick Kralevich | Android Security | nnk@google.com | 650.214.4037



-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 21:11         ` Nick Kralevich
  0 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-07-26 21:11 UTC (permalink / raw)
  To: Roberts, William C
  Cc: jason, lkml, kernel-hardening, Andrew Morton, Kees Cook, Greg KH,
	Jeffrey Vander Stoep, salyzyn, Daniel Cashman, linux-mm

On Tue, Jul 26, 2016 at 2:02 PM, Roberts, William C
<william.c.roberts@intel.com> wrote:
>
>
>> -----Original Message-----
>> From: Nick Kralevich [mailto:nnk@google.com]
>> Sent: Tuesday, July 26, 2016 1:41 PM
>> To: Roberts, William C <william.c.roberts@intel.com>
>> Cc: jason@lakedaemon.net; linux-mm@vger.kernel.org; lkml <linux-
>> kernel@vger.kernel.org>; kernel-hardening@lists.openwall.com; Andrew
>> Morton <akpm@linux-foundation.org>; Kees Cook <keescook@chromium.org>;
>> Greg KH <gregkh@linuxfoundation.org>; Jeffrey Vander Stoep
>> <jeffv@google.com>; salyzyn@android.com; Daniel Cashman
>> <dcashman@android.com>
>> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
>>
>> My apologies in advance if I misunderstand the purposes of this patch.
>>
>> IIUC, this patch adds a random gap between various mmap() mappings, with the
>> goal of ensuring that both the mmap base address and gaps between pages are
>> randomized.
>>
>> If that's the goal, please note that this behavior has caused significant
>> performance problems to Android in the past. Specifically, random gaps between
>> mmap()ed regions causes memory space fragmentation. After a program runs for
>> a long time, the ability to find large contiguous blocks of memory becomes
>> impossible, and mmap()s fail due to lack of a large enough address space.
>
> Yes and fragmentation is definitely a problem here. Especially when the mmaps()
> are not a consistent length for program life.
>
>>
>> This isn't just a theoretical concern. Android actually hit this on kernels prior to
>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7dbaa46
>> 6780a754154531b44c2086f6618cee3a8
>> . Before that patch, the gaps between mmap()ed pages were randomized.
>> See the discussion at:
>>
>>   http://lists.infradead.org/pipermail/linux-arm-kernel/2011-
>> November/073082.html
>>   http://marc.info/?t=132070957400005&r=1&w=2
>>
>> We ended up having to work around this problem in the following commits:
>>
>>
>> https://android.googlesource.com/platform/dalvik/+/311886c6c6fcd3b531531f59
>> 2d56caab5e2a259c
>>   https://android.googlesource.com/platform/art/+/51e5386
>>   https://android.googlesource.com/platform/art/+/f94b781
>>
>> If this behavior was re-introduced, it's likely to cause hard-to-reproduce
>> problems, and I suspect Android based distributions would tend to disable this
>> feature either globally, or for applications which make a large number of mmap()
>> calls.
>
> Yeah and this is the issue I want to see if we can overcome. I see the biggest benefit
> being on libraries loaded by dl. Perhaps a random flag and modify to linkers. Im just
> spit balling here and collecting the feedback, like this. Thanks for the detail, that
> helps a lot.

Android N introduced library load order randomization, which partially
helps with this.

https://android-review.googlesource.com/178130

There's also https://android-review.googlesource.com/248499 which adds
additional gaps for shared libraries.


>
>>
>> -- Nick
>>
>>
>>
>> On Tue, Jul 26, 2016 at 11:22 AM,  <william.c.roberts@intel.com> wrote:
>> > From: William Roberts <william.c.roberts@intel.com>
>> >
>> > This patch introduces the ability randomize mmap locations where the
>> > address is not requested, for instance when ld is allocating pages for
>> > shared libraries. It chooses to randomize based on the current
>> > personality for ASLR.
>> >
>> > Currently, allocations are done sequentially within unmapped address
>> > space gaps. This may happen top down or bottom up depending on scheme.
>> >
>> > For instance these mmap calls produce contiguous mappings:
>> > int size = getpagesize();
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x40026000
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x40027000
>> >
>> > Note no gap between.
>> >
>> > After patches:
>> > int size = getpagesize();
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x400b4000
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x40055000
>> >
>> > Note gap between.
>> >
>> > Using the test program mentioned here, that allocates fixed sized
>> > blocks till exhaustion:
>> > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
>> > no difference was noticed in the number of allocations. Most varied
>> > from run to run, but were always within a few allocations of one
>> > another between patched and un-patched runs.
>> >
>> > Performance Measurements:
>> > Using strace with -T option and filtering for mmap on the program ls
>> > shows a slowdown of approximate 3.7%
>> >
>> > Signed-off-by: William Roberts <william.c.roberts@intel.com>
>> > ---
>> >  mm/mmap.c | 24 ++++++++++++++++++++++++
>> >  1 file changed, 24 insertions(+)
>> >
>> > diff --git a/mm/mmap.c b/mm/mmap.c
>> > index de2c176..7891272 100644
>> > --- a/mm/mmap.c
>> > +++ b/mm/mmap.c
>> > @@ -43,6 +43,7 @@
>> >  #include <linux/userfaultfd_k.h>
>> >  #include <linux/moduleparam.h>
>> >  #include <linux/pkeys.h>
>> > +#include <linux/random.h>
>> >
>> >  #include <asm/uaccess.h>
>> >  #include <asm/cacheflush.h>
>> > @@ -1582,6 +1583,24 @@ unacct_error:
>> >         return error;
>> >  }
>> >
>> > +/*
>> > + * Generate a random address within a range. This differs from
>> > +randomize_addr() by randomizing
>> > + * on len sized chunks. This helps prevent fragmentation of the virtual
>> memory map.
>> > + */
>> > +static unsigned long randomize_mmap(unsigned long start, unsigned
>> > +long end, unsigned long len) {
>> > +       unsigned long slots;
>> > +
>> > +       if ((current->personality & ADDR_NO_RANDOMIZE) ||
>> !randomize_va_space)
>> > +               return 0;
>> > +
>> > +       slots = (end - start)/len;
>> > +       if (!slots)
>> > +               return 0;
>> > +
>> > +       return PAGE_ALIGN(start + ((get_random_long() % slots) *
>> > +len)); }
>> > +
>> >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)  {
>> >         /*
>> > @@ -1676,6 +1695,8 @@ found:
>> >         if (gap_start < info->low_limit)
>> >                 gap_start = info->low_limit;
>> >
>> > +       gap_start = randomize_mmap(gap_start, gap_end, length) ? :
>> > + gap_start;
>> > +
>> >         /* Adjust gap address to the desired alignment */
>> >         gap_start += (info->align_offset - gap_start) &
>> > info->align_mask;
>> >
>> > @@ -1775,6 +1796,9 @@ found:
>> >  found_highest:
>> >         /* Compute highest gap address at the desired alignment */
>> >         gap_end -= info->length;
>> > +
>> > +       gap_end = randomize_mmap(gap_start, gap_end, length) ? :
>> > + gap_end;
>> > +
>> >         gap_end -= (gap_end - info->align_offset) & info->align_mask;
>> >
>> >         VM_BUG_ON(gap_end < info->low_limit);
>> > --
>> > 1.9.1
>> >
>>
>>
>>
>> --
>> Nick Kralevich | Android Security | nnk@google.com | 650.214.4037



-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 21:11         ` Nick Kralevich
  0 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-07-26 21:11 UTC (permalink / raw)
  To: Roberts, William C
  Cc: jason, lkml, kernel-hardening, Andrew Morton, Kees Cook, Greg KH,
	Jeffrey Vander Stoep, salyzyn, Daniel Cashman, linux-mm

On Tue, Jul 26, 2016 at 2:02 PM, Roberts, William C
<william.c.roberts@intel.com> wrote:
>
>
>> -----Original Message-----
>> From: Nick Kralevich [mailto:nnk@google.com]
>> Sent: Tuesday, July 26, 2016 1:41 PM
>> To: Roberts, William C <william.c.roberts@intel.com>
>> Cc: jason@lakedaemon.net; linux-mm@vger.kernel.org; lkml <linux-
>> kernel@vger.kernel.org>; kernel-hardening@lists.openwall.com; Andrew
>> Morton <akpm@linux-foundation.org>; Kees Cook <keescook@chromium.org>;
>> Greg KH <gregkh@linuxfoundation.org>; Jeffrey Vander Stoep
>> <jeffv@google.com>; salyzyn@android.com; Daniel Cashman
>> <dcashman@android.com>
>> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
>>
>> My apologies in advance if I misunderstand the purposes of this patch.
>>
>> IIUC, this patch adds a random gap between various mmap() mappings, with the
>> goal of ensuring that both the mmap base address and gaps between pages are
>> randomized.
>>
>> If that's the goal, please note that this behavior has caused significant
>> performance problems to Android in the past. Specifically, random gaps between
>> mmap()ed regions causes memory space fragmentation. After a program runs for
>> a long time, the ability to find large contiguous blocks of memory becomes
>> impossible, and mmap()s fail due to lack of a large enough address space.
>
> Yes and fragmentation is definitely a problem here. Especially when the mmaps()
> are not a consistent length for program life.
>
>>
>> This isn't just a theoretical concern. Android actually hit this on kernels prior to
>> http://git.kernel.org/cgit/linux/kernel/git/torvalds/linux.git/commit/?id=7dbaa46
>> 6780a754154531b44c2086f6618cee3a8
>> . Before that patch, the gaps between mmap()ed pages were randomized.
>> See the discussion at:
>>
>>   http://lists.infradead.org/pipermail/linux-arm-kernel/2011-
>> November/073082.html
>>   http://marc.info/?t=132070957400005&r=1&w=2
>>
>> We ended up having to work around this problem in the following commits:
>>
>>
>> https://android.googlesource.com/platform/dalvik/+/311886c6c6fcd3b531531f59
>> 2d56caab5e2a259c
>>   https://android.googlesource.com/platform/art/+/51e5386
>>   https://android.googlesource.com/platform/art/+/f94b781
>>
>> If this behavior was re-introduced, it's likely to cause hard-to-reproduce
>> problems, and I suspect Android based distributions would tend to disable this
>> feature either globally, or for applications which make a large number of mmap()
>> calls.
>
> Yeah and this is the issue I want to see if we can overcome. I see the biggest benefit
> being on libraries loaded by dl. Perhaps a random flag and modify to linkers. Im just
> spit balling here and collecting the feedback, like this. Thanks for the detail, that
> helps a lot.

Android N introduced library load order randomization, which partially
helps with this.

https://android-review.googlesource.com/178130

There's also https://android-review.googlesource.com/248499 which adds
additional gaps for shared libraries.


>
>>
>> -- Nick
>>
>>
>>
>> On Tue, Jul 26, 2016 at 11:22 AM,  <william.c.roberts@intel.com> wrote:
>> > From: William Roberts <william.c.roberts@intel.com>
>> >
>> > This patch introduces the ability randomize mmap locations where the
>> > address is not requested, for instance when ld is allocating pages for
>> > shared libraries. It chooses to randomize based on the current
>> > personality for ASLR.
>> >
>> > Currently, allocations are done sequentially within unmapped address
>> > space gaps. This may happen top down or bottom up depending on scheme.
>> >
>> > For instance these mmap calls produce contiguous mappings:
>> > int size = getpagesize();
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x40026000
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x40027000
>> >
>> > Note no gap between.
>> >
>> > After patches:
>> > int size = getpagesize();
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x400b4000
>> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
>> 0x40055000
>> >
>> > Note gap between.
>> >
>> > Using the test program mentioned here, that allocates fixed sized
>> > blocks till exhaustion:
>> > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
>> > no difference was noticed in the number of allocations. Most varied
>> > from run to run, but were always within a few allocations of one
>> > another between patched and un-patched runs.
>> >
>> > Performance Measurements:
>> > Using strace with -T option and filtering for mmap on the program ls
>> > shows a slowdown of approximate 3.7%
>> >
>> > Signed-off-by: William Roberts <william.c.roberts@intel.com>
>> > ---
>> >  mm/mmap.c | 24 ++++++++++++++++++++++++
>> >  1 file changed, 24 insertions(+)
>> >
>> > diff --git a/mm/mmap.c b/mm/mmap.c
>> > index de2c176..7891272 100644
>> > --- a/mm/mmap.c
>> > +++ b/mm/mmap.c
>> > @@ -43,6 +43,7 @@
>> >  #include <linux/userfaultfd_k.h>
>> >  #include <linux/moduleparam.h>
>> >  #include <linux/pkeys.h>
>> > +#include <linux/random.h>
>> >
>> >  #include <asm/uaccess.h>
>> >  #include <asm/cacheflush.h>
>> > @@ -1582,6 +1583,24 @@ unacct_error:
>> >         return error;
>> >  }
>> >
>> > +/*
>> > + * Generate a random address within a range. This differs from
>> > +randomize_addr() by randomizing
>> > + * on len sized chunks. This helps prevent fragmentation of the virtual
>> memory map.
>> > + */
>> > +static unsigned long randomize_mmap(unsigned long start, unsigned
>> > +long end, unsigned long len) {
>> > +       unsigned long slots;
>> > +
>> > +       if ((current->personality & ADDR_NO_RANDOMIZE) ||
>> !randomize_va_space)
>> > +               return 0;
>> > +
>> > +       slots = (end - start)/len;
>> > +       if (!slots)
>> > +               return 0;
>> > +
>> > +       return PAGE_ALIGN(start + ((get_random_long() % slots) *
>> > +len)); }
>> > +
>> >  unsigned long unmapped_area(struct vm_unmapped_area_info *info)  {
>> >         /*
>> > @@ -1676,6 +1695,8 @@ found:
>> >         if (gap_start < info->low_limit)
>> >                 gap_start = info->low_limit;
>> >
>> > +       gap_start = randomize_mmap(gap_start, gap_end, length) ? :
>> > + gap_start;
>> > +
>> >         /* Adjust gap address to the desired alignment */
>> >         gap_start += (info->align_offset - gap_start) &
>> > info->align_mask;
>> >
>> > @@ -1775,6 +1796,9 @@ found:
>> >  found_highest:
>> >         /* Compute highest gap address at the desired alignment */
>> >         gap_end -= info->length;
>> > +
>> > +       gap_end = randomize_mmap(gap_start, gap_end, length) ? :
>> > + gap_end;
>> > +
>> >         gap_end -= (gap_end - info->align_offset) & info->align_mask;
>> >
>> >         VM_BUG_ON(gap_end < info->low_limit);
>> > --
>> > 1.9.1
>> >
>>
>>
>>
>> --
>> Nick Kralevich | Android Security | nnk@google.com | 650.214.4037



-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 21:06           ` Roberts, William C
  (?)
@ 2016-07-26 21:44             ` Jason Cooper
  -1 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-26 21:44 UTC (permalink / raw)
  To: Roberts, William C
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

On Tue, Jul 26, 2016 at 09:06:30PM +0000, Roberts, William C wrote:
> > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > Behalf Of Jason Cooper
> > On Tue, Jul 26, 2016 at 08:13:23PM +0000, Roberts, William C wrote:
> > > > > From: Jason Cooper [mailto:jason@lakedaemon.net] On Tue, Jul 26,
> > > > > 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > > > Performance Measurements:
> > > > > > Using strace with -T option and filtering for mmap on the
> > > > > > program ls shows a slowdown of approximate 3.7%
> > > > >
> > > > > I think it would be helpful to show the effect on the resulting object code.
> > > >
> > > > Do you mean the maps of the process? I have some captures for
> > > > whoopsie on my Ubuntu system I can share.
> > 
> > No, I mean changes to mm/mmap.o.
> 
> Sure I can post the objdump of that, do you just want a diff of old vs new?

Well, I'm partial to scripts/objdiff, but bloat-o-meter might be more
familiar to most of the folks who you'll be trying to convince to merge
this.

But that's the least of your worries atm. :-/  I was going to dig into
mmap.c to confirm my suspicions, but Nick answered it for me.
Fragmentation caused by this sort of feature is known to have caused
problems in the past.

I would highly recommend studying those prior use cases and answering
those concerns before progressing too much further.  As I've mentioned
elsewhere, you'll need to quantify the increased difficulty to the
attacker that your patch imposes.  Personally, I would assess that first
to see if it's worth the effort at all.

> > > > One thing I didn't make clear in my commit message is why this
> > > > is good. Right now, if you know An address within in a process,
> > > > you know all offsets done with mmap(). For instance, an offset
> > > > To libX can yield libY by adding/subtracting an offset. This is
> > > > meant to make rops a bit harder, or In general any mapping
> > > > offset mmore difficult to
> > find/guess.
> > 
> > Are you able to quantify how many bits of entropy you're imposing on
> > the attacker?  Is this a chair in the hallway or a significant
> > increase in the chances of crashing the program before finding the
> > desired address?
> 
> I'd likely need to take a small sample of programs and examine them,
> especially considering That as gaps are harder to find, it forces the
> randomization down and randomization can Be directly altered with
> length on mmap(), versus randomize_addr() which didn't have this
> restriction but OOM'd do to fragmented easier.

Right, after the Android feedback from Nick, I think you have a lot of
work on your hands.  Not just in design, but also in developing convincing
arguments derived from real use cases.

thx,

Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 21:44             ` Jason Cooper
  0 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-26 21:44 UTC (permalink / raw)
  To: Roberts, William C
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

On Tue, Jul 26, 2016 at 09:06:30PM +0000, Roberts, William C wrote:
> > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > Behalf Of Jason Cooper
> > On Tue, Jul 26, 2016 at 08:13:23PM +0000, Roberts, William C wrote:
> > > > > From: Jason Cooper [mailto:jason@lakedaemon.net] On Tue, Jul 26,
> > > > > 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > > > Performance Measurements:
> > > > > > Using strace with -T option and filtering for mmap on the
> > > > > > program ls shows a slowdown of approximate 3.7%
> > > > >
> > > > > I think it would be helpful to show the effect on the resulting object code.
> > > >
> > > > Do you mean the maps of the process? I have some captures for
> > > > whoopsie on my Ubuntu system I can share.
> > 
> > No, I mean changes to mm/mmap.o.
> 
> Sure I can post the objdump of that, do you just want a diff of old vs new?

Well, I'm partial to scripts/objdiff, but bloat-o-meter might be more
familiar to most of the folks who you'll be trying to convince to merge
this.

But that's the least of your worries atm. :-/  I was going to dig into
mmap.c to confirm my suspicions, but Nick answered it for me.
Fragmentation caused by this sort of feature is known to have caused
problems in the past.

I would highly recommend studying those prior use cases and answering
those concerns before progressing too much further.  As I've mentioned
elsewhere, you'll need to quantify the increased difficulty to the
attacker that your patch imposes.  Personally, I would assess that first
to see if it's worth the effort at all.

> > > > One thing I didn't make clear in my commit message is why this
> > > > is good. Right now, if you know An address within in a process,
> > > > you know all offsets done with mmap(). For instance, an offset
> > > > To libX can yield libY by adding/subtracting an offset. This is
> > > > meant to make rops a bit harder, or In general any mapping
> > > > offset mmore difficult to
> > find/guess.
> > 
> > Are you able to quantify how many bits of entropy you're imposing on
> > the attacker?  Is this a chair in the hallway or a significant
> > increase in the chances of crashing the program before finding the
> > desired address?
> 
> I'd likely need to take a small sample of programs and examine them,
> especially considering That as gaps are harder to find, it forces the
> randomization down and randomization can Be directly altered with
> length on mmap(), versus randomize_addr() which didn't have this
> restriction but OOM'd do to fragmented easier.

Right, after the Android feedback from Nick, I think you have a lot of
work on your hands.  Not just in design, but also in developing convincing
arguments derived from real use cases.

thx,

Jason.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 21:44             ` Jason Cooper
  0 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-26 21:44 UTC (permalink / raw)
  To: Roberts, William C
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

On Tue, Jul 26, 2016 at 09:06:30PM +0000, Roberts, William C wrote:
> > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > Behalf Of Jason Cooper
> > On Tue, Jul 26, 2016 at 08:13:23PM +0000, Roberts, William C wrote:
> > > > > From: Jason Cooper [mailto:jason@lakedaemon.net] On Tue, Jul 26,
> > > > > 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > > > Performance Measurements:
> > > > > > Using strace with -T option and filtering for mmap on the
> > > > > > program ls shows a slowdown of approximate 3.7%
> > > > >
> > > > > I think it would be helpful to show the effect on the resulting object code.
> > > >
> > > > Do you mean the maps of the process? I have some captures for
> > > > whoopsie on my Ubuntu system I can share.
> > 
> > No, I mean changes to mm/mmap.o.
> 
> Sure I can post the objdump of that, do you just want a diff of old vs new?

Well, I'm partial to scripts/objdiff, but bloat-o-meter might be more
familiar to most of the folks who you'll be trying to convince to merge
this.

But that's the least of your worries atm. :-/  I was going to dig into
mmap.c to confirm my suspicions, but Nick answered it for me.
Fragmentation caused by this sort of feature is known to have caused
problems in the past.

I would highly recommend studying those prior use cases and answering
those concerns before progressing too much further.  As I've mentioned
elsewhere, you'll need to quantify the increased difficulty to the
attacker that your patch imposes.  Personally, I would assess that first
to see if it's worth the effort at all.

> > > > One thing I didn't make clear in my commit message is why this
> > > > is good. Right now, if you know An address within in a process,
> > > > you know all offsets done with mmap(). For instance, an offset
> > > > To libX can yield libY by adding/subtracting an offset. This is
> > > > meant to make rops a bit harder, or In general any mapping
> > > > offset mmore difficult to
> > find/guess.
> > 
> > Are you able to quantify how many bits of entropy you're imposing on
> > the attacker?  Is this a chair in the hallway or a significant
> > increase in the chances of crashing the program before finding the
> > desired address?
> 
> I'd likely need to take a small sample of programs and examine them,
> especially considering That as gaps are harder to find, it forces the
> randomization down and randomization can Be directly altered with
> length on mmap(), versus randomize_addr() which didn't have this
> restriction but OOM'd do to fragmented easier.

Right, after the Android feedback from Nick, I think you have a lot of
work on your hands.  Not just in design, but also in developing convincing
arguments derived from real use cases.

thx,

Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 21:44             ` Jason Cooper
  (?)
@ 2016-07-26 23:51               ` Dave Hansen
  -1 siblings, 0 replies; 73+ messages in thread
From: Dave Hansen @ 2016-07-26 23:51 UTC (permalink / raw)
  To: Jason Cooper, Roberts, William C
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

On 07/26/2016 02:44 PM, Jason Cooper wrote:
>> > I'd likely need to take a small sample of programs and examine them,
>> > especially considering That as gaps are harder to find, it forces the
>> > randomization down and randomization can Be directly altered with
>> > length on mmap(), versus randomize_addr() which didn't have this
>> > restriction but OOM'd do to fragmented easier.
> Right, after the Android feedback from Nick, I think you have a lot of
> work on your hands.  Not just in design, but also in developing convincing
> arguments derived from real use cases.

Why not just have the feature be disabled on 32-bit by default?  All of
the Android problems seemed to originate with having a constrained
32-bit address space.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 23:51               ` Dave Hansen
  0 siblings, 0 replies; 73+ messages in thread
From: Dave Hansen @ 2016-07-26 23:51 UTC (permalink / raw)
  To: Jason Cooper, Roberts, William C
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

On 07/26/2016 02:44 PM, Jason Cooper wrote:
>> > I'd likely need to take a small sample of programs and examine them,
>> > especially considering That as gaps are harder to find, it forces the
>> > randomization down and randomization can Be directly altered with
>> > length on mmap(), versus randomize_addr() which didn't have this
>> > restriction but OOM'd do to fragmented easier.
> Right, after the Android feedback from Nick, I think you have a lot of
> work on your hands.  Not just in design, but also in developing convincing
> arguments derived from real use cases.

Why not just have the feature be disabled on 32-bit by default?  All of
the Android problems seemed to originate with having a constrained
32-bit address space.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 23:51               ` Dave Hansen
  0 siblings, 0 replies; 73+ messages in thread
From: Dave Hansen @ 2016-07-26 23:51 UTC (permalink / raw)
  To: Jason Cooper, Roberts, William C
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

On 07/26/2016 02:44 PM, Jason Cooper wrote:
>> > I'd likely need to take a small sample of programs and examine them,
>> > especially considering That as gaps are harder to find, it forces the
>> > randomization down and randomization can Be directly altered with
>> > length on mmap(), versus randomize_addr() which didn't have this
>> > restriction but OOM'd do to fragmented easier.
> Right, after the Android feedback from Nick, I think you have a lot of
> work on your hands.  Not just in design, but also in developing convincing
> arguments derived from real use cases.

Why not just have the feature be disabled on 32-bit by default?  All of
the Android problems seemed to originate with having a constrained
32-bit address space.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 20:59         ` Jason Cooper
  (?)
@ 2016-07-27 16:59           ` Nick Kralevich
  -1 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-07-27 16:59 UTC (permalink / raw)
  To: Jason Cooper
  Cc: Roberts, William C, linux-mm, linux-kernel, kernel-hardening,
	akpm, keescook, gregkh, jeffv, salyzyn, dcashman

On Tue, Jul 26, 2016 at 1:59 PM, Jason Cooper <jason@lakedaemon.net> wrote:
>> > One thing I didn't make clear in my commit message is why this is good. Right
>> > now, if you know An address within in a process, you know all offsets done with
>> > mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
>> > offset. This is meant to make rops a bit harder, or In general any mapping offset
>> > mmore difficult to find/guess.
>
> Are you able to quantify how many bits of entropy you're imposing on the
> attacker?  Is this a chair in the hallway or a significant increase in
> the chances of crashing the program before finding the desired address?

Quantifying the effect of many security changes is extremely
difficult, especially for a probabilistic defense like ASLR. I would
urge us to not place too high of a proof bar on this change.
Channeling Spender / grsecurity team, ASLR gets it's benefit not from
it's high benefit, but from it's low cost of implementation
(https://forums.grsecurity.net/viewtopic.php?f=7&t=3367). This patch
certainly meets the low cost of implementation bar.

In the Project Zero Stagefright post
(http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html),
we see that the linear allocation of memory combined with the low
number of bits in the initial mmap offset resulted in a much more
predictable layout which aided the attacker. The initial random mmap
base range was increased by Daniel Cashman in
d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done nothing to
address page relative attacks.

Inter-mmap randomization will decrease the predictability of later
mmap() allocations, which should help make data structures harder to
find in memory. In addition, this patch will also introduce unmapped
gaps between pages, preventing linear overruns from one mapping to
another another mapping. I am unable to quantify how much this will
improve security, but it should be > 0.

I like Dave Hansen's suggestion that this functionality be limited to
64 bits, where concerns about running out of address space are
essentially nil. I'd be supportive of this change if it was limited to
64 bits.

-- Nick

-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-27 16:59           ` Nick Kralevich
  0 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-07-27 16:59 UTC (permalink / raw)
  To: Jason Cooper
  Cc: Roberts, William C, linux-mm, linux-kernel, kernel-hardening,
	akpm, keescook, gregkh, jeffv, salyzyn, dcashman

On Tue, Jul 26, 2016 at 1:59 PM, Jason Cooper <jason@lakedaemon.net> wrote:
>> > One thing I didn't make clear in my commit message is why this is good. Right
>> > now, if you know An address within in a process, you know all offsets done with
>> > mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
>> > offset. This is meant to make rops a bit harder, or In general any mapping offset
>> > mmore difficult to find/guess.
>
> Are you able to quantify how many bits of entropy you're imposing on the
> attacker?  Is this a chair in the hallway or a significant increase in
> the chances of crashing the program before finding the desired address?

Quantifying the effect of many security changes is extremely
difficult, especially for a probabilistic defense like ASLR. I would
urge us to not place too high of a proof bar on this change.
Channeling Spender / grsecurity team, ASLR gets it's benefit not from
it's high benefit, but from it's low cost of implementation
(https://forums.grsecurity.net/viewtopic.php?f=7&t=3367). This patch
certainly meets the low cost of implementation bar.

In the Project Zero Stagefright post
(http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html),
we see that the linear allocation of memory combined with the low
number of bits in the initial mmap offset resulted in a much more
predictable layout which aided the attacker. The initial random mmap
base range was increased by Daniel Cashman in
d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done nothing to
address page relative attacks.

Inter-mmap randomization will decrease the predictability of later
mmap() allocations, which should help make data structures harder to
find in memory. In addition, this patch will also introduce unmapped
gaps between pages, preventing linear overruns from one mapping to
another another mapping. I am unable to quantify how much this will
improve security, but it should be > 0.

I like Dave Hansen's suggestion that this functionality be limited to
64 bits, where concerns about running out of address space are
essentially nil. I'd be supportive of this change if it was limited to
64 bits.

-- Nick

-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-27 16:59           ` Nick Kralevich
  0 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-07-27 16:59 UTC (permalink / raw)
  To: Jason Cooper
  Cc: Roberts, William C, linux-mm, linux-kernel, kernel-hardening,
	akpm, keescook, gregkh, jeffv, salyzyn, dcashman

On Tue, Jul 26, 2016 at 1:59 PM, Jason Cooper <jason@lakedaemon.net> wrote:
>> > One thing I didn't make clear in my commit message is why this is good. Right
>> > now, if you know An address within in a process, you know all offsets done with
>> > mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
>> > offset. This is meant to make rops a bit harder, or In general any mapping offset
>> > mmore difficult to find/guess.
>
> Are you able to quantify how many bits of entropy you're imposing on the
> attacker?  Is this a chair in the hallway or a significant increase in
> the chances of crashing the program before finding the desired address?

Quantifying the effect of many security changes is extremely
difficult, especially for a probabilistic defense like ASLR. I would
urge us to not place too high of a proof bar on this change.
Channeling Spender / grsecurity team, ASLR gets it's benefit not from
it's high benefit, but from it's low cost of implementation
(https://forums.grsecurity.net/viewtopic.php?f=7&t=3367). This patch
certainly meets the low cost of implementation bar.

In the Project Zero Stagefright post
(http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html),
we see that the linear allocation of memory combined with the low
number of bits in the initial mmap offset resulted in a much more
predictable layout which aided the attacker. The initial random mmap
base range was increased by Daniel Cashman in
d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done nothing to
address page relative attacks.

Inter-mmap randomization will decrease the predictability of later
mmap() allocations, which should help make data structures harder to
find in memory. In addition, this patch will also introduce unmapped
gaps between pages, preventing linear overruns from one mapping to
another another mapping. I am unable to quantify how much this will
improve security, but it should be > 0.

I like Dave Hansen's suggestion that this functionality be limited to
64 bits, where concerns about running out of address space are
essentially nil. I'd be supportive of this change if it was limited to
64 bits.

-- Nick

-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-27 16:59           ` Nick Kralevich
  (?)
@ 2016-07-28 21:07             ` Jason Cooper
  -1 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-28 21:07 UTC (permalink / raw)
  To: Nick Kralevich
  Cc: Roberts, William C, linux-mm, linux-kernel, kernel-hardening,
	akpm, keescook, gregkh, jeffv, salyzyn, dcashman

On Wed, Jul 27, 2016 at 09:59:35AM -0700, Nick Kralevich wrote:
> On Tue, Jul 26, 2016 at 1:59 PM, Jason Cooper <jason@lakedaemon.net> wrote:
> >> > One thing I didn't make clear in my commit message is why this is good. Right
> >> > now, if you know An address within in a process, you know all offsets done with
> >> > mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
> >> > offset. This is meant to make rops a bit harder, or In general any mapping offset
> >> > mmore difficult to find/guess.
> >
> > Are you able to quantify how many bits of entropy you're imposing on the
> > attacker?  Is this a chair in the hallway or a significant increase in
> > the chances of crashing the program before finding the desired address?
> 
> Quantifying the effect of many security changes is extremely
> difficult, especially for a probabilistic defense like ASLR. I would
> urge us to not place too high of a proof bar on this change.
> Channeling Spender / grsecurity team, ASLR gets it's benefit not from
> it's high benefit, but from it's low cost of implementation
> (https://forums.grsecurity.net/viewtopic.php?f=7&t=3367). This patch
> certainly meets the low cost of implementation bar.

Ok, I buy that with the 64bit-only caveat.

> In the Project Zero Stagefright post
> (http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html),
> we see that the linear allocation of memory combined with the low
> number of bits in the initial mmap offset resulted in a much more
> predictable layout which aided the attacker. The initial random mmap
> base range was increased by Daniel Cashman in
> d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done nothing to
> address page relative attacks.
> 
> Inter-mmap randomization will decrease the predictability of later
> mmap() allocations, which should help make data structures harder to
> find in memory. In addition, this patch will also introduce unmapped
> gaps between pages, preventing linear overruns from one mapping to
> another another mapping. I am unable to quantify how much this will
> improve security, but it should be > 0.

One person calls "unmapped gaps between pages" a feature, others call it
a mess. ;-)

> I like Dave Hansen's suggestion that this functionality be limited to
> 64 bits, where concerns about running out of address space are
> essentially nil. I'd be supportive of this change if it was limited to
> 64 bits.

Agreed.

thx,

Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-28 21:07             ` Jason Cooper
  0 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-28 21:07 UTC (permalink / raw)
  To: Nick Kralevich
  Cc: Roberts, William C, linux-mm, linux-kernel, kernel-hardening,
	akpm, keescook, gregkh, jeffv, salyzyn, dcashman

On Wed, Jul 27, 2016 at 09:59:35AM -0700, Nick Kralevich wrote:
> On Tue, Jul 26, 2016 at 1:59 PM, Jason Cooper <jason@lakedaemon.net> wrote:
> >> > One thing I didn't make clear in my commit message is why this is good. Right
> >> > now, if you know An address within in a process, you know all offsets done with
> >> > mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
> >> > offset. This is meant to make rops a bit harder, or In general any mapping offset
> >> > mmore difficult to find/guess.
> >
> > Are you able to quantify how many bits of entropy you're imposing on the
> > attacker?  Is this a chair in the hallway or a significant increase in
> > the chances of crashing the program before finding the desired address?
> 
> Quantifying the effect of many security changes is extremely
> difficult, especially for a probabilistic defense like ASLR. I would
> urge us to not place too high of a proof bar on this change.
> Channeling Spender / grsecurity team, ASLR gets it's benefit not from
> it's high benefit, but from it's low cost of implementation
> (https://forums.grsecurity.net/viewtopic.php?f=7&t=3367). This patch
> certainly meets the low cost of implementation bar.

Ok, I buy that with the 64bit-only caveat.

> In the Project Zero Stagefright post
> (http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html),
> we see that the linear allocation of memory combined with the low
> number of bits in the initial mmap offset resulted in a much more
> predictable layout which aided the attacker. The initial random mmap
> base range was increased by Daniel Cashman in
> d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done nothing to
> address page relative attacks.
> 
> Inter-mmap randomization will decrease the predictability of later
> mmap() allocations, which should help make data structures harder to
> find in memory. In addition, this patch will also introduce unmapped
> gaps between pages, preventing linear overruns from one mapping to
> another another mapping. I am unable to quantify how much this will
> improve security, but it should be > 0.

One person calls "unmapped gaps between pages" a feature, others call it
a mess. ;-)

> I like Dave Hansen's suggestion that this functionality be limited to
> 64 bits, where concerns about running out of address space are
> essentially nil. I'd be supportive of this change if it was limited to
> 64 bits.

Agreed.

thx,

Jason.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-28 21:07             ` Jason Cooper
  0 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-28 21:07 UTC (permalink / raw)
  To: Nick Kralevich
  Cc: Roberts, William C, linux-mm, linux-kernel, kernel-hardening,
	akpm, keescook, gregkh, jeffv, salyzyn, dcashman

On Wed, Jul 27, 2016 at 09:59:35AM -0700, Nick Kralevich wrote:
> On Tue, Jul 26, 2016 at 1:59 PM, Jason Cooper <jason@lakedaemon.net> wrote:
> >> > One thing I didn't make clear in my commit message is why this is good. Right
> >> > now, if you know An address within in a process, you know all offsets done with
> >> > mmap(). For instance, an offset To libX can yield libY by adding/subtracting an
> >> > offset. This is meant to make rops a bit harder, or In general any mapping offset
> >> > mmore difficult to find/guess.
> >
> > Are you able to quantify how many bits of entropy you're imposing on the
> > attacker?  Is this a chair in the hallway or a significant increase in
> > the chances of crashing the program before finding the desired address?
> 
> Quantifying the effect of many security changes is extremely
> difficult, especially for a probabilistic defense like ASLR. I would
> urge us to not place too high of a proof bar on this change.
> Channeling Spender / grsecurity team, ASLR gets it's benefit not from
> it's high benefit, but from it's low cost of implementation
> (https://forums.grsecurity.net/viewtopic.php?f=7&t=3367). This patch
> certainly meets the low cost of implementation bar.

Ok, I buy that with the 64bit-only caveat.

> In the Project Zero Stagefright post
> (http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html),
> we see that the linear allocation of memory combined with the low
> number of bits in the initial mmap offset resulted in a much more
> predictable layout which aided the attacker. The initial random mmap
> base range was increased by Daniel Cashman in
> d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done nothing to
> address page relative attacks.
> 
> Inter-mmap randomization will decrease the predictability of later
> mmap() allocations, which should help make data structures harder to
> find in memory. In addition, this patch will also introduce unmapped
> gaps between pages, preventing linear overruns from one mapping to
> another another mapping. I am unable to quantify how much this will
> improve security, but it should be > 0.

One person calls "unmapped gaps between pages" a feature, others call it
a mess. ;-)

> I like Dave Hansen's suggestion that this functionality be limited to
> 64 bits, where concerns about running out of address space are
> essentially nil. I'd be supportive of this change if it was limited to
> 64 bits.

Agreed.

thx,

Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-28 21:07             ` Jason Cooper
  (?)
  (?)
@ 2016-07-29 10:10             ` Daniel Micay
  2016-07-31 22:24                 ` Jason Cooper
  -1 siblings, 1 reply; 73+ messages in thread
From: Daniel Micay @ 2016-07-29 10:10 UTC (permalink / raw)
  To: kernel-hardening, Nick Kralevich
  Cc: Roberts, William C, linux-mm, linux-kernel, akpm, keescook,
	gregkh, jeffv, salyzyn, dcashman

[-- Attachment #1: Type: text/plain, Size: 2190 bytes --]

> > In the Project Zero Stagefright post
> > (http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html)
> > ,
> > we see that the linear allocation of memory combined with the low
> > number of bits in the initial mmap offset resulted in a much more
> > predictable layout which aided the attacker. The initial random mmap
> > base range was increased by Daniel Cashman in
> > d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done nothing to
> > address page relative attacks.
> > 
> > Inter-mmap randomization will decrease the predictability of later
> > mmap() allocations, which should help make data structures harder to
> > find in memory. In addition, this patch will also introduce unmapped
> > gaps between pages, preventing linear overruns from one mapping to
> > another another mapping. I am unable to quantify how much this will
> > improve security, but it should be > 0.
> 
> One person calls "unmapped gaps between pages" a feature, others call
> it
> a mess. ;-)

It's very hard to quantify the benefits of fine-grained randomization,
but there are other useful guarantees you could provide. It would be
quite helpful for the kernel to expose the option to force a PROT_NONE
mapping after every allocation. The gaps should actually be enforced.

So perhaps 3 things, simply exposed as off-by-default sysctl options (no
need for special treatment on 32-bit):

a) configurable minimum gap size in pages (for protection against linear
and small {under,over}flows)
b) configurable minimum gap size based on a ratio to allocation size
(for making the heap sparse to mitigate heap sprays, especially when
mixed with fine-grained randomization - for example 2x would add a 2M
gap after a 1M mapping)
c) configurable maximum random gap size (the random gap would be in
addition to the enforced minimums)

The randomization could just be considered an extra with minor benefits
rather than the whole feature. A full fine-grained randomization
implementation would need a higher-level form of randomization than gaps
in the kernel along with cooperation from userspace allocators. This
would make sense as one part of it though.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 851 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-29 10:10             ` [kernel-hardening] " Daniel Micay
@ 2016-07-31 22:24                 ` Jason Cooper
  0 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-31 22:24 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Nick Kralevich, Roberts, William C, linux-mm, linux-kernel, akpm,
	keescook, gregkh, jeffv, salyzyn, dcashman

Hi Daniel,

On Fri, Jul 29, 2016 at 06:10:02AM -0400, Daniel Micay wrote:
> > > In the Project Zero Stagefright post
> > > (http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html)
> > > , we see that the linear allocation of memory combined with the
> > > low number of bits in the initial mmap offset resulted in a much
> > > more predictable layout which aided the attacker. The initial
> > > random mmap base range was increased by Daniel Cashman in
> > > d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done nothing
> > > to address page relative attacks.
> > > 
> > > Inter-mmap randomization will decrease the predictability of later
> > > mmap() allocations, which should help make data structures harder
> > > to find in memory. In addition, this patch will also introduce
> > > unmapped gaps between pages, preventing linear overruns from one
> > > mapping to another another mapping. I am unable to quantify how
> > > much this will improve security, but it should be > 0.
> > 
> > One person calls "unmapped gaps between pages" a feature, others
> > call it a mess. ;-)
> 
> It's very hard to quantify the benefits of fine-grained randomization,

?  N = # of possible addresses.  The bigger N is, the more chances the
attacker will trip up before finding what they were looking for.

> but there are other useful guarantees you could provide. It would be
> quite helpful for the kernel to expose the option to force a PROT_NONE
> mapping after every allocation. The gaps should actually be enforced.
> 
> So perhaps 3 things, simply exposed as off-by-default sysctl options
> (no need for special treatment on 32-bit):

I'm certainly not an mm-developer, but this looks to me like we're
pushing the work of creating efficient, random mappings out to
userspace.  :-/

> a) configurable minimum gap size in pages (for protection against
> linear and small {under,over}flows) b) configurable minimum gap size
> based on a ratio to allocation size (for making the heap sparse to
> mitigate heap sprays, especially when mixed with fine-grained
> randomization - for example 2x would add a 2M gap after a 1M mapping)

mmm, this looks like an information leak.  Best to set a range of pages
and pick a random number within that range for each call.

> c) configurable maximum random gap size (the random gap would be in
> addition to the enforced minimums)
> 
> The randomization could just be considered an extra with minor
> benefits rather than the whole feature. A full fine-grained
> randomization implementation would need a higher-level form of
> randomization than gaps in the kernel along with cooperation from
> userspace allocators. This would make sense as one part of it though.

Ok, so here's an idea.  This idea could be used in conjunction with
random gaps, or on it's own.  It would be enhanced by userspace random
load order.

The benefit is that with 32bit address space, and no random gapping,
it's still not wasting much space.

Given a memory space, break it up into X bands such that there are 2*X
possible addresses.

  |A     B|C     D|E     F|G     H| ... |2*X-2  2*X-1|
  |--> <--|--> <--|--> <--|--> <--| ... |-->      <--|
min                                                  max

For each call to mmap, we randomly pick a value within [0 - 2*X).
Assuming A=0 in the diagram above, even values grow up, odd values grow
down.  Gradually consuming the single gap in the middle of each band.

How many bands to use would depend on:
  * 32/64bit
  * Average number of mmap calls
  * largest single mmap call usually seen
  * if using random gaps and range used

If the free gap in a chosen band is too small for the request, pick
again among the other bands.

Again, I'm not an mm dev, so I might be totally smoking crack on this
one...

thx,

Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-31 22:24                 ` Jason Cooper
  0 siblings, 0 replies; 73+ messages in thread
From: Jason Cooper @ 2016-07-31 22:24 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Nick Kralevich, Roberts, William C, linux-mm, linux-kernel, akpm,
	keescook, gregkh, jeffv, salyzyn, dcashman

Hi Daniel,

On Fri, Jul 29, 2016 at 06:10:02AM -0400, Daniel Micay wrote:
> > > In the Project Zero Stagefright post
> > > (http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html)
> > > , we see that the linear allocation of memory combined with the
> > > low number of bits in the initial mmap offset resulted in a much
> > > more predictable layout which aided the attacker. The initial
> > > random mmap base range was increased by Daniel Cashman in
> > > d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done nothing
> > > to address page relative attacks.
> > > 
> > > Inter-mmap randomization will decrease the predictability of later
> > > mmap() allocations, which should help make data structures harder
> > > to find in memory. In addition, this patch will also introduce
> > > unmapped gaps between pages, preventing linear overruns from one
> > > mapping to another another mapping. I am unable to quantify how
> > > much this will improve security, but it should be > 0.
> > 
> > One person calls "unmapped gaps between pages" a feature, others
> > call it a mess. ;-)
> 
> It's very hard to quantify the benefits of fine-grained randomization,

?  N = # of possible addresses.  The bigger N is, the more chances the
attacker will trip up before finding what they were looking for.

> but there are other useful guarantees you could provide. It would be
> quite helpful for the kernel to expose the option to force a PROT_NONE
> mapping after every allocation. The gaps should actually be enforced.
> 
> So perhaps 3 things, simply exposed as off-by-default sysctl options
> (no need for special treatment on 32-bit):

I'm certainly not an mm-developer, but this looks to me like we're
pushing the work of creating efficient, random mappings out to
userspace.  :-/

> a) configurable minimum gap size in pages (for protection against
> linear and small {under,over}flows) b) configurable minimum gap size
> based on a ratio to allocation size (for making the heap sparse to
> mitigate heap sprays, especially when mixed with fine-grained
> randomization - for example 2x would add a 2M gap after a 1M mapping)

mmm, this looks like an information leak.  Best to set a range of pages
and pick a random number within that range for each call.

> c) configurable maximum random gap size (the random gap would be in
> addition to the enforced minimums)
> 
> The randomization could just be considered an extra with minor
> benefits rather than the whole feature. A full fine-grained
> randomization implementation would need a higher-level form of
> randomization than gaps in the kernel along with cooperation from
> userspace allocators. This would make sense as one part of it though.

Ok, so here's an idea.  This idea could be used in conjunction with
random gaps, or on it's own.  It would be enhanced by userspace random
load order.

The benefit is that with 32bit address space, and no random gapping,
it's still not wasting much space.

Given a memory space, break it up into X bands such that there are 2*X
possible addresses.

  |A     B|C     D|E     F|G     H| ... |2*X-2  2*X-1|
  |--> <--|--> <--|--> <--|--> <--| ... |-->      <--|
min                                                  max

For each call to mmap, we randomly pick a value within [0 - 2*X).
Assuming A=0 in the diagram above, even values grow up, odd values grow
down.  Gradually consuming the single gap in the middle of each band.

How many bands to use would depend on:
  * 32/64bit
  * Average number of mmap calls
  * largest single mmap call usually seen
  * if using random gaps and range used

If the free gap in a chosen band is too small for the request, pick
again among the other bands.

Again, I'm not an mm dev, so I might be totally smoking crack on this
one...

thx,

Jason.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-31 22:24                 ` Jason Cooper
  (?)
@ 2016-08-01  0:24                 ` Daniel Micay
  -1 siblings, 0 replies; 73+ messages in thread
From: Daniel Micay @ 2016-08-01  0:24 UTC (permalink / raw)
  To: kernel-hardening
  Cc: Nick Kralevich, Roberts, William C, linux-mm, linux-kernel, akpm,
	keescook, gregkh, jeffv, salyzyn, dcashman

[-- Attachment #1: Type: text/plain, Size: 5091 bytes --]

> > It's very hard to quantify the benefits of fine-grained
> > randomization,
> 
> ?  N = # of possible addresses.  The bigger N is, the more chances the
> attacker will trip up before finding what they were looking for.

If the attacker is forcing the creation of many objects with a function
pointer and then trying to hit one, the only thing that would help is if
the heap is very sparse with random bases within it. They don't need to
hit a specific object for an exploit to work.

The details of how the randomization is done and the guarantees that are
provided certainly matter. Individual random gaps are low entropy and
they won't add up to much higher entropy randomization even for two
objects that are far apart. The entropy has no chance to build up since
the sizes will average out.

I'm not saying it doesn't make sense to do this (it's a feature that I
really want), but there are a lot of ways to approach fine-grained mmap
randomization and the design decisions should be justified and their
impact analyzed/measured.

> > but there are other useful guarantees you could provide. It would be
> > quite helpful for the kernel to expose the option to force a
> > PROT_NONE
> > mapping after every allocation. The gaps should actually be
> > enforced.
> > 
> > So perhaps 3 things, simply exposed as off-by-default sysctl options
> > (no need for special treatment on 32-bit):
> 
> I'm certainly not an mm-developer, but this looks to me like we're
> pushing the work of creating efficient, random mappings out to
> userspace.  :-/

Exposing configuration doesn't push work to userspace. I can't see any
way that this would be done by default even on 64-bit due to the extra
VMAs, so it really needs configuration.

> > a) configurable minimum gap size in pages (for protection against
> > linear and small {under,over}flows) b) configurable minimum gap size
> > based on a ratio to allocation size (for making the heap sparse to
> > mitigate heap sprays, especially when mixed with fine-grained
> > randomization - for example 2x would add a 2M gap after a 1M
> > mapping)
> 
> mmm, this looks like an information leak.  Best to set a range of
> pages
> and pick a random number within that range for each call.

A minimum gap size provides guarantees not offered by randomization
alone. It might not make sense to approach making the heap sparse by
forcing it separately from randomization, but doing it isn't leaking
information.

Obviously the random gap would be chosen by picking a maximum size (n)
and choosing a size between [0, n], which was the next potential
variable, separate from these non-randomization-related guarantees.

> > c) configurable maximum random gap size (the random gap would be in
> > addition to the enforced minimums)
> > 
> > The randomization could just be considered an extra with minor
> > benefits rather than the whole feature. A full fine-grained
> > randomization implementation would need a higher-level form of
> > randomization than gaps in the kernel along with cooperation from
> > userspace allocators. This would make sense as one part of it
> > though.
> 
> Ok, so here's an idea.  This idea could be used in conjunction with
> random gaps, or on it's own.  It would be enhanced by userspace random
> load order.
> 
> The benefit is that with 32bit address space, and no random gapping,
> it's still not wasting much space.
> 
> Given a memory space, break it up into X bands such that there are 2*X
> possible addresses.
> 
>   |A     B|C     D|E     F|G     H| ... |2*X-2  2*X-1|
>   |--> <--|--> <--|--> <--|--> <--| ... |-->      <--|
> min                                                  max
> 
> For each call to mmap, we randomly pick a value within [0 - 2*X).
> Assuming A=0 in the diagram above, even values grow up, odd values
> grow
> down.  Gradually consuming the single gap in the middle of each band.
> 
> How many bands to use would depend on:
>   * 32/64bit
>   * Average number of mmap calls
>   * largest single mmap call usually seen
>   * if using random gaps and range used
> 
> If the free gap in a chosen band is too small for the request, pick
> again among the other bands.
> 
> Again, I'm not an mm dev, so I might be totally smoking crack on this
> one...
> 
> thx,
> 
> Jason.

Address space fragmentation matters a lot, not only wasted space due to
memory that's explicitly reserved for random gaps. The randomization
guarantees under situations like memory exhaustion also matter.

I do think fine-grained randomization would be useful, but I think it's
unclear what a good approach would be, along with what the security
benefits would be. The malloc implementation is also very relevant.

OpenBSD has fine-grained mmap randomization with a fair bit of thought
put into how it works, but I don't think there has been much analysis of
it. The security properties really aren't clear.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 851 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
  2016-07-27 16:59           ` Nick Kralevich
  (?)
@ 2016-08-02 16:57             ` Roberts, William C
  -1 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-02 16:57 UTC (permalink / raw)
  To: Nick Kralevich, Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	jeffv, salyzyn, dcashman



> -----Original Message-----
> From: Nick Kralevich [mailto:nnk@google.com]
> Sent: Wednesday, July 27, 2016 10:00 AM
> To: Jason Cooper <jason@lakedaemon.net>
> Cc: Roberts, William C <william.c.roberts@intel.com>; linux-mm@kvack.org;
> linux-kernel@vger.kernel.org; kernel-hardening@lists.openwall.com;
> akpm@linux-foundation.org; keescook@chromium.org;
> gregkh@linuxfoundation.org; jeffv@google.com; salyzyn@android.com;
> dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> On Tue, Jul 26, 2016 at 1:59 PM, Jason Cooper <jason@lakedaemon.net> wrote:
> >> > One thing I didn't make clear in my commit message is why this is
> >> > good. Right now, if you know An address within in a process, you
> >> > know all offsets done with mmap(). For instance, an offset To libX
> >> > can yield libY by adding/subtracting an offset. This is meant to
> >> > make rops a bit harder, or In general any mapping offset mmore difficult to
> find/guess.
> >
> > Are you able to quantify how many bits of entropy you're imposing on
> > the attacker?  Is this a chair in the hallway or a significant
> > increase in the chances of crashing the program before finding the desired
> address?
> 
> Quantifying the effect of many security changes is extremely difficult, especially
> for a probabilistic defense like ASLR. I would urge us to not place too high of a
> proof bar on this change.
> Channeling Spender / grsecurity team, ASLR gets it's benefit not from it's high
> benefit, but from it's low cost of implementation
> (https://forums.grsecurity.net/viewtopic.php?f=7&t=3367). This patch certainly
> meets the low cost of implementation bar.
> 
> In the Project Zero Stagefright post
> (http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html),
> we see that the linear allocation of memory combined with the low number of
> bits in the initial mmap offset resulted in a much more predictable layout which
> aided the attacker. The initial random mmap base range was increased by Daniel
> Cashman in d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done
> nothing to address page relative attacks.
> 
> Inter-mmap randomization will decrease the predictability of later
> mmap() allocations, which should help make data structures harder to find in
> memory. In addition, this patch will also introduce unmapped gaps between
> pages, preventing linear overruns from one mapping to another another
> mapping. I am unable to quantify how much this will improve security, but it
> should be > 0.
> 
> I like Dave Hansen's suggestion that this functionality be limited to
> 64 bits, where concerns about running out of address space are essentially nil. I'd
> be supportive of this change if it was limited to
> 64 bits.

Sorry for the delay on responding, I was on vacation being worthless. Nick, very eloquently,
described what I failed to put in the commit message. I was thinking about this on vacation
and also thought that on 64 bit the fragmentation shouldn't be an issue.

@nnk, disabling ASLR via set_arch() on Android, is that only for 32 bit address spaces where
you had that problem?

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-02 16:57             ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-02 16:57 UTC (permalink / raw)
  To: Nick Kralevich, Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	jeffv, salyzyn, dcashman

[-- Warning: decoded text below may be mangled, UTF-8 assumed --]
[-- Attachment #1: Type: text/plain; charset="utf-8", Size: 3294 bytes --]



> -----Original Message-----
> From: Nick Kralevich [mailto:nnk@google.com]
> Sent: Wednesday, July 27, 2016 10:00 AM
> To: Jason Cooper <jason@lakedaemon.net>
> Cc: Roberts, William C <william.c.roberts@intel.com>; linux-mm@kvack.org;
> linux-kernel@vger.kernel.org; kernel-hardening@lists.openwall.com;
> akpm@linux-foundation.org; keescook@chromium.org;
> gregkh@linuxfoundation.org; jeffv@google.com; salyzyn@android.com;
> dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> On Tue, Jul 26, 2016 at 1:59 PM, Jason Cooper <jason@lakedaemon.net> wrote:
> >> > One thing I didn't make clear in my commit message is why this is
> >> > good. Right now, if you know An address within in a process, you
> >> > know all offsets done with mmap(). For instance, an offset To libX
> >> > can yield libY by adding/subtracting an offset. This is meant to
> >> > make rops a bit harder, or In general any mapping offset mmore difficult to
> find/guess.
> >
> > Are you able to quantify how many bits of entropy you're imposing on
> > the attacker?  Is this a chair in the hallway or a significant
> > increase in the chances of crashing the program before finding the desired
> address?
> 
> Quantifying the effect of many security changes is extremely difficult, especially
> for a probabilistic defense like ASLR. I would urge us to not place too high of a
> proof bar on this change.
> Channeling Spender / grsecurity team, ASLR gets it's benefit not from it's high
> benefit, but from it's low cost of implementation
> (https://forums.grsecurity.net/viewtopic.php?f=7&t=3367). This patch certainly
> meets the low cost of implementation bar.
> 
> In the Project Zero Stagefright post
> (http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html),
> we see that the linear allocation of memory combined with the low number of
> bits in the initial mmap offset resulted in a much more predictable layout which
> aided the attacker. The initial random mmap base range was increased by Daniel
> Cashman in d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done
> nothing to address page relative attacks.
> 
> Inter-mmap randomization will decrease the predictability of later
> mmap() allocations, which should help make data structures harder to find in
> memory. In addition, this patch will also introduce unmapped gaps between
> pages, preventing linear overruns from one mapping to another another
> mapping. I am unable to quantify how much this will improve security, but it
> should be > 0.
> 
> I like Dave Hansen's suggestion that this functionality be limited to
> 64 bits, where concerns about running out of address space are essentially nil. I'd
> be supportive of this change if it was limited to
> 64 bits.

Sorry for the delay on responding, I was on vacation being worthless. Nick, very eloquently,
described what I failed to put in the commit message. I was thinking about this on vacation
and also thought that on 64 bit the fragmentation shouldn't be an issue.

@nnk, disabling ASLR via set_arch() on Android, is that only for 32 bit address spaces where
you had that problem?
N‹§²æìr¸›zǧu©ž²Æ {\b­†éì¹»\x1c®&Þ–)îÆi¢žØ^n‡r¶‰šŽŠÝ¢j$½§$¢¸\x05¢¹¨­è§~Š'.)îÄÃ,yèm¶ŸÿÃ\f%Š{±šj+ƒðèž×¦j)Z†·Ÿ

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-02 16:57             ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-02 16:57 UTC (permalink / raw)
  To: Nick Kralevich, Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	jeffv, salyzyn, dcashman



> -----Original Message-----
> From: Nick Kralevich [mailto:nnk@google.com]
> Sent: Wednesday, July 27, 2016 10:00 AM
> To: Jason Cooper <jason@lakedaemon.net>
> Cc: Roberts, William C <william.c.roberts@intel.com>; linux-mm@kvack.org;
> linux-kernel@vger.kernel.org; kernel-hardening@lists.openwall.com;
> akpm@linux-foundation.org; keescook@chromium.org;
> gregkh@linuxfoundation.org; jeffv@google.com; salyzyn@android.com;
> dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> On Tue, Jul 26, 2016 at 1:59 PM, Jason Cooper <jason@lakedaemon.net> wrote:
> >> > One thing I didn't make clear in my commit message is why this is
> >> > good. Right now, if you know An address within in a process, you
> >> > know all offsets done with mmap(). For instance, an offset To libX
> >> > can yield libY by adding/subtracting an offset. This is meant to
> >> > make rops a bit harder, or In general any mapping offset mmore difficult to
> find/guess.
> >
> > Are you able to quantify how many bits of entropy you're imposing on
> > the attacker?  Is this a chair in the hallway or a significant
> > increase in the chances of crashing the program before finding the desired
> address?
> 
> Quantifying the effect of many security changes is extremely difficult, especially
> for a probabilistic defense like ASLR. I would urge us to not place too high of a
> proof bar on this change.
> Channeling Spender / grsecurity team, ASLR gets it's benefit not from it's high
> benefit, but from it's low cost of implementation
> (https://forums.grsecurity.net/viewtopic.php?f=7&t=3367). This patch certainly
> meets the low cost of implementation bar.
> 
> In the Project Zero Stagefright post
> (http://googleprojectzero.blogspot.com/2015/09/stagefrightened.html),
> we see that the linear allocation of memory combined with the low number of
> bits in the initial mmap offset resulted in a much more predictable layout which
> aided the attacker. The initial random mmap base range was increased by Daniel
> Cashman in d07e22597d1d355829b7b18ac19afa912cf758d1, but we've done
> nothing to address page relative attacks.
> 
> Inter-mmap randomization will decrease the predictability of later
> mmap() allocations, which should help make data structures harder to find in
> memory. In addition, this patch will also introduce unmapped gaps between
> pages, preventing linear overruns from one mapping to another another
> mapping. I am unable to quantify how much this will improve security, but it
> should be > 0.
> 
> I like Dave Hansen's suggestion that this functionality be limited to
> 64 bits, where concerns about running out of address space are essentially nil. I'd
> be supportive of this change if it was limited to
> 64 bits.

Sorry for the delay on responding, I was on vacation being worthless. Nick, very eloquently,
described what I failed to put in the commit message. I was thinking about this on vacation
and also thought that on 64 bit the fragmentation shouldn't be an issue.

@nnk, disabling ASLR via set_arch() on Android, is that only for 32 bit address spaces where
you had that problem?

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-08-02 16:57             ` Roberts, William C
  (?)
@ 2016-08-02 17:02               ` Nick Kralevich
  -1 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-08-02 17:02 UTC (permalink / raw)
  To: Roberts, William C
  Cc: Jason Cooper, linux-mm, linux-kernel, kernel-hardening, akpm,
	keescook, gregkh, jeffv, salyzyn, dcashman

On Tue, Aug 2, 2016 at 9:57 AM, Roberts, William C
<william.c.roberts@intel.com> wrote:
> @nnk, disabling ASLR via set_arch() on Android, is that only for 32 bit address spaces where
> you had that problem?

Yes. Only 32 bit address spaces had the fragmentation problem.

-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-02 17:02               ` Nick Kralevich
  0 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-08-02 17:02 UTC (permalink / raw)
  To: Roberts, William C
  Cc: Jason Cooper, linux-mm, linux-kernel, kernel-hardening, akpm,
	keescook, gregkh, jeffv, salyzyn, dcashman

On Tue, Aug 2, 2016 at 9:57 AM, Roberts, William C
<william.c.roberts@intel.com> wrote:
> @nnk, disabling ASLR via set_arch() on Android, is that only for 32 bit address spaces where
> you had that problem?

Yes. Only 32 bit address spaces had the fragmentation problem.

-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-02 17:02               ` Nick Kralevich
  0 siblings, 0 replies; 73+ messages in thread
From: Nick Kralevich @ 2016-08-02 17:02 UTC (permalink / raw)
  To: Roberts, William C
  Cc: Jason Cooper, linux-mm, linux-kernel, kernel-hardening, akpm,
	keescook, gregkh, jeffv, salyzyn, dcashman

On Tue, Aug 2, 2016 at 9:57 AM, Roberts, William C
<william.c.roberts@intel.com> wrote:
> @nnk, disabling ASLR via set_arch() on Android, is that only for 32 bit address spaces where
> you had that problem?

Yes. Only 32 bit address spaces had the fragmentation problem.

-- 
Nick Kralevich | Android Security | nnk@google.com | 650.214.4037

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 21:06           ` Roberts, William C
  (?)
@ 2016-08-02 17:15             ` Roberts, William C
  -1 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-02 17:15 UTC (permalink / raw)
  To: Roberts, William C, Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

<snip>
> >
> > No, I mean changes to mm/mmap.o.
> 

>From UML build:

NEW:
0000000000001610 <unmapped_area>:
    1610:	55                   	push   %rbp
    1611:	48 89 e5             	mov    %rsp,%rbp
    1614:	41 54                	push   %r12
    1616:	48 8d 45 e8          	lea    -0x18(%rbp),%rax
    161a:	53                   	push   %rbx
    161b:	48 89 fb             	mov    %rdi,%rbx
    161e:	48 83 ec 10          	sub    $0x10,%rsp
    1622:	48 25 00 e0 ff ff    	and    $0xffffffffffffe000,%rax
    1628:	48 8b 57 08          	mov    0x8(%rdi),%rdx
    162c:	48 03 57 20          	add    0x20(%rdi),%rdx
    1630:	48 8b 00             	mov    (%rax),%rax
    1633:	4c 8b 88 b0 01 00 00 	mov    0x1b0(%rax),%r9
    163a:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    1641:	0f 82 05 01 00 00    	jb     174c <unmapped_area+0x13c>
    1647:	48 8b 7f 18          	mov    0x18(%rdi),%rdi
    164b:	48 39 d7             	cmp    %rdx,%rdi
    164e:	0f 82 f8 00 00 00    	jb     174c <unmapped_area+0x13c>
    1654:	4c 8b 63 10          	mov    0x10(%rbx),%r12
    1658:	48 29 d7             	sub    %rdx,%rdi
    165b:	49 39 fc             	cmp    %rdi,%r12
    165e:	0f 87 e8 00 00 00    	ja     174c <unmapped_area+0x13c>
    1664:	49 8b 41 08          	mov    0x8(%r9),%rax
    1668:	48 85 c0             	test   %rax,%rax
    166b:	0f 84 93 00 00 00    	je     1704 <unmapped_area+0xf4>
    1671:	49 8b 49 08          	mov    0x8(%r9),%rcx
    1675:	48 39 51 18          	cmp    %rdx,0x18(%rcx)
    1679:	0f 82 85 00 00 00    	jb     1704 <unmapped_area+0xf4>
    167f:	4e 8d 14 22          	lea    (%rdx,%r12,1),%r10
    1683:	48 83 e9 20          	sub    $0x20,%rcx
    1687:	48 8b 31             	mov    (%rcx),%rsi
    168a:	4c 39 d6             	cmp    %r10,%rsi
    168d:	72 15                	jb     16a4 <unmapped_area+0x94>
    168f:	48 8b 41 30          	mov    0x30(%rcx),%rax
    1693:	48 85 c0             	test   %rax,%rax
    1696:	74 0c                	je     16a4 <unmapped_area+0x94>
    1698:	48 39 50 18          	cmp    %rdx,0x18(%rax)
    169c:	72 06                	jb     16a4 <unmapped_area+0x94>
    169e:	48 8d 48 e0          	lea    -0x20(%rax),%rcx
    16a2:	eb e3                	jmp    1687 <unmapped_area+0x77>
    16a4:	48 8b 41 18          	mov    0x18(%rcx),%rax
    16a8:	48 85 c0             	test   %rax,%rax
    16ab:	74 06                	je     16b3 <unmapped_area+0xa3>
    16ad:	4c 8b 40 08          	mov    0x8(%rax),%r8
    16b1:	eb 03                	jmp    16b6 <unmapped_area+0xa6>
    16b3:	45 31 c0             	xor    %r8d,%r8d
    16b6:	49 39 f8             	cmp    %rdi,%r8
    16b9:	0f 87 86 00 00 00    	ja     1745 <unmapped_area+0x135>
    16bf:	4c 39 d6             	cmp    %r10,%rsi
    16c2:	72 0b                	jb     16cf <unmapped_area+0xbf>
    16c4:	48 89 f0             	mov    %rsi,%rax
    16c7:	4c 29 c0             	sub    %r8,%rax
    16ca:	48 39 d0             	cmp    %rdx,%rax
    16cd:	73 49                	jae    1718 <unmapped_area+0x108>
    16cf:	48 8b 41 28          	mov    0x28(%rcx),%rax
    16d3:	48 85 c0             	test   %rax,%rax
    16d6:	74 06                	je     16de <unmapped_area+0xce>
    16d8:	48 39 50 18          	cmp    %rdx,0x18(%rax)
    16dc:	73 c0                	jae    169e <unmapped_area+0x8e>
    16de:	48 8b 41 20          	mov    0x20(%rcx),%rax
    16e2:	48 8d 71 20          	lea    0x20(%rcx),%rsi
    16e6:	48 83 e0 fc          	and    $0xfffffffffffffffc,%rax
    16ea:	74 18                	je     1704 <unmapped_area+0xf4>
    16ec:	48 3b 70 10          	cmp    0x10(%rax),%rsi
    16f0:	48 8d 48 e0          	lea    -0x20(%rax),%rcx
    16f4:	75 e8                	jne    16de <unmapped_area+0xce>
    16f6:	48 8b 70 f8          	mov    -0x8(%rax),%rsi
    16fa:	4c 8b 46 08          	mov    0x8(%rsi),%r8
    16fe:	48 8b 70 e0          	mov    -0x20(%rax),%rsi
    1702:	eb b2                	jmp    16b6 <unmapped_area+0xa6>
    1704:	4d 8b 41 38          	mov    0x38(%r9),%r8
    1708:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    170f:	49 39 f8             	cmp    %rdi,%r8
    1712:	77 38                	ja     174c <unmapped_area+0x13c>
    1714:	48 83 ce ff          	or     $0xffffffffffffffff,%rsi
    1718:	4d 39 e0             	cmp    %r12,%r8
    171b:	48 b8 00 00 00 00 00 	movabs $0x0,%rax
    1722:	00 00 00 
    1725:	4d 0f 43 e0          	cmovae %r8,%r12
    1729:	4c 89 e7             	mov    %r12,%rdi
    172c:	ff d0                	callq  *%rax
    172e:	48 85 c0             	test   %rax,%rax
    1731:	4c 0f 45 e0          	cmovne %rax,%r12
    1735:	48 8b 43 28          	mov    0x28(%rbx),%rax
    1739:	4c 29 e0             	sub    %r12,%rax
    173c:	48 23 43 20          	and    0x20(%rbx),%rax
    1740:	4c 01 e0             	add    %r12,%rax
    1743:	eb 07                	jmp    174c <unmapped_area+0x13c>
    1745:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    174c:	5a                   	pop    %rdx
    174d:	59                   	pop    %rcx
    174e:	5b                   	pop    %rbx
    174f:	41 5c                	pop    %r12
    1751:	5d                   	pop    %rbp
    1752:	c3                   	retq   

OLD:
0000000000001590 <unmapped_area>:
    1590:	55                   	push   %rbp
    1591:	48 89 e5             	mov    %rsp,%rbp
    1594:	53                   	push   %rbx
    1595:	48 8d 45 f0          	lea    -0x10(%rbp),%rax
    1599:	4c 8b 47 20          	mov    0x20(%rdi),%r8
    159d:	48 25 00 e0 ff ff    	and    $0xffffffffffffe000,%rax
    15a3:	48 8b 00             	mov    (%rax),%rax
    15a6:	4c 89 c6             	mov    %r8,%rsi
    15a9:	48 03 77 08          	add    0x8(%rdi),%rsi
    15ad:	4c 8b 98 b0 01 00 00 	mov    0x1b0(%rax),%r11
    15b4:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    15bb:	0f 82 e8 00 00 00    	jb     16a9 <unmapped_area+0x119>
    15c1:	4c 8b 57 18          	mov    0x18(%rdi),%r10
    15c5:	49 39 f2             	cmp    %rsi,%r10
    15c8:	0f 82 db 00 00 00    	jb     16a9 <unmapped_area+0x119>
    15ce:	4c 8b 4f 10          	mov    0x10(%rdi),%r9
    15d2:	49 29 f2             	sub    %rsi,%r10
    15d5:	4d 39 d1             	cmp    %r10,%r9
    15d8:	0f 87 cb 00 00 00    	ja     16a9 <unmapped_area+0x119>
    15de:	49 8b 43 08          	mov    0x8(%r11),%rax
    15e2:	48 85 c0             	test   %rax,%rax
    15e5:	0f 84 91 00 00 00    	je     167c <unmapped_area+0xec>
    15eb:	49 8b 53 08          	mov    0x8(%r11),%rdx
    15ef:	48 39 72 18          	cmp    %rsi,0x18(%rdx)
    15f3:	0f 82 83 00 00 00    	jb     167c <unmapped_area+0xec>
    15f9:	4a 8d 1c 0e          	lea    (%rsi,%r9,1),%rbx
    15fd:	48 83 ea 20          	sub    $0x20,%rdx
    1601:	48 8b 02             	mov    (%rdx),%rax
    1604:	48 39 d8             	cmp    %rbx,%rax
    1607:	72 15                	jb     161e <unmapped_area+0x8e>
    1609:	48 8b 4a 30          	mov    0x30(%rdx),%rcx
    160d:	48 85 c9             	test   %rcx,%rcx
    1610:	74 0c                	je     161e <unmapped_area+0x8e>
    1612:	48 39 71 18          	cmp    %rsi,0x18(%rcx)
    1616:	72 06                	jb     161e <unmapped_area+0x8e>
    1618:	48 8d 51 e0          	lea    -0x20(%rcx),%rdx
    161c:	eb e3                	jmp    1601 <unmapped_area+0x71>
    161e:	48 8b 4a 18          	mov    0x18(%rdx),%rcx
    1622:	48 85 c9             	test   %rcx,%rcx
    1625:	74 06                	je     162d <unmapped_area+0x9d>
    1627:	48 8b 49 08          	mov    0x8(%rcx),%rcx
    162b:	eb 02                	jmp    162f <unmapped_area+0x9f>
    162d:	31 c9                	xor    %ecx,%ecx
    162f:	4c 39 d1             	cmp    %r10,%rcx
    1632:	77 6e                	ja     16a2 <unmapped_area+0x112>
    1634:	48 39 d8             	cmp    %rbx,%rax
    1637:	72 08                	jb     1641 <unmapped_area+0xb1>
    1639:	48 29 c8             	sub    %rcx,%rax
    163c:	48 39 f0             	cmp    %rsi,%rax
    163f:	73 4b                	jae    168c <unmapped_area+0xfc>
    1641:	48 8b 42 28          	mov    0x28(%rdx),%rax
    1645:	48 85 c0             	test   %rax,%rax
    1648:	74 0c                	je     1656 <unmapped_area+0xc6>
    164a:	48 39 70 18          	cmp    %rsi,0x18(%rax)
    164e:	72 06                	jb     1656 <unmapped_area+0xc6>
    1650:	48 8d 50 e0          	lea    -0x20(%rax),%rdx
    1654:	eb ab                	jmp    1601 <unmapped_area+0x71>
    1656:	48 8b 42 20          	mov    0x20(%rdx),%rax
    165a:	48 8d 4a 20          	lea    0x20(%rdx),%rcx
    165e:	48 83 e0 fc          	and    $0xfffffffffffffffc,%rax
    1662:	74 18                	je     167c <unmapped_area+0xec>
    1664:	48 3b 48 10          	cmp    0x10(%rax),%rcx
    1668:	48 8d 50 e0          	lea    -0x20(%rax),%rdx
    166c:	75 e8                	jne    1656 <unmapped_area+0xc6>
    166e:	48 8b 48 f8          	mov    -0x8(%rax),%rcx
    1672:	48 8b 40 e0          	mov    -0x20(%rax),%rax
    1676:	48 8b 49 08          	mov    0x8(%rcx),%rcx
    167a:	eb b3                	jmp    162f <unmapped_area+0x9f>
    167c:	49 8b 4b 38          	mov    0x38(%r11),%rcx
    1680:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    1687:	4c 39 d1             	cmp    %r10,%rcx
    168a:	77 1d                	ja     16a9 <unmapped_area+0x119>
    168c:	48 8b 47 28          	mov    0x28(%rdi),%rax
    1690:	4c 39 c9             	cmp    %r9,%rcx
    1693:	49 0f 42 c9          	cmovb  %r9,%rcx
    1697:	48 29 c8             	sub    %rcx,%rax
    169a:	4c 21 c0             	and    %r8,%rax
    169d:	48 01 c8             	add    %rcx,%rax
    16a0:	eb 07                	jmp    16a9 <unmapped_area+0x119>
    16a2:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    16a9:	5b                   	pop    %rbx
    16aa:	5d                   	pop    %rbp
    16ab:	c3                   	retq   

<snip>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-02 17:15             ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-02 17:15 UTC (permalink / raw)
  To: Roberts, William C, Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

<snip>
> >
> > No, I mean changes to mm/mmap.o.
> 

>From UML build:

NEW:
0000000000001610 <unmapped_area>:
    1610:	55                   	push   %rbp
    1611:	48 89 e5             	mov    %rsp,%rbp
    1614:	41 54                	push   %r12
    1616:	48 8d 45 e8          	lea    -0x18(%rbp),%rax
    161a:	53                   	push   %rbx
    161b:	48 89 fb             	mov    %rdi,%rbx
    161e:	48 83 ec 10          	sub    $0x10,%rsp
    1622:	48 25 00 e0 ff ff    	and    $0xffffffffffffe000,%rax
    1628:	48 8b 57 08          	mov    0x8(%rdi),%rdx
    162c:	48 03 57 20          	add    0x20(%rdi),%rdx
    1630:	48 8b 00             	mov    (%rax),%rax
    1633:	4c 8b 88 b0 01 00 00 	mov    0x1b0(%rax),%r9
    163a:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    1641:	0f 82 05 01 00 00    	jb     174c <unmapped_area+0x13c>
    1647:	48 8b 7f 18          	mov    0x18(%rdi),%rdi
    164b:	48 39 d7             	cmp    %rdx,%rdi
    164e:	0f 82 f8 00 00 00    	jb     174c <unmapped_area+0x13c>
    1654:	4c 8b 63 10          	mov    0x10(%rbx),%r12
    1658:	48 29 d7             	sub    %rdx,%rdi
    165b:	49 39 fc             	cmp    %rdi,%r12
    165e:	0f 87 e8 00 00 00    	ja     174c <unmapped_area+0x13c>
    1664:	49 8b 41 08          	mov    0x8(%r9),%rax
    1668:	48 85 c0             	test   %rax,%rax
    166b:	0f 84 93 00 00 00    	je     1704 <unmapped_area+0xf4>
    1671:	49 8b 49 08          	mov    0x8(%r9),%rcx
    1675:	48 39 51 18          	cmp    %rdx,0x18(%rcx)
    1679:	0f 82 85 00 00 00    	jb     1704 <unmapped_area+0xf4>
    167f:	4e 8d 14 22          	lea    (%rdx,%r12,1),%r10
    1683:	48 83 e9 20          	sub    $0x20,%rcx
    1687:	48 8b 31             	mov    (%rcx),%rsi
    168a:	4c 39 d6             	cmp    %r10,%rsi
    168d:	72 15                	jb     16a4 <unmapped_area+0x94>
    168f:	48 8b 41 30          	mov    0x30(%rcx),%rax
    1693:	48 85 c0             	test   %rax,%rax
    1696:	74 0c                	je     16a4 <unmapped_area+0x94>
    1698:	48 39 50 18          	cmp    %rdx,0x18(%rax)
    169c:	72 06                	jb     16a4 <unmapped_area+0x94>
    169e:	48 8d 48 e0          	lea    -0x20(%rax),%rcx
    16a2:	eb e3                	jmp    1687 <unmapped_area+0x77>
    16a4:	48 8b 41 18          	mov    0x18(%rcx),%rax
    16a8:	48 85 c0             	test   %rax,%rax
    16ab:	74 06                	je     16b3 <unmapped_area+0xa3>
    16ad:	4c 8b 40 08          	mov    0x8(%rax),%r8
    16b1:	eb 03                	jmp    16b6 <unmapped_area+0xa6>
    16b3:	45 31 c0             	xor    %r8d,%r8d
    16b6:	49 39 f8             	cmp    %rdi,%r8
    16b9:	0f 87 86 00 00 00    	ja     1745 <unmapped_area+0x135>
    16bf:	4c 39 d6             	cmp    %r10,%rsi
    16c2:	72 0b                	jb     16cf <unmapped_area+0xbf>
    16c4:	48 89 f0             	mov    %rsi,%rax
    16c7:	4c 29 c0             	sub    %r8,%rax
    16ca:	48 39 d0             	cmp    %rdx,%rax
    16cd:	73 49                	jae    1718 <unmapped_area+0x108>
    16cf:	48 8b 41 28          	mov    0x28(%rcx),%rax
    16d3:	48 85 c0             	test   %rax,%rax
    16d6:	74 06                	je     16de <unmapped_area+0xce>
    16d8:	48 39 50 18          	cmp    %rdx,0x18(%rax)
    16dc:	73 c0                	jae    169e <unmapped_area+0x8e>
    16de:	48 8b 41 20          	mov    0x20(%rcx),%rax
    16e2:	48 8d 71 20          	lea    0x20(%rcx),%rsi
    16e6:	48 83 e0 fc          	and    $0xfffffffffffffffc,%rax
    16ea:	74 18                	je     1704 <unmapped_area+0xf4>
    16ec:	48 3b 70 10          	cmp    0x10(%rax),%rsi
    16f0:	48 8d 48 e0          	lea    -0x20(%rax),%rcx
    16f4:	75 e8                	jne    16de <unmapped_area+0xce>
    16f6:	48 8b 70 f8          	mov    -0x8(%rax),%rsi
    16fa:	4c 8b 46 08          	mov    0x8(%rsi),%r8
    16fe:	48 8b 70 e0          	mov    -0x20(%rax),%rsi
    1702:	eb b2                	jmp    16b6 <unmapped_area+0xa6>
    1704:	4d 8b 41 38          	mov    0x38(%r9),%r8
    1708:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    170f:	49 39 f8             	cmp    %rdi,%r8
    1712:	77 38                	ja     174c <unmapped_area+0x13c>
    1714:	48 83 ce ff          	or     $0xffffffffffffffff,%rsi
    1718:	4d 39 e0             	cmp    %r12,%r8
    171b:	48 b8 00 00 00 00 00 	movabs $0x0,%rax
    1722:	00 00 00 
    1725:	4d 0f 43 e0          	cmovae %r8,%r12
    1729:	4c 89 e7             	mov    %r12,%rdi
    172c:	ff d0                	callq  *%rax
    172e:	48 85 c0             	test   %rax,%rax
    1731:	4c 0f 45 e0          	cmovne %rax,%r12
    1735:	48 8b 43 28          	mov    0x28(%rbx),%rax
    1739:	4c 29 e0             	sub    %r12,%rax
    173c:	48 23 43 20          	and    0x20(%rbx),%rax
    1740:	4c 01 e0             	add    %r12,%rax
    1743:	eb 07                	jmp    174c <unmapped_area+0x13c>
    1745:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    174c:	5a                   	pop    %rdx
    174d:	59                   	pop    %rcx
    174e:	5b                   	pop    %rbx
    174f:	41 5c                	pop    %r12
    1751:	5d                   	pop    %rbp
    1752:	c3                   	retq   

OLD:
0000000000001590 <unmapped_area>:
    1590:	55                   	push   %rbp
    1591:	48 89 e5             	mov    %rsp,%rbp
    1594:	53                   	push   %rbx
    1595:	48 8d 45 f0          	lea    -0x10(%rbp),%rax
    1599:	4c 8b 47 20          	mov    0x20(%rdi),%r8
    159d:	48 25 00 e0 ff ff    	and    $0xffffffffffffe000,%rax
    15a3:	48 8b 00             	mov    (%rax),%rax
    15a6:	4c 89 c6             	mov    %r8,%rsi
    15a9:	48 03 77 08          	add    0x8(%rdi),%rsi
    15ad:	4c 8b 98 b0 01 00 00 	mov    0x1b0(%rax),%r11
    15b4:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    15bb:	0f 82 e8 00 00 00    	jb     16a9 <unmapped_area+0x119>
    15c1:	4c 8b 57 18          	mov    0x18(%rdi),%r10
    15c5:	49 39 f2             	cmp    %rsi,%r10
    15c8:	0f 82 db 00 00 00    	jb     16a9 <unmapped_area+0x119>
    15ce:	4c 8b 4f 10          	mov    0x10(%rdi),%r9
    15d2:	49 29 f2             	sub    %rsi,%r10
    15d5:	4d 39 d1             	cmp    %r10,%r9
    15d8:	0f 87 cb 00 00 00    	ja     16a9 <unmapped_area+0x119>
    15de:	49 8b 43 08          	mov    0x8(%r11),%rax
    15e2:	48 85 c0             	test   %rax,%rax
    15e5:	0f 84 91 00 00 00    	je     167c <unmapped_area+0xec>
    15eb:	49 8b 53 08          	mov    0x8(%r11),%rdx
    15ef:	48 39 72 18          	cmp    %rsi,0x18(%rdx)
    15f3:	0f 82 83 00 00 00    	jb     167c <unmapped_area+0xec>
    15f9:	4a 8d 1c 0e          	lea    (%rsi,%r9,1),%rbx
    15fd:	48 83 ea 20          	sub    $0x20,%rdx
    1601:	48 8b 02             	mov    (%rdx),%rax
    1604:	48 39 d8             	cmp    %rbx,%rax
    1607:	72 15                	jb     161e <unmapped_area+0x8e>
    1609:	48 8b 4a 30          	mov    0x30(%rdx),%rcx
    160d:	48 85 c9             	test   %rcx,%rcx
    1610:	74 0c                	je     161e <unmapped_area+0x8e>
    1612:	48 39 71 18          	cmp    %rsi,0x18(%rcx)
    1616:	72 06                	jb     161e <unmapped_area+0x8e>
    1618:	48 8d 51 e0          	lea    -0x20(%rcx),%rdx
    161c:	eb e3                	jmp    1601 <unmapped_area+0x71>
    161e:	48 8b 4a 18          	mov    0x18(%rdx),%rcx
    1622:	48 85 c9             	test   %rcx,%rcx
    1625:	74 06                	je     162d <unmapped_area+0x9d>
    1627:	48 8b 49 08          	mov    0x8(%rcx),%rcx
    162b:	eb 02                	jmp    162f <unmapped_area+0x9f>
    162d:	31 c9                	xor    %ecx,%ecx
    162f:	4c 39 d1             	cmp    %r10,%rcx
    1632:	77 6e                	ja     16a2 <unmapped_area+0x112>
    1634:	48 39 d8             	cmp    %rbx,%rax
    1637:	72 08                	jb     1641 <unmapped_area+0xb1>
    1639:	48 29 c8             	sub    %rcx,%rax
    163c:	48 39 f0             	cmp    %rsi,%rax
    163f:	73 4b                	jae    168c <unmapped_area+0xfc>
    1641:	48 8b 42 28          	mov    0x28(%rdx),%rax
    1645:	48 85 c0             	test   %rax,%rax
    1648:	74 0c                	je     1656 <unmapped_area+0xc6>
    164a:	48 39 70 18          	cmp    %rsi,0x18(%rax)
    164e:	72 06                	jb     1656 <unmapped_area+0xc6>
    1650:	48 8d 50 e0          	lea    -0x20(%rax),%rdx
    1654:	eb ab                	jmp    1601 <unmapped_area+0x71>
    1656:	48 8b 42 20          	mov    0x20(%rdx),%rax
    165a:	48 8d 4a 20          	lea    0x20(%rdx),%rcx
    165e:	48 83 e0 fc          	and    $0xfffffffffffffffc,%rax
    1662:	74 18                	je     167c <unmapped_area+0xec>
    1664:	48 3b 48 10          	cmp    0x10(%rax),%rcx
    1668:	48 8d 50 e0          	lea    -0x20(%rax),%rdx
    166c:	75 e8                	jne    1656 <unmapped_area+0xc6>
    166e:	48 8b 48 f8          	mov    -0x8(%rax),%rcx
    1672:	48 8b 40 e0          	mov    -0x20(%rax),%rax
    1676:	48 8b 49 08          	mov    0x8(%rcx),%rcx
    167a:	eb b3                	jmp    162f <unmapped_area+0x9f>
    167c:	49 8b 4b 38          	mov    0x38(%r11),%rcx
    1680:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    1687:	4c 39 d1             	cmp    %r10,%rcx
    168a:	77 1d                	ja     16a9 <unmapped_area+0x119>
    168c:	48 8b 47 28          	mov    0x28(%rdi),%rax
    1690:	4c 39 c9             	cmp    %r9,%rcx
    1693:	49 0f 42 c9          	cmovb  %r9,%rcx
    1697:	48 29 c8             	sub    %rcx,%rax
    169a:	4c 21 c0             	and    %r8,%rax
    169d:	48 01 c8             	add    %rcx,%rax
    16a0:	eb 07                	jmp    16a9 <unmapped_area+0x119>
    16a2:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    16a9:	5b                   	pop    %rbx
    16aa:	5d                   	pop    %rbp
    16ab:	c3                   	retq   

<snip>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-02 17:15             ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-02 17:15 UTC (permalink / raw)
  To: Roberts, William C, Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman

<snip>
> >
> > No, I mean changes to mm/mmap.o.
> 

>From UML build:

NEW:
0000000000001610 <unmapped_area>:
    1610:	55                   	push   %rbp
    1611:	48 89 e5             	mov    %rsp,%rbp
    1614:	41 54                	push   %r12
    1616:	48 8d 45 e8          	lea    -0x18(%rbp),%rax
    161a:	53                   	push   %rbx
    161b:	48 89 fb             	mov    %rdi,%rbx
    161e:	48 83 ec 10          	sub    $0x10,%rsp
    1622:	48 25 00 e0 ff ff    	and    $0xffffffffffffe000,%rax
    1628:	48 8b 57 08          	mov    0x8(%rdi),%rdx
    162c:	48 03 57 20          	add    0x20(%rdi),%rdx
    1630:	48 8b 00             	mov    (%rax),%rax
    1633:	4c 8b 88 b0 01 00 00 	mov    0x1b0(%rax),%r9
    163a:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    1641:	0f 82 05 01 00 00    	jb     174c <unmapped_area+0x13c>
    1647:	48 8b 7f 18          	mov    0x18(%rdi),%rdi
    164b:	48 39 d7             	cmp    %rdx,%rdi
    164e:	0f 82 f8 00 00 00    	jb     174c <unmapped_area+0x13c>
    1654:	4c 8b 63 10          	mov    0x10(%rbx),%r12
    1658:	48 29 d7             	sub    %rdx,%rdi
    165b:	49 39 fc             	cmp    %rdi,%r12
    165e:	0f 87 e8 00 00 00    	ja     174c <unmapped_area+0x13c>
    1664:	49 8b 41 08          	mov    0x8(%r9),%rax
    1668:	48 85 c0             	test   %rax,%rax
    166b:	0f 84 93 00 00 00    	je     1704 <unmapped_area+0xf4>
    1671:	49 8b 49 08          	mov    0x8(%r9),%rcx
    1675:	48 39 51 18          	cmp    %rdx,0x18(%rcx)
    1679:	0f 82 85 00 00 00    	jb     1704 <unmapped_area+0xf4>
    167f:	4e 8d 14 22          	lea    (%rdx,%r12,1),%r10
    1683:	48 83 e9 20          	sub    $0x20,%rcx
    1687:	48 8b 31             	mov    (%rcx),%rsi
    168a:	4c 39 d6             	cmp    %r10,%rsi
    168d:	72 15                	jb     16a4 <unmapped_area+0x94>
    168f:	48 8b 41 30          	mov    0x30(%rcx),%rax
    1693:	48 85 c0             	test   %rax,%rax
    1696:	74 0c                	je     16a4 <unmapped_area+0x94>
    1698:	48 39 50 18          	cmp    %rdx,0x18(%rax)
    169c:	72 06                	jb     16a4 <unmapped_area+0x94>
    169e:	48 8d 48 e0          	lea    -0x20(%rax),%rcx
    16a2:	eb e3                	jmp    1687 <unmapped_area+0x77>
    16a4:	48 8b 41 18          	mov    0x18(%rcx),%rax
    16a8:	48 85 c0             	test   %rax,%rax
    16ab:	74 06                	je     16b3 <unmapped_area+0xa3>
    16ad:	4c 8b 40 08          	mov    0x8(%rax),%r8
    16b1:	eb 03                	jmp    16b6 <unmapped_area+0xa6>
    16b3:	45 31 c0             	xor    %r8d,%r8d
    16b6:	49 39 f8             	cmp    %rdi,%r8
    16b9:	0f 87 86 00 00 00    	ja     1745 <unmapped_area+0x135>
    16bf:	4c 39 d6             	cmp    %r10,%rsi
    16c2:	72 0b                	jb     16cf <unmapped_area+0xbf>
    16c4:	48 89 f0             	mov    %rsi,%rax
    16c7:	4c 29 c0             	sub    %r8,%rax
    16ca:	48 39 d0             	cmp    %rdx,%rax
    16cd:	73 49                	jae    1718 <unmapped_area+0x108>
    16cf:	48 8b 41 28          	mov    0x28(%rcx),%rax
    16d3:	48 85 c0             	test   %rax,%rax
    16d6:	74 06                	je     16de <unmapped_area+0xce>
    16d8:	48 39 50 18          	cmp    %rdx,0x18(%rax)
    16dc:	73 c0                	jae    169e <unmapped_area+0x8e>
    16de:	48 8b 41 20          	mov    0x20(%rcx),%rax
    16e2:	48 8d 71 20          	lea    0x20(%rcx),%rsi
    16e6:	48 83 e0 fc          	and    $0xfffffffffffffffc,%rax
    16ea:	74 18                	je     1704 <unmapped_area+0xf4>
    16ec:	48 3b 70 10          	cmp    0x10(%rax),%rsi
    16f0:	48 8d 48 e0          	lea    -0x20(%rax),%rcx
    16f4:	75 e8                	jne    16de <unmapped_area+0xce>
    16f6:	48 8b 70 f8          	mov    -0x8(%rax),%rsi
    16fa:	4c 8b 46 08          	mov    0x8(%rsi),%r8
    16fe:	48 8b 70 e0          	mov    -0x20(%rax),%rsi
    1702:	eb b2                	jmp    16b6 <unmapped_area+0xa6>
    1704:	4d 8b 41 38          	mov    0x38(%r9),%r8
    1708:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    170f:	49 39 f8             	cmp    %rdi,%r8
    1712:	77 38                	ja     174c <unmapped_area+0x13c>
    1714:	48 83 ce ff          	or     $0xffffffffffffffff,%rsi
    1718:	4d 39 e0             	cmp    %r12,%r8
    171b:	48 b8 00 00 00 00 00 	movabs $0x0,%rax
    1722:	00 00 00 
    1725:	4d 0f 43 e0          	cmovae %r8,%r12
    1729:	4c 89 e7             	mov    %r12,%rdi
    172c:	ff d0                	callq  *%rax
    172e:	48 85 c0             	test   %rax,%rax
    1731:	4c 0f 45 e0          	cmovne %rax,%r12
    1735:	48 8b 43 28          	mov    0x28(%rbx),%rax
    1739:	4c 29 e0             	sub    %r12,%rax
    173c:	48 23 43 20          	and    0x20(%rbx),%rax
    1740:	4c 01 e0             	add    %r12,%rax
    1743:	eb 07                	jmp    174c <unmapped_area+0x13c>
    1745:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    174c:	5a                   	pop    %rdx
    174d:	59                   	pop    %rcx
    174e:	5b                   	pop    %rbx
    174f:	41 5c                	pop    %r12
    1751:	5d                   	pop    %rbp
    1752:	c3                   	retq   

OLD:
0000000000001590 <unmapped_area>:
    1590:	55                   	push   %rbp
    1591:	48 89 e5             	mov    %rsp,%rbp
    1594:	53                   	push   %rbx
    1595:	48 8d 45 f0          	lea    -0x10(%rbp),%rax
    1599:	4c 8b 47 20          	mov    0x20(%rdi),%r8
    159d:	48 25 00 e0 ff ff    	and    $0xffffffffffffe000,%rax
    15a3:	48 8b 00             	mov    (%rax),%rax
    15a6:	4c 89 c6             	mov    %r8,%rsi
    15a9:	48 03 77 08          	add    0x8(%rdi),%rsi
    15ad:	4c 8b 98 b0 01 00 00 	mov    0x1b0(%rax),%r11
    15b4:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    15bb:	0f 82 e8 00 00 00    	jb     16a9 <unmapped_area+0x119>
    15c1:	4c 8b 57 18          	mov    0x18(%rdi),%r10
    15c5:	49 39 f2             	cmp    %rsi,%r10
    15c8:	0f 82 db 00 00 00    	jb     16a9 <unmapped_area+0x119>
    15ce:	4c 8b 4f 10          	mov    0x10(%rdi),%r9
    15d2:	49 29 f2             	sub    %rsi,%r10
    15d5:	4d 39 d1             	cmp    %r10,%r9
    15d8:	0f 87 cb 00 00 00    	ja     16a9 <unmapped_area+0x119>
    15de:	49 8b 43 08          	mov    0x8(%r11),%rax
    15e2:	48 85 c0             	test   %rax,%rax
    15e5:	0f 84 91 00 00 00    	je     167c <unmapped_area+0xec>
    15eb:	49 8b 53 08          	mov    0x8(%r11),%rdx
    15ef:	48 39 72 18          	cmp    %rsi,0x18(%rdx)
    15f3:	0f 82 83 00 00 00    	jb     167c <unmapped_area+0xec>
    15f9:	4a 8d 1c 0e          	lea    (%rsi,%r9,1),%rbx
    15fd:	48 83 ea 20          	sub    $0x20,%rdx
    1601:	48 8b 02             	mov    (%rdx),%rax
    1604:	48 39 d8             	cmp    %rbx,%rax
    1607:	72 15                	jb     161e <unmapped_area+0x8e>
    1609:	48 8b 4a 30          	mov    0x30(%rdx),%rcx
    160d:	48 85 c9             	test   %rcx,%rcx
    1610:	74 0c                	je     161e <unmapped_area+0x8e>
    1612:	48 39 71 18          	cmp    %rsi,0x18(%rcx)
    1616:	72 06                	jb     161e <unmapped_area+0x8e>
    1618:	48 8d 51 e0          	lea    -0x20(%rcx),%rdx
    161c:	eb e3                	jmp    1601 <unmapped_area+0x71>
    161e:	48 8b 4a 18          	mov    0x18(%rdx),%rcx
    1622:	48 85 c9             	test   %rcx,%rcx
    1625:	74 06                	je     162d <unmapped_area+0x9d>
    1627:	48 8b 49 08          	mov    0x8(%rcx),%rcx
    162b:	eb 02                	jmp    162f <unmapped_area+0x9f>
    162d:	31 c9                	xor    %ecx,%ecx
    162f:	4c 39 d1             	cmp    %r10,%rcx
    1632:	77 6e                	ja     16a2 <unmapped_area+0x112>
    1634:	48 39 d8             	cmp    %rbx,%rax
    1637:	72 08                	jb     1641 <unmapped_area+0xb1>
    1639:	48 29 c8             	sub    %rcx,%rax
    163c:	48 39 f0             	cmp    %rsi,%rax
    163f:	73 4b                	jae    168c <unmapped_area+0xfc>
    1641:	48 8b 42 28          	mov    0x28(%rdx),%rax
    1645:	48 85 c0             	test   %rax,%rax
    1648:	74 0c                	je     1656 <unmapped_area+0xc6>
    164a:	48 39 70 18          	cmp    %rsi,0x18(%rax)
    164e:	72 06                	jb     1656 <unmapped_area+0xc6>
    1650:	48 8d 50 e0          	lea    -0x20(%rax),%rdx
    1654:	eb ab                	jmp    1601 <unmapped_area+0x71>
    1656:	48 8b 42 20          	mov    0x20(%rdx),%rax
    165a:	48 8d 4a 20          	lea    0x20(%rdx),%rcx
    165e:	48 83 e0 fc          	and    $0xfffffffffffffffc,%rax
    1662:	74 18                	je     167c <unmapped_area+0xec>
    1664:	48 3b 48 10          	cmp    0x10(%rax),%rcx
    1668:	48 8d 50 e0          	lea    -0x20(%rax),%rdx
    166c:	75 e8                	jne    1656 <unmapped_area+0xc6>
    166e:	48 8b 48 f8          	mov    -0x8(%rax),%rcx
    1672:	48 8b 40 e0          	mov    -0x20(%rax),%rax
    1676:	48 8b 49 08          	mov    0x8(%rcx),%rcx
    167a:	eb b3                	jmp    162f <unmapped_area+0x9f>
    167c:	49 8b 4b 38          	mov    0x38(%r11),%rcx
    1680:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    1687:	4c 39 d1             	cmp    %r10,%rcx
    168a:	77 1d                	ja     16a9 <unmapped_area+0x119>
    168c:	48 8b 47 28          	mov    0x28(%rdi),%rax
    1690:	4c 39 c9             	cmp    %r9,%rcx
    1693:	49 0f 42 c9          	cmovb  %r9,%rcx
    1697:	48 29 c8             	sub    %rcx,%rax
    169a:	4c 21 c0             	and    %r8,%rax
    169d:	48 01 c8             	add    %rcx,%rax
    16a0:	eb 07                	jmp    16a9 <unmapped_area+0x119>
    16a2:	48 c7 c0 f4 ff ff ff 	mov    $0xfffffffffffffff4,%rax
    16a9:	5b                   	pop    %rbx
    16aa:	5d                   	pop    %rbp
    16ab:	c3                   	retq   

<snip>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 21:44             ` Jason Cooper
  (?)
@ 2016-08-02 17:17               ` Roberts, William C
  -1 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-02 17:17 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman



> -----Original Message-----
> From: Jason Cooper [mailto:jason@lakedaemon.net]
> Sent: Tuesday, July 26, 2016 2:45 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: linux-mm@kvack.org; linux-kernel@vger.kernel.org; kernel-
> hardening@lists.openwall.com; akpm@linux-foundation.org;
> keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> jeffv@google.com; salyzyn@android.com; dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> On Tue, Jul 26, 2016 at 09:06:30PM +0000, Roberts, William C wrote:
> > > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > > Behalf Of Jason Cooper On Tue, Jul 26, 2016 at 08:13:23PM +0000,
> > > Roberts, William C wrote:
> > > > > > From: Jason Cooper [mailto:jason@lakedaemon.net] On Tue, Jul
> > > > > > 26,
> > > > > > 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > > > > Performance Measurements:
> > > > > > > Using strace with -T option and filtering for mmap on the
> > > > > > > program ls shows a slowdown of approximate 3.7%
> > > > > >
> > > > > > I think it would be helpful to show the effect on the resulting object
> code.
> > > > >
> > > > > Do you mean the maps of the process? I have some captures for
> > > > > whoopsie on my Ubuntu system I can share.
> > >
> > > No, I mean changes to mm/mmap.o.
> >
> > Sure I can post the objdump of that, do you just want a diff of old vs new?
> 
> Well, I'm partial to scripts/objdiff, but bloat-o-meter might be more familiar to
> most of the folks who you'll be trying to convince to merge this.

Ahh I didn't know there were tools for this, thanks.

> 
> But that's the least of your worries atm. :-/  I was going to dig into mmap.c to
> confirm my suspicions, but Nick answered it for me.
> Fragmentation caused by this sort of feature is known to have caused problems
> in the past.

I don't know of any mmap randomization done in the past like this. Only the ASLR stuff, which
has had known issues on 32 bit address spaces.

> 
> I would highly recommend studying those prior use cases and answering those
> concerns before progressing too much further.  As I've mentioned elsewhere,
> you'll need to quantify the increased difficulty to the attacker that your patch
> imposes.  Personally, I would assess that first to see if it's worth the effort at all.

Yes agreed.

> 
> > > > > One thing I didn't make clear in my commit message is why this
> > > > > is good. Right now, if you know An address within in a process,
> > > > > you know all offsets done with mmap(). For instance, an offset
> > > > > To libX can yield libY by adding/subtracting an offset. This is
> > > > > meant to make rops a bit harder, or In general any mapping
> > > > > offset mmore difficult to
> > > find/guess.
> > >
> > > Are you able to quantify how many bits of entropy you're imposing on
> > > the attacker?  Is this a chair in the hallway or a significant
> > > increase in the chances of crashing the program before finding the
> > > desired address?
> >
> > I'd likely need to take a small sample of programs and examine them,
> > especially considering That as gaps are harder to find, it forces the
> > randomization down and randomization can Be directly altered with
> > length on mmap(), versus randomize_addr() which didn't have this
> > restriction but OOM'd do to fragmented easier.
> 
> Right, after the Android feedback from Nick, I think you have a lot of work on
> your hands.  Not just in design, but also in developing convincing arguments
> derived from real use cases.
> 
> thx,
> 
> Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-02 17:17               ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-02 17:17 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman



> -----Original Message-----
> From: Jason Cooper [mailto:jason@lakedaemon.net]
> Sent: Tuesday, July 26, 2016 2:45 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: linux-mm@kvack.org; linux-kernel@vger.kernel.org; kernel-
> hardening@lists.openwall.com; akpm@linux-foundation.org;
> keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> jeffv@google.com; salyzyn@android.com; dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> On Tue, Jul 26, 2016 at 09:06:30PM +0000, Roberts, William C wrote:
> > > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > > Behalf Of Jason Cooper On Tue, Jul 26, 2016 at 08:13:23PM +0000,
> > > Roberts, William C wrote:
> > > > > > From: Jason Cooper [mailto:jason@lakedaemon.net] On Tue, Jul
> > > > > > 26,
> > > > > > 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > > > > Performance Measurements:
> > > > > > > Using strace with -T option and filtering for mmap on the
> > > > > > > program ls shows a slowdown of approximate 3.7%
> > > > > >
> > > > > > I think it would be helpful to show the effect on the resulting object
> code.
> > > > >
> > > > > Do you mean the maps of the process? I have some captures for
> > > > > whoopsie on my Ubuntu system I can share.
> > >
> > > No, I mean changes to mm/mmap.o.
> >
> > Sure I can post the objdump of that, do you just want a diff of old vs new?
> 
> Well, I'm partial to scripts/objdiff, but bloat-o-meter might be more familiar to
> most of the folks who you'll be trying to convince to merge this.

Ahh I didn't know there were tools for this, thanks.

> 
> But that's the least of your worries atm. :-/  I was going to dig into mmap.c to
> confirm my suspicions, but Nick answered it for me.
> Fragmentation caused by this sort of feature is known to have caused problems
> in the past.

I don't know of any mmap randomization done in the past like this. Only the ASLR stuff, which
has had known issues on 32 bit address spaces.

> 
> I would highly recommend studying those prior use cases and answering those
> concerns before progressing too much further.  As I've mentioned elsewhere,
> you'll need to quantify the increased difficulty to the attacker that your patch
> imposes.  Personally, I would assess that first to see if it's worth the effort at all.

Yes agreed.

> 
> > > > > One thing I didn't make clear in my commit message is why this
> > > > > is good. Right now, if you know An address within in a process,
> > > > > you know all offsets done with mmap(). For instance, an offset
> > > > > To libX can yield libY by adding/subtracting an offset. This is
> > > > > meant to make rops a bit harder, or In general any mapping
> > > > > offset mmore difficult to
> > > find/guess.
> > >
> > > Are you able to quantify how many bits of entropy you're imposing on
> > > the attacker?  Is this a chair in the hallway or a significant
> > > increase in the chances of crashing the program before finding the
> > > desired address?
> >
> > I'd likely need to take a small sample of programs and examine them,
> > especially considering That as gaps are harder to find, it forces the
> > randomization down and randomization can Be directly altered with
> > length on mmap(), versus randomize_addr() which didn't have this
> > restriction but OOM'd do to fragmented easier.
> 
> Right, after the Android feedback from Nick, I think you have a lot of work on
> your hands.  Not just in design, but also in developing convincing arguments
> derived from real use cases.
> 
> thx,
> 
> Jason.

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-02 17:17               ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-02 17:17 UTC (permalink / raw)
  To: Jason Cooper
  Cc: linux-mm, linux-kernel, kernel-hardening, akpm, keescook, gregkh,
	nnk, jeffv, salyzyn, dcashman



> -----Original Message-----
> From: Jason Cooper [mailto:jason@lakedaemon.net]
> Sent: Tuesday, July 26, 2016 2:45 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: linux-mm@kvack.org; linux-kernel@vger.kernel.org; kernel-
> hardening@lists.openwall.com; akpm@linux-foundation.org;
> keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> jeffv@google.com; salyzyn@android.com; dcashman@android.com
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> On Tue, Jul 26, 2016 at 09:06:30PM +0000, Roberts, William C wrote:
> > > From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> > > Behalf Of Jason Cooper On Tue, Jul 26, 2016 at 08:13:23PM +0000,
> > > Roberts, William C wrote:
> > > > > > From: Jason Cooper [mailto:jason@lakedaemon.net] On Tue, Jul
> > > > > > 26,
> > > > > > 2016 at 11:22:26AM -0700, william.c.roberts@intel.com wrote:
> > > > > > > Performance Measurements:
> > > > > > > Using strace with -T option and filtering for mmap on the
> > > > > > > program ls shows a slowdown of approximate 3.7%
> > > > > >
> > > > > > I think it would be helpful to show the effect on the resulting object
> code.
> > > > >
> > > > > Do you mean the maps of the process? I have some captures for
> > > > > whoopsie on my Ubuntu system I can share.
> > >
> > > No, I mean changes to mm/mmap.o.
> >
> > Sure I can post the objdump of that, do you just want a diff of old vs new?
> 
> Well, I'm partial to scripts/objdiff, but bloat-o-meter might be more familiar to
> most of the folks who you'll be trying to convince to merge this.

Ahh I didn't know there were tools for this, thanks.

> 
> But that's the least of your worries atm. :-/  I was going to dig into mmap.c to
> confirm my suspicions, but Nick answered it for me.
> Fragmentation caused by this sort of feature is known to have caused problems
> in the past.

I don't know of any mmap randomization done in the past like this. Only the ASLR stuff, which
has had known issues on 32 bit address spaces.

> 
> I would highly recommend studying those prior use cases and answering those
> concerns before progressing too much further.  As I've mentioned elsewhere,
> you'll need to quantify the increased difficulty to the attacker that your patch
> imposes.  Personally, I would assess that first to see if it's worth the effort at all.

Yes agreed.

> 
> > > > > One thing I didn't make clear in my commit message is why this
> > > > > is good. Right now, if you know An address within in a process,
> > > > > you know all offsets done with mmap(). For instance, an offset
> > > > > To libX can yield libY by adding/subtracting an offset. This is
> > > > > meant to make rops a bit harder, or In general any mapping
> > > > > offset mmore difficult to
> > > find/guess.
> > >
> > > Are you able to quantify how many bits of entropy you're imposing on
> > > the attacker?  Is this a chair in the hallway or a significant
> > > increase in the chances of crashing the program before finding the
> > > desired address?
> >
> > I'd likely need to take a small sample of programs and examine them,
> > especially considering That as gaps are harder to find, it forces the
> > randomization down and randomization can Be directly altered with
> > length on mmap(), versus randomize_addr() which didn't have this
> > restriction but OOM'd do to fragmented easier.
> 
> Right, after the Android feedback from Nick, I think you have a lot of work on
> your hands.  Not just in design, but also in developing convincing arguments
> derived from real use cases.
> 
> thx,
> 
> Jason.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
  2016-08-02 17:17               ` Roberts, William C
  (?)
@ 2016-08-03 18:19                 ` Roberts, William C
  -1 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-03 18:19 UTC (permalink / raw)
  To: 'Jason Cooper'
  Cc: 'linux-mm@kvack.org',
	'linux-kernel@vger.kernel.org',
	'kernel-hardening@lists.openwall.com',
	'akpm@linux-foundation.org',
	'keescook@chromium.org',
	'gregkh@linuxfoundation.org', 'nnk@google.com',
	'jeffv@google.com', 'salyzyn@android.com',
	'dcashman@android.com'

<snip>
> 
> >
> > I would highly recommend studying those prior use cases and answering
> > those concerns before progressing too much further.  As I've mentioned
> > elsewhere, you'll need to quantify the increased difficulty to the
> > attacker that your patch imposes.  Personally, I would assess that first to see if
> it's worth the effort at all.
> 
> Yes agreed.
> 

For those following or those who care I have some preliminary results from a UML test bench. I need to set up better
testing, this I know :-P and test under constrained environments etc.

I ran 100,000 execs of bash and checked pmap for the location of libc's start address. I recorded this and kept track of the lowest
address it was loaded at as well as the highest, the range is aprox 37 bits of entropy. I calculated the Shannon entropy by calculating the frequency
of each address that libc was loaded at per 100,000 invocations, I am not sure if this is an abuse of that, considering Shannon's entropy is usually used
to calculate the entropy of byte sized units in a file (below you will find my city script). Plotting the data, it looked fairly random. Number theory is
not my strong suit, so if anyone has better ways of measuring entropy, I'm all ears, links appreciated.

I'm going to fire up some VMs in the coming weeks and test this more, ill post back with results if they differ from UML. Including ARM tablets running
Android.

low: 0x40000000
high: 0x401cb15000
range: 0x3fdcb15000
Shannon entropy: 10.514440

#!/usr/bin/env python

# modified from: http://www.kennethghartman.com/calculate-file-entropy/

import math
import sys

low=None
high=None

if len(sys.argv) != 2: 
    print "Usage: file_entropy.py [path]filename" 
    sys.exit()
 
d = {}
items=0
with open(sys.argv[1]) as f:
    for line in f:
	line = line.strip()
	line = line.lstrip("0")
	#print line
	items = items + 1
        if line not in d:
            d[line] = 1
        else:
            d[line] = d[line] + 1

	x = int("0x" + line, 16)
	if low == None:
		low = x
	if high == None:
		high = x

	if x < low:
		low = x

	if x > high:
		high = x


#print str(items)

#print d

print ("low: 0x%x" % low)
print ("high: 0x%x" % high)
print ("range: 0x%x" % (high - low))

# calculate the frequency of each address in the file
# XXX Should this really be in the 64 bit address space?
freqList = [] 
for k,v in d.iteritems(): 
    freqList.append(float(v) / items) 
 
#print freqList

# Shannon entropy 
ent = 0.0 
for freq in freqList: 
    if freq > 0: 
        ent = ent + freq * math.log(freq, 2) 
ent = -ent 
print ('Shannon entropy: %f' % ent  )

<snip>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-03 18:19                 ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-03 18:19 UTC (permalink / raw)
  To: 'Jason Cooper'
  Cc: 'linux-mm@kvack.org',
	'linux-kernel@vger.kernel.org',
	'kernel-hardening@lists.openwall.com',
	'akpm@linux-foundation.org',
	'keescook@chromium.org',
	'gregkh@linuxfoundation.org', 'nnk@google.com',
	'jeffv@google.com', 'salyzyn@android.com',
	'dcashman@android.com'

<snip>
> 
> >
> > I would highly recommend studying those prior use cases and answering
> > those concerns before progressing too much further.  As I've mentioned
> > elsewhere, you'll need to quantify the increased difficulty to the
> > attacker that your patch imposes.  Personally, I would assess that first to see if
> it's worth the effort at all.
> 
> Yes agreed.
> 

For those following or those who care I have some preliminary results from a UML test bench. I need to set up better
testing, this I know :-P and test under constrained environments etc.

I ran 100,000 execs of bash and checked pmap for the location of libc's start address. I recorded this and kept track of the lowest
address it was loaded at as well as the highest, the range is aprox 37 bits of entropy. I calculated the Shannon entropy by calculating the frequency
of each address that libc was loaded at per 100,000 invocations, I am not sure if this is an abuse of that, considering Shannon's entropy is usually used
to calculate the entropy of byte sized units in a file (below you will find my city script). Plotting the data, it looked fairly random. Number theory is
not my strong suit, so if anyone has better ways of measuring entropy, I'm all ears, links appreciated.

I'm going to fire up some VMs in the coming weeks and test this more, ill post back with results if they differ from UML. Including ARM tablets running
Android.

low: 0x40000000
high: 0x401cb15000
range: 0x3fdcb15000
Shannon entropy: 10.514440

#!/usr/bin/env python

# modified from: http://www.kennethghartman.com/calculate-file-entropy/

import math
import sys

low=None
high=None

if len(sys.argv) != 2: 
    print "Usage: file_entropy.py [path]filename" 
    sys.exit()
 
d = {}
items=0
with open(sys.argv[1]) as f:
    for line in f:
	line = line.strip()
	line = line.lstrip("0")
	#print line
	items = items + 1
        if line not in d:
            d[line] = 1
        else:
            d[line] = d[line] + 1

	x = int("0x" + line, 16)
	if low == None:
		low = x
	if high == None:
		high = x

	if x < low:
		low = x

	if x > high:
		high = x


#print str(items)

#print d

print ("low: 0x%x" % low)
print ("high: 0x%x" % high)
print ("range: 0x%x" % (high - low))

# calculate the frequency of each address in the file
# XXX Should this really be in the 64 bit address space?
freqList = [] 
for k,v in d.iteritems(): 
    freqList.append(float(v) / items) 
 
#print freqList

# Shannon entropy 
ent = 0.0 
for freq in freqList: 
    if freq > 0: 
        ent = ent + freq * math.log(freq, 2) 
ent = -ent 
print ('Shannon entropy: %f' % ent  )

<snip>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] RE: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-03 18:19                 ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-03 18:19 UTC (permalink / raw)
  To: 'Jason Cooper'
  Cc: 'linux-mm@kvack.org',
	'linux-kernel@vger.kernel.org',
	'kernel-hardening@lists.openwall.com',
	'akpm@linux-foundation.org',
	'keescook@chromium.org',
	'gregkh@linuxfoundation.org', 'nnk@google.com',
	'jeffv@google.com', 'salyzyn@android.com',
	'dcashman@android.com'

<snip>
> 
> >
> > I would highly recommend studying those prior use cases and answering
> > those concerns before progressing too much further.  As I've mentioned
> > elsewhere, you'll need to quantify the increased difficulty to the
> > attacker that your patch imposes.  Personally, I would assess that first to see if
> it's worth the effort at all.
> 
> Yes agreed.
> 

For those following or those who care I have some preliminary results from a UML test bench. I need to set up better
testing, this I know :-P and test under constrained environments etc.

I ran 100,000 execs of bash and checked pmap for the location of libc's start address. I recorded this and kept track of the lowest
address it was loaded at as well as the highest, the range is aprox 37 bits of entropy. I calculated the Shannon entropy by calculating the frequency
of each address that libc was loaded at per 100,000 invocations, I am not sure if this is an abuse of that, considering Shannon's entropy is usually used
to calculate the entropy of byte sized units in a file (below you will find my city script). Plotting the data, it looked fairly random. Number theory is
not my strong suit, so if anyone has better ways of measuring entropy, I'm all ears, links appreciated.

I'm going to fire up some VMs in the coming weeks and test this more, ill post back with results if they differ from UML. Including ARM tablets running
Android.

low: 0x40000000
high: 0x401cb15000
range: 0x3fdcb15000
Shannon entropy: 10.514440

#!/usr/bin/env python

# modified from: http://www.kennethghartman.com/calculate-file-entropy/

import math
import sys

low=None
high=None

if len(sys.argv) != 2: 
    print "Usage: file_entropy.py [path]filename" 
    sys.exit()
 
d = {}
items=0
with open(sys.argv[1]) as f:
    for line in f:
	line = line.strip()
	line = line.lstrip("0")
	#print line
	items = items + 1
        if line not in d:
            d[line] = 1
        else:
            d[line] = d[line] + 1

	x = int("0x" + line, 16)
	if low == None:
		low = x
	if high == None:
		high = x

	if x < low:
		low = x

	if x > high:
		high = x


#print str(items)

#print d

print ("low: 0x%x" % low)
print ("high: 0x%x" % high)
print ("range: 0x%x" % (high - low))

# calculate the frequency of each address in the file
# XXX Should this really be in the 64 bit address space?
freqList = [] 
for k,v in d.iteritems(): 
    freqList.append(float(v) / items) 
 
#print freqList

# Shannon entropy 
ent = 0.0 
for freq in freqList: 
    if freq > 0: 
        ent = ent + freq * math.log(freq, 2) 
ent = -ent 
print ('Shannon entropy: %f' % ent  )

<snip>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 18:22 ` [kernel-hardening] " william.c.roberts
  (?)
  (?)
@ 2016-08-04 16:53 ` Daniel Micay
  2016-08-04 16:55     ` Roberts, William C
  -1 siblings, 1 reply; 73+ messages in thread
From: Daniel Micay @ 2016-08-04 16:53 UTC (permalink / raw)
  To: kernel-hardening, jason, linux-mm, linux-kernel, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman

[-- Attachment #1: Type: text/plain, Size: 556 bytes --]

On Tue, 2016-07-26 at 11:22 -0700, william.c.roberts@intel.com wrote:
> The recent get_random_long() change in get_random_range() and then the
> subsequent patches Jason put out, all stemmed from my tinkering
> with the concept of randomizing mmap.
> 
> Any feedback would be greatly appreciated, including any feedback
> indicating that I am idiot.

The RAND_THREADSTACK feature in grsecurity makes the gaps the way I
think would be ideal, i.e. tracked as part of the appropriate VMA. It
would be straightforward to make it more general purpose.

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 851 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
  2016-08-04 16:53 ` [kernel-hardening] " Daniel Micay
@ 2016-08-04 16:55     ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-04 16:55 UTC (permalink / raw)
  To: kernel-hardening, jason, linux-mm, linux-kernel, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman

> -----Original Message-----
> From: Daniel Micay [mailto:danielmicay@gmail.com]
> Sent: Thursday, August 4, 2016 9:53 AM
> To: kernel-hardening@lists.openwall.com; jason@lakedaemon.net; linux-
> mm@vger.kernel.org; linux-kernel@vger.kernel.org; akpm@linux-
> foundation.org
> Cc: keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> jeffv@google.com; salyzyn@android.com; dcashman@android.com
> Subject: Re: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
> 
> On Tue, 2016-07-26 at 11:22 -0700, william.c.roberts@intel.com wrote:
> > The recent get_random_long() change in get_random_range() and then the
> > subsequent patches Jason put out, all stemmed from my tinkering with
> > the concept of randomizing mmap.
> >
> > Any feedback would be greatly appreciated, including any feedback
> > indicating that I am idiot.
> 
> The RAND_THREADSTACK feature in grsecurity makes the gaps the way I think
> would be ideal, i.e. tracked as part of the appropriate VMA. It would be
> straightforward to make it more general purpose.

I am not familiar with that, thanks for pointing it out. I'll take a look when my time
frees up for this again.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-04 16:55     ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-08-04 16:55 UTC (permalink / raw)
  To: kernel-hardening, jason, linux-mm, linux-kernel, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman

> -----Original Message-----
> From: Daniel Micay [mailto:danielmicay@gmail.com]
> Sent: Thursday, August 4, 2016 9:53 AM
> To: kernel-hardening@lists.openwall.com; jason@lakedaemon.net; linux-
> mm@vger.kernel.org; linux-kernel@vger.kernel.org; akpm@linux-
> foundation.org
> Cc: keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.com;
> jeffv@google.com; salyzyn@android.com; dcashman@android.com
> Subject: Re: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
> 
> On Tue, 2016-07-26 at 11:22 -0700, william.c.roberts@intel.com wrote:
> > The recent get_random_long() change in get_random_range() and then the
> > subsequent patches Jason put out, all stemmed from my tinkering with
> > the concept of randomizing mmap.
> >
> > Any feedback would be greatly appreciated, including any feedback
> > indicating that I am idiot.
> 
> The RAND_THREADSTACK feature in grsecurity makes the gaps the way I think
> would be ideal, i.e. tracked as part of the appropriate VMA. It would be
> straightforward to make it more general purpose.

I am not familiar with that, thanks for pointing it out. I'll take a look when my time
frees up for this again.

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [kernel-hardening] [PATCH] [RFC] Introduce mmap randomization
  2016-08-04 16:55     ` Roberts, William C
  (?)
@ 2016-08-04 17:10     ` Daniel Micay
  -1 siblings, 0 replies; 73+ messages in thread
From: Daniel Micay @ 2016-08-04 17:10 UTC (permalink / raw)
  To: kernel-hardening, jason, linux-mm, linux-kernel, akpm
  Cc: keescook, gregkh, nnk, jeffv, salyzyn, dcashman

[-- Attachment #1: Type: text/plain, Size: 1431 bytes --]

On Thu, 2016-08-04 at 16:55 +0000, Roberts, William C wrote:
> > 
> > -----Original Message-----
> > From: Daniel Micay [mailto:danielmicay@gmail.com]
> > Sent: Thursday, August 4, 2016 9:53 AM
> > To: kernel-hardening@lists.openwall.com; jason@lakedaemon.net;
> > linux-
> > mm@vger.kernel.org; linux-kernel@vger.kernel.org; akpm@linux-
> > foundation.org
> > Cc: keescook@chromium.org; gregkh@linuxfoundation.org; nnk@google.co
> > m;
> > jeffv@google.com; salyzyn@android.com; dcashman@android.com
> > Subject: Re: [kernel-hardening] [PATCH] [RFC] Introduce mmap
> > randomization
> > 
> > On Tue, 2016-07-26 at 11:22 -0700, william.c.roberts@intel.com
> > wrote:
> > > 
> > > The recent get_random_long() change in get_random_range() and then
> > > the
> > > subsequent patches Jason put out, all stemmed from my tinkering
> > > with
> > > the concept of randomizing mmap.
> > > 
> > > Any feedback would be greatly appreciated, including any feedback
> > > indicating that I am idiot.
> > 
> > The RAND_THREADSTACK feature in grsecurity makes the gaps the way I
> > think
> > would be ideal, i.e. tracked as part of the appropriate VMA. It
> > would be
> > straightforward to make it more general purpose.
> 
> I am not familiar with that, thanks for pointing it out. I'll take a
> look when my time
> frees up for this again.

I'm actually wrong about that now that I look more closely...

[-- Attachment #2: This is a digitally signed message part --]
[-- Type: application/pgp-signature, Size: 851 bytes --]

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 18:22   ` [kernel-hardening] " william.c.roberts
@ 2016-08-14 16:22     ` Pavel Machek
  -1 siblings, 0 replies; 73+ messages in thread
From: Pavel Machek @ 2016-08-14 16:22 UTC (permalink / raw)
  To: william.c.roberts
  Cc: jason, linux-mm, linux-kernel, kernel-hardening, akpm, keescook,
	gregkh, nnk, jeffv, salyzyn, dcashman

On Tue 2016-07-26 11:22:26, william.c.roberts@intel.com wrote:
> From: William Roberts <william.c.roberts@intel.com>
> 
> This patch introduces the ability randomize mmap locations where the
> address is not requested, for instance when ld is allocating pages for
> shared libraries. It chooses to randomize based on the current
> personality for ASLR.
> 
> Currently, allocations are done sequentially within unmapped address
> space gaps. This may happen top down or bottom up depending on scheme.
> 
> For instance these mmap calls produce contiguous mappings:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000
> 
> Note no gap between.
> 
> After patches:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000
> 
> Note gap between.

Ok, I guess you can do it... but... what will be the effect on
available address space for a process? By doing this, won't you
fragment it horribly? This might be nasty on 32-bit systems...

Best regards,
								Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-14 16:22     ` Pavel Machek
  0 siblings, 0 replies; 73+ messages in thread
From: Pavel Machek @ 2016-08-14 16:22 UTC (permalink / raw)
  To: william.c.roberts
  Cc: jason, linux-mm, linux-kernel, kernel-hardening, akpm, keescook,
	gregkh, nnk, jeffv, salyzyn, dcashman

On Tue 2016-07-26 11:22:26, william.c.roberts@intel.com wrote:
> From: William Roberts <william.c.roberts@intel.com>
> 
> This patch introduces the ability randomize mmap locations where the
> address is not requested, for instance when ld is allocating pages for
> shared libraries. It chooses to randomize based on the current
> personality for ASLR.
> 
> Currently, allocations are done sequentially within unmapped address
> space gaps. This may happen top down or bottom up depending on scheme.
> 
> For instance these mmap calls produce contiguous mappings:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000
> 
> Note no gap between.
> 
> After patches:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000
> 
> Note gap between.

Ok, I guess you can do it... but... what will be the effect on
available address space for a process? By doing this, won't you
fragment it horribly? This might be nasty on 32-bit systems...

Best regards,
								Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-27 16:59           ` Nick Kralevich
  (?)
@ 2016-08-14 16:31             ` Pavel Machek 1
  -1 siblings, 0 replies; 73+ messages in thread
From: Pavel Machek 1 @ 2016-08-14 16:31 UTC (permalink / raw)
  To: Nick Kralevich
  Cc: Jason Cooper, Roberts, William C, linux-mm, linux-kernel,
	kernel-hardening, akpm, keescook, gregkh, jeffv, salyzyn,
	dcashman

Hi!

> Inter-mmap randomization will decrease the predictability of later
> mmap() allocations, which should help make data structures harder to
> find in memory. In addition, this patch will also introduce unmapped
> gaps between pages, preventing linear overruns from one mapping to
> another another mapping. I am unable to quantify how much this will
> improve security, but it should be > 0.
> 
> I like Dave Hansen's suggestion that this functionality be limited to
> 64 bits, where concerns about running out of address space are
> essentially nil. I'd be supportive of this change if it was limited to
> 64 bits.

Yep, 64bits is easier. But notice that x86-64 machines do _not_ have
full 64bits of address space...

...and that if you use as much address space as possible, TLB flushes
will be slower because page table entries will need more cache.

So this will likely have performance implications even when
application does no syscalls :-(.

How do you plan to deal with huge memory pages support?

Best regards,
								Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-14 16:31             ` Pavel Machek 1
  0 siblings, 0 replies; 73+ messages in thread
From: Pavel Machek 1 @ 2016-08-14 16:31 UTC (permalink / raw)
  To: Nick Kralevich
  Cc: Jason Cooper, Roberts, William C, linux-mm, linux-kernel,
	kernel-hardening, akpm, keescook, gregkh, jeffv, salyzyn,
	dcashman

Hi!

> Inter-mmap randomization will decrease the predictability of later
> mmap() allocations, which should help make data structures harder to
> find in memory. In addition, this patch will also introduce unmapped
> gaps between pages, preventing linear overruns from one mapping to
> another another mapping. I am unable to quantify how much this will
> improve security, but it should be > 0.
> 
> I like Dave Hansen's suggestion that this functionality be limited to
> 64 bits, where concerns about running out of address space are
> essentially nil. I'd be supportive of this change if it was limited to
> 64 bits.

Yep, 64bits is easier. But notice that x86-64 machines do _not_ have
full 64bits of address space...

...and that if you use as much address space as possible, TLB flushes
will be slower because page table entries will need more cache.

So this will likely have performance implications even when
application does no syscalls :-(.

How do you plan to deal with huge memory pages support?

Best regards,
								Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [kernel-hardening] Re: [PATCH] [RFC] Introduce mmap randomization
@ 2016-08-14 16:31             ` Pavel Machek 1
  0 siblings, 0 replies; 73+ messages in thread
From: Pavel Machek 1 @ 2016-08-14 16:31 UTC (permalink / raw)
  To: Nick Kralevich
  Cc: Jason Cooper, Roberts, William C, linux-mm, linux-kernel,
	kernel-hardening, akpm, keescook, gregkh, jeffv, salyzyn,
	dcashman

Hi!

> Inter-mmap randomization will decrease the predictability of later
> mmap() allocations, which should help make data structures harder to
> find in memory. In addition, this patch will also introduce unmapped
> gaps between pages, preventing linear overruns from one mapping to
> another another mapping. I am unable to quantify how much this will
> improve security, but it should be > 0.
> 
> I like Dave Hansen's suggestion that this functionality be limited to
> 64 bits, where concerns about running out of address space are
> essentially nil. I'd be supportive of this change if it was limited to
> 64 bits.

Yep, 64bits is easier. But notice that x86-64 machines do _not_ have
full 64bits of address space...

...and that if you use as much address space as possible, TLB flushes
will be slower because page table entries will need more cache.

So this will likely have performance implications even when
application does no syscalls :-(.

How do you plan to deal with huge memory pages support?

Best regards,
								Pavel
-- 
(english) http://www.livejournal.com/~pavelmachek
(cesky, pictures) http://atrey.karlin.mff.cuni.cz/~pavel/picture/horses/blog.html

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 20:29     ` Kirill A. Shutemov
@ 2016-07-26 20:35       ` Roberts, William C
  0 siblings, 0 replies; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 20:35 UTC (permalink / raw)
  To: Kirill A. Shutemov; +Cc: linux-mm



> -----Original Message-----
> From: owner-linux-mm@kvack.org [mailto:owner-linux-mm@kvack.org] On
> Behalf Of Kirill A. Shutemov
> Sent: Tuesday, July 26, 2016 1:29 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: linux-mm@kvack.org
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> On Tue, Jul 26, 2016 at 07:57:45PM +0000, Roberts, William C wrote:
> >
> >
> > > -----Original Message-----
> > > From: Kirill A. Shutemov [mailto:kirill@shutemov.name]
> > > Sent: Tuesday, July 26, 2016 12:26 PM
> > > To: Roberts, William C <william.c.roberts@intel.com>
> > > Cc: linux-mm@kvack.org
> > > Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> > >
> > > On Tue, Jul 26, 2016 at 11:27:11AM -0700, william.c.roberts@intel.com wrote:
> > > > From: William Roberts <william.c.roberts@intel.com>
> > > >
> > > > This patch introduces the ability randomize mmap locations where
> > > > the address is not requested, for instance when ld is allocating
> > > > pages for shared libraries. It chooses to randomize based on the
> > > > current personality for ASLR.
> > > >
> > > > Currently, allocations are done sequentially within unmapped
> > > > address space gaps. This may happen top down or bottom up depending on
> scheme.
> > > >
> > > > For instance these mmap calls produce contiguous mappings:
> > > > int size = getpagesize();
> > > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > > 0x40026000
> > > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > > 0x40027000
> > > >
> > > > Note no gap between.
> > > >
> > > > After patches:
> > > > int size = getpagesize();
> > > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > > 0x400b4000
> > > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > > 0x40055000
> > > >
> > > > Note gap between.
> > >
> > > And why is it good?
> >
> > Currently if you get an info leak and discover, say the address to
> > libX It's just a matter of adding/subtracting a fixed offset to find
> > libY. This will make rop a bit harder if you're trying to rop into a
> > different library than what was leaked.
> >
> > This also has a benefit outside of just libraries in that it
> > randomizes all the Mappings done via mmap from run to run. So you
> > don't get consistent, known offsets to things within the memory space.
> >
> > >
> > > > Using the test program mentioned here, that allocates fixed sized
> > > > blocks till exhaustion:
> > > > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.ht
> > > > ml, no difference was noticed in the number of allocations. Most
> > > > varied from run to run, but were always within a few allocations
> > > > of one another between patched and un-patched runs.
> > > >
> > > > Performance Measurements:
> > > > Using strace with -T option and filtering for mmap on the program
> > > > ls shows a slowdown of approximate 3.7%
> > >
> > > NAK.
> > >
> > > It's just too costly. And no obvious benefits.
> >
> > Sorry I used to have the explanation in the message, a carless edit
> > removed it.
> >
> > The cost does suck, perhaps something like personality + KConfig option....
> 
> Cost sucks even more than you've mentioned: you'll pay on every page fault, as
> find_vma() would have more vmas in the tree and vmacache will not be that
> effective. That's something people spend a lot time to tune.

Yes that is very true. Perhaps randomizing in some other manner is more
Prudent, yet another mmap() flag?

> 
> Taking this into account, I can't see any real-world application that would opt-in
> for this security feature.

Dynamic linker?

> 
> --
>  Kirill A. Shutemov
> 
> --
> To unsubscribe, send a message with 'unsubscribe linux-mm' in the body to
> majordomo@kvack.org.  For more info on Linux MM,
> see: http://www.linux-mm.org/ .
> Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 19:57   ` Roberts, William C
@ 2016-07-26 20:29     ` Kirill A. Shutemov
  2016-07-26 20:35       ` Roberts, William C
  0 siblings, 1 reply; 73+ messages in thread
From: Kirill A. Shutemov @ 2016-07-26 20:29 UTC (permalink / raw)
  To: Roberts, William C; +Cc: linux-mm

On Tue, Jul 26, 2016 at 07:57:45PM +0000, Roberts, William C wrote:
> 
> 
> > -----Original Message-----
> > From: Kirill A. Shutemov [mailto:kirill@shutemov.name]
> > Sent: Tuesday, July 26, 2016 12:26 PM
> > To: Roberts, William C <william.c.roberts@intel.com>
> > Cc: linux-mm@kvack.org
> > Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> > 
> > On Tue, Jul 26, 2016 at 11:27:11AM -0700, william.c.roberts@intel.com wrote:
> > > From: William Roberts <william.c.roberts@intel.com>
> > >
> > > This patch introduces the ability randomize mmap locations where the
> > > address is not requested, for instance when ld is allocating pages for
> > > shared libraries. It chooses to randomize based on the current
> > > personality for ASLR.
> > >
> > > Currently, allocations are done sequentially within unmapped address
> > > space gaps. This may happen top down or bottom up depending on scheme.
> > >
> > > For instance these mmap calls produce contiguous mappings:
> > > int size = getpagesize();
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40026000
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40027000
> > >
> > > Note no gap between.
> > >
> > > After patches:
> > > int size = getpagesize();
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x400b4000
> > > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> > 0x40055000
> > >
> > > Note gap between.
> > 
> > And why is it good?
> 
> Currently if you get an info leak and discover, say the address to libX
> It's just a matter of adding/subtracting a fixed offset to find libY. This
> will make rop a bit harder if you're trying to rop into a different library
> than what was leaked.
> 
> This also has a benefit outside of just libraries in that it randomizes all the
> Mappings done via mmap from run to run. So you don't get consistent,
> known offsets to things within the memory space. 
> 
> > 
> > > Using the test program mentioned here, that allocates fixed sized
> > > blocks till exhaustion:
> > > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> > > no difference was noticed in the number of allocations. Most varied
> > > from run to run, but were always within a few allocations of one
> > > another between patched and un-patched runs.
> > >
> > > Performance Measurements:
> > > Using strace with -T option and filtering for mmap on the program ls
> > > shows a slowdown of approximate 3.7%
> > 
> > NAK.
> > 
> > It's just too costly. And no obvious benefits.
> 
> Sorry I used to have the explanation in the message, a carless edit
> removed it.
> 
> The cost does suck, perhaps something like personality + KConfig option....

Cost sucks even more than you've mentioned: you'll pay on every page
fault, as find_vma() would have more vmas in the tree and vmacache will
not be that effective. That's something people spend a lot time to tune.

Taking this into account, I can't see any real-world application that
would opt-in for this security feature.

-- 
 Kirill A. Shutemov

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* RE: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 19:26 ` Kirill A. Shutemov
@ 2016-07-26 19:57   ` Roberts, William C
  2016-07-26 20:29     ` Kirill A. Shutemov
  0 siblings, 1 reply; 73+ messages in thread
From: Roberts, William C @ 2016-07-26 19:57 UTC (permalink / raw)
  To: Kirill A. Shutemov; +Cc: linux-mm



> -----Original Message-----
> From: Kirill A. Shutemov [mailto:kirill@shutemov.name]
> Sent: Tuesday, July 26, 2016 12:26 PM
> To: Roberts, William C <william.c.roberts@intel.com>
> Cc: linux-mm@kvack.org
> Subject: Re: [PATCH] [RFC] Introduce mmap randomization
> 
> On Tue, Jul 26, 2016 at 11:27:11AM -0700, william.c.roberts@intel.com wrote:
> > From: William Roberts <william.c.roberts@intel.com>
> >
> > This patch introduces the ability randomize mmap locations where the
> > address is not requested, for instance when ld is allocating pages for
> > shared libraries. It chooses to randomize based on the current
> > personality for ASLR.
> >
> > Currently, allocations are done sequentially within unmapped address
> > space gaps. This may happen top down or bottom up depending on scheme.
> >
> > For instance these mmap calls produce contiguous mappings:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40026000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40027000
> >
> > Note no gap between.
> >
> > After patches:
> > int size = getpagesize();
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x400b4000
> > mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) =
> 0x40055000
> >
> > Note gap between.
> 
> And why is it good?

Currently if you get an info leak and discover, say the address to libX
It's just a matter of adding/subtracting a fixed offset to find libY. This
will make rop a bit harder if you're trying to rop into a different library
than what was leaked.

This also has a benefit outside of just libraries in that it randomizes all the
Mappings done via mmap from run to run. So you don't get consistent,
known offsets to things within the memory space. 

> 
> > Using the test program mentioned here, that allocates fixed sized
> > blocks till exhaustion:
> > https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> > no difference was noticed in the number of allocations. Most varied
> > from run to run, but were always within a few allocations of one
> > another between patched and un-patched runs.
> >
> > Performance Measurements:
> > Using strace with -T option and filtering for mmap on the program ls
> > shows a slowdown of approximate 3.7%
> 
> NAK.
> 
> It's just too costly. And no obvious benefits.

Sorry I used to have the explanation in the message, a carless edit
removed it.

The cost does suck, perhaps something like personality + KConfig option....

> 
> --
>  Kirill A. Shutemov

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* Re: [PATCH] [RFC] Introduce mmap randomization
  2016-07-26 18:27 william.c.roberts
@ 2016-07-26 19:26 ` Kirill A. Shutemov
  2016-07-26 19:57   ` Roberts, William C
  0 siblings, 1 reply; 73+ messages in thread
From: Kirill A. Shutemov @ 2016-07-26 19:26 UTC (permalink / raw)
  To: william.c.roberts; +Cc: linux-mm

On Tue, Jul 26, 2016 at 11:27:11AM -0700, william.c.roberts@intel.com wrote:
> From: William Roberts <william.c.roberts@intel.com>
> 
> This patch introduces the ability randomize mmap locations where the
> address is not requested, for instance when ld is allocating pages for
> shared libraries. It chooses to randomize based on the current
> personality for ASLR.
> 
> Currently, allocations are done sequentially within unmapped address
> space gaps. This may happen top down or bottom up depending on scheme.
> 
> For instance these mmap calls produce contiguous mappings:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000
> 
> Note no gap between.
> 
> After patches:
> int size = getpagesize();
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
> mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000
> 
> Note gap between.

And why is it good?

> Using the test program mentioned here, that allocates fixed sized blocks
> till exhaustion: https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
> no difference was noticed in the number of allocations. Most varied from
> run to run, but were always within a few allocations of one another
> between patched and un-patched runs.
> 
> Performance Measurements:
> Using strace with -T option and filtering for mmap on the program
> ls shows a slowdown of approximate 3.7%

NAK.

It's just too costly. And no obvious benefits.

-- 
 Kirill A. Shutemov

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply	[flat|nested] 73+ messages in thread

* [PATCH] [RFC] Introduce mmap randomization
@ 2016-07-26 18:27 william.c.roberts
  2016-07-26 19:26 ` Kirill A. Shutemov
  0 siblings, 1 reply; 73+ messages in thread
From: william.c.roberts @ 2016-07-26 18:27 UTC (permalink / raw)
  To: linux-mm; +Cc: William Roberts

From: William Roberts <william.c.roberts@intel.com>

This patch introduces the ability randomize mmap locations where the
address is not requested, for instance when ld is allocating pages for
shared libraries. It chooses to randomize based on the current
personality for ASLR.

Currently, allocations are done sequentially within unmapped address
space gaps. This may happen top down or bottom up depending on scheme.

For instance these mmap calls produce contiguous mappings:
int size = getpagesize();
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40026000
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40027000

Note no gap between.

After patches:
int size = getpagesize();
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x400b4000
mmap(NULL, size, flags, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x40055000

Note gap between.

Using the test program mentioned here, that allocates fixed sized blocks
till exhaustion: https://www.linux-mips.org/archives/linux-mips/2011-05/msg00252.html,
no difference was noticed in the number of allocations. Most varied from
run to run, but were always within a few allocations of one another
between patched and un-patched runs.

Performance Measurements:
Using strace with -T option and filtering for mmap on the program
ls shows a slowdown of approximate 3.7%

Signed-off-by: William Roberts <william.c.roberts@intel.com>
---
 mm/mmap.c | 24 ++++++++++++++++++++++++
 1 file changed, 24 insertions(+)

diff --git a/mm/mmap.c b/mm/mmap.c
index de2c176..7891272 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -43,6 +43,7 @@
 #include <linux/userfaultfd_k.h>
 #include <linux/moduleparam.h>
 #include <linux/pkeys.h>
+#include <linux/random.h>
 
 #include <asm/uaccess.h>
 #include <asm/cacheflush.h>
@@ -1582,6 +1583,24 @@ unacct_error:
 	return error;
 }
 
+/*
+ * Generate a random address within a range. This differs from randomize_addr() by randomizing
+ * on len sized chunks. This helps prevent fragmentation of the virtual memory map.
+ */
+static unsigned long randomize_mmap(unsigned long start, unsigned long end, unsigned long len)
+{
+	unsigned long slots;
+
+	if ((current->personality & ADDR_NO_RANDOMIZE) || !randomize_va_space)
+		return 0;
+
+	slots = (end - start)/len;
+	if (!slots)
+		return 0;
+
+	return PAGE_ALIGN(start + ((get_random_long() % slots) * len));
+}
+
 unsigned long unmapped_area(struct vm_unmapped_area_info *info)
 {
 	/*
@@ -1676,6 +1695,8 @@ found:
 	if (gap_start < info->low_limit)
 		gap_start = info->low_limit;
 
+	gap_start = randomize_mmap(gap_start, gap_end, length) ? : gap_start;
+
 	/* Adjust gap address to the desired alignment */
 	gap_start += (info->align_offset - gap_start) & info->align_mask;
 
@@ -1775,6 +1796,9 @@ found:
 found_highest:
 	/* Compute highest gap address at the desired alignment */
 	gap_end -= info->length;
+
+	gap_end = randomize_mmap(gap_start, gap_end, length) ? : gap_end;
+
 	gap_end -= (gap_end - info->align_offset) & info->align_mask;
 
 	VM_BUG_ON(gap_end < info->low_limit);
-- 
1.9.1

--
To unsubscribe, send a message with 'unsubscribe linux-mm' in
the body to majordomo@kvack.org.  For more info on Linux MM,
see: http://www.linux-mm.org/ .
Don't email: <a href=mailto:"dont@kvack.org"> email@kvack.org </a>

^ permalink raw reply related	[flat|nested] 73+ messages in thread

end of thread, other threads:[~2016-08-14 16:31 UTC | newest]

Thread overview: 73+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2016-07-26 18:22 [PATCH] [RFC] Introduce mmap randomization william.c.roberts
2016-07-26 18:22 ` [kernel-hardening] " william.c.roberts
2016-07-26 18:22 ` william.c.roberts
2016-07-26 18:22   ` [kernel-hardening] " william.c.roberts
2016-07-26 20:03   ` Jason Cooper
2016-07-26 20:03     ` [kernel-hardening] " Jason Cooper
2016-07-26 20:11     ` Roberts, William C
2016-07-26 20:11       ` [kernel-hardening] " Roberts, William C
2016-07-26 20:13     ` Roberts, William C
2016-07-26 20:13       ` [kernel-hardening] " Roberts, William C
2016-07-26 20:13       ` Roberts, William C
2016-07-26 20:59       ` Jason Cooper
2016-07-26 20:59         ` [kernel-hardening] " Jason Cooper
2016-07-26 20:59         ` Jason Cooper
2016-07-26 21:06         ` Roberts, William C
2016-07-26 21:06           ` [kernel-hardening] " Roberts, William C
2016-07-26 21:06           ` Roberts, William C
2016-07-26 21:44           ` Jason Cooper
2016-07-26 21:44             ` [kernel-hardening] " Jason Cooper
2016-07-26 21:44             ` Jason Cooper
2016-07-26 23:51             ` Dave Hansen
2016-07-26 23:51               ` [kernel-hardening] " Dave Hansen
2016-07-26 23:51               ` Dave Hansen
2016-08-02 17:17             ` Roberts, William C
2016-08-02 17:17               ` [kernel-hardening] " Roberts, William C
2016-08-02 17:17               ` Roberts, William C
2016-08-03 18:19               ` Roberts, William C
2016-08-03 18:19                 ` [kernel-hardening] " Roberts, William C
2016-08-03 18:19                 ` Roberts, William C
2016-08-02 17:15           ` Roberts, William C
2016-08-02 17:15             ` [kernel-hardening] " Roberts, William C
2016-08-02 17:15             ` Roberts, William C
2016-07-27 16:59         ` Nick Kralevich
2016-07-27 16:59           ` [kernel-hardening] " Nick Kralevich
2016-07-27 16:59           ` Nick Kralevich
2016-07-28 21:07           ` Jason Cooper
2016-07-28 21:07             ` [kernel-hardening] " Jason Cooper
2016-07-28 21:07             ` Jason Cooper
2016-07-29 10:10             ` [kernel-hardening] " Daniel Micay
2016-07-31 22:24               ` Jason Cooper
2016-07-31 22:24                 ` Jason Cooper
2016-08-01  0:24                 ` Daniel Micay
2016-08-02 16:57           ` Roberts, William C
2016-08-02 16:57             ` [kernel-hardening] " Roberts, William C
2016-08-02 16:57             ` Roberts, William C
2016-08-02 17:02             ` Nick Kralevich
2016-08-02 17:02               ` [kernel-hardening] " Nick Kralevich
2016-08-02 17:02               ` Nick Kralevich
2016-08-14 16:31           ` Pavel Machek 1
2016-08-14 16:31             ` [kernel-hardening] " Pavel Machek 1
2016-08-14 16:31             ` Pavel Machek 1
2016-07-26 20:12   ` [kernel-hardening] " Rik van Riel
2016-07-26 20:17     ` Roberts, William C
2016-07-26 20:17       ` Roberts, William C
2016-07-26 20:17       ` Roberts, William C
2016-07-26 20:41   ` Nick Kralevich
2016-07-26 20:41     ` [kernel-hardening] " Nick Kralevich
2016-07-26 21:02     ` Roberts, William C
2016-07-26 21:02       ` [kernel-hardening] " Roberts, William C
2016-07-26 21:11       ` Nick Kralevich
2016-07-26 21:11         ` [kernel-hardening] " Nick Kralevich
2016-07-26 21:11         ` Nick Kralevich
2016-08-14 16:22   ` Pavel Machek
2016-08-14 16:22     ` [kernel-hardening] " Pavel Machek
2016-08-04 16:53 ` [kernel-hardening] " Daniel Micay
2016-08-04 16:55   ` Roberts, William C
2016-08-04 16:55     ` Roberts, William C
2016-08-04 17:10     ` Daniel Micay
2016-07-26 18:27 william.c.roberts
2016-07-26 19:26 ` Kirill A. Shutemov
2016-07-26 19:57   ` Roberts, William C
2016-07-26 20:29     ` Kirill A. Shutemov
2016-07-26 20:35       ` Roberts, William C

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.