linux-kernel.vger.kernel.org archive mirror
* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
       [not found] <5489E6D2.2060200@upv.es>
@ 2014-12-11 20:12 ` Hector Marco
  2014-12-11 22:11   ` Kees Cook
  0 siblings, 1 reply; 23+ messages in thread
From: Hector Marco @ 2014-12-11 20:12 UTC (permalink / raw)
  To: linux-kernel


Hello,

The following is a summary of the ASLR PIE implementations, intended to
help decide whether it is better to fix x86*, arm* and MIPS without
adding randomize_va_space = 3, or to move PowerPC and s390 to
randomize_va_space = 3.


Before any randomization, as of commit f057eac (April 2005), the code in
fs/binfmt_elf.c was:

 } else if (loc->elf_ex.e_type == ET_DYN) {
         /* Try and get dynamic programs out of the way of the
          * default mmap base, as well as whatever program they
          * might try to exec.  This is because the brk will
          * follow the loader, and is not movable.  */
         load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
 }

It seems that they tried to get dynamic programs out of the way of the
default mmap base. I am not sure why.

The first architecture to implement PIE support was x86. The change
introduced by commit 60bfba7 (Jul 2007) was:

  } else if (loc->elf_ex.e_type == ET_DYN) {
          /* Try and get dynamic programs out of the way of the
           * default mmap base, as well as whatever program they
           * might try to exec.  This is because the brk will
           * follow the loader, and is not movable.  */
+#ifdef CONFIG_X86
+           load_bias = 0;
+#else
           load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
+#endif
            }

After that, the code was removed (4 days later, commit d4e3cc3) and
reintroduced in Jan 2008 (commit cc503c1). Since that commit, the x86*
architectures have been vulnerable to the offset2lib attack.

Note that x86* used "load_bias = 0;", which causes PIE executables to
be loaded at the mmap base.
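
As a side note, a minimal user-space sketch (a hypothetical test
program, not from this thread) of what that implies: when the PIE
executable is loaded at the mmap base, the distance between an
executable symbol and a libc symbol is constant across runs, so leaking
one address reveals the other.

  /* Hypothetical illustration of offset2lib; build with:
   *   gcc -fPIE -pie -o delta delta.c
   * On kernels that load the PIE executable at the mmap base, "delta"
   * is typically identical on every run even though both addresses
   * change (exact behavior depends on the toolchain). */
  #include <stdio.h>
  #include <stdlib.h>

  int main(void)
  {
          printf("main:   %p\n", (void *)&main);
          printf("system: %p\n", (void *)&system);
          printf("delta:  0x%lx\n",
                 (unsigned long)&system - (unsigned long)&main);
          return 0;
  }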

Around one year later, in Feb 2009, PowerPC added support for PIE
executables, but did not follow the x86* approach. Instead, PowerPC
redefined ELF_ET_DYN_BASE. The change was:

-#define ELF_ET_DYN_BASE (0x20000000)
+#define ELF_ET_DYN_BASE (randomize_et_dyn(0x20000000))

The function "randomize_et_dyn" add a random value to the 0x20000000
which is not vulnerable to the offset2lib weakness. Note that in this
point two different ways of PIE implementation are coexisting.
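
For reference, a rough sketch of that "PowerPC approach" (simplified,
not the literal kernel code): add a random, page-aligned offset on top
of the fixed base when randomization is enabled, so the PIE executable
no longer sits at a fixed distance from the libraries.

  /* Simplified sketch of randomize_et_dyn(); not the exact kernel
   * implementation. */
  unsigned long randomize_et_dyn(unsigned long base)
  {
          unsigned long ret;

          if (!(current->flags & PF_RANDOMIZE))
                  return base;

          /* add a brk_rnd()-style random offset and page-align it */
          ret = PAGE_ALIGN(base + brk_rnd());

          return (ret < base) ? base : ret;
  }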


In Aug 2008, ARM started to support PIE (commit e4eab08):

-#if defined(CONFIG_X86)
+#if defined(CONFIG_X86) || defined(CONFIG_ARM)
           load_bias = 0;
#else
           load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
#endif
 }


They only added "|| defined(CONFIG_ARM)", following the x86* PIE
support approach, which consists of loading PIE executables in the
mmap base area.


After that, in Jan 2011, s390 started to support PIE (commit d2c9dfc).
They decided to follow the "PowerPC" PIE support approach by redefining:

-#define ELF_ET_DYN_BASE         (STACK_TOP / 3 * 2)
+#define ELF_ET_DYN_BASE         (randomize_et_dyn(STACK_TOP / 3 * 2))


Later, in Nov 2012, commit e39f560 changed:

-#if defined(CONFIG_X86) || defined(CONFIG_ARM)
+#ifdef CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE

I think this was done to avoid a long list of defined() checks, because
they must have expected more architectures to be added in the future.
Along with this change, the x86*, ARM and MIPS architectures set this
option to "y" in their respective Kconfig files.

The same day as the previous commit, MIPS started to support PIE
executables by setting ARCH_BINFMT_ELF_RANDOMIZE_PIE to "y" in its
Kconfig (commit e26d196). Again, MIPS followed the x86* and ARM
approach.


Finally, in Nov 2014, ARM64 moved from the "PowerPC" approach to the
x86 one (commit 9298040).

-#define ELF_ET_DYN_BASE	(randomize_et_dyn(2 * TASK_SIZE_64 / 3))
+#define ELF_ET_DYN_BASE	(2 * TASK_SIZE_64 / 3)

It also set ARCH_BINFMT_ELF_RANDOMIZE_PIE to "y", which causes the PIE
executable to be loaded in the mmap base area.


I don't know whether there is any reason to put the PIE executable at
the mmap base address, but this was the first and most widely adopted
approach.

Now, knowing about the offset2lib weakness, it is obviously better to
use a different memory area.

From my point of view, using a "define name" that is actually an
architecture-dependent random value does not help the readability of
the code. I think it is better to implement PIE support by adding a new
field to the mm_struct, filled very early in the function
"arch_pick_mmap_layout", which sets up the VM layout. This function is
architecture dependent and its comment says:

/*
 * This function, called very early during the creation of a new
 * process VM image, sets up which VM layout function to use:
 */
void arch_pick_mmap_layout(struct mm_struct *mm)


At this point the stack gap is reserved and the mmap_base value is
calculated. I think this is the correct place to calculate where the
PIE executable will be loaded, rather than relying on a "define" which
obscures the actual behavior (at first glance it does not look like a
random value). Maybe this was the reason why most architectures
followed the x86* approach to support PIE. But now, with the offset2lib
weakness, this approach needs to be changed. From my point of view,
moving to the "PowerPC" approach is not the best solution. I've taken a
look at the PaX code and it implements a solution similar to the one I
have proposed.
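
To make the idea concrete, here is a minimal sketch of what I mean; the
field name (exec_base) and the helper (exec_rnd_base) are made up for
illustration, not actual kernel code:

  /* include/linux/mm_types.h -- hypothetical new field */
  struct mm_struct {
          ...
          unsigned long mmap_base;   /* base of mmap area */
          unsigned long exec_base;   /* NEW: base for ET_DYN executables */
          ...
  };

  /* arch_pick_mmap_layout(), per architecture */
  void arch_pick_mmap_layout(struct mm_struct *mm)
  {
          ...
          mm->mmap_base = mmap_base();
          /* independently randomized region for the PIE executable,
           * away from mmap_base, so leaking one does not expose the other */
          mm->exec_base = exec_rnd_base();
          ...
  }

  /* fs/binfmt_elf.c, load_elf_binary() */
  } else if (loc->elf_ex.e_type == ET_DYN) {
          load_bias = ELF_PAGESTART(current->mm->exec_base - vaddr);
  }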

Anyway, if you still think that the best approach is the "PowerPC" one,
then I can change the patch to fix x86*, ARM* and MIPS following that
approach.


Best regards,
Hector Marco.
http://hmarco.org




* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-11 20:12 ` [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack Hector Marco
@ 2014-12-11 22:11   ` Kees Cook
  2014-12-12 16:32     ` Hector Marco
  2015-01-07 17:26     ` Hector Marco Gisbert
  0 siblings, 2 replies; 23+ messages in thread
From: Kees Cook @ 2014-12-11 22:11 UTC (permalink / raw)
  To: Hector Marco
  Cc: linux-kernel, Andy Lutomirski, David Daney, Jiri Kosina,
	Arun Chandran, Hanno Böck, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, Russell King - ARM Linux,
	Catalin Marinas, Will Deacon, Oleg Nesterov, Heiko Carstens,
	Martin Schwidefsky, Anton Blanchard, Benjamin Herrenschmidt,
	Christian Borntraeger

Hi,

On Thu, Dec 11, 2014 at 09:12:29PM +0100, Hector Marco wrote:
> 
> Hello,
> 
> The following is an ASLR PIE implementation summary in order to help to
> decide whether it is better to fix x86*, arm*, and MIPS without adding
> randomize_va_space = 3 or move the PowerPC and the s390 to
> randomize_va_space = 3.

If we can fix x86, arm, and MIPS without introducing randomize_va_space=3,
I would prefer it.

> Before any randomization, commit: f057eac (April 2005) the code in
> fs/binfmt_elf.c was:
> 
>  } else if (loc->elf_ex.e_type == ET_DYN) {
>          /* Try and get dynamic programs out of the way of the
>           * default mmap base, as well as whatever program they
>           * might try to exec.  This is because the brk will
>           * follow the loader, and is not movable.  */
>          load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
>  }
> 
> It seems that they tried to get out dynamic programs of the way
> of the default mmap base. I am not sure why.
> 
> The first architecture to implement PIE support was x86. To achieve
> this, the code introduced by the commit 60bfba7 (Jul 2007) was:
> 
>   } else if (loc->elf_ex.e_type == ET_DYN) {
>           /* Try and get dynamic programs out of the way of the
>            * default mmap base, as well as whatever program they
>            * might try to exec.  This is because the brk will
>            * follow the loader, and is not movable.  */
> +#ifdef CONFIG_X86
> +           load_bias = 0;
> +#else
>            load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
> +#endif
>             }
> 
> After that, he code was removed (4 days later commit: d4e3cc3) and
> reintroduced (commit: cc503c1) Jan 2008. From this commit, the x86*
> are vulnerable to offset2lib attack.
> 
> Note that they (x86*) used "load_bias = 0;" which cause that PIE
> executable be loaded at mmap base.
> 
> Around one year later, in Feb 2009, PowerPC provided support for PIE
> executables but not following the X86* approach. PowerPC redefined
> the ELF_ET_DYN_BASE. The change was:
> 
> -#define ELF_ET_DYN_BASE (0x20000000)
> +#define ELF_ET_DYN_BASE (randomize_et_dyn(0x20000000))
> 
> The function "randomize_et_dyn" add a random value to the 0x20000000
> which is not vulnerable to the offset2lib weakness. Note that in this
> point two different ways of PIE implementation are coexisting.
> 
> 
> Later, in Aug 2008, ARM started to support PIE (commit: e4eab08):
> 
> -#if defined(CONFIG_X86)
> +#if defined(CONFIG_X86) || defined(CONFIG_ARM)
>            load_bias = 0;
> #else
>            load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
> #endif
>  }
> 
> 
> They only add "|| defined(CONFIG_ARM)". They followed the x86* PIE
> support approach which consist on load the PIE executables
> in the mmap base area.
> 
> 
> After that, in Jan 2011, s390 started to support PIE (commit: d2c9dfc).
> They decided to follow the "PowerPC PIE support approach" by redefining:
> 
> -#define ELF_ET_DYN_BASE         (STACK_TOP / 3 * 2)
> +#define ELF_ET_DYN_BASE         (randomize_et_dyn(STACK_TOP / 3 * 2))
> 
> 
> Later, in Nov 2012, the commit e39f560 changed:
> 
> -#if defined(CONFIG_X86) || defined(CONFIG_ARM)
> +#ifdef CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE
> 
> I think that this was made to avoid a long defined because they must
> have thought that more architectures will be added in the future.
> Join this change the x86*, ARM and MIPS architectures set to "y" this
> value in their respective Kconfig files.
> 
> The same day of the previous commit, MIPS started to support PIE
> executables by setting "y" to the ARCH_BINFMT_ELF_RANDOMIZE_PIE in their
> Kconfig. The commit is e26d196. Again MIPS followed the x86* and ARM
> approaches.
> 
> 
> Finally, in Nov 2014, following this approach ARM64 moved from "PowerPC"
> approach to x86 one. The commit is 9298040.
> 
> -#define ELF_ET_DYN_BASE	(randomize_et_dyn(2 * TASK_SIZE_64 / 3))
> +#define ELF_ET_DYN_BASE	(2 * TASK_SIZE_64 / 3)
> 
> And set to "y" the "ARCH_BINFMT_ELF_RANDOMIZE_PIE" which cause to load
> the PIE application in the mmap base area.
> 
> 
> I don't know if exists any reason to put the PIE executable in the mmap
> base address or not, but this was the first and most adopted approach.
> 
> Now, by knowing the presence of the offset2lib weakness obviously is
> better to use a different memory area.
> 
> >From my point of view, to use a "define name" which is a random value
> depending on the architecture does not help much to read the code. I
> think is better to implement the PIE support by adding a new value to
> the mm_struct which is filled very early in the function
> "arch_pick_mmap_layout" which sets up the VM layout. This file is
> architecture dependent and the function says:
> 
> /*
>  * This function, called very early during the creation of a new
>  * process VM image, sets up which VM layout function to use:
>  */
> void arch_pick_mmap_layout(struct mm_struct *mm)
> 
> 
> In this point the GAP stack is reserved and the mmap_base value is
> calculated. I think this is the correct place to calculate where the PIE
> executable will be loaded rather than rely on a "define" which obscure
> the actual behavior (at first glance does not seem a random value).
> Maybe this was the reason why most architectures followed the x86*
> approach to support PIE. But now, with the offset2lib weakness this
> approach need to be changed. From my point of view, moving to "PowerPC"
> approach is not the best solution. I've taken a look to PaX code and
> they implement a similar solution that I have been proposed.
> 
> Anyway, if you are still thinking that the best approach is the
> "PowerPC" one, then I could change the patch to fix the x86*, ARM* and
> MIPS following this approach.

Yeah, I think we should get rid of ARCH_BINFMT_ELF_RANDOMIZE_PIE and just
fix this to do independent executable base randomization.

While we're at it, can we fix VDSO randomization as well? :)

-Kees

-- 
Kees Cook
Chrome OS Security


* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-11 22:11   ` Kees Cook
@ 2014-12-12 16:32     ` Hector Marco
  2014-12-12 17:17       ` Andy Lutomirski
  2015-01-07 17:26     ` Hector Marco Gisbert
  1 sibling, 1 reply; 23+ messages in thread
From: Hector Marco @ 2014-12-12 16:32 UTC (permalink / raw)
  To: Kees Cook
  Cc: linux-kernel, Andy Lutomirski, David Daney, Jiri Kosina,
	Arun Chandran, Hanno Böck, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, Russell King - ARM Linux,
	Catalin Marinas, Will Deacon, Oleg Nesterov, Heiko Carstens,
	Martin Schwidefsky, Anton Blanchard, Benjamin Herrenschmidt,
	Christian Borntraeger, Reno Robert, Ismael Ripoll

Hello,

I agree. I don't think a new randomization mode will be needed; just
fix the current randomize_va_space=2. Said another way: fixing
offset2lib will not break any current program, so there is no need to
add additional configuration options. Maybe we should wait for some
input from the list (maybe we are missing something).


Regarding the VDSO, it is definitely not randomized enough on 64 bits.
Brute force attacks would be pretty fast even from the network.
I have identified the bug and it seems quite easy to fix.

On 32-bit systems this is not an issue because the VDSO is mapped in
the mmap area. In order to fix the VDSO on 64-bit, the following
considerations should be discussed:


Performance:
    It seems (reading the kernel comments) that the random allocation
    algorithm tries to place the VDSO in the same PTE as the stack.
    But since the permissions of the stack and the VDSO are different,
    it seems we are getting exactly the opposite.

    In any case, the VDSO should be properly randomized because it
    contains enough useful exploitable material.

    I think that a possible solution is to follow the x86_32 approach,
    which consists of mapping the VDSO in the mmap area.

    Would it be better to fix the VDSO in a different patch? I can send
    a patch which fixes the VDSO on 64-bit.



Regards,
Hector Marco.


* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-12 16:32     ` Hector Marco
@ 2014-12-12 17:17       ` Andy Lutomirski
  2014-12-19 22:04         ` Hector Marco
  0 siblings, 1 reply; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-12 17:17 UTC (permalink / raw)
  To: Hector Marco
  Cc: Catalin Marinas, Heiko Carstens, Oleg Nesterov, Ingo Molnar,
	Anton Blanchard, Jiri Kosina, Russell King - ARM Linux,
	H. Peter Anvin, David Daney, Andrew Morton, Arun Chandran,
	linux-kernel, Martin Schwidefsky, Ismael Ripoll,
	Christian Borntraeger, Thomas Gleixner, Hanno Böck,
	Will Deacon, Benjamin Herrenschmidt, Kees Cook, Reno Robert

On Dec 12, 2014 8:33 AM, "Hector Marco" <hecmargi@upv.es> wrote:
>
> Hello,
>
> I agree. I don't think a new randomization mode will be needed, just fix
> the current randomize_va_space=2. Said other way: fixing the offset2lib
> will not break any current program and so, no need to add additional
> configuration options. May be we shall wait for some inputs
> from the list (may be we are missing something).
>
>
> Regarding to VDSO, definitively, is not randomized enough in 64bits.
> Brute force attacks would be pretty fast even from the network.
> I have identified the bug and seems quite easy to fix it.
>
> On 32bit systems, this is not a issue because it is mapped in the
> mmap area. In order to fix the VDSO on 64bit, the following
> considerations shall
> be discussed:
>
>
> Performance:
>     It seems (reading the kernel comments) that the random allocation
>     algorithm tries to place the VDSO in the same PTE than the stack.

The comment is wrong.  It means PTE table.

>     But since the permissions of the stack and the VDSO are different
>     it seems that are getting right the opposite.

Permissions have page granularity, so this isn't a problem.

>
>     Effectively VDSO shall be correctly randomized because it contains
>     enough useful exploitable stuff.
>
>     I think that the possible solution is follow the x86_32 approach
>     which consist on map the VDSO in the mmap area.
>
>     It would be better fix VDSO in a different patch ? I can send a
>     patch which fixes the VDSO on 64 bit.
>

What are the considerations for 64-bit memory layout?  I haven't
touched it because I don't want to break userspace, but I don't know
what to be careful about.

--Andy

>
>
> Regards,
> Hector Marco.


* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-12 17:17       ` Andy Lutomirski
@ 2014-12-19 22:04         ` Hector Marco
  2014-12-19 22:11           ` Andy Lutomirski
  0 siblings, 1 reply; 23+ messages in thread
From: Hector Marco @ 2014-12-19 22:04 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Catalin Marinas, Heiko Carstens, Oleg Nesterov, Ingo Molnar,
	Anton Blanchard, Jiri Kosina, Russell King - ARM Linux,
	H. Peter Anvin, David Daney, Andrew Morton, Arun Chandran,
	linux-kernel, Martin Schwidefsky, Ismael Ripoll,
	Christian Borntraeger, Thomas Gleixner, Hanno Böck,
	Will Deacon, Benjamin Herrenschmidt, Kees Cook, Reno Robert



On 12/12/14 at 18:17, Andy Lutomirski wrote:
> On Dec 12, 2014 8:33 AM, "Hector Marco" <hecmargi@upv.es> wrote:
>>
>> Hello,
>>
>> I agree. I don't think a new randomization mode will be needed, just fix
>> the current randomize_va_space=2. Said other way: fixing the offset2lib
>> will not break any current program and so, no need to add additional
>> configuration options. May be we shall wait for some inputs
>> from the list (may be we are missing something).
>>
>>
>> Regarding to VDSO, definitively, is not randomized enough in 64bits.
>> Brute force attacks would be pretty fast even from the network.
>> I have identified the bug and seems quite easy to fix it.
>>
>> On 32bit systems, this is not a issue because it is mapped in the
>> mmap area. In order to fix the VDSO on 64bit, the following
>> considerations shall
>> be discussed:
>>
>>
>> Performance:
>>      It seems (reading the kernel comments) that the random allocation
>>      algorithm tries to place the VDSO in the same PTE than the stack.
>
> The comment is wrong.  It means PTE table.
>
>>      But since the permissions of the stack and the VDSO are different
>>      it seems that are getting right the opposite.
>
> Permissions have page granularity, so this isn't a problem.
>
>>
>>      Effectively VDSO shall be correctly randomized because it contains
>>      enough useful exploitable stuff.
>>
>>      I think that the possible solution is follow the x86_32 approach
>>      which consist on map the VDSO in the mmap area.
>>
>>      It would be better fix VDSO in a different patch ? I can send a
>>      patch which fixes the VDSO on 64 bit.
>>
>
> What are the considerations for 64-bit memory layout?  I haven't
> touched it because I don't want to break userspace, but I don't know
> what to be careful about.
>
> --Andy

I don't think that mapping the VDSO in the mmap area breaks
userspace. Actually, this is already happening with the current
implementation. You can see it by running:

setarch x86_64 -R cat /proc/self/maps


Does this break userspace in some way?


Regarding the solution to offset2lib, it seems that placing the
executable in a different memory region could increase the number of
pages needed for the page tables (because the address space is more
spread out). We should consider this before fixing the current
implementation (randomize_va_space=2).

I guess that the current implementation places the PIE executable in
the mmap base area together with the libraries in an attempt to reduce
the size of the page tables.

Therefore, I can fix the current implementation (keeping
randomize_va_space=2) by moving the PIE executable from the mmap base
area to a separate one for x86*, ARM* and MIPS (as s390 and PowerPC
do). But we should agree that this increase in the page tables is not
an issue. Otherwise, randomize_va_space=3 should be considered.
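
As a rough back-of-the-envelope estimate (my own arithmetic, assuming
x86_64 4-level paging with 4 KiB pages): a last-level page-table page
is 4 KiB and maps 512 * 4 KiB = 2 MiB of virtual address space, so
giving the PIE executable its own region costs at most one extra PTE
page, plus in the worst case one extra PMD and one extra PUD page,
i.e. roughly 4-12 KiB of page-table memory per process.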


Hector Marco.

>
>>
>>
>> Regards,
>> Hector Marco.


* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-19 22:04         ` Hector Marco
@ 2014-12-19 22:11           ` Andy Lutomirski
  2014-12-19 22:19             ` Cyrill Gorcunov
  2014-12-19 23:53             ` Andy Lutomirski
  0 siblings, 2 replies; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-19 22:11 UTC (permalink / raw)
  To: Hector Marco, Cyrill Gorcunov, Pavel Emelyanov
  Cc: Catalin Marinas, Heiko Carstens, Oleg Nesterov, Ingo Molnar,
	Anton Blanchard, Jiri Kosina, Russell King - ARM Linux,
	H. Peter Anvin, David Daney, Andrew Morton, Arun Chandran,
	linux-kernel, Martin Schwidefsky, Ismael Ripoll,
	Christian Borntraeger, Thomas Gleixner, Hanno Böck,
	Will Deacon, Benjamin Herrenschmidt, Kees Cook, Reno Robert

On Fri, Dec 19, 2014 at 2:04 PM, Hector Marco <hecmargi@upv.es> wrote:
>
>
> El 12/12/14 a las 18:17, Andy Lutomirski escribió:
>
>> On Dec 12, 2014 8:33 AM, "Hector Marco" <hecmargi@upv.es> wrote:
>>>
>>>
>>> Hello,
>>>
>>> I agree. I don't think a new randomization mode will be needed, just fix
>>> the current randomize_va_space=2. Said other way: fixing the offset2lib
>>> will not break any current program and so, no need to add additional
>>> configuration options. May be we shall wait for some inputs
>>> from the list (may be we are missing something).
>>>
>>>
>>> Regarding to VDSO, definitively, is not randomized enough in 64bits.
>>> Brute force attacks would be pretty fast even from the network.
>>> I have identified the bug and seems quite easy to fix it.
>>>
>>> On 32bit systems, this is not a issue because it is mapped in the
>>> mmap area. In order to fix the VDSO on 64bit, the following
>>> considerations shall
>>> be discussed:
>>>
>>>
>>> Performance:
>>>      It seems (reading the kernel comments) that the random allocation
>>>      algorithm tries to place the VDSO in the same PTE than the stack.
>>
>>
>> The comment is wrong.  It means PTE table.
>>
>>>      But since the permissions of the stack and the VDSO are different
>>>      it seems that are getting right the opposite.
>>
>>
>> Permissions have page granularity, so this isn't a problem.
>>
>>>
>>>      Effectively VDSO shall be correctly randomized because it contains
>>>      enough useful exploitable stuff.
>>>
>>>      I think that the possible solution is follow the x86_32 approach
>>>      which consist on map the VDSO in the mmap area.
>>>
>>>      It would be better fix VDSO in a different patch ? I can send a
>>>      patch which fixes the VDSO on 64 bit.
>>>
>>
>> What are the considerations for 64-bit memory layout?  I haven't
>> touched it because I don't want to break userspace, but I don't know
>> what to be careful about.
>>
>> --Andy
>
>
> I don't think that mapping the VDSO in the mmap area breaks the
> userspace. Actually, this is already happening with the current
> implementation. You can see it by running:
>
> setarch x86_64 -R cat /proc/self/maps
>

Hmm.  So apparently we even switch which side of the stack the vdso is
on depending on the randomization setting.

>
> Do this break the userspace in some way ?
>
>
> Regarding the solution to the offset2lib it seems that placing the
> executable in a different memory region area could increase the
> number of pages for the pages table (because it is more spread).
> We should consider this before fixing the current implementation
> (randomize_va_space=2).
>
> I guess that the current implementation places the PIE executable in
> the mmap base area jointly with the libraries in an attempt to reduce
> the size of the page table.
>
> Therefore, I can fix the current implementation (maintaining the
> randomize_va_space=2) by moving the PIE executable from the mmap base
> area to another one for x86*, ARM* and MIPS (as s390 and PowerPC do).
> But we shall agree that this increment in the page table is not a
> issue. Otherwise, the randomize_va_space=3 shall be considered.

Wrt the vdso itself, though, there is an extra consideration: CRIU.  I
*think* that the CRIU vdso proxying scheme will work even if the vdso
changes sizes and is adjacent to other mappings.  Cyrill and/or Pavel,
am I right?

I'm not fundamentally opposed to mapping the vdso just like any other
shared library.  I still think that we should have an extra-strong
randomization mode in which all the libraries are randomized wrt each
other, though.  For many applications, the extra page table cost will
be negligible.

--Andy

>
>
> Hector Marco.
>
>>
>>>
>>>
>>> Regards,
>>> Hector Marco.



-- 
Andy Lutomirski
AMA Capital Management, LLC


* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-19 22:11           ` Andy Lutomirski
@ 2014-12-19 22:19             ` Cyrill Gorcunov
  2014-12-19 23:53             ` Andy Lutomirski
  1 sibling, 0 replies; 23+ messages in thread
From: Cyrill Gorcunov @ 2014-12-19 22:19 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Hector Marco, Pavel Emelyanov, Catalin Marinas, Heiko Carstens,
	Oleg Nesterov, Ingo Molnar, Anton Blanchard, Jiri Kosina,
	Russell King - ARM Linux, H. Peter Anvin, David Daney,
	Andrew Morton, Arun Chandran, linux-kernel, Martin Schwidefsky,
	Ismael Ripoll, Christian Borntraeger, Thomas Gleixner,
	Hanno Böck, Will Deacon, Benjamin Herrenschmidt, Kees Cook,
	Reno Robert

On Fri, Dec 19, 2014 at 02:11:37PM -0800, Andy Lutomirski wrote:
...
> >
> > Therefore, I can fix the current implementation (maintaining the
> > randomize_va_space=2) by moving the PIE executable from the mmap base
> > area to another one for x86*, ARM* and MIPS (as s390 and PowerPC do).
> > But we shall agree that this increment in the page table is not a
> > issue. Otherwise, the randomize_va_space=3 shall be considered.
> 
> Wrt the vdso itself, though, there is an extra consideration: CRIU.  I
> *think* that the CRIU vdso proxying scheme will work even if the vdso
> changes sizes and is adjacent to other mappings.  Cyrill and/or Pavel,
> am I right?

At least that was the idea. I've been testing an old vdso from RHEL 5
proxified to the 3.x series, where the vvar segment is present; it
worked well.

> I'm not fundamentally opposed to mapping the vdso just like any other
> shared library.  I still think that we should have an extra-strong
> randomization mode in which all the libraries are randomized wrt each
> other, though.  For many applications, the extra page table cost will
> be negligible.


* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-19 22:11           ` Andy Lutomirski
  2014-12-19 22:19             ` Cyrill Gorcunov
@ 2014-12-19 23:53             ` Andy Lutomirski
  2014-12-20  0:29               ` [PATCH] x86_64, vdso: Fix the vdso address randomization algorithm Andy Lutomirski
                                 ` (2 more replies)
  1 sibling, 3 replies; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-19 23:53 UTC (permalink / raw)
  To: Hector Marco, Cyrill Gorcunov, Pavel Emelyanov
  Cc: Catalin Marinas, Heiko Carstens, Oleg Nesterov, Ingo Molnar,
	Anton Blanchard, Jiri Kosina, Russell King - ARM Linux,
	H. Peter Anvin, David Daney, Andrew Morton, Arun Chandran,
	linux-kernel, Martin Schwidefsky, Ismael Ripoll,
	Christian Borntraeger, Thomas Gleixner, Hanno Böck,
	Will Deacon, Benjamin Herrenschmidt, Kees Cook, Reno Robert

On Fri, Dec 19, 2014 at 2:11 PM, Andy Lutomirski <luto@amacapital.net> wrote:
> On Fri, Dec 19, 2014 at 2:04 PM, Hector Marco <hecmargi@upv.es> wrote:
>>
>>
>> El 12/12/14 a las 18:17, Andy Lutomirski escribió:
>>
>>> On Dec 12, 2014 8:33 AM, "Hector Marco" <hecmargi@upv.es> wrote:
>>>>
>>>>
>>>> Hello,
>>>>
>>>> I agree. I don't think a new randomization mode will be needed, just fix
>>>> the current randomize_va_space=2. Said other way: fixing the offset2lib
>>>> will not break any current program and so, no need to add additional
>>>> configuration options. May be we shall wait for some inputs
>>>> from the list (may be we are missing something).
>>>>
>>>>
>>>> Regarding to VDSO, definitively, is not randomized enough in 64bits.
>>>> Brute force attacks would be pretty fast even from the network.
>>>> I have identified the bug and seems quite easy to fix it.
>>>>
>>>> On 32bit systems, this is not a issue because it is mapped in the
>>>> mmap area. In order to fix the VDSO on 64bit, the following
>>>> considerations shall
>>>> be discussed:
>>>>
>>>>
>>>> Performance:
>>>>      It seems (reading the kernel comments) that the random allocation
>>>>      algorithm tries to place the VDSO in the same PTE than the stack.
>>>
>>>
>>> The comment is wrong.  It means PTE table.
>>>
>>>>      But since the permissions of the stack and the VDSO are different
>>>>      it seems that are getting right the opposite.
>>>
>>>
>>> Permissions have page granularity, so this isn't a problem.
>>>
>>>>
>>>>      Effectively VDSO shall be correctly randomized because it contains
>>>>      enough useful exploitable stuff.
>>>>
>>>>      I think that the possible solution is follow the x86_32 approach
>>>>      which consist on map the VDSO in the mmap area.
>>>>
>>>>      It would be better fix VDSO in a different patch ? I can send a
>>>>      patch which fixes the VDSO on 64 bit.
>>>>
>>>
>>> What are the considerations for 64-bit memory layout?  I haven't
>>> touched it because I don't want to break userspace, but I don't know
>>> what to be careful about.
>>>
>>> --Andy
>>
>>
>> I don't think that mapping the VDSO in the mmap area breaks the
>> userspace. Actually, this is already happening with the current
>> implementation. You can see it by running:
>>
>> setarch x86_64 -R cat /proc/self/maps
>>
>
> Hmm.  So apparently we even switch which side of the stack the vdso is
> on depending on the randomization setting.
>
>>
>> Do this break the userspace in some way ?
>>
>>
>> Regarding the solution to the offset2lib it seems that placing the
>> executable in a different memory region area could increase the
>> number of pages for the pages table (because it is more spread).
>> We should consider this before fixing the current implementation
>> (randomize_va_space=2).
>>
>> I guess that the current implementation places the PIE executable in
>> the mmap base area jointly with the libraries in an attempt to reduce
>> the size of the page table.
>>
>> Therefore, I can fix the current implementation (maintaining the
>> randomize_va_space=2) by moving the PIE executable from the mmap base
>> area to another one for x86*, ARM* and MIPS (as s390 and PowerPC do).
>> But we shall agree that this increment in the page table is not a
>> issue. Otherwise, the randomize_va_space=3 shall be considered.
>
> Wrt the vdso itself, though, there is an extra consideration: CRIU.  I
> *think* that the CRIU vdso proxying scheme will work even if the vdso
> changes sizes and is adjacent to other mappings.  Cyrill and/or Pavel,
> am I right?
>
> I'm not fundamentally opposed to mapping the vdso just like any other
> shared library.  I still think that we should have an extra-strong
> randomization mode in which all the libraries are randomized wrt each
> other, though.  For many applications, the extra page table cost will
> be negligible.

This is stupid.  The vdso randomization is just buggy, plain and
simple.  Patch coming.

>
> --Andy
>
>>
>>
>> Hector Marco.
>>
>>>
>>>>
>>>>
>>>> Regards,
>>>> Hector Marco.
>
>
>
> --
> Andy Lutomirski
> AMA Capital Management, LLC



-- 
Andy Lutomirski
AMA Capital Management, LLC


* [PATCH] x86_64, vdso: Fix the vdso address randomization algorithm
  2014-12-19 23:53             ` Andy Lutomirski
@ 2014-12-20  0:29               ` Andy Lutomirski
  2014-12-20 17:40               ` [PATCH v2] " Andy Lutomirski
  2014-12-22 17:36               ` [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack Hector Marco Gisbert
  2 siblings, 0 replies; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-20  0:29 UTC (permalink / raw)
  To: linux-kernel; +Cc: Andy Lutomirski, stable

The theory behind vdso randomization is that it's mapped at a random
offset above the top of the stack.  To avoid wasting a page of
memory for an extra page table, the vdso isn't supposed to extend
past the lowest PMD into which it can fit.  Other than that, the
address should be a uniformly distributed address that meets all of
the alignment requirements.

The current algorithm is buggy: the vdso has about a 50% probability
of being at the very end of a PMD.  The current algorithm also has a
decent chance of failing outright due to incorrect handling of the
case where the top of the stack is near the top of its PMD.

This fixes the implementation.  The paxtest estimate of vdso
"randomisation" improves from 11 bits to 18 bits.  (Disclaimer: I
don't know what the paxtest code is actually calculating.)

It's worth noting that this algorithm is inherently biased: the vdso
is more likely to end up near the end of its PMD than near the
beginning.  Ideally we would either nix the PMD sharing requirement
or jointly randomize the vdso and the stack to eliminate the bias.

In the mean time, this is a considerable improvement with basically
no risk of incompatibility issues, since the allowed outputs of the
algorithm are unchanged.

As an easy test, doing this:

for i in `seq 10000`
  do grep -P vdso /proc/self/maps |cut -d- -f1
done |sort |uniq -d

used to produce lots of output (1445 lines on my most recent run).
A tiny subset looks like this:

7fffdfffe000
7fffe01fe000
7fffe05fe000
7fffe07fe000
7fffe09fe000
7fffe0bfe000
7fffe0dfe000

Note the suspicious fe000 endings.  With the fix, I get a much more
palatable 76 repeated addresses.

Cc: stable@vger.kernel.org
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
---

No particular rush here.  This should probably go in some time after -rc1,
unless someone feels that it's more urgent than that.

 arch/x86/vdso/vma.c | 47 +++++++++++++++++++++++++++++++----------------
 1 file changed, 31 insertions(+), 16 deletions(-)

diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
index 009495b9ab4b..62be12eb3f16 100644
--- a/arch/x86/vdso/vma.c
+++ b/arch/x86/vdso/vma.c
@@ -41,12 +41,17 @@ void __init init_vdso_image(const struct vdso_image *image)
 
 struct linux_binprm;
 
-/* Put the vdso above the (randomized) stack with another randomized offset.
-   This way there is no hole in the middle of address space.
-   To save memory make sure it is still in the same PTE as the stack top.
-   This doesn't give that many random bits.
-
-   Only used for the 64-bit and x32 vdsos. */
+/*
+ * Put the vdso above the (randomized) stack with another randomized
+ * offset.  This way there is no hole in the middle of address space.
+ * To save memory make sure it is still in the same PTE as the stack
+ * top.  This doesn't give that many random bits.
+ *
+ * Note that this algorithm is imperfect: the distribution of the vdso
+ * start address within a PMD is biased toward the end.
+ *
+ * Only used for the 64-bit and x32 vdsos.
+ */
 static unsigned long vdso_addr(unsigned long start, unsigned len)
 {
 #ifdef CONFIG_X86_32
@@ -54,22 +59,32 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
 #else
 	unsigned long addr, end;
 	unsigned offset;
-	end = (start + PMD_SIZE - 1) & PMD_MASK;
+
+	/*
+	 * Round up the start address.  It can start out unaligned as a result
+	 * of stack start randomization.
+	 */
+	start = PAGE_ALIGN(start);
+
+	/* Round the lowest possible end address up to a PMD boundary. */
+	end = (start + len + PMD_SIZE - 1) & PMD_MASK;
 	if (end >= TASK_SIZE_MAX)
 		end = TASK_SIZE_MAX;
 	end -= len;
-	/* This loses some more bits than a modulo, but is cheaper */
-	offset = get_random_int() & (PTRS_PER_PTE - 1);
-	addr = start + (offset << PAGE_SHIFT);
-	if (addr >= end)
-		addr = end;
+
+	if (end > start) {
+		offset = get_random_int() % ((end - start) >> PAGE_SHIFT);
+		addr = start + (offset << PAGE_SHIFT);
+		if (WARN_ON_ONCE(addr > end))
+			addr = end;
+	} else {
+		addr = start;
+	}
 
 	/*
-	 * page-align it here so that get_unmapped_area doesn't
-	 * align it wrongfully again to the next page. addr can come in 4K
-	 * unaligned here as a result of stack start randomization.
+	 * Forcibly align the final address in case we have a hardware
+	 * issue that requires alignment for performance reasons.
 	 */
-	addr = PAGE_ALIGN(addr);
 	addr = align_vdso_addr(addr);
 
 	return addr;
-- 
2.1.0



* [PATCH v2] x86_64, vdso: Fix the vdso address randomization algorithm
  2014-12-19 23:53             ` Andy Lutomirski
  2014-12-20  0:29               ` [PATCH] x86_64, vdso: Fix the vdso address randomization algorithm Andy Lutomirski
@ 2014-12-20 17:40               ` Andy Lutomirski
  2014-12-20 21:13                 ` Kees Cook
  2014-12-22 17:36               ` [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack Hector Marco Gisbert
  2 siblings, 1 reply; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-20 17:40 UTC (permalink / raw)
  To: linux-kernel; +Cc: x86, keescook, Andy Lutomirski, stable

The theory behind vdso randomization is that it's mapped at a random
offset above the top of the stack.  To avoid wasting a page of
memory for an extra page table, the vdso isn't supposed to extend
past the lowest PMD into which it can fit.  Other than that, the
address should be a uniformly distributed address that meets all of
the alignment requirements.

The current algorithm is buggy: the vdso has about a 50% probability
of being at the very end of a PMD.  The current algorithm also has a
decent chance of failing outright due to incorrect handling of the
case where the top of the stack is near the top of its PMD.

This fixes the implementation.  The paxtest estimate of vdso
"randomisation" improves from 11 bits to 18 bits.  (Disclaimer: I
don't know what the paxtest code is actually calculating.)

It's worth noting that this algorithm is inherently biased: the vdso
is more likely to end up near the end of its PMD than near the
beginning.  Ideally we would either nix the PMD sharing requirement
or jointly randomize the vdso and the stack to reduce the bias.

In the mean time, this is a considerable improvement with basically
no risk of compatibility issues, since the allowed outputs of the
algorithm are unchanged.

As an easy test, doing this:

for i in `seq 10000`
  do grep -P vdso /proc/self/maps |cut -d- -f1
done |sort |uniq -d

used to produce lots of output (1445 lines on my most recent run).
A tiny subset looks like this:

7fffdfffe000
7fffe01fe000
7fffe05fe000
7fffe07fe000
7fffe09fe000
7fffe0bfe000
7fffe0dfe000

Note the suspicious fe000 endings.  With the fix, I get a much more
palatable 76 repeated addresses.

Cc: stable@vger.kernel.org
Signed-off-by: Andy Lutomirski <luto@amacapital.net>
---
 arch/x86/vdso/vma.c | 45 +++++++++++++++++++++++++++++----------------
 1 file changed, 29 insertions(+), 16 deletions(-)

diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
index 009495b9ab4b..1c9f750c3859 100644
--- a/arch/x86/vdso/vma.c
+++ b/arch/x86/vdso/vma.c
@@ -41,12 +41,17 @@ void __init init_vdso_image(const struct vdso_image *image)
 
 struct linux_binprm;
 
-/* Put the vdso above the (randomized) stack with another randomized offset.
-   This way there is no hole in the middle of address space.
-   To save memory make sure it is still in the same PTE as the stack top.
-   This doesn't give that many random bits.
-
-   Only used for the 64-bit and x32 vdsos. */
+/*
+ * Put the vdso above the (randomized) stack with another randomized
+ * offset.  This way there is no hole in the middle of address space.
+ * To save memory make sure it is still in the same PTE as the stack
+ * top.  This doesn't give that many random bits.
+ *
+ * Note that this algorithm is imperfect: the distribution of the vdso
+ * start address within a PMD is biased toward the end.
+ *
+ * Only used for the 64-bit and x32 vdsos.
+ */
 static unsigned long vdso_addr(unsigned long start, unsigned len)
 {
 #ifdef CONFIG_X86_32
@@ -54,22 +59,30 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
 #else
 	unsigned long addr, end;
 	unsigned offset;
-	end = (start + PMD_SIZE - 1) & PMD_MASK;
+
+	/*
+	 * Round up the start address.  It can start out unaligned as a result
+	 * of stack start randomization.
+	 */
+	start = PAGE_ALIGN(start);
+
+	/* Round the lowest possible end address up to a PMD boundary. */
+	end = (start + len + PMD_SIZE - 1) & PMD_MASK;
 	if (end >= TASK_SIZE_MAX)
 		end = TASK_SIZE_MAX;
 	end -= len;
-	/* This loses some more bits than a modulo, but is cheaper */
-	offset = get_random_int() & (PTRS_PER_PTE - 1);
-	addr = start + (offset << PAGE_SHIFT);
-	if (addr >= end)
-		addr = end;
+
+	if (end > start) {
+		offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
+		addr = start + (offset << PAGE_SHIFT);
+	} else {
+		addr = start;
+	}
 
 	/*
-	 * page-align it here so that get_unmapped_area doesn't
-	 * align it wrongfully again to the next page. addr can come in 4K
-	 * unaligned here as a result of stack start randomization.
+	 * Forcibly align the final address in case we have a hardware
+	 * issue that requires alignment for performance reasons.
 	 */
-	addr = PAGE_ALIGN(addr);
 	addr = align_vdso_addr(addr);
 
 	return addr;
-- 
2.1.0



* Re: [PATCH v2] x86_64, vdso: Fix the vdso address randomization algorithm
  2014-12-20 17:40               ` [PATCH v2] " Andy Lutomirski
@ 2014-12-20 21:13                 ` Kees Cook
  0 siblings, 0 replies; 23+ messages in thread
From: Kees Cook @ 2014-12-20 21:13 UTC (permalink / raw)
  To: Andy Lutomirski; +Cc: LKML, x86, # 3.4.x

On Sat, Dec 20, 2014 at 9:40 AM, Andy Lutomirski <luto@amacapital.net> wrote:
> The theory behind vdso randomization is that it's mapped at a random
> offset above the top of the stack.  To avoid wasting a page of
> memory for an extra page table, the vdso isn't supposed to extend
> past the lowest PMD into which it can fit.  Other than that, the
> address should be a uniformly distributed address that meets all of
> the alignment requirements.
>
> The current algorithm is buggy: the vdso has about a 50% probability
> of being at the very end of a PMD.  The current algorithm also has a
> decent chance of failing outright due to incorrect handling of the
> case where the top of the stack is near the top of its PMD.
>
> This fixes the implementation.  The paxtest estimate of vdso
> "randomisation" improves from 11 bits to 18 bits.  (Disclaimer: I
> don't know what the paxtest code is actually calculating.)
>
> It's worth noting that this algorithm is inherently biased: the vdso
> is more likely to end up near the end of its PMD than near the
> beginning.  Ideally we would either nix the PMD sharing requirement
> or jointly randomize the vdso and the stack to reduce the bias.
>
> In the mean time, this is a considerable improvement with basically
> no risk of compatibility issues, since the allowed outputs of the
> algorithm are unchanged.
>
> As an easy test, doing this:
>
> for i in `seq 10000`
>   do grep -P vdso /proc/self/maps |cut -d- -f1
> done |sort |uniq -d
>
> used to produce lots of output (1445 lines on my most recent run).
> A tiny subset looks like this:
>
> 7fffdfffe000
> 7fffe01fe000
> 7fffe05fe000
> 7fffe07fe000
> 7fffe09fe000
> 7fffe0bfe000
> 7fffe0dfe000
>
> Note the suspicious fe000 endings.  With the fix, I get a much more
> palatable 76 repeated addresses.
>
> Cc: stable@vger.kernel.org
> Signed-off-by: Andy Lutomirski <luto@amacapital.net>

Thanks for fixing this! :)

Reviewed-by: Kees Cook <keescook@chromium.org>

-Kees

> ---
>  arch/x86/vdso/vma.c | 45 +++++++++++++++++++++++++++++----------------
>  1 file changed, 29 insertions(+), 16 deletions(-)
>
> diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
> index 009495b9ab4b..1c9f750c3859 100644
> --- a/arch/x86/vdso/vma.c
> +++ b/arch/x86/vdso/vma.c
> @@ -41,12 +41,17 @@ void __init init_vdso_image(const struct vdso_image *image)
>
>  struct linux_binprm;
>
> -/* Put the vdso above the (randomized) stack with another randomized offset.
> -   This way there is no hole in the middle of address space.
> -   To save memory make sure it is still in the same PTE as the stack top.
> -   This doesn't give that many random bits.
> -
> -   Only used for the 64-bit and x32 vdsos. */
> +/*
> + * Put the vdso above the (randomized) stack with another randomized
> + * offset.  This way there is no hole in the middle of address space.
> + * To save memory make sure it is still in the same PTE as the stack
> + * top.  This doesn't give that many random bits.
> + *
> + * Note that this algorithm is imperfect: the distribution of the vdso
> + * start address within a PMD is biased toward the end.
> + *
> + * Only used for the 64-bit and x32 vdsos.
> + */
>  static unsigned long vdso_addr(unsigned long start, unsigned len)
>  {
>  #ifdef CONFIG_X86_32
> @@ -54,22 +59,30 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
>  #else
>         unsigned long addr, end;
>         unsigned offset;
> -       end = (start + PMD_SIZE - 1) & PMD_MASK;
> +
> +       /*
> +        * Round up the start address.  It can start out unaligned as a result
> +        * of stack start randomization.
> +        */
> +       start = PAGE_ALIGN(start);
> +
> +       /* Round the lowest possible end address up to a PMD boundary. */
> +       end = (start + len + PMD_SIZE - 1) & PMD_MASK;
>         if (end >= TASK_SIZE_MAX)
>                 end = TASK_SIZE_MAX;
>         end -= len;
> -       /* This loses some more bits than a modulo, but is cheaper */
> -       offset = get_random_int() & (PTRS_PER_PTE - 1);
> -       addr = start + (offset << PAGE_SHIFT);
> -       if (addr >= end)
> -               addr = end;
> +
> +       if (end > start) {
> +               offset = get_random_int() % (((end - start) >> PAGE_SHIFT) + 1);
> +               addr = start + (offset << PAGE_SHIFT);
> +       } else {
> +               addr = start;
> +       }
>
>         /*
> -        * page-align it here so that get_unmapped_area doesn't
> -        * align it wrongfully again to the next page. addr can come in 4K
> -        * unaligned here as a result of stack start randomization.
> +        * Forcibly align the final address in case we have a hardware
> +        * issue that requires alignment for performance reasons.
>          */
> -       addr = PAGE_ALIGN(addr);
>         addr = align_vdso_addr(addr);
>
>         return addr;
> --
> 2.1.0
>



-- 
Kees Cook
Chrome OS Security


* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-19 23:53             ` Andy Lutomirski
  2014-12-20  0:29               ` [PATCH] x86_64, vdso: Fix the vdso address randomization algorithm Andy Lutomirski
  2014-12-20 17:40               ` [PATCH v2] " Andy Lutomirski
@ 2014-12-22 17:36               ` Hector Marco Gisbert
  2014-12-22 17:56                 ` Andy Lutomirski
  2 siblings, 1 reply; 23+ messages in thread
From: Hector Marco Gisbert @ 2014-12-22 17:36 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Cyrill Gorcunov, Pavel Emelyanov, Catalin Marinas,
	Heiko Carstens, Oleg Nesterov, Ingo Molnar, Anton Blanchard,
	Jiri Kosina, Russell King - ARM Linux, H. Peter Anvin,
	David Daney, Andrew Morton, Arun Chandran, linux-kernel,
	Martin Schwidefsky, Ismael Ripoll, Christian Borntraeger,
	Thomas Gleixner, Hanno Böck, Will Deacon,
	Benjamin Herrenschmidt, Kees Cook, Reno Robert

[PATCH] Randomize the VVAR/VDSO areas properly

This is a simple patch to map the VVAR/VDSO areas in the mmap area,
rather than "close to the stack". Mapping the VVAR/VDSO in the mmap area
should fix the "VDSO weakness" (too little entropy). As I mentioned in a
previous message, this solution should not break userspace.

In fact, in the current kernel the VVAR/VDSO areas are already mapped in
the mmap area under certain conditions. To check this you can run the
following command, which always locates the vdso in the mmap area:

$ setarch x86_64 -R cat /proc/self/maps

00400000-0040b000 r-xp         ...  /bin/cat
0060a000-0060b000 r--p         ...  /bin/cat
0060b000-0060c000 rw-p         ...  /bin/cat
0060c000-0062d000 rw-p         ...  [heap]
7ffff6c8c000-7ffff7a12000 r--p ...   /usr/lib/locale/locale-archive
7ffff7a12000-7ffff7bcf000 r-xp ...   /lib/x86_64-linux-gnu/libc-2.17.so
7ffff7bcf000-7ffff7dcf000 ---p ...   /lib/x86_64-linux-gnu/libc-2.17.so
7ffff7dcf000-7ffff7dd3000 r--p ...   /lib/x86_64-linux-gnu/libc-2.17.so
7ffff7dd3000-7ffff7dd5000 rw-p ...   /lib/x86_64-linux-gnu/libc-2.17.so
7ffff7dd5000-7ffff7dda000 rw-p ...
7ffff7dda000-7ffff7dfd000 r-xp ...   /lib/x86_64-linux-gnu/ld-2.17.so
7ffff7fd9000-7ffff7fdc000 rw-p ...
7ffff7ff8000-7ffff7ffa000 rw-p ...
7ffff7ffa000-7ffff7ffc000 r-xp ...   [vdso]
7ffff7ffc000-7ffff7ffd000 r--p ...   /lib/x86_64-linux-gnu/ld-2.17.so
7ffff7ffd000-7ffff7fff000 rw-p ...   /lib/x86_64-linux-gnu/ld-2.17.so
7ffffffde000-7ffffffff000 rw-p ...   [stack]
ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0   [vsyscall]

Besides using setarch to "force" the location of the VDSO, the function
get_unmapped_area may also return an address in the mmap area if the
"suggested" address is not valid. This is a rare case, but it occurs
from time to time.

Therefore, putting the VVAR/VDSO in the mmap area, as this patch does,
should work smoothly.


Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
Signed-off-by: Ismael Ripoll <iripoll@upv.es>

diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
index 009495b..b61eed2 100644
--- a/arch/x86/vdso/vma.c
+++ b/arch/x86/vdso/vma.c
@@ -41,42 +41,7 @@ void __init init_vdso_image(const struct vdso_image *image)

  struct linux_binprm;

-/* Put the vdso above the (randomized) stack with another randomized offset.
-   This way there is no hole in the middle of address space.
-   To save memory make sure it is still in the same PTE as the stack top.
-   This doesn't give that many random bits.
-
-   Only used for the 64-bit and x32 vdsos. */
-static unsigned long vdso_addr(unsigned long start, unsigned len)
-{
-#ifdef CONFIG_X86_32
-    return 0;
-#else
-    unsigned long addr, end;
-    unsigned offset;
-    end = (start + PMD_SIZE - 1) & PMD_MASK;
-    if (end >= TASK_SIZE_MAX)
-        end = TASK_SIZE_MAX;
-    end -= len;
-    /* This loses some more bits than a modulo, but is cheaper */
-    offset = get_random_int() & (PTRS_PER_PTE - 1);
-    addr = start + (offset << PAGE_SHIFT);
-    if (addr >= end)
-        addr = end;
-
-    /*
-     * page-align it here so that get_unmapped_area doesn't
-     * align it wrongfully again to the next page. addr can come in 4K
-     * unaligned here as a result of stack start randomization.
-     */
-    addr = PAGE_ALIGN(addr);
-    addr = align_vdso_addr(addr);
-
-    return addr;
-#endif
-}
-
-static int map_vdso(const struct vdso_image *image, bool calculate_addr)
+static int map_vdso(const struct vdso_image *image)
  {
      struct mm_struct *mm = current->mm;
      struct vm_area_struct *vma;
@@ -88,16 +53,9 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
          .pages = no_pages,
      };

-    if (calculate_addr) {
-        addr = vdso_addr(current->mm->start_stack,
-                 image->size - image->sym_vvar_start);
-    } else {
-        addr = 0;
-    }
-
      down_write(&mm->mmap_sem);

-    addr = get_unmapped_area(NULL, addr,
+    addr = get_unmapped_area(NULL, 0,
                   image->size - image->sym_vvar_start, 0, 0);
      if (IS_ERR_VALUE(addr)) {
          ret = addr;
@@ -172,7 +130,7 @@ static int load_vdso32(void)
      if (vdso32_enabled != 1)  /* Other values all mean "disabled" */
          return 0;

-    ret = map_vdso(selected_vdso32, false);
+    ret = map_vdso(selected_vdso32);
      if (ret)
          return ret;

@@ -191,7 +149,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
      if (!vdso64_enabled)
          return 0;

-    return map_vdso(&vdso_image_64, true);
+    return map_vdso(&vdso_image_64);
  }

  #ifdef CONFIG_COMPAT
@@ -203,7 +161,7 @@ int compat_arch_setup_additional_pages(struct linux_binprm *bprm,
          if (!vdso64_enabled)
              return 0;

-        return map_vdso(&vdso_image_x32, true);
+        return map_vdso(&vdso_image_x32);
      }
  #endif


Andy Lutomirski <luto@amacapital.net> wrote:

> On Fri, Dec 19, 2014 at 2:11 PM, Andy Lutomirski <luto@amacapital.net> wrote:
>> On Fri, Dec 19, 2014 at 2:04 PM, Hector Marco <hecmargi@upv.es> wrote:
>>>
>>>
>>> El 12/12/14 a las 18:17, Andy Lutomirski escribió:
>>>
>>>> On Dec 12, 2014 8:33 AM, "Hector Marco" <hecmargi@upv.es> wrote:
>>>>>
>>>>>
>>>>> Hello,
>>>>>
>>>>> I agree. I don't think a new randomization mode will be needed, just fix
>>>>> the current randomize_va_space=2. Said other way: fixing the offset2lib
>>>>> will not break any current program and so, no need to add additional
>>>>> configuration options. May be we shall wait for some inputs
>>>>> from the list (may be we are missing something).
>>>>>
>>>>>
>>>>> Regarding to VDSO, definitively, is not randomized enough in 64bits.
>>>>> Brute force attacks would be pretty fast even from the network.
>>>>> I have identified the bug and seems quite easy to fix it.
>>>>>
>>>>> On 32bit systems, this is not a issue because it is mapped in the
>>>>> mmap area. In order to fix the VDSO on 64bit, the following
>>>>> considerations shall
>>>>> be discussed:
>>>>>
>>>>>
>>>>> Performance:
>>>>>      It seems (reading the kernel comments) that the random allocation
>>>>>      algorithm tries to place the VDSO in the same PTE than the stack.
>>>>
>>>>
>>>> The comment is wrong.  It means PTE table.
>>>>
>>>>>      But since the permissions of the stack and the VDSO are different
>>>>>      it seems that are getting right the opposite.
>>>>
>>>>
>>>> Permissions have page granularity, so this isn't a problem.
>>>>
>>>>>
>>>>>      Effectively VDSO shall be correctly randomized because it contains
>>>>>      enough useful exploitable stuff.
>>>>>
>>>>>      I think that the possible solution is follow the x86_32 approach
>>>>>      which consist on map the VDSO in the mmap area.
>>>>>
>>>>>      It would be better fix VDSO in a different patch ? I can send a
>>>>>      patch which fixes the VDSO on 64 bit.
>>>>>
>>>>
>>>> What are the considerations for 64-bit memory layout?  I haven't
>>>> touched it because I don't want to break userspace, but I don't know
>>>> what to be careful about.
>>>>
>>>> --Andy
>>>
>>>
>>> I don't think that mapping the VDSO in the mmap area breaks the
>>> userspace. Actually, this is already happening with the current
>>> implementation. You can see it by running:
>>>
>>> setarch x86_64 -R cat /proc/self/maps
>>>
>>
>> Hmm.  So apparently we even switch which side of the stack the vdso is
>> on depending on the randomization setting.
>>
>>>
>>> Do this break the userspace in some way ?
>>>
>>>
>>> Regarding the solution to the offset2lib it seems that placing the
>>> executable in a different memory region area could increase the
>>> number of pages for the pages table (because it is more spread).
>>> We should consider this before fixing the current implementation
>>> (randomize_va_space=2).
>>>
>>> I guess that the current implementation places the PIE executable in
>>> the mmap base area jointly with the libraries in an attempt to reduce
>>> the size of the page table.
>>>
>>> Therefore, I can fix the current implementation (maintaining the
>>> randomize_va_space=2) by moving the PIE executable from the mmap base
>>> area to another one for x86*, ARM* and MIPS (as s390 and PowerPC do).
>>> But we shall agree that this increment in the page table is not a
>>> issue. Otherwise, the randomize_va_space=3 shall be considered.
>>
>> Wrt the vdso itself, though, there is an extra consideration: CRIU.  I
>> *think* that the CRIU vdso proxying scheme will work even if the vdso
>> changes sizes and is adjacent to other mappings.  Cyrill and/or Pavel,
>> am I right?
>>
>> I'm not fundamentally opposed to mapping the vdso just like any other
>> shared library.  I still think that we should have an extra-strong
>> randomization mode in which all the libraries are randomized wrt each
>> other, though.  For many applications, the extra page table cost will
>> be negligible.
>
> This is stupid.  The vdso randomization is just buggy, plain and
> simple.  Patch coming.
>
>>
>> --Andy
>>
>>>
>>>
>>> Hector Marco.
>>>
>>>>
>>>>>
>>>>>
>>>>> Regards,
>>>>> Hector Marco.
>>
>>
>>
>> --
>> Andy Lutomirski
>> AMA Capital Management, LLC
>
>
>
> --
> Andy Lutomirski
> AMA Capital Management, LLC
>




^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-22 17:36               ` [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack Hector Marco Gisbert
@ 2014-12-22 17:56                 ` Andy Lutomirski
  2014-12-22 19:49                   ` Jiri Kosina
  2014-12-22 23:23                   ` Hector Marco Gisbert
  0 siblings, 2 replies; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-22 17:56 UTC (permalink / raw)
  To: Hector Marco Gisbert
  Cc: Cyrill Gorcunov, Pavel Emelyanov, Catalin Marinas,
	Heiko Carstens, Oleg Nesterov, Ingo Molnar, Anton Blanchard,
	Jiri Kosina, Russell King - ARM Linux, H. Peter Anvin,
	David Daney, Andrew Morton, Arun Chandran, linux-kernel,
	Martin Schwidefsky, Ismael Ripoll, Christian Borntraeger,
	Thomas Gleixner, Hanno Böck, Will Deacon,
	Benjamin Herrenschmidt, Kees Cook, Reno Robert

On Mon, Dec 22, 2014 at 9:36 AM, Hector Marco Gisbert <hecmargi@upv.es> wrote:
> [PATCH] Properly randomize the VVAR/VDSO areas
>
> This is a simple patch to map the VVAR/VDSO areas in the mmap area,
> rather than "close to the stack". Mapping the VVAR/VDSO in the mmap area
> should fix the "VDSO weakness" (too little entropy). As I mentioned in a
> previous message, this solution should not break userspace.
>
> In fact, in the current kernel, the VVAR/VDSO are already mmapped in the
> mmap area under certain conditions. To check this, you can run the
> following command, which always locates the vdso in the mmap area:
>
> $ setarch x86_64 -R cat /proc/self/maps
>
> 00400000-0040b000 r-xp         ...  /bin/cat
> 0060a000-0060b000 r--p         ...  /bin/cat
> 0060b000-0060c000 rw-p         ...  /bin/cat
> 0060c000-0062d000 rw-p         ...  [heap]
> 7ffff6c8c000-7ffff7a12000 r--p ...   /usr/lib/locale/locale-archive
> 7ffff7a12000-7ffff7bcf000 r-xp ...   /lib/x86_64-linux-gnu/libc-2.17.so
> 7ffff7bcf000-7ffff7dcf000 ---p ...   /lib/x86_64-linux-gnu/libc-2.17.so
> 7ffff7dcf000-7ffff7dd3000 r--p ...   /lib/x86_64-linux-gnu/libc-2.17.so
> 7ffff7dd3000-7ffff7dd5000 rw-p ...   /lib/x86_64-linux-gnu/libc-2.17.so
> 7ffff7dd5000-7ffff7dda000 rw-p ...
> 7ffff7dda000-7ffff7dfd000 r-xp ...   /lib/x86_64-linux-gnu/ld-2.17.so
> 7ffff7fd9000-7ffff7fdc000 rw-p ...
> 7ffff7ff8000-7ffff7ffa000 rw-p ...
> 7ffff7ffa000-7ffff7ffc000 r-xp ...   [vdso]
> 7ffff7ffc000-7ffff7ffd000 r--p ...   /lib/x86_64-linux-gnu/ld-2.17.so
> 7ffff7ffd000-7ffff7fff000 rw-p ...   /lib/x86_64-linux-gnu/ld-2.17.so
> 7ffffffde000-7ffffffff000 rw-p ...   [stack]
> ffffffffff600000-ffffffffff601000 r-xp 00000000 00:00 0   [vsyscall]
>
> Besides using setarch to force the location of the VDSO, the function
> get_unmapped_area may also return an address in the mmap area if the
> "suggested" address is not valid. This is a rare case, but it occurs from
> time to time.
>
> Therefore, putting the VVAR/VDSO in the mmap area, as this patch does,
> should work smoothly.

Before I even *consider* the code, I want to know two things:

1. Is there actually a problem in the first place?  The vdso
randomization in all released kernels is blatantly buggy, but it's
fixed in -tip, so it should be fixed by the time that 3.19-rc2 comes
out, and the fix is marked for -stable.  Can you try a fixed kernel:

https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/commit/?h=x86/urgent&id=fbe1bf140671619508dfa575d74a185ae53c5dbb

2. I'm not sure your patch helps.  The currently exciting articles on
ASLR weaknesses seem to focus on two narrow issues:

a. With PIE executables, the offset from the executable to the
libraries is constant.  This is unfortunate when your threat model
allows you to learn the executable base address and all your gadgets
are in shared libraries.

b. The VDSO base address is pathetically low on min entropy.  This
will be dramatically improved shortly.

The pax tests seem to completely ignore the joint distribution of the
relevant addresses.  My crystal ball predicts that, if I apply your
patch, someone will write an article observing that the libc-to-vdso
offset is constant or, OMG!, the PIE-executable-to-vdso offset is
constant.

So... is there a problem in the first place, and is the situation
really improved with your patch?

--Andy


>
>
> Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
> Signed-off-by: Ismael Ripoll <iripoll@upv.es>
>
> diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
> index 009495b..b61eed2 100644
> --- a/arch/x86/vdso/vma.c
> +++ b/arch/x86/vdso/vma.c
> @@ -41,42 +41,7 @@ void __init init_vdso_image(const struct vdso_image *image)
>
>  struct linux_binprm;
>
> -/* Put the vdso above the (randomized) stack with another randomized offset.
> -   This way there is no hole in the middle of address space.
> -   To save memory make sure it is still in the same PTE as the stack top.
> -   This doesn't give that many random bits.
> -
> -   Only used for the 64-bit and x32 vdsos. */
> -static unsigned long vdso_addr(unsigned long start, unsigned len)
> -{
> -#ifdef CONFIG_X86_32
> -    return 0;
> -#else
> -    unsigned long addr, end;
> -    unsigned offset;
> -    end = (start + PMD_SIZE - 1) & PMD_MASK;
> -    if (end >= TASK_SIZE_MAX)
> -        end = TASK_SIZE_MAX;
> -    end -= len;
> -    /* This loses some more bits than a modulo, but is cheaper */
> -    offset = get_random_int() & (PTRS_PER_PTE - 1);
> -    addr = start + (offset << PAGE_SHIFT);
> -    if (addr >= end)
> -        addr = end;
> -
> -    /*
> -     * page-align it here so that get_unmapped_area doesn't
> -     * align it wrongfully again to the next page. addr can come in 4K
> -     * unaligned here as a result of stack start randomization.
> -     */
> -    addr = PAGE_ALIGN(addr);
> -    addr = align_vdso_addr(addr);
> -
> -    return addr;
> -#endif
> -}
> -
> -static int map_vdso(const struct vdso_image *image, bool calculate_addr)
> +static int map_vdso(const struct vdso_image *image)
>  {
>      struct mm_struct *mm = current->mm;
>      struct vm_area_struct *vma;
> @@ -88,16 +53,9 @@ static int map_vdso(const struct vdso_image *image, bool calculate_addr)
>          .pages = no_pages,
>      };
>
> -    if (calculate_addr) {
> -        addr = vdso_addr(current->mm->start_stack,
> -                 image->size - image->sym_vvar_start);
> -    } else {
> -        addr = 0;
> -    }
> -
>      down_write(&mm->mmap_sem);
>
> -    addr = get_unmapped_area(NULL, addr,
> +    addr = get_unmapped_area(NULL, 0,
>                   image->size - image->sym_vvar_start, 0, 0);
>      if (IS_ERR_VALUE(addr)) {
>          ret = addr;
> @@ -172,7 +130,7 @@ static int load_vdso32(void)
>      if (vdso32_enabled != 1)  /* Other values all mean "disabled" */
>          return 0;
>
> -    ret = map_vdso(selected_vdso32, false);
> +    ret = map_vdso(selected_vdso32);
>      if (ret)
>          return ret;
>
> @@ -191,7 +149,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
>      if (!vdso64_enabled)
>          return 0;
>
> -    return map_vdso(&vdso_image_64, true);
> +    return map_vdso(&vdso_image_64);
>  }
>
>  #ifdef CONFIG_COMPAT
> @@ -203,7 +161,7 @@ int compat_arch_setup_additional_pages(struct linux_binprm *bprm,
>          if (!vdso64_enabled)
>              return 0;
>
> -        return map_vdso(&vdso_image_x32, true);
> +        return map_vdso(&vdso_image_x32);
>      }
>  #endif
>
>
> Andy Lutomirski <luto@amacapital.net> wrote:
>
>
>> On Fri, Dec 19, 2014 at 2:11 PM, Andy Lutomirski <luto@amacapital.net>
>> wrote:
>>>
>>> On Fri, Dec 19, 2014 at 2:04 PM, Hector Marco <hecmargi@upv.es> wrote:
>>>>
>>>>
>>>>
>>>> On 12/12/14 at 18:17, Andy Lutomirski wrote:
>>>>
>>>>> On Dec 12, 2014 8:33 AM, "Hector Marco" <hecmargi@upv.es> wrote:
>>>>>>
>>>>>>
>>>>>>
>>>>>> Hello,
>>>>>>
>>>>>> I agree. I don't think a new randomization mode will be needed, just
>>>>>> fix
>>>>>> the current randomize_va_space=2. Said other way: fixing the
>>>>>> offset2lib
>>>>>> will not break any current program and so, no need to add additional
>>>>>> configuration options. May be we shall wait for some inputs
>>>>>> from the list (may be we are missing something).
>>>>>>
>>>>>>
>>>>>> Regarding to VDSO, definitively, is not randomized enough in 64bits.
>>>>>> Brute force attacks would be pretty fast even from the network.
>>>>>> I have identified the bug and seems quite easy to fix it.
>>>>>>
>>>>>> On 32bit systems, this is not a issue because it is mapped in the
>>>>>> mmap area. In order to fix the VDSO on 64bit, the following
>>>>>> considerations shall
>>>>>> be discussed:
>>>>>>
>>>>>>
>>>>>> Performance:
>>>>>>      It seems (reading the kernel comments) that the random allocation
>>>>>>      algorithm tries to place the VDSO in the same PTE than the stack.
>>>>>
>>>>>
>>>>>
>>>>> The comment is wrong.  It means PTE table.
>>>>>
>>>>>>      But since the permissions of the stack and the VDSO are different
>>>>>>      it seems that are getting right the opposite.
>>>>>
>>>>>
>>>>>
>>>>> Permissions have page granularity, so this isn't a problem.
>>>>>
>>>>>>
>>>>>>      Effectively VDSO shall be correctly randomized because it
>>>>>> contains
>>>>>>      enough useful exploitable stuff.
>>>>>>
>>>>>>      I think that the possible solution is follow the x86_32 approach
>>>>>>      which consist on map the VDSO in the mmap area.
>>>>>>
>>>>>>      It would be better fix VDSO in a different patch ? I can send a
>>>>>>      patch which fixes the VDSO on 64 bit.
>>>>>>
>>>>>
>>>>> What are the considerations for 64-bit memory layout?  I haven't
>>>>> touched it because I don't want to break userspace, but I don't know
>>>>> what to be careful about.
>>>>>
>>>>> --Andy
>>>>
>>>>
>>>>
>>>> I don't think that mapping the VDSO in the mmap area breaks the
>>>> userspace. Actually, this is already happening with the current
>>>> implementation. You can see it by running:
>>>>
>>>> setarch x86_64 -R cat /proc/self/maps
>>>>
>>>
>>> Hmm.  So apparently we even switch which side of the stack the vdso is
>>> on depending on the randomization setting.
>>>
>>>>
>>>> Do this break the userspace in some way ?
>>>>
>>>>
>>>> Regarding the solution to the offset2lib it seems that placing the
>>>> executable in a different memory region area could increase the
>>>> number of pages for the pages table (because it is more spread).
>>>> We should consider this before fixing the current implementation
>>>> (randomize_va_space=2).
>>>>
>>>> I guess that the current implementation places the PIE executable in
>>>> the mmap base area jointly with the libraries in an attempt to reduce
>>>> the size of the page table.
>>>>
>>>> Therefore, I can fix the current implementation (maintaining the
>>>> randomize_va_space=2) by moving the PIE executable from the mmap base
>>>> area to another one for x86*, ARM* and MIPS (as s390 and PowerPC do).
>>>> But we shall agree that this increment in the page table is not a
>>>> issue. Otherwise, the randomize_va_space=3 shall be considered.
>>>
>>>
>>> Wrt the vdso itself, though, there is an extra consideration: CRIU.  I
>>> *think* that the CRIU vdso proxying scheme will work even if the vdso
>>> changes sizes and is adjacent to other mappings.  Cyrill and/or Pavel,
>>> am I right?
>>>
>>> I'm not fundamentally opposed to mapping the vdso just like any other
>>> shared library.  I still think that we should have an extra-strong
>>> randomization mode in which all the libraries are randomized wrt each
>>> other, though.  For many applications, the extra page table cost will
>>> be negligible.
>>
>>
>> This is stupid.  The vdso randomization is just buggy, plain and
>> simple.  Patch coming.
>>
>>>
>>> --Andy
>>>
>>>>
>>>>
>>>> Hector Marco.
>>>>
>>>>>
>>>>>>
>>>>>>
>>>>>> Regards,
>>>>>> Hector Marco.
>>>
>>>
>>>
>>>
>>> --
>>> Andy Lutomirski
>>> AMA Capital Management, LLC
>>
>>
>>
>>
>> --
>> Andy Lutomirski
>> AMA Capital Management, LLC
>>
>
>
>



-- 
Andy Lutomirski
AMA Capital Management, LLC

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-22 17:56                 ` Andy Lutomirski
@ 2014-12-22 19:49                   ` Jiri Kosina
  2014-12-22 20:00                     ` Andy Lutomirski
  2014-12-22 23:23                   ` Hector Marco Gisbert
  1 sibling, 1 reply; 23+ messages in thread
From: Jiri Kosina @ 2014-12-22 19:49 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Hector Marco Gisbert, Cyrill Gorcunov, Pavel Emelyanov,
	Catalin Marinas, Heiko Carstens, Oleg Nesterov, Ingo Molnar,
	Anton Blanchard, Russell King - ARM Linux, H. Peter Anvin,
	David Daney, Andrew Morton, Arun Chandran, linux-kernel,
	Martin Schwidefsky, Ismael Ripoll, Christian Borntraeger,
	Thomas Gleixner, Hanno Böck, Will Deacon,
	Benjamin Herrenschmidt, Kees Cook, Reno Robert

On Mon, 22 Dec 2014, Andy Lutomirski wrote:

> a. With PIE executables, the offset from the executable to the
> libraries is constant.  This is unfortunate when your threat model
> allows you to learn the executable base address and all your gadgets
> are in shared libraries.

When I was originally pushing PIE executable randomization, I was already 
thinking about ways to solve this.

In theory, we could start playing games with load_addr in 
load_elf_interp() and randomizing it completely independently from mmap() 
base randomization, but the question is whether it's really worth the 
hassle and binfmt_elf code complication. I am not convinced.

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-22 19:49                   ` Jiri Kosina
@ 2014-12-22 20:00                     ` Andy Lutomirski
  2014-12-22 20:03                       ` Jiri Kosina
  0 siblings, 1 reply; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-22 20:00 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Hector Marco Gisbert, Cyrill Gorcunov, Pavel Emelyanov,
	Catalin Marinas, Heiko Carstens, Oleg Nesterov, Ingo Molnar,
	Anton Blanchard, Russell King - ARM Linux, H. Peter Anvin,
	David Daney, Andrew Morton, Arun Chandran, linux-kernel,
	Martin Schwidefsky, Ismael Ripoll, Christian Borntraeger,
	Thomas Gleixner, Hanno Böck, Will Deacon,
	Benjamin Herrenschmidt, Kees Cook, Reno Robert

On Mon, Dec 22, 2014 at 11:49 AM, Jiri Kosina <jkosina@suse.cz> wrote:
> On Mon, 22 Dec 2014, Andy Lutomirski wrote:
>
>> a. With PIE executables, the offset from the executable to the
>> libraries is constant.  This is unfortunate when your threat model
>> allows you to learn the executable base address and all your gadgets
>> are in shared libraries.
>
> When I was originally pushing PIE executable randomization, I have been
> thinking about ways to solve this.
>
> In theory, we could start playing games with load_addr in
> load_elf_interp() and randomizing it completely independently from mmap()
> base randomization, but the question is whether it's really worth the
> hassle and binfmt_elf code complication. I am not convinced.

It could be worth having a mode that goes all out: randomize every
single allocation independently in, say, a 45- or 46-bit range.  That
would be about as strong ASLR as we could possibly have, it would
result in guard intervals around mmap data allocations (which has real
value), and it would still leave plenty of space for big address space
hogs like the Chromium sandbox.

The main downside would be lots of memory used for page tables.

--Andy

>
> --
> Jiri Kosina
> SUSE Labs



-- 
Andy Lutomirski
AMA Capital Management, LLC

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-22 20:00                     ` Andy Lutomirski
@ 2014-12-22 20:03                       ` Jiri Kosina
  2014-12-22 20:13                         ` Andy Lutomirski
  0 siblings, 1 reply; 23+ messages in thread
From: Jiri Kosina @ 2014-12-22 20:03 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Hector Marco Gisbert, Cyrill Gorcunov, Pavel Emelyanov,
	Catalin Marinas, Heiko Carstens, Oleg Nesterov, Ingo Molnar,
	Anton Blanchard, Russell King - ARM Linux, H. Peter Anvin,
	David Daney, Andrew Morton, Arun Chandran, linux-kernel,
	Martin Schwidefsky, Ismael Ripoll, Christian Borntraeger,
	Thomas Gleixner, Hanno Böck, Will Deacon,
	Benjamin Herrenschmidt, Kees Cook, Reno Robert

On Mon, 22 Dec 2014, Andy Lutomirski wrote:

> It could be worth having a mode that goes all out: randomize every
> single allocation independently in, say, a 45- or 46-bit range.  That
> would be about as strong ASLR as we could possibly have, it would
> result in guard intervals around mmap data allocations (which has real
> value), and it would still leave plenty of space for big address space
> hogs like the Chromium sandbox.
> 
> The main downside would be lots of memory used for page tables.

Plus get_random_int() during every mmap() call. Plus the resulting VA 
space fragmentation.

-- 
Jiri Kosina
SUSE Labs

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-22 20:03                       ` Jiri Kosina
@ 2014-12-22 20:13                         ` Andy Lutomirski
  0 siblings, 0 replies; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-22 20:13 UTC (permalink / raw)
  To: Jiri Kosina
  Cc: Hector Marco Gisbert, Cyrill Gorcunov, Pavel Emelyanov,
	Catalin Marinas, Heiko Carstens, Oleg Nesterov, Ingo Molnar,
	Anton Blanchard, Russell King - ARM Linux, H. Peter Anvin,
	David Daney, Andrew Morton, Arun Chandran, linux-kernel,
	Martin Schwidefsky, Ismael Ripoll, Christian Borntraeger,
	Thomas Gleixner, Hanno Böck, Will Deacon,
	Benjamin Herrenschmidt, Kees Cook, Reno Robert

On Mon, Dec 22, 2014 at 12:03 PM, Jiri Kosina <jkosina@suse.cz> wrote:
> On Mon, 22 Dec 2014, Andy Lutomirski wrote:
>
>> It could be worth having a mode that goes all out: randomize every
>> single allocation independently in, say, a 45- or 46-bit range.  That
>> would be about as strong ASLR as we could possibly have, it would
>> result in guard intervals around mmap data allocations (which has real
>> value), and it would still leave plenty of space for big address space
>> hogs like the Chromium sandbox.
>>
>> The main downside would be lots of memory used for page tables.
>
> Plus get_random_int() during every mmap() call.

If that's actually a problem, then I think we should fix
get_random_int.  Chacha20 can generate 64 bits in a few cycles.

> Plus the resulting VA
> space fragmentation.

I think the main cost of fragmentation would be the page tables and
vmas.  2^45 bytes is a lot of bytes.

We could tone it down a bit if we dedicated a range to mmapped data
and tried to pack it reasonably densely.  We could even do a fair
amount of merging for data-heavy applications if we gave MAP_PRIVATE |
MAP_ANONYMOUS, PROT_READ | PROT_WRITE mappings a decent chance of
ending up next to each other.

Anyway, this would be a knob.  The database people would presumably turn it off.

--Andy

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-22 17:56                 ` Andy Lutomirski
  2014-12-22 19:49                   ` Jiri Kosina
@ 2014-12-22 23:23                   ` Hector Marco Gisbert
  2014-12-22 23:38                     ` Andy Lutomirski
  1 sibling, 1 reply; 23+ messages in thread
From: Hector Marco Gisbert @ 2014-12-22 23:23 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Cyrill Gorcunov, Pavel Emelyanov, Catalin Marinas,
	Heiko Carstens, Oleg Nesterov, Ingo Molnar, Anton Blanchard,
	Jiri Kosina, Russell King - ARM Linux, H. Peter Anvin,
	David Daney, Andrew Morton, Arun Chandran, linux-kernel,
	Martin Schwidefsky, Ismael Ripoll, Christian Borntraeger,
	Thomas Gleixner, Hanno Böck, Will Deacon,
	Benjamin Herrenschmidt, Kees Cook, Reno Robert

> Before I even *consider* the code, I want to know two things:
  >
  > 1. Is there actually a problem in the first place?  The vdso
  > randomization in all released kernels is blatantly buggy, but it's
  > fixed in -tip, so it should be fixed by the time that 3.19-rc2 comes
  > out, and the fix is marked for -stable.  Can you try a fixed kernel:
  >
  >
https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/commit/?h=x86/urgent&id=fbe1bf140671619508dfa575d74a185ae53c5dbb


Well, if it is already fixed, then great!

But since the vdso is something like a library (because it contains code
and no data), it shall be handled as a library and so it shall be
located jointly with the other libraries rather than close to the stack.
Later I'll talk about randomizing the libraries with respect to each other.

I think that the core idea of the current ASLR implementation is that
all the areas that share similar content (libraries, stack, heap,
application) shall be placed together, following more or less the MILS
division. This way, a leaked stack address is not very useful for
building a ROP chain against the libraries.

Another issue is page table locality. The implementation tries to
allocate the vdso "close" to the stack so that it fits into the PMD of
the stack (and so uses fewer pages for the page tables). Well, placing the
vdso in the mmap area would solve the problem at once.

Unfortunately, with your patch the VDSO nominally gets 18 bits of entropy.
But this is not the real entropy: it is tied to the entropy of the
stack. In other words, once an attacker guesses where the stack is placed,
they have to do negligible work to guess where the VDSO is located.
Note that a memory leak from a data area (which is of little help to
the attacker by itself) can then be used to locate the VDSO (which is of
great interest because it is executable and contains nice stuff).

Using my solution, the VDSO will have the same 28 bits of randomness
as the libraries (but they will all be together).

After 10000 executions I found 76 repeated addresses (still
low entropy, but much better than before). With my patch, there was
no repetition (much better entropy).
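
For reference, a rough userspace sketch of this kind of measurement
(illustrative only, not part of the patch): run a fresh process many
times, print the [vdso] base each time, and count repeats, e.g. by
piping the output through sort | uniq -d.

#include <stdio.h>
#include <string.h>

int main(void)
{
	char line[256];
	int i, runs = 10000;

	for (i = 0; i < runs; i++) {
		/* each popen() execs a fresh "cat", so its /proc/self/maps
		 * shows a newly randomized address space every time */
		FILE *f = popen("cat /proc/self/maps", "r");

		if (!f)
			return 1;
		while (fgets(line, sizeof(line), f)) {
			if (strstr(line, "[vdso]")) {
				char *dash = strchr(line, '-');

				if (dash)
					*dash = '\0';
				printf("%s\n", line);	/* vdso base address */
			}
		}
		pclose(f);
	}
	return 0;
}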


  > 2. I'm not sure your patch helpes.  The currently exciting articles on
  > ASLR weaknesses seem to focus on two narrow issues:
  >
  > a. With PIE executables, the offset from the executable to the
  > libraries is constant.  This is unfortunate when your threat model
  > allows you to learn the executable base address and all your gadgets
  > are in shared libraries.

Regarding the offset2lib... The core idea is that we shall consider the
application code and the libraries as two slightly different things (or two
different security regions). Since applications are in general more
prone to bugs than libraries, this seems the right way to do
it from the security point of view.  Obviously, the stack and the libraries
are clearly kept apart (you can even assign different access permissions).
Application code and libraries are not that different, but it would be
better if they were not together... and honestly, I think that the cost
of allocating them apart is so small that it is worth the code.

If the extra cost (one or two pages per process) required to place
the application code in another area is too high, then maybe it can be
implemented as another ASLR level, randomize_va_space=3, if one is added.


  > b. The VDSO base address is pathetically low on min entropy.  This
  > will be dramatically improved shortly.
  >
  > The pax tests seem to completely ignore the joint distribution of the
  > relevant addresses.  My crystal ball predicts that, if I apply your
  > patch, someone will write an article observing that the libc-to-vdso
  > offset is constant or, OMG!, the PIE-executable-to-vdso offset is
  > constant.
  >
  > So... is there a problem in the first place, and is the situation
  > really improved with your patch?
  >
  > --Andy

Absolutely agree.

The offset2x issues shall be considered now. And rather than moving objects
like the vdso, vvar, stack, heap, etc., we shall seriously consider
the cost of making all mappings truly random. That is
inter-mmap ASLR.

The current implementation is not that bad, except that the application was
considered in the same "category" as the libraries. But I guess that it
deserves a region of its own. Also, I think that executable code shall
be kept apart from data, which supports the idea of inter-mmap randomization.

Sorry if I'm mixing the VDSO and offset2lib issues, but they share a
similar core problem.


--Hector Marco.




^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-22 23:23                   ` Hector Marco Gisbert
@ 2014-12-22 23:38                     ` Andy Lutomirski
       [not found]                       ` <CAH4rwTKeN0P84FJnocoKV4t9rc2Ox_EYc+LEibD+Y83n7C8aVA@mail.gmail.com>
  0 siblings, 1 reply; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-22 23:38 UTC (permalink / raw)
  To: Hector Marco Gisbert
  Cc: Cyrill Gorcunov, Pavel Emelyanov, Catalin Marinas,
	Heiko Carstens, Oleg Nesterov, Ingo Molnar, Anton Blanchard,
	Jiri Kosina, Russell King - ARM Linux, H. Peter Anvin,
	David Daney, Andrew Morton, Arun Chandran, linux-kernel,
	Martin Schwidefsky, Ismael Ripoll, Christian Borntraeger,
	Thomas Gleixner, Hanno Böck, Will Deacon,
	Benjamin Herrenschmidt, Kees Cook, Reno Robert

On Mon, Dec 22, 2014 at 3:23 PM, Hector Marco Gisbert <hecmargi@upv.es> wrote:
>> Before I even *consider* the code, I want to know two things:
>
>  >
>  > 1. Is there actually a problem in the first place?  The vdso
>  > randomization in all released kernels is blatantly buggy, but it's
>  > fixed in -tip, so it should be fixed by the time that 3.19-rc2 comes
>  > out, and the fix is marked for -stable.  Can you try a fixed kernel:
>  >
>  >
> https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/commit/?h=x86/urgent&id=fbe1bf140671619508dfa575d74a185ae53c5dbb
>
>
> Well if it is already fixed, then great!.
>
> But since the vdso is something like a library (cause it contains code,
> and no data), it shall be handled as a library and so it shall be
> located jointly with the other libraries rather than close to the stack.
> Later I'll talk about randomizing libraries among them.
>
> I think that the core idea of the current ASLR implementation is that
> all the areas that share similar content (libraries, stack, heap,
> application) shall be placed together. Following more or less the MILS
> division. This way, a memory leak of an address of the stack is not very
> useful for building a ROP on the libraries.
>
> Another issue is the page table locality. The implementation tries to
> allocate the vdso "close" to the stack so that is fits into the PMD of
> the stack (and so, use less pages for the pagetables). Well, placing the
> vdso in the mmap area would solve the problem at once.
>
> Unfortunately, with your path the VDSO entropy has 18 entropy bits. But
> this is not true. The real entropy is masked with the entropy of the
> stack. In other words, if an attacker guesses where the stack is placed
> they have to do negligible work to guess where the VDSO is located.
> Note that, a memory leak from a data area (which is of little help to
> the attacker) can be used to locate the VDSO (which is of great interest
> because it is executable and contains nice stuff).

I'm not sure it's negligible.  It's 9 bits if the attacker can figure
out the stack alignment and 18 bits if the attacker can't.  This isn't
great, but it's far from nothing.

>
> Using my solution, the VDSO will have the same 28 bits of randomness
> than the libraries (but all will be together).
>
> After after 10000 executions I have found 76 repeated addresses (still
> low entropy, but much better than before). But with my patch, there was
> no repetition (much better entropy).
>
>
>  > 2. I'm not sure your patch helpes.  The currently exciting articles on
>  > ASLR weaknesses seem to focus on two narrow issues:
>  >
>  > a. With PIE executables, the offset from the executable to the
>  > libraries is constant.  This is unfortunate when your threat model
>  > allows you to learn the executable base address and all your gadgets
>  > are in shared libraries.
>
> Regardes the offset2lib... The core idea is that we shall consider the
> application code and libraries as two slightly different things (or two
> different security regions). Since applications are in general more
> prone to have bugs than libraries, it seems that this is the way to do
> it from the security point of view.  Obviously, stack and libraries are
> clearly apart (you can even assign different access permissions).
> Application code and libraries are not that different, but it would be
> better of they are not together.... and sincerely, I think that the cost
> of allocate them apart is so small that it worth the code.
>
> If the extra cost of (One or two pages) per process required to place
> the application code to another area is too high, then may be it can be
> implemented as another level of ASLR randomize_va_space=3 (if any).
>
>
>  > b. The VDSO base address is pathetically low on min entropy.  This
>  > will be dramatically improved shortly.
>  >
>  > The pax tests seem to completely ignore the joint distribution of the
>  > relevant addresses.  My crystal ball predicts that, if I apply your
>  > patch, someone will write an article observing that the libc-to-vdso
>  > offset is constant or, OMG!, the PIE-executable-to-vdso offset is
>  > constant.
>  >
>  > So... is there a problem in the first place, and is the situation
>  > really improved with your patch?
>  >
>  > --Andy
>
> Absolutely agree.
>
> The offset2x shall be considered now. And rather than moving objects
> like the vdso, vvar stack, heap... etc.. etc.. we shall consider
> seriously the cost of a full (all maps) to be real random. That is
> inter-mmap ASLR.
>
> Current implementation is not that bad, except that the application was
> considered in the same "category" than libraries. But I guess that it
> deserves a region for its own. Also, I think that executable code shall
> be apart from data.. which supports the idea of inter-mmap randomization.
>
> Sorry if I'm mixing VDSO, and offset2lib issues, but they share a
> similar core problem.
>

If I see a real argument that randomizing the vdso like a library is
better than randomizing it separately but weakly, I'll gladly consider
it.  But the references I've seen (and I haven't looked all that hard,
and I'm not a memory exploit writer) are unconvincing.

I'd really rather see a strong inter-mmap randomization scheme adopted.

--Andy

>
> --Hector Marco.
>
>
>



-- 
Andy Lutomirski
AMA Capital Management, LLC

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
       [not found]                       ` <CAH4rwTKeN0P84FJnocoKV4t9rc2Ox_EYc+LEibD+Y83n7C8aVA@mail.gmail.com>
@ 2014-12-23  8:15                         ` Andy Lutomirski
  2014-12-23 20:06                           ` Hector Marco Gisbert
  0 siblings, 1 reply; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-23  8:15 UTC (permalink / raw)
  To: Reno Robert
  Cc: Hector Marco Gisbert, Cyrill Gorcunov, Pavel Emelyanov,
	Catalin Marinas, Heiko Carstens, Oleg Nesterov, Ingo Molnar,
	Anton Blanchard, Jiri Kosina, Russell King - ARM Linux,
	H. Peter Anvin, David Daney, Andrew Morton, Arun Chandran,
	linux-kernel, Martin Schwidefsky, Ismael Ripoll,
	Christian Borntraeger, Thomas Gleixner, Hanno Böck,
	Will Deacon, Benjamin Herrenschmidt, Kees Cook

On Tue, Dec 23, 2014 at 12:07 AM, Reno Robert <renorobert@gmail.com> wrote:
> Hi Andy,
>
> I would like to mention couple of things
>
> 1. With reference to details mentioned in vdso patch, there are ~76 repeated
> address in 10000 runs. This may not be good enough. Consider the case of
> local exploitation, one can still easily bruteforce the address and use
> gadgets in vdso without any information leak. Even in network daemon, if it
> restarts, bruteforce is still feasible. vdso has enough good code to chain a
> proper exploit, so its randomization should be considered serious.
>
> Also i'm not sure if there could be any modulo bias in the new patch with
> respect to offset computation.

There is a bias: the vdso end address is most likely to be at the end
of a PMD.  That bias is much, much smaller than before, though.

Keep in mind, though, that 76 repeated addresses in 10k runs isn't so
terrible, as an attacker won't know *which* 76 addresses will repeat.
But yes, this won't prevent an exploit by an attacker who can try many
thousands of times.

>
> 2. As Hector already mentioned, if someone could figure out the address of
> an executable segment(vdso in this case) by knowing an address of data area
> (stack), this may not be desirable. I agree that some information leak is
> needed here.

There are two reasons I don't want to just treat the vdso like every other dso:

1. Compatibility.  Changing that would not be okay for stable kernels,
whereas my patch doesn't change the allowed range.

2. I'd like to make sure it's not worse for some reason.  The vdso has
very well known contents, and if you can somehow jump somewhere
relative to another dso's code or data, then it could be used for an
exploit.  I don't know how realistic that is.

--Andy

>
>
> On Tue, Dec 23, 2014 at 5:08 AM, Andy Lutomirski <luto@amacapital.net>
> wrote:
>>
>> On Mon, Dec 22, 2014 at 3:23 PM, Hector Marco Gisbert <hecmargi@upv.es>
>> wrote:
>> >> Before I even *consider* the code, I want to know two things:
>> >
>> >  >
>> >  > 1. Is there actually a problem in the first place?  The vdso
>> >  > randomization in all released kernels is blatantly buggy, but it's
>> >  > fixed in -tip, so it should be fixed by the time that 3.19-rc2 comes
>> >  > out, and the fix is marked for -stable.  Can you try a fixed kernel:
>> >  >
>> >  >
>> >
>> > https://git.kernel.org/cgit/linux/kernel/git/tip/tip.git/commit/?h=x86/urgent&id=fbe1bf140671619508dfa575d74a185ae53c5dbb
>> >
>> >
>> > Well if it is already fixed, then great!.
>> >
>> > But since the vdso is something like a library (cause it contains code,
>> > and no data), it shall be handled as a library and so it shall be
>> > located jointly with the other libraries rather than close to the stack.
>> > Later I'll talk about randomizing libraries among them.
>> >
>> > I think that the core idea of the current ASLR implementation is that
>> > all the areas that share similar content (libraries, stack, heap,
>> > application) shall be placed together. Following more or less the MILS
>> > division. This way, a memory leak of an address of the stack is not very
>> > useful for building a ROP on the libraries.
>> >
>> > Another issue is the page table locality. The implementation tries to
>> > allocate the vdso "close" to the stack so that is fits into the PMD of
>> > the stack (and so, use less pages for the pagetables). Well, placing the
>> > vdso in the mmap area would solve the problem at once.
>> >
>> > Unfortunately, with your path the VDSO entropy has 18 entropy bits. But
>> > this is not true. The real entropy is masked with the entropy of the
>> > stack. In other words, if an attacker guesses where the stack is placed
>> > they have to do negligible work to guess where the VDSO is located.
>> > Note that, a memory leak from a data area (which is of little help to
>> > the attacker) can be used to locate the VDSO (which is of great interest
>> > because it is executable and contains nice stuff).
>>
>> I'm not sure it's negligible.  It's 9 bits if the attacker can figure
>> out the stack alignment and 18 bits if the attacker can't.  This isn't
>> great, but it's far from nothing.
>>
>> >
>> > Using my solution, the VDSO will have the same 28 bits of randomness
>> > than the libraries (but all will be together).
>> >
>> > After after 10000 executions I have found 76 repeated addresses (still
>> > low entropy, but much better than before). But with my patch, there was
>> > no repetition (much better entropy).
>> >
>> >
>> >  > 2. I'm not sure your patch helpes.  The currently exciting articles
>> > on
>> >  > ASLR weaknesses seem to focus on two narrow issues:
>> >  >
>> >  > a. With PIE executables, the offset from the executable to the
>> >  > libraries is constant.  This is unfortunate when your threat model
>> >  > allows you to learn the executable base address and all your gadgets
>> >  > are in shared libraries.
>> >
>> > Regardes the offset2lib... The core idea is that we shall consider the
>> > application code and libraries as two slightly different things (or two
>> > different security regions). Since applications are in general more
>> > prone to have bugs than libraries, it seems that this is the way to do
>> > it from the security point of view.  Obviously, stack and libraries are
>> > clearly apart (you can even assign different access permissions).
>> > Application code and libraries are not that different, but it would be
>> > better of they are not together.... and sincerely, I think that the cost
>> > of allocate them apart is so small that it worth the code.
>> >
>> > If the extra cost of (One or two pages) per process required to place
>> > the application code to another area is too high, then may be it can be
>> > implemented as another level of ASLR randomize_va_space=3 (if any).
>> >
>> >
>> >  > b. The VDSO base address is pathetically low on min entropy.  This
>> >  > will be dramatically improved shortly.
>> >  >
>> >  > The pax tests seem to completely ignore the joint distribution of the
>> >  > relevant addresses.  My crystal ball predicts that, if I apply your
>> >  > patch, someone will write an article observing that the libc-to-vdso
>> >  > offset is constant or, OMG!, the PIE-executable-to-vdso offset is
>> >  > constant.
>> >  >
>> >  > So... is there a problem in the first place, and is the situation
>> >  > really improved with your patch?
>> >  >
>> >  > --Andy
>> >
>> > Absolutely agree.
>> >
>> > The offset2x shall be considered now. And rather than moving objects
>> > like the vdso, vvar stack, heap... etc.. etc.. we shall consider
>> > seriously the cost of a full (all maps) to be real random. That is
>> > inter-mmap ASLR.
>> >
>> > Current implementation is not that bad, except that the application was
>> > considered in the same "category" than libraries. But I guess that it
>> > deserves a region for its own. Also, I think that executable code shall
>> > be apart from data.. which supports the idea of inter-mmap
>> > randomization.
>> >
>> > Sorry if I'm mixing VDSO, and offset2lib issues, but they share a
>> > similar core problem.
>> >
>>
>> If I see a real argument that randomizing the vdso like a library is
>> better than randomizing it separately but weakly, I'll gladly consider
>> it.  But the references I've seen (and I haven't looked all that hard,
>> and I'm not an memory exploit writer) are unconvincing.
>>
>> I'd really rather see a strong inter-mmap randomization scheme adopted.
>>
>> --Andy
>>
>> >
>> > --Hector Marco.
>> >
>> >
>> >
>>
>>
>>
>> --
>> Andy Lutomirski
>> AMA Capital Management, LLC
>
>
>
>
> --
> Regards,
> Reno Robert
> http://v0ids3curity.blogspot.in/
>
>



-- 
Andy Lutomirski
AMA Capital Management, LLC

^ permalink raw reply	[flat|nested] 23+ messages in thread

* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-23  8:15                         ` Andy Lutomirski
@ 2014-12-23 20:06                           ` Hector Marco Gisbert
  2014-12-23 20:53                             ` Andy Lutomirski
  0 siblings, 1 reply; 23+ messages in thread
From: Hector Marco Gisbert @ 2014-12-23 20:06 UTC (permalink / raw)
  To: Andy Lutomirski
  Cc: Reno Robert, Cyrill Gorcunov, Pavel Emelyanov, Catalin Marinas,
	Heiko Carstens, Oleg Nesterov, Ingo Molnar, Anton Blanchard,
	Jiri Kosina, Russell King - ARM Linux, H. Peter Anvin,
	David Daney, Andrew Morton, Arun Chandran, linux-kernel,
	Martin Schwidefsky, Ismael Ripoll, Christian Borntraeger,
	Thomas Gleixner, Hanno Böck, Will Deacon,
	Benjamin Herrenschmidt, Kees Cook

[PATCH] ASLRv3: inter-mmap ASLR (IM-ASLR).


The following is a patch that implements inter-mmap ASLR (IM-ASLR),
which randomizes all mmaps. All the discussion about the current
implementation (offset2lib and vdso) shall be resolved by fixing
the current implementation (randomize_va_space=2).

While we were working on the offset2lib issue we realized that
a complete solution would be to randomize each mmap independently
(as Reno has suggested). Here is a patch to achieve that, together
with a discussion of it.

First of all, I think that IM-ASLR is not mandatory, considering
current attacks and technology. Note that it is very risky to make any
sound claim about what is secure and what is not. So, take the first
claim with caution.

IM-ASLR requires a large virtual memory space to avoid fragmentation
problems (see below); therefore, I think that it is only practical on
64-bit systems.

IM-ASLR would be the most advanced ASLR implementation, surpassing
PaX ASLR. The present patch will prevent future threats (or current
threats that are unknown to me). It would be nice to have it in Linux, but
a trade-off between security and performance shall be evaluated before
adopting it as a default option.

Since the implementation is very simple and straightforward, I
suggest including it as randomize_va_space=3. This way, it may
be enabled on a per-process basis (via personality).

Another aspect to think about is: does this code add or open a new
backdoor or weakness? I don't think so. The code and its operation are
really simple, and it does not have side effects as far as I can see.

Current implementations are based on the basic idea of "zones" of
similar or comparable criticality. Now we are discussing where
the VDSO shall be placed, and whether it shall be close to the stack zone
or in the mmap zone. Well, IM-ASLR solves this problem at once: every
object is located in its own isolated "zone". In this sense,
IM-ASLR removes most of the philosophical or subjective
arguments... which are always hard to justify.

Eventually, if the IM-ASLR is effective, we will set it by default for
all apps.



Regarding fragmentation:

2^46 bytes is a huge virtual space; it is so large that fragmentation will
not be a problem. Doing some quick numbers shows the
extremely low probability of having problems due to fragmentation.

Let's suppose that the largest mmapped request is 1GB. The worst-case
scenario for failing a request is when the memory is fragmented in such a
way that all the free areas are of size (1GB - 4KB).

free     busy free    busy free     busy .....free
[1GB-4KB][4KB][1GB-4KB][4KB][1GB-4KB][4KB].....[1GB-4KB]..

This is a perfect doomsday case.

Well, in this case the number of allocated (busy) mmapped areas of 4KB
needed to fragment the memory is:

2^46 / 2^30 = 2^16

That is, an application would have to request more than 64000 mmaps of one
page (4KB). And then, if it is extremely unlucky, the next mmap of 1GB will
fail.

Obviously, we assume that all 64000 requests are "perfectly" placed
at a 1GB distance from each other. The probability that such a perfectly
spaced set of allocations occurs is less than one in (2^46)^(2^16). This is
a practically "impossible" event.

Conclusion: fragmentation is not an issue. We should be more worried
about a comet hitting our city than about running out of memory because of
fragmentation.

Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
Signed-off-by: Ismael Ripoll <iripoll@upv.es>

diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt
index 75511ef..dde92ee 100644
--- a/Documentation/sysctl/kernel.txt
+++ b/Documentation/sysctl/kernel.txt
@@ -704,6 +704,18 @@ that support this feature.
      with CONFIG_COMPAT_BRK enabled, which excludes the heap from process
      address space randomization.

+3 - Inter-mmap randomization and extended entropy. Randomizes all
+    mmap requests when the addr is NULL.
+
+    This is an improvement over the previous ASLR option which:
+    a) extends the number of random bits in the addresses and
+    b) adds randomness to the offsets between mmapped objects.
+
+    This feature is only available on architectures which implement a
+    large virtual memory space (i.e. 64-bit systems). On 32-bit systems,
+    fragmentation can be a problem for applications which use large
+    memory areas.
+
  ==============================================================

  reboot-cmd: (Sparc only)
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b1f9a20..380873f 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1,6 +1,7 @@
  config ARM64
  	def_bool y
  	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
+	select RANDOMIZE_ALL_MMAPS
  	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
  	select ARCH_HAS_GCOV_PROFILE_ALL
  	select ARCH_HAS_SG_CHAIN
diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
index fde9923..2b54bbe 100644
--- a/arch/arm64/kernel/process.c
+++ b/arch/arm64/kernel/process.c
@@ -43,6 +43,9 @@
  #include <linux/hw_breakpoint.h>
  #include <linux/personality.h>
  #include <linux/notifier.h>
+#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
+#include <linux/security.h>
+#endif

  #include <asm/compat.h>
  #include <asm/cacheflush.h>
@@ -376,5 +379,16 @@ static unsigned long randomize_base(unsigned long base)

  unsigned long arch_randomize_brk(struct mm_struct *mm)
  {
+#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
+        unsigned long brk;
+        unsigned long min_addr = PAGE_ALIGN(mmap_min_addr);
+        unsigned long max_addr = PAGE_ALIGN(current->mm->mmap_base);
+
+        if ( (randomize_va_space > 2) && !is_compat_task() ){
+                brk = (get_random_long() << PAGE_SHIFT) % (max_addr - min_addr);
+                brk += min_addr;
+                return brk;
+        }
+#endif
  	return randomize_base(mm->brk);
  }
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ba397bd..2607ce9 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -86,6 +86,7 @@ config X86
  	select HAVE_ARCH_KMEMCHECK
  	select HAVE_USER_RETURN_NOTIFIER
  	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
+	select RANDOMIZE_ALL_MMAPS if X86_64
  	select HAVE_ARCH_JUMP_LABEL
  	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
  	select SPARSE_IRQ
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index e127dda..7b7745d 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -19,6 +19,9 @@
  #include <linux/cpuidle.h>
  #include <trace/events/power.h>
  #include <linux/hw_breakpoint.h>
+#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
+#include <linux/security.h>
+#endif
  #include <asm/cpu.h>
  #include <asm/apic.h>
  #include <asm/syscalls.h>
@@ -465,7 +468,18 @@ unsigned long arch_align_stack(unsigned long sp)

  unsigned long arch_randomize_brk(struct mm_struct *mm)
  {
-	unsigned long range_end = mm->brk + 0x02000000;
-	return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
+#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
+	unsigned long brk;
+	unsigned long min_addr = PAGE_ALIGN(mmap_min_addr);
+	unsigned long max_addr = PAGE_ALIGN(current->mm->mmap_base);
+
+	if ( (randomize_va_space > 2) && !is_compat_task() ){
+		brk = (get_random_long() << PAGE_SHIFT) % (max_addr - min_addr);
+		brk += min_addr;
+		return brk;
+	}
+#endif
+
+	return randomize_range(mm->brk, mm->brk + 0x02000000, 0) ? : mm->brk;
  }

diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
index 009495b..205f1a3 100644
--- a/arch/x86/vdso/vma.c
+++ b/arch/x86/vdso/vma.c
@@ -19,6 +19,9 @@
  #include <asm/page.h>
  #include <asm/hpet.h>
  #include <asm/desc.h>
+#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
+#include <asm/compat.h>
+#endif

  #if defined(CONFIG_X86_64)
  unsigned int __read_mostly vdso64_enabled = 1;
@@ -54,6 +57,11 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
  #else
  	unsigned long addr, end;
  	unsigned offset;
+#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
+	if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 2)
+	    && !is_compat_task())
+		return 0;
+#endif
  	end = (start + PMD_SIZE - 1) & PMD_MASK;
  	if (end >= TASK_SIZE_MAX)
  		end = TASK_SIZE_MAX;
diff --git a/drivers/char/random.c b/drivers/char/random.c
index 04645c0..f6a231f 100644
--- a/drivers/char/random.c
+++ b/drivers/char/random.c
@@ -1740,6 +1740,13 @@ unsigned int get_random_int(void)
  }
  EXPORT_SYMBOL(get_random_int);

+#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
+unsigned long get_random_long(void)
+{
+	return get_random_int() + (sizeof(long) > 4 ? (unsigned long)get_random_int() << 32 : 0);
+}
+EXPORT_SYMBOL(get_random_long);
+#endif
  /*
   * randomize_range() returns a start address such that
   *
diff --git a/fs/Kconfig.binfmt b/fs/Kconfig.binfmt
index c055d56..2839124 100644
--- a/fs/Kconfig.binfmt
+++ b/fs/Kconfig.binfmt
@@ -30,6 +30,9 @@ config COMPAT_BINFMT_ELF
  config ARCH_BINFMT_ELF_RANDOMIZE_PIE
  	bool

+config RANDOMIZE_ALL_MMAPS
+	bool
+
  config ARCH_BINFMT_ELF_STATE
  	bool

diff --git a/include/linux/random.h b/include/linux/random.h
index b05856e..8ea61e1 100644
--- a/include/linux/random.h
+++ b/include/linux/random.h
@@ -23,6 +23,9 @@ extern const struct file_operations random_fops, urandom_fops;
  #endif

  unsigned int get_random_int(void);
+#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
+unsigned long get_random_long(void);
+#endif
  unsigned long randomize_range(unsigned long start, unsigned long end, unsigned long len);

  u32 prandom_u32(void);
diff --git a/mm/mmap.c b/mm/mmap.c
index 7b36aa7..8c9c3c7 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -41,6 +41,10 @@
  #include <linux/notifier.h>
  #include <linux/memory.h>
  #include <linux/printk.h>
+#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
+#include <linux/random.h>
+#include <asm/compat.h>
+#endif

  #include <asm/uaccess.h>
  #include <asm/cacheflush.h>
@@ -2005,7 +2009,19 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
  	unsigned long (*get_area)(struct file *, unsigned long,
  				  unsigned long, unsigned long, unsigned long);

-	unsigned long error = arch_mmap_check(addr, len, flags);
+	unsigned long error;
+#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
+	unsigned long min_addr = PAGE_ALIGN(mmap_min_addr);
+	unsigned long max_addr = PAGE_ALIGN(current->mm->mmap_base);
+
+	/* ASLRv3: if addr is NULL then randomize the mmap */
+	if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 2)
+	    && !is_compat_task() && !addr ){
+		addr = (get_random_long() << PAGE_SHIFT) % (max_addr - min_addr);
+		addr += min_addr;
+	}
+#endif
+	error = arch_mmap_check(addr, len, flags);
  	if (error)
  		return error;



Hector Marco.


^ permalink raw reply related	[flat|nested] 23+ messages in thread

* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-23 20:06                           ` Hector Marco Gisbert
@ 2014-12-23 20:53                             ` Andy Lutomirski
  0 siblings, 0 replies; 23+ messages in thread
From: Andy Lutomirski @ 2014-12-23 20:53 UTC (permalink / raw)
  To: Hector Marco Gisbert
  Cc: Reno Robert, Cyrill Gorcunov, Pavel Emelyanov, Catalin Marinas,
	Heiko Carstens, Oleg Nesterov, Ingo Molnar, Anton Blanchard,
	Jiri Kosina, Russell King - ARM Linux, H. Peter Anvin,
	David Daney, Andrew Morton, Arun Chandran, linux-kernel,
	Martin Schwidefsky, Ismael Ripoll, Christian Borntraeger,
	Thomas Gleixner, Hanno Böck, Will Deacon,
	Benjamin Herrenschmidt, Kees Cook

On Tue, Dec 23, 2014 at 12:06 PM, Hector Marco Gisbert <hecmargi@upv.es> wrote:
> [PATCH] ASLRv3: inter-mmap ASLR (IM-ASLR).
>
>
> The following is a patch that implements the inter-mmap (IM-ASLR)
> which randomizes all mmaps. All the discussion about the current
> implementation (offset2lib and vdso) shall be solved by fixing
> the current implementation (randomize_va_space =2).

General comments:

You have a bunch of copies of roughly this:

+       unsigned long brk;
+       unsigned long min_addr = PAGE_ALIGN(mmap_min_addr);
+       unsigned long max_addr = PAGE_ALIGN(current->mm->mmap_base);
+
+       if ( (randomize_va_space > 2) && !is_compat_task() ){
+               brk = (get_random_long() << PAGE_SHIFT) % (max_addr - min_addr);
+               brk += min_addr;
+               return brk;
+       }

I would write one helper that does that.  I would also make a few changes:

 - Use get_random_bytes instead of get_random_long.

 - is_compat_task is wrong.  It returns true when called in the
context of a compat syscall, which isn't what you want in most cases.

 - For architectures with large enough max_addr - min_addr, you are
needlessly biasing your result.  How about:

(random_long % ((max - min) >> PAGE_SHIFT)) << PAGE_SHIFT
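
For illustration, a minimal sketch of the kind of shared helper suggested
above, combining get_random_bytes() with a modulo over the page count so
the result is page-aligned and not biased by shifting first; the helper
name and its placement are hypothetical, not taken from any posted patch:

#include <linux/random.h>	/* get_random_bytes() */
#include <asm/page.h>		/* PAGE_SHIFT */

/* Pick a page-aligned address in [min_addr, max_addr); sketch only. */
static unsigned long pick_random_mmap_addr(unsigned long min_addr,
					   unsigned long max_addr)
{
	unsigned long rnd, pages = (max_addr - min_addr) >> PAGE_SHIFT;

	if (!pages)
		return min_addr;

	get_random_bytes(&rnd, sizeof(rnd));

	/* modulo over the number of pages, then shift back to bytes,
	 * instead of "(random << PAGE_SHIFT) % range" as in the patch */
	return min_addr + ((rnd % pages) << PAGE_SHIFT);
}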

I also think that you should restrict the fully randomized range to
one quarter or one half of the total address space.  Things like the
Chromium sandbox need enormous amounts of contiguous virtual address
space to play in.  Also, you should make sure that a randomized mmap
never gets in the way of the stack or brk (maybe you're already doing
this).  Otherwise you'll have intermittent crashes.

--Andy

>
> While we were working in the offset2lib we realized that
> a complete solution would to randomize each mmap independently.
> (as reno has suggested). Here is patch to achieve that and I
> discussion about it.
>
> First of all, I think that IM-ASLR is not mandatory, considering
> current attacks and technology. Note that it very risky to make any
> sound claim about what is secure and what is not. So, take the first
> claim with caution.
>
> IM-ASLR requires a large virtual memory space to avoid fragmentation
> problems (see below), therefore, I think that it is only practical on
> 64 bit systems.
>
> IM-ASLR will be the most advanced ASLR implementation overpassing
> PaX ASLR. The present patch will prevent future threats (or current
> threats, but unknown to me). It would be nice have it in Linux, but
> a trade-off between security and performance shall be done before
> adopting it as a default option.
>
> Since the implementation is very simple and straight forward, I
> suggest to include it as randomize_va_space=3. This way, it may
> be enabled on a per process basis (personality).
>
> Another aspect to think about is: Does this code adds or opens a new
> backdoor or weakness? I don't think so. The code and the operation is
> really simple, and it does not have side effects as far as I see.
>
> Current implementations are based on the basic idea of "zones" of
> similar or comparable criticality. Now we are discussing about where
> the VDSO shall be placed, and if it shall be close to the stack zone
> or on the mmap zone. Well, IM-ASLR solves this problem at once: all
> objects are located on its own isolated "zone". In this sense, the
> IM-ASLR removes most of the philosophical or subjective
> arguments... which are always hard to justify.
>
> Eventually, if the IM-ASLR is effective, we will set it by default for
> all apps.
>
>
>
> Regarding fragmentation:
>
> 2^46 is a huge virtual space, it is so large that fragmentation will
> not be a problem. Just doing some numbers, you can understand the
> extremely low probability of having problems due to fragmentation.
>
> Let's suppose that the largest mmaped request is 1Gb. The worst case
> scenario to fail a request is when the memory is fragmented in such a
> way that all the free areas are of size (1Gb-4kb).
>
> free     busy free    busy free     busy .....free
> [1Gb-4kb][4kb][1G-4kb][4kb][1Gb-4kb][4kb].....[1Gb-4kb]..
>
> This is a perfectly doomsday case.
>
> Well, in this case the number of allocated (busy) mmaped areas of 4kb
> needed to fragment the memory is:
>
> 2^46/2^30= 2^16.
>
> That is, an application would have to request more than 64000 mmaps of
> one page (4kb). And then, if it is extremely unlucky, the next mmap of
> 1Gb will fail.
>
> Obviously, we assume that all the 64000 requests are "perfectly" placed
> at a 1Gb distance from each other. The probability that such perfectly
> spaced allocations occur is less than one in (2^46)^(2^16). This is a
> fairly "impossible" event.
>
> Conclusion: Fragmentation is not an issue. We should be more worried
> about a comet hitting our city than about running out of memory
> because of fragmentation.
>
> Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
> Signed-off-by: Ismael Ripoll <iripoll@upv.es>
>
> diff --git a/Documentation/sysctl/kernel.txt b/Documentation/sysctl/kernel.txt
> index 75511ef..dde92ee 100644
> --- a/Documentation/sysctl/kernel.txt
> +++ b/Documentation/sysctl/kernel.txt
> @@ -704,6 +704,18 @@ that support this feature.
>      with CONFIG_COMPAT_BRK enabled, which excludes the heap from process
>      address space randomization.
>
> +3 - Inter-mmap randomization and extended entropy. Randomizes all
> +    mmap requests when the addr is NULL.
> +
> +    This is an improvement of the previous ASLR option which:
> +    a) extends the number of random bits in the addresses and
> +    b) adds randomness to the offset between mmapped objects.
> +
> +    This feature is only available on architectures which implement a
> +    large virtual memory space (i.e. 64-bit systems). On 32-bit systems,
> +    fragmentation can be a problem for applications which use large
> +    memory areas.
> +
>  ==============================================================
>
>  reboot-cmd: (Sparc only)
> diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
> index b1f9a20..380873f 100644
> --- a/arch/arm64/Kconfig
> +++ b/arch/arm64/Kconfig
> @@ -1,6 +1,7 @@
>  config ARM64
>         def_bool y
>         select ARCH_BINFMT_ELF_RANDOMIZE_PIE
> +       select RANDOMIZE_ALL_MMAPS
>         select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
>         select ARCH_HAS_GCOV_PROFILE_ALL
>         select ARCH_HAS_SG_CHAIN
> diff --git a/arch/arm64/kernel/process.c b/arch/arm64/kernel/process.c
> index fde9923..2b54bbe 100644
> --- a/arch/arm64/kernel/process.c
> +++ b/arch/arm64/kernel/process.c
> @@ -43,6 +43,9 @@
>  #include <linux/hw_breakpoint.h>
>  #include <linux/personality.h>
>  #include <linux/notifier.h>
> +#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
> +#include <linux/security.h>
> +#endif
>
>  #include <asm/compat.h>
>  #include <asm/cacheflush.h>
> @@ -376,5 +379,16 @@ static unsigned long randomize_base(unsigned long base)
>
>  unsigned long arch_randomize_brk(struct mm_struct *mm)
>  {
> +#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
> +        unsigned long brk;
> +        unsigned long min_addr = PAGE_ALIGN(mmap_min_addr);
> +        unsigned long max_addr = PAGE_ALIGN(current->mm->mmap_base);
> +
> +        if ( (randomize_va_space > 2) && !is_compat_task() ){
> +                brk = (get_random_long() << PAGE_SHIFT) % (max_addr - min_addr);
> +                brk += min_addr;
> +                return brk;
> +        }
> +#endif
>         return randomize_base(mm->brk);
>  }
> diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
> index ba397bd..2607ce9 100644
> --- a/arch/x86/Kconfig
> +++ b/arch/x86/Kconfig
> @@ -86,6 +86,7 @@ config X86
>         select HAVE_ARCH_KMEMCHECK
>         select HAVE_USER_RETURN_NOTIFIER
>         select ARCH_BINFMT_ELF_RANDOMIZE_PIE
> +       select RANDOMIZE_ALL_MMAPS if X86_64
>         select HAVE_ARCH_JUMP_LABEL
>         select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
>         select SPARSE_IRQ
> diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
> index e127dda..7b7745d 100644
> --- a/arch/x86/kernel/process.c
> +++ b/arch/x86/kernel/process.c
> @@ -19,6 +19,9 @@
>  #include <linux/cpuidle.h>
>  #include <trace/events/power.h>
>  #include <linux/hw_breakpoint.h>
> +#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
> +#include <linux/security.h>
> +#endif
>  #include <asm/cpu.h>
>  #include <asm/apic.h>
>  #include <asm/syscalls.h>
> @@ -465,7 +468,18 @@ unsigned long arch_align_stack(unsigned long sp)
>
>  unsigned long arch_randomize_brk(struct mm_struct *mm)
>  {
> -       unsigned long range_end = mm->brk + 0x02000000;
> -       return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
> +#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
> +       unsigned long brk;
> +       unsigned long min_addr = PAGE_ALIGN(mmap_min_addr);
> +       unsigned long max_addr = PAGE_ALIGN(current->mm->mmap_base);
> +
> +       if ( (randomize_va_space > 2) && !is_compat_task() ){
> +               brk = (get_random_long() << PAGE_SHIFT) % (max_addr - min_addr);
> +               brk += min_addr;
> +               return brk;
> +       }
> +#endif
> +
> +       return randomize_range(mm->brk, mm->brk + 0x02000000, 0) ? : mm->brk;
>  }
>
> diff --git a/arch/x86/vdso/vma.c b/arch/x86/vdso/vma.c
> index 009495b..205f1a3 100644
> --- a/arch/x86/vdso/vma.c
> +++ b/arch/x86/vdso/vma.c
> @@ -19,6 +19,9 @@
>  #include <asm/page.h>
>  #include <asm/hpet.h>
>  #include <asm/desc.h>
> +#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
> +#include <asm/compat.h>
> +#endif
>
>  #if defined(CONFIG_X86_64)
>  unsigned int __read_mostly vdso64_enabled = 1;
> @@ -54,6 +57,11 @@ static unsigned long vdso_addr(unsigned long start, unsigned len)
>  #else
>         unsigned long addr, end;
>         unsigned offset;
> +#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
> +       if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 2)
> +           && !is_compat_task())
> +               return 0;
> +#endif
>         end = (start + PMD_SIZE - 1) & PMD_MASK;
>         if (end >= TASK_SIZE_MAX)
>                 end = TASK_SIZE_MAX;
> diff --git a/drivers/char/random.c b/drivers/char/random.c
> index 04645c0..f6a231f 100644
> --- a/drivers/char/random.c
> +++ b/drivers/char/random.c
> @@ -1740,6 +1740,13 @@ unsigned int get_random_int(void)
>  }
>  EXPORT_SYMBOL(get_random_int);
>
> +#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
> +unsigned long get_random_long(void)
> +{
> +       return get_random_int() + (sizeof(long) > 4 ? (unsigned long)get_random_int() << 32 : 0);
> +}
> +EXPORT_SYMBOL(get_random_long);
> +#endif
>  /*
>   * randomize_range() returns a start address such that
>   *
> diff --git a/fs/Kconfig.binfmt b/fs/Kconfig.binfmt
> index c055d56..2839124 100644
> --- a/fs/Kconfig.binfmt
> +++ b/fs/Kconfig.binfmt
> @@ -30,6 +30,9 @@ config COMPAT_BINFMT_ELF
>  config ARCH_BINFMT_ELF_RANDOMIZE_PIE
>         bool
>
> +config RANDOMIZE_ALL_MMAPS
> +       bool
> +
>  config ARCH_BINFMT_ELF_STATE
>         bool
>
> diff --git a/include/linux/random.h b/include/linux/random.h
> index b05856e..8ea61e1 100644
> --- a/include/linux/random.h
> +++ b/include/linux/random.h
> @@ -23,6 +23,9 @@ extern const struct file_operations random_fops, urandom_fops;
>  #endif
>
>  unsigned int get_random_int(void);
> +#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
> +unsigned long get_random_long(void);
> +#endif
>  unsigned long randomize_range(unsigned long start, unsigned long end, unsigned long len);
>
>  u32 prandom_u32(void);
> diff --git a/mm/mmap.c b/mm/mmap.c
> index 7b36aa7..8c9c3c7 100644
> --- a/mm/mmap.c
> +++ b/mm/mmap.c
> @@ -41,6 +41,10 @@
>  #include <linux/notifier.h>
>  #include <linux/memory.h>
>  #include <linux/printk.h>
> +#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
> +#include <linux/random.h>
> +#include <asm/compat.h>
> +#endif
>
>  #include <asm/uaccess.h>
>  #include <asm/cacheflush.h>
> @@ -2005,7 +2009,19 @@ get_unmapped_area(struct file *file, unsigned long addr, unsigned long len,
>         unsigned long (*get_area)(struct file *, unsigned long,
>                                   unsigned long, unsigned long, unsigned long);
>
> -       unsigned long error = arch_mmap_check(addr, len, flags);
> +       unsigned long error;
> +#ifdef CONFIG_RANDOMIZE_ALL_MMAPS
> +       unsigned long min_addr = PAGE_ALIGN(mmap_min_addr);
> +       unsigned long max_addr = PAGE_ALIGN(current->mm->mmap_base);
> +
> +       /* ASRLv3: If addr is NULL then randomize the mmap */
> +       if ((current->flags & PF_RANDOMIZE) && (randomize_va_space > 2)
> +           && !is_compat_task() && !addr ){
> +               addr = (get_random_long() << PAGE_SHIFT) % (max_addr - min_addr);
> +               addr += min_addr;
> +       }
> +#endif
> +       error = arch_mmap_check(addr, len, flags);
>         if (error)
>                 return error;
>
>
>
> Hector Marco.
>



-- 
Andy Lutomirski
AMA Capital Management, LLC


* Re: [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack
  2014-12-11 22:11   ` Kees Cook
  2014-12-12 16:32     ` Hector Marco
@ 2015-01-07 17:26     ` Hector Marco Gisbert
  1 sibling, 0 replies; 23+ messages in thread
From: Hector Marco Gisbert @ 2015-01-07 17:26 UTC (permalink / raw)
  To: Kees Cook
  Cc: linux-kernel, Andy Lutomirski, David Daney, Jiri Kosina,
	Arun Chandran, Hanno Böck, Andrew Morton, Thomas Gleixner,
	Ingo Molnar, H. Peter Anvin, Russell King - ARM Linux,
	Catalin Marinas, Will Deacon, Oleg Nesterov, Heiko Carstens,
	Martin Schwidefsky, Anton Blanchard, Benjamin Herrenschmidt,
	Christian Borntraeger


[PATCH] Fix offset2lib issue for x86*, ARM*, PowerPC and MIPS.

Hi,

Following your suggestions, here is the patch that fixes the offset2lib issue.

Only the s390 architecture is not affected by the offset2lib attack; the
solution for all the other architectures is based on the s390 one, so the
CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE option is no longer needed (it has
been removed).


Signed-off-by: Hector Marco-Gisbert <hecmargi@upv.es>
Signed-off-by: Ismael Ripoll <iripoll@upv.es>
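
For readability, every per-architecture hunk below adds the same small
helper (MIPS uses brk_rnd() instead of mmap_rnd(); everything else is
identical) and redefines ELF_ET_DYN_BASE to go through it:

unsigned long randomize_et_dyn(unsigned long base)
{
	unsigned long ret;

	/* Skip the extra entropy when randomization is disabled for this task. */
	if ((current->personality & ADDR_NO_RANDOMIZE) ||
	    !(current->flags & PF_RANDOMIZE))
		return base;

	/* Add the architecture's mmap entropy on top of the fixed base. */
	ret = base + mmap_rnd();
	return (ret > base) ? ret : base;
}

So the PIE executable keeps its own base region, away from mmap_base,
but that base itself is now randomized.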


diff --git a/arch/arm/Kconfig b/arch/arm/Kconfig
index 97d07ed..ee7ea7e 100644
--- a/arch/arm/Kconfig
+++ b/arch/arm/Kconfig
@@ -1,7 +1,6 @@
  config ARM
  	bool
  	default y
-	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
  	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
  	select ARCH_HAS_TICK_BROADCAST if GENERIC_CLOCKEVENTS_BROADCAST
  	select ARCH_HAVE_CUSTOM_GPIO_H
diff --git a/arch/arm/include/asm/elf.h b/arch/arm/include/asm/elf.h
index afb9caf..6755cd8 100644
--- a/arch/arm/include/asm/elf.h
+++ b/arch/arm/include/asm/elf.h
@@ -115,7 +115,8 @@ int dump_task_regs(struct task_struct *t, elf_gregset_t *elfregs);
     the loader.  We need to make sure that it is out of the way of the program
     that it will "exec", and that there is sufficient room for the brk.  */

-#define ELF_ET_DYN_BASE	(2 * TASK_SIZE / 3)
+extern unsigned long randomize_et_dyn(unsigned long base);
+#define ELF_ET_DYN_BASE	(randomize_et_dyn(2 * TASK_SIZE / 3))

  /* When the program starts, a1 contains a pointer to a function to be
     registered with atexit, as per the SVR4 ABI.  A value of 0 means we
diff --git a/arch/arm/mm/mmap.c b/arch/arm/mm/mmap.c
index 5e85ed3..75ba490 100644
--- a/arch/arm/mm/mmap.c
+++ b/arch/arm/mm/mmap.c
@@ -30,6 +30,17 @@ static int mmap_is_legacy(void)
  	return sysctl_legacy_va_layout;
  }

+static unsigned long mmap_rnd(void)
+{
+	unsigned long rnd = 0;
+
+	/* 8 bits of randomness in 20 address space bits */
+	if (current->flags & PF_RANDOMIZE)
+		rnd = (long)get_random_int() % (1 << 8);
+
+	return rnd << PAGE_SHIFT;
+}
+
  static unsigned long mmap_base(unsigned long rnd)
  {
  	unsigned long gap = rlimit(RLIMIT_STACK);
@@ -230,3 +241,12 @@ int devmem_is_allowed(unsigned long pfn)
  }

  #endif
+
+unsigned long randomize_et_dyn(unsigned long base)
+{
+	unsigned long ret;
+	if ((current->personality & ADDR_NO_RANDOMIZE) || !(current->flags & PF_RANDOMIZE))
+		return base;
+	ret = base + mmap_rnd();
+	return (ret > base) ? ret : base;
+}
diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index b1f9a20..5580d90 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -1,6 +1,5 @@
  config ARM64
  	def_bool y
-	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
  	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
  	select ARCH_HAS_GCOV_PROFILE_ALL
  	select ARCH_HAS_SG_CHAIN
diff --git a/arch/arm64/include/asm/elf.h b/arch/arm64/include/asm/elf.h
index 1f65be3..01d3aab 100644
--- a/arch/arm64/include/asm/elf.h
+++ b/arch/arm64/include/asm/elf.h
@@ -126,7 +126,7 @@ typedef struct user_fpsimd_state elf_fpregset_t;
   * that it will "exec", and that there is sufficient room for the brk.
   */
  extern unsigned long randomize_et_dyn(unsigned long base);
-#define ELF_ET_DYN_BASE	(2 * TASK_SIZE_64 / 3)
+#define ELF_ET_DYN_BASE	(randomize_et_dyn(2 * TASK_SIZE_64 / 3))

  /*
   * When the program starts, a1 contains a pointer to a function to be
@@ -169,7 +169,7 @@ extern unsigned long arch_randomize_brk(struct mm_struct *mm);
  #define COMPAT_ELF_PLATFORM		("v8l")
  #endif

-#define COMPAT_ELF_ET_DYN_BASE		(2 * TASK_SIZE_32 / 3)
+#define COMPAT_ELF_ET_DYN_BASE		(randomize_et_dyn(2 * TASK_SIZE_32 / 3))

  /* AArch32 registers. */
  #define COMPAT_ELF_NGREG		18
diff --git a/arch/arm64/mm/mmap.c b/arch/arm64/mm/mmap.c
index 54922d1..c444dcc 100644
--- a/arch/arm64/mm/mmap.c
+++ b/arch/arm64/mm/mmap.c
@@ -89,6 +89,15 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
  }
  EXPORT_SYMBOL_GPL(arch_pick_mmap_layout);

+unsigned long randomize_et_dyn(unsigned long base)
+{
+	unsigned long ret;
+	if ((current->personality & ADDR_NO_RANDOMIZE) || !(current->flags & PF_RANDOMIZE))
+		return base;
+	ret = base + mmap_rnd();
+	return (ret > base) ? ret : base;
+}
+

  /*
   * You really shouldn't be using read() or write() on /dev/mem.  This might go
diff --git a/arch/mips/Kconfig b/arch/mips/Kconfig
index 3289969..31cc248 100644
--- a/arch/mips/Kconfig
+++ b/arch/mips/Kconfig
@@ -23,7 +23,6 @@ config MIPS
  	select HAVE_KRETPROBES
  	select HAVE_DEBUG_KMEMLEAK
  	select HAVE_SYSCALL_TRACEPOINTS
-	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
  	select HAVE_ARCH_TRANSPARENT_HUGEPAGE if CPU_SUPPORTS_HUGEPAGES && 64BIT
  	select RTC_LIB if !MACH_LOONGSON
  	select GENERIC_ATOMIC64 if !64BIT
diff --git a/arch/mips/include/asm/elf.h b/arch/mips/include/asm/elf.h
index eb4d95d..fcac4c99 100644
--- a/arch/mips/include/asm/elf.h
+++ b/arch/mips/include/asm/elf.h
@@ -402,7 +402,8 @@ extern const char *__elf_platform;
     that it will "exec", and that there is sufficient room for the brk.	*/

  #ifndef ELF_ET_DYN_BASE
-#define ELF_ET_DYN_BASE		(TASK_SIZE / 3 * 2)
+extern unsigned long randomize_et_dyn(unsigned long base);
+#define ELF_ET_DYN_BASE		(randomize_et_dyn(TASK_SIZE / 3 * 2))
  #endif

  #define ARCH_HAS_SETUP_ADDITIONAL_PAGES 1
diff --git a/arch/mips/mm/mmap.c b/arch/mips/mm/mmap.c
index f1baadd..d0c4a2d 100644
--- a/arch/mips/mm/mmap.c
+++ b/arch/mips/mm/mmap.c
@@ -196,3 +196,12 @@ int __virt_addr_valid(const volatile void *kaddr)
  	return pfn_valid(PFN_DOWN(virt_to_phys(kaddr)));
  }
  EXPORT_SYMBOL_GPL(__virt_addr_valid);
+
+unsigned long randomize_et_dyn(unsigned long base)
+{
+	unsigned long ret;
+	if ((current->personality & ADDR_NO_RANDOMIZE) || !(current->flags & PF_RANDOMIZE))
+		return base;
+	ret = base + brk_rnd();
+	return (ret > base) ? ret : base;
+}
diff --git a/arch/powerpc/Kconfig b/arch/powerpc/Kconfig
index a2a168e..fa4c877 100644
--- a/arch/powerpc/Kconfig
+++ b/arch/powerpc/Kconfig
@@ -88,7 +88,6 @@ config PPC
  	select ARCH_MIGHT_HAVE_PC_PARPORT
  	select ARCH_MIGHT_HAVE_PC_SERIO
  	select BINFMT_ELF
-	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
  	select OF
  	select OF_EARLY_FLATTREE
  	select OF_RESERVED_MEM
diff --git a/arch/powerpc/include/asm/elf.h b/arch/powerpc/include/asm/elf.h
index 57d289a..4080425 100644
--- a/arch/powerpc/include/asm/elf.h
+++ b/arch/powerpc/include/asm/elf.h
@@ -28,7 +28,8 @@
     the loader.  We need to make sure that it is out of the way of the program
     that it will "exec", and that there is sufficient room for the brk.  */

-#define ELF_ET_DYN_BASE	0x20000000
+extern unsigned long randomize_et_dyn(unsigned long base);
+#define ELF_ET_DYN_BASE	(randomize_et_dyn(0x20000000))

  #define ELF_CORE_EFLAGS (is_elf2_task() ? 2 : 0)

diff --git a/arch/powerpc/mm/mmap.c b/arch/powerpc/mm/mmap.c
index cb8bdbe..800f0a6 100644
--- a/arch/powerpc/mm/mmap.c
+++ b/arch/powerpc/mm/mmap.c
@@ -97,3 +97,12 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
  		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
  	}
  }
+
+unsigned long randomize_et_dyn(unsigned long base)
+{
+	unsigned long ret;
+	if ((current->personality & ADDR_NO_RANDOMIZE) || !(current->flags & PF_RANDOMIZE))
+		return base;
+	ret = base + mmap_rnd();
+	return (ret > base) ? ret : base;
+}
diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index ba397bd..dcfe16c 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -85,7 +85,6 @@ config X86
  	select HAVE_CMPXCHG_DOUBLE
  	select HAVE_ARCH_KMEMCHECK
  	select HAVE_USER_RETURN_NOTIFIER
-	select ARCH_BINFMT_ELF_RANDOMIZE_PIE
  	select HAVE_ARCH_JUMP_LABEL
  	select ARCH_HAS_ATOMIC64_DEC_IF_POSITIVE
  	select SPARSE_IRQ
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index ca3347a..92c6ac4 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -249,7 +249,8 @@ extern int force_personality32;
     the loader.  We need to make sure that it is out of the way of the program
     that it will "exec", and that there is sufficient room for the brk.  */

-#define ELF_ET_DYN_BASE		(TASK_SIZE / 3 * 2)
+extern unsigned long randomize_et_dyn(unsigned long base);
+#define ELF_ET_DYN_BASE		(randomize_et_dyn(TASK_SIZE / 3 * 2))

  /* This yields a mask that user programs can use to figure out what
     instruction set this CPU supports.  This could be done in user space,
diff --git a/arch/x86/mm/mmap.c b/arch/x86/mm/mmap.c
index 919b912..8f7b3bd 100644
--- a/arch/x86/mm/mmap.c
+++ b/arch/x86/mm/mmap.c
@@ -122,3 +122,11 @@ void arch_pick_mmap_layout(struct mm_struct *mm)
  		mm->get_unmapped_area = arch_get_unmapped_area_topdown;
  	}
  }
+unsigned long randomize_et_dyn(unsigned long base)
+{
+	unsigned long ret;
+	if ((current->personality & ADDR_NO_RANDOMIZE) || !(current->flags & PF_RANDOMIZE))
+		return base;
+	ret = base + mmap_rnd();
+	return (ret > base) ? ret : base;
+}
diff --git a/fs/Kconfig.binfmt b/fs/Kconfig.binfmt
index c055d56..1186190 100644
--- a/fs/Kconfig.binfmt
+++ b/fs/Kconfig.binfmt
@@ -27,8 +27,6 @@ config COMPAT_BINFMT_ELF
  	bool
  	depends on COMPAT && BINFMT_ELF

-config ARCH_BINFMT_ELF_RANDOMIZE_PIE
-	bool

  config ARCH_BINFMT_ELF_STATE
  	bool
diff --git a/fs/binfmt_elf.c b/fs/binfmt_elf.c
index 02b1691..72f7ff5 100644
--- a/fs/binfmt_elf.c
+++ b/fs/binfmt_elf.c
@@ -908,21 +908,7 @@ static int load_elf_binary(struct linux_binprm *bprm)
  			 * default mmap base, as well as whatever program they
  			 * might try to exec.  This is because the brk will
  			 * follow the loader, and is not movable.  */
-#ifdef CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE
-			/* Memory randomization might have been switched off
-			 * in runtime via sysctl or explicit setting of
-			 * personality flags.
-			 * If that is the case, retain the original non-zero
-			 * load_bias value in order to establish proper
-			 * non-randomized mappings.
-			 */
-			if (current->flags & PF_RANDOMIZE)
-				load_bias = 0;
-			else
-				load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
-#else
  			load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
-#endif
  		}

  		error = elf_map(bprm->file, load_bias + vaddr, elf_ppnt,



> Hi,
>
> On Thu, Dec 11, 2014 at 09:12:29PM +0100, Hector Marco wrote:
>>
>> Hello,
>>
>> The following is an ASLR PIE implementation summary in order to help to
>> decide whether it is better to fix x86*, arm*, and MIPS without adding
>> randomize_va_space = 3 or move the PowerPC and the s390 to
>> randomize_va_space = 3.
>
> If we can fix x86, arm, and MIPS without introducing randomize_va_space=3,
> I would prefer it.
>
>> Before any randomization, commit: f057eac (April 2005) the code in
>> fs/binfmt_elf.c was:
>>
>>  } else if (loc->elf_ex.e_type == ET_DYN) {
>>          /* Try and get dynamic programs out of the way of the
>>           * default mmap base, as well as whatever program they
>>           * might try to exec.  This is because the brk will
>>           * follow the loader, and is not movable.  */
>>          load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
>>  }
>>
>> It seems that they tried to get out dynamic programs of the way
>> of the default mmap base. I am not sure why.
>>
>> The first architecture to implement PIE support was x86. To achieve
>> this, the code introduced by the commit 60bfba7 (Jul 2007) was:
>>
>>   } else if (loc->elf_ex.e_type == ET_DYN) {
>>           /* Try and get dynamic programs out of the way of the
>>            * default mmap base, as well as whatever program they
>>            * might try to exec.  This is because the brk will
>>            * follow the loader, and is not movable.  */
>> +#ifdef CONFIG_X86
>> +           load_bias = 0;
>> +#else
>>            load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
>> +#endif
>>             }
>>
>> After that, he code was removed (4 days later commit: d4e3cc3) and
>> reintroduced (commit: cc503c1) Jan 2008. From this commit, the x86*
>> are vulnerable to offset2lib attack.
>>
>> Note that they (x86*) used "load_bias = 0;" which cause that PIE
>> executable be loaded at mmap base.
>>
>> Around one year later, in Feb 2009, PowerPC provided support for PIE
>> executables but not following the X86* approach. PowerPC redefined
>> the ELF_ET_DYN_BASE. The change was:
>>
>> -#define ELF_ET_DYN_BASE (0x20000000)
>> +#define ELF_ET_DYN_BASE (randomize_et_dyn(0x20000000))
>>
>> The function "randomize_et_dyn" add a random value to the 0x20000000
>> which is not vulnerable to the offset2lib weakness. Note that in this
>> point two different ways of PIE implementation are coexisting.
>>
>>
>> Later, in Aug 2008, ARM started to support PIE (commit: e4eab08):
>>
>> -#if defined(CONFIG_X86)
>> +#if defined(CONFIG_X86) || defined(CONFIG_ARM)
>>            load_bias = 0;
>> #else
>>            load_bias = ELF_PAGESTART(ELF_ET_DYN_BASE - vaddr);
>> #endif
>>  }
>>
>>
>> They only add "|| defined(CONFIG_ARM)". They followed the x86* PIE
>> support approach which consist on load the PIE executables
>> in the mmap base area.
>>
>>
>> After that, in Jan 2011, s390 started to support PIE (commit: d2c9dfc).
>> They decided to follow the "PowerPC PIE support approach" by redefining:
>>
>> -#define ELF_ET_DYN_BASE         (STACK_TOP / 3 * 2)
>> +#define ELF_ET_DYN_BASE         (randomize_et_dyn(STACK_TOP / 3 * 2))
>>
>>
>> Later, in Nov 2012, the commit e39f560 changed:
>>
>> -#if defined(CONFIG_X86) || defined(CONFIG_ARM)
>> +#ifdef CONFIG_ARCH_BINFMT_ELF_RANDOMIZE_PIE
>>
>> I think that this was made to avoid a long defined because they must
>> have thought that more architectures will be added in the future.
>> Join this change the x86*, ARM and MIPS architectures set to "y" this
>> value in their respective Kconfig files.
>>
>> The same day of the previous commit, MIPS started to support PIE
>> executables by setting "y" to the ARCH_BINFMT_ELF_RANDOMIZE_PIE in their
>> Kconfig. The commit is e26d196. Again MIPS followed the x86* and ARM
>> approaches.
>>
>>
>> Finally, in Nov 2014, following this approach ARM64 moved from "PowerPC"
>> approach to x86 one. The commit is 9298040.
>>
>> -#define ELF_ET_DYN_BASE	(randomize_et_dyn(2 * TASK_SIZE_64 / 3))
>> +#define ELF_ET_DYN_BASE	(2 * TASK_SIZE_64 / 3)
>>
>> And set to "y" the "ARCH_BINFMT_ELF_RANDOMIZE_PIE" which cause to load
>> the PIE application in the mmap base area.
>>
>>
>> I don't know if there is any reason to put the PIE executable at the mmap
>> base address or not, but this was the first and most widely adopted approach.
>>
>> Now, knowing about the offset2lib weakness, it is obviously
>> better to use a different memory area.
>>
>> From my point of view, using a "define name" which is a random value
>> depending on the architecture does not help much when reading the code. I
>> think it is better to implement the PIE support by adding a new value to
>> the mm_struct which is filled very early in the function
>> "arch_pick_mmap_layout" which sets up the VM layout. This file is
>> architecture dependent and the function says:
>>
>> /*
>>  * This function, called very early during the creation of a new
>>  * process VM image, sets up which VM layout function to use:
>>  */
>> void arch_pick_mmap_layout(struct mm_struct *mm)
>>
>>
>> At this point the stack gap is reserved and the mmap_base value is
>> calculated. I think this is the correct place to calculate where the PIE
>> executable will be loaded rather than relying on a "define" which obscures
>> the actual behavior (at first glance it does not seem to be a random value).
>> Maybe this was the reason why most architectures followed the x86*
>> approach to support PIE. But now, with the offset2lib weakness, this
>> approach needs to be changed. From my point of view, moving to the
>> "PowerPC" approach is not the best solution. I've taken a look at the
>> PaX code and they implement a solution similar to the one I have proposed.
>>
>> Anyway, if you are still thinking that the best approach is the
>> "PowerPC" one, then I could change the patch to fix the x86*, ARM* and
>> MIPS following this approach.
>
> Yeah, I think we should get rid of ARCH_BINFMT_ELF_RANDOMIZE_PIE and just
> fix this to do independent executable base randomization.
>
> While we're at it, can we fix VDSO randomization as well? :)
>
> -Kees
>
> --
> Kees Cook
> Chrome OS Security
>





end of thread, other threads:[~2015-01-07 17:27 UTC | newest]

Thread overview: 23+ messages
     [not found] <5489E6D2.2060200@upv.es>
2014-12-11 20:12 ` [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack Hector Marco
2014-12-11 22:11   ` Kees Cook
2014-12-12 16:32     ` Hector Marco
2014-12-12 17:17       ` Andy Lutomirski
2014-12-19 22:04         ` Hector Marco
2014-12-19 22:11           ` Andy Lutomirski
2014-12-19 22:19             ` Cyrill Gorcunov
2014-12-19 23:53             ` Andy Lutomirski
2014-12-20  0:29               ` [PATCH] x86_64, vdso: Fix the vdso address randomization algorithm Andy Lutomirski
2014-12-20 17:40               ` [PATCH v2] " Andy Lutomirski
2014-12-20 21:13                 ` Kees Cook
2014-12-22 17:36               ` [PATCH] ASLRv3: randomize_va_space=3 preventing offset2lib attack Hector Marco Gisbert
2014-12-22 17:56                 ` Andy Lutomirski
2014-12-22 19:49                   ` Jiri Kosina
2014-12-22 20:00                     ` Andy Lutomirski
2014-12-22 20:03                       ` Jiri Kosina
2014-12-22 20:13                         ` Andy Lutomirski
2014-12-22 23:23                   ` Hector Marco Gisbert
2014-12-22 23:38                     ` Andy Lutomirski
     [not found]                       ` <CAH4rwTKeN0P84FJnocoKV4t9rc2Ox_EYc+LEibD+Y83n7C8aVA@mail.gmail.com>
2014-12-23  8:15                         ` Andy Lutomirski
2014-12-23 20:06                           ` Hector Marco Gisbert
2014-12-23 20:53                             ` Andy Lutomirski
2015-01-07 17:26     ` Hector Marco Gisbert
