linux-mm.kvack.org archive mirror
* [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap()
@ 2020-10-08 16:54 Topi Miettinen
  2020-10-08 17:07 ` Matthew Wilcox
  2020-10-08 17:13 ` Jann Horn
  0 siblings, 2 replies; 9+ messages in thread
From: Topi Miettinen @ 2020-10-08 16:54 UTC (permalink / raw)
  To: linux-hardening, akpm, linux-mm, linux-kernel; +Cc: Topi Miettinen

Writing a new value of 3 to /proc/sys/kernel/randomize_va_space
enables full randomization of memory mappings created with mmap(NULL,
...). With 2, the base of the VMA used for such mappings is random,
but the mappings are created in predictable places within the VMA and
in sequential order. With 3, new VMAs are created to fully randomize
the mappings. Also mremap(..., MREMAP_MAYMOVE) will move the mappings
even if not necessary.

On 32 bit systems this may cause problems due to increased VM
fragmentation if the address space gets crowded.

In this example, with a value of 2, ld.so.cache, libc, an anonymous mmap
and locale-archive are located close to each other:
$ strace /bin/sync
...
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=189096, ...}) = 0
mmap(NULL, 189096, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7d9c1e7f2000
...
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0n\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1839792, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7d9c1e7f0000
mmap(NULL, 1852680, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0x7d9c1e62b000
...
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=5642592, ...}) = 0
mmap(NULL, 5642592, PROT_READ, MAP_PRIVATE, 3, 0) = 0x7d9c1e0c9000

With 3, they are located at unrelated addresses:
$ echo 3 > /proc/sys/kernel/randomize_va_space
$ strace /bin/sync
...
openat(AT_FDCWD, "/etc/ld.so.cache", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=189096, ...}) = 0
mmap(NULL, 189096, PROT_READ, MAP_PRIVATE, 3, 0) = 0xeda4fbea000
...
openat(AT_FDCWD, "/lib/x86_64-linux-gnu/libc.so.6", O_RDONLY|O_CLOEXEC) = 3
read(3, "\177ELF\2\1\1\3\0\0\0\0\0\0\0\0\3\0>\0\1\0\0\0\0n\2\0\0\0\0\0"..., 832) = 832
fstat(3, {st_mode=S_IFREG|0755, st_size=1839792, ...}) = 0
mmap(NULL, 8192, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0xb8fb9c1d000
mmap(NULL, 1852680, PROT_READ, MAP_PRIVATE|MAP_DENYWRITE, 3, 0) = 0xaabd8598000
...
openat(AT_FDCWD, "/usr/lib/locale/locale-archive", O_RDONLY|O_CLOEXEC) = 3
fstat(3, {st_mode=S_IFREG|0644, st_size=5642592, ...}) = 0
mmap(NULL, 5642592, PROT_READ, MAP_PRIVATE, 3, 0) = 0xbe351ab8000

Signed-off-by: Topi Miettinen <toiwoton@gmail.com>
---
Resent also to hardening list (hopefully the right one)
v2: also randomize mremap(..., MREMAP_MAYMOVE)
---
 Documentation/admin-guide/hw-vuln/spectre.rst |  6 +++---
 Documentation/admin-guide/sysctl/kernel.rst   | 11 +++++++++++
 init/Kconfig                                  |  2 +-
 mm/mmap.c                                     |  7 ++++++-
 mm/mremap.c                                   | 15 +++++++++++++++
 5 files changed, 36 insertions(+), 5 deletions(-)

diff --git a/Documentation/admin-guide/hw-vuln/spectre.rst b/Documentation/admin-guide/hw-vuln/spectre.rst
index e05e581af5cf..9ea250522077 100644
--- a/Documentation/admin-guide/hw-vuln/spectre.rst
+++ b/Documentation/admin-guide/hw-vuln/spectre.rst
@@ -254,7 +254,7 @@ Spectre variant 2
    left by the previous process will also be cleared.
 
    User programs should use address space randomization to make attacks
-   more difficult (Set /proc/sys/kernel/randomize_va_space = 1 or 2).
+   more difficult (Set /proc/sys/kernel/randomize_va_space = 1, 2 or 3).
 
 3. A virtualized guest attacking the host
 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
@@ -499,8 +499,8 @@ Spectre variant 2
    more overhead and run slower.
 
    User programs should use address space randomization
-   (/proc/sys/kernel/randomize_va_space = 1 or 2) to make attacks more
-   difficult.
+   (/proc/sys/kernel/randomize_va_space = 1, 2 or 3) to make attacks
+   more difficult.
 
 3. VM mitigation
 ^^^^^^^^^^^^^^^^
diff --git a/Documentation/admin-guide/sysctl/kernel.rst b/Documentation/admin-guide/sysctl/kernel.rst
index d4b32cc32bb7..acd0612155d9 100644
--- a/Documentation/admin-guide/sysctl/kernel.rst
+++ b/Documentation/admin-guide/sysctl/kernel.rst
@@ -1060,6 +1060,17 @@ that support this feature.
     Systems with ancient and/or broken binaries should be configured
     with ``CONFIG_COMPAT_BRK`` enabled, which excludes the heap from process
     address space randomization.
+
+3   Additionally enable full randomization of memory mappings created
+    with mmap(NULL, ...). With 2, the base of the VMA used for such
+    mappings is random, but the mappings are created in predictable
+    places within the VMA and in sequential order. With 3, new VMAs
+    are created to fully randomize the mappings. Also mremap(...,
+    MREMAP_MAYMOVE) will move the mappings even if not necessary.
+
+    On 32 bit systems this may cause problems due to increased VM
+    fragmentation if the address space gets crowded.
+
 ==  ===========================================================================
 
 
diff --git a/init/Kconfig b/init/Kconfig
index d6a0b31b13dc..c5ea2e694f6a 100644
--- a/init/Kconfig
+++ b/init/Kconfig
@@ -1859,7 +1859,7 @@ config COMPAT_BRK
 	  also breaks ancient binaries (including anything libc5 based).
 	  This option changes the bootup default to heap randomization
 	  disabled, and can be overridden at runtime by setting
-	  /proc/sys/kernel/randomize_va_space to 2.
+	  /proc/sys/kernel/randomize_va_space to 2 or 3.
 
 	  On non-ancient distros (post-2000 ones) N is usually a safe choice.
 
diff --git a/mm/mmap.c b/mm/mmap.c
index 40248d84ad5f..489368f43af1 100644
--- a/mm/mmap.c
+++ b/mm/mmap.c
@@ -47,6 +47,7 @@
 #include <linux/pkeys.h>
 #include <linux/oom.h>
 #include <linux/sched/mm.h>
+#include <linux/elf-randomize.h>
 
 #include <linux/uaccess.h>
 #include <asm/cacheflush.h>
@@ -206,7 +207,7 @@ SYSCALL_DEFINE1(brk, unsigned long, brk)
 #ifdef CONFIG_COMPAT_BRK
 	/*
 	 * CONFIG_COMPAT_BRK can still be overridden by setting
-	 * randomize_va_space to 2, which will still cause mm->start_brk
+	 * randomize_va_space to >= 2, which will still cause mm->start_brk
 	 * to be arbitrarily shifted
 	 */
 	if (current->brk_randomized)
@@ -1407,6 +1408,10 @@ unsigned long do_mmap(struct file *file, unsigned long addr,
 	if (mm->map_count > sysctl_max_map_count)
 		return -ENOMEM;
 
+	/* Pick a random address even outside current VMAs? */
+	if (!addr && randomize_va_space >= 3)
+		addr = arch_mmap_rnd();
+
 	/* Obtain the address to map to. we verify (or select) it and ensure
 	 * that it represents a valid section of the address space.
 	 */
diff --git a/mm/mremap.c b/mm/mremap.c
index 138abbae4f75..c7fd1ab5fb5f 100644
--- a/mm/mremap.c
+++ b/mm/mremap.c
@@ -24,6 +24,7 @@
 #include <linux/uaccess.h>
 #include <linux/mm-arch-hooks.h>
 #include <linux/userfaultfd_k.h>
+#include <linux/elf-randomize.h>
 
 #include <asm/cacheflush.h>
 #include <asm/tlbflush.h>
@@ -720,6 +721,20 @@ SYSCALL_DEFINE5(mremap, unsigned long, addr, unsigned long, old_len,
 		goto out;
 	}
 
+	if ((flags & MREMAP_MAYMOVE) && randomize_va_space >= 3) {
+		/*
+		 * Caller is happy with a different address, so let's
+		 * move even if not necessary!
+		 */
+		new_addr = arch_mmap_rnd();
+
+		ret = mremap_to(addr, old_len, new_addr, new_len,
+				&locked, flags, &uf, &uf_unmap_early,
+				&uf_unmap);
+		goto out;
+	}
+
+
 	/*
 	 * Always allow a shrinking remap: that just unmaps
 	 * the unnecessary pages..
-- 
2.28.0




* Re: [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap()
  2020-10-08 16:54 [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap() Topi Miettinen
@ 2020-10-08 17:07 ` Matthew Wilcox
  2020-10-08 18:11   ` Topi Miettinen
  2020-10-08 17:13 ` Jann Horn
  1 sibling, 1 reply; 9+ messages in thread
From: Matthew Wilcox @ 2020-10-08 17:07 UTC (permalink / raw)
  To: Topi Miettinen; +Cc: linux-hardening, akpm, linux-mm, linux-kernel

On Thu, Oct 08, 2020 at 07:54:08PM +0300, Topi Miettinen wrote:
> +3   Additionally enable full randomization of memory mappings created
> +    with mmap(NULL, ...). With 2, the base of the VMA used for such
> +    mappings is random, but the mappings are created in predictable
> +    places within the VMA and in sequential order. With 3, new VMAs
> +    are created to fully randomize the mappings. Also mremap(...,
> +    MREMAP_MAYMOVE) will move the mappings even if not necessary.
> +
> +    On 32 bit systems this may cause problems due to increased VM
> +    fragmentation if the address space gets crowded.

On all systems, it will reduce performance and increase memory usage due
to less efficient use of page tables and inability to merge adjacent VMAs
with compatible attributes.

> +	if ((flags & MREMAP_MAYMOVE) && randomize_va_space >= 3) {
> +		/*
> +		 * Caller is happy with a different address, so let's
> +		 * move even if not necessary!
> +		 */
> +		new_addr = arch_mmap_rnd();
> +
> +		ret = mremap_to(addr, old_len, new_addr, new_len,
> +				&locked, flags, &uf, &uf_unmap_early,
> +				&uf_unmap);
> +		goto out;
> +	}
> +
> +

Overly enthusiastic newline



* Re: [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap()
  2020-10-08 16:54 [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap() Topi Miettinen
  2020-10-08 17:07 ` Matthew Wilcox
@ 2020-10-08 17:13 ` Jann Horn
  2020-10-08 17:23   ` Matthew Wilcox
  2020-10-08 18:10   ` Topi Miettinen
  1 sibling, 2 replies; 9+ messages in thread
From: Jann Horn @ 2020-10-08 17:13 UTC (permalink / raw)
  To: Topi Miettinen; +Cc: linux-hardening, Andrew Morton, Linux-MM, kernel list

On Thu, Oct 8, 2020 at 6:54 PM Topi Miettinen <toiwoton@gmail.com> wrote:
> Writing a new value of 3 to /proc/sys/kernel/randomize_va_space
> enables full randomization of memory mappings created with mmap(NULL,
> ...). With 2, the base of the VMA used for such mappings is random,
> but the mappings are created in predictable places within the VMA and
> in sequential order. With 3, new VMAs are created to fully randomize
> the mappings. Also mremap(..., MREMAP_MAYMOVE) will move the mappings
> even if not necessary.
[...]
> +       if ((flags & MREMAP_MAYMOVE) && randomize_va_space >= 3) {
> +               /*
> +                * Caller is happy with a different address, so let's
> +                * move even if not necessary!
> +                */
> +               new_addr = arch_mmap_rnd();
> +
> +               ret = mremap_to(addr, old_len, new_addr, new_len,
> +                               &locked, flags, &uf, &uf_unmap_early,
> +                               &uf_unmap);
> +               goto out;
> +       }

You just pick a random number as the address, and try to place the
mapping there? Won't this fail if e.g. the old address range overlaps
with the new one, causing mremap_to() to bail out at "if (addr +
old_len > new_addr && new_addr + new_len > addr)"?

Also, on Linux, the main program stack is (currently) an expanding
memory mapping that starts out being something like a couple hundred
kilobytes in size. If you allocate memory too close to the main
program stack, and someone then recurses deep enough to need more
memory, the program will crash. It sounds like your patch will
randomly make such programs crash.

Also, what's your strategy in general with regards to collisions with
existing mappings? Is your intention to just fall back to the classic
algorithm in that case?

You may want to consider whether it would be better to store
information about free memory per subtree in the VMA tree, together
with the maximum gap size that is already stored in each node, and
then walk down the tree randomly, with the randomness weighted by free
memory in the subtrees, but ignoring subtrees whose gaps are too
small. And for expanding stacks, it might be a good idea for other
reasons as well (locking consistency) to refactor them such that the
size in the VMA tree corresponds to the maximum expansion of the stack
(and if an allocation is about to fail, shrink such stack mappings).



* Re: [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap()
  2020-10-08 17:13 ` Jann Horn
@ 2020-10-08 17:23   ` Matthew Wilcox
  2020-10-08 17:26     ` Jann Horn
  2020-10-08 18:10   ` Topi Miettinen
  1 sibling, 1 reply; 9+ messages in thread
From: Matthew Wilcox @ 2020-10-08 17:23 UTC (permalink / raw)
  To: Jann Horn
  Cc: Topi Miettinen, linux-hardening, Andrew Morton, Linux-MM, kernel list

On Thu, Oct 08, 2020 at 07:13:51PM +0200, Jann Horn wrote:
> You may want to consider whether it would be better to store
> information about free memory per subtree in the VMA tree, together
> with the maximum gap size that is already stored in each node, and
> then walk down the tree randomly, with the randomness weighted by free
> memory in the subtrees, but ignoring subtrees whose gaps are too
> small.

Please, no.  We're trying to get rid of the rbtree, not enhance it
further.  The new data structure is a B-tree and we'd rather not burden
it with extra per-node information (... although if we have to, we could)

> And for expanding stacks, it might be a good idea for other
> reasons as well (locking consistency) to refactor them such that the
> size in the VMA tree corresponds to the maximum expansion of the stack
> (and if an allocation is about to fail, shrink such stack mappings).

We're doing that as part of the B-tree ;-)  Although not the shrink
stack mappings part ...



* Re: [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap()
  2020-10-08 17:23   ` Matthew Wilcox
@ 2020-10-08 17:26     ` Jann Horn
  2020-10-08 17:41       ` Matthew Wilcox
  0 siblings, 1 reply; 9+ messages in thread
From: Jann Horn @ 2020-10-08 17:26 UTC (permalink / raw)
  To: Matthew Wilcox
  Cc: Topi Miettinen, linux-hardening, Andrew Morton, Linux-MM, kernel list

On Thu, Oct 8, 2020 at 7:23 PM Matthew Wilcox <willy@infradead.org> wrote:
> On Thu, Oct 08, 2020 at 07:13:51PM +0200, Jann Horn wrote:
> > And for expanding stacks, it might be a good idea for other
> > reasons as well (locking consistency) to refactor them such that the
> > size in the VMA tree corresponds to the maximum expansion of the stack
> > (and if an allocation is about to fail, shrink such stack mappings).
>
> We're doing that as part of the B-tree ;-)  Although not the shrink
> stack mappings part ...

Wheee, thanks! Finally no more data races on ->vm_start?



* Re: [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap()
  2020-10-08 17:26     ` Jann Horn
@ 2020-10-08 17:41       ` Matthew Wilcox
  0 siblings, 0 replies; 9+ messages in thread
From: Matthew Wilcox @ 2020-10-08 17:41 UTC (permalink / raw)
  To: Jann Horn
  Cc: Topi Miettinen, linux-hardening, Andrew Morton, Linux-MM, kernel list

On Thu, Oct 08, 2020 at 07:26:31PM +0200, Jann Horn wrote:
> On Thu, Oct 8, 2020 at 7:23 PM Matthew Wilcox <willy@infradead.org> wrote:
> > On Thu, Oct 08, 2020 at 07:13:51PM +0200, Jann Horn wrote:
> > > And for expanding stacks, it might be a good idea for other
> > > reasons as well (locking consistency) to refactor them such that the
> > > size in the VMA tree corresponds to the maximum expansion of the stack
> > > (and if an allocation is about to fail, shrink such stack mappings).
> >
> > We're doing that as part of the B-tree ;-)  Although not the shrink
> > stack mappings part ...
> 
> Wheee, thanks! Finally no more data races on ->vm_start?

Ah, maybe still that.  The B-tree records the start of the mapping in
the tree, but we still keep vma->vm_start as pointing to the current top
of the stack (it's still the top if it grows down ... right?)  The key is
that these numbers may now be different, so from the tree's point of view,
the vm addresses for 1MB below the stack appear to be occupied.  From the
VMA's point of view, the stack finishes where it was last accessed.

We also get rid of the insanity of "return the next VMA if there's no
VMA at this address" which most of the callers don't want and have to
check for.  Again, from the tree's point of view, there is a VMA at this
address, but from the VMA's point of view, it'll need to expand to reach
that address.

I don't think this piece is implemented yet, but it's definitely planned.



* Re: [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap()
  2020-10-08 17:13 ` Jann Horn
  2020-10-08 17:23   ` Matthew Wilcox
@ 2020-10-08 18:10   ` Topi Miettinen
  2020-10-08 18:24     ` Jann Horn
  1 sibling, 1 reply; 9+ messages in thread
From: Topi Miettinen @ 2020-10-08 18:10 UTC (permalink / raw)
  To: Jann Horn; +Cc: linux-hardening, Andrew Morton, Linux-MM, kernel list

On 8.10.2020 20.13, Jann Horn wrote:
> On Thu, Oct 8, 2020 at 6:54 PM Topi Miettinen <toiwoton@gmail.com> wrote:
>> Writing a new value of 3 to /proc/sys/kernel/randomize_va_space
>> enables full randomization of memory mappings created with mmap(NULL,
>> ...). With 2, the base of the VMA used for such mappings is random,
>> but the mappings are created in predictable places within the VMA and
>> in sequential order. With 3, new VMAs are created to fully randomize
>> the mappings. Also mremap(..., MREMAP_MAYMOVE) will move the mappings
>> even if not necessary.
> [...]
>> +       if ((flags & MREMAP_MAYMOVE) && randomize_va_space >= 3) {
>> +               /*
>> +                * Caller is happy with a different address, so let's
>> +                * move even if not necessary!
>> +                */
>> +               new_addr = arch_mmap_rnd();
>> +
>> +               ret = mremap_to(addr, old_len, new_addr, new_len,
>> +                               &locked, flags, &uf, &uf_unmap_early,
>> +                               &uf_unmap);
>> +               goto out;
>> +       }
> 
> You just pick a random number as the address, and try to place the
> mapping there? Won't this fail if e.g. the old address range overlaps
> with the new one, causing mremap_to() to bail out at "if (addr +
> old_len > new_addr && new_addr + new_len > addr)"?

Thanks for the review. I think overlap would be OK in this case and the 
check should be skipped.

> Also, on Linux, the main program stack is (currently) an expanding
> memory mapping that starts out being something like a couple hundred
> kilobytes in size. If you allocate memory too close to the main
> program stack, and someone then recurses deep enough to need more
> memory, the program will crash. It sounds like your patch will
> randomly make such programs crash.

Right, especially on 32 bit systems this could be a real problem. I have 
limited the stack for tasks in the whole system to 2MB without problems 
(most use only 128kB), and on 48 bit virtual address systems the chance 
of colliding with a 2MB area would be roughly 1/2^(48-21), which is very 
small. But perhaps this should still be avoided by not picking an 
address too close to the bottom of the stack, say within 64MB to be 
safe. That might also make this more useful for 32 bit systems, though 
overall I'm not so optimistic there due to increased fragmentation.

> Also, what's your strategy in general with regards to collisions with
> existing mappings? Is your intention to just fall back to the classic
> algorithm in that case?

Maybe a different address could be tried (but not infinitely, say 5 
times) before falling back to the classic algorithm. This would weaken 
the ASLR, but I haven't seen mremap() used much in my tests.

> You may want to consider whether it would be better to store
> information about free memory per subtree in the VMA tree, together
> with the maximum gap size that is already stored in each node, and
> then walk down the tree randomly, with the randomness weighted by free
> memory in the subtrees, but ignoring subtrees whose gaps are too
> small. And for expanding stacks, it might be a good idea for other
> reasons as well (locking consistency) to refactor them such that the
> size in the VMA tree corresponds to the maximum expansion of the stack
> (and if an allocation is about to fail, shrink such stack mappings).

This would reduce the randomization, which I want to avoid. I think the 
extra overhead should be OK: if it is unacceptable for a workload or 
system constraints, don't use mode '3' but '2'.

Instead of a single global sysctl, this could also be implemented as a 
new personality (or make this model the default and add a compatibility 
personality with no or less randomization), so it could be applied to 
some tasks but not all.

-Topi



* Re: [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap()
  2020-10-08 17:07 ` Matthew Wilcox
@ 2020-10-08 18:11   ` Topi Miettinen
  0 siblings, 0 replies; 9+ messages in thread
From: Topi Miettinen @ 2020-10-08 18:11 UTC (permalink / raw)
  To: Matthew Wilcox; +Cc: linux-hardening, akpm, linux-mm, linux-kernel

On 8.10.2020 20.07, Matthew Wilcox wrote:
> On Thu, Oct 08, 2020 at 07:54:08PM +0300, Topi Miettinen wrote:
>> +3   Additionally enable full randomization of memory mappings created
>> +    with mmap(NULL, ...). With 2, the base of the VMA used for such
>> +    mappings is random, but the mappings are created in predictable
>> +    places within the VMA and in sequential order. With 3, new VMAs
>> +    are created to fully randomize the mappings. Also mremap(...,
>> +    MREMAP_MAYMOVE) will move the mappings even if not necessary.
>> +
>> +    On 32 bit systems this may cause problems due to increased VM
>> +    fragmentation if the address space gets crowded.
> 
> On all systems, it will reduce performance and increase memory usage due
> to less efficient use of page tables and inability to merge adjacent VMAs
> with compatible attributes.

Right, I'll update the description.

>> +	if ((flags & MREMAP_MAYMOVE) && randomize_va_space >= 3) {
>> +		/*
>> +		 * Caller is happy with a different address, so let's
>> +		 * move even if not necessary!
>> +		 */
>> +		new_addr = arch_mmap_rnd();
>> +
>> +		ret = mremap_to(addr, old_len, new_addr, new_len,
>> +				&locked, flags, &uf, &uf_unmap_early,
>> +				&uf_unmap);
>> +		goto out;
>> +	}
>> +
>> +
> 
> Overly enthusiastic newline
> 

Will remove.

-Topi



* Re: [PATCH RESEND v2] mm: Optional full ASLR for mmap() and mremap()
  2020-10-08 18:10   ` Topi Miettinen
@ 2020-10-08 18:24     ` Jann Horn
  0 siblings, 0 replies; 9+ messages in thread
From: Jann Horn @ 2020-10-08 18:24 UTC (permalink / raw)
  To: Topi Miettinen; +Cc: linux-hardening, Andrew Morton, Linux-MM, kernel list

On Thu, Oct 8, 2020 at 8:10 PM Topi Miettinen <toiwoton@gmail.com> wrote:
> On 8.10.2020 20.13, Jann Horn wrote:
> > On Thu, Oct 8, 2020 at 6:54 PM Topi Miettinen <toiwoton@gmail.com> wrote:
> >> Writing a new value of 3 to /proc/sys/kernel/randomize_va_space
> >> enables full randomization of memory mappings created with mmap(NULL,
> >> ...). With 2, the base of the VMA used for such mappings is random,
> >> but the mappings are created in predictable places within the VMA and
> >> in sequential order. With 3, new VMAs are created to fully randomize
> >> the mappings. Also mremap(..., MREMAP_MAYMOVE) will move the mappings
> >> even if not necessary.
> > [...]
> >> +       if ((flags & MREMAP_MAYMOVE) && randomize_va_space >= 3) {
> >> +               /*
> >> +                * Caller is happy with a different address, so let's
> >> +                * move even if not necessary!
> >> +                */
> >> +               new_addr = arch_mmap_rnd();
> >> +
> >> +               ret = mremap_to(addr, old_len, new_addr, new_len,
> >> +                               &locked, flags, &uf, &uf_unmap_early,
> >> +                               &uf_unmap);
> >> +               goto out;
> >> +       }
> >
> > You just pick a random number as the address, and try to place the
> > mapping there? Won't this fail if e.g. the old address range overlaps
> > with the new one, causing mremap_to() to bail out at "if (addr +
> > old_len > new_addr && new_addr + new_len > addr)"?
>
> Thanks for the review. I think overlap would be OK in this case and the
> check should be skipped.

No, mremap() can't deal with overlap (and trying to add such support
would make mremap() unnecessarily complicated).


