* [PATCH] x86/vdso: Move vDSO to mmap region
From: Kees Cook @ 2024-02-10  9:18 UTC
  To: Andy Lutomirski
  Cc: Kees Cook, Daniel Micay, Thomas Gleixner, Ingo Molnar,
	Borislav Petkov, Dave Hansen, x86, H. Peter Anvin,
	Eric Biederman, Brian Gerst, Nikolay Borisov, Chang S. Bae,
	Igor Zhbanov, Rick Edgecombe, Randy Dunlap, linux-mm, John Allen,
	linux-kernel, linux-hardening

From: Daniel Micay <danielmicay@gmail.com>

The vDSO (and its initial randomization) was introduced in commit
2aae950b21e4 ("x86_64: Add vDSO for x86-64 with gettimeofday/clock_gettime/getcpu"),
but had very low entropy. The entropy was improved in commit
394f56fe4801 ("x86_64, vdso: Fix the vdso address randomization algorithm"),
but there is still room for improvement.

In principle, there should not be executable code at a low-entropy offset
from the stack, since separately randomizing the stack and executable
code is part of what makes ASLR stronger.
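
To make the point concrete (this snippet is not part of the patch, and
the /proc parsing is a best-effort sketch with error handling mostly
elided), a process can measure the distance between its own stack and
vDSO mappings. Before this change the gap carries very little entropy;
afterwards it varies with the independently randomized mmap base:

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* Return the start address of the first /proc/self/maps line
 * containing @tag, e.g. "[vdso]" or "[stack]". */
static unsigned long base_of(const char *tag)
{
	FILE *f = fopen("/proc/self/maps", "r");
	char line[512];
	unsigned long base = 0;

	if (!f)
		return 0;
	while (fgets(line, sizeof(line), f)) {
		if (strstr(line, tag)) {
			base = strtoul(line, NULL, 16);
			break;
		}
	}
	fclose(f);
	return base;
}

int main(void)
{
	unsigned long vdso = base_of("[vdso]");
	unsigned long stack = base_of("[stack]");

	printf("vdso:  %#lx\nstack: %#lx\ngap:   %#lx\n", vdso, stack,
	       vdso > stack ? vdso - stack : stack - vdso);
	return 0;
}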

Remove the only executable code near the stack region and give the vDSO
the same randomized base as other mmap mappings, including the linker
and other shared objects. This provides higher entropy, and there is
little to no advantage in separating the vDSO from the other executable
code already there. This is how other architectures, such as arm64,
already handle the vDSO.
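
With the patch applied, the vDSO base reported in the auxiliary vector
should land in the same region as ordinary mmap allocations. A quick
userspace check (illustrative only, not part of the patch):

#include <elf.h>
#include <stdio.h>
#include <sys/auxv.h>
#include <sys/mman.h>

int main(void)
{
	/* AT_SYSINFO_EHDR holds the vDSO's load address. */
	unsigned long vdso = getauxval(AT_SYSINFO_EHDR);
	/* An ordinary anonymous mapping, for comparison. */
	void *anon = mmap(NULL, 4096, PROT_READ,
			  MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);

	printf("vdso: %#lx\nmmap: %p\n", vdso, anon);
	return 0;
}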

As an aside, while it is sensible for userspace to reserve the initial
mmap base as a region for executable code, with a random gap for other
mmap allocations and randomization within that region, there is not
much the kernel can do to help given how dynamic linkers load shared
objects.
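
A hypothetical sketch of such a userspace scheme (the names and the
region size are illustrative, not an existing API): reserve a large
PROT_NONE range up front and later carve executable mappings out of it
at randomized offsets.

#include <sys/mman.h>

/* Illustrative size; PROT_NONE plus MAP_NORESERVE keeps the
 * reservation cheap, since no backing store is committed. */
#define EXEC_REGION_SIZE (1UL << 32)

static void *reserve_exec_region(void)
{
	return mmap(NULL, EXEC_REGION_SIZE, PROT_NONE,
		    MAP_PRIVATE | MAP_ANONYMOUS | MAP_NORESERVE, -1, 0);
}

int main(void)
{
	void *region = reserve_exec_region();

	/* A dynamic linker would place shared objects inside @region,
	 * switching pages to PROT_READ|PROT_EXEC as it maps them. */
	return region == MAP_FAILED;
}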

This was extracted from the PaX RANDMMAP feature.

Closes: https://github.com/KSPP/linux/issues/280
Cc: Andy Lutomirski <luto@kernel.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Borislav Petkov <bp@alien8.de>
Cc: Dave Hansen <dave.hansen@linux.intel.com>
Cc: x86@kernel.org
Cc: "H. Peter Anvin" <hpa@zytor.com>
Cc: Eric Biederman <ebiederm@xmission.com>
Cc: Brian Gerst <brgerst@gmail.com>
Cc: Nikolay Borisov <nik.borisov@suse.com>
Cc: "Chang S. Bae" <chang.seok.bae@intel.com>
Cc: Igor Zhbanov <i.zhbanov@omprussia.ru>
Cc: Rick Edgecombe <rick.p.edgecombe@intel.com>
Cc: Randy Dunlap <rdunlap@infradead.org>
Cc: linux-mm@kvack.org
Signed-off-by: Daniel Micay <danielmicay@gmail.com>
[kees: updated commit log with historical details and other tweaks]
Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/entry/vdso/vma.c    | 57 ++----------------------------------
 arch/x86/include/asm/elf.h   |  1 -
 arch/x86/kernel/sys_x86_64.c |  7 -----
 3 files changed, 2 insertions(+), 63 deletions(-)

diff --git a/arch/x86/entry/vdso/vma.c b/arch/x86/entry/vdso/vma.c
index 7645730dc228..6d83ceb7f1ba 100644
--- a/arch/x86/entry/vdso/vma.c
+++ b/arch/x86/entry/vdso/vma.c
@@ -274,59 +274,6 @@ static int map_vdso(const struct vdso_image *image, unsigned long addr)
 	return ret;
 }
 
-#ifdef CONFIG_X86_64
-/*
- * Put the vdso above the (randomized) stack with another randomized
- * offset.  This way there is no hole in the middle of address space.
- * To save memory make sure it is still in the same PTE as the stack
- * top.  This doesn't give that many random bits.
- *
- * Note that this algorithm is imperfect: the distribution of the vdso
- * start address within a PMD is biased toward the end.
- *
- * Only used for the 64-bit and x32 vdsos.
- */
-static unsigned long vdso_addr(unsigned long start, unsigned len)
-{
-	unsigned long addr, end;
-	unsigned offset;
-
-	/*
-	 * Round up the start address.  It can start out unaligned as a result
-	 * of stack start randomization.
-	 */
-	start = PAGE_ALIGN(start);
-
-	/* Round the lowest possible end address up to a PMD boundary. */
-	end = (start + len + PMD_SIZE - 1) & PMD_MASK;
-	if (end >= DEFAULT_MAP_WINDOW)
-		end = DEFAULT_MAP_WINDOW;
-	end -= len;
-
-	if (end > start) {
-		offset = get_random_u32_below(((end - start) >> PAGE_SHIFT) + 1);
-		addr = start + (offset << PAGE_SHIFT);
-	} else {
-		addr = start;
-	}
-
-	/*
-	 * Forcibly align the final address in case we have a hardware
-	 * issue that requires alignment for performance reasons.
-	 */
-	addr = align_vdso_addr(addr);
-
-	return addr;
-}
-
-static int map_vdso_randomized(const struct vdso_image *image)
-{
-	unsigned long addr = vdso_addr(current->mm->start_stack, image->size-image->sym_vvar_start);
-
-	return map_vdso(image, addr);
-}
-#endif
-
 int map_vdso_once(const struct vdso_image *image, unsigned long addr)
 {
 	struct mm_struct *mm = current->mm;
@@ -369,7 +316,7 @@ int arch_setup_additional_pages(struct linux_binprm *bprm, int uses_interp)
 	if (!vdso64_enabled)
 		return 0;
 
-	return map_vdso_randomized(&vdso_image_64);
+	return map_vdso(&vdso_image_64, 0);
 }
 
 #ifdef CONFIG_COMPAT
@@ -380,7 +327,7 @@ int compat_arch_setup_additional_pages(struct linux_binprm *bprm,
 	if (x32) {
 		if (!vdso64_enabled)
 			return 0;
-		return map_vdso_randomized(&vdso_image_x32);
+		return map_vdso(&vdso_image_x32, 0);
 	}
 #endif
 #ifdef CONFIG_IA32_EMULATION
diff --git a/arch/x86/include/asm/elf.h b/arch/x86/include/asm/elf.h
index 1e16bd5ac781..1fb83d47711f 100644
--- a/arch/x86/include/asm/elf.h
+++ b/arch/x86/include/asm/elf.h
@@ -392,5 +392,4 @@ struct va_alignment {
 } ____cacheline_aligned;
 
 extern struct va_alignment va_align;
-extern unsigned long align_vdso_addr(unsigned long);
 #endif /* _ASM_X86_ELF_H */
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index c783aeb37dce..cb9fa1d5c66f 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -52,13 +52,6 @@ static unsigned long get_align_bits(void)
 	return va_align.bits & get_align_mask();
 }
 
-unsigned long align_vdso_addr(unsigned long addr)
-{
-	unsigned long align_mask = get_align_mask();
-	addr = (addr + align_mask) & ~align_mask;
-	return addr | get_align_bits();
-}
-
 static int __init control_va_addr_alignment(char *str)
 {
 	/* guard against enabling this on other CPU families */
-- 
2.34.1




* Re: [PATCH] x86/vdso: Move vDSO to mmap region
From: Kees Cook @ 2024-02-17  5:31 UTC
  To: Andy Lutomirski, Borislav Petkov, Dave Hansen
  Cc: Daniel Micay, Thomas Gleixner, Ingo Molnar, x86, H. Peter Anvin,
	Eric Biederman, Brian Gerst, Nikolay Borisov, Chang S. Bae,
	Igor Zhbanov, Rick Edgecombe, Randy Dunlap, linux-mm, John Allen,
	linux-kernel, linux-hardening

On Sat, Feb 10, 2024 at 01:18:35AM -0800, Kees Cook wrote:
> The vDSO (and its initial randomization) was introduced in commit
> 2aae950b21e4 ("x86_64: Add vDSO for x86-64 with gettimeofday/clock_gettime/getcpu"),
> but had very low entropy. The entropy was improved in commit
> 394f56fe4801 ("x86_64, vdso: Fix the vdso address randomization algorithm"),
> but there is still room for improvement.
> 
> In principle, there should not be executable code at a low-entropy offset
> from the stack, since separately randomizing the stack and executable
> code is part of what makes ASLR stronger.
> 
> Remove the only executable code near the stack region and give the vDSO
> the same randomized base as other mmap mappings, including the linker
> and other shared objects. This provides higher entropy, and there is
> little to no advantage in separating the vDSO from the other executable
> code already there. This is how other architectures, such as arm64,
> already handle the vDSO.

Thread ping. Anyone have thoughts on this? I can carry it in -next to
see if anything melts...

-- 
Kees Cook

