From mboxrd@z Thu Jan  1 00:00:00 1970
Reply-To: kernel-hardening@lists.openwall.com
From: Jason Cooper
Date: Wed, 3 Aug 2016 23:39:08 +0000
Message-Id: <20160803233913.32511-3-jason@lakedaemon.net>
In-Reply-To: <20160803233913.32511-1-jason@lakedaemon.net>
References: <20160728204730.27453-1-jason@lakedaemon.net>
 <20160803233913.32511-1-jason@lakedaemon.net>
Subject: [kernel-hardening] [PATCH v3 2/7] x86: Use simpler API for random
 address requests
To: Kees Cook, Michael Ellerman, "Roberts, William C", Yann Droneaud
Cc: Linux-MM, LKML, kernel-hardening, Russell King - ARM Linux,
 Andrew Morton, Theodore Ts'o, Arnd Bergmann, gregkh@linuxfoundation.org,
 Catalin Marinas, Will Deacon, Ralf Baechle, benh@kernel.crashing.org,
 paulus@samba.org, "David S. Miller", Thomas Gleixner, Ingo Molnar,
 "H. Peter Anvin", x86@kernel.org, viro@zeniv.linux.org.uk, Nick Kralevich,
 Jeffrey Vander Stoep, Daniel Cashman, Jason Cooper
List-ID:

Currently, all callers to randomize_range() set the length to 0 and
calculate end by adding a constant to the start address.  We can simplify
the API to remove a bunch of needless checks and variables.

Use the new randomize_page(start, range) call to set the requested
address.

Signed-off-by: Jason Cooper
---
Changes from v2:
 - s/randomize_addr/randomize_page/ (Kees Cook)

 arch/x86/kernel/process.c    | 3 +--
 arch/x86/kernel/sys_x86_64.c | 5 +----
 2 files changed, 2 insertions(+), 6 deletions(-)

diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 96becbbb52e0..8ca7f42d97f3 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -507,8 +507,7 @@ unsigned long arch_align_stack(unsigned long sp)
 
 unsigned long arch_randomize_brk(struct mm_struct *mm)
 {
-	unsigned long range_end = mm->brk + 0x02000000;
-	return randomize_range(mm->brk, range_end, 0) ? : mm->brk;
+	return randomize_page(mm->brk, 0x02000000);
 }
 
 /*
diff --git a/arch/x86/kernel/sys_x86_64.c b/arch/x86/kernel/sys_x86_64.c
index 10e0272d789a..a55ed63b9f91 100644
--- a/arch/x86/kernel/sys_x86_64.c
+++ b/arch/x86/kernel/sys_x86_64.c
@@ -101,7 +101,6 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
 			   unsigned long *end)
 {
 	if (!test_thread_flag(TIF_ADDR32) && (flags & MAP_32BIT)) {
-		unsigned long new_begin;
 		/* This is usually used needed to map code in small
 		   model, so it needs to be in the first 31bit. Limit
 		   it to that.  This means we need to move the
@@ -112,9 +111,7 @@ static void find_start_end(unsigned long flags, unsigned long *begin,
 		*begin = 0x40000000;
 		*end = 0x80000000;
 		if (current->flags & PF_RANDOMIZE) {
-			new_begin = randomize_range(*begin, *begin + 0x02000000, 0);
-			if (new_begin)
-				*begin = new_begin;
+			*begin = randomize_page(*begin, 0x02000000);
 		}
 	} else {
 		*begin = current->mm->mmap_legacy_base;
-- 
2.9.2
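
For reference, randomize_page() itself is introduced earlier in this series
rather than in this patch, so its body does not appear in the diff above.
The sketch below only illustrates the semantics the converted callers rely
on: a page-aligned address somewhere in [start, start + range), degrading to
plain 'start' when the usable range collapses. It is an illustrative
approximation under those assumptions, not the helper from the earlier patch.

/*
 * Illustrative sketch only: shows the behaviour the x86 callers above
 * assume from randomize_page(); details here are approximations, not the
 * implementation added earlier in this series.
 */
#include <linux/mm.h>
#include <linux/random.h>

unsigned long randomize_page(unsigned long start, unsigned long range)
{
	if (!PAGE_ALIGNED(start)) {
		/* Keep the result inside [start, start + range) after aligning up. */
		range -= PAGE_ALIGN(start) - start;
		start = PAGE_ALIGN(start);
	}

	/* Clamp the range so start + range cannot wrap the address space. */
	if (start > ULONG_MAX - range)
		range = ULONG_MAX - start;

	/* Work in whole pages; an empty range means no randomization. */
	range >>= PAGE_SHIFT;
	if (range == 0)
		return start;

	/* Pick a page-granular offset within the remaining range. */
	return start + (get_random_long() % range << PAGE_SHIFT);
}

Because the helper absorbs the alignment and empty-range handling that the
old randomize_range() callers open-coded, each call site above shrinks to a
single expression with no fallback check, e.g. randomize_page(mm->brk,
0x02000000) for the brk case.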