From: Kees Cook <keescook@chromium.org>
To: Thomas Gleixner <tglx@linutronix.de>
Cc: Kees Cook <keescook@chromium.org>,
	Elena Reshetova <elena.reshetova@intel.com>,
	x86@kernel.org,
	Andy Lutomirski <luto@kernel.org>,
	Peter Zijlstra <peterz@infradead.org>,
	Catalin Marinas <catalin.marinas@arm.com>,
	Will Deacon <will@kernel.org>,
	Mark Rutland <mark.rutland@arm.com>,
	Alexander Potapenko <glider@google.com>,
	Ard Biesheuvel <ard.biesheuvel@linaro.org>,
	Jann Horn <jannh@google.com>,
	kernel-hardening@lists.openwall.com,
	linux-arm-kernel@lists.infradead.org,
	linux-mm@kvack.org,
	linux-kernel@vger.kernel.org
Subject: [PATCH v3 4/5] x86/entry: Enable random_kstack_offset support
Date: Mon, 6 Apr 2020 16:16:05 -0700
Message-ID: <20200406231606.37619-5-keescook@chromium.org>
In-Reply-To: <20200406231606.37619-1-keescook@chromium.org>

Allow for a randomized stack offset on a per-syscall basis, with roughly
5 bits of entropy. In order to avoid unconditional stack canaries on
syscall entry, also downgrade from -fstack-protector-strong to
-fstack-protector to avoid triggering checks due to alloca(). Examining
the resulting canary coverage changes to common.o, this also removes
canaries in other functions, due to a handful of declarations of
"__u64 args[6]" (from seccomp) and "unsigned long args[6]" (from
tracepoints), but their accesses are indexed (instead of via dynamically
sized linear reads/writes) so the risk of removing useful mitigation
coverage here is very low.

Signed-off-by: Kees Cook <keescook@chromium.org>
---
 arch/x86/Kconfig        |  1 +
 arch/x86/entry/Makefile |  9 +++++++++
 arch/x86/entry/common.c | 12 +++++++++++-
 3 files changed, 21 insertions(+), 1 deletion(-)

diff --git a/arch/x86/Kconfig b/arch/x86/Kconfig
index beea77046f9b..b9d449581eb6 100644
--- a/arch/x86/Kconfig
+++ b/arch/x86/Kconfig
@@ -150,6 +150,7 @@ config X86
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE
 	select HAVE_ARCH_TRANSPARENT_HUGEPAGE_PUD	if X86_64
 	select HAVE_ARCH_VMAP_STACK		if X86_64
+	select HAVE_ARCH_RANDOMIZE_KSTACK_OFFSET
 	select HAVE_ARCH_WITHIN_STACK_FRAMES
 	select HAVE_ASM_MODVERSIONS
 	select HAVE_CMPXCHG_DOUBLE
diff --git a/arch/x86/entry/Makefile b/arch/x86/entry/Makefile
index 06fc70cf5433..7b40e6ae2618 100644
--- a/arch/x86/entry/Makefile
+++ b/arch/x86/entry/Makefile
@@ -7,6 +7,15 @@ OBJECT_FILES_NON_STANDARD_entry_64_compat.o := y
 
 CFLAGS_syscall_64.o		+= $(call cc-option,-Wno-override-init,)
 CFLAGS_syscall_32.o		+= $(call cc-option,-Wno-override-init,)
+
+# Downgrade to -fstack-protector to avoid triggering unneeded stack canary
+# checks due to randomize_kstack_offset. This also removes canaries in
+# other places as well, due to a handful of declarations of __u64 args[6]
+# (seccomp) and unsigned long args[6] (tracepoints), but their accesses
+# are indexed (instead of via dynamically sized linear reads/writes) so
+# the risk of removing useful mitigation coverage here is very low.
+CFLAGS_common.o			+= $(subst -fstack-protector-strong,-fstack-protector,$(filter -fstack-protector-strong,$(KBUILD_CFLAGS)))
+
 obj-y				:= entry_$(BITS).o thunk_$(BITS).o syscall_$(BITS).o
 obj-y				+= common.o
diff --git a/arch/x86/entry/common.c b/arch/x86/entry/common.c
index 9747876980b5..086d7af570af 100644
--- a/arch/x86/entry/common.c
+++ b/arch/x86/entry/common.c
@@ -26,6 +26,7 @@
 #include <linux/livepatch.h>
 #include <linux/syscalls.h>
 #include <linux/uaccess.h>
+#include <linux/randomize_kstack.h>
 
 #include <asm/desc.h>
 #include <asm/traps.h>
@@ -189,6 +190,13 @@ __visible inline void prepare_exit_to_usermode(struct pt_regs *regs)
 	lockdep_assert_irqs_disabled();
 	lockdep_sys_exit();
 
+	/*
+	 * x86_64 stack alignment means 3 bits are ignored, so keep
+	 * the top 5 bits. x86_32 needs only 2 bits of alignment, so
+	 * the top 6 bits will be used.
+	 */
+	choose_random_kstack_offset(rdtsc() & 0xFF);
+
 	cached_flags = READ_ONCE(ti->flags);
 
 	if (unlikely(cached_flags & EXIT_TO_USERMODE_LOOP_FLAGS))
@@ -283,6 +291,7 @@ __visible void do_syscall_64(unsigned long nr, struct pt_regs *regs)
 {
 	struct thread_info *ti;
 
+	add_random_kstack_offset();
 	enter_from_user_mode();
 	local_irq_enable();
 	ti = current_thread_info();
@@ -355,6 +364,7 @@ static __always_inline void do_syscall_32_irqs_on(struct pt_regs *regs)
 /* Handles int $0x80 */
 __visible void do_int80_syscall_32(struct pt_regs *regs)
 {
+	add_random_kstack_offset();
 	enter_from_user_mode();
 	local_irq_enable();
 	do_syscall_32_irqs_on(regs);
@@ -378,8 +388,8 @@ __visible long do_fast_syscall_32(struct pt_regs *regs)
 	 */
 	regs->ip = landing_pad;
 
+	add_random_kstack_offset();
 	enter_from_user_mode();
-
 	local_irq_enable();
 
 	/* Fetch EBP from where the vDSO stashed it. */
-- 
2.20.1
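For readers following along outside the kernel tree, here is a minimal
user-space sketch of the mechanism the patch wires into the x86 entry
paths: a per-call pseudo-random value (low TSC bits, mirroring the
choose_random_kstack_offset(rdtsc() & 0xFF) call above) is turned into a
small alloca()-style stack adjustment before the handler runs, so the
handler's locals land at a different offset on each call. This is an
approximation under stated assumptions, not the code from the series'
randomize_kstack.h: it collapses the choose-at-exit / apply-at-next-entry
split into a single step, and the demo_* names plus the use of
<x86intrin.h>'s __rdtsc() as a stand-in for the kernel's rdtsc() are
invented for the example.

/*
 * kstack_demo.c -- user-space illustration only, NOT the kernel code.
 * Build on an x86_64 box with GCC or Clang, e.g.: gcc -O2 kstack_demo.c
 */
#include <stdint.h>
#include <stdio.h>
#include <x86intrin.h>		/* __rdtsc(); assumes an x86 build */

static void demo_syscall_handler(void)
{
	int local;

	/* Show where a stack local ended up for this "syscall". */
	printf("local at %p\n", (void *)&local);
}

static void demo_entry(void)
{
	/*
	 * Same entropy source as the patch: the 8 low bits of the cycle
	 * counter. Stack alignment then discards the bottom bits; the
	 * kernel's 8-byte stack alignment keeps the top 5 bits, as the
	 * comment in prepare_exit_to_usermode() explains.
	 */
	uint32_t offset = (uint32_t)(__rdtsc() & 0xFF);

	/* Shift the stack before calling the handler (alloca-style). */
	uint8_t *pad = __builtin_alloca(offset);

	/* Keep the allocation live so the compiler cannot drop it. */
	asm volatile("" : : "r"(pad) : "memory");

	demo_syscall_handler();
}

int main(void)
{
	for (int i = 0; i < 4; i++)
		demo_entry();
	return 0;
}

Running it should show the printed address of "local" shifting between
calls; user-space stacks are typically kept 16-byte aligned, so expect
fewer distinct offsets here than the roughly 5 bits the kernel gets.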
Thread overview: 6+ messages

  2020-04-06 23:16 [PATCH v3 0/5] Optionally randomize kernel stack offset each syscall Kees Cook
  2020-04-06 23:16 ` [PATCH v3 1/5] jump_label: Provide CONFIG-driven build state defaults Kees Cook
  2020-04-06 23:16 ` [PATCH v3 2/5] init_on_alloc: Unpessimize default-on builds Kees Cook
  2020-04-06 23:16 ` [PATCH v3 3/5] stack: Optionally randomize kernel stack offset each syscall Kees Cook
  2020-04-06 23:16 ` [PATCH v3 4/5] x86/entry: Enable random_kstack_offset support Kees Cook [this message]
  2020-04-06 23:16 ` [PATCH v3 5/5] arm64: entry: Enable random_kstack_offset support Kees Cook