From mboxrd@z Thu Jan 1 00:00:00 1970
Date: Tue, 5 Nov 2019 15:55:59 -0800
In-Reply-To: <20191105235608.107702-1-samitolvanen@google.com>
Message-Id: <20191105235608.107702-6-samitolvanen@google.com>
Mime-Version: 1.0
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20191105235608.107702-1-samitolvanen@google.com>
X-Mailer: git-send-email 2.24.0.rc1.363.gb1bccd3e3d-goog
Subject: [PATCH v5 05/14] add support for Clang's Shadow Call Stack (SCS)
From: Sami Tolvanen
To: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu,
 Ard Biesheuvel
Cc: Dave Martin, Kees Cook, Laura Abbott, Mark Rutland, Marc Zyngier,
 Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada,
 clang-built-linux@googlegroups.com, kernel-hardening@lists.openwall.com,
 linux-arm-kernel@lists.infradead.org, linux-kernel@vger.kernel.org,
 Sami Tolvanen
Content-Type: text/plain; charset="UTF-8"

This change adds generic support for Clang's Shadow Call Stack, which
uses a shadow stack to protect return addresses from being overwritten
by an attacker. Details are available here:

  https://clang.llvm.org/docs/ShadowCallStack.html

Note that security guarantees in the kernel differ from the ones
documented for user space. The kernel must store addresses of shadow
stacks used by other tasks and interrupt handlers in memory, which
means an attacker capable of reading and writing arbitrary memory may
be able to locate them and hijack control flow by modifying shadow
stacks that are not currently in use.

Signed-off-by: Sami Tolvanen
Reviewed-by: Kees Cook
Reviewed-by: Miguel Ojeda
---
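Note for readers (illustrative, not part of the diff below): the
prologue/epilogue instrumentation itself is emitted entirely by the
compiler once CC_FLAGS_SCS is added to KBUILD_CFLAGS; this patch only
manages the backing allocations. The __noscs annotation defined in
include/linux/compiler-clang.h is the opt-out for code that runs
before a task's shadow stack exists or that switches stacks by hand.
A minimal sketch of its intended use, with a hypothetical function
name:

        /*
         * Hypothetical example: a routine that runs before scs_init(),
         * or that swaps stack pointers itself, is compiled without
         * -fsanitize=shadow-call-stack instrumentation via __noscs.
         */
        static void __noscs early_stack_switch(void)
        {
                /* no shadow stack pushes or pops are emitted here */
        }

This mirrors the attribute pattern the kernel already uses for KASAN's
__no_sanitize_address.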
 Makefile                       |   6 ++
 arch/Kconfig                   |  33 ++++++
 include/linux/compiler-clang.h |   6 ++
 include/linux/compiler_types.h |   4 +
 include/linux/scs.h            |  57 ++++++++++
 init/init_task.c               |   8 ++
 kernel/Makefile                |   1 +
 kernel/fork.c                  |   9 ++
 kernel/sched/core.c            |   2 +
 kernel/scs.c                   | 187 +++++++++++++++++++++++++++++++++
 10 files changed, 313 insertions(+)
 create mode 100644 include/linux/scs.h
 create mode 100644 kernel/scs.c

diff --git a/Makefile b/Makefile
index b37d0e8fc61d..7f3a4c5c7dcc 100644
--- a/Makefile
+++ b/Makefile
@@ -846,6 +846,12 @@ ifdef CONFIG_LIVEPATCH
 KBUILD_CFLAGS += $(call cc-option, -flive-patching=inline-clone)
 endif
 
+ifdef CONFIG_SHADOW_CALL_STACK
+CC_FLAGS_SCS := -fsanitize=shadow-call-stack
+KBUILD_CFLAGS += $(CC_FLAGS_SCS)
+export CC_FLAGS_SCS
+endif
+
 # arch Makefile may override CC so keep this after arch Makefile is included
 NOSTDINC_FLAGS += -nostdinc -isystem $(shell $(CC) -print-file-name=include)
diff --git a/arch/Kconfig b/arch/Kconfig
index 5f8a5d84dbbe..5e34cbcd8d6a 100644
--- a/arch/Kconfig
+++ b/arch/Kconfig
@@ -521,6 +521,39 @@ config STACKPROTECTOR_STRONG
 	  about 20% of all kernel functions, which increases the kernel code
 	  size by about 2%.
 
+config ARCH_SUPPORTS_SHADOW_CALL_STACK
+	bool
+	help
+	  An architecture should select this if it supports Clang's Shadow
+	  Call Stack, has asm/scs.h, and implements runtime support for shadow
+	  stack switching.
+
+config SHADOW_CALL_STACK_VMAP
+	bool
+	depends on SHADOW_CALL_STACK
+	help
+	  Use virtually mapped shadow call stacks. Selecting this option
+	  provides better stack exhaustion protection, but increases per-thread
+	  memory consumption as a full page is allocated for each shadow stack.
+
+config SHADOW_CALL_STACK
+	bool "Clang Shadow Call Stack"
+	depends on ARCH_SUPPORTS_SHADOW_CALL_STACK
+	help
+	  This option enables Clang's Shadow Call Stack, which uses a
+	  shadow stack to protect function return addresses from being
+	  overwritten by an attacker. More information can be found in
+	  Clang's documentation:
+
+	    https://clang.llvm.org/docs/ShadowCallStack.html
+
+	  Note that security guarantees in the kernel differ from the ones
+	  documented for user space. The kernel must store addresses of shadow
+	  stacks used by other tasks and interrupt handlers in memory, which
+	  means an attacker capable of reading and writing arbitrary memory may
+	  be able to locate them and hijack control flow by modifying shadow
+	  stacks that are not currently in use.
+
 config HAVE_ARCH_WITHIN_STACK_FRAMES
 	bool
 	help
diff --git a/include/linux/compiler-clang.h b/include/linux/compiler-clang.h
index 333a6695a918..18fc4d29ef27 100644
--- a/include/linux/compiler-clang.h
+++ b/include/linux/compiler-clang.h
@@ -42,3 +42,9 @@
  * compilers, like ICC.
  */
 #define barrier() __asm__ __volatile__("" : : : "memory")
+
+#if __has_feature(shadow_call_stack)
+# define __noscs	__attribute__((__no_sanitize__("shadow-call-stack")))
+#else
+# define __noscs
+#endif
diff --git a/include/linux/compiler_types.h b/include/linux/compiler_types.h
index 72393a8c1a6c..be5d5be4b1ae 100644
--- a/include/linux/compiler_types.h
+++ b/include/linux/compiler_types.h
@@ -202,6 +202,10 @@ struct ftrace_likely_data {
 # define randomized_struct_fields_end
 #endif
 
+#ifndef __noscs
+# define __noscs
+#endif
+
 #ifndef asm_volatile_goto
 #define asm_volatile_goto(x...) asm goto(x)
 #endif
diff --git a/include/linux/scs.h b/include/linux/scs.h
new file mode 100644
index 000000000000..c5572fd770b0
--- /dev/null
+++ b/include/linux/scs.h
@@ -0,0 +1,57 @@
+/* SPDX-License-Identifier: GPL-2.0 */
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2019 Google LLC
+ */
+
+#ifndef _LINUX_SCS_H
+#define _LINUX_SCS_H
+
+#include <linux/gfp.h>
+#include <linux/sched.h>
+#include <asm/scs.h>
+
+#ifdef CONFIG_SHADOW_CALL_STACK
+
+/*
+ * In testing, 1 KiB shadow stack size (i.e. 128 stack frames on a 64-bit
+ * architecture) provided ~40% safety margin on stack usage while keeping
+ * memory allocation overhead reasonable.
+ */
+#define SCS_SIZE	1024UL
+#define GFP_SCS		(GFP_KERNEL | __GFP_ZERO)
+
+/*
+ * A random number outside the kernel's virtual address space to mark the
+ * end of the shadow stack.
+ */
+#define SCS_END_MAGIC	0xaf0194819b1635f6UL
+
+#define task_scs(tsk)	(task_thread_info(tsk)->shadow_call_stack)
+
+static inline void task_set_scs(struct task_struct *tsk, void *s)
+{
+	task_scs(tsk) = s;
+}
+
+extern void scs_init(void);
+extern void scs_task_reset(struct task_struct *tsk);
+extern int scs_prepare(struct task_struct *tsk, int node);
+extern bool scs_corrupted(struct task_struct *tsk);
+extern void scs_release(struct task_struct *tsk);
+
+#else /* CONFIG_SHADOW_CALL_STACK */
+
+#define task_scs(tsk)	NULL
+
+static inline void task_set_scs(struct task_struct *tsk, void *s) {}
+static inline void scs_init(void) {}
+static inline void scs_task_reset(struct task_struct *tsk) {}
+static inline int scs_prepare(struct task_struct *tsk, int node) { return 0; }
+static inline bool scs_corrupted(struct task_struct *tsk) { return false; }
+static inline void scs_release(struct task_struct *tsk) {}
+
+#endif /* CONFIG_SHADOW_CALL_STACK */
+
+#endif /* _LINUX_SCS_H */
diff --git a/init/init_task.c b/init/init_task.c
index 9e5cbe5eab7b..cbd40460e903 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -11,6 +11,7 @@
 #include
 #include
 #include
+#include <linux/scs.h>
 
 #include
 #include
@@ -184,6 +185,13 @@ struct task_struct init_task
 };
 EXPORT_SYMBOL(init_task);
 
+#ifdef CONFIG_SHADOW_CALL_STACK
+unsigned long init_shadow_call_stack[SCS_SIZE / sizeof(long)] __init_task_data
+	__aligned(SCS_SIZE) = {
+		[(SCS_SIZE / sizeof(long)) - 1] = SCS_END_MAGIC
+};
+#endif
+
 /*
  * Initial thread structure. Alignment of this is handled by a special
  * linker map entry.
diff --git a/kernel/Makefile b/kernel/Makefile
index daad787fb795..313dbd44d576 100644
--- a/kernel/Makefile
+++ b/kernel/Makefile
@@ -102,6 +102,7 @@ obj-$(CONFIG_TRACEPOINTS) += trace/
 obj-$(CONFIG_IRQ_WORK) += irq_work.o
 obj-$(CONFIG_CPU_PM) += cpu_pm.o
 obj-$(CONFIG_BPF) += bpf/
+obj-$(CONFIG_SHADOW_CALL_STACK) += scs.o
 
 obj-$(CONFIG_PERF_EVENTS) += events/
diff --git a/kernel/fork.c b/kernel/fork.c
index 55af6931c6ec..6c4266019935 100644
--- a/kernel/fork.c
+++ b/kernel/fork.c
@@ -94,6 +94,7 @@
 #include
 #include
 #include
+#include <linux/scs.h>
 
 #include
 #include
@@ -451,6 +452,8 @@ void put_task_stack(struct task_struct *tsk)
 
 void free_task(struct task_struct *tsk)
 {
+	scs_release(tsk);
+
 #ifndef CONFIG_THREAD_INFO_IN_TASK
 	/*
 	 * The task is finally done with both the stack and thread_info,
@@ -834,6 +837,8 @@ void __init fork_init(void)
 			  NULL, free_vm_stack_cache);
 #endif
 
+	scs_init();
+
 	lockdep_init_task(&init_task);
 	uprobes_init();
 }
@@ -893,6 +898,10 @@ static struct task_struct *dup_task_struct(struct task_struct *orig, int node)
 	if (err)
 		goto free_stack;
 
+	err = scs_prepare(tsk, node);
+	if (err)
+		goto free_stack;
+
 #ifdef CONFIG_SECCOMP
 	/*
 	 * We must handle setting up seccomp filters once we're under
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index dd05a378631a..6769e27052bf 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11,6 +11,7 @@
 #include
 
 #include
+#include <linux/scs.h>
 
 #include
 #include
@@ -6018,6 +6019,7 @@ void init_idle(struct task_struct *idle, int cpu)
 	idle->se.exec_start = sched_clock();
 	idle->flags |= PF_IDLE;
 
+	scs_task_reset(idle);
 	kasan_unpoison_task_stack(idle);
 
 #ifdef CONFIG_SMP
diff --git a/kernel/scs.c b/kernel/scs.c
new file mode 100644
index 000000000000..e3234a4b92ec
--- /dev/null
+++ b/kernel/scs.c
@@ -0,0 +1,187 @@
+// SPDX-License-Identifier: GPL-2.0
+/*
+ * Shadow Call Stack support.
+ *
+ * Copyright (C) 2019 Google LLC
+ */
+
+#include <linux/cpuhotplug.h>
+#include <linux/kasan.h>
+#include <linux/mm.h>
+#include <linux/mmzone.h>
+#include <linux/scs.h>
+#include <linux/slab.h>
+#include <linux/vmalloc.h>
+#include <asm/scs.h>
+
+static inline void *__scs_base(struct task_struct *tsk)
+{
+	/*
+	 * To minimize the risk of exposure, architectures may clear a
+	 * task's thread_info::shadow_call_stack while that task is
+	 * running, and only save/restore the active shadow call stack
+	 * pointer when the usual register may be clobbered (e.g. across
+	 * context switches).
+	 *
+	 * The shadow call stack is aligned to SCS_SIZE, and grows
+	 * upwards, so we can mask out the low bits to extract the base
+	 * when the task is not running.
+	 */
+	return (void *)((unsigned long)task_scs(tsk) & ~(SCS_SIZE - 1));
+}
+
+static inline unsigned long *scs_magic(void *s)
+{
+	return (unsigned long *)(s + SCS_SIZE) - 1;
+}
+
+static inline void scs_set_magic(void *s)
+{
+	*scs_magic(s) = SCS_END_MAGIC;
+}
+
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+
+/* Matches NR_CACHED_STACKS for VMAP_STACK */
+#define NR_CACHED_SCS 2
+static DEFINE_PER_CPU(void *, scs_cache[NR_CACHED_SCS]);
+
+static void *scs_alloc(int node)
+{
+	int i;
+	void *s;
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		s = this_cpu_xchg(scs_cache[i], NULL);
+		if (s) {
+			memset(s, 0, SCS_SIZE);
+			goto out;
+		}
+	}
+
+	/*
+	 * We allocate a full page for the shadow stack, which should be
+	 * more than we need. Check the assumption nevertheless.
+	 */
+	BUILD_BUG_ON(SCS_SIZE > PAGE_SIZE);
+
+	s = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+				 VMALLOC_START, VMALLOC_END,
+				 GFP_SCS, PAGE_KERNEL, 0,
+				 node, __builtin_return_address(0));
+
+out:
+	if (s)
+		scs_set_magic(s);
+	/* TODO: poison for KASAN, unpoison in scs_free */
+
+	return s;
+}
+
+static void scs_free(void *s)
+{
+	int i;
+
+	for (i = 0; i < NR_CACHED_SCS; i++)
+		if (this_cpu_cmpxchg(scs_cache[i], 0, s) == NULL)
+			return;
+
+	vfree_atomic(s);
+}
+
+static int scs_cleanup(unsigned int cpu)
+{
+	int i;
+	void **cache = per_cpu_ptr(scs_cache, cpu);
+
+	for (i = 0; i < NR_CACHED_SCS; i++) {
+		vfree(cache[i]);
+		cache[i] = NULL;
+	}
+
+	return 0;
+}
+
+void __init scs_init(void)
+{
+	WARN_ON(cpuhp_setup_state(CPUHP_BP_PREPARE_DYN, "scs:scs_cache", NULL,
+				  scs_cleanup));
+}
+
+#else /* !CONFIG_SHADOW_CALL_STACK_VMAP */
+
+static struct kmem_cache *scs_cache;
+
+static inline void *scs_alloc(int node)
+{
+	void *s;
+
+	s = kmem_cache_alloc_node(scs_cache, GFP_SCS, node);
+	if (s) {
+		scs_set_magic(s);
+		/*
+		 * Poison the allocation to catch unintentional accesses to
+		 * the shadow stack when KASAN is enabled.
+		 */
+		kasan_poison_object_data(scs_cache, s);
+	}
+
+	return s;
+}
+
+static inline void scs_free(void *s)
+{
+	kasan_unpoison_object_data(scs_cache, s);
+	kmem_cache_free(scs_cache, s);
+}
+
+void __init scs_init(void)
+{
+	scs_cache = kmem_cache_create("scs_cache", SCS_SIZE, SCS_SIZE,
+				      0, NULL);
+	WARN_ON(!scs_cache);
+}
+
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+
+void scs_task_reset(struct task_struct *tsk)
+{
+	/*
+	 * Reset the shadow stack to the base address in case the task
+	 * is reused.
+	 */
+	task_set_scs(tsk, __scs_base(tsk));
+}
+
+int scs_prepare(struct task_struct *tsk, int node)
+{
+	void *s;
+
+	s = scs_alloc(node);
+	if (!s)
+		return -ENOMEM;
+
+	task_set_scs(tsk, s);
+	return 0;
+}
+
+bool scs_corrupted(struct task_struct *tsk)
+{
+	unsigned long *magic = scs_magic(__scs_base(tsk));
+
+	return READ_ONCE_NOCHECK(*magic) != SCS_END_MAGIC;
+}
+
+void scs_release(struct task_struct *tsk)
+{
+	void *s;
+
+	s = __scs_base(tsk);
+	if (!s)
+		return;
+
+	WARN_ON(scs_corrupted(tsk));
+
+	task_set_scs(tsk, NULL);
+	scs_free(s);
+}
-- 
2.24.0.rc1.363.gb1bccd3e3d-goog
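The runtime checks in kernel/scs.c rely on two invariants: each shadow
stack is SCS_SIZE-aligned, so __scs_base() can recover the base from
any live pointer into the region by masking the low bits, and the last
slot always holds SCS_END_MAGIC, so scs_corrupted() can detect an
overflow after the fact. A minimal, self-contained user-space sketch
of those two invariants (assuming only libc, not any kernel API):

        #include <stdio.h>
        #include <stdlib.h>
        #include <string.h>

        #define SCS_SIZE        1024UL
        #define SCS_END_MAGIC   0xaf0194819b1635f6UL

        /* Mirrors __scs_base(): mask the low bits of any pointer into
         * the region to recover its SCS_SIZE-aligned base. */
        static void *scs_base(void *ptr)
        {
                return (void *)((unsigned long)ptr & ~(SCS_SIZE - 1));
        }

        /* Mirrors scs_magic(): the last unsigned long in the region. */
        static unsigned long *scs_magic(void *base)
        {
                return (unsigned long *)((char *)base + SCS_SIZE) - 1;
        }

        int main(void)
        {
                /* aligned_alloc stands in for the kernel's SCS_SIZE-aligned
                 * kmem cache (or vmalloc range in the VMAP case). */
                void *s = aligned_alloc(SCS_SIZE, SCS_SIZE);
                void *p;

                if (!s)
                        return 1;
                memset(s, 0, SCS_SIZE);
                *scs_magic(s) = SCS_END_MAGIC;

                /* Any pointer into the region maps back to the base. */
                p = (char *)s + 200;
                printf("base recovered: %d\n", scs_base(p) == s);

                /* Clobbering the end marker is detected, as in
                 * scs_corrupted(). */
                *scs_magic(s) = 0;
                printf("corrupted:      %d\n",
                       *scs_magic(scs_base(p)) != SCS_END_MAGIC);

                free(s);
                return 0;
        }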