From: Yu-cheng Yu <yu-cheng.yu@intel.com>
To: x86@kernel.org, "H. Peter Anvin", Thomas Gleixner, Ingo Molnar,
    linux-kernel@vger.kernel.org, linux-doc@vger.kernel.org, linux-mm@kvack.org,
    linux-arch@vger.kernel.org, linux-api@vger.kernel.org, Arnd Bergmann,
    Andy Lutomirski, Balbir Singh, Borislav Petkov, Cyrill Gorcunov,
    Dave Hansen, Eugene Syromiatnikov, Florian Weimer, "H.J. Lu", Jann Horn,
    Jonathan Corbet, Kees Cook, Mike Kravetz, Nadav Amit, Oleg Nesterov,
    Pavel Machek, Peter Zijlstra, Randy Dunlap, "Ravi V. Shankar",
    Vedvyas Shanbhogue, Dave Martin, Weijiang Yang, Pengfei Xu, Haitao Huang
Cc: Yu-cheng Yu
Subject: [PATCH v27 24/31] x86/cet/shstk: Handle thread shadow stack
Date: Fri, 21 May 2021 15:12:04 -0700
Message-Id: <20210521221211.29077-25-yu-cheng.yu@intel.com>
X-Mailer: git-send-email 2.21.0
In-Reply-To: <20210521221211.29077-1-yu-cheng.yu@intel.com>
References: <20210521221211.29077-1-yu-cheng.yu@intel.com>

For clone() with CLONE_VM (but not vfork), the child and the parent must
have separate shadow stacks.  Thus, the kernel allocates a new shadow
stack for the child and frees it on thread exit.

Use the stack_size passed from the clone3() syscall for the thread shadow
stack size.  A compat-mode thread's shadow stack size is further reduced
to 1/4, which allows more threads to run in a 32-bit address space.
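For illustration only (not part of this patch): a minimal user-space sketch of
where stack_size comes from, using a raw clone3() call that supplies the thread
stack explicitly.  The thread_create() wrapper, the flag set, and the 8 MB size
are assumptions for the example, and __NR_clone3 needs reasonably recent kernel
headers; with this series, the kernel sizes the new thread's shadow stack from
stack_size (a quarter of it for a compat-mode task) and frees it on thread exit.

/* Illustrative sketch only; not part of this patch. */
#define _GNU_SOURCE
#include <linux/sched.h>	/* struct clone_args, CLONE_* */
#include <linux/types.h>
#include <sys/syscall.h>
#include <sys/mman.h>
#include <stdint.h>
#include <unistd.h>

#define THREAD_STACK_SIZE	(8UL * 1024 * 1024)

static long thread_create(void)
{
	void *stack = mmap(NULL, THREAD_STACK_SIZE, PROT_READ | PROT_WRITE,
			   MAP_PRIVATE | MAP_ANONYMOUS | MAP_STACK, -1, 0);
	struct clone_args args = {
		/* CLONE_VM without CLONE_VFORK: the child gets its own shadow stack */
		.flags		= CLONE_VM | CLONE_FS | CLONE_FILES |
				  CLONE_SIGHAND | CLONE_THREAD | CLONE_SYSVSEM,
		.stack		= (__u64)(uintptr_t)stack,
		.stack_size	= THREAD_STACK_SIZE,	/* also sizes the shadow stack */
	};
	long ret;

	if (stack == MAP_FAILED)
		return -1;

	ret = syscall(__NR_clone3, &args, sizeof(args));
	if (ret == 0) {
		/*
		 * Child: now running on 'stack' (and, with CET enabled, on its
		 * own shadow stack).  A real thread library would jump to a
		 * start routine here; this sketch just exits the thread without
		 * touching the parent's stack frame.
		 */
		syscall(__NR_exit, 0);
	}
	return ret;	/* child TID in the parent, or -1 on error */
}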
Signed-off-by: Yu-cheng Yu <yu-cheng.yu@intel.com>
---
 arch/x86/include/asm/cet.h         |  5 +++
 arch/x86/include/asm/mmu_context.h |  3 ++
 arch/x86/kernel/process.c          | 15 +++++---
 arch/x86/kernel/shstk.c            | 55 +++++++++++++++++++++++++++++-
 4 files changed, 73 insertions(+), 5 deletions(-)

diff --git a/arch/x86/include/asm/cet.h b/arch/x86/include/asm/cet.h
index 6432baf4de1f..4314a41ab3c9 100644
--- a/arch/x86/include/asm/cet.h
+++ b/arch/x86/include/asm/cet.h
@@ -17,10 +17,15 @@ struct thread_shstk {
 
 #ifdef CONFIG_X86_SHADOW_STACK
 int shstk_setup(void);
+int shstk_alloc_thread_stack(struct task_struct *p, unsigned long clone_flags,
+			     unsigned long stack_size);
 void shstk_free(struct task_struct *p);
 void shstk_disable(void);
 #else
 static inline int shstk_setup(void) { return 0; }
+static inline int shstk_alloc_thread_stack(struct task_struct *p,
+					   unsigned long clone_flags,
+					   unsigned long stack_size) { return 0; }
 static inline void shstk_free(struct task_struct *p) {}
 static inline void shstk_disable(void) {}
 #endif
diff --git a/arch/x86/include/asm/mmu_context.h b/arch/x86/include/asm/mmu_context.h
index 27516046117a..e1dd083261a5 100644
--- a/arch/x86/include/asm/mmu_context.h
+++ b/arch/x86/include/asm/mmu_context.h
@@ -12,6 +12,7 @@
 #include <asm/tlbflush.h>
 #include <asm/paravirt.h>
 #include <asm/debugreg.h>
+#include <asm/cet.h>
 
 extern atomic64_t last_mm_ctx_id;
 
@@ -146,6 +147,8 @@ do {						\
 #else
 #define deactivate_mm(tsk, mm)			\
 do {						\
+	if (!tsk->vfork_done)			\
+		shstk_free(tsk);		\
 	load_gs_index(0);			\
 	loadsegment(fs, 0);			\
 } while (0)
diff --git a/arch/x86/kernel/process.c b/arch/x86/kernel/process.c
index 5e1f38179f49..7a583a28ddb2 100644
--- a/arch/x86/kernel/process.c
+++ b/arch/x86/kernel/process.c
@@ -43,6 +43,7 @@
 #include <asm/io_bitmap.h>
 #include <asm/proto.h>
 #include <asm/frame.h>
+#include <asm/cet.h>
 
 #include "process.h"
 
@@ -104,6 +105,7 @@ void exit_thread(struct task_struct *tsk)
 
 	free_vm86(t);
 
+	shstk_free(tsk);
 	fpu__drop(fpu);
 }
 
@@ -117,8 +119,9 @@ static int set_new_tls(struct task_struct *p, unsigned long tls)
 	return do_set_thread_area_64(p, ARCH_SET_FS, tls);
 }
 
-int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
-		struct task_struct *p, unsigned long tls)
+int copy_thread(unsigned long clone_flags, unsigned long sp,
+		unsigned long stack_size, struct task_struct *p,
+		unsigned long tls)
 {
 	struct inactive_task_frame *frame;
 	struct fork_frame *fork_frame;
@@ -158,7 +161,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
 	/* Kernel thread ? */
 	if (unlikely(p->flags & PF_KTHREAD)) {
 		memset(childregs, 0, sizeof(struct pt_regs));
-		kthread_frame_init(frame, sp, arg);
+		kthread_frame_init(frame, sp, stack_size);
 		return 0;
 	}
 
@@ -185,7 +188,7 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
 	 */
 	childregs->sp = 0;
 	childregs->ip = 0;
-	kthread_frame_init(frame, sp, arg);
+	kthread_frame_init(frame, sp, stack_size);
 	return 0;
 }
 
@@ -193,6 +196,10 @@ int copy_thread(unsigned long clone_flags, unsigned long sp, unsigned long arg,
 	if (clone_flags & CLONE_SETTLS)
 		ret = set_new_tls(p, tls);
 
+	/* Allocate a new shadow stack for pthread */
+	if (!ret)
+		ret = shstk_alloc_thread_stack(p, clone_flags, stack_size);
+
 	if (!ret && unlikely(test_tsk_thread_flag(current, TIF_IO_BITMAP)))
 		io_bitmap_share(p);
 
diff --git a/arch/x86/kernel/shstk.c b/arch/x86/kernel/shstk.c
index 5ea2b494e9f9..8e5f772181b9 100644
--- a/arch/x86/kernel/shstk.c
+++ b/arch/x86/kernel/shstk.c
@@ -71,6 +71,53 @@ int shstk_setup(void)
 	return 0;
 }
 
+int shstk_alloc_thread_stack(struct task_struct *tsk, unsigned long clone_flags,
+			     unsigned long stack_size)
+{
+	struct thread_shstk *shstk = &tsk->thread.shstk;
+	struct cet_user_state *state;
+	unsigned long addr;
+
+	if (!stack_size)
+		return -EINVAL;
+
+	if (!shstk->size)
+		return 0;
+
+	/*
+	 * For CLONE_VM, except vfork, the child needs a separate shadow
+	 * stack.
+	 */
+	if ((clone_flags & (CLONE_VFORK | CLONE_VM)) != CLONE_VM)
+		return 0;
+
+	state = get_xsave_addr(&tsk->thread.fpu.state.xsave, XFEATURE_CET_USER);
+	if (!state)
+		return -EINVAL;
+
+	/*
+	 * Compat-mode pthreads share a limited address space.
+	 * If each function call takes an average of four slots
+	 * stack space, allocate 1/4 of stack size for shadow stack.
+	 */
+	if (in_compat_syscall())
+		stack_size /= 4;
+
+	stack_size = round_up(stack_size, PAGE_SIZE);
+	addr = alloc_shstk(stack_size);
+	if (IS_ERR_VALUE(addr)) {
+		shstk->base = 0;
+		shstk->size = 0;
+		return PTR_ERR((void *)addr);
+	}
+
+	fpu__prepare_write(&tsk->thread.fpu);
+	state->user_ssp = (u64)(addr + stack_size);
+	shstk->base = addr;
+	shstk->size = stack_size;
+	return 0;
+}
+
 void shstk_free(struct task_struct *tsk)
 {
 	struct thread_shstk *shstk = &tsk->thread.shstk;
@@ -80,7 +127,13 @@ void shstk_free(struct task_struct *tsk)
 	    !shstk->base)
 		return;
 
-	if (!tsk->mm)
+	/*
+	 * When fork() with CLONE_VM fails, the child (tsk) already has a
+	 * shadow stack allocated, and exit_thread() calls this function to
+	 * free it.  In this case the parent (current) and the child share
+	 * the same mm struct.
+	 */
+	if (!tsk->mm || tsk->mm != current->mm)
 		return;
 
 	while (1) {
-- 
2.21.0
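
For reference, and again not part of the patch: a small user-space sketch of the
sizing rule that shstk_alloc_thread_stack() applies above.  The helper name
shstk_size_for() and the 4 KiB PAGE_SIZE are assumptions for the example; the
arithmetic mirrors the compat 1/4 reduction and the page round-up in the hunk.

/* Illustrative sketch of the sizing logic in shstk_alloc_thread_stack(). */
#include <stdbool.h>
#include <stdio.h>

#define PAGE_SIZE 4096UL	/* assumed 4 KiB pages */

static unsigned long shstk_size_for(unsigned long stack_size, bool compat)
{
	/*
	 * Compat-mode threads get 1/4 of the stack size: roughly one
	 * shadow-stack slot per four words of regular stack used by an
	 * average call frame.
	 */
	if (compat)
		stack_size /= 4;

	/* round_up(stack_size, PAGE_SIZE) */
	return (stack_size + PAGE_SIZE - 1) & ~(PAGE_SIZE - 1);
}

int main(void)
{
	/* 8 MiB 64-bit stack -> 8192 KiB shadow stack */
	printf("%lu KiB\n", shstk_size_for(8UL << 20, false) >> 10);
	/* 2 MiB compat stack -> 512 KiB shadow stack */
	printf("%lu KiB\n", shstk_size_for(2UL << 20, true) >> 10);
	return 0;
}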