Date: Mon, 18 Nov 2019 15:13:43 -0800
From: Sami Tolvanen
To: Mark Rutland
Cc: Will Deacon, Catalin Marinas, Steven Rostedt, Masami Hiramatsu,
 Ard Biesheuvel, Dave Martin, Kees Cook, Laura Abbott, Marc Zyngier,
 Nick Desaulniers, Jann Horn, Miguel Ojeda, Masahiro Yamada,
 clang-built-linux, Kernel Hardening, linux-arm-kernel, LKML
Subject: Re: [PATCH v5 14/14] arm64: implement Shadow Call Stack
Message-ID: <20191118231343.GA231930@google.com>
References: <20191018161033.261971-1-samitolvanen@google.com>
 <20191105235608.107702-1-samitolvanen@google.com>
 <20191105235608.107702-15-samitolvanen@google.com>
 <20191115152047.GI41572@lakrids.cambridge.arm.com>

On Fri, Nov 15, 2019 at 12:19:20PM -0800, Sami Tolvanen wrote:
> On Fri, Nov 15, 2019 at 7:20 AM Mark Rutland wrote:
> >
> > On Tue, Nov 05, 2019 at 03:56:08PM -0800, Sami Tolvanen wrote:
> > > This change implements shadow stack switching, initial SCS set-up,
> > > and interrupt shadow stacks for arm64.
> >
> > Each CPU also has an overflow stack, and two SDEI stacks, which should
> > presumably be given their own SCS. SDEI is effectively a software-NMI,
> > so it should almost certainly have the same treatment as IRQ.
>
> Makes sense. I'll take a look at adding shadow stacks for the SDEI handler.

Mark, I wrote a preliminary patch to add SDEI shadow stacks, but it turns
out I don't really have a way to test the SDEI code. Does the approach
below look sane to you?

Sami

---
 arch/arm64/include/asm/scs.h        |   2 +
 arch/arm64/include/asm/stacktrace.h |   4 --
 arch/arm64/kernel/entry.S           |  14 +++-
 arch/arm64/kernel/scs.c             | 106 +++++++++++++++++++++++-----
 arch/arm64/kernel/sdei.c            |   7 ++
 5 files changed, 112 insertions(+), 21 deletions(-)

diff --git a/arch/arm64/include/asm/scs.h b/arch/arm64/include/asm/scs.h
index c50d2b0c6c5f..8e327e14bc15 100644
--- a/arch/arm64/include/asm/scs.h
+++ b/arch/arm64/include/asm/scs.h
@@ -9,6 +9,7 @@
 #ifdef CONFIG_SHADOW_CALL_STACK

 extern void scs_init_irq(void);
+extern int scs_init_sdei(void);

 static __always_inline void scs_save(struct task_struct *tsk)
 {
@@ -27,6 +28,7 @@ static inline void scs_overflow_check(struct task_struct *tsk)
 #else /* CONFIG_SHADOW_CALL_STACK */

 static inline void scs_init_irq(void) {}
+static inline int scs_init_sdei(void) { return 0; }
 static inline void scs_save(struct task_struct *tsk) {}
 static inline void scs_overflow_check(struct task_struct *tsk) {}

diff --git a/arch/arm64/include/asm/stacktrace.h b/arch/arm64/include/asm/stacktrace.h
index b6cf32fb4efe..4d9b1f48dc39 100644
--- a/arch/arm64/include/asm/stacktrace.h
+++ b/arch/arm64/include/asm/stacktrace.h
@@ -68,10 +68,6 @@ extern void dump_backtrace(struct pt_regs *regs, struct task_struct *tsk);

 DECLARE_PER_CPU(unsigned long *, irq_stack_ptr);

-#ifdef CONFIG_SHADOW_CALL_STACK
-DECLARE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
-#endif
-
 static inline bool on_irq_stack(unsigned long sp,
 				struct stack_info *info)
 {
diff --git a/arch/arm64/kernel/entry.S b/arch/arm64/kernel/entry.S
index 5a02b61fc3e6..ac9dfb3da440 100644
--- a/arch/arm64/kernel/entry.S
+++ b/arch/arm64/kernel/entry.S
@@ -1309,13 +1309,16 @@ ENTRY(__sdei_asm_handler)

 	mov	x19, x1

+#if defined(CONFIG_VMAP_STACK) || defined(CONFIG_SHADOW_CALL_STACK)
+	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
+#endif
+
 #ifdef CONFIG_VMAP_STACK
 	/*
 	 * entry.S may have been using sp as a scratch register, find whether
 	 * this is a normal or critical event and switch to the appropriate
 	 * stack for this CPU.
 	 */
-	ldrb	w4, [x19, #SDEI_EVENT_PRIORITY]
 	cbnz	w4, 1f
 	ldr_this_cpu dst=x5, sym=sdei_stack_normal_ptr, tmp=x6
 	b	2f
@@ -1325,6 +1328,15 @@ ENTRY(__sdei_asm_handler)
 	mov	sp, x5
 #endif

+#ifdef CONFIG_SHADOW_CALL_STACK
+	/* Use a separate shadow call stack for normal and critical events */
+	cbnz	w4, 3f
+	ldr_this_cpu dst=x18, sym=sdei_shadow_call_stack_normal_ptr, tmp=x6
+	b	4f
+3:	ldr_this_cpu dst=x18, sym=sdei_shadow_call_stack_critical_ptr, tmp=x6
+4:
+#endif
+
 	/*
 	 * We may have interrupted userspace, or a guest, or exit-from or
 	 * return-to either of these. We can't trust sp_el0, restore it.
diff --git a/arch/arm64/kernel/scs.c b/arch/arm64/kernel/scs.c
index 9a1305a6eb5b..dddb7c56518b 100644
--- a/arch/arm64/kernel/scs.c
+++ b/arch/arm64/kernel/scs.c
@@ -10,31 +10,105 @@
 #include
 #include

-DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
+#define DECLARE_SCS(name)						\
+	DECLARE_PER_CPU(unsigned long *, name ## _ptr);			\
+	DECLARE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)

-#ifndef CONFIG_SHADOW_CALL_STACK_VMAP
-DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
-	__aligned(SCS_SIZE);
+#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
+#define DEFINE_SCS(name)						\
+	DEFINE_PER_CPU(unsigned long *, name ## _ptr)
+#else
+/* Allocate a static per-CPU shadow stack */
+#define DEFINE_SCS(name)						\
+	DEFINE_PER_CPU(unsigned long *, name ## _ptr);			\
+	DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], name)	\
+		__aligned(SCS_SIZE)
+#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+
+DECLARE_SCS(irq_shadow_call_stack);
+DECLARE_SCS(sdei_shadow_call_stack_normal);
+DECLARE_SCS(sdei_shadow_call_stack_critical);
+
+DEFINE_SCS(irq_shadow_call_stack);
+#ifdef CONFIG_ARM_SDE_INTERFACE
+DEFINE_SCS(sdei_shadow_call_stack_normal);
+DEFINE_SCS(sdei_shadow_call_stack_critical);
 #endif

+static int scs_alloc_percpu(unsigned long * __percpu *ptr, int cpu)
+{
+	unsigned long *p;
+
+	p = __vmalloc_node_range(PAGE_SIZE, SCS_SIZE,
+				 VMALLOC_START, VMALLOC_END,
+				 GFP_SCS, PAGE_KERNEL,
+				 0, cpu_to_node(cpu),
+				 __builtin_return_address(0));
+
+	if (!p)
+		return -ENOMEM;
+	per_cpu(*ptr, cpu) = p;
+
+	return 0;
+}
+
+static void scs_free_percpu(unsigned long * __percpu *ptr, int cpu)
+{
+	unsigned long *p = per_cpu(*ptr, cpu);
+
+	if (p) {
+		per_cpu(*ptr, cpu) = NULL;
+		vfree(p);
+	}
+}
+
+static void scs_free_sdei(void)
+{
+	int cpu;
+
+	for_each_possible_cpu(cpu) {
+		scs_free_percpu(&sdei_shadow_call_stack_normal_ptr, cpu);
+		scs_free_percpu(&sdei_shadow_call_stack_critical_ptr, cpu);
+	}
+}
+
 void scs_init_irq(void)
 {
 	int cpu;

 	for_each_possible_cpu(cpu) {
-#ifdef CONFIG_SHADOW_CALL_STACK_VMAP
-		unsigned long *p;
+		if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK_VMAP))
+			WARN_ON(scs_alloc_percpu(&irq_shadow_call_stack_ptr,
+						 cpu));
+		else
+			per_cpu(irq_shadow_call_stack_ptr, cpu) =
+				per_cpu(irq_shadow_call_stack, cpu);
+	}
+}

-		p = __vmalloc_node_range(SCS_SIZE, SCS_SIZE,
-					 VMALLOC_START, VMALLOC_END,
-					 GFP_SCS, PAGE_KERNEL,
-					 0, cpu_to_node(cpu),
-					 __builtin_return_address(0));
+int scs_init_sdei(void)
+{
+	int cpu;

-		per_cpu(irq_shadow_call_stack_ptr, cpu) = p;
-#else
-		per_cpu(irq_shadow_call_stack_ptr, cpu) =
-			per_cpu(irq_shadow_call_stack, cpu);
-#endif /* CONFIG_SHADOW_CALL_STACK_VMAP */
+	if (!IS_ENABLED(CONFIG_ARM_SDE_INTERFACE))
+		return 0;
+
+	for_each_possible_cpu(cpu) {
+		if (IS_ENABLED(CONFIG_SHADOW_CALL_STACK_VMAP)) {
+			if (scs_alloc_percpu(
+				&sdei_shadow_call_stack_normal_ptr, cpu) ||
+			    scs_alloc_percpu(
+				&sdei_shadow_call_stack_critical_ptr, cpu)) {
+				scs_free_sdei();
+				return -ENOMEM;
+			}
+		} else {
+			per_cpu(sdei_shadow_call_stack_normal_ptr, cpu) =
+				per_cpu(sdei_shadow_call_stack_normal, cpu);
+			per_cpu(sdei_shadow_call_stack_critical_ptr, cpu) =
+				per_cpu(sdei_shadow_call_stack_critical, cpu);
+		}
 	}
+
+	return 0;
 }
diff --git a/arch/arm64/kernel/sdei.c b/arch/arm64/kernel/sdei.c
index ea94cf8f9dc6..3e85017a9c8b 100644
--- a/arch/arm64/kernel/sdei.c
+++ b/arch/arm64/kernel/sdei.c
@@ -12,6 +12,7 @@
 #include
 #include
 #include
+#include <asm/scs.h>
 #include
 #include
 #include
@@ -161,6 +162,12 @@ unsigned long sdei_arch_get_entry_point(int conduit)
 			return 0;
 	}

+	if (scs_init_sdei()) {
+		if (IS_ENABLED(CONFIG_VMAP_STACK))
+			free_sdei_stacks();
+		return 0;
+	}
+
 	sdei_exit_mode = (conduit == CONDUIT_HVC) ? SDEI_EXIT_HVC : SDEI_EXIT_SMC;

 #ifdef CONFIG_UNMAP_KERNEL_AT_EL0
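
For context: the entry.S hunk switches shadow stacks simply by loading a
per-CPU pointer into x18 because, with Clang's -fsanitize=shadow-call-stack
on arm64, x18 is reserved as the shadow call stack pointer. Instrumented
functions look roughly like the sketch below (illustrative only, not part
of the patch; example_func and example_callee are placeholder names):

	.text
	.globl	example_func
example_func:
	str	x30, [x18], #8		// prologue: push return address to the shadow stack
	stp	x29, x30, [sp, #-16]!	// ordinary frame record on the regular stack
	mov	x29, sp
	bl	example_callee		// nested calls clobber x30; the shadow copy survives
	ldp	x29, x30, [sp], #16
	ldr	x30, [x18, #-8]!	// epilogue: reload x30 from the shadow stack
	ret

Since only x18-relative accesses touch the shadow stack, pointing x18 at
the per-CPU sdei_shadow_call_stack_normal or _critical buffer before any C
code runs gives each SDEI priority its own private list of return
addresses.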
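
For reference, in the !CONFIG_SHADOW_CALL_STACK_VMAP case the DEFINE_SCS()
macro from the scs.c hunk reproduces what the old code spelled out by hand
for the IRQ stack; expanding one instance manually gives:

	/*
	 * Manual expansion of DEFINE_SCS(irq_shadow_call_stack) when
	 * CONFIG_SHADOW_CALL_STACK_VMAP is not set: a per-CPU pointer plus
	 * a statically allocated, SCS_SIZE-aligned backing array.
	 */
	DEFINE_PER_CPU(unsigned long *, irq_shadow_call_stack_ptr);
	DEFINE_PER_CPU(unsigned long [SCS_SIZE/sizeof(long)], irq_shadow_call_stack)
		__aligned(SCS_SIZE);

With CONFIG_SHADOW_CALL_STACK_VMAP enabled, only the _ptr variable is
defined and the backing stack is allocated at init time by
scs_alloc_percpu(), which places a page-sized, SCS_SIZE-aligned stack in
vmalloc space.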