From: Kristina Martsenko
To: linux-arm-kernel@lists.infradead.org
Cc: Adam Wallis, Amit Kachhap, Andrew Jones, Ard Biesheuvel, Catalin Marinas,
    Christoffer Dall, Cyrill Gorcunov, Dave P Martin, Jacob Bramley, Kees Cook,
    Marc Zyngier, Mark Rutland, Ramana Radhakrishnan, Richard Henderson,
    Suzuki K Poulose, Will Deacon, kvmarm@lists.cs.columbia.edu,
    linux-kernel@vger.kernel.org
Subject: [PATCH v6 09/13] arm64: perf: strip PAC when unwinding userspace
Date: Fri, 7 Dec 2018 18:39:27 +0000
Message-Id: <20181207183931.4285-10-kristina.martsenko@arm.com>
In-Reply-To: <20181207183931.4285-1-kristina.martsenko@arm.com>
References: <20181207183931.4285-1-kristina.martsenko@arm.com>

From: Mark Rutland

When the kernel is unwinding userspace callchains, we can't expect that
the userspace consumer of these callchains has the data necessary to
strip the PAC from the stored LR.

This patch has the kernel strip the PAC from user stackframes when the
in-kernel unwinder is used. This only affects the LR value, and not
the FP.

This only affects the in-kernel unwinder. When userspace performs
unwinding, it is up to userspace to strip PACs as necessary (which can
be determined from DWARF information).
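(Illustration only, not part of the patch: a minimal userspace-style sketch of
what the stripping amounts to, assuming a hypothetical VA_BITS of 48 so that
GENMASK(54, VA_BITS) covers bits 54:48. The EXAMPLE_* names,
example_strip_insn_pac() and the sample LR value are made up here; the kernel's
actual helper is ptrauth_strip_insn_pac(), added in the diff below.)

/*
 * Illustration only: what PAC stripping does to an EL0 (TTBR0) instruction
 * pointer. EXAMPLE_VA_BITS, example_strip_insn_pac() and the sample LR value
 * are hypothetical; the kernel uses ptrauth_pac_mask() /
 * ptrauth_strip_insn_pac() as added by this patch.
 */
#include <stdint.h>
#include <stdio.h>

#define EXAMPLE_VA_BITS	48	/* assumed 48-bit VA configuration */

/* Same bits as the kernel's GENMASK(54, VA_BITS): bits 54 down to VA_BITS */
#define EXAMPLE_PAC_MASK \
	((((uint64_t)1 << (55 - EXAMPLE_VA_BITS)) - 1) << EXAMPLE_VA_BITS)

static uint64_t example_strip_insn_pac(uint64_t ptr)
{
	/* User (TTBR0) pointers have the PAC bits simply cleared to zero. */
	return ptr & ~EXAMPLE_PAC_MASK;
}

int main(void)
{
	uint64_t signed_lr = 0x001f0000004008d0ULL;	/* hypothetical PAC-signed LR */

	printf("signed   LR: 0x%016llx\n", (unsigned long long)signed_lr);
	printf("stripped LR: 0x%016llx\n",
	       (unsigned long long)example_strip_insn_pac(signed_lr));
	return 0;
}

With the assumed VA_BITS of 48 the mask is 0x007f000000000000, so the sample
value above strips to 0x00000000004008d0, which is what perf would then record
as the LR.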
Signed-off-by: Mark Rutland
Signed-off-by: Kristina Martsenko
Cc: Catalin Marinas
Cc: Ramana Radhakrishnan
Cc: Will Deacon
---
 arch/arm64/include/asm/pointer_auth.h | 7 +++++++
 arch/arm64/kernel/perf_callchain.c    | 6 +++++-
 2 files changed, 12 insertions(+), 1 deletion(-)

diff --git a/arch/arm64/include/asm/pointer_auth.h b/arch/arm64/include/asm/pointer_auth.h
index 5721228836c1..89190d93c850 100644
--- a/arch/arm64/include/asm/pointer_auth.h
+++ b/arch/arm64/include/asm/pointer_auth.h
@@ -65,6 +65,12 @@ static inline void ptrauth_keys_switch(struct ptrauth_keys *keys)
  */
 #define ptrauth_pac_mask()	GENMASK(54, VA_BITS)
 
+/* Only valid for EL0 TTBR0 instruction pointers */
+static inline unsigned long ptrauth_strip_insn_pac(unsigned long ptr)
+{
+	return ptr & ~ptrauth_pac_mask();
+}
+
 #define ptrauth_thread_init_user(tsk)					\
 do {									\
 	struct task_struct *__ptiu_tsk = (tsk);				\
@@ -76,6 +82,7 @@ do {								\
 	ptrauth_keys_switch(&(tsk)->thread_info.keys_user)
 
 #else /* CONFIG_ARM64_PTR_AUTH */
+#define ptrauth_strip_insn_pac(lr)	(lr)
 #define ptrauth_thread_init_user(tsk)
 #define ptrauth_thread_switch(tsk)
 #endif /* CONFIG_ARM64_PTR_AUTH */
diff --git a/arch/arm64/kernel/perf_callchain.c b/arch/arm64/kernel/perf_callchain.c
index bcafd7dcfe8b..94754f07f67a 100644
--- a/arch/arm64/kernel/perf_callchain.c
+++ b/arch/arm64/kernel/perf_callchain.c
@@ -18,6 +18,7 @@
 #include <linux/perf_event.h>
 #include <linux/uaccess.h>
 
+#include <asm/pointer_auth.h>
 #include <asm/stacktrace.h>
 
 struct frame_tail {
@@ -35,6 +36,7 @@ user_backtrace(struct frame_tail __user *tail,
 {
 	struct frame_tail buftail;
 	unsigned long err;
+	unsigned long lr;
 
 	/* Also check accessibility of one struct frame_tail beyond */
 	if (!access_ok(VERIFY_READ, tail, sizeof(buftail)))
@@ -47,7 +49,9 @@ user_backtrace(struct frame_tail __user *tail,
 	if (err)
 		return NULL;
 
-	perf_callchain_store(entry, buftail.lr);
+	lr = ptrauth_strip_insn_pac(buftail.lr);
+
+	perf_callchain_store(entry, lr);
 
 	/*
 	 * Frame pointers should strictly progress back up the stack
-- 
2.11.0