From: Mark Rutland <mark.rutland@arm.com>
To: linux-arm-kernel@lists.infradead.org
Cc: catalin.marinas@arm.com, james.morse@arm.com, joey.gouly@arm.com,
	mark.rutland@arm.com, maz@kernel.org, will@kernel.org
Subject: [PATCH 2/4] arm64: entry: clarify entry/exit helpers
Date: Mon, 2 Aug 2021 15:07:31 +0100
Message-Id: <20210802140733.52716-3-mark.rutland@arm.com>
In-Reply-To: <20210802140733.52716-1-mark.rutland@arm.com>
References: <20210802140733.52716-1-mark.rutland@arm.com>

When entering an exception, we must perform irq/context state management
before we can use instrumentable C code. Similarly, when exiting an
exception we cannot use instrumentable C code after we perform irq/context
state management.

Originally, we'd intended that the enter_from_*() and exit_to_*() helpers
would enforce this by virtue of being the first and last functions called,
respectively, in an exception handler. However, as they now call
instrumentable code themselves, this is not as clearly true.

To make this more robust, this patch splits the irq/context state
management into separate helpers, with all the helpers commented to make
their intended purpose more obvious; a sketch of the resulting shape
follows the '---' line below.

In exit_to_kernel_mode() we'll now check TFSR_EL1 before we assert that
IRQs are disabled, but this ordering is not important, and other than this
there should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Cc: Catalin Marinas <catalin.marinas@arm.com>
Cc: James Morse <james.morse@arm.com>
Cc: Joey Gouly <joey.gouly@arm.com>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Will Deacon <will@kernel.org>
---
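
A sketch of the resulting pattern for the kernel-mode entry pair (bodies
elided here; the names are as in the diff below, which has the real
implementation):

  /*
   * The raw irq/context state management lives in an __always_inline
   * helper, which is inlined into its noinstr caller and so cannot be
   * instrumented itself.
   */
  static __always_inline void __enter_from_kernel_mode(struct pt_regs *regs)
  {
          /* lockdep, RCU and tracing state management only */
  }

  /*
   * The noinstr wrapper performs the state management first; only after
   * that is it safe to call potentially-instrumentable code, such as the
   * MTE TFSR check.
   */
  static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
  {
          __enter_from_kernel_mode(regs);
          mte_check_tfsr_entry();
  }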
 arch/arm64/kernel/entry-common.c | 70 ++++++++++++++++++++++++++++++++++++----
 1 file changed, 63 insertions(+), 7 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 6f7a98d8d60f..6dc64f99f185 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -26,10 +26,14 @@
 #include <asm/system_misc.h>
 
 /*
+ * Handle IRQ/context state management when entering from kernel mode.
+ * Before this function is called it is not safe to call regular kernel code,
+ * instrumentable code, or any code which may trigger an exception.
+ *
  * This is intended to match the logic in irqentry_enter(), handling the kernel
  * mode transitions only.
  */
-static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
+static __always_inline void __enter_from_kernel_mode(struct pt_regs *regs)
 {
 	regs->exit_rcu = false;
 
@@ -45,20 +49,26 @@ static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
 	lockdep_hardirqs_off(CALLER_ADDR0);
 	rcu_irq_enter_check_tick();
 	trace_hardirqs_off_finish();
+}
 
+static void noinstr enter_from_kernel_mode(struct pt_regs *regs)
+{
+	__enter_from_kernel_mode(regs);
 	mte_check_tfsr_entry();
 }
 
 /*
+ * Handle IRQ/context state management when exiting to kernel mode.
+ * After this function returns it is not safe to call regular kernel code,
+ * instrumentable code, or any code which may trigger an exception.
+ *
  * This is intended to match the logic in irqentry_exit(), handling the kernel
  * mode transitions only, and with preemption handled elsewhere.
  */
-static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
+static __always_inline void __exit_to_kernel_mode(struct pt_regs *regs)
 {
 	lockdep_assert_irqs_disabled();
 
-	mte_check_tfsr_exit();
-
 	if (interrupts_enabled(regs)) {
 		if (regs->exit_rcu) {
 			trace_hardirqs_on_prepare();
@@ -75,7 +85,18 @@ static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
 	}
 }
 
-asmlinkage void noinstr enter_from_user_mode(void)
+static void noinstr exit_to_kernel_mode(struct pt_regs *regs)
+{
+	mte_check_tfsr_exit();
+	__exit_to_kernel_mode(regs);
+}
+
+/*
+ * Handle IRQ/context state management when entering from user mode.
+ * Before this function is called it is not safe to call regular kernel code,
+ * instrumentable code, or any code which may trigger an exception.
+ */
+static __always_inline void __enter_from_user_mode(void)
 {
 	lockdep_hardirqs_off(CALLER_ADDR0);
 	CT_WARN_ON(ct_state() != CONTEXT_USER);
@@ -83,9 +104,18 @@ asmlinkage void noinstr enter_from_user_mode(void)
 	trace_hardirqs_off_finish();
 }
 
-asmlinkage void noinstr exit_to_user_mode(void)
+asmlinkage void noinstr enter_from_user_mode(void)
+{
+	__enter_from_user_mode();
+}
+
+/*
+ * Handle IRQ/context state management when exiting to user mode.
+ * After this function returns it is not safe to call regular kernel code,
+ * instrumentable code, or any code which may trigger an exception.
+ */
+static __always_inline void __exit_to_user_mode(void)
 {
-	mte_check_tfsr_exit();
-
 	trace_hardirqs_on_prepare();
 	lockdep_hardirqs_on_prepare(CALLER_ADDR0);
@@ -93,6 +123,17 @@ asmlinkage void noinstr exit_to_user_mode(void)
 	lockdep_hardirqs_on(CALLER_ADDR0);
 }
 
+asmlinkage void noinstr exit_to_user_mode(void)
+{
+	mte_check_tfsr_exit();
+	__exit_to_user_mode();
+}
+
+/*
+ * Handle IRQ/context state management when entering an NMI from user/kernel
+ * mode. Before this function is called it is not safe to call regular kernel
+ * code, instrumentable code, or any code which may trigger an exception.
+ */
 static void noinstr arm64_enter_nmi(struct pt_regs *regs)
 {
 	regs->lockdep_hardirqs = lockdep_hardirqs_enabled();
@@ -106,6 +147,11 @@ static void noinstr arm64_enter_nmi(struct pt_regs *regs)
 	ftrace_nmi_enter();
 }
 
+/*
+ * Handle IRQ/context state management when exiting an NMI from user/kernel
+ * mode. After this function returns it is not safe to call regular kernel
+ * code, instrumentable code, or any code which may trigger an exception.
+ */
 static void noinstr arm64_exit_nmi(struct pt_regs *regs)
 {
 	bool restore = regs->lockdep_hardirqs;
@@ -123,6 +169,11 @@ static void noinstr arm64_exit_nmi(struct pt_regs *regs)
 	__nmi_exit();
 }
 
+/*
+ * Handle IRQ/context state management when entering a debug exception from
+ * kernel mode. Before this function is called it is not safe to call regular
+ * kernel code, instrumentable code, or any code which may trigger an exception.
+ */
 static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs)
 {
 	regs->lockdep_hardirqs = lockdep_hardirqs_enabled();
@@ -133,6 +184,11 @@ static void noinstr arm64_enter_el1_dbg(struct pt_regs *regs)
 	trace_hardirqs_off_finish();
 }
 
+/*
+ * Handle IRQ/context state management when exiting a debug exception from
+ * kernel mode. After this function returns it is not safe to call regular
+ * kernel code, instrumentable code, or any code which may trigger an exception.
+ */
 static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs)
 {
 	bool restore = regs->lockdep_hardirqs;
-- 
2.11.0
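
For reference, the handlers in entry-common.c call these helpers
bracket-style, first and last. A simplified, hypothetical handler shape
(el1_foo() and do_foo() are illustrative stand-ins, not functions from this
patch):

  static void noinstr el1_foo(struct pt_regs *regs)
  {
          enter_from_kernel_mode(regs);   /* irq/context state management first */
          do_foo(regs);                   /* instrumentable code is safe here */
          exit_to_kernel_mode(regs);      /* no instrumentable code after this */
  }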