From mboxrd@z Thu Jan 1 00:00:00 1970
From: Pingfan Liu
To: linux-arm-kernel@lists.infradead.org
Cc: Mark Rutland, Pingfan Liu, Marc Zyngier, Catalin Marinas, Will Deacon,
	Joey Gouly, Sami Tolvanen, Julien Thierry, Thomas Gleixner, Yuichi Ito,
	linux-kernel@vger.kernel.org
Subject: [PATCHv4 2/3] arm64: entry: refactor EL1 interrupt entry logic
Date: Fri, 1 Oct 2021 22:44:05 +0800
Message-Id: <20211001144406.7719-3-kernelfans@gmail.com>
X-Mailer: git-send-email 2.31.1
In-Reply-To: <20211001144406.7719-1-kernelfans@gmail.com>
References: <20211001144406.7719-1-kernelfans@gmail.com>
MIME-Version: 1.0
Content-Transfer-Encoding: 8bit

From: Mark Rutland

Currently we distinguish IRQ and definitely-PNMI at entry/exit time via
the enter_el1_irq_or_nmi() and exit_el1_irq_or_nmi() helpers. In
subsequent patches we'll need to handle the two cases more distinctly in
the body of the exception handler.
To make this possible, this patch refactors el1_interrupt to be a
top-level dispatcher to separate handlers for the IRQ and PNMI cases,
removing the need for the enter_el1_irq_or_nmi() and
exit_el1_irq_or_nmi() helpers.

Note that since arm64_enter_nmi() calls __nmi_enter(), which increments
the preempt_count, we can never preempt when handling a PNMI. We now
only check for preemption in the IRQ case, which makes this clearer.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland
Signed-off-by: Pingfan Liu
Reviewed-by: Marc Zyngier
Cc: Catalin Marinas
Cc: Will Deacon
Cc: Joey Gouly
Cc: Sami Tolvanen
Cc: Julien Thierry
Cc: Thomas Gleixner
Cc: Yuichi Ito
Cc: linux-kernel@vger.kernel.org
To: linux-arm-kernel@lists.infradead.org
---
 arch/arm64/kernel/entry-common.c | 43 ++++++++++++++++----------------
 1 file changed, 21 insertions(+), 22 deletions(-)

diff --git a/arch/arm64/kernel/entry-common.c b/arch/arm64/kernel/entry-common.c
index 32f9796c4ffe..fecf046f0708 100644
--- a/arch/arm64/kernel/entry-common.c
+++ b/arch/arm64/kernel/entry-common.c
@@ -219,22 +219,6 @@ static void noinstr arm64_exit_el1_dbg(struct pt_regs *regs)
 		lockdep_hardirqs_on(CALLER_ADDR0);
 }
 
-static void noinstr enter_el1_irq_or_nmi(struct pt_regs *regs)
-{
-	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
-		arm64_enter_nmi(regs);
-	else
-		enter_from_kernel_mode(regs);
-}
-
-static void noinstr exit_el1_irq_or_nmi(struct pt_regs *regs)
-{
-	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
-		arm64_exit_nmi(regs);
-	else
-		exit_to_kernel_mode(regs);
-}
-
 static void __sched arm64_preempt_schedule_irq(void)
 {
 	lockdep_assert_irqs_disabled();
@@ -432,14 +416,19 @@ asmlinkage void noinstr el1h_64_sync_handler(struct pt_regs *regs)
 	}
 }
 
-static void noinstr el1_interrupt(struct pt_regs *regs,
-				  void (*handler)(struct pt_regs *))
+static __always_inline void
+__el1_pnmi(struct pt_regs *regs, void (*handler)(struct pt_regs *))
 {
-	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
-
-	enter_el1_irq_or_nmi(regs);
+	arm64_enter_nmi(regs);
 	do_interrupt_handler(regs, handler);
+	arm64_exit_nmi(regs);
+}
+static __always_inline void
+__el1_interrupt(struct pt_regs *regs, void (*handler)(struct pt_regs *))
+{
+	enter_from_kernel_mode(regs);
+	do_interrupt_handler(regs, handler);
 
 	/*
 	 * Note: thread_info::preempt_count includes both thread_info::count
 	 * and thread_info::need_resched, and is not equivalent to
@@ -448,8 +437,18 @@ static void noinstr el1_interrupt(struct pt_regs *regs,
 	if (IS_ENABLED(CONFIG_PREEMPTION) &&
 	    READ_ONCE(current_thread_info()->preempt_count) == 0)
 		arm64_preempt_schedule_irq();
+	exit_to_kernel_mode(regs);
+}
 
-	exit_el1_irq_or_nmi(regs);
+static void noinstr el1_interrupt(struct pt_regs *regs,
+				  void (*handler)(struct pt_regs *))
+{
+	write_sysreg(DAIF_PROCCTX_NOIRQ, daif);
+
+	if (IS_ENABLED(CONFIG_ARM64_PSEUDO_NMI) && !interrupts_enabled(regs))
+		__el1_pnmi(regs, handler);
+	else
+		__el1_interrupt(regs, handler);
 }
 
 asmlinkage void noinstr el1h_64_irq_handler(struct pt_regs *regs)
-- 
2.31.1

_______________________________________________
linux-arm-kernel mailing list
linux-arm-kernel@lists.infradead.org
http://lists.infradead.org/mailman/listinfo/linux-arm-kernel