From: Qi Zheng <zhengqi.arch@bytedance.com>
To: catalin.marinas@arm.com, will@kernel.org
Cc: linux-arm-kernel@lists.infradead.org,
	linux-kernel@vger.kernel.org,
	Qi Zheng <zhengqi.arch@bytedance.com>
Subject: [RFC PATCH 1/2] arm64: run softirqs on the per-CPU IRQ stack
Date: Thu,  7 Jul 2022 19:05:10 +0800
Message-ID: <20220707110511.52129-2-zhengqi.arch@bytedance.com>
In-Reply-To: <20220707110511.52129-1-zhengqi.arch@bytedance.com>

Currently arm64 supports per-CPU IRQ stacks, but softirqs are still
handled in task context.

Any call to local_bh_enable() at any level in the task's call stack
may trigger a softirq processing run, and if the combined footprint of
the task's call chain and the softirq handlers exceeds the task stack's
size, the task stack can overflow. Avoid this by running softirqs on
the per-CPU IRQ stack as well.
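
[ Context, not part of the patch: selecting HAVE_SOFTIRQ_ON_OWN_STACK
  switches the generic softirq code from invoking __do_softirq() on the
  current (task) stack to calling the architecture's
  do_softirq_own_stack() hook. On the kernels this series targets, the
  generic side looks roughly like the condensed sketch below
  (paraphrased, not copied verbatim):

  /* include/asm-generic/softirq_stack.h, roughly */
  #ifdef CONFIG_HAVE_SOFTIRQ_ON_OWN_STACK
  void do_softirq_own_stack(void);
  #else
  static inline void do_softirq_own_stack(void)
  {
          __do_softirq();
  }
  #endif

  /*
   * kernel/softirq.c, condensed: local_bh_enable() ends up here when
   * softirqs became pending; the hook above decides which stack runs
   * them.
   */
  asmlinkage __visible void do_softirq(void)
  {
          unsigned long flags;

          if (in_interrupt())
                  return;

          local_irq_save(flags);
          if (local_softirq_pending())
                  do_softirq_own_stack();
          local_irq_restore(flags);
  } ]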

Signed-off-by: Qi Zheng <zhengqi.arch@bytedance.com>
---
 arch/arm64/Kconfig      |  1 +
 arch/arm64/kernel/irq.c | 11 +++++++++++
 2 files changed, 12 insertions(+)
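
[ Note on the wrapper below, not part of the patch: call_on_irq_stack()
  is the existing arm64 assembly helper in arch/arm64/kernel/entry.S,
  declared in arch/arm64/include/asm/exception.h roughly as

      asmlinkage void call_on_irq_stack(struct pt_regs *regs,
                                        void (*func)(struct pt_regs *));

  It runs func(regs) on this CPU's IRQ stack and then restores the
  original stack pointer. Since __do_softirq() takes no arguments, the
  patch adds the small ____do_softirq() wrapper, which ignores its
  pt_regs parameter, and passes NULL for regs. ]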

diff --git a/arch/arm64/Kconfig b/arch/arm64/Kconfig
index 3491d0d99891..402e16fec02a 100644
--- a/arch/arm64/Kconfig
+++ b/arch/arm64/Kconfig
@@ -230,6 +230,7 @@ config ARM64
 	select HAVE_ARCH_USERFAULTFD_MINOR if USERFAULTFD
 	select TRACE_IRQFLAGS_SUPPORT
 	select TRACE_IRQFLAGS_NMI_SUPPORT
+	select HAVE_SOFTIRQ_ON_OWN_STACK
 	help
 	  ARM 64-bit (AArch64) Linux support.
 
diff --git a/arch/arm64/kernel/irq.c b/arch/arm64/kernel/irq.c
index bda49430c9ea..e6aa37672fd4 100644
--- a/arch/arm64/kernel/irq.c
+++ b/arch/arm64/kernel/irq.c
@@ -22,6 +22,7 @@
 #include <linux/vmalloc.h>
 #include <asm/daifflags.h>
 #include <asm/vmap_stack.h>
+#include <asm/exception.h>
 
 /* Only access this in an NMI enter/exit */
 DEFINE_PER_CPU(struct nmi_ctx, nmi_contexts);
@@ -71,6 +72,16 @@ static void init_irq_stacks(void)
 }
 #endif
 
+static void ____do_softirq(struct pt_regs *regs)
+{
+	__do_softirq();
+}
+
+void do_softirq_own_stack(void)
+{
+	call_on_irq_stack(NULL, ____do_softirq);
+}
+
 static void default_handle_irq(struct pt_regs *regs)
 {
 	panic("IRQ taken without a root IRQ handler\n");
-- 
2.20.1



Thread overview: 15+ messages
2022-07-07 11:05 [RFC PATCH 0/2] arm64: run softirqs on the per-CPU IRQ stack Qi Zheng
2022-07-07 11:05 ` Qi Zheng [this message]
2022-07-07 12:58   ` [RFC PATCH 1/2] " Arnd Bergmann
2022-07-07 13:43     ` Qi Zheng
2022-07-07 13:53       ` Arnd Bergmann
2022-07-07 15:05         ` Qi Zheng
2022-07-07 11:05 ` [RFC PATCH 2/2] arm64: support HAVE_IRQ_EXIT_ON_IRQ_STACK Qi Zheng
2022-07-07 12:49   ` Arnd Bergmann
2022-07-07 13:38     ` Qi Zheng
2022-07-07 14:41       ` Arnd Bergmann
2022-07-07 15:00         ` Qi Zheng
2022-07-07 20:55           ` Arnd Bergmann
2022-07-08  3:13             ` Qi Zheng
2022-07-08  8:52               ` Arnd Bergmann
2022-07-08  9:13                 ` Qi Zheng
