linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/2]  irq: detect slow IRQ handlers
@ 2021-07-15  9:50 Mark Rutland
  2021-07-15  9:50 ` [PATCH v3 1/2] irq: abstract irqaction handler invocation Mark Rutland
  2021-07-15  9:50 ` [PATCH v3 2/2] irq: detect long-running IRQ handlers Mark Rutland
  0 siblings, 2 replies; 6+ messages in thread
From: Mark Rutland @ 2021-07-15  9:50 UTC (permalink / raw)
  To: linux-kernel, tglx; +Cc: mark.rutland, maz, paulmck, peterz

Hi,

While fuzzing arm64 with Syzkaller (under QEMU+KVM) over a number of releases,
I've occasionally seen some ridiculously long stalls (20+ seconds), where it
appears that a CPU is stuck in a hard IRQ context. As this gets detected after
the CPU returns to the interrupted context, it's difficult to identify where
exactly the stall is coming from.

These patches are intended to help track this down, with a WARN() if an IRQ
handler takes longer than a given timeout (1 second by default), logging the
specific IRQ and handler function. While it's possible to achieve something
similar with tracing, it's harder to integrate that into an automated fuzzing
setup.

I've been running this for a short while, and haven't yet seen any of the
stalls with this applied, but I've tested with smaller timeout periods in the 1
millisecond range by overloading the host, so I'm confident that the check
works.

Thanks,
Mark.

Since v1 [1]:
* Minor commit message tweaks
* Add Paul's Acked-by
* Trivial rebase to v5.13-rc4

Since v2 [2]:
* Trivial rebase to v5.14-rc1

[1] https://lore.kernel.org/r/20210112135950.30607-1-mark.rutland@arm.com
[2] https://lore.kernel.org/r/20210615102507.9677-1-mark.rutland@arm.com

Mark Rutland (2):
  irq: abstract irqaction handler invocation
  irq: detect long-running IRQ handlers

 kernel/irq/chip.c      | 15 +++------------
 kernel/irq/handle.c    |  4 +---
 kernel/irq/internals.h | 57 ++++++++++++++++++++++++++++++++++++++++++++++++++
 lib/Kconfig.debug      | 15 +++++++++++++
 4 files changed, 76 insertions(+), 15 deletions(-)

-- 
2.11.0



* [PATCH v3 1/2] irq: abstract irqaction handler invocation
  2021-07-15  9:50 [PATCH v3 0/2] irq: detect slow IRQ handlers Mark Rutland
@ 2021-07-15  9:50 ` Mark Rutland
  2021-07-15 10:49   ` Peter Zijlstra
  2021-07-15  9:50 ` [PATCH v3 2/2] irq: detect long-running IRQ handlers Mark Rutland
  1 sibling, 1 reply; 6+ messages in thread
From: Mark Rutland @ 2021-07-15  9:50 UTC (permalink / raw)
  To: linux-kernel, tglx; +Cc: mark.rutland, maz, paulmck, peterz

We have a few functions which invoke irqaction handlers, all of which
need to call trace_irq_handler_entry() and trace_irq_handler_exit().

In preparation for adding some additional debug logic to each irqaction
handler invocation, let's factor out this work to a helper. Where the
return value isn't consumed, the unused temporary variable is also
removed.

There should be no functional change as a result of this patch.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/irq/chip.c      | 15 +++------------
 kernel/irq/handle.c    |  4 +---
 kernel/irq/internals.h | 28 ++++++++++++++++++++++++++++
 3 files changed, 32 insertions(+), 15 deletions(-)

diff --git a/kernel/irq/chip.c b/kernel/irq/chip.c
index 7f04c7d8296e..804c2791315d 100644
--- a/kernel/irq/chip.c
+++ b/kernel/irq/chip.c
@@ -741,16 +741,13 @@ void handle_fasteoi_nmi(struct irq_desc *desc)
 	struct irq_chip *chip = irq_desc_get_chip(desc);
 	struct irqaction *action = desc->action;
 	unsigned int irq = irq_desc_get_irq(desc);
-	irqreturn_t res;
 
 	__kstat_incr_irqs_this_cpu(desc);
 
-	trace_irq_handler_entry(irq, action);
 	/*
 	 * NMIs cannot be shared, there is only one action.
 	 */
-	res = action->handler(irq, action->dev_id);
-	trace_irq_handler_exit(irq, action, res);
+	handle_irqaction(irq, action);
 
 	if (chip->irq_eoi)
 		chip->irq_eoi(&desc->irq_data);
@@ -914,7 +911,6 @@ void handle_percpu_devid_irq(struct irq_desc *desc)
 	struct irq_chip *chip = irq_desc_get_chip(desc);
 	struct irqaction *action = desc->action;
 	unsigned int irq = irq_desc_get_irq(desc);
-	irqreturn_t res;
 
 	/*
 	 * PER CPU interrupts are not serialized. Do not touch
@@ -926,9 +922,7 @@ void handle_percpu_devid_irq(struct irq_desc *desc)
 		chip->irq_ack(&desc->irq_data);
 
 	if (likely(action)) {
-		trace_irq_handler_entry(irq, action);
-		res = action->handler(irq, raw_cpu_ptr(action->percpu_dev_id));
-		trace_irq_handler_exit(irq, action, res);
+		handle_irqaction_percpu_devid(irq, action);
 	} else {
 		unsigned int cpu = smp_processor_id();
 		bool enabled = cpumask_test_cpu(cpu, desc->percpu_enabled);
@@ -957,13 +951,10 @@ void handle_percpu_devid_fasteoi_nmi(struct irq_desc *desc)
 	struct irq_chip *chip = irq_desc_get_chip(desc);
 	struct irqaction *action = desc->action;
 	unsigned int irq = irq_desc_get_irq(desc);
-	irqreturn_t res;
 
 	__kstat_incr_irqs_this_cpu(desc);
 
-	trace_irq_handler_entry(irq, action);
-	res = action->handler(irq, raw_cpu_ptr(action->percpu_dev_id));
-	trace_irq_handler_exit(irq, action, res);
+	handle_irqaction_percpu_devid(irq, action);
 
 	if (chip->irq_eoi)
 		chip->irq_eoi(&desc->irq_data);
diff --git a/kernel/irq/handle.c b/kernel/irq/handle.c
index 221d80c31e94..dbe5c9277dd7 100644
--- a/kernel/irq/handle.c
+++ b/kernel/irq/handle.c
@@ -152,9 +152,7 @@ irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags
 		    !(action->flags & (IRQF_NO_THREAD | IRQF_PERCPU | IRQF_ONESHOT)))
 			lockdep_hardirq_threaded();
 
-		trace_irq_handler_entry(irq, action);
-		res = action->handler(irq, action->dev_id);
-		trace_irq_handler_exit(irq, action, res);
+		res = handle_irqaction(irq, action);
 
 		if (WARN_ONCE(!irqs_disabled(),"irq %u handler %pS enabled interrupts\n",
 			      irq, action->handler))
diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 54363527feea..70a4694cc891 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -11,6 +11,8 @@
 #include <linux/pm_runtime.h>
 #include <linux/sched/clock.h>
 
+#include <trace/events/irq.h>
+
 #ifdef CONFIG_SPARSE_IRQ
 # define IRQ_BITMAP_BITS	(NR_IRQS + 8196)
 #else
@@ -107,6 +109,32 @@ irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags
 irqreturn_t handle_irq_event_percpu(struct irq_desc *desc);
 irqreturn_t handle_irq_event(struct irq_desc *desc);
 
+static inline irqreturn_t __handle_irqaction(unsigned int irq,
+					     struct irqaction *action,
+					     void *dev_id)
+{
+	irqreturn_t res;
+
+	trace_irq_handler_entry(irq, action);
+	res = action->handler(irq, dev_id);
+	trace_irq_handler_exit(irq, action, res);
+
+	return res;
+}
+
+static inline irqreturn_t handle_irqaction(unsigned int irq,
+					   struct irqaction *action)
+{
+	return __handle_irqaction(irq, action, action->dev_id);
+}
+
+static inline irqreturn_t handle_irqaction_percpu_devid(unsigned int irq,
+							struct irqaction *action)
+{
+	return __handle_irqaction(irq, action,
+				  raw_cpu_ptr(action->percpu_dev_id));
+}
+
 /* Resending of interrupts :*/
 int check_irq_resend(struct irq_desc *desc, bool inject);
 bool irq_wait_for_poll(struct irq_desc *desc);
-- 
2.11.0



* [PATCH v3 2/2] irq: detect long-running IRQ handlers
  2021-07-15  9:50 [PATCH v3 0/2] irq: detect slow IRQ handlers Mark Rutland
  2021-07-15  9:50 ` [PATCH v3 1/2] irq: abstract irqaction handler invocation Mark Rutland
@ 2021-07-15  9:50 ` Mark Rutland
  1 sibling, 0 replies; 6+ messages in thread
From: Mark Rutland @ 2021-07-15  9:50 UTC (permalink / raw)
  To: linux-kernel, tglx; +Cc: mark.rutland, maz, paulmck, peterz

If a hard IRQ handler takes a long time to handle an IRQ, it may cause a
soft lockup or RCU stall, but as this will be detected once the handler
has returned it can be difficult to attribute the delay to the specific
IRQ handler.

It's possible to trace IRQ handlers to diagnose this, but that's not a
great fit for automated testing environments (e.g. fuzzers), where
something like the existing lockup/stall detectors works well.

This patch adds a new stall detector for IRQ handlers, which reports
when handlers took longer than a given timeout value (defaulting to 1
second). This won't detect hung IRQ handlers (which would require an NMI,
and should already be caught by the hard lockup detector on systems with
NMIs), but helps on platforms without NMIs or where a periodic watchdog is
undesirable.

Signed-off-by: Mark Rutland <mark.rutland@arm.com>
Acked-by: Paul E. McKenney <paulmck@kernel.org>
Cc: Marc Zyngier <maz@kernel.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
---
 kernel/irq/internals.h | 35 ++++++++++++++++++++++++++++++++---
 lib/Kconfig.debug      | 15 +++++++++++++++
 2 files changed, 47 insertions(+), 3 deletions(-)

diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
index 70a4694cc891..191b6a9d30e2 100644
--- a/kernel/irq/internals.h
+++ b/kernel/irq/internals.h
@@ -6,6 +6,7 @@
  * kernel/irq/. Do not even think about using any information outside
  * of this file for your non core code.
  */
+#include <linux/bug.h>
 #include <linux/irqdesc.h>
 #include <linux/kernel_stat.h>
 #include <linux/pm_runtime.h>
@@ -122,17 +123,45 @@ static inline irqreturn_t __handle_irqaction(unsigned int irq,
 	return res;
 }
 
+#ifdef CONFIG_DETECT_SLOW_IRQ_HANDLER
+static inline irqreturn_t __handle_check_irqaction(unsigned int irq,
+						   struct irqaction *action,
+						   void *dev_id)
+{
+	u64 timeout = CONFIG_IRQ_HANDLER_TIMEOUT_NS;
+	u64 start, end, duration;
+	irqreturn_t res;
+
+	start = local_clock();
+	res = __handle_irqaction(irq, action, dev_id);
+	end = local_clock();
+
+	duration = end - start;
+	WARN(duration > timeout, "IRQ %d handler %ps took %llu ns\n",
+	     irq, action->handler, duration);
+
+	return res;
+}
+#else
+static inline irqreturn_t __handle_check_irqaction(unsigned int irq,
+						   struct irqaction *action,
+						   void *dev_id)
+{
+	return __handle_irqaction(irq, action, dev_id);
+}
+#endif
+
 static inline irqreturn_t handle_irqaction(unsigned int irq,
 					   struct irqaction *action)
 {
-	return __handle_irqaction(irq, action, action->dev_id);
+	return __handle_check_irqaction(irq, action, action->dev_id);
 }
 
 static inline irqreturn_t handle_irqaction_percpu_devid(unsigned int irq,
 							struct irqaction *action)
 {
-	return __handle_irqaction(irq, action,
-				  raw_cpu_ptr(action->percpu_dev_id));
+	return __handle_check_irqaction(irq, action,
+					raw_cpu_ptr(action->percpu_dev_id));
 }
 
 /* Resending of interrupts :*/
diff --git a/lib/Kconfig.debug b/lib/Kconfig.debug
index 831212722924..86003bc0572c 100644
--- a/lib/Kconfig.debug
+++ b/lib/Kconfig.debug
@@ -1155,6 +1155,21 @@ config WQ_WATCHDOG
 	  state.  This can be configured through kernel parameter
 	  "workqueue.watchdog_thresh" and its sysfs counterpart.
 
+config DETECT_SLOW_IRQ_HANDLER
+	bool "Detect long-running IRQ handlers"
+	help
+	  Say Y here to enable detection of long-running IRQ handlers. When a
+	  (hard) IRQ handler takes longer than a given timeout (1s by default)
+	  to return, a warning will be printed with the name of the handler.
+
+	  This can help to identify specific IRQ handlers which are
+	  contributing to stalls.
+
+config IRQ_HANDLER_TIMEOUT_NS
+	int "Timeout for long-running IRQ handlers (in nanoseconds)"
+	depends on DETECT_SLOW_IRQ_HANDLER
+	default 1000000000
+
 config TEST_LOCKUP
 	tristate "Test module to generate lockups"
 	depends on m
-- 
2.11.0



* Re: [PATCH v3 1/2] irq: abstract irqaction handler invocation
  2021-07-15  9:50 ` [PATCH v3 1/2] irq: abstract irqaction handler invocation Mark Rutland
@ 2021-07-15 10:49   ` Peter Zijlstra
  2021-07-15 11:15     ` Mark Rutland
  0 siblings, 1 reply; 6+ messages in thread
From: Peter Zijlstra @ 2021-07-15 10:49 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-kernel, tglx, maz, paulmck

On Thu, Jul 15, 2021 at 10:50:30AM +0100, Mark Rutland wrote:
> diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
> index 54363527feea..70a4694cc891 100644
> --- a/kernel/irq/internals.h
> +++ b/kernel/irq/internals.h
> @@ -11,6 +11,8 @@
>  #include <linux/pm_runtime.h>
>  #include <linux/sched/clock.h>
>  
> +#include <trace/events/irq.h>
> +
>  #ifdef CONFIG_SPARSE_IRQ
>  # define IRQ_BITMAP_BITS	(NR_IRQS + 8196)
>  #else
> @@ -107,6 +109,32 @@ irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags
>  irqreturn_t handle_irq_event_percpu(struct irq_desc *desc);
>  irqreturn_t handle_irq_event(struct irq_desc *desc);
>  
> +static inline irqreturn_t __handle_irqaction(unsigned int irq,
> +					     struct irqaction *action,
> +					     void *dev_id)
> +{
> +	irqreturn_t res;
> +
> +	trace_irq_handler_entry(irq, action);
> +	res = action->handler(irq, dev_id);
> +	trace_irq_handler_exit(irq, action, res);
> +
> +	return res;
> +}
> +
> +static inline irqreturn_t handle_irqaction(unsigned int irq,
> +					   struct irqaction *action)
> +{
> +	return __handle_irqaction(irq, action, action->dev_id);
> +}
> +
> +static inline irqreturn_t handle_irqaction_percpu_devid(unsigned int irq,
> +							struct irqaction *action)
> +{
> +	return __handle_irqaction(irq, action,
> +				  raw_cpu_ptr(action->percpu_dev_id));
> +}

So I like this patch, it's a nice cleanup.

However, you could implement the next patch as a module that hooks into
those two tracepoints. Quite possibly the existing IRQ latency tracer
would already work for what you need and also provide you a function
trace of WTH the CPU was doing.


* Re: [PATCH v3 1/2] irq: abstract irqaction handler invocation
  2021-07-15 10:49   ` Peter Zijlstra
@ 2021-07-15 11:15     ` Mark Rutland
  2021-07-15 13:10       ` Peter Zijlstra
  0 siblings, 1 reply; 6+ messages in thread
From: Mark Rutland @ 2021-07-15 11:15 UTC (permalink / raw)
  To: Peter Zijlstra; +Cc: linux-kernel, tglx, maz, paulmck

On Thu, Jul 15, 2021 at 12:49:54PM +0200, Peter Zijlstra wrote:
> On Thu, Jul 15, 2021 at 10:50:30AM +0100, Mark Rutland wrote:
> > diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
> > index 54363527feea..70a4694cc891 100644
> > --- a/kernel/irq/internals.h
> > +++ b/kernel/irq/internals.h
> > @@ -11,6 +11,8 @@
> >  #include <linux/pm_runtime.h>
> >  #include <linux/sched/clock.h>
> >  
> > +#include <trace/events/irq.h>
> > +
> >  #ifdef CONFIG_SPARSE_IRQ
> >  # define IRQ_BITMAP_BITS	(NR_IRQS + 8196)
> >  #else
> > @@ -107,6 +109,32 @@ irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags
> >  irqreturn_t handle_irq_event_percpu(struct irq_desc *desc);
> >  irqreturn_t handle_irq_event(struct irq_desc *desc);
> >  
> > +static inline irqreturn_t __handle_irqaction(unsigned int irq,
> > +					     struct irqaction *action,
> > +					     void *dev_id)
> > +{
> > +	irqreturn_t res;
> > +
> > +	trace_irq_handler_entry(irq, action);
> > +	res = action->handler(irq, dev_id);
> > +	trace_irq_handler_exit(irq, action, res);
> > +
> > +	return res;
> > +}
> > +
> > +static inline irqreturn_t handle_irqaction(unsigned int irq,
> > +					   struct irqaction *action)
> > +{
> > +	return __handle_irqaction(irq, action, action->dev_id);
> > +}
> > +
> > +static inline irqreturn_t handle_irqaction_percpu_devid(unsigned int irq,
> > +							struct irqaction *action)
> > +{
> > +	return __handle_irqaction(irq, action,
> > +				  raw_cpu_ptr(action->percpu_dev_id));
> > +}
> 
> So I like this patch, it's a nice cleanup.
> 
> However, you could implement the next patch as a module that hooks into
> those two tracepoints. Quite possibly the existing IRQ latency tracer
> would already work for what you need and also provide you a function
> trace of WTH the CPU was doing.

The issue with the existing tracers is that they're logging for
later/concurrent analysis, whereas what I need is a notification (e.g. a
WARN) when the maximum expected latency has been breached. That way it
gets caught by Syzkaller or whatever without needing to specially manage
the tracer.

If there's a way to do that (e.g. with boot-time options), I'm happy to
use that instead; I just couldn't see how to do that today, and was
under the impression that the existing tracepoints don't give quite what
I need (e.g. since the entry/exit hooks are separate, so I'd have to
store some state somewhere else).

I'm happy to take another look if you think I'm wrong on that. :)

Thanks,
Mark.


* Re: [PATCH v3 1/2] irq: abstract irqaction handler invocation
  2021-07-15 11:15     ` Mark Rutland
@ 2021-07-15 13:10       ` Peter Zijlstra
  0 siblings, 0 replies; 6+ messages in thread
From: Peter Zijlstra @ 2021-07-15 13:10 UTC (permalink / raw)
  To: Mark Rutland; +Cc: linux-kernel, tglx, maz, paulmck

On Thu, Jul 15, 2021 at 12:15:31PM +0100, Mark Rutland wrote:
> On Thu, Jul 15, 2021 at 12:49:54PM +0200, Peter Zijlstra wrote:
> > On Thu, Jul 15, 2021 at 10:50:30AM +0100, Mark Rutland wrote:
> > > diff --git a/kernel/irq/internals.h b/kernel/irq/internals.h
> > > index 54363527feea..70a4694cc891 100644
> > > --- a/kernel/irq/internals.h
> > > +++ b/kernel/irq/internals.h
> > > @@ -11,6 +11,8 @@
> > >  #include <linux/pm_runtime.h>
> > >  #include <linux/sched/clock.h>
> > >  
> > > +#include <trace/events/irq.h>
> > > +
> > >  #ifdef CONFIG_SPARSE_IRQ
> > >  # define IRQ_BITMAP_BITS	(NR_IRQS + 8196)
> > >  #else
> > > @@ -107,6 +109,32 @@ irqreturn_t __handle_irq_event_percpu(struct irq_desc *desc, unsigned int *flags
> > >  irqreturn_t handle_irq_event_percpu(struct irq_desc *desc);
> > >  irqreturn_t handle_irq_event(struct irq_desc *desc);
> > >  
> > > +static inline irqreturn_t __handle_irqaction(unsigned int irq,
> > > +					     struct irqaction *action,
> > > +					     void *dev_id)
> > > +{
> > > +	irqreturn_t res;
> > > +
> > > +	trace_irq_handler_entry(irq, action);
> > > +	res = action->handler(irq, dev_id);
> > > +	trace_irq_handler_exit(irq, action, res);
> > > +
> > > +	return res;
> > > +}
> > > +
> > > +static inline irqreturn_t handle_irqaction(unsigned int irq,
> > > +					   struct irqaction *action)
> > > +{
> > > +	return __handle_irqaction(irq, action, action->dev_id);
> > > +}
> > > +
> > > +static inline irqreturn_t handle_irqaction_percpu_devid(unsigned int irq,
> > > +							struct irqaction *action)
> > > +{
> > > +	return __handle_irqaction(irq, action,
> > > +				  raw_cpu_ptr(action->percpu_dev_id));
> > > +}
> > 
> > So I like this patch, it's a nice cleanup.
> > 
> > However, you could implement the next patch as a module that hooks into
> > those two tracepoints. Quite possibly the existing IRQ latency tracer
> > would already work for what you need and also provide you a function
> > trace of WTH the CPU was doing.
> 
> The issue with the existing tracers is that they're logging for
> later/concurrent analysis, whereas what I need is a notification (e.g. a
> WARN) when the maximum expected latency has been breached. That way it
> gets caught by Syzkaller or whatever without needing to specially manage
> the tracer.
> 
> If there's a way to do that (e.g. with boot-time options), I'm happy to
> use that instead; I just couldn't see how to do that today, and was
> under the impression that the existing tracepoints don't give quite what
> I need (e.g. since the entry/exit hooks are separate, so I'd have to
> store some state somewhere else).
> 
> I'm happy to take another look if you think I'm wrong on that. :)

For this particular thing I think you can use a simple per-cpu variable;
we don't do nested interrupts.

DEFINE_PER_CPU(u64, my_timestamp);

static notrace void my_entry(void *data, int irq, struct irqaction *action)
{
	this_cpu_write(my_timestamp, sched_clock());
}

static notrace void my_exit(void *data, int irq, struct irqaction *action,
			    int ret)
{
	u64 delta = sched_clock() - this_cpu_read(my_timestamp);
	WARN_ON_ONCE(delta > biggie);
}

static __init int mod_init(void)
{
	register_trace_irq_handler_exit(my_exit, NULL);
	register_trace_irq_handler_entry(my_entry, NULL);
	return 0;
}

Should work, no?

