* [PATCH v3] smp: Wake ksoftirqd on PREEMPT_RT instead do_softirq().
@ 2022-02-08 16:54 Sebastian Andrzej Siewior
  2022-02-09  7:57 ` Christoph Hellwig
                   ` (2 more replies)
  0 siblings, 3 replies; 4+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-02-08 16:54 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Christoph Hellwig, Peter Zijlstra

The softirq implementation on PREEMPT_RT does not provide do_softirq(). The
softirqs can not be handled directly here because migration_cpu_stop() is
invoked with preemption and interrupts disabled.

A known user that raises softirqs from a remote function call is the block
layer. This does not happen on PREEMPT_RT because the functionality is
disabled there for latency and performance reasons. Nevertheless this case
should be handled in case a new user pops up rather than simply ignored.

Waking ksoftirqd unconditionally can be problematic if softirqs were already
pending but not yet handled. This can happen because the migration thread
runs at a high priority and can preempt a threaded interrupt handler. The
woken-up ksoftirqd would then catch up on all pending (and later raised)
softirqs, which is not desired on PREEMPT_RT since they are no longer
handled in the context where they were originally raised. This in turn
delays the actual processing until a SCHED_OTHER task can run.

Wake the softirq thread on PREEMPT_RT if a remote function call raised
softirqs. Add a warning in this case since this condition is not desired.

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
---
v2…v3:
   - Only wake ksoftirqd if the softirqs were raised within
     flush_smp_call_function_queue().
   - Add a warning in the wake case.
v1…v2: Drop an empty line.

 kernel/smp.c |   21 ++++++++++++++++++---
 1 file changed, 18 insertions(+), 3 deletions(-)
---
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -691,10 +691,25 @@ void flush_smp_call_function_from_idle(v
 	cfd_seq_store(this_cpu_ptr(&cfd_seq_local)->idle, CFD_SEQ_NOCPU,
 		      smp_processor_id(), CFD_SEQ_IDLE);
 	local_irq_save(flags);
-	flush_smp_call_function_queue(true);
-	if (local_softirq_pending())
-		do_softirq();
+	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {
+		flush_smp_call_function_queue(true);
+		if (local_softirq_pending())
+			do_softirq();
+	} else {
+		unsigned int pending_prev;
+		unsigned int pending_post;
 
+		pending_prev = local_softirq_pending();
+		flush_smp_call_function_queue(true);
+		pending_post = local_softirq_pending();
+
+		if (WARN_ON_ONCE(!pending_prev && pending_post)) {
+			struct task_struct *ksoftirqd = this_cpu_ksoftirqd();
+
+			if (ksoftirqd && !task_is_running(ksoftirqd))
+				wake_up_process(ksoftirqd);
+		}
+	}
 	local_irq_restore(flags);
 }
 

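For readers who want to poke at the before/after check in isolation, here
is a minimal user-space model of the idea; every fake_* name is made up
for illustration only and is not kernel API:

	#include <stdio.h>

	/* Stand-in for local_softirq_pending(); illustration only. */
	static unsigned int fake_softirq_pending;

	/* Pretend one queued remote function call raises a softirq (bit 3). */
	static void fake_flush_smp_call_function_queue(void)
	{
		fake_softirq_pending |= 1u << 3;
	}

	static void fake_wake_ksoftirqd(void)
	{
		printf("waking ksoftirqd to handle the newly raised softirqs\n");
	}

	int main(void)
	{
		unsigned int pending_prev, pending_post;

		pending_prev = fake_softirq_pending;
		fake_flush_smp_call_function_queue();
		pending_post = fake_softirq_pending;

		/* Wake only if the flush itself raised softirqs, as in the patch. */
		if (!pending_prev && pending_post) {
			fprintf(stderr, "WARN: flush raised softirqs (mask %#x)\n",
				pending_post);
			fake_wake_ksoftirqd();
		}
		return 0;
	}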

* Re: [PATCH v3] smp: Wake ksoftirqd on PREEMPT_RT instead do_softirq().
  2022-02-08 16:54 [PATCH v3] smp: Wake ksoftirqd on PREEMPT_RT instead do_softirq() Sebastian Andrzej Siewior
@ 2022-02-09  7:57 ` Christoph Hellwig
  2022-03-18  7:34 ` Sebastian Andrzej Siewior
  2022-05-01  8:05 ` [tip: sched/core] smp: Make softirq handling RT safe in flush_smp_call_function_queue() tip-bot2 for Sebastian Andrzej Siewior
  2 siblings, 0 replies; 4+ messages in thread
From: Christoph Hellwig @ 2022-02-09  7:57 UTC (permalink / raw)
  To: Sebastian Andrzej Siewior
  Cc: linux-kernel, Thomas Gleixner, Christoph Hellwig, Peter Zijlstra

On Tue, Feb 08, 2022 at 05:54:07PM +0100, Sebastian Andrzej Siewior wrote:
> The softirq implementation on PREEMPT_RT does not provide do_softirq(). The
> softirqs can not be handled directly here because migration_cpu_stop() is
> invoked with preemption and interrupts disabled.
> 
> A known user that raises softirqs from a remote function call is the block
> layer. This does not happen on PREEMPT_RT because the functionality is
> disabled there for latency and performance reasons. Nevertheless this case
> should be handled in case a new user pops up rather than simply ignored.
> 
> Waking ksoftirqd unconditionally can be problematic if softirqs were already
> pending but not yet handled. This can happen because the migration thread
> runs at a high priority and can preempt a threaded interrupt handler. The
> woken-up ksoftirqd would then catch up on all pending (and later raised)
> softirqs, which is not desired on PREEMPT_RT since they are no longer
> handled in the context where they were originally raised. This in turn
> delays the actual processing until a SCHED_OTHER task can run.
> 
> Wake the softirq thread on PREEMPT_RT if a remote function call raised
> softirqs. Add a warning in this case since this condition is not desired.
> 
> Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
> ---
> v2…v3:
>    - Only wake ksoftirqd if the softirqs were raised within
>      flush_smp_call_function_queue().
>    - Add a warning in the wake case.
> v1…v2: Drop an empty line.
> 
>  kernel/smp.c |   21 ++++++++++++++++++---
>  1 file changed, 18 insertions(+), 3 deletions(-)
> ---
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -691,10 +691,25 @@ void flush_smp_call_function_from_idle(v
>  	cfd_seq_store(this_cpu_ptr(&cfd_seq_local)->idle, CFD_SEQ_NOCPU,
>  		      smp_processor_id(), CFD_SEQ_IDLE);
>  	local_irq_save(flags);
> -	flush_smp_call_function_queue(true);
> -	if (local_softirq_pending())
> -		do_softirq();
> +	if (!IS_ENABLED(CONFIG_PREEMPT_RT)) {

I still absolutely hate these pointless negations in simple if
statements.
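
For illustration, the same hunk with the branches flipped so the RT case
is tested directly; this is only a rearrangement of the patch above, not
what was eventually merged:

	if (IS_ENABLED(CONFIG_PREEMPT_RT)) {
		unsigned int pending_prev;
		unsigned int pending_post;

		pending_prev = local_softirq_pending();
		flush_smp_call_function_queue(true);
		pending_post = local_softirq_pending();

		/* Only warn and wake if the flush itself raised softirqs. */
		if (WARN_ON_ONCE(!pending_prev && pending_post)) {
			struct task_struct *ksoftirqd = this_cpu_ksoftirqd();

			if (ksoftirqd && !task_is_running(ksoftirqd))
				wake_up_process(ksoftirqd);
		}
	} else {
		flush_smp_call_function_queue(true);
		if (local_softirq_pending())
			do_softirq();
	}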


* Re: [PATCH v3] smp: Wake ksoftirqd on PREEMPT_RT instead do_softirq().
  2022-02-08 16:54 [PATCH v3] smp: Wake ksoftirqd on PREEMPT_RT instead do_softirq() Sebastian Andrzej Siewior
  2022-02-09  7:57 ` Christoph Hellwig
@ 2022-03-18  7:34 ` Sebastian Andrzej Siewior
  2022-05-01  8:05 ` [tip: sched/core] smp: Make softirq handling RT safe in flush_smp_call_function_queue() tip-bot2 for Sebastian Andrzej Siewior
  2 siblings, 0 replies; 4+ messages in thread
From: Sebastian Andrzej Siewior @ 2022-03-18  7:34 UTC (permalink / raw)
  To: linux-kernel; +Cc: Thomas Gleixner, Christoph Hellwig, Peter Zijlstra

On 2022-02-08 17:54:08 [+0100], To linux-kernel@vger.kernel.org wrote:
> --- a/kernel/smp.c
> +++ b/kernel/smp.c
> @@ -691,10 +691,25 @@ void flush_smp_call_function_from_idle(v
>  	cfd_seq_store(this_cpu_ptr(&cfd_seq_local)->idle, CFD_SEQ_NOCPU,
>  		      smp_processor_id(), CFD_SEQ_IDLE);
>  	local_irq_save(flags);
> -	flush_smp_call_function_queue(true);
> -	if (local_softirq_pending())
> -		do_softirq();

One thing I noticed while looking for a possible PI boost from idle/
sched_core_balance(): interrupts are disabled either by the caller right
before invoking this function or by the local_irq_save() here. The
do_softirq() below may enable interrupts while invoking softirqs. Neither
caller holds a lock, so it should be okay.
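
Spelled out, the bracketing in question is the pre-patch code from the
hunk quoted above, annotated:

	local_irq_save(flags);
	flush_smp_call_function_queue(true);
	if (local_softirq_pending())
		do_softirq();		/* may re-enable interrupts while
					 * running the pending softirqs */
	local_irq_restore(flags);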

Sebastian


* [tip: sched/core] smp: Make softirq handling RT safe in flush_smp_call_function_queue()
  2022-02-08 16:54 [PATCH v3] smp: Wake ksoftirqd on PREEMPT_RT instead do_softirq() Sebastian Andrzej Siewior
  2022-02-09  7:57 ` Christoph Hellwig
  2022-03-18  7:34 ` Sebastian Andrzej Siewior
@ 2022-05-01  8:05 ` tip-bot2 for Sebastian Andrzej Siewior
  2 siblings, 0 replies; 4+ messages in thread
From: tip-bot2 for Sebastian Andrzej Siewior @ 2022-05-01  8:05 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Sebastian Andrzej Siewior, Thomas Gleixner,
	Peter Zijlstra (Intel),
	x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     1a90bfd220201fbe050dfc15deaac20ca5f15638
Gitweb:        https://git.kernel.org/tip/1a90bfd220201fbe050dfc15deaac20ca5f15638
Author:        Sebastian Andrzej Siewior <bigeasy@linutronix.de>
AuthorDate:    Wed, 13 Apr 2022 15:31:05 +02:00
Committer:     Thomas Gleixner <tglx@linutronix.de>
CommitterDate: Sun, 01 May 2022 10:03:43 +02:00

smp: Make softirq handling RT safe in flush_smp_call_function_queue()

flush_smp_call_function_queue() invokes do_softirq() which is not available
on PREEMPT_RT. flush_smp_call_function_queue() is invoked from the idle
task and the migration task with preemption or interrupts disabled.

So RT kernels cannot process soft interrupts in that context as that has to
acquire 'sleeping spinlocks' which is not possible with preemption or
interrupts disabled and forbidden from the idle task anyway.

The currently known SMP function call which raises a soft interrupt is in
the block layer, but this functionality is not enabled on RT kernels due to
latency and performance reasons.

RT could wake up ksoftirqd unconditionally, but this wants to be avoided if
there were soft interrupts pending already when this is invoked in the
context of the migration task. The migration task might have preempted a
threaded interrupt handler which raised a soft interrupt, but did not reach
the local_bh_enable() to process it. The "running" ksoftirqd might prevent
the handling in the interrupt thread context, which causes latency
issues.

Add a new function which handles this case explicitly for RT and falls
back to do_softirq() on !RT kernels. In the RT case this warns when one of
the flushed SMP function calls raised a soft interrupt so this can be
investigated.

[ tglx: Moved the RT part out of SMP code ]

Signed-off-by: Sebastian Andrzej Siewior <bigeasy@linutronix.de>
Signed-off-by: Thomas Gleixner <tglx@linutronix.de>
Acked-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lore.kernel.org/r/YgKgL6aPj8aBES6G@linutronix.de
Link: https://lore.kernel.org/r/20220413133024.356509586@linutronix.de

---
 include/linux/interrupt.h |  9 +++++++++
 kernel/smp.c              |  5 ++++-
 kernel/softirq.c          | 13 +++++++++++++
 3 files changed, 26 insertions(+), 1 deletion(-)

diff --git a/include/linux/interrupt.h b/include/linux/interrupt.h
index f40754c..a49fe8d 100644
--- a/include/linux/interrupt.h
+++ b/include/linux/interrupt.h
@@ -607,6 +607,15 @@ struct softirq_action
 asmlinkage void do_softirq(void);
 asmlinkage void __do_softirq(void);
 
+#ifdef CONFIG_PREEMPT_RT
+extern void do_softirq_post_smp_call_flush(unsigned int was_pending);
+#else
+static inline void do_softirq_post_smp_call_flush(unsigned int unused)
+{
+	do_softirq();
+}
+#endif
+
 extern void open_softirq(int nr, void (*action)(struct softirq_action *));
 extern void softirq_init(void);
 extern void __raise_softirq_irqoff(unsigned int nr);
diff --git a/kernel/smp.c b/kernel/smp.c
index 8e85f22..d54c2fe 100644
--- a/kernel/smp.c
+++ b/kernel/smp.c
@@ -696,6 +696,7 @@ static void __flush_smp_call_function_queue(bool warn_cpu_offline)
  */
 void flush_smp_call_function_queue(void)
 {
+	unsigned int was_pending;
 	unsigned long flags;
 
 	if (llist_empty(this_cpu_ptr(&call_single_queue)))
@@ -704,9 +705,11 @@ void flush_smp_call_function_queue(void)
 	cfd_seq_store(this_cpu_ptr(&cfd_seq_local)->idle, CFD_SEQ_NOCPU,
 		      smp_processor_id(), CFD_SEQ_IDLE);
 	local_irq_save(flags);
+	/* Get the already pending soft interrupts for RT enabled kernels */
+	was_pending = local_softirq_pending();
 	__flush_smp_call_function_queue(true);
 	if (local_softirq_pending())
-		do_softirq();
+		do_softirq_post_smp_call_flush(was_pending);
 
 	local_irq_restore(flags);
 }
diff --git a/kernel/softirq.c b/kernel/softirq.c
index fac8018..9f0aef8 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -294,6 +294,19 @@ static inline void invoke_softirq(void)
 		wakeup_softirqd();
 }
 
+/*
+ * flush_smp_call_function_queue() can raise a soft interrupt in a function
+ * call. On RT kernels this is undesired and the only known functionality
+ * in the block layer which does this is disabled on RT. If soft interrupts
+ * get raised which haven't been raised before the flush, warn so it can be
+ * investigated.
+ */
+void do_softirq_post_smp_call_flush(unsigned int was_pending)
+{
+	if (WARN_ON_ONCE(was_pending != local_softirq_pending()))
+		invoke_softirq();
+}
+
 #else /* CONFIG_PREEMPT_RT */
 
 /*


