linux-kernel.vger.kernel.org archive mirror
* [PATCH V3] Softirq: avoid large sched delay from the pending softirqs
@ 2020-07-23  4:54 qianjun.kernel
  2020-07-23 13:41 ` Thomas Gleixner
  0 siblings, 1 reply; 3+ messages in thread
From: qianjun.kernel @ 2020-07-23  4:54 UTC (permalink / raw)
  To: tglx, peterz, will, luto, urezki, linux-kernel; +Cc: laoar.shao, jun qian

From: jun qian <qianjun.kernel@gmail.com>

When handling the pending softirqs, __do_softirq() processes all of
them in a while loop. If the pending softirqs together take more than
2 ms to process, or one of the softirq handlers runs for a long time,
the original code logic still processes every pending softirq without
waking ksoftirqd, which causes a relatively large scheduling delay on
the corresponding CPU, which we do not wish to see. This patch checks
the total time spent processing the pending softirqs; if it exceeds
2 ms, wake up ksoftirqd to avoid a large scheduling delay.

Signed-off-by: jun qian <qianjun.kernel@gmail.com>
---
 kernel/softirq.c | 21 +++++++++++++--------
 1 file changed, 13 insertions(+), 8 deletions(-)

diff --git a/kernel/softirq.c b/kernel/softirq.c
index c4201b7f..8f47554 100644
--- a/kernel/softirq.c
+++ b/kernel/softirq.c
@@ -200,17 +200,15 @@ void __local_bh_enable_ip(unsigned long ip, unsigned int cnt)
 /*
  * We restart softirq processing for at most MAX_SOFTIRQ_RESTART times,
  * but break the loop if need_resched() is set or after 2 ms.
- * The MAX_SOFTIRQ_TIME provides a nice upper bound in most cases, but in
- * certain cases, such as stop_machine(), jiffies may cease to
- * increment and so we need the MAX_SOFTIRQ_RESTART limit as
- * well to make sure we eventually return from this method.
+ * In the loop, if the time spent processing softirqs has exceeded 2
+ * milliseconds, we also break out of the loop to wake up ksoftirqd.
  *
  * These limits have been established via experimentation.
  * The two things to balance is latency against fairness -
  * we want to handle softirqs as soon as possible, but they
  * should not be able to lock up the box.
  */
-#define MAX_SOFTIRQ_TIME  msecs_to_jiffies(2)
+#define MAX_SOFTIRQ_TIME_NS 2000000
 #define MAX_SOFTIRQ_RESTART 10
 
 #ifdef CONFIG_TRACE_IRQFLAGS
@@ -248,7 +246,7 @@ static inline void lockdep_softirq_end(bool in_hardirq) { }
 
 asmlinkage __visible void __softirq_entry __do_softirq(void)
 {
-	unsigned long end = jiffies + MAX_SOFTIRQ_TIME;
+	ktime_t end = ktime_get() + MAX_SOFTIRQ_TIME_NS;
 	unsigned long old_flags = current->flags;
 	int max_restart = MAX_SOFTIRQ_RESTART;
 	struct softirq_action *h;
@@ -299,6 +297,13 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 		}
 		h++;
 		pending >>= softirq_bit;
+
+		/*
+		 * The softirq handlers have been running for too long, so
+		 * break out and let ksoftirqd handle the rest.
+		 */
+		if (need_resched() && ktime_get() > end)
+			break;
 	}
 
 	if (__this_cpu_read(ksoftirqd) == current)
@@ -307,8 +312,8 @@ asmlinkage __visible void __softirq_entry __do_softirq(void)
 
 	pending = local_softirq_pending();
 	if (pending) {
-		if (time_before(jiffies, end) && !need_resched() &&
-		    --max_restart)
+		if (!need_resched() && --max_restart &&
+		    ktime_get() <= end)
 			goto restart;
 
 		wakeup_softirqd();
-- 
1.8.3.1

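For context on the time-bound change above: msecs_to_jiffies() rounds up,
so the old jiffies-based budget is only as fine-grained as CONFIG_HZ
allows. A minimal sketch contrasting the two deadlines (the helper names
are illustrative, not part of the patch):

#include <linux/jiffies.h>
#include <linux/ktime.h>

/*
 * Old bound: with CONFIG_HZ=100 a jiffy is 10 ms, so msecs_to_jiffies(2)
 * rounds up to one whole jiffy and the nominal 2 ms budget can stretch
 * to ~10 ms before time_before(jiffies, end) fails.
 */
static unsigned long jiffies_deadline(void)
{
	return jiffies + msecs_to_jiffies(2);
}

/* New bound: nanosecond resolution, independent of CONFIG_HZ. */
static ktime_t ns_deadline(void)
{
	return ktime_get() + 2 * NSEC_PER_MSEC;	/* == MAX_SOFTIRQ_TIME_NS */
}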


* Re: [PATCH V3] Softirq: avoid large sched delay from the pending softirqs
  2020-07-23  4:54 [PATCH V3] Softirq: avoid large sched delay from the pending softirqs qianjun.kernel
@ 2020-07-23 13:41 ` Thomas Gleixner
  2020-07-23 14:11   ` jun qian
  0 siblings, 1 reply; 3+ messages in thread
From: Thomas Gleixner @ 2020-07-23 13:41 UTC (permalink / raw)
  To: qianjun.kernel, peterz, will, luto, urezki, linux-kernel
  Cc: laoar.shao, jun qian

qianjun.kernel@gmail.com writes:
> From: jun qian <qianjun.kernel@gmail.com>
> +		/*
> +		 * The softirq handlers have been running for too long, so
> +		 * break out and let ksoftirqd handle the rest.
> +		 */
> +		if (need_resched() && ktime_get() > end)
> +			break;

As per my reply to V2, this leaks the not yet handled pending bits. If
you do a V4, can you please use sched_clock() instead of ktime_get()?

Thanks,

        tglx
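
A minimal sketch of the sched_clock() variant being requested here,
keeping the 2 ms budget from the V3 patch (the helper name is
illustrative, not from any posted series):

#include <linux/sched/clock.h>

#define MAX_SOFTIRQ_TIME_NS 2000000

/*
 * sched_clock() returns nanoseconds since boot and is typically cheaper
 * to read than ktime_get(), which matters on the softirq fast path.
 * Take one timestamp when __do_softirq() starts ...
 */
static inline bool softirq_budget_exceeded(u64 start)
{
	return sched_clock() - start > MAX_SOFTIRQ_TIME_NS;
}

/*
 * ... and the check in the pending-bit loop would then read:
 *
 *	if (need_resched() && softirq_budget_exceeded(start))
 *		break;
 */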


* Re: [PATCH V3] Softirq: avoid large sched delay from the pending softirqs
  2020-07-23 13:41 ` Thomas Gleixner
@ 2020-07-23 14:11   ` jun qian
  0 siblings, 0 replies; 3+ messages in thread
From: jun qian @ 2020-07-23 14:11 UTC (permalink / raw)
  To: Thomas Gleixner
  Cc: peterz, will, luto, Uladzislau Rezki, linux-kernel, Yafang Shao

On Thu, Jul 23, 2020 at 9:41 PM Thomas Gleixner <tglx@linutronix.de> wrote:
>
> qianjun.kernel@gmail.com writes:
> > From: jun qian <qianjun.kernel@gmail.com>
> > +             /*
> > +              * The softirq handlers have been running for too long, so
> > +              * break out and let ksoftirqd handle the rest.
> > +              */
> > +             if (need_resched() && ktime_get() > end)
> > +                     break;
>
> As per my reply to V2, this leaks the not yet handled pending bits. If
> you do a V4, can you please use sched_clock() instead of ktime_get()?
>
The reason the unhandled pending bits leak is that set_softirq_pending(0)
is called at the start: after that, the local 'pending' variable holds
the only copy of those bits, so if the loop is broken early, the bits
that were not yet handled are lost, as sketched below. This is my
understanding; I am not sure whether it is correct.
Looking forward to your reply.
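
A sketch of where the bits go missing in the V3 code, with one possible
way to put them back before bailing out (the or_softirq_pending() line
is illustrative, not something this thread settled on):

	pending = local_softirq_pending();
	/* ... */
	set_softirq_pending(0);	/* from here on, 'pending' is the only copy */

	while ((softirq_bit = ffs(pending))) {
		/* ... h->action(h) dispatch as in mainline ... */
		h++;
		pending >>= softirq_bit;	/* handled bits are shifted out */

		if (need_resched() && ktime_get() > end) {
			/*
			 * Breaking here would drop the not yet handled bits
			 * still sitting in 'pending'; restore them first.
			 * 'h - softirq_vec' is how far they were shifted.
			 */
			or_softirq_pending(pending << (h - softirq_vec));
			break;
		}
	}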

Thank you so much.

> Thanks,
>
>         tglx


