* FAILED: patch "[PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard" failed to apply to 5.0-stable tree
@ 2019-04-23 20:51 gregkh
  2019-04-23 21:30 ` Phil Auld
  2019-04-23 22:07 ` Phil Auld
  0 siblings, 2 replies; 6+ messages in thread
From: gregkh @ 2019-04-23 20:51 UTC (permalink / raw)
  To: pauld, anton, bsegall, mingo, peterz, stable, tglx, torvalds; +Cc: stable


The patch below does not apply to the 5.0-stable tree.
If someone wants it applied there, or to any other stable or longterm
tree, then please email the backport, including the original git commit
id to <stable@vger.kernel.org>.

thanks,

greg k-h

------------------ original commit in Linus's tree ------------------

From 2e8e19226398db8265a8e675fcc0118b9e80c9e8 Mon Sep 17 00:00:00 2001
From: Phil Auld <pauld@redhat.com>
Date: Tue, 19 Mar 2019 09:00:05 -0400
Subject: [PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard
 lockup

With an extremely short cfs_period_us setting on a parent task group with a large
number of children, the for loop in sched_cfs_period_timer() can run until the
watchdog fires. There is no guarantee that the call to hrtimer_forward_now()
will ever return 0.  The large number of children can make
do_sched_cfs_period_timer() take longer than the period.

 NMI watchdog: Watchdog detected hard LOCKUP on cpu 24
 RIP: 0010:tg_nop+0x0/0x10
  <IRQ>
  walk_tg_tree_from+0x29/0xb0
  unthrottle_cfs_rq+0xe0/0x1a0
  distribute_cfs_runtime+0xd3/0xf0
  sched_cfs_period_timer+0xcb/0x160
  ? sched_cfs_slack_timer+0xd0/0xd0
  __hrtimer_run_queues+0xfb/0x270
  hrtimer_interrupt+0x122/0x270
  smp_apic_timer_interrupt+0x6a/0x140
  apic_timer_interrupt+0xf/0x20
  </IRQ>
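The runaway condition can be illustrated with a toy model (plain Python with invented names; the kernel's hrtimer_forward_now() semantics are only approximated here): when the handler consistently takes longer than the period, the overrun count never reaches zero, so the loop never exits.

```python
# Toy model of the sched_cfs_period_timer() loop. overruns() plays the
# role of hrtimer_forward_now(): it reports how many whole periods have
# elapsed since the timer's programmed expiry. If servicing one period
# costs more than the period itself, every iteration observes at least
# one new overrun and the "if n == 0: break" exit is never taken.

def overruns(now_ns, expires_ns, period_ns):
    """Whole periods elapsed between the programmed expiry and now."""
    if now_ns < expires_ns:
        return 0
    return (now_ns - expires_ns) // period_ns + 1

period_ns = 100        # extremely short cfs_period_us
handler_cost_ns = 150  # many children make the handler slow

now, expires, iterations = 0, 0, 0
while iterations < 1000:          # safety cap standing in for the watchdog
    n = overruns(now, expires, period_ns)
    if n == 0:
        break                     # never reached: handler cost > period
    expires += n * period_ns      # forward the timer past the overruns
    now += handler_cost_ns        # time burned in the period handler
    iterations += 1

print(iterations)  # 1000: the loop only stops at the artificial cap
```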

To prevent this we add protection to the loop that detects when the loop has run
too many times and scales the period and quota up, proportionally, so that the timer
can complete before the next period expires.  This preserves the relative runtime
quota while preventing the hard lockup.

A warning is issued reporting this state and the new values.
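The scaling arithmetic can be sketched as follows (ordinary Python with illustrative names; only the 147/128 factor and the 1-second cap come from the patch itself). The point is that growing period and quota by the same factor leaves the effective CPU cap, quota/period, essentially unchanged:

```python
NSEC_PER_USEC = 1_000
MAX_CFS_QUOTA_PERIOD = 1_000_000_000  # the kernel caps the period at 1 s

def scale_up(period_ns, quota_ns):
    """Grow the period by ~15% and scale the quota proportionally."""
    new = (period_ns * 147) // 128            # ~115%, as in the patch
    new = min(new, MAX_CFS_QUOTA_PERIOD)
    quota_ns = (quota_ns * new) // period_ns  # preserve quota/period
    return new, quota_ns

# A 100 us period with a 50 us quota, i.e. a 50% CPU cap:
period, quota = scale_up(100_000, 50_000)
print(period, quota)  # 114843 57421 -- still a ~50% cap
```

Because the quota scales with the period, a group throttled to 50% stays throttled to 50%; only the granularity of enforcement becomes coarser.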

Signed-off-by: Phil Auld <pauld@redhat.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: <stable@vger.kernel.org>
Cc: Anton Blanchard <anton@ozlabs.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Link: https://lkml.kernel.org/r/20190319130005.25492-1-pauld@redhat.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 40bd1e27b1b7..a4d9e14bf138 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4885,6 +4885,8 @@ static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
 	return HRTIMER_NORESTART;
 }
 
+extern const u64 max_cfs_quota_period;
+
 static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
 {
 	struct cfs_bandwidth *cfs_b =
@@ -4892,6 +4894,7 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
 	unsigned long flags;
 	int overrun;
 	int idle = 0;
+	int count = 0;
 
 	raw_spin_lock_irqsave(&cfs_b->lock, flags);
 	for (;;) {
@@ -4899,6 +4902,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
 		if (!overrun)
 			break;
 
+		if (++count > 3) {
+			u64 new, old = ktime_to_ns(cfs_b->period);
+
+			new = (old * 147) / 128; /* ~115% */
+			new = min(new, max_cfs_quota_period);
+
+			cfs_b->period = ns_to_ktime(new);
+
+			/* since max is 1s, this is limited to 1e9^2, which fits in u64 */
+			cfs_b->quota *= new;
+			cfs_b->quota = div64_u64(cfs_b->quota, old);
+
+			pr_warn_ratelimited(
+	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
+				smp_processor_id(),
+				div_u64(new, NSEC_PER_USEC),
+				div_u64(cfs_b->quota, NSEC_PER_USEC));
+
+			/* reset count so we don't come right back in here */
+			count = 0;
+		}
+
 		idle = do_sched_cfs_period_timer(cfs_b, overrun, flags);
 	}
 	if (idle)



* Re: FAILED: patch "[PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard" failed to apply to 5.0-stable tree
  2019-04-23 20:51 FAILED: patch "[PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard" failed to apply to 5.0-stable tree gregkh
@ 2019-04-23 21:30 ` Phil Auld
  2019-04-24  0:01   ` Sasha Levin
  2019-04-24 13:46   ` Greg KH
  2019-04-23 22:07 ` Phil Auld
  1 sibling, 2 replies; 6+ messages in thread
From: Phil Auld @ 2019-04-23 21:30 UTC (permalink / raw)
  To: gregkh; +Cc: anton, bsegall, mingo, peterz, stable, tglx, torvalds


Hi Greg,

On Tue, Apr 23, 2019 at 10:51:12PM +0200 gregkh@linuxfoundation.org wrote:
> 
> The patch below does not apply to the 5.0-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@vger.kernel.org>.

I sent the stable backport version as a reply to the tip-bot mail as
Sasha had suggested.  Is there something else I needed to do? Clearly :)

Thanks,
Phil


> 
> thanks,
> 
> greg k-h
> 
> ------------------ original commit in Linus's tree ------------------
> 
> From 2e8e19226398db8265a8e675fcc0118b9e80c9e8 Mon Sep 17 00:00:00 2001
> From: Phil Auld <pauld@redhat.com>
> Date: Tue, 19 Mar 2019 09:00:05 -0400
> Subject: [PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard
>  lockup
> 
> With extremely short cfs_period_us setting on a parent task group with a large
> number of children the for loop in sched_cfs_period_timer() can run until the
> watchdog fires. There is no guarantee that the call to hrtimer_forward_now()
> will ever return 0.  The large number of children can make
> do_sched_cfs_period_timer() take longer than the period.
> 
>  NMI watchdog: Watchdog detected hard LOCKUP on cpu 24
>  RIP: 0010:tg_nop+0x0/0x10
>   <IRQ>
>   walk_tg_tree_from+0x29/0xb0
>   unthrottle_cfs_rq+0xe0/0x1a0
>   distribute_cfs_runtime+0xd3/0xf0
>   sched_cfs_period_timer+0xcb/0x160
>   ? sched_cfs_slack_timer+0xd0/0xd0
>   __hrtimer_run_queues+0xfb/0x270
>   hrtimer_interrupt+0x122/0x270
>   smp_apic_timer_interrupt+0x6a/0x140
>   apic_timer_interrupt+0xf/0x20
>   </IRQ>
> 
> To prevent this we add protection to the loop that detects when the loop has run
> too many times and scales the period and quota up, proportionally, so that the timer
> can complete before the next period expires.  This preserves the relative runtime
> quota while preventing the hard lockup.
> 
> A warning is issued reporting this state and the new values.
> 
> Signed-off-by: Phil Auld <pauld@redhat.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: <stable@vger.kernel.org>
> Cc: Anton Blanchard <anton@ozlabs.org>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Link: https://lkml.kernel.org/r/20190319130005.25492-1-pauld@redhat.com
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 40bd1e27b1b7..a4d9e14bf138 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4885,6 +4885,8 @@ static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
>  	return HRTIMER_NORESTART;
>  }
>  
> +extern const u64 max_cfs_quota_period;
> +
>  static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
>  {
>  	struct cfs_bandwidth *cfs_b =
> @@ -4892,6 +4894,7 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
>  	unsigned long flags;
>  	int overrun;
>  	int idle = 0;
> +	int count = 0;
>  
>  	raw_spin_lock_irqsave(&cfs_b->lock, flags);
>  	for (;;) {
> @@ -4899,6 +4902,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
>  		if (!overrun)
>  			break;
>  
> +		if (++count > 3) {
> +			u64 new, old = ktime_to_ns(cfs_b->period);
> +
> +			new = (old * 147) / 128; /* ~115% */
> +			new = min(new, max_cfs_quota_period);
> +
> +			cfs_b->period = ns_to_ktime(new);
> +
> +			/* since max is 1s, this is limited to 1e9^2, which fits in u64 */
> +			cfs_b->quota *= new;
> +			cfs_b->quota = div64_u64(cfs_b->quota, old);
> +
> +			pr_warn_ratelimited(
> +	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
> +				smp_processor_id(),
> +				div_u64(new, NSEC_PER_USEC),
> +				div_u64(cfs_b->quota, NSEC_PER_USEC));
> +
> +			/* reset count so we don't come right back in here */
> +			count = 0;
> +		}
> +
>  		idle = do_sched_cfs_period_timer(cfs_b, overrun, flags);
>  	}
>  	if (idle)
> 

-- 


* Re: FAILED: patch "[PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard" failed to apply to 5.0-stable tree
  2019-04-23 20:51 FAILED: patch "[PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard" failed to apply to 5.0-stable tree gregkh
  2019-04-23 21:30 ` Phil Auld
@ 2019-04-23 22:07 ` Phil Auld
  1 sibling, 0 replies; 6+ messages in thread
From: Phil Auld @ 2019-04-23 22:07 UTC (permalink / raw)
  To: gregkh; +Cc: anton, bsegall, mingo, peterz, stable, tglx, torvalds


Hi Greg,

On Tue, Apr 23, 2019 at 10:51:12PM +0200 gregkh@linuxfoundation.org wrote:
> 
> The patch below does not apply to the 5.0-stable tree.
> If someone wants it applied there, or to any other stable or longterm
> tree, then please email the backport, including the original git commit
> id to <stable@vger.kernel.org>.
>

Below, replacing the original version, is the backported version. Minor
change due to different spinlock style in the context. It should apply
to 5.0 and all the 4.x stable trees, just with increasing offsets. 

Let me know if there is anything else I can do.


Thanks,
Phil


> thanks,
> 
> greg k-h
> 
> ------------------ original commit in Linus's tree ------------------
> 
> From 2e8e19226398db8265a8e675fcc0118b9e80c9e8 Mon Sep 17 00:00:00 2001
> From: Phil Auld <pauld@redhat.com>
> Date: Tue, 19 Mar 2019 09:00:05 -0400
> Subject: [PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard
>  lockup
> 
> With extremely short cfs_period_us setting on a parent task group with a large
> number of children the for loop in sched_cfs_period_timer() can run until the
> watchdog fires. There is no guarantee that the call to hrtimer_forward_now()
> will ever return 0.  The large number of children can make
> do_sched_cfs_period_timer() take longer than the period.
> 
>  NMI watchdog: Watchdog detected hard LOCKUP on cpu 24
>  RIP: 0010:tg_nop+0x0/0x10
>   <IRQ>
>   walk_tg_tree_from+0x29/0xb0
>   unthrottle_cfs_rq+0xe0/0x1a0
>   distribute_cfs_runtime+0xd3/0xf0
>   sched_cfs_period_timer+0xcb/0x160
>   ? sched_cfs_slack_timer+0xd0/0xd0
>   __hrtimer_run_queues+0xfb/0x270
>   hrtimer_interrupt+0x122/0x270
>   smp_apic_timer_interrupt+0x6a/0x140
>   apic_timer_interrupt+0xf/0x20
>   </IRQ>
> 
> To prevent this we add protection to the loop that detects when the loop has run
> too many times and scales the period and quota up, proportionally, so that the timer
> can complete before the next period expires.  This preserves the relative runtime
> quota while preventing the hard lockup.
> 
> A warning is issued reporting this state and the new values.
> 
> Signed-off-by: Phil Auld <pauld@redhat.com>
> Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
> Cc: <stable@vger.kernel.org>
> Cc: Anton Blanchard <anton@ozlabs.org>
> Cc: Ben Segall <bsegall@google.com>
> Cc: Linus Torvalds <torvalds@linux-foundation.org>
> Cc: Peter Zijlstra <peterz@infradead.org>
> Cc: Thomas Gleixner <tglx@linutronix.de>
> Link: https://lkml.kernel.org/r/20190319130005.25492-1-pauld@redhat.com
> Signed-off-by: Ingo Molnar <mingo@kernel.org>
>

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 310d0637fe4b..f0380229b6f2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4859,12 +4859,15 @@ static enum hrtimer_restart sched_cfs_slack_timer(struct hrtimer *timer)
 	return HRTIMER_NORESTART;
 }
 
+extern const u64 max_cfs_quota_period;
+
 static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
 {
 	struct cfs_bandwidth *cfs_b =
 		container_of(timer, struct cfs_bandwidth, period_timer);
 	int overrun;
 	int idle = 0;
+	int count = 0;
 
 	raw_spin_lock(&cfs_b->lock);
 	for (;;) {
@@ -4872,6 +4875,28 @@ static enum hrtimer_restart sched_cfs_period_timer(struct hrtimer *timer)
 		if (!overrun)
 			break;
 
+		if (++count > 3) {
+			u64 new, old = ktime_to_ns(cfs_b->period);
+
+			new = (old * 147) / 128; /* ~115% */
+			new = min(new, max_cfs_quota_period);
+
+			cfs_b->period = ns_to_ktime(new);
+
+			/* since max is 1s, this is limited to 1e9^2, which fits in u64 */
+			cfs_b->quota *= new;
+			cfs_b->quota = div64_u64(cfs_b->quota, old);
+
+			pr_warn_ratelimited(
+	"cfs_period_timer[cpu%d]: period too short, scaling up (new cfs_period_us %lld, cfs_quota_us = %lld)\n",
+				smp_processor_id(),
+				div_u64(new, NSEC_PER_USEC),
+				div_u64(cfs_b->quota, NSEC_PER_USEC));
+
+			/* reset count so we don't come right back in here */
+			count = 0;
+		}
+
 		idle = do_sched_cfs_period_timer(cfs_b, overrun);
 	}
 	if (idle)


-- 


* Re: FAILED: patch "[PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard" failed to apply to 5.0-stable tree
  2019-04-23 21:30 ` Phil Auld
@ 2019-04-24  0:01   ` Sasha Levin
  2019-04-24 13:46   ` Greg KH
  1 sibling, 0 replies; 6+ messages in thread
From: Sasha Levin @ 2019-04-24  0:01 UTC (permalink / raw)
  To: Phil Auld; +Cc: gregkh, anton, bsegall, mingo, peterz, stable, tglx, torvalds

On Tue, Apr 23, 2019 at 05:30:24PM -0400, Phil Auld wrote:
>
>Hi Greg,
>
>On Tue, Apr 23, 2019 at 10:51:12PM +0200 gregkh@linuxfoundation.org wrote:
>>
>> The patch below does not apply to the 5.0-stable tree.
>> If someone wants it applied there, or to any other stable or longterm
>> tree, then please email the backport, including the original git commit
>> id to <stable@vger.kernel.org>.
>
>I sent the stable backport version as a reply to the tip-bot mail as
>Sasha had suggested.  Is there something else I needed to do? Clearly :)

Hi Phil,

Indeed the backport you've provided is good and I've queued it for all
branches. I'll work on improving this workflow for the future. Thank
you!

--
Thanks,
Sasha


* Re: FAILED: patch "[PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard" failed to apply to 5.0-stable tree
  2019-04-23 21:30 ` Phil Auld
  2019-04-24  0:01   ` Sasha Levin
@ 2019-04-24 13:46   ` Greg KH
  2019-04-24 13:50     ` Phil Auld
  1 sibling, 1 reply; 6+ messages in thread
From: Greg KH @ 2019-04-24 13:46 UTC (permalink / raw)
  To: Phil Auld; +Cc: anton, bsegall, mingo, peterz, stable, tglx, torvalds

On Tue, Apr 23, 2019 at 05:30:24PM -0400, Phil Auld wrote:
> 
> Hi Greg,
> 
> On Tue, Apr 23, 2019 at 10:51:12PM +0200 gregkh@linuxfoundation.org wrote:
> > 
> > The patch below does not apply to the 5.0-stable tree.
> > If someone wants it applied there, or to any other stable or longterm
> > tree, then please email the backport, including the original git commit
> > id to <stable@vger.kernel.org>.
> 
> I sent the stable backport version as a reply to the tip-bot mail as
> Sasha had suggested.  Is there something else I needed to do? Clearly :)

Nope, my fault, I missed that, I should have caught it.

Thanks Sasha for picking this up.

greg k-h


* Re: FAILED: patch "[PATCH] sched/fair: Limit sched_cfs_period_timer() loop to avoid hard" failed to apply to 5.0-stable tree
  2019-04-24 13:46   ` Greg KH
@ 2019-04-24 13:50     ` Phil Auld
  0 siblings, 0 replies; 6+ messages in thread
From: Phil Auld @ 2019-04-24 13:50 UTC (permalink / raw)
  To: Greg KH; +Cc: anton, bsegall, mingo, peterz, stable, tglx, torvalds

On Wed, Apr 24, 2019 at 03:46:05PM +0200 Greg KH wrote:
> On Tue, Apr 23, 2019 at 05:30:24PM -0400, Phil Auld wrote:
> > 
> > Hi Greg,
> > 
> > On Tue, Apr 23, 2019 at 10:51:12PM +0200 gregkh@linuxfoundation.org wrote:
> > > 
> > > The patch below does not apply to the 5.0-stable tree.
> > > If someone wants it applied there, or to any other stable or longterm
> > > tree, then please email the backport, including the original git commit
> > > id to <stable@vger.kernel.org>.
> > 
> > I sent the stable backport version as a reply to the tip-bot mail as
> > Sasha had suggested.  Is there something else I needed to do? Clearly :)
> 
> Nope, my fault, I missed that, I should have caught it.

No worries. You guys do an awesome job with all the stable stuff. It would make my 
head spin. 

> 
> Thanks Sasha for picking this up.
> 

Yes, thanks!


Cheers,
Phil


> greg k-h

-- 


