[PATCH] sched/fair: Avoid divide by zero when rebalancing domains
@ 2018-07-04 14:24 Matt Fleming
From: Matt Fleming @ 2018-07-04 14:24 UTC
To: Peter Zijlstra
Cc: linux-kernel, Matt Fleming, Ingo Molnar, Mike Galbraith

It's possible that the CPU doing nohz idle balance hasn't had its own
load updated for many seconds. This can lead to huge deltas between
rq->age_stamp and rq->clock when rebalancing, and has been seen to
cause the following crash:

 divide error: 0000 [#1] SMP
 Call Trace:
  [<ffffffff810bcba8>] update_sd_lb_stats+0xe8/0x560
  [<ffffffff810bd04d>] find_busiest_group+0x2d/0x4b0
  [<ffffffff810bd640>] load_balance+0x170/0x950
  [<ffffffff810be3ff>] rebalance_domains+0x13f/0x290
  [<ffffffff810852bc>] __do_softirq+0xec/0x300
  [<ffffffff8108578a>] irq_exit+0xfa/0x110
  [<ffffffff816167d9>] reschedule_interrupt+0xc9/0xd0

Make sure we update the rq clock and load before balancing.

Cc: Ingo Molnar <mingo@kernel.org>
Cc: Mike Galbraith <umgwanakikbuti@gmail.com>
Cc: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Matt Fleming <matt@codeblueprint.co.uk>
---
 kernel/sched/fair.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 2f0a0be4d344..2c81662c858a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9597,6 +9597,16 @@ static bool _nohz_idle_balance(struct rq *this_rq, unsigned int flags,
 	 */
 	smp_mb();
 
+	/*
+	 * Ensure this_rq's clock and load are up-to-date before we
+	 * rebalance since it's possible that they haven't been
+	 * updated for multiple schedule periods, i.e. many seconds.
+	 */
+	raw_spin_lock_irq(&this_rq->lock);
+	update_rq_clock(this_rq);
+	cpu_load_update_idle(this_rq);
+	raw_spin_unlock_irq(&this_rq->lock);
+
 	for_each_cpu(balance_cpu, nohz.idle_cpus_mask) {
 		if (balance_cpu == this_cpu || !idle_cpu(balance_cpu))
 			continue;
-- 
2.13.6



Thread overview: 12+ messages
2018-07-04 14:24 [PATCH] sched/fair: Avoid divide by zero when rebalancing domains Matt Fleming
2018-07-05  8:02 ` [lkp-robot] [sched/fair] fbd5188493: WARNING:inconsistent_lock_state kernel test robot
2018-07-05  8:58   ` Dietmar Eggemann
2018-07-05  9:52     ` Dietmar Eggemann
2018-07-05 13:24       ` Matt Fleming
2018-07-05 14:43         ` Matt Fleming
2018-07-05 14:59         ` Dietmar Eggemann
2018-07-05 10:10 ` [PATCH] sched/fair: Avoid divide by zero when rebalancing domains Valentin Schneider
2018-07-05 13:27   ` Matt Fleming
2018-07-05 16:54     ` Valentin Schneider
2018-08-17 10:27       ` Matt Fleming
2018-08-17 12:58         ` Valentin Schneider
