* [RFC,v2 1/3] sched: set loop_max after rq lock is taken
@ 2017-02-08  8:43 Uladzislau Rezki
  2017-02-08  8:43 ` [RFC,v2 2/3] sched: set number of iterations to h_nr_running Uladzislau Rezki
                   ` (2 more replies)
  0 siblings, 3 replies; 16+ messages in thread
From: Uladzislau Rezki @ 2017-02-08  8:43 UTC (permalink / raw)
  To: Ingo Molnar; +Cc: LKML, Peter Zijlstra, Uladzislau 2 Rezki

From: Uladzislau 2 Rezki <uladzislau2.rezki@sonymobile.com>

During load balancing there is a race when setting the loop_max
variable: nr_running can change after it is read, resulting in an
incorrect number of iterations.

As a result, we may skip some candidates or check the same tasks
again.
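
For illustration only, a minimal user-space sketch of the ordering
issue (not kernel code; fake_rq, churn and SCHED_NR_MIGRATE are
made-up names for this example). It shows how a loop_max computed
from nr_running before the lock is taken can be stale, while a value
read under the lock reflects the runqueue state that will actually be
iterated:

  #include <pthread.h>
  #include <stdio.h>

  #define SCHED_NR_MIGRATE 32

  struct fake_rq {
  	pthread_mutex_t lock;
  	unsigned int nr_running;
  };

  static struct fake_rq busiest = {
  	.lock = PTHREAD_MUTEX_INITIALIZER,
  	.nr_running = 8,
  };

  /* Simulates tasks leaving the runqueue on another CPU. */
  static void *churn(void *arg)
  {
  	(void)arg;
  	pthread_mutex_lock(&busiest.lock);
  	busiest.nr_running = 2;	/* six tasks went away */
  	pthread_mutex_unlock(&busiest.lock);
  	return NULL;
  }

  int main(void)
  {
  	pthread_t t;
  	unsigned int early, late;

  	/* Racy: sampled before the lock, may already be stale. */
  	early = busiest.nr_running;

  	pthread_create(&t, NULL, churn, NULL);
  	pthread_join(&t, NULL);

  	pthread_mutex_lock(&busiest.lock);
  	/* Sampled with the lock held, as the patch does. */
  	late = busiest.nr_running;
  	pthread_mutex_unlock(&busiest.lock);

  	printf("loop_max before lock: %u, under lock: %u\n",
  	       early < SCHED_NR_MIGRATE ? early : SCHED_NR_MIGRATE,
  	       late  < SCHED_NR_MIGRATE ? late  : SCHED_NR_MIGRATE);
  	return 0;
  }

Built with "gcc -pthread", the two printed values differ: iterating
with the stale (larger) loop_max revisits tasks, and the opposite
change would make us stop before considering all candidates.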

Signed-off-by: Uladzislau 2 Rezki <uladzislau2.rezki@sonymobile.com>
---
 kernel/sched/fair.c | 7 ++++++-
 1 file changed, 6 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6559d19..4be7193 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8073,12 +8073,17 @@ static int load_balance(int this_cpu, struct rq *this_rq,
 		 * correctly treated as an imbalance.
 		 */
 		env.flags |= LBF_ALL_PINNED;
-		env.loop_max  = min(sysctl_sched_nr_migrate, busiest->nr_running);
 
 more_balance:
 		raw_spin_lock_irqsave(&busiest->lock, flags);
 
 		/*
+		 * Set loop_max when rq's lock is taken to prevent a race.
+		 */
+		env.loop_max = min(sysctl_sched_nr_migrate,
+						busiest->nr_running);
+
+		/*
 		 * cur_ld_moved - load moved in current iteration
 		 * ld_moved     - cumulative load moved across iterations
 		 */
-- 
2.1.4


Thread overview: 16+ messages
2017-02-08  8:43 [RFC,v2 1/3] sched: set loop_max after rq lock is taken Uladzislau Rezki
2017-02-08  8:43 ` [RFC,v2 2/3] sched: set number of iterations to h_nr_running Uladzislau Rezki
2017-02-09 12:20   ` Peter Zijlstra
2017-02-09 18:59     ` Uladzislau Rezki
2017-02-08  8:43 ` [RFC,v2 3/3] sched: ignore task_h_load for CPU_NEWLY_IDLE Uladzislau Rezki
2017-02-08  9:19   ` Mike Galbraith
2017-02-09 10:12     ` Uladzislau Rezki
2017-02-09 12:22   ` Peter Zijlstra
2017-02-09 18:54     ` Uladzislau Rezki
2017-02-13 13:51       ` Peter Zijlstra
2017-02-13 17:17         ` Uladzislau Rezki
2017-02-14 18:28           ` Uladzislau Rezki
2017-02-15 18:58             ` Dietmar Eggemann
2017-02-16 11:20               ` Uladzislau Rezki
2017-03-08 15:35                 ` Uladzislau Rezki
2017-02-09 12:14 ` [RFC,v2 1/3] sched: set loop_max after rq lock is taken Peter Zijlstra
