linux-kernel.vger.kernel.org archive mirror
* [PATCH] sched/fair: Sync task util before slow-path wakeup
@ 2017-08-02 13:10 Brendan Jackman
  2017-08-02 13:24 ` Peter Zijlstra
  0 siblings, 1 reply; 4+ messages in thread
From: Brendan Jackman @ 2017-08-02 13:10 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, linux-kernel
  Cc: Joel Fernandes, Andres Oportus, Dietmar Eggemann,
	Vincent Guittot, Josef Bacik, Morten Rasmussen

We use task_util in find_idlest_group via capacity_spare_wake. This
task_util is updated in wake_cap. However, wake_cap is not the only
path into find_idlest_group - we could also have been sent there by
wake_wide. So explicitly sync the task util with prev_cpu when we are
about to head to find_idlest_group.

We could simply do this at the beginning of
select_task_rq_fair (i.e. irrespective of whether we're heading to
select_idle_sibling or find_idlest_group & co), but I didn't want to
slow down the select_idle_sibling path more than necessary.

Don't do this during fork balancing: we won't need the task_util, and
we'd just clobber the last_update_time, which is supposed to be 0.

Signed-off-by: Brendan Jackman <brendan.jackman@arm.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Josef Bacik <josef@toxicpanda.com>
Cc: Ingo Molnar <mingo@redhat.com>
Cc: Morten Rasmussen <morten.rasmussen@arm.com>
Cc: Peter Zijlstra <peterz@infradead.org>
---
 kernel/sched/fair.c | 8 ++++++++
 1 file changed, 8 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index c95880e216f6..62869ff252b4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5913,6 +5913,14 @@ select_task_rq_fair(struct task_struct *p, int prev_cpu, int sd_flag, int wake_f
 			new_cpu = cpu;
 	}
 
+	if (sd && !(sd_flag & SD_BALANCE_FORK))
+		/*
+		 * We're going to need the task's util for capacity_spare_wake
+		 * in find_idlest_group. Sync it up to prev_cpu's
+		 * last_update_time.
+		 */
+		sync_entity_load_avg(&p->se);
+
 	if (!sd) {
  pick_cpu:
 		if (sd_flag & SD_BALANCE_WAKE) /* XXX always ? */
-- 
2.13.0



Thread overview: 4+ messages
2017-08-02 13:10 [PATCH] sched/fair: Sync task util before slow-path wakeup Brendan Jackman
2017-08-02 13:24 ` Peter Zijlstra
2017-08-02 13:27   ` Brendan Jackman
2017-08-07 12:51   ` Morten Rasmussen
