From: Morten Rasmussen <morten.rasmussen@arm.com>
To: Peter Zijlstra <peterz@infradead.org>
Cc: mingo@redhat.com, dietmar.eggemann@arm.com, yuyang.du@intel.com,
	vincent.guittot@linaro.org, mgalbraith@suse.de,
	sgurrappadi@nvidia.com, freedom.tan@mediatek.com,
	keita.kobayashi.ym@renesas.com, linux-kernel@vger.kernel.org
Subject: Re: [PATCH v3 10/13] sched/fair: Compute task/cpu utilization at wake-up more correctly
Date: Thu, 18 Aug 2016 09:40:55 +0100	[thread overview]
Message-ID: <20160818084053.GG3391@e105550-lin.cambridge.arm.com> (raw)
In-Reply-To: <20160815154237.GE3391@e105550-lin.cambridge.arm.com>

On Mon, Aug 15, 2016 at 04:42:37PM +0100, Morten Rasmussen wrote:
> On Mon, Aug 15, 2016 at 04:23:42PM +0200, Peter Zijlstra wrote:
> > But unlike that function, it doesn't actually use __update_load_avg().
> > Why not?
> 
> Fair question :)
> 
> We currently exploit the fact that the task utilization is _not_ updated
> in wake-up balancing to make sure we don't under-estimate the capacity
> requirements for tasks that have slept for a while. If we update it, we
> lose the non-decayed 'peak' utilization, but I guess we could just
> store it somewhere when we do the wake-up decay.
> 
> I thought there was a better reason when I wrote the patch, but I don't
> recall right now. I will look into it again and see if we can use
> __update_load_avg() to do a proper update instead of doing things twice.

AFAICT, we should be able to synchronize the task utilization to the
previous rq utilization using __update_load_avg() as you suggest. The
patch below should work as a replacement without any changes to
subsequent patches. It doesn't solve the under-estimation issue, but I
have another patch for that.
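
As a purely illustrative aside (a toy user-space model, not kernel
code), the problem the patch addresses can be shown with a PELT-style
signal that halves every 32ms: while a task sleeps, its contribution to
the rq signal keeps decaying, but the task's own util_avg stays frozen
at its pre-sleep value, so the naive subtraction goes negative. All
numbers and the decay() helper below are made up for the example:

#include <stdio.h>
#include <math.h>

/* PELT-style geometric decay: y^32 = 0.5, i.e. the signal halves every 32ms */
static double decay(double util, double ms)
{
	return util * pow(0.5, ms / 32.0);
}

int main(void)
{
	double task = 400.0;	/* task util_avg when it went to sleep */
	double others = 300.0;	/* util from tasks still running on the cpu */
	double slept_ms = 300.0;

	/* The rq signal is kept up to date: the sleeper's share decays */
	double rq = decay(task, slept_ms) + others;

	/* Naive: task util_avg is stale, not decayed since it slept */
	printf("naive:  rq - task = %.1f\n", rq - task);

	/* Synced (what sync_entity_load_avg() below achieves) */
	printf("synced: rq - task = %.1f\n", rq - decay(task, slept_ms));

	return 0;
}

The naive estimate comes out at roughly -99, which is why cpu_util_wake()
below has to clamp the subtraction at zero, while the synced estimate
recovers the ~300 contributed by the other tasks.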

---8<---

From 43226a896fad077c3ab4932f797df17159779d6e Mon Sep 17 00:00:00 2001
From: Morten Rasmussen <morten.rasmussen@arm.com>
Date: Thu, 28 Apr 2016 09:52:35 +0100
Subject: [PATCH] sched/fair: Compute task/cpu utilization at wake-up more
 correctly

At task wake-up, load-tracking isn't updated until the task is enqueued.
The task's own view of its utilization contribution may therefore not be
aligned with its contribution to the cfs_rq load-tracking, which may have
been updated in the meantime. Basically, the task's own utilization
hasn't yet accounted for the sleep decay, while the cfs_rq may have
(partially). Estimating the cfs_rq utilization as
task_rq(p)->cfs.avg.util_avg - p->se.avg.util_avg when the task is
migrated at wake-up is therefore incorrect, as the two load-tracking
signals aren't time-synchronized (they have different last update times).

To solve this problem, this patch synchronizes the task utilization with
its previous rq before the task utilization is used in the wake-up path.
Currently the update/synchronization is done _after_ the task has been
placed by select_task_rq_fair(). The synchronization is done without
taking the rq lock, using the existing mechanism from
remove_entity_load_avg().

cc: Ingo Molnar <mingo@redhat.com>
cc: Peter Zijlstra <peterz@infradead.org>

Signed-off-by: Morten Rasmussen <morten.rasmussen@arm.com>
---
 kernel/sched/fair.c | 39 +++++++++++++++++++++++++++++++++++----
 1 file changed, 35 insertions(+), 4 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9e217eff3daf..8b6b8f9da28d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3119,13 +3119,25 @@ static inline u64 cfs_rq_last_update_time(struct cfs_rq *cfs_rq)
 #endif
 
 /*
+ * Synchronize entity load avg of dequeued entity without locking
+ * the previous rq.
+ */
+void sync_entity_load_avg(struct sched_entity *se)
+{
+	struct cfs_rq *cfs_rq = cfs_rq_of(se);
+	u64 last_update_time;
+
+	last_update_time = cfs_rq_last_update_time(cfs_rq);
+	__update_load_avg(last_update_time, cpu_of(rq_of(cfs_rq)), &se->avg, 0, 0, NULL);
+}
+
+/*
  * Task first catches up with cfs_rq, and then subtract
  * itself from the cfs_rq (task must be off the queue now).
  */
 void remove_entity_load_avg(struct sched_entity *se)
 {
 	struct cfs_rq *cfs_rq = cfs_rq_of(se);
-	u64 last_update_time;
 
 	/*
 	 * tasks cannot exit without having gone through wake_up_new_task() ->
@@ -3137,9 +3149,7 @@ void remove_entity_load_avg(struct sched_entity *se)
 	 * calls this.
 	 */
 
-	last_update_time = cfs_rq_last_update_time(cfs_rq);
-
-	__update_load_avg(last_update_time, cpu_of(rq_of(cfs_rq)), &se->avg, 0, 0, NULL);
+	sync_entity_load_avg(se);
 	atomic_long_add(se->avg.load_avg, &cfs_rq->removed_load_avg);
 	atomic_long_add(se->avg.util_avg, &cfs_rq->removed_util_avg);
 }
@@ -5377,6 +5387,24 @@ static inline int task_util(struct task_struct *p)
 	return p->se.avg.util_avg;
 }
 
+/*
+ * cpu_util_wake: Compute cpu utilization with any contributions from
+ * the waking task p removed.
+ */
+static int cpu_util_wake(int cpu, struct task_struct *p)
+{
+	unsigned long util, capacity;
+
+	/* Task has no contribution or is new */
+	if (cpu != task_cpu(p) || !p->se.avg.last_update_time)
+		return cpu_util(cpu);
+
+	capacity = capacity_orig_of(cpu);
+	util = max_t(long, cpu_rq(cpu)->cfs.avg.util_avg - task_util(p), 0);
+
+	return (util >= capacity) ? capacity : util;
+}
+
 static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
 {
 	long min_cap, max_cap;
@@ -5388,6 +5416,9 @@ static int wake_cap(struct task_struct *p, int cpu, int prev_cpu)
 	if (max_cap - min_cap < max_cap >> 3)
 		return 0;
 
+	/* Bring task utilization in sync with prev_cpu */
+	sync_entity_load_avg(&p->se);
+
 	return min_cap * 1024 < task_util(p) * capacity_margin;
 }
 
-- 
1.9.1
