* [PATCH] sched/pelt: ensure that *_sum is always synced with *_avg
From: Vincent Guittot @ 2021-06-01 8:58 UTC
To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
mgorman, bristot, linux-kernel
Cc: odin, Vincent Guittot
Rounding in the PELT calculation, which happens when entities are
attached to or detached from a cfs_rq, can result in situations where
util/runnable_avg is not null but util/runnable_sum is. This is normally
not possible, so we need to ensure that util/runnable_sum stays synced
with util/runnable_avg.

detach_entity_load_avg() is the last place where we don't sync
util/runnable_sum with util/runnable_avg when moving some sched_entities.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
While studying the problem raised by Odin, I came across a situation
where the cfs_rq was removed from the list because all of its *_sum
fields were null while runnable_avg was not null.
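To make the symptom concrete, below is a minimal user-space sketch
(plain C, not kernel code: the starting values are contrived assumptions
chosen to expose the rounding drift, and DIVIDER stands in for
get_pelt_divider()). It shows how subtracting *_avg and *_sum
independently can leave avg non-null while sum is null, and how
re-deriving sum from avg keeps the pair consistent:

#include <stdio.h>
#include <stdint.h>

/* Stand-in for get_pelt_divider(); the exact value is illustrative. */
#define DIVIDER 47742u

/* Same clamping idea as the kernel's sub_positive() helper. */
static uint32_t sub_positive(uint32_t val, uint32_t sub)
{
	return val > sub ? val - sub : 0;
}

int main(void)
{
	/*
	 * Contrived state after earlier rounding drift: the cfs_rq still
	 * carries avg = 1 but only a small residual sum, while the
	 * detaching entity's avg truncated down to 0 with the same sum.
	 */
	uint32_t rq_avg = 1, rq_sum = 1000;
	uint32_t se_avg = 0, se_sum = 1000;

	/* Old scheme: subtract avg and sum independently. */
	uint32_t avg = sub_positive(rq_avg, se_avg); /* 1 */
	uint32_t sum = sub_positive(rq_sum, se_sum); /* 0: avg != 0, sum == 0 */
	printf("independent: avg=%u sum=%u\n", avg, sum);

	/* Patched scheme: re-derive sum from avg so they stay synced. */
	sum = avg * DIVIDER;
	printf("resynced:    avg=%u sum=%u\n", avg, sum);
	return 0;
}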
kernel/sched/fair.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 161b92aa1c79..9b7da61ace51 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3720,11 +3720,17 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
*/
static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
+ /*
+ * cfs_rq->avg.period_contrib can be used for both cfs_rq and se.
+ * See ___update_load_avg() for details.
+ */
+ u32 divider = get_pelt_divider(&cfs_rq->avg);
+
dequeue_load_avg(cfs_rq, se);
sub_positive(&cfs_rq->avg.util_avg, se->avg.util_avg);
- sub_positive(&cfs_rq->avg.util_sum, se->avg.util_sum);
+ cfs_rq->avg.util_sum = cfs_rq->avg.util_avg * divider;
sub_positive(&cfs_rq->avg.runnable_avg, se->avg.runnable_avg);
- sub_positive(&cfs_rq->avg.runnable_sum, se->avg.runnable_sum);
+ cfs_rq->avg.runnable_sum = cfs_rq->avg.runnable_avg * divider;
add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum);
--
2.17.1
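For reference, the divider used by the fix is not defined in the diff
above; at the time of this patch, get_pelt_divider() in
kernel/sched/pelt.h boils down to the following (quoted here from memory
of the v5.13-era source as a sketch, so treat the exact expression as an
assumption):

static inline u32 get_pelt_divider(struct sched_avg *avg)
{
	return LOAD_AVG_MAX - 1024 + avg->period_contrib;
}

Since ___update_load_avg() computes *_avg as *_sum / divider, setting
*_sum = *_avg * divider re-derives a sum that is exactly consistent with
the (truncated) avg, which is what keeps the two fields in sync.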
* [tip: sched/urgent] sched/pelt: Ensure that *_sum is always synced with *_avg
From: tip-bot2 for Vincent Guittot @ 2021-06-03 11:03 UTC
To: linux-tip-commits
Cc: Vincent Guittot, Peter Zijlstra (Intel), x86, linux-kernel
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: fcf6631f3736985ec89bdd76392d3c7bfb60119f
Gitweb: https://git.kernel.org/tip/fcf6631f3736985ec89bdd76392d3c7bfb60119f
Author: Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate: Tue, 01 Jun 2021 10:58:32 +02:00
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Thu, 03 Jun 2021 12:55:55 +02:00
sched/pelt: Ensure that *_sum is always synced with *_avg
Rounding in the PELT calculation, which happens when entities are
attached to or detached from a cfs_rq, can result in situations where
util/runnable_avg is not null but util/runnable_sum is. This is normally
not possible, so we need to ensure that util/runnable_sum stays synced
with util/runnable_avg.

detach_entity_load_avg() is the last place where we don't sync
util/runnable_sum with util/runnable_avg when moving some sched_entities.
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Link: https://lkml.kernel.org/r/20210601085832.12626-1-vincent.guittot@linaro.org
---
kernel/sched/fair.c | 10 ++++++++--
1 file changed, 8 insertions(+), 2 deletions(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e7c8277..7b98fb3 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -3765,11 +3765,17 @@ static void attach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *s
*/
static void detach_entity_load_avg(struct cfs_rq *cfs_rq, struct sched_entity *se)
{
+ /*
+ * cfs_rq->avg.period_contrib can be used for both cfs_rq and se.
+ * See ___update_load_avg() for details.
+ */
+ u32 divider = get_pelt_divider(&cfs_rq->avg);
+
dequeue_load_avg(cfs_rq, se);
sub_positive(&cfs_rq->avg.util_avg, se->avg.util_avg);
- sub_positive(&cfs_rq->avg.util_sum, se->avg.util_sum);
+ cfs_rq->avg.util_sum = cfs_rq->avg.util_avg * divider;
sub_positive(&cfs_rq->avg.runnable_avg, se->avg.runnable_avg);
- sub_positive(&cfs_rq->avg.runnable_sum, se->avg.runnable_sum);
+ cfs_rq->avg.runnable_sum = cfs_rq->avg.runnable_avg * divider;
add_tg_cfs_propagate(cfs_rq, -se->avg.load_sum);