From: Rik van Riel <riel@surriel.com>
To: linux-kernel@vger.kernel.org
Cc: kernel-team@fb.com, pjt@google.com, dietmar.eggemann@arm.com,
peterz@infradead.org, mingo@redhat.com, morten.rasmussen@arm.com,
tglx@linutronix.de, mgorman@techsingularity.net,
vincent.guittot@linaro.org, Rik van Riel <riel@surriel.com>
Subject: [PATCH 14/15] sched,fair: ramp up task_se_h_weight quickly
Date: Wed, 21 Aug 2019 22:17:39 -0400
Message-ID: <20190822021740.15554-15-riel@surriel.com>
In-Reply-To: <20190822021740.15554-1-riel@surriel.com>
The code in update_cfs_group() / calc_group_shares() has some logic to
quickly ramp up the load when a task has just started running in a
cgroup, in order to get sane values for the cgroup se->load.weight.
This patch adds a similar hack to task_se_h_weight().

However, THIS CODE IS WRONG, since it does not do things hierarchically.

I am wondering a few things here:
1) Should I have something similar to the logic in calc_group_shares()
   in update_cfs_rq_h_load()?
2) If so, should I also use that fast-ramp-up value for task_h_load(),
   to prevent the load balancer from thinking it is moving zero-weight
   tasks around?
3) If update_cfs_rq_h_load() is the wrong place, where should I be
   calculating a hierarchical group weight value instead?

Not-yet-signed-off-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Rik van Riel <riel@surriel.com>
---
kernel/sched/fair.c | 7 ++++++-
1 file changed, 6 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index d6c881c5c4d5..3df5d60b245f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7672,6 +7672,7 @@ static void update_cfs_rq_h_load(struct cfs_rq *cfs_rq)
 
 static unsigned long task_se_h_weight(struct sched_entity *se)
 {
+	unsigned long group_load;
 	struct cfs_rq *cfs_rq;
 
 	if (!task_se_in_cgroup(se))
@@ -7680,8 +7681,12 @@ static unsigned long task_se_h_weight(struct sched_entity *se)
 	cfs_rq = group_cfs_rq_of_parent(se);
 	update_cfs_rq_h_load(cfs_rq);
 
+	/* Ramp up quickly to keep h_weight sane. */
+	group_load = max(scale_load_down(se->parent->load.weight),
+			 cfs_rq->h_load);
+
 	/* Reduce the load.weight by the h_load of the group the task is in. */
-	return (cfs_rq->h_load * se->load.weight) >> SCHED_FIXEDPOINT_SHIFT;
+	return (group_load * se->load.weight) >> SCHED_FIXEDPOINT_SHIFT;
 }
 
 static unsigned long task_se_h_load(struct sched_entity *se)
--
2.20.1