From: Vincent Guittot <vincent.guittot@linaro.org>
To: linux-kernel@vger.kernel.org, mingo@redhat.com, peterz@infradead.org
Cc: pauld@redhat.com, valentin.schneider@arm.com,
	srikar@linux.vnet.ibm.com, quentin.perret@arm.com,
	dietmar.eggemann@arm.com, Morten.Rasmussen@arm.com,
	Vincent Guittot <vincent.guittot@linaro.org>
Subject: [PATCH v2 6/8] sched/fair: use load instead of runnable load
Date: Thu,  1 Aug 2019 16:40:22 +0200	[thread overview]
Message-ID: <1564670424-26023-7-git-send-email-vincent.guittot@linaro.org> (raw)
In-Reply-To: <1564670424-26023-1-git-send-email-vincent.guittot@linaro.org>

runnable load was introduced to handle the case where blocked load
biases the load balance decision: the balancer could end up selecting
an underutilized group carrying a huge blocked load while other groups
were overloaded.
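
As an illustration of that bias, here is a toy userspace sketch (made-up
numbers and struct names, not kernel code): summing the full load makes a
nearly idle group full of sleepers look busier than a genuinely
overloaded one, whereas the runnable load ranks them correctly.

#include <stdio.h>

/* Toy model of a CPU's CFS averages (made-up values, not kernel code). */
struct toy_cpu {
    unsigned long load_avg;          /* runnable + blocked contributions */
    unsigned long runnable_load_avg; /* runnable tasks only */
};

static unsigned long group_sum(const struct toy_cpu *cpus, int nr,
                               int runnable_only)
{
    unsigned long sum = 0;

    for (int i = 0; i < nr; i++)
        sum += runnable_only ? cpus[i].runnable_load_avg : cpus[i].load_avg;
    return sum;
}

int main(void)
{
    /* Group A: mostly sleeping tasks, huge blocked load, almost idle. */
    struct toy_cpu group_a[2] = { { 900, 50 }, { 800, 0 } };
    /* Group B: fewer tasks but all of them runnable, i.e. overloaded. */
    struct toy_cpu group_b[2] = { { 600, 600 }, { 500, 500 } };

    printf("full load:     A=%lu B=%lu\n",
           group_sum(group_a, 2, 0), group_sum(group_b, 2, 0));
    printf("runnable load: A=%lu B=%lu\n",
           group_sum(group_a, 2, 1), group_sum(group_b, 2, 1));
    /*
     * With full load, A (1700) looks busier than B (1100) although A is
     * nearly idle; with runnable load, B is correctly the busiest.
     */
    return 0;
}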

The load is now only used when groups are overloaded. In that case,
it is worth being conservative and taking into account the sleeping
tasks that might wake up on the CPU.
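
Conversely, once every candidate CPU is busy, the full load is the more
conservative metric because it also counts sleepers that are likely to
wake up locally. A toy sketch of that trade-off (again made-up values
and names, not kernel code):

#include <stdio.h>

struct toy_cpu {
    const char *name;
    unsigned long load_avg;          /* includes blocked contributions */
    unsigned long runnable_load_avg; /* runnable tasks only */
};

static const struct toy_cpu *pick_least_loaded(const struct toy_cpu *cpus,
                                               int nr, int use_full_load)
{
    const struct toy_cpu *best = &cpus[0];

    for (int i = 1; i < nr; i++) {
        unsigned long cur = use_full_load ? cpus[i].load_avg
                                          : cpus[i].runnable_load_avg;
        unsigned long min = use_full_load ? best->load_avg
                                          : best->runnable_load_avg;
        if (cur < min)
            best = &cpus[i];
    }
    return best;
}

int main(void)
{
    struct toy_cpu cpus[] = {
        /*
         * cpu0: little runnable load right now, but a lot of blocked
         * load from tasks that will likely wake up here again.
         */
        { "cpu0", 950, 300 },
        /* cpu1: slightly more runnable load, hardly any sleepers. */
        { "cpu1", 420, 400 },
    };

    printf("by runnable load -> %s\n", pick_least_loaded(cpus, 2, 0)->name);
    printf("by full load     -> %s\n", pick_least_loaded(cpus, 2, 1)->name);
    /*
     * Runnable load picks cpu0 and risks piling work where sleepers
     * will soon wake; full load conservatively picks cpu1.
     */
    return 0;
}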

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 23 ++++++++++++++---------
 1 file changed, 14 insertions(+), 9 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f05f1ad..dfaf0b8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5445,6 +5445,11 @@ static unsigned long cpu_runnable_load(struct rq *rq)
 	return cfs_rq_runnable_load_avg(&rq->cfs);
 }
 
+static unsigned long cpu_load(struct rq *rq)
+{
+	return cfs_rq_load_avg(&rq->cfs);
+}
+
 static unsigned long capacity_of(int cpu)
 {
 	return cpu_rq(cpu)->cpu_capacity;
@@ -5540,7 +5545,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 	s64 this_eff_load, prev_eff_load;
 	unsigned long task_load;
 
-	this_eff_load = cpu_runnable_load(cpu_rq(this_cpu));
+	this_eff_load = cpu_load(cpu_rq(this_cpu));
 
 	if (sync) {
 		unsigned long current_load = task_h_load(current);
@@ -5558,7 +5563,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 		this_eff_load *= 100;
 	this_eff_load *= capacity_of(prev_cpu);
 
-	prev_eff_load = cpu_runnable_load(cpu_rq(prev_cpu));
+	prev_eff_load = cpu_load(cpu_rq(prev_cpu));
 	prev_eff_load -= task_load;
 	if (sched_feat(WA_BIAS))
 		prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
@@ -5646,7 +5651,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		max_spare_cap = 0;
 
 		for_each_cpu(i, sched_group_span(group)) {
-			load = cpu_runnable_load(cpu_rq(i));
+			load = cpu_load(cpu_rq(i));
 			runnable_load += load;
 
 			avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
@@ -5787,7 +5792,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 				continue;
 			}
 
-			load = cpu_runnable_load(cpu_rq(i));
+			load = cpu_load(cpu_rq(i));
 			if (load < min_load) {
 				min_load = load;
 				least_loaded_cpu = i;
@@ -8128,7 +8133,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if ((env->flags & LBF_NOHZ_STATS) && update_nohz_stats(rq, false))
 			env->flags |= LBF_NOHZ_AGAIN;
 
-		sgs->group_load += cpu_runnable_load(rq);
+		sgs->group_load += cpu_load(rq);
 		sgs->group_util += cpu_util(i);
 		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
 
@@ -8569,7 +8574,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	init_sd_lb_stats(&sds);
 
 	/*
-	 * Compute the various statistics relavent for load balancing at
+	 * Compute the various statistics relevant for load balancing at
 	 * this level.
 	 */
 	update_sd_lb_stats(env, &sds);
@@ -8748,10 +8753,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 
 		case migrate_load:
 			/*
-			 * When comparing with load imbalance, use cpu_runnable_load()
+			 * When comparing with load imbalance, use cpu_load()
 			 * which is not scaled with the CPU capacity.
 			 */
-			load = cpu_runnable_load(rq);
+			load = cpu_load(rq);
 
 			if (nr_running == 1 && load > env->imbalance &&
 			    !check_cpu_capacity(rq, env->sd))
@@ -8759,7 +8764,7 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 
 			/*
 			 * For the load comparisons with the other CPU's, consider
-			 * the cpu_runnable_load() scaled with the CPU capacity, so
+			 * the cpu_load() scaled with the CPU capacity, so
 			 * that the load can be moved away from the CPU that is
 			 * potentially running at a lower capacity.
 			 *
-- 
2.7.4

