linux-kernel.vger.kernel.org archive mirror
* [PATCH v4 00/10] sched/fair: rework the CFS load balance
@ 2019-10-18 13:26 Vincent Guittot
  2019-10-18 13:26 ` [PATCH v4 01/11] sched/fair: clean up asym packing Vincent Guittot
                   ` (12 more replies)
  0 siblings, 13 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

Several wrong task placements have been raised with the current load
balance algorithm, but their fixes are not always straightforward and
end up using biased values to force migrations. A cleanup and rework
of the load balance will help handle such use cases and enable fine
grained tuning of the scheduler's behavior for other cases.

Patch 1 has already been sent separately; it only consolidates the asym
policy in one place and helps the review of the changes in load_balance.

Patch 2 renames the sum of h_nr_running in stats.

Patch 3 removes meaningless imbalance computation to make the review of
patch 4 easier.

Patch 4 reworks the load_balance algorithm and fixes some wrong task
placements while trying to stay conservative.

Patch 5 adds the sum of nr_running to monitor non-CFS tasks and takes that
into account when pulling tasks.

Patch 6 replaces runnable_load with load now that the signal is only used
when overloaded.

Patch 7 improves the spread of tasks at the 1st scheduling level.

Patch 8 uses utilization instead of load in all steps of misfit task
path.

Patch 9 replaces runnable_load_avg by load_avg in the wake up path.

Patch 10 optimizes find_idlest_group(), which was using both runnable_load
and load. This has not been squashed with the previous patch to ease the
review.

Patch 11 reworks find_idlest_group() to follow the same steps as
find_busiest_group().

Some benchmarks results based on 8 iterations of each tests:
- small arm64 dual quad cores system

           tip/sched/core        w/ this patchset    improvement
schedpipe      53125 +/-0.18%        53443 +/-0.52%   (+0.60%)

hackbench -l (2560/#grp) -g #grp
 1 groups      1.579 +/-29.16%       1.410 +/-13.46% (+10.70%)
 4 groups      1.269 +/-9.69%        1.205 +/-3.27%   (+5.00%)
 8 groups      1.117 +/-1.51%        1.123 +/-1.27%   (+4.57%)
16 groups      1.176 +/-1.76%        1.164 +/-2.42%   (+1.07%)
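For reference, the `-l (2560/#grp)` notation above expands per group count as in this illustrative sketch (hackbench itself is not invoked here; the loop only prints the command lines):

```shell
# Print the hackbench invocation used for each group count (illustrative only)
for grp in 1 4 8 16; do
	echo "hackbench -l $((2560 / grp)) -g $grp"
done
```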

Unixbench shell8
  1 test     1963.48 +/-0.36%       1902.88 +/-0.73%    (-3.09%)
224 tests    2427.60 +/-0.20%       2469.80 +/-0.42%  (+1.74%)

- large arm64 2 nodes / 224 cores system

           tip/sched/core        w/ this patchset    improvement
schedpipe     124084 +/-1.36%       124445 +/-0.67%   (+0.29%)

hackbench -l (256000/#grp) -g #grp
  1 groups    15.305 +/-1.50%       14.001 +/-1.99%   (+8.52%)
  4 groups     5.959 +/-0.70%        5.542 +/-3.76%   (+6.99%)
 16 groups     3.120 +/-1.72%        3.253 +/-0.61%   (-4.92%)
 32 groups     2.911 +/-0.88%        2.837 +/-1.16%   (+2.54%)
 64 groups     2.805 +/-1.90%        2.716 +/-1.18%   (+3.17%)
128 groups     3.166 +/-7.71%        3.891 +/-6.77%   (+5.82%)
256 groups     3.655 +/-10.09%       3.185 +/-6.65%  (+12.87%)

dbench
  1 groups   328.176 +/-0.29%      330.217 +/-0.32%   (+0.62%)
  4 groups   930.739 +/-0.50%      957.173 +/-0.66%   (+2.84%)
 16 groups  1928.292 +/-0.36%     1978.234 +/-0.88%   (+0.92%)
 32 groups  2369.348 +/-1.72%     2454.020 +/-0.90%   (+3.57%)
 64 groups  2583.880 +/-3.39%     2618.860 +/-0.84%   (+1.35%)
128 groups  2256.406 +/-10.67%    2392.498 +/-2.13%   (+6.03%)
256 groups  1257.546 +/-3.81%     1674.684 +/-4.97%  (+33.17%)

Unixbench shell8
  1 test     6944.16 +/-0.02%    6605.82 +/-0.11%      (-4.87%)
224 tests   13499.02 +/-0.14    13637.94 +/-0.47%     (+1.03%)
lkp reported a -10% regression on shell8 (1 test) for v3, which seems to be
partially recovered on my platform with v4.

tip/sched/core sha1:
  commit 563c4f85f9f0 ("Merge branch 'sched/rt' into sched/core, to pick up -rt changes")
  
Changes since v3:
- small typo and variable ordering fixes
- add some acked/reviewed tags
- set 1 instead of load for migrate_misfit
- use nr_h_running instead of load for asym_packing
- update the optimization of find_idlest_group() and put back some
 conditions when comparing load
- rework find_idlest_group() to match find_busiest_group() behavior

Changes since v2:
- fix typo and reorder code
- some minor code fixes
- optimize find_idlest_group()

Not covered in this patchset:
- Better detection of overloaded and fully busy state, especially for cases
  when nr_running > nr CPUs.

Vincent Guittot (11):
  sched/fair: clean up asym packing
  sched/fair: rename sum_nr_running to sum_h_nr_running
  sched/fair: remove meaningless imbalance calculation
  sched/fair: rework load_balance
  sched/fair: use rq->nr_running when balancing load
  sched/fair: use load instead of runnable load in load_balance
  sched/fair: evenly spread tasks when not overloaded
  sched/fair: use utilization to select misfit task
  sched/fair: use load instead of runnable load in wakeup path
  sched/fair: optimize find_idlest_group
  sched/fair: rework find_idlest_group

 kernel/sched/fair.c | 1181 +++++++++++++++++++++++++++++----------------------
 1 file changed, 682 insertions(+), 499 deletions(-)

-- 
2.7.4


^ permalink raw reply	[flat|nested] 89+ messages in thread

* [PATCH v4 01/11] sched/fair: clean up asym packing
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Clean " tip-bot2 for Vincent Guittot
  2019-10-30 14:51   ` [PATCH v4 01/11] sched/fair: clean " Mel Gorman
  2019-10-18 13:26 ` [PATCH v4 02/11] sched/fair: rename sum_nr_running to sum_h_nr_running Vincent Guittot
                   ` (11 subsequent siblings)
  12 siblings, 2 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

Clean up asym packing to follow the default load balance behavior:
- classify the group by creating a group_asym_packing field.
- calculate the imbalance in calculate_imbalance() instead of bypassing it.

We no longer need to test the same conditions twice to detect asym packing,
and the calculation of the imbalance is consolidated in
calculate_imbalance().

There are no functional changes.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Rik van Riel <riel@surriel.com>
---
 kernel/sched/fair.c | 63 ++++++++++++++---------------------------------------
 1 file changed, 16 insertions(+), 47 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1f0a5e1..617145c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7675,6 +7675,7 @@ struct sg_lb_stats {
 	unsigned int group_weight;
 	enum group_type group_type;
 	int group_no_capacity;
+	unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
 	unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned int nr_numa_running;
@@ -8129,9 +8130,17 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	 * ASYM_PACKING needs to move all the work to the highest
 	 * prority CPUs in the group, therefore mark all groups
 	 * of lower priority than ourself as busy.
+	 *
+	 * This is primarily intended to be used at the sibling level.  Some
+	 * cores like POWER7 prefer to use lower numbered SMT threads.  In the
+	 * case of POWER7, it can move to lower SMT modes only when higher
+	 * threads are idle.  When in lower SMT modes, the threads will
+	 * perform better since they share less core resources.  Hence when we
+	 * have idle threads, we want them to be the higher ones.
 	 */
 	if (sgs->sum_nr_running &&
 	    sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
+		sgs->group_asym_packing = 1;
 		if (!sds->busiest)
 			return true;
 
@@ -8273,51 +8282,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 }
 
 /**
- * check_asym_packing - Check to see if the group is packed into the
- *			sched domain.
- *
- * This is primarily intended to used at the sibling level.  Some
- * cores like POWER7 prefer to use lower numbered SMT threads.  In the
- * case of POWER7, it can move to lower SMT modes only when higher
- * threads are idle.  When in lower SMT modes, the threads will
- * perform better since they share less core resources.  Hence when we
- * have idle threads, we want them to be the higher ones.
- *
- * This packing function is run on idle threads.  It checks to see if
- * the busiest CPU in this domain (core in the P7 case) has a higher
- * CPU number than the packing function is being run on.  Here we are
- * assuming lower CPU number will be equivalent to lower a SMT thread
- * number.
- *
- * Return: 1 when packing is required and a task should be moved to
- * this CPU.  The amount of the imbalance is returned in env->imbalance.
- *
- * @env: The load balancing environment.
- * @sds: Statistics of the sched_domain which is to be packed
- */
-static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
-{
-	int busiest_cpu;
-
-	if (!(env->sd->flags & SD_ASYM_PACKING))
-		return 0;
-
-	if (env->idle == CPU_NOT_IDLE)
-		return 0;
-
-	if (!sds->busiest)
-		return 0;
-
-	busiest_cpu = sds->busiest->asym_prefer_cpu;
-	if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
-		return 0;
-
-	env->imbalance = sds->busiest_stat.group_load;
-
-	return 1;
-}
-
-/**
  * fix_small_imbalance - Calculate the minor imbalance that exists
  *			amongst the groups of a sched_domain, during
  *			load balancing.
@@ -8401,6 +8365,11 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	local = &sds->local_stat;
 	busiest = &sds->busiest_stat;
 
+	if (busiest->group_asym_packing) {
+		env->imbalance = busiest->group_load;
+		return;
+	}
+
 	if (busiest->group_type == group_imbalanced) {
 		/*
 		 * In the group_imb case we cannot rely on group-wide averages
@@ -8505,8 +8474,8 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	busiest = &sds.busiest_stat;
 
 	/* ASYM feature bypasses nice load balance check */
-	if (check_asym_packing(env, &sds))
-		return sds.busiest;
+	if (busiest->group_asym_packing)
+		goto force_balance;
 
 	/* There is no busy sibling group to pull tasks from */
 	if (!sds.busiest || busiest->sum_nr_running == 0)
-- 
2.7.4



* [PATCH v4 02/11] sched/fair: rename sum_nr_running to sum_h_nr_running
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
  2019-10-18 13:26 ` [PATCH v4 01/11] sched/fair: clean up asym packing Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Rename sg_lb_stats::sum_nr_running " tip-bot2 for Vincent Guittot
  2019-10-30 14:53   ` [PATCH v4 02/11] sched/fair: rename sum_nr_running " Mel Gorman
  2019-10-18 13:26 ` [PATCH v4 03/11] sched/fair: remove meaningless imbalance calculation Vincent Guittot
                   ` (10 subsequent siblings)
  12 siblings, 2 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

Rename sum_nr_running to sum_h_nr_running because it effectively tracks
cfs->h_nr_running so we can use sum_nr_running to track rq->nr_running
when needed.

There are no functional changes.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Rik van Riel <riel@surriel.com>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/sched/fair.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 617145c..9a2aceb 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7670,7 +7670,7 @@ struct sg_lb_stats {
 	unsigned long load_per_task;
 	unsigned long group_capacity;
 	unsigned long group_util; /* Total utilization of the group */
-	unsigned int sum_nr_running; /* Nr tasks running in the group */
+	unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
 	unsigned int idle_cpus;
 	unsigned int group_weight;
 	enum group_type group_type;
@@ -7715,7 +7715,7 @@ static inline void init_sd_lb_stats(struct sd_lb_stats *sds)
 		.total_capacity = 0UL,
 		.busiest_stat = {
 			.avg_load = 0UL,
-			.sum_nr_running = 0,
+			.sum_h_nr_running = 0,
 			.group_type = group_other,
 		},
 	};
@@ -7906,7 +7906,7 @@ static inline int sg_imbalanced(struct sched_group *group)
 static inline bool
 group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
 {
-	if (sgs->sum_nr_running < sgs->group_weight)
+	if (sgs->sum_h_nr_running < sgs->group_weight)
 		return true;
 
 	if ((sgs->group_capacity * 100) >
@@ -7927,7 +7927,7 @@ group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
 static inline bool
 group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
 {
-	if (sgs->sum_nr_running <= sgs->group_weight)
+	if (sgs->sum_h_nr_running <= sgs->group_weight)
 		return false;
 
 	if ((sgs->group_capacity * 100) <
@@ -8019,7 +8019,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 
 		sgs->group_load += cpu_runnable_load(rq);
 		sgs->group_util += cpu_util(i);
-		sgs->sum_nr_running += rq->cfs.h_nr_running;
+		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
 
 		nr_running = rq->nr_running;
 		if (nr_running > 1)
@@ -8049,8 +8049,8 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	sgs->group_capacity = group->sgc->capacity;
 	sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
 
-	if (sgs->sum_nr_running)
-		sgs->load_per_task = sgs->group_load / sgs->sum_nr_running;
+	if (sgs->sum_h_nr_running)
+		sgs->load_per_task = sgs->group_load / sgs->sum_h_nr_running;
 
 	sgs->group_weight = group->group_weight;
 
@@ -8107,7 +8107,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	 * capable CPUs may harm throughput. Maximize throughput,
 	 * power/energy consequences are not considered.
 	 */
-	if (sgs->sum_nr_running <= sgs->group_weight &&
+	if (sgs->sum_h_nr_running <= sgs->group_weight &&
 	    group_smaller_min_cpu_capacity(sds->local, sg))
 		return false;
 
@@ -8138,7 +8138,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	 * perform better since they share less core resources.  Hence when we
 	 * have idle threads, we want them to be the higher ones.
 	 */
-	if (sgs->sum_nr_running &&
+	if (sgs->sum_h_nr_running &&
 	    sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
 		sgs->group_asym_packing = 1;
 		if (!sds->busiest)
@@ -8156,9 +8156,9 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 #ifdef CONFIG_NUMA_BALANCING
 static inline enum fbq_type fbq_classify_group(struct sg_lb_stats *sgs)
 {
-	if (sgs->sum_nr_running > sgs->nr_numa_running)
+	if (sgs->sum_h_nr_running > sgs->nr_numa_running)
 		return regular;
-	if (sgs->sum_nr_running > sgs->nr_preferred_running)
+	if (sgs->sum_h_nr_running > sgs->nr_preferred_running)
 		return remote;
 	return all;
 }
@@ -8233,7 +8233,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 		 */
 		if (prefer_sibling && sds->local &&
 		    group_has_capacity(env, local) &&
-		    (sgs->sum_nr_running > local->sum_nr_running + 1)) {
+		    (sgs->sum_h_nr_running > local->sum_h_nr_running + 1)) {
 			sgs->group_no_capacity = 1;
 			sgs->group_type = group_classify(sg, sgs);
 		}
@@ -8245,7 +8245,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 next_group:
 		/* Now, start updating sd_lb_stats */
-		sds->total_running += sgs->sum_nr_running;
+		sds->total_running += sgs->sum_h_nr_running;
 		sds->total_load += sgs->group_load;
 		sds->total_capacity += sgs->group_capacity;
 
@@ -8299,7 +8299,7 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 	local = &sds->local_stat;
 	busiest = &sds->busiest_stat;
 
-	if (!local->sum_nr_running)
+	if (!local->sum_h_nr_running)
 		local->load_per_task = cpu_avg_load_per_task(env->dst_cpu);
 	else if (busiest->load_per_task > local->load_per_task)
 		imbn = 1;
@@ -8397,7 +8397,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	 */
 	if (busiest->group_type == group_overloaded &&
 	    local->group_type   == group_overloaded) {
-		load_above_capacity = busiest->sum_nr_running * SCHED_CAPACITY_SCALE;
+		load_above_capacity = busiest->sum_h_nr_running * SCHED_CAPACITY_SCALE;
 		if (load_above_capacity > busiest->group_capacity) {
 			load_above_capacity -= busiest->group_capacity;
 			load_above_capacity *= scale_load_down(NICE_0_LOAD);
@@ -8478,7 +8478,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 		goto force_balance;
 
 	/* There is no busy sibling group to pull tasks from */
-	if (!sds.busiest || busiest->sum_nr_running == 0)
+	if (!sds.busiest || busiest->sum_h_nr_running == 0)
 		goto out_balanced;
 
 	/* XXX broken for overlapping NUMA groups */
-- 
2.7.4



* [PATCH v4 03/11] sched/fair: remove meaningless imbalance calculation
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
  2019-10-18 13:26 ` [PATCH v4 01/11] sched/fair: clean up asym packing Vincent Guittot
  2019-10-18 13:26 ` [PATCH v4 02/11] sched/fair: rename sum_nr_running to sum_h_nr_running Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Remove " tip-bot2 for Vincent Guittot
  2019-10-18 13:26 ` [PATCH v4 04/11] sched/fair: rework load_balance Vincent Guittot
                   ` (9 subsequent siblings)
  12 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

Clean up load_balance() and remove meaningless calculations and fields
before adding the new algorithm.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Rik van Riel <riel@surriel.com>
---
 kernel/sched/fair.c | 105 +---------------------------------------------------
 1 file changed, 1 insertion(+), 104 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9a2aceb..e004841 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5390,18 +5390,6 @@ static unsigned long capacity_of(int cpu)
 	return cpu_rq(cpu)->cpu_capacity;
 }
 
-static unsigned long cpu_avg_load_per_task(int cpu)
-{
-	struct rq *rq = cpu_rq(cpu);
-	unsigned long nr_running = READ_ONCE(rq->cfs.h_nr_running);
-	unsigned long load_avg = cpu_runnable_load(rq);
-
-	if (nr_running)
-		return load_avg / nr_running;
-
-	return 0;
-}
-
 static void record_wakee(struct task_struct *p)
 {
 	/*
@@ -7667,7 +7655,6 @@ static unsigned long task_h_load(struct task_struct *p)
 struct sg_lb_stats {
 	unsigned long avg_load; /*Avg load across the CPUs of the group */
 	unsigned long group_load; /* Total load over the CPUs of the group */
-	unsigned long load_per_task;
 	unsigned long group_capacity;
 	unsigned long group_util; /* Total utilization of the group */
 	unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
@@ -8049,9 +8036,6 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	sgs->group_capacity = group->sgc->capacity;
 	sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
 
-	if (sgs->sum_h_nr_running)
-		sgs->load_per_task = sgs->group_load / sgs->sum_h_nr_running;
-
 	sgs->group_weight = group->group_weight;
 
 	sgs->group_no_capacity = group_is_overloaded(env, sgs);
@@ -8282,76 +8266,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 }
 
 /**
- * fix_small_imbalance - Calculate the minor imbalance that exists
- *			amongst the groups of a sched_domain, during
- *			load balancing.
- * @env: The load balancing environment.
- * @sds: Statistics of the sched_domain whose imbalance is to be calculated.
- */
-static inline
-void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
-{
-	unsigned long tmp, capa_now = 0, capa_move = 0;
-	unsigned int imbn = 2;
-	unsigned long scaled_busy_load_per_task;
-	struct sg_lb_stats *local, *busiest;
-
-	local = &sds->local_stat;
-	busiest = &sds->busiest_stat;
-
-	if (!local->sum_h_nr_running)
-		local->load_per_task = cpu_avg_load_per_task(env->dst_cpu);
-	else if (busiest->load_per_task > local->load_per_task)
-		imbn = 1;
-
-	scaled_busy_load_per_task =
-		(busiest->load_per_task * SCHED_CAPACITY_SCALE) /
-		busiest->group_capacity;
-
-	if (busiest->avg_load + scaled_busy_load_per_task >=
-	    local->avg_load + (scaled_busy_load_per_task * imbn)) {
-		env->imbalance = busiest->load_per_task;
-		return;
-	}
-
-	/*
-	 * OK, we don't have enough imbalance to justify moving tasks,
-	 * however we may be able to increase total CPU capacity used by
-	 * moving them.
-	 */
-
-	capa_now += busiest->group_capacity *
-			min(busiest->load_per_task, busiest->avg_load);
-	capa_now += local->group_capacity *
-			min(local->load_per_task, local->avg_load);
-	capa_now /= SCHED_CAPACITY_SCALE;
-
-	/* Amount of load we'd subtract */
-	if (busiest->avg_load > scaled_busy_load_per_task) {
-		capa_move += busiest->group_capacity *
-			    min(busiest->load_per_task,
-				busiest->avg_load - scaled_busy_load_per_task);
-	}
-
-	/* Amount of load we'd add */
-	if (busiest->avg_load * busiest->group_capacity <
-	    busiest->load_per_task * SCHED_CAPACITY_SCALE) {
-		tmp = (busiest->avg_load * busiest->group_capacity) /
-		      local->group_capacity;
-	} else {
-		tmp = (busiest->load_per_task * SCHED_CAPACITY_SCALE) /
-		      local->group_capacity;
-	}
-	capa_move += local->group_capacity *
-		    min(local->load_per_task, local->avg_load + tmp);
-	capa_move /= SCHED_CAPACITY_SCALE;
-
-	/* Move if we gain throughput */
-	if (capa_move > capa_now)
-		env->imbalance = busiest->load_per_task;
-}
-
-/**
  * calculate_imbalance - Calculate the amount of imbalance present within the
  *			 groups of a given sched_domain during load balance.
  * @env: load balance environment
@@ -8370,15 +8284,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		return;
 	}
 
-	if (busiest->group_type == group_imbalanced) {
-		/*
-		 * In the group_imb case we cannot rely on group-wide averages
-		 * to ensure CPU-load equilibrium, look at wider averages. XXX
-		 */
-		busiest->load_per_task =
-			min(busiest->load_per_task, sds->avg_load);
-	}
-
 	/*
 	 * Avg load of busiest sg can be less and avg load of local sg can
 	 * be greater than avg load across all sgs of sd because avg load
@@ -8389,7 +8294,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	    (busiest->avg_load <= sds->avg_load ||
 	     local->avg_load >= sds->avg_load)) {
 		env->imbalance = 0;
-		return fix_small_imbalance(env, sds);
+		return;
 	}
 
 	/*
@@ -8427,14 +8332,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 				       busiest->group_misfit_task_load);
 	}
 
-	/*
-	 * if *imbalance is less than the average load per runnable task
-	 * there is no guarantee that any tasks will be moved so we'll have
-	 * a think about bumping its value to force at least one task to be
-	 * moved
-	 */
-	if (env->imbalance < busiest->load_per_task)
-		return fix_small_imbalance(env, sds);
 }
 
 /******* find_busiest_group() helpers end here *********************/
-- 
2.7.4



* [PATCH v4 04/11] sched/fair: rework load_balance
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
                   ` (2 preceding siblings ...)
  2019-10-18 13:26 ` [PATCH v4 03/11] sched/fair: remove meaningless imbalance calculation Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Rework load_balance() tip-bot2 for Vincent Guittot
  2019-10-30 15:45   ` [PATCH v4 04/11] sched/fair: rework load_balance Mel Gorman
  2019-10-18 13:26 ` [PATCH v4 05/11] sched/fair: use rq->nr_running when balancing load Vincent Guittot
                   ` (8 subsequent siblings)
  12 siblings, 2 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

The load_balance algorithm contains some heuristics which have become
meaningless since the rework of the scheduler's metrics like the
introduction of PELT.

Furthermore, load is an ill-suited metric for solving certain task
placement imbalance scenarios. For instance, in the presence of idle CPUs,
we should simply try to get at least one task per CPU, whereas the current
load-based algorithm can actually leave idle CPUs alone simply because the
load is somewhat balanced. The current algorithm ends up creating virtual
and meaningless values like avg_load_per_task, or tweaking the state of a
group to make it overloaded when it is not, in order to try to migrate
tasks.

load_balance should better qualify the imbalance of the group and clearly
define what has to be moved to fix this imbalance.

The type of sched_group has been extended to better reflect the type of
imbalance. We now have :
	group_has_spare
	group_fully_busy
	group_misfit_task
	group_asym_packing
	group_imbalanced
	group_overloaded

Based on the type of sched_group, load_balance now sets what it wants to
move in order to fix the imbalance. It can be some load as before, but also
some utilization, a number of tasks or a type of task:
	migrate_task
	migrate_util
	migrate_load
	migrate_misfit

This new load_balance algorithm fixes several pending wrong task
placements:
- the 1 task per CPU case on asymmetric systems
- the case of CFS tasks preempted by tasks of other classes
- the case of tasks not evenly spread on groups with spare capacity

Also, the load balance decisions have been consolidated in the 3 functions
below after removing the few bypasses and hacks of the current code:
- update_sd_pick_busiest() selects the busiest sched_group.
- find_busiest_group() checks if there is an imbalance between the local
  and busiest groups.
- calculate_imbalance() decides what has to be moved.

Finally, the now unused field total_running of struct sd_lb_stats has been
removed.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 611 ++++++++++++++++++++++++++++++++++------------------
 1 file changed, 402 insertions(+), 209 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e004841..5ae5281 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7068,11 +7068,26 @@ static unsigned long __read_mostly max_load_balance_interval = HZ/10;
 
 enum fbq_type { regular, remote, all };
 
+/*
+ * group_type describes the group of CPUs at the moment of the load balance.
+ * The enum is ordered by pulling priority, with the group with lowest priority
+ * first so the group_type can be simply compared when selecting the busiest
+ * group. see update_sd_pick_busiest().
+ */
 enum group_type {
-	group_other = 0,
+	group_has_spare = 0,
+	group_fully_busy,
 	group_misfit_task,
+	group_asym_packing,
 	group_imbalanced,
-	group_overloaded,
+	group_overloaded
+};
+
+enum migration_type {
+	migrate_load = 0,
+	migrate_util,
+	migrate_task,
+	migrate_misfit
 };
 
 #define LBF_ALL_PINNED	0x01
@@ -7105,7 +7120,7 @@ struct lb_env {
 	unsigned int		loop_max;
 
 	enum fbq_type		fbq_type;
-	enum group_type		src_grp_type;
+	enum migration_type	migration_type;
 	struct list_head	tasks;
 };
 
@@ -7328,7 +7343,7 @@ static struct task_struct *detach_one_task(struct lb_env *env)
 static const unsigned int sched_nr_migrate_break = 32;
 
 /*
- * detach_tasks() -- tries to detach up to imbalance runnable load from
+ * detach_tasks() -- tries to detach up to imbalance load/util/tasks from
  * busiest_rq, as part of a balancing operation within domain "sd".
  *
  * Returns number of detached tasks if successful and 0 otherwise.
@@ -7336,8 +7351,8 @@ static const unsigned int sched_nr_migrate_break = 32;
 static int detach_tasks(struct lb_env *env)
 {
 	struct list_head *tasks = &env->src_rq->cfs_tasks;
+	unsigned long util, load;
 	struct task_struct *p;
-	unsigned long load;
 	int detached = 0;
 
 	lockdep_assert_held(&env->src_rq->lock);
@@ -7370,19 +7385,51 @@ static int detach_tasks(struct lb_env *env)
 		if (!can_migrate_task(p, env))
 			goto next;
 
-		load = task_h_load(p);
+		switch (env->migration_type) {
+		case migrate_load:
+			load = task_h_load(p);
 
-		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
-			goto next;
+			if (sched_feat(LB_MIN) &&
+			    load < 16 && !env->sd->nr_balance_failed)
+				goto next;
 
-		if ((load / 2) > env->imbalance)
-			goto next;
+			if ((load / 2) > env->imbalance)
+				goto next;
+
+			env->imbalance -= load;
+			break;
+
+		case migrate_util:
+			util = task_util_est(p);
+
+			if (util > env->imbalance)
+				goto next;
+
+			env->imbalance -= util;
+			break;
+
+		case migrate_task:
+			env->imbalance--;
+			break;
+
+		case migrate_misfit:
+			load = task_h_load(p);
+
+			/*
+			 * load of misfit task might decrease a bit since it has
+			 * been recorded. Be conservative in the condition.
+			 */
+			if (load / 2 < env->imbalance)
+				goto next;
+
+			env->imbalance = 0;
+			break;
+		}
 
 		detach_task(p, env);
 		list_add(&p->se.group_node, &env->tasks);
 
 		detached++;
-		env->imbalance -= load;
 
 #ifdef CONFIG_PREEMPTION
 		/*
@@ -7396,7 +7443,7 @@ static int detach_tasks(struct lb_env *env)
 
 		/*
 		 * We only want to steal up to the prescribed amount of
-		 * runnable load.
+		 * load/util/tasks.
 		 */
 		if (env->imbalance <= 0)
 			break;
@@ -7661,7 +7708,6 @@ struct sg_lb_stats {
 	unsigned int idle_cpus;
 	unsigned int group_weight;
 	enum group_type group_type;
-	int group_no_capacity;
 	unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
 	unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
 #ifdef CONFIG_NUMA_BALANCING
@@ -7677,10 +7723,10 @@ struct sg_lb_stats {
 struct sd_lb_stats {
 	struct sched_group *busiest;	/* Busiest group in this sd */
 	struct sched_group *local;	/* Local group in this sd */
-	unsigned long total_running;
 	unsigned long total_load;	/* Total load of all groups in sd */
 	unsigned long total_capacity;	/* Total capacity of all groups in sd */
 	unsigned long avg_load;	/* Average load across all groups in sd */
+	unsigned int prefer_sibling; /* tasks should go to sibling first */
 
 	struct sg_lb_stats busiest_stat;/* Statistics of the busiest group */
 	struct sg_lb_stats local_stat;	/* Statistics of the local group */
@@ -7691,19 +7737,18 @@ static inline void init_sd_lb_stats(struct sd_lb_stats *sds)
 	/*
 	 * Skimp on the clearing to avoid duplicate work. We can avoid clearing
 	 * local_stat because update_sg_lb_stats() does a full clear/assignment.
-	 * We must however clear busiest_stat::avg_load because
-	 * update_sd_pick_busiest() reads this before assignment.
+	 * We must however set busiest_stat::group_type and
+	 * busiest_stat::idle_cpus to the worst busiest group because
+	 * update_sd_pick_busiest() reads these before assignment.
 	 */
 	*sds = (struct sd_lb_stats){
 		.busiest = NULL,
 		.local = NULL,
-		.total_running = 0UL,
 		.total_load = 0UL,
 		.total_capacity = 0UL,
 		.busiest_stat = {
-			.avg_load = 0UL,
-			.sum_h_nr_running = 0,
-			.group_type = group_other,
+			.idle_cpus = UINT_MAX,
+			.group_type = group_has_spare,
 		},
 	};
 }
@@ -7945,19 +7990,26 @@ group_smaller_max_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
 }
 
 static inline enum
-group_type group_classify(struct sched_group *group,
+group_type group_classify(struct lb_env *env,
+			  struct sched_group *group,
 			  struct sg_lb_stats *sgs)
 {
-	if (sgs->group_no_capacity)
+	if (group_is_overloaded(env, sgs))
 		return group_overloaded;
 
 	if (sg_imbalanced(group))
 		return group_imbalanced;
 
+	if (sgs->group_asym_packing)
+		return group_asym_packing;
+
 	if (sgs->group_misfit_task_load)
 		return group_misfit_task;
 
-	return group_other;
+	if (!group_has_capacity(env, sgs))
+		return group_fully_busy;
+
+	return group_has_spare;
 }
 
 static bool update_nohz_stats(struct rq *rq, bool force)
@@ -7994,10 +8046,12 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 				      struct sg_lb_stats *sgs,
 				      int *sg_status)
 {
-	int i, nr_running;
+	int i, nr_running, local_group;
 
 	memset(sgs, 0, sizeof(*sgs));
 
+	local_group = cpumask_test_cpu(env->dst_cpu, sched_group_span(group));
+
 	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
 		struct rq *rq = cpu_rq(i);
 
@@ -8022,9 +8076,16 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		/*
 		 * No need to call idle_cpu() if nr_running is not 0
 		 */
-		if (!nr_running && idle_cpu(i))
+		if (!nr_running && idle_cpu(i)) {
 			sgs->idle_cpus++;
+			/* Idle cpu can't have misfit task */
+			continue;
+		}
+
+		if (local_group)
+			continue;
 
+		/* Check for a misfit task on the cpu */
 		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
 		    sgs->group_misfit_task_load < rq->misfit_task_load) {
 			sgs->group_misfit_task_load = rq->misfit_task_load;
@@ -8032,14 +8093,24 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		}
 	}
 
-	/* Adjust by relative CPU capacity of the group */
+	/* Check if dst cpu is idle and preferred to this group */
+	if (env->sd->flags & SD_ASYM_PACKING &&
+	    env->idle != CPU_NOT_IDLE &&
+	    sgs->sum_h_nr_running &&
+	    sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
+		sgs->group_asym_packing = 1;
+	}
+
 	sgs->group_capacity = group->sgc->capacity;
-	sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
 
 	sgs->group_weight = group->group_weight;
 
-	sgs->group_no_capacity = group_is_overloaded(env, sgs);
-	sgs->group_type = group_classify(group, sgs);
+	sgs->group_type = group_classify(env, group, sgs);
+
+	/* Computing avg_load makes sense only when group is overloaded */
+	if (sgs->group_type == group_overloaded)
+		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
+				sgs->group_capacity;
 }
 
 /**
@@ -8062,6 +8133,10 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 {
 	struct sg_lb_stats *busiest = &sds->busiest_stat;
 
+	/* Make sure that there is at least one task to pull */
+	if (!sgs->sum_h_nr_running)
+		return false;
+
 	/*
 	 * Don't try to pull misfit tasks we can't help.
 	 * We can use max_capacity here as reduction in capacity on some
@@ -8070,7 +8145,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	 */
 	if (sgs->group_type == group_misfit_task &&
 	    (!group_smaller_max_cpu_capacity(sg, sds->local) ||
-	     !group_has_capacity(env, &sds->local_stat)))
+	     sds->local_stat.group_type != group_has_spare))
 		return false;
 
 	if (sgs->group_type > busiest->group_type)
@@ -8079,62 +8154,80 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	if (sgs->group_type < busiest->group_type)
 		return false;
 
-	if (sgs->avg_load <= busiest->avg_load)
-		return false;
-
-	if (!(env->sd->flags & SD_ASYM_CPUCAPACITY))
-		goto asym_packing;
-
 	/*
-	 * Candidate sg has no more than one task per CPU and
-	 * has higher per-CPU capacity. Migrating tasks to less
-	 * capable CPUs may harm throughput. Maximize throughput,
-	 * power/energy consequences are not considered.
+	 * The candidate and the current busiest group are the same type of
+	 * group. Let's check which one is busiest according to the type.
 	 */
-	if (sgs->sum_h_nr_running <= sgs->group_weight &&
-	    group_smaller_min_cpu_capacity(sds->local, sg))
-		return false;
 
-	/*
-	 * If we have more than one misfit sg go with the biggest misfit.
-	 */
-	if (sgs->group_type == group_misfit_task &&
-	    sgs->group_misfit_task_load < busiest->group_misfit_task_load)
+	switch (sgs->group_type) {
+	case group_overloaded:
+		/* Select the overloaded group with highest avg_load. */
+		if (sgs->avg_load <= busiest->avg_load)
+			return false;
+		break;
+
+	case group_imbalanced:
+		/*
+		 * Select the 1st imbalanced group as we don't have any way to
+		 * prefer one over another.
+		 */
 		return false;
 
-asym_packing:
-	/* This is the busiest node in its class. */
-	if (!(env->sd->flags & SD_ASYM_PACKING))
-		return true;
+	case group_asym_packing:
+		/* Prefer to move from lowest priority CPU's work */
+		if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
+			return false;
+		break;
 
-	/* No ASYM_PACKING if target CPU is already busy */
-	if (env->idle == CPU_NOT_IDLE)
-		return true;
-	/*
-	 * ASYM_PACKING needs to move all the work to the highest
-	 * prority CPUs in the group, therefore mark all groups
-	 * of lower priority than ourself as busy.
-	 *
-	 * This is primarily intended to used at the sibling level.  Some
-	 * cores like POWER7 prefer to use lower numbered SMT threads.  In the
-	 * case of POWER7, it can move to lower SMT modes only when higher
-	 * threads are idle.  When in lower SMT modes, the threads will
-	 * perform better since they share less core resources.  Hence when we
-	 * have idle threads, we want them to be the higher ones.
-	 */
-	if (sgs->sum_h_nr_running &&
-	    sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
-		sgs->group_asym_packing = 1;
-		if (!sds->busiest)
-			return true;
+	case group_misfit_task:
+		/*
+		 * If we have more than one misfit sg go with the biggest
+		 * misfit.
+		 */
+		if (sgs->group_misfit_task_load < busiest->group_misfit_task_load)
+			return false;
+		break;
 
-		/* Prefer to move from lowest priority CPU's work */
-		if (sched_asym_prefer(sds->busiest->asym_prefer_cpu,
-				      sg->asym_prefer_cpu))
-			return true;
+	case group_fully_busy:
+		/*
+		 * Select the fully busy group with highest avg_load. In
+		 * theory, there is no need to pull tasks from such a
+		 * group because tasks have all the compute capacity they need,
+		 * but we can still improve the overall throughput by reducing
+		 * contention when accessing shared HW resources.
+		 *
+		 * XXX for now avg_load is not computed and always 0 so we
+		 * select the 1st one.
+		 */
+		if (sgs->avg_load <= busiest->avg_load)
+			return false;
+		break;
+
+	case group_has_spare:
+		/*
+		 * Select the non-overloaded group with the lowest
+		 * number of idle CPUs. We could also compare the spare
+		 * capacity, which is more stable, but a group can end
+		 * up with less spare capacity yet more idle CPUs,
+		 * which means fewer opportunities to pull tasks.
+		 */
+		if (sgs->idle_cpus >= busiest->idle_cpus)
+			return false;
+		break;
 	}
 
-	return false;
+	/*
+	 * Candidate sg has no more than one task per CPU and has higher
+	 * per-CPU capacity. Migrating tasks to less capable CPUs may harm
+	 * throughput. Maximize throughput, power/energy consequences are not
+	 * considered.
+	 */
+	if ((env->sd->flags & SD_ASYM_CPUCAPACITY) &&
+	    (sgs->group_type <= group_fully_busy) &&
+	    (group_smaller_min_cpu_capacity(sds->local, sg)))
+		return false;
+
+	return true;
 }
 
 #ifdef CONFIG_NUMA_BALANCING
@@ -8172,13 +8265,13 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
  * @env: The load balancing environment.
  * @sds: variable to hold the statistics for this sched_domain.
  */
+
 static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
 {
 	struct sched_domain *child = env->sd->child;
 	struct sched_group *sg = env->sd->groups;
 	struct sg_lb_stats *local = &sds->local_stat;
 	struct sg_lb_stats tmp_sgs;
-	bool prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
 	int sg_status = 0;
 
 #ifdef CONFIG_NO_HZ_COMMON
@@ -8205,22 +8298,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 		if (local_group)
 			goto next_group;
 
-		/*
-		 * In case the child domain prefers tasks go to siblings
-		 * first, lower the sg capacity so that we'll try
-		 * and move all the excess tasks away. We lower the capacity
-		 * of a group only if the local group has the capacity to fit
-		 * these excess tasks. The extra check prevents the case where
-		 * you always pull from the heaviest group when it is already
-		 * under-utilized (possible with a large weight task outweighs
-		 * the tasks on the system).
-		 */
-		if (prefer_sibling && sds->local &&
-		    group_has_capacity(env, local) &&
-		    (sgs->sum_h_nr_running > local->sum_h_nr_running + 1)) {
-			sgs->group_no_capacity = 1;
-			sgs->group_type = group_classify(sg, sgs);
-		}
 
 		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
 			sds->busiest = sg;
@@ -8229,13 +8306,15 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 next_group:
 		/* Now, start updating sd_lb_stats */
-		sds->total_running += sgs->sum_h_nr_running;
 		sds->total_load += sgs->group_load;
 		sds->total_capacity += sgs->group_capacity;
 
 		sg = sg->next;
 	} while (sg != env->sd->groups);
 
+	/* Tag domain that child domain prefers tasks go to siblings first */
+	sds->prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
+
 #ifdef CONFIG_NO_HZ_COMMON
 	if ((env->flags & LBF_NOHZ_AGAIN) &&
 	    cpumask_subset(nohz.idle_cpus_mask, sched_domain_span(env->sd))) {
@@ -8273,69 +8352,149 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
  */
 static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 {
-	unsigned long max_pull, load_above_capacity = ~0UL;
 	struct sg_lb_stats *local, *busiest;
 
 	local = &sds->local_stat;
 	busiest = &sds->busiest_stat;
 
-	if (busiest->group_asym_packing) {
-		env->imbalance = busiest->group_load;
+	if (busiest->group_type == group_misfit_task) {
+		/* Set imbalance to allow misfit task to be balanced. */
+		env->migration_type = migrate_misfit;
+		env->imbalance = busiest->group_misfit_task_load;
+		return;
+	}
+
+	if (busiest->group_type == group_asym_packing) {
+		/*
+		 * In case of asym capacity, we will try to migrate all load to
+		 * the preferred CPU.
+		 */
+		env->migration_type = migrate_task;
+		env->imbalance = busiest->sum_h_nr_running;
+		return;
+	}
+
+	if (busiest->group_type == group_imbalanced) {
+		/*
+		 * In the group_imb case we cannot rely on group-wide averages
+		 * to ensure CPU-load equilibrium, try to move any task to fix
+		 * the imbalance. The next load balance will take care of
+		 * balancing back the system.
+		 */
+		env->migration_type = migrate_task;
+		env->imbalance = 1;
 		return;
 	}
 
 	/*
-	 * Avg load of busiest sg can be less and avg load of local sg can
-	 * be greater than avg load across all sgs of sd because avg load
-	 * factors in sg capacity and sgs with smaller group_type are
-	 * skipped when updating the busiest sg:
+	 * Try to use spare capacity of local group without overloading it or
+	 * emptying busiest.
 	 */
-	if (busiest->group_type != group_misfit_task &&
-	    (busiest->avg_load <= sds->avg_load ||
-	     local->avg_load >= sds->avg_load)) {
-		env->imbalance = 0;
+	if (local->group_type == group_has_spare) {
+		if (busiest->group_type > group_fully_busy) {
+			/*
+			 * If busiest is overloaded, try to fill spare
+			 * capacity. This might end up creating spare capacity
+			 * in busiest or busiest still being overloaded but
+			 * there is no simple way to directly compute the
+			 * amount of load to migrate in order to balance the
+			 * system.
+			 */
+			env->migration_type = migrate_util;
+			env->imbalance = max(local->group_capacity, local->group_util) -
+					 local->group_util;
+
+			/*
+			 * In some cases, the group's utilization is max or even
+			 * higher than capacity because of migrations but the
+			 * local CPU is (newly) idle. There is at least one
+			 * waiting task in this overloaded busiest group. Let's
+			 * try to pull it.
+			 */
+			if (env->idle != CPU_NOT_IDLE && env->imbalance == 0) {
+				env->migration_type = migrate_task;
+				env->imbalance = 1;
+			}
+
+			return;
+		}
+
+		if (busiest->group_weight == 1 || sds->prefer_sibling) {
+			unsigned int nr_diff = busiest->sum_h_nr_running;
+			/*
+			 * When prefer sibling, evenly spread running tasks on
+			 * groups.
+			 */
+			env->migration_type = migrate_task;
+			lsub_positive(&nr_diff, local->sum_h_nr_running);
+			env->imbalance = nr_diff >> 1;
+			return;
+		}
+
+		/*
+		 * If there is no overload, we just want to even the number of
+		 * idle cpus.
+		 */
+		env->migration_type = migrate_task;
+		env->imbalance = max_t(long, 0, (local->idle_cpus -
+						 busiest->idle_cpus) >> 1);
 		return;
 	}
 
 	/*
-	 * If there aren't any idle CPUs, avoid creating some.
+	 * Local is fully busy but has to take more load to relieve the
+	 * busiest group
 	 */
-	if (busiest->group_type == group_overloaded &&
-	    local->group_type   == group_overloaded) {
-		load_above_capacity = busiest->sum_h_nr_running * SCHED_CAPACITY_SCALE;
-		if (load_above_capacity > busiest->group_capacity) {
-			load_above_capacity -= busiest->group_capacity;
-			load_above_capacity *= scale_load_down(NICE_0_LOAD);
-			load_above_capacity /= busiest->group_capacity;
-		} else
-			load_above_capacity = ~0UL;
+	if (local->group_type < group_overloaded) {
+		/*
+		 * Local will become overloaded so the avg_load metrics are
+		 * finally needed.
+		 */
+
+		local->avg_load = (local->group_load * SCHED_CAPACITY_SCALE) /
+				  local->group_capacity;
+
+		sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
+				sds->total_capacity;
 	}
 
 	/*
-	 * We're trying to get all the CPUs to the average_load, so we don't
-	 * want to push ourselves above the average load, nor do we wish to
-	 * reduce the max loaded CPU below the average load. At the same time,
-	 * we also don't want to reduce the group load below the group
-	 * capacity. Thus we look for the minimum possible imbalance.
+	 * Both groups are or will become overloaded and we're trying to get all
+	 * the CPUs to the average_load, so we don't want to push ourselves
+	 * above the average load, nor do we wish to reduce the max loaded CPU
+	 * below the average load. At the same time, we also don't want to
+	 * reduce the group load below the group capacity. Thus we look for
+	 * the minimum possible imbalance.
 	 */
-	max_pull = min(busiest->avg_load - sds->avg_load, load_above_capacity);
-
-	/* How much load to actually move to equalise the imbalance */
+	env->migration_type = migrate_load;
 	env->imbalance = min(
-		max_pull * busiest->group_capacity,
+		(busiest->avg_load - sds->avg_load) * busiest->group_capacity,
 		(sds->avg_load - local->avg_load) * local->group_capacity
 	) / SCHED_CAPACITY_SCALE;
-
-	/* Boost imbalance to allow misfit task to be balanced. */
-	if (busiest->group_type == group_misfit_task) {
-		env->imbalance = max_t(long, env->imbalance,
-				       busiest->group_misfit_task_load);
-	}
-
 }
 
 /******* find_busiest_group() helpers end here *********************/
 
+/*
+ * Decision matrix according to the local and busiest group type
+ *
+ * busiest \ local has_spare fully_busy misfit asym imbalanced overloaded
+ * has_spare        nr_idle   balanced   N/A    N/A  balanced   balanced
+ * fully_busy       nr_idle   nr_idle    N/A    N/A  balanced   balanced
+ * misfit_task      force     N/A        N/A    N/A  force      force
+ * asym_packing     force     force      N/A    N/A  force      force
+ * imbalanced       force     force      N/A    N/A  force      force
+ * overloaded       force     force      N/A    N/A  force      avg_load
+ *
+ * N/A :      Not Applicable because already filtered while updating
+ *            statistics.
+ * balanced : The system is balanced for these 2 groups.
+ * force :    Calculate the imbalance as load migration is probably needed.
+ * avg_load : Only if imbalance is significant enough.
+ * nr_idle :  dst_cpu is not busy and the number of idle cpus is quite
+ *            different between the groups.
+ */
+
 /**
  * find_busiest_group - Returns the busiest group within the sched_domain
  * if there is an imbalance.
@@ -8370,17 +8529,17 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	local = &sds.local_stat;
 	busiest = &sds.busiest_stat;
 
-	/* ASYM feature bypasses nice load balance check */
-	if (busiest->group_asym_packing)
-		goto force_balance;
-
 	/* There is no busy sibling group to pull tasks from */
-	if (!sds.busiest || busiest->sum_h_nr_running == 0)
+	if (!sds.busiest)
 		goto out_balanced;
 
-	/* XXX broken for overlapping NUMA groups */
-	sds.avg_load = (SCHED_CAPACITY_SCALE * sds.total_load)
-						/ sds.total_capacity;
+	/* Misfit tasks should be dealt with regardless of the avg load */
+	if (busiest->group_type == group_misfit_task)
+		goto force_balance;
+
+	/* ASYM feature bypasses nice load balance check */
+	if (busiest->group_type == group_asym_packing)
+		goto force_balance;
 
 	/*
 	 * If the busiest group is imbalanced the below checks don't
@@ -8391,55 +8550,64 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 		goto force_balance;
 
 	/*
-	 * When dst_cpu is idle, prevent SMP nice and/or asymmetric group
-	 * capacities from resulting in underutilization due to avg_load.
-	 */
-	if (env->idle != CPU_NOT_IDLE && group_has_capacity(env, local) &&
-	    busiest->group_no_capacity)
-		goto force_balance;
-
-	/* Misfit tasks should be dealt with regardless of the avg load */
-	if (busiest->group_type == group_misfit_task)
-		goto force_balance;
-
-	/*
 	 * If the local group is busier than the selected busiest group
 	 * don't try and pull any tasks.
 	 */
-	if (local->avg_load >= busiest->avg_load)
+	if (local->group_type > busiest->group_type)
 		goto out_balanced;
 
 	/*
-	 * Don't pull any tasks if this group is already above the domain
-	 * average load.
+	 * When groups are overloaded, use the avg_load to ensure fairness
+	 * between tasks.
 	 */
-	if (local->avg_load >= sds.avg_load)
-		goto out_balanced;
+	if (local->group_type == group_overloaded) {
+		/*
+		 * If the local group is more loaded than the selected
+		 * busiest group don't try and pull any tasks.
+		 */
+		if (local->avg_load >= busiest->avg_load)
+			goto out_balanced;
+
+		/* XXX broken for overlapping NUMA groups */
+		sds.avg_load = (sds.total_load * SCHED_CAPACITY_SCALE) /
+				sds.total_capacity;
 
-	if (env->idle == CPU_IDLE) {
 		/*
-		 * This CPU is idle. If the busiest group is not overloaded
-		 * and there is no imbalance between this and busiest group
-		 * wrt idle CPUs, it is balanced. The imbalance becomes
-		 * significant if the diff is greater than 1 otherwise we
-		 * might end up to just move the imbalance on another group
+		 * Don't pull any tasks if this group is already above the
+		 * domain average load.
 		 */
-		if ((busiest->group_type != group_overloaded) &&
-				(local->idle_cpus <= (busiest->idle_cpus + 1)))
+		if (local->avg_load >= sds.avg_load)
 			goto out_balanced;
-	} else {
+
 		/*
-		 * In the CPU_NEWLY_IDLE, CPU_NOT_IDLE cases, use
-		 * imbalance_pct to be conservative.
+		 * If the busiest group is more loaded, use imbalance_pct to be
+		 * conservative.
 		 */
 		if (100 * busiest->avg_load <=
 				env->sd->imbalance_pct * local->avg_load)
 			goto out_balanced;
 	}
 
+	/* Try to move all excess tasks to child's sibling domain */
+	if (sds.prefer_sibling && local->group_type == group_has_spare &&
+	    busiest->sum_h_nr_running > local->sum_h_nr_running + 1)
+		goto force_balance;
+
+	if (busiest->group_type != group_overloaded &&
+	     (env->idle == CPU_NOT_IDLE ||
+	      local->idle_cpus <= (busiest->idle_cpus + 1)))
+		/*
+		 * If the busiest group is not overloaded
+		 * and there is no imbalance between this and busiest group
+		 * wrt idle CPUs, it is balanced. The imbalance
+		 * becomes significant if the diff is greater than 1; otherwise
+		 * we might end up just moving the imbalance to another
+		 * group.
+		 */
+		goto out_balanced;
+
 force_balance:
 	/* Looks like there is an imbalance. Compute it */
-	env->src_grp_type = busiest->group_type;
 	calculate_imbalance(env, &sds);
 	return env->imbalance ? sds.busiest : NULL;
 
@@ -8455,11 +8623,13 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 				     struct sched_group *group)
 {
 	struct rq *busiest = NULL, *rq;
-	unsigned long busiest_load = 0, busiest_capacity = 1;
+	unsigned long busiest_util = 0, busiest_load = 0, busiest_capacity = 1;
+	unsigned int busiest_nr = 0;
 	int i;
 
 	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
-		unsigned long capacity, load;
+		unsigned long capacity, load, util;
+		unsigned int nr_running;
 		enum fbq_type rt;
 
 		rq = cpu_rq(i);
@@ -8487,20 +8657,8 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		if (rt > env->fbq_type)
 			continue;
 
-		/*
-		 * For ASYM_CPUCAPACITY domains with misfit tasks we simply
-		 * seek the "biggest" misfit task.
-		 */
-		if (env->src_grp_type == group_misfit_task) {
-			if (rq->misfit_task_load > busiest_load) {
-				busiest_load = rq->misfit_task_load;
-				busiest = rq;
-			}
-
-			continue;
-		}
-
 		capacity = capacity_of(i);
+		nr_running = rq->cfs.h_nr_running;
 
 		/*
 		 * For ASYM_CPUCAPACITY domains, don't pick a CPU that could
@@ -8510,35 +8668,70 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		 */
 		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
 		    capacity_of(env->dst_cpu) < capacity &&
-		    rq->nr_running == 1)
+		    nr_running == 1)
 			continue;
 
-		load = cpu_runnable_load(rq);
+		switch (env->migration_type) {
+		case migrate_load:
+			/*
+			 * When comparing with load imbalance, use
+			 * cpu_runnable_load() which is not scaled with the CPU
+			 * capacity.
+			 */
+			load = cpu_runnable_load(rq);
 
-		/*
-		 * When comparing with imbalance, use cpu_runnable_load()
-		 * which is not scaled with the CPU capacity.
-		 */
+			if (nr_running == 1 && load > env->imbalance &&
+			    !check_cpu_capacity(rq, env->sd))
+				break;
 
-		if (rq->nr_running == 1 && load > env->imbalance &&
-		    !check_cpu_capacity(rq, env->sd))
-			continue;
+			/*
+			 * For the load comparisons with the other CPU's,
+			 * consider the cpu_runnable_load() scaled with the CPU
+			 * capacity, so that the load can be moved away from
+			 * the CPU that is potentially running at a lower
+			 * capacity.
+			 *
+			 * Thus we're looking for max(load_i / capacity_i),
+			 * crosswise multiplication to rid ourselves of the
+			 * division works out to:
+			 * load_i * capacity_j > load_j * capacity_i;
+			 * where j is our previous maximum.
+			 */
+			if (load * busiest_capacity > busiest_load * capacity) {
+				busiest_load = load;
+				busiest_capacity = capacity;
+				busiest = rq;
+			}
+			break;
+
+		case migrate_util:
+			util = cpu_util(cpu_of(rq));
+
+			if (busiest_util < util) {
+				busiest_util = util;
+				busiest = rq;
+			}
+			break;
+
+		case migrate_task:
+			if (busiest_nr < nr_running) {
+				busiest_nr = nr_running;
+				busiest = rq;
+			}
+			break;
+
+		case migrate_misfit:
+			/*
+			 * For ASYM_CPUCAPACITY domains with misfit tasks we
+			 * simply seek the "biggest" misfit task.
+			 */
+			if (rq->misfit_task_load > busiest_load) {
+				busiest_load = rq->misfit_task_load;
+				busiest = rq;
+			}
+
+			break;
 
-		/*
-		 * For the load comparisons with the other CPU's, consider
-		 * the cpu_runnable_load() scaled with the CPU capacity, so
-		 * that the load can be moved away from the CPU that is
-		 * potentially running at a lower capacity.
-		 *
-		 * Thus we're looking for max(load_i / capacity_i), crosswise
-		 * multiplication to rid ourselves of the division works out
-		 * to: load_i * capacity_j > load_j * capacity_i;  where j is
-		 * our previous maximum.
-		 */
-		if (load * busiest_capacity > busiest_load * capacity) {
-			busiest_load = load;
-			busiest_capacity = capacity;
-			busiest = rq;
 		}
 	}
 
@@ -8584,7 +8777,7 @@ voluntary_active_balance(struct lb_env *env)
 			return 1;
 	}
 
-	if (env->src_grp_type == group_misfit_task)
+	if (env->migration_type == migrate_misfit)
 		return 1;
 
 	return 0;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 89+ messages in thread

* [PATCH v4 05/11] sched/fair: use rq->nr_running when balancing load
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
                   ` (3 preceding siblings ...)
  2019-10-18 13:26 ` [PATCH v4 04/11] sched/fair: rework load_balance Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Use " tip-bot2 for Vincent Guittot
  2019-10-30 15:54   ` [PATCH v4 05/11] sched/fair: use " Mel Gorman
  2019-10-18 13:26 ` [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance Vincent Guittot
                   ` (7 subsequent siblings)
  12 siblings, 2 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

CFS load_balance only takes care of CFS tasks, whereas CPUs can be used by
other scheduling classes. Typically, a CFS task preempted by an RT or
deadline task will not get a chance to be pulled to another CPU because
load_balance doesn't take into account tasks from other classes.
Add the sum of nr_running to the statistics and use it to detect such
a situation.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5ae5281..e09fe12b 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7704,6 +7704,7 @@ struct sg_lb_stats {
 	unsigned long group_load; /* Total load over the CPUs of the group */
 	unsigned long group_capacity;
 	unsigned long group_util; /* Total utilization of the group */
+	unsigned int sum_nr_running; /* Nr of tasks running in the group */
 	unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
 	unsigned int idle_cpus;
 	unsigned int group_weight;
@@ -7938,7 +7939,7 @@ static inline int sg_imbalanced(struct sched_group *group)
 static inline bool
 group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
 {
-	if (sgs->sum_h_nr_running < sgs->group_weight)
+	if (sgs->sum_nr_running < sgs->group_weight)
 		return true;
 
 	if ((sgs->group_capacity * 100) >
@@ -7959,7 +7960,7 @@ group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
 static inline bool
 group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
 {
-	if (sgs->sum_h_nr_running <= sgs->group_weight)
+	if (sgs->sum_nr_running <= sgs->group_weight)
 		return false;
 
 	if ((sgs->group_capacity * 100) <
@@ -8063,6 +8064,8 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
 
 		nr_running = rq->nr_running;
+		sgs->sum_nr_running += nr_running;
+
 		if (nr_running > 1)
 			*sg_status |= SG_OVERLOAD;
 
@@ -8420,13 +8423,13 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		}
 
 		if (busiest->group_weight == 1 || sds->prefer_sibling) {
-			unsigned int nr_diff = busiest->sum_h_nr_running;
+			unsigned int nr_diff = busiest->sum_nr_running;
 			/*
 			 * When prefer sibling, evenly spread running tasks on
 			 * groups.
 			 */
 			env->migration_type = migrate_task;
-			lsub_positive(&nr_diff, local->sum_h_nr_running);
+			lsub_positive(&nr_diff, local->sum_nr_running);
 			env->imbalance = nr_diff >> 1;
 			return;
 		}
@@ -8590,7 +8593,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 
 	/* Try to move all excess tasks to child's sibling domain */
 	if (sds.prefer_sibling && local->group_type == group_has_spare &&
-	    busiest->sum_h_nr_running > local->sum_h_nr_running + 1)
+	    busiest->sum_nr_running > local->sum_nr_running + 1)
 		goto force_balance;
 
 	if (busiest->group_type != group_overloaded &&
-- 
2.7.4


^ permalink raw reply	[flat|nested] 89+ messages in thread

* [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
                   ` (4 preceding siblings ...)
  2019-10-18 13:26 ` [PATCH v4 05/11] sched/fair: use rq->nr_running when balancing load Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Use load instead of runnable load in load_balance() tip-bot2 for Vincent Guittot
  2019-10-30 15:58   ` [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance Mel Gorman
  2019-10-18 13:26 ` [PATCH v4 07/11] sched/fair: evenly spread tasks when not overloaded Vincent Guittot
                   ` (6 subsequent siblings)
  12 siblings, 2 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

Runnable load was introduced to take into account the case where blocked
load biases the load balance decision, which was selecting underutilized
groups with huge blocked load while other groups were overloaded.

The load is now only used when groups are overloaded. In this case,
it's worth being conservative and taking into account the sleeping
tasks that might wake up on the CPU.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e09fe12b..9ac2264 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5385,6 +5385,11 @@ static unsigned long cpu_runnable_load(struct rq *rq)
 	return cfs_rq_runnable_load_avg(&rq->cfs);
 }
 
+static unsigned long cpu_load(struct rq *rq)
+{
+	return cfs_rq_load_avg(&rq->cfs);
+}
+
 static unsigned long capacity_of(int cpu)
 {
 	return cpu_rq(cpu)->cpu_capacity;
@@ -8059,7 +8064,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if ((env->flags & LBF_NOHZ_STATS) && update_nohz_stats(rq, false))
 			env->flags |= LBF_NOHZ_AGAIN;
 
-		sgs->group_load += cpu_runnable_load(rq);
+		sgs->group_load += cpu_load(rq);
 		sgs->group_util += cpu_util(i);
 		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
 
@@ -8517,7 +8522,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	init_sd_lb_stats(&sds);
 
 	/*
-	 * Compute the various statistics relavent for load balancing at
+	 * Compute the various statistics relevant for load balancing at
 	 * this level.
 	 */
 	update_sd_lb_stats(env, &sds);
@@ -8677,11 +8682,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		switch (env->migration_type) {
 		case migrate_load:
 			/*
-			 * When comparing with load imbalance, use
-			 * cpu_runnable_load() which is not scaled with the CPU
-			 * capacity.
+			 * When comparing with load imbalance, use cpu_load()
+			 * which is not scaled with the CPU capacity.
 			 */
-			load = cpu_runnable_load(rq);
+			load = cpu_load(rq);
 
 			if (nr_running == 1 && load > env->imbalance &&
 			    !check_cpu_capacity(rq, env->sd))
@@ -8689,10 +8693,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 
 			/*
 			 * For the load comparisons with the other CPU's,
-			 * consider the cpu_runnable_load() scaled with the CPU
-			 * capacity, so that the load can be moved away from
-			 * the CPU that is potentially running at a lower
-			 * capacity.
+			 * consider the cpu_load() scaled with the CPU
+			 * capacity, so that the load can be moved away
+			 * from the CPU that is potentially running at a
+			 * lower capacity.
 			 *
 			 * Thus we're looking for max(load_i / capacity_i),
 			 * crosswise multiplication to rid ourselves of the
-- 
2.7.4


^ permalink raw reply	[flat|nested] 89+ messages in thread

* [PATCH v4 07/11] sched/fair: evenly spread tasks when not overloaded
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
                   ` (5 preceding siblings ...)
  2019-10-18 13:26 ` [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Spread out tasks evenly " tip-bot2 for Vincent Guittot
  2019-10-30 16:03   ` [PATCH v4 07/11] sched/fair: evenly spread tasks " Mel Gorman
  2019-10-18 13:26 ` [PATCH v4 08/11] sched/fair: use utilization to select misfit task Vincent Guittot
                   ` (5 subsequent siblings)
  12 siblings, 2 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

When there is only 1 CPU per group, using the number of idle CPUs to
evenly spread tasks doesn't make sense and nr_running is a better metric.
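The intent of the new out_balanced checks can be sketched as a standalone predicate. This is a simplified model, not the kernel code: the field names mirror sg_lb_stats, and only the non-overloaded busiest-group case is covered.

```c
#include <assert.h>
#include <stdbool.h>

/* Toy group stats, assumed fields mirroring sg_lb_stats */
struct toy_sgs {
	unsigned int group_weight;	/* number of CPUs in the group */
	unsigned int idle_cpus;
	unsigned int sum_h_nr_running;
};

/*
 * Returns true when a non-overloaded busiest group should be left
 * alone: with more than 1 CPU per group, compare idle CPUs and
 * require a diff greater than 1 (otherwise we would just move the
 * imbalance to another group); with 1 CPU per group, counting idle
 * CPUs is meaningless and nr_running decides.
 */
static bool toy_is_balanced(const struct toy_sgs *local,
			    const struct toy_sgs *busiest)
{
	if (busiest->group_weight > 1 &&
	    local->idle_cpus <= busiest->idle_cpus + 1)
		return true;

	if (busiest->sum_h_nr_running == 1)
		return true;	/* nothing is waiting to run */

	return false;
}
```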

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9ac2264..9b8e20d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8601,18 +8601,34 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	    busiest->sum_nr_running > local->sum_nr_running + 1)
 		goto force_balance;
 
-	if (busiest->group_type != group_overloaded &&
-	     (env->idle == CPU_NOT_IDLE ||
-	      local->idle_cpus <= (busiest->idle_cpus + 1)))
-		/*
-		 * If the busiest group is not overloaded
-		 * and there is no imbalance between this and busiest group
-		 * wrt idle CPUs, it is balanced. The imbalance
-		 * becomes significant if the diff is greater than 1 otherwise
-		 * we might end up to just move the imbalance on another
-		 * group.
-		 */
-		goto out_balanced;
+	if (busiest->group_type != group_overloaded) {
+		if (env->idle == CPU_NOT_IDLE)
+			/*
+			 * If the busiest group is not overloaded (and as a
+			 * result the local one too) but this cpu is already
+			 * busy, let another idle cpu try to pull task.
+			 */
+			goto out_balanced;
+
+		if (busiest->group_weight > 1 &&
+		    local->idle_cpus <= (busiest->idle_cpus + 1))
+			/*
+			 * If the busiest group is not overloaded
+			 * and there is no imbalance between this and busiest
+			 * group wrt idle CPUs, it is balanced. The imbalance
+			 * becomes significant if the diff is greater than 1
+			 * otherwise we might end up to just move the imbalance
+			 * on another group. Of course this applies only if
+			 * there is more than 1 CPU per group.
+			 */
+			goto out_balanced;
+
+		if (busiest->sum_h_nr_running == 1)
+			/*
+			 * busiest doesn't have any tasks waiting to run
+			 */
+			goto out_balanced;
+	}
 
 force_balance:
 	/* Looks like there is an imbalance. Compute it */
-- 
2.7.4


^ permalink raw reply	[flat|nested] 89+ messages in thread

* [PATCH v4 08/11] sched/fair: use utilization to select misfit task
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
                   ` (6 preceding siblings ...)
  2019-10-18 13:26 ` [PATCH v4 07/11] sched/fair: evenly spread tasks when not overloaded Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Use " tip-bot2 for Vincent Guittot
  2019-10-18 13:26 ` [PATCH v4 09/11] sched/fair: use load instead of runnable load in wakeup path Vincent Guittot
                   ` (4 subsequent siblings)
  12 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

Utilization is used to detect a misfit task, but the load is then used
to select the task on the CPU, which can lead to selecting a small task
with a high weight instead of the task that triggered the misfit
migration.

Check that the task doesn't fit the CPU's capacity when selecting the
misfit task, instead of using the load.
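The fit check can be sketched as follows. This is a toy model: the 80% margin is illustrative (the kernel derives its margin from capacity_margin), and the capacity values in the usage below are assumptions, not kernel constants.

```c
#include <assert.h>
#include <stdbool.h>

/* Assumed margin: a task "fits" while its utilization stays under
 * 80% of the CPU's capacity. Illustrative, not a kernel constant. */
#define TOY_MARGIN_PCT	80

static bool toy_task_fits_capacity(unsigned long task_util,
				   unsigned long cpu_capacity)
{
	return task_util * 100 < cpu_capacity * TOY_MARGIN_PCT;
}

/* Detach candidate selection: skip tasks that still fit the source
 * CPU, regardless of their load/weight -- a small but heavily
 * weighted task no longer wins over the real misfit task. */
static bool toy_is_misfit(unsigned long task_util,
			  unsigned long src_capacity)
{
	return !toy_task_fits_capacity(task_util, src_capacity);
}
```

For example, on a little CPU with an assumed capacity of 446, a task with utilization 400 is a misfit candidate while a task with utilization 100 is not, whatever their respective weights.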

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/sched/fair.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 9b8e20d..670856d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7418,13 +7418,8 @@ static int detach_tasks(struct lb_env *env)
 			break;
 
 		case migrate_misfit:
-			load = task_h_load(p);
-
-			/*
-			 * load of misfit task might decrease a bit since it has
-			 * been recorded. Be conservative in the condition.
-			 */
-			if (load / 2 < env->imbalance)
+			/* This is not a misfit task */
+			if (task_fits_capacity(p, capacity_of(env->src_cpu)))
 				goto next;
 
 			env->imbalance = 0;
@@ -8368,7 +8363,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	if (busiest->group_type == group_misfit_task) {
 		/* Set imbalance to allow misfit task to be balanced. */
 		env->migration_type = migrate_misfit;
-		env->imbalance = busiest->group_misfit_task_load;
+		env->imbalance = 1;
 		return;
 	}
 
-- 
2.7.4


^ permalink raw reply	[flat|nested] 89+ messages in thread

* [PATCH v4 09/11] sched/fair: use load instead of runnable load in wakeup path
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
                   ` (7 preceding siblings ...)
  2019-10-18 13:26 ` [PATCH v4 08/11] sched/fair: use utilization to select misfit task Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Use " tip-bot2 for Vincent Guittot
  2019-10-18 13:26 ` [PATCH v4 10/11] sched/fair: optimize find_idlest_group Vincent Guittot
                   ` (3 subsequent siblings)
  12 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

Runnable load was introduced to handle the case where blocked load
biases the wake up path, which could end up selecting an overloaded
CPU with a large number of runnable tasks instead of an underutilized
CPU with a huge blocked load.

The wake up path now starts by looking for idle CPUs before comparing
runnable load, and it's worth aligning it with load_balance().

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 670856d..6203e71 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1475,7 +1475,12 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	       group_faults_cpu(ng, src_nid) * group_faults(p, dst_nid) * 4;
 }
 
-static unsigned long cpu_runnable_load(struct rq *rq);
+static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
+
+static unsigned long cpu_runnable_load(struct rq *rq)
+{
+	return cfs_rq_runnable_load_avg(&rq->cfs);
+}
 
 /* Cached statistics for all CPUs within a node */
 struct numa_stats {
@@ -5380,11 +5385,6 @@ static int sched_idle_cpu(int cpu)
 			rq->nr_running);
 }
 
-static unsigned long cpu_runnable_load(struct rq *rq)
-{
-	return cfs_rq_runnable_load_avg(&rq->cfs);
-}
-
 static unsigned long cpu_load(struct rq *rq)
 {
 	return cfs_rq_load_avg(&rq->cfs);
@@ -5485,7 +5485,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 	s64 this_eff_load, prev_eff_load;
 	unsigned long task_load;
 
-	this_eff_load = cpu_runnable_load(cpu_rq(this_cpu));
+	this_eff_load = cpu_load(cpu_rq(this_cpu));
 
 	if (sync) {
 		unsigned long current_load = task_h_load(current);
@@ -5503,7 +5503,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 		this_eff_load *= 100;
 	this_eff_load *= capacity_of(prev_cpu);
 
-	prev_eff_load = cpu_runnable_load(cpu_rq(prev_cpu));
+	prev_eff_load = cpu_load(cpu_rq(prev_cpu));
 	prev_eff_load -= task_load;
 	if (sched_feat(WA_BIAS))
 		prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
@@ -5591,7 +5591,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		max_spare_cap = 0;
 
 		for_each_cpu(i, sched_group_span(group)) {
-			load = cpu_runnable_load(cpu_rq(i));
+			load = cpu_load(cpu_rq(i));
 			runnable_load += load;
 
 			avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
@@ -5732,7 +5732,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 				continue;
 			}
 
-			load = cpu_runnable_load(cpu_rq(i));
+			load = cpu_load(cpu_rq(i));
 			if (load < min_load) {
 				min_load = load;
 				least_loaded_cpu = i;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 89+ messages in thread

* [PATCH v4 10/11] sched/fair: optimize find_idlest_group
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
                   ` (8 preceding siblings ...)
  2019-10-18 13:26 ` [PATCH v4 09/11] sched/fair: use load instead of runnable load in wakeup path Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Optimize find_idlest_group() tip-bot2 for Vincent Guittot
  2019-10-18 13:26 ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Vincent Guittot
                   ` (2 subsequent siblings)
  12 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

find_idlest_group() currently reads each CPU's load_avg in 2 different
ways. Consolidate the function to read and use load_avg only once, and
simplify the algorithm to look only for the group with the lowest
load_avg.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 50 ++++++++++++++------------------------------------
 1 file changed, 14 insertions(+), 36 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 6203e71..ed1800d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5560,16 +5560,14 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 {
 	struct sched_group *idlest = NULL, *group = sd->groups;
 	struct sched_group *most_spare_sg = NULL;
-	unsigned long min_runnable_load = ULONG_MAX;
-	unsigned long this_runnable_load = ULONG_MAX;
-	unsigned long min_avg_load = ULONG_MAX, this_avg_load = ULONG_MAX;
+	unsigned long min_load = ULONG_MAX, this_load = ULONG_MAX;
 	unsigned long most_spare = 0, this_spare = 0;
 	int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
 	unsigned long imbalance = scale_load_down(NICE_0_LOAD) *
 				(sd->imbalance_pct-100) / 100;
 
 	do {
-		unsigned long load, avg_load, runnable_load;
+		unsigned long load;
 		unsigned long spare_cap, max_spare_cap;
 		int local_group;
 		int i;
@@ -5586,15 +5584,11 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		 * Tally up the load of all CPUs in the group and find
 		 * the group containing the CPU with most spare capacity.
 		 */
-		avg_load = 0;
-		runnable_load = 0;
+		load = 0;
 		max_spare_cap = 0;
 
 		for_each_cpu(i, sched_group_span(group)) {
-			load = cpu_load(cpu_rq(i));
-			runnable_load += load;
-
-			avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
+			load += cpu_load(cpu_rq(i));
 
 			spare_cap = capacity_spare_without(i, p);
 
@@ -5603,31 +5597,15 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		}
 
 		/* Adjust by relative CPU capacity of the group */
-		avg_load = (avg_load * SCHED_CAPACITY_SCALE) /
-					group->sgc->capacity;
-		runnable_load = (runnable_load * SCHED_CAPACITY_SCALE) /
+		load = (load * SCHED_CAPACITY_SCALE) /
 					group->sgc->capacity;
 
 		if (local_group) {
-			this_runnable_load = runnable_load;
-			this_avg_load = avg_load;
+			this_load = load;
 			this_spare = max_spare_cap;
 		} else {
-			if (min_runnable_load > (runnable_load + imbalance)) {
-				/*
-				 * The runnable load is significantly smaller
-				 * so we can pick this new CPU:
-				 */
-				min_runnable_load = runnable_load;
-				min_avg_load = avg_load;
-				idlest = group;
-			} else if ((runnable_load < (min_runnable_load + imbalance)) &&
-				   (100*min_avg_load > imbalance_scale*avg_load)) {
-				/*
-				 * The runnable loads are close so take the
-				 * blocked load into account through avg_load:
-				 */
-				min_avg_load = avg_load;
+			if (load < min_load) {
+				min_load = load;
 				idlest = group;
 			}
 
@@ -5668,18 +5646,18 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 	 * local domain to be very lightly loaded relative to the remote
 	 * domains but "imbalance" skews the comparison making remote CPUs
 	 * look much more favourable. When considering cross-domain, add
-	 * imbalance to the runnable load on the remote node and consider
-	 * staying local.
+	 * imbalance to the load on the remote node and consider staying
+	 * local.
 	 */
 	if ((sd->flags & SD_NUMA) &&
-	    min_runnable_load + imbalance >= this_runnable_load)
+	     min_load + imbalance >= this_load)
 		return NULL;
 
-	if (min_runnable_load > (this_runnable_load + imbalance))
+	if (min_load >= this_load + imbalance)
 		return NULL;
 
-	if ((this_runnable_load < (min_runnable_load + imbalance)) &&
-	     (100*this_avg_load < imbalance_scale*min_avg_load))
+	if ((this_load < (min_load + imbalance)) &&
+	    (100*this_load < imbalance_scale*min_load))
 		return NULL;
 
 	return idlest;
-- 
2.7.4


^ permalink raw reply	[flat|nested] 89+ messages in thread

* [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
                   ` (9 preceding siblings ...)
  2019-10-18 13:26 ` [PATCH v4 10/11] sched/fair: optimize find_idlest_group Vincent Guittot
@ 2019-10-18 13:26 ` Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Rework find_idlest_group() tip-bot2 for Vincent Guittot
                     ` (3 more replies)
  2019-10-21  7:50 ` [PATCH v4 00/10] sched/fair: rework the CFS load balance Ingo Molnar
  2019-11-25 12:48 ` Valentin Schneider
  12 siblings, 4 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-18 13:26 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	Vincent Guittot

The slow wake up path computes per-sched_group statistics to select the
idlest group, which is quite similar to what load_balance() does when
selecting the busiest group. Rework find_idlest_group() to classify the
sched_groups and select the idlest one following the same steps as
load_balance().
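The selection between two already-classified groups can be sketched as a toy version of the update_pick_idlest() logic added below (simplified model, not the kernel code; the enum ordering mirrors group_type, where a lower value is more attractive for a waking task):

```c
#include <assert.h>

/* Mirrors the group_type ordering used by load_balance():
 * lower value == more attractive for a waking task. */
enum toy_group_type {
	toy_has_spare = 0,
	toy_fully_busy,
	toy_misfit_task,
	toy_asym_packing,
	toy_imbalanced,
	toy_overloaded,
};

struct toy_group {
	enum toy_group_type type;
	unsigned int idle_cpus;
	unsigned long avg_load;
};

/* Returns 1 when the candidate should replace the current idlest:
 * prefer the lower group_type; on a tie, break it with the metric
 * appropriate to that type. */
static int toy_pick_new_idlest(const struct toy_group *idlest,
			       const struct toy_group *cand)
{
	if (cand->type < idlest->type)
		return 1;
	if (cand->type > idlest->type)
		return 0;

	switch (cand->type) {
	case toy_has_spare:
		/* most idle CPUs wins */
		return cand->idle_cpus > idlest->idle_cpus;
	case toy_overloaded:
	case toy_fully_busy:
		/* lowest avg_load wins */
		return cand->avg_load < idlest->avg_load;
	default:
		/* other types are not used in the slow wakeup path */
		return 0;
	}
}
```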

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
 kernel/sched/fair.c | 384 ++++++++++++++++++++++++++++++++++------------------
 1 file changed, 256 insertions(+), 128 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ed1800d..fbaafae 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5541,127 +5541,9 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	return target;
 }
 
-static unsigned long cpu_util_without(int cpu, struct task_struct *p);
-
-static unsigned long capacity_spare_without(int cpu, struct task_struct *p)
-{
-	return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0);
-}
-
-/*
- * find_idlest_group finds and returns the least busy CPU group within the
- * domain.
- *
- * Assumes p is allowed on at least one CPU in sd.
- */
 static struct sched_group *
 find_idlest_group(struct sched_domain *sd, struct task_struct *p,
-		  int this_cpu, int sd_flag)
-{
-	struct sched_group *idlest = NULL, *group = sd->groups;
-	struct sched_group *most_spare_sg = NULL;
-	unsigned long min_load = ULONG_MAX, this_load = ULONG_MAX;
-	unsigned long most_spare = 0, this_spare = 0;
-	int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
-	unsigned long imbalance = scale_load_down(NICE_0_LOAD) *
-				(sd->imbalance_pct-100) / 100;
-
-	do {
-		unsigned long load;
-		unsigned long spare_cap, max_spare_cap;
-		int local_group;
-		int i;
-
-		/* Skip over this group if it has no CPUs allowed */
-		if (!cpumask_intersects(sched_group_span(group),
-					p->cpus_ptr))
-			continue;
-
-		local_group = cpumask_test_cpu(this_cpu,
-					       sched_group_span(group));
-
-		/*
-		 * Tally up the load of all CPUs in the group and find
-		 * the group containing the CPU with most spare capacity.
-		 */
-		load = 0;
-		max_spare_cap = 0;
-
-		for_each_cpu(i, sched_group_span(group)) {
-			load += cpu_load(cpu_rq(i));
-
-			spare_cap = capacity_spare_without(i, p);
-
-			if (spare_cap > max_spare_cap)
-				max_spare_cap = spare_cap;
-		}
-
-		/* Adjust by relative CPU capacity of the group */
-		load = (load * SCHED_CAPACITY_SCALE) /
-					group->sgc->capacity;
-
-		if (local_group) {
-			this_load = load;
-			this_spare = max_spare_cap;
-		} else {
-			if (load < min_load) {
-				min_load = load;
-				idlest = group;
-			}
-
-			if (most_spare < max_spare_cap) {
-				most_spare = max_spare_cap;
-				most_spare_sg = group;
-			}
-		}
-	} while (group = group->next, group != sd->groups);
-
-	/*
-	 * The cross-over point between using spare capacity or least load
-	 * is too conservative for high utilization tasks on partially
-	 * utilized systems if we require spare_capacity > task_util(p),
-	 * so we allow for some task stuffing by using
-	 * spare_capacity > task_util(p)/2.
-	 *
-	 * Spare capacity can't be used for fork because the utilization has
-	 * not been set yet, we must first select a rq to compute the initial
-	 * utilization.
-	 */
-	if (sd_flag & SD_BALANCE_FORK)
-		goto skip_spare;
-
-	if (this_spare > task_util(p) / 2 &&
-	    imbalance_scale*this_spare > 100*most_spare)
-		return NULL;
-
-	if (most_spare > task_util(p) / 2)
-		return most_spare_sg;
-
-skip_spare:
-	if (!idlest)
-		return NULL;
-
-	/*
-	 * When comparing groups across NUMA domains, it's possible for the
-	 * local domain to be very lightly loaded relative to the remote
-	 * domains but "imbalance" skews the comparison making remote CPUs
-	 * look much more favourable. When considering cross-domain, add
-	 * imbalance to the load on the remote node and consider staying
-	 * local.
-	 */
-	if ((sd->flags & SD_NUMA) &&
-	     min_load + imbalance >= this_load)
-		return NULL;
-
-	if (min_load >= this_load + imbalance)
-		return NULL;
-
-	if ((this_load < (min_load + imbalance)) &&
-	    (100*this_load < imbalance_scale*min_load))
-		return NULL;
-
-	return idlest;
-}
+		  int this_cpu, int sd_flag);
 
 /*
  * find_idlest_group_cpu - find the idlest CPU among the CPUs in the group.
@@ -5734,7 +5616,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 		return prev_cpu;
 
 	/*
-	 * We need task's util for capacity_spare_without, sync it up to
+	 * We need task's util for cpu_util_without, sync it up to
 	 * prev_cpu's last_update_time.
 	 */
 	if (!(sd_flag & SD_BALANCE_FORK))
@@ -7915,13 +7797,13 @@ static inline int sg_imbalanced(struct sched_group *group)
  * any benefit for the load balance.
  */
 static inline bool
-group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
+group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
 {
 	if (sgs->sum_nr_running < sgs->group_weight)
 		return true;
 
 	if ((sgs->group_capacity * 100) >
-			(sgs->group_util * env->sd->imbalance_pct))
+			(sgs->group_util * imbalance_pct))
 		return true;
 
 	return false;
@@ -7936,13 +7818,13 @@ group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
  *  false.
  */
 static inline bool
-group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
+group_is_overloaded(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
 {
 	if (sgs->sum_nr_running <= sgs->group_weight)
 		return false;
 
 	if ((sgs->group_capacity * 100) <
-			(sgs->group_util * env->sd->imbalance_pct))
+			(sgs->group_util * imbalance_pct))
 		return true;
 
 	return false;
@@ -7969,11 +7851,11 @@ group_smaller_max_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
 }
 
 static inline enum
-group_type group_classify(struct lb_env *env,
+group_type group_classify(unsigned int imbalance_pct,
 			  struct sched_group *group,
 			  struct sg_lb_stats *sgs)
 {
-	if (group_is_overloaded(env, sgs))
+	if (group_is_overloaded(imbalance_pct, sgs))
 		return group_overloaded;
 
 	if (sg_imbalanced(group))
@@ -7985,7 +7867,7 @@ group_type group_classify(struct lb_env *env,
 	if (sgs->group_misfit_task_load)
 		return group_misfit_task;
 
-	if (!group_has_capacity(env, sgs))
+	if (!group_has_capacity(imbalance_pct, sgs))
 		return group_fully_busy;
 
 	return group_has_spare;
@@ -8086,7 +7968,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 
 	sgs->group_weight = group->group_weight;
 
-	sgs->group_type = group_classify(env, group, sgs);
+	sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);
 
 	/* Computing avg_load makes sense only when group is overloaded */
 	if (sgs->group_type == group_overloaded)
@@ -8241,6 +8123,252 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+
+struct sg_lb_stats;
+
+/*
+ * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
+ * @sd: The sched_domain level at which to look for the idlest group.
+ * @group: sched_group whose statistics are to be updated.
+ * @sgs: variable to hold the statistics for this group.
+ */
+static inline void update_sg_wakeup_stats(struct sched_domain *sd,
+					  struct sched_group *group,
+					  struct sg_lb_stats *sgs,
+					  struct task_struct *p)
+{
+	int i, nr_running;
+
+	memset(sgs, 0, sizeof(*sgs));
+
+	for_each_cpu(i, sched_group_span(group)) {
+		struct rq *rq = cpu_rq(i);
+
+		sgs->group_load += cpu_load(rq);
+		sgs->group_util += cpu_util_without(i, p);
+		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+
+		nr_running = rq->nr_running;
+		sgs->sum_nr_running += nr_running;
+
+		/*
+		 * No need to call idle_cpu() if nr_running is not 0
+		 */
+		if (!nr_running && idle_cpu(i))
+			sgs->idle_cpus++;
+
+
+	}
+
+	/* Check if task fits in the group */
+	if (sd->flags & SD_ASYM_CPUCAPACITY &&
+	    !task_fits_capacity(p, group->sgc->max_capacity)) {
+		sgs->group_misfit_task_load = 1;
+	}
+
+	sgs->group_capacity = group->sgc->capacity;
+
+	sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
+
+	/*
+	 * Computing avg_load makes sense only when group is fully busy or
+	 * overloaded
+	 */
+	if (sgs->group_type < group_fully_busy)
+		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
+				sgs->group_capacity;
+}
+
+static bool update_pick_idlest(struct sched_group *idlest,
+			       struct sg_lb_stats *idlest_sgs,
+			       struct sched_group *group,
+			       struct sg_lb_stats *sgs)
+{
+	if (sgs->group_type < idlest_sgs->group_type)
+		return true;
+
+	if (sgs->group_type > idlest_sgs->group_type)
+		return false;
+
+	/*
+	 * The candidate and the current idlest group are the same type of
+	 * group. Let's check which one is the idlest according to the type.
+	 */
+
+	switch (sgs->group_type) {
+	case group_overloaded:
+	case group_fully_busy:
+		/* Select the group with lowest avg_load. */
+		if (idlest_sgs->avg_load <= sgs->avg_load)
+			return false;
+		break;
+
+	case group_imbalanced:
+	case group_asym_packing:
+		/* Those types are not used in the slow wakeup path */
+		return false;
+
+	case group_misfit_task:
+		/* Select group with the highest max capacity */
+		if (idlest->sgc->max_capacity >= group->sgc->max_capacity)
+			return false;
+		break;
+
+	case group_has_spare:
+		/* Select group with most idle CPUs */
+		if (idlest_sgs->idle_cpus >= sgs->idle_cpus)
+			return false;
+		break;
+	}
+
+	return true;
+}
+
+/*
+ * find_idlest_group finds and returns the least busy CPU group within the
+ * domain.
+ *
+ * Assumes p is allowed on at least one CPU in sd.
+ */
+static struct sched_group *
+find_idlest_group(struct sched_domain *sd, struct task_struct *p,
+		  int this_cpu, int sd_flag)
+{
+	struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
+	struct sg_lb_stats local_sgs, tmp_sgs;
+	struct sg_lb_stats *sgs;
+	unsigned long imbalance;
+	struct sg_lb_stats idlest_sgs = {
+			.avg_load = UINT_MAX,
+			.group_type = group_overloaded,
+	};
+
+	imbalance = scale_load_down(NICE_0_LOAD) *
+				(sd->imbalance_pct-100) / 100;
+
+	do {
+		int local_group;
+
+		/* Skip over this group if it has no CPUs allowed */
+		if (!cpumask_intersects(sched_group_span(group),
+					p->cpus_ptr))
+			continue;
+
+		local_group = cpumask_test_cpu(this_cpu,
+					       sched_group_span(group));
+
+		if (local_group) {
+			sgs = &local_sgs;
+			local = group;
+		} else {
+			sgs = &tmp_sgs;
+		}
+
+		update_sg_wakeup_stats(sd, group, sgs, p);
+
+		if (!local_group && update_pick_idlest(idlest, &idlest_sgs, group, sgs)) {
+			idlest = group;
+			idlest_sgs = *sgs;
+		}
+
+	} while (group = group->next, group != sd->groups);
+
+
+	/* There is no idlest group to push tasks to */
+	if (!idlest)
+		return NULL;
+
+	/*
+	 * If the local group is idler than the selected idlest group
+	 * don't try and push the task.
+	 */
+	if (local_sgs.group_type < idlest_sgs.group_type)
+		return NULL;
+
+	/*
+	 * If the local group is busier than the selected idlest group
+	 * try and push the task.
+	 */
+	if (local_sgs.group_type > idlest_sgs.group_type)
+		return idlest;
+
+	switch (local_sgs.group_type) {
+	case group_overloaded:
+	case group_fully_busy:
+		/*
+		 * When comparing groups across NUMA domains, it's possible for
+		 * the local domain to be very lightly loaded relative to the
+		 * remote domains but "imbalance" skews the comparison making
+		 * remote CPUs look much more favourable. When considering
+		 * cross-domain, add imbalance to the load on the remote node
+		 * and consider staying local.
+		 */
+
+		if ((sd->flags & SD_NUMA) &&
+		    ((idlest_sgs.avg_load + imbalance) >= local_sgs.avg_load))
+			return NULL;
+
+		/*
+		 * If the local group is less loaded than the selected
+		 * idlest group don't try and push any tasks.
+		 */
+		if (idlest_sgs.avg_load >= (local_sgs.avg_load + imbalance))
+			return NULL;
+
+		if (100 * local_sgs.avg_load <= sd->imbalance_pct * idlest_sgs.avg_load)
+			return NULL;
+		break;
+
+	case group_imbalanced:
+	case group_asym_packing:
+		/* Those types are not used in the slow wakeup path */
+		return NULL;
+
+	case group_misfit_task:
+		/* Select group with the highest max capacity */
+		if (local->sgc->max_capacity >= idlest->sgc->max_capacity)
+			return NULL;
+		break;
+
+	case group_has_spare:
+		if (sd->flags & SD_NUMA) {
+#ifdef CONFIG_NUMA_BALANCING
+			int idlest_cpu;
+			/*
+			 * If there is spare capacity at NUMA, try to select
+			 * the preferred node
+			 */
+			if (cpu_to_node(this_cpu) == p->numa_preferred_nid)
+				return NULL;
+
+			idlest_cpu = cpumask_first(sched_group_span(idlest));
+			if (cpu_to_node(idlest_cpu) == p->numa_preferred_nid)
+				return idlest;
+#endif
+			/*
+			 * Otherwise, keep the task on this node to stay close
+			 * to its wakeup source and improve locality. If there is
+			 * a real need of migration, periodic load balance will
+			 * take care of it.
+			 */
+			if (local_sgs.idle_cpus)
+				return NULL;
+		}
+
+		/*
+		 * Select the group with the highest number of idle CPUs. We
+		 * could also compare the utilization, which is more stable,
+		 * but a group can end up with less spare capacity yet more
+		 * idle CPUs, which means more opportunity to run tasks.
+		 */
+		if (local_sgs.idle_cpus >= idlest_sgs.idle_cpus)
+			return NULL;
+		break;
+	}
+
+	return idlest;
+}
+
 /**
  * update_sd_lb_stats - Update sched_domain's statistics for load balancing.
  * @env: The load balancing environment.
-- 
2.7.4


^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
                   ` (10 preceding siblings ...)
  2019-10-18 13:26 ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Vincent Guittot
@ 2019-10-21  7:50 ` Ingo Molnar
  2019-10-21  8:44   ` Vincent Guittot
  2019-10-30 16:24   ` Mel Gorman
  2019-11-25 12:48 ` Valentin Schneider
  12 siblings, 2 replies; 89+ messages in thread
From: Ingo Molnar @ 2019-10-21  7:50 UTC (permalink / raw)
  To: Vincent Guittot, Mel Gorman
  Cc: linux-kernel, mingo, peterz, pauld, valentin.schneider, srikar,
	quentin.perret, dietmar.eggemann, Morten.Rasmussen, hdanton,
	parth, riel


* Vincent Guittot <vincent.guittot@linaro.org> wrote:

> Several wrong task placement have been raised with the current load
> balance algorithm but their fixes are not always straight forward and
> end up with using biased values to force migrations. A cleanup and rework
> of the load balance will help to handle such UCs and enable to fine grain
> the behavior of the scheduler for other cases.
> 
> Patch 1 has already been sent separately and only consolidate asym policy
> in one place and help the review of the changes in load_balance.
> 
> Patch 2 renames the sum of h_nr_running in stats.
> 
> Patch 3 removes meaningless imbalance computation to make review of
> patch 4 easier.
> 
> Patch 4 reworks load_balance algorithm and fixes some wrong task placement
> but try to stay conservative.
> 
> Patch 5 add the sum of nr_running to monitor non cfs tasks and take that
> into account when pulling tasks.
> 
> Patch 6 replaces runnable_load by load now that the signal is only used
> when overloaded.
> 
> Patch 7 improves the spread of tasks at the 1st scheduling level.
> 
> Patch 8 uses utilization instead of load in all steps of misfit task
> path.
> 
> Patch 9 replaces runnable_load_avg by load_avg in the wake up path.
> 
> Patch 10 optimizes find_idlest_group() that was using both runnable_load
> and load. This has not been squashed with previous patch to ease the
> review.
> 
> Patch 11 reworks find_idlest_group() to follow the same steps as
> find_busiest_group()
> 
> Some benchmarks results based on 8 iterations of each tests:
> - small arm64 dual quad cores system
> 
>            tip/sched/core        w/ this patchset    improvement
> schedpipe      53125 +/-0.18%        53443 +/-0.52%   (+0.60%)
> 
> hackbench -l (2560/#grp) -g #grp
>  1 groups      1.579 +/-29.16%       1.410 +/-13.46% (+10.70%)
>  4 groups      1.269 +/-9.69%        1.205 +/-3.27%   (+5.00%)
>  8 groups      1.117 +/-1.51%        1.123 +/-1.27%   (+4.57%)
> 16 groups      1.176 +/-1.76%        1.164 +/-2.42%   (+1.07%)
> 
> Unixbench shell8
>   1 test     1963.48 +/-0.36%       1902.88 +/-0.73%    (-3.09%)
> 224 tests    2427.60 +/-0.20%       2469.80 +/-0.42%  (+1.74%)
> 
> - large arm64 2 nodes / 224 cores system
> 
>            tip/sched/core        w/ this patchset    improvement
> schedpipe     124084 +/-1.36%       124445 +/-0.67%   (+0.29%)
> 
> hackbench -l (256000/#grp) -g #grp
>   1 groups    15.305 +/-1.50%       14.001 +/-1.99%   (+8.52%)
>   4 groups     5.959 +/-0.70%        5.542 +/-3.76%   (+6.99%)
>  16 groups     3.120 +/-1.72%        3.253 +/-0.61%   (-4.92%)
>  32 groups     2.911 +/-0.88%        2.837 +/-1.16%   (+2.54%)
>  64 groups     2.805 +/-1.90%        2.716 +/-1.18%   (+3.17%)
> 128 groups     3.166 +/-7.71%        3.891 +/-6.77%   (+5.82%)
> 256 groups     3.655 +/-10.09%       3.185 +/-6.65%  (+12.87%)
> 
> dbench
>   1 groups   328.176 +/-0.29%      330.217 +/-0.32%   (+0.62%)
>   4 groups   930.739 +/-0.50%      957.173 +/-0.66%   (+2.84%)
>  16 groups  1928.292 +/-0.36%     1978.234 +/-0.88%   (+0.92%)
>  32 groups  2369.348 +/-1.72%     2454.020 +/-0.90%   (+3.57%)
>  64 groups  2583.880 +/-3.39%     2618.860 +/-0.84%   (+1.35%)
> 128 groups  2256.406 +/-10.67%    2392.498 +/-2.13%   (+6.03%)
> 256 groups  1257.546 +/-3.81%     1674.684 +/-4.97%  (+33.17%)
> 
> Unixbench shell8
>   1 test     6944.16 +/-0.02%    6605.82 +/-0.11%     (-4.87%)
> 224 tests   13499.02 +/-0.14%   13637.94 +/-0.47%     (+1.03%)
> 
> lkp reported a -10% regression on shell8 (1 test) for v3, which seems
> to be partially recovered on my platform with v4.
> 
> tip/sched/core sha1:
>   commit 563c4f85f9f0 ("Merge branch 'sched/rt' into sched/core, to pick up -rt changes")
>   
> Changes since v3:
> - small typo and variable ordering fixes
> - add some acked/reviewed tag
> - set 1 instead of load for migrate_misfit
> - use nr_h_running instead of load for asym_packing
> - update the optimization of find_idlest_group() and put back some
>   conditions when comparing load
> - rework find_idlest_group() to match find_busiest_group() behavior
> 
> Changes since v2:
> - fix typo and reorder code
> - some minor code fixes
> - optimize find_idlest_group()
> 
> Not covered in this patchset:
> - Better detection of overloaded and fully busy state, especially for cases
>   when nr_running > nr CPUs.
> 
> Vincent Guittot (11):
>   sched/fair: clean up asym packing
>   sched/fair: rename sum_nr_running to sum_h_nr_running
>   sched/fair: remove meaningless imbalance calculation
>   sched/fair: rework load_balance
>   sched/fair: use rq->nr_running when balancing load
>   sched/fair: use load instead of runnable load in load_balance
>   sched/fair: evenly spread tasks when not overloaded
>   sched/fair: use utilization to select misfit task
>   sched/fair: use load instead of runnable load in wakeup path
>   sched/fair: optimize find_idlest_group
>   sched/fair: rework find_idlest_group
> 
>  kernel/sched/fair.c | 1181 +++++++++++++++++++++++++++++----------------------
>  1 file changed, 682 insertions(+), 499 deletions(-)

Thanks, that's an excellent series!

I've queued it up in sched/core with a handful of readability edits to 
comments and changelogs.

There are some upstreaming caveats though, I expect this series to be a 
performance regression magnet:

 - load_balance() and wake-up changes invariably are such: some workloads 
   only work/scale well by accident, and if we touch the logic it might 
   flip over into a less advantageous scheduling pattern.

 - In particular the changes from balancing and waking on runnable load 
   to full load that includes blocked load *will* shift IO-intensive 
   workloads in ways that your tests don't fully capture, I believe. You 
   also made idle balancing more aggressive in essence - which might 
   reduce cache locality for some workloads.

A full run on Mel Gorman's magic scalability test-suite would be super 
useful ...

Anyway, please be on the lookout for such performance regression reports.

Also, we seem to have grown a fair amount of these TODO entries:

  kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
  kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
  kernel/sched/fair.c:     * XXX illustrate
  kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
  kernel/sched/fair.c: * can also include other factors [XXX].
  kernel/sched/fair.c: * [XXX expand on:
  kernel/sched/fair.c: * [XXX more?]
  kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
  kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
  kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */

:-)

Thanks,

	Ingo

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-21  7:50 ` [PATCH v4 00/10] sched/fair: rework the CFS load balance Ingo Molnar
@ 2019-10-21  8:44   ` Vincent Guittot
  2019-10-21 12:56     ` Phil Auld
  2019-10-24 12:38     ` Phil Auld
  2019-10-30 16:24   ` Mel Gorman
  1 sibling, 2 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-21  8:44 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Mel Gorman, linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <mingo@kernel.org> wrote:
>
>
> * Vincent Guittot <vincent.guittot@linaro.org> wrote:
>
> > [...]
>
> Thanks, that's an excellent series!
>
> I've queued it up in sched/core with a handful of readability edits to
> comments and changelogs.

Thanks

>
> There are some upstreaming caveats though, I expect this series to be a
> performance regression magnet:
>
>  - load_balance() and wake-up changes invariably are such: some workloads
>    only work/scale well by accident, and if we touch the logic it might
>    flip over into a less advantageous scheduling pattern.
>
>  - In particular the changes from balancing and waking on runnable load
>    to full load that includes blocked load *will* shift IO-intensive
>    workloads in ways that your tests don't fully capture, I believe. You
>    also made idle balancing more aggressive in essence - which might
>    reduce cache locality for some workloads.
>
> A full run on Mel Gorman's magic scalability test-suite would be super
> useful ...
>
> Anyway, please be on the lookout for such performance regression reports.

Yes, I'm monitoring the mailing list for regression reports.

>
> Also, we seem to have grown a fair amount of these TODO entries:
>
>   kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
>   kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
>   kernel/sched/fair.c:     * XXX illustrate
>   kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
>   kernel/sched/fair.c: * can also include other factors [XXX].
>   kernel/sched/fair.c: * [XXX expand on:
>   kernel/sched/fair.c: * [XXX more?]
>   kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
>   kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
>   kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */
>

I will have a look :-)

> :-)
>
> Thanks,
>
>         Ingo

^ permalink raw reply	[flat|nested] 89+ messages in thread

* [tip: sched/core] sched/fair: Use load instead of runnable load in wakeup path
  2019-10-18 13:26 ` [PATCH v4 09/11] sched/fair: use load instead of runnable load in wakeup path Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  0 siblings, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Ben Segall, Dietmar Eggemann, Juri Lelli,
	Linus Torvalds, Mel Gorman, Mike Galbraith, Morten.Rasmussen,
	Peter Zijlstra, Steven Rostedt, Thomas Gleixner, hdanton, parth,
	pauld, quentin.perret, riel, srikar, valentin.schneider,
	Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     11f10e5420f6cecac7d4823638bff040c257aba9
Gitweb:        https://git.kernel.org/tip/11f10e5420f6cecac7d4823638bff040c257aba9
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:36 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:55 +02:00

sched/fair: Use load instead of runnable load in wakeup path

Runnable load was originally introduced to take into account the case
where blocked load biases the wake-up path, which may end up selecting
an overloaded CPU with a large number of runnable tasks instead of an
underutilized CPU with a huge blocked load.

The wake-up path now starts by looking for idle CPUs before comparing
runnable load, so it's worth aligning the wake-up path with the
load_balance() logic.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: riel@surriel.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-10-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 20 ++++++++++----------
 1 file changed, 10 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 1fd6f39..b0703b4 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -1474,7 +1474,12 @@ bool should_numa_migrate_memory(struct task_struct *p, struct page * page,
 	       group_faults_cpu(ng, src_nid) * group_faults(p, dst_nid) * 4;
 }
 
-static unsigned long cpu_runnable_load(struct rq *rq);
+static inline unsigned long cfs_rq_runnable_load_avg(struct cfs_rq *cfs_rq);
+
+static unsigned long cpu_runnable_load(struct rq *rq)
+{
+	return cfs_rq_runnable_load_avg(&rq->cfs);
+}
 
 /* Cached statistics for all CPUs within a node */
 struct numa_stats {
@@ -5370,11 +5375,6 @@ static int sched_idle_cpu(int cpu)
 			rq->nr_running);
 }
 
-static unsigned long cpu_runnable_load(struct rq *rq)
-{
-	return cfs_rq_runnable_load_avg(&rq->cfs);
-}
-
 static unsigned long cpu_load(struct rq *rq)
 {
 	return cfs_rq_load_avg(&rq->cfs);
@@ -5475,7 +5475,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 	s64 this_eff_load, prev_eff_load;
 	unsigned long task_load;
 
-	this_eff_load = cpu_runnable_load(cpu_rq(this_cpu));
+	this_eff_load = cpu_load(cpu_rq(this_cpu));
 
 	if (sync) {
 		unsigned long current_load = task_h_load(current);
@@ -5493,7 +5493,7 @@ wake_affine_weight(struct sched_domain *sd, struct task_struct *p,
 		this_eff_load *= 100;
 	this_eff_load *= capacity_of(prev_cpu);
 
-	prev_eff_load = cpu_runnable_load(cpu_rq(prev_cpu));
+	prev_eff_load = cpu_load(cpu_rq(prev_cpu));
 	prev_eff_load -= task_load;
 	if (sched_feat(WA_BIAS))
 		prev_eff_load *= 100 + (sd->imbalance_pct - 100) / 2;
@@ -5581,7 +5581,7 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		max_spare_cap = 0;
 
 		for_each_cpu(i, sched_group_span(group)) {
-			load = cpu_runnable_load(cpu_rq(i));
+			load = cpu_load(cpu_rq(i));
 			runnable_load += load;
 
 			avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
@@ -5722,7 +5722,7 @@ find_idlest_group_cpu(struct sched_group *group, struct task_struct *p, int this
 				continue;
 			}
 
-			load = cpu_runnable_load(cpu_rq(i));
+			load = cpu_load(cpu_rq(i));
 			if (load < min_load) {
 				min_load = load;
 				least_loaded_cpu = i;

^ permalink raw reply	[flat|nested] 89+ messages in thread

* [tip: sched/core] sched/fair: Rework find_idlest_group()
  2019-10-18 13:26 ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  2019-10-22 16:46   ` [PATCH] sched/fair: fix rework of find_idlest_group() Vincent Guittot
                     ` (2 subsequent siblings)
  3 siblings, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Ben Segall, Dietmar Eggemann, Juri Lelli,
	Linus Torvalds, Mel Gorman, Mike Galbraith, Morten.Rasmussen,
	Peter Zijlstra, Steven Rostedt, Thomas Gleixner, hdanton, parth,
	pauld, quentin.perret, riel, srikar, valentin.schneider,
	Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     57abff067a084889b6e06137e61a3dc3458acd56
Gitweb:        https://git.kernel.org/tip/57abff067a084889b6e06137e61a3dc3458acd56
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:38 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:55 +02:00

sched/fair: Rework find_idlest_group()

The slow wake-up path computes per-sched_group statistics to select the
idlest group, which is quite similar to what load_balance() is doing
when selecting the busiest group. Rework find_idlest_group() to classify
the sched_group and select the idlest one following the same steps as
load_balance().

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: riel@surriel.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-12-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 384 ++++++++++++++++++++++++++++---------------
 1 file changed, 256 insertions(+), 128 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 95a57c7..a81c364 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5531,127 +5531,9 @@ static int wake_affine(struct sched_domain *sd, struct task_struct *p,
 	return target;
 }
 
-static unsigned long cpu_util_without(int cpu, struct task_struct *p);
-
-static unsigned long capacity_spare_without(int cpu, struct task_struct *p)
-{
-	return max_t(long, capacity_of(cpu) - cpu_util_without(cpu, p), 0);
-}
-
-/*
- * find_idlest_group finds and returns the least busy CPU group within the
- * domain.
- *
- * Assumes p is allowed on at least one CPU in sd.
- */
 static struct sched_group *
 find_idlest_group(struct sched_domain *sd, struct task_struct *p,
-		  int this_cpu, int sd_flag)
-{
-	struct sched_group *idlest = NULL, *group = sd->groups;
-	struct sched_group *most_spare_sg = NULL;
-	unsigned long min_load = ULONG_MAX, this_load = ULONG_MAX;
-	unsigned long most_spare = 0, this_spare = 0;
-	int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
-	unsigned long imbalance = scale_load_down(NICE_0_LOAD) *
-				(sd->imbalance_pct-100) / 100;
-
-	do {
-		unsigned long load;
-		unsigned long spare_cap, max_spare_cap;
-		int local_group;
-		int i;
-
-		/* Skip over this group if it has no CPUs allowed */
-		if (!cpumask_intersects(sched_group_span(group),
-					p->cpus_ptr))
-			continue;
-
-		local_group = cpumask_test_cpu(this_cpu,
-					       sched_group_span(group));
-
-		/*
-		 * Tally up the load of all CPUs in the group and find
-		 * the group containing the CPU with most spare capacity.
-		 */
-		load = 0;
-		max_spare_cap = 0;
-
-		for_each_cpu(i, sched_group_span(group)) {
-			load += cpu_load(cpu_rq(i));
-
-			spare_cap = capacity_spare_without(i, p);
-
-			if (spare_cap > max_spare_cap)
-				max_spare_cap = spare_cap;
-		}
-
-		/* Adjust by relative CPU capacity of the group */
-		load = (load * SCHED_CAPACITY_SCALE) /
-					group->sgc->capacity;
-
-		if (local_group) {
-			this_load = load;
-			this_spare = max_spare_cap;
-		} else {
-			if (load < min_load) {
-				min_load = load;
-				idlest = group;
-			}
-
-			if (most_spare < max_spare_cap) {
-				most_spare = max_spare_cap;
-				most_spare_sg = group;
-			}
-		}
-	} while (group = group->next, group != sd->groups);
-
-	/*
-	 * The cross-over point between using spare capacity or least load
-	 * is too conservative for high utilization tasks on partially
-	 * utilized systems if we require spare_capacity > task_util(p),
-	 * so we allow for some task stuffing by using
-	 * spare_capacity > task_util(p)/2.
-	 *
-	 * Spare capacity can't be used for fork because the utilization has
-	 * not been set yet, we must first select a rq to compute the initial
-	 * utilization.
-	 */
-	if (sd_flag & SD_BALANCE_FORK)
-		goto skip_spare;
-
-	if (this_spare > task_util(p) / 2 &&
-	    imbalance_scale*this_spare > 100*most_spare)
-		return NULL;
-
-	if (most_spare > task_util(p) / 2)
-		return most_spare_sg;
-
-skip_spare:
-	if (!idlest)
-		return NULL;
-
-	/*
-	 * When comparing groups across NUMA domains, it's possible for the
-	 * local domain to be very lightly loaded relative to the remote
-	 * domains but "imbalance" skews the comparison making remote CPUs
-	 * look much more favourable. When considering cross-domain, add
-	 * imbalance to the load on the remote node and consider staying
-	 * local.
-	 */
-	if ((sd->flags & SD_NUMA) &&
-	     min_load + imbalance >= this_load)
-		return NULL;
-
-	if (min_load >= this_load + imbalance)
-		return NULL;
-
-	if ((this_load < (min_load + imbalance)) &&
-	    (100*this_load < imbalance_scale*min_load))
-		return NULL;
-
-	return idlest;
-}
+		  int this_cpu, int sd_flag);
 
 /*
  * find_idlest_group_cpu - find the idlest CPU among the CPUs in the group.
@@ -5724,7 +5606,7 @@ static inline int find_idlest_cpu(struct sched_domain *sd, struct task_struct *p
 		return prev_cpu;
 
 	/*
-	 * We need task's util for capacity_spare_without, sync it up to
+	 * We need task's util for cpu_util_without, sync it up to
 	 * prev_cpu's last_update_time.
 	 */
 	if (!(sd_flag & SD_BALANCE_FORK))
@@ -7905,13 +7787,13 @@ static inline int sg_imbalanced(struct sched_group *group)
  * any benefit for the load balance.
  */
 static inline bool
-group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
+group_has_capacity(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
 {
 	if (sgs->sum_nr_running < sgs->group_weight)
 		return true;
 
 	if ((sgs->group_capacity * 100) >
-			(sgs->group_util * env->sd->imbalance_pct))
+			(sgs->group_util * imbalance_pct))
 		return true;
 
 	return false;
@@ -7926,13 +7808,13 @@ group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
  *  false.
  */
 static inline bool
-group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
+group_is_overloaded(unsigned int imbalance_pct, struct sg_lb_stats *sgs)
 {
 	if (sgs->sum_nr_running <= sgs->group_weight)
 		return false;
 
 	if ((sgs->group_capacity * 100) <
-			(sgs->group_util * env->sd->imbalance_pct))
+			(sgs->group_util * imbalance_pct))
 		return true;
 
 	return false;
@@ -7959,11 +7841,11 @@ group_smaller_max_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
 }
 
 static inline enum
-group_type group_classify(struct lb_env *env,
+group_type group_classify(unsigned int imbalance_pct,
 			  struct sched_group *group,
 			  struct sg_lb_stats *sgs)
 {
-	if (group_is_overloaded(env, sgs))
+	if (group_is_overloaded(imbalance_pct, sgs))
 		return group_overloaded;
 
 	if (sg_imbalanced(group))
@@ -7975,7 +7857,7 @@ group_type group_classify(struct lb_env *env,
 	if (sgs->group_misfit_task_load)
 		return group_misfit_task;
 
-	if (!group_has_capacity(env, sgs))
+	if (!group_has_capacity(imbalance_pct, sgs))
 		return group_fully_busy;
 
 	return group_has_spare;
@@ -8076,7 +7958,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 
 	sgs->group_weight = group->group_weight;
 
-	sgs->group_type = group_classify(env, group, sgs);
+	sgs->group_type = group_classify(env->sd->imbalance_pct, group, sgs);
 
 	/* Computing avg_load makes sense only when group is overloaded */
 	if (sgs->group_type == group_overloaded)
@@ -8231,6 +8113,252 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
 }
 #endif /* CONFIG_NUMA_BALANCING */
 
+
+struct sg_lb_stats;
+
+/*
+ * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
+ * @sd: The sched_domain level to look for the idlest group.
+ * @group: sched_group whose statistics are to be updated.
+ * @sgs: variable to hold the statistics for this group.
+ */
+static inline void update_sg_wakeup_stats(struct sched_domain *sd,
+					  struct sched_group *group,
+					  struct sg_lb_stats *sgs,
+					  struct task_struct *p)
+{
+	int i, nr_running;
+
+	memset(sgs, 0, sizeof(*sgs));
+
+	for_each_cpu(i, sched_group_span(group)) {
+		struct rq *rq = cpu_rq(i);
+
+		sgs->group_load += cpu_load(rq);
+		sgs->group_util += cpu_util_without(i, p);
+		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+
+		nr_running = rq->nr_running;
+		sgs->sum_nr_running += nr_running;
+
+		/*
+		 * No need to call idle_cpu() if nr_running is not 0
+		 */
+		if (!nr_running && idle_cpu(i))
+			sgs->idle_cpus++;
+
+
+	}
+
+	/* Check if task fits in the group */
+	if (sd->flags & SD_ASYM_CPUCAPACITY &&
+	    !task_fits_capacity(p, group->sgc->max_capacity)) {
+		sgs->group_misfit_task_load = 1;
+	}
+
+	sgs->group_capacity = group->sgc->capacity;
+
+	sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
+
+	/*
+	 * Computing avg_load makes sense only when group is fully busy or
+	 * overloaded
+	 */
+	if (sgs->group_type < group_fully_busy)
+		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
+				sgs->group_capacity;
+}
+
+static bool update_pick_idlest(struct sched_group *idlest,
+			       struct sg_lb_stats *idlest_sgs,
+			       struct sched_group *group,
+			       struct sg_lb_stats *sgs)
+{
+	if (sgs->group_type < idlest_sgs->group_type)
+		return true;
+
+	if (sgs->group_type > idlest_sgs->group_type)
+		return false;
+
+	/*
+	 * The candidate and the current idlest group are the same type of
+	 * group. Let's check which one is the idlest according to the type.
+	 */
+
+	switch (sgs->group_type) {
+	case group_overloaded:
+	case group_fully_busy:
+		/* Select the group with lowest avg_load. */
+		if (idlest_sgs->avg_load <= sgs->avg_load)
+			return false;
+		break;
+
+	case group_imbalanced:
+	case group_asym_packing:
+		/* Those types are not used in the slow wakeup path */
+		return false;
+
+	case group_misfit_task:
+		/* Select group with the highest max capacity */
+		if (idlest->sgc->max_capacity >= group->sgc->max_capacity)
+			return false;
+		break;
+
+	case group_has_spare:
+		/* Select group with most idle CPUs */
+		if (idlest_sgs->idle_cpus >= sgs->idle_cpus)
+			return false;
+		break;
+	}
+
+	return true;
+}
+
+/*
+ * find_idlest_group() finds and returns the least busy CPU group within the
+ * domain.
+ *
+ * Assumes p is allowed on at least one CPU in sd.
+ */
+static struct sched_group *
+find_idlest_group(struct sched_domain *sd, struct task_struct *p,
+		  int this_cpu, int sd_flag)
+{
+	struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
+	struct sg_lb_stats local_sgs, tmp_sgs;
+	struct sg_lb_stats *sgs;
+	unsigned long imbalance;
+	struct sg_lb_stats idlest_sgs = {
+			.avg_load = UINT_MAX,
+			.group_type = group_overloaded,
+	};
+
+	imbalance = scale_load_down(NICE_0_LOAD) *
+				(sd->imbalance_pct-100) / 100;
+
+	do {
+		int local_group;
+
+		/* Skip over this group if it has no CPUs allowed */
+		if (!cpumask_intersects(sched_group_span(group),
+					p->cpus_ptr))
+			continue;
+
+		local_group = cpumask_test_cpu(this_cpu,
+					       sched_group_span(group));
+
+		if (local_group) {
+			sgs = &local_sgs;
+			local = group;
+		} else {
+			sgs = &tmp_sgs;
+		}
+
+		update_sg_wakeup_stats(sd, group, sgs, p);
+
+		if (!local_group && update_pick_idlest(idlest, &idlest_sgs, group, sgs)) {
+			idlest = group;
+			idlest_sgs = *sgs;
+		}
+
+	} while (group = group->next, group != sd->groups);
+
+
+	/* There is no idlest group to push tasks to */
+	if (!idlest)
+		return NULL;
+
+	/*
+	 * If the local group is idler than the selected idlest group
+	 * don't try and push the task.
+	 */
+	if (local_sgs.group_type < idlest_sgs.group_type)
+		return NULL;
+
+	/*
+	 * If the local group is busier than the selected idlest group
+	 * try and push the task.
+	 */
+	if (local_sgs.group_type > idlest_sgs.group_type)
+		return idlest;
+
+	switch (local_sgs.group_type) {
+	case group_overloaded:
+	case group_fully_busy:
+		/*
+		 * When comparing groups across NUMA domains, it's possible for
+		 * the local domain to be very lightly loaded relative to the
+		 * remote domains but "imbalance" skews the comparison making
+		 * remote CPUs look much more favourable. When considering
+		 * cross-domain, add imbalance to the load on the remote node
+		 * and consider staying local.
+		 */
+
+		if ((sd->flags & SD_NUMA) &&
+		    ((idlest_sgs.avg_load + imbalance) >= local_sgs.avg_load))
+			return NULL;
+
+		/*
+		 * If the local group is less loaded than the selected
+		 * idlest group don't try and push any tasks.
+		 */
+		if (idlest_sgs.avg_load >= (local_sgs.avg_load + imbalance))
+			return NULL;
+
+		if (100 * local_sgs.avg_load <= sd->imbalance_pct * idlest_sgs.avg_load)
+			return NULL;
+		break;
+
+	case group_imbalanced:
+	case group_asym_packing:
+		/* Those types are not used in the slow wakeup path */
+		return NULL;
+
+	case group_misfit_task:
+		/* Select group with the highest max capacity */
+		if (local->sgc->max_capacity >= idlest->sgc->max_capacity)
+			return NULL;
+		break;
+
+	case group_has_spare:
+		if (sd->flags & SD_NUMA) {
+#ifdef CONFIG_NUMA_BALANCING
+			int idlest_cpu;
+			/*
+			 * If there is spare capacity at NUMA, try to select
+			 * the preferred node
+			 */
+			if (cpu_to_node(this_cpu) == p->numa_preferred_nid)
+				return NULL;
+
+			idlest_cpu = cpumask_first(sched_group_span(idlest));
+			if (cpu_to_node(idlest_cpu) == p->numa_preferred_nid)
+				return idlest;
+#endif
+			/*
+			 * Otherwise, keep the task on this node to stay
+			 * close to its wakeup source and improve locality.
+			 * If there is a real need for migration, periodic
+			 * load balance will take care of it.
+			 */
+			if (local_sgs.idle_cpus)
+				return NULL;
+		}
+
+		/*
+		 * Select the group with the highest number of idle CPUs.
+		 * We could also compare the utilization, which is more
+		 * stable, but a group may end up with less spare capacity
+		 * yet more idle CPUs, which means more opportunities to
+		 * run tasks.
+		 */
+		if (local_sgs.idle_cpus >= idlest_sgs.idle_cpus)
+			return NULL;
+		break;
+	}
+
+	return idlest;
+}
+
 /**
  * update_sd_lb_stats - Update sched_domain's statistics for load balancing.
  * @env: The load balancing environment.

^ permalink raw reply	[flat|nested] 89+ messages in thread

* [tip: sched/core] sched/fair: Optimize find_idlest_group()
  2019-10-18 13:26 ` [PATCH v4 10/11] sched/fair: optimize find_idlest_group Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  0 siblings, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Ben Segall, Dietmar Eggemann, Juri Lelli,
	Linus Torvalds, Mel Gorman, Mike Galbraith, Morten.Rasmussen,
	Peter Zijlstra, Steven Rostedt, Thomas Gleixner, hdanton, parth,
	pauld, quentin.perret, riel, srikar, valentin.schneider,
	Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     fc1273f4cefe6670d528715581c848abf64f391c
Gitweb:        https://git.kernel.org/tip/fc1273f4cefe6670d528715581c848abf64f391c
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:37 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:55 +02:00

sched/fair: Optimize find_idlest_group()

find_idlest_group() currently reads a CPU's load_avg in two different
ways.

Consolidate the function to read and use load_avg only once, and
simplify the algorithm to only look for the group with the lowest
load_avg.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: riel@surriel.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-11-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 50 ++++++++++++--------------------------------
 1 file changed, 14 insertions(+), 36 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index b0703b4..95a57c7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5550,16 +5550,14 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 {
 	struct sched_group *idlest = NULL, *group = sd->groups;
 	struct sched_group *most_spare_sg = NULL;
-	unsigned long min_runnable_load = ULONG_MAX;
-	unsigned long this_runnable_load = ULONG_MAX;
-	unsigned long min_avg_load = ULONG_MAX, this_avg_load = ULONG_MAX;
+	unsigned long min_load = ULONG_MAX, this_load = ULONG_MAX;
 	unsigned long most_spare = 0, this_spare = 0;
 	int imbalance_scale = 100 + (sd->imbalance_pct-100)/2;
 	unsigned long imbalance = scale_load_down(NICE_0_LOAD) *
 				(sd->imbalance_pct-100) / 100;
 
 	do {
-		unsigned long load, avg_load, runnable_load;
+		unsigned long load;
 		unsigned long spare_cap, max_spare_cap;
 		int local_group;
 		int i;
@@ -5576,15 +5574,11 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		 * Tally up the load of all CPUs in the group and find
 		 * the group containing the CPU with most spare capacity.
 		 */
-		avg_load = 0;
-		runnable_load = 0;
+		load = 0;
 		max_spare_cap = 0;
 
 		for_each_cpu(i, sched_group_span(group)) {
-			load = cpu_load(cpu_rq(i));
-			runnable_load += load;
-
-			avg_load += cfs_rq_load_avg(&cpu_rq(i)->cfs);
+			load += cpu_load(cpu_rq(i));
 
 			spare_cap = capacity_spare_without(i, p);
 
@@ -5593,31 +5587,15 @@ find_idlest_group(struct sched_domain *sd, struct task_struct *p,
 		}
 
 		/* Adjust by relative CPU capacity of the group */
-		avg_load = (avg_load * SCHED_CAPACITY_SCALE) /
-					group->sgc->capacity;
-		runnable_load = (runnable_load * SCHED_CAPACITY_SCALE) /
+		load = (load * SCHED_CAPACITY_SCALE) /
 					group->sgc->capacity;
 
 		if (local_group) {
-			this_runnable_load = runnable_load;
-			this_avg_load = avg_load;
+			this_load = load;
 			this_spare = max_spare_cap;
 		} else {
-			if (min_runnable_load > (runnable_load + imbalance)) {
-				/*
-				 * The runnable load is significantly smaller
-				 * so we can pick this new CPU:
-				 */
-				min_runnable_load = runnable_load;
-				min_avg_load = avg_load;
-				idlest = group;
-			} else if ((runnable_load < (min_runnable_load + imbalance)) &&
-				   (100*min_avg_load > imbalance_scale*avg_load)) {
-				/*
-				 * The runnable loads are close so take the
-				 * blocked load into account through avg_load:
-				 */
-				min_avg_load = avg_load;
+			if (load < min_load) {
+				min_load = load;
 				idlest = group;
 			}
 
@@ -5658,18 +5636,18 @@ skip_spare:
 	 * local domain to be very lightly loaded relative to the remote
 	 * domains but "imbalance" skews the comparison making remote CPUs
 	 * look much more favourable. When considering cross-domain, add
-	 * imbalance to the runnable load on the remote node and consider
-	 * staying local.
+	 * imbalance to the load on the remote node and consider staying
+	 * local.
 	 */
 	if ((sd->flags & SD_NUMA) &&
-	    min_runnable_load + imbalance >= this_runnable_load)
+	     min_load + imbalance >= this_load)
 		return NULL;
 
-	if (min_runnable_load > (this_runnable_load + imbalance))
+	if (min_load >= this_load + imbalance)
 		return NULL;
 
-	if ((this_runnable_load < (min_runnable_load + imbalance)) &&
-	     (100*this_avg_load < imbalance_scale*min_avg_load))
+	if ((this_load < (min_load + imbalance)) &&
+	    (100*this_load < imbalance_scale*min_load))
 		return NULL;
 
 	return idlest;

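As a standalone sketch of the simplified algorithm — a toy model with illustrative names, keeping only the capacity-scaled min-load search and the single imbalance-margin check, not the full set of checks in the diff above:

```c
#include <assert.h>

#define SCHED_CAPACITY_SCALE 1024

/*
 * Toy model: each group's load is the sum of its CPUs' load_avg,
 * scaled by the group's capacity, and the group with the lowest
 * scaled load wins unless it does not beat the local group by the
 * imbalance margin. Names are illustrative, not the kernel's structs.
 */
struct toy_group {
	unsigned long load;     /* sum of cpu_load() over the group's CPUs */
	unsigned long capacity; /* group->sgc->capacity */
};

static unsigned long scaled_load(const struct toy_group *g)
{
	/* Adjust by relative CPU capacity of the group */
	return g->load * SCHED_CAPACITY_SCALE / g->capacity;
}

/* Return the index of the idlest remote group, or -1 to stay local. */
static int pick_idlest(const struct toy_group *local,
		       const struct toy_group *remote, int nr,
		       unsigned long imbalance)
{
	unsigned long this_load = scaled_load(local);
	unsigned long min_load = (unsigned long)-1;
	int idlest = -1;
	int i;

	for (i = 0; i < nr; i++) {
		unsigned long load = scaled_load(&remote[i]);

		if (load < min_load) {
			min_load = load;
			idlest = i;
		}
	}

	/* Don't migrate unless the idlest group is significantly idler. */
	if (idlest < 0 || min_load >= this_load + imbalance)
		return -1;

	return idlest;
}
```

The real function additionally applies the imbalance_pct ratio test (`100*this_load < imbalance_scale*min_load`) before giving up, as shown in the diff.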

* [tip: sched/core] sched/fair: Spread out tasks evenly when not overloaded
  2019-10-18 13:26 ` [PATCH v4 07/11] sched/fair: evenly spread tasks when not overloaded Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  2019-10-30 16:03   ` [PATCH v4 07/11] sched/fair: evenly spread tasks " Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Ben Segall, Dietmar Eggemann, Juri Lelli,
	Linus Torvalds, Mel Gorman, Mike Galbraith, Morten.Rasmussen,
	Peter Zijlstra, Steven Rostedt, Thomas Gleixner, hdanton, parth,
	pauld, quentin.perret, riel, srikar, valentin.schneider,
	Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     2ab4092fc82d6001fdd9d51dbba27d04dec967e0
Gitweb:        https://git.kernel.org/tip/2ab4092fc82d6001fdd9d51dbba27d04dec967e0
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:34 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:54 +02:00

sched/fair: Spread out tasks evenly when not overloaded

When there is only one CPU per group, using idle-CPU counts to spread
tasks evenly doesn't make sense and nr_running is a better metric.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: riel@surriel.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-8-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 40 ++++++++++++++++++++++++++++------------
 1 file changed, 28 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e6a3db0..f489f60 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -8591,18 +8591,34 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	    busiest->sum_nr_running > local->sum_nr_running + 1)
 		goto force_balance;
 
-	if (busiest->group_type != group_overloaded &&
-	     (env->idle == CPU_NOT_IDLE ||
-	      local->idle_cpus <= (busiest->idle_cpus + 1)))
-		/*
-		 * If the busiest group is not overloaded
-		 * and there is no imbalance between this and busiest group
-		 * wrt. idle CPUs, it is balanced. The imbalance
-		 * becomes significant if the diff is greater than 1 otherwise
-		 * we might end up just moving the imbalance to another
-		 * group.
-		 */
-		goto out_balanced;
+	if (busiest->group_type != group_overloaded) {
+		if (env->idle == CPU_NOT_IDLE)
+			/*
+			 * If the busiest group is not overloaded (and as a
+			 * result the local one too) but this CPU is already
+			 * busy, let another idle CPU try to pull task.
+			 */
+			goto out_balanced;
+
+		if (busiest->group_weight > 1 &&
+		    local->idle_cpus <= (busiest->idle_cpus + 1))
+			/*
+			 * If the busiest group is not overloaded
+			 * and there is no imbalance between this and the
+			 * busiest group wrt idle CPUs, it is balanced. The
+			 * imbalance becomes significant if the diff is
+			 * greater than 1, otherwise we might end up just
+			 * moving the imbalance to another group. Of course
+			 * this applies only if there is more than 1 CPU
+			 * per group.
+			 */
+			goto out_balanced;
+
+		if (busiest->sum_h_nr_running == 1)
+			/*
+			 * busiest doesn't have any tasks waiting to run
+			 */
+			goto out_balanced;
+	}
 
 force_balance:
 	/* Looks like there is an imbalance. Compute it */

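Condensed into a standalone sketch — struct names and the boolean standing in for env->idle are illustrative assumptions, not the kernel's types — the three early-exit checks this patch introduces read as:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Hypothetical, condensed version of the checks added when the
 * busiest group is not overloaded.
 */
struct toy_stats {
	unsigned int group_weight;     /* CPUs per group */
	unsigned int idle_cpus;
	unsigned int sum_h_nr_running; /* CFS tasks in the group */
};

static bool should_balance(bool this_cpu_idle,
			   const struct toy_stats *local,
			   const struct toy_stats *busiest)
{
	/* This CPU is already busy: let an idle CPU do the pulling. */
	if (!this_cpu_idle)
		return false;

	/*
	 * With more than one CPU per group, balance on idle-CPU counts;
	 * a difference of 1 would just move the imbalance elsewhere.
	 */
	if (busiest->group_weight > 1 &&
	    local->idle_cpus <= busiest->idle_cpus + 1)
		return false;

	/* The busiest group has no task waiting to run. */
	if (busiest->sum_h_nr_running == 1)
		return false;

	return true;
}
```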

* [tip: sched/core] sched/fair: Use load instead of runnable load in load_balance()
  2019-10-18 13:26 ` [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  2019-10-30 15:58   ` [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Ben Segall, Dietmar Eggemann, Juri Lelli,
	Linus Torvalds, Mel Gorman, Mike Galbraith, Morten.Rasmussen,
	Peter Zijlstra, Steven Rostedt, Thomas Gleixner, hdanton, parth,
	pauld, quentin.perret, riel, srikar, valentin.schneider,
	Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     b0fb1eb4f04ae4768231b9731efb1134e22053a4
Gitweb:        https://git.kernel.org/tip/b0fb1eb4f04ae4768231b9731efb1134e22053a4
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:33 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:54 +02:00

sched/fair: Use load instead of runnable load in load_balance()

'runnable load' was originally introduced to take into account the case
where blocked load biases the load-balance decision, which would select
underutilized groups with huge blocked load while other groups were
overloaded.

Load is now only used when groups are overloaded. In this case, it's
worth being conservative and taking into account the sleeping tasks
that might wake up on the CPU.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: riel@surriel.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-7-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 24 ++++++++++++++----------
 1 file changed, 14 insertions(+), 10 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 4e7396c..e6a3db0 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5375,6 +5375,11 @@ static unsigned long cpu_runnable_load(struct rq *rq)
 	return cfs_rq_runnable_load_avg(&rq->cfs);
 }
 
+static unsigned long cpu_load(struct rq *rq)
+{
+	return cfs_rq_load_avg(&rq->cfs);
+}
+
 static unsigned long capacity_of(int cpu)
 {
 	return cpu_rq(cpu)->cpu_capacity;
@@ -8049,7 +8054,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		if ((env->flags & LBF_NOHZ_STATS) && update_nohz_stats(rq, false))
 			env->flags |= LBF_NOHZ_AGAIN;
 
-		sgs->group_load += cpu_runnable_load(rq);
+		sgs->group_load += cpu_load(rq);
 		sgs->group_util += cpu_util(i);
 		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
 
@@ -8507,7 +8512,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	init_sd_lb_stats(&sds);
 
 	/*
-	 * Compute the various statistics relavent for load balancing at
+	 * Compute the various statistics relevant for load balancing at
 	 * this level.
 	 */
 	update_sd_lb_stats(env, &sds);
@@ -8667,11 +8672,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		switch (env->migration_type) {
 		case migrate_load:
 			/*
-			 * When comparing with load imbalance, use
-			 * cpu_runnable_load() which is not scaled with the CPU
-			 * capacity.
+			 * When comparing with load imbalance, use cpu_load()
+			 * which is not scaled with the CPU capacity.
 			 */
-			load = cpu_runnable_load(rq);
+			load = cpu_load(rq);
 
 			if (nr_running == 1 && load > env->imbalance &&
 			    !check_cpu_capacity(rq, env->sd))
@@ -8679,10 +8683,10 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 
 			/*
 			 * For the load comparisons with the other CPUs,
-			 * consider the cpu_runnable_load() scaled with the CPU
-			 * capacity, so that the load can be moved away from
-			 * the CPU that is potentially running at a lower
-			 * capacity.
+			 * consider the cpu_load() scaled with the CPU
+			 * capacity, so that the load can be moved away
+			 * from the CPU that is potentially running at a
+			 * lower capacity.
 			 *
 			 * Thus we're looking for max(load_i / capacity_i),
 			 * crosswise multiplication to rid ourselves of the

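A toy model of why the two signals differ — the fields and values here are illustrative, not real PELT arithmetic: load_avg keeps the decaying contribution of recently-slept (blocked) tasks, while runnable_load_avg only sums the tasks currently queued:

```c
#include <assert.h>

/* Illustrative stand-in for the rq's CFS load signals. */
struct toy_rq {
	unsigned long runnable_load_avg; /* queued tasks only */
	unsigned long blocked_load;      /* recently-slept tasks */
};

static unsigned long cpu_runnable_load(const struct toy_rq *rq)
{
	return rq->runnable_load_avg;
}

static unsigned long cpu_load(const struct toy_rq *rq)
{
	/* load_avg = runnable contribution + blocked contribution */
	return rq->runnable_load_avg + rq->blocked_load;
}
```

A CPU with a lot of blocked load thus looks busier through cpu_load(), which is the conservative choice when groups are overloaded and sleeping tasks may wake up locally.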

* [tip: sched/core] sched/fair: Use utilization to select misfit task
  2019-10-18 13:26 ` [PATCH v4 08/11] sched/fair: use utilization to select misfit task Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  0 siblings, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Valentin Schneider, Ben Segall,
	Dietmar Eggemann, Juri Lelli, Linus Torvalds, Mel Gorman,
	Mike Galbraith, Morten.Rasmussen, Peter Zijlstra, Steven Rostedt,
	Thomas Gleixner, hdanton, parth, pauld, quentin.perret, riel,
	srikar, Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     c63be7be59de65d12ff7b4329acea99cf734d6de
Gitweb:        https://git.kernel.org/tip/c63be7be59de65d12ff7b4329acea99cf734d6de
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:35 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:54 +02:00

sched/fair: Use utilization to select misfit task

Utilization is used to detect a misfit task, but load is then used to
select the task on the CPU, which can lead to selecting a small task
with a high weight instead of the task that triggered the misfit
migration.

Check that the task can't fit the CPU's capacity when selecting the
misfit task, instead of using load.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Valentin Schneider <valentin.schneider@arm.com>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: riel@surriel.com
Cc: srikar@linux.vnet.ibm.com
Link: https://lkml.kernel.org/r/1571405198-27570-9-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 11 +++--------
 1 file changed, 3 insertions(+), 8 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index f489f60..1fd6f39 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7408,13 +7408,8 @@ static int detach_tasks(struct lb_env *env)
 			break;
 
 		case migrate_misfit:
-			load = task_h_load(p);
-
-			/*
-			 * Load of misfit task might decrease a bit since it has
-			 * been recorded. Be conservative in the condition.
-			 */
-			if (load/2 < env->imbalance)
+			/* This is not a misfit task */
+			if (task_fits_capacity(p, capacity_of(env->src_cpu)))
 				goto next;
 
 			env->imbalance = 0;
@@ -8358,7 +8353,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	if (busiest->group_type == group_misfit_task) {
 		/* Set imbalance to allow misfit tasks to be balanced. */
 		env->migration_type = migrate_misfit;
-		env->imbalance = busiest->group_misfit_task_load;
+		env->imbalance = 1;
 		return;
 	}
 

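The new condition can be sketched as follows; the 1280/1024 factor mirrors the kernel's fits_capacity() margin (roughly 25% headroom) of the era, but treat the exact constant and the standalone helper as assumptions of this sketch:

```c
#include <assert.h>
#include <stdbool.h>

/*
 * Rough sketch: a task fits a CPU if its utilization, inflated by a
 * margin, stays within the CPU's capacity.
 */
#define fits_capacity(util, cap) ((util) * 1280 < (cap) * 1024)

static bool task_fits_capacity(unsigned long task_util,
			       unsigned long cpu_capacity)
{
	return fits_capacity(task_util, cpu_capacity);
}
```

With this in place, detach_tasks() simply skips any candidate that still fits the source CPU, rather than comparing task_h_load() against the imbalance.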

* [tip: sched/core] sched/fair: Rename sg_lb_stats::sum_nr_running to sum_h_nr_running
  2019-10-18 13:26 ` [PATCH v4 02/11] sched/fair: rename sum_nr_running to sum_h_nr_running Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  2019-10-30 14:53   ` [PATCH v4 02/11] sched/fair: rename sum_nr_running " Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Valentin Schneider, Rik van Riel, Ben Segall,
	Dietmar Eggemann, Juri Lelli, Linus Torvalds, Mel Gorman,
	Mike Galbraith, Morten.Rasmussen, Peter Zijlstra, Steven Rostedt,
	Thomas Gleixner, hdanton, parth, pauld, quentin.perret, srikar,
	Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     a34983470301018324f0110791da452fee1318c2
Gitweb:        https://git.kernel.org/tip/a34983470301018324f0110791da452fee1318c2
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:29 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:53 +02:00

sched/fair: Rename sg_lb_stats::sum_nr_running to sum_h_nr_running

Rename sum_nr_running to sum_h_nr_running because it effectively tracks
cfs->h_nr_running, so that sum_nr_running can later be used to track
rq->nr_running when needed.

There are no functional changes.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
Acked-by: Rik van Riel <riel@surriel.com>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: srikar@linux.vnet.ibm.com
Link: https://lkml.kernel.org/r/1571405198-27570-3-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 32 ++++++++++++++++----------------
 1 file changed, 16 insertions(+), 16 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 5ce0f71..ad8f16a 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7660,7 +7660,7 @@ struct sg_lb_stats {
 	unsigned long load_per_task;
 	unsigned long group_capacity;
 	unsigned long group_util; /* Total utilization of the group */
-	unsigned int sum_nr_running; /* Nr tasks running in the group */
+	unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
 	unsigned int idle_cpus;
 	unsigned int group_weight;
 	enum group_type group_type;
@@ -7705,7 +7705,7 @@ static inline void init_sd_lb_stats(struct sd_lb_stats *sds)
 		.total_capacity = 0UL,
 		.busiest_stat = {
 			.avg_load = 0UL,
-			.sum_nr_running = 0,
+			.sum_h_nr_running = 0,
 			.group_type = group_other,
 		},
 	};
@@ -7896,7 +7896,7 @@ static inline int sg_imbalanced(struct sched_group *group)
 static inline bool
 group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
 {
-	if (sgs->sum_nr_running < sgs->group_weight)
+	if (sgs->sum_h_nr_running < sgs->group_weight)
 		return true;
 
 	if ((sgs->group_capacity * 100) >
@@ -7917,7 +7917,7 @@ group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
 static inline bool
 group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
 {
-	if (sgs->sum_nr_running <= sgs->group_weight)
+	if (sgs->sum_h_nr_running <= sgs->group_weight)
 		return false;
 
 	if ((sgs->group_capacity * 100) <
@@ -8009,7 +8009,7 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 
 		sgs->group_load += cpu_runnable_load(rq);
 		sgs->group_util += cpu_util(i);
-		sgs->sum_nr_running += rq->cfs.h_nr_running;
+		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
 
 		nr_running = rq->nr_running;
 		if (nr_running > 1)
@@ -8039,8 +8039,8 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	sgs->group_capacity = group->sgc->capacity;
 	sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
 
-	if (sgs->sum_nr_running)
-		sgs->load_per_task = sgs->group_load / sgs->sum_nr_running;
+	if (sgs->sum_h_nr_running)
+		sgs->load_per_task = sgs->group_load / sgs->sum_h_nr_running;
 
 	sgs->group_weight = group->group_weight;
 
@@ -8097,7 +8097,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	 * capable CPUs may harm throughput. Maximize throughput,
 	 * power/energy consequences are not considered.
 	 */
-	if (sgs->sum_nr_running <= sgs->group_weight &&
+	if (sgs->sum_h_nr_running <= sgs->group_weight &&
 	    group_smaller_min_cpu_capacity(sds->local, sg))
 		return false;
 
@@ -8128,7 +8128,7 @@ asym_packing:
 	 * perform better since they share less core resources.  Hence when we
 	 * have idle threads, we want them to be the higher ones.
 	 */
-	if (sgs->sum_nr_running &&
+	if (sgs->sum_h_nr_running &&
 	    sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
 		sgs->group_asym_packing = 1;
 		if (!sds->busiest)
@@ -8146,9 +8146,9 @@ asym_packing:
 #ifdef CONFIG_NUMA_BALANCING
 static inline enum fbq_type fbq_classify_group(struct sg_lb_stats *sgs)
 {
-	if (sgs->sum_nr_running > sgs->nr_numa_running)
+	if (sgs->sum_h_nr_running > sgs->nr_numa_running)
 		return regular;
-	if (sgs->sum_nr_running > sgs->nr_preferred_running)
+	if (sgs->sum_h_nr_running > sgs->nr_preferred_running)
 		return remote;
 	return all;
 }
@@ -8223,7 +8223,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 		 */
 		if (prefer_sibling && sds->local &&
 		    group_has_capacity(env, local) &&
-		    (sgs->sum_nr_running > local->sum_nr_running + 1)) {
+		    (sgs->sum_h_nr_running > local->sum_h_nr_running + 1)) {
 			sgs->group_no_capacity = 1;
 			sgs->group_type = group_classify(sg, sgs);
 		}
@@ -8235,7 +8235,7 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 next_group:
 		/* Now, start updating sd_lb_stats */
-		sds->total_running += sgs->sum_nr_running;
+		sds->total_running += sgs->sum_h_nr_running;
 		sds->total_load += sgs->group_load;
 		sds->total_capacity += sgs->group_capacity;
 
@@ -8289,7 +8289,7 @@ void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 	local = &sds->local_stat;
 	busiest = &sds->busiest_stat;
 
-	if (!local->sum_nr_running)
+	if (!local->sum_h_nr_running)
 		local->load_per_task = cpu_avg_load_per_task(env->dst_cpu);
 	else if (busiest->load_per_task > local->load_per_task)
 		imbn = 1;
@@ -8387,7 +8387,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	 */
 	if (busiest->group_type == group_overloaded &&
 	    local->group_type   == group_overloaded) {
-		load_above_capacity = busiest->sum_nr_running * SCHED_CAPACITY_SCALE;
+		load_above_capacity = busiest->sum_h_nr_running * SCHED_CAPACITY_SCALE;
 		if (load_above_capacity > busiest->group_capacity) {
 			load_above_capacity -= busiest->group_capacity;
 			load_above_capacity *= scale_load_down(NICE_0_LOAD);
@@ -8468,7 +8468,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 		goto force_balance;
 
 	/* There is no busy sibling group to pull tasks from */
-	if (!sds.busiest || busiest->sum_nr_running == 0)
+	if (!sds.busiest || busiest->sum_h_nr_running == 0)
 		goto out_balanced;
 
 	/* XXX broken for overlapping NUMA groups */


* [tip: sched/core] sched/fair: Remove meaningless imbalance calculation
  2019-10-18 13:26 ` [PATCH v4 03/11] sched/fair: remove meaningless imbalance calculation Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  0 siblings, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Rik van Riel, Ben Segall, Dietmar Eggemann,
	Juri Lelli, Linus Torvalds, Mel Gorman, Mike Galbraith,
	Morten.Rasmussen, Peter Zijlstra, Steven Rostedt,
	Thomas Gleixner, hdanton, parth, pauld, quentin.perret, srikar,
	valentin.schneider, Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     fcf0553db6f4c79387864f6e4ab4a891601f395e
Gitweb:        https://git.kernel.org/tip/fcf0553db6f4c79387864f6e4ab4a891601f395e
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:30 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:53 +02:00

sched/fair: Remove meaningless imbalance calculation

Clean up load_balance() and remove meaningless calculations and fields
before adding a new algorithm.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Rik van Riel <riel@surriel.com>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-4-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 105 +-------------------------------------------
 1 file changed, 1 insertion(+), 104 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index ad8f16a..a1bc04f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5380,18 +5380,6 @@ static unsigned long capacity_of(int cpu)
 	return cpu_rq(cpu)->cpu_capacity;
 }
 
-static unsigned long cpu_avg_load_per_task(int cpu)
-{
-	struct rq *rq = cpu_rq(cpu);
-	unsigned long nr_running = READ_ONCE(rq->cfs.h_nr_running);
-	unsigned long load_avg = cpu_runnable_load(rq);
-
-	if (nr_running)
-		return load_avg / nr_running;
-
-	return 0;
-}
-
 static void record_wakee(struct task_struct *p)
 {
 	/*
@@ -7657,7 +7645,6 @@ static unsigned long task_h_load(struct task_struct *p)
 struct sg_lb_stats {
 	unsigned long avg_load; /*Avg load across the CPUs of the group */
 	unsigned long group_load; /* Total load over the CPUs of the group */
-	unsigned long load_per_task;
 	unsigned long group_capacity;
 	unsigned long group_util; /* Total utilization of the group */
 	unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
@@ -8039,9 +8026,6 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 	sgs->group_capacity = group->sgc->capacity;
 	sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
 
-	if (sgs->sum_h_nr_running)
-		sgs->load_per_task = sgs->group_load / sgs->sum_h_nr_running;
-
 	sgs->group_weight = group->group_weight;
 
 	sgs->group_no_capacity = group_is_overloaded(env, sgs);
@@ -8272,76 +8256,6 @@ next_group:
 }
 
 /**
- * fix_small_imbalance - Calculate the minor imbalance that exists
- *			amongst the groups of a sched_domain, during
- *			load balancing.
- * @env: The load balancing environment.
- * @sds: Statistics of the sched_domain whose imbalance is to be calculated.
- */
-static inline
-void fix_small_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
-{
-	unsigned long tmp, capa_now = 0, capa_move = 0;
-	unsigned int imbn = 2;
-	unsigned long scaled_busy_load_per_task;
-	struct sg_lb_stats *local, *busiest;
-
-	local = &sds->local_stat;
-	busiest = &sds->busiest_stat;
-
-	if (!local->sum_h_nr_running)
-		local->load_per_task = cpu_avg_load_per_task(env->dst_cpu);
-	else if (busiest->load_per_task > local->load_per_task)
-		imbn = 1;
-
-	scaled_busy_load_per_task =
-		(busiest->load_per_task * SCHED_CAPACITY_SCALE) /
-		busiest->group_capacity;
-
-	if (busiest->avg_load + scaled_busy_load_per_task >=
-	    local->avg_load + (scaled_busy_load_per_task * imbn)) {
-		env->imbalance = busiest->load_per_task;
-		return;
-	}
-
-	/*
-	 * OK, we don't have enough imbalance to justify moving tasks,
-	 * however we may be able to increase total CPU capacity used by
-	 * moving them.
-	 */
-
-	capa_now += busiest->group_capacity *
-			min(busiest->load_per_task, busiest->avg_load);
-	capa_now += local->group_capacity *
-			min(local->load_per_task, local->avg_load);
-	capa_now /= SCHED_CAPACITY_SCALE;
-
-	/* Amount of load we'd subtract */
-	if (busiest->avg_load > scaled_busy_load_per_task) {
-		capa_move += busiest->group_capacity *
-			    min(busiest->load_per_task,
-				busiest->avg_load - scaled_busy_load_per_task);
-	}
-
-	/* Amount of load we'd add */
-	if (busiest->avg_load * busiest->group_capacity <
-	    busiest->load_per_task * SCHED_CAPACITY_SCALE) {
-		tmp = (busiest->avg_load * busiest->group_capacity) /
-		      local->group_capacity;
-	} else {
-		tmp = (busiest->load_per_task * SCHED_CAPACITY_SCALE) /
-		      local->group_capacity;
-	}
-	capa_move += local->group_capacity *
-		    min(local->load_per_task, local->avg_load + tmp);
-	capa_move /= SCHED_CAPACITY_SCALE;
-
-	/* Move if we gain throughput */
-	if (capa_move > capa_now)
-		env->imbalance = busiest->load_per_task;
-}
-
-/**
  * calculate_imbalance - Calculate the amount of imbalance present within the
  *			 groups of a given sched_domain during load balance.
  * @env: load balance environment
@@ -8360,15 +8274,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		return;
 	}
 
-	if (busiest->group_type == group_imbalanced) {
-		/*
-		 * In the group_imb case we cannot rely on group-wide averages
-		 * to ensure CPU-load equilibrium, look at wider averages. XXX
-		 */
-		busiest->load_per_task =
-			min(busiest->load_per_task, sds->avg_load);
-	}
-
 	/*
 	 * Avg load of busiest sg can be less and avg load of local sg can
 	 * be greater than avg load across all sgs of sd because avg load
@@ -8379,7 +8284,7 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	    (busiest->avg_load <= sds->avg_load ||
 	     local->avg_load >= sds->avg_load)) {
 		env->imbalance = 0;
-		return fix_small_imbalance(env, sds);
+		return;
 	}
 
 	/*
@@ -8417,14 +8322,6 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 				       busiest->group_misfit_task_load);
 	}
 
-	/*
-	 * if *imbalance is less than the average load per runnable task
-	 * there is no guarantee that any tasks will be moved so we'll have
-	 * a think about bumping its value to force at least one task to be
-	 * moved
-	 */
-	if (env->imbalance < busiest->load_per_task)
-		return fix_small_imbalance(env, sds);
 }
 
 /******* find_busiest_group() helpers end here *********************/


* [tip: sched/core] sched/fair: Rework load_balance()
  2019-10-18 13:26 ` [PATCH v4 04/11] sched/fair: rework load_balance Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  2019-10-30 15:45   ` [PATCH v4 04/11] sched/fair: rework load_balance Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Ben Segall, Dietmar Eggemann, Juri Lelli,
	Linus Torvalds, Mel Gorman, Mike Galbraith, Morten.Rasmussen,
	Peter Zijlstra, Steven Rostedt, Thomas Gleixner, hdanton, parth,
	pauld, quentin.perret, riel, srikar, valentin.schneider,
	Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912
Gitweb:        https://git.kernel.org/tip/0b0695f2b34a4afa3f6e9aa1ff0e5336d8dad912
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:31 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:53 +02:00

sched/fair: Rework load_balance()

The load_balance() algorithm contains some heuristics which have become
meaningless since the rework of the scheduler's metrics like the
introduction of PELT.

Furthermore, load is an ill-suited metric for solving certain task
placement imbalance scenarios.

For instance, in the presence of idle CPUs, we should simply try to get at
least one task per CPU, whereas the current load-based algorithm can actually
leave idle CPUs alone simply because the load is somewhat balanced.

The current algorithm ends up creating virtual and meaningless values like
avg_load_per_task, or tweaking the state of a group to make it look overloaded
when it's not, in order to try to migrate tasks.

load_balance() should better qualify the imbalance of the group and clearly
define what has to be moved to fix this imbalance.

The type of sched_group has been extended to better reflect the type of
imbalance. We now have:

	group_has_spare
	group_fully_busy
	group_misfit_task
	group_asym_packing
	group_imbalanced
	group_overloaded

Based on the type of sched_group, load_balance() now sets what it wants to
move in order to fix the imbalance. It can be some load as before, but also
some utilization, a number of tasks, or a specific type of task:

	migrate_task
	migrate_util
	migrate_load
	migrate_misfit

This new load_balance() algorithm fixes several pending cases of incorrect
task placement:

 - the 1-task-per-CPU case on asymmetric systems
 - CFS tasks preempted by tasks of other scheduling classes
 - tasks not evenly spread across groups with spare capacity

Also, the load balance decisions have been consolidated into the 3 functions
below after removing the few bypasses and hacks of the current code:

 - update_sd_pick_busiest() selects the busiest sched_group.
 - find_busiest_group() checks if there is an imbalance between the local
   and busiest groups.
 - calculate_imbalance() decides what has to be moved.

Finally, the now unused field total_running of struct sd_lb_stats has been
removed.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: riel@surriel.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-5-git-send-email-vincent.guittot@linaro.org
[ Small readability and spelling updates. ]
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 611 ++++++++++++++++++++++++++++---------------
 1 file changed, 402 insertions(+), 209 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a1bc04f..76a2aa8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7058,11 +7058,26 @@ static unsigned long __read_mostly max_load_balance_interval = HZ/10;
 
 enum fbq_type { regular, remote, all };
 
+/*
+ * group_type describes the group of CPUs at the moment of the load balance.
+ * The enum is ordered by pulling priority, with the lowest-priority group
+ * first, so that group_type can simply be compared when selecting the
+ * busiest group. See update_sd_pick_busiest().
+ */
 enum group_type {
-	group_other = 0,
+	group_has_spare = 0,
+	group_fully_busy,
 	group_misfit_task,
+	group_asym_packing,
 	group_imbalanced,
-	group_overloaded,
+	group_overloaded
+};
+
+enum migration_type {
+	migrate_load = 0,
+	migrate_util,
+	migrate_task,
+	migrate_misfit
 };
 
 #define LBF_ALL_PINNED	0x01
@@ -7095,7 +7110,7 @@ struct lb_env {
 	unsigned int		loop_max;
 
 	enum fbq_type		fbq_type;
-	enum group_type		src_grp_type;
+	enum migration_type	migration_type;
 	struct list_head	tasks;
 };
 
@@ -7318,7 +7333,7 @@ static struct task_struct *detach_one_task(struct lb_env *env)
 static const unsigned int sched_nr_migrate_break = 32;
 
 /*
- * detach_tasks() -- tries to detach up to imbalance runnable load from
+ * detach_tasks() -- tries to detach up to imbalance load/util/tasks from
  * busiest_rq, as part of a balancing operation within domain "sd".
  *
  * Returns number of detached tasks if successful and 0 otherwise.
@@ -7326,8 +7341,8 @@ static const unsigned int sched_nr_migrate_break = 32;
 static int detach_tasks(struct lb_env *env)
 {
 	struct list_head *tasks = &env->src_rq->cfs_tasks;
+	unsigned long util, load;
 	struct task_struct *p;
-	unsigned long load;
 	int detached = 0;
 
 	lockdep_assert_held(&env->src_rq->lock);
@@ -7360,19 +7375,51 @@ static int detach_tasks(struct lb_env *env)
 		if (!can_migrate_task(p, env))
 			goto next;
 
-		load = task_h_load(p);
+		switch (env->migration_type) {
+		case migrate_load:
+			load = task_h_load(p);
 
-		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
-			goto next;
+			if (sched_feat(LB_MIN) &&
+			    load < 16 && !env->sd->nr_balance_failed)
+				goto next;
 
-		if ((load / 2) > env->imbalance)
-			goto next;
+			if (load/2 > env->imbalance)
+				goto next;
+
+			env->imbalance -= load;
+			break;
+
+		case migrate_util:
+			util = task_util_est(p);
+
+			if (util > env->imbalance)
+				goto next;
+
+			env->imbalance -= util;
+			break;
+
+		case migrate_task:
+			env->imbalance--;
+			break;
+
+		case migrate_misfit:
+			load = task_h_load(p);
+
+			/*
+			 * Load of misfit task might decrease a bit since it has
+			 * been recorded. Be conservative in the condition.
+			 */
+			if (load/2 < env->imbalance)
+				goto next;
+
+			env->imbalance = 0;
+			break;
+		}
 
 		detach_task(p, env);
 		list_add(&p->se.group_node, &env->tasks);
 
 		detached++;
-		env->imbalance -= load;
 
 #ifdef CONFIG_PREEMPTION
 		/*
@@ -7386,7 +7433,7 @@ static int detach_tasks(struct lb_env *env)
 
 		/*
 		 * We only want to steal up to the prescribed amount of
-		 * runnable load.
+		 * load/util/tasks.
 		 */
 		if (env->imbalance <= 0)
 			break;
@@ -7651,7 +7698,6 @@ struct sg_lb_stats {
 	unsigned int idle_cpus;
 	unsigned int group_weight;
 	enum group_type group_type;
-	int group_no_capacity;
 	unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
 	unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
 #ifdef CONFIG_NUMA_BALANCING
@@ -7667,10 +7713,10 @@ struct sg_lb_stats {
 struct sd_lb_stats {
 	struct sched_group *busiest;	/* Busiest group in this sd */
 	struct sched_group *local;	/* Local group in this sd */
-	unsigned long total_running;
 	unsigned long total_load;	/* Total load of all groups in sd */
 	unsigned long total_capacity;	/* Total capacity of all groups in sd */
 	unsigned long avg_load;	/* Average load across all groups in sd */
+	unsigned int prefer_sibling; /* tasks should go to sibling first */
 
 	struct sg_lb_stats busiest_stat;/* Statistics of the busiest group */
 	struct sg_lb_stats local_stat;	/* Statistics of the local group */
@@ -7681,19 +7727,18 @@ static inline void init_sd_lb_stats(struct sd_lb_stats *sds)
 	/*
 	 * Skimp on the clearing to avoid duplicate work. We can avoid clearing
 	 * local_stat because update_sg_lb_stats() does a full clear/assignment.
-	 * We must however clear busiest_stat::avg_load because
-	 * update_sd_pick_busiest() reads this before assignment.
+	 * We must however set busiest_stat::group_type and
+	 * busiest_stat::idle_cpus to the worst busiest group because
+	 * update_sd_pick_busiest() reads these before assignment.
 	 */
 	*sds = (struct sd_lb_stats){
 		.busiest = NULL,
 		.local = NULL,
-		.total_running = 0UL,
 		.total_load = 0UL,
 		.total_capacity = 0UL,
 		.busiest_stat = {
-			.avg_load = 0UL,
-			.sum_h_nr_running = 0,
-			.group_type = group_other,
+			.idle_cpus = UINT_MAX,
+			.group_type = group_has_spare,
 		},
 	};
 }
@@ -7935,19 +7980,26 @@ group_smaller_max_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
 }
 
 static inline enum
-group_type group_classify(struct sched_group *group,
+group_type group_classify(struct lb_env *env,
+			  struct sched_group *group,
 			  struct sg_lb_stats *sgs)
 {
-	if (sgs->group_no_capacity)
+	if (group_is_overloaded(env, sgs))
 		return group_overloaded;
 
 	if (sg_imbalanced(group))
 		return group_imbalanced;
 
+	if (sgs->group_asym_packing)
+		return group_asym_packing;
+
 	if (sgs->group_misfit_task_load)
 		return group_misfit_task;
 
-	return group_other;
+	if (!group_has_capacity(env, sgs))
+		return group_fully_busy;
+
+	return group_has_spare;
 }
 
 static bool update_nohz_stats(struct rq *rq, bool force)
@@ -7984,10 +8036,12 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 				      struct sg_lb_stats *sgs,
 				      int *sg_status)
 {
-	int i, nr_running;
+	int i, nr_running, local_group;
 
 	memset(sgs, 0, sizeof(*sgs));
 
+	local_group = cpumask_test_cpu(env->dst_cpu, sched_group_span(group));
+
 	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
 		struct rq *rq = cpu_rq(i);
 
@@ -8012,9 +8066,16 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		/*
 		 * No need to call idle_cpu() if nr_running is not 0
 		 */
-		if (!nr_running && idle_cpu(i))
+		if (!nr_running && idle_cpu(i)) {
 			sgs->idle_cpus++;
+			/* Idle cpu can't have misfit task */
+			continue;
+		}
+
+		if (local_group)
+			continue;
 
+		/* Check for a misfit task on the cpu */
 		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
 		    sgs->group_misfit_task_load < rq->misfit_task_load) {
 			sgs->group_misfit_task_load = rq->misfit_task_load;
@@ -8022,14 +8083,24 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		}
 	}
 
-	/* Adjust by relative CPU capacity of the group */
+	/* Check if dst CPU is idle and preferred to this group */
+	if (env->sd->flags & SD_ASYM_PACKING &&
+	    env->idle != CPU_NOT_IDLE &&
+	    sgs->sum_h_nr_running &&
+	    sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
+		sgs->group_asym_packing = 1;
+	}
+
 	sgs->group_capacity = group->sgc->capacity;
-	sgs->avg_load = (sgs->group_load*SCHED_CAPACITY_SCALE) / sgs->group_capacity;
 
 	sgs->group_weight = group->group_weight;
 
-	sgs->group_no_capacity = group_is_overloaded(env, sgs);
-	sgs->group_type = group_classify(group, sgs);
+	sgs->group_type = group_classify(env, group, sgs);
+
+	/* Computing avg_load makes sense only when group is overloaded */
+	if (sgs->group_type == group_overloaded)
+		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
+				sgs->group_capacity;
 }
 
 /**
@@ -8052,6 +8123,10 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 {
 	struct sg_lb_stats *busiest = &sds->busiest_stat;
 
+	/* Make sure that there is at least one task to pull */
+	if (!sgs->sum_h_nr_running)
+		return false;
+
 	/*
 	 * Don't try to pull misfit tasks we can't help.
 	 * We can use max_capacity here as reduction in capacity on some
@@ -8060,7 +8135,7 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	 */
 	if (sgs->group_type == group_misfit_task &&
 	    (!group_smaller_max_cpu_capacity(sg, sds->local) ||
-	     !group_has_capacity(env, &sds->local_stat)))
+	     sds->local_stat.group_type != group_has_spare))
 		return false;
 
 	if (sgs->group_type > busiest->group_type)
@@ -8069,62 +8144,80 @@ static bool update_sd_pick_busiest(struct lb_env *env,
 	if (sgs->group_type < busiest->group_type)
 		return false;
 
-	if (sgs->avg_load <= busiest->avg_load)
-		return false;
-
-	if (!(env->sd->flags & SD_ASYM_CPUCAPACITY))
-		goto asym_packing;
-
 	/*
-	 * Candidate sg has no more than one task per CPU and
-	 * has higher per-CPU capacity. Migrating tasks to less
-	 * capable CPUs may harm throughput. Maximize throughput,
-	 * power/energy consequences are not considered.
+	 * The candidate and the current busiest group are the same type of
+	 * group. Let's check which one is the busier according to the type.
 	 */
-	if (sgs->sum_h_nr_running <= sgs->group_weight &&
-	    group_smaller_min_cpu_capacity(sds->local, sg))
-		return false;
 
-	/*
-	 * If we have more than one misfit sg go with the biggest misfit.
-	 */
-	if (sgs->group_type == group_misfit_task &&
-	    sgs->group_misfit_task_load < busiest->group_misfit_task_load)
+	switch (sgs->group_type) {
+	case group_overloaded:
+		/* Select the overloaded group with highest avg_load. */
+		if (sgs->avg_load <= busiest->avg_load)
+			return false;
+		break;
+
+	case group_imbalanced:
+		/*
+		 * Select the 1st imbalanced group as we don't have any way to
+		 * choose one over another.
+		 */
 		return false;
 
-asym_packing:
-	/* This is the busiest node in its class. */
-	if (!(env->sd->flags & SD_ASYM_PACKING))
-		return true;
+	case group_asym_packing:
+		/* Prefer to move work away from the lowest-priority CPUs */
+		if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
+			return false;
+		break;
 
-	/* No ASYM_PACKING if target CPU is already busy */
-	if (env->idle == CPU_NOT_IDLE)
-		return true;
-	/*
-	 * ASYM_PACKING needs to move all the work to the highest
-	 * prority CPUs in the group, therefore mark all groups
-	 * of lower priority than ourself as busy.
-	 *
-	 * This is primarily intended to used at the sibling level.  Some
-	 * cores like POWER7 prefer to use lower numbered SMT threads.  In the
-	 * case of POWER7, it can move to lower SMT modes only when higher
-	 * threads are idle.  When in lower SMT modes, the threads will
-	 * perform better since they share less core resources.  Hence when we
-	 * have idle threads, we want them to be the higher ones.
-	 */
-	if (sgs->sum_h_nr_running &&
-	    sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
-		sgs->group_asym_packing = 1;
-		if (!sds->busiest)
-			return true;
+	case group_misfit_task:
+		/*
+		 * If we have more than one misfit sg go with the biggest
+		 * misfit.
+		 */
+		if (sgs->group_misfit_task_load < busiest->group_misfit_task_load)
+			return false;
+		break;
 
-		/* Prefer to move from lowest priority CPU's work */
-		if (sched_asym_prefer(sds->busiest->asym_prefer_cpu,
-				      sg->asym_prefer_cpu))
-			return true;
+	case group_fully_busy:
+		/*
+		 * Select the fully busy group with highest avg_load. In
+		 * theory, there is no need to pull tasks from such a group
+		 * because the tasks have all the compute capacity they need,
+		 * but we can still improve overall throughput by reducing
+		 * contention when accessing shared HW resources.
+		 *
+		 * XXX for now avg_load is not computed and always 0 so we
+		 * select the 1st one.
+		 */
+		if (sgs->avg_load <= busiest->avg_load)
+			return false;
+		break;
+
+	case group_has_spare:
+		/*
+		 * Select not overloaded group with lowest number of
+		 * idle cpus. We could also compare the spare capacity
+		 * which is more stable but it can end up that the
+		 * group has less spare capacity but finally more idle
+		 * CPUs which means less opportunity to pull tasks.
+		 */
+		if (sgs->idle_cpus >= busiest->idle_cpus)
+			return false;
+		break;
 	}
 
-	return false;
+	/*
+	 * Candidate sg has no more than one task per CPU and has higher
+	 * per-CPU capacity. Migrating tasks to less capable CPUs may harm
+	 * throughput. Maximize throughput, power/energy consequences are not
+	 * considered.
+	 */
+	if ((env->sd->flags & SD_ASYM_CPUCAPACITY) &&
+	    (sgs->group_type <= group_fully_busy) &&
+	    (group_smaller_min_cpu_capacity(sds->local, sg)))
+		return false;
+
+	return true;
 }
 
 #ifdef CONFIG_NUMA_BALANCING
@@ -8162,13 +8255,13 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
  * @env: The load balancing environment.
  * @sds: variable to hold the statistics for this sched_domain.
  */
+
 static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
 {
 	struct sched_domain *child = env->sd->child;
 	struct sched_group *sg = env->sd->groups;
 	struct sg_lb_stats *local = &sds->local_stat;
 	struct sg_lb_stats tmp_sgs;
-	bool prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
 	int sg_status = 0;
 
 #ifdef CONFIG_NO_HZ_COMMON
@@ -8195,22 +8288,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 		if (local_group)
 			goto next_group;
 
-		/*
-		 * In case the child domain prefers tasks go to siblings
-		 * first, lower the sg capacity so that we'll try
-		 * and move all the excess tasks away. We lower the capacity
-		 * of a group only if the local group has the capacity to fit
-		 * these excess tasks. The extra check prevents the case where
-		 * you always pull from the heaviest group when it is already
-		 * under-utilized (possible with a large weight task outweighs
-		 * the tasks on the system).
-		 */
-		if (prefer_sibling && sds->local &&
-		    group_has_capacity(env, local) &&
-		    (sgs->sum_h_nr_running > local->sum_h_nr_running + 1)) {
-			sgs->group_no_capacity = 1;
-			sgs->group_type = group_classify(sg, sgs);
-		}
 
 		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
 			sds->busiest = sg;
@@ -8219,13 +8296,15 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
 
 next_group:
 		/* Now, start updating sd_lb_stats */
-		sds->total_running += sgs->sum_h_nr_running;
 		sds->total_load += sgs->group_load;
 		sds->total_capacity += sgs->group_capacity;
 
 		sg = sg->next;
 	} while (sg != env->sd->groups);
 
+	/* Tag domain that child domain prefers tasks go to siblings first */
+	sds->prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
+
 #ifdef CONFIG_NO_HZ_COMMON
 	if ((env->flags & LBF_NOHZ_AGAIN) &&
 	    cpumask_subset(nohz.idle_cpus_mask, sched_domain_span(env->sd))) {
@@ -8263,69 +8342,149 @@ next_group:
  */
 static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
 {
-	unsigned long max_pull, load_above_capacity = ~0UL;
 	struct sg_lb_stats *local, *busiest;
 
 	local = &sds->local_stat;
 	busiest = &sds->busiest_stat;
 
-	if (busiest->group_asym_packing) {
-		env->imbalance = busiest->group_load;
+	if (busiest->group_type == group_misfit_task) {
+		/* Set imbalance to allow misfit tasks to be balanced. */
+		env->migration_type = migrate_misfit;
+		env->imbalance = busiest->group_misfit_task_load;
+		return;
+	}
+
+	if (busiest->group_type == group_asym_packing) {
+		/*
+		 * In case of asym capacity, we will try to migrate all load to
+		 * the preferred CPU.
+		 */
+		env->migration_type = migrate_task;
+		env->imbalance = busiest->sum_h_nr_running;
+		return;
+	}
+
+	if (busiest->group_type == group_imbalanced) {
+		/*
+		 * In the group_imb case we cannot rely on group-wide averages
+		 * to ensure CPU-load equilibrium, try to move any task to fix
+		 * the imbalance. The next load balance will take care of
+		 * balancing back the system.
+		 */
+		env->migration_type = migrate_task;
+		env->imbalance = 1;
 		return;
 	}
 
 	/*
-	 * Avg load of busiest sg can be less and avg load of local sg can
-	 * be greater than avg load across all sgs of sd because avg load
-	 * factors in sg capacity and sgs with smaller group_type are
-	 * skipped when updating the busiest sg:
+	 * Try to use spare capacity of local group without overloading it or
+	 * emptying busiest
 	 */
-	if (busiest->group_type != group_misfit_task &&
-	    (busiest->avg_load <= sds->avg_load ||
-	     local->avg_load >= sds->avg_load)) {
-		env->imbalance = 0;
+	if (local->group_type == group_has_spare) {
+		if (busiest->group_type > group_fully_busy) {
+			/*
+			 * If busiest is overloaded, try to fill spare
+			 * capacity. This might end up creating spare capacity
+			 * in busiest or busiest still being overloaded but
+			 * there is no simple way to directly compute the
+			 * amount of load to migrate in order to balance the
+			 * system.
+			 */
+			env->migration_type = migrate_util;
+			env->imbalance = max(local->group_capacity, local->group_util) -
+					 local->group_util;
+
+			/*
+			 * In some cases, the group's utilization is max or even
+			 * higher than capacity because of migrations but the
+			 * local CPU is (newly) idle. There is at least one
+			 * waiting task in this overloaded busiest group. Let's
+			 * try to pull it.
+			 */
+			if (env->idle != CPU_NOT_IDLE && env->imbalance == 0) {
+				env->migration_type = migrate_task;
+				env->imbalance = 1;
+			}
+
+			return;
+		}
+
+		if (busiest->group_weight == 1 || sds->prefer_sibling) {
+			unsigned int nr_diff = busiest->sum_h_nr_running;
+			/*
+			 * When prefer_sibling is set, spread running tasks
+			 * evenly between groups.
+			 */
+			env->migration_type = migrate_task;
+			lsub_positive(&nr_diff, local->sum_h_nr_running);
+			env->imbalance = nr_diff >> 1;
+			return;
+		}
+
+		/*
+		 * If there is no overload, we just want to even the number of
+		 * idle cpus.
+		 */
+		env->migration_type = migrate_task;
+		env->imbalance = max_t(long, 0, (local->idle_cpus -
+						 busiest->idle_cpus) >> 1);
 		return;
 	}
 
 	/*
-	 * If there aren't any idle CPUs, avoid creating some.
+	 * Local is fully busy but has to take more load to relieve the
+	 * busiest group
 	 */
-	if (busiest->group_type == group_overloaded &&
-	    local->group_type   == group_overloaded) {
-		load_above_capacity = busiest->sum_h_nr_running * SCHED_CAPACITY_SCALE;
-		if (load_above_capacity > busiest->group_capacity) {
-			load_above_capacity -= busiest->group_capacity;
-			load_above_capacity *= scale_load_down(NICE_0_LOAD);
-			load_above_capacity /= busiest->group_capacity;
-		} else
-			load_above_capacity = ~0UL;
+	if (local->group_type < group_overloaded) {
+		/*
+		 * Local will become overloaded so the avg_load metrics are
+		 * finally needed.
+		 */
+
+		local->avg_load = (local->group_load * SCHED_CAPACITY_SCALE) /
+				  local->group_capacity;
+
+		sds->avg_load = (sds->total_load * SCHED_CAPACITY_SCALE) /
+				sds->total_capacity;
 	}
 
 	/*
-	 * We're trying to get all the CPUs to the average_load, so we don't
-	 * want to push ourselves above the average load, nor do we wish to
-	 * reduce the max loaded CPU below the average load. At the same time,
-	 * we also don't want to reduce the group load below the group
-	 * capacity. Thus we look for the minimum possible imbalance.
+	 * Both groups are or will become overloaded, and we're trying to get all
+	 * the CPUs to the average_load, so we don't want to push ourselves
+	 * above the average load, nor do we wish to reduce the max loaded CPU
+	 * below the average load. At the same time, we also don't want to
+	 * reduce the group load below the group capacity. Thus we look for
+	 * the minimum possible imbalance.
 	 */
-	max_pull = min(busiest->avg_load - sds->avg_load, load_above_capacity);
-
-	/* How much load to actually move to equalise the imbalance */
+	env->migration_type = migrate_load;
 	env->imbalance = min(
-		max_pull * busiest->group_capacity,
+		(busiest->avg_load - sds->avg_load) * busiest->group_capacity,
 		(sds->avg_load - local->avg_load) * local->group_capacity
 	) / SCHED_CAPACITY_SCALE;
-
-	/* Boost imbalance to allow misfit task to be balanced. */
-	if (busiest->group_type == group_misfit_task) {
-		env->imbalance = max_t(long, env->imbalance,
-				       busiest->group_misfit_task_load);
-	}
-
 }
 
 /******* find_busiest_group() helpers end here *********************/
 
+/*
+ * Decision matrix according to the local and busiest group type:
+ *
+ * busiest \ local has_spare fully_busy misfit asym imbalanced overloaded
+ * has_spare        nr_idle   balanced   N/A    N/A  balanced   balanced
+ * fully_busy       nr_idle   nr_idle    N/A    N/A  balanced   balanced
+ * misfit_task      force     N/A        N/A    N/A  force      force
+ * asym_packing     force     force      N/A    N/A  force      force
+ * imbalanced       force     force      N/A    N/A  force      force
+ * overloaded       force     force      N/A    N/A  force      avg_load
+ *
+ * N/A :      Not Applicable because already filtered while updating
+ *            statistics.
+ * balanced : The system is balanced for these 2 groups.
+ * force :    Calculate the imbalance as load migration is probably needed.
+ * avg_load : Only if imbalance is significant enough.
+ * nr_idle :  dst_cpu is not busy and the number of idle CPUs is quite
+ *            different in groups.
+ */
+
 /**
  * find_busiest_group - Returns the busiest group within the sched_domain
  * if there is an imbalance.
@@ -8360,17 +8519,17 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	local = &sds.local_stat;
 	busiest = &sds.busiest_stat;
 
-	/* ASYM feature bypasses nice load balance check */
-	if (busiest->group_asym_packing)
-		goto force_balance;
-
 	/* There is no busy sibling group to pull tasks from */
-	if (!sds.busiest || busiest->sum_h_nr_running == 0)
+	if (!sds.busiest)
 		goto out_balanced;
 
-	/* XXX broken for overlapping NUMA groups */
-	sds.avg_load = (SCHED_CAPACITY_SCALE * sds.total_load)
-						/ sds.total_capacity;
+	/* Misfit tasks should be dealt with regardless of the avg load */
+	if (busiest->group_type == group_misfit_task)
+		goto force_balance;
+
+	/* ASYM feature bypasses nice load balance check */
+	if (busiest->group_type == group_asym_packing)
+		goto force_balance;
 
 	/*
 	 * If the busiest group is imbalanced the below checks don't
@@ -8381,55 +8540,64 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 		goto force_balance;
 
 	/*
-	 * When dst_cpu is idle, prevent SMP nice and/or asymmetric group
-	 * capacities from resulting in underutilization due to avg_load.
-	 */
-	if (env->idle != CPU_NOT_IDLE && group_has_capacity(env, local) &&
-	    busiest->group_no_capacity)
-		goto force_balance;
-
-	/* Misfit tasks should be dealt with regardless of the avg load */
-	if (busiest->group_type == group_misfit_task)
-		goto force_balance;
-
-	/*
 	 * If the local group is busier than the selected busiest group
 	 * don't try and pull any tasks.
 	 */
-	if (local->avg_load >= busiest->avg_load)
+	if (local->group_type > busiest->group_type)
 		goto out_balanced;
 
 	/*
-	 * Don't pull any tasks if this group is already above the domain
-	 * average load.
+	 * When groups are overloaded, use the avg_load to ensure fairness
+	 * between tasks.
 	 */
-	if (local->avg_load >= sds.avg_load)
-		goto out_balanced;
+	if (local->group_type == group_overloaded) {
+		/*
+		 * If the local group is more loaded than the selected
+		 * busiest group don't try to pull any tasks.
+		 */
+		if (local->avg_load >= busiest->avg_load)
+			goto out_balanced;
+
+		/* XXX broken for overlapping NUMA groups */
+		sds.avg_load = (sds.total_load * SCHED_CAPACITY_SCALE) /
+				sds.total_capacity;
 
-	if (env->idle == CPU_IDLE) {
 		/*
-		 * This CPU is idle. If the busiest group is not overloaded
-		 * and there is no imbalance between this and busiest group
-		 * wrt idle CPUs, it is balanced. The imbalance becomes
-		 * significant if the diff is greater than 1 otherwise we
-		 * might end up to just move the imbalance on another group
+		 * Don't pull any tasks if this group is already above the
+		 * domain average load.
 		 */
-		if ((busiest->group_type != group_overloaded) &&
-				(local->idle_cpus <= (busiest->idle_cpus + 1)))
+		if (local->avg_load >= sds.avg_load)
 			goto out_balanced;
-	} else {
+
 		/*
-		 * In the CPU_NEWLY_IDLE, CPU_NOT_IDLE cases, use
-		 * imbalance_pct to be conservative.
+		 * If the busiest group is more loaded, use imbalance_pct to be
+		 * conservative.
 		 */
 		if (100 * busiest->avg_load <=
 				env->sd->imbalance_pct * local->avg_load)
 			goto out_balanced;
 	}
 
+	/* Try to move all excess tasks to child's sibling domain */
+	if (sds.prefer_sibling && local->group_type == group_has_spare &&
+	    busiest->sum_h_nr_running > local->sum_h_nr_running + 1)
+		goto force_balance;
+
+	if (busiest->group_type != group_overloaded &&
+	     (env->idle == CPU_NOT_IDLE ||
+	      local->idle_cpus <= (busiest->idle_cpus + 1)))
+		/*
+		 * If the busiest group is not overloaded
+		 * and there is no imbalance between this and busiest group
+		 * wrt. idle CPUs, it is balanced. The imbalance
+		 * becomes significant if the diff is greater than 1 otherwise
+		 * we might end up just moving the imbalance to another
+		 * group.
+		 */
+		goto out_balanced;
+
 force_balance:
 	/* Looks like there is an imbalance. Compute it */
-	env->src_grp_type = busiest->group_type;
 	calculate_imbalance(env, &sds);
 	return env->imbalance ? sds.busiest : NULL;
 
@@ -8445,11 +8613,13 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 				     struct sched_group *group)
 {
 	struct rq *busiest = NULL, *rq;
-	unsigned long busiest_load = 0, busiest_capacity = 1;
+	unsigned long busiest_util = 0, busiest_load = 0, busiest_capacity = 1;
+	unsigned int busiest_nr = 0;
 	int i;
 
 	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
-		unsigned long capacity, load;
+		unsigned long capacity, load, util;
+		unsigned int nr_running;
 		enum fbq_type rt;
 
 		rq = cpu_rq(i);
@@ -8477,20 +8647,8 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		if (rt > env->fbq_type)
 			continue;
 
-		/*
-		 * For ASYM_CPUCAPACITY domains with misfit tasks we simply
-		 * seek the "biggest" misfit task.
-		 */
-		if (env->src_grp_type == group_misfit_task) {
-			if (rq->misfit_task_load > busiest_load) {
-				busiest_load = rq->misfit_task_load;
-				busiest = rq;
-			}
-
-			continue;
-		}
-
 		capacity = capacity_of(i);
+		nr_running = rq->cfs.h_nr_running;
 
 		/*
 		 * For ASYM_CPUCAPACITY domains, don't pick a CPU that could
@@ -8500,35 +8658,70 @@ static struct rq *find_busiest_queue(struct lb_env *env,
 		 */
 		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
 		    capacity_of(env->dst_cpu) < capacity &&
-		    rq->nr_running == 1)
+		    nr_running == 1)
 			continue;
 
-		load = cpu_runnable_load(rq);
+		switch (env->migration_type) {
+		case migrate_load:
+			/*
+			 * When comparing with load imbalance, use
+			 * cpu_runnable_load() which is not scaled with the CPU
+			 * capacity.
+			 */
+			load = cpu_runnable_load(rq);
 
-		/*
-		 * When comparing with imbalance, use cpu_runnable_load()
-		 * which is not scaled with the CPU capacity.
-		 */
+			if (nr_running == 1 && load > env->imbalance &&
+			    !check_cpu_capacity(rq, env->sd))
+				break;
 
-		if (rq->nr_running == 1 && load > env->imbalance &&
-		    !check_cpu_capacity(rq, env->sd))
-			continue;
+			/*
+			 * For the load comparisons with the other CPUs,
+			 * consider the cpu_runnable_load() scaled with the CPU
+			 * capacity, so that the load can be moved away from
+			 * the CPU that is potentially running at a lower
+			 * capacity.
+			 *
+			 * Thus we're looking for max(load_i / capacity_i),
+			 * crosswise multiplication to rid ourselves of the
+			 * division works out to:
+			 * load_i * capacity_j > load_j * capacity_i;
+			 * where j is our previous maximum.
+			 */
+			if (load * busiest_capacity > busiest_load * capacity) {
+				busiest_load = load;
+				busiest_capacity = capacity;
+				busiest = rq;
+			}
+			break;
+
+		case migrate_util:
+			util = cpu_util(cpu_of(rq));
+
+			if (busiest_util < util) {
+				busiest_util = util;
+				busiest = rq;
+			}
+			break;
+
+		case migrate_task:
+			if (busiest_nr < nr_running) {
+				busiest_nr = nr_running;
+				busiest = rq;
+			}
+			break;
+
+		case migrate_misfit:
+			/*
+			 * For ASYM_CPUCAPACITY domains with misfit tasks we
+			 * simply seek the "biggest" misfit task.
+			 */
+			if (rq->misfit_task_load > busiest_load) {
+				busiest_load = rq->misfit_task_load;
+				busiest = rq;
+			}
+
+			break;
 
-		/*
-		 * For the load comparisons with the other CPU's, consider
-		 * the cpu_runnable_load() scaled with the CPU capacity, so
-		 * that the load can be moved away from the CPU that is
-		 * potentially running at a lower capacity.
-		 *
-		 * Thus we're looking for max(load_i / capacity_i), crosswise
-		 * multiplication to rid ourselves of the division works out
-		 * to: load_i * capacity_j > load_j * capacity_i;  where j is
-		 * our previous maximum.
-		 */
-		if (load * busiest_capacity > busiest_load * capacity) {
-			busiest_load = load;
-			busiest_capacity = capacity;
-			busiest = rq;
 		}
 	}
 
@@ -8574,7 +8767,7 @@ voluntary_active_balance(struct lb_env *env)
 			return 1;
 	}
 
-	if (env->src_grp_type == group_misfit_task)
+	if (env->migration_type == migrate_misfit)
 		return 1;
 
 	return 0;

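The cross-multiplication trick used in find_busiest_queue() above (picking max(load_i / capacity_i) without doing a division) can be sketched as a standalone model. This is illustrative Python with invented numbers, not kernel code:

```python
def pick_busiest(rqs):
    """Pick the runqueue with max(load/capacity) without dividing.

    rqs is a list of (load, capacity) tuples; capacity >= 1.
    load_i / cap_i > load_j / cap_j  is equivalent to
    load_i * cap_j > load_j * cap_i, which avoids the division.
    """
    busiest = None
    busiest_load, busiest_capacity = 0, 1
    for load, capacity in rqs:
        if load * busiest_capacity > busiest_load * capacity:
            busiest_load, busiest_capacity = load, capacity
            busiest = (load, capacity)
    return busiest

# A half-capacity CPU with load 600 is "busier" than a full-capacity
# CPU with load 1000, because 600/512 > 1000/1024.
print(pick_busiest([(1000, 1024), (600, 512)]))
```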
^ permalink raw reply	[flat|nested] 89+ messages in thread

* [tip: sched/core] sched/fair: Use rq->nr_running when balancing load
  2019-10-18 13:26 ` [PATCH v4 05/11] sched/fair: use rq->nr_running when balancing load Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  2019-10-30 15:54   ` [PATCH v4 05/11] sched/fair: use " Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Ben Segall, Dietmar Eggemann, Juri Lelli,
	Linus Torvalds, Mel Gorman, Mike Galbraith, Morten.Rasmussen,
	Peter Zijlstra, Steven Rostedt, Thomas Gleixner, hdanton, parth,
	pauld, quentin.perret, riel, srikar, valentin.schneider,
	Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     5e23e474431529b7d1480f649ce33d0e9c1b2e48
Gitweb:        https://git.kernel.org/tip/5e23e474431529b7d1480f649ce33d0e9c1b2e48
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:32 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:54 +02:00

sched/fair: Use rq->nr_running when balancing load

CFS load_balance() only takes care of CFS tasks whereas CPUs can be used by
other scheduling classes. Typically, a CFS task preempted by an RT or deadline
task will not get a chance to be pulled by another CPU because
load_balance() doesn't take into account tasks from other classes.
Add the sum of nr_running to the statistics and use it to detect such
situations.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: riel@surriel.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-6-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 13 ++++++++-----
 1 file changed, 8 insertions(+), 5 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 76a2aa8..4e7396c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7694,6 +7694,7 @@ struct sg_lb_stats {
 	unsigned long group_load; /* Total load over the CPUs of the group */
 	unsigned long group_capacity;
 	unsigned long group_util; /* Total utilization of the group */
+	unsigned int sum_nr_running; /* Nr of tasks running in the group */
 	unsigned int sum_h_nr_running; /* Nr of CFS tasks running in the group */
 	unsigned int idle_cpus;
 	unsigned int group_weight;
@@ -7928,7 +7929,7 @@ static inline int sg_imbalanced(struct sched_group *group)
 static inline bool
 group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
 {
-	if (sgs->sum_h_nr_running < sgs->group_weight)
+	if (sgs->sum_nr_running < sgs->group_weight)
 		return true;
 
 	if ((sgs->group_capacity * 100) >
@@ -7949,7 +7950,7 @@ group_has_capacity(struct lb_env *env, struct sg_lb_stats *sgs)
 static inline bool
 group_is_overloaded(struct lb_env *env, struct sg_lb_stats *sgs)
 {
-	if (sgs->sum_h_nr_running <= sgs->group_weight)
+	if (sgs->sum_nr_running <= sgs->group_weight)
 		return false;
 
 	if ((sgs->group_capacity * 100) <
@@ -8053,6 +8054,8 @@ static inline void update_sg_lb_stats(struct lb_env *env,
 		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
 
 		nr_running = rq->nr_running;
+		sgs->sum_nr_running += nr_running;
+
 		if (nr_running > 1)
 			*sg_status |= SG_OVERLOAD;
 
@@ -8410,13 +8413,13 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 		}
 
 		if (busiest->group_weight == 1 || sds->prefer_sibling) {
-			unsigned int nr_diff = busiest->sum_h_nr_running;
+			unsigned int nr_diff = busiest->sum_nr_running;
 			/*
 			 * When prefer sibling, evenly spread running tasks on
 			 * groups.
 			 */
 			env->migration_type = migrate_task;
-			lsub_positive(&nr_diff, local->sum_h_nr_running);
+			lsub_positive(&nr_diff, local->sum_nr_running);
 			env->imbalance = nr_diff >> 1;
 			return;
 		}
@@ -8580,7 +8583,7 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 
 	/* Try to move all excess tasks to child's sibling domain */
 	if (sds.prefer_sibling && local->group_type == group_has_spare &&
-	    busiest->sum_h_nr_running > local->sum_h_nr_running + 1)
+	    busiest->sum_nr_running > local->sum_nr_running + 1)
 		goto force_balance;
 
 	if (busiest->group_type != group_overloaded &&

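The distinction this patch relies on — rq->cfs.h_nr_running counts only CFS tasks, while rq->nr_running counts tasks from every scheduling class — can be illustrated with a toy model (hypothetical Python, not kernel code):

```python
class RQ:
    """Toy runqueue: tracks CFS tasks and tasks from other classes."""
    def __init__(self, cfs_h_nr_running, rt_nr_running):
        self.cfs_h_nr_running = cfs_h_nr_running
        # rq->nr_running counts every class, not just CFS
        self.nr_running = cfs_h_nr_running + rt_nr_running

def group_stats(rqs):
    """Mimic update_sg_lb_stats(): sum both counters over the group."""
    sum_h_nr_running = sum(rq.cfs_h_nr_running for rq in rqs)
    sum_nr_running = sum(rq.nr_running for rq in rqs)
    return sum_h_nr_running, sum_nr_running

# CPU0 runs an RT task that preempted a CFS task; CPU1 is idle.
# Counting only CFS tasks hides the fact that CPU0 is busy.
rqs = [RQ(cfs_h_nr_running=1, rt_nr_running=1), RQ(0, 0)]
print(group_stats(rqs))  # (1, 2): only nr_running sees the RT task
```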

* [tip: sched/core] sched/fair: Clean up asym packing
  2019-10-18 13:26 ` [PATCH v4 01/11] sched/fair: clean up asym packing Vincent Guittot
@ 2019-10-21  9:12   ` tip-bot2 for Vincent Guittot
  2019-10-30 14:51   ` [PATCH v4 01/11] sched/fair: clean " Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-10-21  9:12 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Vincent Guittot, Rik van Riel, Ben Segall, Dietmar Eggemann,
	Juri Lelli, Linus Torvalds, Mel Gorman, Mike Galbraith,
	Morten.Rasmussen, Peter Zijlstra, Steven Rostedt,
	Thomas Gleixner, hdanton, parth, pauld, quentin.perret, srikar,
	valentin.schneider, Ingo Molnar, Borislav Petkov, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     490ba971d8b498ba3a47999ab94c6a0d1830ad41
Gitweb:        https://git.kernel.org/tip/490ba971d8b498ba3a47999ab94c6a0d1830ad41
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Fri, 18 Oct 2019 15:26:28 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 21 Oct 2019 09:40:53 +02:00

sched/fair: Clean up asym packing

Clean up asym packing to follow the default load balance behavior:

- classify the group by creating a group_asym_packing field.
- calculate the imbalance in calculate_imbalance() instead of bypassing it.

We no longer need to test the same conditions twice to detect asym packing,
and the calculation of the imbalance is consolidated in calculate_imbalance().

There are no functional changes.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Rik van Riel <riel@surriel.com>
Cc: Ben Segall <bsegall@google.com>
Cc: Dietmar Eggemann <dietmar.eggemann@arm.com>
Cc: Juri Lelli <juri.lelli@redhat.com>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Mel Gorman <mgorman@suse.de>
Cc: Mike Galbraith <efault@gmx.de>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Steven Rostedt <rostedt@goodmis.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Link: https://lkml.kernel.org/r/1571405198-27570-2-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 63 +++++++++++---------------------------------
 1 file changed, 16 insertions(+), 47 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 682a754..5ce0f71 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -7665,6 +7665,7 @@ struct sg_lb_stats {
 	unsigned int group_weight;
 	enum group_type group_type;
 	int group_no_capacity;
+	unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
 	unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
 #ifdef CONFIG_NUMA_BALANCING
 	unsigned int nr_numa_running;
@@ -8119,9 +8120,17 @@ asym_packing:
 	 * ASYM_PACKING needs to move all the work to the highest
 	 * prority CPUs in the group, therefore mark all groups
 	 * of lower priority than ourself as busy.
+	 *
+	 * This is primarily intended to used at the sibling level.  Some
+	 * cores like POWER7 prefer to use lower numbered SMT threads.  In the
+	 * case of POWER7, it can move to lower SMT modes only when higher
+	 * threads are idle.  When in lower SMT modes, the threads will
+	 * perform better since they share less core resources.  Hence when we
+	 * have idle threads, we want them to be the higher ones.
 	 */
 	if (sgs->sum_nr_running &&
 	    sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
+		sgs->group_asym_packing = 1;
 		if (!sds->busiest)
 			return true;
 
@@ -8263,51 +8272,6 @@ next_group:
 }
 
 /**
- * check_asym_packing - Check to see if the group is packed into the
- *			sched domain.
- *
- * This is primarily intended to used at the sibling level.  Some
- * cores like POWER7 prefer to use lower numbered SMT threads.  In the
- * case of POWER7, it can move to lower SMT modes only when higher
- * threads are idle.  When in lower SMT modes, the threads will
- * perform better since they share less core resources.  Hence when we
- * have idle threads, we want them to be the higher ones.
- *
- * This packing function is run on idle threads.  It checks to see if
- * the busiest CPU in this domain (core in the P7 case) has a higher
- * CPU number than the packing function is being run on.  Here we are
- * assuming lower CPU number will be equivalent to lower a SMT thread
- * number.
- *
- * Return: 1 when packing is required and a task should be moved to
- * this CPU.  The amount of the imbalance is returned in env->imbalance.
- *
- * @env: The load balancing environment.
- * @sds: Statistics of the sched_domain which is to be packed
- */
-static int check_asym_packing(struct lb_env *env, struct sd_lb_stats *sds)
-{
-	int busiest_cpu;
-
-	if (!(env->sd->flags & SD_ASYM_PACKING))
-		return 0;
-
-	if (env->idle == CPU_NOT_IDLE)
-		return 0;
-
-	if (!sds->busiest)
-		return 0;
-
-	busiest_cpu = sds->busiest->asym_prefer_cpu;
-	if (sched_asym_prefer(busiest_cpu, env->dst_cpu))
-		return 0;
-
-	env->imbalance = sds->busiest_stat.group_load;
-
-	return 1;
-}
-
-/**
  * fix_small_imbalance - Calculate the minor imbalance that exists
  *			amongst the groups of a sched_domain, during
  *			load balancing.
@@ -8391,6 +8355,11 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 	local = &sds->local_stat;
 	busiest = &sds->busiest_stat;
 
+	if (busiest->group_asym_packing) {
+		env->imbalance = busiest->group_load;
+		return;
+	}
+
 	if (busiest->group_type == group_imbalanced) {
 		/*
 		 * In the group_imb case we cannot rely on group-wide averages
@@ -8495,8 +8464,8 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
 	busiest = &sds.busiest_stat;
 
 	/* ASYM feature bypasses nice load balance check */
-	if (check_asym_packing(env, &sds))
-		return sds.busiest;
+	if (busiest->group_asym_packing)
+		goto force_balance;
 
 	/* There is no busy sibling group to pull tasks from */
 	if (!sds.busiest || busiest->sum_nr_running == 0)

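A minimal sketch of the asym-packing classification above, under the simplifying assumption that a lower CPU number always means higher priority (the real kernel consults an arch-defined priority via sched_asym_prefer()); the function names below are illustrative, not kernel APIs:

```python
def sched_asym_prefer(a, b):
    """Simplified priority model: lower-numbered CPU wins, as on
    POWER7-style SMT where lower-numbered threads are preferred."""
    return a < b

def group_asym_packing(dst_cpu, group_nr_running, asym_prefer_cpu):
    # The group is flagged for asym packing when it has running tasks
    # and the destination CPU has higher priority than the group's
    # preferred CPU: work should be packed onto dst_cpu.
    return group_nr_running > 0 and sched_asym_prefer(dst_cpu, asym_prefer_cpu)

# An idle low-numbered thread pulls from a busy higher-numbered group,
# but a higher-numbered destination never does.
print(group_asym_packing(dst_cpu=0, group_nr_running=2, asym_prefer_cpu=2))  # True
print(group_asym_packing(dst_cpu=3, group_nr_running=2, asym_prefer_cpu=2))  # False
```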

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-21  8:44   ` Vincent Guittot
@ 2019-10-21 12:56     ` Phil Auld
  2019-10-24 12:38     ` Phil Auld
  1 sibling, 0 replies; 89+ messages in thread
From: Phil Auld @ 2019-10-21 12:56 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Hillf Danton,
	Parth Shah, Rik van Riel

On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <mingo@kernel.org> wrote:
> >
> >
> > * Vincent Guittot <vincent.guittot@linaro.org> wrote:
> >
> > > Several wrong task placement have been raised with the current load
> > > balance algorithm but their fixes are not always straight forward and
> > > end up with using biased values to force migrations. A cleanup and rework
> > > of the load balance will help to handle such UCs and enable to fine grain
> > > the behavior of the scheduler for other cases.
> > >
> > > Patch 1 has already been sent separately and only consolidate asym policy
> > > in one place and help the review of the changes in load_balance.
> > >
> > > Patch 2 renames the sum of h_nr_running in stats.
> > >
> > > Patch 3 removes meaningless imbalance computation to make review of
> > > patch 4 easier.
> > >
> > > Patch 4 reworks load_balance algorithm and fixes some wrong task placement
> > > but try to stay conservative.
> > >
> > > Patch 5 add the sum of nr_running to monitor non cfs tasks and take that
> > > into account when pulling tasks.
> > >
> > > Patch 6 replaces runnable_load by load now that the signal is only used
> > > when overloaded.
> > >
> > > Patch 7 improves the spread of tasks at the 1st scheduling level.
> > >
> > > Patch 8 uses utilization instead of load in all steps of misfit task
> > > path.
> > >
> > > Patch 9 replaces runnable_load_avg by load_avg in the wake up path.
> > >
> > > Patch 10 optimizes find_idlest_group() that was using both runnable_load
> > > and load. This has not been squashed with previous patch to ease the
> > > review.
> > >
> > > Patch 11 reworks find_idlest_group() to follow the same steps as
> > > find_busiest_group()
> > >
> > > Some benchmarks results based on 8 iterations of each tests:
> > > - small arm64 dual quad cores system
> > >
> > >            tip/sched/core        w/ this patchset    improvement
> > > schedpipe      53125 +/-0.18%        53443 +/-0.52%   (+0.60%)
> > >
> > > hackbench -l (2560/#grp) -g #grp
> > >  1 groups      1.579 +/-29.16%       1.410 +/-13.46% (+10.70%)
> > >  4 groups      1.269 +/-9.69%        1.205 +/-3.27%   (+5.00%)
> > >  8 groups      1.117 +/-1.51%        1.123 +/-1.27%   (+4.57%)
> > > 16 groups      1.176 +/-1.76%        1.164 +/-2.42%   (+1.07%)
> > >
> > > Unixbench shell8
> > >   1 test     1963.48 +/-0.36%       1902.88 +/-0.73%    (-3.09%)
> > > 224 tests    2427.60 +/-0.20%       2469.80 +/-0.42%  (1.74%)
> > >
> > > - large arm64 2 nodes / 224 cores system
> > >
> > >            tip/sched/core        w/ this patchset    improvement
> > > schedpipe     124084 +/-1.36%       124445 +/-0.67%   (+0.29%)
> > >
> > > hackbench -l (256000/#grp) -g #grp
> > >   1 groups    15.305 +/-1.50%       14.001 +/-1.99%   (+8.52%)
> > >   4 groups     5.959 +/-0.70%        5.542 +/-3.76%   (+6.99%)
> > >  16 groups     3.120 +/-1.72%        3.253 +/-0.61%   (-4.92%)
> > >  32 groups     2.911 +/-0.88%        2.837 +/-1.16%   (+2.54%)
> > >  64 groups     2.805 +/-1.90%        2.716 +/-1.18%   (+3.17%)
> > > 128 groups     3.166 +/-7.71%        3.891 +/-6.77%   (+5.82%)
> > > 256 groups     3.655 +/-10.09%       3.185 +/-6.65%  (+12.87%)
> > >
> > > dbench
> > >   1 groups   328.176 +/-0.29%      330.217 +/-0.32%   (+0.62%)
> > >   4 groups   930.739 +/-0.50%      957.173 +/-0.66%   (+2.84%)
> > >  16 groups  1928.292 +/-0.36%     1978.234 +/-0.88%   (+0.92%)
> > >  32 groups  2369.348 +/-1.72%     2454.020 +/-0.90%   (+3.57%)
> > >  64 groups  2583.880 +/-3.39%     2618.860 +/-0.84%   (+1.35%)
> > > 128 groups  2256.406 +/-10.67%    2392.498 +/-2.13%   (+6.03%)
> > > 256 groups  1257.546 +/-3.81%     1674.684 +/-4.97%  (+33.17%)
> > >
> > > Unixbench shell8
> > >   1 test     6944.16 +/-0.02%    6605.82 +/-0.11%     (-4.87%)
> > > 224 tests   13499.02 +/-0.14    13637.94 +/-0.47%     (+1.03%)
> > > lkp reported a -10% regression on shell8 (1 test) for v3 which
> > > seems to be partially recovered on my platform with v4.
> > >
> > > tip/sched/core sha1:
> > >   commit 563c4f85f9f0 ("Merge branch 'sched/rt' into sched/core, to pick up -rt changes")
> > >
> > > Changes since v3:
> > > - small typo and variable ordering fixes
> > > - add some acked/reviewed tag
> > > - set 1 instead of load for migrate_misfit
> > > - use nr_h_running instead of load for asym_packing
> > > - update the optimization of find_idlest_group() and put back somes
> > >  conditions when comparing load
> > > - rework find_idlest_group() to match find_busiest_group() behavior
> > >
> > > Changes since v2:
> > > - fix typo and reorder code
> > > - some minor code fixes
> > > - optimize the find_idles_group()
> > >
> > > Not covered in this patchset:
> > > - Better detection of overloaded and fully busy state, especially for cases
> > >   when nr_running > nr CPUs.
> > >
> > > Vincent Guittot (11):
> > >   sched/fair: clean up asym packing
> > >   sched/fair: rename sum_nr_running to sum_h_nr_running
> > >   sched/fair: remove meaningless imbalance calculation
> > >   sched/fair: rework load_balance
> > >   sched/fair: use rq->nr_running when balancing load
> > >   sched/fair: use load instead of runnable load in load_balance
> > >   sched/fair: evenly spread tasks when not overloaded
> > >   sched/fair: use utilization to select misfit task
> > >   sched/fair: use load instead of runnable load in wakeup path
> > >   sched/fair: optimize find_idlest_group
> > >   sched/fair: rework find_idlest_group
> > >
> > >  kernel/sched/fair.c | 1181 +++++++++++++++++++++++++++++----------------------
> > >  1 file changed, 682 insertions(+), 499 deletions(-)
> >
> > Thanks, that's an excellent series!
> >
> > I've queued it up in sched/core with a handful of readability edits to
> > comments and changelogs.
> 
> Thanks
> 
> >
> > There are some upstreaming caveats though, I expect this series to be a
> > performance regression magnet:
> >
> >  - load_balance() and wake-up changes invariably are such: some workloads
> >    only work/scale well by accident, and if we touch the logic it might
> >    flip over into a less advantageous scheduling pattern.
> >
> >  - In particular the changes from balancing and waking on runnable load
> >    to full load that includes blocking *will* shift IO-intensive
> >    workloads that your tests don't fully capture, I believe. You also made
> >    idle balancing more aggressive in essence - which might reduce cache
> >    locality for some workloads.
> >
> > A full run on Mel Gorman's magic scalability test-suite would be super
> > useful ...
> >
> > Anyway, please be on the lookout for such performance regression reports.
> 
> Yes I monitor the regressions on the mailing list
> 

Nice to see these in! Our perf team is running tests on this version. I 
should have results in a couple days. 


Cheers,
Phil

> >
> > Also, we seem to have grown a fair amount of these TODO entries:
> >
> >   kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> >   kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> >   kernel/sched/fair.c:     * XXX illustrate
> >   kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> >   kernel/sched/fair.c: * can also include other factors [XXX].
> >   kernel/sched/fair.c: * [XXX expand on:
> >   kernel/sched/fair.c: * [XXX more?]
> >   kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> >   kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
> >   kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */
> >
> 
> I will have a look :-)
> 
> > :-)
> >
> > Thanks,
> >
> >         Ingo

-- 



* [PATCH] sched/fair: fix rework of find_idlest_group()
  2019-10-18 13:26 ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Rework find_idlest_group() tip-bot2 for Vincent Guittot
@ 2019-10-22 16:46   ` Vincent Guittot
  2019-10-23  7:50     ` Chen, Rong A
                       ` (3 more replies)
  2019-11-20 11:58   ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Qais Yousef
  2019-11-22 14:34   ` Valentin Schneider
  3 siblings, 4 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-22 16:46 UTC (permalink / raw)
  To: linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel,
	rong.a.chen, Vincent Guittot

The task, for which the scheduler looks for the idlest group of CPUs, must
be discounted from all statistics in order to get a fair comparison
between groups. This includes utilization, load, nr_running and idle_cpus.

Such unfairness can be easily highlighted with the unixbench execl 1 task.
This test continuously calls execve() and the scheduler looks for the idlest
group/CPU on which it should place the task. Because the task runs on the
local group/CPU, the latter seems already busy even if there is nothing
else running on it. As a result, the scheduler will always select another
group/CPU than the local one.

Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
Reported-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---

This recovers most of the perf regression on my system, and I have asked
Rong if he can rerun the test with the patch to check that it fixes his
system as well.

 kernel/sched/fair.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++-----
 1 file changed, 83 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a81c364..0ad4b21 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5379,6 +5379,36 @@ static unsigned long cpu_load(struct rq *rq)
 {
 	return cfs_rq_load_avg(&rq->cfs);
 }
+/*
+ * cpu_load_without - compute cpu load without any contributions from *p
+ * @cpu: the CPU which load is requested
+ * @p: the task which load should be discounted
+ *
+ * The load of a CPU is defined by the load of tasks currently enqueued on that
+ * CPU as well as tasks which are currently sleeping after an execution on that
+ * CPU.
+ *
+ * This method returns the load of the specified CPU by discounting the load of
+ * the specified task, whenever the task is currently contributing to the CPU
+ * load.
+ */
+static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p)
+{
+	struct cfs_rq *cfs_rq;
+	unsigned int load;
+
+	/* Task has no contribution or is new */
+	if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
+		return cpu_load(rq);
+
+	cfs_rq = &rq->cfs;
+	load = READ_ONCE(cfs_rq->avg.load_avg);
+
+	/* Discount task's load from CPU's load */
+	lsub_positive(&load, task_h_load(p));
+
+	return load;
+}
 
 static unsigned long capacity_of(int cpu)
 {
@@ -8117,10 +8147,55 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
 struct sg_lb_stats;
 
 /*
+ * task_running_on_cpu - return 1 if @p is running on @cpu.
+ */
+
+static unsigned int task_running_on_cpu(int cpu, struct task_struct *p)
+{
+	/* Task has no contribution or is new */
+	if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
+		return 0;
+
+	if (task_on_rq_queued(p))
+		return 1;
+
+	return 0;
+}
+
+/**
+ * idle_cpu_without - would a given CPU be idle without p ?
+ * @cpu: the processor on which idleness is tested.
+ * @p: task which should be ignored.
+ *
+ * Return: 1 if the CPU would be idle. 0 otherwise.
+ */
+static int idle_cpu_without(int cpu, struct task_struct *p)
+{
+	struct rq *rq = cpu_rq(cpu);
+
+	if ((rq->curr != rq->idle) && (rq->curr != p))
+		return 0;
+
+	/*
+	 * rq->nr_running can't be used but an updated version without the
+	 * impact of p on cpu must be used instead. The updated nr_running
+	 * must be computed and tested before calling idle_cpu_without().
+	 */
+
+#ifdef CONFIG_SMP
+	if (!llist_empty(&rq->wake_list))
+		return 0;
+#endif
+
+	return 1;
+}
+
+/*
  * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
- * @denv: The ched_domain level to look for idlest group.
+ * @sd: The sched_domain level to look for idlest group.
  * @group: sched_group whose statistics are to be updated.
  * @sgs: variable to hold the statistics for this group.
+ * @p: The task for which we look for the idlest group/CPU.
  */
 static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 					  struct sched_group *group,
@@ -8133,21 +8208,22 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 
 	for_each_cpu(i, sched_group_span(group)) {
 		struct rq *rq = cpu_rq(i);
+		unsigned int local;
 
-		sgs->group_load += cpu_load(rq);
+		sgs->group_load += cpu_load_without(rq, p);
 		sgs->group_util += cpu_util_without(i, p);
-		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+		local = task_running_on_cpu(i, p);
+		sgs->sum_h_nr_running += rq->cfs.h_nr_running - local;
 
-		nr_running = rq->nr_running;
+		nr_running = rq->nr_running - local;
 		sgs->sum_nr_running += nr_running;
 
 		/*
-		 * No need to call idle_cpu() if nr_running is not 0
+		 * No need to call idle_cpu_without() if nr_running is not 0
 		 */
-		if (!nr_running && idle_cpu(i))
+		if (!nr_running && idle_cpu_without(i, p))
 			sgs->idle_cpus++;
 
-
 	}
 
 	/* Check if task fits in the group */
-- 
2.7.4

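The discounting idea behind cpu_load_without() can be shown in isolation. This is an illustrative Python sketch with made-up numbers; lsub_positive() mirrors the kernel helper that subtracts while clamping at zero:

```python
def lsub_positive(value, sub):
    """Subtract sub from value, clamping the result at zero."""
    return value - sub if value > sub else 0

def cpu_load_without(cpu_load, task_load, task_on_this_cpu):
    """When the task being placed already contributes to this CPU's
    load, compare groups as if that contribution were gone; otherwise
    the local CPU looks busier than it really is."""
    if not task_on_this_cpu:
        return cpu_load
    return lsub_positive(cpu_load, task_load)

# The local CPU only looks busy because of the task being placed:
# without the discount, the scheduler would always pick the other CPU.
print(cpu_load_without(350, 350, True))   # 0
print(cpu_load_without(100, 350, False))  # 100
```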


* Re: [PATCH] sched/fair: fix rework of find_idlest_group()
  2019-10-22 16:46   ` [PATCH] sched/fair: fix rework of find_idlest_group() Vincent Guittot
@ 2019-10-23  7:50     ` Chen, Rong A
  2019-10-30 16:07     ` Mel Gorman
                       ` (2 subsequent siblings)
  3 siblings, 0 replies; 89+ messages in thread
From: Chen, Rong A @ 2019-10-23  7:50 UTC (permalink / raw)
  To: Vincent Guittot, linux-kernel, mingo, peterz
  Cc: pauld, valentin.schneider, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel

Tested-by: kernel test robot <rong.a.chen@intel.com>

On 10/23/2019 12:46 AM, Vincent Guittot wrote:
> The task, for which the scheduler looks for the idlest group of CPUs, must
> be discounted from all statistics in order to get a fair comparison
> between groups. This includes utilization, load, nr_running and idle_cpus.
>
> Such unfairness can be easily highlighted with the unixbench execl 1 task.
> This test continuously calls execve() and the scheduler looks for the idlest
> group/CPU on which it should place the task. Because the task runs on the
> local group/CPU, the latter seems already busy even if there is nothing
> else running on it. As a result, the scheduler will always select another
> group/CPU than the local one.
>
> Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
> Reported-by: kernel test robot <rong.a.chen@intel.com>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>
> This recovers most of the perf regression on my system and I have asked
> Rong if he can rerun the test with the patch to check that it fixes his
> system as well.
>
>   kernel/sched/fair.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++-----
>   1 file changed, 83 insertions(+), 7 deletions(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a81c364..0ad4b21 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5379,6 +5379,36 @@ static unsigned long cpu_load(struct rq *rq)
>   {
>   	return cfs_rq_load_avg(&rq->cfs);
>   }
> +/*
> + * cpu_load_without - compute cpu load without any contributions from *p
> + * @cpu: the CPU which load is requested
> + * @p: the task which load should be discounted
> + *
> + * The load of a CPU is defined by the load of tasks currently enqueued on that
> + * CPU as well as tasks which are currently sleeping after an execution on that
> + * CPU.
> + *
> + * This method returns the load of the specified CPU by discounting the load of
> + * the specified task, whenever the task is currently contributing to the CPU
> + * load.
> + */
> +static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p)
> +{
> +	struct cfs_rq *cfs_rq;
> +	unsigned int load;
> +
> +	/* Task has no contribution or is new */
> +	if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
> +		return cpu_load(rq);
> +
> +	cfs_rq = &rq->cfs;
> +	load = READ_ONCE(cfs_rq->avg.load_avg);
> +
> +	/* Discount task's load from CPU's load */
> +	lsub_positive(&load, task_h_load(p));
> +
> +	return load;
> +}
>   
>   static unsigned long capacity_of(int cpu)
>   {
> @@ -8117,10 +8147,55 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
>   struct sg_lb_stats;
>   
>   /*
> + * task_running_on_cpu - return 1 if @p is running on @cpu.
> + */
> +
> +static unsigned int task_running_on_cpu(int cpu, struct task_struct *p)
> +{
> +	/* Task has no contribution or is new */
> +	if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
> +		return 0;
> +
> +	if (task_on_rq_queued(p))
> +		return 1;
> +
> +	return 0;
> +}
> +
> +/**
> + * idle_cpu_without - would a given CPU be idle without p ?
> + * @cpu: the processor on which idleness is tested.
> + * @p: task which should be ignored.
> + *
> + * Return: 1 if the CPU would be idle. 0 otherwise.
> + */
> +static int idle_cpu_without(int cpu, struct task_struct *p)
> +{
> +	struct rq *rq = cpu_rq(cpu);
> +
> +	if ((rq->curr != rq->idle) && (rq->curr != p))
> +		return 0;
> +
> +	/*
> +	 * rq->nr_running can't be used but an updated version without the
> +	 * impact of p on cpu must be used instead. The updated nr_running
> +	 * must be computed and tested before calling idle_cpu_without().
> +	 */
> +
> +#ifdef CONFIG_SMP
> +	if (!llist_empty(&rq->wake_list))
> +		return 0;
> +#endif
> +
> +	return 1;
> +}
> +
> +/*
>    * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
> - * @denv: The ched_domain level to look for idlest group.
> + * @sd: The sched_domain level to look for idlest group.
>    * @group: sched_group whose statistics are to be updated.
>    * @sgs: variable to hold the statistics for this group.
> + * @p: The task for which we look for the idlest group/CPU.
>    */
>   static inline void update_sg_wakeup_stats(struct sched_domain *sd,
>   					  struct sched_group *group,
> @@ -8133,21 +8208,22 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
>   
>   	for_each_cpu(i, sched_group_span(group)) {
>   		struct rq *rq = cpu_rq(i);
> +		unsigned int local;
>   
> -		sgs->group_load += cpu_load(rq);
> +		sgs->group_load += cpu_load_without(rq, p);
>   		sgs->group_util += cpu_util_without(i, p);
> -		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
> +		local = task_running_on_cpu(i, p);
> +		sgs->sum_h_nr_running += rq->cfs.h_nr_running - local;
>   
> -		nr_running = rq->nr_running;
> +		nr_running = rq->nr_running - local;
>   		sgs->sum_nr_running += nr_running;
>   
>   		/*
> -		 * No need to call idle_cpu() if nr_running is not 0
> +		 * No need to call idle_cpu_without() if nr_running is not 0
>   		 */
> -		if (!nr_running && idle_cpu(i))
> +		if (!nr_running && idle_cpu_without(i, p))
>   			sgs->idle_cpus++;
>   
> -
>   	}
>   
>   	/* Check if task fits in the group */


^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-21  8:44   ` Vincent Guittot
  2019-10-21 12:56     ` Phil Auld
@ 2019-10-24 12:38     ` Phil Auld
  2019-10-24 13:46       ` Phil Auld
  1 sibling, 1 reply; 89+ messages in thread
From: Phil Auld @ 2019-10-24 12:38 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Hillf Danton,
	Parth Shah, Rik van Riel

Hi Vincent,

On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <mingo@kernel.org> wrote:
> >
> >
> > * Vincent Guittot <vincent.guittot@linaro.org> wrote:
> >
> > > Several wrong task placements have been raised with the current load
> > > balance algorithm, but their fixes are not always straightforward and
> > > end up using biased values to force migrations. A cleanup and rework
> > > of the load balance will help to handle such use cases and enable
> > > fine-grained control of the scheduler's behavior for other cases.
> > >
> > > Patch 1 has already been sent separately; it only consolidates the asym
> > > policy in one place and helps the review of the changes in load_balance.
> > >
> > > Patch 2 renames the sum of h_nr_running in stats.
> > >
> > > Patch 3 removes a meaningless imbalance computation to make the review of
> > > patch 4 easier.
> > >
> > > Patch 4 reworks the load_balance algorithm and fixes some wrong task
> > > placements while trying to stay conservative.
> > >
> > > Patch 5 adds the sum of nr_running to monitor non-CFS tasks and takes
> > > that into account when pulling tasks.
> > >
> > > Patch 6 replaces runnable_load with load now that the signal is only used
> > > when overloaded.
> > >
> > > Patch 7 improves the spread of tasks at the 1st scheduling level.
> > >
> > > Patch 8 uses utilization instead of load in all steps of the misfit task
> > > path.
> > >
> > > Patch 9 replaces runnable_load_avg with load_avg in the wake-up path.
> > >
> > > Patch 10 optimizes find_idlest_group(), which was using both runnable_load
> > > and load. It has not been squashed with the previous patch to ease the
> > > review.
> > >
> > > Patch 11 reworks find_idlest_group() to follow the same steps as
> > > find_busiest_group().
> > >
> > > Some benchmark results based on 8 iterations of each test:
> > > - small arm64 dual quad cores system
> > >
> > >            tip/sched/core        w/ this patchset    improvement
> > > schedpipe      53125 +/-0.18%        53443 +/-0.52%   (+0.60%)
> > >
> > > hackbench -l (2560/#grp) -g #grp
> > >  1 groups      1.579 +/-29.16%       1.410 +/-13.46% (+10.70%)
> > >  4 groups      1.269 +/-9.69%        1.205 +/-3.27%   (+5.00%)
> > >  8 groups      1.117 +/-1.51%        1.123 +/-1.27%   (+4.57%)
> > > 16 groups      1.176 +/-1.76%        1.164 +/-2.42%   (+1.07%)
> > >
> > > Unixbench shell8
> > >   1 test     1963.48 +/-0.36%       1902.88 +/-0.73%    (-3.09%)
> > > 224 tests    2427.60 +/-0.20%       2469.80 +/-0.42%  (1.74%)
> > >
> > > - large arm64 2 nodes / 224 cores system
> > >
> > >            tip/sched/core        w/ this patchset    improvement
> > > schedpipe     124084 +/-1.36%       124445 +/-0.67%   (+0.29%)
> > >
> > > hackbench -l (256000/#grp) -g #grp
> > >   1 groups    15.305 +/-1.50%       14.001 +/-1.99%   (+8.52%)
> > >   4 groups     5.959 +/-0.70%        5.542 +/-3.76%   (+6.99%)
> > >  16 groups     3.120 +/-1.72%        3.253 +/-0.61%   (-4.92%)
> > >  32 groups     2.911 +/-0.88%        2.837 +/-1.16%   (+2.54%)
> > >  64 groups     2.805 +/-1.90%        2.716 +/-1.18%   (+3.17%)
> > > 128 groups     3.166 +/-7.71%        3.891 +/-6.77%   (+5.82%)
> > > 256 groups     3.655 +/-10.09%       3.185 +/-6.65%  (+12.87%)
> > >
> > > dbench
> > >   1 groups   328.176 +/-0.29%      330.217 +/-0.32%   (+0.62%)
> > >   4 groups   930.739 +/-0.50%      957.173 +/-0.66%   (+2.84%)
> > >  16 groups  1928.292 +/-0.36%     1978.234 +/-0.88%   (+0.92%)
> > >  32 groups  2369.348 +/-1.72%     2454.020 +/-0.90%   (+3.57%)
> > >  64 groups  2583.880 +/-3.39%     2618.860 +/-0.84%   (+1.35%)
> > > 128 groups  2256.406 +/-10.67%    2392.498 +/-2.13%   (+6.03%)
> > > 256 groups  1257.546 +/-3.81%     1674.684 +/-4.97%  (+33.17%)
> > >
> > > Unixbench shell8
> > >   1 test     6944.16 +/-0.02     6605.82 +/-0.11      (-4.87%)
> > > 224 tests   13499.02 +/-0.14    13637.94 +/-0.47%     (+1.03%)
> > > lkp reported a -10% regression on shell8 (1 test) for v3 that
> > > seems to be partially recovered on my platform with v4.
> > >
> > > tip/sched/core sha1:
> > >   commit 563c4f85f9f0 ("Merge branch 'sched/rt' into sched/core, to pick up -rt changes")
> > >
> > > Changes since v3:
> > > - small typo and variable ordering fixes
> > > - add some acked/reviewed tag
> > > - set 1 instead of load for migrate_misfit
> > > - use nr_h_running instead of load for asym_packing
> > > - update the optimization of find_idlest_group() and put back some
> > >  conditions when comparing load
> > > - rework find_idlest_group() to match find_busiest_group() behavior
> > >
> > > Changes since v2:
> > > - fix typo and reorder code
> > > - some minor code fixes
> > > - optimize find_idlest_group()
> > >
> > > Not covered in this patchset:
> > > - Better detection of overloaded and fully busy state, especially for cases
> > >   when nr_running > nr CPUs.
> > >
> > > Vincent Guittot (11):
> > >   sched/fair: clean up asym packing
> > >   sched/fair: rename sum_nr_running to sum_h_nr_running
> > >   sched/fair: remove meaningless imbalance calculation
> > >   sched/fair: rework load_balance
> > >   sched/fair: use rq->nr_running when balancing load
> > >   sched/fair: use load instead of runnable load in load_balance
> > >   sched/fair: evenly spread tasks when not overloaded
> > >   sched/fair: use utilization to select misfit task
> > >   sched/fair: use load instead of runnable load in wakeup path
> > >   sched/fair: optimize find_idlest_group
> > >   sched/fair: rework find_idlest_group
> > >
> > >  kernel/sched/fair.c | 1181 +++++++++++++++++++++++++++++----------------------
> > >  1 file changed, 682 insertions(+), 499 deletions(-)
> >
> > Thanks, that's an excellent series!
> >
> > I've queued it up in sched/core with a handful of readability edits to
> > comments and changelogs.
> 
> Thanks
> 
> >
> > There are some upstreaming caveats though, I expect this series to be a
> > performance regression magnet:
> >
> >  - load_balance() and wake-up changes invariably are such: some workloads
> >    only work/scale well by accident, and if we touch the logic it might
> >    flip over into a less advantageous scheduling pattern.
> >
> >  - In particular the changes from balancing and waking on runnable load
> >    to full load that includes blocking *will* shift IO-intensive
> >    workloads that you tests don't fully capture I believe. You also made
> >    idle balancing more aggressive in essence - which might reduce cache
> >    locality for some workloads.
> >
> > A full run on Mel Gorman's magic scalability test-suite would be super
> > useful ...
> >
> > Anyway, please be on the lookout for such performance regression reports.
> 
> Yes I monitor the regressions on the mailing list


Our kernel perf tests show good results across the board for v4. 

The issue we hit on the 8-node system is fixed. Thanks!

As we didn't see the fairness issue I don't expect the results to be
that different on v4a (with the followup patch) but those tests are
queued up now and we'll see what they look like. 

Numbers for my specific testcase (the cgroup imbalance) are basically 
the same as I posted for v3 (plus the better 8-node numbers). I.e. this
series solves that issue. 


Cheers,
Phil


> 
> >
> > Also, we seem to have grown a fair amount of these TODO entries:
> >
> >   kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> >   kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> >   kernel/sched/fair.c:     * XXX illustrate
> >   kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> >   kernel/sched/fair.c: * can also include other factors [XXX].
> >   kernel/sched/fair.c: * [XXX expand on:
> >   kernel/sched/fair.c: * [XXX more?]
> >   kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> >   kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
> >   kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */
> >
> 
> I will have a look :-)
> 
> > :-)
> >
> > Thanks,
> >
> >         Ingo

-- 


^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-24 12:38     ` Phil Auld
@ 2019-10-24 13:46       ` Phil Auld
  2019-10-24 14:59         ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Phil Auld @ 2019-10-24 13:46 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Hillf Danton,
	Parth Shah, Rik van Riel

On Thu, Oct 24, 2019 at 08:38:44AM -0400 Phil Auld wrote:
> Hi Vincent,
> 
> On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> > On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <mingo@kernel.org> wrote:
> > >
> > >
> > > * Vincent Guittot <vincent.guittot@linaro.org> wrote:
> > >
> > > > Several wrong task placements have been raised with the current load
> > > > balance algorithm, but their fixes are not always straightforward and
> > > > end up using biased values to force migrations. A cleanup and rework
> > > > of the load balance will help to handle such use cases and enable
> > > > fine-grained control of the scheduler's behavior for other cases.
> > > >
> > > > Patch 1 has already been sent separately; it only consolidates the asym
> > > > policy in one place and helps the review of the changes in load_balance.
> > > >
> > > > Patch 2 renames the sum of h_nr_running in stats.
> > > >
> > > > Patch 3 removes a meaningless imbalance computation to make the review of
> > > > patch 4 easier.
> > > >
> > > > Patch 4 reworks the load_balance algorithm and fixes some wrong task
> > > > placements while trying to stay conservative.
> > > >
> > > > Patch 5 adds the sum of nr_running to monitor non-CFS tasks and takes
> > > > that into account when pulling tasks.
> > > >
> > > > Patch 6 replaces runnable_load with load now that the signal is only used
> > > > when overloaded.
> > > >
> > > > Patch 7 improves the spread of tasks at the 1st scheduling level.
> > > >
> > > > Patch 8 uses utilization instead of load in all steps of the misfit task
> > > > path.
> > > >
> > > > Patch 9 replaces runnable_load_avg with load_avg in the wake-up path.
> > > >
> > > > Patch 10 optimizes find_idlest_group(), which was using both runnable_load
> > > > and load. It has not been squashed with the previous patch to ease the
> > > > review.
> > > >
> > > > Patch 11 reworks find_idlest_group() to follow the same steps as
> > > > find_busiest_group().
> > > >
> > > > Some benchmark results based on 8 iterations of each test:
> > > > - small arm64 dual quad cores system
> > > >
> > > >            tip/sched/core        w/ this patchset    improvement
> > > > schedpipe      53125 +/-0.18%        53443 +/-0.52%   (+0.60%)
> > > >
> > > > hackbench -l (2560/#grp) -g #grp
> > > >  1 groups      1.579 +/-29.16%       1.410 +/-13.46% (+10.70%)
> > > >  4 groups      1.269 +/-9.69%        1.205 +/-3.27%   (+5.00%)
> > > >  8 groups      1.117 +/-1.51%        1.123 +/-1.27%   (+4.57%)
> > > > 16 groups      1.176 +/-1.76%        1.164 +/-2.42%   (+1.07%)
> > > >
> > > > Unixbench shell8
> > > >   1 test     1963.48 +/-0.36%       1902.88 +/-0.73%    (-3.09%)
> > > > 224 tests    2427.60 +/-0.20%       2469.80 +/-0.42%  (1.74%)
> > > >
> > > > - large arm64 2 nodes / 224 cores system
> > > >
> > > >            tip/sched/core        w/ this patchset    improvement
> > > > schedpipe     124084 +/-1.36%       124445 +/-0.67%   (+0.29%)
> > > >
> > > > hackbench -l (256000/#grp) -g #grp
> > > >   1 groups    15.305 +/-1.50%       14.001 +/-1.99%   (+8.52%)
> > > >   4 groups     5.959 +/-0.70%        5.542 +/-3.76%   (+6.99%)
> > > >  16 groups     3.120 +/-1.72%        3.253 +/-0.61%   (-4.92%)
> > > >  32 groups     2.911 +/-0.88%        2.837 +/-1.16%   (+2.54%)
> > > >  64 groups     2.805 +/-1.90%        2.716 +/-1.18%   (+3.17%)
> > > > 128 groups     3.166 +/-7.71%        3.891 +/-6.77%   (+5.82%)
> > > > 256 groups     3.655 +/-10.09%       3.185 +/-6.65%  (+12.87%)
> > > >
> > > > dbench
> > > >   1 groups   328.176 +/-0.29%      330.217 +/-0.32%   (+0.62%)
> > > >   4 groups   930.739 +/-0.50%      957.173 +/-0.66%   (+2.84%)
> > > >  16 groups  1928.292 +/-0.36%     1978.234 +/-0.88%   (+0.92%)
> > > >  32 groups  2369.348 +/-1.72%     2454.020 +/-0.90%   (+3.57%)
> > > >  64 groups  2583.880 +/-3.39%     2618.860 +/-0.84%   (+1.35%)
> > > > 128 groups  2256.406 +/-10.67%    2392.498 +/-2.13%   (+6.03%)
> > > > 256 groups  1257.546 +/-3.81%     1674.684 +/-4.97%  (+33.17%)
> > > >
> > > > Unixbench shell8
> > > >   1 test     6944.16 +/-0.02     6605.82 +/-0.11      (-4.87%)
> > > > 224 tests   13499.02 +/-0.14    13637.94 +/-0.47%     (+1.03%)
> > > > lkp reported a -10% regression on shell8 (1 test) for v3 that
> > > > seems to be partially recovered on my platform with v4.
> > > >
> > > > tip/sched/core sha1:
> > > >   commit 563c4f85f9f0 ("Merge branch 'sched/rt' into sched/core, to pick up -rt changes")
> > > >
> > > > Changes since v3:
> > > > - small typo and variable ordering fixes
> > > > - add some acked/reviewed tag
> > > > - set 1 instead of load for migrate_misfit
> > > > - use nr_h_running instead of load for asym_packing
> > > > - update the optimization of find_idlest_group() and put back some
> > > >  conditions when comparing load
> > > > - rework find_idlest_group() to match find_busiest_group() behavior
> > > >
> > > > Changes since v2:
> > > > - fix typo and reorder code
> > > > - some minor code fixes
> > > > - optimize find_idlest_group()
> > > >
> > > > Not covered in this patchset:
> > > > - Better detection of overloaded and fully busy state, especially for cases
> > > >   when nr_running > nr CPUs.
> > > >
> > > > Vincent Guittot (11):
> > > >   sched/fair: clean up asym packing
> > > >   sched/fair: rename sum_nr_running to sum_h_nr_running
> > > >   sched/fair: remove meaningless imbalance calculation
> > > >   sched/fair: rework load_balance
> > > >   sched/fair: use rq->nr_running when balancing load
> > > >   sched/fair: use load instead of runnable load in load_balance
> > > >   sched/fair: evenly spread tasks when not overloaded
> > > >   sched/fair: use utilization to select misfit task
> > > >   sched/fair: use load instead of runnable load in wakeup path
> > > >   sched/fair: optimize find_idlest_group
> > > >   sched/fair: rework find_idlest_group
> > > >
> > > >  kernel/sched/fair.c | 1181 +++++++++++++++++++++++++++++----------------------
> > > >  1 file changed, 682 insertions(+), 499 deletions(-)
> > >
> > > Thanks, that's an excellent series!
> > >
> > > I've queued it up in sched/core with a handful of readability edits to
> > > comments and changelogs.
> > 
> > Thanks
> > 
> > >
> > > There are some upstreaming caveats though, I expect this series to be a
> > > performance regression magnet:
> > >
> > >  - load_balance() and wake-up changes invariably are such: some workloads
> > >    only work/scale well by accident, and if we touch the logic it might
> > >    flip over into a less advantageous scheduling pattern.
> > >
> > >  - In particular the changes from balancing and waking on runnable load
> > >    to full load that includes blocking *will* shift IO-intensive
> > >    workloads that you tests don't fully capture I believe. You also made
> > >    idle balancing more aggressive in essence - which might reduce cache
> > >    locality for some workloads.
> > >
> > > A full run on Mel Gorman's magic scalability test-suite would be super
> > > useful ...
> > >
> > > Anyway, please be on the lookout for such performance regression reports.
> > 
> > Yes I monitor the regressions on the mailing list
> 
> 
> Our kernel perf tests show good results across the board for v4. 
> 
> The issue we hit on the 8-node system is fixed. Thanks!
> 
> As we didn't see the fairness issue I don't expect the results to be
> that different on v4a (with the followup patch) but those tests are
> queued up now and we'll see what they look like. 
> 

Initial results with the fix patch (v4a) show that the outlier issues on 
the 8-node system have returned.  Median time for 152 and 156 threads 
(160-cpu system) goes up significantly; the worst case goes from 340 
and 250 to 550 sec for both, and doubles from 150 to 300 for 144 
threads. These look more like the results from v3. 

We're re-running the test to get more samples. 


Other tests and systems were still fine.


Cheers,
Phil


> Numbers for my specific testcase (the cgroup imbalance) are basically 
> the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> series solves that issue. 
> 
> 
> Cheers,
> Phil
> 
> 
> > 
> > >
> > > Also, we seem to have grown a fair amount of these TODO entries:
> > >
> > >   kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > >   kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > >   kernel/sched/fair.c:     * XXX illustrate
> > >   kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > >   kernel/sched/fair.c: * can also include other factors [XXX].
> > >   kernel/sched/fair.c: * [XXX expand on:
> > >   kernel/sched/fair.c: * [XXX more?]
> > >   kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > >   kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
> > >   kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */
> > >
> > 
> > I will have a look :-)
> > 
> > > :-)
> > >
> > > Thanks,
> > >
> > >         Ingo
> 
> -- 
> 

-- 


^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-24 13:46       ` Phil Auld
@ 2019-10-24 14:59         ` Vincent Guittot
  2019-10-25 13:33           ` Phil Auld
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-10-24 14:59 UTC (permalink / raw)
  To: Phil Auld
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Hillf Danton,
	Parth Shah, Rik van Riel

On Thu, 24 Oct 2019 at 15:47, Phil Auld <pauld@redhat.com> wrote:
>
> On Thu, Oct 24, 2019 at 08:38:44AM -0400 Phil Auld wrote:
> > Hi Vincent,
> >
> > On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> > > On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <mingo@kernel.org> wrote:
> > > >

[...]

> > > > A full run on Mel Gorman's magic scalability test-suite would be super
> > > > useful ...
> > > >
> > > > Anyway, please be on the lookout for such performance regression reports.
> > >
> > > Yes I monitor the regressions on the mailing list
> >
> >
> > Our kernel perf tests show good results across the board for v4.
> >
> > The issue we hit on the 8-node system is fixed. Thanks!
> >
> > As we didn't see the fairness issue I don't expect the results to be
> > that different on v4a (with the followup patch) but those tests are
> > queued up now and we'll see what they look like.
> >
>
> Initial results with the fix patch (v4a) show that the outlier issues on
> the 8-node system have returned.  Median time for 152 and 156 threads
> (160-cpu system) goes up significantly; the worst case goes from 340
> and 250 to 550 sec for both, and doubles from 150 to 300 for 144

For v3, you had a 4x slowdown IIRC.


> threads. These look more like the results from v3.

OK. For v3, we were not sure that your UC triggered the slow path, but
it seems that we have confirmation now.
The problem happens only on this 8-node 160-core system, doesn't it?

The fix favors the local group, so your UC seems to prefer spreading
tasks at wake up.
If you have any traces that you can share, they could help to
understand what's going on. I will try to reproduce the problem on my
system.

>
> We're re-running the test to get more samples.

Thanks
Vincent

>
>
> Other tests and systems were still fine.
>
>
> Cheers,
> Phil
>
>
> > Numbers for my specific testcase (the cgroup imbalance) are basically
> > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > series solves that issue.
> >
> >
> > Cheers,
> > Phil
> >
> >
> > >
> > > >
> > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > >
> > > >   kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > >   kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > >   kernel/sched/fair.c:     * XXX illustrate
> > > >   kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > >   kernel/sched/fair.c: * can also include other factors [XXX].
> > > >   kernel/sched/fair.c: * [XXX expand on:
> > > >   kernel/sched/fair.c: * [XXX more?]
> > > >   kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > >   kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
> > > >   kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */
> > > >
> > >
> > > I will have a look :-)
> > >
> > > > :-)
> > > >
> > > > Thanks,
> > > >
> > > >         Ingo
> >
> > --
> >
>
> --
>

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-24 14:59         ` Vincent Guittot
@ 2019-10-25 13:33           ` Phil Auld
  2019-10-28 13:03             ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Phil Auld @ 2019-10-25 13:33 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Hillf Danton,
	Parth Shah, Rik van Riel


Hi Vincent,


On Thu, Oct 24, 2019 at 04:59:05PM +0200 Vincent Guittot wrote:
> On Thu, 24 Oct 2019 at 15:47, Phil Auld <pauld@redhat.com> wrote:
> >
> > On Thu, Oct 24, 2019 at 08:38:44AM -0400 Phil Auld wrote:
> > > Hi Vincent,
> > >
> > > On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> > > > On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <mingo@kernel.org> wrote:
> > > > >
> 
> [...]
> 
> > > > > A full run on Mel Gorman's magic scalability test-suite would be super
> > > > > useful ...
> > > > >
> > > > > Anyway, please be on the lookout for such performance regression reports.
> > > >
> > > > Yes I monitor the regressions on the mailing list
> > >
> > >
> > > Our kernel perf tests show good results across the board for v4.
> > >
> > > The issue we hit on the 8-node system is fixed. Thanks!
> > >
> > > As we didn't see the fairness issue I don't expect the results to be
> > > that different on v4a (with the followup patch) but those tests are
> > > queued up now and we'll see what they look like.
> > >
> >
> > Initial results with the fix patch (v4a) show that the outlier issues on
> > the 8-node system have returned.  Median time for 152 and 156 threads
> > (160-cpu system) goes up significantly; the worst case goes from 340
> > and 250 to 550 sec for both, and doubles from 150 to 300 for 144
> 
> For v3, you had a 4x slowdown IIRC.
> 

Sorry, that was a confusing change of data point :)

 
That 4x was the normal versus group result for v3.  I.e. the usual 
view of this test case's data. 

These numbers above are the group vs group difference between 
v4 and v4a. 

The similar data points are that for v4 there was no difference 
in performance between group and normal at 152 threads, and a 35% 
drop-off from normal to group at 156. 

With v4a there was a 100% drop (2x slowdown) from normal to group at 152 
and close to that at 156 (~75-80% drop-off).

So, yes, not as severe as v3, but significantly off from v4. 

> 
> > threads. These look more like the results from v3.
> 
> OK. For v3, we were not sure that your UC triggered the slow path, but
> it seems that we have confirmation now.
> The problem happens only on this 8-node 160-core system, doesn't it?

Yes. It only shows up now on this 8-node system.

> 
> The fix favors the local group, so your UC seems to prefer spreading
> tasks at wake up.
> If you have any traces that you can share, they could help to
> understand what's going on. I will try to reproduce the problem on my
> system.

I'm not actually sure the fix here is causing this. Looking at the data 
more closely I see similar imbalances on v4, v4a and v3. 

When you say slow versus fast wakeup paths what do you mean? I'm still
learning my way around all this code. 

This particular test is specifically designed to highlight the imbalance 
caused by the use of group-scheduler-defined load and averages. The threads
are mostly CPU bound but will join up every time step. So if each thread
more or less gets its own CPU (we run with fewer threads than CPUs) they
all finish the timestep at about the same time.  If threads are stuck
sharing cpus then those finish later and the whole computation is slowed
down.  In addition to the NAS benchmark threads there are 2 stress CPU
burners. These are either run in their own cgroups (thus having full "load")
or all in the same cgroup with the benchmark, thus all having tiny "loads".

In this system, there are 20 cpus per node. We track the average number of 
benchmark threads running in each node. Generally, for a balanced case 
we should not have any node much over 20, and indeed in the normal case 
(everyone in one cgroup) we see pretty nice balance. In the cgroup case we are 
still seeing numbers much higher than 20.

Here are some eye charts:

These are the GROUP numbers from that machine on the v1 series (I don't have the 
NORMAL lines handy for this one):
lu.C.x_152_GROUP_1 Average   18.08  18.17  19.58  19.29  19.25  17.50  21.46  18.67
lu.C.x_152_GROUP_2 Average   17.12  17.48  17.88  17.62  19.57  17.31  23.00  22.02
lu.C.x_152_GROUP_3 Average   17.82  17.97  18.12  18.18  24.55  22.18  16.97  16.21
lu.C.x_152_GROUP_4 Average   18.47  19.08  18.50  18.66  21.45  25.00  15.47  15.37
lu.C.x_152_GROUP_5 Average   20.46  20.71  27.38  24.75  17.06  16.65  12.81  12.19

lu.C.x_156_GROUP_1 Average   18.70  18.80  20.25  19.50  20.45  20.30  19.55  18.45
lu.C.x_156_GROUP_2 Average   19.29  19.90  17.71  18.10  20.76  21.57  19.81  18.86
lu.C.x_156_GROUP_3 Average   25.09  29.19  21.83  21.33  18.67  18.57  11.03  10.29
lu.C.x_156_GROUP_4 Average   18.60  19.10  19.20  18.70  20.30  20.00  19.70  20.40
lu.C.x_156_GROUP_5 Average   18.58  18.95  18.63  18.1   17.32  19.37  23.92  21.08

There are a couple that did not balance well but the overall results were good. 

This is v4:
lu.C.x_152_GROUP_1   Average    18.80  19.25  21.95  21.25  17.55  17.25  17.85  18.10
lu.C.x_152_GROUP_2   Average    20.57  20.62  19.76  17.76  18.95  18.33  18.52  17.48
lu.C.x_152_GROUP_3   Average    15.39  12.22  13.96  12.19  25.51  28.91  21.88  21.94
lu.C.x_152_GROUP_4   Average    20.30  19.75  20.75  19.45  18.15  17.80  18.15  17.65
lu.C.x_152_GROUP_5   Average    15.13  12.21  13.63  11.39  25.42  30.21  21.55  22.46
lu.C.x_152_NORMAL_1  Average    17.00  16.88  19.52  18.28  19.24  19.08  21.08  20.92
lu.C.x_152_NORMAL_2  Average    18.61  16.56  18.56  17.00  20.56  20.28  20.00  20.44
lu.C.x_152_NORMAL_3  Average    19.27  19.77  21.23  20.86  18.00  17.68  17.73  17.45
lu.C.x_152_NORMAL_4  Average    20.24  19.33  21.33  21.10  17.33  18.43  17.57  16.67
lu.C.x_152_NORMAL_5  Average    21.27  20.36  20.86  19.36  17.50  17.77  17.32  17.55

lu.C.x_156_GROUP_1   Average    18.60  18.68  21.16  23.40  18.96  19.72  17.76  17.72
lu.C.x_156_GROUP_2   Average    22.76  21.71  20.55  21.32  18.18  16.42  17.58  17.47
lu.C.x_156_GROUP_3   Average    13.62  11.52  15.54  15.58  25.42  28.54  23.22  22.56
lu.C.x_156_GROUP_4   Average    17.73  18.14  21.95  21.82  19.73  19.68  18.55  18.41
lu.C.x_156_GROUP_5   Average    15.32  15.14  17.30  17.11  23.59  25.75  20.77  21.02
lu.C.x_156_NORMAL_1  Average    19.06  18.72  19.56  18.72  19.72  21.28  19.44  19.50
lu.C.x_156_NORMAL_2  Average    20.25  19.86  22.61  23.18  18.32  17.93  16.39  17.46
lu.C.x_156_NORMAL_3  Average    18.84  17.88  19.24  17.76  21.04  20.64  20.16  20.44
lu.C.x_156_NORMAL_4  Average    20.67  19.44  20.74  22.15  18.89  18.85  18.00  17.26
lu.C.x_156_NORMAL_5  Average    20.12  19.65  24.12  24.15  17.40  16.62  17.10  16.83

This one is better overall, but there are some mid 20s and 152_GROUP_5 is pretty bad.  


This is v4a
lu.C.x_152_GROUP_1   Average    28.64  34.49  23.60  24.48  10.35  11.99  8.36  10.09
lu.C.x_152_GROUP_2   Average    17.36  17.33  15.48  13.12  24.90  24.43  18.55  20.83
lu.C.x_152_GROUP_3   Average    20.00  19.92  20.21  21.33  18.50  18.50  16.50  17.04
lu.C.x_152_GROUP_4   Average    18.07  17.87  18.40  17.87  23.07  22.73  17.60  16.40
lu.C.x_152_GROUP_5   Average    25.50  24.69  21.48  21.46  16.85  16.00  14.06  11.96
lu.C.x_152_NORMAL_1  Average    22.27  20.77  20.60  19.83  16.73  17.53  15.83  18.43
lu.C.x_152_NORMAL_2  Average    19.83  20.81  23.06  21.97  17.28  16.92  15.83  16.31
lu.C.x_152_NORMAL_3  Average    17.85  19.31  18.85  19.08  19.00  19.31  19.08  19.54
lu.C.x_152_NORMAL_4  Average    18.87  18.13  19.00  20.27  18.20  18.67  19.73  19.13
lu.C.x_152_NORMAL_5  Average    18.16  18.63  18.11  17.00  19.79  20.63  19.47  20.21

lu.C.x_156_GROUP_1   Average    24.96  26.15  21.78  21.48  18.52  19.11  12.98  11.02
lu.C.x_156_GROUP_2   Average    18.69  19.00  18.65  18.42  20.50  20.46  19.85  20.42
lu.C.x_156_GROUP_3   Average    24.32  23.79  20.82  20.95  16.63  16.61  18.47  14.42
lu.C.x_156_GROUP_4   Average    18.27  18.34  14.88  16.07  27.00  21.93  20.56  18.95
lu.C.x_156_GROUP_5   Average    19.18  20.99  33.43  29.57  15.63  15.54  12.13  9.53
lu.C.x_156_NORMAL_1  Average    21.60  23.37  20.11  19.60  17.11  17.83  18.17  18.20
lu.C.x_156_NORMAL_2  Average    21.00  20.54  19.88  18.79  17.62  18.67  19.29  20.21
lu.C.x_156_NORMAL_3  Average    19.50  19.94  20.12  18.62  19.88  19.50  19.00  19.44
lu.C.x_156_NORMAL_4  Average    20.62  19.72  20.03  22.17  18.21  18.55  18.45  18.24
lu.C.x_156_NORMAL_5  Average    19.64  19.86  21.46  22.43  17.21  17.89  18.96  18.54


This shows much more imbalance in the GROUP case. There are some single digits 
and some 30s.

For comparison here are some from my 4-node (80 cpu) system:

v4
lu.C.x_76_GROUP_1.ps.numa.hist   Average    19.58  17.67  18.25  20.50
lu.C.x_76_GROUP_2.ps.numa.hist   Average    19.08  19.17  17.67  20.08
lu.C.x_76_GROUP_3.ps.numa.hist   Average    19.42  18.58  18.42  19.58
lu.C.x_76_NORMAL_1.ps.numa.hist  Average    20.50  17.33  19.08  19.08
lu.C.x_76_NORMAL_2.ps.numa.hist  Average    19.45  18.73  19.27  18.55


v4a
lu.C.x_76_GROUP_1.ps.numa.hist   Average    19.46  19.15  18.62  18.77
lu.C.x_76_GROUP_2.ps.numa.hist   Average    19.00  18.58  17.75  20.67
lu.C.x_76_GROUP_3.ps.numa.hist   Average    19.08  17.08  20.08  19.77
lu.C.x_76_NORMAL_1.ps.numa.hist  Average    18.67  18.93  18.60  19.80
lu.C.x_76_NORMAL_2.ps.numa.hist  Average    19.08  18.67  18.58  19.67

Nicely balanced in both kernels and normal and group are basically the 
same. 

There's still something between v1 and v4 on that 8-node system that is 
still illustrating the original problem.  On our other test systems this
series really works nicely to solve this problem. And even if we can't get
to the bottom of this, it's a significant improvement.


Here is v3 for the 8-node system
lu.C.x_152_GROUP_1  Average    17.52  16.86  17.90  18.52  20.00  19.00  22.00  20.19
lu.C.x_152_GROUP_2  Average    15.70  15.04  15.65  15.72  23.30  28.98  20.09  17.52
lu.C.x_152_GROUP_3  Average    27.72  32.79  22.89  22.62  11.01  12.90  12.14  9.93
lu.C.x_152_GROUP_4  Average    18.13  18.87  18.40  17.87  18.80  19.93  20.40  19.60
lu.C.x_152_GROUP_5  Average    24.14  26.46  20.92  21.43  14.70  16.05  15.14  13.16
lu.C.x_152_NORMAL_1 Average    21.03  22.43  20.27  19.97  18.37  18.80  16.27  14.87
lu.C.x_152_NORMAL_2 Average    19.24  18.29  18.41  17.41  19.71  19.00  20.29  19.65
lu.C.x_152_NORMAL_3 Average    19.43  20.00  19.05  20.24  18.76  17.38  18.52  18.62
lu.C.x_152_NORMAL_4 Average    17.19  18.25  17.81  18.69  20.44  19.75  20.12  19.75
lu.C.x_152_NORMAL_5 Average    19.25  19.56  19.12  19.56  19.38  19.38  18.12  17.62

lu.C.x_156_GROUP_1  Average    18.62  19.31  18.38  18.77  19.88  21.35  19.35  20.35
lu.C.x_156_GROUP_2  Average    15.58  12.72  14.96  14.83  20.59  19.35  29.75  28.22
lu.C.x_156_GROUP_3  Average    20.05  18.74  19.63  18.32  20.26  20.89  19.53  18.58
lu.C.x_156_GROUP_4  Average    14.77  11.42  13.01  10.09  27.05  33.52  23.16  22.98
lu.C.x_156_GROUP_5  Average    14.94  11.45  12.77  10.52  28.01  33.88  22.37  22.05
lu.C.x_156_NORMAL_1 Average    20.00  20.58  18.47  18.68  19.47  19.74  19.42  19.63
lu.C.x_156_NORMAL_2 Average    18.52  18.48  18.83  18.43  20.57  20.48  20.61  20.09
lu.C.x_156_NORMAL_3 Average    20.27  20.00  20.05  21.18  19.55  19.00  18.59  17.36
lu.C.x_156_NORMAL_4 Average    19.65  19.60  20.25  20.75  19.35  20.10  19.00  17.30
lu.C.x_156_NORMAL_5 Average    19.79  19.67  20.62  22.42  18.42  18.00  17.67  19.42


I'll try to find pre-patched results for this 8 node system.  Just to keep things
together for reference here is the 4-node system before this re-work series.

lu.C.x_76_GROUP_1  Average    15.84  24.06  23.37  12.73
lu.C.x_76_GROUP_2  Average    15.29  22.78  22.49  15.45
lu.C.x_76_GROUP_3  Average    13.45  23.90  22.97  15.68
lu.C.x_76_NORMAL_1 Average    18.31  19.54  19.54  18.62
lu.C.x_76_NORMAL_2 Average    19.73  19.18  19.45  17.64

This produced a 4.5x slowdown for the group runs versus the nicely balanced
normal runs.  



I can try to get traces but this is not my system so it may take a little
while. I've found that the existing trace points don't give enough information
to see what is happening in this problem. But the visualization in kernelshark
does show the problem pretty well. Do you want just the existing sched tracepoints
or should I update some of the traceprintks I used in the earlier traces?



Cheers,
Phil  


> 
> >
> > We're re-running the test to get more samples.
> 
> Thanks
> Vincent
> 
> >
> >
> > Other tests and systems were still fine.
> >
> >
> > Cheers,
> > Phil
> >
> >
> > > Numbers for my specific testcase (the cgroup imbalance) are basically
> > > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > > series solves that issue.
> > >
> > >
> > > Cheers,
> > > Phil
> > >
> > >
> > > >
> > > > >
> > > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > > >
> > > > >   kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > > >   kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > > >   kernel/sched/fair.c:     * XXX illustrate
> > > > >   kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > > >   kernel/sched/fair.c: * can also include other factors [XXX].
> > > > >   kernel/sched/fair.c: * [XXX expand on:
> > > > >   kernel/sched/fair.c: * [XXX more?]
> > > > >   kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > > >   kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
> > > > >   kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */
> > > > >
> > > >
> > > > I will have a look :-)
> > > >
> > > > > :-)
> > > > >
> > > > > Thanks,
> > > > >
> > > > >         Ingo
> > >
> > > --
> > >
> >
> > --
> >

-- 


^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-25 13:33           ` Phil Auld
@ 2019-10-28 13:03             ` Vincent Guittot
  2019-10-30 14:39               ` Phil Auld
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-10-28 13:03 UTC (permalink / raw)
  To: Phil Auld
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Hillf Danton,
	Parth Shah, Rik van Riel

Hi Phil,

On Fri, 25 Oct 2019 at 15:33, Phil Auld <pauld@redhat.com> wrote:
>
>
> Hi Vincent,
>
>
> On Thu, Oct 24, 2019 at 04:59:05PM +0200 Vincent Guittot wrote:
> > On Thu, 24 Oct 2019 at 15:47, Phil Auld <pauld@redhat.com> wrote:
> > >
> > > On Thu, Oct 24, 2019 at 08:38:44AM -0400 Phil Auld wrote:
> > > > Hi Vincent,
> > > >
> > > > On Mon, Oct 21, 2019 at 10:44:20AM +0200 Vincent Guittot wrote:
> > > > > On Mon, 21 Oct 2019 at 09:50, Ingo Molnar <mingo@kernel.org> wrote:
> > > > > >
> >
> > [...]
> >
> > > > > > A full run on Mel Gorman's magic scalability test-suite would be super
> > > > > > useful ...
> > > > > >
> > > > > > Anyway, please be on the lookout for such performance regression reports.
> > > > >
> > > > > Yes I monitor the regressions on the mailing list
> > > >
> > > >
> > > > Our kernel perf tests show good results across the board for v4.
> > > >
> > > > The issue we hit on the 8-node system is fixed. Thanks!
> > > >
> > > > As we didn't see the fairness issue I don't expect the results to be
> > > > that different on v4a (with the followup patch) but those tests are
> > > > queued up now and we'll see what they look like.
> > > >
> > >
> > > Initial results with fix patch (v4a) show that the outlier issues on
> > > the 8-node system have returned.  Median time for 152 and 156 threads
> > > (160 cpu system) goes up significantly and worst case goes from 340
> > > and 250 to 550 sec. for both. And doubles from 150 to 300 for 144
> >
> > For v3, you had a x4 slow down IIRC.
> >
>
> Sorry, that was a confusing change of data point :)
>
>
> That 4x was the normal versus group result for v3.  I.e. the usual
> view of this test case's data.
>
> These numbers above are the group vs group difference between
> v4 and v4a.

ok. Thanks for the clarification

>
> The similar data points are that for v4 there was no difference
> in performance between group and normal at 152 threads and a 35%
> drop off from normal to group at 156.
>
> With v4a there was 100% drop (2x slowdown) normal to group at 152
> and close to that at 156 (~75-80% drop off).
>
> So, yes, not as severe as v3. But significantly off from v4.

Thanks for the details

>
> >
> > > threads. These look more like the results from v3.
> >
> > OK. For v3, we were not sure that your UC triggers the slow path but
> > it seems that we have the confirmation now.
> > The problem happens only for this  8 node 160 cores system, isn't it ?
>
> Yes. It only shows up now on this 8-node system.

This input could mean that the system reaches a particular level of
utilization and load that is close to the threshold between 2
different behaviors, like the spare-capacity and fully_busy/overloaded
cases. On the other hand, there are fewer threads than CPUs in your
UCs, so at least one group at NUMA level should be tagged as
has_spare_capacity and should pull tasks.

>
> >
> > The fix favors the local group so your UC seems to prefer spreading
> > tasks at wake up
> > If you have any traces that you can share, this could help to
> > understand what's going on. I will try to reproduce the problem on my
> > system
>
> I'm not actually sure the fix here is causing this. Looking at the data
> more closely I see similar imbalances on v4, v4a and v3.
>
> When you say slow versus fast wakeup paths what do you mean? I'm still
> learning my way around all this code.

When a task wakes up, we can decide to:
- speed up the wakeup by shortening the list of CPUs and comparing only
prev_cpu vs this_cpu (in fact the groups of CPUs that share their
respective LLC). That's the fast wakeup path, which is used most of the
time during a wakeup.
- or start to find the idlest CPU of the system and scan all domains.
That's the slow path, which is used for new tasks or when a task wakes
up a lot of other tasks at the same time.


>
> This particular test is specifically designed to highlight the imbalance
> cause by the use of group scheduler defined load and averages. The threads
> are mostly CPU bound but will join up every time step. So if each thread

ok, the fact that they join up might be the root cause of your problem.
They will be woken up at the same time, by the same task and CPU.

> more or less gets its own CPU (we run with fewer threads than CPUs) they
> all finish the timestep at about the same time.  If threads are stuck
> sharing cpus then those finish later and the whole computation is slowed
> down.  In addition to the NAS benchmark threads there are 2 stress CPU
> burners. These are either run in their own cgroups (thus having full "load")
> or all in the same cgroup with the benchmark, thus all having tiny "loads".
>
> In this system, there are 20 cpus per node. We track average number of
> benchmark threads running in each node. Generally for a balanced case
> we should not have any much over 20 and indeed in the normal case (every
> one in one cgroup) we see pretty nice balance. In the cgroup case we are
> still seeing numbers much higher than 20.
>
> Here are some eye charts:
>
> This is the GROUP numbers from that machine on the v1 series (I don't have the
> NORMAL lines handy for this one):
> lu.C.x_152_GROUP_1 Average   18.08  18.17  19.58  19.29  19.25  17.50  21.46  18.67
> lu.C.x_152_GROUP_2 Average   17.12  17.48  17.88  17.62  19.57  17.31  23.00  22.02
> lu.C.x_152_GROUP_3 Average   17.82  17.97  18.12  18.18  24.55  22.18  16.97  16.21
> lu.C.x_152_GROUP_4 Average   18.47  19.08  18.50  18.66  21.45  25.00  15.47  15.37
> lu.C.x_152_GROUP_5 Average   20.46  20.71  27.38  24.75  17.06  16.65  12.81  12.19
>
> lu.C.x_156_GROUP_1 Average   18.70  18.80  20.25  19.50  20.45  20.30  19.55  18.45
> lu.C.x_156_GROUP_2 Average   19.29  19.90  17.71  18.10  20.76  21.57  19.81  18.86
> lu.C.x_156_GROUP_3 Average   25.09  29.19  21.83  21.33  18.67  18.57  11.03  10.29
> lu.C.x_156_GROUP_4 Average   18.60  19.10  19.20  18.70  20.30  20.00  19.70  20.40
> lu.C.x_156_GROUP_5 Average   18.58  18.95  18.63  18.1   17.32  19.37  23.92  21.08
>
> There are a couple that did not balance well but the overall results were good.
>
> This is v4:
> lu.C.x_152_GROUP_1   Average    18.80  19.25  21.95  21.25  17.55  17.25  17.85  18.10
> lu.C.x_152_GROUP_2   Average    20.57  20.62  19.76  17.76  18.95  18.33  18.52  17.48
> lu.C.x_152_GROUP_3   Average    15.39  12.22  13.96  12.19  25.51  28.91  21.88  21.94
> lu.C.x_152_GROUP_4   Average    20.30  19.75  20.75  19.45  18.15  17.80  18.15  17.65
> lu.C.x_152_GROUP_5   Average    15.13  12.21  13.63  11.39  25.42  30.21  21.55  22.46
> lu.C.x_152_NORMAL_1  Average    17.00  16.88  19.52  18.28  19.24  19.08  21.08  20.92
> lu.C.x_152_NORMAL_2  Average    18.61  16.56  18.56  17.00  20.56  20.28  20.00  20.44
> lu.C.x_152_NORMAL_3  Average    19.27  19.77  21.23  20.86  18.00  17.68  17.73  17.45
> lu.C.x_152_NORMAL_4  Average    20.24  19.33  21.33  21.10  17.33  18.43  17.57  16.67
> lu.C.x_152_NORMAL_5  Average    21.27  20.36  20.86  19.36  17.50  17.77  17.32  17.55
>
> lu.C.x_156_GROUP_1   Average    18.60  18.68  21.16  23.40  18.96  19.72  17.76  17.72
> lu.C.x_156_GROUP_2   Average    22.76  21.71  20.55  21.32  18.18  16.42  17.58  17.47
> lu.C.x_156_GROUP_3   Average    13.62  11.52  15.54  15.58  25.42  28.54  23.22  22.56
> lu.C.x_156_GROUP_4   Average    17.73  18.14  21.95  21.82  19.73  19.68  18.55  18.41
> lu.C.x_156_GROUP_5   Average    15.32  15.14  17.30  17.11  23.59  25.75  20.77  21.02
> lu.C.x_156_NORMAL_1  Average    19.06  18.72  19.56  18.72  19.72  21.28  19.44  19.50
> lu.C.x_156_NORMAL_2  Average    20.25  19.86  22.61  23.18  18.32  17.93  16.39  17.46
> lu.C.x_156_NORMAL_3  Average    18.84  17.88  19.24  17.76  21.04  20.64  20.16  20.44
> lu.C.x_156_NORMAL_4  Average    20.67  19.44  20.74  22.15  18.89  18.85  18.00  17.26
> lu.C.x_156_NORMAL_5  Average    20.12  19.65  24.12  24.15  17.40  16.62  17.10  16.83
>
> This one is better overall, but there are some mid 20s and 152_GROUP_5 is pretty bad.
>
>
> This is v4a
> lu.C.x_152_GROUP_1   Average    28.64  34.49  23.60  24.48  10.35  11.99  8.36  10.09
> lu.C.x_152_GROUP_2   Average    17.36  17.33  15.48  13.12  24.90  24.43  18.55  20.83
> lu.C.x_152_GROUP_3   Average    20.00  19.92  20.21  21.33  18.50  18.50  16.50  17.04
> lu.C.x_152_GROUP_4   Average    18.07  17.87  18.40  17.87  23.07  22.73  17.60  16.40
> lu.C.x_152_GROUP_5   Average    25.50  24.69  21.48  21.46  16.85  16.00  14.06  11.96
> lu.C.x_152_NORMAL_1  Average    22.27  20.77  20.60  19.83  16.73  17.53  15.83  18.43
> lu.C.x_152_NORMAL_2  Average    19.83  20.81  23.06  21.97  17.28  16.92  15.83  16.31
> lu.C.x_152_NORMAL_3  Average    17.85  19.31  18.85  19.08  19.00  19.31  19.08  19.54
> lu.C.x_152_NORMAL_4  Average    18.87  18.13  19.00  20.27  18.20  18.67  19.73  19.13
> lu.C.x_152_NORMAL_5  Average    18.16  18.63  18.11  17.00  19.79  20.63  19.47  20.21
>
> lu.C.x_156_GROUP_1   Average    24.96  26.15  21.78  21.48  18.52  19.11  12.98  11.02
> lu.C.x_156_GROUP_2   Average    18.69  19.00  18.65  18.42  20.50  20.46  19.85  20.42
> lu.C.x_156_GROUP_3   Average    24.32  23.79  20.82  20.95  16.63  16.61  18.47  14.42
> lu.C.x_156_GROUP_4   Average    18.27  18.34  14.88  16.07  27.00  21.93  20.56  18.95
> lu.C.x_156_GROUP_5   Average    19.18  20.99  33.43  29.57  15.63  15.54  12.13  9.53
> lu.C.x_156_NORMAL_1  Average    21.60  23.37  20.11  19.60  17.11  17.83  18.17  18.20
> lu.C.x_156_NORMAL_2  Average    21.00  20.54  19.88  18.79  17.62  18.67  19.29  20.21
> lu.C.x_156_NORMAL_3  Average    19.50  19.94  20.12  18.62  19.88  19.50  19.00  19.44
> lu.C.x_156_NORMAL_4  Average    20.62  19.72  20.03  22.17  18.21  18.55  18.45  18.24
> lu.C.x_156_NORMAL_5  Average    19.64  19.86  21.46  22.43  17.21  17.89  18.96  18.54
>
>
> This shows much more imbalance in the GROUP case. There are some single digits
> and some 30s.
>
> For comparison here are some from my 4-node (80 cpu) system:
>
> v4
> lu.C.x_76_GROUP_1.ps.numa.hist   Average    19.58  17.67  18.25  20.50
> lu.C.x_76_GROUP_2.ps.numa.hist   Average    19.08  19.17  17.67  20.08
> lu.C.x_76_GROUP_3.ps.numa.hist   Average    19.42  18.58  18.42  19.58
> lu.C.x_76_NORMAL_1.ps.numa.hist  Average    20.50  17.33  19.08  19.08
> lu.C.x_76_NORMAL_2.ps.numa.hist  Average    19.45  18.73  19.27  18.55
>
>
> v4a
> lu.C.x_76_GROUP_1.ps.numa.hist   Average    19.46  19.15  18.62  18.77
> lu.C.x_76_GROUP_2.ps.numa.hist   Average    19.00  18.58  17.75  20.67
> lu.C.x_76_GROUP_3.ps.numa.hist   Average    19.08  17.08  20.08  19.77
> lu.C.x_76_NORMAL_1.ps.numa.hist  Average    18.67  18.93  18.60  19.80
> lu.C.x_76_NORMAL_2.ps.numa.hist  Average    19.08  18.67  18.58  19.67
>
> Nicely balanced in both kernels and normal and group are basically the
> same.

The fact that the 4-node system works well but not the 8-node one is a
bit surprising, unless this means there are more NUMA levels in the
sched_domain topology.
Could you give us more details about the sched domain topology?

>
> There's still something between v1 and v4 on that 8-node system that is
> still illustrating the original problem.  On our other test systems this
> series really works nicely to solve this problem. And even if we can't get
> to the bottom of this, it's a significant improvement.
>
>
> Here is v3 for the 8-node system
> lu.C.x_152_GROUP_1  Average    17.52  16.86  17.90  18.52  20.00  19.00  22.00  20.19
> lu.C.x_152_GROUP_2  Average    15.70  15.04  15.65  15.72  23.30  28.98  20.09  17.52
> lu.C.x_152_GROUP_3  Average    27.72  32.79  22.89  22.62  11.01  12.90  12.14  9.93
> lu.C.x_152_GROUP_4  Average    18.13  18.87  18.40  17.87  18.80  19.93  20.40  19.60
> lu.C.x_152_GROUP_5  Average    24.14  26.46  20.92  21.43  14.70  16.05  15.14  13.16
> lu.C.x_152_NORMAL_1 Average    21.03  22.43  20.27  19.97  18.37  18.80  16.27  14.87
> lu.C.x_152_NORMAL_2 Average    19.24  18.29  18.41  17.41  19.71  19.00  20.29  19.65
> lu.C.x_152_NORMAL_3 Average    19.43  20.00  19.05  20.24  18.76  17.38  18.52  18.62
> lu.C.x_152_NORMAL_4 Average    17.19  18.25  17.81  18.69  20.44  19.75  20.12  19.75
> lu.C.x_152_NORMAL_5 Average    19.25  19.56  19.12  19.56  19.38  19.38  18.12  17.62
>
> lu.C.x_156_GROUP_1  Average    18.62  19.31  18.38  18.77  19.88  21.35  19.35  20.35
> lu.C.x_156_GROUP_2  Average    15.58  12.72  14.96  14.83  20.59  19.35  29.75  28.22
> lu.C.x_156_GROUP_3  Average    20.05  18.74  19.63  18.32  20.26  20.89  19.53  18.58
> lu.C.x_156_GROUP_4  Average    14.77  11.42  13.01  10.09  27.05  33.52  23.16  22.98
> lu.C.x_156_GROUP_5  Average    14.94  11.45  12.77  10.52  28.01  33.88  22.37  22.05
> lu.C.x_156_NORMAL_1 Average    20.00  20.58  18.47  18.68  19.47  19.74  19.42  19.63
> lu.C.x_156_NORMAL_2 Average    18.52  18.48  18.83  18.43  20.57  20.48  20.61  20.09
> lu.C.x_156_NORMAL_3 Average    20.27  20.00  20.05  21.18  19.55  19.00  18.59  17.36
> lu.C.x_156_NORMAL_4 Average    19.65  19.60  20.25  20.75  19.35  20.10  19.00  17.30
> lu.C.x_156_NORMAL_5 Average    19.79  19.67  20.62  22.42  18.42  18.00  17.67  19.42
>
>
> I'll try to find pre-patched results for this 8 node system.  Just to keep things
> together for reference here is the 4-node system before this re-work series.
>
> lu.C.x_76_GROUP_1  Average    15.84  24.06  23.37  12.73
> lu.C.x_76_GROUP_2  Average    15.29  22.78  22.49  15.45
> lu.C.x_76_GROUP_3  Average    13.45  23.90  22.97  15.68
> lu.C.x_76_NORMAL_1 Average    18.31  19.54  19.54  18.62
> lu.C.x_76_NORMAL_2 Average    19.73  19.18  19.45  17.64
>
> This produced a 4.5x slowdown for the group runs versus the nicely balanced
> normal runs.
>
>
>
> I can try to get traces but this is not my system so it may take a little
> while. I've found that the existing trace points don't give enough information
> to see what is happening in this problem. But the visualization in kernelshark
> does show the problem pretty well. Do you want just the existing sched tracepoints
> or should I update some of the traceprintks I used in the earlier traces?

The standard tracepoints are a good starting point, but tracing the
statistics for find_busiest_group and find_idlest_group should help a
lot.

Cheers,
Vincent

>
>
>
> Cheers,
> Phil
>
>
> >
> > >
> > > We're re-running the test to get more samples.
> >
> > Thanks
> > Vincent
> >
> > >
> > >
> > > Other tests and systems were still fine.
> > >
> > >
> > > Cheers,
> > > Phil
> > >
> > >
> > > > Numbers for my specific testcase (the cgroup imbalance) are basically
> > > > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > > > series solves that issue.
> > > >
> > > >
> > > > Cheers,
> > > > Phil
> > > >
> > > >
> > > > >
> > > > > >
> > > > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > > > >
> > > > > >   kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > > > >   kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > > > >   kernel/sched/fair.c:     * XXX illustrate
> > > > > >   kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > > > >   kernel/sched/fair.c: * can also include other factors [XXX].
> > > > > >   kernel/sched/fair.c: * [XXX expand on:
> > > > > >   kernel/sched/fair.c: * [XXX more?]
> > > > > >   kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > > > >   kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
> > > > > >   kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */
> > > > > >
> > > > >
> > > > > I will have a look :-)
> > > > >
> > > > > > :-)
> > > > > >
> > > > > > Thanks,
> > > > > >
> > > > > >         Ingo
> > > >
> > > > --
> > > >
> > >
> > > --
> > >
>
> --
>

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-28 13:03             ` Vincent Guittot
@ 2019-10-30 14:39               ` Phil Auld
  2019-10-30 16:24                 ` Dietmar Eggemann
  2019-10-30 17:25                 ` Vincent Guittot
  0 siblings, 2 replies; 89+ messages in thread
From: Phil Auld @ 2019-10-30 14:39 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Hillf Danton,
	Parth Shah, Rik van Riel

Hi Vincent,

On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> Hi Phil,
> 

...

> 
> This input could mean that the system reaches a particular level of
> utilization and load that is close to the threshold between 2
> different behaviors, like the spare-capacity and fully_busy/overloaded
> cases. On the other hand, there are fewer threads than CPUs in your
> UCs, so at least one group at NUMA level should be tagged as
> has_spare_capacity and should pull tasks.

Yes. Maybe we don't hit that and rely on "load" since things look 
busy. There are only 2 spare cpus in the 156 + 2 case. Is it possible
that information is getting lost with the extra NUMA levels? 

> 
> >
> > >
> > > The fix favors the local group so your UC seems to prefer spreading
> > > tasks at wake up
> > > If you have any traces that you can share, this could help to
> > > understand what's going on. I will try to reproduce the problem on my
> > > system
> >
> > I'm not actually sure the fix here is causing this. Looking at the data
> > more closely I see similar imbalances on v4, v4a and v3.
> >
> > When you say slow versus fast wakeup paths what do you mean? I'm still
> > learning my way around all this code.
> 
> When a task wakes up, we can decide to:
> - speed up the wakeup by shortening the list of CPUs and comparing only
> prev_cpu vs this_cpu (in fact the groups of CPUs that share their
> respective LLC). That's the fast wakeup path, which is used most of the
> time during a wakeup.
> - or start to find the idlest CPU of the system and scan all domains.
> That's the slow path, which is used for new tasks or when a task wakes
> up a lot of other tasks at the same time.
> 

Thanks. 

> 
> >
> > This particular test is specifically designed to highlight the imbalance
> > cause by the use of group scheduler defined load and averages. The threads
> > are mostly CPU bound but will join up every time step. So if each thread
> 
> ok, the fact that they join up might be the root cause of your problem.
> They will be woken up at the same time, by the same task and CPU.
> 

If that was the problem I'd expect issues on other high node count systems.

> 
> That fact that the 4 nodes works well but not the 8 nodes is a bit
> surprising except if this means more NUMA level in the sched_domain
> topology
> Could you give us more details about the sched domain topology ?
> 

The 8-node system has 5 sched domain levels.  The 4-node system only 
has 3. 


cpu159 0 0 0 0 0 0 4361694551702 124316659623 94736
domain0 80000000,00000000,00008000,00000000,00000000 0 0 
domain1 ffc00000,00000000,0000ffc0,00000000,00000000 0 0 
domain2 fffff000,00000000,0000ffff,f0000000,00000000 0 0 
domain3 ffffffff,ff000000,0000ffff,ffffff00,00000000 0 0 
domain4 ffffffff,ffffffff,ffffffff,ffffffff,ffffffff 0 0 

numactl --hardware
available: 8 nodes (0-7)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 80 81 82 83 84 85 86 87 88 89
node 0 size: 126928 MB
node 0 free: 126452 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19 90 91 92 93 94 95 96 97 98 99
node 1 size: 129019 MB
node 1 free: 128813 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29 100 101 102 103 104 105 106 107 108 109
node 2 size: 129019 MB
node 2 free: 128875 MB
node 3 cpus: 30 31 32 33 34 35 36 37 38 39 110 111 112 113 114 115 116 117 118 119
node 3 size: 129019 MB
node 3 free: 128850 MB
node 4 cpus: 40 41 42 43 44 45 46 47 48 49 120 121 122 123 124 125 126 127 128 129
node 4 size: 128993 MB
node 4 free: 128862 MB
node 5 cpus: 50 51 52 53 54 55 56 57 58 59 130 131 132 133 134 135 136 137 138 139
node 5 size: 129019 MB
node 5 free: 128872 MB
node 6 cpus: 60 61 62 63 64 65 66 67 68 69 140 141 142 143 144 145 146 147 148 149
node 6 size: 129019 MB
node 6 free: 128852 MB
node 7 cpus: 70 71 72 73 74 75 76 77 78 79 150 151 152 153 154 155 156 157 158 159
node 7 size: 112889 MB
node 7 free: 112720 MB
node distances:
node   0   1   2   3   4   5   6   7 
  0:  10  12  17  17  19  19  19  19 
  1:  12  10  17  17  19  19  19  19 
  2:  17  17  10  12  19  19  19  19 
  3:  17  17  12  10  19  19  19  19 
  4:  19  19  19  19  10  12  17  17 
  5:  19  19  19  19  12  10  17  17 
  6:  19  19  19  19  17  17  10  12 
  7:  19  19  19  19  17  17  12  10 



available: 4 nodes (0-3)
node 0 cpus: 0 1 2 3 4 5 6 7 8 9 40 41 42 43 44 45 46 47 48 49
node 0 size: 257943 MB
node 0 free: 257602 MB
node 1 cpus: 10 11 12 13 14 15 16 17 18 19 50 51 52 53 54 55 56 57 58 59
node 1 size: 258043 MB
node 1 free: 257619 MB
node 2 cpus: 20 21 22 23 24 25 26 27 28 29 60 61 62 63 64 65 66 67 68 69
node 2 size: 258043 MB
node 2 free: 257879 MB
node 3 cpus: 30 31 32 33 34 35 36 37 38 39 70 71 72 73 74 75 76 77 78 79
node 3 size: 258043 MB
node 3 free: 257823 MB
node distances:
node   0   1   2   3 
  0:  10  20  20  20 
  1:  20  10  20  20 
  2:  20  20  10  20 
  3:  20  20  20  10 




An 8-node system (albeit with sub-numa) has node distances 

node distances:
node   0   1   2   3   4   5   6   7 
  0:  10  11  21  21  21  21  21  21 
  1:  11  10  21  21  21  21  21  21 
  2:  21  21  10  11  21  21  21  21 
  3:  21  21  11  10  21  21  21  21 
  4:  21  21  21  21  10  11  21  21 
  5:  21  21  21  21  11  10  21  21 
  6:  21  21  21  21  21  21  10  11 
  7:  21  21  21  21  21  21  11  10 

This one does not exhibit the problem with the latest (v4a). But it also
only has 3 levels.


> >
> > There's still something between v1 and v4 on that 8-node system that is
> > still illustrating the original problem.  On our other test systems this
> > series really works nicely to solve this problem. And even if we can't get
> > to the bottom of this, it's a significant improvement.
> >
> >
> > Here is v3 for the 8-node system
> > lu.C.x_152_GROUP_1  Average    17.52  16.86  17.90  18.52  20.00  19.00  22.00  20.19
> > lu.C.x_152_GROUP_2  Average    15.70  15.04  15.65  15.72  23.30  28.98  20.09  17.52
> > lu.C.x_152_GROUP_3  Average    27.72  32.79  22.89  22.62  11.01  12.90  12.14  9.93
> > lu.C.x_152_GROUP_4  Average    18.13  18.87  18.40  17.87  18.80  19.93  20.40  19.60
> > lu.C.x_152_GROUP_5  Average    24.14  26.46  20.92  21.43  14.70  16.05  15.14  13.16
> > lu.C.x_152_NORMAL_1 Average    21.03  22.43  20.27  19.97  18.37  18.80  16.27  14.87
> > lu.C.x_152_NORMAL_2 Average    19.24  18.29  18.41  17.41  19.71  19.00  20.29  19.65
> > lu.C.x_152_NORMAL_3 Average    19.43  20.00  19.05  20.24  18.76  17.38  18.52  18.62
> > lu.C.x_152_NORMAL_4 Average    17.19  18.25  17.81  18.69  20.44  19.75  20.12  19.75
> > lu.C.x_152_NORMAL_5 Average    19.25  19.56  19.12  19.56  19.38  19.38  18.12  17.62
> >
> > lu.C.x_156_GROUP_1  Average    18.62  19.31  18.38  18.77  19.88  21.35  19.35  20.35
> > lu.C.x_156_GROUP_2  Average    15.58  12.72  14.96  14.83  20.59  19.35  29.75  28.22
> > lu.C.x_156_GROUP_3  Average    20.05  18.74  19.63  18.32  20.26  20.89  19.53  18.58
> > lu.C.x_156_GROUP_4  Average    14.77  11.42  13.01  10.09  27.05  33.52  23.16  22.98
> > lu.C.x_156_GROUP_5  Average    14.94  11.45  12.77  10.52  28.01  33.88  22.37  22.05
> > lu.C.x_156_NORMAL_1 Average    20.00  20.58  18.47  18.68  19.47  19.74  19.42  19.63
> > lu.C.x_156_NORMAL_2 Average    18.52  18.48  18.83  18.43  20.57  20.48  20.61  20.09
> > lu.C.x_156_NORMAL_3 Average    20.27  20.00  20.05  21.18  19.55  19.00  18.59  17.36
> > lu.C.x_156_NORMAL_4 Average    19.65  19.60  20.25  20.75  19.35  20.10  19.00  17.30
> > lu.C.x_156_NORMAL_5 Average    19.79  19.67  20.62  22.42  18.42  18.00  17.67  19.42
> >
> >
> > I'll try to find pre-patched results for this 8 node system.  Just to keep things
> > together for reference here is the 4-node system before this re-work series.
> >
> > lu.C.x_76_GROUP_1  Average    15.84  24.06  23.37  12.73
> > lu.C.x_76_GROUP_2  Average    15.29  22.78  22.49  15.45
> > lu.C.x_76_GROUP_3  Average    13.45  23.90  22.97  15.68
> > lu.C.x_76_NORMAL_1 Average    18.31  19.54  19.54  18.62
> > lu.C.x_76_NORMAL_2 Average    19.73  19.18  19.45  17.64
> >
> > This produced a 4.5x slowdown for the group runs versus the nicely balance
> > normal runs.
> >

Here is the base 5.4.0-rc3+ kernel on the 8-node system:

lu.C.x_156_GROUP_1  Average    10.87  0.00   0.00   11.49  36.69  34.26  30.59  32.10
lu.C.x_156_GROUP_2  Average    20.15  16.32  9.49   24.91  21.07  20.93  21.63  21.50
lu.C.x_156_GROUP_3  Average    21.27  17.23  11.84  21.80  20.91  20.68  21.11  21.16
lu.C.x_156_GROUP_4  Average    19.44  6.53   8.71   19.72  22.95  23.16  28.85  26.64
lu.C.x_156_GROUP_5  Average    20.59  6.20   11.32  14.63  28.73  30.36  22.20  21.98
lu.C.x_156_NORMAL_1 Average    20.50  19.95  20.40  20.45  18.75  19.35  18.25  18.35
lu.C.x_156_NORMAL_2 Average    17.15  19.04  18.42  18.69  21.35  21.42  20.00  19.92
lu.C.x_156_NORMAL_3 Average    18.00  18.15  17.55  17.60  18.90  18.40  19.90  19.75
lu.C.x_156_NORMAL_4 Average    20.53  20.05  20.21  19.11  19.00  19.47  19.37  18.26
lu.C.x_156_NORMAL_5 Average    18.72  18.78  19.72  18.50  19.67  19.72  21.11  19.78

Including the actual benchmark results. 
============156_GROUP========Mop/s===================================
min	q1	median	q3	max
1564.63	3003.87	3928.23	5411.13	8386.66
============156_GROUP========time====================================
min	q1	median	q3	max
243.12	376.82	519.06	678.79	1303.18
============156_NORMAL========Mop/s===================================
min	q1	median	q3	max
13845.6	18013.8	18545.5	19359.9	19647.4
============156_NORMAL========time====================================
min	q1	median	q3	max
103.78	105.32	109.95	113.19	147.27

You can see the ~5x slowdown of the pre-rework issue. v4a is much improved over
mainline.

I'll try to find some other machines as well. 


> >
> >
> > I can try to get traces but this is not my system so it may take a little
> > while. I've found that the existing trace points don't give enough information
> > to see what is happening in this problem. But the visualization in kernelshark
> > does show the problem pretty well. Do you want just the existing sched tracepoints
> > or should I update some of the traceprintks I used in the earlier traces?
> 
> The standard tracepoint is a good starting point but tracing the
> statistings for find_busiest_group and find_idlest_group should help a
> lot.
> 

I have some traces which I'll send you directly since they're large.


Cheers,
Phil



> Cheers,
> Vincent
> 
> >
> >
> >
> > Cheers,
> > Phil
> >
> >
> > >
> > > >
> > > > We're re-running the test to get more samples.
> > >
> > > Thanks
> > > Vincent
> > >
> > > >
> > > >
> > > > Other tests and systems were still fine.
> > > >
> > > >
> > > > Cheers,
> > > > Phil
> > > >
> > > >
> > > > > Numbers for my specific testcase (the cgroup imbalance) are basically
> > > > > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > > > > series solves that issue.
> > > > >
> > > > >
> > > > > Cheers,
> > > > > Phil
> > > > >
> > > > >
> > > > > >
> > > > > > >
> > > > > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > > > > >
> > > > > > >   kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > > > > >   kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > > > > >   kernel/sched/fair.c:     * XXX illustrate
> > > > > > >   kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > > > > >   kernel/sched/fair.c: * can also include other factors [XXX].
> > > > > > >   kernel/sched/fair.c: * [XXX expand on:
> > > > > > >   kernel/sched/fair.c: * [XXX more?]
> > > > > > >   kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > > > > >   kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
> > > > > > >   kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */
> > > > > > >
> > > > > >
> > > > > > I will have a look :-)
> > > > > >
> > > > > > > :-)
> > > > > > >
> > > > > > > Thanks,
> > > > > > >
> > > > > > >         Ingo
> > > > >
> > > > > --
> > > > >
> > > >
> > > > --
> > > >
> >
> > --
> >

-- 


^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 01/11] sched/fair: clean up asym packing
  2019-10-18 13:26 ` [PATCH v4 01/11] sched/fair: clean up asym packing Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Clean " tip-bot2 for Vincent Guittot
@ 2019-10-30 14:51   ` Mel Gorman
  2019-10-30 16:03     ` Vincent Guittot
  1 sibling, 1 reply; 89+ messages in thread
From: Mel Gorman @ 2019-10-30 14:51 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, mingo, peterz, pauld, valentin.schneider, srikar,
	quentin.perret, dietmar.eggemann, Morten.Rasmussen, hdanton,
	parth, riel

On Fri, Oct 18, 2019 at 03:26:28PM +0200, Vincent Guittot wrote:
> Clean up asym packing to follow the default load balance behavior:
> - classify the group by creating a group_asym_packing field.
> - calculate the imbalance in calculate_imbalance() instead of bypassing it.
> 
> We don't need to test twice same conditions anymore to detect asym packing
> and we consolidate the calculation of imbalance in calculate_imbalance().
> 
> There is no functional changes.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> Acked-by: Rik van Riel <riel@surriel.com>
> ---
>  kernel/sched/fair.c | 63 ++++++++++++++---------------------------------------
>  1 file changed, 16 insertions(+), 47 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 1f0a5e1..617145c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7675,6 +7675,7 @@ struct sg_lb_stats {
>  	unsigned int group_weight;
>  	enum group_type group_type;
>  	int group_no_capacity;
> +	unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
>  	unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
>  #ifdef CONFIG_NUMA_BALANCING
>  	unsigned int nr_numa_running;
> @@ -8129,9 +8130,17 @@ static bool update_sd_pick_busiest(struct lb_env *env,
>  	 * ASYM_PACKING needs to move all the work to the highest
>  	 * prority CPUs in the group, therefore mark all groups
>  	 * of lower priority than ourself as busy.
> +	 *
> +	 * This is primarily intended to used at the sibling level.  Some
> +	 * cores like POWER7 prefer to use lower numbered SMT threads.  In the
> +	 * case of POWER7, it can move to lower SMT modes only when higher
> +	 * threads are idle.  When in lower SMT modes, the threads will
> +	 * perform better since they share less core resources.  Hence when we
> +	 * have idle threads, we want them to be the higher ones.
>  	 */
>  	if (sgs->sum_nr_running &&
>  	    sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
> +		sgs->group_asym_packing = 1;
>  		if (!sds->busiest)
>  			return true;
>  

(I did not read any of the earlier implementations of this series, maybe
this was discussed already in which case, sorry for the noise)

Are you *sure* this is not a functional change?

Asym packing is a twisty maze of headaches and I'm not familiar enough
with it to be 100% certain without spending a lot of time on this. Even
spotting how Power7 ends up using asym packing with lower-numbered SMT
threads is a bit of a challenge.  Specifically, it relies on the scheduler
domain SD_ASYM_PACKING flag for SMT domains to use the weak implementation
of arch_asym_cpu_priority which by defaults favours the lower-numbered CPU.

The check_asym_packing implementation checks that flag but I can't see
the equivalent type of check here. update_sd_pick_busiest could be called
for domains that span NUMA or basically any domain that does not specify
SD_ASYM_PACKING and end up favouring a lower-numbered CPU (or whatever
arch_asym_cpu_priority returns in the case of x86 which has a different
idea for favoured CPUs).

sched_asym_prefer appears to be a function that is very easy to use
incorrectly. Should it take env and check the SD flags first?

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 02/11] sched/fair: rename sum_nr_running to sum_h_nr_running
  2019-10-18 13:26 ` [PATCH v4 02/11] sched/fair: rename sum_nr_running to sum_h_nr_running Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Rename sg_lb_stats::sum_nr_running " tip-bot2 for Vincent Guittot
@ 2019-10-30 14:53   ` Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: Mel Gorman @ 2019-10-30 14:53 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, mingo, peterz, pauld, valentin.schneider, srikar,
	quentin.perret, dietmar.eggemann, Morten.Rasmussen, hdanton,
	parth, riel

On Fri, Oct 18, 2019 at 03:26:29PM +0200, Vincent Guittot wrote:
> Rename sum_nr_running to sum_h_nr_running because it effectively tracks
> cfs->h_nr_running so we can use sum_nr_running to track rq->nr_running
> when needed.
> 
> There is no functional changes.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> Acked-by: Rik van Riel <riel@surriel.com>
> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

Acked-by: Mel Gorman <mgorman@techsingularity.net>

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-10-18 13:26 ` [PATCH v4 04/11] sched/fair: rework load_balance Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Rework load_balance() tip-bot2 for Vincent Guittot
@ 2019-10-30 15:45   ` Mel Gorman
  2019-10-30 16:16     ` Valentin Schneider
                       ` (2 more replies)
  1 sibling, 3 replies; 89+ messages in thread
From: Mel Gorman @ 2019-10-30 15:45 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, mingo, peterz, pauld, valentin.schneider, srikar,
	quentin.perret, dietmar.eggemann, Morten.Rasmussen, hdanton,
	parth, riel

On Fri, Oct 18, 2019 at 03:26:31PM +0200, Vincent Guittot wrote:
> The load_balance algorithm contains some heuristics which have become
> meaningless since the rework of the scheduler's metrics like the
> introduction of PELT.
> 
> Furthermore, load is an ill-suited metric for solving certain task
> placement imbalance scenarios. For instance, in the presence of idle CPUs,
> we should simply try to get at least one task per CPU, whereas the current
> load-based algorithm can actually leave idle CPUs alone simply because the
> load is somewhat balanced. The current algorithm ends up creating virtual
> and meaningless value like the avg_load_per_task or tweaks the state of a
> group to make it overloaded whereas it's not, in order to try to migrate
> tasks.
> 

I do not think this is necessarily 100% true. With both the previous
load-balancing behaviour and the apparent behaviour of this patch, it's
still possible to pull two communicating tasks apart and across NUMA
domains when utilisation is low. Specifically, a load difference of less
than SCHED_CAPACITY_SCALE between NUMA nodes can be enough to migrate a
task to level out load.

So, load might be ill-suited for some cases but that does not make it
completely useless either.

The type of behaviour can be seen by running netperf via mmtests
(configuration file configs/config-network-netperf-unbound) on a NUMA
machine and noting that the local vs remote NUMA hinting faults are roughly
50%. I had prototyped some fixes around this that took imbalance_pct into
account but it was too special-cased and was not a universal win. If
I was reviewing my own patch I would have NAKed it on the grounds of "you
added a special-case hack into the load balancer for one workload". I didn't get back
to it before getting cc'd on this series.

> load_balance should better qualify the imbalance of the group and clearly
> define what has to be moved to fix this imbalance.
> 
> The type of sched_group has been extended to better reflect the type of
> imbalance. We now have :
> 	group_has_spare
> 	group_fully_busy
> 	group_misfit_task
> 	group_asym_packing
> 	group_imbalanced
> 	group_overloaded
> 
> Based on the type of sched_group, load_balance now sets what it wants to
> move in order to fix the imbalance. It can be some load as before but also
> some utilization, a number of task or a type of task:
> 	migrate_task
> 	migrate_util
> 	migrate_load
> 	migrate_misfit
> 
> This new load_balance algorithm fixes several pending wrong tasks
> placement:
> - the 1 task per CPU case with asymmetric system
> - the case of cfs task preempted by other class
> - the case of tasks not evenly spread on groups with spare capacity
> 

On the last one, spreading tasks evenly across NUMA domains is not
necessarily a good idea. If I have 2 tasks running on a 2-socket machine
with 24 logical CPUs per socket, it should not automatically mean that
one task should move cross-node and I have definitely observed this
happening. It's probably bad in terms of locality no matter what but it's
especially bad if the 2 tasks happened to be communicating because then
load balancing will pull apart the tasks while wake_affine will push
them together (and potentially NUMA balancing as well). Note that this
also applies for some IO workloads because, depending on the filesystem,
the task may be communicating with workqueues (XFS) or a kernel thread
(ext4 with jbd2).

> Also the load balance decisions have been consolidated in the 3 functions
> below after removing the few bypasses and hacks of the current code:
> - update_sd_pick_busiest() select the busiest sched_group.
> - find_busiest_group() checks if there is an imbalance between local and
>   busiest group.
> - calculate_imbalance() decides what have to be moved.
> 
> Finally, the now unused field total_running of struct sd_lb_stats has been
> removed.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 611 ++++++++++++++++++++++++++++++++++------------------
>  1 file changed, 402 insertions(+), 209 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e004841..5ae5281 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -7068,11 +7068,26 @@ static unsigned long __read_mostly max_load_balance_interval = HZ/10;
>  
>  enum fbq_type { regular, remote, all };
>  
> +/*
> + * group_type describes the group of CPUs at the moment of the load balance.
> + * The enum is ordered by pulling priority, with the group with lowest priority
> + * first so the groupe_type can be simply compared when selecting the busiest
> + * group. see update_sd_pick_busiest().
> + */

s/groupe_type/group_type/

>  enum group_type {
> -	group_other = 0,
> +	group_has_spare = 0,
> +	group_fully_busy,
>  	group_misfit_task,
> +	group_asym_packing,
>  	group_imbalanced,
> -	group_overloaded,
> +	group_overloaded
> +};
> +

While not your fault, it would be nice to comment on the meaning of each
group type. At a glance, it's not obvious to me why a misfit task should
be a higher priority to move than a fully_busy (but not overloaded)
group, given that moving the misfit task might make a group overloaded.

> +enum migration_type {
> +	migrate_load = 0,
> +	migrate_util,
> +	migrate_task,
> +	migrate_misfit
>  };
>  

Could do with a comment explaining what migration_type is for because
the name is unhelpful. I *think* at a glance it's related to what sort
of imbalance is being addressed, which is partially captured by the
group_type. That understanding may change as I continue reading the series,
but for now I have to figure it out, which means it'll be forgotten again in
6 months.

>  #define LBF_ALL_PINNED	0x01
> @@ -7105,7 +7120,7 @@ struct lb_env {
>  	unsigned int		loop_max;
>  
>  	enum fbq_type		fbq_type;
> -	enum group_type		src_grp_type;
> +	enum migration_type	migration_type;
>  	struct list_head	tasks;
>  };
>  
> @@ -7328,7 +7343,7 @@ static struct task_struct *detach_one_task(struct lb_env *env)
>  static const unsigned int sched_nr_migrate_break = 32;
>  
>  /*
> - * detach_tasks() -- tries to detach up to imbalance runnable load from
> + * detach_tasks() -- tries to detach up to imbalance load/util/tasks from
>   * busiest_rq, as part of a balancing operation within domain "sd".
>   *
>   * Returns number of detached tasks if successful and 0 otherwise.
> @@ -7336,8 +7351,8 @@ static const unsigned int sched_nr_migrate_break = 32;
>  static int detach_tasks(struct lb_env *env)
>  {
>  	struct list_head *tasks = &env->src_rq->cfs_tasks;
> +	unsigned long util, load;
>  	struct task_struct *p;
> -	unsigned long load;
>  	int detached = 0;
>  
>  	lockdep_assert_held(&env->src_rq->lock);
> @@ -7370,19 +7385,51 @@ static int detach_tasks(struct lb_env *env)
>  		if (!can_migrate_task(p, env))
>  			goto next;
>  
> -		load = task_h_load(p);
> +		switch (env->migration_type) {
> +		case migrate_load:
> +			load = task_h_load(p);
>  
> -		if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
> -			goto next;
> +			if (sched_feat(LB_MIN) &&
> +			    load < 16 && !env->sd->nr_balance_failed)
> +				goto next;
>  
> -		if ((load / 2) > env->imbalance)
> -			goto next;
> +			if ((load / 2) > env->imbalance)
> +				goto next;
> +
> +			env->imbalance -= load;
> +			break;
> +
> +		case migrate_util:
> +			util = task_util_est(p);
> +
> +			if (util > env->imbalance)
> +				goto next;
> +
> +			env->imbalance -= util;
> +			break;
> +
> +		case migrate_task:
> +			env->imbalance--;
> +			break;
> +
> +		case migrate_misfit:
> +			load = task_h_load(p);
> +
> +			/*
> +			 * load of misfit task might decrease a bit since it has
> +			 * been recorded. Be conservative in the condition.
> +			 */
> +			if (load / 2 < env->imbalance)
> +				goto next;
> +
> +			env->imbalance = 0;
> +			break;
> +		}
>  

So, no problem with this but it brings up another point. migration_type
also determines what env->imbalance means (e.g. load, utilisation,
nr_running etc). That was true before your patch too but now it's much
more explicit, which is nice, but could do with a comment.

>  		detach_task(p, env);
>  		list_add(&p->se.group_node, &env->tasks);
>  
>  		detached++;
> -		env->imbalance -= load;
>  
>  #ifdef CONFIG_PREEMPTION
>  		/*
> @@ -7396,7 +7443,7 @@ static int detach_tasks(struct lb_env *env)
>  
>  		/*
>  		 * We only want to steal up to the prescribed amount of
> -		 * runnable load.
> +		 * load/util/tasks.
>  		 */
>  		if (env->imbalance <= 0)
>  			break;
> @@ -7661,7 +7708,6 @@ struct sg_lb_stats {
>  	unsigned int idle_cpus;
>  	unsigned int group_weight;
>  	enum group_type group_type;
> -	int group_no_capacity;
>  	unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
>  	unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
>  #ifdef CONFIG_NUMA_BALANCING

Glad to see group_no_capacity go away, that had some "interesting"
treatment in update_sd_lb_stats.

> @@ -7677,10 +7723,10 @@ struct sg_lb_stats {
>  struct sd_lb_stats {
>  	struct sched_group *busiest;	/* Busiest group in this sd */
>  	struct sched_group *local;	/* Local group in this sd */
> -	unsigned long total_running;
>  	unsigned long total_load;	/* Total load of all groups in sd */
>  	unsigned long total_capacity;	/* Total capacity of all groups in sd */
>  	unsigned long avg_load;	/* Average load across all groups in sd */
> +	unsigned int prefer_sibling; /* tasks should go to sibling first */
>  
>  	struct sg_lb_stats busiest_stat;/* Statistics of the busiest group */
>  	struct sg_lb_stats local_stat;	/* Statistics of the local group */
> @@ -7691,19 +7737,18 @@ static inline void init_sd_lb_stats(struct sd_lb_stats *sds)
>  	/*
>  	 * Skimp on the clearing to avoid duplicate work. We can avoid clearing
>  	 * local_stat because update_sg_lb_stats() does a full clear/assignment.
> -	 * We must however clear busiest_stat::avg_load because
> -	 * update_sd_pick_busiest() reads this before assignment.
> +	 * We must however set busiest_stat::group_type and
> +	 * busiest_stat::idle_cpus to the worst busiest group because
> +	 * update_sd_pick_busiest() reads these before assignment.
>  	 */
>  	*sds = (struct sd_lb_stats){
>  		.busiest = NULL,
>  		.local = NULL,
> -		.total_running = 0UL,
>  		.total_load = 0UL,
>  		.total_capacity = 0UL,
>  		.busiest_stat = {
> -			.avg_load = 0UL,
> -			.sum_h_nr_running = 0,
> -			.group_type = group_other,
> +			.idle_cpus = UINT_MAX,
> +			.group_type = group_has_spare,
>  		},
>  	};
>  }
> @@ -7945,19 +7990,26 @@ group_smaller_max_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
>  }
>  
>  static inline enum
> -group_type group_classify(struct sched_group *group,
> +group_type group_classify(struct lb_env *env,
> +			  struct sched_group *group,
>  			  struct sg_lb_stats *sgs)
>  {
> -	if (sgs->group_no_capacity)
> +	if (group_is_overloaded(env, sgs))
>  		return group_overloaded;
>  
>  	if (sg_imbalanced(group))
>  		return group_imbalanced;
>  
> +	if (sgs->group_asym_packing)
> +		return group_asym_packing;
> +
>  	if (sgs->group_misfit_task_load)
>  		return group_misfit_task;
>  
> -	return group_other;
> +	if (!group_has_capacity(env, sgs))
> +		return group_fully_busy;
> +
> +	return group_has_spare;
>  }
>  
>  static bool update_nohz_stats(struct rq *rq, bool force)
> @@ -7994,10 +8046,12 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>  				      struct sg_lb_stats *sgs,
>  				      int *sg_status)
>  {
> -	int i, nr_running;
> +	int i, nr_running, local_group;
>  
>  	memset(sgs, 0, sizeof(*sgs));
>  
> +	local_group = cpumask_test_cpu(env->dst_cpu, sched_group_span(group));
> +
>  	for_each_cpu_and(i, sched_group_span(group), env->cpus) {
>  		struct rq *rq = cpu_rq(i);
>  
> @@ -8022,9 +8076,16 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>  		/*
>  		 * No need to call idle_cpu() if nr_running is not 0
>  		 */
> -		if (!nr_running && idle_cpu(i))
> +		if (!nr_running && idle_cpu(i)) {
>  			sgs->idle_cpus++;
> +			/* Idle cpu can't have misfit task */
> +			continue;
> +		}
> +
> +		if (local_group)
> +			continue;
>  
> +		/* Check for a misfit task on the cpu */
>  		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
>  		    sgs->group_misfit_task_load < rq->misfit_task_load) {
>  			sgs->group_misfit_task_load = rq->misfit_task_load;

So.... why exactly do we not care about misfit tasks on CPUs in the
local group? I'm not saying you're wrong because you have a clear idea
on how misfit tasks should be treated but it's very non-obvious just
from the code.

> <SNIP>
>
> @@ -8079,62 +8154,80 @@ static bool update_sd_pick_busiest(struct lb_env *env,
>  	if (sgs->group_type < busiest->group_type)
>  		return false;
>  
> -	if (sgs->avg_load <= busiest->avg_load)
> -		return false;
> -
> -	if (!(env->sd->flags & SD_ASYM_CPUCAPACITY))
> -		goto asym_packing;
> -
>  	/*
> -	 * Candidate sg has no more than one task per CPU and
> -	 * has higher per-CPU capacity. Migrating tasks to less
> -	 * capable CPUs may harm throughput. Maximize throughput,
> -	 * power/energy consequences are not considered.
> +	 * The candidate and the current busiest group are the same type of
> +	 * group. Let check which one is the busiest according to the type.
>  	 */
> -	if (sgs->sum_h_nr_running <= sgs->group_weight &&
> -	    group_smaller_min_cpu_capacity(sds->local, sg))
> -		return false;
>  
> -	/*
> -	 * If we have more than one misfit sg go with the biggest misfit.
> -	 */
> -	if (sgs->group_type == group_misfit_task &&
> -	    sgs->group_misfit_task_load < busiest->group_misfit_task_load)
> +	switch (sgs->group_type) {
> +	case group_overloaded:
> +		/* Select the overloaded group with highest avg_load. */
> +		if (sgs->avg_load <= busiest->avg_load)
> +			return false;
> +		break;
> +
> +	case group_imbalanced:
> +		/*
> +		 * Select the 1st imbalanced group as we don't have any way to
> +		 * choose one more than another.
> +		 */
>  		return false;
>  
> -asym_packing:
> -	/* This is the busiest node in its class. */
> -	if (!(env->sd->flags & SD_ASYM_PACKING))
> -		return true;
> +	case group_asym_packing:
> +		/* Prefer to move from lowest priority CPU's work */
> +		if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
> +			return false;
> +		break;
>  

Again, I'm not seeing what prevents a !SD_ASYM_PACKING domain checking
sched_asym_prefer.

> <SNIP>
> +	case group_fully_busy:
> +		/*
> +		 * Select the fully busy group with highest avg_load. In
> +		 * theory, there is no need to pull task from such kind of
> +		 * group because tasks have all compute capacity that they need
> +		 * but we can still improve the overall throughput by reducing
> +		 * contention when accessing shared HW resources.
> +		 *
> +		 * XXX for now avg_load is not computed and always 0 so we
> +		 * select the 1st one.
> +		 */
> +		if (sgs->avg_load <= busiest->avg_load)
> +			return false;
> +		break;
> +

With the exception that if we are balancing between NUMA domains and they
were communicating tasks that we've now pulled them apart. That might
increase the CPU resources available at the cost of increased remote
memory access cost.

> +	case group_has_spare:
> +		/*
> +		 * Select not overloaded group with lowest number of
> +		 * idle cpus. We could also compare the spare capacity
> +		 * which is more stable but it can end up that the
> +		 * group has less spare capacity but finally more idle
> +		 * cpus which means less opportunity to pull tasks.
> +		 */
> +		if (sgs->idle_cpus >= busiest->idle_cpus)
> +			return false;
> +		break;
>  	}
>  
> -	return false;
> +	/*
> +	 * Candidate sg has no more than one task per CPU and has higher
> +	 * per-CPU capacity. Migrating tasks to less capable CPUs may harm
> +	 * throughput. Maximize throughput, power/energy consequences are not
> +	 * considered.
> +	 */
> +	if ((env->sd->flags & SD_ASYM_CPUCAPACITY) &&
> +	    (sgs->group_type <= group_fully_busy) &&
> +	    (group_smaller_min_cpu_capacity(sds->local, sg)))
> +		return false;
> +
> +	return true;
>  }
>  
>  #ifdef CONFIG_NUMA_BALANCING
> @@ -8172,13 +8265,13 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
>   * @env: The load balancing environment.
>   * @sds: variable to hold the statistics for this sched_domain.
>   */
> +

Spurious whitespace change.

>  static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
>  {
>  	struct sched_domain *child = env->sd->child;
>  	struct sched_group *sg = env->sd->groups;
>  	struct sg_lb_stats *local = &sds->local_stat;
>  	struct sg_lb_stats tmp_sgs;
> -	bool prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
>  	int sg_status = 0;
>  
>  #ifdef CONFIG_NO_HZ_COMMON
> @@ -8205,22 +8298,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>  		if (local_group)
>  			goto next_group;
>  
> -		/*
> -		 * In case the child domain prefers tasks go to siblings
> -		 * first, lower the sg capacity so that we'll try
> -		 * and move all the excess tasks away. We lower the capacity
> -		 * of a group only if the local group has the capacity to fit
> -		 * these excess tasks. The extra check prevents the case where
> -		 * you always pull from the heaviest group when it is already
> -		 * under-utilized (possible with a large weight task outweighs
> -		 * the tasks on the system).
> -		 */
> -		if (prefer_sibling && sds->local &&
> -		    group_has_capacity(env, local) &&
> -		    (sgs->sum_h_nr_running > local->sum_h_nr_running + 1)) {
> -			sgs->group_no_capacity = 1;
> -			sgs->group_type = group_classify(sg, sgs);
> -		}
>  
>  		if (update_sd_pick_busiest(env, sds, sg, sgs)) {
>  			sds->busiest = sg;
> @@ -8229,13 +8306,15 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>  
>  next_group:
>  		/* Now, start updating sd_lb_stats */
> -		sds->total_running += sgs->sum_h_nr_running;
>  		sds->total_load += sgs->group_load;
>  		sds->total_capacity += sgs->group_capacity;
>  
>  		sg = sg->next;
>  	} while (sg != env->sd->groups);
>  
> +	/* Tag domain that child domain prefers tasks go to siblings first */
> +	sds->prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
> +
>  #ifdef CONFIG_NO_HZ_COMMON
>  	if ((env->flags & LBF_NOHZ_AGAIN) &&
>  	    cpumask_subset(nohz.idle_cpus_mask, sched_domain_span(env->sd))) {
> @@ -8273,69 +8352,149 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
>   */
>  static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
>  {
> -	unsigned long max_pull, load_above_capacity = ~0UL;
>  	struct sg_lb_stats *local, *busiest;
>  
>  	local = &sds->local_stat;
>  	busiest = &sds->busiest_stat;
>  
> -	if (busiest->group_asym_packing) {
> -		env->imbalance = busiest->group_load;
> +	if (busiest->group_type == group_misfit_task) {
> +		/* Set imbalance to allow misfit task to be balanced. */
> +		env->migration_type = migrate_misfit;
> +		env->imbalance = busiest->group_misfit_task_load;
> +		return;
> +	}
> +
> +	if (busiest->group_type == group_asym_packing) {
> +		/*
> +		 * In case of asym capacity, we will try to migrate all load to
> +		 * the preferred CPU.
> +		 */
> +		env->migration_type = migrate_task;
> +		env->imbalance = busiest->sum_h_nr_running;
> +		return;
> +	}
> +
> +	if (busiest->group_type == group_imbalanced) {
> +		/*
> +		 * In the group_imb case we cannot rely on group-wide averages
> +		 * to ensure CPU-load equilibrium, try to move any task to fix
> +		 * the imbalance. The next load balance will take care of
> +		 * balancing back the system.
> +		 */
> +		env->migration_type = migrate_task;
> +		env->imbalance = 1;
>  		return;
>  	}
>  
>  	/*
> -	 * Avg load of busiest sg can be less and avg load of local sg can
> -	 * be greater than avg load across all sgs of sd because avg load
> -	 * factors in sg capacity and sgs with smaller group_type are
> -	 * skipped when updating the busiest sg:
> +	 * Try to use spare capacity of local group without overloading it or
> +	 * emptying busiest
>  	 */
> -	if (busiest->group_type != group_misfit_task &&
> -	    (busiest->avg_load <= sds->avg_load ||
> -	     local->avg_load >= sds->avg_load)) {
> -		env->imbalance = 0;
> +	if (local->group_type == group_has_spare) {
> +		if (busiest->group_type > group_fully_busy) {
> +			/*
> +			 * If busiest is overloaded, try to fill spare
> +			 * capacity. This might end up creating spare capacity
> +			 * in busiest or busiest still being overloaded but
> +			 * there is no simple way to directly compute the
> +			 * amount of load to migrate in order to balance the
> +			 * system.
> +			 */

busiest may not be overloaded; it may be imbalanced. Maybe the
distinction is irrelevant though.

> +			env->migration_type = migrate_util;
> +			env->imbalance = max(local->group_capacity, local->group_util) -
> +					 local->group_util;
> +
> +			/*
> +			 * In some case, the group's utilization is max or even
> +			 * higher than capacity because of migrations but the
> +			 * local CPU is (newly) idle. There is at least one
> +			 * waiting task in this overloaded busiest group. Let
> +			 * try to pull it.
> +			 */
> +			if (env->idle != CPU_NOT_IDLE && env->imbalance == 0) {
> +				env->migration_type = migrate_task;
> +				env->imbalance = 1;
> +			}
> +

Not convinced this is a good thing to do across NUMA domains. If it was
tied to the group being definitely overloaded then I could see the logic.

> +			return;
> +		}
> +
> +		if (busiest->group_weight == 1 || sds->prefer_sibling) {
> +			unsigned int nr_diff = busiest->sum_h_nr_running;
> +			/*
> +			 * When prefer sibling, evenly spread running tasks on
> +			 * groups.
> +			 */
> +			env->migration_type = migrate_task;
> +			lsub_positive(&nr_diff, local->sum_h_nr_running);
> +			env->imbalance = nr_diff >> 1;
> +			return;
> +		}

The comment is slightly misleading, given that it's not just about preferring
siblings but also applies when balancing across single-CPU domains.

> +
> +		/*
> +		 * If there is no overload, we just want to even the number of
> +		 * idle cpus.
> +		 */
> +		env->migration_type = migrate_task;
> +		env->imbalance = max_t(long, 0, (local->idle_cpus -
> +						 busiest->idle_cpus) >> 1);
>  		return;
>  	}

Why do we want an even number of idle CPUs unconditionally? This goes back
to the NUMA domain case. 2 communicating tasks running on a 2-socket system
should not be pulled apart just to have 1 task running on each socket.

I didn't see anything obvious after this point but I also am getting a
bit on the fried side trying to hold this entire patch in my head and
got hung up on the NUMA domain balancing in particular.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 05/11] sched/fair: use rq->nr_running when balancing load
  2019-10-18 13:26 ` [PATCH v4 05/11] sched/fair: use rq->nr_running when balancing load Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Use " tip-bot2 for Vincent Guittot
@ 2019-10-30 15:54   ` Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: Mel Gorman @ 2019-10-30 15:54 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, mingo, peterz, pauld, valentin.schneider, srikar,
	quentin.perret, dietmar.eggemann, Morten.Rasmussen, hdanton,
	parth, riel

On Fri, Oct 18, 2019 at 03:26:32PM +0200, Vincent Guittot wrote:
> cfs load_balance only takes care of CFS tasks whereas CPUs can be used by
> other scheduling class. Typically, a CFS task preempted by a RT or deadline
> task will not get a chance to be pulled on another CPU because the
> load_balance doesn't take into account tasks from other classes.
> Add sum of nr_running in the statistics and use it to detect such
> situation.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

Patch is ok, but it'll be easy to mix up sum_nr_running and
sum_h_nr_running in the future. Might be best to rename sum_nr_running to
sum_any_running and the hierarchy one to sum_cfs_running. I don't feel
strongly either way, because it's almost certainly due to the fact I
almost never care about non-cfs tasks when thinking about the scheduler.

-- 
Mel Gorman
SUSE Labs

* Re: [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance
  2019-10-18 13:26 ` [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Use load instead of runnable load in load_balance() tip-bot2 for Vincent Guittot
@ 2019-10-30 15:58   ` Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: Mel Gorman @ 2019-10-30 15:58 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, mingo, peterz, pauld, valentin.schneider, srikar,
	quentin.perret, dietmar.eggemann, Morten.Rasmussen, hdanton,
	parth, riel

On Fri, Oct 18, 2019 at 03:26:33PM +0200, Vincent Guittot wrote:
> runnable load has been introduced to take into account the case
> where blocked load biases the load balance decision which was selecting
> underutilized group with huge blocked load whereas other groups were
> overloaded.
> 
> The load is now only used when groups are overloaded. In this case,
> it's worth being conservative and taking into account the sleeping
> tasks that might wakeup on the cpu.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

Hmm.... ok. Superficially I get what you're doing but worry slightly
about groups that have lots of tasks that are frequently idling on short
periods of IO.

Unfortunately, when I queued this series for testing I did not include a
load that idles rapidly for short durations, which would highlight
problems in that area.

I cannot convince myself it's ok enough for an ack but I have no reason
to complain either.

-- 
Mel Gorman
SUSE Labs

* Re: [PATCH v4 01/11] sched/fair: clean up asym packing
  2019-10-30 14:51   ` [PATCH v4 01/11] sched/fair: clean " Mel Gorman
@ 2019-10-30 16:03     ` Vincent Guittot
  0 siblings, 0 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-30 16:03 UTC (permalink / raw)
  To: Mel Gorman
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Wed, 30 Oct 2019 at 15:51, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Fri, Oct 18, 2019 at 03:26:28PM +0200, Vincent Guittot wrote:
> > Clean up asym packing to follow the default load balance behavior:
> > - classify the group by creating a group_asym_packing field.
> > - calculate the imbalance in calculate_imbalance() instead of bypassing it.
> >
> > We don't need to test twice same conditions anymore to detect asym packing
> > and we consolidate the calculation of imbalance in calculate_imbalance().
> >
> > There is no functional changes.
> >
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > Acked-by: Rik van Riel <riel@surriel.com>
> > ---
> >  kernel/sched/fair.c | 63 ++++++++++++++---------------------------------------
> >  1 file changed, 16 insertions(+), 47 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 1f0a5e1..617145c 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7675,6 +7675,7 @@ struct sg_lb_stats {
> >       unsigned int group_weight;
> >       enum group_type group_type;
> >       int group_no_capacity;
> > +     unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
> >       unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
> >  #ifdef CONFIG_NUMA_BALANCING
> >       unsigned int nr_numa_running;
> > @@ -8129,9 +8130,17 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> >        * ASYM_PACKING needs to move all the work to the highest
> >        * prority CPUs in the group, therefore mark all groups
> >        * of lower priority than ourself as busy.
> > +      *
> > +      * This is primarily intended to be used at the sibling level.  Some
> > +      * cores like POWER7 prefer to use lower numbered SMT threads.  In the
> > +      * case of POWER7, it can move to lower SMT modes only when higher
> > +      * threads are idle.  When in lower SMT modes, the threads will
> > +      * perform better since they share less core resources.  Hence when we
> > +      * have idle threads, we want them to be the higher ones.
> >        */
> >       if (sgs->sum_nr_running &&
> >           sched_asym_prefer(env->dst_cpu, sg->asym_prefer_cpu)) {
> > +             sgs->group_asym_packing = 1;
> >               if (!sds->busiest)
> >                       return true;
> >
>
> (I did not read any of the earlier implementations of this series, maybe
> this was discussed already in which case, sorry for the noise)
>
> Are you *sure* this is not a functional change?
>
> Asym packing is a twisty maze of headaches and I'm not familiar enough
> with it to be 100% certain without spending a lot of time on this. Even
> spotting how Power7 ends up using asym packing with lower-numbered SMT
> threads is a bit of a challenge.  Specifically, it relies on the scheduler
> domain SD_ASYM_PACKING flag for SMT domains to use the weak implementation
> of arch_asym_cpu_priority which by defaults favours the lower-numbered CPU.
>
> The check_asym_packing implementation checks that flag but I can't see
> the equivalent type of check here. update_sd_pick_busiest could be called

The checks for SD_ASYM_PACKING and CPU_NOT_IDLE are already done earlier
in the function, outside the context of this patch.
In fact, this part of update_sd_pick_busiest is already dedicated to
asym_packing.

What I'm doing is that, instead of checking asym_packing in
update_sd_pick_busiest and then rechecking the same thing in
find_busiest_group, I save the check result and reuse it.

Also, patch 04 moves this code further.

> for domains that span NUMA or basically any domain that does not specify
> SD_ASYM_PACKING and end up favouring a lower-numbered CPU (or whatever
> arch_asym_cpu_priority returns in the case of x86 which has a different
> idea for favoured CPUs).
>
> sched_asym_prefer appears to be a function that is very easy to use
> incorrectly. Should it take env and check the SD flags first?
>
> --
> Mel Gorman
> SUSE Labs

* Re: [PATCH v4 07/11] sched/fair: evenly spread tasks when not overloaded
  2019-10-18 13:26 ` [PATCH v4 07/11] sched/fair: evenly spread tasks when not overloaded Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Spread out tasks evenly " tip-bot2 for Vincent Guittot
@ 2019-10-30 16:03   ` Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: Mel Gorman @ 2019-10-30 16:03 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, mingo, peterz, pauld, valentin.schneider, srikar,
	quentin.perret, dietmar.eggemann, Morten.Rasmussen, hdanton,
	parth, riel

On Fri, Oct 18, 2019 at 03:26:34PM +0200, Vincent Guittot wrote:
> When there is only 1 cpu per group, using the idle cpus to evenly spread
> tasks doesn't make sense and nr_running is a better metric.
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
>  kernel/sched/fair.c | 40 ++++++++++++++++++++++++++++------------
>  1 file changed, 28 insertions(+), 12 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 9ac2264..9b8e20d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -8601,18 +8601,34 @@ static struct sched_group *find_busiest_group(struct lb_env *env)
>  	    busiest->sum_nr_running > local->sum_nr_running + 1)
>  		goto force_balance;
>  
> -	if (busiest->group_type != group_overloaded &&
> -	     (env->idle == CPU_NOT_IDLE ||
> -	      local->idle_cpus <= (busiest->idle_cpus + 1)))
> -		/*
> -		 * If the busiest group is not overloaded
> -		 * and there is no imbalance between this and busiest group
> -		 * wrt idle CPUs, it is balanced. The imbalance
> -		 * becomes significant if the diff is greater than 1 otherwise
> -		 * we might end up to just move the imbalance on another
> -		 * group.
> -		 */
> -		goto out_balanced;
> +	if (busiest->group_type != group_overloaded) {
> +		if (env->idle == CPU_NOT_IDLE)
> +			/*
> +			 * If the busiest group is not overloaded (and as a
> +			 * result the local one too) but this cpu is already
> +			 * busy, let another idle cpu try to pull task.
> +			 */
> +			goto out_balanced;
> +
> +		if (busiest->group_weight > 1 &&
> +		    local->idle_cpus <= (busiest->idle_cpus + 1))
> +			/*
> +			 * If the busiest group is not overloaded
> +			 * and there is no imbalance between this and busiest
> +			 * group wrt idle CPUs, it is balanced. The imbalance
> +			 * becomes significant if the diff is greater than 1
> +			 * otherwise we might end up to just move the imbalance
> +			 * on another group. Of course this applies only if
> +			 * there is more than 1 CPU per group.
> +			 */
> +			goto out_balanced;
> +
> +		if (busiest->sum_h_nr_running == 1)
> +			/*
> +			 * busiest doesn't have any tasks waiting to run
> +			 */
> +			goto out_balanced;
> +	}
>  

While outside the scope of this patch, it appears that this would still
allow slight imbalances in idle CPUs to pull tasks across NUMA domains
too easily.

-- 
Mel Gorman
SUSE Labs

* Re: [PATCH] sched/fair: fix rework of find_idlest_group()
  2019-10-22 16:46   ` [PATCH] sched/fair: fix rework of find_idlest_group() Vincent Guittot
  2019-10-23  7:50     ` Chen, Rong A
@ 2019-10-30 16:07     ` Mel Gorman
  2019-11-18 17:42     ` [tip: sched/core] sched/fair: Fix " tip-bot2 for Vincent Guittot
  2019-11-22 14:37     ` [PATCH] sched/fair: fix " Valentin Schneider
  3 siblings, 0 replies; 89+ messages in thread
From: Mel Gorman @ 2019-10-30 16:07 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, mingo, peterz, pauld, valentin.schneider, srikar,
	quentin.perret, dietmar.eggemann, Morten.Rasmussen, hdanton,
	parth, riel, rong.a.chen

On Tue, Oct 22, 2019 at 06:46:38PM +0200, Vincent Guittot wrote:
> The task, for which the scheduler looks for the idlest group of CPUs, must
> be discounted from all statistics in order to get a fair comparison
> between groups. This includes utilization, load, nr_running and idle_cpus.
> 
> Such unfairness can be easily highlighted with the unixbench execl 1 task.
> This test continuously call execve() and the scheduler looks for the idlest
> group/CPU on which it should place the task. Because the task runs on the
> local group/CPU, the latter seems already busy even if there is nothing
> else running on it. As a result, the scheduler will always select another
> group/CPU than the local one.
> 
> Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
> Reported-by: kernel test robot <rong.a.chen@intel.com>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>

I had gotten fried by this point and had not queued this patch in advance
so I don't want to comment one way or the other. However, I note this was
not picked up in tip, and it is probably best that this series all go in
as one lump rather than separating out the fixes in the final merge.
Otherwise it'll trigger false positives from LKP.

-- 
Mel Gorman
SUSE Labs

* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-10-30 15:45   ` [PATCH v4 04/11] sched/fair: rework load_balance Mel Gorman
@ 2019-10-30 16:16     ` Valentin Schneider
  2019-10-31  9:09     ` Vincent Guittot
  2019-11-18 13:50     ` Ingo Molnar
  2 siblings, 0 replies; 89+ messages in thread
From: Valentin Schneider @ 2019-10-30 16:16 UTC (permalink / raw)
  To: Mel Gorman, Vincent Guittot
  Cc: linux-kernel, mingo, peterz, pauld, srikar, quentin.perret,
	dietmar.eggemann, Morten.Rasmussen, hdanton, parth, riel

On 30/10/2019 16:45, Mel Gorman wrote:
> On Fri, Oct 18, 2019 at 03:26:31PM +0200, Vincent Guittot wrote:
>> The load_balance algorithm contains some heuristics which have become
>> meaningless since the rework of the scheduler's metrics like the
>> introduction of PELT.
>>
>> Furthermore, load is an ill-suited metric for solving certain task
>> placement imbalance scenarios. For instance, in the presence of idle CPUs,
>> we should simply try to get at least one task per CPU, whereas the current
>> load-based algorithm can actually leave idle CPUs alone simply because the
>> load is somewhat balanced. The current algorithm ends up creating virtual
>> and meaningless value like the avg_load_per_task or tweaks the state of a
>> group to make it overloaded whereas it's not, in order to try to migrate
>> tasks.
>>
> 
> I do not think this is necessarily 100% true. With both the previous
> load-balancing behaviour and the apparent behaviour of this patch, it's
> still possible to pull two communicating tasks apart and across NUMA
> domains when utilisation is low. Specifically, a load difference of less
> than SCHED_CAPACITY_SCALE between NUMA nodes can be enough to migrate a
> task to level out load.
> 
> So, load might be ill-suited for some cases but that does not make it
> completely useless either.
> 

For sure! What we're trying to get to is that load is much less relevant
at low task count than at high task count.

A pathological case for asymmetric systems (big.LITTLE et al) is if you spawn
as many tasks as you have CPUs and they're not all spread out straight from
wakeup (so you have at least one rq coscheduling 2 tasks), the load balancer
might never solve that imbalance and leave a CPU idle for the duration of the
workload.

The problem there is that the current LB tries to balance avg_load, which is
screwy when asymmetric capacities are involved.

Take the h960 for instance, which is 4+4 big.LITTLE (LITTLEs are 462 capacity,
bigs 1024). Say you end up with 3 tasks on the LITTLEs (one is idle), and 5
tasks on the bigs (none idle). The tasks are CPU hogs, so they won't go
through new wakeups.

You have something like 3072 load on the LITTLEs vs 5120 on the bigs. The
thing is, a group's avg_load scales inversely with capacity:

sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
				sgs->group_capacity;

That means the LITTLEs group ends up with 1702 avg_load, whereas the bigs
group gets 1280 - the LITTLEs group is seen as more "busy" despite having an
idle CPU. This kind of imbalance usually ends up in fix_small_imbalance()
territory, which is random at best.

>> @@ -8022,9 +8076,16 @@ static inline void update_sg_lb_stats(struct lb_env *env,
>>  		/*
>>  		 * No need to call idle_cpu() if nr_running is not 0
>>  		 */
>> -		if (!nr_running && idle_cpu(i))
>> +		if (!nr_running && idle_cpu(i)) {
>>  			sgs->idle_cpus++;
>> +			/* Idle cpu can't have misfit task */
>> +			continue;
>> +		}
>> +
>> +		if (local_group)
>> +			continue;
>>  
>> +		/* Check for a misfit task on the cpu */
>>  		if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
>>  		    sgs->group_misfit_task_load < rq->misfit_task_load) {
>>  			sgs->group_misfit_task_load = rq->misfit_task_load;
> 
> So.... why exactly do we not care about misfit tasks on CPUs in the
> local group? I'm not saying you're wrong because you have a clear idea
> on how misfit tasks should be treated but it's very non-obvious just
> from the code.
> 

Misfit tasks need to be migrated away to CPUs of higher capacity. We let that
happen through the usual load-balance pull - there's little point in pulling
from ourselves.

> I didn't see anything obvious after this point but I also am getting a
> bit on the fried side trying to hold this entire patch in my head and
> got hung up on the NUMA domain balancing in particular.
> 

I share the feeling, I've done it by chunks but need to have another go at it.

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-21  7:50 ` [PATCH v4 00/10] sched/fair: rework the CFS load balance Ingo Molnar
  2019-10-21  8:44   ` Vincent Guittot
@ 2019-10-30 16:24   ` Mel Gorman
  2019-10-30 16:35     ` Vincent Guittot
  2019-11-18 13:15     ` Ingo Molnar
  1 sibling, 2 replies; 89+ messages in thread
From: Mel Gorman @ 2019-10-30 16:24 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Vincent Guittot, linux-kernel, mingo, peterz, pauld,
	valentin.schneider, srikar, quentin.perret, dietmar.eggemann,
	Morten.Rasmussen, hdanton, parth, riel

On Mon, Oct 21, 2019 at 09:50:38AM +0200, Ingo Molnar wrote:
> > <SNIP>
> 
> Thanks, that's an excellent series!
> 

Agreed despite the level of whining and complaining I made during the
review.

> I've queued it up in sched/core with a handful of readability edits to 
> comments and changelogs.
> 
> There are some upstreaming caveats though, I expect this series to be a 
> performance regression magnet:
> 
>  - load_balance() and wake-up changes invariably are such: some workloads 
>    only work/scale well by accident, and if we touch the logic it might 
>    flip over into a less advantageous scheduling pattern.
> 
>  - In particular the changes from balancing and waking on runnable load 
>    to full load that includes blocking *will* shift IO-intensive 
>    workloads that your tests don't fully capture I believe. You also made 
>    idle balancing more aggressive in essence - which might reduce cache 
>    locality for some workloads.
> 
> A full run on Mel Gorman's magic scalability test-suite would be super 
> useful ...
> 

I queued this back on the 21st and it took this long for me to get back
to it.

What I tested did not include the fix for the last patch so I cannot say
the data is that useful. I also failed to include something that exercised
the IO paths in a way that idles rapidly as that can catch interesting
details (usually cpufreq related but sometimes load-balancing related).
There was no real thinking behind this decision, I just used an old
collection of tests to get a general feel for the series.

Most of the results were performance-neutral and some notable gains
(kernel compiles were 1-6% faster depending on the -j count). Hackbench
saw a disproportionate gain in terms of performance but I tend to be wary
of hackbench as improving it is rarely a universal win.
There tends to be some jitter around the point where a NUMA nodes worth
of CPUs gets overloaded. tbench (mmtests configuration network-tbench) on
a NUMA machine showed gains for low thread counts and high thread counts
but a loss near the boundary where a single node would get overloaded.

Some NAS-related workloads saw a drop in performance on NUMA machines
but the size class might be too small to be certain, I'd have to rerun
with the D class to be sure.  The biggest strange drop in performance
was the elapsed time to run the git test suite (mmtests configuration
workload-shellscripts modified to use a fresh XFS partition) took 17.61%
longer to execute on a UMA Skylake machine. This *might* be due to the
missing fix because it is mostly a single-task workload.

I'm not going to go through the results in detail because I think another
full round of testing would be required to take the fix into account. I'd
also prefer to wait to see if the review results in any material change
to the series.

-- 
Mel Gorman
SUSE Labs

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 14:39               ` Phil Auld
@ 2019-10-30 16:24                 ` Dietmar Eggemann
  2019-10-30 16:35                   ` Valentin Schneider
  2019-10-30 17:25                 ` Vincent Guittot
  1 sibling, 1 reply; 89+ messages in thread
From: Dietmar Eggemann @ 2019-10-30 16:24 UTC (permalink / raw)
  To: Phil Auld, Vincent Guittot
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On 30.10.19 15:39, Phil Auld wrote:
> Hi Vincent,
> 
> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:

[...]

>>> When you say slow versus fast wakeup paths what do you mean? I'm still
>>> learning my way around all this code.
>>
>> When task wakes up, we can decide to
>> - speedup the wakeup and shorten the list of cpus and compare only
>> prev_cpu vs this_cpu (in fact the group of cpu that share their
>> respective LLC). That's the fast wakeup path that is used most of the
>> time during a wakeup
>> - or start to find the idlest CPU of the system and scan all domains.
>> That's the slow path that is used for new tasks or when a task wakes
>> up a lot of other tasks at the same time

[...]

Is the latter related to wake_wide()? If yes, is the SD_BALANCE_WAKE
flag set on the sched domains on your machines? IMHO, otherwise those
wakeups are not forced into the slow path (the if (unlikely(sd)) branch).

I had this discussion the other day with Valentin S. on #sched and we
were not sure how SD_BALANCE_WAKE is set on sched domains on
!SD_ASYM_CPUCAPACITY systems.

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 16:24   ` Mel Gorman
@ 2019-10-30 16:35     ` Vincent Guittot
  2019-11-18 13:15     ` Ingo Molnar
  1 sibling, 0 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-30 16:35 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Ingo Molnar, linux-kernel, Ingo Molnar, Peter Zijlstra,
	Phil Auld, Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Wed, 30 Oct 2019 at 17:24, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Mon, Oct 21, 2019 at 09:50:38AM +0200, Ingo Molnar wrote:
> > > <SNIP>
> >
> > Thanks, that's an excellent series!
> >
>
> Agreed despite the level of whining and complaining I made during the
> review.

Thanks for the review.
I haven't gone through all your comments yet but will do in the coming days


* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 16:24                 ` Dietmar Eggemann
@ 2019-10-30 16:35                   ` Valentin Schneider
  2019-10-30 17:19                     ` Phil Auld
  0 siblings, 1 reply; 89+ messages in thread
From: Valentin Schneider @ 2019-10-30 16:35 UTC (permalink / raw)
  To: Dietmar Eggemann, Phil Auld, Vincent Guittot
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Srikar Dronamraju, Quentin Perret,
	Morten Rasmussen, Hillf Danton, Parth Shah, Rik van Riel



On 30/10/2019 17:24, Dietmar Eggemann wrote:
> On 30.10.19 15:39, Phil Auld wrote:
>> Hi Vincent,
>>
>> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> 
> [...]
> 
>>>> When you say slow versus fast wakeup paths what do you mean? I'm still
>>>> learning my way around all this code.
>>>
>>> When task wakes up, we can decide to
>>> - speedup the wakeup and shorten the list of cpus and compare only
>>> prev_cpu vs this_cpu (in fact the group of cpu that share their
>>> respective LLC). That's the fast wakeup path that is used most of the
>>> time during a wakeup
>>> - or start to find the idlest CPU of the system and scan all domains.
>>> That's the slow path that is used for new tasks or when a task wakes
>>> up a lot of other tasks at the same time
> 
> [...]
> 
> Is the latter related to wake_wide()? If yes, is the SD_BALANCE_WAKE
> flag set on the sched domains on your machines? IMHO, otherwise those
> wakeups are not forced into the slowpath (if (unlikely(sd))?
> 
> I had this discussion the other day with Valentin S. on #sched and we
> were not sure how SD_BALANCE_WAKE is set on sched domains on
> !SD_ASYM_CPUCAPACITY systems.
> 

Well, from the code, nobody but us (asymmetric capacity systems) sets
SD_BALANCE_WAKE. I was however curious if there were some folks who set it
with out-of-tree code for some reason.

As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
the domain loop to find. Depending on your topology you most likely will
go through it on fork or exec though.

IOW wake_wide() is not really widening the wakeup scan on wakeups using
mainline topology code (disregarding asymmetric capacity systems), which
sounds a bit... off.

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 16:35                   ` Valentin Schneider
@ 2019-10-30 17:19                     ` Phil Auld
  2019-10-30 17:25                       ` Valentin Schneider
  2019-10-30 17:28                       ` Vincent Guittot
  0 siblings, 2 replies; 89+ messages in thread
From: Phil Auld @ 2019-10-30 17:19 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: Dietmar Eggemann, Vincent Guittot, Ingo Molnar, Mel Gorman,
	linux-kernel, Ingo Molnar, Peter Zijlstra, Srikar Dronamraju,
	Quentin Perret, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

Hi,

On Wed, Oct 30, 2019 at 05:35:55PM +0100 Valentin Schneider wrote:
> 
> 
> On 30/10/2019 17:24, Dietmar Eggemann wrote:
> > On 30.10.19 15:39, Phil Auld wrote:
> >> Hi Vincent,
> >>
> >> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> > 
> > [...]
> > 
> >>>> When you say slow versus fast wakeup paths what do you mean? I'm still
> >>>> learning my way around all this code.
> >>>
> >>> When task wakes up, we can decide to
> >>> - speedup the wakeup and shorten the list of cpus and compare only
> >>> prev_cpu vs this_cpu (in fact the group of cpu that share their
> >>> respective LLC). That's the fast wakeup path that is used most of the
> >>> time during a wakeup
> >>> - or start to find the idlest CPU of the system and scan all domains.
> >>> That's the slow path that is used for new tasks or when a task wakes
> >>> up a lot of other tasks at the same time
> > 
> > [...]
> > 
> > Is the latter related to wake_wide()? If yes, is the SD_BALANCE_WAKE
> > flag set on the sched domains on your machines? IMHO, otherwise those
> > wakeups are not forced into the slowpath (if (unlikely(sd))?
> > 
> > I had this discussion the other day with Valentin S. on #sched and we
> > were not sure how SD_BALANCE_WAKE is set on sched domains on
> > !SD_ASYM_CPUCAPACITY systems.
> > 
> 
> Well from the code nobody but us (asymmetric capacity systems) set
> SD_BALANCE_WAKE. I was however curious if there were some folks who set it
> with out of tree code for some reason.
> 
> As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
> the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
> the domain loop to find. Depending on your topology you most likely will
> go through it on fork or exec though.
> 
> IOW wake_wide() is not really widening the wakeup scan on wakeups using
> mainline topology code (disregarding asymmetric capacity systems), which
> sounds a bit... off.

Thanks. It's not currently set. I'll set it and re-run to see if it makes
a difference. 


However, I'm not sure why it would be making a difference for only the cgroup
case. If this is causing issues I'd expect it to affect both runs.

In general I think these threads want to wake up on the last cpu they were on.
And given there are fewer cpu-bound tasks than CPUs, that wake cpu should,
more often than not, be idle.


Cheers,
Phil



-- 



* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 17:19                     ` Phil Auld
@ 2019-10-30 17:25                       ` Valentin Schneider
  2019-10-30 17:29                         ` Phil Auld
  2019-10-30 17:28                       ` Vincent Guittot
  1 sibling, 1 reply; 89+ messages in thread
From: Valentin Schneider @ 2019-10-30 17:25 UTC (permalink / raw)
  To: Phil Auld
  Cc: Dietmar Eggemann, Vincent Guittot, Ingo Molnar, Mel Gorman,
	linux-kernel, Ingo Molnar, Peter Zijlstra, Srikar Dronamraju,
	Quentin Perret, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On 30/10/2019 18:19, Phil Auld wrote:
>> Well from the code nobody but us (asymmetric capacity systems) set
>> SD_BALANCE_WAKE. I was however curious if there were some folks who set it
>> with out of tree code for some reason.
>>
>> As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
>> the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
>> the domain loop to find. Depending on your topology you most likely will
>> go through it on fork or exec though.
>>
>> IOW wake_wide() is not really widening the wakeup scan on wakeups using
>> mainline topology code (disregarding asymmetric capacity systems), which
>> sounds a bit... off.
> 
> Thanks. It's not currently set. I'll set it and re-run to see if it makes
> a difference. 
> 

Note that it might do more harm than good, it's not set in the default
topology because it's too aggressive, see 

  182a85f8a119 ("sched: Disable wakeup balancing")

> 
> However, I'm not sure why it would be making a difference for only the cgroup
> case. If this is causing issues I'd expect it to affect both runs.
> 
> In general I think these threads want to wake up on the last cpu they were on.
> And given there are fewer cpu-bound tasks than CPUs, that wake cpu should,
> more often than not, be idle.
> 
> 
> Cheers,
> Phil
> 
> 
> 


* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 14:39               ` Phil Auld
  2019-10-30 16:24                 ` Dietmar Eggemann
@ 2019-10-30 17:25                 ` Vincent Guittot
  2019-10-31 13:57                   ` Phil Auld
  1 sibling, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-10-30 17:25 UTC (permalink / raw)
  To: Phil Auld
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Hillf Danton,
	Parth Shah, Rik van Riel

On Wed, 30 Oct 2019 at 15:39, Phil Auld <pauld@redhat.com> wrote:
>
> Hi Vincent,
>
> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> > Hi Phil,
> >
>
> ...
>
> >
> > The input could mean that this system reaches a particular level of
> > utilization and load that is close to the threshold between 2
> > different behavior like spare capacity and fully_busy/overloaded case.
> > But at the opposite, there is less threads that CPUs in your UCs so
> > one group at least at NUMA level should be tagged as
> > has_spare_capacity and should pull tasks.
>
> Yes. Maybe we don't hit that and rely on "load" since things look
> busy. There are only 2 spare cpus in the 156 + 2 case. Is it possible
> that information is getting lost with the extra NUMA levels?

It should not, but I have to look more deeply at your topology.
If we have fewer tasks than CPUs, one group should always be tagged
"has_spare_capacity".

>
> >
> > >
> > > >
> > > > The fix favors the local group so your UC seems to prefer spreading
> > > > tasks at wake up
> > > > If you have any traces that you can share, this could help to
> > > > understand what's going on. I will try to reproduce the problem on my
> > > > system
> > >
> > > I'm not actually sure the fix here is causing this. Looking at the data
> > > more closely I see similar imbalances on v4, v4a and v3.
> > >
> > > When you say slow versus fast wakeup paths what do you mean? I'm still
> > > learning my way around all this code.
> >
> > When task wakes up, we can decide to
> > - speedup the wakeup and shorten the list of cpus and compare only
> > prev_cpu vs this_cpu (in fact the group of cpu that share their
> > respective LLC). That's the fast wakeup path that is used most of the
> > time during a wakeup
> > - or start to find the idlest CPU of the system and scan all domains.
> > That's the slow path that is used for new tasks or when a task wakes
> > up a lot of other tasks at the same time
> >
>
> Thanks.
>
> >
> > >
> > > This particular test is specifically designed to highlight the imbalance
> > > cause by the use of group scheduler defined load and averages. The threads
> > > are mostly CPU bound but will join up every time step. So if each thread
> >
> > ok the fact that they join up might be the root cause of your problem.
> > They will wake up at the same time by the same task and CPU.
> >
>
> If that was the problem I'd expect issues on other high node count systems.

yes probably

>
> >
> > That fact that the 4 nodes works well but not the 8 nodes is a bit
> > surprising except if this means more NUMA level in the sched_domain
> > topology
> > Could you give us more details about the sched domain topology ?
> >
>
> The 8-node system has 5 sched domain levels.  The 4-node system only
> has 3.

That's an interesting difference, and your additional tests on an
8-node system with 3 levels tend to confirm that the number of levels
makes a difference.
I need to study a bit more how this can impact the spread of tasks.

>
>
> cpu159 0 0 0 0 0 0 4361694551702 124316659623 94736
> domain0 80000000,00000000,00008000,00000000,00000000 0 0
> domain1 ffc00000,00000000,0000ffc0,00000000,00000000 0 0
> domain2 fffff000,00000000,0000ffff,f0000000,00000000 0 0
> domain3 ffffffff,ff000000,0000ffff,ffffff00,00000000 0 0
> domain4 ffffffff,ffffffff,ffffffff,ffffffff,ffffffff 0 0
>
> numactl --hardware
> available: 8 nodes (0-7)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 80 81 82 83 84 85 86 87 88 89
> node 0 size: 126928 MB
> node 0 free: 126452 MB
> node 1 cpus: 10 11 12 13 14 15 16 17 18 19 90 91 92 93 94 95 96 97 98 99
> node 1 size: 129019 MB
> node 1 free: 128813 MB
> node 2 cpus: 20 21 22 23 24 25 26 27 28 29 100 101 102 103 104 105 106 107 108 109
> node 2 size: 129019 MB
> node 2 free: 128875 MB
> node 3 cpus: 30 31 32 33 34 35 36 37 38 39 110 111 112 113 114 115 116 117 118 119
> node 3 size: 129019 MB
> node 3 free: 128850 MB
> node 4 cpus: 40 41 42 43 44 45 46 47 48 49 120 121 122 123 124 125 126 127 128 129
> node 4 size: 128993 MB
> node 4 free: 128862 MB
> node 5 cpus: 50 51 52 53 54 55 56 57 58 59 130 131 132 133 134 135 136 137 138 139
> node 5 size: 129019 MB
> node 5 free: 128872 MB
> node 6 cpus: 60 61 62 63 64 65 66 67 68 69 140 141 142 143 144 145 146 147 148 149
> node 6 size: 129019 MB
> node 6 free: 128852 MB
> node 7 cpus: 70 71 72 73 74 75 76 77 78 79 150 151 152 153 154 155 156 157 158 159
> node 7 size: 112889 MB
> node 7 free: 112720 MB
> node distances:
> node   0   1   2   3   4   5   6   7
>   0:  10  12  17  17  19  19  19  19
>   1:  12  10  17  17  19  19  19  19
>   2:  17  17  10  12  19  19  19  19
>   3:  17  17  12  10  19  19  19  19
>   4:  19  19  19  19  10  12  17  17
>   5:  19  19  19  19  12  10  17  17
>   6:  19  19  19  19  17  17  10  12
>   7:  19  19  19  19  17  17  12  10
>
>
>
> available: 4 nodes (0-3)
> node 0 cpus: 0 1 2 3 4 5 6 7 8 9 40 41 42 43 44 45 46 47 48 49
> node 0 size: 257943 MB
> node 0 free: 257602 MB
> node 1 cpus: 10 11 12 13 14 15 16 17 18 19 50 51 52 53 54 55 56 57 58 59
> node 1 size: 258043 MB
> node 1 free: 257619 MB
> node 2 cpus: 20 21 22 23 24 25 26 27 28 29 60 61 62 63 64 65 66 67 68 69
> node 2 size: 258043 MB
> node 2 free: 257879 MB
> node 3 cpus: 30 31 32 33 34 35 36 37 38 39 70 71 72 73 74 75 76 77 78 79
> node 3 size: 258043 MB
> node 3 free: 257823 MB
> node distances:
> node   0   1   2   3
>   0:  10  20  20  20
>   1:  20  10  20  20
>   2:  20  20  10  20
>   3:  20  20  20  10
>
>
>
>
> An 8-node system (albeit with sub-numa) has node distances
>
> node distances:
> node   0   1   2   3   4   5   6   7
>   0:  10  11  21  21  21  21  21  21
>   1:  11  10  21  21  21  21  21  21
>   2:  21  21  10  11  21  21  21  21
>   3:  21  21  11  10  21  21  21  21
>   4:  21  21  21  21  10  11  21  21
>   5:  21  21  21  21  11  10  21  21
>   6:  21  21  21  21  21  21  10  11
>   7:  21  21  21  21  21  21  11  10
>
> This one does not exhibit the problem with the latest (v4a). But also
> only has 3 levels.
>
>
> > >
> > > There's still something between v1 and v4 on that 8-node system that is
> > > still illustrating the original problem.  On our other test systems this
> > > series really works nicely to solve this problem. And even if we can't get
> > > to the bottom if this it's a significant improvement.
> > >
> > >
> > > Here is v3 for the 8-node system
> > > lu.C.x_152_GROUP_1  Average    17.52  16.86  17.90  18.52  20.00  19.00  22.00  20.19
> > > lu.C.x_152_GROUP_2  Average    15.70  15.04  15.65  15.72  23.30  28.98  20.09  17.52
> > > lu.C.x_152_GROUP_3  Average    27.72  32.79  22.89  22.62  11.01  12.90  12.14  9.93
> > > lu.C.x_152_GROUP_4  Average    18.13  18.87  18.40  17.87  18.80  19.93  20.40  19.60
> > > lu.C.x_152_GROUP_5  Average    24.14  26.46  20.92  21.43  14.70  16.05  15.14  13.16
> > > lu.C.x_152_NORMAL_1 Average    21.03  22.43  20.27  19.97  18.37  18.80  16.27  14.87
> > > lu.C.x_152_NORMAL_2 Average    19.24  18.29  18.41  17.41  19.71  19.00  20.29  19.65
> > > lu.C.x_152_NORMAL_3 Average    19.43  20.00  19.05  20.24  18.76  17.38  18.52  18.62
> > > lu.C.x_152_NORMAL_4 Average    17.19  18.25  17.81  18.69  20.44  19.75  20.12  19.75
> > > lu.C.x_152_NORMAL_5 Average    19.25  19.56  19.12  19.56  19.38  19.38  18.12  17.62
> > >
> > > lu.C.x_156_GROUP_1  Average    18.62  19.31  18.38  18.77  19.88  21.35  19.35  20.35
> > > lu.C.x_156_GROUP_2  Average    15.58  12.72  14.96  14.83  20.59  19.35  29.75  28.22
> > > lu.C.x_156_GROUP_3  Average    20.05  18.74  19.63  18.32  20.26  20.89  19.53  18.58
> > > lu.C.x_156_GROUP_4  Average    14.77  11.42  13.01  10.09  27.05  33.52  23.16  22.98
> > > lu.C.x_156_GROUP_5  Average    14.94  11.45  12.77  10.52  28.01  33.88  22.37  22.05
> > > lu.C.x_156_NORMAL_1 Average    20.00  20.58  18.47  18.68  19.47  19.74  19.42  19.63
> > > lu.C.x_156_NORMAL_2 Average    18.52  18.48  18.83  18.43  20.57  20.48  20.61  20.09
> > > lu.C.x_156_NORMAL_3 Average    20.27  20.00  20.05  21.18  19.55  19.00  18.59  17.36
> > > lu.C.x_156_NORMAL_4 Average    19.65  19.60  20.25  20.75  19.35  20.10  19.00  17.30
> > > lu.C.x_156_NORMAL_5 Average    19.79  19.67  20.62  22.42  18.42  18.00  17.67  19.42
> > >
> > >
> > > I'll try to find pre-patched results for this 8 node system.  Just to keep things
> > > together for reference here is the 4-node system before this re-work series.
> > >
> > > lu.C.x_76_GROUP_1  Average    15.84  24.06  23.37  12.73
> > > lu.C.x_76_GROUP_2  Average    15.29  22.78  22.49  15.45
> > > lu.C.x_76_GROUP_3  Average    13.45  23.90  22.97  15.68
> > > lu.C.x_76_NORMAL_1 Average    18.31  19.54  19.54  18.62
> > > lu.C.x_76_NORMAL_2 Average    19.73  19.18  19.45  17.64
> > >
> > > This produced a 4.5x slowdown for the group runs versus the nicely balanced
> > > normal runs.
> > >
>
> Here is the base 5.4.0-rc3+ kernel on the 8-node system:
>
> lu.C.x_156_GROUP_1  Average    10.87  0.00   0.00   11.49  36.69  34.26  30.59  32.10
> lu.C.x_156_GROUP_2  Average    20.15  16.32  9.49   24.91  21.07  20.93  21.63  21.50
> lu.C.x_156_GROUP_3  Average    21.27  17.23  11.84  21.80  20.91  20.68  21.11  21.16
> lu.C.x_156_GROUP_4  Average    19.44  6.53   8.71   19.72  22.95  23.16  28.85  26.64
> lu.C.x_156_GROUP_5  Average    20.59  6.20   11.32  14.63  28.73  30.36  22.20  21.98
> lu.C.x_156_NORMAL_1 Average    20.50  19.95  20.40  20.45  18.75  19.35  18.25  18.35
> lu.C.x_156_NORMAL_2 Average    17.15  19.04  18.42  18.69  21.35  21.42  20.00  19.92
> lu.C.x_156_NORMAL_3 Average    18.00  18.15  17.55  17.60  18.90  18.40  19.90  19.75
> lu.C.x_156_NORMAL_4 Average    20.53  20.05  20.21  19.11  19.00  19.47  19.37  18.26
> lu.C.x_156_NORMAL_5 Average    18.72  18.78  19.72  18.50  19.67  19.72  21.11  19.78
>
> Including the actual benchmark results.
> ============156_GROUP========Mop/s===================================
> min     q1      median  q3      max
> 1564.63 3003.87 3928.23 5411.13 8386.66
> ============156_GROUP========time====================================
> min     q1      median  q3      max
> 243.12  376.82  519.06  678.79  1303.18
> ============156_NORMAL========Mop/s===================================
> min     q1      median  q3      max
> 13845.6 18013.8 18545.5 19359.9 19647.4
> ============156_NORMAL========time====================================
> min     q1      median  q3      max
> 103.78  105.32  109.95  113.19  147.27
>
> You can see the ~5x slowdown of the pre-rework issue. v4a is much improved over
> mainline.
>
> I'll try to find some other machines as well.
>
>
> > >
> > >
> > > I can try to get traces but this is not my system so it may take a little
> > > while. I've found that the existing trace points don't give enough information
> > > to see what is happening in this problem. But the visualization in kernelshark
> > > does show the problem pretty well. Do you want just the existing sched tracepoints
> > > or should I update some of the traceprintks I used in the earlier traces?
> >
> > The standard tracepoint is a good starting point but tracing the
> > statistings for find_busiest_group and find_idlest_group should help a
> > lot.
> >
>
> I have some traces which I'll send you directly since they're large.

Thanks

>
>
> Cheers,
> Phil
>
>
>
> > Cheers,
> > Vincent
> >
> > >
> > >
> > >
> > > Cheers,
> > > Phil
> > >
> > >
> > > >
> > > > >
> > > > > We're re-running the test to get more samples.
> > > >
> > > > Thanks
> > > > Vincent
> > > >
> > > > >
> > > > >
> > > > > Other tests and systems were still fine.
> > > > >
> > > > >
> > > > > Cheers,
> > > > > Phil
> > > > >
> > > > >
> > > > > > Numbers for my specific testcase (the cgroup imbalance) are basically
> > > > > > the same as I posted for v3 (plus the better 8-node numbers). I.e. this
> > > > > > series solves that issue.
> > > > > >
> > > > > >
> > > > > > Cheers,
> > > > > > Phil
> > > > > >
> > > > > >
> > > > > > >
> > > > > > > >
> > > > > > > > Also, we seem to have grown a fair amount of these TODO entries:
> > > > > > > >
> > > > > > > >   kernel/sched/fair.c: * XXX borrowed from update_sg_lb_stats
> > > > > > > >   kernel/sched/fair.c: * XXX: only do this for the part of runnable > running ?
> > > > > > > >   kernel/sched/fair.c:     * XXX illustrate
> > > > > > > >   kernel/sched/fair.c:    } else if (sd_flag & SD_BALANCE_WAKE) { /* XXX always ? */
> > > > > > > >   kernel/sched/fair.c: * can also include other factors [XXX].
> > > > > > > >   kernel/sched/fair.c: * [XXX expand on:
> > > > > > > >   kernel/sched/fair.c: * [XXX more?]
> > > > > > > >   kernel/sched/fair.c: * [XXX write more on how we solve this.. _after_ merging pjt's patches that
> > > > > > > >   kernel/sched/fair.c:             * XXX for now avg_load is not computed and always 0 so we
> > > > > > > >   kernel/sched/fair.c:            /* XXX broken for overlapping NUMA groups */
> > > > > > > >
> > > > > > >
> > > > > > > I will have a look :-)
> > > > > > >
> > > > > > > > :-)
> > > > > > > >
> > > > > > > > Thanks,
> > > > > > > >
> > > > > > > >         Ingo
> > > > > >
> > > > > > --
> > > > > >
> > > > >
> > > > > --
> > > > >
> > >
> > > --
> > >
>
> --
>


* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 17:19                     ` Phil Auld
  2019-10-30 17:25                       ` Valentin Schneider
@ 2019-10-30 17:28                       ` Vincent Guittot
  2019-10-30 17:44                         ` Phil Auld
  1 sibling, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-10-30 17:28 UTC (permalink / raw)
  To: Phil Auld
  Cc: Valentin Schneider, Dietmar Eggemann, Ingo Molnar, Mel Gorman,
	linux-kernel, Ingo Molnar, Peter Zijlstra, Srikar Dronamraju,
	Quentin Perret, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Wed, 30 Oct 2019 at 18:19, Phil Auld <pauld@redhat.com> wrote:
>
> Hi,
>
> On Wed, Oct 30, 2019 at 05:35:55PM +0100 Valentin Schneider wrote:
> >
> >
> > On 30/10/2019 17:24, Dietmar Eggemann wrote:
> > > On 30.10.19 15:39, Phil Auld wrote:
> > >> Hi Vincent,
> > >>
> > >> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> > >
> > > [...]
> > >
> > >>>> When you say slow versus fast wakeup paths what do you mean? I'm still
> > >>>> learning my way around all this code.
> > >>>
> > >>> When task wakes up, we can decide to
> > >>> - speedup the wakeup and shorten the list of cpus and compare only
> > >>> prev_cpu vs this_cpu (in fact the group of cpu that share their
> > >>> respective LLC). That's the fast wakeup path that is used most of the
> > >>> time during a wakeup
> > >>> - or start to find the idlest CPU of the system and scan all domains.
> > >>> That's the slow path that is used for new tasks or when a task wakes
> > >>> up a lot of other tasks at the same time
> > >
> > > [...]
> > >
> > > Is the latter related to wake_wide()? If yes, is the SD_BALANCE_WAKE
> > > flag set on the sched domains on your machines? IMHO, otherwise those
> > > wakeups are not forced into the slowpath (if (unlikely(sd))?
> > >
> > > I had this discussion the other day with Valentin S. on #sched and we
> > > were not sure how SD_BALANCE_WAKE is set on sched domains on
> > > !SD_ASYM_CPUCAPACITY systems.
> > >
> >
> > Well from the code nobody but us (asymmetric capacity systems) set
> > SD_BALANCE_WAKE. I was however curious if there were some folks who set it
> > with out of tree code for some reason.
> >
> > As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
> > the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
> > the domain loop to find. Depending on your topology you most likely will
> > go through it on fork or exec though.
> >
> > IOW wake_wide() is not really widening the wakeup scan on wakeups using
> > mainline topology code (disregarding asymmetric capacity systems), which
> > sounds a bit... off.
>
> Thanks. It's not currently set. I'll set it and re-run to see if it makes
> a difference.

Because the fix only touches the slow path, and according to Valentin's
and Dietmar's comments on the wake-up path, it would mean that your UC
regularly creates some new threads during the test?

>
>
> However, I'm not sure why it would be making a difference for only the cgroup
> case. If this is causing issues I'd expect it to affect both runs.
>
> In general I think these threads want to wake up on the last cpu they were on.
> And given there are fewer cpu-bound tasks than CPUs, that wake cpu should,
> more often than not, be idle.
>
>
> Cheers,
> Phil
>
>
>
> --
>


* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 17:25                       ` Valentin Schneider
@ 2019-10-30 17:29                         ` Phil Auld
  0 siblings, 0 replies; 89+ messages in thread
From: Phil Auld @ 2019-10-30 17:29 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: Dietmar Eggemann, Vincent Guittot, Ingo Molnar, Mel Gorman,
	linux-kernel, Ingo Molnar, Peter Zijlstra, Srikar Dronamraju,
	Quentin Perret, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Wed, Oct 30, 2019 at 06:25:09PM +0100 Valentin Schneider wrote:
> On 30/10/2019 18:19, Phil Auld wrote:
> >> Well from the code nobody but us (asymmetric capacity systems) set
> >> SD_BALANCE_WAKE. I was however curious if there were some folks who set it
> >> with out of tree code for some reason.
> >>
> >> As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
> >> the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
> >> the domain loop to find. Depending on your topology you most likely will
> >> go through it on fork or exec though.
> >>
> >> IOW wake_wide() is not really widening the wakeup scan on wakeups using
> >> mainline topology code (disregarding asymmetric capacity systems), which
> >> sounds a bit... off.
> > 
> > Thanks. It's not currently set. I'll set it and re-run to see if it makes
> > a difference. 
> > 
> 
> Note that it might do more harm than good, it's not set in the default
> topology because it's too aggressive, see 
> 
>   182a85f8a119 ("sched: Disable wakeup balancing")
> 

Heh, yeah... even as it's running I can see that this is killing it :)


> > 
> > However, I'm not sure why it would be making a difference for only the cgroup
> > case. If this is causing issues I'd expect it to affect both runs. 
> > 
> > In general I think these threads want to wake up on the last cpu they were on.
> > And given there are fewer cpu-bound tasks than CPUs, that wake cpu should,
> > more often than not, be idle. 
> > 
> > 
> > Cheers,
> > Phil
> > 
> > 
> > 

-- 



* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 17:28                       ` Vincent Guittot
@ 2019-10-30 17:44                         ` Phil Auld
  0 siblings, 0 replies; 89+ messages in thread
From: Phil Auld @ 2019-10-30 17:44 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Valentin Schneider, Dietmar Eggemann, Ingo Molnar, Mel Gorman,
	linux-kernel, Ingo Molnar, Peter Zijlstra, Srikar Dronamraju,
	Quentin Perret, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Wed, Oct 30, 2019 at 06:28:50PM +0100 Vincent Guittot wrote:
> On Wed, 30 Oct 2019 at 18:19, Phil Auld <pauld@redhat.com> wrote:
> >
> > Hi,
> >
> > On Wed, Oct 30, 2019 at 05:35:55PM +0100 Valentin Schneider wrote:
> > >
> > >
> > > On 30/10/2019 17:24, Dietmar Eggemann wrote:
> > > > On 30.10.19 15:39, Phil Auld wrote:
> > > >> Hi Vincent,
> > > >>
> > > >> On Mon, Oct 28, 2019 at 02:03:15PM +0100 Vincent Guittot wrote:
> > > >
> > > > [...]
> > > >
> > > >>>> When you say slow versus fast wakeup paths what do you mean? I'm still
> > > >>>> learning my way around all this code.
> > > >>>
> > > >>> When task wakes up, we can decide to
> > > >>> - speedup the wakeup and shorten the list of cpus and compare only
> > > >>> prev_cpu vs this_cpu (in fact the group of cpu that share their
> > > >>> respective LLC). That's the fast wakeup path that is used most of the
> > > >>> time during a wakeup
> > > >>> - or start to find the idlest CPU of the system and scan all domains.
> > > >>> That's the slow path that is used for new tasks or when a task wakes
> > > >>> up a lot of other tasks at the same time
> > > >
> > > > [...]
> > > >
> > > > Is the latter related to wake_wide()? If yes, is the SD_BALANCE_WAKE
> > > > flag set on the sched domains on your machines? IMHO, otherwise those
> > > > wakeups are not forced into the slowpath (if (unlikely(sd))?
> > > >
> > > > I had this discussion the other day with Valentin S. on #sched and we
> > > > were not sure how SD_BALANCE_WAKE is set on sched domains on
> > > > !SD_ASYM_CPUCAPACITY systems.
> > > >
> > >
> > > Well from the code nobody but us (asymmetric capacity systems) set
> > > SD_BALANCE_WAKE. I was however curious if there were some folks who set it
> > > with out of tree code for some reason.
> > >
> > > As Dietmar said, not having SD_BALANCE_WAKE means you'll never go through
> > > the slow path on wakeups, because there is no domain with SD_BALANCE_WAKE for
> > > the domain loop to find. Depending on your topology you most likely will
> > > go through it on fork or exec though.
> > >
> > > IOW wake_wide() is not really widening the wakeup scan on wakeups using
> > > mainline topology code (disregarding asymmetric capacity systems), which
> > > sounds a bit... off.
> >
> > Thanks. It's not currently set. I'll set it and re-run to see if it makes
> > a difference.
> 
> Because the fix only touches the slow path, and according to Valentin's
> and Dietmar's comments on the wake-up path, it would mean that your UC
> regularly creates some new threads during the test?
> 

I believe it is not creating any new threads during each run. 


> >
> >
> > However, I'm not sure why it would be making a difference for only the cgroup
> > case. If this is causing issues I'd expect it to affect both runs.
> >
> > In general I think these threads want to wake up on the last cpu they were on.
> > And given there are fewer cpu-bound tasks than CPUs, that wake cpu should,
> > more often than not, be idle.
> >
> >
> > Cheers,
> > Phil
> >
> >
> >
> > --
> >

-- 



* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-10-30 15:45   ` [PATCH v4 04/11] sched/fair: rework load_balance Mel Gorman
  2019-10-30 16:16     ` Valentin Schneider
@ 2019-10-31  9:09     ` Vincent Guittot
  2019-10-31 10:15       ` Mel Gorman
  2019-11-18 13:50     ` Ingo Molnar
  2 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-10-31  9:09 UTC (permalink / raw)
  To: Mel Gorman
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Wed, 30 Oct 2019 at 16:45, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Fri, Oct 18, 2019 at 03:26:31PM +0200, Vincent Guittot wrote:
> > The load_balance algorithm contains some heuristics which have become
> > meaningless since the rework of the scheduler's metrics like the
> > introduction of PELT.
> >
> > Furthermore, load is an ill-suited metric for solving certain task
> > placement imbalance scenarios. For instance, in the presence of idle CPUs,
> > we should simply try to get at least one task per CPU, whereas the current
> > load-based algorithm can actually leave idle CPUs alone simply because the
> > load is somewhat balanced. The current algorithm ends up creating virtual
> > and meaningless value like the avg_load_per_task or tweaks the state of a
> > group to make it overloaded whereas it's not, in order to try to migrate
> > tasks.
> >
>
> I do not think this is necessarily 100% true. With both the previous
> load-balancing behaviour and the apparent behaviour of this patch, it's
> still possible to pull two communicating tasks apart and across NUMA
> domains when utilisation is low. Specifically, a load difference of less
> than SCHED_CAPACITY_SCALE between NUMA nodes can be enough to migrate a
> task to level out load.
>
> So, load might be ill-suited for some cases but that does not make it
> completely useless either.

I fully agree, and we keep using it in some cases.
The goal is only not to use it when it is obviously the wrong metric to use.

>
> The type of behaviour can be seen by running netperf via mmtests
> (configuration file configs/config-network-netperf-unbound) on a NUMA
> machine and noting that the local vs remote NUMA hinting faults are roughly
> 50%. I had prototyped some fixes around this that took imbalance_pct into
> account but it was too special-cased and was not a universal win. If
> I was reviewing my own patch I would have nacked it on the "you added a
> special-case hack into the load balancer for one load". I didn't get back
> to it before getting cc'd on this series.
>
> > load_balance should better qualify the imbalance of the group and clearly
> > define what has to be moved to fix this imbalance.
> >
> > The type of sched_group has been extended to better reflect the type of
> > imbalance. We now have :
> >       group_has_spare
> >       group_fully_busy
> >       group_misfit_task
> >       group_asym_packing
> >       group_imbalanced
> >       group_overloaded
> >
> > Based on the type of sched_group, load_balance now sets what it wants to
> > move in order to fix the imbalance. It can be some load as before but also
> > some utilization, a number of task or a type of task:
> >       migrate_task
> >       migrate_util
> >       migrate_load
> >       migrate_misfit
> >
> > This new load_balance algorithm fixes several pending wrong tasks
> > placement:
> > - the 1 task per CPU case with asymmetric system
> > - the case of cfs task preempted by other class
> > - the case of tasks not evenly spread on groups with spare capacity
> >
>
> On the last one, spreading tasks evenly across NUMA domains is not
> necessarily a good idea. If I have 2 tasks running on a 2-socket machine
> with 24 logical CPUs per socket, it should not automatically mean that
> one task should move cross-node and I have definitely observed this
> happening. It's probably bad in terms of locality no matter what but it's
> especially bad if the 2 tasks happened to be communicating because then
> load balancing will pull apart the tasks while wake_affine will push
> them together (and potentially NUMA balancing as well). Note that this
> also applies for some IO workloads because, depending on the filesystem,
> the task may be communicating with workqueues (XFS) or a kernel thread
> (ext4 with jbd2).

This rework doesn't touch the NUMA_BALANCING part, and NUMA balancing
still gives guidance with fbq_classify_group/queue.
But the latter could also take advantage of the new group types. For
example, what I did in the fix for find_idlest_group: checking
numa_preferred_nid when the group has capacity and keeping the task on
the preferred node if possible. Similar behavior could also be
beneficial in the periodic load_balance case.

>
> > Also the load balance decisions have been consolidated in the 3 functions
> > below after removing the few bypasses and hacks of the current code:
> > - update_sd_pick_busiest() select the busiest sched_group.
> > - find_busiest_group() checks if there is an imbalance between local and
> >   busiest group.
> > - calculate_imbalance() decides what have to be moved.
> >
> > Finally, the now unused field total_running of struct sd_lb_stats has been
> > removed.
> >
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >  kernel/sched/fair.c | 611 ++++++++++++++++++++++++++++++++++------------------
> >  1 file changed, 402 insertions(+), 209 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e004841..5ae5281 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -7068,11 +7068,26 @@ static unsigned long __read_mostly max_load_balance_interval = HZ/10;
> >
> >  enum fbq_type { regular, remote, all };
> >
> > +/*
> > + * group_type describes the group of CPUs at the moment of the load balance.
> > + * The enum is ordered by pulling priority, with the group with lowest priority
> > + * first so the groupe_type can be simply compared when selecting the busiest
> > + * group. see update_sd_pick_busiest().
> > + */
>
> s/groupe_type/group_type/
>
> >  enum group_type {
> > -     group_other = 0,
> > +     group_has_spare = 0,
> > +     group_fully_busy,
> >       group_misfit_task,
> > +     group_asym_packing,
> >       group_imbalanced,
> > -     group_overloaded,
> > +     group_overloaded
> > +};
> > +
>
> While not your fault, it would be nice to comment on the meaning of each
> group type. From a glance, it's not obvious to me why a misfit task should
> be a high priority to move a task than a fully_busy (but not overloaded)
> group given that moving the misfit task might make a group overloaded.
>
> > +enum migration_type {
> > +     migrate_load = 0,
> > +     migrate_util,
> > +     migrate_task,
> > +     migrate_misfit
> >  };
> >
>
> Could do with a comment explaining what migration_type is for because
> the name is unhelpful. I *think* at a glance it's related to what sort
> of imbalance is being addressed which is partially addressed by the
> group_type. That understanding may change as I continue reading the series
> but now I have to figure it out which means it'll be forgotten again in
> 6 months.
>
> >  #define LBF_ALL_PINNED       0x01
> > @@ -7105,7 +7120,7 @@ struct lb_env {
> >       unsigned int            loop_max;
> >
> >       enum fbq_type           fbq_type;
> > -     enum group_type         src_grp_type;
> > +     enum migration_type     migration_type;
> >       struct list_head        tasks;
> >  };
> >
> > @@ -7328,7 +7343,7 @@ static struct task_struct *detach_one_task(struct lb_env *env)
> >  static const unsigned int sched_nr_migrate_break = 32;
> >
> >  /*
> > - * detach_tasks() -- tries to detach up to imbalance runnable load from
> > + * detach_tasks() -- tries to detach up to imbalance load/util/tasks from
> >   * busiest_rq, as part of a balancing operation within domain "sd".
> >   *
> >   * Returns number of detached tasks if successful and 0 otherwise.
> > @@ -7336,8 +7351,8 @@ static const unsigned int sched_nr_migrate_break = 32;
> >  static int detach_tasks(struct lb_env *env)
> >  {
> >       struct list_head *tasks = &env->src_rq->cfs_tasks;
> > +     unsigned long util, load;
> >       struct task_struct *p;
> > -     unsigned long load;
> >       int detached = 0;
> >
> >       lockdep_assert_held(&env->src_rq->lock);
> > @@ -7370,19 +7385,51 @@ static int detach_tasks(struct lb_env *env)
> >               if (!can_migrate_task(p, env))
> >                       goto next;
> >
> > -             load = task_h_load(p);
> > +             switch (env->migration_type) {
> > +             case migrate_load:
> > +                     load = task_h_load(p);
> >
> > -             if (sched_feat(LB_MIN) && load < 16 && !env->sd->nr_balance_failed)
> > -                     goto next;
> > +                     if (sched_feat(LB_MIN) &&
> > +                         load < 16 && !env->sd->nr_balance_failed)
> > +                             goto next;
> >
> > -             if ((load / 2) > env->imbalance)
> > -                     goto next;
> > +                     if ((load / 2) > env->imbalance)
> > +                             goto next;
> > +
> > +                     env->imbalance -= load;
> > +                     break;
> > +
> > +             case migrate_util:
> > +                     util = task_util_est(p);
> > +
> > +                     if (util > env->imbalance)
> > +                             goto next;
> > +
> > +                     env->imbalance -= util;
> > +                     break;
> > +
> > +             case migrate_task:
> > +                     env->imbalance--;
> > +                     break;
> > +
> > +             case migrate_misfit:
> > +                     load = task_h_load(p);
> > +
> > +                     /*
> > +                      * load of misfit task might decrease a bit since it has
> > +                      * been recorded. Be conservative in the condition.
> > +                      */
> > +                     if (load / 2 < env->imbalance)
> > +                             goto next;
> > +
> > +                     env->imbalance = 0;
> > +                     break;
> > +             }
> >
>
> So, no problem with this but it brings up another point. migration_type
> also determines what env->imbalance means (e.g. load, utilisation,
> nr_running etc). That was true before your patch too but now it's much
> more explicit, which is nice, but could do with a comment.
>
> >               detach_task(p, env);
> >               list_add(&p->se.group_node, &env->tasks);
> >
> >               detached++;
> > -             env->imbalance -= load;
> >
> >  #ifdef CONFIG_PREEMPTION
> >               /*
> > @@ -7396,7 +7443,7 @@ static int detach_tasks(struct lb_env *env)
> >
> >               /*
> >                * We only want to steal up to the prescribed amount of
> > -              * runnable load.
> > +              * load/util/tasks.
> >                */
> >               if (env->imbalance <= 0)
> >                       break;
> > @@ -7661,7 +7708,6 @@ struct sg_lb_stats {
> >       unsigned int idle_cpus;
> >       unsigned int group_weight;
> >       enum group_type group_type;
> > -     int group_no_capacity;
> >       unsigned int group_asym_packing; /* Tasks should be moved to preferred CPU */
> >       unsigned long group_misfit_task_load; /* A CPU has a task too big for its capacity */
> >  #ifdef CONFIG_NUMA_BALANCING
>
> Glad to see group_no_capacity go away, that had some "interesting"
> treatment in update_sd_lb_stats.
>
> > @@ -7677,10 +7723,10 @@ struct sg_lb_stats {
> >  struct sd_lb_stats {
> >       struct sched_group *busiest;    /* Busiest group in this sd */
> >       struct sched_group *local;      /* Local group in this sd */
> > -     unsigned long total_running;
> >       unsigned long total_load;       /* Total load of all groups in sd */
> >       unsigned long total_capacity;   /* Total capacity of all groups in sd */
> >       unsigned long avg_load; /* Average load across all groups in sd */
> > +     unsigned int prefer_sibling; /* tasks should go to sibling first */
> >
> >       struct sg_lb_stats busiest_stat;/* Statistics of the busiest group */
> >       struct sg_lb_stats local_stat;  /* Statistics of the local group */
> > @@ -7691,19 +7737,18 @@ static inline void init_sd_lb_stats(struct sd_lb_stats *sds)
> >       /*
> >        * Skimp on the clearing to avoid duplicate work. We can avoid clearing
> >        * local_stat because update_sg_lb_stats() does a full clear/assignment.
> > -      * We must however clear busiest_stat::avg_load because
> > -      * update_sd_pick_busiest() reads this before assignment.
> > +      * We must however set busiest_stat::group_type and
> > +      * busiest_stat::idle_cpus to the worst busiest group because
> > +      * update_sd_pick_busiest() reads these before assignment.
> >        */
> >       *sds = (struct sd_lb_stats){
> >               .busiest = NULL,
> >               .local = NULL,
> > -             .total_running = 0UL,
> >               .total_load = 0UL,
> >               .total_capacity = 0UL,
> >               .busiest_stat = {
> > -                     .avg_load = 0UL,
> > -                     .sum_h_nr_running = 0,
> > -                     .group_type = group_other,
> > +                     .idle_cpus = UINT_MAX,
> > +                     .group_type = group_has_spare,
> >               },
> >       };
> >  }
> > @@ -7945,19 +7990,26 @@ group_smaller_max_cpu_capacity(struct sched_group *sg, struct sched_group *ref)
> >  }
> >
> >  static inline enum
> > -group_type group_classify(struct sched_group *group,
> > +group_type group_classify(struct lb_env *env,
> > +                       struct sched_group *group,
> >                         struct sg_lb_stats *sgs)
> >  {
> > -     if (sgs->group_no_capacity)
> > +     if (group_is_overloaded(env, sgs))
> >               return group_overloaded;
> >
> >       if (sg_imbalanced(group))
> >               return group_imbalanced;
> >
> > +     if (sgs->group_asym_packing)
> > +             return group_asym_packing;
> > +
> >       if (sgs->group_misfit_task_load)
> >               return group_misfit_task;
> >
> > -     return group_other;
> > +     if (!group_has_capacity(env, sgs))
> > +             return group_fully_busy;
> > +
> > +     return group_has_spare;
> >  }
> >
> >  static bool update_nohz_stats(struct rq *rq, bool force)
> > @@ -7994,10 +8046,12 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> >                                     struct sg_lb_stats *sgs,
> >                                     int *sg_status)
> >  {
> > -     int i, nr_running;
> > +     int i, nr_running, local_group;
> >
> >       memset(sgs, 0, sizeof(*sgs));
> >
> > +     local_group = cpumask_test_cpu(env->dst_cpu, sched_group_span(group));
> > +
> >       for_each_cpu_and(i, sched_group_span(group), env->cpus) {
> >               struct rq *rq = cpu_rq(i);
> >
> > @@ -8022,9 +8076,16 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> >               /*
> >                * No need to call idle_cpu() if nr_running is not 0
> >                */
> > -             if (!nr_running && idle_cpu(i))
> > +             if (!nr_running && idle_cpu(i)) {
> >                       sgs->idle_cpus++;
> > +                     /* Idle cpu can't have misfit task */
> > +                     continue;
> > +             }
> > +
> > +             if (local_group)
> > +                     continue;
> >
> > +             /* Check for a misfit task on the cpu */
> >               if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
> >                   sgs->group_misfit_task_load < rq->misfit_task_load) {
> >                       sgs->group_misfit_task_load = rq->misfit_task_load;
>
> So.... why exactly do we not care about misfit tasks on CPUs in the
> local group? I'm not saying you're wrong because you have a clear idea
> on how misfit tasks should be treated but it's very non-obvious just
> from the code.

The local group can't do anything with its own misfit tasks, so the
misfit state doesn't give any additional information compared to
overloaded, fully_busy or has_spare.

>
> > <SNIP>
> >
> > @@ -8079,62 +8154,80 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> >       if (sgs->group_type < busiest->group_type)
> >               return false;
> >
> > -     if (sgs->avg_load <= busiest->avg_load)
> > -             return false;
> > -
> > -     if (!(env->sd->flags & SD_ASYM_CPUCAPACITY))
> > -             goto asym_packing;
> > -
> >       /*
> > -      * Candidate sg has no more than one task per CPU and
> > -      * has higher per-CPU capacity. Migrating tasks to less
> > -      * capable CPUs may harm throughput. Maximize throughput,
> > -      * power/energy consequences are not considered.
> > +      * The candidate and the current busiest group are the same type of
> > +      * group. Let check which one is the busiest according to the type.
> >        */
> > -     if (sgs->sum_h_nr_running <= sgs->group_weight &&
> > -         group_smaller_min_cpu_capacity(sds->local, sg))
> > -             return false;
> >
> > -     /*
> > -      * If we have more than one misfit sg go with the biggest misfit.
> > -      */
> > -     if (sgs->group_type == group_misfit_task &&
> > -         sgs->group_misfit_task_load < busiest->group_misfit_task_load)
> > +     switch (sgs->group_type) {
> > +     case group_overloaded:
> > +             /* Select the overloaded group with highest avg_load. */
> > +             if (sgs->avg_load <= busiest->avg_load)
> > +                     return false;
> > +             break;
> > +
> > +     case group_imbalanced:
> > +             /*
> > +              * Select the 1st imbalanced group as we don't have any way to
> > +              * choose one more than another.
> > +              */
> >               return false;
> >
> > -asym_packing:
> > -     /* This is the busiest node in its class. */
> > -     if (!(env->sd->flags & SD_ASYM_PACKING))
> > -             return true;
> > +     case group_asym_packing:
> > +             /* Prefer to move from lowest priority CPU's work */
> > +             if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
> > +                     return false;
> > +             break;
> >
>
> Again, I'm not seeing what prevents a !SD_ASYM_PACKING domain checking
> sched_asym_prefer.

the test is done when collecting the group's statistics in update_sg_lb_stats():

	/* Check if dst cpu is idle and preferred to this group */
	if (env->sd->flags & SD_ASYM_PACKING &&
	    env->idle != CPU_NOT_IDLE &&
	    sgs->sum_h_nr_running &&
	    sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
		sgs->group_asym_packing = 1;
	}

The group type group_asym_packing is then only set if
sgs->group_asym_packing has been set.

>
> > <SNIP>
> > +     case group_fully_busy:
> > +             /*
> > +              * Select the fully busy group with highest avg_load. In
> > +              * theory, there is no need to pull task from such kind of
> > +              * group because tasks have all compute capacity that they need
> > +              * but we can still improve the overall throughput by reducing
> > +              * contention when accessing shared HW resources.
> > +              *
> > +              * XXX for now avg_load is not computed and always 0 so we
> > +              * select the 1st one.
> > +              */
> > +             if (sgs->avg_load <= busiest->avg_load)
> > +                     return false;
> > +             break;
> > +
>
> With the exception that if we are balancing between NUMA domains and they
> were communicating tasks that we've now pulled them apart. That might
> increase the CPU resources available at the cost of increased remote
> memory access cost.

I expect the NUMA classification to help and skip those runqueues.

>
> > +     case group_has_spare:
> > +             /*
> > +              * Select not overloaded group with lowest number of
> > +              * idle cpus. We could also compare the spare capacity
> > +              * which is more stable but it can end up that the
> > +              * group has less spare capacity but finally more idle
> > +              * cpus which means less opportunity to pull tasks.
> > +              */
> > +             if (sgs->idle_cpus >= busiest->idle_cpus)
> > +                     return false;
> > +             break;
> >       }
> >
> > -     return false;
> > +     /*
> > +      * Candidate sg has no more than one task per CPU and has higher
> > +      * per-CPU capacity. Migrating tasks to less capable CPUs may harm
> > +      * throughput. Maximize throughput, power/energy consequences are not
> > +      * considered.
> > +      */
> > +     if ((env->sd->flags & SD_ASYM_CPUCAPACITY) &&
> > +         (sgs->group_type <= group_fully_busy) &&
> > +         (group_smaller_min_cpu_capacity(sds->local, sg)))
> > +             return false;
> > +
> > +     return true;
> >  }
> >
> >  #ifdef CONFIG_NUMA_BALANCING
> > @@ -8172,13 +8265,13 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
> >   * @env: The load balancing environment.
> >   * @sds: variable to hold the statistics for this sched_domain.
> >   */
> > +
>
> Spurious whitespace change.
>
> >  static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sds)
> >  {
> >       struct sched_domain *child = env->sd->child;
> >       struct sched_group *sg = env->sd->groups;
> >       struct sg_lb_stats *local = &sds->local_stat;
> >       struct sg_lb_stats tmp_sgs;
> > -     bool prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
> >       int sg_status = 0;
> >
> >  #ifdef CONFIG_NO_HZ_COMMON
> > @@ -8205,22 +8298,6 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
> >               if (local_group)
> >                       goto next_group;
> >
> > -             /*
> > -              * In case the child domain prefers tasks go to siblings
> > -              * first, lower the sg capacity so that we'll try
> > -              * and move all the excess tasks away. We lower the capacity
> > -              * of a group only if the local group has the capacity to fit
> > -              * these excess tasks. The extra check prevents the case where
> > -              * you always pull from the heaviest group when it is already
> > -              * under-utilized (possible with a large weight task outweighs
> > -              * the tasks on the system).
> > -              */
> > -             if (prefer_sibling && sds->local &&
> > -                 group_has_capacity(env, local) &&
> > -                 (sgs->sum_h_nr_running > local->sum_h_nr_running + 1)) {
> > -                     sgs->group_no_capacity = 1;
> > -                     sgs->group_type = group_classify(sg, sgs);
> > -             }
> >
> >               if (update_sd_pick_busiest(env, sds, sg, sgs)) {
> >                       sds->busiest = sg;
> > @@ -8229,13 +8306,15 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
> >
> >  next_group:
> >               /* Now, start updating sd_lb_stats */
> > -             sds->total_running += sgs->sum_h_nr_running;
> >               sds->total_load += sgs->group_load;
> >               sds->total_capacity += sgs->group_capacity;
> >
> >               sg = sg->next;
> >       } while (sg != env->sd->groups);
> >
> > +     /* Tag domain that child domain prefers tasks go to siblings first */
> > +     sds->prefer_sibling = child && child->flags & SD_PREFER_SIBLING;
> > +
> >  #ifdef CONFIG_NO_HZ_COMMON
> >       if ((env->flags & LBF_NOHZ_AGAIN) &&
> >           cpumask_subset(nohz.idle_cpus_mask, sched_domain_span(env->sd))) {
> > @@ -8273,69 +8352,149 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
> >   */
> >  static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
> >  {
> > -     unsigned long max_pull, load_above_capacity = ~0UL;
> >       struct sg_lb_stats *local, *busiest;
> >
> >       local = &sds->local_stat;
> >       busiest = &sds->busiest_stat;
> >
> > -     if (busiest->group_asym_packing) {
> > -             env->imbalance = busiest->group_load;
> > +     if (busiest->group_type == group_misfit_task) {
> > +             /* Set imbalance to allow misfit task to be balanced. */
> > +             env->migration_type = migrate_misfit;
> > +             env->imbalance = busiest->group_misfit_task_load;
> > +             return;
> > +     }
> > +
> > +     if (busiest->group_type == group_asym_packing) {
> > +             /*
> > +              * In case of asym capacity, we will try to migrate all load to
> > +              * the preferred CPU.
> > +              */
> > +             env->migration_type = migrate_task;
> > +             env->imbalance = busiest->sum_h_nr_running;
> > +             return;
> > +     }
> > +
> > +     if (busiest->group_type == group_imbalanced) {
> > +             /*
> > +              * In the group_imb case we cannot rely on group-wide averages
> > +              * to ensure CPU-load equilibrium, try to move any task to fix
> > +              * the imbalance. The next load balance will take care of
> > +              * balancing back the system.
> > +              */
> > +             env->migration_type = migrate_task;
> > +             env->imbalance = 1;
> >               return;
> >       }
> >
> >       /*
> > -      * Avg load of busiest sg can be less and avg load of local sg can
> > -      * be greater than avg load across all sgs of sd because avg load
> > -      * factors in sg capacity and sgs with smaller group_type are
> > -      * skipped when updating the busiest sg:
> > +      * Try to use spare capacity of local group without overloading it or
> > +      * emptying busiest
> >        */
> > -     if (busiest->group_type != group_misfit_task &&
> > -         (busiest->avg_load <= sds->avg_load ||
> > -          local->avg_load >= sds->avg_load)) {
> > -             env->imbalance = 0;
> > +     if (local->group_type == group_has_spare) {
> > +             if (busiest->group_type > group_fully_busy) {
> > +                     /*
> > +                      * If busiest is overloaded, try to fill spare
> > +                      * capacity. This might end up creating spare capacity
> > +                      * in busiest or busiest still being overloaded but
> > +                      * there is no simple way to directly compute the
> > +                      * amount of load to migrate in order to balance the
> > +                      * system.
> > +                      */
>
> busiest may not be overloaded, it may be imbalanced. Maybe the
> distinction is irrelevant though.

The case busiest->group_type == group_imbalanced has already been
handled earlier in the function.

>
> > +                     env->migration_type = migrate_util;
> > +                     env->imbalance = max(local->group_capacity, local->group_util) -
> > +                                      local->group_util;
> > +
> > +                     /*
> > +                      * In some case, the group's utilization is max or even
> > +                      * higher than capacity because of migrations but the
> > +                      * local CPU is (newly) idle. There is at least one
> > +                      * waiting task in this overloaded busiest group. Let
> > +                      * try to pull it.
> > +                      */
> > +                     if (env->idle != CPU_NOT_IDLE && env->imbalance == 0) {
> > +                             env->migration_type = migrate_task;
> > +                             env->imbalance = 1;
> > +                     }
> > +
>
> Not convinced this is a good thing to do across NUMA domains. If it was
> tied to the group being definitely overloaded then I could see the logic.
>
> > +                     return;
> > +             }
> > +
> > +             if (busiest->group_weight == 1 || sds->prefer_sibling) {
> > +                     unsigned int nr_diff = busiest->sum_h_nr_running;
> > +                     /*
> > +                      * When prefer sibling, evenly spread running tasks on
> > +                      * groups.
> > +                      */
> > +                     env->migration_type = migrate_task;
> > +                     lsub_positive(&nr_diff, local->sum_h_nr_running);
> > +                     env->imbalance = nr_diff >> 1;
> > +                     return;
> > +             }
>
> Comment is slightly misleading given that it's not just preferring
> siblings but for when balancing across single CPU domains.
>
> > +
> > +             /*
> > +              * If there is no overload, we just want to even the number of
> > +              * idle cpus.
> > +              */
> > +             env->migration_type = migrate_task;
> > +             env->imbalance = max_t(long, 0, (local->idle_cpus -
> > +                                              busiest->idle_cpus) >> 1);
> >               return;
> >       }
>
> Why do we want an even number of idle CPUs unconditionally? This goes back
> to the NUMA domain case. 2 communicating tasks running on a 2-socket system
> should not be pulled apart just to have 1 task running on each socket.
>
> I didn't see anything obvious after this point but I also am getting a
> bit on the fried side trying to hold this entire patch in my head and
> got hung up on the NUMA domain balancing in particular.
>
> --
> Mel Gorman
> SUSE Labs

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-10-31  9:09     ` Vincent Guittot
@ 2019-10-31 10:15       ` Mel Gorman
  2019-10-31 11:13         ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Mel Gorman @ 2019-10-31 10:15 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Thu, Oct 31, 2019 at 10:09:17AM +0100, Vincent Guittot wrote:
> On Wed, 30 Oct 2019 at 16:45, Mel Gorman <mgorman@techsingularity.net> wrote:
> >
> > On Fri, Oct 18, 2019 at 03:26:31PM +0200, Vincent Guittot wrote:
> > > The load_balance algorithm contains some heuristics which have become
> > > meaningless since the rework of the scheduler's metrics like the
> > > introduction of PELT.
> > >
> > > Furthermore, load is an ill-suited metric for solving certain task
> > > placement imbalance scenarios. For instance, in the presence of idle CPUs,
> > > we should simply try to get at least one task per CPU, whereas the current
> > > load-based algorithm can actually leave idle CPUs alone simply because the
> > > load is somewhat balanced. The current algorithm ends up creating virtual
> > > and meaningless value like the avg_load_per_task or tweaks the state of a
> > > group to make it overloaded whereas it's not, in order to try to migrate
> > > tasks.
> > >
> >
> > I do not think this is necessarily 100% true. With both the previous
> > load-balancing behaviour and the apparent behaviour of this patch, it's
> > still possible to pull two communicating tasks apart and across NUMA
> > domains when utilisation is low. Specifically, a load difference of less
> > than SCHED_CAPACITY_SCALE between NUMA nodes can be enough to migrate a
> > task to level out load.
> >
> > So, load might be ill-suited for some cases but that does not make it
> > completely useless either.
> 
> I fully agree and we keep using it in some cases.
> The goal is only to not use it when it is obviously the wrong metric to be used
> 

Understood. Ideally it'd be explicit, for future reference, why each
metric (task/util/load) is used each time and why it's the best one for
a given situation. It's not a requirement for the series, as the
scheduler does not have a perfect history of explaining itself.

> >
> > The type of behaviour can be seen by running netperf via mmtests
> > (configuration file configs/config-network-netperf-unbound) on a NUMA
> > machine and noting that the local vs remote NUMA hinting faults are roughly
> > 50%. I had prototyped some fixes around this that took imbalance_pct into
> > account but it was too special-cased and was not a universal win. If
> > I was reviewing my own patch I would have nacked it on the "you added a
> > special-case hack into the load balancer for one load". I didn't get back
> > to it before getting cc'd on this series.
> >
> > > load_balance should better qualify the imbalance of the group and clearly
> > > define what has to be moved to fix this imbalance.
> > >
> > > The type of sched_group has been extended to better reflect the type of
> > > imbalance. We now have :
> > >       group_has_spare
> > >       group_fully_busy
> > >       group_misfit_task
> > >       group_asym_packing
> > >       group_imbalanced
> > >       group_overloaded
> > >
> > > Based on the type of sched_group, load_balance now sets what it wants to
> > > move in order to fix the imbalance. It can be some load as before but also
> > > some utilization, a number of task or a type of task:
> > >       migrate_task
> > >       migrate_util
> > >       migrate_load
> > >       migrate_misfit
> > >
> > > This new load_balance algorithm fixes several pending wrong tasks
> > > placement:
> > > - the 1 task per CPU case with asymmetric system
> > > - the case of cfs task preempted by other class
> > > - the case of tasks not evenly spread on groups with spare capacity
> > >
> >
> > On the last one, spreading tasks evenly across NUMA domains is not
> > necessarily a good idea. If I have 2 tasks running on a 2-socket machine
> > with 24 logical CPUs per socket, it should not automatically mean that
> > one task should move cross-node and I have definitely observed this
> > happening. It's probably bad in terms of locality no matter what but it's
> > especially bad if the 2 tasks happened to be communicating because then
> > load balancing will pull apart the tasks while wake_affine will push
> > them together (and potentially NUMA balancing as well). Note that this
> > also applies for some IO workloads because, depending on the filesystem,
> > the task may be communicating with workqueues (XFS) or a kernel thread
> > (ext4 with jbd2).
> 
> This rework doesn't touch the NUMA_BALANCING part and NUMA balancing
> still gives guidances with fbq_classify_group/queue.

I know the NUMA_BALANCING part is not touched, I'm talking about load
balancing across SD_NUMA domains which happens independently of
NUMA_BALANCING. In fact, there is logic in NUMA_BALANCING that tries to
override the load balancer when it moves tasks away from the preferred
node.

> But the latter could also take advantage of the new type of group. For
> example, what I did in the fix for find_idlest_group : checking
> numa_preferred_nid when the group has capacity and keep the task on
> preferred node if possible. Similar behavior could also be beneficial
> in periodic load_balance case.
> 

And this is the catch -- numa_preferred_nid is not guaranteed to be set at
all. NUMA balancing might be disabled, the task may not have been running
long enough to pick a preferred NID or NUMA balancing might be unable to
pick a preferred NID. The decision to avoid unnecessary migrations across
NUMA domains should be made independently of NUMA balancing. The netperf
configuration from mmtests is great at illustrating the point because it'll
also say what the average local/remote access ratio is. 2 communicating
tasks running on an otherwise idle NUMA machine should not have the load
balancer move the server to one node and the client to another.

Historically, we might have accounted for this with imbalance_pct which
makes sense for load and was special cased in some places, but it does
not make sense to use imbalance_pct for nr_running. Either way, I think
balancing across SD_NUMA should have explicit logic to take into account
that there is an additional cost outside of the scheduler when a task
moves cross-node.

Even if it's a case that this series does not tackle the problem now,
it'd be good to leave a TODO comment behind noting that SD_NUMA may need
to be special cased.

> > > @@ -8022,9 +8076,16 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> > >               /*
> > >                * No need to call idle_cpu() if nr_running is not 0
> > >                */
> > > -             if (!nr_running && idle_cpu(i))
> > > +             if (!nr_running && idle_cpu(i)) {
> > >                       sgs->idle_cpus++;
> > > +                     /* Idle cpu can't have misfit task */
> > > +                     continue;
> > > +             }
> > > +
> > > +             if (local_group)
> > > +                     continue;
> > >
> > > +             /* Check for a misfit task on the cpu */
> > >               if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
> > >                   sgs->group_misfit_task_load < rq->misfit_task_load) {
> > >                       sgs->group_misfit_task_load = rq->misfit_task_load;
> >
> > So.... why exactly do we not care about misfit tasks on CPUs in the
> > local group? I'm not saying you're wrong because you have a clear idea
> > on how misfit tasks should be treated but it's very non-obvious just
> > from the code.
> 
> local_group can't do anything with local misfit tasks so it doesn't
> give any additional information compared to overloaded, fully_busy or
> has_spare
> 

Ok, that's very clear and now I'm feeling a bit stupid because I should
have spotted that. It really could do with a comment so somebody else
does not try "fixing" it :(

> > > <SNIP>
> > >
> > > @@ -8079,62 +8154,80 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> > >       if (sgs->group_type < busiest->group_type)
> > >               return false;
> > >
> > > -     if (sgs->avg_load <= busiest->avg_load)
> > > -             return false;
> > > -
> > > -     if (!(env->sd->flags & SD_ASYM_CPUCAPACITY))
> > > -             goto asym_packing;
> > > -
> > >       /*
> > > -      * Candidate sg has no more than one task per CPU and
> > > -      * has higher per-CPU capacity. Migrating tasks to less
> > > -      * capable CPUs may harm throughput. Maximize throughput,
> > > -      * power/energy consequences are not considered.
> > > +      * The candidate and the current busiest group are the same type of
> > > +      * group. Let check which one is the busiest according to the type.
> > >        */
> > > -     if (sgs->sum_h_nr_running <= sgs->group_weight &&
> > > -         group_smaller_min_cpu_capacity(sds->local, sg))
> > > -             return false;
> > >
> > > -     /*
> > > -      * If we have more than one misfit sg go with the biggest misfit.
> > > -      */
> > > -     if (sgs->group_type == group_misfit_task &&
> > > -         sgs->group_misfit_task_load < busiest->group_misfit_task_load)
> > > +     switch (sgs->group_type) {
> > > +     case group_overloaded:
> > > +             /* Select the overloaded group with highest avg_load. */
> > > +             if (sgs->avg_load <= busiest->avg_load)
> > > +                     return false;
> > > +             break;
> > > +
> > > +     case group_imbalanced:
> > > +             /*
> > > +              * Select the 1st imbalanced group as we don't have any way to
> > > +              * choose one more than another.
> > > +              */
> > >               return false;
> > >
> > > -asym_packing:
> > > -     /* This is the busiest node in its class. */
> > > -     if (!(env->sd->flags & SD_ASYM_PACKING))
> > > -             return true;
> > > +     case group_asym_packing:
> > > +             /* Prefer to move from lowest priority CPU's work */
> > > +             if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
> > > +                     return false;
> > > +             break;
> > >
> >
> > Again, I'm not seeing what prevents a !SD_ASYM_PACKING domain checking
> > sched_asym_prefer.
> 
> the test is done when collecting the group's statistics in update_sg_lb_stats()
> /* Check if dst cpu is idle and preferred to this group */
> if (env->sd->flags & SD_ASYM_PACKING &&
>     env->idle != CPU_NOT_IDLE &&
>     sgs->sum_h_nr_running &&
>     sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
> sgs->group_asym_packing = 1;
> }
> 
> Then the group type group_asym_packing is only set if
> sgs->group_asym_packing has been set
> 

Yeah, sorry. I need to get ASYM_PACKING clearer in my head.

> >
> > > <SNIP>
> > > +     case group_fully_busy:
> > > +             /*
> > > +              * Select the fully busy group with highest avg_load. In
> > > +              * theory, there is no need to pull task from such kind of
> > > +              * group because tasks have all compute capacity that they need
> > > +              * but we can still improve the overall throughput by reducing
> > > +              * contention when accessing shared HW resources.
> > > +              *
> > > +              * XXX for now avg_load is not computed and always 0 so we
> > > +              * select the 1st one.
> > > +              */
> > > +             if (sgs->avg_load <= busiest->avg_load)
> > > +                     return false;
> > > +             break;
> > > +
> >
> > With the exception that if we are balancing between NUMA domains and they
> > were communicating tasks that we've now pulled them apart. That might
> > increase the CPU resources available at the cost of increased remote
> > memory access cost.
> 
> I expect the numa classification to help and skip those runqueues
> 

It might but the "canary in the mine" is netperf. A basic pair should
not be pulled apart.

> > > @@ -8273,69 +8352,149 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
> > >   */
> > >  static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
> > >  {
> > > -     unsigned long max_pull, load_above_capacity = ~0UL;
> > >       struct sg_lb_stats *local, *busiest;
> > >
> > >       local = &sds->local_stat;
> > >       busiest = &sds->busiest_stat;
> > >
> > > -     if (busiest->group_asym_packing) {
> > > -             env->imbalance = busiest->group_load;
> > > +     if (busiest->group_type == group_misfit_task) {
> > > +             /* Set imbalance to allow misfit task to be balanced. */
> > > +             env->migration_type = migrate_misfit;
> > > +             env->imbalance = busiest->group_misfit_task_load;
> > > +             return;
> > > +     }
> > > +
> > > +     if (busiest->group_type == group_asym_packing) {
> > > +             /*
> > > +              * In case of asym capacity, we will try to migrate all load to
> > > +              * the preferred CPU.
> > > +              */
> > > +             env->migration_type = migrate_task;
> > > +             env->imbalance = busiest->sum_h_nr_running;
> > > +             return;
> > > +     }
> > > +
> > > +     if (busiest->group_type == group_imbalanced) {
> > > +             /*
> > > +              * In the group_imb case we cannot rely on group-wide averages
> > > +              * to ensure CPU-load equilibrium, try to move any task to fix
> > > +              * the imbalance. The next load balance will take care of
> > > +              * balancing back the system.
> > > +              */
> > > +             env->migration_type = migrate_task;
> > > +             env->imbalance = 1;
> > >               return;
> > >       }
> > >
> > >       /*
> > > -      * Avg load of busiest sg can be less and avg load of local sg can
> > > -      * be greater than avg load across all sgs of sd because avg load
> > > -      * factors in sg capacity and sgs with smaller group_type are
> > > -      * skipped when updating the busiest sg:
> > > +      * Try to use spare capacity of local group without overloading it or
> > > +      * emptying busiest
> > >        */
> > > -     if (busiest->group_type != group_misfit_task &&
> > > -         (busiest->avg_load <= sds->avg_load ||
> > > -          local->avg_load >= sds->avg_load)) {
> > > -             env->imbalance = 0;
> > > +     if (local->group_type == group_has_spare) {
> > > +             if (busiest->group_type > group_fully_busy) {
> > > +                     /*
> > > +                      * If busiest is overloaded, try to fill spare
> > > +                      * capacity. This might end up creating spare capacity
> > > +                      * in busiest or busiest still being overloaded but
> > > +                      * there is no simple way to directly compute the
> > > +                      * amount of load to migrate in order to balance the
> > > +                      * system.
> > > +                      */
> >
> > busiest may not be overloaded, it may be imbalanced. Maybe the
> > distinction is irrelevant though.
> 
> the case busiest->group_type == group_imbalanced has already been
> handled earlier in the function
> 

Bah, of course.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-10-31 10:15       ` Mel Gorman
@ 2019-10-31 11:13         ` Vincent Guittot
  2019-10-31 11:40           ` Mel Gorman
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-10-31 11:13 UTC (permalink / raw)
  To: Mel Gorman
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Thu, 31 Oct 2019 at 11:15, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Thu, Oct 31, 2019 at 10:09:17AM +0100, Vincent Guittot wrote:
> > On Wed, 30 Oct 2019 at 16:45, Mel Gorman <mgorman@techsingularity.net> wrote:
> > >
> > > On Fri, Oct 18, 2019 at 03:26:31PM +0200, Vincent Guittot wrote:
> > > > The load_balance algorithm contains some heuristics which have become
> > > > meaningless since the rework of the scheduler's metrics like the
> > > > introduction of PELT.
> > > >
> > > > Furthermore, load is an ill-suited metric for solving certain task
> > > > placement imbalance scenarios. For instance, in the presence of idle CPUs,
> > > > we should simply try to get at least one task per CPU, whereas the current
> > > > load-based algorithm can actually leave idle CPUs alone simply because the
> > > > load is somewhat balanced. The current algorithm ends up creating virtual
> > > > and meaningless value like the avg_load_per_task or tweaks the state of a
> > > > group to make it overloaded whereas it's not, in order to try to migrate
> > > > tasks.
> > > >
> > >
> > > I do not think this is necessarily 100% true. With both the previous
> > > load-balancing behaviour and the apparent behaviour of this patch, it's
> > > still possible to pull two communicating tasks apart and across NUMA
> > > domains when utilisation is low. Specifically, a load difference of less
> > > than SCHED_CAPACITY_SCALE between NUMA nodes can be enough to migrate a
> > > task to level out load.
> > >
> > > So, load might be ill-suited for some cases but that does not make it
> > > completely useless either.
> >
> > I fully agree and we keep using it in some cases.
> > The goal is only to not use it when it is obviously the wrong metric to be used
> >
>
> Understood, ideally it'd be explicit why each metric (task/util/load)
> is used each time for future reference and why it's the best for a given
> situation. It's not a requirement for the series as the scheduler does
> not have a perfect history of explaining itself.
>
> > >
> > > The type of behaviour can be seen by running netperf via mmtests
> > > (configuration file configs/config-network-netperf-unbound) on a NUMA
> > > machine and noting that the local vs remote NUMA hinting faults are roughly
> > > 50%. I had prototyped some fixes around this that took imbalance_pct into
> > > account but it was too special-cased and was not a universal win. If
> > > I was reviewing my own patch I would have nacked it on the "you added a
> > > special-case hack into the load balancer for one load". I didn't get back
> > > to it before getting cc'd on this series.
> > >
> > > > load_balance should better qualify the imbalance of the group and clearly
> > > > define what has to be moved to fix this imbalance.
> > > >
> > > > The type of sched_group has been extended to better reflect the type of
> > > > imbalance. We now have :
> > > >       group_has_spare
> > > >       group_fully_busy
> > > >       group_misfit_task
> > > >       group_asym_packing
> > > >       group_imbalanced
> > > >       group_overloaded
> > > >
> > > > Based on the type of sched_group, load_balance now sets what it wants to
> > > > move in order to fix the imbalance. It can be some load as before but also
> > > > some utilization, a number of task or a type of task:
> > > >       migrate_task
> > > >       migrate_util
> > > >       migrate_load
> > > >       migrate_misfit
> > > >
> > > > This new load_balance algorithm fixes several pending wrong tasks
> > > > placement:
> > > > - the 1 task per CPU case with asymmetric system
> > > > - the case of cfs task preempted by other class
> > > > - the case of tasks not evenly spread on groups with spare capacity
> > > >
> > >
> > > On the last one, spreading tasks evenly across NUMA domains is not
> > > necessarily a good idea. If I have 2 tasks running on a 2-socket machine
> > > with 24 logical CPUs per socket, it should not automatically mean that
> > > one task should move cross-node and I have definitely observed this
> > > happening. It's probably bad in terms of locality no matter what but it's
> > > especially bad if the 2 tasks happened to be communicating because then
> > > load balancing will pull apart the tasks while wake_affine will push
> > > them together (and potentially NUMA balancing as well). Note that this
> > > also applies for some IO workloads because, depending on the filesystem,
> > > the task may be communicating with workqueues (XFS) or a kernel thread
> > > (ext4 with jbd2).
> >
> > This rework doesn't touch the NUMA_BALANCING part and NUMA balancing
> > still gives guidances with fbq_classify_group/queue.
>
> I know the NUMA_BALANCING part is not touched, I'm talking about load
> balancing across SD_NUMA domains which happens independently of
> NUMA_BALANCING. In fact, there is logic in NUMA_BALANCING that tries to
> override the load balancer when it moves tasks away from the preferred
> node.

Yes, this patchset relies on this override for now to prevent moving tasks away.
I agree that additional patches are probably needed to improve load
balance at the NUMA level, and I expect that this rework will make them
simpler to add.
I just wanted to get the output of some real use cases before defining
more NUMA-level specific conditions. Some want to spread across their NUMA
nodes but others want to keep everything together. The preferred node
and fbq_classify_group were the only sensible metrics to me when I
wrote this patchset, but changes can be added if they make sense.

>
> > But the latter could also take advantage of the new type of group. For
> > example, what I did in the fix for find_idlest_group : checking
> > numa_preferred_nid when the group has capacity and keep the task on
> > preferred node if possible. Similar behavior could also be beneficial
> > in periodic load_balance case.
> >
>
> And this is the catch -- numa_preferred_nid is not guaranteed to be set at
> all. NUMA balancing might be disabled, the task may not have been running
> long enough to pick a preferred NID or NUMA balancing might be unable to
> pick a preferred NID. The decision to avoid unnecessary migrations across
> NUMA domains should be made independently of NUMA balancing. The netperf
> configuration from mmtests is great at illustrating the point because it'll
> also say what the average local/remote access ratio is. 2 communicating
> tasks running on an otherwise idle NUMA machine should not have the load
> balancer move the server to one node and the client to another.

I'm going to give it a try on my setup and see the results

>
> Historically, we might have accounted for this with imbalance_pct which
> makes sense for load and was special cased in some places, but it does
> not make sense to use imbalance_pct for nr_running. Either way, I think
> balancing across SD_NUMA should have explicit logic to take into account
> that there is an additional cost outside of the scheduler when a task
> moves cross-node.
>
> Even if this series does not tackle the problem now, it'd be good to
> leave a TODO comment behind noting that SD_NUMA may need
> to be special cased.
>
> > > > @@ -8022,9 +8076,16 @@ static inline void update_sg_lb_stats(struct lb_env *env,
> > > >               /*
> > > >                * No need to call idle_cpu() if nr_running is not 0
> > > >                */
> > > > -             if (!nr_running && idle_cpu(i))
> > > > +             if (!nr_running && idle_cpu(i)) {
> > > >                       sgs->idle_cpus++;
> > > > +                     /* Idle cpu can't have misfit task */
> > > > +                     continue;
> > > > +             }
> > > > +
> > > > +             if (local_group)
> > > > +                     continue;
> > > >
> > > > +             /* Check for a misfit task on the cpu */
> > > >               if (env->sd->flags & SD_ASYM_CPUCAPACITY &&
> > > >                   sgs->group_misfit_task_load < rq->misfit_task_load) {
> > > >                       sgs->group_misfit_task_load = rq->misfit_task_load;
> > >
> > > So.... why exactly do we not care about misfit tasks on CPUs in the
> > > local group? I'm not saying you're wrong because you have a clear idea
> > > on how misfit tasks should be treated but it's very non-obvious just
> > > from the code.
> >
> > local_group can't do anything with local misfit tasks so it doesn't
> > give any additional information compared to overloaded, fully_busy or
> > has_spare
> >
>
> Ok, that's very clear and now I'm feeling a bit stupid because I should
> have spotted that. It really could do with a comment so somebody else
> does not try "fixing" it :(
>
> > > > <SNIP>
> > > >
> > > > @@ -8079,62 +8154,80 @@ static bool update_sd_pick_busiest(struct lb_env *env,
> > > >       if (sgs->group_type < busiest->group_type)
> > > >               return false;
> > > >
> > > > -     if (sgs->avg_load <= busiest->avg_load)
> > > > -             return false;
> > > > -
> > > > -     if (!(env->sd->flags & SD_ASYM_CPUCAPACITY))
> > > > -             goto asym_packing;
> > > > -
> > > >       /*
> > > > -      * Candidate sg has no more than one task per CPU and
> > > > -      * has higher per-CPU capacity. Migrating tasks to less
> > > > -      * capable CPUs may harm throughput. Maximize throughput,
> > > > -      * power/energy consequences are not considered.
> > > > +      * The candidate and the current busiest group are the same type of
> > > > +      * group. Let check which one is the busiest according to the type.
> > > >        */
> > > > -     if (sgs->sum_h_nr_running <= sgs->group_weight &&
> > > > -         group_smaller_min_cpu_capacity(sds->local, sg))
> > > > -             return false;
> > > >
> > > > -     /*
> > > > -      * If we have more than one misfit sg go with the biggest misfit.
> > > > -      */
> > > > -     if (sgs->group_type == group_misfit_task &&
> > > > -         sgs->group_misfit_task_load < busiest->group_misfit_task_load)
> > > > +     switch (sgs->group_type) {
> > > > +     case group_overloaded:
> > > > +             /* Select the overloaded group with highest avg_load. */
> > > > +             if (sgs->avg_load <= busiest->avg_load)
> > > > +                     return false;
> > > > +             break;
> > > > +
> > > > +     case group_imbalanced:
> > > > +             /*
> > > > +              * Select the 1st imbalanced group as we don't have any way to
> > > > +              * choose one more than another.
> > > > +              */
> > > >               return false;
> > > >
> > > > -asym_packing:
> > > > -     /* This is the busiest node in its class. */
> > > > -     if (!(env->sd->flags & SD_ASYM_PACKING))
> > > > -             return true;
> > > > +     case group_asym_packing:
> > > > +             /* Prefer to move from lowest priority CPU's work */
> > > > +             if (sched_asym_prefer(sg->asym_prefer_cpu, sds->busiest->asym_prefer_cpu))
> > > > +                     return false;
> > > > +             break;
> > > >
> > >
> > > Again, I'm not seeing what prevents a !SD_ASYM_PACKING domain checking
> > > sched_asym_prefer.
> >
> > the test is done when collecting the group's statistics in update_sg_lb_stats()
> > /* Check if dst cpu is idle and preferred to this group */
> > if (env->sd->flags & SD_ASYM_PACKING &&
> >     env->idle != CPU_NOT_IDLE &&
> >     sgs->sum_h_nr_running &&
> >     sched_asym_prefer(env->dst_cpu, group->asym_prefer_cpu)) {
> > sgs->group_asym_packing = 1;
> > }
> >
> > Then the group type group_asym_packing is only set if
> > sgs->group_asym_packing has been set
> >
>
> Yeah, sorry. I need to get ASYM_PACKING clearer in my head.
>
> > >
> > > > <SNIP>
> > > > +     case group_fully_busy:
> > > > +             /*
> > > > +              * Select the fully busy group with highest avg_load. In
> > > > +              * theory, there is no need to pull task from such kind of
> > > > +              * group because tasks have all compute capacity that they need
> > > > +              * but we can still improve the overall throughput by reducing
> > > > +              * contention when accessing shared HW resources.
> > > > +              *
> > > > +              * XXX for now avg_load is not computed and always 0 so we
> > > > +              * select the 1st one.
> > > > +              */
> > > > +             if (sgs->avg_load <= busiest->avg_load)
> > > > +                     return false;
> > > > +             break;
> > > > +
> > >
> > > With the exception that if we are balancing between NUMA domains and they
> > > were communicating tasks that we've now pulled them apart. That might
> > > increase the CPU resources available at the cost of increased remote
> > > memory access cost.
> >
> > I expect the numa classification to help and skip those runqueues
> >
>
> It might but the "canary in the mine" is netperf. A basic pair should
> not be pulled apart.
>
> > > > @@ -8273,69 +8352,149 @@ static inline void update_sd_lb_stats(struct lb_env *env, struct sd_lb_stats *sd
> > > >   */
> > > >  static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *sds)
> > > >  {
> > > > -     unsigned long max_pull, load_above_capacity = ~0UL;
> > > >       struct sg_lb_stats *local, *busiest;
> > > >
> > > >       local = &sds->local_stat;
> > > >       busiest = &sds->busiest_stat;
> > > >
> > > > -     if (busiest->group_asym_packing) {
> > > > -             env->imbalance = busiest->group_load;
> > > > +     if (busiest->group_type == group_misfit_task) {
> > > > +             /* Set imbalance to allow misfit task to be balanced. */
> > > > +             env->migration_type = migrate_misfit;
> > > > +             env->imbalance = busiest->group_misfit_task_load;
> > > > +             return;
> > > > +     }
> > > > +
> > > > +     if (busiest->group_type == group_asym_packing) {
> > > > +             /*
> > > > +              * In case of asym capacity, we will try to migrate all load to
> > > > +              * the preferred CPU.
> > > > +              */
> > > > +             env->migration_type = migrate_task;
> > > > +             env->imbalance = busiest->sum_h_nr_running;
> > > > +             return;
> > > > +     }
> > > > +
> > > > +     if (busiest->group_type == group_imbalanced) {
> > > > +             /*
> > > > +              * In the group_imb case we cannot rely on group-wide averages
> > > > +              * to ensure CPU-load equilibrium, try to move any task to fix
> > > > +              * the imbalance. The next load balance will take care of
> > > > +              * balancing back the system.
> > > > +              */
> > > > +             env->migration_type = migrate_task;
> > > > +             env->imbalance = 1;
> > > >               return;
> > > >       }
> > > >
> > > >       /*
> > > > -      * Avg load of busiest sg can be less and avg load of local sg can
> > > > -      * be greater than avg load across all sgs of sd because avg load
> > > > -      * factors in sg capacity and sgs with smaller group_type are
> > > > -      * skipped when updating the busiest sg:
> > > > +      * Try to use spare capacity of local group without overloading it or
> > > > +      * emptying busiest
> > > >        */
> > > > -     if (busiest->group_type != group_misfit_task &&
> > > > -         (busiest->avg_load <= sds->avg_load ||
> > > > -          local->avg_load >= sds->avg_load)) {
> > > > -             env->imbalance = 0;
> > > > +     if (local->group_type == group_has_spare) {
> > > > +             if (busiest->group_type > group_fully_busy) {
> > > > +                     /*
> > > > +                      * If busiest is overloaded, try to fill spare
> > > > +                      * capacity. This might end up creating spare capacity
> > > > +                      * in busiest or busiest still being overloaded but
> > > > +                      * there is no simple way to directly compute the
> > > > +                      * amount of load to migrate in order to balance the
> > > > +                      * system.
> > > > +                      */
> > >
> > > busiest may not be overloaded, it may be imbalanced. Maybe the
> > > distinction is irrelevant though.
> >
> > the case busiest->group_type == group_imbalanced has already been
> > handled earlier in the function
> >
>
> Bah, of course.
>
> --
> Mel Gorman
> SUSE Labs

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-10-31 11:13         ` Vincent Guittot
@ 2019-10-31 11:40           ` Mel Gorman
  2019-11-08 16:35             ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Mel Gorman @ 2019-10-31 11:40 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Thu, Oct 31, 2019 at 12:13:09PM +0100, Vincent Guittot wrote:
> > > > On the last one, spreading tasks evenly across NUMA domains is not
> > > > necessarily a good idea. If I have 2 tasks running on a 2-socket machine
> > > > with 24 logical CPUs per socket, it should not automatically mean that
> > > > one task should move cross-node and I have definitely observed this
> > > > happening. It's probably bad in terms of locality no matter what but it's
> > > > especially bad if the 2 tasks happened to be communicating because then
> > > > load balancing will pull apart the tasks while wake_affine will push
> > > > them together (and potentially NUMA balancing as well). Note that this
> > > > also applies for some IO workloads because, depending on the filesystem,
> > > > the task may be communicating with workqueues (XFS) or a kernel thread
> > > > (ext4 with jbd2).
> > >
> > > This rework doesn't touch the NUMA_BALANCING part and NUMA balancing
> > > still gives guidances with fbq_classify_group/queue.
> >
> > I know the NUMA_BALANCING part is not touched, I'm talking about load
> > balancing across SD_NUMA domains which happens independently of
> > NUMA_BALANCING. In fact, there is logic in NUMA_BALANCING that tries to
> > override the load balancer when it moves tasks away from the preferred
> > node.
> 
> Yes, this patchset relies on this override for now to prevent moving tasks away.

Fair enough, netperf hits the corner case where it does not work but
that is also true without your series.

> I agree that additional patches are probably needed to improve load
> balance at the NUMA level, and I expect that this rework will make them
> simpler to add.
> I just wanted to get the output of some real use cases before defining
> more NUMA-level specific conditions. Some want to spread across their NUMA
> nodes but others want to keep everything together. The preferred node
> and fbq_classify_group were the only sensible metrics to me when I
> wrote this patchset, but changes can be added if they make sense.
> 

That's fair. While it was possible to address the case before your
series, it was a hatchet job. If the changelog simply notes that some
special casing may still be required for SD_NUMA but it's outside the
scope of the series, then I'd be happy. At least then there is a good
chance that any follow-up work won't be interpreted as an attempt to
reintroduce hacky heuristics.

> >
> > > But the latter could also take advantage of the new type of group. For
> > > example, what I did in the fix for find_idlest_group : checking
> > > numa_preferred_nid when the group has capacity and keep the task on
> > > preferred node if possible. Similar behavior could also be beneficial
> > > in periodic load_balance case.
> > >
> >
> > And this is the catch -- numa_preferred_nid is not guaranteed to be set at
> > all. NUMA balancing might be disabled, the task may not have been running
> > long enough to pick a preferred NID or NUMA balancing might be unable to
> > pick a preferred NID. The decision to avoid unnecessary migrations across
> > NUMA domains should be made independently of NUMA balancing. The netperf
> > configuration from mmtests is great at illustrating the point because it'll
> > also say what the average local/remote access ratio is. 2 communicating
> > tasks running on an otherwise idle NUMA machine should not have the load
> > balancer move the server to one node and the client to another.
> 
> I'm going to give it a try on my setup and see the results
> 

Thanks.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 17:25                 ` Vincent Guittot
@ 2019-10-31 13:57                   ` Phil Auld
  2019-10-31 16:41                     ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Phil Auld @ 2019-10-31 13:57 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Hillf Danton,
	Parth Shah, Rik van Riel

Hi Vincent,

On Wed, Oct 30, 2019 at 06:25:49PM +0100 Vincent Guittot wrote:
> On Wed, 30 Oct 2019 at 15:39, Phil Auld <pauld@redhat.com> wrote:
> > > That fact that the 4 nodes works well but not the 8 nodes is a bit
> > > surprising except if this means more NUMA level in the sched_domain
> > > topology
> > > Could you give us more details about the sched domain topology ?
> > >
> >
> > The 8-node system has 5 sched domain levels.  The 4-node system only
> > has 3.
> 
> That's an interesting difference, and your additional tests on an 8
> node system with 3 levels tend to confirm that the number of levels
> makes a difference.
> I need to study a bit more how this can impact the spread of tasks

So I think I understand what my numbers have been showing. 

I believe the numa balancing is causing problems.

Here's numbers from the test on 5.4-rc3+ without your series: 

echo 1 >  /proc/sys/kernel/numa_balancing 
lu.C.x_156_GROUP_1  Average    10.87  0.00   0.00   11.49  36.69  34.26  30.59  32.10
lu.C.x_156_GROUP_2  Average    20.15  16.32  9.49   24.91  21.07  20.93  21.63  21.50
lu.C.x_156_GROUP_3  Average    21.27  17.23  11.84  21.80  20.91  20.68  21.11  21.16
lu.C.x_156_GROUP_4  Average    19.44  6.53   8.71   19.72  22.95  23.16  28.85  26.64
lu.C.x_156_GROUP_5  Average    20.59  6.20   11.32  14.63  28.73  30.36  22.20  21.98
lu.C.x_156_NORMAL_1 Average    20.50  19.95  20.40  20.45  18.75  19.35  18.25  18.35
lu.C.x_156_NORMAL_2 Average    17.15  19.04  18.42  18.69  21.35  21.42  20.00  19.92
lu.C.x_156_NORMAL_3 Average    18.00  18.15  17.55  17.60  18.90  18.40  19.90  19.75
lu.C.x_156_NORMAL_4 Average    20.53  20.05  20.21  19.11  19.00  19.47  19.37  18.26
lu.C.x_156_NORMAL_5 Average    18.72  18.78  19.72  18.50  19.67  19.72  21.11  19.78

============156_GROUP========Mop/s===================================
min	q1	median	q3	max
1564.63	3003.87	3928.23	5411.13	8386.66
============156_GROUP========time====================================
min	q1	median	q3	max
243.12	376.82	519.06	678.79	1303.18
============156_NORMAL========Mop/s===================================
min	q1	median	q3	max
13845.6	18013.8	18545.5	19359.9	19647.4
============156_NORMAL========time====================================
min	q1	median	q3	max
103.78	105.32	109.95	113.19	147.27

(This one above is especially bad... we don't usually see 0.00s, but overall it's 
basically on par. It's reflected in the spread of the results).
 

echo 0 >  /proc/sys/kernel/numa_balancing 
lu.C.x_156_GROUP_1  Average    17.75  19.30  21.20  21.20  20.20  20.80  18.90  16.65
lu.C.x_156_GROUP_2  Average    18.38  19.25  21.00  20.06  20.19  20.31  19.56  17.25
lu.C.x_156_GROUP_3  Average    21.81  21.00  18.38  16.86  20.81  21.48  18.24  17.43
lu.C.x_156_GROUP_4  Average    20.48  20.96  19.61  17.61  17.57  19.74  18.48  21.57
lu.C.x_156_GROUP_5  Average    23.32  21.96  19.16  14.28  21.44  22.56  17.00  16.28
lu.C.x_156_NORMAL_1 Average    19.50  19.83  19.58  19.25  19.58  19.42  19.42  19.42
lu.C.x_156_NORMAL_2 Average    18.90  18.40  20.00  19.80  19.70  19.30  19.80  20.10
lu.C.x_156_NORMAL_3 Average    19.45  19.09  19.91  20.09  19.45  18.73  19.45  19.82
lu.C.x_156_NORMAL_4 Average    19.64  19.27  19.64  19.00  19.82  19.55  19.73  19.36
lu.C.x_156_NORMAL_5 Average    18.75  19.42  20.08  19.67  18.75  19.50  19.92  19.92

============156_GROUP========Mop/s===================================
min	q1	median	q3	max
14956.3	16346.5	17505.7	18440.6	22492.7
============156_GROUP========time====================================
min	q1	median	q3	max
90.65	110.57	116.48	124.74	136.33
============156_NORMAL========Mop/s===================================
min	q1	median	q3	max
29801.3	30739.2	31967.5	32151.3	34036
============156_NORMAL========time====================================
min	q1	median	q3	max
59.91	63.42	63.78	66.33	68.42


Note there is a significant improvement already. But we are seeing imbalance due to
using weighted load and averages. In this case it's only a 55% slowdown rather than
the 5x. But the overall performance of the benchmark is also much better in both cases.



Here's the same test, same system with the full series (lb_v4a as I've been calling it):

echo 1 >  /proc/sys/kernel/numa_balancing 
lu.C.x_156_GROUP_1  Average    18.59  19.36  19.50  18.86  20.41  20.59  18.27  20.41
lu.C.x_156_GROUP_2  Average    19.52  20.52  20.48  21.17  19.52  19.09  17.70  18.00
lu.C.x_156_GROUP_3  Average    20.58  20.71  20.17  20.50  18.46  19.50  18.58  17.50
lu.C.x_156_GROUP_4  Average    18.95  19.63  19.47  19.84  18.79  19.84  20.84  18.63
lu.C.x_156_GROUP_5  Average    16.85  17.96  19.89  19.15  19.26  20.48  21.70  20.70
lu.C.x_156_NORMAL_1 Average    18.04  18.48  20.00  19.72  20.72  20.48  18.48  20.08
lu.C.x_156_NORMAL_2 Average    18.22  20.56  19.50  19.39  20.67  19.83  18.44  19.39
lu.C.x_156_NORMAL_3 Average    17.72  19.61  19.56  19.17  20.17  19.89  20.78  19.11
lu.C.x_156_NORMAL_4 Average    18.05  19.74  20.21  19.89  20.32  20.26  19.16  18.37
lu.C.x_156_NORMAL_5 Average    18.89  19.95  20.21  20.63  19.84  19.26  19.26  17.95

============156_GROUP========Mop/s===================================
min	q1	median	q3	max
13460.1	14949	15851.7	16391.4	18993
============156_GROUP========time====================================
min	q1	median	q3	max
107.35	124.39	128.63	136.4	151.48
============156_NORMAL========Mop/s===================================
min	q1	median	q3	max
14418.5	18512.4	19049.5	19682	19808.8
============156_NORMAL========time====================================
min	q1	median	q3	max
102.93	103.6	107.04	110.14	141.42


echo 0 >  /proc/sys/kernel/numa_balancing 
lu.C.x_156_GROUP_1  Average    19.00  19.33  19.33  19.58  20.08  19.67  19.83  19.17
lu.C.x_156_GROUP_2  Average    18.55  19.91  20.09  19.27  18.82  19.27  19.91  20.18
lu.C.x_156_GROUP_3  Average    18.42  19.08  19.75  19.00  19.50  20.08  20.25  19.92
lu.C.x_156_GROUP_4  Average    18.42  19.83  19.17  19.50  19.58  19.83  19.83  19.83
lu.C.x_156_GROUP_5  Average    19.17  19.42  20.17  19.92  19.25  18.58  19.92  19.58
lu.C.x_156_NORMAL_1 Average    19.25  19.50  19.92  18.92  19.33  19.75  19.58  19.75
lu.C.x_156_NORMAL_2 Average    19.42  19.25  17.83  18.17  19.83  20.50  20.42  20.58
lu.C.x_156_NORMAL_3 Average    18.58  19.33  19.75  18.25  19.42  20.25  20.08  20.33
lu.C.x_156_NORMAL_4 Average    19.00  19.55  19.73  18.73  19.55  20.00  19.64  19.82
lu.C.x_156_NORMAL_5 Average    19.25  19.25  19.50  18.75  19.92  19.58  19.92  19.83

============156_GROUP========Mop/s===================================
min	q1	median	q3	max
28520.1	29024.2	29042.1	29367.4	31235.2
============156_GROUP========time====================================
min	q1	median	q3	max
65.28	69.43	70.21	70.25	71.49
============156_NORMAL========Mop/s===================================
min	q1	median	q3	max
28974.5	29806.5	30237.1	30907.4	31830.1
============156_NORMAL========time====================================
min	q1	median	q3	max
64.06	65.97	67.43	68.41	70.37


This all now makes sense. Looking at the numa balancing code a bit, you can see 
that it still uses load, so it will still be subject to making bogus decisions 
based on the weighted load. In this case it's been actively working against the 
load balancer because of that.

I think with the three numa levels on this system the numa balancing was able to 
win more often. We don't see this effect to the same degree on systems with only 
one SD_NUMA level.

Following the other part of this thread, I have to add that I'm of the opinion 
that the weighted load (which is all we have now, I believe) really should be used 
only in extreme cases of overload to deal with fairness, and even then maybe not.
As far as I can see, once fair group scheduling is involved, that load is 
basically a random number between 1 and 1024. It really has no bearing on how 
much "load" a task will put on a cpu. Any comparison of that to cpu capacity 
is pretty meaningless. 
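To illustrate why that load value ends up so arbitrary, here is a toy model (not the kernel's actual weight/PELT code; `effective_load` and its arguments are invented for illustration): with group scheduling, a task's nice-0 weight of 1024 is rescaled at every level of the cgroup hierarchy by that group's share of its parent, so the final number the load balancer sees depends heavily on sibling activity:

```python
NICE_0_LOAD = 1024  # default weight of a nice-0 task

def effective_load(task_weight, hierarchy):
    """Rescale a task's weight through nested groups.

    hierarchy: list of (group_shares, total_weight_in_parent) pairs,
    ordered from the task's group up to the root.
    """
    load = task_weight
    for shares, total in hierarchy:
        load = load * shares // max(total, 1)  # each level scales by its share
    return load

# A lone nice-0 task at the root: the full 1024.
print(effective_load(NICE_0_LOAD, []))                              # 1024
# Same task in a group with default shares and 9 busy sibling groups:
print(effective_load(NICE_0_LOAD, [(1024, 10 * 1024)]))             # 102
# Nest it one level deeper and the value shrinks again:
print(effective_load(NICE_0_LOAD, [(1024, 10240), (1024, 10240)]))  # 10
```

The same task contributes anywhere from ~10 to 1024 to the "load" depending purely on cgroup nesting, which is the sense in which comparing it to cpu capacity is meaningless.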

I'm sure there are workloads for which the numa balancing is more important. But 
even then I suspect it is making the wrong decisions more often than not. I think 
a similar rework may be needed :)

I've asked our perf team to try the full battery of tests with numa balancing 
disabled to see what it shows across the board.


Good job on this and thanks for the time looking at my specific issues. 


As far as this series is concerned, and as far as it matters: 

Acked-by: Phil Auld <pauld@redhat.com>



Cheers,
Phil

-- 


^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-31 13:57                   ` Phil Auld
@ 2019-10-31 16:41                     ` Vincent Guittot
  0 siblings, 0 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-10-31 16:41 UTC (permalink / raw)
  To: Phil Auld
  Cc: Ingo Molnar, Mel Gorman, linux-kernel, Ingo Molnar,
	Peter Zijlstra, Valentin Schneider, Srikar Dronamraju,
	Quentin Perret, Dietmar Eggemann, Morten Rasmussen, Hillf Danton,
	Parth Shah, Rik van Riel

On Thu, 31 Oct 2019 at 14:57, Phil Auld <pauld@redhat.com> wrote:
>
> Hi Vincent,
>
> On Wed, Oct 30, 2019 at 06:25:49PM +0100 Vincent Guittot wrote:
> > On Wed, 30 Oct 2019 at 15:39, Phil Auld <pauld@redhat.com> wrote:
> > > > That fact that the 4 nodes works well but not the 8 nodes is a bit
> > > > surprising except if this means more NUMA level in the sched_domain
> > > > topology
> > > > Could you give us more details about the sched domain topology ?
> > > >
> > >
> > > The 8-node system has 5 sched domain levels.  The 4-node system only
> > > has 3.
> >
> > That's an interesting difference. and your additional tests on a 8
> > nodes with 3 level tends to confirm that the number of level make a
> > difference
> > I need to study a bit more how this can impact the spread of tasks
>
> So I think I understand what my numbers have been showing.
>
> I believe the numa balancing is causing problems.
>
> Here's numbers from the test on 5.4-rc3+ without your series:
>
> echo 1 >  /proc/sys/kernel/numa_balancing
> lu.C.x_156_GROUP_1  Average    10.87  0.00   0.00   11.49  36.69  34.26  30.59  32.10
> lu.C.x_156_GROUP_2  Average    20.15  16.32  9.49   24.91  21.07  20.93  21.63  21.50
> lu.C.x_156_GROUP_3  Average    21.27  17.23  11.84  21.80  20.91  20.68  21.11  21.16
> lu.C.x_156_GROUP_4  Average    19.44  6.53   8.71   19.72  22.95  23.16  28.85  26.64
> lu.C.x_156_GROUP_5  Average    20.59  6.20   11.32  14.63  28.73  30.36  22.20  21.98
> lu.C.x_156_NORMAL_1 Average    20.50  19.95  20.40  20.45  18.75  19.35  18.25  18.35
> lu.C.x_156_NORMAL_2 Average    17.15  19.04  18.42  18.69  21.35  21.42  20.00  19.92
> lu.C.x_156_NORMAL_3 Average    18.00  18.15  17.55  17.60  18.90  18.40  19.90  19.75
> lu.C.x_156_NORMAL_4 Average    20.53  20.05  20.21  19.11  19.00  19.47  19.37  18.26
> lu.C.x_156_NORMAL_5 Average    18.72  18.78  19.72  18.50  19.67  19.72  21.11  19.78
>
> ============156_GROUP========Mop/s===================================
> min     q1      median  q3      max
> 1564.63 3003.87 3928.23 5411.13 8386.66
> ============156_GROUP========time====================================
> min     q1      median  q3      max
> 243.12  376.82  519.06  678.79  1303.18
> ============156_NORMAL========Mop/s===================================
> min     q1      median  q3      max
> 13845.6 18013.8 18545.5 19359.9 19647.4
> ============156_NORMAL========time====================================
> min     q1      median  q3      max
> 103.78  105.32  109.95  113.19  147.27
>
> (This one above is especially bad... we don't usually see 0.00s, but overall it's
> basically on par. It's reflected in the spread of the results).
>
>
> echo 0 >  /proc/sys/kernel/numa_balancing
> lu.C.x_156_GROUP_1  Average    17.75  19.30  21.20  21.20  20.20  20.80  18.90  16.65
> lu.C.x_156_GROUP_2  Average    18.38  19.25  21.00  20.06  20.19  20.31  19.56  17.25
> lu.C.x_156_GROUP_3  Average    21.81  21.00  18.38  16.86  20.81  21.48  18.24  17.43
> lu.C.x_156_GROUP_4  Average    20.48  20.96  19.61  17.61  17.57  19.74  18.48  21.57
> lu.C.x_156_GROUP_5  Average    23.32  21.96  19.16  14.28  21.44  22.56  17.00  16.28
> lu.C.x_156_NORMAL_1 Average    19.50  19.83  19.58  19.25  19.58  19.42  19.42  19.42
> lu.C.x_156_NORMAL_2 Average    18.90  18.40  20.00  19.80  19.70  19.30  19.80  20.10
> lu.C.x_156_NORMAL_3 Average    19.45  19.09  19.91  20.09  19.45  18.73  19.45  19.82
> lu.C.x_156_NORMAL_4 Average    19.64  19.27  19.64  19.00  19.82  19.55  19.73  19.36
> lu.C.x_156_NORMAL_5 Average    18.75  19.42  20.08  19.67  18.75  19.50  19.92  19.92
>
> ============156_GROUP========Mop/s===================================
> min     q1      median  q3      max
> 14956.3 16346.5 17505.7 18440.6 22492.7
> ============156_GROUP========time====================================
> min     q1      median  q3      max
> 90.65   110.57  116.48  124.74  136.33
> ============156_NORMAL========Mop/s===================================
> min     q1      median  q3      max
> 29801.3 30739.2 31967.5 32151.3 34036
> ============156_NORMAL========time====================================
> min     q1      median  q3      max
> 59.91   63.42   63.78   66.33   68.42
>
>
> Note there is a significant improvement already. But we are seeing imbalance due to
> using weighted load and averages. In this case it's only a 55% slowdown rather than
> the 5x. But the overall performance of the benchmark is also much better in both cases.
>
>
>
> Here's the same test, same system with the full series (lb_v4a as I've been calling it):
>
> echo 1 >  /proc/sys/kernel/numa_balancing
> lu.C.x_156_GROUP_1  Average    18.59  19.36  19.50  18.86  20.41  20.59  18.27  20.41
> lu.C.x_156_GROUP_2  Average    19.52  20.52  20.48  21.17  19.52  19.09  17.70  18.00
> lu.C.x_156_GROUP_3  Average    20.58  20.71  20.17  20.50  18.46  19.50  18.58  17.50
> lu.C.x_156_GROUP_4  Average    18.95  19.63  19.47  19.84  18.79  19.84  20.84  18.63
> lu.C.x_156_GROUP_5  Average    16.85  17.96  19.89  19.15  19.26  20.48  21.70  20.70
> lu.C.x_156_NORMAL_1 Average    18.04  18.48  20.00  19.72  20.72  20.48  18.48  20.08
> lu.C.x_156_NORMAL_2 Average    18.22  20.56  19.50  19.39  20.67  19.83  18.44  19.39
> lu.C.x_156_NORMAL_3 Average    17.72  19.61  19.56  19.17  20.17  19.89  20.78  19.11
> lu.C.x_156_NORMAL_4 Average    18.05  19.74  20.21  19.89  20.32  20.26  19.16  18.37
> lu.C.x_156_NORMAL_5 Average    18.89  19.95  20.21  20.63  19.84  19.26  19.26  17.95
>
> ============156_GROUP========Mop/s===================================
> min     q1      median  q3      max
> 13460.1 14949   15851.7 16391.4 18993
> ============156_GROUP========time====================================
> min     q1      median  q3      max
> 107.35  124.39  128.63  136.4   151.48
> ============156_NORMAL========Mop/s===================================
> min     q1      median  q3      max
> 14418.5 18512.4 19049.5 19682   19808.8
> ============156_NORMAL========time====================================
> min     q1      median  q3      max
> 102.93  103.6   107.04  110.14  141.42
>
>
> echo 0 >  /proc/sys/kernel/numa_balancing
> lu.C.x_156_GROUP_1  Average    19.00  19.33  19.33  19.58  20.08  19.67  19.83  19.17
> lu.C.x_156_GROUP_2  Average    18.55  19.91  20.09  19.27  18.82  19.27  19.91  20.18
> lu.C.x_156_GROUP_3  Average    18.42  19.08  19.75  19.00  19.50  20.08  20.25  19.92
> lu.C.x_156_GROUP_4  Average    18.42  19.83  19.17  19.50  19.58  19.83  19.83  19.83
> lu.C.x_156_GROUP_5  Average    19.17  19.42  20.17  19.92  19.25  18.58  19.92  19.58
> lu.C.x_156_NORMAL_1 Average    19.25  19.50  19.92  18.92  19.33  19.75  19.58  19.75
> lu.C.x_156_NORMAL_2 Average    19.42  19.25  17.83  18.17  19.83  20.50  20.42  20.58
> lu.C.x_156_NORMAL_3 Average    18.58  19.33  19.75  18.25  19.42  20.25  20.08  20.33
> lu.C.x_156_NORMAL_4 Average    19.00  19.55  19.73  18.73  19.55  20.00  19.64  19.82
> lu.C.x_156_NORMAL_5 Average    19.25  19.25  19.50  18.75  19.92  19.58  19.92  19.83
>
> ============156_GROUP========Mop/s===================================
> min     q1      median  q3      max
> 28520.1 29024.2 29042.1 29367.4 31235.2
> ============156_GROUP========time====================================
> min     q1      median  q3      max
> 65.28   69.43   70.21   70.25   71.49
> ============156_NORMAL========Mop/s===================================
> min     q1      median  q3      max
> 28974.5 29806.5 30237.1 30907.4 31830.1
> ============156_NORMAL========time====================================
> min     q1      median  q3      max
> 64.06   65.97   67.43   68.41   70.37
>
>
> This all now makes sense. Looking at the numa balancing code a bit you can see
> that it still uses load so it will still be subject to making bogus decisions
> based on the weighted load. In this case it's been actively working against the
> load balancer because of that.

Thanks for the tests and interesting results

>
> I think with the three numa levels on this system the numa balancing was able to
> win more often. We don't see this effect to the same degree on systems with only
> one SD_NUMA level.
>
> Following the other part of this thread, I have to add that I'm of the opinion
> that the weighted load (which is all we have now, I believe) really should be used
> only in extreme cases of overload to deal with fairness, and even then maybe not.
> As far as I can see, once fair group scheduling is involved, that load is
> basically a random number between 1 and 1024.  It really has no bearing on how
> much  "load" a task will put on a cpu.   Any comparison of that to cpu capacity
> is pretty meaningless.
>
> I'm sure there are workloads for which the numa balancing is more important. But
> even then I suspect it is making the wrong decisions more often than not. I think
> a similar rework may be needed :)

Yes, there is probably room for better collaboration between the load
balancer and numa balancing

>
> I've asked our perf team to try the full battery of tests with numa balancing
> disabled  to see what it shows across the board.
>
>
> Good job on this and thanks for the time looking at my specific issues.

Thanks for your help


>
>
> As far as this series is concerned, and as far as it matters:
>
> Acked-by: Phil Auld <pauld@redhat.com>
>
>
>
> Cheers,
> Phil
>
> --
>

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-10-31 11:40           ` Mel Gorman
@ 2019-11-08 16:35             ` Vincent Guittot
  2019-11-08 18:37               ` Mel Gorman
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-11-08 16:35 UTC (permalink / raw)
  To: Mel Gorman
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

Le Thursday 31 Oct 2019 à 11:40:20 (+0000), Mel Gorman a écrit :
> On Thu, Oct 31, 2019 at 12:13:09PM +0100, Vincent Guittot wrote:
> > > > > On the last one, spreading tasks evenly across NUMA domains is not
> > > > > necessarily a good idea. If I have 2 tasks running on a 2-socket machine
> > > > > with 24 logical CPUs per socket, it should not automatically mean that
> > > > > one task should move cross-node and I have definitely observed this
> > > > > happening. It's probably bad in terms of locality no matter what but it's
> > > > > especially bad if the 2 tasks happened to be communicating because then
> > > > > load balancing will pull apart the tasks while wake_affine will push
> > > > > them together (and potentially NUMA balancing as well). Note that this
> > > > > also applies for some IO workloads because, depending on the filesystem,
> > > > > the task may be communicating with workqueues (XFS) or a kernel thread
> > > > > (ext4 with jbd2).
> > > >
> > > > This rework doesn't touch the NUMA_BALANCING part and NUMA balancing
> > > > still gives guidances with fbq_classify_group/queue.
> > >
> > > I know the NUMA_BALANCING part is not touched, I'm talking about load
> > > balancing across SD_NUMA domains which happens independently of
> > > NUMA_BALANCING. In fact, there is logic in NUMA_BALANCING that tries to
> > > override the load balancer when it moves tasks away from the preferred
> > > node.
> > 
> > Yes. This patchset relies on this override for now to prevent moving tasks away.
> 
> Fair enough, netperf hits the corner case where it does not work but
> that is also true without your series.

I ran the mmtests netperf test on my setup. The results are a mix of small
positive and negative differences (see below)

netperf-udp
                                    5.3-rc2                5.3-rc2
                                        tip               +rwk+fix
Hmean     send-64          95.06 (   0.00%)       94.12 *  -0.99%*
Hmean     send-128        191.71 (   0.00%)      189.94 *  -0.93%*
Hmean     send-256        379.05 (   0.00%)      370.96 *  -2.14%*
Hmean     send-1024      1485.24 (   0.00%)     1476.64 *  -0.58%*
Hmean     send-2048      2894.80 (   0.00%)     2887.00 *  -0.27%*
Hmean     send-3312      4580.27 (   0.00%)     4555.91 *  -0.53%*
Hmean     send-4096      5592.99 (   0.00%)     5517.31 *  -1.35%*
Hmean     send-8192      9117.00 (   0.00%)     9497.06 *   4.17%*
Hmean     send-16384    15824.59 (   0.00%)    15824.30 *  -0.00%*
Hmean     recv-64          95.06 (   0.00%)       94.08 *  -1.04%*
Hmean     recv-128        191.68 (   0.00%)      189.89 *  -0.93%*
Hmean     recv-256        378.94 (   0.00%)      370.87 *  -2.13%*
Hmean     recv-1024      1485.24 (   0.00%)     1476.20 *  -0.61%*
Hmean     recv-2048      2893.52 (   0.00%)     2885.25 *  -0.29%*
Hmean     recv-3312      4580.27 (   0.00%)     4553.48 *  -0.58%*
Hmean     recv-4096      5592.99 (   0.00%)     5517.27 *  -1.35%*
Hmean     recv-8192      9115.69 (   0.00%)     9495.69 *   4.17%*
Hmean     recv-16384    15824.36 (   0.00%)    15818.36 *  -0.04%*
Stddev    send-64           0.15 (   0.00%)        1.17 (-688.29%)
Stddev    send-128          1.56 (   0.00%)        1.15 (  25.96%)
Stddev    send-256          4.20 (   0.00%)        5.27 ( -25.63%)
Stddev    send-1024        20.11 (   0.00%)        5.68 (  71.74%)
Stddev    send-2048        11.06 (   0.00%)       21.74 ( -96.50%)
Stddev    send-3312        61.10 (   0.00%)       48.03 (  21.39%)
Stddev    send-4096        71.84 (   0.00%)       31.99 (  55.46%)
Stddev    send-8192       165.14 (   0.00%)      159.99 (   3.12%)
Stddev    send-16384       81.30 (   0.00%)      188.65 (-132.05%)
Stddev    recv-64           0.15 (   0.00%)        1.15 (-673.42%)
Stddev    recv-128          1.58 (   0.00%)        1.14 (  28.27%)
Stddev    recv-256          4.29 (   0.00%)        5.19 ( -21.05%)
Stddev    recv-1024        20.11 (   0.00%)        5.70 (  71.67%)
Stddev    recv-2048        10.43 (   0.00%)       21.41 (-105.22%)
Stddev    recv-3312        61.10 (   0.00%)       46.92 (  23.20%)
Stddev    recv-4096        71.84 (   0.00%)       31.97 (  55.50%)
Stddev    recv-8192       163.90 (   0.00%)      160.88 (   1.84%)
Stddev    recv-16384       81.41 (   0.00%)      187.01 (-129.71%)

                     5.3-rc2     5.3-rc2
                         tip    +rwk+fix
Duration User          38.90       39.13
Duration System      1311.29     1311.10
Duration Elapsed     1892.82     1892.86
 
netperf-tcp
                               5.3-rc2                5.3-rc2
                                   tip               +rwk+fix
Hmean     64         871.30 (   0.00%)      860.90 *  -1.19%*
Hmean     128       1689.39 (   0.00%)     1679.31 *  -0.60%*
Hmean     256       3199.59 (   0.00%)     3241.98 *   1.32%*
Hmean     1024      9390.47 (   0.00%)     9268.47 *  -1.30%*
Hmean     2048     13373.95 (   0.00%)    13395.61 *   0.16%*
Hmean     3312     16701.30 (   0.00%)    17165.96 *   2.78%*
Hmean     4096     15831.03 (   0.00%)    15544.66 *  -1.81%*
Hmean     8192     19720.01 (   0.00%)    20188.60 *   2.38%*
Hmean     16384    23925.90 (   0.00%)    23914.50 *  -0.05%*
Stddev    64           7.38 (   0.00%)        4.23 (  42.67%)
Stddev    128         11.62 (   0.00%)       10.13 (  12.85%)
Stddev    256         34.33 (   0.00%)        7.94 (  76.88%)
Stddev    1024        35.61 (   0.00%)      116.34 (-226.66%)
Stddev    2048       285.30 (   0.00%)       80.50 (  71.78%)
Stddev    3312       304.74 (   0.00%)      449.08 ( -47.36%)
Stddev    4096       668.11 (   0.00%)      569.30 (  14.79%)
Stddev    8192       733.23 (   0.00%)      944.38 ( -28.80%)
Stddev    16384      553.03 (   0.00%)      299.44 (  45.86%)

                     5.3-rc2     5.3-rc2
                         tip    +rwk+fix
Duration User         138.05      140.95
Duration System      1210.60     1208.45
Duration Elapsed     1352.86     1352.90


> 
> > I agree that additional patches are probably needed to improve load
> > balance at NUMA level and I expect that this rework will make it
> > simpler to add.
> > I just wanted to get the output of some real use cases before defining
> > more numa level specific conditions. Some want to spread across their numa
> > nodes but others want to keep everything together. The preferred node
> > and fbq_classify_group were the only sensible metrics to me when I
> > wrote this patchset, but changes can be added if they make sense.
> > 
> 
> That's fair. While it was possible to address the case before your
> series, it was a hatchet job. If the changelog simply notes that some
> special casing may still be required for SD_NUMA but it's outside the
> scope of the series, then I'd be happy. At least there is a good chance
> then if there is follow-up work that it won't be interpreted as an
> attempt to reintroduce hacky heuristics.
>

Would the additional comment below about the work to be done for SD_NUMA
make sense to you?

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0ad4b21..7e4cb65 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6960,11 +6960,34 @@ enum fbq_type { regular, remote, all };
  * group. see update_sd_pick_busiest().
  */
 enum group_type {
+	/*
+	 * The group has spare capacity that can be used to process more work.
+	 */
 	group_has_spare = 0,
+	/*
+	 * The group is fully used and the tasks don't compete for more CPU
+	 * cycles. Nevertheless, some tasks might wait before running.
+	 */
 	group_fully_busy,
+	/*
+	 * One task doesn't fit the CPU's capacity and must be migrated to a
+	 * more powerful CPU.
+	 */
 	group_misfit_task,
+	/*
+	 * One local CPU with higher capacity is available and the task should
+	 * be migrated to it instead of staying on the current CPU.
+	 */
 	group_asym_packing,
+	/*
+	 * The tasks' affinity prevents the scheduler from balancing the load
+	 * across the system.
+	 */
 	group_imbalanced,
+	/*
+	 * The CPU is overloaded and can't provide the expected CPU cycles to
+	 * all tasks.
+	 */
 	group_overloaded
 };
 
@@ -8563,7 +8586,11 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
 
 	/*
 	 * Try to use spare capacity of local group without overloading it or
-	 * emptying busiest
+	 * emptying busiest.
+	 * XXX Spreading tasks across numa nodes is not always the best policy
+	 * and special care should be taken at the SD_NUMA domain level before
+	 * spreading the tasks. For now, load_balance() fully relies on
+	 * NUMA_BALANCING and fbq_classify_group/rq to override the decision.
 	 */
 	if (local->group_type == group_has_spare) {
 		if (busiest->group_type > group_fully_busy) {
-- 
2.7.4

> 
> > >
> > > > But the latter could also take advantage of the new type of group. For
> > > > example, what I did in the fix for find_idlest_group : checking
> > > > numa_preferred_nid when the group has capacity and keep the task on
> > > > preferred node if possible. Similar behavior could also be beneficial
> > > > in periodic load_balance case.
> > > >
> > >
> > > And this is the catch -- numa_preferred_nid is not guaranteed to be set at
> > > all. NUMA balancing might be disabled, the task may not have been running
> > > long enough to pick a preferred NID or NUMA balancing might be unable to
> > > pick a preferred NID. The decision to avoid unnecessary migrations across
> > > NUMA domains should be made independently of NUMA balancing. The netperf
> > > configuration from mmtests is great at illustrating the point because it'll
> > > also say what the average local/remote access ratio is. 2 communicating
> > > tasks running on an otherwise idle NUMA machine should not have the load
> > > balancer move the server to one node and the client to another.
> > 
> > I'm going to make it a try on my setup to see the results
> > 
> 
> Thanks.
> 
> -- 
> Mel Gorman
> SUSE Labs

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-11-08 16:35             ` Vincent Guittot
@ 2019-11-08 18:37               ` Mel Gorman
  2019-11-12 10:58                 ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Mel Gorman @ 2019-11-08 18:37 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Fri, Nov 08, 2019 at 05:35:01PM +0100, Vincent Guittot wrote:
> > Fair enough, netperf hits the corner case where it does not work but
> > that is also true without your series.
> 
> I ran the mmtests netperf test on my setup. The results are a mix of small
> positive and negative differences (see below)
> 
> <SNIP>
>
> netperf-tcp
>                                5.3-rc2                5.3-rc2
>                                    tip               +rwk+fix
> Hmean     64         871.30 (   0.00%)      860.90 *  -1.19%*
> Hmean     128       1689.39 (   0.00%)     1679.31 *  -0.60%*
> Hmean     256       3199.59 (   0.00%)     3241.98 *   1.32%*
> Hmean     1024      9390.47 (   0.00%)     9268.47 *  -1.30%*
> Hmean     2048     13373.95 (   0.00%)    13395.61 *   0.16%*
> Hmean     3312     16701.30 (   0.00%)    17165.96 *   2.78%*
> Hmean     4096     15831.03 (   0.00%)    15544.66 *  -1.81%*
> Hmean     8192     19720.01 (   0.00%)    20188.60 *   2.38%*
> Hmean     16384    23925.90 (   0.00%)    23914.50 *  -0.05%*
> Stddev    64           7.38 (   0.00%)        4.23 (  42.67%)
> Stddev    128         11.62 (   0.00%)       10.13 (  12.85%)
> Stddev    256         34.33 (   0.00%)        7.94 (  76.88%)
> Stddev    1024        35.61 (   0.00%)      116.34 (-226.66%)
> Stddev    2048       285.30 (   0.00%)       80.50 (  71.78%)
> Stddev    3312       304.74 (   0.00%)      449.08 ( -47.36%)
> Stddev    4096       668.11 (   0.00%)      569.30 (  14.79%)
> Stddev    8192       733.23 (   0.00%)      944.38 ( -28.80%)
> Stddev    16384      553.03 (   0.00%)      299.44 (  45.86%)
> 
>                      5.3-rc2     5.3-rc2
>                          tip    +rwk+fix
> Duration User         138.05      140.95
> Duration System      1210.60     1208.45
> Duration Elapsed     1352.86     1352.90
> 

This roughly matches what I've seen. The interesting part to me for
netperf is the next section of the report that reports the locality of
numa hints. With netperf on a 2-socket machine, it's generally around
50% as the client/server are pulled apart. Because netperf is not
heavily memory bound, it doesn't have much impact on the overall
performance but it's good at catching the cross-node migrations.

> > 
> > > I agree that additional patches are probably needed to improve load
> > > balance at NUMA level and I expect that this rework will make it
> > > simpler to add.
> > > I just wanted to get the output of some real use cases before defining
> > > more numa level specific conditions. Some want to spread across their numa
> > > nodes but others want to keep everything together. The preferred node
> > > and fbq_classify_group were the only sensible metrics to me when I
> > > wrote this patchset, but changes can be added if they make sense.
> > > 
> > 
> > That's fair. While it was possible to address the case before your
> > series, it was a hatchet job. If the changelog simply notes that some
> > special casing may still be required for SD_NUMA but it's outside the
> > scope of the series, then I'd be happy. At least there is a good chance
> > then if there is follow-up work that it won't be interpreted as an
> > attempt to reintroduce hacky heuristics.
> >
> 
> Would the additional comment below about the work to be done for SD_NUMA
> make sense to you?
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0ad4b21..7e4cb65 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6960,11 +6960,34 @@ enum fbq_type { regular, remote, all };
>   * group. see update_sd_pick_busiest().
>   */
>  enum group_type {
> +	/*
> +	 * The group has spare capacity that can be used to process more work.
> +	 */
>  	group_has_spare = 0,
> +	/*
> +	 * The group is fully used and the tasks don't compete for more CPU
> +	 * cycles. Nevertheless, some tasks might wait before running.
> +	 */
>  	group_fully_busy,
> +	/*
> +	 * One task doesn't fit the CPU's capacity and must be migrated to a
> +	 * more powerful CPU.
> +	 */
>  	group_misfit_task,
> +	/*
> +	 * One local CPU with higher capacity is available and the task should
> +	 * be migrated to it instead of staying on the current CPU.
> +	 */
>  	group_asym_packing,
> +	/*
> +	 * The tasks' affinity prevents the scheduler from balancing the load
> +	 * across the system.
> +	 */
>  	group_imbalanced,
> +	/*
> +	 * The CPU is overloaded and can't provide the expected CPU cycles to
> +	 * all tasks.
> +	 */
>  	group_overloaded
>  };

Looks good.

>  
> @@ -8563,7 +8586,11 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
>  
>  	/*
>  	 * Try to use spare capacity of local group without overloading it or
> -	 * emptying busiest
> +	 * emptying busiest.
> +	 * XXX Spreading tasks across numa nodes is not always the best policy
> +	 * and special care should be taken at the SD_NUMA domain level before
> +	 * spreading the tasks. For now, load_balance() fully relies on
> +	 * NUMA_BALANCING and fbq_classify_group/rq to override the decision.
>  	 */
>  	if (local->group_type == group_has_spare) {
>  		if (busiest->group_type > group_fully_busy) {

Perfect. Any patch in that area can then update the comment and it
should be semi-obvious to the next reviewer that it's expected.

Thanks Vincent.

-- 
Mel Gorman
SUSE Labs

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-11-08 18:37               ` Mel Gorman
@ 2019-11-12 10:58                 ` Vincent Guittot
  2019-11-12 15:06                   ` Mel Gorman
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-11-12 10:58 UTC (permalink / raw)
  To: Mel Gorman
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

Le Friday 08 Nov 2019 à 18:37:30 (+0000), Mel Gorman a écrit :
> On Fri, Nov 08, 2019 at 05:35:01PM +0100, Vincent Guittot wrote:
> > > Fair enough, netperf hits the corner case where it does not work but
> > > that is also true without your series.
> > 
> > I ran the mmtests/netperf test on my setup. It's a mix of small positive or
> > negative differences (see below)
> > 
> > <SNIP>
> >
> > netperf-tcp
> >                                5.3-rc2                5.3-rc2
> >                                    tip               +rwk+fix
> > Hmean     64         871.30 (   0.00%)      860.90 *  -1.19%*
> > Hmean     128       1689.39 (   0.00%)     1679.31 *  -0.60%*
> > Hmean     256       3199.59 (   0.00%)     3241.98 *   1.32%*
> > Hmean     1024      9390.47 (   0.00%)     9268.47 *  -1.30%*
> > Hmean     2048     13373.95 (   0.00%)    13395.61 *   0.16%*
> > Hmean     3312     16701.30 (   0.00%)    17165.96 *   2.78%*
> > Hmean     4096     15831.03 (   0.00%)    15544.66 *  -1.81%*
> > Hmean     8192     19720.01 (   0.00%)    20188.60 *   2.38%*
> > Hmean     16384    23925.90 (   0.00%)    23914.50 *  -0.05%*
> > Stddev    64           7.38 (   0.00%)        4.23 (  42.67%)
> > Stddev    128         11.62 (   0.00%)       10.13 (  12.85%)
> > Stddev    256         34.33 (   0.00%)        7.94 (  76.88%)
> > Stddev    1024        35.61 (   0.00%)      116.34 (-226.66%)
> > Stddev    2048       285.30 (   0.00%)       80.50 (  71.78%)
> > Stddev    3312       304.74 (   0.00%)      449.08 ( -47.36%)
> > Stddev    4096       668.11 (   0.00%)      569.30 (  14.79%)
> > Stddev    8192       733.23 (   0.00%)      944.38 ( -28.80%)
> > Stddev    16384      553.03 (   0.00%)      299.44 (  45.86%)
> > 
> >                      5.3-rc2     5.3-rc2
> >                          tip    +rwk+fix
> > Duration User         138.05      140.95
> > Duration System      1210.60     1208.45
> > Duration Elapsed     1352.86     1352.90
> > 
> 
> This roughly matches what I've seen. The interesting part to me for
> netperf is the next section of the report that reports the locality of
> numa hints. With netperf on a 2-socket machine, it's generally around
> 50% as the client/server are pulled apart. Because netperf is not
> heavily memory bound, it doesn't have much impact on the overall
> performance but it's good at catching the cross-node migrations.

Ok. I didn't want to make my reply too long. I have put them below for 
the netperf-tcp results:
                                        5.3-rc2        5.3-rc2
                                            tip      +rwk+fix
Ops NUMA alloc hit                  60077762.00    60387907.00
Ops NUMA alloc miss                        0.00           0.00
Ops NUMA interleave hit                    0.00           0.00
Ops NUMA alloc local                60077571.00    60387798.00
Ops NUMA base-page range updates        5948.00       17223.00
Ops NUMA PTE updates                    5948.00       17223.00
Ops NUMA PMD updates                       0.00           0.00
Ops NUMA hint faults                    4639.00       14050.00
Ops NUMA hint local faults %            2073.00        6515.00
Ops NUMA hint local percent               44.69          46.37
Ops NUMA pages migrated                 1528.00        4306.00
Ops AutoNUMA cost                         23.27          70.45
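As an aside, the derived "NUMA hint local percent" line follows directly from the raw counters: it is the local hinting faults divided by the total hinting faults. A quick standalone check of the figures above (a sketch, not mmtests code):

```c
#include <assert.h>

/* NUMA hint local percent = 100 * local hint faults / total hint faults */
static double local_percent(double local_faults, double hint_faults)
{
	return 100.0 * local_faults / hint_faults;
}

/*
 * With the numbers above:
 *   tip:      100 * 2073 / 4639  ~= 44.69
 *   +rwk+fix: 100 * 6515 / 14050 ~= 46.37
 */
```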


>
> > > 
> > > > I agree that additional patches are probably needed to improve load
> > > > balance at NUMA level and I expect that this rework will make it
> > > > simpler to add.
> > > > I just wanted to get the output of some real use cases before defining
> > > > more NUMA-level specific conditions. Some workloads want to spread across their
> > > > NUMA nodes but others want to keep everything together. The preferred node
> > > > and fbq_classify_group were the only sensible metrics to me when I
> > > > wrote this patchset, but changes can be added if they make sense.
> > > > 
> > > 
> > > That's fair. While it was possible to address the case before your
> > > series, it was a hatchet job. If the changelog simply notes that some
> > > special casing may still be required for SD_NUMA but it's outside the
> > > scope of the series, then I'd be happy. At least there is a good chance
> > > then if there is follow-up work that it won't be interpreted as an
> > > attempt to reintroduce hacky heuristics.
> > >
> > 
> > Would the additional comment make sense for you about work to be done
> > for SD_NUMA ?
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 0ad4b21..7e4cb65 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6960,11 +6960,34 @@ enum fbq_type { regular, remote, all };
> >   * group. see update_sd_pick_busiest().
> >   */
> >  enum group_type {
> > +	/*
> > +	 * The group has spare capacity that can be used to process more work.
> > +	 */
> >  	group_has_spare = 0,
> > +	/*
> > +	 * The group is fully used and the tasks don't compete for more CPU
> > +	 * cycles. Nevertheless, some tasks might wait before running.
> > +	 */
> >  	group_fully_busy,
> > +	/*
> > +	 * One task doesn't fit with the CPU's capacity and must be migrated to a
> > +	 * more powerful CPU.
> > +	 */
> >  	group_misfit_task,
> > +	/*
> > +	 * One local CPU with higher capacity is available and the task should be
> > +	 * migrated to it instead of the current CPU.
> > +	 */
> >  	group_asym_packing,
> > +	/*
> > +	 * The tasks' affinity prevents the scheduler from balancing the load across
> > +	 * the system.
> > +	 */
> >  	group_imbalanced,
> > +	/*
> > +	 * The CPU is overloaded and can't provide expected CPU cycles to all
> > +	 * tasks.
> > +	 */
> >  	group_overloaded
> >  };
> 
> Looks good.
> 
> >  
> > @@ -8563,7 +8586,11 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
> >  
> >  	/*
> >  	 * Try to use spare capacity of local group without overloading it or
> > -	 * emptying busiest
> > +	 * emptying busiest.
> > +	 * XXX Spreading tasks across numa nodes is not always the best policy
> > +	 * and special care should be taken at the SD_NUMA domain level before
> > +	 * spreading the tasks. For now, load_balance() fully relies on
> > +	 * NUMA_BALANCING and fbq_classify_group/rq to override the decision.
> >  	 */
> >  	if (local->group_type == group_has_spare) {
> >  		if (busiest->group_type > group_fully_busy) {
> 
> Perfect. Any patch in that area can then update the comment and it
> should be semi-obvious to the next reviewer that it's expected.
> 
> Thanks Vincent.
> 
> -- 
> Mel Gorman
> SUSE Labs


* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-11-12 10:58                 ` Vincent Guittot
@ 2019-11-12 15:06                   ` Mel Gorman
  2019-11-12 15:40                     ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Mel Gorman @ 2019-11-12 15:06 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Tue, Nov 12, 2019 at 11:58:30AM +0100, Vincent Guittot wrote:
> > This roughly matches what I've seen. The interesting part to me for
> > netperf is the next section of the report that reports the locality of
> > numa hints. With netperf on a 2-socket machine, it's generally around
> > 50% as the client/server are pulled apart. Because netperf is not
> > heavily memory bound, it doesn't have much impact on the overall
> > performance but it's good at catching the cross-node migrations.
> 
> Ok. I didn't want to make my reply too long. I have put them below for 
> the netperf-tcp results:
>                                         5.3-rc2        5.3-rc2
>                                             tip      +rwk+fix
> Ops NUMA alloc hit                  60077762.00    60387907.00
> Ops NUMA alloc miss                        0.00           0.00
> Ops NUMA interleave hit                    0.00           0.00
> Ops NUMA alloc local                60077571.00    60387798.00
> Ops NUMA base-page range updates        5948.00       17223.00
> Ops NUMA PTE updates                    5948.00       17223.00
> Ops NUMA PMD updates                       0.00           0.00
> Ops NUMA hint faults                    4639.00       14050.00
> Ops NUMA hint local faults %            2073.00        6515.00
> Ops NUMA hint local percent               44.69          46.37
> Ops NUMA pages migrated                 1528.00        4306.00
> Ops AutoNUMA cost                         23.27          70.45
> 

Thanks -- it was "NUMA hint local percent" I was interested in and the
46.37% local hinting faults is likely indicative of the client/server
being load balanced across SD_NUMA domains without NUMA Balancing being
aggressive enough to fix it. At least I know I am not just seriously
unlucky or testing magical machines!

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-11-12 15:06                   ` Mel Gorman
@ 2019-11-12 15:40                     ` Vincent Guittot
  2019-11-12 17:45                       ` Mel Gorman
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-11-12 15:40 UTC (permalink / raw)
  To: Mel Gorman
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Tue, 12 Nov 2019 at 16:06, Mel Gorman <mgorman@techsingularity.net> wrote:
>
> On Tue, Nov 12, 2019 at 11:58:30AM +0100, Vincent Guittot wrote:
> > > This roughly matches what I've seen. The interesting part to me for
> > > netperf is the next section of the report that reports the locality of
> > > numa hints. With netperf on a 2-socket machine, it's generally around
> > > 50% as the client/server are pulled apart. Because netperf is not
> > > heavily memory bound, it doesn't have much impact on the overall
> > > performance but it's good at catching the cross-node migrations.
> >
> > Ok. I didn't want to make my reply too long. I have put them below for
> > the netperf-tcp results:
> >                                         5.3-rc2        5.3-rc2
> >                                             tip      +rwk+fix
> > Ops NUMA alloc hit                  60077762.00    60387907.00
> > Ops NUMA alloc miss                        0.00           0.00
> > Ops NUMA interleave hit                    0.00           0.00
> > Ops NUMA alloc local                60077571.00    60387798.00
> > Ops NUMA base-page range updates        5948.00       17223.00
> > Ops NUMA PTE updates                    5948.00       17223.00
> > Ops NUMA PMD updates                       0.00           0.00
> > Ops NUMA hint faults                    4639.00       14050.00
> > Ops NUMA hint local faults %            2073.00        6515.00
> > Ops NUMA hint local percent               44.69          46.37
> > Ops NUMA pages migrated                 1528.00        4306.00
> > Ops AutoNUMA cost                         23.27          70.45
> >
>
> Thanks -- it was "NUMA hint local percent" I was interested in and the
> 46.37% local hinting faults is likely indicative of the client/server
> being load balanced across SD_NUMA domains without NUMA Balancing being
> aggressive enough to fix it. At least I know I am not just seriously
> unlucky or testing magical machines!

I agree that the collaboration between load balancing at the SD_NUMA
level and NUMA balancing should be improved.

It's also interesting to notice that the patchset doesn't seem to do
worse than the baseline: 46.37% vs 44.69%

Vincent

>
> --
> Mel Gorman
> SUSE Labs


* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-11-12 15:40                     ` Vincent Guittot
@ 2019-11-12 17:45                       ` Mel Gorman
  0 siblings, 0 replies; 89+ messages in thread
From: Mel Gorman @ 2019-11-12 17:45 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Tue, Nov 12, 2019 at 04:40:20PM +0100, Vincent Guittot wrote:
> On Tue, 12 Nov 2019 at 16:06, Mel Gorman <mgorman@techsingularity.net> wrote:
> >
> > On Tue, Nov 12, 2019 at 11:58:30AM +0100, Vincent Guittot wrote:
> > > > This roughly matches what I've seen. The interesting part to me for
> > > > netperf is the next section of the report that reports the locality of
> > > > numa hints. With netperf on a 2-socket machine, it's generally around
> > > > 50% as the client/server are pulled apart. Because netperf is not
> > > > heavily memory bound, it doesn't have much impact on the overall
> > > > performance but it's good at catching the cross-node migrations.
> > >
> > > Ok. I didn't want to make my reply too long. I have put them below for
> > > the netperf-tcp results:
> > >                                         5.3-rc2        5.3-rc2
> > >                                             tip      +rwk+fix
> > > Ops NUMA alloc hit                  60077762.00    60387907.00
> > > Ops NUMA alloc miss                        0.00           0.00
> > > Ops NUMA interleave hit                    0.00           0.00
> > > Ops NUMA alloc local                60077571.00    60387798.00
> > > Ops NUMA base-page range updates        5948.00       17223.00
> > > Ops NUMA PTE updates                    5948.00       17223.00
> > > Ops NUMA PMD updates                       0.00           0.00
> > > Ops NUMA hint faults                    4639.00       14050.00
> > > Ops NUMA hint local faults %            2073.00        6515.00
> > > Ops NUMA hint local percent               44.69          46.37
> > > Ops NUMA pages migrated                 1528.00        4306.00
> > > Ops AutoNUMA cost                         23.27          70.45
> > >
> >
> > Thanks -- it was "NUMA hint local percent" I was interested in and the
> > 46.37% local hinting faults is likely indicative of the client/server
> > being load balanced across SD_NUMA domains without NUMA Balancing being
> > aggressive enough to fix it. At least I know I am not just seriously
> > unlucky or testing magical machines!
> 
> I agree that the collaboration between load balancing at the SD_NUMA
> level and NUMA balancing should be improved.
> 
> It's also interesting to notice that the patchset doesn't seem to do
> worse than the baseline: 46.37% vs 44.69%
> 

Yes, I should have highlighted that. The series appears to improve a
number of areas while being performance neutral with respect to SD_NUMA.
If this turns out to be wrong in some case, it should be semi-obvious even
if the locality looks ok. It'll be a headline regression with increased
NUMA pte scanning and increased frequency of migrations indicating that
NUMA balancing is taken excessive corrective action. I'll know it when
I see it :P

-- 
Mel Gorman
SUSE Labs


* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-30 16:24   ` Mel Gorman
  2019-10-30 16:35     ` Vincent Guittot
@ 2019-11-18 13:15     ` Ingo Molnar
  1 sibling, 0 replies; 89+ messages in thread
From: Ingo Molnar @ 2019-11-18 13:15 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Vincent Guittot, linux-kernel, mingo, peterz, pauld,
	valentin.schneider, srikar, quentin.perret, dietmar.eggemann,
	Morten.Rasmussen, hdanton, parth, riel


* Mel Gorman <mgorman@techsingularity.net> wrote:

> On Mon, Oct 21, 2019 at 09:50:38AM +0200, Ingo Molnar wrote:
> > > <SNIP>
> > 
> > Thanks, that's an excellent series!
> > 
> 
> Agreed despite the level of whining and complaining I made during the
> review.

I saw no whining and complaining whatsoever, and thanks for the feedback!
:-)

> 
> > I've queued it up in sched/core with a handful of readability edits to 
> > comments and changelogs.
> > 
> > There are some upstreaming caveats though, I expect this series to be a 
> > performance regression magnet:
> > 
> >  - load_balance() and wake-up changes invariably are such: some workloads 
> >    only work/scale well by accident, and if we touch the logic it might 
> >    flip over into a less advantageous scheduling pattern.
> > 
> >  - In particular the changes from balancing and waking on runnable load 
> >    to full load that includes blocking *will* shift IO-intensive 
> >    workloads that your tests don't fully capture, I believe. You also made 
> >    idle balancing more aggressive in essence - which might reduce cache 
> >    locality for some workloads.
> > 
> > A full run on Mel Gorman's magic scalability test-suite would be super 
> > useful ...
> > 
> 
> I queued this back on the 21st and it took this long for me to get back
> to it.
> 
> What I tested did not include the fix for the last patch so I cannot say
> the data is that useful. I also failed to include something that exercised
> the IO paths in a way that idles rapidly as that can catch interesting
> details (usually cpufreq related but sometimes load-balancing related).
> There was no real thinking behind this decision, I just used an old
> collection of tests to get a general feel for the series.

I have just applied Vincent's fix to find_idlest_group(), so that will 
probably modify some of the results. (Hopefully for the better.)

Will push it out later today-ish.

> Most of the results were performance-neutral and some notable gains 
> (kernel compiles were 1-6% faster depending on the -j count). Hackbench 
> saw a disproportionate gain in terms of performance but I tend to be 
> wary of hackbench as improving it is rarely a universal win. There 
> tends to be some jitter around the point where a NUMA node's worth of 
> CPUs gets overloaded. tbench (mmtests configuration network-tbench) on a 
> NUMA machine showed gains for low thread counts and high thread counts 
> but a loss near the boundary where a single node would get overloaded.
> 
> Some NAS-related workloads saw a drop in performance on NUMA machines 
> but the size class might be too small to be certain, I'd have to rerun 
> with the D class to be sure.  The biggest strange drop in performance 
> was the elapsed time to run the git test suite (mmtests configuration 
> workload-shellscripts modified to use a fresh XFS partition) took 
> 17.61% longer to execute on a UMA Skylake machine. This *might* be due 
> to the missing fix because it is mostly a single-task workload.

Thanks a lot for your testing!

> I'm not going to go through the results in detail because I think 
> another full round of testing would be required to take the fix into 
> account. I'd also prefer to wait to see if the review results in any 
> material change to the series.

I'll try to make sure it all gets addressed.

Thanks,

	Ingo


* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-10-30 15:45   ` [PATCH v4 04/11] sched/fair: rework load_balance Mel Gorman
  2019-10-30 16:16     ` Valentin Schneider
  2019-10-31  9:09     ` Vincent Guittot
@ 2019-11-18 13:50     ` Ingo Molnar
  2019-11-18 13:57       ` Vincent Guittot
  2019-11-18 14:51       ` Mel Gorman
  2 siblings, 2 replies; 89+ messages in thread
From: Ingo Molnar @ 2019-11-18 13:50 UTC (permalink / raw)
  To: Mel Gorman
  Cc: Vincent Guittot, linux-kernel, mingo, peterz, pauld,
	valentin.schneider, srikar, quentin.perret, dietmar.eggemann,
	Morten.Rasmussen, hdanton, parth, riel


* Mel Gorman <mgorman@techsingularity.net> wrote:

> s/groupe_type/group_type/
> 
> >  enum group_type {
> > -	group_other = 0,
> > +	group_has_spare = 0,
> > +	group_fully_busy,
> >  	group_misfit_task,
> > +	group_asym_packing,
> >  	group_imbalanced,
> > -	group_overloaded,
> > +	group_overloaded
> > +};
> > +
> 
> While not your fault, it would be nice to comment on the meaning of each
> group type. From a glance, it's not obvious to me why a misfit task should
> be a higher priority to move than a task from a fully_busy (but not
> overloaded) group, given that moving the misfit task might make a group
> overloaded.

This part of your feedback should now be addressed in the scheduler tree 
via:

  a9723389cc75: sched/fair: Add comments for group_type and balancing at SD_NUMA level

> > +enum migration_type {
> > +	migrate_load = 0,
> > +	migrate_util,
> > +	migrate_task,
> > +	migrate_misfit
> >  };
> >  
> 
> Could do with a comment explaining what migration_type is for because
> the name is unhelpful. I *think* at a glance it's related to what sort
> of imbalance is being addressed which is partially addressed by the
> group_type. That understanding may change as I continue reading the series
> but now I have to figure it out which means it'll be forgotten again in
> 6 months.

Agreed. Vincent, is any patch brewing here, or should I take a stab?

Thanks,

	Ingo
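For what it's worth, until such a patch lands, the intent can be read off the enum itself: migration_type records which quantity load_balance() will try to move once the busiest group has been classified. A commented sketch of what such a clarifying patch might say (wording is hypothetical, not the eventual kernel comment):

```c
#include <assert.h>

/*
 * Hypothetical documentation sketch, not the actual kernel comment:
 * the kind of quantity load_balance() decides to migrate.
 */
enum migration_type {
	migrate_load = 0,	/* move a given amount of load */
	migrate_util,		/* move a given amount of utilization */
	migrate_task,		/* move a number of whole tasks */
	migrate_misfit		/* move the one task that doesn't fit its CPU */
};
```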


* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-11-18 13:50     ` Ingo Molnar
@ 2019-11-18 13:57       ` Vincent Guittot
  2019-11-18 14:51       ` Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: Vincent Guittot @ 2019-11-18 13:57 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Mel Gorman, linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Mon, 18 Nov 2019 at 14:50, Ingo Molnar <mingo@kernel.org> wrote:
>
>
> * Mel Gorman <mgorman@techsingularity.net> wrote:
>
> > s/groupe_type/group_type/
> >
> > >  enum group_type {
> > > -   group_other = 0,
> > > +   group_has_spare = 0,
> > > +   group_fully_busy,
> > >     group_misfit_task,
> > > +   group_asym_packing,
> > >     group_imbalanced,
> > > -   group_overloaded,
> > > +   group_overloaded
> > > +};
> > > +
> >
> > While not your fault, it would be nice to comment on the meaning of each
> > group type. From a glance, it's not obvious to me why a misfit task should
> > be a higher priority to move than a task from a fully_busy (but not
> > overloaded) group, given that moving the misfit task might make a group
> > overloaded.
>
> This part of your feedback should now be addressed in the scheduler tree
> via:
>
>   a9723389cc75: sched/fair: Add comments for group_type and balancing at SD_NUMA level
>
> > > +enum migration_type {
> > > +   migrate_load = 0,
> > > +   migrate_util,
> > > +   migrate_task,
> > > +   migrate_misfit
> > >  };
> > >
> >
> > Could do with a comment explaining what migration_type is for because
> > the name is unhelpful. I *think* at a glance it's related to what sort
> > of imbalance is being addressed which is partially addressed by the
> > group_type. That understanding may change as I continue reading the series
> > but now I have to figure it out which means it'll be forgotten again in
> > 6 months.
>
> Agreed. Vincent, is any patch brewing here, or should I take a stab?
>

No, I don't have a patch under preparation for this,
so you can go ahead.

Thanks,
Vincent

> Thanks,
>
>         Ingo


* Re: [PATCH v4 04/11] sched/fair: rework load_balance
  2019-11-18 13:50     ` Ingo Molnar
  2019-11-18 13:57       ` Vincent Guittot
@ 2019-11-18 14:51       ` Mel Gorman
  1 sibling, 0 replies; 89+ messages in thread
From: Mel Gorman @ 2019-11-18 14:51 UTC (permalink / raw)
  To: Ingo Molnar
  Cc: Vincent Guittot, linux-kernel, mingo, peterz, pauld,
	valentin.schneider, srikar, quentin.perret, dietmar.eggemann,
	Morten.Rasmussen, hdanton, parth, riel

On Mon, Nov 18, 2019 at 02:50:17PM +0100, Ingo Molnar wrote:
> 
> * Mel Gorman <mgorman@techsingularity.net> wrote:
> 
> > s/groupe_type/group_type/
> > 
> > >  enum group_type {
> > > -	group_other = 0,
> > > +	group_has_spare = 0,
> > > +	group_fully_busy,
> > >  	group_misfit_task,
> > > +	group_asym_packing,
> > >  	group_imbalanced,
> > > -	group_overloaded,
> > > +	group_overloaded
> > > +};
> > > +
> > 
> > While not your fault, it would be nice to comment on the meaning of each
> > group type. From a glance, it's not obvious to me why a misfit task should
> > be a higher priority to move than a task from a fully_busy (but not
> > overloaded) group, given that moving the misfit task might make a group
> > overloaded.
> 
> This part of your feedback should now be addressed in the scheduler tree 
> via:
> 
>   a9723389cc75: sched/fair: Add comments for group_type and balancing at SD_NUMA level
> 

While I can't see that commit ID yet, the discussed version of the patch
was fine by me.

-- 
Mel Gorman
SUSE Labs


* [tip: sched/core] sched/fair: Fix rework of find_idlest_group()
  2019-10-22 16:46   ` [PATCH] sched/fair: fix rework of find_idlest_group() Vincent Guittot
  2019-10-23  7:50     ` Chen, Rong A
  2019-10-30 16:07     ` Mel Gorman
@ 2019-11-18 17:42     ` tip-bot2 for Vincent Guittot
  2019-11-22 14:37     ` [PATCH] sched/fair: fix " Valentin Schneider
  3 siblings, 0 replies; 89+ messages in thread
From: tip-bot2 for Vincent Guittot @ 2019-11-18 17:42 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: kernel test robot, Vincent Guittot, Linus Torvalds,
	Morten.Rasmussen, Peter Zijlstra, Thomas Gleixner,
	dietmar.eggemann, hdanton, parth, pauld, quentin.perret, riel,
	srikar, valentin.schneider, Ingo Molnar, Borislav Petkov,
	linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     3318544b721d3072fdd1f85ee0f1f214c0b211ee
Gitweb:        https://git.kernel.org/tip/3318544b721d3072fdd1f85ee0f1f214c0b211ee
Author:        Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate:    Tue, 22 Oct 2019 18:46:38 +02:00
Committer:     Ingo Molnar <mingo@kernel.org>
CommitterDate: Mon, 18 Nov 2019 14:11:56 +01:00

sched/fair: Fix rework of find_idlest_group()

The task, for which the scheduler looks for the idlest group of CPUs, must
be discounted from all statistics in order to get a fair comparison
between groups. This includes utilization, load, nr_running and idle_cpus.

Such unfairness can be easily highlighted with the unixbench execl 1 task.
This test continuously calls execve() and the scheduler looks for the idlest
group/CPU on which it should place the task. Because the task runs on the
local group/CPU, the latter seems already busy even if there is nothing
else running on it. As a result, the scheduler will always select another
group/CPU than the local one.

This recovers most of the performance regression on my system from the
recent load-balancer rewrite.

[ mingo: Minor cleanups. ]

Reported-by: kernel test robot <rong.a.chen@intel.com>
Tested-by: kernel test robot <rong.a.chen@intel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Morten.Rasmussen@arm.com
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: dietmar.eggemann@arm.com
Cc: hdanton@sina.com
Cc: parth@linux.ibm.com
Cc: pauld@redhat.com
Cc: quentin.perret@arm.com
Cc: riel@surriel.com
Cc: srikar@linux.vnet.ibm.com
Cc: valentin.schneider@arm.com
Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
Link: https://lkml.kernel.org/r/1571762798-25900-1-git-send-email-vincent.guittot@linaro.org
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/fair.c | 91 ++++++++++++++++++++++++++++++++++++++++----
 1 file changed, 84 insertions(+), 7 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 81eba55..2fc08e7 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -5391,6 +5391,37 @@ static unsigned long cpu_load(struct rq *rq)
 	return cfs_rq_load_avg(&rq->cfs);
 }
 
+/*
+ * cpu_load_without - compute CPU load without any contributions from *p
+ * @cpu: the CPU which load is requested
+ * @p: the task which load should be discounted
+ *
+ * The load of a CPU is defined by the load of tasks currently enqueued on that
+ * CPU as well as tasks which are currently sleeping after an execution on that
+ * CPU.
+ *
+ * This method returns the load of the specified CPU by discounting the load of
+ * the specified task, whenever the task is currently contributing to the CPU
+ * load.
+ */
+static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p)
+{
+	struct cfs_rq *cfs_rq;
+	unsigned int load;
+
+	/* Task has no contribution or is new */
+	if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
+		return cpu_load(rq);
+
+	cfs_rq = &rq->cfs;
+	load = READ_ONCE(cfs_rq->avg.load_avg);
+
+	/* Discount task's load from CPU's load */
+	lsub_positive(&load, task_h_load(p));
+
+	return load;
+}
+
 static unsigned long capacity_of(int cpu)
 {
 	return cpu_rq(cpu)->cpu_capacity;
@@ -8142,10 +8173,55 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
 struct sg_lb_stats;
 
 /*
+ * task_running_on_cpu - return 1 if @p is running on @cpu.
+ */
+
+static unsigned int task_running_on_cpu(int cpu, struct task_struct *p)
+{
+	/* Task has no contribution or is new */
+	if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
+		return 0;
+
+	if (task_on_rq_queued(p))
+		return 1;
+
+	return 0;
+}
+
+/**
+ * idle_cpu_without - would a given CPU be idle without p ?
+ * @cpu: the processor on which idleness is tested.
+ * @p: task which should be ignored.
+ *
+ * Return: 1 if the CPU would be idle. 0 otherwise.
+ */
+static int idle_cpu_without(int cpu, struct task_struct *p)
+{
+	struct rq *rq = cpu_rq(cpu);
+
+	if (rq->curr != rq->idle && rq->curr != p)
+		return 0;
+
+	/*
+	 * rq->nr_running can't be used but an updated version without the
+	 * impact of p on cpu must be used instead. The updated nr_running
+	 * must be computed and tested before calling idle_cpu_without().
+	 */
+
+#ifdef CONFIG_SMP
+	if (!llist_empty(&rq->wake_list))
+		return 0;
+#endif
+
+	return 1;
+}
+
+/*
  * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
- * @denv: The ched_domain level to look for idlest group.
+ * @sd: The sched_domain level to look for idlest group.
  * @group: sched_group whose statistics are to be updated.
  * @sgs: variable to hold the statistics for this group.
+ * @p: The task for which we look for the idlest group/CPU.
  */
 static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 					  struct sched_group *group,
@@ -8158,21 +8234,22 @@ static inline void update_sg_wakeup_stats(struct sched_domain *sd,
 
 	for_each_cpu(i, sched_group_span(group)) {
 		struct rq *rq = cpu_rq(i);
+		unsigned int local;
 
-		sgs->group_load += cpu_load(rq);
+		sgs->group_load += cpu_load_without(rq, p);
 		sgs->group_util += cpu_util_without(i, p);
-		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
+		local = task_running_on_cpu(i, p);
+		sgs->sum_h_nr_running += rq->cfs.h_nr_running - local;
 
-		nr_running = rq->nr_running;
+		nr_running = rq->nr_running - local;
 		sgs->sum_nr_running += nr_running;
 
 		/*
-		 * No need to call idle_cpu() if nr_running is not 0
+		 * No need to call idle_cpu_without() if nr_running is not 0
 		 */
-		if (!nr_running && idle_cpu(i))
+		if (!nr_running && idle_cpu_without(i, p))
 			sgs->idle_cpus++;
 
-
 	}
 
 	/* Check if task fits in the group */

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-10-18 13:26 ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Vincent Guittot
  2019-10-21  9:12   ` [tip: sched/core] sched/fair: Rework find_idlest_group() tip-bot2 for Vincent Guittot
  2019-10-22 16:46   ` [PATCH] sched/fair: fix rework of find_idlest_group() Vincent Guittot
@ 2019-11-20 11:58   ` Qais Yousef
  2019-11-20 13:21     ` Vincent Guittot
  2019-11-22 14:34   ` Valentin Schneider
  3 siblings, 1 reply; 89+ messages in thread
From: Qais Yousef @ 2019-11-20 11:58 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, mingo, peterz, pauld, valentin.schneider, srikar,
	quentin.perret, dietmar.eggemann, Morten.Rasmussen, hdanton,
	parth, riel

Hi Vincent

On 10/18/19 15:26, Vincent Guittot wrote:
> The slow wake up path computes per sched_group statistics to select the
> idlest group, which is quite similar to what load_balance() is doing
> for selecting busiest group. Rework find_idlest_group() to classify the
> sched_group and select the idlest one following the same steps as
> load_balance().
> 
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---

The LTP suite has caught a regression in the perf_event_open02 test on
linux-next, and I bisected it to this patch.

That is, with the next-20191119 tag checked out, reverting this patch on top
makes the test pass; without the revert the test fails.

I think this patch disturbs this part of the test:

	https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/perf_event_open/perf_event_open02.c#L209

When I revert this patch, count_hardware_counters() returns a non-zero value.
But with it applied it returns 0, which indicates that the loop terminates
earlier than the test expects.

I'm failing to see the connection yet, but since I've spent enough time
bisecting it I thought I'd throw this out before I continue to bottom it out,
in the hope it rings a bell for you or someone else.

The problem was consistently reproducible on Juno-r2.

LTP was compiled from 20190930 tag using

	./configure --host=aarch64-linux-gnu --prefix=~/arm64-ltp/
	make && make install



*** Output of the test when it fails ***

	# ./perf_event_open02 -v
	at iteration:0 value:254410384 time_enabled:195570320 time_running:156044100
	perf_event_open02    0  TINFO  :  overall task clock: 166935520
	perf_event_open02    0  TINFO  :  hw sum: 1200812256, task clock sum: 667703360
	hw counters: 300202518 300202881 300203246 300203611
	task clock counters: 166927400 166926780 166925660 166923520
	perf_event_open02    0  TINFO  :  ratio: 3.999768
	perf_event_open02    0  TINFO  :  nhw: 0.000100     /* I added this extra line for debug */
	perf_event_open02    1  TFAIL  :  perf_event_open02.c:370: test failed (ratio was greater than )



*** Output of the test when it passes (this patch reverted) ***

	# ./perf_event_open02 -v
	at iteration:0 value:300271482 time_enabled:177756080 time_running:177756080
	at iteration:1 value:300252655 time_enabled:166939100 time_running:166939100
	at iteration:2 value:300252877 time_enabled:166924920 time_running:166924920
	at iteration:3 value:300242545 time_enabled:166909620 time_running:166909620
	at iteration:4 value:300250779 time_enabled:166918540 time_running:166918540
	at iteration:5 value:300250660 time_enabled:166922180 time_running:166922180
	at iteration:6 value:258369655 time_enabled:167388920 time_running:143996600
	perf_event_open02    0  TINFO  :  overall task clock: 167540640
	perf_event_open02    0  TINFO  :  hw sum: 1801473873, task clock sum: 1005046160
	hw counters: 177971955 185132938 185488818 185488199 185480943 185477118 179657001 172499668 172137672 172139561
	task clock counters: 99299900 103293440 103503840 103502040 103499020 103496160 100224320 96227620 95999400 96000420
	perf_event_open02    0  TINFO  :  ratio: 5.998820
	perf_event_open02    0  TINFO  :  nhw: 6.000100     /* I added this extra line for debug */
	perf_event_open02    1  TPASS  :  test passed
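
For reference, the ratio the test prints is just the task clock sum divided by the overall task clock, i.e. roughly how many hardware counters were live on the PMU at a time; a quick check using the sums from the two runs above:

```python
# Recompute the printed "ratio" from the sums in the two runs above:
# ratio = task clock sum / overall task clock, which approximates the
# number of counters that were actually scheduled simultaneously.
failing = 667703360 / 166935520    # failing run: 4 counters
passing = 1005046160 / 167540640   # passing run: ~6 of 10 counters at a time

print(f"{failing:.6f}")  # 3.999768, as printed by the failing run
print(f"{passing:.6f}")  # 5.998820, as printed by the passing run
```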

Thanks

--
Qais Yousef


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-20 11:58   ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Qais Yousef
@ 2019-11-20 13:21     ` Vincent Guittot
  2019-11-20 16:53       ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-11-20 13:21 UTC (permalink / raw)
  To: Qais Yousef
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

Hi Qais,

On Wed, 20 Nov 2019 at 12:58, Qais Yousef <qais.yousef@arm.com> wrote:
>
> Hi Vincent
>
> On 10/18/19 15:26, Vincent Guittot wrote:
> > The slow wake up path computes per sched_group statistics to select the
> > idlest group, which is quite similar to what load_balance() is doing
> > for selecting busiest group. Rework find_idlest_group() to classify the
> > sched_group and select the idlest one following the same steps as
> > load_balance().
> >
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
>
> LTP test has caught a regression in perf_event_open02 test on linux-next and I
> bisected it to this patch.
>
> That is checking out next-20191119 tag and reverting this patch on top the test
> passes. Without the revert the test fails.
>
> I think this patch disturbs this part of the test:
>
>         https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/perf_event_open/perf_event_open02.c#L209
>
> When I revert this patch count_hardware_counters() returns a non zero value.
> But with it applied it returns 0 which indicates that the condition terminates
> earlier than what the test expects.

Thanks for the report and for starting to analyse it.

>
> I'm failing to see the connection yet, but since I spent enough time bisecting
> it I thought I'll throw this out before I continue to bottom it out in hope it
> rings a bell for you or someone else.

I will try to reproduce the problem and understand why it's failing,
because I don't have any clue about the relation between the two for now.

>
> The problem was consistently reproducible on Juno-r2.
>
> LTP was compiled from 20190930 tag using
>
>         ./configure --host=aarch64-linux-gnu --prefix=~/arm64-ltp/
>         make && make install
>
>
>
> *** Output of the test when it fails ***
>
>         # ./perf_event_open02 -v
>         at iteration:0 value:254410384 time_enabled:195570320 time_running:156044100
>         perf_event_open02    0  TINFO  :  overall task clock: 166935520
>         perf_event_open02    0  TINFO  :  hw sum: 1200812256, task clock sum: 667703360
>         hw counters: 300202518 300202881 300203246 300203611
>         task clock counters: 166927400 166926780 166925660 166923520
>         perf_event_open02    0  TINFO  :  ratio: 3.999768
>         perf_event_open02    0  TINFO  :  nhw: 0.000100     /* I added this extra line for debug */
>         perf_event_open02    1  TFAIL  :  perf_event_open02.c:370: test failed (ratio was greater than )
>
>
>
> *** Output of the test when it passes (this patch reverted) ***
>
>         # ./perf_event_open02 -v
>         at iteration:0 value:300271482 time_enabled:177756080 time_running:177756080
>         at iteration:1 value:300252655 time_enabled:166939100 time_running:166939100
>         at iteration:2 value:300252877 time_enabled:166924920 time_running:166924920
>         at iteration:3 value:300242545 time_enabled:166909620 time_running:166909620
>         at iteration:4 value:300250779 time_enabled:166918540 time_running:166918540
>         at iteration:5 value:300250660 time_enabled:166922180 time_running:166922180
>         at iteration:6 value:258369655 time_enabled:167388920 time_running:143996600
>         perf_event_open02    0  TINFO  :  overall task clock: 167540640
>         perf_event_open02    0  TINFO  :  hw sum: 1801473873, task clock sum: 1005046160
>         hw counters: 177971955 185132938 185488818 185488199 185480943 185477118 179657001 172499668 172137672 172139561
>         task clock counters: 99299900 103293440 103503840 103502040 103499020 103496160 100224320 96227620 95999400 96000420
>         perf_event_open02    0  TINFO  :  ratio: 5.998820
>         perf_event_open02    0  TINFO  :  nhw: 6.000100     /* I added this extra line for debug */
>         perf_event_open02    1  TPASS  :  test passed
>
> Thanks
>
> --
> Qais Yousef


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-20 13:21     ` Vincent Guittot
@ 2019-11-20 16:53       ` Vincent Guittot
  2019-11-20 17:34         ` Qais Yousef
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-11-20 16:53 UTC (permalink / raw)
  To: Qais Yousef
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
<vincent.guittot@linaro.org> wrote:
>
> Hi Qais,
>
> On Wed, 20 Nov 2019 at 12:58, Qais Yousef <qais.yousef@arm.com> wrote:
> >
> > Hi Vincent
> >
> > On 10/18/19 15:26, Vincent Guittot wrote:
> > > The slow wake up path computes per sched_group statistics to select the
> > > idlest group, which is quite similar to what load_balance() is doing
> > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > sched_group and select the idlest one following the same steps as
> > > load_balance().
> > >
> > > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > ---
> >
> > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > bisected it to this patch.
> >
> > That is checking out next-20191119 tag and reverting this patch on top the test
> > passes. Without the revert the test fails.

I haven't tried linux-next yet, but the LTP test passes with
tip/sched/core, which includes this patch, on hikey960, which is arm64
too.

Have you tried tip/sched/core on your Juno? That could help to
understand whether it's Juno-specific or whether this patch interacts
with another branch merged in linux-next.

Thanks
Vincent

> >
> > I think this patch disturbs this part of the test:
> >
> >         https://github.com/linux-test-project/ltp/blob/master/testcases/kernel/syscalls/perf_event_open/perf_event_open02.c#L209
> >
> > When I revert this patch count_hardware_counters() returns a non zero value.
> > But with it applied it returns 0 which indicates that the condition terminates
> > earlier than what the test expects.
>
> Thanks for the report and starting analysing it
>
> >
> > I'm failing to see the connection yet, but since I spent enough time bisecting
> > it I thought I'll throw this out before I continue to bottom it out in hope it
> > rings a bell for you or someone else.
>
> I will try to reproduce the problem and understand why it's failing
> because i don't have any clue of the relation between both for now
>
> >
> > The problem was consistently reproducible on Juno-r2.
> >
> > LTP was compiled from 20190930 tag using
> >
> >         ./configure --host=aarch64-linux-gnu --prefix=~/arm64-ltp/
> >         make && make install
> >
> >
> >
> > *** Output of the test when it fails ***
> >
> >         # ./perf_event_open02 -v
> >         at iteration:0 value:254410384 time_enabled:195570320 time_running:156044100
> >         perf_event_open02    0  TINFO  :  overall task clock: 166935520
> >         perf_event_open02    0  TINFO  :  hw sum: 1200812256, task clock sum: 667703360
> >         hw counters: 300202518 300202881 300203246 300203611
> >         task clock counters: 166927400 166926780 166925660 166923520
> >         perf_event_open02    0  TINFO  :  ratio: 3.999768
> >         perf_event_open02    0  TINFO  :  nhw: 0.000100     /* I added this extra line for debug */
> >         perf_event_open02    1  TFAIL  :  perf_event_open02.c:370: test failed (ratio was greater than )
> >
> >
> >
> > *** Output of the test when it passes (this patch reverted) ***
> >
> >         # ./perf_event_open02 -v
> >         at iteration:0 value:300271482 time_enabled:177756080 time_running:177756080
> >         at iteration:1 value:300252655 time_enabled:166939100 time_running:166939100
> >         at iteration:2 value:300252877 time_enabled:166924920 time_running:166924920
> >         at iteration:3 value:300242545 time_enabled:166909620 time_running:166909620
> >         at iteration:4 value:300250779 time_enabled:166918540 time_running:166918540
> >         at iteration:5 value:300250660 time_enabled:166922180 time_running:166922180
> >         at iteration:6 value:258369655 time_enabled:167388920 time_running:143996600
> >         perf_event_open02    0  TINFO  :  overall task clock: 167540640
> >         perf_event_open02    0  TINFO  :  hw sum: 1801473873, task clock sum: 1005046160
> >         hw counters: 177971955 185132938 185488818 185488199 185480943 185477118 179657001 172499668 172137672 172139561
> >         task clock counters: 99299900 103293440 103503840 103502040 103499020 103496160 100224320 96227620 95999400 96000420
> >         perf_event_open02    0  TINFO  :  ratio: 5.998820
> >         perf_event_open02    0  TINFO  :  nhw: 6.000100     /* I added this extra line for debug */
> >         perf_event_open02    1  TPASS  :  test passed
> >
> > Thanks
> >
> > --
> > Qais Yousef


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-20 16:53       ` Vincent Guittot
@ 2019-11-20 17:34         ` Qais Yousef
  2019-11-20 17:43           ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Qais Yousef @ 2019-11-20 17:34 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On 11/20/19 17:53, Vincent Guittot wrote:
> On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> <vincent.guittot@linaro.org> wrote:
> >
> > Hi Qais,
> >
> > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <qais.yousef@arm.com> wrote:
> > >
> > > Hi Vincent
> > >
> > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > The slow wake up path computes per sched_group statistics to select the
> > > > idlest group, which is quite similar to what load_balance() is doing
> > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > sched_group and select the idlest one following the same steps as
> > > > load_balance().
> > > >
> > > > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > > ---
> > >
> > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > bisected it to this patch.
> > >
> > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > passes. Without the revert the test fails.
> 
> I haven't tried linux-next yet but LTP test is passed with
> tip/sched/core, which includes this patch, on hikey960 which is arm64
> too.
> 
> Have you tried tip/sched/core on your juno ? this could help to
> understand if it's only for juno or if this patch interact with
> another branch merged in linux next

Okay will give it a go. But out of curiosity, what is the output of your run?

While bisecting on linux-next I noticed that at some point the test was
passing but all the read values were 0. At some point I started seeing
non-zero values.

--
Qais Yousef


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-20 17:34         ` Qais Yousef
@ 2019-11-20 17:43           ` Vincent Guittot
  2019-11-20 18:10             ` Qais Yousef
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-11-20 17:43 UTC (permalink / raw)
  To: Qais Yousef
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Wed, 20 Nov 2019 at 18:34, Qais Yousef <qais.yousef@arm.com> wrote:
>
> On 11/20/19 17:53, Vincent Guittot wrote:
> > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > <vincent.guittot@linaro.org> wrote:
> > >
> > > Hi Qais,
> > >
> > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <qais.yousef@arm.com> wrote:
> > > >
> > > > Hi Vincent
> > > >
> > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > The slow wake up path computes per sched_group statistics to select the
> > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > sched_group and select the idlest one following the same steps as
> > > > > load_balance().
> > > > >
> > > > > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > > > ---
> > > >
> > > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > > bisected it to this patch.
> > > >
> > > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > > passes. Without the revert the test fails.
> >
> > I haven't tried linux-next yet but LTP test is passed with
> > tip/sched/core, which includes this patch, on hikey960 which is arm64
> > too.
> >
> > Have you tried tip/sched/core on your juno ? this could help to
> > understand if it's only for juno or if this patch interact with
> > another branch merged in linux next
>
> Okay will give it a go. But out of curiosity, what is the output of your run?
>
> While bisecting on linux-next I noticed that at some point the test was
> passing but all the read values were 0. At some point I started seeing
> none-zero values.

for tip/sched/core
linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
sudo ./perf_event_open02
perf_event_open02    0  TINFO  :  overall task clock: 63724479
perf_event_open02    0  TINFO  :  hw sum: 1800900992, task clock sum: 382170311
perf_event_open02    0  TINFO  :  ratio: 5.997229
perf_event_open02    1  TPASS  :  test passed

for next-20191119
~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
at iteration:0 value:0 time_enabled:69795312 time_running:0
perf_event_open02    0  TINFO  :  overall task clock: 63582292
perf_event_open02    0  TINFO  :  hw sum: 0, task clock sum: 0
hw counters: 0 0 0 0
task clock counters: 0 0 0 0
perf_event_open02    0  TINFO  :  ratio: 0.000000
perf_event_open02    1  TPASS  :  test passed

>
> --
> Qais Yousef


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-20 17:43           ` Vincent Guittot
@ 2019-11-20 18:10             ` Qais Yousef
  2019-11-20 18:20               ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Qais Yousef @ 2019-11-20 18:10 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On 11/20/19 18:43, Vincent Guittot wrote:
> On Wed, 20 Nov 2019 at 18:34, Qais Yousef <qais.yousef@arm.com> wrote:
> >
> > On 11/20/19 17:53, Vincent Guittot wrote:
> > > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > > <vincent.guittot@linaro.org> wrote:
> > > >
> > > > Hi Qais,
> > > >
> > > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <qais.yousef@arm.com> wrote:
> > > > >
> > > > > Hi Vincent
> > > > >
> > > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > > The slow wake up path computes per sched_group statistics to select the
> > > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > > sched_group and select the idlest one following the same steps as
> > > > > > load_balance().
> > > > > >
> > > > > > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > > > > ---
> > > > >
> > > > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > > > bisected it to this patch.
> > > > >
> > > > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > > > passes. Without the revert the test fails.
> > >
> > > I haven't tried linux-next yet but LTP test is passed with
> > > tip/sched/core, which includes this patch, on hikey960 which is arm64
> > > too.
> > >
> > > Have you tried tip/sched/core on your juno ? this could help to
> > > understand if it's only for juno or if this patch interact with
> > > another branch merged in linux next
> >
> > Okay will give it a go. But out of curiosity, what is the output of your run?
> >
> > While bisecting on linux-next I noticed that at some point the test was
> > passing but all the read values were 0. At some point I started seeing
> > none-zero values.
> 
> for tip/sched/core
> linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> sudo ./perf_event_open02
> perf_event_open02    0  TINFO  :  overall task clock: 63724479
> perf_event_open02    0  TINFO  :  hw sum: 1800900992, task clock sum: 382170311
> perf_event_open02    0  TINFO  :  ratio: 5.997229
> perf_event_open02    1  TPASS  :  test passed
> 
> for next-2019119
> ~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
> at iteration:0 value:0 time_enabled:69795312 time_running:0
> perf_event_open02    0  TINFO  :  overall task clock: 63582292
> perf_event_open02    0  TINFO  :  hw sum: 0, task clock sum: 0
> hw counters: 0 0 0 0
> task clock counters: 0 0 0 0
> perf_event_open02    0  TINFO  :  ratio: 0.000000
> perf_event_open02    1  TPASS  :  test passed

Okay, that is weird. But ratio, hw sum, and task clock sum are all 0 in your
next-20191119 run. I'm not sure why the counters return 0 sometimes - is it
dependent on some option, or is it a bug somewhere?

I just did another run and it failed for me (building with defconfig)

# uname -a
Linux buildroot 5.4.0-rc8-next-20191119 #72 SMP PREEMPT Wed Nov 20 17:57:48 GMT 2019 aarch64 GNU/Linux

# ./perf_event_open02 -v
at iteration:0 value:260700250 time_enabled:172739760 time_running:144956600
perf_event_open02    0  TINFO  :  overall task clock: 166915220
perf_event_open02    0  TINFO  :  hw sum: 1200718268, task clock sum: 667621320
hw counters: 300179051 300179395 300179739 300180083
task clock counters: 166906620 166906200 166905160 166903340
perf_event_open02    0  TINFO  :  ratio: 3.999763
perf_event_open02    0  TINFO  :  nhw: 0.000100
perf_event_open02    1  TFAIL  :  perf_event_open02.c:370: test failed (ratio was greater than )

It is a funny one for sure. I haven't tried tip/sched/core yet.
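
(When time_running < time_enabled the counter was multiplexed rather than dead, and the raw value can be scaled by time_enabled/time_running — the usual perf_event scaling rule — to estimate the full-period count. A quick sketch taking the figures from the iteration:0 line above:)

```python
# Standard perf_event scaling: when a counter only ran for part of the
# time it was enabled, estimate the full-period count by scaling the
# raw value.  Figures from the failing run's "at iteration:0" line.
value, enabled, running = 260700250, 172739760, 144956600

scaled = value * enabled / running
print(round(scaled))  # roughly 3.1e8, so the counter itself was alive
```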

Thanks

--
Qais Yousef


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-20 18:10             ` Qais Yousef
@ 2019-11-20 18:20               ` Vincent Guittot
  2019-11-20 18:27                 ` Qais Yousef
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-11-20 18:20 UTC (permalink / raw)
  To: Qais Yousef
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Wed, 20 Nov 2019 at 19:10, Qais Yousef <qais.yousef@arm.com> wrote:
>
> On 11/20/19 18:43, Vincent Guittot wrote:
> > On Wed, 20 Nov 2019 at 18:34, Qais Yousef <qais.yousef@arm.com> wrote:
> > >
> > > On 11/20/19 17:53, Vincent Guittot wrote:
> > > > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > > > <vincent.guittot@linaro.org> wrote:
> > > > >
> > > > > Hi Qais,
> > > > >
> > > > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <qais.yousef@arm.com> wrote:
> > > > > >
> > > > > > Hi Vincent
> > > > > >
> > > > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > > > The slow wake up path computes per sched_group statistics to select the
> > > > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > > > sched_group and select the idlest one following the same steps as
> > > > > > > load_balance().
> > > > > > >
> > > > > > > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > > > > > ---
> > > > > >
> > > > > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > > > > bisected it to this patch.
> > > > > >
> > > > > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > > > > passes. Without the revert the test fails.
> > > >
> > > > I haven't tried linux-next yet but LTP test is passed with
> > > > tip/sched/core, which includes this patch, on hikey960 which is arm64
> > > > too.
> > > >
> > > > Have you tried tip/sched/core on your juno ? this could help to
> > > > understand if it's only for juno or if this patch interact with
> > > > another branch merged in linux next
> > >
> > > Okay will give it a go. But out of curiosity, what is the output of your run?
> > >
> > > While bisecting on linux-next I noticed that at some point the test was
> > > passing but all the read values were 0. At some point I started seeing
> > > none-zero values.
> >
> > for tip/sched/core
> > linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> > sudo ./perf_event_open02
> > perf_event_open02    0  TINFO  :  overall task clock: 63724479
> > perf_event_open02    0  TINFO  :  hw sum: 1800900992, task clock sum: 382170311
> > perf_event_open02    0  TINFO  :  ratio: 5.997229
> > perf_event_open02    1  TPASS  :  test passed
> >
> > for next-2019119
> > ~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
> > at iteration:0 value:0 time_enabled:69795312 time_running:0
> > perf_event_open02    0  TINFO  :  overall task clock: 63582292
> > perf_event_open02    0  TINFO  :  hw sum: 0, task clock sum: 0
> > hw counters: 0 0 0 0
> > task clock counters: 0 0 0 0
> > perf_event_open02    0  TINFO  :  ratio: 0.000000
> > perf_event_open02    1  TPASS  :  test passed
>
> Okay that is weird. But ratio, hw sum, task clock sum are all 0 in your
> next-20191119. I'm not sure why the counters return 0 sometimes - is it
> dependent on some option or a bug somewhere.
>
> I just did another run and it failed for me (building with defconfig)
>
> # uname -a
> Linux buildroot 5.4.0-rc8-next-20191119 #72 SMP PREEMPT Wed Nov 20 17:57:48 GMT 2019 aarch64 GNU/Linux
>
> # ./perf_event_open02 -v
> at iteration:0 value:260700250 time_enabled:172739760 time_running:144956600
> perf_event_open02    0  TINFO  :  overall task clock: 166915220
> perf_event_open02    0  TINFO  :  hw sum: 1200718268, task clock sum: 667621320
> hw counters: 300179051 300179395 300179739 300180083
> task clock counters: 166906620 166906200 166905160 166903340
> perf_event_open02    0  TINFO  :  ratio: 3.999763
> perf_event_open02    0  TINFO  :  nhw: 0.000100
> perf_event_open02    1  TFAIL  :  perf_event_open02.c:370: test failed (ratio was greater than )
>
> It is a funny one for sure. I haven't tried tip/sched/core yet.

I confirm that on next-20191119 the hw counters always return 0,
but on tip/sched/core, which has this patch, and on v5.4-rc7, which has
not, the hw counters are always non-zero.

On v5.4-rc7 I have got the same ratio:
linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
sudo ./perf_event_open02 -v
at iteration:0 value:300157088 time_enabled:80641145 time_running:80641145
at iteration:1 value:300100129 time_enabled:63572917 time_running:63572917
at iteration:2 value:300100885 time_enabled:63569271 time_running:63569271
at iteration:3 value:300103998 time_enabled:63573437 time_running:63573437
at iteration:4 value:300101477 time_enabled:63571875 time_running:63571875
at iteration:5 value:300100698 time_enabled:63569791 time_running:63569791
at iteration:6 value:245252526 time_enabled:63650520 time_running:52012500
perf_event_open02    0  TINFO  :  overall task clock: 63717187
perf_event_open02    0  TINFO  :  hw sum: 1800857435, task clock sum: 382156248
hw counters: 149326575 150152481 169006047 187845928 206684169
224693333 206543358 187716226 168865909 150023409
task clock counters: 31694792 31870834 35868749 39866666 43863541
47685936 43822396 39826042 35828125 31829167
perf_event_open02    0  TINFO  :  ratio: 5.997695
perf_event_open02    1  TPASS  :  test passed

Thanks

>
> Thanks
>
> --
> Qais Yousef


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-20 18:20               ` Vincent Guittot
@ 2019-11-20 18:27                 ` Qais Yousef
  2019-11-20 19:28                   ` Vincent Guittot
  0 siblings, 1 reply; 89+ messages in thread
From: Qais Yousef @ 2019-11-20 18:27 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On 11/20/19 19:20, Vincent Guittot wrote:
> On Wed, 20 Nov 2019 at 19:10, Qais Yousef <qais.yousef@arm.com> wrote:
> >
> > On 11/20/19 18:43, Vincent Guittot wrote:
> > > On Wed, 20 Nov 2019 at 18:34, Qais Yousef <qais.yousef@arm.com> wrote:
> > > >
> > > > On 11/20/19 17:53, Vincent Guittot wrote:
> > > > > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > > > > <vincent.guittot@linaro.org> wrote:
> > > > > >
> > > > > > Hi Qais,
> > > > > >
> > > > > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <qais.yousef@arm.com> wrote:
> > > > > > >
> > > > > > > Hi Vincent
> > > > > > >
> > > > > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > > > > The slow wake up path computes per sched_group statistics to select the
> > > > > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > > > > sched_group and select the idlest one following the same steps as
> > > > > > > > load_balance().
> > > > > > > >
> > > > > > > > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > > > > > > ---
> > > > > > >
> > > > > > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > > > > > bisected it to this patch.
> > > > > > >
> > > > > > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > > > > > passes. Without the revert the test fails.
> > > > >
> > > > > I haven't tried linux-next yet but LTP test is passed with
> > > > > tip/sched/core, which includes this patch, on hikey960 which is arm64
> > > > > too.
> > > > >
> > > > > Have you tried tip/sched/core on your juno ? this could help to
> > > > > understand if it's only for juno or if this patch interact with
> > > > > another branch merged in linux next
> > > >
> > > > Okay will give it a go. But out of curiosity, what is the output of your run?
> > > >
> > > > While bisecting on linux-next I noticed that at some point the test was
> > > > passing but all the read values were 0. At some point I started seeing
> > > > none-zero values.
> > >
> > > for tip/sched/core
> > > linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> > > sudo ./perf_event_open02
> > > perf_event_open02    0  TINFO  :  overall task clock: 63724479
> > > perf_event_open02    0  TINFO  :  hw sum: 1800900992, task clock sum: 382170311
> > > perf_event_open02    0  TINFO  :  ratio: 5.997229
> > > perf_event_open02    1  TPASS  :  test passed
> > >
> > > for next-2019119
> > > ~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
> > > at iteration:0 value:0 time_enabled:69795312 time_running:0
> > > perf_event_open02    0  TINFO  :  overall task clock: 63582292
> > > perf_event_open02    0  TINFO  :  hw sum: 0, task clock sum: 0
> > > hw counters: 0 0 0 0
> > > task clock counters: 0 0 0 0
> > > perf_event_open02    0  TINFO  :  ratio: 0.000000
> > > perf_event_open02    1  TPASS  :  test passed
> >
> > Okay that is weird. But ratio, hw sum, task clock sum are all 0 in your
> > next-20191119. I'm not sure why the counters return 0 sometimes - is it
> > dependent on some option or a bug somewhere.
> >
> > I just did another run and it failed for me (building with defconfig)
> >
> > # uname -a
> > Linux buildroot 5.4.0-rc8-next-20191119 #72 SMP PREEMPT Wed Nov 20 17:57:48 GMT 2019 aarch64 GNU/Linux
> >
> > # ./perf_event_open02 -v
> > at iteration:0 value:260700250 time_enabled:172739760 time_running:144956600
> > perf_event_open02    0  TINFO  :  overall task clock: 166915220
> > perf_event_open02    0  TINFO  :  hw sum: 1200718268, task clock sum: 667621320
> > hw counters: 300179051 300179395 300179739 300180083
> > task clock counters: 166906620 166906200 166905160 166903340
> > perf_event_open02    0  TINFO  :  ratio: 3.999763
> > perf_event_open02    0  TINFO  :  nhw: 0.000100
> > perf_event_open02    1  TFAIL  :  perf_event_open02.c:370: test failed (ratio was greater than )
> >
> > It is a funny one for sure. I haven't tried tip/sched/core yet.
> 
> I confirm that on next-20191119, hw counters always return 0
> but on tip/sched/core which has this patch and v5.4-rc7 which has not,
> the hw counters are always different from 0

It's the other way around for me: tip/sched/core returns 0 hw counters. I tried
enabling coresight; that had no effect. Nor did copying the .config that failed
from linux-next over to tip/sched/core. I'm not sure what the
dependency/breakage is :-/
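(For what it's worth, the "ratio" in these logs seems to be the task-clock sum
divided by the overall task clock -- recomputing it from the numbers quoted
above matches the TINFO lines. This is only an inference from the logs, not
from the LTP source:)

```python
# (clock_sum, overall_clock) pairs taken from the logs quoted in this thread.
runs = [
    (382170311, 63724479),   # tip/sched/core pass, reported ratio 5.997229
    (667621320, 166915220),  # mixed-affinity fail, reported ratio 3.999763
    (382156248, 63717187),   # v5.4-rc7 pass,       reported ratio 5.997695
]
for clock_sum, overall in runs:
    # Recompute the ratio the same way the TINFO line appears to.
    print(f"{clock_sum / overall:.6f}")
```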

--
Qais Yousef

> 
> on v5.4-rc7 i have got the same ratio :
> linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> sudo ./perf_event_open02 -v
> at iteration:0 value:300157088 time_enabled:80641145 time_running:80641145
> at iteration:1 value:300100129 time_enabled:63572917 time_running:63572917
> at iteration:2 value:300100885 time_enabled:63569271 time_running:63569271
> at iteration:3 value:300103998 time_enabled:63573437 time_running:63573437
> at iteration:4 value:300101477 time_enabled:63571875 time_running:63571875
> at iteration:5 value:300100698 time_enabled:63569791 time_running:63569791
> at iteration:6 value:245252526 time_enabled:63650520 time_running:52012500
> perf_event_open02    0  TINFO  :  overall task clock: 63717187
> perf_event_open02    0  TINFO  :  hw sum: 1800857435, task clock sum: 382156248
> hw counters: 149326575 150152481 169006047 187845928 206684169
> 224693333 206543358 187716226 168865909 150023409
> task clock counters: 31694792 31870834 35868749 39866666 43863541
> 47685936 43822396 39826042 35828125 31829167
> perf_event_open02    0  TINFO  :  ratio: 5.997695
> perf_event_open02    1  TPASS  :  test passed

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-20 18:27                 ` Qais Yousef
@ 2019-11-20 19:28                   ` Vincent Guittot
  2019-11-20 19:55                     ` Qais Yousef
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-11-20 19:28 UTC (permalink / raw)
  To: Qais Yousef
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On Wed, 20 Nov 2019 at 19:27, Qais Yousef <qais.yousef@arm.com> wrote:
>
> On 11/20/19 19:20, Vincent Guittot wrote:
> > On Wed, 20 Nov 2019 at 19:10, Qais Yousef <qais.yousef@arm.com> wrote:
> > >
> > > On 11/20/19 18:43, Vincent Guittot wrote:
> > > > On Wed, 20 Nov 2019 at 18:34, Qais Yousef <qais.yousef@arm.com> wrote:
> > > > >
> > > > > On 11/20/19 17:53, Vincent Guittot wrote:
> > > > > > On Wed, 20 Nov 2019 at 14:21, Vincent Guittot
> > > > > > <vincent.guittot@linaro.org> wrote:
> > > > > > >
> > > > > > > Hi Qais,
> > > > > > >
> > > > > > > On Wed, 20 Nov 2019 at 12:58, Qais Yousef <qais.yousef@arm.com> wrote:
> > > > > > > >
> > > > > > > > Hi Vincent
> > > > > > > >
> > > > > > > > On 10/18/19 15:26, Vincent Guittot wrote:
> > > > > > > > > The slow wake up path computes per sched_group statisics to select the
> > > > > > > > > idlest group, which is quite similar to what load_balance() is doing
> > > > > > > > > for selecting busiest group. Rework find_idlest_group() to classify the
> > > > > > > > > sched_group and select the idlest one following the same steps as
> > > > > > > > > load_balance().
> > > > > > > > >
> > > > > > > > > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > > > > > > > ---
> > > > > > > >
> > > > > > > > LTP test has caught a regression in perf_event_open02 test on linux-next and I
> > > > > > > > bisected it to this patch.
> > > > > > > >
> > > > > > > > That is checking out next-20191119 tag and reverting this patch on top the test
> > > > > > > > passes. Without the revert the test fails.
> > > > > >
> > > > > > I haven't tried linux-next yet but LTP test is passed with
> > > > > > tip/sched/core, which includes this patch, on hikey960 which is arm64
> > > > > > too.
> > > > > >
> > > > > > Have you tried tip/sched/core on your juno ? this could help to
> > > > > > understand if it's only for juno or if this patch interact with
> > > > > > another branch merged in linux next
> > > > >
> > > > > Okay will give it a go. But out of curiosity, what is the output of your run?
> > > > >
> > > > > While bisecting on linux-next I noticed that at some point the test was
> > > > > passing but all the read values were 0. At some point I started seeing
> > > > > none-zero values.
> > > >
> > > > for tip/sched/core
> > > > linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> > > > sudo ./perf_event_open02
> > > > perf_event_open02    0  TINFO  :  overall task clock: 63724479
> > > > perf_event_open02    0  TINFO  :  hw sum: 1800900992, task clock sum: 382170311
> > > > perf_event_open02    0  TINFO  :  ratio: 5.997229
> > > > perf_event_open02    1  TPASS  :  test passed
> > > >
> > > > for next-2019119
> > > > ~/ltp/testcases/kernel/syscalls/perf_event_open$ sudo ./perf_event_open02 -v
> > > > at iteration:0 value:0 time_enabled:69795312 time_running:0
> > > > perf_event_open02    0  TINFO  :  overall task clock: 63582292
> > > > perf_event_open02    0  TINFO  :  hw sum: 0, task clock sum: 0
> > > > hw counters: 0 0 0 0
> > > > task clock counters: 0 0 0 0
> > > > perf_event_open02    0  TINFO  :  ratio: 0.000000
> > > > perf_event_open02    1  TPASS  :  test passed
> > >
> > > Okay that is weird. But ratio, hw sum, task clock sum are all 0 in your
> > > next-20191119. I'm not sure why the counters return 0 sometimes - is it
> > > dependent on some option or a bug somewhere.
> > >
> > > I just did another run and it failed for me (building with defconfig)
> > >
> > > # uname -a
> > > Linux buildroot 5.4.0-rc8-next-20191119 #72 SMP PREEMPT Wed Nov 20 17:57:48 GMT 2019 aarch64 GNU/Linux
> > >
> > > # ./perf_event_open02 -v
> > > at iteration:0 value:260700250 time_enabled:172739760 time_running:144956600
> > > perf_event_open02    0  TINFO  :  overall task clock: 166915220
> > > perf_event_open02    0  TINFO  :  hw sum: 1200718268, task clock sum: 667621320
> > > hw counters: 300179051 300179395 300179739 300180083
> > > task clock counters: 166906620 166906200 166905160 166903340
> > > perf_event_open02    0  TINFO  :  ratio: 3.999763
> > > perf_event_open02    0  TINFO  :  nhw: 0.000100
> > > perf_event_open02    1  TFAIL  :  perf_event_open02.c:370: test failed (ratio was greater than )
> > >
> > > It is a funny one for sure. I haven't tried tip/sched/core yet.
> >
> > I confirm that on next-20191119, hw counters always return 0
> > but on tip/sched/core which has this patch and v5.4-rc7 which has not,
> > the hw counters are always different from 0
>
> It's the other way around for me. tip/sched/core returns 0 hw counters. I tried
> enabling coresight; that had no effect. Nor copying the .config that failed
> from linux-next to tip/sched/core. I'm not sure what's the dependency/breakage
> :-/

I ran a few more tests and I can get the hw counters to be either 0 or
non-zero. The main difference is which CPU the test runs on: the little CPUs
always return 0 and the big CPUs always return a non-zero value.

On v5.4-rc7 and tip/sched/core, cpu0-3 return 0 and the others non-zero,
but on next it's the opposite: cpu0-3 return a non-zero ratio.

Could you try to run the test with taskset to force it onto big or little ?
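For instance, something like the following (the CPU ranges are an assumption --
on hikey960 CPUs 0-3 are the little cluster and 4-7 the big one; check your
board's topology, e.g. /sys/devices/system/cpu/cpu*/cpu_capacity, before
pinning):

```shell
# CPU ranges below are board-specific assumptions (hikey960 layout shown).
taskset -c 0-3 ./perf_event_open02 -v   # little cluster only
taskset -c 4-7 ./perf_event_open02 -v   # big cluster only
```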

>
> --
> Qais Yousef
>
> >
> > on v5.4-rc7 i have got the same ratio :
> > linaro@linaro-developer:~/ltp/testcases/kernel/syscalls/perf_event_open$
> > sudo ./perf_event_open02 -v
> > at iteration:0 value:300157088 time_enabled:80641145 time_running:80641145
> > at iteration:1 value:300100129 time_enabled:63572917 time_running:63572917
> > at iteration:2 value:300100885 time_enabled:63569271 time_running:63569271
> > at iteration:3 value:300103998 time_enabled:63573437 time_running:63573437
> > at iteration:4 value:300101477 time_enabled:63571875 time_running:63571875
> > at iteration:5 value:300100698 time_enabled:63569791 time_running:63569791
> > at iteration:6 value:245252526 time_enabled:63650520 time_running:52012500
> > perf_event_open02    0  TINFO  :  overall task clock: 63717187
> > perf_event_open02    0  TINFO  :  hw sum: 1800857435, task clock sum: 382156248
> > hw counters: 149326575 150152481 169006047 187845928 206684169
> > 224693333 206543358 187716226 168865909 150023409
> > task clock counters: 31694792 31870834 35868749 39866666 43863541
> > 47685936 43822396 39826042 35828125 31829167
> > perf_event_open02    0  TINFO  :  ratio: 5.997695
> > perf_event_open02    1  TPASS  :  test passed


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-20 19:28                   ` Vincent Guittot
@ 2019-11-20 19:55                     ` Qais Yousef
  2019-11-21 14:58                       ` Qais Yousef
  0 siblings, 1 reply; 89+ messages in thread
From: Qais Yousef @ 2019-11-20 19:55 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On 11/20/19 20:28, Vincent Guittot wrote:
> I run few more tests and i can get either hw counter with 0 or not.
> The main difference is on which CPU it runs: either big or little
> little return always 0 and big always non-zero value
> 
> on v5.4-rc7 and tip/sched/core, cpu0-3 return 0 and other non zeroa
> but on next, it's the opposite cpu0-3 return non zero ratio
> 
> Could you try to run the test with taskset to run it on big or little ?

Nice catch!

Yes indeed, using taskset to force it to run on the big CPUs, the test passes
even on linux-next/next-20191119.

So the relation to your patch is that it just biased where this test is likely
to run in my case and highlighted the breakage in the counters, probably?

FWIW, if I use taskset to force always big, it passes. Always little, the
counters are always 0 and it passes too. But if I have a mix I see what I
pasted before: the counters have valid values but nhw is 0.

So the questions are: why the little counters aren't working, and whether we
should run the test under taskset generally, since it can't handle the
asymmetry correctly.

Let me first try to find out why the little counters aren't working.

Thanks

--
Qais Yousef


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-20 19:55                     ` Qais Yousef
@ 2019-11-21 14:58                       ` Qais Yousef
  0 siblings, 0 replies; 89+ messages in thread
From: Qais Yousef @ 2019-11-21 14:58 UTC (permalink / raw)
  To: Vincent Guittot, mark.rutland
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Valentin Schneider, Srikar Dronamraju, Quentin Perret,
	Dietmar Eggemann, Morten Rasmussen, Hillf Danton, Parth Shah,
	Rik van Riel

On 11/20/19 19:55, Qais Yousef wrote:
> On 11/20/19 20:28, Vincent Guittot wrote:
> > I run few more tests and i can get either hw counter with 0 or not.
> > The main difference is on which CPU it runs: either big or little
> > little return always 0 and big always non-zero value
> > 
> > on v5.4-rc7 and tip/sched/core, cpu0-3 return 0 and other non zeroa
> > but on next, it's the opposite cpu0-3 return non zero ratio
> > 
> > Could you try to run the test with taskset to run it on big or little ?
> 
> Nice catch!
> 
> Yes indeed using taskset and forcing it to run on the big cpus it passes even
> on linux-next/next-20191119.
> 
> So the relation to your patch is that it just biased where this test is likely
> to run in my case and highlighted the breakage in the counters, probably?
> 
> FWIW, if I use taskset to force always big it passes. Always small, the counters
> are always 0 and it passes too. But if I have mixed I see what I pasted before,
> the counters have valid value but nhw is 0.
> 
> So the questions are, why little counters aren't working. And whether we should
> run the test with taskset generally as it can't handle the asymmetry correctly.
> 
> Let me first try to find out why the little counters aren't working.

So it turns out there's a caveat on the usage of perf counters on big.LITTLE
systems.

Mark on CC can explain this better than me so I'll leave the details to him.
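One symptom visible in the mixed big/little log earlier in the thread is that
time_running < time_enabled, i.e. the hw event was only scheduled on a PMU for
part of the run. Scaling the raw value the way perf does for
partially-scheduled (multiplexed) counters roughly recovers the ~3e8 count per
iteration seen in the pinned runs -- a back-of-the-envelope check on the
numbers quoted above, not a claim about the LTP test's internals:

```python
# Numbers from the failing mixed big/little run quoted earlier:
# "at iteration:0 value:260700250 time_enabled:172739760 time_running:144956600"
value, time_enabled, time_running = 260700250, 172739760, 144956600

# Standard scaling for a counter that only counted part of the time.
scaled = value * time_enabled / time_running
print(f"scaled count: {scaled:.0f}")  # on the order of 3.1e8
```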

Sorry about the noise Vincent - it seems your patch just shifted things
slightly and caused the task to migrate to another CPU, which in turn
triggered the failure when reading the perf counters, and hence the test
failure.

Thanks

--
Qais Yousef


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-10-18 13:26 ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Vincent Guittot
                     ` (2 preceding siblings ...)
  2019-11-20 11:58   ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Qais Yousef
@ 2019-11-22 14:34   ` Valentin Schneider
  2019-11-25  9:59     ` Vincent Guittot
  3 siblings, 1 reply; 89+ messages in thread
From: Valentin Schneider @ 2019-11-22 14:34 UTC (permalink / raw)
  To: Vincent Guittot, linux-kernel, mingo, peterz
  Cc: pauld, srikar, quentin.perret, dietmar.eggemann,
	Morten.Rasmussen, hdanton, parth, riel

Hi Vincent,

Apologies for the delayed review on that one. I have a few comments inline,
otherwise for the misfit part, if at all still relevant:

Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>

On 18/10/2019 14:26, Vincent Guittot wrote:
>  static struct sched_group *
>  find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> +		  int this_cpu, int sd_flag);
                                    ^^^^^^^
That parameter is now unused. AFAICT it was only used to special-case fork
events (sd flag & SD_BALANCE_FORK). I didn't see any explicit handling of
this case in the rework; I assume the new group type classification makes
it possible to forgo it?

> @@ -8241,6 +8123,252 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
>  }
>  #endif /* CONFIG_NUMA_BALANCING */
>  
> +
> +struct sg_lb_stats;
> +
> +/*
> + * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
> + * @denv: The ched_domain level to look for idlest group.
> + * @group: sched_group whose statistics are to be updated.
> + * @sgs: variable to hold the statistics for this group.
> + */
> +static inline void update_sg_wakeup_stats(struct sched_domain *sd,
> +					  struct sched_group *group,
> +					  struct sg_lb_stats *sgs,
> +					  struct task_struct *p)
> +{
> +	int i, nr_running;
> +
> +	memset(sgs, 0, sizeof(*sgs));
> +
> +	for_each_cpu(i, sched_group_span(group)) {
> +		struct rq *rq = cpu_rq(i);
> +
> +		sgs->group_load += cpu_load(rq);
> +		sgs->group_util += cpu_util_without(i, p);
> +		sgs->sum_h_nr_running += rq->cfs.h_nr_running;
> +
> +		nr_running = rq->nr_running;
> +		sgs->sum_nr_running += nr_running;
> +
> +		/*
> +		 * No need to call idle_cpu() if nr_running is not 0
> +		 */
> +		if (!nr_running && idle_cpu(i))
> +			sgs->idle_cpus++;
> +
> +
> +	}
> +
> +	/* Check if task fits in the group */
> +	if (sd->flags & SD_ASYM_CPUCAPACITY &&
> +	    !task_fits_capacity(p, group->sgc->max_capacity)) {
> +		sgs->group_misfit_task_load = 1;
> +	}
> +
> +	sgs->group_capacity = group->sgc->capacity;
> +
> +	sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
> +
> +	/*
> +	 * Computing avg_load makes sense only when group is fully busy or
> +	 * overloaded
> +	 */
> +	if (sgs->group_type < group_fully_busy)
> +		sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
> +				sgs->group_capacity;
> +}
> +
> +static bool update_pick_idlest(struct sched_group *idlest,

Nit: could we name this update_sd_pick_idlest() to follow
update_sd_pick_busiest()? It's the kind of thing where if I typed
"update_sd" in gtags I'd like to see both listed, seeing as they are
*very* similar. And we already have update_sg_{wakeup, lb}_stats().

> +			       struct sg_lb_stats *idlest_sgs,
> +			       struct sched_group *group,
> +			       struct sg_lb_stats *sgs)
> +{
> +	if (sgs->group_type < idlest_sgs->group_type)
> +		return true;
> +
> +	if (sgs->group_type > idlest_sgs->group_type)
> +		return false;
> +
> +	/*
> +	 * The candidate and the current idles group are the same type of
> +	 * group. Let check which one is the idlest according to the type.
> +	 */
> +
> +	switch (sgs->group_type) {
> +	case group_overloaded:
> +	case group_fully_busy:
> +		/* Select the group with lowest avg_load. */
> +		if (idlest_sgs->avg_load <= sgs->avg_load)
> +			return false;
> +		break;
> +
> +	case group_imbalanced:
> +	case group_asym_packing:
> +		/* Those types are not used in the slow wakeup path */
> +		return false;
> +
> +	case group_misfit_task:
> +		/* Select group with the highest max capacity */
> +		if (idlest->sgc->max_capacity >= group->sgc->max_capacity)
> +			return false;
> +		break;
> +
> +	case group_has_spare:
> +		/* Select group with most idle CPUs */
> +		if (idlest_sgs->idle_cpus >= sgs->idle_cpus)
> +			return false;
> +		break;
> +	}
> +
> +	return true;
> +}
> +
> +/*
> + * find_idlest_group finds and returns the least busy CPU group within the
> + * domain.
> + *
> + * Assumes p is allowed on at least one CPU in sd.
> + */
> +static struct sched_group *
> +find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> +		  int this_cpu, int sd_flag)
> +{
> +	struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
> +	struct sg_lb_stats local_sgs, tmp_sgs;
> +	struct sg_lb_stats *sgs;
> +	unsigned long imbalance;
> +	struct sg_lb_stats idlest_sgs = {
> +			.avg_load = UINT_MAX,
> +			.group_type = group_overloaded,
> +	};
> +
> +	imbalance = scale_load_down(NICE_0_LOAD) *
> +				(sd->imbalance_pct-100) / 100;
> +
> +	do {
> +		int local_group;
> +
> +		/* Skip over this group if it has no CPUs allowed */
> +		if (!cpumask_intersects(sched_group_span(group),
> +					p->cpus_ptr))
> +			continue;
> +
> +		local_group = cpumask_test_cpu(this_cpu,
> +					       sched_group_span(group));
> +
> +		if (local_group) {
> +			sgs = &local_sgs;
> +			local = group;
> +		} else {
> +			sgs = &tmp_sgs;
> +		}
> +
> +		update_sg_wakeup_stats(sd, group, sgs, p);
> +
> +		if (!local_group && update_pick_idlest(idlest, &idlest_sgs, group, sgs)) {
> +			idlest = group;
> +			idlest_sgs = *sgs;
> +		}
> +
> +	} while (group = group->next, group != sd->groups);
> +
> +
> +	/* There is no idlest group to push tasks to */
> +	if (!idlest)
> +		return NULL;
> +
> +	/*
> +	 * If the local group is idler than the selected idlest group
> +	 * don't try and push the task.
> +	 */
> +	if (local_sgs.group_type < idlest_sgs.group_type)
> +		return NULL;
> +
> +	/*
> +	 * If the local group is busier than the selected idlest group
> +	 * try and push the task.
> +	 */
> +	if (local_sgs.group_type > idlest_sgs.group_type)
> +		return idlest;
> +
> +	switch (local_sgs.group_type) {
> +	case group_overloaded:
> +	case group_fully_busy:
> +		/*
> +		 * When comparing groups across NUMA domains, it's possible for
> +		 * the local domain to be very lightly loaded relative to the
> +		 * remote domains but "imbalance" skews the comparison making
> +		 * remote CPUs look much more favourable. When considering
> +		 * cross-domain, add imbalance to the load on the remote node
> +		 * and consider staying local.
> +		 */
> +
> +		if ((sd->flags & SD_NUMA) &&
> +		    ((idlest_sgs.avg_load + imbalance) >= local_sgs.avg_load))
> +			return NULL;
> +
> +		/*
> +		 * If the local group is less loaded than the selected
> +		 * idlest group don't try and push any tasks.
> +		 */
> +		if (idlest_sgs.avg_load >= (local_sgs.avg_load + imbalance))
> +			return NULL;
> +
> +		if (100 * local_sgs.avg_load <= sd->imbalance_pct * idlest_sgs.avg_load)
> +			return NULL;
> +		break;
> +
> +	case group_imbalanced:
> +	case group_asym_packing:
> +		/* Those type are not used in the slow wakeup path */
> +		return NULL;

I suppose group_asym_packing could be handled similarly to misfit, right?
i.e. make the group type group_asym_packing if

  !sched_asym_prefer(sg.asym_prefer_cpu, local.asym_prefer_cpu)

> +
> +	case group_misfit_task:
> +		/* Select group with the highest max capacity */
> +		if (local->sgc->max_capacity >= idlest->sgc->max_capacity)
> +			return NULL;

Got confused a bit here due to the naming; in this case 'group_misfit_task'
only means 'if placed on this group, the task will be misfit'. If the
idlest group will cause us to remain misfit, but can give us some extra
capacity, I think it makes sense to move.

> +		break;
> +
> +	case group_has_spare:
> +		if (sd->flags & SD_NUMA) {
> +#ifdef CONFIG_NUMA_BALANCING
> +			int idlest_cpu;
> +			/*
> +			 * If there is spare capacity at NUMA, try to select
> +			 * the preferred node
> +			 */
> +			if (cpu_to_node(this_cpu) == p->numa_preferred_nid)
> +				return NULL;
> +
> +			idlest_cpu = cpumask_first(sched_group_span(idlest));
> +			if (cpu_to_node(idlest_cpu) == p->numa_preferred_nid)
> +				return idlest;
> +#endif
> +			/*
> +			 * Otherwise, keep the task on this node to stay close
> +			 * its wakeup source and improve locality. If there is
> +			 * a real need of migration, periodic load balance will
> +			 * take care of it.
> +			 */
> +			if (local_sgs.idle_cpus)
> +				return NULL;
> +		}
> +
> +		/*
> +		 * Select group with highest number of idle cpus. We could also
> +		 * compare the utilization which is more stable but it can end
> +		 * up that the group has less spare capacity but finally more
> +		 * idle cpus which means more opportunity to run task.
> +		 */
> +		if (local_sgs.idle_cpus >= idlest_sgs.idle_cpus)
> +			return NULL;
> +		break;
> +	}
> +
> +	return idlest;
> +}
> +
>  /**
>   * update_sd_lb_stats - Update sched_domain's statistics for load balancing.
>   * @env: The load balancing environment.
> 


* Re: [PATCH] sched/fair: fix rework of find_idlest_group()
  2019-10-22 16:46   ` [PATCH] sched/fair: fix rework of find_idlest_group() Vincent Guittot
                       ` (2 preceding siblings ...)
  2019-11-18 17:42     ` [tip: sched/core] sched/fair: Fix " tip-bot2 for Vincent Guittot
@ 2019-11-22 14:37     ` Valentin Schneider
  2019-11-25  9:16       ` Vincent Guittot
  3 siblings, 1 reply; 89+ messages in thread
From: Valentin Schneider @ 2019-11-22 14:37 UTC (permalink / raw)
  To: Vincent Guittot, linux-kernel, mingo, peterz
  Cc: pauld, srikar, quentin.perret, dietmar.eggemann,
	Morten.Rasmussen, hdanton, parth, riel, rong.a.chen

Hi Vincent,

I took the liberty of adding some commenting nits in my review. I
know this is already in tip, but as Mel pointed out this should be merged
with the rework when sent out to mainline (similar to the removal of
fix_small_imbalance() & the LB rework).

On 22/10/2019 17:46, Vincent Guittot wrote:
> The task, for which the scheduler looks for the idlest group of CPUs, must
> be discounted from all statistics in order to get a fair comparison
> between groups. This includes utilization, load, nr_running and idle_cpus.
> 
> Such unfairness can be easily highlighted with the unixbench execl 1 task.
> This test continuously call execve() and the scheduler looks for the idlest
> group/CPU on which it should place the task. Because the task runs on the
> local group/CPU, the latter seems already busy even if there is nothing
> else running on it. As a result, the scheduler will always select another
> group/CPU than the local one.
> 
> Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
> Reported-by: kernel test robot <rong.a.chen@intel.com>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> ---
> 
> This recover most of the perf regression on my system and I have asked
> Rong if he can rerun the test with the patch to check that it fixes his
> system as well.
> 
>  kernel/sched/fair.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++-----
>  1 file changed, 83 insertions(+), 7 deletions(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a81c364..0ad4b21 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -5379,6 +5379,36 @@ static unsigned long cpu_load(struct rq *rq)
>  {
>  	return cfs_rq_load_avg(&rq->cfs);
>  }
> +/*
> + * cpu_load_without - compute cpu load without any contributions from *p
> + * @cpu: the CPU which load is requested
> + * @p: the task which load should be discounted

For both @cpu and @p, s/which/whose/ (also applies to cpu_util_without()
which inspired this).

> + *
> + * The load of a CPU is defined by the load of tasks currently enqueued on that
> + * CPU as well as tasks which are currently sleeping after an execution on that
> + * CPU.
> + *
> + * This method returns the load of the specified CPU by discounting the load of
> + * the specified task, whenever the task is currently contributing to the CPU
> + * load.
> + */
> +static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p)
> +{
> +	struct cfs_rq *cfs_rq;
> +	unsigned int load;
> +
> +	/* Task has no contribution or is new */
> +	if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
> +		return cpu_load(rq);
> +
> +	cfs_rq = &rq->cfs;
> +	load = READ_ONCE(cfs_rq->avg.load_avg);
> +
> +	/* Discount task's util from CPU's util */

s/util/load

> +	lsub_positive(&load, task_h_load(p));
> +
> +	return load;
> +}
>  
>  static unsigned long capacity_of(int cpu)
>  {
> @@ -8117,10 +8147,55 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
>  struct sg_lb_stats;
>  
>  /*
> + * task_running_on_cpu - return 1 if @p is running on @cpu.
> + */
> +
> +static unsigned int task_running_on_cpu(int cpu, struct task_struct *p)
          ^^^^^^^^^^^^
That could very well be bool, right?


> +{
> +	/* Task has no contribution or is new */
> +	if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
> +		return 0;
> +
> +	if (task_on_rq_queued(p))
> +		return 1;
> +
> +	return 0;
> +}
> +
> +/**
> + * idle_cpu_without - would a given CPU be idle without p ?
> + * @cpu: the processor on which idleness is tested.
> + * @p: task which should be ignored.
> + *
> + * Return: 1 if the CPU would be idle. 0 otherwise.
> + */
> +static int idle_cpu_without(int cpu, struct task_struct *p)
          ^^^
Ditto on the boolean return values

> +{
> +	struct rq *rq = cpu_rq(cpu);
> +
> +	if ((rq->curr != rq->idle) && (rq->curr != p))
> +		return 0;
> +
> +	/*
> +	 * rq->nr_running can't be used but an updated version without the
> +	 * impact of p on cpu must be used instead. The updated nr_running
> +	 * be computed and tested before calling idle_cpu_without().
> +	 */
> +
> +#ifdef CONFIG_SMP
> +	if (!llist_empty(&rq->wake_list))
> +		return 0;
> +#endif
> +
> +	return 1;
> +}
> +
> +/*
>   * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
> - * @denv: The ched_domain level to look for idlest group.
> + * @sd: The sched_domain level to look for idlest group.
>   * @group: sched_group whose statistics are to be updated.
>   * @sgs: variable to hold the statistics for this group.
> + * @p: The task for which we look for the idlest group/CPU.
>   */
>  static inline void update_sg_wakeup_stats(struct sched_domain *sd,
>  					  struct sched_group *group,


* Re: [PATCH] sched/fair: fix rework of find_idlest_group()
  2019-11-22 14:37     ` [PATCH] sched/fair: fix " Valentin Schneider
@ 2019-11-25  9:16       ` Vincent Guittot
  2019-11-25 11:03         ` Valentin Schneider
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-11-25  9:16 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Srikar Dronamraju, Quentin Perret, Dietmar Eggemann,
	Morten Rasmussen, Hillf Danton, Parth Shah, Rik van Riel,
	kernel test robot

On Fri, 22 Nov 2019 at 15:37, Valentin Schneider
<valentin.schneider@arm.com> wrote:
>
> Hi Vincent,
>
> I took the liberty of adding some commenting nits in my review. I
> know this is already in tip, but as Mel pointed out this should be merged
> with the rework when sent out to mainline (similar to the removal of
> fix_small_imbalance() & the LB rework).
>
> On 22/10/2019 17:46, Vincent Guittot wrote:
> > The task, for which the scheduler looks for the idlest group of CPUs, must
> > be discounted from all statistics in order to get a fair comparison
> > between groups. This includes utilization, load, nr_running and idle_cpus.
> >
> > Such unfairness can be easily highlighted with the unixbench execl 1 task.
> > This test continuously call execve() and the scheduler looks for the idlest
> > group/CPU on which it should place the task. Because the task runs on the
> > local group/CPU, the latter seems already busy even if there is nothing
> > else running on it. As a result, the scheduler will always select another
> > group/CPU than the local one.
> >
> > Fixes: 57abff067a08 ("sched/fair: Rework find_idlest_group()")
> > Reported-by: kernel test robot <rong.a.chen@intel.com>
> > Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
> > ---
> >
> > This recover most of the perf regression on my system and I have asked
> > Rong if he can rerun the test with the patch to check that it fixes his
> > system as well.
> >
> >  kernel/sched/fair.c | 90 ++++++++++++++++++++++++++++++++++++++++++++++++-----
> >  1 file changed, 83 insertions(+), 7 deletions(-)
> >
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index a81c364..0ad4b21 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -5379,6 +5379,36 @@ static unsigned long cpu_load(struct rq *rq)
> >  {
> >       return cfs_rq_load_avg(&rq->cfs);
> >  }
> > +/*
> > + * cpu_load_without - compute cpu load without any contributions from *p
> > + * @cpu: the CPU which load is requested
> > + * @p: the task which load should be discounted
>
> For both @cpu and @p, s/which/whose/ (also applies to cpu_util_without()
> which inspired this).

As you mentioned, this is inspired by cpu_util_without and stays
consistent with it.

>
> > + *
> > + * The load of a CPU is defined by the load of tasks currently enqueued on that
> > + * CPU as well as tasks which are currently sleeping after an execution on that
> > + * CPU.
> > + *
> > + * This method returns the load of the specified CPU by discounting the load of
> > + * the specified task, whenever the task is currently contributing to the CPU
> > + * load.
> > + */
> > +static unsigned long cpu_load_without(struct rq *rq, struct task_struct *p)
> > +{
> > +     struct cfs_rq *cfs_rq;
> > +     unsigned int load;
> > +
> > +     /* Task has no contribution or is new */
> > +     if (cpu_of(rq) != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
> > +             return cpu_load(rq);
> > +
> > +     cfs_rq = &rq->cfs;
> > +     load = READ_ONCE(cfs_rq->avg.load_avg);
> > +
> > +     /* Discount task's util from CPU's util */
>
> s/util/load
>
> > +     lsub_positive(&load, task_h_load(p));
> > +
> > +     return load;
> > +}
> >
> >  static unsigned long capacity_of(int cpu)
> >  {
> > @@ -8117,10 +8147,55 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
> >  struct sg_lb_stats;
> >
> >  /*
> > + * task_running_on_cpu - return 1 if @p is running on @cpu.
> > + */
> > +
> > +static unsigned int task_running_on_cpu(int cpu, struct task_struct *p)
>           ^^^^^^^^^^^^
> That could very well be bool, right?
>
>
> > +{
> > +     /* Task has no contribution or is new */
> > +     if (cpu != task_cpu(p) || !READ_ONCE(p->se.avg.last_update_time))
> > +             return 0;
> > +
> > +     if (task_on_rq_queued(p))
> > +             return 1;
> > +
> > +     return 0;
> > +}
> > +
> > +/**
> > + * idle_cpu_without - would a given CPU be idle without p ?
> > + * @cpu: the processor on which idleness is tested.
> > + * @p: task which should be ignored.
> > + *
> > + * Return: 1 if the CPU would be idle. 0 otherwise.
> > + */
> > +static int idle_cpu_without(int cpu, struct task_struct *p)
>           ^^^
> Ditto on the boolean return values

This is an extension of idle_cpu(), which also returns int, and I
wanted to stay consistent with it.

So we might want to do some kind of cleanup or rewording of these
interfaces and their descriptions, but that should be done as a whole,
out of the scope of this patch, and IMO it would be worth a dedicated
patch because it would imply modifying other parts of the code not
covered by this patch, like idle_cpu() or cpu_util_without().

>
> > +{
> > +     struct rq *rq = cpu_rq(cpu);
> > +
> > +     if ((rq->curr != rq->idle) && (rq->curr != p))
> > +             return 0;
> > +
> > +     /*
> > +      * rq->nr_running can't be used but an updated version without the
> > +      * impact of p on cpu must be used instead. The updated nr_running
> > +      * must be computed and tested before calling idle_cpu_without().
> > +      */
> > +
> > +#ifdef CONFIG_SMP
> > +     if (!llist_empty(&rq->wake_list))
> > +             return 0;
> > +#endif
> > +
> > +     return 1;
> > +}
> > +
> > +/*
> >   * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
> > - * @denv: The ched_domain level to look for idlest group.
> > + * @sd: The sched_domain level to look for idlest group.
> >   * @group: sched_group whose statistics are to be updated.
> >   * @sgs: variable to hold the statistics for this group.
> > + * @p: The task for which we look for the idlest group/CPU.
> >   */
> >  static inline void update_sg_wakeup_stats(struct sched_domain *sd,
> >                                         struct sched_group *group,

^ permalink raw reply	[flat|nested] 89+ messages in thread

* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-22 14:34   ` Valentin Schneider
@ 2019-11-25  9:59     ` Vincent Guittot
  2019-11-25 11:13       ` Valentin Schneider
  0 siblings, 1 reply; 89+ messages in thread
From: Vincent Guittot @ 2019-11-25  9:59 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Srikar Dronamraju, Quentin Perret, Dietmar Eggemann,
	Morten Rasmussen, Hillf Danton, Parth Shah, Rik van Riel

On Fri, 22 Nov 2019 at 15:34, Valentin Schneider
<valentin.schneider@arm.com> wrote:
>
> Hi Vincent,
>
> Apologies for the delayed review on that one. I have a few comments inline,
> otherwise for the misfit part, if at all still relevant:
>
> Reviewed-by: Valentin Schneider <valentin.schneider@arm.com>
>
> On 18/10/2019 14:26, Vincent Guittot wrote:
> >  static struct sched_group *
> >  find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> > +               int this_cpu, int sd_flag);
>                                     ^^^^^^^
> That parameter is now unused. AFAICT it was only used to special-case fork
> events (sd flag & SD_BALANCE_FORK). I didn't see any explicit handling of
> this case in the rework, I assume the new group type classification makes
> it possible to forgo?
>
> > @@ -8241,6 +8123,252 @@ static inline enum fbq_type fbq_classify_rq(struct rq *rq)
> >  }
> >  #endif /* CONFIG_NUMA_BALANCING */
> >
> > +
> > +struct sg_lb_stats;
> > +
> > +/*
> > + * update_sg_wakeup_stats - Update sched_group's statistics for wakeup.
> > + * @denv: The ched_domain level to look for idlest group.
> > + * @group: sched_group whose statistics are to be updated.
> > + * @sgs: variable to hold the statistics for this group.
> > + */
> > +static inline void update_sg_wakeup_stats(struct sched_domain *sd,
> > +                                       struct sched_group *group,
> > +                                       struct sg_lb_stats *sgs,
> > +                                       struct task_struct *p)
> > +{
> > +     int i, nr_running;
> > +
> > +     memset(sgs, 0, sizeof(*sgs));
> > +
> > +     for_each_cpu(i, sched_group_span(group)) {
> > +             struct rq *rq = cpu_rq(i);
> > +
> > +             sgs->group_load += cpu_load(rq);
> > +             sgs->group_util += cpu_util_without(i, p);
> > +             sgs->sum_h_nr_running += rq->cfs.h_nr_running;
> > +
> > +             nr_running = rq->nr_running;
> > +             sgs->sum_nr_running += nr_running;
> > +
> > +             /*
> > +              * No need to call idle_cpu() if nr_running is not 0
> > +              */
> > +             if (!nr_running && idle_cpu(i))
> > +                     sgs->idle_cpus++;
> > +
> > +
> > +     }
> > +
> > +     /* Check if task fits in the group */
> > +     if (sd->flags & SD_ASYM_CPUCAPACITY &&
> > +         !task_fits_capacity(p, group->sgc->max_capacity)) {
> > +             sgs->group_misfit_task_load = 1;
> > +     }
> > +
> > +     sgs->group_capacity = group->sgc->capacity;
> > +
> > +     sgs->group_type = group_classify(sd->imbalance_pct, group, sgs);
> > +
> > +     /*
> > +      * Computing avg_load makes sense only when group is fully busy or
> > +      * overloaded
> > +      */
> > +     if (sgs->group_type < group_fully_busy)
> > +             sgs->avg_load = (sgs->group_load * SCHED_CAPACITY_SCALE) /
> > +                             sgs->group_capacity;
> > +}
> > +
> > +static bool update_pick_idlest(struct sched_group *idlest,
>
> Nit: could we name this update_sd_pick_idlest() to follow
> update_sd_pick_busiest()? It's the kind of thing where if I typed
> "update_sd" in gtags I'd like to see both listed, seeing as they are
> *very* similar. And we already have update_sg_{wakeup, lb}_stats().
>
> > +                            struct sg_lb_stats *idlest_sgs,
> > +                            struct sched_group *group,
> > +                            struct sg_lb_stats *sgs)
> > +{
> > +     if (sgs->group_type < idlest_sgs->group_type)
> > +             return true;
> > +
> > +     if (sgs->group_type > idlest_sgs->group_type)
> > +             return false;
> > +
> > +     /*
> > +      * The candidate and the current idlest group are the same type of
> > +      * group. Let's check which one is the idlest according to the type.
> > +      */
> > +
> > +     switch (sgs->group_type) {
> > +     case group_overloaded:
> > +     case group_fully_busy:
> > +             /* Select the group with lowest avg_load. */
> > +             if (idlest_sgs->avg_load <= sgs->avg_load)
> > +                     return false;
> > +             break;
> > +
> > +     case group_imbalanced:
> > +     case group_asym_packing:
> > +             /* Those types are not used in the slow wakeup path */
> > +             return false;
> > +
> > +     case group_misfit_task:
> > +             /* Select group with the highest max capacity */
> > +             if (idlest->sgc->max_capacity >= group->sgc->max_capacity)
> > +                     return false;
> > +             break;
> > +
> > +     case group_has_spare:
> > +             /* Select group with most idle CPUs */
> > +             if (idlest_sgs->idle_cpus >= sgs->idle_cpus)
> > +                     return false;
> > +             break;
> > +     }
> > +
> > +     return true;
> > +}
> > +
> > +/*
> > + * find_idlest_group finds and returns the least busy CPU group within the
> > + * domain.
> > + *
> > + * Assumes p is allowed on at least one CPU in sd.
> > + */
> > +static struct sched_group *
> > +find_idlest_group(struct sched_domain *sd, struct task_struct *p,
> > +               int this_cpu, int sd_flag)
> > +{
> > +     struct sched_group *idlest = NULL, *local = NULL, *group = sd->groups;
> > +     struct sg_lb_stats local_sgs, tmp_sgs;
> > +     struct sg_lb_stats *sgs;
> > +     unsigned long imbalance;
> > +     struct sg_lb_stats idlest_sgs = {
> > +                     .avg_load = UINT_MAX,
> > +                     .group_type = group_overloaded,
> > +     };
> > +
> > +     imbalance = scale_load_down(NICE_0_LOAD) *
> > +                             (sd->imbalance_pct-100) / 100;
> > +
> > +     do {
> > +             int local_group;
> > +
> > +             /* Skip over this group if it has no CPUs allowed */
> > +             if (!cpumask_intersects(sched_group_span(group),
> > +                                     p->cpus_ptr))
> > +                     continue;
> > +
> > +             local_group = cpumask_test_cpu(this_cpu,
> > +                                            sched_group_span(group));
> > +
> > +             if (local_group) {
> > +                     sgs = &local_sgs;
> > +                     local = group;
> > +             } else {
> > +                     sgs = &tmp_sgs;
> > +             }
> > +
> > +             update_sg_wakeup_stats(sd, group, sgs, p);
> > +
> > +             if (!local_group && update_pick_idlest(idlest, &idlest_sgs, group, sgs)) {
> > +                     idlest = group;
> > +                     idlest_sgs = *sgs;
> > +             }
> > +
> > +     } while (group = group->next, group != sd->groups);
> > +
> > +
> > +     /* There is no idlest group to push tasks to */
> > +     if (!idlest)
> > +             return NULL;
> > +
> > +     /*
> > +      * If the local group is idler than the selected idlest group
> > +      * don't try and push the task.
> > +      */
> > +     if (local_sgs.group_type < idlest_sgs.group_type)
> > +             return NULL;
> > +
> > +     /*
> > +      * If the local group is busier than the selected idlest group
> > +      * try and push the task.
> > +      */
> > +     if (local_sgs.group_type > idlest_sgs.group_type)
> > +             return idlest;
> > +
> > +     switch (local_sgs.group_type) {
> > +     case group_overloaded:
> > +     case group_fully_busy:
> > +             /*
> > +              * When comparing groups across NUMA domains, it's possible for
> > +              * the local domain to be very lightly loaded relative to the
> > +              * remote domains but "imbalance" skews the comparison making
> > +              * remote CPUs look much more favourable. When considering
> > +              * cross-domain, add imbalance to the load on the remote node
> > +              * and consider staying local.
> > +              */
> > +
> > +             if ((sd->flags & SD_NUMA) &&
> > +                 ((idlest_sgs.avg_load + imbalance) >= local_sgs.avg_load))
> > +                     return NULL;
> > +
> > +             /*
> > +              * If the local group is less loaded than the selected
> > +              * idlest group don't try and push any tasks.
> > +              */
> > +             if (idlest_sgs.avg_load >= (local_sgs.avg_load + imbalance))
> > +                     return NULL;
> > +
> > +             if (100 * local_sgs.avg_load <= sd->imbalance_pct * idlest_sgs.avg_load)
> > +                     return NULL;
> > +             break;
> > +
> > +     case group_imbalanced:
> > +     case group_asym_packing:
> > +             /* Those types are not used in the slow wakeup path */
> > +             return NULL;
>
> I suppose group_asym_packing could be handled similarly to misfit, right?
> i.e. make the group type group_asym_packing if
>
>   !sched_asym_prefer(sg.asym_prefer_cpu, local.asym_prefer_cpu)

Unlike group_misfit_task, which was already somewhat taken into account
through the comparison of spare capacity, group_asym_packing was not
considered at all in find_idlest_group(), so I prefer to stay
conservative and wait for users of asym_packing to come with a need
before adding this new mechanism.

>
> > +
> > +     case group_misfit_task:
> > +             /* Select group with the highest max capacity */
> > +             if (local->sgc->max_capacity >= idlest->sgc->max_capacity)
> > +                     return NULL;
>
> Got confused a bit here due to the naming; in this case 'group_misfit_task'
> only means 'if placed on this group, the task will be misfit'. If the
> idlest group will cause us to remain misfit, but can give us some extra
> capacity, I think it makes sense to move.
>
> > +             break;
> > +
> > +     case group_has_spare:
> > +             if (sd->flags & SD_NUMA) {
> > +#ifdef CONFIG_NUMA_BALANCING
> > +                     int idlest_cpu;
> > +                     /*
> > +                      * If there is spare capacity at NUMA, try to select
> > +                      * the preferred node
> > +                      */
> > +                     if (cpu_to_node(this_cpu) == p->numa_preferred_nid)
> > +                             return NULL;
> > +
> > +                     idlest_cpu = cpumask_first(sched_group_span(idlest));
> > +                     if (cpu_to_node(idlest_cpu) == p->numa_preferred_nid)
> > +                             return idlest;
> > +#endif
> > +                     /*
> > +                      * Otherwise, keep the task on this node to stay close
> > +                      * to its wakeup source and improve locality. If there is
> > +                      * a real need of migration, periodic load balance will
> > +                      * take care of it.
> > +                      */
> > +                     if (local_sgs.idle_cpus)
> > +                             return NULL;
> > +             }
> > +
> > +             /*
> > +              * Select group with highest number of idle cpus. We could also
> > +              * compare the utilization which is more stable but it can end
> > +              * up that the group has less spare capacity but finally more
> > +              * idle cpus which means more opportunity to run task.
> > +              */
> > +             if (local_sgs.idle_cpus >= idlest_sgs.idle_cpus)
> > +                     return NULL;
> > +             break;
> > +     }
> > +
> > +     return idlest;
> > +}
> > +
> >  /**
> >   * update_sd_lb_stats - Update sched_domain's statistics for load balancing.
> >   * @env: The load balancing environment.
> >
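As a reading aid, the candidate-vs-idlest comparison in update_pick_idlest() above boils down to a two-stage decision: first compare group types (a lower type means less busy), then tie-break within the type. A simplified Python model, with dicts standing in for sg_lb_stats/sched_group fields — illustrative only, not kernel code:

```python
from enum import IntEnum

class GroupType(IntEnum):
    # Lower value == less busy; mirrors the enum ordering used by the rework
    HAS_SPARE = 0
    FULLY_BUSY = 1
    MISFIT_TASK = 2
    ASYM_PACKING = 3
    IMBALANCED = 4
    OVERLOADED = 5

def pick_idlest(idlest, candidate):
    """Return True if candidate should replace the current idlest group."""
    if candidate["type"] < idlest["type"]:
        return True
    if candidate["type"] > idlest["type"]:
        return False
    t = candidate["type"]
    if t in (GroupType.OVERLOADED, GroupType.FULLY_BUSY):
        # Select the group with the lowest avg_load
        return candidate["avg_load"] < idlest["avg_load"]
    if t == GroupType.MISFIT_TASK:
        # Select the group with the highest max capacity
        return candidate["max_capacity"] > idlest["max_capacity"]
    if t == GroupType.HAS_SPARE:
        # Select the group with the most idle CPUs
        return candidate["idle_cpus"] > idlest["idle_cpus"]
    # imbalanced/asym_packing: not used in the slow wakeup path
    return False
```

The model makes the symmetry with update_sd_pick_busiest() visible: group type dominates, and per-type tie-breaks only apply between groups of the same type.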


* Re: [PATCH] sched/fair: fix rework of find_idlest_group()
  2019-11-25  9:16       ` Vincent Guittot
@ 2019-11-25 11:03         ` Valentin Schneider
  0 siblings, 0 replies; 89+ messages in thread
From: Valentin Schneider @ 2019-11-25 11:03 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Srikar Dronamraju, Quentin Perret, Dietmar Eggemann,
	Morten Rasmussen, Hillf Danton, Parth Shah, Rik van Riel,
	kernel test robot

On 25/11/2019 09:16, Vincent Guittot wrote:
> 
> This is an extension of idle_cpu(), which also returns int, and I
> wanted to stay consistent with it.
> 
> So we might want to do some kind of cleanup or rewording of these
> interfaces and their descriptions, but that should be done as a whole,
> out of the scope of this patch, and IMO it would be worth a dedicated
> patch because it would imply modifying other parts of the code not
> covered by this patch, like idle_cpu() or cpu_util_without().
> 

Fair enough.


* Re: [PATCH v4 11/11] sched/fair: rework find_idlest_group
  2019-11-25  9:59     ` Vincent Guittot
@ 2019-11-25 11:13       ` Valentin Schneider
  0 siblings, 0 replies; 89+ messages in thread
From: Valentin Schneider @ 2019-11-25 11:13 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: linux-kernel, Ingo Molnar, Peter Zijlstra, Phil Auld,
	Srikar Dronamraju, Quentin Perret, Dietmar Eggemann,
	Morten Rasmussen, Hillf Danton, Parth Shah, Rik van Riel

On 25/11/2019 09:59, Vincent Guittot wrote:
>>> +     case group_imbalanced:
>>> +     case group_asym_packing:
>>> +             /* Those types are not used in the slow wakeup path */
>>> +             return NULL;
>>
>> I suppose group_asym_packing could be handled similarly to misfit, right?
>> i.e. make the group type group_asym_packing if
>>
>>   !sched_asym_prefer(sg.asym_prefer_cpu, local.asym_prefer_cpu)
> 
> Unlike group_misfit_task, which was already somewhat taken into account
> through the comparison of spare capacity, group_asym_packing was not
> considered at all in find_idlest_group(), so I prefer to stay
> conservative and wait for users of asym_packing to come with a need
> before adding this new mechanism.
> 

Right, makes sense.


* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
                   ` (11 preceding siblings ...)
  2019-10-21  7:50 ` [PATCH v4 00/10] sched/fair: rework the CFS load balance Ingo Molnar
@ 2019-11-25 12:48 ` Valentin Schneider
  2020-01-03 16:39   ` Valentin Schneider
  12 siblings, 1 reply; 89+ messages in thread
From: Valentin Schneider @ 2019-11-25 12:48 UTC (permalink / raw)
  To: Vincent Guittot, linux-kernel, mingo, peterz
  Cc: pauld, srikar, quentin.perret, dietmar.eggemann,
	Morten.Rasmussen, hdanton, parth, riel

On 18/10/2019 14:26, Vincent Guittot wrote:
>            tip/sched/core        w/ this patchset    improvement
> schedpipe      53125 +/-0.18%        53443 +/-0.52%   (+0.60%)
> 
> hackbench -l (2560/#grp) -g #grp
>  1 groups      1.579 +/-29.16%       1.410 +/-13.46% (+10.70%)
>  4 groups      1.269 +/-9.69%        1.205 +/-3.27%   (+5.00%)
>  8 groups      1.117 +/-1.51%        1.123 +/-1.27%   (+4.57%)
> 16 groups      1.176 +/-1.76%        1.164 +/-2.42%   (+1.07%)
> 
> Unixbench shell8
>   1 test     1963.48 +/-0.36%       1902.88 +/-0.73%    (-3.09%)
> 224 tests    2427.60 +/-0.20%       2469.80 +/-0.42%  (1.74%)
> 
> - large arm64 2 nodes / 224 cores system
> 
>            tip/sched/core        w/ this patchset    improvement
> schedpipe     124084 +/-1.36%       124445 +/-0.67%   (+0.29%)
> 
> hackbench -l (256000/#grp) -g #grp
>   1 groups    15.305 +/-1.50%       14.001 +/-1.99%   (+8.52%)
>   4 groups     5.959 +/-0.70%        5.542 +/-3.76%   (+6.99%)
>  16 groups     3.120 +/-1.72%        3.253 +/-0.61%   (-4.92%)
>  32 groups     2.911 +/-0.88%        2.837 +/-1.16%   (+2.54%)
>  64 groups     2.805 +/-1.90%        2.716 +/-1.18%   (+3.17%)
> 128 groups     3.166 +/-7.71%        3.891 +/-6.77%   (+5.82%)
> 256 groups     3.655 +/-10.09%       3.185 +/-6.65%  (+12.87%)
> 
> dbench
>   1 groups   328.176 +/-0.29%      330.217 +/-0.32%   (+0.62%)
>   4 groups   930.739 +/-0.50%      957.173 +/-0.66%   (+2.84%)
>  16 groups  1928.292 +/-0.36%     1978.234 +/-0.88%   (+0.92%)
>  32 groups  2369.348 +/-1.72%     2454.020 +/-0.90%   (+3.57%)
>  64 groups  2583.880 +/-3.39%     2618.860 +/-0.84%   (+1.35%)
> 128 groups  2256.406 +/-10.67%    2392.498 +/-2.13%   (+6.03%)
> 256 groups  1257.546 +/-3.81%     1674.684 +/-4.97%  (+33.17%)
> 
> Unixbench shell8
>   1 test     6944.16 +/-0.02     6605.82 +/-0.11      (-4.87%)
> 224 tests   13499.02 +/-0.14    13637.94 +/-0.47%     (+1.03%)
> lkp reported a -10% regression on shell8 (1 test) for v3 that
> seems to be partially recovered on my platform with v4.
> 
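One note on reading the improvement columns above: schedpipe and dbench report throughput (higher is better) while hackbench reports completion time (lower is better), so the sign convention flips between benchmarks. A quick sketch of the computation — the helper is mine, only the values are copied from the tables:

```python
def improvement(before, after, higher_is_better=True):
    """Percentage improvement of `after` relative to `before`."""
    if higher_is_better:
        return 100.0 * (after - before) / before
    return 100.0 * (before - after) / before

# schedpipe on the large arm64 system (ops/s, higher is better)
print(round(improvement(124084, 124445), 2))         # ~ +0.29
# hackbench 1 group on the same system (seconds, lower is better)
print(round(improvement(15.305, 14.001, False), 2))  # ~ +8.52
```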

I've been busy trying to get some perf numbers on arm64 server~ish systems,
I finally managed to get some specjbb numbers on TX2 (the 2 nodes, 224
CPUs version which I suspect is the same as you used in the above). I only
have a limited number of iterations (5, although each runs for about 2h)
because I wanted to get some (usable) results by today, I'll spin some more
during the week.


This is based on the "critical-jOPs" metric which AFAIU higher is better:

Baseline, SMTOFF:
  mean     12156.400000
  std        660.640068
  min      11016.000000
  25%      12158.000000
  50%      12464.000000
  75%      12521.000000
  max      12623.000000

Patches (+ find_idlest_group() fixup), SMTOFF:
  mean     12487.250000
  std        184.404221
  min      12326.000000
  25%      12349.250000
  50%      12449.500000
  75%      12587.500000
  max      12724.000000


It looks slightly better overall (mean, stddev), but I'm annoyed by that
low iteration count. I also had some issues with my SMTON run and I only
got numbers for 2 iterations, so I'll respin that before complaining.

FWIW the branch I've been using is:

  http://www.linux-arm.org/git?p=linux-vs.git;a=shortlog;h=refs/heads/mainline/load-balance/vincent_rework/tip


* Re: [PATCH v4 00/10] sched/fair: rework the CFS load balance
  2019-11-25 12:48 ` Valentin Schneider
@ 2020-01-03 16:39   ` Valentin Schneider
  0 siblings, 0 replies; 89+ messages in thread
From: Valentin Schneider @ 2020-01-03 16:39 UTC (permalink / raw)
  To: Vincent Guittot, linux-kernel, mingo, peterz
  Cc: pauld, srikar, quentin.perret, dietmar.eggemann,
	Morten.Rasmussen, hdanton, parth, riel

On 25/11/2019 12:48, Valentin Schneider wrote:
> I've been busy trying to get some perf numbers on arm64 server~ish systems,
> I finally managed to get some specjbb numbers on TX2 (the 2 nodes, 224
> CPUs version which I suspect is the same as you used in the above). I only
> have a limited number of iterations (5, although each runs for about 2h)
> because I wanted to get some (usable) results by today, I'll spin some more
> during the week.
> 
> 
> This is based on the "critical-jOPs" metric which AFAIU higher is better:
> 
> Baseline, SMTOFF:
>   mean     12156.400000
>   std        660.640068
>   min      11016.000000
>   25%      12158.000000
>   50%      12464.000000
>   75%      12521.000000
>   max      12623.000000
> 
> Patches (+ find_idlest_group() fixup), SMTOFF:
>   mean     12487.250000
>   std        184.404221
>   min      12326.000000
>   25%      12349.250000
>   50%      12449.500000
>   75%      12587.500000
>   max      12724.000000	
> 
> 
> It looks slightly better overall (mean, stddev), but I'm annoyed by that
> low iteration count. I also had some issues with my SMTON run and I only
> got numbers for 2 iterations, so I'll respin that before complaining.
> 
> FWIW the branch I've been using is:
> 
>   http://www.linux-arm.org/git?p=linux-vs.git;a=shortlog;h=refs/heads/mainline/load-balance/vincent_rework/tip
> 

Forgot about that; I got some more results in the meantime (still specjbb
and still on ThunderX2):

| kernel          | count |         mean |        std |     min |     50% |      75% |      99% |     max |
|-----------------+-------+--------------+------------+---------+---------+----------+----------+---------|
| -REWORK SMT-ON  |    15 | 19961.133333 | 613.406515 | 19058.0 | 20006.0 | 20427.50 | 20903.42 | 20924.0 |
| +REWORK SMT-ON  |    12 | 19265.666667 | 563.959917 | 18380.0 | 19133.5 | 19699.25 | 20024.90 | 20026.0 |
| -REWORK SMT-OFF |    25 | 12397.000000 | 425.763628 | 11016.0 | 12377.0 | 12623.00 | 13137.20 | 13154.0 |
| +REWORK SMT-OFF |    20 | 12436.700000 | 414.130554 | 11313.0 | 12505.0 | 12687.00 | 12981.44 | 12986.0 |

SMT-ON  delta: -3.48%
SMT-OFF delta: +0.32%
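For completeness, the two delta figures follow directly from the mean columns in the table; a trivial check (the helper below is mine, values copied from above):

```python
def delta_pct(baseline_mean, patched_mean):
    """Relative change (%) of the +REWORK mean vs. the -REWORK mean."""
    return 100.0 * (patched_mean - baseline_mean) / baseline_mean

print(round(delta_pct(19961.133333, 19265.666667), 2))  # SMT-ON:  ~ -3.48
print(round(delta_pct(12397.0, 12436.7), 2))            # SMT-OFF: ~ +0.32
```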


This is consistent with some earlier runs (where I had a few issues
getting enough iterations): SMT-OFF performs a tad better, and SMT-ON
performs slightly worse.

Looking at the 99th percentile, it seems we're a bit worse compared to
the previous best cases, but looking at the slightly reduced stddev it also
seems that we are somewhat more consistent.


end of thread, other threads:[~2020-01-03 16:39 UTC | newest]

Thread overview: 89+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2019-10-18 13:26 [PATCH v4 00/10] sched/fair: rework the CFS load balance Vincent Guittot
2019-10-18 13:26 ` [PATCH v4 01/11] sched/fair: clean up asym packing Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Clean " tip-bot2 for Vincent Guittot
2019-10-30 14:51   ` [PATCH v4 01/11] sched/fair: clean " Mel Gorman
2019-10-30 16:03     ` Vincent Guittot
2019-10-18 13:26 ` [PATCH v4 02/11] sched/fair: rename sum_nr_running to sum_h_nr_running Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Rename sg_lb_stats::sum_nr_running " tip-bot2 for Vincent Guittot
2019-10-30 14:53   ` [PATCH v4 02/11] sched/fair: rename sum_nr_running " Mel Gorman
2019-10-18 13:26 ` [PATCH v4 03/11] sched/fair: remove meaningless imbalance calculation Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Remove " tip-bot2 for Vincent Guittot
2019-10-18 13:26 ` [PATCH v4 04/11] sched/fair: rework load_balance Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Rework load_balance() tip-bot2 for Vincent Guittot
2019-10-30 15:45   ` [PATCH v4 04/11] sched/fair: rework load_balance Mel Gorman
2019-10-30 16:16     ` Valentin Schneider
2019-10-31  9:09     ` Vincent Guittot
2019-10-31 10:15       ` Mel Gorman
2019-10-31 11:13         ` Vincent Guittot
2019-10-31 11:40           ` Mel Gorman
2019-11-08 16:35             ` Vincent Guittot
2019-11-08 18:37               ` Mel Gorman
2019-11-12 10:58                 ` Vincent Guittot
2019-11-12 15:06                   ` Mel Gorman
2019-11-12 15:40                     ` Vincent Guittot
2019-11-12 17:45                       ` Mel Gorman
2019-11-18 13:50     ` Ingo Molnar
2019-11-18 13:57       ` Vincent Guittot
2019-11-18 14:51       ` Mel Gorman
2019-10-18 13:26 ` [PATCH v4 05/11] sched/fair: use rq->nr_running when balancing load Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Use " tip-bot2 for Vincent Guittot
2019-10-30 15:54   ` [PATCH v4 05/11] sched/fair: use " Mel Gorman
2019-10-18 13:26 ` [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Use load instead of runnable load in load_balance() tip-bot2 for Vincent Guittot
2019-10-30 15:58   ` [PATCH v4 06/11] sched/fair: use load instead of runnable load in load_balance Mel Gorman
2019-10-18 13:26 ` [PATCH v4 07/11] sched/fair: evenly spread tasks when not overloaded Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Spread out tasks evenly " tip-bot2 for Vincent Guittot
2019-10-30 16:03   ` [PATCH v4 07/11] sched/fair: evenly spread tasks " Mel Gorman
2019-10-18 13:26 ` [PATCH v4 08/11] sched/fair: use utilization to select misfit task Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Use " tip-bot2 for Vincent Guittot
2019-10-18 13:26 ` [PATCH v4 09/11] sched/fair: use load instead of runnable load in wakeup path Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Use " tip-bot2 for Vincent Guittot
2019-10-18 13:26 ` [PATCH v4 10/11] sched/fair: optimize find_idlest_group Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Optimize find_idlest_group() tip-bot2 for Vincent Guittot
2019-10-18 13:26 ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Vincent Guittot
2019-10-21  9:12   ` [tip: sched/core] sched/fair: Rework find_idlest_group() tip-bot2 for Vincent Guittot
2019-10-22 16:46   ` [PATCH] sched/fair: fix rework of find_idlest_group() Vincent Guittot
2019-10-23  7:50     ` Chen, Rong A
2019-10-30 16:07     ` Mel Gorman
2019-11-18 17:42     ` [tip: sched/core] sched/fair: Fix " tip-bot2 for Vincent Guittot
2019-11-22 14:37     ` [PATCH] sched/fair: fix " Valentin Schneider
2019-11-25  9:16       ` Vincent Guittot
2019-11-25 11:03         ` Valentin Schneider
2019-11-20 11:58   ` [PATCH v4 11/11] sched/fair: rework find_idlest_group Qais Yousef
2019-11-20 13:21     ` Vincent Guittot
2019-11-20 16:53       ` Vincent Guittot
2019-11-20 17:34         ` Qais Yousef
2019-11-20 17:43           ` Vincent Guittot
2019-11-20 18:10             ` Qais Yousef
2019-11-20 18:20               ` Vincent Guittot
2019-11-20 18:27                 ` Qais Yousef
2019-11-20 19:28                   ` Vincent Guittot
2019-11-20 19:55                     ` Qais Yousef
2019-11-21 14:58                       ` Qais Yousef
2019-11-22 14:34   ` Valentin Schneider
2019-11-25  9:59     ` Vincent Guittot
2019-11-25 11:13       ` Valentin Schneider
2019-10-21  7:50 ` [PATCH v4 00/10] sched/fair: rework the CFS load balance Ingo Molnar
2019-10-21  8:44   ` Vincent Guittot
2019-10-21 12:56     ` Phil Auld
2019-10-24 12:38     ` Phil Auld
2019-10-24 13:46       ` Phil Auld
2019-10-24 14:59         ` Vincent Guittot
2019-10-25 13:33           ` Phil Auld
2019-10-28 13:03             ` Vincent Guittot
2019-10-30 14:39               ` Phil Auld
2019-10-30 16:24                 ` Dietmar Eggemann
2019-10-30 16:35                   ` Valentin Schneider
2019-10-30 17:19                     ` Phil Auld
2019-10-30 17:25                       ` Valentin Schneider
2019-10-30 17:29                         ` Phil Auld
2019-10-30 17:28                       ` Vincent Guittot
2019-10-30 17:44                         ` Phil Auld
2019-10-30 17:25                 ` Vincent Guittot
2019-10-31 13:57                   ` Phil Auld
2019-10-31 16:41                     ` Vincent Guittot
2019-10-30 16:24   ` Mel Gorman
2019-10-30 16:35     ` Vincent Guittot
2019-11-18 13:15     ` Ingo Molnar
2019-11-25 12:48 ` Valentin Schneider
2020-01-03 16:39   ` Valentin Schneider
