* [PATCH] sched/fair: ensure tasks spreading in LLC during LB
@ 2020-11-02 10:24 Vincent Guittot
2020-11-02 13:31 ` Rik van Riel
` (3 more replies)
0 siblings, 4 replies; 5+ messages in thread
From: Vincent Guittot @ 2020-11-02 10:24 UTC
To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
mgorman, linux-kernel, riel, clm
Cc: hannes, Vincent Guittot
schbench shows a latency increase for the 95th percentile and above since:
commit 0b0695f2b34a ("sched/fair: Rework load_balance()")
Align the behavior of the load balancer with the wakeup path, which tries
to select an idle CPU belonging to the LLC for a waking task.
calculate_imbalance() will use nr_running instead of the spare
capacity when CPUs share resources (i.e. cache) at the domain level. This
will ensure a better spread of tasks on idle CPUs.
Running schbench on a hikey (8 cores, arm64) shows the problem:
tip/sched/core :
schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
Latency percentiles (usec)
50.0th: 33
75.0th: 45
90.0th: 51
95.0th: 4152
*99.0th: 14288
99.5th: 14288
99.9th: 14288
min=0, max=14276
tip/sched/core + patch :
schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
Latency percentiles (usec)
50.0th: 34
75.0th: 47
90.0th: 52
95.0th: 78
*99.0th: 94
99.5th: 94
99.9th: 94
min=0, max=94
Fixes: 0b0695f2b34a ("sched/fair: Rework load_balance()")
Reported-by: Chris Mason <clm@fb.com>
Suggested-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
---
kernel/sched/fair.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa4c6227cd6d..210b15f068a6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9031,7 +9031,8 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
* emptying busiest.
*/
if (local->group_type == group_has_spare) {
- if (busiest->group_type > group_fully_busy) {
+ if ((busiest->group_type > group_fully_busy) &&
+ !(env->sd->flags & SD_SHARE_PKG_RESOURCES)) {
/*
* If busiest is overloaded, try to fill spare
* capacity. This might end up creating spare capacity
--
2.17.1
* Re: [PATCH] sched/fair: ensure tasks spreading in LLC during LB
From: Rik van Riel @ 2020-11-02 13:31 UTC
To: Vincent Guittot, mingo, peterz, juri.lelli, dietmar.eggemann,
rostedt, bsegall, mgorman, linux-kernel, clm
Cc: hannes
On Mon, 2020-11-02 at 11:24 +0100, Vincent Guittot wrote:
> Fixes: 0b0695f2b34a ("sched/fair: Rework load_balance()")
> Reported-by: Chris Mason <clm@fb.com>
> Suggested-by: Rik van Riel <riel@surriel.com>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-and-reviewed-by: Rik van Riel <riel@surriel.com>
Thank you!
--
All Rights Reversed.
* Re: [PATCH] sched/fair: ensure tasks spreading in LLC during LB
From: Peter Zijlstra @ 2020-11-10 12:55 UTC
To: Vincent Guittot
Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
linux-kernel, riel, clm, hannes
On Mon, Nov 02, 2020 at 11:24:57AM +0100, Vincent Guittot wrote:
> schbench shows latency increase for 95 percentile above since:
> commit 0b0695f2b34a ("sched/fair: Rework load_balance()")
>
> Align the behavior of the load balancer with the wake up path, which tries
> to select an idle CPU which belongs to the LLC for a waking task.
>
> calculate_imbalance() will use nr_running instead of the spare
> capacity when CPUs share resources (ie cache) at the domain level. This
> will ensure a better spread of tasks on idle CPUs.
>
> Running schbench on a hikey (8cores arm64) shows the problem:
>
> tip/sched/core :
> schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
> Latency percentiles (usec)
> 50.0th: 33
> 75.0th: 45
> 90.0th: 51
> 95.0th: 4152
> *99.0th: 14288
> 99.5th: 14288
> 99.9th: 14288
> min=0, max=14276
>
> tip/sched/core + patch :
> schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
> Latency percentiles (usec)
> 50.0th: 34
> 75.0th: 47
> 90.0th: 52
> 95.0th: 78
> *99.0th: 94
> 99.5th: 94
> 99.9th: 94
> min=0, max=94
>
> Fixes: 0b0695f2b34a ("sched/fair: Rework load_balance()")
> Reported-by: Chris Mason <clm@fb.com>
> Suggested-by: Rik van Riel <riel@surriel.com>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Thanks!
* [tip: sched/urgent] sched/fair: Ensure tasks spreading in LLC during LB
From: tip-bot2 for Vincent Guittot @ 2020-11-11 8:23 UTC
To: linux-tip-commits
Cc: Chris Mason, Rik van Riel, Vincent Guittot,
Peter Zijlstra (Intel),
x86, linux-kernel
The following commit has been merged into the sched/urgent branch of tip:
Commit-ID: 16b0a7a1a0af9db6e008fecd195fe4d6cb366d83
Gitweb: https://git.kernel.org/tip/16b0a7a1a0af9db6e008fecd195fe4d6cb366d83
Author: Vincent Guittot <vincent.guittot@linaro.org>
AuthorDate: Mon, 02 Nov 2020 11:24:57 +01:00
Committer: Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 10 Nov 2020 18:38:48 +01:00
sched/fair: Ensure tasks spreading in LLC during LB
schbench shows latency increase for 95 percentile above since:
commit 0b0695f2b34a ("sched/fair: Rework load_balance()")
Align the behavior of the load balancer with the wake up path, which tries
to select an idle CPU which belongs to the LLC for a waking task.
calculate_imbalance() will use nr_running instead of the spare
capacity when CPUs share resources (ie cache) at the domain level. This
will ensure a better spread of tasks on idle CPUs.
Running schbench on a hikey (8cores arm64) shows the problem:
tip/sched/core :
schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
Latency percentiles (usec)
50.0th: 33
75.0th: 45
90.0th: 51
95.0th: 4152
*99.0th: 14288
99.5th: 14288
99.9th: 14288
min=0, max=14276
tip/sched/core + patch :
schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
Latency percentiles (usec)
50.0th: 34
75.0th: 47
90.0th: 52
95.0th: 78
*99.0th: 94
99.5th: 94
99.9th: 94
min=0, max=94
Fixes: 0b0695f2b34a ("sched/fair: Rework load_balance()")
Reported-by: Chris Mason <clm@fb.com>
Suggested-by: Rik van Riel <riel@surriel.com>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Reviewed-by: Rik van Riel <riel@surriel.com>
Tested-by: Rik van Riel <riel@surriel.com>
Link: https://lkml.kernel.org/r/20201102102457.28808-1-vincent.guittot@linaro.org
---
kernel/sched/fair.c | 3 ++-
1 file changed, 2 insertions(+), 1 deletion(-)
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index aa4c622..210b15f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -9031,7 +9031,8 @@ static inline void calculate_imbalance(struct lb_env *env, struct sd_lb_stats *s
* emptying busiest.
*/
if (local->group_type == group_has_spare) {
- if (busiest->group_type > group_fully_busy) {
+ if ((busiest->group_type > group_fully_busy) &&
+ !(env->sd->flags & SD_SHARE_PKG_RESOURCES)) {
/*
* If busiest is overloaded, try to fill spare
* capacity. This might end up creating spare capacity
* Re: [PATCH] sched/fair: ensure tasks spreading in LLC during LB
From: Mel Gorman @ 2020-11-12 12:34 UTC
To: Vincent Guittot
Cc: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
linux-kernel, riel, clm, hannes
On Mon, Nov 02, 2020 at 11:24:57AM +0100, Vincent Guittot wrote:
> schbench shows latency increase for 95 percentile above since:
> commit 0b0695f2b34a ("sched/fair: Rework load_balance()")
>
> Align the behavior of the load balancer with the wake up path, which tries
> to select an idle CPU which belongs to the LLC for a waking task.
>
> calculate_imbalance() will use nr_running instead of the spare
> capacity when CPUs share resources (ie cache) at the domain level. This
> will ensure a better spread of tasks on idle CPUs.
>
> Running schbench on a hikey (8cores arm64) shows the problem:
>
> tip/sched/core :
> schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
> Latency percentiles (usec)
> 50.0th: 33
> 75.0th: 45
> 90.0th: 51
> 95.0th: 4152
> *99.0th: 14288
> 99.5th: 14288
> 99.9th: 14288
> min=0, max=14276
>
> tip/sched/core + patch :
> schbench -m 2 -t 4 -s 10000 -c 1000000 -r 10
> Latency percentiles (usec)
> 50.0th: 34
> 75.0th: 47
> 90.0th: 52
> 95.0th: 78
> *99.0th: 94
> 99.5th: 94
> 99.9th: 94
> min=0, max=94
>
> Fixes: 0b0695f2b34a ("sched/fair: Rework load_balance()")
> Reported-by: Chris Mason <clm@fb.com>
> Suggested-by: Rik van Riel <riel@surriel.com>
> Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Acked-by: Mel Gorman <mgorman@suse.de>
--
Mel Gorman
SUSE Labs