* [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
@ 2022-09-15 16:54 Chen Yu
  2022-09-15 17:10 ` Tim Chen
                   ` (4 more replies)
  0 siblings, 5 replies; 20+ messages in thread
From: Chen Yu @ 2022-09-15 16:54 UTC (permalink / raw)
  To: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel, Chen Yu

[Background]
At the LPC 2022 Real-time and Scheduling Micro Conference we presented
the cross-CPU wakeup issue. This patch is a text version of the talk,
and hopefully we can clarify the problem; we would appreciate any
feedback.

[Re-sent because the previous copy did not reach LKML; sorry
 for any inconvenience.]

[Problem Statement]
For a workload that is doing frequent context switches, the throughput
scales well until the number of instances reaches a peak point. After
that peak point, the throughput drops significantly if the number of
instances continues to increase.

The will-it-scale context_switch1 test case exposes the issue. The
test platform has 112 CPUs per LLC domain. The will-it-scale launches
1, 8, 16 ... 112 instances respectively. Each instance is composed
of 2 tasks, and each pair of tasks would do ping-pong scheduling via
pipe_read() and pipe_write(). No task is bound to any CPU.
We found that, once the number of instances is higher than
56 (112 tasks in total, 1 task per CPU), the throughput
drops if the number of instances continues to increase:

          ^
throughput|
          |                 X
          |               X   X X
          |             X         X X
          |           X               X
          |         X                   X
          |       X
          |     X
          |   X
          | X
          |
          +-----------------.------------------->
                            56
                                 number of instances
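
For reference, each context_switch1 instance behaves roughly like the
following pair of tasks. This is only a simplified sketch of the
ping-pong pattern, not the actual will-it-scale source:

/* Two tasks ping-pong one byte over two pipes, so every iteration
 * forces a wakeup plus a context switch on both sides. */
#include <stdlib.h>
#include <unistd.h>

int main(void)
{
	int ping[2], pong[2];
	char c = 0;

	if (pipe(ping) || pipe(pong))
		exit(1);

	if (fork() == 0) {
		for (;;) {			/* wakee: wait, then reply */
			read(ping[0], &c, 1);
			write(pong[1], &c, 1);
		}
	}

	for (;;) {				/* waker: send, then wait */
		write(ping[1], &c, 1);
		read(pong[0], &c, 1);
	}
}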

[Symptom analysis]
Both the perf profile and lockstat show that the bottleneck
is the runqueue spinlock. Take the perf profile for example:

nr_instance          rq lock percentage
1                    1.22%
8                    1.17%
16                   1.20%
24                   1.22%
32                   1.46%
40                   1.61%
48                   1.63%
56                   1.65%
--------------------------
64                   3.77%      |
72                   5.90%      | increase
80                   7.95%      |
88                   9.98%      v
96                   11.81%
104                  13.54%
112                  15.13%

And the rq lock bottleneck is composed of two paths (perf profile):

(path1):
raw_spin_rq_lock_nested.constprop.0;
try_to_wake_up;
default_wake_function;
autoremove_wake_function;
__wake_up_common;
__wake_up_common_lock;
__wake_up_sync_key;
pipe_write;
new_sync_write;
vfs_write;
ksys_write;
__x64_sys_write;
do_syscall_64;
entry_SYSCALL_64_after_hwframe;write

(path2):
raw_spin_rq_lock_nested.constprop.0;
__sched_text_start;
schedule_idle;
do_idle;
cpu_startup_entry;
start_secondary;
secondary_startup_64_no_verify

The idle percentage is around 30% when there are 112 instances:
%Cpu0  :  2.7 us, 66.7 sy,  0.0 ni, 30.7 id

As a comparison, if we set CPU affinity to these workloads,
which stops them from migrating among CPUs, the idle percentage
drops to nearly 0%, and the throughput increases by about 300%.
This indicates that there is room for optimization.

A possible scenario for the lock contention:
task A tries to wake up task B on CPU1, so task A grabs the
runqueue lock of CPU1. If CPU1 is about to quit idle at the same
time, it needs to grab its own runqueue lock, which has already
been taken by the waker. CPU1 therefore takes more time to quit
idle, which hurts performance.

TTWU_QUEUE could mitigate the cross CPU runqueue lock contention.
Since commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU
on wakelist if wakee cpu is idle"), TTWU_QUEUE offloads the work from
the waker and leverages the idle CPU to queue the wakee. However, a long
idle duration is still observed. The idle task spends quite some time
on sched_ttwu_pending() before it switches out. This long idle
duration misleads SIS_UTIL, which then suggests that the waker scan
more CPUs. The time spent searching for an idle CPU makes the wakee
wait longer, which in turn leads to more idle time. The NEWLY_IDLE
balance fails to pull tasks to the idle CPU, possibly because no
runnable wakee can be found.

[Proposal]
If a system is busy and the workloads are doing frequent context
switches, it might not be a good idea to spread the wakees across
different CPUs. Instead, taking the task running time into account
and strengthening wake affine might be more applicable.

This idea was suggested by Rik at LPC 2019 when discussing
latency nice. He asked the following question: if P1 is a task with a
small time slice running on a CPU, can we put the waking task P2 on
that CPU and wait for P1 to release it, without wasting time searching
for an idle CPU? At LPC 2021, Vincent Guittot proposed:
1. If the wakee is a long-running task, should we skip the short idle CPU?
2. If the wakee is a short-running task, can we put it onto a lightly loaded
   local CPU?

Current proposal is a variant of 2:
If the target CPU is running a short-time slice task, and the wakee
is also a short-time slice task, the target CPU could be chosen as the
candidate when the system is busy.

The definition of a short-time slice task is: the average running time
of the task during each run is no more than sysctl_sched_min_granularity.
That is, if a task switches in and then voluntarily relinquishes the CPU
quickly, it is regarded as a short-running task.
sysctl_sched_min_granularity was chosen because it is the minimal slice
when there are too many runnable tasks (see __sched_period()).

Reuse the nr_idle_scan of SIS_UTIL to decide if the system is busy.
If yes, then a compromised "idle" CPU might be acceptable.

The reason is that, if the waker is a short-running task, it will
likely relinquish the CPU soon, so the wakee has a chance to be
scheduled quickly. On the other hand, if the wakee is also a
short-running task, the impact it brings to the target CPU is small.
And if the system is already busy, maybe we could lower the bar for
finding an idle CPU. The effect is that wake affine is enhanced.

[Benchmark results]
The baseline is 6.0-rc4.

The throughput of will-it-scale.context_switch1 has been increased by
331.13% with this patch applied.

netperf
=======
case            	load    	baseline(std%)	compare%( std%)
TCP_RR          	28 threads	 1.00 (  0.57)	 +0.29 (  0.59)
TCP_RR          	56 threads	 1.00 (  0.49)	 +0.43 (  0.43)
TCP_RR          	84 threads	 1.00 (  0.34)	 +0.24 (  0.34)
TCP_RR          	112 threads	 1.00 (  0.26)	 +1.57 (  0.20)
TCP_RR          	140 threads	 1.00 (  0.20)	+178.05 (  8.83)
TCP_RR          	168 threads	 1.00 ( 10.14)	 +0.87 ( 10.03)
TCP_RR          	196 threads	 1.00 ( 13.51)	 +0.90 ( 11.84)
TCP_RR          	224 threads	 1.00 (  7.12)	 +0.66 (  8.28)
UDP_RR          	28 threads	 1.00 (  0.96)	 -0.10 (  0.97)
UDP_RR          	56 threads	 1.00 ( 10.93)	 +0.24 (  0.82)
UDP_RR          	84 threads	 1.00 (  8.99)	 +0.40 (  0.71)
UDP_RR          	112 threads	 1.00 (  0.15)	 +0.72 (  7.77)
UDP_RR          	140 threads	 1.00 ( 11.11)	+135.81 ( 13.86)
UDP_RR          	168 threads	 1.00 ( 12.58)	+147.63 ( 12.72)
UDP_RR          	196 threads	 1.00 ( 19.47)	 -0.34 ( 16.14)
UDP_RR          	224 threads	 1.00 ( 12.88)	 -0.35 ( 12.73)

hackbench
=========
case            	load    	baseline(std%)	compare%( std%)
process-pipe    	1 group 	 1.00 (  1.02)	 +0.14 (  0.62)
process-pipe    	2 groups 	 1.00 (  0.73)	 +0.29 (  0.51)
process-pipe    	4 groups 	 1.00 (  0.16)	 +0.24 (  0.31)
process-pipe    	8 groups 	 1.00 (  0.06)	+11.56 (  0.11)
process-sockets 	1 group 	 1.00 (  1.59)	 +0.06 (  0.77)
process-sockets 	2 groups 	 1.00 (  1.13)	 -1.86 (  1.31)
process-sockets 	4 groups 	 1.00 (  0.14)	 +1.76 (  0.29)
process-sockets 	8 groups 	 1.00 (  0.27)	 +2.73 (  0.10)
threads-pipe    	1 group 	 1.00 (  0.43)	 +0.83 (  2.20)
threads-pipe    	2 groups 	 1.00 (  0.52)	 +1.03 (  0.55)
threads-pipe    	4 groups 	 1.00 (  0.44)	 -0.08 (  0.31)
threads-pipe    	8 groups 	 1.00 (  0.04)	+11.86 (  0.05)
threads-sockets 	1 groups 	 1.00 (  1.89)	 +3.51 (  0.57)
threads-sockets 	2 groups 	 1.00 (  0.04)	 -1.12 (  0.69)
threads-sockets 	4 groups 	 1.00 (  0.14)	 +1.77 (  0.18)
threads-sockets 	8 groups 	 1.00 (  0.03)	 +2.75 (  0.03)

tbench
======
case            	load    	baseline(std%)	compare%( std%)
loopback        	28 threads	 1.00 (  0.08)	 +0.51 (  0.25)
loopback        	56 threads	 1.00 (  0.15)	 -0.89 (  0.16)
loopback        	84 threads	 1.00 (  0.03)	 +0.35 (  0.07)
loopback        	112 threads	 1.00 (  0.06)	 +2.84 (  0.01)
loopback        	140 threads	 1.00 (  0.07)	 +0.69 (  0.11)
loopback        	168 threads	 1.00 (  0.09)	 +0.14 (  0.18)
loopback        	196 threads	 1.00 (  0.04)	 -0.18 (  0.20)
loopback        	224 threads	 1.00 (  0.25)	 -0.37 (  0.03)

Other benchmarks are under testing.

This patch is more about enhancing wake affine than about improving
SIS efficiency, so Mel's SIS statistics patch was not deployed for now.

[Limitations]
When the number of CPUs suggested by SIS_UTIL is lower than 60% of the LLC
CPUs, the LLC domain is regarded as relatively busy. However, the 60%
threshold is somewhat hacky, because it indicates that the util_avg% is
around 50%, a half-busy LLC. I don't have another lightweight yet accurate
method in mind to check whether the LLC domain is busy or not.
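
For reference, my reading of the math in update_idle_cpu_scan() (please
double check me here): the scan ratio follows roughly

    nr_scan / llc_weight = 1 - (util_avg% / 85%)^2

dropping to 0 at the 85% overload threshold. Solving for a scan ratio of
60% gives util_avg% = 85% * sqrt(0.4) ~= 54%, which is why I describe the
60% cut-off as "around 50%" utilization.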

[Misc]
At LPC we received useful suggestions. The first one is that we should look at
the time from when the task is woken up to when it goes back to sleep.
I assume this is aligned with what is proposed here - we consider the average
running time rather than the total running time. The second one is that we
should also consider the long-running task; this is under investigation.

Besides, Prateek has mentioned that SIS_UTIL is unable to deal with
bursty workloads, because there is a delay before the instantaneous
utilization is reflected, and SIS_UTIL expects the workload to be stable.
If the system is idle most of the time but the workload suddenly bursts,
SIS_UTIL overscans. The current patch might mitigate this symptom somewhat,
as a bursty workload is usually regarded as a short-running task.

Suggested-by: Tim Chen <tim.c.chen@intel.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
---
 kernel/sched/fair.c | 31 ++++++++++++++++++++++++++++++-
 1 file changed, 30 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 914096c5b1ae..7519ab5b911c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6020,6 +6020,19 @@ static int wake_wide(struct task_struct *p)
 	return 1;
 }
 
+/*
+ * If a task switches in and then voluntarily relinquishes the
+ * CPU quickly, it is regarded as a short running task.
+ * sysctl_sched_min_granularity is chosen as the threshold,
+ * as this value is the minimal slice if there are too many
+ * runnable tasks, see __sched_period().
+ */
+static int is_short_task(struct task_struct *p)
+{
+	return (p->se.sum_exec_runtime <=
+		(p->nvcsw * sysctl_sched_min_granularity));
+}
+
 /*
  * The purpose of wake_affine() is to quickly determine on which CPU we can run
  * soonest. For the purpose of speed we only consider the waking and previous
@@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
 	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
 		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
 
-	if (sync && cpu_rq(this_cpu)->nr_running == 1)
+	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
+	    is_short_task(cpu_curr(this_cpu)))
 		return this_cpu;
 
 	if (available_idle_cpu(prev_cpu))
@@ -6434,6 +6448,21 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
 			/* overloaded LLC is unlikely to have idle cpu/core */
 			if (nr == 1)
 				return -1;
+
+			/*
+			 * If nr is smaller than 60% of llc_weight, it
+			 * indicates that the util_avg% is higher than 50%.
+			 * This is calculated by SIS_UTIL in
+			 * update_idle_cpu_scan(). The 50% util_avg indicates
+			 * a half-busy LLC domain. System busier than this
+			 * level could lower its bar to choose a compromised
+			 * "idle" CPU. If the waker on target CPU is a short
+			 * task and the wakee is also a short task, pick
+			 * target directly.
+			 */
+			if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
+			    is_short_task(p) && is_short_task(cpu_curr(target)))
+				return target;
 		}
 	}
 
-- 
2.25.1



* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-15 16:54 [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
@ 2022-09-15 17:10 ` Tim Chen
  2022-09-16 10:49   ` Chen Yu
  2022-09-16 11:45 ` Peter Zijlstra
                   ` (3 subsequent siblings)
  4 siblings, 1 reply; 20+ messages in thread
From: Tim Chen @ 2022-09-15 17:10 UTC (permalink / raw)
  To: Chen Yu, Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

On Fri, 2022-09-16 at 00:54 +0800, Chen Yu wrote:
> 
> +/*
> + * If a task switches in and then voluntarily relinquishes the
> + * CPU quickly, it is regarded as a short running task.
> + * sysctl_sched_min_granularity is chosen as the threshold,
> + * as this value is the minimal slice if there are too many
> + * runnable tasks, see __sched_period().
> + */
> +static int is_short_task(struct task_struct *p)
> +{
> +	return (p->se.sum_exec_runtime <=
> +		(p->nvcsw * sysctl_sched_min_granularity));
> +}
> +
>  /*
>   * The purpose of wake_affine() is to quickly determine on which CPU we can run
>   * soonest. For the purpose of speed we only consider the waking and previous
> @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
>  	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
>  		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
>  
> -	if (sync && cpu_rq(this_cpu)->nr_running == 1)
> +	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> +	    is_short_task(cpu_curr(this_cpu)))
>  		return this_cpu;
>  
>  	if (available_idle_cpu(prev_cpu))
> @@ -6434,6 +6448,21 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>  			/* overloaded LLC is unlikely to have idle cpu/core */
>  			if (nr == 1)
>  				return -1;
> +
> +			/*
> +			 * If nr is smaller than 60% of llc_weight, it
> +			 * indicates that the util_avg% is higher than 50%.
> +			 * This is calculated by SIS_UTIL in
> +			 * update_idle_cpu_scan(). The 50% util_avg indicates
> +			 * a half-busy LLC domain. System busier than this
> +			 * level could lower its bar to choose a compromised
> +			 * "idle" CPU. If the waker on target CPU is a short
> +			 * task and the wakee is also a short task, pick
> +			 * target directly.
> +			 */
> +			if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
> +			    is_short_task(p) && is_short_task(cpu_curr(target)))

Should we check whether the target rq's nr_running is 1, and whether there's a
pending waking task, before picking it?

> +				return target;
>  		}
>  	}
>  

Thanks.

Tim



* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-15 17:10 ` Tim Chen
@ 2022-09-16 10:49   ` Chen Yu
  0 siblings, 0 replies; 20+ messages in thread
From: Chen Yu @ 2022-09-16 10:49 UTC (permalink / raw)
  To: Tim Chen
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

On 2022-09-15 at 10:10:25 -0700, Tim Chen wrote:
> On Fri, 2022-09-16 at 00:54 +0800, Chen Yu wrote:
> > 
> > +/*
> > + * If a task switches in and then voluntarily relinquishes the
> > + * CPU quickly, it is regarded as a short running task.
> > + * sysctl_sched_min_granularity is chosen as the threshold,
> > + * as this value is the minimal slice if there are too many
> > + * runnable tasks, see __sched_period().
> > + */
> > +static int is_short_task(struct task_struct *p)
> > +{
> > +	return (p->se.sum_exec_runtime <=
> > +		(p->nvcsw * sysctl_sched_min_granularity));
> > +}
> > +
> >  /*
> >   * The purpose of wake_affine() is to quickly determine on which CPU we can run
> >   * soonest. For the purpose of speed we only consider the waking and previous
> > @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
> >  	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
> >  		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
> >  
> > -	if (sync && cpu_rq(this_cpu)->nr_running == 1)
> > +	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> > +	    is_short_task(cpu_curr(this_cpu)))
> >  		return this_cpu;
> >  
> >  	if (available_idle_cpu(prev_cpu))
> > @@ -6434,6 +6448,21 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> >  			/* overloaded LLC is unlikely to have idle cpu/core */
> >  			if (nr == 1)
> >  				return -1;
> > +
> > +			/*
> > +			 * If nr is smaller than 60% of llc_weight, it
> > +			 * indicates that the util_avg% is higher than 50%.
> > +			 * This is calculated by SIS_UTIL in
> > +			 * update_idle_cpu_scan(). The 50% util_avg indicates
> > +			 * a half-busy LLC domain. System busier than this
> > +			 * level could lower its bar to choose a compromised
> > +			 * "idle" CPU. If the waker on target CPU is a short
> > +			 * task and the wakee is also a short task, pick
> > +			 * target directly.
> > +			 */
> > +			if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
> > +			    is_short_task(p) && is_short_task(cpu_curr(target)))
> 
> Should we check whether the target rq's nr_running is 1, and whether there's a
> pending waking task, before picking it?
>
Yes, we can consider these two factors, and then the criteria to pick a target CPU
would be stricter. After taking nr_running and the pending wakeup request into
consideration, I think it would become a variant of WF_SYNC and we could get rid
of the 'system should be busy' restriction. I'll do some tests in this direction.
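
A rough and untested sketch of the stricter check in select_idle_cpu()
might look like the following (assuming rq->ttwu_pending is the right
field to test for a pending wakeup):

			if (!has_idle_core &&
			    cpu_rq(target)->nr_running == 1 &&
			    !cpu_rq(target)->ttwu_pending &&
			    is_short_task(p) && is_short_task(cpu_curr(target)))
				return target;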

thanks,
Chenyu
> > +				return target;
> >  		}
> >  	}
> >  
> 
> Thanks.
> 
> Tim
> 


* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-15 16:54 [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
  2022-09-15 17:10 ` Tim Chen
@ 2022-09-16 11:45 ` Peter Zijlstra
  2022-09-17 13:55   ` Chen Yu
  2022-09-16 11:47 ` Peter Zijlstra
                   ` (2 subsequent siblings)
  4 siblings, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2022-09-16 11:45 UTC (permalink / raw)
  To: Chen Yu
  Cc: Vincent Guittot, Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel,
	Aaron Lu, Abel Wu, K Prateek Nayak, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

On Fri, Sep 16, 2022 at 12:54:07AM +0800, Chen Yu wrote:
> And the rq lock bottleneck is composed of two paths(perf profile):
> 
> (path1):
> raw_spin_rq_lock_nested.constprop.0;
> try_to_wake_up;
> default_wake_function;
> autoremove_wake_function;
> __wake_up_common;
> __wake_up_common_lock;
> __wake_up_sync_key;
> pipe_write;
> new_sync_write;
> vfs_write;
> ksys_write;
> __x64_sys_write;
> do_syscall_64;
> entry_SYSCALL_64_after_hwframe;write

Can you please addr2line -i the raw_spin_rq_lock callsite so we know which is
the one causing grief?

Specifically; I'm worried about PSI, psi_ttwu_dequeue() can cause ttwu()
to take _2_ rq->lock, which absolutely blows for this case.


* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-15 16:54 [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
  2022-09-15 17:10 ` Tim Chen
  2022-09-16 11:45 ` Peter Zijlstra
@ 2022-09-16 11:47 ` Peter Zijlstra
  2022-09-17 14:15   ` Chen Yu
  2022-09-26  5:50 ` K Prateek Nayak
  2022-09-29  8:00 ` Vincent Guittot
  4 siblings, 1 reply; 20+ messages in thread
From: Peter Zijlstra @ 2022-09-16 11:47 UTC (permalink / raw)
  To: Chen Yu
  Cc: Vincent Guittot, Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel,
	Aaron Lu, Abel Wu, K Prateek Nayak, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

On Fri, Sep 16, 2022 at 12:54:07AM +0800, Chen Yu wrote:

> Current proposal is a variant of 2:
> If the target CPU is running a short-time slice task, and the wakee
> is also a short-time slice task, the target CPU could be chosen as the
> candidate when the system is busy.

Since this benchmark only has short running tasks, the result is that
you always pick the local cpu and therefore the migrations are reduced?

Doesn't this inhibit spreading the workload when there are genuinely idle
CPUs around?


* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-16 11:45 ` Peter Zijlstra
@ 2022-09-17 13:55   ` Chen Yu
  0 siblings, 0 replies; 20+ messages in thread
From: Chen Yu @ 2022-09-17 13:55 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Vincent Guittot, Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel,
	Aaron Lu, Abel Wu, K Prateek Nayak, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

On 2022-09-16 at 13:45:00 +0200, Peter Zijlstra wrote:
> On Fri, Sep 16, 2022 at 12:54:07AM +0800, Chen Yu wrote:
> > And the rq lock bottleneck is composed of two paths(perf profile):
> > 
> > (path1):
> > raw_spin_rq_lock_nested.constprop.0;
> > try_to_wake_up;
> > default_wake_function;
> > autoremove_wake_function;
> > __wake_up_common;
> > __wake_up_common_lock;
> > __wake_up_sync_key;
> > pipe_write;
> > new_sync_write;
> > vfs_write;
> > ksys_write;
> > __x64_sys_write;
> > do_syscall_64;
> > entry_SYSCALL_64_after_hwframe;write
> 
> Can you please addr2line -i the raw_spin_rq_lock callsite so we know which is
> the one causing grief?
> 
> Specifically; I'm worried about PSI, psi_ttwu_dequeue() can cause ttwu()
> to take _2_ rq->lock, which absolutely blows for this case.
The above perf profile was captured with 'psi=0' appended to the boot
command line, and with NO_TTWU_QUEUE on 6.0-rc4. To narrow things down we
disabled psi the first time we saw rq lock contention, but even with psi=0
we still observe the rq lock contention.

To confirm this, 'perf report -F+period,srcline' was used to leverage
addr2line to resolve the source line. However, it seems that with DWARF v4
enabled in the kernel, the rq lock issue could not be reproduced. So I
hacked the code to make ttwu_queue() non-static, and the perf profile shows
that it grabs the rq lock:

raw_spin_rq_lock_nested.constprop.0;
ttwu_queue;    <----------
try_to_wake_up;
default_wake_function;
autoremove_wake_function;
__wake_up_common;
__wake_up_common_lock;
__wake_up_sync_key;
pipe_write;
vfs_write;
ksys_write;
__x64_sys_write;
do_syscall_64;
entry_SYSCALL_64_after_hwframe;
write

Then, if TTWU_QUEUE is enabled, the rq lock contention issue could
not be reproduced, but a long idle duration was still observed due to
sched_ttwu_pending() (as described in the commit log).
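
For reference, the hack was roughly the following (a sketch only; the
exact signature is from memory), so that ttwu_queue() shows up as a
separate symbol in the profile:

-static void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)
+void ttwu_queue(struct task_struct *p, int cpu, int wake_flags)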


thanks,
Chenyu


* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-16 11:47 ` Peter Zijlstra
@ 2022-09-17 14:15   ` Chen Yu
  0 siblings, 0 replies; 20+ messages in thread
From: Chen Yu @ 2022-09-17 14:15 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Vincent Guittot, Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel,
	Aaron Lu, Abel Wu, K Prateek Nayak, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

On 2022-09-16 at 13:47:49 +0200, Peter Zijlstra wrote:
> On Fri, Sep 16, 2022 at 12:54:07AM +0800, Chen Yu wrote:
> 
> > Current proposal is a variant of 2:
> > If the target CPU is running a short-time slice task, and the wakee
> > is also a short-time slice task, the target CPU could be chosen as the
> > candidate when the system is busy.
> 
> Since this benchmark only has short running tasks, the result is that
> you always pick the local cpu and therefore the migrations are reduced?
>
Yes, local cpu is preferred.
> Doesn't this inhibit spreading the workload when there are genuinely idle
> CPUs around?
Yes, some idle CPUs could go undetected, although this strategy only takes
effect when the system is busy. And maybe we could raise the bar to enable
this strategy. For example, as Tim mentioned, if the target CPU is running
a short-running task, its nr_running is 1, and there's no ttwu_pending flag
set on this CPU, we can choose the target. I don't have a good idea on how
to extract the criteria to describe the scenario, for example, how to detect
that the sched_domain has too many context switches so that we can safely
inhibit spreading the workload.

thanks,
Chenyu


* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-15 16:54 [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
                   ` (2 preceding siblings ...)
  2022-09-16 11:47 ` Peter Zijlstra
@ 2022-09-26  5:50 ` K Prateek Nayak
  2022-09-26 14:39   ` Gautham R. Shenoy
  2022-09-29  5:25   ` Chen Yu
  2022-09-29  8:00 ` Vincent Guittot
  4 siblings, 2 replies; 20+ messages in thread
From: K Prateek Nayak @ 2022-09-26  5:50 UTC (permalink / raw)
  To: Chen Yu, Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

Hello Chenyu,

When testing the patch on a dual socket Zen3 system (2 x 64C/128T) we
noticed regressions in some standard benchmarks.

tl;dr

o Hackbench shows a noticeable regression in most cases. Looking at schedstat
  data, we see an increased number of affine wakeups and an increase
  in the average wait time. As the LLC size on the Zen3 machine is only
  16 CPUs, there is a good chance the LLC was overloaded and required
  intervention from the load balancer to distribute tasks optimally.

o There is a regression in Stream which is caused by more than one Stream
  thread piling up on the same LLC. This happens as a result of migration
  in the wakeup path, where the logic goes for an affine wakeup if the
  waker is a short-running task, even if the sync flag is not set and the
  previous CPU might be idle.

I'll inline the results and detailed observations below:

On 9/15/2022 10:24 PM, Chen Yu wrote:
> [Background]
> At the LPC 2022 Real-time and Scheduling Micro Conference we presented
> the cross-CPU wakeup issue. This patch is a text version of the talk,
> and hopefully we can clarify the problem; we would appreciate any
> feedback.
> 
> [Re-sent because the previous copy did not reach LKML; sorry
>  for any inconvenience.]
> 
> [Problem Statement]
> For a workload that is doing frequent context switches, the throughput
> scales well until the number of instances reaches a peak point. After
> that peak point, the throughput drops significantly if the number of
> instances continues to increase.
> 
> The will-it-scale context_switch1 test case exposes the issue. The
> test platform has 112 CPUs per LLC domain. The will-it-scale launches
> 1, 8, 16 ... 112 instances respectively. Each instance is composed
> of 2 tasks, and each pair of tasks would do ping-pong scheduling via
> pipe_read() and pipe_write(). No task is bound to any CPU.
> We found that, once the number of instances is higher than
> 56 (112 tasks in total, 1 task per CPU), the throughput
> drops if the number of instances continues to increase:
> 
>           ^
> throughput|
>           |                 X
>           |               X   X X
>           |             X         X X
>           |           X               X
>           |         X                   X
>           |       X
>           |     X
>           |   X
>           | X
>           |
>           +-----------------.------------------->
>                             56
>                                  number of instances
> 
> [Symptom analysis]
> Both the perf profile and lockstat show that the bottleneck
> is the runqueue spinlock. Take the perf profile for example:
> 
> nr_instance          rq lock percentage
> 1                    1.22%
> 8                    1.17%
> 16                   1.20%
> 24                   1.22%
> 32                   1.46%
> 40                   1.61%
> 48                   1.63%
> 56                   1.65%
> --------------------------
> 64                   3.77%      |
> 72                   5.90%      | increase
> 80                   7.95%      |
> 88                   9.98%      v
> 96                   11.81%
> 104                  13.54%
> 112                  15.13%
> 
> And the rq lock bottleneck is composed of two paths(perf profile):
> 
> (path1):
> raw_spin_rq_lock_nested.constprop.0;
> try_to_wake_up;
> default_wake_function;
> autoremove_wake_function;
> __wake_up_common;
> __wake_up_common_lock;
> __wake_up_sync_key;
> pipe_write;
> new_sync_write;
> vfs_write;
> ksys_write;
> __x64_sys_write;
> do_syscall_64;
> entry_SYSCALL_64_after_hwframe;write
> 
> (path2):
> raw_spin_rq_lock_nested.constprop.0;
> __sched_text_start;
> schedule_idle;
> do_idle;
> cpu_startup_entry;
> start_secondary;
> secondary_startup_64_no_verify
> 
> The idle percentage is around 30% when there are 112 instances:
> %Cpu0  :  2.7 us, 66.7 sy,  0.0 ni, 30.7 id
> 
> As a comparison, if we set CPU affinity to these workloads,
> which stops them from migrating among CPUs, the idle percentage
> drops to nearly 0%, and the throughput increases by about 300%.
> This indicates that there is room for optimization.
> 
> A possible scenario for the lock contention:
> task A tries to wake up task B on CPU1, so task A grabs the
> runqueue lock of CPU1. If CPU1 is about to quit idle at the same
> time, it needs to grab its own runqueue lock, which has already
> been taken by the waker. CPU1 therefore takes more time to quit
> idle, which hurts performance.
> 
> TTWU_QUEUE could mitigate the cross CPU runqueue lock contention.
> Since commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU
> on wakelist if wakee cpu is idle"), TTWU_QUEUE offloads the work from
> the waker and leverages the idle CPU to queue the wakee. However, a long
> idle duration is still observed. The idle task spends quite some time
> on sched_ttwu_pending() before it switches out. This long idle
> duration misleads SIS_UTIL, which then suggests that the waker scan
> more CPUs. The time spent searching for an idle CPU makes the wakee
> wait longer, which in turn leads to more idle time. The NEWLY_IDLE
> balance fails to pull tasks to the idle CPU, possibly because no
> runnable wakee can be found.
> 
> [Proposal]
> If a system is busy and the workloads are doing frequent context
> switches, it might not be a good idea to spread the wakees across
> different CPUs. Instead, taking the task running time into account
> and strengthening wake affine might be more applicable.
> 
> This idea was suggested by Rik at LPC 2019 when discussing
> latency nice. He asked the following question: if P1 is a task with a
> small time slice running on a CPU, can we put the waking task P2 on
> that CPU and wait for P1 to release it, without wasting time searching
> for an idle CPU? At LPC 2021, Vincent Guittot proposed:
> 1. If the wakee is a long-running task, should we skip the short idle CPU?
> 2. If the wakee is a short-running task, can we put it onto a lightly loaded
>    local CPU?
> 
> Current proposal is a variant of 2:
> If the target CPU is running a short-time slice task, and the wakee
> is also a short-time slice task, the target CPU could be chosen as the
> candidate when the system is busy.
> 
> The definition of a short-time slice task is: the average running time
> of the task during each run is no more than sysctl_sched_min_granularity.
> That is, if a task switches in and then voluntarily relinquishes the CPU
> quickly, it is regarded as a short-running task.
> sysctl_sched_min_granularity was chosen because it is the minimal slice
> when there are too many runnable tasks (see __sched_period()).
> 
> Reuse the nr_idle_scan of SIS_UTIL to decide if the system is busy.
> If yes, then a compromised "idle" CPU might be acceptable.
> 
> The reason is that, if the waker is a short-running task, it will
> likely relinquish the CPU soon, so the wakee has a chance to be
> scheduled quickly. On the other hand, if the wakee is also a
> short-running task, the impact it brings to the target CPU is small.
> And if the system is already busy, maybe we could lower the bar for
> finding an idle CPU. The effect is that wake affine is enhanced.
> 
> [Benchmark results]
> The baseline is 6.0-rc4.
> 
> The throughput of will-it-scale.context_switch1 has been increased by
> 331.13% with this patch applied.
> 
> netperf
> =======
> case            	load    	baseline(std%)	compare%( std%)
> TCP_RR          	28 threads	 1.00 (  0.57)	 +0.29 (  0.59)
> TCP_RR          	56 threads	 1.00 (  0.49)	 +0.43 (  0.43)
> TCP_RR          	84 threads	 1.00 (  0.34)	 +0.24 (  0.34)
> TCP_RR          	112 threads	 1.00 (  0.26)	 +1.57 (  0.20)
> TCP_RR          	140 threads	 1.00 (  0.20)	+178.05 (  8.83)
> TCP_RR          	168 threads	 1.00 ( 10.14)	 +0.87 ( 10.03)
> TCP_RR          	196 threads	 1.00 ( 13.51)	 +0.90 ( 11.84)
> TCP_RR          	224 threads	 1.00 (  7.12)	 +0.66 (  8.28)
> UDP_RR          	28 threads	 1.00 (  0.96)	 -0.10 (  0.97)
> UDP_RR          	56 threads	 1.00 ( 10.93)	 +0.24 (  0.82)
> UDP_RR          	84 threads	 1.00 (  8.99)	 +0.40 (  0.71)
> UDP_RR          	112 threads	 1.00 (  0.15)	 +0.72 (  7.77)
> UDP_RR          	140 threads	 1.00 ( 11.11)	+135.81 ( 13.86)
> UDP_RR          	168 threads	 1.00 ( 12.58)	+147.63 ( 12.72)
> UDP_RR          	196 threads	 1.00 ( 19.47)	 -0.34 ( 16.14)
> UDP_RR          	224 threads	 1.00 ( 12.88)	 -0.35 ( 12.73)
> 
> hackbench
> =========
> case            	load    	baseline(std%)	compare%( std%)
> process-pipe    	1 group 	 1.00 (  1.02)	 +0.14 (  0.62)
> process-pipe    	2 groups 	 1.00 (  0.73)	 +0.29 (  0.51)
> process-pipe    	4 groups 	 1.00 (  0.16)	 +0.24 (  0.31)
> process-pipe    	8 groups 	 1.00 (  0.06)	+11.56 (  0.11)
> process-sockets 	1 group 	 1.00 (  1.59)	 +0.06 (  0.77)
> process-sockets 	2 groups 	 1.00 (  1.13)	 -1.86 (  1.31)
> process-sockets 	4 groups 	 1.00 (  0.14)	 +1.76 (  0.29)
> process-sockets 	8 groups 	 1.00 (  0.27)	 +2.73 (  0.10)
> threads-pipe    	1 group 	 1.00 (  0.43)	 +0.83 (  2.20)
> threads-pipe    	2 groups 	 1.00 (  0.52)	 +1.03 (  0.55)
> threads-pipe    	4 groups 	 1.00 (  0.44)	 -0.08 (  0.31)
> threads-pipe    	8 groups 	 1.00 (  0.04)	+11.86 (  0.05)
> threads-sockets 	1 groups 	 1.00 (  1.89)	 +3.51 (  0.57)
> threads-sockets 	2 groups 	 1.00 (  0.04)	 -1.12 (  0.69)
> threads-sockets 	4 groups 	 1.00 (  0.14)	 +1.77 (  0.18)
> threads-sockets 	8 groups 	 1.00 (  0.03)	 +2.75 (  0.03)
> 
> tbench
> ======
> case            	load    	baseline(std%)	compare%( std%)
> loopback        	28 threads	 1.00 (  0.08)	 +0.51 (  0.25)
> loopback        	56 threads	 1.00 (  0.15)	 -0.89 (  0.16)
> loopback        	84 threads	 1.00 (  0.03)	 +0.35 (  0.07)
> loopback        	112 threads	 1.00 (  0.06)	 +2.84 (  0.01)
> loopback        	140 threads	 1.00 (  0.07)	 +0.69 (  0.11)
> loopback        	168 threads	 1.00 (  0.09)	 +0.14 (  0.18)
> loopback        	196 threads	 1.00 (  0.04)	 -0.18 (  0.20)
> loopback        	224 threads	 1.00 (  0.25)	 -0.37 (  0.03)
> 
> Other benchmarks are under testing.

Discussed below are the results from running standard benchmarks on
a dual socket Zen3 (2 x 64C/128T) machine configured in different
NPS modes.

NPS Modes are used to logically divide a single socket into
multiple NUMA regions.
Following is the NUMA configuration for each NPS mode on the system:

NPS1: Each socket is a NUMA node.
    Total 2 NUMA nodes in the dual socket machine.

    Node 0: 0-63,   128-191
    Node 1: 64-127, 192-255

NPS2: Each socket is further logically divided into 2 NUMA regions.
    Total 4 NUMA nodes exist over 2 sockets.
   
    Node 0: 0-31,   128-159
    Node 1: 32-63,  160-191
    Node 2: 64-95,  192-223
    Node 3: 96-127, 224-255

NPS4: Each socket is logically divided into 4 NUMA regions.
    Total 8 NUMA nodes exist over 2 sockets.
   
    Node 0: 0-15,    128-143
    Node 1: 16-31,   144-159
    Node 2: 32-47,   160-175
    Node 3: 48-63,   176-191
    Node 4: 64-79,   192-207
    Node 5: 80-95,   208-223
    Node 6: 96-111,  224-239
    Node 7: 112-127, 240-255

Benchmark Results:

Kernel versions:
- tip:       5.19.0 tip sched/core
- shortrun:  5.19.0 tip sched/core + this patch

When we started testing, the tip was at:
commit 7e9518baed4c ("sched/fair: Move call to list_last_entry() in detach_tasks")

~~~~~~~~~~~~~
~ hackbench ~
~~~~~~~~~~~~~

NPS1

Test:			tip			shortrun
 1-groups:	   4.23 (0.00 pct)	   4.24 (-0.23 pct)
 2-groups:	   4.93 (0.00 pct)	   5.68 (-15.21 pct)
 4-groups:	   5.32 (0.00 pct)	   6.21 (-16.72 pct)
 8-groups:	   5.46 (0.00 pct)	   6.49 (-18.86 pct)
16-groups:	   7.31 (0.00 pct)	   7.78 (-6.42 pct)

NPS2

Test:			tip			shortrun
 1-groups:	   4.19 (0.00 pct)	   4.19 (0.00 pct)
 2-groups:	   4.77 (0.00 pct)	   5.43 (-13.83 pct)
 4-groups:	   5.15 (0.00 pct)	   6.20 (-20.38 pct)
 8-groups:	   5.47 (0.00 pct)	   6.54 (-19.56 pct)
16-groups:	   6.63 (0.00 pct)	   7.28 (-9.80 pct)

NPS4

Test:			tip			shortrun
 1-groups:	   4.23 (0.00 pct)	   4.39 (-3.78 pct)
 2-groups:	   4.78 (0.00 pct)	   5.48 (-14.64 pct)
 4-groups:	   5.17 (0.00 pct)	   6.14 (-18.76 pct)
 8-groups:	   5.63 (0.00 pct)	   6.51 (-15.63 pct)
16-groups:	   7.88 (0.00 pct)	   7.03 (10.78 pct)

~~~~~~~~~~~~
~ schbench ~
~~~~~~~~~~~~

NPS1

#workers:       tip			shortrun
  1:	  22.00 (0.00 pct)	  36.00 (-63.63 pct)
  2:	  34.00 (0.00 pct)	  38.00 (-11.76 pct)
  4:	  37.00 (0.00 pct)	  36.00 (2.70 pct)
  8:	  55.00 (0.00 pct)	  51.00 (7.27 pct)
 16:	  69.00 (0.00 pct)	  68.00 (1.44 pct)
 32:	 113.00 (0.00 pct)	 116.00 (-2.65 pct)
 64:	 219.00 (0.00 pct)	 232.00 (-5.93 pct)
128:	 506.00 (0.00 pct)	 1019.00 (-101.38 pct)
256:	 45440.00 (0.00 pct)	 44864.00 (1.26 pct)
512:	 76672.00 (0.00 pct)	 73600.00 (4.00 pct)

NPS2

#workers:	tip			shortrun
  1:	  31.00 (0.00 pct)	  36.00 (-16.12 pct)
  2:	  36.00 (0.00 pct)	  36.00 (0.00 pct)
  4:	  45.00 (0.00 pct)	  39.00 (13.33 pct)
  8:	  47.00 (0.00 pct)	  48.00 (-2.12 pct)
 16:	  66.00 (0.00 pct)	  71.00 (-7.57 pct)
 32:	 114.00 (0.00 pct)	 123.00 (-7.89 pct)
 64:	 215.00 (0.00 pct)	 248.00 (-15.34 pct)
128:	 495.00 (0.00 pct)	 531.00 (-7.27 pct)
256:	 48576.00 (0.00 pct)	 47552.00 (2.10 pct)
512:	 79232.00 (0.00 pct)	 74624.00 (5.81 pct)

NPS4

#workers:	tip			shortrun
  1:	  30.00 (0.00 pct)	  36.00 (-20.00 pct)
  2:	  34.00 (0.00 pct)	  38.00 (-11.76 pct)
  4:	  41.00 (0.00 pct)	  44.00 (-7.31 pct)
  8:	  60.00 (0.00 pct)	  53.00 (11.66 pct)
 16:	  68.00 (0.00 pct)	  73.00 (-7.35 pct)
 32:	 116.00 (0.00 pct)	 125.00 (-7.75 pct)
 64:	 224.00 (0.00 pct)	 248.00 (-10.71 pct)
128:	 495.00 (0.00 pct)	 569.00 (-14.94 pct)
256:	 45888.00 (0.00 pct)	 38720.00 (15.62 pct)
512:	 78464.00 (0.00 pct)	 73600.00 (6.19 pct)


~~~~~~~~~~
~ tbench ~
~~~~~~~~~~

NPS1

Clients:	tip			shortrun
    1	 550.66 (0.00 pct)	 546.56 (-0.74 pct)
    2	 1009.69 (0.00 pct)	 1010.01 (0.03 pct)
    4	 1795.32 (0.00 pct)	 1782.71 (-0.70 pct)
    8	 2971.16 (0.00 pct)	 3035.58 (2.16 pct)
   16	 4627.98 (0.00 pct)	 4816.82 (4.08 pct)
   32	 8065.15 (0.00 pct)	 9269.52 (14.93 pct)
   64	 14994.32 (0.00 pct)	 14704.38 (-1.93 pct)
  128	 5175.73 (0.00 pct)	 5174.77 (-0.01 pct)
  256	 48763.57 (0.00 pct)	 49649.67 (1.81 pct)
  512	 43780.78 (0.00 pct)	 44717.04 (2.13 pct)
 1024	 40341.84 (0.00 pct)	 42078.99 (4.30 pct)

NPS2

Clients:	tip			shortrun
    1	 551.06 (0.00 pct)	 549.17 (-0.34 pct)
    2	 1000.76 (0.00 pct)	 993.75 (-0.70 pct)
    4	 1737.02 (0.00 pct)	 1773.33 (2.09 pct)
    8	 2992.31 (0.00 pct)	 2971.05 (-0.71 pct)
   16	 4579.29 (0.00 pct)	 4470.71 (-2.37 pct)
   32	 9120.73 (0.00 pct)	 8080.89 (-11.40 pct)
   64	 14918.58 (0.00 pct)	 14395.57 (-3.50 pct)
  128	 20830.61 (0.00 pct)	 20579.09 (-1.20 pct)
  256	 47708.18 (0.00 pct)	 47416.37 (-0.61 pct)
  512	 43721.79 (0.00 pct)	 43754.83 (0.07 pct)
 1024	 40920.49 (0.00 pct)	 40701.90 (-0.53 pct)

NPS4

Clients:	tip			shortrun
    1	 549.22 (0.00 pct)	 548.36 (-0.15 pct)
    2	 1000.08 (0.00 pct)	 1037.74 (3.76 pct)
    4	 1794.78 (0.00 pct)	 1802.11 (0.40 pct)
    8	 3008.50 (0.00 pct)	 2989.22 (-0.64 pct)
   16	 4804.71 (0.00 pct)	 4706.51 (-2.04 pct)
   32	 9156.57 (0.00 pct)	 8253.84 (-9.85 pct)
   64	 14901.45 (0.00 pct)	 15049.51 (0.99 pct)
  128	 20771.20 (0.00 pct)	 13229.50 (-36.30 pct)
  256	 47033.88 (0.00 pct)	 46737.17 (-0.63 pct)
  512	 43429.01 (0.00 pct)	 43246.64 (-0.41 pct)
 1024	 39271.27 (0.00 pct)	 42194.75 (7.44 pct)


~~~~~~~~~~
~ stream ~
~~~~~~~~~~

NPS1

10 Runs:

Test:	        tip			shortrun
 Copy:	 336311.52 (0.00 pct)	 330116.75 (-1.84 pct)
Scale:	 212955.82 (0.00 pct)	 215330.30 (1.11 pct)
  Add:	 251518.23 (0.00 pct)	 250926.53 (-0.23 pct)
Triad:	 262077.88 (0.00 pct)	 259618.70 (-0.93 pct)

100 Runs:

Test:		tip			shortrun
 Copy:	 339533.83 (0.00 pct)	 323452.74 (-4.73 pct)
Scale:	 194736.72 (0.00 pct)	 215789.55 (10.81 pct)
  Add:	 218294.54 (0.00 pct)	 244916.33 (12.19 pct)
Triad:	 262371.40 (0.00 pct)	 252997.84 (-3.57 pct)

NPS2

10 Runs:

Test:		tip			shortrun
 Copy:	 335277.15 (0.00 pct)	 305516.57 (-8.87 pct)
Scale:	 220990.24 (0.00 pct)	 207061.22 (-6.30 pct)
  Add:	 264156.13 (0.00 pct)	 243368.49 (-7.86 pct)
Triad:	 268707.53 (0.00 pct)	 223486.30 (-16.82 pct)

100 Runs:

Test:		tip			shortrun
 Copy:	 334913.73 (0.00 pct)	 319677.81 (-4.54 pct)
Scale:	 230522.47 (0.00 pct)	 222757.62 (-3.36 pct)
  Add:	 264567.28 (0.00 pct)	 254883.62 (-3.66 pct)
Triad:	 272974.23 (0.00 pct)	 260561.08 (-4.54 pct)

NPS4

10 Runs:

Test:		tip			shortrun
 Copy:	 356452.47 (0.00 pct)	 255911.77 (-28.20 pct)
Scale:	 242986.42 (0.00 pct)	 171587.28 (-29.38 pct)
  Add:	 268512.09 (0.00 pct)	 188244.75 (-29.89 pct)
Triad:	 281622.43 (0.00 pct)	 193271.97 (-31.37 pct)

100 Runs:

Test:		tip			shortrun
 Copy:	 367384.81 (0.00 pct)	 273101.20 (-25.66 pct)
Scale:	 254289.04 (0.00 pct)	 189986.88 (-25.28 pct)
  Add:	 273683.33 (0.00 pct)	 206384.96 (-24.58 pct)
Triad:	 285696.90 (0.00 pct)	 217214.10 (-23.97 pct)

~~~~~~~~~~~~~~~~~~~~~~~~~~
~ Notes and Observations ~
~~~~~~~~~~~~~~~~~~~~~~~~~~

o Schedstat data for Hackbench with 2 groups in NPS1 mode:

        ---------------------------------------------------------------------------------------------------
        cpu:  all_cpus (avg) vs cpu:  all_cpus (avg)
        ---------------------------------------------------------------------------------------------------
        kernel:                                                    :           tip      shortrun
        sched_yield count                                          :             0,            0
        Legacy counter can be ignored                              :             0,            0
        schedule called                                            :         53305,        40615  | -23.81|
        schedule left the processor idle                           :         22406,        16919  | -24.49|
        try_to_wake_up was called                                  :         30822,        23625  | -23.35|
        try_to_wake_up was called to wake up the local cpu         :           984,         2583  | 162.50|
        total runtime by tasks on this processor (in jiffies)      :     596998654,    481267347  | -19.39| *
        total waittime by tasks on this processor (in jiffies)     :     514142630,    766745576  |  49.13| * Longer wait time
        total timeslices run on this cpu                           :         30893,        23691  | -23.31| *
        ---------------------------------------------------------------------------------------------------


        < --------------------------------------  Wakeup info:  -------------------------------------- >
        kernel:                                                 :           tip      shortrun
        Wakeups on same         SMT cpus = all_cpus (avg)       :          1470,         1301  | -11.50|
        Wakeups on same         MC cpus = all_cpus (avg)        :         22913,        18606  | -18.80|
        Wakeups on same         DIE cpus = all_cpus (avg)       :          3634,          693  | -80.93|
        Wakeups on same         NUMA cpus = all_cpus (avg)      :          1819,          440  | -75.81|
        Affine wakeups on same  SMT cpus = all_cpus (avg)       :          1025,         1421  |  38.63| * More affine wakeups on possibly
        Affine wakeups on same  MC cpus = all_cpus (avg)        :         14455,        17514  |  21.16| * busy runqueue leading to longer
        Affine wakeups on same  DIE cpus = all_cpus (avg)       :          2828,          701  | -75.21|   wait time
        Affine wakeups on same  NUMA cpus = all_cpus (avg)      :          1194,          456  | -61.81|
        ------------------------------------------------------------------------------------------------

	We observe a larger wait time with the patch, which points
	to the fact that the tasks are piling up on the runqueue. I believe
	Tim's suggestion will help here, where we can avoid a pileup as a
	result of the waker being a short-running task.

o Tracepoint data for Stream for 100 runs in NPS4

	Following tracepoints were enabled for Stream threads:
	  - sched_wakeup_new: To observe initial placement
	  - sched_waking: To check if migration is in wakeup context or lb context
	  - sched_wakeup: To check if migration is in wakeup context or lb context
	  - sched_migrate_task: To observe task movements

	--> tip:

   run_stream.sh-3724    [057] d..2.   450.593407: sched_wakeup_new: comm=run_stream.sh pid=3733 prio=120 target_cpu=050 *LLC: 6
          <idle>-0       [182] d.s4.   450.594375: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
          <idle>-0       [050] dNh2.   450.594381: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
          <idle>-0       [182] d.s4.   450.594657: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
          <idle>-0       [050] dNh2.   450.594661: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
          stream-3733    [050] d..2.   450.594893: sched_wakeup_new: comm=stream pid=3735 prio=120 target_cpu=057 *LLC: 7
          stream-3733    [050] d..2.   450.594955: sched_wakeup_new: comm=stream pid=3736 prio=120 target_cpu=078 *LLC: 9
          stream-3733    [050] d..2.   450.594988: sched_wakeup_new: comm=stream pid=3737 prio=120 target_cpu=045 *LLC: 5
          stream-3733    [050] d..2.   450.595016: sched_wakeup_new: comm=stream pid=3738 prio=120 target_cpu=008 *LLC: 1
          stream-3733    [050] d..2.   450.595029: sched_waking: comm=stream pid=3737 prio=120 target_cpu=045
          <idle>-0       [045] dNh2.   450.595037: sched_wakeup: comm=stream pid=3737 prio=120 target_cpu=045
          stream-3737    [045] d..2.   450.595072: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
          <idle>-0       [050] dNh2.   450.595078: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
          stream-3738    [008] d..2.   450.595102: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
          <idle>-0       [050] dNh2.   450.595111: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
          stream-3733    [050] d..2.   450.595151: sched_wakeup_new: comm=stream pid=3739 prio=120 target_cpu=097 *LLC: 12
          stream-3733    [050] d..2.   450.595181: sched_wakeup_new: comm=stream pid=3740 prio=120 target_cpu=194 *LLC: 8
          stream-3733    [050] d..2.   450.595221: sched_wakeup_new: comm=stream pid=3741 prio=120 target_cpu=080 *LLC: 10
          stream-3733    [050] d..2.   450.595249: sched_wakeup_new: comm=stream pid=3742 prio=120 target_cpu=144 *LLC: 2
          stream-3733    [050] d..2.   450.595285: sched_wakeup_new: comm=stream pid=3743 prio=120 target_cpu=239 *LLC: 13
          stream-3733    [050] d..2.   450.595320: sched_wakeup_new: comm=stream pid=3744 prio=120 target_cpu=130 *LLC: 0
          stream-3733    [050] d..2.   450.595364: sched_wakeup_new: comm=stream pid=3745 prio=120 target_cpu=113 *LLC: 14
          stream-3744    [130] d..2.   450.595407: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
          <idle>-0       [050] dNh2.   450.595416: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
          stream-3733    [050] d..2.   450.595423: sched_waking: comm=stream pid=3745 prio=120 target_cpu=113
          <idle>-0       [113] dNh2.   450.595433: sched_wakeup: comm=stream pid=3745 prio=120 target_cpu=113
          stream-3733    [050] d..2.   450.595452: sched_wakeup_new: comm=stream pid=3746 prio=120 target_cpu=160 *LLC: 4
          stream-3733    [050] d..2.   450.595486: sched_wakeup_new: comm=stream pid=3747 prio=120 target_cpu=255 *LLC: 15
          stream-3733    [050] d..2.   450.595513: sched_wakeup_new: comm=stream pid=3748 prio=120 target_cpu=159 *LLC: 3
          stream-3746    [160] d..2.   450.595533: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
          <idle>-0       [050] dNh2.   450.595542: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
          stream-3747    [255] d..2.   450.595562: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
          <idle>-0       [050] dNh2.   450.595573: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
          stream-3733    [050] d..2.   450.595614: sched_wakeup_new: comm=stream pid=3749 prio=120 target_cpu=222 *LLC: 11
          stream-3740    [194] d..2.   451.140510: sched_waking: comm=stream pid=3747 prio=120 target_cpu=255
          <idle>-0       [255] dNh2.   451.140523: sched_wakeup: comm=stream pid=3747 prio=120 target_cpu=255
          stream-3733    [050] d..2.   451.617257: sched_waking: comm=stream pid=3740 prio=120 target_cpu=194
          stream-3733    [050] d..2.   451.617267: sched_waking: comm=stream pid=3746 prio=120 target_cpu=160
          stream-3733    [050] d..2.   451.617269: sched_waking: comm=stream pid=3739 prio=120 target_cpu=097
          stream-3733    [050] d..2.   451.617272: sched_waking: comm=stream pid=3742 prio=120 target_cpu=144
          stream-3733    [050] d..2.   451.617275: sched_waking: comm=stream pid=3749 prio=120 target_cpu=222
          ... (No migrations observed)

          In most cases, each LLC is running only 1 stream thread, leading to optimal performance.

	--> with patch:

   run_stream.sh-4383    [070] d..2.  1237.764236: sched_wakeup_new: comm=run_stream.sh pid=4392 prio=120 target_cpu=206 *LLC: 9
          stream-4392    [206] d..2.  1237.765121: sched_wakeup_new: comm=stream pid=4394 prio=120 target_cpu=070 *LLC: 8
          stream-4392    [206] d..2.  1237.765171: sched_wakeup_new: comm=stream pid=4395 prio=120 target_cpu=169 *LLC: 5
          stream-4392    [206] d..2.  1237.765204: sched_wakeup_new: comm=stream pid=4396 prio=120 target_cpu=111 *LLC: 13
          stream-4392    [206] d..2.  1237.765243: sched_wakeup_new: comm=stream pid=4397 prio=120 target_cpu=130 *LLC: 0
          stream-4392    [206] d..2.  1237.765249: sched_waking: comm=stream pid=4396 prio=120 target_cpu=111
          <idle>-0       [111] dNh2.  1237.765260: sched_wakeup: comm=stream pid=4396 prio=120 target_cpu=111
          stream-4392    [206] d..2.  1237.765281: sched_wakeup_new: comm=stream pid=4398 prio=120 target_cpu=182 *LLC: 6
          stream-4392    [206] d..2.  1237.765318: sched_wakeup_new: comm=stream pid=4399 prio=120 target_cpu=060 *LLC: 7
          stream-4392    [206] d..2.  1237.765368: sched_wakeup_new: comm=stream pid=4400 prio=120 target_cpu=124 *LLC: 15
          stream-4392    [206] d..2.  1237.765408: sched_wakeup_new: comm=stream pid=4401 prio=120 target_cpu=031 *LLC: 3
          stream-4392    [206] d..2.  1237.765439: sched_wakeup_new: comm=stream pid=4402 prio=120 target_cpu=095 *LLC: 11
          stream-4392    [206] d..2.  1237.765475: sched_wakeup_new: comm=stream pid=4403 prio=120 target_cpu=015 *LLC: 1
          stream-4401    [031] d..2.  1237.765497: sched_waking: comm=stream pid=4392 prio=120 target_cpu=206
          stream-4401    [031] d..2.  1237.765506: sched_migrate_task: comm=stream pid=4392 prio=120 orig_cpu=206 dest_cpu=152 *LLC: 9 -> 3
          <idle>-0       [152] dNh2.  1237.765540: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=152
          stream-4403    [015] d..2.  1237.765562: sched_waking: comm=stream pid=4392 prio=120 target_cpu=152
          stream-4403    [015] d..2.  1237.765570: sched_migrate_task: comm=stream pid=4392 prio=120 orig_cpu=152 dest_cpu=136 *LLC: 3 -> 1
          <idle>-0       [136] dNh2.  1237.765602: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=136
          stream-4392    [136] d..2.  1237.765799: sched_wakeup_new: comm=stream pid=4404 prio=120 target_cpu=097 *LLC: 12
          stream-4392    [136] d..2.  1237.765893: sched_wakeup_new: comm=stream pid=4405 prio=120 target_cpu=084 *LLC: 10
          stream-4392    [136] d..2.  1237.765957: sched_wakeup_new: comm=stream pid=4406 prio=120 target_cpu=119 *LLC: 14
          stream-4392    [136] d..2.  1237.766018: sched_wakeup_new: comm=stream pid=4407 prio=120 target_cpu=038 *LLC: 4
          stream-4406    [119] d..2.  1237.766044: sched_waking: comm=stream pid=4392 prio=120 target_cpu=136
          stream-4406    [119] d..2.  1237.766050: sched_migrate_task: comm=stream pid=4392 prio=120 orig_cpu=136 dest_cpu=240 *LLC: 1 -> 14
          <idle>-0       [240] dNh2.  1237.766154: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=240
          stream-4392    [240] d..2.  1237.766361: sched_wakeup_new: comm=stream pid=4408 prio=120 target_cpu=023 *LLC: 2
          stream-4399    [060] d..2.  1238.300605: sched_waking: comm=stream pid=4406 prio=120 target_cpu=119 *LLC: 14 <--- Two stream threads are
          stream-4399    [060] d..2.  1238.300611: sched_waking: comm=stream pid=4392 prio=120 target_cpu=240 *LLC: 14 <--- on the same LLC leading to
          <idle>-0       [119] dNh2.  1238.300620: sched_wakeup: comm=stream pid=4406 prio=120 target_cpu=119 *LLC: 14      cache contention, degrading
          <idle>-0       [240] dNh2.  1238.300621: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=240 *LLC: 14      the Stream throughput.
          ... (No more migrations observed)

          After all the wakeups and migrations, LLC 14 contains two stream threads (pid: 4392 and 4406).
          All the migrations happen between the events sched_waking and sched_wakeup, showing that the
          migrations happen during a wakeup and not as a result of load balancing.

> 
> This patch is more about enhancing wake affine than about improving
> SIS efficiency, so Mel's SIS statistics patch was not deployed for now.
> 
> [Limitations]
> When the number of CPUs suggested by SIS_UTIL is lower than 60% of the LLC
> CPUs, the LLC domain is regarded as relatively busy. However, the 60%
> threshold is somewhat hacky, because it indicates that the util_avg% is
> around 50%, a half-busy LLC. I don't have another lightweight yet accurate
> method in mind to check whether the LLC domain is busy or not.
> 
> [Misc]
> At LPC we received useful suggestions. The first one is that we should look at
> the time from when the task is woken up to when it goes back to sleep.
> I assume this is aligned with what is proposed here - we consider the average
> running time rather than the total running time. The second one is that we
> should also consider the long-running task; this is under investigation.
> 
> Besides, Prateek has mentioned that the SIS_UTIL is unable to deal with
> burst workload.  Because there is a delay to reflect the instantaneous
> utilization and SIS_UTIL expects the workload to be stable. If the system
> is idle most of the time, but suddenly the workloads burst, the SIS_UTIL
> overscans. The current patch might mitigate this symptom somehow, as burst
> workload is usually regarded as a short-running task.
> 
> Suggested-by: Tim Chen <tim.c.chen@intel.com>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> ---
>  kernel/sched/fair.c | 31 ++++++++++++++++++++++++++++++-
>  1 file changed, 30 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 914096c5b1ae..7519ab5b911c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6020,6 +6020,19 @@ static int wake_wide(struct task_struct *p)
>  	return 1;
>  }
>  
> +/*
> + * If a task switches in and then voluntarily relinquishes the
> + * CPU quickly, it is regarded as a short running task.
> + * sysctl_sched_min_granularity is chosen as the threshold,
> + * as this value is the minimal slice if there are too many
> + * runnable tasks, see __sched_period().
> + */
> +static int is_short_task(struct task_struct *p)
> +{
> +	return (p->se.sum_exec_runtime <=
> +		(p->nvcsw * sysctl_sched_min_granularity));
> +}
> +
>  /*
>   * The purpose of wake_affine() is to quickly determine on which CPU we can run
>   * soonest. For the purpose of speed we only consider the waking and previous
> @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
>  	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
>  		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
>  
> -	if (sync && cpu_rq(this_cpu)->nr_running == 1)
> +	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> +	    is_short_task(cpu_curr(this_cpu)))

This change seems to optimize for affine wakeups, which benefits
tasks with a producer-consumer pattern but is not ideal for Stream.
Currently the logic ends up doing an affine wakeup even if the sync
flag is not set:

          stream-4135    [029] d..2.   353.580953: sched_waking: comm=stream pid=4129 prio=120 target_cpu=082
          stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
          stream-4135    [029] d..2.   353.580960: sched_migrate_task: comm=stream pid=4129 prio=120 orig_cpu=82 dest_cpu=30
          <idle>-0       [030] dNh2.   353.580993: sched_wakeup: comm=stream pid=4129 prio=120 target_cpu=030

I believe a consideration should be made for the sync flag when
going for an affine wakeup. Also the check for short running could
be at the end after checking if prev_cpu is an available_idle_cpu.
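
For clarity, here is one possible shape of that reordering, as a sketch only
(whether the short-task shortcut should also be gated on the sync flag is
exactly the open question, so both variants would need testing):

static int
wake_affine_idle(int this_cpu, int prev_cpu, int sync)
{
	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;

	if (sync && cpu_rq(this_cpu)->nr_running == 1)
		return this_cpu;

	/* an idle prev_cpu is still preferred over a busy this_cpu */
	if (available_idle_cpu(prev_cpu))
		return prev_cpu;

	/*
	 * Only fall back to the short-task heuristic once the idle checks
	 * above have failed, and (in this variant) only for sync wakeups.
	 */
	if (sync && is_short_task(cpu_curr(this_cpu)))
		return this_cpu;

	return nr_cpumask_bits;
}
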

>  		return this_cpu;
>  
>  	if (available_idle_cpu(prev_cpu))
> @@ -6434,6 +6448,21 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>  			/* overloaded LLC is unlikely to have idle cpu/core */
>  			if (nr == 1)
>  				return -1;
> +
> +			/*
> +			 * If nr is smaller than 60% of llc_weight, it
> +			 * indicates that the util_avg% is higher than 50%.
> +			 * This is calculated by SIS_UTIL in
> +			 * update_idle_cpu_scan(). The 50% util_avg indicates
> +			 * a half-busy LLC domain. System busier than this
> +			 * level could lower its bar to choose a compromised
> +			 * "idle" CPU. If the waker on target CPU is a short
> +			 * task and the wakee is also a short task, pick
> +			 * target directly.
> +			 */
> +			if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
> +			    is_short_task(p) && is_short_task(cpu_curr(target)))
> +				return target;

Pileup seen in hackbench could also be a result of an early
bailout here for smaller LLCs but I don't have any data to
substantiate that claim currently.

>  		}
>  	}
>  
Please let me know if you need any more data from the test
system for any of the benchmarks covered or if you would like
me to run any other benchmark on the test system.
--
Thanks and Regards,
Prateek

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-26  5:50 ` K Prateek Nayak
@ 2022-09-26 14:39   ` Gautham R. Shenoy
  2022-09-29 16:58     ` K Prateek Nayak
  2022-09-29  5:25   ` Chen Yu
  1 sibling, 1 reply; 20+ messages in thread
From: Gautham R. Shenoy @ 2022-09-26 14:39 UTC (permalink / raw)
  To: K Prateek Nayak
  Cc: Chen Yu, Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Ingo Molnar, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel

Hello Prateek,

On Mon, Sep 26, 2022 at 11:20:16AM +0530, K Prateek Nayak wrote:

[..snip..]

> > @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
> >  	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
> >  		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
> >  
> > -	if (sync && cpu_rq(this_cpu)->nr_running == 1)
> > +	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> > +	    is_short_task(cpu_curr(this_cpu)))
> 
> This change seems to optimize for affine wakeup which benefits
> tasks with producer-consumer pattern but is not ideal for Stream.
> Currently the logic ends will do an affine wakeup even if sync
> flag is not set:
> 
>           stream-4135    [029] d..2.   353.580953: sched_waking: comm=stream pid=4129 prio=120 target_cpu=082
>           stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
>           stream-4135    [029] d..2.   353.580960: sched_migrate_task: comm=stream pid=4129 prio=120 orig_cpu=82 dest_cpu=30
>           <idle>-0       [030] dNh2.   353.580993: sched_wakeup: comm=stream pid=4129 prio=120 target_cpu=030
> 
> I believe a consideration should be made for the sync flag when
> going for an affine wakeup. Also the check for short running could
> be at the end after checking if prev_cpu is an available_idle_cpu.

We need to check if moving the is_short_task() check to a later point,
after checking the availability of the previous CPU, solves the problem
for the workloads which showed regressions on AMD EPYC systems.

> --
> Thanks and Regards,
> Prateek

--
Thanks and Regards
gautham.

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-26  5:50 ` K Prateek Nayak
  2022-09-26 14:39   ` Gautham R. Shenoy
@ 2022-09-29  5:25   ` Chen Yu
  2022-09-29  6:59     ` Honglei Wang
  2022-09-29 17:19     ` K Prateek Nayak
  1 sibling, 2 replies; 20+ messages in thread
From: Chen Yu @ 2022-09-29  5:25 UTC (permalink / raw)
  To: K Prateek Nayak
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

Hi Prateek,
On 2022-09-26 at 11:20:16 +0530, K Prateek Nayak wrote:
> Hello Chenyu,
> 
> When testing the patch on a dual socket Zen3 system (2 x 64C/128T) we
> noticed some regressions in some standard benchmarks.
> 
> tl;dr
> 
> o Hackbench shows noticeable regression in most cases. Looking at schedstat
>   data, we see there is an increased number of affine wakeups and an increase
>   in the average wait time. As the LLC size on the Zen3 machine is only
>   16 CPUs, there is a good chance the LLC was overloaded and it required
>   intervention from the load balancer to distribute tasks optimally.
> 
> o There is a regression in Stream which is caused by the piling up of more
>   than one Stream thread on the same LLC. This happens as a result of migration
>   in the wakeup path where the logic goes for an affine wakeup if the
>   waker is a short-lived task, even if the sync flag is not set and the
>   previous CPU might be idle.
> 
Nice analysis and thanks for your testing.
> I'll inline the results and detailed observations below:
> 
> On 9/15/2022 10:24 PM, Chen Yu wrote:
> > [Background]
> > At LPC 2022 Real-time and Scheduling Micro Conference we presented
> > the cross CPU wakeup issue. This patch is a text version of the
> > talk, and hopefully we can clarify the problem and appreciate for any
> > feedback.
> > 
> > [re-send due to the previous one did not reach LKML, sorry
> >  for any inconvenience.]
> > 
> > [Problem Statement]
> > For a workload that is doing frequent context switches, the throughput
> > scales well until the number of instances reaches a peak point. After
> > that peak point, the throughput drops significantly if the number of
> > instances continues to increase.
> > 
> > The will-it-scale context_switch1 test case exposes the issue. The
> > test platform has 112 CPUs per LLC domain. The will-it-scale launches
> > 1, 8, 16 ... 112 instances respectively. Each instance is composed
> > of 2 tasks, and each pair of tasks would do ping-pong scheduling via
> > pipe_read() and pipe_write(). No task is bound to any CPU.
> > We found that, once the number of instances is higher than
> > 56(112 tasks in total, every CPU has 1 task), the throughput
> > drops accordingly if the instance number continues to increase:
> > 
> >           ^
> > throughput|
> >           |                 X
> >           |               X   X X
> >           |             X         X X
> >           |           X               X
> >           |         X                   X
> >           |       X
> >           |     X
> >           |   X
> >           | X
> >           |
> >           +-----------------.------------------->
> >                             56
> >                                  number of instances
> > 
> > [Symptom analysis]
> > Both perf profile and lockstat have shown that, the bottleneck
> > is the runqueue spinlock. Take perf profile for example:
> > 
> > nr_instance          rq lock percentage
> > 1                    1.22%
> > 8                    1.17%
> > 16                   1.20%
> > 24                   1.22%
> > 32                   1.46%
> > 40                   1.61%
> > 48                   1.63%
> > 56                   1.65%
> > --------------------------
> > 64                   3.77%      |
> > 72                   5.90%      | increase
> > 80                   7.95%      |
> > 88                   9.98%      v
> > 96                   11.81%
> > 104                  13.54%
> > 112                  15.13%
> > 
> > And the rq lock bottleneck is composed of two paths(perf profile):
> > 
> > (path1):
> > raw_spin_rq_lock_nested.constprop.0;
> > try_to_wake_up;
> > default_wake_function;
> > autoremove_wake_function;
> > __wake_up_common;
> > __wake_up_common_lock;
> > __wake_up_sync_key;
> > pipe_write;
> > new_sync_write;
> > vfs_write;
> > ksys_write;
> > __x64_sys_write;
> > do_syscall_64;
> > entry_SYSCALL_64_after_hwframe;write
> > 
> > (path2):
> > raw_spin_rq_lock_nested.constprop.0;
> > __sched_text_start;
> > schedule_idle;
> > do_idle;
> > cpu_startup_entry;
> > start_secondary;
> > secondary_startup_64_no_verify
> > 
> > The idle percentage is around 30% when there are 112 instances:
> > %Cpu0  :  2.7 us, 66.7 sy,  0.0 ni, 30.7 id
> > 
> > As a comparison, if we set CPU affinity to these workloads,
> > which stops them from migrating among CPUs, the idle percentage
> > drops to nearly 0%, and the throughput increases by about 300%.
> > This indicates that there is room for optimization.
> > 
> > A possible scenario to describe the lock contention:
> > task A tries to wakeup task B on CPU1, then task A grabs the
> > runqueue lock of CPU1. If CPU1 is about to quit idle, it needs
> > to grab its own lock which has been taken by someone else. Then
> > CPU1 takes more time to quit which hurts the performance.
> > 
> > TTWU_QUEUE could mitigate the cross CPU runqueue lock contention.
> > Since commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU
> > on wakelist if wakee cpu is idle"), TTWU_QUEUE offloads the work from
> > the waker and leverages the idle CPU to queue the wakee. However, a long
> > idle duration is still observed. The idle task spends quite some time
> > on sched_ttwu_pending() before it switches out. This long idle
> > duration would mislead SIS_UTIL, then SIS_UTIL suggests the waker scan
> > for more CPUs. The time spent searching for an idle CPU would make
> > wakee waiting for more time, which in turn leads to more idle time.
> > The NEWLY_IDLE balance fails to pull tasks to the idle CPU, which
> > might be caused by no runnable wakee being found.
> > 
> > [Proposal]
> > If a system is busy, and if the workloads are doing frequent context
> > switches, it might not be a good idea to spread the wakee on different
> > CPUs. Instead, consider the task running time and enhance wake affine
> > might be applicable.
> > 
> > This idea has been suggested by Rik at LPC 2019 when discussing
> > the latency nice. He asked the following question: if P1 is a small-time
> > slice task on CPU, can we put the waking task P2 on the CPU and wait for
> > P1 to release the CPU, without wasting time to search for an idle CPU?
> > At LPC 2021 Vincent Guittot has proposed:
> > 1. If the wakee is a long-running task, should we skip the short idle CPU?
> > 2. If the wakee is a short-running task, can we put it onto a lightly loaded
> >    local CPU?
> > 
> > Current proposal is a variant of 2:
> > If the target CPU is running a short-time slice task, and the wakee
> > is also a short-time slice task, the target CPU could be chosen as the
> > candidate when the system is busy.
> > 
> > The definition of a short-time slice task is: The average running time
> > of the task during each run is no more than sysctl_sched_min_granularity.
> > If a task switches in and then voluntarily relinquishes the CPU
> > quickly, it is regarded as a short-running task. Choosing
> > sysctl_sched_min_granularity because it is the minimal slice if there
> > are too many runnable tasks.
> > 
> > Reuse the nr_idle_scan of SIS_UTIL to decide if the system is busy.
> > If yes, then a compromised "idle" CPU might be acceptable.
> > 
> > The reason is that, if the waker is a short running task, it might 
> > relinquish the CPU soon, the wakee has the chance to be scheduled.
> > On the other hand, if the wakee is also a short-running task, the
> > impact it brings to the target CPU is small. If the system is
> > already busy, maybe we could lower the bar to find an idle CPU. 
> > The effect is, the wake affine is enhanced. 
> > 
> > [Benchmark results]
> > The baseline is 6.0-rc4.
> > 
> > The throughput of will-it-scale.context_switch1 has been increased by
> > 331.13% with this patch applied.
> > 
> > netperf
> > =======
> > case            	load    	baseline(std%)	compare%( std%)
> > TCP_RR          	28 threads	 1.00 (  0.57)	 +0.29 (  0.59)
> > TCP_RR          	56 threads	 1.00 (  0.49)	 +0.43 (  0.43)
> > TCP_RR          	84 threads	 1.00 (  0.34)	 +0.24 (  0.34)
> > TCP_RR          	112 threads	 1.00 (  0.26)	 +1.57 (  0.20)
> > TCP_RR          	140 threads	 1.00 (  0.20)	+178.05 (  8.83)
> > TCP_RR          	168 threads	 1.00 ( 10.14)	 +0.87 ( 10.03)
> > TCP_RR          	196 threads	 1.00 ( 13.51)	 +0.90 ( 11.84)
> > TCP_RR          	224 threads	 1.00 (  7.12)	 +0.66 (  8.28)
> > UDP_RR          	28 threads	 1.00 (  0.96)	 -0.10 (  0.97)
> > UDP_RR          	56 threads	 1.00 ( 10.93)	 +0.24 (  0.82)
> > UDP_RR          	84 threads	 1.00 (  8.99)	 +0.40 (  0.71)
> > UDP_RR          	112 threads	 1.00 (  0.15)	 +0.72 (  7.77)
> > UDP_RR          	140 threads	 1.00 ( 11.11)	+135.81 ( 13.86)
> > UDP_RR          	168 threads	 1.00 ( 12.58)	+147.63 ( 12.72)
> > UDP_RR          	196 threads	 1.00 ( 19.47)	 -0.34 ( 16.14)
> > UDP_RR          	224 threads	 1.00 ( 12.88)	 -0.35 ( 12.73)
> > 
> > hackbench
> > =========
> > case            	load    	baseline(std%)	compare%( std%)
> > process-pipe    	1 group 	 1.00 (  1.02)	 +0.14 (  0.62)
> > process-pipe    	2 groups 	 1.00 (  0.73)	 +0.29 (  0.51)
> > process-pipe    	4 groups 	 1.00 (  0.16)	 +0.24 (  0.31)
> > process-pipe    	8 groups 	 1.00 (  0.06)	+11.56 (  0.11)
> > process-sockets 	1 group 	 1.00 (  1.59)	 +0.06 (  0.77)
> > process-sockets 	2 groups 	 1.00 (  1.13)	 -1.86 (  1.31)
> > process-sockets 	4 groups 	 1.00 (  0.14)	 +1.76 (  0.29)
> > process-sockets 	8 groups 	 1.00 (  0.27)	 +2.73 (  0.10)
> > threads-pipe    	1 group 	 1.00 (  0.43)	 +0.83 (  2.20)
> > threads-pipe    	2 groups 	 1.00 (  0.52)	 +1.03 (  0.55)
> > threads-pipe    	4 groups 	 1.00 (  0.44)	 -0.08 (  0.31)
> > threads-pipe    	8 groups 	 1.00 (  0.04)	+11.86 (  0.05)
> > threads-sockets 	1 groups 	 1.00 (  1.89)	 +3.51 (  0.57)
> > threads-sockets 	2 groups 	 1.00 (  0.04)	 -1.12 (  0.69)
> > threads-sockets 	4 groups 	 1.00 (  0.14)	 +1.77 (  0.18)
> > threads-sockets 	8 groups 	 1.00 (  0.03)	 +2.75 (  0.03)
> > 
> > tbench
> > ======
> > case            	load    	baseline(std%)	compare%( std%)
> > loopback        	28 threads	 1.00 (  0.08)	 +0.51 (  0.25)
> > loopback        	56 threads	 1.00 (  0.15)	 -0.89 (  0.16)
> > loopback        	84 threads	 1.00 (  0.03)	 +0.35 (  0.07)
> > loopback        	112 threads	 1.00 (  0.06)	 +2.84 (  0.01)
> > loopback        	140 threads	 1.00 (  0.07)	 +0.69 (  0.11)
> > loopback        	168 threads	 1.00 (  0.09)	 +0.14 (  0.18)
> > loopback        	196 threads	 1.00 (  0.04)	 -0.18 (  0.20)
> > loopback        	224 threads	 1.00 (  0.25)	 -0.37 (  0.03)
> > 
> > Other benchmarks are under testing.
> 
> Discussed below are the results from running standard benchmarks on
> a dual socket Zen3 (2 x 64C/128T) machine configured in different
> NPS modes.
> 
> NPS Modes are used to logically divide a single socket into
> multiple NUMA regions.
> Following is the NUMA configuration for each NPS mode on the system:
> 
> NPS1: Each socket is a NUMA node.
>     Total 2 NUMA nodes in the dual socket machine.
> 
>     Node 0: 0-63,   128-191
>     Node 1: 64-127, 192-255
> 
> NPS2: Each socket is further logically divided into 2 NUMA regions.
>     Total 4 NUMA nodes exist over 2 sockets.
>    
>     Node 0: 0-31,   128-159
>     Node 1: 32-63,  160-191
>     Node 2: 64-95,  192-223
>     Node 3: 96-127, 223-255
> 
> NPS4: Each socket is logically divided into 4 NUMA regions.
>     Total 8 NUMA nodes exist over 2 sockets.
>    
>     Node 0: 0-15,    128-143
>     Node 1: 16-31,   144-159
>     Node 2: 32-47,   160-175
>     Node 3: 48-63,   176-191
>     Node 4: 64-79,   192-207
>     Node 5: 80-95,   208-223
>     Node 6: 96-111,  223-231
>     Node 7: 112-127, 232-255
> 
> Benchmark Results:
> 
> Kernel versions:
> - tip:       5.19.0 tip sched/core
> - shortrun:  5.19.0 tip sched/core + this patch
> 
> When we started testing, the tip was at:
> commit 7e9518baed4c ("sched/fair: Move call to list_last_entry() in detach_tasks")
> 
> ~~~~~~~~~~~~~
> ~ hackbench ~
> ~~~~~~~~~~~~~
> 
> NPS1
> 
> Test:			tip			shortrun
>  1-groups:	   4.23 (0.00 pct)	   4.24 (-0.23 pct)
>  2-groups:	   4.93 (0.00 pct)	   5.68 (-15.21 pct)
>  4-groups:	   5.32 (0.00 pct)	   6.21 (-16.72 pct)
>  8-groups:	   5.46 (0.00 pct)	   6.49 (-18.86 pct)
> 16-groups:	   7.31 (0.00 pct)	   7.78 (-6.42 pct)
> 
> NPS2
> 
> Test:			tip			shortrun
>  1-groups:	   4.19 (0.00 pct)	   4.19 (0.00 pct)
>  2-groups:	   4.77 (0.00 pct)	   5.43 (-13.83 pct)
>  4-groups:	   5.15 (0.00 pct)	   6.20 (-20.38 pct)
>  8-groups:	   5.47 (0.00 pct)	   6.54 (-19.56 pct)
> 16-groups:	   6.63 (0.00 pct)	   7.28 (-9.80 pct)
> 
> NPS4
> 
> Test:			tip			shortrun
>  1-groups:	   4.23 (0.00 pct)	   4.39 (-3.78 pct)
>  2-groups:	   4.78 (0.00 pct)	   5.48 (-14.64 pct)
>  4-groups:	   5.17 (0.00 pct)	   6.14 (-18.76 pct)
>  8-groups:	   5.63 (0.00 pct)	   6.51 (-15.63 pct)
> 16-groups:	   7.88 (0.00 pct)	   7.03 (10.78 pct)
> 
> ~~~~~~~~~~~~
> ~ schbench ~
> ~~~~~~~~~~~~
> 
> NPS1
> 
> #workers:       tip			shortrun
>   1:	  22.00 (0.00 pct)	  36.00 (-63.63 pct)
>   2:	  34.00 (0.00 pct)	  38.00 (-11.76 pct)
>   4:	  37.00 (0.00 pct)	  36.00 (2.70 pct)
>   8:	  55.00 (0.00 pct)	  51.00 (7.27 pct)
>  16:	  69.00 (0.00 pct)	  68.00 (1.44 pct)
>  32:	 113.00 (0.00 pct)	 116.00 (-2.65 pct)
>  64:	 219.00 (0.00 pct)	 232.00 (-5.93 pct)
> 128:	 506.00 (0.00 pct)	 1019.00 (-101.38 pct)
> 256:	 45440.00 (0.00 pct)	 44864.00 (1.26 pct)
> 512:	 76672.00 (0.00 pct)	 73600.00 (4.00 pct)
> 
> NPS2
> 
> #workers:	tip			shortrun
>   1:	  31.00 (0.00 pct)	  36.00 (-16.12 pct)
>   2:	  36.00 (0.00 pct)	  36.00 (0.00 pct)
>   4:	  45.00 (0.00 pct)	  39.00 (13.33 pct)
>   8:	  47.00 (0.00 pct)	  48.00 (-2.12 pct)
>  16:	  66.00 (0.00 pct)	  71.00 (-7.57 pct)
>  32:	 114.00 (0.00 pct)	 123.00 (-7.89 pct)
>  64:	 215.00 (0.00 pct)	 248.00 (-15.34 pct)
> 128:	 495.00 (0.00 pct)	 531.00 (-7.27 pct)
> 256:	 48576.00 (0.00 pct)	 47552.00 (2.10 pct)
> 512:	 79232.00 (0.00 pct)	 74624.00 (5.81 pct)
> 
> NPS4
> 
> #workers:	tip			shortrun
>   1:	  30.00 (0.00 pct)	  36.00 (-20.00 pct)
>   2:	  34.00 (0.00 pct)	  38.00 (-11.76 pct)
>   4:	  41.00 (0.00 pct)	  44.00 (-7.31 pct)
>   8:	  60.00 (0.00 pct)	  53.00 (11.66 pct)
>  16:	  68.00 (0.00 pct)	  73.00 (-7.35 pct)
>  32:	 116.00 (0.00 pct)	 125.00 (-7.75 pct)
>  64:	 224.00 (0.00 pct)	 248.00 (-10.71 pct)
> 128:	 495.00 (0.00 pct)	 569.00 (-14.94 pct)
> 256:	 45888.00 (0.00 pct)	 38720.00 (15.62 pct)
> 512:	 78464.00 (0.00 pct)	 73600.00 (6.19 pct)
> 
> 
> ~~~~~~~~~~
> ~ tbench ~
> ~~~~~~~~~~
> 
> NPS1
> 
> Clients:	tip			shortrun
>     1	 550.66 (0.00 pct)	 546.56 (-0.74 pct)
>     2	 1009.69 (0.00 pct)	 1010.01 (0.03 pct)
>     4	 1795.32 (0.00 pct)	 1782.71 (-0.70 pct)
>     8	 2971.16 (0.00 pct)	 3035.58 (2.16 pct)
>    16	 4627.98 (0.00 pct)	 4816.82 (4.08 pct)
>    32	 8065.15 (0.00 pct)	 9269.52 (14.93 pct)
>    64	 14994.32 (0.00 pct)	 14704.38 (-1.93 pct)
>   128	 5175.73 (0.00 pct)	 5174.77 (-0.01 pct)
>   256	 48763.57 (0.00 pct)	 49649.67 (1.81 pct)
>   512	 43780.78 (0.00 pct)	 44717.04 (2.13 pct)
>  1024	 40341.84 (0.00 pct)	 42078.99 (4.30 pct)
> 
> NPS2
> 
> Clients:	tip			shortrun
>     1	 551.06 (0.00 pct)	 549.17 (-0.34 pct)
>     2	 1000.76 (0.00 pct)	 993.75 (-0.70 pct)
>     4	 1737.02 (0.00 pct)	 1773.33 (2.09 pct)
>     8	 2992.31 (0.00 pct)	 2971.05 (-0.71 pct)
>    16	 4579.29 (0.00 pct)	 4470.71 (-2.37 pct)
>    32	 9120.73 (0.00 pct)	 8080.89 (-11.40 pct)
>    64	 14918.58 (0.00 pct)	 14395.57 (-3.50 pct)
>   128	 20830.61 (0.00 pct)	 20579.09 (-1.20 pct)
>   256	 47708.18 (0.00 pct)	 47416.37 (-0.61 pct)
>   512	 43721.79 (0.00 pct)	 43754.83 (0.07 pct)
>  1024	 40920.49 (0.00 pct)	 40701.90 (-0.53 pct)
> 
> NPS4
> 
> Clients:	tip			shortrun
>     1	 549.22 (0.00 pct)	 548.36 (-0.15 pct)
>     2	 1000.08 (0.00 pct)	 1037.74 (3.76 pct)
>     4	 1794.78 (0.00 pct)	 1802.11 (0.40 pct)
>     8	 3008.50 (0.00 pct)	 2989.22 (-0.64 pct)
>    16	 4804.71 (0.00 pct)	 4706.51 (-2.04 pct)
>    32	 9156.57 (0.00 pct)	 8253.84 (-9.85 pct)
>    64	 14901.45 (0.00 pct)	 15049.51 (0.99 pct)
>   128	 20771.20 (0.00 pct)	 13229.50 (-36.30 pct)
>   256	 47033.88 (0.00 pct)	 46737.17 (-0.63 pct)
>   512	 43429.01 (0.00 pct)	 43246.64 (-0.41 pct)
>  1024	 39271.27 (0.00 pct)	 42194.75 (7.44 pct)
> 
> 
> ~~~~~~~~~~
> ~ stream ~
> ~~~~~~~~~~
> 
> NPS1
> 
> 10 Runs:
> 
> Test:	        tip			shortrun
>  Copy:	 336311.52 (0.00 pct)	 330116.75 (-1.84 pct)
> Scale:	 212955.82 (0.00 pct)	 215330.30 (1.11 pct)
>   Add:	 251518.23 (0.00 pct)	 250926.53 (-0.23 pct)
> Triad:	 262077.88 (0.00 pct)	 259618.70 (-0.93 pct)
> 
> 100 Runs:
> 
> Test:		tip			shortrun
>  Copy:	 339533.83 (0.00 pct)	 323452.74 (-4.73 pct)
> Scale:	 194736.72 (0.00 pct)	 215789.55 (10.81 pct)
>   Add:	 218294.54 (0.00 pct)	 244916.33 (12.19 pct)
> Triad:	 262371.40 (0.00 pct)	 252997.84 (-3.57 pct)
> 
> NPS2
> 
> 10 Runs:
> 
> Test:		tip			shortrun
>  Copy:	 335277.15 (0.00 pct)	 305516.57 (-8.87 pct)
> Scale:	 220990.24 (0.00 pct)	 207061.22 (-6.30 pct)
>   Add:	 264156.13 (0.00 pct)	 243368.49 (-7.86 pct)
> Triad:	 268707.53 (0.00 pct)	 223486.30 (-16.82 pct)
> 
> 100 Runs:
> 
> Test:		tip			shortrun
>  Copy:	 334913.73 (0.00 pct)	 319677.81 (-4.54 pct)
> Scale:	 230522.47 (0.00 pct)	 222757.62 (-3.36 pct)
>   Add:	 264567.28 (0.00 pct)	 254883.62 (-3.66 pct)
> Triad:	 272974.23 (0.00 pct)	 260561.08 (-4.54 pct)
> 
> NPS4
> 
> 10 Runs:
> 
> Test:		tip			shortrun
>  Copy:	 356452.47 (0.00 pct)	 255911.77 (-28.20 pct)
> Scale:	 242986.42 (0.00 pct)	 171587.28 (-29.38 pct)
>   Add:	 268512.09 (0.00 pct)	 188244.75 (-29.89 pct)
> Triad:	 281622.43 (0.00 pct)	 193271.97 (-31.37 pct)
> 
> 100 Runs:
> 
> Test:		tip			shortrun
>  Copy:	 367384.81 (0.00 pct)	 273101.20 (-25.66 pct)
> Scale:	 254289.04 (0.00 pct)	 189986.88 (-25.28 pct)
>   Add:	 273683.33 (0.00 pct)	 206384.96 (-24.58 pct)
> Triad:	 285696.90 (0.00 pct)	 217214.10 (-23.97 pct)
> 
> ~~~~~~~~~~~~~~~~~~~~~~~~~~
> ~ Notes and Observations ~
> ~~~~~~~~~~~~~~~~~~~~~~~~~~
> 
> o Schedstat data for Hackbench with 2 groups in NPS1 mode:
> 
>         ---------------------------------------------------------------------------------------------------
>         cpu:  all_cpus (avg) vs cpu:  all_cpus (avg)
>         ---------------------------------------------------------------------------------------------------
>         kernel:                                                    :           tip      shortrun
>         sched_yield count                                          :             0,            0
>         Legacy counter can be ignored                              :             0,            0
>         schedule called                                            :         53305,        40615  | -23.81|
>         schedule left the processor idle                           :         22406,        16919  | -24.49|
>         try_to_wake_up was called                                  :         30822,        23625  | -23.35|
>         try_to_wake_up was called to wake up the local cpu         :           984,         2583  | 162.50|
>         total runtime by tasks on this processor (in jiffies)      :     596998654,    481267347  | -19.39| *
>         total waittime by tasks on this processor (in jiffies)     :     514142630,    766745576  |  49.13| * Longer wait time
Agree, the ratio of wait time to run time is 766745576 / 481267347 = 1.59 with the
patch applied, which is much bigger than 514142630 / 596998654 = 0.86 before the patch.

>         total timeslices run on this cpu                           :         30893,        23691  | -23.31| *
>         ---------------------------------------------------------------------------------------------------
> 
> 
>         < --------------------------------------  Wakeup info:  -------------------------------------- >
>         kernel:                                                 :           tip      shortrun
>         Wakeups on same         SMT cpus = all_cpus (avg)       :          1470,         1301  | -11.50|
>         Wakeups on same         MC cpus = all_cpus (avg)        :         22913,        18606  | -18.80|
>         Wakeups on same         DIE cpus = all_cpus (avg)       :          3634,          693  | -80.93|
>         Wakeups on same         NUMA cpus = all_cpus (avg)      :          1819,          440  | -75.81|
>         Affine wakeups on same  SMT cpus = all_cpus (avg)       :          1025,         1421  |  38.63| * More affine wakeups on possibly
>         Affine wakeups on same  MC cpus = all_cpus (avg)        :         14455,        17514  |  21.16| * busy runqueue leading to longer
>         Affine wakeups on same  DIE cpus = all_cpus (avg)       :          2828,          701  | -75.21|   wait time
>         Affine wakeups on same  NUMA cpus = all_cpus (avg)      :          1194,          456  | -61.81|
>         ------------------------------------------------------------------------------------------------
Agree, for the SMT and MC domains, the wake affine path has been enhanced to suggest
picking the CPU that is running a short task rather than an idle one. Then later
SIS_UTIL would prefer to pick this candidate CPU.
> 
> 	We observe a larger wait time with the patch, which points to the
> 	fact that the tasks are piling up on the run queue. I believe
> 	Tim's suggestion will help here, where we can avoid a pileup as a
> 	result of the waker task being a short-running task.
Yes, we'll raise the bar for picking a CPU that is running a short task; see the
illustrative sketch below.
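
Purely as an illustration of what "raising the bar" could mean (a hypothetical
sketch, not something that has been tested here), the shortcut in
select_idle_cpu() could additionally require that nothing else is already
queued on the target CPU:

	/*
	 * Hypothetical tightening, for illustration only: besides both
	 * waker and wakee being short tasks, also require that the target
	 * runqueue only has its current task on it, so a pileup like the
	 * one seen in the hackbench schedstat data is less likely.
	 */
	if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
	    cpu_rq(target)->nr_running == 1 &&
	    is_short_task(p) && is_short_task(cpu_curr(target)))
		return target;
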
> 
> o Tracepoint data for Stream for 100 runs in NPS4
> 
> 	Following tracepoints were enabled for Stream threads:
> 	  - sched_wakeup_new: To observe initial placement
> 	  - sched_waking: To check if migration is in wakeup context or lb context
> 	  - sched_wakeup: To check if migration is in wakeup context or lb context
> 	  - sched_migrate_task: To observe task movements
> 
> 	--> tip:
> 
>    run_stream.sh-3724    [057] d..2.   450.593407: sched_wakeup_new: comm=run_stream.sh pid=3733 prio=120 target_cpu=050 *LLC: 6
>           <idle>-0       [182] d.s4.   450.594375: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>           <idle>-0       [050] dNh2.   450.594381: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>           <idle>-0       [182] d.s4.   450.594657: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>           <idle>-0       [050] dNh2.   450.594661: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>           stream-3733    [050] d..2.   450.594893: sched_wakeup_new: comm=stream pid=3735 prio=120 target_cpu=057 *LLC: 7
>           stream-3733    [050] d..2.   450.594955: sched_wakeup_new: comm=stream pid=3736 prio=120 target_cpu=078 *LLC: 9
>           stream-3733    [050] d..2.   450.594988: sched_wakeup_new: comm=stream pid=3737 prio=120 target_cpu=045 *LLC: 5
>           stream-3733    [050] d..2.   450.595016: sched_wakeup_new: comm=stream pid=3738 prio=120 target_cpu=008 *LLC: 1
>           stream-3733    [050] d..2.   450.595029: sched_waking: comm=stream pid=3737 prio=120 target_cpu=045
>           <idle>-0       [045] dNh2.   450.595037: sched_wakeup: comm=stream pid=3737 prio=120 target_cpu=045
>           stream-3737    [045] d..2.   450.595072: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>           <idle>-0       [050] dNh2.   450.595078: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>           stream-3738    [008] d..2.   450.595102: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>           <idle>-0       [050] dNh2.   450.595111: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>           stream-3733    [050] d..2.   450.595151: sched_wakeup_new: comm=stream pid=3739 prio=120 target_cpu=097 *LLC: 12
>           stream-3733    [050] d..2.   450.595181: sched_wakeup_new: comm=stream pid=3740 prio=120 target_cpu=194 *LLC: 8
>           stream-3733    [050] d..2.   450.595221: sched_wakeup_new: comm=stream pid=3741 prio=120 target_cpu=080 *LLC: 10
>           stream-3733    [050] d..2.   450.595249: sched_wakeup_new: comm=stream pid=3742 prio=120 target_cpu=144 *LLC: 2
>           stream-3733    [050] d..2.   450.595285: sched_wakeup_new: comm=stream pid=3743 prio=120 target_cpu=239 *LLC: 13
>           stream-3733    [050] d..2.   450.595320: sched_wakeup_new: comm=stream pid=3744 prio=120 target_cpu=130 *LLC: 0
>           stream-3733    [050] d..2.   450.595364: sched_wakeup_new: comm=stream pid=3745 prio=120 target_cpu=113 *LLC: 14
>           stream-3744    [130] d..2.   450.595407: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>           <idle>-0       [050] dNh2.   450.595416: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>           stream-3733    [050] d..2.   450.595423: sched_waking: comm=stream pid=3745 prio=120 target_cpu=113
>           <idle>-0       [113] dNh2.   450.595433: sched_wakeup: comm=stream pid=3745 prio=120 target_cpu=113
>           stream-3733    [050] d..2.   450.595452: sched_wakeup_new: comm=stream pid=3746 prio=120 target_cpu=160 *LLC: 4
>           stream-3733    [050] d..2.   450.595486: sched_wakeup_new: comm=stream pid=3747 prio=120 target_cpu=255 *LLC: 15
>           stream-3733    [050] d..2.   450.595513: sched_wakeup_new: comm=stream pid=3748 prio=120 target_cpu=159 *LLC: 3
>           stream-3746    [160] d..2.   450.595533: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>           <idle>-0       [050] dNh2.   450.595542: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>           stream-3747    [255] d..2.   450.595562: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>           <idle>-0       [050] dNh2.   450.595573: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>           stream-3733    [050] d..2.   450.595614: sched_wakeup_new: comm=stream pid=3749 prio=120 target_cpu=222 *LLC: 11
>           stream-3740    [194] d..2.   451.140510: sched_waking: comm=stream pid=3747 prio=120 target_cpu=255
>           <idle>-0       [255] dNh2.   451.140523: sched_wakeup: comm=stream pid=3747 prio=120 target_cpu=255
>           stream-3733    [050] d..2.   451.617257: sched_waking: comm=stream pid=3740 prio=120 target_cpu=194
>           stream-3733    [050] d..2.   451.617267: sched_waking: comm=stream pid=3746 prio=120 target_cpu=160
>           stream-3733    [050] d..2.   451.617269: sched_waking: comm=stream pid=3739 prio=120 target_cpu=097
>           stream-3733    [050] d..2.   451.617272: sched_waking: comm=stream pid=3742 prio=120 target_cpu=144
>           stream-3733    [050] d..2.   451.617275: sched_waking: comm=stream pid=3749 prio=120 target_cpu=222
>           ... (No migrations observed)
> 
>           In most cases, each LLC is running only 1 stream thread, leading to optimal performance.
> 
> 	--> with patch:
> 
>    run_stream.sh-4383    [070] d..2.  1237.764236: sched_wakeup_new: comm=run_stream.sh pid=4392 prio=120 target_cpu=206 *LLC: 9
>           stream-4392    [206] d..2.  1237.765121: sched_wakeup_new: comm=stream pid=4394 prio=120 target_cpu=070 *LLC: 8
>           stream-4392    [206] d..2.  1237.765171: sched_wakeup_new: comm=stream pid=4395 prio=120 target_cpu=169 *LLC: 5
>           stream-4392    [206] d..2.  1237.765204: sched_wakeup_new: comm=stream pid=4396 prio=120 target_cpu=111 *LLC: 13
>           stream-4392    [206] d..2.  1237.765243: sched_wakeup_new: comm=stream pid=4397 prio=120 target_cpu=130 *LLC: 0
>           stream-4392    [206] d..2.  1237.765249: sched_waking: comm=stream pid=4396 prio=120 target_cpu=111
>           <idle>-0       [111] dNh2.  1237.765260: sched_wakeup: comm=stream pid=4396 prio=120 target_cpu=111
>           stream-4392    [206] d..2.  1237.765281: sched_wakeup_new: comm=stream pid=4398 prio=120 target_cpu=182 *LLC: 6
>           stream-4392    [206] d..2.  1237.765318: sched_wakeup_new: comm=stream pid=4399 prio=120 target_cpu=060 *LLC: 7
>           stream-4392    [206] d..2.  1237.765368: sched_wakeup_new: comm=stream pid=4400 prio=120 target_cpu=124 *LLC: 15
>           stream-4392    [206] d..2.  1237.765408: sched_wakeup_new: comm=stream pid=4401 prio=120 target_cpu=031 *LLC: 3
>           stream-4392    [206] d..2.  1237.765439: sched_wakeup_new: comm=stream pid=4402 prio=120 target_cpu=095 *LLC: 11
>           stream-4392    [206] d..2.  1237.765475: sched_wakeup_new: comm=stream pid=4403 prio=120 target_cpu=015 *LLC: 1
>           stream-4401    [031] d..2.  1237.765497: sched_waking: comm=stream pid=4392 prio=120 target_cpu=206
>           stream-4401    [031] d..2.  1237.765506: sched_migrate_task: comm=stream pid=4392 prio=120 orig_cpu=206 dest_cpu=152 *LLC: 9 -> 3
>           <idle>-0       [152] dNh2.  1237.765540: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=152
>           stream-4403    [015] d..2.  1237.765562: sched_waking: comm=stream pid=4392 prio=120 target_cpu=152
>           stream-4403    [015] d..2.  1237.765570: sched_migrate_task: comm=stream pid=4392 prio=120 orig_cpu=152 dest_cpu=136 *LLC: 3 -> 1
>           <idle>-0       [136] dNh2.  1237.765602: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=136
>           stream-4392    [136] d..2.  1237.765799: sched_wakeup_new: comm=stream pid=4404 prio=120 target_cpu=097 *LLC: 12
>           stream-4392    [136] d..2.  1237.765893: sched_wakeup_new: comm=stream pid=4405 prio=120 target_cpu=084 *LLC: 10
>           stream-4392    [136] d..2.  1237.765957: sched_wakeup_new: comm=stream pid=4406 prio=120 target_cpu=119 *LLC: 14
>           stream-4392    [136] d..2.  1237.766018: sched_wakeup_new: comm=stream pid=4407 prio=120 target_cpu=038 *LLC: 4
>           stream-4406    [119] d..2.  1237.766044: sched_waking: comm=stream pid=4392 prio=120 target_cpu=136
>           stream-4406    [119] d..2.  1237.766050: sched_migrate_task: comm=stream pid=4392 prio=120 orig_cpu=136 dest_cpu=240 *LLC: 1 -> 14
>           <idle>-0       [240] dNh2.  1237.766154: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=240
>           stream-4392    [240] d..2.  1237.766361: sched_wakeup_new: comm=stream pid=4408 prio=120 target_cpu=023 *LLC: 2
>           stream-4399    [060] d..2.  1238.300605: sched_waking: comm=stream pid=4406 prio=120 target_cpu=119 *LLC: 14 <--- Two stream threads are
>           stream-4399    [060] d..2.  1238.300611: sched_waking: comm=stream pid=4392 prio=120 target_cpu=240 *LLC: 14 <--- on the same LLC leading to
>           <idle>-0       [119] dNh2.  1238.300620: sched_wakeup: comm=stream pid=4406 prio=120 target_cpu=119 *LLC: 14      cache contention, degrading
>           <idle>-0       [240] dNh2.  1238.300621: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=240 *LLC: 14      the Stream throughput.
>           ... (No more migrations observed)
> 
>           After all the wakeups and migrations, LLC 14 contains two stream threads (pids 4392 and 4406).
>           All the migrations happen between the sched_waking and sched_wakeup events, showing that they
>           occur during a wakeup and not as a result of load balancing.
> 
> > 
> > This patch is more about enhancing the wake affine, rather than improving
> > the SIS efficiency, so Mel's SIS statistic patch was not deployed for now.
> > 
> > [Limitations]
> > When the number of CPUs suggested by SIS_UTIL is lower than 60% of the LLC
> > CPUs, the LLC domain is regarded as relatively busy. However, the 60% is
> > somewhat hacky, because it indicates that the util_avg% is around 50%,
> > a half busy LLC. I don't have other lightweight/accurate method in mind to
> > check if the LLC domain is busy or not.
> > 
> > [Misc]
> > At LPC we received useful suggestions. The first one is that we should look at
> > the time from the task is woken up, to the time the task goes back to sleep.
> > I assume this is aligned with what is proposed here - we consider the average
> > running time, rather than the total running time. The second one is that we
> > should consider the long-running task. And this is under investigation.
> > 
> > Besides, Prateek has mentioned that the SIS_UTIL is unable to deal with
> > burst workload.  Because there is a delay to reflect the instantaneous
> > utilization and SIS_UTIL expects the workload to be stable. If the system
> > is idle most of the time, but suddenly the workloads burst, the SIS_UTIL
> > overscans. The current patch might mitigate this symptom somehow, as burst
> > workload is usually regarded as a short-running task.
> > 
> > Suggested-by: Tim Chen <tim.c.chen@intel.com>
> > Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> > ---
> >  kernel/sched/fair.c | 31 ++++++++++++++++++++++++++++++-
> >  1 file changed, 30 insertions(+), 1 deletion(-)
> > 
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index 914096c5b1ae..7519ab5b911c 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6020,6 +6020,19 @@ static int wake_wide(struct task_struct *p)
> >  	return 1;
> >  }
> >  
> > +/*
> > + * If a task switches in and then voluntarily relinquishes the
> > + * CPU quickly, it is regarded as a short running task.
> > + * sysctl_sched_min_granularity is chosen as the threshold,
> > + * as this value is the minimal slice if there are too many
> > + * runnable tasks, see __sched_period().
> > + */
> > +static int is_short_task(struct task_struct *p)
> > +{
> > +	return (p->se.sum_exec_runtime <=
> > +		(p->nvcsw * sysctl_sched_min_granularity));
> > +}
> > +
> >  /*
> >   * The purpose of wake_affine() is to quickly determine on which CPU we can run
> >   * soonest. For the purpose of speed we only consider the waking and previous
> > @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
> >  	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
> >  		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
> >  
> > -	if (sync && cpu_rq(this_cpu)->nr_running == 1)
> > +	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> > +	    is_short_task(cpu_curr(this_cpu)))
> 
> This change seems to optimize for affine wakeups, which benefits
> tasks with a producer-consumer pattern but is not ideal for Stream.
> Currently the logic ends up doing an affine wakeup even if the sync
> flag is not set:
> 
>           stream-4135    [029] d..2.   353.580953: sched_waking: comm=stream pid=4129 prio=120 target_cpu=082
>           stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
>           stream-4135    [029] d..2.   353.580960: sched_migrate_task: comm=stream pid=4129 prio=120 orig_cpu=82 dest_cpu=30
>           <idle>-0       [030] dNh2.   353.580993: sched_wakeup: comm=stream pid=4129 prio=120 target_cpu=030
> 
> I believe a consideration should be made for the sync flag when
> going for an affine wakeup. Also the check for short running could
> be at the end after checking if prev_cpu is an available_idle_cpu.
> 
We can move the short-running check after the prev_cpu check. If we
add the sync flag check, would it shrink the coverage of this change?
I found that only a limited set of scenarios enable the sync flag, and
we want to make the short-running check a generic optimization.
But yes, we can test with and without the sync flag constraint to see
which one gives better data; sketches of the two variants follow.
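
To make the two variants concrete (sketches only, on top of moving the check
after the prev_cpu test):

	if (available_idle_cpu(prev_cpu))
		return prev_cpu;

	/* variant A: keep the fallback gated on the sync flag */
	if (sync && is_short_task(cpu_curr(this_cpu)))
		return this_cpu;

versus:

	/* variant B: generic optimization, no sync requirement */
	if (is_short_task(cpu_curr(this_cpu)))
		return this_cpu;
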
> >  		return this_cpu;
> >  
> >  	if (available_idle_cpu(prev_cpu))
> > @@ -6434,6 +6448,21 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> >  			/* overloaded LLC is unlikely to have idle cpu/core */
> >  			if (nr == 1)
> >  				return -1;
> > +
> > +			/*
> > +			 * If nr is smaller than 60% of llc_weight, it
> > +			 * indicates that the util_avg% is higher than 50%.
> > +			 * This is calculated by SIS_UTIL in
> > +			 * update_idle_cpu_scan(). The 50% util_avg indicates
> > +			 * a half-busy LLC domain. System busier than this
> > +			 * level could lower its bar to choose a compromised
> > +			 * "idle" CPU. If the waker on target CPU is a short
> > +			 * task and the wakee is also a short task, pick
> > +			 * target directly.
> > +			 */
> > +			if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
> > +			    is_short_task(p) && is_short_task(cpu_curr(target)))
> > +				return target;
> 
> Pileup seen in hackbench could also be a result of an early
> bailout here for smaller LLCs but I don't have any data to
> substantiate that claim currently.
> 
> >  		}
> >  	}
> >  
> Please let me know if you need any more data from the test
> system for any of the benchmarks covered or if you would like
> me to run any other benchmark on the test system.
Thank you for your testing. I'll enable SNC to divide the LLC domain
into smaller ones and see if the issue can be reproduced on my platform
too, then I'll update my findings on this.

thanks,
Chenyu
> --
> Thanks and Regards,
> Prateek

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-29  5:25   ` Chen Yu
@ 2022-09-29  6:59     ` Honglei Wang
  2022-09-29 17:34       ` K Prateek Nayak
  2022-09-30 16:03       ` Chen Yu
  2022-09-29 17:19     ` K Prateek Nayak
  1 sibling, 2 replies; 20+ messages in thread
From: Honglei Wang @ 2022-09-29  6:59 UTC (permalink / raw)
  To: Chen Yu, K Prateek Nayak
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel



On 2022/9/29 13:25, Chen Yu wrote:
> Hi Prateek,
> On 2022-09-26 at 11:20:16 +0530, K Prateek Nayak wrote:
>> Hello Chenyu,
>>
>> When testing the patch on a dual socket Zen3 system (2 x 64C/128T) we
>> noticed some regressions in some standard benchmarks.
>>
>> tl;dr
>>
>> o Hackbench shows noticeable regression in most cases. Looking at schedstat
>>    data, we see there is an increased number of affine wakeups and an increase
>>    in the average wait time. As the LLC size on the Zen3 machine is only
>>    16 CPUs, there is a good chance the LLC was overloaded and it required
>>    intervention from the load balancer to distribute tasks optimally.
>>
>> o There is a regression in Stream which is caused by the piling up of more
>>    than one Stream thread on the same LLC. This happens as a result of migration
>>    in the wakeup path where the logic goes for an affine wakeup if the
>>    waker is a short-lived task, even if the sync flag is not set and the
>>    previous CPU might be idle.
>>
> Nice analysis and thanks for your testing.
>> I'll inline the results and detailed observations below:
>>
>> On 9/15/2022 10:24 PM, Chen Yu wrote:
>>> [Background]
>>> At LPC 2022 Real-time and Scheduling Micro Conference we presented
>>> the cross CPU wakeup issue. This patch is a text version of the
>>> talk, and hopefully we can clarify the problem and appreciate for any
>>> feedback.
>>>
>>> [re-send due to the previous one did not reach LKML, sorry
>>>   for any inconvenience.]
>>>
>>> [Problem Statement]
>>> For a workload that is doing frequent context switches, the throughput
>>> scales well until the number of instances reaches a peak point. After
>>> that peak point, the throughput drops significantly if the number of
>>> instances continues to increase.
>>>
>>> The will-it-scale context_switch1 test case exposes the issue. The
>>> test platform has 112 CPUs per LLC domain. The will-it-scale launches
>>> 1, 8, 16 ... 112 instances respectively. Each instance is composed
>>> of 2 tasks, and each pair of tasks would do ping-pong scheduling via
>>> pipe_read() and pipe_write(). No task is bound to any CPU.
>>> We found that, once the number of instances is higher than
>>> 56(112 tasks in total, every CPU has 1 task), the throughput
>>> drops accordingly if the instance number continues to increase:
>>>
>>>            ^
>>> throughput|
>>>            |                 X
>>>            |               X   X X
>>>            |             X         X X
>>>            |           X               X
>>>            |         X                   X
>>>            |       X
>>>            |     X
>>>            |   X
>>>            | X
>>>            |
>>>            +-----------------.------------------->
>>>                              56
>>>                                   number of instances
>>>
>>> [Symptom analysis]
>>> Both perf profile and lockstat have shown that, the bottleneck
>>> is the runqueue spinlock. Take perf profile for example:
>>>
>>> nr_instance          rq lock percentage
>>> 1                    1.22%
>>> 8                    1.17%
>>> 16                   1.20%
>>> 24                   1.22%
>>> 32                   1.46%
>>> 40                   1.61%
>>> 48                   1.63%
>>> 56                   1.65%
>>> --------------------------
>>> 64                   3.77%      |
>>> 72                   5.90%      | increase
>>> 80                   7.95%      |
>>> 88                   9.98%      v
>>> 96                   11.81%
>>> 104                  13.54%
>>> 112                  15.13%
>>>
>>> And the rq lock bottleneck is composed of two paths(perf profile):
>>>
>>> (path1):
>>> raw_spin_rq_lock_nested.constprop.0;
>>> try_to_wake_up;
>>> default_wake_function;
>>> autoremove_wake_function;
>>> __wake_up_common;
>>> __wake_up_common_lock;
>>> __wake_up_sync_key;
>>> pipe_write;
>>> new_sync_write;
>>> vfs_write;
>>> ksys_write;
>>> __x64_sys_write;
>>> do_syscall_64;
>>> entry_SYSCALL_64_after_hwframe;write
>>>
>>> (path2):
>>> raw_spin_rq_lock_nested.constprop.0;
>>> __sched_text_start;
>>> schedule_idle;
>>> do_idle;
>>> cpu_startup_entry;
>>> start_secondary;
>>> secondary_startup_64_no_verify
>>>
>>> The idle percentage is around 30% when there are 112 instances:
>>> %Cpu0  :  2.7 us, 66.7 sy,  0.0 ni, 30.7 id
>>>
>>> As a comparison, if we set CPU affinity to these workloads,
>>> which stops them from migrating among CPUs, the idle percentage
>>> drops to nearly 0%, and the throughput increases by about 300%.
>>> This indicates that there is room for optimization.
>>>
>>> A possible scenario to describe the lock contention:
>>> task A tries to wakeup task B on CPU1, then task A grabs the
>>> runqueue lock of CPU1. If CPU1 is about to quit idle, it needs
>>> to grab its own lock which has been taken by someone else. Then
>>> CPU1 takes more time to quit which hurts the performance.
>>>
>>> TTWU_QUEUE could mitigate the cross CPU runqueue lock contention.
>>> Since commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU
>>> on wakelist if wakee cpu is idle"), TTWU_QUEUE offloads the work from
>>> the waker and leverages the idle CPU to queue the wakee. However, a long
>>> idle duration is still observed. The idle task spends quite some time
>>> on sched_ttwu_pending() before it switches out. This long idle
>>> duration would mislead SIS_UTIL, then SIS_UTIL suggests the waker scan
>>> for more CPUs. The time spent searching for an idle CPU would make
>>> wakee waiting for more time, which in turn leads to more idle time.
>>> The NEWLY_IDLE balance fails to pull tasks to the idle CPU, which
>>> might be caused by no runnable wakee being found.
>>>
>>> [Proposal]
>>> If a system is busy, and if the workloads are doing frequent context
>>> switches, it might not be a good idea to spread the wakee on different
>>> CPUs. Instead, consider the task running time and enhance wake affine
>>> might be applicable.
>>>
>>> This idea has been suggested by Rik at LPC 2019 when discussing
>>> the latency nice. He asked the following question: if P1 is a small-time
>>> slice task on CPU, can we put the waking task P2 on the CPU and wait for
>>> P1 to release the CPU, without wasting time to search for an idle CPU?
>>> At LPC 2021 Vincent Guittot has proposed:
>>> 1. If the wakee is a long-running task, should we skip the short idle CPU?
>>> 2. If the wakee is a short-running task, can we put it onto a lightly loaded
>>>     local CPU?
>>>
>>> Current proposal is a variant of 2:
>>> If the target CPU is running a short-time slice task, and the wakee
>>> is also a short-time slice task, the target CPU could be chosen as the
>>> candidate when the system is busy.
>>>
>>> The definition of a short-time slice task is: The average running time
>>> of the task during each run is no more than sysctl_sched_min_granularity.
>>> If a task switches in and then voluntarily relinquishes the CPU
>>> quickly, it is regarded as a short-running task. Choosing
>>> sysctl_sched_min_granularity because it is the minimal slice if there
>>> are too many runnable tasks.
>>>
>>> Reuse the nr_idle_scan of SIS_UTIL to decide if the system is busy.
>>> If yes, then a compromised "idle" CPU might be acceptable.
>>>
>>> The reason is that, if the waker is a short running task, it might
>>> relinquish the CPU soon, the wakee has the chance to be scheduled.
>>> On the other hand, if the wakee is also a short-running task, the
>>> impact it brings to the target CPU is small. If the system is
>>> already busy, maybe we could lower the bar to find an idle CPU.
>>> The effect is, the wake affine is enhanced.
>>>
>>> [Benchmark results]
>>> The baseline is 6.0-rc4.
>>>
>>> The throughput of will-it-scale.context_switch1 has been increased by
>>> 331.13% with this patch applied.
>>>
>>> netperf
>>> =======
>>> case            	load    	baseline(std%)	compare%( std%)
>>> TCP_RR          	28 threads	 1.00 (  0.57)	 +0.29 (  0.59)
>>> TCP_RR          	56 threads	 1.00 (  0.49)	 +0.43 (  0.43)
>>> TCP_RR          	84 threads	 1.00 (  0.34)	 +0.24 (  0.34)
>>> TCP_RR          	112 threads	 1.00 (  0.26)	 +1.57 (  0.20)
>>> TCP_RR          	140 threads	 1.00 (  0.20)	+178.05 (  8.83)
>>> TCP_RR          	168 threads	 1.00 ( 10.14)	 +0.87 ( 10.03)
>>> TCP_RR          	196 threads	 1.00 ( 13.51)	 +0.90 ( 11.84)
>>> TCP_RR          	224 threads	 1.00 (  7.12)	 +0.66 (  8.28)
>>> UDP_RR          	28 threads	 1.00 (  0.96)	 -0.10 (  0.97)
>>> UDP_RR          	56 threads	 1.00 ( 10.93)	 +0.24 (  0.82)
>>> UDP_RR          	84 threads	 1.00 (  8.99)	 +0.40 (  0.71)
>>> UDP_RR          	112 threads	 1.00 (  0.15)	 +0.72 (  7.77)
>>> UDP_RR          	140 threads	 1.00 ( 11.11)	+135.81 ( 13.86)
>>> UDP_RR          	168 threads	 1.00 ( 12.58)	+147.63 ( 12.72)
>>> UDP_RR          	196 threads	 1.00 ( 19.47)	 -0.34 ( 16.14)
>>> UDP_RR          	224 threads	 1.00 ( 12.88)	 -0.35 ( 12.73)
>>>
>>> hackbench
>>> =========
>>> case            	load    	baseline(std%)	compare%( std%)
>>> process-pipe    	1 group 	 1.00 (  1.02)	 +0.14 (  0.62)
>>> process-pipe    	2 groups 	 1.00 (  0.73)	 +0.29 (  0.51)
>>> process-pipe    	4 groups 	 1.00 (  0.16)	 +0.24 (  0.31)
>>> process-pipe    	8 groups 	 1.00 (  0.06)	+11.56 (  0.11)
>>> process-sockets 	1 group 	 1.00 (  1.59)	 +0.06 (  0.77)
>>> process-sockets 	2 groups 	 1.00 (  1.13)	 -1.86 (  1.31)
>>> process-sockets 	4 groups 	 1.00 (  0.14)	 +1.76 (  0.29)
>>> process-sockets 	8 groups 	 1.00 (  0.27)	 +2.73 (  0.10)
>>> threads-pipe    	1 group 	 1.00 (  0.43)	 +0.83 (  2.20)
>>> threads-pipe    	2 groups 	 1.00 (  0.52)	 +1.03 (  0.55)
>>> threads-pipe    	4 groups 	 1.00 (  0.44)	 -0.08 (  0.31)
>>> threads-pipe    	8 groups 	 1.00 (  0.04)	+11.86 (  0.05)
>>> threads-sockets 	1 groups 	 1.00 (  1.89)	 +3.51 (  0.57)
>>> threads-sockets 	2 groups 	 1.00 (  0.04)	 -1.12 (  0.69)
>>> threads-sockets 	4 groups 	 1.00 (  0.14)	 +1.77 (  0.18)
>>> threads-sockets 	8 groups 	 1.00 (  0.03)	 +2.75 (  0.03)
>>>
>>> tbench
>>> ======
>>> case            	load    	baseline(std%)	compare%( std%)
>>> loopback        	28 threads	 1.00 (  0.08)	 +0.51 (  0.25)
>>> loopback        	56 threads	 1.00 (  0.15)	 -0.89 (  0.16)
>>> loopback        	84 threads	 1.00 (  0.03)	 +0.35 (  0.07)
>>> loopback        	112 threads	 1.00 (  0.06)	 +2.84 (  0.01)
>>> loopback        	140 threads	 1.00 (  0.07)	 +0.69 (  0.11)
>>> loopback        	168 threads	 1.00 (  0.09)	 +0.14 (  0.18)
>>> loopback        	196 threads	 1.00 (  0.04)	 -0.18 (  0.20)
>>> loopback        	224 threads	 1.00 (  0.25)	 -0.37 (  0.03)
>>>
>>> Other benchmarks are under testing.
>>
>> Discussed below are the results from running standard benchmarks on
>> a dual socket Zen3 (2 x 64C/128T) machine configured in different
>> NPS modes.
>>
>> NPS Modes are used to logically divide a single socket into
>> multiple NUMA regions.
>> Following is the NUMA configuration for each NPS mode on the system:
>>
>> NPS1: Each socket is a NUMA node.
>>      Total 2 NUMA nodes in the dual socket machine.
>>
>>      Node 0: 0-63,   128-191
>>      Node 1: 64-127, 192-255
>>
>> NPS2: Each socket is further logically divided into 2 NUMA regions.
>>      Total 4 NUMA nodes exist over 2 sockets.
>>     
>>      Node 0: 0-31,   128-159
>>      Node 1: 32-63,  160-191
>>      Node 2: 64-95,  192-223
>>      Node 3: 96-127, 223-255
>>
>> NPS4: Each socket is logically divided into 4 NUMA regions.
>>      Total 8 NUMA nodes exist over 2 sockets.
>>     
>>      Node 0: 0-15,    128-143
>>      Node 1: 16-31,   144-159
>>      Node 2: 32-47,   160-175
>>      Node 3: 48-63,   176-191
>>      Node 4: 64-79,   192-207
>>      Node 5: 80-95,   208-223
>>      Node 6: 96-111,  223-231
>>      Node 7: 112-127, 232-255
>>
>> Benchmark Results:
>>
>> Kernel versions:
>> - tip:       5.19.0 tip sched/core
>> - shortrun:  5.19.0 tip sched/core + this patch
>>
>> When we started testing, the tip was at:
>> commit 7e9518baed4c ("sched/fair: Move call to list_last_entry() in detach_tasks")
>>
>> ~~~~~~~~~~~~~
>> ~ hackbench ~
>> ~~~~~~~~~~~~~
>>
>> NPS1
>>
>> Test:			tip			shortrun
>>   1-groups:	   4.23 (0.00 pct)	   4.24 (-0.23 pct)
>>   2-groups:	   4.93 (0.00 pct)	   5.68 (-15.21 pct)
>>   4-groups:	   5.32 (0.00 pct)	   6.21 (-16.72 pct)
>>   8-groups:	   5.46 (0.00 pct)	   6.49 (-18.86 pct)
>> 16-groups:	   7.31 (0.00 pct)	   7.78 (-6.42 pct)
>>
>> NPS2
>>
>> Test:			tip			shortrun
>>   1-groups:	   4.19 (0.00 pct)	   4.19 (0.00 pct)
>>   2-groups:	   4.77 (0.00 pct)	   5.43 (-13.83 pct)
>>   4-groups:	   5.15 (0.00 pct)	   6.20 (-20.38 pct)
>>   8-groups:	   5.47 (0.00 pct)	   6.54 (-19.56 pct)
>> 16-groups:	   6.63 (0.00 pct)	   7.28 (-9.80 pct)
>>
>> NPS4
>>
>> Test:			tip			shortrun
>>   1-groups:	   4.23 (0.00 pct)	   4.39 (-3.78 pct)
>>   2-groups:	   4.78 (0.00 pct)	   5.48 (-14.64 pct)
>>   4-groups:	   5.17 (0.00 pct)	   6.14 (-18.76 pct)
>>   8-groups:	   5.63 (0.00 pct)	   6.51 (-15.63 pct)
>> 16-groups:	   7.88 (0.00 pct)	   7.03 (10.78 pct)
>>
>> ~~~~~~~~~~~~
>> ~ schbench ~
>> ~~~~~~~~~~~~
>>
>> NPS1
>>
>> #workers:       tip			shortrun
>>    1:	  22.00 (0.00 pct)	  36.00 (-63.63 pct)
>>    2:	  34.00 (0.00 pct)	  38.00 (-11.76 pct)
>>    4:	  37.00 (0.00 pct)	  36.00 (2.70 pct)
>>    8:	  55.00 (0.00 pct)	  51.00 (7.27 pct)
>>   16:	  69.00 (0.00 pct)	  68.00 (1.44 pct)
>>   32:	 113.00 (0.00 pct)	 116.00 (-2.65 pct)
>>   64:	 219.00 (0.00 pct)	 232.00 (-5.93 pct)
>> 128:	 506.00 (0.00 pct)	 1019.00 (-101.38 pct)
>> 256:	 45440.00 (0.00 pct)	 44864.00 (1.26 pct)
>> 512:	 76672.00 (0.00 pct)	 73600.00 (4.00 pct)
>>
>> NPS2
>>
>> #workers:	tip			shortrun
>>    1:	  31.00 (0.00 pct)	  36.00 (-16.12 pct)
>>    2:	  36.00 (0.00 pct)	  36.00 (0.00 pct)
>>    4:	  45.00 (0.00 pct)	  39.00 (13.33 pct)
>>    8:	  47.00 (0.00 pct)	  48.00 (-2.12 pct)
>>   16:	  66.00 (0.00 pct)	  71.00 (-7.57 pct)
>>   32:	 114.00 (0.00 pct)	 123.00 (-7.89 pct)
>>   64:	 215.00 (0.00 pct)	 248.00 (-15.34 pct)
>> 128:	 495.00 (0.00 pct)	 531.00 (-7.27 pct)
>> 256:	 48576.00 (0.00 pct)	 47552.00 (2.10 pct)
>> 512:	 79232.00 (0.00 pct)	 74624.00 (5.81 pct)
>>
>> NPS4
>>
>> #workers:	tip			shortrun
>>    1:	  30.00 (0.00 pct)	  36.00 (-20.00 pct)
>>    2:	  34.00 (0.00 pct)	  38.00 (-11.76 pct)
>>    4:	  41.00 (0.00 pct)	  44.00 (-7.31 pct)
>>    8:	  60.00 (0.00 pct)	  53.00 (11.66 pct)
>>   16:	  68.00 (0.00 pct)	  73.00 (-7.35 pct)
>>   32:	 116.00 (0.00 pct)	 125.00 (-7.75 pct)
>>   64:	 224.00 (0.00 pct)	 248.00 (-10.71 pct)
>> 128:	 495.00 (0.00 pct)	 569.00 (-14.94 pct)
>> 256:	 45888.00 (0.00 pct)	 38720.00 (15.62 pct)
>> 512:	 78464.00 (0.00 pct)	 73600.00 (6.19 pct)
>>
>>
>> ~~~~~~~~~~
>> ~ tbench ~
>> ~~~~~~~~~~
>>
>> NPS1
>>
>> Clients:	tip			shortrun
>>      1	 550.66 (0.00 pct)	 546.56 (-0.74 pct)
>>      2	 1009.69 (0.00 pct)	 1010.01 (0.03 pct)
>>      4	 1795.32 (0.00 pct)	 1782.71 (-0.70 pct)
>>      8	 2971.16 (0.00 pct)	 3035.58 (2.16 pct)
>>     16	 4627.98 (0.00 pct)	 4816.82 (4.08 pct)
>>     32	 8065.15 (0.00 pct)	 9269.52 (14.93 pct)
>>     64	 14994.32 (0.00 pct)	 14704.38 (-1.93 pct)
>>    128	 5175.73 (0.00 pct)	 5174.77 (-0.01 pct)
>>    256	 48763.57 (0.00 pct)	 49649.67 (1.81 pct)
>>    512	 43780.78 (0.00 pct)	 44717.04 (2.13 pct)
>>   1024	 40341.84 (0.00 pct)	 42078.99 (4.30 pct)
>>
>> NPS2
>>
>> Clients:	tip			shortrun
>>      1	 551.06 (0.00 pct)	 549.17 (-0.34 pct)
>>      2	 1000.76 (0.00 pct)	 993.75 (-0.70 pct)
>>      4	 1737.02 (0.00 pct)	 1773.33 (2.09 pct)
>>      8	 2992.31 (0.00 pct)	 2971.05 (-0.71 pct)
>>     16	 4579.29 (0.00 pct)	 4470.71 (-2.37 pct)
>>     32	 9120.73 (0.00 pct)	 8080.89 (-11.40 pct)
>>     64	 14918.58 (0.00 pct)	 14395.57 (-3.50 pct)
>>    128	 20830.61 (0.00 pct)	 20579.09 (-1.20 pct)
>>    256	 47708.18 (0.00 pct)	 47416.37 (-0.61 pct)
>>    512	 43721.79 (0.00 pct)	 43754.83 (0.07 pct)
>>   1024	 40920.49 (0.00 pct)	 40701.90 (-0.53 pct)
>>
>> NPS4
>>
>> Clients:	tip			shortrun
>>      1	 549.22 (0.00 pct)	 548.36 (-0.15 pct)
>>      2	 1000.08 (0.00 pct)	 1037.74 (3.76 pct)
>>      4	 1794.78 (0.00 pct)	 1802.11 (0.40 pct)
>>      8	 3008.50 (0.00 pct)	 2989.22 (-0.64 pct)
>>     16	 4804.71 (0.00 pct)	 4706.51 (-2.04 pct)
>>     32	 9156.57 (0.00 pct)	 8253.84 (-9.85 pct)
>>     64	 14901.45 (0.00 pct)	 15049.51 (0.99 pct)
>>    128	 20771.20 (0.00 pct)	 13229.50 (-36.30 pct)
>>    256	 47033.88 (0.00 pct)	 46737.17 (-0.63 pct)
>>    512	 43429.01 (0.00 pct)	 43246.64 (-0.41 pct)
>>   1024	 39271.27 (0.00 pct)	 42194.75 (7.44 pct)
>>
>>
>> ~~~~~~~~~~
>> ~ stream ~
>> ~~~~~~~~~~
>>
>> NPS1
>>
>> 10 Runs:
>>
>> Test:	        tip			shortrun
>>   Copy:	 336311.52 (0.00 pct)	 330116.75 (-1.84 pct)
>> Scale:	 212955.82 (0.00 pct)	 215330.30 (1.11 pct)
>>    Add:	 251518.23 (0.00 pct)	 250926.53 (-0.23 pct)
>> Triad:	 262077.88 (0.00 pct)	 259618.70 (-0.93 pct)
>>
>> 100 Runs:
>>
>> Test:		tip			shortrun
>>   Copy:	 339533.83 (0.00 pct)	 323452.74 (-4.73 pct)
>> Scale:	 194736.72 (0.00 pct)	 215789.55 (10.81 pct)
>>    Add:	 218294.54 (0.00 pct)	 244916.33 (12.19 pct)
>> Triad:	 262371.40 (0.00 pct)	 252997.84 (-3.57 pct)
>>
>> NPS2
>>
>> 10 Runs:
>>
>> Test:		tip			shortrun
>>   Copy:	 335277.15 (0.00 pct)	 305516.57 (-8.87 pct)
>> Scale:	 220990.24 (0.00 pct)	 207061.22 (-6.30 pct)
>>    Add:	 264156.13 (0.00 pct)	 243368.49 (-7.86 pct)
>> Triad:	 268707.53 (0.00 pct)	 223486.30 (-16.82 pct)
>>
>> 100 Runs:
>>
>> Test:		tip			shortrun
>>   Copy:	 334913.73 (0.00 pct)	 319677.81 (-4.54 pct)
>> Scale:	 230522.47 (0.00 pct)	 222757.62 (-3.36 pct)
>>    Add:	 264567.28 (0.00 pct)	 254883.62 (-3.66 pct)
>> Triad:	 272974.23 (0.00 pct)	 260561.08 (-4.54 pct)
>>
>> NPS4
>>
>> 10 Runs:
>>
>> Test:		tip			shortrun
>>   Copy:	 356452.47 (0.00 pct)	 255911.77 (-28.20 pct)
>> Scale:	 242986.42 (0.00 pct)	 171587.28 (-29.38 pct)
>>    Add:	 268512.09 (0.00 pct)	 188244.75 (-29.89 pct)
>> Triad:	 281622.43 (0.00 pct)	 193271.97 (-31.37 pct)
>>
>> 100 Runs:
>>
>> Test:		tip			shortrun
>>   Copy:	 367384.81 (0.00 pct)	 273101.20 (-25.66 pct)
>> Scale:	 254289.04 (0.00 pct)	 189986.88 (-25.28 pct)
>>    Add:	 273683.33 (0.00 pct)	 206384.96 (-24.58 pct)
>> Triad:	 285696.90 (0.00 pct)	 217214.10 (-23.97 pct)
>>
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~
>> ~ Notes and Observations ~
>> ~~~~~~~~~~~~~~~~~~~~~~~~~~
>>
>> o Schedstat data for Hackbench with 2 groups in NPS1 mode:
>>
>>          ---------------------------------------------------------------------------------------------------
>>          cpu:  all_cpus (avg) vs cpu:  all_cpus (avg)
>>          ---------------------------------------------------------------------------------------------------
>>          kernel:                                                    :           tip      shortrun
>>          sched_yield count                                          :             0,            0
>>          Legacy counter can be ignored                              :             0,            0
>>          schedule called                                            :         53305,        40615  | -23.81|
>>          schedule left the processor idle                           :         22406,        16919  | -24.49|
>>          try_to_wake_up was called                                  :         30822,        23625  | -23.35|
>>          try_to_wake_up was called to wake up the local cpu         :           984,         2583  | 162.50|
>>          total runtime by tasks on this processor (in jiffies)      :     596998654,    481267347  | -19.39| *
>>          total waittime by tasks on this processor (in jiffies)     :     514142630,    766745576  |  49.13| * Longer wait time
> Agree, the wait-to-run ratio is 766745576 / 481267347 = 1.59 with the patch, which is
> much bigger than 514142630 / 596998654 = 0.86 before the patch.
> 
>>          total timeslices run on this cpu                           :         30893,        23691  | -23.31| *
>>          ---------------------------------------------------------------------------------------------------
>>
>>
>>          < --------------------------------------  Wakeup info:  -------------------------------------- >
>>          kernel:                                                 :           tip      shortrun
>>          Wakeups on same         SMT cpus = all_cpus (avg)       :          1470,         1301  | -11.50|
>>          Wakeups on same         MC cpus = all_cpus (avg)        :         22913,        18606  | -18.80|
>>          Wakeups on same         DIE cpus = all_cpus (avg)       :          3634,          693  | -80.93|
>>          Wakeups on same         NUMA cpus = all_cpus (avg)      :          1819,          440  | -75.81|
>>          Affine wakeups on same  SMT cpus = all_cpus (avg)       :          1025,         1421  |  38.63| * More affine wakeups on possibly
>>          Affine wakeups on same  MC cpus = all_cpus (avg)        :         14455,        17514  |  21.16| * busy runqueue leading to longer
>>          Affine wakeups on same  DIE cpus = all_cpus (avg)       :          2828,          701  | -75.21|   wait time
>>          Affine wakeups on same  NUMA cpus = all_cpus (avg)      :          1194,          456  | -61.81|
>>          ------------------------------------------------------------------------------------------------
> Agree, for the SMT and MC domains, wake affine has been enhanced to suggest picking
> a CPU that is running a short task rather than an idle one. Then later SIS_UTIL would
> prefer to pick this candidate CPU.
>>
>> 	We observe a larger wait time with the patch, which points
>> 	to the fact that the tasks are piling up on the run queue. I believe
>> 	Tim's suggestion will help here, where we can avoid a pileup as a
>> 	result of the waker task being a short running task.
> Yes, we'll raise the bar to pick a short running CPU.
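> For illustration, a minimal sketch of what raising that bar could look
> like (untested; it simply adds the nr_running == 1 condition Tim
> suggested on top of the existing is_short_task() check):
> 
> 	if (cpu_rq(this_cpu)->nr_running == 1 &&
> 	    is_short_task(cpu_curr(this_cpu)))
> 		return this_cpu;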
>>
>> o Tracepoint data for Stream for 100 runs in NPS4
>>
>> 	Following tracepoints were enabled for Stream threads:
>> 	  - sched_wakeup_new: To observe initial placement
>> 	  - sched_waking: To check if migration is in wakeup context or lb context
>> 	  - sched_wakeup: To check if migration is in wakeup context or lb context
>> 	  - sched_migrate_task: To observe task movements
>>
>> 	--> tip:
>>
>>     run_stream.sh-3724    [057] d..2.   450.593407: sched_wakeup_new: comm=run_stream.sh pid=3733 prio=120 target_cpu=050 *LLC: 6
>>            <idle>-0       [182] d.s4.   450.594375: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>>            <idle>-0       [050] dNh2.   450.594381: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>>            <idle>-0       [182] d.s4.   450.594657: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>>            <idle>-0       [050] dNh2.   450.594661: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>>            stream-3733    [050] d..2.   450.594893: sched_wakeup_new: comm=stream pid=3735 prio=120 target_cpu=057 *LLC: 7
>>            stream-3733    [050] d..2.   450.594955: sched_wakeup_new: comm=stream pid=3736 prio=120 target_cpu=078 *LLC: 9
>>            stream-3733    [050] d..2.   450.594988: sched_wakeup_new: comm=stream pid=3737 prio=120 target_cpu=045 *LLC: 5
>>            stream-3733    [050] d..2.   450.595016: sched_wakeup_new: comm=stream pid=3738 prio=120 target_cpu=008 *LLC: 1
>>            stream-3733    [050] d..2.   450.595029: sched_waking: comm=stream pid=3737 prio=120 target_cpu=045
>>            <idle>-0       [045] dNh2.   450.595037: sched_wakeup: comm=stream pid=3737 prio=120 target_cpu=045
>>            stream-3737    [045] d..2.   450.595072: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>>            <idle>-0       [050] dNh2.   450.595078: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>>            stream-3738    [008] d..2.   450.595102: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>>            <idle>-0       [050] dNh2.   450.595111: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>>            stream-3733    [050] d..2.   450.595151: sched_wakeup_new: comm=stream pid=3739 prio=120 target_cpu=097 *LLC: 12
>>            stream-3733    [050] d..2.   450.595181: sched_wakeup_new: comm=stream pid=3740 prio=120 target_cpu=194 *LLC: 8
>>            stream-3733    [050] d..2.   450.595221: sched_wakeup_new: comm=stream pid=3741 prio=120 target_cpu=080 *LLC: 10
>>            stream-3733    [050] d..2.   450.595249: sched_wakeup_new: comm=stream pid=3742 prio=120 target_cpu=144 *LLC: 2
>>            stream-3733    [050] d..2.   450.595285: sched_wakeup_new: comm=stream pid=3743 prio=120 target_cpu=239 *LLC: 13
>>            stream-3733    [050] d..2.   450.595320: sched_wakeup_new: comm=stream pid=3744 prio=120 target_cpu=130 *LLC: 0
>>            stream-3733    [050] d..2.   450.595364: sched_wakeup_new: comm=stream pid=3745 prio=120 target_cpu=113 *LLC: 14
>>            stream-3744    [130] d..2.   450.595407: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>>            <idle>-0       [050] dNh2.   450.595416: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>>            stream-3733    [050] d..2.   450.595423: sched_waking: comm=stream pid=3745 prio=120 target_cpu=113
>>            <idle>-0       [113] dNh2.   450.595433: sched_wakeup: comm=stream pid=3745 prio=120 target_cpu=113
>>            stream-3733    [050] d..2.   450.595452: sched_wakeup_new: comm=stream pid=3746 prio=120 target_cpu=160 *LLC: 4
>>            stream-3733    [050] d..2.   450.595486: sched_wakeup_new: comm=stream pid=3747 prio=120 target_cpu=255 *LLC: 15
>>            stream-3733    [050] d..2.   450.595513: sched_wakeup_new: comm=stream pid=3748 prio=120 target_cpu=159 *LLC: 3
>>            stream-3746    [160] d..2.   450.595533: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>>            <idle>-0       [050] dNh2.   450.595542: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>>            stream-3747    [255] d..2.   450.595562: sched_waking: comm=stream pid=3733 prio=120 target_cpu=050
>>            <idle>-0       [050] dNh2.   450.595573: sched_wakeup: comm=stream pid=3733 prio=120 target_cpu=050
>>            stream-3733    [050] d..2.   450.595614: sched_wakeup_new: comm=stream pid=3749 prio=120 target_cpu=222 *LLC: 11
>>            stream-3740    [194] d..2.   451.140510: sched_waking: comm=stream pid=3747 prio=120 target_cpu=255
>>            <idle>-0       [255] dNh2.   451.140523: sched_wakeup: comm=stream pid=3747 prio=120 target_cpu=255
>>            stream-3733    [050] d..2.   451.617257: sched_waking: comm=stream pid=3740 prio=120 target_cpu=194
>>            stream-3733    [050] d..2.   451.617267: sched_waking: comm=stream pid=3746 prio=120 target_cpu=160
>>            stream-3733    [050] d..2.   451.617269: sched_waking: comm=stream pid=3739 prio=120 target_cpu=097
>>            stream-3733    [050] d..2.   451.617272: sched_waking: comm=stream pid=3742 prio=120 target_cpu=144
>>            stream-3733    [050] d..2.   451.617275: sched_waking: comm=stream pid=3749 prio=120 target_cpu=222
>>            ... (No migrations observed)
>>
>>            In most cases, each LLC is running only 1 stream thread, leading to optimal performance.
>>
>> 	--> with patch:
>>
>>     run_stream.sh-4383    [070] d..2.  1237.764236: sched_wakeup_new: comm=run_stream.sh pid=4392 prio=120 target_cpu=206 *LLC: 9
>>            stream-4392    [206] d..2.  1237.765121: sched_wakeup_new: comm=stream pid=4394 prio=120 target_cpu=070 *LLC: 8
>>            stream-4392    [206] d..2.  1237.765171: sched_wakeup_new: comm=stream pid=4395 prio=120 target_cpu=169 *LLC: 5
>>            stream-4392    [206] d..2.  1237.765204: sched_wakeup_new: comm=stream pid=4396 prio=120 target_cpu=111 *LLC: 13
>>            stream-4392    [206] d..2.  1237.765243: sched_wakeup_new: comm=stream pid=4397 prio=120 target_cpu=130 *LLC: 0
>>            stream-4392    [206] d..2.  1237.765249: sched_waking: comm=stream pid=4396 prio=120 target_cpu=111
>>            <idle>-0       [111] dNh2.  1237.765260: sched_wakeup: comm=stream pid=4396 prio=120 target_cpu=111
>>            stream-4392    [206] d..2.  1237.765281: sched_wakeup_new: comm=stream pid=4398 prio=120 target_cpu=182 *LLC: 6
>>            stream-4392    [206] d..2.  1237.765318: sched_wakeup_new: comm=stream pid=4399 prio=120 target_cpu=060 *LLC: 7
>>            stream-4392    [206] d..2.  1237.765368: sched_wakeup_new: comm=stream pid=4400 prio=120 target_cpu=124 *LLC: 15
>>            stream-4392    [206] d..2.  1237.765408: sched_wakeup_new: comm=stream pid=4401 prio=120 target_cpu=031 *LLC: 3
>>            stream-4392    [206] d..2.  1237.765439: sched_wakeup_new: comm=stream pid=4402 prio=120 target_cpu=095 *LLC: 11
>>            stream-4392    [206] d..2.  1237.765475: sched_wakeup_new: comm=stream pid=4403 prio=120 target_cpu=015 *LLC: 1
>>            stream-4401    [031] d..2.  1237.765497: sched_waking: comm=stream pid=4392 prio=120 target_cpu=206
>>            stream-4401    [031] d..2.  1237.765506: sched_migrate_task: comm=stream pid=4392 prio=120 orig_cpu=206 dest_cpu=152 *LLC: 9 -> 3
>>            <idle>-0       [152] dNh2.  1237.765540: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=152
>>            stream-4403    [015] d..2.  1237.765562: sched_waking: comm=stream pid=4392 prio=120 target_cpu=152
>>            stream-4403    [015] d..2.  1237.765570: sched_migrate_task: comm=stream pid=4392 prio=120 orig_cpu=152 dest_cpu=136 *LLC: 3 -> 1
>>            <idle>-0       [136] dNh2.  1237.765602: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=136
>>            stream-4392    [136] d..2.  1237.765799: sched_wakeup_new: comm=stream pid=4404 prio=120 target_cpu=097 *LLC: 12
>>            stream-4392    [136] d..2.  1237.765893: sched_wakeup_new: comm=stream pid=4405 prio=120 target_cpu=084 *LLC: 10
>>            stream-4392    [136] d..2.  1237.765957: sched_wakeup_new: comm=stream pid=4406 prio=120 target_cpu=119 *LLC: 14
>>            stream-4392    [136] d..2.  1237.766018: sched_wakeup_new: comm=stream pid=4407 prio=120 target_cpu=038 *LLC: 4
>>            stream-4406    [119] d..2.  1237.766044: sched_waking: comm=stream pid=4392 prio=120 target_cpu=136
>>            stream-4406    [119] d..2.  1237.766050: sched_migrate_task: comm=stream pid=4392 prio=120 orig_cpu=136 dest_cpu=240 *LLC: 1 -> 14
>>            <idle>-0       [240] dNh2.  1237.766154: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=240
>>            stream-4392    [240] d..2.  1237.766361: sched_wakeup_new: comm=stream pid=4408 prio=120 target_cpu=023 *LLC: 2
>>            stream-4399    [060] d..2.  1238.300605: sched_waking: comm=stream pid=4406 prio=120 target_cpu=119 *LLC: 14 <--- Two stream threads are
>>            stream-4399    [060] d..2.  1238.300611: sched_waking: comm=stream pid=4392 prio=120 target_cpu=240 *LLC: 14 <--- on the same LLC leading to
>>            <idle>-0       [119] dNh2.  1238.300620: sched_wakeup: comm=stream pid=4406 prio=120 target_cpu=119 *LLC: 14      cache contention, degrading
>>            <idle>-0       [240] dNh2.  1238.300621: sched_wakeup: comm=stream pid=4392 prio=120 target_cpu=240 *LLC: 14      the Stream throughput.
>>            ... (No more migrations observed)
>>
>>            After all the wakeups and migrations, LLC 14 contains two stream threads (pid: 4392 and 4406).
>>            All the migrations happen between the events sched_waking and sched_wakeup, showing that the
>>            migrations happen during a wakeup and not as a result of load balancing.
>>
>>>
>>> This patch is more about enhancing the wake affine, rather than improving
>>> the SIS efficiency, so Mel's SIS statistic patch was not deployed for now.
>>>
>>> [Limitations]
>>> When the number of CPUs suggested by SIS_UTIL is lower than 60% of the LLC
>>> CPUs, the LLC domain is regarded as relatively busy. However, the 60% is
>>> somewhat hacky, because it indicates that the util_avg% is around 50%,
>>> a half busy LLC. I don't have other lightweight/accurate method in mind to
>>> check if the LLC domain is busy or not.
>>>
>>> [Misc]
>>> At LPC we received useful suggestions. The first one is that we should look at
>>> the time from the task is woken up, to the time the task goes back to sleep.
>>> I assume this is aligned with what is proposed here - we consider the average
>>> running time, rather than the total running time. The second one is that we
>>> should consider the long-running task. And this is under investigation.
>>>
>>> Besides, Prateek has mentioned that the SIS_UTIL is unable to deal with
>>> burst workload.  Because there is a delay to reflect the instantaneous
>>> utilization and SIS_UTIL expects the workload to be stable. If the system
>>> is idle most of the time, but suddenly the workloads burst, the SIS_UTIL
>>> overscans. The current patch might mitigate this symptom somehow, as burst
>>> workload is usually regarded as a short-running task.
>>>
>>> Suggested-by: Tim Chen <tim.c.chen@intel.com>
>>> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
>>> ---
>>>   kernel/sched/fair.c | 31 ++++++++++++++++++++++++++++++-
>>>   1 file changed, 30 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 914096c5b1ae..7519ab5b911c 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -6020,6 +6020,19 @@ static int wake_wide(struct task_struct *p)
>>>   	return 1;
>>>   }
>>>   
>>> +/*
>>> + * If a task switches in and then voluntarily relinquishes the
>>> + * CPU quickly, it is regarded as a short running task.
>>> + * sysctl_sched_min_granularity is chosen as the threshold,
>>> + * as this value is the minimal slice if there are too many
>>> + * runnable tasks, see __sched_period().
>>> + */
>>> +static int is_short_task(struct task_struct *p)
>>> +{
>>> +	return (p->se.sum_exec_runtime <=
>>> +		(p->nvcsw * sysctl_sched_min_granularity));
>>> +}
>>> +
>>>   /*
>>>    * The purpose of wake_affine() is to quickly determine on which CPU we can run
>>>    * soonest. For the purpose of speed we only consider the waking and previous
>>> @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
>>>   	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
>>>   		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
>>>   
>>> -	if (sync && cpu_rq(this_cpu)->nr_running == 1)
>>> +	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
>>> +	    is_short_task(cpu_curr(this_cpu)))

It seems this somewhat breaks the idle (or will-be-idle) purpose of
wake_affine_idle() here. Maybe we can do something like this?

	if ((sync || is_short_task(cpu_curr(this_cpu))) &&
	    cpu_rq(this_cpu)->nr_running == 1)

Thanks,
Honglei

>>
>> This change seems to optimize for affine wakeups, which benefits
>> tasks with a producer-consumer pattern, but is not ideal for Stream.
>> Currently the logic ends up doing an affine wakeup even if the sync
>> flag is not set:
>>
>>            stream-4135    [029] d..2.   353.580953: sched_waking: comm=stream pid=4129 prio=120 target_cpu=082
>>            stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
>>            stream-4135    [029] d..2.   353.580960: sched_migrate_task: comm=stream pid=4129 prio=120 orig_cpu=82 dest_cpu=30
>>            <idle>-0       [030] dNh2.   353.580993: sched_wakeup: comm=stream pid=4129 prio=120 target_cpu=030
>>
>> I believe a consideration should be made for the sync flag when
>> going for an affine wakeup. Also the check for short running could
>> be at the end after checking if prev_cpu is an available_idle_cpu.
>>
> We can move the short running check after the prev_cpu check. If we
> add the sync flag check, would it shrink the coverage of this change?
> I found that there are only limited scenarios that would enable the sync
> flag, and we want to make the short running check a generic optimization.
> But yes, we can test with/without the sync flag constraint to see which one
> gives better data.
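> For reference, an untested sketch of that ordering, with the short
> running check moved after the prev_cpu check (the sync-constrained
> variant would additionally require sync in the last condition):
> 
> 	if (sync && cpu_rq(this_cpu)->nr_running == 1)
> 		return this_cpu;
> 
> 	if (available_idle_cpu(prev_cpu))
> 		return prev_cpu;
> 
> 	if (is_short_task(cpu_curr(this_cpu)))
> 		return this_cpu;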
>>>   		return this_cpu;
>>>   
>>>   	if (available_idle_cpu(prev_cpu))
>>> @@ -6434,6 +6448,21 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>>>   			/* overloaded LLC is unlikely to have idle cpu/core */
>>>   			if (nr == 1)
>>>   				return -1;
>>> +
>>> +			/*
>>> +			 * If nr is smaller than 60% of llc_weight, it
>>> +			 * indicates that the util_avg% is higher than 50%.
>>> +			 * This is calculated by SIS_UTIL in
>>> +			 * update_idle_cpu_scan(). The 50% util_avg indicates
>>> +			 * a half-busy LLC domain. System busier than this
>>> +			 * level could lower its bar to choose a compromised
>>> +			 * "idle" CPU. If the waker on target CPU is a short
>>> +			 * task and the wakee is also a short task, pick
>>> +			 * target directly.
>>> +			 */
>>> +			if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
>>> +			    is_short_task(p) && is_short_task(cpu_curr(target)))
>>> +				return target;
>>
>> Pileup seen in hackbench could also be a result of an early
>> bailout here for smaller LLCs but I don't have any data to
>> substantiate that claim currently.
>>
>>>   		}
>>>   	}
>>>   
>> Please let me know if you need any more data from the test
>> system for any of the benchmarks covered or if you would like
>> me to run any other benchmark on the test system.
> Thank you for your testing. I'll enable SNC to divide the LLC domain
> into smaller ones and see if the issue can be reproduced
> on my platform too, then I'll update my findings on this.
> 
> thanks,
> Chenyu
>> --
>> Thanks and Regards,
>> Prateek

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-15 16:54 [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
                   ` (3 preceding siblings ...)
  2022-09-26  5:50 ` K Prateek Nayak
@ 2022-09-29  8:00 ` Vincent Guittot
  2022-09-30 16:53   ` Chen Yu
  4 siblings, 1 reply; 20+ messages in thread
From: Vincent Guittot @ 2022-09-29  8:00 UTC (permalink / raw)
  To: Chen Yu
  Cc: Peter Zijlstra, Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel,
	Aaron Lu, Abel Wu, K Prateek Nayak, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

On Thu, 15 Sept 2022 at 18:54, Chen Yu <yu.c.chen@intel.com> wrote:
>
> [Background]
> At LPC 2022 Real-time and Scheduling Micro Conference we presented
> the cross CPU wakeup issue. This patch is a text version of the
> talk, and hopefully we can clarify the problem and appreciate for any
> feedback.
>
> [re-send due to the previous one did not reach LKML, sorry
>  for any inconvenience.]
>
> [Problem Statement]
> For a workload that is doing frequent context switches, the throughput
> scales well until the number of instances reaches a peak point. After
> that peak point, the throughput drops significantly if the number of
> instances continues to increase.
>
> The will-it-scale context_switch1 test case exposes the issue. The
> test platform has 112 CPUs per LLC domain. The will-it-scale launches
> 1, 8, 16 ... 112 instances respectively. Each instance is composed
> of 2 tasks, and each pair of tasks would do ping-pong scheduling via
> pipe_read() and pipe_write(). No task is bound to any CPU.
> We found that, once the number of instances is higher than
> 56(112 tasks in total, every CPU has 1 task), the throughput
> drops accordingly if the instance number continues to increase:
>
>           ^
> throughput|
>           |                 X
>           |               X   X X
>           |             X         X X
>           |           X               X
>           |         X                   X
>           |       X
>           |     X
>           |   X
>           | X
>           |
>           +-----------------.------------------->
>                             56
>                                  number of instances
>
> [Symptom analysis]
> Both perf profile and lockstat have shown that, the bottleneck
> is the runqueue spinlock. Take perf profile for example:
>
> nr_instance          rq lock percentage
> 1                    1.22%
> 8                    1.17%
> 16                   1.20%
> 24                   1.22%
> 32                   1.46%
> 40                   1.61%
> 48                   1.63%
> 56                   1.65%
> --------------------------
> 64                   3.77%      |
> 72                   5.90%      | increase
> 80                   7.95%      |
> 88                   9.98%      v
> 96                   11.81%
> 104                  13.54%
> 112                  15.13%
>
> And the rq lock bottleneck is composed of two paths(perf profile):
>
> (path1):
> raw_spin_rq_lock_nested.constprop.0;
> try_to_wake_up;
> default_wake_function;
> autoremove_wake_function;
> __wake_up_common;
> __wake_up_common_lock;
> __wake_up_sync_key;
> pipe_write;
> new_sync_write;
> vfs_write;
> ksys_write;
> __x64_sys_write;
> do_syscall_64;
> entry_SYSCALL_64_after_hwframe;write
>
> (path2):
> raw_spin_rq_lock_nested.constprop.0;
> __sched_text_start;
> schedule_idle;
> do_idle;
> cpu_startup_entry;
> start_secondary;
> secondary_startup_64_no_verify
>
> The idle percentage is around 30% when there are 112 instances:
> %Cpu0  :  2.7 us, 66.7 sy,  0.0 ni, 30.7 id
>
> As a comparison, if we set CPU affinity to these workloads,
> which stops them from migrating among CPUs, the idle percentage
> drops to nearly 0%, and the throughput increases by about 300%.
> This indicates that there is room for optimization.
>
> A possible scenario to describe the lock contention:
> task A tries to wakeup task B on CPU1, then task A grabs the
> runqueue lock of CPU1. If CPU1 is about to quit idle, it needs
> to grab its own lock which has been taken by someone else. Then
> CPU1 takes more time to quit which hurts the performance.
>
> TTWU_QUEUE could mitigate the cross CPU runqueue lock contention.
> Since commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU
> on wakelist if wakee cpu is idle"), TTWU_QUEUE offloads the work from
> the waker and leverages the idle CPU to queue the wakee. However, a long
> idle duration is still observed. The idle task spends quite some time
> on sched_ttwu_pending() before it switches out. This long idle
> duration would mislead SIS_UTIL, then SIS_UTIL suggests the waker scan
> for more CPUs. The time spent searching for an idle CPU would make
> wakee waiting for more time, which in turn leads to more idle time.
> The NEWLY_IDLE balance fails to pull tasks to the idle CPU, which
> might be caused by no runnable wakee being found.
>
> [Proposal]
> If a system is busy, and if the workloads are doing frequent context
> switches, it might not be a good idea to spread the wakee on different
> CPUs. Instead, consider the task running time and enhance wake affine
> might be applicable.
>
> This idea has been suggested by Rik at LPC 2019 when discussing
> the latency nice. He asked the following question: if P1 is a small-time
> slice task on CPU, can we put the waking task P2 on the CPU and wait for
> P1 to release the CPU, without wasting time to search for an idle CPU?
> At LPC 2021 Vincent Guittot has proposed:
> 1. If the wakee is a long-running task, should we skip the short idle CPU?
> 2. If the wakee is a short-running task, can we put it onto a lightly loaded
>    local CPU?

When I said that, I had in mind using the task utilization (util_avg
or util_est), which reflects the recent behavior of the task, not
computing an average duration.
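
For illustration only, a rough sketch of a utilization-based check (the
helper name and the 1/8-of-capacity threshold below are arbitrary
assumptions for the sake of discussion, not a tested proposal; util_est
could be plugged in the same way):

	/*
	 * Classify the task by its PELT utilization rather than by its
	 * average run time between voluntary context switches.
	 */
	static int task_util_is_small(struct task_struct *p)
	{
		return READ_ONCE(p->se.avg.util_avg) <
		       (SCHED_CAPACITY_SCALE >> 3);
	}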

>
> Current proposal is a variant of 2:
> If the target CPU is running a short-time slice task, and the wakee
> is also a short-time slice task, the target CPU could be chosen as the
> candidate when the system is busy.
>
> The definition of a short-time slice task is: The average running time
> of the task during each run is no more than sysctl_sched_min_granularity.
> If a task switches in and then voluntarily relinquishes the CPU
> quickly, it is regarded as a short-running task. Choosing
> sysctl_sched_min_granularity because it is the minimal slice if there
> are too many runnable tasks.
>
> Reuse the nr_idle_scan of SIS_UTIL to decide if the system is busy.
> If yes, then a compromised "idle" CPU might be acceptable.
>
> The reason is that, if the waker is a short running task, it might
> relinquish the CPU soon, the wakee has the chance to be scheduled.
> On the other hand, if the wakee is also a short-running task, the
> impact it brings to the target CPU is small. If the system is
> already busy, maybe we could lower the bar to find an idle CPU.
> The effect is, the wake affine is enhanced.
>
> [Benchmark results]
> The baseline is 6.0-rc4.
>
> The throughput of will-it-scale.context_switch1 has been increased by
> 331.13% with this patch applied.
>
> netperf
> =======
> case                    load            baseline(std%)  compare%( std%)
> TCP_RR                  28 threads       1.00 (  0.57)   +0.29 (  0.59)
> TCP_RR                  56 threads       1.00 (  0.49)   +0.43 (  0.43)
> TCP_RR                  84 threads       1.00 (  0.34)   +0.24 (  0.34)
> TCP_RR                  112 threads      1.00 (  0.26)   +1.57 (  0.20)
> TCP_RR                  140 threads      1.00 (  0.20)  +178.05 (  8.83)
> TCP_RR                  168 threads      1.00 ( 10.14)   +0.87 ( 10.03)
> TCP_RR                  196 threads      1.00 ( 13.51)   +0.90 ( 11.84)
> TCP_RR                  224 threads      1.00 (  7.12)   +0.66 (  8.28)
> UDP_RR                  28 threads       1.00 (  0.96)   -0.10 (  0.97)
> UDP_RR                  56 threads       1.00 ( 10.93)   +0.24 (  0.82)
> UDP_RR                  84 threads       1.00 (  8.99)   +0.40 (  0.71)
> UDP_RR                  112 threads      1.00 (  0.15)   +0.72 (  7.77)
> UDP_RR                  140 threads      1.00 ( 11.11)  +135.81 ( 13.86)
> UDP_RR                  168 threads      1.00 ( 12.58)  +147.63 ( 12.72)
> UDP_RR                  196 threads      1.00 ( 19.47)   -0.34 ( 16.14)
> UDP_RR                  224 threads      1.00 ( 12.88)   -0.35 ( 12.73)
>
> hackbench
> =========
> case                    load            baseline(std%)  compare%( std%)
> process-pipe            1 group          1.00 (  1.02)   +0.14 (  0.62)
> process-pipe            2 groups         1.00 (  0.73)   +0.29 (  0.51)
> process-pipe            4 groups         1.00 (  0.16)   +0.24 (  0.31)
> process-pipe            8 groups         1.00 (  0.06)  +11.56 (  0.11)
> process-sockets         1 group          1.00 (  1.59)   +0.06 (  0.77)
> process-sockets         2 groups         1.00 (  1.13)   -1.86 (  1.31)
> process-sockets         4 groups         1.00 (  0.14)   +1.76 (  0.29)
> process-sockets         8 groups         1.00 (  0.27)   +2.73 (  0.10)
> threads-pipe            1 group          1.00 (  0.43)   +0.83 (  2.20)
> threads-pipe            2 groups         1.00 (  0.52)   +1.03 (  0.55)
> threads-pipe            4 groups         1.00 (  0.44)   -0.08 (  0.31)
> threads-pipe            8 groups         1.00 (  0.04)  +11.86 (  0.05)
> threads-sockets         1 groups         1.00 (  1.89)   +3.51 (  0.57)
> threads-sockets         2 groups         1.00 (  0.04)   -1.12 (  0.69)
> threads-sockets         4 groups         1.00 (  0.14)   +1.77 (  0.18)
> threads-sockets         8 groups         1.00 (  0.03)   +2.75 (  0.03)
>
> tbench
> ======
> case                    load            baseline(std%)  compare%( std%)
> loopback                28 threads       1.00 (  0.08)   +0.51 (  0.25)
> loopback                56 threads       1.00 (  0.15)   -0.89 (  0.16)
> loopback                84 threads       1.00 (  0.03)   +0.35 (  0.07)
> loopback                112 threads      1.00 (  0.06)   +2.84 (  0.01)
> loopback                140 threads      1.00 (  0.07)   +0.69 (  0.11)
> loopback                168 threads      1.00 (  0.09)   +0.14 (  0.18)
> loopback                196 threads      1.00 (  0.04)   -0.18 (  0.20)
> loopback                224 threads      1.00 (  0.25)   -0.37 (  0.03)
>
> Other benchmarks are under testing.
>
> This patch is more about enhancing the wake affine, rather than improving
> the SIS efficiency, so Mel's SIS statistic patch was not deployed for now.
>
> [Limitations]
> When the number of CPUs suggested by SIS_UTIL is lower than 60% of the LLC
> CPUs, the LLC domain is regarded as relatively busy. However, the 60% is
> somewhat hacky, because it indicates that the util_avg% is around 50%,
> a half busy LLC. I don't have other lightweight/accurate method in mind to
> check if the LLC domain is busy or not.
>
> [Misc]
> At LPC we received useful suggestions. The first one is that we should look at
> the time from the task is woken up, to the time the task goes back to sleep.
> I assume this is aligned with what is proposed here - we consider the average
> running time, rather than the total running time. The second one is that we
> should consider the long-running task. And this is under investigation.
>
> Besides, Prateek has mentioned that the SIS_UTIL is unable to deal with
> burst workload.  Because there is a delay to reflect the instantaneous
> utilization and SIS_UTIL expects the workload to be stable. If the system
> is idle most of the time, but suddenly the workloads burst, the SIS_UTIL
> overscans. The current patch might mitigate this symptom somehow, as burst
> workload is usually regarded as a short-running task.
>
> Suggested-by: Tim Chen <tim.c.chen@intel.com>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> ---
>  kernel/sched/fair.c | 31 ++++++++++++++++++++++++++++++-
>  1 file changed, 30 insertions(+), 1 deletion(-)
>
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 914096c5b1ae..7519ab5b911c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6020,6 +6020,19 @@ static int wake_wide(struct task_struct *p)
>         return 1;
>  }
>
> +/*
> + * If a task switches in and then voluntarily relinquishes the
> + * CPU quickly, it is regarded as a short running task.
> + * sysctl_sched_min_granularity is chosen as the threshold,
> + * as this value is the minimal slice if there are too many
> + * runnable tasks, see __sched_period().
> + */
> +static int is_short_task(struct task_struct *p)
> +{
> +       return (p->se.sum_exec_runtime <=
> +               (p->nvcsw * sysctl_sched_min_granularity));

you assume that the task behavior will never change during its whole lifetime.

> +}
> +
>  /*
>   * The purpose of wake_affine() is to quickly determine on which CPU we can run
>   * soonest. For the purpose of speed we only consider the waking and previous
> @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
>         if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
>                 return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
>
> -       if (sync && cpu_rq(this_cpu)->nr_running == 1)
> +       if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> +           is_short_task(cpu_curr(this_cpu)))
>                 return this_cpu;
>
>         if (available_idle_cpu(prev_cpu))
> @@ -6434,6 +6448,21 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>                         /* overloaded LLC is unlikely to have idle cpu/core */
>                         if (nr == 1)
>                                 return -1;
> +
> +                       /*
> +                        * If nr is smaller than 60% of llc_weight, it
> +                        * indicates that the util_avg% is higher than 50%.
> +                        * This is calculated by SIS_UTIL in
> +                        * update_idle_cpu_scan(). The 50% util_avg indicates
> +                        * a half-busy LLC domain. System busier than this
> +                        * level could lower its bar to choose a compromised
> +                        * "idle" CPU. If the waker on target CPU is a short
> +                        * task and the wakee is also a short task, pick
> +                        * target directly.
> +                        */
> +                       if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
> +                           is_short_task(p) && is_short_task(cpu_curr(target)))
> +                               return target;
>                 }
>         }
>
> --
> 2.25.1
>

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-26 14:39   ` Gautham R. Shenoy
@ 2022-09-29 16:58     ` K Prateek Nayak
  2022-09-30 17:26       ` Chen Yu
  0 siblings, 1 reply; 20+ messages in thread
From: K Prateek Nayak @ 2022-09-29 16:58 UTC (permalink / raw)
  To: Gautham R. Shenoy, Chen Yu
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Ingo Molnar, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, linux-kernel

Hello Gautham and Chenyu,

On 9/26/2022 8:09 PM, Gautham R. Shenoy wrote:
> Hello Prateek,
> 
> On Mon, Sep 26, 2022 at 11:20:16AM +0530, K Prateek Nayak wrote:
> 
> [..snip..]
> 
>>> @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
>>>  	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
>>>  		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
>>>  
>>> -	if (sync && cpu_rq(this_cpu)->nr_running == 1)
>>> +	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
>>> +	    is_short_task(cpu_curr(this_cpu)))
>>
>> This change seems to optimize for affine wakeups, which benefits
>> tasks with a producer-consumer pattern, but is not ideal for Stream.
>> Currently the logic ends up doing an affine wakeup even if the sync
>> flag is not set:
>>
>>           stream-4135    [029] d..2.   353.580953: sched_waking: comm=stream pid=4129 prio=120 target_cpu=082
>>           stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
>>           stream-4135    [029] d..2.   353.580960: sched_migrate_task: comm=stream pid=4129 prio=120 orig_cpu=82 dest_cpu=30
>>           <idle>-0       [030] dNh2.   353.580993: sched_wakeup: comm=stream pid=4129 prio=120 target_cpu=030
>>
>> I believe a consideration should be made for the sync flag when
>> going for an affine wakeup. Also the check for short running could
>> be at the end after checking if prev_cpu is an available_idle_cpu.
> 
> We need to check if moving the is_short_task() check to a later point after
> checking the availability of the previous CPU solves the problem for
> the workloads which showed regressions on AMD EPYC systems.

I've done some testing with moving the condition check for the short
running task to the end of wake_affine_idle() and checking if the
run queue length is 1, similar to what Tim suggested in the thread
but doing it upfront in wake_affine_idle(). There are a few variations
I've tested:

v1: move the check for short running task on current CPU to end of wake_affine_idle

v2: move the check for short running task on current CPU to end of wake_affine_idle
    + remove entire hunk in select_idle_cpu

v3: move the check for short running task on current CPU to end of wake_affine_idle
    + check if run queue of current CPU only has 1 task

v4: move the check for short running task on current CPU to end of wake_affine_idle
    + check if run queue of current CPU only has 1 task
    + remove entire hunk in select_idle_cpu

Adding diff for v3 below:
--
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 0ad8e7183bf2..dad9bfb0248d 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6074,13 +6074,15 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
 	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
 		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
 
-	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
-	    is_short_task(cpu_curr(this_cpu)))
+	if (sync && cpu_rq(this_cpu)->nr_running == 1)
 		return this_cpu;
 
 	if (available_idle_cpu(prev_cpu))
 		return prev_cpu;
 
+	if (cpu_rq(this_cpu)->nr_running == 1 && is_short_task(cpu_curr(this_cpu)))
+		return this_cpu;
+
 	return nr_cpumask_bits;
 }
 
--

Deviations from the above diff in the other versions are as follows:

o v1 and v2 don't check cpu_rq(this_cpu)->nr_running == 1 and only
  move the condition check to the end of wake_affine_idle as:

	if (is_short_task(cpu_curr(this_cpu)))
		return this_cpu;

o The second hunk of changes in select_idle_cpu from the RFC remains the same
  in v1 and v3 but is removed in v2 and v4 to check if that was the
  cause of the pileup seen in case of Hackbench.

Following are the results of the standard benchmarks on a dual socket
Zen3 system (2 x 64C/128T) in NPS1 and NPS4 mode:

~~~~~~~~~~~~~
~ Hackbench ~
~~~~~~~~~~~~~

o NPS1

Test:                   tip                     v1                      v2                      v3                      v4
 1-groups:         4.23 (0.00 pct)         4.21 (0.47 pct)         4.29 (-1.41 pct)        4.02 (4.96 pct)         4.34 (-2.60 pct)
 2-groups:         4.93 (0.00 pct)         5.23 (-6.08 pct)        5.20 (-5.47 pct)        4.75 (3.65 pct)         4.77 (3.24 pct)
 4-groups:         5.32 (0.00 pct)         5.64 (-6.01 pct)        5.66 (-6.39 pct)        5.13 (3.57 pct)         5.22 (1.87 pct)
 8-groups:         5.46 (0.00 pct)         5.92 (-8.42 pct)        5.96 (-9.15 pct)        5.24 (4.02 pct)         5.37 (1.64 pct)
16-groups:         7.31 (0.00 pct)         7.16 (2.05 pct)         7.17 (1.91 pct)         6.65 (9.02 pct)         7.05 (3.55 pct)

o NPS4

Test:                   tip                     v1                      v2                      v3                      v4
 1-groups:         4.23 (0.00 pct)         4.20 (0.70 pct)         4.37 (-3.30 pct)        4.02 (4.96 pct)         4.23 (0.00 pct)
 2-groups:         4.78 (0.00 pct)         5.07 (-6.06 pct)        5.07 (-6.06 pct)        4.60 (3.76 pct)         4.67 (2.30 pct)
 4-groups:         5.17 (0.00 pct)         5.47 (-5.80 pct)        5.50 (-6.38 pct)        5.01 (3.09 pct)         5.12 (0.96 pct)
 8-groups:         5.63 (0.00 pct)         5.77 (-2.48 pct)        5.84 (-3.73 pct)        5.48 (2.66 pct)         5.47 (2.84 pct)
16-groups:         7.88 (0.00 pct)         6.43 (18.40 pct)        6.60 (16.24 pct)       12.14 (-54.06 pct)       6.51 (17.38 pct)  *
16-groups:        10.28 (0.00 pct)         6.62 (35.60 pct)        6.68 (35.01 pct)        8.67 (15.66 pct)        6.96 (32.29 pct)  [Verification Run]

~~~~~~~~~~~~
~ schbench ~
~~~~~~~~~~~~

o NPS 1

#workers:     tip                     v1                      v2                      v3                      v4
  1:      22.00 (0.00 pct)        33.00 (-50.00 pct)      29.00 (-31.81 pct)      33.00 (-50.00 pct)      32.00 (-45.45 pct)
  2:      34.00 (0.00 pct)        34.00 (0.00 pct)        36.00 (-5.88 pct)       37.00 (-8.82 pct)       36.00 (-5.88 pct)
  4:      37.00 (0.00 pct)        39.00 (-5.40 pct)       36.00 (2.70 pct)        40.00 (-8.10 pct)       34.00 (8.10 pct)
  8:      55.00 (0.00 pct)        43.00 (21.81 pct)       52.00 (5.45 pct)        47.00 (14.54 pct)       55.00 (0.00 pct)
 16:      69.00 (0.00 pct)        64.00 (7.24 pct)        65.00 (5.79 pct)        65.00 (5.79 pct)        67.00 (2.89 pct)
 32:     113.00 (0.00 pct)       110.00 (2.65 pct)       112.00 (0.88 pct)       106.00 (6.19 pct)       108.00 (4.42 pct)
 64:     219.00 (0.00 pct)       200.00 (8.67 pct)       221.00 (-0.91 pct)      214.00 (2.28 pct)       217.00 (0.91 pct)
128:     506.00 (0.00 pct)       509.00 (-0.59 pct)      507.00 (-0.19 pct)      495.00 (2.17 pct)       535.00 (-5.73 pct)
256:     45440.00 (0.00 pct)     44096.00 (2.95 pct)     47296.00 (-4.08 pct)    43968.00 (3.23 pct)     42432.00 (6.61 pct)
512:     76672.00 (0.00 pct)     82304.00 (-7.34 pct)    82304.00 (-7.34 pct)    73088.00 (4.67 pct)     78976.00 (-3.00 pct)

o NPS4

#workers:     tip                     v1                      v2                      v3                      v4
  1:      30.00 (0.00 pct)        35.00 (-16.66 pct)      20.00 (33.33 pct)       30.00 (0.00 pct)        34.00 (-13.33 pct)
  2:      34.00 (0.00 pct)        35.00 (-2.94 pct)       36.00 (-5.88 pct)       38.00 (-11.76 pct)      37.00 (-8.82 pct)
  4:      41.00 (0.00 pct)        39.00 (4.87 pct)        43.00 (-4.87 pct)       39.00 (4.87 pct)        41.00 (0.00 pct)
  8:      60.00 (0.00 pct)        64.00 (-6.66 pct)       53.00 (11.66 pct)       52.00 (13.33 pct)       56.00 (6.66 pct)
 16:      68.00 (0.00 pct)        68.00 (0.00 pct)        69.00 (-1.47 pct)       71.00 (-4.41 pct)       67.00 (1.47 pct)
 32:     116.00 (0.00 pct)       115.00 (0.86 pct)       118.00 (-1.72 pct)      111.00 (4.31 pct)       113.00 (2.58 pct)
 64:     224.00 (0.00 pct)       208.00 (7.14 pct)       217.00 (3.12 pct)       224.00 (0.00 pct)       231.00 (-3.12 pct)
128:     495.00 (0.00 pct)       523.00 (-5.65 pct)      567.00 (-14.54 pct)     515.00 (-4.04 pct)      675.00 (-36.36 pct)  *
256:     45888.00 (0.00 pct)     45888.00 (0.00 pct)     46656.00 (-1.67 pct)    47168.00 (-2.78 pct)    44864.00 (2.23 pct)
512:     78464.00 (0.00 pct)     78976.00 (-0.65 pct)    83584.00 (-6.52 pct)    76672.00 (2.28 pct)     80768.00 (-2.93 pct)

Note: schbench shows a large amount of run-to-run variation for
lower worker counts. The results have been included to check for
any large increase in latency that would suggest schbench tasks
queuing behind one another.

~~~~~~~~~~
~ tbench ~
~~~~~~~~~~

o NPS 1

Clients:      tip                     v1                      v2                      v3                      v4
    1    550.66 (0.00 pct)       582.73 (5.82 pct)       572.06 (3.88 pct)       576.94 (4.77 pct)       582.44 (5.77 pct)
    2    1009.69 (0.00 pct)      1087.30 (7.68 pct)      1056.81 (4.66 pct)      1072.44 (6.21 pct)      1041.94 (3.19 pct)
    4    1795.32 (0.00 pct)      1847.22 (2.89 pct)      1869.23 (4.11 pct)      1839.32 (2.45 pct)      1877.57 (4.58 pct)
    8    2971.16 (0.00 pct)      3144.05 (5.81 pct)      3137.94 (5.61 pct)      3100.27 (4.34 pct)      3032.99 (2.08 pct)
   16    4627.98 (0.00 pct)      4704.22 (1.64 pct)      4752.77 (2.69 pct)      4833.24 (4.43 pct)      4726.70 (2.13 pct)
   32    8065.15 (0.00 pct)      8172.79 (1.33 pct)      9266.77 (14.89 pct)     9508.24 (17.89 pct)     9199.91 (14.06 pct)
   64    14994.32 (0.00 pct)     15357.75 (2.42 pct)     15246.82 (1.68 pct)     15670.37 (4.50 pct)     15433.18 (2.92 pct)
  128    5175.73 (0.00 pct)      3062.00 (-40.83 pct)    18429.11 (256.06 pct)   3365.81 (-34.96 pct)    2633.09 (-49.12 pct)  *
  128    20490.63 (0.00 pct)     20504.17 (0.06 pct)     21183.21 (3.37 pct)     20469.20 (-0.10 pct)    20879.77 (1.89 pct)   [Verification Run]
  256    48763.57 (0.00 pct)     50703.97 (3.97 pct)     50723.68 (4.01 pct)     49387.93 (1.28 pct)     49552.81 (1.61 pct)
  512    43780.78 (0.00 pct)     45328.44 (3.53 pct)     45328.59 (3.53 pct)     45384.80 (3.66 pct)     43897.43 (0.26 pct)
 1024    40341.84 (0.00 pct)     42823.05 (6.15 pct)     42262.72 (4.76 pct)     41856.06 (3.75 pct)     40785.67 (1.10 pct)

o NPS 4

Clients:      tip                     v1                      v2                      v3                      v4
    1    549.22 (0.00 pct)       582.89 (6.13 pct)       576.74 (5.01 pct)       582.34 (6.03 pct)       585.19 (6.54 pct)
    2    1000.08 (0.00 pct)      1111.54 (11.14 pct)     1043.47 (4.33 pct)      1060.99 (6.09 pct)      1071.39 (7.13 pct)
    4    1794.78 (0.00 pct)      1895.64 (5.61 pct)      1858.40 (3.54 pct)      1828.08 (1.85 pct)      1862.47 (3.77 pct)
    8    3008.50 (0.00 pct)      3117.10 (3.60 pct)      3060.15 (1.71 pct)      3143.65 (4.49 pct)      3065.17 (1.88 pct)
   16    4804.71 (0.00 pct)      4677.82 (-2.64 pct)     4587.01 (-4.53 pct)     4694.21 (-2.29 pct)     4627.39 (-3.69 pct)
   32    9156.57 (0.00 pct)      8462.23 (-7.58 pct)     8290.70 (-9.45 pct)     7906.44 (-13.65 pct)    8679.98 (-5.20 pct)    *
   32    9157.62 (0.00 pct)      8712.33 (-4.86 pct)     8640.77 (-5.64 pct)     9415.99 (2.82 pct)      9403.35 (2.68 pct)     [Verification Run]
   64    14901.45 (0.00 pct)     15263.87 (2.43 pct)     15031.33 (0.87 pct)     15149.54 (1.66 pct)     14714.04 (-1.25 pct)
  128    20771.20 (0.00 pct)     21114.00 (1.65 pct)     17818.77 (-14.21 pct)   17686.98 (-14.84 pct)   15917.79 (-23.36 pct)  *
  128    20490.63 (0.00 pct)     20504.17 (0.06 pct)     21183.21 (3.37 pct)     20469.20 (-0.10 pct)    20879.77 (1.89 pct)    [Verification Run]
  256    47033.88 (0.00 pct)     48021.71 (2.10 pct)     48439.88 (2.98 pct)     48042.49 (2.14 pct)     49294.05 (4.80 pct)
  512    43429.01 (0.00 pct)     44488.54 (2.43 pct)     43672.99 (0.56 pct)     42462.44 (-2.22 pct)    44072.90 (1.48 pct)
 1024    39271.27 (0.00 pct)     42304.03 (7.72 pct)     41850.17 (6.56 pct)     39791.47 (1.32 pct)     41528.81 (5.74 pct)

Note: tbench for 128 clients runs into an ACPI idle driver issue
that is fixed by the commit e400ad8b7e6a ("ACPI: processor idle:
Practically limit "Dummy wait" workaround to old Intel systems")
which will be a part of the v6.0 kernel release.

~~~~~~~~~~
~ stream ~
~~~~~~~~~~

o NPS 1

- 10 runs

Test:            tip                     v1                      v2                      v3                      v4
 Copy:   335832.93 (0.00 pct)    338535.58 (0.80 pct)    334772.76 (-0.31 pct)   337487.50 (0.49 pct)    336720.22 (0.26 pct)
Scale:   212781.21 (0.00 pct)    217118.20 (2.03 pct)    213011.28 (0.10 pct)    216905.50 (1.93 pct)    213371.06 (0.27 pct)
  Add:   251667.59 (0.00 pct)    240811.38 (-4.31 pct)   250478.75 (-0.47 pct)   250584.95 (-0.43 pct)   250987.62 (-0.27 pct)
Triad:   251537.87 (0.00 pct)    261919.66 (4.12 pct)    260702.92 (3.64 pct)    251181.87 (-0.14 pct)   262152.01 (4.21 pct)

- 100 runs

Test:            tip                     v1                      v2                      v3                      v4
 Copy:   335721.37 (0.00 pct)    337441.09 (0.51 pct)    338472.90 (0.81 pct)    335777.78 (0.01 pct)    338434.23 (0.80 pct)
Scale:   219593.12 (0.00 pct)    224083.11 (2.04 pct)    218742.58 (-0.38 pct)   221381.50 (0.81 pct)    219603.23 (0.00 pct)
  Add:   251612.53 (0.00 pct)    251633.66 (0.00 pct)    251593.37 (0.00 pct)    251261.72 (-0.13 pct)   251838.27 (0.08 pct)
Triad:   261985.15 (0.00 pct)    261639.38 (-0.13 pct)   263003.34 (0.38 pct)    261084.30 (-0.34 pct)   260353.64 (-0.62 pct)

o NPS 4

- 10 runs

Test:            tip                     v1                      v2                      v3                      v4
 Copy:   354774.17 (0.00 pct)    359486.69 (1.32 pct)    368017.56 (3.73 pct)    374514.29 (5.56 pct)    344022.60 (-3.03 pct)
Scale:   231870.22 (0.00 pct)    221056.77 (-4.66 pct)   246191.29 (6.17 pct)    244736.54 (5.54 pct)    232084.49 (0.09 pct)
  Add:   258728.29 (0.00 pct)    243136.12 (-6.02 pct)   259962.30 (0.47 pct)    273104.99 (5.55 pct)    256671.88 (-0.79 pct)
Triad:   269237.56 (0.00 pct)    282994.33 (5.10 pct)    286902.41 (6.56 pct)    290661.36 (7.95 pct)    269610.52 (0.13 pct)

- 100 runs

Test:            tip                     v1                      v2                      v3                      v4
 Copy:   369249.91 (0.00 pct)    360411.30 (-2.39 pct)   364531.71 (-1.27 pct)   374280.94 (1.36 pct)    372066.41 (0.76 pct)
Scale:   254849.59 (0.00 pct)    253724.21 (-0.44 pct)   254868.47 (0.00 pct)    254916.90 (0.02 pct)    256054.43 (0.47 pct)
  Add:   273124.66 (0.00 pct)    272945.31 (-0.06 pct)   272989.26 (-0.04 pct)   260213.79 (-4.72 pct)   273955.09 (0.30 pct)
Triad:   287935.27 (0.00 pct)    284522.85 (-1.18 pct)   284797.06 (-1.08 pct)   290192.01 (0.78 pct)    288755.39 (0.28 pct)

~~~~~~~~~~~~~~~~~~~~~~~~~
~ Notes and Observation ~
~~~~~~~~~~~~~~~~~~~~~~~~~

We still see a pileup with v1 and v2 but not with v3 and v4, suggesting
that the second hunk is not the reason for the pileup; rather, it is
choosing the current CPU in wake_affine_idle() solely because the
currently running task is a short running task. To prevent a pileup, we
must only choose the current rq if the short running task is the only
task running there.

I've not checked for the sync flag to allow for a larger opportunity
for affine wakeup. This assumes that wake_affine() is called only for
tasks that can benefit from an affine wakeup.

Sharing more data from the test runs:

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ Hackbench 2 groups schedstat data ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

o NPS1

---------------------------------------------------------------------------------------------------------------------------
cpu:  all_cpus (avg) vs cpu:  all_cpus (avg)
---------------------------------------------------------------------------------------------------------------------------
kernel                                                     :            v1            v3                      v4
sched_yield count                                          :             0,            0                       0
Legacy counter can be ignored                              :             0,            0                       0
schedule called                                            :         49196,        51320                   67541  |  37.29|
schedule left the processor idle                           :         21399,        21609                   32655  |  52.60|
try_to_wake_up was called                                  :         27726,        29630  |   6.87|        34868  |  25.76|
try_to_wake_up was called to wake up the local cpu         :          2049,         1195  | -41.68|          409  | -80.04|
total runtime by tasks on this processor (in jiffies)      :     548520817,    582155720  |   6.13|   1068137641  |  94.73|
total waittime by tasks on this processor (in jiffies)     :     668076627,    480034644  | -28.15|     77773209  | -88.36|  * v3 and v4 have lower wait time
total timeslices run on this cpu                           :         27791,        29703  |   6.88|        34883  |  25.52|    and a larger runtime / waittime ratio

< -----------------------------------------------------------------  Wakeup info:  -------------------------------------- >
kernel                                                  :            v1            v3                    v4
Wakeups on same         SMT cpus = all_cpus (avg)       :          1368,         1403                   309  | -77.41|
Wakeups on same         MC cpus = all_cpus (avg)        :         20980,        21018                 11493  | -45.22|
Wakeups on same         DIE cpus = all_cpus (avg)       :          2074,         3499  |  68.71|      11166  | 438.38|
Wakeups on same         NUMA cpus = all_cpus (avg)      :          1252,         2514  | 100.80|      11489  | 817.65|
Affine wakeups on same  SMT cpus = all_cpus (avg)       :          1400,         1046  | -25.29|        142  | -89.86|
Affine wakeups on same  MC cpus = all_cpus (avg)        :         18940,        13474  | -28.86|       2916  | -84.60|
Affine wakeups on same  DIE cpus = all_cpus (avg)       :          2163,         2827  |  30.70|       3771  |  74.34|
Affine wakeups on same  NUMA cpus = all_cpus (avg)      :          1145,         1945  |  69.87|       3466  | 202.71|
---------------------------------------------------------------------------------------------------------------------------

o NPS4

----------------------------------------------------------------------------------------------------------------------------
cpu:  all_cpus (avg) vs cpu:  all_cpus (avg)
----------------------------------------------------------------------------------------------------------------------------
kernel                                                     :            v1            v3                       v4
sched_yield count                                          :             0,            0                        0
Legacy counter can be ignored                              :             0,            0                        0
schedule called                                            :         49685,        50335                    55266  |  11.23|
schedule left the processor idle                           :         21755,        21269                    25277  |  16.19|
try_to_wake_up was called                                  :         27870,        28990                    29955  |   7.48|
try_to_wake_up was called to wake up the local cpu         :          2054,         1246  | -39.34|           666  | -67.58|
total runtime by tasks on this processor (in jiffies)      :     582044948,    657092589  |  12.89|     860907207  |  47.91|
total waittime by tasks on this processor (in jiffies)     :     610820439,    435359035  | -28.73|     171279622  | -71.96| * Same is observed in NPS4 runs
total timeslices run on this cpu                           :         27923,        29059                    29985  |   7.38|

< -----------------------------------------------------------------  Wakeup info:  --------------------------------------- >
kernel                                                  :            v1            v3                    v4
Wakeups on same         SMT cpus = all_cpus (avg)       :          1307,         1229  |  -5.97|        699  | -46.52|
Wakeups on same         MC cpus = all_cpus (avg)        :         19854,        19726                 16895  | -14.90|
Wakeups on same         NODE cpus = all_cpus (avg)      :           818,         1442  |  76.28|       1959  | 139.49|
Wakeups on same         NUMA cpus = all_cpus (avg)      :          2068,         3257  |  57.50|       6861  | 231.77|
Wakeups on same         NUMA cpus = all_cpus (avg)      :          1767,         2088  |  18.17|       2871  |  62.48|
Affine wakeups on same  SMT cpus = all_cpus (avg)       :          1314,          887  | -32.50|        439  | -66.59|
Affine wakeups on same  MC cpus = all_cpus (avg)        :         17572,        11754  | -33.11|       6971  | -60.33|
Affine wakeups on same  NODE cpus = all_cpus (avg)      :           885,         1195  |  35.03|       1379  |  55.82|
Affine wakeups on same  NUMA cpus = all_cpus (avg)      :          1516,         2792  |  84.17|       4070  | 168.47|
Affine wakeups on same  NUMA cpus = all_cpus (avg)      :           845,         2042  | 141.66|       1823  | 115.74|
----------------------------------------------------------------------------------------------------------------------------

~~~~~~~~~~~~~~~~~
~ Stream traces ~
~~~~~~~~~~~~~~~~~

Trace is obtained by enabling the following tracepoints:
- sched_wakeup_new
- sched_migrate_task

 trace_stream.sh-4581    [130] d..2.  1795.126862: sched_wakeup_new: comm=trace_stream.sh pid=4589 prio=120 target_cpu=008 (LLC: 1)
          stream-4589    [008] d..2.  1795.128145: sched_wakeup_new: comm=stream pid=4591 prio=120 target_cpu=159 (LLC: 3)
          stream-4589    [008] d..2.  1795.128189: sched_wakeup_new: comm=stream pid=4592 prio=120 target_cpu=162 (LLC: 4)
          stream-4589    [008] d..2.  1795.128259: sched_wakeup_new: comm=stream pid=4593 prio=120 target_cpu=202 (LLC: 9)
          stream-4589    [008] d..2.  1795.128281: sched_wakeup_new: comm=stream pid=4594 prio=120 target_cpu=173 (LLC: 5)
          stream-4589    [008] d..2.  1795.128311: sched_wakeup_new: comm=stream pid=4595 prio=120 target_cpu=214 (LLC: 10)
          stream-4589    [008] d..2.  1795.128366: sched_wakeup_new: comm=stream pid=4596 prio=120 target_cpu=053 (LLC: 6)
          stream-4589    [008] d..2.  1795.128454: sched_wakeup_new: comm=stream pid=4597 prio=120 target_cpu=088 (LLC: 11)
          stream-4589    [008] d..2.  1795.128475: sched_wakeup_new: comm=stream pid=4598 prio=120 target_cpu=191 (LLC: 7)
          stream-4589    [008] d..2.  1795.128508: sched_wakeup_new: comm=stream pid=4599 prio=120 target_cpu=096 (LLC: 12)
          stream-4589    [008] d..2.  1795.128568: sched_wakeup_new: comm=stream pid=4600 prio=120 target_cpu=130 (LLC: 0)
          stream-4589    [008] d..2.  1795.128620: sched_wakeup_new: comm=stream pid=4601 prio=120 target_cpu=239 (LLC: 13)
          stream-4589    [008] d..2.  1795.128641: sched_wakeup_new: comm=stream pid=4602 prio=120 target_cpu=146 (LLC: 2)
          stream-4589    [008] d..2.  1795.128672: sched_wakeup_new: comm=stream pid=4603 prio=120 target_cpu=247 (LLC: 14)
          stream-4589    [008] d..2.  1795.128747: sched_wakeup_new: comm=stream pid=4604 prio=120 target_cpu=255 (LLC: 15)
          stream-4589    [008] d..2.  1795.128784: sched_wakeup_new: comm=stream pid=4605 prio=120 target_cpu=066 (LLC: 8)

	No migrations were observed till the end of the run

- Initial task placement distribution

        +--------+-------------------------------------+
        | LLC ID |  Tasks initially placed on this LLC |
        +--------+-------------------------------------+
        |   0    |                  1                  |
        |   1    |                  1                  |
        |   2    |                  1                  |
        |   3    |                  1                  |
        |   4    |                  1                  |
        |   5    |                  1                  |
        |   6    |                  1                  |
        |   7    |                  1                  |
        |   8    |                  1                  |
        |   9    |                  1                  |
        |   10   |                  1                  |
        |   11   |                  1                  |
        |   12   |                  1                  |
        |   13   |                  1                  |
        |   14   |                  1                  |
        |   15   |                  1                  |
        +--------+-------------------------------------+

A point to note is that Stream is more sensitive initially, when its tasks
have not run for long enough: if a kworker or another short running task
is running on the previous CPU during wakeup, the logic will favor an
affine wakeup, since the scheduler might not yet realize that Stream is a
long running task.

Let me know if you would like me to gather more data on the test system
for the modified kernels discussed above. 
--
Thanks and Regards,
Prateek

^ permalink raw reply related	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-29  5:25   ` Chen Yu
  2022-09-29  6:59     ` Honglei Wang
@ 2022-09-29 17:19     ` K Prateek Nayak
  1 sibling, 0 replies; 20+ messages in thread
From: K Prateek Nayak @ 2022-09-29 17:19 UTC (permalink / raw)
  To: Chen Yu
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

Hello Chenyu,

Thank you for looking into this issue.

On 9/29/2022 10:55 AM, Chen Yu wrote:
> Hi Prateek,
> [..snip..]
>
>>>  kernel/sched/fair.c | 31 ++++++++++++++++++++++++++++++-
>>>  1 file changed, 30 insertions(+), 1 deletion(-)
>>>
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index 914096c5b1ae..7519ab5b911c 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -6020,6 +6020,19 @@ static int wake_wide(struct task_struct *p)
>>>  	return 1;
>>>  }
>>>  
>>> +/*
>>> + * If a task switches in and then voluntarily relinquishes the
>>> + * CPU quickly, it is regarded as a short running task.
>>> + * sysctl_sched_min_granularity is chosen as the threshold,
>>> + * as this value is the minimal slice if there are too many
>>> + * runnable tasks, see __sched_period().
>>> + */
>>> +static int is_short_task(struct task_struct *p)
>>> +{
>>> +	return (p->se.sum_exec_runtime <=
>>> +		(p->nvcsw * sysctl_sched_min_granularity));
>>> +}
>>> +
>>>  /*
>>>   * The purpose of wake_affine() is to quickly determine on which CPU we can run
>>>   * soonest. For the purpose of speed we only consider the waking and previous
>>> @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
>>>  	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
>>>  		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
>>>  
>>> -	if (sync && cpu_rq(this_cpu)->nr_running == 1)
>>> +	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
>>> +	    is_short_task(cpu_curr(this_cpu)))
>>
>> This change seems to optimize for affine wakeup which benefits
>> tasks with producer-consumer pattern but is not ideal for Stream.
>> Currently the logic ends will do an affine wakeup even if sync
>> flag is not set:
>>
>>           stream-4135    [029] d..2.   353.580953: sched_waking: comm=stream pid=4129 prio=120 target_cpu=082
>>           stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
>>           stream-4135    [029] d..2.   353.580960: sched_migrate_task: comm=stream pid=4129 prio=120 orig_cpu=82 dest_cpu=30
>>           <idle>-0       [030] dNh2.   353.580993: sched_wakeup: comm=stream pid=4129 prio=120 target_cpu=030
>>
>> I believe a consideration should be made for the sync flag when
>> going for an affine wakeup. Also the check for short running could
>> be at the end after checking if prev_cpu is an available_idle_cpu.
>>
> We can move the short running check after the prev_cpu check. If we
> add the sync flag check would it shrink the coverage of this change?

I've run some tests where I just moved the condition that checks for a
short running task towards the end of wake_affine_idle() and also
incorporated the suggestion from Tim in wake_affine_idle. I've shared
the results in a parallel thread.

> Since I found that there is limited scenario would enable the sync
> flag and we want to make the short running check a generic optimization.
> But yes, we can test with/without sync flag constrain to see which one
> gives better data.
>>>  		return this_cpu;
>>>  
>>>  	if (available_idle_cpu(prev_cpu))
>>> @@ -6434,6 +6448,21 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>>>  			/* overloaded LLC is unlikely to have idle cpu/core */
>>>  			if (nr == 1)
>>>  				return -1;
>>> +
>>> +			/*
>>> +			 * If nr is smaller than 60% of llc_weight, it
>>> +			 * indicates that the util_avg% is higher than 50%.
>>> +			 * This is calculated by SIS_UTIL in
>>> +			 * update_idle_cpu_scan(). The 50% util_avg indicates
>>> +			 * a half-busy LLC domain. System busier than this
>>> +			 * level could lower its bar to choose a compromised
>>> +			 * "idle" CPU. If the waker on target CPU is a short
>>> +			 * task and the wakee is also a short task, pick
>>> +			 * target directly.
>>> +			 */
>>> +			if (!has_idle_core && (5 * nr < 3 * sd->span_weight) &&
>>> +			    is_short_task(p) && is_short_task(cpu_curr(target)))
>>> +				return target;
>>
>> Pileup seen in hackbench could also be a result of an early
>> bailout here for smaller LLCs but I don't have any data to
>> substantiate that claim currently.
>>
>>>  		}
>>>  	}
>>>  
>> Please let me know if you need any more data from the test
>> system for any of the benchmarks covered or if you would like
>> me to run any other benchmark on the test system.
> Thank you for your testing, I'll enable SNC to divide the LLC domain
> into smaller ones, and to see if the issue could be reproduced
> on my platform too, then I'll update my finding on this.

Thank you for testing with SNC enabled. It should get the LLC size
closer to the Zen3 system I've tested on.

> 
> thanks,
> Chenyu
--
Thanks and Regards,
Prateek

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-29  6:59     ` Honglei Wang
@ 2022-09-29 17:34       ` K Prateek Nayak
  2022-09-30  0:58         ` Honglei Wang
  2022-09-30 16:03       ` Chen Yu
  1 sibling, 1 reply; 20+ messages in thread
From: K Prateek Nayak @ 2022-09-29 17:34 UTC (permalink / raw)
  To: Honglei Wang, Chen Yu
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

Hello Honglei,

Thank you for looking into this.

On 9/29/2022 12:29 PM, Honglei Wang wrote:
> 
> [..snip..]
> 
>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>> index 914096c5b1ae..7519ab5b911c 100644
>>>> --- a/kernel/sched/fair.c
>>>> +++ b/kernel/sched/fair.c
>>>> @@ -6020,6 +6020,19 @@ static int wake_wide(struct task_struct *p)
>>>>       return 1;
>>>>   }
>>>>   +/*
>>>> + * If a task switches in and then voluntarily relinquishes the
>>>> + * CPU quickly, it is regarded as a short running task.
>>>> + * sysctl_sched_min_granularity is chosen as the threshold,
>>>> + * as this value is the minimal slice if there are too many
>>>> + * runnable tasks, see __sched_period().
>>>> + */
>>>> +static int is_short_task(struct task_struct *p)
>>>> +{
>>>> +    return (p->se.sum_exec_runtime <=
>>>> +        (p->nvcsw * sysctl_sched_min_granularity));
>>>> +}
>>>> +
>>>>   /*
>>>>    * The purpose of wake_affine() is to quickly determine on which CPU we can run
>>>>    * soonest. For the purpose of speed we only consider the waking and previous
>>>> @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
>>>>       if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
>>>>           return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
>>>>   -    if (sync && cpu_rq(this_cpu)->nr_running == 1)
>>>> +    if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
>>>> +        is_short_task(cpu_curr(this_cpu)))
> 
> Seems it a bit breaks idle (or will be idle) purpose of wake_affine_idle() here. Maybe we can do it something like this?
> 
> if ((sync || is_short_task(cpu_curr(this_cpu))) && cpu_rq(this_cpu)->nr_running == 1)

I believe this will still cause performance degradation on split-LLC
systems for Stream-like workloads. Based on the logs below, we can
have a situation as follows:

	stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)

Here, sync is 0 but is_short_task() may return 1 while the current
rq's nr_running is 1. This will lead to two Stream threads getting
placed on the same LLC during wakeup, which will cause cache
contention and performance degradation.

> 
> Thanks,
> Honglei
> 
>>>
>>> This change seems to optimize for affine wakeup which benefits
>>> tasks with producer-consumer pattern but is not ideal for Stream.
>>> Currently the logic ends will do an affine wakeup even if sync
>>> flag is not set:
>>>
>>>            stream-4135    [029] d..2.   353.580953: sched_waking: comm=stream pid=4129 prio=120 target_cpu=082
>>>            stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
>>>            stream-4135    [029] d..2.   353.580960: sched_migrate_task: comm=stream pid=4129 prio=120 orig_cpu=82 dest_cpu=30
>>>            <idle>-0       [030] dNh2.   353.580993: sched_wakeup: comm=stream pid=4129 prio=120 target_cpu=030

This is the exact situation observed during our testing.

>>>
>>> [..snip..]
>>>  
--
Thanks and Regards,
Prateek

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-29 17:34       ` K Prateek Nayak
@ 2022-09-30  0:58         ` Honglei Wang
  0 siblings, 0 replies; 20+ messages in thread
From: Honglei Wang @ 2022-09-30  0:58 UTC (permalink / raw)
  To: K Prateek Nayak, Chen Yu
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

Hi Prateek,


On 2022/9/30 01:34, K Prateek Nayak wrote:
> Hello Honglei,
> 
> Thank you for looking into this.
> 
> On 9/29/2022 12:29 PM, Honglei Wang wrote:
>>
>> [..snip..]
>>
>>>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>>>> index 914096c5b1ae..7519ab5b911c 100644
>>>>> --- a/kernel/sched/fair.c
>>>>> +++ b/kernel/sched/fair.c
>>>>> @@ -6020,6 +6020,19 @@ static int wake_wide(struct task_struct *p)
>>>>>        return 1;
>>>>>    }
>>>>>    +/*
>>>>> + * If a task switches in and then voluntarily relinquishes the
>>>>> + * CPU quickly, it is regarded as a short running task.
>>>>> + * sysctl_sched_min_granularity is chosen as the threshold,
>>>>> + * as this value is the minimal slice if there are too many
>>>>> + * runnable tasks, see __sched_period().
>>>>> + */
>>>>> +static int is_short_task(struct task_struct *p)
>>>>> +{
>>>>> +    return (p->se.sum_exec_runtime <=
>>>>> +        (p->nvcsw * sysctl_sched_min_granularity));
>>>>> +}
>>>>> +
>>>>>    /*
>>>>>     * The purpose of wake_affine() is to quickly determine on which CPU we can run
>>>>>     * soonest. For the purpose of speed we only consider the waking and previous
>>>>> @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
>>>>>        if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
>>>>>            return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
>>>>>    -    if (sync && cpu_rq(this_cpu)->nr_running == 1)
>>>>> +    if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
>>>>> +        is_short_task(cpu_curr(this_cpu)))
>>
>> Seems it a bit breaks idle (or will be idle) purpose of wake_affine_idle() here. Maybe we can do it something like this?
>>
>> if ((sync || is_short_task(cpu_curr(this_cpu))) && cpu_rq(this_cpu)->nr_running == 1)
> 
> I believe this will still cause performance degradation on split-LLC
> systems for Stream-like workloads. Based on the logs below, we can
> have a situation as follows:
> 
> 	stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
> 
> Here, sync is 0 but is_short_task() may return 1 while the current
> rq's nr_running is 1. This will lead to two Stream threads getting
> placed on the same LLC during wakeup, which will cause cache
> contention and performance degradation.
> 

What I meant was that we should not break the purpose of
wake_affine_idle(). 'nr_running == 1' makes sure there won't be a long
queue here, and this might be helpful in the benchmark tests as well.
The short code snippet I sent was probably not well thought through; it
was just meant as a clue.

I see your test result in another mail. It's great and is exactly what I 
was thinking we should test.

Thanks,
Honglei

>>
>> Thanks,
>> Honglei
>>
>>>>
>>>> This change seems to optimize for affine wakeup which benefits
>>>> tasks with producer-consumer pattern but is not ideal for Stream.
>>>> Currently the logic ends will do an affine wakeup even if sync
>>>> flag is not set:
>>>>
>>>>             stream-4135    [029] d..2.   353.580953: sched_waking: comm=stream pid=4129 prio=120 target_cpu=082
>>>>             stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
>>>>             stream-4135    [029] d..2.   353.580960: sched_migrate_task: comm=stream pid=4129 prio=120 orig_cpu=82 dest_cpu=30
>>>>             <idle>-0       [030] dNh2.   353.580993: sched_wakeup: comm=stream pid=4129 prio=120 target_cpu=030
> 
> This is the exact situation observed during our testing.
> 
>>>>
>>>> [..snip..]
>>>>   
> --
> Thanks and Regards,
> Prateek

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-29  6:59     ` Honglei Wang
  2022-09-29 17:34       ` K Prateek Nayak
@ 2022-09-30 16:03       ` Chen Yu
  1 sibling, 0 replies; 20+ messages in thread
From: Chen Yu @ 2022-09-30 16:03 UTC (permalink / raw)
  To: Honglei Wang
  Cc: K Prateek Nayak, Peter Zijlstra, Vincent Guittot, Tim Chen,
	Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

On 2022-09-29 at 14:59:46 +0800, Honglei Wang wrote:
> > > > @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
> > > >   	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
> > > >   		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
> > > > -	if (sync && cpu_rq(this_cpu)->nr_running == 1)
> > > > +	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> > > > +	    is_short_task(cpu_curr(this_cpu)))
> 
> Seems it a bit breaks idle (or will be idle) purpose of wake_affine_idle()
> here.
Exactly, we should prefer the previous idle CPU over the 'potentially idle'
this_cpu to keep it consistent.
> Maybe we can do it something like this?
> 
> if ((sync || is_short_task(cpu_curr(this_cpu))) &&
> cpu_rq(this_cpu)->nr_running == 1)
>
Yes, Prateek's experimental results have proven your suggestion.

thanks,
Chenyu
> Thanks,
> Honglei

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-29  8:00 ` Vincent Guittot
@ 2022-09-30 16:53   ` Chen Yu
  2022-10-03 12:42     ` Vincent Guittot
  0 siblings, 1 reply; 20+ messages in thread
From: Chen Yu @ 2022-09-30 16:53 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Peter Zijlstra, Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel,
	Aaron Lu, Abel Wu, K Prateek Nayak, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

Hi Vincent,
On 2022-09-29 at 10:00:40 +0200, Vincent Guittot wrote:
[cut]
> >
> > This idea has been suggested by Rik at LPC 2019 when discussing
> > the latency nice. He asked the following question: if P1 is a small-time
> > slice task on CPU, can we put the waking task P2 on the CPU and wait for
> > P1 to release the CPU, without wasting time to search for an idle CPU?
> > At LPC 2021 Vincent Guittot has proposed:
> > 1. If the wakee is a long-running task, should we skip the short idle CPU?
> > 2. If the wakee is a short-running task, can we put it onto a lightly loaded
> >    local CPU?
> 
> When I said that, I had in mind to use the task utilization (util_avg
> or util_est) which reflects the recent behavior of the task but not to
> compute an average duration
> 
Ah, I see. However, there is a scenario (the will-it-scale context_switch
sub-test) where, if task A is doing frequent ping-pong context switches with
task B on one CPU, we should avoid a cross-CPU wakeup by placing the wakee on
the same CPU as the waker. Since util_avg/util_est might be high for both the
waker and the wakee, we use the average duration to detect this scenario.
> >
> > Current proposal is a variant of 2:
> > If the target CPU is running a short-time slice task, and the wakee
> > is also a short-time slice task, the target CPU could be chosen as the
> > candidate when the system is busy.
> >
> > The definition of a short-time slice task is: The average running time
> > of the task during each run is no more than sysctl_sched_min_granularity.
> > If a task switches in and then voluntarily relinquishes the CPU
> > quickly, it is regarded as a short-running task. Choosing
> > sysctl_sched_min_granularity because it is the minimal slice if there
> > are too many runnable tasks.
> >
[cut]
> >
> > +/*
> > + * If a task switches in and then voluntarily relinquishes the
> > + * CPU quickly, it is regarded as a short running task.
> > + * sysctl_sched_min_granularity is chosen as the threshold,
> > + * as this value is the minimal slice if there are too many
> > + * runnable tasks, see __sched_period().
> > + */
> > +static int is_short_task(struct task_struct *p)
> > +{
> > +       return (p->se.sum_exec_runtime <=
> > +               (p->nvcsw * sysctl_sched_min_granularity));
> 
> you assume that the task behavior will never change during is whole life time
>
I was thinking that the average running time of a task could slowly catch
up with the latest task behavior, but yes, there would be a delay, especially
for rapidly changing tasks (and this is similar to rq->avg_idle). I wonder if we
could use something like:
	return (p->se.avg.util_avg <=
		(p->nvcsw * PELT(sysctl_sched_min_granularity)));
to reflect the recent behavior of the task.

thanks,
Chenyu

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-29 16:58     ` K Prateek Nayak
@ 2022-09-30 17:26       ` Chen Yu
  0 siblings, 0 replies; 20+ messages in thread
From: Chen Yu @ 2022-09-30 17:26 UTC (permalink / raw)
  To: K Prateek Nayak
  Cc: Gautham R. Shenoy, Peter Zijlstra, Vincent Guittot, Tim Chen,
	Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	Yicong Yang, Ingo Molnar, Dietmar Eggemann, Steven Rostedt,
	Ben Segall, Daniel Bristot de Oliveira, Valentin Schneider,
	linux-kernel

Hi Prateek,
On 2022-09-29 at 22:28:38 +0530, K Prateek Nayak wrote:
> Hello Gautham and Chenyu,
> 
> On 9/26/2022 8:09 PM, Gautham R. Shenoy wrote:
> > Hello Prateek,
> > 
> > On Mon, Sep 26, 2022 at 11:20:16AM +0530, K Prateek Nayak wrote:[
> > 
> > [..snip..]
> > 
> >>> @@ -6050,7 +6063,8 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
> >>>  	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
> >>>  		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
> >>>  
> >>> -	if (sync && cpu_rq(this_cpu)->nr_running == 1)
> >>> +	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> >>> +	    is_short_task(cpu_curr(this_cpu)))
> >>
> >> This change seems to optimize for affine wakeup which benefits
> >> tasks with producer-consumer pattern but is not ideal for Stream.
> >> Currently the logic ends will do an affine wakeup even if sync
> >> flag is not set:
> >>
> >>           stream-4135    [029] d..2.   353.580953: sched_waking: comm=stream pid=4129 prio=120 target_cpu=082
> >>           stream-4135    [029] d..2.   353.580957: select_task_rq_fair: wake_affine_idle: Select this_cpu: sync(0) rq->nr_running(1) is_short_task(1)
> >>           stream-4135    [029] d..2.   353.580960: sched_migrate_task: comm=stream pid=4129 prio=120 orig_cpu=82 dest_cpu=30
> >>           <idle>-0       [030] dNh2.   353.580993: sched_wakeup: comm=stream pid=4129 prio=120 target_cpu=030
> >>
> >> I believe a consideration should be made for the sync flag when
> >> going for an affine wakeup. Also the check for short running could
> >> be at the end after checking if prev_cpu is an available_idle_cpu.
> > 
> > We need to check if moving the is_short_task() to a later point after
> > checking the availability of the previous CPU solve the problem for
> > the workloads which showed regressions on AMD EPYC systems.
> 
> I've done some testing with moving the condition check for short
> running task to the end of wake_affine_idle and checking if the
> length of run queue is 1 similar to what Tim suggested in the thread
> but doing it upfront in wake_affine_idle.
Thanks for the investigation. On second thought, for the will-it-scale
context_switch test case all the tasks use the SYNC flag, so I wonder if
moving the check to the end of wake_affine_idle() would make any
difference for the will-it-scale test, because it might have already
returned this_cpu via 'if (sync && cpu_rq(this_cpu)->nr_running == 1)'.
I'll do some tests on this tomorrow.
> There are a few variations I've tested:
> 
> v1: move the check for short running task on current CPU to end of wake_affine_idle
> 
> v2: move the check for short running task on current CPU to end of wake_affine_idle
>     + remove entire hunk in select_idle_cpu
> 
> v3: move the check for short running task on current CPU to end of wake_affine_idle
>     + check if run queue of current CPU only has 1 task
> 
> v4: move the check for short running task on current CPU to end of wake_affine_idle
>     + check if run queue of current CPU only has 1 task
>     + remove entire hunk in select_idle_cpu
> 
> Adding diff for v3 below:
> --
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index 0ad8e7183bf2..dad9bfb0248d 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6074,13 +6074,15 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
>  	if (available_idle_cpu(this_cpu) && cpus_share_cache(this_cpu, prev_cpu))
>  		return available_idle_cpu(prev_cpu) ? prev_cpu : this_cpu;
>  
> -	if ((sync && cpu_rq(this_cpu)->nr_running == 1) ||
> -	    is_short_task(cpu_curr(this_cpu)))
> +	if (sync && cpu_rq(this_cpu)->nr_running == 1)
>  		return this_cpu;
>  
>  	if (available_idle_cpu(prev_cpu))
>  		return prev_cpu;
>  
> +	if (cpu_rq(this_cpu)->nr_running == 1 && is_short_task(cpu_curr(this_cpu)))
> +		return this_cpu;
> +
I'm also thinking of adding this check in SIS, along with a check of the
ttwu_pending flag there.
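Something along these lines, as an illustration only (the exact placement in
the SIS path and the ttwu_pending gate are assumptions, not a tested patch):

	if (!cpu_rq(target)->ttwu_pending &&
	    cpu_rq(target)->nr_running == 1 &&
	    is_short_task(p) && is_short_task(cpu_curr(target)))
		return target;
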
>  	return nr_cpumask_bits;
>  }
>  
> --
>
[cut] 
> 
> We still see a pileup with v1 and v2 but not with v3 and v4, suggesting
> that the second hunk is not the reason for the pileup; rather, it is
> choosing the current CPU in wake_affine_idle() on the basis that the
> currently running task is a short running task. To prevent a pileup, we
> must only choose the current rq if the short running task is the only
> task running there.
>
OK, I see. 

[cut]
> 
> A point to note is that Stream is more sensitive initially, when its tasks
> have not run for long enough: if a kworker or another short running task
> is running on the previous CPU during wakeup, the logic will favor an
> affine wakeup, since the scheduler might not yet realize that Stream is a
> long running task.
Maybe we can add a restriction so that we only start the is_short_task()
check after the task has run for a while?
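
For instance, something like the following rough sketch (the thresholds are
made up and only illustrate the idea of requiring some history first):

	static int is_short_task(struct task_struct *p)
	{
		/* Too little history yet: do not classify the task as short. */
		if (p->nvcsw < 8 ||
		    p->se.sum_exec_runtime < 8 * sysctl_sched_min_granularity)
			return 0;

		return (p->se.sum_exec_runtime <=
			(p->nvcsw * sysctl_sched_min_granularity));
	}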
> 
> Let me know if you would like me to gather more data on the test system
> for the modified kernels discussed above.
While waiting for Vincent's feedback, I'll refine the patch per your experiment
and modify the code in SIS per Tim's suggestion.

thanks,
Chenyu 
> --
> Thanks and Regards,
> Prateek

^ permalink raw reply	[flat|nested] 20+ messages in thread

* Re: [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up
  2022-09-30 16:53   ` Chen Yu
@ 2022-10-03 12:42     ` Vincent Guittot
  0 siblings, 0 replies; 20+ messages in thread
From: Vincent Guittot @ 2022-10-03 12:42 UTC (permalink / raw)
  To: Chen Yu
  Cc: Peter Zijlstra, Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel,
	Aaron Lu, Abel Wu, K Prateek Nayak, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, linux-kernel

On Fri, 30 Sept 2022 at 18:53, Chen Yu <yu.c.chen@intel.com> wrote:
>
> Hi Vincent,
> On 2022-09-29 at 10:00:40 +0200, Vincent Guittot wrote:
> [cut]
> > >
> > > This idea has been suggested by Rik at LPC 2019 when discussing
> > > the latency nice. He asked the following question: if P1 is a small-time
> > > slice task on CPU, can we put the waking task P2 on the CPU and wait for
> > > P1 to release the CPU, without wasting time to search for an idle CPU?
> > > At LPC 2021 Vincent Guittot has proposed:
> > > 1. If the wakee is a long-running task, should we skip the short idle CPU?
> > > 2. If the wakee is a short-running task, can we put it onto a lightly loaded
> > >    local CPU?
> >
> > When I said that, I had in mind to use the task utilization (util_avg
> > or util_est) which reflects the recent behavior of the task but not to
> > compute an average duration
> >
> Ah, I see. However, there is a scenario (the will-it-scale context_switch
> sub-test) where, if task A is doing frequent ping-pong context switches with
> task B on one CPU, we should avoid a cross-CPU wakeup by placing the wakee on
> the same CPU as the waker. Since util_avg/util_est might be high for both the
> waker and the wakee, we use the average duration to detect this scenario.

yeah, this can be up to 50%

> > >
> > > Current proposal is a variant of 2:
> > > If the target CPU is running a short-time slice task, and the wakee
> > > is also a short-time slice task, the target CPU could be chosen as the
> > > candidate when the system is busy.
> > >
> > > The definition of a short-time slice task is: The average running time
> > > of the task during each run is no more than sysctl_sched_min_granularity.
> > > If a task switches in and then voluntarily relinquishes the CPU
> > > quickly, it is regarded as a short-running task. Choosing
> > > sysctl_sched_min_granularity because it is the minimal slice if there
> > > are too many runnable tasks.
> > >
> [cut]
> > >
> > > +/*
> > > + * If a task switches in and then voluntarily relinquishes the
> > > + * CPU quickly, it is regarded as a short running task.
> > > + * sysctl_sched_min_granularity is chosen as the threshold,
> > > + * as this value is the minimal slice if there are too many
> > > + * runnable tasks, see __sched_period().
> > > + */
> > > +static int is_short_task(struct task_struct *p)
> > > +{
> > > +       return (p->se.sum_exec_runtime <=
> > > +               (p->nvcsw * sysctl_sched_min_granularity));
> >
> > you assume that the task behavior will never change during is whole life time
> >
> I was thinking that the average running time of a task could slowly catch
> up with the latest task behavior, but yes, there would be a delay, especially

Because you don't forget the oldest activity, it will become more and more
difficult to catch up with the latest behavior.
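
As a toy userspace illustration of that lag (the numbers are made up,
assuming a ~3ms sysctl_sched_min_granularity): a task that ran 10ms per
switch for its first 1000 switches and then turns short needs thousands of
additional switches before its lifetime average drops below the threshold.

	#include <stdio.h>

	int main(void)
	{
		double sum_ms = 1000 * 10.0;	/* history: 1000 switches, 10ms each */
		long nvcsw = 1000;
		const double threshold_ms = 3.0;
		long extra = 0;

		/* New behavior: the task now runs only 0.1ms per switch. */
		while (sum_ms / nvcsw > threshold_ms) {
			sum_ms += 0.1;
			nvcsw++;
			extra++;
		}
		/* Prints roughly 2400 extra switches. */
		printf("extra switches needed: %ld\n", extra);
		return 0;
	}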

> for rapidly changing tasks (and this is similar to rq->avg_idle). I wonder if we
> could use something like:
>         return (p->se.avg.util_avg <=
>                 (p->nvcsw * PELT(sysctl_sched_min_granularity)));

What is PELT(sysctl_sched_min_granularity)?

You need at least a runtime and a period to compute something similar
to a PELT value.

As an example, a task running A ms every B ms period will have a util_avg of:
At wakeup, util_avg = (1-y^A)/(1-y^B)*1024*y^(B-A), with y^32 = 1/2
Before sleeping, util_avg = (1-y^A)/(1-y^B)*1024

To be exact, it's running A segments of 1024us every period of B
segments of 1024us
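
A standalone userspace sketch (not kernel code) that plugs concrete numbers
into the formulas above, taking A and B as counts of 1024us segments; A = 4
and B = 16 are chosen here as an example of a task busy about 25% of the time:

	#include <math.h>
	#include <stdio.h>

	int main(void)
	{
		const double y = pow(0.5, 1.0 / 32.0);	/* y^32 = 1/2 */
		const double A = 4.0, B = 16.0;

		double before_sleep = (1.0 - pow(y, A)) / (1.0 - pow(y, B)) * 1024.0;
		double at_wakeup = before_sleep * pow(y, B - A);

		/* Prints roughly 290 and 224 for A=4, B=16. */
		printf("util_avg before sleeping: %.0f\n", before_sleep);
		printf("util_avg at wakeup:       %.0f\n", at_wakeup);
		return 0;
	}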

> to reflect the recent behavior of the task.
>
> thanks,
> Chenyu

^ permalink raw reply	[flat|nested] 20+ messages in thread

end of thread, other threads:[~2022-10-03 12:42 UTC | newest]

Thread overview: 20+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-09-15 16:54 [RFC PATCH] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
2022-09-15 17:10 ` Tim Chen
2022-09-16 10:49   ` Chen Yu
2022-09-16 11:45 ` Peter Zijlstra
2022-09-17 13:55   ` Chen Yu
2022-09-16 11:47 ` Peter Zijlstra
2022-09-17 14:15   ` Chen Yu
2022-09-26  5:50 ` K Prateek Nayak
2022-09-26 14:39   ` Gautham R. Shenoy
2022-09-29 16:58     ` K Prateek Nayak
2022-09-30 17:26       ` Chen Yu
2022-09-29  5:25   ` Chen Yu
2022-09-29  6:59     ` Honglei Wang
2022-09-29 17:34       ` K Prateek Nayak
2022-09-30  0:58         ` Honglei Wang
2022-09-30 16:03       ` Chen Yu
2022-09-29 17:19     ` K Prateek Nayak
2022-09-29  8:00 ` Vincent Guittot
2022-09-30 16:53   ` Chen Yu
2022-10-03 12:42     ` Vincent Guittot
