* [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up
@ 2022-12-16  6:08 Chen Yu
  2022-12-16  6:11 ` [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check Chen Yu
                   ` (3 more replies)
  0 siblings, 4 replies; 19+ messages in thread
From: Chen Yu @ 2022-12-16  6:08 UTC (permalink / raw)
  To: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel,
	Chen Yu

The main purpose of this change is to avoid too many cross-CPU
wake-ups when they are unnecessary. Frequent cross-CPU wake-ups
bring significant damage to some workloads, especially on high
core count systems.

This patch set inhibits the cross-CPU wake-up by placing the wakee
on the waking CPU or its previous CPU, if both the waker and the
wakee are short-duration tasks.

The first patch introduces the definition of a short-duration
task. The second patch leverages it to choose a local or previous
CPU for the wakee.

Changes since v3:
1. Honglei and Josh were concerned that the threshold of short
   task duration could be too long. Decrease the threshold from
   sysctl_sched_min_granularity to (sysctl_sched_min_granularity / 8);
   the '8' comes from get_update_sysctl_factor().
2. Export p->se.dur_avg to /proc/{pid}/sched per Yicong's suggestion.
3. Move the calculation of the average duration from put_prev_task_fair()
   to dequeue_task_fair(). This fixes an issue in v3 where
   put_prev_task_fair() is not invoked by pick_next_task_fair()
   in the fast path, so dur_avg was not updated in time.
4. Fix the comment in PATCH 2/2 to show that "WRITE_ONCE(CPU1->ttwu_pending, 1);"
   on CPU0 happens earlier than CPU1 getting "ttwu_list->p0", per Tianchen.
5. Move the check for a CPU running a short-duration task from select_idle_cpu()
   to select_idle_sibling(), because no CPU scan is involved, per
   Yicong.

Changes since v2:

1. Peter suggested comparing the duration of the waker with the cost to
   scan for an idle CPU: if the cost is higher than the task duration,
   do not waste time finding an idle CPU; choose the local or previous
   CPU directly. A prototype was created based on this suggestion.
   However, according to the test results, this prototype did not inhibit
   the cross-CPU wakeup and did not bring any improvement, because the cost
   of finding an idle CPU is small in the problematic scenario. The root
   cause of the problem is a race condition between scanning for an idle
   CPU and task enqueue (please refer to the commit log in PATCH 2/2).
   So v3 does not change the core logic of v2, with some refinement based
   on Peter's suggestion.

2. Simplify the logic to record the task duration, per Peter's and Abel's suggestions.

This change brings overall improvements on some microbenchmarks, on both
Intel and AMD platforms.

v3: https://lore.kernel.org/lkml/cover.1669862147.git.yu.c.chen@intel.com/
v2: https://lore.kernel.org/all/cover.1666531576.git.yu.c.chen@intel.com/
v1: https://lore.kernel.org/lkml/20220915165407.1776363-1-yu.c.chen@intel.com/

Chen Yu (2):
  sched/fair: Introduce short duration task check
  sched/fair: Choose the CPU where short task is running during wake up

 include/linux/sched.h   |  3 +++
 kernel/sched/core.c     |  2 ++
 kernel/sched/debug.c    |  1 +
 kernel/sched/fair.c     | 32 ++++++++++++++++++++++++++++++++
 kernel/sched/features.h |  1 +
 5 files changed, 39 insertions(+)

-- 
2.25.1


^ permalink raw reply	[flat|nested] 19+ messages in thread

* [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check
  2022-12-16  6:08 [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
@ 2022-12-16  6:11 ` Chen Yu
  2023-01-05 11:33   ` Dietmar Eggemann
  2022-12-16  6:11 ` [RFC PATCH v4 2/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
                   ` (2 subsequent siblings)
  3 siblings, 1 reply; 19+ messages in thread
From: Chen Yu @ 2022-12-16  6:11 UTC (permalink / raw)
  To: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel,
	Chen Yu

Introduce a short-duration task check, so that this attribute can be
leveraged for better task placement.

There are several candidate metrics that could be used to
indicate whether a task is a short-duration task.

At first glance, (p->se.sum_exec_runtime / p->nvcsw) could be
used to measure the task duration. However, long-past history
is weighted too heavily in such a formula: for example, a task that
has accumulated 10 s of runtime over 10,000 voluntary switches still
averages 1 ms per run section, even if its recent sections last only
tens of microseconds. Ideally, old activity should decay and not
affect the current status too much.

Although something based on PELT could be used, se.util_avg might
not be appropriate to describe the task duration:
1. Task p1 and task p2 are doing frequent ping-pong scheduling on
   one CPU; both p1 and p2 have a short duration, but the util_avg
   of each can be up to 50%.
2. Suppose a task lasting less than 4 ms is regarded as a short task.
   If task p3 runs for 6 ms and sleeps for 32 ms, p3 should not be a
   short-duration task. However, PELT would decay p3's accumulated
   running time from 6 ms to 3 ms, because 32 ms is the half-life
   in PELT. As a result, p3 would be incorrectly treated as a short
   task.
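
A rough illustration of point 2 (a sketch only; the real PELT decay is
applied per 1024 us period, this only captures the 32 ms half-life):

	/*
	 * Illustrative only: halve the accumulated running time once per
	 * 32 ms half-life that elapses while the task sleeps.  6 ms of
	 * runtime decays to ~3 ms after a 32 ms sleep, dropping below a
	 * hypothetical 4 ms "short task" cut-off even though the task
	 * actually ran for 6 ms.
	 */
	static u64 decayed_runtime_ms(u64 runtime_ms, u64 sleep_ms)
	{
		return runtime_ms >> (sleep_ms / 32);
	}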

There was once a similar feature that tracked the duration of a
task: commit ad4b78bbcbab ("sched: Add new wakeup preemption mode:
WAKEUP_RUNNING"). Unfortunately, it was reverted because it was an
experiment. So pick that idea up again, by recording the average
duration when a task voluntarily switches out.

The threshold of short duration is sysctl_sched_min_granularity / 8,
so it can be tuned by the user. By default, the threshold is 375 us
(assuming sysctl_sched_min_granularity has been scaled to its common
default of 3 ms). The reason to reuse sysctl_sched_min_granularity is
that it reflects how long the user would like a task to run, so the
criterion for a short task is naturally tied to it.

Josh is not in favor of tying the threshold to
sysctl_sched_min_granularity; ideally there should be a dedicated
parameter for the threshold, but that would introduce complexity for
maintenance and for the user.

Introduce SIS_SHORT to enable the short duration check.

Suggested-by: Tim Chen <tim.c.chen@intel.com>
Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
---
 include/linux/sched.h   |  3 +++
 kernel/sched/core.c     |  2 ++
 kernel/sched/debug.c    |  1 +
 kernel/sched/fair.c     | 23 +++++++++++++++++++++++
 kernel/sched/features.h |  1 +
 5 files changed, 30 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ffb6eb55cd13..26f4768e63f3 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -557,6 +557,9 @@ struct sched_entity {
 	u64				prev_sum_exec_runtime;
 
 	u64				nr_migrations;
+	u64				prev_sum_exec_runtime_vol;
+	/* average duration of a task */
+	u64				dur_avg;
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	int				depth;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index daff72f00385..c5202f1be3f7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4348,6 +4348,8 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->se.prev_sum_exec_runtime	= 0;
 	p->se.nr_migrations		= 0;
 	p->se.vruntime			= 0;
+	p->se.dur_avg			= 0;
+	p->se.prev_sum_exec_runtime_vol	= 0;
 	INIT_LIST_HEAD(&p->se.group_node);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 1637b65ba07a..8d64fba16cfe 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1024,6 +1024,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 	__PS("nr_involuntary_switches", p->nivcsw);
 
 	P(se.load.weight);
+	P(se.dur_avg);
 #ifdef CONFIG_SMP
 	P(se.avg.load_sum);
 	P(se.avg.runnable_sum);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e4a0b8bd941c..abdb7a442052 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4824,6 +4824,16 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 static int
 wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se);
 
+/*
+ * If a task switches in and then voluntarily relinquishes the
+ * CPU quickly, it is regarded as a short duration task.
+ */
+static inline int is_short_task(struct task_struct *p)
+{
+	return sched_feat(SIS_SHORT) && p->se.dur_avg &&
+		((p->se.dur_avg * 8) <= sysctl_sched_min_granularity);
+}
+
 /*
  * Pick the next process, keeping these things in mind, in this order:
  * 1) keep things fair between processes/task groups
@@ -5995,6 +6005,18 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 static void set_next_buddy(struct sched_entity *se);
 
+static inline void dur_avg_update(struct task_struct *p, bool task_sleep)
+{
+	u64 dur;
+
+	if (!task_sleep)
+		return;
+
+	dur = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime_vol;
+	p->se.prev_sum_exec_runtime_vol = p->se.sum_exec_runtime;
+	update_avg(&p->se.dur_avg, dur);
+}
+
 /*
  * The dequeue_task method is called before nr_running is
  * decreased. We remove the task from the rbtree and
@@ -6067,6 +6089,7 @@ static void dequeue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 
 dequeue_throttle:
 	util_est_update(&rq->cfs, p, task_sleep);
+	dur_avg_update(p, task_sleep);
 	hrtick_update(rq);
 }
 
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index ee7f23c76bd3..efdc29c42161 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -62,6 +62,7 @@ SCHED_FEAT(TTWU_QUEUE, true)
  */
 SCHED_FEAT(SIS_PROP, false)
 SCHED_FEAT(SIS_UTIL, true)
+SCHED_FEAT(SIS_SHORT, true)
 
 /*
  * Issue a WARN when we do multiple update_rq_clock() calls
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* [RFC PATCH v4 2/2] sched/fair: Choose the CPU where short task is running during wake up
  2022-12-16  6:08 [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
  2022-12-16  6:11 ` [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check Chen Yu
@ 2022-12-16  6:11 ` Chen Yu
  2022-12-29  7:16 ` [RFC PATCH v4 0/2] " K Prateek Nayak
  2023-01-17  2:53 ` Yicong Yang
  3 siblings, 0 replies; 19+ messages in thread
From: Chen Yu @ 2022-12-16  6:11 UTC (permalink / raw)
  To: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel,
	Chen Yu, kernel test robot

[Problem Statement]
For a workload that does frequent context switches, the throughput
scales well until the number of instances reaches a peak. Beyond that
point, the throughput drops significantly if the number of
instances continues to increase.

The will-it-scale context_switch1 test case exposes the issue. The
test platform has 112 CPUs per LLC domain. will-it-scale launches
1, 8, 16 ... 112 instances respectively. Each instance is composed
of 2 tasks, and each pair of tasks does ping-pong scheduling via
pipe_read() and pipe_write(). No task is bound to any CPU.
Once the number of instances is higher than 56 (112 tasks in
total, one task per CPU), the throughput drops accordingly as
the instance count continues to increase:

          ^
throughput|
          |                 X
          |               X   X X
          |             X         X X
          |           X               X
          |         X                   X
          |       X
          |     X
          |   X
          | X
          |
          +-----------------.------------------->
                            56
                                 number of instances

[Symptom analysis]

The performance degradation was caused by a high system idle
percentage (around 20% ~ 30%). The CPUs waste a lot of time in
idle and do nothing. As a comparison, if CPU affinity is set for
these workloads so they stop migrating among CPUs,
the idle percentage drops to nearly 0%, and the throughput
increases by about 300%. This indicates room for optimization.

The cause is a race condition between select_task_rq() and
the task enqueue.

Suppose there are nr_cpus pairs of ping-pong scheduling
tasks. For example, p0' and p0 are ping-pong scheduling,
as are p1' <=> p1 and p2' <=> p2. None of these tasks are
bound to any CPU. The problem can be summarized as:
more than one waker gets stacked on one CPU, which slows down
the wake-up of their wakees:

CPU0					CPU1				CPU2

p0'					p1' => idle			p2'

try_to_wake_up(p0)							try_to_wake_up(p2);
CPU1 = select_task_rq(p0);						CPU1 = select_task_rq(p2);
ttwu_queue(p0, CPU1);							ttwu_queue(p2, CPU1);
  __ttwu_queue_wakelist(p0, CPU1);
    WRITE_ONCE(CPU1->ttwu_pending, 1);
    __smp_call_single_queue(CPU1, p0);	=> ttwu_list->p0
					quitting cpuidle_idle_call()

									  __ttwu_queue_wakelist(p2, CPU1);
									    WRITE_ONCE(CPU1->ttwu_pending, 1);
					ttwu_list->p2->p0	<=	    __smp_call_single_queue(CPU1, p2);

p0' => idle
					sched_ttwu_pending()
					  enqueue_task(p2 and p0)

					idle => p2

					...
					p2 time slice expires
					...
									!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
								<===	!!! p2 delays the wake up of p0' !!!
									!!! causes long idle on CPU0     !!!
					p2 => p0			!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
					p0 wakes up p0'

idle => p0'

Since there are many waker/wakee pairs in the system, the chain reaction
causes many CPUs to be victims. These idle CPUs wait for their waker to
be scheduled.

Actually, Tianchen has mentioned the above issue here [1].

[Proposal]
The root cause is that there is no strict synchronization between
select_task_rq() and the setting of the ttwu_pending flag across CPUs.
This might be by design, because the scheduler prefers parallel
wakeups.

So avoid this problem indirectly: if the system does not have idle cores,
and if both the waker and the wakee are short-duration tasks, wake up the
wakee on the same CPU as the waker.

The reasoning is that, if the waker is a short-duration task, it might
relinquish the CPU soon, and the wakee then has the chance to be scheduled.
On the other hand, if the wakee is a short-duration task, putting it on a
non-idle CPU brings minimal impact to the running task. The "no idle core"
condition ensures that this mechanism does not inhibit spreading the
tasks when the system does have idle cores.
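
For readability, a condensed view of the check added to
select_idle_sibling() (the diff below is authoritative; this is only a
restatement with the conditions spelled out):

	if (!has_idle_core &&			/* no idle core in the LLC        */
	    cpu_rq(target)->nr_running == 1 &&	/* only one task runs on target   */
	    is_short_task(cpu_curr(target)) &&	/* ... and it is a short task     */
	    is_short_task(p))			/* ... as is the wakee            */
		return target;			/* skip the idle-CPU scan         */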

[Benchmark results]
The baseline is v6.1-rc8. The test platform has 56 Cores (112 CPUs) per
LLC domain. C-states deeper than C1E are disabled. Turbo is disabled.
The CPU frequency governor is set to performance.

will-it-scale.context_switch1
=============================
+331.13%

hackbench
=========
case            	load    	baseline(std%)	compare%( std%)
process-pipe    	1 group 	 1.00 (  1.50)	 +0.83 (  0.19)
process-pipe    	2 groups 	 1.00 (  0.77)	 +0.82 (  0.52)
process-pipe    	4 groups 	 1.00 (  0.20)	 -2.07 (  2.91)
process-pipe    	8 groups 	 1.00 (  0.05)	 +3.48 (  0.06)
process-sockets 	1 group 	 1.00 (  2.90)	-11.20 ( 11.99)
process-sockets 	2 groups 	 1.00 (  5.42)	 -1.39 (  1.70)
process-sockets 	4 groups 	 1.00 (  0.17)	 -0.20 (  0.19)
process-sockets 	8 groups 	 1.00 (  0.03)	 -0.05 (  0.11)
threads-pipe    	1 group 	 1.00 (  2.09)	 -1.63 (  0.44)
threads-pipe    	2 groups 	 1.00 (  0.28)	 -0.21 (  1.48)
threads-pipe    	4 groups 	 1.00 (  0.27)	 +0.13 (  0.63)
threads-pipe    	8 groups 	 1.00 (  0.14)	 +5.04 (  0.04)
threads-sockets 	1 group 	 1.00 (  2.51)	 -1.86 (  2.08)
threads-sockets 	2 groups 	 1.00 (  1.24)	 -0.60 (  3.83)
threads-sockets 	4 groups 	 1.00 (  0.49)	 +0.07 (  0.46)
threads-sockets 	8 groups 	 1.00 (  0.09)	 -0.04 (  0.08)

netperf
=======
case            	load    	baseline(std%)	compare%( std%)
TCP_RR          	28-threads	 1.00 (  0.74)	 -1.31 (  0.78)
TCP_RR          	56-threads	 1.00 (  0.60)	 -0.54 (  0.76)
TCP_RR          	84-threads	 1.00 (  0.32)	 -3.22 (  0.30)
TCP_RR          	112-threads	 1.00 (  0.21)	 -4.34 (  0.22)
TCP_RR          	140-threads	 1.00 (  0.19)	+202.56 ( 10.11)
TCP_RR          	168-threads	 1.00 ( 59.37)	+82.78 ( 10.85)
TCP_RR          	196-threads	 1.00 ( 12.89)	 -0.16 ( 13.80)
TCP_RR          	224-threads	 1.00 (  7.14)	 -0.68 (  7.02)
UDP_RR          	28-threads	 1.00 (  1.39)	 +0.26 (  1.13)
UDP_RR          	56-threads	 1.00 ( 10.91)	 +0.80 (  1.14)
UDP_RR          	84-threads	 1.00 (  0.17)	 -4.00 (  0.75)
UDP_RR          	112-threads	 1.00 (  9.53)	 -6.41 (  5.47)
UDP_RR          	140-threads	 1.00 (  6.93)	+146.56 ( 13.58)
UDP_RR          	168-threads	 1.00 ( 13.09)	+156.22 ( 11.81)
UDP_RR          	196-threads	 1.00 ( 19.75)	 +5.75 ( 15.65)
UDP_RR          	224-threads	 1.00 ( 13.90)	 +5.48 ( 14.49)

tbench
======
case            	load    	baseline(std%)	compare%( std%)
loopback        	28-threads	 1.00 (  0.20)	 -0.24 (  0.51)
loopback        	56-threads	 1.00 (  0.03)	 -0.59 (  0.08)
loopback        	84-threads	 1.00 (  0.11)	 -1.76 (  0.13)
loopback        	112-threads	 1.00 (  0.04)	+161.78 (  0.25)
loopback        	140-threads	 1.00 (  0.14)	 -0.54 (  0.02)
loopback        	168-threads	 1.00 (  0.38)	 -0.33 (  0.21)
loopback        	196-threads	 1.00 (  0.15)	 -0.46 (  0.06)
loopback        	224-threads	 1.00 (  0.08)	 -0.36 (  0.11)

schbench
========
case            	load    	baseline(std%)	compare%( std%)
normal          	1-mthread	 1.00 (  1.49)	 -3.16 (  5.20)
normal          	2-mthreads	 1.00 (  9.25)	 -1.87 ( 12.97)
normal          	4-mthreads	 1.00 (  4.82)	 -3.91 (  5.92)
normal          	8-mthreads	 1.00 (  0.54)	 +4.14 (  3.28)

In summary, overall there is no significant performance regression detected,
and there is a big improvement in netperf/tbench in the partially-busy cases.
In theory there should be no impact on schbench, because its default task
duration is 30 ms.

[Limitations]
As Peter said, the criterion for a short-duration task is intuitive, but it
seems hard to find an accurate definition for it.

This wake-up strategy can be viewed as a dynamic WF_SYNC, except that:
1. Some workloads do not have WF_SYNC set.
2. WF_SYNC does not treat a non-idle CPU as a candidate target CPU.

Peter has suggested [2] comparing the task duration with the cost of searching
for an idle CPU: if the latter is higher, then give up the scan, to
achieve better task affinity. However, this method does not fit the case
encountered by this patch. Because there are plenty of idle CPUs in the
system, it does not take long to find an idle CPU; the bottleneck is
the race condition mentioned above.

[1] https://lore.kernel.org/lkml/9ed75cad-3718-356f-21ca-1b8ec601f335@linux.alibaba.com/
[2] https://lore.kernel.org/lkml/Y2O8a%2FOhk1i1l8ao@hirez.programming.kicks-ass.net/

Suggested-by: Tim Chen <tim.c.chen@intel.com>
Suggested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Tested-by: kernel test robot <yujie.liu@intel.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
---
 kernel/sched/fair.c | 9 +++++++++
 1 file changed, 9 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index abdb7a442052..4983c60c1b3f 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6259,6 +6259,11 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
 	if (available_idle_cpu(prev_cpu))
 		return prev_cpu;
 
+	/* The only running task is a short duration one. */
+	if (cpu_rq(this_cpu)->nr_running == 1 &&
+	    is_short_task(cpu_curr(this_cpu)))
+		return this_cpu;
+
 	return nr_cpumask_bits;
 }
 
@@ -6809,6 +6814,10 @@ static int select_idle_sibling(struct task_struct *p, int prev, int target)
 		}
 	}
 
+	if (!has_idle_core && cpu_rq(target)->nr_running == 1 &&
+	    is_short_task(cpu_curr(target)) && is_short_task(p))
+		return target;
+
 	i = select_idle_cpu(p, sd, has_idle_core, target);
 	if ((unsigned)i < nr_cpumask_bits)
 		return i;
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up
  2022-12-16  6:08 [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
  2022-12-16  6:11 ` [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check Chen Yu
  2022-12-16  6:11 ` [RFC PATCH v4 2/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
@ 2022-12-29  7:16 ` K Prateek Nayak
  2022-12-30  2:47   ` Chen Yu
  2023-01-17  2:53 ` Yicong Yang
  3 siblings, 1 reply; 19+ messages in thread
From: K Prateek Nayak @ 2022-12-29  7:16 UTC (permalink / raw)
  To: Chen Yu, Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel

Hello Chenyu,

Including the detailed results from testing below.

tl;dr

o There seem to be 3 noticeable regressions:
  - tbench for a lower number of clients. The schedstat data shows
    an increase in wait time.
  - SpecJBB MultiJVM performance drops as the workload prefers
    an idle CPU over a busy one.
  - Unixbench-pipe benchmark performance drops.

o Most benchmark numbers remain the same.

o Small gains are seen for ycsb-mongodb and unixbench-syscall.

On 12/16/2022 11:38 AM, Chen Yu wrote:
> The main purpose of this change is to avoid too many cross CPU
> wake up when it is unnecessary. The frequent cross CPU wake up
> brings significant damage to some workloads, especially on high
> core count systems.
> 
> This patch set inhibits the cross CPU wake-up by placing the wakee
> on waking CPU or previous CPU, if both the waker and wakee are
> short-duration tasks.
> 
> The first patch is to introduce the definition of a short-duration
> task. The second patch leverages the first patch to choose a local
> or previous CPU for wakee.
> 
> Changes since v3:
> 1. Honglei and Josh have concern that the threshold of short
>    task duration could be too long. Decreased the threshold from
>    sysctl_sched_min_granularity to (sysctl_sched_min_granularity / 8),
>    and the '8' comes from get_update_sysctl_factor().
> 2. Export p->se.dur_avg to /proc/{pid}/sched per Yicong's suggestion.
> 3. Move the calculation of average duration from put_prev_task_fair()
>    to dequeue_task_fair(). Because there is an issue in v3 that,
>    put_prev_task_fair() will not be invoked by pick_next_task_fair()
>    in fast path, thus the dur_avg could not be updated timely.
> 4. Fix the comment in PATCH 2/2, that "WRITE_ONCE(CPU1->ttwu_pending, 1);"
>    on CPU0 is earlier than CPU1 getting "ttwu_list->p0", per Tianchen.
> 5. Move the scan for CPU with short duration task from select_idle_cpu()
>    to select_idle_siblings(), because there is no CPU scan involved, per>    Yicong.

Following are the results from running standard benchmarks on a
dual socket Zen3 (2 x 64C/128T) machine configured in different
NPS modes.

NPS modes are used to logically divide a single socket into
multiple NUMA regions.
Following is the NUMA configuration for each NPS mode on the system:

NPS1: Each socket is a NUMA node.
    Total 2 NUMA nodes in the dual socket machine.

    Node 0: 0-63,   128-191
    Node 1: 64-127, 192-255

NPS2: Each socket is further logically divided into 2 NUMA regions.
    Total 4 NUMA nodes exist over 2 sockets.

    Node 0: 0-31,   128-159
    Node 1: 32-63,  160-191
    Node 2: 64-95,  192-223
    Node 3: 96-127, 224-255

NPS4: Each socket is logically divided into 4 NUMA regions.
    Total 8 NUMA nodes exist over 2 sockets.

    Node 0: 0-15,    128-143
    Node 1: 16-31,   144-159
    Node 2: 32-47,   160-175
    Node 3: 48-63,   176-191
    Node 4: 64-79,   192-207
    Node 5: 80-95,   208-223
    Node 6: 96-111,  224-239
    Node 7: 112-127, 240-255

Benchmark Results:

Kernel versions:
- tip:          6.1.0-rc2 tip sched/core
- sis_short: 	6.1.0-rc2 tip sched/core + this series

When the testing started, the tip was at:
commit d6962c4fe8f9 "sched: Clear ttwu_pending after enqueue_task()"

~~~~~~~~~~~~~
~ hackbench ~
~~~~~~~~~~~~~

NPS1

Test:			tip			sis_short
 1-groups:	   4.25 (0.00 pct)	   4.26 (-0.23 pct)
 2-groups:	   4.95 (0.00 pct)	   4.87 (1.61 pct)
 4-groups:	   5.19 (0.00 pct)	   5.09 (1.92 pct)
 8-groups:	   5.45 (0.00 pct)	   5.37 (1.46 pct)
16-groups:	   7.33 (0.00 pct)	   7.65 (-4.36 pct)

NPS2

Test:			tip			sis_short
 1-groups:	   4.09 (0.00 pct)	   4.20 (-2.68 pct)
 2-groups:	   4.68 (0.00 pct)	   4.75 (-1.49 pct)
 4-groups:	   5.05 (0.00 pct)	   4.94 (2.17 pct)
 8-groups:	   5.37 (0.00 pct)	   5.29 (1.48 pct)
16-groups:	   6.69 (0.00 pct)	   6.78 (-1.34 pct)

NPS4

Test:			tip			sis_short
 1-groups:	   4.28 (0.00 pct)	   4.37 (-2.10 pct)
 2-groups:	   4.78 (0.00 pct)	   4.82 (-0.83 pct)
 4-groups:	   5.11 (0.00 pct)	   5.06 (0.97 pct)
 8-groups:	   5.48 (0.00 pct)	   5.38 (1.82 pct)
16-groups:	   7.07 (0.00 pct)	   6.93 (1.98 pct)

~~~~~~~~~~~~
~ schbench ~
~~~~~~~~~~~~

NPS1

#workers:	tip			sis_short
  1:	  31.00 (0.00 pct)	  32.00 (-3.22 pct)
  2:	  33.00 (0.00 pct)	  32.00 (3.03 pct)
  4:	  39.00 (0.00 pct)	  34.00 (12.82 pct)
  8:	  45.00 (0.00 pct)	  43.00 (4.44 pct)
 16:	  61.00 (0.00 pct)	  66.00 (-8.19 pct)
 32:	 108.00 (0.00 pct)	 107.00 (0.92 pct)
 64:	 212.00 (0.00 pct)	 223.00 (-5.18 pct)
128:	 475.00 (0.00 pct)	 519.00 (-9.26 pct)	*
128:	 434.00 (0.00 pct)	 438.00 (-0.92 pct)	[Verification Run]
256:	 44736.00 (0.00 pct)	 42048.00 (6.00 pct)
512:	 77184.00 (0.00 pct)	 76672.00 (0.66 pct)

NPS2

#workers:	tip			sis_short
  1:	  28.00 (0.00 pct)	  32.00 (-14.28 pct)
  2:	  34.00 (0.00 pct)	  37.00 (-8.82 pct)
  4:	  36.00 (0.00 pct)	  37.00 (-2.77 pct)
  8:	  51.00 (0.00 pct)	  47.00 (7.84 pct)
 16:	  68.00 (0.00 pct)	  67.00 (1.47 pct)
 32:	 113.00 (0.00 pct)	 117.00 (-3.53 pct)
 64:	 221.00 (0.00 pct)	 221.00 (0.00 pct)
128:	 553.00 (0.00 pct)	 567.00 (-2.53 pct)
256:	 43840.00 (0.00 pct)	 47040.00 (-7.29 pct)	*
256:	 46016.00 (0.00 pct)	 46272.00 (-0.55 pct)	[Verification Run]
512:	 76672.00 (0.00 pct)	 78976.00 (-3.00 pct)

NPS4

#workers:	tip			sis_short
  1:	  33.00 (0.00 pct)	  32.00 (3.03 pct)
  2:	  29.00 (0.00 pct)	  42.00 (-44.82 pct)
  4:	  39.00 (0.00 pct)	  43.00 (-10.25 pct)
  8:	  58.00 (0.00 pct)	  56.00 (3.44 pct)
 16:	  66.00 (0.00 pct)	  68.00 (-3.03 pct)
 32:	 112.00 (0.00 pct)	 113.00 (-0.89 pct)
 64:	 215.00 (0.00 pct)	 214.00 (0.46 pct)
128:	 689.00 (0.00 pct)	 823.00 (-19.44 pct)	*
128:	 424.00 (0.00 pct)	 449.00 (-5.89 pct)	[Verification Run]
256:	 45120.00 (0.00 pct)	 44608.00 (1.13 pct)
512:	 77440.00 (0.00 pct)	 79488.00 (-2.64 pct)


~~~~~~~~~~
~ tbench ~
~~~~~~~~~~

NPS1

Clients:	tip			sis_short
    1	 581.75 (0.00 pct)	 545.15 (-6.29 pct)	*
    1    588.80 (0.00 pct)       547.71 (-6.97 pct)	[Verification Run]
    2	 1145.75 (0.00 pct)	 1074.91 (-6.18 pct)	*
    2    1153.55 (0.00 pct)      1068.06 (-7.41 pct)	[Verification Run]
    4	 2127.94 (0.00 pct)	 1958.71 (-7.95 pct)	*
    4    2133.95 (0.00 pct)      1965.03 (-7.91 pct)	[Verification Run]
    8	 3838.27 (0.00 pct)	 3701.75 (-3.55 pct)
   16	 6272.71 (0.00 pct)	 6045.56 (-3.62 pct)
   32	 11400.12 (0.00 pct)	 11688.67 (2.53 pct)
   64	 21605.96 (0.00 pct)	 22070.92 (2.15 pct)
  128	 30715.43 (0.00 pct)	 30772.11 (0.18 pct)
  256	 55580.78 (0.00 pct)	 54998.38 (-1.04 pct)
  512	 56528.79 (0.00 pct)	 55239.00 (-2.28 pct)
 1024	 56520.40 (0.00 pct)	 55153.71 (-2.41 pct)

NPS2

Clients:	tip			sis_short
    1	 584.13 (0.00 pct)	 555.55 (-4.89 pct)
    2	 1153.63 (0.00 pct)	 1094.62 (-5.11 pct)	*
    2    1147.26 (0.00 pct)      1063.35 (-7.31 pct)	[Verification Run]
    4	 2212.89 (0.00 pct)	 2049.24 (-7.39 pct)	*
    4    2133.57 (0.00 pct)      2004.66 (-6.04 pct)	[Verification Run]
    8	 3871.35 (0.00 pct)	 3793.66 (-2.00 pct)
   16	 6216.72 (0.00 pct)	 5895.87 (-5.16 pct)	*
   16    6342.15 (0.00 pct)      5962.58 (-5.98 pct)	[Verification Run]
   32	 11766.98 (0.00 pct)	 11013.97 (-6.39 pct)	*
   32    11009.53 (0.00 pct)     10215.06 (-7.21 pct)	[Verification Run]
   64	 22000.93 (0.00 pct)	 20563.47 (-6.53 pct)	*
   64    20315.34 (0.00 pct)     20097.13 (-1.07 pct)	[Verification Run]
  128	 31520.53 (0.00 pct)	 27489.63 (-12.78 pct)	*
  128    29224.27 (0.00 pct)     27872.81 (-4.62 pct)	[Verification Run]
  256	 51420.11 (0.00 pct)	 51968.82 (1.06 pct)
  512	 53935.90 (0.00 pct)	 54053.59 (0.21 pct)
 1024	 55239.73 (0.00 pct)	 54505.89 (-1.32 pct)

NPS4

Clients:	tip			sis_short
    1	 585.83 (0.00 pct)	 547.36 (-6.56 pct)	*
    1    590.36 (0.00 pct)       546.57 (-7.41 pct)	[Verification Run]
    2	 1141.59 (0.00 pct)	 1061.99 (-6.97 pct)	*
    2    1167.56 (0.00 pct)      1059.16 (-9.28 pct)	[Verification Run]
    4	 2174.79 (0.00 pct)	 2002.42 (-7.92 pct)	*
    4    2169.72 (0.00 pct)      1973.22 (-9.05 pct)	[Verification Run]
    8	 3887.56 (0.00 pct)	 3543.62 (-8.84 pct)	*
    8    3747.36 (0.00 pct)      3398.57 (-9.30 pct)	[Verification Run]
   16	 6441.59 (0.00 pct)	 5896.92 (-8.45 pct)	*
   16    6165.54 (0.00 pct)      6019.96 (-2.36 pct)	[Verification Run]
   32	 12133.60 (0.00 pct)	 11074.16 (-8.73 pct)	*
   32    11182.63 (0.00 pct)     11241.82 (0.52 pct)	[Verification Run]
   64	 21769.15 (0.00 pct)	 19485.94 (-10.48 pct)	*
   64    20168.52 (0.00 pct)     20333.98 (0.82 pct)	[Verification Run]
  128	 31396.31 (0.00 pct)	 29029.29 (-7.53 pct)	*
  128    29437.98 (0.00 pct)     27438.60 (-6.79 pct)	[Verification Run]
  256	 52792.39 (0.00 pct)	 51560.92 (-2.33 pct)
  512	 55315.44 (0.00 pct)	 52914.21 (-4.34 pct)
 1024	 52150.27 (0.00 pct)	 53286.08 (2.17 pct)


~~~~~~~~~~
~ stream ~
~~~~~~~~~~

NPS1

10 Runs:

Test:		tip			sis_short
 Copy:	 307827.79 (0.00 pct)	 320671.93 (4.17 pct)
Scale:	 208872.28 (0.00 pct)	 213725.16 (2.32 pct)
  Add:	 239404.64 (0.00 pct)	 240872.59 (0.61 pct)
Triad:	 247258.30 (0.00 pct)	 249764.53 (1.01 pct)

100 Runs:

Test:		tip			sis_short
 Copy:	 317217.55 (0.00 pct)	 318031.46 (0.25 pct)
Scale:	 208740.82 (0.00 pct)	 213773.01 (2.41 pct)
  Add:	 240550.63 (0.00 pct)	 239618.79 (-0.38 pct)
Triad:	 249594.21 (0.00 pct)	 245722.98 (-1.55 pct)

NPS2

10 Runs:

Test:		tip			sis_short
 Copy:	 340877.18 (0.00 pct)	 336204.91 (-1.37 pct)
Scale:	 217318.16 (0.00 pct)	 218200.84 (0.40 pct)
  Add:	 259078.93 (0.00 pct)	 258847.83 (-0.08 pct)
Triad:	 274500.78 (0.00 pct)	 267234.01 (-2.64 pct)

100 Runs:

Test:		tip			sis_short
 Copy:	 341860.73 (0.00 pct)	 335116.25 (-1.97 pct)
Scale:	 218043.00 (0.00 pct)	 219265.81 (0.56 pct)
  Add:	 253698.22 (0.00 pct)	 260792.08 (2.79 pct)
Triad:	 265011.84 (0.00 pct)	 269655.66 (1.75 pct)

NPS4

10 Runs:

Test:		tip			sis_short
 Copy:	 323775.81 (0.00 pct)	 363923.22 (12.39 pct)
Scale:	 237719.83 (0.00 pct)	 238074.93 (0.14 pct)
  Add:	 251896.35 (0.00 pct)	 269129.04 (6.84 pct)
Triad:	 264124.72 (0.00 pct)	 284456.89 (7.69 pct)

100 Runs:

Test:		tip			sis_short
 Copy:	 360777.62 (0.00 pct)	 375325.94 (4.03 pct)
Scale:	 239333.27 (0.00 pct)	 238341.16 (-0.41 pct)
  Add:	 271142.94 (0.00 pct)	 269924.55 (-0.44 pct)
Triad:	 285408.97 (0.00 pct)	 290345.65 (1.72 pct)

~~~~~~~~~~~~~~~~
~ ycsb-mongodb ~
~~~~~~~~~~~~~~~~

o NPS1

tip:                    131696.33 (var: 2.03%)
sis_short:              129031.33 (var: 1.71%)  (-2.02%)

o NPS2:

tip:                    129895.33 (var: 2.34%)
sis_short:              133652.33 (var: 1.11%)  (+2.89%)

o NPS4:

tip:                    131165.00 (var: 1.06%)
sis_short:              135975.00 (var: 0.34%)  (+3.66%)

~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ SPECjbb MultiJVM - NPS1 ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Kernel			 tip		sis_short
Max-jOPS		100%		  96%
Critical-jOPS		100%		  97%

~~~~~~~~~~~~~
~ unixbench ~
~~~~~~~~~~~~~

o NPS1

Test			Metric	  Parallelism			tip		      	sis_short
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-1      48929419.48 (   0.00%)    48992339.28 (   0.13%)
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-512  6266355505.80 (   0.00%)  6251441423.60 (  -0.24%)
unixbench-syscall       Amean     unixbench-syscall-1        2994319.73 (   0.00%)     2665595.13 *  10.98%*
unixbench-syscall       Amean     unixbench-syscall-512      7349715.87 (   0.00%)     7645690.70 *  -4.03%*
unixbench-pipe          Hmean     unixbench-pipe-1           2830206.03 (   0.00%)     2508957.89 * -11.35%*	*
unixbench-pipe          Hmean     unixbench-pipe-512       326207828.01 (   0.00%)   306588592.66 *  -6.01%*	*
unixbench-spawn         Hmean     unixbench-spawn-1             6394.21 (   0.00%)        6153.47 (  -3.76%)
unixbench-spawn         Hmean     unixbench-spawn-512          72700.64 (   0.00%)       87873.75 *  20.87%*
unixbench-execl         Hmean     unixbench-execl-1             4723.61 (   0.00%)        4678.46 (  -0.96%)
unixbench-execl         Hmean     unixbench-execl-512          11212.05 (   0.00%)       11067.41 *  -1.29%*

o NPS2

Test			Metric	  Parallelism			tip		      	sis_short
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-1      49271512.85 (   0.00%)    48842191.93 (  -0.87%)
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-512  6267992483.03 (   0.00%)  6261579339.17 (  -0.10%)
unixbench-syscall       Amean     unixbench-syscall-1        2995885.93 (   0.00%)     2668794.40 *  10.92%*
unixbench-syscall       Amean     unixbench-syscall-512      7388865.77 (   0.00%)     7229531.57 *   2.16%*
unixbench-pipe          Hmean     unixbench-pipe-1           2828971.95 (   0.00%)     2509560.35 * -11.29%*	*
unixbench-pipe          Hmean     unixbench-pipe-512       326225385.37 (   0.00%)   306941838.69 *  -5.91%*	*
unixbench-spawn         Hmean     unixbench-spawn-1             6958.71 (   0.00%)        6613.24 (  -4.96%)
unixbench-spawn         Hmean     unixbench-spawn-512          85443.56 (   0.00%)       89130.15 *   4.31%*
unixbench-execl         Hmean     unixbench-execl-1             4767.99 (   0.00%)        4684.44 *  -1.75%*
unixbench-execl         Hmean     unixbench-execl-512          11250.72 (   0.00%)       11197.80 (  -0.47%)

o NPS4

Test			Metric	  Parallelism			tip		      	sis_short
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-1      49041932.68 (   0.00%)    48942640.63 (  -0.20%)
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-512  6277864986.70 (   0.00%)  6263890106.10 (  -0.22%)
unixbench-syscall       Amean     unixbench-syscall-1        2992405.60 (   0.00%)     2663959.00 *  10.98%*
unixbench-syscall       Amean     unixbench-syscall-512      7971789.70 (   0.00%)     7834566.33 *   1.72%*
unixbench-pipe          Hmean     unixbench-pipe-1           2822892.54 (   0.00%)     2505995.43 * -11.23%*	*
unixbench-pipe          Hmean     unixbench-pipe-512       326408309.83 (   0.00%)   306887263.78 *  -5.98%*	*
unixbench-spawn         Hmean     unixbench-spawn-1             7685.31 (   0.00%)        7675.29 (  -0.13%)
unixbench-spawn         Hmean     unixbench-spawn-512          72245.56 (   0.00%)       74190.34 *   2.69%*
unixbench-execl         Hmean     unixbench-execl-1             4761.42 (   0.00%)        4683.56 *  -1.64%*
unixbench-execl         Hmean     unixbench-execl-512          11533.53 (   0.00%)       11298.59 (  -2.04%)

~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ Analyzing tbench (2 client) for NPS1 configuration ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~

tip: 		1126.33 MB/sec
sis_short:	1031.26 MB/sec (-8.44%)

-> Legend for per CPU stats:

rq->yld_count:                  sched_yield count
rq->sched_count:                schedule called
rq->sched_goidle:               schedule left the processor idle
rq->ttwu_count:                 try_to_wake_up was called
rq->ttwu_local:                 try_to_wake_up was called to wake up the local cpu
rq->rq_cpu_time:                total runtime by tasks on this processor (in jiffies)
rq->rq_sched_info.run_delay:    total waittime by tasks on this processor (in jiffies)
rq->rq_sched_info.pcount:       total timeslices run on this cpu

-> System wide stats

-------------------------------------------------------------------------------------------------------------------------
cpu:  all_cpus (avg) vs cpu:  all_cpus (avg)
-------------------------------------------------------------------------------------------------------------------------
sched_yield count                                          :             0,            0
Legacy counter can be ignored                              :             0,            0
schedule called                                            :        233543,       213786  |  -8.46|
schedule left the processor idle                           :        116745,       106863  |  -8.46|
try_to_wake_up was called                                  :        116772,       106893  |  -8.46|
try_to_wake_up was called to wake up the local cpu         :            74,           84  |  13.51|
total runtime by tasks on this processor (in jiffies)      :     579771198,    585440128
total waittime by tasks on this processor (in jiffies)     :        176365,       258702  |  46.69|	* Wait time is much longer
total timeslices run on this cpu                           :        116797,       106922  |  -8.45|
-------------------------------------------------------------------------------------------------------------------------
< -----------------------------------------------------------------  Wakeup info:  ------------------------------------ >
Wakeups on same         SMT cpus = all_cpus (avg)       :             0,            0             
Wakeups on same         MC cpus = all_cpus (avg)        :        116689,       106797  |  -8.48|
Wakeups on same         DIE cpus = all_cpus (avg)       :             2,            4
Wakeups on same         NUMA cpus = all_cpus (avg)      :             5,            7
Affine wakeups on same  SMT cpus = all_cpus (avg)       :             0,            0
Affine wakeups on same  MC cpus = all_cpus (avg)        :        116667,       106781  |  -8.47|
Affine wakeups on same  DIE cpus = all_cpus (avg)       :             2,            4
Affine wakeups on same  NUMA cpus = all_cpus (avg)      :             5,            6
--------------------------------------------------------------------------------------------------------------------------

The rq->rq_sched_info.pcount and rq->sched_count seem to
have reduced proportionally.

> 
> Changes since v2:
> 
> 1. Peter suggested comparing the duration of waker and the cost to
>    scan for an idle CPU: If the cost is higher than the task duration,
>    do not waste time finding an idle CPU, choose the local or previous
>    CPU directly. A prototype was created based on this suggestion.
>    However, according to the test result, this prototype does not inhibit
>    the cross CPU wakeup and did not bring improvement. Because the cost
>    to find an idle CPU is small in the problematic scenario. The root
>    cause of the problem is a race condition between scanning for an idle
>    CPU and task enqueue(please refer to the commit log in PATCH 2/2).
>    So v3 does not change the core logic of v2, with some refinement based
>    on Peter's suggestion.
> 
> 2. Simplify the logic to record the task duration per Peter and Abel's suggestion.
> 
> This change brings overall improvement on some microbenchmarks, both on
> Intel and AMD platforms.
> 
> v3: https://lore.kernel.org/lkml/cover.1669862147.git.yu.c.chen@intel.com/
> v2: https://lore.kernel.org/all/cover.1666531576.git.yu.c.chen@intel.com/
> v1: https://lore.kernel.org/lkml/20220915165407.1776363-1-yu.c.chen@intel.com/
> 
> Chen Yu (2):
>   sched/fair: Introduce short duration task check
>   sched/fair: Choose the CPU where short task is running during wake up
> 
>  include/linux/sched.h   |  3 +++
>  kernel/sched/core.c     |  2 ++
>  kernel/sched/debug.c    |  1 +
>  kernel/sched/fair.c     | 32 ++++++++++++++++++++++++++++++++
>  kernel/sched/features.h |  1 +
>  5 files changed, 39 insertions(+)
> 

All numbers are with turbo and C2 enabled. I wonder if the
check "(5 * nr < 3 * sd->span_weight)" in v2 helped workloads
like tbench and SpecJBB. I'll queue some runs with the condition
added back, and a separate run with turbo and C2 disabled, to see
if they help. I'll update the thread once the results are in.

--
Thanks and Regards,
Prateek

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up
  2022-12-29  7:16 ` [RFC PATCH v4 0/2] " K Prateek Nayak
@ 2022-12-30  2:47   ` Chen Yu
  2023-01-16 10:53     ` K Prateek Nayak
  0 siblings, 1 reply; 19+ messages in thread
From: Chen Yu @ 2022-12-30  2:47 UTC (permalink / raw)
  To: K Prateek Nayak
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel

On 2022-12-29 at 12:46:59 +0530, K Prateek Nayak wrote:
> Hello Chenyu,
> 
> Including the detailed results from testing below.
> 
> tl;dr
> 
> o There seems to be 3 noticeable regressions:
>   - tbench for lower number of clients. The schedstat data shows
>     an increase in wait time.
>   - SpecJBB MultiJVM performance drops as the workload prefers
>     an idle CPU over a busy one.
>   - Unixbench-pipe benchmark performance drops.
> 
> o Most benchmark numbers remain same.
> 
> o Small gains seen for ycsb-mongodb and unixbench-syscall.
>
Thanks Prateek for your test. 
> On 12/16/2022 11:38 AM, Chen Yu wrote:
> > The main purpose of this change is to avoid too many cross CPU
> > wake up when it is unnecessary. The frequent cross CPU wake up
> > brings significant damage to some workloads, especially on high
> > core count systems.
> > 
> > This patch set inhibits the cross CPU wake-up by placing the wakee
> > on waking CPU or previous CPU, if both the waker and wakee are
> > short-duration tasks.
> > 
> > The first patch is to introduce the definition of a short-duration
> > task. The second patch leverages the first patch to choose a local
> > or previous CPU for wakee.
> > 
> > Changes since v3:
> > 1. Honglei and Josh have concern that the threshold of short
> >    task duration could be too long. Decreased the threshold from
> >    sysctl_sched_min_granularity to (sysctl_sched_min_granularity / 8),
> >    and the '8' comes from get_update_sysctl_factor().
> > 2. Export p->se.dur_avg to /proc/{pid}/sched per Yicong's suggestion.
> > 3. Move the calculation of average duration from put_prev_task_fair()
> >    to dequeue_task_fair(). Because there is an issue in v3 that,
> >    put_prev_task_fair() will not be invoked by pick_next_task_fair()
> >    in fast path, thus the dur_avg could not be updated timely.
> > 4. Fix the comment in PATCH 2/2, that "WRITE_ONCE(CPU1->ttwu_pending, 1);"
> >    on CPU0 is earlier than CPU1 getting "ttwu_list->p0", per Tianchen.
> > 5. Move the scan for CPU with short duration task from select_idle_cpu()
> >    to select_idle_siblings(), because there is no CPU scan involved, per>    Yicong.
> 
> Following are the results from running standard benchmarks on a
> dual socket Zen3 (2 x 64C/128T) machine configured in different
> NPS modes.
> 
> NPS Modes are used to logically divide single socket into
> multiple NUMA region.
> Following is the NUMA configuration for each NPS mode on the system:
>
> NPS1: Each socket is a NUMA node.
>     Total 2 NUMA nodes in the dual socket machine.
> 
>     Node 0: 0-63,   128-191
>     Node 1: 64-127, 192-255
> 
> NPS2: Each socket is further logically divided into 2 NUMA regions.
>     Total 4 NUMA nodes exist over 2 socket.
>    
>     Node 0: 0-31,   128-159
>     Node 1: 32-63,  160-191
>     Node 2: 64-95,  192-223
>     Node 3: 96-127, 223-255
> 
> NPS4: Each socket is logically divided into 4 NUMA regions.
>     Total 8 NUMA nodes exist over 2 socket.
>    
>     Node 0: 0-15,    128-143
>     Node 1: 16-31,   144-159
>     Node 2: 32-47,   160-175
>     Node 3: 48-63,   176-191
>     Node 4: 64-79,   192-207
>     Node 5: 80-95,   208-223
>     Node 6: 96-111,  223-231
>     Node 7: 112-127, 232-255
> 
> Benchmark Results:
> 
> Kernel versions:
> - tip:          6.1.0-rc2 tip sched/core
> - sis_short: 	6.1.0-rc2 tip sched/core + this series
> 
> When the testing started, the tip was at:
> commit d6962c4fe8f9 "sched: Clear ttwu_pending after enqueue_task()"
>
OK, I'll rebase the code and check whether I can reproduce the regression
with SNC enabled. Previously I tested v3 with SNC enabled and did not see a regression,
so I did not enable SNC when testing v4; I'll redo the test with SNC enabled.
[...]
> 
> o NPS1
> 
> Test			Metric	  Parallelism			tip		      	sis_short
> unixbench-dhry2reg      Hmean     unixbench-dhry2reg-1      48929419.48 (   0.00%)    48992339.28 (   0.13%)
> unixbench-dhry2reg      Hmean     unixbench-dhry2reg-512  6266355505.80 (   0.00%)  6251441423.60 (  -0.24%)
> unixbench-syscall       Amean     unixbench-syscall-1        2994319.73 (   0.00%)     2665595.13 *  10.98%*
> unixbench-syscall       Amean     unixbench-syscall-512      7349715.87 (   0.00%)     7645690.70 *  -4.03%*
> unixbench-pipe          Hmean     unixbench-pipe-1           2830206.03 (   0.00%)     2508957.89 * -11.35%*	*
> unixbench-pipe          Hmean     unixbench-pipe-512       326207828.01 (   0.00%)   306588592.66 *  -6.01%*	*
I'll also run unixbench-pipe in my environment. Since will-it-scale context_switch1 also stresses pipes and did
see an improvement, I'll check why unixbench-pipe regresses.
[...]
> total waittime by tasks on this processor (in jiffies)     :        176365,       258702  |  46.69|	* Wait time is much longer
This seems to fall back to the result of v1, where we inhibited the idle CPU scan and caused problems.
> total timeslices run on this cpu                           :        116797,       106922  |  -8.45|
> -------------------------------------------------------------------------------------------------------------------------
> < -----------------------------------------------------------------  Wakeup info:  ------------------------------------ >
> Wakeups on same         SMT cpus = all_cpus (avg)       :             0,            0             
> Wakeups on same         MC cpus = all_cpus (avg)        :        116689,       106797  |  -8.48|
> Wakeups on same         DIE cpus = all_cpus (avg)       :             2,            4
> Wakeups on same         NUMA cpus = all_cpus (avg)      :             5,            7
> Affine wakeups on same  SMT cpus = all_cpus (avg)       :             0,            0
> Affine wakeups on same  MC cpus = all_cpus (avg)        :        116667,       106781  |  -8.47|
> Affine wakeups on same  DIE cpus = all_cpus (avg)       :             2,            4
> Affine wakeups on same  NUMA cpus = all_cpus (avg)      :             5,            6
> --------------------------------------------------------------------------------------------------------------------------
> 
> The rq->rq_sched_info.pcount and rq->sched_count seems to
> have reduced proportionally.
> 
> > 
> > Changes since v2:
> > 
> > 1. Peter suggested comparing the duration of waker and the cost to
> >    scan for an idle CPU: If the cost is higher than the task duration,
> >    do not waste time finding an idle CPU, choose the local or previous
> >    CPU directly. A prototype was created based on this suggestion.
> >    However, according to the test result, this prototype does not inhibit
> >    the cross CPU wakeup and did not bring improvement. Because the cost
> >    to find an idle CPU is small in the problematic scenario. The root
> >    cause of the problem is a race condition between scanning for an idle
> >    CPU and task enqueue(please refer to the commit log in PATCH 2/2).
> >    So v3 does not change the core logic of v2, with some refinement based
> >    on Peter's suggestion.
> > 
> > 2. Simplify the logic to record the task duration per Peter and Abel's suggestion.
> > 
> > This change brings overall improvement on some microbenchmarks, both on
> > Intel and AMD platforms.
> > 
> > v3: https://lore.kernel.org/lkml/cover.1669862147.git.yu.c.chen@intel.com/
> > v2: https://lore.kernel.org/all/cover.1666531576.git.yu.c.chen@intel.com/
> > v1: https://lore.kernel.org/lkml/20220915165407.1776363-1-yu.c.chen@intel.com/
> > 
> > Chen Yu (2):
> >   sched/fair: Introduce short duration task check
> >   sched/fair: Choose the CPU where short task is running during wake up
> > 
> >  include/linux/sched.h   |  3 +++
> >  kernel/sched/core.c     |  2 ++
> >  kernel/sched/debug.c    |  1 +
> >  kernel/sched/fair.c     | 32 ++++++++++++++++++++++++++++++++
> >  kernel/sched/features.h |  1 +
> >  5 files changed, 39 insertions(+)
> > 
> 
> All numbers are with turbo and C2 enabled. I wonder if the
> check "(5 * nr < 3 * sd->span_weight)" in v2 helped workloads
> like tbench and SpecJBB. I'll queue some runs with the condition
> added back and separate run with turbo and C2 disabled to see
> if they helps. I'll update the thread once the results are in.
Thanks for helping to check whether the nr check in v2 could bring the improvement
back. However, Peter seems to have concerns regarding the nr check; I'll
think about it more.

thanks,
Chenyu
> 
> --
> Thanks and Regards,
> Prateek

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check
  2022-12-16  6:11 ` [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check Chen Yu
@ 2023-01-05 11:33   ` Dietmar Eggemann
  2023-01-06  8:34     ` Chen Yu
  2023-01-16 10:33     ` Peter Zijlstra
  0 siblings, 2 replies; 19+ messages in thread
From: Dietmar Eggemann @ 2023-01-05 11:33 UTC (permalink / raw)
  To: Chen Yu, Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Steven Rostedt,
	Ben Segall, Daniel Bristot de Oliveira, Valentin Schneider,
	Hillf Danton, Honglei Wang, Len Brown, Chen Yu, Tianchen Ding,
	Joel Fernandes, Josh Don, linux-kernel

On 16/12/2022 07:11, Chen Yu wrote:

[...]

> @@ -5995,6 +6005,18 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  
>  static void set_next_buddy(struct sched_entity *se);
>  
> +static inline void dur_avg_update(struct task_struct *p, bool task_sleep)
> +{
> +	u64 dur;
> +
> +	if (!task_sleep)
> +		return;
> +
> +	dur = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime_vol;
> +	p->se.prev_sum_exec_runtime_vol = p->se.sum_exec_runtime;

Shouldn't se->prev_sum_exec_runtime_vol be set in enqueue_task_fair()
and not in dequeue_task_fair()->dur_avg_update()? Otherwise `dur` will
contain sleep time.

Like we do for se->prev_sum_exec_runtime in set_next_entity() but for
one `set_next_entity()-put_prev_entity()` run section.

AFAICS, you want to measure the exec_runtime sum over all run sections
between enqueue and dequeue.

> +	update_avg(&p->se.dur_avg, dur);
> +}
> +

[...]


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check
  2023-01-05 11:33   ` Dietmar Eggemann
@ 2023-01-06  8:34     ` Chen Yu
  2023-01-06 11:28       ` Dietmar Eggemann
  2023-01-16 10:33     ` Peter Zijlstra
  1 sibling, 1 reply; 19+ messages in thread
From: Chen Yu @ 2023-01-06  8:34 UTC (permalink / raw)
  To: Dietmar Eggemann
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Steven Rostedt,
	Ben Segall, Daniel Bristot de Oliveira, Valentin Schneider,
	Hillf Danton, Honglei Wang, Len Brown, Chen Yu, Tianchen Ding,
	Joel Fernandes, Josh Don, linux-kernel

Hi Dietmar,
thanks for reviewing the patch!
On 2023-01-05 at 12:33:16 +0100, Dietmar Eggemann wrote:
> On 16/12/2022 07:11, Chen Yu wrote:
> 
> [...]
> 
> > @@ -5995,6 +6005,18 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >  
> >  static void set_next_buddy(struct sched_entity *se);
> >  
> > +static inline void dur_avg_update(struct task_struct *p, bool task_sleep)
> > +{
> > +	u64 dur;
> > +
> > +	if (!task_sleep)
> > +		return;
> > +
> > +	dur = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime_vol;
> > +	p->se.prev_sum_exec_runtime_vol = p->se.sum_exec_runtime;
> 
> Shouldn't se->prev_sum_exec_runtime_vol be set in enqueue_task_fair()
> and not in dequeue_task_fair()->dur_avg_update()? Otherwise `dur` will
> contain sleep time.
> 
After task p is dequeued, p's sum_exec_runtime will not increase.
Only when task p is switched in again will p's sum_exec_runtime continue to
increase. So dur should not include the sleep time, because we subtract
snapshots of sum_exec_runtime rather than of rq->clock_task. Not sure if I understand
this correctly?

My original thought was to record the average run time of every section,
considering only the case where the task voluntarily relinquishes the CPU.
For example, suppose on CPU1, tasks p1 and p2 run alternately:

 --------------------> time

 | p1 runs 1ms | p2 preempt p1 | p1 switch in, runs 0.5ms and blocks |
               ^               ^                                     ^
 |_____________|               |_____________________________________|
                                                                     ^
                                                                     |
                                                                  p1 dequeued

p1's duration in one section is (1 + 0.5) ms, because if p2 did not
preempt p1, p1 could have run for 1.5 ms. This reflects the nature of a task:
how long it wishes to run at most.

> Like we do for se->prev_sum_exec_runtime in set_next_entity() but for
> one `set_next_entity()-put_prev_entity()` run section.
> 
> AFAICS, you want to measure the exec_runtime sum over all run sections
> between enqueue and dequeue.
Yes, we try to record the 'decayed' average exec_runtime of each section.
Say task p runs for a ms, then p is dequeued and blocks for b ms, and then
runs for c ms; its average duration is 0.875 * a + 0.125 * c, which is
what update_avg() does.
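
For reference, update_avg() in kernel/sched/fair.c is roughly the
following exponentially weighted moving average (shown as a sketch; see
the tree for the exact definition):

	static void update_avg(u64 *avg, u64 sample)
	{
		s64 diff = sample - *avg;

		*avg += diff / 8;
	}

i.e. new_avg = old_avg + (sample - old_avg) / 8
            = 0.875 * old_avg + 0.125 * sample.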

thanks,
Chenyu

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check
  2023-01-06  8:34     ` Chen Yu
@ 2023-01-06 11:28       ` Dietmar Eggemann
  2023-01-06 14:29         ` Chen Yu
  0 siblings, 1 reply; 19+ messages in thread
From: Dietmar Eggemann @ 2023-01-06 11:28 UTC (permalink / raw)
  To: Chen Yu
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Steven Rostedt,
	Ben Segall, Daniel Bristot de Oliveira, Valentin Schneider,
	Hillf Danton, Honglei Wang, Len Brown, Chen Yu, Tianchen Ding,
	Joel Fernandes, Josh Don, linux-kernel

On 06/01/2023 09:34, Chen Yu wrote:
> Hi Dietmar,
> thanks for reviewing the patch!
> On 2023-01-05 at 12:33:16 +0100, Dietmar Eggemann wrote:
>> On 16/12/2022 07:11, Chen Yu wrote:
>>
>> [...]
>>
>>> @@ -5995,6 +6005,18 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>>>  
>>>  static void set_next_buddy(struct sched_entity *se);
>>>  
>>> +static inline void dur_avg_update(struct task_struct *p, bool task_sleep)
>>> +{
>>> +	u64 dur;
>>> +
>>> +	if (!task_sleep)
>>> +		return;
>>> +
>>> +	dur = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime_vol;
>>> +	p->se.prev_sum_exec_runtime_vol = p->se.sum_exec_runtime;
>>
>> Shouldn't se->prev_sum_exec_runtime_vol be set in enqueue_task_fair()
>> and not in dequeue_task_fair()->dur_avg_update()? Otherwise `dur` will
>> contain sleep time.
>>
> After the task p is dequeued, p's sum_exec_runtime will not be increased.

True.

> Unless task p is switched in again, p's sum_exec_runtime will continue to
> increase. So dur should not include the sleep time, because we subtract

Not sure I get this sentence? p's se->sum_exec_runtime will only
increase if p is current, so running?

> between the sum_exec_runtime rather than rq->clock_task. Not sure if I understand
> this correctly?

No, you're right. We're not dealing with time snapshots but rather with
sum_exec_runtime snapshots. So the value will not change between dequeue
and the next enqueue.

e ... enqueue_task_fair()
d ... dequeue_task_fair()
s ... set_next_entity()
p ... put_prev_entity()
u ... update_curr_fair()->update_curr()

p1:

---|---||--|--|---|--|--||---
   d   es  u  p   s  u  pd

   ^   ^
   |   |
  (A) (B)

Same se->prev_sum_exec_runtime_vol value in (A) and (B).

> My original thought was to record the average run time of every section,
> considering only the case where the task voluntarily relinquishes the CPU.
> For example, suppose on CPU1, tasks p1 and p2 run alternately:
> 
>  --------------------> time
> 
>  | p1 runs 1ms | p2 preempt p1 | p1 switch in, runs 0.5ms and blocks |
>                ^               ^                                     ^
>  |_____________|               |_____________________________________|
>                                                                      ^
>                                                                      |
>                                                                   p1 dequeued
> 
> p1's duration in one section is (1 + 0.5)ms. Because if p2 does not
> preempt p1, p1 can run 1.5ms. This reflects the nature of a task,
> how long it wishes to run at most.
> 
>> Like we do for se->prev_sum_exec_runtime in set_next_entity() but for
>> one `set_next_entity()-put_prev_entity()` run section.
>>
>> AFAICS, you want to measure the exec_runtime sum over all run sections
>> between enqueue and dequeue.
> Yes, we tried to record the 'decayed' average exec_runtime for each section.
> Say, task p runs for a ms , then p is dequeued and blocks for b ms, and then
> runs for c ms, its average duration is 0.875 * a + 0.125 * c , which is
> what update_avg() does.

OK.


^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check
  2023-01-06 11:28       ` Dietmar Eggemann
@ 2023-01-06 14:29         ` Chen Yu
  0 siblings, 0 replies; 19+ messages in thread
From: Chen Yu @ 2023-01-06 14:29 UTC (permalink / raw)
  To: Dietmar Eggemann
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Steven Rostedt,
	Ben Segall, Daniel Bristot de Oliveira, Valentin Schneider,
	Hillf Danton, Honglei Wang, Len Brown, Chen Yu, Tianchen Ding,
	Joel Fernandes, Josh Don, linux-kernel

On 2023-01-06 at 12:28:26 +0100, Dietmar Eggemann wrote:
> On 06/01/2023 09:34, Chen Yu wrote:
> > Hi Dietmar,
> > thanks for reviewing the patch!
> > On 2023-01-05 at 12:33:16 +0100, Dietmar Eggemann wrote:
> >> On 16/12/2022 07:11, Chen Yu wrote:
> >>
> >> [...]
> >>
> >>> @@ -5995,6 +6005,18 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >>>  
> >>>  static void set_next_buddy(struct sched_entity *se);
> >>>  
> >>> +static inline void dur_avg_update(struct task_struct *p, bool task_sleep)
> >>> +{
> >>> +	u64 dur;
> >>> +
> >>> +	if (!task_sleep)
> >>> +		return;
> >>> +
> >>> +	dur = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime_vol;
> >>> +	p->se.prev_sum_exec_runtime_vol = p->se.sum_exec_runtime;
> >>
> >> Shouldn't se->prev_sum_exec_runtime_vol be set in enqueue_task_fair()
> >> and not in dequeue_task_fair()->dur_avg_update()? Otherwise `dur` will
> >> contain sleep time.
> >>
> > After the task p is dequeued, p's sum_exec_runtime will not be increased.
> 
> True.
> 
> > Unless task p is switched in again, p's sum_exec_runtime will continue to
> > increase. So dur should not include the sleep time, because we subtract
> 
> Not sure I get this sentence? p's se->sum_exec_runtime will only
> increase if p is current, so running?
>
Yes, it was a typo, should be "will not continue to increase".
> > between the sum_exec_runtime rather than rq->clock_task. Not sure if I understand
> > this correctly?
> 
> No, you're right. We're not dealing with time snapshots but rather with
> sum_exec_runtime snapshots. So the value will not change between dequeue
> and the next enqueue.
> 
> e ... enqueue_task_fair()
> d ... dequeue_task_fair()
> s ... set_next_entity()
> p ... put_prev_entity()
> u ... update_curr_fair()->update_curr()
> 
> p1:
> 
> ---|---||--|--|---|--|--||---
>    d   es  u  p   s  u  pd
> 
>    ^   ^
>    |   |
>   (A) (B)
> 
> Same se->prev_sum_exec_runtime_vol value in (A) and (B).
> 
Yes.
> > My original thought was to record the average run time of every section,
> > considering only the case where the task voluntarily relinquishes the CPU.
> > For example, suppose on CPU1, tasks p1 and p2 run alternately:
> > 
> >  --------------------> time
> > 
> >  | p1 runs 1ms | p2 preempt p1 | p1 switch in, runs 0.5ms and blocks |
> >                ^               ^                                     ^
> >  |_____________|               |_____________________________________|
> >                                                                      ^
> >                                                                      |
> >                                                                   p1 dequeued
> > 
> > p1's duration in one section is (1 + 0.5)ms. Because if p2 does not
> > preempt p1, p1 can run 1.5ms. This reflects the nature of a task,
> > how long it wishes to run at most.
> > 
> >> Like we do for se->prev_sum_exec_runtime in set_next_entity() but for
> >> one `set_next_entity()-put_prev_entity()` run section.
> >>
> >> AFAICS, you want to measure the exec_runtime sum over all run sections
> >> between enqueue and dequeue.
> > Yes, we tried to record the 'decayed' average exec_runtime for each section.
> > Say, task p runs for a ms , then p is dequeued and blocks for b ms, and then
> > runs for c ms, its average duration is 0.875 * a + 0.125 * c , which is
> > what update_avg() does.
> 
> OK.
> 
I'll add more descriptions in the next version to avoid confusion.

thanks,
Chenyu

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check
  2023-01-05 11:33   ` Dietmar Eggemann
  2023-01-06  8:34     ` Chen Yu
@ 2023-01-16 10:33     ` Peter Zijlstra
  2023-01-18 17:48       ` Dietmar Eggemann
  2023-01-19 14:29       ` Chen Yu
  1 sibling, 2 replies; 19+ messages in thread
From: Peter Zijlstra @ 2023-01-16 10:33 UTC (permalink / raw)
  To: Dietmar Eggemann
  Cc: Chen Yu, Vincent Guittot, Tim Chen, Mel Gorman, Juri Lelli,
	Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Honglei Wang, Len Brown, Chen Yu, Tianchen Ding, Joel Fernandes,
	Josh Don, linux-kernel

On Thu, Jan 05, 2023 at 12:33:16PM +0100, Dietmar Eggemann wrote:
> On 16/12/2022 07:11, Chen Yu wrote:
> 
> [...]
> 
> > @@ -5995,6 +6005,18 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >  
> >  static void set_next_buddy(struct sched_entity *se);
> >  
> > +static inline void dur_avg_update(struct task_struct *p, bool task_sleep)
> > +{
> > +	u64 dur;
> > +
> > +	if (!task_sleep)
> > +		return;
> > +
> > +	dur = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime_vol;
> > +	p->se.prev_sum_exec_runtime_vol = p->se.sum_exec_runtime;
> 
> Shouldn't se->prev_sum_exec_runtime_vol be set in enqueue_task_fair()
> and not in dequeue_task_fair()->dur_avg_update()? Otherwise `dur` will
> contain sleep time.
> 
> Like we do for se->prev_sum_exec_runtime in set_next_entity() but for
> one `set_next_entity()-put_prev_entity()` run section.
> 
> AFAICS, you want to measure the exec_runtime sum over all run sections
> between enqueue and dequeue.

You were thinking of the dynamic PELT window size thread? (which is what
I had to think of when I looked at this).

I think we can still do that with this prev_sum_exec_runtime_vol (can't
say I love the name though). At any point (assuming we update
sum_exec_runtime) sum_exec_runtime - prev_sum_exec_runtime_vol is the
duration of the current activation.
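
As an illustration only (a sketch, not something in the posted series),
reading that duration at any point would then be:

    /* assumes update_curr() has just refreshed se->sum_exec_runtime */
    static inline u64 curr_activation_runtime(struct sched_entity *se)
    {
            return se->sum_exec_runtime - se->prev_sum_exec_runtime_vol;
    }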

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up
  2022-12-30  2:47   ` Chen Yu
@ 2023-01-16 10:53     ` K Prateek Nayak
  2023-01-19 14:58       ` Chen Yu
  0 siblings, 1 reply; 19+ messages in thread
From: K Prateek Nayak @ 2023-01-16 10:53 UTC (permalink / raw)
  To: Chen Yu
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel

Hello Chenyu,

On 12/30/2022 8:17 AM, Chen Yu wrote:
> On 2022-12-29 at 12:46:59 +0530, K Prateek Nayak wrote:
>> Hello Chenyu,
>>
>> Including the detailed results from testing below.
>>
>> tl;dr
>>
>> o There seems to be 3 noticeable regressions:
>>   - tbench for lower number of clients. The schedstat data shows
>>     an increase in wait time.
>>   - SpecJBB MultiJVM performance drops as the workload prefers
>>     an idle CPU over a busy one.
>>   - Unixbench-pipe benchmark performance drops.
>>
>> o Most benchmark numbers remain same.
>>
>> o Small gains seen for ycsb-mongodb and unixbench-syscall.
>>

Please ignore the last test results. The tests did not use
exactly the same config for the tip and sis_short kernels, which led
to more overhead in the network stack for the sis_short kernel,
and the longer wait time seen in sched_stat data for tbench
was a result of each loop taking longer to finish.

I reran the benchmarks on the latest tip making sure the
configs are identical this time and only noticed one
regression in Spec-JBB Critical-jOPS.

tl;dr

o tbench sees good improvement in the throughput when
  the machine is fully loaded and beyond.
o Some unixbench test cases show improvement as well as
  ycsb-mongodb in NPS2 and NPS4 mode.
o Most benchmark results are same.
o SpecJBB Critical-jOPS are still down. I'll share full
  schedstat dump for tasks separately with you.

Following are the detailed results from a dual socket
Zen3 machine:

Following are the Kernel versions:

tip:            6.2.0-rc2 tip:sched/core at
                commit: bbd0b031509b "sched/rseq: Fix concurrency ID handling of usermodehelper kthreads"
sis_short:      tip + series

~~~~~~~~~~~~~
~ hackbench ~
~~~~~~~~~~~~~

o NPS1

Test:			tip			sis_short
 1-groups:	   4.36 (0.00 pct)	   4.33 (0.68 pct)
 2-groups:	   5.17 (0.00 pct)	   5.20 (-0.58 pct)
 4-groups:	   4.17 (0.00 pct)	   4.19 (-0.47 pct)
 8-groups:	   4.64 (0.00 pct)	   4.71 (-1.50 pct)
16-groups:	   5.43 (0.00 pct)	   6.60 (-21.54 pct)    * [Machine is overloaded with avg 2.5 tasks per run queue]
16-groups:	   5.90 (0.00 pct)	   5.87 (0.50 pct)    [Verification Run]

o NPS2

Test:			tip			sis_short
 1-groups:	   4.43 (0.00 pct)	   4.18 (5.64 pct)
 2-groups:	   4.61 (0.00 pct)	   4.99 (-8.24 pct)     *
 2-groups:	   4.72 (0.00 pct)	   4.74 (-0.42 pct)     [Verification Run]
 4-groups:	   4.25 (0.00 pct)	   4.20 (1.17 pct)
 8-groups:	   4.91 (0.00 pct)	   5.08 (-3.46 pct)
16-groups:	   5.84 (0.00 pct)	   5.84 (0.00 pct)

o NPS4

Test:			tip			sis_short
 1-groups:	   4.34 (0.00 pct)	   4.39 (-1.15 pct)
 2-groups:	   4.64 (0.00 pct)	   4.97 (-7.11 pct)     *
 2-groups:	   4.86 (0.00 pct)	   4.77 (1.85 pct)     [Verification Run]
 4-groups:	   4.20 (0.00 pct)	   4.26 (-1.42 pct)
 8-groups:	   5.21 (0.00 pct)	   5.19 (0.38 pct)
16-groups:	   6.24 (0.00 pct)	   5.93 (4.96 pct)

~~~~~~~~~~~~
~ schbench ~
~~~~~~~~~~~~

$ schbench -m 2 -t <workers> -s 30

o NPS1

#workers:	tip			sis_short
  1:	  36.00 (0.00 pct)	  26.00 (27.77 pct)
  2:	  37.00 (0.00 pct)	  36.00 (2.70 pct)
  4:	  37.00 (0.00 pct)	  37.00 (0.00 pct)
  8:	  47.00 (0.00 pct)	  49.00 (-4.25 pct)
 16:	  64.00 (0.00 pct)	  69.00 (-7.81 pct)
 32:	 109.00 (0.00 pct)	 118.00 (-8.25 pct)     *
 32:	 117.00 (0.00 pct)	 116.00 (0.86 pct)      [Verification Run]
 64:	 222.00 (0.00 pct)	 219.00 (1.35 pct)
128:	 515.00 (0.00 pct)	 513.00 (0.38 pct)
256:	 39744.00 (0.00 pct)	 35776.00 (9.98 pct)
512:	 81280.00 (0.00 pct)	 76672.00 (5.66 pct)

o NPS2

#workers:	tip			sis_short
  1:	  27.00 (0.00 pct)	  25.00 (7.40 pct)
  2:	  31.00 (0.00 pct)	  31.00 (0.00 pct)
  4:	  38.00 (0.00 pct)	  37.00 (2.63 pct)
  8:	  50.00 (0.00 pct)	  55.00 (-10.00 pct)
 16:	  66.00 (0.00 pct)	  66.00 (0.00 pct)
 32:	 116.00 (0.00 pct)	 119.00 (-2.58 pct)
 64:	 210.00 (0.00 pct)	 219.00 (-4.28 pct)
128:	 523.00 (0.00 pct)	 497.00 (4.97 pct)
256:	 44864.00 (0.00 pct)	 46656.00 (-3.99 pct)
512:	 78464.00 (0.00 pct)	 79488.00 (-1.30 pct)

o NPS4

#workers:	tip			sis_short
  1:	  32.00 (0.00 pct)	  34.00 (-6.25 pct)
  2:	  32.00 (0.00 pct)	  35.00 (-9.37 pct)
  4:	  34.00 (0.00 pct)	  39.00 (-14.70 pct)
  8:	  58.00 (0.00 pct)	  51.00 (12.06 pct)
 16:	  67.00 (0.00 pct)	  70.00 (-4.47 pct)
 32:	 118.00 (0.00 pct)	 123.00 (-4.23 pct)
 64:	 224.00 (0.00 pct)	 227.00 (-1.33 pct)
128:	 533.00 (0.00 pct)	 537.00 (-0.75 pct)
256:	 43456.00 (0.00 pct)	 48192.00 (-10.89 pct)  * [Machine overloaded - avg 2 tasks per run queue]
256:	 46911.59 (0.00 pct)	 47163.28 (-0.53 pct)  	[Verification Run]
512:	 78976.00 (0.00 pct)	 78976.00 (0.00 pct)


~~~~~~~~~~
~ tbench ~
~~~~~~~~~~

o NPS1

Clients:	tip			sis_short
    1	 539.96 (0.00 pct)	 532.63 (-1.35 pct)
    2	 1068.21 (0.00 pct)	 1057.35 (-1.01 pct)
    4	 1994.76 (0.00 pct)	 2015.05 (1.01 pct)
    8	 3602.30 (0.00 pct)	 3598.70 (-0.09 pct)
   16	 6075.49 (0.00 pct)	 6019.96 (-0.91 pct)
   32	 11641.07 (0.00 pct)	 11774.03 (1.14 pct)
   64	 21529.16 (0.00 pct)	 21392.97 (-0.63 pct)
  128	 30852.92 (0.00 pct)	 31355.87 (1.63 pct)
  256	 51901.20 (0.00 pct)	 54896.08 (5.77 pct)
  512	 46797.40 (0.00 pct)	 55090.07 (17.72 pct)
 1024	 46057.28 (0.00 pct)	 54374.89 (18.05 pct)

o NPS2

Clients:	tip			sis_short
    1	 536.11 (0.00 pct)	 542.14 (1.12 pct)
    2	 1044.58 (0.00 pct)	 1057.98 (1.28 pct)
    4	 2043.92 (0.00 pct)	 1981.64 (-3.04 pct)
    8	 3572.50 (0.00 pct)	 3579.03 (0.18 pct)
   16	 6040.97 (0.00 pct)	 5946.20 (-1.56 pct)
   32	 10794.10 (0.00 pct)	 11348.54 (5.13 pct)
   64	 20905.89 (0.00 pct)	 21340.31 (2.07 pct)
  128	 30885.39 (0.00 pct)	 30834.59 (-0.16 pct)
  256	 48901.25 (0.00 pct)	 51905.17 (6.14 pct)
  512	 49673.91 (0.00 pct)	 53608.18 (7.92 pct)
 1024	 47626.34 (0.00 pct)	 53396.88 (12.11 pct)

o NPS4

Clients:	tip			sis_short
    1	 544.91 (0.00 pct)	 542.78 (-0.39 pct)
    2	 1046.49 (0.00 pct)	 1057.16 (1.01 pct)
    4	 2007.11 (0.00 pct)	 2001.21 (-0.29 pct)
    8	 3590.66 (0.00 pct)	 3427.33 (-4.54 pct)
   16	 5956.60 (0.00 pct)	 5898.69 (-0.97 pct)
   32	 10431.73 (0.00 pct)	 10732.48 (2.88 pct)
   64	 21563.37 (0.00 pct)	 19141.76 (-11.23 pct)  *
   64	 21140.75 (0.00 pct)	 20883.78 (-1.21 pct)  [Verification Run]
  128	 30352.16 (0.00 pct)	 28757.44 (-5.25 pct)   *
  128	 29537.66 (0.00 pct)	 29488.04 (-0.16 pct)   [Verification Run]
  256	 49504.51 (0.00 pct)	 52492.40 (6.03 pct)
  512	 44916.61 (0.00 pct)	 52746.38 (17.43 pct)
 1024	 49986.21 (0.00 pct)	 53169.62 (6.36 pct)


~~~~~~~~~~
~ stream ~
~~~~~~~~~~

o NPS1

- 10 Runs:

Test:		tip			sis_short
 Copy:   336796.79 (0.00 pct)    336858.84 (0.01 pct)
Scale:   212768.55 (0.00 pct)    213061.98 (0.13 pct)
  Add:   244000.34 (0.00 pct)    237874.08 (-2.51 pct)
Triad:   255042.52 (0.00 pct)    250122.34 (-1.92 pct)

- 100 Runs:

Test:		tip			sis_short
 Copy:	 335938.02 (0.00 pct)	 324841.05 (-3.30 pct)
Scale:	 212597.92 (0.00 pct)	 211516.93 (-0.50 pct)
  Add:	 248294.62 (0.00 pct)	 241706.28 (-2.65 pct)
Triad:	 258400.88 (0.00 pct)	 251427.43 (-2.69 pct)

o NPS2

- 10 Runs:

Test:		tip			sis_short
 Copy:   340709.53 (0.00 pct)    338797.30 (-0.56 pct)
Scale:   216849.08 (0.00 pct)    218167.05 (0.60 pct)
  Add:   257761.46 (0.00 pct)    258717.38 (0.37 pct)
Triad:   268615.11 (0.00 pct)    270284.11 (0.62 pct)

- 100 Runs:

Test:		tip			sis_short
 Copy:	 326385.13 (0.00 pct)	 314299.63 (-3.70 pct)
Scale:	 216440.37 (0.00 pct)	 213573.71 (-1.32 pct)
  Add:	 255062.22 (0.00 pct)	 250837.63 (-1.65 pct)
Triad:	 265442.03 (0.00 pct)	 259851.69 (-2.10 pct)

o NPS4

- 10 Runs:

Test:		tip			sis_short
 Copy:   363927.86 (0.00 pct)    364556.47 (0.17 pct)
Scale:   238190.49 (0.00 pct)    245339.08 (3.00 pct)
  Add:   262806.49 (0.00 pct)    270349.31 (2.87 pct)
Triad:   276492.33 (0.00 pct)    274536.47 (-0.70 pct)

- 100 Runs:

Test:		tip			sis_short
 Copy:   369197.55 (0.00 pct)    365775.06 (-0.92 pct)
Scale:   250508.46 (0.00 pct)    251164.01 (0.26 pct)
  Add:   267792.19 (0.00 pct)    268477.42 (0.25 pct)
Triad:   280010.98 (0.00 pct)    272448.39 (-2.70 pct)


~~~~~~~~~~~~~~~~
~ ycsb-mongodb ~
~~~~~~~~~~~~~~~~

o NPS1

tip:                    131328.67 (var: 2.97%)
sis_short:              130867.33 (var: 3.23%)  (-0.35%)

o NPS2:

tip:                    132819.67 (var: 0.85%)
sis_short:              135295.33 (var: 2.02%)  (+1.86%)

o NPS4:

tip:                    134130.00 (var: 4.12%)
sis_short:              138018.67 (var: 1.92%)  (+2.89%)


~~~~~~~~~~~~~~~~~~~~~~~~~~~
~ SPECjbb MultiJVM - NPS1 ~
~~~~~~~~~~~~~~~~~~~~~~~~~~~

Kernel			  tip		 sis_short
Max-jOPS		100.00%		  100.00%
Critical-jOPS		100.00%		   95.62%	*

~~~~~~~~~~~~~
~ unixbench ~
~~~~~~~~~~~~~

o NPS1

Test			Metric	  Parallelism			tip		      sis_short
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-1      48518455.50 (   0.00%)    48698507.20 (   0.37%)
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-512  6268185467.60 (   0.00%)  6274727387.00 (   0.10%)
unixbench-syscall       Amean     unixbench-syscall-1        2685321.17 (   0.00%)     2701231.47 *   0.59%*
unixbench-syscall       Amean     unixbench-syscall-512      7291476.20 (   0.00%)     7886916.33 *   8.17%*
unixbench-pipe          Hmean     unixbench-pipe-1           2480858.53 (   0.00%)     2585426.98 *   4.22%*
unixbench-pipe          Hmean     unixbench-pipe-512       300739256.62 (   0.00%)   306860971.25 *   2.04%*
unixbench-spawn         Hmean     unixbench-spawn-1             4358.14 (   0.00%)        4393.55 (   0.81%)
unixbench-spawn         Hmean     unixbench-spawn-512          76497.32 (   0.00%)       76175.34 *  -0.42%*
unixbench-execl         Hmean     unixbench-execl-1             4147.12 (   0.00%)        4164.73 *   0.42%*
unixbench-execl         Hmean     unixbench-execl-512          12435.26 (   0.00%)       12669.59 (   1.88%)

o NPS2

Test			Metric	  Parallelism			tip		      sis_short
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-1      48872335.50 (   0.00%)    48880544.40 (   0.02%)
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-512  6264134378.20 (   0.00%)  6269767014.50 (   0.09%)
unixbench-syscall       Amean     unixbench-syscall-1        2683903.13 (   0.00%)     2698851.60 *   0.56%*
unixbench-syscall       Amean     unixbench-syscall-512      7746773.60 (   0.00%)     7767971.50 (   0.27%)
unixbench-pipe          Hmean     unixbench-pipe-1           2476724.23 (   0.00%)     2587534.80 *   4.47%*
unixbench-pipe          Hmean     unixbench-pipe-512       300277350.41 (   0.00%)   306600469.40 *   2.11%*
unixbench-spawn         Hmean     unixbench-spawn-1             5026.50 (   0.00%)        4765.11 (  -5.20%)	*
unixbench-spawn         Hmean     unixbench-spawn-1             4965.80 (   0.00%)        5283.00 (   6.40%)	[Verification Run]
unixbench-spawn         Hmean     unixbench-spawn-512          80375.59 (   0.00%)       79331.12 *  -1.30%*
unixbench-execl         Hmean     unixbench-execl-1             4151.70 (   0.00%)        4139.06 (  -0.30%)
unixbench-execl         Hmean     unixbench-execl-512          13605.15 (   0.00%)       11898.26 ( -12.55%)	*
unixbench-execl         Hmean     unixbench-execl-512          12617.90 (   0.00%)       13735.00 (   8.85%)	[Verification Run]

o NPS4

Test			Metric	  Parallelism			tip		      sis_short
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-1      48506771.20 (   0.00%)    48886194.50 (   0.78%)
unixbench-dhry2reg      Hmean     unixbench-dhry2reg-512  6280954362.50 (   0.00%)  6289332433.50 (   0.13%)
unixbench-syscall       Amean     unixbench-syscall-1        2687259.30 (   0.00%)     2700170.93 *   0.48%*
unixbench-syscall       Amean     unixbench-syscall-512      7350275.67 (   0.00%)     7858736.83 *   6.92%*
unixbench-pipe          Hmean     unixbench-pipe-1           2478893.01 (   0.00%)     2585741.42 *   4.31%*
unixbench-pipe          Hmean     unixbench-pipe-512       301830155.61 (   0.00%)   307556537.55 *   1.90%*
unixbench-spawn         Hmean     unixbench-spawn-1             5208.55 (   0.00%)        5280.85 (   1.39%)
unixbench-spawn         Hmean     unixbench-spawn-512          80745.79 (   0.00%)       80411.55 *  -0.41%*
unixbench-execl         Hmean     unixbench-execl-1             4072.72 (   0.00%)        4152.37 *   1.96%*
unixbench-execl         Hmean     unixbench-execl-512          13746.56 (   0.00%)       13247.30 (  -3.63%)	*
unixbench-execl         Hmean     unixbench-execl-512          13797.40 (   0.00%)       13624.00 (  -1.25%)	[Verification Run]

> [..snip..]
>>
>> All numbers are with turbo and C2 enabled. I wonder if the
>> check "(5 * nr < 3 * sd->span_weight)" in v2 helped workloads
>> like tbench and SpecJBB. I'll queue some runs with the condition
>> added back and separate run with turbo and C2 disabled to see
>> if they helps. I'll update the thread once the results are in.
> Thanks for helping check if the nr part in v2 could bring the improvement
> back. However Peter seems to have concern regarding the nr check, I'll
> think about it more.

SpecJBB Critical-jOPS performance is known to suffer when tasks
queue behind each other. I'll share the data separately. I do see
the average wait_sum go up 1.3%. The Max-jOPS throughput, however,
is identical on both kernels, which means sis_short does not affect
the overall throughput; only for the critical jobs do we see
the regression, due to possible queuing of tasks.

> 
> [..snip..]
>
--
Thanks and Regards,
Prateek

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up
  2022-12-16  6:08 [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
                   ` (2 preceding siblings ...)
  2022-12-29  7:16 ` [RFC PATCH v4 0/2] " K Prateek Nayak
@ 2023-01-17  2:53 ` Yicong Yang
  2023-01-19 16:11   ` Chen Yu
  3 siblings, 1 reply; 19+ messages in thread
From: Yicong Yang @ 2023-01-17  2:53 UTC (permalink / raw)
  To: Chen Yu
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	yangyicong, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Honglei Wang, Len Brown, Chen Yu, Tianchen Ding, Joel Fernandes,
	Josh Don, linux-kernel

Hi Chen Yu,

On 2022/12/16 14:08, Chen Yu wrote:
> The main purpose of this change is to avoid too many cross CPU
> wake up when it is unnecessary. The frequent cross CPU wake up
> brings significant damage to some workloads, especially on high
> core count systems.
> 
> This patch set inhibits the cross CPU wake-up by placing the wakee
> on waking CPU or previous CPU, if both the waker and wakee are
> short-duration tasks.
> 
> The first patch is to introduce the definition of a short-duration
> task. The second patch leverages the first patch to choose a local
> or previous CPU for wakee.
> 

Tested with tbench and netperf locally and MySQL remotely. Unluckily
cannot reproduce the improvement and see some regression. Detailed
information below. The tests are done based on 6.2-rc1 on a 2x64 CPUs
Kunpeng 920 server with 4 nodes and no SMT, CONFIG_SCHED_CLUSTER
enabled.

For tbench and netperf tested on single node and two nodes on one socket.

tbench(node 0)
             vanilla                   sis_short
  1:        326.2950        312.3700 (   -4.27%)
  4:       1299.7700       1241.3400 (   -4.50%)
  8:       2613.2900       2476.4700 (   -5.24%)
 16:       5091.9400       5096.3100 (    0.09%)
 32:      10104.1000       9905.8500 (   -1.96%)
 64:       7558.8500       7826.2000 (    3.54%)
128:       6641.4000       6608.6600 (   -0.49%)

tbench(node 0-1)
             vanilla                   sis_short
  1:        328.0880        313.5830 (   -4.42%)
  4:       1318.4700       1237.2100 (   -6.16%)
  8:       2546.1700       2372.5900 (   -6.82%)
 16:       5147.1700       4651.3400 (   -9.63%)
 32:       9275.1100       8772.0100 (   -5.42%)
 64:      17526.6000      17918.8000 (    2.23%)
128:      13108.1000      13317.0000 (    1.59%)

netperf TCP_RR(node 0)
              vanilla                   sis_short
  1:      76444.3100      72681.4200 (   -4.92%)
  4:      77615.7475      73436.4575 (   -5.38%)
  8:      78066.5887      69473.9312 (  -11.01%)
 16:      77134.8562      76473.4144 (   -0.86%)
 32:      76626.3322      63785.1650 (  -16.76%)
 64:      29072.7872      24552.9916 (  -15.55%)
128:      11875.6830      10532.4508 (  -11.31%)

netperf TCP_RR(node 0-1)
              vanilla                   sis_short
  1:      77645.3700      72879.3300 (   -6.14%)
  4:      78180.0150      72926.2225 (   -6.72%)
  8:      77878.7675      66724.6137 (  -14.32%)
 16:      77427.6944      65682.6012 (  -15.17%)
 32:      71774.5309      71887.1747 (    0.16%)
 64:      75023.3972      62730.7395 (  -16.39%)
128:      28468.2309      25048.1859 (  -12.01%)

netperf UDP_RR(node 0)
              vanilla                   sis_short
  1:      91614.6400      89437.6600 (   -2.38%)
  4:      91926.3425      90511.9750 (   -1.54%)
  8:      92596.4850      87336.7162 (   -5.68%)
 16:      91803.2525      95212.5069 (    3.71%)
 32:      98482.9212      79793.1803 (  -18.98%)
 64:      36373.5367      30577.2208 (  -15.94%)
128:      14440.2914      12727.7759 (  -11.86%)

netperf UDP_RR(node 0-1)
              vanilla                   sis_short
  1:      92609.3400      90915.8600 (   -1.83%)
  4:      93034.8425      91742.4825 (   -1.39%)
  8:      91963.3950      85801.8250 (   -6.70%)
 16:      91589.4900      83339.7944 (   -9.01%)
 32:      83161.0234      90298.3934 (    8.58%)
 64:      88860.0155      74933.8544 (  -15.67%)
128:      24715.9842      28022.6998 (   13.38%)

MySQL tests are done among two servers with same hardware setup and kernel
updated on both kernel. Network device locates on node 0 and network
interrupts is balanced evenly by irqbalance. No NUMA binding applied.

oltp_read_only
8-TPS		4315.11		4236.45		-1.82%
8-QPS		69041.74	67783.19	-1.82%
8-avg-lat	1.85		1.89		-1.98%
8-95th-lat	2.14		2.24		-4.35%
16-TPS		8315.80		7993.78		-3.87%
16-QPS		133052.87	127900.49	-3.87%
16-avg-lat	1.92		2.00		-3.99%
16-95th-lat	2.36		2.60		-10.17%
24-TPS		11601.29	11387.88	-1.84%
24-QPS		185620.63	182206.08	-1.84%
24-avg-lat	2.07		2.11		-1.94%
24-95th-lat	2.61		2.74		-4.97%
32-TPS		13882.59	13277.71	-4.36%
32-QPS		222121.46	212443.37	-4.36%
32-avg-lat	2.31		2.41		-4.48%
32-95th-lat	2.93		3.38		-15.47%
64-TPS		17558.38	16200.13	-7.74%
64-QPS		280934.13	259202.00	-7.74%
64-avg-lat	3.64		3.95		-8.42%
64-95th-lat	5.12		5.40		-5.53%
96-TPS		21858.44	21104.80	-3.45%
96-QPS		349735.03	337676.78	-3.45%
96-avg-lat	4.39		4.55		-3.57%
96-95th-lat	5.92		6.09		-2.93%
128-TPS		24633.09	24278.48	-1.44%
128-QPS		394129.44	388455.76	-1.44%
128-avg-lat	5.19		5.27		-1.54%
128-95th-lat	6.51		6.79		-4.30%
256-TPS		40793.59	41878.29	2.66%
256-QPS		652697.45	670052.64	2.66%
256-avg-lat	6.27		6.11		2.55%
256-95th-lat	9.67		9.79		-1.21%
512-TPS		50461.71	50593.95	0.26%
512-QPS		807387.36	809503.29	0.26%
512-avg-lat	10.13		10.10		0.30%
512-95th-lat	14.64		14.73		-0.61%
700-TPS		50848.04	50899.43	0.10%
700-QPS		813568.68	814390.94	0.10%
700-avg-lat	13.74		13.73		0.10%
700-95th-lat	18.95		19.06		-0.60%

Further looked into 64 threads since it regresses the most, and mainly looked at the
server (mysqld) side. In this case most of the load will be on node 0 and partly
on node 1. With SIS_SHORT the utilization on node 0 stays almost the same,
but utilization on node 1 remains nearly half compared to the baseline. That
may partly answer why the performance decreases.

mpstat 60s, non-idle%
	vanilla	sis_short	
node 0	94.25	91.67	-2.73%
node 1	46.41	30.03	-35.29%

Previously there was an argument that sync wakeup from interrupts may be incorrect,
and SIS_SHORT seems to consolidate that and ties more tasks to the interrupt node.

SIS_SHORT will likely make short tasks wake up locally but seems to increase the
wait time in this case. schedstat for CPUs on node 0 shows:

                        vanilla               sis_short
time elapsed            60.201085s            60.200395s
[cpu Average, counts/sec]
#0 cpu zeros            0.0                   0.0
#1 sched_yied()         0.03                  0.12
#2 zeros                0.0                   0.0
#3 schedule()           409415.36             340304.86
#4 sched->idle()        125616.92             91078.52
#5 try_to_wake_up()     317404.25             280896.88
#6 try_to_wakeup(local) 115451.87             140567.68
#7 run time             26071494674.87        26164834556.84
#8 wait time            3861592891.75         7361228637.54 (+ 90.63%)
#9 timeslices,          283747.1              249182.34

Since it's a 128 CPU machine the min_granularity is 3000000ns which means a short
task running within 375000ns. Most mysql worker threads' se.dur_avg is around
110000ns and will be regarded as short tasks.
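
For reference, roughly where those numbers come from (simplified from
get_update_sysctl_factor(), assuming the default log scaling; treat this as a
sketch rather than the exact code):

    unsigned int cpus   = min_t(unsigned int, num_online_cpus(), 8);
    unsigned int factor = 1 + ilog2(cpus);     /* 4 once there are >= 8 CPUs */

    /* sysctl_sched_min_granularity = 750000ULL * factor   -> 3000000ns here */
    /* short task threshold in v4   = min_granularity / 8  ->  375000ns      */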

To exclude the influence of Patch 1 (which we'll always record the dur_avg and related
informations), tested 64 threads case with SIS_SHORT enabled or not through debugfs
control and seems Patch 2 mainly dominate the results.

		NO_SIS_SHORT	SIS_SHORT	
TPS		17446.54	16464.83	-5.63%
QPS		279144.61	263437.21	-5.63%
avg lat		3.67		3.89		-5.99%
95th lat	5.09		5.37		-5.50%

Please let me know if you need more information.

Thanks.

> Changes since v3:
> 1. Honglei and Josh have concern that the threshold of short
>    task duration could be too long. Decreased the threshold from
>    sysctl_sched_min_granularity to (sysctl_sched_min_granularity / 8),
>    and the '8' comes from get_update_sysctl_factor().
> 2. Export p->se.dur_avg to /proc/{pid}/sched per Yicong's suggestion.
> 3. Move the calculation of average duration from put_prev_task_fair()
>    to dequeue_task_fair(). Because there is an issue in v3 that,
>    put_prev_task_fair() will not be invoked by pick_next_task_fair()
>    in fast path, thus the dur_avg could not be updated timely.
> 4. Fix the comment in PATCH 2/2, that "WRITE_ONCE(CPU1->ttwu_pending, 1);"
>    on CPU0 is earlier than CPU1 getting "ttwu_list->p0", per Tianchen.
> 5. Move the scan for CPU with short duration task from select_idle_cpu()
>    to select_idle_siblings(), because there is no CPU scan involved, per
>    Yicong.
> 
> Changes since v2:
> 
> 1. Peter suggested comparing the duration of waker and the cost to
>    scan for an idle CPU: If the cost is higher than the task duration,
>    do not waste time finding an idle CPU, choose the local or previous
>    CPU directly. A prototype was created based on this suggestion.
>    However, according to the test result, this prototype does not inhibit
>    the cross CPU wakeup and did not bring improvement. Because the cost
>    to find an idle CPU is small in the problematic scenario. The root
>    cause of the problem is a race condition between scanning for an idle
>    CPU and task enqueue(please refer to the commit log in PATCH 2/2).
>    So v3 does not change the core logic of v2, with some refinement based
>    on Peter's suggestion.
> 
> 2. Simplify the logic to record the task duration per Peter and Abel's suggestion.
> 
> This change brings overall improvement on some microbenchmarks, both on
> Intel and AMD platforms.
> 
> v3: https://lore.kernel.org/lkml/cover.1669862147.git.yu.c.chen@intel.com/
> v2: https://lore.kernel.org/all/cover.1666531576.git.yu.c.chen@intel.com/
> v1: https://lore.kernel.org/lkml/20220915165407.1776363-1-yu.c.chen@intel.com/
> 
> Chen Yu (2):
>   sched/fair: Introduce short duration task check
>   sched/fair: Choose the CPU where short task is running during wake up
> 
>  include/linux/sched.h   |  3 +++
>  kernel/sched/core.c     |  2 ++
>  kernel/sched/debug.c    |  1 +
>  kernel/sched/fair.c     | 32 ++++++++++++++++++++++++++++++++
>  kernel/sched/features.h |  1 +
>  5 files changed, 39 insertions(+)
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check
  2023-01-16 10:33     ` Peter Zijlstra
@ 2023-01-18 17:48       ` Dietmar Eggemann
  2023-01-19 14:29       ` Chen Yu
  1 sibling, 0 replies; 19+ messages in thread
From: Dietmar Eggemann @ 2023-01-18 17:48 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Chen Yu, Vincent Guittot, Tim Chen, Mel Gorman, Juri Lelli,
	Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Honglei Wang, Len Brown, Chen Yu, Tianchen Ding, Joel Fernandes,
	Josh Don, linux-kernel

On 16/01/2023 10:33, Peter Zijlstra wrote:
> On Thu, Jan 05, 2023 at 12:33:16PM +0100, Dietmar Eggemann wrote:
>> On 16/12/2022 07:11, Chen Yu wrote:

[...]

> You were thinking of the dynamic PELT window size thread? (which is what
> I had to think of when I looked at this).

Yes, indeed.

> I think we can still do that with this prev_sum_exec_runtime_vol (can't
> say I love the name though). At any point (assuming we update
> sum_exec_runtime) sum_exec_runtime - prev_sum_exec_runtime_vol is the
> duration of the current activation.

I ran Jankbench with your UTIL_EST_FASTER patch and:

    runtime = curr->se.sum_exec_runtime -
              curr->se.prev_sum_exec_runtime_vol

plus:

    runtime >>= 10

before doing:

    util_est_fast = faster_est_approx(runtime * 2)
                                              ^^^ (boost)

on a Pixel6 and the results look promising:

Max frame duration (ms)

+-------------------+-----------+------------+
|      wa_path      | iteration |   value    |
+-------------------+-----------+------------+
|        base       |    10     | 147.571352 |
|    pelt-hl-m2     |    10     | 119.416351 |
|    pelt-hl-m4     |    10     | 96.473412  |
|  util_est_faster  |    10     | 84.834999  |
+-------------------+-----------+------------+

Mean frame duration (average case)

+---------------+-------------------+-------+-----------+
|   variable    |      kernel       | value | perc_diff |
+---------------+-------------------+-------+-----------+
| mean_duration |        base       | 14.7  |   0.0%    |
| mean_duration |    pelt-hl-m2     | 13.6  |   -7.5%   |
| mean_duration |    pelt-hl-m4     | 13.0  |  -11.68%  |
| mean_duration |  util_est_faster  | 12.6  |  -14.01%  |
+---------------+-------------------+-------+-----------+

Jank percentage

+-----------+-------------------+-------+-----------+
| variable  |      kernel       | value | perc_diff |
+-----------+-------------------+-------+-----------+
| jank_perc |        base       |  1.8  |   0.0%    |
| jank_perc |    pelt-hl-m2     |  1.8  |  -4.91%   |
| jank_perc |    pelt-hl-m4     |  1.2  |  -36.61%  |
| jank_perc |  util_est_faster  |  0.8  |  -57.8%   |
+-----------+-------------------+-------+-----------+

Power usage [mW]

+--------------+-------------------+-------+-----------+
|  chan_name   |      kernel       | value | perc_diff |
+--------------+-------------------+-------+-----------+
| total_power  |        base       | 144.4 |   0.0%    |
| total_power  |    pelt-hl-m2     | 141.6 |  -1.97%   |
| total_power  |    pelt-hl-m4     | 163.2 |  12.99%   |
| total_power  |  util_est_faster  | 150.9 |   4.45%   |
+--------------+-------------------+-------+-----------+

At first glance it looks promising! Have to do more testing to
understand the behaviour fully.

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check
  2023-01-16 10:33     ` Peter Zijlstra
  2023-01-18 17:48       ` Dietmar Eggemann
@ 2023-01-19 14:29       ` Chen Yu
  1 sibling, 0 replies; 19+ messages in thread
From: Chen Yu @ 2023-01-19 14:29 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Dietmar Eggemann, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Steven Rostedt,
	Ben Segall, Daniel Bristot de Oliveira, Valentin Schneider,
	Hillf Danton, Honglei Wang, Len Brown, Chen Yu, Tianchen Ding,
	Joel Fernandes, Josh Don, linux-kernel

Hi Peter,
On 2023-01-16 at 11:33:26 +0100, Peter Zijlstra wrote:
> On Thu, Jan 05, 2023 at 12:33:16PM +0100, Dietmar Eggemann wrote:
> > On 16/12/2022 07:11, Chen Yu wrote:
> > 
> > [...]
> > 
> > > @@ -5995,6 +6005,18 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> > >  
> > >  static void set_next_buddy(struct sched_entity *se);
> > >  
> > > +static inline void dur_avg_update(struct task_struct *p, bool task_sleep)
> > > +{
> > > +	u64 dur;
> > > +
> > > +	if (!task_sleep)
> > > +		return;
> > > +
> > > +	dur = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime_vol;
> > > +	p->se.prev_sum_exec_runtime_vol = p->se.sum_exec_runtime;
> > 
> > Shouldn't se->prev_sum_exec_runtime_vol be set in enqueue_task_fair()
> > and not in dequeue_task_fair()->dur_avg_update()? Otherwise `dur` will
> > contain sleep time.
> > 
> > Like we do for se->prev_sum_exec_runtime in set_next_entity() but for
> > one `set_next_entity()-put_prev_entity()` run section.
> > 
> > AFAICS, you want to measure the exec_runtime sum over all run sections
> > between enqueue and dequeue.
> 
> You were thinking of the dynamic PELT window size thread? (which is what
> I had to think of when I looked at this).
> 
> I think we can still do that with this prev_sum_exec_runtime_vol (can't
> say I love the name though).
I agree that this name is not accurate, maybe prev_sleep_sum_exec_runtime?
I'm open to any other name for this : )

Currently I'm checking Prateek's data on Zen3 and Yicong's data on Arm64,
and their data suggests that inhibiting the spreading of short wakees is not
always a good idea on a system with a small LLC. Meanwhile, according to my
test on a system with a large number of CPUs in one LLC, short duration wakees become
trouble makers if spread on different CPUs, which could trigger an unexpected
race condition. I'm thinking of taking nr_llc_cpu into consideration when defining
a short duration task, and doing some experiments on this.
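
Purely as a sketch of that idea (the helper name and the scaling factor below
are made up, not from a posted patch), the threshold could be tied to the LLC
size, e.g.:

    /* sd_llc_size is the existing per-CPU "CPUs sharing this LLC" value */
    static bool is_short_task(struct task_struct *p)
    {
            u64 thresh = sysctl_sched_min_granularity >> 3;

            /* small LLCs keep spreading, large LLCs consolidate more */
            thresh = div_u64(thresh * per_cpu(sd_llc_size, task_cpu(p)), 64);

            return p->se.dur_avg && p->se.dur_avg < thresh;
    }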

thanks,
Chenyu

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up
  2023-01-16 10:53     ` K Prateek Nayak
@ 2023-01-19 14:58       ` Chen Yu
  0 siblings, 0 replies; 19+ messages in thread
From: Chen Yu @ 2023-01-19 14:58 UTC (permalink / raw)
  To: K Prateek Nayak
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, Yicong Yang,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel

On 2023-01-16 at 16:23:13 +0530, K Prateek Nayak wrote:
> Hello Chenyu,
> 
> On 12/30/2022 8:17 AM, Chen Yu wrote:
> > On 2022-12-29 at 12:46:59 +0530, K Prateek Nayak wrote:
> >> Hello Chenyu,
> >>
> >> Including the detailed results from testing below.
> >>
> >> tl;dr
> >>
> >> o There seems to be 3 noticeable regressions:
> >>   - tbench for lower number of clients. The schedstat data shows
> >>     an increase in wait time.
> >>   - SpecJBB MultiJVM performance drops as the workload prefers
> >>     an idle CPU over a busy one.
> >>   - Unixbench-pipe benchmark performance drops.
> >>
> >> o Most benchmark numbers remain same.
> >>
> >> o Small gains seen for ycsb-mongodb and unixbench-syscall.
> >>
> 
> Please ignore the last test results. The tests did not use
> exactly the same config for the tip and sis_short kernels, which led
> to more overhead in the network stack for the sis_short kernel,
> and the longer wait time seen in sched_stat data for tbench
> was a result of each loop taking longer to finish.
> 
> I reran the benchmarks on the latest tip making sure the
> configs are identical this time and only noticed one
> regression in Spec-JBB Critical-jOPS.
>
> tl;dr
> 
> o tbench sees good improvement in the throughput when
>   the machine is fully loaded and beyond.
> o Some unixbench test cases show improvement as well as
>   ycsb-mongodb in NPS2 and NPS4 mode.
> o Most benchmark results are same.
> o SpecJBB Critical-jOPS are still down. I'll share full
>   schedstat dump for tasks separately with you.
> 
Thanks Prateek! I checked the task duration for these workloads,
they fall into the short duration task range, so SIS_SHORT takes
effect.
[snip]
> 
> SpecJBB Critical-jOPS performance is known to suffer when tasks
> queue behind each other. I'll share the data separately. I do see
> the average wait_sum go up 1.3%. The Max-jOPS throughput, however,
> is identical on both kernels, which means sis_short does not affect
> the overall throughput; only for the critical jobs do we see
> the regression, due to possible queuing of tasks.
>
If I understand correctly, most workloads on Zen3 prefer to be spread
on idle CPUs in the same LLC, except for the scenario when the system is
extremely busy (and SIS_UTIL handles that). As Zen3 has 8C/16T per LLC,
it is unlikely to trigger the race condition I described in PATCH 2/2.
I'm preparing a patch to also take nr_llc into consideration.

thanks,
Chenyu
> --
> Thanks and Regards,
> Prateek

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up
  2023-01-17  2:53 ` Yicong Yang
@ 2023-01-19 16:11   ` Chen Yu
  2023-01-20  9:09     ` Yicong Yang
  0 siblings, 1 reply; 19+ messages in thread
From: Chen Yu @ 2023-01-19 16:11 UTC (permalink / raw)
  To: Yicong Yang
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	yangyicong, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Honglei Wang, Len Brown, Chen Yu, Tianchen Ding, Joel Fernandes,
	Josh Don, linux-kernel

Hi Yicong,
On 2023-01-17 at 10:53:25 +0800, Yicong Yang wrote:
> Hi Chen Yu,
> 
> On 2022/12/16 14:08, Chen Yu wrote:
> > The main purpose of this change is to avoid too many cross CPU
> > wake up when it is unnecessary. The frequent cross CPU wake up
> > brings significant damage to some workloads, especially on high
> > core count systems.
> > 
> > This patch set inhibits the cross CPU wake-up by placing the wakee
> > on waking CPU or previous CPU, if both the waker and wakee are
> > short-duration tasks.
> > 
> > The first patch is to introduce the definition of a short-duration
> > task. The second patch leverages the first patch to choose a local
> > or previous CPU for wakee.
> > 
> 
> Tested with tbench and netperf locally and MySQL remotely. Unluckily
> cannot reproduce the improvement and see some regression.
Thank you for the test.
> Detailed information below. The tests are done based on 6.2-rc1 on a 2x64 CPUs
> Kunpeng 920 server with 4 nodes and no SMT, CONFIG_SCHED_CLUSTER
> enabled.
> 
> For tbench and netperf tested on single node and two nodes on one socket.
> 
Does this platform have 2 sockets, each socket has 2 Numa nodes, each node
is attached to 32 CPUs, and the benchmarks are bound to the first socket?
And may I know how many CPUs there are in one LLC? Is it also 32?
> Further looked into 64 threads since it regresses the most, and mainly looked at the
> server (mysqld) side. In this case most of the load will be on node 0 and partly
> on node 1. With SIS_SHORT the utilization on node 0 stays almost the same,
> but utilization on node 1 remains nearly half compared to the baseline. That
> may partly answer why the performance decreases.
> 
OK, I see. So it seems that the Numa sched domain load balance does not spread
tasks fast enough while SIS_SHORT is waking tasks on the same node frequently.
But I thought select_idle_sibling() would wake up the tasks
on the same node (LLC) anyway, so why does SIS_SHORT inhibit the load balance more?
> mpstat 60s, non-idle%
> 	vanilla	sis_short	
> node 0	94.25	91.67	-2.73%
> node 1	46.41	30.03	-35.29%
> 
> Previously there was an argument that sync wakeup from interrupts may be incorrect,
I see. Do you mean, wakeup from interrupt might cause tasks stacking on the same
CPU, because the interrupt is delivered to the same CPU? I thought userspace
irq load balance would spread the interrupt? Or do you mean others?
> and SIS_SHORT seems to consolidate that and ties more tasks to the interrupt node.
>
> SIS_SHORT will likely make short tasks wake up locally but seems to increase the
> wait time in this case. schedstat for CPUs on node 0 shows:
> 
Agree. 
>                         vanilla               sis_short
> time elapsed            60.201085s            60.200395s
> [cpu Average, counts/sec]
> #0 cpu zeros            0.0                   0.0
> #1 sched_yied()         0.03                  0.12
> #2 zeros                0.0                   0.0
> #3 schedule()           409415.36             340304.86
> #4 sched->idle()        125616.92             91078.52
> #5 try_to_wake_up()     317404.25             280896.88
> #6 try_to_wakeup(local) 115451.87             140567.68
> #7 run time             26071494674.87        26164834556.84
> #8 wait time            3861592891.75         7361228637.54 (+ 90.63%)
> #9 timeslices,          283747.1              249182.34
> 
> Since it's a 128 CPU machine the min_granularity is 3000000ns which means a short
> task running within 375000ns. Most mysql worker threads' se.dur_avg is around
> 110000ns and will be regarded as short tasks.
> 
> To exclude the influence of Patch 1 (which we'll always record the dur_avg and related
> informations), tested 64 threads case with SIS_SHORT enabled or not through debugfs
> control and seems Patch 2 mainly dominate the results.
> 
> 		NO_SIS_SHORT	SIS_SHORT	
> TPS		17446.54	16464.83	-5.63%
> QPS		279144.61	263437.21	-5.63%
> avg lat		3.67		3.89		-5.99%
> 95th lat	5.09		5.37		-5.50%
> 
> Please let me know if you need more information.
> 
Very useful information, I'm thinking of taking nr_llc into consideration to
adjust the threshold of a short task, so that on a system with a smaller llc we do
not inhibit the idle CPU scan too much.

thanks,
Chenyu
> Thanks.
> 
> > Changes since v3:
> > 1. Honglei and Josh have concern that the threshold of short
> >    task duration could be too long. Decreased the threshold from
> >    sysctl_sched_min_granularity to (sysctl_sched_min_granularity / 8),
> >    and the '8' comes from get_update_sysctl_factor().
> > 2. Export p->se.dur_avg to /proc/{pid}/sched per Yicong's suggestion.
> > 3. Move the calculation of average duration from put_prev_task_fair()
> >    to dequeue_task_fair(). Because there is an issue in v3 that,
> >    put_prev_task_fair() will not be invoked by pick_next_task_fair()
> >    in fast path, thus the dur_avg could not be updated timely.
> > 4. Fix the comment in PATCH 2/2, that "WRITE_ONCE(CPU1->ttwu_pending, 1);"
> >    on CPU0 is earlier than CPU1 getting "ttwu_list->p0", per Tianchen.
> > 5. Move the scan for CPU with short duration task from select_idle_cpu()
> >    to select_idle_siblings(), because there is no CPU scan involved, per
> >    Yicong.
> > 
> > Changes since v2:
> > 
> > 1. Peter suggested comparing the duration of waker and the cost to
> >    scan for an idle CPU: If the cost is higher than the task duration,
> >    do not waste time finding an idle CPU, choose the local or previous
> >    CPU directly. A prototype was created based on this suggestion.
> >    However, according to the test result, this prototype does not inhibit
> >    the cross CPU wakeup and did not bring improvement. Because the cost
> >    to find an idle CPU is small in the problematic scenario. The root
> >    cause of the problem is a race condition between scanning for an idle
> >    CPU and task enqueue(please refer to the commit log in PATCH 2/2).
> >    So v3 does not change the core logic of v2, with some refinement based
> >    on Peter's suggestion.
> > 
> > 2. Simplify the logic to record the task duration per Peter and Abel's suggestion.
> > 
> > This change brings overall improvement on some microbenchmarks, both on
> > Intel and AMD platforms.
> > 
> > v3: https://lore.kernel.org/lkml/cover.1669862147.git.yu.c.chen@intel.com/
> > v2: https://lore.kernel.org/all/cover.1666531576.git.yu.c.chen@intel.com/
> > v1: https://lore.kernel.org/lkml/20220915165407.1776363-1-yu.c.chen@intel.com/
> > 
> > Chen Yu (2):
> >   sched/fair: Introduce short duration task check
> >   sched/fair: Choose the CPU where short task is running during wake up
> > 
> >  include/linux/sched.h   |  3 +++
> >  kernel/sched/core.c     |  2 ++
> >  kernel/sched/debug.c    |  1 +
> >  kernel/sched/fair.c     | 32 ++++++++++++++++++++++++++++++++
> >  kernel/sched/features.h |  1 +
> >  5 files changed, 39 insertions(+)
> > 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up
  2023-01-19 16:11   ` Chen Yu
@ 2023-01-20  9:09     ` Yicong Yang
  2023-01-22 15:37       ` Chen Yu
  0 siblings, 1 reply; 19+ messages in thread
From: Yicong Yang @ 2023-01-20  9:09 UTC (permalink / raw)
  To: Chen Yu
  Cc: yangyicong, Peter Zijlstra, Vincent Guittot, Tim Chen,
	Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Honglei Wang, Len Brown, Chen Yu, Tianchen Ding, Joel Fernandes,
	Josh Don, Barry Song, linux-kernel

On 2023/1/20 0:11, Chen Yu wrote:
> Hi Yicong,
> On 2023-01-17 at 10:53:25 +0800, Yicong Yang wrote:
>> Hi Chen Yu,
>>
>> On 2022/12/16 14:08, Chen Yu wrote:
>>> The main purpose of this change is to avoid too many cross CPU
>>> wake up when it is unnecessary. The frequent cross CPU wake up
>>> brings significant damage to some workloads, especially on high
>>> core count systems.
>>>
>>> This patch set inhibits the cross CPU wake-up by placing the wakee
>>> on waking CPU or previous CPU, if both the waker and wakee are
>>> short-duration tasks.
>>>
>>> The first patch is to introduce the definition of a short-duration
>>> task. The second patch leverages the first patch to choose a local
>>> or previous CPU for wakee.
>>>
>>
>> Tested with tbench and netperf locally and MySQL remotely. Unluckily
>> cannot reproduce the improvement and see some regression.
> Thank you for the test.
>> Detailed information below. The tests are done based on 6.2-rc1 on a 2x64 CPUs
>> Kunpeng 920 server with 4 nodes and no SMT, CONFIG_SCHED_CLUSTER
>> enabled.
>>
>> For tbench and netperf tested on single node and two nodes on one socket.
>>
> Does this platform have 2 sockets, each socket has 2 Numa nodes, each node
> is attached to 32 CPUs, and the benchmarks are bound to the first socket?
> And may I know how many CPUs there are in one LLC? Is it also 32?

you're right, exactly. There're 32 CPUs sharing one LLC.

>> Further looked into 64 threads since it regresses the most, and mainly looked at the
>> server (mysqld) side. In this case most of the load will be on node 0 and partly
>> on node 1. With SIS_SHORT the utilization on node 0 stays almost the same,
>> but utilization on node 1 remains nearly half compared to the baseline. That
>> may partly answer why the performance decreases.
>>
> OK, I see. So it seems that the Numa sched domain load balance does not spread
> tasks fast enough while SIS_SHORT is waking tasks on the same node frequently.
> But I thought select_idle_sibling() would wake up the tasks
> on the same node (LLC) anyway, so why does SIS_SHORT inhibit the load balance more?

Wakeup will happen much more frequently than load balance in this case and I don't
think load balance can handle it in time. In the fast path of wakeup we care little
about the load. SIS_SHORT may aggravate it since we're likely to put 2 short tasks
on one single CPU.

>> mpstat 60s, non-idle%
>> 	vanilla	sis_short	
>> node 0	94.25	91.67	-2.73%
>> node 1	46.41	30.03	-35.29%
>>
>> Previously there was an argument that sync wakeup from interrupts may be incorrect,
> I see. Do you mean, wakeup from interrupt might cause tasks stacking on the same
> CPU, because the interrupt is delivered to the same CPU? I thought userspace
> irq load balance would spread the interrupt? Or do you mean others?

Nope. I'm using a HiSilicon HNS3 network card which has 32 queues and 33 interrupts
spread evenly on 32 CPUs, not on one single CPU. I'm considering a case where p1 is
on cpu0 and p2 on cpu32, an interrupt happens on cpu0 and tries to wake up p2; since
p1 and p2 are both short tasks, p2 will likely wake and wait on cpu0.
p1 is not the actual waker and has not finished yet, and this will increase the wait
time and also the load on cpu0. Previously we might wake on the LLC of cpu0 and try
to find an idle CPU; with SIS_SHORT we'll just wake on cpu0.

Does it make sense to allow the SIS_SHORT wakeup only from the task context? I'll
have a try on this.
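
Roughly something like the below is what I mean (just a fragment to
illustrate; is_short_task() stands in for whatever check the series ends up
using):

    /* in select_idle_sibling(), before the SIS_SHORT shortcut */
    if (sched_feat(SIS_SHORT) && in_task() &&
        is_short_task(current) && is_short_task(p))
            return target;

    /* wakeups from hard/soft irq context fall through and scan as before */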

Some previous patch arguing interrupt wakeup from Barry and Libo:
[1] https://lkml.org/lkml/2021/11/5/234
[2] https://lore.kernel.org/lkml/20220711224704.1672831-1-libo.chen@oracle.com/

>> and SIS_SHORT seems to consolidate that and ties more tasks to the interrupt node.
>>
>> SIS_SHORT will likely make short tasks wake up locally but seems to increase the
>> wait time in this case. schedstat for CPUs on node 0 shows:
>>
> Agree. 
>>                         vanilla               sis_short
>> time elapsed            60.201085s            60.200395s
>> [cpu Average, counts/sec]
>> #0 cpu zeros            0.0                   0.0
>> #1 sched_yied()         0.03                  0.12
>> #2 zeros                0.0                   0.0
>> #3 schedule()           409415.36             340304.86
>> #4 sched->idle()        125616.92             91078.52
>> #5 try_to_wake_up()     317404.25             280896.88
>> #6 try_to_wakeup(local) 115451.87             140567.68
>> #7 run time             26071494674.87        26164834556.84
>> #8 wait time            3861592891.75         7361228637.54 (+ 90.63%)
>> #9 timeslices,          283747.1              249182.34
>>
>> Since it's a 128 CPU machine the min_granularity is 3000000ns which means a short
>> task running within 375000ns. Most mysql worker threads' se.dur_avg is around
>> 110000ns and will be regarded as short tasks.
>>
>> To exclude the influence of Patch 1 (which we'll always record the dur_avg and related
>> informations), tested 64 threads case with SIS_SHORT enabled or not through debugfs
>> control and seems Patch 2 mainly dominate the results.
>>
>> 		NO_SIS_SHORT	SIS_SHORT	
>> TPS		17446.54	16464.83	-5.63%
>> QPS		279144.61	263437.21	-5.63%
>> avg lat		3.67		3.89		-5.99%
>> 95th lat	5.09		5.37		-5.50%
>>
>> Please let me know if you need more information.
>>
> Very useful information, I'm thinking of taking nr_llc into consideration to
>> adjust the threshold of a short task, so that on a system with a smaller llc we do
> not inhibit the idle CPU scan too much.
> 

That sounds reasonable to me to link the SIS_SHORT threshold to the CPU scale like
what min_granularity does.

Thanks,
Yicong

> thanks,
> Chenyu
>> Thanks.
>>
>>> Changes since v3:
>>> 1. Honglei and Josh have concern that the threshold of short
>>>    task duration could be too long. Decreased the threshold from
>>>    sysctl_sched_min_granularity to (sysctl_sched_min_granularity / 8),
>>>    and the '8' comes from get_update_sysctl_factor().
>>> 2. Export p->se.dur_avg to /proc/{pid}/sched per Yicong's suggestion.
>>> 3. Move the calculation of average duration from put_prev_task_fair()
>>>    to dequeue_task_fair(). Because there is an issue in v3 that,
>>>    put_prev_task_fair() will not be invoked by pick_next_task_fair()
>>>    in fast path, thus the dur_avg could not be updated timely.
>>> 4. Fix the comment in PATCH 2/2, that "WRITE_ONCE(CPU1->ttwu_pending, 1);"
>>>    on CPU0 is earlier than CPU1 getting "ttwu_list->p0", per Tianchen.
>>> 5. Move the scan for CPU with short duration task from select_idle_cpu()
>>>    to select_idle_siblings(), because there is no CPU scan involved, per
>>>    Yicong.
>>>
>>> Changes since v2:
>>>
>>> 1. Peter suggested comparing the duration of waker and the cost to
>>>    scan for an idle CPU: If the cost is higher than the task duration,
>>>    do not waste time finding an idle CPU, choose the local or previous
>>>    CPU directly. A prototype was created based on this suggestion.
>>>    However, according to the test result, this prototype does not inhibit
>>>    the cross CPU wakeup and did not bring improvement. Because the cost
>>>    to find an idle CPU is small in the problematic scenario. The root
>>>    cause of the problem is a race condition between scanning for an idle
>>>    CPU and task enqueue(please refer to the commit log in PATCH 2/2).
>>>    So v3 does not change the core logic of v2, with some refinement based
>>>    on Peter's suggestion.
>>>
>>> 2. Simplify the logic to record the task duration per Peter and Abel's suggestion.
>>>
>>> This change brings overall improvement on some microbenchmarks, both on
>>> Intel and AMD platforms.
>>>
>>> v3: https://lore.kernel.org/lkml/cover.1669862147.git.yu.c.chen@intel.com/
>>> v2: https://lore.kernel.org/all/cover.1666531576.git.yu.c.chen@intel.com/
>>> v1: https://lore.kernel.org/lkml/20220915165407.1776363-1-yu.c.chen@intel.com/
>>>
>>> Chen Yu (2):
>>>   sched/fair: Introduce short duration task check
>>>   sched/fair: Choose the CPU where short task is running during wake up
>>>
>>>  include/linux/sched.h   |  3 +++
>>>  kernel/sched/core.c     |  2 ++
>>>  kernel/sched/debug.c    |  1 +
>>>  kernel/sched/fair.c     | 32 ++++++++++++++++++++++++++++++++
>>>  kernel/sched/features.h |  1 +
>>>  5 files changed, 39 insertions(+)
>>>
> 
> .
> 

^ permalink raw reply	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up
  2023-01-20  9:09     ` Yicong Yang
@ 2023-01-22 15:37       ` Chen Yu
  2023-01-23  4:47         ` K Prateek Nayak
  0 siblings, 1 reply; 19+ messages in thread
From: Chen Yu @ 2023-01-22 15:37 UTC (permalink / raw)
  To: Yicong Yang
  Cc: yangyicong, Peter Zijlstra, Vincent Guittot, Tim Chen,
	Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Honglei Wang, Len Brown, Chen Yu, Tianchen Ding, Joel Fernandes,
	Josh Don, Barry Song, linux-kernel

On 2023-01-20 at 17:09:28 +0800, Yicong Yang wrote:
> On 2023/1/20 0:11, Chen Yu wrote:
> > Hi Yicong,
> > On 2023-01-17 at 10:53:25 +0800, Yicong Yang wrote:
> >> Hi Chen Yu,
> >>
> >> On 2022/12/16 14:08, Chen Yu wrote:
> >>> The main purpose of this change is to avoid too many cross CPU
> >>> wake up when it is unnecessary. The frequent cross CPU wake up
> >>> brings significant damage to some workloads, especially on high
> >>> core count systems.
> >>>
> >>> This patch set inhibits the cross CPU wake-up by placing the wakee
> >>> on waking CPU or previous CPU, if both the waker and wakee are
> >>> short-duration tasks.
> >>>
> >>> The first patch is to introduce the definition of a short-duration
> >>> task. The second patch leverages the first patch to choose a local
> >>> or previous CPU for wakee.
> >>>
> >>
> >> Tested with tbench and netperf locally and MySQL remotely. Unfortunately I
> >> cannot reproduce the improvement and I see some regression.
> > Thank you for the test.
> >> Detailed information below. The tests are done based on 6.2-rc1 on a 2x64 CPUs
> >> Kunpeng 920 server with 4 nodes and no SMT, CONFIG_SCHED_CLUSTER
> >> enabled.
> >>
> >> For tbench and netperf, I tested on a single node and on two nodes within one socket.
> >>
> > Does this platform have 2 sockets, each socket with 2 NUMA nodes and each node
> > attached to 32 CPUs, and are the benchmarks bound to the first socket?
> > And may I know how many CPUs there are in one LLC? Is it also 32?
> 
> you're right, exactly. There're 32 CPUs sharing one LLC.
> 
> >> I further looked into the 64 threads case since it regresses most, mainly on the
> >> server (mysqld) side. In this case most of the load is on node 0 and partly
> >> on node 1. With SIS_SHORT the utilization on node 0 stays almost the same
> >> but the utilization on node 1 drops to nearly half of the baseline. That
> >> may partly answer why the performance decreases.
> >>
> > OK, I see. So it seems that the NUMA sched domain load balance does not spread
> > tasks fast enough while SIS_SHORT is waking tasks on the same node frequently.
> > But I thought select_idle_sibling() would wake up the tasks
> > on the same node (LLC) anyway, so why does SIS_SHORT inhibit the load balance more?
> 
> Wakeup will happen much more frequently than load balance in this case and I don't
> think load balance can handle it in time. In the fast path of wakeup we care little
> about the load. SIS_SHORT may aggravate it since we're likely to put 2 short tasks
> on one single CPU.
>
I see. Suppose cpu0 ~ cpu31 are on node0 and cpu32 ~ cpu63 are on node1. With SIS_SHORT
enabled, wakees could be stacked on cpu0. With SIS_SHORT disabled, wakees could be
woken up on cpu0 ~ cpu31. I understand that SIS_SHORT inhibits searching for idle CPUs
in node0, but why does SIS_SHORT inhibit tasks migrating to node1?
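
(For reference, the SIS_SHORT shortcut in PATCH 2/2 boils down to something like
the below in select_idle_sibling(), paraphrased rather than the exact hunk:

	/* Both waker and wakee are short: stay on the waking CPU, skip the scan */
	if (sched_feat(SIS_SHORT) && smp_processor_id() == target &&
	    is_short_task(current) && is_short_task(p))
		return target;

so the idle CPU scan is skipped, but the wakee still stays within the waker's LLC,
just as a normal wakeup would.)
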
> >> mpstat 60s, non-idle%
> >> 	vanilla	sis_short	
> >> node 0	94.25	91.67	-2.73%
> >> node 1	46.41	30.03	-35.29%
> >>
> >> Previously there's been an argument that sync wakeups from interrupts may be handled incorrectly
> > I see. Do you mean that wakeups from interrupts might cause tasks to stack on the same
> > CPU, because the interrupt is delivered to the same CPU? I thought the userspace
> > irq load balancer would spread the interrupts. Or do you mean something else?
> 
> Nope. I'm using a HiSilicon HNS3 network card which has 32 queues and 33 interrupts
> spread evenly across 32 CPUs, not on one single CPU. I'm considering a case where p1
> runs on cpu0 and p2 on cpu32: an interrupt happens on cpu0 and tries to wake up p2.
> Since p1 and p2 are both short tasks, p2 will likely be woken onto cpu0 and wait there.
> p1 is not the actual waker and has not finished yet, so this will increase both the wait
> time and the load on cpu0. Previously we would wake within the LLC of cpu0 and try
> to find an idle CPU, but with SIS_SHORT we'll just wake on cpu0.
> 
> Does it make sense to allow the SIS_SHORT wakeup only from task context? I'll
> give it a try.
> 
This sounds reasonable to me, and I think we can further enhance it.
Before you send out the proposal, could you help test the following
experimental change to see if it makes tbench, netperf and MySQL better?

Hi Prateek, could you also help check whether the change below makes the Spec
regression go away?
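
If my arithmetic is right, on the Kunpeng 920 above (sd_llc_size = 32,
min_granularity = 3000000ns) this moves the cut-off from 3000000 / 8 = 375000ns
to 32 * 3000000 / 1024 ~= 93750ns, so the mysql workers at ~110000ns dur_avg
would no longer be regarded as short, while the cut-off grows on platforms
whose LLC spans more CPUs.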

thanks,
Chenyu

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index fc8a636eca5e..6e1ab5e2686c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4974,7 +4974,7 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se);
 static inline int is_short_task(struct task_struct *p)
 {
 	return sched_feat(SIS_SHORT) && p->se.dur_avg &&
-		((p->se.dur_avg * 8) <= sysctl_sched_min_granularity);
+		((p->se.dur_avg * 1024) <= (__this_cpu_read(sd_llc_size) * sysctl_sched_min_granularity));
 }
 
 /*
-- 
2.25.1


^ permalink raw reply related	[flat|nested] 19+ messages in thread

* Re: [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up
  2023-01-22 15:37       ` Chen Yu
@ 2023-01-23  4:47         ` K Prateek Nayak
  0 siblings, 0 replies; 19+ messages in thread
From: K Prateek Nayak @ 2023-01-23  4:47 UTC (permalink / raw)
  To: Chen Yu, Yicong Yang
  Cc: yangyicong, Peter Zijlstra, Vincent Guittot, Tim Chen,
	Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, Barry Song,
	linux-kernel

Hello Chenyu,

Thank you for looking into this.

On 1/22/2023 9:07 PM, Chen Yu wrote:
> On 2023-01-20 at 17:09:28 +0800, Yicong Yang wrote:
> [..snip..]
>
> This sounds reasonable to me, and I think we can further enhance it.
> Before you send out the proposal, could you help test the following
> experimental change to see if it makes tbench, netperf and MySQL better?
> 
> Hi Prateek, could you also help check whether the change below makes the Spec
> regression go away?

Sure. I'll add this on top of v4 and queue some runs.
Will share the results soon.

> 
> thanks,
> Chenyu
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index fc8a636eca5e..6e1ab5e2686c 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -4974,7 +4974,7 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se);
>  static inline int is_short_task(struct task_struct *p)
>  {
>  	return sched_feat(SIS_SHORT) && p->se.dur_avg &&
> -		((p->se.dur_avg * 8) <= sysctl_sched_min_granularity);
> +		((p->se.dur_avg * 1024) <= (__this_cpu_read(sd_llc_size) * sysctl_sched_min_granularity));
>  }
>  
>  /*


--
Thanks and Regards,
Prateek

^ permalink raw reply	[flat|nested] 19+ messages in thread

end of thread, other threads:[~2023-01-23  4:48 UTC | newest]

Thread overview: 19+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-16  6:08 [RFC PATCH v4 0/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
2022-12-16  6:11 ` [RFC PATCH v4 1/2] sched/fair: Introduce short duration task check Chen Yu
2023-01-05 11:33   ` Dietmar Eggemann
2023-01-06  8:34     ` Chen Yu
2023-01-06 11:28       ` Dietmar Eggemann
2023-01-06 14:29         ` Chen Yu
2023-01-16 10:33     ` Peter Zijlstra
2023-01-18 17:48       ` Dietmar Eggemann
2023-01-19 14:29       ` Chen Yu
2022-12-16  6:11 ` [RFC PATCH v4 2/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
2022-12-29  7:16 ` [RFC PATCH v4 0/2] " K Prateek Nayak
2022-12-30  2:47   ` Chen Yu
2023-01-16 10:53     ` K Prateek Nayak
2023-01-19 14:58       ` Chen Yu
2023-01-17  2:53 ` Yicong Yang
2023-01-19 16:11   ` Chen Yu
2023-01-20  9:09     ` Yicong Yang
2023-01-22 15:37       ` Chen Yu
2023-01-23  4:47         ` K Prateek Nayak

This is an external index of several public inboxes,
see mirroring instructions on how to clone and mirror
all data and code used by this external index.