linux-kernel.vger.kernel.org archive mirror
* [PATCH v3 0/2] sched/fair: Choose the CPU where short task is running during wake up
@ 2022-12-01  8:43 Chen Yu
  2022-12-01  8:44 ` [PATCH v3 1/2] sched/fair: Introduce short duration task check Chen Yu
  2022-12-01  8:44 ` [PATCH v3 2/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
  0 siblings, 2 replies; 21+ messages in thread
From: Chen Yu @ 2022-12-01  8:43 UTC (permalink / raw)
  To: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel,
	Chen Yu

The main purpose of this change is to avoid unnecessary cross-CPU
wake-ups. Frequent cross-CPU wake-ups hurt some workloads
significantly, especially on systems with a high core count.

This patch set inhibits the cross-CPU wake-up by placing the wakee
on the waking CPU or its previous CPU, if both the waker and the
wakee are short-duration tasks.

The first patch introduces the definition of a short-duration
task. The second patch leverages the first patch to choose the local
or previous CPU for the wakee.

This version is modified based on the following feedback on v2:

1. Peter suggested comparing the duration of the waker with the cost of
   scanning for an idle CPU: if the cost is higher than the task duration,
   do not waste time finding an idle CPU, and choose the local or previous
   CPU directly. A prototype was created based on this suggestion.
   However, according to the test results, this prototype neither inhibited
   the cross-CPU wakeup nor brought an improvement, because the cost of
   finding an idle CPU is small in the problematic scenario. The root
   cause of the problem is a race condition between scanning for an idle
   CPU and task enqueue (please refer to the commit log in PATCH 2/2).
   So v3 does not change the core logic of v2, apart from some refinement
   based on Peter's suggestion.

2. Simplify the logic that records the task duration, per Peter's and Abel's suggestion.

This change brings an overall improvement on some microbenchmarks, on
both Intel and AMD platforms.

v2: https://lore.kernel.org/all/cover.1666531576.git.yu.c.chen@intel.com/
v1: https://lore.kernel.org/lkml/20220915165407.1776363-1-yu.c.chen@intel.com/

Chen Yu (2):
  sched/fair: Introduce short duration task check
  sched/fair: Choose the CPU where short task is running during wake up

 include/linux/sched.h   |  4 ++++
 kernel/sched/core.c     |  2 ++
 kernel/sched/fair.c     | 27 +++++++++++++++++++++++++++
 kernel/sched/features.h |  1 +
 4 files changed, 34 insertions(+)

-- 
2.25.1



* [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-01  8:43 [PATCH v3 0/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
@ 2022-12-01  8:44 ` Chen Yu
  2022-12-02  7:44   ` Honglei Wang
  2022-12-01  8:44 ` [PATCH v3 2/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
  1 sibling, 1 reply; 21+ messages in thread
From: Chen Yu @ 2022-12-01  8:44 UTC (permalink / raw)
  To: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel,
	Chen Yu

Introduce a short-duration task check, as there is a requirement
to leverage this attribute for better task placement.

There are several choices of metrics that could be used to
indicate if a task is a short-duration task.

At first thought, (p->se.sum_exec_runtime / p->nvcsw)
could be used to measure the task duration. However, long-past
history is factored too heavily into such a formula.
Ideally, old activity should decay and not affect
the current status too much.
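
As a rough illustration of that inertia (stand-alone arithmetic with
made-up numbers, not kernel code):

#include <stdio.h>

int main(void)
{
	/* 1000 old 100ms slices dominate the lifetime average... */
	double sum_exec_ms = 1000 * 100.0;
	unsigned long nvcsw = 1000;

	/* ...even after the task turns "short": 1000 more 0.1ms slices */
	for (int i = 0; i < 1000; i++) {
		sum_exec_ms += 0.1;
		nvcsw++;
	}

	/* prints ~50ms, nowhere near the task's recent behavior */
	printf("lifetime average: %.2f ms\n", sum_exec_ms / nvcsw);
	return 0;
}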

Although something based on PELT could be used, se.util_avg might
not be appropriate to describe the task duration:
1. Task p1 and task p2 are doing frequent ping-pong scheduling on
   one CPU, both p1 and p2 have a short duration, but the util_avg
   can be up to 50%.
2. Suppose a task lasting less than 4ms is regarded as a short task.
   If task p3 runs for 6ms and sleeps for 32ms, p3 should not be a
   short-duration task. However, PELT would decay p3's accumulated
   running time from 6ms to 3ms, because 32ms is the half-life in PELT.
   As a result, p3 would be incorrectly treated as a short task.
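
To make case 2 concrete, the arithmetic is roughly as follows (a
stand-alone user-space sketch; the only PELT property assumed is the
32ms half-life mentioned above, and the 4ms threshold is the example's
own):

#include <math.h>
#include <stdio.h>

int main(void)
{
	double runtime_ms   = 6.0;	/* p3 runs for 6ms...        */
	double sleep_ms     = 32.0;	/* ...then sleeps for 32ms   */
	double half_life_ms = 32.0;	/* PELT half-life            */

	/* after the sleep, the 6ms contribution has decayed to ~3ms,
	 * below the assumed 4ms threshold, so a PELT-style signal
	 * would wrongly classify p3 as a short task */
	double decayed = runtime_ms * pow(0.5, sleep_ms / half_life_ms);

	printf("decayed runtime: %.1f ms\n", decayed);
	return 0;
}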

It was found that there was once a similar feature to track the
duration of a task, introduced in commit ad4b78bbcbab ("sched: Add
new wakeup preemption mode: WAKEUP_RUNNING"). Unfortunately, it
was reverted because it was an experiment. So pick that patch up
again, by recording the average duration when a task voluntarily
switches out. Introduce SIS_SHORT to control this strategy.

The threshold of short duration reuses sysctl_sched_min_granularity,
so it can be tuned by the user. Ideally there should be a dedicated
parameter for the threshold, but that might introduce complexity.

Suggested-by: Tim Chen <tim.c.chen@intel.com>
Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
---
 include/linux/sched.h   |  4 ++++
 kernel/sched/core.c     |  2 ++
 kernel/sched/fair.c     | 17 +++++++++++++++++
 kernel/sched/features.h |  1 +
 4 files changed, 24 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index ffb6eb55cd13..64b7acb77a11 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -558,6 +558,10 @@ struct sched_entity {
 
 	u64				nr_migrations;
 
+	u64				prev_sum_exec_runtime_vol;
+	/* average duration of a task */
+	u64				dur_avg;
+
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	int				depth;
 	struct sched_entity		*parent;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index daff72f00385..c5202f1be3f7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4348,6 +4348,8 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->se.prev_sum_exec_runtime	= 0;
 	p->se.nr_migrations		= 0;
 	p->se.vruntime			= 0;
+	p->se.dur_avg			= 0;
+	p->se.prev_sum_exec_runtime_vol	= 0;
 	INIT_LIST_HEAD(&p->se.group_node);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e4a0b8bd941c..a4b314b664f8 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6200,6 +6200,16 @@ static int wake_wide(struct task_struct *p)
 	return 1;
 }
 
+/*
+ * If a task switches in and then voluntarily relinquishes the
+ * CPU quickly, it is regarded as a short duration task.
+ */
+static inline int is_short_task(struct task_struct *p)
+{
+	return sched_feat(SIS_SHORT) &&
+		(p->se.dur_avg <= sysctl_sched_min_granularity);
+}
+
 /*
  * The purpose of wake_affine() is to quickly determine on which CPU we can run
  * soonest. For the purpose of speed we only consider the waking and previous
@@ -7680,6 +7690,13 @@ static void put_prev_task_fair(struct rq *rq, struct task_struct *prev)
 	struct sched_entity *se = &prev->se;
 	struct cfs_rq *cfs_rq;
 
+	if (sched_feat(SIS_SHORT) && !prev->on_rq) {
+		u64 this_dur = se->sum_exec_runtime - se->prev_sum_exec_runtime_vol;
+
+		se->prev_sum_exec_runtime_vol = se->sum_exec_runtime;
+		update_avg(&se->dur_avg, this_dur);
+	}
+
 	for_each_sched_entity(se) {
 		cfs_rq = cfs_rq_of(se);
 		put_prev_entity(cfs_rq, se);
diff --git a/kernel/sched/features.h b/kernel/sched/features.h
index ee7f23c76bd3..efdc29c42161 100644
--- a/kernel/sched/features.h
+++ b/kernel/sched/features.h
@@ -62,6 +62,7 @@ SCHED_FEAT(TTWU_QUEUE, true)
  */
 SCHED_FEAT(SIS_PROP, false)
 SCHED_FEAT(SIS_UTIL, true)
+SCHED_FEAT(SIS_SHORT, true)
 
 /*
  * Issue a WARN when we do multiple update_rq_clock() calls
-- 
2.25.1



* [PATCH v3 2/2] sched/fair: Choose the CPU where short task is running during wake up
  2022-12-01  8:43 [PATCH v3 0/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
  2022-12-01  8:44 ` [PATCH v3 1/2] sched/fair: Introduce short duration task check Chen Yu
@ 2022-12-01  8:44 ` Chen Yu
  2022-12-05  2:36   ` Tianchen Ding
  2022-12-06 13:02   ` Yicong Yang
  1 sibling, 2 replies; 21+ messages in thread
From: Chen Yu @ 2022-12-01  8:44 UTC (permalink / raw)
  To: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel,
	Chen Yu, kernel test robot

[Problem Statement]
For a workload that is doing frequent context switches, the throughput
scales well until the number of instances reaches a peak point. After
that peak point, the throughput drops significantly if the number of
instances continues to increase.

The will-it-scale context_switch1 test case exposes the issue. The
test platform has 112 CPUs per LLC domain. will-it-scale launches
1, 8, 16 ... 112 instances respectively. Each instance is composed
of 2 tasks, and each pair of tasks does ping-pong scheduling via
pipe_read() and pipe_write(). No task is bound to any CPU.
It is found that, once the number of instances is higher than
56 (112 tasks in total, one task per CPU), the throughput
drops accordingly if the instance number continues to increase:

          ^
throughput|
          |                 X
          |               X   X X
          |             X         X X
          |           X               X
          |         X                   X
          |       X
          |     X
          |   X
          | X
          |
          +-----------------.------------------->
                            56
                                 number of instances
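
For reference, a minimal sketch of what one such ping-pong instance
roughly does (an illustration only, not the actual will-it-scale
source):

#include <unistd.h>

int main(void)
{
	int a[2], b[2];
	char c = 0;

	pipe(a);
	pipe(b);

	if (fork() == 0) {
		for (;;) {
			read(a[0], &c, 1);	/* sleep until the partner writes */
			write(b[1], &c, 1);	/* wake the partner up            */
		}
	}

	for (;;) {
		write(a[1], &c, 1);		/* pipe_write() wakes the child  */
		read(b[0], &c, 1);		/* pipe_read() blocks, schedules */
	}
}

Each iteration forces both tasks to block and be woken in turn, which
is why the benchmark is dominated by wake-up placement and context
switching.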

[Symptom analysis]

The performance degradation was caused by a high system idle
percentage (around 20% ~ 30%). The CPUs waste a lot of time in
idle and do nothing. As a comparison, if CPU affinity is set for
these workloads so they stop migrating among CPUs,
the idle percentage drops to nearly 0%, and the throughput
increases by about 300%. This indicates that there is room for optimization.

The reason for the high idle percentage is different before/after
commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU
on wakelist if wakee cpu is idle").

[Before the commit]
The bottleneck is the runqueue spinlock.

nr_instance          rq lock percentage
1                    1.22%
8                    1.17%
16                   1.20%
24                   1.22%
32                   1.46%
40                   1.61%
48                   1.63%
56                   1.65%
--------------------------
64                   3.77%      |
72                   5.90%      | increase
80                   7.95%      |
88                   9.98%      v
96                   11.81%
104                  13.54%
112                  15.13%

And top 2 rq lock hot paths:

(path1):
raw_spin_rq_lock_nested.constprop.0;
try_to_wake_up;
default_wake_function;
autoremove_wake_function;
__wake_up_common;
__wake_up_common_lock;
__wake_up_sync_key;
pipe_write;
new_sync_write;
vfs_write;
ksys_write;
__x64_sys_write;
do_syscall_64;
entry_SYSCALL_64_after_hwframe;write

(path2):
raw_spin_rq_lock_nested.constprop.0;
__sched_text_start;
schedule_idle;
do_idle;
cpu_startup_entry;
start_secondary;
secondary_startup_64_no_verify

Task A tries to wake up task B on CPU1, so task A grabs the
runqueue lock of CPU1. If CPU1 is about to quit idle, it needs
to grab its own runqueue lock, which has already been taken by
someone else. Then CPU1 takes more time to quit idle, which hurts
the performance.

[After the commit]
The cause is the race condition between select_task_rq() and
the task enqueue.

Suppose there are nr_cpus pairs of ping-pong scheduling
tasks. For example, p0' and p0 are ping-pong scheduling,
as are p1' <=> p1, and p2' <=> p2. None of these tasks are
bound to any CPU. The problem can be summarized as:
more than one waker is stacked on one CPU, which slows down
the wake-up of their wakees:

CPU0					CPU1				CPU2

p0'					p1' => idle			p2'

try_to_wake_up(p0)							try_to_wake_up(p2);
CPU1 = select_task_rq(p0);						CPU1 = select_task_rq(p2);
ttwu_queue(p0, CPU1);							ttwu_queue(p2, CPU1);
  __ttwu_queue_wakelist(p0, CPU1); =>	ttwu_list->p0
					quiting cpuidle_idle_call()

					ttwu_list->p2->p0	<=	  __ttwu_queue_wakelist(p2, CPU1);

    WRITE_ONCE(CPU1->ttwu_pending, 1);					    WRITE_ONCE(CPU1->ttwu_pending, 1);

p0' => idle
					sched_ttwu_pending()
					  enqueue_task(p2 and p0)

					idle => p2

					...
					p2 time slice expires
					...
									!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
								<===	!!! p2 delays the wake up of p0' !!!
									!!! causes long idle on CPU0     !!!
					p2 => p0			!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
					p0 wakes up p0'

idle => p0'



Since there are many waker/wakee pairs in the system, the chain reaction
causes many CPUs to become victims. These idle CPUs wait for their wakers
to be scheduled.

Actually, Tianchen has mentioned the above issue here [2].

[Proposal]
The root cause is that there is no strict synchronization between select_task_rq()
and the setting of the ttwu_pending flag across CPUs. This might be by
design, because the scheduler prefers parallel wakeups.

So avoid this problem indirectly. If the system does not have any idle core,
and both the waker and the wakee are short-duration tasks, wake up the wakee on
the same CPU as the waker.

The reason is that, if the waker is a short-duration task, it might
relinquish the CPU soon, and the wakee will get a chance to be scheduled.
On the other hand, if the wakee is a short-duration task, putting it on a
non-idle CPU would bring minimal impact to the running task. Requiring
that there is no idle core in the system ensures that this mechanism does
not inhibit spreading the tasks when idle cores are available.
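
In code terms, the rule added by this patch (quoted from the hunks in
the diff further below) boils down to:

	/* wake_affine_idle(): the only running task on this_cpu is short */
	if (cpu_rq(this_cpu)->nr_running == 1 &&
	    is_short_task((struct task_struct *)cpu_curr(this_cpu)))
		return this_cpu;

	/* select_idle_cpu(): no idle core, and waker and wakee are short */
	if (!has_idle_core && cpu_rq(target)->nr_running == 1 &&
	    is_short_task((struct task_struct *)cpu_curr(target)) &&
	    is_short_task(p))
		return target;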

[Benchmark results]
The baseline is v6.1-rc6. The test platform has 56 Cores(112 CPUs) per
LLC domain. C-states deeper than C1E are disabled. Turbo is disabled.
CPU frequency governor is performance.

will-it-scale.context_switch1
=============================
+331.13%

hackbench
=========
case            	load    	baseline(std%)	compare%( std%)
process-pipe    	1 group 	 1.00 (  1.50)	 +0.83 (  0.19)
process-pipe    	2 groups 	 1.00 (  0.77)	 +0.82 (  0.52)
process-pipe    	4 groups 	 1.00 (  0.20)	 -2.07 (  2.91)
process-pipe    	8 groups 	 1.00 (  0.05)	 +3.48 (  0.06)
process-sockets 	1 group 	 1.00 (  2.90)	-11.20 ( 11.99)
process-sockets 	2 groups 	 1.00 (  5.42)	 -1.39 (  1.70)
process-sockets 	4 groups 	 1.00 (  0.17)	 -0.20 (  0.19)
process-sockets 	8 groups 	 1.00 (  0.03)	 -0.05 (  0.11)
threads-pipe    	1 group 	 1.00 (  2.09)	 -1.63 (  0.44)
threads-pipe    	2 groups 	 1.00 (  0.28)	 -0.21 (  1.48)
threads-pipe    	4 groups 	 1.00 (  0.27)	 +0.13 (  0.63)
threads-pipe    	8 groups 	 1.00 (  0.14)	 +5.04 (  0.04)
threads-sockets 	1 group 	 1.00 (  2.51)	 -1.86 (  2.08)
threads-sockets 	2 groups 	 1.00 (  1.24)	 -0.60 (  3.83)
threads-sockets 	4 groups 	 1.00 (  0.49)	 +0.07 (  0.46)
threads-sockets 	8 groups 	 1.00 (  0.09)	 -0.04 (  0.08)

netperf
=======
case            	load    	baseline(std%)	compare%( std%)
TCP_RR          	28 threads	 1.00 (  0.81)	 -0.13 (  0.80)
TCP_RR          	56 threads	 1.00 (  0.55)	 +0.03 (  0.64)
TCP_RR          	84 threads	 1.00 (  0.33)	 +1.74 (  0.31)
TCP_RR          	112 threads	 1.00 (  0.24)	 +3.71 (  0.23)
TCP_RR          	140 threads	 1.00 (  0.21)	+215.10 ( 12.37)
TCP_RR          	168 threads	 1.00 ( 61.97)	+86.15 ( 12.26)
TCP_RR          	196 threads	 1.00 ( 14.49)	 +0.71 ( 14.20)
TCP_RR          	224 threads	 1.00 (  9.54)	 +0.68 (  7.00)
UDP_RR          	28 threads	 1.00 (  1.51)	 +0.25 (  1.02)
UDP_RR          	56 threads	 1.00 (  7.90)	 +0.57 (  7.89)
UDP_RR          	84 threads	 1.00 (  6.38)	 +3.66 ( 20.77)
UDP_RR          	112 threads	 1.00 ( 10.15)	 +3.16 ( 11.87)
UDP_RR          	140 threads	 1.00 (  9.98)	+164.29 ( 12.55)
UDP_RR          	168 threads	 1.00 ( 10.72)	+174.41 ( 17.05)
UDP_RR          	196 threads	 1.00 ( 18.84)	 +3.92 ( 15.48)
UDP_RR          	224 threads	 1.00 ( 16.97)	 +2.98 ( 16.69)

tbench
======
case            	load    	baseline(std%)	compare%( std%)
loopback        	28 threads	 1.00 (  0.12)	 -0.38 (  0.35)
loopback        	56 threads	 1.00 (  0.17)	 -0.04 (  0.19)
loopback        	84 threads	 1.00 (  0.03)	 +0.95 (  0.07)
loopback        	112 threads	 1.00 (  0.03)	+162.42 (  0.05)
loopback        	140 threads	 1.00 (  0.14)	 -2.26 (  0.14)
loopback        	168 threads	 1.00 (  0.49)	 -2.15 (  0.54)
loopback        	196 threads	 1.00 (  0.06)	 -2.38 (  0.22)
loopback        	224 threads	 1.00 (  0.20)	 -1.95 (  0.30)

schbench
========
case            	load    	baseline(std%)	compare%( std%)
normal          	1 mthread	 1.00 (  1.46)	 +1.03 (  0.00)
normal          	2 mthreads	 1.00 (  3.82)	 -5.41 (  8.37)
normal          	4 mthreads	 1.00 (  1.03)	 +5.11 (  2.88)
normal          	8 mthreads	 1.00 (  2.96)	 -2.41 (  0.93)

In summary, no significant performance regression is detected overall,
and there is a big improvement in netperf/tbench in the partially-busy cases.

[Limitations]
As Peter said, the criteria of a short duration task is intuitive, but it
seems to be hard to find an accurate criterion to describe this.

This wake-up strategy can be viewed as a dynamic WF_SYNC, except that:
1. Some workloads do not have WF_SYNC set.
2. WF_SYNC does not treat a non-idle CPU as a candidate target CPU.

Peter has suggested [1] comparing the task duration with the cost of searching
for an idle CPU, and if the latter is higher, giving up the scan to
achieve better task affinity. However, this method does not fit the case
encountered by this patch: because there are plenty of idle CPUs in the system,
it does not take too long to find an idle CPU. The bottleneck is caused by the
race condition mentioned above.

[1] https://lore.kernel.org/lkml/Y2O8a%2FOhk1i1l8ao@hirez.programming.kicks-ass.net/
[2] https://lore.kernel.org/lkml/9ed75cad-3718-356f-21ca-1b8ec601f335@linux.alibaba.com/

Suggested-by: Tim Chen <tim.c.chen@intel.com>
Suggested-by: K Prateek Nayak <kprateek.nayak@amd.com>
Tested-by: kernel test robot <yujie.liu@intel.com>
Signed-off-by: Chen Yu <yu.c.chen@intel.com>
---
 kernel/sched/fair.c | 10 ++++++++++
 1 file changed, 10 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a4b314b664f8..3f7361ec1330 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6246,6 +6246,11 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
 	if (available_idle_cpu(prev_cpu))
 		return prev_cpu;
 
+	/* The only running task is a short duration one. */
+	if (cpu_rq(this_cpu)->nr_running == 1 &&
+	    is_short_task((struct task_struct *)cpu_curr(this_cpu)))
+		return this_cpu;
+
 	return nr_cpumask_bits;
 }
 
@@ -6612,6 +6617,11 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
 		time = cpu_clock(this);
 	}
 
+	if (!has_idle_core && cpu_rq(target)->nr_running == 1 &&
+	    is_short_task((struct task_struct *)cpu_curr(target)) &&
+	    is_short_task(p))
+		return target;
+
 	if (sched_feat(SIS_UTIL)) {
 		sd_share = rcu_dereference(per_cpu(sd_llc_shared, target));
 		if (sd_share) {
-- 
2.25.1



* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-01  8:44 ` [PATCH v3 1/2] sched/fair: Introduce short duration task check Chen Yu
@ 2022-12-02  7:44   ` Honglei Wang
  2022-12-03  7:49     ` Chen Yu
  0 siblings, 1 reply; 21+ messages in thread
From: Honglei Wang @ 2022-12-02  7:44 UTC (permalink / raw)
  To: Chen Yu, Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Len Brown, Chen Yu,
	Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel



On 2022/12/1 16:44, Chen Yu wrote:
> Introduce short-duration task checks, as there is requirement
> to leverage this attribute for better task placement.
> 
> There are several choices of metrics that could be used to
> indicate if a task is a short-duration task.
> 
> At first thought the (p->se.sum_exec_runtime / p->nvcsw)
> could be used to measure the task duration. However, the
> history long past was factored too heavily in such a formula.
> Ideally, the old activity should decay and not affect
> the current status too much.
> 
> Although something based on PELT could be used, se.util_avg might
> not be appropriate to describe the task duration:
> 1. Task p1 and task p2 are doing frequent ping-pong scheduling on
>     one CPU, both p1 and p2 have a short duration, but the util_avg
>     can be up to 50%.
> 2. Suppose a task lasting less than 4ms is regarded as a short task.
>     If task p3 runs for 6ms and sleeps for 32ms, p3 should not be a
>     short-duration task. However, PELT would decay p3's accumulated
>     running time from 6ms to 3ms, because 32ms is the half-life in PELT.
>     As a result, p3 would be incorrectly treated as a short task.
> 
> It was found that there was once a similar feature to track the
> duration of a task, which is in Commit ad4b78bbcbab ("sched: Add
> new wakeup preemption mode: WAKEUP_RUNNING"). Unfortunately, it
> was reverted because it was an experiment. So pick the patch up
> again, by recording the average duration when a task voluntarily
> switches out. Introduce SIS_SHORT to control this strategy.
> 
> The threshold of short duration reuses sysctl_sched_min_granularity,
> so it can be tuned by the user. Ideally there should be a dedicated
> parameter for the threshold, but that might introduce complexity.
> 
> Suggested-by: Tim Chen <tim.c.chen@intel.com>
> Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> ---
>   include/linux/sched.h   |  4 ++++
>   kernel/sched/core.c     |  2 ++
>   kernel/sched/fair.c     | 17 +++++++++++++++++
>   kernel/sched/features.h |  1 +
>   4 files changed, 24 insertions(+)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index ffb6eb55cd13..64b7acb77a11 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -558,6 +558,10 @@ struct sched_entity {
>   
>   	u64				nr_migrations;
>   
> +	u64				prev_sum_exec_runtime_vol;
> +	/* average duration of a task */
> +	u64				dur_avg;
> +
>   #ifdef CONFIG_FAIR_GROUP_SCHED
>   	int				depth;
>   	struct sched_entity		*parent;
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index daff72f00385..c5202f1be3f7 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -4348,6 +4348,8 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
>   	p->se.prev_sum_exec_runtime	= 0;
>   	p->se.nr_migrations		= 0;
>   	p->se.vruntime			= 0;
> +	p->se.dur_avg			= 0;
> +	p->se.prev_sum_exec_runtime_vol	= 0;
>   	INIT_LIST_HEAD(&p->se.group_node);
>   
>   #ifdef CONFIG_FAIR_GROUP_SCHED
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index e4a0b8bd941c..a4b314b664f8 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6200,6 +6200,16 @@ static int wake_wide(struct task_struct *p)
>   	return 1;
>   }
>   
> +/*
> + * If a task switches in and then voluntarily relinquishes the
> + * CPU quickly, it is regarded as a short duration task.
> + */
> +static inline int is_short_task(struct task_struct *p)
> +{
> +	return sched_feat(SIS_SHORT) &&
> +		(p->se.dur_avg <= sysctl_sched_min_granularity);
> +}
> +

Hi Yu,

I still have a bit of concern about the sysctl_sched_min_granularity
stuff.. This knob can be set to different values, which will impact the
behavior of this patch and make things not totally under control.

Not sure if we can add a new knob for this.. The test result shows good
improvement for short tasks, and with this knob, admins will be able to
customize the system based on their own 'short task' view.

Thanks,
Honglei

>   /*
>    * The purpose of wake_affine() is to quickly determine on which CPU we can run
>    * soonest. For the purpose of speed we only consider the waking and previous
> @@ -7680,6 +7690,13 @@ static void put_prev_task_fair(struct rq *rq, struct task_struct *prev)
>   	struct sched_entity *se = &prev->se;
>   	struct cfs_rq *cfs_rq;
>   
> +	if (sched_feat(SIS_SHORT) && !prev->on_rq) {
> +		u64 this_dur = se->sum_exec_runtime - se->prev_sum_exec_runtime_vol;
> +
> +		se->prev_sum_exec_runtime_vol = se->sum_exec_runtime;
> +		update_avg(&se->dur_avg, this_dur);
> +	}
> +
>   	for_each_sched_entity(se) {
>   		cfs_rq = cfs_rq_of(se);
>   		put_prev_entity(cfs_rq, se);
> diff --git a/kernel/sched/features.h b/kernel/sched/features.h
> index ee7f23c76bd3..efdc29c42161 100644
> --- a/kernel/sched/features.h
> +++ b/kernel/sched/features.h
> @@ -62,6 +62,7 @@ SCHED_FEAT(TTWU_QUEUE, true)
>    */
>   SCHED_FEAT(SIS_PROP, false)
>   SCHED_FEAT(SIS_UTIL, true)
> +SCHED_FEAT(SIS_SHORT, true)
>   
>   /*
>    * Issue a WARN when we do multiple update_rq_clock() calls


* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-02  7:44   ` Honglei Wang
@ 2022-12-03  7:49     ` Chen Yu
  2022-12-03 15:35       ` Joel Fernandes
  0 siblings, 1 reply; 21+ messages in thread
From: Chen Yu @ 2022-12-03  7:49 UTC (permalink / raw)
  To: Honglei Wang
  Cc: Peter Zijlstra, Vincent Guittot, Tim Chen, Mel Gorman,
	Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Len Brown, Chen Yu,
	Tianchen Ding, Joel Fernandes, Josh Don, linux-kernel

Hi Honglei,
On 2022-12-02 at 15:44:18 +0800, Honglei Wang wrote:
> 
> 
> On 2022/12/1 16:44, Chen Yu wrote:
> > Introduce short-duration task checks, as there is requirement
> > to leverage this attribute for better task placement.
> > 
> > There are several choices of metrics that could be used to
> > indicate if a task is a short-duration task.
> > 
> > At first thought the (p->se.sum_exec_runtime / p->nvcsw)
> > could be used to measure the task duration. However, the
> > history long past was factored too heavily in such a formula.
> > Ideally, the old activity should decay and not affect
> > the current status too much.
> > 
> > Although something based on PELT could be used, se.util_avg might
> > not be appropriate to describe the task duration:
> > 1. Task p1 and task p2 are doing frequent ping-pong scheduling on
> >     one CPU, both p1 and p2 have a short duration, but the util_avg
> >     can be up to 50%.
> > 2. Suppose a task lasting less than 4ms is regarded as a short task.
> >     If task p3 runs for 6ms and sleeps for 32ms, p3 should not be a
> >     short-duration task. However, PELT would decay p3's accumulated
> >     running time from 6ms to 3ms, because 32ms is the half-life in PELT.
> >     As a result, p3 would be incorrectly treated as a short task.
> > 
> > It was found that there was once a similar feature to track the
> > duration of a task, which is in Commit ad4b78bbcbab ("sched: Add
> > new wakeup preemption mode: WAKEUP_RUNNING"). Unfortunately, it
> > was reverted because it was an experiment. So pick the patch up
> > again, by recording the average duration when a task voluntarily
> > switches out. Introduce SIS_SHORT to control this strategy.
> > 
> > The threshold of short duration reuses sysctl_sched_min_granularity,
> > so it can be tuned by the user. Ideally there should be a dedicated
> > parameter for the threshold, but that might introduce complexity.
> > 
> > Suggested-by: Tim Chen <tim.c.chen@intel.com>
> > Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
> > Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> > ---
> >   include/linux/sched.h   |  4 ++++
> >   kernel/sched/core.c     |  2 ++
> >   kernel/sched/fair.c     | 17 +++++++++++++++++
> >   kernel/sched/features.h |  1 +
> >   4 files changed, 24 insertions(+)
> > 
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index ffb6eb55cd13..64b7acb77a11 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -558,6 +558,10 @@ struct sched_entity {
> >   	u64				nr_migrations;
> > +	u64				prev_sum_exec_runtime_vol;
> > +	/* average duration of a task */
> > +	u64				dur_avg;
> > +
> >   #ifdef CONFIG_FAIR_GROUP_SCHED
> >   	int				depth;
> >   	struct sched_entity		*parent;
> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index daff72f00385..c5202f1be3f7 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -4348,6 +4348,8 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
> >   	p->se.prev_sum_exec_runtime	= 0;
> >   	p->se.nr_migrations		= 0;
> >   	p->se.vruntime			= 0;
> > +	p->se.dur_avg			= 0;
> > +	p->se.prev_sum_exec_runtime_vol	= 0;
> >   	INIT_LIST_HEAD(&p->se.group_node);
> >   #ifdef CONFIG_FAIR_GROUP_SCHED
> > diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > index e4a0b8bd941c..a4b314b664f8 100644
> > --- a/kernel/sched/fair.c
> > +++ b/kernel/sched/fair.c
> > @@ -6200,6 +6200,16 @@ static int wake_wide(struct task_struct *p)
> >   	return 1;
> >   }
> > +/*
> > + * If a task switches in and then voluntarily relinquishes the
> > + * CPU quickly, it is regarded as a short duration task.
> > + */
> > +static inline int is_short_task(struct task_struct *p)
> > +{
> > +	return sched_feat(SIS_SHORT) &&
> > +		(p->se.dur_avg <= sysctl_sched_min_granularity);
> > +}
> > +
> 
> Hi Yu,
> 
> I still have a bit concern about the sysctl_sched_min_granularity stuff..
> This grab can be set to different value which will impact the action of this
> patch and make things not totally under control.
> 
> Not sure if we can add a new grab for this.. The test result shows good
> improvement for short task, and with this grab, admins will be able to
> custom the system base on their own 'short task' view.
>
It would be ideal to have a dedicated parameter to tweak this. For example,
something under /sys/kernel/debug/sched/, initialized to sysctl_sched_min_granularity
by default.
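
For illustration, a rough sketch of such a dedicated knob (hypothetical
names only, modeled on the existing entries that sched_init_debug()
creates in kernel/sched/debug.c):

/* hypothetical; would live in kernel/sched/debug.c */
unsigned int sysctl_sched_short_task_ns;

/* inside sched_init_debug(), next to the existing entries: */
	sysctl_sched_short_task_ns = sysctl_sched_min_granularity;
	debugfs_create_u32("short_task_ns", 0644, debugfs_sched,
			   &sysctl_sched_short_task_ns);

is_short_task() would then compare p->se.dur_avg against
sysctl_sched_short_task_ns instead.
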
Hi Peter, Vincent,
may I have your opinion on whether this is applicable?

thanks,
Chenyu


* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-03  7:49     ` Chen Yu
@ 2022-12-03 15:35       ` Joel Fernandes
  2022-12-05  8:38         ` Chen Yu
  0 siblings, 1 reply; 21+ messages in thread
From: Joel Fernandes @ 2022-12-03 15:35 UTC (permalink / raw)
  To: Chen Yu
  Cc: Honglei Wang, Peter Zijlstra, Vincent Guittot, Tim Chen,
	Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Yicong Yang, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Len Brown, Chen Yu, Tianchen Ding, Josh Don, linux-kernel



> On Dec 3, 2022, at 2:50 AM, Chen Yu <yu.c.chen@intel.com> wrote:
> 
> Hi Honglei,
>> On 2022-12-02 at 15:44:18 +0800, Honglei Wang wrote:
>> 
>> 
>>> On 2022/12/1 16:44, Chen Yu wrote:
>>> Introduce short-duration task checks, as there is requirement
>>> to leverage this attribute for better task placement.
>>> 
>>> There are several choices of metrics that could be used to
>>> indicate if a task is a short-duration task.
>>> 
>>> At first thought the (p->se.sum_exec_runtime / p->nvcsw)
>>> could be used to measure the task duration. However, the
>>> history long past was factored too heavily in such a formula.
>>> Ideally, the old activity should decay and not affect
>>> the current status too much.
>>> 
>>> Although something based on PELT could be used, se.util_avg might
>>> not be appropriate to describe the task duration:
>>> 1. Task p1 and task p2 are doing frequent ping-pong scheduling on
>>>    one CPU, both p1 and p2 have a short duration, but the util_avg
>>>    can be up to 50%.
>>> 2. Suppose a task lasting less than 4ms is regarded as a short task.
>>>    If task p3 runs for 6ms and sleeps for 32ms, p3 should not be a
>>>    short-duration task. However, PELT would decay p3's accumulated
>>>    running time from 6ms to 3ms, because 32ms is the half-life in PELT.
>>>    As a result, p3 would be incorrectly treated as a short task.
>>> 
>>> It was found that there was once a similar feature to track the
>>> duration of a task, which is in Commit ad4b78bbcbab ("sched: Add
>>> new wakeup preemption mode: WAKEUP_RUNNING"). Unfortunately, it
>>> was reverted because it was an experiment. So pick the patch up
>>> again, by recording the average duration when a task voluntarily
>>> switches out. Introduce SIS_SHORT to control this strategy.
>>> 
>>> The threshold of short duration reuses sysctl_sched_min_granularity,
>>> so it can be tuned by the user. Ideally there should be a dedicated
>>> parameter for the threshold, but that might introduce complexity.
>>> 
>>> Suggested-by: Tim Chen <tim.c.chen@intel.com>
>>> Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
>>> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
>>> ---
>>>  include/linux/sched.h   |  4 ++++
>>>  kernel/sched/core.c     |  2 ++
>>>  kernel/sched/fair.c     | 17 +++++++++++++++++
>>>  kernel/sched/features.h |  1 +
>>>  4 files changed, 24 insertions(+)
>>> 
>>> diff --git a/include/linux/sched.h b/include/linux/sched.h
>>> index ffb6eb55cd13..64b7acb77a11 100644
>>> --- a/include/linux/sched.h
>>> +++ b/include/linux/sched.h
>>> @@ -558,6 +558,10 @@ struct sched_entity {
>>>      u64                nr_migrations;
>>> +    u64                prev_sum_exec_runtime_vol;
>>> +    /* average duration of a task */
>>> +    u64                dur_avg;
>>> +
>>>  #ifdef CONFIG_FAIR_GROUP_SCHED
>>>      int                depth;
>>>      struct sched_entity        *parent;
>>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>>> index daff72f00385..c5202f1be3f7 100644
>>> --- a/kernel/sched/core.c
>>> +++ b/kernel/sched/core.c
>>> @@ -4348,6 +4348,8 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
>>>      p->se.prev_sum_exec_runtime    = 0;
>>>      p->se.nr_migrations        = 0;
>>>      p->se.vruntime            = 0;
>>> +    p->se.dur_avg            = 0;
>>> +    p->se.prev_sum_exec_runtime_vol    = 0;
>>>      INIT_LIST_HEAD(&p->se.group_node);
>>>  #ifdef CONFIG_FAIR_GROUP_SCHED
>>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
>>> index e4a0b8bd941c..a4b314b664f8 100644
>>> --- a/kernel/sched/fair.c
>>> +++ b/kernel/sched/fair.c
>>> @@ -6200,6 +6200,16 @@ static int wake_wide(struct task_struct *p)
>>>      return 1;
>>>  }
>>> +/*
>>> + * If a task switches in and then voluntarily relinquishes the
>>> + * CPU quickly, it is regarded as a short duration task.
>>> + */
>>> +static inline int is_short_task(struct task_struct *p)
>>> +{
>>> +    return sched_feat(SIS_SHORT) &&
>>> +        (p->se.dur_avg <= sysctl_sched_min_granularity);
>>> +}
>>> +
>> 
>> Hi Yu,
>> 
>> I still have a bit concern about the sysctl_sched_min_granularity stuff..
>> This grab can be set to different value which will impact the action of this
>> patch and make things not totally under control.

There are already ways to misconfigure sched sysctl to make bad/weird things happen.

>> Not sure if we can add a new grab for this.. The test result shows good
>> improvement for short task, and with this grab, admins will be able to
>> custom the system base on their own 'short task' view.
>> 
> It would be ideal to have a dedicated parameter to tweak this. For example,
> something under /sys/kernel/debug/sched/, and initilized to sysctl_sched_min_granularity
> by default. 

It would be nice to not have to introduce a new knob for this. IMO, min_granularity is reasonable.

Thanks.



> 
> thanks,
> Chenyu


* Re: [PATCH v3 2/2] sched/fair: Choose the CPU where short task is running during wake up
  2022-12-01  8:44 ` [PATCH v3 2/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
@ 2022-12-05  2:36   ` Tianchen Ding
  2022-12-05  8:40     ` Chen Yu
  2022-12-06 13:02   ` Yicong Yang
  1 sibling, 1 reply; 21+ messages in thread
From: Tianchen Ding @ 2022-12-05  2:36 UTC (permalink / raw)
  To: Chen Yu
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Joel Fernandes, Josh Don, linux-kernel,
	kernel test robot, Peter Zijlstra, Vincent Guittot, Tim Chen,
	Mel Gorman

Hi Chen.

On 2022/12/1 16:44, Chen Yu wrote:
> [Problem Statement]
> For a workload that is doing frequent context switches, the throughput
> scales well until the number of instances reaches a peak point. After
> that peak point, the throughput drops significantly if the number of
> instances continues to increase.
> 
> The will-it-scale context_switch1 test case exposes the issue. The
> test platform has 112 CPUs per LLC domain. The will-it-scale launches
> 1, 8, 16 ... 112 instances respectively. Each instance is composed
> of 2 tasks, and each pair of tasks would do ping-pong scheduling via
> pipe_read() and pipe_write(). No task is bound to any CPU.
> It is found that, once the number of instances is higher than
> 56(112 tasks in total, every CPU has 1 task), the throughput
> drops accordingly if the instance number continues to increase:
> 
>            ^
> throughput|
>            |                 X
>            |               X   X X
>            |             X         X X
>            |           X               X
>            |         X                   X
>            |       X
>            |     X
>            |   X
>            | X
>            |
>            +-----------------.------------------->
>                              56
>                                   number of instances
> 
> [Symptom analysis]
> 
> The performance downgrading was caused by a high system idle
> percentage(around 20% ~ 30%). The CPUs waste a lot of time in
> idle and do nothing. As a comparison, if set CPU affinity to
> these workloads and stops them from migrating among CPUs,
> the idle percentage drops to nearly 0%, and the throughput
> increases by about 300%. This indicates room for optimization.
> 
> The reason for the high idle percentage is different before/after
> commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU
> on wakelist if wakee cpu is idle").
> 
> [Before the commit]
> The bottleneck is the runqueue spinlock.
> 
> nr_instance          rq lock percentage
> 1                    1.22%
> 8                    1.17%
> 16                   1.20%
> 24                   1.22%
> 32                   1.46%
> 40                   1.61%
> 48                   1.63%
> 56                   1.65%
> --------------------------
> 64                   3.77%      |
> 72                   5.90%      | increase
> 80                   7.95%      |
> 88                   9.98%      v
> 96                   11.81%
> 104                  13.54%
> 112                  15.13%
> 
> And top 2 rq lock hot paths:
> 
> (path1):
> raw_spin_rq_lock_nested.constprop.0;
> try_to_wake_up;
> default_wake_function;
> autoremove_wake_function;
> __wake_up_common;
> __wake_up_common_lock;
> __wake_up_sync_key;
> pipe_write;
> new_sync_write;
> vfs_write;
> ksys_write;
> __x64_sys_write;
> do_syscall_64;
> entry_SYSCALL_64_after_hwframe;write
> 
> (path2):
> raw_spin_rq_lock_nested.constprop.0;
> __sched_text_start;
> schedule_idle;
> do_idle;
> cpu_startup_entry;
> start_secondary;
> secondary_startup_64_no_verify
> 
> task A tries to wake up task B on CPU1, then task A grabs the
> runqueue lock of CPU1. If CPU1 is about to quit idle, it needs
> to grab its lock which has been taken by someone else. Then
> CPU1 takes more time to quit which hurts the performance.
> 
> [After the commit]
> The cause is the race condition between select_task_rq() and
> the task enqueue.
> 
> Suppose there are nr_cpus pairs of ping-pong scheduling
> tasks. For example, p0' and p0 are ping-pong scheduling,
> so do p1' <=> p1, and p2'<=> p2. None of these tasks are
> bound to any CPUs. The problem can be summarized as:
> more than 1 wakers are stacked on 1 CPU, which slows down
> waking up their wakees:
> 
> CPU0					CPU1				CPU2
> 
> p0'					p1' => idle			p2'
> 
> try_to_wake_up(p0)							try_to_wake_up(p2);
> CPU1 = select_task_rq(p0);						CPU1 = select_task_rq(p2);
> ttwu_queue(p0, CPU1);							ttwu_queue(p2, CPU1);
>    __ttwu_queue_wakelist(p0, CPU1); =>	ttwu_list->p0
> 					quiting cpuidle_idle_call()
> 
> 					ttwu_list->p2->p0	<=	  __ttwu_queue_wakelist(p2, CPU1);
> 
>      WRITE_ONCE(CPU1->ttwu_pending, 1);					    WRITE_ONCE(CPU1->ttwu_pending, 1);
> 
> p0' => idle
> 					sched_ttwu_pending()
> 					  enqueue_task(p2 and p0)
> 
> 					idle => p2
> 
> 					...
> 					p2 time slice expires
> 					...
> 									!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> 								<===	!!! p2 delays the wake up of p0' !!!
> 									!!! causes long idle on CPU0     !!!
> 					p2 => p0			!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> 					p0 wakes up p0'
> 
> idle => p0'
> 
> 
> 
> Since there are many waker/wakee pairs in the system, the chain reaction
> causes many CPUs to be victims. These idle CPUs wait for their waker to
> be scheduled.
> 
> Actually Tiancheng has mentioned above issue here[2].
> 

First I want to say that this issue (the race condition between selecting an
idle cpu and enqueuing the task) exists both before and after commit
f3dd3f674555. And I know this is a big issue, so in [2] I only tried to fix a
small part of it. Of course I'm glad to see you trying to solve this issue too :-)

There may be something slightly wrong in your comment about the ordering.
"WRITE_ONCE(CPU1->ttwu_pending, 1);" on CPU0 happens earlier than CPU1 picking
up "ttwu_list->p0", right?

Thanks.


* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-03 15:35       ` Joel Fernandes
@ 2022-12-05  8:38         ` Chen Yu
  2022-12-05  9:59           ` Vincent Guittot
  0 siblings, 1 reply; 21+ messages in thread
From: Chen Yu @ 2022-12-05  8:38 UTC (permalink / raw)
  To: Joel Fernandes
  Cc: Honglei Wang, Peter Zijlstra, Vincent Guittot, Tim Chen,
	Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Yicong Yang, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Len Brown, Chen Yu, Tianchen Ding, Josh Don, linux-kernel

Hi Joel,
On 2022-12-03 at 10:35:46 -0500, Joel Fernandes wrote:
> 
> 
> > On Dec 3, 2022, at 2:50 AM, Chen Yu <yu.c.chen@intel.com> wrote:
> > 
> > Hi Honglei,
> >> On 2022-12-02 at 15:44:18 +0800, Honglei Wang wrote:
> >> 
> >> 
> >>> On 2022/12/1 16:44, Chen Yu wrote:
> >>> Introduce short-duration task checks, as there is requirement
> >>> to leverage this attribute for better task placement.
> >>> 
> >>> There are several choices of metrics that could be used to
> >>> indicate if a task is a short-duration task.
> >>> 
> >>> At first thought the (p->se.sum_exec_runtime / p->nvcsw)
> >>> could be used to measure the task duration. However, the
> >>> history long past was factored too heavily in such a formula.
> >>> Ideally, the old activity should decay and not affect
> >>> the current status too much.
> >>> 
> >>> Although something based on PELT could be used, se.util_avg might
> >>> not be appropriate to describe the task duration:
> >>> 1. Task p1 and task p2 are doing frequent ping-pong scheduling on
> >>>    one CPU, both p1 and p2 have a short duration, but the util_avg
> >>>    can be up to 50%.
> >>> 2. Suppose a task lasting less than 4ms is regarded as a short task.
> >>>    If task p3 runs for 6ms and sleeps for 32ms, p3 should not be a
> >>>    short-duration task. However, PELT would decay p3's accumulated
> >>>    running time from 6ms to 3ms, because 32ms is the half-life in PELT.
> >>>    As a result, p3 would be incorrectly treated as a short task.
> >>> 
> >>> It was found that there was once a similar feature to track the
> >>> duration of a task, which is in Commit ad4b78bbcbab ("sched: Add
> >>> new wakeup preemption mode: WAKEUP_RUNNING"). Unfortunately, it
> >>> was reverted because it was an experiment. So pick the patch up
> >>> again, by recording the average duration when a task voluntarily
> >>> switches out. Introduce SIS_SHORT to control this strategy.
> >>> 
> >>> The threshold of short duration reuses sysctl_sched_min_granularity,
> >>> so it can be tuned by the user. Ideally there should be a dedicated
> >>> parameter for the threshold, but that might introduce complexity.
> >>> 
> >>> Suggested-by: Tim Chen <tim.c.chen@intel.com>
> >>> Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
> >>> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> >>> ---
> >>>  include/linux/sched.h   |  4 ++++
> >>>  kernel/sched/core.c     |  2 ++
> >>>  kernel/sched/fair.c     | 17 +++++++++++++++++
> >>>  kernel/sched/features.h |  1 +
> >>>  4 files changed, 24 insertions(+)
> >>> 
> >>> diff --git a/include/linux/sched.h b/include/linux/sched.h
> >>> index ffb6eb55cd13..64b7acb77a11 100644
> >>> --- a/include/linux/sched.h
> >>> +++ b/include/linux/sched.h
> >>> @@ -558,6 +558,10 @@ struct sched_entity {
> >>>      u64                nr_migrations;
> >>> +    u64                prev_sum_exec_runtime_vol;
> >>> +    /* average duration of a task */
> >>> +    u64                dur_avg;
> >>> +
> >>>  #ifdef CONFIG_FAIR_GROUP_SCHED
> >>>      int                depth;
> >>>      struct sched_entity        *parent;
> >>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> >>> index daff72f00385..c5202f1be3f7 100644
> >>> --- a/kernel/sched/core.c
> >>> +++ b/kernel/sched/core.c
> >>> @@ -4348,6 +4348,8 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
> >>>      p->se.prev_sum_exec_runtime    = 0;
> >>>      p->se.nr_migrations        = 0;
> >>>      p->se.vruntime            = 0;
> >>> +    p->se.dur_avg            = 0;
> >>> +    p->se.prev_sum_exec_runtime_vol    = 0;
> >>>      INIT_LIST_HEAD(&p->se.group_node);
> >>>  #ifdef CONFIG_FAIR_GROUP_SCHED
> >>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> >>> index e4a0b8bd941c..a4b314b664f8 100644
> >>> --- a/kernel/sched/fair.c
> >>> +++ b/kernel/sched/fair.c
> >>> @@ -6200,6 +6200,16 @@ static int wake_wide(struct task_struct *p)
> >>>      return 1;
> >>>  }
> >>> +/*
> >>> + * If a task switches in and then voluntarily relinquishes the
> >>> + * CPU quickly, it is regarded as a short duration task.
> >>> + */
> >>> +static inline int is_short_task(struct task_struct *p)
> >>> +{
> >>> +    return sched_feat(SIS_SHORT) &&
> >>> +        (p->se.dur_avg <= sysctl_sched_min_granularity);
> >>> +}
> >>> +
> >> 
> >> Hi Yu,
> >> 
> >> I still have a bit concern about the sysctl_sched_min_granularity stuff..
> >> This grab can be set to different value which will impact the action of this
> >> patch and make things not totally under control.
> 
> There are already ways to misconfigure sched sysctl to make bad/weird things happen.
> 
> >> Not sure if we can add a new grab for this.. The test result shows good
> >> improvement for short task, and with this grab, admins will be able to
> >> custom the system base on their own 'short task' view.
> >> 
> > It would be ideal to have a dedicated parameter to tweak this. For example,
> > something under /sys/kernel/debug/sched/, and initilized to sysctl_sched_min_granularity
> > by default. 
> 
> It would be nice to not have to introduce a new knob for this. IMO, min_granularity is reasonable.
>
OK, got it, thanks for the suggestion.

thanks,
Chenyu 


* Re: [PATCH v3 2/2] sched/fair: Choose the CPU where short task is running during wake up
  2022-12-05  2:36   ` Tianchen Ding
@ 2022-12-05  8:40     ` Chen Yu
  0 siblings, 0 replies; 21+ messages in thread
From: Chen Yu @ 2022-12-05  8:40 UTC (permalink / raw)
  To: Tianchen Ding
  Cc: Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Yicong Yang, Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Honglei Wang, Len Brown,
	Chen Yu, Joel Fernandes, Josh Don, linux-kernel,
	kernel test robot, Peter Zijlstra, Vincent Guittot, Tim Chen,
	Mel Gorman

On 2022-12-05 at 10:36:25 +0800, Tianchen Ding wrote:
> Hi Chen.
> 
> On 2022/12/1 16:44, Chen Yu wrote:
> > [Problem Statement]
> > For a workload that is doing frequent context switches, the throughput
> > scales well until the number of instances reaches a peak point. After
> > that peak point, the throughput drops significantly if the number of
> > instances continues to increase.
> > 
> > The will-it-scale context_switch1 test case exposes the issue. The
> > test platform has 112 CPUs per LLC domain. The will-it-scale launches
> > 1, 8, 16 ... 112 instances respectively. Each instance is composed
> > of 2 tasks, and each pair of tasks would do ping-pong scheduling via
> > pipe_read() and pipe_write(). No task is bound to any CPU.
> > It is found that, once the number of instances is higher than
> > 56(112 tasks in total, every CPU has 1 task), the throughput
> > drops accordingly if the instance number continues to increase:
> > 
> >            ^
> > throughput|
> >            |                 X
> >            |               X   X X
> >            |             X         X X
> >            |           X               X
> >            |         X                   X
> >            |       X
> >            |     X
> >            |   X
> >            | X
> >            |
> >            +-----------------.------------------->
> >                              56
> >                                   number of instances
> > 
> > [Symptom analysis]
> > 
> > The performance downgrading was caused by a high system idle
> > percentage(around 20% ~ 30%). The CPUs waste a lot of time in
> > idle and do nothing. As a comparison, if set CPU affinity to
> > these workloads and stops them from migrating among CPUs,
> > the idle percentage drops to nearly 0%, and the throughput
> > increases by about 300%. This indicates room for optimization.
> > 
> > The reason for the high idle percentage is different before/after
> > commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU
> > on wakelist if wakee cpu is idle").
> > 
> > [Before the commit]
> > The bottleneck is the runqueue spinlock.
> > 
> > nr_instance          rq lock percentage
> > 1                    1.22%
> > 8                    1.17%
> > 16                   1.20%
> > 24                   1.22%
> > 32                   1.46%
> > 40                   1.61%
> > 48                   1.63%
> > 56                   1.65%
> > --------------------------
> > 64                   3.77%      |
> > 72                   5.90%      | increase
> > 80                   7.95%      |
> > 88                   9.98%      v
> > 96                   11.81%
> > 104                  13.54%
> > 112                  15.13%
> > 
> > And top 2 rq lock hot paths:
> > 
> > (path1):
> > raw_spin_rq_lock_nested.constprop.0;
> > try_to_wake_up;
> > default_wake_function;
> > autoremove_wake_function;
> > __wake_up_common;
> > __wake_up_common_lock;
> > __wake_up_sync_key;
> > pipe_write;
> > new_sync_write;
> > vfs_write;
> > ksys_write;
> > __x64_sys_write;
> > do_syscall_64;
> > entry_SYSCALL_64_after_hwframe;write
> > 
> > (path2):
> > raw_spin_rq_lock_nested.constprop.0;
> > __sched_text_start;
> > schedule_idle;
> > do_idle;
> > cpu_startup_entry;
> > start_secondary;
> > secondary_startup_64_no_verify
> > 
> > task A tries to wake up task B on CPU1, then task A grabs the
> > runqueue lock of CPU1. If CPU1 is about to quit idle, it needs
> > to grab its lock which has been taken by someone else. Then
> > CPU1 takes more time to quit which hurts the performance.
> > 
> > [After the commit]
> > The cause is the race condition between select_task_rq() and
> > the task enqueue.
> > 
> > Suppose there are nr_cpus pairs of ping-pong scheduling
> > tasks. For example, p0' and p0 are ping-pong scheduling,
> > so do p1' <=> p1, and p2'<=> p2. None of these tasks are
> > bound to any CPUs. The problem can be summarized as:
> > more than 1 wakers are stacked on 1 CPU, which slows down
> > waking up their wakees:
> > 
> > CPU0					CPU1				CPU2
> > 
> > p0'					p1' => idle			p2'
> > 
> > try_to_wake_up(p0)							try_to_wake_up(p2);
> > CPU1 = select_task_rq(p0);						CPU1 = select_task_rq(p2);
> > ttwu_queue(p0, CPU1);							ttwu_queue(p2, CPU1);
> >    __ttwu_queue_wakelist(p0, CPU1); =>	ttwu_list->p0
> > 					quiting cpuidle_idle_call()
> > 
> > 					ttwu_list->p2->p0	<=	  __ttwu_queue_wakelist(p2, CPU1);
> > 
> >      WRITE_ONCE(CPU1->ttwu_pending, 1);					    WRITE_ONCE(CPU1->ttwu_pending, 1);
> > 
> > p0' => idle
> > 					sched_ttwu_pending()
> > 					  enqueue_task(p2 and p0)
> > 
> > 					idle => p2
> > 
> > 					...
> > 					p2 time slice expires
> > 					...
> > 									!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> > 								<===	!!! p2 delays the wake up of p0' !!!
> > 									!!! causes long idle on CPU0     !!!
> > 					p2 => p0			!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> > 					p0 wakes up p0'
> > 
> > idle => p0'
> > 
> > 
> > 
> > Since there are many waker/wakee pairs in the system, the chain reaction
> > causes many CPUs to be victims. These idle CPUs wait for their waker to
> > be scheduled.
> > 
> > Actually Tiancheng has mentioned above issue here[2].
> > 
> 
> First I want to say that this issue (race condition between selecting idle
> cpu and enqueuing task) always exists before or after the commit
> f3dd3f674555. And I know this is a big issue so in [2] I only try to fix a
> small part of it. Of course I'm glad to see you trying solving this issue
> too :-)
> 
> There may be a bit wrong in your comment about the order.
> "WRITE_ONCE(CPU1->ttwu_pending, 1);" on CPU0 is earlier than CPU1 getting
> "ttwu_list->p0", right?
>
Yes, you are right, I'll refine the comment.

thanks,
Chenyu 


* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-05  8:38         ` Chen Yu
@ 2022-12-05  9:59           ` Vincent Guittot
  2022-12-05 14:38             ` Chen Yu
  2022-12-07  2:23             ` Josh Don
  0 siblings, 2 replies; 21+ messages in thread
From: Vincent Guittot @ 2022-12-05  9:59 UTC (permalink / raw)
  To: Chen Yu
  Cc: Joel Fernandes, Honglei Wang, Peter Zijlstra, Tim Chen,
	Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Yicong Yang, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Len Brown, Chen Yu, Tianchen Ding, Josh Don, linux-kernel

On Mon, 5 Dec 2022 at 09:39, Chen Yu <yu.c.chen@intel.com> wrote:
>
> Hi Joel,
> On 2022-12-03 at 10:35:46 -0500, Joel Fernandes wrote:
> >
> >
> > > On Dec 3, 2022, at 2:50 AM, Chen Yu <yu.c.chen@intel.com> wrote:
> > >
> > > Hi Honglei,
> > >> On 2022-12-02 at 15:44:18 +0800, Honglei Wang wrote:
> > >>
> > >>
> > >>> On 2022/12/1 16:44, Chen Yu wrote:
> > >>> Introduce short-duration task checks, as there is requirement
> > >>> to leverage this attribute for better task placement.
> > >>>
> > >>> There are several choices of metrics that could be used to
> > >>> indicate if a task is a short-duration task.
> > >>>
> > >>> At first thought the (p->se.sum_exec_runtime / p->nvcsw)
> > >>> could be used to measure the task duration. However, the
> > >>> history long past was factored too heavily in such a formula.
> > >>> Ideally, the old activity should decay and not affect
> > >>> the current status too much.
> > >>>
> > >>> Although something based on PELT could be used, se.util_avg might
> > >>> not be appropriate to describe the task duration:
> > >>> 1. Task p1 and task p2 are doing frequent ping-pong scheduling on
> > >>>    one CPU, both p1 and p2 have a short duration, but the util_avg
> > >>>    can be up to 50%.
> > >>> 2. Suppose a task lasting less than 4ms is regarded as a short task.
> > >>>    If task p3 runs for 6ms and sleeps for 32ms, p3 should not be a
> > >>>    short-duration task. However, PELT would decay p3's accumulated
> > >>>    running time from 6ms to 3ms, because 32ms is the half-life in PELT.
> > >>>    As a result, p3 would be incorrectly treated as a short task.
> > >>>
> > >>> It was found that there was once a similar feature to track the
> > >>> duration of a task, which is in Commit ad4b78bbcbab ("sched: Add
> > >>> new wakeup preemption mode: WAKEUP_RUNNING"). Unfortunately, it
> > >>> was reverted because it was an experiment. So pick the patch up
> > >>> again, by recording the average duration when a task voluntarily
> > >>> switches out. Introduce SIS_SHORT to control this strategy.
> > >>>
> > >>> The threshold of short duration reuses sysctl_sched_min_granularity,
> > >>> so it can be tuned by the user. Ideally there should be a dedicated
> > >>> parameter for the threshold, but that might introduce complexity.
> > >>>
> > >>> Suggested-by: Tim Chen <tim.c.chen@intel.com>
> > >>> Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
> > >>> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> > >>> ---
> > >>>  include/linux/sched.h   |  4 ++++
> > >>>  kernel/sched/core.c     |  2 ++
> > >>>  kernel/sched/fair.c     | 17 +++++++++++++++++
> > >>>  kernel/sched/features.h |  1 +
> > >>>  4 files changed, 24 insertions(+)
> > >>>
> > >>> diff --git a/include/linux/sched.h b/include/linux/sched.h
> > >>> index ffb6eb55cd13..64b7acb77a11 100644
> > >>> --- a/include/linux/sched.h
> > >>> +++ b/include/linux/sched.h
> > >>> @@ -558,6 +558,10 @@ struct sched_entity {
> > >>>      u64                nr_migrations;
> > >>> +    u64                prev_sum_exec_runtime_vol;
> > >>> +    /* average duration of a task */
> > >>> +    u64                dur_avg;
> > >>> +
> > >>>  #ifdef CONFIG_FAIR_GROUP_SCHED
> > >>>      int                depth;
> > >>>      struct sched_entity        *parent;
> > >>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > >>> index daff72f00385..c5202f1be3f7 100644
> > >>> --- a/kernel/sched/core.c
> > >>> +++ b/kernel/sched/core.c
> > >>> @@ -4348,6 +4348,8 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
> > >>>      p->se.prev_sum_exec_runtime    = 0;
> > >>>      p->se.nr_migrations        = 0;
> > >>>      p->se.vruntime            = 0;
> > >>> +    p->se.dur_avg            = 0;
> > >>> +    p->se.prev_sum_exec_runtime_vol    = 0;
> > >>>      INIT_LIST_HEAD(&p->se.group_node);
> > >>>  #ifdef CONFIG_FAIR_GROUP_SCHED
> > >>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > >>> index e4a0b8bd941c..a4b314b664f8 100644
> > >>> --- a/kernel/sched/fair.c
> > >>> +++ b/kernel/sched/fair.c
> > >>> @@ -6200,6 +6200,16 @@ static int wake_wide(struct task_struct *p)
> > >>>      return 1;
> > >>>  }
> > >>> +/*
> > >>> + * If a task switches in and then voluntarily relinquishes the
> > >>> + * CPU quickly, it is regarded as a short duration task.
> > >>> + */
> > >>> +static inline int is_short_task(struct task_struct *p)
> > >>> +{
> > >>> +    return sched_feat(SIS_SHORT) &&
> > >>> +        (p->se.dur_avg <= sysctl_sched_min_granularity);
> > >>> +}
> > >>> +
> > >>
> > >> Hi Yu,
> > >>
> > >> I still have a bit of concern about the sysctl_sched_min_granularity stuff.
> > >> This knob can be set to different values, which will affect the behaviour of this
> > >> patch and make things not totally under control.
> >
> > There are already ways to misconfigure sched sysctl to make bad/weird things happen.
> >
> > >> Not sure if we can add a new knob for this. The test results show good
> > >> improvement for short tasks, and with such a knob, admins will be able to
> > >> customize the system based on their own view of a 'short task'.
> > >>
> > > It would be ideal to have a dedicated parameter to tweak this. For example,
> > > something under /sys/kernel/debug/sched/, and initialized to sysctl_sched_min_granularity
> > > by default.
> >
> > It would be nice to not have to introduce a new knob for this. IMO, min_granularity is reasonable.
> >
> OK, got it, thanks for the suggestion.

Sorry for the late answer.
We don't want to add more dedicated knobs. So using
sysctl_sched_min_granularity as you are doing in this patch looks ok

>
> thanks,
> Chenyu

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-05  9:59           ` Vincent Guittot
@ 2022-12-05 14:38             ` Chen Yu
  2022-12-07  2:23             ` Josh Don
  1 sibling, 0 replies; 21+ messages in thread
From: Chen Yu @ 2022-12-05 14:38 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: Joel Fernandes, Honglei Wang, Peter Zijlstra, Tim Chen,
	Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Yicong Yang, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Len Brown, Chen Yu, Tianchen Ding, Josh Don, linux-kernel

On 2022-12-05 at 10:59:06 +0100, Vincent Guittot wrote:
> On Mon, 5 Dec 2022 at 09:39, Chen Yu <yu.c.chen@intel.com> wrote:
> >
> > Hi Joel,
> > On 2022-12-03 at 10:35:46 -0500, Joel Fernandes wrote:
> > >
> > >
> > > > On Dec 3, 2022, at 2:50 AM, Chen Yu <yu.c.chen@intel.com> wrote:
> > > >
> > > > Hi Honglei,
> > > >> On 2022-12-02 at 15:44:18 +0800, Honglei Wang wrote:
> > > >>
> > > >>
> > > >>> On 2022/12/1 16:44, Chen Yu wrote:
> > > >>> Introduce short-duration task checks, as there is requirement
> > > >>> to leverage this attribute for better task placement.
> > > >>>
> > > >>> There are several choices of metrics that could be used to
> > > >>> indicate if a task is a short-duration task.
> > > >>>
> > > >>> At first thought, (p->se.sum_exec_runtime / p->nvcsw)
> > > >>> could be used to measure the task duration. However,
> > > >>> long-past history would be weighted too heavily in such a
> > > >>> formula. Ideally, the old activity should decay and not
> > > >>> affect the current status too much.
> > > >>>
> > > >>> Although something based on PELT could be used, se.util_avg might
> > > >>> not be appropriate to describe the task duration:
> > > >>> 1. Task p1 and task p2 are doing frequent ping-pong scheduling on
> > > >>>    one CPU, both p1 and p2 have a short duration, but the util_avg
> > > >>>    can be up to 50%.
> > > >>> 2. Suppose a task lasting less than 4ms is regarded as a short task.
> > > >>>    If task p3 runs for 6ms and sleeps for 32ms, p3 should not be a
> > > >>>    short-duration task. However, PELT would decay p3's accumulated
> > > >>>    running time from 6ms to 3ms, because 32ms is the half-life in PELT.
> > > >>>    As a result, p3 would be incorrectly treated as a short task.
> > > >>>
> > > >>> It was found that there was once a similar feature to track the
> > > >>> duration of a task, which is in Commit ad4b78bbcbab ("sched: Add
> > > >>> new wakeup preemption mode: WAKEUP_RUNNING"). Unfortunately, it
> > > >>> was reverted because it was an experiment. So pick the patch up
> > > >>> again, by recording the average duration when a task voluntarily
> > > >>> switches out. Introduce SIS_SHORT to control this strategy.
> > > >>>
> > > >>> The threshold of short duration reuses sysctl_sched_min_granularity,
> > > >>> so it can be tuned by the user. Ideally there should be a dedicated
> > > >>> parameter for the threshold, but that might introduce complexity.
> > > >>>
> > > >>> Suggested-by: Tim Chen <tim.c.chen@intel.com>
> > > >>> Suggested-by: Vincent Guittot <vincent.guittot@linaro.org>
> > > >>> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> > > >>> ---
> > > >>>  include/linux/sched.h   |  4 ++++
> > > >>>  kernel/sched/core.c     |  2 ++
> > > >>>  kernel/sched/fair.c     | 17 +++++++++++++++++
> > > >>>  kernel/sched/features.h |  1 +
> > > >>>  4 files changed, 24 insertions(+)
> > > >>>
> > > >>> diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > >>> index ffb6eb55cd13..64b7acb77a11 100644
> > > >>> --- a/include/linux/sched.h
> > > >>> +++ b/include/linux/sched.h
> > > >>> @@ -558,6 +558,10 @@ struct sched_entity {
> > > >>>      u64                nr_migrations;
> > > >>> +    u64                prev_sum_exec_runtime_vol;
> > > >>> +    /* average duration of a task */
> > > >>> +    u64                dur_avg;
> > > >>> +
> > > >>>  #ifdef CONFIG_FAIR_GROUP_SCHED
> > > >>>      int                depth;
> > > >>>      struct sched_entity        *parent;
> > > >>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > > >>> index daff72f00385..c5202f1be3f7 100644
> > > >>> --- a/kernel/sched/core.c
> > > >>> +++ b/kernel/sched/core.c
> > > >>> @@ -4348,6 +4348,8 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
> > > >>>      p->se.prev_sum_exec_runtime    = 0;
> > > >>>      p->se.nr_migrations        = 0;
> > > >>>      p->se.vruntime            = 0;
> > > >>> +    p->se.dur_avg            = 0;
> > > >>> +    p->se.prev_sum_exec_runtime_vol    = 0;
> > > >>>      INIT_LIST_HEAD(&p->se.group_node);
> > > >>>  #ifdef CONFIG_FAIR_GROUP_SCHED
> > > >>> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> > > >>> index e4a0b8bd941c..a4b314b664f8 100644
> > > >>> --- a/kernel/sched/fair.c
> > > >>> +++ b/kernel/sched/fair.c
> > > >>> @@ -6200,6 +6200,16 @@ static int wake_wide(struct task_struct *p)
> > > >>>      return 1;
> > > >>>  }
> > > >>> +/*
> > > >>> + * If a task switches in and then voluntarily relinquishes the
> > > >>> + * CPU quickly, it is regarded as a short duration task.
> > > >>> + */
> > > >>> +static inline int is_short_task(struct task_struct *p)
> > > >>> +{
> > > >>> +    return sched_feat(SIS_SHORT) &&
> > > >>> +        (p->se.dur_avg <= sysctl_sched_min_granularity);
> > > >>> +}
> > > >>> +
> > > >>
> > > >> Hi Yu,
> > > >>
> > > >> I still have a bit of concern about the sysctl_sched_min_granularity stuff.
> > > >> This knob can be set to different values, which will affect the behaviour of this
> > > >> patch and make things not totally under control.
> > >
> > > There are already ways to misconfigure sched sysctl to make bad/weird things happen.
> > >
> > > >> Not sure if we can add a new knob for this. The test results show good
> > > >> improvement for short tasks, and with such a knob, admins will be able to
> > > >> customize the system based on their own view of a 'short task'.
> > > >>
> > > > It would be ideal to have a dedicated parameter to tweak this. For example,
> > > > something under /sys/kernel/debug/sched/, and initialized to sysctl_sched_min_granularity
> > > > by default.
> > >
> > > It would be nice to not have to introduce a new knob for this. IMO, min_granularity is reasonable.
> > >
> > OK, got it, thanks for the suggestion.
> 
> Sorry for the late answer.
> We don't want to add more dedicated knobs. So using
> sysctl_sched_min_granularity as you are doing in this patch looks ok
>
I see, thanks Vincent.

thanks,
Chenyu 
> >
> > thanks,
> > Chenyu

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 2/2] sched/fair: Choose the CPU where short task is running during wake up
  2022-12-01  8:44 ` [PATCH v3 2/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
  2022-12-05  2:36   ` Tianchen Ding
@ 2022-12-06 13:02   ` Yicong Yang
  2022-12-07  3:54     ` Chen Yu
  1 sibling, 1 reply; 21+ messages in thread
From: Yicong Yang @ 2022-12-06 13:02 UTC (permalink / raw)
  To: Chen Yu
  Cc: yangyicong, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Honglei Wang, Len Brown, Chen Yu, Tianchen Ding, Joel Fernandes,
	Josh Don, linux-kernel, kernel test robot, Peter Zijlstra,
	Vincent Guittot, Mel Gorman, Tim Chen

Hi Chen Yu,

Just some minor questions below.

On 2022/12/1 16:44, Chen Yu wrote:
> [Problem Statement]
> For a workload that is doing frequent context switches, the throughput
> scales well until the number of instances reaches a peak point. After
> that peak point, the throughput drops significantly if the number of
> instances continues to increase.
> 
> The will-it-scale context_switch1 test case exposes the issue. The
> test platform has 112 CPUs per LLC domain. The will-it-scale launches
> 1, 8, 16 ... 112 instances respectively. Each instance is composed
> of 2 tasks, and each pair of tasks would do ping-pong scheduling via
> pipe_read() and pipe_write(). No task is bound to any CPU.
> It is found that, once the number of instances is higher than
> 56 (112 tasks in total, every CPU has 1 task), the throughput
> drops accordingly if the instance number continues to increase:
> 
>           ^
> throughput|
>           |                 X
>           |               X   X X
>           |             X         X X
>           |           X               X
>           |         X                   X
>           |       X
>           |     X
>           |   X
>           | X
>           |
>           +-----------------.------------------->
>                             56
>                                  number of instances
> 
> [Symptom analysis]
> 
> The performance degradation was caused by a high system idle
> percentage (around 20% ~ 30%). The CPUs waste a lot of time in
> idle and do nothing. As a comparison, if CPU affinity is set for
> these workloads to stop them from migrating among CPUs,
> the idle percentage drops to nearly 0%, and the throughput
> increases by about 300%. This indicates room for optimization.
> 
> The reason for the high idle percentage is different before/after
> commit f3dd3f674555 ("sched: Remove the limitation of WF_ON_CPU
> on wakelist if wakee cpu is idle").
> 
> [Before the commit]
> The bottleneck is the runqueue spinlock.
> 
> nr_instance          rq lock percentage
> 1                    1.22%
> 8                    1.17%
> 16                   1.20%
> 24                   1.22%
> 32                   1.46%
> 40                   1.61%
> 48                   1.63%
> 56                   1.65%
> --------------------------
> 64                   3.77%      |
> 72                   5.90%      | increase
> 80                   7.95%      |
> 88                   9.98%      v
> 96                   11.81%
> 104                  13.54%
> 112                  15.13%
> 
> And top 2 rq lock hot paths:
> 
> (path1):
> raw_spin_rq_lock_nested.constprop.0;
> try_to_wake_up;
> default_wake_function;
> autoremove_wake_function;
> __wake_up_common;
> __wake_up_common_lock;
> __wake_up_sync_key;
> pipe_write;
> new_sync_write;
> vfs_write;
> ksys_write;
> __x64_sys_write;
> do_syscall_64;
> entry_SYSCALL_64_after_hwframe;write
> 
> (path2):
> raw_spin_rq_lock_nested.constprop.0;
> __sched_text_start;
> schedule_idle;
> do_idle;
> cpu_startup_entry;
> start_secondary;
> secondary_startup_64_no_verify
> 
> Task A tries to wake up task B on CPU1, so task A grabs the
> runqueue lock of CPU1. If CPU1 is about to quit idle, it needs
> to grab its own rq lock, which has already been taken by someone
> else. Then CPU1 takes more time to quit idle, which hurts performance.
> 
> [After the commit]
> The cause is the race condition between select_task_rq() and
> the task enqueue.
> 
> Suppose there are nr_cpus pairs of ping-pong scheduling
> tasks. For example, p0' and p0 are a ping-pong scheduling pair,
> as are p1' <=> p1 and p2' <=> p2. None of these tasks are
> bound to any CPUs. The problem can be summarized as:
> more than 1 waker is stacked on 1 CPU, which slows down
> the wakeup of their wakees:
> 
> CPU0					CPU1				CPU2
> 
> p0'					p1' => idle			p2'
> 
> try_to_wake_up(p0)							try_to_wake_up(p2);
> CPU1 = select_task_rq(p0);						CPU1 = select_task_rq(p2);
> ttwu_queue(p0, CPU1);							ttwu_queue(p2, CPU1);
>   __ttwu_queue_wakelist(p0, CPU1); =>	ttwu_list->p0
> 					quitting cpuidle_idle_call()
> 
> 					ttwu_list->p2->p0	<=	  __ttwu_queue_wakelist(p2, CPU1);
> 
>     WRITE_ONCE(CPU1->ttwu_pending, 1);					    WRITE_ONCE(CPU1->ttwu_pending, 1);
> 
> p0' => idle
> 					sched_ttwu_pending()
> 					  enqueue_task(p2 and p0)
> 
> 					idle => p2
> 
> 					...
> 					p2 time slice expires
> 					...
> 									!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> 								<===	!!! p2 delays the wake up of p0' !!!
> 									!!! causes long idle on CPU0     !!!
> 					p2 => p0			!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!!
> 					p0 wakes up p0'
> 
> idle => p0'
> 
> 
> 
> Since there are many waker/wakee pairs in the system, the chain reaction
> causes many CPUs to be victims. These idle CPUs wait for their waker to
> be scheduled.
> 
> Actually Tianchen has mentioned the above issue here[2].
> 
> [Proposal]
> The root cause is that there is no strict synchronization between select_task_rq()
> and the setting of the ttwu_pending flag across CPUs. And this might be by
> design, because the scheduler prefers parallel wakeups.
>
> So avoid this problem indirectly. If a system does not have idle cores,
> and if the waker and wakee are both short-duration tasks, wake up the wakee on
> the same CPU as the waker.
>
> The reason is that, if the waker is a short-duration task, it might
> relinquish the CPU soon, and the wakee then has a chance to be scheduled.
> On the other hand, if the wakee is a short-duration task, putting it on a
> non-idle CPU has minimal impact on the running task. Requiring that the
> system has no idle core ensures that this mechanism does not inhibit
> spreading tasks when idle cores are available.
> 
> [Benchmark results]
> The baseline is v6.1-rc6. The test platform has 56 Cores(112 CPUs) per
> LLC domain. C-states deeper than C1E are disabled. Turbo is disabled.
> CPU frequency governor is performance.
> 
> will-it-scale.context_switch1
> =============================
> +331.13%
> 
> hackbench
> =========
> case            	load    	baseline(std%)	compare%( std%)
> process-pipe    	1 group 	 1.00 (  1.50)	 +0.83 (  0.19)
> process-pipe    	2 groups 	 1.00 (  0.77)	 +0.82 (  0.52)
> process-pipe    	4 groups 	 1.00 (  0.20)	 -2.07 (  2.91)
> process-pipe    	8 groups 	 1.00 (  0.05)	 +3.48 (  0.06)
> process-sockets 	1 group 	 1.00 (  2.90)	-11.20 ( 11.99)
> process-sockets 	2 groups 	 1.00 (  5.42)	 -1.39 (  1.70)
> process-sockets 	4 groups 	 1.00 (  0.17)	 -0.20 (  0.19)
> process-sockets 	8 groups 	 1.00 (  0.03)	 -0.05 (  0.11)
> threads-pipe    	1 group 	 1.00 (  2.09)	 -1.63 (  0.44)
> threads-pipe    	2 groups 	 1.00 (  0.28)	 -0.21 (  1.48)
> threads-pipe    	4 groups 	 1.00 (  0.27)	 +0.13 (  0.63)
> threads-pipe    	8 groups 	 1.00 (  0.14)	 +5.04 (  0.04)
> threads-sockets 	1 group 	 1.00 (  2.51)	 -1.86 (  2.08)
> threads-sockets 	2 groups 	 1.00 (  1.24)	 -0.60 (  3.83)
> threads-sockets 	4 groups 	 1.00 (  0.49)	 +0.07 (  0.46)
> threads-sockets 	8 groups 	 1.00 (  0.09)	 -0.04 (  0.08)
> 
> netperf
> =======
> case            	load    	baseline(std%)	compare%( std%)
> TCP_RR          	28 threads	 1.00 (  0.81)	 -0.13 (  0.80)
> TCP_RR          	56 threads	 1.00 (  0.55)	 +0.03 (  0.64)
> TCP_RR          	84 threads	 1.00 (  0.33)	 +1.74 (  0.31)
> TCP_RR          	112 threads	 1.00 (  0.24)	 +3.71 (  0.23)
> TCP_RR          	140 threads	 1.00 (  0.21)	+215.10 ( 12.37)
> TCP_RR          	168 threads	 1.00 ( 61.97)	+86.15 ( 12.26)
> TCP_RR          	196 threads	 1.00 ( 14.49)	 +0.71 ( 14.20)
> TCP_RR          	224 threads	 1.00 (  9.54)	 +0.68 (  7.00)
> UDP_RR          	28 threads	 1.00 (  1.51)	 +0.25 (  1.02)
> UDP_RR          	56 threads	 1.00 (  7.90)	 +0.57 (  7.89)
> UDP_RR          	84 threads	 1.00 (  6.38)	 +3.66 ( 20.77)
> UDP_RR          	112 threads	 1.00 ( 10.15)	 +3.16 ( 11.87)
> UDP_RR          	140 threads	 1.00 (  9.98)	+164.29 ( 12.55)
> UDP_RR          	168 threads	 1.00 ( 10.72)	+174.41 ( 17.05)
> UDP_RR          	196 threads	 1.00 ( 18.84)	 +3.92 ( 15.48)
> UDP_RR          	224 threads	 1.00 ( 16.97)	 +2.98 ( 16.69)
> 
> tbench
> ======
> case            	load    	baseline(std%)	compare%( std%)
> loopback        	28 threads	 1.00 (  0.12)	 -0.38 (  0.35)
> loopback        	56 threads	 1.00 (  0.17)	 -0.04 (  0.19)
> loopback        	84 threads	 1.00 (  0.03)	 +0.95 (  0.07)
> loopback        	112 threads	 1.00 (  0.03)	+162.42 (  0.05)
> loopback        	140 threads	 1.00 (  0.14)	 -2.26 (  0.14)
> loopback        	168 threads	 1.00 (  0.49)	 -2.15 (  0.54)
> loopback        	196 threads	 1.00 (  0.06)	 -2.38 (  0.22)
> loopback        	224 threads	 1.00 (  0.20)	 -1.95 (  0.30)
> 
> schbench
> ========
> case            	load    	baseline(std%)	compare%( std%)
> normal          	1 mthread	 1.00 (  1.46)	 +1.03 (  0.00)
> normal          	2 mthreads	 1.00 (  3.82)	 -5.41 (  8.37)
> normal          	4 mthreads	 1.00 (  1.03)	 +5.11 (  2.88)
> normal          	8 mthreads	 1.00 (  2.96)	 -2.41 (  0.93)
> 
> In summary, overall there is no significant performance regression detected
> and there is a big improvement in netperf/tbench in partially-busy cases.
> 
> [Limitations]
> As Peter said, the notion of a short-duration task is intuitive, but it
> seems hard to find an accurate criterion to describe it.
> 
> This wake up strategy can be viewed as dynamic WF_SYNC. Except that:
> 1. Some workloads do not have WF_SYNC set.
> 2. WF_SYNC does not treat non-idle CPU as candidate target CPU.
> 
> Peter has suggested[1] comparing the task duration with the cost of searching
> for an idle CPU: if the latter is higher, then give up the scan, to
> achieve better task affinity. However, this method does not fit the case
> encountered in this patch, because there are plenty of idle CPUs in the system,
> so it does not take long to find an idle CPU. The bottleneck is caused by the
> race condition mentioned above.
> 
> [1] https://lore.kernel.org/lkml/Y2O8a%2FOhk1i1l8ao@hirez.programming.kicks-ass.net/
> [2] https://lore.kernel.org/lkml/9ed75cad-3718-356f-21ca-1b8ec601f335@linux.alibaba.com/
> 
> Suggested-by: Tim Chen <tim.c.chen@intel.com>
> Suggested-by: K Prateek Nayak <kprateek.nayak@amd.com>
> Tested-by: kernel test robot <yujie.liu@intel.com>
> Signed-off-by: Chen Yu <yu.c.chen@intel.com>
> ---
>  kernel/sched/fair.c | 10 ++++++++++
>  1 file changed, 10 insertions(+)
> 
> diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
> index a4b314b664f8..3f7361ec1330 100644
> --- a/kernel/sched/fair.c
> +++ b/kernel/sched/fair.c
> @@ -6246,6 +6246,11 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
>  	if (available_idle_cpu(prev_cpu))
>  		return prev_cpu;
>  
> +	/* The only running task is a short duration one. */
> +	if (cpu_rq(this_cpu)->nr_running == 1 &&
> +	    is_short_task((struct task_struct *)cpu_curr(this_cpu)))
> +		return this_cpu;
> +

Is it necessary to check the ttwu pending state here and below?

>  	return nr_cpumask_bits;
>  }
>  
> @@ -6612,6 +6617,11 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
>  		time = cpu_clock(this);
>  	}
>  
> +	if (!has_idle_core && cpu_rq(target)->nr_running == 1 &&
> +	    is_short_task((struct task_struct *)cpu_curr(target)) &&
> +	    is_short_task(p))
> +		return target;
> +

A short-running task doesn't mean low utilization (as you also mentioned in Patch 1/2).
So should we be concerned that we may overload the target?

btw, since we're doing no scanning here, I think it would be more consistent to put this part
in select_idle_sibling(), considering we already have some similar judgements for
prev_cpu, recent_used_cpu, etc. there.

Still doing some tests, will reply with the results once I get them.

Thanks,
Yicong

>  	if (sched_feat(SIS_UTIL)) {
>  		sd_share = rcu_dereference(per_cpu(sd_llc_shared, target));
>  		if (sd_share) {
> 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-05  9:59           ` Vincent Guittot
  2022-12-05 14:38             ` Chen Yu
@ 2022-12-07  2:23             ` Josh Don
  2022-12-07 14:24               ` Chen Yu
  1 sibling, 1 reply; 21+ messages in thread
From: Josh Don @ 2022-12-07  2:23 UTC (permalink / raw)
  To: Vincent Guittot, Chen Yu
  Cc: Joel Fernandes, Honglei Wang, Peter Zijlstra, Tim Chen,
	Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Yicong Yang, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Len Brown, Chen Yu, Tianchen Ding, linux-kernel

> We don't want to add more dedicated knobs. So using
> sysctl_sched_min_granularity as you are doing in this patch looks ok

If not a knob, then maybe at least a smaller hardcoded value? Several
milliseconds seems like too long for this optimization; the
opportunity cost of not doing the idle search likely doesn't make
sense if the waker is going to be active for several additional
milliseconds. I would have guessed something on the order of 100us.

Chen, what is the run duration of tasks in your ping pong example?

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 2/2] sched/fair: Choose the CPU where short task is running during wake up
  2022-12-06 13:02   ` Yicong Yang
@ 2022-12-07  3:54     ` Chen Yu
  0 siblings, 0 replies; 21+ messages in thread
From: Chen Yu @ 2022-12-07  3:54 UTC (permalink / raw)
  To: Yicong Yang
  Cc: yangyicong, Juri Lelli, Rik van Riel, Aaron Lu, Abel Wu,
	K Prateek Nayak, Gautham R . Shenoy, Ingo Molnar,
	Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Honglei Wang, Len Brown, Chen Yu, Tianchen Ding, Joel Fernandes,
	Josh Don, linux-kernel, kernel test robot, Peter Zijlstra,
	Vincent Guittot, Mel Gorman, Tim Chen

Hi Yicong,
On 2022-12-06 at 21:02:11 +0800, Yicong Yang wrote:
[...]
> > +++ b/kernel/sched/fair.c
> > @@ -6246,6 +6246,11 @@ wake_affine_idle(int this_cpu, int prev_cpu, int sync)
> >  	if (available_idle_cpu(prev_cpu))
> >  		return prev_cpu;
> >  
> > +	/* The only running task is a short duration one. */
> > +	if (cpu_rq(this_cpu)->nr_running == 1 &&
> > +	    is_short_task((struct task_struct *)cpu_curr(this_cpu)))
> > +		return this_cpu;
> > +
> 
> Is it necessary to check the ttwu pending state here and below?
>
My understanding is that ttwu_pending will be set on this_cpu if
1) this_cpu is idle, or
2) a waker on another LLC domain wants to wake up the wakee on this_cpu,
see ttwu_queue_cond().
For 1), nr_running is 1 here, so this_cpu is not idle. For 2), the
chance of a cross-LLC wakeup is relatively low with the current patch
applied. Besides, I was trying to make this proposal a dynamic version
of WF_SYNC; since the latter does not check ttwu_pending, I did
not add this check either (for now).
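
To make it concrete, here is a rough sketch of the two cases above. It is an
illustration only, not the exact ttwu_queue_cond() body, and the helper name
is made up for this example:

/*
 * Illustration only: roughly when the wakelist (and thus ttwu_pending)
 * would be used for a wakeup targeted at wakee_cpu.
 */
static bool would_use_wakelist(int waker_cpu, int wakee_cpu)
{
	/* Case 2: the waker runs in another LLC domain. */
	if (!cpus_share_cache(waker_cpu, wakee_cpu))
		return true;

	/* Case 1: the target CPU is idle (no runnable task). */
	if (!cpu_rq(wakee_cpu)->nr_running)
		return true;

	return false;
}
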
> >  	return nr_cpumask_bits;
> >  }
> >  
> > @@ -6612,6 +6617,11 @@ static int select_idle_cpu(struct task_struct *p, struct sched_domain *sd, bool
> >  		time = cpu_clock(this);
> >  	}
> >  
> > +	if (!has_idle_core && cpu_rq(target)->nr_running == 1 &&
> > +	    is_short_task((struct task_struct *)cpu_curr(target)) &&
> > +	    is_short_task(p))
> > +		return target;
> > +
> 
> A short-running task doesn't mean low utilization (as you also mentioned in Patch 1/2).
> So should we be concerned that we may overload the target?
> 
An overloaded target might be expected in this case IMO, because for a ping-pong scheduling
pair we want to saturate the target CPU to eliminate the idle time. And this strategy
only takes effect when !has_idle_core.
> btw, since we're doing no scanning here, I think it would be more consistent to put this part
> in select_idle_sibling(), considering we already have some similar judgements for
> prev_cpu, recent_used_cpu, etc. there.
> 
Got it, I can change it in next version.
> Still doing some tests, will reply with the results once I get them.
Thanks for the testing, we can tune this patch once we have the data.

thanks,
Chenyu
> 
> Thanks,
> Yicong
> 
> >  	if (sched_feat(SIS_UTIL)) {
> >  		sd_share = rcu_dereference(per_cpu(sd_llc_shared, target));
> >  		if (sd_share) {
> > 

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-07  2:23             ` Josh Don
@ 2022-12-07 14:24               ` Chen Yu
  2022-12-12 11:22                 ` Yicong Yang
  0 siblings, 1 reply; 21+ messages in thread
From: Chen Yu @ 2022-12-07 14:24 UTC (permalink / raw)
  To: Josh Don
  Cc: Vincent Guittot, Joel Fernandes, Honglei Wang, Peter Zijlstra,
	Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel, Aaron Lu,
	Abel Wu, K Prateek Nayak, Yicong Yang, Gautham R . Shenoy,
	Ingo Molnar, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Len Brown, Chen Yu, Tianchen Ding, linux-kernel

Hi Josh,
On 2022-12-06 at 18:23:47 -0800, Josh Don wrote:
> > We don't want to add more dedicated knobs. So using
> > sysctl_sched_min_granularity as you are doing in this patch looks ok
> 
> If not a knob, then maybe at least a smaller hardcoded value? Several
> milliseconds seems like too long for this optimization; the
> opportunity cost of not doing the idle search likely doesn't make
> sense if the waker is going to be active for several additional
> milliseconds. I would have guessed something on the order of 100us.
> 
> Chen, what is the run duration of tasks in your ping pong example?
OK, I'll measure the duration of all the tests and summarize the data later.

thanks,
Chenyu

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-07 14:24               ` Chen Yu
@ 2022-12-12 11:22                 ` Yicong Yang
  2022-12-12 14:33                   ` Chen Yu
  0 siblings, 1 reply; 21+ messages in thread
From: Yicong Yang @ 2022-12-12 11:22 UTC (permalink / raw)
  To: Chen Yu
  Cc: yangyicong, Vincent Guittot, Joel Fernandes, Honglei Wang,
	Peter Zijlstra, Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel,
	Aaron Lu, Abel Wu, K Prateek Nayak, Gautham R . Shenoy,
	Ingo Molnar, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Len Brown, Chen Yu, Tianchen Ding, linux-kernel, Josh Don

On 2022/12/7 22:24, Chen Yu wrote:
> Hi Josh,
> On 2022-12-06 at 18:23:47 -0800, Josh Don wrote:
>>> We don't want to add more dedicated knobs. So using
>>> sysctl_sched_min_granularity as you are doing in this patch looks ok
>>
>> If not a knob, then maybe at least a smaller hardcoded value? Several
>> milliseconds seems like too long for this optimization; the
>> opportunity cost of not doing the idle search likely doesn't make
>> sense if the waker is going to be active for several additional
>> milliseconds. I would have guessed something on the order of 100us.
>>
>> Chen, what is the run duration of tasks in your ping pong example?
> OK, I'll measure the duration of all the tests and summarize the data later.
> 

If we decide to use sysctl_sched_min_granularity as the threshold of a short
task, can we also export p->se.dur_avg (somewhere like /proc/[pid]/sched)?
It may be helpful to choose a proper value for sysctl_sched_min_granularity.
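
Something as small as the following would probably do (just a sketch: it assumes
the existing PN() helper in proc_sched_show_task(), and the context lines are from
memory of kernel/sched/debug.c, so they may not match exactly):

--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ void proc_sched_show_task(struct task_struct *p, ...)
 	PN(se.exec_start);
 	PN(se.vruntime);
 	PN(se.sum_exec_runtime);
+	PN(se.dur_avg);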

Thanks.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-12 11:22                 ` Yicong Yang
@ 2022-12-12 14:33                   ` Chen Yu
  2022-12-12 18:17                     ` Josh Don
  0 siblings, 1 reply; 21+ messages in thread
From: Chen Yu @ 2022-12-12 14:33 UTC (permalink / raw)
  To: Yicong Yang
  Cc: yangyicong, Vincent Guittot, Joel Fernandes, Honglei Wang,
	Peter Zijlstra, Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel,
	Aaron Lu, Abel Wu, K Prateek Nayak, Gautham R . Shenoy,
	Ingo Molnar, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Len Brown, Chen Yu, Tianchen Ding, linux-kernel, Josh Don

On 2022-12-12 at 19:22:48 +0800, Yicong Yang wrote:
> On 2022/12/7 22:24, Chen Yu wrote:
> > Hi Josh,
> > On 2022-12-06 at 18:23:47 -0800, Josh Don wrote:
> >>> We don't want to add more dedicated knobs. So using
> >>> sysctl_sched_min_granularity as you are doing in this patch looks ok
> >>
> >> If not a knob, then maybe at least a smaller hardcoded value? Several
> >> milliseconds seems like too long for this optimization; the
> >> opportunity cost of not doing the idle search likely doesn't make
> >> sense if the waker is going to be active for several additional
> >> milliseconds. I would have guessed something on the order of 100us.
> >>
> >> Chen, what is the run duration of tasks in your ping pong example?
> > OK, I'll measure the duration of all the tests and summarize the data later.
> > 
> 
> If we decide to use sysctl_sched_min_granularity as the threshold of a short
> task, can we also export p->se.dur_avg (somewhere like /proc/[pid]/sched)?
> It may be helpful to choose a proper value for sysctl_sched_min_granularity.
This looks reasonable, I can add one in the next version.
BTW, I've changed the threshold to (sysctl_sched_min_granularity / 8) in my next
version, as this is the value that fits my previous test case and also does not break
the case Josh mentioned. Also I've changed the place where dur_avg is updated from
put_prev_task() to dequeue_task(), which could fix an issue in v3. Will send v4
out after the tests finish.
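
To make the discussion concrete, roughly what I have in mind for the dequeue-time
update (a sketch only; the field names follow v3, the helper name is made up, and
the EWMA mirrors update_avg() in kernel/sched/core.c -- the real v4 code may differ):

/* Sketch: sample the run duration when the task voluntarily sleeps. */
static void dur_avg_update(struct task_struct *p, bool task_sleep)
{
	u64 dur;

	if (!task_sleep)
		return;

	dur = p->se.sum_exec_runtime - p->se.prev_sum_exec_runtime_vol;
	p->se.prev_sum_exec_runtime_vol = p->se.sum_exec_runtime;

	/* EWMA in the style of update_avg(): avg += (sample - avg) / 8 */
	update_avg(&p->se.dur_avg, dur);
}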

thanks,
Chenyu
> 
> Thanks.

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-12 14:33                   ` Chen Yu
@ 2022-12-12 18:17                     ` Josh Don
  2022-12-13  5:46                       ` Chen Yu
  0 siblings, 1 reply; 21+ messages in thread
From: Josh Don @ 2022-12-12 18:17 UTC (permalink / raw)
  To: Chen Yu
  Cc: Yicong Yang, yangyicong, Vincent Guittot, Joel Fernandes,
	Honglei Wang, Peter Zijlstra, Tim Chen, Mel Gorman, Juri Lelli,
	Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Len Brown, Chen Yu,
	Tianchen Ding, linux-kernel

> BTW, I've changed the threshold to (sysctl_sched_min_granularity / 8) in my next
> version, as this is the value that fits my previous test case and also does not break
> the case Josh mentioned.

Do you mean a hardcoded value of some number of micros, or literally
sched_min_granularity / 8? I don't think the latter is necessary, and
indeed can lead to weirdness if min_gran is too small or too large. I
don't think the concept of what a short duration task is should
expand/contract with min_gran.

Best,
Josh

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-12 18:17                     ` Josh Don
@ 2022-12-13  5:46                       ` Chen Yu
  2022-12-13 10:06                         ` Honglei Wang
  0 siblings, 1 reply; 21+ messages in thread
From: Chen Yu @ 2022-12-13  5:46 UTC (permalink / raw)
  To: Josh Don
  Cc: Yicong Yang, yangyicong, Vincent Guittot, Joel Fernandes,
	Honglei Wang, Peter Zijlstra, Tim Chen, Mel Gorman, Juri Lelli,
	Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Len Brown, Chen Yu,
	Tianchen Ding, linux-kernel

On 2022-12-12 at 10:17:35 -0800, Josh Don wrote:
> > BTW, I've changed the threshold to (sysctl_sched_min_granularity / 8) in my next
> > version, as this is the value that fits my previous test case and also does not break
> > the case Josh mentioned.
> 
> Do you mean a hardcoded value of some number of micros, or literally
> sched_min_granularity / 8?
The latter. According to the tests, the average task duration when the system
is under heavy load is:
6 ~ 9 us for netperf
7 ~ 70 us for hackbench
7 ~ 8 us for tbench
13 ~ 20 ms for schbench
Overall the durations in these microbenchmarks are quite small (except for schbench).
The default sysctl_sched_min_granularity is 750 us in the kernel if no user
has changed it. Then 750 / 8 = 93 us, which is close to what you suggested (100 us).
On the other hand, if someone changes sysctl_sched_min_granularity,
then '8' can be viewed as log2(256). That is, if there are 256 CPUs online
and sysctl_sched_min_granularity is changed to 750 us * log2(256) by
the user, we can divide sysctl_sched_min_granularity by 8 in case
sysctl_sched_min_granularity is too large.
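
In other words, the check would become something like this (a sketch of the v4
direction; '>> 3' is just the divide-by-8 spelled out):

/* Sketch: v4 direction, threshold becomes sysctl_sched_min_granularity / 8. */
static inline int is_short_task(struct task_struct *p)
{
	return sched_feat(SIS_SHORT) &&
	       (p->se.dur_avg <= (sysctl_sched_min_granularity >> 3));
}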

My concern with using a hardcoded value is that this value depends on how fast
the CPU runs (CPU frequency). The values I measured above are from a
CPU running at 1.9 GHz. If a CPU runs faster, a hardcoded value might not
be appropriate and could not be tuned.
> I don't think the latter is necessary, and
> indeed can lead to weirdness if min_gran is too small or too large. I
> don't think the concept of what a short duration task is should
> expand/contract with min_gran.
The value of sysctl_sched_min_granularity might indicate the minimum time the
user would like a task to run. If the user enlarges this value,
does it mean the user wants every task in the system to run longer?
From this point of view I found a connection between the definition of short task
duration and this value. I'm open to changing this value to a fixed one; may
I have more insight into how this value would be set in a production environment?

thanks,
Chenyu
> 
> Best,
> Josh

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-13  5:46                       ` Chen Yu
@ 2022-12-13 10:06                         ` Honglei Wang
  2022-12-13 12:24                           ` Chen Yu
  0 siblings, 1 reply; 21+ messages in thread
From: Honglei Wang @ 2022-12-13 10:06 UTC (permalink / raw)
  To: Chen Yu, Josh Don
  Cc: Yicong Yang, yangyicong, Vincent Guittot, Joel Fernandes,
	Peter Zijlstra, Tim Chen, Mel Gorman, Juri Lelli, Rik van Riel,
	Aaron Lu, Abel Wu, K Prateek Nayak, Gautham R . Shenoy,
	Ingo Molnar, Dietmar Eggemann, Steven Rostedt, Ben Segall,
	Daniel Bristot de Oliveira, Valentin Schneider, Hillf Danton,
	Len Brown, Chen Yu, Tianchen Ding, linux-kernel



On 2022/12/13 13:46, Chen Yu wrote:
> On 2022-12-12 at 10:17:35 -0800, Josh Don wrote:
>>> BTW, I've changed the threshold to (sysctl_sched_min_granularity / 8) in my next
>>> version, as this is the value that fits my previous test case and also does not break
>>> the case Josh mentioned.
>>
>> Do you mean a hardcoded value of some number of micros, or literally
>> sched_min_granularity / 8?
> The latter. According to the test, the average task duration when system
> is under heavy load:
> 6 ~ 9 us for netperf
> 7 ~ 70 us for hackbench
> 7 ~ 8 us for tbench
> 13 ~ 20 ms for schbench
> Overall the duration of the micros are quite small(except for schbench).
> The default sysctl_sched_min_granularity is 750 us in kernel if no user
> has changed it. Then 750 / 8 = 93 us, which is close to what you suggested(100us).
> On the other hand, if someone changes sysctl_sched_min_granularity,
> then '8' can be viewed as log2(256). That is, if there are 256 CPUs online,
> and the sysctl_sched_min_granularity is changed to 750 us * log2(256) by
> the user, we can divide the sysctl_sched_min_granularity by 8 in case the
> sysctl_sched_min_granularity is too large.
> 

Hi Yu,

Seems there is a min_t() call in get_update_sysctl_factor(). In most
cases, we'll get 750 us * (1 + log2(8)) = 3000 us by default, because
sysctl_sched_tunable_scaling is set to '1' (logarithmic scaling) by default.
(Correct me if I misunderstand.)
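
A quick userspace sketch of that scaling, for anyone who wants to check the numbers
(it mirrors the clamp to 8 CPUs and the log factor of get_update_sysctl_factor() as I
understand them; treat it as illustrative, not as the kernel code itself):

#include <stdio.h>

static unsigned int ilog2_u(unsigned int x)
{
	unsigned int r = 0;

	while (x >>= 1)
		r++;
	return r;
}

int main(void)
{
	const unsigned int base_us = 750;	/* default sysctl_sched_min_granularity */
	const unsigned int ncpus[] = { 1, 2, 4, 8, 64, 256 };

	for (unsigned int i = 0; i < sizeof(ncpus) / sizeof(ncpus[0]); i++) {
		unsigned int cpus = ncpus[i] < 8 ? ncpus[i] : 8;	/* min_t(..., 8) */
		unsigned int factor = 1 + ilog2_u(cpus);		/* log scaling */

		printf("%3u CPUs -> factor %u -> min_granularity %u us\n",
		       ncpus[i], factor, base_us * factor);
	}
	return 0;
}

With 8 or more CPUs the factor is 4, hence the 750 us * 4 = 3000 us default mentioned above.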

For the value in production environments, I've seen 10 ms and 3 ms in
different places, FYI. Hope this helps.

Thanks,
Honglei

> My concern of using hardcoded value is that, this value depends on how fast
> the CPU runs(cpu frequency). The value I measured above is when the
> CPU is running at 1.9Ghz. If a CPU runs faster, a hard code value might not
> be appropriate and could not be tuned.
>> I don't think the latter is necessary, and
>> indeed can lead to weirdness if min_gran is too small or too large. I
>> don't think the concept of what a short duration task is should
>> expand/contract with min_gran.
> The value of sysctl_sched_min_granularity might indicate how long the
> user would like a task to run at least. If the user enlarge this value,
> does it mean the user wants every task in the system to run longer?
> From this point of view I found a connection between the definition of short task
> duration and this value. I'm open to changing this value to a fixed one, may
> I have more insights on how this value would be set in production environment?
> 
> thanks,
> Chenyu
>>
>> Best,
>> Josh

^ permalink raw reply	[flat|nested] 21+ messages in thread

* Re: [PATCH v3 1/2] sched/fair: Introduce short duration task check
  2022-12-13 10:06                         ` Honglei Wang
@ 2022-12-13 12:24                           ` Chen Yu
  0 siblings, 0 replies; 21+ messages in thread
From: Chen Yu @ 2022-12-13 12:24 UTC (permalink / raw)
  To: Honglei Wang
  Cc: Josh Don, Yicong Yang, yangyicong, Vincent Guittot,
	Joel Fernandes, Peter Zijlstra, Tim Chen, Mel Gorman, Juri Lelli,
	Rik van Riel, Aaron Lu, Abel Wu, K Prateek Nayak,
	Gautham R . Shenoy, Ingo Molnar, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Daniel Bristot de Oliveira,
	Valentin Schneider, Hillf Danton, Len Brown, Chen Yu,
	Tianchen Ding, linux-kernel

On 2022-12-13 at 18:06:50 +0800, Honglei Wang wrote:
> 
> 
> On 2022/12/13 13:46, Chen Yu wrote:
> > On 2022-12-12 at 10:17:35 -0800, Josh Don wrote:
> > > > BTW, I've changed the threshold to (sysctl_sched_min_granularity / 8) in my next
> > > > version, as this is the value that fits my previous test case and also does not break
> > > > the case Josh mentioned.
> > > 
> > > Do you mean a hardcoded value of some number of micros, or literally
> > > sched_min_granularity / 8?
> > The latter. According to the test, the average task duration when system
> > is under heavy load:
> > 6 ~ 9 us for netperf
> > 7 ~ 70 us for hackbench
> > 7 ~ 8 us for tbench
> > 13 ~ 20 ms for schbench
> > Overall the duration of the micros are quite small(except for schbench).
> > The default sysctl_sched_min_granularity is 750 us in kernel if no user
> > has changed it. Then 750 / 8 = 93 us, which is close to what you suggested(100us).
> > On the other hand, if someone changes sysctl_sched_min_granularity,
> > then '8' can be viewed as log2(256). That is, if there are 256 CPUs online,
> > and the sysctl_sched_min_granularity is changed to 750 us * log2(256) by
> > the user, we can divide the sysctl_sched_min_granularity by 8 in case the
> > sysctl_sched_min_granularity is too large.
> > 
> 
> Hi Yu,
> 
> Seems there is a min_t() call in get_update_sysctl_factor(). In most cases,
> we'll get 750 us * (1+log2(8)) = 3000 us in default due to
> sysctl_sched_tunable_scaling is set as '1' default. (Correct me if I
> misunderstand).
>
Thanks for pointing this out! I overlooked this part previously.
So (sysctl_sched_min_granularity / 8) is 375 us by default.
> For the value in production environment, I've seen 10 ms and 3 ms in
> different place, FYI. Hope this help.
I see. 10 ms would generate a short-duration threshold of 1.25 ms. From
my understanding, if the user increases sysctl_sched_min_granularity,
then tasks with a longer average duration are expected in the system,
and the short-duration threshold should also be raised.

thanks,
Chenyu

^ permalink raw reply	[flat|nested] 21+ messages in thread

end of thread, other threads:[~2022-12-13 12:25 UTC | newest]

Thread overview: 21+ messages (download: mbox.gz / follow: Atom feed)
-- links below jump to the message on this page --
2022-12-01  8:43 [PATCH v3 0/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
2022-12-01  8:44 ` [PATCH v3 1/2] sched/fair: Introduce short duration task check Chen Yu
2022-12-02  7:44   ` Honglei Wang
2022-12-03  7:49     ` Chen Yu
2022-12-03 15:35       ` Joel Fernandes
2022-12-05  8:38         ` Chen Yu
2022-12-05  9:59           ` Vincent Guittot
2022-12-05 14:38             ` Chen Yu
2022-12-07  2:23             ` Josh Don
2022-12-07 14:24               ` Chen Yu
2022-12-12 11:22                 ` Yicong Yang
2022-12-12 14:33                   ` Chen Yu
2022-12-12 18:17                     ` Josh Don
2022-12-13  5:46                       ` Chen Yu
2022-12-13 10:06                         ` Honglei Wang
2022-12-13 12:24                           ` Chen Yu
2022-12-01  8:44 ` [PATCH v3 2/2] sched/fair: Choose the CPU where short task is running during wake up Chen Yu
2022-12-05  2:36   ` Tianchen Ding
2022-12-05  8:40     ` Chen Yu
2022-12-06 13:02   ` Yicong Yang
2022-12-07  3:54     ` Chen Yu
