* [PATCH v10 0/9] Add latency priority for CFS class
@ 2023-01-13 14:12 Vincent Guittot
  2023-01-13 14:12 ` [PATCH v10 1/9] sched/fair: fix unfairness at wakeup Vincent Guittot
                   ` (8 more replies)
  0 siblings, 9 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-01-13 14:12 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, linux-kernel, parth, cgroups
  Cc: qyousef, chris.hyser, patrick.bellasi, David.Laight, pjt, pavel,
	tj, qperret, tim.c.chen, joshdon, timj, kprateek.nayak,
	yu.c.chen, youssefesmat, joel, Vincent Guittot

This patchset restarts the work of adding a latency priority to describe
the latency tolerance of cfs tasks.

Patch [1] is a new one that was added with v6. It fixes an unfairness
for low prio tasks, where wakeup_gran() can be bigger than the maximum
vruntime credit that a waking task can keep after sleeping.
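
As an illustration (using typical scaled defaults on an 8-CPU system,
i.e. sysctl_sched_latency = 24ms and sysctl_sched_wakeup_granularity =
4ms, with GENTLE_FAIR_SLEEPERS): the vruntime credit of a waking task is
capped to 24/2 = 12ms, whereas for a nice 19 waker wakeup_gran() is
scaled by the inverse of its weight, roughly 4ms * 1024/15 ~= 273ms. The
capped vdiff can then never exceed the granularity and the task never
preempts at wakeup.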

The patches [2-4] have been done by Parth:
https://lore.kernel.org/lkml/20200228090755.22829-1-parth@linux.ibm.com/

I have just rebased them and moved the setting of the latency priority
outside the priority update path. I have removed the Reviewed-by tags
because the patches are 2 years old.

This aims to be a generic interface, and the following patches are one
use of it to improve the scheduling latency of cfs tasks.

Patch [5] uses the latency nice priority to define a latency offset and
then to decide if a cfs task can or should preempt the currently running
task. The patch gives some test results with cyclictest and hackbench to
highlight the benefit of latency priority for short interactive tasks
and long intensive ones.

Patch [6] adds support of the latency nice priority to task groups by
adding a cpu.latency.nice field. The range is [-20:19], as for setting a
task latency priority.

Patch [7] makes sched_core take the latency offset into account.

Patch [8] adds a rb tree to cover some corner cases where a latency
sensitive task (priority < 0) is preempted by a high priority task (RT/DL)
or fails to preempt one. This patch ensures that such a task will run
with priority for at least a sched_min_granularity slice after its wakeup.

Patch [9] removes a check that becomes useless after adding the latency
rb tree.

I have also backported the patchset on a dragonboard RB3 with an Android
mainline kernel based on v5.18 for a quick test. I have used the
TouchLatency app, which is part of AOSP and described as a very good test
to highlight jitter and jank frame sources of a system [1].
In addition to the app, I have added some short running tasks waking up
regularly (using the 8 cpus for 4 ms every 37777us) to stress the system
without overloading it (and disabling EAS). The first results show that
the patchset helps to reduce the missed deadline frames from 5% to less
than 0.1% when the cpu.latency.nice of the task groups is set. I haven't
rerun the test with the latest version.

I have also tested the patchset with the modified version of the alsa
latency test that was shared by Tim. The test quickly hits xruns with the
default latency nice priority of 0 but is able to run without underruns
with a latency nice of -20 while hackbench runs simultaneously.

While preparing version 8, I evaluated the benefit of using an augmented
rbtree instead of adding a dedicated rbtree for latency sensitive
entities, which was a relevant suggestion made by PeterZ. Although the
augmented rbtree makes it possible to sort additional information in the
tree with a limited overhead, it has more impact on legacy use cases
(latency_nice >= 0) because the augmented callbacks are always called to
maintain this additional information, even when there are no latency
sensitive tasks. In such cases, the dedicated rbtree remains empty and
the overhead is reduced to loading a cached null node pointer.
Nevertheless, we might want to reconsider the augmented rbtree once the
use of negative latency_nice is more widely deployed. For now, the
different tests that I have run have not shown improvements with the
augmented rbtree.

Below are some hackbench results:
                    2 rbtrees       augmented rbtree    augmented rbtree
                                    sorted by vruntime  sorted by wakeup_vruntime
sched pipe
avg                 26311.000       25976.667           25839.556
stdev               0.15 %          0.28 %              0.24 %
vs tip              0.50 %          -0.78 %             -1.31 %
hackbench 1 group
avg                 1.315           1.344               1.359
stdev               0.88 %          1.55 %              1.82 %
vs tip              -0.47 %         -2.68 %             -3.87 %
hackbench 4 groups
avg                 1.339           1.365               1.367
stdev               2.39 %          2.26 %              3.58 %
vs tip              -0.08 %         -2.01 %             -2.22 %
hackbench 8 groups
avg                 1.233           1.286               1.301
stdev               0.74 %          1.09 %              1.52 %
vs tip              0.29 %          -4.05 %             -5.27 %
hackbench 16 groups
avg                 1.268           1.313               1.319
stdev               0.85 %          1.60 %              0.68 %
vs tip              -0.02 %         -3.56 %             -4.01 %

[1] https://source.android.com/docs/core/debug/eval_perf#touchlatency

Change since v9:
- Rebase
- Add tags

Change since v8:
- Rename get_sched_latency to get_sleep_latency
- Move latency nice defines to sched/prio.h and fix latency_prio init value
- Fix typos and comments

Change since v7:
- Replaced se->on_latency with RB_CLEAR_NODE() and RB_EMPTY_NODE()
- Clarify the limit behavior of the cgroup cpu.latency.nice

Change since v6:
- Fix compilation error for !CONFIG_SCHED_DEBUG

Change since v5:
- Add patch 1 to fix unfairness for low prio tasks. This was discovered
  while studying Youssef's test results with latency nice, which were
  hitting the same problem.
- Fixed latency_offset computation to take GENTLE_FAIR_SLEEPERS into
  account. This had disappeared with v2 and was raised by Youssef's
  tests.
- Reworked and optimized how latency_offset is used to check for
  preempting the current task at wakeup and at tick. This covers more
  cases too.
- Add patch 9 to remove check_preempt_from_others() which is not needed
  anymore with the rb tree.

Change since v4:
- Removed permission checks to set latency priority. This enables users
  without elevated privileges, like audio applications, to set their
  latency priority, as requested by Tim.
- Removed cpu.latency and replaced it with cpu.latency.nice so we keep a
  generic interface, not tied to latency_offset, which can be used to
  implement other latency features.
- Added an entry in Documentation/admin-guide/cgroup-v2.rst to describe
  cpu.latency.nice.
- Fix some typos.

Change since v3:
- Fix 2 compilation warnings raised by kernel test robot <lkp@intel.com>

Change since v2:
- Set a latency_offset field instead of saving a weight and computing it
  on the fly.
- Make latency_offset available for task group: cpu.latency
- Fix some corner cases to make latency sensitive tasks schedule first
  and add a rb tree for latency sensitive tasks.

Change since v1:
- Fix typos
- Move some code to the right patch to make bisect happy
- Simplify and fix how the weight is computed
- Add support of sched core in patch 7

Parth Shah (3):
  sched: Introduce latency-nice as a per-task attribute
  sched/core: Propagate parent task's latency requirements to the child
    task
  sched: Allow sched_{get,set}attr to change latency_nice of the task

Vincent Guittot (6):
  sched/fair: fix unfairness at wakeup
  sched/fair: Take into account latency priority at wakeup
  sched/fair: Add sched group latency support
  sched/core: Support latency priority with sched core
  sched/fair: Add latency list
  sched/fair: remove check_preempt_from_others

 Documentation/admin-guide/cgroup-v2.rst |  10 ++
 include/linux/sched.h                   |   4 +
 include/linux/sched/prio.h              |  27 +++
 include/uapi/linux/sched.h              |   4 +-
 include/uapi/linux/sched/types.h        |  19 +++
 init/init_task.c                        |   1 +
 kernel/sched/core.c                     | 106 ++++++++++++
 kernel/sched/debug.c                    |   1 +
 kernel/sched/fair.c                     | 209 ++++++++++++++++++++----
 kernel/sched/sched.h                    |  45 ++++-
 tools/include/uapi/linux/sched.h        |   4 +-
 11 files changed, 394 insertions(+), 36 deletions(-)

-- 
2.34.1



* [PATCH v10 1/9] sched/fair: fix unfairness at wakeup
  2023-01-13 14:12 [PATCH v10 0/9] Add latency priority for CFS class Vincent Guittot
@ 2023-01-13 14:12 ` Vincent Guittot
  2023-01-13 14:12 ` [PATCH v10 2/9] sched: Introduce latency-nice as a per-task attribute Vincent Guittot
                   ` (7 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-01-13 14:12 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, linux-kernel, parth, cgroups
  Cc: qyousef, chris.hyser, patrick.bellasi, David.Laight, pjt, pavel,
	tj, qperret, tim.c.chen, joshdon, timj, kprateek.nayak,
	yu.c.chen, youssefesmat, joel, Vincent Guittot

At wakeup, the vruntime of a task is updated so that it is not older than
a sched_latency period behind the min_vruntime. This prevents a long
sleeping task from getting unlimited credit at wakeup.
Such a waking task should preempt the current one to use its share of CPU
bandwidth, but wakeup_gran() can be larger than sched_latency, filtering
out the wakeup preemption and, as a result, stealing some CPU bandwidth
from the waking task.

Make sure that a task whose vruntime has been capped will preempt the
current task and use its CPU bandwidth even if wakeup_gran() is in the
same range as sched_latency.

If the waking task fails to preempt the current one, it could have to
wait up to sysctl_sched_min_granularity before preempting it during the
next tick.

Strictly speaking, we should use cfs->min_vruntime instead of
curr->vruntime, but it isn't worth the additional overhead and
complexity, as the vruntime of current should be close to min_vruntime,
if not equal.
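
To make the numbers concrete, with assumed scaled defaults of
sysctl_sched_latency = 24ms and sysctl_sched_min_granularity = 3ms:
get_sleep_latency(false) returns 24/2 = 12ms under GENTLE_FAIR_SLEEPERS
and get_latency_max() returns 12 - 3 = 9ms. wakeup_gran() is thus clamped
below the 12ms of vruntime credit a capped long sleeper wakes up with, so
such a task is guaranteed to preempt; the 3ms of slack covers the worst
case of waiting one min granularity until the next tick.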

Reported-by: Youssef Esmat <youssefesmat@chromium.org>
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Reviewed-by: Joel Fernandes (Google) <joel@joelfernandes.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 kernel/sched/fair.c  | 46 ++++++++++++++++++++------------------------
 kernel/sched/sched.h | 34 +++++++++++++++++++++++++++++++-
 2 files changed, 54 insertions(+), 26 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e9d906a9bba9..8a85c6cf781e 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4657,33 +4657,17 @@ place_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int initial)
 {
 	u64 vruntime = cfs_rq->min_vruntime;
 
-	/*
-	 * The 'current' period is already promised to the current tasks,
-	 * however the extra weight of the new task will slow them down a
-	 * little, place the new task so that it fits in the slot that
-	 * stays open at the end.
-	 */
-	if (initial && sched_feat(START_DEBIT))
-		vruntime += sched_vslice(cfs_rq, se);
-
-	/* sleeps up to a single latency don't count. */
-	if (!initial) {
-		unsigned long thresh;
-
-		if (se_is_idle(se))
-			thresh = sysctl_sched_min_granularity;
-		else
-			thresh = sysctl_sched_latency;
-
+	if (!initial)
+		/* sleeps up to a single latency don't count. */
+		vruntime -= get_sleep_latency(se_is_idle(se));
+	else if (sched_feat(START_DEBIT))
 		/*
-		 * Halve their sleep time's effect, to allow
-		 * for a gentler effect of sleepers:
+		 * The 'current' period is already promised to the current tasks,
+		 * however the extra weight of the new task will slow them down a
+		 * little, place the new task so that it fits in the slot that
+		 * stays open at the end.
 		 */
-		if (sched_feat(GENTLE_FAIR_SLEEPERS))
-			thresh >>= 1;
-
-		vruntime -= thresh;
-	}
+		vruntime += sched_vslice(cfs_rq, se);
 
 	/* ensure we never gain time by being placed backwards. */
 	se->vruntime = max_vruntime(se->vruntime, vruntime);
@@ -7656,6 +7640,18 @@ wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
 		return -1;
 
 	gran = wakeup_gran(se);
+
+	/*
+	 * At wakeup, the vruntime of a task is capped to not be older than
+	 * a sched_latency period compared to min_vruntime. This prevents a
+	 * long sleeping task from getting unlimited credit at wakeup. Such a
+	 * waking task has to preempt current in order to not lose its share
+	 * of CPU bandwidth, but wakeup_gran() can become higher than the
+	 * scheduling period for a low priority task. Make sure that a long
+	 * sleeping task will get a chance to preempt current.
+	 */
+	gran = min_t(s64, gran, get_latency_max());
+
 	if (vdiff > gran)
 		return 1;
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 1072502976df..df7db06c9943 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2459,9 +2459,9 @@ extern void check_preempt_curr(struct rq *rq, struct task_struct *p, int flags);
 extern const_debug unsigned int sysctl_sched_nr_migrate;
 extern const_debug unsigned int sysctl_sched_migration_cost;
 
-#ifdef CONFIG_SCHED_DEBUG
 extern unsigned int sysctl_sched_latency;
 extern unsigned int sysctl_sched_min_granularity;
+#ifdef CONFIG_SCHED_DEBUG
 extern unsigned int sysctl_sched_idle_min_granularity;
 extern unsigned int sysctl_sched_wakeup_granularity;
 extern int sysctl_resched_latency_warn_ms;
@@ -2476,6 +2476,38 @@ extern unsigned int sysctl_numa_balancing_scan_size;
 extern unsigned int sysctl_numa_balancing_hot_threshold;
 #endif
 
+static inline unsigned long get_sleep_latency(bool idle)
+{
+	unsigned long thresh;
+
+	if (idle)
+		thresh = sysctl_sched_min_granularity;
+	else
+		thresh = sysctl_sched_latency;
+
+	/*
+	 * Halve their sleep time's effect, to allow
+	 * for a gentler effect of sleepers:
+	 */
+	if (sched_feat(GENTLE_FAIR_SLEEPERS))
+		thresh >>= 1;
+
+	return thresh;
+}
+
+static inline unsigned long get_latency_max(void)
+{
+	unsigned long thresh = get_sleep_latency(false);
+
+	 /*
+	  * If the waking task fails to preempt current, it could have to wait
+	  * up to sysctl_sched_min_granularity before preempting it at next tick.
+	  */
+	thresh -= sysctl_sched_min_granularity;
+
+	return thresh;
+}
+
 #ifdef CONFIG_SCHED_HRTICK
 
 /*
-- 
2.34.1



* [PATCH v10 2/9] sched: Introduce latency-nice as a per-task attribute
  2023-01-13 14:12 [PATCH v10 0/9] Add latency priority for CFS class Vincent Guittot
  2023-01-13 14:12 ` [PATCH v10 1/9] sched/fair: fix unfairness at wakeup Vincent Guittot
@ 2023-01-13 14:12 ` Vincent Guittot
  2023-01-13 14:12 ` [PATCH v10 3/9] sched/core: Propagate parent task's latency requirements to the child task Vincent Guittot
                   ` (6 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-01-13 14:12 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, linux-kernel, parth, cgroups
  Cc: qyousef, chris.hyser, patrick.bellasi, David.Laight, pjt, pavel,
	tj, qperret, tim.c.chen, joshdon, timj, kprateek.nayak,
	yu.c.chen, youssefesmat, joel, Vincent Guittot

From: Parth Shah <parth@linux.ibm.com>

Latency-nice indicates the latency requirements of a task with respect
to the other tasks in the system. The value of the attribute can be
within the range [-20, 19], both inclusive, in line with the task nice
values.

latency_nice = -20 indicates that the task requires the least latency,
as compared to tasks having latency_nice = +19.

The latency_nice attribute only affects the CFS sched class, which gets
the latency requirements from userspace.

Additionally, add debug bits for the newly added latency_nice attribute.
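
With the debug bits, the latency_nice value should also be visible at
runtime in /proc/<pid>/sched, next to the policy and prio fields printed
by the same function.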

Signed-off-by: Parth Shah <parth@linux.ibm.com>
[rebase, move defines in sched/prio.h]
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 include/linux/sched.h      |  1 +
 include/linux/sched/prio.h | 18 ++++++++++++++++++
 kernel/sched/debug.c       |  1 +
 3 files changed, 20 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 4df2b3e76b30..6c61bde49152 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -784,6 +784,7 @@ struct task_struct {
 	int				static_prio;
 	int				normal_prio;
 	unsigned int			rt_priority;
+	int				latency_nice;
 
 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
index ab83d85e1183..bfcd7f1d1e11 100644
--- a/include/linux/sched/prio.h
+++ b/include/linux/sched/prio.h
@@ -42,4 +42,22 @@ static inline long rlimit_to_nice(long prio)
 	return (MAX_NICE - prio + 1);
 }
 
+/*
+ * Latency nice is meant to provide scheduler hints about the relative
+ * latency requirements of a task with respect to other tasks.
+ * Thus a task with latency_nice == 19 is hinted as having no latency
+ * requirements, in contrast to a task with latency_nice == -20, which
+ * should be given priority in terms of lower latency.
+ */
+#define MAX_LATENCY_NICE	19
+#define MIN_LATENCY_NICE	-20
+
+#define LATENCY_NICE_WIDTH	\
+	(MAX_LATENCY_NICE - MIN_LATENCY_NICE + 1)
+
+/*
+ * Default tasks should be treated as a task with latency_nice = 0.
+ */
+#define DEFAULT_LATENCY_NICE	0
+
 #endif /* _LINUX_SCHED_PRIO_H */
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 1637b65ba07a..68be7a3e42a3 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1043,6 +1043,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 #endif
 	P(policy);
 	P(prio);
+	P(latency_nice);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
-- 
2.34.1



* [PATCH v10 3/9] sched/core: Propagate parent task's latency requirements to the child task
  2023-01-13 14:12 [PATCH v10 0/9] Add latency priority for CFS class Vincent Guittot
  2023-01-13 14:12 ` [PATCH v10 1/9] sched/fair: fix unfairness at wakeup Vincent Guittot
  2023-01-13 14:12 ` [PATCH v10 2/9] sched: Introduce latency-nice as a per-task attribute Vincent Guittot
@ 2023-01-13 14:12 ` Vincent Guittot
  2023-01-13 14:12 ` [PATCH v10 4/9] sched: Allow sched_{get,set}attr to change latency_nice of the task Vincent Guittot
                   ` (5 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-01-13 14:12 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, linux-kernel, parth, cgroups
  Cc: qyousef, chris.hyser, patrick.bellasi, David.Laight, pjt, pavel,
	tj, qperret, tim.c.chen, joshdon, timj, kprateek.nayak,
	yu.c.chen, youssefesmat, joel, Vincent Guittot

From: Parth Shah <parth@linux.ibm.com>

Clone the parent task's latency_nice attribute to the forked child task.

Reset the latency_nice value to the default when the child task has
sched_reset_on_fork set.

Also, initialize init_task.latency_nice with the DEFAULT_LATENCY_NICE
value.

Signed-off-by: Parth Shah <parth@linux.ibm.com>
[rebase]
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 init/init_task.c    | 1 +
 kernel/sched/core.c | 1 +
 2 files changed, 2 insertions(+)

diff --git a/init/init_task.c b/init/init_task.c
index ff6c4b9bfe6b..7dd71dd2d261 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,6 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
+	.latency_nice	= DEFAULT_LATENCY_NICE,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 03b8529db73f..012a1f551f4f 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4632,6 +4632,7 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);
 
+		p->latency_nice = DEFAULT_LATENCY_NICE;
 		/*
 		 * We don't need the reset flag anymore after the fork. It has
 		 * fulfilled its duty:
-- 
2.34.1



* [PATCH v10 4/9] sched: Allow sched_{get,set}attr to change latency_nice of the task
  2023-01-13 14:12 [PATCH v10 0/9] Add latency priority for CFS class Vincent Guittot
                   ` (2 preceding siblings ...)
  2023-01-13 14:12 ` [PATCH v10 3/9] sched/core: Propagate parent task's latency requirements to the child task Vincent Guittot
@ 2023-01-13 14:12 ` Vincent Guittot
  2023-01-13 14:12 ` [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup Vincent Guittot
                   ` (4 subsequent siblings)
  8 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-01-13 14:12 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, linux-kernel, parth, cgroups
  Cc: qyousef, chris.hyser, patrick.bellasi, David.Laight, pjt, pavel,
	tj, qperret, tim.c.chen, joshdon, timj, kprateek.nayak,
	yu.c.chen, youssefesmat, joel, Vincent Guittot

From: Parth Shah <parth@linux.ibm.com>

Introduce the latency_nice attribute to sched_attr and provide a
mechanism to change the value via the sched_setattr/sched_getattr
syscalls.

Also add a new flag, SCHED_FLAG_LATENCY_NICE, to signal a change of the
latency_nice of the task on a sched_setattr syscall.
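
For reference, a minimal (illustrative, untested) userspace sketch of
the interface could look like the following. glibc does not wrap
sched_setattr(), so the struct layout and the flag values are copied
from the uapi changes below; SCHED_FLAG_KEEP_ALL (KEEP_POLICY |
KEEP_PARAMS) is used so that only the latency nice value changes:

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <sys/syscall.h>
#include <unistd.h>

/* glibc does not define struct sched_attr; mirror the uapi layout,
 * including the new VER2 field. */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;		/* SCHED_DEADLINE fields */
	uint64_t sched_deadline;
	uint64_t sched_period;
	uint32_t sched_util_min;	/* utilization clamps (VER1) */
	uint32_t sched_util_max;
	int32_t  sched_latency_nice;	/* latency hint (VER2) */
};

#define SCHED_FLAG_KEEP_ALL	0x18	/* KEEP_POLICY | KEEP_PARAMS */
#define SCHED_FLAG_LATENCY_NICE	0x80

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	/* pass the uapi size, not sizeof(), to avoid trailing padding */
	attr.size = 60;			/* SCHED_ATTR_SIZE_VER2 */
	attr.sched_flags = SCHED_FLAG_KEEP_ALL | SCHED_FLAG_LATENCY_NICE;
	attr.sched_latency_nice = -20;	/* most latency sensitive */

	/* pid 0 means the calling task; values outside [-20, 19] are
	 * rejected with -EINVAL by the check added below */
	if (syscall(SYS_sched_setattr, 0, &attr, 0))
		perror("sched_setattr");

	return 0;
}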

Signed-off-by: Parth Shah <parth@linux.ibm.com>
[rebase and add a dedicated __setscheduler_latency ]
Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 include/uapi/linux/sched.h       |  4 +++-
 include/uapi/linux/sched/types.h | 19 +++++++++++++++++++
 kernel/sched/core.c              | 24 ++++++++++++++++++++++++
 tools/include/uapi/linux/sched.h |  4 +++-
 4 files changed, 49 insertions(+), 2 deletions(-)

diff --git a/include/uapi/linux/sched.h b/include/uapi/linux/sched.h
index 3bac0a8ceab2..b2e932c25be6 100644
--- a/include/uapi/linux/sched.h
+++ b/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
diff --git a/include/uapi/linux/sched/types.h b/include/uapi/linux/sched/types.h
index f2c4589d4dbf..db1e8199e8c8 100644
--- a/include/uapi/linux/sched/types.h
+++ b/include/uapi/linux/sched/types.h
@@ -10,6 +10,7 @@ struct sched_param {
 
 #define SCHED_ATTR_SIZE_VER0	48	/* sizeof first published struct */
 #define SCHED_ATTR_SIZE_VER1	56	/* add: util_{min,max} */
+#define SCHED_ATTR_SIZE_VER2	60	/* add: latency_nice */
 
 /*
  * Extended scheduling parameters data structure.
@@ -98,6 +99,22 @@ struct sched_param {
  * scheduled on a CPU with no more capacity than the specified value.
  *
  * A task utilization boundary can be reset by setting the attribute to -1.
+ *
+ * Latency Tolerance Attributes
+ * ============================
+ *
+ * A subset of sched_attr attributes allows specifying the relative latency
+ * requirements of a task with respect to the other tasks running/queued in the
+ * system.
+ *
+ * @ sched_latency_nice	task's latency_nice value
+ *
+ * The latency_nice of a task can have any value in a range of
+ * [MIN_LATENCY_NICE..MAX_LATENCY_NICE].
+ *
+ * A task with latency_nice set to MIN_LATENCY_NICE can be considered
+ * as requiring lower latency, as opposed to a task with a higher
+ * latency_nice value.
  */
 struct sched_attr {
 	__u32 size;
@@ -120,6 +137,8 @@ struct sched_attr {
 	__u32 sched_util_min;
 	__u32 sched_util_max;
 
+	/* latency requirement hints */
+	__s32 sched_latency_nice;
 };
 
 #endif /* _UAPI_LINUX_SCHED_TYPES_H */
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 012a1f551f4f..981665550f8c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7392,6 +7392,14 @@ static void __setscheduler_params(struct task_struct *p,
 	p->rt_priority = attr->sched_priority;
 	p->normal_prio = normal_prio(p);
 	set_load_weight(p, true);
+
+}
+
+static void __setscheduler_latency(struct task_struct *p,
+		const struct sched_attr *attr)
+{
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
+		p->latency_nice = attr->sched_latency_nice;
 }
 
 /*
@@ -7534,6 +7542,13 @@ static int __sched_setscheduler(struct task_struct *p,
 			return retval;
 	}
 
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		if (attr->sched_latency_nice > MAX_LATENCY_NICE)
+			return -EINVAL;
+		if (attr->sched_latency_nice < MIN_LATENCY_NICE)
+			return -EINVAL;
+	}
+
 	if (pi)
 		cpuset_read_lock();
 
@@ -7568,6 +7583,9 @@ static int __sched_setscheduler(struct task_struct *p,
 			goto change;
 		if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 			goto change;
+		if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
+		    attr->sched_latency_nice != p->latency_nice)
+			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
 		retval = 0;
@@ -7656,6 +7674,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		__setscheduler_params(p, attr);
 		__setscheduler_prio(p, newprio);
 	}
+	__setscheduler_latency(p, attr);
 	__setscheduler_uclamp(p, attr);
 
 	if (queued) {
@@ -7866,6 +7885,9 @@ static int sched_copy_attr(struct sched_attr __user *uattr, struct sched_attr *a
 	    size < SCHED_ATTR_SIZE_VER1)
 		return -EINVAL;
 
+	if ((attr->sched_flags & SCHED_FLAG_LATENCY_NICE) &&
+	    size < SCHED_ATTR_SIZE_VER2)
+		return -EINVAL;
 	/*
 	 * XXX: Do we want to be lenient like existing syscalls; or do we want
 	 * to be strict and return an error on out-of-bounds values?
@@ -8103,6 +8125,8 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;
 
+	kattr.sched_latency_nice = p->latency_nice;
+
 #ifdef CONFIG_UCLAMP_TASK
 	/*
 	 * This could race with another potential updater, but this is fine
diff --git a/tools/include/uapi/linux/sched.h b/tools/include/uapi/linux/sched.h
index 3bac0a8ceab2..b2e932c25be6 100644
--- a/tools/include/uapi/linux/sched.h
+++ b/tools/include/uapi/linux/sched.h
@@ -132,6 +132,7 @@ struct clone_args {
 #define SCHED_FLAG_KEEP_PARAMS		0x10
 #define SCHED_FLAG_UTIL_CLAMP_MIN	0x20
 #define SCHED_FLAG_UTIL_CLAMP_MAX	0x40
+#define SCHED_FLAG_LATENCY_NICE		0x80
 
 #define SCHED_FLAG_KEEP_ALL	(SCHED_FLAG_KEEP_POLICY | \
 				 SCHED_FLAG_KEEP_PARAMS)
@@ -143,6 +144,7 @@ struct clone_args {
 			 SCHED_FLAG_RECLAIM		| \
 			 SCHED_FLAG_DL_OVERRUN		| \
 			 SCHED_FLAG_KEEP_ALL		| \
-			 SCHED_FLAG_UTIL_CLAMP)
+			 SCHED_FLAG_UTIL_CLAMP		| \
+			 SCHED_FLAG_LATENCY_NICE)
 
 #endif /* _UAPI_LINUX_SCHED_H */
-- 
2.34.1



* [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
  2023-01-13 14:12 [PATCH v10 0/9] Add latency priority for CFS class Vincent Guittot
                   ` (3 preceding siblings ...)
  2023-01-13 14:12 ` [PATCH v10 4/9] sched: Allow sched_{get,set}attr to change latency_nice of the task Vincent Guittot
@ 2023-01-13 14:12 ` Vincent Guittot
  2023-02-21 12:52   ` Peter Zijlstra
  2023-02-21 13:04   ` Peter Zijlstra
  2023-01-13 14:12 ` [PATCH v10 6/9] sched/fair: Add sched group latency support Vincent Guittot
                   ` (3 subsequent siblings)
  8 siblings, 2 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-01-13 14:12 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, linux-kernel, parth, cgroups
  Cc: qyousef, chris.hyser, patrick.bellasi, David.Laight, pjt, pavel,
	tj, qperret, tim.c.chen, joshdon, timj, kprateek.nayak,
	yu.c.chen, youssefesmat, joel, Vincent Guittot

Take into account the latency priority of a thread when deciding to
preempt the current running thread. We don't want to provide more CPU
bandwidth to a thread, but to reorder the scheduling so that a latency
sensitive task runs first whenever possible.

As long as a thread didn't use its bandwidth, it will be able to preempt
the current thread.

Conversely, a thread with a low latency priority will preempt the
current thread at wakeup only to keep fair CPU bandwidth sharing.
Otherwise it will wait for the tick to get its sched slice.

                                   curr vruntime
                                       |
                      sysctl_sched_wakeup_granularity
                                   <-->
----------------------------------|----|-----------------------|---------------
                                  |    |<--------------------->
                                  |    .  sysctl_sched_latency
                                  |    .
default/current latency entity    |    .
                                  |    .
1111111111111111111111111111111111|0000|-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-
se preempts curr at wakeup ------>|<- se doesn't preempt curr -----------------
                                  |    .
                                  |    .
                                  |    .
low latency entity                |    .
                                   ---------------------->|
                               % of sysctl_sched_latency  |
1111111111111111111111111111111111111111111111111111111111|0000|-1-1-1-1-1-1-1-
preempt ------------------------------------------------->|<- do not preempt --
                                  |    .
                                  |    .
                                  |    .
high latency entity               |    .
         |<-----------------------|----.
         | % of sysctl_sched_latency   .
111111111|0000|-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1-1
preempt->|<- se doesn't preempt curr ------------------------------------------

Test results of nice latency impact on a heavy load like hackbench:

hackbench -l (2560 / group) -g group
group        latency 0             latency 19
1            1.378(+/-  1%)      1.337(+/- 1%) + 3%
4            1.393(+/-  3%)      1.312(+/- 3%) + 6%
8            1.308(+/-  2%)      1.279(+/- 1%) + 2%
16           1.347(+/-  1%)      1.317(+/- 1%) + 2%

hackbench -p -l (2560 / group) -g group
group
1            1.836(+/- 17%)      1.148(+/- 5%) +37%
4            1.586(+/-  6%)      1.109(+/- 8%) +30%
8            1.209(+/-  4%)      0.780(+/- 4%) +35%
16           0.805(+/-  5%)      0.728(+/- 4%) +10%

By decreasing the latency prio, we reduce the number of preemptions at
wakeup and help hackbench make progress.

Test results of nice latency impact on a short-lived load like cyclictest
while competing with a heavy load like hackbench:

hackbench -l 10000 -g $group &
cyclictest --policy other -D 5 -q -n
        latency 0           latency -20
group   min  avg    max     min  avg    max
0       16    19     29      17   18     29
1       43   299   7359      63   84   3422
4       56   449  14806      45   83    284
8       63   820  51123      63   83    283
16      64  1326  70684      41  157  26852

group = 0 means that hackbench is not running.

The avg is significantly improved with nice latency -20, especially with
a large number of groups, but min and max remain quite similar. If we add
the histogram parameter to get details of the latency, we have:

hackbench -l 10000 -g 16 &
cyclictest --policy other -D 5 -q -n  -H 20000 --histfile data.txt
              latency 0    latency -20
Min Latencies:    64           62
Avg Latencies:  1170          107
Max Latencies: 88069        10417
50% latencies:   122           86
75% latencies:   614           91
85% latencies:   961           94
90% latencies:  1225           97
95% latencies:  6120          102
99% latencies: 18328          159

With the percentile details, we see the benefit of nice latency -20: only
1% of the latencies are above 159us, whereas the default latency has 15%
around ~1ms or above and 5% over 6ms.
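
To relate the nice value to the offset used at wakeup:
set_latency_offset() below scales get_sleep_latency(false), i.e. half of
sysctl_sched_latency with GENTLE_FAIR_SLEEPERS, by
sched_latency_to_weight[]/1024. Assuming for illustration a scaled
sysctl_sched_latency of 24ms, latency nice -20 (weight -1024) maps to an
offset of -12ms, latency nice 0 maps to 0 and latency nice 19 (weight
973) maps to roughly +11.4ms of acceptable extra scheduling delay.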

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 include/linux/sched.h      |  4 ++-
 include/linux/sched/prio.h |  9 ++++++
 init/init_task.c           |  2 +-
 kernel/sched/core.c        | 38 +++++++++++++++++++---
 kernel/sched/debug.c       |  2 +-
 kernel/sched/fair.c        | 66 ++++++++++++++++++++++++++++++++++----
 kernel/sched/sched.h       |  6 ++++
 7 files changed, 112 insertions(+), 15 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 6c61bde49152..38decae3e156 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -568,6 +568,8 @@ struct sched_entity {
 	/* cached value of my_q->h_nr_running */
 	unsigned long			runnable_weight;
 #endif
+	/* preemption offset in ns */
+	long				latency_offset;
 
 #ifdef CONFIG_SMP
 	/*
@@ -784,7 +786,7 @@ struct task_struct {
 	int				static_prio;
 	int				normal_prio;
 	unsigned int			rt_priority;
-	int				latency_nice;
+	int				latency_prio;
 
 	struct sched_entity		se;
 	struct sched_rt_entity		rt;
diff --git a/include/linux/sched/prio.h b/include/linux/sched/prio.h
index bfcd7f1d1e11..be79503d86af 100644
--- a/include/linux/sched/prio.h
+++ b/include/linux/sched/prio.h
@@ -59,5 +59,14 @@ static inline long rlimit_to_nice(long prio)
  * Default tasks should be treated as a task with latency_nice = 0.
  */
 #define DEFAULT_LATENCY_NICE	0
+#define DEFAULT_LATENCY_PRIO	(DEFAULT_LATENCY_NICE + LATENCY_NICE_WIDTH/2)
+
+/*
+ * Convert user-nice values [ -20 ... 0 ... 19 ]
+ * to static latency [ 0..39 ],
+ * and back.
+ */
+#define NICE_TO_LATENCY(nice)	((nice) + DEFAULT_LATENCY_PRIO)
+#define LATENCY_TO_NICE(prio)	((prio) - DEFAULT_LATENCY_PRIO)
 
 #endif /* _LINUX_SCHED_PRIO_H */
diff --git a/init/init_task.c b/init/init_task.c
index 7dd71dd2d261..071deff8dbd1 100644
--- a/init/init_task.c
+++ b/init/init_task.c
@@ -78,7 +78,7 @@ struct task_struct init_task
 	.prio		= MAX_PRIO - 20,
 	.static_prio	= MAX_PRIO - 20,
 	.normal_prio	= MAX_PRIO - 20,
-	.latency_nice	= DEFAULT_LATENCY_NICE,
+	.latency_prio	= DEFAULT_LATENCY_PRIO,
 	.policy		= SCHED_NORMAL,
 	.cpus_ptr	= &init_task.cpus_mask,
 	.user_cpus_ptr	= NULL,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 981665550f8c..402c8d622b76 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -1285,6 +1285,16 @@ static void set_load_weight(struct task_struct *p, bool update_load)
 	}
 }
 
+static void set_latency_offset(struct task_struct *p)
+{
+	long weight = sched_latency_to_weight[p->latency_prio];
+	s64 offset;
+
+	offset = weight * get_sleep_latency(false);
+	offset = div_s64(offset, NICE_LATENCY_WEIGHT_MAX);
+	p->se.latency_offset = (long)offset;
+}
+
 #ifdef CONFIG_UCLAMP_TASK
 /*
  * Serializes updates of utilization clamp values
@@ -4632,7 +4642,9 @@ int sched_fork(unsigned long clone_flags, struct task_struct *p)
 		p->prio = p->normal_prio = p->static_prio;
 		set_load_weight(p, false);
 
-		p->latency_nice = DEFAULT_LATENCY_NICE;
+		p->latency_prio = NICE_TO_LATENCY(0);
+		set_latency_offset(p);
+
 		/*
 		 * We don't need the reset flag anymore after the fork. It has
 		 * fulfilled its duty:
@@ -7398,8 +7410,10 @@ static void __setscheduler_params(struct task_struct *p,
 static void __setscheduler_latency(struct task_struct *p,
 		const struct sched_attr *attr)
 {
-	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE)
-		p->latency_nice = attr->sched_latency_nice;
+	if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE) {
+		p->latency_prio = NICE_TO_LATENCY(attr->sched_latency_nice);
+		set_latency_offset(p);
+	}
 }
 
 /*
@@ -7584,7 +7598,7 @@ static int __sched_setscheduler(struct task_struct *p,
 		if (attr->sched_flags & SCHED_FLAG_UTIL_CLAMP)
 			goto change;
 		if (attr->sched_flags & SCHED_FLAG_LATENCY_NICE &&
-		    attr->sched_latency_nice != p->latency_nice)
+		    attr->sched_latency_nice != LATENCY_TO_NICE(p->latency_prio))
 			goto change;
 
 		p->sched_reset_on_fork = reset_on_fork;
@@ -8125,7 +8139,7 @@ SYSCALL_DEFINE4(sched_getattr, pid_t, pid, struct sched_attr __user *, uattr,
 	get_params(p, &kattr);
 	kattr.sched_flags &= SCHED_FLAG_ALL;
 
-	kattr.sched_latency_nice = p->latency_nice;
+	kattr.sched_latency_nice = LATENCY_TO_NICE(p->latency_prio);
 
 #ifdef CONFIG_UCLAMP_TASK
 	/*
@@ -11334,6 +11348,20 @@ const u32 sched_prio_to_wmult[40] = {
  /*  15 */ 119304647, 148102320, 186737708, 238609294, 286331153,
 };
 
+/*
+ * latency weight for wakeup preemption
+ */
+const int sched_latency_to_weight[40] = {
+ /* -20 */     -1024,     -973,     -922,      -870,      -819,
+ /* -15 */      -768,     -717,     -666,      -614,      -563,
+ /* -10 */      -512,     -461,     -410,      -358,      -307,
+ /*  -5 */      -256,     -205,     -154,      -102,       -51,
+ /*   0 */         0,       51,      102,       154,       205,
+ /*   5 */       256,      307,      358,       410,       461,
+ /*  10 */       512,      563,      614,       666,       717,
+ /*  15 */       768,      819,      870,       922,       973,
+};
+
 void call_trace_sched_update_nr_running(struct rq *rq, int count)
 {
         trace_sched_update_nr_running_tp(rq, count);
diff --git a/kernel/sched/debug.c b/kernel/sched/debug.c
index 68be7a3e42a3..b3922184af91 100644
--- a/kernel/sched/debug.c
+++ b/kernel/sched/debug.c
@@ -1043,7 +1043,7 @@ void proc_sched_show_task(struct task_struct *p, struct pid_namespace *ns,
 #endif
 	P(policy);
 	P(prio);
-	P(latency_nice);
+	P(latency_prio);
 	if (task_has_dl_policy(p)) {
 		P(dl.runtime);
 		P(dl.deadline);
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 8a85c6cf781e..e87a863a2aa6 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -4870,6 +4870,8 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 		update_idle_cfs_rq_clock_pelt(cfs_rq);
 }
 
+static long wakeup_latency_gran(struct sched_entity *curr, struct sched_entity *se);
+
 /*
  * Preempt the current task with a newly woken task if needed:
  */
@@ -4878,7 +4880,7 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
 	unsigned long ideal_runtime, delta_exec;
 	struct sched_entity *se;
-	s64 delta;
+	s64 delta, offset;
 
 	ideal_runtime = sched_slice(cfs_rq, curr);
 	delta_exec = curr->sum_exec_runtime - curr->prev_sum_exec_runtime;
@@ -4903,10 +4905,12 @@ check_preempt_tick(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 	se = __pick_first_entity(cfs_rq);
 	delta = curr->vruntime - se->vruntime;
 
-	if (delta < 0)
+	offset = wakeup_latency_gran(curr, se);
+	if (delta < offset)
 		return;
 
-	if (delta > ideal_runtime)
+	if ((delta > ideal_runtime) ||
+	    (delta > get_latency_max()))
 		resched_curr(rq_of(cfs_rq));
 }
 
@@ -6155,6 +6159,35 @@ static int sched_idle_cpu(int cpu)
 }
 #endif
 
+static void set_next_buddy(struct sched_entity *se);
+
+static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
+{
+	struct sched_entity *next;
+
+	if (se->latency_offset >= 0)
+		return;
+
+	if (cfs->nr_running <= 1)
+		return;
+	/*
+	 * When waking while current is from another class, the usual wakeup
+	 * preemption check is not done and the next buddy is not set as a
+	 * candidate to be picked first.
+	 * On a simultaneous wakeup while current is from another class,
+	 * latency sensitive tasks would thus lose the opportunity to preempt
+	 * the non sensitive tasks that woke up at the same time.
+	 */
+
+	if (cfs->next)
+		next = cfs->next;
+	else
+		next = __pick_first_entity(cfs);
+
+	if (next && wakeup_preempt_entity(next, se) == 1)
+		set_next_buddy(se);
+}
+
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -6241,14 +6274,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!task_new)
 		update_overutilized_status(rq);
 
+	if (rq->curr->sched_class != &fair_sched_class)
+		check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
+
 enqueue_throttle:
 	assert_list_leaf_cfs_rq(rq);
 
 	hrtick_update(rq);
 }
 
-static void set_next_buddy(struct sched_entity *se);
-
 /*
  * The dequeue_task method is called before nr_running is
  * decreased. We remove the task from the rbtree and
@@ -7597,6 +7631,23 @@ balance_fair(struct rq *rq, struct task_struct *prev, struct rq_flags *rf)
 }
 #endif /* CONFIG_SMP */
 
+static long wakeup_latency_gran(struct sched_entity *curr, struct sched_entity *se)
+{
+	long latency_offset = se->latency_offset;
+
+	/*
+	 * A negative latency offset means that the sched_entity has a latency
+	 * requirement that needs to be evaluated versus other entities.
+	 * Otherwise, use the latency weight to evaluate how much scheduling
+	 * delay is acceptable to se.
+	 */
+	if ((latency_offset < 0) || (curr->latency_offset < 0))
+		latency_offset -= curr->latency_offset;
+	latency_offset = min_t(long, latency_offset, get_latency_max());
+
+	return latency_offset;
+}
+
 static unsigned long wakeup_gran(struct sched_entity *se)
 {
 	unsigned long gran = sysctl_sched_wakeup_granularity;
@@ -7635,11 +7686,12 @@ static int
 wakeup_preempt_entity(struct sched_entity *curr, struct sched_entity *se)
 {
 	s64 gran, vdiff = curr->vruntime - se->vruntime;
+	s64 offset = wakeup_latency_gran(curr, se);
 
-	if (vdiff <= 0)
+	if (vdiff < offset)
 		return -1;
 
-	gran = wakeup_gran(se);
+	gran = offset + wakeup_gran(se);
 
 	/*
 	 * At wake up, the vruntime of a task is capped to not be older than
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index df7db06c9943..fd099f3961e4 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -125,6 +125,11 @@ extern int sched_rr_timeslice;
  */
 #define NS_TO_JIFFIES(TIME)	((unsigned long)(TIME) / (NSEC_PER_SEC / HZ))
 
+/* Maximum nice latency weight used to scale the latency_offset */
+
+#define NICE_LATENCY_SHIFT	(SCHED_FIXEDPOINT_SHIFT)
+#define NICE_LATENCY_WEIGHT_MAX	(1L << NICE_LATENCY_SHIFT)
+
 /*
  * Increase resolution of nice-level calculations for 64-bit architectures.
  * The extra resolution improves shares distribution and load balancing of
@@ -2121,6 +2126,7 @@ static_assert(WF_TTWU == SD_BALANCE_WAKE);
 
 extern const int		sched_prio_to_weight[40];
 extern const u32		sched_prio_to_wmult[40];
+extern const int		sched_latency_to_weight[40];
 
 /*
  * {de,en}queue flags:
-- 
2.34.1



* [PATCH v10 6/9] sched/fair: Add sched group latency support
  2023-01-13 14:12 [PATCH v10 0/9] Add latency priority for CFS class Vincent Guittot
                   ` (4 preceding siblings ...)
  2023-01-13 14:12 ` [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup Vincent Guittot
@ 2023-01-13 14:12 ` Vincent Guittot
  2023-02-21 15:01   ` Peter Zijlstra
  2023-01-13 14:12 ` [PATCH v10 7/9] sched/core: Support latency priority with sched core Vincent Guittot
                   ` (2 subsequent siblings)
  8 siblings, 1 reply; 27+ messages in thread
From: Vincent Guittot @ 2023-01-13 14:12 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, linux-kernel, parth, cgroups
  Cc: qyousef, chris.hyser, patrick.bellasi, David.Laight, pjt, pavel,
	tj, qperret, tim.c.chen, joshdon, timj, kprateek.nayak,
	yu.c.chen, youssefesmat, joel, Vincent Guittot

Task can set its latency priority with sched_setattr(), which is then used
to set the latency offset of its sched_enity, but sched group entities
still have the default latency offset value.

Add a latency.nice field in cpu cgroup controller to set the latency
priority of the group similarly to sched_setattr(). The latency priority
is then used to set the offset of the sched_entities of the group.
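
As a usage sketch (not part of the patch; the cgroup name "app" below is
made up and cgroup v2 is assumed to be mounted at /sys/fs/cgroup), the
new file behaves like any other single-value cpu controller knob:

#include <fcntl.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	/* hypothetical, pre-created cgroup */
	const char *path = "/sys/fs/cgroup/app/cpu.latency.nice";
	int fd = open(path, O_WRONLY);

	if (fd < 0) {
		perror("open");
		return 1;
	}

	/* same [-20, 19] range as the per-task latency nice */
	if (dprintf(fd, "%d\n", -20) < 0)
		perror("write");

	close(fd);
	return 0;
}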

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 Documentation/admin-guide/cgroup-v2.rst | 10 +++++
 kernel/sched/core.c                     | 52 +++++++++++++++++++++++++
 kernel/sched/fair.c                     | 33 ++++++++++++++++
 kernel/sched/sched.h                    |  4 ++
 4 files changed, 99 insertions(+)

diff --git a/Documentation/admin-guide/cgroup-v2.rst b/Documentation/admin-guide/cgroup-v2.rst
index 1b3ed1c3b3f1..c08424593e4a 100644
--- a/Documentation/admin-guide/cgroup-v2.rst
+++ b/Documentation/admin-guide/cgroup-v2.rst
@@ -1121,6 +1121,16 @@ All time durations are in microseconds.
         values similar to the sched_setattr(2). This maximum utilization
         value is used to clamp the task specific maximum utilization clamp.
 
+  cpu.latency.nice
+	A read-write single value file which exists on non-root
+	cgroups.  The default is "0".
+
+	The nice value is in the range [-20, 19].
+
+	This interface file allows reading and setting latency using the
+	same values used by sched_setattr(2). The latency_nice of a group is
+	used to limit the impact of the latency_nice of a task outside the
+	group.
 
 
 Memory
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 402c8d622b76..6798c9a297d6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -11007,6 +11007,47 @@ static int cpu_idle_write_s64(struct cgroup_subsys_state *css,
 {
 	return sched_group_set_idle(css_tg(css), idle);
 }
+
+static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css,
+				    struct cftype *cft)
+{
+	int prio, delta, last_delta = INT_MAX;
+	s64 weight;
+
+	weight = css_tg(css)->latency_offset * NICE_LATENCY_WEIGHT_MAX;
+	weight = div_s64(weight, get_sleep_latency(false));
+
+	/* Find the closest nice value to the current weight */
+	for (prio = 0; prio < ARRAY_SIZE(sched_latency_to_weight); prio++) {
+		delta = abs(sched_latency_to_weight[prio] - weight);
+		if (delta >= last_delta)
+			break;
+		last_delta = delta;
+	}
+
+	return LATENCY_TO_NICE(prio-1);
+}
+
+static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css,
+				     struct cftype *cft, s64 nice)
+{
+	s64 latency_offset;
+	long weight;
+	int idx;
+
+	if (nice < MIN_LATENCY_NICE || nice > MAX_LATENCY_NICE)
+		return -ERANGE;
+
+	idx = NICE_TO_LATENCY(nice);
+	idx = array_index_nospec(idx, LATENCY_NICE_WIDTH);
+	weight = sched_latency_to_weight[idx];
+
+	latency_offset = weight * get_sleep_latency(false);
+	latency_offset = div_s64(latency_offset, NICE_LATENCY_WEIGHT_MAX);
+
+	return sched_group_set_latency(css_tg(css), latency_offset);
+}
+
 #endif
 
 static struct cftype cpu_legacy_files[] = {
@@ -11021,6 +11062,11 @@ static struct cftype cpu_legacy_files[] = {
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency.nice",
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
@@ -11238,6 +11284,12 @@ static struct cftype cpu_files[] = {
 		.read_s64 = cpu_idle_read_s64,
 		.write_s64 = cpu_idle_write_s64,
 	},
+	{
+		.name = "latency.nice",
+		.flags = CFTYPE_NOT_ON_ROOT,
+		.read_s64 = cpu_latency_nice_read_s64,
+		.write_s64 = cpu_latency_nice_write_s64,
+	},
 #endif
 #ifdef CONFIG_CFS_BANDWIDTH
 	{
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index e87a863a2aa6..80ad27ddb4a1 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -12296,6 +12296,7 @@ int alloc_fair_sched_group(struct task_group *tg, struct task_group *parent)
 		goto err;
 
 	tg->shares = NICE_0_LOAD;
+	tg->latency_offset = 0;
 
 	init_cfs_bandwidth(tg_cfs_bandwidth(tg));
 
@@ -12394,6 +12395,9 @@ void init_tg_cfs_entry(struct task_group *tg, struct cfs_rq *cfs_rq,
 	}
 
 	se->my_q = cfs_rq;
+
+	se->latency_offset = tg->latency_offset;
+
 	/* guarantee group entities always have weight */
 	update_load_set(&se->load, NICE_0_LOAD);
 	se->parent = parent;
@@ -12524,6 +12528,35 @@ int sched_group_set_idle(struct task_group *tg, long idle)
 	return 0;
 }
 
+int sched_group_set_latency(struct task_group *tg, s64 latency)
+{
+	int i;
+
+	if (tg == &root_task_group)
+		return -EINVAL;
+
+	if (abs(latency) > sysctl_sched_latency)
+		return -EINVAL;
+
+	mutex_lock(&shares_mutex);
+
+	if (tg->latency_offset == latency) {
+		mutex_unlock(&shares_mutex);
+		return 0;
+	}
+
+	tg->latency_offset = latency;
+
+	for_each_possible_cpu(i) {
+		struct sched_entity *se = tg->se[i];
+
+		WRITE_ONCE(se->latency_offset, latency);
+	}
+
+	mutex_unlock(&shares_mutex);
+	return 0;
+}
+
 #else /* CONFIG_FAIR_GROUP_SCHED */
 
 void free_fair_sched_group(struct task_group *tg) { }
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index fd099f3961e4..5a4ce8e61f47 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -383,6 +383,8 @@ struct task_group {
 
 	/* A positive value indicates that this is a SCHED_IDLE group. */
 	int			idle;
+	/* latency constraint of the group. */
+	int			latency_offset;
 
 #ifdef	CONFIG_SMP
 	/*
@@ -493,6 +495,8 @@ extern int sched_group_set_shares(struct task_group *tg, unsigned long shares);
 
 extern int sched_group_set_idle(struct task_group *tg, long idle);
 
+extern int sched_group_set_latency(struct task_group *tg, s64 latency);
+
 #ifdef CONFIG_SMP
 extern void set_task_rq_fair(struct sched_entity *se,
 			     struct cfs_rq *prev, struct cfs_rq *next);
-- 
2.34.1



* [PATCH v10 7/9] sched/core: Support latency priority with sched core
  2023-01-13 14:12 [PATCH v10 0/9] Add latency priority for CFS class Vincent Guittot
                   ` (5 preceding siblings ...)
  2023-01-13 14:12 ` [PATCH v10 6/9] sched/fair: Add sched group latency support Vincent Guittot
@ 2023-01-13 14:12 ` Vincent Guittot
  2023-01-13 14:12 ` [PATCH v10 8/9] sched/fair: Add latency list Vincent Guittot
  2023-01-13 14:12 ` [PATCH v10 9/9] sched/fair: remove check_preempt_from_others Vincent Guittot
  8 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-01-13 14:12 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, linux-kernel, parth, cgroups
  Cc: qyousef, chris.hyser, patrick.bellasi, David.Laight, pjt, pavel,
	tj, qperret, tim.c.chen, joshdon, timj, kprateek.nayak,
	yu.c.chen, youssefesmat, joel, Vincent Guittot

Take into account wakeup_latency_gran() when ordering the cfs threads.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 kernel/sched/fair.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index 80ad27ddb4a1..a4bfa03d096c 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -11971,6 +11971,9 @@ bool cfs_prio_less(const struct task_struct *a, const struct task_struct *b,
 	delta = (s64)(sea->vruntime - seb->vruntime) +
 		(s64)(cfs_rqb->min_vruntime_fi - cfs_rqa->min_vruntime_fi);
 
+	/* Take into account latency prio */
+	delta -= wakeup_latency_gran(sea, seb);
+
 	return delta > 0;
 }
 #else
-- 
2.34.1



* [PATCH v10 8/9] sched/fair: Add latency list
  2023-01-13 14:12 [PATCH v10 0/9] Add latency priority for CFS class Vincent Guittot
                   ` (6 preceding siblings ...)
  2023-01-13 14:12 ` [PATCH v10 7/9] sched/core: Support latency priority with sched core Vincent Guittot
@ 2023-01-13 14:12 ` Vincent Guittot
  2023-02-21 15:11   ` Peter Zijlstra
  2023-02-22  9:49   ` Peter Zijlstra
  2023-01-13 14:12 ` [PATCH v10 9/9] sched/fair: remove check_preempt_from_others Vincent Guittot
  8 siblings, 2 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-01-13 14:12 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, linux-kernel, parth, cgroups
  Cc: qyousef, chris.hyser, patrick.bellasi, David.Laight, pjt, pavel,
	tj, qperret, tim.c.chen, joshdon, timj, kprateek.nayak,
	yu.c.chen, youssefesmat, joel, Vincent Guittot

Add a rb tree for latency sensitive entities so we can schedule the most
sensitive one first, even when it failed to preempt current at wakeup or
when it got quickly preempted by another entity of higher priority.

In order to keep fairness, the latency is used once at wakeup to get a
minimum slice and not during the following scheduling slices, to prevent
a long running entity from getting more running time than allocated by
its nice priority.

The rb tree makes it possible to cover the last corner case, where a
latency sensitive entity can't get scheduled quickly after its wakeup.
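
The tree is keyed on vruntime + latency_offset (see latency_before()
below), so the sort order directly reflects how little extra delay each
entity tolerates. As an example, with two enqueued entities at the same
vruntime, one with a -12ms offset and one with a -6ms offset, the former
sorts first and __pick_first_latency() returns it; an entity with an
offset >= 0 is never inserted in the tree at all.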

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   |  1 +
 kernel/sched/fair.c   | 95 +++++++++++++++++++++++++++++++++++++++++--
 kernel/sched/sched.h  |  1 +
 4 files changed, 95 insertions(+), 3 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index 38decae3e156..41bb92be5ecc 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -548,6 +548,7 @@ struct sched_entity {
 	/* For load-balancing: */
 	struct load_weight		load;
 	struct rb_node			run_node;
+	struct rb_node			latency_node;
 	struct list_head		group_node;
 	unsigned int			on_rq;
 
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 6798c9a297d6..5c99f67c7e7b 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -4390,6 +4390,7 @@ static void __sched_fork(unsigned long clone_flags, struct task_struct *p)
 	p->se.nr_migrations		= 0;
 	p->se.vruntime			= 0;
 	INIT_LIST_HEAD(&p->se.group_node);
+	RB_CLEAR_NODE(&p->se.latency_node);
 
 #ifdef CONFIG_FAIR_GROUP_SCHED
 	p->se.cfs_rq			= NULL;
diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a4bfa03d096c..a8f0e32431e2 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -680,7 +680,76 @@ struct sched_entity *__pick_last_entity(struct cfs_rq *cfs_rq)
 
 	return __node_2_se(last);
 }
+#endif
+
+/**************************************************************
+ * Scheduling class tree data structure manipulation methods:
+ * for latency
+ */
+
+static inline bool latency_before(struct sched_entity *a,
+				struct sched_entity *b)
+{
+	return (s64)(a->vruntime + a->latency_offset - b->vruntime - b->latency_offset) < 0;
+}
+
+#define __latency_node_2_se(node) \
+	rb_entry((node), struct sched_entity, latency_node)
+
+static inline bool __latency_less(struct rb_node *a, const struct rb_node *b)
+{
+	return latency_before(__latency_node_2_se(a), __latency_node_2_se(b));
+}
+
+/*
+ * Enqueue an entity into the latency rb-tree:
+ */
+static void __enqueue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
+{
+
+	/* Only latency sensitive entities can be added to the tree */
+	if (se->latency_offset >= 0)
+		return;
+
+	if (!RB_EMPTY_NODE(&se->latency_node))
+		return;
+
+	/*
+	 * An execution time less than sysctl_sched_min_granularity means that
+	 * the entity has been preempted by a higher sched class or an entity
+	 * with a higher latency constraint.
+	 * Put it back in the tree so it gets a chance to run first during
+	 * the next slice.
+	 */
+	if (!(flags & ENQUEUE_WAKEUP)) {
+		u64 delta_exec = se->sum_exec_runtime - se->prev_sum_exec_runtime;
+
+		if (delta_exec >= sysctl_sched_min_granularity)
+			return;
+	}
+
+	rb_add_cached(&se->latency_node, &cfs_rq->latency_timeline, __latency_less);
+}
+
+static void __dequeue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se)
+{
+	if (!RB_EMPTY_NODE(&se->latency_node)) {
+		rb_erase_cached(&se->latency_node, &cfs_rq->latency_timeline);
+		RB_CLEAR_NODE(&se->latency_node);
+	}
+}
+
+static struct sched_entity *__pick_first_latency(struct cfs_rq *cfs_rq)
+{
+	struct rb_node *left = rb_first_cached(&cfs_rq->latency_timeline);
+
+	if (!left)
+		return NULL;
+
+	return __latency_node_2_se(left);
+}
 
+#ifdef CONFIG_SCHED_DEBUG
 /**************************************************************
  * Scheduling class statistics methods:
  */
@@ -4751,8 +4820,10 @@ enqueue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 	check_schedstat_required();
 	update_stats_enqueue_fair(cfs_rq, se, flags);
 	check_spread(cfs_rq, se);
-	if (!curr)
+	if (!curr) {
 		__enqueue_entity(cfs_rq, se);
+		__enqueue_latency(cfs_rq, se, flags);
+	}
 	se->on_rq = 1;
 
 	if (cfs_rq->nr_running == 1) {
@@ -4838,8 +4909,10 @@ dequeue_entity(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
 
 	clear_buddies(cfs_rq, se);
 
-	if (se != cfs_rq->curr)
+	if (se != cfs_rq->curr) {
 		__dequeue_entity(cfs_rq, se);
+		__dequeue_latency(cfs_rq, se);
+	}
 	se->on_rq = 0;
 	account_entity_dequeue(cfs_rq, se);
 
@@ -4928,6 +5001,7 @@ set_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *se)
 		 */
 		update_stats_wait_end_fair(cfs_rq, se);
 		__dequeue_entity(cfs_rq, se);
+		__dequeue_latency(cfs_rq, se);
 		update_load_avg(cfs_rq, se, UPDATE_TG);
 	}
 
@@ -4966,7 +5040,7 @@ static struct sched_entity *
 pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 {
 	struct sched_entity *left = __pick_first_entity(cfs_rq);
-	struct sched_entity *se;
+	struct sched_entity *latency, *se;
 
 	/*
 	 * If curr is set we have to see if its left of the leftmost entity
@@ -5008,6 +5082,12 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
 		se = cfs_rq->last;
 	}
 
+	/* Check for a latency sensitive entity waiting to run */
+	latency = __pick_first_latency(cfs_rq);
+	if (latency && (latency != se) &&
+	    wakeup_preempt_entity(latency, se) < 1)
+		se = latency;
+
 	return se;
 }
 
@@ -5031,6 +5111,7 @@ static void put_prev_entity(struct cfs_rq *cfs_rq, struct sched_entity *prev)
 		update_stats_wait_start_fair(cfs_rq, prev);
 		/* Put 'current' back into the tree. */
 		__enqueue_entity(cfs_rq, prev);
+		__enqueue_latency(cfs_rq, prev, 0);
 		/* in !on_rq case, update occurred at dequeue */
 		update_load_avg(cfs_rq, prev, 0);
 	}
@@ -12244,6 +12325,7 @@ static void set_next_task_fair(struct rq *rq, struct task_struct *p, bool first)
 void init_cfs_rq(struct cfs_rq *cfs_rq)
 {
 	cfs_rq->tasks_timeline = RB_ROOT_CACHED;
+	cfs_rq->latency_timeline = RB_ROOT_CACHED;
 	u64_u32_store(cfs_rq->min_vruntime, (u64)(-(1LL << 20)));
 #ifdef CONFIG_SMP
 	raw_spin_lock_init(&cfs_rq->removed.lock);
@@ -12552,8 +12634,15 @@ int sched_group_set_latency(struct task_group *tg, s64 latency)
 
 	for_each_possible_cpu(i) {
 		struct sched_entity *se = tg->se[i];
+		struct rq *rq = cpu_rq(i);
+		struct rq_flags rf;
+
+		rq_lock_irqsave(rq, &rf);
 
+		__dequeue_latency(se->cfs_rq, se);
 		WRITE_ONCE(se->latency_offset, latency);
+
+		rq_unlock_irqrestore(rq, &rf);
 	}
 
 	mutex_unlock(&shares_mutex);
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 5a4ce8e61f47..186873bb41e2 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -575,6 +575,7 @@ struct cfs_rq {
 #endif
 
 	struct rb_root_cached	tasks_timeline;
+	struct rb_root_cached	latency_timeline;
 
 	/*
 	 * 'curr' points to currently running entity on this cfs_rq.
-- 
2.34.1



* [PATCH v10 9/9] sched/fair: remove check_preempt_from_others
  2023-01-13 14:12 [PATCH v10 0/9] Add latency priority for CFS class Vincent Guittot
                   ` (7 preceding siblings ...)
  2023-01-13 14:12 ` [PATCH v10 8/9] sched/fair: Add latency list Vincent Guittot
@ 2023-01-13 14:12 ` Vincent Guittot
  8 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-01-13 14:12 UTC (permalink / raw)
  To: mingo, peterz, juri.lelli, dietmar.eggemann, rostedt, bsegall,
	mgorman, bristot, vschneid, linux-kernel, parth, cgroups
  Cc: qyousef, chris.hyser, patrick.bellasi, David.Laight, pjt, pavel,
	tj, qperret, tim.c.chen, joshdon, timj, kprateek.nayak,
	yu.c.chen, youssefesmat, joel, Vincent Guittot

With the dedicated latency list, we don't have to take care of this special
case anymore as pick_next_entity checks for a runnable latency sensitive
task.

Signed-off-by: Vincent Guittot <vincent.guittot@linaro.org>
Tested-by: K Prateek Nayak <kprateek.nayak@amd.com>
---
 kernel/sched/fair.c | 34 ++--------------------------------
 1 file changed, 2 insertions(+), 32 deletions(-)

diff --git a/kernel/sched/fair.c b/kernel/sched/fair.c
index a8f0e32431e2..fb2a5b2e3440 100644
--- a/kernel/sched/fair.c
+++ b/kernel/sched/fair.c
@@ -6240,35 +6240,6 @@ static int sched_idle_cpu(int cpu)
 }
 #endif
 
-static void set_next_buddy(struct sched_entity *se);
-
-static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
-{
-	struct sched_entity *next;
-
-	if (se->latency_offset >= 0)
-		return;
-
-	if (cfs->nr_running <= 1)
-		return;
-	/*
-	 * When waking from another class, we don't need to check to preempt at
-	 * wakeup and don't set next buddy as a candidate for being picked in
-	 * priority.
-	 * In case of simultaneous wakeup when current is another class, the
-	 * latency sensitive tasks lost opportunity to preempt non sensitive
-	 * tasks which woke up simultaneously.
-	 */
-
-	if (cfs->next)
-		next = cfs->next;
-	else
-		next = __pick_first_entity(cfs);
-
-	if (next && wakeup_preempt_entity(next, se) == 1)
-		set_next_buddy(se);
-}
-
 /*
  * The enqueue_task method is called before nr_running is
  * increased. Here we update the fair scheduling stats and
@@ -6355,15 +6326,14 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
 	if (!task_new)
 		update_overutilized_status(rq);
 
-	if (rq->curr->sched_class != &fair_sched_class)
-		check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
-
 enqueue_throttle:
 	assert_list_leaf_cfs_rq(rq);
 
 	hrtick_update(rq);
 }
 
+static void set_next_buddy(struct sched_entity *se);
+
 /*
  * The dequeue_task method is called before nr_running is
  * decreased. We remove the task from the rbtree and
-- 
2.34.1



* Re: [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
  2023-01-13 14:12 ` [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup Vincent Guittot
@ 2023-02-21 12:52   ` Peter Zijlstra
  2023-02-21 14:12     ` Vincent Guittot
  2023-02-21 14:15     ` Peter Zijlstra
  2023-02-21 13:04   ` Peter Zijlstra
  1 sibling, 2 replies; 27+ messages in thread
From: Peter Zijlstra @ 2023-02-21 12:52 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Fri, Jan 13, 2023 at 03:12:30PM +0100, Vincent Guittot wrote:

> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index 6c61bde49152..38decae3e156 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -568,6 +568,8 @@ struct sched_entity {
>  	/* cached value of my_q->h_nr_running */
>  	unsigned long			runnable_weight;
>  #endif
> +	/* preemption offset in ns */
> +	long				latency_offset;

I wonder about the type here; does it make sense to have it depend on
the bitness; that is if s32 is big enough on 32bit then surely it is so
too on 64bit, and if not, then it should be unconditionally s64.


> +static void set_latency_offset(struct task_struct *p)
> +{
> +	long weight = sched_latency_to_weight[p->latency_prio];
> +	s64 offset;
> +
> +	offset = weight * get_sleep_latency(false);
> +	offset = div_s64(offset, NICE_LATENCY_WEIGHT_MAX);
> +	p->se.latency_offset = (long)offset;
> +}

> +/*
> + * latency weight for wakeup preemption
> + */
> +const int sched_latency_to_weight[40] = {
> + /* -20 */     -1024,     -973,     -922,      -870,      -819,
> + /* -15 */      -768,     -717,     -666,      -614,      -563,
> + /* -10 */      -512,     -461,     -410,      -358,      -307,
> + /*  -5 */      -256,     -205,     -154,      -102,       -51,
> + /*   0 */         0,       51,      102,       154,       205,
> + /*   5 */       256,      307,      358,       410,       461,
> + /*  10 */       512,      563,      614,       666,       717,
> + /*  15 */       768,      819,      870,       922,       973,
> +};

I'm slightly confused by this table, isn't that simply the linear
function?

Isn't all that the same as:

	p->se.latency_offset = get_sleep_latency(false) * nice / (NICE_LATENCY_WIDTH/2);

? The reason we have prio_to_weight[] is because it's an exponential,
which is a bit more cumbersome to calculate, but surely we can do a
linear function at runtime.
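
For illustration, a minimal sketch of that runtime computation, reusing
the patch's helpers (LATENCY_TO_NICE() and LATENCY_NICE_WIDTH are
assumed to be available here, and this is untested):

	/*
	 * Sketch: map latency_prio back to a nice value in [-20, 19] and
	 * scale the maximum sleep latency linearly, with no table lookup.
	 */
	static void set_latency_offset(struct task_struct *p)
	{
		long nice = LATENCY_TO_NICE(p->latency_prio);	/* [-20, 19] */
		s64 offset = (s64)nice * get_sleep_latency(false);

		p->se.latency_offset = (long)div_s64(offset, LATENCY_NICE_WIDTH / 2);
	}

nice == -20 then gives -get_sleep_latency(false), matching the -1024
entry of the table.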




* Re: [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
  2023-01-13 14:12 ` [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup Vincent Guittot
  2023-02-21 12:52   ` Peter Zijlstra
@ 2023-02-21 13:04   ` Peter Zijlstra
  2023-02-21 14:21     ` Vincent Guittot
  1 sibling, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2023-02-21 13:04 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Fri, Jan 13, 2023 at 03:12:30PM +0100, Vincent Guittot wrote:
> @@ -6155,6 +6159,35 @@ static int sched_idle_cpu(int cpu)
>  }
>  #endif
>  
> +static void set_next_buddy(struct sched_entity *se);
> +
> +static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
> +{
> +	struct sched_entity *next;
> +
> +	if (se->latency_offset >= 0)
> +		return;
> +
> +	if (cfs->nr_running <= 1)
> +		return;
> +	/*
> +	 * When waking from another class, we don't need to check to preempt at
> +	 * wakeup and don't set next buddy as a candidate for being picked in
> +	 * priority.
> +	 * In case of simultaneous wakeup when current is another class, the
> +	 * latency sensitive tasks lost opportunity to preempt non sensitive
> +	 * tasks which woke up simultaneously.
> +	 */
> +
> +	if (cfs->next)
> +		next = cfs->next;
> +	else
> +		next = __pick_first_entity(cfs);
> +
> +	if (next && wakeup_preempt_entity(next, se) == 1)
> +		set_next_buddy(se);
> +}
> +
>  /*
>   * The enqueue_task method is called before nr_running is
>   * increased. Here we update the fair scheduling stats and
> @@ -6241,14 +6274,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
>  	if (!task_new)
>  		update_overutilized_status(rq);
>  
> +	if (rq->curr->sched_class != &fair_sched_class)
> +		check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
> +
>  enqueue_throttle:
>  	assert_list_leaf_cfs_rq(rq);
>  
>  	hrtick_update(rq);
>  }

Hmm.. This sets a next selection when the task gets enqueued while not
running a fair task -- and loses a wakeup preemption opportunity.

Should we perhaps also do this for latency_nice == 0? In any case, I
think this can be moved to its own patch to avoid doing too much in the
one patch. It seems fairly self-contained.



* Re: [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
  2023-02-21 12:52   ` Peter Zijlstra
@ 2023-02-21 14:12     ` Vincent Guittot
  2023-02-21 14:15     ` Peter Zijlstra
  1 sibling, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-02-21 14:12 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Tue, 21 Feb 2023 at 13:53, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Jan 13, 2023 at 03:12:30PM +0100, Vincent Guittot wrote:
>
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 6c61bde49152..38decae3e156 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -568,6 +568,8 @@ struct sched_entity {
> >       /* cached value of my_q->h_nr_running */
> >       unsigned long                   runnable_weight;
> >  #endif
> > +     /* preemption offset in ns */
> > +     long                            latency_offset;
>
> I wonder about the type here; does it make sense to have it depend on
> the bitness; that is if s32 is big enough on 32bit then surely it is so
> too on 64bit, and if not, then it should be unconditionally s64.

I mainly wanted to stay aligned with the optimal width of the arch, but
32 bits is enough.
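
(For scale: in set_latency_offset() the weight is bounded by
NICE_LATENCY_WEIGHT_MAX, so |latency_offset| is bounded by
get_sleep_latency(false), i.e. a few tens of milliseconds or on the
order of 10^7 ns, comfortably within the ~2.1 * 10^9 ns range of an
s32.)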

>
>
> > +static void set_latency_offset(struct task_struct *p)
> > +{
> > +     long weight = sched_latency_to_weight[p->latency_prio];
> > +     s64 offset;
> > +
> > +     offset = weight * get_sleep_latency(false);
> > +     offset = div_s64(offset, NICE_LATENCY_WEIGHT_MAX);
> > +     p->se.latency_offset = (long)offset;
> > +}
>
> > +/*
> > + * latency weight for wakeup preemption
> > + */
> > +const int sched_latency_to_weight[40] = {
> > + /* -20 */     -1024,     -973,     -922,      -870,      -819,
> > + /* -15 */      -768,     -717,     -666,      -614,      -563,
> > + /* -10 */      -512,     -461,     -410,      -358,      -307,
> > + /*  -5 */      -256,     -205,     -154,      -102,       -51,
> > + /*   0 */         0,       51,      102,       154,       205,
> > + /*   5 */       256,      307,      358,       410,       461,
> > + /*  10 */       512,      563,      614,       666,       717,
> > + /*  15 */       768,      819,      870,       922,       973,
> > +};
>
> I'm slightly confused by this table, isn't that simply the linear
> function?

Yes, I initially had a nonlinear function in mind, hence the table.

>
> Isn't all that the same as:
>
>         p->se.latency_offset = get_sleep_latency(false) * nice / (NICE_LATENCY_WIDTH/2);
>
> ? The reason we have prio_to_weight[] is because it's an exponential,
> which is a bit more cumbersome to calculate, but surely we can do a
> linear function at runtime.
>
>


* Re: [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
  2023-02-21 12:52   ` Peter Zijlstra
  2023-02-21 14:12     ` Vincent Guittot
@ 2023-02-21 14:15     ` Peter Zijlstra
  2023-02-21 14:25       ` Vincent Guittot
  1 sibling, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2023-02-21 14:15 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Tue, Feb 21, 2023 at 01:52:58PM +0100, Peter Zijlstra wrote:
> On Fri, Jan 13, 2023 at 03:12:30PM +0100, Vincent Guittot wrote:
> 
> > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > index 6c61bde49152..38decae3e156 100644
> > --- a/include/linux/sched.h
> > +++ b/include/linux/sched.h
> > @@ -568,6 +568,8 @@ struct sched_entity {
> >  	/* cached value of my_q->h_nr_running */
> >  	unsigned long			runnable_weight;
> >  #endif
> > +	/* preemption offset in ns */
> > +	long				latency_offset;
> 
> I wonder about the type here; does it make sense to have it depend on
> the bitness; that is if s32 is big enough on 32bit then surely it is so
> too on 64bit, and if not, then it should be unconditionally s64.
> 

The cgroup patch has this as 'int'. I'm thinking we ought to be
consistent :-)


* Re: [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
  2023-02-21 13:04   ` Peter Zijlstra
@ 2023-02-21 14:21     ` Vincent Guittot
  2023-02-21 14:51       ` Peter Zijlstra
  2023-02-21 15:08       ` Peter Zijlstra
  0 siblings, 2 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-02-21 14:21 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Tue, 21 Feb 2023 at 14:05, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Jan 13, 2023 at 03:12:30PM +0100, Vincent Guittot wrote:
> > @@ -6155,6 +6159,35 @@ static int sched_idle_cpu(int cpu)
> >  }
> >  #endif
> >
> > +static void set_next_buddy(struct sched_entity *se);
> > +
> > +static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
> > +{
> > +     struct sched_entity *next;
> > +
> > +     if (se->latency_offset >= 0)
> > +             return;
> > +
> > +     if (cfs->nr_running <= 1)
> > +             return;
> > +     /*
> > +      * When waking from another class, we don't need to check to preempt at
> > +      * wakeup and don't set next buddy as a candidate for being picked in
> > +      * priority.
> > +      * In case of simultaneous wakeup when current is another class, the
> > +      * latency sensitive tasks lost opportunity to preempt non sensitive
> > +      * tasks which woke up simultaneously.
> > +      */
> > +
> > +     if (cfs->next)
> > +             next = cfs->next;
> > +     else
> > +             next = __pick_first_entity(cfs);
> > +
> > +     if (next && wakeup_preempt_entity(next, se) == 1)
> > +             set_next_buddy(se);
> > +}
> > +
> >  /*
> >   * The enqueue_task method is called before nr_running is
> >   * increased. Here we update the fair scheduling stats and
> > @@ -6241,14 +6274,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> >       if (!task_new)
> >               update_overutilized_status(rq);
> >
> > +     if (rq->curr->sched_class != &fair_sched_class)
> > +             check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
> > +
> >  enqueue_throttle:
> >       assert_list_leaf_cfs_rq(rq);
> >
> >       hrtick_update(rq);
> >  }
>
> Hmm.. This sets a next selection when the task gets enqueued while not
> running a fair task -- and loses a wakeup preemption opportunity.
>
> Should we perhaps also do this for latency_nice == 0? In any case, I
> think this can be moved to its own patch to avoid doing too much in the
> one patch. It seems fairly self-contained.

This function is then removed by patch 9 as the additional rb tree
fixes all cases

>


* Re: [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
  2023-02-21 14:15     ` Peter Zijlstra
@ 2023-02-21 14:25       ` Vincent Guittot
  0 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-02-21 14:25 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Tue, 21 Feb 2023 at 15:15, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Feb 21, 2023 at 01:52:58PM +0100, Peter Zijlstra wrote:
> > On Fri, Jan 13, 2023 at 03:12:30PM +0100, Vincent Guittot wrote:
> >
> > > diff --git a/include/linux/sched.h b/include/linux/sched.h
> > > index 6c61bde49152..38decae3e156 100644
> > > --- a/include/linux/sched.h
> > > +++ b/include/linux/sched.h
> > > @@ -568,6 +568,8 @@ struct sched_entity {
> > >     /* cached value of my_q->h_nr_running */
> > >     unsigned long                   runnable_weight;
> > >  #endif
> > > +   /* preemption offset in ns */
> > > +   long                            latency_offset;
> >
> > I wonder about the type here; does it make sense to have it depend on
> > the bitness; that is if s32 is big enough on 32bit then surely it is so
> > too on 64bit, and if not, then it should be unconditionally s64.
> >
>
> The cgroup patch has this as 'int'. I'm thinking we ought to be
> consistent :-)

Yes, good point


* Re: [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
  2023-02-21 14:21     ` Vincent Guittot
@ 2023-02-21 14:51       ` Peter Zijlstra
  2023-02-21 15:08       ` Peter Zijlstra
  1 sibling, 0 replies; 27+ messages in thread
From: Peter Zijlstra @ 2023-02-21 14:51 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Tue, Feb 21, 2023 at 03:21:54PM +0100, Vincent Guittot wrote:
> On Tue, 21 Feb 2023 at 14:05, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Fri, Jan 13, 2023 at 03:12:30PM +0100, Vincent Guittot wrote:
> > > @@ -6155,6 +6159,35 @@ static int sched_idle_cpu(int cpu)
> > >  }
> > >  #endif
> > >
> > > +static void set_next_buddy(struct sched_entity *se);
> > > +
> > > +static void check_preempt_from_others(struct cfs_rq *cfs, struct sched_entity *se)
> > > +{
> > > +     struct sched_entity *next;
> > > +
> > > +     if (se->latency_offset >= 0)
> > > +             return;
> > > +
> > > +     if (cfs->nr_running <= 1)
> > > +             return;
> > > +     /*
> > > +      * When waking from another class, we don't need to check to preempt at
> > > +      * wakeup and don't set next buddy as a candidate for being picked in
> > > +      * priority.
> > > +      * In case of simultaneous wakeup when current is another class, the
> > > +      * latency sensitive tasks lost opportunity to preempt non sensitive
> > > +      * tasks which woke up simultaneously.
> > > +      */
> > > +
> > > +     if (cfs->next)
> > > +             next = cfs->next;
> > > +     else
> > > +             next = __pick_first_entity(cfs);
> > > +
> > > +     if (next && wakeup_preempt_entity(next, se) == 1)
> > > +             set_next_buddy(se);
> > > +}
> > > +
> > >  /*
> > >   * The enqueue_task method is called before nr_running is
> > >   * increased. Here we update the fair scheduling stats and
> > > @@ -6241,14 +6274,15 @@ enqueue_task_fair(struct rq *rq, struct task_struct *p, int flags)
> > >       if (!task_new)
> > >               update_overutilized_status(rq);
> > >
> > > +     if (rq->curr->sched_class != &fair_sched_class)
> > > +             check_preempt_from_others(cfs_rq_of(&p->se), &p->se);
> > > +
> > >  enqueue_throttle:
> > >       assert_list_leaf_cfs_rq(rq);
> > >
> > >       hrtick_update(rq);
> > >  }
> >
> > Hmm.. This sets a next selection when the task gets enqueued while not
> > running a fair task -- and loses a wakeup preemption opportunity.
> >
> > Should we perhaps also do this for latency_nice == 0? In any case, I
> > think this can be moved to its own patch to avoid doing too much in the
> > one patch. It seems fairly self-contained.
> 
> This function is then removed by patch 9 as the additional rb tree
> fixes all cases

Ah, I'm currently 'stuck' at 8.. I'll get there :-)


* Re: [PATCH v10 6/9] sched/fair: Add sched group latency support
  2023-01-13 14:12 ` [PATCH v10 6/9] sched/fair: Add sched group latency support Vincent Guittot
@ 2023-02-21 15:01   ` Peter Zijlstra
  2023-02-21 15:32     ` Vincent Guittot
  0 siblings, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2023-02-21 15:01 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Fri, Jan 13, 2023 at 03:12:31PM +0100, Vincent Guittot wrote:

> +static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css,
> +				    struct cftype *cft)
> +{
> +	int prio, delta, last_delta = INT_MAX;
> +	s64 weight;
> +
> +	weight = css_tg(css)->latency_offset * NICE_LATENCY_WEIGHT_MAX;
> +	weight = div_s64(weight, get_sleep_latency(false));
> +
> +	/* Find the closest nice value to the current weight */

This comment isn't entirely accurate: since we only have the nice_write
interface below, this will be an exact match. The thing with weight is
that we first had the raw weight value interface and then the nice
interface had to map random values back to a 'nice' value.

Arguably we can simply store the raw nice value in write and print it
out again here.

> +	for (prio = 0; prio < ARRAY_SIZE(sched_latency_to_weight); prio++) {
> +		delta = abs(sched_latency_to_weight[prio] - weight);
> +		if (delta >= last_delta)
> +			break;
> +		last_delta = delta;
> +	}
> +
> +	return LATENCY_TO_NICE(prio-1);
> +}
> +
> +static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css,
> +				     struct cftype *cft, s64 nice)
> +{
> +	s64 latency_offset;
> +	long weight;
> +	int idx;
> +
> +	if (nice < MIN_LATENCY_NICE || nice > MAX_LATENCY_NICE)
> +		return -ERANGE;
> +
> +	idx = NICE_TO_LATENCY(nice);
> +	idx = array_index_nospec(idx, LATENCY_NICE_WIDTH);
> +	weight = sched_latency_to_weight[idx];
> +
> +	latency_offset = weight * get_sleep_latency(false);
> +	latency_offset = div_s64(latency_offset, NICE_LATENCY_WEIGHT_MAX);
> +
> +	return sched_group_set_latency(css_tg(css), latency_offset);
> +}
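
As a rough sketch of the alternative suggested above, storing the raw
nice value (a hypothetical tg->latency_nice field is assumed next to
the existing tg->latency_offset; untested):

	static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css,
					     struct cftype *cft)
	{
		/* Report the raw nice value stored at write time. */
		return css_tg(css)->latency_nice;
	}

	static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css,
					      struct cftype *cft, s64 nice)
	{
		struct task_group *tg = css_tg(css);
		s64 latency_offset;
		long weight;
		int idx, ret;

		if (nice < MIN_LATENCY_NICE || nice > MAX_LATENCY_NICE)
			return -ERANGE;

		idx = array_index_nospec(NICE_TO_LATENCY(nice), LATENCY_NICE_WIDTH);
		weight = sched_latency_to_weight[idx];

		latency_offset = weight * get_sleep_latency(false);
		latency_offset = div_s64(latency_offset, NICE_LATENCY_WEIGHT_MAX);

		ret = sched_group_set_latency(tg, latency_offset);
		if (!ret)
			tg->latency_nice = nice;	/* remember the raw value */

		return ret;
	}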


* Re: [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
  2023-02-21 14:21     ` Vincent Guittot
  2023-02-21 14:51       ` Peter Zijlstra
@ 2023-02-21 15:08       ` Peter Zijlstra
  2023-02-21 15:34         ` Vincent Guittot
  1 sibling, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2023-02-21 15:08 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Tue, Feb 21, 2023 at 03:21:54PM +0100, Vincent Guittot wrote:
> On Tue, 21 Feb 2023 at 14:05, Peter Zijlstra <peterz@infradead.org> wrote:

> > Should we perhaps also do this for latency_nice == 0? In any case, I
> > think this can be moved to its own patch to avoid doing too much in the
> > one patch. It seems fairly self-contained.
> 
> This function is then removed by patch 9 as the additional rb tree
> fixes all cases

Also, since you remove it again later, perhaps not introduce it at all?


* Re: [PATCH v10 8/9] sched/fair: Add latency list
  2023-01-13 14:12 ` [PATCH v10 8/9] sched/fair: Add latency list Vincent Guittot
@ 2023-02-21 15:11   ` Peter Zijlstra
  2023-02-21 15:42     ` Vincent Guittot
  2023-02-22  9:49   ` Peter Zijlstra
  1 sibling, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2023-02-21 15:11 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Fri, Jan 13, 2023 at 03:12:33PM +0100, Vincent Guittot wrote:
> @@ -12552,8 +12634,15 @@ int sched_group_set_latency(struct task_group *tg, s64 latency)
>  
>  	for_each_possible_cpu(i) {
>  		struct sched_entity *se = tg->se[i];
> +		struct rq *rq = cpu_rq(i);
> +		struct rq_flags rf;
> +
> +		rq_lock_irqsave(rq, &rf);
>  
> +		__dequeue_latency(se->cfs_rq, se);
>  		WRITE_ONCE(se->latency_offset, latency);
> +
> +		rq_unlock_irqrestore(rq, &rf);
>  	}

This seems asymmetric; maybe something like:

	queued = __dequeue_latency(..);
	WRITE_ONCE(...);
	if (queued)
		__enqueue_latency(...);

?
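
Spelled out a bit more (sketch, untested), with __dequeue_latency()
changed to report whether the entity was actually queued:

	static bool __dequeue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		if (RB_EMPTY_NODE(&se->latency_node))
			return false;

		rb_erase_cached(&se->latency_node, &cfs_rq->latency_timeline);
		RB_CLEAR_NODE(&se->latency_node);

		return true;
	}

and then in sched_group_set_latency():

		rq_lock_irqsave(rq, &rf);

		queued = __dequeue_latency(se->cfs_rq, se);
		WRITE_ONCE(se->latency_offset, latency);
		if (queued)
			__enqueue_latency(se->cfs_rq, se, ENQUEUE_WAKEUP);

		rq_unlock_irqrestore(rq, &rf);

ENQUEUE_WAKEUP is only passed to skip the delta_exec check in
__enqueue_latency(); that function already bails out by itself when the
new offset is >= 0.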


* Re: [PATCH v10 6/9] sched/fair: Add sched group latency support
  2023-02-21 15:01   ` Peter Zijlstra
@ 2023-02-21 15:32     ` Vincent Guittot
  0 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-02-21 15:32 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Tue, 21 Feb 2023 at 16:01, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Jan 13, 2023 at 03:12:31PM +0100, Vincent Guittot wrote:
>
> > +static s64 cpu_latency_nice_read_s64(struct cgroup_subsys_state *css,
> > +                                 struct cftype *cft)
> > +{
> > +     int prio, delta, last_delta = INT_MAX;
> > +     s64 weight;
> > +
> > +     weight = css_tg(css)->latency_offset * NICE_LATENCY_WEIGHT_MAX;
> > +     weight = div_s64(weight, get_sleep_latency(false));
> > +
> > +     /* Find the closest nice value to the current weight */
>
> This comment isn't entirely accurate: since we only have the nice_write
> interface below, this will be an exact match. The thing with weight is
> that we first had the raw weight value interface and then the nice
> interface had to map random values back to a 'nice' value.

Yes, there was a long discussion about the interface and, without any
simple raw value to share, we decided to only use latency_nice until
we find a generic metric.
>
> Arguably we can simply store the raw nice value in write and print it
> out again here.

Probably. I just wanted to avoid latency.nice being the main value
saved in the cgroup. But I suppose it could be OK to save it
directly.

>
> > +     for (prio = 0; prio < ARRAY_SIZE(sched_latency_to_weight); prio++) {
> > +             delta = abs(sched_latency_to_weight[prio] - weight);
> > +             if (delta >= last_delta)
> > +                     break;
> > +             last_delta = delta;
> > +     }
> > +
> > +     return LATENCY_TO_NICE(prio-1);
> > +}
> > +
> > +static int cpu_latency_nice_write_s64(struct cgroup_subsys_state *css,
> > +                                  struct cftype *cft, s64 nice)
> > +{
> > +     s64 latency_offset;
> > +     long weight;
> > +     int idx;
> > +
> > +     if (nice < MIN_LATENCY_NICE || nice > MAX_LATENCY_NICE)
> > +             return -ERANGE;
> > +
> > +     idx = NICE_TO_LATENCY(nice);
> > +     idx = array_index_nospec(idx, LATENCY_NICE_WIDTH);
> > +     weight = sched_latency_to_weight[idx];
> > +
> > +     latency_offset = weight * get_sleep_latency(false);
> > +     latency_offset = div_s64(latency_offset, NICE_LATENCY_WEIGHT_MAX);
> > +
> > +     return sched_group_set_latency(css_tg(css), latency_offset);
> > +}


* Re: [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup
  2023-02-21 15:08       ` Peter Zijlstra
@ 2023-02-21 15:34         ` Vincent Guittot
  0 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-02-21 15:34 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Tue, 21 Feb 2023 at 16:08, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Tue, Feb 21, 2023 at 03:21:54PM +0100, Vincent Guittot wrote:
> > On Tue, 21 Feb 2023 at 14:05, Peter Zijlstra <peterz@infradead.org> wrote:
>
> > > Should we perhaps also do this for latency_nice == 0? In any case, I
> > > think this can be moved to its own patch to avoid doing too much in the
> > > one patch. It seems fairly self-contained.
> >
> > This function is then removed by patch 9 as the additional rb tree
> > fixes all cases
>
> Also, since you remove it again later, perhaps not introduce it at all?

Yes, I have done the split so that patch 8 can easily be reverted if
needed while keeping correct behavior. I can probably remove this and
patch 9 completely.


* Re: [PATCH v10 8/9] sched/fair: Add latency list
  2023-02-21 15:11   ` Peter Zijlstra
@ 2023-02-21 15:42     ` Vincent Guittot
  0 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-02-21 15:42 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Tue, 21 Feb 2023 at 16:12, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Jan 13, 2023 at 03:12:33PM +0100, Vincent Guittot wrote:
> > @@ -12552,8 +12634,15 @@ int sched_group_set_latency(struct task_group *tg, s64 latency)
> >
> >       for_each_possible_cpu(i) {
> >               struct sched_entity *se = tg->se[i];
> > +             struct rq *rq = cpu_rq(i);
> > +             struct rq_flags rf;
> > +
> > +             rq_lock_irqsave(rq, &rf);
> >
> > +             __dequeue_latency(se->cfs_rq, se);
> >               WRITE_ONCE(se->latency_offset, latency);
> > +
> > +             rq_unlock_irqrestore(rq, &rf);
> >       }
>
> This seems asymmetric; maybe something like:
>
>         queued = __dequeue_latency(..);
>         WRITE_ONCE(...);
>         if (queued)
>                 __enqueue_latency(...);
>
> ?

Fair enough


* Re: [PATCH v10 8/9] sched/fair: Add latency list
  2023-01-13 14:12 ` [PATCH v10 8/9] sched/fair: Add latency list Vincent Guittot
  2023-02-21 15:11   ` Peter Zijlstra
@ 2023-02-22  9:49   ` Peter Zijlstra
  2023-02-22 11:16     ` Vincent Guittot
  1 sibling, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2023-02-22  9:49 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Fri, Jan 13, 2023 at 03:12:33PM +0100, Vincent Guittot wrote:

> +static void __enqueue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> +{
> +
> +	/* Only latency sensitive entity can be added to the list */
> +	if (se->latency_offset >= 0)
> +		return;
> +
> +	if (!RB_EMPTY_NODE(&se->latency_node))
> +		return;
> +
> +	/*
> +	 * An execution time less than sysctl_sched_min_granularity means that
> +	 * the entity has been preempted by a higher sched class or an entity
> +	 * with higher latency constraint.
> +	 * Put it back in the list so it gets a chance to run 1st during the
> +	 * next slice.
> +	 */
> +	if (!(flags & ENQUEUE_WAKEUP)) {
> +		u64 delta_exec = se->sum_exec_runtime - se->prev_sum_exec_runtime;
> +
> +		if (delta_exec >= sysctl_sched_min_granularity)
> +			return;
> +	}

I'm not a big fan of this dynamic enqueueing condition; it makes it
rather hard to interpret the below addition to pick_next_entity().

Let me think about this more... at the very least the comment with
__pick_first_latency() use below needs to be expanded upon if we keep it
like so.

> +
> +	rb_add_cached(&se->latency_node, &cfs_rq->latency_timeline, __latency_less);
> +}

> @@ -4966,7 +5040,7 @@ static struct sched_entity *
>  pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
>  {
>  	struct sched_entity *left = __pick_first_entity(cfs_rq);
> -	struct sched_entity *se;
> +	struct sched_entity *latency, *se;
>  
>  	/*
>  	 * If curr is set we have to see if its left of the leftmost entity
> @@ -5008,6 +5082,12 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
>  		se = cfs_rq->last;
>  	}
>  
> +	/* Check for a latency sensitive entity waiting to run */
> +	latency = __pick_first_latency(cfs_rq);
> +	if (latency && (latency != se) &&
> +	    wakeup_preempt_entity(latency, se) < 1)
> +		se = latency;

I'm not quite sure why this condition isn't sufficient on its own.
After all, if a task does a 'spurious' nanosleep it can get around the
'restriction' in __enqueue_latency() without any great penalty to its
actual bandwidth consumption.
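
For concreteness, the pattern being described is roughly the following
(userspace sketch; the 2.9 ms figure is just a stand-in for "a bit less
than sysctl_sched_min_granularity" on a given box):

	#include <time.h>

	/* Busy-loop for roughly @ns nanoseconds. */
	static void spin_for_ns(long ns)
	{
		struct timespec start, now;

		clock_gettime(CLOCK_MONOTONIC, &start);
		do {
			clock_gettime(CLOCK_MONOTONIC, &now);
		} while ((now.tv_sec - start.tv_sec) * 1000000000L +
			 (now.tv_nsec - start.tv_nsec) < ns);
	}

	int main(void)
	{
		const struct timespec tiny = { .tv_sec = 0, .tv_nsec = 1000 };

		for (;;) {
			/* Run just under min_granularity ... */
			spin_for_ns(2900000);
			/*
			 * ... then take a 'spurious' 1us nap so the next
			 * enqueue is ENQUEUE_WAKEUP and the task re-enters
			 * the latency rb tree despite burning plenty of CPU.
			 */
			nanosleep(&tiny, NULL);
		}
	}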


* Re: [PATCH v10 8/9] sched/fair: Add latency list
  2023-02-22  9:49   ` Peter Zijlstra
@ 2023-02-22 11:16     ` Vincent Guittot
  2023-02-27 13:29       ` Peter Zijlstra
  0 siblings, 1 reply; 27+ messages in thread
From: Vincent Guittot @ 2023-02-22 11:16 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Wed, 22 Feb 2023 at 10:50, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Fri, Jan 13, 2023 at 03:12:33PM +0100, Vincent Guittot wrote:
>
> > +static void __enqueue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> > +{
> > +
> > +     /* Only latency sensitive entity can be added to the list */
> > +     if (se->latency_offset >= 0)
> > +             return;
> > +
> > +     if (!RB_EMPTY_NODE(&se->latency_node))
> > +             return;
> > +
> > +     /*
> > +      * An execution time less than sysctl_sched_min_granularity means that
> > +      * the entity has been preempted by a higher sched class or an entity
> > +      * with higher latency constraint.
> > +      * Put it back in the list so it gets a chance to run 1st during the
> > +      * next slice.
> > +      */
> > +     if (!(flags & ENQUEUE_WAKEUP)) {
> > +             u64 delta_exec = se->sum_exec_runtime - se->prev_sum_exec_runtime;
> > +
> > +             if (delta_exec >= sysctl_sched_min_granularity)
> > +                     return;
> > +     }
>
> I'm not a big fan of this dynamic enqueueing condition; it makes it
> rather hard to interpret the below addition to pick_next_entity().
>
> Let me think about this more... at the very least the comment with
> __pick_first_latency() use below needs to be expanded upon if we keep it
> like so.

Only waking tasks should be added to the latency rb tree so they
can be selected to run 1st (as long as they don't use too much
runtime). But task A can wake up, preempt the current task B thanks to
its latency nice, start to run for a few usecs but then be immediately
preempted by an RT task C, for example. In this case, we consider that
task A didn't get a chance to run after its wakeup and we put it
back into the latency rb tree just as if task A had just woken up but
hadn't preempted the new current task C.

I have used sysctl_sched_min_granularity as this is the minimum runtime
for a task before it can be preempted at tick by another cfs task with a
lower vruntime, so if it has run for less than
sysctl_sched_min_granularity we are sure that it has been preempted by
higher prio tasks and not because it used up all its runtime compared to
the others.

>
> > +
> > +     rb_add_cached(&se->latency_node, &cfs_rq->latency_timeline, __latency_less);
> > +}
>
> > @@ -4966,7 +5040,7 @@ static struct sched_entity *
> >  pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> >  {
> >       struct sched_entity *left = __pick_first_entity(cfs_rq);
> > -     struct sched_entity *se;
> > +     struct sched_entity *latency, *se;
> >
> >       /*
> >        * If curr is set we have to see if its left of the leftmost entity
> > @@ -5008,6 +5082,12 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> >               se = cfs_rq->last;
> >       }
> >
> > +     /* Check for latency sensitive entity waiting for running */
> > +     latency = __pick_first_latency(cfs_rq);
> > +     if (latency && (latency != se) &&
> > +         wakeup_preempt_entity(latency, se) < 1)
> > +             se = latency;
>
> I'm not quite sure why this condition isn't sufficient on its own.
> After all, if a task does a 'spurious' nanosleep it can get around the
> 'restriction' in __enqueue_latency() without any great penalty to its
> actual bandwidth consumption.

Yes, it can, until it has used all its runtime.


* Re: [PATCH v10 8/9] sched/fair: Add latency list
  2023-02-22 11:16     ` Vincent Guittot
@ 2023-02-27 13:29       ` Peter Zijlstra
  2023-02-27 14:55         ` Vincent Guittot
  0 siblings, 1 reply; 27+ messages in thread
From: Peter Zijlstra @ 2023-02-27 13:29 UTC (permalink / raw)
  To: Vincent Guittot
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Wed, Feb 22, 2023 at 12:16:29PM +0100, Vincent Guittot wrote:
> On Wed, 22 Feb 2023 at 10:50, Peter Zijlstra <peterz@infradead.org> wrote:
> >
> > On Fri, Jan 13, 2023 at 03:12:33PM +0100, Vincent Guittot wrote:
> >
> > > +static void __enqueue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> > > +{
> > > +
> > > +     /* Only latency sensitive entity can be added to the list */
> > > +     if (se->latency_offset >= 0)
> > > +             return;
> > > +
> > > +     if (!RB_EMPTY_NODE(&se->latency_node))
> > > +             return;
> > > +
> > > +     /*
> > > +      * An execution time less than sysctl_sched_min_granularity means that
> > > +      * the entity has been preempted by a higher sched class or an entity
> > > +      * with higher latency constraint.
> > > +      * Put it back in the list so it gets a chance to run 1st during the
> > > +      * next slice.
> > > +      */
> > > +     if (!(flags & ENQUEUE_WAKEUP)) {
> > > +             u64 delta_exec = se->sum_exec_runtime - se->prev_sum_exec_runtime;
> > > +
> > > +             if (delta_exec >= sysctl_sched_min_granularity)
> > > +                     return;
> > > +     }
> >
> > I'm not a big fan of this dynamic enqueueing condition; it makes it
> > rather hard to interpret the below addition to pick_next_entity().
> >
> > Let me think about this more... at the very least the comment with
> > __pick_first_latency() use below needs to be expanded upon if we keep it
> > like so.
>
> Only waking tasks should be added to the latency rb tree so they

But that's what I'm saying, you can game this by doing super short
sleeps every min_gran.

> can be selected to run 1st (as long as they don't use too much
> runtime). But task A can wake up, preempt the current task B thanks to
> its latency nice, start to run for a few usecs but then be immediately
> preempted by an RT task C, for example. In this case, we consider that
> task A didn't get a chance to run after its wakeup and we put it
> back into the latency rb tree just as if task A had just woken up but
> hadn't preempted the new current task C.

So ideally, and this is where I'm very slow with thinking, that
wakeup_preempt_entity() condition here:

> > > @@ -5008,6 +5082,12 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> > >               se = cfs_rq->last;
> > >       }
> > >
> > > +     /* Check for a latency sensitive entity waiting to run */
> > > +     latency = __pick_first_latency(cfs_rq);
> > > +     if (latency && (latency != se) &&
> > > +         wakeup_preempt_entity(latency, se) < 1)
> > > +             se = latency;

should be sufficient to provide fair bandwidth usage. The EEVDF paper
achieves this by selecting the leftmost eligible task, where eligibility
is dependent on negative lag. Only those tasks that are behind the pack
are allowed runtime.
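
In code terms, that eligibility test would look something like this
(conceptual sketch only; avg_vruntime() -- some weighted average of the
queue's vruntime -- is assumed and does not exist in this series):

	/*
	 * An entity is eligible when it has not yet received its ideal
	 * service, i.e. it is still 'behind the pack':
	 */
	static bool entity_eligible(struct cfs_rq *cfs_rq, struct sched_entity *se)
	{
		return (s64)(se->vruntime - avg_vruntime(cfs_rq)) <= 0;
	}

Only eligible entities would then be considered by pick_next_entity().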

Now clearly our min_vruntime is unsuited for that exact scheme, but iirc
wakeup_preempt_entity() does not allow for starvation, so we should be
good, even without that weird condition in __enqueue_latency(), hmm?


* Re: [PATCH v10 8/9] sched/fair: Add latency list
  2023-02-27 13:29       ` Peter Zijlstra
@ 2023-02-27 14:55         ` Vincent Guittot
  0 siblings, 0 replies; 27+ messages in thread
From: Vincent Guittot @ 2023-02-27 14:55 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: mingo, juri.lelli, dietmar.eggemann, rostedt, bsegall, mgorman,
	bristot, vschneid, linux-kernel, parth, cgroups, qyousef,
	chris.hyser, patrick.bellasi, David.Laight, pjt, pavel, tj,
	qperret, tim.c.chen, joshdon, timj, kprateek.nayak, yu.c.chen,
	youssefesmat, joel

On Mon, 27 Feb 2023 at 14:29, Peter Zijlstra <peterz@infradead.org> wrote:
>
> On Wed, Feb 22, 2023 at 12:16:29PM +0100, Vincent Guittot wrote:
> > On Wed, 22 Feb 2023 at 10:50, Peter Zijlstra <peterz@infradead.org> wrote:
> > >
> > > On Fri, Jan 13, 2023 at 03:12:33PM +0100, Vincent Guittot wrote:
> > >
> > > > +static void __enqueue_latency(struct cfs_rq *cfs_rq, struct sched_entity *se, int flags)
> > > > +{
> > > > +
> > > > +     /* Only latency sensitive entity can be added to the list */
> > > > +     if (se->latency_offset >= 0)
> > > > +             return;
> > > > +
> > > > +     if (!RB_EMPTY_NODE(&se->latency_node))
> > > > +             return;
> > > > +
> > > > +     /*
> > > > +      * An execution time less than sysctl_sched_min_granularity means that
> > > > +      * the entity has been preempted by a higher sched class or an entity
> > > > +      * with higher latency constraint.
> > > > +      * Put it back in the list so it gets a chance to run 1st during the
> > > > +      * next slice.
> > > > +      */
> > > > +     if (!(flags & ENQUEUE_WAKEUP)) {
> > > > +             u64 delta_exec = se->sum_exec_runtime - se->prev_sum_exec_runtime;
> > > > +
> > > > +             if (delta_exec >= sysctl_sched_min_granularity)
> > > > +                     return;
> > > > +     }
> > >
> > > I'm not a big fan of this dynamic enqueueing condition; it makes it
> > > rather hard to interpret the below addition to pick_next_entity().
> > >
> > > Let me think about this more... at the very least the comment with
> > > __pick_first_latency() use below needs to be expanded upon if we keep it
> > > like so.
> >
> > Only the waking tasks should be added in the latency rb tree so they
>
> But that's what I'm saying, you can game this by doing super short
> sleeps every min_gran.

Yes, it is for a time. But I'm not sure this will always be beneficial
in the end, because most of the time you will get your full slice
without playing this game.

>
> > can be selected to run 1st (as long as they don't use too much
> > runtime). But task A can wake up, preempts current task B thanks to
> > its latency nice , starts to run few usecs but then is immediately
> > preempted by a RT task C as an example. In this case, we consider that
> > the task A didn't get a chance to run after its wakeup and we put it
> > back to the latency rb tree just as if task A has just woken up but
> > didn't preempted the new current task C.
>
> So ideally, and this is where I'm very slow with thinking, that
> wakeup_preempt_entity() condition here:
>
> > > > @@ -5008,6 +5082,12 @@ pick_next_entity(struct cfs_rq *cfs_rq, struct sched_entity *curr)
> > > >               se = cfs_rq->last;
> > > >       }
> > > >
> > > > +     /* Check for a latency sensitive entity waiting to run */
> > > > +     latency = __pick_first_latency(cfs_rq);
> > > > +     if (latency && (latency != se) &&
> > > > +         wakeup_preempt_entity(latency, se) < 1)
> > > > +             se = latency;
>
> should be sufficient to provide fair bandwidth usage. The EEVDF paper
> achieves this by selecting the leftmost eligible task, where eligibility
> is dependent on negative lag. Only those tasks that are behind the pack
> are allowed runtime.
>
> Now clearly our min_vruntime is unsuited for that exact scheme, but iirc
> wakeup_preempt_entity() does not allow for starvation, so we should be
> good, even without that weird condition in __enqueue_latency(), hmm?

If we unconditionally __enqueue_latency() the task then it can end up
providing more bandwidth to those tasks, because it's like having a
larger sleep credit than the others.

The original condition in __enqueue_latency() was:
    if (!(flags & ENQUEUE_WAKEUP)) {
        return;
    }

So the task gets a chance to preempt others only at wakeup.
But then, I have seen such tasks being preempted immediately by RT
tasks and, as a result, losing their latency advantage. Maybe I should
keep the condition above and add the weird condition in a separate
patch with dedicated figures.


Thread overview: 27+ messages
2023-01-13 14:12 [PATCH v10 0/9] Add latency priority for CFS class Vincent Guittot
2023-01-13 14:12 ` [PATCH v10 1/9] sched/fair: fix unfairness at wakeup Vincent Guittot
2023-01-13 14:12 ` [PATCH v10 2/9] sched: Introduce latency-nice as a per-task attribute Vincent Guittot
2023-01-13 14:12 ` [PATCH v10 3/9] sched/core: Propagate parent task's latency requirements to the child task Vincent Guittot
2023-01-13 14:12 ` [PATCH v10 4/9] sched: Allow sched_{get,set}attr to change latency_nice of the task Vincent Guittot
2023-01-13 14:12 ` [PATCH v10 5/9] sched/fair: Take into account latency priority at wakeup Vincent Guittot
2023-02-21 12:52   ` Peter Zijlstra
2023-02-21 14:12     ` Vincent Guittot
2023-02-21 14:15     ` Peter Zijlstra
2023-02-21 14:25       ` Vincent Guittot
2023-02-21 13:04   ` Peter Zijlstra
2023-02-21 14:21     ` Vincent Guittot
2023-02-21 14:51       ` Peter Zijlstra
2023-02-21 15:08       ` Peter Zijlstra
2023-02-21 15:34         ` Vincent Guittot
2023-01-13 14:12 ` [PATCH v10 6/9] sched/fair: Add sched group latency support Vincent Guittot
2023-02-21 15:01   ` Peter Zijlstra
2023-02-21 15:32     ` Vincent Guittot
2023-01-13 14:12 ` [PATCH v10 7/9] sched/core: Support latency priority with sched core Vincent Guittot
2023-01-13 14:12 ` [PATCH v10 8/9] sched/fair: Add latency list Vincent Guittot
2023-02-21 15:11   ` Peter Zijlstra
2023-02-21 15:42     ` Vincent Guittot
2023-02-22  9:49   ` Peter Zijlstra
2023-02-22 11:16     ` Vincent Guittot
2023-02-27 13:29       ` Peter Zijlstra
2023-02-27 14:55         ` Vincent Guittot
2023-01-13 14:12 ` [PATCH v10 9/9] sched/fair: remove check_preempt_from_others Vincent Guittot
