* [PATCH 0/6] sched/deadline, (rt): Sched class cleanups
@ 2022-03-02 18:34 Dietmar Eggemann
  2022-03-02 18:34 ` [PATCH 1/6] sched/deadline: Remove unused def_dl_bandwidth Dietmar Eggemann
                   ` (6 more replies)
  0 siblings, 7 replies; 15+ messages in thread
From: Dietmar Eggemann @ 2022-03-02 18:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt,
	Daniel Bristot de Oliveira
  Cc: Vincent Guittot, Mel Gorman, Ben Segall, Luca Abeni, linux-kernel

While trying to improve the Deadline sched class behaviour for
asymmetric CPU capacity systems I came across some possible
cleanups for DL (and RT).

Overview:

[PATCH 1/6] - Remove `struct dl_bandwidth def_dl_bandwidth`.

[PATCH 2/6] - Move functions which don't have to be exported into the
              DL sched class source file.

[PATCH 3/6] - Merge two DL admission control functions which provide
              very similar functionality.

[PATCH 4/6] - Use DL rb_entry() macros and cached rbtree wrapper
              `rb_first_cached()` consistently.

[PATCH 5/6] - Remove unused !CONFIG_SMP function definitions in DL/RT.

[PATCH 6/6] - Remove redundant function parameter in DL/RT.

Dietmar Eggemann (6):
  sched/deadline: Remove unused def_dl_bandwidth
  sched/deadline: Move bandwidth mgmt and reclaim functions into sched
    class source file
  sched/deadline: Merge dl_task_can_attach() and dl_cpu_busy()
  sched/deadline: Use __node_2_[pdl|dle]() and rb_first_cached()
    consistently
  sched/deadline,rt: Remove unused functions for !CONFIG_SMP
  sched/deadline,rt: Remove unused parameter from
    pick_next_[rt|dl]_entity()

 kernel/sched/core.c     |  14 ++--
 kernel/sched/deadline.c | 141 ++++++++++++++++++++--------------------
 kernel/sched/rt.c       |  16 +----
 kernel/sched/sched.h    |  53 +--------------
 4 files changed, 84 insertions(+), 140 deletions(-)

-- 
2.25.1



* [PATCH 1/6] sched/deadline: Remove unused def_dl_bandwidth
  2022-03-02 18:34 [PATCH 0/6] sched/deadline, (rt): Sched class cleanups Dietmar Eggemann
@ 2022-03-02 18:34 ` Dietmar Eggemann
  2022-03-08 22:25   ` [tip: sched/core] " tip-bot2 for Dietmar Eggemann
  2022-03-02 18:34 ` [PATCH 2/6] sched/deadline: Move bandwidth mgmt and reclaim functions into sched class source file Dietmar Eggemann
                   ` (5 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Dietmar Eggemann @ 2022-03-02 18:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt,
	Daniel Bristot de Oliveira
  Cc: Vincent Guittot, Mel Gorman, Ben Segall, Luca Abeni, linux-kernel

Since commit 1724813d9f2c ("sched/deadline: Remove the sysctl_sched_dl
knobs"), the default deadline bandwidth control structure
def_dl_bandwidth has served no purpose. Remove it.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
 kernel/sched/core.c     | 1 -
 kernel/sched/deadline.c | 7 -------
 kernel/sched/sched.h    | 1 -
 3 files changed, 9 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3aafc15da24a..d342c4c779f7 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -9420,7 +9420,6 @@ void __init sched_init(void)
 #endif /* CONFIG_CPUMASK_OFFSTACK */
 
 	init_rt_bandwidth(&def_rt_bandwidth, global_rt_period(), global_rt_runtime());
-	init_dl_bandwidth(&def_dl_bandwidth, global_rt_period(), global_rt_runtime());
 
 #ifdef CONFIG_SMP
 	init_defrootdomain();
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 62f0cf842277..ed4251fa87c7 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -18,8 +18,6 @@
 #include "sched.h"
 #include "pelt.h"
 
-struct dl_bandwidth def_dl_bandwidth;
-
 static inline struct task_struct *dl_task_of(struct sched_dl_entity *dl_se)
 {
 	return container_of(dl_se, struct task_struct, dl);
@@ -423,12 +421,10 @@ void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime)
 void init_dl_bw(struct dl_bw *dl_b)
 {
 	raw_spin_lock_init(&dl_b->lock);
-	raw_spin_lock(&def_dl_bandwidth.dl_runtime_lock);
 	if (global_rt_runtime() == RUNTIME_INF)
 		dl_b->bw = -1;
 	else
 		dl_b->bw = to_ratio(global_rt_period(), global_rt_runtime());
-	raw_spin_unlock(&def_dl_bandwidth.dl_runtime_lock);
 	dl_b->total_bw = 0;
 }
 
@@ -2731,9 +2727,6 @@ void sched_dl_do_global(void)
 	int cpu;
 	unsigned long flags;
 
-	def_dl_bandwidth.dl_period = global_rt_period();
-	def_dl_bandwidth.dl_runtime = global_rt_runtime();
-
 	if (global_rt_runtime() != RUNTIME_INF)
 		new_bw = to_ratio(global_rt_period(), global_rt_runtime());
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3da5718cd641..a8b8516b8452 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2333,7 +2333,6 @@ extern void resched_cpu(int cpu);
 extern struct rt_bandwidth def_rt_bandwidth;
 extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
 
-extern struct dl_bandwidth def_dl_bandwidth;
 extern void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime);
 extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
 extern void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se);
-- 
2.25.1



* [PATCH 2/6] sched/deadline: Move bandwidth mgmt and reclaim functions into sched class source file
  2022-03-02 18:34 [PATCH 0/6] sched/deadline, (rt): Sched class cleanups Dietmar Eggemann
  2022-03-02 18:34 ` [PATCH 1/6] sched/deadline: Remove unused def_dl_bandwidth Dietmar Eggemann
@ 2022-03-02 18:34 ` Dietmar Eggemann
  2022-03-08 22:25   ` [tip: sched/core] " tip-bot2 for Dietmar Eggemann
  2022-03-02 18:34 ` [PATCH 3/6] sched/deadline: Merge dl_task_can_attach() and dl_cpu_busy() Dietmar Eggemann
                   ` (4 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Dietmar Eggemann @ 2022-03-02 18:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt,
	Daniel Bristot de Oliveira
  Cc: Vincent Guittot, Mel Gorman, Ben Segall, Luca Abeni, linux-kernel

Move the deadline bandwidth management (admission control) functions
__dl_add(), __dl_sub() and __dl_overflow(), as well as the bandwidth
reclaim function __dl_update(), from the private task scheduler header
file (kernel/sched/sched.h) to the deadline sched class source file
(kernel/sched/deadline.c).

These functions are only used internally, so they don't have to be
exported.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
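Not part of the change itself, but for reference, a minimal user-space
sketch of the admission-control arithmetic these helpers implement. It
mirrors __dl_overflow() from the hunk below; the shift values and the
numbers in main() are illustrative assumptions, not values taken from
this series.

/*
 * Illustrative user-space model of the dl_bw bookkeeping; not kernel
 * code. BW_SHIFT/SCHED_CAPACITY_SHIFT mirror the kernel's fixed-point
 * conventions but are assumptions of this sketch.
 */
#include <stdio.h>
#include <stdint.h>
#include <stdbool.h>

#define BW_SHIFT		20
#define SCHED_CAPACITY_SHIFT	10

struct dl_bw_model {
	int64_t		bw;		/* per root domain limit, -1 == unlimited */
	uint64_t	total_bw;	/* sum of admitted task bandwidths */
};

/* runtime/period as a fixed-point ratio, like to_ratio() */
static uint64_t to_ratio(uint64_t period, uint64_t runtime)
{
	return (runtime << BW_SHIFT) / period;
}

/* like cap_scale(): scale the limit by the summed CPU capacity */
static uint64_t cap_scale(uint64_t bw, uint64_t cap)
{
	return (bw * cap) >> SCHED_CAPACITY_SHIFT;
}

/* __dl_overflow(): would replacing old_bw by new_bw exceed the limit? */
static bool dl_overflow(struct dl_bw_model *b, uint64_t cap,
			uint64_t old_bw, uint64_t new_bw)
{
	return b->bw != -1 &&
	       cap_scale((uint64_t)b->bw, cap) < b->total_bw - old_bw + new_bw;
}

int main(void)
{
	/* assumed example: 4 CPUs of capacity 1024 each, 95% DL limit */
	struct dl_bw_model b = {
		.bw		= (int64_t)to_ratio(1000000, 950000),
		.total_bw	= 0,
	};
	uint64_t cap = 4 * 1024;
	uint64_t tsk_bw = to_ratio(100000, 30000);	/* 30ms every 100ms */

	if (!dl_overflow(&b, cap, 0, tsk_bw))
		b.total_bw += tsk_bw;	/* the total_bw half of __dl_add() */

	printf("total_bw=%llu limit=%llu\n",
	       (unsigned long long)b.total_bw,
	       (unsigned long long)cap_scale((uint64_t)b.bw, cap));
	return 0;
}
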
 kernel/sched/deadline.c | 44 ++++++++++++++++++++++++++++++++++++
 kernel/sched/sched.h    | 49 -----------------------------------------
 2 files changed, 44 insertions(+), 49 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index ed4251fa87c7..81bf97648e42 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -128,6 +128,21 @@ static inline bool dl_bw_visited(int cpu, u64 gen)
 	rd->visit_gen = gen;
 	return false;
 }
+
+static inline
+void __dl_update(struct dl_bw *dl_b, s64 bw)
+{
+	struct root_domain *rd = container_of(dl_b, struct root_domain, dl_bw);
+	int i;
+
+	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
+			 "sched RCU must be held");
+	for_each_cpu_and(i, rd->span, cpu_active_mask) {
+		struct rq *rq = cpu_rq(i);
+
+		rq->dl.extra_bw += bw;
+	}
+}
 #else
 static inline struct dl_bw *dl_bw_of(int i)
 {
@@ -148,8 +163,37 @@ static inline bool dl_bw_visited(int cpu, u64 gen)
 {
 	return false;
 }
+
+static inline
+void __dl_update(struct dl_bw *dl_b, s64 bw)
+{
+	struct dl_rq *dl = container_of(dl_b, struct dl_rq, dl_bw);
+
+	dl->extra_bw += bw;
+}
 #endif
 
+static inline
+void __dl_sub(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
+{
+	dl_b->total_bw -= tsk_bw;
+	__dl_update(dl_b, (s32)tsk_bw / cpus);
+}
+
+static inline
+void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
+{
+	dl_b->total_bw += tsk_bw;
+	__dl_update(dl_b, -((s32)tsk_bw / cpus));
+}
+
+static inline bool
+__dl_overflow(struct dl_bw *dl_b, unsigned long cap, u64 old_bw, u64 new_bw)
+{
+	return dl_b->bw != -1 &&
+	       cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
+}
+
 static inline
 void __add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 {
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index a8b8516b8452..4dfc3b02df61 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -301,29 +301,6 @@ struct dl_bw {
 	u64			total_bw;
 };
 
-static inline void __dl_update(struct dl_bw *dl_b, s64 bw);
-
-static inline
-void __dl_sub(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
-{
-	dl_b->total_bw -= tsk_bw;
-	__dl_update(dl_b, (s32)tsk_bw / cpus);
-}
-
-static inline
-void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
-{
-	dl_b->total_bw += tsk_bw;
-	__dl_update(dl_b, -((s32)tsk_bw / cpus));
-}
-
-static inline bool __dl_overflow(struct dl_bw *dl_b, unsigned long cap,
-				 u64 old_bw, u64 new_bw)
-{
-	return dl_b->bw != -1 &&
-	       cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
-}
-
 /*
  * Verify the fitness of task @p to run on @cpu taking into account the
  * CPU original capacity and the runtime/deadline ratio of the task.
@@ -2748,32 +2725,6 @@ extern void nohz_run_idle_balance(int cpu);
 static inline void nohz_run_idle_balance(int cpu) { }
 #endif
 
-#ifdef CONFIG_SMP
-static inline
-void __dl_update(struct dl_bw *dl_b, s64 bw)
-{
-	struct root_domain *rd = container_of(dl_b, struct root_domain, dl_bw);
-	int i;
-
-	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
-			 "sched RCU must be held");
-	for_each_cpu_and(i, rd->span, cpu_active_mask) {
-		struct rq *rq = cpu_rq(i);
-
-		rq->dl.extra_bw += bw;
-	}
-}
-#else
-static inline
-void __dl_update(struct dl_bw *dl_b, s64 bw)
-{
-	struct dl_rq *dl = container_of(dl_b, struct dl_rq, dl_bw);
-
-	dl->extra_bw += bw;
-}
-#endif
-
-
 #ifdef CONFIG_IRQ_TIME_ACCOUNTING
 struct irqtime {
 	u64			total;
-- 
2.25.1



* [PATCH 3/6] sched/deadline: Merge dl_task_can_attach() and dl_cpu_busy()
  2022-03-02 18:34 [PATCH 0/6] sched/deadline, (rt): Sched class cleanups Dietmar Eggemann
  2022-03-02 18:34 ` [PATCH 1/6] sched/deadline: Remove unused def_dl_bandwidth Dietmar Eggemann
  2022-03-02 18:34 ` [PATCH 2/6] sched/deadline: Move bandwidth mgmt and reclaim functions into sched class source file Dietmar Eggemann
@ 2022-03-02 18:34 ` Dietmar Eggemann
  2022-03-08 22:25   ` [tip: sched/core] " tip-bot2 for Dietmar Eggemann
  2022-03-02 18:34 ` [PATCH 4/6] sched/deadline: Use __node_2_[pdl|dle]() and rb_first_cached() consistently Dietmar Eggemann
                   ` (3 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Dietmar Eggemann @ 2022-03-02 18:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt,
	Daniel Bristot de Oliveira
  Cc: Vincent Guittot, Mel Gorman, Ben Segall, Luca Abeni, linux-kernel

Both functions do almost the same thing, namely check whether admission
control is still respected.

With exclusive cpusets, dl_task_can_attach() checks whether the
destination cpuset (i.e. its root domain) has enough CPU capacity to
accommodate the task, whereas dl_cpu_busy() checks whether the cpuset
retains enough CPU capacity when one of its CPUs is hot-plugged out.

In other words, dl_task_can_attach() is used to check if a task can be
admitted, while dl_cpu_busy() is used to check if a CPU can be
hot-plugged out.

Make dl_cpu_busy() able to deal with a (possibly NULL) task and use it
instead of dl_task_can_attach() in task_can_attach().

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
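As an aside, the control flow of the merged helper can be summarised by
the following user-space sketch. It is not the kernel function itself;
the limit and bandwidth numbers are made up for the example. A NULL task
models the hotplug path, a non-NULL task models task_can_attach().

/*
 * User-space sketch of the merged admission check; not kernel code.
 * NULL task  -> pure capacity check (cpuset_cpu_inactive() path)
 * !NULL task -> check plus bandwidth reservation (task_can_attach() path)
 */
#include <stdio.h>
#include <stddef.h>
#include <stdint.h>
#include <stdbool.h>
#include <errno.h>

struct task_model {
	uint64_t dl_bw;			/* task bandwidth */
};

struct dl_bw_model {
	uint64_t limit;			/* capacity-scaled root domain limit */
	uint64_t total_bw;		/* bandwidth of already admitted tasks */
};

/* shape of the merged dl_cpu_busy(): 0 on success, -EBUSY on overflow */
static int dl_cpu_busy_model(struct dl_bw_model *b, const struct task_model *p)
{
	uint64_t new_bw = p ? p->dl_bw : 0;
	bool overflow = b->total_bw + new_bw > b->limit;

	if (!overflow && p)
		b->total_bw += new_bw;	/* reserve space, as __dl_add() does */

	return overflow ? -EBUSY : 0;
}

int main(void)
{
	struct dl_bw_model b = { .limit = 950, .total_bw = 700 };
	struct task_model p = { .dl_bw = 300 };

	printf("attach task: %d\n", dl_cpu_busy_model(&b, &p));	/* -EBUSY */
	printf("cpu offline: %d\n", dl_cpu_busy_model(&b, NULL));	/* 0 */
	return 0;
}
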
 kernel/sched/core.c     | 13 +++++++----
 kernel/sched/deadline.c | 52 +++++++++++------------------------------
 kernel/sched/sched.h    |  3 +--
 3 files changed, 24 insertions(+), 44 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d342c4c779f7..68736d1dc0f4 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8805,8 +8805,11 @@ int task_can_attach(struct task_struct *p,
 	}
 
 	if (dl_task(p) && !cpumask_intersects(task_rq(p)->rd->span,
-					      cs_cpus_allowed))
-		ret = dl_task_can_attach(p, cs_cpus_allowed);
+					      cs_cpus_allowed)) {
+		int cpu = cpumask_any_and(cpu_active_mask, cs_cpus_allowed);
+
+		ret = dl_cpu_busy(cpu, p);
+	}
 
 out:
 	return ret;
@@ -9090,8 +9093,10 @@ static void cpuset_cpu_active(void)
 static int cpuset_cpu_inactive(unsigned int cpu)
 {
 	if (!cpuhp_tasks_frozen) {
-		if (dl_cpu_busy(cpu))
-			return -EBUSY;
+		int ret = dl_cpu_busy(cpu, NULL);
+
+		if (ret)
+			return ret;
 		cpuset_update_active_cpus();
 	} else {
 		num_cpus_frozen++;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 81bf97648e42..de677b1e3767 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2992,41 +2992,6 @@ bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
 }
 
 #ifdef CONFIG_SMP
-int dl_task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_allowed)
-{
-	unsigned long flags, cap;
-	unsigned int dest_cpu;
-	struct dl_bw *dl_b;
-	bool overflow;
-	int ret;
-
-	dest_cpu = cpumask_any_and(cpu_active_mask, cs_cpus_allowed);
-
-	rcu_read_lock_sched();
-	dl_b = dl_bw_of(dest_cpu);
-	raw_spin_lock_irqsave(&dl_b->lock, flags);
-	cap = dl_bw_capacity(dest_cpu);
-	overflow = __dl_overflow(dl_b, cap, 0, p->dl.dl_bw);
-	if (overflow) {
-		ret = -EBUSY;
-	} else {
-		/*
-		 * We reserve space for this task in the destination
-		 * root_domain, as we can't fail after this point.
-		 * We will free resources in the source root_domain
-		 * later on (see set_cpus_allowed_dl()).
-		 */
-		int cpus = dl_bw_cpus(dest_cpu);
-
-		__dl_add(dl_b, p->dl.dl_bw, cpus);
-		ret = 0;
-	}
-	raw_spin_unlock_irqrestore(&dl_b->lock, flags);
-	rcu_read_unlock_sched();
-
-	return ret;
-}
-
 int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur,
 				 const struct cpumask *trial)
 {
@@ -3048,7 +3013,7 @@ int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur,
 	return ret;
 }
 
-bool dl_cpu_busy(unsigned int cpu)
+int dl_cpu_busy(int cpu, struct task_struct *p)
 {
 	unsigned long flags, cap;
 	struct dl_bw *dl_b;
@@ -3058,11 +3023,22 @@ bool dl_cpu_busy(unsigned int cpu)
 	dl_b = dl_bw_of(cpu);
 	raw_spin_lock_irqsave(&dl_b->lock, flags);
 	cap = dl_bw_capacity(cpu);
-	overflow = __dl_overflow(dl_b, cap, 0, 0);
+	overflow = __dl_overflow(dl_b, cap, 0, p ? p->dl.dl_bw : 0);
+
+	if (!overflow && p) {
+		/*
+		 * We reserve space for this task in the destination
+		 * root_domain, as we can't fail after this point.
+		 * We will free resources in the source root_domain
+		 * later on (see set_cpus_allowed_dl()).
+		 */
+		__dl_add(dl_b, p->dl.dl_bw, dl_bw_cpus(cpu));
+	}
+
 	raw_spin_unlock_irqrestore(&dl_b->lock, flags);
 	rcu_read_unlock_sched();
 
-	return overflow;
+	return overflow ? -EBUSY : 0;
 }
 #endif
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4dfc3b02df61..0720cf0c7df1 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -324,9 +324,8 @@ extern void __setparam_dl(struct task_struct *p, const struct sched_attr *attr);
 extern void __getparam_dl(struct task_struct *p, struct sched_attr *attr);
 extern bool __checkparam_dl(const struct sched_attr *attr);
 extern bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr);
-extern int  dl_task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_allowed);
 extern int  dl_cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
-extern bool dl_cpu_busy(unsigned int cpu);
+extern int  dl_cpu_busy(int cpu, struct task_struct *p);
 
 #ifdef CONFIG_CGROUP_SCHED
 
-- 
2.25.1



* [PATCH 4/6] sched/deadline: Use __node_2_[pdl|dle]() and rb_first_cached() consistently
  2022-03-02 18:34 [PATCH 0/6] sched/deadline, (rt): Sched class cleanups Dietmar Eggemann
                   ` (2 preceding siblings ...)
  2022-03-02 18:34 ` [PATCH 3/6] sched/deadline: Merge dl_task_can_attach() and dl_cpu_busy() Dietmar Eggemann
@ 2022-03-02 18:34 ` Dietmar Eggemann
  2022-03-08 22:25   ` [tip: sched/core] " tip-bot2 for Dietmar Eggemann
  2022-03-02 18:34 ` [PATCH 5/6] sched/deadline,rt: Remove unused functions for !CONFIG_SMP Dietmar Eggemann
                   ` (2 subsequent siblings)
  6 siblings, 1 reply; 15+ messages in thread
From: Dietmar Eggemann @ 2022-03-02 18:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt,
	Daniel Bristot de Oliveira
  Cc: Vincent Guittot, Mel Gorman, Ben Segall, Luca Abeni, linux-kernel

Deploy __node_2_pdl(node), __node_2_dle(node) and rb_first_cached()
consistently throughout the sched class source file, which at least
makes the code easier to read.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
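For readers less familiar with the pattern, here is a user-space sketch
of what __node_2_dle() boils down to. The toy types are assumptions of
the sketch; only the macro shape comes from the patch.

/*
 * User-space illustration of the rb_entry()/container_of() pattern
 * behind __node_2_dle(); toy types, not the kernel rbtree code.
 */
#include <stdio.h>
#include <stddef.h>

struct rb_node_model {
	struct rb_node_model *left, *right;	/* stand-in for struct rb_node */
};

struct sched_dl_entity_model {
	unsigned long long deadline;
	struct rb_node_model rb_node;		/* embedded tree linkage */
};

/* container_of(): map a member pointer back to its enclosing structure */
#define container_of(ptr, type, member) \
	((type *)((char *)(ptr) - offsetof(type, member)))

/* rb_entry() is container_of(); __node_2_dle() is this instantiation */
#define rb_entry(ptr, type, member)	container_of(ptr, type, member)
#define __node_2_dle(node) \
	rb_entry((node), struct sched_dl_entity_model, rb_node)

int main(void)
{
	struct sched_dl_entity_model dl_se = { .deadline = 42 };
	/* e.g. what rb_first_cached() would hand back for the leftmost entity */
	struct rb_node_model *node = &dl_se.rb_node;

	printf("deadline = %llu\n", __node_2_dle(node)->deadline);
	return 0;
}
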
 kernel/sched/deadline.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index de677b1e3767..3242dd4972e1 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -450,7 +450,7 @@ static inline int is_leftmost(struct task_struct *p, struct dl_rq *dl_rq)
 {
 	struct sched_dl_entity *dl_se = &p->dl;
 
-	return dl_rq->root.rb_leftmost == &dl_se->rb_node;
+	return rb_first_cached(&dl_rq->root) == &dl_se->rb_node;
 }
 
 static void init_dl_rq_bw_ratio(struct dl_rq *dl_rq);
@@ -1433,6 +1433,9 @@ void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se)
 	timer->function = inactive_task_timer;
 }
 
+#define __node_2_dle(node) \
+	rb_entry((node), struct sched_dl_entity, rb_node)
+
 #ifdef CONFIG_SMP
 
 static void inc_dl_deadline(struct dl_rq *dl_rq, u64 deadline)
@@ -1462,10 +1465,9 @@ static void dec_dl_deadline(struct dl_rq *dl_rq, u64 deadline)
 		cpudl_clear(&rq->rd->cpudl, rq->cpu);
 		cpupri_set(&rq->rd->cpupri, rq->cpu, rq->rt.highest_prio.curr);
 	} else {
-		struct rb_node *leftmost = dl_rq->root.rb_leftmost;
-		struct sched_dl_entity *entry;
+		struct rb_node *leftmost = rb_first_cached(&dl_rq->root);
+		struct sched_dl_entity *entry = __node_2_dle(leftmost);
 
-		entry = rb_entry(leftmost, struct sched_dl_entity, rb_node);
 		dl_rq->earliest_dl.curr = entry->deadline;
 		cpudl_set(&rq->rd->cpudl, rq->cpu, entry->deadline);
 	}
@@ -1506,9 +1508,6 @@ void dec_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
 	dec_dl_migration(dl_se, dl_rq);
 }
 
-#define __node_2_dle(node) \
-	rb_entry((node), struct sched_dl_entity, rb_node)
-
 static inline bool __dl_less(struct rb_node *a, const struct rb_node *b)
 {
 	return dl_time_before(__node_2_dle(a)->deadline, __node_2_dle(b)->deadline);
@@ -1979,7 +1978,7 @@ static struct sched_dl_entity *pick_next_dl_entity(struct rq *rq,
 	if (!left)
 		return NULL;
 
-	return rb_entry(left, struct sched_dl_entity, rb_node);
+	return __node_2_dle(left);
 }
 
 static struct task_struct *pick_task_dl(struct rq *rq)
@@ -2074,15 +2073,17 @@ static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
  */
 static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu)
 {
-	struct rb_node *next_node = rq->dl.pushable_dl_tasks_root.rb_leftmost;
 	struct task_struct *p = NULL;
+	struct rb_node *next_node;
 
 	if (!has_pushable_dl_tasks(rq))
 		return NULL;
 
+	next_node = rb_first_cached(&rq->dl.pushable_dl_tasks_root);
+
 next_node:
 	if (next_node) {
-		p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
+		p = __node_2_pdl(next_node);
 
 		if (pick_dl_task(rq, p, cpu))
 			return p;
@@ -2248,8 +2249,7 @@ static struct task_struct *pick_next_pushable_dl_task(struct rq *rq)
 	if (!has_pushable_dl_tasks(rq))
 		return NULL;
 
-	p = rb_entry(rq->dl.pushable_dl_tasks_root.rb_leftmost,
-		     struct task_struct, pushable_dl_tasks);
+	p = __node_2_pdl(rb_first_cached(&rq->dl.pushable_dl_tasks_root));
 
 	BUG_ON(rq->cpu != task_cpu(p));
 	BUG_ON(task_current(rq, p));
-- 
2.25.1



* [PATCH 5/6] sched/deadline,rt: Remove unused functions for !CONFIG_SMP
  2022-03-02 18:34 [PATCH 0/6] sched/deadline, (rt): Sched class cleanups Dietmar Eggemann
                   ` (3 preceding siblings ...)
  2022-03-02 18:34 ` [PATCH 4/6] sched/deadline: Use __node_2_[pdl|dle]() and rb_first_cached() consistently Dietmar Eggemann
@ 2022-03-02 18:34 ` Dietmar Eggemann
  2022-03-08 22:25   ` [tip: sched/core] " tip-bot2 for Dietmar Eggemann
  2022-03-02 18:34 ` [PATCH 6/6] sched/deadline,rt: Remove unused parameter from pick_next_[rt|dl]_entity() Dietmar Eggemann
  2022-03-04  9:21 ` [PATCH 0/6] sched/deadline, (rt): Sched class cleanups Juri Lelli
  6 siblings, 1 reply; 15+ messages in thread
From: Dietmar Eggemann @ 2022-03-02 18:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt,
	Daniel Bristot de Oliveira
  Cc: Vincent Guittot, Mel Gorman, Ben Segall, Luca Abeni, linux-kernel

The need_pull_[rt|dl]_task() and pull_[rt|dl]_task() functions are not
used on a !CONFIG_SMP system. Remove them.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
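To make the reasoning explicit: a !CONFIG_SMP stub is only needed while
some caller outside the #ifdef still references the function; once every
caller is itself SMP-only, the stub can simply go. A toy sketch of the
pattern (not the scheduler code; CONFIG_SMP is faked as a plain macro
here):

/* Toy illustration of the CONFIG_SMP stub pattern; not scheduler code. */
#include <stdio.h>

#define CONFIG_SMP 1	/* flip to 0 to build the UP variant */

#if CONFIG_SMP
static void pull_task_model(void)
{
	printf("pulling tasks from other runqueues\n");
}

/* SMP-only caller: the only user of pull_task_model() */
static void balance_model(void)
{
	pull_task_model();
}
#else
/*
 * No pull_task_model() stub needed here: nothing outside CONFIG_SMP
 * calls it, which is why such stubs can be removed.
 */
static void balance_model(void) { }
#endif

int main(void)
{
	balance_model();
	return 0;
}
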
 kernel/sched/deadline.c |  9 ---------
 kernel/sched/rt.c       | 11 -----------
 2 files changed, 20 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 3242dd4972e1..93fcef57dd59 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -723,15 +723,6 @@ void dec_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
 {
 }
 
-static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev)
-{
-	return false;
-}
-
-static inline void pull_dl_task(struct rq *rq)
-{
-}
-
 static inline void deadline_queue_push_tasks(struct rq *rq)
 {
 }
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 14f273c29518..b62e7652464b 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -271,8 +271,6 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
 
 #ifdef CONFIG_SMP
 
-static void pull_rt_task(struct rq *this_rq);
-
 static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
 {
 	/* Try to pull RT tasks here if we lower this rq's prio */
@@ -429,15 +427,6 @@ void dec_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
 {
 }
 
-static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
-{
-	return false;
-}
-
-static inline void pull_rt_task(struct rq *this_rq)
-{
-}
-
 static inline void rt_queue_push_tasks(struct rq *rq)
 {
 }
-- 
2.25.1



* [PATCH 6/6] sched/deadline,rt: Remove unused parameter from pick_next_[rt|dl]_entity()
  2022-03-02 18:34 [PATCH 0/6] sched/deadline, (rt): Sched class cleanups Dietmar Eggemann
                   ` (4 preceding siblings ...)
  2022-03-02 18:34 ` [PATCH 5/6] sched/deadline,rt: Remove unused functions for !CONFIG_SMP Dietmar Eggemann
@ 2022-03-02 18:34 ` Dietmar Eggemann
  2022-03-08 22:25   ` [tip: sched/core] " tip-bot2 for Dietmar Eggemann
  2022-03-04  9:21 ` [PATCH 0/6] sched/deadline, (rt): Sched class cleanups Juri Lelli
  6 siblings, 1 reply; 15+ messages in thread
From: Dietmar Eggemann @ 2022-03-02 18:34 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Steven Rostedt,
	Daniel Bristot de Oliveira
  Cc: Vincent Guittot, Mel Gorman, Ben Segall, Luca Abeni, linux-kernel

The `struct rq *rq` parameter isn't used. Remove it.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
---
 kernel/sched/deadline.c | 5 ++---
 kernel/sched/rt.c       | 5 ++---
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 93fcef57dd59..11cdc6d0c45f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1961,8 +1961,7 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
 	deadline_queue_push_tasks(rq);
 }
 
-static struct sched_dl_entity *pick_next_dl_entity(struct rq *rq,
-						   struct dl_rq *dl_rq)
+static struct sched_dl_entity *pick_next_dl_entity(struct dl_rq *dl_rq)
 {
 	struct rb_node *left = rb_first_cached(&dl_rq->root);
 
@@ -1981,7 +1980,7 @@ static struct task_struct *pick_task_dl(struct rq *rq)
 	if (!sched_dl_runnable(rq))
 		return NULL;
 
-	dl_se = pick_next_dl_entity(rq, dl_rq);
+	dl_se = pick_next_dl_entity(dl_rq);
 	BUG_ON(!dl_se);
 	p = dl_task_of(dl_se);
 
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index b62e7652464b..67039e5d359b 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1719,8 +1719,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
 	rt_queue_push_tasks(rq);
 }
 
-static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
-						   struct rt_rq *rt_rq)
+static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
 {
 	struct rt_prio_array *array = &rt_rq->active;
 	struct sched_rt_entity *next = NULL;
@@ -1742,7 +1741,7 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 	struct rt_rq *rt_rq  = &rq->rt;
 
 	do {
-		rt_se = pick_next_rt_entity(rq, rt_rq);
+		rt_se = pick_next_rt_entity(rt_rq);
 		BUG_ON(!rt_se);
 		rt_rq = group_rt_rq(rt_se);
 	} while (rt_rq);
-- 
2.25.1



* Re: [PATCH 0/6] sched/deadline, (rt): Sched class cleanups
  2022-03-02 18:34 [PATCH 0/6] sched/deadline, (rt): Sched class cleanups Dietmar Eggemann
                   ` (5 preceding siblings ...)
  2022-03-02 18:34 ` [PATCH 6/6] sched/deadline,rt: Remove unused parameter from pick_next_[rt|dl]_entity() Dietmar Eggemann
@ 2022-03-04  9:21 ` Juri Lelli
  2022-03-04 11:39   ` Peter Zijlstra
  6 siblings, 1 reply; 15+ messages in thread
From: Juri Lelli @ 2022-03-04  9:21 UTC (permalink / raw)
  To: Dietmar Eggemann
  Cc: Ingo Molnar, Peter Zijlstra, Steven Rostedt,
	Daniel Bristot de Oliveira, Vincent Guittot, Mel Gorman,
	Ben Segall, Luca Abeni, linux-kernel

Hi,

On 02/03/22 19:34, Dietmar Eggemann wrote:
> While trying to improve the Deadline sched class behaviour for
> asymmetric CPU capacity systems I came across some possible
> cleanups for DL (and RT).
> 
> Overview:
> 
> [PATCH 1/6] - Remove `struct dl_bandwidth def_dl_bandwidth`.
> 
> [PATCH 2/6] - Move functions which don't have to be exported into the
>               DL sched class source file.
> 
> [PATCH 3/6] - Merge two DL admission control functions which provide
>               very similar functionality.
> 
> [PATCH 4/6] - Use DL rb_entry() macros and cached rbtree wrapper
>               `rb_first_cached()` consistently.
> 
> [PATCH 5/6] - Remove unused !CONFIG_SMP function definitions in DL/RT.
> 
> [PATCH 6/6] - Remove redundant function parameter in DL/RT.
> 
> Dietmar Eggemann (6):
>   sched/deadline: Remove unused def_dl_bandwidth
>   sched/deadline: Move bandwidth mgmt and reclaim functions into sched
>     class source file
>   sched/deadline: Merge dl_task_can_attach() and dl_cpu_busy()
>   sched/deadline: Use __node_2_[pdl|dle]() and rb_first_cached()
>     consistently
>   sched/deadline,rt: Remove unused functions for !CONFIG_SMP
>   sched/deadline,rt: Remove unused parameter from
>     pick_next_[rt|dl]_entity()
> 
>  kernel/sched/core.c     |  14 ++--
>  kernel/sched/deadline.c | 141 ++++++++++++++++++++--------------------
>  kernel/sched/rt.c       |  16 +----
>  kernel/sched/sched.h    |  53 +--------------
>  4 files changed, 84 insertions(+), 140 deletions(-)

These look ok to me. Thanks for the cleanups!

Acked-by: Juri Lelli <juri.lelli@redhat.com>

Best,
Juri



* Re: [PATCH 0/6] sched/deadline, (rt): Sched class cleanups
  2022-03-04  9:21 ` [PATCH 0/6] sched/deadline, (rt): Sched class cleanups Juri Lelli
@ 2022-03-04 11:39   ` Peter Zijlstra
  0 siblings, 0 replies; 15+ messages in thread
From: Peter Zijlstra @ 2022-03-04 11:39 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Dietmar Eggemann, Ingo Molnar, Steven Rostedt,
	Daniel Bristot de Oliveira, Vincent Guittot, Mel Gorman,
	Ben Segall, Luca Abeni, linux-kernel

On Fri, Mar 04, 2022 at 10:21:49AM +0100, Juri Lelli wrote:
> Hi,
> 
> On 02/03/22 19:34, Dietmar Eggemann wrote:
> > While trying to improve the Deadline sched class behaviour for
> > asymmetric CPU capacity systems I came across some possible
> > cleanups for DL (and RT).
> > 
> > Overview:
> > 
> > [PATCH 1/6] - Remove `struct dl_bandwidth def_dl_bandwidth`.
> > 
> > [PATCH 2/6] - Move functions which don't have to be exported into the
> >               DL sched class source file.
> > 
> > [PATCH 3/6] - Merge two DL admission control functions which provide
> >               very similar functionality.
> > 
> > [PATCH 4/6] - Use DL rb_entry() macros and cached rbtree wrapper
> >               `rb_first_cached()` consistently.
> > 
> > [PATCH 5/6] - Remove unused !CONFIG_SMP function definitions in DL/RT.
> > 
> > [PATCH 6/6] - Remove redundant function parameter in DL/RT.
> > 
> > Dietmar Eggemann (6):
> >   sched/deadline: Remove unused def_dl_bandwidth
> >   sched/deadline: Move bandwidth mgmt and reclaim functions into sched
> >     class source file
> >   sched/deadline: Merge dl_task_can_attach() and dl_cpu_busy()
> >   sched/deadline: Use __node_2_[pdl|dle]() and rb_first_cached()
> >     consistently
> >   sched/deadline,rt: Remove unused functions for !CONFIG_SMP
> >   sched/deadline,rt: Remove unused parameter from
> >     pick_next_[rt|dl]_entity()
> > 
> >  kernel/sched/core.c     |  14 ++--
> >  kernel/sched/deadline.c | 141 ++++++++++++++++++++--------------------
> >  kernel/sched/rt.c       |  16 +----
> >  kernel/sched/sched.h    |  53 +--------------
> >  4 files changed, 84 insertions(+), 140 deletions(-)
> 
> These look ok to me. Thanks for the cleanups!
> 
> Acked-by: Juri Lelli <juri.lelli@redhat.com>

Thanks!


* [tip: sched/core] sched/deadline,rt: Remove unused parameter from pick_next_[rt|dl]_entity()
  2022-03-02 18:34 ` [PATCH 6/6] sched/deadline,rt: Remove unused parameter from pick_next_[rt|dl]_entity() Dietmar Eggemann
@ 2022-03-08 22:25   ` tip-bot2 for Dietmar Eggemann
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot2 for Dietmar Eggemann @ 2022-03-08 22:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Dietmar Eggemann, Peter Zijlstra (Intel), Juri Lelli, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     821aecd09e5ad2f8d4c3d8195333d272b392f7d3
Gitweb:        https://git.kernel.org/tip/821aecd09e5ad2f8d4c3d8195333d272b392f7d3
Author:        Dietmar Eggemann <dietmar.eggemann@arm.com>
AuthorDate:    Wed, 02 Mar 2022 19:34:33 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 08 Mar 2022 16:08:40 +01:00

sched/deadline,rt: Remove unused parameter from pick_next_[rt|dl]_entity()

The `struct rq *rq` parameter isn't used. Remove it.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/20220302183433.333029-7-dietmar.eggemann@arm.com
---
 kernel/sched/deadline.c | 5 ++---
 kernel/sched/rt.c       | 5 ++---
 2 files changed, 4 insertions(+), 6 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 93fcef5..11cdc6d 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -1961,8 +1961,7 @@ static void set_next_task_dl(struct rq *rq, struct task_struct *p, bool first)
 	deadline_queue_push_tasks(rq);
 }
 
-static struct sched_dl_entity *pick_next_dl_entity(struct rq *rq,
-						   struct dl_rq *dl_rq)
+static struct sched_dl_entity *pick_next_dl_entity(struct dl_rq *dl_rq)
 {
 	struct rb_node *left = rb_first_cached(&dl_rq->root);
 
@@ -1981,7 +1980,7 @@ static struct task_struct *pick_task_dl(struct rq *rq)
 	if (!sched_dl_runnable(rq))
 		return NULL;
 
-	dl_se = pick_next_dl_entity(rq, dl_rq);
+	dl_se = pick_next_dl_entity(dl_rq);
 	BUG_ON(!dl_se);
 	p = dl_task_of(dl_se);
 
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index b62e765..67039e5 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -1719,8 +1719,7 @@ static inline void set_next_task_rt(struct rq *rq, struct task_struct *p, bool f
 	rt_queue_push_tasks(rq);
 }
 
-static struct sched_rt_entity *pick_next_rt_entity(struct rq *rq,
-						   struct rt_rq *rt_rq)
+static struct sched_rt_entity *pick_next_rt_entity(struct rt_rq *rt_rq)
 {
 	struct rt_prio_array *array = &rt_rq->active;
 	struct sched_rt_entity *next = NULL;
@@ -1742,7 +1741,7 @@ static struct task_struct *_pick_next_task_rt(struct rq *rq)
 	struct rt_rq *rt_rq  = &rq->rt;
 
 	do {
-		rt_se = pick_next_rt_entity(rq, rt_rq);
+		rt_se = pick_next_rt_entity(rt_rq);
 		BUG_ON(!rt_se);
 		rt_rq = group_rt_rq(rt_se);
 	} while (rt_rq);


* [tip: sched/core] sched/deadline,rt: Remove unused functions for !CONFIG_SMP
  2022-03-02 18:34 ` [PATCH 5/6] sched/deadline,rt: Remove unused functions for !CONFIG_SMP Dietmar Eggemann
@ 2022-03-08 22:25   ` tip-bot2 for Dietmar Eggemann
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot2 for Dietmar Eggemann @ 2022-03-08 22:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Dietmar Eggemann, Peter Zijlstra (Intel), Juri Lelli, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     71d29747b0e26f36a50e6a65dc0191ca742b9222
Gitweb:        https://git.kernel.org/tip/71d29747b0e26f36a50e6a65dc0191ca742b9222
Author:        Dietmar Eggemann <dietmar.eggemann@arm.com>
AuthorDate:    Wed, 02 Mar 2022 19:34:32 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 08 Mar 2022 16:08:39 +01:00

sched/deadline,rt: Remove unused functions for !CONFIG_SMP

The need_pull_[rt|dl]_task() and pull_[rt|dl]_task() functions are not
used on a !CONFIG_SMP system. Remove them.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/20220302183433.333029-6-dietmar.eggemann@arm.com
---
 kernel/sched/deadline.c |  9 ---------
 kernel/sched/rt.c       | 11 -----------
 2 files changed, 20 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 3242dd4..93fcef5 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -723,15 +723,6 @@ void dec_dl_migration(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
 {
 }
 
-static inline bool need_pull_dl_task(struct rq *rq, struct task_struct *prev)
-{
-	return false;
-}
-
-static inline void pull_dl_task(struct rq *rq)
-{
-}
-
 static inline void deadline_queue_push_tasks(struct rq *rq)
 {
 }
diff --git a/kernel/sched/rt.c b/kernel/sched/rt.c
index 14f273c..b62e765 100644
--- a/kernel/sched/rt.c
+++ b/kernel/sched/rt.c
@@ -271,8 +271,6 @@ int alloc_rt_sched_group(struct task_group *tg, struct task_group *parent)
 
 #ifdef CONFIG_SMP
 
-static void pull_rt_task(struct rq *this_rq);
-
 static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
 {
 	/* Try to pull RT tasks here if we lower this rq's prio */
@@ -429,15 +427,6 @@ void dec_rt_migration(struct sched_rt_entity *rt_se, struct rt_rq *rt_rq)
 {
 }
 
-static inline bool need_pull_rt_task(struct rq *rq, struct task_struct *prev)
-{
-	return false;
-}
-
-static inline void pull_rt_task(struct rq *this_rq)
-{
-}
-
 static inline void rt_queue_push_tasks(struct rq *rq)
 {
 }


* [tip: sched/core] sched/deadline: Use __node_2_[pdl|dle]() and rb_first_cached() consistently
  2022-03-02 18:34 ` [PATCH 4/6] sched/deadline: Use __node_2_[pdl|dle]() and rb_first_cached() consistently Dietmar Eggemann
@ 2022-03-08 22:25   ` tip-bot2 for Dietmar Eggemann
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot2 for Dietmar Eggemann @ 2022-03-08 22:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Dietmar Eggemann, Peter Zijlstra (Intel), Juri Lelli, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     f4478e7c855d2d6b2fde5126ebcca2cb5b34ee36
Gitweb:        https://git.kernel.org/tip/f4478e7c855d2d6b2fde5126ebcca2cb5b34ee36
Author:        Dietmar Eggemann <dietmar.eggemann@arm.com>
AuthorDate:    Wed, 02 Mar 2022 19:34:31 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 08 Mar 2022 16:08:39 +01:00

sched/deadline: Use __node_2_[pdl|dle]() and rb_first_cached() consistently

Deploy __node_2_pdl(node), __node_2_dle(node) and rb_first_cached()
consistently throughout the sched class source file, which at least
makes the code easier to read.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/20220302183433.333029-5-dietmar.eggemann@arm.com
---
 kernel/sched/deadline.c | 24 ++++++++++++------------
 1 file changed, 12 insertions(+), 12 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index de677b1..3242dd4 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -450,7 +450,7 @@ static inline int is_leftmost(struct task_struct *p, struct dl_rq *dl_rq)
 {
 	struct sched_dl_entity *dl_se = &p->dl;
 
-	return dl_rq->root.rb_leftmost == &dl_se->rb_node;
+	return rb_first_cached(&dl_rq->root) == &dl_se->rb_node;
 }
 
 static void init_dl_rq_bw_ratio(struct dl_rq *dl_rq);
@@ -1433,6 +1433,9 @@ void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se)
 	timer->function = inactive_task_timer;
 }
 
+#define __node_2_dle(node) \
+	rb_entry((node), struct sched_dl_entity, rb_node)
+
 #ifdef CONFIG_SMP
 
 static void inc_dl_deadline(struct dl_rq *dl_rq, u64 deadline)
@@ -1462,10 +1465,9 @@ static void dec_dl_deadline(struct dl_rq *dl_rq, u64 deadline)
 		cpudl_clear(&rq->rd->cpudl, rq->cpu);
 		cpupri_set(&rq->rd->cpupri, rq->cpu, rq->rt.highest_prio.curr);
 	} else {
-		struct rb_node *leftmost = dl_rq->root.rb_leftmost;
-		struct sched_dl_entity *entry;
+		struct rb_node *leftmost = rb_first_cached(&dl_rq->root);
+		struct sched_dl_entity *entry = __node_2_dle(leftmost);
 
-		entry = rb_entry(leftmost, struct sched_dl_entity, rb_node);
 		dl_rq->earliest_dl.curr = entry->deadline;
 		cpudl_set(&rq->rd->cpudl, rq->cpu, entry->deadline);
 	}
@@ -1506,9 +1508,6 @@ void dec_dl_tasks(struct sched_dl_entity *dl_se, struct dl_rq *dl_rq)
 	dec_dl_migration(dl_se, dl_rq);
 }
 
-#define __node_2_dle(node) \
-	rb_entry((node), struct sched_dl_entity, rb_node)
-
 static inline bool __dl_less(struct rb_node *a, const struct rb_node *b)
 {
 	return dl_time_before(__node_2_dle(a)->deadline, __node_2_dle(b)->deadline);
@@ -1979,7 +1978,7 @@ static struct sched_dl_entity *pick_next_dl_entity(struct rq *rq,
 	if (!left)
 		return NULL;
 
-	return rb_entry(left, struct sched_dl_entity, rb_node);
+	return __node_2_dle(left);
 }
 
 static struct task_struct *pick_task_dl(struct rq *rq)
@@ -2074,15 +2073,17 @@ static int pick_dl_task(struct rq *rq, struct task_struct *p, int cpu)
  */
 static struct task_struct *pick_earliest_pushable_dl_task(struct rq *rq, int cpu)
 {
-	struct rb_node *next_node = rq->dl.pushable_dl_tasks_root.rb_leftmost;
 	struct task_struct *p = NULL;
+	struct rb_node *next_node;
 
 	if (!has_pushable_dl_tasks(rq))
 		return NULL;
 
+	next_node = rb_first_cached(&rq->dl.pushable_dl_tasks_root);
+
 next_node:
 	if (next_node) {
-		p = rb_entry(next_node, struct task_struct, pushable_dl_tasks);
+		p = __node_2_pdl(next_node);
 
 		if (pick_dl_task(rq, p, cpu))
 			return p;
@@ -2248,8 +2249,7 @@ static struct task_struct *pick_next_pushable_dl_task(struct rq *rq)
 	if (!has_pushable_dl_tasks(rq))
 		return NULL;
 
-	p = rb_entry(rq->dl.pushable_dl_tasks_root.rb_leftmost,
-		     struct task_struct, pushable_dl_tasks);
+	p = __node_2_pdl(rb_first_cached(&rq->dl.pushable_dl_tasks_root));
 
 	BUG_ON(rq->cpu != task_cpu(p));
 	BUG_ON(task_current(rq, p));


* [tip: sched/core] sched/deadline: Merge dl_task_can_attach() and dl_cpu_busy()
  2022-03-02 18:34 ` [PATCH 3/6] sched/deadline: Merge dl_task_can_attach() and dl_cpu_busy() Dietmar Eggemann
@ 2022-03-08 22:25   ` tip-bot2 for Dietmar Eggemann
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot2 for Dietmar Eggemann @ 2022-03-08 22:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Dietmar Eggemann, Peter Zijlstra (Intel), Juri Lelli, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     772b6539fdda31462cc08368e78df60b31a58bab
Gitweb:        https://git.kernel.org/tip/772b6539fdda31462cc08368e78df60b31a58bab
Author:        Dietmar Eggemann <dietmar.eggemann@arm.com>
AuthorDate:    Wed, 02 Mar 2022 19:34:30 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 08 Mar 2022 16:08:39 +01:00

sched/deadline: Merge dl_task_can_attach() and dl_cpu_busy()

Both functions do almost the same thing, namely check whether admission
control is still respected.

With exclusive cpusets, dl_task_can_attach() checks whether the
destination cpuset (i.e. its root domain) has enough CPU capacity to
accommodate the task, whereas dl_cpu_busy() checks whether the cpuset
retains enough CPU capacity when one of its CPUs is hot-plugged out.

In other words, dl_task_can_attach() is used to check if a task can be
admitted, while dl_cpu_busy() is used to check if a CPU can be
hot-plugged out.

Make dl_cpu_busy() able to deal with a (possibly NULL) task and use it
instead of dl_task_can_attach() in task_can_attach().

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/20220302183433.333029-4-dietmar.eggemann@arm.com
---
 kernel/sched/core.c     | 13 ++++++----
 kernel/sched/deadline.c | 52 ++++++++++------------------------------
 kernel/sched/sched.h    |  3 +--
 3 files changed, 24 insertions(+), 44 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index d342c4c..68736d1 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -8805,8 +8805,11 @@ int task_can_attach(struct task_struct *p,
 	}
 
 	if (dl_task(p) && !cpumask_intersects(task_rq(p)->rd->span,
-					      cs_cpus_allowed))
-		ret = dl_task_can_attach(p, cs_cpus_allowed);
+					      cs_cpus_allowed)) {
+		int cpu = cpumask_any_and(cpu_active_mask, cs_cpus_allowed);
+
+		ret = dl_cpu_busy(cpu, p);
+	}
 
 out:
 	return ret;
@@ -9090,8 +9093,10 @@ static void cpuset_cpu_active(void)
 static int cpuset_cpu_inactive(unsigned int cpu)
 {
 	if (!cpuhp_tasks_frozen) {
-		if (dl_cpu_busy(cpu))
-			return -EBUSY;
+		int ret = dl_cpu_busy(cpu, NULL);
+
+		if (ret)
+			return ret;
 		cpuset_update_active_cpus();
 	} else {
 		num_cpus_frozen++;
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 81bf976..de677b1 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -2992,41 +2992,6 @@ bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr)
 }
 
 #ifdef CONFIG_SMP
-int dl_task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_allowed)
-{
-	unsigned long flags, cap;
-	unsigned int dest_cpu;
-	struct dl_bw *dl_b;
-	bool overflow;
-	int ret;
-
-	dest_cpu = cpumask_any_and(cpu_active_mask, cs_cpus_allowed);
-
-	rcu_read_lock_sched();
-	dl_b = dl_bw_of(dest_cpu);
-	raw_spin_lock_irqsave(&dl_b->lock, flags);
-	cap = dl_bw_capacity(dest_cpu);
-	overflow = __dl_overflow(dl_b, cap, 0, p->dl.dl_bw);
-	if (overflow) {
-		ret = -EBUSY;
-	} else {
-		/*
-		 * We reserve space for this task in the destination
-		 * root_domain, as we can't fail after this point.
-		 * We will free resources in the source root_domain
-		 * later on (see set_cpus_allowed_dl()).
-		 */
-		int cpus = dl_bw_cpus(dest_cpu);
-
-		__dl_add(dl_b, p->dl.dl_bw, cpus);
-		ret = 0;
-	}
-	raw_spin_unlock_irqrestore(&dl_b->lock, flags);
-	rcu_read_unlock_sched();
-
-	return ret;
-}
-
 int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur,
 				 const struct cpumask *trial)
 {
@@ -3048,7 +3013,7 @@ int dl_cpuset_cpumask_can_shrink(const struct cpumask *cur,
 	return ret;
 }
 
-bool dl_cpu_busy(unsigned int cpu)
+int dl_cpu_busy(int cpu, struct task_struct *p)
 {
 	unsigned long flags, cap;
 	struct dl_bw *dl_b;
@@ -3058,11 +3023,22 @@ bool dl_cpu_busy(unsigned int cpu)
 	dl_b = dl_bw_of(cpu);
 	raw_spin_lock_irqsave(&dl_b->lock, flags);
 	cap = dl_bw_capacity(cpu);
-	overflow = __dl_overflow(dl_b, cap, 0, 0);
+	overflow = __dl_overflow(dl_b, cap, 0, p ? p->dl.dl_bw : 0);
+
+	if (!overflow && p) {
+		/*
+		 * We reserve space for this task in the destination
+		 * root_domain, as we can't fail after this point.
+		 * We will free resources in the source root_domain
+		 * later on (see set_cpus_allowed_dl()).
+		 */
+		__dl_add(dl_b, p->dl.dl_bw, dl_bw_cpus(cpu));
+	}
+
 	raw_spin_unlock_irqrestore(&dl_b->lock, flags);
 	rcu_read_unlock_sched();
 
-	return overflow;
+	return overflow ? -EBUSY : 0;
 }
 #endif
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 4dfc3b0..0720cf0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -324,9 +324,8 @@ extern void __setparam_dl(struct task_struct *p, const struct sched_attr *attr);
 extern void __getparam_dl(struct task_struct *p, struct sched_attr *attr);
 extern bool __checkparam_dl(const struct sched_attr *attr);
 extern bool dl_param_changed(struct task_struct *p, const struct sched_attr *attr);
-extern int  dl_task_can_attach(struct task_struct *p, const struct cpumask *cs_cpus_allowed);
 extern int  dl_cpuset_cpumask_can_shrink(const struct cpumask *cur, const struct cpumask *trial);
-extern bool dl_cpu_busy(unsigned int cpu);
+extern int  dl_cpu_busy(int cpu, struct task_struct *p);
 
 #ifdef CONFIG_CGROUP_SCHED
 


* [tip: sched/core] sched/deadline: Move bandwidth mgmt and reclaim functions into sched class source file
  2022-03-02 18:34 ` [PATCH 2/6] sched/deadline: Move bandwidth mgmt and reclaim functions into sched class source file Dietmar Eggemann
@ 2022-03-08 22:25   ` tip-bot2 for Dietmar Eggemann
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot2 for Dietmar Eggemann @ 2022-03-08 22:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Dietmar Eggemann, Peter Zijlstra (Intel), Juri Lelli, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     f1304ecbef3c9f4aec119ce2a07335d3a0bc55a6
Gitweb:        https://git.kernel.org/tip/f1304ecbef3c9f4aec119ce2a07335d3a0bc55a6
Author:        Dietmar Eggemann <dietmar.eggemann@arm.com>
AuthorDate:    Wed, 02 Mar 2022 19:34:29 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 08 Mar 2022 16:08:39 +01:00

sched/deadline: Move bandwidth mgmt and reclaim functions into sched class source file

Move the deadline bandwidth management (admission control) functions
__dl_add(), __dl_sub() and __dl_overflow(), as well as the bandwidth
reclaim function __dl_update(), from the private task scheduler header
file (kernel/sched/sched.h) to the deadline sched class source file
(kernel/sched/deadline.c).

These functions are only used internally, so they don't have to be
exported.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/20220302183433.333029-3-dietmar.eggemann@arm.com
---
 kernel/sched/deadline.c | 44 ++++++++++++++++++++++++++++++++++++-
 kernel/sched/sched.h    | 49 +----------------------------------------
 2 files changed, 44 insertions(+), 49 deletions(-)

diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index ed4251f..81bf976 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -128,6 +128,21 @@ static inline bool dl_bw_visited(int cpu, u64 gen)
 	rd->visit_gen = gen;
 	return false;
 }
+
+static inline
+void __dl_update(struct dl_bw *dl_b, s64 bw)
+{
+	struct root_domain *rd = container_of(dl_b, struct root_domain, dl_bw);
+	int i;
+
+	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
+			 "sched RCU must be held");
+	for_each_cpu_and(i, rd->span, cpu_active_mask) {
+		struct rq *rq = cpu_rq(i);
+
+		rq->dl.extra_bw += bw;
+	}
+}
 #else
 static inline struct dl_bw *dl_bw_of(int i)
 {
@@ -148,9 +163,38 @@ static inline bool dl_bw_visited(int cpu, u64 gen)
 {
 	return false;
 }
+
+static inline
+void __dl_update(struct dl_bw *dl_b, s64 bw)
+{
+	struct dl_rq *dl = container_of(dl_b, struct dl_rq, dl_bw);
+
+	dl->extra_bw += bw;
+}
 #endif
 
 static inline
+void __dl_sub(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
+{
+	dl_b->total_bw -= tsk_bw;
+	__dl_update(dl_b, (s32)tsk_bw / cpus);
+}
+
+static inline
+void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
+{
+	dl_b->total_bw += tsk_bw;
+	__dl_update(dl_b, -((s32)tsk_bw / cpus));
+}
+
+static inline bool
+__dl_overflow(struct dl_bw *dl_b, unsigned long cap, u64 old_bw, u64 new_bw)
+{
+	return dl_b->bw != -1 &&
+	       cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
+}
+
+static inline
 void __add_running_bw(u64 dl_bw, struct dl_rq *dl_rq)
 {
 	u64 old = dl_rq->running_bw;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index a8b8516..4dfc3b0 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -301,29 +301,6 @@ struct dl_bw {
 	u64			total_bw;
 };
 
-static inline void __dl_update(struct dl_bw *dl_b, s64 bw);
-
-static inline
-void __dl_sub(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
-{
-	dl_b->total_bw -= tsk_bw;
-	__dl_update(dl_b, (s32)tsk_bw / cpus);
-}
-
-static inline
-void __dl_add(struct dl_bw *dl_b, u64 tsk_bw, int cpus)
-{
-	dl_b->total_bw += tsk_bw;
-	__dl_update(dl_b, -((s32)tsk_bw / cpus));
-}
-
-static inline bool __dl_overflow(struct dl_bw *dl_b, unsigned long cap,
-				 u64 old_bw, u64 new_bw)
-{
-	return dl_b->bw != -1 &&
-	       cap_scale(dl_b->bw, cap) < dl_b->total_bw - old_bw + new_bw;
-}
-
 /*
  * Verify the fitness of task @p to run on @cpu taking into account the
  * CPU original capacity and the runtime/deadline ratio of the task.
@@ -2748,32 +2725,6 @@ extern void nohz_run_idle_balance(int cpu);
 static inline void nohz_run_idle_balance(int cpu) { }
 #endif
 
-#ifdef CONFIG_SMP
-static inline
-void __dl_update(struct dl_bw *dl_b, s64 bw)
-{
-	struct root_domain *rd = container_of(dl_b, struct root_domain, dl_bw);
-	int i;
-
-	RCU_LOCKDEP_WARN(!rcu_read_lock_sched_held(),
-			 "sched RCU must be held");
-	for_each_cpu_and(i, rd->span, cpu_active_mask) {
-		struct rq *rq = cpu_rq(i);
-
-		rq->dl.extra_bw += bw;
-	}
-}
-#else
-static inline
-void __dl_update(struct dl_bw *dl_b, s64 bw)
-{
-	struct dl_rq *dl = container_of(dl_b, struct dl_rq, dl_bw);
-
-	dl->extra_bw += bw;
-}
-#endif
-
-
 #ifdef CONFIG_IRQ_TIME_ACCOUNTING
 struct irqtime {
 	u64			total;


* [tip: sched/core] sched/deadline: Remove unused def_dl_bandwidth
  2022-03-02 18:34 ` [PATCH 1/6] sched/deadline: Remove unused def_dl_bandwidth Dietmar Eggemann
@ 2022-03-08 22:25   ` tip-bot2 for Dietmar Eggemann
  0 siblings, 0 replies; 15+ messages in thread
From: tip-bot2 for Dietmar Eggemann @ 2022-03-08 22:25 UTC (permalink / raw)
  To: linux-tip-commits
  Cc: Dietmar Eggemann, Peter Zijlstra (Intel), Juri Lelli, x86, linux-kernel

The following commit has been merged into the sched/core branch of tip:

Commit-ID:     eb77cf1c151c4a1c2147cbf24d84bcf0ba504e7c
Gitweb:        https://git.kernel.org/tip/eb77cf1c151c4a1c2147cbf24d84bcf0ba504e7c
Author:        Dietmar Eggemann <dietmar.eggemann@arm.com>
AuthorDate:    Wed, 02 Mar 2022 19:34:28 +01:00
Committer:     Peter Zijlstra <peterz@infradead.org>
CommitterDate: Tue, 08 Mar 2022 16:08:38 +01:00

sched/deadline: Remove unused def_dl_bandwidth

Since commit 1724813d9f2c ("sched/deadline: Remove the sysctl_sched_dl
knobs"), the default deadline bandwidth control structure
def_dl_bandwidth has served no purpose. Remove it.

Signed-off-by: Dietmar Eggemann <dietmar.eggemann@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Acked-by: Juri Lelli <juri.lelli@redhat.com>
Link: https://lore.kernel.org/r/20220302183433.333029-2-dietmar.eggemann@arm.com
---
 kernel/sched/core.c     | 1 -
 kernel/sched/deadline.c | 7 -------
 kernel/sched/sched.h    | 1 -
 3 files changed, 9 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 3aafc15..d342c4c 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -9420,7 +9420,6 @@ void __init sched_init(void)
 #endif /* CONFIG_CPUMASK_OFFSTACK */
 
 	init_rt_bandwidth(&def_rt_bandwidth, global_rt_period(), global_rt_runtime());
-	init_dl_bandwidth(&def_dl_bandwidth, global_rt_period(), global_rt_runtime());
 
 #ifdef CONFIG_SMP
 	init_defrootdomain();
diff --git a/kernel/sched/deadline.c b/kernel/sched/deadline.c
index 62f0cf8..ed4251f 100644
--- a/kernel/sched/deadline.c
+++ b/kernel/sched/deadline.c
@@ -18,8 +18,6 @@
 #include "sched.h"
 #include "pelt.h"
 
-struct dl_bandwidth def_dl_bandwidth;
-
 static inline struct task_struct *dl_task_of(struct sched_dl_entity *dl_se)
 {
 	return container_of(dl_se, struct task_struct, dl);
@@ -423,12 +421,10 @@ void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime)
 void init_dl_bw(struct dl_bw *dl_b)
 {
 	raw_spin_lock_init(&dl_b->lock);
-	raw_spin_lock(&def_dl_bandwidth.dl_runtime_lock);
 	if (global_rt_runtime() == RUNTIME_INF)
 		dl_b->bw = -1;
 	else
 		dl_b->bw = to_ratio(global_rt_period(), global_rt_runtime());
-	raw_spin_unlock(&def_dl_bandwidth.dl_runtime_lock);
 	dl_b->total_bw = 0;
 }
 
@@ -2731,9 +2727,6 @@ void sched_dl_do_global(void)
 	int cpu;
 	unsigned long flags;
 
-	def_dl_bandwidth.dl_period = global_rt_period();
-	def_dl_bandwidth.dl_runtime = global_rt_runtime();
-
 	if (global_rt_runtime() != RUNTIME_INF)
 		new_bw = to_ratio(global_rt_period(), global_rt_runtime());
 
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 3da5718..a8b8516 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2333,7 +2333,6 @@ extern void resched_cpu(int cpu);
 extern struct rt_bandwidth def_rt_bandwidth;
 extern void init_rt_bandwidth(struct rt_bandwidth *rt_b, u64 period, u64 runtime);
 
-extern struct dl_bandwidth def_dl_bandwidth;
 extern void init_dl_bandwidth(struct dl_bandwidth *dl_b, u64 period, u64 runtime);
 extern void init_dl_task_timer(struct sched_dl_entity *dl_se);
 extern void init_dl_inactive_task_timer(struct sched_dl_entity *dl_se);


