* [RFC PATCH 0/3] sched/deadline: cpuset: Rework DEADLINE bandwidth restoration
@ 2023-03-15 12:18 ` Juri Lelli
  0 siblings, 0 replies; 32+ messages in thread
From: Juri Lelli @ 2023-03-15 12:18 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Qais Yousef, Waiman Long, Tejun Heo,
	Zefan Li, Johannes Weiner, Hao Luo
  Cc: Dietmar Eggemann, Steven Rostedt, linux-kernel, luca.abeni,
	claudio, tommaso.cucinotta, bristot, mathieu.poirier, cgroups,
	Vincent Guittot, Wei Wang, Rick Yiu, Quentin Perret,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sudeep Holla,
	Juri Lelli

Qais reported [1] that iterating over all tasks when rebuilding root
domains, in order to find the DEADLINE tasks whose bandwidth needs to
be restored on such root domains, can be a costly operation (10+ ms
delays on suspend-resume). He proposed skipping the root domain rebuild
for certain operations, but that approach seemed arch specific and
possibly error prone, as the paths that ultimately trigger a rebuild
can be quite convoluted (thanks Qais for spending time on this!).

To fix the problem I would instead propose that we:

 1 - Bring back cpuset_mutex (so that we have write access to cpusets
     from scheduler operations - and we also fix some problems
     associated with percpu_cpuset_rwsem)
 2 - Keep track of the number of DEADLINE tasks belonging to each cpuset
 3 - Use this information to only perform the costly iteration if
     DEADLINE tasks are actually present in the cpuset for which a
     corresponding root domain is being rebuilt (see the sketch below)
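
To give an idea of how 2 and 3 fit together, the gating check could look
roughly like the sketch below. This is illustration only - the actual
change is in patch 3/3, which is not quoted in this excerpt.
update_tasks_root_domain() and dl_add_task_root_domain() are the
existing helpers in kernel/cgroup/cpuset.c and kernel/sched/deadline.c,
while nr_deadline_tasks is the per-cpuset counter added by patch 2/3.

	static void update_tasks_root_domain(struct cpuset *cs)
	{
		struct css_task_iter it;
		struct task_struct *task;

		/*
		 * Skip the costly per-task walk entirely when this cpuset
		 * has no DEADLINE tasks whose bandwidth would need to be
		 * restored on the rebuilt root domain.
		 */
		if (cs->nr_deadline_tasks == 0)
			return;

		css_task_iter_start(&cs->css, 0, &it);

		while ((task = css_task_iter_next(&it)))
			dl_add_task_root_domain(task);

		css_task_iter_end(&it);
	}

The trade-off is a single per-cpuset integer, updated on the (rare)
policy-change and attach paths, in exchange for skipping a potentially
long per-task walk on every root domain rebuild.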

This set is also available from

https://github.com/jlelli/linux.git deadline/rework-cpusets
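
For example (remote and local branch names here are arbitrary):

	$ git remote add jlelli https://github.com/jlelli/linux.git
	$ git fetch jlelli
	$ git checkout -b rework-cpusets jlelli/deadline/rework-cpusets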

Feedback is more than welcome.

Best,
Juri

1 - https://lore.kernel.org/lkml/20230206221428.2125324-1-qyousef@layalina.io/

Juri Lelli (3):
  sched/cpuset: Bring back cpuset_mutex
  sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
  cgroup/cpuset: Iterate only if DEADLINE tasks are present

 include/linux/cpuset.h |  12 ++-
 kernel/cgroup/cgroup.c |   4 +
 kernel/cgroup/cpuset.c | 175 +++++++++++++++++++++++------------------
 kernel/sched/core.c    |  32 ++++++--
 4 files changed, 137 insertions(+), 86 deletions(-)

-- 
2.39.2


* [RFC PATCH 1/3] sched/cpuset: Bring back cpuset_mutex
@ 2023-03-15 12:18   ` Juri Lelli
  0 siblings, 0 replies; 32+ messages in thread
From: Juri Lelli @ 2023-03-15 12:18 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Qais Yousef, Waiman Long, Tejun Heo,
	Zefan Li, Johannes Weiner, Hao Luo
  Cc: Dietmar Eggemann, Steven Rostedt, linux-kernel, luca.abeni,
	claudio, tommaso.cucinotta, bristot, mathieu.poirier, cgroups,
	Vincent Guittot, Wei Wang, Rick Yiu, Quentin Perret,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sudeep Holla,
	Juri Lelli

Turns out percpu_cpuset_rwsem - commit 1243dc518c9d ("cgroup/cpuset:
Convert cpuset_mutex to percpu_rwsem") - wasn't such a brilliant idea,
as it has been reported to cause slowdowns in workloads that need to
change cpuset configuration frequently, and it also does not implement
priority inheritance (which causes trouble with realtime workloads).

Convert percpu_cpuset_rwsem back to regular cpuset_mutex. Also grab it
only for SCHED_DEADLINE tasks (other policies don't care about stable
cpusets anyway).

Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
---
 include/linux/cpuset.h |   8 +--
 kernel/cgroup/cpuset.c | 147 ++++++++++++++++++++---------------------
 kernel/sched/core.c    |  22 ++++--
 3 files changed, 91 insertions(+), 86 deletions(-)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index d58e0476ee8e..355f796c5f07 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -71,8 +71,8 @@ extern void cpuset_init_smp(void);
 extern void cpuset_force_rebuild(void);
 extern void cpuset_update_active_cpus(void);
 extern void cpuset_wait_for_hotplug(void);
-extern void cpuset_read_lock(void);
-extern void cpuset_read_unlock(void);
+extern void cpuset_lock(void);
+extern void cpuset_unlock(void);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
 extern bool cpuset_cpus_allowed_fallback(struct task_struct *p);
 extern nodemask_t cpuset_mems_allowed(struct task_struct *p);
@@ -196,8 +196,8 @@ static inline void cpuset_update_active_cpus(void)
 
 static inline void cpuset_wait_for_hotplug(void) { }
 
-static inline void cpuset_read_lock(void) { }
-static inline void cpuset_read_unlock(void) { }
+static inline void cpuset_lock(void) { }
+static inline void cpuset_unlock(void) { }
 
 static inline void cpuset_cpus_allowed(struct task_struct *p,
 				       struct cpumask *mask)
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index a29c0b13706b..8d82d66d432b 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -366,22 +366,21 @@ static struct cpuset top_cpuset = {
 		if (is_cpuset_online(((des_cs) = css_cs((pos_css)))))
 
 /*
- * There are two global locks guarding cpuset structures - cpuset_rwsem and
+ * There are two global locks guarding cpuset structures - cpuset_mutex and
  * callback_lock. We also require taking task_lock() when dereferencing a
  * task's cpuset pointer. See "The task_lock() exception", at the end of this
- * comment.  The cpuset code uses only cpuset_rwsem write lock.  Other
- * kernel subsystems can use cpuset_read_lock()/cpuset_read_unlock() to
- * prevent change to cpuset structures.
+ * comment.  The cpuset code uses only cpuset_mutex. Other kernel subsystems
+ * can use cpuset_lock()/cpuset_unlock() to prevent change to cpuset
+ * structures.
  *
  * A task must hold both locks to modify cpusets.  If a task holds
- * cpuset_rwsem, it blocks others wanting that rwsem, ensuring that it
- * is the only task able to also acquire callback_lock and be able to
- * modify cpusets.  It can perform various checks on the cpuset structure
- * first, knowing nothing will change.  It can also allocate memory while
- * just holding cpuset_rwsem.  While it is performing these checks, various
- * callback routines can briefly acquire callback_lock to query cpusets.
- * Once it is ready to make the changes, it takes callback_lock, blocking
- * everyone else.
+ * cpuset_mutex, it blocks others, ensuring that it is the only task able to
+ * also acquire callback_lock and be able to modify cpusets.  It can perform
+ * various checks on the cpuset structure first, knowing nothing will change.
+ * It can also allocate memory while just holding cpuset_mutex.  While it is
+ * performing these checks, various callback routines can briefly acquire
+ * callback_lock to query cpusets.  Once it is ready to make the changes, it
+ * takes callback_lock, blocking everyone else.
  *
  * Calls to the kernel memory allocator can not be made while holding
  * callback_lock, as that would risk double tripping on callback_lock
@@ -403,16 +402,16 @@ static struct cpuset top_cpuset = {
  * guidelines for accessing subsystem state in kernel/cgroup.c
  */
 
-DEFINE_STATIC_PERCPU_RWSEM(cpuset_rwsem);
+static DEFINE_MUTEX(cpuset_mutex);
 
-void cpuset_read_lock(void)
+void cpuset_lock(void)
 {
-	percpu_down_read(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 }
 
-void cpuset_read_unlock(void)
+void cpuset_unlock(void)
 {
-	percpu_up_read(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 static DEFINE_SPINLOCK(callback_lock);
@@ -496,7 +495,7 @@ static inline bool partition_is_populated(struct cpuset *cs,
  * One way or another, we guarantee to return some non-empty subset
  * of cpu_online_mask.
  *
- * Call with callback_lock or cpuset_rwsem held.
+ * Call with callback_lock or cpuset_mutex held.
  */
 static void guarantee_online_cpus(struct task_struct *tsk,
 				  struct cpumask *pmask)
@@ -538,7 +537,7 @@ static void guarantee_online_cpus(struct task_struct *tsk,
  * One way or another, we guarantee to return some non-empty subset
  * of node_states[N_MEMORY].
  *
- * Call with callback_lock or cpuset_rwsem held.
+ * Call with callback_lock or cpuset_mutex held.
  */
 static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
 {
@@ -550,7 +549,7 @@ static void guarantee_online_mems(struct cpuset *cs, nodemask_t *pmask)
 /*
  * update task's spread flag if cpuset's page/slab spread flag is set
  *
- * Call with callback_lock or cpuset_rwsem held. The check can be skipped
+ * Call with callback_lock or cpuset_mutex held. The check can be skipped
  * if on default hierarchy.
  */
 static void cpuset_update_task_spread_flags(struct cpuset *cs,
@@ -575,7 +574,7 @@ static void cpuset_update_task_spread_flags(struct cpuset *cs,
  *
  * One cpuset is a subset of another if all its allowed CPUs and
  * Memory Nodes are a subset of the other, and its exclusive flags
- * are only set if the other's are set.  Call holding cpuset_rwsem.
+ * are only set if the other's are set.  Call holding cpuset_mutex.
  */
 
 static int is_cpuset_subset(const struct cpuset *p, const struct cpuset *q)
@@ -713,7 +712,7 @@ static int validate_change_legacy(struct cpuset *cur, struct cpuset *trial)
  * If we replaced the flag and mask values of the current cpuset
  * (cur) with those values in the trial cpuset (trial), would
  * our various subset and exclusive rules still be valid?  Presumes
- * cpuset_rwsem held.
+ * cpuset_mutex held.
  *
  * 'cur' is the address of an actual, in-use cpuset.  Operations
  * such as list traversal that depend on the actual address of the
@@ -829,7 +828,7 @@ static void update_domain_attr_tree(struct sched_domain_attr *dattr,
 	rcu_read_unlock();
 }
 
-/* Must be called with cpuset_rwsem held.  */
+/* Must be called with cpuset_mutex held.  */
 static inline int nr_cpusets(void)
 {
 	/* jump label reference count + the top-level cpuset */
@@ -855,7 +854,7 @@ static inline int nr_cpusets(void)
  * domains when operating in the severe memory shortage situations
  * that could cause allocation failures below.
  *
- * Must be called with cpuset_rwsem held.
+ * Must be called with cpuset_mutex held.
  *
  * The three key local variables below are:
  *    cp - cpuset pointer, used (together with pos_css) to perform a
@@ -1084,7 +1083,7 @@ static void rebuild_root_domains(void)
 	struct cpuset *cs = NULL;
 	struct cgroup_subsys_state *pos_css;
 
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 	lockdep_assert_cpus_held();
 	lockdep_assert_held(&sched_domains_mutex);
 
@@ -1134,7 +1133,7 @@ partition_and_rebuild_sched_domains(int ndoms_new, cpumask_var_t doms_new[],
  * 'cpus' is removed, then call this routine to rebuild the
  * scheduler's dynamic sched domains.
  *
- * Call with cpuset_rwsem held.  Takes cpus_read_lock().
+ * Call with cpuset_mutex held.  Takes cpus_read_lock().
  */
 static void rebuild_sched_domains_locked(void)
 {
@@ -1145,7 +1144,7 @@ static void rebuild_sched_domains_locked(void)
 	int ndoms;
 
 	lockdep_assert_cpus_held();
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 
 	/*
 	 * If we have raced with CPU hotplug, return early to avoid
@@ -1196,9 +1195,9 @@ static void rebuild_sched_domains_locked(void)
 void rebuild_sched_domains(void)
 {
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	rebuild_sched_domains_locked();
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 }
 
@@ -1207,7 +1206,7 @@ void rebuild_sched_domains(void)
  * @cs: the cpuset in which each task's cpus_allowed mask needs to be changed
  *
  * Iterate through each task of @cs updating its cpus_allowed to the
- * effective cpuset's.  As this function is called with cpuset_rwsem held,
+ * effective cpuset's.  As this function is called with cpuset_mutex held,
  * cpuset membership stays stable.
  */
 static void update_tasks_cpumask(struct cpuset *cs)
@@ -1313,7 +1312,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
 	int old_prs, new_prs;
 	int part_error = PERR_NONE;	/* Partition error? */
 
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 
 	/*
 	 * The parent must be a partition root.
@@ -1536,7 +1535,7 @@ static int update_parent_subparts_cpumask(struct cpuset *cs, int cmd,
  *
  * On legacy hierarchy, effective_cpus will be the same with cpu_allowed.
  *
- * Called with cpuset_rwsem held
+ * Called with cpuset_mutex held
  */
 static void update_cpumasks_hier(struct cpuset *cs, struct tmpmasks *tmp,
 				 bool force)
@@ -1696,7 +1695,7 @@ static void update_sibling_cpumasks(struct cpuset *parent, struct cpuset *cs,
 	struct cpuset *sibling;
 	struct cgroup_subsys_state *pos_css;
 
-	percpu_rwsem_assert_held(&cpuset_rwsem);
+	lockdep_assert_held(&cpuset_mutex);
 
 	/*
 	 * Check all its siblings and call update_cpumasks_hier()
@@ -1938,12 +1937,12 @@ static void *cpuset_being_rebound;
  * @cs: the cpuset in which each task's mems_allowed mask needs to be changed
  *
  * Iterate through each task of @cs updating its mems_allowed to the
- * effective cpuset's.  As this function is called with cpuset_rwsem held,
+ * effective cpuset's.  As this function is called with cpuset_mutex held,
  * cpuset membership stays stable.
  */
 static void update_tasks_nodemask(struct cpuset *cs)
 {
-	static nodemask_t newmems;	/* protected by cpuset_rwsem */
+	static nodemask_t newmems;	/* protected by cpuset_mutex */
 	struct css_task_iter it;
 	struct task_struct *task;
 
@@ -1956,7 +1955,7 @@ static void update_tasks_nodemask(struct cpuset *cs)
 	 * take while holding tasklist_lock.  Forks can happen - the
 	 * mpol_dup() cpuset_being_rebound check will catch such forks,
 	 * and rebind their vma mempolicies too.  Because we still hold
-	 * the global cpuset_rwsem, we know that no other rebind effort
+	 * the global cpuset_mutex, we know that no other rebind effort
 	 * will be contending for the global variable cpuset_being_rebound.
 	 * It's ok if we rebind the same mm twice; mpol_rebind_mm()
 	 * is idempotent.  Also migrate pages in each mm to new nodes.
@@ -2002,7 +2001,7 @@ static void update_tasks_nodemask(struct cpuset *cs)
  *
  * On legacy hierarchy, effective_mems will be the same with mems_allowed.
  *
- * Called with cpuset_rwsem held
+ * Called with cpuset_mutex held
  */
 static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
 {
@@ -2055,7 +2054,7 @@ static void update_nodemasks_hier(struct cpuset *cs, nodemask_t *new_mems)
  * mempolicies and if the cpuset is marked 'memory_migrate',
  * migrate the tasks pages to the new memory.
  *
- * Call with cpuset_rwsem held. May take callback_lock during call.
+ * Call with cpuset_mutex held. May take callback_lock during call.
  * Will take tasklist_lock, scan tasklist for tasks in cpuset cs,
  * lock each such tasks mm->mmap_lock, scan its vma's and rebind
  * their mempolicies to the cpusets new mems_allowed.
@@ -2147,7 +2146,7 @@ static int update_relax_domain_level(struct cpuset *cs, s64 val)
  * @cs: the cpuset in which each task's spread flags needs to be changed
  *
  * Iterate through each task of @cs updating its spread flags.  As this
- * function is called with cpuset_rwsem held, cpuset membership stays
+ * function is called with cpuset_mutex held, cpuset membership stays
  * stable.
  */
 static void update_tasks_flags(struct cpuset *cs)
@@ -2167,7 +2166,7 @@ static void update_tasks_flags(struct cpuset *cs)
  * cs:		the cpuset to update
  * turning_on: 	whether the flag is being set or cleared
  *
- * Call with cpuset_rwsem held.
+ * Call with cpuset_mutex held.
  */
 
 static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
@@ -2217,7 +2216,7 @@ static int update_flag(cpuset_flagbits_t bit, struct cpuset *cs,
  * @new_prs: new partition root state
  * Return: 0 if successful, != 0 if error
  *
- * Call with cpuset_rwsem held.
+ * Call with cpuset_mutex held.
  */
 static int update_prstate(struct cpuset *cs, int new_prs)
 {
@@ -2440,7 +2439,7 @@ static int fmeter_getrate(struct fmeter *fmp)
 
 static struct cpuset *cpuset_attach_old_cs;
 
-/* Called by cgroups to determine if a cpuset is usable; cpuset_rwsem held */
+/* Called by cgroups to determine if a cpuset is usable; cpuset_mutex held */
 static int cpuset_can_attach(struct cgroup_taskset *tset)
 {
 	struct cgroup_subsys_state *css;
@@ -2452,7 +2451,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
 	cpuset_attach_old_cs = task_cs(cgroup_taskset_first(tset, &css));
 	cs = css_cs(css);
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	/* allow moving tasks into an empty cpuset if on default hierarchy */
 	ret = -ENOSPC;
@@ -2482,7 +2481,7 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
 	cs->attach_in_progress++;
 	ret = 0;
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	return ret;
 }
 
@@ -2492,13 +2491,13 @@ static void cpuset_cancel_attach(struct cgroup_taskset *tset)
 
 	cgroup_taskset_first(tset, &css);
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	css_cs(css)->attach_in_progress--;
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 /*
- * Protected by cpuset_rwsem.  cpus_attach is used only by cpuset_attach()
+ * Protected by cpuset_mutex.  cpus_attach is used only by cpuset_attach()
  * but we can't allocate it dynamically there.  Define it global and
  * allocate from cpuset_init().
  */
@@ -2506,7 +2505,7 @@ static cpumask_var_t cpus_attach;
 
 static void cpuset_attach(struct cgroup_taskset *tset)
 {
-	/* static buf protected by cpuset_rwsem */
+	/* static buf protected by cpuset_mutex */
 	static nodemask_t cpuset_attach_nodemask_to;
 	struct task_struct *task;
 	struct task_struct *leader;
@@ -2519,7 +2518,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 	cs = css_cs(css);
 
 	lockdep_assert_cpus_held();	/* see cgroup_attach_lock() */
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	cpus_updated = !cpumask_equal(cs->effective_cpus,
 				      oldcs->effective_cpus);
 	mems_updated = !nodes_equal(cs->effective_mems, oldcs->effective_mems);
@@ -2592,7 +2591,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 	if (!cs->attach_in_progress)
 		wake_up(&cpuset_attach_wq);
 
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 /* The various types of files and directories in a cpuset file system */
@@ -2624,7 +2623,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
 	int retval = 0;
 
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	if (!is_cpuset_online(cs)) {
 		retval = -ENODEV;
 		goto out_unlock;
@@ -2660,7 +2659,7 @@ static int cpuset_write_u64(struct cgroup_subsys_state *css, struct cftype *cft,
 		break;
 	}
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 	return retval;
 }
@@ -2673,7 +2672,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
 	int retval = -ENODEV;
 
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
@@ -2686,7 +2685,7 @@ static int cpuset_write_s64(struct cgroup_subsys_state *css, struct cftype *cft,
 		break;
 	}
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 	return retval;
 }
@@ -2719,7 +2718,7 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	 * operation like this one can lead to a deadlock through kernfs
 	 * active_ref protection.  Let's break the protection.  Losing the
 	 * protection is okay as we check whether @cs is online after
-	 * grabbing cpuset_rwsem anyway.  This only happens on the legacy
+	 * grabbing cpuset_mutex anyway.  This only happens on the legacy
 	 * hierarchies.
 	 */
 	css_get(&cs->css);
@@ -2727,7 +2726,7 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 	flush_work(&cpuset_hotplug_work);
 
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
@@ -2751,7 +2750,7 @@ static ssize_t cpuset_write_resmask(struct kernfs_open_file *of,
 
 	free_cpuset(trialcs);
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 	kernfs_unbreak_active_protection(of->kn);
 	css_put(&cs->css);
@@ -2899,13 +2898,13 @@ static ssize_t sched_partition_write(struct kernfs_open_file *of, char *buf,
 
 	css_get(&cs->css);
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	if (!is_cpuset_online(cs))
 		goto out_unlock;
 
 	retval = update_prstate(cs, val);
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 	css_put(&cs->css);
 	return retval ?: nbytes;
@@ -3122,7 +3121,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 		return 0;
 
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	set_bit(CS_ONLINE, &cs->flags);
 	if (is_spread_page(parent))
@@ -3173,7 +3172,7 @@ static int cpuset_css_online(struct cgroup_subsys_state *css)
 	cpumask_copy(cs->effective_cpus, parent->cpus_allowed);
 	spin_unlock_irq(&callback_lock);
 out_unlock:
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 	return 0;
 }
@@ -3194,7 +3193,7 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
 	struct cpuset *cs = css_cs(css);
 
 	cpus_read_lock();
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	if (is_partition_valid(cs))
 		update_prstate(cs, 0);
@@ -3213,7 +3212,7 @@ static void cpuset_css_offline(struct cgroup_subsys_state *css)
 	cpuset_dec();
 	clear_bit(CS_ONLINE, &cs->flags);
 
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 	cpus_read_unlock();
 }
 
@@ -3226,7 +3225,7 @@ static void cpuset_css_free(struct cgroup_subsys_state *css)
 
 static void cpuset_bind(struct cgroup_subsys_state *root_css)
 {
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 	spin_lock_irq(&callback_lock);
 
 	if (is_in_v2_mode()) {
@@ -3239,7 +3238,7 @@ static void cpuset_bind(struct cgroup_subsys_state *root_css)
 	}
 
 	spin_unlock_irq(&callback_lock);
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 /*
@@ -3281,8 +3280,6 @@ struct cgroup_subsys cpuset_cgrp_subsys = {
 
 int __init cpuset_init(void)
 {
-	BUG_ON(percpu_init_rwsem(&cpuset_rwsem));
-
 	BUG_ON(!alloc_cpumask_var(&top_cpuset.cpus_allowed, GFP_KERNEL));
 	BUG_ON(!alloc_cpumask_var(&top_cpuset.effective_cpus, GFP_KERNEL));
 	BUG_ON(!zalloc_cpumask_var(&top_cpuset.subparts_cpus, GFP_KERNEL));
@@ -3354,7 +3351,7 @@ hotplug_update_tasks_legacy(struct cpuset *cs,
 	is_empty = cpumask_empty(cs->cpus_allowed) ||
 		   nodes_empty(cs->mems_allowed);
 
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 
 	/*
 	 * Move tasks to the nearest ancestor with execution resources,
@@ -3364,7 +3361,7 @@ hotplug_update_tasks_legacy(struct cpuset *cs,
 	if (is_empty)
 		remove_tasks_in_empty_cpuset(cs);
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 }
 
 static void
@@ -3415,14 +3412,14 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
 retry:
 	wait_event(cpuset_attach_wq, cs->attach_in_progress == 0);
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	/*
 	 * We have raced with task attaching. We wait until attaching
 	 * is finished, so we won't attach a task to an empty cpuset.
 	 */
 	if (cs->attach_in_progress) {
-		percpu_up_write(&cpuset_rwsem);
+		mutex_unlock(&cpuset_mutex);
 		goto retry;
 	}
 
@@ -3516,7 +3513,7 @@ static void cpuset_hotplug_update_tasks(struct cpuset *cs, struct tmpmasks *tmp)
 		hotplug_update_tasks_legacy(cs, &new_cpus, &new_mems,
 					    cpus_updated, mems_updated);
 
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 }
 
 /**
@@ -3546,7 +3543,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
 	if (on_dfl && !alloc_cpumasks(NULL, &tmp))
 		ptmp = &tmp;
 
-	percpu_down_write(&cpuset_rwsem);
+	mutex_lock(&cpuset_mutex);
 
 	/* fetch the available cpus/mems and find out which changed how */
 	cpumask_copy(&new_cpus, cpu_active_mask);
@@ -3603,7 +3600,7 @@ static void cpuset_hotplug_workfn(struct work_struct *work)
 		update_tasks_nodemask(&top_cpuset);
 	}
 
-	percpu_up_write(&cpuset_rwsem);
+	mutex_unlock(&cpuset_mutex);
 
 	/* if cpus or mems changed, we need to propagate to descendants */
 	if (cpus_updated || mems_updated) {
@@ -4008,7 +4005,7 @@ void __cpuset_memory_pressure_bump(void)
  *  - Used for /proc/<pid>/cpuset.
  *  - No need to task_lock(tsk) on this tsk->cpuset reference, as it
  *    doesn't really matter if tsk->cpuset changes after we read it,
- *    and we take cpuset_rwsem, keeping cpuset_attach() from changing it
+ *    and we take cpuset_mutex, keeping cpuset_attach() from changing it
  *    anyway.
  */
 int proc_cpuset_show(struct seq_file *m, struct pid_namespace *ns,
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 4580fe3e1d0c..5902cbb5e751 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7535,6 +7535,7 @@ static int __sched_setscheduler(struct task_struct *p,
 	int reset_on_fork;
 	int queue_flags = DEQUEUE_SAVE | DEQUEUE_MOVE | DEQUEUE_NOCLOCK;
 	struct rq *rq;
+	bool cpuset_locked = false;
 
 	/* The pi code expects interrupts enabled */
 	BUG_ON(pi && in_interrupt());
@@ -7584,8 +7585,14 @@ static int __sched_setscheduler(struct task_struct *p,
 			return retval;
 	}
 
-	if (pi)
-		cpuset_read_lock();
+	/*
+	 * SCHED_DEADLINE bandwidth accounting relies on stable cpusets
+	 * information.
+	 */
+	if (dl_policy(policy) || dl_policy(p->policy)) {
+		cpuset_locked = true;
+		cpuset_lock();
+	}
 
 	/*
 	 * Make sure no PI-waiters arrive (or leave) while we are
@@ -7661,8 +7668,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	if (unlikely(oldpolicy != -1 && oldpolicy != p->policy)) {
 		policy = oldpolicy = -1;
 		task_rq_unlock(rq, p, &rf);
-		if (pi)
-			cpuset_read_unlock();
+		if (cpuset_locked)
+			cpuset_unlock();
 		goto recheck;
 	}
 
@@ -7729,7 +7736,8 @@ static int __sched_setscheduler(struct task_struct *p,
 	task_rq_unlock(rq, p, &rf);
 
 	if (pi) {
-		cpuset_read_unlock();
+		if (cpuset_locked)
+			cpuset_unlock();
 		rt_mutex_adjust_pi(p);
 	}
 
@@ -7741,8 +7749,8 @@ static int __sched_setscheduler(struct task_struct *p,
 
 unlock:
 	task_rq_unlock(rq, p, &rf);
-	if (pi)
-		cpuset_read_unlock();
+	if (cpuset_locked)
+		cpuset_unlock();
 	return retval;
 }
 
-- 
2.39.2


* [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
@ 2023-03-15 12:18   ` Juri Lelli
  0 siblings, 0 replies; 32+ messages in thread
From: Juri Lelli @ 2023-03-15 12:18 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Qais Yousef, Waiman Long, Tejun Heo,
	Zefan Li, Johannes Weiner, Hao Luo
  Cc: Dietmar Eggemann, Steven Rostedt, linux-kernel, luca.abeni,
	claudio, tommaso.cucinotta, bristot, mathieu.poirier, cgroups,
	Vincent Guittot, Wei Wang, Rick Yiu, Quentin Perret,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sudeep Holla,
	Juri Lelli

Qais reported that iterating over all tasks when rebuilding root
domains, in order to find the DEADLINE tasks whose bandwidth needs to
be restored on such root domains, can be a costly operation (10+ ms
delays on suspend-resume).

To fix the problem, keep track of the number of DEADLINE tasks belonging
to each cpuset and then use this information (in a followup patch) to
only perform the above iteration if DEADLINE tasks are actually present
in the cpuset for which a corresponding root domain is being rebuilt.

Reported-by: Qais Yousef <qyousef@layalina.io>
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
---
 include/linux/cpuset.h |  4 ++++
 kernel/cgroup/cgroup.c |  4 ++++
 kernel/cgroup/cpuset.c | 25 +++++++++++++++++++++++++
 kernel/sched/core.c    | 10 ++++++++++
 4 files changed, 43 insertions(+)

diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
index 355f796c5f07..0348dba5680e 100644
--- a/include/linux/cpuset.h
+++ b/include/linux/cpuset.h
@@ -71,6 +71,8 @@ extern void cpuset_init_smp(void);
 extern void cpuset_force_rebuild(void);
 extern void cpuset_update_active_cpus(void);
 extern void cpuset_wait_for_hotplug(void);
+extern void inc_dl_tasks_cs(struct task_struct *task);
+extern void dec_dl_tasks_cs(struct task_struct *task);
 extern void cpuset_lock(void);
 extern void cpuset_unlock(void);
 extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
@@ -196,6 +198,8 @@ static inline void cpuset_update_active_cpus(void)
 
 static inline void cpuset_wait_for_hotplug(void) { }
 
+static inline void inc_dl_tasks_cs(struct task_struct *task) { }
+static inline void dec_dl_tasks_cs(struct task_struct *task) { }
 static inline void cpuset_lock(void) { }
 static inline void cpuset_unlock(void) { }
 
diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
index c099cf3fa02d..357925e1e4af 100644
--- a/kernel/cgroup/cgroup.c
+++ b/kernel/cgroup/cgroup.c
@@ -57,6 +57,7 @@
 #include <linux/file.h>
 #include <linux/fs_parser.h>
 #include <linux/sched/cputime.h>
+#include <linux/sched/deadline.h>
 #include <linux/psi.h>
 #include <net/sock.h>
 
@@ -6673,6 +6674,9 @@ void cgroup_exit(struct task_struct *tsk)
 	list_add_tail(&tsk->cg_list, &cset->dying_tasks);
 	cset->nr_tasks--;
 
+	if (dl_task(tsk))
+		dec_dl_tasks_cs(tsk);
+
 	WARN_ON_ONCE(cgroup_task_frozen(tsk));
 	if (unlikely(!(tsk->flags & PF_KTHREAD) &&
 		     test_bit(CGRP_FREEZE, &task_dfl_cgroup(tsk)->flags)))
diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 8d82d66d432b..57bc60112618 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -193,6 +193,12 @@ struct cpuset {
 	int use_parent_ecpus;
 	int child_ecpus_count;
 
+	/*
+	 * number of SCHED_DEADLINE tasks attached to this cpuset, so that we
+	 * know when to rebuild associated root domain bandwidth information.
+	 */
+	int nr_deadline_tasks;
+
 	/* Invalid partition error code, not lock protected */
 	enum prs_errcode prs_err;
 
@@ -245,6 +251,20 @@ static inline struct cpuset *parent_cs(struct cpuset *cs)
 	return css_cs(cs->css.parent);
 }
 
+void inc_dl_tasks_cs(struct task_struct *p)
+{
+	struct cpuset *cs = task_cs(p);
+
+	cs->nr_deadline_tasks++;
+}
+
+void dec_dl_tasks_cs(struct task_struct *p)
+{
+	struct cpuset *cs = task_cs(p);
+
+	cs->nr_deadline_tasks--;
+}
+
 /* bits in struct cpuset flags field */
 typedef enum {
 	CS_ONLINE,
@@ -2472,6 +2492,11 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
 		ret = security_task_setscheduler(task);
 		if (ret)
 			goto out_unlock;
+
+		if (dl_task(task)) {
+			cs->nr_deadline_tasks++;
+			cpuset_attach_old_cs->nr_deadline_tasks--;
+		}
 	}
 
 	/*
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 5902cbb5e751..d586a8440348 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -7683,6 +7683,16 @@ static int __sched_setscheduler(struct task_struct *p,
 		goto unlock;
 	}
 
+	/*
+	 * In case a task is setscheduled to SCHED_DEADLINE, or if a task is
+	 * moved to a different sched policy, we need to keep track of that on
+	 * its cpuset (for correct bandwidth tracking).
+	 */
+	if (dl_policy(policy) && !dl_task(p))
+		inc_dl_tasks_cs(p);
+	else if (dl_task(p) && !dl_policy(policy))
+		dec_dl_tasks_cs(p);
+
 	p->sched_reset_on_fork = reset_on_fork;
 	oldprio = p->prio;
 
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* [RFC PATCH 3/3] cgroup/cpuset: Iterate only if DEADLINE tasks are present
@ 2023-03-15 12:18   ` Juri Lelli
  0 siblings, 0 replies; 32+ messages in thread
From: Juri Lelli @ 2023-03-15 12:18 UTC (permalink / raw)
  To: Peter Zijlstra, Ingo Molnar, Qais Yousef, Waiman Long, Tejun Heo,
	Zefan Li, Johannes Weiner, Hao Luo
  Cc: Dietmar Eggemann, Steven Rostedt, linux-kernel, luca.abeni,
	claudio, tommaso.cucinotta, bristot, mathieu.poirier, cgroups,
	Vincent Guittot, Wei Wang, Rick Yiu, Quentin Perret,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sudeep Holla,
	Juri Lelli

update_tasks_root_domain currently iterates over all tasks even if no
DEADLINE task is present on the cpuset/root domain for which bandwidth
accounting is being rebuilt. This has been reported to introduce 10+ ms
delays on suspend-resume operations.

Skip the costly iteration for cpusets that don't contain DEADLINE tasks.

Reported-by: Qais Yousef <qyousef@layalina.io>
Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
---
 kernel/cgroup/cpuset.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 57bc60112618..f46192d2e97e 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -1090,6 +1090,9 @@ static void update_tasks_root_domain(struct cpuset *cs)
 	struct css_task_iter it;
 	struct task_struct *task;
 
+	if (cs->nr_deadline_tasks == 0)
+		return;
+
 	css_task_iter_start(&cs->css, 0, &it);
 
 	while ((task = css_task_iter_next(&it)))
-- 
2.39.2


^ permalink raw reply related	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
@ 2023-03-15 14:49     ` Qais Yousef
  0 siblings, 0 replies; 32+ messages in thread
From: Qais Yousef @ 2023-03-15 14:49 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Peter Zijlstra, Ingo Molnar, Waiman Long, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Dietmar Eggemann, Steven Rostedt,
	linux-kernel, luca.abeni, claudio, tommaso.cucinotta, bristot,
	mathieu.poirier, cgroups, Vincent Guittot, Wei Wang, Rick Yiu,
	Quentin Perret, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Sudeep Holla

On 03/15/23 12:18, Juri Lelli wrote:
> Qais reported that iterating over all tasks when rebuilding root domains
> for finding out which ones are DEADLINE and need their bandwidth
> correctly restored on such root domains can be a costly operation (10+
> ms delays on suspend-resume).
> 
> To fix the problem keep track of the number of DEADLINE tasks belonging
> to each cpuset and then use this information (followup patch) to only
> perform the above iteration if DEADLINE tasks are actually present in
> the cpuset for which a corresponding root domain is being rebuilt.
> 
> Reported-by: Qais Yousef <qyousef@layalina.io>
> Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
> ---
>  include/linux/cpuset.h |  4 ++++
>  kernel/cgroup/cgroup.c |  4 ++++
>  kernel/cgroup/cpuset.c | 25 +++++++++++++++++++++++++
>  kernel/sched/core.c    | 10 ++++++++++
>  4 files changed, 43 insertions(+)
> 
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 355f796c5f07..0348dba5680e 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -71,6 +71,8 @@ extern void cpuset_init_smp(void);
>  extern void cpuset_force_rebuild(void);
>  extern void cpuset_update_active_cpus(void);
>  extern void cpuset_wait_for_hotplug(void);
> +extern void inc_dl_tasks_cs(struct task_struct *task);
> +extern void dec_dl_tasks_cs(struct task_struct *task);
>  extern void cpuset_lock(void);
>  extern void cpuset_unlock(void);
>  extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
> @@ -196,6 +198,8 @@ static inline void cpuset_update_active_cpus(void)
>  
>  static inline void cpuset_wait_for_hotplug(void) { }
>  
> +static inline void inc_dl_tasks_cs(struct task_struct *task) { }
> +static inline void dec_dl_tasks_cs(struct task_struct *task) { }
>  static inline void cpuset_lock(void) { }
>  static inline void cpuset_unlock(void) { }
>  
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index c099cf3fa02d..357925e1e4af 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -57,6 +57,7 @@
>  #include <linux/file.h>
>  #include <linux/fs_parser.h>
>  #include <linux/sched/cputime.h>
> +#include <linux/sched/deadline.h>
>  #include <linux/psi.h>
>  #include <net/sock.h>
>  
> @@ -6673,6 +6674,9 @@ void cgroup_exit(struct task_struct *tsk)
>  	list_add_tail(&tsk->cg_list, &cset->dying_tasks);
>  	cset->nr_tasks--;
>  
> +	if (dl_task(tsk))
> +		dec_dl_tasks_cs(tsk);
> +
>  	WARN_ON_ONCE(cgroup_task_frozen(tsk));
>  	if (unlikely(!(tsk->flags & PF_KTHREAD) &&
>  		     test_bit(CGRP_FREEZE, &task_dfl_cgroup(tsk)->flags)))
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 8d82d66d432b..57bc60112618 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -193,6 +193,12 @@ struct cpuset {
>  	int use_parent_ecpus;
>  	int child_ecpus_count;
>  
> +	/*
> +	 * number of SCHED_DEADLINE tasks attached to this cpuset, so that we
> +	 * know when to rebuild associated root domain bandwidth information.
> +	 */
> +	int nr_deadline_tasks;
> +
>  	/* Invalid partition error code, not lock protected */
>  	enum prs_errcode prs_err;
>  
> @@ -245,6 +251,20 @@ static inline struct cpuset *parent_cs(struct cpuset *cs)
>  	return css_cs(cs->css.parent);
>  }
>  
> +void inc_dl_tasks_cs(struct task_struct *p)
> +{
> +	struct cpuset *cs = task_cs(p);

nit:

I *think* task_cs() assumes rcu_read_lock() is held, right?

Would it make sense to WARN_ON(!rcu_read_lock_held()) to at least
annotate the deps?

Or maybe task_cs() should do that..
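
Something like the below is what I had in mind - just a rough sketch on
top of your patch, not even compile tested:

void inc_dl_tasks_cs(struct task_struct *p)
{
	struct cpuset *cs;

	/* annotate the RCU dependency task_cs() relies on */
	WARN_ON_ONCE(!rcu_read_lock_held());

	cs = task_cs(p);
	cs->nr_deadline_tasks++;
}

(and the same for dec_dl_tasks_cs())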

> +
> +	cs->nr_deadline_tasks++;
> +}
> +
> +void dec_dl_tasks_cs(struct task_struct *p)
> +{
> +	struct cpuset *cs = task_cs(p);

nit: ditto

> +
> +	cs->nr_deadline_tasks--;
> +}
> +
>  /* bits in struct cpuset flags field */
>  typedef enum {
>  	CS_ONLINE,
> @@ -2472,6 +2492,11 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
>  		ret = security_task_setscheduler(task);
>  		if (ret)
>  			goto out_unlock;
> +
> +		if (dl_task(task)) {
> +			cs->nr_deadline_tasks++;
> +			cpuset_attach_old_cs->nr_deadline_tasks--;
> +		}
>  	}
>  
>  	/*
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 5902cbb5e751..d586a8440348 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7683,6 +7683,16 @@ static int __sched_setscheduler(struct task_struct *p,
>  		goto unlock;
>  	}
>  
> +	/*
> +	 * In case a task is setscheduled to SCHED_DEADLINE, or if a task is
> +	 * moved to a different sched policy, we need to keep track of that on
> +	 * its cpuset (for correct bandwidth tracking).
> +	 */
> +	if (dl_policy(policy) && !dl_task(p))
> +		inc_dl_tasks_cs(p);
> +	else if (dl_task(p) && !dl_policy(policy))
> +		dec_dl_tasks_cs(p);
> +

Would it be better to call inc/dec_dl_tasks_cs() from
switched_to_dl()/switched_from_dl() instead?
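
Roughly something like this in kernel/sched/deadline.c - untested, and
where exactly the calls sit relative to the existing bodies of those
callbacks is just a guess:

static void switched_to_dl(struct rq *rq, struct task_struct *p)
{
	/* p is entering SCHED_DEADLINE: account it on its cpuset */
	inc_dl_tasks_cs(p);

	/* existing switched_to_dl() logic stays unchanged below */
}

static void switched_from_dl(struct rq *rq, struct task_struct *p)
{
	/* existing switched_from_dl() logic stays unchanged above */

	/* p is leaving SCHED_DEADLINE: drop it from its cpuset count */
	dec_dl_tasks_cs(p);
}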


Thanks!

--
Qais Yousef

>  	p->sched_reset_on_fork = reset_on_fork;
>  	oldprio = p->prio;
>  
> -- 
> 2.39.2
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
@ 2023-03-15 14:49     ` Qais Yousef
  0 siblings, 0 replies; 32+ messages in thread
From: Qais Yousef @ 2023-03-15 14:49 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Peter Zijlstra, Ingo Molnar, Waiman Long, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Dietmar Eggemann, Steven Rostedt,
	linux-kernel-u79uwXL29TY76Z2rM5mHXA,
	luca.abeni-5rdYK369eBLQB0XuIGIEkQ,
	claudio-YOzL5CV4y4YG1A2ADO40+w,
	tommaso.cucinotta-5rdYK369eBLQB0XuIGIEkQ,
	bristot-H+wXaHxf7aLQT0dZR+AlfA,
	mathieu.poirier-QSEj5FYQhm4dnm+yROfE0A,
	cgroups-u79uwXL29TY76Z2rM5mHXA, Vincent Guittot, Wei Wang,
	Rick Yiu, Quentin Perret, Heiko Carstens, Vasily Gorbik,
	Alexander Gordeev

On 03/15/23 12:18, Juri Lelli wrote:
> Qais reported that iterating over all tasks when rebuilding root domains
> for finding out which ones are DEADLINE and need their bandwidth
> correctly restored on such root domains can be a costly operation (10+
> ms delays on suspend-resume).
> 
> To fix the problem keep track of the number of DEADLINE tasks belonging
> to each cpuset and then use this information (followup patch) to only
> perform the above iteration if DEADLINE tasks are actually present in
> the cpuset for which a corresponding root domain is being rebuilt.
> 
> Reported-by: Qais Yousef <qyousef-wp2msK0BRk8tq7phqP6ubQ@public.gmane.org>
> Signed-off-by: Juri Lelli <juri.lelli-H+wXaHxf7aLQT0dZR+AlfA@public.gmane.org>
> ---
>  include/linux/cpuset.h |  4 ++++
>  kernel/cgroup/cgroup.c |  4 ++++
>  kernel/cgroup/cpuset.c | 25 +++++++++++++++++++++++++
>  kernel/sched/core.c    | 10 ++++++++++
>  4 files changed, 43 insertions(+)
> 
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 355f796c5f07..0348dba5680e 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -71,6 +71,8 @@ extern void cpuset_init_smp(void);
>  extern void cpuset_force_rebuild(void);
>  extern void cpuset_update_active_cpus(void);
>  extern void cpuset_wait_for_hotplug(void);
> +extern void inc_dl_tasks_cs(struct task_struct *task);
> +extern void dec_dl_tasks_cs(struct task_struct *task);
>  extern void cpuset_lock(void);
>  extern void cpuset_unlock(void);
>  extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
> @@ -196,6 +198,8 @@ static inline void cpuset_update_active_cpus(void)
>  
>  static inline void cpuset_wait_for_hotplug(void) { }
>  
> +static inline void inc_dl_tasks_cs(struct task_struct *task) { }
> +static inline void dec_dl_tasks_cs(struct task_struct *task) { }
>  static inline void cpuset_lock(void) { }
>  static inline void cpuset_unlock(void) { }
>  
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index c099cf3fa02d..357925e1e4af 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -57,6 +57,7 @@
>  #include <linux/file.h>
>  #include <linux/fs_parser.h>
>  #include <linux/sched/cputime.h>
> +#include <linux/sched/deadline.h>
>  #include <linux/psi.h>
>  #include <net/sock.h>
>  
> @@ -6673,6 +6674,9 @@ void cgroup_exit(struct task_struct *tsk)
>  	list_add_tail(&tsk->cg_list, &cset->dying_tasks);
>  	cset->nr_tasks--;
>  
> +	if (dl_task(tsk))
> +		dec_dl_tasks_cs(tsk);
> +
>  	WARN_ON_ONCE(cgroup_task_frozen(tsk));
>  	if (unlikely(!(tsk->flags & PF_KTHREAD) &&
>  		     test_bit(CGRP_FREEZE, &task_dfl_cgroup(tsk)->flags)))
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 8d82d66d432b..57bc60112618 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -193,6 +193,12 @@ struct cpuset {
>  	int use_parent_ecpus;
>  	int child_ecpus_count;
>  
> +	/*
> +	 * number of SCHED_DEADLINE tasks attached to this cpuset, so that we
> +	 * know when to rebuild associated root domain bandwidth information.
> +	 */
> +	int nr_deadline_tasks;
> +
>  	/* Invalid partition error code, not lock protected */
>  	enum prs_errcode prs_err;
>  
> @@ -245,6 +251,20 @@ static inline struct cpuset *parent_cs(struct cpuset *cs)
>  	return css_cs(cs->css.parent);
>  }
>  
> +void inc_dl_tasks_cs(struct task_struct *p)
> +{
> +	struct cpuset *cs = task_cs(p);

nit:

I *think* task_cs() assumes rcu_read_lock() is held, right?

Would it make sense to WARN_ON(!rcu_read_lock_held()) to at least
annotate the deps?

Or maybe task_cs() should do that..

> +
> +	cs->nr_deadline_tasks++;
> +}
> +
> +void dec_dl_tasks_cs(struct task_struct *p)
> +{
> +	struct cpuset *cs = task_cs(p);

nit: ditto

> +
> +	cs->nr_deadline_tasks--;
> +}
> +
>  /* bits in struct cpuset flags field */
>  typedef enum {
>  	CS_ONLINE,
> @@ -2472,6 +2492,11 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
>  		ret = security_task_setscheduler(task);
>  		if (ret)
>  			goto out_unlock;
> +
> +		if (dl_task(task)) {
> +			cs->nr_deadline_tasks++;
> +			cpuset_attach_old_cs->nr_deadline_tasks--;
> +		}
>  	}
>  
>  	/*
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 5902cbb5e751..d586a8440348 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -7683,6 +7683,16 @@ static int __sched_setscheduler(struct task_struct *p,
>  		goto unlock;
>  	}
>  
> +	/*
> +	 * In case a task is setscheduled to SCHED_DEADLINE, or if a task is
> +	 * moved to a different sched policy, we need to keep track of that on
> +	 * its cpuset (for correct bandwidth tracking).
> +	 */
> +	if (dl_policy(policy) && !dl_task(p))
> +		inc_dl_tasks_cs(p);
> +	else if (dl_task(p) && !dl_policy(policy))
> +		dec_dl_tasks_cs(p);
> +

Would it be better to use switched_to_dl()/switched_from_dl() instead to
inc/dec_dl_tasks_cs()?


Thanks!

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 0/3] sched/deadline: cpuset: Rework DEADLINE bandwidth restoration
@ 2023-03-15 14:55   ` Qais Yousef
  0 siblings, 0 replies; 32+ messages in thread
From: Qais Yousef @ 2023-03-15 14:55 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Peter Zijlstra, Ingo Molnar, Waiman Long, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Dietmar Eggemann, Steven Rostedt,
	linux-kernel, luca.abeni, claudio, tommaso.cucinotta, bristot,
	mathieu.poirier, cgroups, Vincent Guittot, Wei Wang, Rick Yiu,
	Quentin Perret, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Sudeep Holla

On 03/15/23 12:18, Juri Lelli wrote:
> Qais reported [1] that iterating over all tasks when rebuilding root
> domains for finding out which ones are DEADLINE and need their bandwidth
> correctly restored on such root domains can be a costly operation (10+
> ms delays on suspend-resume). He proposed we skip rebuilding root
> domains for certain operations, but that approach seemed arch specific
> and possibly prone to errors, as paths that ultimately trigger a rebuild
> might be quite convoluted (thanks Qais for spending time on this!).

Thanks a lot for this! And sorry I couldn't provide something better.

> 
> To fix the problem I instead would propose we
> 
>  1 - Bring back cpuset_mutex (so that we have write access to cpusets
>      from scheduler operations - and we also fix some problems
>      associated to percpu_cpuset_rwsem)
>  2 - Keep track of the number of DEADLINE tasks belonging to each cpuset
>  3 - Use this information to only perform the costly iteration if
>      DEADLINE tasks are actually present in the cpuset for which a
>      corresponding root domain is being rebuilt

nit:

Would you consider adding another patch to rename the functions?
rebuild_root_domains() and update_tasks_root_domain() are deadline accounting
specific functions and don't actually rebuild root domains.


Thanks!

--
Qais Yousef

> 
> This set is also available from
> 
> https://github.com/jlelli/linux.git deadline/rework-cpusets
> 
> Feedback is more than welcome.
> 
> Best,
> Juri
> 
> 1 - https://lore.kernel.org/lkml/20230206221428.2125324-1-qyousef@layalina.io/
> 
> Juri Lelli (3):
>   sched/cpuset: Bring back cpuset_mutex
>   sched/cpuset: Keep track of SCHED_DEADLINE task in cpusets
>   cgroup/cpuset: Iterate only if DEADLINE tasks are present
> 
>  include/linux/cpuset.h |  12 ++-
>  kernel/cgroup/cgroup.c |   4 +
>  kernel/cgroup/cpuset.c | 175 +++++++++++++++++++++++------------------
>  kernel/sched/core.c    |  32 ++++++--
>  4 files changed, 137 insertions(+), 86 deletions(-)
> 
> -- 
> 2.39.2
> 

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
@ 2023-03-15 15:46     ` Waiman Long
  0 siblings, 0 replies; 32+ messages in thread
From: Waiman Long @ 2023-03-15 15:46 UTC (permalink / raw)
  To: Juri Lelli, Peter Zijlstra, Ingo Molnar, Qais Yousef, Tejun Heo,
	Zefan Li, Johannes Weiner, Hao Luo
  Cc: Dietmar Eggemann, Steven Rostedt, linux-kernel, luca.abeni,
	claudio, tommaso.cucinotta, bristot, mathieu.poirier, cgroups,
	Vincent Guittot, Wei Wang, Rick Yiu, Quentin Perret,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sudeep Holla


On 3/15/23 08:18, Juri Lelli wrote:
> Qais reported that iterating over all tasks when rebuilding root domains
> for finding out which ones are DEADLINE and need their bandwidth
> correctly restored on such root domains can be a costly operation (10+
> ms delays on suspend-resume).
>
> To fix the problem keep track of the number of DEADLINE tasks belonging
> to each cpuset and then use this information (followup patch) to only
> perform the above iteration if DEADLINE tasks are actually present in
> the cpuset for which a corresponding root domain is being rebuilt.
>
> Reported-by: Qais Yousef <qyousef@layalina.io>
> Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
> ---
>   include/linux/cpuset.h |  4 ++++
>   kernel/cgroup/cgroup.c |  4 ++++
>   kernel/cgroup/cpuset.c | 25 +++++++++++++++++++++++++
>   kernel/sched/core.c    | 10 ++++++++++
>   4 files changed, 43 insertions(+)
>
> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> index 355f796c5f07..0348dba5680e 100644
> --- a/include/linux/cpuset.h
> +++ b/include/linux/cpuset.h
> @@ -71,6 +71,8 @@ extern void cpuset_init_smp(void);
>   extern void cpuset_force_rebuild(void);
>   extern void cpuset_update_active_cpus(void);
>   extern void cpuset_wait_for_hotplug(void);
> +extern void inc_dl_tasks_cs(struct task_struct *task);
> +extern void dec_dl_tasks_cs(struct task_struct *task);
>   extern void cpuset_lock(void);
>   extern void cpuset_unlock(void);
>   extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
> @@ -196,6 +198,8 @@ static inline void cpuset_update_active_cpus(void)
>   
>   static inline void cpuset_wait_for_hotplug(void) { }
>   
> +static inline void inc_dl_tasks_cs(struct task_struct *task) { }
> +static inline void dec_dl_tasks_cs(struct task_struct *task) { }
>   static inline void cpuset_lock(void) { }
>   static inline void cpuset_unlock(void) { }
>   
> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> index c099cf3fa02d..357925e1e4af 100644
> --- a/kernel/cgroup/cgroup.c
> +++ b/kernel/cgroup/cgroup.c
> @@ -57,6 +57,7 @@
>   #include <linux/file.h>
>   #include <linux/fs_parser.h>
>   #include <linux/sched/cputime.h>
> +#include <linux/sched/deadline.h>
>   #include <linux/psi.h>
>   #include <net/sock.h>
>   
> @@ -6673,6 +6674,9 @@ void cgroup_exit(struct task_struct *tsk)
>   	list_add_tail(&tsk->cg_list, &cset->dying_tasks);
>   	cset->nr_tasks--;
>   
> +	if (dl_task(tsk))
> +		dec_dl_tasks_cs(tsk);
> +
>   	WARN_ON_ONCE(cgroup_task_frozen(tsk));
>   	if (unlikely(!(tsk->flags & PF_KTHREAD) &&
>   		     test_bit(CGRP_FREEZE, &task_dfl_cgroup(tsk)->flags)))
> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> index 8d82d66d432b..57bc60112618 100644
> --- a/kernel/cgroup/cpuset.c
> +++ b/kernel/cgroup/cpuset.c
> @@ -193,6 +193,12 @@ struct cpuset {
>   	int use_parent_ecpus;
>   	int child_ecpus_count;
>   
> +	/*
> +	 * number of SCHED_DEADLINE tasks attached to this cpuset, so that we
> +	 * know when to rebuild associated root domain bandwidth information.
> +	 */
> +	int nr_deadline_tasks;
> +
>   	/* Invalid partition error code, not lock protected */
>   	enum prs_errcode prs_err;
>   
> @@ -245,6 +251,20 @@ static inline struct cpuset *parent_cs(struct cpuset *cs)
>   	return css_cs(cs->css.parent);
>   }
>   
> +void inc_dl_tasks_cs(struct task_struct *p)
> +{
> +	struct cpuset *cs = task_cs(p);
> +
> +	cs->nr_deadline_tasks++;
> +}
> +
> +void dec_dl_tasks_cs(struct task_struct *p)
> +{
> +	struct cpuset *cs = task_cs(p);
> +
> +	cs->nr_deadline_tasks--;
> +}
> +
>   /* bits in struct cpuset flags field */
>   typedef enum {
>   	CS_ONLINE,
> @@ -2472,6 +2492,11 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
>   		ret = security_task_setscheduler(task);
>   		if (ret)
>   			goto out_unlock;
> +
> +		if (dl_task(task)) {
> +			cs->nr_deadline_tasks++;
> +			cpuset_attach_old_cs->nr_deadline_tasks--;
> +		}
>   	}

Any one of the tasks in the cpuset can cause the test to fail and abort
the attachment. I would suggest that you keep a deadline task transfer
count in the loop and then update cs and cpuset_attach_old_cs only
after all the tasks have been iterated successfully.
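
IOW, roughly something like this (untested, reusing the variables that
are already there in cpuset_can_attach()):

	int dl_tasks = 0;

	cgroup_taskset_for_each(task, css, tset) {
		/* existing per-task checks */
		ret = security_task_setscheduler(task);
		if (ret)
			goto out_unlock;

		/* only count the DEADLINE tasks for now */
		if (dl_task(task))
			dl_tasks++;
	}

	/* commit the transfer only once every task has passed the checks */
	cs->nr_deadline_tasks += dl_tasks;
	cpuset_attach_old_cs->nr_deadline_tasks -= dl_tasks;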

Cheers,
Longman


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 0/3] sched/deadline: cpuset: Rework DEADLINE bandwidth restoration
  2023-03-15 14:55   ` Qais Yousef
@ 2023-03-15 17:10     ` Juri Lelli
  -1 siblings, 0 replies; 32+ messages in thread
From: Juri Lelli @ 2023-03-15 17:10 UTC (permalink / raw)
  To: Qais Yousef
  Cc: Peter Zijlstra, Ingo Molnar, Waiman Long, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Dietmar Eggemann, Steven Rostedt,
	linux-kernel, luca.abeni, claudio, tommaso.cucinotta, bristot,
	mathieu.poirier, cgroups, Vincent Guittot, Wei Wang, Rick Yiu,
	Quentin Perret, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Sudeep Holla

On 15/03/23 14:55, Qais Yousef wrote:
> On 03/15/23 12:18, Juri Lelli wrote:
> > Qais reported [1] that iterating over all tasks when rebuilding root
> > domains for finding out which ones are DEADLINE and need their bandwidth
> > correctly restored on such root domains can be a costly operation (10+
> > ms delays on suspend-resume). He proposed we skip rebuilding root
> > domains for certain operations, but that approach seemed arch specific
> > and possibly prone to errors, as paths that ultimately trigger a rebuild
> > might be quite convoluted (thanks Qais for spending time on this!).
> 
> Thanks a lot for this! And sorry I couldn't provide something better.

Ah, no worries. Actually I still have to convince myself that what
I have is actually better. :)

> > 
> > To fix the problem I instead would propose we
> > 
> >  1 - Bring back cpuset_mutex (so that we have write access to cpusets
> >      from scheduler operations - and we also fix some problems
> >      associated to percpu_cpuset_rwsem)
> >  2 - Keep track of the number of DEADLINE tasks belonging to each cpuset
> >  3 - Use this information to only perform the costly iteration if
> >      DEADLINE tasks are actually present in the cpuset for which a
> >      corresponding root domain is being rebuilt
> 
> nit:
> 
> Would you consider adding another patch to rename the functions?
> rebuild_root_domains() and update_tasks_root_domain() are deadline accounting
> specific functions and don't actually rebuild root domains.

Yep, can do.

Thanks,
Juri


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
  2023-03-15 15:46     ` Waiman Long
@ 2023-03-15 17:14       ` Juri Lelli
  -1 siblings, 0 replies; 32+ messages in thread
From: Juri Lelli @ 2023-03-15 17:14 UTC (permalink / raw)
  To: Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, Qais Yousef, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Dietmar Eggemann, Steven Rostedt,
	linux-kernel, luca.abeni, claudio, tommaso.cucinotta, bristot,
	mathieu.poirier, cgroups, Vincent Guittot, Wei Wang, Rick Yiu,
	Quentin Perret, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Sudeep Holla

On 15/03/23 11:46, Waiman Long wrote:
> 
> On 3/15/23 08:18, Juri Lelli wrote:
> > Qais reported that iterating over all tasks when rebuilding root domains
> > for finding out which ones are DEADLINE and need their bandwidth
> > correctly restored on such root domains can be a costly operation (10+
> > ms delays on suspend-resume).
> > 
> > To fix the problem keep track of the number of DEADLINE tasks belonging
> > to each cpuset and then use this information (followup patch) to only
> > perform the above iteration if DEADLINE tasks are actually present in
> > the cpuset for which a corresponding root domain is being rebuilt.
> > 
> > Reported-by: Qais Yousef <qyousef@layalina.io>
> > Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
> > ---
> >   include/linux/cpuset.h |  4 ++++
> >   kernel/cgroup/cgroup.c |  4 ++++
> >   kernel/cgroup/cpuset.c | 25 +++++++++++++++++++++++++
> >   kernel/sched/core.c    | 10 ++++++++++
> >   4 files changed, 43 insertions(+)
> > 
> > diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
> > index 355f796c5f07..0348dba5680e 100644
> > --- a/include/linux/cpuset.h
> > +++ b/include/linux/cpuset.h
> > @@ -71,6 +71,8 @@ extern void cpuset_init_smp(void);
> >   extern void cpuset_force_rebuild(void);
> >   extern void cpuset_update_active_cpus(void);
> >   extern void cpuset_wait_for_hotplug(void);
> > +extern void inc_dl_tasks_cs(struct task_struct *task);
> > +extern void dec_dl_tasks_cs(struct task_struct *task);
> >   extern void cpuset_lock(void);
> >   extern void cpuset_unlock(void);
> >   extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
> > @@ -196,6 +198,8 @@ static inline void cpuset_update_active_cpus(void)
> >   static inline void cpuset_wait_for_hotplug(void) { }
> > +static inline void inc_dl_tasks_cs(struct task_struct *task) { }
> > +static inline void dec_dl_tasks_cs(struct task_struct *task) { }
> >   static inline void cpuset_lock(void) { }
> >   static inline void cpuset_unlock(void) { }
> > diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
> > index c099cf3fa02d..357925e1e4af 100644
> > --- a/kernel/cgroup/cgroup.c
> > +++ b/kernel/cgroup/cgroup.c
> > @@ -57,6 +57,7 @@
> >   #include <linux/file.h>
> >   #include <linux/fs_parser.h>
> >   #include <linux/sched/cputime.h>
> > +#include <linux/sched/deadline.h>
> >   #include <linux/psi.h>
> >   #include <net/sock.h>
> > @@ -6673,6 +6674,9 @@ void cgroup_exit(struct task_struct *tsk)
> >   	list_add_tail(&tsk->cg_list, &cset->dying_tasks);
> >   	cset->nr_tasks--;
> > +	if (dl_task(tsk))
> > +		dec_dl_tasks_cs(tsk);
> > +
> >   	WARN_ON_ONCE(cgroup_task_frozen(tsk));
> >   	if (unlikely(!(tsk->flags & PF_KTHREAD) &&
> >   		     test_bit(CGRP_FREEZE, &task_dfl_cgroup(tsk)->flags)))
> > diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
> > index 8d82d66d432b..57bc60112618 100644
> > --- a/kernel/cgroup/cpuset.c
> > +++ b/kernel/cgroup/cpuset.c
> > @@ -193,6 +193,12 @@ struct cpuset {
> >   	int use_parent_ecpus;
> >   	int child_ecpus_count;
> > +	/*
> > +	 * number of SCHED_DEADLINE tasks attached to this cpuset, so that we
> > +	 * know when to rebuild associated root domain bandwidth information.
> > +	 */
> > +	int nr_deadline_tasks;
> > +
> >   	/* Invalid partition error code, not lock protected */
> >   	enum prs_errcode prs_err;
> > @@ -245,6 +251,20 @@ static inline struct cpuset *parent_cs(struct cpuset *cs)
> >   	return css_cs(cs->css.parent);
> >   }
> > +void inc_dl_tasks_cs(struct task_struct *p)
> > +{
> > +	struct cpuset *cs = task_cs(p);
> > +
> > +	cs->nr_deadline_tasks++;
> > +}
> > +
> > +void dec_dl_tasks_cs(struct task_struct *p)
> > +{
> > +	struct cpuset *cs = task_cs(p);
> > +
> > +	cs->nr_deadline_tasks--;
> > +}
> > +
> >   /* bits in struct cpuset flags field */
> >   typedef enum {
> >   	CS_ONLINE,
> > @@ -2472,6 +2492,11 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
> >   		ret = security_task_setscheduler(task);
> >   		if (ret)
> >   			goto out_unlock;
> > +
> > +		if (dl_task(task)) {
> > +			cs->nr_deadline_tasks++;
> > +			cpuset_attach_old_cs->nr_deadline_tasks--;
> > +		}
> >   	}
> 
> Any one of the tasks in the cpuset can cause the test to fail and abort the
> attachment. I would suggest that you keep a deadline task transfer count in
> the loop and then update cs and cpuset_attach_old_cs only after all the
> tasks have been iterated successfully.

Right, I think Dietmar commented pointing out something along these
lines. I think, though, that we already have this problem with the
current task_can_attach -> dl_cpu_busy path, which reserves bandwidth
for each task in the destination cs. Will need to look into that. Do
you know which sort of operation would move multiple tasks at once?


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
  2023-03-15 14:49     ` Qais Yousef
@ 2023-03-15 17:18       ` Juri Lelli
  -1 siblings, 0 replies; 32+ messages in thread
From: Juri Lelli @ 2023-03-15 17:18 UTC (permalink / raw)
  To: Qais Yousef
  Cc: Peter Zijlstra, Ingo Molnar, Waiman Long, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Dietmar Eggemann, Steven Rostedt,
	linux-kernel, luca.abeni, claudio, tommaso.cucinotta, bristot,
	mathieu.poirier, cgroups, Vincent Guittot, Wei Wang, Rick Yiu,
	Quentin Perret, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Sudeep Holla

On 15/03/23 14:49, Qais Yousef wrote:
> On 03/15/23 12:18, Juri Lelli wrote:

...

> > +void inc_dl_tasks_cs(struct task_struct *p)
> > +{
> > +	struct cpuset *cs = task_cs(p);
> 
> nit:
> 
> I *think* task_cs() assumes rcu_read_lock() is held, right?
> 
> Would it make sense to WARN_ON(!rcu_read_lock_held()) to at least
> annotate the deps?

I think we already have that check in task_css_set_check()?

> Or maybe task_cs() should do that..
> 
> > +
> > +	cs->nr_deadline_tasks++;
> > +}
> > +
> > +void dec_dl_tasks_cs(struct task_struct *p)
> > +{
> > +	struct cpuset *cs = task_cs(p);
> 
> nit: ditto
> 
> > +
> > +	cs->nr_deadline_tasks--;
> > +}
> > +

...

> > diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> > index 5902cbb5e751..d586a8440348 100644
> > --- a/kernel/sched/core.c
> > +++ b/kernel/sched/core.c
> > @@ -7683,6 +7683,16 @@ static int __sched_setscheduler(struct task_struct *p,
> >  		goto unlock;
> >  	}
> >  
> > +	/*
> > +	 * In case a task is setscheduled to SCHED_DEADLINE, or if a task is
> > +	 * moved to a different sched policy, we need to keep track of that on
> > +	 * its cpuset (for correct bandwidth tracking).
> > +	 */
> > +	if (dl_policy(policy) && !dl_task(p))
> > +		inc_dl_tasks_cs(p);
> > +	else if (dl_task(p) && !dl_policy(policy))
> > +		dec_dl_tasks_cs(p);
> > +
> 
> Would it be better to call inc/dec_dl_tasks_cs() from
> switched_to_dl()/switched_from_dl() instead?

Ah, makes sense. I'll play with this.

Thanks,
Juri


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
  2023-03-15 17:14       ` Juri Lelli
@ 2023-03-15 18:01         ` Waiman Long
  -1 siblings, 0 replies; 32+ messages in thread
From: Waiman Long @ 2023-03-15 18:01 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Peter Zijlstra, Ingo Molnar, Qais Yousef, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Dietmar Eggemann, Steven Rostedt,
	linux-kernel, luca.abeni, claudio, tommaso.cucinotta, bristot,
	mathieu.poirier, cgroups, Vincent Guittot, Wei Wang, Rick Yiu,
	Quentin Perret, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Sudeep Holla


On 3/15/23 13:14, Juri Lelli wrote:
> On 15/03/23 11:46, Waiman Long wrote:
>> On 3/15/23 08:18, Juri Lelli wrote:
>>> Qais reported that iterating over all tasks when rebuilding root domains
>>> for finding out which ones are DEADLINE and need their bandwidth
>>> correctly restored on such root domains can be a costly operation (10+
>>> ms delays on suspend-resume).
>>>
>>> To fix the problem keep track of the number of DEADLINE tasks belonging
>>> to each cpuset and then use this information (followup patch) to only
>>> perform the above iteration if DEADLINE tasks are actually present in
>>> the cpuset for which a corresponding root domain is being rebuilt.
>>>
>>> Reported-by: Qais Yousef <qyousef@layalina.io>
>>> Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
>>> ---
>>>    include/linux/cpuset.h |  4 ++++
>>>    kernel/cgroup/cgroup.c |  4 ++++
>>>    kernel/cgroup/cpuset.c | 25 +++++++++++++++++++++++++
>>>    kernel/sched/core.c    | 10 ++++++++++
>>>    4 files changed, 43 insertions(+)
>>>
>>> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
>>> index 355f796c5f07..0348dba5680e 100644
>>> --- a/include/linux/cpuset.h
>>> +++ b/include/linux/cpuset.h
>>> @@ -71,6 +71,8 @@ extern void cpuset_init_smp(void);
>>>    extern void cpuset_force_rebuild(void);
>>>    extern void cpuset_update_active_cpus(void);
>>>    extern void cpuset_wait_for_hotplug(void);
>>> +extern void inc_dl_tasks_cs(struct task_struct *task);
>>> +extern void dec_dl_tasks_cs(struct task_struct *task);
>>>    extern void cpuset_lock(void);
>>>    extern void cpuset_unlock(void);
>>>    extern void cpuset_cpus_allowed(struct task_struct *p, struct cpumask *mask);
>>> @@ -196,6 +198,8 @@ static inline void cpuset_update_active_cpus(void)
>>>    static inline void cpuset_wait_for_hotplug(void) { }
>>> +static inline void inc_dl_tasks_cs(struct task_struct *task) { }
>>> +static inline void dec_dl_tasks_cs(struct task_struct *task) { }
>>>    static inline void cpuset_lock(void) { }
>>>    static inline void cpuset_unlock(void) { }
>>> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
>>> index c099cf3fa02d..357925e1e4af 100644
>>> --- a/kernel/cgroup/cgroup.c
>>> +++ b/kernel/cgroup/cgroup.c
>>> @@ -57,6 +57,7 @@
>>>    #include <linux/file.h>
>>>    #include <linux/fs_parser.h>
>>>    #include <linux/sched/cputime.h>
>>> +#include <linux/sched/deadline.h>
>>>    #include <linux/psi.h>
>>>    #include <net/sock.h>
>>> @@ -6673,6 +6674,9 @@ void cgroup_exit(struct task_struct *tsk)
>>>    	list_add_tail(&tsk->cg_list, &cset->dying_tasks);
>>>    	cset->nr_tasks--;
>>> +	if (dl_task(tsk))
>>> +		dec_dl_tasks_cs(tsk);
>>> +
>>>    	WARN_ON_ONCE(cgroup_task_frozen(tsk));
>>>    	if (unlikely(!(tsk->flags & PF_KTHREAD) &&
>>>    		     test_bit(CGRP_FREEZE, &task_dfl_cgroup(tsk)->flags)))
>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>> index 8d82d66d432b..57bc60112618 100644
>>> --- a/kernel/cgroup/cpuset.c
>>> +++ b/kernel/cgroup/cpuset.c
>>> @@ -193,6 +193,12 @@ struct cpuset {
>>>    	int use_parent_ecpus;
>>>    	int child_ecpus_count;
>>> +	/*
>>> +	 * number of SCHED_DEADLINE tasks attached to this cpuset, so that we
>>> +	 * know when to rebuild associated root domain bandwidth information.
>>> +	 */
>>> +	int nr_deadline_tasks;
>>> +
>>>    	/* Invalid partition error code, not lock protected */
>>>    	enum prs_errcode prs_err;
>>> @@ -245,6 +251,20 @@ static inline struct cpuset *parent_cs(struct cpuset *cs)
>>>    	return css_cs(cs->css.parent);
>>>    }
>>> +void inc_dl_tasks_cs(struct task_struct *p)
>>> +{
>>> +	struct cpuset *cs = task_cs(p);
>>> +
>>> +	cs->nr_deadline_tasks++;
>>> +}
>>> +
>>> +void dec_dl_tasks_cs(struct task_struct *p)
>>> +{
>>> +	struct cpuset *cs = task_cs(p);
>>> +
>>> +	cs->nr_deadline_tasks--;
>>> +}
>>> +
>>>    /* bits in struct cpuset flags field */
>>>    typedef enum {
>>>    	CS_ONLINE,
>>> @@ -2472,6 +2492,11 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
>>>    		ret = security_task_setscheduler(task);
>>>    		if (ret)
>>>    			goto out_unlock;
>>> +
>>> +		if (dl_task(task)) {
>>> +			cs->nr_deadline_tasks++;
>>> +			cpuset_attach_old_cs->nr_deadline_tasks--;
>>> +		}
>>>    	}
>> Any one of the tasks in the cpuset can cause the test to fail and abort the
>> attachment. I would suggest that you keep a deadline task transfer count in
>> the loop and then update cs and cpuset_attach_old_cs only after all the
>> tasks have been iterated successfully.
> Right, Dietmar I think commented pointing out something along these
> lines. Think though we already have this problem with current
> task_can_attach -> dl_cpu_busy which reserves bandwidth for each task
> in the destination cs. Will need to look into that. Do you know which
> sort of operation would move multiple tasks at once?

Actually, what I said previously may not be enough. There can be 
multiple controllers attached to a cgroup. If any of their can_attach() 
calls fails, the whole transaction is aborted and cancel_attach() will 
be called. My new suggestion is to add a new deadline task transfer 
count into the cpuset structure and store the information there 
temporarily. If cpuset_attach() is called, it means all the can_attach 
calls succeed. You can then update the dl task count accordingly and 
clear the temporary transfer count.

I guess you may have to do something similar with dl_cpu_busy().
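
As a rough sketch of that idea (the nr_migrate_dl_tasks field below and
the exact spots in the attach path are illustrative assumptions, not
taken from an actual patch):

struct cpuset {
	...
	int nr_deadline_tasks;
	/* DL tasks staged by cpuset_can_attach(), not committed yet */
	int nr_migrate_dl_tasks;
	...
};

static int cpuset_can_attach(struct cgroup_taskset *tset)
{
	...
	cgroup_taskset_for_each(task, css, tset) {
		...
		if (dl_task(task))
			cs->nr_migrate_dl_tasks++;
	}
	...
}

static void cpuset_cancel_attach(struct cgroup_taskset *tset)
{
	...
	/* some controller's can_attach() failed: drop the staged count */
	cs->nr_migrate_dl_tasks = 0;
	...
}

static void cpuset_attach(struct cgroup_taskset *tset)
{
	...
	/* every can_attach() succeeded: commit the DL task transfer */
	cs->nr_deadline_tasks += cs->nr_migrate_dl_tasks;
	cpuset_attach_old_cs->nr_deadline_tasks -= cs->nr_migrate_dl_tasks;
	cs->nr_migrate_dl_tasks = 0;
	...
}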

My 2 cents.

Cheers,
Longman



^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
@ 2023-03-15 18:10           ` Waiman Long
  0 siblings, 0 replies; 32+ messages in thread
From: Waiman Long @ 2023-03-15 18:10 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Peter Zijlstra, Ingo Molnar, Qais Yousef, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Dietmar Eggemann, Steven Rostedt,
	linux-kernel, luca.abeni, claudio, tommaso.cucinotta, bristot,
	mathieu.poirier, cgroups, Vincent Guittot, Wei Wang, Rick Yiu,
	Quentin Perret, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Sudeep Holla

On 3/15/23 14:01, Waiman Long wrote:
>
> On 3/15/23 13:14, Juri Lelli wrote:
>> On 15/03/23 11:46, Waiman Long wrote:
>>> On 3/15/23 08:18, Juri Lelli wrote:
>>>> Qais reported that iterating over all tasks when rebuilding root 
>>>> domains
>>>> for finding out which ones are DEADLINE and need their bandwidth
>>>> correctly restored on such root domains can be a costly operation (10+
>>>> ms delays on suspend-resume).
>>>>
>>>> To fix the problem keep track of the number of DEADLINE tasks 
>>>> belonging
>>>> to each cpuset and then use this information (followup patch) to only
>>>> perform the above iteration if DEADLINE tasks are actually present in
>>>> the cpuset for which a corresponding root domain is being rebuilt.
>>>>
>>>> Reported-by: Qais Yousef <qyousef@layalina.io>
>>>> Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
>>>> ---
>>>>    include/linux/cpuset.h |  4 ++++
>>>>    kernel/cgroup/cgroup.c |  4 ++++
>>>>    kernel/cgroup/cpuset.c | 25 +++++++++++++++++++++++++
>>>>    kernel/sched/core.c    | 10 ++++++++++
>>>>    4 files changed, 43 insertions(+)
>>>>
>>>> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
>>>> index 355f796c5f07..0348dba5680e 100644
>>>> --- a/include/linux/cpuset.h
>>>> +++ b/include/linux/cpuset.h
>>>> @@ -71,6 +71,8 @@ extern void cpuset_init_smp(void);
>>>>    extern void cpuset_force_rebuild(void);
>>>>    extern void cpuset_update_active_cpus(void);
>>>>    extern void cpuset_wait_for_hotplug(void);
>>>> +extern void inc_dl_tasks_cs(struct task_struct *task);
>>>> +extern void dec_dl_tasks_cs(struct task_struct *task);
>>>>    extern void cpuset_lock(void);
>>>>    extern void cpuset_unlock(void);
>>>>    extern void cpuset_cpus_allowed(struct task_struct *p, struct 
>>>> cpumask *mask);
>>>> @@ -196,6 +198,8 @@ static inline void cpuset_update_active_cpus(void)
>>>>    static inline void cpuset_wait_for_hotplug(void) { }
>>>> +static inline void inc_dl_tasks_cs(struct task_struct *task) { }
>>>> +static inline void dec_dl_tasks_cs(struct task_struct *task) { }
>>>>    static inline void cpuset_lock(void) { }
>>>>    static inline void cpuset_unlock(void) { }
>>>> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
>>>> index c099cf3fa02d..357925e1e4af 100644
>>>> --- a/kernel/cgroup/cgroup.c
>>>> +++ b/kernel/cgroup/cgroup.c
>>>> @@ -57,6 +57,7 @@
>>>>    #include <linux/file.h>
>>>>    #include <linux/fs_parser.h>
>>>>    #include <linux/sched/cputime.h>
>>>> +#include <linux/sched/deadline.h>
>>>>    #include <linux/psi.h>
>>>>    #include <net/sock.h>
>>>> @@ -6673,6 +6674,9 @@ void cgroup_exit(struct task_struct *tsk)
>>>>        list_add_tail(&tsk->cg_list, &cset->dying_tasks);
>>>>        cset->nr_tasks--;
>>>> +    if (dl_task(tsk))
>>>> +        dec_dl_tasks_cs(tsk);
>>>> +
>>>>        WARN_ON_ONCE(cgroup_task_frozen(tsk));
>>>>        if (unlikely(!(tsk->flags & PF_KTHREAD) &&
>>>>                 test_bit(CGRP_FREEZE, &task_dfl_cgroup(tsk)->flags)))
>>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>>> index 8d82d66d432b..57bc60112618 100644
>>>> --- a/kernel/cgroup/cpuset.c
>>>> +++ b/kernel/cgroup/cpuset.c
>>>> @@ -193,6 +193,12 @@ struct cpuset {
>>>>        int use_parent_ecpus;
>>>>        int child_ecpus_count;
>>>> +    /*
>>>> +     * number of SCHED_DEADLINE tasks attached to this cpuset, so 
>>>> that we
>>>> +     * know when to rebuild associated root domain bandwidth 
>>>> information.
>>>> +     */
>>>> +    int nr_deadline_tasks;
>>>> +
>>>>        /* Invalid partition error code, not lock protected */
>>>>        enum prs_errcode prs_err;
>>>> @@ -245,6 +251,20 @@ static inline struct cpuset *parent_cs(struct 
>>>> cpuset *cs)
>>>>        return css_cs(cs->css.parent);
>>>>    }
>>>> +void inc_dl_tasks_cs(struct task_struct *p)
>>>> +{
>>>> +    struct cpuset *cs = task_cs(p);
>>>> +
>>>> +    cs->nr_deadline_tasks++;
>>>> +}
>>>> +
>>>> +void dec_dl_tasks_cs(struct task_struct *p)
>>>> +{
>>>> +    struct cpuset *cs = task_cs(p);
>>>> +
>>>> +    cs->nr_deadline_tasks--;
>>>> +}
>>>> +
>>>>    /* bits in struct cpuset flags field */
>>>>    typedef enum {
>>>>        CS_ONLINE,
>>>> @@ -2472,6 +2492,11 @@ static int cpuset_can_attach(struct 
>>>> cgroup_taskset *tset)
>>>>            ret = security_task_setscheduler(task);
>>>>            if (ret)
>>>>                goto out_unlock;
>>>> +
>>>> +        if (dl_task(task)) {
>>>> +            cs->nr_deadline_tasks++;
>>>> +            cpuset_attach_old_cs->nr_deadline_tasks--;
>>>> +        }
>>>>        }
>>> Any one of the tasks in the cpuset can cause the test to fail and 
>>> abort the
>>> attachment. I would suggest that you keep a deadline task transfer 
>>> count in
>>> the loop and then update cs and cpuset_attach_old_cs only after all 
>>> the
>>> tasks have been iterated successfully.
>> Right, Dietmar I think commented pointing out something along these
>> lines. Think though we already have this problem with current
>> task_can_attach -> dl_cpu_busy which reserves bandwidth for each task
>> in the destination cs. Will need to look into that. Do you know which
>> sort of operation would move multiple tasks at once?
>
> Actually, what I said previously may not be enough. There can be 
> multiple controllers attached to a cgroup. If any of their 
> can_attach() calls fails, the whole transaction is aborted and 
> cancel_attach() will be called. My new suggestion is to add a new 
> deadline task transfer count into the cpuset structure and store the 
> information there temporarily. If cpuset_attach() is called, it means 
> all the can_attach calls succeed. You can then update the dl task 
> count accordingly and clear the temporary transfer count.
>
> I guess you may have to do something similar with dl_cpu_busy().
>
> My 2 cents.

Alternatively, you can do the nr_deadline_tasks update in 
cpuset_attach(). However, there is an optimization to skip the task 
iteration if the cpu and memory lists haven't changed. You will have to 
skip that optimization if there are DL tasks in the cpuset.
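
Roughly, as a sketch of this variant (cpus_updated/mems_updated stand in
for whatever condition currently guards that optimization, and the
staged DL count is the one from the previous mail):

static void cpuset_attach(struct cgroup_taskset *tset)
{
	...
	/*
	 * Skip the per-task walk only if nothing changed *and* no DL
	 * tasks are being moved, so nr_deadline_tasks can still be
	 * updated below.
	 */
	if (!cpus_updated && !mems_updated && !cs->nr_migrate_dl_tasks)
		goto out;

	cgroup_taskset_for_each(task, css, tset) {
		...
		if (dl_task(task)) {
			cs->nr_deadline_tasks++;
			cpuset_attach_old_cs->nr_deadline_tasks--;
		}
	}
	...
out:
	...
}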

Cheers,
Longman


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
@ 2023-03-15 19:25         ` Qais Yousef
  0 siblings, 0 replies; 32+ messages in thread
From: Qais Yousef @ 2023-03-15 19:25 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Peter Zijlstra, Ingo Molnar, Waiman Long, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Dietmar Eggemann, Steven Rostedt,
	linux-kernel, luca.abeni, claudio, tommaso.cucinotta, bristot,
	mathieu.poirier, cgroups, Vincent Guittot, Wei Wang, Rick Yiu,
	Quentin Perret, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Sudeep Holla

On 03/15/23 17:18, Juri Lelli wrote:
> On 15/03/23 14:49, Qais Yousef wrote:
> > On 03/15/23 12:18, Juri Lelli wrote:
> 
> ...
> 
> > > +void inc_dl_tasks_cs(struct task_struct *p)
> > > +{
> > > +	struct cpuset *cs = task_cs(p);
> > 
> > nit:
> > 
> > I *think* task_cs() assumes rcu_read_lock() is held, right?
> > 
> > Would it make sense to WARN_ON(!rcu_read_lock_held()) to at least
> > annotate the deps?
> 
> Think we have that check in task_css_set_check()?

Yes, you're right, I didn't follow the call stack far enough.

It seems to depend on PROVE_RCU, which sounds irrelevant, but I see PROVE_RCU
is actually an alias for PROVE_LOCKING.
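
For reference, the two pieces being discussed look roughly like this
(heavily simplified; the real task_css_set_check() in
include/linux/cgroup.h lists a few more held-lock conditions):

# PROVE_RCU really is just PROVE_LOCKING:
config PROVE_RCU
	def_bool PROVE_LOCKING

/* include/linux/cgroup.h, simplified */
#define task_css_set_check(task, __c)				\
	rcu_dereference_check((task)->cgroups,			\
			      lockdep_is_held(&cgroup_mutex) || (__c))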


Cheers

--
Qais Yousef

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
@ 2023-03-15 23:27           ` Waiman Long
  0 siblings, 0 replies; 32+ messages in thread
From: Waiman Long @ 2023-03-15 23:27 UTC (permalink / raw)
  To: Juri Lelli
  Cc: Peter Zijlstra, Ingo Molnar, Qais Yousef, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Dietmar Eggemann, Steven Rostedt,
	linux-kernel, luca.abeni, claudio, tommaso.cucinotta, bristot,
	mathieu.poirier, cgroups, Vincent Guittot, Wei Wang, Rick Yiu,
	Quentin Perret, Heiko Carstens, Vasily Gorbik, Alexander Gordeev,
	Sudeep Holla

On 3/15/23 14:01, Waiman Long wrote:
>
> On 3/15/23 13:14, Juri Lelli wrote:
>> On 15/03/23 11:46, Waiman Long wrote:
>>> On 3/15/23 08:18, Juri Lelli wrote:
>>>> Qais reported that iterating over all tasks when rebuilding root 
>>>> domains
>>>> for finding out which ones are DEADLINE and need their bandwidth
>>>> correctly restored on such root domains can be a costly operation (10+
>>>> ms delays on suspend-resume).
>>>>
>>>> To fix the problem keep track of the number of DEADLINE tasks 
>>>> belonging
>>>> to each cpuset and then use this information (followup patch) to only
>>>> perform the above iteration if DEADLINE tasks are actually present in
>>>> the cpuset for which a corresponding root domain is being rebuilt.
>>>>
>>>> Reported-by: Qais Yousef <qyousef@layalina.io>
>>>> Signed-off-by: Juri Lelli <juri.lelli@redhat.com>
>>>> ---
>>>>    include/linux/cpuset.h |  4 ++++
>>>>    kernel/cgroup/cgroup.c |  4 ++++
>>>>    kernel/cgroup/cpuset.c | 25 +++++++++++++++++++++++++
>>>>    kernel/sched/core.c    | 10 ++++++++++
>>>>    4 files changed, 43 insertions(+)
>>>>
>>>> diff --git a/include/linux/cpuset.h b/include/linux/cpuset.h
>>>> index 355f796c5f07..0348dba5680e 100644
>>>> --- a/include/linux/cpuset.h
>>>> +++ b/include/linux/cpuset.h
>>>> @@ -71,6 +71,8 @@ extern void cpuset_init_smp(void);
>>>>    extern void cpuset_force_rebuild(void);
>>>>    extern void cpuset_update_active_cpus(void);
>>>>    extern void cpuset_wait_for_hotplug(void);
>>>> +extern void inc_dl_tasks_cs(struct task_struct *task);
>>>> +extern void dec_dl_tasks_cs(struct task_struct *task);
>>>>    extern void cpuset_lock(void);
>>>>    extern void cpuset_unlock(void);
>>>>    extern void cpuset_cpus_allowed(struct task_struct *p, struct 
>>>> cpumask *mask);
>>>> @@ -196,6 +198,8 @@ static inline void cpuset_update_active_cpus(void)
>>>>    static inline void cpuset_wait_for_hotplug(void) { }
>>>> +static inline void inc_dl_tasks_cs(struct task_struct *task) { }
>>>> +static inline void dec_dl_tasks_cs(struct task_struct *task) { }
>>>>    static inline void cpuset_lock(void) { }
>>>>    static inline void cpuset_unlock(void) { }
>>>> diff --git a/kernel/cgroup/cgroup.c b/kernel/cgroup/cgroup.c
>>>> index c099cf3fa02d..357925e1e4af 100644
>>>> --- a/kernel/cgroup/cgroup.c
>>>> +++ b/kernel/cgroup/cgroup.c
>>>> @@ -57,6 +57,7 @@
>>>>    #include <linux/file.h>
>>>>    #include <linux/fs_parser.h>
>>>>    #include <linux/sched/cputime.h>
>>>> +#include <linux/sched/deadline.h>
>>>>    #include <linux/psi.h>
>>>>    #include <net/sock.h>
>>>> @@ -6673,6 +6674,9 @@ void cgroup_exit(struct task_struct *tsk)
>>>>        list_add_tail(&tsk->cg_list, &cset->dying_tasks);
>>>>        cset->nr_tasks--;
>>>> +    if (dl_task(tsk))
>>>> +        dec_dl_tasks_cs(tsk);
>>>> +
>>>>        WARN_ON_ONCE(cgroup_task_frozen(tsk));
>>>>        if (unlikely(!(tsk->flags & PF_KTHREAD) &&
>>>>                 test_bit(CGRP_FREEZE, &task_dfl_cgroup(tsk)->flags)))
>>>> diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
>>>> index 8d82d66d432b..57bc60112618 100644
>>>> --- a/kernel/cgroup/cpuset.c
>>>> +++ b/kernel/cgroup/cpuset.c
>>>> @@ -193,6 +193,12 @@ struct cpuset {
>>>>        int use_parent_ecpus;
>>>>        int child_ecpus_count;
>>>> +    /*
>>>> +     * number of SCHED_DEADLINE tasks attached to this cpuset, so 
>>>> that we
>>>> +     * know when to rebuild associated root domain bandwidth 
>>>> information.
>>>> +     */
>>>> +    int nr_deadline_tasks;
>>>> +
>>>>        /* Invalid partition error code, not lock protected */
>>>>        enum prs_errcode prs_err;
>>>> @@ -245,6 +251,20 @@ static inline struct cpuset *parent_cs(struct 
>>>> cpuset *cs)
>>>>        return css_cs(cs->css.parent);
>>>>    }
>>>> +void inc_dl_tasks_cs(struct task_struct *p)
>>>> +{
>>>> +    struct cpuset *cs = task_cs(p);
>>>> +
>>>> +    cs->nr_deadline_tasks++;
>>>> +}
>>>> +
>>>> +void dec_dl_tasks_cs(struct task_struct *p)
>>>> +{
>>>> +    struct cpuset *cs = task_cs(p);
>>>> +
>>>> +    cs->nr_deadline_tasks--;
>>>> +}
>>>> +
>>>>    /* bits in struct cpuset flags field */
>>>>    typedef enum {
>>>>        CS_ONLINE,
>>>> @@ -2472,6 +2492,11 @@ static int cpuset_can_attach(struct 
>>>> cgroup_taskset *tset)
>>>>            ret = security_task_setscheduler(task);
>>>>            if (ret)
>>>>                goto out_unlock;
>>>> +
>>>> +        if (dl_task(task)) {
>>>> +            cs->nr_deadline_tasks++;
>>>> +            cpuset_attach_old_cs->nr_deadline_tasks--;
>>>> +        }
>>>>        }
>>> Any one of the tasks in the cpuset can cause the test to fail and 
>>> abort the
>>> attachment. I would suggest that you keep a deadline task transfer 
>>> count in
>>> the loop and then update cs and cpuset_attach_old_cs only after all 
>>> the
>>> tasks have been iterated successfully.
>> Right, Dietmar I think commented pointing out something along these
>> lines. Think though we already have this problem with current
>> task_can_attach -> dl_cpu_busy which reserves bandwidth for each task
>> in the destination cs. Will need to look into that. Do you know which
>> sort of operation would move multiple tasks at once?
>
> Actually, what I said previously may not be enough. There can be 
> multiple controllers attached to a cgroup. If any of their 
> can_attach() calls fails, the whole transaction is aborted and 
> cancel_attach() will be called. My new suggestion is to add a new 
> deadline task transfer count into the cpuset structure and store the 
> information there temporarily. If cpuset_attach() is called, it means 
> all the can_attach calls succeed. You can then update the dl task 
> count accordingly and clear the temporary transfer count.
>
> I guess you may have to do something similar with dl_cpu_busy().

Another possibility is to record, in the task_struct, the cpu from which
the new DL bandwidth was allocated. Then, in cpuset_cancel_attach(), you
can revert the dl_cpu_busy() change if DL tasks are in the css_set to be
transferred. That will likely require having a DL task transfer count in
the cpuset and iterating over all the tasks to look for ones with a
previously recorded cpu # whenever the transfer count is non-zero.
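
A very rough sketch of that direction (all of the names below, such as
the dl_bw_cpu field, the dl_bw_release() helper and the
nr_migrate_dl_tasks transfer count, are invented for illustration):

struct task_struct {
	...
	/* cpu whose root domain the DL bandwidth was reserved on, -1 if none */
	int			dl_bw_cpu;
	...
};

static void cpuset_cancel_attach(struct cgroup_taskset *tset)
{
	struct cgroup_subsys_state *css;
	struct task_struct *task;
	struct cpuset *cs;

	cgroup_taskset_first(tset, &css);
	cs = css_cs(css);

	if (!cs->nr_migrate_dl_tasks)
		return;

	/* give back the bandwidth reserved by task_can_attach() */
	cgroup_taskset_for_each(task, css, tset) {
		if (dl_task(task) && task->dl_bw_cpu >= 0) {
			/* hypothetical helper releasing p->dl.dl_bw on that cpu */
			dl_bw_release(task->dl_bw_cpu, task->dl.dl_bw);
			task->dl_bw_cpu = -1;
		}
	}

	cs->nr_migrate_dl_tasks = 0;
}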

Cheers,
Longman


^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
@ 2023-03-22 13:18         ` Dietmar Eggemann
  0 siblings, 0 replies; 32+ messages in thread
From: Dietmar Eggemann @ 2023-03-22 13:18 UTC (permalink / raw)
  To: Juri Lelli, Waiman Long
  Cc: Peter Zijlstra, Ingo Molnar, Qais Yousef, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Steven Rostedt, linux-kernel,
	luca.abeni, claudio, tommaso.cucinotta, bristot, mathieu.poirier,
	cgroups, Vincent Guittot, Wei Wang, Rick Yiu, Quentin Perret,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sudeep Holla

On 15/03/2023 18:14, Juri Lelli wrote:
> On 15/03/23 11:46, Waiman Long wrote:
>>
>> On 3/15/23 08:18, Juri Lelli wrote:

[...]

>>> @@ -2472,6 +2492,11 @@ static int cpuset_can_attach(struct cgroup_taskset *tset)
>>>   		ret = security_task_setscheduler(task);
>>>   		if (ret)
>>>   			goto out_unlock;
>>> +
>>> +		if (dl_task(task)) {
>>> +			cs->nr_deadline_tasks++;
>>> +			cpuset_attach_old_cs->nr_deadline_tasks--;
>>> +		}
>>>   	}
>>
>> Any one of the tasks in the cpuset can cause the test to fail and abort the
>> attachment. I would suggest that you keep a deadline task transfer count in
>> the loop and then update cs and cpuset_attach_old_cs only after all the
>> tasks have been iterated successfully.
> 
> Right, Dietmar I think commented pointing out something along these
> lines. Think though we already have this problem with current
> task_can_attach -> dl_cpu_busy which reserves bandwidth for each task
> in the destination cs. Will need to look into that. Do you know which
> sort of operation would move multiple tasks at once?

Moving the process instead of the individual tasks makes
cpuset_can_attach() have to deal with multiple tasks.

# ps2 | grep DLN
 1614  1615 140      0   - DLN thread0-0
 1614  1616 140      0   - DLN thread0-1
 1614  1617 140      0   - DLN thread0-2

# echo 1614 > /sys/fs/cgroup/cpuset/cs2/cgroup.procs

^ permalink raw reply	[flat|nested] 32+ messages in thread

* Re: [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets
@ 2023-03-22 14:05           ` Dietmar Eggemann
  0 siblings, 0 replies; 32+ messages in thread
From: Dietmar Eggemann @ 2023-03-22 14:05 UTC (permalink / raw)
  To: Waiman Long, Juri Lelli
  Cc: Peter Zijlstra, Ingo Molnar, Qais Yousef, Tejun Heo, Zefan Li,
	Johannes Weiner, Hao Luo, Steven Rostedt, linux-kernel,
	luca.abeni, claudio, tommaso.cucinotta, bristot, mathieu.poirier,
	cgroups, Vincent Guittot, Wei Wang, Rick Yiu, Quentin Perret,
	Heiko Carstens, Vasily Gorbik, Alexander Gordeev, Sudeep Holla

On 15/03/2023 19:01, Waiman Long wrote:
> 
> On 3/15/23 13:14, Juri Lelli wrote:
>> On 15/03/23 11:46, Waiman Long wrote:
>>> On 3/15/23 08:18, Juri Lelli wrote:

[...]

>>>> @@ -2472,6 +2492,11 @@ static int cpuset_can_attach(struct
>>>> cgroup_taskset *tset)
>>>>            ret = security_task_setscheduler(task);
>>>>            if (ret)
>>>>                goto out_unlock;
>>>> +
>>>> +        if (dl_task(task)) {
>>>> +            cs->nr_deadline_tasks++;
>>>> +            cpuset_attach_old_cs->nr_deadline_tasks--;
>>>> +        }
>>>>        }
>>> Any one of the tasks in the cpuset can cause the test to fail and
>>> abort the
>>> attachment. I would suggest that you keep a deadline task transfer
>>> count in
>>> the loop and then update cs and cpuset_attach_old_cs only after all the
>>> tasks have been iterated successfully.
>> Right, Dietmar I think commented pointing out something along these
>> lines. Think though we already have this problem with current
>> task_can_attach -> dl_cpu_busy which reserves bandwidth for each task
>> in the destination cs. Will need to look into that. Do you know which
>> sort of operation would move multiple tasks at once?
> 
> Actually, what I said previously may not be enough. There can be
> multiple controllers attached to a cgroup. If any of their can_attach()
> calls fails, the whole transaction is aborted and cancel_attach() will
> be called. My new suggestion is to add a new deadline task transfer
> count into the cpuset structure and store the information there
> temporarily. If cpuset_attach() is called, it means all the can_attach
> calls succeed. You can then update the dl task count accordingly and
> clear the temporary transfer count.
> 
> I guess you may have to do something similar with dl_cpu_busy().

I gave it a shot:

https://lkml.kernel.org/r/20230322135959.1998790-1-dietmar.eggemann@arm.com

^ permalink raw reply	[flat|nested] 32+ messages in thread

Thread overview: 32+ messages
2023-03-15 12:18 [RFC PATCH 0/3] sched/deadline: cpuset: Rework DEADLINE bandwidth restoration Juri Lelli
2023-03-15 12:18 ` [RFC PATCH 1/3] sched/cpuset: Bring back cpuset_mutex Juri Lelli
2023-03-15 12:18 ` [RFC PATCH 2/3] sched/cpuset: Keep track of SCHED_DEADLINE tasks in cpusets Juri Lelli
2023-03-15 14:49   ` Qais Yousef
2023-03-15 17:18     ` Juri Lelli
2023-03-15 19:25       ` Qais Yousef
2023-03-15 15:46   ` Waiman Long
2023-03-15 17:14     ` Juri Lelli
2023-03-15 18:01       ` Waiman Long
2023-03-15 18:10         ` Waiman Long
2023-03-15 23:27         ` Waiman Long
2023-03-22 14:05         ` Dietmar Eggemann
2023-03-22 13:18       ` Dietmar Eggemann
2023-03-15 12:18 ` [RFC PATCH 3/3] cgroup/cpuset: Iterate only if DEADLINE tasks are present Juri Lelli
2023-03-15 14:55 ` [RFC PATCH 0/3] sched/deadline: cpuset: Rework DEADLINE bandwidth restoration Qais Yousef
2023-03-15 17:10   ` Juri Lelli
