* [PATCH v5 0/3] sched, cgroup/cpuset: Keep user set cpus affinity
@ 2022-08-16 19:27 Waiman Long
  2022-08-16 19:27 ` [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity() Waiman Long
                   ` (2 more replies)
  0 siblings, 3 replies; 13+ messages in thread
From: Waiman Long @ 2022-08-16 19:27 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Linus Torvalds, Waiman Long

v5:
 - Update patch 3 to handle a race with a concurrent sched_setaffinity()
   call by rechecking a previously cleared user_cpus_ptr afterward.

v4:
 - Update patch 1 to make sched_setaffinity() the only function that
   updates user_cpus_ptr, making the logic simpler and easier to
   understand. restrict_cpus_allowed_ptr() and
   relax_compatible_cpus_allowed_ptr() will just use it if present.

v3:
 - Add a new patch 2 that introduces copy_user_cpus_mask() to copy out
   the user mask with lock protection, and use it in patch 3.

v2:
 - Rework the v1 patch by extending the semantics of user_cpus_ptr to
   store the user-set cpu affinity and keep to it as much as possible.

The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
Introduce task_struct::user_cpus_ptr to track requested affinity"),
which uses it narrowly to keep cpu affinity intact on systems with an
asymmetric CPU setup.

This patchset extends user_cpus_ptr to store the cpu affinity that the
user sets via the sched_setaffinity() API. With that information
available, cpuset can keep the cpu affinity as close to what the user
wants as possible within the cpu list constraint of the current cpuset.
Otherwise, a change to the cpuset hierarchy may reset the cpumask of
the tasks in the affected cpusets to the default cpuset value, even if
those tasks had their cpu affinity explicitly set before.

It also means that once sched_setaffinity() is called, user_cpus_ptr
will remain allocated until the task exits.
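
As an illustration of the failure mode, below is a minimal userspace
sketch of the kind of task that gets broken today (illustrative only;
the CPU number and the root cgroup mount path are assumptions of the
example, not part of this series):

#define _GNU_SOURCE
#include <sched.h>
#include <stdio.h>
#include <unistd.h>

int main(void)
{
	cpu_set_t set;

	/* Explicitly pin this task to CPU 2 (assumes CPU 2 exists). */
	CPU_ZERO(&set);
	CPU_SET(2, &set);
	if (sched_setaffinity(0, sizeof(set), &set)) {
		perror("sched_setaffinity");
		return 1;
	}

	/*
	 * Without this series, a later cpuset change such as
	 *   # echo +cpuset > /sys/fs/cgroup/cgroup.subtree_control
	 * may rewrite this task's cpumask to the cpuset's effective_cpus,
	 * silently discarding the affinity requested above.
	 */
	pause();
	return 0;
}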

Waiman Long (3):
  sched: Use user_cpus_ptr for saving user provided cpumask in
    sched_setaffinity()
  sched: Provide copy_user_cpus_mask() to copy out user mask
  cgroup/cpuset: Keep user set cpus affinity

 include/linux/sched.h  |   1 +
 kernel/cgroup/cpuset.c |  42 ++++++++++++++-
 kernel/sched/core.c    | 119 ++++++++++++++++++++++++-----------------
 kernel/sched/sched.h   |   1 -
 4 files changed, 112 insertions(+), 51 deletions(-)

-- 
2.31.1



* [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-16 19:27 [PATCH v5 0/3] sched, cgroup/cpuset: Keep user set cpus affinity Waiman Long
@ 2022-08-16 19:27 ` Waiman Long
  2022-08-17  8:28   ` Peter Zijlstra
                     ` (2 more replies)
  2022-08-16 19:27 ` [PATCH v5 2/3] sched: Provide copy_user_cpus_mask() to copy out user mask Waiman Long
  2022-08-16 19:27 ` [PATCH v5 3/3] cgroup/cpuset: Keep user set cpus affinity Waiman Long
  2 siblings, 3 replies; 13+ messages in thread
From: Waiman Long @ 2022-08-16 19:27 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Linus Torvalds, Waiman Long

The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
Introduce task_struct::user_cpus_ptr to track requested affinity"). It
is currently used only by the arm64 arch due to its possible asymmetric
CPU setup. This patch extends its usage to save the user-provided
cpumask when sched_setaffinity() is called, for all arches. With this
patch applied, user_cpus_ptr, once allocated after a call to
sched_setaffinity(), will only be freed when the task exits.

Since user_cpus_ptr is supposed to hold the "requested affinity", there
is no point in saving the current cpu affinity in
restrict_cpus_allowed_ptr() if sched_setaffinity() has never been
called. Modify the logic so that user_cpus_ptr is set only in
sched_setaffinity(), while restrict_cpus_allowed_ptr() and
relax_compatible_cpus_allowed_ptr() use it, if set, without changing it.

This introduces some changes in behavior for arm64 systems with
asymmetric CPUs in some corner cases. For instance, if
sched_setaffinity() has never been called and there is a cpuset change
before relax_compatible_cpus_allowed_ptr() is called, its subsequent
call will follow what the cpuset allows rather than what the previous
cpu affinity setting allowed.

As a call to sched_setaffinity() will no longer clear user_cpus_ptr
but set it instead, the SCA_USER flag is no longer necessary and can
be removed.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/sched/core.c  | 100 ++++++++++++++++++++++---------------------
 kernel/sched/sched.h |   1 -
 2 files changed, 52 insertions(+), 49 deletions(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ee28253c9ac0..03053eebb22e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2848,7 +2848,6 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
 	bool kthread = p->flags & PF_KTHREAD;
-	struct cpumask *user_mask = NULL;
 	unsigned int dest_cpu;
 	int ret = 0;
 
@@ -2907,14 +2906,7 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 
 	__do_set_cpus_allowed(p, new_mask, flags);
 
-	if (flags & SCA_USER)
-		user_mask = clear_user_cpus_ptr(p);
-
-	ret = affine_move_task(rq, p, rf, dest_cpu, flags);
-
-	kfree(user_mask);
-
-	return ret;
+	return affine_move_task(rq, p, rf, dest_cpu, flags);
 
 out:
 	task_rq_unlock(rq, p, rf);
@@ -2949,8 +2941,10 @@ EXPORT_SYMBOL_GPL(set_cpus_allowed_ptr);
 
 /*
  * Change a given task's CPU affinity to the intersection of its current
- * affinity mask and @subset_mask, writing the resulting mask to @new_mask
- * and pointing @p->user_cpus_ptr to a copy of the old mask.
+ * affinity mask and @subset_mask, writing the resulting mask to @new_mask.
+ * If user_cpus_ptr is defined, use it as the basis for restricting CPU
+ * affinity or use cpu_online_mask instead.
+ *
  * If the resulting mask is empty, leave the affinity unchanged and return
  * -EINVAL.
  */
@@ -2958,16 +2952,10 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 				     struct cpumask *new_mask,
 				     const struct cpumask *subset_mask)
 {
-	struct cpumask *user_mask = NULL;
 	struct rq_flags rf;
 	struct rq *rq;
 	int err;
-
-	if (!p->user_cpus_ptr) {
-		user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
-		if (!user_mask)
-			return -ENOMEM;
-	}
+	bool not_empty;
 
 	rq = task_rq_lock(p, &rf);
 
@@ -2981,25 +2969,21 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 		goto err_unlock;
 	}
 
-	if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) {
+
+	if (p->user_cpus_ptr)
+		not_empty = cpumask_and(new_mask, p->user_cpus_ptr, subset_mask);
+	else
+		not_empty = cpumask_and(new_mask, cpu_online_mask, subset_mask);
+
+	if (!not_empty) {
 		err = -EINVAL;
 		goto err_unlock;
 	}
 
-	/*
-	 * We're about to butcher the task affinity, so keep track of what
-	 * the user asked for in case we're able to restore it later on.
-	 */
-	if (user_mask) {
-		cpumask_copy(user_mask, p->cpus_ptr);
-		p->user_cpus_ptr = user_mask;
-	}
-
 	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
 
 err_unlock:
 	task_rq_unlock(rq, p, &rf);
-	kfree(user_mask);
 	return err;
 }
 
@@ -3049,34 +3033,27 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
 }
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask);
 
 /*
  * Restore the affinity of a task @p which was previously restricted by a
- * call to force_compatible_cpus_allowed_ptr(). This will clear (and free)
- * @p->user_cpus_ptr.
+ * call to force_compatible_cpus_allowed_ptr().
  *
  * It is the caller's responsibility to serialise this with any calls to
  * force_compatible_cpus_allowed_ptr(@p).
  */
 void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 {
-	struct cpumask *user_mask = p->user_cpus_ptr;
-	unsigned long flags;
+	const struct cpumask *user_mask = p->user_cpus_ptr;
+
+	if (!user_mask)
+		user_mask = cpu_online_mask;
 
 	/*
-	 * Try to restore the old affinity mask. If this fails, then
-	 * we free the mask explicitly to avoid it being inherited across
-	 * a subsequent fork().
+	 * Try to restore the old affinity mask with __sched_setaffinity().
+	 * Cpuset masking will be done there too.
 	 */
-	if (!user_mask || !__sched_setaffinity(p, user_mask))
-		return;
-
-	raw_spin_lock_irqsave(&p->pi_lock, flags);
-	user_mask = clear_user_cpus_ptr(p);
-	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
-
-	kfree(user_mask);
+	__sched_setaffinity(p, user_mask, false);
 }
 
 void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
@@ -8079,10 +8056,11 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
 #endif
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask)
 {
 	int retval;
 	cpumask_var_t cpus_allowed, new_mask;
+	struct cpumask *user_mask = NULL;
 
 	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
 		return -ENOMEM;
@@ -8098,8 +8076,33 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 	retval = dl_task_check_affinity(p, new_mask);
 	if (retval)
 		goto out_free_new_mask;
+
+	/*
+	 * Save the user requested mask into user_cpus_ptr if save_mask set.
+	 * pi_lock is used for protecting user_cpus_ptr.
+	 */
+	if (save_mask && !p->user_cpus_ptr) {
+		user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
+
+		if (!user_mask) {
+			retval = -ENOMEM;
+			goto out_free_new_mask;
+		}
+	}
+	if (save_mask) {
+		unsigned long flags;
+
+		raw_spin_lock_irqsave(&p->pi_lock, flags);
+		if (!p->user_cpus_ptr) {
+			p->user_cpus_ptr = user_mask;
+			user_mask = NULL;
+		}
+
+		cpumask_copy(p->user_cpus_ptr, mask);
+		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+	}
 again:
-	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | SCA_USER);
+	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
 	if (retval)
 		goto out_free_new_mask;
 
@@ -8113,6 +8116,7 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 		goto again;
 	}
 
+	kfree(user_mask);
 out_free_new_mask:
 	free_cpumask_var(new_mask);
 out_free_cpus_allowed:
@@ -8156,7 +8160,7 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	if (retval)
 		goto out_put_task;
 
-	retval = __sched_setaffinity(p, in_mask);
+	retval = __sched_setaffinity(p, in_mask, true);
 out_put_task:
 	put_task_struct(p);
 	return retval;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e26688d387ae..15eefcd65faa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2283,7 +2283,6 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 #define SCA_CHECK		0x01
 #define SCA_MIGRATE_DISABLE	0x02
 #define SCA_MIGRATE_ENABLE	0x04
-#define SCA_USER		0x08
 
 #ifdef CONFIG_SMP
 
-- 
2.31.1



* [PATCH v5 2/3] sched: Provide copy_user_cpus_mask() to copy out user mask
  2022-08-16 19:27 [PATCH v5 0/3] sched, cgroup/cpuset: Keep user set cpus affinity Waiman Long
  2022-08-16 19:27 ` [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity() Waiman Long
@ 2022-08-16 19:27 ` Waiman Long
  2022-08-16 19:27 ` [PATCH v5 3/3] cgroup/cpuset: Keep user set cpus affinity Waiman Long
  2 siblings, 0 replies; 13+ messages in thread
From: Waiman Long @ 2022-08-16 19:27 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Linus Torvalds, Waiman Long

Since accessing the content of user_cpus_ptr requires lock protection
to ensure its validity, provide a helper function, copy_user_cpus_mask(),
to facilitate reading it.

Suggested-by: Peter Zijlstra <peterz@infradead.org>
Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   | 19 +++++++++++++++++++
 2 files changed, 20 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e7b2f8a5c711..f2b0340c094e 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1830,6 +1830,7 @@ extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_effec
 extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask);
 extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask);
 extern int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src, int node);
+extern struct cpumask *copy_user_cpus_mask(struct task_struct *p, struct cpumask *user_mask);
 extern void release_user_cpus_ptr(struct task_struct *p);
 extern int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask);
 extern void force_compatible_cpus_allowed_ptr(struct task_struct *p);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 03053eebb22e..a0987784913e 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2618,6 +2618,25 @@ void release_user_cpus_ptr(struct task_struct *p)
 	kfree(clear_user_cpus_ptr(p));
 }
 
+/*
+ * Return the copied mask pointer or NULL if user mask not available.
+ * Acquiring pi_lock for read access protection.
+ */
+struct cpumask *copy_user_cpus_mask(struct task_struct *p,
+				    struct cpumask *user_mask)
+{
+	struct cpumask *mask = NULL;
+	unsigned long flags;
+
+	raw_spin_lock_irqsave(&p->pi_lock, flags);
+	if (p->user_cpus_ptr) {
+		cpumask_copy(user_mask, p->user_cpus_ptr);
+		mask = user_mask;
+	}
+	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
+	return mask;
+}
+
 /*
  * This function is wildly self concurrent; here be dragons.
  *
-- 
2.31.1
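
For reference, a minimal sketch of the intended calling pattern for the
new helper (the caller name below is hypothetical; the real user of
this helper is in patch 3 of this series):

/*
 * Hypothetical caller: restrict @p to @limit while honouring any
 * user-requested affinity recorded in user_cpus_ptr.
 */
static int apply_mask_keeping_user_affinity(struct task_struct *p,
					    const struct cpumask *limit)
{
	cpumask_var_t buf;
	int ret;

	if (!alloc_cpumask_var(&buf, GFP_KERNEL))
		return -ENOMEM;

	/* copy_user_cpus_mask() returns NULL if no user mask was set. */
	if (copy_user_cpus_mask(p, buf) && cpumask_and(buf, buf, limit))
		ret = set_cpus_allowed_ptr(p, buf);	/* honour overlap */
	else
		ret = set_cpus_allowed_ptr(p, limit);	/* fall back */

	free_cpumask_var(buf);
	return ret;
}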



* [PATCH v5 3/3] cgroup/cpuset: Keep user set cpus affinity
  2022-08-16 19:27 [PATCH v5 0/3] sched, cgroup/cpuset: Keep user set cpus affinity Waiman Long
  2022-08-16 19:27 ` [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity() Waiman Long
  2022-08-16 19:27 ` [PATCH v5 2/3] sched: Provide copy_user_cpus_mask() to copy out user mask Waiman Long
@ 2022-08-16 19:27 ` Waiman Long
  2022-08-16 20:15   ` Tejun Heo
  2 siblings, 1 reply; 13+ messages in thread
From: Waiman Long @ 2022-08-16 19:27 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Linus Torvalds, Waiman Long

It was found that any change to the current cpuset hierarchy may reset
the cpumask of the tasks in the affected cpusets to the default cpuset
value, even if those tasks had their cpu affinity explicitly set by the
user before. That is especially easy to trigger in a cgroup v2
environment, where writing "+cpuset" to the root cgroup's
cgroup.subtree_control file will reset the cpu affinity of all the
processes in the system.

That is problematic in a nohz_full environment, where the tasks running
on the nohz_full CPUs usually have their cpu affinity explicitly set
and will behave incorrectly if that affinity changes.

Fix this problem by looking at user_cpus_ptr, which will be set if the
cpu affinity has been explicitly set before, and using it to restrict
the given cpumask unless there is no overlap. In that case, fall back
to the given cpumask.

To handle a possible race with a concurrent sched_setaffinity() call,
user_cpus_ptr is rechecked after a successful set_cpus_allowed_ptr()
call. If its status has changed, the operation is retried with the
newly assigned user_cpus_ptr.

With that change in place, it was verified that tasks that have their
cpu affinity explicitly set are not affected by changes made to the v2
cgroup.subtree_control files.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/cgroup/cpuset.c | 42 ++++++++++++++++++++++++++++++++++++++++--
 1 file changed, 40 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 58aadfda9b8b..a663848d0459 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -704,6 +704,44 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
 	return ret;
 }
 
+/*
+ * Preserve user provided cpumask (if set) as much as possible unless there
+ * is no overlap with the given mask.
+ */
+static int cpuset_set_cpus_allowed_ptr(struct task_struct *p,
+				       const struct cpumask *mask)
+{
+	cpumask_var_t new_mask;
+	int ret;
+
+	if (!READ_ONCE(p->user_cpus_ptr)) {
+		ret = set_cpus_allowed_ptr(p, mask);
+		/*
+		 * If user_cpus_ptr becomes set now, we are racing with
+		 * a concurrent sched_setaffinity(). So use the newly
+		 * set user_cpus_ptr and retry again.
+		 *
+		 * TODO: We cannot detect change in the cpumask pointed to
+		 * by user_cpus_ptr. We will have to add a sequence number
+		 * if such a race needs to be addressed.
+		 */
+		if (ret || !READ_ONCE(p->user_cpus_ptr))
+			return ret;
+	}
+
+	if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	if (copy_user_cpus_mask(p, new_mask) &&
+	    cpumask_and(new_mask, new_mask, mask))
+		ret = set_cpus_allowed_ptr(p, new_mask);
+	else
+		ret = set_cpus_allowed_ptr(p, mask);
+
+	free_cpumask_var(new_mask);
+	return ret;
+}
+
 #ifdef CONFIG_SMP
 /*
  * Helper routine for generate_sched_domains().
@@ -1130,7 +1168,7 @@ static void update_tasks_cpumask(struct cpuset *cs)
 
 	css_task_iter_start(&cs->css, 0, &it);
 	while ((task = css_task_iter_next(&it)))
-		set_cpus_allowed_ptr(task, cs->effective_cpus);
+		cpuset_set_cpus_allowed_ptr(task, cs->effective_cpus);
 	css_task_iter_end(&it);
 }
 
@@ -2303,7 +2341,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 		 * can_attach beforehand should guarantee that this doesn't
 		 * fail.  TODO: have a better way to handle failure here
 		 */
-		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+		WARN_ON_ONCE(cpuset_set_cpus_allowed_ptr(task, cpus_attach));
 
 		cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
 		cpuset_update_task_spread_flag(cs, task);
-- 
2.31.1



* Re: [PATCH v5 3/3] cgroup/cpuset: Keep user set cpus affinity
  2022-08-16 19:27 ` [PATCH v5 3/3] cgroup/cpuset: Keep user set cpus affinity Waiman Long
@ 2022-08-16 20:15   ` Tejun Heo
  2022-08-16 22:11     ` Waiman Long
  0 siblings, 1 reply; 13+ messages in thread
From: Tejun Heo @ 2022-08-16 20:15 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Zefan Li,
	Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On Tue, Aug 16, 2022 at 03:27:34PM -0400, Waiman Long wrote:
> +static int cpuset_set_cpus_allowed_ptr(struct task_struct *p,
> +				       const struct cpumask *mask)
> +{
> +	cpumask_var_t new_mask;
> +	int ret;
> +
> +	if (!READ_ONCE(p->user_cpus_ptr)) {
> +		ret = set_cpus_allowed_ptr(p, mask);
> +		/*
> +		 * If user_cpus_ptr becomes set now, we are racing with
> +		 * a concurrent sched_setaffinity(). So use the newly
> +		 * set user_cpus_ptr and retry again.
> +		 *
> +		 * TODO: We cannot detect change in the cpumask pointed to
> +		 * by user_cpus_ptr. We will have to add a sequence number
> +		 * if such a race needs to be addressed.
> +		 */

This is too ugly and obviously broken. Let's please do it properly.

Thanks.

-- 
tejun


* Re: [PATCH v5 3/3] cgroup/cpuset: Keep user set cpus affinity
  2022-08-16 20:15   ` Tejun Heo
@ 2022-08-16 22:11     ` Waiman Long
  2022-08-16 22:19       ` Tejun Heo
  0 siblings, 1 reply; 13+ messages in thread
From: Waiman Long @ 2022-08-16 22:11 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Zefan Li,
	Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds


On 8/16/22 16:15, Tejun Heo wrote:
> On Tue, Aug 16, 2022 at 03:27:34PM -0400, Waiman Long wrote:
>> +static int cpuset_set_cpus_allowed_ptr(struct task_struct *p,
>> +				       const struct cpumask *mask)
>> +{
>> +	cpumask_var_t new_mask;
>> +	int ret;
>> +
>> +	if (!READ_ONCE(p->user_cpus_ptr)) {
>> +		ret = set_cpus_allowed_ptr(p, mask);
>> +		/*
>> +		 * If user_cpus_ptr becomes set now, we are racing with
>> +		 * a concurrent sched_setaffinity(). So use the newly
>> +		 * set user_cpus_ptr and retry again.
>> +		 *
>> +		 * TODO: We cannot detect change in the cpumask pointed to
>> +		 * by user_cpus_ptr. We will have to add a sequence number
>> +		 * if such a race needs to be addressed.
>> +		 */
> This is too ugly and obviously broken. Let's please do it properly.

Actually, there is a similar construct in __sched_setaffinity():

again:
         retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
         if (retval)
                 goto out_free_new_mask;

         cpuset_cpus_allowed(p, cpus_allowed);
         if (!cpumask_subset(new_mask, cpus_allowed)) {
                 /*
                  * We must have raced with a concurrent cpuset update.
                  * Just reset the cpumask to the cpuset's cpus_allowed.
                  */
                 cpumask_copy(new_mask, cpus_allowed);
                 goto again;
         }

It is hard to synchronize different subsystems atomically without 
running into locking issues. Let me think about what can be done in 
this case.

Is using a sequence number to check for race with retry good enough?

Cheers,
Longman



* Re: [PATCH v5 3/3] cgroup/cpuset: Keep user set cpus affinity
  2022-08-16 22:11     ` Waiman Long
@ 2022-08-16 22:19       ` Tejun Heo
  2022-08-17  0:13         ` Waiman Long
  0 siblings, 1 reply; 13+ messages in thread
From: Tejun Heo @ 2022-08-16 22:19 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Zefan Li,
	Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

Hello,

On Tue, Aug 16, 2022 at 06:11:03PM -0400, Waiman Long wrote:
> It is hard to synchronize different subsystems atomically without running
> into locking issue. Let me think about what can be done in this case.

I have a hard time seeing why this would be particularly difficult. cpuset
just needs to make the latest cpumask available to sched core in an easily
accessible form and whenever that changes, trigger a set_cpus_allowed call.
There's no need to entangle operations across the whole subsystems. All
that's needed to be communicated is the current cpumask.

> Is using a sequence number to check for race with retry good enough?

It seems unnecessarily fragile and complicated to me. If we're gonna change
it, let's change it right.

Thanks.

-- 
tejun


* Re: [PATCH v5 3/3] cgroup/cpuset: Keep user set cpus affinity
  2022-08-16 22:19       ` Tejun Heo
@ 2022-08-17  0:13         ` Waiman Long
  0 siblings, 0 replies; 13+ messages in thread
From: Waiman Long @ 2022-08-17  0:13 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Zefan Li,
	Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds


On 8/16/22 18:19, Tejun Heo wrote:
> Hello,
>
> On Tue, Aug 16, 2022 at 06:11:03PM -0400, Waiman Long wrote:
>> It is hard to synchronize different subsystems atomically without running
>> into locking issue. Let me think about what can be done in this case.
> I have a hard time seeing why this would be particularly difficult. cpuset
> just needs to make the latest cpumask available to sched core in an easily
> accessible form and whenever that changes, trigger a set_cpus_allowed call.
> There's no need to entangle operations across the whole subsystems. All
> that's needed to be communicated is the current cpumask.
>
>> Is using a sequence number to check for race with retry good enough?
> It seems unnecessarily fragile and complicated to me. If we're gonna change
> it, let's change it right.

Thanks for the suggestion. I think I get what you want. I am going to 
migrate the cpuset_set_cpus_allowed_ptr() logic into 
set_cpus_allowed_ptr() itself. IOW, if user_cpus_ptr is defined, it will 
be an additional mask to be applied on top. It does affect all callers 
of set_cpus_allowed_ptr() though. I am going to drop this cpuset 
specific patch.

BTW, I will be on PTO starting tomorrow until next Tuesday. So I will be 
slow in responding to emails.

Cheers,
Longman

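A rough, purely illustrative sketch of that direction (this is not code
from this series; the wrapper name is made up, and task_user_cpus() is
the helper Peter suggests elsewhere in this thread):

/*
 * Illustrative wrapper: apply the user-requested mask, if any, on top
 * of the mask supplied by the caller before handing it to the
 * scheduler.
 */
static int set_cpus_allowed_honour_user(struct task_struct *p,
					const struct cpumask *new_mask)
{
	cpumask_var_t tmp;
	int ret;

	if (!alloc_cpumask_var(&tmp, GFP_KERNEL))
		return -ENOMEM;

	/* Fall back to @new_mask if the intersection would be empty. */
	if (!cpumask_and(tmp, task_user_cpus(p), new_mask))
		cpumask_copy(tmp, new_mask);

	ret = set_cpus_allowed_ptr(p, tmp);
	free_cpumask_var(tmp);
	return ret;
}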


* Re: [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-16 19:27 ` [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity() Waiman Long
@ 2022-08-17  8:28   ` Peter Zijlstra
  2022-08-18 14:37     ` Waiman Long
  2022-08-17  8:41   ` Peter Zijlstra
  2022-08-17  8:46   ` Peter Zijlstra
  2 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2022-08-17  8:28 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On Tue, Aug 16, 2022 at 03:27:32PM -0400, Waiman Long wrote:

> This will be some changes in behavior for arm64 systems with asymmetric
> CPUs in some corner cases. For instance, if sched_setaffinity()
> has never been called and there is a cpuset change before
> relax_compatible_cpus_allowed_ptr() is called, its subsequent call will
> follow what the cpuset allows but not what the previous cpu affinity
> setting allows.

That's arguably a correctness fix, no? That is, the save/restore should
not have been allowed to revert to an earlier cpuset state.


* Re: [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-16 19:27 ` [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity() Waiman Long
  2022-08-17  8:28   ` Peter Zijlstra
@ 2022-08-17  8:41   ` Peter Zijlstra
  2022-08-18 15:31     ` Waiman Long
  2022-08-17  8:46   ` Peter Zijlstra
  2 siblings, 1 reply; 13+ messages in thread
From: Peter Zijlstra @ 2022-08-17  8:41 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On Tue, Aug 16, 2022 at 03:27:32PM -0400, Waiman Long wrote:
> @@ -2981,25 +2969,21 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
>  		goto err_unlock;
>  	}
>  
> -	if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) {
> +
> +	if (p->user_cpus_ptr)
> +		not_empty = cpumask_and(new_mask, p->user_cpus_ptr, subset_mask);
> +	else
> +		not_empty = cpumask_and(new_mask, cpu_online_mask, subset_mask);
> +
> +	if (!not_empty) {
>  		err = -EINVAL;
>  		goto err_unlock;
>  	}
>  
> -	/*
> -	 * We're about to butcher the task affinity, so keep track of what
> -	 * the user asked for in case we're able to restore it later on.
> -	 */
> -	if (user_mask) {
> -		cpumask_copy(user_mask, p->cpus_ptr);
> -		p->user_cpus_ptr = user_mask;
> -	}
> -
>  	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
>  
>  err_unlock:
>  	task_rq_unlock(rq, p, &rf);
> -	kfree(user_mask);
>  	return err;
>  }
>  
> @@ -3049,34 +3033,27 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
>  }
>  
>  static int
> -__sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
> +__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask);
>  
>  /*
>   * Restore the affinity of a task @p which was previously restricted by a
> - * call to force_compatible_cpus_allowed_ptr(). This will clear (and free)
> - * @p->user_cpus_ptr.
> + * call to force_compatible_cpus_allowed_ptr().
>   *
>   * It is the caller's responsibility to serialise this with any calls to
>   * force_compatible_cpus_allowed_ptr(@p).
>   */
>  void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
>  {
> -	struct cpumask *user_mask = p->user_cpus_ptr;
> -	unsigned long flags;
> +	const struct cpumask *user_mask = p->user_cpus_ptr;
> +
> +	if (!user_mask)
> +		user_mask = cpu_online_mask;
>  
>  	/*
> -	 * Try to restore the old affinity mask. If this fails, then
> -	 * we free the mask explicitly to avoid it being inherited across
> -	 * a subsequent fork().
> +	 * Try to restore the old affinity mask with __sched_setaffinity().
> +	 * Cpuset masking will be done there too.
>  	 */
> -	if (!user_mask || !__sched_setaffinity(p, user_mask))
> -		return;
> -
> -	raw_spin_lock_irqsave(&p->pi_lock, flags);
> -	user_mask = clear_user_cpus_ptr(p);
> -	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
> -
> -	kfree(user_mask);
> +	__sched_setaffinity(p, user_mask, false);
>  }
>  
>  void set_task_cpu(struct task_struct *p, unsigned int new_cpu)


Would it not be simpler to write it something like so?

---
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 03053eebb22e..cdae4d50a588 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2955,7 +2955,6 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 	struct rq_flags rf;
 	struct rq *rq;
 	int err;
-	bool not_empty;
 
 	rq = task_rq_lock(p, &rf);
 
@@ -2969,13 +2968,7 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 		goto err_unlock;
 	}
 
-
-	if (p->user_cpus_ptr)
-		not_empty = cpumask_and(new_mask, p->user_cpus_ptr, subset_mask);
-	else
-		not_empty = cpumask_and(new_mask, cpu_online_mask, subset_mask);
-
-	if (!not_empty) {
+	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
 		err = -EINVAL;
 		goto err_unlock;
 	}
@@ -3044,16 +3037,11 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save
  */
 void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 {
-	const struct cpumask *user_mask = p->user_cpus_ptr;
-
-	if (!user_mask)
-		user_mask = cpu_online_mask;
-
 	/*
 	 * Try to restore the old affinity mask with __sched_setaffinity().
 	 * Cpuset masking will be done there too.
 	 */
-	__sched_setaffinity(p, user_mask, false);
+	__sched_setaffinity(p, task_user_cpus(p), false);
 }
 
 void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 15eefcd65faa..426e9b64b587 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -1881,6 +1881,13 @@ static inline void dirty_sched_domain_sysctl(int cpu)
 #endif
 
 extern int sched_update_scaling(void);
+
+static inline const struct cpumask *task_user_cpus(struct task_struct *p)
+{
+	if (!p->user_cpus_ptr)
+		return cpu_possible_mask; /* &init_task.cpus_mask */
+	return p->user_cpus_ptr;
+}
 #endif /* CONFIG_SMP */
 
 #include "stats.h"


* Re: [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-16 19:27 ` [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity() Waiman Long
  2022-08-17  8:28   ` Peter Zijlstra
  2022-08-17  8:41   ` Peter Zijlstra
@ 2022-08-17  8:46   ` Peter Zijlstra
  2 siblings, 0 replies; 13+ messages in thread
From: Peter Zijlstra @ 2022-08-17  8:46 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On Tue, Aug 16, 2022 at 03:27:32PM -0400, Waiman Long wrote:
> @@ -8079,10 +8056,11 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
>  #endif
>  
>  static int
> -__sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
> +__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask)
>  {
>  	int retval;
>  	cpumask_var_t cpus_allowed, new_mask;
> +	struct cpumask *user_mask = NULL;
>  
>  	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
>  		return -ENOMEM;

Please move that retval down so the variable declarations are properly
ordered again.

> @@ -8098,8 +8076,33 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
>  	retval = dl_task_check_affinity(p, new_mask);
>  	if (retval)
>  		goto out_free_new_mask;
> +
> +	/*
> +	 * Save the user requested mask into user_cpus_ptr if save_mask set.
> +	 * pi_lock is used for protecting user_cpus_ptr.
> +	 */
> +	if (save_mask && !p->user_cpus_ptr) {
> +		user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
> +
> +		if (!user_mask) {
> +			retval = -ENOMEM;
> +			goto out_free_new_mask;
> +		}
> +	}
> +	if (save_mask) {
> +		unsigned long flags;
> +
> +		raw_spin_lock_irqsave(&p->pi_lock, flags);
> +		if (!p->user_cpus_ptr) {
> +			p->user_cpus_ptr = user_mask;
> +			user_mask = NULL;
> +		}
> +
> +		cpumask_copy(p->user_cpus_ptr, mask);
> +		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
> +	}

How about:

	if (save_mask) {
		if (!p->user_cpus_ptr) {
			...
		}
		...
	}

?


* Re: [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-17  8:28   ` Peter Zijlstra
@ 2022-08-18 14:37     ` Waiman Long
  0 siblings, 0 replies; 13+ messages in thread
From: Waiman Long @ 2022-08-18 14:37 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On 8/17/22 04:28, Peter Zijlstra wrote:
> On Tue, Aug 16, 2022 at 03:27:32PM -0400, Waiman Long wrote:
>
>> This will be some changes in behavior for arm64 systems with asymmetric
>> CPUs in some corner cases. For instance, if sched_setaffinity()
>> has never been called and there is a cpuset change before
>> relax_compatible_cpus_allowed_ptr() is called, its subsequent call will
>> follow what the cpuset allows but not what the previous cpu affinity
>> setting allows.
> That's arguably a correctness fix, no? That is, the save/restore should
> not have been allowed to revert to an earlier cpuset state.

Yes, it is a correctness fix in a sense. I just want to highlight that 
there will be some slight changes in behavior in some corner cases for 
the arm64 arch.

Cheers,
Longman



* Re: [PATCH v5 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-17  8:41   ` Peter Zijlstra
@ 2022-08-18 15:31     ` Waiman Long
  0 siblings, 0 replies; 13+ messages in thread
From: Waiman Long @ 2022-08-18 15:31 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On 8/17/22 04:41, Peter Zijlstra wrote:
> On Tue, Aug 16, 2022 at 03:27:32PM -0400, Waiman Long wrote:
>> @@ -2981,25 +2969,21 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
>>   		goto err_unlock;
>>   	}
>>   
>> -	if (!cpumask_and(new_mask, &p->cpus_mask, subset_mask)) {
>> +
>> +	if (p->user_cpus_ptr)
>> +		not_empty = cpumask_and(new_mask, p->user_cpus_ptr, subset_mask);
>> +	else
>> +		not_empty = cpumask_and(new_mask, cpu_online_mask, subset_mask);
>> +
>> +	if (!not_empty) {
>>   		err = -EINVAL;
>>   		goto err_unlock;
>>   	}
>>   
>> -	/*
>> -	 * We're about to butcher the task affinity, so keep track of what
>> -	 * the user asked for in case we're able to restore it later on.
>> -	 */
>> -	if (user_mask) {
>> -		cpumask_copy(user_mask, p->cpus_ptr);
>> -		p->user_cpus_ptr = user_mask;
>> -	}
>> -
>>   	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
>>   
>>   err_unlock:
>>   	task_rq_unlock(rq, p, &rf);
>> -	kfree(user_mask);
>>   	return err;
>>   }
>>   
>> @@ -3049,34 +3033,27 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
>>   }
>>   
>>   static int
>> -__sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
>> +__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask);
>>   
>>   /*
>>    * Restore the affinity of a task @p which was previously restricted by a
>> - * call to force_compatible_cpus_allowed_ptr(). This will clear (and free)
>> - * @p->user_cpus_ptr.
>> + * call to force_compatible_cpus_allowed_ptr().
>>    *
>>    * It is the caller's responsibility to serialise this with any calls to
>>    * force_compatible_cpus_allowed_ptr(@p).
>>    */
>>   void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
>>   {
>> -	struct cpumask *user_mask = p->user_cpus_ptr;
>> -	unsigned long flags;
>> +	const struct cpumask *user_mask = p->user_cpus_ptr;
>> +
>> +	if (!user_mask)
>> +		user_mask = cpu_online_mask;
>>   
>>   	/*
>> -	 * Try to restore the old affinity mask. If this fails, then
>> -	 * we free the mask explicitly to avoid it being inherited across
>> -	 * a subsequent fork().
>> +	 * Try to restore the old affinity mask with __sched_setaffinity().
>> +	 * Cpuset masking will be done there too.
>>   	 */
>> -	if (!user_mask || !__sched_setaffinity(p, user_mask))
>> -		return;
>> -
>> -	raw_spin_lock_irqsave(&p->pi_lock, flags);
>> -	user_mask = clear_user_cpus_ptr(p);
>> -	raw_spin_unlock_irqrestore(&p->pi_lock, flags);
>> -
>> -	kfree(user_mask);
>> +	__sched_setaffinity(p, user_mask, false);
>>   }
>>   
>>   void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
>
> Would it not be simpler to write it something like so?
>
> ---
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index 03053eebb22e..cdae4d50a588 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2955,7 +2955,6 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
>   	struct rq_flags rf;
>   	struct rq *rq;
>   	int err;
> -	bool not_empty;
>   
>   	rq = task_rq_lock(p, &rf);
>   
> @@ -2969,13 +2968,7 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
>   		goto err_unlock;
>   	}
>   
> -
> -	if (p->user_cpus_ptr)
> -		not_empty = cpumask_and(new_mask, p->user_cpus_ptr, subset_mask);
> -	else
> -		not_empty = cpumask_and(new_mask, cpu_online_mask, subset_mask);
> -
> -	if (!not_empty) {
> +	if (!cpumask_and(new_mask, task_user_cpus(p), subset_mask)) {
>   		err = -EINVAL;
>   		goto err_unlock;
>   	}
> @@ -3044,16 +3037,11 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save
>    */
>   void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
>   {
> -	const struct cpumask *user_mask = p->user_cpus_ptr;
> -
> -	if (!user_mask)
> -		user_mask = cpu_online_mask;
> -
>   	/*
>   	 * Try to restore the old affinity mask with __sched_setaffinity().
>   	 * Cpuset masking will be done there too.
>   	 */
> -	__sched_setaffinity(p, user_mask, false);
> +	__sched_setaffinity(p, task_user_cpus(p), false);
>   }
>   
>   void set_task_cpu(struct task_struct *p, unsigned int new_cpu)
> diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
> index 15eefcd65faa..426e9b64b587 100644
> --- a/kernel/sched/sched.h
> +++ b/kernel/sched/sched.h
> @@ -1881,6 +1881,13 @@ static inline void dirty_sched_domain_sysctl(int cpu)
>   #endif
>   
>   extern int sched_update_scaling(void);
> +
> +static inline const struct cpumask *task_user_cpus(struct task_struct *p)
> +{
> +	if (!p->user_cpus_ptr)
> +		return cpus_possible_mask; /* &init_task.cpus_mask */
> +	return p->user_cpus_ptr;
> +}
>   #endif /* CONFIG_SMP */
>   
>   #include "stats.h"
>
Thanks for the good suggestions, will make the changes.

Cheers,
Longman

