* [PATCH v3 0/3] sched, cgroup/cpuset: Keep user set cpus affinity
@ 2022-08-12 20:39 Waiman Long
  2022-08-12 20:39 ` [PATCH v3 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity() Waiman Long
                   ` (2 more replies)
  0 siblings, 3 replies; 10+ messages in thread
From: Waiman Long @ 2022-08-12 20:39 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Linus Torvalds, Waiman Long

v3:
 - Add a new patch 2 that introduces copy_user_cpus_mask() to copy out
   the user mask with lock protection; it is used in patch 3.
v2:
 - Rework the v1 patch by extending the semantics of user_cpus_ptr to
   store the user-set cpus affinity and keep to it as much as possible.

The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
Introduce task_struct::user_cpus_ptr to track requested affinity"),
which uses it narrowly to keep cpus affinity intact on asymmetric
cpu setups.

This patchset extends user_cpus_ptr to store the cpus affinity set by
the user via the sched_setaffinity() API. With that information
available, cpuset can keep the cpus affinity as close to what the user
wants as possible within the cpu list constraint of the current cpuset.
Otherwise a change to the cpuset hierarchy may reset the cpumask of the
tasks in the affected cpusets to the default cpuset value even if those
tasks had their cpus affinity explicitly set beforehand.

It also means that once sched_setaffinity() is called, user_cpus_ptr
will remain allocated until the task exits.
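
As an illustration of the intended behavior, consider a minimal
(hypothetical) test program that sets an explicit affinity; with this
series applied, a later cpuset change should restrict this mask rather
than discard it:

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>
	#include <unistd.h>

	int main(void)
	{
		cpu_set_t set;

		CPU_ZERO(&set);
		CPU_SET(2, &set);	/* pin the task to CPUs 2-3 */
		CPU_SET(3, &set);
		if (sched_setaffinity(0, sizeof(set), &set))
			perror("sched_setaffinity");
		pause();	/* stay alive so cpuset changes can be observed */
		return 0;
	}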

Waiman Long (3):
  sched: Use user_cpus_ptr for saving user provided cpumask in
    sched_setaffinity()
  sched: Provide copy_user_cpus_mask() to copy out user mask
  cgroup/cpuset: Keep user set cpus affinity

 include/linux/sched.h  |  2 +
 kernel/cgroup/cpuset.c | 28 ++++++++++++-
 kernel/sched/core.c    | 89 ++++++++++++++++++++++++++++++++----------
 kernel/sched/sched.h   |  1 -
 4 files changed, 97 insertions(+), 23 deletions(-)

-- 
2.31.1


* [PATCH v3 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-12 20:39 [PATCH v3 0/3] sched, cgroup/cpuset: Keep user set cpus affinity Waiman Long
@ 2022-08-12 20:39 ` Waiman Long
  2022-08-15  8:57   ` Peter Zijlstra
  2022-08-12 20:39 ` [PATCH v3 2/3] sched: Provide copy_user_cpus_mask() to copy out user mask Waiman Long
  2022-08-12 20:39 ` [PATCH v3 3/3] cgroup/cpuset: Keep user set cpus affinity Waiman Long
  2 siblings, 1 reply; 10+ messages in thread
From: Waiman Long @ 2022-08-12 20:39 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Linus Torvalds, Waiman Long

The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
Introduce task_struct::user_cpus_ptr to track requested affinity"). It
is currently used only by the arm64 arch because of its possible
asymmetric cpu setup. This patch extends its usage to save the user
provided cpumask when sched_setaffinity() is called, for all arches.

To preserve the existing arm64 use case, a new cpus_affinity_set flag
is added to differentiate whether user_cpus_ptr was set up by
sched_setaffinity() or by force_compatible_cpus_allowed_ptr(). A
user_cpus_ptr set by sched_setaffinity() takes priority and won't be
overwritten by force_compatible_cpus_allowed_ptr() or
relax_compatible_cpus_allowed_ptr().

As a call to sched_setaffinity() will no longer clear user_cpus_ptr
but set it instead, the SCA_USER flag is no longer necessary and can
be removed.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   | 71 +++++++++++++++++++++++++++++++------------
 kernel/sched/sched.h  |  1 -
 3 files changed, 52 insertions(+), 21 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index e7b2f8a5c711..cf7206a9b29a 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -816,6 +816,7 @@ struct task_struct {
 
 	unsigned int			policy;
 	int				nr_cpus_allowed;
+	int				cpus_affinity_set;
 	const cpumask_t			*cpus_ptr;
 	cpumask_t			*user_cpus_ptr;
 	cpumask_t			cpus_mask;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index ee28253c9ac0..7e2576068331 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2601,6 +2601,7 @@ int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
 		return -ENOMEM;
 
 	cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
+	dst->cpus_affinity_set = src->cpus_affinity_set;
 	return 0;
 }
 
@@ -2848,7 +2849,6 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
 	bool kthread = p->flags & PF_KTHREAD;
-	struct cpumask *user_mask = NULL;
 	unsigned int dest_cpu;
 	int ret = 0;
 
@@ -2907,14 +2907,7 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 
 	__do_set_cpus_allowed(p, new_mask, flags);
 
-	if (flags & SCA_USER)
-		user_mask = clear_user_cpus_ptr(p);
-
-	ret = affine_move_task(rq, p, rf, dest_cpu, flags);
-
-	kfree(user_mask);
-
-	return ret;
+	return affine_move_task(rq, p, rf, dest_cpu, flags);
 
 out:
 	task_rq_unlock(rq, p, rf);
@@ -2988,19 +2981,24 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 
 	/*
 	 * We're about to butcher the task affinity, so keep track of what
-	 * the user asked for in case we're able to restore it later on.
+	 * the user asked for in case we're able to restore it later on
+	 * unless it has been set before by sched_setaffinity().
 	 */
-	if (user_mask) {
+	if (user_mask && !p->cpus_affinity_set) {
 		cpumask_copy(user_mask, p->cpus_ptr);
 		p->user_cpus_ptr = user_mask;
+		user_mask = NULL;
 	}
 
-	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
+	err = __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
 
-err_unlock:
-	task_rq_unlock(rq, p, &rf);
+free_user_mask:
 	kfree(user_mask);
 	return err;
+
+err_unlock:
+	task_rq_unlock(rq, p, &rf);
+	goto free_user_mask;
 }
 
 /*
@@ -3049,7 +3047,7 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
 }
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask);
 
 /*
  * Restore the affinity of a task @p which was previously restricted by a
@@ -3067,9 +3065,10 @@ void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 	/*
 	 * Try to restore the old affinity mask. If this fails, then
 	 * we free the mask explicitly to avoid it being inherited across
-	 * a subsequent fork().
+	 * a subsequent fork() unless it is set by sched_setaffinity().
 	 */
-	if (!user_mask || !__sched_setaffinity(p, user_mask))
+	if (!user_mask || !__sched_setaffinity(p, user_mask, false) ||
+	    p->cpus_affinity_set)
 		return;
 
 	raw_spin_lock_irqsave(&p->pi_lock, flags);
@@ -8079,10 +8078,11 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
 #endif
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask)
 {
 	int retval;
 	cpumask_var_t cpus_allowed, new_mask;
+	struct cpumask *user_mask = NULL;
 
 	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
 		return -ENOMEM;
@@ -8098,8 +8098,38 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 	retval = dl_task_check_affinity(p, new_mask);
 	if (retval)
 		goto out_free_new_mask;
+
+	/*
+	 * Save the user requested mask into user_cpus_ptr
+	 */
+	if (save_mask && !p->user_cpus_ptr) {
+alloc_again:
+		user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
+
+		if (!user_mask) {
+			retval = -ENOMEM;
+			goto out_free_new_mask;
+		}
+	}
+	if (save_mask) {
+		struct rq_flags rf;
+		struct rq *rq = task_rq_lock(p, &rf);
+
+		if (unlikely(!p->user_cpus_ptr && !user_mask)) {
+			task_rq_unlock(rq, p, &rf);
+			goto alloc_again;
+		}
+		if (!p->user_cpus_ptr) {
+			p->user_cpus_ptr = user_mask;
+			user_mask = NULL;
+		}
+
+		cpumask_copy(p->user_cpus_ptr, mask);
+		p->cpus_affinity_set = 1;
+		task_rq_unlock(rq, p, &rf);
+	}
 again:
-	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | SCA_USER);
+	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
 	if (retval)
 		goto out_free_new_mask;
 
@@ -8113,6 +8143,7 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 		goto again;
 	}
 
+	kfree(user_mask);
 out_free_new_mask:
 	free_cpumask_var(new_mask);
 out_free_cpus_allowed:
@@ -8156,7 +8187,7 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	if (retval)
 		goto out_put_task;
 
-	retval = __sched_setaffinity(p, in_mask);
+	retval = __sched_setaffinity(p, in_mask, true);
 out_put_task:
 	put_task_struct(p);
 	return retval;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index e26688d387ae..15eefcd65faa 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2283,7 +2283,6 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 #define SCA_CHECK		0x01
 #define SCA_MIGRATE_DISABLE	0x02
 #define SCA_MIGRATE_ENABLE	0x04
-#define SCA_USER		0x08
 
 #ifdef CONFIG_SMP
 
-- 
2.31.1


* [PATCH v3 2/3] sched: Provide copy_user_cpus_mask() to copy out user mask
  2022-08-12 20:39 [PATCH v3 0/3] sched, cgroup/cpuset: Keep user set cpus affinity Waiman Long
  2022-08-12 20:39 ` [PATCH v3 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity() Waiman Long
@ 2022-08-12 20:39 ` Waiman Long
  2022-08-15  8:58   ` Peter Zijlstra
  2022-08-12 20:39 ` [PATCH v3 3/3] cgroup/cpuset: Keep user set cpus affinity Waiman Long
  2 siblings, 1 reply; 10+ messages in thread
From: Waiman Long @ 2022-08-12 20:39 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Linus Torvalds, Waiman Long

Since accessing the content of user_cpus_ptr requires lock protection
to ensure its validity, provide a helper function, copy_user_cpus_mask(),
to facilitate reading it.
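
For example, a caller can take a stable snapshot of the user mask like
this (illustrative sketch only; the real user of this helper is in
patch 3):

	cpumask_var_t snapshot;

	if (alloc_cpumask_var(&snapshot, GFP_KERNEL) &&
	    copy_user_cpus_mask(p, snapshot)) {
		/* snapshot now holds a stable copy of the user mask */
	}
	free_cpumask_var(snapshot);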

Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   | 18 ++++++++++++++++++
 2 files changed, 19 insertions(+)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index cf7206a9b29a..e06bc1cbccca 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -1831,6 +1831,7 @@ extern int task_can_attach(struct task_struct *p, const struct cpumask *cs_effec
 extern void do_set_cpus_allowed(struct task_struct *p, const struct cpumask *new_mask);
 extern int set_cpus_allowed_ptr(struct task_struct *p, const struct cpumask *new_mask);
 extern int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src, int node);
+extern struct cpumask *copy_user_cpus_mask(struct task_struct *p, struct cpumask *user_mask);
 extern void release_user_cpus_ptr(struct task_struct *p);
 extern int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask);
 extern void force_compatible_cpus_allowed_ptr(struct task_struct *p);
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index 7e2576068331..4d3b10e91e1a 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2619,6 +2619,24 @@ void release_user_cpus_ptr(struct task_struct *p)
 	kfree(clear_user_cpus_ptr(p));
 }
 
+/*
+ * Return the copied mask pointer or NULL if user mask not available.
+ */
+struct cpumask *copy_user_cpus_mask(struct task_struct *p,
+				    struct cpumask *user_mask)
+{
+	struct rq_flags rf;
+	struct rq *rq = task_rq_lock(p, &rf);
+	struct cpumask *mask = NULL;
+
+	if (p->user_cpus_ptr) {
+		cpumask_copy(user_mask, p->user_cpus_ptr);
+		mask = user_mask;
+	}
+	task_rq_unlock(rq, p, &rf);
+	return mask;
+}
+
 /*
  * This function is wildly self concurrent; here be dragons.
  *
-- 
2.31.1


* [PATCH v3 3/3] cgroup/cpuset: Keep user set cpus affinity
  2022-08-12 20:39 [PATCH v3 0/3] sched, cgroup/cpuset: Keep user set cpus affinity Waiman Long
  2022-08-12 20:39 ` [PATCH v3 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity() Waiman Long
  2022-08-12 20:39 ` [PATCH v3 2/3] sched: Provide copy_user_cpus_mask() to copy out user mask Waiman Long
@ 2022-08-12 20:39 ` Waiman Long
  2 siblings, 0 replies; 10+ messages in thread
From: Waiman Long @ 2022-08-12 20:39 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Linus Torvalds, Waiman Long

It was found that any change to the current cpuset hierarchy may reset
the cpumask of the tasks in the affected cpusets to the default cpuset
value even if those tasks had their cpus affinity explicitly set by the
user beforehand. That is especially easy to trigger in a cgroup v2
environment, where writing "+cpuset" to the root cgroup's
cgroup.subtree_control file will reset the cpus affinity of all the
processes in the system.

That is problematic in a nohz_full environment, where the tasks running
on the nohz_full CPUs usually have their cpus affinity explicitly set
and will behave incorrectly if that affinity changes.

Fix this problem by looking at user_cpus_ptr, which will be set if the
cpus affinity has been explicitly set before, and using it to restrict
the given cpumask unless there is no overlap. In that case, it will
fall back to the given one.

With that change in place, it was verified that tasks that have their
cpus affinity explicitly set will not be affected by changes made to
the v2 cgroup.subtree_control files.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/cgroup/cpuset.c | 28 ++++++++++++++++++++++++++--
 1 file changed, 26 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 58aadfda9b8b..cabfac540fd8 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -704,6 +704,30 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
 	return ret;
 }
 
+/*
+ * Preserve user provided cpumask (if set) as much as possible unless there
+ * is no overlap with the given mask.
+ */
+static int cpuset_set_cpus_allowed_ptr(struct task_struct *p,
+				       const struct cpumask *mask)
+{
+	if (p->user_cpus_ptr) {
+		cpumask_var_t new_mask;
+
+		if (alloc_cpumask_var(&new_mask, GFP_KERNEL) &&
+		    copy_user_cpus_mask(p, new_mask) &&
+		    cpumask_and(new_mask, new_mask, mask)) {
+			int ret = set_cpus_allowed_ptr(p, new_mask);
+
+			free_cpumask_var(new_mask);
+			return ret;
+		}
+		free_cpumask_var(new_mask);
+	}
+
+	return set_cpus_allowed_ptr(p, mask);
+}
+
 #ifdef CONFIG_SMP
 /*
  * Helper routine for generate_sched_domains().
@@ -1130,7 +1154,7 @@ static void update_tasks_cpumask(struct cpuset *cs)
 
 	css_task_iter_start(&cs->css, 0, &it);
 	while ((task = css_task_iter_next(&it)))
-		set_cpus_allowed_ptr(task, cs->effective_cpus);
+		cpuset_set_cpus_allowed_ptr(task, cs->effective_cpus);
 	css_task_iter_end(&it);
 }
 
@@ -2303,7 +2327,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 		 * can_attach beforehand should guarantee that this doesn't
 		 * fail.  TODO: have a better way to handle failure here
 		 */
-		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+		WARN_ON_ONCE(cpuset_set_cpus_allowed_ptr(task, cpus_attach));
 
 		cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
 		cpuset_update_task_spread_flag(cs, task);
-- 
2.31.1


* Re: [PATCH v3 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-12 20:39 ` [PATCH v3 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity() Waiman Long
@ 2022-08-15  8:57   ` Peter Zijlstra
  2022-08-15 13:52     ` Waiman Long
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2022-08-15  8:57 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On Fri, Aug 12, 2022 at 04:39:27PM -0400, Waiman Long wrote:
> The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
> Introduce task_struct::user_cpus_ptr to track requested affinity"). It
> is currently used only by the arm64 arch because of its possible
> asymmetric cpu setup. This patch extends its usage to save the user
> provided cpumask when sched_setaffinity() is called, for all arches.
> 
> To preserve the existing arm64 use case, a new cpus_affinity_set flag
> is added to differentiate whether user_cpus_ptr was set up by
> sched_setaffinity() or by force_compatible_cpus_allowed_ptr(). A
> user_cpus_ptr set by sched_setaffinity() takes priority and won't be
> overwritten by force_compatible_cpus_allowed_ptr() or
> relax_compatible_cpus_allowed_ptr().

What why ?! The only possible case where
restrict_cpus_allowed_ptr() will now need that weird new state is when
the affinity has never been set before, in that case cpus_ptr should be
possible_mask.

Please just make a single consistent rule and don't make weird corner
cases like this.

* Re: [PATCH v3 2/3] sched: Provide copy_user_cpus_mask() to copy out user mask
  2022-08-12 20:39 ` [PATCH v3 2/3] sched: Provide copy_user_cpus_mask() to copy out user mask Waiman Long
@ 2022-08-15  8:58   ` Peter Zijlstra
  2022-08-15 13:37     ` Waiman Long
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2022-08-15  8:58 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On Fri, Aug 12, 2022 at 04:39:28PM -0400, Waiman Long wrote:
> Since accessing the content of user_cpus_ptr requires lock protection
> to ensure its validity, provide a helper function, copy_user_cpus_mask(),
> to facilitate reading it.

Sure, but this is atrocious.

> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2619,6 +2619,24 @@ void release_user_cpus_ptr(struct task_struct *p)
>  	kfree(clear_user_cpus_ptr(p));
>  }
>  
> +/*
> + * Return the copied mask pointer or NULL if user mask not available.
> + */
> +struct cpumask *copy_user_cpus_mask(struct task_struct *p,
> +				    struct cpumask *user_mask)
> +{
> +	struct rq_flags rf;
> +	struct rq *rq = task_rq_lock(p, &rf);
> +	struct cpumask *mask = NULL;
> +
> +	if (p->user_cpus_ptr) {
> +		cpumask_copy(user_mask, p->user_cpus_ptr);
> +		mask = user_mask;
> +	}
> +	task_rq_unlock(rq, p, &rf);
> +	return mask;
> +}

For reading the mask you only need one of those locks, and I would
suggest p->pi_lock is much less contended than rq->lock.

* Re: [PATCH v3 2/3] sched: Provide copy_user_cpus_mask() to copy out user mask
  2022-08-15  8:58   ` Peter Zijlstra
@ 2022-08-15 13:37     ` Waiman Long
  0 siblings, 0 replies; 10+ messages in thread
From: Waiman Long @ 2022-08-15 13:37 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On 8/15/22 04:58, Peter Zijlstra wrote:
> On Fri, Aug 12, 2022 at 04:39:28PM -0400, Waiman Long wrote:
>> Since accessing the content of user_cpus_ptr requires lock protection
>> to ensure its validity, provide a helper function, copy_user_cpus_mask(),
>> to facilitate reading it.
> Sure, but this is atrocious.
>
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2619,6 +2619,24 @@ void release_user_cpus_ptr(struct task_struct *p)
>>   	kfree(clear_user_cpus_ptr(p));
>>   }
>>   
>> +/*
>> + * Return the copied mask pointer or NULL if user mask not available.
>> + */
>> +struct cpumask *copy_user_cpus_mask(struct task_struct *p,
>> +				    struct cpumask *user_mask)
>> +{
>> +	struct rq_flags rf;
>> +	struct rq *rq = task_rq_lock(p, &rf);
>> +	struct cpumask *mask = NULL;
>> +
>> +	if (p->user_cpus_ptr) {
>> +		cpumask_copy(user_mask, p->user_cpus_ptr);
>> +		mask = user_mask;
>> +	}
>> +	task_rq_unlock(rq, p, &rf);
>> +	return mask;
>> +}
> For reading the mask you only need one of those locks, and I would
> suggest p->pi_lock is much less contended than rq->lock.
>
Right. pi_lock should be enough for read access. Will make the change.
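
Something like this, perhaps (untested sketch of the planned change):

	struct cpumask *copy_user_cpus_mask(struct task_struct *p,
					    struct cpumask *user_mask)
	{
		struct cpumask *mask = NULL;
		unsigned long flags;

		/* p->pi_lock is enough to stabilize user_cpus_ptr for reading */
		raw_spin_lock_irqsave(&p->pi_lock, flags);
		if (p->user_cpus_ptr) {
			cpumask_copy(user_mask, p->user_cpus_ptr);
			mask = user_mask;
		}
		raw_spin_unlock_irqrestore(&p->pi_lock, flags);
		return mask;
	}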

Thanks,
Longman


* Re: [PATCH v3 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-15  8:57   ` Peter Zijlstra
@ 2022-08-15 13:52     ` Waiman Long
  2022-08-15 14:25       ` Peter Zijlstra
  0 siblings, 1 reply; 10+ messages in thread
From: Waiman Long @ 2022-08-15 13:52 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds


On 8/15/22 04:57, Peter Zijlstra wrote:
> On Fri, Aug 12, 2022 at 04:39:27PM -0400, Waiman Long wrote:
>> The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
>> Introduce task_struct::user_cpus_ptr to track requested affinity"). It
>> is currently used only by the arm64 arch because of its possible
>> asymmetric cpu setup. This patch extends its usage to save the user
>> provided cpumask when sched_setaffinity() is called, for all arches.
>>
>> To preserve the existing arm64 use case, a new cpus_affinity_set flag
>> is added to differentiate whether user_cpus_ptr was set up by
>> sched_setaffinity() or by force_compatible_cpus_allowed_ptr(). A
>> user_cpus_ptr set by sched_setaffinity() takes priority and won't be
>> overwritten by force_compatible_cpus_allowed_ptr() or
>> relax_compatible_cpus_allowed_ptr().
> What why ?! The only possible case where
> restrict_cpus_allowed_ptr() will now need that weird new state is when
> the affinity has never been set before, in that case cpus_ptr should be
> possible_mask.

Since I don't have the full history of the particular patch series that
added user_cpus_ptr, I am hesitant to change the current behavior for
arm64 systems. However, given the statement that user_cpus_ptr is for
tracking "requested affinity", which I assume means affinity requested
when user applications call sched_setaffinity(), it does make sense
that we may not really need this if sched_setaffinity() is never called.


> Please just make a single consistent rule and don't make weird corner
> cases like this.

I will take a closer look to try to simplify the rule here.

Cheers,
Longman


* Re: [PATCH v3 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-15 13:52     ` Waiman Long
@ 2022-08-15 14:25       ` Peter Zijlstra
  2022-08-15 15:23         ` Waiman Long
  0 siblings, 1 reply; 10+ messages in thread
From: Peter Zijlstra @ 2022-08-15 14:25 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On Mon, Aug 15, 2022 at 09:52:27AM -0400, Waiman Long wrote:
> 
> On 8/15/22 04:57, Peter Zijlstra wrote:
> > On Fri, Aug 12, 2022 at 04:39:27PM -0400, Waiman Long wrote:
> > > The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
> > > Introduce task_struct::user_cpus_ptr to track requested affinity"). It
> > > is currently used only by the arm64 arch because of its possible
> > > asymmetric cpu setup. This patch extends its usage to save the user
> > > provided cpumask when sched_setaffinity() is called, for all arches.
> > > 
> > > To preserve the existing arm64 use case, a new cpus_affinity_set flag
> > > is added to differentiate whether user_cpus_ptr was set up by
> > > sched_setaffinity() or by force_compatible_cpus_allowed_ptr(). A
> > > user_cpus_ptr set by sched_setaffinity() takes priority and won't be
> > > overwritten by force_compatible_cpus_allowed_ptr() or
> > > relax_compatible_cpus_allowed_ptr().
> > What why ?! The only possible case where
> > restrict_cpus_allowed_ptr() will now need that weird new state is when
> > the affinity has never been set before, in that case cpus_ptr should be
> > possible_mask.
> 
> Since I don't have the full history of the particular patch series that
> added user_cpus_ptr, I am hesitant to change the current behavior for
> arm64 systems. However, given the statement that user_cpus_ptr is for
> tracking "requested affinity", which I assume means affinity requested
> when user applications call sched_setaffinity(), it does make sense
> that we may not really need this if sched_setaffinity() is never called.

So it comes from the asymmetric arm stuff, where only the little cores
can still run arm32 code. This means that on those machines, 32bit code
needs to be constrained to a subset of CPUs.

A direct consequence of that was that if you have any 32bit program in
your process hierarchy, you lose the big cores from your affinity mask.

For some reason that wasn't popular..  Hence the save/restore of cpumasks.

> > Please just make a single consistent rule and don't make weird corner
> > cases like this.
> 
> I will take a closer look to try to simplify the rule here.

I think something like:

	mask = p->user_cpus_ptr;
	if (!mask)
		mask = &init_task.cpus_mask;

	// impose cpuset masks

should 'just-work'.


* Re: [PATCH v3 1/3] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-15 14:25       ` Peter Zijlstra
@ 2022-08-15 15:23         ` Waiman Long
  0 siblings, 0 replies; 10+ messages in thread
From: Waiman Long @ 2022-08-15 15:23 UTC (permalink / raw)
  To: Peter Zijlstra
  Cc: Ingo Molnar, Juri Lelli, Vincent Guittot, Dietmar Eggemann,
	Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On 8/15/22 10:25, Peter Zijlstra wrote:
> On Mon, Aug 15, 2022 at 09:52:27AM -0400, Waiman Long wrote:
>> On 8/15/22 04:57, Peter Zijlstra wrote:
>>> On Fri, Aug 12, 2022 at 04:39:27PM -0400, Waiman Long wrote:
>>>> The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
>>>> Introduce task_struct::user_cpus_ptr to track requested affinity"). It
>>>> is currently used only by the arm64 arch because of its possible
>>>> asymmetric cpu setup. This patch extends its usage to save the user
>>>> provided cpumask when sched_setaffinity() is called, for all arches.
>>>>
>>>> To preserve the existing arm64 use case, a new cpus_affinity_set flag
>>>> is added to differentiate whether user_cpus_ptr was set up by
>>>> sched_setaffinity() or by force_compatible_cpus_allowed_ptr(). A
>>>> user_cpus_ptr set by sched_setaffinity() takes priority and won't be
>>>> overwritten by force_compatible_cpus_allowed_ptr() or
>>>> relax_compatible_cpus_allowed_ptr().
>>> What why ?! The only possible case where
>>> restrict_cpus_allowed_ptr() will now need that weird new state is when
>>> the affinity has never been set before, in that case cpus_ptr should be
>>> possible_mask.
>> Since I don't have the full history of the particular patch series that
>> added user_cpus_ptr, I am hesitant to change the current behavior for
>> arm64 systems. However, given the statement that user_cpus_ptr is for
>> tracking "requested affinity", which I assume means affinity requested
>> when user applications call sched_setaffinity(), it does make sense
>> that we may not really need this if sched_setaffinity() is never called.
> So it comes from the asymmetric arm stuff, where only the little cores
> can still run arm32 code. This means that on those machines, 32bit code
> needs to be constrained to a subset of CPUs.
>
> A direct consequence of that was that if you have any 32bit program in
> your process hierarchy, you lose the big cores from your affinity mask.
>
> For some reason that wasn't popular..  Hence the save/restore of cpumasks.

I am aware of that part of the patch series.


>>> Please just make a single consistent rule and don't make weird corner
>>> cases like this.
>> I will take a closer look to try to simplify the rule here.
> I think something like:
>
> 	mask = p->user_cpus_ptr;
> 	if (!mask)
> 		mask = &init_task.cpus_mask;
>
> 	// impose cpuset masks
>
> should 'just-work'.

I think that should work in relax_compatible_cpus_allowed_ptr().
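
Roughly like this, I suppose (untested sketch; locking for the
user_cpus_ptr read elided):

	void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
	{
		const struct cpumask *mask = p->user_cpus_ptr;

		/* no user-requested affinity: fall back to the boot mask */
		if (!mask)
			mask = &init_task.cpus_mask;

		/* __sched_setaffinity() re-imposes the cpuset constraints */
		__sched_setaffinity(p, mask, false);
	}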

Thanks,
Longman

