* [PATCH v2 0/2] sched, cgroup/cpuset: Keep user set cpus affinity
@ 2022-08-01 15:41 ` Waiman Long
  0 siblings, 0 replies; 14+ messages in thread
From: Waiman Long @ 2022-08-01 15:41 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Waiman Long

v2:
 - Rework the v1 patch by extending the semantics of user_cpus_ptr to
   store the user-set cpus affinity and adhere to it as much as possible.

The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
Introduce task_struct::user_cpus_ptr to track requested affinity"),
which uses it narrowly to keep the cpus affinity intact on asymmetric
cpu setups.

This patchset extends user_cpus_ptr to store the cpus affinity set by
the user via the sched_setaffinity() API. With that information
available, cpuset can keep the cpus affinity as close to what the user
wants as possible within the cpu list constraint of the current cpuset.
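
For illustration, the kind of user-set affinity this series preserves
can be set up with a minimal userspace program like the sketch below
(the CPU numbers are arbitrary):

	#define _GNU_SOURCE
	#include <sched.h>
	#include <stdio.h>

	int main(void)
	{
		cpu_set_t set;

		/* Explicitly pin this task to CPUs 2-3. */
		CPU_ZERO(&set);
		CPU_SET(2, &set);
		CPU_SET(3, &set);
		if (sched_setaffinity(0, sizeof(set), &set))
			perror("sched_setaffinity");

		/*
		 * Today, enabling the cpuset controller (e.g. writing
		 * "+cpuset" to cgroup.subtree_control) would reset this
		 * task's cpumask to the whole cpuset.  With this series,
		 * the task keeps whatever subset of CPUs 2-3 the cpuset
		 * still allows.
		 */
		return 0;
	}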

Waiman Long (2):
  sched: Use user_cpus_ptr for saving user provided cpumask in
    sched_setaffinity()
  cgroup/cpuset: Keep user set cpus affinity

 include/linux/sched.h  |  1 +
 kernel/cgroup/cpuset.c | 25 +++++++++++++--
 kernel/sched/core.c    | 71 ++++++++++++++++++++++++++++++------------
 kernel/sched/sched.h   |  1 -
 4 files changed, 75 insertions(+), 23 deletions(-)

-- 
2.31.1

* [PATCH v2 1/2] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
@ 2022-08-01 15:41   ` Waiman Long
  0 siblings, 0 replies; 14+ messages in thread
From: Waiman Long @ 2022-08-01 15:41 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Waiman Long

The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
Introduce task_struct::user_cpus_ptr to track requested affinity"). It
is currently used only by the arm64 arch due to its possible
asymmetric cpu setup. This patch extends its usage to all arches to
save the user-provided cpumask whenever sched_setaffinity() is called.

To preserve the existing arm64 use case, a new cpus_affinity_set flag
is added to differentiate whether user_cpus_ptr was set up by
sched_setaffinity() or by force_compatible_cpus_allowed_ptr(). A
user_cpus_ptr set by sched_setaffinity() has priority and won't be
overwritten by force_compatible_cpus_allowed_ptr() or
relax_compatible_cpus_allowed_ptr().

As a call to sched_setaffinity() will no longer clear user_cpus_ptr
but set it instead, the SCA_USER flag is no longer necessary and can
be removed.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 include/linux/sched.h |  1 +
 kernel/sched/core.c   | 71 +++++++++++++++++++++++++++++++------------
 kernel/sched/sched.h  |  1 -
 3 files changed, 52 insertions(+), 21 deletions(-)

diff --git a/include/linux/sched.h b/include/linux/sched.h
index c46f3a63b758..60ae022fa842 100644
--- a/include/linux/sched.h
+++ b/include/linux/sched.h
@@ -815,6 +815,7 @@ struct task_struct {
 
 	unsigned int			policy;
 	int				nr_cpus_allowed;
+	int				cpus_affinity_set;
 	const cpumask_t			*cpus_ptr;
 	cpumask_t			*user_cpus_ptr;
 	cpumask_t			cpus_mask;
diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index da0bf6fe9ecd..7757828c7422 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -2607,6 +2607,7 @@ int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
 		return -ENOMEM;
 
 	cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
+	dst->cpus_affinity_set = src->cpus_affinity_set;
 	return 0;
 }
 
@@ -2854,7 +2855,6 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 	const struct cpumask *cpu_allowed_mask = task_cpu_possible_mask(p);
 	const struct cpumask *cpu_valid_mask = cpu_active_mask;
 	bool kthread = p->flags & PF_KTHREAD;
-	struct cpumask *user_mask = NULL;
 	unsigned int dest_cpu;
 	int ret = 0;
 
@@ -2913,14 +2913,7 @@ static int __set_cpus_allowed_ptr_locked(struct task_struct *p,
 
 	__do_set_cpus_allowed(p, new_mask, flags);
 
-	if (flags & SCA_USER)
-		user_mask = clear_user_cpus_ptr(p);
-
-	ret = affine_move_task(rq, p, rf, dest_cpu, flags);
-
-	kfree(user_mask);
-
-	return ret;
+	return affine_move_task(rq, p, rf, dest_cpu, flags);
 
 out:
 	task_rq_unlock(rq, p, rf);
@@ -2994,19 +2987,24 @@ static int restrict_cpus_allowed_ptr(struct task_struct *p,
 
 	/*
 	 * We're about to butcher the task affinity, so keep track of what
-	 * the user asked for in case we're able to restore it later on.
+	 * the user asked for in case we're able to restore it later on
+	 * unless it has been set before by sched_setaffinity().
 	 */
-	if (user_mask) {
+	if (user_mask && !p->cpus_affinity_set) {
 		cpumask_copy(user_mask, p->cpus_ptr);
 		p->user_cpus_ptr = user_mask;
+		user_mask = NULL;
 	}
 
-	return __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
+	err = __set_cpus_allowed_ptr_locked(p, new_mask, 0, rq, &rf);
 
-err_unlock:
-	task_rq_unlock(rq, p, &rf);
+free_user_mask:
 	kfree(user_mask);
 	return err;
+
+err_unlock:
+	task_rq_unlock(rq, p, &rf);
+	goto free_user_mask;
 }
 
 /*
@@ -3055,7 +3053,7 @@ void force_compatible_cpus_allowed_ptr(struct task_struct *p)
 }
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask);
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask);
 
 /*
  * Restore the affinity of a task @p which was previously restricted by a
@@ -3073,9 +3071,10 @@ void relax_compatible_cpus_allowed_ptr(struct task_struct *p)
 	/*
 	 * Try to restore the old affinity mask. If this fails, then
 	 * we free the mask explicitly to avoid it being inherited across
-	 * a subsequent fork().
+	 * a subsequent fork() unless it is set by sched_setaffinity().
 	 */
-	if (!user_mask || !__sched_setaffinity(p, user_mask))
+	if (!user_mask || !__sched_setaffinity(p, user_mask, false) ||
+	    p->cpus_affinity_set)
 		return;
 
 	raw_spin_lock_irqsave(&p->pi_lock, flags);
@@ -8010,10 +8009,11 @@ int dl_task_check_affinity(struct task_struct *p, const struct cpumask *mask)
 #endif
 
 static int
-__sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
+__sched_setaffinity(struct task_struct *p, const struct cpumask *mask, bool save_mask)
 {
 	int retval;
 	cpumask_var_t cpus_allowed, new_mask;
+	struct cpumask *user_mask = NULL;
 
 	if (!alloc_cpumask_var(&cpus_allowed, GFP_KERNEL))
 		return -ENOMEM;
@@ -8029,8 +8029,38 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 	retval = dl_task_check_affinity(p, new_mask);
 	if (retval)
 		goto out_free_new_mask;
+
+	/*
+	 * Save the user requested mask into user_cpus_ptr
+	 */
+	if (save_mask && !p->user_cpus_ptr) {
+alloc_again:
+		user_mask = kmalloc(cpumask_size(), GFP_KERNEL);
+
+		if (!user_mask) {
+			retval = -ENOMEM;
+			goto out_free_new_mask;
+		}
+	}
+	if (save_mask) {
+		struct rq_flags rf;
+		struct rq *rq = task_rq_lock(p, &rf);
+
+		if (unlikely(!p->user_cpus_ptr && !user_mask)) {
+			task_rq_unlock(rq, p, &rf);
+			goto alloc_again;
+		}
+		if (!p->user_cpus_ptr) {
+			p->user_cpus_ptr = user_mask;
+			user_mask = NULL;
+		}
+
+		cpumask_copy(p->user_cpus_ptr, mask);
+		p->cpus_affinity_set = 1;
+		task_rq_unlock(rq, p, &rf);
+	}
 again:
-	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK | SCA_USER);
+	retval = __set_cpus_allowed_ptr(p, new_mask, SCA_CHECK);
 	if (retval)
 		goto out_free_new_mask;
 
@@ -8044,6 +8074,7 @@ __sched_setaffinity(struct task_struct *p, const struct cpumask *mask)
 		goto again;
 	}
 
 out_free_new_mask:
+	kfree(user_mask);
 	free_cpumask_var(new_mask);
 out_free_cpus_allowed:
@@ -8087,7 +8118,7 @@ long sched_setaffinity(pid_t pid, const struct cpumask *in_mask)
 	if (retval)
 		goto out_put_task;
 
-	retval = __sched_setaffinity(p, in_mask);
+	retval = __sched_setaffinity(p, in_mask, true);
 out_put_task:
 	put_task_struct(p);
 	return retval;
diff --git a/kernel/sched/sched.h b/kernel/sched/sched.h
index 47b89a0fc6e5..c9e9731a1a17 100644
--- a/kernel/sched/sched.h
+++ b/kernel/sched/sched.h
@@ -2242,7 +2242,6 @@ extern struct task_struct *pick_next_task_idle(struct rq *rq);
 #define SCA_CHECK		0x01
 #define SCA_MIGRATE_DISABLE	0x02
 #define SCA_MIGRATE_ENABLE	0x04
-#define SCA_USER		0x08
 
 #ifdef CONFIG_SMP
 
-- 
2.31.1

* [PATCH v2 2/2] cgroup/cpuset: Keep user set cpus affinity
@ 2022-08-01 15:41   ` Waiman Long
  0 siblings, 0 replies; 14+ messages in thread
From: Waiman Long @ 2022-08-01 15:41 UTC (permalink / raw)
  To: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, Will Deacon
  Cc: cgroups, linux-kernel, Waiman Long

It was found that any change to the current cpuset hierarchy may reset
the cpumask of the tasks in the affected cpusets to the default cpuset
value, even if those tasks had their cpus affinity explicitly set by
the user beforehand. That is especially easy to trigger under a cgroup
v2 environment, where writing "+cpuset" to the root cgroup's
cgroup.subtree_control file will reset the cpus affinity of all the
processes in the system.

That is problematic in a nohz_full environment, where the tasks running
on the nohz_full CPUs usually have their cpus affinity explicitly set
and will behave incorrectly if it changes.

Fix this problem by looking at user_cpus_ptr, which will be set if the
cpus affinity has been explicitly set before, and using it to restrict
the given cpumask unless there is no overlap. In that case, fall back
to the given one.

With that change in place, it was verified that tasks that have their
cpus affinity explicitly set will not be affected by changes made to
the v2 cgroup.subtree_control files.

Signed-off-by: Waiman Long <longman@redhat.com>
---
 kernel/cgroup/cpuset.c | 25 +++++++++++++++++++++++--
 1 file changed, 23 insertions(+), 2 deletions(-)

diff --git a/kernel/cgroup/cpuset.c b/kernel/cgroup/cpuset.c
index 71a418858a5e..2e3af93bed03 100644
--- a/kernel/cgroup/cpuset.c
+++ b/kernel/cgroup/cpuset.c
@@ -704,6 +704,27 @@ static int validate_change(struct cpuset *cur, struct cpuset *trial)
 	return ret;
 }
 
+/*
+ * Preserve the user-provided cpumask, if set, unless there is no overlap.
+ */
+static int cpuset_set_cpus_allowed_ptr(struct task_struct *p,
+				       const struct cpumask *mask)
+{
+	if (p->user_cpus_ptr && cpumask_intersects(p->user_cpus_ptr, mask)) {
+		cpumask_var_t new_mask;
+		int ret;
+
+		if (!alloc_cpumask_var(&new_mask, GFP_KERNEL))
+			return set_cpus_allowed_ptr(p, mask);
+		cpumask_and(new_mask, p->user_cpus_ptr, mask);
+		ret = set_cpus_allowed_ptr(p, new_mask);
+		free_cpumask_var(new_mask);
+		return ret;
+	}
+
+	return set_cpus_allowed_ptr(p, mask);
+}
+
 #ifdef CONFIG_SMP
 /*
  * Helper routine for generate_sched_domains().
@@ -1130,7 +1150,7 @@ static void update_tasks_cpumask(struct cpuset *cs)
 
 	css_task_iter_start(&cs->css, 0, &it);
 	while ((task = css_task_iter_next(&it)))
-		set_cpus_allowed_ptr(task, cs->effective_cpus);
+		cpuset_set_cpus_allowed_ptr(task, cs->effective_cpus);
 	css_task_iter_end(&it);
 }
 
@@ -2303,7 +2323,7 @@ static void cpuset_attach(struct cgroup_taskset *tset)
 		 * can_attach beforehand should guarantee that this doesn't
 		 * fail.  TODO: have a better way to handle failure here
 		 */
-		WARN_ON_ONCE(set_cpus_allowed_ptr(task, cpus_attach));
+		WARN_ON_ONCE(cpuset_set_cpus_allowed_ptr(task, cpus_attach));
 
 		cpuset_change_task_nodemask(task, &cpuset_attach_nodemask_to);
 		cpuset_update_task_spread_flag(cs, task);
-- 
2.31.1

* Re: [PATCH v2 1/2] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
@ 2022-08-01 16:45     ` Will Deacon
  0 siblings, 0 replies; 14+ messages in thread
From: Will Deacon @ 2022-08-01 16:45 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, cgroups, linux-kernel

On Mon, Aug 01, 2022 at 11:41:23AM -0400, Waiman Long wrote:
> The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
> Introduce task_struct::user_cpus_ptr to track requested affinity"). It
> is currently used only by the arm64 arch due to its possible
> asymmetric cpu setup. This patch extends its usage to all arches to
> save the user-provided cpumask whenever sched_setaffinity() is called.
>
> To preserve the existing arm64 use case, a new cpus_affinity_set flag
> is added to differentiate whether user_cpus_ptr was set up by
> sched_setaffinity() or by force_compatible_cpus_allowed_ptr(). A
> user_cpus_ptr set by sched_setaffinity() has priority and won't be
> overwritten by force_compatible_cpus_allowed_ptr() or
> relax_compatible_cpus_allowed_ptr().
> 
> As a call to sched_setaffinity() will no longer clear user_cpus_ptr
> but set it instead, the SCA_USER flag is no longer necessary and can
> be removed.
> 
> Signed-off-by: Waiman Long <longman@redhat.com>
> ---
>  include/linux/sched.h |  1 +
>  kernel/sched/core.c   | 71 +++++++++++++++++++++++++++++++------------
>  kernel/sched/sched.h  |  1 -
>  3 files changed, 52 insertions(+), 21 deletions(-)
> 
> diff --git a/include/linux/sched.h b/include/linux/sched.h
> index c46f3a63b758..60ae022fa842 100644
> --- a/include/linux/sched.h
> +++ b/include/linux/sched.h
> @@ -815,6 +815,7 @@ struct task_struct {
>  
>  	unsigned int			policy;
>  	int				nr_cpus_allowed;
> +	int				cpus_affinity_set;
>  	const cpumask_t			*cpus_ptr;
>  	cpumask_t			*user_cpus_ptr;
>  	cpumask_t			cpus_mask;
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index da0bf6fe9ecd..7757828c7422 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -2607,6 +2607,7 @@ int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
>  		return -ENOMEM;
>  
>  	cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
> +	dst->cpus_affinity_set = src->cpus_affinity_set;

I haven't been through this thoroughly, but it looks a bit suspicious
to me to inherit this field directly across fork(). If a 64-bit task
with this flag set forks and then exec's a 32-bit program, arm64 will
be in trouble if we're not able to override the affinity forcefully.

Will

* Re: [PATCH v2 1/2] sched: Use user_cpus_ptr for saving user provided cpumask in sched_setaffinity()
  2022-08-01 16:45     ` Will Deacon
@ 2022-08-01 17:15       ` Waiman Long
  0 siblings, 0 replies; 14+ messages in thread
From: Waiman Long @ 2022-08-01 17:15 UTC (permalink / raw)
  To: Will Deacon
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Tejun Heo,
	Zefan Li, Johannes Weiner, cgroups, linux-kernel

On 8/1/22 12:45, Will Deacon wrote:
> On Mon, Aug 01, 2022 at 11:41:23AM -0400, Waiman Long wrote:
>> The user_cpus_ptr field was added by commit b90ca8badbd1 ("sched:
>> Introduce task_struct::user_cpus_ptr to track requested affinity"). It
>> is currently used only by the arm64 arch due to its possible
>> asymmetric cpu setup. This patch extends its usage to all arches to
>> save the user-provided cpumask whenever sched_setaffinity() is called.
>>
>> To preserve the existing arm64 use case, a new cpus_affinity_set flag
>> is added to differentiate whether user_cpus_ptr was set up by
>> sched_setaffinity() or by force_compatible_cpus_allowed_ptr(). A
>> user_cpus_ptr set by sched_setaffinity() has priority and won't be
>> overwritten by force_compatible_cpus_allowed_ptr() or
>> relax_compatible_cpus_allowed_ptr().
>>
>> As a call to sched_setaffinity() will no longer clear user_cpus_ptr
>> but set it instead, the SCA_USER flag is no longer necessary and can
>> be removed.
>>
>> Signed-off-by: Waiman Long <longman@redhat.com>
>> ---
>>   include/linux/sched.h |  1 +
>>   kernel/sched/core.c   | 71 +++++++++++++++++++++++++++++++------------
>>   kernel/sched/sched.h  |  1 -
>>   3 files changed, 52 insertions(+), 21 deletions(-)
>>
>> diff --git a/include/linux/sched.h b/include/linux/sched.h
>> index c46f3a63b758..60ae022fa842 100644
>> --- a/include/linux/sched.h
>> +++ b/include/linux/sched.h
>> @@ -815,6 +815,7 @@ struct task_struct {
>>   
>>   	unsigned int			policy;
>>   	int				nr_cpus_allowed;
>> +	int				cpus_affinity_set;
>>   	const cpumask_t			*cpus_ptr;
>>   	cpumask_t			*user_cpus_ptr;
>>   	cpumask_t			cpus_mask;
>> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>> index da0bf6fe9ecd..7757828c7422 100644
>> --- a/kernel/sched/core.c
>> +++ b/kernel/sched/core.c
>> @@ -2607,6 +2607,7 @@ int dup_user_cpus_ptr(struct task_struct *dst, struct task_struct *src,
>>   		return -ENOMEM;
>>   
>>   	cpumask_copy(dst->user_cpus_ptr, src->user_cpus_ptr);
>> +	dst->cpus_affinity_set = src->cpus_affinity_set;
> I haven't been through this thoroughly, but it looks a bit suspicious
> to me to inherit this field directly across fork(). If a 64-bit task
> with this flag set forks and then exec's a 32-bit program, arm64 will
> be in trouble if we're not able to override the affinity forcefully.

I believe you can still override the affinity. What is in user_cpus_ptr
is not the actual affinity, which is in cpus_mask; it is just what the
user desires. Its value has to be masked off by the current cpuset as
well as by what is allowed in task_cpu_possible_mask().
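
Roughly speaking (an illustrative sketch, not verbatim kernel code),
the affinity a task actually runs with is derived like:

	/*
	 * Sketch only: the effective affinity is the user-desired mask
	 * clamped by the current cpuset and by what the architecture
	 * allows for this task.
	 */
	static void effective_affinity(struct task_struct *p,
				       struct cpuset *cs,
				       struct cpumask *out)
	{
		cpumask_and(out, p->user_cpus_ptr, cs->effective_cpus);
		cpumask_and(out, out, task_cpu_possible_mask(p));
	}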

Cheers,
Longman

* Re: [PATCH v2 2/2] cgroup/cpuset: Keep user set cpus affinity
@ 2022-08-09 19:55     ` Tejun Heo
  0 siblings, 0 replies; 14+ messages in thread
From: Tejun Heo @ 2022-08-09 19:55 UTC (permalink / raw)
  To: Waiman Long
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Zefan Li,
	Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

(cc'ing Linus)

Hello,

On Mon, Aug 01, 2022 at 11:41:24AM -0400, Waiman Long wrote:
> It was found that any change to the current cpuset hierarchy may reset
> the cpumask of the tasks in the affected cpusets to the default cpuset
> value, even if those tasks had their cpus affinity explicitly set by
> the user beforehand. That is especially easy to trigger under a cgroup
> v2 environment, where writing "+cpuset" to the root cgroup's
> cgroup.subtree_control file will reset the cpus affinity of all the
> processes in the system.
>
> That is problematic in a nohz_full environment, where the tasks running
> on the nohz_full CPUs usually have their cpus affinity explicitly set
> and will behave incorrectly if it changes.
>
> Fix this problem by looking at user_cpus_ptr, which will be set if the
> cpus affinity has been explicitly set before, and using it to restrict
> the given cpumask unless there is no overlap. In that case, fall back
> to the given one.
>
> With that change in place, it was verified that tasks that have their
> cpus affinity explicitly set will not be affected by changes made to
> the v2 cgroup.subtree_control files.

The fact that the kernel clobbers user-specified cpus_allowed as cpu
availability changes has always bothered me, and it has been causing
this sort of problem w/ cpu hotplug and cpuset. We've been patching
this up partially here and there, but I think it would be better if we
just made the rules really simple - i.e. allow users to configure
whatever cpus_allowed they want as long as it's within
cpu_possible_mask, and override only the effective cpus_allowed if the
mask leaves no runnable CPUs, so that we can restore the original
configured behavior if and when some of the cpus become available
again.
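
In pseudo-code, the proposed rule would be something like the sketch
below, where task_available_cpus() is a made-up placeholder for
whatever mask hotplug/cpuset currently permits:

	/* Sketch of the proposed rule; not an actual patch. */
	static void update_effective_affinity(struct task_struct *p)
	{
		/* Users may configure anything within cpu_possible_mask. */
		cpumask_and(&p->cpus_mask, p->user_cpus_ptr,
			    task_available_cpus(p));

		/* Override only if nothing runnable is left. */
		if (cpumask_empty(&p->cpus_mask))
			cpumask_copy(&p->cpus_mask, task_available_cpus(p));
	}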

One obvious problem with changing the behavior is that it may affect /
confuse users expecting the current behavior, however inconsistent it
may be. But given that we have partially changed how cpus_allowed
interacts with hotplug in the past, and the current behavior can be
inconsistent and surprising, I don't think this is a bridge we can't
cross. What do others think?

Thanks.

-- 
tejun

* Re: [PATCH v2 2/2] cgroup/cpuset: Keep user set cpus affinity
@ 2022-08-09 20:15       ` Waiman Long
  0 siblings, 0 replies; 14+ messages in thread
From: Waiman Long @ 2022-08-09 20:15 UTC (permalink / raw)
  To: Tejun Heo
  Cc: Ingo Molnar, Peter Zijlstra, Juri Lelli, Vincent Guittot,
	Dietmar Eggemann, Steven Rostedt, Ben Segall, Mel Gorman,
	Daniel Bristot de Oliveira, Valentin Schneider, Zefan Li,
	Johannes Weiner, Will Deacon, cgroups, linux-kernel,
	Linus Torvalds

On 8/9/22 15:55, Tejun Heo wrote:
> (cc'ing Linus)
>
> Hello,
>
> On Mon, Aug 01, 2022 at 11:41:24AM -0400, Waiman Long wrote:
>> It was found that any change to the current cpuset hierarchy may reset
>> the cpumask of the tasks in the affected cpusets to the default cpuset
>> value, even if those tasks had their cpus affinity explicitly set by
>> the user beforehand. That is especially easy to trigger under a cgroup
>> v2 environment, where writing "+cpuset" to the root cgroup's
>> cgroup.subtree_control file will reset the cpus affinity of all the
>> processes in the system.
>>
>> That is problematic in a nohz_full environment, where the tasks running
>> on the nohz_full CPUs usually have their cpus affinity explicitly set
>> and will behave incorrectly if it changes.
>>
>> Fix this problem by looking at user_cpus_ptr, which will be set if the
>> cpus affinity has been explicitly set before, and using it to restrict
>> the given cpumask unless there is no overlap. In that case, fall back
>> to the given one.
>>
>> With that change in place, it was verified that tasks that have their
>> cpus affinity explicitly set will not be affected by changes made to
>> the v2 cgroup.subtree_control files.
> The fact that the kernel clobbers user-specified cpus_allowed as cpu
> availability changes has always bothered me, and it has been causing
> this sort of problem w/ cpu hotplug and cpuset. We've been patching
> this up partially here and there, but I think it would be better if we
> just made the rules really simple - i.e. allow users to configure
> whatever cpus_allowed they want as long as it's within
> cpu_possible_mask, and override only the effective cpus_allowed if the
> mask leaves no runnable CPUs, so that we can restore the original
> configured behavior if and when some of the cpus become available
> again.
>
> One obvious problem with changing the behavior is that it may affect /
> confuse users expecting the current behavior, however inconsistent it
> may be. But given that we have partially changed how cpus_allowed
> interacts with hotplug in the past, and the current behavior can be
> inconsistent and surprising, I don't think this is a bridge we can't
> cross. What do others think?

My patch will still subject the cpus_allowed list to the constraint
imposed by the current cpuset; it just keeps as much of what the user
specified as possible. If we are worried about backward compatibility,
maybe we can restrict that change in behavior to cgroup v2 only, or we
can add a sysctl parameter to restore the old behavior if the user
chooses to.
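
Such a sysctl could be a simple boolean knob, e.g. (a hypothetical
sketch; the name and placement are made up):

	/* Hypothetical opt-out knob restoring the old clobbering behavior. */
	static int sysctl_cpuset_clobber_affinity;

	static struct ctl_table cpuset_ctl_table[] = {
		{
			.procname	= "cpuset_clobber_affinity",
			.data		= &sysctl_cpuset_clobber_affinity,
			.maxlen		= sizeof(int),
			.mode		= 0644,
			.proc_handler	= proc_dointvec_minmax,
			.extra1		= SYSCTL_ZERO,
			.extra2		= SYSCTL_ONE,
		},
		{ }
	};

	/* Registered once at init, e.g. register_sysctl("kernel", cpuset_ctl_table). */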

Users are now gradually migrating over to cgroup v2 and they do 
understand that there are some changes in behavior when using cgroup v2.

Cheers,
Longman