Subject: [PATCH v3] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu onlining
From: Alexey Klimov @ 2021-03-17  0:36 UTC
  To: linux-kernel, cgroups
  Cc: peterz, yury.norov, daniel.m.jordan, tglx, jobaker,
	audralmitchel, arnd, gregkh, rafael, tj, qais.yousef, hannes,
	klimov.linux

When a CPU is offlined and onlined via device_offline() and device_online(),
userspace gets a uevent notification. If, after receiving the "online" uevent,
userspace executes sched_setaffinity() on some task trying to move it
to the recently onlined CPU, the call sometimes fails with -EINVAL. Userspace
needs to wait around 5..30 ms after receiving the uevent before
sched_setaffinity() will succeed for the recently onlined CPU.
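
A minimal userspace sketch of the race is below. This is not the reproducer
linked further down; the sysfs path, the choice of CPU 1 and the helper name
are assumptions made for illustration only:

#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

/* Hypothetical helper: write a short string to a sysfs attribute. */
static void write_sysfs(const char *path, const char *val)
{
	int fd = open(path, O_WRONLY);

	if (fd < 0 || write(fd, val, strlen(val)) < 0)
		perror(path);
	if (fd >= 0)
		close(fd);
}

int main(void)
{
	cpu_set_t set;

	/* Offline and online CPU 1 (assumes cpu1 exists and is hotpluggable). */
	write_sysfs("/sys/devices/system/cpu/cpu1/online", "0");
	write_sysfs("/sys/devices/system/cpu/cpu1/online", "1");

	/*
	 * A real test would wait for the "online" uevent here (e.g. on a
	 * netlink uevent socket); calling sched_setaffinity() right away
	 * models the same window in which cpuset_hotplug_work has not run.
	 */
	CPU_ZERO(&set);
	CPU_SET(1, &set);
	if (sched_setaffinity(0, sizeof(set), &set))
		fprintf(stderr, "sched_setaffinity: %s\n", strerror(errno));

	return 0;
}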

If the in_mask argument for sched_setaffinity() contains only the recently
onlined CPU, the call can fail with the following flow:

  sched_setaffinity()
    cpuset_cpus_allowed()
      guarantee_online_cpus()   <-- cs->effective_cpus mask does not
                                        contain recently onlined cpu
    cpumask_and()               <-- final new_mask is empty
    __set_cpus_allowed_ptr()
      cpumask_any_and_distribute() <-- returns dest_cpu equal to nr_cpu_ids
      returns -EINVAL
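
In source terms, the empty intersection is rejected roughly as in the
fragment below (paraphrased from kernel/sched/core.c of that era; simplified,
not verbatim):

	/* in sched_setaffinity(): */
	cpuset_cpus_allowed(p, cpus_allowed);		/* stale: lacks the new CPU */
	cpumask_and(new_mask, in_mask, cpus_allowed);	/* new_mask ends up empty */

	/* ...later, in __set_cpus_allowed_ptr(): */
	dest_cpu = cpumask_any_and_distribute(cpu_valid_mask, new_mask);
	if (dest_cpu >= nr_cpu_ids) {
		ret = -EINVAL;				/* what userspace observes */
		goto out;
	}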

The cpusets used in guarantee_online_cpus() are updated via a workqueue from
cpuset_update_active_cpus(), which in turn is called from the cpu hotplug
callback sched_cpu_activate(). Hence the update may not yet be observable by
sched_setaffinity() if it is called immediately after the uevent.
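
For reference, the asynchrony looks roughly like the sketch below
(paraphrased from kernel/cgroup/cpuset.c around v5.12, not the verbatim
source): the hotplug path only queues the update, and
cpuset_wait_for_hotplug() flushes that work item.

static DECLARE_WORK(cpuset_hotplug_work, cpuset_hotplug_workfn);

void cpuset_update_active_cpus(void)
{
	/* Called from the hotplug path; the cpumask updates happen later. */
	schedule_work(&cpuset_hotplug_work);
}

void cpuset_wait_for_hotplug(void)
{
	/* Block until the queued cpuset update has completed. */
	flush_work(&cpuset_hotplug_work);
}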

The premature uevent can be avoided by ensuring that cpuset_hotplug_work
has run to completion, using cpuset_wait_for_hotplug() after onlining the
cpu in cpu_device_up() and in cpuhp_smt_enable().

Cc: Daniel Jordan <daniel.m.jordan@oracle.com>
Reviewed-by: Qais Yousef <qais.yousef@arm.com>
Co-analyzed-by: Joshua Baker <jobaker@redhat.com>
Signed-off-by: Alexey Klimov <aklimov@redhat.com>
---

Changes since v2:
	- restore cpuhp_{online,offline}_cpu_device back and move it out
		of cpu maps lock;
	- use Reviewed-by from Qais;
	- minor corrections in commit message and in comment in code.

Changes since v1:
	- cpuset_wait_for_hotplug() moved to cpu_device_up();
	- corrections in comments;
	- removed cpuhp_{online,offline}_cpu_device.

Changes since RFC:
	- cpuset_wait_for_hotplug() used in cpuhp_smt_enable().

Previous patches and discussion are:
RFC patch: https://lore.kernel.org/lkml/20201203171431.256675-1-aklimov@redhat.com/
v1 patch:  https://lore.kernel.org/lkml/20210204010157.1823669-1-aklimov@redhat.com/
v2 patch: https://lore.kernel.org/lkml/20210212003032.2037750-1-aklimov@redhat.com/

The commit a49e4629b5ed ("cpuset: Make cpuset hotplug synchronous")
would also have gotten rid of the early uevent, but it was reverted due to
deadlocks.

The nature of this bug is also described here (with different consequences):
https://lore.kernel.org/lkml/20200211141554.24181-1-qais.yousef@arm.com/

Reproducer: https://gitlab.com/0xeafffffe/xlam

With these changes the reproducer code continues to work without issues.
The idea is to avoid the situation where userspace receives the uevent for
an onlined CPU that is not yet ready to take tasks.

 kernel/cpu.c | 74 +++++++++++++++++++++++++++++++++++++++-------------
 1 file changed, 56 insertions(+), 18 deletions(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 1b6302ecbabe..9b091d8a8811 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -15,6 +15,7 @@
 #include <linux/sched/smt.h>
 #include <linux/unistd.h>
 #include <linux/cpu.h>
+#include <linux/cpuset.h>
 #include <linux/oom.h>
 #include <linux/rcupdate.h>
 #include <linux/export.h>
@@ -1301,7 +1302,17 @@ static int cpu_up(unsigned int cpu, enum cpuhp_state target)
  */
 int cpu_device_up(struct device *dev)
 {
-	return cpu_up(dev->id, CPUHP_ONLINE);
+	int err;
+
+	err = cpu_up(dev->id, CPUHP_ONLINE);
+	/*
+	 * Wait for cpuset updates to cpumasks to finish.  Later on this path
+	 * may generate uevents whose consumers rely on the updates.
+	 */
+	if (!err)
+		cpuset_wait_for_hotplug();
+
+	return err;
 }
 
 int add_cpu(unsigned int cpu)
@@ -2084,8 +2095,13 @@ static void cpuhp_online_cpu_device(unsigned int cpu)
 
 int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 {
-	int cpu, ret = 0;
+	cpumask_var_t mask;
+	int cpu, ret;
 
+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+		return -ENOMEM;
+
+	ret = 0;
 	cpu_maps_update_begin();
 	for_each_online_cpu(cpu) {
 		if (topology_is_primary_thread(cpu))
@@ -2093,31 +2109,42 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 		ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
 		if (ret)
 			break;
-		/*
-		 * As this needs to hold the cpu maps lock it's impossible
-		 * to call device_offline() because that ends up calling
-		 * cpu_down() which takes cpu maps lock. cpu maps lock
-		 * needs to be held as this might race against in kernel
-		 * abusers of the hotplug machinery (thermal management).
-		 *
-		 * So nothing would update device:offline state. That would
-		 * leave the sysfs entry stale and prevent onlining after
-		 * smt control has been changed to 'off' again. This is
-		 * called under the sysfs hotplug lock, so it is properly
-		 * serialized against the regular offline usage.
-		 */
-		cpuhp_offline_cpu_device(cpu);
+
+		cpumask_set_cpu(cpu, mask);
 	}
 	if (!ret)
 		cpu_smt_control = ctrlval;
 	cpu_maps_update_done();
+
+	/*
+	 * When the cpu maps lock was taken above it was impossible
+	 * to call device_offline() because that ends up calling
+	 * cpu_down() which takes cpu maps lock. cpu maps lock
+	 * needed to be held as this might race against in-kernel
+	 * abusers of the hotplug machinery (thermal management).
+	 *
+	 * So nothing would update device:offline state. That would
+	 * leave the sysfs entry stale and prevent onlining after
+	 * smt control has been changed to 'off' again. This is
+	 * called under the sysfs hotplug lock, so it is properly
+	 * serialized against the regular offline usage.
+	 */
+	for_each_cpu(cpu, mask)
+		cpuhp_offline_cpu_device(cpu);
+
+	free_cpumask_var(mask);
 	return ret;
 }
 
 int cpuhp_smt_enable(void)
 {
-	int cpu, ret = 0;
+	cpumask_var_t mask;
+	int cpu, ret;
+
+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+		return -ENOMEM;
 
+	ret = 0;
 	cpu_maps_update_begin();
 	cpu_smt_control = CPU_SMT_ENABLED;
 	for_each_present_cpu(cpu) {
@@ -2128,9 +2155,20 @@ int cpuhp_smt_enable(void)
 		if (ret)
 			break;
 		/* See comment in cpuhp_smt_disable() */
-		cpuhp_online_cpu_device(cpu);
+		cpumask_set_cpu(cpu, mask);
 	}
 	cpu_maps_update_done();
+
+	/*
+	 * Wait for cpuset updates to cpumasks to finish.  Later on this path
+	 * may generate uevents whose consumers rely on the updates.
+	 */
+	cpuset_wait_for_hotplug();
+
+	for_each_cpu(cpu, mask)
+		cpuhp_online_cpu_device(cpu);
+
+	free_cpumask_var(mask);
 	return ret;
 }
 #endif
-- 
2.31.0


Thread overview:

2021-03-17  0:36 [PATCH v3] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu onlining Alexey Klimov
2021-03-18 19:28 ` Daniel Jordan
2021-03-27 21:01 ` Thomas Gleixner
2021-04-01 14:06   ` Qais Yousef
2021-04-04  2:32   ` Alexey Klimov
2021-04-15  1:30     ` Alexey Klimov
2021-06-14 18:14       ` Alexey Klimov
2021-06-19 20:18         ` Thomas Gleixner
2021-06-19 20:27   ` [tip: smp/urgent] cpu/hotplug: Cure the cpusets trainwreck tip-bot2 for Thomas Gleixner
2021-06-21  8:36   ` tip-bot2 for Thomas Gleixner