linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu onlining
@ 2021-02-12  0:30 Alexey Klimov
  2021-02-12 19:41 ` Daniel Jordan
  2021-02-16 18:29 ` Qais Yousef
  0 siblings, 2 replies; 5+ messages in thread
From: Alexey Klimov @ 2021-02-12  0:30 UTC (permalink / raw)
  To: linux-kernel, cgroups
  Cc: peterz, yury.norov, daniel.m.jordan, tglx, jobaker,
	audralmitchel, arnd, gregkh, rafael, tj, qais.yousef, hannes,
	klimov.linux

When a CPU is offlined and onlined via device_offline() and device_online(),
userspace receives a uevent notification. If, after receiving the "online"
uevent, userspace executes sched_setaffinity() on some task trying to move it
to a recently onlined CPU, it often fails with -EINVAL. Userspace needs
to wait around 5..30 ms after receiving the uevent before sched_setaffinity()
succeeds for the recently onlined CPU.

If the in_mask argument for sched_setaffinity() contains only the recently
onlined CPU, the call often fails with the following flow:

  sched_setaffinity()
    cpuset_cpus_allowed()
      guarantee_online_cpus()   <-- cs->effective_cpus mask does not
                                        contain recently onlined cpu
    cpumask_and()               <-- final new_mask is empty
    __set_cpus_allowed_ptr()
      cpumask_any_and_distribute() <-- returns dest_cpu equal to nr_cpu_ids
      returns -EINVAL

The cpusets used in guarantee_online_cpus() are updated from a workqueue
scheduled by cpuset_update_active_cpus(), which in turn is called from the cpu
hotplug callback sched_cpu_activate(); hence the update may not yet be
observable by sched_setaffinity() if it is called immediately after the uevent.
The premature uevent can be avoided by ensuring that cpuset_hotplug_work
has run to completion, using cpuset_wait_for_hotplug(), after onlining the
cpu in cpu_device_up() and in cpuhp_smt_enable().

Co-analyzed-by: Joshua Baker <jobaker@redhat.com>
Signed-off-by: Alexey Klimov <aklimov@redhat.com>
---

Previous patches and discussion are:
RFC patch: https://lore.kernel.org/lkml/20201203171431.256675-1-aklimov@redhat.com/
v1 patch:  https://lore.kernel.org/lkml/20210204010157.1823669-1-aklimov@redhat.com/

Commit a49e4629b5ed ("cpuset: Make cpuset hotplug synchronous")
would also have gotten rid of the early uevent, but it was reverted due to deadlocks.

The nature of this bug is also described here (with different consequences):
https://lore.kernel.org/lkml/20200211141554.24181-1-qais.yousef@arm.com/

Reproducer: https://gitlab.com/0xeafffffe/xlam

With these changes, the reproducer code runs without issues.
The idea is to avoid the situation where userspace receives the uevent about an
onlined CPU that, for a while after the uevent, is not yet ready to accept tasks.

 kernel/cpu.c | 79 +++++++++++++++++++++++++++++++++++++---------------
 1 file changed, 56 insertions(+), 23 deletions(-)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 4e11e91010e1..8817ccdc8e11 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -15,6 +15,7 @@
 #include <linux/sched/smt.h>
 #include <linux/unistd.h>
 #include <linux/cpu.h>
+#include <linux/cpuset.h>
 #include <linux/oom.h>
 #include <linux/rcupdate.h>
 #include <linux/export.h>
@@ -1294,7 +1295,17 @@ static int cpu_up(unsigned int cpu, enum cpuhp_state target)
  */
 int cpu_device_up(struct device *dev)
 {
-	return cpu_up(dev->id, CPUHP_ONLINE);
+	int err;
+
+	err = cpu_up(dev->id, CPUHP_ONLINE);
+	/*
+	 * Wait for cpuset updates to cpumasks to finish.  Later on this path
+	 * may generate uevents whose consumers rely on the updates.
+	 */
+	if (!err)
+		cpuset_wait_for_hotplug();
+
+	return err;
 }
 
 int add_cpu(unsigned int cpu)
@@ -2057,28 +2068,16 @@ void __cpuhp_remove_state(enum cpuhp_state state, bool invoke)
 EXPORT_SYMBOL(__cpuhp_remove_state);
 
 #ifdef CONFIG_HOTPLUG_SMT
-static void cpuhp_offline_cpu_device(unsigned int cpu)
-{
-	struct device *dev = get_cpu_device(cpu);
-
-	dev->offline = true;
-	/* Tell user space about the state change */
-	kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
-}
-
-static void cpuhp_online_cpu_device(unsigned int cpu)
-{
-	struct device *dev = get_cpu_device(cpu);
-
-	dev->offline = false;
-	/* Tell user space about the state change */
-	kobject_uevent(&dev->kobj, KOBJ_ONLINE);
-}
-
 int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 {
-	int cpu, ret = 0;
+	struct device *dev;
+	cpumask_var_t mask;
+	int cpu, ret;
+
+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+		return -ENOMEM;
 
+	ret = 0;
 	cpu_maps_update_begin();
 	for_each_online_cpu(cpu) {
 		if (topology_is_primary_thread(cpu))
@@ -2099,18 +2098,35 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 		 * called under the sysfs hotplug lock, so it is properly
 		 * serialized against the regular offline usage.
 		 */
-		cpuhp_offline_cpu_device(cpu);
+		dev = get_cpu_device(cpu);
+		dev->offline = true;
+
+		cpumask_set_cpu(cpu, mask);
 	}
 	if (!ret)
 		cpu_smt_control = ctrlval;
 	cpu_maps_update_done();
+
+	/* Tell user space about the state changes */
+	for_each_cpu(cpu, mask) {
+		dev = get_cpu_device(cpu);
+		kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
+	}
+
+	free_cpumask_var(mask);
 	return ret;
 }
 
 int cpuhp_smt_enable(void)
 {
-	int cpu, ret = 0;
+	struct device *dev;
+	cpumask_var_t mask;
+	int cpu, ret;
+
+	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
+		return -ENOMEM;
 
+	ret = 0;
 	cpu_maps_update_begin();
 	cpu_smt_control = CPU_SMT_ENABLED;
 	for_each_present_cpu(cpu) {
@@ -2121,9 +2137,26 @@ int cpuhp_smt_enable(void)
 		if (ret)
 			break;
 		/* See comment in cpuhp_smt_disable() */
-		cpuhp_online_cpu_device(cpu);
+		dev = get_cpu_device(cpu);
+		dev->offline = false;
+
+		cpumask_set_cpu(cpu, mask);
 	}
 	cpu_maps_update_done();
+
+	/*
+	 * Wait for cpuset updates to cpumasks to finish.  Later on this path
+	 * may generate uevents whose consumers rely on the updates.
+	 */
+	cpuset_wait_for_hotplug();
+
+	/* Tell user space about the state changes */
+	for_each_cpu(cpu, mask) {
+		dev = get_cpu_device(cpu);
+		kobject_uevent(&dev->kobj, KOBJ_ONLINE);
+	}
+
+	free_cpumask_var(mask);
 	return ret;
 }
 #endif
-- 
2.30.0



* Re: [PATCH v2] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu onlining
  2021-02-12  0:30 [PATCH v2] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu onlining Alexey Klimov
@ 2021-02-12 19:41 ` Daniel Jordan
  2021-03-16  3:15   ` Alexey Klimov
  2021-02-16 18:29 ` Qais Yousef
  1 sibling, 1 reply; 5+ messages in thread
From: Daniel Jordan @ 2021-02-12 19:41 UTC (permalink / raw)
  To: Alexey Klimov, linux-kernel, cgroups
  Cc: peterz, yury.norov, tglx, jobaker, audralmitchel, arnd, gregkh,
	rafael, tj, qais.yousef, hannes, klimov.linux

Alexey Klimov <aklimov@redhat.com> writes:
> int cpu_device_up(struct device *dev)

Yeah, definitely better to do the wait here.

>  int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
>  {
> -	int cpu, ret = 0;
> +	struct device *dev;
> +	cpumask_var_t mask;
> +	int cpu, ret;
> +
> +	if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
> +		return -ENOMEM;
>  
> +	ret = 0;
>  	cpu_maps_update_begin();
>  	for_each_online_cpu(cpu) {
>  		if (topology_is_primary_thread(cpu))
> @@ -2099,18 +2098,35 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
>  		 * called under the sysfs hotplug lock, so it is properly
>  		 * serialized against the regular offline usage.
>  		 */
> -		cpuhp_offline_cpu_device(cpu);
> +		dev = get_cpu_device(cpu);
> +		dev->offline = true;
> +
> +		cpumask_set_cpu(cpu, mask);
>  	}
>  	if (!ret)
>  		cpu_smt_control = ctrlval;
>  	cpu_maps_update_done();
> +
> +	/* Tell user space about the state changes */
> +	for_each_cpu(cpu, mask) {
> +		dev = get_cpu_device(cpu);
> +		kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
> +	}
> +
> +	free_cpumask_var(mask);
>  	return ret;
>  }

Hrm, should the dev manipulation be kept in one place, something like
this?

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 8817ccdc8e112..aa21219a7b7c4 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -2085,11 +2085,20 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 		ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
 		if (ret)
 			break;
+
+		cpumask_set_cpu(cpu, mask);
+	}
+	if (!ret)
+		cpu_smt_control = ctrlval;
+	cpu_maps_update_done();
+
+	/* Tell user space about the state changes */
+	for_each_cpu(cpu, mask) {
 		/*
-		 * As this needs to hold the cpu maps lock it's impossible
+		 * When the cpu maps lock was taken above it was impossible
 		 * to call device_offline() because that ends up calling
 		 * cpu_down() which takes cpu maps lock. cpu maps lock
-		 * needs to be held as this might race against in kernel
+		 * needed to be held as this might race against in kernel
 		 * abusers of the hotplug machinery (thermal management).
 		 *
 		 * So nothing would update device:offline state. That would
@@ -2100,16 +2109,6 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
 		 */
 		dev = get_cpu_device(cpu);
 		dev->offline = true;
-
-		cpumask_set_cpu(cpu, mask);
-	}
-	if (!ret)
-		cpu_smt_control = ctrlval;
-	cpu_maps_update_done();
-
-	/* Tell user space about the state changes */
-	for_each_cpu(cpu, mask) {
-		dev = get_cpu_device(cpu);
 		kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
 	}
 
@@ -2136,9 +2135,6 @@ int cpuhp_smt_enable(void)
 		ret = _cpu_up(cpu, 0, CPUHP_ONLINE);
 		if (ret)
 			break;
-		/* See comment in cpuhp_smt_disable() */
-		dev = get_cpu_device(cpu);
-		dev->offline = false;
 
 		cpumask_set_cpu(cpu, mask);
 	}
@@ -2152,7 +2148,9 @@ int cpuhp_smt_enable(void)
 
 	/* Tell user space about the state changes */
 	for_each_cpu(cpu, mask) {
+		/* See comment in cpuhp_smt_disable() */
 		dev = get_cpu_device(cpu);
+		dev->offline = false;
 		kobject_uevent(&dev->kobj, KOBJ_ONLINE);
 	}
 


* Re: [PATCH v2] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu onlining
  2021-02-12  0:30 [PATCH v2] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu onlining Alexey Klimov
  2021-02-12 19:41 ` Daniel Jordan
@ 2021-02-16 18:29 ` Qais Yousef
  1 sibling, 0 replies; 5+ messages in thread
From: Qais Yousef @ 2021-02-16 18:29 UTC (permalink / raw)
  To: Alexey Klimov
  Cc: linux-kernel, cgroups, peterz, yury.norov, daniel.m.jordan, tglx,
	jobaker, audralmitchel, arnd, gregkh, rafael, tj, hannes,
	klimov.linux

On 02/12/21 00:30, Alexey Klimov wrote:
> When a CPU is offlined and onlined via device_offline() and device_online(),
> userspace receives a uevent notification. If, after receiving the "online"
> uevent, userspace executes sched_setaffinity() on some task trying to move it
> to a recently onlined CPU, it often fails with -EINVAL. Userspace needs
> to wait around 5..30 ms after receiving the uevent before sched_setaffinity()
> succeeds for the recently onlined CPU.
> 
> If the in_mask argument for sched_setaffinity() contains only the recently
> onlined CPU, the call often fails with the following flow:
> 
>   sched_setaffinity()
>     cpuset_cpus_allowed()
>       guarantee_online_cpus()   <-- cs->effective_cpus mask does not
>                                         contain recently onlined cpu
>     cpumask_and()               <-- final new_mask is empty
>     __set_cpus_allowed_ptr()
>       cpumask_any_and_distribute() <-- returns dest_cpu equal to nr_cpu_ids
>       returns -EINVAL
> 
> The cpusets used in guarantee_online_cpus() are updated from a workqueue
> scheduled by cpuset_update_active_cpus(), which in turn is called from the cpu
> hotplug callback sched_cpu_activate(); hence the update may not yet be
> observable by sched_setaffinity() if it is called immediately after the uevent.

nit: newline

> The premature uevent can be avoided by ensuring that cpuset_hotplug_work
> has run to completion, using cpuset_wait_for_hotplug(), after onlining the
> cpu in cpu_device_up() and in cpuhp_smt_enable().
> 
> Co-analyzed-by: Joshua Baker <jobaker@redhat.com>
> Signed-off-by: Alexey Klimov <aklimov@redhat.com>
> ---

This looks good to me.

Reviewed-by: Qais Yousef <qais.yousef@arm.com>

Thanks

--
Qais Yousef


* Re: [PATCH v2] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu onlining
  2021-02-12 19:41 ` Daniel Jordan
@ 2021-03-16  3:15   ` Alexey Klimov
  2021-03-18 19:12     ` Daniel Jordan
  0 siblings, 1 reply; 5+ messages in thread
From: Alexey Klimov @ 2021-03-16  3:15 UTC (permalink / raw)
  To: Daniel Jordan
  Cc: linux-kernel, cgroups, Peter Zijlstra, yury.norov, tglx,
	Joshua Baker, audralmitchel, arnd, gregkh, rafael, tj,
	Qais Yousef, hannes, Alexey Klimov

On Fri, Feb 12, 2021 at 7:42 PM Daniel Jordan
<daniel.m.jordan@oracle.com> wrote:
>
> Alexey Klimov <aklimov@redhat.com> writes:
> > int cpu_device_up(struct device *dev)
>
> Yeah, definitely better to do the wait here.
>
> >  int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
> >  {
> > -     int cpu, ret = 0;
> > +     struct device *dev;
> > +     cpumask_var_t mask;
> > +     int cpu, ret;
> > +
> > +     if (!zalloc_cpumask_var(&mask, GFP_KERNEL))
> > +             return -ENOMEM;
> >
> > +     ret = 0;
> >       cpu_maps_update_begin();
> >       for_each_online_cpu(cpu) {
> >               if (topology_is_primary_thread(cpu))
> > @@ -2099,18 +2098,35 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
> >                * called under the sysfs hotplug lock, so it is properly
> >                * serialized against the regular offline usage.
> >                */
> > -             cpuhp_offline_cpu_device(cpu);
> > +             dev = get_cpu_device(cpu);
> > +             dev->offline = true;
> > +
> > +             cpumask_set_cpu(cpu, mask);
> >       }
> >       if (!ret)
> >               cpu_smt_control = ctrlval;
> >       cpu_maps_update_done();
> > +
> > +     /* Tell user space about the state changes */
> > +     for_each_cpu(cpu, mask) {
> > +             dev = get_cpu_device(cpu);
> > +             kobject_uevent(&dev->kobj, KOBJ_OFFLINE);
> > +     }
> > +
> > +     free_cpumask_var(mask);
> >       return ret;
> >  }
>
> Hrm, should the dev manipulation be kept in one place, something like
> this?

The first section of the comment seems problematic to me with regard to such a move:

                 * As this needs to hold the cpu maps lock it's impossible
                 * to call device_offline() because that ends up calling
                 * cpu_down() which takes cpu maps lock. cpu maps lock
                 * needs to be held as this might race against in kernel
                 * abusers of the hotplug machinery (thermal management).

The cpu maps lock is released in cpu_maps_update_done(), so this would move the
dev->offline update outside the cpu maps lock. Maybe I misunderstood the comment
and it relates to calling cpu_down_maps_locked() under the lock to avoid the
race?
I failed to find the abusers of the hotplug machinery in drivers/thermal/*
to track down the logic of the potential race, but I may have overlooked
something. Anyway, if we move the update of dev->offline out, then it makes
sense to restore cpuhp_{offline,online}_cpu_device and just use them.

I guess I'll update and re-send the patch and see how it goes.

> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 8817ccdc8e112..aa21219a7b7c4 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -2085,11 +2085,20 @@ int cpuhp_smt_disable(enum cpuhp_smt_control ctrlval)
>                 ret = cpu_down_maps_locked(cpu, CPUHP_OFFLINE);
>                 if (ret)
>                         break;
> +
> +               cpumask_set_cpu(cpu, mask);
> +       }
> +       if (!ret)
> +               cpu_smt_control = ctrlval;
> +       cpu_maps_update_done();
> +
> +       /* Tell user space about the state changes */
> +       for_each_cpu(cpu, mask) {
>                 /*
> -                * As this needs to hold the cpu maps lock it's impossible
> +                * When the cpu maps lock was taken above it was impossible
>                  * to call device_offline() because that ends up calling
>                  * cpu_down() which takes cpu maps lock. cpu maps lock
> -                * needs to be held as this might race against in kernel
> +                * needed to be held as this might race against in kernel
>                  * abusers of the hotplug machinery (thermal management).
>                  *
>                  * So nothing would update device:offline state. That would

Yeah, reading how you re-phrased it, this seems to be about the
cpu_down_maps_locked()/device_offline() locking and the race rather than about
updating a stale dev->offline.

Thank you,
Alexey



* Re: [PATCH v2] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu onlining
  2021-03-16  3:15   ` Alexey Klimov
@ 2021-03-18 19:12     ` Daniel Jordan
  0 siblings, 0 replies; 5+ messages in thread
From: Daniel Jordan @ 2021-03-18 19:12 UTC (permalink / raw)
  To: Alexey Klimov
  Cc: linux-kernel, cgroups, Peter Zijlstra, yury.norov, tglx,
	Joshua Baker, audralmitchel, arnd, gregkh, rafael, tj,
	Qais Yousef, hannes, Alexey Klimov

Alexey Klimov <aklimov@redhat.com> writes:
> The first section of comment seems problematic to me with regards to such move:
>
>                  * As this needs to hold the cpu maps lock it's impossible
>                  * to call device_offline() because that ends up calling
>                  * cpu_down() which takes cpu maps lock. cpu maps lock
>                  * needs to be held as this might race against in kernel
>                  * abusers of the hotplug machinery (thermal management).
>
> Cpu maps lock is released in cpu_maps_update_done() hence we will move
> dev->offline out of cpu maps lock. Maybe I misunderstood the comment
> and it relates to calling cpu_down_maps_locked() under lock to avoid
> race?

Yes, that's what I take from the comment, the cpu maps lock protects
against racing hotplug operations.

