linux-kernel.vger.kernel.org archive mirror
* [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online
@ 2020-12-03 17:14 Alexey Klimov
  2020-12-07  8:38 ` Peter Zijlstra
  2020-12-09  2:09 ` Daniel Jordan
  0 siblings, 2 replies; 7+ messages in thread
From: Alexey Klimov @ 2020-12-03 17:14 UTC (permalink / raw)
  To: linux-kernel, cgroups
  Cc: peterz, yury.norov, tglx, jobaker, audralmitchel, arnd, gregkh,
	rafael, tj, lizefan, qais.yousef, hannes, klimov.linux

When a CPU is offlined and onlined via device_offline() and device_online(),
userspace gets a uevent notification. If, after receiving the uevent,
userspace executes sched_setaffinity() to move some task to the recently
onlined CPU, the call fails with -EINVAL. Userspace has to wait around
5..30 ms after receiving the uevent before sched_setaffinity() succeeds
for the recently onlined CPU.

If the in_mask for sched_setaffinity() contains only the recently onlined
CPU, it quickly fails with the following flow:

  sched_setaffinity()
    cpuset_cpus_allowed()
      guarantee_online_cpus()   <-- cs->effective_cpus mask does not
                                        contain recently onlined cpu
    cpumask_and()               <-- final new_mask is empty
    __set_cpus_allowed_ptr()
      cpumask_any_and_distribute() <-- returns dest_cpu equal to nr_cpu_ids
      returns -EINVAL

Cpusets are updated from a workqueue kicked off by cpuset_update_active_cpus(),
which in turn is called from the cpu hotplug callback sched_cpu_activate(),
hence the delay observed by sched_setaffinity().
The uevent arriving ahead of the cpuset update can be avoided by ensuring
that cpuset_hotplug_work has run to completion, using cpuset_wait_for_hotplug()
after onlining the cpu in cpu_up(). Unfortunately, the execution time of
echo 1 > /sys/devices/system/cpu/cpuX/online roughly doubled with this
change (on my test machine).

Co-analyzed-by: Joshua Baker <jobaker@redhat.com>
Signed-off-by: Alexey Klimov <aklimov@redhat.com>
---

The commit "cpuset: Make cpuset hotplug synchronous" would also get rid of the
early uevent but it was reverted.

The nature of this bug is also described here (with different consequences):
https://lore.kernel.org/lkml/20200211141554.24181-1-qais.yousef@arm.com/

Reproducer: https://gitlab.com/0xeafffffe/xlam

It could be that I missed the correct place for cpuset synchronisation and it
should be done in cpu_device_up() instead.
I am also unsure whether we need cpuset_wait_for_hotplug() in
cpuhp_online_cpu_device(), since an online uevent is sent there too.
With this change the reproducer code currently runs without issues.
The idea is to avoid the situation where userspace receives a uevent about an
onlined CPU that is not yet ready to take tasks.
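
For illustration, a minimal sketch of the race in userspace is below; the
gitlab reproducer above is the real test. The hard-coded cpu1 and the lack
of any uevent handling are simplifications made for brevity (the sysfs
write should only return after the online uevent has been sent, so an
immediate sched_setaffinity() hits the same window).

/*
 * Minimal sketch of the race; run as root on a machine where cpu1 can
 * be offlined.  cpu1 and the direct sysfs writes are assumptions.
 */
#define _GNU_SOURCE
#include <errno.h>
#include <fcntl.h>
#include <sched.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>

static void write_cpu1_online(const char *val)
{
	int fd = open("/sys/devices/system/cpu/cpu1/online", O_WRONLY);

	if (fd < 0) {
		perror("open cpu1/online");
		return;
	}
	if (write(fd, val, 1) != 1)
		perror("write cpu1/online");
	close(fd);
}

int main(void)
{
	cpu_set_t set;

	write_cpu1_online("0");	/* offline cpu1 */
	write_cpu1_online("1");	/* online it again; the KOBJ_ONLINE uevent
				   has been sent by the time this returns */

	CPU_ZERO(&set);
	CPU_SET(1, &set);	/* mask contains only the onlined cpu */

	/* Often fails with EINVAL until cpuset_hotplug_work has run. */
	if (sched_setaffinity(0, sizeof(set), &set))
		printf("sched_setaffinity: %s\n", strerror(errno));
	else
		printf("sched_setaffinity: ok\n");

	return 0;
}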


 kernel/cpu.c | 3 +++
 1 file changed, 3 insertions(+)

diff --git a/kernel/cpu.c b/kernel/cpu.c
index 6ff2578ecf17..f39a27a7f24b 100644
--- a/kernel/cpu.c
+++ b/kernel/cpu.c
@@ -15,6 +15,7 @@
 #include <linux/sched/smt.h>
 #include <linux/unistd.h>
 #include <linux/cpu.h>
+#include <linux/cpuset.h>
 #include <linux/oom.h>
 #include <linux/rcupdate.h>
 #include <linux/export.h>
@@ -1275,6 +1276,8 @@ static int cpu_up(unsigned int cpu, enum cpuhp_state target)
 	}
 
 	err = _cpu_up(cpu, 0, target);
+	if (!err)
+		cpuset_wait_for_hotplug();
 out:
 	cpu_maps_update_done();
 	return err;
-- 
2.26.2



* Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online
  2020-12-03 17:14 [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online Alexey Klimov
@ 2020-12-07  8:38 ` Peter Zijlstra
  2020-12-09  2:40   ` Daniel Jordan
  2020-12-09  2:09 ` Daniel Jordan
  1 sibling, 1 reply; 7+ messages in thread
From: Peter Zijlstra @ 2020-12-07  8:38 UTC (permalink / raw)
  To: Alexey Klimov
  Cc: linux-kernel, cgroups, yury.norov, tglx, jobaker, audralmitchel,
	arnd, gregkh, rafael, tj, lizefan, qais.yousef, hannes,
	klimov.linux

On Thu, Dec 03, 2020 at 05:14:31PM +0000, Alexey Klimov wrote:
> When a CPU is offlined and onlined via device_offline() and device_online(),
> userspace gets a uevent notification. If, after receiving the uevent,
> userspace executes sched_setaffinity() to move some task to the recently
> onlined CPU, the call fails with -EINVAL. Userspace has to wait around
> 5..30 ms after receiving the uevent before sched_setaffinity() succeeds
> for the recently onlined CPU.

Right.

>  Unfortunately, the execution time of
> echo 1 > /sys/devices/system/cpu/cpuX/online roughly doubled with this
> change (on my test machine).

Nobody cares, it's hotplug, it's supposed to be slow :-) That is,
we fundamentally shift the work _to_ the hotplug path, so as to keep
everybody else fast.

> The nature of this bug is also described here (with different consequences):
> https://lore.kernel.org/lkml/20200211141554.24181-1-qais.yousef@arm.com/

Yeah, pesky deadlocks.. someone was going to try again.


>  kernel/cpu.c | 3 +++
>  1 file changed, 3 insertions(+)
> 
> diff --git a/kernel/cpu.c b/kernel/cpu.c
> index 6ff2578ecf17..f39a27a7f24b 100644
> --- a/kernel/cpu.c
> +++ b/kernel/cpu.c
> @@ -15,6 +15,7 @@
>  #include <linux/sched/smt.h>
>  #include <linux/unistd.h>
>  #include <linux/cpu.h>
> +#include <linux/cpuset.h>
>  #include <linux/oom.h>
>  #include <linux/rcupdate.h>
>  #include <linux/export.h>
> @@ -1275,6 +1276,8 @@ static int cpu_up(unsigned int cpu, enum cpuhp_state target)
>  	}
>  
>  	err = _cpu_up(cpu, 0, target);
> +	if (!err)
> +		cpuset_wait_for_hotplug();
>  out:
>  	cpu_maps_update_done();
>  	return err;

My only consideration is whether doing that flush while holding
cpu_add_remove_lock is wise.
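
For reference, with the diff above the flush would sit inside the
map-update critical section, roughly:

  cpu_up()
    cpu_maps_update_begin()     <-- takes cpu_add_remove_lock
    _cpu_up()
    cpuset_wait_for_hotplug()   <-- flush_work(&cpuset_hotplug_work), i.e.
                                    the wait happens with cpu_add_remove_lock
                                    still held
    cpu_maps_update_done()      <-- releases cpu_add_remove_lock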


* Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online
  2020-12-03 17:14 [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online Alexey Klimov
  2020-12-07  8:38 ` Peter Zijlstra
@ 2020-12-09  2:09 ` Daniel Jordan
  1 sibling, 0 replies; 7+ messages in thread
From: Daniel Jordan @ 2020-12-09  2:09 UTC (permalink / raw)
  To: Alexey Klimov, linux-kernel, cgroups
  Cc: peterz, yury.norov, tglx, jobaker, audralmitchel, arnd, gregkh,
	rafael, tj, lizefan, qais.yousef, hannes, klimov.linux

Alexey Klimov <aklimov@redhat.com> writes:
> I am also unsure whether we need cpuset_wait_for_hotplug() in
> cpuhp_online_cpu_device(), since an online uevent is sent there too.

We do need it there if we go with this fix.  Your reproducer hits the
same issue when it's changed to exercise smt/control instead of
cpuN/online.


* Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online
  2020-12-07  8:38 ` Peter Zijlstra
@ 2020-12-09  2:40   ` Daniel Jordan
  2021-01-15  3:21     ` Daniel Jordan
  0 siblings, 1 reply; 7+ messages in thread
From: Daniel Jordan @ 2020-12-09  2:40 UTC (permalink / raw)
  To: Peter Zijlstra, Alexey Klimov
  Cc: linux-kernel, cgroups, yury.norov, tglx, jobaker, audralmitchel,
	arnd, gregkh, rafael, tj, lizefan, qais.yousef, hannes,
	klimov.linux

Peter Zijlstra <peterz@infradead.org> writes:
>> The nature of this bug is also described here (with different consequences):
>> https://lore.kernel.org/lkml/20200211141554.24181-1-qais.yousef@arm.com/
>
> Yeah, pesky deadlocks.. someone was going to try again.

I dug up the synchronous patch

    https://lore.kernel.org/lkml/1579878449-10164-1-git-send-email-prsood@codeaurora.org/

but surprisingly wasn't able to reproduce the lockdep splat from

    https://lore.kernel.org/lkml/F0388D99-84D7-453B-9B6B-EEFF0E7BE4CC@lca.pw/

even though I could hit it a few weeks ago.  I'm going to try to mess
with it later, but don't let me hold this up.


* Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online
  2020-12-09  2:40   ` Daniel Jordan
@ 2021-01-15  3:21     ` Daniel Jordan
  2021-01-19 16:52       ` Alexey Klimov
  0 siblings, 1 reply; 7+ messages in thread
From: Daniel Jordan @ 2021-01-15  3:21 UTC (permalink / raw)
  To: Peter Zijlstra, Alexey Klimov
  Cc: linux-kernel, cgroups, yury.norov, tglx, jobaker, audralmitchel,
	arnd, gregkh, rafael, tj, lizefan.x, qais.yousef, hannes,
	klimov.linux

Daniel Jordan <daniel.m.jordan@oracle.com> writes:
> Peter Zijlstra <peterz@infradead.org> writes:
>>> The nature of this bug is also described here (with different consequences):
>>> https://lore.kernel.org/lkml/20200211141554.24181-1-qais.yousef@arm.com/
>>
>> Yeah, pesky deadlocks.. someone was going to try again.
>
> I dug up the synchronous patch
>
>     https://lore.kernel.org/lkml/1579878449-10164-1-git-send-email-prsood@codeaurora.org/
>
> but surprisingly wasn't able to reproduce the lockdep splat from
>
>     https://lore.kernel.org/lkml/F0388D99-84D7-453B-9B6B-EEFF0E7BE4CC@lca.pw/
>
> even though I could hit it a few weeks ago.

oh okay, you need to mount a legacy cpuset hierarchy.

So as the above splat shows, making cpuset_hotplug_workfn() synchronous
means cpu_hotplug_lock (and "cpuhp_state-down") can be acquired before
cgroup_mutex.

But there are at least four cgroup paths that take the locks in the
opposite order.  They're all the same, they take cgroup_mutex and then
cpu_hotplug_lock later on to modify one or more static keys.

cpu_hotplug_lock should probably be ahead of cgroup_mutex because the
latter is taken in a hotplug callback, and we should keep the static
branches in cgroup, so the only way out I can think of is moving
cpu_hotplug_lock to just before cgroup_mutex is taken and switching to
_cpuslocked flavors of the static key calls.
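
A minimal sketch of what one of those paths could look like after such a
change; example_key and example_cgroup_path are made-up names rather than
the real cgroup keys and paths, and cgroup_mutex comes from
kernel/cgroup/cgroup-internal.h:

#include <linux/cpu.h>
#include <linux/jump_label.h>
#include <linux/mutex.h>

static DEFINE_STATIC_KEY_FALSE(example_key);

static void example_cgroup_path(void)
{
	cpus_read_lock();		/* cpu_hotplug_lock first... */
	mutex_lock(&cgroup_mutex);	/* ...then cgroup_mutex */

	/* ... */
	static_branch_enable_cpuslocked(&example_key);
	/* ... */

	mutex_unlock(&cgroup_mutex);
	cpus_read_unlock();
}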

lockdep quiets down with that change everywhere, but it puts another big
lock around a lot of cgroup paths.  Seems less heavyhanded to go with
this RFC.  What do you all think?

Absent further discussion, Alexey, do you plan to post another version?


* Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online
  2021-01-15  3:21     ` Daniel Jordan
@ 2021-01-19 16:52       ` Alexey Klimov
  2021-01-21  2:09         ` Daniel Jordan
  0 siblings, 1 reply; 7+ messages in thread
From: Alexey Klimov @ 2021-01-19 16:52 UTC (permalink / raw)
  To: Daniel Jordan
  Cc: Peter Zijlstra, linux-kernel, cgroups, yury.norov, tglx, jobaker,
	audralmitchel, arnd, gregkh, rafael, tj, lizefan.x, qais.yousef,
	hannes, Alexey Klimov

On Fri, Jan 15, 2021 at 6:54 AM Daniel Jordan
<daniel.m.jordan@oracle.com> wrote:
>
> Daniel Jordan <daniel.m.jordan@oracle.com> writes:
> > Peter Zijlstra <peterz@infradead.org> writes:
> >>> The nature of this bug is also described here (with different consequences):
> >>> https://lore.kernel.org/lkml/20200211141554.24181-1-qais.yousef@arm.com/
> >>
> >> Yeah, pesky deadlocks.. someone was going to try again.
> >
> > I dug up the synchronous patch
> >
> >     https://lore.kernel.org/lkml/1579878449-10164-1-git-send-email-prsood@codeaurora.org/
> >
> > but surprisingly wasn't able to reproduce the lockdep splat from
> >
> >     https://lore.kernel.org/lkml/F0388D99-84D7-453B-9B6B-EEFF0E7BE4CC@lca.pw/
> >
> > even though I could hit it a few weeks ago.
>
> oh okay, you need to mount a legacy cpuset hierarchy.
>
> So as the above splat shows, making cpuset_hotplug_workfn() synchronous
> means cpu_hotplug_lock (and "cpuhp_state-down") can be acquired before
> cgroup_mutex.
>
> But there are at least four cgroup paths that take the locks in the
> opposite order.  They're all the same, they take cgroup_mutex and then
> cpu_hotplug_lock later on to modify one or more static keys.
>
> cpu_hotplug_lock should probably be ahead of cgroup_mutex because the
> latter is taken in a hotplug callback, and we should keep the static
> branches in cgroup, so the only way out I can think of is moving
> cpu_hotplug_lock to just before cgroup_mutex is taken and switching to
> _cpuslocked flavors of the static key calls.
>
> lockdep quiets down with that change everywhere, but it puts another big
> lock around a lot of cgroup paths.  Seems less heavyhanded to go with
> this RFC.  What do you all think?

Daniel, thank you for taking a look. I don't mind reviewing+testing
another approach that you described.

> Absent further discussion, Alexey, do you plan to post another version?

I plan to update this patch and re-send it in the next couple of days. It
looks like it might become a series of two patches. Sorry for the delay.

Best regards,
Alexey



* Re: [RFC][PATCH] cpu/hotplug: wait for cpuset_hotplug_work to finish on cpu online
  2021-01-19 16:52       ` Alexey Klimov
@ 2021-01-21  2:09         ` Daniel Jordan
  0 siblings, 0 replies; 7+ messages in thread
From: Daniel Jordan @ 2021-01-21  2:09 UTC (permalink / raw)
  To: Alexey Klimov
  Cc: Peter Zijlstra, linux-kernel, cgroups, yury.norov, tglx, jobaker,
	audralmitchel, arnd, gregkh, rafael, tj, lizefan.x, qais.yousef,
	hannes, Alexey Klimov

Alexey Klimov <aklimov@redhat.com> writes:
> Daniel, thank you for taking a look. I don't mind reviewing+testing
> another approach that you described.

Eh, I like yours better :)

>> Absent further discussion, Alexey, do you plan to post another version?
>
> I plan to update this patch and re-send it in the next couple of days. It
> looks like it might become a series of two patches. Sorry for the delay.

Not at all, this is just something I'm messing with when I have the
time.

