* [PATCH] sched/core: Take the hotplug lock in sched_init_smp()
@ 2018-10-23 13:37 Valentin Schneider
From: Valentin Schneider @ 2018-10-23 13:37 UTC
  To: linux-kernel
  Cc: mingo, peterz, morten.rasmussen, sudeep.holla, quentin.perret,
	Dietmar.Eggemann

When running on linux-next (8c60c36d0b8c ("Add linux-next specific files
for 20181019")) + CONFIG_PROVE_LOCKING=y on a big.LITTLE system (e.g.
Juno or HiKey960), we get the following report:

[    0.748225] Call trace:
[    0.750685]  lockdep_assert_cpus_held+0x30/0x40
[    0.755236]  static_key_enable_cpuslocked+0x20/0xc8
[    0.760137]  build_sched_domains+0x1034/0x1108
[    0.764601]  sched_init_domains+0x68/0x90
[    0.768628]  sched_init_smp+0x30/0x80
[    0.772309]  kernel_init_freeable+0x278/0x51c
[    0.776685]  kernel_init+0x10/0x108
[    0.780190]  ret_from_fork+0x10/0x18

The static_key in question is sched_asym_cpucapacity introduced by
commit df054e8445a4 ("sched/topology: Add static_key for asymmetric CPU
capacity optimizations"). In this particular case, we enable it because
smp_prepare_cpus() will end up fetching the capacity-dmips-mhz entry
from the devicetree, so we already have some asymmetry detected when
entering sched_init_smp().
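
For reference, the path that flips the key looks roughly like this (a
condensed sketch of build_sched_domains() in kernel/sched/topology.c,
not the verbatim upstream code):

  static int
  build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *attr)
  {
  	bool has_asym = false;

  	/*
  	 * ... the domain hierarchy is built; has_asym is set when an
  	 * asymmetric-capacity topology level is found in cpu_map ...
  	 */

  	/* _cpuslocked(): the caller is expected to hold the hotplug lock */
  	if (has_asym)
  		static_branch_enable_cpuslocked(&sched_asym_cpucapacity);

  	/* ... */
  }

static_branch_enable_cpuslocked() lands in static_key_enable_cpuslocked(),
which is where the assertion in the trace above fires.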

This didn't get detected in tip/sched/core because we were missing
commit cb538267ea1e ("jump_label/lockdep: Assert we hold the hotplug
lock for _cpuslocked() operations").

Calls to build_sched_domains() after sched_init_smp() will hold the
hotplug lock; it just so happens that this very first call is a
special case. As a comment in sched_init_smp() states, "There's no
userspace yet to cause hotplug operations", so this is a harmless
warning.
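
For reference, a rough illustration of a post-boot rebuild path (the
exact call chains vary between configurations and kernel versions):

  _cpu_up() / _cpu_down()
    cpus_write_lock()                  <-- hotplug lock held from here
      ...
      sched_cpu_activate() / sched_cpu_deactivate()
        cpuset_cpu_active() / cpuset_cpu_inactive()
          partition_sched_domains()
            build_sched_domains()      <-- lockdep assertion satisfied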

However, to both respect the semantics of underlying
callees and make lockdep happy, take the hotplug lock in
sched_init_smp(). This also satisfies the comment atop
sched_init_domains() that says "Callers must hold the hotplug lock".

Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
 kernel/sched/core.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fd2fce8..02a20ef 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5859,11 +5859,14 @@ void __init sched_init_smp(void)
 	/*
 	 * There's no userspace yet to cause hotplug operations; hence all the
 	 * CPU masks are stable and all blatant races in the below code cannot
-	 * happen.
+	 * happen. The hotplug lock is nevertheless taken to satisfy lockdep,
+	 * but there won't be any contention on it.
 	 */
+	cpus_read_lock();
 	mutex_lock(&sched_domains_mutex);
 	sched_init_domains(cpu_active_mask);
 	mutex_unlock(&sched_domains_mutex);
+	cpus_read_unlock();
 
 	/* Move init over to a non-isolated CPU */
 	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)
--
2.7.4



* [tip:sched/urgent] sched/core: Take the hotplug lock in sched_init_smp()
From: tip-bot for Valentin Schneider @ 2018-11-04  0:01 UTC
  To: linux-tip-commits
  Cc: hpa, tglx, torvalds, linux-kernel, sudeep.holla, peterz,
	valentin.schneider, mingo

Commit-ID:  40fa3780bac2b654edf23f6b13f4e2dd550aea10
Gitweb:     https://git.kernel.org/tip/40fa3780bac2b654edf23f6b13f4e2dd550aea10
Author:     Valentin Schneider <valentin.schneider@arm.com>
AuthorDate: Tue, 23 Oct 2018 14:37:31 +0100
Committer:  Ingo Molnar <mingo@kernel.org>
CommitDate: Sun, 4 Nov 2018 00:57:44 +0100

sched/core: Take the hotplug lock in sched_init_smp()

When running on linux-next (8c60c36d0b8c ("Add linux-next specific files
for 20181019")) + CONFIG_PROVE_LOCKING=y on a big.LITTLE system (e.g.
Juno or HiKey960), we get the following report:

 [    0.748225] Call trace:
 [    0.750685]  lockdep_assert_cpus_held+0x30/0x40
 [    0.755236]  static_key_enable_cpuslocked+0x20/0xc8
 [    0.760137]  build_sched_domains+0x1034/0x1108
 [    0.764601]  sched_init_domains+0x68/0x90
 [    0.768628]  sched_init_smp+0x30/0x80
 [    0.772309]  kernel_init_freeable+0x278/0x51c
 [    0.776685]  kernel_init+0x10/0x108
 [    0.780190]  ret_from_fork+0x10/0x18

The static_key in question is 'sched_asym_cpucapacity' introduced by
commit:

  df054e8445a4 ("sched/topology: Add static_key for asymmetric CPU capacity optimizations")

In this particular case, we enable it because smp_prepare_cpus() will
end up fetching the capacity-dmips-mhz entry from the devicetree,
so we already have some asymmetry detected when entering sched_init_smp().

This didn't get detected in tip/sched/core because we were missing:

  commit cb538267ea1e ("jump_label/lockdep: Assert we hold the hotplug lock for _cpuslocked() operations")

Calls to build_sched_domains() after sched_init_smp() will hold the
hotplug lock; it just so happens that this very first call is a
special case. As a comment in sched_init_smp() states, "There's no
userspace yet to cause hotplug operations", so this is a harmless
warning.

However, to both respect the semantics of underlying
callees and make lockdep happy, take the hotplug lock in
sched_init_smp(). This also satisfies the comment atop
sched_init_domains() that says "Callers must hold the hotplug lock".

Reported-by: Sudeep Holla <sudeep.holla@arm.com>
Tested-by: Sudeep Holla <sudeep.holla@arm.com>
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
Signed-off-by: Peter Zijlstra (Intel) <peterz@infradead.org>
Cc: Dietmar.Eggemann@arm.com
Cc: Linus Torvalds <torvalds@linux-foundation.org>
Cc: Peter Zijlstra <peterz@infradead.org>
Cc: Thomas Gleixner <tglx@linutronix.de>
Cc: morten.rasmussen@arm.com
Cc: quentin.perret@arm.com
Link: http://lkml.kernel.org/r/1540301851-3048-1-git-send-email-valentin.schneider@arm.com
Signed-off-by: Ingo Molnar <mingo@kernel.org>
---
 kernel/sched/core.c | 5 ++++-
 1 file changed, 4 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/core.c b/kernel/sched/core.c
index fd2fce8a001b..02a20ef196a6 100644
--- a/kernel/sched/core.c
+++ b/kernel/sched/core.c
@@ -5859,11 +5859,14 @@ void __init sched_init_smp(void)
 	/*
 	 * There's no userspace yet to cause hotplug operations; hence all the
 	 * CPU masks are stable and all blatant races in the below code cannot
-	 * happen.
+	 * happen. The hotplug lock is nevertheless taken to satisfy lockdep,
+	 * but there won't be any contention on it.
 	 */
+	cpus_read_lock();
 	mutex_lock(&sched_domains_mutex);
 	sched_init_domains(cpu_active_mask);
 	mutex_unlock(&sched_domains_mutex);
+	cpus_read_unlock();
 
 	/* Move init over to a non-isolated CPU */
 	if (set_cpus_allowed_ptr(current, housekeeping_cpumask(HK_FLAG_DOMAIN)) < 0)

