linux-kernel.vger.kernel.org archive mirror
* [PATCH v2] sched/topology: Allow sched_asym_cpucapacity to be disabled
@ 2019-10-15 10:29 Valentin Schneider
  2019-10-15 10:40 ` Quentin Perret
  0 siblings, 1 reply; 6+ messages in thread
From: Valentin Schneider @ 2019-10-15 10:29 UTC (permalink / raw)
  To: linux-kernel
  Cc: mingo, peterz, vincent.guittot, Dietmar.Eggemann,
	morten.rasmussen, qperret, stable

While the static key is correctly initialized as disabled, it will remain
enabled forever once turned on. This means that if we start with an
asymmetric system and hotplug out enough CPUs to end up with an SMP system,
the static key will remain set - which is obviously wrong. We should detect
this and turn off things like misfit migration and capacity-aware wakeups.

As Quentin pointed out, having separate root domains makes this slightly
trickier. We could have exclusive cpusets that create an SMP island - IOW,
the domains within such a root domain will not see any asymmetry. This means
we need to count how many asymmetric root domains we have.

Change the simple key enablement to an increment, and decrement the key
counter when destroying domains that cover asymmetric CPUs.

Cc: <stable@vger.kernel.org>
Fixes: df054e8445a4 ("sched/topology: Add static_key for asymmetric CPU capacity optimizations")
Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
---
Changes since v1:

Use static_branch_{inc,dec} rather than enable/disable
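
For reference, static_branch_{inc,dec}() make the key behave like a
reference count: the branch stays enabled while the count is nonzero, so
with one increment per asymmetric root domain the key goes back off once
the last such domain is destroyed. A minimal sketch of those semantics
(illustrative only - example_key and the helpers below are made up, this
is just the generic jump_label API):

	#include <linux/jump_label.h>

	/* stand-in key, not a real scheduler key */
	static DEFINE_STATIC_KEY_FALSE(example_key);

	static void asym_domain_attached(void)
	{
		static_branch_inc(&example_key);	/* 0 -> 1: branch enabled */
	}

	static void asym_domain_destroyed(void)
	{
		static_branch_dec(&example_key);	/* 1 -> 0: branch disabled */
	}

	static bool fast_path_check(void)
	{
		/* patched to a jmp/nop, no load and test at runtime */
		return static_branch_unlikely(&example_key);
	}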
---
 kernel/sched/topology.c | 11 ++++++++++-
 1 file changed, 10 insertions(+), 1 deletion(-)

diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
index b5667a273bf6..79944e969bcf 100644
--- a/kernel/sched/topology.c
+++ b/kernel/sched/topology.c
@@ -2026,7 +2026,7 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
 	rcu_read_unlock();
 
 	if (has_asym)
-		static_branch_enable_cpuslocked(&sched_asym_cpucapacity);
+		static_branch_inc_cpuslocked(&sched_asym_cpucapacity);
 
 	if (rq && sched_debug_enabled) {
 		pr_info("root domain span: %*pbl (max cpu_capacity = %lu)\n",
@@ -2124,8 +2124,17 @@ static void detach_destroy_domains(const struct cpumask *cpu_map)
 	int i;
 
 	rcu_read_lock();
+
+	if (static_key_enabled(&sched_asym_cpucapacity)) {
+		unsigned int cpu = cpumask_any(cpu_map);
+
+		if (rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu)))
+			static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
+	}
+
 	for_each_cpu(i, cpu_map)
 		cpu_attach_domain(NULL, &def_root_domain, i);
+
 	rcu_read_unlock();
 }
 
-- 
2.22.0



* Re: [PATCH v2] sched/topology: Allow sched_asym_cpucapacity to be disabled
  2019-10-15 10:29 [PATCH v2] sched/topology: Allow sched_asym_cpucapacity to be disabled Valentin Schneider
@ 2019-10-15 10:40 ` Quentin Perret
  2019-10-15 10:58   ` Valentin Schneider
  0 siblings, 1 reply; 6+ messages in thread
From: Quentin Perret @ 2019-10-15 10:40 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: linux-kernel, mingo, peterz, vincent.guittot, Dietmar.Eggemann,
	morten.rasmussen, stable

On Tuesday 15 Oct 2019 at 11:29:56 (+0100), Valentin Schneider wrote:
> While the static key is correctly initialized as disabled, it will remain
> enabled forever once turned on. This means that if we start with an
> asymmetric system and hotplug out enough CPUs to end up with an SMP system,
> the static key will remain set - which is obviously wrong. We should detect
> this and turn off things like misfit migration and capacity-aware wakeups.
> 
> As Quentin pointed out, having separate root domains makes this slightly
> trickier. We could have exclusive cpusets that create an SMP island - IOW,
> the domains within such a root domain will not see any asymmetry. This means
> we need to count how many asymmetric root domains we have.
> 
> Change the simple key enablement to an increment, and decrement the key
> counter when destroying domains that cover asymmetric CPUs.
> 
> Cc: <stable@vger.kernel.org>
> Fixes: df054e8445a4 ("sched/topology: Add static_key for asymmetric CPU capacity optimizations")
> Signed-off-by: Valentin Schneider <valentin.schneider@arm.com>
> ---
> Changes since v1:
> 
> Use static_branch_{inc,dec} rather than enable/disable
> ---
>  kernel/sched/topology.c | 11 ++++++++++-
>  1 file changed, 10 insertions(+), 1 deletion(-)
> 
> diff --git a/kernel/sched/topology.c b/kernel/sched/topology.c
> index b5667a273bf6..79944e969bcf 100644
> --- a/kernel/sched/topology.c
> +++ b/kernel/sched/topology.c
> @@ -2026,7 +2026,7 @@ build_sched_domains(const struct cpumask *cpu_map, struct sched_domain_attr *att
>  	rcu_read_unlock();
>  
>  	if (has_asym)
> -		static_branch_enable_cpuslocked(&sched_asym_cpucapacity);
> +		static_branch_inc_cpuslocked(&sched_asym_cpucapacity);
>  
>  	if (rq && sched_debug_enabled) {
>  		pr_info("root domain span: %*pbl (max cpu_capacity = %lu)\n",
> @@ -2124,8 +2124,17 @@ static void detach_destroy_domains(const struct cpumask *cpu_map)
>  	int i;
>  
>  	rcu_read_lock();
> +
> +	if (static_key_enabled(&sched_asym_cpucapacity)) {
> +		unsigned int cpu = cpumask_any(cpu_map);
> +
> +		if (rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu)))
> +			static_branch_dec_cpuslocked(&sched_asym_cpucapacity);

Lockdep should scream for this :)
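
FWIW static_branch_dec_cpuslocked() ends up taking jump_label_mutex, so
it may sleep, and this call sits inside rcu_read_lock() here.
Schematically (not the exact call chain):

	rcu_read_lock();
	...
	/* takes a mutex internally -> may sleep in an RCU read-side section */
	static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
	...
	rcu_read_unlock();
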
> +	}
> +
>  	for_each_cpu(i, cpu_map)
>  		cpu_attach_domain(NULL, &def_root_domain, i);
> +
>  	rcu_read_unlock();
>  }
>  
> -- 
> 2.22.0
> 


* Re: [PATCH v2] sched/topology: Allow sched_asym_cpucapacity to be disabled
  2019-10-15 10:40 ` Quentin Perret
@ 2019-10-15 10:58   ` Valentin Schneider
  2019-10-15 11:49     ` Valentin Schneider
  0 siblings, 1 reply; 6+ messages in thread
From: Valentin Schneider @ 2019-10-15 10:58 UTC (permalink / raw)
  To: Quentin Perret
  Cc: linux-kernel, mingo, peterz, vincent.guittot, Dietmar.Eggemann,
	morten.rasmussen, stable

On 15/10/2019 11:40, Quentin Perret wrote:
>> @@ -2124,8 +2124,17 @@ static void detach_destroy_domains(const struct cpumask *cpu_map)
>>  	int i;
>>  
>>  	rcu_read_lock();
>> +
>> +	if (static_key_enabled(&sched_asym_cpucapacity)) {
>> +		unsigned int cpu = cpumask_any(cpu_map);
>> +
>> +		if (rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu)))
>> +			static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
> 
> Lockdep should scream for this :)

Bleh, yes indeed...

>> +	}
>> +
>>  	for_each_cpu(i, cpu_map)
>>  		cpu_attach_domain(NULL, &def_root_domain, i);
>> +
>>  	rcu_read_unlock();
>>  }
>>  
>> -- 
>> 2.22.0
>>


* Re: [PATCH v2] sched/topology: Allow sched_asym_cpucapacity to be disabled
  2019-10-15 10:58   ` Valentin Schneider
@ 2019-10-15 11:49     ` Valentin Schneider
  2019-10-15 11:56       ` Quentin Perret
  0 siblings, 1 reply; 6+ messages in thread
From: Valentin Schneider @ 2019-10-15 11:49 UTC (permalink / raw)
  To: Quentin Perret
  Cc: linux-kernel, mingo, peterz, vincent.guittot, Dietmar.Eggemann,
	morten.rasmussen, stable



On 15/10/2019 11:58, Valentin Schneider wrote:
> On 15/10/2019 11:40, Quentin Perret wrote:
>>> @@ -2124,8 +2124,17 @@ static void detach_destroy_domains(const struct cpumask *cpu_map)
>>>  	int i;
>>>  
>>>  	rcu_read_lock();
>>> +
>>> +	if (static_key_enabled(&sched_asym_cpucapacity)) {
>>> +		unsigned int cpu = cpumask_any(cpu_map);
>>> +
>>> +		if (rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu)))
>>> +			static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
>>
>> Lockdep should scream for this :)
> 
> Bleh, yes indeed...
> 

Urgh, I forgot about the funny hotplug lock scenario at boot time.
rebuild_sched_domains() takes the lock but sched_init_domains() doesn't, so
we don't get the might_sleep() warning at boot time.

So if we want to flip the key post boot time we probably need to separately
count our asymmetric root domains and flip the key after all the rebuilds,
outside of the hotplug lock.
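
Something like the below, perhaps - entirely untested, and asym_rd_count
and sched_flip_asym_key() are made-up names, just to sketch the idea:

	/* updated under sched_domains_mutex while (re)building domains */
	static unsigned int asym_rd_count;

	/* run once all rebuilds are done, outside the hotplug lock */
	static void sched_flip_asym_key(void)
	{
		if (asym_rd_count)
			static_branch_enable(&sched_asym_cpucapacity);
		else
			static_branch_disable(&sched_asym_cpucapacity);
	}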


* Re: [PATCH v2] sched/topology: Allow sched_asym_cpucapacity to be disabled
  2019-10-15 11:49     ` Valentin Schneider
@ 2019-10-15 11:56       ` Quentin Perret
  2019-10-15 12:57         ` Valentin Schneider
  0 siblings, 1 reply; 6+ messages in thread
From: Quentin Perret @ 2019-10-15 11:56 UTC (permalink / raw)
  To: Valentin Schneider
  Cc: linux-kernel, mingo, peterz, vincent.guittot, Dietmar.Eggemann,
	morten.rasmussen, stable

On Tuesday 15 Oct 2019 at 12:49:22 (+0100), Valentin Schneider wrote:
> 
> 
> On 15/10/2019 11:58, Valentin Schneider wrote:
> > On 15/10/2019 11:40, Quentin Perret wrote:
> >>> @@ -2124,8 +2124,17 @@ static void detach_destroy_domains(const struct cpumask *cpu_map)
> >>>  	int i;
> >>>  
> >>>  	rcu_read_lock();
> >>> +
> >>> +	if (static_key_enabled(&sched_asym_cpucapacity)) {
> >>> +		unsigned int cpu = cpumask_any(cpu_map);
> >>> +
> >>> +		if (rcu_dereference(per_cpu(sd_asym_cpucapacity, cpu)))
> >>> +			static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
> >>
> >> Lockdep should scream for this :)
> > 
> > Bleh, yes indeed...
> > 
> 
> Urgh, I forgot about the funny hotplug lock scenario at boot time.
> rebuild_sched_domains() takes the lock but sched_init_domains() doesn't, so
> we don't get the might_sleep warn at boot time.
> 
> So if we want to flip the key post boot time we probably need to separately
> count our asymmetric root domains and flip the key after all the rebuilds,
> outside of the hotplug lock.

Hmm, a problem here is that static_branch*() can block (it uses a
mutex) while you're in the rcu section, I think.

I suppose you could just move this above rcu_read_lock() and use
rcu_access_pointer() instead ?
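
i.e. something along these lines (untested):

	if (static_key_enabled(&sched_asym_cpucapacity)) {
		unsigned int cpu = cpumask_any(cpu_map);

		/*
		 * Only the pointer value is checked, nothing is
		 * dereferenced, so rcu_access_pointer() is fine
		 * outside the read-side section.
		 */
		if (rcu_access_pointer(per_cpu(sd_asym_cpucapacity, cpu)))
			static_branch_dec_cpuslocked(&sched_asym_cpucapacity);
	}

	rcu_read_lock();
	for_each_cpu(i, cpu_map)
		cpu_attach_domain(NULL, &def_root_domain, i);
	rcu_read_unlock();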

Thanks,
Quentin


* Re: [PATCH v2] sched/topology: Allow sched_asym_cpucapacity to be disabled
  2019-10-15 11:56       ` Quentin Perret
@ 2019-10-15 12:57         ` Valentin Schneider
  0 siblings, 0 replies; 6+ messages in thread
From: Valentin Schneider @ 2019-10-15 12:57 UTC (permalink / raw)
  To: Quentin Perret
  Cc: linux-kernel, mingo, peterz, vincent.guittot, Dietmar.Eggemann,
	morten.rasmussen, stable

On 15/10/2019 12:56, Quentin Perret wrote:
> Hmm, a problem here is that static_branch*() can block (it uses a
> mutex) while you're in the rcu section, I think.
> 
> I suppose you could just move this above rcu_read_lock() and use
> rcu_access_pointer() instead ?
> 

Right, I got confused again: the only problem is inside the rcu_read_lock()
section, so the increment is fine but the decrement isn't.

Let me try this again with more coffee.

> Thanks,
> Quentin
> 

