* Re: linux-next: new scheduler messages span: 0-15 (max cpu_capacity = 589) when starting KVM guests
2016-09-19 13:40 ` Peter Zijlstra
@ 2016-09-19 13:58 ` Dietmar Eggemann
2016-09-19 14:01 ` Christian Borntraeger
2016-09-20 7:43 ` Christian Borntraeger
2 siblings, 0 replies; 5+ messages in thread
From: Dietmar Eggemann @ 2016-09-19 13:58 UTC (permalink / raw)
To: Peter Zijlstra, Christian Borntraeger
Cc: Ingo Molnar, Tejun Heo, linux-kernel
On 19/09/16 14:40, Peter Zijlstra wrote:
> On Mon, Sep 19, 2016 at 03:19:11PM +0200, Christian Borntraeger wrote:
>> Dietmar, Ingo, Tejun,
>>
>> since commit cd92bfd3b8cb0ec2ee825e55a3aee704cd55aea9
>> sched/core: Store maximum per-CPU capacity in root domain
>>
>> I get tons of messages from the scheduler like
>> [..]
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> [..]
>>
>
> Oh, oops ;-)
>
> Something like the below ought to cure I think.
Haven't tested it in kvm guests with a libvirt env.
This message makes sense for systems with asymmetric compute capacities
(e.g. ARM big.LITTLE), where you can't assume cpu_capacity = 1024 (a
logical cpu w/o SMT) for the big cpus.
It also tells you that you're running in an SMT env. (2 hw threads,
hence 589), but that is probably less important.
Guarding it w/ sched_debug_enabled makes sense to me.
> ---
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f5f7b3cdf0be..fdc9e311fd29 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6990,7 +6990,7 @@ static int build_sched_domains(const struct cpumask *cpu_map,
> }
> rcu_read_unlock();
>
> - if (rq) {
> + if (rq && sched_debug_enabled) {
> pr_info("span: %*pbl (max cpu_capacity = %lu)\n",
> cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
> }
>
^ permalink raw reply [flat|nested] 5+ messages in thread
* Re: linux-next: new scheduler messages span: 0-15 (max cpu_capacity = 589) when starting KVM guests
2016-09-19 13:40 ` Peter Zijlstra
2016-09-19 13:58 ` Dietmar Eggemann
@ 2016-09-19 14:01 ` Christian Borntraeger
2016-09-20 7:43 ` Christian Borntraeger
2 siblings, 0 replies; 5+ messages in thread
From: Christian Borntraeger @ 2016-09-19 14:01 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Dietmar Eggemann, Ingo Molnar, Tejun Heo, linux-kernel
On 09/19/2016 03:40 PM, Peter Zijlstra wrote:
> On Mon, Sep 19, 2016 at 03:19:11PM +0200, Christian Borntraeger wrote:
>> Dietmar, Ingo, Tejun,
>>
>> since commit cd92bfd3b8cb0ec2ee825e55a3aee704cd55aea9
>> sched/core: Store maximum per-CPU capacity in root domain
>>
>> I get tons of messages from the scheduler like
>> [..]
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> [..]
>>
>
> Oh, oops ;-)
>
> Something like the below ought to cure I think.
That would certainly make the message go away (and would e.g. also
help for cpu hotplug).
I am still asking myself why cgroup cpuset really needs to rebuild
the scheduling domains when a vcpu thread is moved.
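For context, the libvirt-style cpuset operations around vcpu placement look roughly like this (a sketch; the cgroup paths and the vcpu PID are made up, not taken from this thread):

```shell
# Illustrative cpuset-v1 vcpu pinning, as libvirt does when starting a
# guest. Each write to cpuset.cpus can trigger a scheduling domain
# rebuild, which is what prints the "span: ..." message once per rebuild.
mkdir -p /sys/fs/cgroup/cpuset/machine/vcpu0
echo 0-3 > /sys/fs/cgroup/cpuset/machine/vcpu0/cpuset.cpus
echo 0   > /sys/fs/cgroup/cpuset/machine/vcpu0/cpuset.mems
echo "$VCPU_PID" > /sys/fs/cgroup/cpuset/machine/vcpu0/tasks
```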
>
> ---
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f5f7b3cdf0be..fdc9e311fd29 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6990,7 +6990,7 @@ static int build_sched_domains(const struct cpumask *cpu_map,
> }
> rcu_read_unlock();
>
> - if (rq) {
> + if (rq && sched_debug_enabled) {
> pr_info("span: %*pbl (max cpu_capacity = %lu)\n",
> cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
> }
>
* Re: linux-next: new scheduler messages span: 0-15 (max cpu_capacity = 589) when starting KVM guests
2016-09-19 13:40 ` Peter Zijlstra
2016-09-19 13:58 ` Dietmar Eggemann
2016-09-19 14:01 ` Christian Borntraeger
@ 2016-09-20 7:43 ` Christian Borntraeger
2 siblings, 0 replies; 5+ messages in thread
From: Christian Borntraeger @ 2016-09-20 7:43 UTC (permalink / raw)
To: Peter Zijlstra; +Cc: Dietmar Eggemann, Ingo Molnar, Tejun Heo, linux-kernel
On 09/19/2016 03:40 PM, Peter Zijlstra wrote:
> On Mon, Sep 19, 2016 at 03:19:11PM +0200, Christian Borntraeger wrote:
>> Dietmar, Ingo, Tejun,
>>
>> since commit cd92bfd3b8cb0ec2ee825e55a3aee704cd55aea9
>> sched/core: Store maximum per-CPU capacity in root domain
>>
>> I get tons of messages from the scheduler like
>> [..]
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> [..]
>>
>
> Oh, oops ;-)
>
> Something like the below ought to cure I think.
Still trying to get an opinion from Tejun on why moving vcpus within
their cpuset causes scheduling domain rebuilds, but
Acked-by: Christian Borntraeger <borntraeger@de.ibm.com>
for such a patch.
>
> ---
> kernel/sched/core.c | 2 +-
> 1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f5f7b3cdf0be..fdc9e311fd29 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6990,7 +6990,7 @@ static int build_sched_domains(const struct cpumask *cpu_map,
> }
> rcu_read_unlock();
>
> - if (rq) {
> + if (rq && sched_debug_enabled) {
> pr_info("span: %*pbl (max cpu_capacity = %lu)\n",
> cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
> }
>