Subject: Re: linux-next: new scheduler messages span: 0-15 (max cpu_capacity = 589) when starting KVM guests
From: Christian Borntraeger
To: Peter Zijlstra
Cc: Dietmar Eggemann, Ingo Molnar, Tejun Heo, linux-kernel@vger.kernel.org
Date: Mon, 19 Sep 2016 16:01:25 +0200
Message-Id: <12d8250b-e7d0-9d6f-3ab6-fdfd65a59133@de.ibm.com>
In-Reply-To: <20160919134046.GO5012@twins.programming.kicks-ass.net>

On 09/19/2016 03:40 PM, Peter Zijlstra wrote:
> On Mon, Sep 19, 2016 at 03:19:11PM +0200, Christian Borntraeger wrote:
>> Dietmar, Ingo, Tejun,
>>
>> since commit cd92bfd3b8cb0ec2ee825e55a3aee704cd55aea9
>>     sched/core: Store maximum per-CPU capacity in root domain
>>
>> I get tons of messages from the scheduler like
>> [..]
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> span: 0-15 (max cpu_capacity = 589)
>> [..]
>>
>
> Oh, oops ;-)
>
> Something like the below ought to cure it, I think.

That would certainly make the message go away (and would also help for
cpu hotplug, for example). I am still asking myself why cgroup cpuset
really needs to rebuild the scheduling domains when a vcpu thread is
moved.

> ---
>  kernel/sched/core.c | 2 +-
>  1 file changed, 1 insertion(+), 1 deletion(-)
>
> diff --git a/kernel/sched/core.c b/kernel/sched/core.c
> index f5f7b3cdf0be..fdc9e311fd29 100644
> --- a/kernel/sched/core.c
> +++ b/kernel/sched/core.c
> @@ -6990,7 +6990,7 @@ static int build_sched_domains(const struct cpumask *cpu_map,
>  	}
>  	rcu_read_unlock();
>
> -	if (rq) {
> +	if (rq && sched_debug_enabled) {
>  		pr_info("span: %*pbl (max cpu_capacity = %lu)\n",
>  			cpumask_pr_args(cpu_map), rq->rd->max_cpu_capacity);
>  	}
>