From mboxrd@z Thu Jan 1 00:00:00 1970
Return-Path: 
Received: (majordomo@vger.kernel.org) by vger.kernel.org via listexpand
	id S1752277AbbCXJbk (ORCPT );
	Tue, 24 Mar 2015 05:31:40 -0400
Received: from mga03.intel.com ([134.134.136.65]:7985 "EHLO mga03.intel.com"
	rhost-flags-OK-OK-OK-OK) by vger.kernel.org with ESMTP
	id S1751814AbbCXJbi (ORCPT );
	Tue, 24 Mar 2015 05:31:38 -0400
X-ExtLoop1: 1
X-IronPort-AV: E=Sophos;i="5.11,457,1422950400"; d="scan'208";a="696833977"
Date: Tue, 24 Mar 2015 17:13:32 +0800
From: Wanpeng Li
To: Juri Lelli
Cc: Peter Zijlstra , Ingo Molnar , Wanpeng Li ,
	"linux-kernel@vger.kernel.org" , "juri.lelli@gmail.com"
Subject: Re: [PATCH RESEND v10] sched/deadline: support dl task migration
	during cpu hotplug
Message-ID: <20150324091332.GA3945@kernel>
Reply-To: Wanpeng Li
References: <1426231647-11966-1-git-send-email-wanpeng.li@linux.intel.com>
	<20150316150101.GA18521@gmail.com>
	<20150316230110.GA14994@kernel>
	<20150317080612.GA28235@gmail.com>
	<20150317075322.GA5431@kernel>
	<20150317081302.GA28462@gmail.com>
	<20150317075945.GA5646@kernel>
	<20150323072504.GA25184@gmail.com>
	<20150323085523.GB23123@twins.programming.kicks-ass.net>
	<55112DED.1060506@arm.com>
MIME-Version: 1.0
Content-Type: text/plain; charset=us-ascii
Content-Disposition: inline
In-Reply-To: <55112DED.1060506@arm.com>
User-Agent: Mutt/1.5.21 (2010-09-15)
Sender: linux-kernel-owner@vger.kernel.org
List-ID: 
X-Mailing-List: linux-kernel@vger.kernel.org

Hi Juri,

On Tue, Mar 24, 2015 at 09:27:09AM +0000, Juri Lelli wrote:
>Hi,
>
>On 23/03/2015 08:55, Peter Zijlstra wrote:
>> On Mon, Mar 23, 2015 at 08:25:04AM +0100, Ingo Molnar wrote:
>>>>>>>>>> +	if (cpu >= nr_cpu_ids) {
>>>>>>>>>> +		if (dl_bandwidth_enabled()) {
>>>>>>>>>> +			/*
>>>>>>>>>> +			 * Fail to find any suitable cpu.
>>>>>>>>>> +			 * The task will never come back!
>>>>>>>>>> +			 */
>>>>>>>>>> +			WARN_ON(1);
>>>>>>>>>
>>>>>>>>> Can this condition happen to users with a non-buggy kernel?
>>
>>> I still haven't seen a satisfactory answer to this question. Please
>>> don't resend patches without clearing questions raised during review.
>>
>> So I had a look on Friday, it _should_ not happen, but it does due to a
>> second bug Juri is currently chasing down.
>>
>
>Right, it should not happen. It happens because hotplug operations are
>destructive w.r.t. cpusets. Peter, how about we move the check you put
>in sched_cpu_inactive() to cpuset_cpu_inactive()? This way, if we fail,
>we don't need to destroy/rebuild the domains.

I remember you mentioned on IRC last week that there is a bug; does this
patch solve it?
Regards,
Wanpeng Li

>
>Thanks,
>
>- Juri
>
>>>From 65e8033e05f8b70116747062d00d5a5c266699fb Mon Sep 17 00:00:00 2001
>From: Juri Lelli
>Date: Tue, 24 Mar 2015 07:47:03 +0000
>Subject: [PATCH] sched/core: check for available -dl bandwidth in
> cpuset_cpu_inactive
>
>Signed-off-by: Juri Lelli
>---
> kernel/sched/core.c | 56 ++++++++++++++++++++++++++---------------------------
> 1 file changed, 28 insertions(+), 28 deletions(-)
>
>diff --git a/kernel/sched/core.c b/kernel/sched/core.c
>index 50927eb..3723ad0 100644
>--- a/kernel/sched/core.c
>+++ b/kernel/sched/core.c
>@@ -5318,36 +5318,13 @@ static int sched_cpu_active(struct notifier_block *nfb,
> static int sched_cpu_inactive(struct notifier_block *nfb,
> 					unsigned long action, void *hcpu)
> {
>-	unsigned long flags;
>-	long cpu = (long)hcpu;
>-	struct dl_bw *dl_b;
>-
> 	switch (action & ~CPU_TASKS_FROZEN) {
> 	case CPU_DOWN_PREPARE:
>-		set_cpu_active(cpu, false);
>-
>-		/* explicitly allow suspend */
>-		if (!(action & CPU_TASKS_FROZEN)) {
>-			bool overflow;
>-			int cpus;
>-
>-			rcu_read_lock_sched();
>-			dl_b = dl_bw_of(cpu);
>-
>-			raw_spin_lock_irqsave(&dl_b->lock, flags);
>-			cpus = dl_bw_cpus(cpu);
>-			overflow = __dl_overflow(dl_b, cpus, 0, 0);
>-			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
>-
>-			rcu_read_unlock_sched();
>-
>-			if (overflow)
>-				return notifier_from_errno(-EBUSY);
>-		}
>+		set_cpu_active((long)hcpu, false);
> 		return NOTIFY_OK;
>+	default:
>+		return NOTIFY_DONE;
> 	}
>-
>-	return NOTIFY_DONE;
> }
> 
> static int __init migration_init(void)
>@@ -7001,7 +6978,6 @@ static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
> 	 */
> 
> 	case CPU_ONLINE:
>-	case CPU_DOWN_FAILED:
> 		cpuset_update_active_cpus(true);
> 		break;
> 	default:
>@@ -7013,8 +6989,32 @@ static int cpuset_cpu_active(struct notifier_block *nfb, unsigned long action,
> static int cpuset_cpu_inactive(struct notifier_block *nfb, unsigned long action,
> 			       void *hcpu)
> {
>-	switch (action) {
>+	unsigned long flags;
>+	long cpu = (long)hcpu;
>+	struct dl_bw *dl_b;
>+
>+	switch (action & ~CPU_TASKS_FROZEN) {
> 	case CPU_DOWN_PREPARE:
>+		/* explicitly allow suspend */
>+		if (!(action & CPU_TASKS_FROZEN)) {
>+			bool overflow;
>+			int cpus;
>+
>+			rcu_read_lock_sched();
>+			dl_b = dl_bw_of(cpu);
>+
>+			raw_spin_lock_irqsave(&dl_b->lock, flags);
>+			cpus = dl_bw_cpus(cpu);
>+			overflow = __dl_overflow(dl_b, cpus, 0, 0);
>+			raw_spin_unlock_irqrestore(&dl_b->lock, flags);
>+
>+			rcu_read_unlock_sched();
>+
>+			if (overflow) {
>+				trace_printk("hotplug failed for cpu %lu", cpu);
>+				return notifier_from_errno(-EBUSY);
>+			}
>+		}
> 		cpuset_update_active_cpus(false);
> 		break;
> 	case CPU_DOWN_PREPARE_FROZEN:
>-- 
>2.3.0
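
A note on the quoted patch: it gates CPU_DOWN_PREPARE on __dl_overflow(dl_b, cpus, 0, 0),
whose body is not part of the diff. Below is a minimal, self-contained sketch of what that
admission test amounts to; the reduced struct dl_bw and the predicate body are assumptions
modelled on the -dl bandwidth accounting of this era, not a copy of the kernel helper.

#include <stdbool.h>
#include <stdint.h>
#include <stdio.h>

/* Reduced stand-in for the kernel's struct dl_bw: only what the test needs. */
struct dl_bw {
	int64_t  bw;        /* -dl bandwidth allowed per CPU; -1 means "no limit" */
	uint64_t total_bw;  /* bandwidth already reserved by -dl tasks            */
};

/*
 * Same shape as the __dl_overflow(dl_b, cpus, old_bw, new_bw) call in the
 * patch: the reservation overflows when the bandwidth already handed out
 * (plus any delta) no longer fits on "cpus" CPUs.
 */
static bool dl_overflow(const struct dl_bw *dl_b, int cpus,
			uint64_t old_bw, uint64_t new_bw)
{
	return dl_b->bw != -1 &&
	       (uint64_t)dl_b->bw * cpus < dl_b->total_bw - old_bw + new_bw;
}

int main(void)
{
	/* Say each CPU may give 95% to -dl and 1.8 CPUs worth is already reserved. */
	struct dl_bw b = { .bw = 95, .total_bw = 180 };

	/*
	 * CPU_DOWN_PREPARE passes old_bw == new_bw == 0: no task is added or
	 * removed, only the number of usable CPUs shrinks.
	 */
	printf("down to 3 CPUs: %s\n", dl_overflow(&b, 3, 0, 0) ? "-EBUSY" : "ok");
	printf("down to 1 CPU : %s\n", dl_overflow(&b, 1, 0, 0) ? "-EBUSY" : "ok");
	return 0;
}

With these numbers, dropping to three CPUs is fine but dropping to one is refused, which is
the -EBUSY the notifier propagates instead of tearing the domains down and rebuilding them.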
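
On the scenario discussed in the thread (a -dl task whose reserved bandwidth can no longer
be accommodated once CPUs go away), a hedged user-space sketch: reserve bandwidth with
sched_setattr() and then offline CPUs; once the remaining ones cannot hold the reservation,
the CPU_DOWN_PREPARE path should refuse with EBUSY. The local struct sched_attr definition
and SYS_sched_setattr are assumptions for toolchains that do not export them, and the
program needs root (or CAP_SYS_NICE) to set SCHED_DEADLINE.

#define _GNU_SOURCE
#include <stdint.h>
#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/syscall.h>

#ifndef SCHED_DEADLINE
#define SCHED_DEADLINE 6
#endif

/* Not exported by glibc at this point; layout as documented in sched_setattr(2). */
struct sched_attr {
	uint32_t size;
	uint32_t sched_policy;
	uint64_t sched_flags;
	int32_t  sched_nice;
	uint32_t sched_priority;
	uint64_t sched_runtime;
	uint64_t sched_deadline;
	uint64_t sched_period;
};

int main(void)
{
	struct sched_attr attr;

	memset(&attr, 0, sizeof(attr));
	attr.size = sizeof(attr);
	attr.sched_policy   = SCHED_DEADLINE;
	attr.sched_runtime  = 10ULL * 1000 * 1000;   /* 10 ms of runtime ...   */
	attr.sched_deadline = 30ULL * 1000 * 1000;   /* ... every 30 ms period */
	attr.sched_period   = 30ULL * 1000 * 1000;

	/* Reserve -dl bandwidth for this task (SYS_sched_setattr assumed available). */
	if (syscall(SYS_sched_setattr, 0, &attr, 0) < 0) {
		perror("sched_setattr");
		return 1;
	}

	/*
	 * While this runs, "echo 0 > /sys/devices/system/cpu/cpuN/online"
	 * should start failing with EBUSY once the CPUs that remain cannot
	 * fit the reserved bandwidth -- the very check the patch moves into
	 * cpuset_cpu_inactive().
	 */
	pause();
	return 0;
}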